For the general processing I am using an XmlReader due to the size of the file, and technically I only need a single pass per file.
But the external company who wrote this starts by counting the nodes to report how many inserts/updates are to be done. This is done by looping through the entire XML file counting them, and then they recreate the XmlReader to process the file a second time.
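To make the pattern concrete, here is a rough sketch of what their code does (my reconstruction, not their actual code; the element name "theNode" and the filename/settings variables are placeholders):

```csharp
using System.Xml;

class TwoPassSketch
{
    // Sketch of the two-pass pattern: one full read to count, then the
    // reader is recreated and the file is read again to do the real work.
    static int CountThenProcess(string filename, XmlReaderSettings settings)
    {
        int count = 0;

        // Pass 1: stream through the whole file just to count the nodes.
        using (XmlReader reader = XmlReader.Create(filename, settings))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "theNode")
                    count++;
            }
        }

        // Pass 2: recreate the reader and stream through again to do
        // the actual inserts/updates.
        using (XmlReader reader = XmlReader.Create(filename, settings))
        {
            while (reader.Read())
            {
                // ... process each node ...
            }
        }

        return count;
    }
}
```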
It just seems a bit clumsy to me. Understandably, the size of the file means we can't just load it into memory, but is there another low-cost way to count the nodes without having to recreate the XmlReader each time?
Could we stream the file instead and use XmlDocument or XDocument (giving us LINQ capabilities) to improve the efficiency? I imagine there must be some performance hit in re-creating an XmlReader two or three times.
For example, would the following use more memory than reading and looping through the whole file with an XmlReader to get a count?
using (XmlReader reader = XmlReader.Create(filename, settings))
{
    XPathNavigator nav = new XPathDocument(reader).CreateNavigator();
    XPathNodeIterator xPathIt = nav.Select("//root/theNode");
    int c = xPathIt.Count;
}