You've seen code that uses basic string manipulation (IndexOf, Substring) to get values from XML? That's nice - although I guess it probably seems natural to those who insist on creating XML by string concatenation...
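To make the contrast concrete, here's a minimal sketch (in Python, with a made-up document and tag names) of that brittle offset-hunting style next to an actual parser:

```python
import xml.etree.ElementTree as ET

doc = '<order><id>42</id><total>19.99</total></order>'

# The brittle approach: raw string offsets. Breaks the moment the
# producer adds an attribute, whitespace, CDATA, entities, a namespace...
start = doc.index('<id>') + len('<id>')
end = doc.index('</id>', start)
brittle_id = doc[start:end]

# The robust approach: let an actual XML parser do the work.
parsed_id = ET.fromstring(doc).findtext('id')

assert brittle_id == parsed_id == '42'
```

Both return the same value here, but only one of them survives the producer changing anything about how it serialises.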
In that case I would do whatever was necessary to get the required performance - but only after extensive performance testing to ensure that's really where the bottleneck is, and with copious comments (and possibly supporting documentation) to justify the approach.
99.9% of the time I have seen code do crazy things "because it is faster", it's not performance-critical code anyway and there is no explanation provided.
The one time I've seen this particular problem in action, that was exactly the case.
Additionally, the XML in question was byte-for-byte identical until you hit the giant (hundreds of megabytes) base-64 blob that was the content. The parser stripped a fixed number of bytes from the start of the file and from the end, and base-64-decoded the middle - which, if I recall, it then sent off to another parser, since the content was in some old but standard record format from the 80s.
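A toy sketch of that scheme (the wrapper and offsets here are invented; in the real case they were known byte-exact because every file was identical up to the blob):

```python
import base64

payload = b"RECORD-FORMAT-FROM-THE-80S"

# Hypothetical fixed wrapper - the real one was identical in every file.
header = b'<?xml version="1.0"?><doc><content>'
footer = b"</content></doc>"
xml_bytes = header + base64.b64encode(payload) + footer

HEADER_LEN = len(header)
FOOTER_LEN = len(footer)

def extract_payload(data: bytes) -> bytes:
    """Strip the fixed-size XML wrapper, decode the base-64 middle.
    Never parses the XML at all - that's the whole point."""
    return base64.b64decode(data[HEADER_LEN:len(data) - FOOTER_LEN])

assert extract_payload(xml_bytes) == payload
```

For a multi-hundred-megabyte file this avoids ever materialising the document in a DOM, at the cost of breaking instantly if the vendor changes a single byte of the wrapper.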
Anyhow - I'd say using XML in this case was the abuse, not the substring. But we were in no position to get the vendor to change their data format so...
Find a different XML parser or adopt a different way of reading the XML (DOM vs. SAX, or just a different library that performs better). I see where you are coming from, though. The problem with XML is that it is used to solve problems it shouldn't be solving - it's a great technology when used correctly (XMPP is a great example of how XML can make other transfer formats look like a dress rehearsal). In most cases, as you said, it's a "global design flaw" - a good indicator that you are abusing XML is if you are not using xmlns attributes and do not have multiple namespaces (because in that case JSON is simpler, faster and makes more sense).
What's the advantage of XMPP? I find the one time I don't hate XML is when it doesn't have namespaces or schemata - then it's just slightly more verbose JSON.
Not much is especially special about the XEPs (extension protocols) that have been defined. When you innovate with it, though, man, you really see the power of correct XML.
You can slap custom elements pretty much anywhere you want, as long as you have your own namespace (and it's recommended you only place them under <message> or <iq> elements). Say you have some proprietary technology in a client application; with XMPP you can throw an element under the <message> that your client can recognise and act on. For everyone else, provide a hyperlink within the <body> element and serve up a web page. If they are using your client - bam! - instant added functionality; but if they are on device X, which you do not support, they are not left out in the cold.
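A sketch of what such a stanza looks like and how a namespace-aware client picks out the custom element (the `urn:example:poll` namespace, element names, and addresses are all invented for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical stanza: a standard <message> carrying both a plain <body>
# (the fallback hyperlink for clients that don't know better) and a
# custom element in our own namespace.
stanza = """
<message xmlns="jabber:client" to="alice@example.com">
  <body>Vote here: https://example.com/poll/123</body>
  <poll xmlns="urn:example:poll" id="123" question="Lunch?"/>
</message>
"""

msg = ET.fromstring(stanza)

# A client that knows the custom namespace acts on the rich element...
poll = msg.find('{urn:example:poll}poll')
question = poll.get('question') if poll is not None else None

# ...every other client just shows the body text with the link.
body = msg.findtext('{jabber:client}body')

assert question == 'Lunch?'
assert 'https://example.com/poll/123' in body
```

The server routes the whole stanza untouched; only clients that recognise the namespace do anything special with it, which is exactly the graceful-degradation story described above.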
Do it abstractly. Nobody should know how your XML processor is getting its data, simply that it is an extremely brittle, extremely fast choice for one or two parse steps. If you can't pay for speed with brittleness then your bottleneck is unfixable.
If you know that the XML has been generated in a particular way, you can often beat any compliant XML parser by employing knowledge of the exact structure.
E.g. you can sometimes skip over chunks of characters without ever accessing them, and get speedups of orders of magnitude over even "just" checking every byte in the input.
There's nothing a faster proper XML parser can do about a custom parser like that.
(Obviously this is a brittle solution and a last resort optimisation, and should be accompanied by ample warnings, but sometimes there are no alternatives)
Surely the only way you could possibly do this is if you controlled both sides - in which case you could achieve the same speedup, less brittly, by switching to a binary format like Protocol Buffers?
I have seen people store XML as VARCHAR in a database and send that very same raw XML string as a payload over a SOAP web service. That company also had their own GUI framework that consisted of generating XML by string concatenation and then doing XSLT transformations. I suppose that is what happens when you have team leaders, managers and a chief software architect who have little understanding of technology and its proper use.