XML is not a data serialisation tool; it is a language tool. It is for creating notations and should be used to build phrase-like structures. So if users need these distinctions, they make a notation that expresses them.
Basically, the difference is that the underlying data structures are different.
JSON supports arrays of arbitrary items and dictionaries with string keys and arbitrary values. It aligns well with commonly used data structures.
An XML node supports a dictionary with string keys and string values (the attributes), one dedicated string attribute (the name), and an array of nodes (the children). This is a very unusual structure and requires dedicated effort to map onto programming-language objects and structures. There were even so-called "OXM" frameworks (Object-XML Mappers), by analogy with ORMs.
Of course, in the end it is possible to build a mapping between arrays, dictionaries, and the DOM. But JSON is a much more natural fit.
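To make the contrast concrete, here is a rough sketch in Python of what the two models boil down to (the names are illustrative, not any particular library's API):

import json
from dataclasses import dataclass, field

# JSON maps straight onto built-in types: dicts with string keys and arbitrary
# values, lists of arbitrary items, strings, numbers, booleans, null.
doc = json.loads('{"title": "Example", "tags": ["a", "b"], "pages": 3}')
# -> {'title': 'Example', 'tags': ['a', 'b'], 'pages': 3}

# An XML element, ignoring text content for the moment, is roughly this shape:
@dataclass
class Element:
    name: str                                  # the tag name
    attributes: dict[str, str]                 # string keys, string values only
    children: list["Element"] = field(default_factory=list)

Anything that is not a string or a nested element has to be encoded on top of that structure, which is where the mapping effort mentioned above comes from.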
XML is immediately usable if you need to mark up text. You can literally just write or edit it and invent tags as needed. As long as they are consistent and mark what needs to be marked, any set of tags will do; you can always change them later.
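For instance, a quick ad-hoc markup of a plain sentence might look like this (the tags are invented on the spot, which is rather the point):

<note>Call <person>Alice</person> about the <deadline>Friday</deadline> release and update the <doc>installation guide</doc>.</note>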
XML is meant for writing phrase-like structures. Structures like this:
int myFunc(int a, void *b);
This is a phrase. It is not data, not an array or a dictionary, although technically something like that will be used in the implementation. Here it is written in a C-like notation. The idea of XML was to introduce a uniform substrate for notations. The example above could be written like this:
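<function name="myFunc" returns="int">
  <arg type="int" name="a"/>
  <arg type="void*" name="b"/>
</function>

(The tag names here are invented for illustration; as noted earlier, any consistent set of tags would do.)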
This is, of course, less convenient to write than a specific notation. But you don't need a parser, and you can have generic tools that process any notation. (Technically, a parser can produce its results in XML; it is a very natural form for them, basically an AST.) Parsers are usually part of a tool and do not work on their own, so first there is a parser for C, then an indexer for C, then a syntax highlighter for C, and so on: each does some parsing for its own purpose, doing the same job several times. With XML, processing is not tied to any single purpose: the example above can be used for documentation, indexing, code generation, etc.
XML is a very good fit for niche notations written by a few professionals: interface specifications, keyboard layouts, complex drawings, and so on. And it is being used there right now, because there is no other tool like it, aside from a full-fledged language with a parser. For example, there is an XML notation that describes numerous bibliography styles. How many people need to describe bibliography styles? Right. With XML they start getting usable descriptions right away and can fine-tune them as they go. And these descriptions are immediately usable by generic XML tools that actually produce bibliographies in those styles.
Processing XML is like parsing a language, except that the parser is generic. Assuming there is no text content, it goes in two steps: first you get the element header (name and attributes), then the child elements. By the time you get those children, they are no longer XML elements but objects created by your code from those elements. Having all that, you create another object and return it, so that it in turn gets processed by the code handling the parent element. The process is split into two steps so that, before the children are parsed, you can alter the parsing rules based on the element header. This is all very natural as long as you remember it is a language, not a data dump. Text complicates this only a little: in the second step you get objects interspersed with text, that's all.
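A minimal sketch of such a generic two-step processor, in Python with the standard library's pull parser (the <function>/<arg> tags and the handlers are the illustrative ones from above, not any fixed API):

import xml.etree.ElementTree as ET

def build(xml_text, handlers):
    # handlers maps a tag name to a function(attributes, children, text) -> object
    parser = ET.XMLPullParser(events=("start", "end"))
    parser.feed(xml_text)
    parser.close()
    stack = [[]]   # one list per nesting level: objects built from the children so far
    for event, elem in parser.read_events():
        if event == "start":
            # Step one: the header (name and attributes) is known, the children are
            # not parsed yet -- this is where parsing rules could be switched.
            stack.append([])
        else:
            # Step two: the children arrive as objects already built by our handlers.
            children = stack.pop()
            obj = handlers[elem.tag](elem.attrib, children, (elem.text or "").strip())
            stack[-1].append(obj)
    return stack[0][0]

handlers = {
    "arg": lambda attrs, children, text: (attrs["type"], attrs["name"]),
    "function": lambda attrs, children, text: {"name": attrs["name"],
                                               "returns": attrs["returns"],
                                               "args": children},
}

decl = """<function name="myFunc" returns="int">
  <arg type="int" name="a"/>
  <arg type="void*" name="b"/>
</function>"""

print(build(decl, handlers))
# -> {'name': 'myFunc', 'returns': 'int', 'args': [('int', 'a'), ('void*', 'b')]}

By the time the function handler runs, its children are no longer XML elements but the tuples produced by the arg handler, exactly as described above.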
People cannot author data dumps. For example, the relational model is a very good fit for internal data representation, much better than JSON. But there is no way a human could author a set of interrelated tables beyond tiny toy examples. (The same thing happens with state machines.) Yet a human can produce tons of phrase-like descriptions of anything without breaking a sweat. XML is that kind of authoring tool.