Things have changed in 30 years. The Unix designers didn't have the time or resources to write elaborate tools, nor the need for complicated software. Back then you would use telnet to browse Gopher or send your mail. Things are different now: I dare you to read a 3KB JSON file without a parser. Base64 was a hack to push binary data through text-based protocols. Good tool support will always beat no support.
A simple protocol makes it simpler to write tools; simpler tools are easier to change and upgrade, and it's simpler to add features. Faster development, faster improvement. A better life for the programmer.
Also, sometimes you need something specific, and then you have the option to code it up yourself, or to change an open source library quickly and efficiently.
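To make "code it up yourself" concrete: a minimal sketch, assuming a generic CRLF-delimited "Name: value" header block of the kind HTTP, SMTP, and IMAP responses are built from (the helper name is mine, not from any library):

    def parse_headers(raw: bytes) -> dict:
        """Parse a CRLF-delimited 'Name: value' header block into a dict."""
        headers = {}
        for line in raw.decode("ascii", errors="replace").split("\r\n"):
            if not line:                  # blank line ends the header block
                break
            name, _, value = line.partition(":")
            headers[name.strip().lower()] = value.strip()
        return headers

    print(parse_headers(b"Host: example.com\r\nContent-Length: 12\r\n\r\n"))
    # {'host': 'example.com', 'content-length': '12'}

Ten lines, no library. That's the whole appeal.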
The main "thing" that changed in 30 years is computational power, which is now several orders of magnitude greater. If, 30 years ago, computers with the power of Timex watches could spare the cycles for text protocol overhead, I fail to see the need to squeeze that last drop of performance out of today's hardware.
The advantages of a text-based protocol remain the same. The disadvantage is lessened by faster CPUs.
The only advantage to a text-based protocol is that you can read and understand it in raw form. Unfortunately this is not an advantage over binary protocols.
If anything, the more complex the protocol, the more redundant the text becomes, because we have to write tools to parse the text and output it so we can understand it better or identify flaws in it, and work around bugs introduced by the human element of the protocol. The ability to view and interpret the protocol in a text editor is equivalent to the ability to view and interpret the protocol as output from a debugging tool or log file - except the tool can give you much more detail than the text file in a variety of ways. Text files are inferior, but they can be quicker/simpler, depending on what you're doing.
You still need a library or tool to write the protocol out, as it's complicated and needs to be structured for the machine, not a person.
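A rough sketch of what "writing the protocol out" demands even for plain-text HTTP; build_request here is a hypothetical helper, not a real library call:

    def build_request(method: str, path: str, host: str, body: bytes = b"") -> bytes:
        # Every detail here is for the machine, not the reader: exact
        # CRLFs, a correct Content-Length, one blank line before the
        # body. Get any of them wrong and the peer stalls or rejects.
        head = (
            f"{method} {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Length: {len(body)}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        return head.encode("ascii") + body

    print(build_request("POST", "/api", "example.com", b'{"x": 1}').decode())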
Second, saying "it's ok that it's slow, we'll just buy a faster CPU" is not a good argument for anything ever. It's part of the reason it's taken so long to adopt encrypted services everywhere. Someone (Google) had to finally prove it wasn't slow so people would adopt it.
Third, the state of modern computers is that there is no difference in speed between interpreting most text protocols and binary protocols. But that has nothing to do with efficiency, or what the machine is naturally suited to doing. You have to translate from English into machine code for a computer to know what the hell another machine is talking about. Machines don't care about line-by-line, or capitalization, or indentation, spaces, or any vestige of our natural language. Strip all those things away and machines purr along happily with less bullshit to deal with, which means simpler, more efficient code. Note that I didn't say faster.
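A sketch of the stripped-down case, assuming a made-up length-prefixed framing (nothing standard, purely an illustration of the point above):

    import struct

    def encode_frame(payload: bytes) -> bytes:
        # 4-byte big-endian length prefix, then the raw payload.
        return struct.pack(">I", len(payload)) + payload

    def decode_frame(buf: bytes) -> bytes:
        # Fixed-offset reads: no delimiter scanning, no case-folding,
        # no escaping rules. The machine just slices.
        (length,) = struct.unpack(">I", buf[:4])
        return buf[4:4 + length]

    frame = encode_frame(b"PING")
    print(frame.hex())          # 0000000450494e47
    print(decode_frame(frame))  # b'PING'

No capitalization rules, no whitespace, no line endings: simpler code, whether or not it's faster.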
Fourth, your performance and history observation is flawed. We need a lot more performance today than we did before, as we're scaling existing technology to many, many orders of magnitude higher than anything that existed when it was invented. Yes, we have faster CPUs. We also have more users and more data, and we don't have time to sit around reading packet dumps in text editors.
> The ability to view and interpret the protocol in a text editor is equivalent to the ability to view and interpret the protocol as output from a debugging tool or log file - except the tool can give you much more detail than the text file in a variety of ways. Text files are inferior, but they can be quicker/simpler, depending on what you're doing.
This is entirely false, as anyone who has ever had to debug a malfunctioning HTTP proxy or a misbehaving IMAP server can tell you. Nothing beats netcat for a quick bug isolation test. As for the need for formal parsing: again, true for production code, entirely false for a sysop's transient tasks.
Compare debugging a CORBA server with debugging HTTP for a whiff of the difference.
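For the record, the kind of ten-second bug-isolation probe meant here, sketched in Python in place of netcat (proxy.example.com is a placeholder address):

    import socket

    # Speak the protocol by hand and eyeball the raw reply. If the
    # proxy mangles the request line or drops a header, you see it
    # immediately, with no analyzer in the loop.
    with socket.create_connection(("proxy.example.com", 8080), timeout=5) as s:
        s.sendall(b"GET http://example.com/ HTTP/1.1\r\n"
                  b"Host: example.com\r\n"
                  b"Connection: close\r\n\r\n")
        print(s.recv(65536).decode("latin-1", errors="replace"))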
Tools aren't omnipresent. My myriad BusyBox embedded devices will likely never have a protocol analyzer. If I'm in need of one there, I'm out of luck.
You're already dependent on tools - your eyes and language processing parts of your brain - to use text formats. With a binary protocol you'd be equally dependent on tools. They're just not embedded in your skull.
Seeing as we use binary protocols every day of our lives, and the tools to work with them have existed for years, and nobody has any problem with using them, let's let this argument rest.
"Here is a ridiculous stretch of semantics because I can't admit anyone else has a point. Also let's stop arguing at the end of this sentence to avoid rebuttal."