Just like the Hindu-Arabic number system enabled great innovation in mathematics because it was so much easier to use than the Roman system, I wonder if the simplicity of English characters enabled the US to jump to such an early start in computer software.
English characters are pretty compact (26 symbols), have no accent marks that can't be ignored, have a 1:1 mapping between uppercase and lowercase, and are easy to break up by word. This lets even very simple algorithms on a very resource-constrained computer do some work that is mostly right.
For example, splitting a person's name by spaces, taking the last word, uppercasing it, and sorting lexicographically by ASCII value mostly works if you want to produce a phone book listing, especially in 1950s-1980s America. And you can code this with very simple integer operations, without needing a lookup table or a bunch of special rules. Soundex is also pretty simple to implement and deals with a lot of homophones.
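As a sketch of how little machinery this takes (Python here for readability; the names are made up, and this is the basic American Soundex variant, not any particular historical implementation):

```python
def phone_book_key(name: str) -> str:
    """Naive 'surname' rule: last space-separated word, uppercased."""
    return name.split()[-1].upper()

def soundex(name: str) -> str:
    """Minimal American Soundex: keep the first letter, digit-code the rest."""
    codes = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            codes[ch] = digit
    name = name.lower()
    out = name[0].upper()
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":              # h/w don't separate identical codes
            continue
        code = codes.get(ch, "")
        if code and code != prev:   # collapse adjacent identical codes
            out += code
        prev = code
    return (out + "000")[:4]        # pad/truncate to letter + 3 digits

names = ["Mary Anne Jones", "Bob Adams", "John Smith"]
print(sorted(names, key=phone_book_key))     # Adams, Jones, Smith
print(soundex("Robert"), soundex("Rupert"))  # both R163: homophones collide
```

Note how the whole thing is string splitting, a small letter-to-digit map, and integer-sized comparisons — exactly the kind of work a 1950s-era machine could do cheaply on an unaccented 26-letter alphabet.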
Of course, now that we have the computing resources and libraries, handling the vast diversity of human languages is doable, but in terms of bootstrapping a computer software industry, I think simplicity played a role.
I suspect the simplicity of the English (Latin) alphabet was only of marginal importance in the U.S. having such a long lead in computer science and engineering over the rest of the world. Need (WWII, Cold War), population, industry (the U.S. produced 50% of the world's GDP at the end of WWII), wealth, a market economy, and all that goes with and precedes all of the preceding list (education, opportunities, institutions, culture, etc.) meant that the U.S. was bound to get started on computer science very early, and was bound to make great progress.
If English had had twice the number of glyphs, IBM would still have existed, even though they'd have had more work to do. There might have been some number of glyphs past which the burden might have slowed progress enough that some other nation would have been competitive, but it's all just information, and ultimately computer engineers would have figured out something.
Or look at some evidence: Japan already had what it needed to communicate digitally before WWII in spite of having a complex script (three, four if you include romaji), and most computer engineering and computer science progress came after WWII, and indeed, Japan's computer industry did reasonably well post-WWII -- comparable to France's, which did quite well considering how much smaller France was than the U.S. in population, industry, wealth, etc. And: need was decidedly critical in the UK's computer engineering and science development during WWII (Alan Turing, Colossus). France, Japan, and the UK, each had some of the conditions that the U.S. also had, but not as many, so it's not surprising that the U.S. did well.
I've a feeling that among the things that helped the U.S. that I didn't list above are also: the decadal census (which is intimately related to IBM's rise), Sears & Roebuck (the antecedent to Amazon), the immense geography of the country and widely dispersed population (which impacted the preceding two items). I bet others can add items to the list. Some of these certainly existed elsewhere.
Of somewhat less importance is that a number of very important mathematicians who helped found computer science (e.g., Claude Shannon, Alonzo Church, Haskell Curry) were Americans. Their research was published, of course, and many others were not Americans (e.g., Charles Babbage), which is why I say they were of less importance.
> And: need was decidedly critical in the UK's computer engineering and science development during WWII (Alan Turing, Colossus)
It's useful to note here too that the UK's work in WWII was started by desperate Polish need, and there is some indication the Poles got further ahead than people tend to credit them for. (It's interesting to wonder whether Poland might have led in computation if it hadn't been, you know, invaded, and forced to kick the ball over to the UK.)
It's also worth noting the different approaches to "war time secrets" the US and the UK took. Where Alan Turing had a strong practical head start on computation in WWII, his work was locked under confidential and secret designations and he was not allowed to commercialize it after the war. Worse, he was almost entirely stopped from building practical machines, and it's a wonder he managed to contribute as much to computing theory as he did even with the restrictions he was under. (Also don't forget his own government led him to an early suicide.)
The US, with its contractor-based approach (including IBM's involvement), didn't hamstring commercial interests anywhere near as strongly after WWII by locking things into a "classified vault" (even though declassified hindsight now tells us that, comparatively, its wartime efforts straggled behind the UK's), and the US did not try to stop the people who had worked on computation from continuing to work on computation.
Yes indeed, the Polish really helped a lot. But obviously they could not continue their efforts after Poland fell -- not in Poland anyways. And everything else you say is also true. The last thing you say is particularly insightful: that the U.S., by using commercial contractors, was essentially committing to letting them commercialize some of the technology, while the UK by not using commercial contractors, was not and then did not. It's especially sad that homophobia led to the early end of Turing's life -- imagine what he could have done had he lived longer!
1950s-1980s America had complexity too; they just overcame it, and you're used to it. Having a notion of 'case' or contractions at all (plenty of languages have neither), the imperial system, the odd MM/DD/YYYY date format.
This is surprisingly hilarious for a "Unicode Technical Note." It changed my opinion about the Unicode Consortium—positively!—until I read this:
> These technical notes are independent publications, not approved by any of the Unicode Technical Committees, nor are they part of the Unicode Standard or any other Unicode specification. Publication does not imply endorsement by the Unicode Consortium in any way.
TBF, Unicode contains quite a few absurdities because of its commitment to round-trip consistency. That means glyphs which were obviously representations of the same thing (such as '[' in one character set and '[' in another) got different code points. The names often reflect that too.
? My problem is not with Unicode here. Unicode has angle-brackets. But SGML (which tbf predates Unicode by a lot) doesn't use them... It uses less-than/greater-than signs and pretends that they're angle-brackets. They're not. Consequently, HTML <tags> are typographically incorrect and look horrible.
It's made even worse by "programmer fonts" that make the glyphs for less-than/greater-than look like angle-brackets, which is just wrong; now there's no way to write the comparators.
I guess we all just wish that ASCII (and/or keyboards) had one or two more pairs of brackets.
One could argue that because of the widespread use of < and > in HTML, those characters have gained an alternate meaning as brackets. Saying they're NOT BRACKETS is a bit like saying "selfie" is not a word. Just as language changes over time, glyphs can gain alternate meanings through usage.
That line has two occurrences of U+0029 RIGHT PARENTHESIS in it. I think one of them should have been U+0028.
On top of that, you really want dir=rtl on the element to give it right-to-left base directionality. If that's not an option (e.g. in a HackerNews comment), you can surround the text with the Unicode directional control characters U+202B RIGHT-TO-LEFT EMBEDDING .. U+202C POP DIRECTIONAL FORMATTING to make it behave correctly as a right-to-left run within a left-to-right context.
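A minimal sketch of the embedding approach (Python; the Hebrew sample word is arbitrary):

```python
RLE = "\u202b"  # RIGHT-TO-LEFT EMBEDDING
PDF = "\u202c"  # POP DIRECTIONAL FORMATTING

hebrew = "\u05e9\u05dc\u05d5\u05dd"  # "shalom", an arbitrary RTL sample
line = f"Greeting: {RLE}{hebrew}{PDF} (Hebrew)"
# The RLE..PDF pair makes the Hebrew run render right-to-left
# inside this otherwise left-to-right line.
print(line)
```

Worth knowing: later Unicode versions recommend the directional *isolates* (U+2066..U+2069, e.g. U+2067 RIGHT-TO-LEFT ISOLATE paired with U+2069 POP DIRECTIONAL ISOLATE) over the older embedding controls, since isolates don't let the embedded run affect the ordering of surrounding text.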
Of course mathematicians would also consider brackets used like ]0,1[ and [0,1[ as valid notation for open or half-open intervals. And if you try hard enough you can even make your braces {} look like a ξ and write some monstrosity like {ξ∈[0,1[}.