The "chance at making FU money" and "100% more work/stress" and "losing funding or... goes bust and can't make payroll", that's stuff that happens at startups that are paying _below_ the VC-backed market rate.
Once a company has the money to pay employees in the third hump of the trimodal distribution of SWE pay, those "exciting" parts are long gone. You're no longer in the running to receive a hilarious amount of money should everything 10-100x, and you're no longer worried about going bust and missing payroll.
Even a pre-series A startup I worked at saw the writing on the wall with 5 months of runway left and switched from trying to get funded to finding an M&A deal. At the very least you can get the team acquired by someone in the expansion phase and get the rare chance to interview as a group and compare notes and have an exec try and negotiate your salary _up_ for once.
By the time a startup is paying VC-backed market rate, the stress you have comes from the usual suspects: management, culture toxicity, etc. Some of that you can absorb; some might prompt you to find a new job, just like normal.
There's still a level of "hoo-rah, we're a startup" present of course, but I just consider that a poor attempt to keep alive the magic of when the company and its products were small. I let it affect me none.
$JOB-2 got sued over website accessibility. The way these cases go is someone uses the WAVE browser extension to identify accessibility issues, and anything yellow or red becomes part of the cause of action. IIRC we settled out of court.
We were a pretty large, big-name online merchandise retailer. It definitely held our feet to the fire on giving the frontend a design refresh.
You're implying that the only beneficiaries of that action were the people suing. That's not the case. They're accessibility standards for a reason: they're the minimum required to not make an impaired person's life hell. And we should all care about that.
In this case, yes, because if the goal is accessibility, then clearly a company shouldn't face legal action unless it refuses to fix the issues after being asked. Like in the DOJ examples in this comment section.
Any and every good cause will attract a bunch of vultures, scammers and scavengers, including accessibility issues. But I'm not going to believe those who say that any company which isn't up to date on this will "get sued to oblivion". Not without examples that prove it.
With that said, I'm a strong supporter of Internet accessibility because there are no downsides to it. It is essential to the people who need it, and at the same time it improves the experience of those who don't.
A reference to Postel's Law: be conservative in what you produce and liberal in what you accept.
The law holds that you should strive to follow all standards in your own output, while making a best effort to accept content that may break a standard.
This is useful in the context of open standards and evolving ecosystems since it allows peers speaking different versions of a protocol to continue to communicate.
The assertion being made here is that the world has become too rife with exploitation of this attitude for it to continue being a useful rule.
What would have been the result if Jon Postel had advocated being conservative in what you accept, I wonder? Perhaps the most common protocols, had they all done this, would have been bypassed by other protocols that allowed more liberal inputs.
Probably more convoluted protocols, because there are always things that you do accept and that can be used to negotiate protocol extensions.
Imagine a protocol where both sides have to speak JSON with a rigidly defined structure, and neither side is allowed to ask whether the other supports any extension. Such a protocol looks impossible to extend, but that is not the case: you can indicate that you speak a "relaxed" version of it by, e.g., following your first left brace with a predefined, large number of whitespace characters. If you see a client doing this, you know it won't drop the connection if you include a supported_extensions field, and you're still able to speak the rigid version to strict clients.
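A minimal sketch of detecting that marker in C, assuming, purely for illustration, that 64 spaces after the opening brace is the agreed-upon signal (the constant and function name here are hypothetical):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical marker: a "relaxed" peer follows its first '{' with this
 * many spaces, which a strict peer's JSON parser ignores as whitespace. */
#define RELAXED_MARKER_LEN 64

/* Returns true if the message opens with '{' followed by the agreed run
 * of spaces, i.e. the sender understands the extended protocol. */
bool peer_speaks_relaxed(const char *msg, size_t len) {
    if (len < 1 + RELAXED_MARKER_LEN || msg[0] != '{')
        return false;
    for (size_t i = 1; i <= RELAXED_MARKER_LEN; i++) {
        if (msg[i] != ' ')
            return false; /* strict peer: reply in the rigid dialect only */
    }
    return true; /* safe to include a supported_extensions field in replies */
}
```

Since runs of whitespace are valid JSON, a strict implementation never even notices the signal.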
This made me laugh, because it's even more terrible than the most ridiculous chicanery we had to vomit into HTML and CSS over the years (most of which was the fault of MSIE6).
Yep. Which is why Postel's law is, sadly, more like a law of nature (see also "worse is better") than an engineering principle you may or may not follow.
I know it is a single example and we shouldn't extrapolate too much from it, but in the case of HTML, those who accepted more liberal input (HTML4/5) won out over those that were more conservative (XHTML).
HTML is rather different because it's authored by people. It's typically (though not always!) a good idea not to be too pedantic about accepting user input if you can help it. XHTML (served with the correct Content-Type) will completely error out if you make a typo and don't test carefully enough. Useful in the dev cycle? Sure. In production? Less so. "The entire page goes tits up because you used <br> instead of <br />" is just not helpful (and also: needlessly pedantic).
But that doesn't really apply to protocols like TCP. Postel's "law" is best understood in the context of 1980, when TCP had been around for a while but without a real standard, everyone was kind of experimenting, and there were tons of little incompatibilities. In this context, it was reasonable and practical advice.
For a lot of other things though: not so much. "Fail fast" is typically the better approach, which will benefit everyone, especially the people implementing the protocols.
This is also why Sendmail became the de-facto standard around the same time, by the way: it was bug-compatible with everything else. Later this became a liability (sendmail.cf!), but originally it was a great feature.
RFC 9413, referenced in a parent comment, mentions HTML. It points out that formats meant to be human-authored may benefit more from being liberally accepted.
I've also read that XHTML made template authoring hard, as the template itself might not be valid XHTML, and/or different template inputs might make the output invalid. (I sadly can't find the source of this point right now, but I can't claim credit for it.)
I don't recall XHTML being harder to generate from PHP and ASP templates. It's largely down to making sure that all tags in the output are always balanced, which isn't difficult at all.
With PHP specifically, there was an issue where the shorthand <? syntax for code snippets conflicted with the <?xml declaration that would normally be placed at the beginning of an XHTML document: the interpreter would see the <? and try to parse the rest of the declaration as PHP code, which obviously didn't work. The workaround was to disable short tags and always use <?php explicitly.
I would almost argue that a failing of so many standards is the lack of surrounding tooling. Is this implementation correct? Who knows! Try it against this other version and see if they kind of agree. More specifications need to require test suites.
Yes, but only if you served the XHTML with the proper MIME type of application/xhtml+xml. Nearly everyone served it as text/html, which would lead to the document being interpreted as this weird pseudo XHTML/HTML4 hybrid dialect with all sorts of browser idiosyncrasies [1].
HTML5 was born in an era of decent HTML authoring tooling. Very few people write HTML by hand nowadays. This was not true of earlier versions.
Also note that HTML5 codified into liberal acceptance some of the "lazy" manual errors that people made in the early days (many of which were strictly and noisily rejected in XHTML, for example).
> Is there some reason a cryptographic algorithm developer must track the latest release of a compiler?
Tracking the latest release is important because:
1. Distributions build (most? all?) libraries from source, using compilers and flags the algorithm authors can't control
2. Today's latest release is the base of tomorrow's LTS.
If the people who know most about these algorithms aren't tracking the latest compiler releases, then who else would be qualified to detect these issues before a compiler version bearing a problematic optimization is used for the next release of Debian or RHEL?
> Logically, therefore, must we not also expect CPU designers to forego changes that could alter timing behavior?
Maybe? [1]
> freezing all compiler development
There are many, many interesting areas of compiler development beyond incremental application of increasingly niche optimizations.
For instance, greater ability to demarcate code that is intended to be constant time. Or test suites that can detect when optimizations pose a threat to certain algorithms or implementations. Or optimizing the performance of the compiler itself.
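As a concrete illustration of the first point, here is the classic constant-time comparison pattern (a minimal sketch, not tied to any particular library). Nothing in the C standard stops an optimizer from rewriting it into an early-exit loop, which is exactly the kind of transformation better demarcation or dedicated test suites would flag:

```c
#include <stddef.h>

/* Classic constant-time comparison: accumulate differences with XOR/OR
 * instead of returning at the first mismatch, so the running time does
 * not depend on where (or whether) the inputs differ. */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t len) {
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0; /* 1 if equal, 0 otherwise */
}
```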
Overall I agree with you somewhat. All engineers must constantly rail against entropy, and we are doomed to fail. But DJB is probably correct that a well-reasoned rant aimed at the community that both most desires and most produces the problematic optimizations has a better chance at changing the tide of opinion and shifting the rate at which all must diminish than yelling at chipmakers or the laws of thermodynamics.
The output from an LLVM frontend compiler is LLVM IR, which is sort of a high-level architecture-agnostic assembly.
The LLVM backend compiles this IR to the target architecture's machine code.
Some optimizations can (and do?) happen in the backend compiler, but for the most part it is up to the frontend to emit IR that is well optimized within the semantics of the frontend language.
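To make that pipeline concrete, here's a trivial function and roughly the IR a frontend like clang emits for it; the exact output varies by clang version and optimization level:

```c
/* Compile with:  clang -S -emit-llvm -O1 add.c -o add.ll
 * The emitted IR looks roughly like:
 *
 *   define i32 @add(i32 %a, i32 %b) {
 *     %sum = add nsw i32 %a, %b
 *     ret i32 %sum
 *   }
 *
 * The backend then lowers this architecture-agnostic IR to x86-64,
 * AArch64, etc. machine code. */
int add(int a, int b) {
    return a + b;
}
```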
The history of Rust, for example, is filled with examples of optimizations made not to the standard library, but to how the frontend emits IR.
Beyond that, different languages and ecosystems are going to have differing levels of optimization in their stdlib and runtime that will have an outsize effect on day-to-day performance.
I wonder what your actual cache hit rate is? Sounds like it could use some tuning, since 100-150ms is consistent with most requests hitting the origin.
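Back-of-the-envelope, with purely illustrative numbers: if a cache hit takes ~20ms and an origin fetch ~150ms, expected latency for hit rate p is p * 20ms + (1 - p) * 150ms, so even p = 0.5 lands around 85ms. Averages consistently in the 100-150ms range would put p somewhere below ~40%.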
Well, due process is a right co-equal to free speech, so which rights override which others in which circumstances will come down to legal precedent.
My understanding is that the FBI or other non-judicial body cannot unilaterally issue a gag order. Subpoenas and gag orders related to them are granted by judges.
(Which isn't to say that the relationship between the judicial branch and law enforcement bodies is always pure and equal)
"Open Core", where the core product is open source and you provide closed source extensions, generally high-value integrations and enteprise-y features like SSO, active directory support, etc.
"Paid Services" where you have offerings on top of the open product. You might offer trainings, consultations, support contracts, custom development work (e.g. aforementioned high-value integrations). Also very common is to offer a fully-managed SaaS experience, but I wouldn't expect that to be your key to success; many open-core businesses fail expecting that full hosting to be more attractive than it is.
It's becoming more popular to offer "on-prem"/"self-hosted" support. When your customers cut costs, third-party products get cut before cloud spend does, so getting your product counted in the infrastructure column gives you a little edge. Even if they need to cut their support contract down, they're still using your product and are more likely to come back when the storm is over.
The "chance at making FU money" and "100% more work/stress" and "losing funding or... goes bust and can't make payroll", that's stuff that happens at startups that are paying _below_ the VC-backed market rate.
Once a company gets money to be paying employees in the third hump of the trimodal distribution of SWE pay, those "exciting" parts are long gone. You're no longer in the running to receive a hilarious amount of money should everything 10-100x, and you're no longer worried about going bust and not making payroll.
Even a pre-series A startup I worked at saw the writing on the wall with 5 months of runway left and switched from trying to get funded to finding an M&A deal. At the very least you can get the team acquired by someone in the expansion phase and get the rare chance to interview as a group and compare notes and have an exec try and negotiate your salary _up_ for once.
By the time a startup is paying VC-backed market rate, the level of stress you have just comes from the usual suspects: management, culture toxicity, etc. some of that you can absorb, some might prompt you to find a new job, just like normal.
There's still a level of "hoo-rah, we're a startup" present of course, but I just consider this a bad attempt to keep the magic of when the company and products were small alive. I let it affect me none.