I think a better rationale is that NaN does not have a single binary representation, but in software one may not be able to distinguish the variants.
An f32 NaN has 22 payload bits that can take any value, originally intended to encode error information or other user data. There are also two kinds of NaN: quiet NaNs (qNaN) and signaling NaNs (sNaN), which behave differently in calculations (sNaNs may raise exceptions).
Without looking at the bits, all you can see is "NaN", so it makes sense not to treat them as equal in general. Otherwise some `NaN === NaN` while other `NaN !== NaN`, which would be even more confusing.
I don't think that logic quite holds up, because even when two NaNs do have the same bit representation, a conforming implementation still has to report them as not equal. So an implementation of `==` that handles NaN still ends up poking around in the bits and doing some extra logic; it's not just "are the bit patterns the same?"
(I believe this is also true for non-NaN floating point values. I'm not sure but off the top of my head, I think `==` needs to ignore the difference between positive and negative zero.)
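This is easy to observe from JavaScript (a quick sketch, not authoritative): `===` disagrees with bit-level identity in both directions.

```javascript
// Read a double's raw bit pattern via a DataView.
const bitsOf = (x) => {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  return view.getBigUint64(0);
};

// Two NaNs with identical bit patterns still compare unequal...
console.log(bitsOf(NaN) === bitsOf(NaN)); // true
console.log(NaN === NaN);                 // false

// ...and +0/-0 have different bit patterns yet compare equal.
console.log(bitsOf(0) === bitsOf(-0));    // false
console.log(0 === -0);                    // true
```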
In Julia, `NaN === NaN` evaluates to true but `NaN === -NaN` evaluates to false. Of course, `NaN == NaN` evaluates to false. I think it makes sense that in principle `===` looks at bit representations, but I can't think of any reason that's useful here, unless you want to encode meaningful stuff inside your NaNs for some reason. It reminded me of this satirical repo [0], also discussed here [1].
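For comparison, JavaScript's closest analogue to Julia's `===` is `Object.is` (a small sketch), though it canonicalizes NaN rather than comparing bit patterns:

```javascript
// Object.is uses the SameValue algorithm: all NaNs count as the same
// value, while +0 and -0 are distinguished.
console.log(Object.is(NaN, NaN));  // true  (while NaN === NaN is false)
console.log(Object.is(0, -0));     // false (while 0 === -0 is true)
console.log(Object.is(NaN, -NaN)); // true  — unlike Julia's ===
```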
I tend to lower-case all my HTML because it has less entropy and therefore can be compressed more effectively.
But some modern compression algorithms come with a pre-defined dictionary for websites. These usually contain common strings like `<!DOCTYPE html>` in their most-used form, so doing it like everybody else might make compression even more effective.
I really dislike it when a Turing-complete language is used for configuration. It almost always breaks any possibility of programmatically processing or analyzing the config. You can't just JSON.parse the file and check it.
Also, I've been in projects where I had to debug the config multiple levels deep, tracking side effects someone put in some constructor while trying to DRY out the code. We already have these issues in the application itself. Let's not have them in configuration too.
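For context, this is the kind of check that stays trivial when the config is plain data (field names here are made up for illustration):

```javascript
// A declarative config can be parsed and validated with ordinary code.
const raw = '{ "port": 8080, "logLevel": "info" }';
const config = JSON.parse(raw);

if (!Number.isInteger(config.port) || config.port < 1 || config.port > 65535) {
  throw new Error(`invalid port: ${config.port}`);
}
if (!["debug", "info", "warn", "error"].includes(config.logLevel)) {
  throw new Error(`invalid logLevel: ${config.logLevel}`);
}
console.log("config ok");
// None of this is possible in general once the config file can run
// arbitrary code before producing its values.
```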
> It almost always breaks every possibility to programmatically process or analyze the config. You can't just JSON.parse the file and check it.
Counterpoint: 95% of config-readers are or could be checked in with all the config they ever read.
I have yet to come across a programming language where it is easier to read + parse + type/structure validate a json/whatever file than it is to import a thing. Imports are also /much/ less fragile to e.g. the current working directory. And you get autocomplete! As for checks, you can use unit tests. And types, if you've got them.
I try to frame these guys as "data values" rather than configuration though. People tend to have less funny ideas about making their data 'clean'.
The only time where JSON.parse is actually easier is when you can't use a normal import. This boils down to when users write the data and have practical barriers to checking in to your source code. IME such cases are rare, and most are bad UX.
> Side effects in constructors
Putting such things in configuration files will not save you from people DRYing out the config files indirectly with effectful config processing logic. I recently spent the better part of a month ripping out one such chimera because changing the data model was intractable.
This is what's nice about Pkl: you define a schema as a Pkl file, you define a value of that schema as a Pkl file that imports the schema, and `pkl eval myfile.pkl` does the type check and outputs YAML for visual inspection or programmatic processing. Keeping it to one file per module means I almost never obsessively D-R-Y my Pkl configs.
Actually, that's not even the biggest benefit (tests for schemas are), but it's nice to have the “.ts” file log the actual config as JSON and have the app consume that JSON, rather than importing the .ts file and all its dependencies and hitting weird things like “this configuration property expects a lambda.”
I have yet to see a JS project where the config for each tool couldn't be something simple like `.toolrc`. We could have some markers to delineate plugin config.
Instead, there's yet another piece of software living in the configuration of sample projects, instead of just good code organization and sensible conventions.
Why are they optional? Why not just make them mandatory, so I don't need to guess which characters need quotes?
Edit:
What most formats also lack: round-trip semantics on de-serialization. Ideally, I want formatting and comments preserved when the config is changed and re-committed programmatically.
Two extra characters per rep, each involving a "shift", and it's furthermore an eyesore to read
How is it that the comments on this post seem to consist 100% of people who think JSON is the perfect language and that any deviation from it is an unnecessary complexity? Use JSON for configuration for literally 5 minutes and you will get annoyed at quoting keys, lacking comments, escaping long strings, and juggling commas. MAML is almost exactly what I'd come up with (although I wouldn't have made commas optional, that feels weird.)
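Three of those pain points are easy to reproduce with plain `JSON.parse` (a quick demonstration, nothing more):

```javascript
// Comments, trailing commas, and unquoted keys are all syntax errors in JSON.
const fails = (s) => {
  try { JSON.parse(s); return false; } catch { return true; }
};

console.log(fails('{"a": 1} // why?')); // true — no comments
console.log(fails('{"a": 1,}'));        // true — no trailing commas
console.log(fails('{a: 1}'));           // true — keys must be quoted
```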
> Two extra characters per rep, each involving a "shift"
You'd expect text editors to do this automatically; I'll admit, I don't think mine does.
> and it's furthermore an eyesore to read
We'll have to disagree on that one because I think it looks a lot nicer. I always preferred quoted attributes in html too.
> How is it that the comments on this post seem to consist 100% of people who think JSON is the perfect language
I'm sure you intended that as hyperbole. JSON isn't perfect, but it's got a lot going for it, not least ubiquity.
> Use JSON for configuration for literally 5 minutes and you will get annoyed at quoting keys, lacking comments, escaping long strings, and juggling commas.
I've used JSON for configuration plenty of times and haven't faced these issues. I'm not denying your experience, I just want to understand it.
IIRC the CEO(?) of Duolingo was asked what he would choose if he had to pick between a more effective language course and more gamification. His answer was gamification, because the best course doesn't help anyone if no one shows up.
So at least they know that it's not the best way of learning a language.
I use the mental model of nested maps for "column order matters".
For example, an index on (published, low_quality_probability, lang) is just a `Map<date, Map<number, Map<lang, rowId>>>` in my mental model.
These maps are ordered in the order the index defines. That explains why column order matters, why one cannot skip columns, and why index use stops at the first range query.
Just imagine getting a final rowId from these nested maps and you'll see why the index works for some queries and doesn't for others.
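Writing the model out (plain `Map`s standing in for the sorted structure; the sample rows are invented):

```javascript
// Composite index on (published, low_quality_probability, lang) as nested maps.
const index = new Map([
  ["2024-01-01", new Map([
    [0.1, new Map([["de", 102], ["en", 101]])],
    [0.9, new Map([["en", 103]])],
  ])],
  ["2024-01-02", new Map([[0.2, new Map([["fr", 104]])]])],
]);

// Equality on all columns: descend one level per column.
const rowId = index.get("2024-01-01").get(0.1).get("de");
console.log(rowId); // 102

// Skipping the middle column (published + lang only) breaks the descent:
// without a low_quality_probability key you can't get past level two
// and would have to scan every inner map instead.
```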
I use this same approach to explain indexes to database novices; it usually helps a lot, especially for how leaf nodes / included columns work as well (using that info instead of, or in addition to, the rowid/primary key as the last value there).
It's actually a `List<Tuple<date, number, rowId>>` in sorted order, and queries are more akin to binary search (real trees aren't binary but use a wider fanout, depending on many factors).
Yeah, it might be closer to what's actually happening, but it doesn't make it obvious why something doesn't work. The map model doesn't cover non-unique indices either, or different types of indexes.
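For what it's worth, the flattened tuple-list view handles non-unique entries naturally: matching rows just become a contiguous run. A sketch (invented sample rows; a real B-tree's fanout and page structure are elided):

```javascript
// The index as sorted (published, low_quality_probability, lang, rowId) tuples.
const entries = [
  ["2024-01-01", 0.1, "de", 102],
  ["2024-01-01", 0.1, "en", 101],
  ["2024-01-01", 0.9, "en", 103],
  ["2024-01-02", 0.2, "fr", 104],
];

// Lexicographic compare of a row against a prefix of its leading columns.
function cmpPrefix(row, prefix) {
  for (let i = 0; i < prefix.length; i++) {
    if (row[i] < prefix[i]) return -1;
    if (row[i] > prefix[i]) return 1;
  }
  return 0;
}

// First row >= the prefix — the analogue of descending the tree.
function lowerBound(rows, prefix) {
  let lo = 0, hi = rows.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (cmpPrefix(rows[mid], prefix) < 0) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

// All rowIds for published = "2024-01-01", low_quality_probability = 0.1.
const prefix = ["2024-01-01", 0.1];
const hits = [];
for (let i = lowerBound(entries, prefix);
     i < entries.length && cmpPrefix(entries[i], prefix) === 0; i++) {
  hits.push(entries[i][3]);
}
console.log(hits); // [ 102, 101 ]
```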
> alongside features that let you transfer your encrypted message history between Android, iOS, and Desktop devices.
That's actually the feature I've been looking forward to. When I moved from Android to iOS, I lost _all_ message histories from all messenger apps that use E2EE (Signal, WhatsApp, Threema, etc.). The only one that "just worked" was Telegram, due to not being end-to-end encrypted. WhatsApp had a migration app that has to be run while setting up the iPhone, but it failed due to some bug. Signal had backups, but they didn't seem to be compatible between the two operating systems.
You already can: if you at least set up desktop, you can transfer message history too, though you won't have media older than 45 days. Maybe it can work as a stopgap before they roll out encrypted backups everywhere.
That's a weird and crappy arbitrary limitation when I could move an arbitrary amount of data between the two devices otherwise. It's the worst part of Signal.
On top of that, you don't have that limitation on Android. It's like enterprise IT, where you put up restrictions on files everywhere and then people just upload them to their personal OneDrive.
I've always been able to transfer history from Android phone to Android phone. When I switched to iOS, I didn't bother, since my wife was just going to start using Messages due to its encryption. I really only used Signal with my wife; she only used it because I was, and it allowed us to send images back and forth without losing quality.
> WhatsApp had a migration app that has to be done when setting up the iPhone, but it failed due to some bug.
It's appalling to see how poor their QA is for a company that big. They also have a migration tool for moving between Android devices without going through Google Drive, but that one didn't work either when I tried it two years ago.