They've discovered how to write dynamically typed code correctly, or at least a philosophy of it. It's not "discovering static typing", because this question doesn't even come up in statically typed languages. (TypeScript is, for this particular purpose, still effectively a dynamically typed language.)
I remember writing Python and Perl where functions largely just assumed you passed them the correct types (with isolated exceptions where it made sense), years before JavaScript was anything but a browser language for little snippets of functionality. It's a dynamic-language antipattern for every function to constantly, defensively check all of its input for type correctness: despite being written in the name of "correctness", that style is fragile, inconsistent between definitions, often wrong anyway, slow, and complicates every function it touches, to the point that it essentially eliminates the advantages of a dynamic language in the first place.
Dynamic languages have to move some of the responsibility for calling with correct arguments to the caller, because checking the arguments correctly is difficult and at times simply impossible. If the function is called with the wrong arguments and blows up, you need to blame the caller, not the called function.
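To make the contrast concrete, here is a rough TypeScript sketch (my illustration, not from the article): the first version re-validates everything defensively, the second states its contract in the signature and trusts the caller.

export function areaDefensive(width: unknown, height: unknown): number {
  // Defensive style: every function re-checks its inputs at runtime.
  if (typeof width !== "number" || Number.isNaN(width)) {
    throw new TypeError("width must be a number");
  }
  if (typeof height !== "number" || Number.isNaN(height)) {
    throw new TypeError("height must be a number");
  }
  return width * height;
}

export function area(width: number, height: number): number {
  // Trusting style: the contract is "pass numbers"; a wrong call is the
  // caller's bug, and the failure points back at the call site.
  return width * height;
}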
I observe that, in general, this seems to be something that requires a certain degree of programming maturity to internalize: just because the compiler or stack trace says the problem is on line 123 of file X does not mean the problem is actually there, or that the correct fix belongs there.
I’ve seen something similar happen in Rust as well (and I do consider it an antipattern).
Some libraries take a `TryFrom<RealType>` as input, instead of RealType. Their return value is now polluted with the Error type of the potential failure.
This is a pain to work with when you’re passing the exact type, since you basically need to handle an unreachable error case.
Functions should take the raw types which they need, and leave conversion to the call site.
It's annoying, but not for the error handling. To the contrary, I think the error handling is actually improved by this pattern.
If you manually convert beforehand, you easily end up working with a Result<Result<T, E>, E>.
What I find annoying about the pattern is that it hinders API exploration through intellisense ("okay, it seems I need a XY, how do I get one of them?"), because the TryFrom (sort of) obscures all the types that would be valid. This problem isn't exclusive to Rust, though; very OO APIs that only have a base class in the signature but really expect some concrete implementation are similarly annoying.
Of course you can look up "who implements X"; it's just an inconvenient extra step.
And there is merit to APIs designed like this - stuff like Axum in Rust would be significantly more annoying to use if you had to convert everything by hand.
Though often this kind of design feels like a band-aid for the lack of union types in the language.
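For comparison, here is roughly what the union-type version can look like in TypeScript (hypothetical names, just to illustrate the discoverability point): the accepted inputs are spelled out in the signature, so "what can I pass here?" is answered by the signature itself.

// Hypothetical sketch: the signature lists every accepted input, so tooling
// can show "string | URL | { path: string }" instead of an opaque
// "anything convertible to Route" bound.
type RouteInput = string | URL | { path: string };

function toRoute(input: RouteInput): string {
  if (typeof input === "string") return input;
  if (input instanceof URL) return input.pathname;
  return input.path;
}

// Call sites stay convenient without hiding the set of valid types:
toRoute("/users");
toRoute(new URL("https://example.com/users"));
toRoute({ path: "/users" });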
It's definitely pretty annoying, though not because of the errors. Actually, the errors might even be the biggest benefit.
If the conversion fails I can't continue with the function call.
I think there is an important observation in it though: that dynamic, loosely typed languages will let you create code that "works" faster, but over the long run will lead to more ecosystem bloat, because there are more unexpected edge cases that the language leaves to the programmer to decide how to handle.
Untyped languages force developers into a tradeoff between readability and safety that exists to a much lesser degree in typed languages. Different authors in the ecosystem will make that tradeoff in different ways.
In my experience, this only holds true for small scripts. When you're doing scientific computing or deep learning with data flowing between different libraries, the lack of type safety makes development much slower if you don't maintain strict discipline around your interfaces.
For this particular example, where they have to do a runtime parse for the string-to-number conversion, yes. But in general static type checks are resolved at compile time, so they incur no runtime cost, nor do they increase the size of the resulting code. That is the primary benefit of static type checking.
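A small TypeScript illustration of that distinction (my own sketch, not from the article): the annotations compile away entirely, while accepting strings forces a runtime parse that no amount of static checking can remove.

// The annotations below are erased by tsc; the emitted JavaScript contains
// no checks and no extra code for them.
function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

// Accepting a string, on the other hand, requires real work at runtime:
function parseNumber(s: string): number {
  const n = Number(s);
  if (Number.isNaN(n)) throw new Error(`not a number: ${JSON.stringify(s)}`);
  return n;
}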
If we're trying to solve problems with good design, take endpoint1 and endpoint2 and have the function sort them. Having max and min is itself a bad design choice; the function doesn't need the caller to work that out. Why should the caller have to order the ends of the interval? It adds nothing but the possibility of calling the function wrong. So in this case:
export function clamp(value: number, endpoint1: number, endpoint2: number): number {
  return Math.min(Math.max(value, Math.min(endpoint1, endpoint2)), Math.max(endpoint1, endpoint2));
}
That would lead to unpleasant surprises. When calling the function from some loop, and when the bounds are inclusive, it's pretty common for (correct) edge cases to exist where you'd call it with end === start - 1. The correct behaviour is to treat that as an empty set; a function that silently sorts its endpoints would instead swap them, and you'd get duplicate/unexpected records in some cases, which may be hard to debug.
It seems like your approach is just trying to ignore programmer errors, which is rarely a good idea.
I have no horse in the race and would usually just implement my clamp function the way the article does. However, if the clamp function clamping a number is an unpleasant surprise, I'm not going to accept that it is the fault of the clamp function. This hypothetical loop is buggy code and should be rewritten to expect clamp to clamp.
It is a special kind of madness if we're supporting a reliance on implementation-specific failure modes of the clamp function when someone calls it with incoherent arguments.
> This hypothetical loop is buggy code and should be rewritten to expect clamp to clamp.
But it makes it harder for the developer to recognize that the code is buggy. More feedback to the developer allows them to write better code, with fewer bugs.
Your argument could be made in the same way to claim that static typing is bad, because the caller should be calling with the right types of values in the first place.
> But it makes it harder for the developer to recognize that the code is buggy. More feedback to the developer allows them to write better code, with fewer bugs.
But the feedback is unrelated to the bug, the bug here is that the programmer doesn't understand what the word "clamp" means and is trying to use the function in an incorrect way. Randomly throwing an exception on around 50% of intervals doesn't help them understand that, and the other 50% of the time they're still coding wrong and not getting any feedback. I'm not against the clamp function doing whatever if people want it to, it can make coffee and cook pancakes when we call it for all I care. But if it just clamps that is probably better. It isn't a bug if I call clamp and don't get pancakes. It also isn't a bug if I call clamp and it remains silent on the fact that one argument is larger than another one.
Feedback has to be relevant. It'd be like having a type system that blocks an argument that isn't set to a value. If the programmer provides code that has bugs, it'll give them lots of feedback, but the bug and the error won't be related, and it is effectively noise.
So an implicit fallback, but made explicit through good design. I hadn't even thought about this as a principle, since type checking pushes me to avoid anything implicit. Thank you!
This maps poorly to the mathematical concept of a closed interval [a, b], which can be written as the set of x with a ≤ x ≤ b. An interval where a > b is usually a programming error.
To ensure only valid intervals are supported at the type system level, the function could perhaps be redefined as:
function clamp(n: number, i: Interval<number>): number
Of course, you need to deal with the distinction between closed and open intervals. Clamping really only makes sense for closed ones.
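One way this could look in TypeScript (a minimal sketch, using a plain numeric Interval rather than the generic Interval<number>): the min <= max check runs once, when the interval is constructed, so clamp itself needs no validation.

// Minimal sketch: the invariant "min <= max" is established in the only
// place an Interval can be constructed.
class Interval {
  private constructor(readonly min: number, readonly max: number) {}

  static closed(min: number, max: number): Interval {
    if (min > max) throw new RangeError(`invalid interval [${min}, ${max}]`);
    return new Interval(min, max);
  }
}

function clamp(n: number, i: Interval): number {
  // No check needed here: every Interval is valid by construction.
  return Math.min(Math.max(n, i.min), i.max);
}

clamp(5, Interval.closed(0, 3)); // 3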
It maps very well onto the mathematical concept of a closed interval [a, b] where a and b are endpoints of the interval though. You're adding a constraint for no logical reason and it happens to be very hard to represent in a basic type system.
> An interval where a > b is usually a programming error.
If you want it to be, sure. Anything can be a programming error if the library author feels like it. We may as well put all sorts of constraints on clamp, it is probably an error if the caller uses a large number or a negative too. It is still bad design in a theoretical sense - the clamp function throws an error despite there being an obvious non-error return value. It isn't hard to meaningfully clamp 2 between 4 and 3.
Well, if your language has a sufficiently strong type system (namely, dependent types), you can take proofs of some properties as arguments. Example in Lean:
def clamp (value min max : Float) {H : min < max} : Float := ...
Sure, but the author picked TypeScript nonetheless. TypeScript is not a runtime but a mere type checker - JavaScript is the runtime, and a highly dynamic language. This detail somehow got completely lost in the article, but it is IMHO the main reason why such validations aren't bad, and are sometimes even preferred.
The article also skipped over the following related topics:
- When would you wrap errors from lower levels as your own?
- What does "parse don't validate" mean when a TypeScript library gets transpiled to JavaScript?
Nobody would question that, but publishing a JavaScript library means that anyone using plain JavaScript can make use of it. Even though you are never in control of your library's users' toolchains, it's still your responsibility - as library author - to take that difference into account. Even if you transpiled your library from Idris to JavaScript and published it, these validations couldn't be neglected at runtime. A type system is just another model of the world, not a guarantee at runtime.
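To make that concrete, here is a rough sketch of what a published library boundary might do (my example, not the article's): the TypeScript annotation helps your own build, but the runtime check is what protects a caller writing plain JavaScript.

// At the public boundary of a published package, the annotations alone do
// nothing for a plain-JavaScript caller; the runtime check does.
export function clamp(value: number, min: number, max: number): number {
  if (typeof value !== "number" || typeof min !== "number" || typeof max !== "number") {
    throw new TypeError("clamp(value, min, max) expects three numbers");
  }
  return Math.min(Math.max(value, min), max);
}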
In a compiled language, it takes one or two machine instructions to test
assert!(b >= a);
The spelling above is Rust's; the equivalent works in C, C++, and others (in Go, you'd write an explicit if/panic).
Amusingly, nowhere in the original article is it mentioned that the article is only about JavaScript.
Languages should have compile-time strong typing for at least the machine types: integers, floats, characters, strings, and booleans. If user-defined types are handled as an "any" type resolved at run time, performance is OK, because there's enough overhead in dealing with user-defined structures that the run-time check won't kill performance.
(This is why Python needs NumPy to get decent numeric performance.)
Sure, use macros in function bodies. That won't affect the function signature in any meaningful way for the type checker and remains a check at runtime only, doesn't it?
It seems like the point of the article was to not do that though, contrary to my own opinion, and I just wonder why...
Many libraries throw an exception, panic, or silently swap the parameters at runtime.
To detect this at compile time, you would need either min and max to be known at compile time, or a type system that supports value-dependent types. None of the popular languages supports this. (My language, named 'Bau', which is of course not popular, supports value-dependent types to avoid array-bounds checks.)
You don't need to. One if statement to check that is not a problem. The problem occurs when you have a bunch of other ifs as well, checking all kinds of other stuff that a type system would handle for you, like nullability, incorrect types, etc.
Personally I just write JS like a typed language. I follow all the same rules as I would in Java or C# or whatever. It's not a perfect solution and I still don't like JS but it works.
export function clamp(value: number | string, min: number | string, max: number | string): number {
  if (typeof value === 'string' && Number.isNaN(Number(value))) {
    throw new Error('value must be a number or a number-like string');
  }
  if (typeof min === 'string' && Number.isNaN(Number(min))) {
    throw new Error('min must be a number or a number-like string');
  }
  if (typeof max === 'string' && Number.isNaN(Number(max))) {
    throw new Error('max must be a number or a number-like string');
  }
  if (Number(min) > Number(max)) {
    throw new Error('min must be less than or equal to max');
  }
  // Coerce once, after validation, so the arithmetic below sees only numbers.
  return Math.min(Math.max(Number(value), Number(min)), Number(max));
}
> Oh, look, somebody just re-discovered static typing.
If you're going to be smug, at least do it when you're on the right side of the technology. The problem the article describes has nothing to do with the degree of static typing a language might have. You can make narrow, tight, clean interfaces in dynamic languages; you can make sprawling and unfocused ones in statically typed languages.
The problem is one of mindset: the way I'd put it, an insufficient appreciation of the beauty of parsimony. Nothing to do with any specific type system or language.
Yep, I've seen this in Swift, with a dozen overloads for functions and class initializers to support umpteen similar, but different, types as input. Sloppy schema design reveals itself in combinatorial explosions of type conversions.
Oh, look, somebody just re-discovered static typing.