Using TypeScript for the entire stack feels like a superpower. The type system is incredible. V8 is fast. The frameworks are phenomenal (Next.js, Material UI, etc). The ecosystem is enormous, with packages for everything. The unified codebases save gobs of duplicate code (e.g. ferrying data betwixt client/server). I'm not surprised that such an expressive system can play Sudoku!
For me, it's the opposite. The type system is decent, but its generics can get extremely out of hand, it's not sound, and I run into weird type errors with libraries more often than not.
Having no integer types other than BigInt (ok, this isn't something TypeScript could just implement) is another big one for me.
That you can just do `as unknown as T` is an endless source of pain and sometimes it doesn't help that JS just does whatever it wants with type coercion. I've seen React libraries with number inputs that actually returned string values in their callbacks, but you wouldn't notice this until you tried doing addition and ended up with concatenation. Have fun finding out where your number turned into a string.
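To make that concrete, here's a minimal sketch of that failure mode; the buggy library is simulated, and `simulateBuggyLibrary` and its `Props` are made up for illustration:

```ts
// A hypothetical stand-in for a library whose typings promise a number
// but whose implementation forwards the raw input string.
type Props = { onChange: (value: number) => void };

function simulateBuggyLibrary(props: Props) {
  props.onChange('5' as unknown as number); // the lie happens inside the library
}

let total = 0;
simulateBuggyLibrary({
  onChange: (value) => {
    total = total + value; // typechecks fine, but 0 + "5" concatenates to "05"
  },
});
console.log(total); // "05": the number became a string, far from the cause
```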
The number of times I've read `... does not exist on type undefined` reaches trauma-inducing levels.
TypeScript is as good as it can get for being a superset of JS. But it's not a language I have fun writing stuff in (and I even fear it on the backend). It has its place, and it's definitely a decent language, but I would choose something else for frontend if I could, and wouldn't use it on the backend at all. I somehow don't trust it. I know people write amazing software with it, but YMMV I guess.
This comes up in every one of these threads and I always wonder: do you actually experience problems with soundness in your regular coding? (Aside from `as unknown`, which as others have noted just means you need a linter to stop the bad practices.)
It feels like such a theoretical issue to me, and I've yet to see anyone cite an example of where it came up in production code. The complaint comes off as being more a general sense of ickiness than a practical concern.
Soundness is a constraint more than a feature. Having it limits what is possible in the language, forcing you into a smaller set of available solutions.
So for me it's not about running into unsound types regularly, but about how much complexity is possible and needs to be carved away to get at a working idea because of it. In TS I spend a relatively large amount of time thinking about and tweaking the minute mechanics of the types as I code. By comparison, in OCaml (or more relevantly ReScript) I just declare some straightforward type constructors up front and everything else is inferred as I go. When the logic is complete I can go back and formalize the signatures.
Because of the unsoundness (I think? I'm not a type systems expert) TS's inference isn't as good, and you lose this practical distinction between type and logic. And more subtly and subjectively, you lose the temporal distinction. Nothing comes first: you have to work it all out at once, solving the type problems as you construct the logic, holding both models as you work through the details.
yes, all the time. It's more of an issue of no runtime type safety, which makes it a poor choice for many backend projects. There are workarounds, but it's silly when there are many better alternatives.
In this case, not really. TypeScript can't be sound because there is zero runtime type safety in JS. That you are able to do `as unknown as T` makes TypeScript unsound, but it's also an escape hatch often needed to interact with JavaScript's dynamic typing.
It's never needed, it's just often convenient for something quick and dirty. You can always write a guard to accomplish the same thing—roll your own runtime safety. If you want to avoid doing it manually there's Zod. It's not that much different than writing a binding for a C library in another language in that you're manually enforcing the constraints of your app at the boundaries.
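For illustration, a minimal sketch of such a hand-rolled guard at an API boundary (`Point` and `isPoint` are invented names):

```ts
// A user-defined type guard: the manual alternative to `as unknown as T`.
interface Point {
  x: number;
  y: number;
}

function isPoint(value: unknown): value is Point {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as { x?: unknown }).x === 'number' &&
    typeof (value as { y?: unknown }).y === 'number'
  );
}

const data: unknown = JSON.parse('{"x":1,"y":2}');
if (isPoint(data)) {
  console.log(data.x + data.y); // narrowed to Point; no blind assertion needed
} else {
  throw new Error('expected a Point');
}
```

Zod essentially derives this kind of check (plus the static type, via z.infer) from a single schema declaration, which is why it gets recommended so often.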
You're blaming TypeScript for self-inflicted wounds.
Don't blame the type system that you banished (with `as unknown as T`) for not catching you; or for some React library having bugs, e.g. an incorrect interface; for not defining types and Pikachu-facing when types are `undefined`. These traumas are fixed by using TypeScript, not ignoring it.
These issues don't exist in languages that aren't built on a marsh.
More specifically though, I feel like the way javascript libraries with types work is often painful to use and that's why people use TS's escape hatches, whereas the same thing doesn't happen in languages where everything is built on types from the get go.
The same friction is true for example of using C libraries from other languages.
The rest of the world is an equally muddy marsh. C++: static_cast; C: void pointers and unions; Java/C#: casting back from (object) or IAbstractWidget.
If anything, typescript has the most expressive, accurate and powerful type system of the lot, letting you be a lot more precise in your generics and then get the exact type back in your callback, rather than some abstract IWidget that needs to be cast back to the more concrete class. The structural typing also makes it much easier to be lax and let the type engine deduce correctness, rather than manually casting things around.
C is famously unsafe. But in Java/C#, you do have runtime type safety. You can't just cast something to something else and call it a day. In the worst case, you'll get an exception telling you that this does not work. In TypeScript, the "cast" succeeds, but then 200 lines later, you get some weird runtime error like "does not exist on type undefined" which doesn't tell you at all what the source of the error is.
In TypeScript, a variable can have a runtime type entirely different from its declared type; that's just not true for many other languages.
I'm not sure about the CLR, but the JVM has no type safety whatsoever. Java the language does, but all of that goes away once you are in the JVM bytecode.
This is just partially true (or completely untrue in the mathematical sense since your statement is "_no_ type safety _whatsoever_" :P ). The whole purpose of the bytecode verifier is to ensure that the bytecode accesses objects according to their known type and this is enforced at classloading time. I think you meant type erasure which is related - generic types do not exist on the bytecode level, their type parameters are treated as Object. This does not violate the bytecode verifier's static guarantees of memory safety (since those Objects will be cast and the cast will be verified at runtime), but indeed, it is not a 100% mapping of the Java types - nevertheless, it is still a mapping and it is typed, just on a reduced subset.
"don't exist in languages that aren't built on a marsh"
Sure. Last time I checked, JavaScript is the language that actually powers the web. If you can get a language that isn't built on a marsh along with all the libraries to run the web, I'll switch in a second.
In other words, the criticism is simply irrelevant. If it works, it works. We don't talk about technologies that don't exist.
Not run on the web, "to run the web". Maybe someday WASM will be complete enough to actually run the web, but JavaScript is what we have right now and it's done a pretty okay job so far.
> These issues don't exist in languages that aren't built on a marsh.
Unsafe casts exist in almost any GCed strongly typed language. You can do unsafe things even in Rust, if you want to. The author of this code deliberately chose to circumvent the language's limitations and forgo its guarantees. We have been doing that since undefined behaviour in C; how is that TypeScript's fault?
Totally! I really wonder what these libraries people are complaining about that have such bad type definitions. In my experience TS definitions on the average NPM package are generally fairly decent.
Well, the reality of the situation still is that there are libraries with incorrect or low quality typings that blow up in your face. Me using TypeScript will not make that library better, but this problem is still the daily reality of using TypeScript. It's not the fault of TS, but still a pain you encounter when working with it.
I haven't worked with a language where you can statically cast invalid types that easily since C, a language not exactly famously known for its safety.
There's a reason `as unknown as T` exists, and it's JavaScript's highly dynamic nature and absence of runtime types (ignoring classes and primitives). It's an escape hatch that is needed sometimes. Sure, within your own codebase everything will be fine if you forbid its use, but every library call is an API boundary that you potentially have to check types for. That's just not great DX.
> I haven't worked with a language where you can statically cast invalid types that easily since C, a language not exactly famously known for its safety.
But it’s not the same situation at all, is it? If you make an invalid cast in C, your program will crash or behave in bizarre ways.
If you make an invalid cast in TS, that doesn’t affect the JS code at all. The results will be consistent and perfectly well-defined. (It probably won’t do what you wanted, of course, but you can’t have everything.)
TS is much more like Java than it is like C (but with a much nicer type system than either).
Meh, in Java (afaik) you'll get exceptions when you cast incorrectly. In JS and C, it just gets allowed and you get some runtime error later on and have to make your way back to find your incorrect cast.
I tend to agree with you, but for problems like this one:
> That you can just do `as unknown as T` is an endless source of pain
You should be using strict type-checking/linting rules somewhere in your pipeline to make these illegal (or at least carefully scrutinised and documented).
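For instance (a sketch, not the only approach): since tsc itself can't ban double assertions, ESLint's core no-restricted-syntax rule can match the double-assertion AST shape, assuming the typescript-eslint parser:

```js
// eslint.config.js: flag `x as unknown as T` style double assertions
// using ESLint's core no-restricted-syntax rule.
import tsParser from '@typescript-eslint/parser';

export default [
  {
    files: ['**/*.ts', '**/*.tsx'],
    languageOptions: { parser: tsParser },
    rules: {
      'no-restricted-syntax': [
        'error',
        {
          selector: 'TSAsExpression > TSAsExpression',
          message:
            'Double assertion (as unknown as T); write a type guard instead.',
        },
      ],
    },
  },
];
```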
1. If someone is willing to do `as unknown as T`, they're probably also just as willing to do `// @ts-ignore`.
2. It's not only your own code, it's the libraries you use as well. Typings can often be slightly incorrect and you have to work around that occasionally.
Popular libraries tend to get type hygiene issues ironed out rather quickly for 90% of the API surface area. For this reason, I find that library selection from npm is much easier these days. The heuristic is simple:
1) has types? 2) has (large) download count? 3) has docs?
After that it's generally smooth sailing. Of course this doesn't at all apply to the application codebase itself, but one of the parent/sibling remarks emphasized “madness” and I seek to smooth that over.
For #1, this is literally what PRs are for. Someone might be willing to do it, but it should be stopped before merge. If it isn’t, you have bigger problems to solve than type coercion.
For #2, if it’s open source you’re welcome to change the source or its typings.
You can also turn off all warnings in C and C++ (and C#?). That's not a flaw in the language; it's a flaw in the codebases and programmers that turn them off.
ESLint rules that require type information (not just stripping types) are prohibitively expensive for larger code bases.
As far as I know, there isn't any kind of tsconfig rule to disallow this (please correct me if I'm missing something here!). So unless you're using tools I don't know about, this is kind of a mandatory last bastion of "any".
You can disallow any, enable the strictest possible null/undefined checks (including noUncheckedIndexedAccess).
And there's also TypeScript's own assertion check, which normally rejects assertions between types that don't sufficiently overlap.
But `as unknown as MyType` is not preventable by means of tsc, as far as I know, unless there's an option I don't know of to disable this kind of assertion (or even all assertions).
How large is too large and what counts as prohibitive? We're using lints with types on over a million lines of TypeScript and the lints are instant inside of the editors. They take a good 10 minutes to run in CI across the whole project, but that's shorter than the tests which are running in parallel.
Good point, I was talking about similarly sized code bases, yes.
Because of the hefty CI runtime increase, I myself opposed adding it. We have lots of parallel branches that people work on and many code reviews every day, where the CI just needs to run from a clean slate, so to speak.
But in my case, most of the current long CI run penalty on the frontend comes from tsc, not ESLint.
I might look into it again.
The project already has all kinds of optimizations (caching to speed up full CI runs of common isolated parts of the monorepo).
And for TS, project references + incremental builds are used for development; tsc in a monorepo would be miserable without them.
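For anyone who hasn't used that setup, a sketch of what it looks like per package (the `../shared` path is hypothetical):

```jsonc
// tsconfig.json of one package in the monorepo, built with `tsc --build`
{
  "compilerOptions": {
    "composite": true,   // required for projects referenced by others
    "incremental": true  // reuse .tsbuildinfo between runs
  },
  "references": [
    { "path": "../shared" } // only rebuilt when its inputs change
  ]
}
```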
I think it depends on your code and dependencies. At work, the time between making a change in our codebase (which is much smaller than a million LOC) to having ESLint update itself in the IDE can take 5+ seconds, depending on what you changed. But we also use some pretty over-engineered, generic-heavy dependencies across our entire codebase.
None of these are performance concerns. Modern JS engines are plenty fast for most of my use cases.
It irks me that I can't trust it to be an integer within a given range. Especially with Number, I often have the sensation that the type system just doesn't have my back. Sure, I can be careful and make sure it's always an integer, and I've got 53 bits of integer precision, which is plenty. But I've shot myself in the foot too many times, and I just don't trust it to be an integer even when I know it is.
As for BigInt, I default to it by now and I've not found my performance noticeably worse. But it irks me that I can get a number that's out of range of an actual int32 or int64, especially when doing databases. Will I get to that point? Probably not, but it's a potential error waiting to be overlooked that could be so easily solved if JS had int32/int64 data types.
Sound currency arithmetic is a lot harder when you have to constantly watch out for the accidental introduction of a fractional part that the type system can't warn you about, and that can never be safe with IEEE 754 floats. (This doesn't just bite in and near finance: go use floating-point math to compute sales tax and you'll find out soon enough what I mean.)
Bigints solve that problem, but can't be natively represented by JSON, so there tends to be a lot of resistance to their use.
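A small sketch of both points, using integer cents (the tax rate and truncation policy here are arbitrary):

```ts
const priceCents = 7080n; // $70.80 as integer cents: exact, no float drift
const taxCents = (priceCents * 875n) / 10000n; // 8.75% tax: 619n, truncating division

// JSON.stringify({ taxCents }); // TypeError: Do not know how to serialize a BigInt

// Workaround: serialize bigints as strings; this is a convention, not part of
// the JSON spec, so the receiver must know to parse them back.
const json = JSON.stringify({ taxCents }, (_key, value) =>
  typeof value === 'bigint' ? value.toString() : value
); // '{"taxCents":"619"}'
```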
Not really. In my parent comment I tried to make clear that it's not a limitation for me in real-world scenarios I encounter, but still something I feel like being a potential class of problems that could be so easily solved.
When I've really needed dedicated integer types of a specific size, e.g. for encoding/decoding some binary data, so far I've been successful using something like Uint8Array.
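For illustration, a tiny sketch of why that route works: DataView gives genuine fixed-width integer semantics, with out-of-range writes wrapping the way int32 does:

```ts
const view = new DataView(new ArrayBuffer(8));
view.setInt32(0, 2 ** 31 - 1); // max int32, stored exactly
view.setInt32(4, 2 ** 31);     // out of range: wraps around
console.log(view.getInt32(0)); // 2147483647
console.log(view.getInt32(4)); // -2147483648
```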
> Especially with Number, I often have the sensation that the type system just doesn't have my back.
That's sounding dangerously close to dependent types, which are awesome but barely exist in any programming languages, let alone mainstream general purpose programming languages.
You could do this with a branded type. The downside will be ergonomics, since you can't safely use e.g. the normal arithmetic operators on these restricted integer types.
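A minimal sketch of the branded-type pattern (`Int32` and `toInt32` are invented names; the brand exists only at compile time):

```ts
type Int32 = number & { readonly __brand: 'Int32' };

function toInt32(n: number): Int32 {
  if (!Number.isInteger(n) || n < -(2 ** 31) || n >= 2 ** 31) {
    throw new RangeError(`${n} is not a 32-bit integer`);
  }
  return n as Int32; // the one sanctioned assertion, guarded by a runtime check
}

const a = toInt32(5);
const b = toInt32(7);
const sum = a + b;         // inferred as plain number: arithmetic loses the brand
// const c: Int32 = a + b; // type error; must go back through toInt32
const c = toInt32(a + b);  // the ergonomic tax mentioned above
```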
> As for BigInt, I default to it by now and I've not found my performance noticeably worse. But it irks me that I can get a number that's out of range of an actual int32 or int64, especially when doing databases. Will I get to that point? Probably not, but it's a potential error waiting to be overlooked that could be so easily solved if JS had int32/int64 data types.
If your numbers can get out of the range of 32 or 64 bits then representing them as int32 or int64 will not solve your problems, it will just give you other problems instead ;)
If you want integers in JS/TS I think using bigint is a great option. The performance cost is completely negligible, the literal syntax is concise, and plenty of other languages (Python, etc.) have gotten away with using arbitrary precision bignums for their integers without any trouble. One could even do `type Int = bigint` to make it clear in code that the "big" part is not why the type is used.
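Something like this, say (the `Int` alias is purely a readability convention, not a distinct type):

```ts
type Int = bigint; // signal intent: "big" isn't why we use it

const count: Int = 42n;
const doubled: Int = count * 2n; // exact integer arithmetic at any magnitude
// const mixed = count + 1;      // type error: can't mix bigint and number
```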
While TypeScript seems to be a nice language, its ecosystem is the JS ecosystem, and that is madness.
Major versions of some common library breaking backwards compatibility released every week mean you need to run as fast as you can just to stay in place.
Public opinion can't be relied upon. The most popular ORM library only recently added support for native JOINs, and this wasn't common knowledge (I almost used it before it had them!). The most decent ORM, as chosen by some coworkers very experienced with the ecosystem, still doesn't support subqueries (well, there's an escape hatch for writing them in raw SQL...). Some marketer-turned-programmer created hundreds of useless packages (like a separate package per ANSI color) that all require each other, and they are popular enough that if you run npm ls in your real-world project you will find that you depend on them.
Having professionally used cargo, pip, even cabal, npm feels like the eternal September of open source packages.
Most of the madness that you're describing there isn't inherent in the JavaScript ecosystem, it's more to do with undisciplined development practices that might be more likely to be enabled by JavaScript's flexibility but are by no means required in order to participate.
Don't use libraries that aren't stable. Aggressively trim dependencies. Lock versions and upgrade intentionally. Ideally, use a company registry to cache what you actually want to be using.
Every ecosystem has its problems, and JS+NPM's are largely that it's too good at making everything too easy, leading to an abundance of naive developers building in a naive way.
On the whole I'll take that over the unnecessary barriers in other ecosystems (don't get me started on pip...), but it definitely requires some discipline to navigate safely.
Can you recommend me basic libraries for full stack dev that don't suffer from this? At least logging, ORM, web request handling, authentication, sessions, bundling (edit: and middleware, because apparently that's a separate library).
In every other language I work in there are 1-2 libraries that cover all of this (except bundling which is only relevant to js) and don't require me to step on the versioning treadmill. If I had the same for JS, I'd be much happier writing typescript.
Edit: for example, this week I had to downgrade a dependency (middy) from v5 to v4 all across our services, because Jest doesn't support ESM well enough (I think) and v5 dropped support for all other module systems. It took hours of fighting to find the right combination of deleting the lock file, deleting various node_modules dirs, and running npm i that actually replaced the installed v5 with v4.
You can get lucky and reach a nirvana state where all your dependencies function well in a new project, but 6 months later it's a disaster: ah, you need to upgrade Node; but ah, your transpiler requires the older version of Node; but ah, semantic versioning was not followed by your type-definition addendum library and now there were auto-updated breaking changes; ah, your project only worked with a locked package file, and if you re-install any package the wrong way, everything breaks in incomprehensible ways!
I know my way around it though, so yay big bucks and quick deployment of greenfield projects
I know this isn’t realistic for many many scenarios, but if you can help it there is a sweet spot where you dedicate ~30 minutes to merge weekly dependabot updates and you don’t run into this problem.
I had this thought with a personal project but I got lost in a nightmare of configuration between typescript, node, the browser, my bundler etc.
Maybe it’s a better time with a framework like Next, which I assume comes with the client / server code sharing part preconfigured, bun looked promising as well to simplify the development setup. But I ended up switching to another language for my backend and I feel pretty good about it.
I agree about TypeScript as a language but I’m very burnt out on the various JS frameworks. They’re all so bulky and heavy. I like feeling as though I’m close to the end product and the average React codebase is a pile of abstractions trying to push you away from the underlying APIs.
I agree with you. If I develop for the browser, I'd like to use browser APIs. If those browser APIs are too complex for me to use without abstractions, that's another problem in itself that has to be addressed separately.
Depending on the # of lines of code and how many people actively use it, I decide between:
* forking it and using the fork as a dependency
* copying the parts I need from it
* or just using it as a dependency
Either way, the existence of these dependencies is a boon. Even if you don't want to depend on one directly, you can use the existing work as a guide and implement something yourself.
I would have agreed with you once upon a time, but I'm building a project in Laravel right now. You want superpowers? Do a week's worth of fullstack JS in an afternoon with Laravel.
Only when those types can actually be used by runtimes, to catch up to the various JVM and CLR implementations.
There is no more juice left to squeeze in JavaScript, as it stands, beyond heroic efforts.
I'd rather stay with JVM and CLR languages, and only reach for the Node ecosystem when SaaS products leave me no choice in regards to SDK support.
If TypeScript is never to leave the role of being a linter and a downlevel translator, eventually we have to ask ourselves when is Typescript done, instead of being an intellectual exercise for the sake of it.
> we have to ask ourselves when is Typescript done, instead of being an intellectual exercise for the sake of it.
You’re doing yourself an extreme disservice if this is all you perceive TypeScript as. It’s brought some semblance of order out of the Wild West that is JavaScript, which, like it or not, powers the vast majority of the modern web. Not only that, it’s also introduced a whole bunch of developers to different ideas about what a type system is capable of.
Generic type erasure is less of an issue in practice, though, because the ecosystem is generally compiled from typed code, and hence the compile-time guarantees reduce the dangers of erasure. This is unfortunately not true in TypeScript, where you encounter plain JS all the time (sometimes TypeScript wrappers of dubious quality), causing more havoc. So while _theoretically_ type erasure could be considered to have similar problems, in _practice_ it is much more manageable in Java. I guess if the whole JS ecosystem were TypeScript-only it would be less of an issue there as well, but right now it can be messy.
One more addition, there is a subtle but very important difference between how TypeScript's "erasure" works compared to Java's.
In the case of Java, an explicit cast is emitted in the bytecode, which upon execution checks the compatibility of the actual runtime type with the target type during runtime. Yes, this makes it a runtime error compared to a static bytecode verifier/compiler error, but the behavior is well defined and the error does not propagate further from the original mistake.
In comparison, Typescript does not emit "verification code" at the casting site ensuring for example that all asserted fields exist on the runtime object at that point. The result of this is that the type mismatch will be only evident at the point where for example a missing field is accessed - which can be very far from the original mistake.
If you wish, you can consider type issues caused by Java's erasure as runtime, but _defined behavior_, while in TypeScript it is kind-of undefined behavior that can lead to various error symptoms not known in advance.
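A short sketch of the TypeScript half of that comparison (the `User` shape is invented): the assertion below compiles away entirely, so nothing is verified at the assertion site.

```ts
interface User {
  name: string;
  address: { city: string };
}

const raw: unknown = JSON.parse('{"name":"Ada"}'); // no address field
const user = raw as User; // erased at runtime: no check is emitted here

// ...much later, far from the original mistake:
console.log(user.address.city); // TypeError: Cannot read properties of undefined
```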
TypeScript is overly complicated, and for what? Compile-time "safety". It's better than plain JavaScript, but whenever I pull in a library as a dependency and peek inside the TypeScript<>JavaScript interface glue, I always find horrors beyond my comprehension.
Compile error messages in a classical typed language: "Error: Object of type 'StructA' cannot be assigned to variable of type 'StructB'"
Compile error messages in TypeScript when you use a library like React: "Error: Cannot reconcile <5 pages of arcane gibberish> with <5 pages of different arcane gibberish>"
… the typing file? That's literally the only thing that's different, and if the original codebase was written in TS then it's literally the types they defined.
Well either the typing glue or the library itself if it's written in TypeScript. My point was: the types are too damn complicated. Way too complicated for what they achieve. Almost always.
It's crazy what you can achieve with typescript types, what about inferring sql query results types from a raw sql query string: https://github.com/nikeee/sequelts
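For a taste of the machinery involved (a toy sketch, nowhere near what sequelts actually does), template literal types can pick a query apart at the type level:

```ts
// Strip leading/trailing spaces from a string literal type.
type Trim<S extends string> = S extends ` ${infer R}`
  ? Trim<R>
  : S extends `${infer R} `
    ? Trim<R>
    : S;

// Split a comma-separated column list into a union of column names.
type SplitCols<S extends string> = S extends `${infer Head},${infer Rest}`
  ? Trim<Head> | SplitCols<Rest>
  : Trim<S>;

// Extract the selected columns from a SELECT statement.
type Columns<Q extends string> = Q extends `SELECT ${infer Cols} FROM ${string}`
  ? SplitCols<Cols>
  : never;

type T = Columns<'SELECT id, name FROM users'>; // "id" | "name"
```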
(completely serious) I think soon TS is going to need a "type debugger" where you can set breakpoints and step through tsc as it propagates your type info through generics and conditionals and such
Otherwise it's just nightmarish to try to debug ultra complex types
Learning the more advanced features of typescript is one of the worst things that's happened to my productivity. I keep wasting hours making a hyper specific type when in reality `{ [K: string]: string }` probably would have worked just fine.
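For example, something like this hypothetical pair, where the clever mapped type took hours and the index signature would have done:

```ts
// Hours of fiddling with key remapping and template literal keys...
type CssVars<K extends string> = { [P in K as `--${P}`]: string };
declare const fancy: CssVars<'primary' | 'accent'>; // { "--primary": string; "--accent": string }

// ...when a plain index signature probably would have worked just fine:
type Simple = { [key: string]: string };
```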
If you can capture validity of the game state in the type system, a sufficiently advanced IDE with autocomplete/intellisense could actually already be a naive solver, since they will tend to treat contextual autocomplete as a form of constraint satisfiability problem, which is a sufficient approach to solve easy Sudokus.
In other words, throw this into Webstorm, put your cursor in an empty cell that currently has only one possible answer, and hit ctrl-space, and I would expect it to automatically fill in the answer for you.
This is a good illustration of one of the perils of TypeScript, in a funny way. It’s easy to get lost writing a complex verifier for your model using the fantastic type system… and then wake up the next morning and remember that at some point you’ll need to actually implement the logic.
And you’ll probably give up and end it with `return result as unknown as ComplexVerifierType<A, B, C>` anyway.
Indeed, the link gives an example of such an infinitely recursive type:
    type Foo<T extends "true", B> = { "true": Foo<T, Foo<T, B>> }[T];
    let f: Foo<"true", {}> = null!;
As for how it's handled: it yields the error "Type instantiation is excessively deep and possibly infinite. [2589]". I'm not sure what the max depth is, though.
> The goal is that we can play Sudoku in TypeScript while the type checker complains about mistakes. This is not about implementing a Sudoku solver.
That goal could be extended to implement a Sudoku solver in the type system. One such solver was described at https://ocamlpro.com/blog/2017_04_01_ezsudoku/ for OCaml. TLDR: Your compiler can report the solution as an error message if your language supports refutation and has enough type machinery to accurately model Sudoku.
Having said that, I don’t think a Sudoku solver implementation embedded in a type system is practical (maybe fun for educational purposes though!)