> The compiler is also relatively slow. Would Rust have been worth working with on 30+ year old hardware?
As I understand it, a lot of the slowness of the Rust compiler comes from LLVM, and from how rustc and LLVM interoperate. Rustc generates gigabytes of intermediate representation and hands all of it to LLVM, which runs it through its optimizer. If you skip that work - for example by running cargo check - the compiler is an order of magnitude faster.
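A rough way to see the difference for yourself on any non-trivial crate (a sketch - exact numbers vary a lot by project):

    cargo check    # type checking + borrow checking only; no codegen, no LLVM
    cargo build    # full pipeline: codegen, LLVM optimisation passes, linking

The check step is usually what you want in an edit-compile loop; you only pay the LLVM cost when you actually need a runnable binary.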
If Rust had been invented in the 90s, it couldn't have used LLVM. The language could still have been implemented, and we'd probably have a much faster compiler as a result. But it would have missed out on all of LLVM's benefits too. It would have needed its own backend - more work - and the compiler probably wouldn't have been as good at low-level optimisations. It probably wouldn't have had out-of-the-box support for so many target platforms either. At least, not from day 1.
My favorite framing for this is that Rust front-loads all the pain.
C and C++ are incredibly subtle languages. But you can get a lot of code written before you run into certain footguns in C and C++. This gives those languages a more enjoyable on-ramp for beginners.
In comparison, Rust is a wall. The compiler just won't compile your code at all if you do anything wrong. This makes the act of learning Rust much more painful. But once you've learned Rust, it's a much smoother experience. There are far fewer ways for your programs to surprise you at runtime.
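Here's a toy example of the wall - a minimal sketch that deliberately does not compile, which is exactly the point:

    fn main() {
        let mut items = vec![1, 2, 3];
        let first = &items[0]; // immutable borrow of `items` begins here
        items.push(4);         // rejected: can't mutate `items` while it's borrowed
        println!("{first}");   // the immutable borrow is still alive here
    }

The equivalent C++ compiles happily, and the push may reallocate the buffer out from under the pointer. Rust makes you confront that up front instead of at runtime.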
> The way it should work is that before even writing code, you design a modular acceptance system that runs full suite of tests …
Sometimes. It depends on what you’re working on.
Part of the fun challenge in writing software is that the act of programming can teach you that you’re wrong at every level. The syntax can be wrong. The algorithm you’re implementing can be wrong. The way you’re designing a module can be misguided. And you might be solving the wrong problem entirely! Like, maybe you spend weeks adding a feature to a game and it makes the game less fun! Oops!
Tests formalise beliefs about what you want your code to do, at some level of abstraction. But if those beliefs turn out to be wrong, the tests themselves become a headwind when you try to refactor. You want those early refactorings to be as easy as possible while you're learning a problem space.
Now, some programs don't suffer from this as much. If you're implementing a C compiler or a drop-in replacement for grep, you have some clear acceptance tests that will almost certainly not change through your project's lifecycle.
But not all problems have clear constraints like that. Sometimes you’re inventing a programming language. Or writing a game. Or making a user interface. In my opinion, problems that are fully constrained from the start are some of the least interesting to work on. Where’s the discovery?
The only people I've met who seem to think it's a feud are a few dyed-in-the-wool C++ fans who implicitly hate the idea of programming in anything else. Rust is just a language. It has some strengths and weaknesses, just like every programming language. Some of its strengths are incredibly compelling.
Personally I'm relieved that we're starting to see real competition to the C & C++ duopoly. For a while there, all the new languages were garbage-collected, and paid for their shiny features with poor runtime performance (e.g. Java, C#, Ruby, Python, Lua, Go, etc).
Rust is a fine language. Personally I can't wait to see what comes after it. I'm sure there are even better ways to implement some of Rust's features. I hope someone out there is clever enough to figure them out.
That is a surprising opinion. Rust marketing is entirely based - like in this submission - on comparing its memory safety to C/C++ and saying that C is bad!
Even in its own "memory safety" definition, which is the first result on Google, they criticize C instead of providing a proper definition:
Yeah; generally I find CMake rules much easier to read and modify than autotools and makefiles. With Makefiles there are about 18 different ways to write a rule to compile something, and I find I need to go hunting through a bunch of files to figure out how a particular makefile defines the rule for compiling a particular C file. CMake is much higher level. I can just see all the higher-level targets, and how they're built. Then - orthogonally - I can modify the build system that CMake uses to compile C code. It makes a lot more sense.
But I’d take cargo over any of this stuff. Cargo means I don’t have to think about compiler flags at all.
> I cannot like Rust syntax, sorry. For me the ideal syntax is C/Go, just to be clear what I like.
I'm sorry if this comes across as dismissive, but I find it hard to take complaints like this about syntax seriously. Learning new syntax is really easy. Like, if you're familiar with C & Go, you could probably learn all of Rust's syntax in under an hour. The only surprising part of Rust's syntax is all the weird variants of match expressions.
Rust has some surprising semantics, like how lifetimes work (and when you need to specify them explicitly). That stuff is legitimately difficult. But learning that if statements don't need parentheses is like - seriously, whatever dude. If you want to spend your career never learning new stuff, software isn't for you.
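For what it's worth, here's a hypothetical function that covers most of the surprising bits in one go - if you read C or Go, everything else is familiar:

    fn describe(n: i32) -> &'static str {
        if n < 0 {                   // no parentheses around the condition
            return "negative";
        }
        match n {                    // match: the one genuinely new-looking construct
            0 => "zero",             // each arm is `pattern => expression`
            1..=9 => "single digit", // range pattern
            _ => "big",              // catch-all arm
        }
    }

    fn main() {
        println!("{}", describe(7)); // prints "single digit"
    }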
I picked up Objective-C about 15 years ago. The only thing most of my friends knew about it was that it had "that weird syntax". It took no time at all to adjust. It's just not that hard to type [] characters.
I'm very vocal that I don't like Python syntax, but I wouldn't refuse to write Python because of the syntax. If I had reason to write Python, I would grumble a bit but write the Python.
I’ve found it a joy to use compared to CMake and friends. How does it make it harder to consume something downstream? Seems easy enough to me - just share the source crate.
Are you trying to distribute precompiled code or something like that? I can see how that would be harder - cargo doesn't really support that use case.
One big source of bugs in TS is structural sharing. Like, imagine you have some complex object that needs to be accessed from multiple places. The obvious, high performance way to share that object is to just pass around references wherever you need them. But this is dangerous. It’s easy to later forget that the object is shared, and mutate it in one place without considering the implications for other parts of your code.
I've made this mistake in TS more times than I'd like to admit. It gives rise to some bugs that are very tricky to track down. The obvious ways to avoid this bug are making everything deeply immutable, or cloning instead of sharing. Neither option is well supported by the language, and both can be very expensive from a performance point of view. I don't want to pay that cost when it's not necessary.
Typescript is pretty good. But it’s very normal for a TS program to type check but still contain bugs. In my experience, far fewer bugs slip past the rust compiler.
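As a minimal sketch (hypothetical names) of how Rust surfaces exactly that structural-sharing bug, this doesn't compile - and the error is the feature:

    struct Config { retries: u32 }

    fn main() {
        let mut config = Config { retries: 3 };
        let reader = &config; // some other part of the code shares a reference
        config.retries = 0;   // error[E0506]: cannot assign to `config.retries`
                              // because it is borrowed
        println!("{}", reader.retries);
    }

The same shape in TS typechecks fine and becomes one of those tricky runtime bugs.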
Appreciate it, that makes a lot of sense. I feel like I've been trained to favor immutability so much in every language that I sometimes forget about these things.
Similar. I mostly design my code around something like pipes and lifetimes. The longer something needs to live, the closer to the start of the program it gets created. If I need to mutate it, I take care that the actual mutation happens in one place, so I can distinguish read access from write access. For everything else, I clone and update. It may not be efficient, and you need to track memory usage, but the logic is far simpler.
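A minimal sketch of that shape in Rust (hypothetical names, standard library only): long-lived state is created up front, writes funnel through a single function, and everything else reads or clones:

    #[derive(Clone)]
    struct State { total: u64 }

    fn apply(state: &mut State, delta: u64) { // the single write point
        state.total += delta;
    }

    fn report(state: &State) -> String { // read-only access everywhere else
        format!("total = {}", state.total)
    }

    fn main() {
        let mut state = State { total: 0 }; // created at the start, lives longest
        apply(&mut state, 5);
        let snapshot = state.clone();       // clone rather than share
        println!("{}", report(&snapshot));
    }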
I bought a bunch of games for console over the years that I can't play any more.
I have about a dozen games on the Switch. In another console generation, Nintendo will make all my existing Switch games unplayable again. I feel like you don't really buy console games. You rent them for one console generation.
I mean, I can't tell what's worse - that Nintendo has the gall to try and sell me the same game for the Switch that I already bought at retail on the Wii several years ago, or that I can't play a lot of my old Wii games at all any more.
But every year I end up picking up more and more games on Steam. So many games. I have hundreds, and so do most of my friends. And all of those games keep running on every PC I own.
That's the value proposition of a Steam box. It ships with hundreds of games that I already own and already enjoy. Fancy playing BioShock again? Sure. Factorio? Yeah, hit me. Dota? Cyberpunk? Terraria? Stardew Valley? Let's go.
Once burned, twice shy. It's going to take a few more generations to see how long they actually maintain that compatibility going forward.
I suspect consoles will move to ARM chips at some point. When they do, will Sony and Nintendo bother making a Rosetta-style translation layer so the games they're selling now keep working? I doubt it. We'll see.
- HTML rendering - which is insanely complex to do efficiently for arbitrary web apps.
- Video conferencing software
- A graphics engine. Used for rendering web content, canvas2d, WebGL, WebGPU and video encoding & decoding for a bunch of formats. It also has a bunch of backends (e.g. CPU, Metal, Vulkan, etc)
- JS, and every feature ever added to JS.
- WASM compiler, optimizer & runtime
- Memory manager, process isolation per tab, and all the plumbing to make that work.
- The Inspector - which contains a debugger and most of an IDE for JS, WASM, CSS and HTML.
- So much interop. Like Chromecast support, HTTP/1, HTTP/2, QUIC, WebSockets, WebTransport, WebRTC, the JavaScript remote debugging protocol, support for lots of pre-Unicode text encodings, DoH, WebDriver, and on and on.
- Extension support
- Gamepad drivers, Web Bluetooth, Web Serial, MIDI support
What am I missing? Probably lots of stuff. I have no idea how much each of these components contributes to browser bloat. But all up? It's a lot. Chrome is bigger than most complete operating systems from a decade or two ago. Browser vendors have to pick between fast, simple and fully featured. Chrome sacrifices simplicity every time.