I think Rust has hit critical mass. It's now basically the default choice when you want something to perform well but also be reasonably secure. uv in the Python ecosystem is a good example.
I'm not personally a fan of Java, but if I were implementing a compiler, I'd pick a language with a GC. There's pretty much no downside to a GC in that context, and it gives you more flexibility when working with graph data structures.
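To make that concrete, here is a minimal sketch (made-up types, not anyone's actual compiler) of the usual Rust workaround: a shared or back-referencing IR tends to become an index-based arena, because directly linked nodes fight the borrow checker, whereas with a GC you would simply point nodes at each other.

    // Sketch: a tiny expression graph in an index-based arena, a common way
    // to avoid fighting the borrow checker over shared references.
    // With a GC, nodes could just hold direct references to one another.

    #[derive(Debug, Clone, Copy)]
    struct NodeId(usize);

    #[derive(Debug)]
    enum Expr {
        Num(i64),
        Add(NodeId, NodeId), // children referenced by index, not by pointer
    }

    #[derive(Default)]
    struct Arena {
        nodes: Vec<Expr>,
    }

    impl Arena {
        fn alloc(&mut self, e: Expr) -> NodeId {
            self.nodes.push(e);
            NodeId(self.nodes.len() - 1)
        }

        fn eval(&self, id: NodeId) -> i64 {
            match &self.nodes[id.0] {
                Expr::Num(n) => *n,
                Expr::Add(a, b) => self.eval(*a) + self.eval(*b),
            }
        }
    }

    fn main() {
        let mut arena = Arena::default();
        let one = arena.alloc(Expr::Num(1));
        let two = arena.alloc(Expr::Num(2));
        let sum = arena.alloc(Expr::Add(one, two));
        println!("1 + 2 = {}", arena.eval(sum)); // prints: 1 + 2 = 3
    }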
If 'building a programming language' means writing an interpreter or VM, then I can see the attraction of Rust for that case. But writing interpreters and VMs is like 0.0001% of the programming that gets done in the world.
There is no reason I would care about borrow checking when implementing a compiler, and besides all the general tooling, Java also has things like ANTLR and MPS, and naturally Graal is a good playground for compiler backend tooling.
However, in general I would rather look into OCaml, Haskell, F#, or Scala.
I wouldn't be surprised if that was closer to the truth. A heck of a lot of boring software runs on the JVM. That said, it's a slightly different niche from command line tools.
Current LLMs are not good at writing in any language you actually understand, unless you do so much of the work that you might as well have written the whole program yourself.
We should make calculators like this for kids to learn on: every so often they make mistakes that you will only spot if you could have done the arithmetic yourself and are just saving time. That is where AI code is at right now.
This is exactly why I don't trust LLMs (and therefore why I don't use them). When dealing with something I know about I can see the many mistakes they make - I would have to be a complete fool to trust them to do better on subjects I don't know about.
That narrative is still popular with LLMs themselves. If you ask an LLM whether it can code Rust, it will tell you that it can but not very well.
They're good at web languages, Python, and C/C++. As far as I can tell, Rust works if you're already good at Rust and can catch its screwups and strange architecture choices quickly.
Maybe I'm doing it wrong (using a variety of models on GitHub Copilot), but on complex tasks I often find that they give me code that doesn't quite compile (often due to lifetime errors, sometimes other issues).
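For a concrete (and purely hypothetical) example of the kind of lifetime error I mean, it's usually something like returning a reference into a value that gets dropped at the end of the function:

    // Hypothetical first-pass code. rustc rejects it:
    //
    //     fn longest_line(text: &str) -> &str {
    //         let upper = text.to_uppercase(); // local String
    //         upper.lines().max_by_key(|l| l.len()).unwrap()
    //         // error[E0515]: cannot return value referencing local variable `upper`
    //     }
    //
    // The straightforward fix is to return an owned String instead:

    fn longest_line(text: &str) -> String {
        let upper = text.to_uppercase();
        upper
            .lines()
            .max_by_key(|l| l.len())
            .unwrap_or_default()
            .to_string()
    }

    fn main() {
        println!("{}", longest_line("short\na much longer line"));
    }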
Try agents like Claude Code. My experience was that the initial code was conceptually correct but had some type errors on the first pass. It then iterated on compile errors about 6 times, tweaking the code to resolve the issues. Then it compiled and ran correctly.
This was about 500 lines of working Rust in about 10 minutes, approximately 25x my pace at writing Rust. (I'm a bit of a beginner.)
I feel so safe when my Rust code compiles; it feels like the program will run forever. I'm not sure what you mean by "agentic runtimes," but if they offer the same safety standards as Rust, I wouldn't mind using them.
What you're sharing is not Rust-specific; it's the same for npm and PyPI packages.
Rust gives you native binaries + fearless concurrency + memory safety, and AI can help you hit those targets very fast. That's why Rust is winning among all the languages: every piece of software needs to be fast, secure, and able to run forever.
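A rough sketch of what "fearless concurrency" means in practice (nothing project-specific, just the standard library): shared mutable state has to go through something like Arc<Mutex<_>>, and if you forget it, the compiler refuses to share the data across threads at all.

    use std::sync::{Arc, Mutex};
    use std::thread;

    // Minimal sketch: four threads push into a shared Vec behind Arc<Mutex<_>>.
    // Sharing a plain `&mut Vec<_>` or an `Rc` across threads would be rejected
    // at compile time, so this class of data races never reaches runtime.
    fn main() {
        let totals = Arc::new(Mutex::new(Vec::new()));

        let handles: Vec<_> = (0..4)
            .map(|i| {
                let totals = Arc::clone(&totals);
                thread::spawn(move || {
                    // Each thread must take the lock before mutating.
                    totals.lock().unwrap().push(i * 10);
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }

        println!("{:?}", totals.lock().unwrap()); // e.g. [0, 10, 20, 30] in some order
    }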