I know nothing about pathology, but in terms of software, I think slower adoption of new tech is what we need, especially when the "new tech" is just a 5% faster JavaScript framework.
By the way, for content creation, the only platform that really favors new creators is TikTok. Whether that leads to higher content quality is left to one's judgement.
> I know nothing about pathology, but in terms of software, I think slower adoption of new tech is what we need, especially when the "new tech" is just a 5% faster JavaScript framework.
I hope that's not the definition people are using when discussing adoption of "new tech".
When it comes to the topic of AI and "new tech adoption", I think about something like the Rust programming language.
I apologize if it chafes the people reading this comment that I'm something of a Rust evangelist, working from the point of view that Rust's very existence is a (large) net positive for programming and for how we think about programming language design.
My fear with AI tools in their current state is that they will slow down innovation in programming languages. Rust gained popularity because it brought things to the table that made it much easier to write safe, performant, and correct software (thanks to strong, expressive, static type checking) than it had been with the old incumbents (in certain domains).
But, if Rust were released today or in the near future, would it take off? If we could, hypothetically, get to a point where an AI tool could spit out C or C++ code, push it through some memory sanitizers, Valgrind, etc., and just iterate with itself until the result was very likely to be free of memory safety bugs, why would we need a new language to fix those things? I guess we wouldn't. And it wouldn't really matter if the code that gets generated is totally inscrutable. But it saddens me to think that we might be nearing the end of human-readable programming language research and design.
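To make the contrast concrete, here's a minimal sketch of the kind of bug I mean (the names are made up for illustration). In C or C++, a use-after-free like this is only caught at runtime, and only if a sanitizer or Valgrind happens to exercise that path; in Rust, the commented-out line below is rejected at compile time (error E0597), before the program ever runs:

    fn main() {
        let message: &str;
        {
            let owned = String::from("hello");
            message = &owned;
            println!("inside the block: {message}");
        } // `owned` is dropped here, so `message` would dangle past this point
        // println!("outside the block: {message}");
        //   ^ uncommenting this is error[E0597]: `owned` does not live long enough
    }

The "iterate against sanitizers" workflow moves that guarantee from a compile-time property of the language to a statistical property of the test runs, which is exactly the trade-off I find sad.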
It will be harder for new languages and frameworks. AI exacerbates the bootstrapping problem.
An interesting example is Perl, which is essentially static at this point (Perl 6 was renamed and never gained traction).
I know from experience running pipelines that those old Perl scripts almost always work, whereas if I come across an old Python 2.x script I will have to go in and make some adjustments. Maybe a library has changed too…
People like new shiny things, though. Maybe the new languages will try to train the AI and release their own models, but that’s a huge lift.
Might be easier than you think. If DeepSeek can train a model cheaply, so can you. Probably more cheaply as the technology and models get better.
People used to be worried that AI performance was going to degenerate if models are trained on AI slop, but it's been found that synthetic data is the bee's knees for coding, reasoning and such, so it may well be that a new language comes with a large amount of synthetic examples which will not just be good for AI training but also for documentation, testing and all that.
I'm also going to argue that Rust is a less AI-friendly language than, say, Go.
GC languages have many benefits that come from 'you don't have to think about memory allocation'. For instance, you can just smack an arbitrary library into a Java program with Maven and not think at all about whether the library or the application is responsible for freeing an object. The global problem of memory allocation is handled by a globally scoped garbage collector.
LLMs are great at superficial/local translation processes (e.g. medium-quality translation of Chinese to English doesn't require constraint solving any more than remembering which of the many indexing schemes is the right one for 'how do I look up this row/column/whatever in pandas'). But fighting with the borrow checker (getting global invariants right) is entirely outside the realm of LLM competence.
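A small sketch of what I mean by global invariants (the Cache type here is made up for illustration): in a GC language the caller never has to think about who owns the returned value, whereas in Rust the borrow ties the caller's hands for the rest of the scope.

    struct Cache {
        entries: Vec<String>,
    }

    impl Cache {
        // Returning a borrow means that, while the caller holds it,
        // nothing may mutate the cache. Getting that right requires
        // reasoning about the surrounding code, not just this function.
        fn first(&self) -> Option<&String> {
            self.entries.first()
        }

        fn push(&mut self, s: String) {
            self.entries.push(s);
        }
    }

    fn main() {
        let mut cache = Cache { entries: Vec::new() };
        cache.push("a".into());      // mutable borrow, fine: it ends here
        let first = cache.first();   // immutable borrow starts here
        // cache.push("b".into());   // uncommenting this is error[E0502]:
        //                           // cannot borrow `cache` as mutable
        //                           // because it is also borrowed as immutable
        println!("{:?}", first);     // prints Some("a")
    }

That's exactly the kind of non-local constraint an LLM tends to get wrong: the fix often isn't at the line the compiler points at, but in how ownership is structured elsewhere.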
All you're talking about there in the end would be another compilation step.
I'm highly bearish on the concept of anything like that ever being possible (and near perfectly reliable) with LLMs, but if it were, then it would make sense as just another processing phase in compilation.
That's not wrong. There is a lot of hype-driven development in the programming world. People are always jumping on the latest web frameworks and such. A little bit more stability is not a bad thing.
That being said, I think that people underestimate how fast LLM technology can evolve. At the moment, lots of training data is needed for LLMs to learn something. This may not always be the case. In 2 to 5 years, it may be possible to train an LLM to be helpful with a new programming language with much less data than is needed today. No reason to assume that the current situation is what things will be like forever. It's not like this technology isn't evolving incredibly fast.
After watching the entire world’s reaction to AI, at this point my conclusion is that hype-driven development is human nature, and we just need to come to terms with that (but you will have to drag me kicking and screaming).
Maybe if you count artificially inflating the hype through massive ad campaigns, marketing campaigns, and shoe-horning AI into every product, then yeah, the world has had a reaction to AI. Mostly it's been meh; things like Apple Intelligence and Office Copilot have largely fallen flat.
If the hype was real none of these AI initiatives would be struggling to make money, but they are.
I don't really see it as different from the artificial web3 hype, the only difference being that LLMs are used for extreme happy-path scenarios.
The problem is that Apple intelligence is currently kinda useless. They rushed it into production in a misguided effort to "stay relevant". It may take a few years but we should eventually get useful personal assistant type AIs.
I would say LLMs are very useful for specific scenarios. They're also getting better. Just takes time to iron out the kinks.
Jokes aside, I find it curious what does and doesn't gain traction in tech. The slowness of IPv6 adoption was already an embarrassment when I learned about it in university… 21 years ago, and therefore before the people currently learning about it in university had been conceived.
What actually took hold? A growing pantheon of software architecture styles and patterns, and enough layers of abstraction to make jokes about Java class names from 2011 (and earlier) seem tame in comparison: https://news.ycombinator.com/item?id=3215736
The way all of us seem to approach code, the certainty of what the best way to write it looks like, the degree to which a lone developer can still build fantastic products and keep up with an entire team… we're less like engineers, more like poets arguing over a preferred form and structure of the words, over which metaphors and similes work best — and all the while, the audience is asking us to rhyme "orange" or "purple".
The slowness to adopt IPv6 is because it's not a great design.
Going from 32 bits to 128 bits is complete overengineering. We will never need 128 bits of network address space as long as we are confined to this solar system, and the resulting addresses are extremely cumbersome to use. (Can you tell someone your IPv6 address over the phone? Can you see it on one monitor and type it into a different computer? Can you remember it for the 10 seconds it takes to walk over to the other terminal?)
48-bit addresses would have been sufficient, and at worst they could have gone with 64-bit addresses. That is already too cumbersome (10 to 13 base-36 digits), but maybe with area-code-like segmentation it could be made manageable. 128 bits is just not workable.
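For what it's worth, a quick back-of-the-envelope on those digit counts (my own arithmetic, nothing from any spec): an n-bit address needs ceil(n / log2(base)) digits in a given base.

    // Digits needed to write an n-bit value in a given base.
    fn digits(bits: u32, base: f64) -> u32 {
        (bits as f64 / base.log2()).ceil() as u32
    }

    fn main() {
        for bits in [48u32, 64, 128] {
            println!(
                "{} bits: {} base-36 digits, {} hex digits",
                bits,
                digits(bits, 36.0),
                digits(bits, 16.0)
            );
        }
    }

That works out to 10 base-36 digits (12 hex) for 48 bits, 13 (16 hex) for 64 bits, and 25 (32 hex) for 128 bits.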
> extremely cumbersome to use. (Can you tell someone your IPv6 address over the phone? Can you see it on one monitor and type it into a different computer? Can you remember it for the 10 seconds it takes to walk over to the other terminal?)
That's your idea of "extremely cumbersome"?
128 bits is exactly as hard as four groups of 32 bits.
Maybe I'm overlooking something, but just looking at the past decade or so, a lot of new technologies and practices have been adopted. I assume most people would call these changes progress. So with this in mind, if in 10 years we're by and large using the same technologies with AI injected into them, I feel that we would be missing something, as this article points out.
It's kind of sad to think that there may never be new technologies like Rust that break out and gain critical traction. I'm hoping I'm wrong.
I guess it makes sense to differentiate technological areas where we want progress at "any possible pace" vs "wait and see pace". I don't know if pathologists or other medical professionals feel the same about their field.
On a related note, are there any techniques for facilitating tech adoption and bringing all users up to speed?
TikTok does not favour new creators; its users do. And only because it's a new generation of consumer for the most part, who want to consume content from their chosen platform. The same thing will happen with alpha.