Are grammatical errors and typos fashionable now? Reading this post, it seems the antithesis in the LLM era is not to edit at all, but rather to write down a stream of consciousness to make it "personal".
I feel like having to signal that you're a human detracts from the content side of things. Proper spelling and grammar, good style, etc., are there to help you convey your ideas more accurately. Resorting to a stream-of-consciousness style of unrefined writing makes it apparent that you're a human, but the downside is that your text is bad.
Oh no, I have had enough of people with quirky (i.e. cringey) writing on the internet. It started with those who refused to use their shift key and it's quickly devolving into something that makes you shiver when you read it. (Not to mention how easy it is to use a system prompt to make an AI write in whatever style you like.)
Kurzgesagt typically does STEM-focused videos... they've got a new channel, "After Dark", which focuses on history and historical figures. Their first one: Kurzgesagt After Dark, The Final Days of Louis XIV - https://youtu.be/bIwX4QuL90k?si=9WLbzKqxo08KCDum&t=564
> And though the operation was done in secret, a new fashion sweeps the court: Bandages wrapped around everyone’s buttocks.
When writing letters of recommendation now, I write in a more human tone, with a line of explanation at the start, to avoid sounding like a bot. Not an error in the sense you mean, but an error in tone for a letter of recommendation, certainly.
An awful lot of stuff in the "hand made" aesthetic is made by machine and factory too, and I suspect a similar thing will happen to any popular writing aesthetic that attempts to avoid being automated away.
Personally, I'll just continue to use my own voice. I try to correct spelling and grammar mistakes, and proof-read my writing before posting.
It's not perfect, and my writing can at times be idiosyncratic, but it's my voice and it's all I've got left.
But don't be mistaken in thinking that those mistakes make it better; they just make it mine.
That has been the case so far but is changing this year.
The SpacemiT K3 is faster than QEMU. Much faster chips are expected to release over the next few months.
I mean, things like the Milk-V Pioneer were already faster, but expensive.
One thing that has been frustrating about RISC-V is that many companies close to releasing decent chips have been bought, and then those chips never appeared (Ventana, Rivos, etc.). That and US sanctions (e.g. the Sophgo SG2380).
“Good enough” here was meant to mean good enough to sell more, and therefore to drop prices.
That is already happening. It just needs to happen more. And I think it will. If you don’t find the RISC-V boards of 24 months from now “good enough”, that is ok with me. I just want them to get cheaper.
The other thing that is happening on that front is that microcontrollers are getting more powerful and staying inexpensive. You can get RISC-V microcontrollers today with similar performance to the original Raspberry Pi and with things like WiFi, Bluetooth, and USB. They are crazy cheap and there are many projects for which they are now “good enough”. And, of course, they keep getting better.
Well, part of “good enough” is features. The RVA23 profile was ratified a few months ago and the first chips are appearing now. That brings RISC-V to feature parity with x86-64 and ARM, including things like vector instructions and virtualization. Ubuntu 26.04 is compiled to require RVA23. So, the RISC-V advocates got that part right. Of course, the other side of “good enough” is performance.
The SpacemiT K3 has the multi-core performance of a 2019 MacBook Air and higher AI performance than an M4. That is better multi-core than an RK3588. If it were less expensive, the K3 would already be good enough for many people.
Alibaba has the C930 which is faster than the K3. We will see if it gets released to the rest of us.
Tenstorrent will release a chip in a few months that is twice as fast as the K3.
The recently announced C950 is supposed to be even faster but will be a year or more.
Of course, “good enough” is subjective but my statement was based on the above.
But you are right that there have been some false starts.
The SG2380 was just as fast as the K3 and was ready to go two years ago. TSMC refused to manufacture it over US sanctions.
Ventana was about to release a very fast RISC-V chip but Qualcomm bought them.
Rivos was very close to releasing a RISC-V GPU but Meta bought them.
But even without these high-end chips, RISC-V is enjoying great success. It is taking over the microcontroller space. And billions of RISC-V cores are shipping.
It is the case for embedded microcontrollers. An ESP32-C series chip is about as cheap as you can get for a WiFi controller, and it includes one or more RISC-V cores that can run custom software. The Raspberry Pi Pico and Milk-V Duo are both a few dollars and include both ARM and RISC-V cores, with all but the cheapest Duo able to run Linux.
Some of that could be related to the ISA, but I'm hoping it's just that the current implementations aren't mature yet.
The vast majority of the ecosystem seemed to be focused on microcontrollers until very recently, so it'll take time for the application processors to be competitive.
I'd be pretty surprised if Ascalon actually hits Zen 5 perf (I'm guessing more like Zen 2/3 for most real-world workloads). CPU design is really hard, and no one makes a perfect CPU in their first real generation with customers. Tenstorrent has a good team, but even "simple" things like compilers won't be ready to give them peak performance for a few years.
At least for SBCs, I’ve bought a few Orange Pi RV2s and R2s to use as builder nodes, and in some cases they are slower than the same thing running in QEMU with buildx, or just plain QEMU.
The arrival of the first RVA23 chips, which is expected next month, will change the status quo.
Besides RVA23 compliance, these are dramatically faster than earlier chips, enough for most people's everyday computing needs, i.e., web browsing, video decoding, and such. The K3 got close to RPi 5 per-core performance, but with more cores, better peripherals, and up to 32GB of RAM possible, although unfortunately current RAM prices are no good.
And it'll only get better from there, as other, much faster, RVA23 chips like Tenstorrent Alastor ship later this year.
You're glossing over the fact that mathematics uses only one token per variable `x = ...`, whereas software engineering best practices demand an excessive number of tokens per variable for clarity.
It's also a pretty silly thing to equate difficulty with token count. We all know line counts don't tell you much, and it shows in their own example.
Even if you did have math-like tokenisation, refactoring a thousand lines of "X=..." to "Y=..." isn't a difficult problem, even though it would be at least a thousand tokens. And even if coming up with E=mc^2 took a thousand tokens, that would not make the two tasks remotely comparable in difficulty.
The other day someone commented on this site that in the age of agentic coding "maintaining a fork is really not that serious of an endeavor anymore," and that's probably the case. I'm sure continuously rebasing "revert birthday field" can be fully automated.
Then the only thing remaining is convincing a critical mass that development now happens over at `Jeffrey-Sardina/systemd` on GitHub.
IMO, the benefit isn't mass adoption of this fork but, at least ostensibly, the opposite: if it were to become "the" systemd, it would face scrutiny and potential legal threat. This way, the maintainers can be in compliance, the legislators (if any are paying attention) can be superficially satisfied, and people can still avoid the antipattern. It's the "brown paper bag" speech from The Wire, basically.
At some point people will realize that removing an optional data field might not be worth the effort of indefinitely rebasing a revert and recompiling, since they could just leave the field unset for their user account by doing nothing.
That's overstating things. The biggest piece of infra is PyPI, to which uv is only an interface. They do distribute Python binaries, but that's not very impressive.
So when Charlie Marsh goes on a podcast saying that the majority of the complications they face in their work is in DevOps, he's also overstating things?
As it's self-reporting, and it's more about expectations than actual happiness, a Finnish dude only needs to think that life is just incredible compared to what he sees on the other side of the border to self-report a 10 in happiness.
I haven't travelled there but I grew up in Poland and still visit. US feels very capitalistic to me. I feel the pace is slower in Poland. In US I feel the need to produce. Might be just me.
This is how I feel as a Canadian. It's just a border between us, we've got issues of our own but on one side life seems much more transactional and individualistic in a somewhat repulsive way. I'm sure it's not unique to them, and I'm sure it's not uniformly pervasive. I rarely feel like a true foreigner while I'm in the country, but there's just this unsettling feeling of distrust coupled with a drive to consume that I don't feel when I'm north of the border.
Well, that's just inherent in the question which asks someone to imagine the best possible life vs. the worst possible life. In a society with lots of room to grow you aren't at the higher rungs. In a society with no progress possible you're at the top easily.
Maybe, but I quoted the specific part I was replying to. TS has no impact on the runtime performance of JS. Type hints in Python have no impact on the runtime performance of Python (unless you try things like mypyc, etc.; actually, mypy provides `from mypy_extensions import i64`).
Therefore Python has no use for TS-like superset, because it already has facilities for static analysis with no bearing on runtime, which is what TS provides.
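A minimal sketch of that point in plain CPython: annotations are stored as metadata and are never checked or used for optimization at runtime, which is exactly the TS-like "no bearing on runtime" property.

```python
def add(x: int, y: int) -> int:
    # CPython never enforces these hints; they exist for static
    # analyzers (mypy, pyright) and are stored as plain metadata.
    return x + y

# "Wrong" argument types still run fine at runtime.
print(add("a", "b"))        # ab

# The hints are just entries in __annotations__.
print(add.__annotations__)
```

Running mypy over the same file would flag `add("a", "b")` as an error, but the interpreter itself happily concatenates the strings.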
Because the Python devs weren't allowed to optimize on types. They are only hints, not contracts. If they became contracts, it would get 5-10x faster. But `const` would be more important than core types.
Still makes no sense. OP demands introduction of different runtime semantics, but this doesn't require adding more language constructs (TS-like superset). Current type hints provide all necessary info on the language level, and it is a matter of implementation to use them or not.
From all posts it looks like what OP wants is a different language that looks somewhat like Python syntax-wise, so calling for "backwards-compatible" superset is pointless, because stuff that is being demanded would break compatibility by necessity.
That is how the Mojo language started. And then, soon after the hype, they said that being a superset of Python was no longer the goal. Probably because being a superset of Python is not a guarantee of performance either.
Being a superset would mean all valid Python 3 is valid Python 4. A valuable property for sure, but not what OP suggested. In fact, it is the exact opposite.
Are people starting to write and talk in this manner? I see so many YouTube videos where you can see a person reading AI-written text. It's one thing if the AI wrote it, but another if the human wrote it in the style of an AI.
As someone pointed out to me, the way an AI writes text can be changed so it is less obvious; it's just that people don't tend to realise that.
Someone had one of those AI videos on in the background and, I can’t explain it, the ordering of the words is like nails on a chalkboard to me. I’m starting to have a visceral physiological response to AI prose that makes it actually painful to listen to.
The video was a biography about some Olympian, and I could tell the prompt included some facts about her wanting to be a tap dancer as a kid, because the video kept going back to that fact constantly. Every few sentences it would reference “that kid who wanted to be a tap dancer”. By the 6th time it brought up she wanted to be a tap dancer I was ready to scream.
Man, you are bad at TL;DR-ing. You completely left out the main point the article makes: comparing the stateful, mutating object-oriented programming that humans like with the pure functional programming that, according to the author, LLMs presumably thrive in.