Hacker News | stabbles's comments

Are grammatical errors and typos fashionable now? Reading this post, it seems the antithesis in the LLM era is not to edit at all, but rather to write down a stream of consciousness to make it "personal".

I've started letting some run-on sentences remain because it feels closer to how humans think and usually write. Letting typos go seems silly, though.

I definitely think it is. It will be glorious. We will focus more on content than on mere aesthetics as people try to signal that they are not LLMs.

I feel like having to signal that you're a human detracts from the content side of things. Proper spelling and grammar, good style, etc. are there to help you convey your ideas more accurately. Resorting to a stream-of-consciousness style of unrefined writing makes it apparent that you're a human, but the downside is that your text is bad.

Style is entirely subjective, and not every text is looking for a refined reader.

Oh no, I have had enough of people with quirky (i.e. cringey) writing on the internet. It started with those who refused to use their shift key and it's quickly devolving into something that makes you shiver when you read it. (Not to mention how easy it is to use a system prompt to make an AI write in whatever style you like.)

Flaw become aesthetic all time. People faked butt bandage follow Sun King fashion. Ugly as sin, still aesthetic.


Kurzgesagt typically does STEM-focused videos... they've got a new channel, "After Dark", which focuses on history and historical figures. Their first one: Kurzgesagt After Dark: The Final Days of Louis XIV - https://youtu.be/bIwX4QuL90k?si=9WLbzKqxo08KCDum&t=564

> And though the operation was done in secret, a new fashion sweeps the court: Bandages wrapped around everyone’s buttocks.


I see loads of LLM articles where the model has been prompted to never capitalise, avoid full stops, pepper in spelling mistakes, etc. It sucks.

When writing letters of recommendation now, I write in a more human tone, with a line of explanation at the start, to avoid sounding like a bot. Not an error in the sense you mean, but an error in tone for a letter of recommendation, certainly.

I don't know but capitalisation seems to have gone down the shitter.

Maybe it is.

Just like hand made items are popular for their imperfections.


An awful lot of stuff in the "hand made" aesthetic is made by machine and in factories too, and I suspect something similar will happen to any popular writing aesthetic that attempts to avoid being automated away.

Personally, I'll just continue to use my own voice. I try to correct spelling and grammar mistakes, and proofread my writing before posting.

It's not perfect, and my writing can at times be idiosyncratic, but it's my voice and it's all I've got left.

But don't be mistaken in thinking that those mistakes make it better; they just make it mine.



I mean, yes? I am more likely to read and trust something that is not written or co-written by AI.

I want real humans giving real human opinions, not AI giving its best guess at the most "rewarding" weighted opinion.


My experience with RISC-V so far is that the chips are not much faster than QEMU emulation. In other words, it's very slow.

I've added it to one of my repos, and yes, it's slower than using emulation.

Particularly for my use case, Go cross-compilation, QEMU and binfmt work really well together.

Still, for some things, it's nice to test on actual hardware.

Here's a workflow so you can see both approaches working: https://github.com/ncruces/wasm2go/blob/main/.github/workflo...


That has been the case so far but is changing this year.

The SpacemiT K3 is faster than QEMU. Much faster chips are expected to be released over the next few months.

I mean, things like the Milk-V Pioneer were already faster, but expensive.

One thing that has been frustrating about RISC-V is that many companies close to releasing decent chips have been bought, and those chips then never appeared (Ventana, Rivos, etc.). That, and US sanctions (e.g. the Sophgo SG2380).


One thing I observed is that RVV code is usually slower in QEMU.

Oftentimes slow is fine, when the work is parallel and the hardware is cheap.

RISC-V microcontrollers are inexpensive but “application” processors will be expensive until volumes increase.

Performance will get “good enough” over the next 2 years. Prices will drop after that.


I should have replied differently.

“Good enough” here was meant to mean good enough to sell more, and therefore to drop prices.

That is already happening. It just needs to happen more. And I think it will. If you don’t find the RISC-V boards of 24 months from now “good enough”, that is ok with me. I just want them to get cheaper.

The other thing that is happening on that front is that microcontrollers are getting more powerful and staying inexpensive. You can get RISC-V microcontrollers today with similar performance to the original Raspberry Pi and with things like WiFi, Bluetooth, and USB. They are crazy cheap and there are many projects for which they are now “good enough”. And, of course, they keep getting better.


That the "good enough" SoCs will be arriving "over the next 2 years" is what the RISC-V advocates have told us for quite a few years now.

Well, part of "good enough" is features. The RVA23 profile was ratified a few months ago and the first chips are appearing now. That brings RISC-V to feature parity with x86-64 and ARM, including things like vector instructions and virtualization. Ubuntu 26.04 is compiled to require RVA23. So, the RISC-V advocates got that part right. Of course, the other side of "good enough" is performance.

The SpacemiT K3 has the multi-core performance of a 2019 MacBook Air and higher AI performance than an M4. That is better multi-core than an RK3588. If it were less expensive, the K3 would already be good enough for many people.

Alibaba has the C930 which is faster than the K3. We will see if it gets released to the rest of us.

Tenstorrent will release a chip in a few months that is twice as fast as the K3.

The recently announced C950 is supposed to be even faster but will be a year or more.

Of course, “good enough” is subjective but my statement was based on the above.

But you are right that there have been some false starts.

The SG2380 was just as fast as K3 and was ready to go two years ago. TSMC refused to manufacture it over US sanctions.

Ventana was about to release a very fast RISC-V chip but Qualcomm bought them.

Rivos was very close to releasing a RISC-V GPU but Meta bought them.

But even without these high-end chips, RISC-V is enjoying great success. It is taking over the microcontroller space. And billions of RISC-V cores are shipping.


which, sadly, isn't the case right now

It is the case for embedded microcontrollers. An ESP32-C series part is about as cheap as you can get a WiFi controller, and it includes one or more RISC-V cores that can run custom software. The Raspberry Pi Pico and Milk-V Duo are both a few dollars and include both ARM and RISC-V cores, with all but the cheapest Duo able to run Linux.

Some of that could be related to the ISA but I'm hoping that it's just the fact that the current implementations aren't mature enough.

The vast majority of the ecosystem seemed to be focused on microcontrollers until very recently, so it'll take time for the application processors to become competitive.


The RISC-V ISA can be fast.

Tenstorrent Ascalon, due later this year, is expected to reach AMD Ryzen 5 speeds. Tenstorrent hopes to achieve Apple Silicon speeds in a few years.

The SpacemiT K3 is about half as fast as Ascalon and available in April. The K3 is 3-4 times faster than the K1 (the previous generation).

This should give you an idea about how fast RISC-V is improving.


That assumes AMD, Intel, ARM, and Apple won't have released new CPUs in a few years; otherwise the gap stays the same as today.

I'd be pretty surprised if Ascalon actually hits Zen 5 perf (I'm guessing more like Zen 2/3 for most real-world workloads). CPU design is really hard, and no one makes a perfect CPU in their first real generation with customers. Tenstorrent has a good team, but even the "simple" things like compilers won't be ready to give them peak performance for a few years.

>I'd be pretty surprised if Ascalon actually hits Zen 5 perf

Certainly not in the Atlantis SoC, due to the older fab node used. Zen 2-3 territory IPC is the expectation, with lower clocks than those parts actually reached.

By the time they have the necessary scale to use the best fabs, they'll be tapping out something newer than the Ascalon that went into Atlantis.

Tenstorrent expects to reach parity with the best x86 and arm chips by 2028.


Same experience here.

At least for SBCs, I've bought a few Orange Pi RV2s and R2s to use as builder nodes, and in some cases they are slower than the same thing running in QEMU with buildx, or just QEMU.


The arrival of the first RVA23 chips, which is expected next month, will change the status quo.

Besides RVA23 compliance, these are dramatically faster than earlier chips, enough for most people's everyday computing needs, i.e. web browsing, video decoding, and such. The K3 gets close to Raspberry Pi 5 per-core performance, but with more cores, better peripherals, and up to 32GB of RAM possible, although unfortunately current RAM prices are no good.

And it'll only get better from there, as other, much faster, RVA23 chips like Tenstorrent Alastor ship later this year.


s/Alastor/Atlantis/g.

Alastor is something else; a core from Tenstorrent that is considerably smaller than Ascalon.


You're glossing over the fact that mathematics uses only one token per variable (`x = ...`), whereas software engineering best practices demand many more tokens per variable for the sake of clarity.

It's also a pretty silly thing to equate difficulty with token count. We all know line counts don't tell you much, and it shows in their own example.

Even if you did have math-like tokenisation, refactoring a thousand lines of "X=..." to "Y=..." isn't a difficult problem, even though it would be at least a thousand tokens. And if coming up with E=mc^2 took a thousand tokens, that would not make the two tasks remotely comparable in difficulty.


The other day someone commented on this site that in the age of agentic coding "maintaining a fork is really not that serious of an endeavor anymore", and that's probably the case. I'm sure continuously rebasing "revert birthday field" can be fully automated.

Then the only thing remaining is convincing a critical mass that development now happens over at `Jeffrey-Sardina/systemd` on GitHub.


IMO, the benefits aren't from getting mass adoption of this fork, but actually the opposite, at least ostensibly, because if it were to become "the" systemd, it would then face scrutiny and potential legal threat. This way, the maintainers can be in compliance, the legislators (if any are paying attention) can be superficially satisfied, while people can still avoid the antipattern. It's the "brown paper bag" speech from The Wire, basically.

At some point people will realize that not having an optional data field might not be worth the effort of indefinitely rebasing a revert and recompiling, since they could just never set the field for their user account.

That's overstating things. The biggest piece of infra is PyPI, to which uv is only an interface. They do distribute Python binaries, but that's not very impressive.

So when Charlie Marsh goes on a podcast saying that the majority of the complications they face in their work are in DevOps, is he also overstating things?

But you know best it seems!


Overstating complexity justifies funding, and attracts attention.


You would think that Finland's unemployment rate (10%+) would influence its ranking, but that's not the case at all.

As it's self-reporting, and it's more about expectations than actual happiness, a Finnish dude only needs to think that life is just incredible compared to what he sees on the other side of the border to self-report a 10 in happiness.

Could also explain Israel

Nordic countries have better safety nets.

I haven't travelled there, but I grew up in Poland and still visit. The US feels very capitalistic to me. I feel the pace is slower in Poland. In the US I feel the need to produce. Might be just me.


This is how I feel as a Canadian. It's just a border between us, we've got issues of our own but on one side life seems much more transactional and individualistic in a somewhat repulsive way. I'm sure it's not unique to them, and I'm sure it's not uniformly pervasive. I rarely feel like a true foreigner while I'm in the country, but there's just this unsettling feeling of distrust coupled with a drive to consume that I don't feel when I'm north of the border.

Well, that's just inherent in the question which asks someone to imagine the best possible life vs. the worst possible life. In a society with lots of room to grow you aren't at the higher rungs. In a society with no progress possible you're at the top easily.

You really need dedicated types for `int64` and something like `final`. Consider:

    class Foo:
        __slots__ = ("a", "b")
        a: int
        b: float
There are multiple issues with Python that prevent optimizations:

* a user can define a subtype `class my_int(int)`, so you cannot optimize the layout of `class Foo`

* the builtin `int` and `float` are big-int-like numbers, so operations on them are branchy and allocating.

And the fact that `Foo` is mutable, and that `id(foo.a)` has to produce a meaningful object identity, complicates things further.
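To make the first issue concrete, here is a small illustrative sketch (the `my_int` name follows the example above; the `tag` attribute is hypothetical): a user-defined `int` subtype satisfies the `a: int` annotation yet carries arbitrary extra state, so the interpreter cannot store `Foo.a` as a raw machine word.

```python
class Foo:
    __slots__ = ("a", "b")
    a: int
    b: float

# A user-defined int subtype can carry extra state and behavior.
class my_int(int):
    def __new__(cls, value, tag=None):
        obj = super().__new__(cls, value)
        obj.tag = tag  # extra attribute a raw int64 slot could not hold
        return obj

foo = Foo()
foo.a = my_int(42, tag="metadata")  # satisfies the `a: int` annotation

assert isinstance(foo.a, int)    # the hint is "honored"...
assert foo.a.tag == "metadata"   # ...but the object is not a plain int
assert foo.a + 1 == 43           # arithmetic still works via the int base
```

Any layout optimization would have to either forbid such subtypes or fall back to boxed objects whenever one appears.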


Maybe, but I quoted the specific part I was replying to. TS has no impact on the runtime performance of JS. Type hints in Python have no impact on the runtime performance of Python (unless you try things like mypyc, etc.; in fact, mypy provides `from mypy_extensions import i64`).

Therefore Python has no use for TS-like superset, because it already has facilities for static analysis with no bearing on runtime, which is what TS provides.
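A quick sketch of that point, assuming stock CPython (no mypyc); the `double` function is made up for illustration:

```python
def double(x: int) -> int:
    return x * 2

# CPython never checks the hints at call time: passing a str "works",
# it just performs sequence repetition instead of arithmetic.
assert double(3) == 6
assert double("ab") == "abab"

# The hints survive only as metadata on the function object,
# which is what static analyzers like mypy inspect.
assert set(double.__annotations__) == {"x", "return"}
```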


Because the Python devs weren't allowed to optimize on types. They are only hints, not contracts. If they became contracts, it would get 5-10x faster. But `const` would be more important than the core types.

What OP means is that they need to:

1) Add TS like language on top of Python in backwards compatible way

2) Introduce frozen/final runtime types

3) Use 1 and 2 to drive runtime optimizations


Still makes no sense. OP demands introduction of different runtime semantics, but this doesn't require adding more language constructs (TS-like superset). Current type hints provide all necessary info on the language level, and it is a matter of implementation to use them or not.

From all the posts it looks like what OP wants is a different language that looks somewhat like Python syntax-wise, so calling for a "backwards-compatible" superset is pointless, because the stuff being demanded would break compatibility by necessity.


That was how the Mojo language started. And then soon after the hype they said that being a superset of Python was no longer the goal. Probably because being a superset of Python is not a guarantee for performance either.

Being a superset would mean all valid Python 3 is valid Python 4. A valuable property for sure, but not what OP suggested. In fact, it is the exact opposite.

The TL;DR: code should be easy to audit, not easy to write for humans.

The rest is AI-fluff:

> This isn't about optimizing for humans. It's about infrastructure

> But the bottleneck was never creation. It was always verification.

> For software, the load-bearing interface isn't actually code. Code is implementation.

> It's not just the Elixir language design that's remarkable, it's the entire ecosystem.

> The 'hard' languages were never hard. They were just waiting for a mind that didn't need movies.


To put it another way: this article isn’t about the AI fluff, it’s about the two sentences at the top the author wrote themselves. ;)

Perhaps we need an AI-to-human transformer to remove the AI fluff?

It really is AI fluff.

Are people starting to write and talk in this manner? I see so many YouTube videos where you can see a person reading an AI-written text. It's one thing if the AI wrote it, but another if the human wrote it in the style of an AI.

As someone pointed out to me, the way an AI writes text can be changed so it is less obvious; it's just that people don't tend to realise that.


Someone had one of those AI videos on in the background and, I can’t explain it, the ordering of the words is like nails on a chalkboard to me. I’m starting to have a visceral physiological response to AI prose that makes it actually painful to listen to.

The video was a biography about some Olympian, and I could tell the prompt included some facts about her wanting to be a tap dancer as a kid, because the video kept going back to that fact constantly. Every few sentences it would reference “that kid who wanted to be a tap dancer”. By the 6th time it brought up she wanted to be a tap dancer I was ready to scream.


Whenever I see a sentence of the form:

"X isn't A, it's (something opposite to A)", I twitch involuntarily.


It's even infecting the highest levels of government:

https://www.pimlicojournal.co.uk/p/mps-are-almost-certainly-...


Man, you are bad at TL;DR-ing. You completely left out the main point the article makes: contrasting the stateful, mutating, object-oriented programming that humans like with the pure functional programming that, according to the author, LLMs presumably thrive in.
