Hacker News | littlestymaar's comments

A long time ago in France, the mainstream view among computer people was that neither code nor compute was what mattered when dealing with computers; what matters is information and how you process it in a sensible way (hence the French name for computer science, informatique, and the French word for computer, “ordinateur”, literally: that which sets things in order).

As a result, computer science students were taught a lot (too much for most people's taste, it seems) about data modeling and not much about code itself, which was viewed as mundane and uninteresting, until US hacker culture finally took over in the late 2000s.

Turns out that the French were just right too early, like with the Minitel.


"Computer science is no more about computers than astronomy is about telescopes." -Dijkstra

There's a default Unix tool for that: https://www.man7.org/linux/man-pages/man1/yes.1.html

(Above 99% accuracy)


This argument is a bit nonsensical given that Rust has a try operator (`?`) while having errors that are much more similar to Go's than they are to Zig's.
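For illustration, here's a minimal Rust sketch (the function names are my own invention) of how `?` propagates a Go-style "value or error" result:

```rust
use std::num::ParseIntError;

// `?` returns early with the error, much like Go's `if err != nil { return err }`,
// but without the boilerplate at every call site.
fn double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // on failure, the ParseIntError is returned here
    Ok(n * 2)
}

fn main() {
    assert_eq!(double("21"), Ok(42));
    assert!(double("oops").is_err());
}
```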

> a lot of infrastructure quietly assumes GitHub is always available

Which is really baffling when talking about a service that has at least weekly hiccups even when it's not a complete outage.

There are almost 20 outages listed on HN over the past two months: https://news.ycombinator.com/from?site=githubstatus.com So much for “always available”.


Part of it is probably historical momentum. GitHub started as “just git hosting,” so a lot of tooling gradually grew around it over the years — Actions, package registries, webhooks, release automation, etc. Once teams start wiring all those pieces together, replacing or decoupling them becomes surprisingly hard, even if everyone knows it’s a single point of failure.

In many companies I worked for, there were a bunch of infrastructure astronauts who made everything very complicated in the name of zero downtime and sold it to management as “downtime would kill our credibility and our business”, and then you have billion-dollar companies everyone relies on (GitHub, Cloudflare) that have repeated downtime, yet it doesn't seem to affect their business in any way.

It's a multitude of factors but basically they can act like that because they are dominant on the market.

The classic "nobody ever gets fired for buying IBM".

If you pick something else and there's an issue, people will complain that your choice was wrong and that you should have gone with the biggest player.

Even if you provide metrics showing your solution's downtime being 1% of the big player.

Something like Cloudflare is so big and ubiquitous that, when there's downtime, even your grandma is aware of it because it's on the news. So nobody will put the blame on the person who chose Cloudflare.

Even if people decide to go back (I had a few customers ask us to migrate to other solutions or to build some kind of failover after the last Cloudflare incidents), it costs so much to find solutions that can replace it at the same service level and to do the migration that, in the end, they prefer to eat the cost of the downtime.

Meanwhile, if you're a regular player in a very competitive market, yes, every downtime will result in lost income, customers leaving... which can hurt quite a lot when you don't have hundreds of thousands of customers.


Businesses are incommensurate.

GitHub is a distributed version control storage hub with additional add-on features. If peeps can't work around a git server/hub being down, don't know to have independent reproducible builds or integrations, and aren't using project software wildly better than GitHub's, there are issues. And for how much money? A few hundred per dev per year? Forget total revenue, the billions, the entire thing is a pile of 'suck it up, buttercup' with ToS to match.

In contrast, I’ve been working for a private company selling patient-touching healthcare solutions and we all would have committed seppuku with outages like this. Yeah, zero downtime or as close to it as possible even if it means fixing MS bugs before they do. Fines, deaths, and public embarrassment were potential results of downtime.

All investments become smart or dumb depending on context. If management agrees that downtime would be lethal my prejudice would be to believe them since they know the contracts and sales perspective. If ‘they crashed that one time’ stops all sales, the 0% revenue makes being 30% faster than those astronauts irrelevant.


To be fair - it SUPER does. Being down frequently makes your competition look better.

Of course, once you have the momentum it doesn't matter nearly as much, at least for a while. If it happens too much though, people will start looking for alternatives.

The key thing to remember is that momentum is hard to redirect, but with enough force (reasons), it can be.


Few companies (and none of the companies I worked for) are “momentum”-based. The typical company grows because incoming cash flow allows it to hire more salespeople and develop new features that attract new kinds of customers.

If people tolerate 10 monthly GitHub failures, they can most likely tolerate one hypothetical hour of downtime from one physical server failure for some random SaaS product you're selling to them.


The reality is that consumers don't really care about downtime unless it's truly frequent.

Exactly.

And the frequency they can tolerate is surprisingly high, given that we're talking about the 20th or so outage of 2026 for GitHub. (See: https://news.ycombinator.com/from?site=githubstatus.com)


Publicly defending pedophilia arguably isn't “right”, but if you restrict Stallman's positions to software licensing, then I'd agree with you.

The only instance in which he's ever engaged in "publicly defending pedophilia" was in remarks he made 20 years ago about the harmlessness of "voluntary" sex with minors. He has since retracted those statements and publicly espoused a different and more informed opinion. There's certainly a large amount of very low-quality journalism engaging in bad-faith interpretations of things he's said in other contexts, though these aren't serious characterizations, only hallucinations manufactured by professional shysters to fulfill unspoken agendas. At this point, dredging it up and holding it against him in perpetuity is a bit wrongheaded.

Of course restrict it to his opinions on software licensing. I think that is the sort of thing people mean when they say he was right.

Lots of people made similar claims. Most notably, the National Council for Civil Liberties (now called Liberty), the UK's leading civil/human rights organisation, made submissions to parliament claiming that sex with minors was not always harmful, had a pro-paedophile organisation as an affiliate, and gave it a representative on the gay rights subcommittee: https://www.thetimes.com/travel/destinations/uk-travel/scotl... The people involved were unaffected, some reaching fairly high political positions.

A lot of other people whose works are respected have actually had sex with minors. Eric Gill and Oscar Wilde for example.

None of that makes Stallman's opinions defensible in my opinion. On the other hand I am happy to ignore his opinions on that topic and still value his opinions on other things.


The entire point of my post is that it's no longer his opinion.

> Through personal conversations in recent years, I've learned to understand how sex with a child can harm per psychologically. This changed my mind about the matter: I think adults should not do that.

https://stallman.org/archives/2019-sep-dec.html#14_September...


Both your point and my point are true.

Obviously I am glad he has abandoned his opinions.

I do think it is terrible that the politicians, activists, teachers etc. who held such opinions in the past did not suffer severe career consequences even if they subsequently changed their opinions. I think they cannot be trusted in those areas. However, Stallman is not in such an area.


Tell that to my spouse who, at age 14, was given his contact card by him directly.

Wow, I'd be thrilled if I met Stallman and got his contact card at age 14!

I'm not following - are you implying that handing a contact card to someone is a sexual pass? Or is it only considered sexual when the recipient is underage?

I wish at 14 I had people of such integrity around me.

He was wrong about refusing to make GCC more modular for fear that it would be used to insert proprietary plugins, which is why LLVM is behind every new language or dev tool now and GCC is only relevant because the kernel still depends on it (for now).

His opinions on software have been largely out of touch for the past 20 years. People might yearn for his ideals, but it's just not the world we live in.


> His opinions on software have been largely out of touch for the past 20 years

I said “software licensing”, you're talking about “software”.


I keep hearing this.

Please quote Stallman's quote where he defends pedophilia.

Not a quote of someone else saying that Stallman defends pedophilia, but a quote by Stallman himself.


Not necessarily, as exhibited by the massive success of artificial data.

Could you elaborate?

EDIT: probably not relevant, after re-re-reading the comment in question.

Presumably littlestymaar is talking about all the LLM-generated output that's publicly available on the Internet (in various qualities but significant quantity) and there for the scraping.


As far as we know, most AI labs have used a majority of artificial data since 2023.

I had a discussion about a year ago with a researcher at Kyutai, and they told me their lab was spending an order of magnitude more compute on artificial data generation than they spent on training proper. I can't tell if that ratio applies to the industry as a whole, but artificial datasets are the cornerstone of modern AI training.


How does it work? How do they prevent model collapse? What purpose does a majority of artificial data serve?

How do they measure success?

Edit: I asked ChatGPT and it thinks "success" means frontier models being distilled into smaller models with equal reasoning power, or more focused models for specific tasks, and it also claims the web has basically been scraped already and by necessity new sources are needed, of which synthetic data is one. It seems like the basis of a scifi dystopia to me, a hungry LLM looking for new sources of data... "feed me more data! I must be fed! Roar"

Edit 2: for some things I see a clear path, ChatGPT mentions autogenerating coding or math problems for which the solution can be automatically verified, so that you can hone the logical skills of the model at large scale.
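To make that concrete, here's a toy Rust sketch of the "verifiable problem" idea (everything here is my own invention, not any lab's actual pipeline): generate a problem, compute the ground-truth answer, and grade a model's output automatically.

```rust
// Toy sketch: generate an arithmetic problem whose answer is known exactly,
// so a model's output can be graded with no human in the loop.
fn make_problem(a: u32, b: u32) -> (String, u32) {
    (format!("What is {a} + {b}?"), a + b)
}

// Grade by exact string match against the ground truth: cheap and fully automatic.
fn verify(expected: u32, model_output: &str) -> bool {
    model_output.trim() == expected.to_string()
}

fn main() {
    let (prompt, answer) = make_problem(17, 25);
    assert_eq!(prompt, "What is 17 + 25?");
    assert!(verify(answer, "42"));
    assert!(!verify(answer, "41"));
}
```

At scale, the same pattern extends to code (run the tests) or math (check the proof), which is presumably why those domains are singled out.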


I find this very surprising, do you have any papers on the kinds of techniques that they use?

Most LLMs sucked at Rust at the beginning because there's much less Rust code available on the broad internet.

I suspect the providers started training specifically on it because it appeared proportionally much more often in actual LLM usage (obviously much less than mainstream languages like Python or JavaScript, but I wouldn't be surprised if there were more LLM queries about Rust than about C, for demographic reasons).

Nowadays even small Qwens are decent at it in one-shot prompts, or at least much better than GPT-4 was.


That matches actual Rust usage: I've worked with Rust since 2017 on multiple projects and the number of times I've used lifetime annotations has been very limited.

It's actually rare to have to borrow something and keep the borrow in another object (which is where lifetimes come in); most of the time (95% at least, I'd say) you borrow something and then drop the borrow, or move the thing.
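A quick Rust sketch of the two cases (names are made up for illustration): the transient borrow needs no annotation thanks to lifetime elision, while storing a borrow inside another object does.

```rust
// Common case: borrow, use the borrow, drop it. Elision handles the lifetime,
// so no annotation appears anywhere.
fn longest_len(items: &[String]) -> usize {
    items.iter().map(|s| s.len()).max().unwrap_or(0)
}

// Rarer case: keeping a borrow inside another object forces an explicit lifetime,
// because the compiler must know the wrapper can't outlive the borrowed data.
struct Wrapper<'a> {
    first: &'a str,
}

fn main() {
    let v = vec!["hello".to_string(), "world!".to_string()];
    assert_eq!(longest_len(&v), 6); // borrow ends right after the call

    let s = String::from("kept around");
    let w = Wrapper { first: &s }; // the borrow lives inside `w`
    assert_eq!(w.first, "kept around");
}
```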


Yes, I basically do everything the lazy/thoughtless way for a first pass. I find in 99% of cases that's already performant enough and matches the intended data flow, but if you ever want to optimize it, you can. The same is also true with the types: you can bash out a prototype very quickly and then tighten them up later, using Clippy to easily find all the shortcuts you took.

> This is behavior of an astroturfer, that's so what

Engaging in an argument with people accusing them of astroturfing? Absolutely not.

