dns_snek's comments | Hacker News

Your theory is baseless given that YouTube claimed that decisions weren't automated:

> The platform claimed its "initial actions" (could be either the first takedown or appeal denial, or both) were not the result of automation.


YouTube frequently claims this and is frequently caught lying. (Oh, you really watched this one-hour video and reached your decision in an email sent 96 seconds after the appeal was submitted? Yeah, okay...)

They'll silently fix the edge case in the OP and never admit it was any kind of algorithmic (OR human) failure.


I'm aware that there's a chance that Google is lying; I'm just pointing out that their comment doesn't make any sense if they believe that Google deserves the benefit of the doubt.

> That's not error data, that's (one level of) a stack trace.

They're not talking about the stack trace, but about the common case where the error is not helpful without additional information, for example, a JSON parsing library that wants to report the position (line number) in the string where the error appears.

There's no way of doing that in Zig; the best you can do is return a "ParseError" and build your own, non-standard diagnostic facilities to report detailed information through output arguments.
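
To make that concrete, here's a rough sketch of what the hand-rolled diagnostics pattern tends to look like (the ParseError/Diagnostics names and fields here are made up for illustration): the error set stays a bare tag, and any detail travels through a caller-supplied out-parameter.

    const ParseError = error{ UnexpectedToken, UnexpectedEof };

    // Hypothetical diagnostics struct; every library invents its own variant.
    const Diagnostics = struct {
        line: usize = 0,
        column: usize = 0,
        message: []const u8 = "",
    };

    fn parse(input: []const u8, diag: ?*Diagnostics) ParseError!void {
        _ = input; // a real parser would consume this
        // On failure, fill in the caller-provided diagnostics (if any),
        // then return the bare error value, which carries no payload itself.
        if (diag) |d| {
            d.* = .{ .line = 3, .column = 17, .message = "expected ':' after object key" };
        }
        return error.UnexpectedToken;
    }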


Another way to look at this example is that, for the parser, this is not an error. The parser is doing its job correctly, providing an accurate interpretation of its input, and for the parser this is qualitatively different from something that prevents it from doing its job (say, running out of memory).

At the next level up, though, there might be code that expects to be able to read a JSON config file at a certain location, and if it fails, it’s reasonable to report which file it tried to read, the line number, and what the error was.

Sure, but that's a different level with different considerations. The JSON parser shouldn't care about things like ‘files’ with ‘locations’; maybe it's running on some little esp8266 device that doesn't have such things.

I don't follow. Because there's a possibility that someone somewhere might create a bad, overly generic error set if they were allowed to stuff details into the payload when those details should be reflected in the error "type", it's a good idea to make the vast majority of error reporting bad and overly generic by eliminating error payloads entirely?

Agreed, this is probably my biggest ongoing issue with Zig. I really enjoy it overall but this is a really big sticking point.

I find it really amusing that we have a language that has built its brand around "only one obvious way to do things", "reducing the amount one must remember", and passing allocators around so that callers can control the most suitable memory allocation strategy.

And yet in this language we supposedly can't have error payloads, because not every error reporting strategy is suitable for every environment due to memory constraints. So we must rely on every library implementing its own, slightly different version of the diagnostic pattern, something that should really be codified as a language construct where the caller decides which allocator to use for error payloads (if any).

Instead we must hope that library authors are experienced and curious enough to have gone out of their way to learn this pattern, because it isn't mentioned in any official documentation, has no supporting language constructs, and isn't standardized in any way.

There must be an argument against this (rather obvious) observation, but I'm not aware of it?
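
For what it's worth, the caller side of that hand-rolled pattern ends up looking roughly like this today (a sketch reusing the hypothetical parse/Diagnostics names from the earlier comment); the complaint is essentially that nothing like this is supported by the language or standardized anywhere:

    const std = @import("std");

    fn loadConfig(path: []const u8, source: []const u8) !void {
        var diag: Diagnostics = .{};
        parse(source, &diag) catch |err| {
            // The error value only says *what* went wrong; the where/why
            // has to come from the side-channel struct.
            std.log.err("{s}:{d}:{d}: {s} ({s})", .{
                path, diag.line, diag.column, diag.message, @errorName(err),
            });
            return err;
        };
    }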


Genuine question: how would error set unioning work with payloads? error.WriterFailed (might be getting the exact name wrong) is returned from many different writers, whether writing to a statically allocated array, writing to a socket, or writing to a file. Each error would have a very different payload, so how would you disambiguate between the different payloads with a global error type? The way I see it, either you have error sets or payloads, but not both.

I'm also wondering what payload people want. There's already an error return trace (similar to but different from a normal stack trace) that captures how the error propagates up to the point where it's handled, so it shows you exactly where the initial point was.
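
For reference, error set merging itself is just the || operator, and the same named error can land in many merged sets, which is where the ambiguity comes from. A minimal illustration with made-up names:

    const FileError = error{ WriteFailed, AccessDenied };
    const SocketError = error{ WriteFailed, ConnectionReset };

    // The merged set contains a single WriteFailed value, even though it can
    // originate from very different writers. If WriteFailed carried a payload,
    // the two sources would want incompatible payload types.
    const AnyWriteError = FileError || SocketError;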

Payments providers engage in censorship and moral policing, e.g. https://www.theguardian.com/world/2025/jul/29/mastercard-vis...

That's a motte-and-bailey fallacy. Nobody said that they aren't useful; the argument is that they can't reason [1]. The world is full of useful tools that can't reason or think in any capacity.

[1] That does not mean that they can never produce text which describes a valid reasoning process; it means that they can't do so reliably. Sometimes their output can be genius and other times you're left questioning whether they even have the reasoning skills of a 1st grader.


I don't agree that LLMs can't reason reliably. If you give them a simple reasoning question, they can generally make a decent attempt at coming up with a solution. Complete howlers are rare from cutting-edge models. (If you disagree, give an example!)

Humans sometimes make mistakes in reasoning, too; sometimes they come up with conclusions that leave me completely bewildered (like somehow reasoning that the Earth is flat).

I think we can all agree that humans are significantly better and more consistently good at reasoning than even the best LLM models, but the argument that LLMs cannot reliably reason doesn't seem to match the evidence.


No, that's decidedly not what is happening here.

One is saying "I've seen an LLM spectacularly fail at basic reasoning enough times to know that LLMs don't have a general ability to think" (but they can sometimes reproduce the appearance of doing so).

The other is trying to generalize "I've seen LLMs produce convincing thought processes therefore LLMs have the general ability to think" (and not just occasionally reproduce the appearance of doing so).

And indeed, only one of these is a valid generalization.


When we say "think" in this context, do we just mean generalize? LLMs clearly generalize (you can give one a problem that is not exactly in its training data and it can solve it), but perhaps not to the extent a human can. But then we're talking about degrees. If it was able to generalize at a higher level of abstraction, maybe more people would regard it as "thinking".

I meant it in the same way the previous commenter did:

> Having seen LLMs so many times produce incoherent, nonsensical and invalid chains of reasoning... LLMs are little more than RNGs. They are the tea leaves and you read whatever you want into them.

Of course LLMs are capable of generating solutions that aren't in their training data sets, but they don't arrive at those solutions through any sort of rigorous reasoning. This means that while their solutions can be impressive at times, they're not reliable: they go down wrong paths that they can never get out of, and they become less reliable the more autonomy they're given.


It's rather seldom that humans arrive at solutions through rigorous reasoning. The word "think" doesn't mean "rigorous reasoning" in everyday language. I'm sure 99% of human decisions are pattern matching on past experience.

Even when mathematicians do in fact do rigorous reasoning, they spend years "training" first, to gather experiences to pattern match from.


I have been on a crusade now for about a year to get people to share chats where SOTA LLMs have failed spectacularly to produce coherent, good information. Anything with heavy hallucinations and outright bad information.

So far, all I have gotten is data that is outside the knowledge cutoff (this is by far the most common) and technically-wrong-information fails (Hawsmer House instead of Hosmer House).

I thought maybe I had hit on something with the recent BBC study about not trusting LLM output, but they used second-shelf/older mid-tier models to run their tests. Top LLMs correctly answered their test prompts.

I'm still holding out for one of those totally off the rails Google AI overviews hallucinations showing up in a top shelf model.


Sure, and I’ve seen the same. But I’ve also seen the degree to which they do that decrease rapidly over time, so if that trend continues, would your opinion change?

I don’t think there’s any point in comparing to human intelligence when assessing machine intelligence; there’s zero reason to think it would have similar qualities. It’s quite clear that for the foreseeable future it will be far below human intelligence in many areas, while already exceeding humans in some areas that we regard as signs of intelligence.


s/LLM/human/

Clever. Yes, humans can be terrible at reasoning too, but in any half-decent technical workplace it's rare for people to fail to apply logic as often, and in ways that are as frustrating to deal with, as LLMs do. And if they do, they should be fired.

I can't say I remember a single coworker who would fit this description, though many were frustrating to deal with for other reasons, of course.


It works well for smaller folders, but it slows to a crawl with folders that contain thousands of files. If I add a file to an empty shared folder it will sync almost instantly, but if I take a photo, both sides become aware of the change rather quickly and then just sit around for 5 minutes doing nothing before starting the transfer.

How many thousands? I have a folder with a total of 12760 files spread across several folders, but the largest, I think, is the one with 3827 files.

I've noticed the sync isn't instantaneous, but if I ping one device from the other, it starts immediately. I think Android has some kind of network-related sleep somewhere, since the two NixOS ones just sync immediately.


I have around 4000 photos and videos in this folder. I don't know what it is but I know that it's not a network issue.

I think it takes a long time because the phone's CPU is much slower than the desktop's, but I couldn't tell you what it's doing; the status doesn't say anything useful except noting that files are out of sync and that the other device is connected.


Yes, I do wish it would say a bit more about what's going on and have a big button that says "try it now".

> pick some numbers for the employer paid health plan

Don't forget to account for real annual healthcare costs; insurance premiums are just one part of the story, and you still need to pay out of pocket for deductibles and copays. It feels like there are so many benefits that we take for granted in much of the EU (even if not all of them apply universally) that you have to budget for in the US, and they also carry a poorly understood but very real psychological cost.

I'm not in France, but I graduated from university with 0 EUR in student loans because there were no tuition fees and the accommodation, food, and public transport were heavily subsidized and easily paid for with a part-time job. I don't need a car because we have decent and safe public transport.

When I took a sabbatical and quit my job, I didn't qualify for benefits, which meant I had to pay around 50 EUR a month for health insurance. I could continue seeing specialists the same as I did before, and I didn't pay a single cent out of pocket all year. I've seen people talk about how they pay $100-$200 a month for the same medication, and that's with expensive insurance, plus hundreds more for appointments; how they're having to fight insurance companies and scavenge for deals on medications at various pharmacies. It all sounds so exhausting.

If I lived in the US, I think trying to repeat what I did here would've put me at serious risk of homelessness and inescapable lifelong debt, especially if I had some bad luck with my health.


It drew a 10-sided shape, numbering the vertices instead of the edges, and the labels were wrong (some numbers repeated, others skipped).
