Hacker News

> 1) Those tasks that it is still currently only possible for a human to do. 2) Those tasks which are easier and cheaper for a human to do.

I agree, but "1" must include all tasks where a mistake could lead to liabilities for the company, which is probably most tasks. LLMs can't be held responsible for their fuckups; they can't be punished, they have no body. It's like the genie from the bottle: it will grant your three wishes, but they might turn out in surprising ways, and it can't be held accountable.

The same will apply, for example, to using LLMs in medicine. We can't afford to risk it on AI; a human must certify the diagnosis and treatment.

In conclusion, we can say LLMs can't handle accountability, not even in principle. That's a big issue in many jobs. The OP mentioned this as well:

> even when AI coders can be rented out like EC2 instances, it will be beneficial to have an inhouse team of Software Developers to oversee their work

Oversight is basically manual-mode AI alignment. We won't automate that; the more advanced the AI, the more effort we need to put into overseeing its work.



> I agree, but "1" must include all tasks where a mistake could lead to liabilities for the company, which is probably most tasks

If you hire a junior programmer and they make a mistake, they aren't held liable either. Sure, you can fire them, but unless there's malice or gross negligence the liability buck stops at the company. The same can be said about the wealth of software currently involved in producing software and making decisions. The difficulty of suing Microsoft or the llvm project over compiler bugs hasn't stopped anyone from using their compilers.

I don't see how LLMs are meaningfully different from a company assuming liability for employees they hire or software they run. Even if they were AGI, it wouldn't meaningfully change anything. You make a decision about whether the benefits outweigh the risks, and adjust that calculation as you get more data on both. Right now companies are hesitant because the risks are both large and uncertain, but as we get better at understanding and mitigating them, LLMs will be used more.


Even with a junior, there is generally a logic to the mistake and a fairly direct path to improving in the future. I just don't know if "the next token was statistically chosen to be x" is going to be able to get to that level.


Good thing LLMs aren't a glorified statistical model, then, eh?

Anyway, why wouldn't there be? You reach out to the parent company with an issue and request for improvement. If you're a big enough client, you get your request prioritized higher. Same as with any other product that's part of your product today.

The application of LLMs today isn't straight-up text in, text out; it has become more complex than that. Enough that the overall system can be improved without improving the underlying model.
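A minimal sketch of that point: the model call is just one component, and everything around it is ordinary software you control and can harden independently. All names here (`call_llm`, `extract_invoice`, the field names, the bound) are hypothetical, not any particular vendor's API.

```python
import json

def call_llm(prompt: str) -> str:
    # Placeholder for a real model API call; hypothetical canned reply.
    return '{"amount": 1234.5, "currency": "EUR"}'

def extract_invoice(prompt: str, max_amount: float = 10_000.0) -> dict:
    """Wrap the model call in checks that can be tightened without
    retraining or swapping the model itself."""
    raw = call_llm(prompt)
    data = json.loads(raw)  # reject non-JSON output outright
    amount = float(data["amount"])
    if not (0 < amount <= max_amount):
        # Sanity bound: out-of-range amounts escalate to a human
        # instead of flowing straight into an automated payment.
        raise ValueError(f"amount {amount} outside allowed range")
    return data
```

Tightening `max_amount`, the schema check, or the escalation path improves the product with zero changes to the LLM, which is the sense in which the argument above is moot.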

Your argument is moot.


"A COMPUTER CAN NEVER BE HELD ACCOUNTABLE THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION" — IBM slide from 1979.


hahaha funny

Let me tell you a story: a company was using AI for invoice processing, and it misread a comma for a dot, so they sent a payment 1000x larger than expected, all automated of course, because they were very modern. The result? They went bankrupt. "Bankrupted by AI error" might become a thing.


Of course. It is like when a company goes bankrupt because they didn't establish good fire protection in their factory. Using AI automation has its risks that have to be mitigated appropriately.


That’s why you buy a cyber insurance policy.


Some might consider that a plus in the same way that "you can't get fired for choosing IBM" -- it's a way to outsource blame.


Ted Nelson calls it "cybercrud": blaming the machine as if it has the final say on the matter, "the system won't let me..."


How do you negotiate for a salary when the role is to be ablative armor for the company? "I'm excited to make myself available to absorb potential reputation damage for $CORP when the AI goes off the rails."


I think there is some under-explored issue in the liability, but I don’t know enough about business law to have a useful opinion on it. It seems interesting, though.

Even if an LLM and a human were equally competent, the LLM is not a living being and, I guess, isn’t capable of being liable for anything. You can’t sue it or fire it.

Doctors have to carry insurance to handle their liability. I can see why it would be hard to replace a doctor with an LLM as a result.

Typically engineers aren’t personally liable for their mistakes in a corporate setting. (I mean, there’s the whole licensed Professional Engineer distinction, but I don’t feel like dying on that hill at the moment). So where does the liability “go?” I think it just gets eaten by the company somehow. They might fire the engineer, but that doesn’t make the victim whole or benefit society, right?

Ultimately we’d expect companies that are so bad at engineering to get sued so often that they implement process improvements. That could be wrapped around AIs instead of people, right? But then we’re not using the humans’ unique ability to bear liability, I think?


It's vanishingly rare that individuals have any liability or are punished for software fuckups. Maybe if someone is completely incompetent, they'll get fired, but I'm not sure that's meaningfully different than cancelling a service that doesn't work as advertised.



