Hacker News

> This is missing the entire point of ML. ML is literally defined as not having to explicitly program responses in for every situation.

So how do you know how the system will respond to an arbitrary situation? You could easily argue that we don't know how an arbitrary human will respond to an arbitrary situation, but we have systems in place to deal with the consequences if they handle it badly.

For example, if a driver handles a situation badly enough, they could lose their license. If an autonomous car does something bad enough that a human would have lost their license, what happens? Do all of that company's cars get pulled off the road until the bug is fixed and validated?



> So how do you know how the system will respond to an arbitrary situation?

You put them in that situation and see how they respond. If they respond badly, you keep training them until they respond better. I'm not saying it's easy, but I am saying it's exactly what autonomous-car developers have been doing all this time.
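The test-and-retrain loop described above can be sketched with a deliberately toy example. Everything here is hypothetical illustration, not any real autonomous-driving stack: the `Policy` class, the "scenario" representation (a single braking-distance number), and the crude training step are all invented for the sketch.

```python
class Policy:
    """Toy stand-in for a learned driving policy: it brakes for an
    obstacle only if its threshold covers the obstacle's distance."""

    def __init__(self, threshold=0.0):
        self.threshold = threshold

    def passes(self, scenario_distance):
        # The policy handles the scenario if it brakes in time.
        return self.threshold >= scenario_distance


def train_until_safe(policy, scenarios, max_rounds=100):
    """Evaluate the policy on every regression scenario; while any
    scenario fails, take a 'training step' and re-evaluate.
    Returns True if all scenarios eventually pass."""
    for _ in range(max_rounds):
        failures = [s for s in scenarios if not policy.passes(s)]
        if not failures:
            return True  # every known scenario is handled
        # Naive training step: adjust toward the hardest failure.
        policy.threshold = max(failures)
    return False


scenarios = [0.3, 0.7, 0.5]     # hypothetical hard cases found in testing
policy = Policy(threshold=0.1)  # initially fails some of them
assert train_until_safe(policy, scenarios)
assert all(policy.passes(s) for s in scenarios)
```

Note that the loop only guarantees good behavior on the scenarios you thought to test, which is exactly the gap the rest of this thread is arguing about.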


> If they respond badly, you keep training them until they respond better.

Right, but what do you do with all the other cars on the road that presumably still have the bad behavior (while the fix is being developed)? Just assume that the situation is rare enough that you'll be able to fix it before it happens again?


You're basically saying test-driven development can find all problems with software, and it's well known that it can't. It's very dangerous to assume TDD is all that's needed when lives are at stake.



