
If those are the stronger examples, then you should have gone with them. It’s more in line with the HN guidelines than taking the weaker interpretation.

I think you missed my point. Because software is more opaque, it faces a much higher threshold before the public feels comfortable with it. My claim is that it will have to be an outstanding driver, not just an “alright” one, before autonomous driving is given the reins en masse. In addition, I don’t think we know much about the true distribution of risk, so claims about the long tail are undefined and somewhat meaningless: we don’t have a codified “edge” that delimits what you call edge cases. Both software and people are imperfect. Given the opaqueness of software, I still maintain people are more comfortable with human drivers because of our evolved theory of mind. Given the current state of the art, do you think more people would prefer their non-seatbelted toddler to ride alone in an average autonomous vehicle, or with a human driver?

But more to my point, humans are also irrational, so statistical arguments don’t translate well to policy. Just look at how many people (professionals included) trade individual, unleveraged stocks when an index fund is the better statistical bet. Your point hinges on modeling humans as rational actors, and my point is that that’s a bad assumption. It’s a sociological problem as much as an engineering one.
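To make the index-fund point concrete, here’s a minimal Monte Carlo sketch in Python (the return parameters are made up for illustration, not calibrated to any market): when single-stock outcomes are positively skewed, the median stock underperforms an equal-weighted index even though every stock has the same expected log-return.

    import numpy as np

    rng = np.random.default_rng(0)
    n_stocks, n_years = 1000, 20
    # Assumed toy parameters: each stock's annual log-return ~ Normal(mu, sigma).
    mu, sigma = 0.05, 0.30

    # Terminal growth of $1 invested in each stock after n_years.
    log_paths = rng.normal(mu, sigma, size=(n_stocks, n_years)).sum(axis=1)
    stock_growth = np.exp(log_paths)

    # An equal-weighted index captures the cross-sectional mean outcome.
    index_growth = stock_growth.mean()
    median_stock = np.median(stock_growth)
    beat_index = (stock_growth > index_growth).mean()

    print(f"index grows to {index_growth:.2f}x")
    print(f"median stock grows to {median_stock:.2f}x")
    print(f"share of stocks beating the index: {beat_index:.1%}")

With these toy numbers, roughly a quarter of the stocks beat the index and the median stock lags it badly, so most individual pickers lose to the passive bet even before fees, yet people keep picking. That gap between the statistics and the behavior is exactly the irrationality at issue.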


