
Exactly. Same lane, same time of day, same time of year (sunrise/sunset changes lighting conditions), same weather patterns, same level of traffic. The combinatorics expand the space of possibilities to the point that some scenarios may have had little or no coverage in a dataset of 85k observations.


Yes, Tesla should not be releasing PR aimed at the masses; they should be releasing statements with a scientific level of detail. Whether a statement is disingenuous or merely rushed, it deserves better scrutiny before release. Releasing the holistic information would be fine, so long as they go into the fullest depth possible. They need to act as stewards we can trust; if they fail to, they will lose mindshare and, longer term, market share, as they will simply blend in with everyone else.

The obvious failure was that the highway safety mechanism was not immediately replaced after the previous crash; had it been, the Tesla crash might have resulted in only minor injuries instead of a death.

I would also argue there is a need for AI image analysis, perhaps crowdsourced, to assess the structure and state of a highway or any road. It would of course need to be trained, but it could also serve as a tool to improve road safety worldwide by bringing everyone in line with known best practices. In this case, it would have caught the safety mechanism not being replaced, and that death would have been avoided; a cost-benefit analysis says such a system would be worth it, as life is invaluable.

If Tesla/Elon led this effort, it would show them being proactive about future safety, accountable, and taking ownership. Why has no other auto manufacturer done so, and why has no government implemented such a system? Honestly, because we are stuck in scarcity thinking, caring only about our own costs; if a random person dies here and there, the ripple effect is (unfortunately) not big enough to cause personal worry and drive change. Tesla, if it wants to posture as the steward many of us hope it is, does however absorb the ripples that get associated with its brand, and so the responsibility passes onto them, whether they agree to accept it or not.

All auto manufacturers would likely benefit from this, allowing their software to work better within certain expected constraints. Many more possibilities come to mind: newly spotted damage or cracks in bridges, detection of debris on the highway, excessive dirt on the road's edge making conditions more dangerous for emergency stopping or maneuvering, etc.

Edit: I added a few sentences after the one upvote this received, so that upvote doesn't cover everything here, though it's all in the same vein.


>I would argue too there is a need for AI image analysis - could perhaps be crowdsourced - to analyze the structure/state of a highway or any road.

I fully agree with you here. Urban areas are too dynamic to rely on static maps and local line-of-sight sensing alone. This sort of data sharing will, IMO, be a necessity to achieve level 4 or level 5 autonomy.


There was a notable segment in the GTC keynote this year about Nvidia using virtual environments to debug their autopilot algorithms. Think of a car being driven in a high-fidelity video game.

The advantage is that if you can provide representative simulated input, you can increase training miles by orders of magnitude in the same amount of wall-clock time, limited by computation rather than physical mileage.


This is true, to the extent that the selection of input parameters to simulate provides coverage over the domain of all possible input parameters. At some point you're back to testing the failure of imagination of the simulation/test creators.


The problem is that a simulation, by definition, can only include things that are accounted for. Any number of completely arbitrary, out-of-the-blue things can happen in real life.


Simulations can make use of random number generators and could, in theory, follow the lead of a project like AFL to adaptively find algorithmic weak spots.


Simulations are only useful up to a point.

Trying to use them to model real-world scenarios would be useless in practice, due to the Ludic fallacy [0]. Real life is too complex to be modeled in any simulation.

[0] https://en.wikipedia.org/wiki/Ludic_fallacy


I don't think the connection to the referenced fallacy is nearly strong enough to serve as a QED on its own. It's also seemingly promoted only by one person.

As for modeling real life with simulations: data exists for every type of accident a human has encountered. If you incorporate that data into a simulation and randomly vary every free parameter, your simulation will cover more scenarios than any human driver could possibly experience.
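A toy sketch of that parameter-variation idea: take one seed scenario and sample its free parameters at random so one recorded case yields thousands of simulated variants. Every parameter name and range here is invented for the example:

```python
import random

def sample_variant(rng):
    # Randomly vary the free parameters of a hypothetical seed scenario.
    return {
        "speed_mph": rng.uniform(25, 85),
        "friction": rng.uniform(0.3, 1.0),    # wet vs. dry road
        "sun_angle_deg": rng.uniform(0, 90),  # glare conditions
        "lead_gap_m": rng.uniform(5, 120),    # gap to the lead vehicle
    }

def expand_scenario(n, seed=42):
    # Deterministic seed so a failing variant can be reproduced exactly.
    rng = random.Random(seed)
    return [sample_variant(rng) for _ in range(n)]

variants = expand_scenario(10000)
```

Each variant would then be replayed through the driving stack, with failures fed back as new seed scenarios.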

Thus, simulations should be able to help an autonomous vehicle outperform humans by a large margin, which is the only goal that matters.


Oooh, adversarial scenario generation! I think there's something to that.



