
Tesla's system runs in real time: it makes routing decisions on every frame, and each decision is independent of the previous ones. It doesn't have memory; it just makes decisions based on what it actually sees with its cameras, and it doesn't trust maps the way most other systems do. For some reason, it thinks it must turn right.
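
Roughly, a stateless per-frame planner looks like this minimal sketch (the names and logic are made up for illustration, not Tesla's actual code):

    # Stateless per-frame planning: every decision is computed from the current
    # camera frame alone, so nothing carries over between frames.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        lane_blocked_ahead: bool
        right_turn_open: bool

    def plan_step(frame: Frame) -> str:
        # No history argument: the planner can't remember what it chose last frame.
        if frame.lane_blocked_ahead and frame.right_turn_open:
            return "turn_right"
        return "go_straight"

    print(plan_step(Frame(lane_blocked_ahead=True, right_turn_open=True)))   # turn_right
    print(plan_step(Frame(lane_blocked_ahead=False, right_turn_open=True)))  # go_straight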

Situations like these are sent to Tesla to be analyzed, the corresponding data is added to the training set, and the error is corrected in a future version. This is how the system improves.

After this situation is fixed, there will be another edge case for the HN crowd to panic over.



> It doesn't have memory; it just makes decisions based on what it actually sees with its cameras, and it doesn't trust maps the way most other systems do.

This is incorrect. Tesla also relies on maps for traffic signs, intersections, stop signs, etc. Its maps just don't carry the additional detail that other companies' maps do.

> After this situation is fixed, there will be another edge case for the HN crowd to panic over.

Is anything Tesla FSD can't handle an "edge case" now? It's literally an everyday driving scenario.


It's a specific edge case that the driver was testing: FSD has issues around those monorail pillars.


Monorail pillars are also not an edge case. Plenty of cities have monorails. Just because FSD doesn't work there doesn't mean it's an edge case.


You're probably right, but your tone seems to imply that this is somehow an acceptable way to develop safety-critical software, which is a bit baffling.


It is safe because there's a driver ready to correct any mistakes. There isn't a single case of FSD Beta actually hitting something or causing an accident. So, based on the actual data, the current testing procedure seems to be safe.

It isn't possible to learn edge cases without a lot of training data, so I don't see any other way.


This person is not a Tesla employee being trained and paid to test this extremely dangerous piece of software that has already killed at least 11 people. This is obviously unacceptable, and no other self-driving company has taken the insane step of letting beta testers try out their barely functional software.


FSD Beta has never killed anyone. Maybe you're confusing it with Autopilot, which is different software, but which has also saved many more people than it has killed.

I'm not saying that safety couldn't be improved, for example by disengaging more readily in situations where the system isn't confident. One heuristic would be to disengage when it changes its planned route suddenly, like in this scenario.
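
One way to phrase that heuristic as code (a rough sketch with made-up names and an arbitrary confidence threshold, not anything Tesla actually runs):

    # Disengage when the planned maneuver flips suddenly, especially at low confidence.
    class SuddenRouteChangeMonitor:
        def __init__(self, min_confidence: float = 0.8):
            self.last_maneuver = None
            self.min_confidence = min_confidence

        def update(self, maneuver: str, confidence: float) -> bool:
            """Return True when control should be handed back to the driver."""
            flipped = self.last_maneuver is not None and maneuver != self.last_maneuver
            self.last_maneuver = maneuver
            return flipped and confidence < self.min_confidence

    monitor = SuddenRouteChangeMonitor()
    print(monitor.update("go_straight", 0.95))  # False: first observation, no flip
    print(monitor.update("turn_right", 0.55))   # True: sudden flip at low confidence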


Source on it saving anyone?


"In the 1st quarter, we registered one accident for every 4.19 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.05 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 978 thousand miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles."

Source: https://www.tesla.com/VehicleSafetyReport
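
For scale, the quoted figures work out to roughly these multiples of the NHTSA baseline (plain division only; this doesn't adjust for how the miles were sampled):

    # Miles between crashes from the quoted report, relative to the NHTSA average.
    miles_per_crash = {
        "Autopilot engaged": 4_190_000,
        "No Autopilot, active safety on": 2_050_000,
        "No Autopilot, no active safety": 978_000,
        "NHTSA US average": 484_000,
    }
    baseline = miles_per_crash["NHTSA US average"]
    for label, miles in miles_per_crash.items():
        print(f"{label}: {miles / baseline:.1f}x the average miles between crashes")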


These are inherently biased statistics: drivers let the car drive in the situations where they're confident it can do its job, and take over for the rarer, more complex ones.

Also, the NHTSA dataset contains plenty of old-as-hell cars; comparing it to a fresh-out-of-the-factory fleet will by itself skew the data.


Apples to oranges (or more like apples to oceans).




