
Waymos, however, need thousands of dollars of hardware to achieve this and only work in limited areas. Tesla's bet is a lot riskier, but also has a lot more potential.


The idea that Tesla would win the robotaxi race by not needing LiDAR died sometime between when LiDAR cost $100k and when it cost $1k. Now it’s just Elon being intransigent.


Can I get a LiDAR for $1k like the ones the cars use?


There is no reliable FSD implementation on any car right now so it's kind of an irrelevant question.

The more relevant question is what will happen first: Tesla figuring out how to make vision-only work on their existing hardware, or the price of LiDAR coming down.


I’ve ridden Waymo in SF and it has gone great. The cost was cheaper than the cheapest Lyft/Uber, but in a much nicer vehicle. I felt 100% safe the whole time, which is better than I can say about humans who get paid more if they drive faster. My only complaint is cases like the one where it “wasted” a few minutes because it didn’t want to do an illegal U-turn during the pickup (any human driver would have done it).

The word “reliable” without any units attached isn’t well-defined, so I can’t say whether Waymo meets that bar, but it’s a good customer experience.


FYI Waymo is typically ~30% more expensive than Lyft/Uber in SF.


On-chip LiDARs are currently coming to the automotive sector.


You're going to have to be more specific with your necessary specs, there are $100 360° hobbyist LIDAR sensors on Amazon.


No, there aren't. Those use triangulation; LIDAR is time-of-flight. They also only scan a single rotating point, which is only sufficient for simple robots like vacuum cleaners.


Intel's L515 lidar from 2020 was <$300 and uses MEMS ToF instead of a rotating assembly for very high-speed scanning. 730p at 30 fps.

Good indoor range, but not really useful outdoors at any range. Scaling to higher power is indeed a challenge, but the fact that Intel delivered so much in 2020 for such a small price is awesome, and shows the potential.


Damn that looks amazing! Such a shame they abandoned RealSense.

Still, $300 is not $100, and presumably they were selling at a big loss, otherwise they wouldn't have shuttered RealSense.


Apple has had ToF LiDAR for face recognition for years now. It's a matter of spec.

A similar style of single-chip LiDAR for automotive is in the engineering sampling phase now [1]. Price remains to be seen, but anything sub-$1k would be a no-brainer to add to a robotaxi.

Oh, and everyone in the industry thinks Tesla is .. how to put it nicely .. irrelevant for the future because of their CEO's stance on sensors. Cameras will never be enough.

https://scantinel.com/2023/07/03/scantinel-photonics-launche...


If you buy en masse, maybe. We buy such devices one or a few at a time for industrial use cases, and those will cost you €10k for the big ones, and maybe less for the smaller ones. Lots of development happening in the space, though.


Humans don’t use lidar, which clearly shows that a vision-only robotaxi is very much feasible.


Birds have to flap their wings while our planes don't have to. There is absolutely no reason to limit self-driving cars in the same way our bodies are limited.

When it comes to AI, though, humans are using a biological neural net much more capable than any of today's AI you can cram into a car. So even if one accepts your premise of targeting human performance as a design guideline, more sensors are still logical at this point as a way to compensate for the weaker AI.

Also, if you read how Tesla does vision, it is very different from, and I think inferior to, how your eyes and brain build the 3D map of the surroundings. If one is limiting oneself to only vision, the first thing would be to get that 3D mapping as good as possible, and vision seems to be among the simplest and most researched brain functions, i.e. the easiest to reproduce. As Tesla doesn't seem to be doing it — only a couple of years ago did they start building an explicit 3D model — I think they aren't on the shortest path to success when it comes to FSD.


Planes do ”flap their wings”, just not the ones protruding from the fuselage.


I think you're mistaking rotating for flapping. Rotation is one of those fundamental things differentiating our technological civilization from Nature.


Those rotating things still produce their thrust by pushing a wing-shaped structure through air, producing a high-pressure zone on one side, and a low-pressure zone on another. That is what I was getting at. It is the same principle.


No, it is different. A prop or fan blade is immovably attached to the shaft and pushed through the air the same way as the plane's wing; the blade isn't flapped like a bird's wing.


> Rotation is one of those fundamental things differentiating our technological civilization from Nature.

Rotation is very common in nature.

Planetary rotation, inner-core rotation, spinning galaxies, dung beetle rolling, Keratinocyte migration, Rotifers, spirals, rotational symmetry, etc.

What isn’t common (but not non-existent) is using rotation for locomotion in biology.


Many plants and trees spread rotating ”helicopter seeds”. Many vines roto-grow themselves around vertical supports. Day flowers rotate to follow the sun.

Apples and oranges fall on the ground and can roll far and wide. Walnuts too.

Partial rotation is still rotation, of course: see animal joints in walk, trot and gallop.

And then there’s the belly-up pig drunk on brewery grain rolling down the hill. That mash packs a wallop!


Yes! Which is why the idea that “Rotation is one of those fundamental things differentiating our technological civilization from Nature” is not all that useful a statement.


Huge swaths of microbes use electrostatic rotary motors driving screw-type propellers, so I wouldn't say it's that uncommon.



Humans don't act based on visual patterns alone though. We act based on our understanding of the world as a whole, including the intentions of other humans.

For instance, when we see a ball rolling onto the street, we know that there is probably a young person nearby who wants that ball back. We don't have to be trained on the visual patterns of what might happen next.

Of course AI can be trained on the visuals of high probability events like this. But the number of things that can potentially happen is far greater than the number of training examples we could ever produce.


> the number of things that can potentially happen is far greater than the number of training examples we could ever produce

Models don't need to have been trained on every single possibility - it's possible for them to generalize and interpolate/extrapolate.

But, even knowing that it's theoretically possible to drive at human-level with only the senses humans have, it does seem like it makes it unnecessarily difficult to limit the vehicle to just that. Forces solving hard tasks at/near 100% human-level, opposed to reaching 70% then making up for the shortcoming with extra information that humans don't have.


>Models don't need to have been trained on every single possibility - it's possible for them to generalize and interpolate/extrapolate.

They do have some in-distribution generalisation capabilities, but human intentions are not a generalisation of visual information.


"human intentions are not a generalisation of visual information" is a bit confusing category-wise. Question would be to what extent you can predict someone's next action, like running out to retrieve a ball, given just what a human driver can sense.

Clearly that's possible to some extent, and in theory it should be possible for some system receiving the same inputs to reach human-level performance on the task, but it seems very challenging given the imposed constraints.

Also, for clarity, note that the limitations don't require the model be trained only on driver-view data. It may be that reasoning capability is better learned through text pretraining for instance.


Humans don’t have radar, or thermal cameras, or ultrasonic sensors, doesn’t mean planes and boats shouldn’t use those


Human eyes are an order of magnitude better than the cameras in a Tesla. Humans also have a database in their head and remember how to behave in certain situations. FSD doesn't have any database of any kind.


That same argument can be used for all companies to fire all their employees. They are all human after all. Just implement all the needed features in hardware and software, done.


Humans use our brains to drive. Unless you're planning on popping an actual human brain, or something that can perform equivalently, into the car, you'd do well to consider a superior sensor suite.


Humans continuously move their heads in three dimensions to infer depth.

Cars can't do this.

And not surprisingly the biggest problem with FSD is the accuracy of its bounding boxes.


Citation? Humans are not constantly moving their heads to the degree that chickens do, and I find it doubtful that the micro movements from our head (which our eyes have to adjust for with the vestibulo-ocular reflex so things aren't blurry, similar to image stabilization in cameras) are large enough to infer depth.
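One way to ground this dispute is a back-of-the-envelope parallax calculation. This is a minimal sketch with assumed numbers (a ~5 cm head sway, an object 20 m away, and the common figure of roughly 1 arcminute for human visual acuity):

```python
import math

def parallax_arcmin(baseline_m: float, depth_m: float) -> float:
    """Angular parallax, in arcminutes, of a point viewed from two
    positions `baseline_m` apart at distance `depth_m`."""
    return math.degrees(math.atan2(baseline_m, depth_m)) * 60

# A ~5 cm head sway looking at an object 20 m away:
angle = parallax_arcmin(0.05, 20.0)  # ≈ 8.6 arcmin
```

So on these assumed numbers, even small head motion produces a parallax several times the acuity threshold at modest distances; whether the visual system actually exploits that signal while driving is the empirical question here.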


I never said people are moving their heads like chickens.

But we do move our heads around pretty frequently. Enough to build mental records of what the bounding boxes are going to be for a range of objects.


We're not doing that while driving, though.

If we're talking purely about going off memory, there's no reason why machines couldn't build up a similar catalog (which could be used by every self driving AI once learned). And human ability to judge distances varies significantly between drivers.


feasible? I want the thing to drive better than me, especially in the rain, fog, and the dark!


I can't tell if this is satire, or if replicating 6 million years of evolution has legitimately become handwave material for Elon's supporters...


They are afraid. Times of crisis, especially a planetary one, always have the weaker-minded and scared rally around figureheads. Some guy in an operetta uniform, exclaiming "I'm the captain, give me all your cash" while brandishing a detached steering wheel, is what the passengers want to see. Reality is a Lovecraftian horror too much to bear.


Mammalian vision and vision itself have been around a lot longer than 6 million years by at least one, likely two, orders of magnitude.


I don't know if you've tried this recently, but take a photo of something on your phone and put it into an AI.

There may even be an AI built into your photo library app.


The fact I work on self-driving cars makes me a tiny bit more of a realist than someone who thinks CLIP is proof of what AI can and can't do...


I'm curious. Can you elaborate on what CLIP proves about what AI can and can't do?


My point is that it doesn't.

The fact that your phone can identify an object doesn't inform you of the capabilities of a self-driving car's vision stack. It's a complete non sequitur.


So your job is to, in your own words, be "replicating 6 million years of evolution"?

You know how big your own team is, and that your team is itself an abstraction from the outside world. You know you get the shortcuts of being able to look at what nature does and engineer it rather than simply copy without understanding. You know your own evolutionary algorithms, assuming you're using them at all, run as fast as you can evaluate the fitness function, and that that is much faster than the same cycle with human, or even mammalian, generational gaps.

> CLIP is proof of what AI can and can't do

CLIP says nothing about what AI can't do, but it definitely says what AI can do. It's a minimum, not a maximum.


Not to be rude but you're arguing with somebody that works in what I would assume is a highly mathematical space and asserting your opinion on how quickly that highly mathematical space can advance while your own profile admits that you were unable to understand "advanced calculus or group theory" and your own github indicates that you are stuck on "the hard stuff — abelian groups, curls, wedge products, Hessians and Laplacians" because you "don't understand the notation." Your opinion on the speed of advancement just doesn't seem informed?

Maybe this is an old post and your understanding has dramatically improved to the point where you're able to offer useful insight on ML/AI/self-driving?

https://benwheatley.github.io/blog/2024/03/11-12.00.16.html


1. Note time stamp: https://github.com/BenWheatley/char-rnn

2. Most ML is basic calculus and basic linear algebra — to the extent that people who don't follow it, use that fact itself as a shallow argument.

3. I'm not asserting how fast it can advance, I'm asserting that the comparison with "6 million years of evolution" is a as much a shallow hand-wave as saying it's trivial, as evidenced by what we've done so far.


You mean "very much theoretically possible".


s/feasible/possible/


Think of pile-ups. No matter how good a driver you are there are situations where you cannot prevent crashing. But lidar can.


Pile ups happen because people drive:

- Over the speed limit (it's called a limit for a reason)

- Too fast for the conditions (speed limit != speed target)

- Too close to the vehicle in front of them

There are very few situations that can't be prevented by driving properly in the first place.


Pray tell how a Lidar prevents crashing in this situation?


It can accurately determine the distance to objects in almost no time, while a human has about a one-second reaction time. There will be situations that a fast reaction alone can save.
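As a rough illustration of how much distance a faster reaction buys: the 1 s human reaction time and the 100 ms sensor-pipeline latency below are assumptions for the sake of the sketch, not measured figures.

```python
def reaction_distance_m(speed_kmh: float, reaction_s: float) -> float:
    """Distance travelled before braking even begins."""
    return speed_kmh / 3.6 * reaction_s

human  = reaction_distance_m(100, 1.0)  # ≈ 27.8 m at highway speed
sensor = reaction_distance_m(100, 0.1)  # ≈ 2.8 m with a 100 ms pipeline
```

Roughly 25 m of extra stopping margin at highway speed is the difference the comment is pointing at.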


I strongly believe that once you have everything working it's much easier to start working on the costs.


I wouldn’t step into a Tesla robotaxi in bad weather, period. They’d absolutely need a human remotely operating it. Without a steering wheel, passengers can’t take control even if they wanted to. Even in good weather, I’d be genuinely surprised if Teslas, in their current form, could drive around autonomously. I was really hoping Musk would mention new sensors being added for extra safety, maybe spinning it as: “Your Model 3 doesn’t need additional sensors, but just to be safe, we’re adding new ones.”


Can multiple operators from India operate the robotaxis, or does it need one-on-one operation? I mean, consider the savings!


The latency will kill you.

Both figuratively and literally.

Maybe something like Mexico would be better.


Latency? That's when Starlink comes into play!


>The latency will kill you.

I mean - I wasn't thinking I would risk it!!


Just ranting here: the psychological disconnect between a remote operator and the passengers in a robotaxi needs more research, though. A remote operator might have less empathy and responsibility towards passengers, possibly causing moral disengagement. The remote driver might never face the real-world aftermath of their actions, which can reduce their sense of remorse or responsibility. There could be complex legal dilemmas too (especially if the operator is in a different country).

But this has definitely been researched a lot in the field of military drone operators who can make life altering decisions from thousands of miles away.


My Tesla still beeps at me because it thinks I'm about to drive into pedestrians or parked cars because there's a bend in the road.

I honestly think at this point Tesla's FSD AI is way, way overfitting on a few US cities.


It's way overfitting on the routes that the CEO and a few YouTubers drive. https://electrek.co/2024/07/09/tesla-insiders-say-elon-optim...


Well, my 10-year-old BMW F11 does it too sometimes; it really is a stupid, primitive technology with tons of badly handled corner cases. Luckily it's not obnoxious and I got used to it quickly, so I ignore it. But in critical situations it can take away a bit of focus, which is pretty bad. Of course it can't be turned off.

Nobody expected a 15-year-old design from BMW to perform better, I guess. But in modern, up-to-date Teslas that don't even have a steering wheel, LiDAR is a no-no because of his ego? I can't imagine it getting approved in Europe, ever. Which is fine; there will be tons of competition for this in a few years.


But Tesla's vehicles can't operate ANYWHERE autonomously — not even with supervision inside the closed-loop tunnels in Las Vegas [0] — 6+ years after Musk said Autopilot/FSD was capable of driving itself coast to coast.

Whenever Teslas manage to offer autonomous driving, what makes you think LIDAR etc will still cost what it does now?

[0] https://www.reviewjournal.com/news/news-columns/road-warrior...


Waymos exist. This just-announced nothing doesn't.

It's much, much easier to make an existing thing cheaper and better over time.


Waymos drive in limited geofenced zones. FSD exists but needs driver supervision. Both are incomplete and it is unknown who will be the winner.


But isn't it the case that Waymos are actually usable (for this purpose), while FSD/RoboTaxi isn't?

Geofencing sounds like a good idea to me. It's a mean to roll things out carefully, while minimizing risk of death. If actual FSD/Robotaxi is ever released, I suspect that they'll need to geofence, too, for a while.


FSD is also geofenced, just a bigger fence. When/if the robotaxi actually debuts it too will be heavily geofenced and have usage restrictions but it will also be several years behind Waymo in terms of development, testing, technology and regulation.

Whilst it is unknown who will be the winner, or even valid competitors, we can predict with high confidence that Tesla has a massive challenge to reach where Waymo is today.


The people living in the "geofenced" areas don't care about how "incomplete" it is. The point of a self driving taxi is that you don't have to own the vehicle.


And this is a vast difference. You can expand a self-driving taxi service by running an Uber-like service and dispatching with safety drivers for journeys that take you out of the area, and it's functionally equivalent: people are buying the mobility, not the self-driving. You get to start rolling out without solving the whole problem.


The geofence can extend. Drivers are harder to replace.


Have been seeing both the LA and SF Waymo operating zones increase steadily. Also seen Waymo's being driven manually outside of those ranges presumably for further development.


How does it have more potential? I don't know how much taxi drivers earn in the US, but let's pick a random number like $50k per year. If Waymo hardware costs $10k but Tesla's costs $2k, then the first-year savings are $40k for Waymo and $48k for Tesla. That's only a 20% edge for Tesla even in Waymo's worst case, and the longer the vehicle lasts or the higher the taxi driver's salary, the smaller that edge gets. Tesla's potential advantage is heavily bounded and shrinks over time.
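The arithmetic above can be checked directly (all figures are the comment's illustrative numbers, not real prices):

```python
def first_year_savings(driver_salary: float, hardware_cost: float) -> float:
    """Savings from replacing a driver, net of one-time sensor hardware."""
    return driver_salary - hardware_cost

waymo = first_year_savings(50_000, 10_000)  # 40,000
tesla = first_year_savings(50_000, 2_000)   # 48,000
edge = tesla / waymo - 1                    # 0.20, i.e. a 20% advantage
```

And since the hardware cost is one-time while the driver's salary recurs, the relative edge only falls in later years.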


Not to mention, Tesla might not be able to do it at all. Plus, a driver that makes orders of magnitude fewer mistakes, is punctual, doesn't sue, doesn't rest, and isn't a liability is worth much more than $50k. As for the human component, we could argue whether humans should be driving cars full time, putting themselves and others at risk of human error.


This is definitely the big bet Tesla is making. However, the hardware Waymo uses will also become cheaper over time and with scale, so either bet has advantages and disadvantages.


I think “thousands of dollars” in hardware is fine


Tesla's autobots are free?

Waymo & Hyundai announced a partnership. IIRC Waymo has always intended to work with OEMs, vs make their own vehicles.

https://seekingalpha.com/news/4156375-hyundai-motor-joins-fo...

Having no opinions about the IONIQ 5, I've gleaned that it's well regarded. Maybe not a Model Y, but close enough.

Of all the legacy OEMs, Hyundai has a fair chance of surviving the Tesla (& BYD) juggernaut. So I think Waymo chose wisely.


Isn't any Tesla costing thousands of dollars anyway?


Potential lawsuits



