The car was not really idle; it was driving, and fast. It's more like it crashed into the garage and burned it down. Btw, iirc even IRL a basic insurance policy does not cover the case where the car in the garage starts a fire and burns down your own house; you have to tick extra boxes to get that covered.
There is a lot of society-level knowledge and education around car usage, incl. laws requiring prior training. AI-directed agents are relatively new. It took a lot of targeted technical, law-enforcement, and educational effort to stop people from flying through windshields.
If you climb into the cockpit of a dangerous new prototype (of your own volition!), it's really up to your own skill level whether you end up the crash dummy or the test pilot.
When Google software deletes the contents of somebody's D:\ drive without requiring the user to explicitly allow it. I don't like Google, and I'd go as far as to say that they've significantly worsened the internet, but this specific case is not the fault of Google.
For OpenAI, it's invoked as codex --dangerously-bypass-approvals-and-sandbox; for Anthropic, it's claude --dangerously-skip-permissions. I don't know what the equivalent is for Antigravity, but yeah, sorry, I'm blaming the victim here.
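For reference, the two invocations side by side (flag names as given above; note that both vendors put the warning in the flag name itself):

    # OpenAI Codex CLI: bypasses both approval prompts and the sandbox
    codex --dangerously-bypass-approvals-and-sandbox

    # Anthropic Claude Code: skips all permission checks
    claude --dangerously-skip-permissions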
And yet it didn't. When I installed it, I had 3 options to choose from: the agent always asks before running commands; the agent asks only on "risky" commands; the agent never asks (always runs). With the 2nd option it will run most commands but ask before rm and the like.