I think there is far less than a 1% chance of this happening, but there are probably millions of Antigravity users at this point; even a one-in-a-million chance of this happening is already a problem.
We need local sandboxing for FS and network access (e.g. via namespaces/`cgroups` on Linux, or similar mechanisms on other OSes) to run these kinds of tools more safely.
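On Linux this is already approximable with something like bubblewrap; here's a minimal sketch (assuming `bwrap` is installed and unprivileged user namespaces are enabled, with `your-agent-cli` as a placeholder for whatever tool you actually run):

```sh
# Read-only view of the host filesystem, a writable bind of the current
# project only, and no network namespace at all.
bwrap \
  --ro-bind / / \
  --dev /dev \
  --proc /proc \
  --tmpfs /tmp \
  --bind "$PWD" "$PWD" \
  --unshare-net \
  -- your-agent-cli
```

In practice the agent still needs outbound access to its own model API, so the network side has to be more selective than a blanket `--unshare-net`, but the filesystem part alone removes most of the blast radius.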
Codex does such sandboxing, fwiw. In practice it gets pretty annoying when e.g. it wants to use the Go CLI, which uses a global module cache. Claude Code recently got something similar[0] but I haven't tried it yet.
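For the Go cache specifically, one workaround (just a sketch, not anything Codex-specific) is to point Go's caches inside the workspace so the sandbox only has to allow writes there:

```sh
# Keep the module and build caches inside the project instead of the global
# ~/go/pkg/mod and ~/.cache/go-build, at the cost of not sharing them
# across projects.
export GOMODCACHE="$PWD/.cache/gomod"
export GOCACHE="$PWD/.cache/gobuild"
go build ./...
```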
In practice I just use a Docker container when I want to run Claude with --dangerously-skip-permissions.
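Something along these lines, where only the project directory is mounted so a runaway command is confined to the workspace (a sketch; `my-claude-image` is a placeholder for whatever image you've built with the CLI in it):

```sh
# Only the current project is visible and writable inside the container.
docker run --rm -it \
  -v "$PWD":/workspace \
  -w /workspace \
  my-claude-image \
  claude --dangerously-skip-permissions
```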
I think the general public has a MUCH better grasp on the potential consequences of crashing a car into a garage than some sort of auto-run terminal command mode in an AI agent.
These are being sold as a way for non-developers to create software; I don't think it's reasonable to expect that kind of user to have the same understanding as an actual developer.
I think a lot of these products avoid making that clear because the products suddenly become a lot less attractive if there are warnings like "we might accidentally delete your whole hard drive or destroy a production database."
Google (and others) are (in my opinion) flirting with false advertising in how they pitch the capabilities of these "AI"s to mainstream audiences.
At the same time, the user is responsible for their device and for the code and programs they choose to run on it, and for any outcomes that result.
Hopefully they've learned that you can't trust everything a big corporation tells you about their products.
This is an archetypal case where a law wouldn't help. The other side of the coin is that this is exactly the kind of data-loss bug in a product that is perfectly capable of being modified to make it harder for a user to screw up this way. Have people forgotten how comically easy it was to do this without any AI involved? Then shells got just a wee bit smarter and it got harder to do this to yourself.
LLM makers that make this kind of thing possible share the blame. It wouldn't take a lot of manual functional testing to find this bug. And it is a bug. It's unsafe for users. But it's unsafe in a way that doesn't call for a law. Just like `rm -rf *` did not need a law.
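For anyone who wasn't around for it, "a wee bit smarter" looked roughly like this: GNU rm now refuses to operate on `/` unless you pass `--no-preserve-root`, and a common belt-and-suspenders alias adds a single prompt before any bulk or recursive delete (a sketch of typical defaults, nothing agent-specific):

```sh
# Modern GNU coreutils already refuses `rm -rf /` without --no-preserve-root.
# -I prompts once before removing more than three files or recursing.
alias rm='rm -I --preserve-root'
```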
You can:
- sell software that interacts with your computer and can lead to data loss
- give people software for free that can lead to data loss.
...
The Antigravity installer comes with a ToS that includes this:
The Service includes goal-oriented AI systems or workflows that perform
actions or tasks on your behalf in a supervised or autonomous manner that you
may create, orchestrate, or initiate within the Service (“AI Agents”). You
are solely responsible for: (a) the actions and tasks performed by an AI
Agent; (b) determining whether the use of an AI Agent is fit for its use case;
(c) authorizing an AI Agent’s access and connection to data, applications,
and systems; and (d) exercising judgment and supervision when and if an AI
Agent is used in production environments to avoid any potential harm the AI
Agent may cause.