Wiener says YC and Andreessen Horowitz are making false claims about SB 1047, the new California AI bill.
For example, they claimed that the bill's requirement of a "kill switch" on AI systems "could function as a de facto ban on open-source AI development." Wiener responds: "SB 1047 includes an emergency shutdown provision that only applies to models within the control of the developer. This requirement does not include open source models over which the developer has no control."
Another example, on the bill's perjury provision: "YC’s letter makes the categorically false — and, frankly, irresponsible — claim that, “creating a penalty of perjury would mean that AI software developers could go to jail simply for failing to anticipate misuse of their software.” That is absolutely untrue. It’s a scare tactic designed to convey to founders that this bill will land them in jail if something goes awry with a model they build. Putting aside that the bill doesn’t apply to startups, perjury requires knowingly making a false statement under oath — an intentional lie, whether on a driver’s license application, a tax return, or many other statements to the government. Good faith mistakes are not perjury. Harms that result from a model are not perjury. Incorrect predictions about a model’s performance are not perjury."
The bill only applies to models whose training runs cost more than $100 million. If you think frontier AI models could pose risks to national security (e.g. aiding bioweapons development), it doesn't seem crazy to ask OpenAI / Microsoft / Google / Meta / etc. to run their own risk assessments.