If it's used to write code that runs on an ICBM without verification, yes.
If it's used to write the firmware running inside a million lithium battery BMS units, and a 16-bit counter overflows after 65536 seconds of runtime, crashing the code and leaving a FET stuck in the on state so they all catch fire, then yes.
I'm not an AI safety nut and don't think AI itself is a threat, but what is a threat is lazy humans using shitty AI to take over things that humans should be doing.