I infer from this explanation that there were hundreds of Meta devices in the room just waiting for the wake phrase. After all, if the dev server died with only a few tens of clients, that would be its own source of embarrassment.
But then: it sounds like quite the security risk if the wake phrase can trigger those hundreds of devices to go off at once, no? "Hey Meta AI, blast Baby Shark into my eyeballs NOW" could be quite the attack on an office, in a train, etc.
I would imagine the solution is either to allow a dedicated wake phrase, or to voice-fingerprint the wake phrase during device setup so that it only triggers for that single user.
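The voice-fingerprinting idea above could be sketched as a simple speaker-verification gate: enroll the owner's voice during setup, then only honor the wake phrase when the incoming utterance is close enough to the stored voiceprint. This is a toy illustration, not Meta's actual pipeline; the `enroll` / `is_owner` functions and the toy embedding vectors are my own invention, and a real system would use embeddings from a trained speaker-encoder model rather than hand-made vectors.

```python
import numpy as np

def enroll(samples):
    """Hypothetical setup step: average several wake-phrase embeddings
    recorded by the owner into a single stored voiceprint."""
    return np.mean(samples, axis=0)

def is_owner(voiceprint, embedding, threshold=0.85):
    """Gate the wake phrase: trigger only when the cosine similarity
    between the stored voiceprint and the incoming utterance's
    embedding clears the threshold."""
    cos = np.dot(voiceprint, embedding) / (
        np.linalg.norm(voiceprint) * np.linalg.norm(embedding))
    return cos >= threshold

# Toy 3-d vectors standing in for real speaker embeddings.
owner = enroll([np.array([0.9, 0.1, 0.2]), np.array([0.8, 0.2, 0.1])])
same_speaker = np.array([0.85, 0.15, 0.15])
stranger = np.array([0.1, 0.9, 0.3])

print(is_owner(owner, same_speaker))  # similar voice: device wakes
print(is_owner(owner, stranger))      # different voice: ignored
```

The threshold is the usual knob: too strict and the owner gets ignored on a noisy street, too loose and a roomful of demo devices all wake at once.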
My main object of pity is for the engineers who had to build this demo and probably got reamed afterwards. Vaya con Dios, y'all.
I initially assumed the AI's context was already filled with previous rehearsal conversations, so it remembered that the ingredients were already prepped. In that case, they could have simply started a fresh chat.
This may be true and sounds reasonable, but it's also a classic screwup familiar to anyone who has used Siri much: basically any time you're on imperfect WiFi, or something else about the tech is imperfect (which is probably more often than you'd think), this happens.