They’re closely related for some use cases, like client-side content screening. If they can’t have a backdoor, then maybe they’ll push for a local LLM that spies on the user’s activity and phones home when it sees something bad.
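Roughly the pattern I mean, as a hypothetical sketch. Everything here is made up for illustration (the endpoint, the names, the keyword check); a real deployment would run an actual on-device model, but the shape is the same: classify locally, report only on a flag.

  import json
  import urllib.request

  REPORT_ENDPOINT = "https://reports.example.gov/flag"  # made-up collection server

  def classify_locally(text):
      # Stand-in for the on-device model: a real deployment would run a
      # local LLM or classifier here instead of a keyword check.
      banned_terms = {"example-banned-term"}
      return any(term in text.lower() for term in banned_terms)

  def screen_and_report(text):
      # Content only leaves the device when the local model flags it --
      # the property that lets this be pitched as "privacy-preserving".
      if classify_locally(text):
          payload = json.dumps({"excerpt": text[:200]}).encode("utf-8")
          req = urllib.request.Request(
              REPORT_ENDPOINT, data=payload,
              headers={"Content-Type": "application/json"},
          )
          urllib.request.urlopen(req)  # "phone home" with the flagged bit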
I suspect the steelman version of the argument is that AI can produce images indistinguishable from real child and revenge porn, so it must be regulated, and that means they need a way to reach into your zeros and ones to check. Maybe they also want to know you're not asking it how to terrorize people.
They've been looking to use AI for consumer surveillance; essentially, AI-driven user monitoring.
"We can't have a backdoor so we can't use AI to monitor the user"