I think there's a market for LLM-based therapy that is reviewed/tweaked by a human therapist in between sessions. That would give people the assurance that things aren't going way off the rails.
OTOH, I could also see a market for an offline, fully private LLM therapist. That way you could say anything without fear of being judged. These would probably need to behave differently from human therapists, who are used to patients with somewhat more of a filter (people hold back out of fear of judgment). If people opened up to LLM therapists in more transparent ways, the models might not respond the way a human therapist would recommend, since they've seen very little training data on such unfiltered interactions.
The privacy aspect is what made me connect local LLMs with therapeutic use. But yeah, AI as it stands today just isn't safe enough. We'd need nine 9s of safe usage here (99.9999999% safe), or more, for me to actually feel comfortable with the technology.
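To be concrete about the "offline, fully private" part: the whole thing can run on local hardware with no network calls at all. A minimal sketch, assuming the llama-cpp-python bindings and a locally downloaded GGUF model (the model filename and system prompt here are just placeholders):

    from llama_cpp import Llama

    # Runs entirely on local hardware; no API keys, no network traffic.
    llm = Llama(model_path="./local-model.gguf", n_ctx=4096, verbose=False)

    resp = llm.create_chat_completion(
        messages=[
            {"role": "system",
             "content": "You are a supportive, non-judgmental listener."},
            {"role": "user",
             "content": "I've been anxious lately and haven't told anyone."},
        ],
        max_tokens=512,
    )
    print(resp["choices"][0]["message"]["content"])

Nothing in a loop like that ever leaves the machine, which is the entire appeal for this use case.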
It would also open up some legal gray areas. Would psychotherapist-patient privilege apply to an LLM box? If the state had a zero-day granting access to a seized "therapy box," the full transcripts could be more revealing and damaging than anything a human therapist could provide to the police.
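The obvious partial mitigation is keeping transcripts encrypted at rest, so a seized box yields ciphertext rather than readable sessions. A minimal sketch, assuming Python's cryptography package; key management (in practice the key should be derived from a user passphrase and never stored next to the data) is the hard part and is hand-waved here:

    from cryptography.fernet import Fernet

    # Hypothetical key handling: derive from a passphrase in real use,
    # don't keep the key on the same disk as the transcripts.
    key = Fernet.generate_key()
    f = Fernet(key)

    transcript = "user: ...\nassistant: ...".encode("utf-8")
    ciphertext = f.encrypt(transcript)  # this is what lands on disk

    # Only someone holding the key can recover the session text.
    plaintext = f.decrypt(ciphertext).decode("utf-8")

Of course, if the state's hypothetical zero-day gets them onto the running box while the key is in memory, encryption at rest doesn't save you; it only raises the bar for offline inspection of a seized drive.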