They are going to say the usual things: child pornography, risk of terrorist use, and so on, and conclude that we need our overlords to protect us.
Same reasons that have been used in the past to take away freedoms.
Not saying they aren't real risks... Are there going to be sad incidents where innocent people are harmed? Yes. The question, though, is whether there will be enough of them, and whether they will be egregious enough, to outweigh the upsides of technology that is not heavily regulated.
Oh that's so interesting. I fall very much on the side of avoiding regulation as the solution, at all costs.
Regulations won't fix a damn thing, and regulators have no chance of keeping up even if they wanted to. We need these tools to not be developed or offered at all, by choice. Unfortunately, that really means the company, investors, or customers need to make a moral decision at the cost of economic gain or convenience/novelty.
This is a blind spot for me, do existing legal protections generally cover fake/generated content?
To be clear, the context of this thread is extremely important. I don't know how far our existing laws go toward making ML-generated child porn illegal, and I'd personally feel much more comfortable knowing that ML generation isn't some kind of legal loophole.
Last I heard, it varies by jurisdiction. Some only prohibit material that involves actual children; others ban based on subject matter even if it's, say, a cartoon drawing.
I assume that training separate models for separate jurisdictions would be rather expensive, so probably unlikely.
I'm sure they're blocking any prompts that explicitly ask for child porn, but they can't avoid the inevitably difficult challenge of distinguishing porn of a child from porn of someone who merely looks underage, or from graphic imagery of a child that may not meet some specific definition of porn.
Child pornography is the easy answer. Deepfake porn would be a huge risk in my opinion, though: kids making fake porn and spreading it around school could absolutely lead to suicides or attacks against other students.
Better question: how the heck do you see the ability for anyone to make extremely convincing fake erotica and not see at least some risk in that?
Oh, I can see that. Given OpenAI's push for multimodal engagement with GPT-4o, I fully expect the erotica they want to support would be more than just text.
I do agree with you there, though. I can't think of any meaningful risks of text-only erotica, unless you consider the psychological impact on the person choosing to engage with it.