> Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.
This is the most obvious problem, yes. Consumer protection agencies seem like the natural candidate to initiate such proceedings. I have already admitted I am not a lawyer, but this really does not seem like an intractable problem to me.
> The reason you want it to be labeled is for the cases where you can't tell.
This is actually _not_ the most important use case, to me. This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.
> But how is the government, or anyone, supposed to prove this?
Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.
This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.
> This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.
But then why do you need any new laws at all? We already have laws against false advertising and breach of contract. If you want to declare that a space is exclusively human-generated content, what stops you from doing this under the existing laws?
> Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.
Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated. In order to prove it you would need some way of distinguishing machine-generated content, which, if you had it, would make the law irrelevant.
> This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.
Doing nothing can be better than doing either of two things that are both worse than nothing.
> But then why do you need any new laws at all? We already have laws against false advertising and breach of contract.
My preference would be for generative content to be disclosed as such. I am aware of no law that does this.
Why did we pass the FFDCA for disclosures of what's in our food? Because the natural path that competition would lead us down would require no such disclosure, so false advertising laws would provide no protection. We (politically) decided it was in the public interest for such things to be known.
It seems inevitable to me that without some sort of affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid that.
> Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated.
You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.
> Doing nothing can be better than doing either of two things that are both worse than nothing.
Agreed. However, I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.
> My preference would be for generative content to be disclosed as such. I am aware of no law that does this.
What you asked for was a space without generative content. If you had a space where generative content is labeled but not restricted in any way (e.g. there are no tools to hide it), then it wouldn't be that. If the space itself does wish to restrict generative content, then why can't you have that right now?
> Why did we pass the FFDCA for disclosures of what's in our food?
Because we know how to test it to see if the disclosures are accurate, but those tests aren't cost-effective for most consumers, so the label provides useful information and can be meaningfully enforced.
> It seems inevitable to me that without some sort of affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid that.
This will happen regardless of disclosure unless it's prohibited, and even then people will just lie about it because there is an incentive to do so and it's hard to detect.
> You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.
It will be a technical battle between companies that don't want it on their service and try to detect it, and spammers who want to spam. The effectiveness of a law would be directly related to what it would take for the government to prove that someone is violating it, but what could the government use to do that at scale that the service itself can't?
> I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.
So you're proposing something which is useless but mostly harmless to satisfy demand for Something Must Be Done. That's fine, but I still wouldn't expect it to be very effective.