
> I would like to see AI moderating of CSAM, perhaps an open weights model that is provided by a nation state.

I don't envy the people who would have to traumatize themselves creating such a dataset. Still, it would be useful, yes.

> With confidential computing such models can be run on local hardware as a pre-filtering step without undermining anonymity.

I'm not sure it'd make sense to run locally. Many clients aren't powerful enough to run it on the receiving end (and every client would need to run it, instead of a few centralized entities), and for obvious reasons it doesn't make sense to run it on the sender's end.



I guess I meant local to the server, not the client (edge). But perhaps a very light model could also be run on the edge.

I built a porn-detection filtering algorithm back in the Random Forest days. It worked well, except for the French and their overly flexible definition of 'art'. The 'hot dog / not hot dog' bit from HBO's Silicon Valley is a pretty accurate picture of what that was like. I've thought about what it would take to build a CSAM filter, and whether it could be trained entirely within a trusted enclave without external access to the underlying data, and I do believe it is possible.
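For flavor, here's a minimal sketch of the kind of hand-engineered feature that era of filters ran on before feeding a classifier like a Random Forest. The skin-tone RGB box and the 0.4 threshold are illustrative assumptions, not the parent commenter's actual system; real pipelines combined many such features.

```python
def skin_ratio(pixels):
    """Fraction of pixels falling inside a crude RGB skin-tone box.

    `pixels` is a list of (r, g, b) tuples with values in 0..255.
    The bounds below are a commonly cited heuristic, used here
    purely as an illustrative feature.
    """
    def is_skin(r, g, b):
        return (r > 95 and g > 40 and b > 20
                and r > g and r > b
                and (r - min(g, b)) > 15)

    if not pixels:
        return 0.0
    return sum(1 for p in pixels if is_skin(*p)) / len(pixels)


def flag_for_review(pixels, threshold=0.4):
    # Hypothetical cutoff: in practice this ratio would be one of
    # many features fed into a trained classifier, not a hard rule.
    return skin_ratio(pixels) >= threshold
```

A feature like this is cheap enough to run as an edge pre-filter, with anything flagged escalated to a heavier server-side model.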



