Hacker News

Why are these rich assholes so obsessed with tracking everyone?

> Tools for Humanity bragged about many large partnerships that should make any privacy advocates shiver in dread: the Match Group dating apps conglomerate (Tinder, OkCupid, Hinge, Plenty of Fish), Stripe, and Visa are some of them.

Visa and Stripe being involved in this should indeed make everyone shiver and push back against this. Altman is not a good person and this company has already had its unethical practices exposed[0].

[0] https://icj-kenya.org/news/worldcoin-case-postponed-amid-con...



Because they have to be able to sell you the cure to the disease they created: pretty soon nearly all content online will be AI-generated, and the only way you'll be able to tell if you're talking to a human is with cryptographically-sealed boot chains, verified with remote attestation, and with this eyeball device at the end to make sure there's a human there.


In the future:

Easy Remote Job Opportunity! Pay is $1/hr. Perfect for retirees, disabled people, and even kids! Requirements: have an eyeball. Duties: whatever you want, except when this device beeps, look into the camera.

Is Amazon's Mechanical Turk, or whatever service pays people to solve captchas, still a thing?


I didn't even think of this workaround. Genius in its simplicity.


It's about time my dog got a job.


> the only way you'll be able to tell if you're talking to a human is with...

Or the tried-and-true method of trusting only friends, friends of friends, recommendations from friends, etc.


Yes, but the point is not to ensure that you know you're interacting with a human. The point is for whoever paid for the ad to know that a real human is seeing it (and not, for instance, an AI in a Docker container).


That's a good point, thanks. I interpreted "sell you the cure to the disease they created" as selling it to the public, but I'm sure advertisers would love to make Fifteen Million Merits a reality.


Control. You have to have a good inventory to apply and audit controls.


Information is a prerequisite for manipulating the masses. The writing has been on the wall on that front since at least Cambridge Analytica.


In the AI-driven future we are heading into, being able to tell an AI bot from a human might become a valuable good.


Why? And how would that even work? Just because an online account is tied to a verified real human doesn't guarantee that the content isn't coming from an AI bot.


It's a blockchain, so you can keep a permanent record of what a person is doing and when and where they got caught violating the rules. It won't stop the infractions from happening at first, but it will make it very easy to keep them from happening again. And if this gets widespread, people might think twice before risking their blockchain personhood certificate.


You're missing the point. How would they get caught violating the rules in the first place? You (and the HN admins) have no way of knowing whether I typed this comment myself or an AI bot used my account to do it.


It will be trivial, however, to hold you liable for the content no matter its source, when it is tied to your IRL identity instead of a pseudonym.


The “person” could just be copying and pasting AI output. Eye scanning can’t stop that at all.


Maybe it would allow you to rate-limit and/or ban by the human, which is probably more effective than banning by IP address.

(Obviously Worldcoin is shady as shit, I'm not defending it.)


So they're going to get rich from introducing both the problem and its nightmare of a solution.


Or they just want to sell us minority report style ads.


It makes me think of the idea of legibility[1]: it's hard to understand incredibly large and complicated systems, so the solution is to simplify, and then enforce that simplification. From a high-level perspective, it can seem reasonable to surveil everyone "to help them." I think someone really can delude themselves, utilitarian-style, into thinking it's necessary.

Now, if we designed technology for humans, we'd recognize that most humans rely on local networks of trust. E.g., I talk to my friend in person, she tells me her Discord handle, and now we've established trust. In addition, trust is something that's built gradually, not granted all at once in a EULA[2].

[1]: https://www.ribbonfarm.com/2010/07/26/a-big-little-idea-call...

[2]: https://ruben.verborgh.org/blog/2024/10/15/trust-takes-time/



