stubish's comments | Hacker News

If only there were some sort of policy encouraging diversity that could tackle ageism in hiring practices.

EOE and anti-ageism laws have existed for decades. How has that worked out?

Look at Agile, a movement that took a good idea and twisted it into the opposite.


The solution to dead laws is to remove any chance of them being enforced? The solution to ageism is to make policies that attempt to enforce those laws and human rights illegal? Really? Thankfully nobody has made it illegal to do Agile correctly rather than following the tautological Official Agile Process(tm).

Piracy does not hurt sales because it has a lot of friction. There have been zero studies on piracy in a low-friction environment, because there was no need: in countries where pirated Video CDs were endemic and not policed, the effect was completely obvious, and distributors didn't even bother putting product onto the market. Or back in the days when mp3 music sharing apps became mainstream and got integrated with music players. Or when Popcorn Time looked likely to replace every streaming service in existence. If something like the Internet Archive's library became low friction (click the button and you are reading on your e-reader) and were declared legal (avoiding social stigma), do you honestly believe this would not become the default, normalized way of 'buying' books?

GOG shows that paying for DRM-free games in a country where copyright was unenforced and piracy offered a better UX can still be profitable: https://en.wikipedia.org/wiki/GOG.com#Launch_of_Good_Old_Gam..., https://www.youtube.com/watch?v=ffngZOB1U2A

I think there is an important difference between (1) being able to buy the same (DRM-free) content, supporting the authors, and (2) the copies on the Internet Archive or the like being the only source available.

I think many will choose the former but there are so many cases where there is no option provided.


Everyone forgot about LimeWire and the masses installing every trojan known to man.

LimeWire made it easy, so people used it.

The same would be true today, I assume.


Large retailers rely on all sorts of psychological tricks and UI work to steer you to the most profitable transactions, some of which we have labelled Dark Patterns. Not just Amazon, not just online. All that is gone with agents. At least until they work out how to manipulate the agents as well as they manipulate a person, most likely by buying control of the popular agents before their competitors do.

Yup. Retailers sell a ton of popular goods at or below cost to attract customers, knowing that they will load their carts with other high-margin items to make up for it. If you remove these upsells and other temptations then the store takes a big hit.

If anything it seems like the agents would be easier to trick (at least right now).

Sure, but that requires work on the part of Amazon to set things up so they can be tricked. That work costs time and money. Amazon may not want to do that work at all. Or they may want to ban agents until after they've done that work.

Most of them, sure. But Agent Smith is cut from a different cloth.

A general strike is many industries striking in solidarity. Stock up on essentials, paralyze everything, and hold your nerve. They can't even sack anyone if the HR departments are missing or there's no power to the buildings.

General strikes are effectively illegal in the US via the Taft-Hartley Act, unless they can somehow be arranged without involving unions.

Legalized violence, which is authorized against union-organized general strikes, has historically been intermittently effective at breaking up union strikes, and in any case would likely cripple such a strike enough that it wouldn't help the ATCs.


If a malicious user is attacking a site via an agent, the current solution is to block the agent and everyone else using that agent, because the valid requests are indistinguishable from the malicious requests. If the agent passes on a token identifying the users, you can just block agent requests using the malicious user's token.
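A rough sketch of how that could look, assuming a Flask app and a hypothetical X-Agent-User-Token header (no particular agent vendor's scheme is implied):

    # Block only the misbehaving user's token, not the whole agent.
    from flask import Flask, request, abort

    app = Flask(__name__)

    BANNED_TOKENS = {"token-of-abusive-user"}  # fed by your abuse detection

    @app.before_request
    def reject_banned_agent_users():
        token = request.headers.get("X-Agent-User-Token")  # hypothetical header
        if token in BANNED_TOKENS:
            abort(403)  # other users of the same agent are unaffected

    @app.route("/")
    def index():
        return "normal content"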

Making money by selling pizzas? Maybe. Big chains make money by selling high-profit items like drinks or fries, or by getting you to upsize, with a whole sales process, A/B-tested menus, and marketing to encourage you to choose the profitable options. They lose all that if an agent just makes an order: 'large pepperoni kthx bye'. Probably fantastic from a consumer point of view, but lots of businesses are going to hate it.

The National Guard and military will of course be available to step into a policing role, in cities and at the borders.

Yes, it is factually incorrect.

Now you have two unsubstantiated opinions contradicting each other.


The traditional approach is a link to the tarpit that the bots can see but humans can't, say using CSS to render it 0 pixels in size.
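Roughly like this, as a sketch assuming a small Flask app (the /tarpit route name and the inline CSS are made up for illustration):

    # Serve a link no human sees; anything that follows it gets flagged.
    from flask import Flask, request, abort

    app = Flask(__name__)
    FLAGGED_IPS = set()

    @app.route("/")
    def index():
        # Zero-sized and hidden from assistive tech and keyboard focus, so only
        # naive crawlers that follow every href will ever request /tarpit.
        return (
            '<a href="/tarpit" rel="nofollow" aria-hidden="true" tabindex="-1" '
            'style="display:block;width:0;height:0;overflow:hidden">trap</a>'
            '<p>Real content.</p>'
        )

    @app.route("/tarpit")
    def tarpit():
        FLAGGED_IPS.add(request.remote_addr)
        abort(403)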


Please keep in mind that not all humans interact with web pages by "seeing". If you fool a scraper you may also fool someone using a screen reader.


I bet the next generation approach, if the crawlers start using CSS, is "if you're a human, don't bother clicking this link lol". And everyone will know what's up.


AI bots try to behave as much like human visitors as possible, so they wouldn't click on 0px-wide links, would they?

And if they would today, it seems like a trivial thing to fix - just don't click on incorrect/suspicious links?


Ideally it would require rendering the CSS and checking the DOM to see whether the link is 0 pixels wide. But once bots figure that out I can still left: -100000px those links, or z-index: -10000 them, to hide them in other ways. It's a moving target: how much time will the LLM companies waste decoding all the ways I can hide something before I move the target again? Now the LLM companies are in an expensive arms race.
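The bot-side check isn't much code either. A minimal sketch with Playwright (assuming "pip install playwright" and "playwright install chromium" have been run; the visible_links helper is hypothetical), which catches the zero-size and off-screen cases but not z-index tricks:

    # Keep only links that actually render with a visible on-screen area.
    from playwright.sync_api import sync_playwright

    def visible_links(url: str) -> list[str]:
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto(url)
            hrefs = []
            for link in page.query_selector_all("a[href]"):
                box = link.bounding_box()  # None for display:none elements
                # Skip zero-sized links and links pushed off-screen (left: -100000px etc.)
                if box and box["width"] > 0 and box["height"] > 0 and box["x"] >= 0 and box["y"] >= 0:
                    hrefs.append(link.get_attribute("href"))
            browser.close()
            return hrefs

Every extra rendering pass makes the crawl more expensive, which is exactly the arms race the parent describes.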


All it takes is a full-height screenshot of the page coupled with a prompt similar to 'btw, please only click on links visible on this screenshot, that a regular humanoid visitor would see and interact with'.

Modern bots do this very well. Plus, the structure of the Web is such that it is sufficient to skip a few links here and there; most probably another path toward the skipped page will exist that the bot can go through later on.


This pushes the duty to run the scraper manually, ideally with a person present somewhere. Great if you want to use the web that way.

What is being blocked here is violent scraping and, to an extent, the major LLM companies' bots as well. If I disagree that OpenAI should be able to train off of everyone's work, especially if they're going to hammer the whole internet irresponsibly and ignore all the rules, then I'm going to prevent that type of company from being profitable off my properties. You don't get to play unfairly for the unfulfilled promise of 'the good of future humanity'.


That would be an AI agent, which isn't the problem (for the author). The problem is the scrapers gathering data to train the models. Scrapers need to be very cheap to run and are thus very stupid, and certainly don't have "prompts".


"all it takes", already impossible with any LLM right now.


If I can do it locally using a free open-weights LLM, from a low-end prosumer rig (evo-x2 mini-pc w/ 128GB VRAM)... scraping companies can do it at scale much better and much cheaper.


The 0px rule would be in a separate .css file. I doubt that bots load .css files for .html files; at least I don't remember seeing this in my server logs.

And another "classic" solution is to use white link text on a white background, or a font with zero-width characters, all stuff which is rather unlikely to be analysed by a scraper interested primarily in text.


Yes, I was thinking that this couldn't possibly work for human-scale stuff like leeches, with electrostatic forces completely unnoticeable a meter away and unable to budge the larger mass in any case.

