Why does this look horribly wrong to me? Why does a union need a fundraiser? Shouldn't they have tightened their belts and built up a significant war chest for this? Or taken extra fees from new members?
It does feel wrong because in our society having access to more financial resources often translates to better representation in the courtroom. This is similar to how donating to organizations like the EFF can provide more justice to those who are not multibillion-dollar corporations.
I really wish they would also let you disable those very annoying modal popups announcing yet-another-chatbot-integration twice a week: my company is already paying for your product, just let me do my work ffs...
That's just, like, your opinion, man. I see it through rose-coloured glasses, as a poem from more naive times, back when some folks still had some hope... This was way before vulture capitalism fucked everything up, you know. Or at least that's how I remember it, but I was like 10.
Not everyone was into this hopeful vision of cyberspace though, Masters of Doom comes to mind.
You’re right (as someone a bit older but also with rose-tinted glasses).
There was a feeling of hope on the Internet at the time that this was a communication tool that would bring us all together. I do feel like some of that died around 9/11 but that it was Facebook and the algorithms that really killed it. That is where the Internet transitioned from being about showcasing the best of us to showcasing the worst of us. In the name of engagement.
retail investors? no way. The fever-dream may continue for a while but eventually it will end. Meanwhile we don't even know our full exposure to AI. It's going to be ugly and beyond burying gold in my backyard I can't even figure out how to hedge against this monster.
Looks to me that the browser version requires the targeted website to be iframed into the malicious site for this to work, which is mitigated significantly by the fact that many sites today—and certainly the most security-sensitive ones—restrict where they can be iframed via security headers. Allowing your site to be loaded in an iframe elsewhere is already a security risk, and even the most basic scans will tell you you're vulnerable to clickjacking if you do not set those headers.
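To make the mitigation concrete, here is a minimal sketch (my own illustration, not from the original post) of the two response headers that block a site from being iframed, using Python's stdlib `http.server` and a request to confirm they are sent:

```python
# Sketch: a tiny HTTP server that refuses to be framed, via the legacy
# X-Frame-Options header and its modern CSP equivalent. Assumed setup
# for illustration only.
import http.server
import threading
import urllib.request

class NoFrameHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        # Legacy header: browsers refuse to render this page in any frame.
        self.send_header("X-Frame-Options", "DENY")
        # Modern equivalent; 'self' instead of 'none' would allow same-origin framing.
        self.send_header("Content-Security-Policy", "frame-ancestors 'none'")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), NoFrameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/")
print(resp.headers["X-Frame-Options"])          # DENY
print(resp.headers["Content-Security-Policy"])  # frame-ancestors 'none'
server.shutdown()
```

This is exactly what the basic scans mentioned above check for: a response missing both headers gets flagged as clickjackable.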
I also wanted to add a bit more context regarding some of these claims.
For example, back in March Dario Amodei, the CEO and cofounder of Anthropic, said:
> I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code
> By late 2029, existing SEZs have grown overcrowded with robots and factories, so more zones are created all around the world (early investors are now trillionaires, so this is not a hard sell). Armies of drones pour out of the SEZs, accelerating manufacturing on the critical path to space exploration.
> The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia
You should not believe any of the claims genAI companies make about their products. They just straight-up lie. For example:
> Several PhD-level reasoning models have been released since September of 2024
This is not true. What's true is that several models have been released that the companies have claimed to be "PhD-level" (whatever that means), but so far none of them have actually demonstrated such a trait outside of very narrow and contrived use cases.
If there are such models, why are there no widely discussed full thesis works produced entirely by them? Surely getting dozens of those out should be trivial if they are that good.
Well, would the AI graduate students also be required to be jerked around by professors, pass hundreds of exams, present seminars, teach, do original research, write proposals, and deal with bureaucracy, too? Maybe this would solve the "hallucination" issues?
I'm going to laugh and shit my pants, in that or some other order, when we realize the models that produced ALL the code have sleeper protocols built into code that's now maintained by AI agents that might also be infected with sleeper protocols. Then later, when 50 messages on Claude cost $2,500, every company in the world will either experience exponential cost increases or spend an exponentially large amount of capital hiring and re-hiring engineers to "un-AI'ify" the codebase.
Yeah, that's my understanding as well: unfortunately, laws seem to only apply to us peasants...
One difference, though, is that removing pirated files from their servers was trivial for Spotify, while LLM companies like OpenAI and Google would have to retrain their models from scratch, which would be extremely expensive. So what they do instead is filter out the problematic results that are too obvious.
https://actionnetwork.org/fundraising/support-rockstar-worke...