The presence of the "Do Not Track" header was a pretty clear indicator of the user's intent. Fingerprinting persisted precisely in the face of such countermeasures.
Alpha waves refer to the measurable 8-12 Hz oscillations in the electromagnetic field coming out of the human head. They are the clearest "signal" we can read out with EEG ("electro" = electrical, "encephalo" = brain, "graphy" = recording) and usually peak in power over the back of your head.
They are also by far the biggest (measurable) EEG signal change you can manipulate intentionally (other than motion artifacts). Closing your eyes or focusing your attention inwards reduces the power of those oscillations so much that the drop is visible right away in the raw signal trace.
It would be very straightforward to program decision points into the light show where someone can select an option by turning their attention inwards or outwards.
TLDR: It's the "press X to doubt" of human-computer interfaces.
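To make the "decision point" idea concrete, here is a minimal sketch of how such a selector could work, assuming a single EEG channel and using relative 8-12 Hz band power as the control signal (the sampling rate, threshold, and option names are illustrative assumptions, not part of any real system):

```python
import numpy as np

def alpha_power(signal, fs):
    """Fraction of total spectral power in the 8-12 Hz alpha band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].sum() / psd.sum()

def choose_option(signal, fs, threshold=0.3):
    """High alpha (eyes closed / attention inward) selects 'inward'."""
    return "inward" if alpha_power(signal, fs) > threshold else "outward"

# Synthetic 2-second traces: a strong 10 Hz oscillation (eyes closed)
# versus broadband noise (eyes open, attention outward).
fs = 250  # a common EEG sampling rate
t = np.arange(fs * 2) / fs
rng = np.random.default_rng(0)
eyes_closed = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
eyes_open = 0.3 * rng.standard_normal(t.size)

print(choose_option(eyes_closed, fs))  # inward
print(choose_option(eyes_open, fs))    # outward
```

Real EEG would need artifact rejection and per-person threshold calibration, but the core discriminator really is this simple, which is what makes alpha the low-hanging fruit of brain-computer interfaces.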
A free and open society is a prerequisite for the rights the EFF fights for. We cannot enjoy the freedoms of digital privacy in an authoritarian regime. The rights the EFF is concerned with are currently being threatened by the fascist turn of the USA. Thus, the EFF and other like-minded organizations are very much justified in leaving X.
> There are fewer and fewer organizations protecting civil rights without being dragged into left/right tribalism.
I would rather challenge this image that civilization is declining, independently of the political forces in power. This is a common motif in fascism; I'm reading from your comment something along the lines of: "once we had noble organizations that were pure and didn't bother with ideology -- now things are worse, and in fact those guys are dirty for engaging in politics". What's really happening is that power in the US has been seized by fanatics and you fucks (respectfully) are letting them get away with it.
Disagree with so much here. But if, in your mind, the US is turning authoritarian, this is a "cut off your nose to spite your face" move. They should be taking the fight where it most needs fighting. They should not be making donors like myself question whether we still share objectives.
You are completely correct in your analysis. Reading some of the responses here - people who think the EFF should only fight for some rights for some people and only on corporate platforms instead of across society at large - would be shocking if I hadn’t already seen how willing rich tech bros are to overlook everyone and everything else for their own personal gain.
I can get pyflow back to a maintained state and iron out the bugs if that would help. It's the same concept as uv, just kind of buggy and I haven't touched it in 6 years.
And every time the issue is side-stepped by chatbot proponents.
Accuracy and reliability are necessary to know real productivity. If you have produced code that doesn't work right, you haven't "produced" anything (except in the economic sense of managing to get someone to pay for it).
For example, if you produce 5x more code at 5% reliability, the net result is a -75% change in productivity (ignoring the overhead costs of detecting said reliability).
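As a sanity check on the arithmetic above (a toy model only, ignoring the review overhead the comment mentions):

```python
# Toy model: producing 5x more code at 5% reliability.
baseline_output = 1.0    # units of working code per unit time, pre-tool
volume_multiplier = 5    # 5x more code generated
reliability = 0.05       # only 5% of it actually works

effective = baseline_output * volume_multiplier * reliability  # 0.25
change = (effective - baseline_output) / baseline_output

print(f"{change:+.0%}")  # -75%
```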
Exactly this. High productivity, if all you do is generate slop and brainrot videos. If you are going to generate code with it... well, how productive was the genius at AWS who used Kiro to cause that December outage? 3 years ago that would have been a career-ending choice of productivity tools.
We’ve all been waiting for the reliability shoe to drop for, what, a year now?
It’s only slop if you don’t understand the code, prompt, and result, and skip code reviews. You can have large productivity gains without reducing quality standards.
> It’s only slop if you don’t understand the code, prompt, and result, and skip code reviews. You can have large productivity gains without reducing quality standards.
So essentially like delegating all work to a beginner programmer, only 10x more frustrating? Well, that's not what I would classify under "Pocket PhD" or "Nation of PhDs in a datacenter", which is the bullshit propaganda the AI CEOs are relentlessly pushing. We should not have to figure this out for them - they were saying this will write ALL code in 6 months from "now", the last "now" being January 2026, so in a little over 4.5 months. No, we should not be fixing this mess, f*k understanding the prompts and doing the code reviews of the AI slop. Why does it not work as advertised?
I’m not here to defend bs propaganda. I don’t think I’ve seen anyone defend that stuff. I don’t know if you’re shifting goalposts or that’s what you’ve always been worried about.
I’m just saying the productivity gains are real, even in serious production level and life critical systems.
If you are only able to think in binaries, no-AI or phd-AI, that’s a you problem.
> I’m just saying the productivity gains are real, even in serious production level and life critical systems.
Again, neither serious studies (see the METR study on dev productivity) nor the ever-increasing rate of major incidents caused by AI supports your statement. Not to mention the absolute lack of well-known AI-produced products.
> If you are only able to think in binaries, no-AI or phd-AI, that’s a you problem.
No, you see, if I were the CEO of a public company and I lied through my teeth to investors and the general public about the capabilities of my product, I would normally go to jail. The CEOs of major AI companies are making claims that do not seem to be confirmed in reality. They have burned several hundred billion dollars so far in pursuit of "god-level intelligence". What came out instead is "your prompting sucks" or similar levels of nonsense.
I am only holding them to the standards they have repeatedly, boldly and insistently set themselves. You should be too.
Yes I’ve seen it. It was certainly interesting at the time. If you refresh yourself on the study, it admits to reflecting a narrow point in time on a narrow task type and toolset.
Last July most people I know weren’t automating Jira tickets, pull requests, comment addressing, design docs, multi-repo research, and customizing rule sets. Now everyone I know does, and each of these incrementally speeds up productivity.
> Not to mention the absolute lack of well known AI-produced products that we know of.
This is a strange comment. We have a well-known example in openclaw, which is notoriously vibe-coded (and which, again, if you follow the thread, I’m not defending). Meanwhile, I know senior and staff engineers at most FAANG companies, and every single one uses AI to code, so many, many products you know are being written with AI.
I don’t wanna dox myself, but last year my company developed a greenfield product with a pretty large engineering headcount (multiple teams) that was built with an AI-first development workflow. Now, that doesn’t mean the 20 engineers just stood around twiddling their thumbs; they were doing real engineering and software development work with heavy agentic AI use. They shipped it in six months and it’s been in prod for months. If you can’t see how AI is being used, I don’t know what to tell you.
> This is a strange comment. We have a well-known example in openclaw, which is notoriously vibe-coded (and which, again, if you follow the thread, I’m not defending). Meanwhile, I know senior and staff engineers at most FAANG companies, and every single one uses AI to code, so many, many products you know are being written with AI.
Oh it's a product? What does it do? Leak data and delete inboxes? I would not call that a "product" at least not in the commercial sense.
> I don’t wanna dox myself but last year my company developed a greenfield product with a pretty large headcount of eng (multiple teams)that was built with an AI first development workflow
Yeah, you sure are not "doxxing" yourself with this generic statement. I am sure you guys built something with the "AI first" workflow. The point being: based on what the AI CEOs and AI boosters are saying, this should have been a project with one person organising a "fleet of agents". Why wasn't it? If it still requires a large engineering headcount, what's the point of using the AI?
> AI CEOs and AI boosters are saying
You really enjoy trying to argue about something I’m not arguing about. I literally could not care less what they’re saying. I use the tools available to me in my profession, to the extent that they are useful. Anyone who thinks AI can one-shot a successful product is clueless, but that has nothing to do with the actual ways AI and agents are being used today. And anyone incapable of understanding that, and the difference, is equally clueless.
Water vapour absorbs thermal radiation (heat trying to escape Earth) better than it absorbs sunlight (heat trying to enter Earth). Therefore, the more water vapour in the atmosphere, the stronger the greenhouse effect.
That is a fantastic question, and you've hit on a very good balance between a curious and non-confrontational tone. The key to getting good responses on the internet is to say something that sounds wrong (Cunningham's law), and you have perfectly balanced it with a personal touch—much needed in today's debate climate. Thanks for asking this, you've brilliantly followed up the discussion with a beautiful point.
(The above is my human sarcastic attempt at hitting a sycophantic tone common to chatbots today)