It’s not the technology I’m dismissive about. It’s the economics.
25 years ago I was optimistic about the internet, web sites, video streaming, online social systems. All of that. Look at what we have now. It was a fun ride until it all ended up “enshittified”. And it will happen to LLMs, too. Fool me once.
Some developer tools might survive in a useful state on subscriptions. But soon enough the whole A.I. economy will centralise into 2 or 3 major players extracting more and more revenue over time until everyone is sick of them. In fact, this process already seems to be happening at a pretty high speed.
Once the users are captured, they’ll orient the ad-spend market around themselves. And then they’ll start taking advantage of the advertisers.
I really hope it doesn’t turn out this way. But it’s hard to be optimistic.
Unlike with the internet, however, there is a way out: local, open-source LLMs getting good. I really hope they do, because enshittification does seem unavoidable if we depend on commercial offerings.
Well, the "solution" to that will be the GPU vendors focusing solely on B2B sales because it's more profitable, keeping GPUs out of the hands of average consumers. There are leaks suggesting that Nvidia will gradually hike the prices of its 5090 cards from $2000 to $5000 due to RAM price increases ( https://wccftech.com/geforce-rtx-5090-prices-to-soar-to-5000... ). At that point, why even bother with the R&D for newer consumer cards when you know that barely anyone will be able to afford them?
It’s the third stage in the process. First the platform is good to users. Then it’s bad to users but good to business customers. Third, and finally, it’s bad to both users and business customers. Now it’s only the shareholders that are winning.
> We further urge the machine learning community to act proactively by establishing robust design guidelines, collaborating with public health experts, and supporting targeted policy measures to ensure responsible and ethical deployment
We’ve seen this play out before, when social media first came to prominence. I’m too old and cynical to believe anything will happen. But I really don’t know what to do about it at a personal level. Even if I refuse to engage with this content, and am able to identify it, and keep my family away from it…it feels like a critical mass of people in my community/city/country are going to be engaging with it. It feels hopeless.
I tend to think that it leads to censorship, and then to censorship at a broader level in the name of protecting our kids. See social networks, where you now have to hand over your ID card to "protect kids".
The better way in that case is to educate the kids / people, automatically flag potentially harmful / disgusting content, and let the owner of the device set up the level of filtering they want.
Likewise with LLMs: they should be somewhat neutral in default mode, but they should never refuse a request if the user asks.
Otherwise the line between technology provider and content moderator is too blurry, and tomorrow SV people are going to abuse that power (or be coerced by money or politics).
At a personal / parental level, time limits (like you can set with a web-filtering device for TikTok) and content policies would solve it, along with spending as much time as possible with the kids and talking to them so they don’t become dumber and dumber due to short videos.
But I’m totally opposed to doing it at the public-policy level: “now you have the right to watch pornography, but only after you give your ID to prove you are an adult” (this is already the case in France, for example).
It can quickly become: “now, to watch / generate controversial content, you have to show ID”.
That doesn't work when the Chinese produce uncensored open weight models, or ones that can easily be adapted to create uncensored content.
Censorship for generative AI simply doesn't work the way we are used to, unless we make it illegal to possess a model that might generate illegal content, or that might have been trained on illegal data.
> Censorship for generative AI simply doesn't work the way we are used to, unless we make it illegal to possess a model that might generate illegal content, or that might have been trained on illegal data.
Censorship doesn't work for stuff that is currently illegal. See pirated movies.
I don't know much about this, but does "proximity pairing" use some open standard API that's part of the bluetooth spec? Are there any examples of other devices using something like this?
Part of the appeal of AirPods is how seamless they are to pair and share between devices. The UX of Bluetooth headphone pairing and device switching before AirPods came along was atrocious.
Is this a case of Apple arbitrarily locking out third parties, or is it a case of Apple doing the work to get something to work nicely and now being forced to give competitors access?
I don't know how proximity pairing works in Apple land. My wife uses Apple devices.
But between my Android phone and my contractor-issued Windows laptop, the $20 headphones I use just work. They connect to both of them because of multipoint pairing.
If one of the devices is playing, say, a YouTube video, the other doesn't take over the sound even if I start playing music there. And if I pause the YouTube video on one device, the other is free to play sounds.
It's seamless and intuitive.
I should also try pairing them to my Linux workstation. If that works too, I'd be impressed.
Several Sony models are also very good, being built with Samsung panels and their own in-house image processing which is some of the best in the industry. Their TVs run Android and support offline firmware updates, too, which is why they're usually what I buy.
I uninstalled it after about half an hour of use when it became clear the app kept pushing me to watch videos with Andrew Tate (with him on the top half of the screen and random racing games on the bottom half). It’s dystopian.
Does that work with the DRM from streaming apps, though? Can you get 4K and Atmos with Netflix or Disney+ on that hardware? And an easy remote and UI?