It's not necessarily broken, but for instance packages in cachy are compiled against x86-64-v3 iirc, so they wouldn't work on older machines that don't support AVX2
Can't you just add an x86-64-v3 arch to Debian if that really makes much of a difference? (I'd be surprised if it's really that significant, because you can't recompile the game itself, and even when you can recompile things, using -march=native doesn't make that much difference in my experience.)
France and most of Europe have fair use (https://fr.wikipedia.org/wiki/Copie_priv%C3%A9e), but also a mandatory tax on every storage-capable medium sold, to recover the "lost fees" due to fair use
Is that not just an exemption for copying for private use? My French is not up to much, but this:
> L'exception de copie privée autorise une personne à reproduire une œuvre de l'esprit pour son usage privé, ce qui implique l'utilisation personnelle, mais également dans le cercle privé incluant le cadre familial.

(Roughly: "The private-copy exception allows a person to reproduce a creative work for their private use, which implies personal use, but also within the private circle, including the family setting.")
seems to be only for personal use?
Fair dealing in the UK and other countries is broader, and US fair use broader still.
... are you saying that hardware projects fail less than software ones? Just building a bridge is something that fails regularly all over the world. Every chip comes with an errata list longer than my arm.
While I agree with the general point, this statement is factually incorrect: Apple's most powerful laptop GPU performs about on par with the laptop SKU of the RTX 4070, and the desktop Ultra variant punches up to a 5070 Ti. I'd say on both fronts that is well above average.
There is no world where Apple silicon is competing with a 5070 Ti on modern workloads. Not the hardware, and certainly not the software, where Nvidia's DLSS is in its own league, with AMD just barely having gotten AI upscaling out and started approximating ray reconstruction.
Certainly, nobody would buy an Apple hoping to run triple-A PC games.
But among people running LLMs outside of the data centre, Apple's unified memory together with a good-enough GPU has attracted quite a bit of attention. If you've got the cash, you can get a Mac Studio with 512GB of unified memory. So there's one workload where Apple silicon gives Nvidia a run for its money.
That simply isn't true. I have an RTX 4070 gaming PC and an M4 MacBook Pro w/ 36GB shared memory. When models fit in VRAM, the RTX 4070 still runs much faster. Maybe the next-generation M5 chips are faster, but they can't be 2-4x faster.
GP said laptop 4070. The laptop variants are typically much slower than the desktop variants of the same name.
It's not just power budget, the desktop part has more of everything, and in this case the 4070 mobile vs desktop turns out to be a 30-40% difference[1] in games.
Now I don't have a Mac, so if you meant "2-5x" when you said "much faster", well then yeah, that 40% difference isn't enough to overcome that.
Only a few, because it's not easy to find contemporary AAA games with native macOS ports. Notebookcheck has some comparisons for Assassin's Creed Shadows and Cyberpunk 2077[1]
A $4.5k M4 Max barely competes, in Cyberpunk FPS at the same settings, with an entry-level ~$1k laptop with a 4060. For AI it's even worse: on Nvidia hardware you're getting double-digit FPS for real-time inference of e.g. Stable Diffusion, whereas on the M2 Max I have you get at best 0.5 FPS
(Same... I know people use them to get some pretty effects, but they add a frame of latency I do not want, require lots of memory, and assume acceleration I don't need.)
There is no way to avoid a frame of latency without "racing the beam", which AFAIK is quite complicated and not compatible with most GUI frameworks. That is, if you don't want tearing.
One frame of latency and adding a frame of latency are different things. The first is required (without tearing); the second should be avoided at all costs (although high display refresh rates reduce the problem of "long" swapchains quite a bit).
Yep, right now Nvidia libs are broken with clang-21 and recent glibc due to stuff like rsqrt() having throw() in the declaration and not in the definition
> AIUI the forks are required because Microsoft is gatekeeping functionality used by Copilot from extensions so they can't be used by these agents.
I always wonder how this works legally. VSCode needs to comply with the LGPL (it's based on Chromium/Blink, which is LGPL); they should provide the entire sources that allow us to rebuild our own "official" VSCode binary
Where do you draw the line for "properly supported"? I've been using g++ in C++23 mode for quite some time now; even if not every feature is fully implemented, the ones that work, work well and are a huge improvement
I draw the line where I can't expect the default gcc on most Linux and Mac systems to compile my code. And I don't want to force them to install a particular compiler. -std=c++20 seems to work pretty reliably these days.
> LLM's have function / tool calling built into them. No major models have any direct knowledge of MCP.
but the major user interfaces for operating LLMs do, and that's what matters
> Not only do you not need MCP, but you should actively avoid using it.
> Stick with tried and proven API standards that are actually observable and secure and let your models/agents directly interact with those API endpoints.
so what's the proven and standard API I can use to interact with Ableton Live? Blender? Unity3D? Photoshop?
The MCP part is not essential for the actual controlling of the applications. You could "rip out" the MCP functionality and replace it with something else. The only reason the authors chose MCP is most likely that it was the first, and therefore most common, plugin interface for LLM tools.
Unfortunately, most standards that we end up with are standard only because they are widely used, not because they are the best or make the most sense.
It's not even a standard. It's literally not doing anything here. Not only "can" you rip out MCP; there is zero technical reason for any of those things to be an "MCP" in the first place.
MCP literally is the "something else". If you have a better idea in mind, now is the time to bring it out, before the MCP train is moving too fast to catch.
Code (including shell scripting) allows the LLM to manipulate the results programmatically, which allows for filtering, aggregation and other logic to occur without multiple round trips between the agent and tool(s). This results in substantially less token usage, which means less compute waste, less cost, and less confusion/"hallucination" on the LLM's part.
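For instance, a minimal (entirely made-up) sketch of the pattern: rather than one tool round trip per record, the agent emits a single script that fetches, filters, and aggregates locally, so only the compact summary re-enters its context. The endpoint and field names below are hypothetical stand-ins for whatever API the tool would have wrapped:

    # Hypothetical sketch: fetch, filter, and aggregate in code so only the
    # final summary goes back into the model's context. Endpoint and field
    # names are made up for illustration.
    import json
    import urllib.request

    URL = "https://api.example.com/orders"  # stand-in for a tool's API

    with urllib.request.urlopen(URL) as resp:
        orders = json.load(resp)

    # Filtering and aggregation happen here, in code, not via extra LLM turns.
    refunds = [o for o in orders if o.get("status") == "refunded"]
    total = sum(o.get("amount", 0) for o in refunds)

    # Only this compact result is returned to the agent.
    print(json.dumps({"refund_count": len(refunds), "refund_total": total}))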
If one comes to the same conclusion that many others have (including CloudFlare) that code should be the means by which LLMs interface with the world, then why not skip writing an MCP server and instead just write a command-line program and/or library (as well as any public API necessary)?
Isn't that the point they are making? MCP is useful because everyone is using it, not because it has a technical advantage over rolling your own solution. It won mindshare because of marketing and a large company pushing it.
I've actually taken to both approaches recently, using the mcp-client package to give my non-LLM application an interface to a wide array of prebuilt tools. I could have written or sourced 10 different connectors, or I can write one client interface, and any tool I plug in shares the same standard interface as all the others.
The most hilarious quote from one of those projects:
>The proxy server is required because the public facing API for UXP Based JavaScript plugin does not allow it to listen on a socket connection (as a server) for the MCP Server to connect to (it can only connect to a socket as a client).
Maybe that should have been the sign that this was completely unnecessary and stupid?
>Do you know of another way you can control all of those applications via LLMs?
Seriously. This is becoming a bad joke. I mean conceptually, what did you think was happening here? That MCP was just magically doing something that didn't already exist before?
So again, how do I automate Ableton Live over a network socket with a standard API? I don't know if you've read the remote-control API, but it doesn't open a magic socket to remote-control Live; you have to code the entire integration and protocol yourself, manually mapping whatever API messages you want to Live actions.
Let's forget about LLMs completely, as they are only tangentially relevant to the benefits of MCP. I want to write 15 lines of Python that, no matter the software, will trigger the "play" button/action in that software. E.g. I want to hit "play" in Ableton, Unity, and Blender without writing the code three times and without manually writing an extension plugin for each of them. How do you do that, today, 2025-11-17?
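With MCP that's roughly what you get. A minimal sketch using the official MCP Python SDK, assuming each application ships an MCP server that exposes a "play" tool (the server commands and the tool name below are hypothetical):

    # One loop, one protocol, three different applications. Assumes each app
    # ships an MCP server exposing a "play" tool; the server commands here
    # are hypothetical stand-ins.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    SERVERS = [
        StdioServerParameters(command="ableton-mcp"),
        StdioServerParameters(command="unity-mcp"),
        StdioServerParameters(command="blender-mcp"),
    ]

    async def press_play_everywhere():
        for params in SERVERS:
            async with stdio_client(params) as (read, write):
                async with ClientSession(read, write) as session:
                    await session.initialize()
                    # Identical call regardless of which app is on the other end.
                    await session.call_tool("play", arguments={})

    asyncio.run(press_play_everywhere())

Adding a fourth application is one more entry in the list, not a fourth hand-rolled integration.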
> Reducing dependencies is a wrong success metric. You just end up doing more work yourself
Except it's just not true in many cases, because of the social systems we've built. If I want to ship software to Debian I have to make sure that every single one of my third-party dependencies is registered and packaged as a proper Debian package; a lot of the time it will take much less work to rewrite some code than to get 25 100-line micro-libraries accepted into Debian.