This is not true at all, as evidenced by the fact that most games do not get Denuvo removed once they are cracked. The companies that DO remove Denuvo only do so after several years, because of licensing costs now that Denuvo has transitioned to a SaaS model.
A personal experience I see no one talk about with IPv6 is how much more expensive the hardware that handles it correctly is for datacenters. On IPv4 your usual unit of allocation for customers is a /32, which means a simple hashmap (`1.1.1.1 -> destination MAC`) works wonderfully and is cheap: a single memory lookup. For IPv6 the usual unit is a /64, so forwarding becomes longest-prefix match (LPM) instead, which requires parsing the address to group it back into its /64, and a lot of switches and routers that are already expensive still have very low limits on their LPM memory banks.
The expensive switch we have at work, for example, can only hold 3000 IPv6 route entries. If we did /128s it would be basically unlimited, though, because the lookup goes from LPM to exact match, which has much, much more memory available.
It doesn't help that SLAAC, or even DHCPv6 (which barely works reliably in my experience), requires a /64 at minimum; those mechanisms simply don't work otherwise. So for ISPs, which can easily have more than 3000 downstream customers, doing routed IPv6 is a significant increase in hardware cost versus just doing NAT, which they were already doing anyway.
Never mind the actual performance issues that I keep seeing in production deployments.
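The lookup difference the comment describes can be sketched in a few lines, assuming a naive software model (real routers do this in TCAM/ASIC, and the prefixes and next-hop names below are made up for illustration). Exact match is one hash lookup; LPM must consider every covering prefix and keep the longest:

```python
# Sketch: exact-match forwarding (IPv4 /32 -> MAC) vs. longest-prefix match
# (IPv6 /64 routing). Uses only the stdlib `ipaddress` module; all addresses
# and next-hop names are invented for illustration.
import ipaddress

# Exact-match table: one hash lookup, cheap to scale.
exact = {"192.0.2.17": "aa:bb:cc:dd:ee:01"}

# LPM table: prefixes of varying length; the longest covering prefix wins.
routes = {
    ipaddress.ip_network("2001:db8::/32"): "core-uplink",
    ipaddress.ip_network("2001:db8:1234::/48"): "customer-agg",
    ipaddress.ip_network("2001:db8:1234:5678::/64"): "customer-port-7",
}

def lpm(addr):
    """Naive longest-prefix match: scan all routes, keep the longest hit."""
    a = ipaddress.ip_address(addr)
    best = max((n for n in routes if a in n),
               key=lambda n: n.prefixlen, default=None)
    return routes.get(best)

print(exact["192.0.2.17"])            # single memory lookup
print(lpm("2001:db8:1234:5678::1"))   # -> customer-port-7 (the /64 wins)
print(lpm("2001:db8:ffff::1"))        # -> core-uplink (only the /32 covers it)
```

The hardware cost the comment complains about comes from doing this "longest covering prefix" search at line rate, which needs specialized (and scarce) LPM/TCAM memory rather than a plain hash bank.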
We have large networks that are essentially rolling along on autopilot, totally unmanaged, like Lumen's recently sold Quantum Fiber asset that is now owned by the AT&T holding company Forged Fiber 37 LLC.
Still no native IPv6 on this forgotten network; 6RD keeps having weird routability issues, but if you just disable IPv6 everything works fine.
NAT requires remembering every connection pair (IPv4:port for both the internal and external sides of the NAT).
You don't need more than the /64 to know where to send traffic; all of the bits required are still just in the prefix. One route per customer, and the edge deals with addressing issues.
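The contrast the two comments draw can be sketched like this, assuming a toy software model (all addresses, names, and the port-allocation scheme are invented for illustration): NAT state grows with every connection a customer opens, while routed IPv6 needs one table entry per customer prefix regardless of connection count.

```python
# Sketch: per-connection NAT state vs. per-customer IPv6 route state.
import itertools

nat_table = {}                     # (inside_ip, inside_port, proto) -> (public_ip, public_port)
port_pool = itertools.count(1024)  # next free public-side port (toy allocator)

def nat_allocate(inside_ip, inside_port, proto, public_ip="203.0.113.1"):
    """Create (or reuse) a NAT translation for one connection's inside tuple."""
    key = (inside_ip, inside_port, proto)
    if key not in nat_table:
        nat_table[key] = (public_ip, next(port_pool))
    return nat_table[key]

# One customer opening three outbound connections costs three NAT entries...
for port in (40001, 40002, 40003):
    nat_allocate("10.0.0.5", port, "tcp")
print(len(nat_table))   # -> 3

# ...while the same customer on routed IPv6 costs exactly one route entry.
ipv6_routes = {"2001:db8:1234:5678::/64": "customer-port-7"}
print(len(ipv6_routes))  # -> 1
```

The trade-off is where the state lives: NAT state is per-flow but sits in cheap DRAM-backed conntrack tables, while per-customer routes sit in the scarce LPM hardware the original comment describes.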
Why on earth would you ever want to route something smaller than a /64? At the ISP level, you'd only be concerned about your customers' /48 or maybe /56 networks.
OP seems to want to route a /64 or larger to each customer, but can only have 3000 total entries shorter than a /128 (i.e., real prefixes) in the expensive routers his firm owns.
Essentially the hardware doesn't support scaling a /64 or /56 to each customer, leaving OP in a terrible position when it comes to proper IPv6 deployment.
When you look at Ziply Fiber, they seem to be ripping out these types of enterprise-grade routers left and right in favor of simple Linux boxes doing routing. I suspect a large part of why they're doing this is limitations like the ones OP is experiencing.
Both of which are LPM and cause the issue I just mentioned! It's not about "routing lower than a /64"; it's about LPM vs. exact-match memory bank usage (and, for some reason, how much more expensive good hardware that handles LPM is).
I have heard that if you have telemetry disabled the cache is 5 minutes, otherwise 1 hour. No clue how true that is, but my experience (with telemetry enabled) has been the 1-hour cache.
Honest suggestion: ask the agent to figure out a compat shim. The files are JSONL stored in ~/.claude/sessions; you can most likely just reshape them to work with OpenCode or similar. Or keep a separate Claude Code config that points at OpenRouter or another API-style endpoint CC supports; then you can swap accounts and it should still work!
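A minimal sketch of what such a shim might look like. The on-disk format is JSONL (one JSON object per line); the `message`/`role`/`content` field names below are an assumption for illustration only, so inspect a real session file before relying on them:

```python
# Sketch: reshape a JSONL session dump into a plain transcript list.
# Field names (`message`, `role`, `content`) are ASSUMED, not verified
# against the real on-disk schema.
import json

def load_jsonl(text):
    """Parse one record per non-empty line of a JSONL dump."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

def to_simple_transcript(text):
    """Keep only records that look like chat messages, in a flat shape."""
    out = []
    for rec in load_jsonl(text):
        msg = rec.get("message")
        if isinstance(msg, dict) and "role" in msg:
            out.append({"role": msg["role"], "content": msg.get("content", "")})
    return out

sample = ('{"message": {"role": "user", "content": "hello"}}\n'
          '{"type": "meta"}\n'
          '{"message": {"role": "assistant", "content": "hi"}}')
print(to_simple_transcript(sample))
# -> [{'role': 'user', 'content': 'hello'}, {'role': 'assistant', 'content': 'hi'}]
```

From a flat list like that, emitting whatever format another harness expects is mostly a serialization exercise.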
I'm trying that out with Cursor now, but it does take some work to get it to the same state with subagents and to make sure it understands the state of the progress that was interrupted.
But it seems worth the time to get a solid skill defined and running that can do this, given that it's an almost daily event by now.
Maybe a good candidate for a Claude Routine!
"By this time each day, brace for upcoming outage by preparing a comprehensive information package for Cursor to take over your work on active sessions" ...
I don't use any other harness, but I have a cron that picks up changes in my JSONL files every X minutes and writes them to a SQLite database with full-text search. I also have instructions in my user-level claude.md (which applies to all projects) to query that database when I'm asking about previous sessions. That's my primary use case: having it grab specific details from a previous session. I have terrible context discipline, and have built some tools to help me recover from continuing a different task/conversation with the wrong context.
I could search it myself, but haven't needed to. Getting it out of SQLite into some format Cursor understands should be trivial.
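The cron job described above can be sketched with just the stdlib, assuming an FTS5-enabled SQLite build (standard in modern CPython); the table layout and the `session`/`text` field names are made up for illustration:

```python
# Sketch: index JSONL session lines into SQLite with FTS5 full-text search.
# Schema and field names are illustrative assumptions, not the real format.
import json
import sqlite3

def build_index(db_path, jsonl_lines):
    """Ingest JSONL records into an FTS5 virtual table."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS messages "
               "USING fts5(session_id, body)")
    for line in jsonl_lines:
        rec = json.loads(line)
        db.execute("INSERT INTO messages VALUES (?, ?)",
                   (rec.get("session", "unknown"), rec.get("text", "")))
    db.commit()
    return db

db = build_index(":memory:", ['{"session": "s1", "text": "fix the NAT bug"}'])
rows = db.execute(
    "SELECT session_id FROM messages WHERE messages MATCH 'NAT'"
).fetchall()
print(rows)  # -> [('s1',)]
```

Dumping query results from a table like this into whatever format Cursor ingests is then a simple SELECT plus serialization.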
I'm not very good at chess, but I don't get why most of these positions are considered a stalemate. I strategically remove all of the enemy's pieces, leaving only their king against my rook (tower, whatever it's called); the king has nowhere to run. In my eyes that's a checkmate, but the game just calls it a stalemate. It would be a stalemate if I couldn't do anything, but I can capture the enemy king.
That rule caught me out too. In regular chess, if it's your opponent's turn, their only pieces are a king on the 1,8 square and a pawn pressed up against one of your pawns, and you have rooks on the 2,1 and 8,7 squares, that counts as a victory, does it not?
No. That is a draw, assuming it is the turn of the player with only a king to move.
Translating your notation to normal chess notation:
White king on h1, black rooks on a2 and g8, black king in some random other place, white to move.
That is a draw, because white is NOT in check, but has no legal moves. That scenario is called stalemate. If white were in check, it would be checkmate and a win for black. Set it up on any chess analysis board website and it will say the game is a draw.
... and if it weren't the rule, it'd make a lot of mid- and late-game play much safer for the player with the advantage. As it is, it's something they have to watch out for, which constrains them somewhat. You have to win, but not the wrong way, and your opponent can attempt to force you to "win" the "wrong way" (resulting in a stalemate).
I don't know if it works like this on macOS since I don't use macOS, but it's not a simple copy-to-cloud. It actively replaces file handling in those folders, which breaks a bunch of applications, like games that save stuff in Documents/My Games.
Is the "Desktop & Documents Folders" sync option in iCloud on by default? I've never used that feature, and it's a bit buried so it's hard to enable accidentally, but I haven't set up a new Mac from scratch in a long time so I don't know if it is a trap for new users the way OneDrive is.
It isn't on until you explicitly log in to your Apple ID and choose to enable it during setup, and it is very clearly laid out. OneDrive, by contrast, bills itself as essential to Windows's functionality.
Somewhat related, but I was talking about the OneDrive thing yesterday with a few non-tech friends, and it has backfired so many times it's insane. OneDrive takes over folder handling in Windows for some reason, instead of just copying the data to the cloud. Any game you play puts stuff in My Games under Documents, and you just installed Windows? Yep, it's going to sync your entire set of FFXIV patch files into the cloud: that's 180 GB of patches as the launcher downloads the game. tModLoader? Yep, and then it fails to launch the game. Skyrim and Starfield save data? Yep. I literally had to help someone troubleshoot that yesterday, and the fix is just to uninstall OneDrive so the folder becomes a normal folder again. I had the FFXIV issue myself, and the funniest part was getting a notification from Microsoft to upgrade my storage plan because it was full... I don't think people would be so annoyed by OneDrive if it worked like other sync apps that just copied files to the cloud. And maybe if it weren't an annoying pop-up when you installed Windows without setting it up (if we logged in, just set it up automatically and don't bother the user), etc.
Quite the opposite, I'd wager. Now that AI can figure everything out, we can have the AIs do the performance work. Performance work often went against developer experience in terms of languages, patterns, and such. AI doesn't need to care about DevEx, which might also drive a shift toward more memory-efficient languages and patterns. Only time will tell, though.
Blackmagic gives away video editing software that actual professionals use for free. They sell professional-grade equipment that regular consumers can afford. They also offer a ton of training videos teaching you how to edit professionally... for free. A ton of independent filmmakers have started their careers using Blackmagic software and devices.
I wish they would add back support for Anthropic Max/Pro plans by calling the claude CLI in -p mode. As I understand it, that's still very much allowed usage of the claude CLI (you're still using the CLI as it was intended, and it fixes the cache-hit issue, which I believe was the primary reason Anthropic sent them the C&D). I love the UX of OpenCode (I loved setting it up in web mode on my home server and coding from the browser instead of running Claude Code over SSH), but until I can use my Pro/Max subscription I can't go back; the API pricing is way too much for my third-world-country wallet.
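The shim idea amounts to shelling out to the CLI instead of calling the API directly, so the subscription is what gets billed. A minimal sketch, assuming only that a `claude` binary is on PATH and accepts `-p <prompt>` for non-interactive print mode (any further flags would be assumptions):

```python
# Sketch: drive `claude -p` (print mode) from another harness via subprocess.
# Assumes only the `claude` binary and its `-p` flag; nothing else.
import subprocess

def build_command(prompt):
    """Build the argv for one non-interactive claude invocation."""
    return ["claude", "-p", prompt]

def ask_claude(prompt):
    """Run claude in print mode and return stdout (needs the CLI installed)."""
    result = subprocess.run(build_command(prompt),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()
```

A harness would call `ask_claude(...)` wherever it would otherwise POST to an API endpoint; the trade-off is losing streaming and structured tool-use output unless the CLI exposes them.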
They had that?! I saw that some people wrote skills and plugins that call the claude CLI and gemini CLI to still be able to use the subscription.
I also wish this were supported out of the box, something similar to goose CLI providers or ACP providers (https://block.github.io/goose/docs/guides/acp-providers).
But I don't want to spend time testing yet another agent harness, or change my workflow, now that I've somewhat gotten used to one way of working on things (the churn is real).