LineageOS doesn't really cut off INTERNET access properly. Graphene's approach is more robust. I still wonder why such an important feature is not in AOSP itself.
> still wonder why such an important feature is not in AOSP itself
Really? Remind yourself who works on Android. Google has been removing functionality that benefits privacy forever, then putting half-baked alternatives buried under tons of settings.
Hey! No problem at all. The mistake was mine... The "create" button was disabled, you needed a tester code... Now you don't! So you can try again if you want :)
It already is: the entire protocol has been reverse engineered, there are tools to automatically deobfuscate the code, and there is already a full reimplementation of Minecraft that also supports servers.
If that's the case, how come nobody seems to be writing improved Minecraft clients?
Ever since I started playing it in the beta days I've been frustrated with how poorly Minecraft performs relative to what it's showing on the screen. (Not that that stopped me from pouring hundreds of hours into the damn thing.)
Well, they do? Sodium, for instance. It's a mod, not a full rewrite; rewriting the client from scratch would mean a lot of boring work like speaking with Mojang's servers. But I understand Sodium basically rips out and replaces the entire graphics pipeline of the client.
Yeah, it was always weird how 32x32x48 extreme reactors lagged the game whenever you looked at them, but the moment you looked away everything was fine.
The client has to authenticate with a central server and present a ticket to the server it wants to connect to; otherwise clients could easily impersonate each other.
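For the curious, the server-side check looks roughly like this. A minimal sketch in Python, based on the community protocol docs (wiki.vg): the hasJoined endpoint, its parameters, and the signed-hex SHA-1 quirk are as documented there, but the helper names are mine and the details should be treated as assumptions, not gospel:

    # Sketch of a game server verifying a client's "ticket" in online mode.
    # Assumes the encryption handshake already produced shared_secret, and
    # that server_pubkey_der is the server's public key in DER form.
    import hashlib
    import requests

    def minecraft_digest(*parts: bytes) -> str:
        # Minecraft hashes with SHA-1 but renders the result as a *signed*
        # hex integer, so negative digests carry a leading minus sign.
        h = hashlib.sha1()
        for p in parts:
            h.update(p)
        return format(int.from_bytes(h.digest(), "big", signed=True), "x")

    def client_is_genuine(username: str, server_id: str,
                          shared_secret: bytes, server_pubkey_der: bytes) -> bool:
        # The client already POSTed this same hash to Mojang's session server;
        # the game server now asks Mojang whether that actually happened.
        server_hash = minecraft_digest(server_id.encode("ascii"),
                                       shared_secret, server_pubkey_der)
        resp = requests.get(
            "https://sessionserver.mojang.com/session/minecraft/hasJoined",
            params={"username": username, "serverId": server_hash},
            timeout=10,
        )
        return resp.status_code == 200  # 200 + profile JSON = genuine login

Turn this round trip off and anyone can claim any username, which is exactly the impersonation problem above.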
Sure, IIRC it even used to just be a setting? online-mode=false
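It still is; in the vanilla server's config file it's just one line:

    # server.properties
    # false = skip the session-server check entirely (players are unauthenticated)
    online-mode=false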
Most servers leave it enabled because preventing player impersonation is pretty important; it keeps people from easily griefing each other. Some piracy servers implemented their own auth on top.
Most likely you already have the mitigation in place, i.e. disabling the XTheadVector extension.
Regular distributions don't enable it, since it's a non-standard, incompatible vendor extension based on a draft spec.
When I wanted to benchmark their implementation last year, I patched a kernel to enable it, and needed to consult the open-source part of the core [0] to figure out that they placed the enable CSR bit in a different location than the final ratified spec does. [1]
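If you want to double-check what your kernel advertises, here's a quick heuristic sketch; the exact spelling of the "isa" line in /proc/cpuinfo varies by kernel version, and xtheadvector only shows up at all on kernels with vendor-extension support:

    # Sketch: report whether /proc/cpuinfo advertises ratified V or the
    # vendor XTheadVector extension. Simple string parsing; the exact
    # format of the "isa" line depends on the kernel version.
    def vector_status():
        with open("/proc/cpuinfo") as f:
            isa_strings = {line.split(":", 1)[1].strip()
                           for line in f if line.startswith("isa")}
        for isa in sorted(isa_strings):
            base, _, exts = isa.partition("_")  # e.g. "rv64imafdcv", "zicsr_xtheadvector"
            std_v = "v" in base[4:]             # ratified V in the single-letter part
            thead_v = "xtheadvector" in exts
            print(f"{isa}\n  standard V: {std_v}, XTheadVector: {thead_v}")

    vector_status()

If neither shows up, you're running the mitigated configuration the advisory describes.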
> No, software updates or patches cannot fix this vulnerability because it is a hardware bug. The only mitigation is to disable the vector extension in the CPU, which unfortunately impacts the CPU’s performance.
A USB-C port that supports only USB 2 data and power needs just a few resistors on some pins to trigger legacy operation and disable high-current/high-voltage modes. All the extra bits are what jacks up the cost.
USB 3 and alt modes require extra signal lines and tighter tolerances in the cable.
High-voltage/high-current operation requires PD negotiation (over the CC pins, AFAIK).
Data and power role swaps require muxes and dual-role controllers.
That's all the stuff that makes USB-C a pain in the ass, and it's all the sort of thing RPi Nanos don't support.
You're confusing USB-C and USB 3.1+. USB-C is just the physical connector spec. You can design a cheap device that only supports USB 2 if you just connect ground, VBUS, D+ and D- and, *gasp*, add two resistors. It will work just as well as a micro-USB plug.
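Those two resistors are the Rd pull-downs on the CC pins; the source side advertises how much current it offers purely through the value of its Rp pull-up, which the divider turns into a voltage a smarter sink could read. A little sketch with the nominal values from the Type-C spec's pull-up-to-5V column (assumed here; double-check your spec revision):

    # Sketch of the CC resistor divider a USB 2-only Type-C sink relies on.
    # Sink: Rd = 5.1 kOhm pull-down on each CC pin. Source: Rp pull-up whose
    # value advertises the offered current. Nominal pull-up-to-5V values.
    RD = 5.1e3
    V_PULLUP = 5.0
    RP = {"default USB power": 56e3, "1.5 A": 22e3, "3.0 A": 10e3}

    for advert, rp in RP.items():
        v_cc = V_PULLUP * RD / (RD + rp)  # simple voltage divider
        print(f"{advert:>17}: CC sits at about {v_cc:.2f} V")

A dumb USB 2 sink never even has to read that voltage; the pull-downs alone are what tell a compliant source to apply VBUS at all.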
Completely valid, but I would like to think the org is still designing with accessibility for newbies in mind.
Like you said, the connector does not have to follow the standard. I have seen HDMI ports being used to carry PCIe signals, among other things (not a good idea, but here is one such device: https://pipci.jeffgeerling.com/cards_adapter/pce164p-no6-ver...). It is still non-standard behaviour.
https://www.forbes.com/sites/haileylennon/2021/01/19/the-fal...