Same!!!
I do connect it every so often to update it, if a newer firmware version is actually worth it…
The extra upside to not accepting the terms is that none of the Samsung bloatware apps even start - the TV runs much faster than with just the wifi disconnected.
It’s not uncommon to eat nettles in the PNW! I knew people who would fold the leaves a specific way to break the stingers off so you could even eat the leaves raw.
Not the .IA files from old Sinar digital backs! Those are based on DOOM PWADs (lol). But otherwise this is mostly true for nearly every other format, as far as I am aware.
I really hate how often modern UIs forget that sometimes a folder might have _literally Cthulhu_ (a.k.a. 4 billion 1kb text files) in it and will absolutely fall apart - looking at you, Cursor.
I got my taste of "full" folders with NT during the early days of DVD programming. The software would write everything into a single directory, creating at least 3 files per source asset. We were working on a specialty DVD that had 100k assets. The software+NT would crash crash crash. When the project came through the next year, we were on a newer version of the software running Win2k, and performance was much improved on the same hardware. I haven't had to do anything with a folder that full in years, but I'd assume it is less of a chore than in the days of NT. Then again, it could have gotten better, but then regressed as well. Really, I'm just happy I don't get anywhere close to that, so I don't have to find out.
The spiciest file I've ever had to deal with was an 18TB text file with no carriage returns/line feeds (all on one line). It was a log generated by an older Nokia network appliance. I think I ended up 'head'ing the first 2MB into another file and opening that, then I could grok (not the AI) the format and go from there.
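That first step can be sketched in Python (filenames here are hypothetical; this is just the equivalent of `head -c` on a 2MB prefix, without ever touching the other 18TB):

```python
def sample_prefix(src_path: str, dst_path: str, n_bytes: int = 2 * 1024 * 1024) -> int:
    """Copy only the first n_bytes of a huge file into dst_path.

    read(n) stops after n bytes, so the full file is never loaded
    into memory. Returns the number of bytes actually written.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        chunk = src.read(n_bytes)
        dst.write(chunk)
        return len(chunk)
```

The small sample file can then be opened in a normal editor to work out the event format.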
Oof, that sounds nasty. Did it turn out to be a standard-ish format with a separator, where you break the line after x number of separators? I really dislike having to parse a log like that before just being able to read it.
From memory there was no dedicated event separator, it just went straight from the last character of the event to the first character of the timestamp of the next event. I think there was an XML payload in the event somewhere too?
Fortunately I didn't have to edit the log in-place as we were ingesting it into Splunk, so I just wrote some parsing configuration and Splunk was able to munch on it without issue.
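Outside Splunk, that kind of boundary detection can be sketched in Python. Assuming (hypothetically) that each event starts with an ISO-style timestamp, a zero-width lookahead split recovers the events without consuming the timestamps:

```python
import re

# Hypothetical event format: no separator at all, but each event begins
# with a timestamp like 2024-01-01T00:00:00. Splitting on a lookahead
# breaks *before* each timestamp while keeping it inside its event.
TS_BOUNDARY = re.compile(r"(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})")

def split_events(one_line_log: str) -> list[str]:
    """Split a separator-less, single-line log into events at timestamp boundaries."""
    return [event for event in TS_BOUNDARY.split(one_line_log) if event]
```

This is essentially what a line-breaking rule in a log ingester does; the real log's timestamp format would dictate the actual regex.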
Not really. The amount of analog hardware on die is minimal - it's little more than a few multiplexers surrounding the standard sorts of analog peripherals you'd see on a microcontroller like ADCs/DACs/comparators.
All the various bits that get tacked on for prefetch and branch prediction are fairly large too, given the amount of random caching, which I think is often what people account for when measuring decode power usage. That’s going to be the case in any arch besides something like a DSP without any kind of dynamic dispatch.
For the cores working hardest to achieve the absolute lowest CPI running user code, this is true. But these days the computers have computers in them to manage the system, and these kinds of statements aren’t necessarily true for those “inner cores” that aren’t user-accessible.
“RTKit: Apple's proprietary real-time operating system. Most of the accelerators (AGX, ANE, AOP, DCP, AVE, PMP) run RTKit on an internal processor. The string "RTKSTACKRTKSTACK" is characteristic of a firmware containing RTKit.”
With a signed kernel and secure boot it should in principle be similar to Windows 11? But with DMA-based hacks on the rise I'm not sure it matters either way.