Same!!! Every so often I do connect it to update, if a newer firmware version is actually worth it… The extra upside to not accepting the terms is that none of the Samsung bloatware apps even start - the TV runs much faster than with just the wifi disconnected.

It’s not uncommon to eat nettles in the PNW! I knew people who would fold the leaves a specific way to break the stingers off, so you could even eat the leaves raw.

I would assume it denatures the chemicals the sting delivers.

Well, only in the UK. If you have ADP (Advanced Data Protection, banned in the UK) turned on, then as far as anyone knows it’s not compromised.

If the NSA hasn't compromised them then what are they even spending their budget on?

Not the .IA files from old Sinar digital backs! Those are based on DOOM PWADs (lol). But otherwise, yes, this is mostly true for nearly every other format as far as I’m aware.

I really hate how often modern UIs forget that sometimes a folder might have _literally Cthulhu_ (a.k.a. 4 billion 1 KB text files) in it and will absolutely fall apart - looking at you, Cursor.

It's not even just UIs - most filesystems hate tons of small files in a single directory. It really shouldn't be done.
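
One common workaround (the classic layout trick, not specific to any one filesystem) is to shard by hash prefix so no single directory ever holds millions of entries, much like git's .git/objects layout. A minimal Python sketch - the names and paths here are illustrative:

    import hashlib
    import os

    def sharded_path(root: str, name: str) -> str:
        # Spread files across up to 256*256 subdirectories keyed by hash
        # prefix, so each directory stays small even with millions of files.
        h = hashlib.sha1(name.encode()).hexdigest()
        return os.path.join(root, h[:2], h[2:4], name)

    path = sharded_path("data", "record-123.txt")  # -> data/xx/yy/record-123.txt
    os.makedirs(os.path.dirname(path), exist_ok=True)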

I got my taste of "full" folders with NT during the early days of DVD programming. The software would write everything into a single directory, creating at least 3 files per source asset, and we were working on a specialty DVD that had 100k assets. The software+NT would crash, crash, crash. When the project came through again the next year, we were on a newer version of the software running Win2k, and performance was much improved on the same hardware. I haven't had to do anything with a folder that full in years, but I'd assume it's less of a chore now than in the NT days. Then again, it could have gotten better and then regressed. Really, I'm just happy I don't get anywhere close to that anymore.

The spiciest file I've ever had to deal with was an 18TB text file with no carriage returns/line feeds (all on one line). It was a log generated by an older Nokia network appliance. I think I ended up 'head'ing the first 2MB into another file and opening that, then I could grok (not the AI) the format and go from there.
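
(The chunked peek itself is a one-liner in most languages - a minimal Python sketch of the same idea, with a hypothetical path standing in for the real log:)

    # Copy only the first 2 MB of an enormous single-line file into a
    # sample that an editor can actually open. The path is hypothetical.
    CHUNK = 2 * 1024 * 1024

    with open("/data/appliance.log", "rb") as src, open("sample.log", "wb") as dst:
        dst.write(src.read(CHUNK))  # never reads more than CHUNK bytes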

Oof, that sounds nasty. Did it turn out to be standard-ish formatting, with a separator where you break the line after every x separators? I really dislike having to parse a log like that before I can even read it.

From memory, there was no dedicated event separator; it went straight from the last character of one event to the first character of the next event's timestamp. I think there was an XML payload in the event somewhere too?

Fortunately I didn't have to edit the log in-place as we were ingesting it into Splunk, so I just wrote some parsing configuration and Splunk was able to munch on it without issue.
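
For anyone who has to do something similar without Splunk: if the only delimiter is the next event's timestamp, you can stream the file and split on a timestamp lookahead. A minimal Python sketch - the timestamp pattern is made up, and the real appliance format would differ:

    import re

    # Made-up timestamp shape; adjust the regex to the real format.
    TS = re.compile(rb"(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})")

    def events(path, chunk_size=1 << 20):
        # Yield one event at a time from a single-line log, splitting
        # wherever the next event's timestamp begins.
        buf = b""
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                buf += chunk
                parts = TS.split(buf)
                buf = parts.pop()  # the tail may be a partial event; keep it
                yield from (p for p in parts if p)
        if buf:
            yield buf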

True - I have also made this mistake: 'too many open files' warnings across different VMs, just from one VM listing that dir!
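
For what it's worth, on the client side Python's os.scandir streams entries lazily instead of materializing one giant list, and used as a context manager it releases the directory handle promptly (it won't fix server-side limits, though). A minimal sketch:

    import os

    def count_entries(path: str) -> int:
        # scandir yields entries one at a time and, as a context manager,
        # closes the underlying directory handle as soon as we're done.
        n = 0
        with os.scandir(path) as entries:
            for _ in entries:
                n += 1
        return n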

Or 4 billion levels of subfolders, like npm (I haven't used it in years now, but maybe they've fixed that since).

A lot less in 2003 than 2025?

Does https://en.wikipedia.org/wiki/Cypress_PSoC count? I’ve kinda wanted to try one out, but have zero use case…

Not really. The amount of analog hardware on-die is minimal - it's little more than a few multiplexers surrounding the standard sorts of analog peripherals you'd see on any microcontroller: ADCs, DACs, comparators.

All the various bits that get tacked on for prefetch and branch prediction are fairly large too, given the amount of caching involved, and I think that's often what people are actually accounting for when they measure decode power usage. That’s going to be the case in any arch besides something like a DSP with no dynamic dispatch at all.

I think it's safe to say that a modern x86 branch predictor with its BTBs is significantly larger than the decode block.

Sure, but branch prediction is (as far as we know) a necessary evil. Decode complexity simply isn't.

Right, but decode complexity doesn't matter much next to the giant BTB and such. At least that's my understanding.

For the cores working hardest to achieve the absolute lowest CPI on user code, this is true. But these days computers have computers inside them to manage the system, and these kinds of statements aren’t necessarily true for those “inner cores” that aren’t user-accessible.

“RTKit: Apple's proprietary real-time operating system. Most of the accelerators (AGX, ANE, AOP, DCP, AVE, PMP) run RTKit on an internal processor. The string "RTKSTACKRTKSTACK" is characteristic of a firmware containing RTKit.”

https://asahilinux.org/docs/project/glossary/#r

And those cores do not run x86.

I was pretty surprised to find out that the weird non-architectural cores in a Core or Xeon really do run x86 code.

With a signed kernel and secure boot it should in principle be similar to Windows 11? But with DMA-based hacks on the rise, I'm not sure it matters either way.

Peripherals get IOMMU'd on Apple platforms

https://support.apple.com/guide/security/direct-memory-acces...

Not so on Linux?

Which is the same reason we lock data centers: if someone has physical access to the hardware, there are so many more attack vectors.

Anywhere I could read more about the DMA attacks?

You can probably look up DMA cards. They plug into a PCIe slot and get full access to inspect and modify memory.

So, Game Genie for 2025?
