Hacker News | bmurphy1976's comments

Seems likely.

As somebody who's Blender-curious but not a 3D graphics designer (I have minimal CAD experience; that's about it), I'd like to know what makes 5.0 special. The release notes are too technical and granular for me.


They will release the public-facing changelog very soon! It’s more visual and highlights all the big changes.

I prefer to treat testing like insurance. You purchase enough insurance to get the coverage you need, and not a penny more. Anything beyond that could be invested better.

Same thing with tests: get the coverage you need to build confidence in your codebase, but don't tie yourself in knots trying to get that last 10%. It's not worth it. Create some manual and integration tests and move on.
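
To make that concrete, here's a minimal C sketch of what I mean by "enough" coverage. format_row is a hypothetical stand-in for whatever your codebase actually ships; the point is that one cheap end-to-end check often buys more confidence per line than chasing exhaustive unit coverage:

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical function under test; stands in for real export logic. */
    static void format_row(char *buf, size_t n, const char *a, const char *b) {
        snprintf(buf, n, "%s,%s\n", a, b);
    }

    int main(void) {
        char buf[64];
        format_row(buf, sizeof buf, "id", "name");
        assert(strcmp(buf, "id,name\n") == 0); /* one smoke test, not 100% coverage */
        puts("ok");
        return 0;
    }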

I feel like type safety, memory safety, thread safety, etc. are all similar. Building a physics core to simulate the stability of your nuclear stockpile? The typing should be second to none. Building yet another CSV exporter? Who gives a damn.

Context is so damn important.


This is a perfectly reasonable argument if memory safety issues are essentially similar to logic bugs, but memory unsafety isn't like a logic bug.

A logic bug in a library doesn't break unrelated code. It's meaningful to talk about the continued execution of a program in the presence of logic bugs. Logic bugs don't time travel. There are ways to exhaustively prove the absence of logic bugs, e.g. MC/DC or state space exploration, even if they're expensive.

None of these properties are necessarily true of memory safety. A single memory safety violation in a library can smash your stack, or allow your code to be exploited. You can't exhaustively defend against this with error handling either. In C and C++, it's not meaningful to even talk about continued execution in the presence of memory safety violations. In C++, memory safety violations can time travel. You typically can't prove the absence of memory safety violations, except in languages designed to allow that.
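
To illustrate the time-travel point with a minimal, hypothetical sketch (the same logic applies in C and C++ alike): because dereferencing a null pointer is undefined behaviour, the compiler is allowed to assume p is non-null and delete the later check, so the violation's effects aren't confined to the line where it "happens":

    int read_flag(int *p) {
        int x = *p;     /* undefined behaviour if p == NULL */
        if (p == NULL)  /* the compiler may legally delete this branch:
                           the dereference above lets it assume p != NULL */
            return -1;
        return x;
    }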

With appropriate caveats noted (Fil-C, etc), we don't have good ways to retrofit memory safety onto languages and programs built without it or good ways to exhaustively diagnose violations. All we can do is structurally eliminate the possibility of memory unsafety in any code that might ever be used in a context where it's an important property. That's most code.


All of that stuff doesn’t matter though. If you look closely enough, everything is different from everything else, but in real life we only take significant differences into consideration; otherwise we’d go nuts.

Memory bugs have a high risk of exploitability. That’s it; the threat model will tell the team what they need to focus on.

Nothing in software or engineering is absolute. Some projects have decided they need compile-time guarantees about memory safety, others are experimenting with it, many still use C or C++ and the Earth keeps spinning.


If your attacker controls the data you're exporting to a CSV file, they can take advantage of a memory safety issue in your CSV exporter to execute arbitrary code on your machine.

https://georgemauer.net/2017/10/07/csv-injection.html
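
For reference, the mitigation the linked article describes is to neutralize cells whose first character a spreadsheet would treat as a formula. A rough C sketch (write_csv_cell is a hypothetical helper, and a real exporter also needs to handle quoting and embedded delimiters):

    #include <stdio.h>
    #include <string.h>

    /* Prefix a single quote when the leading character (=, +, -, @)
       would otherwise be interpreted as a spreadsheet formula. */
    void write_csv_cell(FILE *out, const char *cell) {
        if (cell[0] != '\0' && strchr("=+-@", cell[0]))
            fputc('\'', out);
        fputs(cell, out);
    }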


> Building yet another CSV exporter? Who gives a damn.

The problem with memory unsafe code is that it can have unexpected and unpredictable side effects, such as subtly altering the critical data you're exporting, or letting an attacker take control of your CSV exporter.

In other words, you need quite a lot of context to figure out that a memory bug in your CSV exporter won't be used for escalation. Figuring out that context, documenting it, and making sure that the context never changes for the lifetime of your code? That sounds like a much more complex proposition than using memory-safe tools in the first place.
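
As a minimal, hypothetical illustration of how little it takes: a fixed-size buffer that was fine for every field anyone tested with becomes an out-of-bounds write the moment an attacker controls the input, corrupting whatever happens to sit next to it in memory:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical exporter bug: buf is sized for "typical" fields.
       strcpy performs no bounds check, so a long field writes past
       the end of buf, silently corrupting adjacent data (or worse,
       a return address). */
    void export_field(FILE *out, const char *field) {
        char buf[16];
        strcpy(buf, field); /* out-of-bounds write if strlen(field) >= 16 */
        fprintf(out, "%s,", buf);
    }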


Since they're all RPI alternatives anyway and you don't get the ecosystem benefits, you should try an Intel N100. I switched my personal services over to one of those a couple years ago, and it's a great bang-for-your-buck small server. Being an Intel chip, stock Ubuntu just works. I've had no compatibility issues.


N100 indeed looks like a good alternative. I own one N100-based mini PC and I see there are some N100-based SBCs as well. x86-like support for ARM/RISC-V SoCs would be a miracle ;-).


Yeah, I love it. Losing access to the RPI ecosystem addons kind of sucks, but I found I don't really use them anyway. I think you can get a USB GPIO if you really need that, but personally I've moved more towards N100 for services, ESP32 for devices.


How comfortable are you with naming and shaming the company? I don't think things are going to change if we don't call this stuff out loudly and publicly.

That's awful but I'm glad you were able to figure this out. I've had my own problems with insurance companies, but nothing to this level. I can't imagine the frustration, especially with YOUR CHILD'S HEALTH on the line.

Five years back I ended up getting surgery for a herniated disc. I was in immense and crippling pain. Before having the surgery, we decided to go through a round of epidural shots. I had done that 20 years previously and it resolved the problem, so why wouldn't I?

Turns out my insurance company (who I will name: BCBSIL) delegated approval for the epidurals to a 3rd party through some kind of extra bureaucratic process. It took days and additional effort on our end to get approved.

I remind you, I was in crippling pain at the time.

The delays getting this approved led to me taking more ibuprofen than I would otherwise have taken, which in turn led to signs of internal bleeding. I had to ease off the ibuprofen and significantly increase the amount of codeine (a drug which does not sit well with me) just to get by. Now not only did I have to wait for the approval, but I then had to wait for the signs of internal bleeding to go away before the doctor would give me the shot (which was the right call, even though it sucked).

Delays, compounding delays, compounding delays, all while I was absolutely miserable.

Anyway, I finally got approved and got the shot and it kinda helped, but didn't fix the issue. I had a second shot, got worse, and then decided we had no choice but to schedule the surgery.

The most frustrating thing (but something I am glad for) is that the surgery was approved immediately.

It's so maddening how inconsistent the whole thing is.


> How comfortable are you with naming and shaming the company?

Don't forget about the individuals responsible. Both the ones that made the denial decision, and the ones that instituted the internal system that incentivizes such denials.


It was Anthem Blue Cross.

You know, it is one thing if it is you or I, as terrible as that is.

But this was a 6-year-old.


Oh I'm with you. I've been trying to clean up my setup and the Pi5 is still a problem. Even my Intel N100 NUC is using a USB-C to barrel jack adapter and working great with a perfectly normal multi-port GaN charger, but the Pi5? "Undervoltage Detected"

sigh


I'm really digging Ghostty's shaders. It finally feels like a gimmicky terminal done right. Want a Fallout or CRT theme without compromising on other features? You can finally have it. It brings back memories of the early days of Linux when everything was fun and not trying to mimic the most bland OS out there. Think Enlightenment before everything got gnomified.


It played tolerably until act 3, same with my M1 MacBook Pro. Act 3 was awful on both.


I fully admit that I spent 40 delicious hours faffing about in Act 1 and then put it down out of fear that I'd never get anything else done. :P


One big upside of single player games is that they have an ending. After playing MUDs back in the day, I made a rule I've kept ever since: no games without an end.

To be fair, I've still spent a crazy amount of time with the Civilization games so let's say that was a partial success.


You can make it run much better by increasing the game's process priority with `renice`. I know that sounds like something that should not work, but it does.
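
For the curious, renice adjusts a running process's niceness, e.g. `sudo renice -n -10 -p <pid>`. The same thing via the underlying POSIX call looks roughly like this (the PID is a placeholder, and a negative niceness usually requires root):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Minimal sketch: roughly what `renice -n -10 -p 12345` does.
       12345 is a placeholder PID; lower niceness = higher priority. */
    int main(void) {
        if (setpriority(PRIO_PROCESS, 12345, -10) != 0)
            perror("setpriority");
        return 0;
    }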


I love me some OpenWRT but updating it has always been a risky chore.


Check out attended sysupgrade


Is there something you know of (a blog post, research paper, or anything else) that explains why this is the case? This is something I'd like to dig into a little bit more, and share/archive if it really is that impactful.


I just replied to OP with an explanation and some links you might enjoy.


OTOH https://www.anthropic.com/research/tracing-thoughts-language...

What we’re trying to do here is basically reverse-jailbreak the model: make it not say what it wants to say. It’s a matter of overpowering the active-by-default neurons. (Not easy sometimes.)


Yeah, sorry, I was talking about the "why does saying no elephants cause elephants" question.

Phrasing it as do rather than don't is probably still more effective on both humans and LLMs. :)


This trend started long before AI. Everybody needs 10+ years experience to get a job anywhere. As an industry we've been terrible at up-leveling the younger generations.

I've been fighting this battle for years in my org and every time we start to make progress we go through yet another crisis and have to let some of our junior staff go. Then when we need to hire again it's an emergency and we can only hire more senior staff because we need to get things done and nobody is there to fill the gaps.

It's been a vicious cycle to break.


I can second this cycle. Agentic code AI is an accelerant to this fire that sure looks like it's burning the bottom rungs of the ladder. Game theory suggests anyone already on the ladder needs to chop off as much of the bottom of the ladder as fast as possible. The cycle appears to only be getting... vicious-er.

