
As a hobbyist, how do I stay protected and in the loop for breaches like this? I often follow guides that are popular and written by well-respected authors, and I might be too flippant about installing dependencies when trying to solve a pain point that has derailed my original project.

Somewhat related, I also have a small homelab running local services and every now and then I try a new technology. Occasionally I’ll build a little thing that is neat and could be useful to someone else, but then I worry that I’m just a target for some bot to infiltrate because I’m not sophisticated enough to stop it.

Where do I start?





> As a hobbyist how do I stay protected and in the loop for breaches like this?

For the case of general software, "Don't use node" would be my advice, and by extension any packaging backend without external audit and validation. PyPI has its oopses too, Cargo is theoretically just as bad but in practice has been safe.

The gold standard is Use The Software Debian Ships (Fedora is great too; Arch is a bit down the ladder, but not nearly as bad as the user-submitted madness outside Linux).

But it seems like your question is about front end web development, and that's not my world and I have no advice beyond sympathy.

> occasionally I’ll build a little thing that is neat and could be useful to someone else, but then I worry that I’m just a target for some bot

Pretty much that's the problem exactly. Distributing software is hard. It's a lot of work at a bunch of different levels of the process, and someone needs to commit to doing it. If you aren't willing to commit your time and resources, don't distribute it in a consumable way (obviously you can distribute what you built with it, and if it's appropriately licensed maybe someone else will come along and productize it).

NPM thought they could hack that overhead and do better, but it turns out to have been a moved-too-fast-and-broke-things situation in hindsight.


> PyPI has its oopses too, Cargo is theoretically just as bad but in practice has been safe.

One obvious further mitigation for Python is to configure your package installer to require pre-built wheels, and inspect the resulting environment prior to use. Of course, wheels can contain all sorts of compiled binary blobs and even the Python code can be obfuscated (or even missing, with just a compiled .pyc file in its place); but at least this way you are protected from arbitrary code running at install time.
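For example, with pip that looks roughly like this (a sketch; `requests` is just a stand-in package name, and the config key spelling is from memory):

    # refuse anything that isn't a pre-built wheel for this install
    pip install --only-binary=:all: requests

    # or make it the default for the environment
    pip config set global.only-binary ":all:"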


Having spent a year trying to develop against dependencies only provided by a debian release, it is really painful in practice. At some point you're going to need something that is not packaged, or newer than the packaged version in your release.

It really depends on what you're doing. But yes, if you want to develop in "The NPM Style" where you suck down tiny things to do little pieces of what you need (and those things suck down tiny things, ad infinitum) then you're naturally exposed to the security risks inherent with depending on an unaudited soup of tiny things.

You don't get secure things for free, you have to pay for that by doing things like "import and audit software yourself" or even "write simple utilities from scratch" on occasion.


That's when you join debian :)

I've spent thirty years, mostly on stable, and there's been minimal pain. Several orders of magnitude less than on any other system.

(That might hint that I'm not doing trendy things.)


Didn't Debian ship a uniquely weak version of OpenSSL for years? HeartBleed perhaps?

IME Debian is falling behind on security fixes.


And OpenBSD advertises "two remote holes in the default install, in a heck of a long time". And they're pretty serious about audits. It happens. But like the other comment said, this is about supply chain attacks via automatically executing code from live urls and not human fallibility.

They did, and no one is perfect. But Debian is the best.

FWIW, the subject at hand here isn't accidentally introduced security bugs (which affect all software and aren't well treated by auditing and testing). It's deliberately malicious malware appearing as a dependency to legitimate software.

So the use case here isn't Heartbleed, it's something like the xz-utils trojan. I'll give you one guess as to who caught that.


As a hobbyist (or professionally) you can also write code without dependencies outside of node itself.

Somewhat controversial these days, but treat every single dependency as a potential security nightmare, a source of bugs, and a problem that you will have to solve in the future. Use dependencies carefully and as a last resort.

Vendoring dependencies (copying the package code into your project rather than using the package manager to manage it) can help - it won't stop a malicious package, but it will stop a package from turning malicious.
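In npm terms, one low-tech way to do that is to snapshot the package into your own repo and point at it with a `file:` dependency (a rough sketch; `left-pad` is just an example package and the paths are up to you):

    # copy the installed package source into your own tree
    mkdir -p vendor
    cp -r node_modules/left-pad vendor/left-pad

    # then in package.json:
    #   "dependencies": { "left-pad": "file:./vendor/left-pad" }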

You can also copy the code you need from a dependency into your code (with a comment giving credit and a link to the source package). This is really useful if you just need some of the stuff that the package offers, and also forces you to read and understand the package code; great practice if you're learning.


Inspecting 10 layers of dependencies individually to install a popular tool or an LSP server is going to work once or twice. Eventually either complacency or fatigue sets in and the attacker wins.

I think we need a different solution that fixes the dependency bloat or puts more safeguards around package publishing.

The same goes for any other language with excessive third-party dependency requirements.


Agree.

It's going to take a lot of people getting pwned to change these attitudes, though.


Why would attitudes change? The impact is diffused across a wide enough populace (precisely because ecosystems with weak community norms around dependency security are extremely popular) that the rate of “shaping up in response to a painful lesson” may remain lower than the rate at which newcomers join the community, or the rate at which new insecure dependencies proliferate to serve the new use cases a growing platform keeps creating.

That’s not to say it’s hopeless. Rather, it’s more likely that widespread improvement will need to be centrally orchestrated rather than organic in response to hacks.


PHP is living proof that no matter how Broken As Designed something is, it will just go on if it's popular enough.

Local proxies that can work offline also help. Though not as much as vendoring.

Use dependencies that are fairly popular and pick a release that's at least a year old. Done. If there was something wrong with it, someone would've found it by now. For a hobbyist, that's more than sufficient.
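npm can tell you how old a given release actually is before you pin it (sketch; `lodash` is just an example package):

    # list the publish date of every release of a package
    npm view lodash time
    # pick one that's been out for a while and pin it exactly in package.json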

Don't do development on your local machine. Full stop. Just don't.

Do development, all of it, inside VMs or containers, either local or remote.

Use ephemeral credentials within said VMs, or use no credentials. For example, do all your git pulls on your laptop directly, or in a separate VM with a mounted volume that is then shared with the VM/containers where you are running dev tooling.

This has the added benefit of not only sandboxing your code, but also making your dev environments repeatable.

If you are using GitHub, use codespaces. If you are using gitlab, workspaces. If you are using neither, check out tools like UTM or Vagrant.
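For the Vagrant route, a throwaway VM per project is a few commands (sketch; the box name is one of the official Debian boxes, from memory):

    vagrant init debian/bookworm64   # writes a Vagrantfile into the project
    vagrant up                       # creates and boots the VM
    vagrant ssh                      # shell in; the project dir is synced to /vagrant by default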


That's not a realistic solution. Nobody is going to stop using their machine for development just to get some security gains, it's way too much of a pain to do that.

It's 100% realistic because *I've been doing it off-and-on for the last 25 years.*

When I was developing server software for Windows, the first time I was able to set up a development environment by simply cloning a VM instead of spending a day-and-a-half with a lap full of MSDN CDs/DVDs, I never went back.

Prior to that, I was happily net-booting *BSD/Solaris servers all over my house/apartment.

Nowadays, we have so many tools to make this trivial. Your contention doesn't stand up to basic scrutiny of the available data.

If you are downloading software from untrusted sources (e.g. NPM, pip, and others) and running it on your primary working machine, or personal machine, then you are simply begging for trouble.


The way to sell it isn't vague security somethings, but making it easier to reproduce the build environment "from scratch". If you build the Dockerfile as you go, then you don't waste hours at the end trying to figure out what you did to get it to build and run in the first place.
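Even a tiny one written as you go does the job (a sketch for a Node project; the image tag and entry point are placeholders):

    FROM node:20-slim
    WORKDIR /app
    # install from the lockfile without running packages' install scripts
    COPY package*.json ./
    RUN npm ci --ignore-scripts
    COPY . .
    CMD ["node", "index.js"]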

You are right, if it's a pain no one is going to do it. So the thing that needs to happen is to make it not a pain.

Wake up and smell the codespaces/workspaces/vagrant/so many other tools that make this not a pain. Some of these tools have been around for AGES. Nowadays, with VSCode Remote, you can even use a "modern" IDE environment with a local fat client observing your remote runtime. Other folks do this quite happily, with tremendous tooling, using emacs or *vim.

It's not particularly painful to develop in a container. Maybe Docker is a nuisance (although I know people do develop within Docker), but something like firejail or bubblewrap is pretty easy to use.
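For example, a bare-bones bwrap invocation that gives the tooling a read-only view of the system and write access only to the project directory looks roughly like this (a sketch; depending on your tooling you'll probably need to bind a home/cache dir too):

    bwrap \
      --ro-bind /usr /usr \
      --ro-bind /etc /etc \
      --symlink usr/bin /bin \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --proc /proc --dev /dev --tmpfs /tmp \
      --bind "$PWD" "$PWD" --chdir "$PWD" \
      --unshare-all --share-net \
      npm install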

It is a realistic solution.

Taking this more seriously than it perhaps deserves: if that’s true, why isn’t widespread adoption of this approach growing?

Whether or not it’s a good idea, “realistic” implies practicality, which could presumably be measured by whether people find it worthwhile to do the thing.


I suppose it depends on what you're protecting, who's out there to get you, and how boring and time consuming it is to clean up after a breach (that can take weeks or months), etc.

Aren't you sort of asking "When X transportation method isn't used by everyone, can it really be any good?" :-)


Are people actually using UTM to do local development?

I'm genuinely curious because I casually looked into it so that I could work on some hobby stuff over lunch on my work machine.

However I just assumed the performance wouldn't be too great.

Would love to hear how people are set up…


When I had a Macbook from work, I set up an Arch Linux VM using their basic VM image [1], and followed these steps (it may differ, since it's quite old): https://www.youtube.com/watch?v=enF3zbyiNZA

Then, I removed the graphical settings, as I was aiming to use SSH instead of the emulated TTY that came on by default with UTM (at that time).

Finally, I set up some basic scripting to turn the machine on and SSH into it as soon as sshd.service was available, which I don't have now, but the script finished with this:

(fish shell)

    while not ssh -p 2222 arch@localhost; sleep 2; end;
Later it evolved into something like this:

    virsh start arch-linux_testing && virsh qemu-monitor-command --hmp arch-linux_testing 'hostfwd_add ::2222-:22' && while not ssh -p 2222 arch@localhost; sleep 2; end;
I also removed some unnecessary services for local development:

    arch@archlinux ~> sudo systemctl mask systemd-time-wait-sync.service 
    arch@archlinux ~> sudo systemctl disable systemd-time-wait-sync.service

And done, performance was really good and I could develop on it seamlessly.

[1]: https://gitlab.archlinux.org/archlinux/arch-boxes/-/packages...


It works incredibly well with Linux VMs, my daily driver. I plug in a USB keyboard and an external monitor, and it's Can't Believe It's Not Linux. Only occasionally, when I need to use the laptop screen/keyboard, does macOS bother me and remind me of its real self.

There's around a 10-15% performance penalty for VMs (assuming you use arm64 guests), but the whole system is just so much faster and better built than anything Intel-based to date that it more than compensates.

For Windows, it's lacking accelerated video drivers, but VMware Fusion is an OK free alternative - I can totally play AAA games from the last decade. Enjoy it until Broadcom kills it.


With remote development (VSCode, or the remote extension in JetBrains, with SSH to the VM) performance is good with a headless VM in UTM. Although it always (?) uses performance cores on Apple Silicon Macs, so battery drain is a problem.

I started using UTM last week on my Macbook just to try out NixOS + sway and see if I could make an environment that I liked using (inspired by the hype around Omarchy).

Pretty soon I liked using the environment so much that I got my work running on it. And when I change the environment, I can sync it to my other machine.

Though NixOS is particularly magical as a dev environment since you have a record of everything you've done. Every time I mess with Postgres's pg_hba.conf or nginx or pcap on my local machine, I think "welp, I'll never remember that I did that".


I used to have a separate account on my box for doing code for other people, one for myself, and another for surfing the web. Since I have an Apple TV hooked up to one of my monitors, I don’t have a ton of reasons for hopping credentials between accounts, so I think I’ll be going back to at least that.

The fact I use nvm means a global install won’t cross accounts.


There are some operating systems, like FreeBSD, where you use the system’s package manager and not a million language specific package managers.

I still maintain pushing this back to library authors is the right thing to do instead of making this painful for literally millions of end-users. The friction of getting a package accepted into a critical mass of distributions is the point.


Avoid dependencies with fewer than 1M downloads per week. Prefer dependencies that have zero dependencies, like Hono or Zod.

https://npmgraph.js.org/?q=hono

https://npmgraph.js.org/?q=zod

Recently I switched to Bun in part because many dependencies are already included (db driver, s3 client, etc) that you'd need to download with Node or Deno.


(1) Start by not using packages that have stupid dependencies

Any package that includes a CLI version in the library should have its dev shamed. Usually that adds 10-20 packages. Those 2 things, a library that provides some functionality, and a CLI command that lets you use the library from the command line, SHOULD NEVER BE MIXED.

The library should be its own package without the bloat of the command line crap

(2) Choose low dependency packages

Example: commander has no dependencies, minimist now has no dependencies. Some other command line parsers used to have 10-20 dependencies.

(3) Stop using packages when you can do it yourself in 1-2 lines of JS

You don't need a package to copy files. `fs.copyFileSync` will copy a file for you. `fs.cpSync` will copy a tree, and `child_process.spawn` will spawn a process. You don't need some package to do these things. There are plenty of other examples where you don't need a package.
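To make that concrete, here's roughly what those look like with nothing but Node built-ins (the file names and command are made up):

    const fs = require('fs');
    const { spawnSync } = require('child_process');

    // copy a single file
    fs.copyFileSync('config.json', 'config.backup.json');

    // copy a whole directory tree
    fs.cpSync('assets', 'dist/assets', { recursive: true });

    // run an external command and wait for it to finish
    spawnSync('git', ['status'], { stdio: 'inherit' });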


> Any package that includes a CLI version in the library should have its dev shamed. Usually that adds 10-20 packages.

After your little rant you point out Commander has zero dependencies. So I don’t know what’s up with you.

If the library you’re building has anything with application lifecycle, particularly bootstrapping, then having a CLI with one dependency is quite handy for triage. Most especially for talking someone else through triage when for instance I was out and there was a production issue.

Which is why half the modules I worked on at my last place ended up with a CLI. They are, as a rule, read mostly. Which generally doesn’t require an all caps warning.

Does every module need one of those? No. But if your module is meant as a devDependency, odds are good it might. And if it’s bootstrapping code, then it might as well.

> should have its dev shamed

Oh I feel embarrassed right now. But not for me.


I'm not sure about NPM specifically, but in general: Pick a specific version and have your build system verify the known good checksum for that version. Give new packages at least 4 weeks before using them, and look at the git commits of the project, especially for lesser-known packages.
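With npm specifically, the lockfile already records an integrity hash per package, so the workflow is roughly this (sketch; the package name and version are placeholders):

    # pin the exact version; package-lock.json records its integrity hash
    npm install --save-exact some-package@1.2.3

    # on CI and fresh checkouts, install strictly from the lockfile
    npm ci

    # and skip packages' install scripts unless you've vetted them
    npm config set ignore-scripts true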

Shameless plug, but here's a list of things you could follow to mitigate risks from npm: https://github.com/bodadotsh/npm-security-best-practices

Running things in containers and VMs reduces the damage. One service on your homelab being compromised is a lot better than your entire homelab server getting compromised.

Neither is a security guarantee, but it does add a substantial extra barrier.
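Even a plain docker run with privileges stripped back and only one port published adds a meaningful barrier for a homelab service (sketch; the image name is a placeholder):

    # run a service with a read-only filesystem, no extra capabilities,
    # and only the one port it needs published
    docker run -d --name myservice \
      --read-only --cap-drop ALL --security-opt no-new-privileges:true \
      -p 8080:8080 \
      myservice-image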


If you're on Linux, I've tried to build an easy yet secure way to isolate your system from your coding projects with containers. See https://github.com/evertheylen/probox

As 'numbsafari said below, you should no longer use your host for dev. This includes all those cool AI assistant tools. You need to containerize all the things with RunPod or Docker.


