
I haven't used TrueNAS since it was still called FreeNAS.

I liked FreeNAS for a while, but after a certain point I kind of just learned how to properly use Samba and NFS and ZFS, and after that I kind of felt like it was just getting in the way.

Nowadays, my "NAS" is one of those little "mini gaming PCs" you can buy on Amazon for around $400, and I have three 8-bay USB hard drive enclosures, each filled with 16TB drives, all with ZFS. I lose six drives to the RAID, so total storage is about 288TB, but even though it's USB it's actually pretty fast; fast enough for what I need it for, anyway, which is to watch videos off Jellyfin or host a Minecraft server.

I am not 100% sure who TrueNAS is really for, at least in the "install it yourself" sense; if you know enough about how to install something like TrueNAS, you probably don't really need it...



I was like this in the "I love to spend a lot of time mucking about with my server and want to squeeze everything out of it that I can" phase.

In the last few years I've transitioned to "My family just wants Plex to work and I could give a shit about the details". I think I'm more of the target audience. When I had my non-TrueNAS ZFS setup I just didn't pay a lot of attention, and when something broke it was like re-learning the whole system over again.


My way of dealing with this is to ensure everything is provisioned and managed via gitops. I have a homelab repo with a combination of Ansible, Terraform (Tofu), and FluxCD. I don't have to remember how to do anything manually, except for provisioning a new bare metal machine (I have a readme file and a couple of scripts for that).
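
Roughly, the rebuild path is a couple of commands once the repo is checked out; a sketch, where the inventory, playbook, and repo names are made up:

    # provision the base OS config on every host
    ansible-playbook -i inventory/home.yml playbooks/site.yml

    # then point Flux at the cluster so it reconciles the rest from git
    flux bootstrap github --owner=me --repository=homelab --path=clusters/home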

I accidentally gave myself the opportunity to test out my automations when I decided I wanted to rename my k8s nodes (FQDN rather than just hostname). When I did that, everything broke, and I decided it would be easier to simply re-provision than to troubleshoot. I was up and running with completely rebuilt nodes in around an hour.


But configuring a FreeBSD system with zfs and samba is dead easy.
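
A sketch from memory (package version and device names will vary):

    # create the pool; device names are illustrative
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # install and enable Samba
    pkg install samba419
    sysrc samba_server_enable=YES

    # a minimal share in /usr/local/etc/smb4.conf:
    #   [media]
    #   path = /tank/media
    #   read only = no

    service samba_server start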

In my experience, a vanilla install and some daemons sprinkled on top works better than these GUI flavours.

Less breakage, fewer quirks, more secure.

YMMV and I’m not saying you’re wrong - just my experience


I agree; I tried Free/TrueNAS and some other flavors of various things and always ran into annoying limitations and handholding I didn’t want; now I just use Gentoo with ZFS and do my own thing.


I can set up Samba, NFS, and ZFS manually myself, but why would I want to? Configuring Samba users & shares via SSH sucks. It's tedious. It's error prone. It's boring.
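
For anyone who hasn't done it, the loop looks something like this every single time (user and share names hypothetical):

    # create a system user, then a separate Samba password for it
    sudo useradd -M -s /usr/sbin/nologin alice
    sudo smbpasswd -a alice

    # hand-edit the share stanza, then reload
    sudo vi /etc/samba/smb.conf
    sudo systemctl reload smbd

And then you repeat it, carefully, for every user and every share.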

Similarly, while docker's CLI is relatively nice, it's even nicer to just take my phone, open a browser, and push "update" or "restart" in a little GUI to quickly get things back up & going. Or to add new services. Or whatever else I want. Sure, I could SSH in from my phone, but that's awful. I could go get a laptop whenever I need to do something, but if Jellyfin or Plex or whatever is cranky and I'm already sitting on the couch, I don't want to have to get up and go find a laptop. I want to just hit "restart service" without moving.

And that's the point of things like TrueNAS or Unraid or whatever. It makes things nicer to use from more interfaces in more places.


Yeah; once you get deep enough, you realize you can just install things yourself, configure them with Ansible, or Nix, or whatever, and have full control.

But for probably 90% of users, they just want a UI they can click through and mostly use the defaults, then log into now and then to make sure things are good.

The UI is also especially helpful for focusing on things that matter, like setting up scrubs, notifications, etc. (though even there I think TrueNAS could do better).

It's why Synology persists, despite growing more and more hostile to their NAS owners.


> But for probably 90% of users, they just want a UI they can click through...

90% of HN users. More like 99.99% in the real world!


Honestly, even TrueNAS is way more in depth than 99% of users in the wider world want. They want Dropbox at most, and very possibly they don't even want that much involvement. They want backups to just happen without having to put any thought in.


> Yeah; once you get deep enough, you realize you can just install things yourself, configure them with Ansible, or Nix, or whatever, and have full control.

I think if you've gone through the effort of setting up Ansible scripts for setting up & maintaining a NAS, you probably are not actually making a NAS anymore. Maybe you're doing a Ceph or Gluster cluster or something, which can be fun to play with. Heck, I did that with a bunch of ODROID-HC2s as well. It was fun to set up a cluster storage system.

It also wasn't practical at all, and at no point was it ever a serious replacement for my "real" NAS (which is currently Unraid, but I'd absolutely consider switching to TrueNAS in a future upgrade), since the main feature a NAS needs to provide is uptime.


That's fair enough; I certainly understand why you might do this if you're just buying something pre-made and letting it sit in a network closet or something; having something pre-made that you can just use has advantages.

I guess the thing is that I've never done that with [Free|True]NAS :). I've always used some kind of hardware (either retired rack mount servers or thin clients) and installed FreeNAS on there, and then it never really felt like it saved me a lot of time or effort compared to just doing it all manually.

Of course, I'm being a typical "Hacker News Poster" here; I will (partly) acknowledge that I am the weird one, but at the same time, as I said, if you're in the market to install TrueNAS on your own hardware, you are also probably someone who could relatively easily Google your way through doing it manually in roughly the same amount of time, at least with NixOS.


For me, this is about personal priorities. I’ve been a professional sysadmin and definitely have the skills to build a NAS from scratch - I’ve done it more than once.

But this is one aspect of my life that I look at as core infrastructure that needs to just work, and I don’t really gain anything from rolling my own in this particular category.

I still run a mini home lab where I do all kinds of tinkering. Just not with storage.

But I also completely understand wanting to do this all manually. I’ve been there. That’s just not where I am today.


This reminds me of the famous HN Dropbox comment [0]. It was perfectly correct and yet so wrong at the same time. TrueNAS is probably for the people who want the power and flexibility with almost none of the hassle. Ironically, the people who have to deal with this professionally every day probably want to leave the work at work.

Having a playground/homelab at home is one thing, but playing with your family's data and access to it can get annoying really fast.

[0] https://news.ycombinator.com/item?id=9224


I think most people who don’t want to tinker would prefer a Synology or similar NAS solution.

The problem with TrueNAS is it fills the niche where it’s targeting people who want to tinker, but don’t want to learn how to tinker. Which is likely a smaller demographic than those who are willing to roll their own and those who just want a fully off-the-shelf experience.

I also think Synology would be closer to the Dropbox experience than TrueNAS.


Yeah, that's what I was referring to when I said "Hacker News Poster", and it's why I said that I know I'm the weird one. I'm not completely out of touch.

It's a little different though; TrueNAS still requires a fairly high level of technical competence to install on your own hardware; you still need to understand how to manage partitions and roughly what a Jail is and basic settings for Samba and the like. It's not completely analogous to Dropbox because Dropbox is trivial for pretty much anyone.


> but at the same time, as I said, if you're in the market to install TrueNAS on your own hardware, you are also probably someone who could relatively easily Google your way through doing it manually in roughly the same amount of time, at least with NixOS.

Sure, but both are free options; one just takes strictly less work to do the task of being a NAS. Why would I pick the one that's more work for the same result? If I want a NAS, why would I roll my own with NixOS instead of just picking a distro that focuses on being a NAS out of the box? What are the benefits of doing it manually?

If I want to just play around with stuff in a homelab setting, that's what proxmox clustering is for :) But storage / NAS is boring. It just needs to sit there doing basic storage stuff. I want it to do the least amount of things possible because everything else depends on storage being there.


I rolled my own with FreeBSD and ZFS, and set up some media server apps in a FreeBSD jail. It's not as polished and I'm missing out on some features, but I'm definitely learning a lot.


> I can set up Samba, NFS, and ZFS manually myself, but why would I want to? Configuring Samba users & shares via SSH sucks. It's tedious. It's error prone. It's boring.

I agree for the most part, though even vanilla Ubuntu has Cockpit if you need a GUI.

Personally I find that getting it set up with NixOS is pretty straightforward and it's "set and forget", and generally it's not too hard to find configurations you can just copy-paste that are done "correctly". And of course, if you break something you can just reboot and choose a previous generation. That said, restarting a service still requires SSHing in and `systemctl restart myapp`, so YMMV.
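
The rollback really is a one-liner when you can still SSH in (and a boot-menu entry when you can't):

    # revert to the previous system generation
    sudo nixos-rebuild switch --rollback

    # or list what generations exist first
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system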


I want to configure it myself because now I know exactly how it works. The configuration options I’ve chosen won’t change unless I change them. Disaster recovery will be easy because when I move the disks to a new machine, LVM will just start working.


I actually can’t recall the last time I set up an SMB share; it has to be years, if not tending towards decades.

A few big shares are all I really need; I no longer create a share for every single idea/thing I can think of.


I’ll take Configuration Management for $100, Alex.


> fast enough for what I need it for, anyway, which is to watch videos off Jellyfin or host a Minecraft server.

4K Blu-ray rips peak at over 100 Mbps, but usually average around 80 Mbps. I don't know how much disk I/O a Minecraft server does ... I wouldn't think it would do all that much. USB2 (high-speed) bandwidth should be plenty for that, although filling the array and scrubbing/resilvering would be painful.
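
Back-of-the-envelope, with rough real-world efficiency numbers:

    480 Mbps (USB 2.0 signaling) x ~0.6 protocol efficiency ≈ ~290 Mbps usable
    290 Mbps / 100 Mbps peak bitrate ≈ 2-3 simultaneous 4K streams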


Even though I have over four hundred Blu-rays, I would of course NEVER condone breaking the DRM and putting them on Jellyfin no matter how easy it is or how stupid I think that law is because that would be a crime according to the DMCA and I'm a good boy who would never ever break the law.

That said, I have lots of home movies that just so happen to be at the exact same bitrates as Blu-rays and after the initial setup, I've never really had any issues with them choking or any bandwidth weirdness. Minecraft doesn't use a ton of disk IO, especially since it is rare that anyone plays on my server other than me.

I do occasionally do stuff that requires decent bandwidth though, enough to saturate a WiFi connection at the very least, and the USB3 + USB SS + Thunderbolt setup never seems to have much of an issue getting to WiFi speeds.


What do you mean by USB hard drive enclosures? Are you limiting the RAID (8-bay) throughput to a single USB line?! That's like towing a Ferrari with a bicycle.


Nope!

I have one enclosure plugged into a USB 3.0 line, another plugged into a "super speed" line, and one plugged into a Thunderbolt line (shared with my 10GbE Thunderbolt card with a 2x20 gigabit splitter).

This was deliberate, each is actually on a separate USB controller. Assuming I'm bottlenecked by the slowest, I'm limited to 5 gigabits per RAID, but for spinners that's really not that bad.
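
For scale, with rough numbers:

    5 Gbps ≈ ~500 MB/s of real-world throughput
    ≈ the combined sequential speed of only 2-3 modern spinners,
      but far more than any media workload needs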

ETA: It's just a soft RAID with ZFS, I set up each 8-bay enclosure with its own RAIDZ2, and then glued all three together into one big pool mounted at `/tank`. I had to do a bit of systemd chicanery with NixOS to make sure it mounted after the USB stuff started, but that was like ten lines of config I do exactly once, so it wasn't a big deal.
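
In zpool terms the layout is something like this (device names illustrative; /dev/disk/by-id paths are the safer choice for USB disks):

    # one raidz2 vdev per 8-bay enclosure, all in a single pool
    zpool create tank \
      raidz2 enc1-d1 enc1-d2 enc1-d3 enc1-d4 enc1-d5 enc1-d6 enc1-d7 enc1-d8 \
      raidz2 enc2-d1 enc2-d2 enc2-d3 enc2-d4 enc2-d5 enc2-d6 enc2-d7 enc2-d8 \
      raidz2 enc3-d1 enc3-d2 enc3-d3 enc3-d4 enc3-d5 enc3-d6 enc3-d7 enc3-d8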


288TB spread over 24 drives on soft RAIDZ2 over USB?! You did check the projected rebuild time in the event of a disk failure, right?


Didn't have to do the projection; I've had to replace a drive that broke. It took about 20 hours.

ETA: It would certainly take longer with more data; I've not gotten anywhere close to the 288TB.


Have you researched the USB-SATA bridge chips in the enclosures? Reliability of those chips/drivers on Linux used to be very questionable a few years ago when I looked around. Not sure if the situation has improved recently given the popularity of NAS devices.


It seems to work, and it has been running for years without issue. `zpool scrub` generally comes back clean.


Sounds great! Do you mind sharing the model of the enclosures and what bridge chip is used inside?


From my research a couple years ago, it seemed like most issues involved feeding a bridge into a port multiplier, so I got a multi-drive enclosure with no multipliers. I've had no problems so far, even with a disk dying in it.

Though even flaky adapters just tend to lock up, I think.


Ah yes! The port multiplier is usually the source of most evils (after flaky bridge chips). Unfortunately, enclosure makers seldom reveal the internal topology and usually only test against Windows, and the Linux kernel has a long blacklist of bad bridge chips…


That's a benefit of ZFS: it doesn't trust that the drives actually wrote the data (the so-called RAID write hole). Most RAID doesn't actually do that checking, and drives haven't had usable per-block checksums in a long time. ZFS checksums every block to make sure.


The issue with flaky bridge chips usually wasn't data integrity - they work fine most of the time, i.e. data written gets read back correctly.

But often, after extensive use, `dmesg` would complain about problems when talking to the drives, e.g. a drive not responding, or other strange error messages (I forget the exact text, but it was very irritating and google-fu didn't help). There were also problems with SMART command passthrough, and drive power management (e.g. sleep/standby adjustment) wasn't reliable when going through the bridge chips.

Afterwards I switched to disks directly connected to SATA controllers, and no such issues ever happened again.


So each enclosure hosts its own RAIDZ2. Have you tested if it can survive loss of USB connectivity? It can happen because of cable damage or movement, and also because of any failure in the enclosure's electronics.


>so total storage is about 288TB

?!?

How do you fill 288 TB? Is it mostly media?

>I liked FreeNAS for a while, but after a certain point I kind of just learned how to properly use Samba and NFS and ZFS, and after that I kind of felt like it was just getting in the way.

I've been a mostly happy TrueNAS user for about four years, but I'm starting to feel this way.

I recently wrote about expanding my 4-disk raidz1 pool to a 6-disk raidz2 pool.[1] I did everything using ZFS command-line tools because what I wanted wasn't possible through the TrueNAS UI.
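
The rough shape of it, heavily simplified (the post has the real steps; pool names here are illustrative):

    # replicate the data off, rebuild the pool as raidz2, then send it back
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -Fdu backup/tank
    # destroy tank, recreate it as a 6-disk raidz2 from the CLI, reverse the send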

A developer from iXsystems (the company that maintains TrueNAS) read my post and told me[2] that creating a ZFS pool from the zfs command-line utility is not supported, and so I may hit bugs when I use the pool in TrueNAS.

I was really surprised that TrueNAS can't just accept whatever the state of the ZFS pool is. It feels like an overreach that TrueNAS expects to manage all ZFS interactions.

I'm converting more of my infrastructure to NixOS, and I know a lot of people just manage their NAS with NixOS, which is sounding more and more appealing to me.

[1] https://mtlynch.io/raidz1-to-raidz2/

[2] https://www.reddit.com/r/truenas/comments/1m7b5e0/migrating_...


> How do you fill 288 TB? Is it mostly media?

I kind of purposefully don't fill it up :).

This has been a project of tech hoarding over the last ~8 years, but basically I wanted "infinite storage". I wanted to be able to do pretty much any project and know that no matter how crazy I am, I'll have enough space for it. Thus far, even with all my media and AI models and stock data and whatnot, I'm sitting around ~45TB.

On the off chance that I do start running low on space, there's plenty of stuff I can delete if I need to, but of course I probably won't need to for the foreseeable future.

> I'm converting more of my infrastructure to NixOS, and I know a lot of people just manage their NAS with NixOS, which is sounding more and more appealing to me.

Yeah, that's what I do; I have NixOS run Samba on my RAID, and it works fine. It was like fifteen lines of config, version controlled, and I haven't thought about it in months.


I'm in a similar position on my storage... though mostly because I bought a new NAS with the intent of storing Chia plots. After 3 days, I decided it was a total waste as I'd never catch up to the bigger miners... so I repurposed it for long-term storage and retired my older NAS.

The older NAS had 4x 4TB drives when I retired it and passed it to a friend.

The current NAS is a 6-bay Synology with RAM and NVMe upgrades, along with a 5-bay expansion, all with 12TB drives in 6-drive 2-parity and 5-drive 2-parity arrays. I'm just under 40TB used, mostly media.


Well, you got it. I just wonder if half that storage would still be effectively infinite for you.


I bought the drives used on eBay, and the prices weren't that different for 8TB vs 16TB when I bought them. Like on the order of ~$15 higher, so I figured I'd just eat that cost and future-proof it even more.


Fair enough, though you could’ve bought half as many if you didn’t mind the hit to speed, no?


I am omitting some context. The original incarnation of this had 24 2TB drives and I wanted to upgrade that. I already had the three enclosures, and I incrementally upgraded the disks over the course of a year.


Haha it’s all good I didn’t mean to interrogate you, enjoy your overkill storage, life is short


288TB might be enough to store a complete set of LaserDisc arcade games. /s


I don't think that /s is needed; that's probably true. The entirety of GameCube games is only a few terabytes.


I'm surprised that would even reach 1 TB. The discs only hold 1.46 GB. Though I guess the number of game disc images you'd have would depend on whether you have each region's version of each game.


TrueNAS is just web-based configuration management. As long as you only use the web UI, your system state can be distilled down to the config file it generates.

If you do a vanilla FreeBSD+Samba+NFS+ZFS setup, you'll need to edit several files scattered around the file system, which are easy to forget months down the line when something needs adjusting or disaster recovery.
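
From memory, on FreeBSD that state is scattered across at least these files (paths vary by version):

    /etc/rc.conf               # zfs_enable, samba_server_enable, nfs_server_enable
    /usr/local/etc/smb4.conf   # Samba shares and options
    /etc/exports               # NFS exports
    # plus the Samba password database and any sysctl/loader tuning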


I'm starting a rebuild of my now ancient home server, which has been running Windows with WSL2 Docker containers.

At first, I thought I might just go with TrueNAS. It can manage my containers and my storage. But it's got proprietary bits, and I don't necessarily want to be locked into their way of managing containers.

Then my plan was to run Proxmox with a TrueNAS VM managing a ZFS raidz volume, so I could use whatever I want for container management (I'm going with Podman).

But the more I've researched and planned out this migration, the more I realize that it's pretty easy to do all the stuff I want from TrueNAS, by myself. Setting up ZFS scrubbing and SMART checks, and email alerts when something fishy happens, is pretty easy.
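
For example, a monthly scrub plus smartd email alerts comes down to a couple of lines (pool name, address, and binary path are placeholders):

    # /etc/cron.d/zpool-scrub: scrub "tank" on the 1st of every month
    0 3 1 * *  root  /usr/sbin/zpool scrub tank

    # /etc/smartd.conf: short self-test nightly, long test weekly, mail on trouble
    DEVICESCAN -a -s (S/../.././02|L/../../6/03) -m admin@example.com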

I'm beginning to really understand the UNIX "do one thing and do it well" philosophy.


Same for me; after a while you just want to do something the "managed" software doesn't support.

Now I just run Ubuntu/Samba and use KVM and docker for anything that doesn't need access to the underlying hardware.


Yeah, same. Almost all of the NAS packages sacrifice something - they're great places to start, but just getting Samba going with Ubuntu is easy enough.


I have a similar setup (a Dell Wyse 5070 connected to an 8-bay enclosure), though I do not use RAID; I simply have some simple rsync scripts between a few of the drives. I collect old 1-2TB hard drives as cold storage and leave them on a bookshelf. The rsync scripts only run once a week for the non-critical stuff.
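
Each one is basically a single rsync in a cron entry, something like this (paths hypothetical):

    # /etc/cron.d/cold-sync: weekly one-way mirror of the non-critical stuff
    0 2 * * 0  root  rsync -a --delete /srv/data/ /mnt/shelf-disk/data/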

Not to jinx it, but I have never had a hard drive failure since around 2008!


I think that's a good idea. It's a more manual setup, but it's also more space-efficient (if you skip backing up e.g. Linux ISOs) and causes less wear than RAID.


At least for now, until VPNs become illegal, I haven't worried too much about _all_ my linux ISOs being backed up.


Do you think the power consumption matters on your box here? Should you care about the "USB bottleneck"? How do you organize this thing so it's not a mess of USB cables? I kinda wanna make it look aesthetically nice compared to something like a proper NAS box.


> Do you think the power consumption matters on your box here?

It's actually not too bad; the main "server" idles at around 14W and its power supply only goes to 100W under load. The drive bays go up to 100W (I think) but generally idle around 20W each. All together it idles at around 70-80W.

Not that impressive BUT it replaced a big rack mount server that idled at about 250W and would go up to a kilowatt under load.

> Should you care about the "USB bottleneck"?

Not really, at least not for what I'm doing. I generally get pretty decent speeds, and I think the network is often the bottleneck more than the drives themselves.

> How do you organize this thing so it's not a mess of USB cables?

I don't :). It's a big mess of USB cables that's hidden in a closet. It doesn't look pretty at all.


The difference between what you've built and TrueNAS may well only become evident if your ZFS becomes corrupted in the future. That isn't to say YOU won't be able to fix it in the future, but I wouldn't assume that the average TrueNAS user could.


You... swap the drive, run a ZFS replace command, and that's it. I know I'm coming at this from a particular perspective, but what am I missing?
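
Concretely, with placeholder device names:

    # rebuild onto the new disk, then watch the resilver progress
    zpool replace tank old-disk-id new-disk-id
    zpool status tank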


I have 4x 4TB drives that are in my dead QNAP NAS.

I've wanted to get a NAS running again, but while the QNAP form factor is great, the QNAP OS was overkill – difficult to manage (too many knobs and whistles) – and ultimately not reliable.

So, I'm at a junction: 1) no NAS (current state), 2) a custom NAS (form factor dominates this discussion – I don't want a gaming tower), or 3) back to an off-the-shelf brand (poor experience previously).

Maybe the ideal would be a Mac Mini that I could plug 4 HDDs into, but that setup would be cost-inefficient. So it's probably a custom build w/ NixOS or something off-the-shelf, but I'm lacking the motivation to get back into the game.


I tried a Mac Mini for a while, but they're just not designed to run headless and I ultimately abandoned it because I wanted it to either work or be fixable remotely. Issues I had:

- External enclosure disconnected frequently (this is more of an issue with the enclosure and its chipset, but I bought a reputable one)

- Many services can't start without being logged in

- If you want to use FileVault, you'll have to input your password when you reboot

Little things that needed an attended fix went wrong too frequently.

If you go off the shelf, I recommend Synology, but make sure you get an Intel model with QSV if you plan to transcode video. You can also install Synology's OS on your own hardware using Xpenology - it's surprisingly stable, more so than the Mac Mini was for me.


> Get a Synology Intel model with QSV if you plan to transcode video

How about if you _don't_ plan to transcode video? For example, among this year's models, the DS425+ uses the (six-year-old) Intel Celeron J4125, while the DS925+ uses the (seven-year-old) AMD Ryzen Embedded V1500B. Why choose one over the other?


That’s a damn good question. I’d probably go with the Intel model that has QSV, just in case. Especially compared to an equally ancient AMD processor that doesn’t.


I do recommend those little Beelink computers with an AMD CPU.

They can be had for a bit less than the Mac mini, and I had no issues getting headless Linux working on there. I even have hardware transcoding in Jellyfin working with VAAPI. I think it cost me about $400.


I was in the same boat - QNAP OS was a complete mess. Ended up nuking it and throwing Ubuntu on there instead. Nothing fancy, just basic config, but it actually works now. The other option is to pay for Unraid.


What are your requirements for a NAS? What do you want to use it for? Is it just to experiment with what a NAS is, or do you have a specific need like multiple computers needing common, fast storage?


I use a QNAP 8-bay JBOD enclosure (TL-D800S) connected via SFF-8088 to a mini-itx PC - I find the form factor pretty good and don't have to deal with QNAP OS.


The fact that Asahi doesn’t run on M4 CPUs (i.e. the current Mac Mini) is also a consideration.

ZFS on macOS sucks really bad, too, so that rules out the obvious alternative.


I guess you could run a Linux VM and pass the disks through? I considered something similar for an ARM64 NixOS build server - though that application doesn’t need the disks.


TrueNAS on a good bit of hardware - in my case the latest truegreen NAS - is fantastic. You build it, it runs, it's bulletproof. Putting Jellyfin and/or Plex on top of it is great.


I do both. The primary server runs Proxmox, and I have a physical TrueNAS box as a backup server, so I have to do it by hand on Proxmox.

“Have to”, since I no longer suggest virtualizing TrueNAS, even with PCI passthrough. I will say the same about ZFS-over-USB, but you do you. I’ve had too many bad experiences with both (for those not in the weeds here: both are officially very much not supported or recommended, but they _do_ work).

I really like the TrueNAS value prop - it makes something I’m clearly capable of doing by hand much easier and less tedious. I back up my primary ZFS tank as well as my PBS storage to it, plus cold backups. It does scheduling, alerts, configuration, and shares, and nothing else. I never got the weird K8s mini-cluster they ship - it seems like a strange thing that clashes with the core philosophy of just offering a NAS OS.


What enclosure do you use? I had trouble finding a good one.


I've had mixed luck too. The one I'm using now has been mostly OK. https://a.co/d/4AiF1Zp

It has actually been considerably more reliable than the MediaSonics it replaced.


Can you give a little more detail on how the enclosures work? Do you see each drive individually, or does each enclosure show up as one "drive" which you then put into a ZFS pool?

I'm looking to retire my power-hungry 4790K server and go with a mini PC + external storage.


Replied to another comment with my setup - I use a QNAP JBOD enclosure (TL-D800S) connected to a mini-itx PC. (You do need at least one PCIe slot.) It shows up as 8 drives in the OS; the enclosure has no smarts.

I wouldn't do USB-connected enclosures for NAS drives - either SATA via SFF, or Thunderbolt.


In my case, each drive shows up individually. I create a new vdev encompassing all of them.


Each drive shows up as an external USB mass storage device?


I don’t know if it’s USB mass storage. It just shows up as separate devices in lsblk.


I've seen a lot of people linking Beelink mini PCs lately: https://www.bee-link.com/products/beelink-me-mini-n150

The cost is very low, but it would have to be an NVMe-only build.


My Raspberry Pi 3B+ used to get corrupted every now and then.



