I currently have a 10-year-old off-the-shelf NAS (Synology) that needs replacing soon. I haven’t done much with it other than the simple things I mention later, so I still consider myself a novice when it comes to NAS, servers, and networking in general, but I’ve been reading a bit lately (which led me to this sub). For a replacement I’m wondering whether to get another Synology, use an open source NAS/server OS, or just use a Windows PC. Windows is by far the OS I’m most comfortable with, so I’m drawn to the final option. However, I regularly see articles and forum posts which frown upon the use of Windows for NAS/server purposes even for simple home-use needs, although I can’t remember reading a good explanation of why. I’d be grateful for some explanations of why Windows (desktop version) is a poor choice as an OS for a simple home NAS/server.
Some observations from me (please critique if any issues in my thinking):
- I initially assumed it was because Windows, being a large OS, likely causes high idle power consumption. But I recently measured the idle power consumption of a Celeron-based mini PC running Windows and found it to be only 5 W, which is lower than my Synology NAS when idle. It seems to me that any further power savings that might be achieved by a smaller OS, or a more modern Synology, would be pretty negligible in terms of running costs.
- I can see that a significant downside of Windows for DIY builds is the cost of the Windows license. I wonder if this accounts for most of the critique of Windows? If I went the Windows route I wouldn’t do a DIY build; I would start with a PC that came with a Windows OEM license.
- My needs are very simple (although I think they probably represent the majority of home users’ needs). I need a device which is accessible 24/7 on my home network and can 1) provide SMB file shares, 2) act as a target for backing up other devices on the home network, 3) run cloud backup software (to back itself up to an off-site backup location), 4) run a media server (such as Plex), and 5) provide one-drive redundancy via RAID or a RAID-like solution (such as Windows Storage Spaces). It seems to me Windows is fine for this, and people who frown upon Windows for NAS/server usage probably have more advanced needs.
For me, #1 is license costs. I’ve taken home servers that would require me to buy 4+ Windows Server licenses, because 16 physical cores is entry-level for servers at this point. For the cost of those licenses, I could almost buy a new server with a similar number of cores every single year.
Second, the brand-new filesystem, ReFS (which also needs licenses), has just about caught up to what ZFS had in 2005. The biggest omission is that even 2005-era ZFS could be your root filesystem. This matters less on *nix systems, where root can be tiny, but Windows insists on storing tons of stuff on C:, which still needs to be NTFS. ZFS also has 22 years of production testing behind it and is still under active development.
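To put that in concrete terms, here’s roughly what even the 2005-era ZFS feature set buys you (a sketch; the pool and device names are made up):

```
# mirrored pool with per-block checksums on by default
zpool create tank mirror /dev/sdb /dev/sdc

# near-instant snapshot before risky changes, rollback if it goes wrong
zfs snapshot tank@before-upgrade
zfs rollback tank@before-upgrade

# scrub: re-verify every block against its checksum, repair from the mirror
zpool scrub tank
zpool status tank
```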
Third, I want to use containers, and Windows uses a Linux VM to do that, so why not skip the middleman?
For server:
Docker is Linux in a jailed namespace (network, filesystem, process tree, etc.).
Docker hosted on Linux is efficient.
Docker hosted on anything else, less so.

Never been a better time to try Linux. Ubuntu is pretty easy to get started with (download it, set up a bootable USB, stick it in, and go), and ChatGPT is extremely good at walking you through any questions. You don’t even need to ask highly technical questions; just tell it your goal and your system.
“I just installed Ubuntu 22.04 on my computer and want to SSH into it from a Windows computer on my network, how do I do that?”
“I want to download a file from my Ubuntu command line, how do I do that?”
“I want to set up a share that both Windows and Linux computers can access over my network, how do I do that?”
“I have a GitHub Actions runner provided by GitHub that includes a run.sh file that needs to run constantly. I want to set it up as a background service on my Ubuntu Linux computer so it will always be running as long as the computer is on, how can I do that?”
It will spit out every command you need, in order, plus the contents of a .service file, and tell you how to monitor it, and so on. You can ask it what each line does, what the parameters mean, etc. It’s like having a mid-level sysadmin at your fingertips. It will interpret any errors you get and tell you how to fix them.
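For example, for the run.sh question above, the answer comes back as something like this unit file (a sketch; the user, paths, and service name here are placeholders, not anything ChatGPT or GitHub prescribes):

```
# /etc/systemd/system/github-runner.service
[Unit]
Description=GitHub Actions runner
After=network-online.target

[Service]
User=youruser
WorkingDirectory=/home/youruser/actions-runner
ExecStart=/home/youruser/actions-runner/run.sh
Restart=always

[Install]
WantedBy=multi-user.target
```

followed by `sudo systemctl daemon-reload`, `sudo systemctl enable --now github-runner`, and `journalctl -u github-runner -f` to watch it run.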
Perfect? Maybe not, but it’s close for a remarkable variety of tasks. It may be, and I’m not joking, 20 times more productive and time-efficient than Google searches, reading Stack Overflow posts, or reading documentation/man pages and trying to decipher what you really need out of any of those sources.
I’m sure some are too paranoid to ask ChatGPT certain things for privacy reasons, and I would anonymize anything you paste in; just be a bit mindful of anything involving permissions (you can also ask what security risks exist in doing something). Even plain ChatGPT 3.5 (free) is extremely knowledgeable about the Linux CLI and administration, along with common packages and apps you’d want to use.
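To be fair, the SSH question above really does boil down to a couple of commands (a sketch, assuming a stock Ubuntu install; the username and IP are made up):

```
# on the Ubuntu box
sudo apt install openssh-server
# then from the Windows machine (Windows 10+ ships an OpenSSH client)
ssh youruser@192.168.1.50
```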
SMB only (There is/was a way to make Windows do NFS, but it sucked.)
License cost. The desktop versions of Windows have (or at least used to have) a limit on concurrent SMB sessions (20, last I checked) in order to force users to buy the server version and pay for CALs. No idea how any of that works now.
NTFS is kind of a shitty filesystem.
Limited (native) backup options. No tape support, for example.
Management effectively requires GUI access.
No native way to mirror the OS drive in software. You need either a hardware RAID card (LSI, etc.) or that stupid Intel BIOS RAID thing.
These may or may not be issues for OP, but they are issues for many.
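For contrast with that last mirroring point: a basic software mirror on Linux is a couple of commands (a sketch with made-up device names, and note that mirroring the boot disk itself takes more setup at install time):

```
# build a RAID1 array from two blank disks, format, and mount it
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/storage && sudo mount /dev/md0 /mnt/storage
```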
Honestly, you do you. Stick to what works with your workflow and use case.
However, given that you’re in r/homelab, it’s reasonable to think you’re open to learning new things. With that said, Windows has tended not to be as stable as Linux (hence the dominance of Linux in the server world).
Windows’ approach to drivers and software wasn’t as clean as Linux’s. Uninstalling software was not guaranteed to remove everything in Windows.
The Windows license cost is another minus.
Plus, given that it isn’t open source, and given its dominance in the desktop world, lots of viruses tend to target Windows, and we don’t get patches in a timely manner. Plus, there’s a history of patches breaking things in Windows.
Linux and Unix tend to be simple and stable. Synology is a very good NAS, which combines the robustness of Linux with a fantastic GUI. I’d personally urge you to get another Synology or explore XPEnology.
But barring that, your use case today is simple enough and if you think Windows is sufficient, go for it.
If you want to also get some learning out of it, explore TrueNAS SCALE. It’s based on Debian and is fantastic. You can also run it under Proxmox for various VM and LXC magickery.
If you want the same resiliency as a NAS, you need multiple hard drives, so a mini PC won’t cut it. Single-drive redundancy is an oxymoron: once the disk goes, so does your redundancy. That said, if you are OK with a single disk because of cloud backups, no issue. I would just consider how long it would take to restore it all should the disk go. It’s not as fast as you think, especially once you get into the terabytes.
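Back-of-the-envelope, with made-up but typical numbers:

```
# restoring 4 TB from cloud backup over a 100 Mbit/s connection:
#   4 TB ≈ 32,000,000 Mbit
#   32,000,000 Mbit ÷ 100 Mbit/s = 320,000 s ≈ 89 hours ≈ 3.7 days
# and that assumes the provider and your link both sustain full speed
```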
If you want to run Plex for media streaming, you’ll find that the resources Windows consumes just by existing, plus everything else running, may impact your quality of digital life should more than one person stream at the same time. Just check the number of “Service Host:” processes running on your Windows machine; all the Windows-specific ones add up after a while.
Windows updates not only patch security flaws but also introduce new features or remove old ones. This can sometimes impact what you are doing with it, because Microsoft uses those changes to steer you toward their ecosystem of products. Updates can also break something that was working, because it isn’t a dedicated appliance meant to serve that one function.
Multiple NICs. I think there are mini PCs that come with those nowadays, and PCs in general can run multiple NICs. However, Windows networking used to be notoriously bad at managing multiple network cards. I’m not sure if that’s still true, as I don’t work with Windows much anymore, but if it is and that was in your plans, you might want to make sure it can do what you think it can with the version of Windows you get. They still have Windows Pro vs. Windows Home, right?
Those are some of the things I would consider. In any case, your post sounds like your mind is already made up. In the end, you will have to live with it, so what you think is really what matters.
The fact that you’re “most comfortable” with Windows is probably the number one reason why you should go with something other than Windows. I think you should always get out of your comfort zone and expand your knowledge. Sure, you could keep using Windows, but why not branch out! Hell, if you really want to take a leap of faith, load up TrueNAS CORE 🤣
Windows bad. Linux good. BSD better.
For real though: Windows costs money, and it uses a lot of resources. And the desktop version is missing vital parts you might want to use on a Windows server, like Domain Controller, DHCP server, web server, Hyper-V, etc.
Those reasons also have most people running Linux or even BSD, because they are pretty lightweight, especially when used headless. Also, being open source, they are mostly free of cost. And when you virtualize on a free and open source hypervisor like XCP-ng or Proxmox, you can run many more small Linux VMs than Windows VMs, as the latter need more resources.
I always recommend Windows to people who want a home server that’s easy to maintain. Homelabbing is more about learning and trying new things out.
A NUC with a nice-sized external drive can do a lot, and they come with Windows; not to mention Windows can run fine for free. A lot of the services people run are Windows apps running on Linux via Mono anyway. Anyone who has used Windows can install an .exe, but not everyone is willing to use the command line in Linux.
The home server and self-hosted subs are more focused on what you’re looking for.
Server-oriented Linux distros are designed with server workloads and high availability in mind. Desktop Windows isn’t. However, if you’re not running mission-critical services, who cares? Do whatever is most practical for you.
I can get behind your pragmatic analysis. If it works, is low-power, easy to manage, etc., then it might be a good choice! One thing to also consider: how future-proof would you say it is?
Personally, unless I need Active Directory, I actively avoid MS Server. One of the biggest issues for me is the lack of native Docker support. If I have to run WSL or a VM for Docker, then I’d rather just run Linux and cut out the middleman.
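On Linux the middleman just isn’t there; something like this is the whole setup (a sketch; the convenience script is the quick-and-dirty route, distro packages work too):

```
# install Docker on Debian/Ubuntu via the official convenience script
curl -fsSL https://get.docker.com | sh

# containers share the host kernel, so this starts in about a second
docker run -d --name web -p 8080:80 nginx
```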
I use Windows 10 Pro for my NAS/media server. I run DrivePool and it works great for me. I run a Pentium Gold G6400 and it’s more than enough power. It might use a bit more RAM, but I’ll buy another 8 GB of RAM before I spend eons trying to learn how to do something in Linux.
Thanks, hadn’t heard of DrivePool. I’ll look into it, but could you mention the key reasons you use it instead of the built-in Storage Spaces feature?
I used to use Stablebit Drivepool until I migrated to unRAID.
DrivePool is great. You can easily use mismatched drives, and you can have folder-level duplication (i.e., set individual folders to duplicate). If the pool becomes unreadable for whatever reason, you can just mount and read the individual drives from almost any operating system. You can also combine it with StableBit’s Scanner and CloudDrive. The Scanner monitors the drives and begins moving data off automatically if a drive reports errors. CloudDrive lets you use cloud providers (such as Google Drive and OneDrive) as normal “hard drives” in your computer. You can combine DrivePool and CloudDrive to merge online storage providers into a single large drive.
The old Windows Home Server was awesome. The problem with standard Windows is that I don’t remember if you can do some sort of software RAID or not. You could build it with hardware RAID on secondary drives, just share those out, and you would be fine. Not any different from the old Windows file servers of years ago.
Lots of great responses here; I won’t reiterate what everyone has already explained. The big benefits, IMO, are redundancy using better file systems like ZFS (TrueNAS) or BTRFS (Synology, unRAID), and in general better management of the drives and the data stored on them. These appliances support more robust RAID configs as well, so you run a lot less risk of losing data. The other big one is simplicity for what you need it to do. Creating an SMB share on a Windows PC isn’t hard, but it’s not nearly as simple as the three clicks it takes on a purpose-built OS. These OSes also usually have built-in solutions for hosting any other apps you may want to play with. That’s just my two cents.
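For reference, the DIY equivalent on Linux is a short stanza in /etc/samba/smb.conf (a sketch; the share name, path, and user are made up):

```
[media]
   path = /srv/media
   browseable = yes
   read only = no
   valid users = youruser
```

plus `sudo smbpasswd -a youruser` and `sudo systemctl restart smbd`. Not hard, but definitely not three clicks.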
I sit in r/datahoarder a lot, and the general consensus there is that BTRFS is unstable and should not be used; instead people should use EXT4 or ideally ZFS. I know ZFS is the gold standard and is expected to be more resource-intensive and RAM-hungry. Can you shed some light on why you’d use BTRFS?
I am by no means an expert, mostly a home tinkerer with a Plex server. I use BTRFS because my Synology supports it, and I use ZFS on my TrueNAS box. I also use SHR with my Synology, so BTRFS makes adding and upgrading drives really flexible as my media library grows. BTRFS and ZFS are very feature-rich; as you mentioned, ZFS is very RAM-hungry, which can be a limitation for people just looking to get into the server space on a budget. As I understand it, much of BTRFS’s unstable reputation comes from its built-in RAID 5/6 modes (Synology sidesteps that by layering BTRFS on top of regular Linux md RAID), plus the way it stores data, which can get very fragmented. EXT4 in comparison is pretty boring, but it works well, and if you’re just writing data to store it you might not need the features and overhead of the other file systems. Personally I have no real preference; I like my Synology and I like my TrueNAS machine, and as a hobbyist they both serve me well. I would take either over NTFS for a storage appliance.
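If it helps, the “feature-rich” part mostly means things like cheap snapshots and integrity scrubs, which look much the same on both (a sketch; the paths, pool, and dataset names are made up, and Synology normally drives this through its GUI rather than the CLI):

```
# BTRFS: read-only snapshot of a subvolume, then verify checksums
sudo btrfs subvolume snapshot -r /mnt/media /mnt/media@backup
sudo btrfs scrub start /mnt

# ZFS equivalents
sudo zfs snapshot tank/media@backup
sudo zpool scrub tank
```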