JayP-NAS 2.0 – A Real Home Server, Part 2.3: Picking an OS and RAID type

This part should probably be more like 0.5 than 2.3, but it’s also easier to use my build as an example for some points that we’ll be going over, so I think it works to throw it in before going over how to set things up on my server.

The Rationale

What are your goals?

My last NAS had 2 major problems from my point of view: I planned storage expansion poorly, and it wasn’t powerful enough to do some of the things I wanted to do not long after building it. That happened for a couple reasons, but mostly poor future-proofing. I had wanted to build a computer to serve 2 purposes: act as a NAS and be a desktop. I also tried to build it too cheap, and really painted myself into a corner with those decisions. This time around I want to lock in exactly what I want this system to do now, how long I expect it to last before I have to replace it, and ensure I can either easily upgrade it as needed or max it out in a given regard from the start.

So, what do I need from it right now? At least what the old one was doing right before it was decommissioned. Those tasks are (in order of CPU intensity):

  1. Occasional Bluray ripping/transcoding. Not often, but often enough. It took forever to rip the data from a Bluray then transcode it to something usable, and that’s with the CPU running at full tilt. A faster CPU will help, obviously, but my real goal is more threads. Locking the transcode to half the threads means the rip/transcode process will still take a while, but leaves the other half of my performance for everything else. I can do this in a few ways, my goal being to set up a VM I fire up only when I need it.
  2. Plex sharing. My Plex library is shared to ~10 friends. It’s peaked at a few people watching it at once, but my internet upload speed limits it more than the CPU’s ability to process that many transcodes. That said, gigabit fiber is Coming Soon (TM) to my neighborhood so I want to ensure the new machine can handle a couple 1080p streams at once without issue.
  3. Automated downloading and organizing of media. I’m not planning on going into the details of this immediately, but since the “magic” of making it work isn’t that big of a deal, I’m thinking I’ll write up another blog post about it later. There are guides out there, but a couple complicate things more than necessary and a couple aren’t quite detailed enough.
    1. Using something like Flexget + ShowRSS and a bunch of custom Flexget configs or FileBot scripts you can do this, but it’s difficult to set up and possibly a bit unreliable. That said, it’s also really low overhead on performance.
    2. Alternatively, there are tools like Sonarr, Radarr, and Headphones which make the entire process very easy on you, add in some cool features, and have nice user interfaces. However, the combined impact of all of these tools will have a higher performance overhead than the Flexget/ShowRSS combo. It’s not a huge difference but the difference exists.
  4. Remote management. AKA: sshd. That said, I also really wanted IPMI support. I’ve had a few situations with my current machine where it hung during reboots for no discernible reason. The ability to fire off a quick hard reboot via IPMI is a huge plus to me. I also want to see about limiting the IPMI’s allowed devices to my laptop via the local network (192.168.x.x) and my VPN subnet, so I could do it from my phone even when I’m away from home. That’s far more complex than I’m ready to approach but it’ll be interesting to set up.
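As a sketch of the half-the-threads idea from task 1, assuming ffmpeg as the transcoder (the file names are placeholders, and exactly how tightly `-threads` caps a given encoder depends on the codec):

```python
import os

# Budget half the machine's threads for the transcode, leaving the
# rest free for Plex and everything else. cpu_count() can return None,
# so guard against that.
threads = max(1, (os.cpu_count() or 2) // 2)

# ffmpeg's -threads option limits how many threads the encoder uses.
cmd = [
    "ffmpeg", "-i", "bluray_rip.mkv",
    "-c:v", "libx264", "-preset", "slow", "-crf", "18",
    "-threads", str(threads),
    "movie.mkv",
]
print(" ".join(cmd))
```

Pinning the VM that runs this to half the cores accomplishes the same thing one layer up, which is the route I’m leaning toward.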

What about what I want it to do in the future? Well, I’ve already alluded to this, but I want to set up some VMs and Docker containers to replace certain features I had previously and/or add to what I’m doing now. I also want to centralize more of my data. The original JayP-NAS was pretty damn centralized already, but only in respect to what’s at home. I want to set up an off-site backup (once I get gigabit internet; before that there’s no point), as well as switch up my cloud-synced data a bit. I want to make sure the new system is capable of uploading to whatever backup solution I pick as well as any cloud services I want to use.

Let’s Talk About Storage

If you’re building a NAS, I think your first decision needs to be how your storage will work. So, to me, here are the questions to ask yourself:

  1. How much data do you have right now?
    1. I suggest answering this question in terms of “from all devices”: your laptop, desktop, tablet, the SO’s stuff, etc. Anything you might want to store or back up on a NAS, count it.
  2. How quickly does that number grow?
    1. If you’ve neared your limit and trimmed things, do your future self a favor and pretend that trimmed stuff is still around.
    2. Think of this as a worst case scenario. If your new NAS has dozens of TBs of free space you’ll go a little nuts using it at first.
  3. What kind of data is it?
  4. How is that data accessed?
  5. How frequently is that data accessed?
  6. How much will it suck if your data disappears because of a hardware failure?
  7. How much time/effort can you dedicate to recovering that data after a hardware failure?
  8. How much will you rely on software to organize your data? What happens if that ruins everything?
    1. This is vaguely a discussion of version controlling. It also gets to the topic of snapshots, rolling backups, etc.

That seems simple enough, but each question can easily complicate your decision a little. I’ll go through those questions again with my own situation, but really think about your own situation if you’re doing this for yourself, as “well, my thing is basically the same as his” is not true here, trust me.

  1. How much data do you have right now?
    1. Right now I’ve got ~11TB of data.
  2. How quickly does that number grow?
    1. Realistically ~25GB/month, but it can get as bad as 5 times that, so let’s call it 100GB/mo.
    2. I also want to acknowledge that this is not an easy number to figure out. Since I’m coming off another NAS device I have the data to extrapolate these figures. Here’s what the numbers are really about: I effectively add just over 1TB worth of new content to my NAS per year. If I build a new NAS with 16TB usable storage and copy my 11TB over, that’s 5 years before I fill it up again, and that’s without accounting for randomly deciding to start ripping Blurays at a higher quality and making bigger files.
  3. What kind of data is it?
    1. The vast majority is media: movies, TV shows, music. There is also an archive of digital photos spanning back over a decade. There are some important “historicals” like old tax stuff. I also backup game downloads so I don’t have to redownload them every time I want to play a beloved game again.
  4. How is the data accessed?
    1. Mostly I’m just moving new files around to keep things organized as they come in, playing back a few episodes of TV shows a day, or backing up downloaded games. All of this is done mostly via Windows File Sharing, AKA SMB, or Samba for my Linux laptop (daily driver) or the gaming desktop. Speed isn’t as important to me as it might be for other users.
    2. Data is also shared remotely via Plex.
  5. How frequently is that data accessed?
    1. Although I’m utilizing some of that data every single day, the vast majority of data sits unread day to day. With my friends also accessing my media collection via Plex things are accessed quite randomly but at nothing resembling a consistent rate.
  6. How much will it suck if your data disappears because of a hardware failure?
    1. Pretty hard, but not as bad as it could. I regularly run a program called VVV (Virtual Volume View), which catalogs my NAS storage. I use this to keep a record of what I have. The VVV database is kept locally and backed up to Google Drive, so in the event of a catastrophic failure I can refer to it to rebuild my collection. It won’t be fun, but it’s doable. Worst case, a chunk of my media library is backed up on CDs and DVDs, but nothing has been backed up that way in the last 3 years. Music is backed up to Google Play Music, which is easily redownloaded. As for the data that can’t be reobtained, personally created things and my digital photo library, it is currently synced with Google Drive. The biggest problem is the time it’ll take to redownload all of the TV shows, movies, and music.
  7. How much time/effort can you dedicate to recovering that data after a hardware failure?
    1. This is going to get less about my personal situation and more about the reality of any given failure situation. Types of hardware failures and likelihood of recovering that data:
      1. Storage: If you have one hard drive and it dies, that data is likely gone. Depending on the type of drive failure it’s possible to recover the data yourself, but not very likely. There are services that can recover the data, but they’re typically quite expensive. I’ll explain the types of RAID later, but for now: if you have a RAID1, no biggie, you just replace the failed drive. If you have a RAID5 or RAID6, you can deal with one or two failures respectively. If you have RAID10 you can deal with a few failures (potentially*).
      2. Nebulous Device(s): This can be the dedicated NAS device itself if you go that route, or the other components of a custom built NAS like the CPU, motherboard, power supply, etc. If you don’t have a RAID of any kind, you just pull the drive out, stick it in something else, and your data is fine. If you do have a RAID, you can usually replace the failed device and recover the RAID directly. If you have a software RAID of some kind (ZFS, Linux mdadm, etc.) you’ll need to make sure the replacement device supports that software RAID type, and in some cases that the replacement device’s version of that software RAID solution isn’t older than what you had.
      3. Hardware RAID card: If you go with a hardware RAID card and it fails, your data is probably safe. You can replace the RAID card and a new one should be able to recover the RAID. Typically you need to replace the card with an exact identical one, though, sometimes even down to the firmware version. I believe certain brands of RAID card are better about this than others, as I’ve heard of people with failed Dell PERC H310s dropping in PERC H710s and it just magically still works.
  8. How much will you rely on software to organize your data? What happens if that ruins everything?
    1. Personal story time. The first NAS I owned was a Seagate BlackArmor or some dumb model name, a 1TB storage device. It’s basically an external hard drive with an ARM board and a really stripped down version of Linux with a web interface to manage certain aspects. It was OK for the time. One day, they did a software update and added some new media management tools, specifically some DLNA support. I had some DLNA devices so I wanted to play with it. I enabled the feature and immediately saw the drive was running super slow. There was a little window on the web interface showing it was scanning my media library. OK, no biggie. It’s picking up metadata or something for the DLNA service. I let it run overnight. I came back the next day and my music player couldn’t play a single file. I poked around… all my MP3s were gone. Gone as in the nicely organized Music folder I had was empty. What was really strange was the NAS’s used storage didn’t change at all. A cursory poking around showed that, for whatever ridiculous reason, the Seagate’s media management service’s idea of organizing media was taking every media file (audio, video, and picture) and dumping it all into one folder. After losing my shit and talking to Seagate’s technical support, the best I could do was disable that service and reorganize all my data… which is how I discovered 2 new tools: VVV, which we already talked about, and MusicBrainz Picard, which I adore for how well it organizes music. I was angry but I got my media organized again. Heck, it was a lot better than it had been, which is a silver lining I guess.
    2. The point is… if you’re going to rely on some software or service to micromanage your stuff, you’d better be ready to go all in with it. If you’re very careful and purposeful in how you organize things, only use software that respects how you organize things, not something that’ll do it for you. If you don’t care how things are organized, I guess it doesn’t matter, but don’t be upset if you change software and the new one makes a mess. Worst case, using other software to keep records of where things should be helps.
    3. There are also features like snapshots that can help in these situations. Snapshots are less than a backup, but more than a record of the files. The Wikipedia article on this topic is actually quite a good summary of the idea. It’s a little dense in language, but to paraphrase: a snapshot can be thought of as, “this is the current state of this file system at this exact moment.” You can have more than one snapshot, and it’s fairly trivial for a good snapshot system to roll back to a previous state if necessary. It requires little overhead in terms of processing power or time, but does require more storage (albeit a fraction of what backing up locally would). Snapshots are great for systems with a lot of users changing things regularly; you can easily roll something back if someone or something breaks stuff.
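To make the snapshot idea concrete, here’s a toy model in Python. This is not how any real filesystem implements it (real copy-on-write systems only preserve blocks that actually change, which is where the “fraction of a backup” cost comes from), but it shows why rolling back is trivial. All the names here are made up:

```python
class ToySnapshotStore:
    """Toy model of snapshots: capture state, change things, roll back."""

    def __init__(self):
        self.live = {}        # file name -> contents
        self.snapshots = {}   # snapshot label -> frozen view of live state

    def write(self, name, data):
        self.live[name] = data

    def snapshot(self, label):
        # A real copy-on-write filesystem copies nothing at this point;
        # it just starts preserving old blocks as new writes come in.
        self.snapshots[label] = dict(self.live)

    def rollback(self, label):
        self.live = dict(self.snapshots[label])


fs = ToySnapshotStore()
fs.write("Music/track.mp3", "original bytes")
fs.snapshot("before-reorg")
fs.write("Music/track.mp3", "mangled by bad software")
fs.rollback("before-reorg")
print(fs.live["Music/track.mp3"])  # back to "original bytes"
```

Had that Seagate fiasco happened on a snapshotting filesystem, recovery would have been one rollback instead of weeks of reorganizing.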

Picking an OS & RAID

For most people setting up a home server that’s mostly going to function as a file server/NAS, you have quite a few choices. They really come down to two types: purpose built OS’s, or general server OS’s where you build out only what you need. There is also a third option in setting up a hypervisor like ESXi.

Purpose Built OS

The goal of a purpose built OS is simple: build in all the features a user needs without the fluff for things they don’t. A good appliance OS will also include things like simplified virtual machine and/or container installation, plugins or apps for common tools, and a solid user interface that’s typically accessed via a web browser. These days there are a few major choices.

  1. FreeNAS: Based on FreeBSD, FreeNAS is the go-to choice for the homelab/data hoarder communities. It provides a relatively simple way to set up enterprise class features for end users.
  2. OpenMediaVault: Based on Debian Linux, OMV is a commonly suggested alternative to FreeNAS. Generally it’s considered more user friendly, but also not quite as rich in regards to enterprise class features.
  3. unRAID: This one’s a bit of a different player in the market. Unlike its counterparts, you do have to purchase a license for unRAID. There are a lot of fans of it for its simplicity and feature set, but personally I think it’s weighed down by a few major points.

There are possibly more options out there right now but these are the three big players. At the end of the day, if you’re going to go this route I think you should pick one of these three. They have the largest communities and you will find the most help for them. That doesn’t mean that other choices are not valid, just that you’d better know what you’re doing and have a reason to go with something else. The more unfamiliar you are with things, the more you want to choose something that has a large community to help you.

There are a lot of reasons to pick any of these three, as well as reasons not to. Personally, I wasn’t considering OpenMediaVault too seriously because if I was going to set up a Linux system I’d rather do it myself with a general purpose OS install. So instead, let’s quickly run over FreeNAS and unRAID and their pros and cons as I see them.

FreeNAS is likely the most popular of the three. It supports ZFS natively and is a darling of the home server/data hoarder communities for its robust nature. It supports jails, a kind of container/VM mix that allows you to completely section off certain tasks from one another for security. It has a web interface that’s easy to navigate for the technically minded and easy enough to pick up for those willing to learn. It’s updated at a decent pace but is held back a bit by the fact that FreeBSD doesn’t update too fast itself. One thing against FreeNAS: being based on FreeBSD, there is a learning curve even for those of us with a Linux background. There are certain fundamental differences between Linux and BSD. That’s not inherently a bad thing, as a lot of the differences are arguably better than Linux in terms of security and whatnot, but these are differences that exist and might not be readily apparent. FreeNAS also supports a lot of applications home server users would want, like Plex Media Server. However, being based on FreeBSD does mean certain things that are easily done in Linux are a mess to make work in FreeNAS, if you can do them at all.

unRAID is an interesting option. I’ve never used it but did enough research to figure out that, at least for my purposes, it’s not a good choice. For a lot of home users unRAID is actually a great option, but if you’re even a tiny bit paranoid about your data you should consider the faults of their setup. The biggest problem I have with unRAID is that it effectively makes a software RAID4. RAID4 is such a rare choice these days that I genuinely think unRAID is the only thing that does anything even kind of like it. Why is it so rare now? Probably because it’s bad. For those unfamiliar with RAID but who want the nitty gritty details, check out this Wikipedia article on the standard RAID levels. To sum up, RAID4 uses a dedicated disk for parity, as opposed to RAID5 which uses a disk’s worth of space spread out among all disks for parity. RAID6 does the same as RAID5 but with 2 disks’ worth of space instead of just 1. There’s a ton of math involved in this, but to keep things simple: by spreading the parity across all drives you avoid bottlenecking every write on a single dedicated parity disk, which improves both performance and wear. With the incredibly large disk sizes we’re dealing with these days even RAID5 is looked down on, as the time it takes to rebuild a RAID after a drive failure is long enough that another drive is that much more likely to fail during the rebuild. I think unRAID’s single disk of parity approach comes down to a “good enough” mentality. For home users specifically you’re not dealing with mission critical uptime and reliability requirements, so who cares? Well, anyone using it to back up the hundreds or thousands of pictures of their newborn probably cares, for one. My data’s not that critical, I just dread the idea of having to re-obtain 10+TB of data so much I’d rather have a lot of trust in my setup.

There are other drawbacks to unRAID’s system, namely that it doesn’t provide bit rot protection. Bit rot is a problem that will be more and more common for home users as time goes on. The idea of bit rot is that when storing data on magnetic media (such as hard drives), if the data isn’t accessed regularly then bits can eventually flip from a 1 to a 0 (or vice versa) at random. When you’re someone like me with 10+TB of media files, most of which aren’t accessed at a rate remotely resembling regular, this can be a big worry. It will take an incredibly long time for bit rot to have a serious impact on any one file, but it can happen. The issue I have with unRAID is it seems like they and their community of users approach bit rot as if it doesn’t exist. In my research I saw a lot of “it’s weird, this file’s totally corrupted but it was fine when I last looked at it 3 years ago” type issues, and every time I thought, “sounds like bit rot,” the response was, “You probably did something wrong, but it’s possibly some outside force no one can do anything about.”

All that smack talk about unRAID does lead me to its one killer feature: you can just throw drives at it willy nilly, it doesn’t really care. Got a mix of large drives that need a new home? Throw ’em in there. Running out of space but can only afford a couple new drives? Fine, go for it. That’s a huge advantage for the majority of home users and shouldn’t be dismissed. It’s also probably one of the easiest to use and has a ton of software growth options thanks to its smart use of VMs and containers. unRAID is most certainly worth considering for home users, but I want people to be wary of its pitfalls. If the integrity of your data is paramount, I can’t suggest it.

General Server OS

First off, let me be clear about one thing: when it comes to UNIX based OS’s, there isn’t really a difference between a server and desktop version besides what software is prepackaged with the ISO. A desktop version will have some GUI system and desktop environment; a server build of the same OS won’t. The server build may bake in certain things like a BIND DNS server or VM hosts; a desktop version won’t. For most of the major Linux distros, anything you can do on a server version you can do on a desktop version and vice versa, it’s just a question of how much crap you have to install to make it work. I say this because some people know they’ll want a desktop interface to run certain tasks or just to be more comfortable. If you do, frankly you should consider installing a desktop version of your preferred OS. If you decide to go for the server version you can still add in that desktop support later, of course.

When deciding if you want to use a general purpose OS or something purpose built, you’re likely going to lean towards general purpose for these reasons:

  1. Comfort: If it’s what you’re comfortable with, it makes sense. My last home server was running Linux Mint (it started as a 24/7 desktop and file server), I run a couple Ubuntu Server based VPS’s, one for this WordPress blog and another for my custom rolled OpenVPN/Pi-Hole VPN solution, and I’ve been using Linux for over half my life now. That’s a big deal. I’ve used BSD a bit but not enough to be immediately comfortable with it, which matters. I also prefer having the same or similar environment on all my systems so I don’t end up running the wrong version of a command by haphazard mistake.
  2. Flexibility: If you start with a basic server OS you can add whatever you want to it as you go. With a prebuilt you either need to roll a VM and set up a general purpose OS inside it to do a specific task, find a container to do it and hope it works without too much fussing about, hope that someone else wanted to do the same thing and there’s an addon or plugin for it, or get busy making your own. Admittedly, most things you’ll be wanting to do are likely supported. Any prebuilt OS that doesn’t have some way to integrate torrents, Plex, and some automated way(s) of downloading stuff isn’t worth considering. If you think you might go farther than the basics, or want finer control of any of those things, then you’ve gotta get picky or do it yourself. The big benefit of a general purpose OS for a home server is you’re allowing yourself extreme flexibility.

Picking a RAID type

At the end of the day, if you’re building a home server you’re probably going to end up setting up some form of RAID for your storage, but you don’t necessarily have to. Your decision will be based on a few factors: efficient use of storage capacity, performance, redundancy (often in the form of parity), and ease of setup and use. I’m tempted to discuss all the types of RAID and their pros and cons, but I feel like that’s been done quite a bit. Instead, here are the potential types I think people building home servers should actually consider.

  • JBOD: “Just a Bunch of Disks,” a JBOD is the most basic solution. You keep the full storage capacity of your drives but gain no performance or redundancy. The real advantage is it couldn’t be easier to set up.
  • RAID10: a striped mirror, RAID10 is a combination of RAID1 and RAID0. It requires a minimum of 4 disks and will require an even number of disks no matter what. You gain quite a bit of performance and redundancy but you’re losing half of your storage capacity to the mirror. Worth considering if you get very, very large drives or a ridiculous number of individual disks.
  • RAID5 (or raidz1): Some people will still be using this, but it’s virtually a universal “do not recommend” at this point. To keep things simple I’ll leave it at this: if you go with a RAID5 with disks of 2TB or larger, the math shows that your chances of a second failure during the array rebuild are well above where most people will be comfortable.
  • RAID6: Probably the most popular array type for home users. Requires at least 4 drives, but if you’re going to use 4 drives just use RAID10. You gain more usable storage space with the same drives versus RAID10, sacrifice little performance versus RAID10, and gain more than enough redundancy (up to 2 drive failures at once) for home users.
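That RAID5 warning isn’t superstition; it falls out of unrecoverable read error (URE) math. Consumer drives are commonly rated around one URE per 10^14 bits read, and a rebuild has to read every bit on every surviving disk. Treating the spec-sheet rate as an independent per-bit probability is a crude, pessimistic model, but it makes the point:

```python
def rebuild_survival(drives, tb_each, ure_per_bit=1e-14):
    """Chance a RAID5 rebuild reads every surviving bit with no URE."""
    # A RAID5 rebuild reads all (drives - 1) surviving disks in full.
    bits_read = (drives - 1) * tb_each * 1e12 * 8
    return (1.0 - ure_per_bit) ** bits_read

# Four 4TB disks in RAID5: one dies, three must be read end to end.
p = rebuild_survival(drives=4, tb_each=4)
print(f"Survival chance: {p:.0%}")  # roughly 38%
```

Well under a coin flip’s odds of a clean rebuild under this model is exactly why RAID6’s second parity disk (and real backups) get recommended instead.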

Implementing a RAID can come from hardware or software. Hardware RAIDs are becoming less popular with the home server crowd in favor of the benefits that modern software RAID solutions provide. If you’re going to do it, get a RAID card your preferred OS supports without issue, one that supports enough drives, and one that can work as both a hardware RAID card and a “proper” HBA. That basically means: buy an LSI card that supports what you need and can be flashed between IR and IT mode.

As for software RAIDs, I want to direct you to SnapRAID’s comparison chart of the major contenders here. This is a very fair and reasonable comparison directly from one of the contenders, which itself is pretty interesting. There are some finer points that they don’t go into quite enough detail on in my opinion, though (probably because they are trying to sell you on their product).

  1. ZFS is the big dog in the home server/data hoarding communities. This is the one everything else is really trying to go toe to toe with. ZFS has a lot of big advantages over its competitors, but a few downsides as well. The biggest downside of ZFS is what some call the ZFS tax. Unlike many of its competitors, you cannot easily expand a ZFS RAID. Even if you plan for it, it’s not the easiest process. Instead, most of us that commit to ZFS walk in knowing that when it’s time to expand our storage pool we’ll be replacing all the disks at once with larger disks. It’s not convenient and it’s expensive, but when you’re not expanding your storage you’re getting the most out of your disks and some of the best protection for your data using ZFS raidz arrays like raidz2 or raidz3.
    1. ZFS is supported natively on FreeBSD and its derivatives like FreeNAS, and ZFS on Linux (ZoL) has existed for a while. It’s now also baked into Ubuntu 16.04, and a few other Linux distros are picking up baked-in support for it. It’ll be a long time before we see ZFS as part of the Linux kernel due to licensing restrictions, hence a lot of the discussions you may see about ZoL having worse performance than the native support in FreeNAS/BSD. It’s difficult to argue against the reality of that performance difference, but the vast majority of home users won’t be able to utilize the full speed of even ZoL’s ZFS implementation.
  2. Btrfs (I’ve heard this pronounced as both “Better FS” and “Butter FS,” so make of that what you will) is an option that’s frequently discussed and almost as frequently dismissed. Right now the biggest problem with Btrfs is that its software RAID5/6 support is best described as a broken mess. That’s… not good. Red Hat is abandoning the project as well, which doesn’t look good either. The only thing it has going for it is that it seems like a good idea and could be cool, but with its problems and people backing away from it I can’t help but think it’ll never get to where it could be.
    1. Btrfs is intended to be the native Linux alternative to ZFS. It’s quite a bit younger and that’s a big part of the reason it’s still considered unreliable.
  3. SnapRAID/MergerFS (or another “unifying FS” solution) is an interesting option that’s gaining ground these days. Basically you’re pairing up two different systems to create a drive pool that could be compared to a ZFS system but without some of the disadvantages. A unifying file system allows you to throw together a bunch of disks, in some cases without even formatting them, and they’ll appear as one. MergerFS itself does a lot of smart stuff: if the file tree architecture is the same on different drives it will merge them together when you’re looking at it. It also allows for lower power use, as only the necessary drive(s) will be spinning at any given moment. SnapRAID then sits on top of that drive pool and provides RAID features like data redundancy, file integrity checking, and the ability to add disks at will.
    1. SnapRAID seems to be supported on any OS you might want to run on a NAS, and there are a number of unifying file system solutions out there to pick from. The big advantage of this is flexibility without losing too many key features. The only real disadvantage in my book is that you’ll gain no performance versus a JBOD setup.
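To give a feel for the SnapRAID half of that pairing, a minimal snapraid.conf looks something like the sketch below. The mount points are placeholders, and you should check SnapRAID’s own manual for the current syntax:

```
# One (or more) parity files, kept on a disk at least as large
# as your biggest data disk.
parity /mnt/parity1/snapraid.parity

# Content files record the state of the array; keep several copies.
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

# The data disks being protected.
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
```

Because parity is computed by a scheduled `snapraid sync` rather than on every write, changes made since the last sync aren’t protected, which is the trade-off for all that flexibility.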

Frankly, SnapRAID & MergerFS is likely the ideal solution for the vast majority of home server users if speed isn’t a concern. The reason I didn’t go with it is that I do like saturating the gigabit link between my devices, and simply that I’ve never used it. My last server used ZFS; I’ve been aware of its pitfalls, decided it was worth it then, and still agree with that decision now.

Chances are there will be a mix of RAID types and RAID methods that you’ll be considering. Do you go RAID10 with a few 6TB drives, RAID6 with a bunch of 4TB drives, or get a couple 10TB drives in RAID1? You can narrow down your choices by throwing your options into a good RAID calculator, like the great one ServeTheHome provides here. Pick a few different options you’re considering, then run the math on how much each will cost you. Then you can add cost into your decision making process, which should help weigh the other factors. You might see that running a whole bunch of smaller (cheaper) drives in RAID10 costs about as much as, or less than, a few big and expensive drives in RAID6.
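The math a RAID calculator runs is simple enough to sketch yourself. The prices below are made up for illustration; plug in real street prices before drawing any conclusions:

```python
def usable_tb(layout, drives, tb_each):
    # Usable capacity for the layouts discussed above.
    if layout == "RAID1":
        return tb_each                    # everything mirrored
    if layout == "RAID10":
        return drives // 2 * tb_each      # half lost to mirrors
    if layout == "RAID5":
        return (drives - 1) * tb_each     # one disk of parity
    if layout == "RAID6":
        return (drives - 2) * tb_each     # two disks of parity
    raise ValueError(f"unknown layout: {layout}")

# (layout, number of drives, TB per drive, made-up price per drive)
options = [
    ("RAID10", 4, 6, 180),
    ("RAID6",  6, 4, 120),
    ("RAID1",  2, 10, 300),
]
for layout, n, tb, price in options:
    cap = usable_tb(layout, n, tb)
    cost = n * price
    print(f"{layout}: {cap}TB usable, ${cost} total, ${cost / cap:.0f}/TB")
```

Run your own candidate layouts through something like this (or the linked calculator) and the cost-per-usable-TB column usually makes the decision a lot easier.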

Dang, That’s a Lot to Take In

I guess it’s clear we’re getting serious now. It’s important we answer all of these questions, though, as a lot of storage solutions do the same things in different ways, and knowing what we need/want will help us decide which option is right for us. At the end of the day, with all of this newly obtained knowledge, you should be able to answer these questions:

  1. What is our data integrity choice?
    1. Do you want a mirrored RAID to allow for a disk failure? Maybe you want disk parity to allow for a disk failure while also maximizing capacity. If your data sits without being accessed for long periods of time, you may want bit rot protection. How are we protecting ourselves against mistakes? Snapshots, manual database creation, etc.?
  2. How big will the storage pool be?
    1. If you’ll only need 4TB worth of storage over the next few years, you’re in luck, since those drives are cheap and plentiful. If you need a lot of storage they might not make disks big enough to keep it all on one. Even if they do, it might be more expensive to get one than to spread it out across a few disks.
    2. How big is each disk? How many disks? Just one? Two? Maybe a bunch?
  3. Can my solution grow as my needs grow?
    1. Maybe you’re comfortable replacing an entire system in a few years. Maybe less, maybe more. Maybe you want to bake upgrading right into the plan.

So What Does This All Mean?

Really, you need to decide for yourself what’s important. I’ll give you a couple of examples I’ve seen from people on forums and subreddits and what I’d recommend they do.

  1. My SO and I take dozens of pictures of our kids a day. We don’t want to rely solely on the cloud for storing these. What do we do?
    1. If this is you, don’t waste your time doing anything custom. Get an off the shelf QNAP/Synology/Asustor NAS device, pick up a set of drives, and set up a RAID1, RAID10, or RAID6 to be safe. If you’re hell bent on doing something custom, it’s not just to store pictures of your kids; it’s so you can do other stuff. That other stuff will be your real deciding factor for a custom rig.
  2. I have a lot of music, movies, and/or TV shows. I want something that can store it, distribute it around my home network, and maybe set up some stuff to automate organizing, obtaining, and playing back that media.
    1. You’re in the same boat as me. I went way overkill, so you can back down some. For someone that’s not as invested in the whole thing as I am: first figure out how much usable storage you need by taking how much you’re using now and multiplying by 2 or 3. I recommend leaving yourself a lot of room to grow, but if you go with a RAID type that you can grow on the spot, just keep in mind that you’ll have to do so at some point. Run through a RAID calculator to figure out the number of drives you’ll need in a RAID type you’re comfortable with. Then I’d go with a decent home server build; budget ~$750-1,000 for the PC before drives to give yourself the performance to do what’s needed now and enough overhead to last quite a while untouched. Make sure your case/chassis can support how many drives you’ll need now plus how many you think you’ll need to add in the future. I definitely recommend sticking to hardware that you can grow into or upgrade later, as replacing the whole rig is not a fun task.
  3. I know I need a NAS, but I don’t know what I need. I want to do X, Y, and Z. Halps meh.
    1. Cool story, bro.
    2. There’s no good decision to be made without knowing what you need or want. Take an old computer out of a closet or pick something up from Goodwill, throw some drives in it, and experiment. This gets into home lab territory, where you’re just futzing around with things and not committing to anything. Don’t put anything on it you can’t easily recover, and keep futzing around until you have an idea of what you want.
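As a footnote to example 2’s sizing rule of thumb, the arithmetic is worth writing down once. The numbers below are mine (11TB today, roughly 1TB of growth a year); substitute your own:

```python
current_tb = 11.0         # what you're storing today, across all devices
growth_tb_per_year = 1.0  # my effective rate; measure your own if you can

# Rule of thumb from example 2: provision 2-3x what you use right now.
target_usable_tb = current_tb * 2

headroom_years = (target_usable_tb - current_tb) / growth_tb_per_year
print(f"Aim for ~{target_usable_tb:.0f}TB usable "
      f"(~{headroom_years:.0f} years before it fills)")
```

If the years-of-headroom number comes out shorter than how long you expect the hardware to last, either provision more up front or pick a RAID type you can grow in place.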

Posted by JP Powers