Storage is king, and it always has been. Since the beginning, storage has been as costly as it is critical. As the processing ability of systems expanded, so too did the size of the assets they handle, and thus the demand for bigger disks.
Even today, most pre-built servers are designed to scale with storage through the use of front-bay loaders or even cloud offerings. It's also very common to see an option for a "storage-first" system where all other system aspects take a back seat to the primary need: more space.
With this project, we will be building one of these storage-first systems where the hardware is spec'd specifically for the task of managing a vast amount of storage (on a budget).
The primary need for this storage will be to store virtual machine replicants: meet "Deckard."
Built For Storage
Why not buy a NAS? 'cause with a NAS you're tied down: as extensive as the control may be, it's no match for the amount of options you'll get out of putting together your own x86 dedicated storage machine. A NAS with Hyper-V? Now we're talking.
On top of that, a NAS isn't as easily upgraded as an x86 system is. USB3 changed the game and made speedy data transfers with the convenience of USB a reality; anyone owning a NAS locked down with USB2 felt envious. Plus, if you ever need to expand your storage, there's always a bigger RAID controller out there.
In order to keep true to our "storage-first" mission statement, we'll be very selective about our hardware so that nothing is wasted: our processor will be light, our disks hefty, our form-factor tight. We'll also be keeping an eye on our budget, at least in the sense that we won't be directing the bigger bucks towards anything but the storage itself.
A server is most often found running one of two OS families:
Transparent, flexible, powerful. If you can only spare the time to learn a single OS, let it be this one and you shall be free. Linux varies a bit from distribution to distribution, but the overall "feel" and base layer are the same. Desktop Linux distros often ship with multiple UI shells for the user to pick from, but a server-based Linux OS will have you mostly interacting with BASH; get used to text-based interfaces and you can whip up anything you need in a matter of minutes. Better still, Linux-based OSes are typically free.
As far as Linux alternatives go, and depending on what you intend to do with your server, Microsoft's Windows Server editions are a great option, despite the new Microsoft update schema. Traditional Windows Server uses a UI not unlike desktop editions of Windows: the shell is familiar and any Windows "power user" can get around and set things up with ease. Later editions of Windows Server took a few notes from Linux and have started to support the minimal-UI route: Command Prompt & PowerShell replace BASH for a Linux-like "Windows Server Core" experience. Windows Server... is not cheap. But you can often play around with trial versions for a few months or get deep discounts on your own copy.
When it comes to storage...
With x86 hardware we're spoiled for choice when it comes to an OS. The usual suspects of Windows and Linux flavors come to bat, but it's worth mentioning there are some OSes specifically designed to handle NAS/SAN tasks; seek Openfiler and FreeNAS.
If you're wanting to get just a little bit more out of your storage solution than just.. well... *storage*, then you can look into the full Windows and Linux server OSes. Each has Type 1 and Type 2 hypervisor offerings, providing extensive flexibility for future projects. Why not start your own lab with the extra resources your big-ass-not-a-NAS provides?
An important note: whatever OS you select is going to have to provide some sort of trickery to enhance your storage. Hardware RAID is great, but mistakes happen and sometimes what you planned for is not enough. An example of such software trickery would be Microsoft's Storage Spaces. On the outside it's basically software RAID, but under the hood we're also getting advanced protection and deduplication features for our storage, as well as smarter handling of redundancy from the Windows OS.
If you've already planned on picking up a copy of Windows Server, Storage Spaces can also save you a pretty penny on a hardcore RAID controller - a $20 card with enough ports will be all you need for all the fancy stuff, like automatic SSD caching.
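For the curious, the parity "trickery" behind setups like a Storage Spaces parity space boils down to XOR math. Here's a toy Python sketch (illustrative only, not any real storage API) of how one lost disk gets rebuilt from the survivors:

```python
# Toy illustration of single-parity redundancy, the idea behind
# RAID-5-style software storage. Not a real storage API.

def parity(blocks):
    """XOR a list of equal-length blocks together into one parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving_blocks, parity_block):
    """Recover a single lost block by XORing the survivors with parity."""
    return parity(surviving_blocks + [parity_block])

# Three "disks" worth of data plus one parity disk:
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d1, d2, d3])

# Lose d2, then recover it from what's left:
recovered = rebuild([d1, d3], p)
assert recovered == d2
```

The takeaway: one disk's worth of capacity buys you survival of any single-disk failure, which is exactly the trade the quick capacity math later in this article accounts for.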
Since we're focusing on a "storage first" approach, we're gonna cut every other corner that doesn't get in the way of our main priority. A small form-factor, inexpensive (but reliable!) hardware, and an overall emphasis on low power demand, since all we really want to do is spin some disks. Nevertheless, the hardware selected will still be able to do a few fun things like run a VM or two (or ten).
As for that storage, we'll just "build around it." Our chassis selection should feature a decent amount of drive bays with room to improvise. Surprisingly, we can go pretty small on the case and still have a lot of room for disks. A Mini ITX motherboard gives you flexibility on chassis size if you only need one expansion slot; a Micro ATX case buys you extra room if you want more.
Motherboard & CPU
Every couple of years, a new built-in CPU combo board series will be released with several manufacturers offering their own configurations and bringing much joy to the DIY-appliance crowd. These boards are often under $100 and have plenty of power to run things like a Windows server or any Linux hypervisor offering. They can even stand in as a small desktop, but your mileage may vary.
A common set of traits found among these built-in CPU combo boards would be:
- Mini-ITX or similar tiny form-factor
- Dual DIMM slots (and no more..)
- Dual NIC ports
- Intel Atom, Celeron or similar low-power draw CPU
- A hard cap on expansion, but it is available
- Moderate performance, but flexible
The first run for Deckard will settle on a Gigabyte GA-C847N-D: a tiny board with a built-in Celeron, dual NICs, DDR3 support and a PCI slot.
It lacks USB3, but Deckard will be oriented towards using the network for file transfers and management.
Memory & Storage
First things first: when it comes to a server, not all memory is equal (or even an option). Most "real" servers are going to need ECC and/or registered memory. Again, checking the manual for your selected system to see what it prefers is going to pay off here.
Without getting too technical, ECC stands for Error-Correcting Code. This means the memory is able to detect and correct common cases of data corruption on the fly, making it far more reliable. Registered memory has a buffer (a register) built into each DIMM that takes load off the memory controller; again, the idea here is reliability, this time at higher capacities.
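For a feel of what ECC actually does, here's a toy Python sketch of a Hamming(7,4) code, a classic single-error-correcting scheme in the same family as what ECC DIMMs implement in silicon (illustrative only; real modules use wider SECDED codes in hardware):

```python
# Toy single-bit error correction - the same idea ECC memory uses,
# done here in software for illustration only.

def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and flip a single corrupted bit, then return the data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # syndrome = 1-based position of the error
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                      # a "cosmic ray" flips one bit in memory
assert hamming74_correct(word) == [1, 0, 1, 1]
```

Three extra parity bits per four data bits is the toy version of the trade; real ECC DIMMs carry one extra chip's worth of check bits and do all of this transparently.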
When it comes to vendors, the best depends on who & what the server is for. Don't care if there's downtime? Buy budget memory from a warehouse vendor; maybe it'll keep kickin', maybe it'll burn out. Otherwise, invest in memory with a good warranty; the "good" stuff will still be inexpensive as long as it's not flashy.
If you're on a budget, many retailers will also sell refurbished/rebranded memory for older systems or servers in "bulk" (16GB kits or more). The price point for such a kit that will fit Butter is good, but our steepest cost yet: just under $200 and we have 40GB of DDR3 memory to feed our two Intel Xeons. The brands are mixed, but all the sticks check out and play nice.
For Deckard, we're limited to what our motherboard can do. Two sticks of 8GB DDR3 at whatever speed we can get will be fine. The main thing is that we've hit our 16GB cap, leaving room for a little VM hosting.
Primary storage for a server should be SSD based these days; fast read and write times are expected of any system intended to serve the masses, so an SSD will be optimal. We can also take advantage of mechanical HDDs for storage of data that doesn't need to be quite as zippy. That said, a lot of storage software will improve the experience by using a portion of your SSD for automatic caching, so in a way we get the best of both worlds.
Our budget will mostly be sunk investing in disks, nice ones. I have a personal affection for Western Digital (their warranty has saved my bacon more than a few times), and they also happen to make a somewhat inexpensive class of disks that will be well suited to our needs. Enter the WD "Red," or as I've heard them called, the "red pill." These drives are designed for NAS use, meaning they can handle vibration, heat and heavy use. They also tend to be large capacity by default for a tempting price-point.
Vendors for storage don't vary as much these days; frankly, most HDDs come out of the same few factories. That said, go with the ones you know have good warranties: Western Digital, for example, is one I recommend for their warranty support alone. Drives die - unless you wanna throw money away, anticipate this "feature" and pick a company who'll bail you out when your array is about to crumble.
If you are on a budget, there's another alternative. "White label" disks are available from places like Amazon at very reasonable prices. These drives are built by major manufacturers, but stripped of their branding and made generic for use in pre-built consumer PCs. 3TB, 7200RPM for $40; hard to argue with.
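Before filling the cart, it's worth running the quick math on what four such disks actually yield under different redundancy layouts (a rough sketch; filesystem and metadata overhead will shave a little off in practice):

```python
# Rough usable-capacity math for a 4-disk array of 3 TB drives.
# Ballpark figures; real overhead varies by implementation.

disks = 4
size_tb = 3

striped = disks * size_tb                # no redundancy: all the space, no safety net
mirrored = disks * size_tb / 2           # two-way mirror: half the raw capacity
single_parity = (disks - 1) * size_tb    # parity: lose one disk's worth of space

print(f"striped:  {striped} TB")         # 12 TB
print(f"mirrored: {mirrored} TB")        # 6.0 TB
print(f"parity:   {single_parity} TB")   # 9 TB
```

At $40 a disk, that's 9 TB of fault-tolerant parity storage for $160 of drives, which is the kind of ratio that makes the "storage-first" budget work.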
Chassis, PSU & Other Diddies
Your chassis should fit the size and intent of your project. For a server, we want it to conform either to the environment (a rack) or to the hardware within. For a true server, we're likely to find everything already in an appropriate chassis, so look for the style that suits your project best: a two-post, rack-mount chassis is not a bad way to go for smaller, lighter systems. For heftier hosts, a "tower" or "pedestal" design with enough room for proper cooling is preferred.
A reliable vendor for projects like this would be StarTech. Make sure candidate chassis have enough ports and room for all the bits you wish to add. Deckard's chassis will be a 2U front-facing design with handy grips for easy maintenance (though I don't intend on opening this sucker much). The important thing for this project is to get a case with room for a lot of hard drives; many 2U cases offer a decent amount of drive space up front, making them a good option for lots of storage in a compact format.
If your server chassis already has a PSU, great! You saved yourself a headache. Else, if you're selecting your own PSU, triple check the form factor and see if you can find high-res pictures of the specific model if it's an odd size or shape - sometimes things just don't line up with the smaller cases. When it comes to pre-built servers, PSUs are often proprietary and purpose-built; be sure they're available at a price you can afford if you plan on investing in such a system.
Cheaper PSUs these days are a lot more reliable than they were 10 years ago, but efficiency will still cost you. Even so, I suggest going with the most efficient - look for 80 Plus. Wattage demand will already be established on a pre-built system. StarTech can bail you out here once again.
Deckard's 2U chassis allows for a full, desktop-sized PSU, which is great because these are the most common and often inexpensive. 300W is more than enough to spin a bunch of disks and run a little motherboard.
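To sanity-check that 300W claim, tally some rough per-component figures (these draw numbers are my own ballpark estimates, not measurements from Deckard's actual hardware):

```python
# Back-of-the-envelope power budget for Deckard.
# All wattages below are assumed ballpark figures, not measurements.

draw_watts = {
    "board + Celeron (peak)": 30,
    "2x DDR3 DIMMs": 6,
    "4x 3.5in HDD (spin-up)": 4 * 25,  # spin-up is the worst case for HDDs
    "SSD": 5,
    "fans + RAID card": 15,
}

total = sum(draw_watts.values())
print(f"estimated peak draw: {total} W")  # ~156 W, comfortably under 300 W
```

Even with every disk spinning up at once, we're sitting around half the PSU's rating, which also keeps the unit in its more efficient operating range.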
Installation & Configuration
Deckard will not be a complicated build - the typical 2U chassis has easy access from the top and everything just tucks in real nice. The only part that can get a little hairy is wiring up all the disks - four HDDs and one SSD makes for a lot of cables.
Hardware wise, if you're here you must already know how to slap this stuff all together. Else, YouTube is a great source to see these things done in action! Here's an example of something ancient.
Deckard's build is pretty straightforward - open up the 2U and drop the stuff in. No complex connectors save for the front panel I/O leads, which can get tricky when things are cramped.
Managing disk installation is a matter of stacking the drive cages and re-installing them into the chassis. There's always the debate of "do I wire these up before I put 'em back in?" Honestly it depends on the design: it may sound like a good idea, but it can later be a big pain in the ass when you're trying to pin down those four little screws and the power cables keep getting in the way.
The only add-in card going in is the SATA RAID controller, which occupies the PCI slot. PCI is limited in bandwidth, so we can't expect stellar performance from our beefed-up NAS, but this can easily be remedied by a new board in the next round of releases. For now, PCI is enough, and 4 extra SATA ports are required if we want to rock all these disks. The SSD can be attached directly to the motherboard's SATA ports.
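Just how limiting is plain PCI? Classic 32-bit/33MHz PCI tops out around 133MB/s shared across the whole bus, so some quick back-of-the-envelope math (the per-disk throughput figure is an assumed ballpark) shows where the bottleneck lands:

```python
# Why the PCI slot caps Deckard's array throughput: classic 32-bit,
# 33 MHz PCI carries at most ~133 MB/s, shared by everything on the bus.

pci_mb_s = 32 / 8 * 33.33   # bus width in bytes * clock in MHz ~= 133 MB/s
hdd_mb_s = 150              # assumed sequential rate for one modern 7200 RPM disk
disks = 4

array_potential = disks * hdd_mb_s
print(f"PCI ceiling:    {pci_mb_s:.0f} MB/s")
print(f"disks could do: {array_potential} MB/s")
# The four disks together could push ~4-5x what the bus can carry,
# but 133 MB/s still beats USB2's ~35 MB/s real-world rate comfortably.
```

In other words: the disks will loaf under parallel load, but for a capacity-first box served over gigabit Ethernet (~125MB/s), the PCI ceiling is barely felt.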
Controllers, Firmware & BIOS (Oh My)
Most built-in CPU combo boards will have a basic desktop grade BIOS or UEFI experience. You'll get to tweak some minor things but don't expect ultimate control or anything like out-of-band management.
Deckard's BIOS will be visited for power and boot options, not much more. The default settings are largely automatic and what we want anyhow.
The RAID controller will be used mostly for pass-through, so we don't really need to touch its config - by default it will pass the disks on to the OS. This is a handy setup if you intend on using software-defined storage, which frankly is a lot easier and safer than hardware RAID.
Boot It Up!
As for installing the OS (as mentioned, the chances of finding an intact server OS on a thrifted system are nil), this one's easy, 'cause everyone's already done it! Linux is simple to install, and with Ubuntu as an example, is well covered here.
Your options for configuring your server are endless and best guided by your needs. Choose the services, roles & features that are going to best help your organization and plan how you wish to deliver these: virtualization, for example, is an awesome way to get the most from your hardware without turning it into a jumbled mess. If this sounds good to you, consider configuring a hypervisor for your server! Microsoft's Hyper-V Server, Citrix's XenServer or the XCP-ng alternative are all great hypervisor choices.
A common Linux use-case is the "Live CD." The name is a misnomer these days - a Live CD distribution is not limited to compact disc, and the term "CD" really just refers to the "image" format used to burn the compiled data onto various media like USB drives, DVDs or... well, CDs.
The "CD" in the name also tends to mean the distribution is light weight and able to fit entirely on a single 700MB compact disc. This is how I am able to boot to Linux from a USB drive; if you're in a store that will grant you access to test the machines before you buy, a handy USB drive containing a Linux Live CD is the only tool you need.
As a side note, Microsoft also has its own variant of the Live CD known as the "Preboot Environment," or "PE" for short. This is basically a Windows Command Prompt that runs off a RAMDisk, generated when you boot from the PE media - it's used by Microsoft itself as part of the Windows installation framework. With the right software, you can create your own Windows PE and run some software right from that Command Prompt.
So, Deckard is kinda maxed out. What we're looking at is a capable NAS, with similar performance and options: two DIMM slots maxed at the motherboard's 16GB cap, one PCI slot occupied to supply extra SATA ports. PCI is limited in bandwidth - disk speeds on the big array are better than USB2, but it doesn't get much better than that. And that's (mostly) OK: Deckard doesn't need to be a performance hog - his job is capacity, with room to host simple VMs for things like email (for a few users), chat, a file server, light web hosting, light game hosting and whatever other light-to-moderate tasks you can throw at 16GB.
What's probable is a motherboard upgrade when the next line of built-in CPU boards comes around with a design attractive enough to warrant the effort. Note that Deckard's motherboard isn't a current release - it was just handy, as I bought a pair when the gettin' was good and I rarely throw something that useful away.