Jonathan Brown has started a series describing the different stages of setting up his FreeNAS server:
“From a pure archival and backup perspective, the Sun in my digital solar system will be a mass storage device—in this case, a Network Attached Storage (NAS) device. There are plenty of these all-in-one appliances out there. Iomega, D-Link, NetGear, HP, Drobo and LG all make some flavor of NAS. I’m sure any NAS from any of those brands will work for most people, but I really want to understand how my storage solution works so it can evolve and, in the event of an error, I can recover it.
After researching a bit and talking to my system admins at work, I decided that I want a hybrid system that has drive redundancy, a removable component and off-site replication.
For disk redundancy, I’m going with a RAID5 configuration—a set of striped disks with parity data spread across all disks—which means the NAS can take a single hard drive failure and continue to operate without any data loss. I considered RAID6 (which allows for two drive failures), but that’s really overkill and, combined with the remainder of my backup strategy, unnecessary.
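FreeNAS’s software RAID will handle the parity math for me, but the idea behind RAID5’s single-failure tolerance is easy to sketch: the parity block is the XOR of the data blocks, so any one missing block can be rebuilt by XOR-ing the parity with the survivors. A toy Python illustration (my made-up byte strings, not real disk I/O):

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three "drives" worth of data plus one parity block
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing drive 1 and rebuilding it from the rest:
# A ^ C ^ (A ^ B ^ C) == B
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Real RAID5 also rotates which disk holds the parity for each stripe, so no single drive becomes a write bottleneck—but the recovery math is exactly this XOR trick.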
Why do you want disk redundancy? Simply put, hard drives suck. They are so error prone that each drive has a system (called S.M.A.R.T.) that monitors how many errors it makes in the hope of predicting a disk failure. Google did a study on drive failure rates. You can see the chart to the right, which indicates percent failure by age (years in service). 8.6% of drives with three years of service fail! Ouch. So given the high rate of hard disk failures, that was an absolute requirement for my solution.
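That 8.6% figure looks small per drive, but it compounds across an array. A back-of-the-envelope calculation (assuming a four-drive array, my choice for illustration, and independent failures—optimistic, since drives from the same batch often fail together):

```python
# Rough odds that at least one drive in a small array fails,
# using the ~8.6% three-year failure rate from the Google study.
p_fail = 0.086            # per-drive failure rate at three years
n_drives = 4              # a typical small RAID5 array (my assumption)
p_any_failure = 1 - (1 - p_fail) ** n_drives
print(f"{p_any_failure:.1%}")  # about a 30% chance of at least one failure
```

With RAID5, one such failure is survivable; it’s a second failure during the rebuild that RAID6 is meant to guard against.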
The solution that will allow me to accomplish these goals? I’m going to build it. I will use the open-source software project called FreeNAS. It has quite a few lovely features (including software RAID, iSCSI and RSYNC) that will [hopefully] do everything I need. I did a little test using the downloadable VMWare image and a USB keychain drive. I was able to do exactly what I wanted, albeit in a much more scaled-down version. I will document as I go in case anyone wants to try to follow in my footsteps. In the next part, I will discuss the machine build and the parts I will be using.”
To be honest, I’ve not built (well, technically, assembled) a computer in over six years. I used to build all of my own computers when Windows was my primary operating system and I was developing a lot of Windows applications. Back then it made sense to build your own because it was more cost-effective. These days, unless you’re a gamer or have unique needs, it’s way cheaper (and less frustrating) to go to a big box store and buy off-the-shelf or order online.
The first step was to disassemble the donor machine, which wasn’t a problem at all. The breakdown was smooth with only one trip to HP’s support site to figure out how an interlocking part became not so locking.
Next, I installed the motherboard into the new case. It wasn’t that big of a deal; however, the HP motherboard was fastened to a metal mounting plate and I had to decide whether to keep the HP mount or ditch it. I wanted to scrap it, but realized the CPU heat-sink and fan were mounted to the metal plate, so that decision was made for me. This created a tighter fit than I expected, but in the end it turned out to be very solid.
My next task was to get some power in the case, so I proceeded to install the 460W power supply from the donor. And that’s when I encountered my first hiccup. I didn’t think to measure the power supply to make sure it would fit in the new case—a standard ATX case—and I assumed, stupidly, that the power supply from the HP was of an ATX form-factor. Needless to say, it was not. I don’t even know what you call the form-factor, but at 97 mm it’s taller than a standard ATX power supply (86 mm) and won’t fit the opening.
More deliveries to follow