I'll only be diving into the differences between my system and my parents' system here; otherwise, the machine is basically the same. I learned so much building the first one that the second will be much better, both design-wise and software-wise.
The first, and perhaps biggest, difference between the two computers is that mine will have 28TB of raw storage. In RAID6 that works out to about 20TB of very fault-tolerant storage. My goal was to balance storage size, fault tolerance, and speed (in that order), and I think RAID6 will do just that. I have this spread across seven 4TB drives.
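If you're wondering where the 20TB figure comes from, here's the quick back-of-the-envelope math (nominal drive sizes; real usable space comes out a bit lower after filesystem overhead):

```python
# Rough RAID6 capacity math; drive sizes are nominal, so actual usable space
# will be a bit lower once filesystem overhead and TB-vs-TiB accounting kick in.
drives = 7
drive_tb = 4

raw_tb = drives * drive_tb            # 7 * 4 = 28 TB raw
usable_tb = (drives - 2) * drive_tb   # RAID6 spends two drives' worth on parity -> 20 TB

print(f"raw: {raw_tb} TB, usable: {usable_tb} TB, survives any two drive failures")
```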
That's another big difference in the specs: I'll be trying to use a hardware RAID controller. Theoretically it supports RAID6, but I'm not 100% sure because I've seen some conflicting reports. We'll find out for sure once we plug it in. I'm also on the fence about letting the card handle the RAID itself or setting it up as JBOD and doing software RAID like before, so I can rebuild the array if the card ever goes bad. The more I think about it, the more I lean toward software RAID. The card wasn't a huge waste of money, however: it has two internal Mini-SAS cables that let me connect all of my hard drives without ever touching a SATA port on my motherboard.
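That portability is the main draw of software RAID for me. Here's a minimal sketch of the idea, assuming mdadm (the device name /dev/sdb is just a placeholder): the array metadata lives on the member disks themselves, so any Linux machine can reassemble the array without the original controller.

```python
# Sketch of why mdadm software RAID is portable; /dev/sdb is a placeholder.
import subprocess

# Show the RAID superblock that mdadm writes onto each member disk.
subprocess.run(["sudo", "mdadm", "--examine", "/dev/sdb"], check=True)

# Scan all disks and reassemble any arrays found -- this works the same on a
# replacement machine, which is the whole appeal over a hardware controller.
subprocess.run(["sudo", "mdadm", "--assemble", "--scan"], check=True)
```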
The motherboard and the processor are exactly the same. I'm putting more RAM in mine so I can (potentially) run virtual machines on it. I actually bought RAM for this one instead of using what I had lying around, so it should end up properly dual channel as well.
As for the boot volume, I'm planning on trying one of those small drives again, though I may spring for an SSD. Either way, I'm not wasting a 4TB drive on the boot volume, that's for sure. I think having a real RAID controller this time will cut down on the bugs significantly. I put in a 500GB 2.5" drive for now.
I installed the same version of Ubuntu, and this time it worked flawlessly the first time. I wasted no time setting up the RAID6 array in software because, as it turns out, the controller did not support RAID6. Bummer, but I'm not too disappointed.
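For reference, creating the array in software looks roughly like this (a sketch assuming mdadm, with placeholder device names /dev/sdb through /dev/sdh and a placeholder mount point, not necessarily the exact ones I used):

```python
# A minimal sketch of building the RAID6 array with mdadm; device names and
# the mount point are placeholders.
import subprocess

members = [f"/dev/sd{letter}" for letter in "bcdefgh"]  # the seven 4TB drives

# Create a RAID6 array across all seven members.
subprocess.run(
    ["sudo", "mdadm", "--create", "/dev/md0",
     "--level=6", f"--raid-devices={len(members)}", *members],
    check=True,
)

# Format it and mount it.
subprocess.run(["sudo", "mkfs.ext4", "/dev/md0"], check=True)
subprocess.run(["sudo", "mount", "/dev/md0", "/mnt/storage"], check=True)
```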
All of the disks were spinning, the RAID6 array was formatted and ready to go, and the power supply's fan wasn't even spinning (which means that even with eight hard drives we're not drawing much). I'll have to stress test it. But, for now, I think the first big hurdle to overcome is the wire management, because even with the modular power supply it's pretty bad.
However, after some jostling around, I got a much better arrangement.
Better is a relative term.
The way I'm doing the power circuit is different this time as well. I designed a specialized circuit so I didn't have to waste a microcontroller on it. You should read about it here.
Everything other than that is essentially the same hardware-wise. The next step was installing software. This machine was going to be handling quite a bit of work and storage, so I planned out exactly what I wanted it to do (a rough install sketch follows the list):
- Full LAMP stack on standard ports
- Kodi for HTPC operation
- Plex for other media operation
- Webmin for managing the server
- VirtualBox for running virtual machines
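Most of this boils down to package installs. Here's a rough sketch, assuming Ubuntu's standard repositories; Plex and Webmin actually come from their own repositories or .deb downloads, so they're not in the list below.

```python
# A rough sketch of the apt installs behind the list above; package names
# assume they're available in the enabled Ubuntu repositories.
import subprocess

apt_packages = [
    "apache2", "mysql-server", "php", "libapache2-mod-php",  # LAMP stack
    "kodi",                                                  # HTPC frontend
    "virtualbox",                                            # virtual machines
]

subprocess.run(["sudo", "apt-get", "update"], check=True)
subprocess.run(["sudo", "apt-get", "install", "-y", *apt_packages], check=True)
# Plex (plexmediaserver) and Webmin are installed separately from vendor packages.
```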
You may be wondering why I'm running VirtualBox instead of QEMU or something KVM-related. The answer is simple: I had issues installing almost any remote management software for KVM, and VirtualBox is closer to what I want. I'll use Docker for container stuff if I want to virtualize Linux applications, and I'll probably only ever want to virtualize Windows machines, so VirtualBox should be good for that. QEMU is useful for other things that don't fit into either of those categories. Since I've already shown how to install everything and VirtualBox is literally just a Debian package, let's move on.
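Part of the appeal is that VirtualBox VMs can be driven entirely from the command line over SSH. As a quick illustration (the VM name "win10" is made up), starting and checking on a headless VM looks like this:

```python
# Start an existing VirtualBox VM headlessly and list what's running;
# "win10" is a hypothetical VM name used only for illustration.
import subprocess

subprocess.run(["VBoxManage", "startvm", "win10", "--type", "headless"], check=True)
subprocess.run(["VBoxManage", "list", "runningvms"], check=True)
```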
I decided to do something I should have done with my parents' build: benchmark the RAID array. I used gnome-disks and its built-in benchmarker. The settings and results are listed here:
These are 5400RPM hard drives, so they cap out at around 100MB/s each. But now I'm getting read speeds of around five to six times that. Write speeds are expected to be slower because the array also has to write the parity blocks that provide the fault tolerance. That's a loss I'm willing to take. The read speed is pretty good, though (but I could be wrong because I don't know any better). An interesting thing to note is the linear decrease in speed over time: it dropped from about 650MB/s to about 550MB/s over the course of 100 50MB samples. I'm not sure why that is. Maybe there's thermal throttling in some component along the data path, or maybe it's just the benchmark working its way from the faster outer tracks of the platters toward the slower inner ones. Who knows; maybe I should add a fan.
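As a rough sanity check on those numbers (using the ~100MB/s per-drive figure above, which is an estimate, not a measurement), sequential RAID6 reads stripe across the data chunks on every drive, so a naive ceiling lands somewhere between five and seven times a single drive:

```python
# Back-of-the-envelope check on the read benchmark; the per-drive figure is the
# rough 100MB/s number quoted above, not a measured value.
drives = 7
single_drive_mb_s = 100

low_estimate = (drives - 2) * single_drive_mb_s   # ~500 MB/s if parity chunks cost you
high_estimate = drives * single_drive_mb_s        # ~700 MB/s best case

print(f"expected sequential read: ~{low_estimate}-{high_estimate} MB/s")
print("measured: ~650 MB/s falling to ~550 MB/s over the run")
```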
But, other than that, that's really all there is to this build. Both of these builds were really fun to do because it was always my dream to put computers into these beautiful cases (all three of them) and, well, now I have. It's just really exciting to breathe new life into these instead of recycling them. I remember growing up with these sitting on the shelf in the living room. Now, I'll have two of them sitting on my shelf and my parents have one of their own. It's nice when you can reuse instead of recycle.