Why build and not buy?
I estimate that building your own, based on a recycled PC enclosure, will save you $400-$500, and you get to choose the disks that are used. That choice is not trivial.
In choosing your solution, be it this one or a ready-made one, the choice of the host connection is critical. The options I found were Fibre Channel, eSATA, USB (3, 2 and 1), and Ethernet (10, 100 and 1000 megabit/sec). The Ethernet options are clearly meant for Network Attached Storage (NAS) solutions.
I needed (well ok "wanted")
I was running out of high-performance storage. NAS solutions were not going to be fast enough; they also incur a large processing overhead, since the whole IP stack must be traversed for every transfer. I had no SCSI controller and did not want to invest in that direction. The same was true for Fibre Channel, although the performance would have been great.
My motherboard thankfully provided an eSATA port with port-multiplier support via a Silicon Image SiI3132 chip. Had the motherboard lacked this capability, I could have bought an eSATA add-on card for the host PC.
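On Linux you can check for the controller before ordering anything. A minimal sketch, assuming the SiI3132's usual PCI vendor:device ID of 1095:3132 (verify against `lspci -nn` on your own machine):

```shell
# Look for the Silicon Image SiI3132 on the PCI bus.
# 1095:3132 is the usual vendor:device ID for this chip (an assumption --
# confirm with `lspci -nn` on your own hardware).
if command -v lspci >/dev/null 2>&1 && [ -n "$(lspci -d 1095:3132 2>/dev/null)" ]; then
    status="found"
else
    status="not found"
fi
echo "SiI3132 controller: $status"
```

If the chip is absent, the same check with a different vendor:device ID will tell you what eSATA hardware you do have.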
In bullet form, I wanted:
- eSata host connection
- 10TB or better of storage space
- OS independent
The OS Independence was critical to me as I run a Linux system and I did not want to be stuck with a solution that was either Windows or Mac centric.
Getting the pieces together
The following is a shopping list of sorts
- 1 PC enclosure
- with at least five disk bays
- with ATX power supply in good working condition
- Controller assembly http://www.addonics.com/products/ad5hpmreu.php. Get the SCSI mount option.
- Power control circuit
- 1 x multipurpose PCB board
- 1 x 74LS74 “D” type flip-flop
- 1 x 300 ohm resistor
- 1 x 10uf 35v capacitor
- 1 x 150 ohm resistor
- 1 x standard LED
- 2 feet of 6 strand copper telephone wire
- 1 x 2 pin header. (for the momentary on switch lead from the front panel )
- 1 x 3 pin header (for the power LED lead from the front panel)
- Status display board
- 1 x multipurpose PCB board
- 6 x red LED
- 6 x green LED
- 2 feet 24 conductor ribbon cable
- 2 x female 24 conductor ribbon cable connector
- 2 x enclosure fans (not CPU fans; those are speed-controlled by a PWM signal on a fourth wire)
- 3-5 disks (see below)
Choosing the disks
Since I wanted a lot of space but at the same time I did not want to put myself on the bleeding edge of disk technology I chose to use 4TB disks. This decision was the easy one. The harder decision is: which ones?
On the internet, many sites offer statistics on the performance and durability of hard drives. From this reading, Western Digital seemed to have an edge over Seagate.
I also wanted disks that could handle “always on” applications.
I chose the 4TB Western Digital RED that promised cooler running, less vibration and all around optimization for this type of application.
Only time will tell if this was a good choice. It is not the cheapest drive, but still not in the PRO category like the Western Digital RE series, which is at least 33% more costly.
Step by Step Summary
- Clean and empty the enclosure. Do not damage the front panel connection wires and connectors as some of these will be used.
- Build the power control circuit on a small PCB board. I used one that I could mount to the back of the enclosure removable covers. The circuit is:
- Build the status LED supporting board. Again I chose one that could share the back of the removable cover with the Power Control board. (1 male 24 pin header and 12 LEDs. The ground is common and the LEDs can be so wired.)
- Mount the ad5hpmreu and connect the power cable (read the f...ing instructions)
- Connect the Power control circuit to the male motherboard connector of the power supply using the 2 feet of telephone wire. (I used small nails onto which I soldered the wires and insulated them with shrink tubing.) The required connections are identified in the circuit schematic above.
- Connect the LED lead and the momentary on power switch lead of the front panel to the Power control circuit. Usually these leads are identified. (see schematic)
- Assemble the 2 feet ribbon cable taking care of placing “pin one” correctly at both ends. One end is connected to the header on the ad5hpmreu and the other on the Status Display board completed above. The ad5hpmreu's documentation gives the pin-out of its header.
- Place the disks in the enclosure, connect them to the controller and power on.
- The power supply will not power on without a sufficient load. (One disk seems to be enough)
- The power control board draws its power from the standby power of the power supply. (see schematic)
Build log
- 2015-08-18 - Tower ready with ventilation
- 2015-08-22 - Controller ordered (AD5HPMREU)
- 2015-08-23 - Power control done. Using tower on/off switch.
- 2015-08-24 - Power control mounted in tower (photo). The tower Power LED and the tower's ON/OFF switch are functional.
- 2015-08-25 - Designed the Status display LEDs (12) board and mount location. Can't go any further until the controller arrives and I can assess what voltages are used and what the LED driver spec is. (3v? 5v? common ground? requires a current limiting resistor?)
- 2015-08-25 - The hard part ... waiting!
- 2015-08-26 - Received, installed, it works ... more testing
- 2015-08-27 - Had to order the disks. http://CanadaComputers.com/ was out of stock. Waiting again!
- 2015-08-28 - Change of plan, I am installing a 3-disk RAID5 (12TB)
- 2015-08-28 - Added power-on LED (Power Supply switch is on / power supply is ready to be switched on by power controller)
- 2015-08-30 - 3*4TB disks installed, configured as RAID5. Effective capacity (after parity is removed) 7.3 TB
- 2015-08-30 - END of phase1. Goal reached!!!
- 2015-09-01 - END of phase2: relocated 2*2TB disks to form a second RAID device (RAID1). This part is not priced, as I already had those drives and they were not part of the plan. It is mentioned here to show that it can be done easily. (They are probably worth $100CDN each.) (16TB total)
COST Bottom line: real-life cost, Canadian funds, taxes included
- Controller assembly
- -- $99US assembly
- -- $32US shipping
- -- Total: $131US or $177CDN
- -- $31 import taxes
- -- Total RAID controller cost: $208.00CDN
- Tower $0
- Misc Electronics (Status LED board) $8.37
- Power Control assembly $0
- Disk 3 * 4TB Western Digital RED (WD40EFRX) $724.28CDN
- Disks 2 * 2TB: no cost to me, I had them in house. They are worth $100CDN each.
Total so far: $940.65CDN (everything included) for the solution with the first 3 disks. If you like, add $200 for the last two disks.
Performance and capacity
- Capacity: 16TB raw; 10TB usable when configured as two RAID devices (RAID5 and RAID1)
- Single device read: 75MB/s
- Stripe*2 read: 145MB/s
- RAID5, single stream, 8GB file (3 * WD 4TB RED): write 105MB/s, read 93MB/s
- RAID1, single stream, 8GB file (2 * Seagate 2TB): write 108MB/s, read 115MB/s
- Cache read (eSATA link speed, single device): 1.3GB/s
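The usable-capacity figures above follow from simple arithmetic: a marketing "4 TB" drive holds 4 * 10^12 bytes, RAID5 keeps (n - 1) of n disks for data, and the operating system reports sizes in TiB (2^40 bytes), which is why 3 * 4TB in RAID5 shows up as roughly 7.3. A quick sketch of the calculation:

```shell
# RAID5 usable space for n equal disks: (n - 1) * disk size.
# A "4 TB" drive is 4 * 10^12 bytes; the OS reports TiB (2^40 bytes).
disks=3
bytes_per_disk=$((4 * 1000 * 1000 * 1000 * 1000))
usable=$(( (disks - 1) * bytes_per_disk ))
awk -v b="$usable" 'BEGIN { printf "RAID5 usable: %.1f TiB\n", b / (2^40) }'
# prints "RAID5 usable: 7.3 TiB"
```

The same arithmetic gives the 10TB usable total: 8TB from the RAID5 set plus 2TB from the 2*2TB mirror.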
RAID 5 test:
# dd oflag=nocache if=/dev/zero of=1g.bin bs=1G count=1
1073741824 bytes (1.1 GB) copied, 10.6258 s, 101 MB/s

# dd iflag=nocache if=1g.bin of=/dev/null bs=1G
1073741824 bytes (1.1 GB) copied, 7.93205 s, 135 MB/s

# dd oflag=nocache if=/dev/zero of=2g.bin bs=1G count=1 & dd iflag=nocache if=1g.bin of=/dev/null bs=1G count=1
1073741824 bytes (1.1 GB) copied, 14.1407 s, 75.9 MB/s
1073741824 bytes (1.1 GB) copied, 19.4375 s, 55.2 MB/s
Overall throughput does not seem to be affected by simultaneous reads and writes.
I connected the RAID devices to a USB 3.2 port on a very fast system, so the I/O test was not prejudiced by any lack of computing power. The results are interesting! The figures are in megabytes per second, with buffer lengths of 4096 and 8192 bytes; the first result is the write speed, the second the read speed.
$ fseval -i 0 -m 2 -r 4096 -M 8192 -s 4GB -t r -n toto
Writing and reading will be prepended by 16248262656 to void all caches
Testing file size 4294967296
4096 169.187 219.466
8192 190.931 195.718
2020-09-09 - A failure!
It was bound to happen: one of the disks failed (hard). It was one of the older pair added at the end of the project (2TB Western Digital Black WD2002FAEX), configured as a mirrored pair (RAID 1). The remaining disk of the pair continued to work. Rather than replace the failed disk, I chose to upgrade my configuration, since I had purchased a second controller some time ago. It took close to 20 hours to offload the data. I bought 3 disks of the same type as before (4TB Western Digital RED), configured 2 as a new mirrored pair and left the 3rd as a singleton.
I had no data loss or significant downtime.
Note that this approach did not involve the controller's recovery procedure. That was an unknown I preferred to avoid.
This was a fun, easy, productive and not too costly project that probably saved me $400-$500. For $1140 (CDN) I get 10TB of usable, redundant storage space (16TB raw) on an eSATA connection. I am happy.
Disclaimer: Use this document at your own risk. I decline any type of responsibility.