As a result of a complete NAS breakdown, one of my customers decided to get a new server with a bit more power than the old one.
I saw this as quite an interesting challenge and got started.
Since the rack cabinet that had been put up was only ~68 cm deep, I had to find a rack chassis that would fit these constraints.
It turns out that Travla makes some very nice chassis with 8 front-access hot-swap drive bays for the RAID.
- Chassis: Travla T2280
- Mainboard: Zotac G43-ITX
- Processor: Intel E6500
- SATA controller: RocketRAID 2300
- Storage: 4x Western Digital Caviar Green (WD20EARS)
- System “disk”: Delock USB 2.0 Nano Memory stick 2GB
At first, I tried the Jetway NC9C-550-LF mainboard with its 4x SATA daughterboard. Unfortunately, the daughterboard was unsupported, which defeated the whole point of using this board (8x SATA in all). The LAN interface was not supported out of the box either.
The installation went smoothly, and a software RAID5 was created across the disks. The array creation was a real pain and took forever.
Initial benchmarks looked fine, but after deployment a significant slowdown appeared: ~250 Mbit/s of LAN usage when transferring large files, and as low as 50 Mbit/s with small files. That is unacceptable on a Gigabit LAN.
After swapping the switch and then the NIC, I turned, as a last resort, to what could not possibly be the bottleneck – the server itself!
```
nas:~# dd if=/dev/zero of=/mnt/storage/zerofile.000 bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 271.362496 secs (38641154 bytes/sec)
nas:~# dd of=/dev/zero if=/mnt/storage/zerofile.000 bs=1m
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 96.963503 secs (108141308 bytes/sec)
```
~40/100 MB/s is not very impressive for sequential write/read – especially not on a RAID5!
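The dd numbers also line up with the LAN symptoms: converting bytes/sec to Mbit/s shows that the array's write speed alone caps a transfer near 300 Mbit/s, right where the ~250 Mbit figure sits once protocol overhead is accounted for. A quick shell conversion (using decimal megabits):

```shell
# Convert dd's bytes/sec figure to Mbit/s (1 Mbit = 10^6 bits).
to_mbit() {
  echo $(( $1 * 8 / 1000000 ))
}

to_mbit 38641154    # write throughput -> 309 Mbit/s
to_mbit 108141308   # read throughput  -> 865 Mbit/s
```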
Guess the bottleneck was the server itself.
After a bit of reading and research, I came across a story quite similar to mine – the exact same disks in a software RAID5. The problem was misalignment of partitions due to the change in standard disk sector size – from 512 bytes to 4 KiB ("Advanced Format") – which happened at some point I missed; I usually don’t follow hardware evolution that closely.
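To illustrate the alignment issue: with 512-byte logical sectors, a partition is 4 KiB-aligned when its starting LBA is divisible by 8. The start sectors can be read from tools like `fdisk -l` or `gpart show`; a minimal check (the sector values below are examples, not taken from this server):

```shell
# Check whether a partition start sector (in 512-byte units) lands on a
# 4 KiB physical sector boundary, i.e. is divisible by 8.
aligned() {
  start=$1
  if [ $(( start % 8 )) -eq 0 ]; then
    echo "sector $start: aligned"
  else
    echo "sector $start: MISALIGNED"
  fi
}

aligned 63    # the classic DOS-era default start sector
aligned 2048  # a modern, 4 KiB-aligned default
```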
Next, I persuaded the customer to back up the data so that I could re-create the RAID – only this time as a RAID-Z.
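For reference, a RAID-Z pool over four disks is a single `zpool` command. This is a sketch, not the exact command used here – the device names are assumed, and on 4 KiB-sector disks like the WD20EARS the pool should be created with 4 KiB alignment (e.g. via the gnop(8) trick on the FreeBSD of that era, or `ashift=12` where supported):

```shell
# Create a RAID-Z pool named "storage" from four whole disks
# (hypothetical device names; adjust to your system).
zpool create storage raidz ada1 ada2 ada3 ada4

# ZFS mounts the pool at /storage by default; verify health and layout:
zpool status storage
```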
```
dd if=/dev/zero of=/storage/zerofile.000 bs=1m count=10000 && dd of=/dev/null if=/storage/zerofile.000 bs=1m && rm /storage/zerofile.000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 98.727775 secs (106208815 bytes/sec)
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 46.398998 secs (225991087 bytes/sec)
```
This is a nice improvement! The customer is also satisfied with the speed increase, but then again – who wouldn’t be?
Finally, a photo of the setup.
This is a sight I just had to document: a collection of external disks, with the index document on top. The index was created by mounting each disk and taking a screenshot of the Finder window. A very nice ad-hoc solution if you ask me.