FreeNAS

Configuration, benchmarks, hardware compatibility and much more. FreeNAS is an extremely feature-rich and robust piece of NAS software.

Budget-friendly FreeNAS raid-z

When I wrote my previous post, I did not want to go into too much detail about my NAS setup. But I still had the urge to describe this splendid configuration.

My motivation for setting up my own FreeNAS server was my very positive previous experience with the software. And, by running my own configuration, I would be better able to provide both usability and technical troubleshooting.

These sorts of posts are usually only of interest to potential buyers googling a specific product – or a specific piece of software.

But, without further ado, here is:

Yet another hardware configuration blog post

FreeNAS logo

The NAS consists of the following components:

  • Jetway NC9C-550-LF motherboard
  • 2GB DDR3 1333 SODIMM, bought along with the motherboard
  • Jetway 4x SATA daughterboard
  • 4x WD20EARS harddisks
  • Lian Li 6070 Scandinavian edition chassis
  • An old USB key (2 GB, I think)
  • An old PCI Ethernet adapter

Lian Li has apparently taken the chassis off their site, but AnandTech still has a nice review of the case.

The main reason I used this chassis is that I had it lying around, so to speak. The same goes for the motherboard, which was a surplus from my previous NAS build.

The motivation for building the NAS was the horrible near-data-loss experience I recently had. So, I thought the time was ripe for a fault-tolerant storage medium.

As I had previously had positive experiences with both FreeNAS and ZFS, the choice naturally fell on these. The installation is very easy: download the embedded gzipped image, plug in an empty USB key (or one with non-precious content), and run the following (on a Linux box):

gunzip -c <path>/FreeNAS-amd64-embedded-xxx.img | dd of=/dev/sdx

Replacing the x’s and <path> with the relevant parameters.
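If you want to be sure the write actually succeeded before booting from the key, a quick checksum comparison works. This is only a sketch – the device name and image path are placeholders, just as above:

```shell
# Verify that the data on the USB key matches the source image.
# /dev/sdx and the image name are placeholders -- substitute your own.
SIZE=$(gunzip -c FreeNAS-amd64-embedded-xxx.img | wc -c)
gunzip -c FreeNAS-amd64-embedded-xxx.img | sha256sum
sudo head -c "$SIZE" /dev/sdx | sha256sum   # the two digests should match
```

Reading back exactly as many bytes as the image holds avoids comparing against the unused tail of the key.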

Getting the RTL8111E to work

Note: This only applies to FreeNAS 7. The interface is supported in the FreeBSD 8.0 branch, and thereby in FreeNAS 8.

The two onboard Realtek interfaces are not supported by the FreeBSD 7.3-RELEASE kernel. This is where the old PCI network adapter comes into play: I used an older 3Com 10/100 card, as these are well supported.

However, you can get the onboard NICs up and running by downloading and installing the appropriate driver.

You can download the driver here: Realtek RTL8111E FreeBSD 7.3 64-bit driver

Remount /cf as read-write

# mount -o rw /cf

And place the driver in /cf/boot/kernel/

Lastly, you need to update /cf/boot/loader.conf to include the line if_rl_load="YES" – and you're done.

Now you can reboot, or remount /cf as read-only and use kldload to load the new driver.
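Put together, the no-reboot procedure looks roughly like this – a sketch based on the steps above; the exact mount flags may differ slightly between FreeNAS versions:

```shell
mount -uw /cf                              # remount the config partition read-write
cp if_rl.ko /cf/boot/kernel/               # the driver file from the download above
echo 'if_rl_load="YES"' >> /cf/boot/loader.conf
mount -ur /cf                              # back to read-only
kldload if_rl                              # load the driver without rebooting
```

The loader.conf line makes the driver load automatically on future boots; kldload takes care of the current one.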

You can follow a related discussion about the driver here.

The filesystem on the disks is ZFS, and the raid-z pool is created manually as described in this previous post.
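For completeness, creating a raid-z pool over four disks is a one-liner. A sketch using the FreeBSD device names that appear later in this post:

```shell
zpool create pool1p0 raidz ad4 ad6 ad10 ad12   # four-disk raid-z pool
zpool status pool1p0                           # verify the layout
zfs list                                       # pool mounts at /pool1p0 by default
```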

To finish it off, here are some photos of the current setup:

The 120mm fan is almost as large as the mainboard
A mini-ITX board looks kind of lonely in an ATX case :-)

Some words about performance

The embedded Atom CPU is definitely not a speed demon. SSH transfer speeds peak at 5 MB/s but are usually around 3-4 MB/s.
Sequential FTP uploads run at about 25 MB/s, with CPU usage around 70% across all "cores".

As far as I can remember, the whole system uses about 50 W.

The price of the system is not really representative here, as I used spare parts – but similar components can be bought for about €450.

All in all a good, stable and robust system.

ZFS drive replacement

This is a post about the robustness of ZFS, and it can serve as a mini how-to for people who want to replace disks and do not have a hot spare in the system.

Background

Last Monday, our local area was hit by a tremendous rainfall which flooded our basement. You can see pictures of the flood here. Sorry about the quality – the primary objective was to salvage various floating hardware :-\

Wet hardware is also the reason for this post. Upon entering the basement I remembered my fileserver standing on the floor, and quickly (and heroically) dashed to its rescue.

Unfortunately, the server had already taken in quite a lot of water, and three of its four raid-z (RAID5-like) disks were already ankle-deep in water.

I did not manage to take any pictures at the time, but took some today in order to illustrate where the waterline was.


This is the inside of the case side panel. If you look carefully, you can see the traces left by the water.

My crude drawing skills were put to the test in order to create this.

An approximation of the water level

Needless to say, I was quite worried about the state of my data. I quickly pulled the power plug and rushed the computer off to dry land (the living room), where a brave team consisting of my girlfriend and son started drying the disk components after I had disassembled them – well, removed the circuit boards at least.

After each disk had dried, I carefully put it back together and tried to power it on – one by one.
Surprisingly, they all spun up, meaning that the motors were okay – yay!

The next step was to put them back into the fileserver and hope for the best.

And, to my relief, it booted! And the zpool came online! That was amazing! Apparently, nothing was lost. But just to be sure, I ran a scrub on the pool.
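For reference, starting and monitoring a scrub only takes two commands:

```shell
zpool scrub pool1p0       # kick off a full verification of all data in the pool
zpool status -v pool1p0   # shows scrub progress and any repaired bytes
```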

This is the result:

  pool: pool1p0
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
	attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
	using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 5h0m with 0 errors on Tue Aug  2 03:20:10 2011
config:

	NAME        STATE     READ WRITE CKSUM
	pool1p0     ONLINE       0     0     0
	  raidz1    ONLINE       0     0     0
	    ad4     ONLINE       0     0     0
	    ad6     ONLINE       0     0     0
	    ad10    ONLINE      51     0     0  1.50M repaired
	    ad12    ONLINE       0     0     0

errors: No known data errors

I consider myself a very lucky man. Only 1.5M of corruption, with 3 of 4 disks partially submerged in water? Wow!

Anyway, I rushed out to buy three new disks, and as soon as they arrived I started replacing the old ones, one by one.

Of course, I first did a full rsync of the data in the storage pool to another computer.

Replacing the disks

Upon replacing the first disk (I chose ad10, as this was the one marked as bad), I got this error:

nas1:~# zpool status
  pool: pool1p0
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver in progress for 6h22m, 86.62% done, 0h59m to go
config:

	NAME                       STATE     READ WRITE CKSUM
	pool1p0                    DEGRADED     0     0    10
	  raidz1                   DEGRADED     0     0    60
	    ad4                    ONLINE       0     0     0  194M resilvered
	    ad6                    ONLINE       0     0     0  194M resilvered
	    replacing              DEGRADED     0     0     0
	      6658299902220606505  REMOVED      0     0     0  was /dev/ad10/old
	      ad10                 ONLINE       0     0     0  353G resilvered
	    ad12                   ONLINE       0     0     0  161M resilvered

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x32>

The ZFS administrator's guide explains that the corruption is located in the meta-object set (MOS), but does not give any hint on how to remove or replace the set. Admittedly, I have not looked thoroughly into what the MOS actually is.

I put the original (faulted) ad10 disk back in, and the error went away (after a reboot).

Then I decided to try again, this time with ad4. Physically replacing the disk on the SATA channel revealed this:

nas1:~# zpool status
  pool: pool1p0
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid.  Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

	NAME                     STATE     READ WRITE CKSUM
	pool1p0                  DEGRADED     0     0     0
	  raidz1                 DEGRADED     0     0     0
	    2439714831674233987  UNAVAIL      0    32     0  was /dev/ad4
	    ad6                  ONLINE       0     0     0
	    ad10                 ONLINE       0     0     0
	    ad12                 ONLINE       0     0     0

errors: No known data errors

Okay, then the replacement:

nas1:~# zpool replace pool1p0 2439714831674233987 /dev/ad4

… And the resilvering started. The ETA eventually settled at ~5 hours, but the resilver took about 7.5 hours – probably because the relatively slow Atom processor was the bottleneck.

nas1:~# zpool status
  pool: pool1p0
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.00% done, 708h0m to go
config:

	NAME                       STATE     READ WRITE CKSUM
	pool1p0                    DEGRADED     0     0     0
	  raidz1                   DEGRADED     0     0     0
	    replacing              DEGRADED     0     0     0
	      2439714831674233987  REMOVED      0     0     0  was /dev/ad4/old
	      ad4                  ONLINE       0     0     0  2.30M resilvered
	    ad6                    ONLINE       0     0     0  1.53M resilvered
	    ad10                   ONLINE       0     0     0  1.52M resilvered
	    ad12                   ONLINE       0     0     0  1.38M resilvered

errors: No known data errors

The resilvering revealed a total of 4 corrupted files, which I could restore from backup.

However, this led me to the next challenge:

Clearing errors, and merging replacement disks

I could not get rid of the errors, which effectively left the zpool in a permanently degraded state. Every document I could dig up led me to the conclusion that I should remove the affected files – which I did – and then run zpool clear on the pool to clear the errors.

The solution was to reboot after I had removed the files, and let the pool resilver again. This worked, and led me to believe that I could simply have done a zpool clear and then a scrub to verify the consistency of the data.

After this, I could repeat the somewhat lengthy process for the next disk.
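In hindsight, the clear-and-verify sequence sketches out as follows – the file path is of course hypothetical:

```shell
rm "/mnt/pool1p0/path/to/damaged-file"   # hypothetical path: remove the corrupted files first
zpool clear pool1p0                      # reset the pool's error counters
zpool scrub pool1p0                      # re-verify that the data is consistent
```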

Summary

In total I had ~10 minutes of downtime, caused by replacing the disks –
plus, of course, a couple of hours of downtime while the server dried. This is, in my opinion, very impressive. Another vote for ZFS, or +1 on Google+ :-)

I have actually found this ZFS recovery exercise very enlightening. It is something you usually do not get to do under such "relaxed" circumstances as I was privileged with.

Update: The new disks do not support temperature polling; apparently Western Digital has removed the feature.

Only the remaining "old" disk now supports temperature monitoring
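If your disks do report a temperature, it can be polled via SMART. A sketch using smartmontools, with the device name as an assumption:

```shell
smartctl -A /dev/ad4 | grep -i temperature   # SMART attribute 194, if the drive exposes it
```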

FreeNAS itx setup

As a result of a complete NAS breakdown, one of my customers decided to get a new server with a bit more power than the old one.

I saw this as quite an interesting challenge and got started.

Because the rack cabinet in place was only ~68 cm deep, I had to find a rack chassis that would fit this constraint.
It turns out that Travla makes some very nice chassis with 8 front-access hot-swap drive bays for the raid.

Components:

At first, I tried the Jetway NC9C-550-LF mainboard with the 4x SATA daughterboard. Unfortunately, the latter was unsupported, which defeated the whole point of using this board (8x SATA in total). The LAN interface was not supported out of the box either.

The installation went smoothly, and a software RAID5 was created using the five disks. The creation was a real pain and took forever.
Initial benchmarks went well, but at deployment a significant slowdown appeared: ~250 Mbit/s LAN usage when transferring large files, and as low as 50 Mbit/s for small files. This was unacceptable on a Gigabit LAN.

After swapping the switch and then the NIC, I turned, as a last resort, to what could not possibly be the bottleneck – the server itself!

nas:~# dd if=/dev/zero of=/mnt/storage/zerofile.000 bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 271.362496 secs (38641154 bytes/sec)
nas:~# dd of=/dev/zero if=/mnt/storage/zerofile.000 bs=1m 
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 96.963503 secs (108141308 bytes/sec)

40/100 MB/s is not very impressive for sequential read/write – especially not on a RAID5!
Guess the bottleneck was the server itself after all.
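The MB/s figures are simply the dd byte count divided by the elapsed time. Reproducing the write figure from the output above:

```shell
# 10485760000 bytes over ~271 s -> integer MB/s (decimal megabytes)
bytes=10485760000
secs=271
echo "$((bytes / secs / 1000000)) MB/s"   # prints "38 MB/s"
```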

After a bit of reading and research, I came across a story quite similar to mine, using the exact same disks in a software RAID5. The problem was misalignment of the partitions, due to a change in the standard disk block size (from 512-byte to 4K "Advanced Format" sectors) since – well, I don't know when; I usually don't follow hardware evolution that closely.
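A quick way to check for this on FreeBSD is to compare the reported sector size with the partition offsets – a sketch, with the device name as an assumption:

```shell
diskinfo -v ad4 | grep sectorsize   # 4K drives may still report 512 (sector emulation)
gpart show ad4                      # start offsets divisible by 8 sectors are 4K-aligned
```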

Next, I persuaded the customer to back up the data, so that I could re-create the RAID – only this time as a raid-z.

dd if=/dev/zero of=/storage/zerofile.000 bs=1m count=10000 && dd of=/dev/null if=/storage/zerofile.000 bs=1m && rm /storage/zerofile.000 
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 98.727775 secs (106208815 bytes/sec)
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 46.398998 secs (225991087 bytes/sec)

This is a nice improvement! The customer is also satisfied with the speed increase, but then again – who wouldn’t be?

Finally, a photo of the setup.

20110208-154924.jpg

This is a sight that I just had to document. It is a collection of external disks, and the document on top is the index. This index was created by mounting each disk and taking a screenshot of the Finder window. A very nice ad-hoc solution, if you ask me.

20110208-154905.jpg