ZFS

Deebs

Chief Arsewipe
Staff member
Moderator
FH Subscriber
Joined
Dec 11, 1997
Messages
9,075,887
I've actually got a nice setup here. I put an old HP Microserver with 8TB at my sister's house and set up reverse SSH on that box to connect to my pfSense firewall here, where I NAT the ports to my NAS. What this enables me to do is send whatever I want offsite to the NAS at her house :)
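For anyone wanting to copy the idea, the offsite end mostly just needs a persistent reverse tunnel. A minimal sketch using autossh under systemd; the host name, user and ports here are illustrative, not from the post:

```ini
# Hypothetical systemd unit on the offsite Microserver.
# "tunnel@pfsense.example.home" and port 2222 are placeholders.
[Unit]
Description=Persistent reverse SSH tunnel to home pfSense
After=network-online.target
Wants=network-online.target

[Service]
# -R 2222:localhost:22 exposes this box's SSH on the firewall's port 2222;
# the keepalive options make dead tunnels get torn down and restarted.
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -R 2222:localhost:22 tunnel@pfsense.example.home
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

With something like this in place, the home end can reach the offsite box over the forwarded port and run zfs send/receive (or rsync) through the tunnel.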
 

TdC

Trem's hunky sex love muffin
Joined
Dec 20, 2003
Messages
30,693
coincidentally I upgraded my server to 7x2TB WD Reds through a SuperMicro 8-port SAS/SATA HBA. I've got two 128GB SSDs striped on an Adaptec (Microsemi) RAID card and passed through to ZFS as a dedicated cache. I'm running it as RAIDZ-1. I have dedup and atime off, and am using LZ4 as the compression algo. The other day I saw it break 55K IOPS. Not bad for spinning rust tbh :)
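For reference, the dataset-property side of that setup is just three settings. A sketch of the equivalent commands, with `tank` as a placeholder pool name (the post doesn't give one):

```shell
# Placeholder pool name "tank"; run against your own pool/dataset.
zfs set compression=lz4 tank   # LZ4 is cheap on CPU and usually a net win
zfs set atime=off tank         # stop access-time writes on every read
zfs set dedup=off tank         # the default, but worth being explicit
```

Properties are inherited, so setting them on the pool's root dataset covers everything below it unless a child dataset overrides them.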
 

Deebs

Chief Arsewipe
Staff member
Moderator
FH Subscriber
Joined
Dec 11, 1997
Messages
9,075,887
coincidentally I upgraded my server to 7x2TB WD Reds through a SuperMicro 8-port SAS/SATA HBA. I've got two 128GB SSDs striped on an Adaptec (Microsemi) RAID card and passed through to ZFS as a dedicated cache. I'm running it as RAIDZ-1. I have dedup and atime off, and am using LZ4 as the compression algo. The other day I saw it break 55K IOPS. Not bad for spinning rust tbh :)
I also have atime and dedup off (dedup needs shedloads of RAM, otherwise it will kill performance). I could put a couple of SSDs in the case for L2ARC and another for a SLOG, but I'm not sure I need that level of performance at the moment :)

Maybe when FreeNAS 10 hits and you get true VM and Docker container support....
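On the RAM point: each dedup-table entry is commonly cited as roughly 320 bytes, so at the default 128K recordsize the table alone wants gigabytes of RAM per terabyte of unique data (more in practice, since average block size is usually smaller than 128K). A back-of-envelope sketch; the 14 TiB pool size is purely illustrative:

```shell
# Back-of-envelope DDT RAM estimate: entries * bytes_per_entry, where
# entries = pool_bytes / recordsize. 320 B/entry and 128K records are
# the commonly cited figures, not exact for any given pool.
pool_tib=14
awk -v tib="$pool_tib" 'BEGIN {
    blocks = tib * 2^40 / (128 * 1024)   # number of 128K blocks
    gib    = blocks * 320 / 2^30         # DDT size in GiB
    printf "DDT for %d TiB of unique data: ~%.1f GiB of RAM\n", tib, gib
}'
```

Which is why "just turn dedup on" goes so badly on boxes with 16-32GB of RAM.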
 

Deebs

Chief Arsewipe
Staff member
Moderator
FH Subscriber
Joined
Dec 11, 1997
Messages
9,075,887
So here is my kit list for my new setup:

Code:
Lian-Li D8000 case
Supermicro X11SSL-CF (onboard LSI 3008 and 6 additional SATA3 ports)
Intel Xeon E3-1220 v5
32GB RAM
IBM M1015 (LSI 2008)
14 x 3TB WD Red NAS drives
It currently has 5 fans installed (5 more on order); these are the temperatures:
Code:
CPU 0: 27 C
CPU 1: 30 C
CPU 2: 29 C
CPU 3: 33 C

da0 : 32 C
da1 : 30 C
da2 : 29 C
da3 : 27 C
da4 : 34 C
da5 : 32 C
da6 : 34 C
da7 : 29 C
da8 : 31 C
da9 : 28 C
da10: 28 C
da11: 32 C
da12: 32 C
da13: 32 C
Once the new fans arrive I expect the drive temperatures to be more uniform around the 29 C mark. House is currently heated to 21 C.
 

Deebs

Chief Arsewipe
Staff member
Moderator
FH Subscriber
Joined
Dec 11, 1997
Messages
9,075,887
And finally the scrub speed:
Code:
  scan: scrub in progress since Fri Nov 11 15:36:07 2016
        675G scanned out of 15.6T at 693M/s, 6h16m to go
        0 repaired, 4.23% done
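The ETA in that output is simply the remaining data divided by the current scan rate; plugging the reported figures back in reproduces it:

```shell
# Recompute zpool's scrub ETA from the numbers it reports:
# remaining bytes / scan rate = seconds to go.
awk 'BEGIN {
    remaining_gib = 15.6 * 1024 - 675        # 15.6T total minus 675G scanned
    secs = remaining_gib * 1024 / 693        # at 693M/s
    printf "%dh%02dm to go\n", secs / 3600, (secs % 3600) / 60
}'
```

The catch, as any ZFS user knows, is that the rate drops once the scrub leaves the fast outer tracks, so the estimate creeps up over time.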
 

caLLous

I am a FH squatter
Joined
Dec 23, 2003
Messages
17,569
Mine's in the garage, lovely and chilly.



Scrub of one of the pools:
Code:
  scan: scrub in progress since Fri Nov 11 17:25:56 2016
        478G scanned out of 83.7T at 858M/s, 28h15m to go
        0 repaired, 0.56% done
 

Deebs

Chief Arsewipe
Staff member
Moderator
FH Subscriber
Joined
Dec 11, 1997
Messages
9,075,887
83 terabytes? wtf have you got stored on there? I hadn't thought about putting it in the garage. How do you deal with winter? Condensation etc.?
 

caLLous

I am a FH squatter
Joined
Dec 23, 2003
Messages
17,569
Err, Linux ISOs of course. What else is there to store? :) And it's 83 TB raw; it's just over 56 TB usable with four 6-disk raidz2 vdevs.

And no problem with humidity in the garage, it's more of a barn on the end of the house and it's, shall we say, well aired (aka draughty as hell).
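The raw-to-usable drop is mostly just parity: each 6-disk raidz2 vdev spends 2 disks' worth of space on parity, so only 4/6 of the raw capacity holds data. Checking that against the quoted figures:

```shell
# Usable space in raidz2: each n-disk vdev gives (n - parity)/n of its
# capacity to data. Figures from the post: 83.7T raw, 6-disk raidz2 vdevs.
awk 'BEGIN {
    raw = 83.7; n = 6; parity = 2
    printf "~%.1fT usable of %.1fT raw\n", raw * (n - parity) / n, raw
}'
```

That lands right around the "just over 56 TB" quoted, before accounting for metadata and reservations.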
 

Deebs

Chief Arsewipe
Staff member
Moderator
FH Subscriber
Joined
Dec 11, 1997
Messages
9,075,887
Err, Linux ISOs of course. What else is there to store? :) And it's 83 TB raw; it's just over 56 TB usable with four 6-disk raidz2 vdevs.

And no problem with humidity in the garage, it's more of a barn on the end of the house and it's, shall we say, well aired (aka draughty as hell).
Haha, Linux ISOs of course, silly me! Btw, why ZFS on Linux? Have they caught up with the OpenZFS feature set (I know they're a member)? Just curious. What about in the summer when it gets hot?
 

Zarjazz

Identifies as a horologist.
Joined
Dec 11, 2003
Messages
2,226
Haha, Linux ISOs of course, silly me! Btw, why ZFS on Linux? Have they caught up with the OpenZFS feature set (I know they're a member)? Just curious. What about in the summer when it gets hot?
ZFS on Linux has been working here for a while now. I only had one issue last year: heavy load caused a lockup, but that had already been fixed in a newer version, so a patch + reboot solved it. It's been stable ever since. I'm not sure how far they've caught up feature-wise with BSD.

Code:
NAME     SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
storage  218T   166T   51.4T  -         38%   76%  1.00x  ONLINE  -
Code:
state: ONLINE
config:

  NAME                                          STATE     READ WRITE CKSUM
  storage                                       ONLINE       0     0     0
    raidz2-0                                    ONLINE       0     0     0
      scsi-35000c5005806998f                    ONLINE       0     0     0
      scsi-35000c5005806a283                    ONLINE       0     0     0
      scsi-35000c5005806a48f                    ONLINE       0     0     0
      scsi-35000c5005806a877                    ONLINE       0     0     0
      scsi-35000c5005806ac17                    ONLINE       0     0     0
      scsi-35000c5005806ad2b                    ONLINE       0     0     0
      scsi-35000c5005806b5eb                    ONLINE       0     0     0
      scsi-35000c5005806b633                    ONLINE       0     0     0
      scsi-35000c5005806bf33                    ONLINE       0     0     0
      scsi-35000c50058077f8b                    ONLINE       0     0     0
    raidz2-1                                    ONLINE       0     0     0
      scsi-35000c50058078e63                    ONLINE       0     0     0
      scsi-35000c50058079f3b                    ONLINE       0     0     0
      scsi-35000c5005807a153                    ONLINE       0     0     0
      scsi-35000c5005807a5a3                    ONLINE       0     0     0
      scsi-35000c5005807ad33                    ONLINE       0     0     0
      scsi-35000c5005807b15b                    ONLINE       0     0     0
      scsi-35000c5005807b967                    ONLINE       0     0     0
      scsi-35000c500580c02af                    ONLINE       0     0     0
      scsi-35000c500580cd90f                    ONLINE       0     0     0
      scsi-35000c500580ecb2f                    ONLINE       0     0     0
    raidz2-2                                    ONLINE       0     0     0
      scsi-35000c500580f7513                    ONLINE       0     0     0
      scsi-35000c50083d566bf                    ONLINE       0     0     0
      scsi-35000c500580f80d3                    ONLINE       0     0     0
      scsi-35000c500580f885f                    ONLINE       0     0     0
      scsi-35000c500580f8bd3                    ONLINE       0     0     0
      scsi-35000c500580f8c57                    ONLINE       0     0     0
      scsi-35000c500580f980f                    ONLINE       0     0     0
      scsi-35000c500580fada7                    ONLINE       0     0     0
      scsi-35000c500580fadc7                    ONLINE       0     0     0
      scsi-35000c50063496577                    ONLINE       0     0     0
    raidz2-3                                    ONLINE       0     0     0
      scsi-35000c500580fc5eb                    ONLINE       0     0     0
      scsi-35000c500580fcaff                    ONLINE       0     0     0
      scsi-35000c500580fce0f                    ONLINE       0     0     0
      scsi-35000c500580fcffb                    ONLINE       0     0     0
      scsi-35000c500580fd257                    ONLINE       0     0     0
      scsi-35000c500580fd3af                    ONLINE       0     0     0
      scsi-35000c500580fd463                    ONLINE       0     0     0
      scsi-35000c500581052ff                    ONLINE       0     0     0
      scsi-35000c50058107e1f                    ONLINE       0     0     0
      scsi-35000c50058108477                    ONLINE       0     0     0
    raidz2-4                                    ONLINE       0     0     0
      scsi-35000c500583c6333                    ONLINE       0     0     0
      scsi-35000c500583c73a7                    ONLINE       0     0     0
      scsi-35000c500583cb9ef                    ONLINE       0     0     0
      scsi-35000c500583cba8f                    ONLINE       0     0     0
      scsi-35000c500583cbadf                    ONLINE       0     0     0
      scsi-35000c5006349115f                    ONLINE       0     0     0
      scsi-35000c500583cbe93                    ONLINE       0     0     0
      scsi-35000c500583cbf23                    ONLINE       0     0     0
      scsi-35000c500583cbfcb                    ONLINE       0     0     0
      scsi-35000c500583cc9db                    ONLINE       0     0     0
    raidz2-5                                    ONLINE       0     0     0
      scsi-35000c500583ccb63                    ONLINE       0     0     0
      scsi-35000c500583ccc07                    ONLINE       0     0     0
      scsi-35000c500583ccf1b                    ONLINE       0     0     0
      scsi-35000c500583cd0f7                    ONLINE       0     0     0
      scsi-35000c500583cd133                    ONLINE       0     0     0
      scsi-35000c500583cd143                    ONLINE       0     0     0
      scsi-35000c500583cd193                    ONLINE       0     0     0
      scsi-35000c500583cd643                    ONLINE       0     0     0
      scsi-35000c500583cdca3                    ONLINE       0     0     0
      scsi-35000c500583cde7f                    ONLINE       0     0     0
  logs
    mirror-6                                    ONLINE       0     0     0
      scsi-36c81f660eacd32001ae00a8207cf34fe    ONLINE       0     0     0
      scsi-36c81f660eacd32001ae00a8f0893ffa0    ONLINE       0     0     0
  cache
    scsi-36c81f660eacd32001ae00a660627dbed      ONLINE       0     0     0
    scsi-36c81f660eacd32001ae00a76071876db      ONLINE       0     0     0
 

caLLous

I am a FH squatter
Joined
Dec 23, 2003
Messages
17,569
Honestly I have no idea where they're at with features. I read about ZFS, wanted to use it, and went through a few options (see the start of this very thread) before settling on a Linux implementation. ZFS on Linux was "production ready" when I chose it and I've stuck with it ever since. ZoL/OpenZFS is now included with Ubuntu (it was nothing more than a licensing issue stopping it from being part of the distribution) and it works fantastically for me. I upgraded from 14.04.4 LTS (iirc) to 16.04.1 and damn near shit myself when it refused to mount my zpools, until I realised it had nuked my PPA installation of ZoL because it was now built in.

No need for a cache for me, it's almost entirely archive storage. I've had a couple of casualties but I've replaced the disks and ZFS has resilvered and got on with life without batting an eyelid. I can't fault it. Well, maybe the inflexible expansion options (compared to btrfs, anyway) but if you plan ahead it's ok.

Summer is fine as well. It's a double-height space and it's nice and cool even when it's 40 C outside.
 

Deebs

Chief Arsewipe
Staff member
Moderator
FH Subscriber
Joined
Dec 11, 1997
Messages
9,075,887
Right, finally gotten round to implementing ZRep (a script that automates snapshots and replication) to send important datasets to my second ZFS server for backups. I've got 4 datasets which replicate every 60 minutes. Everything is going smoothly, and currently 5 snapshots are stored on the source and destination for each dataset.

Code:
[root@nas1-home] /mnt/datapool/scripts# ./zrep status
datapool/Photos                                last synced Tue Feb 28 18:00 2017
datapool/Public                                last synced Tue Feb 28 18:15 2017
datapool/jails                                 last synced Tue Feb 28 18:20 2017
datapool/mediafiles/Music                      last synced Tue Feb 28 18:30 2017
I still need to add other datasets, but for now these are the most important. The jails dataset, if anyone is interested, holds stuff like MySQL for Kodi etc.
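For the curious, driving zrep on a schedule is typically just one cron entry per dataset calling `zrep sync`; a hypothetical crontab matching the stagger visible in the status output above:

```
# Hypothetical /etc/crontab entries; the script path matches the post,
# the stagger is inferred from the "last synced" times shown.
0  * * * * root /mnt/datapool/scripts/zrep sync datapool/Photos
15 * * * * root /mnt/datapool/scripts/zrep sync datapool/Public
20 * * * * root /mnt/datapool/scripts/zrep sync datapool/jails
30 * * * * root /mnt/datapool/scripts/zrep sync datapool/mediafiles/Music
```

Each sync takes a new snapshot, sends the increment to the paired server, and trims old snapshots down to the configured retention (5 per dataset here).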
 

MYstIC G

Official Licensed Lump of Coal™ Distributor
Staff member
Moderator
FH Subscriber
Joined
Dec 22, 2003
Messages
11,845
I looked at Kodi and SQL once; that's what made me get a Plex Pass, lol
 

Deebs

Chief Arsewipe
Staff member
Moderator
FH Subscriber
Joined
Dec 11, 1997
Messages
9,075,887
So how many of you guys do auto snapshots? Even though I was not compromised by WannaCrypt, I am so happy that all my main data/backups/etc is snapshotted by ZFS, you know, just in case.
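On ZFS-on-Linux boxes the common way to get rolling snapshots is the zfs-auto-snapshot package, which runs from cron with per-tier retention (FreeNAS has the equivalent built in as periodic snapshot tasks). A sketch of the cron.d style it uses; the retention counts here are illustrative:

```
# Illustrative zfs-auto-snapshot cron.d entries; "//" means every dataset
# with com.sun:auto-snapshot=true set.
*/15 * * * * root zfs-auto-snapshot --quiet --syslog --label=frequent --keep=4  //
0    * * * * root zfs-auto-snapshot --quiet --syslog --label=hourly   --keep=24 //
0    0 * * * root zfs-auto-snapshot --quiet --syslog --label=daily    --keep=31 //
```

The ransomware angle is exactly why this works: snapshots are read-only, so malware coming in over SMB can encrypt the live files but can't touch the snapshots.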
 

caLLous

I am a FH squatter
Joined
Dec 23, 2003
Messages
17,569
Big leap forward for ZFS, one of the most moaned-about drawbacks is going to be a thing of the past.

 

MYstIC G

Official Licensed Lump of Coal™ Distributor
Staff member
Moderator
FH Subscriber
Joined
Dec 22, 2003
Messages
11,845
So how many of you guys do auto snapshots? Even though I was not compromised by WannaCrypt, I am so happy that all my main data/backups/etc is snapshotted by ZFS, you know, just in case.
Benefits of using FreeNAS mean this stuff works without any massive amount of user interaction :)
Big leap forward for ZFS, one of the most moaned-about drawbacks is going to be a thing of the past.

This is really good news. The ability to finally undo mistakes in ZFS pool layouts by reshaping them after the fact is going to be awesome. We've got some stuff at work that's just too much pain to touch until this lands.
 

caLLous

I am a FH squatter
Joined
Dec 23, 2003
Messages
17,569
Just had some drama with a couple of disks in a raidz2 vdev but because ZFS is amazing, I suffered no loss of data.

While a faulted disk was being resilvered, another one in the same vdev started showing read errors, causing the resilver to nearly finish and then restart, again and again. I offlined the erroring disk, thinking the original resilver would complete with effectively no parity disks left in the vdev, and then *another* of the fuckers started showing read errors. So I cloned both of the disks with read errors using ddrescue (about 400 errors between them, not bad at all - although the resilver had restarted so many times by that point that the bad sectors it had found had already been reallocated), put the clones back in, and the resilver completed fine. I then onlined the clone of the first erroring disk and that resilvered in no time at all.

Because ZFS is amazing.
 

Jupitus

Old and short, no wonder I'm grumpy!
Staff member
Moderator
FH Subscriber
Joined
Dec 14, 2003
Messages
2,810
Informative, but pretty confuzzling too!! :D
 

caLLous

I am a FH squatter
Joined
Dec 23, 2003
Messages
17,569
Big leap forward for ZFS, one of the most moaned-about drawbacks is going to be a thing of the past.

Finally, RAIDZ expansion is coming. And DirectIO will also mean massive IO improvements (predominantly write) for pools on SSDs.

 

MYstIC G

Official Licensed Lump of Coal™ Distributor
Staff member
Moderator
FH Subscriber
Joined
Dec 22, 2003
Messages
11,845
Finally, RAIDZ expansion is coming. And DirectIO will also mean massive IO improvements (predominantly write) for pools on SSDs.

Good old iXsystems & co.
 
