Testing ZFS array

System Specifications
    CPU: Intel Xeon CPU E5-1650 v2
    Motherboard: Supermicro X9SRi-F
    Memory: 8x32GB RDIMM 800MHz (first test configuration)
    Memory: 8x32GB RDIMM 1333MHz (second test configuration)
    HDD: 4x20TB Seagate Exos X20

Check drive temperature with smartctl
    smartctl -x /dev/sdb | grep "Current Temperature:"

Check drive temperatures with a loop as 'root'
    for i in /dev/sd[a-z]; do echo "$i"; smartctl -x "$i" | grep "Current Temperature:"; done

Check drive temperatures with a loop with sudo
    for i in /dev/sd[a-z]; do echo "$i"; sudo smartctl -x "$i" | grep "Current Temperature:"; done

Generate file with random data in memory
    head -c 50G /dev/urandom > /dev/shm/random.data

Create ZFS Pool
    4x20TB drives: Seagate Exos X20 ST20000NM007D, Firmware: SN01

Copy random.data from /dev/shm to /dev/shm (memory to memory, included for reference)
    50GB Mem to Mem (800MHz Mem): 1090 MB/s
    50GB Mem to Mem (1333MHz Mem): 1090 MB/s

ZFS mirror with 4 disks
    zpool create tank mirror /dev/sdb /dev/sdc /dev/sdd /dev/sde
    Size: 18.2TB
    Fault Tolerance: 3
    50GB Mem to Disk (800MHz Mem): 235.51 MB/s
    50GB Disk to Mem (800MHz Mem): 985.98 MB/s
    50GB Mem to Disk (1333MHz Mem): MB/s
    50GB Disk to Mem (1333MHz Mem): MB/s

    bonnie++ -u root -d /tank/ -b   (1333MHz Mem)
    Version 2.00a       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
    superprox   515400M  214k  99  219m  23  104m  20  474k  98  559m  35 244.0   9
    Latency             41970us   32548us     122ms   87318us     348ms    2049ms
    Version 2.00a       ------Sequential Create------ --------Random Create--------
    superprox           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 16384   1 +++++ +++ 16384   1 16384   1 +++++ +++ 16384   1
    Latency               123ms     718us     221ms     159ms       7us     133ms

ZFS mirror with 2 disks
    zpool create tank mirror /dev/sdb /dev/sdc
    Size: 18.2TB
    Fault Tolerance: 1
    50GB Mem to Disk (800MHz Mem): 261.24 MB/s
    50GB Disk to Mem (800MHz Mem): 990.00 MB/s
    50GB Mem to Disk (1333MHz Mem): MB/s
    50GB Disk to Mem (1333MHz Mem): MB/s

    bonnie++ -u root -d /tank/ -b   (1333MHz Mem)

ZFS Raidz2 with 4 disks
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    Size: 36TB
    Fault Tolerance: 2
    50GB Mem to Disk (800MHz Mem): 374.57 MB/s
    50GB Disk to Mem (800MHz Mem): 991.77 MB/s
    50GB Mem to Disk (1333MHz Mem): 380.39 MB/s
    50GB Disk to Mem (1333MHz Mem): 1000 MB/s

    bonnie++ -u root -d /tank/ -b   (1333MHz Mem)
    Version 2.00a       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
    superprox   515400M  208k  99  1.1g  97  768m  99  469k 100  1.9g  99  1814  50
    Latency             40170us   32485us   20601us   30047us    4112us     111ms
    Version 2.00a       ------Sequential Create------ --------Random Create--------
    superprox           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 16384   1 +++++ +++ 16384   1 16384   1 +++++ +++ 16384   1
    Latency               196ms    2272us     178ms     162ms    1518us     145ms

Copy file from memory to zfs pool
    rsync --progress /dev/shm/random.data /tank/

Copy file from zfs pool to memory
    rsync --progress /tank/random.data /dev/shm/randomcopy.data

Delete unneeded data from memory
    rm /dev/shm/randomcopy.data

Destroy ZFS Pool
    zpool destroy tank

Wipe all signatures from the disks
    wipefs -a /dev/sdb /dev/sdc /dev/sdd /dev/sde
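The copy steps above can be wrapped in a short script so every pool layout is measured the same way. A minimal sketch, assuming the pool under test is already created and mounted at /tank and that /dev/shm/random.data was generated as described earlier (the read-back figure can be inflated by the ZFS ARC if the file is still cached):

    #!/bin/bash
    set -e

    # Mem -> Disk: copy the 50GB test file from tmpfs onto the pool.
    time rsync --progress /dev/shm/random.data /tank/

    # Disk -> Mem: copy it back from the pool into tmpfs.
    time rsync --progress /tank/random.data /dev/shm/randomcopy.data

    # Clean up the copies so the next layout starts fresh.
    rm /dev/shm/randomcopy.data /tank/random.data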
Create encrypted dataset on the Raidz2 pool with 4 disks
    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o keylocation=prompt tank/crypt
    50GB Mem to Disk (800MHz Mem): 379.51 MB/s
    50GB Disk to Mem (800MHz Mem): 362.43 MB/s
    50GB Mem to Disk (1333MHz Mem): MB/s
    50GB Disk to Mem (1333MHz Mem): 474.85 MB/s

Mount encrypted dataset after reboot
    zfs load-key tank/crypt
    zfs mount tank/crypt

Display IO Stats For Pool
    Show IO stats for 'tank', refreshed every 2 seconds until stopped:
        zpool iostat tank 2
    Show IO stats for all pools, refreshed every 2 seconds until stopped:
        zpool iostat 2

Test IO speed with bonnie++ ("-u root" is needed if running as root; this could result in data loss)
    bonnie++ -u root -d /tank/

Test IO speed with iozone3 (operates on the current working directory)
    iozone -a -b /root/Docs/iozone-output.xls
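Drive temperatures can climb during the longer bonnie++ and iozone runs, so it can be worth polling them while a benchmark is going. A small sketch that repeats the smartctl loop from the top of this page once a minute, assuming the pool members are /dev/sdb through /dev/sde:

    # Print each pool member's temperature once a minute while a benchmark runs.
    while true; do
        date
        for i in /dev/sd[b-e]; do
            echo -n "$i: "
            smartctl -x "$i" | grep "Current Temperature:"
        done
        sleep 60
    done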