FreeNAS Server Manual
Written by Michael Kleinpaste | Friday, February 26, 2010
Table of Contents
Introductions
FreeNAS
Components for building a FreeNAS SAN
Key Concepts in FreeNAS
Setting up the FreeNAS SAN
ZFS Management
Introductions
FreeNAS
The FreeNAS storage server is an embedded open source NAS (Network-Attached Storage) distribution based on FreeBSD. It supports the following protocols: CIFS (Samba), FTP, NFS, TFTP, AFP, RSYNC, Unison, iSCSI (initiator and target) and UPnP. In addition to these protocols, FreeNAS supports software RAID (0, 1, 5), ZFS, disk encryption, and S.M.A.R.T./email monitoring, all managed through a web configuration interface (derived from m0n0wall). Due to its small footprint, FreeNAS can be installed on a Compact Flash card or USB key, on a hard drive, or booted from LiveCD. The purpose of FreeNAS and ZFS is to provide a full SAN experience without depending on proprietary commercial SAN products that cost 10x to 100x more than a similarly provisioned FreeNAS SAN. Through its ZFS filesystem, FreeNAS provides features that cost thousands to tens of thousands of dollars to implement through a commercial vendor.
iSCSI Extents - Created to provide connectivity between the ZFS volume and the iSCSI target.
SSH - Used to manage the ZFS structure remotely.
Camcontrol command - Used to rescan the SAS subsystem for new or missing drives.
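As a minimal illustration of the last item (run from an SSH session on the FreeNAS server; output depends entirely on your controller and drives):
#camcontrol rescan all - Rescans all CAM buses, including the SAS subsystem, for new or missing drives.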
FreeNAS Server
1. Install the SAS card(s) in your FreeNAS server.
2. Install the Quad Port NIC(s) in your FreeNAS server.
3. Install the drive(s) in your FreeNAS server. Note: keep it minimal. If you can run it from USB or a single SSD, do so. The number of drives in your server affects the drive listing in your JBOD(s). For instance, if you have 3 drives in your server, the first drive in your first tray will be listed as /dev/da3, not /dev/da1 or /dev/da2.
4. Install your drives in the JBOD subsystem. Make sure to label them as you go. ZFS adds drives to the pool based on their dev name, which is listed during the scan.
5. Connect the JBOD UP1 port to the left port on the SAS card.
6. Plug a single network cable into the first port of the built-in network ports on the FreeNAS server. DO NOT USE THE QUAD PORT NIC PORTS.
7. Install the FreeNAS operating system from a downloaded and burned ISO to the first drive or drive set internal to the server. Note: Due to its small footprint, FreeNAS can be installed on Compact Flash/USB key, hard drive or booted from LiveCD. When installed on internal hard drives, it is preferable that the OS drives be in an internal RAID array. The recommended RAID level is RAID 10.
8. At the console screen, configure the management port. In FreeNAS this is known as the LAN port and will be connected to your internal LAN in order to manage the system. Link the active port (usually bge0) to the LAN interface.
9. Assign the LAN interface a manual IP address.
10. From a web browser, preferably Firefox, connect to the IP address you assigned to the LAN interface, e.g. http://192.168.0.100. The initial username and password are admin and freenas. Note: The admin account for the web interface is an alias for the command-line root account. The passwords are interchangeable.
11. Configure additional general information under System > General, including changing the admin password and setting the web interface to HTTPS.
12. Assign the 4 ports (em0, em1, em2 and em3) to the LAGG group via Network > LAGG. This is usually set up as lagg0, lagg1, etc.
13. Assign the LAGG to the OPT1 interface and reboot the system.
14. Assign an IP address to OPT1 via Network > OPT1. Note: The storage network traffic will be separate from LAN traffic. Assign it an IP address in another subnet.
15. Make all the drives from the JBOD subsystem available to ZFS via Disks > Management. Turn on S.M.A.R.T. monitoring and set the pre-formatted file system to "zfs storage pool device". Note: RAIDed drives will appear in /dev/ as sda, sdb, etc. Non-RAIDed drives will appear in /dev/ as da#, i.e. da1, da2, da3. If the OS is installed on a single drive and there are one or more additional non-RAIDed drives inside the server chassis (not the JBOD), the first JBOD drive will be numbered one higher than the last server drive, counting from drive 0 (zero). So, for instance, if the OS is installed on drive 0 and there are 2 other drives in the server chassis, the first drive in the JBOD will be da3. Drive positions do not move if a drive is transferred from one bay to another in the same subsystem. The position is static to the bay as it is scanned.
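Before creating any pools it is worth confirming which da# device name belongs to which physical drive. A minimal check from the FreeNAS shell (output varies with your controller and drives):
#camcontrol devlist - Lists every detected drive together with its da# device name and the bus/target it was scanned on.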
ZFS Management
For iSCSI presentation you will need to create ZFS volumes. For simple file sharing via FTP, CIFS, or another protocol, you will create a simple ZFS filesystem that is mounted within the normal FreeNAS filesystem.
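As a brief illustration of the difference (the pool name tank matches the examples later in this manual; the dataset names and size are arbitrary):
#zfs create tank/share01 - Creates a ZFS filesystem named share01, mounted within the FreeNAS filesystem and suitable for sharing via FTP, CIFS, etc.
#zfs create -V 100G tank/vol01 - Creates a 100GB ZFS volume named vol01 that can be presented to a server as an iSCSI target.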
Naming Conventions
Any naming convention can be used for pools, filesystems and volumes, but the best practice is to choose names that describe the pool itself. I prefer to name pools according to the drive type and size, since ZFS does not list that information, e.g. sata10k150G/fp01_Edrive. This name tells us that the storage pool is made up of 10K RPM, 150GB SATA drives (the "sata" part has to come first because ZFS requires pool names to begin with a letter) and that the filesystem (or volume in this case) is intended as the E: drive for the server named fp01. This convention should be carried throughout the storage management process, including iSCSI configurations, NFS, etc.
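A minimal sketch of that convention in practice (the device names and 500GB size are illustrative only):
#zpool create sata10k150G mirror da3 da4 mirror da5 da6 - Creates a mirrored storage pool named after the drive type and size.
#zfs create -V 500G sata10k150G/fp01_Edrive - Creates a volume in that pool intended as the E: drive for the server fp01.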
Creating ZPOOLs
Before you can do anything with ZFS you must first create a zpool. Zpools represent the basic collective storage that is then sliced out through the zfs command as file shares or volumes. They are created by configuring sets of RAIDed drives known as vdevs, which are then collectively pooled together into storage pools. Zpools can be created for different purposes and mapped to servers and services either (1 zpool):(1 server/service) or (1 zpool):(many servers/services). To create different RAIDed storage pools use the following commands:
#zpool create tank da3 da4 da5 da6 - Creates a RAID-0 storage pool named tank consisting of da3, da4, da5 and da6.
#zpool create tank mirror da3 da4 mirror da5 da6 - Creates a RAID-10 storage pool consisting of da3 and da4 in the first set with da5 and da6 in the second set.
#zpool create tank raidz da3 da4 da5 - Creates a RAID-Z storage pool consisting of da3, da4 and da5.
#zpool create tank raidz2 da3 da4 da5 da6 - Creates a RAID-Z2 storage pool.
#zpool create tank raidz3 da3 da4 da5 da6 da7 - Creates a RAID-Z3 storage pool.
Spare drives can be added at creation or later. Spares can be removed with the remove option; however, drives may not be removed from storage pools until an upcoming release of the base ZFS code trickles down to the FreeNAS project. To add hot spares to a storage pool use the following commands:
#zpool create tank mirror da3 da4 mirror da5 da6 spare da7 - Creates a RAID-10 storage pool with da7 as a hot spare.
#zpool add tank spare da7 - Adds da7 as a hot spare to the storage pool tank.
Drives and vdevs cannot be removed from a storage pool to reclaim unused drive space at this time. Currently only individual drives can be replaced upon failure or upgraded in place to add available storage space. To replace a hard drive in a storage pool use the following commands:
#zpool replace tank da3 da17 - Replaces drive da3 with drive da17.
#zpool replace -f tank da3 da17 - Forces the replacement of da3 with drive da17 if da3 shows that it is still in use. Some drives may not respond to the -f switch.
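After creating or modifying a pool it is worth verifying its layout and health. These are standard zpool subcommands; the pool name tank matches the examples above:
#zpool status tank - Shows the vdev layout, spares and current health of the pool, including any resilver in progress after a zpool replace.
#zpool list - Shows the size, used and available space, and health of every pool on the system.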
L2ARC Caching
L2ARC is a new level of storage specific to ZFS and will be implemented in FreeNAS 0.8, which uses FreeBSD 8.0 as its OS base. Spindle drives suffer a CPU-to-storage access latency roughly 100,000x worse than the CPU-to-RAM access latency. L2ARC places SSDs in a new tier of the memory/CPU stack to reduce this latency gap by caching commonly read files on the SSDs rather than serving them from the slower spindle drives, taking advantage of ZFS's ability to create Hybrid Storage Pools that mix drive types and speeds in the same storage pool. A Hybrid Storage Pool allows the system to provide full SSD performance for the files that require the lowest possible latency without requiring every drive in the storage pool to be an SSD, essentially creating a 4th cache level: 1 - internal cache of the CPU, 2 - L2 CPU cache, 3 - system RAM, 4 - SSD hybrid cache, last - spindle drives. To add SSDs as L2ARC cache (once released in FreeNAS 0.8):
#zpool create tank mirror da3 da4 mirror da5 da6 cache da15 da16 - Creates a RAID-10 storage pool, as above, adding da15 and da16 (SSD drives) as L2ARC cache.
#zpool add tank cache da15 da16 - Adds SSDs da15 and da16 to the existing storage pool as L2ARC cache.
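Once cache devices are in place, one way to confirm they are being used is the per-device I/O view (pool name tank as in the examples above):
#zpool iostat -v tank - Shows capacity and read/write activity for each vdev, including the cache devices, so you can see how much of the L2ARC has been populated.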
Fat Provisioned (standard) volumes/LUNs are akin to the SAN industry's terminology of LUNs: a virtual device that is presented to the receiving server as a single drive/volume. The volume is created and served to the server in fixed increments. Adding storage to a Fat Provisioned volume requires 1) modifying the volsize property of the volume with the zfs set option, 2) modifying the extent in the FreeNAS web GUI, 3) refreshing the iSCSI connection and 4) resizing the volume (if possible) with the target operating system's internal disk management utilities. Thin Provisioned volumes are presented to the server as being physically larger than the actual available storage. This allows storage managers to provision servers with their maximum capable storage while only buying additional drives as they need to increase the actual storage. It also makes adding storage to a target host simpler, as there is no need to make changes at any level above the storage pool. The new drives and their storage are added with the zpool command, as above, and are instantly available to the provisioned servers with no additional commands. The true amount of space available to the volume is determined by the actual available space of the storage pool the volume resides in. Note 1: 2TB is the maximum partition size Windows NTFS can manage. If a volume is created above 2TB, Windows will have to split the volume into multiple partitions.
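As a small sketch of step 1 of the Fat Provisioned resize procedure (the volume name and new size are illustrative; the GUI, iSCSI and in-guest steps still have to follow):
#zfs set volsize=500G tank/volume01 - Grows the volume volume01 to 500GB at the ZFS level; the extent, iSCSI connection and guest partition must then be updated as described above.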
WARNING: When using Thin Provisioned or sparse volumes it is imperative that actual storage pool usage on the FreeNAS server be continuously monitored to ensure that storage is not depleted. The servers receiving their ZFS volumes will be unaware of the actual available storage on the SAN.
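One lightweight way to keep an eye on this from the FreeNAS shell (pool name tank as in the earlier examples):
#zpool list tank - Shows the pool's total size, used and available space, and capacity percentage.
#zfs list -r tank - Shows the space used and available for every filesystem and volume in the pool.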
To create ZFS volumes use the following commands:
#zfs create -V 250G tank/volume01 - Creates a standard (Fat Provisioned) ZFS volume in the tank storage pool that is 250GB in size and named volume01.
#zfs create -s -V 2TB tank/volume02 - Creates a Thin Provisioned (sparse) ZFS volume in the tank storage pool that is presented as 2TB in size and named volume02.
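A follow-up check to confirm what was created and how the thin volume differs from the standard one (standard ZFS properties; dataset names as above):
#zfs list -t volume -o name,volsize,used,available - Lists all volumes with their presented size versus the space actually consumed.
#zfs get volsize,reservation,refreservation tank/volume02 - A sparse (thin) volume will typically show no reservation, while a standard volume reserves its full volsize up front.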