Oracle Solaris Virtualization
Table of Contents
ZFS - An Introduction
    zpool - Create a regular raid-z zpool named pool1 with 3 disks
    zpool - Create a mirrored zpool named pool1 with 4 disks
    zpool - Adding mirror to zfs storage pool, pool1 with 2 disks
    zpool - List available storage pools
    zpool - List all pool properties for pool1
    zpool - Destroy a zfs storage pool
    zpool - Export a zfs storage pool, pool1
    zpool - Import a zfs storage pool, pool1
    zpool - Upgrading zfs storage pool to current version
    zpool - Managing/Adding hot spares
    zpool - Create zfs storage pool with mirrored separate intent logs
    zpool - Adding cache devices to zfs storage pool
    zpool - Remove a mirrored device
    zpool - Recovering a Faulted zfs pool
    zpool - Reverting a zpool disk back to a regular disk
    zfs - Hide from df command
    zfs - Mount to a pre-defined mount point (zfs managed)
    zfs - Mount to a pre-defined mount point (legacy managed)
    zfs - Set limits/quota on a zfs filesystem
    zfs - Destroy a zfs filesystem
    zfs - Making a snapshot
    zfs - Rolling back
    zfs - Removing a snapshot
ZONES (aka Containers)
    Easy Steps in creating a Zone
    Recommendations on Zone Build
Solaris LDOMs are hardware-assisted virtualized environments. Currently, LDOMs run only on Sun/Oracle SPARC T-Series servers (servers with a SPARC T-series CPU), whereas zones (or containers) are not restricted to particular hardware. Before delving into zones and LDOMs, an introduction to ZFS and its facilities is presented, since many of the examples for both virtualization implementations make use of ZFS.
ZFS - An Introduction
ZFS is a combined file system and logical volume manager designed by Sun Microsystems (now Oracle). It supports very large storage capacities and integrates concepts from other filesystem and volume-management technologies, such as snapshots, clones, raid-z, NFS and SMB sharing, continuous integrity checking, and automatic repair. Discussed below are the ways of managing and using ZFS.

Creating a zpool is the first step in making and using ZFS. Before creating the pool, you have to decide which type of pool to create: striped/concatenated (raid 0), mirrored (raid 1), raidz (single parity), raidz2 (double parity), or raidz3 (triple parity). To create a zpool, enter the command:

zpool create <pool_name> <vdev-type> <devices>

To illustrate creating zpools, the following examples are used.

zpool - Create a regular raid-z zpool named pool1 with 3 disks
zpool create pool1 raidz c0t0d0 c0t1d0 c0t2d0
If you want to create raid-z2, you need at least 3 drives. Raid-z3 is only advisable with 5 drives or more.

zpool - Create a mirrored zpool named pool1 with 4 disks
zpool create pool1 mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

zpool - Adding a mirror to zfs storage pool pool1 with 2 disks
zpool add pool1 mirror c0t3d0 c0t4d0

zpool - List available storage pools
zpool list

zpool - List all pool properties for pool1
zpool get all pool1

zpool - Destroy a zfs storage pool
zpool destroy -f pool1

zpool - Export a zfs storage pool, pool1
zpool export pool1

zpool - Import a zfs storage pool, pool1
zpool import pool1

zpool - Upgrading zfs storage pool to current version
zpool upgrade -a
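After creating or changing a pool, it is worth confirming its layout and health. A minimal check, assuming the pool1 name used in the examples above:
zpool status pool1
zpool list pool1
zpool status shows the vdev layout, the state of each device, and any scrub or resilver activity; zpool list summarizes size, allocation, and overall health.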
zfs - Mount to a pre-defined mount point (zfs managed)
zfs get mounted <pool_name>/<zfs_name>
zfs set mountpoint=<new_mount_point> <pool_name>/<zfs_name>

zfs - Mount to a pre-defined mount point (legacy managed)
zfs create pool1/autonomy
mkdir -p /apps/autonomy
zfs set mountpoint=legacy pool1/autonomy
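With the mountpoint set to legacy, ZFS no longer mounts the filesystem automatically; it is mounted with the standard mount command and, optionally, an /etc/vfstab entry. A minimal sketch, assuming the pool1/autonomy filesystem and the /apps/autonomy mount point created above:
mount -F zfs pool1/autonomy /apps/autonomy
and the corresponding /etc/vfstab line for mounting at boot:
pool1/autonomy  -  /apps/autonomy  zfs  -  yes  -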
7. Start the zone install. This copies the necessary files to each zone's location.
zoneadm -z zone1 install
zoneadm -z zone2 install
8. Once the zone installs have completed, issue the commands to make the zones ready for use:
zoneadm -z zone1 ready
zoneadm -z zone2 ready

9. Create a sysidcfg template file for each zone (/var/tmp/sysidcfg.zone1 and /var/tmp/sysidcfg.zone2) and copy it into each zone's root location as /etc/sysidcfg:
cp /var/tmp/sysidcfg.zone1 /zonespool/zone1/root/etc/sysidcfg
cp /var/tmp/sysidcfg.zone2 /zonespool/zone2/root/etc/sysidcfg
11. Log in to the zones and start your environment customization.
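At any point during the build, the state of the zones can be confirmed from the global zone; a quick check:
zoneadm list -cv
This lists all configured, installed, and running zones along with their zone path and brand.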
3. Halt the original (source) zone to clone.
Global-zone# zlogin zone1 halt
4. Start the cloning as follows:
Global-zone# zoneadm -z zone3 clone zone1
5. Create a sysidcfg template file for the new zone (/var/tmp/sysidcfg.zone3) and copy it into the zone's root location as /etc/sysidcfg:
cp /var/tmp/sysidcfg.zone3 /zonespool/zone3/root/etc/sysidcfg
6. Start the zones.
Global-zone# zoneadm -z zone1 boot
Global-zone# zoneadm -z zone3 boot
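After the first boot, the new zone's console can be attached from the global zone to watch the sysidcfg-driven configuration complete; a hedged example using the zone3 name from above:
Global-zone# zlogin -C zone3
(Type ~. to disconnect from the zone console.)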
Zone Gotchas
1. If you are configuring multiple zones, it is recommended to allocate the CPU shares as follows (a dynamic example is sketched after this list):
   Global zone = 20
   Non-global zones = 80 / (number of non-global zones)
2. A ZFS pool built on a single drive can be turned into a mirror after the fact. In the example given earlier, we already have a ZFS pool named zpool with c1t2d0 as its member. To make this a mirror, simply issue the following command:
   zpool attach zpool c1t2d0 c1t3d0
3. If IPMP is configured on a physical interface in the global zone and one of its interfaces (the primary active one) is used as the shared network interface of a local/non-global zone, IPMP is automatically picked up by the non-global zone, meaning you do not need to configure IPMP inside the non-global zone.
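The CPU share values from item 1 can also be applied on a running system without a reboot. A minimal sketch, assuming the FSS scheduler is active and the recommended global-zone share of 20:
prctl -n zone.cpu-shares -v 20 -r -i zone global
Running the same command with a non-global zone's name in place of global sets that zone's share. The value only lasts until the zone or system is restarted, so the permanent setting still belongs in zonecfg (see Appendix 1).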
Platform    Firmware Version    LDOM Version
T2000       6.7.0               1.1
T5x20       7.2.1.b             1.1
T5240       7.2.0               1.1
T5440       7.2.0               1.1
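Before deciding whether a patch is needed, the currently installed firmware can be checked from the service processor. A hedged example, assuming an ILOM-based SP such as the one on the T5240:
-> show /HOST
(From the ALOM compatibility shell, showhost reports the same information.)
The sysfw_version property shows the installed system firmware release.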
If you do not have the correct firmware, you need to apply the firmware patch.

Installing the firmware patch on the T5240 System Controller
a. Copy the patch to the server (control domain) to be patched and unpack it in any directory on the control domain. Then copy the following files to /tmp:
   sysfwdownload
   firmware*.pkg
   NOTE: The firmware package file name differs between platforms.
b. Afterwards, execute the following commands:
   cd /tmp
   ./sysfwdownload ./<Firmware.pkg>
   Shut down the control domain using the command:
   shutdown -y -i0 -g0 (or halt)
c. Once at the ok prompt, get to the system controller (ALOM or ILOM) using the key combination #. (pound, period).
d. Once on the ILOM, you need to log in as admin to get to the ALOM prompt (sc>). To get to the ALOM prompt, first log in as root, which gives you the standard ILOM prompt (->), then issue the following command to create an admin user that provides the default ALOM prompt:
   -> create /SP/users/admin role=Administrator cli_mode=alom
   Creating user..
   Enter new password: *******
Sample LDOM build (after primary or control domain has been created):
Assumptions:
Server  : T5240
Network : nxge2, nxge3, nxge4, and nxge5
Disks   : LDOM guests will be assigned the following disks for both OS and apps (c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and c1t7d0, each 146 GB)
Steps:
1. Confirm the primary LDOM
# exec ksh -o vi
# export PATH=$PATH:/opt/SUNWldm/bin
# ldm list
NAME     STATE   FLAGS    CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-cv-   SP    8     4G      0.2%  7h 52m
#
2. Confirm the primary LDOM services
# ldm list-services
VCC
    NAME          LDOM     PORT-RANGE
    primary-vcc0  primary  5000-5255
VSW
    NAME          LDOM     MAC                NET-DEV  DEVICE    DEFAULT-VLAN-ID  PVID  VID  MODE
    primary-vsw0  primary  00:14:4f:f9:d8:76  nxge2    switch@0  1                1
VDS
    NAME          LDOM     VOLUME  OPTIONS  MPGROUP  DEVICE
#
In the output above we can see that create_primary initialized the first active network interface as the network device for the primary virtual switch 0 (primary-vsw0). You would need to add the other virtual network interfaces as follows:
ldm add-vsw net-dev=nxge3 primary-vsw1 primary
ldm add-vsw net-dev=nxge4 primary-vsw2 primary
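After adding the extra switches, the services can be checked again to confirm that they are present; a quick verification:
# ldm list-services primary
The VSW section should now also list primary-vsw1 and primary-vsw2 with their respective net-dev values.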
Platform    Firmware Version    LDOM Version
T2000       6.7.4               1.2
T5x20       7.2.2.e             1.2
T5240       7.2.2.e             1.2
T5440       7.2.2.e             1.2
Precautions before LDOM Software Upgrade

Prior to updating the LDOM software, be sure to take the following precautions:

1. Back up/save the autosave configuration directories.
Whenever you upgrade the OS or the LDOM software on the control domain, you must save and restore the Logical Domains autosave configuration data, which is found in the /var/opt/SUNWldm/autosave-<autosave-name> directories. You can use tar or cpio to save and restore the entire contents of the directories.
NOTE: Each autosave directory includes a timestamp for the last SP configuration update for the related configuration. If you restore the autosave files, the timestamp might be out of sync. In this case, the restored autosave configurations are shown in their previous state, either [newer] or up to date.
To save:
mkdir -p /root/ldom
cd /root/ldom
tar cvf autosave.tar /var/opt/SUNWldm/autosave-*
To restore, be sure to remove the existing autosave directories first to ensure a clean restore operation:
cd /root/ldom
rm -rf /var/opt/SUNWldm/autosave-*
tar xvf autosave.tar

2. Back up/save the Logical Domains constraints database file.
Whenever you upgrade the OS or the LDOM package on the control domain, you must save and restore the Logical Domains constraints database file, which can be found at /var/opt/SUNWldm/ldom-db.xml.
NOTE: Also save and restore the /var/opt/SUNWldm/ldom-db.xml file when you perform any other operation that is destructive to the control domain's file data, such as a disk swap.
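Item 2 above does not show a command; a minimal sketch of one way to keep a dated copy alongside the autosave archive (the /root/ldom staging directory is the one created above):
cp /var/opt/SUNWldm/ldom-db.xml /root/ldom/ldom-db.xml.$(date +%Y%m%d)
Restoring is simply copying the saved file back to /var/opt/SUNWldm/ldom-db.xml, ideally with the ldmd service disabled so the restored file is not overwritten.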
WARNING:
If you lose some of a guest LDOM's configuration resources (i.e., vnetX or vdiskX), do not try to remove and re-create the LDOM. It is easier to simply re-add the resource using the commands from the build script for that particular LDOM.
LDOM Gotchas:
Housekeeping LDOM guests

1. Create an XML backup copy of each LDOM guest as follows (a scripted variant covering all guests is sketched after this list):
ldm ls-constraints -x <ldom_name> > <ldom_name>.xml
This creates a backup XML file in case the guest domain needs to be re-created. To re-create the guest domain using the XML file, issue the following commands:
ldm add-domain -i <ldom_name>.xml
ldm bind <ldom_name>
ldm start <ldom_name>
Edit the guest XML file and replace the lines below with the guest LDOM's reported MAC address and hostid. To get the guest LDOM's MAC and hostid, execute the command:
ldm list-domain -l <guest_ldom_name> | more
The lines to edit:
<Content xsi:type="ovf:VirtualSystem_Type" ovf:id="gdom01">
  <Section xsi:type="ovf:ResourceAllocationSection_Type">
    <Item>
      <rasd:OtherResourceType>ldom_info</rasd:OtherResourceType>
      <rasd:Address>auto-allocated</rasd:Address>
      <gprop:GenericProperty key="hostid">0xffffffff</gprop:GenericProperty>
    </Item>
Replace auto-allocated with the reported MAC address.
Replace 0xffffffff with the reported hostid of the LDOM guest.

2. From the control domain, create an output of the LDOM resource list and LDOM services using the following commands:
ldm list-bindings <ldom_name> > ldm_list_bindings_<ldom_name>.out
ldm list-services > ldm_list-services.out

3. Create a link name indicating the respective LDOM filesystem. As an example, ldom1 is rncardsweb01:
cd /ldomroot
ln -s ldom1 gdom01

4. Make sure to finalize guest LDOM builds with an unbind/bind. This guarantees that the resource list gets saved on the system controller.
NOTE: LDOM resources can be added and deleted dynamically on the fly. To guarantee that resource assignments are saved permanently on the controller, an unbind/bind has to occur.

5. Make sure that the file permissions of the disk images for the guest LDOMs are set as follows:
chmod 1600 <image_file_name>
which should result in the following permissions as displayed:
-rw------T 1 root root <XXXXXX> <Month_date> bootdisk.img
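A minimal sketch of item 1 applied to every domain at once; the backup directory path is an assumption:
mkdir -p /root/ldom/xml && cd /root/ldom/xml
for d in $(ldm list | awk 'NR>1 {print $1}'); do ldm ls-constraints -x $d > ${d}.xml; done
This loops over every domain reported by ldm list (including the primary) and writes one constraints XML file per domain.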
Appendix 1.
Template for the zone configuration:
create -b
set zonepath=<absolute path for the zone location>
set autoboot=true
set scheduling-class=FSS
add net
set address=<zone IP_Address>
set defrouter=<zone default IP Router>
set physical=<zone physical device to use>
end
add rctl
set name=zone.cpu-shares
set value=(priv=privileged,limit=<zone_cpu_share_assigned_value>,action=none)
end
add capped-memory
set physical=<user assigned value>
set swap=<user assigned value>
set locked=<user assigned value>
end
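One hedged way to apply the template: save the filled-in version to a file (the path below is just an example) and feed it to zonecfg, then verify the result:
zonecfg -z zone1 -f /var/tmp/zone1.cfg
zonecfg -z zone1 info
The -f option runs the commands in the file exactly as if they had been typed at the zonecfg prompt.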
Appendix 2.
Template for sysidcfg for the local zones:
system_locale=C
timezone=US/Eastern
terminal=xterm
security_policy=NONE
root_password=x!tra123
name_service=NONE
network_interface=primary {hostname=<zone_name>
    netmask=<zone_netmask>
    protocol_ipv6=no
    default_route=<zone_default_route>}
nfs4_domain=dynamic
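Note that sysidcfg expects root_password to be an encrypted password string in the same form stored in /etc/shadow, not cleartext. One hedged way to obtain such a string is to copy the hash from an account that already has the desired password set:
grep '^root:' /etc/shadow | cut -d: -f2
Paste the resulting value as root_password.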