VNX CheatSheet
To extend a Filesystem
nas_fs -xtend testnfs size=500M pool=poolname
Check/set speed and duplex
server_sysconfig server_2 -pci cge0 -o duplex=full,speed=100
Configure server_3 to be a standby for server_2 (presuming server_3 is currently
a primary data mover):
server_standby server_2 -c mover=server_3 -policy=auto
Configure server_3 to be a primary data mover (presuming server_3 is currently a
standby for server_2):
server_standby server_2 -d mover
server_setup server_3 -type nas
Create a logical interface named cge0-1 on physical interface cge0
server_ifconfig server_2 -c -D cge0 -n cge0-1 -p IP 10.127.52.112
255.255.255.224 10.127.52.127
Display the data movers
nas_server -list (type 1 = primary; type 4 = standby)
Delete a logical interface
server_ifconfig server_2 -d cge0-1
Create a default route
server_route server_2 -a default 10.127.52.126
List all routes
server_route server_2 -l
Delete a default route
server_route server_2 -d default 10.127.52.126
Configure DNS
server_dns server_2 hmarine.com 10.127.52.161
Check to see if dns is running
server_dns server_2
Stop/Start DNS
server_dns server_2 -o stop
Configure NIS:
server_nis server_2 hmarine.com 10.127.52.163
Start & stop certain services
server_setup server_2 -P cifs -o start (or -o stop) (or -o delete)
Set the date on the data mover:
server_date server_2 YYMMDDHHMM
Make the data mover a time services client to an NTP server at 10.127.52.161
server_date server_2 timesvc start ntp 10.127.52.161
Create a 10 GB file system using AVM:
nas_fs -n fsx -create size=10G pool=symm_std
Get information about file systems:
nas_fs -list (for a list of file systems)
nas_fs -info fsx
nas_fs -size fsx
Create a mountpoint on the data mover for a file system:
server_mountpoint server_2 -c /yourmountpoint
Mount a filesystem
server_mount server_2 fsx /mpx
Unmount a filesystem
server_umount server_2 -p /mpx
See what file systems are mounted
server_mount server_2
Export a file system mounted on /mpx for NFS
server_export server_2 -o root=10.127.52.12 /mpx (will allow root access
to file system)
Unexport a file system for NFS:
server_export server_2 -u -p /mpx
Export (share) a file system for CIFS (global)
server_export server_2 -P cifs -n sharename /mpx
Export (share) a file system for CIFS (local)
server_export server_2 -P cifs -n sharename -o netbios=CifsSvr1 /mpx
Unexport (stop sharing) a filesystem for CIFS (global share)
server_export server_2 -P cifs -u -n sharename
Unexport (stop sharing) a filesystem for CIFS (local share)
server_export server_2 -P cifs -u -n sharename -o netbios=CifsSvr1
Create a CIFS server
server_cifs server_2 -a
compname=CifsSvr1,domain=corp.hmarine.com,interface=cge0-1
Delete a CIFS server
server_cifs server_2 -d compname=CifsSvr1
Join a CIFS server to the domain
server_cifs server_2 -J
compname=CifsSvr1,domain=corp.hmarine.com,admin=administrator
Note: to change an interconnect that a replication session is using, you must
first stop the replication session, make the modification to the interconnect,
and then start the session again.
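The stop/modify/start sequence might look like the following sketch. The session name rep1, the interconnect name, and the bandwidth-schedule change are hypothetical; verify the exact flags against the CLI reference for your VNX OE version.

```shell
# Sketch only: stop the replication session before touching the interconnect.
# "rep1" is a hypothetical session name.
nas_replicate -stop rep1 -mode both

# Make the interconnect modification (here, an assumed bandwidth-schedule
# change on a hypothetical interconnect named "s2_s2_ic").
nas_cel -interconnect -modify s2_s2_ic -bandwidth MoTuWeThFr07:00-18:00/5000

# Restart the session once the interconnect change is in place.
nas_replicate -start rep1
```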
================================================
===============================
Configuring VDM
1. The source and destination side interface names must be the same for the
CIFS servers to transition. You may need to change the interface name created
by Unisphere (by default it uses the IP address) by creating a new interface
name with the server_ifconfig command.
2. The source and destination side mount points must be the same for the share
names to resolve correctly. This ensures the CIFS share can recognize the full
path to the share directory and users will be able to access the replicated
data after failover.
3. When the source replicated file system is mounted on a VDM, the destination
file system should also be mounted on the VDM that is replicated from the same
source VDM.
4. For the NFS endpoint of a replicated VDM to work correctly on the
destination side, the Operating Environments of the source and destination
sides must be version 7.0.50.0 or later.
5. The local groups in the source VDM are replicated to the destination side in
order to have complete access control lists (ACLs) on the destination file
system.
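For requirement 1 above, a hedged sketch using server_ifconfig (same command as earlier in this sheet). The interface name cge0-rep, the addresses, and the masks are examples only; what matters is that the NAME matches on both sides.

```shell
# On the SOURCE Control Station: create a named interface on cge0.
# Name "cge0-rep" and all addresses below are examples.
server_ifconfig server_2 -c -D cge0 -n cge0-rep -p IP 10.127.52.112 255.255.255.224 10.127.52.127

# On the DESTINATION Control Station: same interface NAME, its own address,
# so the CIFS servers can transition after failover.
server_ifconfig server_2 -c -D cge0 -n cge0-rep -p IP 10.127.53.112 255.255.255.224 10.127.53.127
```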
Steps
1. The source VDM to be replicated must be in the loaded read/write or mounted
read-only state.
nas_server -list -vdm
2. The destination VDM is created automatically, the same size as the source
and read-only, as long as the specified storage is available.
nas_pool -size name
3. Verify that authentication is configured properly between the Data Mover
pair and validate all the combinations between source and destination IP
addresses/interface names.
nas_cel -interconnect -validate id=
4. Create the VDM replication session.
The VDM replication session replicates information contained in the root file
system of a VDM. It produces a point-in-time copy of a VDM that re-creates the
CIFS environment at the destination. It does not replicate the file systems
mounted to the VDM.
nas_replicate -create <name> -source -vdm <vdmName> -destination -vdm
<existing_dstVdmName> -interconnect {<name>|id=<interConnectId>} [{-max_time_out_of_sync <maxTimeOutOfSync>|-manual_refresh}]
5. Replicate the file system mounted to the VDM.
After you have created the session to replicate the VDM, create a session to
replicate the file system mounted to the VDM.
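Step 5 can be sketched with the file-system form of nas_replicate, reusing the same interconnect as the VDM session. The session name fsx_rep, the file system fsx, the pool ID, and the 10-minute out-of-sync window are examples only; check the syntax against your CLI reference.

```shell
# Sketch: replicate file system "fsx" (mounted to the VDM) over the same
# interconnect as the VDM session. All names/IDs below are examples.
nas_replicate -create fsx_rep -source -fs fsx -destination -pool id=<dstStoragePoolId> -interconnect id=<interConnectId> -max_time_out_of_sync 10
```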