Veritas Admin For Linux
June 2014
Legal Notice
Copyright 2014 Symantec Corporation. All rights reserved.
Symantec, the Symantec Logo, the Checkmark Logo, Veritas, Veritas Storage Foundation,
CommandCentral, NetBackup, Enterprise Vault, and LiveUpdate are trademarks or registered
trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other
names may be trademarks of their respective owners.
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Symantec
Corporation and its licensors, if any.
THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED
CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED
WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH
DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL
NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION
WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE
INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE
WITHOUT NOTICE.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in
Commercial Computer Software or Commercial Computer Software Documentation", as
applicable, and any successor regulations. Any use, modification, reproduction release,
performance, display or disclosure of the Licensed Software and Documentation by the U.S.
Government shall be solely in accordance with the terms of this Agreement.
Symantec Corporation
350 Ellis Street
Mountain View, CA 94043
http://www.symantec.com
Technical Support
Symantec Technical Support maintains support centers globally. Technical Support's
primary role is to respond to specific queries about product features and functionality.
The Technical Support group also creates content for our online Knowledge Base.
The Technical Support group works collaboratively with the other functional areas
within Symantec to answer your questions in a timely fashion. For example, the
Technical Support group works with Product Engineering and Symantec Security
Response to provide alerting services and virus definition updates.
Symantec's support offerings include the following:
A range of support options that give you the flexibility to select the right amount
of service for any size organization
For information about Symantec's support offerings, you can visit our website at
the following URL:
www.symantec.com/business/support/index.jsp
All support services will be delivered in accordance with your support agreement
and the then-current enterprise technical support policy.
When you contact Technical Support, please have the following information available:
Hardware information
Operating system
Network topology
Problem description
Customer service
Customer service information is available at the following URL:
www.symantec.com/business/support/
Customer Service is available to assist with non-technical questions, such as the
following types of issues:
Documentation
Product guides are available on the media in PDF format. Make sure that you are
using the current version of the documentation. The document version appears on
page 2 of each guide. The latest product documentation is available on the Symantec
website.
https://sort.symantec.com/documents
Your feedback on product documentation is important to us. Send suggestions for
improvements and reports on errors or omissions. Include the title and document
version (located on the second page), and chapter and section titles of the text on
which you are reporting. Send feedback to:
[email protected]
For information regarding the latest HOWTO articles, documentation updates, or
to ask a question regarding product documentation, visit the Storage and Clustering
Documentation forum on Symantec Connect.
https://www-secure.symantec.com/connect/storage-management/
forums/storage-and-clustering-documentation
No longer supported
System requirements
Known issues
Software limitations
Documentation
The information in the Release Notes supersedes the information provided in the
product documents for SF.
This is "Document version: 6.0.1 Rev 4" of the Veritas Storage Foundation Release
Notes. Before you start, make sure that you are using the latest version of this
guide. The latest product documentation is available on the Symantec Web site at:
https://sort.symantec.com/documents
To install the product, follow the instructions in the Veritas Storage Foundation
Installation Guide.
Manage risks
Improve efficiency
Note: Certain features of SORT are not available for all products. Access to SORT
is available at no extra cost.
To access SORT, go to:
https://sort.symantec.com
For important updates regarding this release, review the Late-Breaking News
TechNote on the Symantec Technical Support website:
http://www.symantec.com/docs/TECH164885
Ability to change the precedence order for the predefined disk classes that are
supported for mirror or stripe separation and confinement.
You can now customize the precedence order for the predefined disk classes
that are supported for mirror or stripe separation and confinement. The mirror
or stripe operation honors the higher priority disk class specified in the custom
precedence order.
Use the volume intent management commands to manage the use and require
type of persistent intents. You can set, clear, update, and list the use and require
intents for the volume, after the volume is created.
For more information about vxassist and these enhancements, see the
Administrator's Guide and the vxassist(1M) manual page.
Support for Thin Reclamation on a Thin Reclaimable LUN and TRIMs for an
SSD on Linux
The fsadm -R command and the vxfs_ts_reclaim() call can perform Thin
Reclamation on a Thin Reclaimable LUN and TRIMs for an SSD. In a volume set,
the action taken depends on the type of device.
For more information, see the fsadm(1M) manual page.
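As a sketch, reclamation can be triggered from the command line against a mounted VxFS file system. The mount point and the fsadm path below are illustrative assumptions; check your installation for the exact location.

```shell
# Reclaim free space on the thin-provisioned LUN (or issue TRIMs on an SSD)
# backing the VxFS file system mounted at /mnt/data.
# /mnt/data is a placeholder mount point.
/opt/VRTS/bin/fsadm -R /mnt/data
```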
The glmstat command can display GLM cache memory usage information
You can use the glmstat -M command to display GLM cache memory usage
information.
For more information, see the glmstat(1M) manual page.
only the file system blocks that have changed since the last Storage Checkpoint or
backup via a copy-on-write technique.
Support for creation of Golden Image snapshots using FlashSnap for Oracle
In this release, the SFDB tools support the creation of Golden Image snapshots
using FlashSnap for Oracle databases.
An online-mode, third-mirror-break-off type snapshot (that is, an online FlashSnap
snapshot) of a database instance contains all the information needed to create a clone
of the database instance. It can act as a template for creating clone database instances.
You can thus allocate a FlashSnap snapshot that can be used as a master copy
for creating one or more clone instances. The clone instances created from a
FlashSnap image, termed the 'golden image', are incremental copies of the
master or the golden image. They depend on the FlashSnap image for their
operations.
Protection of the VFR target file system from accidental writes (on Linux)
The protected=off|on option of the mount_vxfs command protects the target file
system from accidental writes. Modifications to the target file system by anything
other than the file replication job may cause replication to fail. The new
protected=off|on option mounts the file system at the target system as read-write,
and only allows the replication daemon to apply updates, thus preventing accidental
writes that could cause replication to fail.
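A minimal sketch of mounting a VFR target with write protection enabled follows. The device and mount point names are placeholders; only the protected=on option comes from the description above.

```shell
# Mount the replication target read-write but protected, so that only the
# replication daemon can apply updates. Device and mount point names are
# illustrative placeholders.
mount -t vxfs -o protected=on /dev/vx/dsk/tgtdg/tgtvol /target_mnt
```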
Finer granularity of replication at the file and directory level using VFR
consistency groups (on Linux)
VxFS supports replication at the file and directory level to a single target using a
consistency group. A consistency group consists of an include list and an exclude
list, which specify a combination of files and directories within a file system that
needs to be replicated as a single consistent unit, and files and directories that
should not be replicated, respectively. Both include and exclude lists are optional,
and if no path is specified in the include list, the entire file system is replicated. The
consistency group must be configured on both the source and the target systems.
A per-RVG read-back memory pool, to avoid memory contention between the
RVGs during SRL read-back.
A separate read-back thread to read the data from the SRL. This is disabled by
default.
Kernel-based Virtual Machine (KVM) technology for Red Hat Enterprise Linux
(RHEL)
SFHA Solutions products provide the following functionality for KVM guest virtual
machines:
Storage visibility
Storage management
High availability
Cluster failover
Replication support
Table 1-1

Objective: Storage management features and replication support for KVM guest
virtual machines
Recommended SFHA Solutions product configuration: Storage Foundation (SF) in
the KVM guest virtual machines
KVM technology: RHEL, SLES

Objective: Advanced storage management features and replication support for KVM
hosts
Recommended SFHA Solutions product configuration: Storage Foundation Cluster
File System High Availability (SFCFSHA) in the KVM hosts
KVM technology: RHEL, SLES

Objective: End-to-end storage visibility in KVM hosts and guest virtual machines
Recommended SFHA Solutions product configuration: DMP in the KVM host and
guest virtual machines
KVM technology: RHEL, SLES

Objective: Storage management features and replication support in the KVM guest
virtual machines and storage visibility in the KVM host
Recommended SFHA Solutions product configuration: DMP in the KVM host and
SF in the KVM guest virtual machines
KVM technology: RHEL, SLES
VCS provides virtual-to-virtual (in-guest) clustering support for the following Linux
virtualization environments:
Microsoft Hyper-V
No longer supported
The following features are not supported in this release of SF products:
System requirements
This section describes the system requirements for this release.
Table 1-2

Operating systems            Levels           Kernel version        Chipsets
Red Hat Enterprise Linux 6   Update 1, 2, 3   2.6.32-131.0.15.el6   64-bit x86,
                                              2.6.32-220.el6        EMT*/Opteron 4.1
                                              2.6.32-279.el6        64-bit only
Red Hat Enterprise Linux 5   Update 5, 6, 7,  2.6.18-194.el5        64-bit x86,
                             8, 9             2.6.18-238.el5        EMT*/Opteron 4.1
                                              2.6.18-274.el5        64-bit only
                                              2.6.18-308.el5
                                              2.6.18-348.el5
SUSE Linux Enterprise 11     SP1, SP2         2.6.32.12-0.7.1       64-bit x86,
                                              3.0.13-0.27.1         EMT*/Opteron 4.1
                                                                    64-bit only
SUSE Linux Enterprise 10     SP4              2.6.16.60-0.85.1      64-bit x86,
                                                                    EMT*/Opteron 4.1
                                                                    64-bit only
Oracle Linux 6               Update 1, 2, 3   2.6.32-131.0.15.el6   64-bit x86,
                                              2.6.32-220.el6        EMT*/Opteron 4.1
                                              2.6.32-279.el6        64-bit only
Oracle Linux 5               Update 5, 6, 7,  2.6.18-194.el5        64-bit x86,
                             8, 9             2.6.18-238.el5        EMT*/Opteron 4.1
                                              2.6.18-274.el5        64-bit only
                                              2.6.18-308.el5
                                              2.6.18-348.el5
Note: For SLES11 SP2 kernel versions released after February 2012, you need to apply
the following Veritas patch: sf-sles11_x86_64-6.0.1.100. This patch is available on
the patch download page at https://sort.symantec.com/patch/detail/6732.
If your system is running an older version of Red Hat Enterprise Linux, SUSE
Linux Enterprise Server, or Oracle Linux, upgrade it before attempting to install the
Veritas software. Consult the Red Hat, SUSE, or Oracle documentation for more
information on upgrading or reinstalling your operating system.
Symantec supports only Oracle, Red Hat, and SUSE distributed kernel binaries.
Symantec products operate on subsequent kernel and patch releases provided the
operating systems maintain kernel Application Binary Interface (ABI) compatibility.
Table 1-3

Veritas Storage Foundation    DB2   Oracle   Oracle   Sybase   Sybase
feature                                      RAC               ASE CE
Oracle Disk Manager           No    Yes      Yes      No       No
Cached Oracle Disk Manager    No    Yes      No       No       No
Concurrent I/O                Yes   Yes      Yes      Yes      Yes
Storage Checkpoints           Yes   Yes      Yes      Yes      Yes
Flashsnap                     Yes   Yes      Yes      Yes      Yes
SmartTier                     Yes   Yes      Yes      Yes      Yes
Database Storage Checkpoints  Yes   Yes      Yes      No       No
Database Flashsnap            Yes   Yes      Yes      No       No
SmartTier for Oracle          No    Yes      Yes      No       No
Notes:
For the most current information on Storage Foundation products and single instance
Oracle versions supported, see:
http://www.symantec.com/docs/DOC4039
Review the current Oracle documentation to confirm the compatibility of your
hardware and software.
Table 1-4

Incident   Description
2329580
2873102
2627076
2622987    sfmh discovery issue when you upgrade your Veritas product to 6.0.1.
2585899    On RHEL, unable to create storage for OCR and Vote disk when using
           FQDN instead of using only the node name.
2526709    The DMP-OSN tunable value does not persist after an upgrade from 5.1SP1
           to 6.0.
2088827

Table 1-5

Incident   Description
2764861
2753944
2735912    The performance of tier relocation using fsppadm enforce is poor when
           moving a large number of files.
2712392
2709869    System panic with redzone violation when vx_free() tried to free fiostat.
2682550    Accessing a VxFS file system via NFS could cause a system panic on Linux
           while an unmount is in progress.
2674639    The cp(1) command with the -p option may fail on a file system whose File
           Change Log (FCL) feature is enabled. The following error messages are
           displayed: cp: setting permissions for 'file_name': Input/output error cp:
           preserving permissions for 'file_name': No data available.
2670022
2655788    Using cross-platform data sharing to convert a file system that has more
           than 32k nlinks does not update the vx_maxlink and maxlink_enable tunables.
2651922    The ls -l command on a local VxFS file system runs slowly and high CPU
           usage is seen.
2597347    fsck should not core dump when only one of the device records has been
           corrupted and the replica is intact.
2584531
2566875    The write(2) operation exceeding the quota limit fails with an EDQUOT error
           (Disc quota exceeded) before the user quota limit is reached.
2559450
2536130
2272072    GAB panics the box because the VCS engine HAD did not respond. The lbolt
           wraps around.
2086902    Spinlock held too long on vxfs spinlock, and there is high contention for it.
1529708
Table 1-6

Fixed issues   Description
2679361
2678096
2672201
2672148        vxdelestat(1M) when invoked with the -v option goes into an infinite loop.
2663750        Abrupt messages are seen in the engine log after complete storage failure
               in a CVM resiliency scenario.
2655786
2655754        Deadlock because of a wrong spinlock interrupt level at which the delayed
               allocation list lock is taken.
2653845        When the fsckptadm(1M) command is executed with the '-r' and '-R' options,
               two mutually exclusive options get executed simultaneously.
2649367
2646936        The replication process dumps core when shared extents are present in the
               source file system.
2646930
2645435
2645112
2645109
2645108        In certain cases, a write on a regular file that has a shared extent as the
               last allocated extent can fail with an EIO error.
2634483
2630954
2613884
2609002
2599590
2583197
2552095        The system may panic while re-organizing the file system using the
               fsadm(1M) command.
2536130
2389318
Table 1-7

Incident   Description
2838059
2832784
2826958
2818840
2794625
2792242
2774406
2771452
2763206    The vxdisk rm command core dumps when the list of disk names is very long.
2756059
2754819    Live deadlock seen during disk group rebuild when the disk group contains
           a cache object.
2751102
2747032
2743926
2741240    The vxdg join transaction failed and did not roll back to the source disk group.
2739709
2739601
2737420
2729501    Exclude path not working properly; can cause a system hang while coming
           up after enabling native support.
2726148
2721807
2711312
2710579    Do not write backup labels for CDS disks, irrespective of disk size.
2710147
2709743
2701654
2700792
2700486    The vradmind daemon core dumps when the Primary and Secondary have the
           same hostname and an active Stats session exists on the Primary.
2700086    EMC BCV (NR) established devices result in multiple DMP event messages
           (paths being disabled/enabled).
2698860    The vxassist mirror command failed for a thin LUN because statvfs failed.
2689845    After an upgrade, some VxVM disks changed to error status and the disk group
           import failed.
2688747    Logowner local sequential I/Os starved with heavy I/O load on the logclient.
2688308    Do not disable other disk groups when a re-import of a disk group fails during
           master takeover.
2680482
2679917    Corrupt space-optimized snapshot after a refresh with CVM master switching.
2675538
2664825    Disk group import fails when the disk contains no valid UDID tag on the config
           copy and the config copy is disabled.
2660151
2656803
2652485
2644248
2643634    Message enhancement for a mixed (non-cloned and cloned) disk group import.
2627126
2626199
2623182
2612301    Upgrading the kernel on an encapsulated boot disk does not work on Red Hat
           Enterprise Linux (RHEL) 5, 6, and SUSE Linux Enterprise Server (SLES) 10.
2607706    Encapsulation of a multi-pathed root disk fails if the dmpnode name and any
           of its path names are not the same.
2580393    Removal of a SAN storage cable on any node brings Oracle Application Groups
           down on all nodes.
2566174
2564092
2553729    Status of the EMC Clariion disk changed to "online clone_disk" after an upgrade.
2486301
2441283    The vxsnap addmir command sometimes fails under heavy I/O load.
2427894
2249445    Develop a tool to get disk-related attributes like geometry, label, media
           capacity, partition info, etc.
2240056
2227678    The second rlink gets detached and does not connect back when overflowed
           in a multiple-secondaries environment.
1675482
1533134
1190117
Table 1-8

Fixed issues   Description
2698035        Tunable values do not change as per the values applied through vxtune.
2682491
2674465
2666163
2660151
2657797        Starting a 32TB RAID5 volume fails with an unexpected kernel error in
               configuration update.
2649958
2647795
2629429        vxunroot does not restore the original menu.lst and fstab files; SUSE 10.0
               NETAPP FAS3000 ALUA SANBOOT.
2627056
2626741
2621465
2620556
2620555
2608849        Logowner local I/O starved with heavy I/O load from the logclient.
2607519
2607293
2605702        Bail out when initializing a disk with a large sector size and a foreign device.
2600863
2591321        While upgrading the disk group version, if the rlink is not up-to-date, the
               vxrvg command shows an error, but the disk group version gets updated.
2590183        A write fails on a volume on a slave node after a join, when the node earlier
               had disks in the "lfailed" state.
2576602        vxdg listtag should give an error message and display correct usage when
               executed with wrong syntax.
2575581
2574752
2565569
2562416
2556467
2535142        A new crash was detected on RHEL6.1 during upgrade due to module unload,
               possibly of vxspec.
2530698        After "vxdg destroy" hangs (for a shared disk group), all VxVM commands
               hang on the master.
2526498
2516584        Startup scripts use 'quit' instead of 'exit', causing empty directories in /tmp.
2402774
2348180        Failure while validating the mirror name interface for a linked mirror volume.
2176084
1765916
Table 1-9
Incident Description
2585643 If you provide an incorrect host name with the -r option of vxsfadm, the command
fails with an error message similar to one of the following:
FSM Error: Can't use string ("") as a HASH ref while
"strict refs" in use at /opt/VRTSdbed/lib/perl/DBED/SfaeFsm.pm
line 776. SFDB vxsfadm ERROR V-81-0609 Repository location is
invalid.
The error messages are unclear.
2703881 (2534422) The FlashSnap validation operation fails with the following error
if the mirrors for data volumes and archive log volumes share the same set of disks:
SFAE Error:0642: Storage for diskgroup oradatadg is not
splittable.
2582694 (2580318) After you have done FlashSnap cloning using a snapplan, any
further attempts to create a clone from the same snapplan using dbed_vmclonedb
continue to use the original clone SID, rather than the new SID specified using the
new_sid parameter. This issue is also observed when you resynchronize the snapplan,
take a snapshot again without specifying the new clone SID, and then try to clone
with the new SID.
2579929 The sfae_auth_op -o auth_user command, used for authorizing users, fails
with the following error message:
SFDB vxsfadm ERROR V-81-0384 Unable to store credentials
for <username>
The authentication setup might have been run with a strict umask value, which
results in the required files and directories being inaccessible to the non-root users.
Known issues
This section covers the known issues in this release.
Run the installer to finish the upgrade process. After the upgrade process completes,
remove the two version files and their directories.
If your system is already affected by this issue, then you must manually install the
VRTSpbx, VRTSat, and VRTSicsco RPMs after the upgrade process completes.
Or
Root device: /dev/vx/dsk/bootdg/rootvol (mounted on / as reiserfs)
Module list: pilix mptspi qla2xxx silmage processor thermal fan
reiserfs aedd (xennet xenblk)
Kernel image: /boot/vmlinuz-2.6.16.60-0.54.5-smp
Initrd image: /boot/initrd-2.6.16.60-0.54.5-smp
The operating system upgrade itself does not fail; the error messages are harmless.
Workaround: Remove the /boot/vmlinuz.b4vxvm and /boot/initrd.b4vxvm files (from
an un-encapsulated system) before the operating system upgrade.
To upgrade from 5.1 SP1 RP2 to 6.0.1 while using an encapsulated root disk
Web installer does not ask for authentication after the first
session if the browser is still open (2509330)
If you install or configure SF and then close the Web installer while other browser
windows are open, the Web installer does not ask for authentication in subsequent
sessions. Since there is no option to log out of the Web installer, the session remains
open as long as the browser is open on the system.
Workaround: Make sure that all browser windows are closed to end the browser
session and subsequently log in again.
Start the Web installer again. On the first Web page you see that the session is
still active. Either take over this session and finish it or terminate it directly.
default setting. When you use the installer to uninstall or upgrade, you may see a
message similar to the following:
Veritas Storage Foundation Shutdown did not complete successfully
vxportal failed to stop on dblxx64-21-v1
vxfs failed to stop on dblxx64-21-v1
Workaround:
Open a new session and manually unload the modules that failed to unload.
Use commands similar to the following:
# /sbin/modprobe -r vxportal
# /sbin/modprobe -r vxfs
Not all the objects are visible in the VOM GUI (1821803)
After upgrading the SF stack from 5.0MP3RP2 to 5.1, the volumes are not visible
under the Volumes tab, and the shared disk group is discovered as Private and
Deported under the Diskgroup tab in the SFM GUI.
Workaround:
To resolve this known issue
you do not tag the volume with the placement classes prior to constructing a volume
set for the volume.
Workaround:
To see the placement class tags in the VEA GUI, you must tag the volumes prior
to constructing the volume set. If you already constructed the volume set before
tagging the volumes, restart vxsvc to make the tags visible in the GUI.
Workaround:
Before you perform root disk encapsulation, run the following command to
regenerate the device.map file:
# grub-install --recheck /dev/sdb
Edit the /etc/modprobe.conf file and add the following line to the end of the file:
options mptsas mpt_disable_hotplug_remove=0
One or more arrays that provide the shared storage for the cluster are being
powered off
At the same time when the arrays are being powered off, an operation that
requires an internal transaction is initiated (such as VxVM configuration
commands)
In such a scenario, disk group import will fail with a split brain error and the
vxsplitlines output will show 0 or 1 pools.
Workaround:
To recover from this situation
Retrieve the disk media identifier (dm_id) from the configuration copy:
# /etc/vx/diag.d/vxprivutil dumpconfig device-path
Use the dm_id in the following command to recover from the situation:
# /etc/vx/diag.d/vxprivutil set device-path ssbid=dm_id
After the fabric discovery is finished, issue the vxdisk scandisks command to
bring newly discovered devices into the VxVM configuration.
The load order of kernel modules in Linux results in the VxFS file system driver
loading late in the boot process. Since the driver is not loaded when the /etc/fstab
file is read by the operating system, file systems of the type vxfs will not mount.
Workaround:
To resolve the failure to mount VxFS file systems at boot, specify additional options
in the /etc/fstab file. These options allow the file systems to mount later in the
boot process. An example of an entry for a VxFS file system:
/dev/vx/dsk/testdg/testvolume /mountpoint vxfs _netdev,hotplug 1 1
To resolve the issue, the fstab entry for VxVM data volumes should follow this
template:
/dev/vx/dsk/testdg/testvol /testmnt vxfs _netdev 0 0
After you re-add the SAN VC node, run the vxdctl enable command for
Dynamic Multi-Pathing (DMP) to detect the added paths:
# vxdctl enable
The vxdisk resize command keeps the cylinder size (number of heads * total
number of sectors per track) constant before and after the resize operation,
unless the number of cylinders goes beyond 2^16-1 (65535). Because of the VTOC
limitation of storing geometry values only up to 2^16-1, if the number of cylinders
increases beyond the limit, vxdisk resize increases the cylinder size. If this
happens, the private region will overlap with the public region data and corrupt the
user data.
As a result of this LUN geometry change, VxVM is unable to complete vxdisk
resize on simple format disks. VxVM was not designed to handle such geometry
changes during Dynamic LUN Expansion operations on simple disks.
Workaround:
The VxVM vxdisk resize command behaves differently depending on whether
the disk is simple, sliced, or CDS format.
The problem shown above only occurs on simple disk configurations. As a result
of this difference in behavior, if the geometry changes during a Dynamic LUN
Expansion operation at the LUN level, you can convert the disk to a CDS format
disk. Use the vxcdsconvert command on the disk. Then you can issue the vxdisk
resize command.
See http://www.symantec.com/docs/TECH136240 for more information.
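The convert-then-resize sequence above might be sketched as follows. The disk group and disk names are placeholders, and exact options can vary by release; see the vxcdsconvert(1M) and vxdisk(1M) manual pages before running either command.

```shell
# Convert the disk group and its simple-format disks to CDS format,
# then retry the resize. mydg, disk01, and the new length are
# illustrative values only.
vxcdsconvert -g mydg alldisks
vxdisk -g mydg resize disk01 length=20g
```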
This error message may display if the correct feature licenses are not installed.
Workaround:
Check that the Fast Mirror Resync and Disk Group Split and Join licenses are
installed. If not, install the licenses.
Taking a FileSnap over NFS multiple times with the same target
name can result in the 'File exists' error (2353352)
The "File exists" error occurs as a result of the caching behavior of the NFS client.
Because the link operation is successful, the NFS client assumes that a file with
the specified target name, such as file2::snap:vxfs:, was created. As a result,
the NFS client caches a file with this name.
Workaround: Remove the target file after a snapshot is created. This forces the
NFS client to remove the name from the cache. For example:
# ln file1 file2::snap:vxfs:
# rm file2::snap:vxfs:
Workaround:
Use the vxtunefs command to turn off delayed allocation for the file system.
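A minimal sketch of the workaround, assuming the delayed allocation tunable is named dalloc_enable (verify the exact tunable name for your release in the vxtunefs(1M) manual page); the mount point is a placeholder:

```shell
# Turn off delayed allocation on the file system mounted at /mnt/data.
# Both the tunable name dalloc_enable and the mount point are assumptions.
vxtunefs -s -o dalloc_enable=0 /mnt/data
```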
These messages display because the task is blocked for a long time on sleep locks.
However, the task is not hung and the messages can be safely ignored.
Workaround: You can disable these messages by using the following command:
# echo 0 > /proc/sys/kernel/hung_task_timeout_secs
In addition, the deduplication log contains an error similar to the following example:
2011/10/26 01:35:09 DEDUP_ERROR AddBlock failed. Error = 110
These errors indicate that the deduplication process is running low on space and
needs more free space to complete.
Workaround:
Make more space available on the file system.
vxresize fails while shrinking a file system with the "blocks are
currently in use" error (2437138)
The vxresize shrink operation may fail when active I/Os are in progress on the file
system and the file system is being shrunk to a size closer to its current usage. You
see a message similar to the following example:
UX:vxfs fsadm: ERROR: V-3-20343: cannot shrink /dev/vx/rdsk/dg1/vol1 blocks are currently in use.
VxVM vxresize ERROR V-5-1-7514 Problem running fsadm command for volume
vol1, in diskgroup dg1
Workaround:
Rerun the shrink operation after stopping the I/Os.
This error occurs because the fsppadm command functionality is not supported on
a disk layout version less than 6.
Workaround:
There is no workaround for this issue.
The issue applies to global clustering with a bunker configuration, where the bunker
replication is configured using storage protocol. It occurs when the Primary comes
back even before the bunker disk group is imported on the bunker host to initialize
the bunker replay by the RVGPrimary agent in the Secondary cluster.
Workaround:
To resolve this issue
Before failback, make sure that bunker replay is either completed or aborted.
After failback, deport and import the bunker disk group on the original Primary.
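The deport and import in the second step use the standard vxdg commands; bunkerdg below is a placeholder disk group name.

```shell
# After failback, deport the bunker disk group and re-import it on the
# original Primary. bunkerdg is an illustrative name.
vxdg deport bunkerdg
vxdg import bunkerdg
```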
Bunker replay did not occur when the Application Service Group
was configured on some of the systems in the Primary cluster,
and ClusterFailoverPolicy is set to "AUTO" (2047724)
The time that it takes for a global cluster to fail over an application service group
can sometimes be smaller than the time that it takes for VVR to detect the
configuration change associated with the primary fault. This can occur in a bunkered,
globally clustered configuration when the value of the ClusterFailoverPolicy
attribute is Auto and the AppGroup is configured on a subset of nodes of the primary
cluster.
This causes the RVGPrimary online at the failover site to fail. The following
messages appear in the VCS engine log:
RVGPrimary:RVGPrimary:online:Diskgroup bunkerdgname could not be
imported on bunker host hostname. Operation failed with error 256
and message VxVM VVR vradmin ERROR V-5-52-901 NETWORK ERROR: Remote
server unreachable... Timestamp VCS ERROR V-16-2-13066 (hostname)
Agent is calling clean for resource(RVGPrimary) because the resource
is not up even after online completed.
Workaround:
To resolve this issue
When the configuration includes a bunker node, set the value of the
OnlineRetryLimit attribute of the RVGPrimary resource to a non-zero value.
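Setting the attribute can be sketched with the standard VCS commands; the resource name rvgprimary_res and the retry value 2 are placeholders, not values prescribed by this document.

```shell
# Make the VCS configuration writable, set a non-zero retry limit on the
# RVGPrimary resource, then save and close the configuration.
# Resource name and value are illustrative.
haconf -makerw
hares -modify rvgprimary_res OnlineRetryLimit 2
haconf -dump -makero
```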
This happens because the file system may not be quiesced before running the
vradmin ibc command and therefore, the snapshot volume containing the file
system may not be fully consistent.
Issue 2:
After a global clustering site failover, mounting a replicated data volume containing
a VxFS file system on the new Primary site in read-write mode may fail with the
following error:
UX:vxfs mount: ERROR: V-3-21268: /dev/vx/dsk/dg/data_volume
is corrupted. needs checking
This usually happens because the file system was not quiesced on the original
Primary site prior to the global clustering site failover and therefore, the file systems
on the new Primary site may not be fully consistent.
Workaround:
The following workarounds resolve these issues.
For issue 1, run the fsck command on the snapshot volume on the Secondary, to
restore the consistency of the file system residing on the snapshot.
For example:
# fsck -t vxfs /dev/vx/dsk/dg/snapshot_volume
For issue 2, run the fsck command on the replicated data volumes on the new
Primary site, to restore the consistency of the file system residing on the data
volume.
For example:
# fsck -t vxfs /dev/vx/dsk/dg/data_volume
the host name, the DNS server name and domain name are specified to the
YaST tool.
This happens because the YaST tool can change the host name in the /etc/hosts
entry for 127.0.0.2 from the IPv4 host name to the specified new IPv6 host
name. For example:
127.0.0.2 v6hostname.space.ipv6.com v6hostname
Workaround:
The following procedure resolves this issue.
Workaround:
Restart vradmind on all the hosts of the RDS to put the new
IPM_HEARTBEAT_TIMEOUT value into effect. Enter the following on all the hosts
of the RDS:
# /etc/init.d/vras-vradmind.sh restart
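As an illustrative sketch, the tunable is typically set in the vradmind
environment file before the restart; the path /etc/vx/vras/vras_env and the
timeout value shown are assumptions to verify against your release:

```shell
# On every host in the RDS, add or update the tunable in the vradmind
# environment file (file path and timeout value are illustrative) ...
echo 'export IPM_HEARTBEAT_TIMEOUT=120' >> /etc/vx/vras/vras_env
# ... then restart vradmind so the new value takes effect
/etc/init.d/vras-vradmind.sh restart
```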
Resize the volumes. In this example, the volume is increased to 10 GB. Enter
the following:
# vxassist -g diskgroup growto vol 10G
Workaround:
There are two workarounds for this issue.
Follow the offline verification procedure in the "Verifying the data on the
Secondary" section of the Veritas Storage Foundation and High Availability
Solutions Replication Administrator's Guide. This process requires ensuring
that the Secondary is up to date, pausing replication, and running the
vradmin syncrvg command with the -verify option.
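The steps above can be sketched with the following commands; the disk group,
RVG, and Secondary host names are placeholders:

```shell
# Pause replication to the Secondary (all names are placeholders)
vradmin -g hrdg pauserep hr_rvg
# Verify that the Primary and Secondary data volumes are identical
vradmin -g hrdg -verify syncrvg hr_rvg seattle
# Resume replication when verification completes
vradmin -g hrdg resumerep hr_rvg
```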
Workaround:
To relayout a data volume in an RVG from concat to striped-mirror
This may cause the online and monitor operations of the bunker RVG resources
to fail, when they are configured.
Workaround:
Manually update the line that assigns the sys variable in the following
scripts:
/opt/VRTSvcs/bin/RVG/monitor
/opt/VRTSvcs/bin/RVG/online
/opt/VRTSvcs/bin/RVG/offline
The line should read:
sys=`LC_ALL=C; export LC_ALL; $hasys -nodeid | awk '{print $6}'`
Workaround:
As an Oracle user, force shut down the clone database if it is up and then retry the
unmount operation.
This error occurs because the following names are reserved and are not permitted
as tier names for SmartTier:
BALANCE
CHECKPOINT
METADATA
Workaround:
Use a name for SmartTier classes that is not a reserved name.
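For example, a placement class can be assigned by tagging a volume with a
non-reserved class name; the disk group, volume, and class name tier1 below
are illustrative:

```shell
# Tag the volume with a SmartTier placement class named tier1
# (a non-reserved name; disk group and volume names are illustrative)
vxassist -g hrdg settag datavol vxfs.placement_class.tier1
```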
Workaround:
If retrying does not work, perform one of the following actions depending on
the point-in-time copy method you are using:
For FlashSnap, resync the snapshot and try the clone operation again.
For FileSnap and Database Storage Checkpoints, destroy the clone and create
the clone again.
Contact Symantec support if retrying using the workaround does not succeed.
Error: VxVM vxdg ERROR V-5-1-4597 vxdg join FS_oradg oradg failed
datavol_snp : Record already exists in disk group
archvol_snp : Record already exists in disk group
Workaround:
Destroy the space-optimized snapshot first and then perform the FlashSnap resync
operation.
Workaround:
Before running sfua_rept_migrate, rename the startup script NO_S*vxdbms3 to
S*vxdbms3.
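The rename can be sketched in the shell; the scratch directory and the
sequence number 75 below are illustrative, since the actual S*vxdbms3 name
varies by system:

```shell
# Demonstrate the rename in a scratch directory; on a real system you
# would run the loop in the rc directory that holds NO_S*vxdbms3
mkdir -p /tmp/rc.demo && cd /tmp/rc.demo
touch NO_S75vxdbms3                    # illustrative sequence number
for f in NO_S*vxdbms3; do
    mv "$f" "${f#NO_}"                 # strip the NO_ prefix
done
ls                                     # now shows S75vxdbms3
```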
Workaround:
There is no workaround for this issue.
This is a known Oracle bug documented in the following Oracle bug IDs:
Workaround:
Retry the cloning operation until it succeeds.
Reason: This can be caused by the host being unreachable or the vxdbd
daemon not running on that host.
Action: Verify that the host swpa04 is reachable. If it is, verify
that the vxdbd daemon is running using the /opt/VRTS/bin/vxdbdctrl
status command, and start it using the /opt/VRTS/bin/vxdbdctrl start
command if it is not running.
Workaround:
For the 6.0.1 release, create distinct archive and datafile mounts for the checkpoint
service.
failed.
SFORA sfua_rept_migrate ERROR V-81-9162 Failed to umount repository.
Workaround:
The error does not hamper the upgrade. The repository migration works fine,
but the old repository volume does not get unmounted. Unmount it manually.
For example, use /opt/VRTS/bin/umount -o mntunlock=VCS /rep.
For more information, see TECH64812.
Software limitations
This section covers the software limitations of this release.
See the corresponding Release Notes for a complete list of software limitations
related to that component or product.
See Documentation on page 68.
xiv0_618 : Done.
xiv0_612 : Done.
xiv0_613 : Done.
xiv0_614 : Done.
xiv0_615 : Done
As shown in the following example output, the storage is not actually reclaimed.
# vxdisk -o thin list
DEVICE      SIZE(MB)   PHYS_ALLOC(MB)   GROUP   TYPE
xiv0_612    19313      2101             dg1     thinrclm
xiv0_613    19313      2108             dg1     thinrclm
xiv0_614    19313      35               dg1     thinrclm
xiv0_615    19313      32               dg1     thinrclm
xiv0_616    19313      31               dg1     thinrclm
xiv0_617    19313      31               dg1     thinrclm
xiv0_618    19313      31               dg1     thinrclm
DMP does not support devices in the same enclosure that are
configured in different modes (2643506)
DMP does not support the configuration where two devices in the same enclosure
are configured in different modes, for example, one device configured as ALUA
and another configured as Active/Passive (A/P).
For the GRUB configuration file (/boot/grub/menu.lst), add the
elevator=deadline parameter to the kernel command.
For example, for RHEL5, change:
title RHEL5UP3
root (hd1,1)
kernel /boot/vmlinuz-2.6.18-128.el5 ro root=/dev/sdb2
initrd /boot/initrd-2.6.18-128.el5.img
To:
title RHEL5UP3
root (hd1,1)
kernel /boot/vmlinuz-2.6.18-128.el5 ro root=/dev/sdb2 \
elevator=deadline
initrd /boot/initrd-2.6.18-128.el5.img
Similarly, for RHEL6, change the kernel entry to:
title RHEL6
root (hd1,1)
kernel /boot/vmlinuz-2.6.32-71.el6 ro root=/dev/sdb2 \
elevator=deadline
initrd /boot/initrd-2.6.32-71.el6.img
A setting for the elevator parameter is always included by SUSE in its LILO and
GRUB configuration files. In this case, change the parameter from elevator=cfq
to elevator=deadline.
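The substitution itself can be scripted; the sketch below demonstrates it on
a scratch copy of a kernel line rather than on the live /boot/grub/menu.lst,
which should be backed up before editing:

```shell
# Create a scratch file with a SUSE-style kernel line using elevator=cfq
printf 'kernel /boot/vmlinuz ro root=/dev/sdb2 elevator=cfq\n' > /tmp/menu.lst.demo
# Switch the I/O scheduler parameter to the deadline elevator
sed -i 's/elevator=cfq/elevator=deadline/' /tmp/menu.lst.demo
grep elevator /tmp/menu.lst.demo       # kernel line now has elevator=deadline
```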
Reboot the system once the appropriate file has been modified.
See the Linux operating system documentation for more information on I/O
schedulers.
In the cases where the file data must be written to disk immediately, delayed
allocation is disabled on that file. Examples of such cases include Direct I/O,
concurrent I/O, FDD/ODM access, and synchronous I/O.
Delayed allocation is not supported with BSD quotas. When BSD quotas are
enabled on a file system, delayed allocation is turned off automatically for that
file system.
Delayed allocation is not supported for shared mounts in a cluster file system.
Documentation
Product guides are available in the PDF format on the software media in the
/docs/product_name directory. Additional documentation is available online.
Make sure that you are using the current version of documentation. The document
version appears on page 2 of each guide. The publication date appears on the title
page of each document. The latest product documentation is available on the
Symantec website.
http://sort.symantec.com/documents
Documentation set
Table 1-10 lists the documentation for Veritas Storage Foundation.
Table 1-10                Veritas Storage Foundation documentation

Document title                                       File name
Veritas Storage Foundation Release Notes             sf_notes_601_lin.pdf
Veritas Storage Foundation Installation Guide        sf_install_601_lin.pdf
Veritas Storage Foundation: Storage and              sfhas_oracle_admin_601_unix.pdf
Availability Management for Oracle Databases
Veritas File System Programmer's Reference Guide     vxfs_ref_601_lin.pdf
Table 1-11 lists the documentation for Veritas Storage Foundation and High
Availability Solutions products.
Table 1-11                Veritas Storage Foundation and High Availability
                          Solutions products documentation

Document title                                       File name
Veritas Storage Foundation and High Availability     sfhas_solutions_601_lin.pdf
Solutions Solutions Guide
Veritas Storage Foundation and High Availability     sfhas_virtualization_601_lin.pdf
Solutions Virtualization Guide
Veritas Storage Foundation and High Availability     sfhas_replication_admin_601_lin.pdf
Solutions Replication Administrator's Guide
If you use Veritas Operations Manager (VOM) to manage Veritas Storage Foundation
and High Availability products, refer to the VOM product documentation at:
http://sort.symantec.com/documents
Manual pages
The manual pages for Veritas Storage Foundation and High Availability Solutions
products are installed in the /opt/VRTS/man directory.
Set the MANPATH environment variable so the man(1) command can point to the
Veritas Storage Foundation manual pages:
For the Bourne or Korn shell (sh or ksh), enter the following commands:
MANPATH=$MANPATH:/opt/VRTS/man
export MANPATH
If you use the man command to access manual pages, set LC_ALL to C in
your shell to ensure that the pages are displayed correctly.
export LC_ALL=C
See incident 82099 on the Red Hat Linux support website for more information.
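Taken together, the MANPATH and LC_ALL settings from this section can be
applied in a Bourne or Korn shell as follows:

```shell
# Add the Veritas man page directory to the search path
MANPATH=$MANPATH:/opt/VRTS/man
export MANPATH
# Ensure man(1) renders the pages correctly
LC_ALL=C
export LC_ALL
echo "$MANPATH"
```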
Change the MANSECT setting from:
MANSECT 1:8:2:3:4:5:6:7:9:tcl:n:l:p:o
to:
MANSECT 1:8:2:3:4:5:6:7:9:tcl:n:l:p:o:3n:1m
The latest manual pages are available online in HTML format on the Symantec
website at:
https://sort.symantec.com/documents