Build Two Node Oracle RAC 11gR2 (11.2.0.3) with GNS, DNS, DHCP and HAIP
In this article you will see how to use Oracle VirtualBox features to build a
two-node Oracle 11gR2 (11.2.0.3) RAC system on Oracle Enterprise Linux
(OEL 6.1). The setup implements role separation, with different users for
Oracle RDBMS and Oracle GI (oracle and grid respectively), in order to split
the responsibilities between DBAs and storage administrators. The article
shows how to configure DHCP and a sample DNS setup for a GNS deployment, and
gives a glimpse of the HAIP feature, which allows up to four private
interconnect interfaces.
An overview of Oracle virtualization solutions can be seen here. You can see
how to use Oracle VM VirtualBox to build a two-node Solaris cluster here. For
information on building a RAC 11gR2 cluster on OEL without GNS, click here.
In this article you will see how to configure Linux in Oracle VM VirtualBox
virtual machines, install Oracle GI and Oracle RDBMS, and create a
policy-managed database and service.
Two virtual machines, OEL61A and OEL61B, will be configured as RAC nodes,
each with:
4GB RAM
300GB bootable disk (disk space dynamically allocated, not a
fixed-size pre-allocation)
NIC bridged for the RAC public interface: 192.168.2.21 on oel61a
and 192.168.2.22 on oel61b
NIC bridged for a RAC private interface: 10.10.2.21 on oel61a
and 10.10.2.22 on oel61b
NIC bridged for a RAC private interface: 10.10.5.21 on oel61a
and 10.10.5.22 on oel61b
NIC bridged for a RAC private interface: 10.10.10.21 on oel61a
and 10.10.10.22 on oel61b
5 attached shared disks of 10GB each for the ASM storage (Normal
Redundancy ASM disk groups will be deployed)
Virtual machine OEL61 will be configured as follows (it will be used later
on for adding a node):
4GB RAM
300GB bootable disk (Disk space will be dynamically allocated not
a fixed size pre-allocation)
NIC bridged for public interface in RAC with address
192.168.2.11
NIC bridged for private interface in RAC with address
10.10.2.11
NIC bridged for private interface in RAC with address
10.10.5.11
NIC bridged for private interface in RAC with address
10.10.10.11
In this section you will look at how to create a guest OEL 6.1 VM
using Oracle VM VirtualBox.
Press the New button on the menu bar and press Next.
Specify the name of the VM and type of the OS and press Next.
Specify the RAM for the OEL61A VM and press Next.
Select an option to create a new disk.
1. / - 10000M
2. /boot - 10000M
3. /home - 10000M
4. /opt - 10000M
5. /tmp - 10000M
6. /usr - 10000M
7. /usr/local - 10000M
8. /var - 10000M
9. swap - 10000M
10. /u01 - the remaining disk space.
Press Create.
Select Standard partition and press Create.
Specify / and the fixed size and press OK to continue.
Repeat the same steps for all file systems and swap. Once done you
will have file systems similar to the image. Press Next to continue.
Select Database Server and Customize now and press Next to continue.
I selected all.
Wait until all packages get installed and press Reboot.
Skip registration and press Forward.
Press Forward.
Create user and press Forward.
Synchronize NTP and press Forward.
Press Finish.
After reboot you will have a similar screen.
Login as root and select Devices->Install Guest Additions. Press OK.
Press the Run button.
Wait for the installation to complete.
If the auto start window does not prompt you to run Guest Additions
installation go to the media folder and execute the following command.
sh ./VBoxLinuxAdditions.run
After the changes, /etc/sysctl.conf contains:
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.eth2.rp_filter = 2
net.ipv4.conf.eth1.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 2
kernel.shmmax = 2074277888
fs.suid_dumpable = 1
# Controls the maximum total amount of shared memory, in pages
kernel.shmall = 4294967296
Set the user limits for oracle and grid users in /etc/security/limits.conf to
restrict the maximum number of processes for the oracle software owner users
to 16384 and maximum number of open files to 65536.
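The corresponding /etc/security/limits.conf entries might look like the sketch below. The hard limits follow the values stated above; the soft limits are the usual Oracle-documented minimums and are an assumption here:

```
# /etc/security/limits.conf additions (sketch)
oracle  soft  nproc   2047
oracle  hard  nproc   16384
oracle  soft  nofile  1024
oracle  hard  nofile  65536
grid    soft  nproc   2047
grid    hard  nproc   16384
grid    soft  nofile  1024
grid    hard  nofile  65536
```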
Add in /etc/pam.d/login, as per MOS Note 567524.1, the line below in order
for the login program to load the pam_limits.so so that the
/etc/security/limits.conf is read and limits activated and enforced.
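Per MOS Note 567524.1, the line to add to /etc/pam.d/login is:

```
session    required     pam_limits.so
```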
To enable NTP, make sure that the options line in /etc/sysconfig/ntpd is
modified to include -x.
To disable NTP, make sure that the NTP service is stopped, disabled for
auto-start, and that there is no configuration file:
mv /etc/ntp.conf /etc/ntp.conf.org
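With slewing enabled, the /etc/sysconfig/ntpd options line on OEL 6 would look like the following; the user and pid-file arguments follow the stock file and are assumptions here:

```
# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```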
Set the permissions and directories. Note that Oracle RDBMS directory will be
created by OUI in the location specified in the profile.
[root@oel61a shm]#
[root@oel61a shm]# mkdir -p /u01/app/11.2.0/grid
[root@oel61a shm]# mkdir -p /u01/app/grid
[root@oel61a shm]# mkdir -p /u01/app/oracle
[root@oel61a shm]# chown -R grid:oinstall /u01
[root@oel61a shm]# chown oracle:oinstall /u01/app/oracle
[root@oel61a shm]# chmod -R 775 /u01/
[root@oel61a shm]#
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
ORACLE_HOSTNAME=oel61a
ORACLE_SID=+ASM1
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:$ORACLE_HOME/bin
TEMP=/tmp
TMPDIR=/tmp
export ORACLE_BASE ORACLE_HOME ORACLE_HOSTNAME ORACLE_SID LD_LIBRARY_PATH PATH TEMP TMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
if [ -t 0 ]; then
stty intr ^C
fi
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
PATH=$PATH:$HOME/bin
export PATH
[grid@oel61a ~]$
umask 022
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
ORACLE_HOSTNAME=oel61a
ORACLE_SID=RACDB_1
ORACLE_UNQNAME=RACDB
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:$ORACLE_HOME/bin
TEMP=/tmp
TMPDIR=/tmp
export ORACLE_BASE ORACLE_HOME ORACLE_HOSTNAME ORACLE_SID ORACLE_UNQNAME LD_LIBRARY_PATH PATH TEMP TMPDIR
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
if [ -t 0 ]; then
stty intr ^C
fi
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
PATH=$PATH:$HOME/bin
export PATH
[oracle@oel61a ~]$
Let's create 5 shared disks and attach them to the OEL61A VM.
e:\vb>VBoxManage createhd --filename d:\vb\l1asm1.vdi --size 10240 --format VDI
--variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 2a173888-d0fb-4cfd-a80c-068951bfc4ff
e:\vb>
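The remaining four disks follow the same pattern. A sketch from an interactive Windows command prompt is shown below; the file names mirror the example above, and the attach commands assume the same "SATA Controller" and shareable disk type used later for OEL61B:

```
rem Create the remaining fixed-size 10GB VDI images (l1asm2..l1asm5)
for %i in (2 3 4 5) do VBoxManage createhd --filename d:\vb\l1asm%i.vdi --size 10240 --format VDI --variant Fixed
rem Attach all five disks to OEL61A as shareable, on SATA ports 1-5
for %i in (1 2 3 4 5) do VBoxManage storageattach OEL61A --storagectl "SATA Controller" --port %i --device 0 --type hdd --medium d:\vb\l1asm%i.vdi --mtype shareable
```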
Now we can confirm that there are 5 new disk devices, sd[b-f].
[root@oel61a dev]# ls sd*
sda sda10 sda2 sda4 sda6 sda8 sdb sdd sdf
sda1 sda11 sda3 sda5 sda7 sda9 sdc sde
[root@oel61a dev]#
Format each of the new devices. For example for /dev/sdb issue the command
below.
Repeat the steps listed below for each disk from /dev/sdb to /dev/sdf.
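The partitioning command itself did not survive in this copy; a typical equivalent that creates a single primary partition spanning each disk is sketched below (the piped keystrokes mimic the interactive fdisk session and are an assumption):

```
# Run as root; creates one whole-disk primary partition on each device
for d in /dev/sd[b-f]; do
  echo -e "n\np\n1\n\n\nw" | fdisk "$d"
done
```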
Configure ASMlib
[root@oel61a dev]# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
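The subsequent disk-stamping step is not shown above; it would typically look like the following. The disk labels DISK1..DISK5 are assumptions:

```
# Run as root on oel61a; rescan on the other node afterwards
oracleasm createdisk DISK1 /dev/sdb1
oracleasm createdisk DISK2 /dev/sdc1
oracleasm createdisk DISK3 /dev/sdd1
oracleasm createdisk DISK4 /dev/sde1
oracleasm createdisk DISK5 /dev/sdf1
oracleasm scandisks
oracleasm listdisks
```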
Make sure you set proper permissions for the devices. Put into /etc/rc.local
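The rc.local content referred to above was not preserved; a sketch consistent with the grid:oinstall ownership used elsewhere in this setup would be:

```
# Appended to /etc/rc.local (sketch; ownership per this article's setup)
chown grid:oinstall /dev/sd[b-f]1
chmod 660 /dev/sd[b-f]1
```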
Right-click the VM and select Clone, or press CTRL+O. Enter the new name,
mark the check box to reinitialize the MAC address of all network interfaces,
and press the Next button. Here a new OEL61B VM is created from OEL61A.
After dropping the disks from OEL61B I re-attach the shared disks using the
following commands.
E:\vb>type oel61_att1.bat
VBoxManage storageattach OEL61B --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium d:\vb\l1asm1.vdi --mtype shareable
VBoxManage storageattach OEL61B --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium d:\vb\l1asm2.vdi --mtype shareable
VBoxManage storageattach OEL61B --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium d:\vb\l1asm3.vdi --mtype shareable
VBoxManage storageattach OEL61B --storagectl "SATA Controller" --port 4 --device 0 --type hdd --medium d:\vb\l1asm4.vdi --mtype shareable
VBoxManage storageattach OEL61B --storagectl "SATA Controller" --port 5 --device 0 --type hdd --medium d:\vb\l1asm5.vdi --mtype shareable
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
ORACLE_HOSTNAME=oel61b
ORACLE_SID=+ASM2
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:$ORACLE_HOME/bin
TMPDIR=/tmp
export ORACLE_BASE ORACLE_HOME ORACLE_HOSTNAME ORACLE_SID LD_LIBRARY_PATH PATH TMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
if [ -t 0 ]; then
stty intr ^C
fi
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
PATH=$PATH:$HOME/bin
export PATH
[grid@oel61b ~]$
The steps in this section are to be executed as the root user on oel61 only.
Only /etc/resolv.conf needs to be modified on all three nodes, as root.
The DNS server on oel61 will be authoritative for the following zones:
2.168.192.in-addr.arpa
10.10.10.in-addr.arpa
5.10.10.in-addr.arpa
2.10.10.in-addr.arpa
gj.com.
options {
listen-on port 53 { any; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
// memstatistics-file "/var/named/data/named_mem_stats.txt";
// recursion yes;
// allow-recursion { any;};
// allow-recursion-on { any;};
// allow-query-cache { any; };
// allow-query { any; };
// dnssec-enable yes;
// dnssec-validation yes;
// dnssec-lookaside auto;
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "2.168.192.in-addr.arpa" IN {
type master;
file "gj1.com.reverse";
allow-update { none; };
};
zone "10.10.10.in-addr.arpa" IN {
type master;
file "priv1.com.reverse";
allow-update { none; };
};
zone "5.10.10.in-addr.arpa" IN {
type master;
file "priv2.com.reverse";
allow-update { none; };
};
zone "2.10.10.in-addr.arpa" IN {
type master;
file "priv3.com.reverse";
allow-update { none; };
};
zone "gj.com." IN {
type master;
file "gj1.zone";
notify no;
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
[root@oel61 named]#
$ORIGIN gj.com.
@ IN SOA oel61.gj.com. root (
43 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS oel61
oel61 IN A 192.168.2.11
oel61a IN A 192.168.2.21
oel61b IN A 192.168.2.22
oel61c IN A 192.168.2.23
raclinux3 IN A 192.168.2.24
dns CNAME gj.com.
oel61-priv1 IN A 10.10.10.11
oel61a-priv1 IN A 10.10.10.21
oel61b-priv1 IN A 10.10.10.22
oel61-priv2 IN A 10.10.5.11
oel61a-priv2 IN A 10.10.5.21
oel61b-priv2 IN A 10.10.5.22
oel61-priv3 IN A 10.10.2.11
oel61a-priv3 IN A 10.10.2.21
oel61b-priv3 IN A 10.10.2.22
$ORIGIN grid.gj.com.
@ IN NS gns.grid.gj.com.
;; IN NS oel61a.gj.com.
gns.grid.gj.com. IN A 192.168.2.52
oel61 IN A 192.168.2.11
oel61a IN A 192.168.2.21
oel61b IN A 192.168.2.22
oel61c IN A 192.168.2.23
[root@oel61 named]#
Make sure that you enable named service for auto-start issuing the following
command.
chkconfig named on
Disable the firewall by issuing the following command on all nodes oel61,
oel61a and oel61b as root.
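The firewall commands themselves are not shown above; on OEL 6 the standard way to stop the firewall and keep it off across reboots is the following (ip6tables included for completeness, an assumption here):

```
service iptables stop
chkconfig iptables off
service ip6tables stop
chkconfig ip6tables off
```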
Test all public and private nodes accessibility and resolution using
nslookup.
Name: oel61.gj.com
Address: 192.168.2.11
Name: oel61a.gj.com
Address: 192.168.2.21
Name: oel61b.gj.com
Address: 192.168.2.22
[root@oel61a stage]#
Name: oel61a-priv1.gj.com
Address: 10.10.10.21
[root@oel61a stage]#
[root@oel61a stage]# nslookup oel61a-priv2
Server: 192.168.2.11
Address: 192.168.2.11#53
Name: oel61a-priv2.gj.com
Address: 10.10.5.21
[root@oel61a stage]#
Name: oel61a-priv3.gj.com
Address: 10.10.2.21
[root@oel61a stage]#
Name: oel61b-priv1.gj.com
Address: 10.10.10.22
Name: oel61b-priv2.gj.com
Address: 10.10.5.22
Name: oel61b-priv3.gj.com
Address: 10.10.2.22
[root@oel61a stage]# nslookup 10.10.2.22
Server: 192.168.2.11
Address: 192.168.2.11#53
[root@oel61a stage]#
Name: oel61.gj.com
Address: 192.168.2.11
[root@oel61a stage]#
Name: oel61-priv1.gj.com
Address: 10.10.10.11
[root@oel61a stage]#
Name: oel61-priv2.gj.com
Address: 10.10.5.11
[root@oel61a stage]#
[root@oel61a stage]#
Set up DHCP
[root@oel61 named]#
chkconfig dhcpd on
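The dhcpd.conf used here was not preserved; a minimal sketch handing out leases on the public subnet is shown below. The range, router and lease times are assumptions, while the DNS server and domain follow this article's setup:

```
# /etc/dhcp/dhcpd.conf (sketch)
subnet 192.168.2.0 netmask 255.255.255.0 {
  range 192.168.2.100 192.168.2.200;
  option domain-name "gj.com";
  option domain-name-servers 192.168.2.11;
  default-lease-time 21600;
  max-lease-time 43200;
}
```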
The OUI will be used for setting up user equivalence. Run OUI from the
staging directory.
Details:
-
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms
on following nodes: oel61a,oel61b - Cause: The DNS response time for an
unreachable node exceeded the value specified on nodes specified. - Action:
Make sure that 'options timeout', 'options attempts' and 'nameserver' entries
in file resolv.conf are proper. On HPUX these entries will be 'retrans',
'retry' and 'nameserver'. On Solaris these will be 'options retrans', 'options
retry' and 'nameserver'.
Verification result of failed node: oel61a
Details:
-
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms
on following nodes: oel61a,oel61b - Cause: The DNS response time for an
unreachable node exceeded the value specified on nodes specified. - Action:
Make sure that 'options timeout', 'options attempts' and 'nameserver' entries
in file resolv.conf are proper. On HPUX these entries will be 'retrans',
'retry' and 'nameserver'. On Solaris these will be 'options retrans', 'options
retry' and 'nameserver'.
Reference : PRVF-5636 : The DNS response time for an unreachable node exceeded
"15000" ms on following nodes
Reverse path filter setting - Checks if reverse path filter setting for all
private interconnect network interfaces is correct
Check Failed on Nodes: [oel61b, oel61a]
Verification result of failed node: oel61b
Expected Value
: 0|2
Actual Value
: 1
Details:
-
PRVE-0453 : Reverse path filter parameter "rp_filter" for private interconnect
network interfaces "eth3" is not set to 0 or 2 on node "oel61b.gj.com". -
Cause: Reverse path filter parameter 'rp_filter' was not set to 0 or 2 for
identified private interconnect network interfaces on specified node. -
Action: Ensure that the 'rp_filter' parameter is correctly set to the value of
0 or 2 for each of the interface used in the private interconnect
classification, This will disable or relax the filtering and allow Clusterware
to function correctly. Use 'sysctl' command to modify the value of this
parameter.
Verification result of failed node: oel61a
Expected Value
: 0|2
Actual Value
: 1
Details:
-
PRVE-0453 : Reverse path filter parameter "rp_filter" for private interconnect
network interfaces "eth3" is not set to 0 or 2 on node "oel61a.gj.com". -
Cause: Reverse path filter parameter 'rp_filter' was not set to 0 or 2 for
identified private interconnect network interfaces on specified node. -
Action: Ensure that the 'rp_filter' parameter is correctly set to the value of
0 or 2 for each of the interface used in the private interconnect
classification, This will disable or relax the filtering and allow Clusterware
to function correctly. Use 'sysctl' command to modify the value of this
parameter.
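A remedy consistent with the rp_filter lines placed in /etc/sysctl.conf earlier would be to add the missing interface on both nodes and reload the settings (eth3 per the error above):

```
# Append to /etc/sysctl.conf on oel61a and oel61b, then apply with: sysctl -p
net.ipv4.conf.eth3.rp_filter = 2
```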
Solution
3. For PRVF-5636, make sure that nslookup always returns in less than
10 seconds. In the example below it takes 15 seconds.
[root@oel61a bin]# time nslookup not-known
;; connection timed out; no servers could be reached
real 0m15.009s
user 0m0.002s
sys 0m0.002s
[root@oel61a bin]#
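One common remedy, in line with the PRVF-5636 action text, is to bound resolver retries in /etc/resolv.conf on all nodes. The nameserver matches this setup; the option values below are assumptions:

```
# /etc/resolv.conf (sketch)
search gj.com
nameserver 192.168.2.11
options timeout:1
options attempts:2
```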
Check OS processes:
SQL>
RACDBSRV =
(DESCRIPTION =
(LOAD_BALANCE = YES)
(FAILOVER = YES )
(ADDRESS = (PROTOCOL = TCP)(HOST = oel61-cluster-scan.gns.grid.gj.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = oel61-cluster-scan.gns.grid.gj.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RACDBSRV)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 200)
(DELAY = 10 )
)
)
)
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options
11 rows selected.
SQL>
[root@oel61a named]#
1. oel61b.gns.grid.gj.com
2. oel61a.gns.grid.gj.com
3. oel61a-vip.gns.grid.gj.com
4. oel61b-vip.gns.grid.gj.com
5. oel61-cluster-scan.gns.grid.gj.com
Example:
Non-authoritative answer:
Name: oel61a-vip.gns.grid.gj.com
Address: 192.168.2.100
Non-authoritative answer:
Name: oel61b-vip.gns.grid.gj.com
Address: 192.168.2.113
Non-authoritative answer:
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.117
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.111
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.112
[root@oel61a stage]#
Annex 1
Results from the cluvfy utility are displayed below.
ERROR:
PRVF-7617 : Node connectivity between "oel61a : 192.168.122.1" and "oel61b :
192.168.122.1" failed
Result: TCP connectivity check failed for subnet "192.168.122.0"
Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP are:
oel61b eth1:192.168.2.22
oel61a eth1:192.168.2.21
Interfaces found on subnet "10.10.10.0" that are likely candidates for a private
interconnect are:
oel61b eth2:10.10.10.22
oel61a eth2:10.10.10.21
Interfaces found on subnet "10.10.5.0" that are likely candidates for a private
interconnect are:
oel61b eth3:10.10.5.22
oel61a eth3:10.10.5.21
Interfaces found on subnet "10.10.2.0" that are likely candidates for a private
interconnect are:
oel61b eth0:10.10.2.22
oel61a eth0:10.10.2.21
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed for subnet "10.10.5.0".
Subnet mask consistency check passed for subnet "10.10.2.0".
Subnet mask consistency check passed for subnet "192.168.122.0".
Subnet mask consistency check passed.
Result: Node connectivity check failed
ERROR:
PRVF-7617 : Node connectivity between "oel61a : 10.10.2.21" and "oel61b : 10.10.2.22"
failed
TCP connectivity check failed for subnet "10.10.2.0"
ERROR:
PRVF-7617 : Node connectivity between "oel61a : 192.168.122.1" and "oel61b :
192.168.122.1" failed
TCP connectivity check failed for subnet "192.168.122.0"
Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP are:
oel61b eth1:192.168.2.22
oel61a eth1:192.168.2.21
Interfaces found on subnet "10.10.10.0" that are likely candidates for a private
interconnect are:
oel61b eth2:10.10.10.22
oel61a eth2:10.10.10.21
Interfaces found on subnet "10.10.5.0" that are likely candidates for a private
interconnect are:
oel61b eth3:10.10.5.22
oel61a eth3:10.10.5.21
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed for subnet "10.10.5.0".
Subnet mask consistency check passed for subnet "10.10.2.0".
Subnet mask consistency check passed for subnet "192.168.122.0".
Subnet mask consistency check passed.
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on
following nodes: oel61b