
Oracle RAC 11gR2 2-Node Installation Step by Step

Five virtual machines will be created and used.


1. OEL64 will host storage, DNS and DHCP
2. OEL64A for RAC node oel64a
3. OEL64B for RAC node oel64b
4. OEL64C for RAC node oel64c
5. OEL64D for RAC node oel64d

The four virtual machines OEL64A, OEL64B, OEL64C and OEL64D will be configured as RAC nodes,
each with:
 3 GB RAM
 100 GB bootable disk (disk space will be dynamically allocated, not a fixed-size pre-allocation)
 NIC – bridged, for the public interface in RAC, with addresses 192.168.2.21/22/23/24 (first IP 192.168.2.21 on
oel64a, second IP 192.168.2.22 on oel64b, 192.168.2.23 on oel64c and 192.168.2.24 on oel64d).
These are the public interfaces in RAC.
 NIC – bridged, for a private interface in RAC, with addresses 10.10.2.21/22/23/24 (first IP 10.10.2.21 on
oel64a, second IP 10.10.2.22 on oel64b, 10.10.2.23 on oel64c and 10.10.2.24 on oel64d).
These are private interfaces in RAC.
 NIC – bridged, for a private interface in RAC, with addresses 10.10.5.21/22/23/24 (first IP 10.10.5.21 on
oel64a, second IP 10.10.5.22 on oel64b, 10.10.5.23 on oel64c and 10.10.5.24 on oel64d). These
are private interfaces in RAC.
 NIC – bridged, for a private interface in RAC, with addresses 10.10.10.21/22/23/24 (first IP 10.10.10.21 on
oel64a, second IP 10.10.10.22 on oel64b, 10.10.10.23 on oel64c and 10.10.10.24 on oel64d).
These are private interfaces in RAC.
 Attached shared disks for the ASM storage (this guide presents six 10 GB iSCSI LUNs; External
Redundancy ASM disk groups will be deployed).

oel64
1. Create an OEL64 VM with OEL 6.4 as the guest OS for node oel64.
2. Configure the OEL64A VM to meet the prerequisites for the GI and RAC 11.2.0.4 deployment.
3. Clone OEL64 to OEL64A.
4. Clone OEL64A to OEL64B.
5. Clone OEL64A to OEL64C.
6. Clone OEL64A to OEL64D.
7. Set up the DNS and DHCP server on OEL64.
8. Install GI 11.2.0.4 on oel64a, oel64b, oel64c and oel64d.
9. Install the RAC RDBMS 11.2.0.4 on oel64a, oel64b, oel64c and oel64d.
10. Create a policy-managed database RACDB with instances sule1 and sule2.
11. Verify the database creation and create a service.

Note here: Nodes OEL64A and OEL64B are Hub Nodes and will have the ASM and DB instances. OEL64C and
OEL64D will be Leaf Nodes for application deployment.
Table 1: Node and network configuration

Interface  OEL64 (Storage, DNS, DHCP)  OEL64A RAC node  OEL64B RAC node  OEL64C RAC node  OEL64D RAC node  Network Type
           oel64                       oel64a           oel64b           oel64c           oel64d
eth0       192.168.2.20                192.168.2.21     192.168.2.22     192.168.2.23     192.168.2.24     Public
eth1       10.10.2.20                  10.10.2.21       10.10.2.22       10.10.2.23       10.10.2.24       Private
eth2       10.10.10.20                 10.10.10.21      10.10.10.22      10.10.10.23      10.10.10.24      Private
eth3       10.10.5.20                  10.10.5.21       10.10.5.22       10.10.5.23       10.10.5.24       Private

Disable the NetworkManager

[root@oel64 ~]# service NetworkManager stop


Stopping NetworkManager daemon: [ OK ]

[root@oel64 ~]# chkconfig NetworkManager off

Disable SELinux

[root@oel64 ~]# vi /etc/sysconfig/selinux


# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
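The change in /etc/sysconfig/selinux only takes effect after a reboot. As a convenience (standard OEL 6 commands, not part of the original steps), SELinux can also be switched to permissive for the running session and the current mode verified:

[root@oel64 ~]# setenforce 0
[root@oel64 ~]# getenforce
Permissive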

[root@oel64 ~]# vi /etc/hosts


192.168.2.20 oel64.radical.com oel64
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

Reboot the oel64 server.

Clone OEL64 to OEL64A


Set up DNS and DHCP server on OEL64 (Storage Machine OEL64)

Set up DNS

The steps in this section are to be executed as the root user only on oel64. Only /etc/resolv.conf needs to be
modified on all three nodes (oel64, oel64a and oel64b) as root.

As root on oel64, set up DNS by creating the following zones in /etc/named.conf:

 2.168.192.in-addr.arpa
 10.10.10.in-addr.arpa
 5.10.10.in-addr.arpa
 radical.com.

Copy the contents as per below details.

[root@oel64]# vi /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
listen-on port 53 { any; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
// allow-recursion { any;};
// allow-recursion-on { any;};
// allow-query-cache { any; };
// allow-query { any; };
// dnssec-enable yes;
// dnssec-validation yes;
// dnssec-lookaside auto;

/* Path to ISC DLV key */


bindkeys-file "/etc/named.iscdlv.key";
};

zone "2.168.192.in-addr.arpa" IN {
type master;
file "radical.com.reverse";
allow-update { none; };
};

zone "10.10.10.in-addr.arpa" IN {
type master;
file "priv1.com.reverse";
allow-update { none; };
};

zone "5.10.10.in-addr.arpa" IN {
type master;
file "priv2.com.reverse";
allow-update { none; };
};

zone "2.10.10.in-addr.arpa" IN {
type master;
file "priv3.com.reverse";
allow-update { none; };
};

zone "radical.com." IN {
type master;
file "radical.zone";
notify no;
};

zone "." IN {
type hint;
file "/dev/null";
};

include "/etc/named.rfc1912.zones";

[root@oel64 named]#

Create a config file for radical.com zone in /var/named/radical.zone

[root@oel64 ]# cd /var/named/

[root@oel64 named]# vi radical.zone


$TTL 86400

$ORIGIN radical.com.
@ IN SOA oel64.radical.com. root (
43 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS oel64
oel64 IN A 192.168.2.20
oel64a IN A 192.168.2.21
oel64b IN A 192.168.2.22
oel64c IN A 192.168.2.23
oel64d IN A 192.168.2.24
dns CNAME radical.com.
oel64a-priv1 IN A 10.10.10.21
oel64b-priv1 IN A 10.10.10.22
oel64c-priv1 IN A 10.10.10.23
oel64d-priv1 IN A 10.10.10.24
oel64a-priv2 IN A 10.10.5.21
oel64b-priv2 IN A 10.10.5.22
oel64c-priv2 IN A 10.10.5.23
oel64d-priv2 IN A 10.10.5.24
oel64a-priv3 IN A 10.10.2.21
oel64b-priv3 IN A 10.10.2.22
oel64c-priv3 IN A 10.10.2.23
oel64d-priv3 IN A 10.10.2.24

$ORIGIN grid.radical.com.

@ IN NS gns.grid.radical.com.
;; IN NS oel64.radical.com.

gns.grid.radical.com. IN A 192.168.2.52

oel64a IN A 192.168.2.21
oel64b IN A 192.168.2.22
oel64c IN A 192.168.2.23
oel64d IN A 192.168.2.24

[root@oel64 named]#

Create a config file for 2.168.192.in-addr.arpa zone in /var/named/radical.com.reverse.

[root@oel64 named]# vi radical.com.reverse


$ORIGIN 2.168.192.in-addr.arpa.
$TTL 1H
@ IN SOA oel64.radical.com. root.oel64.radical.com. (
2
3H
1H
1W
1H )
2.168.192.in-addr.arpa. IN NS oel64.radical.com.
IN NS oel64.radical.com.
21 IN PTR oel64a.radical.com.
22 IN PTR oel64b.radical.com.
23 IN PTR oel64c.radical.com.
24 IN PTR oel64d.radical.com.
52 IN PTR gns.grid.radical.com.

[root@oel64 named]#

Create a config file for 10.10.10.in-addr.arpa zone in /var/named/priv1.com.reverse.


[root@oel64 named]# vi priv1.com.reverse
$ORIGIN 10.10.10.in-addr.arpa.
$TTL 1H
@ IN SOA oel64.radical.com. root.oel64.radical.com. (
2
3H
1H
1W
1H )
10.10.10.in-addr.arpa. IN NS oel64.radical.com.
IN NS oel64.radical.com.
21 IN PTR oel64a-priv1.radical.com.
22 IN PTR oel64b-priv1.radical.com.
23 IN PTR oel64c-priv1.radical.com.
24 IN PTR oel64d-priv1.radical.com.

[root@oel64 named]#

Create a config file for 5.10.10.in-addr.arpa zone in /var/named/priv2.com.reverse.

[root@oel64 named]# vi priv2.com.reverse


$ORIGIN 5.10.10.in-addr.arpa.
$TTL 1H
@ IN SOA oel64.radical.com. root.oel64.radical.com. (
2
3H
1H
1W
1H )
5.10.10.in-addr.arpa. IN NS oel64.radical.com.
IN NS oel64.radical.com.
21 IN PTR oel64a-priv2.radical.com.
22 IN PTR oel64b-priv2.radical.com.
23 IN PTR oel64c-priv2.radical.com.
24 IN PTR oel64d-priv2.radical.com.

[root@oel64 named]#

Create a config file for 2.10.10.in-addr.arpa zone in /var/named/priv3.com.reverse.

[root@oel64 named]# vi priv3.com.reverse


$ORIGIN 2.10.10.in-addr.arpa.
$TTL 1H
@ IN SOA oel64.radical.com. root.oel64.radical.com. (
2
3H
1H
1W
1H )
2.10.10.in-addr.arpa. IN NS oel64.radical.com.
IN NS oel64.radical.com.
21 IN PTR oel64a-priv3.radical.com.
22 IN PTR oel64b-priv3.radical.com.
23 IN PTR oel64c-priv3.radical.com.
24 IN PTR oel64d-priv3.radical.com.

[root@oel64 named]# chgrp named radical.zone


[root@oel64 named]# chgrp named radical.com.reverse

[root@oel64 named]# chgrp named priv*
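Before starting named it is worth validating the configuration and the zone files. The following checks (standard BIND utilities shipped with the bind package; the output shown is the expected form, assuming the files created above) catch syntax errors early:

[root@oel64 named]# named-checkconf /etc/named.conf
[root@oel64 named]# named-checkzone radical.com /var/named/radical.zone
zone radical.com/IN: loaded serial 43
OK
[root@oel64 named]# named-checkzone 2.168.192.in-addr.arpa /var/named/radical.com.reverse
zone 2.168.192.in-addr.arpa/IN: loaded serial 2
OK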

Make sure that you enable the named service for auto-start by issuing the following command.

[root@oel64 named]# chkconfig named on

Start the named service by issuing the following command.

[root@oel64 named]# service named start


Starting named: [ OK ]

Disable the firewall by issuing the following command on all nodes (oel64, oel64a and oel64b) as root.

[root@oel64 named]# chkconfig iptables off

For production systems it is strongly recommended to adjust the iptables rules so that you can have access to
the DNS server listening on port 53.

Here for simplicity the firewall is disabled.

Modify the /etc/resolv.conf file to reflect the DNS IP address specified by the nameserver parameter and the
domain specified by the search parameter on all nodes (oel64 and oel64a).

[root@oel64]# vi /etc/resolv.conf
# Generated by NetworkManager
search radical.com
nameserver 192.168.2.20
nameserver 10.10.2.20
nameserver 10.10.10.20
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 10.10.5.20
options timeout:3
options attempts:2

[root@oel64 ~]# vi /etc/nsswitch.conf


..................
passwd: files
shadow: files
group: files

#hosts: db files nisplus nis dns


hosts: dns files
................

[root@oel64 ~]# service network restart


Shutting down interface eth0: [ OK ]
Shutting down interface eth1: [ OK ]
Shutting down interface eth2: [ OK ]
Shutting down interface eth3: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: [ OK ]
Bringing up interface eth1: [ OK ]
Bringing up interface eth2: [ OK ]
Bringing up interface eth3: [ OK ]
Test the accessibility and name resolution of all public and private node names using nslookup.

[root@oel64 ~]# nslookup oel64


Server: 192.168.2.20
Address: 192.168.2.20#53

Name: oel64.radical.com
Address: 192.168.2.20

[root@oel64 ~]# nslookup oel64a


Server: 192.168.2.20
Address: 192.168.2.20#53

Name: oel64a.radical.com
Address: 192.168.2.21

[root@oel64 ~]# nslookup oel64b


Server: 192.168.2.20
Address: 192.168.2.20#53

Name: oel64b.radical.com
Address: 192.168.2.22
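Grid Infrastructure also relies on reverse name resolution, so it is worth checking the PTR records as well. A sample reverse lookup (the answer assumes the zone files created earlier):

[root@oel64 ~]# nslookup 192.168.2.21

Server: 192.168.2.20
Address: 192.168.2.20#53

21.2.168.192.in-addr.arpa name = oel64a.radical.com.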

Set up DHCP

Create a file /etc/dhcp/dhcpd.conf to specify:

 Routers – set it to 192.168.2.20
 Subnet mask – set it to 255.255.255.0
 Domain name – grid.radical.com
 Domain name server – the GNS VIP, 192.168.2.52
 Time offset – EST
 Range – 192.168.2.100 to 192.168.2.150 will be assigned for GNS delegation.

[root@oel64]# vi /etc/dhcp/dhcpd.conf
# DHCP Server Configuration file.
# see /usr/share/doc/dhcp*/dhcpd.conf.sample
# see 'man 5 dhcpd.conf'
#
ddns-update-style interim;
ignore client-updates;
subnet 192.168.2.0 netmask 255.255.255.0 {
option routers 192.168.2.20;
option subnet-mask 255.255.255.0;
option domain-name "grid.radical.com";
option domain-name-servers 192.168.2.52;
option time-offset -18000; # Eastern Standard Time
range 192.168.2.100 192.168.2.150;
default-lease-time 86400;
}

Enable auto-start by issuing the following command.

[root@oel64 ~]# chkconfig dhcpd on

[root@oel64 ~]# chkconfig named on


Start the DHCPD service issuing the following command.

[root@oel64 ~]# service dhcpd start


Starting dhcpd: [ OK ]
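To confirm that the DHCP server is really serving the 192.168.2.0/24 range, you can check that dhcpd is bound to UDP port 67 and, later, watch the leases file: once GNS starts requesting addresses, entries from the 192.168.2.100-150 range should appear there. These are standard OS commands, shown here only as a quick sanity check:

[root@oel64 ~]# netstat -ulnp | grep dhcpd
[root@oel64 ~]# tail /var/lib/dhcpd/dhcpd.leases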

Create the disk partitions

[root@oel64 ~]# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to


switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): u


Changing display/entry units to sectors

Command (m for help): p

Disk /dev/sda: 107.4 GB, 107374182400 bytes


255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002ca67

Device Boot Start End Blocks Id System


/dev/sda1 * 2048 1640447 819200 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 1640448 63080447 30720000 83 Linux
/dev/sda3 63080448 79464447 8192000 82 Linux swap /
Solaris

Command (m for help): n


Command action
e extended
p primary partition (1-4)
e
Selected partition 4
First sector (63-209715199, default 63): 79464448---> (79464447+1)
Last sector, +sectors or +size{K,M,G} (79464448-209715199, default
209715199):
Using default value 209715199

Command (m for help): p

Disk /dev/sda: 107.4 GB, 107374182400 bytes


255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002ca67

Device Boot Start End Blocks Id System


/dev/sda1 * 2048 1640447 819200 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 1640448 63080447 30720000 83 Linux
/dev/sda3 63080448 79464447 8192000 82 Linux swap /
Solaris
/dev/sda4 79464448 209715199 65125376 5 Extended
Command (m for help): n
First sector (79464511-209715199, default 79464511): (enter)
Using default value 79464511
Last sector, +sectors or +size{K,M,G} (79464511-209715199, default
209715199): (enter)
Using default value 209715199

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or
resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

[root@oel65 ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes


255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002ca67

Device Boot Start End Blocks Id System


/dev/sda1 * 1 103 819200 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 103 3927 30720000 83 Linux
/dev/sda3 3927 4947 8192000 82 Linux swap /
Solaris
/dev/sda4 4947 13055 65125376 5 Extended
/dev/sda5 4947 13055 65125344+ 83 Linux

[root@oel65 ~]#

Reboot the system

You will see that /dev/sda4 is the extended partition and /dev/sda5 is the new partition of around 64 GB.
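If you prefer not to reboot, the kernel can usually be asked to re-read the partition table directly (partprobe is part of the parted package on OEL 6; the reboot above remains the safe fallback):

[root@oel64 ~]# partprobe /dev/sda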

Create physical volume

[root@oel64 ~]# pvcreate /dev/sda5


Physical volume "/dev/sda5" successfully created

Create volume group

[root@oel64 ~]# vgcreate rac_vg /dev/sda5


Volume group "rac_vg" successfully created

Create the logical volumes (LUNs)

[root@oel64 ~]# lvcreate -L10G -n asmlv1 rac_vg


Logical volume "asmlv1" created

[root@oel64 ~]# lvcreate -L10G -n asmlv2 rac_vg


Logical volume "asmlv2" created

[root@oel64 ~]# lvcreate -L10G -n asmlv3 rac_vg


Logical volume "asmlv3" created

[root@oel64 ~]# lvcreate -L10G -n asmlv4 rac_vg


Logical volume "asmlv4" created

[root@oel64 ~]# lvcreate -L10G -n asmlv5 rac_vg


Logical volume "asmlv5" created

[root@oel64 ~]# lvcreate -L10G -n asmlv6 rac_vg


Logical volume "asmlv6" created

[root@oel64 ~]# lvs


LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync
Convert
asmlv1 rac_vg -wi-a---- 10.00g
asmlv2 rac_vg -wi-a---- 10.00g
asmlv3 rac_vg -wi-a---- 10.00g
asmlv4 rac_vg -wi-a---- 10.00g
asmlv5 rac_vg -wi-a---- 10.00g
asmlv6 rac_vg -wi-a---- 10.00g

Add the following lines to the end of the file /etc/tgt/targets.conf (Shift+G jumps to the end of the file in vi).

[root@oel64 ~]# vi /etc/tgt/targets.conf


#Added By Paritosh for Oracle ASm Disks
<target iqn.oel64.radical.com:oracleasm>
backing-store /dev/rac_vg/asmlv1
backing-store /dev/rac_vg/asmlv2
backing-store /dev/rac_vg/asmlv3
backing-store /dev/rac_vg/asmlv4
backing-store /dev/rac_vg/asmlv5
backing-store /dev/rac_vg/asmlv6
write-cache off
</target>

[root@oel64 ]# chkconfig tgtd on

[root@oel64 ]# service tgtd start


Starting SCSI target daemon: [ OK ]

[root@oel64 Desktop]# tgt-admin --show


Target 1: iqn.12_flex.com.fundtech.oel64:oracleasm
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 10737 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/rac_vg/asmlv1
Backing store flags:
LUN: 2
Type: disk
SCSI ID: IET 00010002
SCSI SN: beaf12
Size: 10737 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/rac_vg/asmlv10
Backing store flags:
LUN: 3
Type: disk
SCSI ID: IET 00010003
SCSI SN: beaf13
Size: 10737 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/rac_vg/asmlv11
Backing store flags:
LUN: 4
Type: disk
SCSI ID: IET 00010004
SCSI SN: beaf14
Size: 10737 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/rac_vg/asmlv12
Backing store flags:
LUN: 5
Type: disk
SCSI ID: IET 00010005
SCSI SN: beaf15
Size: 10737 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/rac_vg/asmlv13
Backing store flags:
LUN: 6
Type: disk
SCSI ID: IET 00010006
SCSI SN: beaf16
Size: 10737 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/rac_vg/asmlv14
Backing store flags:
Account information:
ACL information:
ALL

From node oel64a


Set up the networking and hostname properly

[root@oel64 ~]# hostname


oel64.radical.com

Hostname change

[root@oel64 ~]# vi /etc/sysconfig/network


NETWORKING=yes
HOSTNAME=oel64a.radical.com

IP Address change

[root@oel64 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0


DEVICE=eth0
TYPE=Ethernet
UUID=eed08ddf-7709-446d-9e96-00a15c3f17b9
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
HWADDR=08:00:27:5c:79:8e
IPADDR=192.168.2.21
PREFIX=24
GATEWAY=192.168.2.20
DNS1=192.168.2.20
DOMAIN=radical.com
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
NETMASK=255.255.255.0
USERCTL=no
[root@oel64 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
UUID=338a499d-2433-4ecd-9ae2-1eaab7f3c632
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
HWADDR=08:00:27:ba:be:c9
IPADDR=10.10.2.21
PREFIX=24
GATEWAY=192.168.2.20
DNS1=192.168.2.20
DOMAIN=radical.com
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth1"
NETMASK=255.255.255.0
USERCTL=no

[root@oel64 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth2


DEVICE=eth2
TYPE=Ethernet
UUID=2a1c9012-6faa-4a18-8352-8d4bbc706fb8
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
HWADDR=08:00:27:d1:24:20
IPADDR=10.10.10.21
PREFIX=24
GATEWAY=192.168.2.20
DNS1=192.168.2.20
DOMAIN=radical.com
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth2"
NETMASK=255.255.255.0
USERCTL=no

[root@oel64 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth3


DEVICE=eth3
TYPE=Ethernet
UUID=f96d2bb2-5893-4578-ace9-479ade10a95a
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
HWADDR=08:00:27:60:3e:c1
IPADDR=10.10.5.21
PREFIX=24
GATEWAY=192.168.2.20
DNS1=192.168.2.20
DOMAIN=radical.com
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth3"
NETMASK=255.255.255.0
USERCTL=no

Reboot OEL64a.
Discover and log in to the iSCSI target.

[root@oel64a sf_Software]# iscsiadm --mode discovery --type sendtargets --portal 192.168.2.20


Starting iscsid: [ OK ]
192.168.2.20:3260,1 iqn.oel64.radical.com:oracleasm

[root@oel64a sf_Software]# chkconfig iscsid on

[root@oel64a sf_Software]# iscsiadm -m node -T iqn.oel64.radical.com:oracleasm -p 192.168.2.20 -l


Logging in to [iface: default, target: iqn.oel64.radical.com:oracleasm,
portal: 192.168.2.20,3260] (multiple)
Login to [iface: default, target: iqn.oel64.radical.com:oracleasm,
portal: 192.168.2.20,3260] successful.

Partition all the disks from oel64a. In the example below only one disk is shown.

[root@oel64a sf_Software]# fdisk /dev/sdb


Device contains neither a valid DOS partition table, nor Sun, SGI or OSF
disklabel
Building a new DOS disklabel with disk identifier 0x2ce4be9f.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by


w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to


switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): u


Changing display/entry units to sectors

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (32-20971519, default 32):
Using default value 32
Last sector, +sectors or +size{K,M,G} (32-20971519, default 20971519):
Using default value 20971519

Command (m for help): w


The partition table has been altered!

Partition the remaining disks (/dev/sdc, /dev/sdd, /dev/sde, /dev/sdf and /dev/sdg) in the same way as /dev/sdb above.

Install the ASM rpm on oel64a

[root@oel64a sf_Paritosh]# rpm -ivh oracleasm-support-2.1.8-


1.el6.x86_64.rpm
warning: oracleasm-support-2.1.8-1.el6.x86_64.rpm: Header V3 RSA/SHA256
Signature, key ID ec551f03: NOKEY
Preparing... ###########################################
[100%]
package oracleasm-support-2.1.8-1.el6.x86_64 is already installed

[root@oel64a sf_Paritosh]# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm


warning: oracleasmlib-2.0.4-1.el6.x86_64.rpm: Header V3 RSA/SHA256
Signature, key ID ec551f03: NOKEY
Preparing... ###########################################
[100%]
1:oracleasmlib ###########################################
[100%]

[root@oel64a ~]# service network restart


Shutting down interface eth0: [ OK ]
Shutting down interface eth1: [ OK ]
Shutting down interface eth2: [ OK ]
Shutting down interface eth3: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: [ OK ]
Bringing up interface eth1: [ OK ]
Bringing up interface eth2: [ OK ]
Bringing up interface eth3: [ OK ]

[root@oel64a ~]# iscsiadm --mode discovery --type sendtargets --portal


192.168.2.20
192.168.2.20:3260,1 iqn.oel64.radical.com:oracleasm

[root@oel64a ~]# chkconfig iscsid on

[root@oel64a ~]# iscsiadm -m node -T iqn.oel64.radical.com:oracleasm -p


192.168.2.20 -u
Logging out of session [sid: 1, target: iqn.oel64.radical.com:oracleasm,
portal: 192.168.2.20,3260]
Logout of [sid: 1, target: iqn.oel64.radical.com:oracleasm, portal:
192.168.2.20,3260] successful.

[root@oel64a ~]# iscsiadm -m node -T iqn.oel64.radical.com:oracleasm -p


192.168.2.20 -l
Logging in to [iface: default, target: iqn.oel64.radical.com:oracleasm,
portal: 192.168.2.20,3260] (multiple)
Login to [iface: default, target: iqn.oel64.radical.com:oracleasm, portal:
192.168.2.20,3260] successful.
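After a successful login the six LUNs are presented to oel64a as new SCSI disks (typically /dev/sdb through /dev/sdg, although the device names can vary). Two quick ways to confirm, using standard commands:

[root@oel64a ~]# iscsiadm -m session -P 3 | grep "Attached scsi disk"
[root@oel64a ~]# fdisk -l | grep "Disk /dev/sd"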

Create the groups and users (grid and oracle) for the Grid Infrastructure and database installation.

[root@oel64a ~]# userdel -f oracle


[root@oel64a ~]# groupdel dba
[root@oel64a ~]# groupdel oinstall
[root@oel64a ~]# groupadd -g 501 oinstall
[root@oel64a ~]# groupadd -g 502 dba
[root@oel64a ~]# groupadd -g 503 asmdba
[root@oel64a ~]# groupadd -g 504 asmadmin
[root@oel64a ~]# groupadd -g 505 asmoper
[root@oel64a ~]# useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
[root@oel64a ~]# useradd -u 502 -g oinstall -G dba,asmdba oracle
useradd: warning: the home directory already exists.
Not copying any file from skel directory into it.
Creating mailbox file: File exists

[root@oel64a ~]# passwd oracle


Changing password for user oracle.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.

[root@oel64a ~]# passwd grid


Changing password for user grid.
New password:
BAD PASSWORD: it is too short
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@oel64a ~]#

[root@oel64a ~]# /etc/init.d/oracleasm configure


Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid


Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

[root@oel64a ~]# oracleasm createdisk OCR_VOTE01 /dev/sdb1


Writing disk header: done
Instantiating disk: done

[root@oel64a ~]# oracleasm createdisk OCR_VOTE02 /dev/sdc1


Writing disk header: done
Instantiating disk: done

[root@oel64a ~]# oracleasm createdisk OCR_VOTE03 /dev/sdd1


Writing disk header: done
Instantiating disk: done

[root@oel64a ~]# oracleasm createdisk ORADATA01 /dev/sde1


Writing disk header: done
Instantiating disk: done

[root@oel64a ~]# oracleasm createdisk FRA01 /dev/sdf1


Writing disk header: done
Instantiating disk: done

[root@oel64a ~]# oracleasm createdisk ORAVOL01 /dev/sdg1


Writing disk header: done
Instantiating disk: done
[root@oel64a ~]#
[root@oel64a ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@oel64a ~]# oracleasm listdisks


FRA01
OCR_VOTE01
OCR_VOTE02
OCR_VOTE03
ORADATA01
ORAVOL01
[root@oel64a ~]#
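Each label can also be checked individually with oracleasm querydisk; this is only a sanity check and not part of the original steps:

[root@oel64a ~]# oracleasm querydisk OCR_VOTE01
Disk "OCR_VOTE01" is a valid ASM disk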

[root@oel64a ~]# mkdir -p /u01/app/grid/11.2.0.4


[root@oel64a ~]# mkdir -p /u01/app/oracle
[root@oel64a ~]# mkdir -p /u01/app/software/11.2.0.4
[root@oel64a ~]# mkdir -p /u01/app/oraInventory
[root@oel64a ~]# chown -R grid:oinstall /u01
[root@oel65a ~]# cd /u01/
[root@oel65a u01]# chmod 775 app/
[root@oel64a ~]# ls -ld /u01
drwxr-xr-x 3 grid oinstall 4096 Jan 3 09:04 /u01
[root@oel64a ~]#

Shut down OEL64a.

Clone OEL64a to OEL64b (as we did when cloning oel64 to oel64a).

Make the changes below on OEL64b after the clone.

[root@oel64a ~]# vi /etc/hosts


192.168.2.22 oel64b.radical.com oel64b
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6

[root@oel64a ~]# vi /etc/sysconfig/network


NETWORKING=yes
HOSTNAME=oel64b.radical.com

[root@oel64a ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0


DEVICE=eth0
TYPE=Ethernet
UUID=eed08ddf-7709-446d-9e96-00a15c3f17b9
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
HWADDR=08:00:27:5c:79:8e
IPADDR=192.168.2.22
PREFIX=24
GATEWAY=192.168.2.20
DNS1=192.168.2.20
DOMAIN=radical.com
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
NETMASK=255.255.255.0
USERCTL=no

[root@oel64a ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1


DEVICE=eth1
TYPE=Ethernet
UUID=338a499d-2433-4ecd-9ae2-1eaab7f3c632
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
HWADDR=08:00:27:ba:be:c9
IPADDR=10.10.2.22
PREFIX=24
GATEWAY=192.168.2.20
DNS1=192.168.2.20
DOMAIN=radical.com
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth1"
NETMASK=255.255.255.0
USERCTL=no

[root@oel64a ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2


DEVICE=eth2
TYPE=Ethernet
UUID=2a1c9012-6faa-4a18-8352-8d4bbc706fb8
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
HWADDR=08:00:27:d1:24:20
IPADDR=10.10.10.22
PREFIX=24
GATEWAY=192.168.2.20
DNS1=192.168.2.20
DOMAIN=radical.com
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth2"
NETMASK=255.255.255.0
USERCTL=no

[root@oel64a ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3


DEVICE=eth3
TYPE=Ethernet
UUID=f96d2bb2-5893-4578-ace9-479ade10a95a
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
HWADDR=08:00:27:60:3e:c1
IPADDR=10.10.5.22
PREFIX=24
GATEWAY=192.168.2.20
DNS1=192.168.2.20
DOMAIN=radical.com
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth3"
NETMASK=255.255.255.0
USERCTL=no

Reboot the OEL64b machine.

Check name resolution with nslookup on both machines.

[root@oel65a ~]# nslookup oel64


Server: 192.168.2.20
Address: 192.168.2.20#53

Name: oel65.radical.com
Address: 192.168.2.20

[root@oel65a ~]# nslookup oel64a


Server: 192.168.2.20
Address: 192.168.2.20#53

Name: oel65a.radical.com
Address: 192.168.2.21

[root@oel65a ~]# nslookup oel64b


Server: 192.168.2.20
Address: 192.168.2.20#53

Name: oel65b.radical.com
Address: 192.168.2.22

During the installation, to fix the prerequisite check errors, make the following changes on both RAC nodes (OEL64a and OEL64b).
[root@oel64a ~]# cd /etc/
[root@oel64a etc]# rm ntp.conf
rm: remove regular file `ntp.conf'? yes
[root@oel64a etc]#

[root@oel64a etc]# vi /etc/sysctl.conf


# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding


net.ipv4.ip_forward = 0

# Controls source route verification


net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
net.ipv4.conf.eth2.rp_filter = 0
net.ipv4.conf.eth3.rp_filter = 0

# Do not accept source routing


net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel


kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies


net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.


net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue


kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes


kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes

# Controls the maximum number of shared memory segments, in pages

# oracle-rdbms-server-11gR2-preinstall setting for fs.file-max is 6815744


fs.file-max = 6815744

# oracle-rdbms-server-11gR2-preinstall setting for kernel.sem is '250 32000 100 128'


kernel.sem = 250 32000 100 128

# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmmni is 4096


kernel.shmmni = 4096

# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmall is 1073741824 on x86_64


# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmall is 2097152 on i386
kernel.shmall = 1073741824

# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmmax is 4398046511104 on x86_64


# oracle-rdbms-server-11gR2-preinstall setting for kernel.shmmax is 4294967295 on i386
kernel.shmmax = 4398046511104

# oracle-rdbms-server-11gR2-preinstall setting for net.core.rmem_default is 262144


net.core.rmem_default = 262144

# oracle-rdbms-server-11gR2-preinstall setting for net.core.rmem_max is 4194304


net.core.rmem_max = 4194304

# oracle-rdbms-server-11gR2-preinstall setting for net.core.wmem_default is 262144


net.core.wmem_default = 262144

# oracle-rdbms-server-11gR2-preinstall setting for net.core.wmem_max is 1048576


net.core.wmem_max = 1048576

# oracle-rdbms-server-11gR2-preinstall setting for fs.aio-max-nr is 1048576


fs.aio-max-nr = 1048576

# oracle-rdbms-server-11gR2-preinstall setting for net.ipv4.ip_local_port_range is 9000 65500


net.ipv4.ip_local_port_range = 9000 65500
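Apply the kernel parameters to the running system without a reboot; sysctl -p re-reads /etc/sysctl.conf and echoes each value it sets:

[root@oel64a etc]# sysctl -p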

[root@oel64a ~]# vi /etc/resolv.conf


search radical.com
nameserver 192.168.2.20
nameserver 10.10.2.20
nameserver 10.10.10.20
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 10.10.5.20
options timeout:3
options attempts:2
Copy the grid software to OEL64a and install the Clusterware software.
[root@oel65a ~]# cp /media/sf_Paritosh/p13390677_112040_Linux-x86-64_*
/u01/app/software/11.2.0.4/
[root@oel65a ~]# chown -R grid:oinstall /u01
[root@oel65a ~]# cd /u01/
[root@oel65a u01]# chmod 775 app/

[root@oel65a ~]# su - grid


[grid@oel65a ~]$ cd /u01/app/software/11.2.0.4/
[grid@oel65a 11.2.0.4]$ ll
total 3664216
-rwxr-x--- 1 grid oinstall 1395582860 Jan 6 12:35 p13390677_112040_Linux-
x86-64_1of7.zip
-rwxr-x--- 1 grid oinstall 1151304589 Jan 6 12:39 p13390677_112040_Linux-
x86-64_2of7.zip
-rwxr-x--- 1 grid oinstall 1205251894 Jan 6 12:44 p13390677_112040_Linux-
x86-64_3of7.zip

[grid@oel65a 11.2.0.4]$ unzip p13390677_112040_Linux-x86-64_3of7.zip

[grid@oel65a 11.2.0.4]$ ls -lrt


total 3664220
drwxr-xr-x 7 grid oinstall 4096 Aug 27 2013 grid
-rwxr-x--- 1 grid oinstall 1395582860 Jan 6 12:35 p13390677_112040_Linux-
x86-64_1of7.zip
-rwxr-x--- 1 grid oinstall 1151304589 Jan 6 12:39 p13390677_112040_Linux-
x86-64_2of7.zip
-rwxr-x--- 1 grid oinstall 1205251894 Jan 6 12:44 p13390677_112040_Linux-
x86-64_3of7.zip

[grid@oel65a 11.2.0.4]$ cd grid/


[grid@oel65a grid]$ ls -lrt
total 68
drwxr-xr-x 2 grid oinstall 4096 Aug 26 2013 sshsetup
-rwxr-xr-x 1 grid oinstall 3268 Aug 26 2013 runInstaller
-rwxr-xr-x 1 grid oinstall 4878 Aug 26 2013 runcluvfy.sh
drwxr-xr-x 2 grid oinstall 4096 Aug 26 2013 rpm
drwxr-xr-x 2 grid oinstall 4096 Aug 26 2013 response
drwxr-xr-x 4 grid oinstall 4096 Aug 26 2013 install
drwxr-xr-x 14 grid oinstall 4096 Aug 26 2013 stage
-rw-r--r-- 1 grid oinstall 30016 Aug 27 2013 readme.html
-rw-r--r-- 1 grid oinstall 500 Aug 27 2013 welcome.html

If you are logged in as root, run the command below before launching runInstaller, so that the grid user can open the installer GUI.

[root@oel65a ~]# xhost +


access control disabled, clients can connect from any host

[grid@oel64a grid]$ ./runInstaller

Select skip software updates and press Next to continue.

Select Install and Configure GI and press Next to continue.


Select Advanced installation and press Next to continue.

Select languages and press Next to continue.


Enter the requested data and press Next to continue. The GNS sub-domain is gns.grid.radical.com. The GNS VIP
is 192.168.2.52. The SCAN port is 1521. The SCAN name is
oel64-cluster-scan.gns.grid.radical.com.

Click Add and add oel64b.radical.com, leaving the virtual hostname as AUTO.

Click SSH Connectivity, then click Setup and, once it completes, click Test.

Select the 192.168.2 network as Public and all the 10.10 networks as Private. Press Next to continue.

HAIP will be deployed and examined.

Select ASM and press Next to continue.


Select disk group DATA as specified and press Next to continue.

Enter password and press Next to continue.


De-select IPMI and press Next to continue.
Specify the groups and press Next to continue.

Specify locations and press Next to continue.


Examine the findings.

click on fix and check again

Run the script /tmp/CVU_11.2.0.4.0_grid/runfixup.sh as root on both machines (OEL64a and OEL64b) as
shown below, and then click OK.

[root@oel64a ~]# /tmp/CVU_11.2.0.4.0_grid/runfixup.sh


Response file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.4.0_grid/orarun.log
uid=501(grid) gid=501(oinstall)
groups=501(oinstall),503(asmdba),504(asmadmin),505(asmoper)
Installing Package /tmp/CVU_11.2.0.4.0_grid//cvuqdisk-1.0.9-1.rpm
Preparing... ###########################################
[100%]
1:cvuqdisk ###########################################
[100%]
[root@oel64a ~]#
[root@oel64b ~]# /tmp/CVU_11.2.0.4.0_grid/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.4.0_grid/orarun.log
uid=501(grid) gid=501(oinstall)
groups=501(oinstall),503(asmdba),504(asmadmin),505(asmoper)
Installing Package /tmp/CVU_11.2.0.4.0_grid//cvuqdisk-1.0.9-1.rpm
Preparing... ###########################################
[100%]
1:cvuqdisk ###########################################
[100%]
[root@oel64b ~]#

You will then see the prerequisite check screen again. If any remaining finding is marked Fixable = Yes, click Fix & Check Again once more; the installer will generate another fixup script that must be run on both machines.

The findings on the screen below are not fixable by the installer; ignore them by clicking the Ignore All button,
then click Next.

Click Yes to confirm.

Review the Summary settings and press Install to continue.

Click Install and wait until you are prompted to run the scripts as root.

Run both scripts on both machines: run orainstRoot.sh first on OEL64a and then on OEL64b, and do the same
for root.sh.

The output from the scripts is as follows.

[root@oel64a disks]# /u01/app/oraInventory/orainstRoot.sh


Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.


The execution of the script is complete.
[root@oel64a disks]#

[root@oel64b disks]# /u01/app/oraInventory/orainstRoot.sh


Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.


The execution of the script is complete.
[root@oel64b disks]#

[root@oel64a disks]# /u01/app/11.2.0/grid/root.sh


Performing root user operation for Oracle 11g

The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/11.2.0.4

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/u01/app/grid/11.2.0.4/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'oel64a'
CRS-2676: Start of 'ora.mdnsd' on 'oel64a' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'oel64a'
CRS-2676: Start of 'ora.gpnpd' on 'oel64a' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'oel64a'
CRS-2672: Attempting to start 'ora.gipcd' on 'oel64a'
CRS-2676: Start of 'ora.cssdmonitor' on 'oel64a' succeeded
CRS-2676: Start of 'ora.gipcd' on 'oel64a' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'oel64a'
CRS-2672: Attempting to start 'ora.diskmon' on 'oel64a'
CRS-2676: Start of 'ora.diskmon' on 'oel64a' succeeded
CRS-2676: Start of 'ora.cssd' on 'oel64a' succeeded

ASM created and started successfully.

Disk Group OCR_VOTE created successfully.

clscfg: -install mode specified


Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 611c7be3cd5d4f0ebf02eb4a9487f90d.
Successfully replaced voting disk group with +OCR_VOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 611c7be3cd5d4f0ebf02eb4a9487f90d (ORCL:OCR_VOTE01) [OCR_VOTE]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'oel64a'
CRS-2676: Start of 'ora.asm' on 'oel64a' succeeded
CRS-2672: Attempting to start 'ora.OCR_VOTE.dg' on 'oel64a'
CRS-2676: Start of 'ora.OCR_VOTE.dg' on 'oel64a' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@oel64a disks]#

[root@oel64b disks]# /u01/app/11.2.0/grid/root.sh


Performing root user operation for Oracle 11g

The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active
CSS
daemon on node oel64a, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join
the
cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@oel64b disks]#

Wait for the assistants to finish.

Verify that GI is installed.

Check OS processes:

[root@oel64a bin]# ps -ef | grep d.bin


root 2823 1 1 06:22 ? 00:00:06
/u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid 3281 1 0 06:23 ? 00:00:01
/u01/app/11.2.0/grid/bin/oraagent.bin
grid 3293 1 0 06:23 ? 00:00:00
/u01/app/11.2.0/grid/bin/mdnsd.bin
grid 3305 1 0 06:23 ? 00:00:00
/u01/app/11.2.0/grid/bin/gpnpd.bin
grid 3318 1 0 06:23 ? 00:00:02
/u01/app/11.2.0/grid/bin/gipcd.bin
root 3357 1 0 06:23 ? 00:00:00
/u01/app/11.2.0/grid/bin/cssdmonitor
root 3369 1 0 06:23 ? 00:00:00
/u01/app/11.2.0/grid/bin/cssdagent
grid 3381 1 1 06:23 ? 00:00:05
/u01/app/11.2.0/grid/bin/ocssd.bin
root 3384 1 1 06:23 ? 00:00:06
/u01/app/11.2.0/grid/bin/orarootagent.bin
root 3400 1 2 06:23 ? 00:00:10
/u01/app/11.2.0/grid/bin/osysmond.bin
root 3549 1 0 06:23 ? 00:00:01
/u01/app/11.2.0/grid/bin/octssd.bin
reboot
grid 3592 1 0 06:24 ? 00:00:01
/u01/app/11.2.0/grid/bin/evmd.bin
root 3861 1 0 06:24 ? 00:00:00
/u01/app/11.2.0/grid/bin/ologgerd -m
oel64b -r -d /u01/app/11.2.0/grid/crf/db/oel64a
root 3878 1 1 06:24 ? 00:00:04
/u01/app/11.2.0/grid/bin/crsd.bin reboot
grid 3956 3592 0 06:24 ? 00:00:00
/u01/app/11.2.0/grid/bin/evmlogger.bin -o
/u01/app/11.2.0/grid/evm/log/evmlogger.info -l
/u01/app/11.2.0/grid/evm/log/evmlogger.log
grid 3994 1 0 06:24 ? 00:00:01
/u01/app/11.2.0/grid/bin/oraagent.bin
root 3998 1 0 06:24 ? 00:00:03
/u01/app/11.2.0/grid/bin/orarootagent.bin
grid 4118 1 0 06:24 ? 00:00:00
/u01/app/11.2.0/grid/bin/tnslsnr LISTENER
-inherit
grid 4134 1 0 06:24 ? 00:00:00
/u01/app/11.2.0/grid/bin/tnslsnr
LISTENER_SCAN1 -inherit
oracle 4166 1 0 06:24 ? 00:00:01
/u01/app/11.2.0/grid/bin/oraagent.bin
root 5356 5235 0 06:30 pts/1 00:00:00 grep d.bin
[root@oel64a bin]#

Check GI resource status.

[root@oel64a bin]# ./crsctl status res -t


--------------------------------------------------------------------------------
NAME                      TARGET  STATE        SERVER        STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
                          ONLINE  ONLINE       oel64a
                          ONLINE  ONLINE       oel64b
ora.LISTENER.lsnr
                          ONLINE  ONLINE       oel64a
                          ONLINE  ONLINE       oel64b
ora.asm
                          ONLINE  ONLINE       oel64a        Started
                          ONLINE  ONLINE       oel64b        Started
ora.gsd
                          OFFLINE OFFLINE      oel64a
                          OFFLINE OFFLINE      oel64b
ora.net1.network
                          ONLINE  ONLINE       oel64a
                          ONLINE  ONLINE       oel64b
ora.ons
                          ONLINE  ONLINE       oel64a
                          ONLINE  ONLINE       oel64b
ora.registry.acfs
                          ONLINE  ONLINE       oel64a
                          ONLINE  ONLINE       oel64b
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1                   ONLINE  ONLINE       oel64b
ora.LISTENER_SCAN2.lsnr
      1                   ONLINE  ONLINE       oel64a
ora.LISTENER_SCAN3.lsnr
      1                   ONLINE  ONLINE       oel64a
ora.cvu
      1                   ONLINE  ONLINE       oel64a
ora.gns
      1                   ONLINE  ONLINE       oel64a
ora.gns.vip
      1                   ONLINE  ONLINE       oel64a
ora.oc4j
      1                   ONLINE  ONLINE       oel64a
ora.oel64a.vip
      1                   ONLINE  ONLINE       oel64a
ora.oel64b.vip
      1                   ONLINE  ONLINE       oel64b
ora.scan1.vip
      1                   ONLINE  ONLINE       oel64b
ora.scan2.vip
      1                   ONLINE  ONLINE       oel64a
ora.scan3.vip
      1                   ONLINE  ONLINE       oel64a
[root@oel64a bin]#
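A quick overall health check of the cluster stack on all nodes can also be run from the same directory; the expected shape of the output, assuming both nodes are up, is:

[root@oel64a bin]# ./crsctl check cluster -all
**************************************************************
oel64a:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oel64b:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************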

Check the interfaces.

[grid@oel64a grid]$ oifcfg getif -global


eth0 10.10.2.0 global cluster_interconnect
eth1 192.168.2.0 global public
eth2 10.10.10.0 global cluster_interconnect
eth3 10.10.5.0 global cluster_interconnect
[grid@oel64a grid]$

Check the interfaces from ASM instance.

SQL> select * from V$CLUSTER_INTERCONNECTS;

NAME IP_ADDRESS IS_ SOURCE


--------------- ---------------- --- -------------------------------
eth0:1 169.254.45.77 NO
eth3:1 169.254.106.22 NO
eth2:1 169.254.188.165 NO
eth0:2 169.254.242.179 NO

SQL> select * from V$configured_interconnects;

NAME IP_ADDRESS IS_ SOURCE


--------------- ---------------- --- -------------------------------
eth0:1 169.254.45.77 NO
eth3:1 169.254.106.22 NO
eth2:1 169.254.188.165 NO
eth0:2 169.254.242.179 NO
eth1 192.168.2.21 YES

SQL>

Check the GNS

[grid@oel64b ~]$ cluvfy comp gns -postcrsinst -verbose

Verifying GNS integrity

Checking GNS integrity...

Checking if the GNS subdomain name is valid...


The GNS subdomain name "gns.grid.radical.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.2.0" match with the GNS VIP "192.168.2.0"
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.2.52" resolves to a valid IP address
Checking the status of GNS VIP...
Checking if FDQN names for domain "gns.grid.radical.com" are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable


Checking status of GNS resource...
Node Running? Enabled?
------------ ------------------------ ------------------------
oel64b no yes
oel64a yes yes

GNS resource configuration check passed


Checking status of GNS VIP resource...
Node Running? Enabled?
------------ ------------------------ ------------------------
oel64b no yes
oel64a yes yes

GNS VIP resource configuration check passed.

GNS integrity check passed


Verification of GNS integrity was successful.
[grid@oel64b ~]$
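Since the grid sub-domain is delegated to GNS, the SCAN name should now resolve through it to addresses leased from the 192.168.2.100-150 DHCP range. A simple check (the returned addresses will vary within that range):

[grid@oel64a ~]$ nslookup oel64-cluster-scan.gns.grid.radical.com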
ASM Disk Group Creation Steps
After the ASM and Clusterware installation, proceed with the steps below to create the remaining ASM disk
groups and then install the database.

[grid@oel64a ~]$ . oraenv


ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/oracle

[grid@oel64a ~]$ asmca


[grid@oel64a ~]$ asmcmd

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 10239 10144 0 10144 0 N FRA/
MOUNTED EXTERN N 512 4096 1048576 30717 30317 0 30317 0 Y OCR_VOTE/
MOUNTED EXTERN N 512 4096 1048576 10239 10144 0 10144 0 N ORADATA/

ASMCMD>
RAC Database Software Installation Steps
[root@oel64a ~]# cd /home/
[root@oel64a home]# ll
total 12
drwxr-xr-x 6 grid oinstall 4096 Jan 28 23:19 grid
drwx------. 4 54321 54321 4096 Jan 23 22:58 oracle
drwx------. 4 xguest xguest 4096 Jan 23 22:58 xguest

[root@oel64a home]# chown -R oracle:oinstall oracle/

[root@oel64a home]# ll
total 12
drwxr-xr-x 6 grid oinstall 4096 Jan 28 23:19 grid
drwx------. 4 oracle oinstall 4096 Jan 23 22:58 oracle
drwx------. 4 xguest xguest 4096 Jan 23 22:58 xguest

[root@oel64a home]# cd

[root@oel64a ~]# su - oracle


[oracle@oel64a ~]$

[root@oel64a ~]# cp /media/sf_Paritosh/p13390677_112040_Linux-x86-64_1of7.zip


/u01/app/software/11.2.0.4/database/

[root@oel64a ~]# cp /media/sf_Paritosh/p13390677_112040_Linux-x86-64_2of7.zip


/u01/app/software/11.2.0.4/database/

[root@oel64a ~]# chown -R oracle:oinstall /u01/app/oracle/

[root@oel64a ~]# chown -R oracle:oinstall /u01/app/software/11.2.0.4/database/

[root@oel64b ~]# cd /home/

[root@oel64b home]# ll
total 12
drwx------ 6 grid oinstall 4096 Jan 29 00:18 grid
drwx------. 4 54321 54321 4096 Jan 23 22:58 oracle
drwx------. 4 xguest xguest 4096 Jan 23 22:58 xguest

[root@oel64b home]# chown -R oracle:oinstall oracle/

[root@oel64b home]# chown -R oracle:oinstall /u01/app/oracle/

[root@oel64b home]# cd

[root@oel64b ~]# su - oracle

[oracle@oel64a ~]$ cd /u01/app/software/11.2.0.4/database/

[oracle@oel64a database]$ ll
total 2487208
-rwxr-x--- 1 oracle oinstall 1395582860 Jan 29 19:12 p13390677_112040_Linux-x86-64_1of7.zip
-rwxr-x--- 1 oracle oinstall 1151304589 Jan 29 19:14 p13390677_112040_Linux-x86-64_2of7.zip

[oracle@oel64a database]$ unzip p13390677_112040_Linux-x86-64_1of7.zip

[oracle@oel64a database]$ unzip p13390677_112040_Linux-x86-64_2of7.zip

[oracle@oel64a database]$ cd database/

[oracle@oel64a database]$ ll
total 60
drwxr-xr-x 4 oracle oinstall 4096 Aug 27 2013 install
-rw-r--r-- 1 oracle oinstall 30016 Aug 27 2013 readme.html
drwxr-xr-x 2 oracle oinstall 4096 Aug 27 2013 response
drwxr-xr-x 2 oracle oinstall 4096 Aug 27 2013 rpm
-rwxr-xr-x 1 oracle oinstall 3267 Aug 27 2013 runInstaller
drwxr-xr-x 2 oracle oinstall 4096 Aug 27 2013 sshsetup
drwxr-xr-x 14 oracle oinstall 4096 Aug 27 2013 stage
-rw-r--r-- 1 oracle oinstall 500 Aug 27 2013 welcome.html

[oracle@oel64a database]$ ./runInstaller


On OEL64a Node
[root@oel64a ~]# /u01/app/oracle/product/11.2.0.4/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0.4/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@oel64a ~]#

On OEL64b Node
[root@oel64b ~]# /u01/app/oracle/product/11.2.0.4/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0.4/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@oel64b ~]#
RAC Database Creation Steps
[oracle@oel64a ~]$ vi .bash_profile and [oracle@oel64b ~]$ vi .bash_profile
# .bash_profile

# Get the aliases and functions


if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH

[oracle@oel64a ~]$ . .bash_profile


[oracle@oel64a ~]$ which dbca
/u01/app/oracle/product/11.2.0.4/db_1/bin/dbca
[oracle@oel64a ~]$

[oracle@oel64a ~]$ dbca


If DBCA does not display the ASM disk groups, perform the following checks.

1. Check if your ASM is OK

With grid user:


[grid@oel65a ]$ crsctl status res -t
…….
ora.asm
ONLINE ONLINE oel65a
ONLINE ONLINE oel65b
…….

2. Check the mount point that holds the Oracle binaries (the nosuid mount option must not be set):


$ cat /etc/fstab

3. Check the users and groups; they should look like this:


[grid@oel65a bin]$ id -a grid
uid=501(grid) gid=501(oinstall) groups=501(oinstall),503(asmdba),504(asmadmin),505(asmoper)

[oracle@oel65a ~]$ id -a oracle


uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(asmdba)
$ cat /etc/group
dba:x:300:applq,oracle
oinstall:x:310:oracle,grid
oper:x:320:oracle
asmadmin:x:330:grid
asmoper:x:331:
asmdba:x:332:oracle,grid

4. Check that the file permissions on the <Grid_home>/bin/oracle executable are set properly; they should
be 6751.

[grid@oel65a bin]$ ls -l $GRID_HOME/bin/oracle
-rwsr-s--x 1 grid oinstall 200678430 Jun 16 12:44 /app/11.2.0/grid/bin/oracle

To correct:

[grid@oel65a bin]$ chmod 6751 /u01/app/grid/11.2.0.4/bin/oracle

5. Check the ASM disk permissions; with separate grid and oracle users the owner and group should be grid:asmadmin.

[grid@oel65a bin]$ ls -ltr /dev/oracleasm/disks/


total 0
brw-rw---- 1 grid asmadmin 8, 17 Feb 9 00:11 OCR_VOTE01
brw-rw---- 1 grid asmadmin 8, 33 Feb 9 00:11 OCR_VOTE02
brw-rw---- 1 grid asmadmin 8, 49 Feb 9 00:11 OCR_VOTE03
brw-rw---- 1 grid asmadmin 8, 65 Feb 9 00:11 ORADATA01
brw-rw---- 1 grid asmadmin 8, 97 Feb 9 00:11 ORAVOL01
brw-rw---- 1 grid asmadmin 8, 81 Feb 9 00:11 FRA01
[oracle@oel64a ~]$ ps -ef |grep pmon
oracle 963 9068 0 21:07 pts/2 00:00:00 grep pmon
grid 31900 1 0 18:34 ? 00:00:01 asm_pmon_+ASM1
oracle 31952 1 0 20:59 ? 00:00:00 ora_pmon_radical1

[oracle@oel64a ~]$ vi /etc/oratab


#Backup file is /u01/app/oracle/product/11.2.0.4/db_1/srvm/admin/oratab.bak.oel64a line added
by Agent
#
# This file is used by ORACLE utilities. It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.

# A colon, ':', is used as the field terminator. A new line terminates


# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third filed indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM1:/u01/app/grid/11.2.0.4:N # line added by Agent
radical1:/u01/app/oracle/product/11.2.0.4/db_1:N # line added by Agent

[oracle@oel64a ~]$ . oraenv


ORACLE_SID = [radical1] ?
The Oracle base has been set to /u01/app/oracle

[oracle@oel64a ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 21:10:23 2016

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>

[oracle@oel64b ~]$ ps -ef |grep pmon


grid 2867 1 0 18:58 ? 00:00:01 asm_pmon_+ASM2
oracle 23728 1 0 21:00 ? 00:00:00 ora_pmon_radical2
oracle 25124 7331 0 21:11 pts/2 00:00:00 grep pmon

[oracle@oel64b ~]$ vi /etc/oratab


#Backup file is /u01/app/oracle/product/11.2.0.4/db_1/srvm/admin/oratab.bak.oel64b line added
by Agent
#
# This file is used by ORACLE utilities. It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.

# A colon, ':', is used as the field terminator. A new line terminates


# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third filed indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM2:/u01/app/grid/11.2.0.4:N # line added by Agent
radical2:/u01/app/oracle/product/11.2.0.4/db_1:N # line added by Agent

[oracle@oel64b ~]$ . oraenv


ORACLE_SID = [oracle] ? radical2
The Oracle base has been set to /u01/app/oracle
[oracle@oel64b ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 21:12:37 2016

Copyright (c) 1982, 2013, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>
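Finally, verify the database from the clusterware side and create a service for client connections. This is a sketch only: the database unique name is assumed to be radical (matching the radical1/radical2 instances above) and the service name app_srv is just an example.

[oracle@oel64a ~]$ srvctl status database -d radical
Instance radical1 is running on node oel64a
Instance radical2 is running on node oel64b

[oracle@oel64a ~]$ srvctl add service -d radical -s app_srv -r radical1,radical2
[oracle@oel64a ~]$ srvctl start service -d radical -s app_srv
[oracle@oel64a ~]$ srvctl status service -d radical -s app_srv
Service app_srv is running on instance(s) radical1,radical2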
