RAC Installation 19c

This document provides a comprehensive guide for installing and configuring Oracle Grid Infrastructure and Oracle Database on Linux. It includes detailed steps for OS installation, package preparation, network configuration, user and group setup, storage configuration, and final installation procedures. Additionally, it covers troubleshooting tips and post-installation checks to ensure a successful setup.


1. Install the OS

Note: when configuring the network, each node must have a minimum of two
network interfaces: one for the public network and one for the private
(interconnect) network.
2. Prepare the repository to install the PACKAGES.
2.1 Create the /etc/yum.repos.d/dvd.repo file as below:

[dvd]
name=oracle linux 7
#baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/developer_nodejs8/$basearch/
baseurl=file:///mnt
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

2.2 Mount the ISO file on /mnt:

# mount -o loop /root/Downloads/OracleLinux-R7-U5-Server-x86_64-dvd.iso /mnt


# yum repolist

Output: the dvd repository should now be listed with a non-zero package count.

2.3 Install the below packages:

yum install bc -y
yum install binutils -y
yum install compat-libcap1 -y
yum install compat-libstdc++-33 -y
yum install elfutils-libelf -y
yum install elfutils-libelf-devel -y
yum install fontconfig-devel -y
yum install glibc -y
yum install glibc-devel -y
yum install ksh -y
yum install libaio -y
yum install libaio-devel -y
yum install libXrender -y
yum install libXrender-devel -y
yum install libX11 -y
yum install libXau -y
yum install libXi -y
yum install libXtst -y
yum install libgcc -y
yum install libstdc++ -y
yum install libstdc++-devel -y
yum install libxcb -y
yum install make -y
yum install policycoreutils -y
yum install policycoreutils-python -y
yum install smartmontools -y
yum install sysstat -y
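The separate yum calls above can be combined into a single transaction. A minimal sketch using the same package names; the script only prints the combined command so it can be reviewed before running it as root on each node:

```shell
# Package list taken from the steps above; one yum transaction instead of 27.
pkgs="bc binutils compat-libcap1 compat-libstdc++-33 elfutils-libelf \
elfutils-libelf-devel fontconfig-devel glibc glibc-devel ksh libaio \
libaio-devel libXrender libXrender-devel libX11 libXau libXi libXtst \
libgcc libstdc++ libstdc++-devel libxcb make policycoreutils \
policycoreutils-python smartmontools sysstat"

# Print the command for review; on the nodes, run it as root.
echo "yum install -y $pkgs"
```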

3. Configuring Operating Systems for Oracle Grid Infrastructure on Linux.


Install the oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm

# rpm -ivh oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm

4. Configuring Networks for Oracle Grid Infrastructure and Oracle RAC

Edit the /etc/hosts file as below on both nodes:


Sample:

10.48.8.63 rac-node1.dbe.com.et rac-node1


10.48.13.20 rac-node1-priv.dbe.com.et rac-node1-priv
10.48.8.19 t24test4.dbe.com.et t24test4
10.48.8.64 rac-node2.dbe.com.et rac-node2
10.48.8.67 rac-node1-vip.dbe.com.et rac-node1-vip
10.48.8.68 rac-node2-vip.dbe.com.et rac-node2-vip
Node1 :
Node 2:
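As a quick sanity check, every cluster name should be resolvable through the hosts file on both nodes. A small sketch, run here against a temporary copy of the sample entries above; on a real node, point HOSTS at /etc/hosts and extend the name list with your own entries:

```shell
# Check that each required cluster hostname appears in the hosts file.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
10.48.8.63  rac-node1.dbe.com.et      rac-node1
10.48.13.20 rac-node1-priv.dbe.com.et rac-node1-priv
10.48.8.64  rac-node2.dbe.com.et      rac-node2
10.48.8.67  rac-node1-vip.dbe.com.et  rac-node1-vip
10.48.8.68  rac-node2-vip.dbe.com.et  rac-node2-vip
EOF

missing=0
for h in rac-node1 rac-node1-priv rac-node2 rac-node1-vip rac-node2-vip; do
  grep -qw "$h" "$HOSTS" || { echo "MISSING: $h"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "hosts file OK"   # prints: hosts file OK
rm -f "$HOSTS"
```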

5. Configuring Users, Groups and Environments for Oracle Grid Infrastructure and Oracle
Database
5.1 Set a password for the oracle user on both nodes.

5.2 Create the oraInventory group oinstall with the group ID number 54321 if it
does not exist.

# /usr/sbin/groupadd -g 54321 oinstall

5.3 Verify that the Oracle installation owners you intend to use have the
Oracle Inventory group as their primary group.
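On a real node this verification is just `id -gn oracle` and `id -gn grid`, which should both print `oinstall`. A hedged sketch of the check, fed here with sample values since those users may not exist on the machine running it:

```shell
# check_primary USER GROUP: GROUP is what `id -gn USER` returned on the node.
check_primary() {
  if [ "$2" = "oinstall" ]; then
    echo "$1: primary group OK"
  else
    echo "$1: primary group is '$2', expected oinstall"
    return 1
  fi
}

# On a node:  check_primary oracle "$(id -gn oracle)"
check_primary oracle oinstall
check_primary grid oinstall
```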

5.4 Creating Operating System Privileges Groups

/usr/sbin/groupadd -g 54322 dba
/usr/sbin/groupadd -g 54323 oper
/usr/sbin/groupadd -g 54324 backupdba
/usr/sbin/groupadd -g 54325 dgdba
/usr/sbin/groupadd -g 54326 kmdba
/usr/sbin/groupadd -g 54327 asmdba
/usr/sbin/groupadd -g 54328 asmoper
/usr/sbin/groupadd -g 54329 asmadmin
/usr/sbin/groupadd -g 54330 racdba

5.5 Creating Operating System Oracle Installation User Accounts

# /usr/sbin/useradd -u 54321 -g oinstall -G dba,asmdba,asmadmin,backupdba,dgdba,kmdba,racdba oracle

# /usr/sbin/useradd -u 54331 -g oinstall -G dba,asmdba,asmadmin,backupdba,dgdba,kmdba,racdba grid

Set a password for the grid user:

# passwd grid

# /usr/sbin/usermod -g oinstall -G dba,asmdba,asmoper,asmadmin,backupdba,dgdba,kmdba,racdba,oper oracle

# id oracle
Output:

5.6 Creating Minimal Groups, Users, and Paths

----Create OS groups

/usr/sbin/groupadd oinstall

/usr/sbin/groupadd dba

/usr/sbin/groupadd asmadmin

/usr/sbin/groupadd asmdba

/usr/sbin/groupadd asmoper
/usr/sbin/groupadd osdba

----CREATE USER

/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -m grid

/usr/sbin/useradd -g oinstall -G dba,asmdba -d /home/oracle -m oracle

usermod -a -G asmdba oracle

usermod -a -G dba oracle

usermod -a -G dba grid

Note: use -a with usermod -G; without it the supplementary group list is
replaced rather than extended.

---To create the Oracle Inventory directory, enter the following commands as the root user:

mkdir -p /u01/app/oraInventory

chown -R oracle:oinstall /u01/app/oraInventory

chmod -R 775 /u01/app/oraInventory

---To create the Grid Infrastructure home directory, enter the following commands as the root user:

mkdir -p /u01/app/19.3.0/grid

chown -R grid:oinstall /u01/app/19.3.0/grid

chmod -R 775 /u01/app/19.3.0/grid

-----To create the Oracle Base directory, enter the following commands as the root user:

mkdir -p /u01/app/oracle

chown -R oracle:oinstall /u01/app/oracle

chmod -R 775 /u01/app/oracle

---- To create the Oracle RDBMS Home directory, enter the following commands as the root user:
mkdir -p /u01/app/oracle/product/19.3.0.0/dbhome_1

chown -R oracle:oinstall /u01/app/oracle/product/19.3.0.0/dbhome_1

chmod -R 775 /u01/app/oracle/product/19.3.0.0/dbhome_1

5.7 Checking Resource Limits for Oracle Software Installation Users

As the root user, add the lines below to /etc/security/limits.conf:

# nano /etc/security/limits.conf
oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock 3500000
oracle hard memlock 3500000
oracle soft stack 10240
grid soft nofile 131072
grid hard nofile 131072
grid soft nproc 131072
grid hard nproc 131072
grid soft core unlimited
grid hard core unlimited
grid soft memlock 3500000
grid hard memlock 3500000
grid soft stack 10240
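Oracle's documented 19c minimums for the installation owners are 65536 open file descriptors (hard nofile) and 16384 processes (hard nproc), so the 131072 values above are comfortably high. A small validation sketch, run here against a temporary copy of the fragment; on a node, point LIMITS at /etc/security/limits.conf:

```shell
# Verify the hard nofile/nproc limits for the oracle user meet Oracle minimums.
LIMITS=$(mktemp)
cat > "$LIMITS" <<'EOF'
oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
EOF

nofile=$(awk '$1=="oracle" && $2=="hard" && $3=="nofile" {print $4}' "$LIMITS")
nproc=$(awk '$1=="oracle" && $2=="hard" && $3=="nproc" {print $4}' "$LIMITS")
[ "$nofile" -ge 65536 ] && [ "$nproc" -ge 16384 ] && echo "limits OK"
rm -f "$LIMITS"
```

The same check can be run interactively as the oracle user with `ulimit -Hn` and `ulimit -Hu` after a fresh login.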
6. Disable the firewall and change SELinux to permissive

# systemctl stop firewalld
# systemctl disable firewalld

# nano /etc/selinux/config >>> change SELINUX=enforcing to SELINUX=permissive
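The config-file edit can be scripted with sed; note the file change only takes effect at the next boot, so also run `setenforce 0` to switch the running system to permissive immediately. A sketch, demonstrated on a temporary copy of the file rather than /etc/selinux/config itself:

```shell
# Flip SELINUX=enforcing to permissive (on a node, target /etc/selinux/config
# and follow up with: setenforce 0).
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$CFG"
mode=$(grep '^SELINUX=' "$CFG")
echo "$mode"    # prints: SELINUX=permissive
rm -f "$CFG"
```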

7. Deactivating the NTP Service

First install the ntp package if it is not already present: yum install ntp.x86_64


1. Run the following commands as the root user to stop the ntpd service:
# systemctl stop ntpd
# systemctl disable ntpd
2. Rename the NTP-related configuration files in the /etc directory.
mv /etc/ntp /etc/ntp_old
mv /etc/ntp.conf /etc/ntp.conf_old
3. Run the following commands as the root user to stop the chronyd service:
# systemctl stop chronyd
# systemctl disable chronyd
4. Remove the chronyd service configuration file.
mv /etc/chrony.conf /root/Downloads/

After installation, confirm the cluster time synchronization status:


$ crsctl check ctss

8. Configuring storage
8.1 Configuring an iSCSI Target
…..
8.2 iSCSI Initiator

Install the iscsi-initiator-utils package:

# yum install iscsi-initiator-utils -y

Set the initiator name (IQN) on both nodes in the file /etc/iscsi/initiatorname.iscsi:

# nano /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2003-01.org.linux-iscsi.t24test2.x8664:sn.10e15c5d93d2

discover the iSCSI targets at the specified IP address

# iscsiadm -m discovery -t sendtargets -p 10.48.8.16

Display information about the targets that are now stored in the discovery database
# iscsiadm -m discoverydb -t st -p 10.48.8.16

Establish a session and log in to a specific target:

# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.t24test2.x8664:sn.fbbce732e3b8 -p 10.48.8.16:3260 -l

To delete the LUNs on the iSCSI target (from the targetcli shell):

/backstores/block> delete LUN_0
Deleted storage object LUN_0.
/backstores/block> delete LUN_1
Deleted storage object LUN_1.
/backstores/block> ls

Verify that the session is active and display the available LUNs

# iscsiadm -m session -P 3

Install the below packages:


yum install kmod-oracleasm
yum install oracleasmlib-2.0.12-1.el7.x86_64.rpm
yum install oracleasm-support-2.1.11-2.el7.x86_64.rpm

Create disk partitions as below:
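The partitioning screenshot is missing here. As a hypothetical sketch, one whole-disk partition per LUN is typical for ASM; the device names below are placeholders, so confirm yours with `lsblk` or `iscsiadm -m session -P 3` first. The script only prints the commands for review:

```shell
# Placeholder devices; on the nodes these are the iSCSI LUNs from step 8.2.
cmds=$(for dev in /dev/sdb /dev/sdc; do
  echo "parted -s $dev mklabel msdos mkpart primary 1MiB 100%"
done)
echo "$cmds"
```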


8.3 Configure Oracle ASM From Both Nodes

oracleasm configure -i
8.4 Load oracleasm module
# oracleasm init

8.5 Delete any existing disks and create them
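The screenshot for this step is missing. A hypothetical sketch of what it typically does, using the disk names DATAFILE and RECO from the listing in section 9.19 and placeholder partitions; the script only prints the commands, which would be run as root on node 1 only:

```shell
# Drop any stale ASM labels, then label the new partitions (placeholders).
cmds="oracleasm deletedisk DATAFILE
oracleasm deletedisk RECO
oracleasm createdisk DATAFILE /dev/sdb1
oracleasm createdisk RECO /dev/sdc1"
echo "$cmds"
```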


8.6 Scan the disks:
# oracleasm scandisks

To configure iscsi:

Working With iSCSI Devices (oracle.com)

How to configure iSCSI target and initiator on CentOS/RHEL 7/8 Linux | GoLinuxCloud
To Delete ISCSI connection :

How to delete iscsi target from initiator ( CentOS / RHEL 7 ) Linux | GoLinuxCloud
9. Install Oracle Grid Infrastructure 19c
9.1 $ cd /u01/app/19.3.0/grid/
9.2 $ unzip LINUX.X64_193000_grid_home.zip
9.3 $ ./gridSetup.sh

9.4
9.5
9.6
9.7

9.8
9.9 The permissions for the disks should be as below:

For the below error stating that the grid_base/diag directory should be owned
by grid:

chmod -R 775 <ORACLE_BASE_LOCATION>/diag

chown -R grid:oinstall /u01/app/19.3.0/grid/log
9.10

9.11

9.12
9.13

9.14
rm -rf /var/tmp/.oracle/ora_gipc*
rm -rf /var/tmp/.oracle/mdnsd
rm -rf /var/tmp/.oracle/mdnsd.pid
rm -rf /var/tmp/.oracle/npohasd
rm -rf /var/tmp/.oracle/npohasd2

For the error that ONS ports 6100 and 6200 are already in use:

ps -ef | grep ons

Kill the process whose command line contains "ons -d".

9.15
If the installation errors out midway (for example at around 32%), run the
command below from $GRID_HOME/crs/install:

1. /u01/app/grid/19.3.0/gridhome_1/crs/install/rootcrs.sh -deconfig -force
2. Then delete all files from /var/tmp/.oracle:
3. cd /var/tmp/.oracle
4. rm -rf *
5. Then rerun root.sh
9.16 Post installation check the below:

[grid@rac-node1 ~]$ crsctl check cluster -all

Output:

[grid@rac-node2 ~]$ crsctl status resource -t


Output:
9.17 Adding FAST_RECOVERY_AREA
[grid@rac-node1 ~]$ asmca
Expand Disk Groups; the DATA disk group that was added during the
installation in step 9.9 should be visible.
9.18 Install Oracle Database Real Application Clusters

[oracle@rac-node1-racdb dbhome_1]$ unzip LINUX.X64_193000_db_home.zip
[oracle@rac-node1-racdb dbhome_1]$ ./runInstaller
Launching Oracle Database Setup Wizard...
9.19 Fix ownership and permissions on the oracle binary.

Change the permissions, group, and ownership as below.

Check your ASM disks' ownership and use the same group in the command below.

chown oracle:asmadmin $ORACLE_HOME/bin/oracle

chmod 6751 $ORACLE_HOME/bin/oracle
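Mode 6751 sets the setuid and setgid bits on an rwsr-s--x file, so the oracle binary runs with the oracle user's and asmadmin group's privileges and can reach the ASM devices owned by grid:asmadmin. A small demonstration of what the mode means, on a throwaway file rather than the real binary:

```shell
# chmod 6751 = setuid(4) + setgid(2) on top of base permissions 751.
f=$(mktemp)
chmod 6751 "$f"
perms=$(stat -c '%a' "$f")   # GNU stat prints the octal mode incl. special bits
echo "$perms"                # prints: 6751
rm -f "$f"
```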


[root@rac-node1 u01]# cd /dev/oracleasm/;ll

total 0

drwxrwxrwx. 1 root root 0 Jan 8 17:39 disks

drwxrwx---. 1 grid asmadmin 0 Jan 8 17:39 iid

[root@rac-node1 oracleasm]# cd disks/;ll

total 0

brwxrwxrwx. 1 grid asmadmin 8, 17 Jan 12 11:17 DATAFILE

brwxrwxrwx. 1 grid asmadmin 8, 1 Jan 12 11:17 RECO

jdbc:oracle:thin:@(DESCRIPTION_LIST=(LOAD_BALANCE=off)(FAILOVER=on)(DESCRIPTION=(ENABLE=broken)(ADDRESS=(PROTOCOL=TCP)(HOST=oelc8scan.plb.internal)(PORT=1521))(CONNECT_DATA=(UR=A)(SERVICE_NAME=orcl112))))

From Oracle:

(DESCRIPTION=
  (CONNECT_TIMEOUT=90)(RETRY_COUNT=20)(RETRY_DELAY=3)
  (TRANSPORT_CONNECT_TIMEOUT=3)
  (ADDRESS=(PROTOCOL=TCP)(HOST=scan)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=service_name)))

Sample:

(DESCRIPTION=
  (CONNECT_TIMEOUT=90)(RETRY_COUNT=20)(RETRY_DELAY=3)
  (TRANSPORT_CONNECT_TIMEOUT=3)
  (ADDRESS=(PROTOCOL=TCP)(HOST=sales1-scan.mycluster.example.com)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=oltp.example.com)))

To create a SCAN on a second public network in the cluster, you must first
enable the use of multiple subnets in the cluster. This is generally a
post-installation task and includes the following steps:
1.Set additional subnet for public network
2.Assign node VIPs to newly created subnet
3.Create a node Listener for newly created subnet
4.Create additional SCAN

Step1:ADD 2nd network

[grid@<hostname>1]$ oifcfg iflist
eth0 133.xxx.67.0
eth1 192.xxx.122.0
eth1 169.254.0.0
eth2 10.xxx.1.0
[grid@<hostname>1]$ oifcfg setif -global eth2/10.48.8.0:public
[grid@<hostname>1]$ oifcfg getif
eth0 133.xxx.67.0 global public
eth1 192.xxx.122.0 global cluster_interconnect,asm
eth2 10.xxx.1.0 global public
[root@<hostname>1]# srvctl add network -netnum 2 -subnet 10.xxx.1.0/255.xxx.252.0/eth2
[root@<hostname>1]# srvctl config network -netnum 2
Network 2 exists
Subnet IPv4: 10.xxx.1.0/255.xxx.252.0/eth0, static
Subnet IPv6:

Step2:ADD node VIPs

[root@<hostname>1]# srvctl add vip -node rac-node1 -address 10.48.8.67/255.255.255.0 -netnum 2
[root@<hostname>1]# srvctl add vip -node rac-node2 -address 10.48.8.68/255.255.255.0 -netnum 2

Step3:ADD node listener on network number 2


[grid@<hostname>1]$ srvctl add listener -listener listnet2 -netnum 2 -endpoints "TCP:1523"

Step4:ADD SCAN on network number 2

[root@<hostname>1]# srvctl add scan -scanname scantest -netnum 2

Step5:START node VIPs

[root@<hostname>1]# srvctl start vip -vip <hostname>1v2


[root@<hostname>1]# srvctl start vip -vip <hostname>2v2

Step6:START ListNet2 node listener on network number 2

[grid@<hostname>1]$ srvctl start listener -listener listnet2


[grid@<hostname>1]$ srvctl status listener -listener listnet2
Listener LISTNET2 is enabled
Listener LISTNET2 is running on node(s): <hostname>1,<hostname>2.

Step7:START SCAN on network number 2

[root@<hostname>1]# srvctl start scan -netnum 2

Step8:Add and start a new SCAN listener on network 2 (as grid user)

[grid@<hostname>1]$ srvctl add scan_listener -netnum 2 -listener scanlsnr_2 -endpoints "TCP:1523"
[grid@<hostname>1]$ srvctl start scan_listener -netnum 2

Step9:Check configuration and status for SCAN

[root@<hostname>1]#srvctl config scan -netnum 2


SCAN name: scantest, Network: 2
Subnet IPv4: 10.xxx.1.0/255.xxx.252.0/eth2
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.xxx.1.250

[root@<hostname>1]# srvctl status scan -netnum 2


SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node <hostname>2

[oracle@rac-node1-t24racdb1 ~]$ lsnrctl status listener


LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 11-JAN-2024 15:23:59

Copyright (c) 1991, 2019, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))

STATUS of the LISTENER

------------------------

Alias LISTENER

Version TNSLSNR for Linux: Version 19.0.0.0.0 - Production

Start Date 11-JAN-2024 11:01:01

Uptime 0 days 4 hr. 22 min. 57 sec

Trace Level off

Security ON: Local OS Authentication

SNMP OFF

Listener Parameter File /u01/app/grid/19.3.0/gridhome_1/network/admin/listener.ora

Listener Log File /u01/app/grid_base/diag/tnslsnr/rac-node1/listener/alert/log.xml

Listening Endpoints Summary...

(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.48.8.63)(PORT=1521)))

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.48.8.67)(PORT=1521)))

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac-
node1)(PORT=5502))(Presentation=HTTP)(Session=RAW))

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=rac-
node1)(PORT=5501))(Security=(my_wallet_directory=/u01/app/oracle/admin/t24racdb/xdb_wallet))(Pr
esentation=HTTP)(Session=RAW))

Services Summary...

Service "+APX" has 1 instance(s).

Instance "+APX1", status READY, has 1 handler(s) for this service...

Service "+ASM" has 1 instance(s).


Instance "+ASM1", status READY, has 1 handler(s) for this service...

Service "+ASM_DATA" has 1 instance(s).

Instance "+ASM1", status READY, has 1 handler(s) for this service...

Service "+ASM_FRA" has 1 instance(s).

Instance "+ASM1", status READY, has 1 handler(s) for this service...

Service "racdbXDB.dbe.com" has 1 instance(s).

Instance "t24racdb1", status READY, has 1 handler(s) for this service...

Service "t24racdb.dbe.com" has 1 instance(s).

Instance "t24racdb1", status READY, has 1 handler(s) for this service...

The command completed successfully
