What Is RAC
RAC stands for Real Application Clusters. It allows multiple nodes in a clustered
system to mount and open a single database that resides on shared disk
storage. Should a single node fail, the database service will still be
available on the remaining nodes.
A non-RAC database is available on only a single system. If that system fails, the
database service is down (a single point of failure).
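As a quick illustration (a sketch; the ORCL1/ORCL2 instance names are only examples), you can list the open instances across the nodes from any one of them:
SELECT inst_id, instance_name, host_name, status
FROM gv$instance;
-- On a two-node RAC this returns one row per node, for example ORCL1 and ORCL2.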
The following list describes the functions of some of the major Oracle
Clusterware components. These components run as processes on UNIX and Linux
operating systems and as services on Windows.
•
Cluster Synchronization Services (CSS)—Manages the cluster configuration by
controlling which nodes are members of the cluster and by notifying members
when a node joins or leaves the cluster. If you are using third-party clusterware,
then the CSS process interfaces with your clusterware to manage node
membership information.
•
Cluster Ready Services (CRS)—The primary program for managing high
availability operations within a cluster. Anything that the CRS process manages is
known as a cluster resource, which could be a database, an instance, a service, a
Listener, a virtual IP (VIP) address, an application process, and so on. The CRS
process manages cluster resources based on the resource's configuration
information that is stored in the OCR. This includes start, stop, monitor, and
failover operations. The CRS process generates events when a resource status
changes. When you have installed Oracle RAC, CRS monitors the Oracle instance,
Listener, and so on, and automatically restarts these components when a failure
occurs. By default, the CRS process makes five attempts to restart a resource and
then does not make further restart attempts if the resource does not restart.
•
Event Management (EVM)—A background process that publishes events that
CRS creates.
•
Oracle Notification Service (ONS)—A publish and subscribe service for
communicating Fast Application Notification (FAN) events.
•
RACG—Extends clusterware to support Oracle-specific requirements and
complex resources. Runs server callout scripts when FAN events occur.
•
Process Monitor Daemon (OPROCD)—This process is locked in memory to
monitor the cluster and provide I/O fencing. OPROCD performs its check, stops
running, and if the wake-up is beyond the expected time, OPROCD resets
the processor and reboots the node. An OPROCD failure results in Oracle
Clusterware restarting the node. OPROCD uses the hangcheck timer on Linux
platforms.
•
evmd—Event manager daemon. This process also starts the racgevt process to
manage FAN server callouts.
•
ocssd—Manages cluster node membership and runs as the oracle user; failure
of this process results in cluster restart.
•
oprocd—Process monitor for the cluster. Note that this process only appears on
platforms that do not use vendor clusterware with Oracle Clusterware.
The Oracle Real Application Clusters Software Components
Each instance has a buffer cache in its System Global Area (SGA). Using Cache
Fusion, Oracle RAC environments logically combine each instance's buffer cache
to enable the instances to process data as if the data resided on a logically
combined, single cache.
(The SGA size requirements for Oracle RAC are greater than the SGA
requirements for single-instance Oracle databases due to Cache Fusion.)
To ensure that each Oracle RAC database instance obtains the block that it
needs to satisfy a query or transaction, Oracle RAC instances use two
processes, the Global Cache Service (GCS) and the Global Enqueue Service
(GES). The GCS and GES maintain records of the statuses of each data file and
each cached block using a Global Resource Directory (GRD). The GRD contents
are distributed across all of the active instances, which effectively increases the
size of the System Global Area for an Oracle RAC instance.
After one instance caches data, any other instance within the same cluster
database can acquire a block image from another instance in the same database
faster than by reading the block from disk. Therefore, Cache Fusion moves
current blocks between instances rather than re-reading the blocks from disk.
When a consistent block is needed or a changed block is required on another
instance, Cache Fusion transfers the block image directly between the affected
instances. Oracle RAC uses the private interconnect for inter-instance
communication and block transfers. (The Global Enqueue Service Monitor, LMON,
is the background process that monitors the entire cluster to manage these
global enqueues and resources.)
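To get a feel for how much Cache Fusion traffic is flowing over the interconnect, you can query the global cache statistics (a sketch; statistic names are as in 10g/11g and may differ in other releases):
SELECT inst_id, name, value
FROM gv$sysstat
WHERE name IN ('gc cr blocks received', 'gc current blocks received')
ORDER BY inst_id, name;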
Can one see how connections are distributed across the nodes?
Select from gv$session.
Some examples:
SELECT inst_id, count(*) "DB Sessions" FROM gv$session
WHERE type = 'USER' GROUP BY inst_id;
With login time (hour):
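A possible form of that query (a sketch; it assumes the logon_time column of gv$session and truncates it to the hour):
SELECT inst_id, TRUNC(logon_time, 'HH24') login_hour, COUNT(*) "DB Sessions"
FROM gv$session
WHERE type = 'USER'
GROUP BY inst_id, TRUNC(logon_time, 'HH24')
ORDER BY inst_id, login_hour;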
•
Enterprise Manager—Enterprise Manager provides the Database Control and
Grid Control GUI interfaces for managing both single-instance and Oracle RAC
environments.
•
Cluster Verification Utility (CVU)—CVU is a command-line tool that you can use
to verify a range of cluster and Oracle RAC-specific components such as shared
storage devices, networking configurations, system requirements, and Oracle
Clusterware, as well as operating system groups and users. You can use CVU
for pre-installation checks as well as for post-installation checks of your cluster
environment. CVU is especially useful during pre-installation and during
installation of Oracle Clusterware and Oracle RAC components. The OUI runs
CVU after the Oracle Clusterware and Oracle Database installations to verify your
environment.
•
Server Control (SRVCTL)—SRVCTL is a command-line interface that you can
use to manage an Oracle RAC database from a single point. You can use
SRVCTL to start and stop the database and instances and to delete or move
instances and services. You can also use SRVCTL to manage configuration
information.
•
Cluster Ready Services Control (CRSCTL)—CRSCTL is a command-line tool
that you can use to manage Oracle Clusterware. You can use CRSCTL to start and
stop Oracle Clusterware. CRSCTL has many options, such as enabling online
debugging.
•
Oracle Interface Configuration Tool (OIFCFG)—OIFCFG is a command-line
tool for both single-instance Oracle databases and Oracle RAC environments
that you can use to allocate and de-allocate network interfaces to components.
You can also use OIFCFG to direct components to use specific network
interfaces and to retrieve component configuration information.
•
OCR Configuration Tool (OCRCONFIG)—OCRCONFIG is a command-line tool
for OCR administration. You can also use the OCRCHECK and OCRDUMP
utilities to troubleshoot configuration problems that affect the OCR.
ASM Features
Automatic Storage Management is a file system and volume manager built
into the database kernel that allows the practical management of
thousands of disk drives with 24x7 availability. It provides management across
multiple nodes of a cluster for Oracle Real Application Clusters (RAC) support as
well as single SMP machines. It automatically does load balancing in parallel
across all available disk drives to prevent hot spots and maximize performance,
even with rapidly changing data usage patterns. It prevents fragmentation so
that there is never a need to relocate data to reclaim space. Data is well
balanced and striped over all disks. It does automatic online disk space
reorganization for the incremental addition or removal of storage capacity. It can
maintain redundant copies of data to provide fault tolerance, or it can be built on
top of vendor supplied reliable storage mechanisms. Data management is done
by selecting the desired reliability and performance characteristics for classes of
data rather than with human interaction on a per file basis. ASM solves many of
the practical management problems of large Oracle databases. As the size of a
database server increases towards thousands of disk drives, or tens of nodes in
a cluster, the traditional techniques for management stop working. They do not
scale efficiently, they become too prone to human error, and they require
independent effort on every node of a cluster.
Other tasks, such as manual load balancing, become so complex as to prohibit
their application. These problems must be solved for the reliable management of
databases in the tens or hundreds of terabytes. Oracle is uniquely positioned to
solve these problems as a result of our existing Real Application Cluster
technology. Oracle’s control of the solution ensures it is reliable and integrated
with Oracle products. This document is intended to give some insight into the
internal workings of ASM.
One portion of the ASM code allows for the start-up of a special
instance called an ASM Instance. ASM Instances do not mount databases,
but instead manage the metadata needed to make ASM files available to
ordinary database instances.
Both ASM Instances and database instances have access to a common
set of disks. ASM Instances manage the metadata describing the layout of the
ASM files. Database instances access the contents of ASM files directly,
communicating with an ASM instance only to get information about the
layout of these files. This requires that a second portion of the ASM
code run in the database instance, in the I/O path.
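For illustration (a sketch; the +DATA disk group and the file name shown are hypothetical), a database instance that stores its files in ASM simply sees ASM file names, beginning with '+' and the disk group name, in the usual views:
SELECT name FROM v$datafile;
-- e.g. +DATA/orcl/datafile/system.256.123456789 (illustrative output only)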
Note:
1. One and only one ASM instance is required per node. So you might have
multiple databases on a node, but they will all share that node's single ASM instance.
4. In external redundancy disk groups, ASM does not mirror files. For normal
redundancy disk groups, ASM 2-way mirrors files by default, but it can also leave
files unprotected (unprotected files are not recommended). For high redundancy
disk groups, ASM 3-way mirrors files.
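To check the redundancy type and free space of existing disk groups (a sketch; run it in the ASM instance, column names as in 10g/11g):
SELECT name, type, total_mb, free_mb
FROM v$asm_diskgroup;
-- TYPE shows EXTERN, NORMAL, or HIGH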
DBCA also configures your instance parameter file and password file.
Steps in DBCA:
1. Choose ASM disk.
3. While creating an ASM disk group you have a choice of mirroring for the files
in the disk group; the options are HIGH, NORMAL, or EXTERNAL:
High     -> ASM 3-way mirrors files
Normal   -> ASM 2-way mirrors files
External -> no ASM mirroring; use this when the disks are already mirrored in
hardware, for example by EMC or another third-party storage array
4. DBCA will create a separate instance called "+ASM", which stays in NOMOUNT
stage (it never mounts a database) to control your ASM storage.
Set the owner, group, and permissions on the device file for each raw device;
from DBCA you can then see the devices "raw1" and "raw2".
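If you prefer the command line to DBCA, a disk group can also be created from the ASM instance with SQL (a sketch; the disk group name DATA and the /dev/raw/raw1 and /dev/raw/raw2 paths are only examples and must match your own raw devices):
CREATE DISKGROUP data NORMAL REDUNDANCY
DISK '/dev/raw/raw1', '/dev/raw/raw2';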
After the ASM disk group is created, when you create a database on it you can
see the file details using Enterprise Manager (EM), or you can use the V$ or DBA
views to check the datafile names. Oracle recommends not specifying a datafile
name when adding a datafile or creating a new tablespace, because ASM will then
automatically generate an Oracle Managed File (OMF) name (see the sketch below).
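A minimal sketch of that approach, assuming db_create_file_dest points to a disk group named +DATA and using a hypothetical tablespace name:
ALTER SYSTEM SET db_create_file_dest = '+DATA';
CREATE TABLESPACE app_data DATAFILE SIZE 100M;  -- no file name given; ASM generates an OMF name
SELECT tablespace_name, file_name
FROM dba_data_files
WHERE tablespace_name = 'APP_DATA';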
Note: If a DBA specifies the datafile name, by mistake or intentionally, the file
is not an OMF file, and simply dropping the tablespace will not remove the
datafile from the ASM disk group.
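In that case one option (a sketch; the tablespace name is hypothetical) is to tell Oracle explicitly to delete the files when the tablespace is dropped:
DROP TABLESPACE app_data INCLUDING CONTENTS AND DATAFILES;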