Oracle Real Application Clusters (RAC) Multi-node Cluster is a database architecture that runs a single Oracle database across multiple servers (nodes), enhancing availability, scalability, and performance by distributing the workload and ensuring continuous operation even if one node fails.
Key Features
- High Availability (HA): Continuous operation by eliminating single points of failure; nodes continue to operate if one fails.
- Scalability: Add more nodes to handle increased users and transactions.
- Load Balancing: Even distribution of workloads across nodes, optimizing performance.
- Fault Tolerance: Redundancy ensures database availability despite node failures.
- Improved Performance: Multiple nodes handle more transactions and queries.
Components of Oracle RAC Multi-Node Cluster
- Cluster Nodes: Servers running an instance of the Oracle Database.
- Oracle Clusterware: Software that provides cluster services and management utilities such as crsctl and srvctl (example commands for checking these components follow this list).
- Shared Storage: Nodes share access to storage via SAN, NAS, or Oracle ASM.
- Interconnect: Private network for internode communication.
- Oracle ASM: Simplifies storage management, providing striping and mirroring.
- Global Resource Directory (GRD): Tracks data blocks and resources across instances.
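As a quick way to confirm these components on a running cluster, the standard Oracle Clusterware and ASM utilities can be used. The commands below are a minimal sketch; the database name orcl is a placeholder, and the commands assume the Oracle environment (ORACLE_HOME, PATH) is already set on the node.

# Check Clusterware health on all nodes
crsctl check cluster -all
# List the nodes that are part of the cluster
olsnodes -n
# Check the status of all instances of a database (replace orcl with your DB unique name)
srvctl status database -d orcl
# List ASM disk groups and their space usage
asmcmd lsdg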
Oracle Data Guard in RAC
Oracle Data Guard enhances RAC’s high availability, data protection, and disaster recovery by maintaining synchronized standby databases.
Key Features
- Disaster Recovery: Standby databases in different locations for site-level recovery.
- Data Protection: Continuous application of redo logs ensures data consistency.
- High Availability: Handles node-level and site-level failures for robust availability.
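As a rough way to confirm the Data Guard role and protection mode of each database, the v$database view can be queried on both the primary and the standby; if the Data Guard broker is configured, dgmgrl can show the overall configuration. This is a minimal sketch and assumes OS authentication on the database node.

# Check the role and protection mode of the current database (run on primary and standby)
sqlplus -s / as sysdba <<'EOF'
SELECT NAME, DATABASE_ROLE, PROTECTION_MODE, OPEN_MODE FROM v$database;
EOF
# If the Data Guard broker is configured, show the overall configuration
dgmgrl / "show configuration"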
Key Benefits
- Unified Device Discovery: Provides a comprehensive view of all elements in an Oracle RAC Database with Data Guard Multi-Node Cluster, including their relationships.
- Proactive Device Monitoring: Collects metric values over time and sends alerts to the appropriate team when thresholds are breached or unexpected behavior occurs, helping to minimize downtime.
- Job Scheduling Metrics: Offers detailed metrics on job scheduling times and statuses.
- Concern Alerts: Generates alerts for each metric to notify administrators of any resource issues promptly.
Supported Target Versions
The application is validated on Oracle Database 19c Enterprise Edition Release 19.0.0.0.0.
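To confirm the target database version before onboarding, a quick query such as the following can be used (a minimal sketch; run on a database node with sqlplus in the PATH).

# Show the Oracle Database release banner
sqlplus -s / as sysdba <<'EOF'
SELECT BANNER FROM v$version;
EOF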
Hierarchy of Oracle RAC resources
For Oracle RAC, the hierarchy is as follows:
- Oracle RAC
- Oracle Clusterware
- Oracle Node
- Oracle DB Instance
- Oracle Disk Group
- Oracle Disk
For Oracle Standalone, the hierarchy is as follows:
- Oracle Node
- Oracle DB Instance
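The cluster-level resources in these hierarchies map to Oracle dynamic views; the queries below are a minimal sketch for verifying the nodes, instances, and ASM disk groups/disks that will be discovered (the column selections are illustrative, and the sketch assumes OS authentication on a RAC node).

# List all DB instances and the nodes hosting them, then the ASM disk groups and disks
sqlplus -s / as sysdba <<'EOF'
SELECT INST_ID, INSTANCE_NAME, HOST_NAME FROM gv$instance;
SELECT NAME, STATE, TOTAL_MB, FREE_MB FROM v$asm_diskgroup;
SELECT GROUP_NUMBER, NAME, PATH FROM v$asm_disk;
EOF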
Oracle authorization permissions:
For monitoring some metrics, we use JDBC. For JDBC connections, we support database authentication.
Monitoring and discovery also use CLI commands such as crsctl, srvctl, and olsnodes. We do not use oraenv to set the Oracle environment; instead, the Oracle environment variables are configured in .bashrc.
The Oracle environment configuration in the .bashrc file looks similar to the example below.
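This is a minimal sketch of such a configuration; the actual ORACLE_BASE, ORACLE_HOME, grid home, and ORACLE_SID values are placeholders and depend on your installation and node.

# Oracle environment variables configured in .bashrc (values are examples only)
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export GRID_HOME=/u01/app/19.0.0/grid
export ORACLE_SID=orcl1
export PATH=$ORACLE_HOME/bin:$GRID_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH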
Please verify the following points on the gateway:
- ping <scan name> - use the SCAN hostname or SCAN IP address, based on what is provided in the configuration. (If you are using the SCAN hostname, ensure that the hostname resolves by checking that proper DNS is configured on the gateway.)
- telnet <scan name> 1521
- Connect to gcli using the "gcli" command.
- Execute the following command:
db oracledb <scan_name> <username> <password> <db_port> <db_name>:servicename 15000 10000 insecure Yes "SELECT INST_ID, INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME FROM gv$instance"
Note: While establishing a connection on the SCAN hostname / IP address, it is internally redirected to the local listeners; ensure that the end device (all RAC nodes) accepts inbound connections on all of these IP addresses.
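To verify which addresses the SCAN and local listeners are serving (and therefore which IP addresses must accept inbound connections), commands along these lines can be run on the RAC nodes; this is a sketch, and the output format varies by version.

# Show the SCAN VIPs and SCAN listener configuration
srvctl config scan
srvctl config scan_listener
# Show the local listener endpoints on the current node
lsnrctl status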
Privileges - The provided database user should have the SELECT ANY TABLE privilege.
Roles - The provided database user should have the CONNECT and SELECT_CATALOG_ROLE roles.
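A minimal sketch of the corresponding grants is shown below; MONUSER is a placeholder for the monitoring account and should be replaced with the actual username.

# Run as a DBA user; MONUSER is a placeholder for the monitoring account
sqlplus -s / as sysdba <<'EOF'
GRANT CONNECT TO MONUSER;
GRANT SELECT_CATALOG_ROLE TO MONUSER;
GRANT SELECT ANY TABLE TO MONUSER;
EOF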