Add a New Node to an Existing Oracle RAC 10g Cluster on Linux
Overview:
This article is a comprehensive guide to adding a new node to an existing Oracle RAC 10g Release 2 cluster.
In most businesses, a primary business requirement for an Oracle Real Application Clusters (RAC) configuration is scalability of the database tier across the entire system—so that when the number of users increases, additional instances can be added to the cluster to distribute the load.
Please keep in mind that this article should not be considered a substitute for the official guide from Oracle, available at http://www.oracle.com.
Existing Environment:
Software used:
Operating System | RHEL 5 (32-bit) on all nodes
Shared Storage   | Openfiler
Oracle Database  | 10.2.0.1.0
Cluster Manager  | Oracle Clusterware 11.1.0.6.0
File System      | OCFS and ASM
Names and Values used:
Database Name         | RACDB
Database Service Name | RACDB_SRVC
Number of Nodes       | Two Nodes - RAC1, RAC2
Number of Instances   | Two Instances - RACDB1, RACDB2
Note: In this article, the new node to be added is named "RAC3" and the new instance to be added is named "RACDB3".
So after addition of the new node, the details will be as follows:
Database Name         | RACDB
Database Service Name | RACDB_SRVC
Number of Nodes       | Three Nodes - RAC1, RAC2, RAC3
Number of Instances   | Three Instances - RACDB1, RACDB2, RACDB3
Dependencies and Prerequisites for Node Addition in the Cluster:
→ Install the Linux Operating System - The new node (RAC3) should have the same version of the operating system as the existing nodes, including all patches required by Oracle.
→ Install Required Linux Packages for Oracle RAC Database - After installing Linux on RAC3, verify and install all packages required by Oracle RAC Database.
→ Configure Public and Private Network - Configure the network on RAC3 for access to the public network as well as the private interconnect.
Configure the /etc/hosts file on all nodes in the RAC cluster. Note that the /etc/hosts settings should be the same on all nodes. Add the "Public IP", "Private IP", and "Virtual IP" of the new node (RAC3) to the /etc/hosts file of the other nodes (RAC1, RAC2), and likewise add the network information of the existing nodes to the /etc/hosts file of RAC3.
Verify that the new node (RAC3) has access to the public and private networks of all current nodes (using the ping utility).
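As an illustration, the /etc/hosts file might look like the following once RAC3 is included. All IP addresses and the `-priv`/`-vip` naming convention below are examples only; substitute the addresses from your own network plan.

```
# Sample /etc/hosts (identical on rac1, rac2, and rac3).
# Addresses below are illustrative -- use your own network plan.
127.0.0.1        localhost.localdomain   localhost

# Public network (eth0)
192.168.2.101    rac1.localdomain        rac1
192.168.2.102    rac2.localdomain        rac2
192.168.2.103    rac3.localdomain        rac3

# Private interconnect (eth1)
192.168.1.101    rac1-priv.localdomain   rac1-priv
192.168.1.102    rac2-priv.localdomain   rac2-priv
192.168.1.103    rac3-priv.localdomain   rac3-priv

# Virtual IPs (managed by Oracle Clusterware)
192.168.2.111    rac1-vip.localdomain    rac1-vip
192.168.2.112    rac2-vip.localdomain    rac2-vip
192.168.2.113    rac3-vip.localdomain    rac3-vip
```

A quick check from RAC3 such as `ping -c 2 rac1` and `ping -c 2 rac1-priv` (and the reverse from RAC1 and RAC2) confirms basic connectivity.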
→ Turn Off the Firewall - Turn off the firewall on the new node (RAC3). Similarly, disable SELinux.
→ Update the Kernel Parameters and Shell Limits - As per Oracle's recommendations, update the kernel parameters and shell limits.
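As a sketch, the settings commonly documented for 10g Release 2 on Linux look like the following; treat the exact values as starting points and confirm them against the installation guide for your platform.

```
# /etc/sysctl.conf additions on rac3 (commonly documented 10gR2 values)
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144

# /etc/security/limits.conf additions for the "oracle" user
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
```

Run `sysctl -p` as root to load the new kernel parameters without a reboot.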
→ Create the Administrative User - On all existing nodes the administrative owner is "oracle", so the next step is to create an administrative user account on node RAC3. While creating this user account, it is important that the UID and the GID of user oracle are identical to those on the other RAC nodes.
→ Create the Oracle Binaries Directory - Create an Oracle binaries directory structure similar to that of the other nodes; it will be used to store the Oracle Database software. Give ownership of the directory to the "oracle" user.
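The user and directory creation on RAC3 can be sketched as follows. The numeric group/user IDs and directory paths below are examples: the IDs must match the output of `id oracle` on RAC1/RAC2, and the paths must match the existing Oracle homes.

```shell
# On rac3, as root. The numeric IDs are examples --
# they MUST match the values reported by "id oracle" on rac1/rac2.
groupadd -g 500 oinstall
groupadd -g 501 dba
useradd -u 500 -g oinstall -G dba oracle
passwd oracle

# Oracle base/home directory structure, mirroring the existing nodes
mkdir -p /u01/app/oracle
mkdir -p /u01/app/crs
chown -R oracle:oinstall /u01/app
chmod -R 775 /u01/app
```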
→ Synchronize Time on the New Node (RAC3) - Synchronize the time on RAC3 to keep it in sync with the other nodes.
→ Configure Shared-Storage on RAC3 -
Configure the iSCSI Initiator.
Install and Configure Oracle Cluster File System (OCFS).
Install and Configure Automatic Storage Management (ASMLib).
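A sketch of the iSCSI initiator configuration on RAC3, assuming the RHEL 5 open-iscsi stack and an Openfiler target at 192.168.1.195 (an example address):

```shell
# On rac3, as root (open-iscsi on RHEL 5; the target IP is an example)
yum install iscsi-initiator-utils
service iscsid start
chkconfig iscsid on

# Discover and log in to the Openfiler targets
iscsiadm -m discovery -t sendtargets -p 192.168.1.195
service iscsi restart

# Verify that the new SCSI devices are visible
fdisk -l
```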
→ Establish User Equivalence with SSH - When adding nodes to the cluster, Oracle copies files from the node where the installation was originally performed to the new node (RAC3). The copy is performed either over the ssh protocol where available or by using remote copy (rcp). For the copy operation to succeed, the "oracle" user on the existing RAC node must be able to log in to the new RAC node without being prompted for a password or passphrase.
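A minimal sketch of setting up user equivalence, assuming RSA keys and the default authorized_keys location; all commands run as the "oracle" user:

```shell
# On rac3, as oracle: generate a key pair (accept defaults, empty passphrase)
ssh-keygen -t rsa

# From rac1, as oracle: append rac3's public key to the shared
# authorized_keys file and distribute it back to rac3
ssh rac3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac3:~/.ssh/authorized_keys
ssh rac3 chmod 600 ~/.ssh/authorized_keys

# Test from each node -- these must return the date without a password prompt
ssh rac3 date
ssh rac3-priv date
```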
Note: We will perform the node addition activity from a session on the RAC1 node.
Install cvuqdisk RPM Package:
RAC3 (as root user):
[root@rac3 ]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[root@rac3 ]# rpm -ivh cvuqdisk-1.0.1-1.rpm
Cluster Verification: In Oracle Database 10g Release 2, Oracle introduced a new utility called
the Cluster Verification Utility (CVU) as part of the clusterware software.
Executing the utility with the appropriate parameters determines the
status of the cluster. At this stage, before extending Oracle Clusterware
to the new node, we should perform two verifications:
(1) Verify that the hardware and operating system configuration is complete:
cluvfy stage -post hwos -n rac1,rac3
(2) Perform appropriate checks on all nodes in the node list before setting up Oracle Clusterware.
cluvfy stage -pre crsinst -n rac1,rac3
Extend Oracle Clusterware to the New Node (RAC3):
Oracle Clusterware is already installed on the
cluster; the task here is to add the new node to the clustered
configuration. This task is performed by executing the Oracle-provided
utility called "addnode", located in the clusterware_home/oui/bin
directory. Oracle Clusterware maintains two files (the Oracle Cluster
Registry (OCR) and the voting disk used by Cluster Synchronization
Services (CSS)) that contain information concerning the cluster and the
applications managed by Oracle Clusterware. These files need to be
updated with the information concerning the new node.
I will perform the addnode activity from the first node (RAC1).
RAC1:
[root@rac1 ~]# xhost +
Go to the directory clusterware_home/oui/bin:
[oracle@rac1 bin]$ ./addNode.sh
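Running ./addNode.sh with no arguments launches the OUI graphical wizard, where you supply the new node's public, private, and virtual hostnames. The same operation can be scripted in silent mode; the hostnames below (rac3, rac3-priv, rac3-vip) are the example names used in this article.

```shell
# From rac1, as oracle, in clusterware_home/oui/bin.
# Silent-mode equivalent of the OUI "Add Node" wizard (hostnames are examples).
./addNode.sh -silent \
  "CLUSTER_NEW_NODES={rac3}" \
  "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac3-priv}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"
```

At the end of either mode, OUI prompts you to run the root scripts (rootaddnode.sh and root.sh) as root on the indicated nodes.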
Verify Clusterware Installation:
Verify Cluster Services:
[oracle@rac3 ~]$ /u01/app/crs/bin/crs_stat -t
Check Cluster Nodes:
[oracle@rac1 ~]$ /u01/app/crs/bin/olsnodes -n
Check CRS Status:
[oracle@rac3 ~]$ /u01/app/crs/bin/crsctl check crs
Check Oracle Clusterware files:
[oracle@rac3 ~]$ ls -l /etc/init.d/init.*
Extend Oracle Database Software to the New Node (RAC3):
Note: Extend the Oracle Database software to new node from RAC1.
The next step is to install the Oracle
database software on the new node (RAC3). Oracle provides an
executable called "addNode.sh", located in the $ORACLE_HOME/oui/bin
directory.
RAC1:
[root@rac1 ~]# xhost +
Go to the directory oracle_home/oui/bin:
[oracle@rac1 bin]$ ./addNode.sh
Set Oracle User Environment:
RAC3:
[oracle@rac3 ~]$ vi ~/.bash_profile
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=/u01/app/crs
export ORACLE_SID=racdb3
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
umask 022
For all the screenshots, please visit: http://www.sensehaze.com/mydata/resources_section/rac/addnode/index.php
Run netca to Add a Listener to the New Node (RAC3):
Once the RDBMS software is installed, it is
good practice to run netca before moving to the next step. Netca will
configure all required network files and parameters, such as the
listener, SQL*Net, and tnsnames.ora files.
RAC1:
[oracle@rac1 bin]$ netca
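After netca completes, the tnsnames.ora on each node should contain an entry resolving to the new instance. A sketch of what that entry might look like, assuming the rac3-vip virtual hostname and the default port 1521:

```
RACDB3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RACDB)
      (INSTANCE_NAME = RACDB3)
    )
  )
```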
Add Instance to New Node:
DBCA has all the required options to add additional instances to the cluster.
Requirements:
- Make a full cold backup of the database before commencing the node addition process.
- Oracle Clusterware should be running on all nodes.
RAC1:
[oracle@rac1 ~]$ dbca
At this stage, the following is true:
-- The clusterware has been installed on node RAC3 and is now part of the cluster.
-- The Oracle software has been installed on node RAC3.
-- The ASM3 and the new Oracle instance RACDB3 have been created and configured on RAC3.
Verify that the node addition is successful:
Verify the Instances:
Verify that all instances in the cluster are started, using the GV$INSTANCE view from any of the participating instances.
SQL> select instance_name from gv$instance;
Check Cluster Services:
[oracle@rac1 ~]$ crs_stat -t
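The same checks can be run through srvctl, assuming the database is registered under the name RACDB and the ASM instance on the new node is ASM3 as created above:

```shell
# From any node, as oracle
srvctl status database -d RACDB            # should report all three instances
srvctl status instance -d RACDB -i RACDB3
srvctl status nodeapps -n rac3             # VIP, GSD, ONS, listener on rac3
srvctl status asm -n rac3
```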