Monday 20 February 2012

Add a New Node to an Existing Oracle RAC 10g Cluster on Linux

Add a New Node to an Existing Oracle RAC 10g Cluster on Linux::


Overview:
This article is a comprehensive guide to adding a new node to an existing Oracle RAC 10g Release 2 cluster.
In most businesses, a primary business requirement for an Oracle Real Application Clusters (RAC) configuration is scalability of the database tier across the entire system—so that when the number of users increases, additional instances can be added to the cluster to distribute the load.
Please keep in mind that this article should not be considered a substitute for the official guide from Oracle (http://www.oracle.com). The below mentioned link can be used to access the guide:






Existing Environment:
Software used:
 Operating System: RHEL 5 (32-bit) on all nodes
 Shared Storage: Openfiler
 Oracle Database: 10.2.0.1.0
 Cluster Manager: Oracle Clusterware 11.1.0.6.0
 File System: OCFS and ASM
Names and values used:
 Database Name: RACDB
 Database Service Name: RACDB_SRVC
 Number of Nodes: Two nodes - RAC1, RAC2
 Number of Instances: Two instances - RACDB1, RACDB2
Note: In this article, the new node to be added is named "RAC3" and the new instance to be added is named "RACDB3".
So after the addition of the new node, the details will be as follows:
 Database Name: RACDB
 Database Service Name: RACDB_SRVC
 Number of Nodes: Three nodes - RAC1, RAC2, RAC3
 Number of Instances: Three instances - RACDB1, RACDB2, RACDB3







Dependencies and Prerequisites for Node Addition in the Cluster:
→ Install the Linux Operating System - The new node (RAC3) should have the same version of the operating system as the existing nodes, including all patches required for Oracle.
→ Install Required Linux Packages for Oracle RAC Database - After installing Linux on RAC3, verify and install all packages required by Oracle RAC Database.
→ Configure Public and Private Network - Configure the network on RAC3 for access to the public network as well as the private interconnect.
We need to configure the /etc/hosts file on all nodes in the RAC cluster. Note that the /etc/hosts settings should be the same on all nodes. So we will add the "Public IP", "Private IP", and "Virtual IP" of the new node (RAC3) to the /etc/hosts file of the other nodes (RAC1, RAC2), and similarly the network information of the other nodes should be added to the /etc/hosts file of RAC3.
Verify that the new node (RAC3) has access to the public and private network of all current nodes (using the ping utility).
→ Turn off Firewall - Turn off the firewall on the new node (RAC3). Similarly, disable SELinux as well.
→ Update the Kernel Parameters and Shell Limits - As per the recommendations from Oracle, update the kernel parameters and shell limits.
→ Create the Administrative User - On all existing nodes the administrative owner is "oracle", so the next step is to create an administrative user account on node RAC3. While creating this user account, it is important that the UID and GID of the oracle user are identical to those on the other RAC nodes.
→ Create the Oracle Binaries Directory - Create an Oracle binaries directory structure similar to that on the other nodes, which will be used to store the Oracle Database software, and give ownership of the directory to the "oracle" user.
→ Synchronize Time on the New Node (RAC3) - Synchronize time on the new node (RAC3) to keep it in sync with the other nodes.
→ Configure Shared-Storage on RAC3 -
Configure the iSCSI Initiator.
Install and Configure Oracle Cluster File System (OCFS).
Install and Configure Automatic Storage Management (ASMLib).
→ Establish User Equivalence with SSH - When adding nodes to the cluster, Oracle copies files from the node where the installation was originally performed to the new node (RAC3) in the cluster. This copy is performed either over the ssh protocol where available or by using remote copy (rcp). For the copy operation to succeed, the "oracle" user on the existing RAC node must be able to log in to the new RAC node without being prompted for a password or passphrase (a minimal sketch of this setup is shown right after this list).
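Below is a minimal sketch of setting up SSH user equivalence for the "oracle" user between RAC1 and the new node RAC3; the key type and file locations are standard OpenSSH defaults, and it is assumed that RAC1 already holds an authorized_keys file containing the keys of the existing nodes.
On the new node (as oracle user), create a key pair:
[oracle@rac3 ~]$ mkdir -p ~/.ssh; chmod 700 ~/.ssh
[oracle@rac3 ~]$ ssh-keygen -t rsa
On RAC1 (as oracle user), append the new node's public key to the shared authorized_keys file and distribute it back to RAC3:
[oracle@rac1 ~]$ ssh rac3 cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@rac1 ~]$ scp ~/.ssh/authorized_keys rac3:.ssh/authorized_keys
[oracle@rac1 ~]$ ssh rac3 chmod 600 .ssh/authorized_keys
Test that no password prompt appears:
[oracle@rac1 ~]$ ssh rac3 date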
Note: We will perform the node addition activity from a session on the RAC1 node.








 Install cvuqdisk RPM Package:
RAC3 (as root user):
[root@rac3 ]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[root@rac3 ]# rpm -ivh cvuqdisk-1.0.1-1.rpm

Cluster Verification: In Oracle Database 10g R2, Oracle introduced a new utility called Cluster Verification Utility (CVU) as part of the clusterware software. Executing the utility using the appropriate parameters determines the status of the cluster. At this stage, before beginning installation of the Oracle Clusterware, we should perform two verifications:
(1) Verify that the hardware and operating system configuration is complete:
cluvfy stage -post hwos -n rac1,rac3
(2) Perform appropriate checks on all nodes in the node list before setting up Oracle Clusterware.
cluvfy stage -pre crsinst -n rac1,rac3
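In addition to the two stage checks above, CVU component checks can give a more focused look at node connectivity and at how closely the new node matches an existing one; the node names below follow this article's environment:
cluvfy comp nodecon -n rac1,rac2,rac3 -verbose
cluvfy comp peer -refnode rac1 -n rac3 -verbose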







Extend Oracle Clusterware to new node(RAC3):
Oracle Clusterware is already installed on the cluster; the task here is to add the new node to the clustered configuration. This is done by executing the Oracle-provided utility "addNode.sh" located in the clusterware_home/oui/bin directory. Oracle Clusterware maintains two files, the Oracle Cluster Registry (OCR) and the voting disk used by Cluster Synchronization Services (CSS), that contain information about the cluster and the applications managed by Oracle Clusterware. These files need to be updated with the information concerning the new node.
I will perform the addNode activity from the first node (RAC1).
RAC1 :
[root@rac1 ~]# xhost +
Go to the directory clusterware_home/oui/bin:
[oracle@rac1 bin]$ ./addNode.sh
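When the addNode session finishes copying the software, OUI normally prompts for a set of root scripts; the exact paths depend on your clusterware home and inventory location, so treat the ones below (based on the /u01/app/crs home used in this article and an assumed /u01/app/oraInventory inventory) as a sketch and run them only when and where OUI asks:
[root@rac1 ~]# /u01/app/crs/install/rootaddnode.sh
[root@rac3 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@rac3 ~]# /u01/app/crs/root.sh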










Verify Clusterware Installation :
Verify Cluster Services:
[oracle@rac3 ~]$ /u01/app/crs/bin/crs_stat -t

Check Cluster Nodes:
[oracle@rac1 ~]$ /u01/app/crs/bin/olsnodes -n

Check CRS Status:
[oracle@rac3 ~]$ /u01/app/crs/bin/crsctl check crs

Check Oracle Clusterware files:
[oracle@rac3 ~]$ ls -l /etc/init.d/init.*










 Extend Oracle Database Software to New Node (RAC3):
Note: Extend the Oracle Database software to the new node from RAC1.
The next step is to install the Oracle Database software on the new node (RAC3). Oracle provides an executable called "addNode.sh" located in the $ORACLE_HOME/oui/bin directory.
RAC1:
[root@rac1 ~]# xhost +
Go to the directory oracle_home/oui/bin:
[oracle@rac1 bin]$ ./addNode.sh
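If a graphical session is not available, the same step can in principle be run silently by naming the new node on the command line; the exact syntax below is an assumption to be verified against your OUI version:
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"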










 Set Oracle User Environment:
RAC3:
[oracle@rac3 ~]$ vi   ~/.bash_profile
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=/u01/app/crs
export ORACLE_SID=racdb3
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
umask 022
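To make the new settings effective in the current session and confirm them, the profile can simply be re-sourced:
[oracle@rac3 ~]$ . ~/.bash_profile
[oracle@rac3 ~]$ echo $ORACLE_HOME $ORACLE_SID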











Run netca to add Listener to new node(RAC3):
Once the RDBMS software is installed, it is good practice to run netca before moving to the next step. netca will configure all required network files and parameters, such as the listener, SQL*Net settings, and the tnsnames.ora file.
RAC1:
[oracle@rac1 bin]$ netca
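After netca completes, the new listener should appear as a clusterware resource on RAC3; the resource and listener names below follow the usual naming convention and are assumptions for this environment:
[oracle@rac3 ~]$ /u01/app/crs/bin/crs_stat -t | grep -i lsnr
[oracle@rac3 ~]$ lsnrctl status LISTENER_RAC3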









Add Instance to New Node:
DBCA has all the required options to add additional instances to the cluster.
Requirements:
  • Make a full cold backup of the database before commencing the node addition process.
  • Oracle Clusterware should be running on all nodes.
RAC1:
[oracle@rac1 ~]$ dbca

 
At this stage, the following is true:
-- The clusterware has been installed on node RAC3 and is now part of the cluster.
-- The Oracle software has been installed on node RAC3.
-- The ASM3 and the new Oracle instance RACDB3 have been created and configured on RAC3.





 
 Verify that the node addition is successful:
Verify the Instance :
Verify that all instances in the cluster are started, using the GV$INSTANCE view from any of the participating instances.
SQL> select instance_name from gv$instance ;

Check Cluster Services :
[oracle@rac1 ~]$ crs_stat -t 
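The cluster-managed status of the database and its service can also be checked with srvctl; the database and service names below are the ones used throughout this article:
[oracle@rac1 ~]$ srvctl status database -d racdb
[oracle@rac1 ~]$ srvctl status service -d racdb -s racdb_srvc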



Thursday 9 February 2012

Creating RMAN Recovery Catalog

Creating RMAN Recovery Catalog::


Overview:
This article is a comprehensive guide to creating an RMAN recovery catalog.
RMAN can be used either with or without a recovery catalog. A recovery catalog is a schema stored in a database that tracks backups and stores scripts for use in RMAN backup and recovery situations. Generally, the RMAN catalog schema should be placed on a server separate from the main database servers.
Please keep in mind that this article should not be considered a substitute for the official guide from Oracle (http://www.oracle.com). The below mentioned link can be used to download the official documentation for the RMAN recovery catalog.



For the screenshots of the steps, please visit the following link::   http://www.sensehaze.com/mydata/resources_section/bck_rec/recv_catalog_rman/index.php




Disk Space Allocation for the Recovery Catalog Database:
If we are creating our recovery catalog in an existing database, add enough space to hold the default tablespace for the recovery catalog schema. If we are creating a new database to hold our recovery catalog, then, in addition to the space for the recovery catalog schema itself, we must allow space for other files in the recovery catalog database:
  • SYSTEM tablespace
  • Temporary tablespaces
  • Rollback segment tablespaces
  • Online redo log files
Typical recovery catalog space requirements for one year:
 SYSTEM tablespace: 90 MB
 Temp tablespace: 5 MB
 Rollback or undo tablespace: 5 MB
 Recovery catalog tablespace: 15 MB for each database registered in the recovery catalog
 Online redo logs: 1 MB each (three groups, each with two members)
Note: For the purpose of this guide, I will use a separate database (catalog DB name - CATALOG).









 Create a user and schema for Recovery Catalog:
Start SQL*Plus and connect as administrator-privileged-user to the database containing the recovery catalog.
cmd > set ORACLE_SID=CATALOG
cmd > sqlplus sys/sys as sysdba

Create Catalog Tablespace:
SQL> create tablespace reco_cat
datafile 'E:\app\admin\oradata\catalog\reco_cat.dbf' SIZE 100m ;

Create Catalog User:
SQL> CREATE USER rman IDENTIFIED BY rman
DEFAULT TABLESPACE reco_cat
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON reco_cat;

Grant RECOVERY_CATALOG_OWNER role to the schema owner:
SQL> GRANT RECOVERY_CATALOG_OWNER, CONNECT, RESOURCE TO rman;










Create the Recovery Catalog:
Connect to the database that will contain the catalog as the catalog owner.
CMD> rman CATALOG rman/rman@catalog
Run the CREATE CATALOG command to create the catalog. The creation of the catalog can take several minutes.
RMAN> CREATE CATALOG ;






Register a Database with RMAN:
After making sure the recovery catalog database is open, connect RMAN to the target database and recovery catalog database.
CMD> rman target sys/sys@mydb CATALOG rman/rman@catalog
If the target database is not mounted, then mount or open it.
RMAN> STARTUP MOUNT ;
-or-
RMAN> STARTUP OPEN ;
Register the target database in the connected recovery catalog.
RMAN> REGISTER DATABASE ;


RMAN creates rows in the catalog tables to contain information about the target database, then copies all pertinent data about the target database from the control file into the catalog, synchronizing the catalog with the control file.
The database can now be operated on using the RMAN utility with the catalog option.
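As a quick sanity check that the registration and synchronization worked, a catalog-connected session can be asked what RMAN now knows about the target; both of the following are standard RMAN commands:
RMAN> REPORT SCHEMA ;
RMAN> LIST INCARNATION OF DATABASE ;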


For the screenshots of the steps, please visit the following link::   http://www.sensehaze.com/mydata/resources_section/bck_rec/recv_catalog_rman/index.php










Monday 6 February 2012

Oracle Database 11g R2 (11.2.0.2) RAC One Node Installation On Solaris 10

Oracle Database 11g R2 (11.2.0.2) RAC One Node Installation On Solaris 10 (x86-64)::


In this article we are going to learn Oracle Database 11g R2 (11.2.0.2) RAC One Node Installation On Solaris 10 (x86-64).


Overview:
This article is a comprehensive guide for Oracle Database 11g R2 (11.2.0.2) RAC One Node Installation On Solaris 10 (x86-64).
Oracle RAC One Node is a single instance of an Oracle Real Application Clusters database that runs on one node in a cluster. This option adds to the flexibility that Oracle offers for database consolidation. We can consolidate many databases into one cluster while also providing the high availability benefits of failover protection, online rolling patch application, and rolling upgrades for the operating system.
Please keep in mind that this article should not be considered a substitute for the official guide from Oracle (http://www.oracle.com). The below mentioned link can be used to access the official RAC guide:





Objective of the Article:
By the time we finish this article, we should be able to understand the following:
(1) Basic idea of RAC One Node.
(2) Use of Openfiler for shared storage configuration.
(3) Oracle Grid Infrastructure 11.2.0.2 installation for the cluster.
(4) Oracle Database 11.2.0.2 installation for RAC One Node.
(5) Database creation for RAC One Node.

Software used:
 Operating System: Solaris 10 (64-bit) on both nodes
 Shared Storage: Openfiler
 Oracle Grid Infrastructure: 11.2.0.2 for Solaris x86-64
 Oracle Database: 11.2.0.2 for Solaris x86-64





Install Solaris 10 64-bit O/S:
Install the Solaris 10 64-bit operating system on all the RAC nodes in your setup (here, two nodes).
Network Configuration for Node 1:
 eth0 (e1000g0):
  Primary Network Interface: e1000g0
  Hostname: racnode1
  Dynamic IP configuration (DHCP): Off
  IP Address: 192.168.15.100
  System Part of Subnet: Yes
  Netmask (Prefix): 255.255.255.0
  Enable IPv6: No
  Default Route: Yes
  Router IP Address: 192.168.15.1

 eth1 (e1000g1):
  Secondary Network Interface: e1000g1
  Hostname: racnode1-priv
  Dynamic IP configuration (DHCP): Off
  IP Address: 192.168.16.100
  System Part of Subnet: Yes
  Netmask (Prefix): 255.255.255.0
  Enable IPv6: No
  Default Route: No






Network Configuration for Node 2:
 eth0 (e1000g0):
  Primary Network Interface: e1000g0
  Hostname: racnode2
  Dynamic IP configuration (DHCP): Off
  IP Address: 192.168.15.101
  System Part of Subnet: Yes
  Netmask (Prefix): 255.255.255.0
  Enable IPv6: No
  Default Route: Yes
  Router IP Address: 192.168.15.1

 eth1 (e1000g1):
  Secondary Network Interface: e1000g1
  Hostname: racnode2-priv
  Dynamic IP configuration (DHCP): Off
  IP Address: 192.168.16.101
  System Part of Subnet: Yes
  Netmask (Prefix): 255.255.255.0
  Enable IPv6: No
  Default Route: No
For more information about the Solaris operating system and its installation/configuration, please visit the following link: http://docs.oracle.com/cd/E19253-01/index.html






Install Openfiler:
Openfiler is a free browser-based network storage management utility that supports file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. Openfiler supports CIFS, NFS, HTTP/DAV, FTP, iSCSI technology.
Install Openfiler for configuring Shared-storage.
Network Configuration for Openfiler:
 eth0:
  Dynamic IP configuration (DHCP): Off
  Hostname: openfiler
  IP Address: 192.168.15.50
  Netmask (Prefix): 255.255.255.0
  Enable IPv6: No
  Activate on boot: Yes
  Default Gateway Address: 192.168.15.1

 eth1:
  Dynamic IP configuration (DHCP): Off
  IP Address: 192.168.16.50
  Netmask (Prefix): 255.255.255.0
  Enable IPv6: No
  Activate on boot: Yes
For accessing openfiler storage:
https://192.168.15.50:446
username -- openfiler
password -- password
For more information about the installation of openfiler, please visit the following link : www.openfiler.com/learn/how-to/graphical-installation








 Network Configuration:
Network Hardware Requirements:
  • Each node must have at least two network interfaces — one for the public network and one for the private network.
  • Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter e1000g0, then you must configure e1000g0 as the public interface on all nodes.
  • The network adapter for the public interface must support TCP/IP.
  • For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network.
IP Address Requirements:
  • A public IP address for each node
  • A virtual IP address for each node
  • Three single client access name (SCAN) addresses for the cluster.
Note: For the purpose of this article, I have not used GNS or DNS, so I will use a single SCAN IP address.
Note: Perform the following network configuration tasks on both Oracle RAC nodes in the cluster.
Hostname Configuration (as root user):
bash-3.00# vi /etc/hostname.e1000g0
Output:
Node1 -- racnode1
Node2 -- racnode2
If the hostname is not set, then edit the file accordingly.
bash-3.00# vi /etc/hostname.e1000g1
Output:
Node1 -- racnode1-priv
Node2 -- racnode2-priv
If the hostname is not set, then edit the file accordingly.
IP Address Configuration (as root user):
bash-3.00# ifconfig -a


If the IP addresses are not configured, then configure them using the following command (a concrete example follows):
bash-3.00# ifconfig e1000gX plumb ip_address/subnet up
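For example, the public interface on Node 1 could be plumbed and brought up as follows; the interface name and address come from the tables above, and note that plain ifconfig changes are not persistent across reboots:
bash-3.00# ifconfig e1000g0 plumb
bash-3.00# ifconfig e1000g0 192.168.15.100 netmask 255.255.255.0 up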
Edit the /etc/hosts file (as root user):
Give details of Public IPs, Private IPs, Virtual IPs, Scan IP and Openfiler IPs.
In our Oracle RAC One Node configuration, we will use the following network settings:
 Node1
 /etc/hosts
 #::1 localhost
 127.0.0.1 localhost
 #Public IPs
 192.168.15.100 racnode1.mmm.com racnode1
 192.168.15.101 racnode2.mmm.com racnode2
 #Private IPs
 192.168.16.100 racnode1-priv
 192.168.16.101 racnode2-priv
 #Virtual IPs
 192.168.15.110 racnode1-vip.mmm.com racnode1-vip
 192.168.15.111 racnode2-vip.mmm.com racnode2-vip
 #SCAN IP
 192.168.15.150 rac-cluster-scan.mmm.com rac-cluster-scan
 #Openfiler IPs
 192.168.15.50 openfiler.mmm.com openfiler
 192.168.16.50 openfiler-priv

 Node2
 /etc/hosts
 #::1 localhost
 127.0.0.1 localhost
 #Public IPs
 192.168.15.100 racnode1.mmm.com racnode1
 192.168.15.101 racnode2.mmm.com racnode2
 #Private IPs
 192.168.16.100 racnode1-priv
 192.168.16.101 racnode2-priv
 #Virtual IPs
 192.168.15.110 racnode1-vip.mmm.com racnode1-vip
 192.168.15.111 racnode2-vip.mmm.com racnode2-vip
 #SCAN IP
 192.168.15.150 rac-cluster-scan.mmm.com rac-cluster-scan
 #Openfiler IPs
 192.168.15.50 openfiler.mmm.com openfiler
 192.168.16.50 openfiler-priv
Edit the "/etc/hosts" (as root user) file for giving the etries like above:
bash-3.00# vi /etc/hosts


Verify Network Connectivity:
Verify the network connectivity by using the "ping" command to test the connection between the nodes.
Node 1 (as root user):
bash-3.00# ping racnode1
bash-3.00# ping racnode1-priv
bash-3.00# ping racnode2
bash-3.00# ping racnode2-priv
bash-3.00# ping openfiler
bash-3.00# ping openfiler-priv

Node 2 (as root user):
bash-3.00# ping racnode1
bash-3.00# ping racnode1-priv
bash-3.00# ping racnode2
bash-3.00# ping racnode2-priv
bash-3.00# ping openfiler
bash-3.00# ping openfiler-priv

Openfiler (as root user):
bash-3.00# ping racnode1
bash-3.00# ping racnode1-priv
bash-3.00# ping racnode2
bash-3.00# ping racnode2-priv
bash-3.00# ping openfiler
bash-3.00# ping openfiler-priv






Hardware Configuration:
Minimum Hardware Requirements:
  • At least 2 GB of RAM for Oracle Cluster installations.
  • At least 1024 x 768 display resolution, so that OUI displays correctly.
  • 1 GB of space in the /tmp directory.
  • 6.5 GB of space for the Oracle Grid Infrastructure for a Cluster home (Grid home) and at least 4 GB of available disk space for the Oracle Database home directory.
  • Swap space proportional to the available RAM:
     Between 2 GB and 16 GB of RAM: swap equal to the size of RAM
     More than 16 GB of RAM: 16 GB of swap
Node 1 (as root user):
Checking RAM Size:
bash-3.00# /usr/sbin/prtconf | grep "Memory size"
Checking SWAP Size:
bash-3.00# /usr/sbin/swap -s
Checking /tmp Space:
bash-3.00# df -kh

Node 2 (as root user):
Checking RAM Size:
bash-3.00# /usr/sbin/prtconf | grep "Memory size"
Checking SWAP Size:
bash-3.00# /usr/sbin/swap -s
Checking /tmp Space:
bash-3.00# df -kh






Synchronize time on ALL Nodes:
We should ensure that the date and time settings on all nodes are set as closely as possible to the same date and time. Time may be kept in sync with NTP or by using the Oracle Cluster Time Synchronization Service (ctssd).
Note: For the purpose of this guide, I will not use NTP, so I will disable NTP on all the RAC nodes.


Disable NTP Service:
Node 1 (as root user):
bash-3.00# /usr/sbin/svcadm disable ntp

Node 2 (as root user):
bash-3.00# /usr/sbin/svcadm disable ntp

To set the time on the nodes, use the following command:
bash-3.00# date -u new_date_and_time





Automatic SSH Configuration During Installation:
To install Oracle software, SSH connectivity should be set up between all cluster member nodes. OUI uses the ssh and scp commands during installation to run remote commands on and copy files to the other cluster nodes. We must configure SSH so that these commands do not prompt for a password.
Create SSH soft links:
By default, OUI searches for SSH public keys in the directory "/usr/local/etc/" and for the "ssh-keygen" binary in "/usr/local/bin". However, on Oracle Solaris, SSH public keys are typically located under "/etc/ssh", and the "ssh-keygen" binary is located in "/usr/bin". To ensure that OUI can set up SSH, use the following commands to create soft links:
Node 1 (as root user):
bash-3.00# mkdir -p /usr/local/etc
bash-3.00# mkdir -p /usr/local/bin
bash-3.00# ln -s /etc/ssh /usr/local/etc
bash-3.00# ln -s /usr/bin /usr/local/bin

Node 2 (as root user):
bash-3.00# mkdir -p /usr/local/etc
bash-3.00# mkdir -p /usr/local/bin
bash-3.00# ln -s /etc/ssh /usr/local/etc
bash-3.00# ln -s /usr/bin /usr/local/bin


Set SSH LoginGraceTime to unlimited:
Edit the SSH daemon configuration file "/etc/ssh/sshd_config" on all cluster nodes to set the timeout wait to unlimited: "LoginGraceTime 0"
Node 1 (as root user):
bash-3.00# vi /etc/ssh/sshd_config
LoginGraceTime 0

Node 2 (as root user):
bash-3.00# vi /etc/ssh/sshd_config
LoginGraceTime 0
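For the new LoginGraceTime value to take effect, restart the SSH service on each node; this is the standard Solaris 10 SMF command:
bash-3.00# svcadm restart svc:/network/ssh:default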






Check Required Packages:
List of Required Packages:
  • SUNWarc
  • SUNWbtool
  • SUNWcsl
  • SUNWhea
  • SUNWlibC
  • SUNWlibm
  • SUNWlibms
  • SUNWsprot
  • SUNWtoo
  • SUNWi1of
  • SUNWi1cs
  • SUNWi15cs
  • SUNWxwfnt
  • SUNWxwplt
  • SUNWmfrun
  • SUNWxwplr
  • SUNWxwdv
  • SUNWgcc
  • SUNWuiu8
  • SUNWpool
  • SUNWpoolr
Node 1 (as root user):
bash-3.00# pkginfo -i pkg_name
If the package(s) are not installed, then mount the O/S disc and install the package(s).
bash-3.00# pkgadd -d . pkg_name

Node 2 (as root user):
bash-3.00# pkginfo -i pkg_name
If the package(s) are not installed, then mount the O/S disc and install the package(s).
bash-3.00# pkgadd -d . pkg_name






Creation of Users and Groups:
For the purpose of this guide, I will create only two groups: the Oracle Inventory group (oinstall) and a single group (dba) serving as the OSDBA, OSASM, and OSDBA for Oracle ASM groups.
For the purpose of this guide, I will create two users: grid (Grid Infrastructure owner) and oracle (Oracle software owner).
bash-3.00# /usr/sbin/groupadd -g 1000 oinstall
bash-3.00# /usr/sbin/groupadd -g 1031 dba

bash-3.00# /usr/sbin/useradd -u 1100 -g oinstall -G dba -d /export/home/grid grid
bash-3.00# /usr/sbin/useradd -u 1101 -g oinstall -G dba -d /export/home/oracle oracle
bash-3.00# id grid
bash-3.00# id oracle
bash-3.00# passwd grid
bash-3.00# passwd oracle
bash-3.00# mkdir /export/home/grid
bash-3.00# chmod -R 775 /export/home/grid
bash-3.00# chown -R grid:oinstall /export/home/grid
bash-3.00# mkdir /export/home/oracle
bash-3.00# chmod -R 775 /export/home/oracle
bash-3.00# chown -R oracle:oinstall /export/home/oracle






Configure Kernel Parameters:
Verify UDP and TCP Kernel Parameters:
UDP/TCP parameters and their recommended values:
 udp_xmit_hiwat: 65536
 udp_recv_hiwat: 65536
 tcp_smallest_anon_port: 9000
 tcp_largest_anon_port: 65500
 udp_smallest_anon_port: 9000
 udp_largest_anon_port: 65500
Node 1 (as root user):
bash-3.00# /usr/sbin/ndd /dev/udp udp_xmit_hiwat
bash-3.00# /usr/sbin/ndd /dev/udp udp_recv_hiwat
bash-3.00# /usr/sbin/ndd /dev/tcp tcp_smallest_anon_port tcp_largest_anon_port
bash-3.00# /usr/sbin/ndd /dev/udp udp_smallest_anon_port udp_largest_anon_port

For setting these values, do the following:
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_xmit_hiwat 65536
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_recv_hiwat 65536
bash-3.00# /usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
bash-3.00# /usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500

To set the UDP/TCP values for when the system restarts, the ndd commands have to be included in a system startup script. The following script in "/etc/rc2.d/S99ndd" sets the parameters:
bash-3.00# vi /etc/rc2.d/S99ndd
/usr/sbin/ndd -set /dev/udp udp_xmit_hiwat 65536
/usr/sbin/ndd -set /dev/udp udp_recv_hiwat 65536
/usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
/usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500


Node 2 (as root user):
bash-3.00# /usr/sbin/ndd /dev/udp udp_xmit_hiwat
bash-3.00# /usr/sbin/ndd /dev/udp udp_recv_hiwat
bash-3.00# /usr/sbin/ndd /dev/tcp tcp_smallest_anon_port tcp_largest_anon_port
bash-3.00# /usr/sbin/ndd /dev/udp udp_smallest_anon_port udp_largest_anon_port

For setting these values, do the following:
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_xmit_hiwat 65536
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_recv_hiwat 65536
bash-3.00# /usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
bash-3.00# /usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500

To set the UDP/TCP values for when the system restarts, the ndd commands have to be included in a system startup script. The following script in "/etc/rc2.d/S99ndd" sets the parameters:
bash-3.00# vi /etc/rc2.d/S99ndd
/usr/sbin/ndd -set /dev/udp udp_xmit_hiwat 65536
/usr/sbin/ndd -set /dev/udp udp_recv_hiwat 65536
/usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
/usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500



Set Maximum user processes:
Edit the "/etc/system" file of all the nodes:
bash-3.00# vi /etc/system
set maxusers=16384


Creation of User Project:
Node 1 and 2 (as root user):
If you've performed a default installation, it is likely that the only kernel parameter you need to alter is "max-shm-memory" to meet the minimum installation requirements.
For GRID User:
To create user project, issue the following command:
bash-3.00# projadd grid

Append the following line to the "/etc/user_attr" file:
grid::::project=grid

To check the current value, issue the following command:
bash-3.00# prctl -n project.max-shm-memory -i project grid
project: 100: grid
Note: To reset this value, make sure at least one session is logged in as the "grid" user, then from the root user issue the following commands.
To dynamically reset the value, issue the following command:
bash-3.00# prctl -n project.max-shm-memory -v 4gb -r -i project grid
To make the value persistent between reboots via the "/etc/project" file, issue the following command:
bash-3.00# projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" grid
bash-3.00# cat /etc/project

Similarly, repeat the above steps for the oracle user.
Reboot your RAC Nodes before attempting to install Oracle binaries.




Configure Shell Limits:
Oracle recommends that we set shell limits and system configuration parameters as described in this section.
Recommended shell limits for Oracle Solaris systems:
 TIME: -1 (unlimited)
 FILE: -1 (unlimited)
 DATA: minimum value 1048576
 STACK: minimum value 32768
 NOFILES: minimum value 1024
 VMEMORY: minimum value 4194304
To see the shell limit of "GRID" user, issue the following command:
bash-3.00$ ulimit -a
To change the shell limit of "GRID" user, issue the following command:
bash-3.00$ ulimit -parameter_name new_val
To see the shell limit of "ORACLE" user, issue the following command:
bash-3.00$ ulimit -a
To change the shell limit of "ORACLE" user, issue the following command:
bash-3.00$ ulimit -parameter_name new_val
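For example, to check and raise the open-file limit (NOFILES) for the current shell session of the grid or oracle user - the value shown is just the recommended minimum from the table above:
bash-3.00$ ulimit -n
bash-3.00$ ulimit -n 1024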







Create Oracle Binaries Directory:
To create the Grid Infrastructure home directory and the Oracle Database directory, enter the following commands as the root user:
bash-3.00# mkdir -p /u01/app/11.2.0/grid
bash-3.00# mkdir -p /u01/app/grid
bash-3.00# mkdir -p /u01/app/oracle
bash-3.00# chown grid:oinstall /u01/app/11.2.0/grid
bash-3.00# chown grid:oinstall /u01/app/grid
bash-3.00# chown -R grid:oinstall /u01
bash-3.00# chown oracle:oinstall /u01/app/oracle
bash-3.00# chmod -R 775 /u01/

Do this on both the nodes.



Configure iSCSI Volumes using Openfiler:
Perform the following configuration task on the storage server (openfiler).
Openfiler administration is performed using the Openfiler Storage Control Center, a browser-based tool accessed over an https connection on port 446.
Accessing the Openfiler Storage control Center:
https://openfiler:446
Username : openfiler
Password : password
 Link for configuration of iSCSI Volumes using Openfiler
For more information about Openfiler, visit the following link: www.openfiler.com







Connect to iSCSI-Target with iSCSI Initiator in Solaris:
iSCSI Packages Required for Solaris 10:
  • SUNWiscsir - Sun iSCSI Device Driver (root)
  • SUNWiscsitgtr - Sun iSCSI Target (Root)
  • SUNWiscsitgtu - Sun iSCSI Target (Usr)
  • SUNWiscsiu - Sun iSCSI Management Utilities (usr)
bash-3.00# pkginfo -i pkg_name
If the package(s) are not installed, then mount the installer disk and install the above mentioned packages.
bash-3.00# pkgadd -d pkg_name

Configure and Enable iSCSI Target Discovery:
Run the following commands from each node to discover all available iSCSI LUNs and view the disk devices:
bash-3.00# iscsiadm add discovery-address 192.168.16.50:3260
bash-3.00# iscsiadm modify discovery --sendtargets enable
bash-3.00# devfsadm -i iscsi
bash-3.00# format

after running "iscsiadm" command, all the devices are discovered on both the nodes.


Partition and Label the LUNs:
Now that the new iSCSI LUNs have been successfully discovered, the next step is to create new partitions on the LUNs.
For partitioning, labeling, and assigning volume names to the LUNs, use the format command, which is described in the link below:
 Link for Partitioning and Labeling LUNs






Installation of Grid Infrastructure:
Mount the Grid Installer Disc:
I have created a directory for the grid infrastructure software and copied the setup files in that directory.
Note: The installation will be performed from one of the nodes; for the purpose of this guide, I will do the installation from Node 1.
Node 1 (as root user):
bash-3.00# mkdir /softs
bash-3.00# chmod -R 775 /softs/
bash-3.00# chown -R grid:oinstall /softs/
Copy the setup files in the above created directory.
 
 Link for Grid Infrastructure Installation






Configure GRID User Environment:
Log in to all the RAC nodes as the "grid" user and create the following login script (grid_profile):
Node 1 (as grid user):
Create the "grid_profile" file with the entries similar to the following:
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
umask 022

(As grid user)bash-3.00$ chmod +x grid_profile
bash-3.00$ . grid_profile
bash-3.00$ echo $ORACLE_HOME


Node 2 (as grid user):
Create the "grid_profile" file with the entries similar to the following:
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM2
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
umask 022

(As grid user)bash-3.00$ chmod +x grid_profile
bash-3.00$ . grid_profile
bash-3.00$ echo $ORACLE_HOME



For all the screenshots, please visit the link:::  http://www.sensehaze.com/index.php/oracle-database/87-real-application-cluster/104-rac11g-1node-solaris


Verify Oracle Clusterware Installation:
After installing Oracle Grid Infrastructure, we should run several test commands to verify whether the installation was successful or not.
Run the following commands on both the nodes of the cluster as the 'grid' user.
 Link for verification of Oracle Clusterware Installation
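For example, the following standard checks, run as the grid user, give a quick picture of the cluster state; the path assumes the Grid home created earlier in this article:
bash-3.00$ /u01/app/11.2.0/grid/bin/crsctl check cluster -all
bash-3.00$ /u01/app/11.2.0/grid/bin/crsctl stat res -t
bash-3.00$ /u01/app/11.2.0/grid/bin/olsnodes -n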




Creation of ASM Disk Groups for Data and FRA:
Run the ASM Configuration Assistant (asmca) as the 'grid' user from one of the nodes in the cluster to create the additional ASM disk groups that will be used to create the RAC database. For the purpose of this article, I will configure the ASM disks from Node 1.
I will create two additional ASM disk groups using asmca, one for DATA and the other for FRA.
First, we have to add the new disk discovery strings so that ASM can see the candidate disks. To do so, connect to the ASM instance from one of the nodes and set the asm_diskstring parameter.
Node 1 (as grid user):
Connect to your ASM instance and run the following statement:
SQL> alter system set asm_diskstring='/dev/crs','/dev/data','/dev/fra' ;

 Link for creation of ASM Diskgroups
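As an alternative to the asmca screens referenced above, the disk groups can also be created from SQL*Plus on the ASM instance; this is only a sketch - the disk paths and the choice of external redundancy are assumptions based on the diskstring set above:
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/data/*';
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/fra/*';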









Installation of Oracle Database Software for RAC One Node:
We will perform the Oracle Database software installation from Node 1. The Oracle Database software will be installed on both the nodes by the OUI using SSH.
Node 1 (as root user):
bash-3.00# xhost +


 Link for Installation of Oracle Database Software for RAC One Node







Configure ORACLE User Environment:
Log in to all the RAC nodes as the "oracle" user and create the following login script (db_profile):
Node 1 (as oracle user):
Create the "db_profile" file with the entries similar to the following:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
umask 022

(As oracle user)bash-3.00$ chmod +x db_profile
bash-3.00$ . db_profile
bash-3.00$ echo $ORACLE_HOME

Node 2 (as oracle user):
Create the "db_profile" file with the entries similar to the following:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
umask 022

(As oracle user)bash-3.00$ chmod +x db_profile
bash-3.00$ . db_profile
bash-3.00$ echo $ORACLE_HOME








Create Cluster Database:
The database creation will be performed from one of the RAC nodes; in this guide we will use racnode1.
Use the DBCA utility to create the cluster database.
Node 1 (as root user):
bash-3.00# xhost +

 Link for Creation of Cluster Database






Connect to the Database:
As oracle user:
bash-3.00$ sqlplus sys/sys@racdb as sysdba

Check RAC One Node Status:
As oracle user:
(Node1)bash-3.00$ srvctl config database -d racdb

(Node2)bash-3.00$ srvctl config database -d racdb
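One of the defining features of RAC One Node is online relocation of the running instance to the other node; a sketch using this article's names follows (the -w timeout is in minutes and is optional):
(Node1)bash-3.00$ srvctl relocate database -d racdb -n racnode2 -w 30
(Node1)bash-3.00$ srvctl status database -d racdb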

Congrats! This finishes Oracle RAC One Node Installation on Solaris 10.