Oracle Database 11g R2 (11.2.0.2) RAC One Node Installation On Solaris 10 (x86-64)
In this article, we are going to learn how to install Oracle Database 11g R2 (11.2.0.2) RAC One Node on Solaris 10 (x86-64).
Contents:
- Overview
- Objective of the Article
- Install Solaris 10 64-bit O/S
- Install Openfiler
- Network Configuration
- Hardware Configuration
- Synchronize Time on All Nodes
- Automatic SSH Configuration During Installation
- Check Required Packages
- Partition and Label the LUNs
- Installation of Grid Infrastructure
- Configure GRID User Environment
- Verify Oracle Clusterware Installation
- Creation of ASM Disk Groups for Data and FRA
- Installation of Oracle Database Software for RAC One Node
- Configure ORACLE User Environment
- Create Cluster Database
Overview:
This article is a comprehensive guide for Oracle Database 11g R2 (11.2.0.2) RAC One Node Installation On Solaris 10 (x86-64).
Oracle RAC One Node is a single instance of an Oracle Real Application Cluster database that runs on one node in a cluster. This option adds to the flexibility that Oracle offers for database consolidation. We can consolidate many databases into one cluster while also providing the high availability benefits of failover protection, online rolling patch application, and rolling upgrades for the operating system.
Please keep in mind that this article should not be considered a substitute for the official Oracle documentation (http://www.oracle.com). The below mentioned link can be used to access the official RAC Guide:
By the time we finish this article, we should be able to understand the following:
(1) Basic idea of RAC One Node.
(2) Use of Openfiler for shared storage configuration.
(3) Oracle Grid Infrastructure 11.2.0.2 installation for a cluster.
(4) Oracle Database 11.2.0.2 installation for RAC One Node.
(5) Database creation for RAC One Node.
Software used:
Operating System | Solaris 10 - 64bit (Both the nodes). |
Shared Storage | Openfiler |
Oracle Grid Infrastructure | 11.2.0.2 for solaris x64 |
Oracle Database | 11.2.0.2 for solaris x64 |
Install the Solaris 10 64-bit Operating System on all the RAC nodes of your setup (here, two nodes).
Network Configuration for Node1:
eth0(e1000g0) | |
Primary Network Interface | e1000g0 |
Hostname | racnode1 |
Dynamic IP configuration (DHCP) | OFF |
IP Address | 192.168.15.100 |
System Part of Subnet | Yes |
NetMask(Prefix) | 255.255.255.0 |
Enable IPv6 | No |
Default Route | Yes |
Router IP Address | 192.168.15.1 |
eth1(e1000g1) | |
Secondary Network Interface | e1000g1 |
Hostname | racnode1-priv |
Dynamic IP configuration (DHCP) | OFF |
IP Address | 192.168.16.100 |
System Part of Subnet | Yes |
NetMask(Prefix) | 255.255.255.0 |
Enable IPv6 | No |
Default Route | No |
Network Configuration for Node2:
eth0(e1000g0) | |
Primary Network Interface | e1000g0 |
Hostname | racnode2 |
Dynamic IP configuration (DHCP) | OFF |
IP Address | 192.168.15.101 |
System Part of Subnet | Yes |
NetMask(Prefix) | 255.255.255.0 |
Enable IPv6 | No |
Default Route | Yes |
Router IP Address | 192.168.15.1 |
eth1(e1000g1) | |
Secondary Network Interface | e1000g1 |
Hostname | racnode2-priv |
Dynamic IP configuration (DHCP) | OFF |
IP Address | 192.168.16.101 |
System Part of Subnet | Yes |
NetMask(Prefix) | 255.255.255.0 |
Enable IPv6 | No |
Default Route | No |
For more information about the Solaris Operating System and its installation/configuration, please visit the following link: http://docs.oracle.com/cd/E19253-01/index.html
Openfiler is a free browser-based network storage management utility that supports file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. Openfiler supports CIFS, NFS, HTTP/DAV, FTP, iSCSI technology.
Install Openfiler for configuring Shared-storage.
Network Configuration for Openfiler:
eth0 | |
Dynamic IP configuration (DHCP) | OFF |
Host Name | openfiler |
IP Address | 192.168.15.50 |
NetMask(Prefix) | 255.255.255.0 |
Enable IPv6 | No |
Activate on boot | Yes |
Default Gateway Address | 192.168.15.1 |
eth1 | |
Dynamic IP configuration (DHCP) | OFF |
IP Address | 192.168.16.50 |
NetMask(Prefix) | 255.255.255.0 |
Enable IPv6 | No |
Activate on boot | Yes |
For accessing openfiler storage:
https://192.168.15.50:446
username -- openfiler
password -- password
For more information about the installation of Openfiler, please visit the following link: www.openfiler.com/learn/how-to/graphical-installation
Network Hardware Requirements:
- Each node must have at least two network interfaces — one for the public network and one for the private network.
- Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter e1000g0, then you must configure e1000g0 as the public interface on all nodes.
- The network adapter for the public interface must support TCP/IP.
- For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network.
IP Address Requirements:
- A public IP address for each node
- A virtual IP address for each node
- Three single client access name (SCAN) addresses for the cluster.
Note: For the purpose of this article, I have not used GNS or DNS, so I will use a single SCAN IP address.
Note: Perform the network configuration tasks below on both Oracle RAC nodes in the cluster.
Hostname Configuration (as root user):
bash-3.00# vi /etc/hostname.e1000g0
Output:
Node1 -- racnode1
Node2 -- racnode2
If the hostname is not set, edit the file accordingly.
bash-3.00# vi /etc/hostname.e1000g1
Output:
Node1 -- racnode1-priv
Node2 -- racnode2-priv
If the hostname is not set, edit the file accordingly.
IP Address Configuration (as root user):
bash-3.00# ifconfig -a
If the IP Addresses are not configured then configure them by using the command:
bash-3.00# ifconfig e1000gX plumb ip_address/prefix_length up
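For example, on racnode1 the two interfaces could be brought up with the addresses from the tables above (shown only as an illustration; for the settings to persist across reboots, the corresponding /etc/hostname.e1000gX files and /etc/netmasks entries must also be in place):
bash-3.00# ifconfig e1000g0 plumb 192.168.15.100 netmask 255.255.255.0 up
bash-3.00# ifconfig e1000g1 plumb 192.168.16.100 netmask 255.255.255.0 up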
Edit the /etc/hosts file (as root user):
Give details of Public IPs, Private IPs, Virtual IPs, Scan IP and Openfiler IPs.
In our Oracle RAC One Node configuration, we will use the following network settings:
Node1 | ||
/etc/hosts | ||
#::1 localhost | ||
127.0.0.1 | localhost | |
#Public IPs | ||
192.168.15.100 | racnode1.mmm.com | racnode1 |
192.168.15.101 | racnode2.mmm.com | racnode2 |
#Private IPs | |
192.168.16.100 | racnode1-priv | |
192.168.16.101 | racnode2-priv | |
#Virtual IPs | |
192.168.15.110 | racnode1-vip.mmm.com | racnode1-vip |
192.168.15.111 | racnode2-vip.mmm.com | racnode2-vip |
#SCAN IP | ||
192.168.15.150 | rac-cluster-scan.mmm.com | rac-cluster-scan |
#Openfiler IPs | ||
192.168.15.50 | openfiler.mmm.com | openfiler |
192.168.16.50 | openfiler-priv |
Node2 | ||
/etc/hosts | ||
#::1 localhost | ||
127.0.0.1 | localhost | |
#Public IPs | ||
192.168.15.100 | racnode1.mmm.com | racnode1 |
192.168.15.101 | racnode2.mmm.com | racnode2 |
#Private IPs | |
192.168.16.100 | racnode1-priv | |
192.168.16.101 | racnode2-priv | |
#Virtual IPs | |
192.168.15.110 | racnode1-vip.mmm.com | racnode1-vip |
192.168.15.111 | racnode2-vip.mmm.com | racnode2-vip |
#SCAN IP | ||
192.168.15.150 | rac-cluster-scan.mmm.com | rac-cluster-scan |
#Openfiler IPs | ||
192.168.15.50 | openfiler.mmm.com | openfiler |
192.168.16.50 | openfiler-priv |
Edit the "/etc/hosts" (as root user) file for giving the etries like above:
bash-3.00# vi /etc/hosts
Verify Network Connectivity:
Verify the network connectivity by using the "ping" command to test the connection between the nodes.
Node 1 (as root user):
bash-3.00# ping racnode1
bash-3.00# ping racnode1-priv
bash-3.00# ping racnode2
bash-3.00# ping racnode2-priv
bash-3.00# ping openfiler
bash-3.00# ping openfiler-priv
Node 2 (as root user):
bash-3.00# ping racnode1
bash-3.00# ping racnode1-priv
bash-3.00# ping racnode2
bash-3.00# ping racnode2-priv
bash-3.00# ping openfiler
bash-3.00# ping openfiler-priv
Openfiler (as root user):
bash-3.00# ping racnode1
bash-3.00# ping racnode1-priv
bash-3.00# ping racnode2
bash-3.00# ping racnode2-priv
bash-3.00# ping openfiler
bash-3.00# ping openfiler-priv
Minimum Hardware Requirements:
- At least 2 GB of RAM for Oracle Cluster installations.
- At least 1024 x 768 display resolution, so that OUI displays correctly.
- 1 GB of space in the /tmp directory.
- 6.5 GB of space for the Oracle Grid Infrastructure for a Cluster home (Grid home) and at least 4 GB of available disk space for the Oracle Database home directory.
- Swap space sized according to the amount of available RAM:
Available RAM | Swap Space Required |
Between 2 GB and 16 GB | Equal to the size of RAM |
More than 16 GB | 16 GB |
Node 1 (as root user):
Checking RAM Size:
bash-3.00# /usr/sbin/prtconf | grep "Memory size"
Checking SWAP Size:
bash-3.00# /usr/sbin/swap -s
Checking /tmp Space:
bash-3.00# df -kh
Node 2 (as root user):
Checking RAM Size:
bash-3.00# /usr/sbin/prtconf | grep "Memory size"
Checking SWAP Size:
bash-3.00# /usr/sbin/swap -s
Checking /tmp Space:
bash-3.00# df -kh
We should ensure that the date and time settings on all nodes are set as closely as possible to the same date and time. Time may be kept in sync with NTP or with the Oracle Cluster Time Synchronization Service (ctssd).
Note: For the purpose of this guide, I will not use NTP, so I will disable NTP on all the RAC nodes.
Disable NTP Service:
Node 1 (as root user):
bash-3.00# /usr/sbin/svcadm disable ntp
Node 2 (as root user):
bash-3.00# /usr/sbin/svcadm disable ntp
To set the date and time manually on the nodes, use the following command:
bash-3.00# date -u new_date_and_time
Automatic SSH Configuration During Installation:
To install Oracle software, SSH connectivity should be set up between all cluster member nodes. OUI uses the ssh and scp commands during installation to run remote commands on and copy files to the other cluster nodes. We must configure SSH so that these commands do not prompt for a password.
Create SSH Soft Links:
By default, OUI searches for SSH public keys in the directory "/usr/local/etc/" and "ssh-keygen" binaries in "/usr/local/bin". However, on Oracle Solaris, SSH public keys typically are located in the path "/etc/ssh", and "ssh-keygen" binaries are located in the path "/usr/bin". To ensure that OUI can set up SSH, use the following command to create soft links:
Node 1 (as root user):
bash-3.00# mkdir -p /usr/local/etc
bash-3.00# mkdir -p /usr/local/bin
bash-3.00# ln -s /etc/ssh /usr/local/etc
bash-3.00# ln -s /usr/bin /usr/local/bin
Node 2 (as root user):
bash-3.00# mkdir -p /usr/local/etc
bash-3.00# mkdir -p /usr/local/bin
bash-3.00# ln -s /etc/ssh /usr/local/etc
bash-3.00# ln -s /usr/bin /usr/local/bin
Set SSH LoginGraceTime to unlimited:
Edit the SSH daemon configuration file "/etc/ssh/sshd_config" on all cluster nodes to set the timeout wait to unlimited: "LoginGraceTime 0"
Node 1 (as root user):
bash-3.00# vi /etc/ssh/sshd_config
LoginGraceTime 0
Node 2 (as root user):
bash-3.00# vi /etc/ssh/sshd_config
LoginGraceTime 0
Check Required Packages:
List of Required Packages:
- SUNWarc
- SUNWbtool
- SUNWcsl
- SUNWhea
- SUNWlibC
- SUNWlibm
- SUNWlibms
- SUNWsprot
- SUNWtoo
- SUNWi1of
- SUNWi1cs
- SUNWi15cs
- SUNWxwfnt
- SUNWxwplt
- SUNWmfrun
- SUNWxwplr
- SUNWxwdv
- SUNWgcc
- SUNWuiu8
- SUNWpool
- SUNWpoolr
Node 1 (as root user):
bash-3.00# pkginfo -i pkg_name
If the package(s) are not installed, then mount the O/S disc and install the package(s).
bash-3.00# pkgadd -d . pkg_name
Node 2 (as root user):
bash-3.00# pkginfo -i pkg_name
If the package(s) are not installed, then mount the O/S disc and install the package(s).
bash-3.00# pkgadd -d . pkg_name
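As a convenience, the whole list can be checked in one pass on each node (a small sketch built from the package list above; pkginfo returns a non-zero status for packages that are not installed):
bash-3.00# for p in SUNWarc SUNWbtool SUNWcsl SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt SUNWxwplt SUNWmfrun SUNWxwplr SUNWxwdv SUNWgcc SUNWuiu8 SUNWpool SUNWpoolr; do pkginfo -i $p > /dev/null 2>&1 || echo "$p is NOT installed"; done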
Creation of Users and Groups:
For the purpose of this guide, I will create only two groups: the Oracle Inventory group (oinstall) and a single group (dba) that serves as the OSDBA, OSASM and OSDBA for Oracle ASM groups.
For the purpose of this guide, I will create two users, namely grid (Grid Infrastructure owner) and oracle (Oracle software owner).
bash-3.00# /usr/sbin/groupadd -g 1000 oinstall
bash-3.00# /usr/sbin/groupadd -g 1031 dba
bash-3.00# /usr/sbin/useradd -u 1100 -g oinstall -G dba -d /export/home/grid grid
bash-3.00# /usr/sbin/useradd -u 1101 -g oinstall -G dba -d /export/home/oracle oracle
bash-3.00# id grid
bash-3.00# id oracle
bash-3.00# passwd grid
bash-3.00# passwd oracle
bash-3.00# mkdir /export/home/grid
bash-3.00# chmod -R 775 /export/home/grid
bash-3.00# chown -R grid:oinstall /export/home/grid
bash-3.00# mkdir /export/home/oracle
bash-3.00# chmod -R 775 /export/home/oracle
bash-3.00# chown -R oracle:oinstall /export/home/oracle
Configure Kernel Parameters:
Verify UDP and TCP Kernel Parameters:
UDP/TCP Parameters and their Recommended Values:
Parameter | Value |
udp_xmit_hiwat | 65536 |
udp_recv_hiwat | 65536 |
tcp_smallest_anon_port | 9000 |
tcp_largest_anon_port | 65500 |
udp_smallest_anon_port | 9000 |
udp_largest_anon_port | 65500 |
Node 1 (as root user):
bash-3.00# /usr/sbin/ndd /dev/udp udp_xmit_hiwat
bash-3.00# /usr/sbin/ndd /dev/udp udp_recv_hiwat
bash-3.00# /usr/sbin/ndd /dev/tcp tcp_smallest_anon_port tcp_largest_anon_port
bash-3.00# /usr/sbin/ndd /dev/udp udp_smallest_anon_port udp_largest_anon_port
For setting these values, do the following:
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_xmit_hiwat 65536
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_recv_hiwat 65536
bash-3.00# /usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
bash-3.00# /usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500
To make the UDP/TCP settings persist across system restarts, the ndd commands have to be included in a system startup script. The following script, "/etc/rc2.d/S99ndd", sets the parameters:
bash-3.00# vi /etc/rc2.d/S99ndd
/usr/sbin/ndd -set /dev/udp udp_xmit_hiwat 65536
/usr/sbin/ndd -set /dev/udp udp_recv_hiwat 65536
/usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
/usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500
Node 2 (as root user):
bash-3.00# /usr/sbin/ndd /dev/udp udp_xmit_hiwat
bash-3.00# /usr/sbin/ndd /dev/udp udp_recv_hiwat
bash-3.00# /usr/sbin/ndd /dev/tcp tcp_smallest_anon_port tcp_largest_anon_port
bash-3.00# /usr/sbin/ndd /dev/udp udp_smallest_anon_port udp_largest_anon_port
For setting these values, do the following:
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_xmit_hiwat 65536
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_recv_hiwat 65536
bash-3.00# /usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
bash-3.00# /usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
bash-3.00# /usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500
To make the UDP/TCP settings persist across system restarts, the ndd commands have to be included in a system startup script. The following script, "/etc/rc2.d/S99ndd", sets the parameters:
bash-3.00# vi /etc/rc2.d/S99ndd
/usr/sbin/ndd -set /dev/udp udp_xmit_hiwat 65536
/usr/sbin/ndd -set /dev/udp udp_recv_hiwat 65536
/usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
/usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500
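For the startup script to be executed at boot, it should also be owned by root and made executable on both nodes (a common follow-up step):
bash-3.00# chown root:sys /etc/rc2.d/S99ndd
bash-3.00# chmod 744 /etc/rc2.d/S99ndd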
Set Maximum user processes:
Edit the "/etc/system" file of all the nodes:
bash-3.00# vi /etc/system
set maxusers=16384
Creation of User Project:
Node 1 and 2 (as root user):
If you've performed a default installation, it is likely that the only kernel parameter you need to alter is "max-shm-memory" to meet the minimum installation requirements.
For GRID User:
To create user project, issue the following command:
bash-3.00# projadd grid
Append the following line to the "/etc/user_attr" file so that this project becomes the default project for the grid user:
grid::::project=grid
To check the current value, issue the following command:
bash-3.00# prctl -n project.max-shm-memory -i project grid
Output:
project: 100: grid
Note: To reset this value, make sure at least one session is logged in as the "grid" user, then issue the following commands as the root user.
To dynamically reset the value, issue the following command:
bash-3.00# prctl -n project.max-shm-memory -v 4gb -r -i project grid
To make the value persistent across reboots, update the "/etc/project" file by issuing the following command:
bash-3.00# projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" grid
bash-3.00# cat /etc/project
Similarly, repeat the above steps for the oracle user; a sketch is shown below.
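As a sketch, and assuming the same 4 GB value as used for grid, the equivalent commands for the oracle user would look like the following (for the prctl command, at least one session must be logged in as the oracle user):
bash-3.00# projadd oracle
bash-3.00# prctl -n project.max-shm-memory -v 4gb -r -i project oracle
bash-3.00# projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" oracle
Also append "oracle::::project=oracle" to the "/etc/user_attr" file, as was done for grid.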
Reboot your RAC Nodes before attempting to install Oracle binaries.
Configure Shell Limits:
Oracle recommends that we set shell limits and system configuration parameters as described in this section.
Recommended Shell Limits for Oracle Solaris Systems:
Shell Limit | Recommended Value |
TIME | -1 (Unlimited) |
FILE | -1 (Unlimited) |
DATA | Minimum value: 1048576 |
STACK | Minimum value: 32768 |
NOFILES | Minimum value: 1024 |
VMEMORY | Minimum value: 4194304 |
To see the shell limit of "GRID" user, issue the following command:
bash-3.00$ ulimit -a
To change the shell limit of "GRID" user, issue the following command:
bash-3.00$ ulimit -parameter_name new_val
To see the shell limit of "ORACLE" user, issue the following command:
bash-3.00$ ulimit -a
To change the shell limit of "ORACLE" user, issue the following command:
bash-3.00$ ulimit -parameter_name new_val
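For example, the stack and open-file limits for the current shell session can be checked and raised with the standard ulimit flags (a sketch; the values follow the table above, and the hard limits ultimately come from the system configuration):
bash-3.00$ ulimit -s 32768
bash-3.00$ ulimit -n 1024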
Create Oracle Binaries Directory:
To create the Grid Infrastructure home directory and Oracle Database directory , enter the following commands as the root user:
bash-3.00# mkdir -p /u01/app/11.2.0/grid
bash-3.00# mkdir -p /u01/app/grid
bash-3.00# mkdir -p /u01/app/oracle
bash-3.00# chown grid:oinstall /u01/app/11.2.0/grid
bash-3.00# chown grid:oinstall /u01/app/grid
bash-3.00# chown -R grid:oinstall /u01
bash-3.00# chown oracle:oinstall /u01/app/oracle
bash-3.00# chmod -R 775 /u01/
Do this on both the nodes.
Configure iSCSI Volumes using Openfiler:
Perform the following configuration task on the storage server (openfiler).
Openfiler administration is performed using the Openfiler Storage Control Center - a browser based tool over an https connection on port 446.
Accessing the Openfiler Storage Control Center:
https://openfiler:446
Username : openfiler
Password : password
Link for configuration of iSCSI Volumes using Openfiler
For more information about Openfiler, visit the following link: www.openfiler.com
Connect to iSCSI-Target with iSCSI Initiator in Solaris:
iSCSI Packages Required for Solaris 10:
- SUNWiscsir - Sun iSCSI Device Driver (root)
- SUNWiscsitgtr - Sun iSCSI Target (Root)
- SUNWiscsitgtu - Sun iSCSI Target (Usr)
- SUNWiscsiu - Sun iSCSI Management Utilities (usr)
bash-3.00# pkginfo -i pkg_name
If the package(s) are not installed, then mount the installer disk and install the above mentioned packages.
bash-3.00# pkgadd -d . pkg_name
Configure and Enable iSCSI Target Discovery:
Run the following commands from both nodes to discover the available iSCSI LUNs and view the disk devices:
bash-3.00# iscsiadm add discovery-address 192.168.16.50:3260
bash-3.00# iscsiadm modify discovery --sendtargets enable
bash-3.00# devfsadm -i iscsi
bash-3.00# format
After running the "iscsiadm" commands, all the devices are discovered on both nodes.
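Before creating partitions, the visible targets and their OS device names can be confirmed from each node (a quick check using the Solaris iSCSI initiator utility):
bash-3.00# iscsiadm list target
bash-3.00# iscsiadm list target -S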
Now that the new iSCSI LUNs have been successfully discovered, the next step is to create new partitions on the LUNs.
For partitioning and labeling the LUNs and assigning volume names, use the "format" utility as described in the link below:
Link for Partitioning and Labeling LUNs
Installation of Grid Infrastructure:
Mount the Grid Installer Disc:
I have created a directory for the grid infrastructure software and copied the setup files in that directory.
Note: The installation will be performed from one of the nodes; for the purpose of this guide, I will do the installation from Node 1.
Node 1 (as root user):
bash-3.00# mkdir /softs
bash-3.00# chmod -R 775 /softs/
bash-3.00# chown -R grid:oinstall /softs/
Copy the setup files into the directory created above.
Login to all the RAC nodes as "GRID" user and create the following login script (grid_profile):
Node 1 (as grid user):
Create the "grid_profile" file with the entries similar to the following:
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
umask 022
bash-3.00$ . grid_profile
bash-3.00$ echo $ORACLE_HOME
Node 2 (as grid user):
Create the "grid_profile" file with the entries similar to the following:
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM2
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
umask 022
bash-3.00$ . grid_profile
bash-3.00$ echo $ORACLE_HOME
For all the screenshots, please visit the link::: http://www.sensehaze.com/index.php/oracle-database/87-real-application-cluster/104-rac11g-1node-solaris
Verify Oracle Clusterware Installation:
After installing Oracle Grid Infrastructure, we should run several test commands to verify whether the installation was successful or not.
Run the following commands on both the nodes of the cluster as the 'grid' user.
Link for verification of Oracle Clusterware Installation
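Typical checks include the following (a sketch of commonly used commands; the linked page walks through the full set with the expected output):
bash-3.00$ crsctl check crs
bash-3.00$ crsctl check cluster -all
bash-3.00$ olsnodes -n
bash-3.00$ crsctl query css votedisk
bash-3.00$ srvctl status asm -a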
Creation of ASM Disk Groups for Data and FRA:
Run the ASM Configuration Assistant (asmca) as the 'grid' user from one of the nodes in the cluster to create the additional ASM disk groups that will be used to create the RAC database. For the purpose of this article, I will configure the ASM disks from Node 1.
I will create two additional ASM disk groups using the ASM Configuration Assistant (asmca), one for DATA and the other for FRA.
First, we have to add a new disk discovery string so that ASM can see the new LUNs. To do so, connect to the ASM instance from one of the nodes and set the disk discovery string.
Node 1 (as grid user):
Connect to your ASM instance and run the following query:
SQL> alter system set asm_diskstring='/dev/crs','/dev/data','/dev/fra';
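To confirm that the new discovery string is in effect and that the disks are visible, a quick check can be run from the same ASM instance (standard ASM views):
SQL> show parameter asm_diskstring
SQL> select path, header_status from v$asm_disk;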
Installation of Oracle Database Software for RAC One Node:
We will perform the Oracle Database software installation from Node 1. The Oracle Database software will be installed on both the nodes by the OUI using SSH.
Node 1 (as root user):
bash-3.00# xhost +
Configure ORACLE User Environment:
Login to all the RAC nodes as "ORACLE" user and create the following login script (db_profile):
Node 1 (as oracle user):
Create the "db_profile" file with the entries similar to the following:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
umask 022
bash-3.00$ . db_profile
bash-3.00$ echo $ORACLE_HOME
Node 2 (as oracle user):
Create the "db_profile" file with the entries similar to the following:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
umask 022
bash-3.00$ . db_profile
bash-3.00$ echo $ORACLE_HOME
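With the oracle environment in place on both nodes, the database software installer can be launched from Node 1. The staging path and DISPLAY value below are assumptions (adjust them to wherever the 11.2.0.2 database media was unpacked and to your X session); this is only a sketch:
bash-3.00# su - oracle
bash-3.00$ export DISPLAY=:0.0
bash-3.00$ /softs/database/runInstaller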
Create Cluster Database:
The database creation process will be performed from one of the RAC nodes. In this guide we will use racnode1 for database creation.
Use the DBCA utility to create the cluster database.
Node 1 (as root user):
bash-3.00# xhost +
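DBCA can then be started as the oracle user; a minimal sketch, assuming db_profile is in the oracle user's home directory and a local X session:
bash-3.00# su - oracle
bash-3.00$ . db_profile
bash-3.00$ export DISPLAY=:0.0
bash-3.00$ dbca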
Connect to the Database:
As oracle user:
bash-3.00$ sqlplus sys/sys@racdb as sysdba
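Once connected, a quick query against the standard dynamic views confirms where the single RAC One Node instance is currently running:
SQL> select instance_name, host_name, status from gv$instance;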
Check RAC One Node Status:
As oracle user:
(Node1)bash-3.00$ srvctl config database -d racdb
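For an 11.2.0.2 RAC One Node database, the output should report the database type as RAC One Node. Its signature capability, online relocation of the running instance to the other node, can also be driven from srvctl (shown only as a sketch of the syntax; run it only when a planned relocation is intended):
(Node1)bash-3.00$ srvctl status database -d racdb
(Node1)bash-3.00$ srvctl relocate database -d racdb -n racnode2 -w 30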
Congrats! This finishes Oracle RAC One Node Installation on Solaris 10.