UPDATE 221103: Oracle 11.2 is effectively out of support, this article is now archived and should not be referred to
After having experimented with my 11.2.0.1 Clusterware setup in the lab, I thought it was time to put the promise of easy extensibility to the test and run addNode.sh to extend the cluster to two nodes. That used to be quite easy in 10.2 and 11.1: addNode.sh fired up the GUI, you filled in the new public, private and virtual IPs, and off it went. A few script executions as root later you had an extended cluster. Well, you still needed to add the ASM and RDBMS homes, but that was a piece of cake after the Clusterware extension. How easily could that be done with 11.2?
CLUVFY enhancements
Some of the enhanced options in cluvfy are really handy in this case. You could run it with "comp peer" and "stage -post hwos" to check for any problems beforehand. The ultimate weapon, though, is "stage -pre nodeadd". You execute it on one of the existing nodes and pass the name of the new node. The output of the command below has been trimmed for readability:
[oracle@rac11gr2node1 ~]$ cluvfy stage -pre nodeadd -n rac11gr2node2 -fixup -fixupdir /tmp

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac11gr2node1"

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed

Node connectivity passed for subnet "192.168.1.0" with node(s) rac11gr2node1,rac11gr2node2
TCP connectivity check passed for subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.0.0" with node(s) rac11gr2node1,rac11gr2node2
TCP connectivity check passed for subnet "192.168.0.0"

Interfaces found on subnet "192.168.1.0" that are likely candidates for VIP are:
rac11gr2node1 eth0:192.168.1.90 eth0:192.168.1.92
...
rac11gr2node2 eth0:192.168.1.91

Interfaces found on subnet "192.168.0.0" that are likely candidates for a private interconnect are:
rac11gr2node1 eth1:192.168.0.90
rac11gr2node2 eth1:192.168.0.91

Node connectivity check passed

PRVF-5415 : Check to see if NTP daemon is running failed
Clock synchronization check using Network Time Protocol(NTP) failed

Fixup information has been generated for following node(s):
rac11gr2node2
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.1.0_oracle/runfixup.sh'

Pre-check for node addition was unsuccessful on all the nodes.
[oracle@rac11gr2node1 ~]$
Whatever I tried, it always complained about the NTP daemon. I thought CTSS would take care of cluster time synchronisation. It doesn't affect the cluster, so there is not too much to worry about.
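If you want to see for yourself what CTSS is doing, you can query it directly. With an NTP configuration detected, CTSS runs in observer mode and leaves the actual synchronisation to ntpd. A quick check, run from any cluster node (adjust paths to your Grid Infrastructure home):

```shell
# Report whether the Cluster Time Synchronization Service is actively
# synchronising or merely observing; with ntpd configured it should
# report observer mode.
crsctl check ctss
```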
The fixup option allows you to create a script to address fixable problems, such as missing/wrong kernel parameters. You execute the script on the node you’d like to add:
[root@rac11gr2node2 CVU_11.2.0.1.0_oracle]# ./runfixup.sh
Response file being used is :./fixup.response
Enable file being used is :./fixup.enable
Log file location: ./orarun.log
Setting Kernel Parameters...
fs.file-max = 327679
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576
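You can verify that the fixup actually took effect by reading the parameters back from the running kernel; a quick sanity check using the parameter names from the output above:

```shell
# Read the current values back from the running kernel; the values
# should match what runfixup.sh reported setting.
sysctl -n fs.file-max
sysctl -n net.ipv4.ip_local_port_range
sysctl -n net.core.wmem_max
```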
That’s great! As with many of these features you wonder why they haven’t been implemented before!
addNode.sh
The next step then involves invoking addNode.sh in $ORA_CRS_HOME/oui/bin. Another change: addNode.sh is now "headless", i.e. there is no more GUI installation, even if you wanted one. A silent installation was possible before, but for me that meant reading up on the OUI and OPatch guide for Linux and Windows. Furthermore, you have different options depending on whether or not you use GNS. Whilst the documentation claims that all that is needed to extend the Grid Infrastructure with GNS is a quick addNode.sh -silent "CLUSTER_NEW_NODES={newNodePublicIP}", this didn't work. My OUI session consistently complained about this:
SEVERE:Number of new nodes being added are not equal to number of new virtual nodes. Silent install cannot continue.
As with all OUI sessions, you'll find a set of logs in $ORACLE_BASE/oraInventory/logs/.
I already suspected that OUI wasn't aware of GNS. I have raised it with Oracle; there wasn't an OUI patch out for 11.2 yet when I looked for it. How could that have slipped past testing? GNS is one of the most interesting features in 11.2, surely it must have been tested? Maybe not.
Passing the non-GNS syntax actually worked even in my case, and I successfully extended the cluster to two nodes. After the execution of root.sh, even the oratab was set up correctly and defined +ASM2. I was pleasantly surprised.
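For reference, an oratab entry for the second ASM instance follows the usual SID:ORACLE_HOME:startup-flag format; on my setup the line added to /etc/oratab on the new node would look roughly like this (the Grid home path is specific to my lab):

```
+ASM2:/u01/crs/oracle/product/11.2.0/grid:N
```

The trailing N simply means the instance is not started by the legacy dbstart script; Clusterware manages it instead.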
The command that worked referenced CLUSTER_NEW_NODES and CLUSTER_NEW_VIRTUAL_HOSTNAMES:
./addNode.sh -silent "CLUSTER_NEW_NODES={rac11gr2node2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac11gr2node2-vip}"
Output from root.sh
Using username "oracle". Authenticating with public key "martins key" from agent Last login: Sun Oct 11 20:50:47 2009 from 192.168.1.11 [oracle@rac11gr2node2 ~]$ su - Password: [root@rac11gr2node2 ~]# /u01/app/oraInventory/orainstRoot.sh Creating the Oracle inventory pointer file (/etc/oraInst.loc) Changing permissions of /u01/app/oraInventory. Adding read,write permissions for group. Removing read,write,execute permissions for world. Changing groupname of /u01/app/oraInventory to oinstall. The execution of the script is complete. [root@rac11gr2node2 ~]# /u01/crs/oracle/product/11.2.0/grid/root.sh 2>&1 | tee /tmp/root.sh.out Running Oracle 11g root.sh script... The following environment variables are set as: ORACLE_OWNER= oracle ORACLE_HOME= /u01/crs/oracle/product/11.2.0/grid Enter the full pathname of the local bin directory: [/usr/local/bin]: Copying dbhome to /usr/local/bin ... Copying oraenv to /usr/local/bin ... Copying coraenv to /usr/local/bin ... Creating /etc/oratab file... Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created ... CRS-2672: Attempting to start 'ora.evmd' on 'rac11gr2node2' CRS-2676: Start of 'ora.evmd' on 'rac11gr2node2' succeeded clscfg: EXISTING configuration version 5 detected. clscfg: version 5 is 11g Release 2. Successfully accumulated necessary OCR keys. Creating OCR keys for user 'root', privgrp 'root'.. Operation successful. rac11gr2node2 2009/10/11 20:57:50 /u01/crs/oracle/product/11.2.0/grid/cdata/rac11gr2node2/backup_20091011_205750.olr Preparing packages for installation... cvuqdisk-1.0.7-1 Configure Oracle Grid Infrastructure for a Cluster ... succeeded Updating inventory properties for clusterware Starting Oracle Universal Installer... Checking swap space: must be greater than 500 MB. Actual 511 MB Passed The inventory pointer is located at /etc/oraInst.loc The inventory is located at /u01/app/oraInventory 'UpdateNodeList' was successful. 
[root@rac11gr2node2 ~]# uname -a
Linux rac11gr2node2 2.6.18-164.el5xen #1 SMP Thu Sep 3 02:41:56 EDT 2009 i686 i686 i386 GNU/Linux
[root@rac11gr2node2 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.4 (Tikanga)
[root@rac11gr2node2 ~]# grep -i name /proc/cpuinfo
model name      : Intel(R) Core(TM)2 Quad CPU @ 2.40GHz
model name      : Intel(R) Core(TM)2 Quad CPU @ 2.40GHz
[root@rac11gr2node2 ~]#
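The pre-check stage used earlier has a matching post-check counterpart, which is a good way to confirm the node addition really completed cleanly; a sketch, run from one of the existing nodes:

```shell
# Verify the freshly added node from an existing cluster member
cluvfy stage -post nodeadd -n rac11gr2node2 -verbose
```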
So with the solid foundation in place I'm now ready to install the RDBMS software; the goal is to experiment with RAC One Node. I know it's a slightly flawed test running this out of Oracle VM, but I don't have the hardware, especially the SAN, to test a different setup. Or maybe I should experiment with OpenFiler and iSCSI…
Reference
http://download.oracle.com/docs/cd/E11882_01/rac.112/e10717/adddelclusterware.htm#BEIHHFAG
Responses
Martin,
RE: The following error:
PRVF-5415 : Check to see if NTP daemon is running failed
Clock synchronization check using Network Time Protocol(NTP) failed
I was able to correct this on my local systems by changing this line in the /etc/sysconfig/ntpd file:
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid"
to
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
Note the addition of the "-x"
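Steve's fix can be scripted if you have several nodes to touch. The sketch below works on a scratch copy so it can be run safely anywhere; on a real host you would point it at /etc/sysconfig/ntpd and restart ntpd afterwards:

```shell
# Demonstrate adding the -x (slew) flag to ntpd's options on a scratch
# copy of the file; /tmp/ntpd.sysconfig stands in for /etc/sysconfig/ntpd.
NTPD_CONF=/tmp/ntpd.sysconfig
echo 'OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid"' > "$NTPD_CONF"

# Insert -x right after the opening quote unless it is already present
grep -q '^OPTIONS=.*-x' "$NTPD_CONF" || \
  sed -i 's/^OPTIONS="/OPTIONS="-x /' "$NTPD_CONF"

cat "$NTPD_CONF"
# On the real host, follow up with: service ntpd restart
```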
Thanks – Steve
Magic,
thank you for your comment Steve!
Hi, I have an exam question, still unsure about the answer.
Appreciate if you help
Your production environment cluster is running Oracle Enterprise Linux and currently has four nodes.
You are asked to plan for extending the cluster to six nodes. Which three methods are available to add the new nodes?
a) silent cloning using crsctl clone cluster and ssh
b) a GUI interface from Enterprise Manager
c) with the Oracle Universal Installer using runInstaller -clone
d) silent cloning using perl clone.pl -silent, either with parameters in a file or inline
e) using addNode.sh
Hi John,
thanks for passing by. The idea of the RAC certified expert exam is to encourage people to try things out in their own environments, which is when you learn most. I would like to encourage you to build a one-node RAC system and extend it to multiple nodes using your preferred approach. There are plenty of resources out there, including this blog, that guide you through the process. You will appreciate this encouragement to learn and understand RAC once you sit the exam.
Martin
Hi Martin
Thanks for your response.
I have found good articles about adding a new node in a RAC environment.
I know that I can add a node with addNode.sh (e) and by silent cloning with perl clone.pl (d).
I am just wondering whether this can also be done using the Oracle Universal Installer (c) or crsctl clone cluster (a).
I know that there is no way to do it with Enterprise Manager