
Adding another node for RAC 11.2.0.3 on Oracle Linux 6.1 with kernel-UEK

As I have hinted at during my last post about installing Oracle 11.2.0.3 on Oracle Linux 6.1 with Kernel UEK, I have planned another article about adding a node to a cluster.

I deliberately started the installation of my RAC system with only one node to spare my moderately spec’d hardware before adding a second cluster node. In previous versions of Oracle there was a problem with node additions: the pre-requisite checks performed by the $GRID_HOME/oui/bin/addNode.sh script used to fail when ASMLib was in use. Unfortunately, due to my setup I couldn’t test whether that has been solved, since I didn’t use ASMLib.
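Should you run into that problem, the commonly cited workaround is to tell addNode.sh to skip its internal pre-requisite check and to rely on a manual cluvfy run instead. A sketch only, untested in my environment since I don’t use ASMLib:

[oracle@rac11203node1 bin]$ # skip the built-in check that used to trip over ASMLib
[oracle@rac11203node1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@rac11203node1 bin]$ ./addNode.sh ...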

Cluvfy

As with most cluster operations on non-Exadata systems, you should use the cluvfy tool to ensure that the system you want to add to the cluster meets the requirements. Since I am about to add a node, the stage has to be “-pre nodeadd”. rac11203node1 is the active cluster node, and rac11203node2 is the one I want to add. Note that you run the command from any existing node, specifying the nodes to be added with the “-n” parameter; for convenience I have added the “-fixup” option to generate fixup scripts if needed. An example invocation is sketched below. Also note that this is a lab environment: a real production environment would use dm-multipath for storage and a bonded pair of NICs for the public network. Since 11.2.0.2 you no longer need to bond your private NICs, Oracle does that for you now.
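Putting the above together, the invocation would look something like this (the fixup directory is my choice):

[oracle@rac11203node1 ~]$ cluvfy stage -pre nodeadd -n rac11203node2 -fixup -fixupdir /tmp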


Build your own RAC system part IV – extending Grid Infrastructure

After having experimented with my 11.2.0.1 Clusterware setup in the lab, I thought it was time to put the promise of easy extensibility to the test and run addNode.sh to extend the cluster to two nodes. That used to be quite easy in 10.2 and 11.1: addNode.sh fired up the OUI GUI, you filled in the new node’s public, private and virtual IP addresses, and off it went. A few root script executions later you had an extended cluster. Admittedly you still needed to add the ASM and RDBMS homes, but that was a piece of cake after the Clusterware extension. How easily could that be done with 11.2?
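To give away part of the answer: in 11.2 addNode.sh no longer starts a GUI but runs OUI in silent mode, driven by name/value pairs on the command line. A minimal sketch with my node names; the VIP hostname is an assumption:

[oracle@rac11gr2node1 ~]$ cd $GRID_HOME/oui/bin
[oracle@rac11gr2node1 bin]$ ./addNode.sh -silent \
>   "CLUSTER_NEW_NODES={rac11gr2node2}" \
>   "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac11gr2node2-vip}"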

CLUVFY enhancements

Some of the enhanced options in cluvfy are really handy in this case. You could run it with “comp peer” and “stage -post hwos” to check for any problems beforehand. The ultimate weapon, though, is “stage -pre nodeadd”: you execute the command on one of the existing nodes, passing in the name of the node to be added. The output of the command is trimmed down for readability:

[oracle@rac11gr2node1 ~]$ cluvfy stage -pre  nodeadd -n rac11gr2node2  -fixup -fixupdir /tmp

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac11gr2node1"

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

Node connectivity check passed

Node connectivity passed for subnet "192.168.1.0" with node(s) rac11gr2node1,rac11gr2node2
TCP connectivity check passed for subnet "192.168.1.0"

Node connectivity passed for subnet "192.168.0.0" with node(s) rac11gr2node1,rac11gr2node2
TCP connectivity check passed for subnet "192.168.0.0"

Interfaces found on subnet "192.168.1.0" that are likely candidates for VIP are:
rac11gr2node1 eth0:192.168.1.90 eth0:192.168.1.92 ...
rac11gr2node2 eth0:192.168.1.91

Interfaces found on subnet "192.168.0.0" that are likely candidates for a private interconnect are:
rac11gr2node1 eth1:192.168.0.90
rac11gr2node2 eth1:192.168.0.91

Node connectivity check passed
PRVF-5415 : Check to see if NTP daemon is running failed
Clock synchronization check using Network Time Protocol(NTP) failed

Fixup information has been generated for following node(s):
rac11gr2node2
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.1.0_oracle/runfixup.sh'

Pre-check for node addition was unsuccessful on all the nodes.
[oracle@rac11gr2node1 ~]$
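The NTP failure is no surprise since ntpd was not running on my lab nodes. One way to satisfy the check, assuming a stock Oracle Linux ntpd configuration, is to start the daemon on each node with the slewing option Oracle insists on; alternatively you can deconfigure NTP altogether and let the Cluster Time Synchronization Service run in active mode. A sketch of the former:

[root@rac11gr2node2 ~]# # add -x so ntpd slews the clock rather than stepping it
[root@rac11gr2node2 ~]# grep OPTIONS /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
[root@rac11gr2node2 ~]# service ntpd start && chkconfig ntpd on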
