Based on customer requests, Oracle has added the ability to create a second SCAN, completely independent of the SCAN defined/created during cluster creation. Why would you want to use this feature? A few reasons spring to mind:
- Consolidation: customers insist on using a different network
- Separate network for Data Guard traffic
To demonstrate the concept I am going to show you in this blog post how I
- Add a new network resource
- Create new VIPs
- Add a new SCAN
- Add a new SCAN listener
It sounds more complex than it actually is, but I have a feeling I will need to split this article into multiple parts as it would otherwise be far too long.
The lab setup
When you install RAC 11.2 or 12.1 you are prompted to specify a Single Client Access Name, or SCAN. This SCAN is usually defined in the corporate DNS server and resolves to three IP addresses, which allows for easy client-side load balancing. The SCAN is explained in more detail in Pro Oracle Database 11g RAC on Linux for 11.2 and on OTN for 11.2 and 12.1. To spice the whole configuration up a little bit I decided to use RAC One Node on the clusters used for this demonstration.
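As a quick aside, you can verify a SCAN's DNS setup yourself before relying on it. The name below is purely hypothetical; substitute your own SCAN:

```shell
# A correctly set up SCAN resolves to three A records; most DNS
# servers rotate the answer order per query (round-robin), which is
# what gives clients the load balancing. Replace the name with yours.
nslookup rac-scan.example.com
```

If you see fewer than three addresses in the answer, it is worth fixing DNS before continuing.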
I created two 12.1.0.1.2 clusters for this Data Guard test. Hosts ron12cprinode1 and ron12cprinode2 form the primary cluster; ron12csbynode1 and ron12csbynode2 form the standby cluster. The RAC One Node database is named RON:
[oracle@ron12cprinode1 ~]$ srvctl config database -db ron
Database unique name: ron
Database name: ron
Oracle home: /u01/app/oracle/product/12.1.0.1/dbhome_1
Oracle user: oracle
Spfile: +DATA/ron/spfilepri.ora
Password file: +DATA/ron/orapwpri
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ron
Database instances:
Disk Groups: RECO
Mount point paths:
Services: ron12c
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: pri
Candidate servers: ron12cprinode1,ron12cprinode2
Database is administrator managed
[oracle@ron12cprinode1 ~]$
To make things even more interesting, I defined my ORACLE_SID prefix as "pri" on the primary and "sby" on the standby.
[oracle@ron12cprinode1 ~]$ ps -ef | grep smon
oracle    2553     1  0 Feb06 ?        00:00:09 asm_smon_+ASM1
oracle   15660 15578  0 05:05 pts/3    00:00:00 grep smon
oracle   28241     1  0 Feb07 ?        00:00:18 ora_smon_pri_1
[oracle@ron12cprinode1 ~]$
A quick check with gpnptool reveals the network usage before the addition of the second SCAN:
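In case you want to reproduce the check: the profile can be dumped with gpnptool from the Grid home. Piping it through xmllint is purely for readability and assumes that tool is installed on your system:

```shell
# Dump the GPnP profile; gpnptool prints the XML on stdout.
# xmllint --format pretty-prints it (optional, not part of the Grid stack).
$GRID_HOME/bin/gpnptool get 2>/dev/null | xmllint --format -
```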
<gpnp:Network-Profile>
  <gpnp:HostNetwork id="gen" HostName="*">
    <gpnp:Network id="net1" IP="192.168.100.0" Adapter="eth0" Use="public"/>
    <gpnp:Network id="net2" IP="192.168.101.0" Adapter="eth1" Use="cluster_interconnect"/>
  </gpnp:HostNetwork>
</gpnp:Network-Profile>
There is the default network ("netnum 1"), created on the interface defined as "public" during the installation. I have another, spare network port (eth2) reserved for the new network and its Data Guard traffic. Currently network 1 is the only one available:
[root@ron12cprinode1 ~]# srvctl config network
Network 1 exists
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0, static
Subnet IPv6:
As you can see, RAC 12c now supports IPv6. I have another network available that I want to dedicate to Data Guard traffic. For this purpose I added all nodes to DNS. I am a bit old-fashioned when it comes to DNS and still use BIND most of the time. Here is an excerpt of my reverse name resolution file:
; hosts - primary cluster
50      PTR     ron12cprinode1.dg.example.com.
51      PTR     ron12cprinode1-vip.dg.example.com.
52      PTR     ron12cprinode2.dg.example.com.
53      PTR     ron12cprinode2-vip.dg.example.com.
; Data Guard SCAN - primary cluster
54      PTR     ron12cpri-dgscan.dg.example.com.
55      PTR     ron12cpri-dgscan.dg.example.com.
56      PTR     ron12cpri-dgscan.dg.example.com.
; hosts - standby cluster
57      PTR     ron12csbynode1.dg.example.com.
58      PTR     ron12csbynode1-vip.dg.example.com.
59      PTR     ron12csbynode2.dg.example.com.
60      PTR     ron12csbynode2-vip.dg.example.com.
; Data Guard SCAN - standby cluster
61      PTR     ron12csby-dgscan.dg.example.com.
62      PTR     ron12csby-dgscan.dg.example.com.
63      PTR     ron12csby-dgscan.dg.example.com.
The new network lives in the dg.example.com domain; primary database client traffic will continue to be routed through *.example.com.
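To make the split concrete on the client side, each network would typically get its own TNS alias pointing at the respective SCAN. This is only a sketch: the first-network SCAN name and the ports are assumptions; the service name ron12c is the one configured above:

```text
RON12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ron12cpri-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = ron12c)))

# Data Guard redo transport goes through the SCAN on the new network
RON12C_DG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ron12cpri-dgscan.dg.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = ron12c)))
```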
Adding the new network
The first step is to make Clusterware aware of the second network, which I am doing on both clusters. Notice that the primary nodes are called *pri* whereas the standby nodes are called *sby*:
[root@ron12cprinode1 ~]# srvctl add network -netnum 2 -subnet 192.168.102.0/255.255.255.0/eth2 -nettype static -verbose
Successfully added Network.

[root@ron12csbynode1 ~]# srvctl add network -netnum 2 -subnet 192.168.102.0/255.255.255.0/eth2 -nettype static -verbose
Successfully added Network.
This worked; I now have two networks:
[root@ron12cprinode1 ~]# srvctl config network
Network 1 exists
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0, static
Subnet IPv6:
Network 2 exists
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2, static
Subnet IPv6:
[root@ron12cprinode1 ~]#
In the next step I have to add VIPs for the nodes on the *.dg.example.com subnet. The VIPs must be added on all cluster nodes, four in my case. The syntax is shown by srvctl add vip -h:
[oracle@ron12cprinode2 ~]$ srvctl add vip -h
Adds a VIP to the Oracle Clusterware.
Usage: srvctl add vip -node <node_name> -netnum <network_number> -address {<name>|<ip>}/<netmask>[/if1[|if2...]] [-skip] [-verbose]
    -node <node_name>              Node name
    -address <vip_name|ip>/<netmask>[/if1[|if2...]]    VIP address specification for node applications
    -netnum <net_num>              Network number (default number is 1)
    -skip                          Skip reachability check of VIP address
    -verbose                       Verbose output
    -help                          Print usage
[oracle@ron12cprinode2 ~]$
So I did this on each node in my cluster:
[root@ron12cprinode1 ~]# srvctl add vip -node ron12cprinode1 -netnum 2 -address 192.168.102.51/255.255.255.0/eth2 -verbose
Network exists: 2/192.168.102.0/255.255.255.0/eth2, type static
Successfully added VIP.

[root@ron12cprinode2 ~]# srvctl add vip -node ron12cprinode2 -netnum 2 -address 192.168.102.53/255.255.255.0/eth2 -verbose
Network exists: 2/192.168.102.0/255.255.255.0/eth2, type static
Successfully added VIP.

[root@ron12csbynode1 ~]# srvctl add vip -node ron12csbynode1 -netnum 2 -address 192.168.102.58/255.255.255.0/eth2 -verbose
Network exists: 2/192.168.102.0/255.255.255.0/eth2, type static
Successfully added VIP.

[root@ron12csbynode2 ~]# srvctl add vip -node ron12csbynode2 -netnum 2 -address 192.168.102.60/255.255.255.0/eth2 -verbose
Network exists: 2/192.168.102.0/255.255.255.0/eth2, type static
Successfully added VIP.
And I need to start the VIPs. They have somewhat funny names, as you can see in crsctl status resource (the names cannot be chosen freely; see the output of srvctl add vip -h above).
[root@ron12cprinode1 ~]# srvctl status vip -vip ron12cprinode1_2
VIP 192.168.102.51 is enabled
VIP 192.168.102.51 is not running
[root@ron12cprinode1 ~]# srvctl start vip -vip ron12cprinode1_2
[root@ron12cprinode1 ~]# srvctl start vip -vip ron12cprinode2_2
[root@ron12cprinode1 ~]# srvctl status vip -vip ron12cprinode1_2
VIP 192.168.102.51 is enabled
VIP 192.168.102.51 is running on node: ron12cprinode1
[root@ron12cprinode1 ~]#
Adding the second SCAN
At this time you can add the second SCAN. The command syntax is shown here:
[oracle@ron12cprinode1 ~]$ srvctl add scan -h
Adds a SCAN VIP to the Oracle Clusterware.
Usage: srvctl add scan -scanname <scan_name> [-netnum <network_number>]
    -scanname <scan_name>          Domain name qualified SCAN name
    -netnum <net_num>              Network number (default number is 1)
    -subnet <subnet>/<netmask>[/if1[|if2...]]     NET address specification for network
    -help                          Print usage
Executed on my first cluster node, the command is easier to comprehend:
[root@ron12cprinode1 ~]# srvctl add scan -scanname ron12cpri-dgscan.dg.example.com -netnum 2
[root@ron12cprinode1 ~]# srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node ron12cprinode2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node ron12cprinode1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node ron12cprinode1
The SCAN needs to be created on both clusters. On my primary cluster it has been created with the following configuration:
[root@ron12cprinode1 ~]# srvctl config scan -netnum 2
SCAN name: ron12cpri-dgscan.dg.example.com, Network: 2
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2
Subnet IPv6:
SCAN 0 IPv4 VIP: 192.168.102.54
SCAN name: ron12cpri-dgscan.dg.example.com, Network: 2
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2
Subnet IPv6:
SCAN 1 IPv4 VIP: 192.168.102.55
SCAN name: ron12cpri-dgscan.dg.example.com, Network: 2
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2
Subnet IPv6:
SCAN 2 IPv4 VIP: 192.168.102.56
[root@ron12cprinode1 ~]#
You can see the new VIPs in the output of ifconfig, just as you would with the primary SCAN:
eth2      Link encap:Ethernet  HWaddr 52:54:00:FE:E2:D5
          inet addr:192.168.102.50  Bcast:192.168.102.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fefe:e2d5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:523 errors:0 dropped:0 overruns:0 frame:0
          TX packets:339 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:118147 (115.3 KiB)  TX bytes:72869 (71.1 KiB)

eth2:1    Link encap:Ethernet  HWaddr 52:54:00:FE:E2:D5
          inet addr:192.168.102.55  Bcast:192.168.102.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
There is nothing too surprising in the output; it looks exactly the same as for the public SCAN created during the installation.
End of part 1
This already seems like a lot of text, so I think it's time to pause here. The next parts will demonstrate the addition of the SCAN listeners, the new node listeners on the *.dg.example.com network, and finally the duplication of the primary RAC One Node database for use as a standby database.
Hi Martin,
We found your blog outstanding, but we are having problems setting up the service on the second network; it is erroring out. We got the error below and thought that we did not need a second listener. Is it an absolute requirement, as stated in MetaLink article 1560202.1?
PRCD-1135 : There is no listener defined for network 172.30.128.0/255.255.255.0 to be used by service dwcodev_data_svc
Thanks, Nick
Hi Nick,
Thanks for using the instructions on the blog as your reference. Two things come to mind: first, this was written for 12.1.0.1; second, I think I cover the addition of the node listeners in part 2 of the article, with additional information. I'll check today for relevance to 12.1.0.2.
Martin
Yep, it still works as described in 12.1.0.2. You may need to check the status of the second listener (listener_dg in my example) for registered databases. If you don't see any, check its port number; in my cluster it defaulted to 1523. If Clusterware doesn't set listener_networks, you will need to set it yourself.
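For reference, a manually maintained listener_networks setting would look roughly like this. Treat it as a sketch: the first-network SCAN and VIP names are made up for illustration, and the ports must match your actual listeners (listener_dg defaulted to 1523 for me):

```sql
-- Sketch only: adjust all names and ports to your environment.
-- One entry per network so PMON registers services with both the
-- local and the SCAN listeners of each network.
ALTER SYSTEM SET listener_networks =
  '((NAME=network1)(LOCAL_LISTENER=ron12cprinode1-vip.example.com:1521)(REMOTE_LISTENER=ron12cpri-scan.example.com:1521))',
  '((NAME=network2)(LOCAL_LISTENER=ron12cprinode1-vip.dg.example.com:1523)(REMOTE_LISTENER=ron12cpri-dgscan.dg.example.com:1523))'
  SCOPE=both SID='pri_1';
```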
Martin