Tag Archives: RAC

RAC 12c enhancements: adding an additional SCAN-part 1

Based on customer requests, Oracle has added the ability to create a second SCAN, completely independent of the SCAN defined during cluster creation. Why would you want to use this feature? A few reasons spring to mind:

  • Consolidation: customers insist on using a different network
  • Separate network for Data Guard traffic

To demonstrate the concept, in this blog post I am going to show you how I

  1. Add a new network resource
  2. Create new VIPs
  3. Add a new SCAN
  4. Add a new SCAN listener

It actually sounds more complex than it is, but I have a feeling I need to split this article into multiple parts as it would otherwise be far too long.

The lab setup

When you install RAC 11.2 or 12.1 you are prompted to specify a Single Client Access Name, or SCAN. This SCAN is usually defined in the corporate DNS server and resolves to 3 IP addresses. This allows for an easy way to implement client-side load balancing. The SCAN is explained in more detail in Pro Oracle Database 11g RAC on Linux for 11.2 and on OTN for 11.2 and 12.1. To spice the whole configuration up a little bit I decided to use RAC One Node on the clusters I am using for this demonstration.

I created two 12.1.0.1.2 clusters for this Data Guard test. Hosts ron12cprinode1 and ron12cprinode2 form the primary cluster; ron12csbynode1 and ron12csbynode2 form the standby cluster. The RAC One Node database is named RON:

[oracle@ron12cprinode1 ~]$ srvctl config database -db ron
Database unique name: ron
Database name: ron
Oracle home: /u01/app/oracle/product/12.1.0.1/dbhome_1
Oracle user: oracle
Spfile: +DATA/ron/spfilepri.ora
Password file: +DATA/ron/orapwpri
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ron
Database instances:
Disk Groups: RECO
Mount point paths:
Services: ron12c
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: pri
Candidate servers: ron12cprinode1,ron12cprinode2
Database is administrator managed
[oracle@ron12cprinode1 ~]$

To make things even more interesting I defined my ORACLE_SID prefix as “pri” on the primary and “sby” on the standby.

[oracle@ron12cprinode1 ~]$ ps -ef | grep smon
oracle    2553     1  0 Feb06 ?        00:00:09 asm_smon_+ASM1
oracle   15660 15578  0 05:05 pts/3    00:00:00 grep smon
oracle   28241     1  0 Feb07 ?        00:00:18 ora_smon_pri_1
[oracle@ron12cprinode1 ~]$

A quick check with gpnptool reveals the network usage before the addition of the second SCAN:

<gpnp:Network-Profile>
 <gpnp:HostNetwork id="gen" HostName="*">
  <gpnp:Network id="net1" IP="192.168.100.0" Adapter="eth0" Use="public"/>
  <gpnp:Network id="net2" IP="192.168.101.0" Adapter="eth1" Use="cluster_interconnect"/>
 </gpnp:HostNetwork>
</gpnp:Network-Profile>

There is the default network (“netnum 1”), created on the subnet defined as “public” during the installation. I have a spare network port (eth2) reserved for the new network and the Data Guard traffic. Currently network 1 is the only one available.

[root@ron12cprinode1 ~]# srvctl config network
Network 1 exists
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0, static
Subnet IPv6:

As you can see, RAC 12c now supports IPv6. I have another network that I want to make available for Data Guard traffic, and for this purpose I added all nodes to DNS. I am a bit old-fashioned when it comes to DNS; I still use bind most of the time. Here is an excerpt of my reverse name resolution file:

; hosts - primary cluster
50	PTR	ron12cprinode1.dg.example.com.
51	PTR	ron12cprinode1-vip.example.com.
52	PTR	ron12cprinode2.dg.example.com.
53	PTR	ron12cprinode2-vip.dg.example.com.
; Data Guard SCAN - primary cluster
54	PTR	ron12cpri-scan.dg.example.com.
55	PTR	ron12cpri-scan.dg.example.com.
56	PTR	ron12cpri-scan.dg.example.com.

; hosts - standby cluster
57	PTR	ron12csbynode1.dg.example.com.
58	PTR	ron12csbynode1-vip.dg.example.com.
59	PTR	ron12csbynode2.dg.example.com.
60	PTR	ron12csbynode2-vip.dg.example.com.
; Data Guard SCAN - standby cluster
61	PTR	ron12csby-scan.dg.example.com.
62	PTR	ron12csby-scan.dg.example.com.
63	PTR	ron12csby-scan.dg.example.com.

The new domain is *.dg.example.com; the primary database client traffic will continue to be routed through *.example.com.
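
The forward zone is not shown above; a sketch of what the matching entries could look like, reusing the last octets from the PTR records and assuming the dg.example.com hosts live on the 192.168.102.0/24 network, would be:

; forward zone dg.example.com - sketch only, addresses assumed
ron12cprinode1      IN A    192.168.102.50
ron12cprinode1-vip  IN A    192.168.102.51
ron12cprinode2      IN A    192.168.102.52
ron12cprinode2-vip  IN A    192.168.102.53
ron12cpri-scan      IN A    192.168.102.54
ron12cpri-scan      IN A    192.168.102.55
ron12cpri-scan      IN A    192.168.102.56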

Adding the new network

The first step is to make Clusterware aware of the second network. I am doing this on both clusters. Notice that the primary nodes are called *pri* whereas the standby nodes are called *sby*.

[root@ron12cprinode1 ~]# srvctl add network -netnum 2 -subnet 192.168.102.0/255.255.255.0/eth2 -nettype static -verbose
Successfully added Network.

[root@ron12csbynode1 ~]# srvctl add network -netnum 2 -subnet 192.168.102.0/255.255.255.0/eth2 -nettype static -verbose
Successfully added Network.

So this worked; now I have two networks:

[root@ron12cprinode1 ~]# srvctl config network
Network 1 exists
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0, static
Subnet IPv6:
Network 2 exists
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2, static
Subnet IPv6:
[root@ron12cprinode1 ~]#

In the next step I have to add VIPs on the *.dg.example.com subnet for the existing nodes. The VIPs must be added for all cluster nodes, four in my case.

[oracle@ron12cprinode2 ~]$ srvctl add vip -h

Adds a VIP to the Oracle Clusterware.

Usage: srvctl add vip -node <node_name> -netnum <network_number> -address {<name>|<ip>}/<netmask>[/if1[|if2...]] [-skip] [-verbose]
    -node     <node_name>          Node name
    -address                       <vip_name|ip>/<netmask>[/if1[|if2...]] VIP address specification for node applications
    -netnum   <net_num>            Network number (default number is 1)
    -skip                          Skip reachability check of VIP address
    -verbose                       Verbose output
    -help                          Print usage
[oracle@ron12cprinode2 ~]$

So I did this for each node in my clusters:

[root@ron12cprinode1 ~]# srvctl add vip -node ron12cprinode1 -netnum 2 -address 192.168.102.51/255.255.255.0/eth2 -verbose
Network exists: 2/192.168.102.0/255.255.255.0/eth2, type static
Successfully added VIP.
[root@ron12cprinode2 ~]# srvctl add vip -node ron12cprinode2 -netnum 2 -address 192.168.102.53/255.255.255.0/eth2 -verbose
Network exists: 2/192.168.102.0/255.255.255.0/eth2, type static
Successfully added VIP.

[root@ron12csbynode1 ~]# srvctl add vip -node ron12csbynode1 -netnum 2 -address 192.168.102.58/255.255.255.0/eth2 -verbose
Network exists: 2/192.168.102.0/255.255.255.0/eth2, type static
Successfully added VIP.
[root@ron12csbynode2 ~]# srvctl add vip -node ron12csbynode2 -netnum 2 -address 192.168.102.60/255.255.255.0/eth2 -verbose
Network exists: 2/192.168.102.0/255.255.255.0/eth2, type static
Successfully added VIP.

And I need to start the VIPs. They have some funny names, as you can see in crsctl status resource (the names can’t be chosen freely; see the output of srvctl add vip -h above).

[root@ron12cprinode1 ~]# srvctl status vip -vip ron12cprinode1_2
VIP 192.168.102.51 is enabled
VIP 192.168.102.51 is not running
[root@ron12cprinode1 ~]# srvctl start vip -vip ron12cprinode1_2
[root@ron12cprinode1 ~]# srvctl start vip -vip ron12cprinode2_2
[root@ron12cprinode1 ~]# srvctl status vip -vip ron12cprinode1_2
VIP 192.168.102.51 is enabled
VIP 192.168.102.51 is running on node: ron12cprinode1
[root@ron12cprinode1 ~]#

Add the second SCAN

At this time you can add the second SCAN. The command syntax is shown here:

[oracle@ron12cprinode1 ~]$ srvctl add scan -h

Adds a SCAN VIP to the Oracle Clusterware.

Usage: srvctl add scan -scanname <scan_name> [-netnum <network_number>]
    -scanname <scan_name>          Domain name qualified SCAN name
    -netnum  <net_num>             Network number (default number is 1)
    -subnet                        <subnet>/<netmask>[/if1[|if2...]] NET address specification for network
    -help                          Print usage

Executed on my first cluster node, the command is easier to comprehend:

[root@ron12cprinode1 ~]# srvctl add scan -scanname ron12cpri-dgscan.dg.example.com -netnum 2

[root@ron12cprinode1 ~]# srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node ron12cprinode2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node ron12cprinode1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node ron12cprinode1

You need to create the SCAN on both clusters. On my primary cluster the SCAN has been created with the following configuration:

[root@ron12cprinode1 ~]# srvctl config scan -netnum 2
SCAN name: ron12cpri-dgscan.dg.example.com, Network: 2
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2
Subnet IPv6:
SCAN 0 IPv4 VIP: 192.168.102.54
SCAN name: ron12cpri-dgscan.dg.example.com, Network: 2
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2
Subnet IPv6:
SCAN 1 IPv4 VIP: 192.168.102.55
SCAN name: ron12cpri-dgscan.dg.example.com, Network: 2
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2
Subnet IPv6:
SCAN 2 IPv4 VIP: 192.168.102.56
[root@ron12cprinode1 ~]#

You can see the new VIPs in the output of ifconfig, just as you would with the primary SCAN:

eth2      Link encap:Ethernet  HWaddr 52:54:00:FE:E2:D5
          inet addr:192.168.102.50  Bcast:192.168.102.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fefe:e2d5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:523 errors:0 dropped:0 overruns:0 frame:0
          TX packets:339 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:118147 (115.3 KiB)  TX bytes:72869 (71.1 KiB)

eth2:1    Link encap:Ethernet  HWaddr 52:54:00:FE:E2:D5
          inet addr:192.168.102.55  Bcast:192.168.102.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

So there is nothing too surprising in the output; it’s exactly the same behaviour as with the public SCAN created during the installation.

End of part 1

This already seems like a lot of text to me so I think it’s time to pause here. The next parts will demonstrate the addition of the SCAN listeners, the new node listeners on the *.dg.example.com network and finally the duplication of the primary RAC One Node database for use as a standby database.

RAC and Pluggable Databases

In preparation for the OUGN Spring Seminar, and to finally fulfill at least a part of my promise from July, I was getting ready to research RAC, PDBs and services for my demos. It turned out to be a lot more interesting than I first assumed.

RAC and Multi-Tenancy

The first attempt to really look at how this works started on my two-node cluster, where I created a RAC database: RAC12C, administrator managed, with instances RAC12C1 and RAC12C2. The database is registered in Clusterware. Clusterware and RDBMS are patched to the January PSU, i.e. 12.1.0.1.2.

The second step was to create a PDB for testing. It is named DEMOPDB and, by design, available on both of my instances. By the way, PDBs do not start automatically; it is your responsibility to open them when the database starts. I used a startup trigger for this in the Consolidation Book; have a look at it to see the example. There are other ways available, as I found out.
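
The trigger in the book is more elaborate; a minimal sketch of the idea (simply open every PDB whenever the CDB instance starts, assuming you have the privileges to create a database-level trigger) looks like this:

CREATE OR REPLACE TRIGGER open_all_pdbs
AFTER STARTUP ON DATABASE
BEGIN
  -- open all pluggable databases as soon as the CDB instance starts
  EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END;
/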

As soon as the PDB is opened (in all Oracle deployment types, not limited to RAC), a service with the same name as the PDB is automatically started. As it turned out, using that service is the most reliable way to connect to the PDB:

SQL> connect user/password@single-client-access-name/pdbname

This works really well. For single instance, substitute your hostname for the SCAN. Now if you would like to implement some more interesting features (TAF/FAN+FCF/Application Continuity) you can create an additional service, as sketched below. The srvctl syntax has changed: Oracle now uses long parameter names (-service instead of -s), which doesn’t really respect the UNIX/GNU convention (short parameter: single dash; long parameter name: double dash), but that’s how it is.
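
As an illustration of the long-option syntax, a service for the PDB could be added along these lines (the service name and instance placement are made up for this example):

# create a service for DEMOPDB, preferred on instance 1, available on instance 2
srvctl add service -db RAC12C -service demopdb_svc -pdb DEMOPDB \
    -preferred RAC12C1 -available RAC12C2
srvctl start service -db RAC12C -service demopdb_svc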


An introduction to Policy Managed Databases in 11.2 RAC

I just realised this week that I haven’t really detailed anything about policy managed RAC databases. I remembered having done some research about server pools way back when 11.2.0.1 came out. I promised to spend some time looking at the new type of database that comes with server pools, policy managed databases, but somehow didn’t get around to it. Since I’m lazy I’ll refer to these databases as PMDs from now on as it saves a fair bit of typing.

So how are PMDs different from Administrator Managed Databases?

First of all, you can have PMDs with RAC only, i.e. in a multi-instance active/active configuration. Before 11.2, RAC required you to tie an Oracle instance to a cluster node, which is why you see instance prefixes in a RAC spfile. Here is an example from my lab 11.2.0.3.6 cluster:

DEMO1.__db_cache_size=1073741824
[...]
DEMO2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/DEMO/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='...'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='DEMO'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=4558159872
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DEMOXDB)'
DEMO2.instance_number=2
DEMO1.instance_number=1
*.nls_language='ENGLISH'
*.nls_territory='UNITED KINGDOM'
*.open_cursors=300
*.pga_aggregate_target=310378496
*.processes=300
*.remote_listener='rac11gr2scan.example.com:1521'
*.remote_login_passwordfile='exclusive'
*.sessions=335
*.sga_target=1610612736
DEMO1.thread=1
DEMO2.thread=2
DEMO1.undo_tablespace='UNDOTBS2'
DEMO2.undo_tablespace='UNDOTBS1'

Note that the instance_number, thread and undo_tablespace parameters are manually (= administrator) managed. If these aren’t set, or are configured incorrectly, you will run into all sorts of fun.
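
For comparison, a policy managed database is not tied to named servers at all: it is placed in a server pool, and Oracle works out instance placement itself. A minimal sketch in 11.2 syntax (pool name, cardinality and paths are made up) would be along these lines:

# create a server pool with a minimum of one and a maximum of two servers
srvctl add srvpool -g demo_pool -l 1 -u 2

# register a database as policy managed by associating it with the pool
srvctl add database -d PMDDEMO -o /u01/app/oracle/product/11.2.0/dbhome_1 -g demo_pool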

Using Connection Manager to protect a database

I have known about Oracle’s connection manager (CMAN) for quite a while but never managed to use it in anger. In short there was no need to do so. Now however I have been asked to help in finding a solution to an interesting problem.

To cut a long story short, access to databases has to be governed via a mechanism that works well with firewalls and allows access to be regulated based on rule sets. CMAN can do that, and a few other things that are out of scope for this article.

The architecture I use in this post includes a number of hosts. To simulate end users I created a VM named client on the 192.168.99.0/24 network. The “router” is named CMAN in a fit of creativity, and it connects the 192.168.99.0/24 network to the database network, 192.168.100.0/24. The database server to which access is regulated via CMAN is named CMANDB. Crucially, there is no routing enabled on the CMAN host; in other words, no direct connection is possible from the client to the database server. Additional protection of the database host could have been achieved with firewall rules, but that is out of scope for this post.

Overview of Connection Manager 11.2

For those of you who aren’t familiar with CMAN, here’s a short summary (based on the official Oracle documentation).

Oracle Connection Manager (CMAN) allows clients to connect through a firewall [I haven’t verified this yet, ed]. CMAN is an executable that allows clients to connect despite a firewall being in place between the client and server. CMAN is similar to the listener in that it reads a configuration file [called cman.ora, ed], which contains the address on which Oracle Connection Manager listens for incoming connections. CMAN starts similarly to the listener and enters a LISTEN state.

This solution [to the firewall issue with TCP redirects, ed] makes the REDIRECT happen inside the firewall so the client will not see it; CMAN acts as a proxy service between the client and the real database listener.

Interestingly, Connection Manager is fully integrated into the FAN/FCF framework and equally suitable for UCP connection pools.

Installing Connection Manager

Connection Manager is part of the Oracle client, and you can install it by choosing the “custom” option. From the list of selectable options, pick “Oracle Net Listener” and “Oracle Connection Manager” plus any others you might need.

From there on it’s exactly the same as any other client installation. You should probably consider applying the latest PSU to the client installation.

Testing

A quick test with two separate networks reveals how the concept actually works. As I said in the introduction, the following hosts are used:

  • cman: 192.168.99.224
  • cmandb: 192.168.100.225
  • client: 192.168.99.31

The networks in use are:

  • Public network: 192.168.99.0/24
  • Private network: 192.168.100.0/24

Crucially, CMANDB is on a different network from the other hosts, with cman acting as the gateway on 192.168.99.224.

As indicated, connection manager has been installed on host “cman”, with IP 192.168.99.224 and listens on port 1521. The corresponding cman.ora file has been configured as follows in $CLIENT_HOME/network/admin:

cman1 =
  (configuration=
    (address=
      (protocol=tcp)(host=192.168.99.224)(port=1521)
    )
    (rule_list=
      (rule=(src=192.168.99.224)(dst=127.0.0.1)(srv=cmon)(act=accept))
      (rule=(src=192.168.99.0/24)(dst=192.168.100.225)(srv=*)(act=accept))
    )
  )

The file has been deliberately left in this minimalistic state; please consult the documentation for the available options. The CMAN listener – not to be confused with the net8 listener – listens on port 1521.

There are two minimalistic rules:

  • Rule number one is necessary to allow management access to Connection Manager. Without it, CMAN will not start.
  • Rule number two actually governs access to the databases on host CMANDB

Another rule might be needed if IPv6 is in use on the host. Should CMAN not start and complain about TNS-04014 and TNS-12529, check MOS note 1059938.1 for a solution.

The gateway host has 2 network interfaces, one for each network:

[root@cman ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3E:F2:34:56
          inet addr:192.168.99.224  Bcast:192.168.99.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1209 errors:0 dropped:0 overruns:0 frame:0
          TX packets:854 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:105895 (103.4 KiB)  TX bytes:147462 (144.0 KiB)
          Interrupt:17

eth1      Link encap:Ethernet  HWaddr 00:16:3E:52:4A:56
          inet addr:192.168.100.224  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:334 errors:0 dropped:0 overruns:0 frame:0
          TX packets:151 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:36425 (35.5 KiB)  TX bytes:22141 (21.6 KiB)
          Interrupt:16

Configuration on host “CMANDB”

Note that host 192.168.100.225 is on the private network! The database CMANDB has its local and remote listener configured using the below entries in tnsnames.ora:

[oracle@cmandb admin]$ cat tnsnames.ora
LOCAL_CMANDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(PORT = 1521)(HOST = 192.168.100.225))
    )
  )

REMOTE_CMANDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(PORT = 1521)(HOST = 192.168.100.224))
      (ADDRESS = (PROTOCOL = TCP)(PORT = 1521)(HOST = 192.168.100.225))
    )
  )

CMAN_LSNR =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(PORT = 1521)(HOST = 192.168.99.224))
    )
  )

Translated into human language, this means:

  • local listener is set to local_cmandb
  • remote listener is set to remote_cmandb
  • CMAN_LSNR is used for a test to verify that a connection to the CMAN listener is possible from this host

It should be pointed out at this stage that this applies to single instance databases. If I find the time I’ll write a post about CMAN and RAC later.
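
For reference, pointing the database at these aliases is a matter of setting the standard listener parameters; a sketch (not part of the original session) would be:

ALTER SYSTEM SET local_listener  = 'LOCAL_CMANDB'  SCOPE=BOTH;
ALTER SYSTEM SET remote_listener = 'REMOTE_CMANDB' SCOPE=BOTH;
ALTER SYSTEM REGISTER;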

The host had only one network interface, allowing connections to the CMAN host:

[root@cmandb ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3E:12:14:51
          inet addr:192.168.100.225  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:627422 errors:0 dropped:0 overruns:0 frame:0
          TX packets:456584 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2241758430 (2.0 GiB)  TX bytes:32751153 (31.2 MiB)

After the database’s local_listener and remote_listener parameters had been changed to LOCAL_CMANDB and REMOTE_CMANDB, the CMAN log on the gateway host recorded a number of service registrations. This is important as it allows Connection Manager to hand the connection request off to the database:

[oracle@cman trace]$ grep cmandb *
08-JUL-2011 10:19:55 * service_register * cmandb * 0
08-JUL-2011 10:25:28 * service_update * cmandb * 0
08-JUL-2011 10:35:28 * service_update * cmandb * 0
[...]
08-JUL-2011 11:15:10 * service_update * cmandb * 0
08-JUL-2011 11:15:28 * service_update * cmandb * 0
[oracle@cman trace]$

Additionally, the CMAN processes now know about the database service CMANDB:

CMCTL:cman1> show services
Services Summary...
Proxy service "cmgw" has 1 instance(s).
  Instance "cman", status READY, has 2 handler(s) for this service...
    Handler(s):
      "cmgw001" established:0 refused:0 current:0 max:256 state:ready
        <machine: 127.0.0.1, pid: 2298>
        (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=23589))
      "cmgw000" established:0 refused:0 current:0 max:256 state:ready
        <machine: 127.0.0.1, pid: 2294>
        (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=31911))
Service "cmandb" has 1 instance(s).
  Instance "cmandb", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
      REMOTE SERVER
        (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(PORT=1521)(HOST=192.168.100.225)))
Service "cmandbXDB" has 1 instance(s).
  Instance "cmandb", status READY, has 1 handler(s) for this service...
    Handler(s):
      "D000" established:0 refused:0 current:0 max:1022 state:ready
      DISPATCHER <machine: cmandb.localdomain, pid: 7167>
      (ADDRESS=(PROTOCOL=tcp)(HOST=cmandb.localdomain)(PORT=50347))
Service "cmon" has 1 instance(s).
  Instance "cman", status READY, has 1 handler(s) for this service...
    Handler(s):
      "cmon" established:1 refused:0 current:1 max:4 state:ready
      <machine: 127.0.0.1, pid: 2282>
     (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=59541))
The command completed successfully.
CMCTL:cman1>

The previous output has been obtained from cmctl, the CMAN control utility.

Connectivity Test

With the setup complete it was time to perform a test from a third host on the (public) network; its IP address is 192.168.99.31. The following tnsnames.ora entries were created:

[oracle@client admin]$ cat tnsnames.ora
CMAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PORT=1521)(HOST=192.168.99.224)(PROTOCOL=TCP))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = cmandb)
      (SERVER = DEDICATED)
    )
  )

DIRECT =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PORT=1521)(HOST=192.168.100.225)(PROTOCOL=TCP))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = cmandb)
      (SERVER = DEDICATED)
    )
  )

The CMAN entry uses the Connection Manager gateway host to connect to database CMANDB, whereas the DIRECT entry tries to bypass the latter. A tnsping should show whether or not this is possible. Before any of the testing can take place it is necessary to start CMAN using “cmctl startup -c cman1”.

[oracle@client admin]$ tnsping direct
TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 08-JUL-2011 11:19:47

Copyright (c) 1997, 2009, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias

Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PORT=1521)(HOST=192.168.100.225)(PROTOCOL=TCP))) (CONNECT_DATA = (SERVICE_NAME = cmandb) (SERVER = DEDICATED)))

TNS-12543: TNS:destination host unreachable

[oracle@client admin]$ tnsping cman

TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 08-JUL-2011 10:53:27

Copyright (c) 1997, 2009, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias

Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PORT=1521)(HOST=192.168.99.224)(PROTOCOL=TCP))) (CONNECT_DATA = (SERVICE_NAME = cmandb) (SERVER = DEDICATED)))

OK (0 msec)

[oracle@client admin]$

That proves that a direct connection is impossible, and also that the Connection Manager listener is working. A successful tnsping doesn’t imply that a connection is possible though; that requires an end-to-end test with SQL*Plus:

[oracle@client admin]$ sqlplus system/xxx@cman

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jul 8 10:55:39 2011

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select instance_name,host_name from v$instance;

INSTANCE_NAME
----------------
HOST_NAME
----------------------------------------------------------------
cmandb
cmandb.localdomain

The successful connection is also recorded in the CMAN log file:

Fri Jul 08 10:55:39 2011

08-JUL-2011 10:55:39 * (CONNECT_DATA=(SERVICE_NAME=cmandb)(SERVER=DEDICATED)(CID=(PROGRAM=sqlplus@client)(HOST=client)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.99.31)(PORT=16794)) * establish * cmandb * 0

(LOG_RECORD=(TIMESTAMP=08-JUL-2011 10:55:39)(EVENT=Ready)(CONN NO=0))

My idea of limiting access to the database via rules seems to be working.

How to use the RAC FAN Java API

While writing the workload management chapter for the RAC book, I researched an interesting new feature available with Oracle 11g R2: the so-called RAC FAN API. The RAC FAN API is documented in the Oracle® Database JDBC Developer’s Guide, 11g Release 2 (11.2), available online, but the initial documentation following the 11.2.0.1 release on Linux was pretty useless. The good news is that it has improved!

The RAC FAN Java API

The aim of this API is to allow a Java application to listen to FAN events by creating a subscription to the RAC nodes’ ONS processes. The application then registers a FANListener, based on the subscription, which can pick up instances of the following events:

  • ServiceDownEvent
  • NodeDownEvent
  • LoadAdvisoryEvent

All of these are in the oracle.simplefan package, the javadoc reference of which you can find in the official documentation.

Adding user equivalence for RAC the easy way

This is the first time I am setting up a new 11.2.0.2 cluster with the automatic SSH setup. Until now, I ensured user equivalence by copying the ssh RSA and DSA keys manually to all cluster nodes. For two nodes that’s not too bad, but recently someone asked a question about a 28 (!) node cluster on a mailing list I subscribe to. At that scale the whole process gets a bit too labour intensive.

So setting up user equivalence using a script may be the solution. You can also use OUI to do the same, but I like to run “cluvfy stage -post hwos” to check everything is ok before even thinking about executing ./runInstaller.

Here’s the output of a session; my two cluster nodes are acfsprodnode1 and acfsprodnode2 (yes, they are for 11.2 ACFS replication and encryption testing). I am using the grid user as the owner of Grid Infrastructure, and oracle to own the RDBMS binaries. Start by navigating to the location where you unzipped the Grid Infrastructure patch file, then change into the “sshsetup” directory and run the command:


[grid@acfsprdnode1 sshsetup]$ ./sshUserSetup.sh
Please specify a valid and existing cluster configuration file.
Either user name or host information is missing
Usage ./sshUserSetup.sh -user <user name> [ -hosts "<space separated hostlist>" | -hostfile <absolute path of cluster configuration file> ] [ -advanced ]  [ -verify] [ -exverify ] [ -logfile <desired absolute path of logfile> ] [-confirm] [-shared] [-help] [-usePassphrase] [-noPromptPassphrase]

Next execute the command. I opted for the noPromptPassphrase option, as I don’t use a passphrase for the key.

[grid@acfsprdnode1 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "acfsprdnode1 acfsprdnode2" -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2010-12-22-15-39-18.log
Hosts are acfsprdnode1 acfsprdnode2
user is grid
Platform:- Linux
Checking if the remote hosts are reachable
PING acfsprdnode1.localdomain (192.168.99.100) 56(84) bytes of data.
64 bytes from acfsprdnode1.localdomain (192.168.99.100): icmp_seq=1 ttl=64 time=0.017 ms
64 bytes from acfsprdnode1.localdomain (192.168.99.100): icmp_seq=2 ttl=64 time=0.019 ms
64 bytes from acfsprdnode1.localdomain (192.168.99.100): icmp_seq=3 ttl=64 time=0.017 ms
64 bytes from acfsprdnode1.localdomain (192.168.99.100): icmp_seq=4 ttl=64 time=0.017 ms
64 bytes from acfsprdnode1.localdomain (192.168.99.100): icmp_seq=5 ttl=64 time=0.018 ms

--- acfsprdnode1.localdomain ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.017/0.017/0.019/0.004 ms
PING acfsprdnode2.localdomain (192.168.99.101) 56(84) bytes of data.
64 bytes from acfsprdnode2.localdomain (192.168.99.101): icmp_seq=1 ttl=64 time=0.331 ms
64 bytes from acfsprdnode2.localdomain (192.168.99.101): icmp_seq=2 ttl=64 time=0.109 ms
64 bytes from acfsprdnode2.localdomain (192.168.99.101): icmp_seq=3 ttl=64 time=0.324 ms
64 bytes from acfsprdnode2.localdomain (192.168.99.101): icmp_seq=4 ttl=64 time=0.256 ms
64 bytes from acfsprdnode2.localdomain (192.168.99.101): icmp_seq=5 ttl=64 time=0.257 ms

--- acfsprdnode2.localdomain ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.109/0.255/0.331/0.081 ms
Remote host reachability check succeeded.
The following hosts are reachable: acfsprdnode1 acfsprdnode2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost acfsprdnode1
numhosts 2
The script will setup SSH connectivity from the host acfsprdnode1 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host acfsprdnode1
and the remote hosts without being prompted for passwords or confirmations.

NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.

NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host with empty passphrase
Generating public/private rsa key pair.
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
de:e3:66:fa:16:e8:6e:36:fd:c5:e3:77:75:07:9a:b0 grid@acfsprdnode1
Creating .ssh directory and setting permissions on remote host acfsprdnode1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host acfsprdnode1. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host acfsprdnode1.
Warning: Permanently added 'acfsprdnode1,192.168.99.100' (RSA) to the list of known hosts.
grid@acfsprdnode1's password:
Done with creating .ssh directory and setting permissions on remote host acfsprdnode1.
Creating .ssh directory and setting permissions on remote host acfsprdnode2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host acfsprdnode2. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host acfsprdnode2.
Warning: Permanently added 'acfsprdnode2,192.168.99.101' (RSA) to the list of known hosts.
grid@acfsprdnode2's password:
Done with creating .ssh directory and setting permissions on remote host acfsprdnode2.
Copying local host public key to the remote host acfsprdnode1
The user may be prompted for a password or passphrase here since the script would be using SCP for host acfsprdnode1.
grid@acfsprdnode1's password:
Done copying local host public key to the remote host acfsprdnode1
Copying local host public key to the remote host acfsprdnode2
The user may be prompted for a password or passphrase here since the script would be using SCP for host acfsprdnode2.
grid@acfsprdnode2's password:
Done copying local host public key to the remote host acfsprdnode2
cat: /home/grid/.ssh/known_hosts.tmp: No such file or directory
cat: /home/grid/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user grid.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--acfsprdnode1:--
Running /usr/bin/ssh -x -l grid acfsprdnode1 date to verify SSH connectivity has been setup from local host to acfsprdnode1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Dec 22 15:40:10 GMT 2010
------------------------------------------------------------------------
--acfsprdnode2:--
Running /usr/bin/ssh -x -l grid acfsprdnode2 date to verify SSH connectivity has been setup from local host to acfsprdnode2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Dec 22 15:40:10 GMT 2010
------------------------------------------------------------------------
SSH verification complete.
[grid@acfsprdnode1 sshsetup]$ ssh acfsprdnode1 hostname
acfsprdnode1
[grid@acfsprdnode1 sshsetup]$ ssh acfsprdnode2 hostname
acfsprodnode2
[grid@acfsprdnode1 sshsetup]$

Nice! That’s a lot of work taken away from me, and I can start running cluvfy now to fix problems before OUI warns me about shortcomings on my system.

You should note that, per the above output, the script only distributes the local ssh keys to the remote hosts. In OUI’s cluster node addition screen (6 of 16 in the advanced installation) you still need to click on the “SSH Connectivity” button and then on “Setup” after providing username and password to establish cluster-wide user equivalence.
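
For completeness, the cluvfy check mentioned at the start could now be run from the unzipped installation media, roughly like this (node names as used in this post):

./runcluvfy.sh stage -post hwos -n acfsprdnode1,acfsprdnode2 -verbose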

Installing RAC 11.2.0.2 on Solaris 10/09 x64

One of the major adventures this time of the year involves installing RAC 11.2.0.2 on Solaris 10 10/09 x86-64. The system setup included EMC Power Path 5.3 as the multipathing solution to shared storage.

I initially asked for four BL685 G6 servers with 24 cores each, but in the end “only” got two, which is still plenty of resources to experiment with. I especially like the output of this command:

$ /usr/sbin/psrinfo | wc -l
 24

Nice! Actually, it’s 4 Opteron processors:

$ /usr/sbin/prtdiag | less
System Configuration: HP ProLiant BL685c G6
 BIOS Configuration: HP A17 12/09/2009
 BMC Configuration: IPMI 2.0 (KCS: Keyboard Controller Style)
==== Processor Sockets ====================================
Version                          Location Tag
 -------------------------------- --------------------------
 Opteron                          Proc 1
 Opteron                          Proc 2
 Opteron                          Proc 3
 Opteron                          Proc 4


Build your own 11.2.0.2 stretched RAC

Finally time for a new series! With the arrival of the new 11.2.0.2 patchset I thought it was about time to try to set up a virtual 11.2.0.2 extended distance or stretched RAC. So, it’s virtual, fair enough. It doesn’t allow me to test things like the impact of latency on the inter-SAN communication, but it allowed me to test the general setup. Think of this series as a guide for after all the tedious work has been done and the SANs happily talk to each other. The example requires some understanding of how XEN virtualisation works, and it’s tailored to openSuSE 11.2 as the dom0 or “host”. I have tried OracleVM in the past, but back then a domU (or virtual machine) could not mount an iSCSI target without a kernel panic and reboot. Clearly not what I needed at the time. OpenSuSE has another advantage: it uses a recent kernel, not the 3 year old 2.6.18 you find in Enterprise distributions. Also, xen is recent (openSuSE 11.3 even features xen 4.0!) and so is libvirt.

The Setup

The general idea follows the design you find in the field, but with fewer cluster nodes. I am thinking of two nodes for the cluster, and two iSCSI target providers. I wouldn’t use iSCSI in the real world, but my lab isn’t connected to an EVA or similar. A third site will provide quorum via an NFS-provided voting disk.

Site A will consist of filer01 for the storage part, and edcnode1 as the RAC node. Site B will consist of filer02 and edcnode2. The iSCSI targets are going to be provided by openFiler domU installations, and the cluster nodes will make use of Oracle Enterprise Linux 5 update 5. To make it more realistic, site C will consist of another openFiler instance, filer03, to provide the NFS export for the third voting disk. Note that openFiler seems to support NFS v3 only at the time of this writing. All systems are 64bit.

The network connectivity will go through 3 virtual switches, all “host only” on my dom0.

  • Public network: 192.168.99/24
  • Private network: 192.168.100/24
  • Storage network: 192.168.101/24

As in the real world, the private and storage networks have to be separated to prevent iSCSI packets clashing with Cache Fusion traffic. Also, I increased the MTU for the private and storage networks to 9000 instead of the default 1500. If you’d like to use jumbo frames you should check whether your switch supports them.
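
On the Oracle Enterprise Linux 5 cluster nodes the larger MTU can be set in the interface configuration file; a sketch with made-up addresses, assuming eth1 is the private interconnect, looks like this:

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.100.11
NETMASK=255.255.255.0
MTU=9000
ONBOOT=yes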

Grid Infrastructure will use ASM to store OCR and voting disks, and the inter-SAN replication will also be performed by ASM in normal redundancy. I am planning on using preferred mirror read and intelligent data placement to see if that makes a difference.
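
Preferred mirror read is simply an initialisation parameter on each ASM instance; a sketch with made-up disk group and failure group names would be:

ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA' SID = '+ASM1';
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEB' SID = '+ASM2';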

Known limitations

This setup has some limitations, such as the following:

  • You cannot test inter-site SAN connectivity problems
  • You cannot make use of udev for the ASM devices: a xen domU doesn’t report anything back from /sbin/scsi_id, which makes the mapping to /dev/mapper impossible (maybe someone knows a workaround?)
  • Network interfaces are not bonded; you would certainly use bonded NICs in real life
  • No “real” fibre channel connectivity between the cluster nodes

So much for the introduction; I’ll post the setup step by step. The intended series will consist of these articles:

  1. Introduction to XEN on openSuSE 11.2 and dom0 setup
  2. Introduction to openFiler and their installation as a virtual machine
  3. Setting up the cluster nodes
  4. Installing Grid Infrastructure 11.2.0.2
  5. Adding third voting disk on NFS
  6. Installing RDBMS binaries
  7. Creating a database

That’s it for today; I hope I got you interested in following the series. It’s been real fun doing it; now it’s all about writing it up.

Moving OCR and voting disks to block devices in CRS 10.2

If anyone can suggest a shorter title for this, you are welcome to comment! Initially I wanted to have this in the first of today’s posts, but got sidetracked. So here’s the next step in the CRS 10.2.0.5 installation: moving off the deprecated raw devices to block devices. This is something you can do with OUI in 11.1, but in 10.2 it either requires raw devices or a clustered file system. Here’s the step-by-step protocol of my session.

The Setup

Before talking about what I am doing, a little background information is in order. This example uses single pathing; the difference to multipathing is marginal, just remember to use the logical device names instead of the physical ones. With device-mapper multipathing this might be something like /dev/mapper/ocrvotep{5,6,7,8,9} in case you map the WWID for the OCR and voting disks to a logical name called “ocrvote”. If you present only one device to the server to store all copies of OCR and voting disk, make sure to have it well protected on the array! With this said, here is my setup. First the voting disk; it went to /dev/raw/raw2 during the installation:

[root@crs102node1 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2

located 1 votedisk(s).

My OCR doesn’t have a mirror and is on /dev/raw/raw1:

[root@crs102node1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
 Version                  :          2
 Total space (kbytes)     :    1003728
 Used space (kbytes)      :       2008
 Available space (kbytes) :    1001720
 ID                       : 1189351742
 Device/File Name         : /dev/raw/raw1
 Device/File integrity check succeeded

 Device/File not configured

 Cluster registry integrity check succeeded

Now let’s change directory to $ORA_CRS_HOME/bin before the action starts.
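
In outline, and with hypothetical block device names /dev/sda5 and /dev/sda6 (ownership and permissions already prepared), the 10.2 commands are along these lines:

# replace the OCR location with a block device (run as root)
./ocrconfig -replace ocr /dev/sda5

# with CRS stopped on all nodes, add the new voting disk and remove the raw device
./crsctl add css votedisk /dev/sda6 -force
./crsctl delete css votedisk /dev/raw/raw2 -force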


Change SCAN post installation

A quick note to myself on how to change the SCAN from /etc/hosts to DNS. I have set up a small DNS server for a local interface on my dom0 to help me resolve problems around TAF and session failover in general. This is a test/dev/playground cluster; you should never be in this situation with anything used “for real”! Multiple IP addresses for a SCAN are needed since you no longer specify all the node VIPs in tnsnames.ora, and there is no FAILOVER=yes either! DNS round robin now provides this for you.
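
In outline, once the SCAN name resolves to the new addresses in DNS, the change itself comes down to a handful of srvctl commands (a sketch only; the SCAN name below is made up):

srvctl stop scan_listener
srvctl stop scan
srvctl modify scan -n rac11gr2scan.example.com
srvctl modify scan_listener -u
srvctl start scan
srvctl start scan_listener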