Posted by Martin Bach on May 26, 2011
While writing the workload management chapter for the RAC book, I researched an interesting new feature available with Oracle 11g Release 2: the so-called RAC FAN API. The RAC FAN API is documented in the Oracle® Database JDBC Developer's Guide, 11g Release 2 (11.2), available online, but the initial documentation following the 11.2.0.2 release on Linux was pretty useless. The good news is that it has improved!
The RAC FAN Java API
The aim of this API is to allow a Java application to listen to FAN events by creating a subscription to the RAC nodes' ONS processes. The application then registers a FANListener, based on the subscription, which can pick up instances of the various FAN event classes.
All of these are in the oracle.simplefan namespace, the javadoc reference of which you can find in the official documentation.
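In outline, using the API looks something like the sketch below. Treat it as a rough illustration rather than working code: the property keys, the listener callback signatures, and the host/service names are assumptions from my reading of the guide, so check them against the oracle.simplefan javadoc, and note that ons.jar and simplefan.jar must be on the classpath.

```java
import java.util.Properties;

import oracle.simplefan.FanEventListener;
import oracle.simplefan.FanManager;
import oracle.simplefan.FanSubscription;

public class FanDemo {
    public static void main(String[] args) throws Exception {
        // Point the FanManager at the RAC nodes' ONS remote ports
        // (host names and port are examples)
        FanManager mgr = FanManager.getInstance();
        Properties cfg = new Properties();
        cfg.setProperty("onsNodes", "racnode1:6200,racnode2:6200");
        mgr.configure(cfg);

        // Subscribe for events concerning one database service
        // ("serviceName" key and service are assumptions - see the javadoc)
        Properties subProps = new Properties();
        subProps.setProperty("serviceName", "MYSERVICE");
        FanSubscription sub = mgr.subscribe(subProps);

        // Register a listener; implement the callbacks FanEventListener
        // defines for the event classes you care about (node down,
        // service down, load advisory - see the javadoc for the full list)
        sub.addListener(new FanEventListener() {
            // ... event handler methods go here ...
        });

        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive to receive events
    }
}
```

The listener body is deliberately left as a comment because the exact callback signatures should be taken from the javadoc rather than from memory.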
Posted in 11g Release 2, Linux, RAC | Tagged: API, fan, oracle.simplefan, RAC | 1 Comment »
Posted by Martin Bach on May 24, 2011
For quite a while Oracle DBAs have performed split mirror backups using special devices called “Business Continuance Volumes” or BCVs for short. A BCV is a special mirror copy of a LUN on the same storage array as the primary copy.
In a BCV backup scenario, the storage administrator (usually) "splits" the mirror after the database has been put into hot backup mode. After the mirror is split, the database is taken out of hot backup mode and resumes normal operation. The split-mirror copy of the database can then be mounted by a new Oracle instance on a different host and used for backups. The use of this technology for refreshing a test environment is outside the scope of this article.
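The hot-backup bracketing around the split can be sketched as follows; the storage split itself is a vendor-specific step (EMC TimeFinder and friends) and is shown only as a placeholder comment:

```shell
# On the production host: put the whole database into hot backup mode
sqlplus -s / as sysdba <<'EOF'
alter database begin backup;
EOF

# ... storage administrator splits the BCV mirror here (vendor-specific) ...

# Take the database out of hot backup mode again
sqlplus -s / as sysdba <<'EOF'
alter database end backup;
EOF

# On the backup host: bring up an instance against the split copy
sqlplus -s / as sysdba <<'EOF'
startup mount
EOF
```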
Posted in 11g Release 2, Automatic Storage Management, Oracle | 4 Comments »
Posted by Martin Bach on May 17, 2011
I am playing around with the Grid Infrastructure 11.2.0.2 PSU 2 and found an interesting note on My Oracle Support regarding the Patch Set Update. This reminds me that it's always a good idea to search for a patch number on Metalink before applying a PSU. It also seems to be a good idea to wait a few days before trying a PSU (or maybe CPU) on your DEV environment for the first time (and don't even think about applying a PSU in production without thorough testing!)
OK, back to the story: there is a known issue with the patch set which has to do with the change in mutex behaviour the PSU was intended to fix. To quote MOS note "Oracle Database Patch Set Update 11.2.0.2.2 Known Issues (Doc ID 1291879.1)", Patch 12431716 is a recommended patch for 11.2.0.2.2. In fact, Oracle strongly recommends applying the patch to fix Bug 12431716 – Unexpected change in mutex wait behavior in 11.2.0.2.2 PSU (higher CPU possible).
In a nutshell, not applying the patch can cause your system to suffer from excessive CPU usage and more mutex contention than expected. More information can be found in the description of Bug 12431716 – Mutex waits may cause higher CPU usage in 11.2.0.2.2 PSU / GI PSU, which is worth reading.
Besides this, the PSU applied without any problems to my four-node cluster. I just wish there was a way to roll out a new version of OPatch to all cluster nodes' $GRID_HOME and $ORACLE_HOME in one command. The overall process for the PSU is the same as already described in my previous post about Bundle Patch 3:
- Get the latest version of OPatch
- Deploy OPatch to $GRID_HOME and $ORACLE_HOME (ensure permissions are set correctly for the OPatch in $GRID_HOME!)
- Unzip the PSU (Bug 11724916 – 11.2.0.2.2 Patch Set Update (PSU) (Doc ID 11724916.8)), for example to /tmp/PSU
- Change directory to where you unzipped (/tmp/PSU) and become root
- Ensure that $GRID_HOME/OPatch is part of the path
- Read the readme
- Create an OCM response file and save it to say, /tmp/ocm.rsp
- Start the patch as root: opatch auto and supply the full path to the OCM response file (/tmp/ocm.rsp)
- Apply the aforementioned one-off patch
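Condensed into commands, the list above looks roughly like this on each node; the patch zip file names and paths are examples, not the exact names you will download:

```shell
# Deploy the latest OPatch (patch 6880880) into both homes; zip name is
# an example - check permissions on $GRID_HOME/OPatch afterwards
unzip -o p6880880_112000_Linux-x86-64.zip -d $GRID_HOME
unzip -o p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME

# Unzip the PSU, e.g. to /tmp/PSU (zip name again an example)
unzip p11724916_112020_Linux-x86-64.zip -d /tmp/PSU

# Create an OCM response file
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp

# As root, with $GRID_HOME/OPatch in the PATH, start the patch
cd /tmp/PSU
export PATH=$GRID_HOME/OPatch:$PATH
opatch auto -ocmrf /tmp/ocm.rsp
```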
Then you wait; after a little while spent tailing the logfile in $GRID_HOME/cfgtoollogs/ and having a coffee, the process eventually finishes. Repeat on each node and you're done. I'm really happy those long readme files are gone, with their 8 steps to be performed partially as root, partially as CRS owner/RDBMS owner. It reduces the time it takes to apply a PSU significantly.
Posted in 11g Release 2, Automatic Storage Management, Linux, RAC | 22 Comments »
Posted by Martin Bach on May 13, 2011
Admittedly I haven't checked for a little while, but an email from my co-author Steve Shaw prompted me to go to the Amazon website and look it up.
And yes, it’s reality! Our book is now finally available as a kindle version, how great is that?!?
There isn’t really a lot more to say about this subject. I’ll wonder how many techies are intersted in the kindle version after the PDF has been out for quite a while. If you read this and decide to get the kindle version, could you please let me know how you liked it? Personally I think the book is well suited for the Amazon reader as it’s mostly text which suits the device well.
Posted in 11g Release 2, Linux, RAC, RAC Book | 3 Comments »
Posted by Martin Bach on May 1, 2011
I have recently upgraded my lab's reference machine to Oracle Linux 6 and have experimented today with its network failover capabilities. I seemed to remember that network bonding on Xen didn't work, so I was curious to test it on new hardware. As always, I am running this on my openSuSE 11.2 lab server, which features these components:
- Kernel 2.6.31.12-0.2-xen
Now for the fun part: I cloned my OL6REF domU, and in about 10 minutes had a new system to experiment with.
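For reference, channel bonding on Oracle Linux 6 is configured along these lines. This is a minimal sketch of the ifcfg files: the device names, addresses, and bonding options are illustrative choices, not values taken from my setup.

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0  (sketch)
DEVICE=bond0
IPADDR=192.168.99.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

With active-backup mode, pulling the cable (or virtual interface) of the active slave should fail traffic over to the other one, which is exactly what a failover test exercises.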
Posted in Linux, Xen | 11 Comments »