Monthly Archives: February 2010

Watch those environment variables

One of my colleagues was about to install another ORACLE_HOME on our 3 node production cluster, with limited time available to him, as always on Saturdays. To make matters worse, OUI wouldn’t play ball and complained that the selected ORACLE_HOME location isn’t sharable. Pardon me? None of the file systems on the cluster are shared file systems, and we had never seen this problem before. After selecting which nodes to install the software on, OUI opened a window with this message (taken from the logfile):

The datafile storage location for Oracle Real Application Clusters should be on a shared file system. Continue reading

RAC service weirdness

I ran into a problem starting one out of 14 services registered to my database this weekend. It all happened after a restart of the servers following a Red Hat upgrade from 5.3 to 5.4. One of the servers had a file system corruption which required some extensive fsck’ing, so only 2 out of 3 nodes were started-this meant that some of the services started on an available node rather than their preferred one.

Not a biggie in this situation; after all, we are interested in restoring service to the users. After a couple of hours the first node eventually finished the file system check and came up. CRS automatically started the instance and all looked good. Since we were still in the downtime window I decided against checking each of the 14 services to see if they were running on the correct node, but rather decided to stop them all and restart them-the intention was to have them all restart on their preferred node. Continue reading

It is indeed possible to install Oracle 11.2 on OpenSolaris

After I set up OpenSolaris 2009.06 (see my previous posts) on my openSuSE 11.2 dom0 I was keen to set up Oracle 11.2 single instance on ZFS. ZFS is a new-ish file system and the successor to ufs in Solaris 10. I like it a lot for its simplicity and ease of use. Piet de Visser should love it too :)

But before I could start working on this I needed to do some prep work.

Storage preparation on the dom0

Very simple-I created 2 more zero padded files in my domU base directory and added those to the domU configuration file. The disk entries below use illustrative file names-substitute your own paths:

disk = [ "file:/path/to/domU/root.img,xvda,w",
         "file:/path/to/domU/oracle_disk1.img,xvdb,w",
         "file:/path/to/domU/oracle_disk2.img,xvdc,w" ]
vif  = [ "mac=00:16:3e:1b:e8:18,bridge=br1,script=vif-bridge" ]

Be sure to add a MAC address to your configuration file, or otherwise Solaris will cry out loud on the next restart, claiming the new storage pool has been used with a different system. That’s not a problem (it can be fixed with zpool import -f) but it’s not pretty either.

Addition of new storage pool

This is really simple! First of all you need to find out which of the disks presented to the system are the new ones. I only used 1 disk for the rpool (default storage pool), so I tried to identify this one first:

root@opensolaris:~# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7t0d0s0  ONLINE       0     0     0

errors: No known data errors

Then I identified the disks on the system:

root@opensolaris:~# format
Searching for disks...done

 0. c7t0d0 <DEFAULT cyl 4095 alt 0 hd 128 sec 32>
 1. c7t1d0 <Unknown-Unknown-0001-10.00GB>
 2. c7t2d0 <Unknown-Unknown-0001-10.00GB>
Specify disk (enter its number): ^C

So, not really surprisingly, c7t1d0 and c7t2d0 were the new disks. I created a new storage pool “oraclepool” striped across these 2 disks (no redundancy-not recommended for production!):

root@opensolaris:~# zpool create oraclepool c7t1d0 c7t2d0

root@opensolaris:~# zpool status -v
  pool: oraclepool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oraclepool  ONLINE       0     0     0
          c7t1d0    ONLINE       0     0     0
          c7t2d0    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7t0d0s0  ONLINE       0     0     0

errors: No known data errors

With that done, I created two file systems, oraclepool/binaries and oraclepool/oradata:

zfs create oraclepool/binaries
zfs create oraclepool/oradata

By default these are mounted at /poolname/fsname, i.e. /oraclepool/oradata, which isn’t too convenient. Luckily, zfs allows you to easily change the mountpoint without touching /etc/vfstab (which isn’t recommended anyway).

So, enter these commands to change the mountpoints:

zfs set mountpoint=/u01 oraclepool/binaries
zfs set mountpoint=/u01/oradata oraclepool/oradata

The final layout prior to the installation was as follows:

root@opensolaris:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
oraclepool                7.98G  11.6G    19K  /oraclepool
oraclepool/binaries       6.68G  11.6G  6.68G  /u01
oraclepool/oradata        1.30G  11.6G  1.30G  /u01/oradata
rpool                     5.27G  2.55G  77.5K  /rpool
rpool/ROOT                3.01G  2.55G    19K  legacy
rpool/ROOT/opensolaris    3.01G  2.55G  2.87G  /
rpool/dump                 256M  2.55G   256M  -
rpool/export              2.22M  2.55G    21K  /export
rpool/export/home         2.20M  2.55G  1.63M  /export/home
rpool/export/home/martin   584K  2.55G   584K  /export/home/martin
rpool/swap                   2G  4.08G   474M  -

Oracle user creation

Create the oracle user as usual; I did the following:

  • groupadd oinstall
  • groupadd dba
  • useradd -g oinstall -G dba -d /export/home/oracle -s `which bash` -m oracle
  • chown -R oracle:oinstall /u01
  • projadd -U oracle -K "project.max-shm-memory=(priv,4096MB,deny)" user.oracle
  • projmod -s -K "project.max-sem-ids=(priv,256,deny)" user.oracle

Set static IP

Edit /etc/hosts to include your hostname, then edit /etc/nwam/llp to include your network interface, the keyword “static” and the ip/netmask.

My example uses:

root@opensolaris:~# cat /etc/nwam/llp
xnf0    static

Change Swap

It’s necessary to increase swap space, or otherwise ld will fail during the linking phase. I increased it to 2G from 512M:

root@opensolaris:~# zfs get volsize rpool/swap
NAME        PROPERTY  VALUE    SOURCE
rpool/swap  volsize   512M     -
root@opensolaris:~# zfs set volsize=2G rpool/swap
root@opensolaris:~# zfs get volsize rpool/swap
NAME        PROPERTY  VALUE    SOURCE
rpool/swap  volsize   2G       -

Oracle Installation

We need to create a symlink for the crypto library; otherwise one of the shared libraries won’t link, failing with a missing reference to “-lcrypto”:

ln -s /lib/amd64/ /usr/sfw/lib/amd64

Run the installer in silent mode-I modified one of the response files and simply executed:

./runInstaller -silent -debug -force -responseFile /path/to/responseFile.rsp -ignoreSysPrereqs
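For reference, the kind of variables I had to touch in the response file looked like this. The names follow the db_install.rsp template shipped with 11.2 and the values are illustrative for my environment-double-check them against your own copy before running the installer:

```
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
oracle.install.db.InstallEdition=EE
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=dba
DECLINE_SECURITY_UPDATES=true
```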

Database Creation

I ran dbca in silent mode as oracle as follows:

cd /u01/app/oracle/product/11.2.0/dbhome_1/
export ORACLE_HOME=`pwd`
cd bin
./dbca -silent -createDatabase -gdbName orcl \
  -templateName General_Purpose.dbc -emConfiguration none \
  -datafileDestination /u01/oradata -sysPassword xxx \
  -systemPassword xxx -storageType FS -initParams ...

Once dbca completed, a quick sanity check:
SQL> select * from v$version;

Oracle Database 11g Enterprise Edition Release - 64bit Production
PL/SQL Release - Production
CORE      Production
TNS for Solaris: Version - Production
NLSRTL Version - Production

SQL> !cat /etc/release
 OpenSolaris 2009.06 snv_111b X86
 Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
 Use is subject to license terms.
 Assembled 07 May 2009


Now off to learn dtrace…


Praise where praise is due-I got some good pointers from the Pythian blog.

Build your own RAC system part V – add RDBMS home

This article concludes my short series about how to build your own RAC system-I might explore an extended distance cluster with 11.2 soon, so stay tuned for more material. As you could read in my previous posts, I have installed a cluster on 2 (now 3) nodes, with a fresh 11.2 Grid Infrastructure installation. I then wanted to extend my RDBMS home to the third node.

As I like to say-it’s easy to extend RAC once the cluster layer is in place and working. In my case that was a given, so I continued with the extension.

I knew from extending the Grid Infrastructure that the process is headless and silent-you pass the appropriate parameters and it does it all for you. In my case I logged in as oracle on the first node and changed directory to $ORACLE_HOME/oui/bin. There I tried the following, and it worked without problems! My cluster is made up of rac11gr2node{1,2,3}, and the RDBMS software was already present on nodes 1 and 2. I should actually have run cluvfy first but was too lazy (I wouldn’t do this in any non-lab environment!)

Continue reading

Server Pool experiments in RAC 11.2

I spent Wednesday at the UKOUG RAC & HA SIG and it was one of the best events I have ever attended. Great audience, and great feedback. One question I was particularly interested in was raised during my presentation, regarding server pools. I have now finally had the chance to experiment with this exciting new feature; my findings are in this blog post. I’ll see if the automatic assignment of nodes to pools works as advertised as well, but that’s for another post. This one already turned out to be a monster!


I have installed a 3 node RAC cluster on Oracle Enterprise Linux 5 update 4 32bit to better understand server pools. I have read a lot about the subject, but as always, first-hand experience pays off more than just reading. My environment uses GPnP; essentially it is the environment I described in part 2 of my “build your own RAC 11.2 system” series, extended by another node. Continue reading

Check for non-successful connection attempts in listener.log

This could become a regular question from your security team: can you find out if someone tried to mess with the listener? Often you see hackers target port 1521, sending random data and/or garbage over the wire. The listener initially accepts the connection but closes it when it doesn’t receive the data it expects.

Let’s assume you need to find out if there were any unsuccessful connection entries in the listener.log for a given day. First of all-what do successful entries look like? Typical entries are as follows:

08-FEB-2010 04:49:54 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=testserv) *
   (ADDRESS=(PROTOCOL=tcp)(HOST=oracleserver)(PORT=4307)) * establish * testserv * 0
08-FEB-2010 04:49:55 * service_update * dev1 * 0
08-FEB-2010 04:49:57 * service_update * dev3 * 0

The important bit is at the end-the “0” means “normal, successful completion”. If there was a problem, you would expect an Oracle error number from the TNS range (>12000) there instead.

Awk is the swiss army knife for finding such entries, and this is how you could use it (apologies in advance for my poor command of awk-if you know a better way please let me know!)

[oracle@server1 ~]$ grep "08-FEB" listener.log | awk  '{ if ( $NF != 0 ) print $0 }'

The awk built-in variable NF holds the number of fields on the current line, so $NF refers to the last field (fields are enumerated from $1). In summary, this little snippet does the following:
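Since I asked for better ways: the same filter can be written more compactly, because in awk a bare pattern without an action prints matching lines by default. A small sketch, run here against sample lines modelled on the entries above rather than a real listener.log:

```shell
# Compact awk form: a pattern with no action block prints the line,
# so "{ if (...) print $0 }" is unnecessary.
filter_failed() {
    # $1 = day pattern, e.g. 08-FEB; reads log lines from stdin
    awk -v day="$1" '$0 ~ day && $NF != 0'
}

# Sample data modelled on the listener.log entries shown above
filter_failed "08-FEB" <<'EOF'
08-FEB-2010 04:49:54 * establish * testserv * 0
08-FEB-2010 04:49:55 * service_update * dev1 * 0
08-FEB-2010 04:50:01 * establish *  * 12525
EOF
```

Only the line ending in 12525 survives the filter; the successful entries (last field 0) are dropped.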

  1. Find all lines for a given day (here: February 8th) in the $ORACLE_HOME/network/log/listener.log file
  2. Print the lines where the last field is not equal to 0

This can easily be wrapped up in a nagios check executed by NRPE-if in doubt: simplify (to quote Piet de Visser).
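The core of such a check could look like the following sketch. The sample log file and its entries are illustrative; in a real check you would point the function at $ORACLE_HOME/network/log/listener.log, filter on the current day, and exit 2 (CRITICAL for NRPE) when the count is non-zero:

```shell
# Count non-successful entries for a given day in a listener.log-style file.
count_failures() {
    # $1 = day pattern (e.g. 09-FEB), $2 = logfile
    grep "$1" "$2" | awk '$NF != 0' | wc -l
}

# Demonstrate with a small sample log (file name is illustrative)
cat > /tmp/sample_listener.log <<'EOF'
08-FEB-2010 04:49:54 * establish * testserv * 0
08-FEB-2010 04:49:55 * service_update * dev1 * 0
09-FEB-2010 16:25:56 * establish *  * 12525
09-FEB-2010 16:26:10 * establish *  * 12525
EOF

count_failures "09-FEB" /tmp/sample_listener.log   # prints 2
```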

So next time you get output such as:

09-FEB-2010 16:25:56 *  * (ADDRESS=(PROTOCOL=tcp)(HOST=
   (PORT=3929)) * establish *  * 12525

… something might be wrong. The “oerr” command provides more information about a particular error code.

By the way, this is gawk-3.1.5-14.el5 on RHEL 5.3.

Upgrade ASM 10.2 to 11.2 single instance

This post describes how to upgrade an existing single instance ASM installation to the latest and greatest, Oracle 11.2. The most noteworthy change is that ASM is no longer installed by the RDBMS installer, but rather as part of the Grid Infrastructure.

Huh-installing CRS for a single instance? At first that sounded like a bit of overkill, but Oracle left us with no choice. As you will see later, it’s not as bad as it seems; I have to say I rather like the Oracle Restart feature (see my previous blog post).

So-as always-start by downloading the necessary software and unzip it somewhere convenient.


I recommend getting some metadata from the ASM instance just in case:

SQL> select name,state from v$asm_diskgroup;

NAME			       STATE
------------------------------ -----------
FRA			       MOUNTED
LOG			       MOUNTED

SQL> select name,path from v$asm_disk;

NAME                           PATH
------------------------------ -------------------
DATA1                          ORCL:DATA1
LOG1                           ORCL:LOG1
FRA1                           ORCL:FRA1

SQL> show parameter asm

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string      DATA, FRA, LOG
asm_diskstring                       string      ORCL:*
asm_power_limit                      integer     1

Continue reading