Monthly Archives: November 2010

Logica Guru4Pro – Amstelveen

Yesterday I returned from a trip to Amsterdam, where I presented on Grid Infrastructure 11.2 as part of Logica’s Guru 4 Pro series. I have to say it was a very pleasant experience! It also marked the first time I have presented outside the UK.

Logica Holland runs a series of events where renowned experts present on the latest and greatest developments in their field. I was very pleased to receive an invitation to the series, and gladly accepted. I opted to give the audience a “close look at Grid Infrastructure”. I think Oracle University would have termed it a “Deep Dive”, and a deep dive it was!

The flight from London Gatwick, my “home” airport, to Amsterdam is very short indeed; I had the feeling I spent more time on taxiways than in the air. Upon arrival I was picked up by Dennis van Onselen, Logica’s Practice Manager. I really appreciated not having to take a taxi to the venue!

Before the main event kicked off at 18:30 there was time for a session with Logica employees. For about two and a half hours we went through the pros and cons of various technologies in the Oracle portfolio, and had a really good discussion along the way. I hope the attendees found it useful.

After a short break I started my talk, which was well received. I recognised some familiar faces in the audience, and was very pleased to also see Piet de Visser, whom I hadn’t seen all year, and who introduced me to Anjo Kolk; I didn’t know he was in the audience as well. After the presentation, which at around 90 minutes was the longest I have given so far, we had a great discussion about the contents and about high availability strategies in general. My voice felt a little strained, even though I was miked up, after a cold I suffered from at the weekend. I can now fully understand singers who can’t make it to their concerts. But I’m back to normal, and hoping it won’t repeat itself during next week’s UKOUG conference.

One of the really good things that came out of this session was the prospect of returning to Holland for more presentations about RAC and everything around it. I would be delighted to return; anyone interested, please drop me a line. And that of course includes Miracle!

By the way, I have converted the presentation to a PDF, and it can be downloaded here.

Configuring device-mapper multipath on OEL5 update 5

I have always wondered how to configure the device-mapper multipath package for a Linux system. I knew how to do it in principle, but had never been involved in a configuration from the start. Today I got the chance to work on this. The system is used for a lab test, not as a production box (otherwise I probably wouldn’t have been allowed on it). It is actually part of a two-node cluster.

So the first step is to find out which partitions are visible to the system. The Linux kernel presents this information in the /proc/partitions table, as in the following example: Continue reading
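Since the full example sits behind the cut, here is a minimal sketch of this first step; the device names and sizes will of course differ on your system:

```shell
# Show all block devices and partitions the kernel currently knows about.
cat /proc/partitions

# Print just the device names, skipping the two header lines. With
# multipathing, each LUN typically appears once per path (sda, sdb, ...)
# plus once as a dm-N device mapper device.
awk 'NR > 2 { print $4 }' /proc/partitions
```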

Oracle RAC One Node revisited – 11.2.0.2

Since we published the RAC book, Oracle has released patchset 11.2.0.2. Amongst other things, the patchset improved the RAC One Node option, in exactly the way we expected.

How it was – the base release

A quick recap on the product as it was in 11.2.0.1: RAC One Node is part of Oracle Enterprise Edition; other software editions are explicitly not allowed, and neither is the use of third-party clusterware. RAC One Node is a hybrid between full-blown RAC and an active/passive cluster. The option uses Grid Infrastructure for cluster management and for storage provisioning via ASM. The RAC One instance starts its life as a RAC database, limited to a single cluster node. It only ever runs on one node, but that node can change. It is strongly recommended to create a service for that database. The raconeinit utility provides a text-based command line interface to transform the database into a RAC One Node instance; in the process, the administrator can choose which nodes should be allowed to run it. The Omotion utility allowed the DBA to move the RAC One Node instance from the current node to another one. Optionally, a time threshold could be set after which all ongoing transactions were moved to the new node; this feature required TAF or FAN to be set up correctly. The raconestatus utility allowed you to view the status of your RAC One Node instances, and conversion to full RAC was possible with the racone2rac utility.

If you were after a Data Guard setup you’d have been disappointed: it wasn’t supported. (The good news is: from 11.2.0.2 onwards, Data Guard can be used.)

So all in all, the initial release seemed a little premature: a patch to be downloaded and applied, no Data Guard, and a new set of utilities are not exactly user friendly. Plus, initially that patch was available for Linux only. But at least a MOS note exists (which I didn’t find until after having finished writing this!): RAC One — Changes in 11.2.0.2 [ID 1232802.1]

Changes

Instead of having to apply patch 9004119 to your environment, RAC One Node is available out of the box with 11.2.0.2. Sadly, the Oracle RAC One Node manual has not been updated, and searches on Metalink reveal no new information. One interesting detail: the patch for RAC One Node is listed under the “undocumented Oracle Server” section.

The creation of a RAC One Node instance has been greatly simplified: dbca now supports it, both on the command line for silent installations and in the interactive GUI. Consider these options for dbca:

$ dbca -help
dbca  [-silent | -progressOnly | -customCreate] {<command> <options> }  |
 { [<command> [options] ] -responseFile  <response file > }
 [-continueOnNonFatalErrors <true | false>]
Please refer to the manual for details.
You can enter one of the following command:

Create a database by specifying the following parameters:
-createDatabase
 -templateName <name of an existing  template>
 [-cloneTemplate]
 -gdbName <global database name>
 [-RACOneNode
 -RACOneNodeServiceName  <Service name for the service to be
 created for RAC One Node database.>]
 [-policyManaged | -adminManaged <Policy managed or Admin managed
 Database, default is Admin managed
 database>]
 [-createServerPool <To create ServerPool which will be used by the
 database to be created>]
 [-force <To create serverpool by force when adequate free servers
 are not available. This may affect already running database>]
 -serverPoolName <One serverPool Name in case of create server pool and
 comma separated list of serverPool name in case of
 use serverpool>
 [-cardinality <Specify cardinality for new serverPool to be created,
 default is the number of qualified nodes>]
 [-sid <database system identifier prefix>]
...
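Putting those options together, a silent RAC One Node creation might look like the sketch below. The template name, storage details, service name and passwords are assumptions on my part for illustration, so check the dbca help output on your own system before relying on any of them:

```shell
# Hypothetical silent dbca invocation for a RAC One Node database.
# Template, passwords, storage type and disk group are examples only.
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName rontest \
  -sid rontest \
  -RACOneNode \
  -RACOneNodeServiceName rontestsrv \
  -storageType ASM \
  -diskGroupName DATA \
  -sysPassword change_me \
  -systemPassword change_me
```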

With RAC One Node you will most likely end up with a policy-managed database in the end; I can’t see how an admin-managed database would make sense.

The srvctl command line tool has also been improved to deal with RAC One Node; the most important operations are add, remove, config and status. The nice thing about dbca is that it registers the database in the OCR. Immediately after the installation, you see this status information:

$ srvctl status database -d rontest
Instance rontest_1 is running on node node2
Online relocation: INACTIVE

$ srvctl config database -d rontest
Database unique name: rontest
Database name:
Oracle home: /data/oracle/product/11.2.0.2
Oracle user: oracle
Spfile: +DATA/rontest/spfilerontest.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: rontest
Database instances:
Disk Groups: DATA
Mount point paths:
Services: rontestsrv
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: rontest
Candidate servers: node2,node3
Database is administrator managed
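If you need to register such a database manually rather than through dbca, the new -c flag to srvctl add database is the key. The following is a sketch based on my reading of the 11.2.0.2 utility; verify the flags with srvctl add database -h before using them:

```shell
# Sketch: register an existing database as RAC One Node (11.2.0.2).
# -c database type, -e candidate server list, -i instance name prefix,
# -w online relocation timeout in minutes. Flags are my assumption of
# the 11.2.0.2 syntax; confirm with "srvctl add database -h".
srvctl add database -d rontest \
  -o /data/oracle/product/11.2.0.2 \
  -c RACONENODE \
  -e node2,node3 \
  -i rontest \
  -w 30
```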

Note that the instance name, although the database is administrator managed, changed to ${ORACLE_SID}_1. Relocating now works with the srvctl relocate database command, as in this example:

$ srvctl relocate database -d rontest -n node3

You’ll get feedback about this in the output of the “status” command:

$ srvctl status database -d rontest
Instance rontest_1 is running on node node2
Online relocation: ACTIVE
Source instance: rontest_1 on node2
Destination instance: rontest_2 on node3

After the command completed, check the status again:

$ srvctl status database -d rontest
Instance rontest_2 is running on node node3
Online relocation: INACTIVE

The important difference between an admin-managed and a policy-managed database is that with an admin-managed database you are responsible for the undo tablespaces. If you don’t create and configure an undo tablespace for each candidate instance, the relocate command will fail:

$ srvctl relocate database -d rontest -n node3
PRCD-1222 : Online relocation of database rontest failed but database was restored to its original state
PRCD-1129 : Failed to start instance rontest_2 for database rontest
PRCR-1064 : Failed to start resource ora.rontest.db on node node3
CRS-5017: The resource action "ora.rontest.db start" encountered the following error:
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use
Process ID: 1587
Session ID: 35 Serial number: 1

CRS-2674: Start of 'ora.rontest.db' on 'node3' failed

In this case, the database keeps running on its original node. Check the ORACLE_SID of the failed instance (rontest_2 in my case) and modify the undo_tablespace initialisation parameter accordingly.

SQL> select tablespace_name from dba_data_files where tablespace_name like '%UNDO%';

TABLESPACE_NAME
------------------------------
UNDOTBS1
UNDOTBS2

So the tablespaces were there, but the initialisation parameter was wrong! Let’s correct this:

SQL> alter system set undo_tablespace='UNDOTBS1' sid='rontest_1';

System altered.

SQL> alter system set undo_tablespace='UNDOTBS2' sid='rontest_2';

System altered.

Now the relocate will succeed.

To wrap this article up: the srvctl convert database command converts between single instance, RAC One Node and RAC databases.
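A hedged sketch of the conversion syntax as I understand it; as always, verify with srvctl convert database -h on your system:

```shell
# Convert the RAC One Node database to a full RAC database ...
srvctl convert database -d rontest -c RAC

# ... or convert a RAC database (back) to RAC One Node, supplying the
# instance name prefix and the online relocation timeout in minutes.
srvctl convert database -d rontest -c RACONENODE -i rontest -w 30
```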

Installing RAC 11.2.0.2 on Solaris 10/09 x64

One of the major adventures this time of year involves installing RAC 11.2.0.2 on Solaris 10 10/09 x86-64. The system setup included EMC PowerPath 5.3 as the multipathing solution for shared storage.

I initially asked for four BL685c G6 servers with 24 cores each, but in the end “only” got two. Still plenty of resources to experiment with! I especially like the output of this command:

$ /usr/sbin/psrinfo | wc -l
 24

Nice! Actually, it’s four Opteron processors:

$ /usr/sbin/prtdiag | less
System Configuration: HP ProLiant BL685c G6
 BIOS Configuration: HP A17 12/09/2009
 BMC Configuration: IPMI 2.0 (KCS: Keyboard Controller Style)
==== Processor Sockets ====================================
Version                          Location Tag
 -------------------------------- --------------------------
 Opteron                          Proc 1
 Opteron                          Proc 2
 Opteron                          Proc 3
 Opteron                          Proc 4

Continue reading

Build your own stretch cluster part V

This post is about the installation of Grid Infrastructure, and it’s where things really get exciting: the third NFS voting disk is going to be presented, and I am going to show you how simple it is to add it to the disk group chosen for OCR and voting disks.
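To preview where this is heading: in 11.2 ASM, a quorum failgroup can hold a voting file but no user data, which is exactly what an NFS-based third voting disk in a stretch cluster needs. The sketch below shows the idea; the path, file size, ownership and the disk group name “OCRVOTE” are assumptions for illustration, and the NFS location must be covered by your asm_diskstring:

```shell
# Pre-create a zero-filled file on the NFS mount to serve as the ASM disk
# (path, size and ownership are placeholders; adjust for your setup).
dd if=/dev/zero of=/voting_disk/vote_3 bs=1M count=500
chown oracle:dba /voting_disk/vote_3

# Add it to the OCR/voting disk group as a quorum failgroup; "OCRVOTE"
# stands in for whatever disk group you chose for OCR and voting disks.
sqlplus / as sysasm <<'EOF'
ALTER DISKGROUP OCRVOTE ADD QUORUM FAILGROUP nfsvote DISK '/voting_disk/vote_3';
EOF
```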

Let’s start with the installation of Grid Infrastructure. This is really simple, and I won’t go into too much detail. Start by downloading the required file from MOS; a simple search for patch 10098816 should bring you to the 11.2.0.2 download for Linux. Just make sure you select the 64-bit version. The file we need right now is called p10098816_112020_Linux-x86-64_3of7.zip. The file names don’t necessarily relate to their contents; the readme helps you find out which piece of the puzzle provides which functionality.

I alluded to my software distribution method in one of the earlier posts, and here is the detail. My dom0 exports the /m directory to the 192.168.99.0/24 network, the one accessible to all my domUs. This really simplifies software deployments.
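For reference, that setup boils down to a one-line /etc/exports entry on the dom0 plus an NFS mount on each domU. The export options and the dom0 address below are assumptions on my part:

```shell
# On the dom0: export /m read-only to the domU network, then re-export.
# The mount options are illustrative, not a recommendation.
echo '/m 192.168.99.0/24(ro,no_subtree_check)' >> /etc/exports
exportfs -ra

# On each domU: mount the software repository. Replace 192.168.99.1
# with your dom0's actual address on that network.
mount -t nfs 192.168.99.1:/m /m
```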

Continue reading

Kindle version of Pro Oracle Database 11g RAC on Linux

I have had a few questions from readers about whether there is going to be a Kindle version of Pro Oracle Database 11g RAC on Linux.

The good news for those waiting: yes! But it might take a couple of weeks to be released.

I checked with Jonathan Gennick, who expertly oversaw the whole project, and he confirmed that Amazon has been contacted to provide a Kindle version.

As soon as I hear more, I’ll post it here.

Pro Oracle Database 11g RAC on Linux is out

So it has finally happened, the day many of you have patiently waited for: Amazon and other retailers are fulfilling pre-orders! The book I helped write, Pro Oracle Database 11g RAC on Linux, is out in print.

I received my author copies last week, and was very happy to see that, after the book became available in the USA first, the “pre-order now” notice had been removed from Amazon’s UK and German websites when I last checked.

I have to give great credit to all those who helped make this possible with extra input and pointers to related information. On the Linux side, Steve has done an incredible job compiling useful information that you won’t find in this form anywhere else.

From now on, feedback and new research with specific reference to the book will appear on my WordPress blog (the one you are reading now), in a separate category, “RAC Book”.

So thanks again for all your patience while we were working very hard to finish the book.