Monthly Archives: September 2011

oracleracsig.org presentation October 27 2011

For all of those who aren’t tired of listening to me yet, there is good news: I am presenting a webinar at http://www.oracleracsig.org on October 27th 2011. My slot will most likely be around 17:00 UK time, as the meetings start at 09:00 PST. I agreed with the committee that we have done a lot of nitty-gritty, very low-level material recently, and should probably offer a more high-level overview presentation as well. As it happens, I am starting my seminars with exactly that!

For your convenience the abstract and summary are shown below. I hope to see you online.

An Introduction to Oracle High Availability

This introductory-level session aims to provide an overview of Oracle’s high availability options to users of traditional single-instance Oracle deployments who are interested in ways to make their databases more highly available.

Starting with single-instance Oracle, the discussion begins by addressing single points of failure in a Unix server. Virtualisation can be another step towards making server failures transparent to users. A different approach to virtualisation is the use of active/passive clusters. These have been the traditional domain of established vendors such as IBM with PowerHA and Veritas with their Veritas Cluster Server. Interestingly, much of their functionality can now be implemented with Oracle Clusterware/Grid Infrastructure as well, at no extra cost.
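
As a little taster, registering an application as a protected active/passive resource under Grid Infrastructure might look something like the sketch below. The resource name, action script path and node names are made up for illustration; the action script has to implement the usual start, stop, check and clean entry points.

    # Hypothetical example of an active/passive resource under Clusterware;
    # resource name, script path and node names are invented.
    crsctl add resource myapp.db -type cluster_resource \
      -attr "ACTION_SCRIPT=/u01/scripts/myapp.sh,PLACEMENT=favored,HOSTING_MEMBERS=node1 node2,CHECK_INTERVAL=30,RESTART_ATTEMPTS=2"
    crsctl start resource myapp.db
    # fail the application over to the other node manually if needed
    crsctl relocate resource myapp.db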

The next step up the high availability ladder is the active/active cluster, of which one finds only very few implementations in the field. Oracle’s Real Application Clusters (RAC) is probably the most widespread. RAC can be configured locally in one data centre, or extended across more than one data centre, which is probably the pinnacle of RAC deployment. Both setups are going to be explored during the presentation.

The discussion ends with a necessary look at disaster recovery strategies, both block-level oriented (EMC SRDF, HP Continuous Access, etc.) and Data Guard. Time permitting, we will touch on replication technologies as well.

You should soon see the announcement on the oracleracsig.org website as well. As always, the presentation and recording will be made available there. All you need is a free account and a browser to take you to the list of previous webinars.

Installing Grid Infrastructure 11.2.0.3 on Oracle Linux 6.1 with kernel UEK

Yesterday was the big day: the day Oracle released 11.2.0.3 for Linux x86 and x86-64. Time to download and experiment! The following assumes you have configured RAC 11g Release 2 before; it is not a step-by-step guide. I expect those to spring up like mushrooms over the next few days, especially since the weekend allows people to do the same as I did!

The Operating System

I have prepared a Xen domU for 11.2.0.3, using the latest Oracle Linux 6.1 build I could find. In summary, I am using the following settings:

  • Oracle Linux 6.1 64-bit
  • Oracle Linux Server-uek (2.6.32-100.34.1.el6uek.x86_64)
  • Initially installed to use the “database server” package group
  • 3 NICs – 2 for the HAIP resource and the private interconnect with IP addresses in the ranges of 192.168.100.0/24 and 192.168.101.0/24. The public NIC is on 192.168.99.0/24
    • Node 1 uses 192.168.(99|100|101).129 for eth0, eth1 and eth2. The VIP uses 192.168.99.130
    • Node 2 uses 192.168.(99|100|101).131 for eth0, eth1 and eth2. The VIP uses 192.168.99.132
    • The SCAN is on 192.168.99.(133|134|135)
    • All naming resolution is done via my dom0 bind9 server
  • I am using an 8 GB virtual disk for the operating system, and a 20 G LUN for the Oracle Grid and RDBMS homes. The 20 G are subdivided into two logical volumes of 10 G each, mounted on /u01/app/oracle and /u01/crs/11.2.0.3 (see the sketch after this list). Note that you now seem to need 7.5 G for the GRID_HOME
  • All software is owned by the oracle user
  • Shared storage is provided by the xen blktap driver
    • 3 x 1G LUNs for +OCR containing OCR and voting disks
    • 1 x 10G for +DATA
    • 1 x 10G for +RECO
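
For reference, the subdivision of the 20 G LUN mentioned above could be done along the lines shown below. This is only a sketch: the device name /dev/xvdb and the volume group and logical volume names are assumptions, adjust them to your environment.

    # Carve the 20 G LUN into two ~10 G logical volumes (names invented)
    pvcreate /dev/xvdb
    vgcreate oraclevg /dev/xvdb
    lvcreate -L 10G -n oraclelv oraclevg
    lvcreate -l 100%FREE -n gridlv oraclevg   # remainder, just under 10 G
    mkfs.ext4 /dev/oraclevg/oraclelv
    mkfs.ext4 /dev/oraclevg/gridlv
    mkdir -p /u01/app/oracle /u01/crs/11.2.0.3
    mount /dev/oraclevg/oraclelv /u01/app/oracle
    mount /dev/oraclevg/gridlv /u01/crs/11.2.0.3
    # add matching /etc/fstab entries to make the mounts persistent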

Oracle 11.2.0.3: can’t be long now!

Update: well, it’s out, actually. See the comments below. However, the certification matrix hasn’t been updated, so it’s anyone’s guess whether Oracle Linux and Red Hat 6 are certified at this point in time.

Tanel Poder announced it a few days ago, but 11.2.0.3 must be ready for release very soon. It has even been spotted on the “latest patchset” page on OTN, only to be removed quickly. After another tweet from Laurent Schneider, it was time to investigate what’s new. The easiest way is to point your browser to tahiti.oracle.com and type “11.2.0.3” into the search box. You are going to find a wealth of new information!

As a RAC person at heart, I am naturally interested in RAC features first. The new features I spotted in the Grid Infrastructure installation guide for Linux are listed here:

http://download.oracle.com/docs/cd/E11882_01/install.112/e22489/whatsnew.htm#CEGJBBBB

Additional information for this article was taken from the “New Features” guide:

http://download.oracle.com/docs/cd/E11882_01/server.112/e22487/chapter1_11203.htm#NEWFTCH1-c

So the question is: what’s in it for us?

ASM Cluster File System

As I expected, there is now support for ACFS and ADVM on Oracle’s own kernel. This has been overdue for a while. I remember how surprised I was when I installed RAC on Oracle Linux 5.5 with the UEK kernel, only to see that infamous “…not supported” output when the installer probed the kernel version. Supported kernels are UEK 2.6.32-100.34.1 and subsequent updates to the 2.6.32-100 kernel series, on both Oracle Linux 5 and 6.
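
A quick way to verify this on a given box: Grid Infrastructure ships the acfsdriverstate utility, which reports whether the running kernel is supported. A minimal check, assuming the Grid home layout I use elsewhere on this blog, might look like this:

    # Do the ACFS/ADVM drivers support the running kernel? (run as root)
    uname -r
    /u01/crs/11.2.0.3/bin/acfsdriverstate supported
    /u01/crs/11.2.0.3/bin/acfsdriverstate version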

A big surprise is support for ACFS for SLES; I thought that was pretty much dead in the water after all the messing around from Novell. ACFS has always worked on SLES 10 up to SP3, but it never did for SLES 11. The requirement is SLES 11 SP1, and it has to be 64-bit.

There are quite a few additional changes to ACFS. For example, it’s now possible to use ACFS replication and tagging on Windows.

Upgrade

If one of the nodes in a cluster being upgraded fails during the execution of the rootupgrade.sh script, the upgrade can now be completed with the new “force” flag.
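
A sketch of the idea, assuming the Grid home from the previous post; check the documentation for the exact procedure before relying on it.

    # Run as root on a surviving node to force completion of the upgrade
    /u01/crs/11.2.0.3/rootupgrade.sh -force
    # afterwards, confirm the cluster's active version
    crsctl query crs activeversion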

Random Bits

I found the following note on MOS regarding the time zone file: Actions For DST Updates When Upgrading To Or Applying The 11.2.0.3 Patchset [ID 1358166.1]. I suggest you have a look at that note, as it mentions a new pre-upgrade script you need to download, as well as corrective actions for 11.2.0.1 and 11.2.0.2. I’m sure it’s going to be mentioned in the 11.2.0.3 patch readme as well.
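
Before applying the patchset it is worth checking which time zone file the database currently uses; v$timezone_file exists in 11g. A quick check from the shell might look like this:

    # Report the time zone file version currently in use (run as the oracle user)
    echo 'select version from v$timezone_file;' | sqlplus -s / as sysdba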

There are also changes expected for the dreaded mutex problem in busy systems: MOS note WAITEVENT: “library cache: mutex X” [ID 727400.1] lists 11.2.0.3 as the release where many of the related problems are fixed. Time will tell if they are…

Further enhancements are focused on Warehouse Builder and XML. SQL Apply and LogMiner have also been enhanced, which is good news for users of Streams and logical standby databases.

Summary

It is much too early to say anything else about the patchset. Quite a few important documents don’t have a “new features” section yet, including the Real Application Clusters Administration and Deployment Guide, which I’ll cover as soon as it’s out. From a quick glance at the still unreleased patchset, it seems to be less of a radical change than 11.2.0.2 was, which is a good thing. Unfortunately, the certification matrix hasn’t been updated yet; I am very keen to see support for Oracle/Red Hat Linux 6.x.

RAC and HA SIG meeting, Royal Institute of British Architects, September 2011

I have been looking forward to the RAC & HA SIG for quite some time. Unfortunately I wasn’t able to make the spring meeting, which must have been fantastic. For those who haven’t heard about it, this was the last time the SIG met under its current name, as Dave Burnham, the chair, pointed out in his welcome note.

The RAC & HA SIG is going to merge with the Management & Infrastructure SIG to form the Availability, Management and Infrastructure SIG, potentially reducing the number of meetings to three for the combined SIG. This should increase the number of attendees and also offer a larger range of topics. I am looking forward to the new format and am hoping for greater appeal.

Partly due to the transport problems that hit London today (the Victoria Line was severely delayed, and apparently overground services were affected as well), the number of attendees was lower than expected.

The following are notes I took during the sessions, and as I’m not the best multi-tasker in the world, there may be some grammatical errors and typos in this post, for which I apologise in advance.

Collecting and analysing Exadata cell metrics

Recently I have been asked to write a paper about Exadata Flash Cache and its impact on performance. This was a challenge to my liking! I won’t reproduce the paper I wrote, but I’d like to demonstrate the methods I used to get more information about what’s happening in the software.

Hardware

The Exadata Flash Cache is provided by four F20 PCIe cards in each cell. Currently, the PCI Express bus is the most potent way to realise the potential of flash storage in terms of latency and bandwidth; SSDs attached to a standard storage array are slowed down by Fibre Channel as the transport medium.

Each of the F20 cards holds 96 GB of raw space, totalling 384 GB of capacity per storage cell; the usable capacity is slightly less. Each F20 card is subdivided into four so-called FMODs, or solid-state flash modules, which are visible to the operating system through the standard SCSI sd driver.

CellCLI can also be used to view the FMODs, using the “LIST PHYSICALDISK” command. The output is slightly different from that of the spinning disks, as the flash modules are reported in the SCSI driver’s [host:bus:target:lun] notation.
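
To illustrate, the flash modules and the current flash cache metrics can be listed on the cell along these lines; the attribute list is just an example, pick whatever interests you.

    # List the FMODs as seen by the cell (run on the storage cell)
    cellcli -e "list physicaldisk where diskType = 'FlashDisk' attributes name, physicalSize, status"
    # Current flash cache metrics, the subject of the paper
    cellcli -e "list metriccurrent where objectType = 'FLASHCACHE'"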

Now please don’t be tempted to take the FMODs and transform them into celldisks!

Compiling RLWRAP for Oracle Linux 6.1 x86-64

rlwrap is a great tool to enhance the user experience with SQL*Plus by allowing it to make use of the GNU readline library. Search the Internet for rlwrap and sqlplus and you should get plenty of hits explaining how awesome that combination is.

Why am I writing this? I am currently in the process of upgrading my lab reference database server to Oracle Linux 6.1, and along the way I wanted to install the rlwrap tool to get readline support with SQL*Plus. It’s actually quite simple; everything I did after installing the operating system with the “database server” package group is described in the few steps that follow this introduction.
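
To give you an idea of what is coming, the whole procedure boils down to something like the sketch below. The rlwrap version number is a placeholder; grab the current release from the project’s website.

    # Prerequisites for the build (as root)
    yum install -y gcc make readline-devel
    # Download, unpack and build rlwrap; 0.37 is a made-up version
    tar xzf rlwrap-0.37.tar.gz
    cd rlwrap-0.37
    ./configure
    make
    make install              # as root; installs to /usr/local by default
    # finally wrap SQL*Plus, e.g. in the oracle user's ~/.bashrc
    alias sqlplus='rlwrap sqlplus'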