Monthly Archives: June 2012

How to set up data guard broker for RAC

This is pretty much a note to myself on how to set up Data Guard broker for RAC.

UPDATE: Please note that a lot of this has changed in 12.1, as described in a different blog post of mine (Little things worth knowing: Data Guard Broker Setup changes in 12c). If you are looking for information on how to implement this with 12.1 then please continue on the other post. If you are on Oracle 11.2 then please read on :)

The Test Environment

The tests have been performed on Oracle Linux 5.5 with the Red Hat kernel; Oracle was version 11.2. Sadly my lab server didn’t support more than 2 RAC nodes, so everything has been done on the same cluster. It shouldn’t make a difference though (if it does, please let me know).

WARNING: there are some rather deep changes to the cluster here, so be sure to have proper change control around such amendments as they can cause outages! Nuff said.

Unfortunately I didn’t take notes of the configuration as it was before, so the post is going to be a lot shorter and less dramatic, but it’s useful as a reference (I hope) nevertheless. Now what’s the situation? Imagine you have a two node RAC cluster with separation of duties in place: “grid” owns the GRID_HOME, while “oracle” owns the RDBMS binaries. Imagine further you have two RAC databases, ORCL and STDBY. STDBY has only just been duplicated for standby, so there is nothing in place which links the two together.
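To give an idea of where this is heading: linking the two together eventually means creating a broker configuration in dgmgrl. A minimal sketch, assuming dg_broker_start is already set to true on both databases and the connect identifiers resolve; the configuration name rac_dg is made up for this example:

```
DGMGRL> CONNECT sys@ORCL
DGMGRL> CREATE CONFIGURATION 'rac_dg' AS
          PRIMARY DATABASE IS 'ORCL'
          CONNECT IDENTIFIER IS ORCL;
DGMGRL> ADD DATABASE 'STDBY' AS
          CONNECT IDENTIFIER IS STDBY
          MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
```

For RAC the broker configuration files (dg_broker_config_file1/2) should additionally point to shared storage, so that all instances of a database see the same configuration.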


Little things worth knowing: static and dynamic listener registration

As part of a recent project to remove a vulnerability in relation to CVE-2012-1675 it became apparent that there are certain misconceptions around dynamic and static listener registration which are hard to get rid of. The below is applicable to single instance Oracle only!

Now let’s start with a quiz: what does the following output imply?

$ lsnrctl status

LSNRCTL for Linux: Version - Production on 20-JUN-2012 11:22:01

Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=server1)(PORT=1571))
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version - Production
Start Date                15-JUN-2012 11:14:27
Uptime                    5 days 0 hr. 7 min. 34 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/server1/listener/alert/log.xml
Listening Endpoints Summary...
Services Summary...
Service "ORA11202" has 1 instance(s).
  Instance "ORA11202", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
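The giveaway is the status: UNKNOWN means the service was registered statically via a SID_LIST entry in listener.ora, so the listener cannot tell whether the instance is actually up. Dynamically registered instances report READY instead. A sketch of such a static entry, reusing the instance name and ORACLE_HOME from the output above:

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ORA11202)
      (ORACLE_HOME = /u01/app/oracle/product/11.2/dbhome_1)
    )
  )
```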


How to use vi-style editing in SQL*Plus

This post is nothing new; I created it after a little discussion on Twitter about how to use readline support in SQL*Plus. The idea is not new either, and I have compiled and used rlwrap for quite some time.

At the time, Frits Hoogland asked me why I didn’t use the EPEL package, and I had to admit to myself that I didn’t know the Extra Packages for Enterprise Linux repository at all. There is more to rlwrap and Linux that I didn’t know, but first things first.

Installing rlwrap from EPEL

This is really simple: you can either add the EPEL repository to your /etc/yum.repos.d/ directory or simply download the rlwrap package and install it via RPM. A simple wget on your host does the trick. If you’d like to use a proxy, you can set environment variables as shown here:

$ export http_proxy=http://your.proxy.server:proxyPort/
$ export https_proxy=https://your.proxy.server:proxyPort/

Depending on your release of Enterprise Linux, you can find the rlwrap package here:

Then wget should download the file for you; at the time of writing, 0.37 was current.
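Putting it together, the whole procedure is just a few commands. A sketch, assuming Enterprise Linux 5 on x86-64; the exact mirror URL and package version will differ, so check the EPEL repository first:

```
# fetch the rlwrap RPM from an EPEL mirror (example URL, adjust to your release)
wget http://download.fedoraproject.org/pub/epel/5/x86_64/rlwrap-0.37-1.el5.x86_64.rpm

# install it (as root)
rpm -Uvh rlwrap-0.37-1.el5.x86_64.rpm

# wrap SQL*Plus with readline support; add this to ~/.bashrc to make it permanent
alias sqlplus='rlwrap sqlplus'
```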


Kernel UEK 2 on Oracle Linux 6.2 fixed lab server memory loss

A few days ago I wrote about my new lab server and the misfortune with kernel UEK (aka 2.6.32 + backports). It simply wouldn’t recognise the memory in the server:

# free -m
             total       used       free     shared    buffers     cached
Mem:          3385        426       2958          0          9        233
-/+ buffers/cache:        184       3200
Swap:          511          0        511

Ouch. Today I gave it another go, especially since my new M4 SSD has arrived. My first idea was to upgrade to UEK2. And indeed, following the instructions on Wim Coekaerts’s blog (see references), it worked:

[root@ol62 ~]# uname -a
Linux ol62.localdomain 2.6.39-100.7.1.el6uek.x86_64 #1 SMP Wed May 16 04:04:37 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@ol62 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         32221        495      31725          0          5         34
-/+ buffers/cache:        456      31764
Swap:          511          0        511

Note the 2.6.39-100.7.1! UEK2 is actually based on a more recent mainline kernel (past 3.0), but to preserve compatibility with the large amount of software that parses the kernel revision number as a 3-tuple, Oracle decided to stick with 2.6.39. But then the big distributions don’t really follow the mainline kernel numbers anyway.
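To illustrate the compatibility argument: plenty of scripts parse the output of uname -r assuming exactly three dot-separated components. A hedged sketch (parse_kernel is a made-up helper, not anything shipped by Oracle):

```shell
# A lot of scripts naively split the kernel release into major.minor.patch.
# Strip the "-release" suffix first, then split the remainder on dots.
parse_kernel() {
    ver=${1%%-*}
    IFS=. read -r major minor patch <<< "$ver"
    echo "major=$major minor=$minor patch=$patch"
}

# a UEK2 release string yields the expected three components
parse_kernel "2.6.39-100.7.1"   # prints: major=2 minor=6 patch=39

# a mainline 3.x string leaves the patch field empty, which is exactly
# the kind of breakage sticking with 2.6.39 avoids
parse_kernel "3.0-1"            # prints: major=3 minor=0 patch=
```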

Now, can anyone tell me whether UEK2 is out of beta? I know it’s not supported for the database yet, but it’s a cool kernel release and I can finally play around with the “perf” utility Kevin Closson and Frits Hoogland have been talking about so much recently.

The new lab server has arrived

Update: the Oracle Linux memory problem is solved.

As part of an on-going research project I have invested a little bit of money in a new lab server. Sadly my Phenom II X6 with 16GB RAM was not on the HCL for ESXi 5, which I required. Before I knew that ESXi could itself be virtualised I looked at a number of options.

After much consideration and calculation I decided to buy a machine. I had already rented a Core i7-2600 with 32GB of RAM from Hetzner (superb value for money for their EX4, check it out!) but needed something more flexible in terms of OS installation and virtualisation. Although the team at Hetzner is most helpful and skilled, installing ESXi or Oracle VM 3.1.x would have been quite a task, especially hardening the setup!

For some time I thought that it was no longer possible to virtualise certain Oracle setups on an ultra-mobile laptop. I managed to run a 2 node 10.2 cluster with ASM on a ThinkPad T61 at the time, but when 11.2 came out, memory requirements pushed laptops to their limit. Sure, there are laptops that can be equipped with more than 16GB RAM, but they can hardly be carried around. I needed a good compromise between battery endurance and weight, and settled on the ThinkPad X220 with a Core i5 2540M. This is a great laptop with 8GB RAM and a 320GB spinning disk. Thinking about it now I should have bought an SSD instead, but that can always be fitted later.

But I digress; back to the lab machine. The choices were either a consumer board or a server/workstation setup. The consumer boards were quickly ruled out, as they are single socket only and I wanted a bit more.

The server boards for x86-64 can be separated into AMD and Intel. With AMD declining rapidly since its new processor architecture couldn’t compete with the latest Intel chips, AMD has to stay in the market by offering a price advantage. The current 6200 series Interlagos chips come with 8 to 16 cores per socket, although I should add that the “cores” in the 6200 processor are neither really cores nor threads; that’s a different story, better told by Wikipedia. The alternative is the Sandy Bridge E5-2600 series, better known as the processor to die for (not literally). This is currently some of the fastest silicon you can get, and I really would have liked one. However, the price tag associated with one of them is quite high, rightly so! Also, the E5-2600 Xeon is made for dual socket workstations, and the new 4600 series for quad socket hadn’t been released yet at the time I made the decision to buy. And even then I wouldn’t have been able to afford one anyway.

As a regular reader of c’t, a very good German IT magazine, I noticed Delta Computers in Reinbeck near Hamburg. They sell HPC kit to academic institutions but also offer hardware to everyone else. I got in touch (the website might need an overhaul) and quickly configured my system:

  • AMD 6238 Interlagos with 2 sockets @ 12 “cores” each
  • 32 GB RAM (the upgrade to 64 GB would only have been an extra 200 GBP)
  • No internal disks, I have plenty of these
  • Supermicro H8DG6 board with G34 chipset

So what’s nice about it? The price to start with, but also the IPMI 2.0 interface and the onboard SAS and SATA interfaces (no SATA 6G though). The case has a disk cage into which I can fit 8 SATA disks in caddies, without any access to the mainboard or fumbling with screws.

I should also mention a little problem though: noise. The case has 5 fans attached that I could see: 3 take air in from the front and channel it across the CPUs, the north bridge and the memory, while two small fans suck the air out of the case. They are small in diameter, and I had to find the BIOS setting to reduce their RPM. The familiar airplane-taking-off noise greeted me when I first powered the server up, and I thought I couldn’t possibly use it while anyone else was in the house. The case weighs 35kg and is a 4U workstation. There is plenty of room for PCIe v2 x16 cards (3 I think), and the board takes up to 256GB of memory.

The IPMIView software from Supermicro allows me to control the server via KVM-over-IP. I can even mount ISO images over the network in place of the DVD-ROM drive. It is possible to administer the box without ever plugging a keyboard, mouse, monitor or physical boot device into it.

ESXi 5 installs beautifully and is fully Interlagos-compatible. Some guides on the Internet suggest a little tweaking of the advanced settings for ESXi to enable the deep C-states, allowing idle cores to sleep.

On the downside I couldn’t make Oracle Linux 6.2 with kernel UEK detect all the memory; in fact, of my 32GB only 3 remained. This seems to be a regression introduced around 2.6.28 which is supposedly fixed in Red Hat 6.2. Supermicro’s site also has a list of which operating systems are optimised for the Interlagos chips; on systems not aware of the 6200 architecture there is a possibility of excessive cache invalidations. ESXi 5 is aware of it, so I am sticking with it for the time being.

After almost a month I have to say I’m very happy with the server. It’s noisy, OK, but headphones and good music make it bearable. The performance is great, and I have yet to see how well it copes with my 4 node RAC cluster and OEM 12.1 Cloud Control.