UKOUG conference 2009 - session review part 2

This is the second part of my review of UKOUG conference 2009, day 1. Check out the first part as well! This picks up exactly where I left off in part 1, after Tom Kyte's session.

Virtual Insanity

I remained in hall 1 to see James Morle's great presentation. I have to say that even if you aren't familiar with the subject of his presentations, you should go and see him; he's such a great presenter. The prank of the day was a bottle of Oracle wine, distributed into 5 glasses ("everyone help yourself to your portion of Oracle") to simulate the idea of virtualisation. James then offered insights into some of the VMware internals alongside some competing offerings, mainly from Oracle and Citrix (Xen) and Red Hat (KVM and RHEV). It seems the golden age of para-virtualisation is over: AMD and Intel have released so many features in their processors for hardware-assisted virtualisation that VMware's offering has caught up performance-wise. Personally I still love Xen (and I am writing this article on a virtual machine!) because it gives me all the performance I need on cheap hardware. I also don't think the Nehalem processors will make it into laptops any time soon, so my venerable openSuSE 11.1 will support me for some more time. When it comes to performance, anything running on VMware in userland will roughly match a physical box, but performance drops as soon as you enter kernel mode, due to the modifications VMware has to make in order for multiple OS instances to coexist. Again, this wasn't tested on VT-d or IOMMU-capable processors, so your experience might be different. Overcommitting memory might work well with workloads other than Oracle, but James's advice is not to use this feature in production.
All in all, a very balanced presentation with the usual laugh at the beginning.

After this presentation I had some lunch and managed to see the folks from CERN, which was again interesting. A great many of them were there to present, and I briefly met Eva, their Streams specialist, who is responsible for pushing a lot of data from their site (tier 0) to sites all around the world (tier 1) for distributed computing. If I remember correctly this is part of the EGEE network, but I might be wrong there.

RAC round table

This was chaired by Julian Dyke and David Burnham (actually Julian had lost his voice, so David took over) and saw great attendance from some renowned Oracle specialists: Piet de Visser, Phil Davies, David Burnham, Alex Gorbachev, Luca Canali, Mark Bobak, Jonathan Lewis, to name just a few.

This was my first round table and I didn't know what to expect. I greatly enjoyed the atmosphere, up to the point where the discussion about block-level replication started to drag on. The questions (and some answers) were:

  • Block corruption with AIX 6.1 on p595 LPARs when accessing a freshly created database (in ASM). This requires a lot of tracing, from the net*8 layer all the way up to an strace of the oracle process, and can't be diagnosed without access to the system. The block corruption didn't result from a restore from a compressed backup; the system was freshly created via dbca.
  • Update from Larry Carpenter about the future of Data Guard/Streams and Golden Gate. Very interesting stuff: in essence, Streams won't go away and Golden Gate receives Log Miner code. Golden Gate will remain an independent product and won't be assimilated into the Streams group.
  • There are plans in Oracle to extend RMAN to be able to restore across endian boundaries (not to be confused with the convert command!). Before that can happen, Phil Davies shared details of a project where Golden Gate could be used to cut downtime a lot, capturing changes while the RMAN convert was still running on the destination platform.
  • Using SAN block-level replication: mirroring Oracle home and database to a remote host still requires a full license, unless you don't mount it (or mount it only when the primary site is completely down).
  • Phil Davies: does anyone know why, in Clusterware active/passive setups, the cluster database resource doesn't write a trace file in $CRS_HOME/log/hostname/racg? I didn't, and neither did anyone else. The question didn't cover 11.2, and I haven't had time to test cold failover clusters with 11.2 yet.
  • RAC One Node: really just a cold failover; the additional benefit is mainly for maintenance. For RAC One Node you have to license only one node (apparently). Aimed primarily against VMware.

I will add yet another post about the final two sessions I attended, on Tom Kyte's top 10 11.2 features and Wolfgang Breitling's seeding statistics.