Posted by Martin Bach on June 17, 2013
I just realised this week that I haven’t really detailed anything about policy managed RAC databases. I remember doing some research about server pools way back when 11.2 came out. I promised to spend some time looking at the new type of database that comes with server pools, the policy managed database, but somehow never got around to it. Since I’m lazy I’ll refer to these databases as PMDs from now on as it saves a fair bit of typing.
So how are PMDs different from Administrator Managed Databases?
First of all, PMDs are only available with RAC, i.e. in a multi-instance active/active configuration. Before 11.2, RAC required you to tie an Oracle instance to a cluster node, which is why you see instance prefixes in a RAC spfile. Here is an example from my lab 11.2 cluster:
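The original listing did not survive here, but the instance-specific entries in the spfile of a two node administrator managed database typically look like this (instance and tablespace names are examples, not taken from the original):

```
RAC1.instance_number=1
RAC2.instance_number=2
RAC1.thread=1
RAC2.thread=2
RAC1.undo_tablespace='UNDOTBS1'
RAC2.undo_tablespace='UNDOTBS2'
```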
Note that the instance_number, thread and undo tablespace are manually (=administrator) managed. If these aren’t set or configured incorrectly you will run into all sorts of fun. Read the rest of this entry »
Posted in 11g Release 2, Linux, RAC | Tagged: 11.2, policy managed database, RAC, server pool | 2 Comments »
Posted by Martin Bach on May 22, 2013
Just a short notice to those interested that I’m very proud to be in the lineup for Enkitec’s Extreme Exadata Expo. The event takes place August 5-6, 2013 and is held in the Four Seasons Resort & Spa, Irving, Texas. There is plenty of time for you to register.
I was really sorry I missed out last year but this time I’m glad to participate and attend!
The list of great speakers is too long to reproduce here; see for yourself who is coming to Dallas this August and why this event is unmissable.
I’m hoping to see you there!
Posted in Exadata, Public Appearances | Tagged: Dallas, E4, Enkitec | Leave a Comment »
Posted by Martin Bach on May 17, 2013
In my last post about large pages in 11.2.0.3 I promised a little more background information on how large pages and NUMA are related.
Background and some history about processor architecture
For quite some time now the CPUs you get from both AMD and Intel have been NUMA, or more precisely cache coherent NUMA, designs. Each processor has its own “local” memory directly attached to it; in other words, memory access is not uniform across all CPUs. This isn’t really new: Sequent pioneered the concept on x86 a long time ago, albeit in a different context. You really should read Scaling Oracle 8i by James Morle, which has a lot of excellent NUMA-related content in it, with contributions from Kevin Closson. It doesn’t matter that the title says “8i”: most of it is as relevant today as it was then.
So what is the big deal about NUMA architecture anyway? To explain NUMA and why it is important to all of us, a little more background information is in order.
Some time ago processor designers and architects of industry standard hardware could no longer ignore the fact that the front side bus (FSB) had become a bottleneck, for two reasons: it was a) too slow and b) too much data had to travel over it. As a direct consequence, DRAM is now attached directly to the CPUs. AMD did this first with its Opteron processors in the AMD64 micro-architecture, followed by Intel’s Nehalem micro-architecture. By removing the requirement for data retrieved from DRAM to travel across a slow bus, latencies were significantly reduced.
Now imagine that every processor has a number of memory channels to which DDR3 (DDR4 could arrive soon!) SDRAM is attached. In a dual socket system, each socket is responsible for half the memory of the system. To allow one socket to access the other socket’s half of memory, some kind of interconnect between processors is needed. Intel has opted for the Quick Path Interconnect; AMD (and IBM for the p-Series) use HyperTransport. This is (comparatively) simple when you have few sockets: with up to four, each socket can connect directly to every other one without any tricks. With eight sockets it becomes more difficult. If every socket can communicate directly with its peers, the system is said to be glue-less, which is beneficial. The last glue-less production system Intel released was based on the Westmere architecture. Sandy Bridge (current until approximately Q3/2013) doesn’t have an eight-way glue-less variant, and this is exactly why you get Westmere-EX in the X3-8, and not Sandy Bridge as in the X3-2.
Anyway, your system will have local and remote memory. Most of us are not going to notice this at all, since there is little point in enabling NUMA on systems with two sockets. Oracle still recommends enabling NUMA only on 8-way systems, which is probably the reason the oracle-validated and preinstall RPMs add “numa=off” to the kernel command line in your GRUB boot loader.
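For reference, the resulting kernel command line in /boot/grub/grub.conf looks roughly like this after the preinstall RPM has run (kernel version and device names here are illustrative, not taken from a real system):

```
title Oracle Linux Server (2.6.39-400.17.1.el6uek.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.39-400.17.1.el6uek.x86_64 ro root=/dev/mapper/vg_root-lv_root numa=off
        initrd /initramfs-2.6.39-400.17.1.el6uek.x86_64.img
```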
Read the rest of this entry »
Posted in Linux, VLDB | 3 Comments »
Posted by Martin Bach on May 13, 2013
Large pages in Linux are a really interesting topic for me, since I really like Linux and enjoy trying to understand how it works. Large pages can be very beneficial for systems with large SGAs, and even more so for those with a large SGA and lots of connected user sessions.
I have previously written about the benefits and usage of large pages in Linux here:
As you may know, there is a change to the init.ora parameter “use_large_pages” in 11.2.0.3. The parameter can take these values:
SQL> select value,isdefault
2 from V$PARAMETER_VALID_VALUES
3* where name = 'use_large_pages'
There is a new value named “auto” that didn’t exist prior to 11.2.0.3. The intention is to create large pages at instance startup if possible, even if /etc/sysctl.conf doesn’t have an entry for vm.nr_hugepages at all. The risk, though (just as with dynamically creating large pages by echoing values into /proc/sys/vm/nr_hugepages), is that you get fewer than you expect. Maybe even 0. Read the rest of this entry »
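If you would rather set vm.nr_hugepages explicitly instead of relying on “auto”, the arithmetic is simple: divide the SGA size by the huge page size and round up. This little sketch (not Oracle’s official hugepages_settings.sh script, just an illustration of the calculation) shows the idea:

```python
import math

def nr_hugepages(sga_bytes: int, hugepage_kb: int = 2048) -> int:
    """Return the vm.nr_hugepages value needed to back an SGA of
    sga_bytes with huge pages of hugepage_kb kilobytes each
    (2 MB is the default huge page size on x86-64).
    Rounds up so the whole SGA fits into large pages."""
    return math.ceil(sga_bytes / (hugepage_kb * 1024))

# A 20 GB SGA backed by 2 MB pages needs 10240 huge pages.
print(nr_hugepages(20 * 1024**3))  # -> 10240
```

The resulting number goes into /etc/sysctl.conf as vm.nr_hugepages; remember that huge pages must be available as contiguous memory, which is why setting them at boot time is far more reliable than creating them on a running system.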
Posted in Linux | 4 Comments »
Posted by Martin Bach on April 29, 2013
Some days are just too good to be true :) I ran into an interesting problem trying to install 11.2.0.3.0 Grid Infrastructure for a two node cluster. The storage was presented via iSCSI, which turned out to be a blessing and the inspiration for this blog post. So far I haven’t found out how to create “shareable” LUNs in KVM the way I did successfully with Xen. I wouldn’t recommend iSCSI for anything besides lab setups though. If you want network based storage, go with 10GBit/s Ethernet and either FCoE or (direct) NFS.
Here is my setup. Storage is presented in 3 targets using tgtd on the host:
- Target 1 contains 3×2 GB LUNs for OCR and voting disks in normal redundancy.
- Target 2 contains 3×10 GB LUNs for +DATA
- Target 3 contains 3×10 GB LUNs for +RECO
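A tgtd configuration along these lines in /etc/tgt/targets.conf would produce that layout (the IQNs and backing device paths here are hypothetical, not the ones from my lab):

```
<target iqn.2013-04.de.example:ocrvote>
    backing-store /dev/vg_iscsi/lv_ocr1
    backing-store /dev/vg_iscsi/lv_ocr2
    backing-store /dev/vg_iscsi/lv_ocr3
</target>
<target iqn.2013-04.de.example:data>
    backing-store /dev/vg_iscsi/lv_data1
    backing-store /dev/vg_iscsi/lv_data2
    backing-store /dev/vg_iscsi/lv_data3
</target>
<target iqn.2013-04.de.example:reco>
    backing-store /dev/vg_iscsi/lv_reco1
    backing-store /dev/vg_iscsi/lv_reco2
    backing-store /dev/vg_iscsi/lv_reco3
</target>
```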
The iSCSI initiators are Oracle Linux 6.4 guests on KVM, with the host running OpenSuSE 12.3 and providing the iSCSI targets. Yes, I know I’m probably the only Oracle DBA running SuSE, but in my defence I have a similar system with Oracle Linux 6.4 throughout, and both work.
So besides the weird host OS there is nothing special here. Since I’m sometimes lazy and don’t particularly like udev, I decided to use ASMLib for device name persistence on the iSCSI LUNs. This turned out to be crucial; otherwise I’d never have written this post.
Read the rest of this entry »
Posted in 11g Release 2, KVM | Tagged: 4k sector size, root.sh | 3 Comments »
Posted by Martin Bach on April 24, 2013
So this is a little bit of a plug for myself and Enkitec, but I’m running my Grid Infrastructure And Database High Availability Deep Dive Seminars again for Oracle University. This time the events are online, so there is no need to come to a classroom at all.
Here is the short description of the course:
Providing a highly available database architecture fit for today’s fast changing requirements can be a complex task. Many technologies are available to provide resilience, each with its own advantages and possible disadvantages. This seminar begins with an overview of available HA technologies (hard and soft partitioning of servers, cold failover clusters, RAC and RAC One Node) and complementary tools and techniques to provide recovery from site failure (Data Guard or storage replication).
In the second part of the seminar, we look at Grid Infrastructure in great detail. Oracle Grid Infrastructure is the latest incarnation of the Clusterware HA framework which successfully powers every single 10g and 11g RAC installation. Despite its widespread implementation, many of its features are still not well understood by its users. We focus on Grid Infrastructure, what it is, what it does and how it can be put to best use, including the creation of an active/passive cold failover cluster for web and database resources.
If you are interested I would like to invite you to head over to the Oracle University website here which has a more extensive synopsis and all the detail you need:
UPDATE: I received several emails and comments saying that the above link does not work, and I couldn’t reproduce this until today. It appears to be an issue with the country selection: if you have USA selected in the top right corner the link won’t work, while switching to United Kingdom (my preference) will fetch the course detail. I don’t quite understand why that is the case, since the class is virtual and not tied to a country…
I hope to hear from you during the course!
Posted in 11g Release 2, Linux, Public Appearances | 7 Comments »
Posted by Martin Bach on April 23, 2013
This might be something very obvious to the reader, but I had an interesting revelation recently when implementing parallel_degree_limit_p1 in a resource consumer group. My aim was to prevent users mapped to the consumer group from executing any query in parallel. The environment is fictional, but let’s assume that maintenance operations, for example, leave indexes and tables decorated with a parallel x attribute. Another common case is restricting PQ resources to prevent users from consuming all of the machine’s resources.
This can happen when you rebuild an index in parallel to speed the operation up, for example. The DOP stays with the index after the maintenance operation, and you have to explicitly set it back:
SQL> alter index I_T1$ID1 rebuild parallel 4;
SQL> select index_name,degree from ind where index_name = 'I_T1$ID1';
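The consumer group side of the setup can be sketched with the DBMS_RESOURCE_MANAGER package; the plan and group names below are made up for illustration, and parallel_degree_limit_p1 => 1 is the directive that caps the degree of parallelism for the group:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'NO_PQ_GROUP',
    comment        => 'users not allowed to run parallel queries');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DAYTIME_PLAN',
    comment => 'limit PQ for ad-hoc users');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                     => 'DAYTIME_PLAN',
    group_or_subplan         => 'NO_PQ_GROUP',
    comment                  => 'serial execution only',
    parallel_degree_limit_p1 => 1);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAYTIME_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everyone else');
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

Users then need to be mapped to NO_PQ_GROUP and the plan activated via the resource_manager_plan parameter for the limit to take effect.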
Read the rest of this entry »
Posted in 11g Release 2, Performance | Tagged: dbrm, optimizer, Oracle, pq, px | 4 Comments »
Posted by Martin Bach on April 21, 2013
The annual conference held by the Oracle User Group in Norway has once again been just great. It was the second year I went and I have to admit that it was every bit as good as last year, and that’s holding a very high standard.
The combination of great hosts, great speakers and a wonderful atmosphere makes this one of the best conferences to attend in Europe. The added benefit of being on a boat makes it a great opportunity to meet the speakers and hang out with them during dinner and after the sessions. Unfortunately I had to leave a day early and am writing these lines on a train back home.
Read the rest of this entry »
Posted in Public Appearances | Leave a Comment »
Posted by Martin Bach on March 20, 2013
I had to think of @OyvindIsene, a great ambassador of the Norwegian Oracle User Group when I typed the heading for this post. Unlike him I have not actively been looking for new challenges but sometimes things just develop, and in my case that was a great turn of events. I am very happy to have signed on the dotted line and in a couple of weeks will join Enkitec in Europe.
How did that happen? During an Oracle conference I met Andy Colvin together with some of his colleagues during a break in the busy schedule. I already knew and respected Enkitec as a great company with lots of seriously experienced DBAs. I feel fortunate to actually know some of them already from email and other social media exchanges.
Andy and I have exchanged a few tweets in the past and I really like his blog, so I was curious to meet him in person. I haven’t yet had the opportunity to speak at an American conference, so anytime someone I know from the other side of the Atlantic comes to Europe I try to meet up. I had a great time but unfortunately had to run since my talk started a few minutes later. It was quite funny actually, although I’m not sure my presentation lived up to my own expectations. The conversations I had made a lasting impression on me.
Over the course of the following months we remained in contact, and a little later I had the great pleasure of meeting Kerry Osborne together with Andy; that was when I seriously started thinking I wanted to join a team I admire so much. Now with all the paperwork done and dusted, and the contract signed, I can’t wait to get started.
Posted in Uncategorized | 6 Comments »
Posted by Martin Bach on January 3, 2013
For quite some time now I have been using ESXi 5 update 1 for my lab server and I’m very happy with it. In my lab environment I am not too picky about what to run and do not worry about support too much. It’s not production!
One area of concern has been support for Oracle’s own kernel: UEK, the Unbreakable Enterprise Kernel. UEK comes in two editions, the first based on 2.6.32, just like Red Hat’s kernel for Red Hat 6. The difference is that you can get UEK (2.6.32.xxx) for Oracle Linux 5.x as well, instead of the 2.6.18.xxx kernel which is otherwise the default.
Oracle’s second iteration of UEK is unsurprisingly named UEK2. It is based on mainline 3.0 but reports its version as 2.6.39.x for compatibility reasons. UEK2 has some really nice features taken from the upstream kernel and it is also supported for the Oracle database.
Until not too long ago UEK was not supported by VMware ESXi, but this has changed without me taking notice at first. Thanks to a tweet by @dba_emc2 (Allan Robertson) I learned more about the change in the support policy. One interesting blog post from VMware is found here:
This post only mentions UEK, but does not clearly state whether UEK or UEK2 or both will be supported. The VMware Compatibility Guide has more information at http://www.vmware.com/resources/compatibility/search.php
- In the search, enter “unbreakable” to be directed to the relevant certification information
- It turns out that (at the time of this writing) it is actually UEK2, which is great news for me!
- Supported versions of ESXi are 5 u1, 5u2 and 5.1 at the time of writing
- Support date is listed as 09/2012
- There are even specific installation instructions, but they don’t go over and above what you would normally do
- What’s very interesting is that the paravirtualised drivers for SCSI (vSCSI) and VMXNet3 are supported too
- You can also add virtual CPUs and memory while the VM is up (with the proper VM settings; I think hot-adding these is deactivated by default)
Enjoy! Next I will try to install Oracle Linux 6.3 – which to my knowledge is the first release to boot UEK2 by default – and install the VMware tools. Let’s see how that goes.
Posted in ESX, Linux | 3 Comments »