This is a long-overdue reply to an email Mark Bobak sent to the oracle-l mailing list. Mark's questions were about policy-managed databases, a new 11g Release 2 feature. Since I have been very interested in the same topic (and also partly because I am presenting on it at UKOUG), I dug a bit deeper.
I have to say that I am very intrigued by the concept of server pools and policy-managed databases, even though the largest cluster I have is only 3 nodes (but very powerful ones), so it is possibly not relevant for me at all.
The only other useful source of information I found besides the Oracle documentation is from the very good oracleracsig.org website:
In my opinion the whole concept of GNS, policy-managed databases and server pools is geared towards users with a large number of cluster nodes (definitely more than 4, more like 8-10), where you subdivide the n nodes into disjoint server pools.
There are 2 default pools after a Grid Infrastructure installation or upgrade:
- Free – any unassigned servers go here, and
- Generic – for "administrator-managed" databases
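To see what Clusterware thinks of your server pools, you can query them with either utility (a sketch; the exact output format depends on your 11.2 version):

```shell
# Show all server pools, including the default Free and Generic pools
# (run as the Grid Infrastructure owner)
crsctl status serverpool

# The srvctl view of the same information, including active server counts
srvctl status srvpool -a
```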
You can create your own server pools using srvctl or Enterprise Manager of course, even nested ones!
Administrator-managed databases are the RAC databases we already know; policy-managed databases are different, see below.
As far as I know there is an n:1 mapping between (policy-managed) databases and server pools. Each additional server pool has a name, a minimum and maximum number of servers, and an importance. Clusterware can move servers from the Free pool (or even from less important pools that hold more servers than their configured minimum) into a pool to meet its minimum size.
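As a sketch of how that looks on the command line (pool name, sizes and database name below are made up for illustration):

```shell
# Create a server pool "batch" that holds between 2 and 4 servers,
# with an importance of 10
srvctl add srvpool -g batch -l 2 -u 4 -i 10

# Assign an existing database to the pool, turning it into a
# policy-managed database
srvctl modify database -d ORCL -g batch
```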
There are also changes to services. For policy-managed databases you can only choose between uniform and singleton services, i.e. services running on all nodes of the server pool or on exactly one. More granularity is only available with administrator-managed databases. It should be noted that the documentation states that Oracle adds an undo tablespace and an online redo log thread to a policy-managed database if needed.
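A quick illustration of the two service cardinalities (service and database names are invented):

```shell
# A uniform service runs on every instance in the server pool
srvctl add service -d ORCL -s report_svc -g batch -c UNIFORM

# A singleton service runs on exactly one instance; Clusterware decides
# (and may relocate) which one
srvctl add service -d ORCL -s batch_svc -g batch -c SINGLETON
```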
In addition you can also define users (administrators) in Clusterware to further separate duties. Access Control Lists govern access to Clusterware resources (such as VIPs, ASM and database instances) and server pools. I reckon there are more than one or two hidden bugs in this, and role-separated management is turned off by default (thank god).
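If you do enable role separation, the ACLs can be inspected and changed with crsctl; a sketch (the user and pool names are assumptions for illustration):

```shell
# Show the ACL on a server pool
crsctl getperm serverpool ora.batch

# Grant a cluster user read/execute permissions on the pool
crsctl setperm serverpool ora.batch -u user:batchadmin:r-x
```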
If you have just upgraded from CRS 10.2 to 11.2 and haven't upgraded the 10g RDBMS (yet), you won't be able to make use of many of these features. All cluster nodes are "pinned", which is a prerequisite for running pre-11.2 RDBMS instances. Not all cluster nodes need to be pinned, though; you could for example use nodes 1-3 for your 10.2/11.1 RDBMS and nodes 4-10 for 11.2. Search for "crsctl pin css" for more information about this…
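The pinning state can be checked and changed roughly like this (the node name is an example; pin/unpin must be run as root):

```shell
# Show the cluster nodes; -t adds the pinned/unpinned state
olsnodes -t

# Pin a node so that pre-11.2 instances can run on it
crsctl pin css -n node4

# Unpin it again once nothing pre-11.2 depends on it
crsctl unpin css -n node4
```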
I’ll also present about this at the RAC & HA SIG in TVP (Reading) on Feb 10th :)