Monthly Archives: July 2013

Oracle Automatic Data Optimisation 12c

One of the seriously cool new features in Oracle database 12c is the automated lifecycle management option. I freely admit you can get something similar from storage vendors (EMC FAST VP is the one I know first hand, but 3PAR and others have comparable technology), but array-based tiering is not really an option for the Oracle database and Exadata. A quick word of warning: I have not even opened the licensing guide for 12c yet, and this may well be a cost option, so as always please ensure you are appropriately licensed before using this feature.

Why this post

The official documentation (the VLDB and Partitioning Guide, chapter 5) is a bit lacking, as is the SQL Language Reference, hence this post.

Oracle allows you to automate the movement of “cold” data from tier 1 storage to lower tiers, completely automatically, based on so-called ILM policies. These policies can be assigned to a table at creation time or retrofitted later. Policies can be defined at the row, segment, or group level. Two distinct policy types are possible: either compress the data in place, or move it to a different tablespace.


Compression is simple and the documentation is adequate. Consider this example:

SQL> alter table t1
  2  ilm add policy compress basic segment
  3  after 3 months of no modification;

Table altered.

Table t1 is of course range partitioned on a date, which in my opinion is the only sensible use case for ILM anyway. If you are lucky enough to be on Exadata you can use the ILM policy to request Hybrid Columnar Compression too, by the way. At the time of writing I could request “compress for archive high” on my laptop as well, but that of course is a bug and will lead to problems later. The policy above states that a segment which has not been modified in 3 months is to be compressed with BASIC compression.

Where to find information about ILM policies?

Dictionary views related to ILM management all start with DBA_ILM.
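You can list them from the dictionary itself; a minimal sketch, run as a suitably privileged user:

```sql
-- list the ILM/ADO-related dictionary views
select table_name
  from dict
 where table_name like 'DBA_ILM%'
 order by table_name;
```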


If you get the DDL for a table/partition using DBMS_METADATA.GET_DDL you will also see the ILM policies.

Heat Map Tracking

NOTE: for the ADO option to work you need to set heat_map=ON system-wide. Be careful: I haven’t tested whether doing so adds overhead. Once heat map tracking is enabled you can query v$heat_map_segment and various DBA% views to report reads, writes, index lookups and full scans on the segments under surveillance. In my testing it took a little while for the segments to appear in the views, so be patient and keep accessing the data. I used a few scheduler jobs to do so.
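As a sketch, enabling the tracking and then checking on a segment might look like this. The v$heat_map_segment column names are as I remember them from my 12.1 sandbox, so verify them against your release:

```sql
-- enable heat map tracking instance-wide (privileged session)
alter system set heat_map = on scope = both;

-- after some activity the tracked segments show up here
select object_name, subobject_name, track_time,
       segment_write, full_scan, lookup_scan
  from v$heat_map_segment
 where object_name = 'T1';
```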


Tiering is different, and here the documentation set is missing detail. The syntax diagram suggests that you can define a tiering policy based on access, modification, or creation time, as with the compression policies, but you cannot.

By pure chance I found the Oracle By Example tutorial for tiering, which is much easier to understand than the official documentation. To enable tiering, which is completely independent of compression, I used this sample:

SQL> alter table t1
  2  ilm add policy
  3  tier to ARCH segment;

Table altered.

And nothing happened. You cannot specify a condition for tiering yet, i.e. “ilm add policy tier to ARCH segment after …” will cause an error, which is actually expected behaviour.

Enter DBA_ILMPARAMETERS. In this view you see the TBS_PERCENT_USED and TBS_PERCENT_FREE parameters, which indicate when a segment is to be moved. Again, this has nothing to do with access or modification time. As soon as the tablespace containing the segment fills past the threshold, a job is started that moves the partition in question to the lower-tier tablespace.
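You can query the thresholds directly, and DBMS_ILM_ADMIN offers a way to change them. The sketch below assumes the 12.1 CUSTOMIZE_ILM procedure and its package constants; the 85/25 values are made up purely for illustration:

```sql
-- current ILM parameters, including TBS_PERCENT_USED/TBS_PERCENT_FREE
select name, value from dba_ilmparameters;

-- hypothetical example: adjust the tablespace usage thresholds
begin
  dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_used, 85);
  dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_free, 25);
end;
/
```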

It is possible to mix and match compression and tiering policies on the same table, i.e. tables can have multiple ILM policies.

When does it happen?

This was again not too clear in the documentation, but during testing it appeared that the evaluation of the ILM policies takes place when the default maintenance window opens (10 PM to 2 AM, if memory serves me right). The ILM policies are implemented as PL/SQL blocks; you can see the actual code executed in the PAYLOAD field of sys.ilmresults$. Limit your query to the current TASK_ID and/or JOB_NAME as found in DBA_ILMEVALUATIONDETAILS.
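If you don’t want to wait for the maintenance window you can evaluate a policy by hand. This sketch assumes the DBMS_ILM.EXECUTE_ILM overload that takes an owner and object name and returns a task id; check the PL/SQL Packages reference for your exact release:

```sql
-- kick off an immediate ADO evaluation for table T1
set serveroutput on
declare
  v_task number;
begin
  dbms_ilm.execute_ilm(owner => user, object_name => 'T1',
                       task_id => v_task);
  dbms_output.put_line('ILM task id: ' || v_task);
end;
/

-- then review what the evaluation decided
select task_id, policy_name, selected_for_execution, job_name
  from dba_ilmevaluationdetails
 order by task_id desc;
```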


A good walk-through, easy-to-follow example is available on “Oracle by Example”.


Flex ASM 12c in action

I have been working with Flex ASM for a little while and so far I like the feature. In a nutshell, Flex ASM removes the requirement to have one (and only one!) ASM instance per cluster node. Instead you get three ASM instances by default, no more and no less, regardless of cluster size.

Databases, which are in fact “ASM clients” as shown in v$asm_client, connect either locally or remotely, depending on the placement of the ASM instances relative to the database instances.

What’s great about Flex ASM is that a crash of an ASM instance does not pull the database instances on the host down with it. Imagine a database-as-a-service environment on, say, an X3-8. This machine comes with 8 sockets/80 cores/160 threads and 2 TB of memory, plus 8 internal disks for the Oracle binaries. This should indeed be a lot of equipment to consolidate your databases into. Now imagine what happens if your ASM instance crashes … or better not. Enter Flex ASM! You can choose the ASM mode during the installation of Grid Infrastructure 12c or enable it later; in the example below it was enabled at installation time. Warning: if you want to use Flex ASM and have pre-12c databases on the host, you can’t (really). Although you will be running Flex ASM, you _must_ have ASM instances on each host on which you intend to run a pre-12c RDBMS instance.
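To check whether a cluster actually runs in Flex ASM mode, commands along these lines should help. This is a sketch from memory of a 12c Grid Infrastructure environment, so treat the exact output as illustrative:

```shell
# reports "Flex mode enabled" on a Flex ASM cluster
asmcmd showclustermode

# shows the ASM cardinality and which nodes currently run an instance
srvctl config asm
srvctl status asm -detail
```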


Sometimes it’s the little differences that make something really cool, and I was wondering why this hasn’t made it into the Oracle dictionary before.

Have you ever asked yourself which of the 30 or so accounts in the database are maintained by Oracle, in other words are Oracle-internal and to be left alone? I have on many occasions, especially when it comes to options I do not regularly see in the database. DBA_USERS lists all accounts in the database, user-managed as well as Oracle-managed. Below is the definition of the 11g view:

SQL> desc dba_users
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 USERNAME                                  NOT NULL VARCHAR2(30)
 USER_ID                                   NOT NULL NUMBER
 PASSWORD                                           VARCHAR2(30)
 ACCOUNT_STATUS                            NOT NULL VARCHAR2(32)
 LOCK_DATE                                          DATE
 EXPIRY_DATE                                        DATE
 DEFAULT_TABLESPACE                        NOT NULL VARCHAR2(30)
 CREATED                                   NOT NULL DATE
 PROFILE                                   NOT NULL VARCHAR2(30)
 INITIAL_RSRC_CONSUMER_GROUP                        VARCHAR2(30)
 EXTERNAL_NAME                                      VARCHAR2(4000)
 PASSWORD_VERSIONS                                  VARCHAR2(8)
 EDITIONS_ENABLED                                   VARCHAR2(1)
 AUTHENTICATION_TYPE                                VARCHAR2(8)


OK, so in 10g and 11g you could check v$sysaux_occupants, join it against DBA_USERS, and draw your conclusions from there…

SQL> desc v$sysaux_occupants
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OCCUPANT_NAME                                      VARCHAR2(64)
 OCCUPANT_DESC                                      VARCHAR2(64)
 SCHEMA_NAME                                        VARCHAR2(64)
 MOVE_PROCEDURE                                     VARCHAR2(64)
 MOVE_PROCEDURE_DESC                                VARCHAR2(64)
 SPACE_USAGE_KBYTES                                 NUMBER
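The cumbersome pre-12c approach might look like the hypothetical query below: treat every schema registered in SYSAUX as Oracle-supplied. It is only an approximation, since Oracle accounts that don’t occupy SYSAUX will slip through:

```sql
-- pre-12c guesswork: flag accounts that own a SYSAUX occupant
select u.username, u.account_status,
       case when o.schema_name is null then 'N' else 'Y' end as maybe_oracle
  from dba_users u
  left outer join v$sysaux_occupants o
    on o.schema_name = u.username
 order by u.username;
```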


But that’s a bit cumbersome in my opinion. Now with Oracle 12c there is something really cool. Compare the definition of DBA_USERS with the 11g one above:

SQL> desc dba_users
 Name                                            Null?    Type
 ----------------------------------------------- -------- --------------------------------
 USERNAME                                        NOT NULL VARCHAR2(128)
 USER_ID                                         NOT NULL NUMBER
 PASSWORD                                                 VARCHAR2(4000)
 ACCOUNT_STATUS                                  NOT NULL VARCHAR2(32)
 LOCK_DATE                                                DATE
 EXPIRY_DATE                                              DATE
 DEFAULT_TABLESPACE                              NOT NULL VARCHAR2(30)
 TEMPORARY_TABLESPACE                            NOT NULL VARCHAR2(30)
 CREATED                                         NOT NULL DATE
 PROFILE                                         NOT NULL VARCHAR2(128)
 INITIAL_RSRC_CONSUMER_GROUP                              VARCHAR2(128)
 EXTERNAL_NAME                                            VARCHAR2(4000)
 PASSWORD_VERSIONS                                        VARCHAR2(12)
 EDITIONS_ENABLED                                         VARCHAR2(1)
 AUTHENTICATION_TYPE                                      VARCHAR2(8)
 PROXY_ONLY_CONNECT                                       VARCHAR2(1)
 COMMON                                                   VARCHAR2(3)
 LAST_LOGIN                                               TIMESTAMP(9) WITH TIME ZONE
 ORACLE_MAINTAINED                                        VARCHAR2(1)


In addition to the columns used for the new multi-tenancy option, you also find a flag at the bottom named ORACLE_MAINTAINED. Bingo!

SQL> select count(oracle_maintained),oracle_maintained
  2  from dba_users
  3  group by oracle_maintained;

COUNT(ORACLE_MAINTAINED) O
------------------------ -
                      35 Y
                       1 N


And voila!

SQL> select username,oracle_maintained,account_status
  2  from dba_users
  3  order by oracle_maintained,username;

USERNAME                       O ACCOUNT_STATUS
------------------------------ - --------------------------------
SCOTT                          N OPEN
ANONYMOUS                      Y EXPIRED & LOCKED
APEX_040200                    Y EXPIRED & LOCKED
APPQOSSYS                      Y EXPIRED & LOCKED
AUDSYS                         Y EXPIRED & LOCKED
CTXSYS                         Y EXPIRED & LOCKED
DBSNMP                         Y EXPIRED & LOCKED
DIP                            Y EXPIRED & LOCKED
DVF                            Y EXPIRED & LOCKED
DVSYS                          Y EXPIRED & LOCKED
FLOWS_FILES                    Y EXPIRED & LOCKED
GSMCATUSER                     Y EXPIRED & LOCKED
GSMUSER                        Y EXPIRED & LOCKED
LBACSYS                        Y EXPIRED & LOCKED
MDDATA                         Y EXPIRED & LOCKED
MDSYS                          Y EXPIRED & LOCKED
OJVMSYS                        Y EXPIRED & LOCKED
OLAPSYS                        Y EXPIRED & LOCKED
ORACLE_OCM                     Y EXPIRED & LOCKED
ORDDATA                        Y EXPIRED & LOCKED
ORDPLUGINS                     Y EXPIRED & LOCKED
ORDSYS                         Y EXPIRED & LOCKED
OUTLN                          Y EXPIRED & LOCKED
SYS                            Y OPEN
SYSBACKUP                      Y EXPIRED & LOCKED
SYSDG                          Y EXPIRED & LOCKED
SYSKM                          Y EXPIRED & LOCKED
SYSTEM                         Y OPEN
WMSYS                          Y EXPIRED & LOCKED
XDB                            Y EXPIRED & LOCKED
XS$NULL                        Y EXPIRED & LOCKED

36 rows selected.


Note that I opened the SCOTT account. This is a dbca-created database based on the General_Purpose.dbc template; in other words it has all the options in it. You might (and should!) see fewer of these system-managed accounts.

The same logic applies to certain other dictionary views as well; I came across ORACLE_MAINTAINED in DBA_ROLES, for example. There might be others, so it is worth checking DICT_COLUMNS for COLUMN_NAME = 'ORACLE_MAINTAINED'.
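A minimal sketch of that dictionary check:

```sql
-- every dictionary view exposing the new flag
select table_name
  from dict_columns
 where column_name = 'ORACLE_MAINTAINED'
 order by table_name;
```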


Trying the playground repository with Oracle Linux

I’m always a great fan of trying new things, and Oracle made this a little easier a while ago. In my early Linux days (SuSE Linux 6.0 was my first distribution; it was still kernel 2.0.36 but “2.2 ready”) compiling the kernel was more manageable than it is today. I even remember “make zImage” instead of the now-default “make bzImage”, and “make menuconfig” before that :) There was less complexity with hardware, and the initrd was more of a “nice to have” than a requirement with the PATA drives in my system.

Now it’s getting increasingly complex thanks to the rapid expansion of the Linux kernel and its features. Also, the vanilla upstream kernel is rather large compared to 2.0.36, requiring a lot of horsepower and patience to build. And if you are making large version jumps your binutils won’t be appropriate, you might need a different glibc, and so on. In short, you have to replace your system’s core with something else: a somewhat frightening experience. Hence this process is not for the uninitiated, and although I’d love to build an LFS (Linux From Scratch) system I simply don’t have the time to do so.

But if you are interested in seeing what’s happening in upstream development you can still do so in Oracle Linux. My system is 6.4 and I wanted to see what the latest kernel looks and feels like. You can get a compiled kernel plus its firmware and modules from Oracle’s public YUM repository.

WARNING Don’t follow these steps on anything but a lab system! Don’t go on and install a new kernel on a dev/test/uat/integration/prod/DR/whatever else box! This is for playing!

OK, that should hopefully deter anyone from doing something silly. My lab VM for testing new things is, as I said before, Oracle Linux 6.4 in a VirtualBox environment. To enable the playground repository you need to edit /etc/yum.repos.d/public-yum-ol6.repo and set the playground section to enabled. Next I did a “yum search kernel” and voilà, there is one! The version is 3.9.9-3.9.y.20130709.ol6.x86_64. From there on it’s child’s play:

[root@labvm1 ~]# yum install kernel

After a little bit of updating the system is ready. Careful though: the new kernel becomes the default kernel to boot from. Check /etc/grub.conf for the new default entry and change it back to the UEK2 kernel just in case. A reboot is needed, and sure enough, I have the option to use the new kernel:
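A quick way to inspect the boot entries; the sed line is a hypothetical example assuming the UEK2 kernel is entry 1 (grub counts entries from 0), so verify against your own grub.conf before running it:

```shell
# list the default entry and the available kernels
grep -E '^(default|title)' /etc/grub.conf

# hypothetical: make entry 1 (here, the UEK2 kernel) the default again
sed -i 's/^default=0/default=1/' /etc/grub.conf
```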

Grub with kernel 3.9.9

Nice. After booting, the system announces the presence of 3.9.9 through a call to uname -r. To me it was great to see the system boot again without a kernel oops or similar; that happened rather often back when I compiled my own kernels :)

CAUTION: this worked well in my VM. If you have third-party kernel modules (RAID controller, storage in general, virtio, etc.) then you may have to compile them from source, if available, as they certainly won’t appear in /lib/modules/$(uname -r). Any guest additions need to be reinstalled or recreated. I found that the AHCI and pata_generic modules loaded successfully and the Ethernet NICs worked in VirtualBox. When I have a little more time I’d like to repeat the test on KVM with libvirt for networking and storage; I wonder if they work out of the box.

Happy testing! I hope to release a few more posts once I have figured out all the cool stuff that’s in the upstream kernel.

I am speaking at AIM & Database Server combined SIG

If you haven’t thought about attending UKOUG’s AIM & Database Server combined SIG, you definitely should! The agenda is seriously good with many well known speakers and lots of networking opportunity. It’s also one of the few events outside an Oracle office. 

I am speaking about how you can use incremental backups to reduce cross-platform transportable tablespace (TTS) downtime:

A little-known MOS note describes how you can potentially reduce downtime by a significant amount when migrating databases between systems of different endianness. Many sites are looking for ways to move their databases from big-endian platforms such as HP-UX, AIX or Solaris/SPARC to an Exadata system running Linux. A new feature called cross-platform incremental backup, in conjunction with the already-known cross-platform transportable tablespaces, makes this possible. When using incrementally rolled-forward backups the source system stays up while the destination database is instantiated. Further incremental backups are then applied to the destination to keep it current, while the source system stays fully available.

At the time of the cut-over only a final incremental backup needs to be transferred to the destination host, which is significantly smaller than the full set normally transferred. The window during which the source tablespaces have to be offline is often a lot smaller than with the traditional TTS process.

Hope to see you there!

Your first steps with the multi-tenancy option

You can’t possibly have avoided all the buzz around the multi-tenancy option until now, so it is probably in order to give you a bit more information about it beyond the marketing slides Oracle has shown you so far.

IMPORTANT: This is a technical article and leaves the pros and cons of the multi-tenancy option to one side. It does the same with the decision to make the CDB a cost option. And it doesn’t advise you on licensing either, so make sure you are appropriately licensed to make use of the features demonstrated in this series.

Series Overview

Since it is nearly impossible to give an overview over Pluggable and Container Databases in one article this is going to be a series of multiple articles. In this part you can read about creating PDBs and where to look for information about them.

  • Part 1 is what you are reading right now, it covers the initial steps with the new database type
  • Part 2 will deal with the various options of creating, plugging and unplugging databases and the like
  • Part 3 will deal with backup and recovery of Container Databases
  • Part 4 will be all about Data Guard and how the new database type affects it
  • And finally in part 5 you can read about how PDBs, CDBs work in a Real Application Cluster.

Prior to part 5 all articles will deal with single instance databases, part 5 obviously requires a RAC build.

If you are interested in more in-depth information about Oracle 12c and consolidation techniques, I would ask you to wait a little longer until my new book is released. It covers all of the above in much more detail and with more examples, while extending the focus to the management of a 12c estate built from the ground up for the new database generation, and its lifecycle from migration to production use.


Increasing the maximum I/O size in Linux

This post is really a quick note to myself to remind me how to bump up the maximum I/O size on Linux. I have been benchmarking a bit lately, and increasing the maximum size of an I/O request from 512 KB to 1,024 KB looked like something worth doing, especially since it’s also done on Exadata :)

So why would it matter? It matters most for Oracle DSS systems, actually. Why? Take ORION for example: although it issues “mindless I/O”, as explained by @flashdba and @kevinclosson, at least it gives me a quick way to observe the size of a typical I/O request. Let me demonstrate:

[oracle@server1 ~]$ $ORACLE_HOME/bin/orion -run oltp -testname foo

So ORION is now busying itself producing numbers based on 8k reads. How do I know? Because I’m watching with collectl in another session:

[oracle@server1 ~]$ perl -sD --iosize --dskfilt sdc
waiting for 1 second sample...

#          <---------reads---------><---------writes---------><--------averages--------> Pct
#Name       KBytes Merged  IOs Size  KBytes Merged  IOs Size  RWSize  QLen  Wait SvcTim Util
sdc          62464      0 7808    8       0      0    0    0       8     1     0      0   63
sdc          63280      0 7910    8       0      0    0    0       8     1     0      0   57
sdc          64048      0 8005    8       0      0    0    0       8     1     0      0   60

Look at the I/O size: 8k, just as you would expect. By the way, ORION is slightly unrealistic in the OLTP case because it uses io_submit and io_getevents instead of pread, but that’s a different story altogether for another blog post.

Now I’d like to run the DSS test. Firing off ORION again, this time with “-run dss” I observe the following:

$ perl -sD --iosize --dskfilt sdc
waiting for 1 second sample...

#          <---------reads---------><---------writes---------><--------averages--------> Pct
#Name       KBytes Merged  IOs Size  KBytes Merged  IOs Size  RWSize  QLen  Wait SvcTim Util
sdc         1995776      0 3898  512       0      0    0    0     512     1     0      0   73
sdc         2089984      0 4082  512       0      0    0    0     512     1     0      0   77
sdc         1976320      0 3860  512       0      0    0    0     512     1     0      0   73

Aha, my I/O size is 512k. That’s good, but not good enough. After what seemed like a very long time digging through the scheduler code and discussing with friends, I found out that the clue is in SYSFS. Here you find lots of parameters determining how the kernel issues I/O against a block device: the I/O scheduler, the queue depth, the sector size (those 4k sectors and Oracle!) and other parameters. Amongst these, the following are relevant to this post:

– max_sectors_kb
– max_hw_sectors_kb

max_sectors_kb defines the maximum I/O size, and it logically cannot exceed max_hw_sectors_kb. I found 1 MB to be the maximum for many spinning disks; my SATA SSD allows more.
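The sketch below shows how I read and changed the values for /dev/sdc. Note that the echo does not survive a reboot, so persist it via a udev rule or rc.local if you need it permanently:

```shell
# hardware limit and current soft limit for the maximum I/O size
cat /sys/block/sdc/queue/max_hw_sectors_kb
cat /sys/block/sdc/queue/max_sectors_kb

# raise the soft limit to 1 MB (must not exceed the hardware limit)
echo 1024 > /sys/block/sdc/queue/max_sectors_kb
```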

Now when I bump that value up to 1,024 KB I get larger I/O sizes with the DSS benchmark:

$ perl -sD --iosize --dskfilt sdc
waiting for 1 second sample...

#          <---------reads---------><---------writes---------><--------averages--------> Pct
#Name       KBytes Merged  IOs Size  KBytes Merged  IOs Size  RWSize  QLen  Wait SvcTim Util
sdc         1927047      0 1882 1024       0      0    0    0    1024     1     0      0   51
sdc         1906688      0 1862 1024       0      0    0    0    1024     1     0      0   51

Thanks also to Frits Hoogland for helping me understand Linux IO better!