Flex ASM 12c in action

I have been working with Flex ASM for a little while and so far I like the feature. In a nutshell, Flex ASM removes the requirement to have one (and only one!) ASM instance per cluster node. Instead, you get three ASM instances by default, regardless of cluster size; this cardinality can be changed.

Databases, which are in fact “ASM clients” as recorded in v$asm_client, connect either locally or remotely, depending on the placement of the ASM instances relative to the database instances.

What’s great about Flex ASM is that a crash of an ASM instance no longer pulls down the database instances on the same host. Imagine a database-as-a-service environment on, say, an X3-8. This machine comes with 8 sockets, 80 cores and 160 threads, 2 TB of memory, plus 8 internal disks for the Oracle binaries; that is a lot of equipment to consolidate your databases onto. Now imagine what happens if your ASM instance crashes … or better, don’t. Enter Flex ASM! You can choose the ASM mode during the installation of Grid Infrastructure 12c or enable it later. In the example below it was enabled at installation time.

A warning: if you want to use Flex ASM and have pre-12c databases on the cluster, you can’t (really). Although you will be running Flex ASM, you _must_ have an ASM instance on each host on which you intend to run a pre-12c RDBMS instance.
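If Grid Infrastructure was installed with standard ASM, the conversion to Flex ASM can be done afterwards with asmca in silent mode. A sketch only: the interface name and subnet below are placeholders for a dedicated ASM network, and the command is printed rather than executed so it can be reviewed first.

```shell
# Sketch: convert an existing standard-ASM 12c cluster to Flex ASM.
# eth2/192.168.1.0 is a placeholder for the dedicated ASM network;
# run the printed command as the Grid Infrastructure owner on one node.
cmd='asmca -silent -convertToFlexASM -asmNetworks eth2/192.168.1.0 -asmListenerPort 1521'
echo "$cmd"
```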

In the below example I have created a “Normal” 12c cluster with Flex ASM enabled:

[grid@server1 ~]$ asmcmd showclusterstate
Normal

[grid@server1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled

Because I’m curious, I set aside a dedicated network for ASM traffic. I would think this makes sense for non-Exadata deployments with GBit Ethernet as well. Anyway, the network information is recorded in the GPnP profile, which I can query with oifcfg:

[grid@server1 ~]$ oifcfg getif
eth0  global  public
eth1  global  cluster_interconnect
eth2  global  asm
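The classification above can also be picked out mechanically. A minimal sketch, using the sample output shown above; on a live cluster you would pipe `oifcfg getif` in directly:

```shell
# Sample output in the style of `oifcfg getif` on this cluster.
oifcfg_output='eth0  global  public
eth1  global  cluster_interconnect
eth2  global  asm'

# Pick the interface whose role is "asm".
asm_if=$(printf '%s\n' "$oifcfg_output" | awk '$3 == "asm" { print $1 }')
echo "ASM network interface: $asm_if"
```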

In production deployments I’d assume port aggregation is in use, but this is a lab environment using KVM. As an interesting side effect, Clusterware will create a listener on the ASM network. I haven’t yet tested a combined private/ASM network, but I assume there’d be a listener in that case too, as shown in the output of crsctl:

[grid@server3 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       server1               STABLE
               ONLINE  ONLINE       server2               STABLE
               ONLINE  ONLINE       server3               STABLE

Since my cluster consists of three nodes, they will all have an ASM instance, as per the definition of Flex ASM:

[root@server1 usr]# srvctl config asm
ASM home: /u01/app/
Password file: +OCR/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

To make the example work, I reduced the ASM instance count by one before starting the ORCL database; with an ASM instance on every node, all connections would have been local.

[grid@server1 ~]$ srvctl modify asm -count 2
[grid@server1 ~]$ srvctl status asm
ASM is running on server3,server1
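With the cardinality reduced to two, one node is left without a local ASM instance. A small sketch, using the node names from this cluster, that works out which database instances must connect remotely:

```shell
# Node names from this lab cluster; asm_nodes mirrors the
# `srvctl status asm` output shown above.
cluster_nodes="server1 server2 server3"
asm_nodes="server3,server1"

for node in $cluster_nodes; do
  case ",$asm_nodes," in
    *",$node,"*) echo "$node: local ASM instance" ;;
    *)           echo "$node: remote ASM connection required" ;;
  esac
done
```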

My cluster database spans all three nodes, so I’m assuming the instance on node 2 will have to connect remotely. And sure enough it does, as shown by its alert.log:

NOTE: ASMB registering with ASM instance as client 0xffffffffffffffff (reg:89854640)
NOTE: ASMB connected to ASM instance +ASM1 (Flex mode; client id 0x10007)

The ASM instance records this in its alert.log as well, and in its internal views:

SQL> select dg.name,c.instance_name,c.db_name,c.status
  2  from v$asm_diskgroup dg, v$asm_client c
  3  where dg.group_number = c.group_number
  4 and c.db_name = 'ORCL'
  5 /

NAME       INSTANCE_N DB_NAME  STATUS
---------- ---------- -------- ------------

The alert.log of +ASM1 shows the successful connection of the database client:

NOTE: Flex client id 0x0 [ORCL2:ORCL] attempting to connect
NOTE: registered owner id 0x10007 for ORCL2:ORCL
NOTE: Flex client ORCL2:ORCL registered, osid 11795, mbr 0x0 (reg:89854640)

Now what happens if I kill the smon process of +ASM3? In theory, its database clients should reconnect to one of the surviving ASM instances. First of all, ASM has to perform instance recovery for +ASM3. It’s great news that the various steps performed during this process are finally detailed in the logs! Checking v$asm_client on +ASM1 afterwards shows all three database instances as clients:

NAME       INSTANCE_N DB_NAME  STATUS
---------- ---------- -------- ------------
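The crash test itself amounts to finding and signalling the SMON background process of +ASM3. A sketch against a sample process line (the pid is made up); on the cluster node you would feed real `ps -ef` output in instead:

```shell
# Sample line in the style of `ps -ef` output; the pid 12345 is invented.
ps_line='grid  12345      1  0 10:02 ?        00:00:00 asm_smon_+ASM3'

# Extract the pid of the +ASM3 SMON background process.
smon_pid=$(printf '%s\n' "$ps_line" | awk '/asm_smon_\+ASM3/ { print $2 }')
echo "would run: kill -9 $smon_pid"
```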

A short while later, after Clusterware restarted +ASM3, the entry for the third database instance disappears from +ASM1 again: instance ORCL3 has migrated back to the ASM instance it originally connected to.
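The failback above happened automatically, but a client can also be moved by hand: 12c ASM introduces ALTER SYSTEM RELOCATE CLIENT for this. A sketch, assuming the client id format (instance_name:db_name) seen in the alert.log entries above:

```sql
-- Executed on the ASM instance currently serving the client;
-- the client id is 'instance_name:db_name' as reported in the alert.log.
ALTER SYSTEM RELOCATE CLIENT 'ORCL3:ORCL';
```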


Initial tests of Flex ASM show that the solution has potential. Stress-testing the feature in various failure scenarios will show whether the value proposition holds up.




7 thoughts on “Flex ASM 12c in action”

  1. Pavan Gilda

    In my Flex ASM setup I see that if one of the ASM instances fails, the database connects to another, remote ASM instance, which is fine. What I find strange is that gv$asm_client is not updated with the new status (and instance name).

    1. Martin Bach Post author

      Interesting; might be a regression? The test I did was with the base release. What’s your version?

      1. Pavan Gilda

        SQL> select * from v$version;

        Oracle Database 12c Enterprise Edition Release – 64bit Production
        PL/SQL Release – Production
        CORE Production
        TNS for Linux: Version – Production
        NLSRTL Version – Production

      2. Martin Bach Post author

        Hi there,

        that wouldn’t list the patches (PSU) that you applied. If you know you didn’t, then you are on the base release. Otherwise opatch lsinv will list the patches applied.

Comments are closed.