Creating a new disk group for use with ASM Filter Driver on the command line in Oracle 19c

In my previous post I shared my surprise when I learned that calling the 19c gridSetup.sh for use with Oracle ASM Filter Driver (ASMFD) required me to specify the names of the native block devices. This is definitely different from installing ASM with ASMLib, where you pass ASM disks as “ORCL:diskname” to the installer.

Um, that’s great, but why did I write this post? Well, once the installation/configuration steps are complete, you will most likely need to create at least a second disk group. In my case that’s going to be RECO, for use with the Fast Recovery Area (FRA). This post details the necessary steps to get there, as they differ from the initial call to gridSetup.sh.

And while I might sound like a broken record, I would like to remind you that I’m not endorsing ASM Filter Driver. I merely found the documentation unclear in some places, and this post hopes to clarify certain aspects around the use of ASMFD. Please remember that ASMFD is new-ish technology and it’s up to every user to apply industry best practices to ensure everything works as expected.

My environment

The lab environment hasn’t changed: I’m still using the same Oracle Linux 7.7 KVM VM I prepared for the last post. Storage is still made accessible via the virtio driver, and the VM boots into the Red Hat Compatible Kernel.

Previously I installed the base release, Oracle Restart 19.3.0. Since the base release was made available, quite a few issues have been addressed in later Release Updates (RUs). To keep up with the latest fixes, my system has since been patched to 19.6.0, the current RU at the time of writing.

Creating a new disk group

Since I’m using Ansible for most things these days I had to come up with a straightforward method of creating a disk group. ASM has shipped with the ASM Configuration Assistant (asmca) for quite a while now, and it can be used to create a disk group in a single, simple call (link to documentation). I could of course have created the disk group in sqlplus, but that would have required a lot more typing, and I’m inherently lazy.

Unlike the initial call to gridSetup.sh where you pass native block devices along with a request to configure ASMFD, the steps for creating the disk group require you to label the disks beforehand. This is pretty trivial, and more importantly, easy to automate with Ansible.

Labeling the disks

As per my earlier post, I’m planning on using /dev/vde1 and /dev/vdf1 for RECO. The first step is to label the disks. The call is similar to the one you read about earlier:

[root@server4 ~]# . oraenv
ORACLE_SID = [+ASM] ? +ASM
The Oracle base remains unchanged with value /u01/app/grid
[root@server4 ~]# asmcmd afd_label RECO1 /dev/vde1
[root@server4 ~]# asmcmd afd_label RECO2 /dev/vdf1
[root@server4 ~]# asmcmd afd_lslbl
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA1                                 /dev/vdc1
DATA2                                 /dev/vdd1
RECO1                                 /dev/vde1
RECO2                                 /dev/vdf1
[root@server4 ~]#  

Note the absence of the “--init” flag when invoking asmcmd afd_label. The way I understand it, this flag is used only during the initial installation.
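Because the labeling step is nothing more than a couple of asmcmd calls, it lends itself nicely to scripting, whether in plain shell or wrapped in an Ansible command task. The following is a minimal shell sketch of the idea; the label-to-device pairs are examples and need to be adapted to your system, and it assumes the grid environment has already been set via oraenv as shown above:

# label each block device with its intended ASMFD label
for pair in RECO1:/dev/vde1 RECO2:/dev/vdf1; do
    label="${pair%%:*}"      # text before the colon: the ASMFD label
    device="${pair#*:}"      # text after the colon: the block device
    asmcmd afd_label "${label}" "${device}"
done

# confirm the labels are visible
asmcmd afd_lslbl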

Creating the disk group

Once the disks are labeled, you can create the disk group. Using the documentation reference I shared earlier, I ended up with this call to asmca:

[grid@server4 ~]$ asmca -silent \
> -createDiskGroup -diskGroupName RECO \
> -disk 'AFD:RECO*' -redundancy EXTERNAL \
> -au_size 4 -compatible.asm 19.0.0 -compatible.rdbms 19.0.0

[DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-200402AM115509.log for details.

[grid@server4 ~]$  

Thanks to ASMFD I don’t have to specify individual disks: I can simply tell it to use all disks whose label matches RECO* (RECO1 and RECO2 in this example). The actual number of ASM disks doesn’t matter with this call, which again helps me automate the process.

Since this environment is exclusively used for Oracle 19c I can safely set the compatibility to 19c for both ASM and the database. Refer to the ASM documentation for further information about the disk group compatibility attributes.
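For completeness, the sqlplus route I mentioned earlier would look roughly like the sketch below. It is not taken from this system, merely an illustration of why asmca means less typing; it assumes the grid owner’s environment points at the ASM instance and reuses the same disk labels and attributes as the asmca call above:

sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP RECO EXTERNAL REDUNDANCY
  DISK 'AFD:RECO*'
  ATTRIBUTE 'au_size'          = '4M',
            'compatible.asm'   = '19.0.0',
            'compatible.rdbms' = '19.0.0';
EOF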

Verification

The output of the command indicates success, so let’s have a look at the ASM configuration:

SQL> select d.name, d.path, d.library, dg.name
  2  from v$asm_disk d left join v$asm_diskgroup dg on (dg.group_number = d.group_number)
  3  where dg.name = 'RECO';

NAME       PATH            LIBRARY                                                      NAME
---------- --------------- ------------------------------------------------------------ ----------
RECO1      AFD:RECO1       AFD Library - Generic , version 3 (KABI_V3)                  RECO
RECO2      AFD:RECO2       AFD Library - Generic , version 3 (KABI_V3)                  RECO

SQL> select d.name, dg.name, dg.compatibility, dg.database_compatibility
  2  from v$asm_disk d left join v$asm_diskgroup dg on (dg.group_number = d.group_number)
  3  where dg.name = 'RECO';

NAME       NAME       COMPATIBILITY        DATABASE_COMPATIBILI
---------- ---------- -------------------- --------------------
RECO1      RECO       19.0.0.0.0           19.0.0.0.0
RECO2      RECO       19.0.0.0.0           19.0.0.0.0 

This seems to have worked. I can also see the disk groups registered in Clusterware:

[grid@server4 ~]$ crsctl stat res -t -w "TYPE == ora.diskgroup.type"
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       server4                  STABLE
ora.RECO.dg
               ONLINE  ONLINE       server4                  STABLE
--------------------------------------------------------------------------------
[grid@server4 ~]$
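If you prefer asmcmd and srvctl over SQL for a quick sanity check, the commands below provide much the same information at a glance (their output is omitted here; the -diskgroup syntax assumes a recent release such as 19c):

# disk group summary as seen by the ASM instance
asmcmd lsdg RECO

# Clusterware's view of the disk group resource
srvctl status diskgroup -diskgroup RECO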