12.2 New Feature: the FLEX ASM disk group part 5

Some time ago I had a very interesting Twitter conversation after publishing the first part of this series. The question was whether ASM templates, which admittedly have been around since Oracle 10.1, don't already provide functionality similar to Flex Disk Groups. In other words, wouldn't using ASM templates allow you to have high redundancy files on normal redundancy disk groups anyway?

This question was answered by Alex Fatkulin in a blog post some time ago. In this post I would like to replay his test on my 12.2 setup. Initially I had hoped to compare the template approach with the Flex ASM Disk Group directly, but the post has become too long again … The actual comparison will follow in the next instalment of the series.

Templates

You may not be aware of the fact that you are using ASM templates, but you are. Each disk group has a set of system-generated, common ASM templates. Consider this example from my current lab environment (Oracle 12.2.0.1). Queries and commands are executed as SYSASM while connected to the ASM instance unless stated otherwise:

SQL> select b.name, a.redundancy, a.stripe, a.system
  2   from v$asm_template a, v$asm_diskgroup b
  3  where a.group_number = b.group_number
  4    and a.name = 'DATAFILE';

NAME                           REDUND STRIPE S
------------------------------ ------ ------ -
DATA                           MIRROR COARSE Y
MGMT                           UNPROT COARSE Y
OCR                            MIRROR COARSE Y
RECO                           UNPROT COARSE Y
FLEX                           MIRROR COARSE Y

Templates, among other disk group metadata, define how a supported file type is stored in ASM using two criteria (a quick check of their possible values follows below):

  • Mirroring
  • Striping
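
A quick way to see which values these two attributes take in a given environment is to query v$asm_template directly. This is just a sketch; the exact set of values (and hence the output) depends on the disk groups present:

SQL> select distinct redundancy, stripe
  2    from v$asm_template
  3   order by redundancy, stripe;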

You can see there are plenty of templates, one for each supported ASM file type:

SQL> select b.name as dg_name, a.name as template_name, a.system
  2    from v$asm_template a, v$asm_diskgroup b
  3  where a.group_number = b.group_number
  4   and b.name = 'DATA';

DG_NAME 		       TEMPLATE_NAME		      S
------------------------------ ------------------------------ -
DATA			       PARAMETERFILE		      Y
DATA			       ASMPARAMETERFILE 	      Y
DATA			       OCRFILE			      Y
DATA			       DATAGUARDCONFIG		      Y
DATA			       AUDIT_SPILLFILES 	      Y
DATA			       AUTOLOGIN_KEY_STORE	      Y
DATA			       KEY_STORE		      Y
DATA			       FLASHBACK		      Y
DATA			       CHANGETRACKING		      Y
DATA			       XTRANSPORT		      Y
DATA			       AUTOBACKUP		      Y
DATA			       INCR XTRANSPORT BACKUPSET      Y
DATA			       XTRANSPORT BACKUPSET	      Y
DATA			       BACKUPSET		      Y
DATA			       TEMPFILE 		      Y
DATA			       DATAFILE 		      Y
DATA			       ONLINELOG		      Y
DATA			       ARCHIVELOG		      Y
DATA			       FLASHFILE		      Y
DATA			       CONTROLFILE		      Y
DATA			       DUMPSET			      Y
DATA			       VOTINGFILE		      Y

22 rows selected.

Does that bear any resemblance to v$asm_filegroup_property? It does for me. The difference is that within a Flex ASM Disk Group the properties are defined per File Group, and there are separate file groups per (N)CDB or PDB. With the other ASM disk group types the mapping is global.
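
For comparison, this is roughly how you would look at the equivalent, per-File-Group information in a Flex ASM Disk Group. The query below is only a sketch against the FLEX disk group from my lab; the file groups (and therefore the output) depend on the databases created in the earlier parts of this series:

SQL> select fg.name as filegroup_name, p.file_type, p.name, p.value
  2    from v$asm_filegroup fg, v$asm_filegroup_property p
  3   where fg.group_number = p.group_number
  4     and fg.filegroup_number = p.filegroup_number
  5     and p.file_type = 'DATAFILE'
  6   order by fg.name, p.name;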

Custom Templates

According to the ASM documentation (Automatic Storage Management Administrator's Guide 12c Release 2, chapter 5, Administering Oracle ASM Files, Directories, and Templates), a template can be used to define attributes for file types.

If there’s a column named SYSTEM in v$asm_template, there surely is a way to create one’s own templates. And this is where I circle back to the original question: can I have high-redundancy files in a normal redundancy disk group?

You sure can! I will use the DATA disk group for this, which was created with NORMAL redundancy. Here is some useful background information:

SQL> select group_number, name, type, compatibility, database_compatibility
  2  from v$asm_diskgroup;

GROUP_NUMBER NAME       TYPE   COMPATIBILITY   DATABASE_COMPAT
------------ ---------- ------ --------------- ---------------
           1 DATA       NORMAL 12.2.0.1.0      12.2.0.1.0
           2 MGMT       EXTERN 12.2.0.1.0      10.1.0.0.0
           3 OCR        NORMAL 12.2.0.1.0      10.1.0.0.0
           4 RECO       EXTERN 12.2.0.1.0      10.1.0.0.0
           5 FLEX       FLEX   12.2.0.1.0      12.2.0.1.0

SQL> select count(*) from v$asm_disk where group_number = 
  2   (select group_number from v$asm_diskgroup where name = 'DATA');

  COUNT(*)
----------
         3
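
The individual disks behind the DATA disk group can be listed as well; this becomes relevant further down when two of them are removed. A quick check (the paths are specific to this lab, output omitted):

SQL> select name, path, os_mb
  2    from v$asm_disk
  3   where group_number =
  4         (select group_number from v$asm_diskgroup where name = 'DATA');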

Each new datafile on the DATA disk group will be created based on the default template:

SQL> select b.name as dg_name, a.redundancy, a.stripe, a.system
  2  from v$asm_template a, v$asm_diskgroup b
  3  where a.group_number = b.group_number
  4   and a.name = 'DATAFILE'
  5   and b.name = 'DATA';

DG_NAME                        REDUND STRIPE S
------------------------------ ------ ------ -
DATA                           MIRROR COARSE Y

In other words, each extent is mirrored, and the striping is coarse. Again, I won’t be touching the striping mechanism as explained in an earlier post.

To enable high redundancy another template must be created, which is simple:

SQL> alter diskgroup data add template high_red_on_normal_dg attribute (high);

Diskgroup altered.

SQL> select b.name as dg_name, a.redundancy, a.stripe, a.system
  2  from v$asm_template a, v$asm_diskgroup b
  3  where a.group_number = b.group_number
  4   and a.name = 'HIGH_RED_ON_NORMAL_DG'
  5   and b.name = 'DATA'
  6  /

DG_NAME                        REDUND STRIPE S
------------------------------ ------ ------ -
DATA                           HIGH   COARSE N
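
As an aside, custom templates can be dropped again using documented syntax. I am not executing this here because the template is needed in the next step, but for reference it would look like this:

SQL> alter diskgroup data drop template high_red_on_normal_dg;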

Back in the RDBMS instance, I can now make use of that template:

SQL> create tablespace high_red_tbs 
  2  datafile '+data(high_red_on_normal_DG)' size 50m;

Tablespace created.

SQL> select name from v$datafile where name like '%high_red_tbs%';

NAME
----------------------------------------------------------------------------------
+DATA/CDB/586EF9CC43B5474DE0530A64A8C0F287/DATAFILE/high_red_tbs.286.953971189

The question is: is this file created with high redundancy?

SQL> select redundancy, type, remirror, redundancy_lowered
  2  from v$asm_file where file_number = 286 and incarnation = 953971189;

REDUND TYPE            R R
------ --------------- - -
HIGH   DATAFILE        N U

That looks like a yes to me. Looking at a different, randomly picked file from the same disk group shows that other files still use normal redundancy:

SQL> select name from v$asm_alias 
  2   where file_number = 261 and file_incarnation = 953928133;

NAME
------------------------------------------------------------
UNDO_2.261.953928133

SQL> select redundancy, type, remirror, redundancy_lowered
  2  from v$asm_file where file_number = 261 and incarnation = 953928133;

REDUND TYPE            R R
------ --------------- - -
MIRROR DATAFILE        N U
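
Another way of verifying the difference between the two files, purely as a lab exercise, is to count the allocation units per virtual extent in the undocumented x$kffxp view (while connected to the ASM instance). Using group number 1 and the two file numbers from above, the high redundancy file (286) should show three copies per extent and the mirrored file (261) two; output omitted as it is rather long:

SQL> select number_kffxp as file#, xnum_kffxp as extent#, count(*) as copies
  2    from x$kffxp
  3   where group_kffxp = 1
  4     and number_kffxp in (261, 286)
  5   group by number_kffxp, xnum_kffxp
  6   order by number_kffxp, xnum_kffxp;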

But does it help?

Now I have high redundancy files in a normal redundancy disk group, which gives me extra protection from disk corruption. From an availability point of view you don't win much though, as Alex has already pointed out. Removing 2 of the 3 disks that make up the DATA disk group should result in a dismount of the disk group (something a true high redundancy disk group would survive). Here is the proof.

The disk failures are visible in many places, for example in /var/log/messages:

Sep  6 10:42:37 rac122pri1 kernel: sd 3:0:0:0: [sdg] Synchronizing SCSI cache
Sep  6 10:42:37 rac122pri1 kernel: sd 3:0:0:0: [sdg] Synchronize Cache(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Sep  6 10:42:37 rac122pri1 kernel: sd 3:0:0:0: [sdg] Sense Key : Illegal Request [current]
Sep  6 10:42:37 rac122pri1 kernel: sd 3:0:0:0: [sdg] Add. Sense: Logical unit not supported
Sep  6 10:42:41 rac122pri1 kernel: sd 4:0:0:2: [sdl] Synchronizing SCSI cache
Sep  6 10:42:41 rac122pri1 kernel: sd 4:0:0:2: [sdl] Synchronize Cache(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Sep  6 10:42:41 rac122pri1 kernel: sd 4:0:0:2: [sdl] Sense Key : Illegal Request [current]
Sep  6 10:42:41 rac122pri1 kernel: sd 4:0:0:2: [sdl] Add. Sense: Logical unit not supported

And the ASM instance’s alert.log:

...
ERROR: no read quorum in group: required 1936606968, found 1937207795 disks
ERROR: Could not read PST for grp 1. Force dismounting the disk group.
NOTE: detected orphaned client id 0x10001.
2017-09-06 10:42:45.101000 +01:00
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_2730.trc:
ORA-15130: diskgroup "" is being dismounted
GMON dismounting group 1 at 96 for pid 37, osid 32385
NOTE: Disk DATA_0000 in mode 0x1 marked for de-assignment
NOTE: Disk DATA_0001 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0002 in mode 0x7f marked for de-assignment
SUCCESS: diskgroup DATA was dismounted
SUCCESS: alter diskgroup DATA dismount force /* ASM SERVER:2133858021 */
SUCCESS: ASM-initiated MANDATORY DISMOUNT of group DATA
NOTE: diskgroup resource ora.DATA.dg is offline

It is truly gone:

SQL> select group_number, name, state from v$asm_diskgroup where name = 'DATA';

GROUP_NUMBER NAME                           STATE
------------ ------------------------------ -----------
           0 DATA                           DISMOUNTED
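
The same can be seen on the command line: asmcmd lsdg lists only mounted disk groups, so DATA should no longer appear in its output (run as the Grid Infrastructure owner with the ASM environment set):

$ asmcmd lsdg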

Summary: ASM Templates

When I initially worked out how to use custom templates to create high redundancy files in a normal redundancy disk group, I was quite excited. During the disk failure test, however, that excitement gave way to a more rational assessment of the situation.

So while you might gain on data integrity, you lose on storage (triple mirroring requires more space) and gain nothing when it comes to availability.
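
To put a number on the storage overhead, one could have compared BYTES (the file size) with SPACE (the space allocated in the disk group including all mirror copies) in v$asm_file before the disk group was dismounted. A sketch using the group and file numbers from this post; SPACE should be roughly three times BYTES for the high redundancy file and roughly twice BYTES for the mirrored one:

SQL> select file_number, redundancy, bytes, space
  2    from v$asm_file
  3   where group_number = 1
  4     and file_number in (261, 286);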

In the next post I’ll repeat this test with my FLEX ASM Disk Group.
