I knew about the 12.2 FLEX ASM disk group type from other presenters, but until now – when researching the feature for the upcoming DOAG HA Day – I hadn't been able to appreciate how cool it is. And I really think it is pretty cool and worth sharing! There is a lot to be said about the feature and these tests, which is why I am splitting the material into multiple parts.
Please be aware that this post describes my lab experiments; I have no production experience with FLEX ASM disk groups. As with all new features it might take a while to mature, so test, test, test…
Background
In previous versions of Oracle Grid Infrastructure/Automatic Storage Management (ASM), especially in consolidation environments, certain operations I would have liked to perform were not easily possible. Most properties of a disk group – such as its redundancy level – apply to all files that reside within it. For example, if you wanted normal redundancy for some databases and high redundancy for others, you typically ended up with two "data" disk groups. The same goes for +RECO. I can think of scenarios where "filesystem-like" attributes within a disk group would have been nice. In the build-up to ASM 12.2, Oracle has steadily increased ASM limits to allow for more customisation, and in 12.1 you really could go a little over the top: you could have 63 ASM disk groups in 11.2 and up to 511 in 12.1. Although this should allow for plenty of customisation, it adds a little maintenance overhead.
A more granular option to manage storage came with 12.1 and the introduction of Container Databases (CDBs). As part of the Pluggable Database’s creation, the administrator could specify a pdb_storage_clause as in this very basic example:
SQL> create pluggable database pdb1
  2  admin user someone identified by somepassword
  3  ...
  4  storage (maxsize 200G);

Pluggable database created.
However, if the database was created within a disk group with high redundancy, all files residing in that disk group inherited that property. I couldn't define a PDB with normal redundancy in a high redundancy disk group – at least I wasn't aware of a way to do so in 12.1.
Flex Disk Group
A Flex Disk Group promises fine-grained management of data within the disk group. You can enforce quotas in a disk group (probably most useful at the database level), and you can define properties such as redundancy settings per file type (and also per database or Pluggable Database). In other words, you can now have a disk group containing two databases, of which database 1 uses normal redundancy and database 2 uses high redundancy. If database 2 is a Container Database (CDB), you can even manage settings as far down as the PDB level. A few new concepts need introducing before that can happen. Let's begin with the most essential part: the new Flex ASM disk group. There is a variation on the theme for extended distance clusters, which is out of scope for this article series.
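To give a flavour of what this looks like in practice, here is a sketch based on the 12.2 ALTER DISKGROUP syntax for quota groups and file groups. The quota group name QG1 and the file group name DB1 are hypothetical; a file group for a database is normally created implicitly when that database first stores files in the Flex disk group.

```sql
-- Hedged sketch, 12.2 syntax; the names QG1 and DB1 are illustrative.
-- Create a quota group and cap it at 20 GB.
ALTER DISKGROUP FLEX ADD QUOTAGROUP QG1 SET 'quota' = '20G';

-- Move the database's file group into the quota group.
ALTER DISKGROUP FLEX MODIFY FILEGROUP DB1 SET 'quota_group' = 'QG1';

-- Give this database's data files high redundancy, regardless of
-- what other databases in the same disk group use.
ALTER DISKGROUP FLEX MODIFY FILEGROUP DB1 SET 'datafile.redundancy' = 'HIGH';
```

These statements would be run while connected to the ASM instance.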
In my lab system, I have created a new ASM disk group of flex redundancy. The system I am using is a two-node RAC running 12.2.0.1.170620 on Oracle Linux 7.3 with UEK4. I called the disk group FLEX, here is the command used for its creation:
CREATE DISKGROUP FLEX FLEX REDUNDANCY
DISK 'AFD:FLEX1'
DISK 'AFD:FLEX2'
DISK 'AFD:FLEX3'
DISK 'AFD:FLEX4'
DISK 'AFD:FLEX5'
ATTRIBUTE
  'compatible.asm'   = '12.2.0.1',
  'compatible.rdbms' = '12.2.0.1',
  'compatible.advm'  = '12.2.0.1',
  'au_size'          = '4M';
Note the use of the ASM Filter Driver, which I am testing as part of my lab setup; it is also enabled by default when you install ASM 12.2. Looking at the code example I do realise now that the disk group name is maybe not ideal… The important bits in the example are the use of "FLEX REDUNDANCY", the five implicit failure groups, and the compatibility settings, which need to be 12.2.
The documentation (Automatic Storage Management Administrator's Guide, chapter "Managing Oracle ASM Flex Disk Groups") states that a Flex disk group generally tolerates the loss of 2 failure groups (FGs). The same bullet point then elaborates that at least 5 failure groups are needed to absorb the loss of 2 FGs. The minimum number of FGs within a Flex disk group is 3.
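If you want to verify this on your own system, the disk group type and the number of failure groups can be cross-checked with queries along these lines, run against the ASM instance (output omitted):

```sql
-- Confirm the disk group is of type FLEX ...
select name, type
from   v$asm_diskgroup
where  name = 'FLEX';

-- ... and count its failure groups (at least 3 required,
-- 5 needed to tolerate the loss of 2).
select count(distinct failgroup) as failure_groups
from   v$asm_disk
where  group_number = (
         select group_number from v$asm_diskgroup where name = 'FLEX'
       );
```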
If you now get all excited about this new feature, there is one big caveat: you need a 12.2 RDBMS instance to use it.
This command results in a flurry of activity in the ASM instance. I have captured the output from initiation to completion because it is quite interesting to see what happens in 12.2 ASM when you create a new disk group. Feel free to scroll down past the listing if you aren't interested in the finer details.
SQL> CREATE DISKGROUP FLEX FLEX REDUNDANCY DISK 'AFD:FLEX1' SIZE 10239M
 DISK 'AFD:FLEX2' SIZE 10239M DISK 'AFD:FLEX3' SIZE 10239M
 DISK 'AFD:FLEX4' SIZE 10239M DISK 'AFD:FLEX5' SIZE 10239M
 ATTRIBUTE 'compatible.asm'='12.2.0.1','compatible.rdbms'='12.2.0.1','compatible.advm'='12.2.0.1','au_size'='4M'
NOTE: Assigning number (5,0) to disk (AFD:FLEX1)
NOTE: Assigning number (5,1) to disk (AFD:FLEX2)
NOTE: Assigning number (5,2) to disk (AFD:FLEX3)
NOTE: Assigning number (5,3) to disk (AFD:FLEX4)
NOTE: Assigning number (5,4) to disk (AFD:FLEX5)
2017-07-03 10:38:53.811000 +01:00
NOTE: initializing header (replicated) on grp 5 disk FLEX1
NOTE: initializing header (replicated) on grp 5 disk FLEX2
NOTE: initializing header (replicated) on grp 5 disk FLEX3
NOTE: initializing header (replicated) on grp 5 disk FLEX4
NOTE: initializing header (replicated) on grp 5 disk FLEX5
NOTE: initializing header on grp 5 disk FLEX1
NOTE: initializing header on grp 5 disk FLEX2
NOTE: initializing header on grp 5 disk FLEX3
NOTE: initializing header on grp 5 disk FLEX4
NOTE: initializing header on grp 5 disk FLEX5
NOTE: Disk 0 in group 5 is assigned fgnum=1
NOTE: Disk 1 in group 5 is assigned fgnum=2
NOTE: Disk 2 in group 5 is assigned fgnum=3
NOTE: Disk 3 in group 5 is assigned fgnum=4
NOTE: Disk 4 in group 5 is assigned fgnum=5
GMON updating for reconfiguration, group 5 at 657 for pid 45, osid 25857
NOTE: group 5 PST updated.
NOTE: initiating PST update: grp = 5
GMON updating group 5 at 658 for pid 45, osid 25857
NOTE: set version 0 for asmCompat 12.2.0.1.0 for group 5
NOTE: group FLEX: initial PST location: disks 0000 0001 0002 0003 0004
NOTE: PST update grp = 5 completed successfully
NOTE: cache registered group FLEX 5/0x0A58F009
NOTE: cache began mount (first) of group FLEX 5/0x0A58F009
NOTE: cache is mounting group FLEX created on 2017/07/03 10:38:52
NOTE: cache opening disk 0 of grp 5: FLEX1 label:FLEX1
NOTE: cache opening disk 1 of grp 5: FLEX2 label:FLEX2
NOTE: cache opening disk 2 of grp 5: FLEX3 label:FLEX3
NOTE: cache opening disk 3 of grp 5: FLEX4 label:FLEX4
NOTE: cache opening disk 4 of grp 5: FLEX5 label:FLEX5
* allocate domain 5, valid ? 0
kjbdomatt send to inst 2
NOTE: attached to recovery domain 5
NOTE: cache creating group 5/0x0A58F009 (FLEX)
NOTE: cache mounting group 5/0x0A58F009 (FLEX) succeeded
NOTE: allocating F1X0 (replicated) on grp 5 disk FLEX1
NOTE: allocating F1X0 (replicated) on grp 5 disk FLEX2
NOTE: allocating F1X0 (replicated) on grp 5 disk FLEX3
NOTE: allocating F1X0 on grp 5 disk FLEX1
NOTE: allocating F1X0 on grp 5 disk FLEX2
NOTE: allocating F1X0 on grp 5 disk FLEX3
2017-07-03 10:38:56.621000 +01:00
NOTE: Created Used Space Directory for 1 threads
NOTE: Created Virtual Allocation Locator (1 extents) and Table (5 extents) directories for group 5/0x0A58F009 (FLEX)
2017-07-03 10:39:00.153000 +01:00
NOTE: VAM migration has completed for group 5/0x0A58F009 (FLEX)
NOTE: diskgroup must now be re-mounted prior to first use
NOTE: cache dismounting (clean) group 5/0x0A58F009 (FLEX)
NOTE: messaging CKPT to quiesce pins Unix process pid: 25857, image: oracle@rac122pri1 (TNS V1-V3)
2017-07-03 10:39:01.805000 +01:00
NOTE: LGWR not being messaged to dismount
kjbdomdet send to inst 2
detach from dom 5, sending detach message to inst 2
freeing rdom 5
NOTE: detached from domain 5
NOTE: cache dismounted group 5/0x0A58F009 (FLEX)
GMON dismounting group 5 at 659 for pid 45, osid 25857
GMON dismounting group 5 at 660 for pid 45, osid 25857
NOTE: Disk FLEX1 in mode 0x7f marked for de-assignment
NOTE: Disk FLEX2 in mode 0x7f marked for de-assignment
NOTE: Disk FLEX3 in mode 0x7f marked for de-assignment
NOTE: Disk FLEX4 in mode 0x7f marked for de-assignment
NOTE: Disk FLEX5 in mode 0x7f marked for de-assignment
SUCCESS: diskgroup FLEX was created
NOTE: cache deleting context for group FLEX 5/0x0a58f009
NOTE: cache registered group FLEX 5/0x4718F00C
NOTE: cache began mount (first) of group FLEX 5/0x4718F00C
NOTE: Assigning number (5,0) to disk (AFD:FLEX1)
NOTE: Assigning number (5,1) to disk (AFD:FLEX2)
NOTE: Assigning number (5,2) to disk (AFD:FLEX3)
NOTE: Assigning number (5,3) to disk (AFD:FLEX4)
NOTE: Assigning number (5,4) to disk (AFD:FLEX5)
2017-07-03 10:39:08.161000 +01:00
NOTE: GMON heartbeating for grp 5 (FLEX)
GMON querying group 5 at 663 for pid 45, osid 25857
NOTE: cache is mounting group FLEX created on 2017/07/03 10:38:52
NOTE: cache opening disk 0 of grp 5: FLEX1 label:FLEX1
NOTE: 07/03/17 10:39:07 FLEX.F1X0 found on disk 0 au 10 fcn 0.0 datfmt 1
NOTE: cache opening disk 1 of grp 5: FLEX2 label:FLEX2
NOTE: 07/03/17 10:39:07 FLEX.F1X0 found on disk 1 au 10 fcn 0.0 datfmt 1
NOTE: cache opening disk 2 of grp 5: FLEX3 label:FLEX3
NOTE: 07/03/17 10:39:07 FLEX.F1X0 found on disk 2 au 10 fcn 0.0 datfmt 1
NOTE: cache opening disk 3 of grp 5: FLEX4 label:FLEX4
NOTE: cache opening disk 4 of grp 5: FLEX5 label:FLEX5
NOTE: cache mounting (first) flex redundancy group 5/0x4718F00C (FLEX)
* allocate domain 5, valid ? 0
kjbdomatt send to inst 2
NOTE: attached to recovery domain 5
start recovery: pdb 5, passed in flags x4 (domain enable 0)
validate pdb 5, flags x4, valid 0, pdb flags x204
* validated domain 5, flags = 0x200
NOTE: cache recovered group 5 to fcn 0.0
NOTE: redo buffer size is 512 blocks (2105344 bytes)
NOTE: LGWR attempting to mount thread 1 for diskgroup 5 (FLEX)
NOTE: LGWR found thread 1 closed at ABA 0.11262 lock domain=0 inc#=0 instnum=0
NOTE: LGWR mounted thread 1 for diskgroup 5 (FLEX)
NOTE: setting 11.2 start ABA for group FLEX thread 1 to 2.0
NOTE: LGWR opened thread 1 (FLEX) at fcn 0.0 ABA 2.0 lock domain=5 inc#=12 instnum=1 gx.incarn=1192816652 mntstmp=2017/07/03 10:39:08.437000
NOTE: cache mounting group 5/0x4718F00C (FLEX) succeeded
NOTE: cache ending mount (success) of group FLEX number=5 incarn=0x4718f00c
NOTE: Instance updated compatible.asm to 12.2.0.1.0 for grp 5 (FLEX).
NOTE: Instance updated compatible.asm to 12.2.0.1.0 for grp 5 (FLEX).
NOTE: Instance updated compatible.rdbms to 12.2.0.1.0 for grp 5 (FLEX).
NOTE: Instance updated compatible.rdbms to 12.2.0.1.0 for grp 5 (FLEX).
SUCCESS: diskgroup FLEX was mounted
NOTE: diskgroup resource ora.FLEX.dg is online
SUCCESS: CREATE DISKGROUP FLEX FLEX REDUNDANCY DISK 'AFD:FLEX1' SIZE 10239M
 DISK 'AFD:FLEX2' SIZE 10239M DISK 'AFD:FLEX3' SIZE 10239M
 DISK 'AFD:FLEX4' SIZE 10239M DISK 'AFD:FLEX5' SIZE 10239M
 ATTRIBUTE 'compatible.asm'='12.2.0.1','compatible.rdbms'='12.2.0.1','compatible.advm'='12.2.0.1','au_size'='4M'
2017-07-03 10:39:09.429000 +01:00
NOTE: enlarging ACD to 2 threads for group 5/0x4718f00c (FLEX)
2017-07-03 10:39:11.438000 +01:00
SUCCESS: ACD enlarged for group 5/0x4718f00c (FLEX)
NOTE: Physical metadata for diskgroup 5 (FLEX) was replicated.
adrci>
Quite a bit of activity for just one command… I checked the attributes of the disk group; they don't seem too alien to me:
SQL> select name, value from v$asm_attribute
  2  where group_number = 5
  3  and name not like 'template%';

NAME                           VALUE
------------------------------ ------------------------------
idp.type                       dynamic
idp.boundary                   auto
disk_repair_time               3.6h
phys_meta_replicated           true
failgroup_repair_time          24.0h
thin_provisioned               FALSE
preferred_read.enabled         FALSE
sector_size                    512
logical_sector_size            512
content.type                   data
content.check                  FALSE
au_size                        4194304
appliance._partnering_type     GENERIC
compatible.asm                 12.2.0.1.0
compatible.rdbms               12.2.0.1.0
compatible.advm                12.2.0.1.0
cell.smart_scan_capable        FALSE
cell.sparse_dg                 allnonsparse
access_control.enabled         FALSE
access_control.umask           066
scrub_async_limit              1
scrub_metadata.enabled         FALSE

22 rows selected.
I didn't specify anything about failure groups in the create disk group command, hence I get five of them, one per disk:
SQL> select name, os_mb, failgroup, path from v$asm_disk where group_number = 5;

NAME            OS_MB FAILGROUP                      PATH
---------- ---------- ------------------------------ --------------------
FLEX1           10239 FLEX1                          AFD:FLEX1
FLEX2           10239 FLEX2                          AFD:FLEX2
FLEX3           10239 FLEX3                          AFD:FLEX3
FLEX4           10239 FLEX4                          AFD:FLEX4
FLEX5           10239 FLEX5                          AFD:FLEX5
The result is the new disk group, a new part of my existing lab setup. As you can see in the following output, I went with a separate disk group for the Grid Infrastructure Management Repository (GIMR), named +MGMT. In addition I have a disk group named +OCR which (surprise!) I use for OCR and voting files, plus the usual suspects, +DATA and +RECO. Except for +FLEX, all of these are disk group types that have been available forever.
[oracle@rac122pri1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     20476    17068                0           17068              0             N  DATA/
MOUNTED  FLEX    N         512             512   4096  4194304     51180    50676                0               0              0             N  FLEX/
MOUNTED  EXTERN  N         512             512   4096  4194304     40956     6560                0            6560              0             N  MGMT/
MOUNTED  NORMAL  N         512             512   4096  4194304     15348    14480             5116            4682              0             Y  OCR/
MOUNTED  EXTERN  N         512             512   4096  4194304     15356    15224                0           15224              0             N  RECO/
[oracle@rac122pri1 ~]$
The values of 0 in req_mir_free_mb and usable_file_mb for +FLEX aren't bugs; they are documented to be empty for this type of disk group. You need to consult other entities – to be covered in the next posts – to check the space usage for your databases.
Wrap Up Part 1
Flex ASM disk groups are hugely interesting and worth keeping an eye on when you are testing Oracle 12.2. I admit that 12.2 is still quite new, and the cautious person that I am won't use it for production until the first major patch set comes out and I have tested it to death. I am also curious how the use of the ASM Filter Driver will change the way I work with ASM. It might be similar to ASMLib, but I have yet to work that out.
In the next part(s) I will cover additional concepts you need to understand in the context of Flex ASM disk groups.