As promised in the earlier post, here are my notes about installing Oracle Restart 19c on Oracle Linux 7.7 using the Red Hat Compatible Kernel (RHCK). Please also consult the ACFS/ASMFD compatibility matrix, My Oracle Support DocID 1369107.1, for the latest information about ASMFD compatibility with the various kernels.
Why am I starting the series with a seemingly “odd” kernel, at least from the point of view of Oracle Linux? If you try to install the Oracle Restart base release with UEK 5, you get strange error messages back from gridSetup telling you about invalid ASM disks. While that’s probably true, it’s a secondary error. The main cause of the problem is this:
[root@server5 bin]# ./afddriverstate supported
AFD-620: AFD is not supported on this operating system version: '4.14.35-1902.300.11.el7uek.x86_64'
AFD-9201: Not Supported
AFD-9294: updating file /etc/sysconfig/oracledrivers.conf
This is easy to run into, since gridSetup.sh doesn't validate ASMFD support for you when running in silent mode. The GUI version of the installer protects you from the mistake, though. Upgrading to the latest UEK 5 doesn't change the message: the certification matrix tells you that Oracle Restart 19.4.0 or later is required for UEK 5 if you'd like to use ASMFD (or ACFS, for that matter). This scenario will be covered in a later post.
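If you script your installations, a quick pre-flight check along these lines saves you from launching a silent install on an unsupported kernel. This is only a minimal sketch, reusing the grid home path from later in this post and grepping for the AFD-9200 message shown below:

cd /u01/app/grid/product/19.0.0/grid/bin
if ./afddriverstate supported | grep -q 'AFD-9200'; then
    echo "ASMFD supported on $(uname -r)"
else
    echo "ASMFD not supported on $(uname -r); do not set configureAFD=true"
fi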
Using the Red Hat Compatible Kernel avoids this problem for me. Just be aware of the usual caveats when using the Red Hat kernel on Oracle Linux, such as yum changing the default kernel back during an upgrade. I'd also like to reiterate that this post isn't an endorsement of ASM Filter Driver, but since the documentation was a little unclear I thought I'd write up how I got to a working installation. It is up to you to ensure that ASMFD is a workable solution for your environment by following industry best practices.
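To keep RHCK the default boot kernel I use something along these lines. This is a sketch that assumes the stock Oracle Linux 7 layout of /etc/sysconfig/kernel; the vmlinuz path is simply the RHCK from my lab:

# make newly installed RHCK packages the default instead of UEK
sed -i 's/^DEFAULTKERNEL=.*/DEFAULTKERNEL=kernel/' /etc/sysconfig/kernel

# point GRUB at the currently installed RHCK and verify
grubby --set-default /boot/vmlinuz-3.10.0-1062.18.1.el7.x86_64
grubby --default-kernel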
Configuration Options
In the post introducing this series I claimed to have identified two options for installing Oracle Restart 19c using ASMFD: the first is to use UDEV to prepare the ASM block devices; the second is to label the ASM disks using asmcmd afd_label.
Huh, UDEV? That hasn’t really been blogged about at all in the context of ASMFD, or at least I didn’t find anyone who did. I’m inferring the possibility of using UDEV from “Configuring Oracle ASM Filter Driver During Installation” (link to documentation):
“If you do not use udev on the system where the Oracle Grid Infrastructure is installed, then you can also complete the following procedure to provision disks for Oracle ASMFD before the installer is launched”
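In other words, if UDEV already takes care of ownership, permissions and device name persistence, the labelling step described below isn't strictly required. Since the UDEV route isn't covered in this post, here is only a minimal sketch of what such a rule could look like, based on the virtio partitions and the asmadmin group from my lab (for real hardware you would match on a stable identifier such as the WWN instead of the kernel device name):

# /etc/udev/rules.d/99-oracle-asm.rules (hypothetical example)
KERNEL=="vdc1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="vdd1", OWNER="grid", GROUP="asmadmin", MODE="0660"

# reload and re-apply the rules
udevadm control --reload-rules && udevadm trigger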
You actually only have to choose one of them. Let’s start with the more frequently covered approach of labelling disks using asmcmd.
My environment
I have applied all patches available up to March 26th to my lab environment. The Oracle Linux release I'm using is 7.7:
[root@server4 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.7
The KVM VM I'm using for this blog post runs the latest Red Hat Compatible Kernel at the time of writing (kernel-3.10.0-1062.18.1.el7.x86_64). You will notice that I'm using the virtio driver, leading to "strange" device names: instead of /dev/sd* it's /dev/vd*. My first two block devices are reserved for the O/S and Oracle; the remaining ones are going to be used for ASM. I have an old (bad?) habit of partitioning block devices for ASM, as you might notice. Most of the operating system preparation is done by the 19c preinstall RPM, which I used.
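For reference, this is all it takes to pull in the preinstall RPM on Oracle Linux 7; it sets kernel parameters and resource limits and creates the oracle account:

yum install -y oracle-database-preinstall-19c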
I created a grid owner – grid – to own the Oracle Restart installation. Quite a few blog posts I came across used separate ASM-related group memberships for the grid owner, and I'd like to do the same:
[root@server4 ~]# id -a grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54328(asmadmin),54327(asmdba)
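A sketch of how such an account could be created; the numeric IDs simply mirror the id output above, and the groupadd calls assume the ASM groups don't exist yet on your system:

# create the ASM role separation groups (skip any that already exist)
groupadd -g 54327 asmdba
groupadd -g 54328 asmadmin

# the grid owner, with oinstall as its primary group
useradd -u 54322 -g oinstall -G dba,asmdba,asmadmin grid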
The block devices I’m intending to use for ASM are /dev/vdc to /dev/vdf – the first 2 are intended for +DATA, the other 2 will become part of +RECO. As you can see they are partitioned:
[root@server4 ~]# lsblk --ascii
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdf                   251:80   0   10G  0 disk
`-vdf1                251:81   0   10G  0 part
vdd                   251:48   0   10G  0 disk
`-vdd1                251:49   0   10G  0 part
vdb                   251:16   0   50G  0 disk
`-vdb1                251:17   0   50G  0 part
  `-oraclevg-orabinlv 252:2    0   50G  0 lvm  /u01
sr0                    11:0    1 1024M  0 rom
vde                   251:64   0   10G  0 disk
`-vde1                251:65   0   10G  0 part
vdc                   251:32   0   10G  0 disk
`-vdc1                251:33   0   10G  0 part
vda                   251:0    0   12G  0 disk
|-vda2                251:2    0 11.5G  0 part
| |-rootvg-swaplv     252:1    0  768M  0 lvm  [SWAP]
| `-rootvg-rootlv     252:0    0 10.8G  0 lvm  /
`-vda1                251:1    0  500M  0 part /boot
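I won't claim this is exactly how the partitions were created, but a loop along these lines does the job on empty disks (it is destructive, so double-check the device names):

for dev in /dev/vdc /dev/vdd /dev/vde /dev/vdf; do
    parted -s "$dev" mklabel msdos mkpart primary 1MiB 100%
done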
With all that out of the way it is time to cover the installation.
Labeling disks
I’m following the procedure documented in the 19c Administrator’s Guide chapter 20, section “Configuring Oracle ASM Filter Driver During Installation”. I have prepared my environment up to the step where I’d have to launch gridSetup.sh. This is a fairly well known process, and I won’t repeat it here.
Once the 19c install image has been extracted to my future Grid Home, the first step is to check if my system is supported:
[root@server4 ~]# cd /u01/app/grid/product/19.0.0/grid/bin
[root@server4 bin]# ./afddriverstate supported
AFD-9200: Supported
[root@server4 bin]# uname -r
3.10.0-1062.18.1.el7.x86_64
“AFD-9200: Supported” tells me that I can start labeling disks. This requires me to be root, and I have to set ORACLE_HOME and ORACLE_BASE. For some reason, the documentation suggests using /tmp as ORACLE_BASE, which I’ll use as well:
[root@server4 bin]# pwd
/u01/app/grid/product/19.0.0/grid/bin
[root@server4 bin]# export ORACLE_BASE=/tmp
[root@server4 bin]# export ORACLE_HOME=/u01/app/grid/product/19.0.0/grid
[root@server4 bin]# ./asmcmd afd_label DATA1 /dev/vdc1 --init
[root@server4 bin]# ./asmcmd afd_label DATA2 /dev/vdd1 --init
[root@server4 bin]# ./asmcmd afd_lslbl /dev/vdc1
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA1                                /dev/vdc1

[root@server4 bin]# ./asmcmd afd_lslbl /dev/vdd1
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA2                                /dev/vdd1
Note the use of the --init flag. It is only needed if Grid Infrastructure isn't installed yet.
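Once Oracle Restart is up and running, additional disks are labelled without the flag. A hypothetical example, using the two partitions I set aside for +RECO:

# only after Grid Infrastructure has been configured; no --init required
asmcmd afd_label RECO1 /dev/vde1
asmcmd afd_label RECO2 /dev/vdf1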
Labeling the disks did not have an effect on the block devices’ permissions. Right after finishing the 2 calls to label my 2 block devices, this is the output from my file system:
[root@server4 bin]# ls -l /dev/vd[c-d]*
brw-rw----. 1 root disk 252, 32 Mar 27 09:46 /dev/vdc
brw-rw----. 1 root disk 252, 33 Mar 27 12:55 /dev/vdc1
brw-rw----. 1 root disk 252, 48 Mar 27 09:46 /dev/vdd
brw-rw----. 1 root disk 252, 49 Mar 27 12:58 /dev/vdd1
[root@server4 bin]#
The output of afd_lslbl indicated that both of my disks are ready to become part of an ASM disk group, so let’s start the installer.
Call gridSetup.sh
I wasn't able to make sense of the options in the response file until I started the installer in GUI mode and saved a response file based on my choices. To cut a long story short, here is my call to gridSetup.sh:
[grid@server4 ~]$ /u01/app/grid/product/19.0.0/grid/gridSetup.sh -silent \
> INVENTORY_LOCATION=/u01/app/oraInventory \
> SELECTED_LANGUAGES=en \
> ORACLE_BASE=/u01/app/grid \
> ORACLE_HOME_NAME=ASMFD_RHCK \
> -waitforcompletion -ignorePrereqFailure -lenientInstallMode \
> oracle.install.option=HA_CONFIG \
> oracle.install.asm.OSDBA=asmdba \
> oracle.install.asm.OSASM=asmadmin \
> oracle.install.asm.diskGroup.name=DATA \
> oracle.install.asm.diskGroup.disks=/dev/vdc1,/dev/vdd1 \
> oracle.install.asm.diskGroup.diskDiscoveryString=/dev/vd* \
> oracle.install.asm.diskGroup.redundancy=EXTERNAL \
> oracle.install.asm.diskGroup.AUSize=4 \
> oracle.install.asm.configureAFD=true \
> oracle.install.crs.rootconfig.executeRootScript=false \
> oracle.install.asm.SYSASMPassword=thinkOfASuperSecretPassword \
> oracle.install.asm.monitorPassword=thinkOfASuperSecretPassword

Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp

You can find the log of this install session at:
 /tmp/GridSetupActions2020-03-27_01-06-14PM/gridSetupActions2020-03-27_01-06-14PM.log

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/grid/product/19.0.0/grid/root.sh

Execute /u01/app/grid/product/19.0.0/grid/root.sh on the following nodes:
[server4]

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/app/grid/product/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp [-silent]
Note: The required passwords need to be included in the response file.

Moved the install session logs to:
 /u01/app/oraInventory/logs/GridSetupActions2020-03-27_01-06-14PM
[grid@server4 ~]$
It took a little while to work out that, despite labeling the disks for ASMFD, I didn't have to put any reference to the AFD labels into the call to gridSetup.sh. Have a look at the disk discovery string and the disk group disks: those are the plain block device names I'd use if I were relying on UDEV rules for device name persistence. The syntax might appear counter-intuitive; the only AFD-related parameter is the configureAFD flag, which you need to set to true.
Since this is a lab environment I’m ok with external redundancy. Make sure you pick a redundancy level appropriate for your use case.
Running the configuration tools
The remaining steps are identical to a non-ASMFD setup: first you run orainstRoot.sh, followed by root.sh.
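For completeness, the inventory script is run as root first (its brief output is omitted here):

[root@server4 ~]# /u01/app/oraInventory/orainstRoot.sh

root.sh is the more interesting of the two; its output indicated success for me: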
[root@server4 ~]# /u01/app/grid/product/19.0.0/grid/root.sh
Check /u01/app/grid/product/19.0.0/grid/install/root_server4_2020-03-27_13-11-05-865019723.log for the output of root script
[root@server4 ~]#
[root@server4 ~]# cat /u01/app/grid/product/19.0.0/grid/install/root_server4_2020-03-27_13-11-05-865019723.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/product/19.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/grid/product/19.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/server4/crsconfig/roothas_2020-03-27_01-11-06PM.log
2020/03/27 13:11:13 CLSRSC-363: User ignored prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node server4 successfully pinned.
2020/03/27 13:13:55 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

server4     2020/03/27 13:16:59     /u01/app/grid/crsdata/server4/olr/backup_20200327_131659.olr     724960844
2020/03/27 13:17:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
[root@server4 ~]#
Well, that looks ok. Now on to the final step: configuration! As indicated in the output, you need to update the response file (/u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp) with the required passwords. For me that meant oracle.install.asm.monitorPassword and oracle.install.asm.SYSASMPassword. Once the response file was updated, I called gridSetup.sh once again:
[grid@server4 ~]$ /u01/app/grid/product/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2020-03-27_01-20-47PM

You can find the log of this install session at:
 /u01/app/oraInventory/logs/UpdateNodeList2020-03-27_01-20-47PM.log
Successfully Configured Software.
And that’s it! The software has been configured successfully. Don’t forget to remove the passwords from the response file!
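Blanking the two password parameters again can be scripted as well; a small sketch against the response file used in this post:

sed -i \
  -e 's/^oracle.install.asm.SYSASMPassword=.*/oracle.install.asm.SYSASMPassword=/' \
  -e 's/^oracle.install.asm.monitorPassword=.*/oracle.install.asm.monitorPassword=/' \
  /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp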
Verification
After a little while I had a working Oracle Restart 19c/ASMFD configuration on Oracle Linux 7.7 with RHCK. Let's check what this implies.
I’ll first look at the status of ASM Filter Driver:
[grid@server4 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM
The Oracle base has been set to /u01/app/grid
[grid@server4 ~]$ afddriverstate installed
AFD-9203: AFD device driver installed status: 'true'
[grid@server4 ~]$ afddriverstate loaded
AFD-9205: AFD device driver loaded status: 'true'
[grid@server4 ~]$ afddriverstate version
AFD-9325: Driver OS kernel version = 3.10.0-862.el7.x86_64.
AFD-9326: Driver build number = 190222.
AFD-9212: Driver build version = 19.0.0.0.0.
AFD-9547: Driver available build number = 190222.
AFD-9548: Driver available build version = 19.0.0.0.0.
[grid@server4 ~]$
That's encouraging: the ASMFD driver is installed and loaded on top of the 3.10.0 Red Hat Compatible Kernel.
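asmcmd offers a couple of additional checks worth running at this point; I'm only listing the calls here, since the exact output wording can vary between versions:

asmcmd afd_state     # reports whether AFD is loaded and whether filtering is enabled
asmcmd afd_lsdsk     # lists the labelled disks and their filtering status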
I am indeed using the base release (and have to patch now!):
[grid@server4 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
And … I’m also using ASMFD:
SQL> col name for a20
SQL> col path for a10
SQL> col library for a50
SQL> set lines 120
SQL> select name, path, library from v$asm_disk where group_number <> 0;

NAME                 PATH       LIBRARY
-------------------- ---------- --------------------------------------------------
DATA1                AFD:DATA1  AFD Library - Generic , version 3 (KABI_V3)
DATA2                AFD:DATA2  AFD Library - Generic , version 3 (KABI_V3)

SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/vd*, AFD:*

SQL>
This concludes the setup of my lab environment.