How to move OCR and voting disks into ASM

Moving the OCR and voting disks into ASM is the Oracle Grid Infrastructure (11.2) way of storing this essential metadata. During new installations this is quite simple since the work is done by OUI. When upgrading to 11.2 from a previous Oracle release, things look slightly different. Note that storing the OCR and voting disks on raw or block devices, as we did up to 11.1, is no longer possible except for migrated systems.

UPDATE 230315: this post describes a method of moving OCR and voting files from raw devices to an ASM disk group. It applies to an Oracle 10.2 source and an 11.2 target environment on Linux only. Since these database releases as well as the Linux distribution used back in the day are out of support this article shouldn’t be referred to anymore. It’s a good story nevertheless and I still enjoy reading the comments from time to time.

This is a transcript of some work I did to prove that it’s possible to move OCR and voting disk into ASM. It’s not been entirely simple though! Here’s my current setup after the migration of 10.2.0.4 -> 11.2.0.1.

Be warned: this post is very technical and rather long, but I decided to include the output of all commands just in case one of my readers comes across a similar problem and wants to check whether it's exactly the same. So, these are my settings. The commands are invoked from the Grid Infrastructure home; the OS is OEL 5.4 x86-64:

[root@racupgrade1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
 Version                  :          3
 Total space (kbytes)     :     292924
 Used space (kbytes)      :       8096
 Available space (kbytes) :     284828
 ID                       :  209577144
 Device/File Name         : /dev/raw/raw1
 Device/File integrity check succeeded

 Device/File not configured

 Device/File not configured

 Device/File not configured

 Device/File not configured

 Cluster registry integrity check succeeded

 Logical corruption check succeeded


[root@racupgrade1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   324593df1e4c7fe4bf87b885df62e6d5 (/dev/raw/raw2) []

Checking the ASM properties, note that by default ASM diskgroups are 10.1 compatible.

NAME     USABLE_FILE_MB COMPATIBILITY      DATABASE_COMPATIBILITY         V
---------------------------------------------------------------------------
DATA                8563 10.1.0.0.0        10.1.0.0.0                     N

Here are the clients to the ASM instance:

SQL> select db_name,status,SOFTWARE_VERSION,COMPATIBLE_VERSION from v$asm_client;

DB_NAME  STATUS       SOFTWARE_VERSION      COMPATIBLE_VERSION
-------- ------------ --------------------- --------------------
+ASM     CONNECTED    11.2.0.1.0            11.2.0.1.0
orcl     CONNECTED    10.2.0.4.0            10.2.0.3.0

It’s my venerable 10.2.0.4 database “orcl”.

First attempt at moving Voting Files

Looking at the documentation (but not too closely), all I should have to do is execute "crsctl replace votedisk +DATA". Naively I tried the command:

[root@racupgrade1 ~]# crsctl replace votedisk  +DATA
Failed to create voting files on disk group DATA.
Change to configuration failed, but was successfully rolled back.
CRS-4000: Command Replace failed, or completed with errors.

Now that isn’t too informative! So let’s have a look at the ASM alert.log. This is where adrci shines:

[oracle@racupgrade1 ~]$ adrci

ADRCI: Release 11.2.0.1.0 - Production on Fri Jan 29 13:48:09 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u01/app/oracle"
adrci> set home asm
adrci> show alert -tail

2010-01-29 11:44:06.850000 +00:00
NOTE: updated gpnp profile ASM diskstring: 
NOTE: Creating voting files in diskgroup DATA
NOTE: Voting File refresh pending for group 1/0x90481b38 (DATA)
NOTE: Attempting voting file creation in diskgroup DATA
ERROR: Voting file allocation failed for group DATA
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_12939.trc:
ORA-15221: ASM operation requires compatible.asm of 11.2.0.0.0 or higher

Aha, I thought that might be the case :) But I wanted to see if Clusterware was clever enough to work it out. Note that raising the compatibility attributes isn't reversible! I kept compatible.rdbms at 10.1 so that my orcl database can still connect.

SQL> alter diskgroup data set attribute 'compatible.asm'='11.2';

SQL> select NAME,USABLE_FILE_MB,COMPATIBILITY,DATABASE_COMPATIBILITY,VOTING_FILES
 2  from v$asm_diskgroup
 3  /

NAME                 USABLE_FILE_MB COMPATIBILITY      DATABASE_COMPATIBILITY         V
-------------------- -------------- ------------------ ------------------------------ -
DATA                 8561           11.2.0.0.0         10.1.0.0.0                     N

The "V" in the output is the "VOTING_FILES" flag, which is "N" since I haven't stored the voting disks in the disk group yet. The alert.log reflects this change as well:

2010-01-29 11:50:10.149000 +00:00
kfdp_queryTimeout(DATA)
kfdp_query(DATA): 4 
2010-01-29 11:50:12.536000 +00:00
kfdp_queryBg(): 4 
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 1
2010-01-29 11:50:50.305000 +00:00

Right, so with the newly set compatibility I should be ready to re-run the command:

# crsctl replace votedisk  +DATA

Hmpf, that failed again!

NOTE: updated gpnp profile ASM diskstring: 
NOTE: Creating voting files in diskgroup DATA
NOTE: Voting File refresh pending for group 1/0x90481b38 (DATA)
NOTE: Attempting voting file creation in diskgroup DATA
NOTE: voting file allocation on grp 1 disk VOL1
2010-01-29 11:50:51.620000 +00:00
NOTE: voting file allocation on grp 1 disk VOL2
ERROR: Voting file allocation failed for group DATA
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_13198.trc:
ORA-15273: Could not create the required number of voting files.
2010-01-29 11:50:55.182000 +00:00
NOTE: setting 11.2 start ABA for group DATA thread 2 to 7.47

Nope, this still didn't work. I double-checked, but there was nothing wrong with the disk group: two disks, normal redundancy, two failure groups. So I tried moving the OCR next.

Moving OCR

Now that was simple! Just what the doctor ordered after the problem I just had.

# ocrconfig -add +DATA

The command completes without much additional information, and there isn't much in the ASM alert.log either:

2010-01-29 12:02:02.910000 +00:00
NOTE: Advanced to new COD format for group DATA

But it worked:

[root@racupgrade1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
 Version                  :          3
 Total space (kbytes)     :     292924
 Used space (kbytes)      :       8096
 Available space (kbytes) :     284828
 ID                       :  209577144
 Device/File Name         : /dev/raw/raw1
 Device/File integrity check succeeded
 Device/File Name         :      +DATA
 Device/File integrity check succeeded

 Device/File not configured

 Device/File not configured

 Device/File not configured

 Cluster registry integrity check succeeded

 Logical corruption check succeeded

All that remained was to get rid of the raw device.

[root@racupgrade1 ~]# ocrconfig -delete /dev/raw/raw1
[root@racupgrade1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          3
Total space (kbytes)     :     292924
Used space (kbytes)      :       8096
Available space (kbytes) :     284828
ID                       :  209577144
Device/File Name         :      +DATA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

That seemed too simple; let's double-check with cluvfy:

[oracle@racupgrade1 ~]$ cluvfy comp ocr

Verifying OCR integrity

Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations


ASM Running check passed. ASM is running on all cluster nodes

Checking OCR config file "/etc/oracle/ocr.loc"...

OCR config file "/etc/oracle/ocr.loc" check successful


Disk group for ocr location "+DATA" available on all the nodes


Checking size of the OCR location "+DATA" ...

Size check for OCR location "+DATA" successful...

WARNING:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.

OCR integrity check passed

Verification of OCR integrity was successful.

Great stuff, back to my original problem.

Trying to move voting disks again

I remembered that in 10.2 you had to stop the cluster to perform this kind of maintenance, so I did the same now:

[root@racupgrade1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.crsd' on 'racupgrade1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.gsd' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racupgrade1'
CRS-2677: Stop of 'ora.gsd' on 'racupgrade1' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racupgrade1' succeeded
CRS-2673: Attempting to stop 'ora.racupgrade1.vip' on 'racupgrade1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'racupgrade1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'racupgrade1'
CRS-2677: Stop of 'ora.racupgrade1.vip' on 'racupgrade1' succeeded
CRS-2672: Attempting to start 'ora.racupgrade1.vip' on 'racupgrade3'
CRS-2677: Stop of 'ora.scan1.vip' on 'racupgrade1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'racupgrade2'
CRS-2677: Stop of 'ora.registry.acfs' on 'racupgrade1' succeeded
CRS-2676: Start of 'ora.racupgrade1.vip' on 'racupgrade3' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'racupgrade2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'racupgrade2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'racupgrade2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'racupgrade1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racupgrade1'
CRS-2677: Stop of 'ora.asm' on 'racupgrade1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.eons' on 'racupgrade1'
CRS-2677: Stop of 'ora.ons' on 'racupgrade1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'racupgrade1'
CRS-2677: Stop of 'ora.net1.network' on 'racupgrade1' succeeded
CRS-2677: Stop of 'ora.eons' on 'racupgrade1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racupgrade1' has completed
CRS-2677: Stop of 'ora.crsd' on 'racupgrade1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.asm' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racupgrade1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racupgrade1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'racupgrade1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racupgrade1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racupgrade1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'racupgrade1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racupgrade1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racupgrade1'
CRS-2677: Stop of 'ora.cssd' on 'racupgrade1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racupgrade1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'racupgrade1'
CRS-2677: Stop of 'ora.gpnpd' on 'racupgrade1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racupgrade1'
CRS-2677: Stop of 'ora.diskmon' on 'racupgrade1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'racupgrade1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racupgrade1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

The above command completed successfully on all nodes.

Now I need ASM: the "-excl" flag allows me to start Clusterware in exclusive mode for voting disk maintenance operations:

[root@racupgrade1 ~]# crsctl start crs -excl
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.gipcd' on 'racupgrade1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'racupgrade1'
CRS-2676: Start of 'ora.mdnsd' on 'racupgrade1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'racupgrade1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'racupgrade1'
CRS-2676: Start of 'ora.gpnpd' on 'racupgrade1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racupgrade1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racupgrade1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racupgrade1'
CRS-2679: Attempting to clean 'ora.diskmon' on 'racupgrade1'
CRS-2681: Clean of 'ora.diskmon' on 'racupgrade1' succeeded
CRS-2672: Attempting to start 'ora.diskmon' on 'racupgrade1'
CRS-2676: Start of 'ora.diskmon' on 'racupgrade1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racupgrade1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'racupgrade1'
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'racupgrade1'
CRS-2676: Start of 'ora.ctssd' on 'racupgrade1' succeeded
CRS-2676: Start of 'ora.drivers.acfs' on 'racupgrade1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racupgrade1'
CRS-2676: Start of 'ora.asm' on 'racupgrade1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racupgrade1'
CRS-2676: Start of 'ora.crsd' on 'racupgrade1' succeeded
[root@racupgrade1 ~]#

Tension rising, drums please!

crsctl replace votedisk +data

And no, that didn't work either. This turned out to be frustrating. I then suspected the replace command didn't have a majority of voting disks available, so I added two more to bring the number up to three.

# crsctl add css votedisk /dev/raw/raw3
# crsctl add css votedisk /dev/raw/raw4

These are mapped to my /dev/xvde LUN:

[root@racupgrade1 ~]# fdisk -l /dev/xvde

Disk /dev/xvde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

 Device Boot      Start         End      Blocks   Id  System
/dev/xvde1               1          37      297171   83  Linux
/dev/xvde2              38          74      297202+  83  Linux
/dev/xvde3              75         111      297202+  83  Linux
/dev/xvde4             112         261     1204875    5  Extended
/dev/xvde5             112         148      297171   83  Linux
/dev/xvde6             149         185      297171   83  Linux

The addition of the voting disks succeeded as you can see from the output of crsctl:

[root@racupgrade1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   324593df1e4c7fe4bf87b885df62e6d5 (/dev/raw/raw2) []
 2. ONLINE   7b5ce0c6bc994f37bf81174c9a60075a (/dev/raw/raw3) []
 3. ONLINE   7de32b1c48054f5ebf8c77185c1f2eb6 (/dev/raw/raw4) []
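A quick sanity check like the one above can also be scripted. The following is a minimal sketch of my own (not part of the original workflow): it counts the voting files reported as ONLINE, with a few lines of the output above embedded as a sample so the snippet is self-contained.

```shell
# Count voting files reported as ONLINE. In practice you would pipe the
# real output in:  crsctl query css votedisk | count_online
count_online() {
  grep -c 'ONLINE'
}

# Embedded sample, abbreviated from the crsctl output above:
sample=' 1. ONLINE   324593df1e4c7fe4bf87b885df62e6d5 (/dev/raw/raw2) []
 2. ONLINE   7b5ce0c6bc994f37bf81174c9a60075a (/dev/raw/raw3) []
 3. ONLINE   7de32b1c48054f5ebf8c77185c1f2eb6 (/dev/raw/raw4) []'

printf '%s\n' "$sample" | count_online   # prints 3
```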

And so it went on. I tried unsuccessfully a number of times until I went back to RTFM:

http://download.oracle.com/docs/cd/E11882_01/rac.112/e10717/votocr.htm#BGBBIGJH

I read the part about the normal redundancy diskgroup:

  • Normal redundancy: A disk group with normal redundancy can store up to three voting disks

A normal redundancy disk group normally only needs two disks, which gives you two failure groups. Slightly hidden further down, I found the solution:

“A normal redundancy disk group must contain at least two failure groups but if you are storing your voting disks on Oracle ASM, then a normal redundancy disk group must contain at least three failure groups”.
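The requirement can be summarised in a small helper, a sketch of my own rather than anything from the tooling. The normal redundancy figure (three failure groups) comes from the documentation quoted above; the external (one) and high (five) figures are the corresponding requirements for voting files stored in ASM:

```shell
# Minimum number of failure groups a disk group needs when it is to
# hold the Clusterware voting files, keyed by redundancy level.
required_failgroups() {
  case "$1" in
    external) echo 1 ;;
    normal)   echo 3 ;;
    high)     echo 5 ;;
    *)        echo "unknown redundancy: $1" >&2; return 1 ;;
  esac
}

required_failgroups normal   # prints 3
```

With only two failure groups in DATA, the replace command was bound to fail for a normal redundancy disk group.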

Aha! Now I need to add another failure group to my disk group, which can easily be done online. On the dom0 I created another 10G LV:

# lvcreate --size 10g --name racupgrade_asm3 data_vg

And attached it to the domUs:

[root@oravm xen]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   512     4     r-----    373.9
racupgrade1                                  1  1536     2     -b----   1064.6
racupgrade2                                  2  1536     2     -b----    871.5
racupgrade3                                  3  1536     2     -b----    819.9
[root@oravm xen]# xm block-attach racupgrade1 phy:/dev/data_vg/racupgrade_asm3 xvdg w!
[root@oravm xen]# xm block-attach racupgrade2 phy:/dev/data_vg/racupgrade_asm3 xvdg w!
[root@oravm xen]# xm block-attach racupgrade3 phy:/dev/data_vg/racupgrade_asm3 xvdg w!
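Attaching the same backing device to every domU is repetitive, so it can be looped. This dry-run sketch simply echoes the commands using the device and domU names from above; drop the leading echo to actually execute them:

```shell
# Print the xm block-attach command for each domU (dry run: remove the
# leading "echo" to attach the device for real).
DEV=/dev/data_vg/racupgrade_asm3
for domu in racupgrade1 racupgrade2 racupgrade3; do
  echo xm block-attach "$domu" "phy:$DEV" xvdg 'w!'
done
```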

I initialised the LUN on the racupgrade1 domU (user input follows the prompts):

[root@racupgrade1 ~]# fdisk /dev/xvdg
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
 (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/xvdg: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

 Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
 e   extended
 p   primary partition (1-4)
p1
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): p

Disk /dev/xvdg: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

 Device Boot      Start         End      Blocks   Id  System
/dev/xvdg1               1        1305    10482381   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Let’s create an ASM disk:

[root@racupgrade1 ~]# /etc/init.d/oracleasm createdisk VOL3 /dev/xvdg1
Marking disk "VOL3" as an ASM disk:                        [  OK  ]

No problems here, so let’s discover the new disk on the other hosts:

[root@racupgrade2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@racupgrade2 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3

The final step is to add the disk to ASM. Remember to log in as SYSASM, not SYSDBA.

[oracle@racupgrade1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 29 13:44:38 2010

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup data add disk 'ORCL:VOL3' rebalance power 9;

Diskgroup altered

SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
------------ ----- ---- ---------- ---------- ---------- ---------- ---------- ----------- --------------------------------------------
 1           REBAL RUN           9          9        152       1390        629           1
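Instead of re-querying v$asm_operation by hand, the wait can be scripted as a polling loop. This is a skeleton of my own: rebalance_running is a placeholder condition so the sketch stands alone; in a real environment it would run the v$asm_operation query through sqlplus as SYSASM.

```shell
# Poll until the rebalance no longer appears in v$asm_operation.
# rebalance_running is a stand-in; in practice it would do something like
#   sqlplus -s / as sysasm <<< "select count(*) from v\$asm_operation;"
# and test whether the count is non-zero.
rebalance_running() {
  [ -e /tmp/rebalance_in_progress ]   # placeholder condition
}

wait_for_rebalance() {
  while rebalance_running; do
    sleep 10
  done
  echo "rebalance complete"
}
```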

This completed after a little while:

2010-01-29 13:48:36.519000 +00:00
NOTE: initiating PST update: grp = 1
kfdp_update(): 8
kfdp_updateBg(): 8
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp = 1
kfdp_update(): 9
kfdp_updateBg(): 9
NOTE: PST update grp = 1 completed successfully
NOTE: membership refresh pending for group 1/0x8325e5c3 (DATA)
2010-01-29 13:48:39.473000 +00:00
kfdp_query(DATA): 10
kfdp_queryBg(): 10
SUCCESS: refreshed membership for 1/0x8325e5c3 (DATA)

Now finally, will that be the solution? It was!

[root@racupgrade1 ~]# crsctl replace votedisk +DATA
CRS-4256: Updating the profile
Successful addition of voting disk a45b859d21834f81bfeb2539ffa339aa.
Successful addition of voting disk a3e3a5d754b84f83bfdc687e58a2b002.
Successful addition of voting disk 10a5402e23cc4f1cbfc5573d4ee626ab.
Successful deletion of voting disk 324593df1e4c7fe4bf87b885df62e6d5.
Successful deletion of voting disk 7b5ce0c6bc994f37bf81174c9a60075a.
Successful deletion of voting disk 7de32b1c48054f5ebf8c77185c1f2eb6.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced

The change is reflected in the ASM alert log as well:

2010-01-29 13:59:40.498000 +00:00
NOTE: updated gpnp profile ASM diskstring:
NOTE: Creating voting files in diskgroup DATA
NOTE: Voting File refresh pending for group 1/0x8325e5c3 (DATA)
NOTE: Attempting voting file creation in diskgroup DATA
NOTE: voting file allocation on grp 1 disk VOL1
NOTE: voting file allocation on grp 1 disk VOL2
2010-01-29 13:59:41.772000 +00:00
NOTE: voting file allocation on grp 1 disk VOL3

And clusterware also reckons it worked.

[root@racupgrade1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a45b859d21834f81bfeb2539ffa339aa (ORCL:VOL1) [DATA]
 2. ONLINE   a3e3a5d754b84f83bfdc687e58a2b002 (ORCL:VOL2) [DATA]
 3. ONLINE   10a5402e23cc4f1cbfc5573d4ee626ab (ORCL:VOL3) [DATA]

Voilà! End of post. Thanks if you read it all!

Responses

  1. Excellent Step to move OCR and voting disks into ASM.

  2. Great post Martin. I like the fact you have included all the output, I think
    it adds great value. I am planning on upgrading my 10gR2 CRS to 11gR2 GRID infrastructure,
    and this and your other 10g -> 11gR2 upgrade posts are going to be invaluable.

    Thanks

    1. Neil, you are welcome!

  3. Thanks a ton! This is a great article. It help me resolve my Voting disk relocation and OCR mirroring issue today.

    Regards
    Deepa

  4. […] How to move OCR and voting disks into ASM […]

  5. Martin,

    I had a question but I could not ask you during the SIG. For ease of manageability, did you create another disk group just for the voting files and OCR? If you didn't, did you consider it, and if you did consider it, why didn't you add one?

    1. Hi Coskan,

      thanks for your question. For this particular system there is a disk group called CTRLLOG which is currently used exclusively for control files and online redo logs. Its disks are set to VRAID 1 on the EVA, the only ones by the way :(

      I am planning to upgrade the compatible.asm attribute to 11.2 for this disk group and move the OCR and voting disks into it. The downside is the disk group's redundancy level, which is external.

  6. […] 15-How to move OCR and Moving disk into ASM in 11GR2? Martin Bach-How to move OCR and voting disks into ASM […]

  7. This is great. We might use it to go the other direction, away from ASM due to bug 9011779.

  8. Hi Martin,

    Great useful post! I was looking for how to do this today since the previous DBA used raw devices to setup the OCR and vote disks for 11.2 RAC instead of ASM. Thanks again.

    Cheers,
    Ben

  9. Hi,
    Is there a way to re-mark an OFFLINE voting file to ONLINE after the partition is once again available, i.e. in a
    scenario with voting files on 2 storage arrays and one array experiences a transient outage?

    Thanks!

  10. excellent steps :) thanks a lot for sharing

  11. Martin,

    What an excellent knowledge base. I followed your steps and smoothly added a new OCR mirror and replaced the voting disks in ASM. I appreciate your knowledge sharing.

    1. You are welcome, glad to see it helped other people as well.

  12. thanks. helped me a lot to move my voting disks to a new diskgroup.
    The sentence “but if you are storing your voting disks on Oracle ASM, then a normal redundancy disk group must contain at least three failure groups” saved my day :-)

  13. Thanks for the post. Does it mean that both normal and high redundancy disk groups for voting disks should have three failure groups? In that case it is always better to have high redundancy, right, if it takes the same number of disks?

    1. Hi Deepak,

      normal redundancy requires 3 failure groups, high requires 5, if you want to use them for the voting files. Bear in mind that _all_ voting files are written to, so the more you have, the more overhead you have to live with.

      I personally think normal redundancy (3 voting files) is enough. That matches the Exadata setup, so it is probably OK. I have not yet seen a system with high redundancy for the OCR/voting files, but that doesn't mean it doesn't exist.
