WARNING: The patch number for the PSU has changed, but I haven’t had time to update the blog entry yet! Please read MOS note 1082394.1 now, and disregard the instructions below. The new PSU for Grid Infrastructure is patch 9778840!
DO NOT FOLLOW THESE INSTRUCTIONS! Update to follow.
So here I am again: PSU 11.2.0.1.1 for Grid Infrastructure is out, so let’s get it applied. It’s known as patch 9343627 internally. Quite a chunky download at around 340 MB!
Going through the readme we find the usual suspects, amongst them the new OPatch deployment and the locking/unlocking of the Grid Infrastructure home.
We need OPatch 11.2 for this patch; it’s patch 6880880 on Metalink. Make sure to download the 11.2 version, some 26 MB in size. Deploy it to the RDBMS home as follows:
[oracle@rac11gr2node1 ~]$ . oraenv
ORACLE_SID = [oracle] ? admindb
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 is /u01/app/oracle
[oracle@rac11gr2node1 ~]$ cd $ORACLE_HOME
[oracle@rac11gr2node1 dbhome_1]$ mv OPatch OPatch.11.1
[oracle@rac11gr2node1 dbhome_1]$ cp /mnt/db11.2/p6880880_112000_Linux-x86-64.zip .
[oracle@rac11gr2node1 dbhome_1]$ unzip -q p6880880_112000_Linux-x86-64.zip
[oracle@rac11gr2node1 dbhome_1]$ rm p6880880_112000_Linux-x86-64.zip
Deployment to the Grid Infrastructure home has to wait until that home is unlocked, see below. Don’t forget to deploy OPatch 11.2 there as well though! The interesting bit about this patch is that there are parts for the RDBMS, too. As it turned out, the zipfile for patch 9343627 contains the RDBMS PSU 1 as well. How odd; I haven’t seen this before, but never mind.
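For reference, the unlock/relock sequence on the Grid home looks roughly like this. This is a sketch from memory rather than the readme verbatim, and the Grid home path /u01/app/11.2.0/grid is my assumption, so substitute your own:

```
[root@rac11gr2node1 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -unlock
... deploy OPatch 11.2 into the now-writable Grid home as user grid ...
... apply the patch as per the readme ...
[root@rac11gr2node1 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -patch
```

The -unlock call stops the stack on the local node and makes the Grid home writable for its owner; -patch relocks it and brings the stack back up.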
Both PSUs can be applied in a rolling fashion-just ensure you stop and unlock the ORACLE_HOMEs on one node at a time. My Grid Infrastructure installation is owned by user “grid”, the RDBMS by user “oracle”. I am not using shared Oracle homes. The OS is RHEL 5.5 64bit. Continue reading
Interesting scenario this morning with a development database. There is no specific monitoring in place for development systems, so a user phoned us up stating that the database was inaccessible. The last lines in the alert.log pointed to a datafile that could not be read:
Wed Apr 28 10:18:08 2010
Errors in file /u01/app/oracle/admin/devone/bdump/devone_pmon_3416.trc:
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: '+DATB/DEVONE/datafile/undotbs1.356.703506859'
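ORA-00376 usually means the file is offline or marked as needing recovery. Once the underlying storage problem is fixed, the generic way back is media recovery followed by bringing the file online; this is a sketch of the standard approach, not necessarily what happened in this incident:

```
SQL> recover datafile 2;
SQL> alter database datafile 2 online;
```

With an undo tablespace the situation can be more involved, since active transactions may still reference its undo segments.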
This is a 10.2.0.4.1 single instance database (a clone from our production RAC cluster) running on RHEL 5.4 64bit with ASMLib. Continue reading
This is a quick post with some information about me deploying PSU 11.2.0.1.1 to my Oracle installation on ocfs2, RHEL 5.4 64bit. First of all it should be said that this is for the RDBMS home, not the Grid Infrastructure home. At the time of this writing I didn’t find any known issues, but you might want to check Note 1061294.1 Oracle Database Patch Set Update 11.2.0.1.1 Known Issues before proceeding. Also, never do this in production before having tested the patch in DEV, QA and all the other environments you might have.
I started by downloading the patch; if MOS is available, you can locate it on the Patches and Updates tab via a simple search for Oracle products. Enter 9352237 as the patch number, double-check your platform and download the zipfile. I extracted the file to /tmp/9352237.
As always, it’s a good idea to read the readme.html file. First thing to notice: OPatch 11.2 is recommended, patch 6880880. OPatch is available for many versions, so make sure to download the one for 11.2 and your platform. Deployment is quite simple: copy the patch file to $ORACLE_HOME and unzip it. Continue reading
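Applying the PSU then boils down to running OPatch from the extracted patch directory. A sketch under the assumptions above (the /tmp/9352237 extract location; the host name is merely a placeholder), with the readme’s exact instructions taking precedence:

```
[oracle@server ~]$ cd /tmp/9352237
[oracle@server 9352237]$ $ORACLE_HOME/OPatch/opatch lsinventory
[oracle@server 9352237]$ $ORACLE_HOME/OPatch/opatch apply
```

The lsinventory run is a quick sanity check that OPatch can read the inventory before anything is changed; opatch apply prompts for confirmation before proceeding.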
A common situation for many DBAs: all of a sudden you are tasked with looking at a database and told that you have inherited it. Of course, it comes with a lot of problems, and you are supposed to fix them all. A few weeks ago this happened to me.
The system is a Sun 4660 x86-64 server running Red Hat 5.3, with 64GB of memory and 8 dual-core Opteron 8218 processors. SGA_TARGET was set to 40G, and PGA_AGGREGATE_TARGET to 5G. That sounds like plenty; however, the box was very busy trying to free up memory:
top - 12:16:11 up 23 days, 23:46, 19 users, load average: 28.84, 25.69, 23.19
Tasks: 970 total, 3 running, 967 sleeping, 0 stopped, 0 zombie
Cpu(s): 9.3%us, 13.0%sy, 0.0%ni, 36.0%id, 40.6%wa, 0.1%hi, 1.0%si, 0.0%st
Mem: 66068664k total, 65992772k used, 75892k free, 43168k buffers
Swap: 2096472k total, 2096472k used, 0k free, 40782300k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1068 root 20 -5 0 0 0 R 76.0 0.0 566:03.07 [kswapd5]
13284 oracle 18 0 40.3g 6.3g 6.2g D 31.5 10.0 28:30.12 oraclePROD (LOCAL=NO)
1070 root 10 -5 0 0 0 D 21.6 0.0 925:57.03 [kswapd7]
1065 root 10 -5 0 0 0 D 13.8 0.0 196:27.93 [kswapd2]
7771 oracle 18 0 40.6g 4.5g 4.1g D 12.1 7.1 26:05.04 oraclePROD (LOCAL=NO)
8073 oracle 16 0 40.2g 2.8g 2.8g D 12.1 4.4 5:29.79 oraclePROD (LOCAL=NO)
1066 root 10 -5 0 0 0 S 11.8 0.0 84:57.85 [kswapd3]
1067 root 10 -5 0 0 0 D 10.5 0.0 165:39.98 [kswapd4]
1069 root 10 -5 0 0 0 S 7.9 0.0 277:15.84 [kswapd6]
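The arithmetic behind that thrashing is worth spelling out. Without hugepages, every dedicated server process that touches the whole SGA builds its own page tables for it. A back-of-envelope calculation (my illustrative figures, not measurements from the box above):

```shell
# Sketch: page-table cost of a 40G SGA on default 4K pages, no hugepages.
sga_bytes=$((40 * 1024 * 1024 * 1024))   # SGA_TARGET = 40G
page_size=4096                            # default x86-64 page size
pte_size=8                                # bytes per page-table entry
per_proc=$((sga_bytes / page_size * pte_size))
echo "$((per_proc / 1024 / 1024)) MB of page tables per connected process"
```

Multiply that by the roughly 970 tasks shown in the top output and page tables alone can rival physical RAM, which is exactly the kind of pressure that keeps all those kswapd threads this busy.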