Monthly Archives: October 2009

ORA-02290: check constraint (RMAN.AL_C_STATUS) violated

I ran into this error today while working on a backup problem. The scenario is as follows:

  • Oracle 10.2.0.4 EE
  • RHEL 5.2 PAE kernel, i.e. 32bit
  • Backup taken on the standby database
  • Using a recovery catalog

All RMAN operations on the standby database that used the recovery catalog connection failed with the following error stack:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of list command at 10/30/2009 10:01:57
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of partial resync command on default channel at 10/30/2009 10:01:57
ORA-02290: check constraint (RMAN.AL_C_STATUS) violated
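
For the record, the failing commands were nothing exotic; any catalog-connected session triggers the implicit resync, roughly along these lines (the catalog connect string here is a placeholder, not my real environment):

$ rman target / catalog rman@rcat

RMAN> list backup summary;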

Oops, so no backups; I needed to fix this quickly. By the way, the same problem didn't exist when running the same operation on the primary, but I don't have space for the (disk) backup over there. Continue reading

Getting started with iSCSI part IV-Slightly advanced material

The final part of the article series focuses on some slightly more advanced topics, such as deleting targets, device name stability, and how the automatic login works.

Automatic login

The question I had when I saw the automatic login feature was: how does the daemon know which targets to log in to? The iSCSI target configuration is stored in a DBM database, represented in the following locations:

  • /var/lib/iscsi/send_targets
  • /var/lib/iscsi/nodes

Not surprisingly, the information stored there covers nodes and discovered targets. You should not mess with these files using your favourite text editor; use iscsiadm for all operations!

The following files were present after login:

[root@aux iscsi]# ls -l nodes/
total 20
drw------- 3 root root 4096 Oct 16 02:32 iqn.2009-10.com.openfiler:rac_vg.ocr_a
drw------- 3 root root 4096 Oct 16 02:32 iqn.2009-10.com.openfiler:rac_vg.ocr_b
drw------- 3 root root 4096 Oct 16 02:32 iqn.2009-10.com.openfiler:rac_vg.vote_a
drw------- 3 root root 4096 Oct 16 02:32 iqn.2009-10.com.openfiler:rac_vg.vote_b
drw------- 3 root root 4096 Oct 16 02:32 iqn.2009-10.com.openfiler:rac_vg.vote_c

[root@aux iscsi]# ls -l send_targets/
total 4
drw------- 2 root root 4096 Oct 16 02:32 192.168.30.22,3260
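
You interact with these records through iscsiadm's node mode rather than editing the files. A few examples (the target name and portal are taken from the listing above): list all known records, show the stored settings for one of them, and set node.startup to automatic, which is what makes the service log that target in at startup:

[root@aux iscsi]# iscsiadm -m node
[root@aux iscsi]# iscsiadm -m node -T iqn.2009-10.com.openfiler:rac_vg.ocr_a -p 192.168.30.22:3260 -o show
[root@aux iscsi]# iscsiadm -m node -T iqn.2009-10.com.openfiler:rac_vg.ocr_a -p 192.168.30.22:3260 -o update -n node.startup -v automatic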

Continue reading

Getting started with iSCSI part III-Mounting Storage

After all the necessary work has been completed on the iSCSI appliance we are now ready to present the storage to the RAC nodes; I am using RHEL 5 for these. RHEL 5 and its clones require a specific RPM to manage iSCSI storage, which is part of the distribution. Note that the package has a slightly different name in RHEL 4, and the steps to discover storage are slightly different to the ones presented here.

Installing the iSCSI initiator

The initiator, as the name implies, is required to initiate iSCSI connections. You need to install it if you haven't done so already:

# rpm -ihv iscsi-initiator-utils-*.rpm

The RPM installs the software and also creates the central configuration directory, /etc/iscsi/, with all necessary configuration files. The example here assumes passwordless authentication; you'll have to change /etc/iscsi/iscsid.conf if passwords are required for CHAP authentication. Maybe I'll cover the security aspects in a later post; right now I won't.
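
For reference, should CHAP be required, the relevant iscsid.conf entries look roughly like this (user name and password are placeholders, and the discovery settings are only needed if the target enforces discovery authentication):

# CHAP credentials for the normal session login
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsiuser
node.session.auth.password = secretpass

# CHAP credentials for discovery
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = iscsiuser
discovery.sendtargets.auth.password = secretpass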

The first thing after installing the iscsi daemons is to start them:

[root@rhel5 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@rhel5 ~]#
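
The "iscsiadm: No records found!" message is expected at this point: nothing has been discovered yet. A sendtargets discovery against the Openfiler portal, followed by a login, would look roughly like this (the portal address is the one used elsewhere in this series):

[root@rhel5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.30.22
[root@rhel5 ~]# iscsiadm -m node --login

Continue reading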

PRVF-4664 : Found inconsistent name resolution entries for SCAN name

This one has proven to be a worthy adversary: "PRVF-4664 : Found inconsistent name resolution entries for SCAN name". The error was thrown at the end of an 11.2.0.1 Grid Infrastructure installation, during the cluster verification utility run. Some bloggers I checked recommended workarounds, but I wanted the solution.

Some facts first:

  • Oracle Grid Infrastructure 11.2
  • RHEL 5.4 32bit
  • named 9.3.2
  • GPnP (grid plug and play) enabled

The error message from the log was as follows:

INFO: Checking name resolution setup for "rac-scan.rac.the-playground.de"...
INFO: ERROR: 
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name 
INFO: "rac-scan.rac.the-playground.de"
INFO: Verification of SCAN VIP and Listener setup failed

I scratched my head, browsed the Internet and came across a MetaLink note (887471.1), which didn't help (I didn't try to resolve the SCAN through /etc/hosts).

So something must have been wrong with the DNS subdomain delegation. And in fact, once that had been rectified using a combination of rndc, dig and nslookup, the error went away. In short, if your /etc/resolv.conf points to your nameserver (not the GNS server), and you can resolve hostnames such as the SCAN as part of the subdomain, the error will go away.
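
A quick sanity check is to resolve the SCAN from one of the cluster nodes; if the subdomain delegation is in place, the name should resolve to the GNS-managed addresses. Something along these lines does the job (using the SCAN name from the log above):

# dig +short rac-scan.rac.the-playground.de
# nslookup rac-scan.rac.the-playground.de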

I previously blogged about this; you can check the setup here:

https://martincarstenbach.wordpress.com/2009/10/02/build-your-own-11-2-rac-system-part-ii-dns-dhcp-for-gns/

Getting started with iSCSI part II-Presenting Storage

This part of the series focuses on how to add storage to your appliance and then make it available to your clients. I have focussed on providing quick success; this setup is mainly for a lab to experiment with, as it is not secure enough for a production rollout. This part of the article series is based on a presentation I gave at UKOUG's UNIX SIG in September.

Add sharable storage

So first of all, you need to present more storage to your Openfiler system. In my case that's quite simple: I am running Openfiler as a domU under OpenSuSE 11.1, but any recent Xen implementation should do the trick. I couldn't convince Oracle VM 2.1.5 to log in to an iSCSI target without crashing the whole domU (RHEL 5.2 32bit), which prompted me to ditch that product. I hear that Oracle VM 2.2 focuses on the management interface as a consequence of the Virtual Iron acquisition earlier this year. Rumour has it that the underlying Xen version won't change (much). That remains to be seen; Oracle VM 2.2 hasn't been publicly released yet. But back to Openfiler and the addition of storage.

First of all I add an LV on the dom0:

# lvcreate --name openfiler_data_001 --size 15G data_vg

Next I add that to the appliance:

# xm block-attach 5 'phy:/dev/data_vg/openfiler_data_001' xvdd w

This command adds the new LV to domain 5 (the Openfiler domU) as /dev/xvdd in write mode. Openfiler (and in fact any modern Linux-based domU) will detect the new storage straight away and make it available for use.
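
A quick check inside the Openfiler domU confirms that the new disk has shown up, for example:

# cat /proc/partitions
# fdisk -l /dev/xvdd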

Continue reading

Getting started with iSCSI part I-Openfiler Fundamentals

After having read a number of articles detailing the use of Openfiler with Real Application Clusters I was intrigued, especially by the prospect of setting up my own extended distance cluster. There was an excellent article by vnull (Jakub Wartak) on OTN about how to do this. So Openfiler seems to be my "SAN", provided by a Xen-based domU.

So, what is this Openfiler thing after all? The project website (Openfiler.org) states:

  • “Openfiler is a powerful, intuitive browser-based network storage software distribution” and
  • “Openfiler delivers file-based Network Attached Storage and block-based Storage Area Networking in a single framework”
  • And above all it’s free!

Its closest free competitor known to me might be FreeNAS, which is not based on Linux but on a BSD variant (no disrespect intended).
Continue reading

Connected to an idle instance-RAC 10.2.0.4.1

I encountered a strange problem today with a three-node cluster. Let's start with the facts:

  • RHEL 5.3 64bit
  • Oracle Clusterware 10.2.0.4 + bundle patch#4
  • Oracle ASM 10.2.0.4.1 (that is PSU 1)
  • ASMLib in use
  • Cluster members: nodea, nodeb, nodec (their real names are known to the author)

I was about to create a clustered ASM instance when it happened. I had just completed dbca's "configure automatic storage management" option, which created the listeners (listener_<hostname>) on each host as well as the ASM instance itself. ASM was started on all cluster nodes:

[oracle@nodea bin]$ for i in a b c; do srvctl status asm -n node$i; done
ASM instance +ASM1 is running on node nodea.
ASM instance +ASM2 is running on node nodeb.
ASM instance +ASM3 is running on node nodec.

So far so good. Then I decided to query +ASM1 to see if the disks were present (I had just finished a snapclone on the storage array).

[oracle@nodea ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Tue Oct 6 15:22:51 2009

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL>

Pardon? This is the moment where I think it would be great if WordPress could play back my astonishment and surprise at the nonexistent instance. What was going on there? I could clearly see the instance was up!
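
Before following the link: the classic first checks for this symptom are whether the environment the sqlplus session inherits actually matches the running instance, along these lines (not necessarily the culprit in this particular case):

[oracle@nodea ~]$ ps -ef | grep asm_pmon | grep -v grep
[oracle@nodea ~]$ echo $ORACLE_SID
[oracle@nodea ~]$ echo $ORACLE_HOME

Continue reading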