
RAC 12c enhancements: adding an additional SCAN-part 4

This is the last part of this series. In the previous articles you read how to create a physical standby database from a RAC One Node database.

Networks (refresher)

To make it easier to follow without going back to the previous articles, here are the networks I’m using, listed for your convenience.

  • 192.168.100.0/24: Client network
  • 192.168.102.0/24: Dedicated Data Guard network

Data Guard Broker Configuration

I've said it before and I'll say it again: I like the Data Guard Broker interface for managing standby databases. It's the ultimate simplification, and so easy to implement that it's quite safe to operate even in large organisations. As an added bonus you get OEM integration as well, since OEM relies on a Broker configuration. The first step in managing the standby database therefore is to create the Data Guard Broker configuration. And by the way, the documentation was correct: the parameter "listener_networks" is already set! When the standby database starts you can see it being set by the Oracle agent process. The alert.log reads:

Starting background process DMON
DMON started with pid=34, OS id=11023
ORACLE_BASE from environment = /u01/app/oracle
Using default pga_aggregate_limit of 2048 MB
ALTER SYSTEM SET local_listener=' (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.58)(PORT=1521))' SCOPE=MEMORY SID='sby_2';
2014-04-28 05:11:47.874000 -04:00
ALTER SYSTEM SET remote_listener=' ron12csby-scan.example.com:1521' SCOPE=MEMORY SID='sby_2';
ALTER SYSTEM SET listener_networks='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.58)
(PORT=1522)))))','((NAME=net2)(REMOTE_LISTENER=ron12csby-dgscan.dg.example.com:1521))' SCOPE=MEMORY SID='sby_2';
2014-04-28 05:12:45.107000 -04:00
Decreasing number of real time LMS from 1 to 0

So there is no more need to worry about local_listener (unless you have many local listeners and are not on 12c), remote_listener, and listener_networks. It's a big relief to see that this works. I can now create the configuration. I prefer the Broker command line interface over the OEM interface, especially since it's a lot easier to show here in the post.

Before you can work with the Broker you need to enable it. Set the following init.ora parameters to values similar to those shown below. It is important that these values are set identically on all instances. You also need to ensure the broker configuration files are in ASM.

SQL> show parameter dg_broker

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      +DATA/ron/d1.dat
dg_broker_config_file2               string      +RECO/ron/d2.dat
dg_broker_start                      boolean     TRUE
SQL>
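If the parameters are not set yet, commands along these lines do the trick. The ASM paths shown match the values above; adjust them to your environment, and run this on every instance:

```sql
-- requires SYSDBA; the broker configuration files must live in ASM for RAC
alter system set dg_broker_config_file1 = '+DATA/ron/d1.dat' scope=both sid='*';
alter system set dg_broker_config_file2 = '+RECO/ron/d2.dat' scope=both sid='*';
-- starting the broker spawns the DMON background process seen in the alert.log
alter system set dg_broker_start = true scope=both sid='*';
```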

Make sure to apply these settings on both clusters! Now connect using the command line interface and create the configuration. Note that I'm connecting using ronprinet2, which is the TNS name for the primary database on the Broker network (192.168.102.0/24).
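For reference, the ronprinet2 entry in tnsnames.ora could look similar to the sketch below. The SCAN name and service name are assumptions following the naming scheme used in this series; only the dedicated network and its SCAN port are taken from the previous parts:

```
# hypothetical entry - host and service names follow the series' naming convention
RONPRINET2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ron12cpri-dgscan.dg.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ron)
    )
  )
```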

[oracle@ron12cprinode1 ~]$ dgmgrl sys/xxx@ronprinet2
DGMGRL for Linux: Version 12.1.0.1.0 - 64bit Production

Copyright (c) 2000, 2012, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> help create configuration

Creates a broker configuration

Syntax:

  CREATE CONFIGURATION <configuration name> AS
    PRIMARY DATABASE IS <database name>
    CONNECT IDENTIFIER IS <connect identifier>;

DGMGRL> create configuration dgtest as primary database is "RON" connect identifier is "ronprinet2";
Configuration "dgtest" created with primary database "RON"

DGMGRL> show database verbose "RON"

Database - RON

  Role:              PRIMARY
  Intended State:    OFFLINE
  Instance(s):
    pri_2

  Properties:
    DGConnectIdentifier             = 'ronprinet2'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    RedoRoutes                      = ''
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '0'
    TransportDisconnectedThreshold  = '30'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.51)
(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ron_DGMGRL)(INSTANCE_NAME=pri_2)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
DISABLED

DGMGRL>

A few more things are worth mentioning here. First of all, the database does not know about the other instance. That's nothing to worry about: you will see how this is addressed during the online relocation. The StaticConnectIdentifier is also pointing to the wrong network. Let's have a look at the only instance:

DGMGRL> show instance verbose pri_2

Instance 'pri_2' of database 'RON'

  Host Name: ron12cprinode1.example.com
  PFILE:
  Properties:
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.51)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=ron_DGMGRL)(INSTANCE_NAME=pri_2)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Instance Status:
DISABLED

Unsurprisingly the StaticConnectIdentifier is pointing to the wrong interface again. Let’s correct this.

DGMGRL> edit instance pri_2 set property StaticConnectIdentifier=
> '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.51)(PORT=1522))
> (CONNECT_DATA=(SERVICE_NAME=ron_DGMGRL)(INSTANCE_NAME=pri_2)(SERVER=DEDICATED)))';
Property "staticconnectidentifier" updated

This command updated the setting on the database and instance level. OK, let’s move on. I need to add the standby database.

DGMGRL> add database "RONDG" AS CONNECT IDENTIFIER IS ronsbynet2 MAINTAINED AS PHYSICAL;
Database "RONDG" added

DGMGRL> show database verbose "RONDG"

Database - RONDG

  Role:              PHYSICAL STANDBY
  Intended State:    OFFLINE
  Transport Lag:     (unknown)
  Apply Lag:         (unknown)
  Apply Rate:        (unknown)
  Real Time Query:   OFF
  Instance(s):
    sby_2

  Properties:
    DGConnectIdentifier             = 'ronsbynet2'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    RedoRoutes                      = ''
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '0'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.58)
(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=rondg_DGMGRL)(INSTANCE_NAME=sby_2)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
DISABLED

DGMGRL> show instance verbose sby_2

Instance 'sby_2' of database 'RONDG'

  Host Name: ron12csbynode1.example.com
  PFILE:
  Properties:
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.58)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=rondg_DGMGRL)(INSTANCE_NAME=sby_2)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Instance Status:
DISABLED

Again let’s fix the static connection identifier.

DGMGRL> edit instance sby_2 set property StaticConnectIdentifier =
> '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.58)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=rondg_DGMGRL)
(INSTANCE_NAME=sby_2)(SERVER=DEDICATED)))';
Property "staticconnectidentifier" updated

Now that everything is in place, I can enable the configuration.

DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration

Configuration - dgtest

  Protection Mode: MaxPerformance
  Databases:
  RON   - Primary database
    RONDG - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL>

If you haven't set the StandbyFileManagement property to AUTO yet, please do so. I really don't like MRP0 aborting because a datafile named UNNAMED0000010 has been added; this causes confusion that's better avoided.

If you get errors about missing standby redo logs (SRLs), please add them. I have done so on the standby and primary, but didn't include the output here.
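Standby redo logs are added with SQL along these lines. This is only a sketch: the common rule is one more SRL group per thread than there are online log groups, sized the same as the online redo logs. The sizes and the number of statements to repeat are assumptions, not taken from this environment:

```sql
-- run on both primary and standby; repeat per thread and per required group.
-- 52428800 bytes (50 MB) assumes the online logs are 50 MB as well.
alter database add standby logfile thread 1 size 52428800;
alter database add standby logfile thread 2 size 52428800;
```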

Switchover

That should be all that’s needed in preparation. Since I’m impatient I wanted to see if the switchover works. First I check the configuration:

DGMGRL> show configuration

Configuration - dgtest

  Protection Mode: MaxPerformance
  Databases:
  RON   - Primary database
    RONDG - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Success means: ready to switch over. Let’s try:

DGMGRL> switchover to "RONDG"
Performing switchover NOW, please wait...
Operation requires a connection to instance "sby_2" on database "RONDG"
Connecting to instance "sby_2"...
Connected as SYSDBA.
New primary database "RONDG" is opening...
Operation requires startup of instance "pri_2" on database "RON"
Starting instance "pri_2"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "RONDG"
DGMGRL> show configuration

Configuration - dgtest

  Protection Mode: MaxPerformance
  Databases:
  RONDG - Primary database
    RON   - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Great, so that worked. It was important that the StaticConnectIdentifiers were set correctly. During testing I found that relying on the DGConnectIdentifier did not work when it pointed to the second SCAN listener: the SCAN listeners do not "know" about the statically registered %_DGMGRL services, and you receive the dreaded ORA-12514 "listener does not currently know of service requested in connect descriptor" error. I also had to pick a specific listener for the StaticConnectIdentifier. I wanted to have both instances as potential targets for starting the instance, but that failed too. So in a way I map node 1 to instance sby_2 by pointing the Broker to its listener.
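The %_DGMGRL service mentioned above has to be registered statically with a node listener rather than with SCAN. A static registration on the dedicated network's listener could look roughly like this in listener.ora. The listener name and ORACLE_HOME path are assumptions; the service name and SID match the StaticConnectIdentifier shown earlier:

```
# hypothetical static registration for the broker service on the net2 listener
SID_LIST_LISTENER_NET2 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ron_DGMGRL)
      (SID_NAME = pri_2)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.1/dbhome_1)
    )
  )
```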

Online Relocation

Now on to part 2: what happens to the standby when I perform an online relocation on the primary. Let’s try.

[oracle@ron12csbynode1 admin]$ srvctl status database -d rondg
Instance sby_2 is running on node ron12csbynode1
Online relocation: INACTIVE
[oracle@ron12csbynode1 admin]$ srvctl status service -d rondg
Service ron12c is running on instance(s) sby_2

[oracle@ron12csbynode1 admin]$ srvctl relocate database -d rondg -timeout 2 -verbose -node ron12csbynode2
Configuration updated to two instances
Instance sby_1 started
Services relocated
Waiting for up to 2 minutes for instance sby_2 to stop ...
Instance sby_2 stopped
Configuration updated to one instance
[oracle@ron12csbynode1 admin]$

In the meantime I connected to the primary (while it was relocating) and switched a few logs. This works, as you can see (RON is the standby database now):

2014-05-23 10:30:16.943000 -04:00
Recovery of Online Redo Log: Thread 1 Group 6 Seq 119 Reading mem 0
  Mem# 0: +DATA/RON/ONLINELOG/group_6.280.846066567
  Mem# 1: +RECO/RON/ONLINELOG/group_6.330.846066593
2014-05-23 10:30:42.897000 -04:00
Archived Log entry 226 added for thread 1 sequence 119 ID 0x64c27dbf dest 1:
2014-05-23 10:30:49.482000 -04:00
Media Recovery Waiting for thread 1 sequence 120 (in transit)
2014-05-23 10:30:52.043000 -04:00
Primary database is in MAXIMUM PERFORMANCE mode
Re-archiving standby log 8 thread 2 sequence 97
2014-05-23 10:31:02.879000 -04:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 120 Reading mem 0
  Mem# 0: +DATA/RON/ONLINELOG/group_5.279.846066497
  Mem# 1: +RECO/RON/ONLINELOG/group_5.329.846066535
2014-05-23 10:31:12.400000 -04:00
RFS[8]: Assigned to RFS process (PID:13182)
RFS[8]: Selected log 9 for thread 2 sequence 98 dbid 1681008808 branch 838874793
2014-05-23 10:31:14.987000 -04:00
Archived Log entry 227 added for thread 2 sequence 97 ID 0x64c27dbf dest 1:
2014-05-23 10:31:50.408000 -04:00
Media Recovery Waiting for thread 2 sequence 98 (in transit)

2014-05-23 10:32:02.193000 -04:00
RFS[8]: Selected log 8 for thread 2 sequence 99 dbid 1681008808 branch 838874793
2014-05-23 10:32:05.872000 -04:00
Recovery of Online Redo Log: Thread 2 Group 9 Seq 98 Reading mem 0
  Mem# 0: +DATA/RON/ONLINELOG/group_9.283.846066895
  Mem# 1: +RECO/RON/ONLINELOG/group_9.333.846066947

The logs are from two threads: the relocation temporarily changes the single-instance database to a cluster database by starting instances on both hosts.

Eventually instance sby_2 stops and causes TNS errors in the standby’s alert.log:

***********************************************************************

Fatal NI connect error 12528, connecting to:
 (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ron12csby-dgscan.dg.example.com)(PORT=1521))
(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=RONDG_DGB)(INSTANCE_NAME=sby_2)(CID=(PROGRAM=oracle)
(HOST=ron12cprinode1.example.com)(USER=oracle))))

  VERSION INFORMATION:
TNS for Linux: Version 12.1.0.1.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 12.1.0.1.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 12.1.0.1.0 - Production
  Time: 23-MAY-2014 10:34:25
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12564
...

Nothing to be alarmed about: this is normal, and it is reflected in the output of the relocate command.

The good news is that the primary receives more redo:

2014-05-23 10:35:40.756000 -04:00
RFS[9]: Assigned to RFS process (PID:13329)
RFS[9]: Selected log 9 for thread 2 sequence 100 dbid 1681008808 branch 838874793
2014-05-23 10:36:08.256000 -04:00
Archived Log entry 231 added for thread 2 sequence 100 ID 0x64c27dbf dest 1:
2014-05-23 10:36:14.754000 -04:00
Media Recovery Waiting for thread 2 sequence 101 (in transit)
2014-05-23 10:36:15.972000 -04:00
RFS[7]: Opened log for thread 2 sequence 101 dbid 1681008808 branch 838874793
2014-05-23 10:36:20.162000 -04:00
Archived Log entry 232 added for thread 2 sequence 101 rlc 838874793 ID 0x64c27dbf dest 2:
2014-05-23 10:36:29.656000 -04:00
Media Recovery Log +RECO/RON/ARCHIVELOG/2014_05_23/thread_2_seq_101.393.848313375

And after instance 2 is shut down, you can see redo from thread 1 being transferred.

2014-05-23 10:46:58.257000 -04:00
RFS[6]: Selected log 5 for thread 1 sequence 122 dbid 1681008808 branch 838874793
Archived Log entry 233 added for thread 1 sequence 121 ID 0x64c27dbf dest 1:
2014-05-23 10:47:00.479000 -04:00
Media Recovery Waiting for thread 1 sequence 122 (in transit)
2014-05-23 10:47:03.686000 -04:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 122 Reading mem 0
  Mem# 0: +DATA/RON/ONLINELOG/group_5.279.846066497
  Mem# 1: +RECO/RON/ONLINELOG/group_5.329.846066535

If you are unsure about the thread-to-instance mapping, check v$thread. This example is from the (new) primary:

SQL> select thread#, status, enabled, instance from v$thread;

   THREAD# STATUS ENABLED  INSTANCE
---------- ------ -------- ------------------------------
         1 OPEN   PUBLIC   sby_1
         2 CLOSED PUBLIC   sby_2

In summary, my test showed that an online relocation does not seem to cause trouble for the standby database. The database was pretty much idle during the relocation though, so I still need to run a benchmark such as Swingbench against it to see how the online relocation behaves when the system is under load.

Amending the Broker configuration

Circling back to the introduction: what happens to the DG broker configuration after the database has been a RAC database (albeit briefly)? Here is the output from the configuration after the first relocation:

DGMGRL> show database verbose "RONDG"

Database - RONDG

  Role:              PRIMARY
  Intended State:    TRANSPORT-ON
  Instance(s):
    sby_1
    sby_2

  Properties:
    DGConnectIdentifier             = 'ronsbynet2'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    RedoRoutes                      = ''
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '0'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    StaticConnectIdentifier(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value

Database Status:
SUCCESS

Notice how the second instance (sby_1) has been added. The first question I had: what has this done to the connect identifiers? As expected, the StaticConnectIdentifier of the newly added instance points to the wrong network:

DGMGRL> show instance verbose sby_1;

Instance 'sby_1' of database 'RONDG'

  Host Name: ron12csbynode2.example.com
  PFILE:
  Properties:
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.60)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=rondg_DGMGRL)(INSTANCE_NAME=sby_1)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Instance Status:
SUCCESS

So that needs to be changed as well. It would make sense to systematically perform a relocation as part of the build of the Data Guard environment, on both primary and standby, in order to register all instances and update their properties. Not that it's a big issue: the switchover is most likely not going to fail, but you have to start the database manually if the connection information is wrong.

Summary

It's been quite a journey! This article series explained how to add a second SCAN to the environment, including all the other infrastructure such as VIPs and listeners. You then saw how to duplicate a database for Data Guard, and finally how to create the Broker configuration and test switchover and online relocation.


Mysterious new Oracle compression type

As part of our research for our joint presentation at OGH.nl and E4, my colleague Frits Hoogland made an interesting discovery: he verified the compression format of updated rows that had previously been compressed using Hybrid Columnar Compression (HCC). In that particular example we researched HCC data on Exadata storage. As you may know, reading HCC data in its compressed form is limited to Exadata, Pillar Axiom, and the ZFS Storage Appliance (ZFSSA).

Background

So far I took it for granted that Oracle HCC compressed data can't be updated in place. As has been shown in many posts and books (for example by Kerry Osborne), an update to a row that has been compressed using HCC will trigger the move of that particular row to a new block, and the new block is flagged for OLTP compression. OLTP compression is a variation of BASIC compression, the de-duplication mechanism that Jonathan Lewis expertly covered in a four-part series hosted by All Things Oracle. Even if you think you don't need to bother with BASIC compression you are mistaken: it is still very relevant, especially in the Exadata/HCC context. Before reading on about block changes etc. it might be a good idea to refer to Kerry's blog post mentioned before. If you want to really dive into the matter I suggest two more posts as reference.

This almost starts to feel like a scientific article where you can't see the text for all the footnotes, but credit is due!

An interesting discovery

So, as it happened, Frits discovered a new compression type when updating rows in an HCC-compressed table on 11.2.0.3.22. My examples that follow are on 12.1.0.1.0. Consider this table:
SQL> CREATE TABLE t1_wp
  2  enable row movement
  3  column store
  4  compress for query low
  5  AS
  6  WITH v1 AS
  7  (SELECT rownum n FROM dual CONNECT BY level <= 10000)
  8  SELECT  rownum id,
  9    rpad(rownum,500) t_pad,
 10    TRUNC(sysdate) - 180 + TRUNC((rownum-1)/3)/86400 date_created,
 11    TRUNC(sysdate) - 180 + TRUNC((rownum-1)/3)/86400 + dbms_random.value(1800, 86400)/86400 date_completed,
 12    CASE
 13	 WHEN mod(rownum,100000) = 0
 14	 THEN CAST('RARE' AS VARCHAR2(12))
 15	 WHEN mod(rownum,10000) = 0
 16	 THEN CAST('FAIRLY RARE' AS VARCHAR2(12))
 17	 WHEN mod(rownum,1000) = 0
 18	 THEN CAST('NOT RARE' AS VARCHAR2(12))
 19	 WHEN mod(rownum,100) = 0
 20	 THEN CAST('COMMON' AS	 VARCHAR2(12))
 21	 ELSE CAST('THE REST' AS VARCHAR2(12))
 22    END state,
 23    TO_CHAR(mod(rownum, 4)) spcol
 24  FROM v1,
 25    v1
 26  WHERE rownum <= 1e6;

This creates 1 million rows, HCC-compressed for Query Low. I chose Query Low because it's the easiest to work with: its compression unit size is roughly 32k. The compressed table is quite small:

SQL> select bytes/power(1024,2) m, blocks from user_segments
  2  where segment_name = 'T1_WP';

         M     BLOCKS
---------- ----------
        16       2048

Now what I am after is an update on a specific CU. I want to know the minimum and maximum ID per block:

select min(id),max(id),blockn from (
 select id,DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) , 
   DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) as blockn 
   from martin.t1_wp
) group by blockn order by blockn;

   MIN(ID)    MAX(ID)	  BLOCKN
---------- ---------- ----------
         1       1964     262147
      1965       4194     262150
      4195       6424     262154
      6425       7148     262158
      7149       9375     262161
      9376      11598     262165
     11599      13795     262169
     13796      14970     262173
     14971      17127     262177
     17128      19270     262181
...

And so forth, for a total of 501 rows.

What happens now if I update IDs 1 to 10? Before I answer that question I'd like to show you the compression type of these rows, followed by the block dump of block 262147:

SQL> select rowid, dbms_compression.get_compression_type(user, 'T1_WP', rowid) compressionType 
  2  from t1_wp where id between 1 and 10;

ROWID              COMPRESSIONTYPE
------------------ ---------------
AAAO7sAALAABAADAAA               8
AAAO7sAALAABAADAAB               8
AAAO7sAALAABAADAAC               8
AAAO7sAALAABAADAAD               8
AAAO7sAALAABAADAAE               8
AAAO7sAALAABAADAAF               8
AAAO7sAALAABAADAAG               8
AAAO7sAALAABAADAAH               8
AAAO7sAALAABAADAAI               8
AAAO7sAALAABAADAAJ               8

10 rows selected.

As you can see, the compression type is 8, which maps to Query Low (DBMS_COMPRESSION.COMP_QUERY_LOW). Let's look at the dump:
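The command that produced the dump is not shown in the post. A sketch of how to generate such a dump follows; the data file number is not shown in the original and has to be derived first:

```sql
-- derive the relative file number for the segment (result will differ per database)
select distinct dbms_rowid.rowid_relative_fno(rowid) as file#
  from t1_wp
 where id between 1 and 10;

-- then dump the block; 262147 is the first block from the min/max query above
alter system dump datafile <file#> block 262147;
```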

Block header dump:  0x02c40003
 Object id on Block? Y
 seg/obj: 0xeeec  csc: 0x99a.535752ba  itc: 3  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0x2c40000 ver: 0x01 opc: 0
     inc: 0  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0xffff.000.00000000  0x00000000.0000.00  C---    0  scn 0x099a.535752ba
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x03   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
bdba: 0x02c40003
data_block_dump,data header at 0x7f50921c447c
===============
tsiz: 0x1f80
hsiz: 0x1c
pbl: 0x7f50921c447c
     76543210
flag=-0------
ntab=1
nrow=1
frre=-1
fsbo=0x1c
fseo=0x30
avsp=0x14
tosp=0x14
        r0_9ir2=0x0
        mec_kdbh9ir2=0x0
                      76543210
        shcf_kdbh9ir2=----------
                  76543210
        flag_9ir2=--R-----      Archive compression: Y
                fcls_9ir2[0]={ }
0x16:pti[0]     nrow=1  offs=0
0x1a:pri[0]     offs=0x30
block_row_dump:
tab 0, row 0, @0x30
tl: 8016 fb: --H-F--N lb: 0x0  cc: 1
nrid:  0x02c40004.0
col  0: [8004]
Compression level: 01 (Query Low)
 Length of CU row: 8004
kdzhrh: ------PC- CBLK: 3 Start Slot: 00
 NUMP: 03
 PNUM: 00 POFF: 7964 PRID: 0x02c40004.0
 PNUM: 01 POFF: 15980 PRID: 0x02c40005.0
 PNUM: 02 POFF: 23996 PRID: 0x02c40006.0
*---------
CU header:
CU version: 0   CU magic number: 0x4b445a30
CU checksum: 0x147384ec
CU total length: 28107
CU flags: NC-U-CRD-OP
ncols: 6
nrows: 1964
algo: 0
CU decomp length: 27815   len/value length: 1048622
row pieces per row: 1
num deleted rows: 0
START_CU:
 00 00 1f 44 0f 03 00 00 00 03 00 00 1f 1c 02 c4 00 04 00 00 00 00 3e 6c 02
 c4 00 05 00 00 00 00 5d bc 02 c4 00 06 00 00 00 4b 44 5a 30 ec 84 73 14 00
 00 6d cb eb 06 00 06 07 ac 00 10 00 2e 01 00 00 00 00 00 00 00 00 00 00 00
 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
...

And now for the update, which is a simple "update t1_wp set spcol = 'UPDATED' where id between 1 and 10" affecting exactly 10 rows.

Checking the ROWIDs again:

SQL> select rowid, dbms_compression.get_compression_type(user, 'T1_WP', rowid) compressionType
  2  from t1_wp where id between 1 and 10;

ROWID		   COMPRESSIONTYPE
------------------ ---------------
AAAO7sAALAABAfeAAA               1
AAAO7sAALAABAfeAAB               1
AAAO7sAALAABAfeAAC               1
AAAO7sAALAABAfeAAD               1
AAAO7sAALAABAfeAAE               1
AAAO7sAALAABAfeAAF               1
AAAO7sAALAABAfeAAG               1
AAAO7sAALAABAfeAAH               1
AAAO7sAALAABAfeAAI               1
AAAO7sAALAABAfeAAJ               1

10 rows selected.

So that's interesting: the ROWIDs have changed, and so has the compression type. Type 1 is DBMS_COMPRESSION.COMP_NOCOMPRESS, so at this point the migrated rows are not compressed at all. What if I commit?


SQL> commit;

Commit complete

SQL> select rowid, dbms_compression.get_compression_type(user, 'T1_WP', rowid) compressionType
  2  from t1_wp where id between 1 and 10;

ROWID              COMPRESSIONTYPE
------------------ ---------------
AAAO7sAALAABAfeAAA              64
AAAO7sAALAABAfeAAB              64
AAAO7sAALAABAfeAAC              64
AAAO7sAALAABAfeAAD              64
AAAO7sAALAABAfeAAE              64
AAAO7sAALAABAfeAAF              64
AAAO7sAALAABAfeAAG              64
AAAO7sAALAABAfeAAH              64
AAAO7sAALAABAfeAAI              64
AAAO7sAALAABAfeAAJ              64

10 rows selected.
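As an aside, the raw numbers returned by get_compression_type can be decoded into the corresponding DBMS_COMPRESSION constant names. This is a convenience sketch, not part of the original test; the constant values are those documented for 12.1:

```sql
select rowid,
       decode(dbms_compression.get_compression_type(user, 'T1_WP', rowid),
              1,  'COMP_NOCOMPRESS',
              2,  'COMP_ADVANCED',   -- OLTP compression
              8,  'COMP_QUERY_LOW',
              64, 'COMP_BLOCK',
              'OTHER') as compression_name
  from t1_wp
 where id between 1 and 10;
```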

There it is, the new mysterious compression type 64. To my knowledge it is not documented in 11.2, but it is in 12.1, as "DBMS_COMPRESSION.COMP_BLOCK". What does it look like? Picking the ROWID for id = 10 (AAAO7sAALAABAfeAAJ):

SQL> select id from t1_wp where rowid = 'AAAO7sAALAABAfeAAJ';

        ID
----------
        10

SQL> alter system dump datafile 7 block 46401502;

Block header dump:  0x02c407de
 Object id on Block? Y
 seg/obj: 0xeeec  csc: 0x99a.53575da9  itc: 2  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0x2c40781 ver: 0x01 opc: 0
     inc: 0  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.008.00038f01  0x00000f5d.720e.0a  --U-   15  fsc 0x0000.53576117
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
bdba: 0x02c407de
data_block_dump,data header at 0x7fbc909ea264
===============
tsiz: 0x1f98
hsiz: 0x4c
pbl: 0x7fbc909ea264
     76543210
flag=-0----X-
ntab=2
nrow=20
frre=-1
fsbo=0x4c
fseo=0xbe
avsp=0x72
tosp=0x72
        r0_9ir2=0x1
        mec_kdbh9ir2=0x1
                      76543210
        shcf_kdbh9ir2=----------
                  76543210
        flag_9ir2=--R-LNOC      Archive compression: N
                fcls_9ir2[0]={ }
                perm_9ir2[6]={ 2 3 4 5 0 1 }
0x1c:pti[0]     nrow=5  offs=0
0x20:pti[1]     nrow=15 offs=5
0x24:pri[0]     offs=0x1f5c
0x26:pri[1]     offs=0x1f70
...
block_row_dump:
tab 0, row 0, @0x1f5c
tl: 20 fb: --H-FL-- lb: 0x0  cc: 2
col  0: [ 8]  54 48 45 20 52 45 53 54
col  1: [ 7]  55 50 44 41 54 45 44
bindmp: 00 0d 02 d0 54 48 45 20 52 45 53 54 cf 55 50 44 41 54 45 44
tab 0, row 1, @0x1f70
tl: 10 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 7]  78 71 0b 0d 01 01 04
bindmp: 00 03 cf 78 71 0b 0d 01 01 04
...
tab 0, row 4, @0x1f8e
tl: 10 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 7]  78 71 0b 0d 01 01 01
bindmp: 00 03 cf 78 71 0b 0d 01 01 01
tab 1, row 0, @0x1d55
tl: 519 fb: --H-FL-- lb: 0x1  cc: 6
col  0: [ 8]  54 48 45 20 52 45 53 54
col  1: [ 7]  55 50 44 41 54 45 44
col  2: [ 2]  c1 02
col  3: [500]
 31 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
...
col  4: [ 7]  78 71 0b 0d 01 01 01
col  5: [ 7]  78 71 0b 0d 0b 09 01
bindmp: 2c 01 05 00 ca c1 02 fa 01 f4 31 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 04 cf 78
 71 0b 0d 0b 09 01
tab 1, row 1, @0x1b4e
....

I have not come to a conclusion about this yet. There are similarities to the BASIC compression block dump in that there are two tables (ntab=2), and the first one looks a bit like a symbol table to me. It also uses the familiar perm_9ir2 field to let us know that the column order was changed. The flag is new though, or at least new to me. The only reference I could find to flag=-0----X- was in the bug database, interestingly related to an OLTP-compressed block. The tl (total length) of the first row in tab 1 does not compute either, something I have only seen with BASIC/OLTP compressed data blocks.

The problem is that I can’t apply the interpretation used for BASIC compression to the bindmp. Take the bindmp of tab 1, row 0:

bindmp: 2c 01 05 00 ca c1 02 fa 01 f4 31 20 20 20 20 20 20 20 20 20 20 20 20…

Going by the rules I know, the first byte is the flag byte (2c = --H-FL--), followed by the lock byte (01), followed by the number of columns stored at this location (05); after that the way of explaining the bindmp for OLTP/BASIC compressed data falls apart. What is interesting, though, is that there seems to be a pattern:

> grep bindmp orcl_ora_30181.trc | while read line; do echo "${line:0:64}"; done
bindmp: 00 0d 02 d0 54 48 45 20 52 45 53 54 cf 55 50 44 41 54 45
bindmp: 00 03 cf 78 71 0b 0d 01 01 04
bindmp: 00 03 cf 78 71 0b 0d 01 01 03
bindmp: 00 03 cf 78 71 0b 0d 01 01 02
bindmp: 00 03 cf 78 71 0b 0d 01 01 01
bindmp: 2c 01 05 00 ca c1 02 fa 01 f4 31 20 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 03 fa 01 f4 32 20 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 04 fa 01 f4 33 20 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 05 fa 01 f4 34 20 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 06 fa 01 f4 35 20 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 07 fa 01 f4 36 20 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 08 fa 01 f4 37 20 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 09 fa 01 f4 38 20 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 0a fa 01 f4 39 20 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 0b fa 01 f4 31 30 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 0c fa 01 f4 31 31 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 0d fa 01 f4 31 32 20 20 20 20 20 20 20
bindmp: 2c 01 05 00 ca c1 0e fa 01 f4 31 33 20 20 20 20 20 20 20
bindmp: 2c 01 06 d0 54 48 45 20 52 45 53 54 cf 55 50 44 41 54 45
bindmp: 2c 01 06 d0 54 48 45 20 52 45 53 54 cf 55 50 44 41 54 45
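The flag-byte decoding mentioned above can be sketched in a few lines of bash. The bit-to-letter mapping K C H D F L P N (high bit to low bit) is my assumption, inferred from the way 0x2c is printed as --H-FL-- in the dump:

```shell
# Sketch: decode the row-piece flag byte (fb) printed in block dumps.
# Assumed bit layout, high to low: K C H D F L P N.
decode_fb() {
  local fb=$(( 16#$1 )) letters="KCHDFLPN" out="" i
  for i in 0 1 2 3 4 5 6 7; do
    if (( fb & (0x80 >> i) )); then
      out+="${letters:$i:1}"     # bit set: print the flag letter
    else
      out+="-"                   # bit clear: placeholder dash
    fi
  done
  printf '%s\n' "$out"
}

decode_fb 2c   # --H-FL--
```

With that, 2c decodes to head of row piece (H), first (F) and last (L) data piece, which matches the fb column of the uncompressed rows in the dump.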

In another part of this article I will try to work out what exactly is represented by the bindmp.

RAC 12c enhancements: adding an additional SCAN-part 3

Travel time can be writing time, and while sitting in the departure lounge waiting for my flight I am using the opportunity to add part 3 of the series. In the previous two parts you could read how to add a second SCAN and the necessary infrastructure to the cluster. Now it is time to create the standby database. It is assumed that a RAC One Node database has already been created on the primary cluster and is running in archivelog mode.

Static Registration with the Listeners

The first step is to statically register the databases with their respective listeners, on the primary cluster first and on the standby cluster next. The registration is needed during switchover operations, when the broker restarts databases as needed: without static registration you cannot connect to a database remotely while it is shut down.

# static registration on the primary cluster-$GRID_HOME/network/admin/listener.ora
SID_LIST_LISTENER_DG=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=RON)
      (ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1)
      (SID_NAME=pri_1)
    )
    (SID_DESC=
      (GLOBAL_DBNAME=RON)
      (ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1)
      (SID_NAME=pri_2)
    )

    (SID_DESC=
      (GLOBAL_DBNAME=RON_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1)
      (SID_NAME=pri_1)
    )
    (SID_DESC=
      (GLOBAL_DBNAME=RON_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1)
      (SID_NAME=pri_2)
    )
  )

# static registration on the standby cluster-$GRID_HOME/network/admin/listener.ora
SID_LIST_LISTENER_DG=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=rondg)
      (ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1)
      (SID_NAME=sby_1)
    )
    (SID_DESC=
      (GLOBAL_DBNAME=rondg)
      (ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1)
      (SID_NAME=sby_2)
    )

    (SID_DESC=
      (GLOBAL_DBNAME=rondg_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1)
      (SID_NAME=sby_1)
    )
    (SID_DESC=
      (GLOBAL_DBNAME=rondg_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1)
      (SID_NAME=sby_2)
    )
  )

The static listener registration has to be performed on each cluster node, primary and standby cluster alike.

Database Duplication-Creating the Physical Standby

With the listener registration complete you can start the duplication. I am using backup-based duplication here; you could equally go for duplication from the active database. To perform a backup-based duplication you obviously need a backup first. I created it on the primary:

RMAN> backup incremental level 0 database format '/u01/oraback/%U';

Starting backup at 28-APR-14
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=+DATA/RON/DATAFILE/sysaux.257.838874315
input datafile file number=00006 name=+DATA/RON/DATAFILE/users.259.838874605
channel ORA_DISK_1: starting piece 1 at 28-APR-14
channel ORA_DISK_2: starting incremental level 0 datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/RON/DATAFILE/system.258.838874441
input datafile file number=00004 name=+DATA/RON/DATAFILE/undotbs1.260.838874611
input datafile file number=00008 name=+DATA/RON/DATAFILE/undotbs2.268.838875559
channel ORA_DISK_2: starting piece 1 at 28-APR-14
channel ORA_DISK_1: finished piece 1 at 28-APR-14
piece handle=/u01/oraback/0pp6r8f8_1_1 tag=TAG20140428T044110 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:48
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=+DATA/RON/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.272.838876477
input datafile file number=00011 name=+DATA/RON/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.275.838876481
channel ORA_DISK_1: starting piece 1 at 28-APR-14
channel ORA_DISK_2: finished piece 1 at 28-APR-14
piece handle=/u01/oraback/0qp6r8f8_1_1 tag=TAG20140428T044110 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:53
channel ORA_DISK_2: starting incremental level 0 datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00012 name=+DATA/RON/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.273.838876479
input datafile file number=00009 name=+DATA/RON/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.274.838876479
channel ORA_DISK_2: starting piece 1 at 28-APR-14
channel ORA_DISK_1: finished piece 1 at 28-APR-14
piece handle=/u01/oraback/0rp6r8gp_1_1 tag=TAG20140428T044110 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:29
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=+DATA/RON/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.265.838874953
channel ORA_DISK_1: starting piece 1 at 28-APR-14
channel ORA_DISK_2: finished piece 1 at 28-APR-14
piece handle=/u01/oraback/0sp6r8h1_1_1 tag=TAG20140428T044110 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_2: starting incremental level 0 datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/RON/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.266.838874953
channel ORA_DISK_2: starting piece 1 at 28-APR-14
...
channel ORA_DISK_2: finished piece 1 at 28-APR-14
piece handle=/u01/oraback/0up6r8ht_1_1 tag=TAG20140428T044110 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:25
Finished backup at 28-APR-14

Starting Control File and SPFILE Autobackup at 28-APR-14
piece handle=+RECO/RON/AUTOBACKUP/2014_04_28/s_846045788.270.846045815 comment=NONE
Finished Control File and SPFILE Autobackup at 28-APR-14

RMAN> backup current controlfile for standby;

Starting backup at 28-APR-14
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=282 instance=pri_1 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=43 instance=pri_1 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including standby control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-APR-14
channel ORA_DISK_1: finished piece 1 at 28-APR-14
piece handle=/u01/oraback/10p6rafh_1_1 tag=TAG20140428T051528 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 28-APR-14

Starting Control File and SPFILE Autobackup at 28-APR-14
piece handle=+RECO/RON/AUTOBACKUP/2014_04_28/s_846047742.271.846047749 comment=NONE
Finished Control File and SPFILE Autobackup at 28-APR-14

RMAN>

Don’t forget to back up the current controlfile for a standby database – that’s immensely important for backup-based duplication, but is taken care of automatically by duplication “from active database”. Once the backup is created, ensure it is available on the standby host; in my case I am using the same backup location on the primary and on the standby. Being a good citizen, and because I want to test active duplication from backup later, I created a new TNS entry in /u01/app/oracle/product/12.1.0.1/dbhome_1/network/admin/tnsnames.ora:

SETUP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.102.58)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rondg)
      (INSTANCE_NAME = sby_2)
    )
  )

This is needed since there won’t be a cross-registration of the new SCAN listener with the listener_dg:

[oracle@ron12csbynode1 ~]$ lsnrctl status listener_dg

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 28-APR-2014 04:57:13

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_DG
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                23-APR-2014 10:54:58
Uptime                    4 days 18 hr. 2 min. 15 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.1/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12csbynode1/listener_dg/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.102.58)(PORT=1522)))
Services Summary...
Service "rondg" has 2 instance(s).
  Instance "sby_1", status UNKNOWN, has 1 handler(s) for this service...
  Instance "sby_2", status UNKNOWN, has 1 handler(s) for this service...
Service "rondg_DGMGRL" has 2 instance(s).
  Instance "sby_1", status UNKNOWN, has 1 handler(s) for this service...
  Instance "sby_2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@ron12csbynode1 ~]$

If you are not using active duplication you also need to take care of the password file. After a lot of trial and error it became apparent that a) you can’t have the password file in ASM and b) you need to copy the original password file from the primary database. If the source password file is in ASM you can use asmcmd cp +data/RON/orapwRON.ora /tmp to copy it out of ASM. When I tried to create a password file locally I could not connect: every time I tried I got ORA-01033 “ORACLE initialization or shutdown in progress”. For the purpose of the duplication I am using node 1 on the standby cluster. Later on you can extend the database to both nodes.

You will also need an initialization parameter file (pfile) to start the standby database; here is what I used:

[oracle@ron12csbynode1 dbs]$ cat initsby.ora
*.archive_lag_target=0
*.audit_file_dest='/u01/app/oracle/admin/rondg/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.0.0'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='ron'
*.db_unique_name='rondg'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=5025m
*.dg_broker_start=TRUE
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=priXDB)'
*.enable_pluggable_database=true
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST','valid_for=(ALL_LOGFILES, ALL_ROLES)'
*.nls_language='ENGLISH'
*.nls_territory='UNITED KINGDOM'
*.open_cursors=300
*.pga_aggregate_target=512m
*.processes=300
*.remote_login_passwordfile='exclusive'
*.sga_target=1024m
*.standby_file_management='auto'

Since the SIDs will be sby_1 and sby_2 I created symlinks for these, pointing to the pfile. With everything in place it was time to duplicate!
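(The symlink step can be sketched as below; a scratch directory stands in for $ORACLE_HOME/dbs here, on a real system you would run the ln commands in the dbs directory of the RDBMS home.)

```shell
# Sketch: per-SID pfile symlinks so that instances sby_1 and sby_2
# both resolve to the shared initsby.ora via the init<SID>.ora naming
# convention. mktemp -d stands in for $ORACLE_HOME/dbs.
dbs=$(mktemp -d)
touch "$dbs/initsby.ora"                  # the pfile shown above
ln -s initsby.ora "$dbs/initsby_1.ora"    # picked up by SID sby_1
ln -s initsby.ora "$dbs/initsby_2.ora"    # picked up by SID sby_2
readlink "$dbs/initsby_1.ora"             # initsby.ora
```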

[oracle@ron12cprinode1 oraback]$ rman target sys/secret@ronprinet2 auxiliary sys/secret@setup

Recovery Manager: Release 12.1.0.1.0 - Production on Mon Apr 28 05:14:35 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RON (DBID=1681008808)
connected to auxiliary database: RON (not mounted)

RMAN> duplicate target database for standby;

...

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=18 STAMP=846048497 file name=+DATA/RONDG/DATAFILE/system.271.846047875
datafile 3 switched to datafile copy
input datafile copy RECID=19 STAMP=846048500 file name=+DATA/RONDG/DATAFILE/sysaux.272.846047875
datafile 4 switched to datafile copy
input datafile copy RECID=20 STAMP=846048502 file name=+DATA/RONDG/DATAFILE/undotbs1.270.846047877
datafile 5 switched to datafile copy
input datafile copy RECID=21 STAMP=846048503 file name=+DATA/RONDG/F1CEE6116436678FE0433264A8C016E9/DATAFILE/system.278.846048325
datafile 6 switched to datafile copy
input datafile copy RECID=22 STAMP=846048505 file name=+DATA/RONDG/DATAFILE/users.269.846047877
datafile 7 switched to datafile copy
input datafile copy RECID=23 STAMP=846048506 file name=+DATA/RONDG/F1CEE6116436678FE0433264A8C016E9/DATAFILE/sysaux.277.846048381
datafile 8 switched to datafile copy
input datafile copy RECID=24 STAMP=846048508 file name=+DATA/RONDG/DATAFILE/undotbs2.256.846047879
datafile 9 switched to datafile copy
input datafile copy RECID=25 STAMP=846048510 file name=+DATA/RONDG/F1CF406751F56F3EE0433264A8C0E61D/DATAFILE/system.275.846048145
datafile 10 switched to datafile copy
input datafile copy RECID=26 STAMP=846048512 file name=+DATA/RONDG/F1CF406751F56F3EE0433264A8C0E61D/DATAFILE/sysaux.274.846048189
datafile 11 switched to datafile copy
input datafile copy RECID=27 STAMP=846048513 file name=+DATA/RONDG/F1CF406751F56F3EE0433264A8C0E61D/DATAFILE/users.257.846048191
datafile 12 switched to datafile copy
input datafile copy RECID=28 STAMP=846048515 file name=+DATA/RONDG/F1CF406751F56F3EE0433264A8C0E61D/DATAFILE/example.276.846048143
Finished Duplicate Db at 28-APR-14

RMAN>

After the duplication has finished, ensure you switch to an spfile instead of the pfile. You also need to add the *.control_files parameter to the (s)pfile. You can also create the spfile in ASM, as I did here.
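A minimal sketch of that switch in SQL*Plus, assuming the standby instance is mounted; the control file name below is a placeholder, the real OMF names on your system will differ:

```sql
-- Sketch only: find the control files the duplication created ...
select value from v$parameter where name = 'control_files';

-- ... add the result to the pfile, e.g. (placeholder name):
-- *.control_files='+DATA/RONDG/CONTROLFILE/current.nnn.nnnnnnnnn'

-- then create the spfile in ASM from the edited pfile:
create spfile='+DATA/RONDG/spfilesby.ora'
  from pfile='/u01/app/oracle/product/12.1.0.1/dbhome_1/dbs/initsby.ora';
```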

Registration with Clusterware

With the duplication complete you can register the database and a service with Clusterware:

[oracle@ron12csbynode1 ~]$ srvctl add database -d rondg -oraclehome /u01/app/oracle/product/12.1.0.1/dbhome_1 \
> -dbtype RACONENODE -server ron12csbynode1,ron12csbynode2 -instance sby -spfile '+DATA/RONDG/spfilesby.ora' \
> -role PHYSICAL_STANDBY -startoption MOUNT -diskgroup DATA,RECO

[oracle@ron12csbynode1 ~]$ srvctl add service -d rondg -s ron12c

You don’t need to start that service now; it will be started when the database is opened:

[oracle@ron12csbynode1 ~]$ crsctl stat res ora.rondg.ron12c.svc -p
NAME=ora.rondg.ron12c.svc
TYPE=ora.service.type
...
START_DEPENDENCIES=hard(ora.rondg.db,type:ora.cluster_vip_net1.type) \
weak(type:ora.listener.type) dispersion(type:ora.service.type) \
pullup(type:ora.cluster_vip_net1.type) pullup:always(ora.rondg.db)
...

Done for now

This concludes the third part of this article. We are getting closer! There is a primary and a standby database now, and all that remains to be done is the creation of the Data Guard configuration to start log shipping. You can read about that in part 4. The final part will also include demonstrations of Data Guard behaviour when you relocate the instance from node 1 to node 2. I will also show you how to perform a switchover operation.