
Creating a RAC 12.1 Data Guard Physical Standby environment (3)

In the previous two parts of this series you read about my lab environment and the preparations on the network side as well as the database. In this part I’ll cover the database duplication. Again, this won’t be a short post …

NOTE: As always, this is just a demonstration using VMs in my lab, based on my notes. Your system is most likely different, so in real-life you might take a different approach. The techniques I am using here were suitable for me, and my own small scale testing. I tried to make sure they are valid, but you may want to allocate more resources in your environment. Test, test, test on your own environment on test kit first!

Step 1: Create an initialisation file

The first step is the preparation of an initialisation file. I am taking NCDBA's spfile as the template, creating a pfile from it and transferring that over to rac12sec1:

SQL> create pfile='/tmp/initNCDBA1.ora' from spfile;

File created.

[oracle@rac12pri1 ~]$ scp /tmp/initNCDBA1.ora rac12sec1:/tmp
oracle@rac12sec1's password: 
initNCDBA1.ora                                            100% 1515     1.5KB/s   00:00    
[oracle@rac12pri1 ~]$ 

The file is a bit verbose and needs some editing. I appreciate that RMAN replaces the file with its own copy anyway, but I like to keep things tidy, at least at first … The end result is shown here – as always, make sure that your version of the pfile matches your site's requirements and standards. The important parts are the directory paths and that db_name and db_unique_name are set correctly.

*.audit_file_dest='/u01/app/oracle/admin/NCDBB/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='NCDBA'
*.db_unique_name='NCDBB'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=4785m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=NCDBBXDB)'
NCDBB1.instance_number=1
NCDBB2.instance_number=2
*.open_cursors=300
*.pga_aggregate_target=1024m
*.processes=300
*.remote_login_passwordfile='exclusive'
*.sga_target=3072m
NCDBB2.thread=2
NCDBB1.thread=1
NCDBB1.undo_tablespace='UNDOTBS1'
NCDBB2.undo_tablespace='UNDOTBS2'

The file needs to be transferred to $ORACLE_HOME/dbs as initNCDBB1.ora.
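
In case you are wondering how that looks in practice: assuming the edited copy still lives in /tmp on rac12sec1 and kept its original name (adjust to taste), a simple copy into place is all that is needed:

[oracle@rac12sec1 ~]$ cp /tmp/initNCDBA1.ora $ORACLE_HOME/dbs/initNCDBB1.ora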

Step 2: Create a temporary password file

You undoubtedly spotted that remote_login_passwordfile is set: a password file is a prerequisite for connecting to the database as SYS over the network, and without one the active database duplication wouldn't work. Since Oracle takes care of the password file during the RMAN duplicate, it appears to be sufficient to simply create a temporary one in $ORACLE_HOME/dbs on rac12sec1:

[oracle@rac12sec1 dbs]$ orapwd file=orapwNCDBB1

Enter password for SYS:
[oracle@rac12sec1 dbs]$

During my testing this was enough to proceed to the next step.

Step 3: Duplicate the database “for standby”

The system should now be ready for the standby database's creation. It's been a bit of a task to get there, but from now on things will be easier … Using the *_DUP entries in the system's tnsnames.ora file I connect to the target and auxiliary databases, followed by the command to create the standby. In my case the database is tiny and the RMAN defaults are enough; larger databases will probably need different settings and commands.


[oracle@rac12sec1 dbs]$ rman target sys@ncdba_dup auxiliary sys@ncdbb_dup

Recovery Manager: Release 12.1.0.2.0 - Production on Wed Dec 14 10:25:47 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

target database Password: 
connected to target database: NCDBA (DBID=479005270)
auxiliary database Password: 
connected to auxiliary database: NCDBA (DBID=479005270, not open)

RMAN> startup clone nomount

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    3221225472 bytes

Fixed Size                     2929552 bytes
Variable Size                838863984 bytes
Database Buffers            2365587456 bytes
Redo Buffers                  13844480 bytes

RMAN> duplicate target database for standby from active database;

Starting Duplicate Db at 14-DEC-16
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=240 instance=NCDBB1 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   targetfile  '+DATA/NCDBA/PASSWORD/pwdncdba.256.930561891' auxiliary format
 '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/orapwNCDBB1'   ;
}
executing Memory Script

Starting backup at 14-DEC-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=276 instance=NCDBA1 device type=DISK
Finished backup at 14-DEC-16

contents of Memory Script:
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   restore clone from service  'ncdba_dup' standby controlfile;
}  
executing Memory Script

sql statement: create spfile from memory

Oracle instance shut down

sql statement: alter system archive log current

contents of Memory Script:
{  
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=7 STAMP=930565247 file name=+DATA/NCDBB/DATAFILE/system.259.930565213
datafile 2 switched to datafile copy
input datafile copy RECID=8 STAMP=930565247 file name=+DATA/NCDBB/DATAFILE/undotbs2.260.930565221
datafile 3 switched to datafile copy
input datafile copy RECID=9 STAMP=930565247 file name=+DATA/NCDBB/DATAFILE/sysaux.261.930565223
datafile 4 switched to datafile copy
input datafile copy RECID=10 STAMP=930565247 file name=+DATA/NCDBB/DATAFILE/undotbs1.262.930565231
datafile 5 switched to datafile copy
input datafile copy RECID=11 STAMP=930565247 file name=+DATA/NCDBB/DATAFILE/example.263.930565233
datafile 6 switched to datafile copy
input datafile copy RECID=12 STAMP=930565247 file name=+DATA/NCDBB/DATAFILE/users.264.930565241
Finished Duplicate Db at 14-DEC-16

RMAN>

Perfect – the duplicate finished ok and all 6 datafiles have been transferred to my standby cluster. There are a few things worth noting here:

  • The password file of the primary (residing in ASM) has been copied to ‘/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/orapwNCDBB1’ (not in ASM)
  • The standby database’s spfile has been created from memory – a lot of unnecessary and some obsolete parameters are now in the standby database’s spfile

Step 4: Finishing Touches

There are a few things still outstanding before declaring the environment ready. First of all, you need to register the standby database in the Oracle Cluster Registry (OCR). Use srvctl for this task; one way to do this is shown here:

[oracle@rac12sec1 dbs]$ srvctl add database -db NCDBB -oraclehome $ORACLE_HOME -dbtype RAC \
> -role PHYSICAL_STANDBY -startoption MOUNT -dbname NCDBA -diskgroup "DATA,RECO"

[oracle@rac12sec1 dbs]$ srvctl add instance -db NCDBB -instance NCDBB1 -node rac12sec1
[oracle@rac12sec1 dbs]$ srvctl add instance -db NCDBB -instance NCDBB2 -node rac12sec2

When registering your standby database make sure you are license compliant!

Next I move the password file into ASM. The file currently resides on rac12sec1 in the $ORACLE_HOME/dbs directory; you can use the orapwd command to move it. If you registered the database in the OCR as I did in the previous step, the configuration is updated as a result – the password file location is changed automatically.
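
This is roughly what the move looks like – a sketch based on my notes, so double-check the orapwd syntax against your version's documentation. The input_file is the temporary password file created earlier, and the target name matches what ends up in the OCR profile below; the local copy can be removed afterwards to avoid confusion:

[oracle@rac12sec1 dbs]$ orapwd input_file='/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/orapwNCDBB1' \
> file='+DATA/NCDBB/orapwncdbb' dbuniquename=NCDBB
[oracle@rac12sec1 dbs]$ rm orapwNCDBB1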

[oracle@rac12sec1 dbs]$ srvctl config database -db NCDBB | grep -i password
Password file: +DATA/NCDBB/orapwncdbb

Since I like error messages at database startup even less than unnecessary parameters in my spfile, I tend to remove these from the standby database. And besides – in its current form the spfile wouldn’t allow me to mount both RAC instances anyway: for some odd reason the mapping between SIDs and instances is lost during the RMAN duplicate.

So I start off by creating a pfile from the spfile and remove all unneeded underscore parameters as well as the deprecated ones. In the resulting file I make sure that all the necessary RAC parameters (instance_number, thread, undo_tablespace) are present. The minimalistic end result looks like this for my database:

*.audit_file_dest='/u01/app/oracle/admin/NCDBB/adump'
*.audit_trail='DB'
*.cluster_database=TRUE
*.compatible='12.1.0.2.0'
*.control_files='+DATA/NCDBB/CONTROLFILE/current.258.930565201',
   '+RECO/NCDBB/CONTROLFILE/current.258.930565201'#Restore Controlfile
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='NCDBA'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=4785M
*.db_unique_name='NCDBB'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=NCDBBXDB)'
NCDBB1.instance_number=1
NCDBB2.instance_number=2
*.open_cursors=300
*.pga_aggregate_target=1G
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=3G
NCDBB2.thread=2
NCDBB1.thread=1
NCDBB1.undo_tablespace='UNDOTBS1'
NCDBB2.undo_tablespace='UNDOTBS2'

Next I can move the spfile to ASM and update the OCR profile; afterwards the profile reflects the new location of the spfile. Be sure to check all $ORACLE_HOME/dbs directories on the cluster nodes to prevent local pfiles/spfiles from taking precedence over the one referred to in the OCR.
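
The commands I used were along these lines – the pfile location is simply where I happened to save the edited copy (an assumption on my part, adjust as needed), and the spfile path must of course match the OCR profile:

SQL> create spfile='+DATA/NCDBB/spfileNCDBB' from pfile='/tmp/initNCDBB.ora';

[oracle@rac12sec1 dbs]$ srvctl modify database -db NCDBB -spfile +DATA/NCDBB/spfileNCDBB

A restart of the standby instances is needed for them to pick up the relocated spfile.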

[oracle@rac12sec1 dbs]$ srvctl config database -d NCDBB | grep spfile
Spfile: +DATA/NCDBB/spfileNCDBB

Everything is awesome!

It’s been a bit of a journey, but it was worth it. Before continuing with the remaining configuration steps, here is a little bit of detail about the system as it is:

[oracle@rac12sec1 dbs]$ srvctl status database -db NCDBB -verbose
Instance NCDBB1 is running on node rac12sec1. Instance status: Mounted (Closed).
Instance NCDBB2 is running on node rac12sec2. Instance status: Mounted (Closed).
[oracle@rac12sec1 dbs]$ 

SQL> select name,name,db_unique_name,database_role,log_mode,force_logging,flashback_on from v$database;

NAME      NAME      DB_UNIQUE_NAME   DATABASE_ROLE    LOG_MODE     FORCE_LOGGING   FLASHBACK_ON
--------- --------- ---------------- ---------------- ------------ --------------- ------------------
NCDBA     NCDBA     NCDBB            PHYSICAL STANDBY ARCHIVELOG   YES             NO

SQL> select * from v$active_instances;

INST_NUMBER INST_NAME                                                        CON_ID
----------- ------------------------------------------------------------ ----------
          1 rac12sec1:NCDBB1                                                      0
          2 rac12sec2:NCDBB2                                                      0

In my initial drafts I intended to continue this article with the Data Guard configuration, but having looked at the WordPress-formatted part 3 up to here I decided to split it into 2 pieces. Next up is the Data Guard Broker configuration, followed by more posts about switching over and application design considerations. Stay tuned!

Creating a RAC 12.1 Data Guard Physical Standby environment (2)

In the first part of this mini-series you saw me define the environment as well as create a primary database. With that out of the way it’s time to think about the standby. Before the standby can be created, a few preparations are necessary on the primary as well as on the standby cluster.

NOTE: As always, this is just a demonstration using VMs in my lab, based on my notes. Your system is most likely different, so in real-life you might take a different approach. The techniques I am using here were suitable for me, and my own small scale testing. I tried to make sure they are valid, but you may want to allocate more resources in your environment. Test, test, test on your own environment on test kit first!

Preparing the Creation of the Standby Database

It’s easier to follow the discussion if you keep in mind that my primary database is NCDBA and the standby is NCDBB.

Step 1: Listener Configuration

The first step to be completed is the network setup. I am planning on using an active database duplication, which requires the node listeners on the cluster to "know" about my database instances. Thankfully, since 12.1 we no longer need to register the *_DGMGRL service (used by the Broker for switchover operations), which makes life a bit easier. The static registration of databases may not be necessary if you are using a different RMAN duplicate approach.

By default Oracle does not statically register databases, so it’s up to you to do so. On rac12sec1 I added a SID_LIST_LISTENER section (note: only the node listener!) to /u01/app/12.1.0.2/grid/network/admin/listener.ora. Remember that the node listeners reside in the GRID_HOME.

...
SID_LIST_LISTENER =
  (SID_LIST =
...
    (SID_DESC =
      (GLOBAL_DBNAME = NCDBB)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
      (SID_NAME = NCDBB1)
    )
...
  )
...

Likewise, I added a static registration to rac12sec2 – identical to the previous example except for the SID_NAME, which is NCDBB2 of course. Afterwards I reloaded the listeners on both nodes.
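
Reloading is non-disruptive to existing connections; a simple lsnrctl reload from the Grid home on each node does the trick. I am assuming a single software owner and the default node listener name here – in a role-separated setup run this as the grid user instead:

[oracle@rac12sec1 ~]$ /u01/app/12.1.0.2/grid/bin/lsnrctl reload LISTENER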

Step 2: Service Name Configuration

The next file to change is the RDBMS home’s tnsnames.ora. To make sure I cover all eventualities I created the following tnsnames.ora on all cluster nodes, primary and standby RAC:

[oracle@rac12pri1 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

NCDBA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac12pri-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = NCDBA)
    )
  )

NCDBB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac12sec-scan)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = NCDBB)
    )
  )

# only used for RMAN duplicate ... from active database
NCDBA_DUP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac12pri1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac12pri2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = NCDBA)
    )
  )

NCDBB_DUP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac12sec1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac12sec2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = NCDBB)
    )
  )

The first two entries are self-explanatory. The second pair requires a little more explanation. Since I have statically registered my standby database with the node listeners only (i.e. not with the SCAN listeners – I never tried that and don’t like the thought of it), a duplicate connecting via the SCAN would fail when trying to restart the auxiliary instance. I haven’t captured the output of a failed RMAN duplicate, but it’s possible to show the effect of using the "wrong" TNS name with a simple test (the database I want to connect to is of course down):

[oracle@rac12pri1 ~]$ sqlplus sys@NCDBB as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Dec 14 14:36:49 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password: 
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name: 
[oracle@rac12pri1 ~]$ sqlplus sys@NCDBB_DUP as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Dec 14 14:36:59 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password: 
Connected to an idle instance.

SQL> 

So it seems to be better to use the net service name where I can be sure the database can be controlled from a remote session :)

NB: this test required a password file for the standby database which doesn’t exist yet and will be created in part 3.

Step 3: Modify the oratab file

You should also change the oratab file and add NCDBB on both standby hosts, as shown here for rac12sec2.

[oracle@rac12sec2 ~]$ grep NCDBB /etc/oratab
NCDBB:/u01/app/oracle/product/12.1.0.2/dbhome_1:N
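
In case you are wondering how the entry got there – nothing sophisticated; assuming /etc/oratab is writable by the oracle software owner (the usual case), simply appending the line is enough:

[oracle@rac12sec2 ~]$ echo "NCDBB:/u01/app/oracle/product/12.1.0.2/dbhome_1:N" >> /etc/oratab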

Step 4: Prepare the Primary Database: Archivelog mode, Force Logging, and maybe even Flashback Database

There are a number of tasks to be completed on the primary database before I can start duplicating it for a standby environment. I need to make sure it’s in Archivelog mode and that force logging is enabled. If I can afford the overhead, I enable Flashback Database as well to help reinstate a failed primary should the need arise. Of course, you don’t just enable Flashback Database: it can have a negative impact on performance, and the flashback logs use space in the FRA, which can derail your space usage calculation if you are not careful. Thorough testing is needed to assess whether it is ok to operate with Flashback Database enabled.
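
Should you decide the overhead is acceptable, enabling Flashback Database is straightforward – shown here purely as a sketch since I keep it disabled in this lab; the retention target of 1440 minutes (one day) is an arbitrary example value:

SQL> alter system set db_flashback_retention_target=1440 scope=both sid='*';

SQL> alter database flashback on;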

The first task in my scenario is to enable Archivelog mode and to enforce logging.

[oracle@rac12pri1 ~]$ sq

SQL*Plus: Release 12.1.0.2.0 Production on Wed Dec 14 09:52:11 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     15
Current log sequence           16
SQL> exit

...

[oracle@rac12pri1 ~]$ srvctl stop database -d NCDBA
[oracle@rac12pri1 ~]$ sq

SQL*Plus: Release 12.1.0.2.0 Production on Wed Dec 14 10:00:34 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 3221225472 bytes
Fixed Size                  2929552 bytes
Variable Size             838863984 bytes
Database Buffers         2365587456 bytes
Redo Buffers               13844480 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database force logging;

Database altered.

SQL>  alter database open;

Database altered.

SQL> exit

It is easy to check if that was a success:

[oracle@rac12pri1 ~]$ srvctl start instance -db NCDBA -node rac12pri2

[oracle@rac12pri1 ~]$ sq

SQL*Plus: Release 12.1.0.2.0 Production on Wed Dec 14 15:04:28 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     15
Next log sequence to archive   16
Current log sequence           16

Step 5: Add Standby Redo Logs

In order to use Real Time Apply you need Standby Redo Logs. Creating them now (i.e. before the RMAN duplicate) saves time: first of all, you don’t need to create Standby Redo Logs (SRLs) post switchover when NCDBA operates as a standby database; secondly, if SRLs are in place on the primary, they are automatically created on the standby database as part of the RMAN duplicate.

You need n+1 SRLs per thread, where n is the number of online redo log groups per thread. You can work the number out by looking at v$log. My tiny database has 2 threads with 2 groups of 2 members each – ok for the lab, almost certainly not ok for anything serious. It helps me keep the storage footprint in the lab low though.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME     CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ --------- ----------
         1          1         15   52428800        512          2 YES INACTIVE               1621242 14-DEC-16      1621618 14-DEC-16          0
         2          1         16   52428800        512          2 NO  CURRENT                1621618 14-DEC-16   2.8147E+14                    0
         3          2          1   52428800        512          2 YES INACTIVE               1620677 14-DEC-16      1628674 14-DEC-16          0
         4          2          2   52428800        512          2 NO  CURRENT                1629454 14-DEC-16   2.8147E+14 14-DEC-16          0

SQL> 

With that information I can create 3 SRLs per thread. Did I mention that I love Oracle Managed Files?

SQL> alter database add standby logfile thread 1 size 52428800;

Database altered.

SQL> /

Database altered.

SQL> /

Database altered.

SQL> alter database add standby logfile thread 2 size 52428800;

Database altered.

SQL> /

Database altered.

SQL> /

Database altered.

SQL> select group#,thread#,sequence#,bytes,status from v$standby_log;

    GROUP#    THREAD#  SEQUENCE#      BYTES STATUS
---------- ---------- ---------- ---------- ----------
         5          1          0   52428800 UNASSIGNED
         6          1          0   52428800 UNASSIGNED
         7          1          0   52428800 UNASSIGNED
         8          2          0   52428800 UNASSIGNED
         9          2          0   52428800 UNASSIGNED
        10          2          0   52428800 UNASSIGNED

6 rows selected.

The primary database should be in relatively good shape now – Flashback Database is off to simulate the case where my fictional load is considered to impact performance too much.

SQL> select name,db_unique_name,database_role,log_mode,force_logging,flashback_on from v$database;

NAME      DB_UNIQUE_NAME  DATABASE_ROLE    LOG_MODE     FORCE_LOGGING  FLASHBACK_ON
--------- --------------- ---------------- ------------ -------------- ------------------
NCDBA     NCDBA           PRIMARY          ARCHIVELOG   YES            NO

SQL>

Summary

This concludes part 2. In this blog post I shared my notes about configuring the network for the RMAN duplicate … from active database. I also configured the primary database in preparation for the initial duplicate command.

Note that I’m not setting any initialisation parameters related to Data Guard: over time I have become a great fan of the Data Guard Broker and plan on using that instead of a manual configuration.