Martins Blog

Trying to explain complex things in simple terms

Tale of a platform migration Solaris 10 SPARC 10.2.0.5 to Linux 11.2.0.2.6

Posted by Martin Bach on May 14, 2012

This is as much a note to myself on how to do this in the future as it is, hopefully, something worth reading for you. The requirement was as precise as always: migrate a database from 10.2 on SPARC to 11.2 on Linux. In the process, go from Veritas to ASM and make it quick!

I like short briefings, but this was too short. Since the database was reasonably large I opted for the transportable tablespace approach; however, I now think that a massively parallel impdp with network_link could have saved me quite a bit of time.

The following is by no means the complete story, but it hopefully gives you an idea of how to do these things. Always check and document, then test (rinse and repeat). Only when proper signoff is received should you try such a process in production. Remember to script it and have at least one clean run of the scripts! This process is not super-quick; if you have low downtime requirements, consider Streams or, better, GoldenGate instead.

The source database was originally not on the terminal release; due to certain problems with the Data Pump API before 10.2.0.5, it was first patched up to the terminal release. The destination was 11g Release 2 patch set 1 with the April PSU applied, i.e. 11.2.0.2.6.

Things to think about

Since I couldn’t simply go for a subset of the database with my transportable tablespace set (TTS) I had to ensure that a lot of metadata was carried across. Personally I think that TTS works best for tables and indexes!

The process of transporting/converting tablespaces is short and sweet (excluding dealing with the application):

  1. Define a self-contained set of tablespaces. In other words, the tablespaces you export from the source must not contain dictionary references to other, non-exported tablespaces. For instance, you cannot export a tablespace containing a table that has an index in another tablespace outside of the transportable set.
  2. Set the tablespaces you want to export read-only. This is an outage in production! (See the sketch after this list.)
  3. Export the metadata associated with the tablespaces from the source.
  4. Copy the data files to their destination.
  5. Perform the platform conversion.
  6. Optionally make the source tablespaces read-write again. Thanks to Jerry for pointing this out.
  7. Import the tablespace metadata into the destination.
  8. Make the newly plugged-in tablespaces read-write in the destination.
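
As a minimal sketch for step 2, you can generate the read-only statements from the dictionary instead of typing them; the exclusion list below is an assumption and must match your own set of internal tablespaces:

select 'alter tablespace ' || tablespace_name || ' read only;' as cmd
from dba_tablespaces
where contents = 'PERMANENT'
and tablespace_name not in ('SYSTEM', 'SYSAUX');

Spool the output and run it on the source just before the metadata export.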

You can convert the tablespaces and their data files either at the source or at the target; the above assumes the conversion happens on the destination. Since the source database is a clone and not used by anyone, I could actually export the database mountpoint via NFS and perform the copy and conversion into ASM in one step.

So what’s in the TTS at all? This is not so well documented, but visible in the output of the expdp command. I strongly recommend the use of expdp over exp! Since I’m converting LOTS of tablespaces in one go I decided to use a parameter file:

transport_tablespaces=a,b,c,d,e,f,g,h,i,j,k,l,m,n,o
directory=MARTIN
dumpfile=expdp_tts_metadata.dmp
logfile=expdp_tts_metadata.log
transport_full_check=y

The tablespace names were different of course, and a lot longer. Instead of keeping them all on one line (and running into a string buffer bug in expdp) you can create one line per tablespace, which is very nice and makes the file more readable. Note that at this stage you don't need to specify data files, only tablespace names. I didn't run the DBMS_TTS.TRANSPORT_SET_CHECK() procedure you'd normally run to check whether the transportable set is self-contained: remember that I'm taking everything except SYSTEM, SYSAUX, UNDOTBS1, TEMP (and other internal tablespaces). You should definitely run the transport_set_check() procedure!
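
If you do run the check, a minimal sketch looks like this (the tablespace list is a placeholder and must match your transportable set):

exec dbms_tts.transport_set_check('A,B,C,D', incl_constraints => true);

select * from transport_set_violations;

Any rows returned point to objects that straddle the boundary of the transportable set and need to be dealt with before the export.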

With this I can create a dump file:

$ expdp parfile=exp_tts_metadata.par

Export: Release 10.2.0.5.0 - 64bit Production on Friday, 11 May, 2012 4:43:26

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01":  /******** AS SYSDBA parfile=exp_tts_metadata.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/POST_TABLE_ACTION
Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW
Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW_LOG
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
  /export/home/oracle/expdp_tts_metadata.dmp
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 04:55:40

From the output you can see that it took quite a bit of information with it, all of which belongs to the data stored in the transported tablespaces. The bit to take away is that object grants do not need to be exported separately: they are part of the TTS metadata dump and are created later when you plug the tablespaces in.

Metadata 

The documentation will tell you that the users the data belongs to must exist in the destination database. For most data loading operations in data warehouses that's not a problem, but when you are migrating a whole database you need to be careful. For my migration I needed these:

  • (Public) synonyms
  • Sequences
  • System grants
  • Roles
  • Packages
  • Package bodies
  • Functions
  • Procedures

You might need more: check the DATABASE_EXPORT_OBJECTS view for the available object paths and compare them with the objects in your database. Maybe you are using contexts for VPD, or types for pipelined functions? Database links? You get the idea.
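
As a minimal sketch for browsing the available paths (nothing here is specific to this migration, it simply queries the documented view):

select object_path, comments
from database_export_objects
where object_path like '%GRANT%';

Drop the filter to see the full list of paths you can use with INCLUDE and EXCLUDE.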

To get and save the metadata I decided to do a full metadata export of the database, as in

$ expdp full=y content=metadata_only dumpfile=exp_full_metadata.dmp logfile=exp_full_metadata.log exclude=table exclude=index

Depending on how busy your system is, this can take a while. On this multi-TB database it helped to add the exclude parameters for tables and indexes.

Based on the metadata dump file you can do all sorts of magic, such as extracting a lot of DDL. I decided against generating the SQL for roles, as the resulting sqlfile has revoke commands embedded in it. Roles can only be exported at the database-export level, and I didn't want to waste time tweaking the command. So I went for a simpler alternative:

set verify off
set long 10000000
set feedback off
set heading off
set pagesize 0
set linesize 220
set echo off

col cmd for a200 wrap

exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);

spool rolegrants_&&role..sql

SELECT DBMS_METADATA.GET_DDL('ROLE', '&&role')||';' cmd FROM dual;

SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', '&&role') cmd FROM dual;

SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', '&&role') cmd FROM dual;

spool off

This little script prompts you for a role to export, gets the role's DDL along with its granted roles and system grants, and saves all of that in a file named rolegrants_<role>.sql for later execution. It may produce a few ORA- errors when there are no role or system grants to find, but that was not a problem when I ran the file in the destination database. The script will not overwrite or otherwise alter existing roles, so it is safe to run: any conflicts will throw an error, and there are only additional grants, no revokes. Notice how there is no query for OBJECT_GRANT; these are already in the TTS metadata export.
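
A quick usage sketch, assuming the script above is saved as rolegrants.sql (the file name is mine) and APP_ROLE is a placeholder role:

SQL> @rolegrants
Enter value for role: APP_ROLE

-- later, in the destination database:
SQL> @rolegrants_APP_ROLE.sql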

The next step is to create all the users. This is a two-part process: in part one, before the TTS metadata import, all the users need to exist, but they do not yet need all of their object and role grants. The ones that must definitely exist are the owners of the data, which you can identify in the data dictionary using this query:

select count(owner),owner
from dba_objects
group by owner
order by owner;

Be sure to ignore the Oracle internal users (XDB, ORDSYS, MDSYS, SYS, SYSTEM, etc.). Since I'm lazy, I used the full database metadata export to generate the users, via a Data Pump import that only writes a SQL file:

 directory=MARTIN
 dumpfile=exp_full_metadata.dmp
 sqlfile=precreate_dict.sql
 logfile=precreate_dict.log
 INCLUDE=USER
 INCLUDE=SEQUENCE
 include=PROFILE
 remap_tablespace=A:DEFAULT_TBS
 remap_tablespace=B:DEFAULT_TBS
 remap_tablespace=C:DEFAULT_TBS
 remap_tablespace=D:DEFAULT_TBS
 ...
 remap_tablespace=O:DEFAULT_TBS
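
A sketch of the corresponding run, assuming the parameter file above is saved as precreate_dict.par (the file name is mine); the sqlfile parameter means nothing is imported, the DDL is only written to precreate_dict.sql in the MARTIN directory:

$ impdp parfile=precreate_dict.par

Username: / as sysdba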
 

I reviewed and executed the resulting script. I then used the following script to export the system privileges and role grants for the data owners:

set trimspool on
set verify off
set long 10000000
set feedback off
set heading off
set pagesize 0
set linesize 220
set echo off

col text for a200 wrap

exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);

spool usergrants_&&user..sql

SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', '&&user') text FROM dual;

SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', '&&user') text FROM dual;

exit

Change as necessary; this is for one user only. The purpose is the same as described above for the roles.

RMAN convert

Quite simple if you know how! I shut down the source (because I could!) and exported the /u01/oradata mountpoint via NFS to the Linux host, where it was mounted under /mnt. The RMAN convert script has the following format:

convert datafile
'/mnt/.../file1.dbf',
...
'/mnt/.../file500.dbf'
TO PLATFORM="Linux x86 64-bit"
FROM PLATFORM="Solaris[tm] OE (64-bit)"
parallelism=4
format='+DATA';

You can generate the datafile clause with something like select '''' || replace(file_name, '/u01/oradata/', '/mnt/') || ''',' from dba_data_files if there are really lots of them. Also note how the parallelism is set to 4 and how the format indicates that the converted files should go straight into ASM; there is no need for an intermediate step.
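
Spelled out as a runnable SQL*Plus sketch on the source (the /u01/oradata to /mnt mapping is the one used here, and the spool file name is mine):

set pagesize 0 linesize 200 feedback off trimspool on
spool convert_datafiles.lst
select '''' || replace(file_name, '/u01/oradata/', '/mnt/') || ''','
from dba_data_files
order by file_name;
spool off

Remember to remove the trailing comma from the last file and to wrap the list in the convert datafile ... TO PLATFORM/FROM PLATFORM clause shown above.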

When you run the RMAN script, be sure to capture the output to see whether it all went OK. Depending on your network you might have to reduce the parallelism. Unless you are good at network troubleshooting and monitoring it is difficult to work out how long the conversion will take; best to let it run in a screen session overnight.
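
A minimal sketch of such a run (the script and log file names are mine):

$ screen -S rman_convert
$ rman target / cmdfile=convert_datafiles.rman log=convert_datafiles.log

The log parameter keeps a copy of the RMAN output so you can check it after detaching from the screen session.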

FAST FORWARD …

Plug them in

With all the necessary dictionary metadata imported, and the conversion finished, it's time to plug the tablespaces into the destination! That requires a fairly lengthy parameter file when you have lots of data files. The file has this format:

dumpfile=expdp_tts_metadata.dmp
logfile=impdp_tts_metadata.log
DIRECTORY=MARTIN
TRANSPORT_DATAFILES='+data/newprod/datafile/A.751.782983329'
TRANSPORT_DATAFILES='+data/newprod/datafile/A.442.782934761'
...
TRANSPORT_DATAFILES='+data/newprod/datafile/N.495.782932701'

This is a critical point; go over your checklist. Some checks you might want to consider:

  • All users created
  • All roles created
  • Privileges dealt with
  • The number of data files to be plugged in does not exceed the control file maximum or the db_files parameter. You may have to recreate the destination's control file and raise MAXDATAFILES in the CREATE CONTROLFILE statement (see the sketch below).
  • No conflicting tablespace names (USERS, TOOLS?) in destination
  • Non-default cache sizes (db_nk_cache_size)
  • etc

This list is NOT exhaustive! Ensure you have those adequately covered.
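
For the data file limits mentioned in the checklist, a minimal check on the destination could look like this (nothing here is specific to this migration):

show parameter db_files

select type, records_total, records_used
from v$controlfile_record_section
where type = 'DATAFILE';

Compare db_files and RECORDS_TOTAL with the number of existing data files plus the files you are about to plug in.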

Then use the impdp command with the previously created parameter file and off you go!
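
As a sketch, assuming the parameter file above is saved as impdp_tts_plugin.par (the name is mine):

$ impdp parfile=impdp_tts_plugin.par

Username: / as sysdba

Once the import completes, remember to make the freshly plugged-in tablespaces read-write in the destination (step 8 of the list at the top).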

Aftermath and Summary

You may have to fine-tune grants and privileges for individual users, and copy stored procedures and the like over to the destination. You can again make use of DBMS_METADATA.
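
For example, a minimal sketch that extracts the DDL of a package from the source (the schema and package names are placeholders):

set long 10000000 pagesize 0 linesize 220
exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);

select dbms_metadata.get_ddl('PACKAGE', 'MY_PACKAGE', 'APP_OWNER') from dual;

Spool the output to a file and run it in the destination database.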

Some object types might prove problematic; check XMLTypes, materialized views and other replication mechanisms. Compare object counts in DBA_OBJECTS on both sides and script the missing objects so they are created automatically the next time you run the migration.
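
A hedged sketch for that comparison, run on the destination over a database link back to the source (the link name and the exclusion list are placeholders):

select owner, object_type, count(*)
from dba_objects@source_db
where owner not in ('SYS', 'SYSTEM', 'XDB', 'ORDSYS', 'MDSYS')
group by owner, object_type
minus
select owner, object_type, count(*)
from dba_objects
where owner not in ('SYS', 'SYSTEM', 'XDB', 'ORDSYS', 'MDSYS')
group by owner, object_type;

Anything returned exists on the source with a count that is missing or different on the destination.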

You would of course not try to migrate a 3 TB database just like that; you'd use foresight, planning and a complete test plan. You'd make sure (double- and triple-check) that all the dictionary objects have been migrated and that the whole process is scripted. Don't even think of typing in commands during the migration other than the script names. Good luck!

References

In a recent thread on oracle-l I found this gem (http://www.freelists.org/post/oracle-l/best-way-to-convert-from-unix-to-linux,11)

The references below are quite useful for anyone about to do this:

  • Note 1389592.1 – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (from 11.2.0.3)
  • Note 1166564.1 – Master Note for Transportable Tablespaces (TTS) – Common Questions and Issues
  • Note 1454872.1 – Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable
  • Note 406586.1 – How to Handle Materialized Views When You Upgrade or Clone a Database

Credit to GregG for listing these.

4 Responses to “Tale of a platform migration Solaris 10 SPARC 10.2.0.5 to Linux 11.2.0.2.6”

  1. mdinh said

    I am about to do the same but for Solaris Sparc to Solaris Intel and have decided to use parallel expdp and impdp.

    Will you please shed some light on how using network_link would be better than parallel sftp?

    I was unaware that network_link can be parallel and now have some homework to do.

    Thanks.

    • Martin Bach said

      Hi Michael,

      network_link would have saved me time because I ran into lots of problems caused by missing dictionary objects; I lost one full day. So a full import over the network could have solved that. I'm guessing, but I think a parallel setup over two bundled GigE ports would have taken less than 56 hours.

      Your mileage may vary-test first!

      Martin

  2. sarveswara said

    I think one additional benefit with Data Pump would be that the users and grants are created on the destination as part of the import. The problem would be with indexes, which are created serially; that can be parallelized using third-party scripts (Randolf Geist provides one such script, which I have used for this purpose).

    – Kumar

    • Martin Bach said

      Exactly; I had meant to reply to this question the same way.

      Indexes can be excluded on the command line (EXCLUDE=…) and you could build a SQLFILE with indexes only, which you can then hack to your heart's content. That's similar to the way we did it with the INDEXFILE in non-Data Pump environments. Data Pump and the XML metadata API rock!
