My first contact with the log miner

I have never really played with log miner, but I have used it implicitly many times through Streams. The need to get to grips with it arose when a developer performed unrecoverable (i.e. NOLOGGING) operations on a QA system. I didn't want them to do the same in production, so I had to find out who it was. First of all I needed a test case on my 32-bit Linux system. But first, some theory.

Grey is all theory

When starting with log miner you have a number of dictionary options to choose from. From my understanding, the dictionary is what log miner uses to translate object IDs into object names. You can choose from:

  • online catalog
  • online redo logs
  • flat file

Using the online catalog is the simplest of these approaches; storing the dictionary in the online redo logs requires supplemental logging to be enabled.
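To see whether minimal supplemental logging is already enabled, you can query v$database, for example:

    -- check whether minimal supplemental logging is enabled (YES/NO)
    select supplemental_log_data_min from v$database;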

Once that's built, you simply add the archived logfiles you are interested in (you can include the online logs if you like) and start the log miner session. The results of the analysis are in v$logmnr_contents. When you are done with your investigation, you close the log miner session.

I opted to use the flat file approach for now; here's the example.

1. Build the dictionary

That's probably not always necessary, but I created it anyway. I chose to put it into a file rather than the online redo logs since I couldn't/didn't want to enable supplemental logging on the database. With that in mind, the way a Streams setup adds supplemental logging to all replicated tables makes a lot more sense! I also started to understand the dictionary_begin and dictionary_end columns in v$archived_log better.
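When the dictionary is written to the redo stream instead of a flat file, those two columns show which archived logs contain it. A query along these lines (standard v$archived_log columns) finds them:

    -- archived logs containing the start or end of a dictionary build
    select sequence#, name, dictionary_begin, dictionary_end
      from v$archived_log
     where dictionary_begin = 'YES'
        or dictionary_end   = 'YES';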

To build the dictionary into a file, you need to have utl_file_dir set correctly, which sounds a bit anachronistic these days now that we have directory objects, but I digress. In my case utl_file_dir was set to '/home/oracle/utl_file_dir'. The command I executed was:

    begin
      sys.dbms_logmnr_d.build (
        dictionary_filename => 'dictionary.ora',
        dictionary_location => '/home/oracle/utl_file_dir');
    end;
    /

2. Add log files

I knew from v$archived_log that my logfiles covered sequences 23 and 24, so I added them to the list of files to mine:

    begin
      sys.dbms_logmnr.add_logfile (
        logfilename => '/u01/oracle/ora10/arch/arch1_23_703580639.dbf',
        options     => sys.dbms_logmnr.new);
      sys.dbms_logmnr.add_logfile (
        logfilename => '/u01/oracle/ora10/arch/arch1_24_703580639.dbf',
        options     => sys.dbms_logmnr.addfile);
    end;
    /

3. Start log miner

Now for the interesting part of this!

    begin
      sys.dbms_logmnr.start_logmnr (
        dictfilename => '/home/oracle/utl_file_dir/dictionary.ora');
    end;
    /

4. Analysing the results

Now query v$logmnr_contents to your heart's content!
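To track down who issued the offending statements, something along these lines works; the columns are standard v$logmnr_contents columns, and the SCOTT filter is just a placeholder for the schema you are interested in:

    -- who did what, and the redo SQL to prove it
    select timestamp, username, os_username, session_info,
           operation, sql_redo
      from v$logmnr_contents
     where seg_owner = 'SCOTT';

Note that v$logmnr_contents is only populated for the session that started log miner, so run the queries in the same session.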

5. End your log miner session

The final step is to end the session.
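Ending the session is a single call:

    begin
      sys.dbms_logmnr.end_logmnr;
    end;
    /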


Keep it simple!

A simpler approach, using the current database dictionary, requires the following:

  • Access to the source database (i.e. the database where the statements originated)
  • Column definitions must not have changed (i.e. no DDL against tables)
  • The database must be open

With these in place, all you need to do is add the logfiles and execute something similar to:

    begin
      sys.dbms_logmnr.start_logmnr (
        starttime => sysdate - 10 * 1/24/60,   -- the last 10 minutes
        endtime   => sysdate,
        options   => sys.dbms_logmnr.dict_from_online_catalog);
    end;
    /

Further reading: Oracle Database Utilities
