
How to Use LogMiner to Locate the Source of an Archive Log Flood

R. Wang

Keywords: LogMiner, STATSPACK, archive log files

Environment: Oracle 9iR2 on Solaris 5.9

Circumstance: Without any new jobs having been introduced, we observed disk
space being consumed rapidly within a very short period: a new archive log file
(18 MB) was being created every minute, and disk usage climbed accordingly, as
shown below.

-rw-r----- 1 oracle dba 18873856 Jul 24 20:38 PROD_0001_0000045102.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:39 PROD_0001_0000045103.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:40 PROD_0001_0000045104.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:41 PROD_0001_0000045105.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:41 PROD_0001_0000045106.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:43 PROD_0001_0000045107.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:43 PROD_0001_0000045108.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:45 PROD_0001_0000045109.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:45 PROD_0001_0000045110.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:46 PROD_0001_0000045111.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:47 PROD_0001_0000045112.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:49 PROD_0001_0000045113.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:50 PROD_0001_0000045114.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:51 PROD_0001_0000045115.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:52 PROD_0001_0000045116.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:53 PROD_0001_0000045117.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:54 PROD_0001_0000045118.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:56 PROD_0001_0000045119.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:56 PROD_0001_0000045120.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:57 PROD_0001_0000045121.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:58 PROD_0001_0000045122.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 20:59 PROD_0001_0000045123.arc
-rw-r----- 1 oracle dba 18873856 Jul 24 21:00 PROD_0001_0000045124.arc

Steps of Diagnosis

1. Checking Memory, CPU, and I/O

Checking memory, CPU, and I/O, we failed to find any abnormal process
excessively consuming database or OS resources.
Note: For reference on monitoring Oracle resource consumption in Unix, see the
Oracle Metalink article with Note 148466.1.

2. Using LogMiner to locate the targeted archive log files

Next, we wanted to see what the database was doing by examining the newly
created archive log files with LogMiner.

Before you start using LogMiner, make sure that you:

• Log in as SYS with the SYSDBA role.
• Stay in the current session: the LogMiner data is not visible from other
sessions.

The steps for using LogMiner are described in detail below.

1) Check init parameter UTL_FILE_DIR

SQL> show parameter UTL_FILE

NAME                           TYPE        VALUE
------------------------------ ----------- -------------------
utl_file_dir                   string      /tmp
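
If utl_file_dir is empty, the dictionary file built in the next step cannot be
written. As a minimal sketch (assuming /tmp is an acceptable location; in 9i
this is a static parameter, so an instance restart is required):

SQL> alter system set utl_file_dir='/tmp' scope=spfile;
SQL> shutdown immediate
SQL> startup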

2) Create a text file containing the data dictionary contents

SQL> execute dbms_logmnr_d.build('shwdict.ora','/tmp');

3) Add the targeted archive log files to the log list

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045087.arc', dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045088.arc', dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045089.arc', dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045288.arc', dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045289.arc', dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045290.arc', dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045291.arc', dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045297.arc', dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045308.arc', dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

Note: In order to track down what had happened, we loaded not only the
excessive archive log files generated every minute, but also normal
archive log files created at a reasonable interval, for comparison.

Then we check which archive log files have been loaded and are available
for analysis.

SQL> select log_id, filename from v$logmnr_logs;

LOG_ID FILENAME
----------- -------------------------------------------
45087 /dbPROD/archive/PROD_0001_0000045087.arc
45088 /dbPROD/archive/PROD_0001_0000045088.arc
45089 /dbPROD/archive/PROD_0001_0000045089.arc
45288 /dbPROD/archive/PROD_0001_0000045288.arc
45289 /dbPROD/archive/PROD_0001_0000045289.arc
45290 /dbPROD/archive/PROD_0001_0000045290.arc
45291 /dbPROD/archive/PROD_0001_0000045291.arc
45297 /dbPROD/archive/PROD_0001_0000045297.arc
45308 /dbPROD/archive/PROD_0001_0000045308.arc

9 rows selected.
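
Adding one file per call gets repetitive. As a convenience, a consecutive
range of sequence numbers can also be added in a single PL/SQL block; this
sketch infers the file-name pattern from the listing above:

SQL> begin
       for seq in 45087 .. 45089 loop
         dbms_logmnr.add_logfile(
           logfilename => '/dbPROD/archive/PROD_0001_' || lpad(seq, 10, '0') || '.arc',
           options     => dbms_logmnr.addfile);
       end loop;
     end;
     /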

4) Alter the date format in the current session

To see exactly when things happened, we alter the date format in the current
session to show hours, minutes, and seconds.

SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';

Session altered.

5) Identify the SCN and time range

Next, the SCN and time range of each loaded log file is available by querying
the view v$logmnr_logs. This range helps you limit the window of activity you
are going to examine.

SQL> select low_time,high_time,low_scn,next_scn from v$logmnr_logs;

LOW_TIME            HIGH_TIME              LOW_SCN   NEXT_SCN
------------------- ------------------- ---------- ----------
2007-07-24 20:22:13 2007-07-24 20:23:15 1343378330 1343379060
2007-07-24 20:23:15 2007-07-24 20:24:20 1343379060 1343379595
2007-07-24 20:24:20 2007-07-24 20:25:35 1343379595 1343380558
2007-07-24 22:57:47 2007-07-24 22:58:05 1343485178 1343485332
2007-07-24 22:58:05 2007-07-24 22:58:21 1343485332 1343485451
2007-07-24 22:58:21 2007-07-24 22:58:38 1343485451 1343485597
2007-07-24 22:58:38 2007-07-24 22:58:52 1343485597 1343485674
2007-07-25 00:12:52 2007-07-25 00:15:52 1343494920 1343495568
2007-07-25 10:42:05 2007-07-25 10:50:41 1343712495 1343715303

9 rows selected.
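
A quick way (a sketch) to obtain the overall bounds to feed into the next step
is to aggregate over all loaded logs:

SQL> select min(low_time), max(high_time), min(low_scn), max(next_scn)
     from v$logmnr_logs;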

6) Start the analysis

Referencing the data dictionary text file shwdict.ora created in directory
/tmp, we can now start LogMiner by running the following statement.

SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/tmp/shwdict.ora',
     starttime => to_date('2007-07-24 20:22:13','yyyy-mm-dd hh24:mi:ss'),
     endtime => to_date('2007-07-25 10:50:41','yyyy-mm-dd hh24:mi:ss'));

PL/SQL procedure successfully completed.

Here we set the time range from the earliest low_time to the latest high_time,
so that all of the previously loaded activity is analyzed.
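
Alternatively, the same window can be bounded by SCN instead of time; this
sketch uses the LOW_SCN and NEXT_SCN values from step 5:

SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/tmp/shwdict.ora', startscn => 1343378330, endscn => 1343715303);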

7) Run user-defined queries

From here, we can pick out the activity recorded in the archive log files we
loaded previously.

We can run a query like the following to pull a wealth of information from
the view V$LOGMNR_CONTENTS:

SQL> select scn, timestamp, session# session_num, table_space,
     rollback, operation, operation_code, username, sql_redo, sql_undo
     from v$logmnr_contents
     order by 1;

Examining the output, we noticed that the values of SQL_REDO and SQL_UNDO are
NULL throughout the time range in which the excessive archive log files were
created.
We then narrowed down with another statement against V$LOGMNR_CONTENTS,
focusing only on the object IDs of the activities with NULL SQL_UNDO and
SQL_REDO.

We run:

SQL> select scn, data_obj#, data_objd#, seg_owner, rollback,
     operation, username, sql_redo, sql_undo
     from v$logmnr_contents
     order by 1;

The filtered output shows that a handful of DATA_OBJ# values keep showing up:
120049, 120048, 120071, 120061, 120070, and so on.
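
One way to make the repeats stand out, sketched here rather than taken from
the original analysis, is to aggregate the NULL-redo rows by object:

SQL> select data_obj#, count(*)
     from v$logmnr_contents
     where sql_redo is null and sql_undo is null
     group by data_obj#
     order by 2 desc;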

8) Locate the database objects causing the excessive archive log files

SQL> select object_id, object_name, owner from dba_objects
     where object_id in (120049, 120048, 120071, 120070, 120061);

 OBJECT_ID OBJECT_NAME                    OWNER
---------- ------------------------------ ----------
    120061 STATS$PGASTAT                  PERFSTAT
    120070 STATS$INSTANCE_RECOVERY        PERFSTAT
    120071 STATS$INSTANCE_RECOVERY_PK     PERFSTAT
    120048 STATS$SQL_PLAN_USAGE           PERFSTAT
    120049 STATS$SQL_PLAN_USAGE_PK        PERFSTAT
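
When the analysis is finished, it is good practice to release the LogMiner
session resources:

SQL> execute dbms_logmnr.end_logmnr;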

Conclusion

It is now very clear that internal activity related to STATSPACK caused the
excessive archive log files. Going back to check the STATSPACK jobs, we found
that the job had been started two years earlier with a one-hour interval. That
means roughly 365 x 2 x 24 = 17,520 snapshots had been created and saved to
the database. We suspect that the internal maintenance of this snapshot
history is what gave rise to the problem we were facing.

Our solution was to stop the existing STATSPACK job (which terminated the
internal activity) and reschedule it to run only at peak times, say 8:00,
9:00, 10:00, 11:00, 13:00, 14:00, 15:00, 16:00, and 17:00.

We are also planning to clean up some of the historical snapshots at our
convenience.
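
A minimal sketch of both actions follows; the job number 42 is hypothetical
(look up the real value in DBA_JOBS first), and sppurge.sql prompts
interactively for the snapshot range to delete. The job can afterwards be
resubmitted with dbms_job.submit using an interval expression that fires only
at the chosen hours.

SQL> select job, what, interval from dba_jobs where lower(what) like '%statspack%';
SQL> execute dbms_job.remove(42);
SQL> @?/rdbms/admin/sppurge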
About the Author

R. Wang currently works as an Oracle DBA in Canada, responsible for database
performance tuning and high availability. With over 10 years of experience
architecting and building Oracle systems, Rui is an evangelist for Oracle
technology and products. Rui is an OCP and holds a master's degree in computer
science from Simon Fraser University in Canada.

Visit Rui's blog at http://www.oraclepoint.com/oralife or join the forum at
http://www.oraclepoint.com for many more Oracle resources.
