How To Use Logminer To Locate Archive Logs Flow
R. Wang
Steps of Diagnosis
After checking memory, CPU, and I/O, we failed to find any abnormal
process that excessively consumed database or OS resources.
Note: For reference on "Monitoring Oracle Resource Consumption in Unix",
please refer to the Oracle MetaLink article Note 148466.1.
We then turn to LogMiner to find out what is generating the redo. First, build a
flat-file dictionary (for this to work, the utl_file_dir initialization parameter
must include the target directory, here /tmp):
SQL> execute dbms_logmnr_d.build('shwdict.ora','/tmp');
Then add the archive log files to be analyzed:
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045087.arc', dbms_logmnr.addfile);
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045088.arc', dbms_logmnr.addfile);
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045089.arc', dbms_logmnr.addfile);
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045288.arc', dbms_logmnr.addfile);
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045289.arc', dbms_logmnr.addfile);
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045290.arc', dbms_logmnr.addfile);
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045291.arc', dbms_logmnr.addfile);
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045297.arc', dbms_logmnr.addfile);
SQL> execute dbms_logmnr.add_logfile('/dbPROD/archive/PROD_0001_0000045308.arc', dbms_logmnr.addfile);
Then check the archive log files that have been loaded and are available
for analysis, using the query below.
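The loaded files can be listed from the V$LOGMNR_LOGS view; a minimal query
(the column list is assumed from the standard view) is:
SQL> SELECT log_id, filename FROM v$logmnr_logs;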
LOG_ID FILENAME
----------- -------------------------------------------
45087 /dbPROD/archive/PROD_0001_0000045087.arc
45088 /dbPROD/archive/PROD_0001_0000045088.arc
45089 /dbPROD/archive/PROD_0001_0000045089.arc
45288 /dbPROD/archive/PROD_0001_0000045288.arc
45289 /dbPROD/archive/PROD_0001_0000045289.arc
45290 /dbPROD/archive/PROD_0001_0000045290.arc
45291 /dbPROD/archive/PROD_0001_0000045291.arc
45297 /dbPROD/archive/PROD_0001_0000045297.arc
45308 /dbPROD/archive/PROD_0001_0000045308.arc
9 rows selected.
6) Do analysis
Here, we set the time range from the very beginning to the very end, so that
all activity in the previously loaded logs is analyzed.
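The start command itself does not appear above; a minimal sketch, using the
dictionary file built earlier and deliberately wide placeholder dates for
"beginning" and "end", would be:
SQL> execute dbms_logmnr.start_logmnr( -
>      dictfilename => '/tmp/shwdict.ora', -
>      starttime => to_date('1980-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS'), -
>      endtime   => to_date('2099-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS'));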
7) User-defined query
From here, we can pick out the activities recorded in the archive log files we
loaded previously. The mining results are exposed through the V$LOGMNR_CONTENTS
view.
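A representative query, aggregating redo records per affected segment (the
grouping chosen here is an illustrative assumption, not necessarily the exact
statement we used), is:
SQL> SELECT seg_owner, seg_name, COUNT(*) AS redo_entries
  2  FROM v$logmnr_contents
  3  GROUP BY seg_owner, seg_name
  4  ORDER BY redo_entries DESC;
In our case it was the STATSPACK-related segments that dominated such a
listing, which leads to the conclusion below. Once the analysis is finished,
the LogMiner session can be released with:
SQL> execute dbms_logmnr.end_logmnr;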
Conclusion
Now it is clear that internal activities related to STATSPACK caused the
excessive archive log generation. We then went back to check the STATSPACK jobs
and found that the job had been started two years earlier with a one-hour
interval. That means 365 x 2 x 24 = 17,520 snapshots had been created and saved
to the database. We suspect that the internal maintenance of these snapshots
contributed to the problem we were facing. Our solution was to stop the
existing STATSPACK job (which terminates the internal activities) and adjust it
to run only at peak times, i.e. 8:00, 9:00, 10:00, 11:00, 13:00, 14:00, 15:00,
16:00 and 17:00.
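STATSPACK snapshots are typically scheduled through DBMS_JOB, so stopping the
job can be sketched as follows. First find the job number (the value 21 below
is a placeholder; use whatever the query returns):
SQL> SELECT job, what, interval FROM dba_jobs WHERE lower(what) LIKE '%statspack.snap%';
Then mark it broken so it no longer runs:
SQL> execute dbms_job.broken(21, TRUE);
The job can later be re-enabled with a new schedule via dbms_job.change to
match the peak-hour plan above.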
By the way, we are also planning to clean up some historical snapshots at our
convenience.
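One common way to do this, assuming the standard STATSPACK scripts are
installed, is the sppurge.sql script shipped under $ORACLE_HOME/rdbms/admin,
which prompts for the range of snapshot IDs to delete:
SQL> connect perfstat
SQL> @?/rdbms/admin/sppurge.sql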
About the Author.