
The Mysteries of DBWR Tuning

Steve Adams
Independent Oracle Consultant
Introduction

The Oracle Database Writer (DBWR) process is the only process that writes modified database blocks from the SGA to the datafiles. Today's high-end applications modify blocks in the SGA at amazing rates, and DBWR needs to be carefully tuned to keep up; otherwise it will impose a severe bottleneck on overall system performance.

However, DBWR tuning is a mysterious art. The Oracle Server Tuning Guide and the independent Oracle tuning books offer scant advice on how to tune this critical aspect of database operation. Hence, to most senior DBAs, DBWR bottlenecks are an intractable problem. Beyond ensuring that asynchronous I/O is available, and that there are no hot disks, they have no way of addressing the problem.

This paper solves that problem. It explains, in detail, how DBWR works and interacts with other database processes. It shows how to monitor DBWR performance using the v$ views, and how to take an even deeper look at some seldom-used x$ tables to get a complete understanding of DBWR's behaviour. More than that, this paper explains how to then set the parameters that affect DBWR performance, and exactly when and why it might sometimes be necessary to use some of the undocumented parameters.

What's the Problem?

If a database instance has a system performance problem then it will be manifested somewhere in the V$SYSTEM_EVENT view. This view details how often, and for how long, database processes have had to wait for system resources. If DBWR is under-performing, then V$SYSTEM_EVENT will show a large number of free buffer waits, as illustrated in Figure 1. Free buffer waits are evidence of a DBWR problem, because it is DBWR's responsibility to make free buffers available by writing modified blocks to the datafiles.

    select
        event,
        total_waits,
        time_waited,
        average_wait
    from
        sys.v_$system_event
    where
        event like 'db file %' or
        event = 'free buffer waits' or
        event = 'write complete waits'
    order by
        time_waited desc
    /

                             TOTAL    TIME   AVG
    EVENT                    WAITS  WAITED  WAIT
    ----------------------- ------ ------- -----
    free buffer waits       194278 4488038 23.10
    db file sequential read 588805 2900049  4.93
    db file parallel write   34667  119035  3.43
    db file scattered read   19283   10242  0.53
    write complete waits       175    5481 31.32
    db file single write       378    1261  3.34

    Figure 1: A DBWR problem, as seen from V$SYSTEM_EVENT

The first thing to do when faced with a DBWR bottleneck is to ensure that the maximum possible write bandwidth is available to DBWR. Firstly, the operating system and database instance must be configured to make non-blocking I/O (asynchronous I/O) available to DBWR. Secondly, the datafiles must be spread over enough disks, controllers and I/O busses to ensure that there are no hot spots in the I/O subsystem. This I/O tuning typically involves the use of striping, and commonly involves the use of raw datafiles also.

Figure 2 shows the dramatic improvement made to the performance of the system depicted in Figure 1 merely by converting the datafiles to raw, and enabling asynchronous I/O. No alteration to the datafile striping or layout was necessary to achieve this improvement, nor was there any evidence of hot spots in the I/O subsystem following this change. Indeed, the average random read time of 13 milliseconds shown in the db file sequential read line is quite good.
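A quick way to read such a snapshot is to rank the events by total time waited and convert the centisecond averages to milliseconds. A minimal sketch in Python, using the Figure 1 values (the helper name is mine, not an Oracle API):

```python
# Figure 1 rows: (event, total_waits, time_waited, average_wait),
# with times in centiseconds as V$SYSTEM_EVENT reports them here.
events = [
    ("free buffer waits",       194278, 4488038, 23.10),
    ("db file sequential read", 588805, 2900049,  4.93),
    ("db file parallel write",   34667,  119035,  3.43),
    ("db file scattered read",   19283,   10242,  0.53),
    ("write complete waits",       175,    5481, 31.32),
    ("db file single write",       378,    1261,  3.34),
]

def worst_events(rows, n=3):
    """The n events with the most total time waited, with the
    average wait converted from centiseconds to milliseconds."""
    ranked = sorted(rows, key=lambda r: r[2], reverse=True)
    return [(name, avg_cs * 10) for name, _tw, _t, avg_cs in ranked[:n]]

for name, avg_ms in worst_events(events):
    print(f"{name:24s} avg wait {avg_ms:6.1f} ms")
# free buffer waits dominates, at an average of 231 ms per wait
```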


                             TOTAL    TIME   AVG
    EVENT                    WAITS  WAITED  WAIT
    ----------------------- ------ ------- -----
    free buffer waits       149536 1207806  8.08
    db file sequential read 793378 1033994  1.30
    db file parallel write   62241   64377  1.03
    db file scattered read   24194   55562  2.30
    db file single write       542    1517  2.80
    write complete waits       152    1308  8.59

    Figure 2: The same DBWR problem, ameliorated by I/O tuning

There is much more that can, should and has been said about these important aspects of DBWR tuning. Yet, important and primary as these things are, they fall outside the scope of this paper. What this paper is concerned with is what to do next: what to do when these well-known tuning opportunities have been fully exploited, and yet the problem persists.

In Figure 2, for example, although the I/O performance is now good, this instance is still spending a great deal of time waiting for free buffers. This is typical of the high-end Oracle database applications that I have seen: I/O tuning is critical, but not enough by itself to eliminate free buffer waits. Further specific DBWR tuning is necessary. But such tuning needs to be directed by a clear understanding of how DBWR works, and how its performance can be analysed and then tuned. It is the objective of this paper to build that understanding.

How Bad is it?

Before attempting to resolve a performance problem like free buffer waits, it is vital to be able to measure the relative severity of the problem, so that the effectiveness of each tuning attempt can be assessed. There are two simple measures of the relative severity of system resource waits like free buffer waits: the first is how often the wait has occurred as a proportion of the number of times that the resource was required; and the second is how long the wait lasted when it did occur.

The number of times that free buffers were required is reported in the free buffer requests statistic in V$SYSSTAT. In the cases above, there were 1096334 and 1548954 free buffer requests respectively. So, taking the number of free buffer waits from V$SYSTEM_EVENT, it can be seen that the ratio of free buffer waits to free buffer requests improved from about 1:5 to about 1:10. The average duration of free buffer waits also improved, from about 23 to about 8 hundredths of a second, as can be seen directly in V$SYSTEM_EVENT.

Tuning the Load on DBWR

If you still have free buffer waits after tuning the I/O subsystem, you should check DBWR's workload, to see whether it can be reduced. There are two major opportunities for reducing the load on DBWR.

Firstly, Oracle versions prior to 7.3 did not have the delayed logging block cleanouts feature. If this feature is disabled, or if you are using an older version of Oracle, then delayed block cleanouts result in the cleaned-out blocks being marked as dirty, and this can increase the load on DBWR significantly. From version 7.1.5 onwards, this can be avoided by using the parallel query option to force blocks to be read into direct read buffers, rather than into the database buffer cache. In this case, cleaned-out blocks are not written unless copied into the buffer cache for another block change. Alternatively, cleanouts can be forced and written during off-peak periods by forcing buffered full table scans.

Secondly, free buffer waits may also stem from high demand for free buffers. There are many reasons for free buffer requests, but normally the majority of free buffer requests are associated with reading blocks from disk into the database buffer cache. This demand for free buffers is highly tuneable, as is well known. For example, you can use the parallel query option to force direct reads and hence reduce demand for cache buffers, or you can enlarge the cache to improve the hit rate, or you can concentrate on optimising segment data density to reduce the number of blocks being read, or in version 8 you can use multiple buffer pools to control the placement and retention of blocks in the cache.

Tuning the LGWR / DBWR Interaction

The next thing to check is the interaction between LGWR and DBWR. The concern is that DBWR may be spending a lot of time waiting for LGWR, instead of cleaning dirty buffers. DBWR must sometimes wait for LGWR, so as to ensure that data blocks are not written to disk before the corresponding redo has been written. Otherwise it might not be possible to roll back uncommitted changes during instance recovery.

So, before DBWR writes a block, it checks the buffer header data structure to see whether there is any associated redo that still needs to be written. If so, it sends a message to LGWR requesting that it write the redo to disk. DBWR does this by allocating a message buffer in the shared pool and constructing a message for LGWR. It then posts LGWR's semaphore, which acts as a signal to the operating system that there is now work for LGWR


to do, and that it should be scheduled to run on a CPU as soon as possible. Meanwhile, DBWR enters a log file sync wait and waits on its semaphore. This acts as a signal to the operating system to not schedule DBWR to run on a CPU until further notice, or until the wait times out. When LGWR has done its work, it posts DBWR's semaphore to indicate to the operating system that DBWR can now be scheduled to run.

How often, and for how long, DBWR has had to wait for LGWR in this way can be seen in V$SESSION_EVENT, as illustrated in Figure 3. When tuning LGWR to reduce the DBWR waiting time, the number of DBWR log file sync waits should be expressed as a proportion of the write requests shown in V$SYSSTAT. However, when tuning DBWR itself, the number of DBWR log file sync waits should be expressed as a proportion of the db block changes shown in V$SYSSTAT.

    select
        event,
        total_waits,
        time_waited,
        average_wait
    from
        sys.v_$session_event
    where
        event = 'log file sync' and
        sid = 2
    /

                      TOTAL    TIME   AVG
    EVENT             WAITS  WAITED  WAIT
    ------------- --------- ------- -----
    log file sync      6164    9165  1.49

    Figure 3: DBWR waiting for LGWR

There are two cases when DBWR may have to write blocks for which the corresponding redo has not yet been written to disk, and may therefore have to wait for LGWR. The first is when the unwritten redo relates to a block change that DBWR itself has just performed. As a DBA, you have relatively little control over DBWR block changes, although careful application design can help. The second case is when the redo has not yet been written because the log buffer is large enough to accommodate all the redo generated since the change was made, and there has been no commit or other reason for the redo to be flushed to disk. In this case, the need for DBWR to wait for LGWR can be significantly reduced by keeping LGWR active.

The best way to keep LGWR active is to have a small log buffer. LGWR will start to write asynchronously each time the log buffer becomes one third full. This will reduce the probability that DBWR will need to wait for LGWR. It will also reduce the waiting time when necessary, because there will be relatively little redo to flush.

However, some applications cannot afford to have a small log buffer, because they generate redo in large bursts, and would suffer log buffer space waits if the log buffer were too small. In such cases, you can have a large log buffer and still keep LGWR active using the _LOG_IO_SIZE parameter. This parameter sets the threshold at which LGWR starts to write, which defaults to one third of the log buffer. If set explicitly, it must be specified in log blocks, the size of which can be found in X$KCCLE.LEBSZ. It is best to keep _LOG_IO_SIZE at least one log block less than the maximum I/O size for your operating system, which for Unix operating systems is typically 64K. Then, when DBWR does have to wait for LGWR to flush the log buffer, the flush will always be accomplished in just a single I/O operation.

Of course, you should not set any parameters beginning with an underscore, like _LOG_IO_SIZE, without prior consultation with Oracle Support.

The second aspect of the DBWR/LGWR interaction to tune is their inter-process communication. Despite warnings to the contrary, it does help to raise the operating system priority of both the LGWR and DBWR processes. Also, many operating systems support two types of semaphores. The older type, known as System V semaphores, are implemented in the operating system kernel and are thus costly to use. The newer Posix 1b semaphores, or post-wait semaphores, are much more efficient because they are implemented in shared user memory. If the Oracle port for your operating system gives you a choice on this matter, you should choose the post-wait semaphores for best performance. This affects all inter-process communication within Oracle, but is particularly important for LGWR and DBWR, because at times these processes need to post many other processes.

Tuning the Write Batch Size Limit

We now come to the biggest hammer with which you can hit free buffer waits, namely the DBWR batch size limit. When DBWR writes dirty blocks to disk, it works in batches. Each time its services are required, DBWR compiles a batch of blocks, and then writes them. The size of these batches varies, up to a limit, depending on how many dirty blocks are available for writing at the time.

If asynchronous I/O is being used, an asynchronous write system call is issued to the operating system for each block in the batch. It is then left to the operating system and hardware to service this set of requests as efficiently as possible.
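The _LOG_IO_SIZE guideline above is simple arithmetic. A minimal sketch in Python, assuming a 512-byte log block for illustration; on a real system the size must be taken from X$KCCLE.LEBSZ:

```python
# Hedged sketch of the _LOG_IO_SIZE guideline: stay at least one log
# block below the OS maximum I/O size, so that a forced log flush can
# always complete in a single write. The 512-byte log block size is
# an assumption; the real value comes from X$KCCLE.LEBSZ.

def suggested_log_io_size(max_io_bytes=64 * 1024, log_block_bytes=512):
    """Largest _LOG_IO_SIZE (in log blocks) that leaves at least one
    log block of headroom below the maximum I/O size."""
    return max_io_bytes // log_block_bytes - 1

blocks = suggested_log_io_size()
print(blocks, blocks * 512)  # 127 log blocks = 65024 bytes, under 64K
```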


DBWR monitors the progress of the requests and checks their status as they complete. If necessary, DBWR will attempt to re-write blocks for which the first write attempt appears to have failed. On the other hand, if multiple slave DBWR processes are used, as is possible on the Oracle ports for most Unix operating systems, the master DBWR does not write any blocks itself. It allocates one block from its batch to each slave DBWR in turn. Each slave DBWR process then issues an ordinary synchronous write system call for its block, and waits until the disk operation has successfully completed before getting a new block to write from the master DBWR process.

This explains why using true asynchronous I/O is preferable to using multiple slave DBWR processes. Apart from the significant overhead of inter-process communication between the master and slave DBWR processes for each block written, the master/slaves configuration also clearly reduces the degree of I/O parallelism that can be achieved at the operating system and hardware level.

This also explains why the DBWR batch size limit, or the number of slave DBWR processes respectively, can have a significant impact on DBWR performance. For example, if this write bandwidth is small, then I/O operations will have to be done in series that might otherwise have been done in parallel. Even in the worst case, when all the dirty blocks need to be written to the same disk, there is still benefit in queuing a batch of writes concurrently, because the operating system and hardware will be able to service the requests in the optimal order.

The DBWR batch size limit is controlled by the _DB_BLOCK_WRITE_BATCH parameter. The value of this parameter is derived from other parameters, and can be seen in X$KVII as illustrated in Figure 4. The value normally defaults to DB_FILES * DB_FILE_SIMULTANEOUS_WRITES / 2, up to a limit of one quarter of the cache (DB_BLOCK_BUFFERS / 4) or 512 blocks, whichever is the lesser. However, both the formula and the limits vary between different Oracle ports and versions, and so it is best to check in X$KVII.

    select
        kviidsc,
        kviival
    from
        sys.x$kvii
    where
        kviitag in ('kcbswc', 'kcbscc')
    /

    KVIIDSC                    KVIIVAL
    -------------------------- -------
    DB writer IO clump             100
    DB writer checkpoint clump       8

    Figure 4: The DBWR batch size limits

You need to know what your batch size limit is, to determine how full the batches that DBWR is writing are. If they are full, or close to full, then there will clearly be benefit in raising the limit. To determine the size of the batches being written, you should divide the physical writes statistic, which is the number of blocks written, by the write requests statistic, which is the number of batches written. These statistics are found in V$SYSSTAT. Even when the actual batches written are well short of the limit on average, experience shows that increasing the batch size limit does increase the average batch size and help to reduce free buffer waits.

To set the DBWR batch size limit implicitly, you should set DB_FILES slightly higher than the number of datafiles in use, and adjust DB_FILE_SIMULTANEOUS_WRITES as required.

Tuning the Checkpoint Batch Size Limit

You may also have read about the DB_BLOCK_CHECKPOINT_BATCH parameter, which sets the maximum number of blocks in a batch that can be devoted to checkpointing. This parameter defaults to just 8 blocks, despite a documentation error that persisted through to version 7.2, saying that it was a quarter of the write batch size limit.

It is sometimes recommended to set this parameter equal to the write batch size limit. The objective is to make checkpoints as fast as possible, but the speed of checkpoints is really not that important. Foreground processes only wait for checkpoints if a log switch checkpoint has not yet been completed by the time it is necessary to switch back into that online log file, or if the checkpoint is needed to write any cached, dirty blocks from a segment that has been dropped or truncated. The first case is evidence of a gross tuning error that should be corrected by other means, and in the second case a fast checkpoint is performed, and so the checkpoint batch size limit does not apply.
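The batch-fullness check described under "Tuning the Write Batch Size Limit" is a single division of two V$SYSSTAT statistics. A minimal sketch, with invented statistic values and the "DB writer IO clump" limit of 100 from Figure 4 (the function names are mine, not Oracle's):

```python
# Hedged sketch of the batch-fullness check: average batch size is
# "physical writes" (blocks written) divided by "write requests"
# (batches written). The statistic values below are invented for
# illustration; the batch limit of 100 is the Figure 4 value.

def avg_batch_size(physical_writes, write_requests):
    return physical_writes / write_requests

def near_limit(physical_writes, write_requests, batch_limit, frac=0.9):
    """True if the average batch is within 90% of the limit, in
    which case raising the limit is the obvious next step."""
    return avg_batch_size(physical_writes, write_requests) >= frac * batch_limit

print(avg_batch_size(1850000, 20000))   # 92.5 blocks per batch
print(near_limit(1850000, 20000, 100))  # True: batches nearly full
```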


Also, if an entire batch or more is devoted to checkpointing from time to time, then DBWR will not be cleaning buffers in a way that will enable them to be found by sessions requiring a free buffer, and free buffer waits are likely to result. Therefore, it is important that the checkpoint batch size limit be small enough to allow DBWR to continue its normal work for the duration of each checkpoint, with little or no backlog developing.

However, a small checkpoint batch size limit does have one drawback. Typically, during a checkpoint, batches are written more frequently than at other times, with the result that the actual batch size quickly falls well below the normal size, sometimes right down to the checkpoint batch size limit, or even below it. This means that the checkpoint batch size limit can actually afford to be raised, as long as free buffer waits are not introduced at the beginning of each checkpoint. The objective of doing so is merely to reduce the number of batches, and thus the DBWR CPU usage.

Avoiding Write Complete Waits

Now that I have told you about the write batch size limit, I must warn you against setting it too high. If you do, you may eliminate free buffer waits, but you will introduce write complete waits instead. Write complete waits occur because Oracle cannot allow blocks that are about to be written to be modified, lest an inconsistent block image be written to disk.

Unlike free buffer waits, write complete waits cannot be prevented entirely. There is always the chance that a session may need to modify a block that is part of DBWR's batch. This applies particularly to hot blocks like rollback segment header blocks. Of course, hot blocks do not normally get written, except during checkpoints or idle writes; so that is when the risk of getting write complete waits is highest.

The write batch size limit affects write complete waits in two ways. Firstly, the larger the batch size, the longer the batch will take to write, and the higher the risk of needing to modify one of the blocks. This implies that if you need to use multiple slave DBWR processes instead of true asynchronous I/O, then your batch size limit should be of the same order of magnitude as the number of slaves configured. Secondly, the larger the batch size limit, the greater the risk of choosing a block that will need to be modified soon, such as blocks needed by a transaction that has been caught for a while in a resource wait.

So before you begin tuning the write batch size limit, you should determine how bad your write complete waits are as a proportion of db block changes, and what the average waiting time is. If, as you increase the write batch size limit, you find that your write complete waits get worse, then stop: do not increase the write batch size limit further. There is another way to combat free buffer waits that should be used instead, rather than risking a deterioration in write complete waits.

Another reason for not setting the DBWR batch size limit too high, even implicitly, is that on some Oracle ports and versions it is possible to generate batches larger than the maximum number of simultaneous asynchronous writes supported by the operating system. If you make this mistake, you will learn about it soon enough, because writes will fail, KCF error lines will appear in your ALERT.LOG file, and the datafiles affected will be automatically taken offline.

Tuning the Batch Frequency

The last, and most complex, aspect of the DBWR tuning mystery is how to tune the frequency with which DBWR writes. However, before we can get into the details, I need to explain a few things about the database buffer cache control structures. I have already made mention of the buffer header array. This is made visible to DBAs via the X$BH virtual table. There is a one-to-one relationship between buffers and buffer headers. The buffer headers contain a host of information. Most importantly, they record which database block is currently cached in the corresponding buffer, and whether it is in current mode or consistent read mode. Current mode blocks represent the current disk image of the block, whereas consistent read mode blocks represent an earlier transaction-consistent version of the block. Of course, only current mode blocks can be dirty; that is, they may contain modifications that have yet to be written to disk.

The second important buffer cache control structure to mention is the working set header array, which is made partially visible to DBAs via the X$KCBWDS virtual table. This table contains one row per working set. It did not exist prior to version 7.3, because the entire cache was just one working set. Each working set is comprised of several linked lists of buffers: the LRU, or replacement list; the write list; and in version 8 there is also a separate ping list. The working set data structure itself contains pointers to the buffer headers of the buffers at the head and tail of each of these lists. Each buffer header then contains pointers to the next and previous buffers in its list, and an indicator of which list it is on. Thus all these lists can be traversed in either direction from any point.

When an Oracle session requires a free buffer, it firstly determines which buffer pool and which


working set it will look in. It then begins searching up the replacement list, starting with the last buffer, for a block that is neither dirty, nor pinned for use by another session. If a reusable buffer is found, it is used and moved up to the top of the replacement list. In future versions, this behaviour will vary slightly when the alternative replacement policies for the KEEP and RECYCLE buffer pools have been implemented. If, during its scan of the replacement list, a session inspects a block that is dirty and not pinned, then it unlinks that buffer from the replacement list, and adds it to the head of the write list. The number of blocks moved to the write list in this way is reported in the free buffer inspected statistic in V$SYSSTAT.

The frequency with which DBWR writes depends largely on messages called DBWR make free requests, which are triggered by either of two events associated with the procedure of looking for a reusable buffer. When either of these events occurs, the session searching for a reusable buffer pauses to check whether a make free request is already pending for that working set. If not, it creates a message in the shared pool for DBWR and posts DBWR's semaphore, before continuing its search for a reusable buffer.

The first threshold that can trigger a make free request is when the search for a reusable buffer has inspected a certain number of buffers without finding a reusable buffer. This threshold is, surprisingly, called the DBWR scan depth. The reason for that name is that when DBWR receives the make free request, it will scan the replacement list from that buffer back down to the end of the list, looking for additional buffers to write, other than those already on the write list. The number of buffers actually scanned by DBWR is commonly in fact less than its scan depth, because some buffers may already have been moved to the write list by the session that issued the make free request, or by a subsequent scan of the same replacement list by another session. This difference can be seen in the statistics DBWR summed scan depth and DBWR buffers scanned, which can both be found in V$SYSSTAT.

The interesting thing about the DBWR scan depth is that it is self-tuning. It actually changes from moment to moment while the instance is active, to compensate for fluctuations in the load on DBWR. The DBWR scan depth ranges between a lower and an upper bound, which are derived from either the write batch limit, or the working set size, depending on your Oracle version. At instance startup, it is set to the lower bound. Then, at any time when DBWR is found to be falling behind, the scan depth is dynamically increased. The _DB_WRITER_SCAN_DEPTH_INCREMENT parameter controls the rate of increase, and defaults to one eighth of the difference between the upper and lower bounds for the DBWR scan depth, rounded. On the other hand, if DBWR is ever found to be working too hard, then the scan depth is automatically reduced by the value of the _DB_WRITER_SCAN_DEPTH_DECREMENT parameter, which defaults to 1. The effect of these defaults is that DBWR will respond to bursts of load quickly, but will take a long time to adjust to reduced activity. The current DBWR scan depth, its maximum and minimum bounds, and the increment and decrement parameter settings can be seen in X$KVIT as illustrated in Figure 5.

    select
        kvitdsc,
        kvitval
    from
        sys.x$kvit
    where
        kvittag in ('kcbldq', 'kcbsfs') or
        kvittag like 'kcbsd_'
    /

    KVITDSC                                    KVITVAL
    ------------------------------------------ -------
    large dirty queue if kcbclw reaches this         6
    DBWR blocks to scan looking for dirty          400
    DBWR blocks to scan lowest value                25
    DBWR blocks to scan highest value              400
    DBWR scan depth increment                       47
    DBWR scan depth decrement                        1
    foreground blocks to scan looking for free   16000

    Figure 5: Some parameters affecting DBWR

The second event that can trigger a make free request is when a dirty buffer is moved to the write list during a search for a reusable buffer, and the length of the write list thereby becomes equal to its threshold, which is set by the _DB_LARGE_DIRTY_QUEUE parameter, and defaults to 1/16th of the write batch size limit, rounded. This is the parameter that you should set to control the frequency at which DBWR writes. It should be reduced gradually from its default, one block at a time, until free buffer waits are eliminated. The cost of each reduction in this parameter setting is an increase in DBWR CPU usage, so it should not be brought down without good reason.

Spotting a Potential Problem

When a session searching for a reusable buffer has issued a make free request to DBWR, it resumes its scan of the replacement list, up to the number of blocks specified by the parameter _DB_BLOCK_MAX_SCAN_CNT, which defaults to one quarter of the working set, rounded down. If this limit is reached without finding a free buffer, then the dirty buffers inspected statistic in V$SYSSTAT is incremented by the number of


dirty and pinned blocks inspected during the scan, and the free buffer request fails.

Now, you may think that a failed free buffer request would translate immediately into a free buffer wait. Not necessarily so. Oracle often knows in advance that a certain resource will soon be needed. In some cases, it attempts to get that resource immediately in no-wait mode. If the attempt succeeds, it holds the resource until it is needed. If the attempt fails, it continues with other work. When that work is done, it attempts to get the resource again, this time in willing-to-wait mode, and if necessary, waits until the resource is available. This applies to certain latches, and it also appears to apply to free buffer requests for physical reads. Before a physical read into the buffer cache, Oracle attempts to get a free buffer in no-wait mode. If it succeeds, the block is read into that buffer. If it fails, Oracle goes ahead with the physical read anyway, using a temporary buffer. Then, when the read has completed, it issues a second free buffer request, waiting if necessary, and then copies the block from its temporary buffer into the buffer cache.

The genius of this design is that even if a no-wait free buffer request fails, it is highly likely that DBWR will have been able to clean some buffers by the time the session attempts to find a free buffer again. This is because the session would have paused to issue a make free request long before its initial scan failed. Then, while the session was busy with its other work, DBWR had enough time to make some free buffers available.

Because of this, it is possible for some free buffer requests to fail, as shown by the dirty buffers inspected statistic in V$SYSSTAT, yet for there to be no free buffer waits. This should be taken as a warning of a potential problem, and tuned accordingly. If you are using the version 8 feature of multiple buffer pools, this and all the related statistics should be taken from X$KCBWDS, where the different performance of each buffer pool can be seen.

Conclusion

The Oracle developers have clearly put a lot of careful thought into optimising the performance of DBWR, maintaining statistics by which it can be monitored, and providing parameters and techniques by which it can be tuned.

Unfortunately, the tight scope of this brief paper has not allowed me to treat a number of other aspects of DBWR operation, such as idle writes, pings, and cache latch tuning, nor to explain a number of the other statistics that can give you an even deeper insight into specific situations. However, I trust that I have established for you a framework upon which you can build your own experiences of The Mysteries of DBWR Tuning.
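As a closing illustration, the self-tuning scan depth described under "Tuning the Batch Frequency" can be sketched in a few lines of Python. The bounds and step sizes are the Figure 5 values; the load signal is simulated rather than taken from a live instance:

```python
# Hedged sketch of the scan depth feedback loop: the depth starts at
# its lower bound, jumps by the increment when DBWR falls behind, and
# creeps down by the decrement when DBWR is working too hard.
LOW, HIGH = 25, 400           # Figure 5: lowest / highest scan values
INCREMENT, DECREMENT = 47, 1  # the _DB_WRITER_SCAN_DEPTH_* defaults

def adjust_scan_depth(depth, falling_behind):
    if falling_behind:
        return min(HIGH, depth + INCREMENT)
    return max(LOW, depth - DECREMENT)

depth = LOW
for _ in range(8):            # a burst of load ...
    depth = adjust_scan_depth(depth, falling_behind=True)
print(depth)                  # ... saturates quickly at 400

depth = adjust_scan_depth(depth, falling_behind=False)
print(depth)                  # but recovery is slow: 399
```

This mirrors the behaviour described in the paper: bursts of load are answered within a few adjustments, while winding back to the lower bound takes hundreds of quiet adjustments.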

Steve Adams began work as an Oracle DBA/Developer in 1984. Since then, he has worked in a number of roles, from youth work to management. Over the last two years, he has returned to working on Oracle, this time with a concentration on performance issues. He lives in Sydney with his wife and three children, and can be contacted at [email protected].
