
PRINCIPLES OF SEISMIC PROCESSING

BY

Andreas Stark, M. Sc.
Professional Geophysicist
Clasina TerraScience Consultants International Ltd.

A course in seismic processing procedures

• general overview
• Huygens' principle
• energy decay
• reflection coefficients
• diffractions
• seismic wavefield
• sampling techniques and aliasing
• seismic amplitudes
• seismic resolution
• Fresnel diffractions and zones
• unmigrated data versus migrated data
• migration apertures
• NMO effects
• fold and field design
• theory of electrical filters
• Fourier transforms
• filter responses, convolution
• amplitude and phase response
• convolutional model in relation to seismic data
• convolutional model
• noise free convolutional model
• Wiener filtering
• deconvolution
• causal systems, causal wavelets
• signature estimation
• estimation of reservoir properties
• reservoir delineation
• data conditioning
• impedance modelling
• lithology modelling
• AVO analysis
• seismic attributes


THE SEISMIC METHOD


Seismic waves are elastic waves which travel through the earth. There are various types of elastic waves, viz. compressional waves or P-waves, shear waves or S-waves, mode-converted waves, surface waves, Love or pseudo-Rayleigh waves, direct and head waves, and guided waves.
All of these types of waves play a role in seismic reflection techniques. The reflection of seismic waves follows the same rules as those for light and radar, and standard ray path theory can be applied.

Seismic waves created by an explosive source emanate outward in a spherical way, and according to Huygens' theory every point along the wave front can be considered as another source point, which creates a secondary wave front. The envelope of the secondary wave fronts forms the new primary wave front after a small time delay. The trajectory of a point moving outward is known as a raypath. Thinking in terms of ray paths allows us to use ray theory in the modelling of seismic signals.

(From the Encyclopedic Dictionary of Exploration Geophysics.)


Huygens' Principle

The principle proposed by Christiaan Huygens has been of fundamental importance in the development of wave theory. According to Huygens, each point of a wave front can be considered as the source of a new secondary wavefront, as illustrated below.

[Figure: spherical wavefront and plane wave according to Huygens]

An infinite plane wave, however, will continue as an infinite plane wave, as shown.
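As a toy numerical illustration of Huygens' construction (not part of the original course material; all values are hypothetical), the sketch below samples secondary wavelets along a circular wavefront in a constant-velocity medium and checks that their envelope is simply the wavefront one time step later:

```python
# Huygens' construction, numerically: every point on the wavefront at time t
# acts as a secondary source; the envelope of the wavelets of radius v*dt
# forms the wavefront at t + dt. 2-D section, source at the origin.
import numpy as np

v = 2000.0          # medium velocity, m/s (assumed)
t, dt = 0.5, 0.01   # current wavefront time and time step, s

angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
wavefront = v * t * np.column_stack((np.cos(angles), np.sin(angles)))

# Sample the edge of every secondary wavelet
phi = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
wavelet = v * dt * np.column_stack((np.cos(phi), np.sin(phi)))
edges = np.vstack([p + wavelet for p in wavefront])

# Envelope = farthest point from the source in each azimuth bin
az = np.arctan2(edges[:, 1], edges[:, 0])
r = np.hypot(edges[:, 0], edges[:, 1])
bins = np.digitize(az, np.linspace(-np.pi, np.pi, 361))
envelope_r = np.array([r[bins == b].max() for b in np.unique(bins)])

# Both numbers agree (~1020 m): the envelope is the wavefront at t + dt
print(envelope_r.mean(), v * (t + dt))
```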


ENERGY DECAY

As the seismic signal travels downward there is energy decay caused by spreading loss and absorption. The energy per unit area of the wave front is inversely proportional to the square of the distance from the source; thus the wave's amplitude is inversely proportional to the distance traveled.

The amplitude spreading loss in dB = 10 log (R2/R1)

The amplitude absorption loss in dB = 10 log e^(-A(R2-R1)) = 4.3 A(R2-R1)

Accounting for both types of losses, the amplitudes are related by A2 = A1 (R1/R2) e^(-A(R2-R1))

If we assume a standard value of 0.25 dB per wavelength for A, where R1 and R2 are the radial distances from the shot, V is the velocity of sound through the material, and f is the predominant frequency,

then the absorption loss in dB = 1.1 f (R2-R1) / V

According to this formula, higher frequencies suffer greater energy loss than lower frequencies, which explains the loss of high frequencies with depth.

In true amplitude recovery (TAR) we compensate for these losses and recover the signal's true amplitude as well as we can.
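A minimal sketch of such a gain correction, directly inverting the dB relations quoted above under a constant-average-velocity assumption (function and parameter names are illustrative, not the course's own software):

```python
# Toy TAR gain: undo 1/R spreading and frequency-dependent absorption.
import numpy as np

def tar_gain(trace, dt, v=2000.0, f_dom=30.0, r1=100.0):
    """trace: sampled amplitudes; dt: sample interval (s);
    v: assumed average velocity (m/s); f_dom: predominant frequency (Hz);
    r1: reference radius (m)."""
    t = np.arange(len(trace)) * dt
    r = np.maximum(v * t, r1)                # radial distance from the shot
    spread_gain = r / r1                     # undo amplitude ~ 1/R decay
    absorb_db = 1.1 * f_dom * (r - r1) / v   # absorption loss per the text
    absorb_gain = 10.0 ** (absorb_db / 10.0) # invert the 10*log10 convention
    return trace * spread_gain * absorb_gain

# Usage: a synthetic decaying trace is (approximately) rebalanced.
dt = 0.004
trace = np.random.default_rng(0).normal(size=1000) * np.exp(-np.arange(1000) * dt)
balanced = tar_gain(trace, dt)
```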


REFLECTION COEFFICIENTS

Energy incident on any subsurface interface is transmitted and reflected, and the amplitude and phase, or polarity, depend on the acoustic properties of the media on either side of the interface.
Acoustic impedance is the product of density and velocity. The reflection coefficient is given by the following equation:

Rc = (ρ2V2 - ρ1V1) / (ρ2V2 + ρ1V1)


The amplitudes are related as follows: Arefl = Rc · Aincident

Refraction, reflection, and transmission of rays are determined by Snell's law and the Zoeppritz equations.

Snell's law: sin i / V1 = sin r / V2

Snell's law holds for all angles.

Critical refraction occurs when a wave arrives at an interface at such an angle that the transmitted wave travels along the interface boundary; this refracted wave is also known as a head wave.

The critical angle is determined by sin ic = (V1/V2) sin 90° = V1/V2
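The sketch below works these relations through numerically for a hypothetical two-layer model (all values illustrative):

```python
# Normal-incidence reflection coefficient, Snell's law, and critical angle.
import numpy as np

rho1, v1 = 2200.0, 2500.0   # upper layer: density (kg/m^3), velocity (m/s)
rho2, v2 = 2400.0, 3500.0   # lower layer

# Reflection coefficient from the acoustic impedances
z1, z2 = rho1 * v1, rho2 * v2
rc = (z2 - z1) / (z2 + z1)

# Snell's law: sin(i)/v1 = sin(r)/v2
i = np.radians(20.0)
r = np.arcsin(np.sin(i) * v2 / v1)       # transmitted-ray angle

# Critical angle: sin(ic) = v1/v2 (requires v2 > v1)
ic = np.degrees(np.arcsin(v1 / v2))

print(f"Rc = {rc:.3f}, refracted angle = {np.degrees(r):.1f} deg, ic = {ic:.1f} deg")
```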

For an incident plane P-wave of unit amplitude, continuity of tangential displacement (x-direction) requires that

(1 + A) sin ap1 + B cos as1 = C sin ap2 - D cos as2 ,

where the P-wave displacement is positive in the direction of propagation and the S-wave displacement is positive to the right of the direction of propagation. A, B, C, and D are the amplitudes of the reflected P-wave, reflected S-wave, transmitted P-wave, and transmitted S-wave, and ap1, as1, ap2, as2 are the corresponding P- and S-wave angles in the two media.

Continuity of normal displacement (z-direction) requires that

(1 - A) cos ap1 + B sin as1 = C cos ap2 + D sin as2

In the general case of an interface between two solids, when the angle of incidence is not zero, four waves are generated - a reflected P-wave and S-wave and a transmitted P-wave and S-wave - given by the

Zoeppritz Equations

(-1 + A) sin 2ap1 + (Vp1/Vs1) B cos 2as1 = -(ρ2Vs2²Vp1 / ρ1Vs1²Vp2) C sin 2ap2 + (ρ2Vs2Vp1 / ρ1Vs1²) D cos 2as2 , and

(1 + A) cos 2as1 - (Vs1/Vp1) B sin 2as1 = (ρ2Vp2 / ρ1Vp1) C cos 2as2 + (ρ2Vs2 / ρ1Vp1) D sin 2as2

Seismic reflections are the result of any significant change in the density or velocity of the material that the wave front travels through. Therefore the seismogram gives detailed information about the variations in geological layers; the variations in structure, stratigraphy, and lithology; and the delineation of hydrocarbons. The density and velocity characteristics of sedimentary rocks have a wide range.
For most clastics, carbonates, and sands the reflections are produced by changes in acoustic impedance.
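In practice the exact Zoeppritz system above is solved numerically; for AVO work it is commonly replaced by the Aki & Richards small-contrast linearization. The sketch below implements that widely used approximation, not the course's own software, and all model values are hypothetical:

```python
# Aki & Richards three-term approximation to the P-P reflection coefficient.
import numpy as np

def rpp_aki_richards(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Approximate Rpp versus incidence angle for small layer contrasts."""
    th = np.radians(theta_deg)
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    k = (vs / vp) ** 2
    a = 0.5 * (dvp / vp + drho / rho)                            # intercept
    b = 0.5 * dvp / vp - 4 * k * dvs / vs - 2 * k * drho / rho   # gradient
    c = 0.5 * dvp / vp                                           # far-angle term
    return a + b * np.sin(th) ** 2 + c * np.sin(th) ** 2 * np.tan(th) ** 2

angles = np.arange(0, 41, 5)
print(rpp_aki_richards(2500, 1200, 2200, 3500, 1900, 2400, angles))
```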


The following figure is from Sheriff's Encyclopedic Dictionary of Exploration Geophysics.

The ability to resolve these characteristics is a function of the following:

• Velocity and density contrast
• Bed thickness
• Signal-to-noise ratio
• Frequency content
• Distortions

Seismic waves are the sum of many complex signals, each with a constant phase, amplitude, and frequency.
Low-frequency signals are useful in defining the major changes at large impedance contrasts, such as thick low-velocity layers followed by high-velocity carbonates. High frequencies will define the thin layers.

Stacking-velocity analysis

We will consider some practical aspects of computing interval velocities from CDP reflection time measurements. We will outline some procedures that should be adhered to in order to successfully apply the generalized "Dix-type" formulas to measurements obtained in a real geophysical world.
Our travel time inversion algorithms require that the real earth be closely approximated by the models
considered.
As in many other inversion algorithms, "idealized" surface measurements are required; such
measurements are derived from "real" surface measurements only after applying certain corrections.
Geophysicists who have previously computed interval velocities from either stacking- or migration


velocities know that, typically, a host of often maddening problems is encountered when working with real
seismic data. Some of these problems result from applying invalid corrections to the surface measurements.

The most critical parameter that must be obtained from CDP reflections is the NMO velocity VNMO. At best, the stacking velocity VS equals VNMO for infinitesimally small offsets only. A number of factors influence and bias VNMO and VS.
These quantities are commonly computed in the digital computer by a procedure known as a stacking-
velocity analysis (SVA). A successful SVA requires adequately preprocessing CDP gathers and choosing
some coherency measure for the computation of velocity spectra.
Picking, validation, and smoothing schemes may follow after the spectra are computed. In addition to
providing estimates of VS, these schemes generally also make use of stack times and time dips to increase
the overall reliability of all picked values. Though SVA has been considered primarily for determining the
NMO corrections needed in CDP stacking, it is also a basic source of the observations required for
computing interval velocities. In this application, SVA must be subjected to special high-resolution and
accuracy requirements.

Since in the presence of long spreads and curved velocity boundaries stacking velocities generally do not
equal NMO velocities, the removal of the spread-length bias must be given particular attention. Even when
all necessary corrections are performed and interval velocities are obtained from accurately estimated
observations in the surface seismic data, the total process of computing interval velocities might still not be
considered complete. The interval velocities must, in all instances, be further subjected to some final
statistical averaging and uncertainty estimation schemes in order to increase their ultimate value for
interpretation.

Factors affecting observed velocity estimates

There are many factors that make picking velocities complex, e.g. thin layering, vertical and horizontal
velocity gradients, near-surface or distributed velocity anomalies, faulting, and anisotropy.

Among the numerous factors are those associated with data acquisition; examples include maximum offset
and in-line offset, stacking multiplicity, and source and receiver characteristics.
In a marine environment, the influence of source and streamer depth as well as streamer feathering must
be considered.
Some factors relate to specifics of wave propagation, such as static shifts and change of wavelet character with offset and reflection time. Further complexity arises when primary events interfere with other neighboring primaries or with multiples and diffractions.

Data processing parameters also can have considerable influence. These include parameters of the stacking velocity analysis program, such as muting, time gates, velocity sampling, and choice of coherence measure.
Computations reflect the interpreter's bias in the selection of horizons for analysis, always with
observations inevitably contaminated by noise and measurement error. Record quality, with regard to
either reflections or diffractions, can, of course, range from good to very poor. The signal-to-noise ratio
generally degrades with increasing reflection time, making reflection time estimation less accurate just
when greater accuracy is needed.
The higher frequencies required to delineate layers are the ones most attenuated by the earth. Their amplitudes also decrease most rapidly in transit through thin layering. The influence of the bandwidth of reflections on VS has been investigated by Stone (1974), who shows that a decrease in bandwidth generally is accompanied by a decrease in the value of VS.


Stacking velocity is a parameter which is quite sensitive to a large number of factors.

The role of the seismic interpreter

Interpreters need not be concerned with the complexity of the algorithms, nor need they have an intuitive understanding of the dependence of VNMO or VS on the layer boundaries or layer velocities. Such understanding is likely only when one restricts consideration to simple homogeneous plane-layer velocity models such as those discussed by Levin (1971).
Interpreters who want to avail themselves of the above inversion algorithms should, however, be constantly
aware of the well defined limitations of models used and the exact meaning of the input parameters
required for executing the computations.
It is their task to restrict application of the sophisticated algorithms to good-quality input data, to concentrate on reducing uncertainties in measurements, and to detect and verify conditions which may violate the assumptions. They should exercise interpretational judgment and measure the results against general geologic likelihood.

Discriminating between desired and undesired information and recognizing conditions that oppose the
basic assumptions for the inversion algorithms can involve a significant expenditure of an interpreter's
time. However, a substantial amount of this work can be automated. Interactive processing can, for
instance, be of considerable help at several critical phases of an analysis. Interpreters must balance the cost
of expending such an effort against the cost of not doing so; the quality of a derived subsurface model
primarily depends upon the interpreter's effort.

Accuracy considerations

Considering stacking velocities for the purpose of computing meaningful interval velocities puts large resolution and accuracy demands on a stacking velocity analysis. These demands can be satisfied only if the CDP gathers used for analysis are properly preprocessed.
The choice of an adequate coherency measure is of considerable importance for computing velocity spectra.

Depending upon the ultimate use, velocity accuracy requirements can range widely in seismic processing
and interpretation. The more accurate end of the range, however, may be required for character detail
studies of broadband amplitude-preserved data as performed in processing hydrocarbon zones (Larner,
1974).
If it were not for computing seismic interval velocities, the subject "stacking velocity analysis" would be
almost a closed chapter in seismic exploration. Uncertainties in picked stacking velocities, however, can
give rise to largely amplified uncertainties in computed interval velocities. Consequently, any research
aimed at further improving the resolution of an SVA remains a challenging task.

The estimation of meaningful interval velocities for purposes of lithological studies imposes strict accuracy requirements on stacking velocities, often within less than half a percent.
Indeed, a requirement of three percent accuracy in interval velocity over a 400-ft interval at a depth of
10,000 ft is unachievable; it requires an error less than one in a thousand for the RMS velocities. To aim for
such high resolution would not only be unrealistic but also naive since, for the many reasons outlined
above, a chosen model could never completely characterize the real earth.


The larger the interval thickness, the smaller will be the error in computed interval velocity. However, a velocity estimate over a thick interval is less interesting than one over a thin interval. To compute velocities for thin intervals with some confidence, one must increase the statistical reliability of the observed surface measurements.
To estimate analytically the accuracy of computed interval velocities, one has to start by estimating the uncertainties (e.g., the standard deviation of the scatter) of the arrival times of primary CDP reflections (Bodoky and Szeidovitz, 1972). After that, one can analytically estimate the accuracy of the derived stacking velocities (Al-Chalabi, 1974) and finally the accuracy of computed interval velocities (Shugart, 1969; Kesmarky, 1976; Nakashima, 1977). Though much can be learned from such error analyses, they are no substitute for care taken in recording, processing, and correcting seismic data.
Also, the constraints provided by geologic plausibility are difficult to include in a rigorous mathematical
analysis.
Normally, for a flat layered earth, an error in interpreted RMS velocity at time T is magnified by a factor of about 1.4 T/∆T into an error in interval velocity for the time interval ∆T.
However, this rule of thumb remains to be investigated in detail. From lessons learned through computer modeling, we also feel that the accuracy requirements for all other parameters, such as two-way time and normal time gradient, are less stringent than those put on the stacking velocity.
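To make the magnification concrete, here is a small sketch of Dix-type interval-velocity inversion with hypothetical picks, showing how a sub-percent RMS pick error blows up over a thin interval:

```python
# Dix interval velocities and their sensitivity to RMS-velocity pick errors.
import numpy as np

def dix_interval(v_rms, t0):
    """Interval velocities from RMS velocities at zero-offset times t0 (Dix)."""
    v2t = np.asarray(v_rms) ** 2 * np.asarray(t0)
    return np.sqrt(np.diff(v2t) / np.diff(t0))

t0 = np.array([1.00, 1.10])           # two-way times bracketing a thin interval, s
v_rms = np.array([2500.0, 2550.0])    # picked RMS (stacking) velocities, m/s

v_int = dix_interval(v_rms, t0)

# Perturb the deeper pick by 0.5 percent and watch the interval velocity move.
v_pert = v_rms * np.array([1.0, 1.005])
v_int_pert = dix_interval(v_pert, t0)
rel_err = (v_int_pert[0] - v_int[0]) / v_int[0]
print(v_int[0], v_int_pert[0],
      f"{100 * rel_err:.1f}% interval error from a 0.5% pick error")
```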

Communication theory and statistics, considered in the light of the presently available computer technology,
are essential to get the best results from high-resolution SVA.

Preprocessing CDP gathers

Optimum parameters must be selected for the various processes following the computation of velocity
spectra. One should first ascertain, however, that the program that computes the spectra will be provided
with suitably selected and well-preprocessed CDP gathers.

The selection of suitable CDP gathers away from faulted or otherwise disturbed zones is a matter of
routine. Having CDP gathers properly preprocessed essentially implies that the primary CDP reflections
should be properly aligned along CDP reflection time curves and enhanced against a background of
multiples, diffractions, and random noise. This can be achieved by various wavelet processing, signal
enhancement, and static-analysis methods. Their description could be the subject of an extensive treatment
in itself, from which we will refrain. Nevertheless we feel that the role of these topics should be briefly
reviewed to emphasize that successful computation of interval velocities cannot be seen outside the context
of general procedures for seismic high-resolution signal processing.

Filtering

A suite of digital-filtering techniques has evolved, which can be tailored to particular needs as required,
based on properties of the acquisition and recording systems as well as based on the geologic complexity of
a survey area.
One can select from among de-bubbling, de-reverberation, de-ghosting, wavelet shaping, zero-phase, band-
pass, and velocity filters to simplify reflection events on traces in a CDP gather.
Multiples and reverberations are persistent problems, particularly on marine seismic data.


Often they can be discriminated on the basis of their low stacking velocity relative to that of interfering
primary events. It is a common practice to sum a few successive CDP gathers in order to improve the
signal-to-random noise ratio.

This effort at reducing noise often involves a trade-off between improved accuracy and degraded lateral
resolution in velocity estimates.

Simple addition of CDP gathers is justified only if layers are horizontal and lateral velocity gradients
are reasonably small.

Even then, the summing of CDP gathers should be performed only after static corrections have been
applied. Attenuation of random noise can also be achieved by increasing the CDP multiplicity over a given
spread-length.
In the presence of dip and curvature of velocity boundaries, CDP gathers should not simply be added
together; rather, complicated procedures have to be designed to incorporate several neighboring gathers
into a single SVA.
These will not be discussed here, but it is obvious that computations of VNMO and VS for assumed models
can be helpful in this respect. For reasons of simplicity, let us assume in our further discussions that
primary reflections can be well detected in single CDP gathers.

Static corrections

As the CDP spread moves across low-velocity, near-surface anomalies, time variations are introduced into
the travel times of CDP reflections at different offsets.
The variations are due primarily to lateral velocity inhomogeneities or depth undulations in a low-velocity,
near-surface layer.
Primary CDP reflection energy must be correctly aligned along the desired hyperbola-like curves to utilize CDP gathers fully for velocity analysis. The alignment of all CDP reflections can often be achieved in the form of a constant time shift applied to each trace. The simple shifts are justified when travel paths for all reflectors and all offsets are sufficiently vertical near the surface. This assumption is generally acceptable for thin weathering layers of low velocity. Time shifts can then be assumed to be surface-consistent, so all traces for a common shot location receive the same shot static correction, and traces for a common receiver location receive the same receiver static correction.
Typically, static corrections are performed in two steps: a first correction for field statics and a second one for residual statics. Field static corrections are deterministic quantities based on up-hole times, short refraction lines, first arrivals, and the picking of predominant reflections. Residual static corrections are statistically derived quantities that can dramatically improve the coherence of seismic reflections. They are best obtained by automatic analysis of travel time information (Taner et al., 1974; Saghy and Zelei, 1975; Wiggins et al., 1976).

Automatic static analysis and correction procedures have progressed to the point where numerous approaches work well when the near-surface layers affecting the statics are of low velocity and are sufficiently thin. Thick weathering fill and deep water-depth variations give rise to violations of the assumption of surface consistency. As with long-wavelength statics (near-surface-caused time anomalies having low spatial wavenumber), which are not readily separable from subsurface structural variations, these particular static problems are far from being solved and remain vexing to processing personnel and interpreters alike.
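As a concrete illustration of the surface-consistent model behind these corrections, here is a toy least-squares decomposition of picked time shifts into shot and receiver terms (synthetic numbers; production schemes such as those cited above add structural and residual-moveout terms and iterate):

```python
# Surface-consistent statics as a linear system: t(i,j) ~ s(i) + r(j).
import numpy as np

n_shots, n_recs = 4, 6
rng = np.random.default_rng(0)
true_s = rng.normal(0.0, 8.0, n_shots)   # shot statics, ms
true_r = rng.normal(0.0, 8.0, n_recs)    # receiver statics, ms

# Observed time shifts (e.g., crosscorrelation picks) for every trace
obs = true_s[:, None] + true_r[None, :] + rng.normal(0.0, 1.0, (n_shots, n_recs))

# One equation per trace; unknowns = [s(0..), r(0..)]
A = np.zeros((n_shots * n_recs, n_shots + n_recs))
b = obs.ravel()
for i in range(n_shots):
    for j in range(n_recs):
        A[i * n_recs + j, i] = 1.0            # shot term
        A[i * n_recs + j, n_shots + j] = 1.0  # receiver term

# The system is rank deficient (a constant can be traded between shot and
# receiver terms), so lstsq returns the minimum-norm split.
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
est_s, est_r = sol[:n_shots], sol[n_shots:]

# Relative statics are recovered; residuals are at the pick-noise level.
print(np.round((est_s - est_s.mean()) - (true_s - true_s.mean()), 2))
```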


Imperfect static corrections leave CDP reflection time curves erratic; the resulting best hyperbolic fits then result in misleading stacking velocities and wrong interval velocities.
The effects can be devastating, as has been repeatedly shown by Prescott and Scanlan (1971), Schneider (1971), Miller (1974), Pollet (1974), and Booker et al. (1976).
Near-surface anomalies are often the most disruptive factor in a velocity analysis. In fact, the large swings that too often exist in computed stacking velocities made their use in the computation of interval velocities suspect for a long time. Now that the nature of these variations is better understood, the method has re-emerged as an accurate and powerful tool.

Though undesired in most respects, near-surface time anomalies can provide useful supporting evidence of subtle changes in the stratigraphy within the overburden. Obvious errors in the values of VS resulting from anomalies in the overburden can, in fact, help in upgrading the long-wavelength components of static corrections (Paturet, 1977). A deconvolution method based on this idea was discussed by Lucas et al. (1975).

Because stacking velocity becomes increasingly sensitive to timing errors as depth increases, the velocity errors increase with increasing reflector depth. The errors extend laterally over distances approaching the maximum offset and have magnitudes that vary inversely with the square of the maximum offset.
Although the anomalous stacking velocities may be proper for best alignment of primaries locally, any other use of them is not justifiable. For example, their use for converting from time to depth would result in large erroneous undulations of deep reflectors. Likewise, computed interval velocities would exhibit even more exaggerated variations.

The anomalous overburden problem provides the greatest rationale for a continuous determination of VS
along a seismic line. In the absence of continuous velocity coverage, anomalies in the overburden can often
go unnoticed.


With the renewed interest in the Alaska North Slope and the Mackenzie Delta, we need to present some solutions to the difficult imaging problems that explorationists face in these areas due to the existence of permafrost conditions.

The imaging problem is caused by extreme lateral velocity variations in the near surface layers. Conventional time domain
approaches, such as the application of surface consistent statics, do not provide an optimal solution since the problem is not
surface consistent.

Depth imaging tools allow us to apply a dynamic static, that is, a depth model consistent solution to the problem.

[Figure, from Michael W. Smith, "Permafrost in the Mackenzie Delta, Northwest Territories," Geological Survey 1976: cross-section through a shifting channel area.]


[Figure, from the same survey: steady-state permafrost configuration under rivers.]

[Figure, from the same survey: permafrost regression, with time, under a river 100 m wide.]


[Figure, from the same survey: computed temperature field under a traverse line.]

[Figure, from the same survey: permafrost history under a shifting channel; a: initial position (steady state).]


[Figure, from the same survey: permafrost history under a shifting channel; b: present day (460 years since time 0, rate of shift = 0.5 m/year).]

First, we will review the characteristics of the permafrost problem. This will help to explain why conventional processing
techniques do not work well in this environment and it will point to the type of solution which is necessary.

Next, we will look at options for defining the permafrost velocity model. Whether we use a conventional statics approach or a dynamic statics approach, we must be able to define a reasonably accurate velocity model.

Once the near-surface model has been determined, we have a number of options at Kelman from which to choose. The option we go with will depend on factors such as the severity of the problem, time constraints, and cost constraints.


What Are the Characteristics and Effects of Permafrost?

1. IBPF: ice-bearing permafrost - frozen with sufficient ice content such that the acoustic velocity changes.

2. Surrounding material: 5,000-8,000 ft/sec; permafrost: 8,000-14,000 ft/sec.

3. Variable severity in shale and porous sandstone.

4. Thawed areas under lakes and rivers cause ray bending and large statics - must compute statics to the top of permafrost and use RTNMO surface referencing.


What Are the Characteristics and Effects of Permafrost? (continued)

5. Alaska North Slope - a wedge shape creating lateral velocity change; at Endicott this causes lateral mispositioning errors on the order of 1,000 feet. The base of permafrost may be gradational, with no distinct event to pick. Given the spatial variability within the permafrost layer, we need only determine the average velocity (not detailed structure) to correct lateral mispositioning errors.

6. Beaufort Continental Slope - a series of deltaic sands; a very complex velocity model which cannot be defined in detail.

7. Shallow thin permafrost lenses - cause lateral changes in offset-dependent amplitude and phase of deeper reflections. Sometimes these thin lenses are undetectable on the seismic section, causing interpretive confusion, and they can cause non-hyperbolic moveout.


First Break Effects of High Velocity Stringers

The high velocity stringers can produce a shingling effect on the first breaks, as shown. They may be very thin, and therefore not produce a significant vertical time static effect; however, they do tend to confuse the interpretation of the first breaks.

One of the key goals is to identify Va, which represents the top of the thick permafrost layer. The unfrozen layer may vary dramatically in thickness and velocity, so it is very important to use refraction information to help predict the associated static effects.


Permafrost Area Characteristics

The above illustrates that a very complex near-surface velocity model may exist in permafrost areas.

• At location "A" the unfrozen layer thickens under the lake due to the sunlight-focusing effects of the water surface during the summer months. These thaws result in very large statics.
• At "B" we see thin layers of frozen material embedded inside the unfrozen material. These layers cause the confusing patterns on the first breaks illustrated on the previous slide.
• The green layers at "C" represent unfrozen layers within the permafrost. These may be caused by sand-to-shale lithology changes or porosity changes in sandstone.

E1 is the event which represents the top of the permafrost layer.
E2 represents the base of the permafrost layer - this event may be gradational and difficult to identify as a reflection.
E3 represents some hypothetical sub-permafrost layer that is flat in depth. We will discuss the structural and imaging effects on E3 that are caused by the existence of the permafrost layer.

Not illustrated here is the possibility that we can also have high velocity regions due to frozen gas condensate. This situation can produce apparent highs which can easily be misinterpreted as true structure.


The Simplified Depth Model

The above cartoon represents a simplified depth model which can be used to approximate the effects of the more complex real-earth model.
Refraction techniques will allow us to determine the model down to event E1, which is the top of the permafrost layer.

A spatially varying permafrost blocky velocity model can be used to approximate the average characteristics of the
permafrost layer. A reasonably accurate determination of the split between thickness change effects and average velocity
change effects must be made.

We have added an Event 3 which represents some hypothetical flat event (in depth) which we will use to illustrate the effects
that this complicated overburden has on our ability to image deeper events. For the purposes of this model building exercise
we will assume that these unfrozen clastics can be represented with a constant velocity.


Time Domain Artifacts Caused By Permafrost Conditions

The time structure and imaging effects of the permafrost overburden on our ability to image the deeper events are illustrated
above.
Note that Event 3 suffers from general spatial structural pull ups and push downs due to the changing thickness of the high
velocity layer.
The yellow box represents a prestack gather display of Event 3, after conventional time domain NMO has been applied, at that spatial location on the line. Note that the inside traces (at the left of the box) are fairly well flattened after conventional NMO; however, the farther offsets are badly over-corrected.
This over-correction error is clearly non-hyperbolic.

The reason that time domain NMO gives this result is that it must assume that all traces within any one CDP gather will be corrected with the same velocity, and that the correction pattern is hyperbolic. Of course, in this situation the velocities which the far offset traces observe are much higher than those of the near offsets, due to the thickening of the permafrost layer.
This non-hyperbolic characteristic means that we cannot possibly image this data optimally using surface consistent statics and conventional NMO.

Clearly our statics solution must correct to the base of the permafrost, and it cannot be constrained to be surface consistent - it must be a dynamic static correction which is depth model consistent.


Options For Defining The Velocity Model

1. Refraction statics: defines the model from the surface to the top of permafrost.
   • FBPLOT: Gardner time-delay approach for 2D.
   • GLI2D and GLI3D: both conventional and tomographic solutions.

2. Time domain velocity analysis: provides a smoothed RMS velocity cube which can be used to help define the overall structure and average velocity of the permafrost layer.
   • RAVEN: interactive 2D and 3D semblance, gather, and stack-panel stacking velocity analysis.
   • MVEL2D and MVEL3D: 2D and 3D velocity stack movie approach to stacking velocity analysis.

Now, let's look above at our options for defining the velocity model.
Our options for determining the model to the top of the permafrost layer (to event E1) are as listed here. We have been very successful in using the FBPLOT module in areas which have very large, short-period statics, such as the case under lakes.
Conventional time domain velocity analysis can sometimes be used to determine the overall structure and average velocity of the permafrost layer.
However, these tools do not easily generate a velocity model in depth to the base of the permafrost.

Options For Defining The Velocity Model (continued)

3. Depth model building software.
   • MODANI: interactive 2D depth model building.
   • HVEL3D: interactive 3D depth model building.
   • Paradigm GeoDepth® 3D tomographic model building.


MODANI and HVEL3D are interactive depth model building tools developed at Kelman. Normally we would be inputting depth-migrated gathers to these tools; however, there is no reason why we cannot also input raw gather data and attempt to derive a blocky near-surface model.
We can use Paradigm's GeoDepth 3D tomographic model building tool, provided the base-of-permafrost event, and/or a very shallow sub-permafrost event, is reasonably pickable on prestack data.

Depth Imaging Processing Flow

On the left side of the flow chart we are iterating depth model updates based on prestack gather data, and on the right side we are iterating model updates based on stack image movie data.


What Can We Do With the Permafrost Layer Depth Model?

A. Assume the permafrost effects can be removed with surface consistent statics, and compute them to replace the permafrost with unfrozen material.

B. Continue to build the entire depth model and do full prestack depth migration.

C. Treat the problem as a lateral-velocity-induced STATIC and IMAGING problem. To accomplish this we must apply a form of time and space dynamic static as follows:

   1. Apply 2D or 3D model based NMO to remove the non-hyperbolic moveout associated with the lateral variations in the permafrost layer. Output the prestack gathers in depth.

Now, once we have our depth model, what are our options?

Option A is the standard approach, which assumes that the permafrost layer effects can be modeled using a surface consistent assumption.
This may in fact be sufficient as a solution in situations where the lateral velocity change is not too rapid and the permafrost layer thickness is not too large. It is a methodology which has served the industry in the past; however, in many cases we should be able to improve upon it.

Option B is the full prestack depth migration solution. This, of course, is technically the best solution. Unfortunately, it is a very time consuming and costly undertaking.

Option C allows us to stay with a time domain solution; however, it uses a depth model based NMO to apply a depth model consistent, dynamic static solution.


What Can We Do With the Permafrost Layer Depth Model? (continued)

   2. Generate a second depth model in which we have defined the permafrost layer to have the same velocity as the sub-permafrost layer. Using this model, apply CDP consistent depth to time conversion.

   3. Apply reverse NMO using the sub-permafrost time- and space-constant velocity.

   4. Resume time processing as normal.


Model Based NMO Output To Depth (Structural and Imaging Problems Solved)

The first step is to output the prestack data in the depth domain after model based NMO is applied. Under this process, the NMO for each trace is determined by sampling the actual depth model velocity field through which the ray is believed to have passed. This enables the correction of non-hyperbolic effects due to lateral velocity change.
Note that in this domain the structural pull up and push down effects have been removed, as well as the imaging errors.

Model B: Permafrost Is Removed and Data Is Squeezed Back to Time

29
30 Advanced seismic processing
Clasina TerraScience Consultants International Ltd.

Next, we create a model which is exactly the same as the previous model except that the permafrost layer has been removed.

We then use this model to squeeze the data back to time in a CDP consistent fashion. This can then be followed with a time domain de-NMO process using the same model. At this point we can proceed with time domain processes.

Note that both the structural effects and the imaging effects of the permafrost have been removed.

It is important to understand that this is not a full depth migration solution.

In other words, we cannot simply follow this process with a poststack depth migration. If it is desired to fully correct for the lateral positioning errors which have been caused by the permafrost, we must use depth migration with the permafrost layer defined.

If the use of a poststack depth migration is desired, we would have to use the model with the permafrost in it for the squeeze back to time. This will leave us with the original time structure errors but with improved imaging.

Depth Model Consistent Statics (flow summary)

1. Build simple permafrost depth model A.
2. Apply depth model A based NMO; output gathers to depth. Check that the spatial structure is reasonable and the depth gathers are flat.
3. Create model B without the permafrost velocities.
4. Apply CDP consistent depth to time conversion using model B.
5. Apply reverse NMO using model B.
6. Resume the normal time processing flow, now with more accurate velocity picking and surface consistent reflection statics.

Above we illustrate the process in flow chart form.

One of the benefits to note, shown in the final step, is that we can achieve better residual statics and velocity determination after the non-hyperbolic effects have been removed.
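A toy sketch of the depth-to-time "squeeze" in step 4 above, mapping each depth trace to time with its own v(z) column from model B (an illustrative implementation, not Kelman's software):

```python
# CDP-consistent depth-to-time conversion: each depth trace is resampled to
# two-way time using the velocity column of its own CDP.
import numpy as np

def depth_to_time(trace_z, dz, v_of_z, dt, n_t):
    """trace_z: samples at depths k*dz; v_of_z: interval velocity per sample;
    dt, n_t: output sample rate (s) and trace length."""
    # Two-way time down to each depth sample: t(z) = 2 * sum(dz / v)
    twt = 2.0 * np.cumsum(dz / np.asarray(v_of_z))
    t_out = np.arange(n_t) * dt
    # Interpolate from the irregular t(z) grid onto the regular time grid
    return np.interp(t_out, twt, trace_z, left=0.0, right=0.0)

# Usage: with a constant-velocity column we recover the familiar t = 2z/v.
dz, v = 5.0, 2500.0
trace_z = np.zeros(400); trace_z[200] = 1.0        # spike at z ~ 1000 m
out = depth_to_time(trace_z, dz, np.full(400, v), 0.004, 500)
print(np.argmax(out) * 0.004, 2 * 1000.0 / v)      # both ~0.8 s
```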


Diffraction patterns

Diffracting subsurface points give rise to similar hyperbola-like CDP arrivals. The diffraction curve is
precisely hyperbolic, however, only when the medium is homogeneous and the CDP is directly above the
diffracting source.
Often reflections and diffractions in a CDP gather can be discriminated from one another only with
difficulty.
For small offsets, primary events within a CDP record fall upon symmetric hyperbola-like curves
regardless of whether they are associated with reflectors or diffractors. While diffraction amplitudes are
generally weak, their implied velocity values can become anomalously large when the CDP spread moves
away from the diffractor location.
The diffractor can be looked upon as a small reflecting sphere. As the CDP spread moves away from the
sphere, the "reflection" comes from an increasingly steep portion of the "reflector surface." The increasing
dip implies an increase in the (pseudo-) stacking velocity for which the diffraction event stacks best.

Diffraction pseudo stacking velocities can generally be discerned from reflection stacking velocities if interpreters study them in conjunction with CDP-stacked and time-migrated sections (Dinstel, 1971). The lateral variation of time and velocity will reveal the lateral distance of a diffractor from the considered SVA locations.

Stacking velocities for both diffractions and multiples constitute a similar kind of noise confusing the
interpretation of true primary stacking velocity functions from velocity spectra.

While stacking velocities for multiples are typically lower in value than those of primaries at comparable
two-way times, pseudo stacking velocities resulting from diffractions are often higher.
• The suppression of short-period multiples prior to CDP stacking is achieved partly by properly
applied deconvolution methods.
• The suppression of diffractions prior to performing a stacking velocity analysis can be achieved by
individually time migrating all common-offset sections (Doherty and Claerbout, 1976).

However, the "analysis velocity" derived from such time-migrated common-offset sections leads to the
stacking velocity as only when the medium velocities do not vary laterally.
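The sketch below puts numbers to the "small reflecting sphere" picture above: traveltimes from a point diffractor are fit with a hyperbola at CMPs progressively farther away, and the best-fit pseudo stacking velocity climbs (a constant-velocity toy model with illustrative values):

```python
# Why diffractions imply anomalously high stacking velocities: fit a
# hyperbola t^2 = t0^2 + x^2/Vps^2 to point-diffractor traveltimes at CMPs
# at increasing lateral distance L from the diffractor.
import numpy as np

v, z = 2500.0, 1500.0                      # medium velocity (m/s), diffractor depth (m)
half_offsets = np.linspace(50.0, 1500.0, 30)

for L in (0.0, 500.0, 1000.0, 2000.0):     # CMP distance from the diffractor
    # Two-way time: source leg plus receiver leg to the diffractor
    t = (np.hypot(z, L - half_offsets) + np.hypot(z, L + half_offsets)) / v
    t0 = 2.0 * np.hypot(z, L) / v
    # Least-squares slope of t^2 - t0^2 against full offset squared
    x2 = (2.0 * half_offsets) ** 2
    slope = np.sum(x2 * (t ** 2 - t0 ** 2)) / np.sum(x2 ** 2)
    v_pseudo = 1.0 / np.sqrt(slope)
    print(f"L = {L:6.0f} m   pseudo stacking velocity = {v_pseudo:7.0f} m/s")
# At L = 0 the fit returns the true medium velocity; it grows with L.
```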

Coherency measures

A good coherency measure for the detection of CDP reflections in preprocessed (NMO-corrected) CDP families is the human eye. Various techniques exist for routine extraction of VS from CDP gathers; these are based on either summation of, or correlation between, CDP reflections and involve various choices of mathematical normalization.
These techniques map coherent CDP reflection events from the time-offset domain into the zero-offset time - stacking velocity domain. The output is a velocity spectrum.
At a particular two-way time t0, a coherency measure is computed for various hyperbolic trajectories corresponding to a specified series of test stacking velocities Vs, such that Vmin ≤ Vs ≤ Vmax.


A coherency value is computed for each test stacking velocity at the chosen time. The position of the
maximum value of coherency then determines the optimum stacking velocity of a particular event.
By incrementing the t0 time by an appropriate amount, we repeat a search such as this for different t0
values over the entire record length of interest.
From this we can estimate the optimum stacking velocity as a function of the two-way zero-offset time.
The comparative analysis of coherence techniques for multichannel data is a broad topic. The most common coherency measures used for computing optimum stacking velocities have been subjected to numerous practical and controlled-performance tests (Garotta and Michon, 1967; Schneider and Backus, 1968; Taner and Koehler, 1969; Robinson, 1970b; Robinson and Aldrich, 1972; Montalbetti, 1973; and Sattlegger, 1975). Many of these measures are well summarized by Neidell and Taner (1971), in which the concept of semblance is introduced. Semblance, when properly interpreted, is a particularly powerful discrimination tool. We shall follow Neidell and Taner (1971) and Montalbetti (1973) in the discussion below, taken from Peter Hubral's Interval Velocities from Seismic Reflection Time Measurements.

Let us denote any digitized trace in a CDP gather as f(i,t), where i refers to the channel index and t is the time index. The trajectory across the gather corresponding to a particular test stacking velocity Vs and two-way zero-offset time t0 is denoted by t(i), so that the stacked amplitude for M input channels is given by

    S_t = Σ (i=1..M) f(i, t(i))                                           (1)

Since a trajectory need not pass through discrete sample points, f(i, t(i)) represents an interpolated value on the ith trace. The absolute value of S_t will exhibit a maximum when the trajectory t(i) corresponds to the optimum stacking velocity Vs; that is, when the events are properly aligned before addition.
A normalized version of expression (1) can be written in terms of absolute amplitude variation as

    COH = | Σ (i) f(i, t(i)) | / Σ (i) | f(i, t(i)) |  =  | S_t | / Σ (i) | f(i, t(i)) |      (2)

COH, the normalized stacked amplitude, takes on values ranging from unity, when signals are identical and have the same polarity, to zero, when they are completely random or out of phase. Thus,

    0 ≤ COH ≤ 1.                                                          (3)

Crosscorrelation is a related coherency measure. For each particular two-way time t0 and velocity Vs, zero-lag crosscorrelation sums are computed over moveout-corrected windows of some specified length T centered about t0. Since the actual trajectories of true reflections do not parallel one another, the window length T should be reasonably short.
The crosscorrelation sum is determined according to

    CC = Σ (t) Σ (k=1..M-1) Σ (i=1..M-k) f(i, t(i)) f(i+k, t(i+k))         (4)


where the summations over k and i refer to all possible channel combinations and the sum over t refers to the time window over which the crosscorrelation is computed.
CC in equation (4) is an unnormalized crosscorrelation sum which can be written more simply as

    CC = (1/2) Σ (t) [ ( Σ (i) f(i, t(i)) )² - Σ (i) f(i, t(i))² ]         (5)

or, equivalently,

    CC = (1/2) Σ (t) [ S_t² - Σ (i) f(i, t(i))² ]

Equation (5) shows the unnormalized crosscorrelation sum to be equal to half the difference between the output energy of the stack S_t and the input energy, where S_t is defined for trajectories (within the time gate) parallel to the one belonging to t0. Thus, if t(i) defines the trajectory of a coherent event across the input channels, the first term of equation (5) will be large with respect to the second, and CC represents a maximum.
Let us now consider normalized versions of equations (4) and (5). The usual normalization of equation (4) provides the statistically normalized crosscorrelation sum.


The semblance coefficient represents a normalized output/input energy ratio. It is computed over the width
of a hyperbolic time window and may be the most frequently used coherency measure in stacking and
migration velocity analysis.
Maximum coherency is achieved when the hyperbolic time window straddles the peak energy of a CDP
reflection. The surface representing the coherency measure as a function of two-way zero offset time and
test stacking velocity defines a velocity spectrum.

To ensure stability in stacking-velocity estimates, time windows should overlap by about half their window
length. They should not be shorter than about half the dominant period of the coherent wavelets of a CDP
reflection. The window length will thus generally fall into the range from 20 to 80 msec depending upon the
bandwidth of the signals. At the cost of loss in resolution, broadening the window length increases the eye
appeal of a velocity spectrum by reducing the velocity fluctuations in peak positions and by reducing the
confusion of weaker events. For detail studies, test stacking velocities are often incremented in steps of
about 20 m/sec. Muting early reflections (on the larger offset traces) is mandatory since wavelet character
often changes strongly with increasing angle of reflection. Both offset dependence of wavelet character and
possible interference with mode-converted and head waves can bias the stacking velocity unaccountably.
Muting large-offset reflections generally does little harm to the determination of Vs for shallow reflections
because the differential NMO times involved are large enough in the near traces to provide the necessary
resolution in stacking velocity.
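A compact sketch of the semblance velocity spectrum described above, after Neidell and Taner (1971): for each (t0, Vs) pair, amplitudes are interpolated along parallel hyperbolic trajectories inside a short window and the normalized output/input energy ratio is formed. The gather and all parameter names are synthetic, not any production system's:

```python
# Semblance velocity spectrum over a CMP gather (toy implementation).
import numpy as np

def semblance_spectrum(gather, offsets, dt, t0s, vels, win=0.024):
    """gather: (n_samples, n_traces); returns a (len(t0s), len(vels)) spectrum."""
    n_samp, n_tr = gather.shape
    half = int(win / (2 * dt))
    spec = np.zeros((len(t0s), len(vels)))
    cols = np.arange(n_tr)
    for it, t0 in enumerate(t0s):
        for iv, v in enumerate(vels):
            num = den = 0.0
            for w in range(-half, half + 1):
                # hyperbolic trajectory parallel to the one belonging to t0
                t = np.sqrt((t0 + w * dt) ** 2 + (offsets / v) ** 2)
                idx = t / dt
                i0 = np.clip(idx.astype(int), 0, n_samp - 2)
                frac = idx - i0
                # trajectories rarely hit exact samples: interpolate linearly
                amp = (1 - frac) * gather[i0, cols] + frac * gather[i0 + 1, cols]
                num += amp.sum() ** 2          # stack (output) energy
                den += (amp ** 2).sum()        # input energy
            spec[it, iv] = num / (n_tr * den + 1e-12)
    return spec

# Synthetic gather: a single reflection at t0 = 0.8 s with Vs = 2500 m/s
dt, offsets = 0.004, np.arange(24) * 100.0
gather = np.zeros((500, len(offsets)))
t_refl = np.sqrt(0.8 ** 2 + (offsets / 2500.0) ** 2)
gather[(t_refl / dt).astype(int), np.arange(len(offsets))] = 1.0

spec = semblance_spectrum(gather, offsets, dt,
                          t0s=np.arange(0.7, 0.9, 0.008),
                          vels=np.arange(2000.0, 3000.0, 50.0))
# The spectrum peaks near (0.8 s, 2500 m/s).
```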


Normal and dip moveout

Both normal moveout and dip moveout are key elements of modern seismic data processing.
We will give here a discussion of their nature and action, as well as the effects of dip, velocity variation, and anisotropy.
We will also show the relationship between dip moveout and prestack migration.

Normal moveout.

The term normal moveout has two meanings; it is both a seismic effect and a seismic processing step.
As shown in the figure below, a shot record is the collection of seismic traces generated when one source
shoots into many receivers. Dots below the reflector show subsurface reflection points. Halfway between
the source S and each receiver R is a point on the ground called the midpoint.
These midpoints are shown as black dots above the acquisition surface. When there is no dip, the midpoint
is directly above the reflection point. As the offset (source to receiver distance) increases, so does the travel
time from source to receiver. This characteristic delay of reflection times with increasing offset is called
normal moveout.

In the next figure, reflections can be seen in real data along with other kinds of events.
There are receivers on both sides of the shot in this case. The reflection events have the hyperbolic shape characteristic of normal moveout. From an imaging point of view, everything except reflection energy is considered noise.


NMO is a seismic processing step which flattens reflections in a common midpoint gather in preparation
for stacking.
Note that several reflection points from the second shot (dashed lines) were also reflection points for the first shot. This is called common midpoint (CMP) shooting. As the shots roll along, there will be many source-receiver pairs with the same CMP location.
The point of gathering multifold data is that we get redundant information about the reflection point, and
this redundancy can be used to reduce noise and create a more reliable image. Our goal is to process all
these traces and add them together (CMP stack) to make one trace at a CMP location.

NMO attempts to remove the hyperbolic curvature of reflection events.

It is removing the effect of offset. If this is done properly, then the reflection should come in at the same time for all offsets (since we have removed any travel time delay due to offset). So reflection events should be flat after NMO.
The second figure above shows the data after NMO processing.

It is important to realize that when there are velocity variations, dipping beds, etc., the reflection curve is not a simple hyperbola, and thus NMO alone is not really the right thing to do.

In the NMO-corrected picture above, we see the events are well flattened by NMO, but there are a few points of interest. What does NMO do to direct arrivals?
Since these were linear and not hyperbolic, NMO has not flattened them.

Also, note how wide these events look after NMO. This is because NMO actually operates by stretching the trace - and the shallower the event, the more it stretches. Since our goal is to eventually flatten all these traces and add them together to make one trace, keeping these stretched events would wipe out shallow reflections. We get rid of them by muting, and we can let NMO itself do the muting for us.


The idea is to keep track of how much stretch NMO is applying to the trace.

The stretch varies down the trace, largest at the top and smallest at the bottom.
The NMO mute sets a stretch limit, above which the trace is zeroed out.
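A sketch of this correction-plus-mute logic on a CMP gather. The linear interpolation and the t_in/t0 stretch measure are common choices, not necessarily those of any particular processing system:

```python
# NMO correction with a stretch mute (toy implementation).
import numpy as np

def nmo_correct(gather, offsets, dt, v_of_t, stretch_limit=1.5):
    """gather: (n_samples, n_traces); v_of_t: NMO velocity per output sample;
    stretch_limit: zero samples where t_in/t0 exceeds this (1.5 ~ 50% stretch)."""
    n_samp, n_tr = gather.shape
    t0 = np.arange(n_samp) * dt
    out = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        # input time that maps to output time t0 on this offset
        t_in = np.sqrt(t0 ** 2 + (x / v_of_t) ** 2)
        idx = t_in / dt
        i0 = np.clip(idx.astype(int), 0, n_samp - 2)
        frac = idx - i0
        trace = (1.0 - frac) * gather[i0, j] + frac * gather[i0 + 1, j]
        trace[idx >= n_samp - 1] = 0.0           # beyond the recorded trace
        # stretch measure: ratio of input to output time (infinite at t0 = 0)
        stretch = np.divide(t_in, t0, out=np.full(n_samp, np.inf), where=t0 > 0)
        trace[stretch > stretch_limit] = 0.0     # the NMO stretch mute
        out[:, j] = trace
    return out
```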

The Industry Status Quo

• The floating datum approach assumes that we can predict NMO within a CDP by using a single datum elevation for all offsets. This model assumes hyperbolic NMO, since the ray path distances on each side of the CMP are assumed to be equal.


Kelman's Surface Referencing

• In truth, these ray paths are often far from being equal. Kelman's software recognizes the existence of source and receiver elevation changes and weathering layer thickness variations. The non-hyperbolic nature of the time delay pattern can then be more accurately predicted.


Benefits of Surface Referencing

• A higher resolution stack, DMO stack, or prestack time/depth migration.
• The extracted stacking velocity field is more geologically reasonable, which is important for migration considerations.
• The interpolation of the velocity field is more accurate, since the time delay artifacts caused by rapid elevation change and/or rapid weathering layer thickness change are automatically predicted for each trace.


Where to use Surface Referencing

• Mountain front zones.
• Any onshore areas with weathering thickness / velocity variations (bayou, river, marsh, transition, caliche, mesa, surface volcanics).


Kirchhoff 3-D Prestack Time Migration

• Crisper, cleaner images.
• Noise cancellation benefits.
• Takes advantage of Surface Referencing.

[Figure: Lateral Velocity Variation - Stack Using Depth Consistent NMO]


[Figure: Lateral Velocity Variation - Shot Gathers with Weathering Statics, Depth Consistent NMO]

[Figure: Lateral Velocity Variation - CDP Gathers Using Depth Consistent NMO]


[Figure: Lateral Velocity Variation - Shot Gathers with Weathering Statics, Surface Referenced NMO]

[Figure: Lateral Velocity Variation - Shot Gathers with Weathering Statics, No NMO]


[Figure: Lateral Velocity Variation - Shot Gathers with No Statics, No NMO]

[Figure: Lateral Velocity Variation - Stack Using Surface Referenced NMO]


Lateral Velocity Variation: Time Model

Lateral Velocity Variation: Depth Model


Velocity analysis display: three screens, two views.

Above we’ve chosen to display semblance analysis, velocity panels, and a percent velocity movie.

Semblance Analysis


% Velocity Panels

% Velocity Movies


% Mute Panels

Percent mute panels allow you to pick mute functions directly from the CDP stack. The first panel is a stack of 20 CDPs using a 40% mute function, then a 60% mute function, then an 80% mute function, and so forth.

What is a 40% mute function? It's one where we multiply the offsets in the current mute function by 0.4.

Note that the mute function is too narrow at earlier times, since the best stack is to the right of the 100% panel. The function
is too wide at later times since the best stack is to the left of the 100% panel.

We’ve found percent mute panels to be particularly useful for 3D volumes, where any one velocity location may be missing
near-offset traces.

Non-hyperbolic moveout is caused by anisotropy and vertical heterogeneity. It is most apparent at shallow depths and far offsets, and our correction is based on Alkhalifah's equations (Geophysics, 1997), which use the non-hyperbolic parameter η.

The parameter η relates to R, the ratio of horizontal velocity Vh to NMO velocity, as follows: sqrt(1 + 2η) = R = Vh / Vnmo. Within the normal range of η, R ≈ 1 + η.
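As an illustration, the sketch below evaluates the Alkhalifah-Tsvankin nonhyperbolic traveltime equation in which η appears; the function name and the test values are ours.

```python
import numpy as np

def nonhyperbolic_traveltime(t0, x, v_nmo, eta):
    """Two-way time from the Alkhalifah-Tsvankin nonhyperbolic moveout
    equation; eta = 0 reduces to ordinary hyperbolic NMO."""
    x2 = x**2
    t2 = (t0**2 + x2 / v_nmo**2
          - 2.0 * eta * x2**2
          / (v_nmo**2 * (t0**2 * v_nmo**2 + (1.0 + 2.0 * eta) * x2)))
    return np.sqrt(t2)

# The horizontal velocity follows from R = sqrt(1 + 2*eta) = Vh / Vnmo;
# e.g. eta = 0.1 gives R = sqrt(1.2), about 1.095, close to 1 + eta.
```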


Normal Moveout

Here’s an event corrected with normal moveout. Notice the “hockey stick” shape at the far offsets. This event cannot be
flattened by adjusting the normal moveout velocities.

Semblance Analysis Of Nonhyperbolic Moveout

Here’s a semblance analysis for a single event as a function of normal moveout velocity (in the horizontal direction) and eta
(in the vertical direction).

Normal picking on the left optimizes the semblance along the eta axis. This is not, however, the best pick. On the right we’ve
picked the optimum semblance by decreasing the velocity and using a positive eta value. This is the non-hyperbolic moveout
solution.


Normal vs. Nonhyperbolic Moveout

Here we compare normal (above) and non-hyperbolic (below) moveout correction. Non-hyperbolic moveout has removed
most of the “hockey stick” effect at the far offsets. Notice how much slower the velocities are.


The following describes stretch-free NMO processing, a new technique designed to avoid normal moveout stretch.

Be warned that stretch-free processing is still in the experimental stages.

NMO Stretch (panels: gather, after NMO, stack)

Normal moveout stretch is a fundamental and long-standing problem in seismic processing. Let's consider a single
uncorrected gather. After normal moveout the shallow events appear stretched at the far offsets.
If we are to stack this without muting, these events will be highly distorted.

Normal Moveout Of A Single Trace (moveout versus zero-offset time)

Consider the normal moveout of a single trace. It doesn't matter how sophisticated the normal moveout algorithm is -
whether it includes surface-referencing or non-hyperbolic effects - the correction comes down to calculating moveout time as
a function of zero-offset time.


The problem with this is that it assumes that events occur instantaneously. Of course they don't. The seismic wavelet is at least 20 ms long, and usually more. The result is that a different moveout is applied to the beginning of the wavelet than to the end. This is the cause of normal moveout stretch.

What we'd really like is to have the moveout held constant for a duration at least as long as the seismic wavelet, so that a constant moveout is applied to an entire event. On the right we've defined constant-moveout intervals which are 32 ms long at regular points along the curve. Moveout is now a multi-valued function. We'll show how to resolve this shortly.

Moveout Over A Gather (left: moveout curve; right: constant-moveout intervals; horizontal axis: offset)

Let's consider a single constant-moveout interval as it travels over an uncorrected gather. This interval is centred on the
moveout curve, and is assumed to contain the same seismic wavelet throughout the gather.

This is only one interval. The gather is made up of many overlapping intervals, a new one typically beginning every 4 ms. It
looks something like the gather moveout curve in the slide above.

So finally, what is the method? (A sketch of the idea in code follows this list.)

• Define constant-moveout intervals for the gather. Typically these intervals are 32 ms long, with a new interval beginning every 4 ms at the zero-offset position.
• Calculate the seismic wavelets over these intervals simultaneously, by doing a least-squares fit to the entire gather. Stretch-free processing can be considered a type of highly specialized Radon transform. The least-squares fit resolves the problem of the multi-valued moveout function.
• Form the stacked trace by summing the constant-moveout intervals together at their zero-offset positions.
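A toy least-squares version of this idea is sketched below; it is our illustration of the description above, not Kelman's production code, and the interval length, step, and constant-velocity moveout are all simplifying assumptions.

```python
import numpy as np

def stretch_free_stack(gather, dt, offsets, v_nmo,
                       interval_len=0.032, step=0.004):
    """Toy stretch-free stack: fit constant-moveout interval wavelets to
    an uncorrected gather by least squares, then sum them at zero offset.
    gather has shape (nt, nx); no NMO-corrected gather is ever formed."""
    nt, nx = gather.shape
    nsamp = int(round(interval_len / dt))        # samples per interval
    starts = np.arange(0.0, nt * dt - interval_len, step)
    A = np.zeros((nx * nt, len(starts) * nsamp))
    for ix, x in enumerate(offsets):
        for k, ts in enumerate(starts):
            tc = ts + interval_len / 2.0         # interval centre time
            shift = np.sqrt(tc**2 + (x / v_nmo)**2) - tc  # one constant shift
            i0 = int(round((ts + shift) / dt))   # interval start on this trace
            for j in range(nsamp):
                if 0 <= i0 + j < nt:
                    A[ix * nt + i0 + j, k * nsamp + j] = 1.0
    # the least-squares fit resolves the multi-valued moveout function
    w, *_ = np.linalg.lstsq(A, gather.T.reshape(-1), rcond=None)
    stack = np.zeros(nt)
    for k, ts in enumerate(starts):              # sum wavelets at zero offset
        i0 = int(round(ts / dt))
        seg = w[k * nsamp:(k + 1) * nsamp]
        stack[i0:i0 + nsamp] += seg[: max(0, nt - i0)]
    return stack
```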

Here are some properties of stretch-free processing:

• NMO-corrected gathers are never formed. Instead we go directly from uncorrected gathers to stacked traces. Indeed, forming NMO-corrected gathers can be considered the step where normal processing goes wrong.
• Stretch-free processing requires far less muting. The muting objective changes from avoiding NMO stretch to avoiding strong noise and refracting energy. It's now even possible to consider applying no mute at all, especially if we first filter off the refracting energy.
• Assuming the same mute, stretch-free stacks tend to be higher frequency and noisier than normal stacks.
• Most of the differences tend to be very shallow, well above 500 ms. You may also see differences at very long offsets or where there are sharp vertical jumps in velocity.


Normal vs. Stretch-Free Stack

Now let's compare normal versus stretch-free processing. On the left is the previously shown stack of un-muted gathers. The
early events are a disaster. The stretch-free stack on the right is far better, although the first event has lost some high
frequencies.

Noise Comparison: Normal (Muted) vs. Stretch-Free

Here's what happens when we add noise. To make a proper comparison we've applied a stretch mute to the normal
processing. The stretch-free stack seems to have suppressed more noise at early times.

Does this mean that stretch-free processing is better at suppressing noise? No, in general it's worse. We did not, however,
apply a mute, so the fold is much higher at earlier times. This accounts for the improved noise suppression.


Multiple Comparison: Normal (Muted) vs. Stretch-Free, with multiples indicated

Now we've added six multiples. The normal stack has done a poor job of suppressing these multiples up shallow. The stretch-
free processing is much better.

Does this mean that stretch-free processing is better at suppressing multiples? No, in general it's worse. We did not, however,
apply a mute, so we have all of the far offsets at early times. This accounts for the improved multiple suppression.

Crossing Primaries (gather, after NMO, stack; layer velocities 2000 m/s and 2500 m/s)

Now let's look at an example featuring a sharp vertical jump in velocity. The velocities here increase 25% over a duration of
60 ms. This causes the two primary events to cross over each other at about 800 m offset.
The result after NMO correction is rather unexpected.
If we were to stack this without muting the result is very poor.


Crossing Primaries: Mute + NMO, Stack, and Stretch-Free

Naturally we want to apply a mute, but it's quite difficult designing a good one. We're still left with a diagonal pattern at the
far offsets. The peaks and troughs of this feature line up nicely, however, so it mostly cancels out upon stacking.
The stack is now quite acceptable.
The stretch-free stack, applied without any mute, is a little sharper and higher frequency.

Crossing Primaries With Noise: Normal vs. Stretch-Free

Now let's add some noise and compare the two stacks. Due to its higher fold up shallow, stretch-free processing has done a
better job at suppressing this noise there.


Velocities, Common-Offset Stack, and Stretch-Free Common-Offset Stack (offsets 1000 m to 0)

Now let's examine a real-data example featuring a sharp vertical jump in velocity. On the right is part of a semblance analysis showing a 33% jump in velocities over the space of 80 ms.
The common-offset stack clearly shows NMO stretch at the far offsets.
The stretch-free common-offset stack has avoided NMO stretch.

How did we form this stack if it's not possible to generate NMO-corrected gathers in stretch-free processing? The answer is that a single trace of the common-offset stack is created by stretch-free stacking a number of traces having a variety of limited offsets.

Normal Stack vs. Stretch-Free Stack


Let's compare the CDP stacks. Beneath the strongest reflector the normal stack looks washed out, whereas the stretch-free stack shows a number of sharp, consistent events. The arrow points to a rather muddy-looking doublet, which becomes much sharper and clearer in the stretch-free stack.

Normal Stack (Muted) vs. Stretch-Free Stack

After applying a mute to remove the stretch, the normal stack is now comparable to the stretch-free stack. The normal stack
looks a little better above the strongest reflector, whereas the stretch-free stack is a little better beneath.

This line has neither strong noise nor multiples, so stretch-free stacking doesn't provide much advantage over normal
processing using a well-picked mute. The point, however, is that it is possible to form a high resolution stack without muting.

When Is Stretch-Free Processing Necessary?


When is stretch-free processing necessary? Here we see part of a common-offset stack. The dark blue colour indicates
regions where NMO stretch exceeds 50%. We suggest you have two choices here - either mute off such regions, or apply
stretch-free processing. It's that simple.

But which way to go? If you feel muting will be too high a price in terms of multiple or noise suppression, or continuity of
events up shallow, then stretch-free processing is necessary.

We are just beginning to study stretch-free processing, but here's where we think the potential applications may lie:

• Very shallow targets, particularly where the zone of interest is in the first few hundred milliseconds.
• Areas having large, sharp vertical jumps in velocities, such as in our examples.
• Very long offset surveys, which are becoming more common. If you have to mute off the far offsets due to NMO stretch, then you've wasted your money.
• Accurate AVO analysis, which requires longer offsets but can't tolerate NMO stretch.

For these last two applications, stretch-free processing becomes even more effective when combined with non-hyperbolic moveout.

What about dipping beds?

It's true NMO starts to break down in the presence of dip. The next figure gives a hint of things to come.
The basic problem is that when the bed dips, the midpoints are not vertically above the reflection points.

The reflection points become unevenly spaced along the reflector.


Reflection points live at different locations on the interface because Snell's Law of reflection requires the angle of incidence to equal the angle of reflection.
The fact is that NMO has a constant-dip assumption built in. If every bed were dipping at the same angle we could do NMO just fine. The real problem arises when there are many different dips in the subsurface. In this case, NMO acts like a dip filter, preferentially passing a particular dip, which the processor chooses, while suppressing other dips.

This was the situation until something new came on the scene. It was called (ultimately) dip moveout.


DIP MOVEOUT

Early versions of DMO had the cumbersome, but descriptive, name "pre-stack partial migration".

There was already a DMO processing product on the market (Judson et al., 1978). It was named "DEVILISH", an acronym for Dipping Event Velocity Inequality Licked. (This does not account for the ISH, but Hale (1991) gives a plausible explanation: "Ingeniously by Sherwood.")

Unlike NMO, the development of DMO is very recent. We will attempt to understand what DMO is and does.

To put things in perspective, Tables 1 and 2 give a rather detailed chronology of major advances in DMO.

(From Liner, Geophysics, vol. 64)


Dip moveout is not normal


In the field, a source and receiver are 3000 m apart (the offset), and a trace is recorded. This is pre-stack data. From the discussion above, we know that NMO removes offset from pre-stack data. In the computer, we adjust this trace to simulate one that would have been recorded at a point halfway between the source and receiver (the midpoint). This new trace is a zero-offset trace, and stacking all such traces that live at this midpoint yields a stack trace. All of the stacked traces are plotted side by side to form the stack section, which is the raw material for post-stack migration processing.

NMO assumes the reflection comes from a horizontal interface.

This is an important and restrictive assumption. The NMO correction adjusts observed travel time to zero
offset travel time as seen from the midpoint. So after NMO, the event is moved up in time, but it is not
moved across traces.
Technically, we are changing the time coordinate from raw time to NMO time.

But what if the interface is not horizontal? All valid reflector positions have one thing in common: the total distance from source to reflection point to receiver is constant.
This is the definition of an ellipse with the source and receiver at the foci. This is the pre-stack migration ellipse, shown in the figures below.

The goal of NMO plus DMO is to remove offset and thus create a zero-offset section.


NMO gives one of many possibilities. DMO gives all the rest.

The figure shows the pre-stack migration ellipse with a source and receiver at each focus. By definition, each reflection path from S to R has the same travel time. Each location labeled S/R is a zero-offset location. There is a unique S/R location at the midpoint, where the ellipse has zero dip. NMO calculates this time; from the NMO time, DMO calculates all the other zero-offset times.
Figure B is the NMO trace from the center of the ellipse.
Figure C shows the result of DMO, which is another ellipse (the DMO ellipse).
Note that in the middle DMO does not change the NMO time, but away from the middle it does.
(See Russell (1998) for a good discussion of the basic travel time equations related to NMO, DMO, and migration.)

We saw that NMO is a process which takes one trace in and gives one trace out.
DMO is different.
One trace into DMO generates many traces out, all of which live between the original source and receiver locations. This is illustrated in Figure C.

Let's consider the following: we have a common-offset panel of data containing two filtered spikes of amplitude on one trace (A). The other traces are there, but contain zeroes.
NMO moves each spike up on the same trace (B).
DMO then throws the spike out along a curve which lives between the source and receiver (C). This is, again, the DMO ellipse or smile. Since it comes from a spike or impulse on the input data, it is also called the DMO impulse response.
If the raw spikes in (A) are pre-stack migrated, then we get the pre-stack migration impulse response (D). This result is correct, and it is also an ellipse, as discussed above. This time curve is equivalent to the depth curve shown in (9-A).
If the NMO-corrected data (B) are thrown directly into post-stack migration, we get the result in (E). Post-stack (zero-offset) migration seems appropriate because NMO removed the effect of offset.
Actually, NMO has done the right thing only for zero-dip beds. Therefore, the migrated result (E) is correct only at the bottom of each ellipse, where the dip is zero.

The combined process of NMO and DMO removes offset and does the right thing for all dips.

The data are then ready for post-stack migration, so when the NMO + DMO data (C) are post-stack migrated, we get the result (F), which is the same as the pre-stack migration result (D).

Although no time scales are given, the times at which the spikes occur in (B) coincide with the bottoms of all the ellipses in (C-E).

This is another way of saying that DMO and migration do not adjust travel times in the case of zero dip.

In summary, (D) and (F) look alike because, for isotropic constant velocity, pre-stack migration is equivalent to NMO followed by DMO followed by post-stack migration.
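The two curves that carry this whole story are easy to write down for constant velocity; the sketch below (our notation, with h the half-offset) gives the depth-domain pre-stack migration ellipse and the time-domain DMO smile.

```python
import numpy as np

def migration_ellipse(t, v, h, n=201):
    """Pre-stack migration ellipse in depth: all reflector positions whose
    source-to-reflector-to-receiver distance is v*t, for a source and
    receiver at x = -h and x = +h.  Requires v*t/2 > h."""
    a = v * t / 2.0                    # semi-major (horizontal) axis
    b = np.sqrt(a**2 - h**2)           # semi-minor (vertical) axis
    x = np.linspace(-a, a, n)
    return x, b * np.sqrt(1.0 - (x / a)**2)

def dmo_smile(tn, h, n=201):
    """DMO ellipse ('smile'): zero-offset times thrown out from an impulse
    at NMO time tn, t0(dy) = tn*sqrt(1 - (dy/h)^2) for |dy| <= h.
    At dy = 0 (the midpoint) DMO leaves the NMO time unchanged."""
    dy = np.linspace(-h, h, n)
    return dy, tn * np.sqrt(1.0 - (dy / h)**2)
```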


By creating the DMO smile, all possible dips are handled simultaneously. We do not need to know what the dip is in the earth. By DMO-processing every blip of amplitude on all traces, the zero-offset image will emerge, because each reflection is tangent to many DMO smiles.

This superposition process is also at the heart of migration.

One nice feature of constant-velocity DMO is that it is very nearly independent of which velocity is chosen.

This is unlike NMO and migration, which are quite sensitive to velocity errors. Another big selling point is that NMO + DMO, rather than NMO alone, passes all dips into the stack section. This gives migration more raw material to work with in creating the final migrated image.


The next figure is another way of showing how DMO works. Imagine a single spike of amplitude on a 2-D common-offset section.
The effect of DMO is to broadcast this spike along the DMO smile, shown with dots in the figure. We apply post-stack migration to only four of the DMO dots (gray) to see what happens: each dot spawns a post-stack (zero-offset) migration curve.
In Figure B, all the DMO dots are migrated. The post-stack migration curves are tangent to yet another curve, the pre-stack migration ellipse.
In real data, the interior points would tend to cancel, leaving only the perimeter curve.
The migration curves are said to osculate and form the outer curve.
(See Liner and Lines (1994) for more information on how this works.)

A second complication: Velocity

DMO is not, in fact, independent of velocity, but the constant-velocity formulation improved data so much compared to just using NMO that early users did not worry much about velocity variation.

The effect of vertical velocity variation, v(z), on DMO is strange, as shown in the next figure. Panel (A) is the constant-velocity DMO smile and (B) is the result for mild v(z). As the velocity starts to become a function of depth, the first effect is a squeezing of the DMO ellipse: it becomes narrower than the constant-velocity version.


One nice feature of standard DMO is that in three dimensions the DMO operator is still a 2-D curve. However, where the velocity is a significant function of depth, the DMO ellipse begins to twist and torque into a saddle-shaped 3-D operator, as shown below.

Most people who talk about velocity variation and DMO are referring to vertical velocity changes, v(z). If there are strong lateral changes, then doing DMO right becomes about as expensive as pre-stack depth migration, so most people do not use DMO in that situation.


Another complication: Anisotropy

Anisotropy is a general effect that causes seismic waves to travel at different speeds depending on the direction of travel. It is one of those things that has been known for decades and, until recently, was largely ignored in mainstream seismic data processing.
Anisotropy has been shown to influence P-wave data, which means we have to deal with it.
NMO and DMO involve rays travelling at all sorts of angles; if these move at different speeds, then both processes need to account for it.

Recall that NMO moves events up on the same trace. The anisotropy effect on NMO is to move these events up a different amount, but they are still confined to the same trace.
From a mathematical point of view, anisotropic NMO involves the offset raised to the second and fourth power, whereas isotropic NMO does not need the fourth power. So NMO is affected by anisotropy, but not drastically.

DMO is a different story: it has a much closer relationship to dip and steep ray angles than NMO does.
Anisotropic DMO is controlled by a new parameter η, which can range from about -0.2 to +0.2.
For η = -0.15, the DMO smiles (C) are deeper and wider than the isotropic result (A). For η = +0.15, the result (D) is nothing like the isotropic case. Remember, all panels are operating on the same input data.
It is fair to say that anisotropy has a stronger influence on DMO, and on its relationship to pre-stack migration, than it has on NMO.

Since DMO spreads things out across traces, it is much more expensive than NMO, which only shifts things up on one trace. Even so, NMO + DMO is still cheaper than pre-stack migration.

When subsurface conditions become too extreme, this decoupled processing flow fails to give a reliable migrated image. In those cases, the single grand process of pre-stack migration is used.
The bad news is that this one process is more expensive than all the processes we used before, including DMO.

So here is the bottom line. If structure and velocity variation are not too nasty in an area, we can get away with a traditional processing sequence:

NMO + DMO + CMP Stack + Post-Stack Migration.

However, if things get really tough down there (e.g., sub-salt, sub-thrust, or extreme topography), this sequence breaks down and fails to give a good image. In this case we are compelled to do one grand process called pre-stack migration.
In fact, DMO was originally invented to complete this equality under mild subsurface conditions:

Pre-Stack Migration = NMO + ? + CMP Stack + Post-Stack Migration.

The ? turned out to be DMO, which is used worldwide every day.


Normal moveout and dip moveout are key steps in modern seismic data processing.
By dealing with concepts rather than equations, the basic goals and operations of NMO and DMO can be discussed. Some key issues are the effects of dip, velocity variation, and anisotropy on NMO and DMO.
DMO is an expensive process and has a close relationship to pre-stack migration. When subsurface conditions are not too extreme, a standard processing flow involving NMO and DMO gives acceptable results.
However, in the presence of structural complexity and/or lateral velocity variation, a single grand process (pre-stack migration) is required to achieve a geologically meaningful image.

Because of its simplicity and clarity, we have taken most of the above information and pictures from a tutorial by C. Liner, Geophysics, vol. 64, no. 5, September-October 1999, p. 1637-1647.


REFERENCES

Alfaraj, M., and Larner, K., 1991, Dip-moveout for mode-converted waves: 61st Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1191-1193.
Alfaraj, M., and Larner, K., 1992, Transformation to zero offset for mode-converted waves: Geophysics, 57, 474-477.
Berg, L. E., 1984, Prestack partial migration: Method overview and presentation of processed data: 54th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 796-799.
Biondi, B., and Ronen, S., 1986, Shot profile dip moveout using log-stretch transform: 56th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts.
Biondi, B., and Ronen, S., 1987, Dip moveout in shot profiles: Geophysics, 52, 1473-1482.
Bolondi, G., Loinger, E., and Rocca, F., 1982, Offset continuation of seismic sections: Geophys. Prosp., 30, 813-828.
Deregowski, S. M., and Rocca, F., 1981, Geometrical optics and wave theory of constant offset sections in layered media: Geophys. Prosp., 29, 374-406.
Dietrich, M., and Cohen, J. K., 1992, Three-dimensional dip moveout operator for a linear velocity-depth function: 62nd Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 958-961.
Dietrich, M., and Cohen, J. K., 1993, Migration to zero offset (DMO) for a constant velocity gradient: An analytical formulation: Geophys. Prosp., 41, 621-644.
Forel, D., and Gardner, G. H. F., 1988, A three-dimensional perspective on two-dimensional dip moveout: Geophysics, 53, 604-610.
Hale, D., 1983, Dip-moveout by Fourier transform: 53rd Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, Session S3.3.
Hale, D., 1984, Dip-moveout by Fourier transform: Geophysics, 49, 741-757.
Hale, D., 1991a, A nonaliased integral method for dip moveout: Geophysics, 56, 795-805.
Hale, D., 1991b, Dip moveout processing: Course notes: Soc. Expl. Geophys.
Hale, D., and Artley, C., 1993, Squeezing dip moveout for depth-variable velocity: Geophysics, 58, 257-264.
Harrison, M. P., 1990, Dip moveout for converted-wave (P-SV) data in a constant velocity medium: 60th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1370-1373.
Hearn, T., 1989, Time-domain application of dip moveout to shot gathers: 59th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1140-1143.
Jakubowicz, H., 1990, A simple efficient method of dip-moveout correction: Geophys. Prosp., 38, 221-246.
Jakubowicz, H., Beasley, C. J., and Chambers, R., 1984, Prestack partial migration: A comprehensive solution to the problems of 3-D data processing: 54th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 802-804.
Jorden, T. E., Bleistein, N., and Cohen, J. K., 1987, A wave-equation-based dip moveout: 57th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 718-721.
Judson, D. R., Schultz, P. S., and Sherwood, J. W. C., 1978, Equalizing the stacking velocities of dipping events via DEVILISH: Presented at the 48th Ann. Internat. Mtg., Soc. Expl. Geophys. (brochure published by Digicon Geophysical Corp.).
Larner, K., 1993, Dip-moveout error in transversely isotropic media with linear velocity variation in depth: Geophysics, 58, 1442-1453.
Larner, K., and Hale, D., 1992, Dip-moveout error in transversely isotropic media with linear velocity: 62nd Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 979-983.
Liner, C. L., 1990, General theory and comparative anatomy of dip moveout: Geophysics, 55, 595-607.
Liner, C. L., 1991, Born theory of wave-equation dip moveout: Geophysics, 56, 182-189.
Liner, C. L., and Cohen, J. K., 1988, An amplitude-preserving inverse of Hale's DMO: 58th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1117-1120.
Liner, C. L., and Lines, L. R., 1994, Simple prestack migration of crosswell seismic data: J. Seis. Explor., 3, 101-112.
Pleshkevitch, A. L., 1994, Shot-DMO by finite-difference: 56th Mtg., Eur. Assoc. Expl. Geophys., Extended Abstracts, Session P103.
Ronen, S., 1987, Wave-equation trace interpolation: Geophysics, 52, 973-984.
Russell, B., 1998, A simple seismic imaging exercise: The Leading Edge, 17, 885-889.
Sorin, V., and Ronen, S., 1989, Ray-geometrical analysis of dip moveout amplitude distribution: Geophysics, 54, 1333-1335.
Uren, N. F., Gardner, G. H. F., and McDonald, J. A., 1990, Dip moveout in anisotropic media: Geophysics, 55, 863-867.
Wang, C., 1995, DMO in the Radon domain: 65th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1441-1444.
Yilmaz, O., and Claerbout, J. F., 1980, Prestack partial migration: Geophysics, 45, 1753-1779.


Imaging below dipping clastics

Photo: Jennifer Leslie

This anisotropy tutorial deals with seismic imaging below strata with lateral velocity heterogeneity (carbonates at surface, for example) and velocity anisotropy (velocity changing with direction of propagation through clastic rocks).

The example in this image shows steeply dipping clastics in the near-surface geology. On the microscopic scale, the shales in
this section are anisotropic because the platelet-shaped grains are aligned parallel to bedding. The intrinsic anisotropy of
shales gives a faster P-wave velocity parallel to bedding and a slower P-wave velocity normal to bedding. On the larger scale
of the seismic wavelength, interbedding of sands and shales has the same effect on the velocity with faster P-wave velocities
parallel to bedding and slower velocities normal to bedding.

What about reflectors below TTI media?

• Reflectors below TTI media are incorrectly positioned on seismic images.
• Depth imaging now needs the slow velocity, epsilon, and delta.
• More parameters offer additional degrees of freedom to our velocity model.


Physical-model data from the Foothills Research Project: the physical model.


Physical-model data from the Foothills Research Project: seismic processed with the isotropic assumption.

Physical-model data from the Foothills Research Project: seismic processed with anisotropic correction.


Lateral-positioning error caused by TTI media

P-wave sideslip caused by TTI media


Sideslip error versus dip


Migration: From time to depth

• Migration inverts reflections in time for reflectors in depth.
• Time migration assumes laterally consistent velocities to get a smooth, robust operator.
• Full depth migration can move subsurface structures into their proper position in the subsurface; an accurate model of the subsurface velocities is required.


Aplanatic Surfaces (all possible reflector locations for a given reflection):

• Time migration assumes a localised homogeneous, isotropic velocity.
• Depth migration corrects for velocity heterogeneity.
• Anisotropic depth migration corrects for seismic velocity anisotropy.
• Full depth migration corrects for lateral velocity heterogeneity and anisotropy.


Where does depth imaging apply? (Christopher Liner, Feb 99 TLE)

Where does ADM imaging apply? The chart plots increasing structural complexity against increasing anisotropy, with poststack time, prestack time, and prestack ADM migration occupying successive regions.

Overburden anisotropy factors:
• thickness of consistently dipping anisotropic strata
• dip of anisotropic strata
• % velocity change


Anisotropic velocity model building

• Use geologic velocities from wells: usually normal-to-bedding velocities in the clastic overburden.
• Perform velocity analysis with anisotropy in the model, then fine-tune ε and δ.
• Use two methods for parameter analysis: prestack picking using modan, and poststack picking using mvel.

Dip of anisotropic strata required

• Laterally varying dip is picked from the seismic data as input to the velocity model.


Anisotropic effects on prestack gathers: isotropic correction (data courtesy Husky and Talisman).

Picking ε on prestack gathers: anisotropic correction (data courtesy Husky and Talisman).


Comparison of migration results: ADM (data courtesy Husky and Talisman).


Picking v on poststack sections (series of figures; data courtesy Amoco Canada).


Picking ε on poststack sections (series of figures; data courtesy Amoco Canada).


Picking δ on poststack sections (series of figures; data courtesy Amoco Canada).


Comparison of migration results: time migration, depth migration, and ADM (data courtesy Amoco Canada).

Seismic depths vs. well depths: isotropic migration versus anisotropic migration, seismic depth plotted against well depth (data courtesy Amoco Canada).


ADM summary

• Imaging and positioning improve when we account for anisotropy.
• Geologic depths and seismic depths don't have to be different.
• We need two additional parameters in our velocity model.
• Each parameter affects the migration differently, so each can be picked separately.
• Using appropriate parameters makes the anisotropic imaging problem easier than ignoring the anisotropy.

Conclusions

• Anisotropy affects moveout corrections on long offsets:
  - moveout corrections apply to horizontal reflectors in or below horizontal anisotropic strata;
  - moveout corrections also apply to dipping reflectors within dipping TI media.
• Reflectors below dipping TI media are mispositioned on isotropically processed seismic.
• Correcting for anisotropy in depth imaging, where applicable, improves imaging and positioning of subsurface structures.


Equivalent-offset binning of depth gathers: common-offset traces versus equivalent-offset traces.

Physical-model gathers: isotropic and anisotropic, shown as common-offset and as equivalent-offset gathers.


NMO EFFECT

In designing spread length, we not only need to look at dip effects, but also at the NMO effect at the far
offset.

A standard rule used in the industry is that spread length should equal the depth to the horizon of interest; however, a better way to compute this follows.

The far offset is determined by both the critical angle and the NMO stretch. If we allow too much NMO stretch, we will reduce the amount of high frequency in our stack. We can mute this effect out in our processing sequence, but then why record it in the first place?

If the NMO stretch at the far trace becomes equal to or larger than the period of the dominant frequency
at the far offset, then we have too much stretch.
Let δt = tx - t0 be the NMO correction; then

δt = [(x²/v²) + t0²]^(1/2) - t0, where

x is the far offset,
v is the average velocity,
tx is the two-way travel time at the far offset, and
t0 is the two-way travel time at zero offset.

Now, given that the dominant frequency is f and δf is the change in frequency, we can quantify the stretching (Yilmaz, 1987) as follows:

δf / f = δt / t0

A stretch at far offset of more than 50% would distort the signal after stacking, and needs to be muted out.
This computation allows us to set the maximum spread length.

If we only allow 50% stretch at the far offset, we can compute the following:

0.5 t0 = [(x²/v²) + t0²]^(1/2) - t0, therefore

1.5² t0² = (x²/v²) + t0², or
x² = 1.25 t0² v²

We now have the maximum far offset xmax = (√1.25) t0 v

To compute the maximum distance of the near offset, we need to consider that the near trace should be close enough to the source that we record reflections rather than refractions, but far enough from the source to eliminate source noise.

In general we use the following to determine this offset:

Maximum near offset xnearmax = (Zshallow Vsh-int) / (V²below-shallow - V²above-shallow)^(1/2)
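Both limits are easy to compute; the helper below is our sketch of the two formulas above (variable names are ours, and v_above stands for the shallow interval velocity Vsh-int).

```python
import math

def max_far_offset(t0, v, stretch_limit=0.5):
    """Far offset where NMO stretch reaches stretch_limit:
    x_max = sqrt((1 + s)**2 - 1) * t0 * v; s = 0.5 gives sqrt(1.25)*t0*v."""
    return math.sqrt((1.0 + stretch_limit)**2 - 1.0) * t0 * v

def max_near_offset(z_shallow, v_above, v_below):
    """Near-offset limit so the shallowest target is still recorded as a
    reflection rather than a refraction (formula above)."""
    return z_shallow * v_above / math.sqrt(v_below**2 - v_above**2)

# Example: t0 = 2.0 s, v = 3000 m/s gives x_max = sqrt(1.25)*6000, about 6708 m.
```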


A collection of traces having a common midpoint is called a CMP gather. The number of traces in such a gather is called its fold.

As for a shot record, the NMO of reflection events in a CMP gather increases as the source-receiver offset increases. Longer-offset data can assist in the separation of multiples from reflection events, because the moveout of longer-offset primary reflections is less than the multiple moveout. A longer spread or cable length therefore often improves multiple suppression.

In data processing, the NMO correction flattens primary events but leaves residual curvature in multiple events, provided there is enough separation between primary and multiple velocities.
When the traces are stacked, the reflected primary is enhanced but the multiple is attenuated.
Therefore, CMP stacking helps attenuate multiples.

After applying NMO corrections, all traces become pseudo zero-offset recorded traces. If the reflecting
beds were flat and the speed of sound in the earth were uniform within beds, stacking the traces would
provide an acceptable image of the earth's layering.

However, geologic horizons rarely are flat and of uniform sonic velocity. Therefore, the CMP method with simple NMO and stack is not perfect. Nonetheless, data-processing techniques such as dip moveout (DMO) are available to account for the departure of the earth from the ideal case (Yilmaz, 1979).


THE SEISMIC WAVE FIELD

We should now take a closer look at the seismic wave field, which we are recording.
In order to properly design the spreads for recording, we need to understand some of the basic behaviour
of the seismic signal itself.

We won't go into the details of elastic wave fields, but will discuss some of the principal elements needed to understand spread design.
A very good discussion is given by Gijs Vermeer in Seismic Wavefield Sampling, and a complete treatment of the subject is given by Walter Pilant in Elastic Waves in the Earth.

A continuous wavefield W is defined as follows:

W = W(t, Xs, Ys, Zs, Xr, Yr, Zr); now, defining spatial vectors for shot and receiver, we can rewrite the expression as

W = W(t, xs, xr), where xs = (Xs, Ys, Zs) and xr = (Xr, Yr, Zr).

From this we can define the midpoint vector

xm = (xs + xr) / 2, and the offset vector

xo = xs - xr

Based on the above equations we can now derive the common offset, receiver, shot, and midpoint coordinates, as illustrated below.

The square ABCD in (xs, xr) space is mapped onto A'B'C'D' in (xm, xo) space. The notation and diagrams used here are those used by Gijs Vermeer.
It is important to understand behaviour in both systems, so that proper acquisition and processing parameters can be chosen.


Note the following,

Common Shot panels are the (t, xr) subspace,


Common Receiver panels are the (t, xs) subspace,
Common Offset panels are the (t, xm) subspace,
Common Midpoint panels are the (t, xo) subspace,
Common Time panels are the (xs, xr) subspace.

Common time panels are constructed in a similar manner to time slices, but are pre-stack.
Introducing Green's tensor for elastic media gives

uj(x' | x0; t) = fi(x0, t) gij(x' | x0, t),

where gij(x' | x0, t) is the j component of displacement at the position x' due to the i component of force acting at the point x0.

We then get

fj(x', t) fi(x0, t) gij(x' | x0, t) = fj(x0, t) fi(x', t) gij(x0 | x', t)

if the forces are momentarily restricted to lie along each of the Cartesian axes; therefore

gij(x' | x0, t) = gji(x0 | x', t), which says the following:

If an arbitrary force f(t) acting in the i direction at a point x' gives a displacement u(t) in the j direction at a point x0, then the same force f(t) acting in the j direction at point x0 will give the same displacement u(t) in the i direction at point x'.

This relationship, THE PRINCIPLE OF RECIPROCITY, holds in elastic media with arbitrary boundaries, inhomogeneity, and anisotropy.
Knopoff and Gangi (1959) show some seismic models demonstrating this reciprocity.
Additional information may be found in Eringen and Suhubi (1975) and Wheeler and Sternberg (1968).


Some examples of reciprocity are shown below.

From W. Pilant, Elastic Waves in the Earth

An important aspect of the elastic wavefield is that the arrival time of an event is a continuous function of two spatial coordinates, t = t(xs, xr) = t(xm, xo), and we can express the partial derivatives in one coordinate system in terms of the other, so

∂t/∂xm = (∂t/∂xs)(∂xs/∂xm) + (∂t/∂xr)(∂xr/∂xm) = ∂t/∂xs + ∂t/∂xr

and

∂t/∂xo = (∂t/∂xs)(∂xs/∂xo) + (∂t/∂xr)(∂xr/∂xo) = (1/2)(∂t/∂xs) - (1/2)(∂t/∂xr)

Now let pi = ∂t/∂xi be the apparent slowness in the cross section defined by (t, xi).


This gives us some simple relationships for apparent slowness in the various cross sections:

pm = ps + pr
po = (ps - pr)/2
ps = pm /2 + po
pr = pm /2 - po

Note that the horizontal slowness is called p, and the vertical slowness is called q.

The corresponding expressions for apparent velocity are:

1/vm = 1/vs + 1/vr
1/vo = (1/2)(1/vs - 1/vr)
1/vs = 1/(2vm) + 1/vo
1/vr = 1/(2vm) - 1/vo

It should also be noted that when NMO is applied to the CMP gather, 1/vo = 0; therefore, in that case,

1/vs = 1/vr and 1/vm = 2/vs

Another term that should be introduced is surface consistency. This is used in statics, and says that the time delay for a trace at (xs, xr) can be written as t(xs, xr) = ts(xs) + tr(xr), where ts(xs) is the shot static and tr(xr) is the receiver static.
Note that ts(xs) + tr(xr) ≠ ts(xr) + tr(xs); equality would mean reciprocity, which is only the case when ts(x) = tr(x) + C.
Surface consistency allows us to separate effects on the wavefield into shot effects and receiver effects.
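One common use of this decomposition is surface-consistent statics estimation; below is our sketch of the idea, solving t(xs, xr) = ts(xs) + tr(xr) by least squares. The constant trade-off between shot and receiver terms (the +C above) leaves a null space, for which numpy's lstsq returns the minimum-norm solution.

```python
import numpy as np

def surface_consistent_statics(picks):
    """Decompose picked delays into shot and receiver statics.
    picks: iterable of (shot_index, receiver_index, delay_time)."""
    picks = list(picks)
    ns = 1 + max(s for s, _, _ in picks)
    nr = 1 + max(r for _, r, _ in picks)
    A = np.zeros((len(picks), ns + nr))
    d = np.zeros(len(picks))
    for i, (s, r, t) in enumerate(picks):
        A[i, s] = 1.0            # shot term ts(xs)
        A[i, ns + r] = 1.0       # receiver term tr(xr)
        d[i] = t
    m, *_ = np.linalg.lstsq(A, d, rcond=None)
    return m[:ns], m[ns:]        # shot statics, receiver statics
```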


SAMPLING TECHNIQUES

GENERAL

If the continuous analog seismic signal is to be recorded digitally, its amplitude should be measured at
regular intervals of time and converted into a binary code suitable for magnetic recording.

The process of measuring a signal's amplitude only at discrete time intervals is called sampling.

After sampling, the signal is no longer a continuous function of time but rather a sequence of pulses, each
representing the signal amplitude at a particular instant of time.

The device which converts continuous data into such a sequence of pulses is generally known as a sampler.
The sampler output can be considered as a train of pulses having negligible width and separated by time
interval T.

The measurement of the signal's amplitude at each sample point requires a finite time delay, during which
the sample height must be kept constant. This is achieved by storing the sample voltage in a capacitor until
the next sample arrives.
The voltage in the capacitor then changes abruptly to the value of the new sample, and this value is stored
in the capacitor until another sample comes along. As a result, the sampled signal is changed into a series of
consecutive steps, or samples.

Clamping is the name given to this process of holding constant the height of each sample for a period smaller than or equal to the sampling period.

A clamping circuit is connected to the output of an electronic sampler. The capacitor in the clamping
circuit charges up to the analog voltage level when the sampling switch is closed, and holds this voltage
during the measurement. A low impedance source and a low-resistance path through the switch are
necessary, so that the capacitor can fully charge during the time the switch is closed.
An amplifier with a very high input impedance is inserted between the capacitor and the measuring circuit
to prevent leakage of the stored voltage.

After the switch is opened, the amplifier output is measured in an analog-to-digital (A-D) converter and
recorded in coded form on magnetic tape.


SAMPLING THEOREM

This fundamental theorem can be stated as follows:

If a continuous signal with a band-limited spectrum is sampled at a frequency equal to or greater than twice the highest frequency in the spectrum, then the samples theoretically fully describe the original signal.

This statement makes it evident that the following conditions must be satisfied in order to recover the
original signal without ambiguity:

fs ≥ 2fmax or fmax ≤ 1/(2T)

If the highest frequency component in the signal spectrum is known, then, the required minimum sampling
rate can be established;

e.g., if the highest frequency component of a signal is 500 Hz, the sampling frequency must be at least 1000 Hz, which corresponds to a sampling interval of at most 1 msec.

The most important consideration in choosing the sampling rate is to have a sufficient number of samples
to describe properly the high frequencies in the seismic signal.
However, when the samples are too closely spaced, redundant data will result.

SIGNAL RECOVERY

One method of reconstructing a continuous signal from its sampled version is to pass the samples through a high-cut filter, which removes the high frequencies introduced by the sampling process.
To prevent the amplitude of the continuous output signal from being considerably smaller than the amplitude of the input samples (this is because the energy of each sample, which is directly proportional to its area, is distributed over a long period of time by the high-cut filter), each sample from the digital-to-analog converter is connected instantaneously to a hold circuit.

The resulting output is a train of same-amplitude pulses, which form a continuous staircase signal.


Since the total area of the staircase signal approximates the area of the original signal, energy content is
also similar. Consequently, the amplitude of the output from the hold-and-filter circuit is comparable to
the amplitude of the original signal.
The high-cut filter connected to the sample-and-hold amplifier will smooth out the ripples produced by the
process, so that only the original signal is reproduced.

Spatial Sampling

In order for data to be sampled properly, we must be able to reconstruct the theoretical continuous
wavefield from the recorded samples.

According to the Nyquist condition, or proper sampling theorem, proper sampling is achieved when

∆t ≤ 1/(2fmax)
∆xs ≤ 1/(2|ks|max) = vmin/(2fmax)
∆xr ≤ 1/(2|kr|max) = vmin/(2fmax)
∆xo ≤ 1/(2|ko|max) = vmin/(2fmax)
∆xm ≤ 1/(2|km|max) = vmin/(4fmax)

Now if we call ∆xi the basic sampling interval, then the corresponding Nyquist frequency is fN = 1/(2∆t) and the Nyquist wave number is kiN = 1/(2∆xi).

By way of the reciprocity principle, we know that the basic sampling intervals for shots and receivers are the same. Therefore

∆xo = ∆xr = ∆xs = 2∆xm

All basic sampling intervals are inversely proportional to fmax, so the anti-alias filter in t acts as an anti-alias filter in xs and xr as well.

Now let's consider the following: we would like to record up to 80 Hz in a medium with vmin = 1200 m/sec; then we have:

∆t = 1/(2 · 80) = 6.25 msec
∆xo = ∆xr = ∆xs = 1200/(2 · 80) = 7.5 m, and ∆xm = 7.5/2 = 3.75 m

Even at a velocity of 1200 m/sec, which for land data is certainly not the minimum, we would need to record with a shot and geophone spacing of 7.5 m to avoid aliasing; therefore a certain amount of spatial aliasing is always present.

Shot and receiver array application will reduce a fair amount of this aliasing.
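The interval arithmetic above is easy to wrap in a small helper; a minimal sketch (names are ours):

```python
def basic_sampling_intervals(f_max, v_min):
    """Nyquist-based intervals from the conditions above.
    Returns (dt in seconds, shot/receiver/offset interval in metres,
    midpoint interval in metres)."""
    dt = 1.0 / (2.0 * f_max)
    dx = v_min / (2.0 * f_max)      # dxs = dxr = dxo
    return dt, dx, dx / 2.0         # dxm = dx / 2

# basic_sampling_intervals(80.0, 1200.0) -> (0.00625, 7.5, 3.75), as above.
```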

Temporal sampling can be performed with great accuracy, but the spatial sampling has many disturbances
and irregularities.


Some irregularities occur in geophone placement, ground coupling, hole conditions, vibrator coupling, etc.

A reduction in wavefield distortion can be achieved by planting more geophones, closer together.

Another problem is ambient noise, which is of two types:

• the first type is noise that is part of the wavefield;
• the second type is noise that is not part of the wavefield, such as instrument noise, traffic, wind noise, cattle noise, and water noise.

In both cases velocity filters can be used to eliminate this type of noise.

Sampling Paradox

Consider a properly sampled data set, i.e. points in (xs, xr) sampled such that ∆x = ∆xr = ∆xs, and look at the following:
In (xs, xr) the first point would be at (0, ∆x), and the second point at (∆x, 2∆x), etc.
However, in (xm, xo) the first point would be at (∆x/2, -∆x), and the second point at (3∆x/2, -∆x), etc.

Therefore in the common offset panel the distance between the points is

(3∆x/2 - ∆x/2) = ∆x, which is twice as large as the basic sampling interval ∆xm.
This also applies to CMPs; thus we have a sampling paradox.

To correct the problem we would need twice as many traces in (xm, xo), or a shot interval reduced to half.
By using the one-dimensional sampling interval as defined by Shannon, described earlier, we created this problem.
Petersen and Middleton (1962) introduced the N-dimensional sampling theorem, which states that the most efficient sampling lattice is usually not rectangular.

Consider the spatial bandwidth limitation as defined by Berkhout (1984):

-f/vmin ≤ ks,r ≤ f/vmin

This shows that the energy distribution of the continuous wavefield in the (f, ks) and (f, kr) panels has regions with and without energy.
Now if we look at the cross section (ks, kr), the common frequency panel, we see that this forms a square. In this case square sampling in (xs, xr) is most efficient, as illustrated below.

Left: the regions of energy; right: the square sampling example.


Now if we look at the cross section (km, ko), we see that this forms a parallelogram. In this case oblique sampling in (xm, xo) is the most efficient, as illustrated below.

Left: the regions of energy; right: the oblique sampling example (xo vertical, xm horizontal).

Even if the wavefield is properly sampled in (t, xs, xr), and therefore also in (t, xm, xo), any single COP or CMP is still undersampled.

By using a split spread instead of off-end spreads, placing shots halfway between the receivers, and invoking the theorem of reciprocity, we can eliminate this problem.

ALIASING

As previously stated, the maximum frequency, which can be recovered from the sampled data is

fmax = 1/(2T)

where fmax is known as the alias frequency, Nyquist frequency, or fold-over frequency.

We now investigate what happens if the signal contains frequencies above the Nyquist frequency.

Above we show two sinusoidal components of a signal, with frequencies of 50 Hz and 200 Hz, respectively;
both signal components are sampled at a 4-msec rate.
The amplitude of the 200 Hz signal is identical to that of the 50-Hz signal at the sampling points, i.e., the
200 Hz looks like 50 Hz to the sampler, and is said to fold over about the alias frequency.

Consequently, the 200 Hz signal cannot be recovered from the sampled signal.
The effect of fold over can be best understood by plotting a sampler's input/output frequency relationship,
as illustrated below.
111
112 Advanced seismic processing
Clasina TerraScience Consultants International Ltd.

As can be seen, the output frequency is identical to the input frequency up to fmax .
Beyond fmax , the output frequency decreases to 0, when input frequencies are even multiples of fmax, and
increases to fmax , when input frequencies are odd multiples of fmax .

This means that the sampler sees input frequencies above the Nyquist frequency as though they were
folded over about fmax .

A signal having a spectrum with frequency components higher than the alias frequency of the sampler is
considered next.

These higher frequency components fold back, and add to those components below the alias frequency. As
a result, the signal spectrum as seen by the sampler is distorted.

To avoid this situation, an anti-alias filter is inserted at the input of the sampler to severely attenuate those
frequencies above the alias frequency.
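A quick way to see this fold-over numerically is to compute the frequency a sampler reports for a given input. The plain-Python sketch below is illustrative only; it uses the 4-msec example above.

    dt = 0.004                       # 4-msec sampling interval
    f_nyq = 1.0 / (2.0 * dt)         # Nyquist (alias) frequency = 125 Hz

    def folded_frequency(f_in):
        """Frequency the sampler reports after fold-over about f_nyq."""
        f = f_in % (2.0 * f_nyq)     # the response is periodic in 2*f_nyq
        return f if f <= f_nyq else 2.0 * f_nyq - f

    print(folded_frequency(50.0))    # -> 50.0 Hz (unchanged, below Nyquist)
    print(folded_frequency(200.0))   # -> 50.0 Hz (200 Hz folds over to 50 Hz)
    print(folded_frequency(250.0))   # -> 0.0 Hz (even multiple of f_nyq)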


Spatial aliasing

Above we treated the effect of undersampling with respect to the signal; another important aspect of
sampling is aliasing in the spatial sense.

Let's look at the following example


A single frequency signal is produced with a delay across the window, creating the dip shown in the
upper left window, and is sampled at 10 m, 20 m, 40 m and 80 m.
Observe the spatial aliasing errors due to the coarse sampling.


The larger the spatial sampling intervals, the more serious the aliasing problem becomes, especially in COP
and CMP.

The above example demonstrates the importance of computing the right group interval.
The maximum allowable group interval can be computed by looking at the following diagram.

Receiver station spacing

The distance between stations A and B is δx, and the waveforms are half a wavelength apart.
Aliasing begins when the waves arriving at A are delayed by half a wavelength or more relative to B.
This delay is indicated by δz.
We know that

sin θ = δz / δx

Therefore, setting δz = λ/2 at the onset of aliasing,

δx = λ / (2 sin θ) = v / (2f sin θ), where v is the velocity of the layer.

Now if the stations were separated by the above computed value, or more, then the waveform with velocity
v, dip angle θ, and frequency f will be aliased.

To overcome the spatial aliasing problem we normally put the station spacing at half the distance δx, so

δx = λ / (4 sin θ) = v / (4f sin θ)

Let's look at an example:

θ = 30 degrees, f = 60 Hz, and v = 3000 m/s;

then δx = 3000 / (4 × 60 × sin 30°) = 3000 / 120 = 25 meters.

So the maximum trace spacing or group interval should not exceed 25 meters.
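A minimal calculation of the maximum group interval, reproducing the example above:

    import math

    def max_group_interval(v, f, dip_deg):
        """Maximum allowable group interval dx = v / (4 f sin(theta)),
        i.e. half the spacing at which spatial aliasing begins."""
        return v / (4.0 * f * math.sin(math.radians(dip_deg)))

    print(max_group_interval(3000.0, 60.0, 30.0))   # -> 25.0 meters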

SEISMIC AMPLITUDES

Seismic amplitudes are affected by many different factors. In interpretation it is often assumed that
amplitude variations are due to geologic changes; although this may be true, very often the cause is not
geology.

Other factors that may cause changes in amplitude are acquisition, processing, resolution, absorption,
diffractions, tuning effect, constructive and destructive interference, and distortions in general, as was
mentioned earlier on.

The interpreter has to be aware of the possible causes of amplitude changes at all times.

SEISMIC AMPLITUDE

Seismic amplitude is affected by many factors; the following is a short overview of some of these factors.

Factors relating to geology.

• Reflection at interfaces
• Rugosity of reflectors.
• Reflector curvature and velocity focussing.
• Variation of reflection coefficient with angle of incidence.
• Peg-leg multiples from thin beds.
• Interference of different events.
• Wavefront healing by diffractions.
• Absorption
• Scattering
• Transmissivity.
• Attenuation in the near surface.

Factors not related to geology:

• Energy input source strength and coupling.


• Superimposed noises
• Spherical divergence and raypath curvature.
• Instrumentation
• Array directivity
• Geophone sensitivity and coupling.
• Energy received and recorded.


Direct hydrocarbon indicators.

Occurrences:
• Diversity of characteristics.
• Better developed in marine than land data.
• Found in diverse geographic and geologic areas.
• Occur in reservoirs of various ages, depths and depositional environments.
Types:
• Amplitude variation.
• Frequency variation
• Phase shifts and diffractions
• Fluid contact reflection or flat spot.
• Shadow zones, energy absorption in gas zones
• Sag, velocity changes in reefs, etc.
Ideal conditions for DHI’s are:
• Relatively shallow depths
• Simple structures
• Uniform reflection interval, which drapes across structure.
• Marine data.

THE LIMITATIONS OF DHI’s:

• Seismic response is the result of porosity and lithology contrast, and hydrocarbons have a
negligible effect on them.
• Seismic response to porosity is very similar to that of hydrocarbons in less compacted beds.
• Facies changes in the reservoir bed or sealing beds
• Destructive or constructive interference.
• Geometry of signal path may distort the seismic signal.
• CDP smearing may alter amplitude of the flank of a structure.
• Polarity must be strictly controlled.

Pitfalls in DHI’s:

• Low gas saturation reservoirs containing water, with as little as 10% gas saturation, give DHIs
similar to completely gas saturated reservoirs.
• Incorrect polarity can cause high impedance layers to look like low impedance sediments.
• Flat spot reflections not related to fluid contact:
• Results of complex geology, processing or multiples
• A flat spot should be located at the crest of an areally restricted reservoir.
• Low impedance rocks that extend down the entire flank of the structure.
• Anomalies that extend beyond the trap limits.
• Anomalies caused by coal beds will terminate without accompanying fluid contact


Basic facts that you should know:

• Velocity increases with overburden
• Seismic expression is always less accurate than actual conditions.
• Sideswipe should be checked on all 2D data.
• A true anticline is sigmoidal, not hyperbolic.
• Any significant geologic feature can cause a velocity anomaly.
• Salt dome migration: always migrate in and up.
• Flowage will cause a velocity slowdown.
• Diffractions are hyperbolic and have maximum curvature at the top.
• Anticlines and synclines are continuous curves.

The ideally processed seismic trace must

• Yield events that have accurate amplitudes
• Have broadened the frequency band of the data
• Have properly treated the phase to give a zero-phase section
• Provide reliable estimates of event time and amplitude
• Have stripped away all interfering noise and multiples
The above listed factors cannot all be removed successfully, but we should try to come as close as possible.


AVO ANALYSIS

An Amplitude Versus Offset Study

Amplitude versus offset (AVO) forward modelling was attempted in northeastern British Columbia.
The Gething Sand was the zone of investigation.

Excellent gas reserves have been discovered within the Gething Sand. Unfortunately, the productive gas
sand cannot be distinguished from the shale filled sands in this area using conventionally stacked seismic
data. Since AVO analysis works well in distinguishing gas filled sands from other lithologies, a series of
forward models were generated to establish if AVO may be used in the area to aid in distinguishing the gas
sands.

The sonic and density logs from the D-21-L shaled-out well were digitized and entered into the modelling
program for use in the forward modelling process. The sonic log from the D-35-L gas well was also digitized.
Unfortunately no density log was available for the D-35-L well. The gas sand portion of the D-35-L well, as
marked on the cross section supplied (approximately 36.6 meters), was spliced into a copy of the D-21-L
sonic log to aid in the modelling process. The logs were blocked over the zone of interest. The blocks were
determined with the aid of the sonic and gamma ray logs and the cross section. The following blocks were
used (all units are metric):
Block 1 1290.5 - 1311.2
Gething 1311.2 - 1348.8
Block 3 1348.8 - 1360.0

Prior to running the final forward models, a sample forward model was run to determine the offset range
required to achieve an angle of incidence range of approximately 5 to 35 degrees. The testing proved that
an offset range of 200 meters to 2000 meters over 20 offsets would produce an angle of incidence range of 4
to 35 degrees in 2 degree increments.


For all of the models the regional background level of Poisson's Ratio was set to 0.30. For the gas sands,
Poisson's Ratio was set to 0.15. For the shale filled sand in D-21-L, Poisson's Ratio was set to 0.25. Screen
plots of all of the models described below have been included in this report. For all models the Gething
reflection has been highlighted in yellow.

Four gas sand models were generated using the blocking outlined above. For all of the models, the density
for the gas sand had to be calculated theoretically as no density log exists for a gas sand in the area. Three
of the models were generated using theoretical P-wave velocities. Since the Gething has a high porosity, the
calculations for both the density and the P-wave velocity were made assuming porosities of 15, 20 and 25
percent.
Copies of the calculations have been included in this report. The fourth model used the average P-wave
velocity of the D-35-L gas well and used the density associated with the 20% porosity model.

For the 15% porosity gas sand model, the interval transit time for the Gething was set to 270.9
microseconds per meter (approximately 3700 m/s) and the density was set to 2.39 g/cc.
The Gething reflection exhibited a polarity reversal as it went from a peak at near offsets to a trough at far
offsets.

For the 20% porosity gas sand model, the interval transit time for the Gething was calculated to be 303.2
microseconds per metre (approximately 3300 m/sec) and the density was calculated to be 2.3 g/cc. This
forward model produced a low amplitude trough at the Gething at the near offsets which increased
dramatically in amplitude with offset.

For the 25% porosity gas sand model, the interval transit time used for the Gething was 335.5
microseconds per meter (approximately 2980 m/sec) and the density used was 2.21 g/cc. The near offset
response for the Gething in this model was a large amplitude trough which increased slightly in amplitude
with offset.

The final gas model incorporated the average interval transit time from the D-35-L gas well, 316
microseconds per meter (approximately 3165 m/s). The density was set to 2.30 g/cc as used in the 20%
porosity model. The resulting Gething reflection began as a weak amplitude trough at near offset and
increased in amplitude with increasing offset.

A shaly sand model was also generated using the blocking outlined above. In this case the interval transit
time and density were acquired by averaging the values from the D-21-L well. The values used were
225.69 microseconds per meter (approximately 4430 m/sec) for the interval transit time and 2.56 g/cc for
the density. In this case the Gething reflection near offset response was a strong peak which decreased
dramatically with increasing offset.

While the modelled responses for the gas models versus the shale models indicate that the zero offset
response should indicate the difference between the two reservoirs, this is not as evident when looking at
the general shape of the seismic reflectors from approximately 650 ms to the end. While there are apparent
differences in frequency content, the general shape of the reflectors is similar. With the introduction of
noise this would be even more difficult to distinguish.

Fortunately the AVO response differs greatly between the gas sand and the shale.


One additional model was generated using a thinner Gething of 13.4 meters (the most porous portion of the
D-35-L).
The logs for the D-21-L well were blocked as follows:
Block 1 1290.5 - 1311.6
Gething Gas 1311.6 - 1325.0
Shaly Gething 1325.0 - 1333.8
Block 4 1333.8 - 1360.0
For the Gething gas sand the porosity was assumed to be 20%; therefore, the interval transit time was
set to 303.2 microseconds per meter and the density was set to 2.30 g/cc. The Shaly Gething used a density
of 2.57 g/cc and an interval transit time of 221.06 microseconds per meter. The resulting trough at the
Gething gas sand increased in amplitude with offset.
Without the AVO forward model this situation would be impossible to differentiate from the shale model
as the zero offset responses are very similar.

The forward models indicate that AVO analysis should distinguish between Gething gas sand and shaly
sands. The next logical step would be to test the model results against real data examples gathered in the
area. Should data be acquired for the purpose of AVO analysis, care should be taken in choosing the
parameters. In addition to an adequate offset range, the fold must be sufficient to provide a good signal to
noise ratio.


The above example shows how the Zoeppritz equations can be used successfully in determining whether sands are gas
bearing. The comparison to real data might, however, not be perfect, and in most instances differs significantly from
the real seismic data amplitudes.

Zoeppritz equations:
• Describe plane waves, but seismic waves are spherical.
• Do not take into account transmission losses, attenuation, divergence, geophone directivity, or the effects of
multiples and converted waves.
• Apply to reflections between two half spaces and do not include the interference effects produced by layering
and wavelets.
A common small-contrast simplification is sketched below.
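For quick estimates, the full Zoeppritz solution is often replaced by the two-term Shuey approximation, R(θ) ≈ A + B sin²θ. This is a standard simplification, not the modelling package used in the study above, and the interface values in the usage loop below are hypothetical.

    import math

    def shuey_two_term(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
        """Two-term Shuey approximation R(theta) ~ A + B*sin^2(theta).
        Only valid for small contrasts and incidence angles below ~30 deg."""
        vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
        dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
        a = 0.5 * (dvp / vp + drho / rho)                               # intercept
        b = (0.5 * dvp / vp
             - 2.0 * (vs / vp) ** 2 * (drho / rho + 2.0 * dvs / vs))   # gradient
        return a + b * math.sin(math.radians(theta_deg)) ** 2

    # hypothetical shale over gas sand interface, 0 to 35 degrees
    for ang in (0, 15, 25, 35):
        print(ang, round(shuey_two_term(3400, 1700, 2.45, 3300, 1900, 2.20, ang), 4))

For these hypothetical values the amplitude is a trough that grows with angle, the kind of increasing-amplitude behaviour described for the lower impedance gas sand models above.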

To create more realistic models for analysis, we should use full elastic waveform models.

The full elastic waveform model includes converted waves and offset dependent amplitudes for each interface.
It also includes the transmission losses, attenuation, dispersion, geometry offset effects, overburden effects, tuning and
interference effects, all of these factors can significantly affect the AVO response.
The following examples will illustrate the differences.


These examples clearly show the effect gas has on P-waves and S-waves. The P-waves are very sensitive to gas and
fluid in the pore space of the reservoir and react accordingly, while the S-waves will not react, as they travel through
the rock matrix and are therefore not affected by the gas or fluid in the pore space.


The presence of even a tiny amount of gas in the pore space of a sandstone will reduce the p-wave velocity, while the s-
wave velocity will actually increase slightly with higher gas concentrations. The Vp/Vs ratio decreases greatly even
with only a slight percentage increase in gas saturation.
This decrease in the Vp/Vs ratio changes the relative amplitudes of the reflections from the top and base of the
reservoir rock as a function of the angle of incidence.

To properly use AVO interpretation, we must first understand the local geologic setting, then design the appropriate model,
and then design the pre-stack processing flow and meaningful displays.

There are several types of gas-sand responses:

• Higher impedance gas sands: the peak decreases with offset and can change polarity at far offsets
(good examples of this are Cretaceous sands in Western Canada)
• Near zero impedance contrast: increasing amplitude with offset
• Lower impedance gas sands: a bright spot at near offsets becomes brighter at far offsets.

The contrast of the Poisson ratio with respect to the surrounding beds is the key to all AVO interpretations.

The following factors can affect amplitudes and should be treated in the processing flow.

• Multiple interference and phase changes with offset
• Tuning, interference and noise
• Mode conversions and array directivity
• Source strength, consistency and directivity
• Transmission losses
• Coupling of receivers and directional sensitivity
• Instrumentation
• Effect of overburden
• Spherical spreading
• Reflector curvature and rugosity
• NMO errors and stretching

The key to proper AVO processing is to remove all the factors, that can influence the amplitudes,
except for the effect caused by the angle of incidence, and then we create displays to highlight the
AVO responses.

Some of the tools used are:


• Offset limited stacks, Ostrander gathers, angle limited stacks
• Normal incidence P-wave stacks, gradient stacks, product stacks
• Delta P stacks, delta S stacks, fluid stacks
• Inversion and modelling


This display shows the difference between an AVO attribute stack and a fluid stack (a simple difference stack), which
will enhance the effect of gas bearing sands. The attribute stack just does not do it. The gas effect here is that of a
higher impedance gas sand.
Fluid stacks or weighted stacks are a good means for gas detection. The procedure is outlined in a paper by G. C.
Smith and P. M. Gidlow in Geophysical Prospecting, vol. 35, p. 993-1014, 1987.


The procedure is quite straightforward and processing is standard, except that great care is taken to treat the
amplitudes correctly.

• A cross plot is constructed of the S- and P-wave velocities, and a logarithmic plot of density versus P-wave
velocity is constructed as well.
• An analysis is made of the spread in log V and log ρ, and the shale and sandstone cross-plots are analyzed as well.
• A spherical divergence correction is applied, proportional to Vrms²T.
• Then a frequency-wavenumber filter is applied to attenuate the coherent noise outside the region of the primary
reflection spectrum.
• An array compensation is applied.
• Velocity analyses are picked based on trace-by-trace and horizon velocity analysis.
• No data dependent scaling or equalization is performed.
• The weights are then computed using the smooth interval velocity function created from log data or VSP.
• The weights are then applied with a slight shift to compensate for transmission losses and anisotropy.

The parameters of the model are V = P-wave velocity, W = S-wave velocity, q = V/W, σ = Poisson ratio, and ρ = density.
The weighted stacks are then computed to give:
the P-wave velocity reflectivity ∆V/V,
the S-wave velocity reflectivity ∆W/W,
the pseudo Poisson ratio reflectivity ∆q/q,
and the fluid factor ∆F = ∆V/V − 1.16 (W/V) ∆W/W.
The pseudo Poisson section ∆q/q can be constructed by subtracting the ∆W/W section from the ∆V/V section,
or the weights can be subtracted to give a new set of weights which give the fluid factor directly.

Insertion of gas reduces the P-wave velocity but does not affect the S-wave velocity; therefore a fluid factor can be
defined.
The “fluid factor” will be close to zero for all water bearing rock, negative at the top of a gas bearing sand, and positive
at the bottom of a gas bearing sand.
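A minimal sketch of these combinations, assuming the ∆V/V and ∆W/W reflectivity traces have already been estimated from the weighted stacks, and using a placeholder background W/V ratio:

    import numpy as np

    def pseudo_poisson_and_fluid(dv_v, dw_w, w_over_v=0.5):
        """Smith & Gidlow (1987) combinations of P and S reflectivities:
        pseudo-Poisson dq/q = dV/V - dW/W, and the fluid factor
        dF = dV/V - 1.16 * (W/V) * dW/W.
        w_over_v is the smoothed background W/V ratio (0.5 is a placeholder)."""
        dv_v, dw_w = np.asarray(dv_v), np.asarray(dw_w)
        dq_q = dv_v - dw_w
        fluid = dv_v - 1.16 * w_over_v * dw_w
        return dq_q, fluid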


P STACK

S STACK


PSEUDO POISSON STACK

FLUID STACK


Lithological effects on the Poisson ratio:


An increase in:        produces:
Gas saturation         a decrease in Poisson ratio
Shale content          an increase in Poisson ratio
Quartz content         a decrease in Poisson ratio
Porosity               a decrease in Poisson ratio
Reservoir pressure     a decrease in Poisson ratio
Temperature            an increase in Poisson ratio

An improved AVO fluid detection and lithology discrimination process uses the Lamé petrophysical parameters λρ,
µρ, and λ/µ.
In this process, described in a paper by Bill Goodway et al. (PCP) at the 1997 CSEG convention, the parameters λρ
and λ/µ provide a robust method of determining lithology and gas sands. This method is better at determining
lithology than the fluid factor or the Poisson ratio.

The Lamé parameters λ and µ represent incompressibility and shear rigidity; ρ represents the density. These
parameters relate directly to lithology and porosity.

E.g. gas sands are more compressible than wet sands, and therefore have lower λ values. And since gas is less
dense than water, porous sands with gas will also have low density values.
These two factors enhance each other in the product λρ by producing very low values.

In unstressed pore space, the maximum amount of pore space is available.

In compressed rock, the compression squeezes the grains closer together, causing a decrease in available pore
space. Fluids resist the compression and increase the pressure against the grains, and therefore produce a less
compressible rock. Gas cannot resist as well and will compress more easily, leading to low compressibility or λ values.

In sheared rock matrix, the grains attempt to slide across each other. Shearing is easier in shales than in sands, due to
the nature of each rock. Shearing does not decrease the pore space as much, so the liquid or gas in the pore space does
not exert much more pressure under shear stresses than under unstressed conditions.

λρ clearly distinguishes gas sands from shales, as well as carbonates from shales and sands.
A λρ – µρ cross plot further distinguishes shales from tight sands.

Cross plots: Vs·ρ versus Vp·ρ (left), and µρ versus λρ (right)

Poisson's ratio expressed in terms of Vp and Vs is as follows:

σ = [0.5 (Vp/Vs)² − 1] / [(Vp/Vs)² − 1], and in terms of λ and µ: σ = λ / [2(λ + µ)].


The rigidity or shear modulus is given by µ, and the Lamé constant is given by λ. The relationships between the
various constants and the velocities are as follows:

(λ + 2µ) = ρVp² (1−r²)/(1+r²)²  and  µ = ρVs² (1−r²)/(1+r²)²,

where the term (1−r²)/(1+r²)² indicates the degree of velocity dispersion for linear viscoelastic media; when
damping is small this term can be ignored.
So then: (λ + 2µ) = ρVp², and µ = ρVs².
In the example below (velocities in m/s, densities in g/cc, moduli in GPa):
gas sand: Vp/Vs = 2857/1666 = 1.71, (Vp/Vs)² = 1.71 × 1.71 = 2.92,
σ = (0.5 × 2.92 − 1) / (2.92 − 1) = 0.24,
µ = 2.275 × 1666² / 10⁶ = 6.31, and λ = (2.275 × 2857² / 10⁶) − 2 × 6.31 = 5.94,
and λ + 2µ = 5.94 + 2 × 6.31 = 18.57, and λ/µ = 5.94/6.31 = 0.94.
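The same arithmetic as a small helper (velocities in m/s, density in g/cc, moduli in GPa; the dispersion term is ignored as above):

    def lame_parameters(vp, vs, rho):
        """mu = rho*Vs^2 and lambda = rho*Vp^2 - 2*mu, converted to GPa
        for vp, vs in m/s and rho in g/cc (hence the 1e-6 factor)."""
        mu = rho * vs ** 2 * 1e-6
        lam = rho * vp ** 2 * 1e-6 - 2.0 * mu
        return lam, mu

    lam, mu = lame_parameters(2857.0, 1666.0, 2.275)        # gas sand example above
    print(round(lam, 2), round(mu, 2), round(lam / mu, 2))  # -> 5.94 6.31 0.94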


AVO PROCESSING FLOW OPTIONS


• Velocity dependent geometric spreading correction
• Ground role modeling and subtraction
• Surface consistent scaling, deconvolution, and statics
• AVO friendly trace scaling options
• Non-hyperbolic NMO options
• Angle gathers created from offset gathers
• AVO friendly DMO to bins
• AVO friendly full prestack migration to bins
• AVO and AVA displays and attributes
Kelman Technologies Inc.

1. Geometric spreading and anelastic attenuation (frequency attenuation)


2. Maintain broad band signal
3. Land stratigraphic - marine OBC ==> use of statistics to stabilize solutions in noisy environments
4. Noise interpolation and scaling
5. Long offsets / lateral velocity change / anisotropy
6. AVA Analysis
7. DMO + zoom ==> it is true amplitude
8. Kirchhoff migration to bins ==> it is true amplitude
9. Supporting displays and attributes Related ==> depth migration to gathers - anisotropic corrections

AVO FRIENDLY TRACE SCALING OPTIONS

• A single long window is split into several gates, and a median scalar is used to eliminate noise burst effects
• Transparency filter/scaling
• Amplitude spikes are scaled down using smooth scalars in time, and the associated primary down-weighting is
fold compensated in the stack

Kelman Technologies Inc.


NON-HYPERBOLIC NMO OPTIONS

• Shot and receiver surface and weathering referenced NMO
• Model based NMO to correct for very long offsets
• Model based NMO to correct for lateral velocity variations
• Model based anisotropic NMO

Kelman Technologies Inc.

ANGLE GATHERS CREATED FROM OFFSET GATHERS

• Two incident angle calculation options:
– straight line assumption
– correction for time variant NMO by using the NMO stretch/incident angle relationship
• An angle for each sample is computed for later use in AVA analysis
• AVA gathers with any desired set of angle range bins can be created
Kelman Technologies Inc.


AVO FRIENDLY DMO


HOW DOES IT WORK?
– Trace by trace t-x domain summation
– Each 3D DMO output trace is summed into the four nearest CDP bins, with weights based on
1/(CMP to CDP distance), and the total trace weight is normalized to one (a small sketch of this
weighting follows below)
– DMO’d data may be output to any bin grid density and line orientation

Kelman Technologies Inc.
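A sketch of the inverse-distance bin weighting described above; the normalization guarantees the total trace weight is one, and the distances used here are hypothetical:

    def bin_weights(distances):
        """Inverse-distance weights for the four nearest CDP bins,
        normalized so the total trace weight sums to one."""
        eps = 1e-9                           # guard against a zero distance
        w = [1.0 / (d + eps) for d in distances]
        total = sum(w)
        return [wi / total for wi in w]

    print(bin_weights([5.0, 10.0, 15.0, 20.0]))   # weights sum to 1.0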

TRUE AMPLITUDE DMO


– Each input trace is DMO’d into n zero offset traces, each characterized by a simple time shift.
– Under post stack time migration these traces will sample the prestack time migration ellipse evenly
(dip to dip T.A.D.).
– The trace samples are weighted and stacked into vertical traces lying between the shot and geophone,
and each of these is weighted (as 1/d, total weight = 1.0) and stacked into the nearest 4 output CDP
bin centers.
– Each offset bin stack is normalized based on the accumulated weights (offset bin to offset bin T.A.D.).

[Diagram: shot / CMP / geophone, with zero offset output bins b0 through b4]

Kelman Technologies Inc.


AVO FRIENDLY DMO (con’t)


HOW IS OPERATOR-ALIASING NOISE CONTROLLED?
– The user controls operator-aliasing (noise caused by operator cancellation problems) by specifying the
number of proliferated traces
– DMO dip limits are user input

Kelman Technologies Inc.

AVO FRIENDLY DMO (con’t)


WHAT ABOUT VARIABLE FOLD, ETC.?
– The illumination balancing option may be used to equalize the contributing energy from CDP’s
– Traces moved via flex-binning have shot and receiver coordinates automatically updated
– The output from DMO is normalized on the basis of DMO’d trace input fold and the associated relative
weighting of each trace
– DMO artifacts due to geometry, offsets, mutes, etc. may be modeled and used to scale final DMO results

Kelman Technologies Inc.


AVO FRIENDLY DMO (con’t)


IS IT AVO FRIENDLY?
– The trace is DMO proliferated such that each dip is represented equally after zero offset migration
(i.e. dip to dip true amplitude).
– The total data weight stacked into each CRP bin and time sample is accumulated, i.e. weights
associated with the initial DMO trace plus weights associated with distribution to the four nearest CDP’s.
– The offset bin stack is normalized on the basis of the accumulated weights such that changes in fold are
compensated.
– The ability of the DMO to maintain AVO information for a given geometry and fold can be evaluated.
Kelman Technologies Inc.

AVO FRIENDLY FULL PRESTACK TIME MIGRATION (PSTM)

HOW DOES IT WORK?
– Full 3D one pass prestack Kirchhoff migration
– Travel times are surface and weathering layer referenced separately at shot and geophone
– Each trace is explicitly “moved out” or migrated to any desired x-y coordinate within the migration
aperture
– Migrated data may be output to any bin grid density and line orientation
– Output can be targeted to any line or cube of data

Kelman Technologies Inc.


AVO FRIENDLY FULL PSTM (con’t)


HOW IS OPERATOR-ALIASING NOISE CONTROLLED?
– The weighting scheme and the maximum angle to migrate data can be tuned by the user to control operator
cancellation.
– Conventional anti-aliasing is achieved by high cut filtering in a dip dependent fashion.
– The user may specify directionally dependent anti-alias filtering to accommodate differing shot line and
receiver line spacing.

NOTE:
The use of tuned dip limits maximizes broad band steep dip imaging and is more optimal for dip to dip
relative amplitude and character retention.

Kelman Technologies Inc.

We have experimented with operator dip cut off and weighting tuning for many years.
Each data set must be tuned ==> best for broad band data consistent with dip

AVO FRIENDLY FULL PSTM (con’t)


HOW IS IRREGULAR FOLD HANDLED?
– The input data may be fold regularized to compensate for irregular geometry (editing according to trace
quality criteria) to reduce operator cancellation artifacts
– The illumination balancing option can be used to scale each input trace such that all CDP’s have equal
energy representation
– Traces moved via flex-binning have shot and receiver coordinates automatically updated
– Each output trace is normalized on the basis of the input migrated trace weights and fold
– Footprint effects due to recording geometry can be modeled using a representative set of data to replace
each data trace
Kelman Technologies Inc.


AVO FRIENDLY FULL PSTM (con’t)


IS IT AVO FRIENDLY?
– The migration may be output to offset bin range sets
– Each trace is weighted as a function of angle according to a user defined set of weighting functions
– The preset weighting is the cosine of the dip angle, and data are migrated to 89°
– Each offset bin stack is normalized on the basis of the total data weight stacked into each time sample
– The ability to maintain AVO information for a specific geometry and fold situation can be evaluated

Kelman Technologies Inc.

AVO & AVA DATA DISPLAYS


• Ostrander gathers with zone of interest windows displayed vertically to give a compact presentation
of AVO/AVA characteristics
• Inside, middle and far offset stacks
• Amplitude picks from a defined horizon displayed on top of the section
• Ratio of outside to inside horizon amplitude picks displayed
• Color coded AVA attributes displayed as the background of a wiggle trace

Kelman Technologies Inc.


AVO & AVA ATTRIBUTE ANALYSIS

• Least squares fitted amplitude gradient (G): amplitude as a function of offset, angle, sin θ or sin²θ
(a least-squares sketch follows below)
• Zero offset intercept stack (P stack)
• Goodness of fit to the gradient (C)
• Various combinations of AVO/AVA attributes:
(sign of P) × G
(sign of P) × G × C

Kelman Technologies Inc.
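A minimal least-squares version of this attribute fit, assuming amplitudes have already been picked on an angle gather; the goodness-of-fit measure here is an R²-style statistic, one plausible choice for C:

    import numpy as np

    def avo_attributes(angles_deg, amplitudes):
        """Fit A(theta) = P + G*sin^2(theta): P = zero-offset intercept,
        G = amplitude gradient, C = goodness of fit (1 = perfect)."""
        x = np.sin(np.radians(np.asarray(angles_deg, float))) ** 2
        a = np.asarray(amplitudes, float)
        G, P = np.polyfit(x, a, 1)            # slope, intercept
        resid = a - (P + G * x)
        C = 1.0 - resid.var() / a.var()
        return P, G, C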

DMO BASED 3D PRESTACK MIGRATION

Flow:
DMO to offset bins → Zero Offset Migration → Apply Residual NMO + Stack →
De-migrate Using Velocity Cube (A) → Residual Post Stack Time Migration → Post Stack Depth Migration

where:
A: Time Variant, Spatially Constant Low Velocity Cube
B: Imaging Velocity Analysis
C: Compute Residual Migration Velocities

Kelman Technologies Inc.


3D PRESTACK TIME MIGRATION OPTIONS


Kirchhoff:
• Each trace is migrated to its exact final location before stack
• Structural movement and imaging done in a single true one pass operation
• More sensitive velocity analysis possible: the user picks geologic structure and imaging simultaneously
• It is expensive
• It is inexpensive to select targeted outputs

DMO + Zero Offset Migration:
• DMO to bins + Z.O.M. + Residual Velocities + Stack + Residual Migration
• Staged movement and imaging
• Imaging velocities picked on partially migrated data
• It is much less expensive
• Targeted outputs not feasible

Kelman Technologies Inc.

DEPTH IMAGING OPTIONS


• Kelman Depth Imaging Tools:
– Advanced 2D model building tools
– Turning ray 2D depth migration
– Anisotropic 2D depth migration
– Beta version 3D model building tools
• Paradigm Depth Imaging Tools:
– Full 2D/3D model building and depth migration
– Target oriented or full volume 3D prestack depth migration

GOAL: Seamless model movement between the two packages.

Kelman Technologies Inc.


REMOVAL OF NONHYPERBOLIC NMO DUE TO CHANGING WATER BOTTOM DEPTHS

Near Trace Stack → Create Simple Water/Land Depth Model

Deconvolved Data → Apply Model Based NMO (non-hyperbolic) → Remove CDP Based NMO (hyperbolic) →
Conventional Velocity Analysis → NMO + DMO → Stack
Kelman Technologies Inc.


SEISMIC RESOLUTION

Seismic resolution is divided into vertical and lateral resolution. Resolution of thin layers is dependent on
frequency content.

VERTICAL RESOLUTION of seismic data is controlled by the seismic wavelet’s bandwidth and its length
and shape. The fundamental model for thin bed resolution consists of two spikes separated by the two way
travel time through the thin bed and convolved with a zero phase seismic wavelet.
There can be many variations on this by changing polarity (direction) and amplitude of the spikes,
depending on the geological model we are trying to mimic.
Let’s refer now to a classic paper by M. B. Widess, “How thin is a thin bed?”. In this paper Widess shows
how thin beds affect the amplitude of a reflection.

It is shown that with beds less than about one eighth of a wavelength thick, the amplitude of the reflection is
approximately 4πAb/λb, where b is the thickness of the bed, λb is the wavelength in the bed, and A is the
amplitude the reflection would have if the bed were very thick.

For example, with a wavelength of 200 ft (a velocity of 10,000 ft/s and a frequency of 50 Hz), a bed as thin as
10 ft can be detected, with an amplitude of about 0.6 of that of a very thick bed.
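A one-line check of the Widess approximation with the numbers above:

    import math

    def widess_relative_amplitude(b, wavelength):
        """Widess thin-bed approximation (valid below about lambda/8):
        reflection amplitude relative to a very thick bed = 4*pi*b/lambda."""
        return 4.0 * math.pi * b / wavelength

    print(widess_relative_amplitude(10.0, 200.0))   # -> ~0.63, i.e. about 0.6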


Wood and Kallweit, and Moffitt (1977) developed, for a sinc wavelet, the tuning thickness as 1/(1.4·fu)
and the temporal resolution as 1/(1.5·fu), where fu is the upper bandpass frequency.

Temporal resolution and tuning thickness for a Ricker wavelet can be given by the following:

Tr = (2.31·fp)⁻¹, or a limiting bed resolution of ∆ZR = λp / 4.62, for temporal resolution,

and

b/2 = (2·fp)⁻¹, or ∆Zb = λp / 4, for tuning thickness.

As can be seen in the wedge model, the tuning thickness is ∆Zb = 200/4 = 50 ft, and the temporal resolution
gives ∆ZR = 200/4.62 = 43.3 ft.

Another example: consider a zero phase deconvolved section with a bandpass of 10-65 Hz. This gives a tuning
thickness of 1/(1.4×65) = 11.0 msec and a temporal resolution of 1/(1.5×65) = 10.3 msec.
If the velocity of the gas filled sand is 8000 ft/s, then the bed thickness at tuning is 11.0 × 8 / 2 = 44 ft.
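The same arithmetic as a small helper, using the sinc-wavelet constants quoted above (fu is the upper bandpass frequency):

    def tuning_and_resolution(fu, v):
        """Tuning thickness 1/(1.4*fu) and temporal resolution 1/(1.5*fu),
        converted to bed thickness via v*t/2 (two-way time)."""
        t_tune = 1.0 / (1.4 * fu)
        t_res = 1.0 / (1.5 * fu)
        return v * t_tune / 2.0, v * t_res / 2.0

    print(tuning_and_resolution(65.0, 8000.0))   # -> (~44 ft, ~41 ft)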

Tuning thickness should not be confused with temporal resolution, which is the value the apparent
thickness converges to as the bed thins to zero. Tuning thickness is ¼ of the predominant wavelength.

Temporal resolution may be determined by setting the second derivative of the convolving wavelet to zero,
or by measuring the inflection point separation around the central maxima of the convolving wavelet.

A sinc wavelet is a wavelet of the form sinc x = (sin πx)/(πx).


The following is the same model with a broader type Ricker wavelet. Look at the tuning effect and the
thinnest possible bed thickness that can be detected.

All the above examples hold true as long as there is lateral uniformity in lithology; if there is a drastic
change in lateral acoustic impedance, then the thickness determination by amplitude will not hold.

We have seen that with excellent data we can resolve beds at 1/8 wavelength, with poorer data this may
reduce to ¼ wavelength. The smallest thickness that will have some influence on a seismic wave is 1/30 of a
wavelength, so although it is too thin to be observed directly it will contribute to the reflectivity series and
may influence the seismic trace.

LATERAL SEISMIC RESOLUTION centers around the three dimensional phenomenon known as the
Fresnel zone. The Fresnel zone determines the minimum areal extent of a geologic unit which can be
detected as a proper seismic reflection. The Fresnel zone radius is:

Rf = 0.5 · V · (T/f)^(1/2)

So for an event at 1.0 sec with a dominant frequency of 25 Hz and a velocity of 4000 m/s, the Fresnel
radius is equal to 0.5 × 4000 × (1/25)^(1/2) = 400 meters.
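The Fresnel radius formula as a helper, reproducing the example above:

    import math

    def fresnel_radius(v, t, f):
        """First Fresnel zone radius Rf = 0.5 * V * sqrt(T/f), with average
        velocity V, two-way time T and dominant frequency f."""
        return 0.5 * v * math.sqrt(t / f)

    print(fresnel_radius(4000.0, 1.0, 25.0))   # -> 400.0 meters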


Fresnel zone ring

The size of the Fresnel zone increases with depth, resulting in an increase of the area that affects a
reflection, and a decrease in lateral resolution.
The minimum lateral resolution that can be distinguished is one half the Fresnel radius, so in the above
example that would be 400/2 or 200 meters.

Geologic units covering an area smaller than the Fresnel zone will still contribute to the seismic reflection,
by either weakening the amplitude, disrupting the lateral continuity, or distortion of the waveform.

Diffractions

Let us now consider the following situation, as illustrated in the picture below.


An electromagnetic plane wave approaches an opaque sheet with a slot. The slot width is equal to a, and the
slot is of infinite extent in the direction normal to the page; to the right of this sheet we only see that part of
the wave which passed through the slot. If the slot width is many wavelengths, then the field distribution will
be as shown by (b). To the right of the slot the field behaves, according to Huygens' principle, as if each point
in the slot is the source of a new spherical wavefront, and each of these sources is of equal amplitude and
phase. This last statement can be rephrased as follows:
the slot area can be thought of as a continuous array of point sources, the length of
which is equal to the length of the slot. The field pattern response of this array is shown in (a).

The far field or Fraunhofer response is given by

E = sin(Ψ′/2) / (Ψ′/2), where Ψ′ = (2aπ/λ) sin θ,

and θ is in the x-y plane.
The field pattern in the x-y plane is independent of the extent of the array in the direction normal to this
plane (the page).

The field variation near the slot is often called a Fresnel diffraction pattern.

So close to the slot, and along a line parallel to the sheet, the pattern looks like that of (a),
as the distance from the slot increases the pattern looks like that in (b),
and at very large distances we enter the Fraunhofer region, and the response will look like that of (c).

For a point to be in the Fraunhofer region, the distance has to be such that the lines extending from the
edges of the slot can be assumed to be parallel. This is normally the case when
r ≥ 2a²/λ, so the larger the aperture a, or the shorter the wavelength λ, the greater the distance will
have to be if the pattern we wish to measure is to be free of the Fresnel diffraction.

Fraunhofer diffraction

The Fraunhofer diffraction is a limiting case where the reference wavefronts at the aperture are essentially
plane waves, with the result that the diffraction integral reduces to the exponential of a linear function of
the aperture coordinates.
The Fraunhofer criterion is written as follows: R > d²/λ.
The Fraunhofer condition is well satisfied in many practical cases: optics, radio frequency electromagnetic
waves, acoustic waves (of interest to seismic), and surface waves on a lake.

We know from wave theory that the intensity of a wave of any sort is proportional to the square of the
amplitude of the wave; thus I ∝ |A|².
Then if I0 is the intensity at the center of the diffraction pattern,

I = I0 (sin u / u)² (sin v / v)², where u = −(π dx / λf) xobs and v = (π dy / λf) yobs,

so u and v are normalized coordinates in the observation plane.


Below we will show an example of the intensity pattern for a rectangular aperture.

Extending this to a two dimensional array of apertures we have the following example

Both pictures are from Physics of Waves, W. C. Elmore & M. A. Heald.

For a single slit the intensity function would be I = I0 (sin u / u)²; the following figure shows the
amplitude and intensity curves for such a case.

(a) amplitude, (b) intensity; polar plot of intensity


Now according to Rayleigh's criterion, two point or line sources of equal strength are said to be resolved
when the central peak of one diffraction pattern falls on the first minimum of the other diffraction pattern,
as is shown below.

Rayleigh's Criterion

The Rayleigh criterion for the two point sources at normal incidence is given by the angle ∆θ = λ/(2b).
The following two illustrations clarify the diffraction for two slits.

If we now extend the two slit case to many evenly spaced parallel slits, we find that the intensity of the
diffracted wave is given by the product of a diffraction factor, which depends on the width a of the individual
slit, and an interference factor, which depends on the slit spacing b and the total number of slits N.


The intensity for this diffraction pattern is given by:

I = I0 (sin u / u)² (sin(Nv) / (N sin v))²,

where the interference factor is (sin(Nv) / (N sin v))².
Examples of multiple slit intensity and interference are shown in the figures below.

Fraunhofer diffraction pattern for five slits Interference factor for N slits
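A direct evaluation of this product of diffraction and interference factors; this is a sketch, where u and v are the normalized coordinates defined earlier, and v = 0 would need the limiting value N²:

    import math

    def multi_slit_intensity(u, v, n, i0=1.0):
        """I = I0 * (sin u / u)^2 * (sin(N v) / (N sin v))^2 for N slits.
        Assumes u != 0 and sin(v) != 0 (use the limits 1 and N^2 otherwise)."""
        diffraction = (math.sin(u) / u) ** 2
        interference = (math.sin(n * v) / (n * math.sin(v))) ** 2
        return i0 * diffraction * interference

    print(multi_slit_intensity(0.5, 0.3, 5))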

The figure below shows the diffraction pattern of a circular aperture, it is computed with a Bessel function
of the first order.


We will not treat the 3D diffraction case here in detail, as it is much more involved and beyond the scope of
this course, but we will give Bragg's formula:

nλ = 2d sin θ, where n is the order of the interference.

Bragg's formula can be obtained by combining the waves diffracted, or scattered by individual diffraction
centers and relating maxima to the sets of planes of diffracting centers, into which the lattice or three
dimensional grating, can be subdivided.

We have treated some of the diffraction theory here to give a lead-in to the Fresnel zone problems to be
treated later on in the course.


Fresnel Diffractions

Fresnel diffraction refers to the application of Kirchhoff theory in which the source and observer are closer
to the diffracting aperture than Fraunhofer's infinite distances. This is a more general case than the
diffraction theory described under array development. The circular zones and the knife edge effects will be
introduced here.

Circular zone

From the figure above we can derive the following:

nλ/2 ≡ δ(ρ) = (Rs² + ρ²)^(1/2) + (Ro² + ρ²)^(1/2) − (Rs + Ro),

where the aperture coordinate ρ is hereby replaced by a continuous variable n.
The aperture area between circles of radii ρ(n−1) and ρ(n) is known as the Fresnel zone, or the Fresnel half
period zone.
In an entire circular aperture, all zones contribute equal amplitudes, and the contributions from adjacent
zones are exactly 180° out of phase.
This means that in a circular aperture with an even integral number of Fresnel zones, the intensity will
diminish to zero, but when n is odd the intensity is maximum and equal to 4I0.
Let's show a graphical representation of the Fresnel zone by means of rotating vectors. In the figure below
we see in (a) the first Fresnel zone as constructed by adding the vectors; in (b) we have completed the second
zone.


Now let's look at an off axis diffraction.

A careful analysis of this case requires a two coordinate integration over the aperture, similar to that
carried out for the Fraunhofer case. (The resulting integrals are Lommel functions.)
An off axis case is shown in the figure below.

off axis view of a 3 zone aperture approximate vector diagram for same

As in the Fraunhofer case for a rectangular aperture, the problem is solved by two independent integrals,
each controlled by only one of the aperture's dimensions.
These integrals, the Fresnel integrals, can be plotted against each other and will form the Cornu spiral, just
like in the Fraunhofer case.
like in the Fraunhofer case.

the Cornu spiral


Since the Fresnel zones are defined by phase considerations, 180 degrees for each half zone, the points on
the spiral that represent full integral zones are those at the extrema of S(u). The on axis maxima and
minima are given by C²(u) + S²(u).
The Fraunhofer case is a limiting case of the above, where the aperture unmasks far less than one Fresnel
zone.
Now let us consider the effect of a knife or straight edge on diffractions.

The case can be seen as a semi infinite slit, and is shown in the figure below together with the intensity
diffraction pattern.

diffraction by a straight edge pattern of same

At x = 0 the intensity is equal to I0/4, as the figure shows. In the shadow region (x < 0) the intensity
decreases monotonically to zero; in the illuminated region (x > 0), the maxima and
minima gradually dampen out as x goes to infinity.

Let's look at the Fresnel zone with respect to modelling and migration: in both cases the energy starts at
zero, increases to a maximum value at some radius, and then oscillates in the manner described above for a
knife edge Fresnel zone.
In modelling we assume the energy to be generated from a circular reflecting disk; in migration we use the
migration radius as our measure; but in both cases the Fresnel zone intensity can be directly related to that
of the knife edge Fresnel zone.

This brings us back to the question: when are the effects totally recognizable as separate events?
This happens when the size of the disk, or the radius, is such that two wavelets, or a reflection and a
diffraction, can be detected separately.

This brings us to the definition of a zone of influence, which can be defined as follows (Brühl et al., 1996):
The zone of influence is the area on the reflector for which the difference between the
reflection travel times and the diffraction travel times is less than the length of the wavelet
(∆t).


And in migration (Gijs Vermeer), the zone of influence can be described as :

The zone of influence is the area around the image point ( point of stationary phase) for which the
difference between the reflection travel times and the diffraction travel times is less than the length of
the wavelet (∆t)

The Fresnel zone is always equated in our industry to the zone of influence, but in actual fact the Fresnel
zone is always smaller than the zone of influence.

It is therefore important to look at the zone of influence rather than the Fresnel zone when designing a 3D
seismic program. To clarify this we will look at the radial response of a midpoint area of a bin.

output from midpoint areas


outside midpoint area / partially in / edge of area / mostly in / well inside midpoint area

It should be noted that for a migrated horizon in 3D cross section we will see that the amplitude behaves as
follows.

----------------------------------- limits of 3D volume --------------------------------------





Unmigrated data

Let’s now compare the resolution of unmigrated and migrated data. Consider a flat reflector at zero offset
with a constant velocity; then the nominal reflection point is directly below the data stack point. Each point
on the reflector acts as a new scatter point, as described earlier, producing wavelets which make up the
reflection. The resulting reflections are dominated by an area of δx around the data point.

The travel time function is given by T = [T0² + (2δx/V)²]^(1/2).

Expanding this as a function of δx, we get T ≈ T0 + (1/(2T0))·(2δx/V)². Now if we define a phase shift of 180°,
we get f(T − T0) = 1/2, which gives δx = (V²T0/(4f))^(1/2), the resolution of unmigrated data.

Migration resolution in feet versus two-way time in msec:

two-way time:  100  400  600  900 1200 1500 1700 2000 2300 2600 2800 3100 3400 3600 3900 4200 4500 4700 5000 5300 5600 5800
 6000 ft/sec    50   50   51   52   53   55   56   58   61   63   65   68   71   74   77   80   84   86   90   94   98  100
 8000 ft/sec    67   68   69   71   74   78   81   85   91   96  100  106  113  117  124  130  137  142  149  156  164  168
10000 ft/sec    83   85   87   91   97  104  109  118  127  137  143  154  164  172  183  194  205  213  224  236  248  256
12000 ft/sec   100  103  106  114  123  135  143  156  170  185  196  211  227  238  254  271  288  299  316  333  351  362
14000 ft/sec   117  121  127  138  152  169  181  201  221  242  257  279  301  316  339  362  386  401  425  448  472  488
16000 ft/sec   134  140  148  164  185  208  225  252  279  308  327  357  386  406  437  467  498  519  550  581  612  633
18000 ft/sec   151  159  170  193  221  252  274  309  345  382  407  445  483  509  547  587  626  652  691  731  771  797

Migrated data.

The migration response at each point is obtained by summing over the diffraction curve for each output
point. If the output point coincides with the reflection point, then the data will be in phase and add up. Now
how far do we have to move along this curve before the summation is significantly out of phase?
The diffraction curve for zero offset is TD = (T0² + 4x′²/V²)^(1/2), where x′ is the distance between
input and output points.
The travel time curve for a scatter point δx from the output point is

T = (T0² + 4(x′ − δx)²/V²)^(1/2) ≈ TD − 4x′δx/(V²TD).

The migration sum therefore becomes roughly

M(0,t) = ∫ from −L to +L dx′ W(TD − T) = ∫ from −L to +L dx′ W(4x′δx/(V²TD)),

where W is the wavelet and L is the length of the operator.


The result will be fairly small unless the phase shift is kept within 180° over the integral, so the
lateral resolution δx is given by f · (4Lδx/(V²TD)) = 1/2 → δx = (1/(8Lf)) · V² · (T0² + 4L²/V²)^(1/2).

x'=0 x'
-------------------••-------------------••-
|
|
|
|
|
δx
• •
P

Now for an example where f = 30 Hz, V = 8000 ft/s, and z = 10000 ft (so T0 = 2.5 s),

in the non-migrated case we get (8000² × 2.5 / (4 × 30))^(1/2) = 1154.7 ft,

and in the migrated case (1/(8 × 10000 × 30)) × 8000² × (2.5² + 4 × 10000²/8000²)^(1/2) = 94.28 ft.

So the migrated resolution is at least 11 times better than the non-migrated one.
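The two resolution formulas as helpers, reproducing the numbers above:

    import math

    def unmigrated_resolution(v, t0, f):
        """dx = sqrt(V^2 * T0 / (4 f)) for unmigrated zero-offset data."""
        return math.sqrt(v * v * t0 / (4.0 * f))

    def migrated_resolution(v, t0, f, L):
        """dx = (1/(8 L f)) * V^2 * sqrt(T0^2 + 4 L^2 / V^2) after migration
        with operator length L."""
        return v * v * math.sqrt(t0 * t0 + 4.0 * L * L / (v * v)) / (8.0 * L * f)

    print(unmigrated_resolution(8000.0, 2.5, 30.0))         # -> ~1154.7 ft
    print(migrated_resolution(8000.0, 2.5, 30.0, 10000.0))  # -> ~94.3 ft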

We have to be sure that we have used the correct migration velocity, or else the migration could have
serious errors.
If we perform 2D migration, the resolution is only improved in the inline direction; the crossline
direction is still at the Fresnel zone resolution.

So 3D migration greatly enhances resolution; if this is not possible, migrate in the direction of
stratigraphic change to try to improve resolution.

The goal is to get the reflection at a stack point to come from the smallest possible area of a reflector, so
that stratigraphic interpretation is well defined and faults are clearly delineated.

Migration aperture

Migration aperture is defined as:

Migration Aperture = depth to reflecting horizon × tan(apparent dip of reflecting horizon)

A table showing lateral resolution and migration aperture follows below.


Let's look at the following two cases, a flat reflector and a dipping reflector, as shown below.


In the case of the flat reflector, the CMP is exactly halfway between the shot and receiver, in the dipping
case the CMP has moved up dip. In order to properly migrate the data to its original position after
shooting and processing, we need to include enough spread at the end of the line.

This additional portion of recording line is computed by computing the aperture as stated earlier.
The aperture has to be added on, at both ends of the line, and has to be planned into the cost.

The aperture in the dip direction is obviously larger than the aperture in the strike direction.


Maximum allowable far offset

The table shows some of the far and near offset computations for the particular given values.


LINEAR SYSTEMS

DEFINITIONS

The dynamic behavior of some circuits is fully described in all circumstances, by either their steady state or
transient responses. In making this statement, we have assumed, that the circuits in question behave
linearly.

A system, whether electrical or otherwise, is said to be linear if and only if both the principles of
superposition and proportionality are satisfied.

The way in which these two properties define a linear system is shown by the following two examples.

Let us suppose that an excitation f(t) applied at the input of a system produces a response r(t), and a
second excitation g(t) produces a response s(t); then the principle of superposition is valid only if the
response to

f(t) + g(t) is r(t) + s(t)


This is shown in the figure below.


The validity of the superposition principle means, that each input produces its own output, regardless of
what other signals may be present.
In other words, there are no interactions among the responses of the different excitations within the system.

System Obeying the Superposition Principle

If the excitation f(t) is multiplied by an arbitrary number a, and g(t) by another arbitrary number b, the
principle of proportionality is valid only when the response to

a*f(t) + b*g(t) is a*r(t) + b*s(t)

The equation above shows that by doubling or halving the excitation, the response is also doubled or
halved. Thus, the output of a linear system is always directly proportional to its input.

We further note that the equation states symbolically the sufficient conditions for linearity.

An important characteristic of linear systems is that a sinusoidal excitation always results in a
sinusoidal response, differing at most in amplitude and phase but not frequency.

Recalling our study of ac circuits, the voltages and currents at any point in a circuit vary sinusoidally at the
same frequency as the source.

It is this property which gives sine waves their importance in circuit theory.

Another way of stating this definition is as follows:

A system is linear if and only if the input and output relation can be described by the following integral
(the convolution integral):

y(t) = ∫ from −∞ to +∞ w(t,τ) x(τ) dτ, where y(t) is the output, x(τ) is the input and w(t,τ) is the system response (weighting function)

170
Advanced seismic processing 171
Clasina TerraScience Consultants International Ltd.

and the principle of superposition is y(t) = Σ (i = 1 to n) xi(t), where y(t) is the response and the xi(t) are the inputs

DIFFERENTIATING AND INTEGRATING CIRCUITS

If a step function is applied across the input terminals of an electrical system and the voltage across its
output terminals varies with time similarly to the curve shown in Figure (a) below, then this system is
called a differentiator.
If the output is similar to the curve shown in Figure (b) below it is called an integrator.
The output pulses shown in Figures (a) and (b) are approximately proportional to the exact differential and
the exact integral, respectively, of the input function.

An idealized step function together with its exact differential and integral are illustrated in the next Figure.
Obviously, none of the three functions shown is physically realizable.
They are just useful mathematical concepts which help to solve a variety of problems related to transient circuit theory.

step function, impulse function, ramp function


ESSENTIAL CONCEPTS
DECIBELS

The decibel (dB) is a unit of the power (or intensity) ratio between two amounts of seismic, acoustic or
electric energy.
The ratio between powers P1 and P2 is defined in decibels as

dB = 10 log10 (P2 / P1)

Thus, the decibel is 1/10 of a bel, the unit named after Alexander Graham Bell.
From the equation it is clear that a scale drawn linearly in decibels is logarithmic in power ratio.

In some instances it is more convenient to use decibels, while in others the power ratio itself is quite satisfactory.
Human senses, such as hearing, touch, and sight, are sensitive to relative changes of intensity, so decibels properly belong in discussions of natural phenomena.


Examples in seismic work where decibels are useful include the inelastic attenuation per cycle of a transmitted longitudinal wave, the relative strength of signal and noise at different stages of data processing, and the gain of a particular stage of a seismic amplifier.

Instead of comparing the quantities in power or intensity, measurements are often easier to make in
voltage, or its equivalent.
In an electrical circuit, the power P developed in a load of impedance Z, across which the voltage is V, is
given by
P = V² / Z

Therefore, provided that the two powers to be compared are developed in the same load Z, the ratio in
decibels can be written
dB = 20 log10 (V2/V1)

where the voltages V1 and V2 are associated respectively with the powers P1 and P2 of the equation above.

In the Figure above, the plot of decibels against equivalent voltage-current and power ratios implies that the voltages or currents of interest are developed across the same impedance.

OCTAVES

In seismic exploration and elsewhere, if frequencies f1 and f2 are related so that

f1 = 2f2, they are said to be an octave apart:

f1 is one octave above f2 and f2 is one octave below f1.

An octave (a ratio of 2:1) is thus the smallest ratio between two different frequencies that can be expressed as an integer.
Octaves are important, because human senses and electromechanical sensors respond to relative changes,
rather than to absolute changes of frequency measured in Hz.
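
To make these definitions concrete, here is a minimal sketch (in Python, with purely hypothetical values) of the decibel and octave computations just described:

```python
import math

def power_db(p2, p1):
    """Power ratio in decibels: dB = 10 log10(P2 / P1)."""
    return 10.0 * math.log10(p2 / p1)

def voltage_db(v2, v1):
    """Voltage ratio in decibels (same load impedance): dB = 20 log10(V2 / V1)."""
    return 20.0 * math.log10(v2 / v1)

def octaves(f2, f1):
    """Number of octaves between two frequencies: log2(f2 / f1)."""
    return math.log2(f2 / f1)

print(power_db(100.0, 1.0))   # 20.0 dB
print(voltage_db(10.0, 1.0))  # 20.0 dB (a factor of 10 in voltage is 100 in power)
print(octaves(60.0, 15.0))    # 2.0 octaves (15 -> 30 -> 60 Hz)
```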


THEORY OF ELECTRICAL FILTERS

Ideal Filter Responses

An ideal linear filter is a device which allows only a certain band, or bands, of the input-signal spectrum to
pass through unaltered and infinitely attenuates all other signal frequencies outside these bands.
The band, or bands, of frequencies which are permitted to pass provide a basis for classifying filters into the four types shown below.

Ideal Filter Responses in the Frequency Domain:


(a) lowpass, (b) highpass, (c) bandpass, (d) band rejection

An ideal low-pass (high-cut) filter passes unaltered all frequencies from 0 up to the cutoff frequency fc and completely rejects all frequencies above fc. The converse is true for the high-pass (low-cut) filter.

On the other hand, an ideal bandpass filter passes unaltered all those frequencies located between f1 and f2
and rejects frequencies on either side of f1 and f2 . The converse statement defines the ideal band-rejection
filter.

To obtain an ideal bandpass or band-rejection response, a filter with an infinite time delay would be required.
This means that an infinitely long time would have to elapse before a signal fed into the filter's input would appear at its output.

From this description, we realize that an ideal filter is just a theoretical model and can never be realized in
practice.


Practical Filter Responses

The response of practical linear filters can differ considerably from the idealized response.
Some of the most significant differences are

• The amplitude throughout the passband is not constant.
• Signal frequencies outside the passband are not completely rejected.
• There is a gradual transition between the pass and rejection bands, rather than a step discontinuity.
• The frequency components of the input signal appear at the filter's output delayed by different amounts; thus, in addition to the amplitude spectrum, we also have a phase spectrum.

A typical filter response in the frequency domain is shown below.

A filter, like any other linear system, can also be described by its time-domain response besides its
frequency-domain response. The former can be obtained in two ways, one of which is by performing the
Fourier synthesis using the amplitude and phase spectra.
A more direct method is to inject a very sharp impulse (impulse function) into the filter while
monitoring its output.

As we already know, such an impulse is composed of an infinitely large number of sinusoidal waves having
equal amplitudes and different frequencies which are all in phase at the time origin t0.

Thus, by injecting an impulse into a filter, we obtain its response at all frequencies.
In other words, the output of the filter is the sum of all the sinusoidal components present at the input
(superposition principle), but with their amplitudes and phases altered by different amounts.
This process is shown below.

Responses of Filters to an Arbitrary input Signal (Convolution)

The output of a filter can be found either analytically or graphically for any input waveform by a process
called convolution.

Any arbitrary input can be considered as being formed by a succession of infinitesimally spaced impulses.
Each impulse, by going through the filter, generates an impulse response of appropriate amplitude and
polarity. The output of the filter is the sum of all these impulse responses.

Symbolically, convolution can be written as


r(t) = f(t) ⊗ h(t)

where

r(t) is the filter's output time function,
f(t) is the filter's input time function,
h(t) is the filter's impulse response,
⊗ stands for "convolved with".
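
A minimal numerical sketch of this operation, assuming a short arbitrary impulse response and a spiky input (all values hypothetical), follows:

```python
import numpy as np

h = np.array([1.0, 0.5, -0.25])   # hypothetical impulse response of the filter
f = np.zeros(50)                  # input: two isolated "reflections"
f[10], f[30] = 1.0, -0.5

# The output is the superposition of a scaled, delayed copy of h for every
# sample of the input: r(t) = f(t) convolved with h(t).
r = np.convolve(f, h)             # full-length result: len(f) + len(h) - 1 samples
```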

An excellent nonmathematical description of convolution is given in N. A. Anstey's article.

The operation of convolution in the time domain is equivalent, in the frequency domain, to multiplying the amplitude spectra of the input signal and the impulse response, and summing their phase spectra.

This should be evident from the explanation given in the previous illustration.

Therefore
R(f) = G(f) * H(f)
and
φ = φ1 + φ2

Where
R(f) is the amplitude spectrum of the filter's output
G(f) is the amplitude spectrum of the input signal
H(f) is the frequency domain amplitude response of the filter
φ is the phase spectrum of the filter's output
φ1 is the phase spectrum of the input signal
φ2 is the phase response of the filter
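
This equivalence can be checked numerically. The sketch below, using arbitrary test signals, verifies that multiplying the complex spectra (which multiplies amplitude spectra and sums phase spectra) reproduces the time-domain convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)          # arbitrary test input
h = np.array([1.0, -0.6, 0.2])       # arbitrary impulse response

n = len(f) + len(h) - 1              # full length of the linear convolution
time_domain = np.convolve(f, h)

# Multiplying the complex spectra multiplies the amplitude spectra and sums
# the phase spectra; the inverse FFT then returns the convolved trace.
freq_domain = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(h, n), n)

assert np.allclose(time_domain, freq_domain)
```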

Deconvolution

The reverse process to the one just described is called deconvolution, or inverse filtering.
It enables us to recover the input signal if both the filter impulse response h(t) and the output signal r(t) are known.
Deconvolution is widely used in seismic work, where it is desired to remove the filtering effect of the
ground, seismometers, and the recording instruments.
An explanation of inverse filtering in the frequency domain is shown next.

Inverse Filtering in the Frequency Domain:


(a) flat spectrum of a unit impulse;
(b) frequency spectrum of (a) after filtering;
(c) amplitude and phase response of the inverse filter required to remove the amplitude and phase distortion in (b);
(d) original flat spectrum recovered by multiplying (or convolving in the time domain) the amplitude spectra (b) and (c) and summing the phase spectra
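
In practice, straight spectral division blows up wherever the filter response is weak. A common remedy, sketched below with a hypothetical stabilization constant eps (a "water level"), is to damp the division; this is one simple recipe, not the only one:

```python
import numpy as np

def inverse_filter(r, h, eps=1e-3):
    """Estimate the input from the output r and impulse response h by
    stabilized spectral division (frequency-domain deconvolution)."""
    n = len(r)
    R = np.fft.rfft(r, n)
    H = np.fft.rfft(h, n)
    # The 'water level' eps keeps the division from amplifying noise at
    # frequencies where |H| is near a null.
    G = R * np.conj(H) / (np.abs(H) ** 2 + eps * np.max(np.abs(H)) ** 2)
    return np.fft.irfft(G, n)
```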


Remarks about the Amplitude and Phase Responses of Filters

In the preceding section, we have shown that the frequency components of a signal which are fed into a filter appear altered both in amplitude and phase at the filter output. As a result, the shape of the filtered signal differs considerably from the original.

If the input consists of a series of pulses, as in the case of a seismic reflection signal, each pulse, or reflection, reaches the output broadened in time and in some cases followed by an oscillating tail.

The former effect makes accurate picking of reflections a difficult task, whereas the latter, often referred to as filter ringing, can obliterate closely spaced reflection events.

To gain an insight into how time broadening and filter ringing are produced, let us analyze separately the
effects of both the filter amplitude and phase spectrum on a broadband input such as a seismic reflection.

We have explained before that the amplitude spectrum shows the relative attenuation which each input frequency component suffers in passing through the filter. This relative attenuation, called amplitude distortion, has a time broadening effect on the signal, since

the width of a pulse is inversely proportional to the bandwidth of its spectrum.

Moreover, when the input signal is band-limited, oscillations appear at frequencies within the passband.
These oscillations are produced by frequency components within the passband which, prior to filtering, were cancelled out by the frequencies in the reject band. This effect becomes more severe as the filter passband becomes narrower.

The phase spectrum determines the relative phase at which the different frequency components of a signal
are passed through the filter. This is equivalent in the time domain to the displacement in time of each
sinusoidal component of the signal.
If this time displacement is the same for each sinusoidal wave,

the phase spectrum is a straight line with an intercept of zero or an integer multiple of π.

Under these circumstances, no change in signal waveform takes place (at most the polarity is reversed).


However, if each sinusoidal component is made to undergo a different time displacement, or more
specifically, if the relative position of the components is altered in any way, a change in waveform results.

This change in the appearance of the signal is known as phase distortion.

The effect of phase distortion on seismic reflections is more harmful than amplitude distortion.
It can drastically change the shape of the filtered signal from the original, especially if it is passed through
two filters which have different pass bands, as is often the case in seismic recording.


Another undesirable effect of phase distortion occurs when the amplitude of the frequency components near cutoff is large. The phase response in this region is usually very steep, dispersing these frequency components in time, which results in severe filter ringing.

Various methods have been devised to remove, or at least minimize, phase distortion on seismic records,
using both analog and digital processes.

From the foregoing, we have seen that filtering of seismic data is undesirable; therefore, why is it used?

Unfortunately, the recorded seismic signal contains not only reflections but also a myriad of unwanted signals, some of which are completely random and others organized, or coherent.

The function of the filter is to attenuate, by frequency discrimination, some of the unwanted signals at the
expense of some distortion in the wanted ones.

When recording seismic data, we are always confronted with the problem of finding the best
compromise between unwanted noise rejection and the resulting deterioration of the reflected events.


The Model in Relation to Simulation of Seismic Data

Seismogram Synthesis

In the strictest sense a synthetic seismogram is the artificial seismic record which would result from seismic
shooting carried out over a given subsurface model (Sheriff, 1973).
The original synthetic seismogram concept involves generating a single seismic trace from a model, based
on velocity log values assuming horizontal bedding and normal incidence and neglecting multiples
(Peterson, et al., 1953). The concept clearly embodies the convolutional model,

s(t) = w(t)*r(t),

that is, a synthetic seismogram s(t) is the result of convolving a wavelet w(t) with a reflectivity model r(t)
(explicitly stated in Sengbush, et al., 1961). The work of Tranter and Kerns also embodies these original
ideas.
Since its original inception, the term “synthetic seismogram” has remained largely reserved for the initial
approach, while the concept of simulating seismic responses has been greatly generalized.

The expanded technology for simulating seismic responses is most commonly referred to as seismic
modelling and may incorporate some or all of the following features:

(1) Improvements of the reflectivity model to include multiples,

(2) Removal of the horizontal bedding restriction,

(3) Removal of the normal incidence restriction,

(4) Allowances for absorption,

(5) Inclusion of wave theory concepts, diffractions, etc., i.e., no longer assuming energy travels only along raypaths according to Snell's law (this feature has allowed more rigorous amplitude accounting and inclusion of diffraction effects),

(6) Inclusion of better estimates or direct measurements of the seismic wavelet w(t),

(7) Allowances for converted waves and rock properties.
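
To make the original convolutional recipe s(t) = w(t)*r(t) concrete, the following minimal sketch builds a one-dimensional synthetic seismogram by convolving a hypothetical reflectivity series with a zero-phase Ricker wavelet (a common, though by no means the only, modelling choice):

```python
import numpy as np

def ricker(f_peak, dt, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f_peak (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.002                                   # 2 ms sampling interval
r = np.zeros(501)                            # one second of reflectivity
r[100], r[180], r[300] = 0.10, -0.05, 0.08   # hypothetical reflection coefficients

w = ricker(30.0, dt)                         # 30 Hz wavelet
s = np.convolve(r, w, mode="same")           # s(t) = w(t) * r(t)
```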

Both analog and digital methods were used in synthetic seismogram generation (Dennison, 1960). Since the synthetic seismogram concept predated digital seismic recording, a variety of analog computing schemes were devised (e.g., Peterson, et al., 1953; Woods, 1956; Anstey, 1960; Bois, et al., 1960; Sherwood, 1962; Silverman, et al., 1963).
With the advent of digital techniques, however, synthetic seismogram generation is almost exclusively by
numerical calculation with general purpose computers, and the special purpose analog devices are now of
historic interest only.


Continuous velocity logs were not generally run in wells in earlier days, and in fact are not always run
today, nor run with geophysical needs in mind. Thus, reflectivity models sometimes begin with creation of
a "synthetic" velocity log from electrical log or other data using an empirical relationship to velocity
(Peterson, 1954).

Faust's equation V = K·P^(1/6) (Faust, 1953) has been used for the relationship of velocity, V, to electrical resistivity, P, K being an empirical constant.
However, in regional studies superior results have generally been obtained by using empirical relationships
determined locally.

It was recognized from the beginning that reflections originated where changes occurred in the acoustic
impedance, the product of velocity and density rather than where changes occurred in velocity only.
However, density information was not generally available, and in fact, is often not available even today, so
density is sometimes "synthesized" from other data using an empirical relationship (Gardner, et al., 1974).
The relationship ρ = 0.23·V^(1/4) between density ρ and velocity V is sometimes used for this purpose; it can be incorporated in the reflectivity model by simply replacing ρV with V^(5/4).
However, the use of this general relationship does not seem to improve results significantly, since it is basically just a scaling of the reflectivity.
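
A minimal sketch of this density synthesis, using Gardner's rule with hypothetical interval velocities in ft/s, follows:

```python
import numpy as np

def gardner_density(v_ft_per_s):
    """Gardner's empirical rule: density (g/cc) = 0.23 * V^(1/4), V in ft/s."""
    return 0.23 * v_ft_per_s ** 0.25

v = np.array([8000.0, 10000.0, 12000.0])   # hypothetical interval velocities
rho = gardner_density(v)
z = rho * v                                # acoustic impedance rho*V
# Under Gardner's rule z is proportional to V^(5/4), as noted in the text.
```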

Better results are sometimes achieved by the use of empirical relationships which recognize the local
lithology.

Even where continuous logs are run, the velocity information is usually incomplete, and may require
extensive editing to replace bad data segments. Velocity logs are often regarded as a fundamental tool for
determining porosity in connection with evaluating the production capability of a well and sometimes the
logs are only run over portions which are assessed as having production potential.
In particular, the upper portion of a borehole, the portion covered by the first casing string, is usually not
logged.
Major velocity variations which are important in generating multiples often occur within this region, so
that multiples may not be calculated correctly. The base of the weathering is often the dominant generator
of multiples.

Gas, even in minor amounts, in the shallow section also may give rise to large reflection coefficients and
hence be important in multiple generation.

The velocity log is usually integrated to give arrival time information. Check shots may not be available
and the sonic log may involve small systematic errors which accumulate in the integration. Hence, there
may be error in calculating the arrival time of an event even though the prediction of the wave shape may
be good. Discrepancies between the arrival time of an event on a synthetic seismogram and that observed
on a seismic record can also be produced by filter delays in recording and/or processing, by the use of
different reference datums or for other reasons. The filtering on the seismic record and on the synthetic
seismogram may be different so that one may be inverted with respect to the other or may have some other
phase shift.

Early reflectivity models did not generally include multiples although some of the analog devices did
include the effects of multiples.
Recognition of the importance of the contribution of multiples to seismic records developed at the same
time as early synthetic seismogram development


(Wuenschel, 1960; Bois, et al, 1960; Baranov and Kunetz, 1958, 1959, 1960; Delaplanche and Ledoux,
1960; Darby and Neidell, 1966).

One of the first applications of synthetic seismograms was to identify which events were primaries and
which were not. In the detailed comparison of field seismograms with synthetics, the multiple free
synthetic is still normally used.

For the convolutional model, two very significant findings from the study of multiples should be noted.

First, intrabed multiples have a broadening effect on the transmitted wavelet which is similar to that of constant-Q absorption (O'Doherty and Anstey, 1971; Schoenberger and Levin, 1974). Both effects can be lumped into a constant-Q effect on the seismic wavelet.

Second, in an elastic plane layered medium, the effect of transmission on the wavelet can be treated exactly
as an autoregressive process. This fits the spiking deconvolution model (Robinson, 1967).

Moving Beyond the One-dimensional Synthetic Seismogram

The removal of the horizontal bedding restriction was the step which began to change the nomenclature from "synthetic seismogram creation" to "ray-path modelling". This step was usually accompanied by a simplification of the models to relatively few layers. Ray paths were then traced through these layers so as to reflect at normal incidence. This was usually accomplished by tracing rays which started at a reflector, normal to the reflector, observing where they emerged at the surface, and then interpolating for whatever surface trace spacing was desired.
Usually, velocity was taken as constant within the interval between reflectors, except possibly for allowing a
continuous velocity gradient in the upper layer.
Subsequently, the velocity layers were no longer constrained to follow the bedding, a much more realistic
assumption since velocity is usually a function of depth of burial as well as lithology and geologic history.
Usually, the number of layers has been kept to a reasonably small number (See, for example, Taner, et al.,
1970).

Interest in stacking velocity calculations which grew out of common depth-point techniques led to the
desire to ray trace for offset traces. Early modelling techniques usually assumed that the geophone and
shotpoint were at the same location, that is, zero offset.
However, arrival times are required for a number of different values of shotpoint to geophone distance in
order to calculate stacking velocity. This calculation was usually carried out in an iterative manner -
reflection angles being assumed and then rays traced back to the surface to see where shots and geophones
had to be located for such reflecting conditions, then changing the angle or the reflecting point and ray
tracing again, and so on until the shot and geophone locations came within whatever limits were considered
as tolerable errors.

The arrival times determined by such a procedure were then taken along with reflection coefficient values
deemed appropriate to give the reflectivity function which could then be convolved with the wavelet to give
a synthetic seismogram.

The earliest synthetic seismograms occasionally allowed for a time-dependent decrease of energy, intended to account for spherical divergence.


However, the synthetic seismograms were to be compared with actual records which often had been
subjected to automatic gain control, so that allowing for attenuation was often counter-productive.
While some amplitude studies were made in the intervening years (Trorey, 1962; Bois and Hemon, 1963),
it was the bright-spot enthusiasm of the early 1970's that really provoked attention on maintaining proper
amplitude relationships.
The approach to amplitude was of two kinds:
to lump all amplitude factors together into an average, empirical attenuation constant,
or to work out attenuation for identifiable factors (O'Doherty and Anstey, 1971) one at a time.

The first amplitude factor to be corrected for was spherical divergence, sometimes allowing for spreading
because of curved ray paths in the presence of a velocity gradient as well as spreading because of distance
(Newman, 1973). Transmission energy losses were usually neglected, or accounted for on an average
empirical basis rather than being calculated.

The most recent trend is toward full wave-theory effects. These are generally based either on Kirchhoff diffraction theory (Hilterman, 1970) or on numerical solutions of the wave equation by finite differences (Kelly, et al., 1976).

These methods recognize that an entire region of a reflecting interface (the first Fresnel zone) is involved in reflection, rather than simply a reflecting point.

Whereas synthetic seismograms provide a means for calculating a seismic trace from well data, a "seismic
log" is an attempt to calculate the equivalent of a well log from the seismic trace. For the discrete reflector
case,

ρ2V2 = ρ1V1 (1+R)/(1−R), where R is the reflection coefficient.

This expression can be used to calculate the acoustic impedance below a reflector in terms of the acoustic
impedance above the reflector. If the complete sequence of reflection coefficients can be determined, then
the acoustic impedance of the nth interface can be determined in terms of the shallowest impedance:

ρnVn = ρ0V0 ∏(i=1 to n) (1+Ri)/(1−Ri)
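
A minimal sketch of this recursive impedance recovery, with hypothetical reflection coefficients and a hypothetical starting impedance, follows:

```python
import numpy as np

def impedance_from_reflectivity(r, z0):
    """Recursive inversion: Z(i+1) = Z(i) * (1 + R(i)) / (1 - R(i)),
    starting from the shallowest impedance z0."""
    z = [z0]
    for ri in r:
        z.append(z[-1] * (1.0 + ri) / (1.0 - ri))
    return np.array(z)

r = np.array([0.05, -0.02, 0.10])               # hypothetical reflection coefficients
z = impedance_from_reflectivity(r, z0=4.0e6)    # z0 in (kg/m^3)*(m/s), hypothetical
```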

The goal of the seismic log process usually is a synthetic sonic log rather than an acoustic-impedance log.
An empirical relationship between velocity and density, such as referred to earlier in this section, permits
this.
More often it is simply assumed that the density does not vary.

The inversion process assumes that the seismic trace is a good approximation of a reflection coefficient log;
only primary reflection energy must be present and it must be present in proper proportions.

Non-reflection energy, including multiples, must have been removed, amplitude must have been preserved faithfully, and the equivalent wavelet must have been reduced to an impulse.


Since these cannot be done completely, the process is at best an approximation and produces a noisy,
filtered version of impedance.

However, in some instances seismic logs can be produced which are reasonable approximations to filtered
velocity logs.

The bandwidth of seismic data is narrower and the frequencies involved are appreciably lower than those
of a sonic log, so attempts to create seismic logs which are the equivalent of sonic logs are up against
fundamental limitations. Very low frequency information is especially missing so absolute velocity values
cannot be recovered from the observed reflectivity.

Two solutions to this problem are to settle for a log of changes in velocity without attributing any absolute
significance to values, or to use other constraints (e.g. normal moveout) to give absolute velocity values.

The Convolutional Model

(Overview taken from notes given by M. Backus, N. S. Neidell and R. E. Sheriff)

The convolutional model has been important throughout the history of the seismic method.

The emergence of direct hydrocarbon detection, porosity mapping, stratigraphic mapping, and lithology and fracture mapping during the past two decades has dramatically increased the importance of understanding and controlling seismic amplitude and waveform.

The one-dimensional recovery of noisy filtered impedance logs from reflectivity estimates (Lindseth, 1972;
Lavergne, 1975, 1977) has increased the interest. During the same period, our capability for modelling and
understanding the reflectivity for three-dimensional subsurface models has rapidly developed (Trorey,
1970; Claerbout, 1976; Hilterman, 1970, 1975).

In the convolutional model, we treat the reflection seismic signal, S(t), as the convolution of a seismic
wavelet, w(t), with the subsurface reflectivity, r(t).

The seismic wavelet, w(t), is the waveform which would be recorded with our actual seismic system for the
reflection from a single plane reflecting boundary in the subsurface.
The reflectivity, r(t), represents the idealized noise free seismogram we would record from the actual
subsurface if the seismic wavelet were a perfect spike, or impulse.

SEISMIC TRACE = SIGNAL + NOISE

g(t) = s(t) + n(t)

SIGNAL = WAVELET * REFLECTIVITY

s(t) = w(t) * r(t)

The recorded seismic trace, g(t), is treated as the sum of the seismic signal, w(t)*r(t), plus additive noise,
n(t).
The seismic trace is thus treated as a noisy, filtered version of the subsurface reflectivity. The seismic
wavelet is the impulse response of the filter.


The Reflectivity

If the subsurface consists of plane parallel homogeneous layers, we can view the reflection seismic signal as
the superposition of many wavelets, each having the waveform w(t), but with strengths and arrival times
corresponding to the multitude of boundaries encountered in the subsurface.

s(t) = Σi Ri w(t − τi)

The Ri correspond to the normal incidence reflection coefficients.
The τi correspond to the two-way travel times to the reflecting boundaries.
In this case, the reflectivity is:

r(t) = Σi Ri δ(t − τi)

where

δ(t) = 0 for t ≠ 0
δ(t) = ∞ for t = 0
∫−∞∞ δ(t) dt = 1

Recognizing the existence of continuous variations in acoustic impedance, we can express the reflectivity in
terms of the time derivative of the logarithm of impedance versus two-way travel-time (Peterson, 1955).


r(t) ≈ 1/2 d/dt ln ρV(t)

The time derivative (or integral) involved in the relationship between impedance and reflectivity is a linear
operation which may also be treated as a filter.
For the one-dimensional case, the seismic signal may be viewed as a noisy filtered version of either the
reflectivity, or the impedance versus two-way travel time.
In this formulation, internal multiple reflections and transmission losses at the reflecting boundaries are neglected. In the context of the convolutional model, this approximation is generally valid (shallow bright spots provide a moderate exception).
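
As a numerical check of the relation above, the sketch below compares the exact interface formula R = (Z2 − Z1)/(Z2 + Z1) with the logarithmic-derivative form for a hypothetical blocky impedance log:

```python
import numpy as np

# Hypothetical blocky acoustic impedance log versus two-way time.
z = np.concatenate([np.full(50, 4.0e6), np.full(50, 5.0e6), np.full(50, 4.5e6)])

# Exact interface formula: R = (Z2 - Z1) / (Z2 + Z1).
r_interface = (z[1:] - z[:-1]) / (z[1:] + z[:-1])

# Logarithmic-derivative approximation, per sample: r ~ (1/2) d(ln Z).
r_logderiv = 0.5 * np.diff(np.log(z))

# For small impedance contrasts the two agree closely.
assert np.allclose(r_interface, r_logderiv, atol=1e-3)
```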

The one-dimensional nature of the model represents a far more significant qualification. In the presence of
significant curvature of the bedding planes, or variations in impedance parallel to the bedding planes, a
three-dimensional description of impedance must be used to calculate the reflectivity (Trorey, 1970;
Hilterman, 1970, 1975).
The valid use of the normal incidence reflection coefficient is limited to modest (up to 15-20 degree) angles
of incidence, and correspondingly modest source receiver offsets.
If the time duration of the r(t) of interest is long, the time variant effects of wavefront divergence and
frequency dependent attenuation may significantly modify the reflectivity from the one-dimensional
approximation.
In spite of all these approximations, the one-dimensional model has significant application to properly processed seismic data.

Modern methods of migration or downward continuation of seismic data (Claerbout, 1976; Schneider,
1976) can bring even three dimensionally complex data to a state where the one-dimensional formulation is
useful.

The Seismic Wavelet

The waveform recorded from a simple plane reflector can itself be viewed as the convolution of many
individual wavelets, or as the impulse response of a cascade of linear system elements describing the seismic
reflection process. Thus, the seismic wavelet for the trace recorded in the field may be regarded as the
result of filtering the pressure pulse produced by the seismic source with a series of filters describing the
effects listed in the Figure below.

The digital filtering applied to the field trace comprises one more filter, or convolutional link, which is
applied for the benefit of the interpreter.
We will distinguish between the seismic field wavelet, wf(t), which serves in the model of the field
seismogram, and the interpretive wavelet, wi (t), which serves in the interpreter's convolutional model.


The components of the seismic field wavelet can be characterized in several significant aspects.

a) Lateral variation: Variations in the wavelet from trace to trace are especially objectionable, since
we are basically trying to detect and map lateral variations in the subsurface. On land, the source
waveform and the effect of the near surface layer can be quite variable. Ghosting with explosive sources is
also a problem. The recording system is the only invariant component.

In deep water (beyond the continental shelf) marine work, only the transmission effects are variable,
assuming the cable depth and source performance are maintained constant. On the shelf, the effect of
reverberations within the water layer is so objectionable that in many areas exploration was ineffective
prior to the introduction of processing compensation and CDP stack.

b) Knowledge of the component: The difficulty of determining the response of each component correlates to some extent with its lateral variability. The response of the recording system is readily available.
The marine ghosting effect can be established, if the source and receiver depth are known.
Ideally, the marine source pulse needs to be determined only once for a given source configuration.
Determination of the response of the laterally variable components is the most significant problem for the
successful application of the convolutional model.

c) Compensation difficulty and desirability: If we know the transfer function of the field wavelet
component, we can construct a digital filter, a(t), to compensate for that transfer function, within limits.
We are concerned with the following wavelet properties.

Does the response have nulls or near-nulls in its amplitude-versus-frequency transfer function? If so, we cannot recover useful information in those near-null frequency bands.
The marine ghosting effect produces nulls within the band of interest.
The geophone and source pulse spectra are zero at zero frequency. The effect of transmission is to reduce
high frequencies to a level far below the noise, and represents the fundamental limit on the resolution of the
seismic process. The amplifier response cuts out the low and high frequencies where recoverable
information is presumed to be unavailable.


In the time-sampled representation of the wavelet component, can the wavelet be adequately represented as
an auto-regressive process (all-pole filter in Z transform)? If a moving average component is required (all-
zero filter in Z transform), does it contain non-minimum phase elements? We shall return to these
questions in the discussion of processing.

d) Validity of the convolutional model: The shot pulse, geophone, and recording system can
accurately be characterized as linear time-invariant filters acting on the subsurface reflectivity. They must
be dealt with prior to the application of time-variant processes such as spherical divergence correction and
normal-moveout. The ghosting effect, near surface layer effect, and the convolutional effect of source and
receiver arrays are all dependent on the emergent angle of the reflected wavefronts, so the single trace
convolutional model is an approximation for these elements. Rapid lateral variations in the near surface
produce a significant degradation of the approximation. Transmission effects are clearly time variant. The
comprehensive single trace convolutional model is most useful when applied to a limited vertical section of
the subsurface for which the information arrives at the surface in waves with a limited emergent angle
range.


There are three basic ways in which the convolutional model is important.

1) In Data Acquisition - Control of the seismic wavelet produced in the data acquisition process
determines the ultimate quality of the interpreters' trace. The ratio of signal to noise as a function
of frequency is the primary consideration in the field.

2) In Data Processing - If the field seismic wavelet, wf(t), is known, then the data can be digitally
filtered to produce an interpreters' trace with an improved wavelet, wi(t).
The interpreter would generally like a seismic section in which the wavelet is of minimum duration
compatible with the suppression of additive noise. He would like a record section in which visible
lateral variations in amplitude and waveform are due to lateral variations in the subsurface layers,
rather than lateral variations in the wavelet or additive noise. He may wish to employ an interpretive wavelet which estimates impedance rather than reflectivity, to facilitate his tie to well data.

3) In Interpretation - We can shorten the seismic wavelet, but we can never make it as short as we
would like. The subsurface region of interest must be interpreted on the basis of data in which the
diagnostic information has been smeared by the wavelet. Subsurface properties may be postulated,
and the corresponding reflectivity calculated. Convolution with the interpretive wavelet then
provides the interpreter with a synthetic seismic trace or section which may be directly compared
with the observational data.

In the application of the model for wavelet improvement and modelling four major issues arise.

• How valid is the model?
• How do we estimate the field wavelet?
• How do we decide on the desired wavelet?
• How do we design and implement the linear filter a(t)?

We are now interested in attempting to apply the concepts of the convolutional model.

• First we will attempt to set the stage with a review of the approaches and assumptions which
have been utilized in applying the model. After some further discussion of the basic concepts
and history, we will examine the noise free model, in which we can apply a linear filter to the
reflection trace to recover a good estimate of seismic impedance versus travel time.
• Second, we consider the noisy case in which the seismic field wavelet is known, and we wish to
specify the characteristics of the shaping filter, a(t), to recover a useful band-limited estimate of
seismic impedance.
• Third, the approaches to implementation of the desired result on the digital computer are
discussed. In particular, the least mean square error approach to digital shaping filter design
will be reviewed.
• Fourth, the real heart of the problem, the estimation of the actual field seismic wavelet is
discussed. The most widely used approaches of spiking deconvolution and "predictive
deconvolution" are reviewed.
• Fifth, we will step briefly beyond the convolutional model to look at the use of normal moveout
to fill in the missing low frequency band in our impedance logs, and the use of non-linear processing to apparently increase resolution. A brief review of modelling and the synthetic
seismogram will complete this introductory discussion.

(The following excerpts are from notes by M. A. Backus, N. S. Neidell, R. E. Sheriff, W. A. Schneider, F. J. Hilterman, A. Alam)

- SOME HISTORY
- NOISE FREE CONVOLUTIONAL MODEL
- BAND-LIMITED IMPEDANCE RECOVERY AND INTERPRETIVE WAVELET DESIGN
- DIGITAL WIENER FILTERING & DECONVOLUTION
- FIELD WAVELET ESTIMATION
- EXTENDING THE BANDWIDTH
- MODELLING

Norman Ricker (1940, 1953) described many of the basic concepts of the convolutional model in a series of
classic papers.

The figure shows an early comparison of a "synthetic seismogram" with a field seismogram. The "Ricker wavelet" was used, together with the reflectivity, to synthesize the expected seismic trace for an intermediate velocity layer. Ricker's modelling studies included the reflection from a wedge, in which he points out that a very thin bed produces an event which is the time derivative of the wavelet, and that bed thickness variations must be followed through a study of the change in reflection waveform.
This thin bed concept is currently used as a basis for estimation of reservoir thickness in direct hydrocarbon detection (Neidell, 1977), and is illustrated below.
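
A minimal sketch of the thin-bed effect, using a hypothetical 30 Hz Ricker wavelet and equal-and-opposite reflection coefficients at the top and base of the bed, follows:

```python
import numpy as np

dt = 0.002
t = np.arange(-0.064, 0.064, dt)
a = (np.pi * 30.0 * t) ** 2
w = (1.0 - 2.0 * a) * np.exp(-a)            # 30 Hz zero-phase Ricker wavelet

def thin_bed_response(thickness_s, R=0.1):
    """Reflectivity with +R at the top and -R at the base of a thin layer."""
    r = np.zeros(250)
    top = 100
    r[top] = R
    r[top + max(1, round(thickness_s / dt))] = -R
    return np.convolve(r, w, mode="same")

# For beds much thinner than the wavelet, the composite event approaches the
# time derivative of the wavelet (up to a scale factor set by R and thickness).
thin = thin_bed_response(0.004)             # a 4 ms "thick" bed
deriv = np.gradient(w, dt)                  # compare shapes
```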


Ricker's seismic wavelet was intended to include a simple explosive source waveform and the loss of high
frequencies during transmission through the earth.

It was based on theory in the frequency domain, and looks as follows

W(f) = (f²/f1²) exp(−f²/f1²)

Ricker assumed a "perfect" recording instrument, with a flat amplitude versus frequency response and a
linear change in phase with frequency.
The theoretically based "Ricker wavelet" has not proven to be a good field wavelet representation, but is
widely used in modelling and is sometimes employed as an interpretive wavelet.

The modern version of the one-dimensional synthetic seismogram was introduced by R. A. Peterson, W.R.
Fillipone, and F. B. Coker (1955), when "ground truth" became available through the development of the
SONIC log.


From the SONIC log and some estimate of density, they generated a plot of acoustic impedance versus two-
way travel time. An analogue electrical signal proportional to the logarithm of acoustic impedance versus
travel time was reproduced by optical means.

The signal was then passed through a differentiating circuit, followed by a sequence of analog filters
simulating the convolutional elements which make up the complete seismic field wavelet, wf(t).
The example comparison between synthetic seismogram and field seismogram above is at least
encouraging.
Considering the marginal signal-to-noise ratio of the field seismogram, and considering the fact that the
seismic wavelet was synthesized by the "cut and try" adjustment of analogue filters, the results were quite
reasonable.


Most one-dimensional modelling today is carried out with a digital implementation of the approach developed by Peterson, et al.

2. Improving the wavelet - the deterministic approach

The reflection system objective and system requirements were qualitatively the same in the past as they are
today.

Initially these system requirements had to be met in the field. Special care was taken to attempt to provide simultaneously an adequate signal-to-noise ratio, a laterally invariant wavelet, and a wavelet of short duration. Over the pass band of interest, a flat amplitude response and a linear phase response were sought in the recording amplifier to avoid "distortion" of the data acquired in the field.

The notion of applying a "non-ideal” filter to the data to improve the wavelet rather than preserving it was
also put forth by Ricker (1953).


Given his theoretical description of the seismic wavelet, he devised an analogue system with a response such
that the wavelet would be changed into a shorter wavelet with the same shape on a compressed time scale.
To handle the major wavelet component addressed by Ricker, the inelastic attenuation problem, his
scheme would have many advantages over the currently employed time-variant deconvolution.

Today, however, the wavelet model would be modified to treat the transmission effect as a minimum phase wavelet with an exponential decay with frequency (rather than with the square of frequency), in accordance with the current general view on attenuation (Wuenschel, 1965; Hamilton, 1976; O'Doherty and Anstey, 1971).

The marine "singing" problem was incorporated into the convolutional model by Backus (1959).
The first Figure below illustrates the convolutional model of the marine seismogram.
The second Figure illustrates the dominating effect which the reverberations can have on shaping the field
seismogram.


With magnetic recording available, it was possible to use an analogue time domain filter to "remove" the effect of reverberations from the wavelet. The approach did some good, despite the fact that the linear model had usually been violated by AGC, and in spite of the fact that the ocean bottom is often not well approximated as a simple interface.
However, because the reverberation process can be characterized as an autoregressive process, its inversion is a particularly simple, stable and robust process, and is an important current application of "predictive deconvolution".

Lindsey (1960) provided an analogue implementation to invert the ghost filter (on land). This problem is basically the inverse of the reverberation problem.


In each case, the approach was semi-deterministic. The field wavelet component was cast in a particular
form on theoretical grounds.
For actual field data, only a few parameters had to be estimated to completely specify the wavelet
component transfer function, and the appropriate processing to improve the wavelet.
The two problems are illustrated below.

In each case, the linear process representing the wavelet filter action and its inverse is shown. The reverberation wavelet and the de-ghosting operator both have the same form, and are of infinite time duration.

If R is large (e.g., R = 1 in marine ghosting), the filter rings severely.

When applied as a de-ghosting filter, it has a corresponding effect on seismic noise. On the other hand, the ghosting wavelet and the de-reverberation filter are both simple two-point operators. The effect of ghosting on a record is undesirable but not disastrous. A de-reverberation filter produces only a ghost of the noise.

3. Robinson and the statistical approach

In the 1950's, the Geophysical Analysis Group at MIT brought two powerful tools to bear on the problem;
the flexibility of the digital computer, and the concepts of statistical communications theory and Norbert
Wiener's approach to time series analysis.


Robinson (1957) applied these tools to the problem of estimating the complete seismic wavelet directly from the reflection seismogram. The approach is illustrated by Robinson in the Figure below (Robinson, 1954, 1967).
The observed seismic trace (A) is assumed to be the convolution of an unknown seismic wavelet (D) with
the unknown subsurface reflectivity (G).

If we assume that the reflection sequence is a completely random sequence, with a white spectrum, and that
the seismic trace is noise-free, then the autocorrelation of the observed seismic trace (B) provides an
estimate of the autocorrelation of the seismic wavelet.

If we further assume that the seismic wavelet can be adequately described as an autoregressive process,
then the wavelet can be calculated from a short portion of its autocorrelation function.

The Figure below (from Robinson, 1957) shows a land-data wavelet estimate made by Robinson using this approach. It looks quite different from the simple symmetric Ricker wavelet.


A digital filter to accomplish "wavelet contraction" to a spike, based on the statistical estimate of the seismic wavelet from the seismic trace, is shown on the left; it is obtained directly from the trace autocorrelation function.

The basis for current processing techniques called “deconvolution, predictive decomposition, or prediction
error filtering" was thus provided.
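
A minimal sketch of such an autocorrelation-based spiking deconvolution, assuming a white reflectivity and a roughly minimum-phase wavelet (and assuming SciPy is available; all parameter choices here are illustrative), follows:

```python
import numpy as np
from scipy.linalg import toeplitz

def spiking_decon(trace, n_ops=40, prewhiten=0.01):
    """Wiener spiking deconvolution designed from the trace autocorrelation,
    assuming a white reflectivity and a minimum-phase wavelet."""
    # One-sided autocorrelation of the trace, first n_ops lags.
    ac = np.correlate(trace, trace, mode="full")[len(trace) - 1:][:n_ops]
    ac = ac.astype(float)
    ac[0] *= 1.0 + prewhiten          # prewhitening stabilizes the solution
    # Solve the normal (Toeplitz) equations for a spike desired output.
    d = np.zeros(n_ops)
    d[0] = ac[0]
    a = np.linalg.solve(toeplitz(ac), d)
    return np.convolve(trace, a)[:len(trace)]

# Hypothetical usage: a decaying (roughly minimum-phase) wavelet in random reflectivity.
rng = np.random.default_rng(1)
trace = np.convolve(rng.standard_normal(500), np.array([1.0, 0.8, 0.4, 0.1]))
spiked = spiking_decon(trace)
```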

4. Digital Processing in the 1960's

In the early 1960's the digital recording and processing of seismic data was introduced as a production tool
in reflection exploration.
The flexibility of digital computers permitted the use of an arbitrary processing filter a(t). The autocorrelation based deconvolution approach described by Robinson was especially well suited for the water reverberation problem, since reverberation is an autoregressive process which often dominates the autocorrelation function of the seismic trace. A typical production application from the 1960's is illustrated below.


Deconvolution applied to land data provided good results for stratigraphic exploration under favorable circumstances. In the next Figure (Godfrey, et al., 1968) an example from Canada with a land surface source is shown. The field data may be compared with the synthetic seismogram, which was prepared from well data by the digital implementation of Peterson's scheme. A simple 20-65 Hz band pass digital filter was used as the interpretive wavelet, to correspond to the filter applied to the field data after deconvolution. The agreement is not perfect, but is adequate to suggest the general validity of the model.


The next Figure shows a second example, this time with dynamite. Velocity filtering (Embree, et al, 1963)
was applied to the field data to improve the signal-to-noise ratio.
De-ghosting (Schneider, et al., 1964) was accomplished by multi-channel filtering of two shots at different
hole depths. The ghost-free, high signal-to-noise ratio seismic field trace was then treated by
autocorrelation based spiking deconvolution, and then convolved with the same interpretive wavelet as that
used on the synthetic. The agreement is relatively good, with significant exceptions in some zones. Careful
inspection also shows that the time interval between the two marked events is about 10 milliseconds longer
on the synthetic trace than on the field trace. It is inferred that the SONIC log included some zones of
anomalously low velocity.


A trace labelled "cross-equalized" is also shown.

When we actually have both the reflectivity, r(t), and the seismic trace, g(t), available, we can calculate a least mean square error estimate of the field seismic wavelet, without assumptions other than the validity of the convolutional model, by making use of the cross-correlation between g(t) and r(t).

In this example, the independent deconvolution of the field trace appears to provide better waveform
agreement with the synthetic than the "optimum filtering" which was designed using the available
synthetic. Evidently the reason is that the field trace and synthetic are not well related by a linear filter.

The relative time base shift is the primary reason; a time variant gain difference is a secondary reason.

The time base shift does not significantly affect the autocorrelation of the field trace, and thus did not
degrade the wavelet shaping performance of the spiking deconvolution.

In all of the three examples shown, when the implementation of the shaping filter, a(t), was accomplished
by the use of "spiking deconvolution" followed by a band-pass filter, a reasonably good result was
obtained.
In the two land examples, where stratigraphic mapping quality results were obtained, there was no
ghosting, the seismic wavelet was evidently well approximated by an autoregressive process within the
frequency band of interest, the convolutional model was valid over a good part of the seismogram, and the
signal-to-noise ratio was very high.

In the marine example, the principal objective was reverberation reduction, and the signal-to-noise ratio
was high.


Because of problems with spiking deconvolution application to marine ghosts, a comparison would
probably be poor relative to the land examples.

Deconvolution based on the trace autocorrelation function continues to be widely applied. It is a reasonably robust process and often improves the data for the interpreter even when the assumptions required to make it a valid signal wavelet contraction process break down significantly; it must then be regarded strictly by its other name, prediction error filtering.

Time variant deconvolution in which the model is treated as being time invariant only over a limited record
time interval is widely used. There is some justification for this approach because of the time variant
nature of the transmission effect (hence the field wavelet) and the time variant nature of the signal-to-noise
spectrum (hence the desired interpretive wavelet).
In addition, when deconvolution is applied to CDP stacked data, the processes of spherical divergence cor-
rection, normal moveout, and stack, all introduce time variation of the wavelet.
Predictive deconvolution (or gapped deconvolution) is another widely used modification, particularly in
marine work, and is intended to limit the filtering action to reverberation reduction.

5. Bright spots, seismic stratigraphy, migration, and the convolutional model

Discontinuities in impedance in the subsurface occur at old depositional surfaces, fault contacts, and fluid
contacts. The seismic reflection record section with AGC (automatic gain control) highlights the spatial
configuration of these impedance discontinuities, thus providing a very fuzzy picture of subsurface
structure and depositional patterns. The value of the stratigraphic information (Vail, 1977) in this context
has led to increased demands for resolution.

Lateral variations in impedance parallel to the bedding planes are primarily due to lateral changes in lithology, porosity, and fluid content:

Rock Change                               Typical Impedance Change (%)
Sand to shale                             2 – 20%
Decrease in porosity from 15% to 10%      14%
Reservoir fluid change, gas to brine      5 – 100%
Reservoir fluid change, oil to brine      1 – 20%

On either a true amplitude seismic record section, or on a derived impedance log display, lateral (parallel
to bedding) changes in impedance can be viewed.
Given the structural and depositional context, lateral changes in lithology, porosity, and fluid content can
be inferred.

If we could produce a three-dimensional map of subsurface impedance, with an accuracy of 1%, and a
spatial resolution of the order of one meter, we could probably locate and evaluate all hydrocarbon
reserves of economic interest.


We are a long way from achieving the desired accuracy and resolution, and it is unlikely that it will ever be
fully achieved with an economically feasible system. However, the emergence of direct hydrocarbon
detection in the late 1960's and early 1970's, and developments in porosity mapping and seismic
stratigraphy, have dramatically increased the value of understanding, controlling, and sharpening the
seismic wavelet.

An example of some bright spots is shown in the following Figure.

Traces are at 100 meter horizontal spacing. This is a variable area display in which peaks are colored
dark, and troughs have been inverted and colored light in order to show the full waveform in variable
area. The seismic interpretive wavelet is band-limited to 20-60 Hz, and is of unknown phase.
These data were subjected to spiking deconvolution before stack, and were band-pass filtered with a zero
phase 20-60 Hz wavelet after normal moveout correction and CDP stack. The effective wavelet duration is
of the order of 30 milliseconds. In this young clastic sequence, there are usually five to ten significant
reflecting interfaces within a 30 millisecond period, plus a myriad of other weaker reflectors. On the left six traces, the detailed wave shapes are essentially identical from trace to trace, suggesting that we have a
laterally stable seismic wavelet, a noise level below the limit of visual perception, and in this region of the
subsurface, a subsurface reflectivity which is laterally invariant except for dip. Specific peaks and troughs
cannot in general be attributed to specific reflecting interfaces, but are unresolved interference patterns
from a number of conformable interfaces. In the eighth and ninth traces from the left, the lateral in-
variance continues, except for a large amplitude increase in the 40 millisecond duration event indicated by
the lower arrow.
In the well, this event ties to an eleven millisecond (54 ft) "thick" gas sand. In this example, our "signal" is the lateral change in amplitude and waveform caused by the change from brine saturation to gas saturation in the thin layer.
The thin layer anomaly produced by the replacement of brine with gas has an equivalent reflectivity at top
and base of about |R| ≈ 0.1

At about 1.62 seconds at the well location, a 2.3 millisecond (12 ft) "thick" gas sand can be marginally
detected by a slight increase in amplitude beginning about four traces to the left of the well. Because of the
low resolution wavelet, our thin layer anomaly for this bed is only about one-fourth as strong as for the
deeper reservoir.

The upper arrow points to a 30 ms "thick" (140 ft) sand reservoir which is partially filled with gas at the
well. In this case, our resolution is sufficient to show the gas-water contact as a separate unconformable
reflector.

When we are interested in mapping lateral changes in impedance in a particular layer, whether it be to
follow changes in lithology, porosity, or fluid content, the threshold of our mapping capability is extremely
dependent on maintenance of a low additive noise level, lateral wavelet stability, and a wavelet with
maximum resolution compatible with the first two requirements.
If the lateral anomaly of interest is large in comparison with the lateral variations in impedance in the
layers immediately surrounding the target layer, then we have a chance to quantitatively interpret the
anomaly in terms of an isolated thin layer anomaly. This requires knowledge of the amplitude and
waveform of the seismic wavelet.

The increased requirement for knowledge and control of the wavelet has led to the development of
numerous approaches to wavelet estimation, directed toward reducing the degree of dependence on general
statistical estimation techniques where other approaches are available. Where statistical estimates are still
the only approach, attempts to improve the statistics are sought.

Wavelet shaping on land is still largely statistically based.

The major changes have thus far been appearing in marine data processing. Neidell, et al. (1977), Shugart
and Barry (1976), Berkhout (1976), and Nath and Patch (1977), have emphasized the importance of seismic
wavelet estimation, and have shown that an appropriate combination of deterministic and statistical
estimation of the wavelet components can lead to data more suitable for stratigraphic mapping.

The next Figure shows an example over a North Sea Oil Field (Neidell and Poggiagliolmi).


A well was drilled at shotpoint 230 on the basis of a structural high. There is oil production from a
Jurassic sand at 2.8 seconds. The lower section was processed by a shaping filter based on an estimate of
the seismic wavelet from the first water bottom multiple. Statistically based predictive deconvolution was subsequently applied to reduce the water reverberation. The interpretive wavelet used is shown in the lower part of the figure.

In the lower section, six copies of the synthetic trace have been inserted at the well location for the time
interval from 2.7 to 3.0 seconds.
The agreement is reasonably convincing. However, the low velocity reservoir sand appears to be about ten
milliseconds thicker on the synthetic seismogram than on the field traces.

We have looked at three examples, two from land data and one from marine data, in which the validity of the convolutional model was tested by a comparison between field seismic data and data obtained from logs taken in a borehole. The results are imperfect, but are reasonably convincing verification of the validity of the model. There have been many less convincing examples published, and many very disappointing results which have not been published. Sometimes the problem lies with the seismic data, as in the early Peterson example where the signal-to-noise ratio was marginal and knowledge of the seismic wavelet was lacking. In other cases, the limitations lie with the well data itself:

1) The well log is at best a characterization of impedance in the 1-2 foot cylinder surrounding the hole. The
seismic trace samples the impedance over a much larger region with a radius of tens to hundreds of meters.
A look at a road cut is a convincing illustration of the potential sampling problem inherent in well data.

2) SONIC and DENSITY logs are known to contain errors due to the local changes introduced by the
borehole. These problems are especially serious in poorly consolidated sediments. The necessity to correct
the integrated SONIC log to achieve agreement with a well velocity survey is symptomatic of this problem
and the sampling problem.

The introduction of the LONG-SPACED SONIC log and the borehole gravimeter should improve the
available log data both from the standpoint of sampling and local borehole effects.

3) Finally, a perfect borehole survey gives us a log of impedance in only one dimension. If the subsurface
impedance is indeed varying only in the vertical dimension, then a good log can provide "ground truth" for
the seismogram. Curvature of the interfaces, or non-linear lateral variations in impedance within the
interface, can cause the one-dimensional approximation to break down seriously. The work of Trorey
(1970) and Hilterman (1970, 1975) in two-dimensional and three-dimensional seismic modelling has
provided a rapid advance of our understanding of the reflectivity of the real subsurface.


[Figure: synthetic sections labeled ONE (1D seismic), TWO (2D seismic), and THREE (3D seismic).]

The figure above illustrates the Michigan reef problem. The subsurface is indeed three-dimensional.

This excellent example, from Jackson and Hilterman (1977), shows synthetic record sections based on one-dimensional, two-dimensional, and three-dimensional computations of reflectivity.
Clearly, the data from a single borehole would not be expected to agree with a perfect conventional seismogram showing the reflectivity.

Even a seismic trace taken from a migrated record section (Claerbout, 1976; Schneider, 1977) would show
poor agreement.
A reflectivity estimate from three-dimensionally migrated data, collected over a surface area rather than along individual seismic lines, is required before agreement with single-borehole data can be expected in this problem.


Let's consider the following paper, courtesy of Kelman Technologies, about deconvolution and noise contamination.

Deconvolution With A View

Stewart Trickett & Brian Link, Kelman Technologies Inc.
Bill Goodway, PanCanadian Petroleum Ltd.

A common attitude towards surface-consistent deconvolution is to treat it like a black box.

With all those statistics we throw at it, one would assume that noise can no longer be a problem.

Well, without good quality control, without a way to see what surface-consistent deconvolution has done, we really don’t
know. We’re working in the dark.


Surface-Consistent Deconvolution

We now take a more in-depth look at surface-consistent deconvolution, and we have found that noise contamination is still a problem.

The good news, though, is that provided we are aware of the noise contamination, we have a number of tools at our disposal
to remove it, and by doing so we can substantially improve our sections.
The biggest advancement in deconvolution over the last 20 years has been the replacement of trace-by-trace deconvolution
with surface-consistent deconvolution.

Here's a simple and typical method for performing surface-consistent deconvolution:

Surface-Consistent Deconvolution
• Estimate the amplitude spectrum A(i,j) of each trace
• Model all trace amplitude spectra as A(i,j) = L S(i) R(j)
• Statistically estimate L, the S(i)'s, and the R(j)'s
• Deconvolve each trace based on its surface-consistent spectrum

We begin by estimating the amplitude spectrum of each trace. We then model these spectra as being made up of three components:
1. the line component, which represents the average spectrum for the entire line,
2. the source component, which represents the differences between source spectra, and
3. the receiver component, which represents the differences between receiver spectra.
We then statistically estimate these component spectra.

Then based on these spectra, we deconvolve each trace using the Wiener-Levinson algorithm - in other words, we will be
making the minimum-phase assumption.
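
As a rough illustration of the decomposition step, here is a minimal Python sketch (the data layout and function name are our own assumptions, not Kelman's code) that estimates the line, source, and receiver spectra by alternating averages in the log domain, which is one simple statistical solution of the surface-consistent equations:

```python
import numpy as np

def surface_consistent_spectra(A, src_of, rcv_of, n_src, n_rcv, n_iter=10):
    """Decompose trace amplitude spectra A[k, f] into the model
    A = L * S(i) * R(j), i.e. log A = log L + log S(i) + log R(j).
    src_of[k] and rcv_of[k] are integer source/receiver indices of trace k.
    Spectra are assumed strictly positive (add a small floor beforehand)."""
    src_of, rcv_of = np.asarray(src_of), np.asarray(rcv_of)
    logA = np.log(A)
    logL = logA.mean(axis=0)              # average line spectrum
    logS = np.zeros((n_src, A.shape[1]))
    logR = np.zeros((n_rcv, A.shape[1]))
    for _ in range(n_iter):               # Gauss-Seidel style updates
        resid = logA - logL - logR[rcv_of]
        for i in range(n_src):            # each source explains its residual
            logS[i] = resid[src_of == i].mean(axis=0)
        resid = logA - logL - logS[src_of]
        for j in range(n_rcv):            # each receiver explains its residual
            logR[j] = resid[rcv_of == j].mean(axis=0)
    return np.exp(logL), np.exp(logS), np.exp(logR)
```

Each trace is then deconvolved with an operator designed from L * S(i) * R(j) rather than from its own noisy spectrum.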


The principal strength of surface-consistent deconvolution is its ability to suppress the effects of noise. It does this in a couple of ways:
• First, it increases the signal-to-noise ratio through statistical redundancy.
• Second, because we don't need all the traces for the estimation, we can exclude especially noisy traces from the design.
• Third, we can provide a component, for instance an offset component, which is designed to absorb noise, thus preventing other components from being contaminated. The effectiveness of this last method is open to debate, but it doesn't affect what we have to say here.

Despite this list, we’ll show that the surface-consistent solution is still often contaminated, and this produces phase errors in
the final stacked sections.

Displaying Surface-Consistent Spectra

We’ve come up with two different ways to display the surface-consistent spectra in order to detect noise contamination.

[Figure: surface-consistent display, aligned vertically by common station; frequency axes 0 to 100 Hz, phase axes -180 to +180 degrees.]

Above is a surface-consistent display, where everything is aligned vertically by common station.

A basic assumption we have is that most deconvolution effects occur near the surface. So we begin with the surface model,
showing the surface elevation and weathering layers.

Immediately below this are receiver amplitude spectra. Actually we mean the receiver spectra plus the average line
spectrum. Frequencies run vertically from 0 Hz to 100 Hz. The high amplitudes are red, and the low amplitudes are green.

Below this are the receiver delta-phase spectra. These show how the phase spectra of the deconvolution operators change laterally.

Below this are the receiver constant-phase rotations. Of course the phase spectra of the deconvolution operators aren’t linear,
but if we restrict the frequencies to between 20 and 50 Hz, we can make a reasonable approximation.


You’ll notice a flat spot in the surface elevations where there’s a gap in the shooting. This happens to be a creek bed.
In most parts of the amplitude spectra the peak frequency is about 20 Hz. However, right below the creek bed the peak
frequency is about 10 Hz. This is a sure indication of shot noise contamination.

The phase distortion this generates can be seen in the delta-phase spectra. The turquoise colour indicates that it’s at least 90
degrees. The constant phase rotations display below verify this.

Log-Amplitude Spectra


A second way to display the surface-consistent spectra is through a simple log-amplitude graph. The y-axis runs from +10
dB to -60 dB. The x-axis runs from 0 Hz to 125 Hz.

The blue line is the log-amplitude spectrum of the average line component. The red lines are for the source spectra with the
line component added in.

The first peak is around 12 Hz, and indicates significant shot noise contamination.
The second peak is at about 25 Hz: this is typical peak frequency for the signal.
Then the log-amplitude spectra decay in a long linear trend. This is exactly what’s predicted by the properties of absorption.

This linear trend is interrupted by a spike at 60 Hz: this is power line noise contamination. There is another spike at 120 Hz:
this is the first harmonic of the power line noise.


Noise Types

Three types of noise can be detected through the use of these displays:


1. coherent noise,
2. random broadband noise, and
3. power line noise.

Source Noise Example


By coherent shot noise we mean ground roll, air blast, refracted multiples, and so on.
The most common indications are spectral peaks between 5 and 15 Hz.
This contamination causes frequencies below about 25 Hz to be severely distorted.

Phase Distortion From Source Noise


Most of the phase distortion due to source noise is below 25 Hz. But as you can see, it can be quite severe.


There are three typical ways to remove shot noise contamination:

1. First, we note that certain types of shot noise, such as ground roll or air blast, are mostly restricted to the near traces. Thus we can exclude the near offsets from the design.
2. Second, we try to remove the shot noise directly from the data. This can be done in transparent mode, where the filtering is done only for purposes of designing the deconvolution; the deconvolution is then applied to the raw, unfiltered data. Typically these filters are based on the FK or linear Radon transform.
3. Third, we edit the spectra directly. Amortization is a fancy word meaning to extrapolate the lower frequencies from the mid frequencies using a given decay rate (see the sketch below).
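
As a sketch of the amortization idea (our own parameter names; the decay rate would be chosen by the processor), the low end of an amplitude spectrum can be replaced by an extrapolation downward from the mid band:

```python
import numpy as np

def amortize_low_frequencies(amp, freqs, f_lo, decay_db_per_hz=1.0):
    """Replace the spectrum below f_lo by extrapolating downward from the
    amplitude at f_lo with a fixed decay rate (in dB per Hz).
    amp and freqs are 1-D arrays; amp is an amplitude (not power) spectrum.
    Assumes f_lo lies inside the range of freqs."""
    amp = amp.copy()
    i_lo = np.searchsorted(freqs, f_lo)           # first sample at/above f_lo
    anchor = amp[i_lo]
    drop_db = decay_db_per_hz * (freqs[i_lo] - freqs[:i_lo])
    amp[:i_lo] = anchor * 10.0 ** (-drop_db / 20.0)
    return amp
```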

Random Broadband Noise Example


We think random broadband noise is caused by wind. Its indicator is that the amplitudes fail to die down at high frequencies.

Phase Distortion From Random Broadband Noise

The phase distortion it generates is much like that of a large pre-whitening. On the data it looks a lot like a constant phase
rotation.


One way to get rid of random broadband noise is to apply FX deconvolution on the data. Again this can be done in a
transparent mode.
Another way is to edit the spectra directly by extrapolating the higher frequencies from the mid frequencies with a given
decay rate.

Powerline Noise Example


Power line noise is caused by electric currents acting on geophones or cables. It's very easy to detect: a sharp spike right at 60 Hz, or 50 Hz, depending on the system.

Phase Distortion From Powerline Noise


The phase distortion it generates is severe. Not only can it be large in magnitude and have a broad bandwidth, but worse for
interpretation purposes, it’s highly non-linear.


One way to get rid of power line noise is a modelling and subtraction algorithm. In principle such algorithms can remove 60
Hz noise without damaging 60 Hz signal.
Another way is to edit the spectra directly, for instance by linearly interpolating between 50 and 70 Hz.

One way we don’t recommend is a notch filter. Not only can it not discriminate between noise and signal, the notch it
leaves in the spectra can be as troublesome to the deconvolution as the spike it took out!
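
A minimal sketch of how a modelling-and-subtraction scheme can work (a sketch, not necessarily Kelman's algorithm): least-squares fit a single sinusoid at the power line frequency over the whole trace and subtract it. Reflection energy at 60 Hz is not phase-locked over the full record length, so the fit latches onto the steady hum rather than the signal.

```python
import numpy as np

def subtract_powerline(trace, dt, f0=60.0):
    """Least-squares fit a*cos(2*pi*f0*t) + b*sin(2*pi*f0*t) over the whole
    trace and subtract it. dt is the sample interval in seconds."""
    t = np.arange(len(trace)) * dt
    basis = np.column_stack([np.cos(2 * np.pi * f0 * t),
                             np.sin(2 * np.pi * f0 * t)])
    coef, *_ = np.linalg.lstsq(basis, trace, rcond=None)
    return trace - basis @ coef
```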

We can summarize the three main types of noise, their appearance, and possible remedies in a single chart:

Noise type             | Appearance in the spectra                       | Possible remedies
Coherent shot noise    | Spectral peak between 5 and 15 Hz               | Exclude near offsets; transparent FK or Radon filtering; spectral amortization
Random broadband noise | Amplitudes fail to die down at high frequencies | Transparent FX deconvolution; extrapolate the high frequencies
Power line noise       | Sharp spike at 50 or 60 Hz (plus harmonics)     | Modelling and subtraction; interpolate across the spike


Power line Noise Example

We will look at two examples. The first example is a line inundated with power line noise.
Let’s first do a quick test. Suppose we have three spikes which we have filtered with a 5/10-40/50 band pass, a 5/10-55/65
band pass, and a 5/10-70/80 band pass, all of them zero phase.

Now let’s distort their phase spectra in a manner typical for power line noise. We can see that the low-frequency wavelet is
hardly changed. The broad-band wavelet on the right, however, is severely distorted.

[Figure: the three zero-phase wavelets (5/10-40/50, 5/10-55/65, and 5/10-70/80 Hz band passes) before and after the phase distortion.]

We conclude the following: if we’re serious about performing high resolution seismic work, power line noise cannot be
tolerated in the deconvolution design.
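
This quick test is easy to reproduce. The sketch below builds a zero-phase band-limited wavelet from a trapezoidal amplitude spectrum and applies an illustrative (made-up) non-linear phase kink around 60 Hz; the wider the pass band, the more the time-domain wavelet is deformed.

```python
import numpy as np

def trapezoid(freqs, f1, f2, f3, f4):
    """Trapezoidal band-pass amplitude spectrum, e.g. 5/10-55/65 Hz."""
    a = np.zeros_like(freqs)
    a[(freqs >= f2) & (freqs <= f3)] = 1.0
    up = (freqs > f1) & (freqs < f2)
    a[up] = (freqs[up] - f1) / (f2 - f1)
    dn = (freqs > f3) & (freqs < f4)
    a[dn] = (f4 - freqs[dn]) / (f4 - f3)
    return a

n, dt = 512, 0.002                          # 2 ms sampling
freqs = np.fft.rfftfreq(n, dt)
amp = trapezoid(freqs, 5, 10, 55, 65)       # one of the three test bands
wavelet = np.fft.fftshift(np.fft.irfft(amp, n))     # zero-phase wavelet

# an illustrative (made-up) power-line-style phase kink centred on 60 Hz
kink = np.deg2rad(70.0) * np.exp(-((freqs - 60.0) / 8.0) ** 2)
phase = kink * np.sign(60.0 - freqs)
distorted = np.fft.fftshift(np.fft.irfft(amp * np.exp(1j * phase), n))
```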

Log-Amplitude Spectrum


Back to the seismic data. Here is a spectrum where the power line noise is left in. It shows a large 60 Hz spike.


Log-Amplitude Spectrum

We then reprocess the data after removing the power line noise using a modeling-and-subtraction algorithm. The line
spectrum now shows only a small spike at 60 Hz, which can be removed by linearly interpolating between 55 and 65 Hz.

Phase Spectra


Above are the phase spectra for those last two amplitude spectra. The red line is the distorted spectrum. Note that the
distortion at 50 and 70 Hz is plus or minus 70 degrees.


Finally, let's look at the final stacks. The data was processed twice; each time, the power line noise was removed right at the beginning.
In this section, however, the phase spectrum of the deconvolution operators was distorted by the power line noise.

[Figures: stacked sections, "Phase-Only Powerline Distortion" and "No Powerline Distortion".]

Thus the only difference between the previous section and this one is in the phase of the deconvolution.

Deep down at 1.2 seconds the two sections are virtually identical.
Shallower, at 0.8 seconds, we can see some significant differences in character.
Finally, very shallow at 0.2 seconds the two sections look completely different.

Note the improvement in lateral continuity.

Why do we see more differences the shallower we go? Because the shallower data is broader band, and thus more sensitive
to power line noise distortion.


Source Noise Example

Let’s look at a line which features significant shot noise.

Raw Shot Record

[Figure: raw shot record, 0 to 2 seconds.]

Here is a raw record. The shot noise is strongest in the near traces, but also extends to the far offsets. Since the zone of
interest is at 1.2 seconds there’s no way we’ll be able to design around the shot noise.

Acquisition Parameters
Source: dynamite, 1 hole
Charge: 0.5 kg at 3 m depth
Traces/Record: 300
Source Interval: 20 m
Group Interval: 10 m
Spread: 1500 - 10 - X - 10 - 1500 m
Fold: 7500%

The acquisition parameters show that this is a dynamite line, with an exceptionally high fold of 7500%.


Three wells, A, B, and C, are available. The first, well A, is offset a considerable distance from the line. The other two, B and C, are right on the line.

Geological Cross-Section

[Figure: geological cross-section from the Viking datum down to the Banff: shale/silt, shale, Viking sand, shale, Mannville silts and coals, Med R. Coal, Glauconite porous gas sand, Ostracod limestone/shale, BQ/Detrital quartz/limestone/chert, Banff limestone.]

The pay zone is a gas sand. Somewhere in the middle of this gas sand is a down-cutting channel, plugged with a non-porous
silt. We don’t want to drill on this channel because the pay zone underneath is very thin.

There's some bad news. Overtop of this is a series of coal beds. These coals give off strong reflections and vary laterally in thickness. What's more, they may produce multiples and tuning effects. In other words, we're looking for a very subtle event in the shadow of a large and complex one.

The good news is that beneath the pay zone is the Banff, a strong consistent marker which we can use to gauge the reliability
of our sections.

There are two interpretation goals:

1. First, to determine the boundaries of the channel.
2. Second, to map the time difference between the coals and the Banff. This may indicate the thickness of the pay zone.


Synthetic Seismic Model

[Figure: synthetic seismic model at wells A, B, and C.]

The gas sand appears as a very quiet zone in our seismic model. The down-cutting channel manifests itself as a small, rather
subtle, peak.

Conventional Stack

[Figure: conventional stack at wells A, B, and C, 0.9 to 1.4 s, with Coal, Banff, and Wab markers; well synthetic wavelet zero phase, 10/15-55/65 Hz.]

Here’s the stack after our first cut of processing. It has some serious problems. First, the Banff does not tie the wells very
well. At well A, for instance, the well has a trough where the seismic data has a peak. Even worse, it’s impossible to track
the Banff across the section.

The gas sand is a quiet zone on our seismic model. Here it’s a strong trough. And as for the start of the channel, there’s
nothing to go on. For these reasons the interpreter judged this section to be useless for interpretation purposes.


Raw Surface-Consistent Spectra

[Figure: raw surface-consistent receiver spectra, 0 to 100 Hz.]

Looking at the surface-consistent receiver spectra, we see they peak at around 10 Hz, a sure sign of severe shot noise
contamination. The delta-phase spectra show a discontinuity at 60 Hz, indicating power line noise.

We then reprocessed this line to get rid of the noise in the spectra.

Final Surface-Consistent Spectra

[Figure: final surface-consistent receiver spectra, 0 to 100 Hz.]

We now see that the peak frequencies are at a healthy 20 Hz. The discontinuity at 60 Hz is now gone.


[Figure: log-amplitude spectra: raw; FK filtered, range limited; FX; and FX, range limited, plus spectral amortization.]

Here are the log-amplitude spectra of the line component with the source plus line spectra in red. The raw spectra show a
peak at about 10 Hz.

We then reprocessed the data, removing most of the shot noise using an FK filter, in transparent mode, and excluded the near
offsets. Now the maximum amplitude is at about 20 Hz for most sources. Some, however, are still badly contaminated.
We then reprocessed the data again, this time applying an FX deconvolution to knock down some of the random broadband
noise. This reduced the amplitudes of frequencies above 35 Hz by about 4 or 5 dB.
Finally, we applied spectral amortization, extrapolating frequencies below 12 Hz and above 50 Hz using a mild decay rate.

Conventional & Final Phase Spectra

[Figure: phase spectra, -180 to +180 degrees, 0 to 125 Hz.]

The phase spectra of the conventional and final line components are shown here. The red line is conventional. Most of the
differences are below 25 Hz. At 15 Hz the difference is about 120 degrees. Note also that the final spectrum is less wobbly
than the conventional one.


Conventional Stack

[Figure: the conventional stack again, for comparison.]

Finally, let’s see what difference this made to the seismic sections. The conventional stack is shown here.

Final Stack

[Figure: final stack at wells A, B, and C, same display parameters.]

This is the final stack.

Note the Banff event on the final section. We can now easily track it across the section. What’s more, the tie between it and
the wells is nearly perfect.

The gas sand is a deep trough on the conventional stack. Now it’s a much quieter zone, as predicted by the seismic model.

As for the right boundary of the down-cutting channel, we can now hazard a guess. The left boundary of the channel is still
missing, however, so this section still has a ways to go. Nevertheless, we’ve gone from a section which was useless for
interpretation to one which is now useful.


Decview Stack

[Figure: stack with well synthetic built from a zero-phase wavelet whose amplitude spectrum is taken from the data.]

Here we’ve used a wavelet for the well logs whose amplitude spectrum is taken from the seismic data. We can no longer see
the channel on well B. We conclude that the section does not have the frequency content to resolve the left side of the
channel.

Summary
Finally, let me conclude:

1. First, surface-consistent deconvolution, despite all the statistics we throw at it, still suffers from noise
contamination.
2. Second, we have the tools, none of which are exotic, all of which are known to the industry, to remove most of it.
3. Third, as I think our examples have shown, doing so can make a significant improvement to our final sections.
4. Fourth, and this is the main point of our talk, we must begin to actually view the surface-consistent solution.

We must find ways to detect noise contamination easily and on a production basis, in essence turning deconvolution from a
black box into a glass box.


The following discussion is based on Kelman Technologies’ paper on spectral balancing, and presented here with their
permission.

There are at least two problems that can arise with balancing processes.

1. The first is distortion of the geology, meaning deconvolution of the underlying reflection coefficient series.
2. The second is ringing caused by spikes in the amplitude spectrum created near sudden drop-offs in the input spectra.

BALANCE, a Kelman process, is designed to avoid both of these problems (because it is time variant).

How does BALANCE work?


1. First we transform the seismic trace into the time-frequency, or TF, domain by taking the discrete Fourier transform
of many small overlapping windows. These windows are typically 400 ms long with a new window beginning every
100 ms (although the processor can change this). The windows are strongly tapered. From the transforms we extract
amplitude spectra in the time-frequency domain.
2. Next we apply a 2D median filter in time and frequency designed to preserve the seismic wavelet response while
removing the geology response. We’ll have more to say about this later.
3. We then pre-whiten and invert the filtered amplitude spectra within a given frequency range. Constant extrapolation
is used outside of this range.
4. Finally we form zero-phase filters from the spectra, and apply these to the seismic trace within their corresponding
windows.

Why don’t we perform a full invertible TF transform (such as a Gabor transform), with an amplitude spectrum every time
sample?
The main reason is expense. Although these transforms can be done fairly quickly, applying a 2D median filter of the
dimensions we would like is very time consuming.
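
Putting the four steps together, here is a bare-bones sketch (our own simplifications; the window, filter, and whitening parameters mirror the defaults quoted in the text, and the 'nearest' boundary mode stands in for the constant extrapolation):

```python
import numpy as np
from scipy.ndimage import median_filter

def balance(trace, dt, win=0.4, step=0.1, med_hz=20.0, med_s=1.2, white=0.1):
    """Time-variant spectral balancing: windowed amplitude spectra,
    a 2D median filter in (window, frequency), pre-whitened inversion,
    and zero-phase application with overlap-add."""
    trace = np.asarray(trace, dtype=float)
    nwin, nstep = int(win / dt), int(step / dt)
    taper = np.hanning(nwin)
    starts = range(0, len(trace) - nwin + 1, nstep)
    # step 1: amplitude spectra of strongly tapered overlapping windows
    spectra = np.array([np.abs(np.fft.rfft(trace[s:s + nwin] * taper))
                        for s in starts])
    # step 2: 2D median filter in the (window, frequency) plane
    df = 1.0 / (nwin * dt)
    size = (max(1, int(med_s / step)), max(1, int(med_hz / df)))
    smooth = median_filter(spectra, size=size, mode='nearest')
    # step 3: pre-whiten and invert the filtered spectra
    inv = 1.0 / (smooth + white * smooth.mean())
    # step 4: apply the zero-phase filters window by window (overlap-add)
    out = np.zeros_like(trace)
    weight = np.full_like(trace, 1e-12)
    for k, s in enumerate(starts):
        seg = np.fft.rfft(trace[s:s + nwin] * taper) * inv[k]
        out[s:s + nwin] += np.fft.irfft(seg, nwin) * taper
        weight[s:s + nwin] += taper ** 2
    return out / weight
```

The taper-squared normalization makes the overlapping zero-phase filters blend smoothly from one window to the next.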

Proper Median Filter

[Figure: window amplitude spectra, 2D median filtered spectra, and inverted spectra for four overlapping windows (600-1000, 700-1100, 800-1200, and 900-1300 ms).]

Shown are four overlapping windows of a single real trace. On the left are the amplitude spectra of these windows. In the
middle are these spectra after 2D median filtering. Note the constant extrapolation of amplitudes done at the beginning and
end of each spectrum.


If the median filter has done its job correctly then these represent the spectra of the seismic wavelet at different points in time.
See how they appear smooth and reasonable in the frequency direction. Just as important, they change smoothly and
consistently from one window to the next.

On the right are the inverted spectra which get applied to the trace. Notice how the high frequencies get boosted more as we
go deeper.

(In the middle panels, the right-hand side is straight because of the constant extrapolation. In the right-hand panels, the slope increases with depth, boosting more and more of the higher frequencies as depth increases, as expected.)

There are two parameters which are commonly changed.


• The first is the frequency band within which to balance the spectrum. The applied operator is constant and
extrapolated outside of this band. The results are not too sensitive to this parameter. We suggest you set it to the
limits of the final band pass filter.
• The second parameter is the pre-whitening. A value of 1% gives a harsh whitening, 10% is normal (the default), and
50% is mild.

The amplitude spectrum of a trace contains the response of both the seismic wavelet and the geology.
The wavelet response must be preserved within the filter amplitude spectra if we wish to do a good job of balancing the trace
spectrum when we apply them.
The geology response, however, must be removed if we are to prevent distortion of the geology. The 2D median filter, which
typically takes the median of amplitude spectra over a moving rectangle 20 Hz wide and 1200 ms (or usually about 9
windows), is well suited to both tasks. (1200 ms is derived from experience).

Seismic Wavelet Response

• The spectrum at any point in time is usually smooth, with an occasional sharp drop-off.
• It tends to change monotonically from window to window.
• Median filtering preserves all of these properties.

In the frequency direction the amplitude spectrum of the seismic wavelet usually has a smooth appearance, although
sometimes with an occasional sharp “drop off”. Both these properties survive median filtering.

In the time, or window, direction we expect the seismic wavelet to be reasonably consistent over nearby windows. If the
wavelet does change with time, we expect these changes to be roughly monotonic (although not necessarily smooth) for any
one frequency. Monotonic functions are unchanged by median filters.


Geology Response

• Tends to be "grassy" and roughly white.
• Completely different in non-overlapping windows.
• Median filtering in the frequency direction removes most of the spikes.
• Median filtering in the window direction removes the rest, since the geology response from any one window is always in the minority (2 or 3 windows versus 9).

Geology, on the other hand, represents the spiky appearance across the spectrum, the rapidly changing part of a trace
amplitude spectrum. In the frequency direction, the width of the median filter causes much of this spikiness to smooth out
and disappear.

The height of the filter, however, is just as important. The geology response within any one window is similar to only a few
overlapping windows. The geology response in the remaining 6 or 7 windows within the rectangle is completely different.
Since a median works by “majority rule” the effects of this minority should disappear.

In addition to the spiky-looking part of the amplitude spectrum, the geology response often has some overall “shape” or
“colour” to it, usually in the form of a slow increase in amplitude with increasing frequency. We do not try to address this,
but you’re free to perform spectral colouring afterwards.


Median Filter Too Narrow

[Figure: the same four windows processed with a median filter narrowed to 8 Hz by 600 ms.]

Here’s what happens when the 2D median filter fails. We’ve narrowed the filter from dimensions of 21 Hz and 1200 ms to 8
Hz and 600 ms. Now the median filtered spectra change quickly from window to window, and have sharp spikes in the
frequency direction. This tells us that the median filter has not removed all of the geology response, so when we apply these
filters we will distort the geology.

Of course we wouldn’t use such a narrow median filter in practice, but if we did, these diagnostic plots would indicate the
problem.


How Mean Smoothing Causes Ringing

[Figure: a sharp drop-off in the spectrum; after mean smoothing; inverted; applied.]

Sometimes our spectra contain sharp drop-offs in amplitude. The frequency boundaries of a Vibroseis sweep are an obvious example, but it can also happen in the middle of the spectrum for any kind of data. It might look something like the above.

Most spectral balancers contain, implicitly or explicitly, mean or least-squares smoothers across the amplitude spectrum, on which they base the inverse filter. When this inverse is applied to the top spectrum it produces spikes, causing the trace to "ring" in the time domain.

How Median Smoothing Prevents Ringing

[Figure: the same sharp drop-off; after median smoothing; inverted; applied.]

We use median, not mean, filters. A median filter preserves step functions, so it does a perfect job of whitening the spectrum
in this simple example, without producing spikes.
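
The difference is easy to demonstrate numerically; a minimal sketch comparing a moving mean with a moving median across a spectrum that contains a sharp drop-off:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, median_filter

spec = np.where(np.arange(200) < 100, 1.0, 0.1)   # sharp drop-off at sample 100
mean_sm = uniform_filter1d(spec, size=21, mode='nearest')
med_sm = median_filter(spec, size=21, mode='nearest')

# inverse filters based on each smoother, applied back to the raw spectrum
white = 0.1 * spec.mean()
applied_mean = spec / (mean_sm + white)   # overshoots near the step: ringing
applied_med = spec / (med_sm + white)     # stays flat on both sides of the step
print(applied_mean.max(), applied_med.max())
```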


A Ringing Example (Artificial)

[Figure: input wavelet, mean-smoothed spectral whitener result, and BALANCE result; spectra from 0 to 125 Hz.]

Here are some actual tests on an artificial example. We begin with a zero-phase wavelet, shown on the right, with a spectrum having a sharp drop-off in the central frequencies. When a whitener containing a spectral mean smoother is applied, it leaves spectral spikes where the drop-off used to be. BALANCE has no such problem.

Note the ringing of the middle wavelet compared to the BALANCE wavelet. This is what can happen to our seismic wavelet.

Spectral Notches and Mean Filtering

[Figure: a spectral notch; after mean smoothing; inverted; applied.]


Spectral Notches and Median Filtering

[Figure: the same spectral notch; after median smoothing; inverted; applied.]

[Figure: signal-to-noise diagnostic, raw and BALANCE average amplitude spectra, 0 to 125 Hz.]

Here's a real data example. At the top is the average amplitude spectrum over part of a stacked section. Note the sharp drop-off in amplitude after 75 Hz.
STACKVIEW diagnostics show that there is signal to at least 110 Hz, and we would like spectral balancing to boost some of this.
The application of a mean-smoothed spectral whitener is a disaster. It has created a huge spectral spike between 65 and 75 Hz. The only remedy for this process is to not whiten beyond 75 Hz.
The BALANCE result is much better.
Zero-phase deconvolution works very well within the window in which we are working, but tends to boost the high frequencies too much in the shallow parts and not enough in the deeper section.


[Figure: raw and BALANCE stacked sections.]

Here are the actual stacked sections. The mean-smoothed result is unacceptable, while the BALANCE result is just a sharper version of the raw stack. An event appears near the bottom which is invisible in the top two stacks.

[Figure: events above the coals (raw and BALANCE) and below the coals (raw and BALANCE).]


In our STACKVIEW presentation we show a line where coals are sharply attenuating high frequencies. This mostly happens
on the right half of the line (where the coals are thickest), and only to events below the coals.

At the top we see some events above the coals (every seventh trace is shown). There are only slight variations in character
between the left and right sides. BALANCE has sharpened up the events, but preserved most of the variation.

At the bottom we have events below the coals. The difference between the left and right sides is severe. BALANCE greatly
improves the consistency across the line, but obviously more remains to be done. In fact our STACKVIEW study concluded
that minimum-phase time-variant filters were needed to fix the problem, and BALANCE, for now at least, is zero phase.


ESTIMATION OF RESERVOIR PROPERTIES USING SEISMIC DATA

Lateral variations of reservoir properties usually cannot be inferred from measurements made at sparsely located
wells. However, the integration of 3-D seismic data with petro-physical information at the wells can significantly
improve the spatial characterization of the reservoir.

• First, modern 3-D seismic methods provide high resolution images of the reservoir geometry and help delineate
complex fault patterns.
• Second, in contrast with sparse well observations, 3-D seismic data yield a dense and regular areal sampling of the
acoustic properties of the reservoir interval.
After careful data processing, the lateral variations of seismic amplitudes can be transformed to changes in acoustic impedance, i.e., the product of bulk density and interval velocity.
Such impedance data can then be related to rock properties and physical conditions in the reservoir.

Several techniques exist for using seismic data to extrapolate reservoir properties away from the well control points.
These are deterministic and geo-statistical techniques.

Deterministic techniques use simple empirical or regression formulas between seismic impedance and reservoir properties to predict properties away from the well control points. While sometimes oversimplified, these techniques have proven effective in a wide range of geological environments, bearing in mind that their effectiveness will depend on the specific depositional properties of the reservoir and the associated acoustic responses.
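
A minimal sketch of the deterministic route (the numbers are purely illustrative): calibrate a regression between impedance and porosity at the wells, then apply it between the control points.

```python
import numpy as np

# impedance extracted from the seismic volume at the well locations, and
# porosity measured in the wells (illustrative values, not real data)
z_wells = np.array([7500.0, 8200.0, 6900.0, 7800.0])   # impedance units
phi_wells = np.array([0.21, 0.15, 0.26, 0.18])         # fractional porosity

slope, intercept = np.polyfit(z_wells, phi_wells, 1)   # fit phi = a*Z + b

def porosity_from_impedance(z):
    """Apply the well-derived regression away from the control points."""
    return slope * z + intercept
```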

Geo-statistical modeling techniques alleviate some of the uncertainty in the deterministic techniques by quantifying the imprecision present in the spatial description of the reservoir parameters. Their advantages include using spatial functions to analyze preferential directions of variability in the data and to measure the sensitivity of seismic data to changes in different reservoir parameters. This approach provides a quantitative measurement of the reliability of the reservoir models (with error bounds consistent with the accuracy of the measurements) that is useful for risk assessment.

RESERVOIR DELINEATION

Traditionally, seismic data have been used primarily to map large-scale subsurface structures and to identify
exploration targets.
During the production phase when detailed reservoir analysis is required, the seismic information has not been used
extensively by petroleum engineers, except, perhaps, to map major faults and gross reservoir geometry. In fact, even
in the area of reservoir delineation, the applicability of conventional seismic methods has been limited not only by the
two-dimensional (2-D) nature and low resolution of the seismic data, but also by the problem of transforming travel-
time measurements to reliable estimates of reservoir depth and thickness.

Using 3-D acquisition and processing techniques, seismic methods now can provide more detailed and spatially
continuous images of subsurface structures that can help the engineer in constructing an accurate structural model of
the reservoir.

In stratigraphically or structurally complex reservoirs, delineating reservoir boundaries is done more effectively using 3-D seismic data. Compared with a conventional 2-D seismic survey, the 3-D method has several advantages in that it provides:

• a well-constrained 3-D image of the reservoir volume
• a better delineation of the faults
• an enhanced vertical and lateral resolution of the reservoir structure.


The dense areal coverage of 3-D data provides the geologist with a continuous image of reservoir thickness, depth, and facies distribution. The 3-D nature of the data is also important for accurate delineation of the complex fault patterns that may control the flow of fluids in the reservoir, as well as the effectiveness of enhanced oil recovery (EOR) processes. Also, arbitrarily oriented cross sections can be extracted by slicing the data volume.

With 3-D data, enhanced lateral resolution is achieved not only with the dense areal coverage of the seismic measurements, but also with the 3-D migration process. With 2-D data, migration is, at best, incomplete because it cannot account for energy reflected from outside the vertical plane of the survey line. In contrast, 3-D migration properly images subsurface structures and improves the lateral resolution of the reservoir boundaries. The more accurate subsurface imaging attained with 3-D migration also increases the signal-to-noise ratio in the data. As a result, improved vertical (or temporal) resolution also follows from the migration process.

While seismic mapping is generally conducted in two-way travel time, the structural information is needed in depth
for integration with other reservoir measurements for subsequent utilization by the reservoir engineer.
Three primary procedures convert time/amplitude seismic data to depth:

1. Apparent well velocity depth conversion
2. Seismic velocity depth conversion
3. Ray tracing depth modeling.

The method that is adopted depends on the available velocity information, the structural complexity of the reservoir,
and the distribution of well control.

SEISMIC AMPLITUDE MODELING

In addition to being useful in delineating the reservoir structure, seismic data also carry information about the petro-
physical properties of the reservoir. After careful data conditioning, seismic reflection amplitudes can be interpreted
in terms of changes in compressional and shear wave velocities or impedance, which, in turn, are related to petro-
physical variations in the subsurface.

DATA CONDITIONING

Before seismic amplitude changes in the reservoir zone can be modeled physically, a number of data processing steps
must be implemented to reduce:
(a) the effects of near-surface variations,
(b) the effects of changes in the overburden geology, and
(c) the effects of distortions caused by changes in the acquisition parameters.

Some of the data processing steps necessary to minimize these effects follow:

• relative amplitude preservation
• surface statics correction
• surface-consistent deconvolution and amplitude compensation
• wavelet processing and phase correction
• compensation for anelastic attenuation
• detailed velocity analysis
• multiple suppression
• accurate migration (3-D, where possible)
• post-stack filters.


Proper selection of the data processing sequence and parameters is also important to avoid inducing numerical
artifacts. In practice, the optimum processing scheme varies greatly, depending on data quality, acquisition
environment, and geologic conditions.

IMPEDANCE MODELING

The knowledge of acoustic impedance or interval velocity in the reservoir layer provides valuable information about
rock parameters such as porosity, lithology, and gas saturation. However, recovering acoustic impedance from band-
limited and noise-contaminated seismic records is an inherently non-unique process. To constrain our reservoir
model, we use one of three modeling techniques that make different assumptions about the recorded seismic signals
and are adapted to distinct geological situations.

THE PSEUDO-ACOUSTIC IMPEDANCE SECTION

This approach assumes that the seismic traces approximate reflection coefficient series. By using a recursive relation,
calibrated amplitudes are directly converted to estimates of seismic impedance. However, because of the band-limited
nature of seismic data, a low frequency trend must be added to the derived impedance model.
This trend is estimated from external information such as sonic logs, seismic velocity analysis, or regional velocity
maps (with an assumed velocity-density relationship).

The Pseudo Acoustic method produces satisfactory results when reservoir layering is thick enough to avoid the
interference effects of closely spaced reflectors. A high signal-to-noise ratio is desirable since each sample of the
seismic trace is treated as a reflection coefficient.
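
The recursive relation is the standard one: treating each calibrated sample r(k) as a reflection coefficient, Z(k+1) = Z(k) * (1 + r(k)) / (1 - r(k)). A minimal sketch:

```python
import numpy as np

def pseudo_impedance(refl, z0):
    """Recursively convert a calibrated reflection-coefficient series r[k]
    into acoustic impedance: Z[k+1] = Z[k] * (1 + r[k]) / (1 - r[k]).
    Because the seismic trace is band limited, the result carries only the
    relative (band-limited) impedance variation; a low-frequency trend from
    sonic logs or velocity analysis must still be merged in afterwards."""
    z = np.empty(len(refl) + 1)
    z[0] = z0   # starting impedance, e.g. from a log at the top of the window
    for k, r in enumerate(refl):
        z[k + 1] = z[k] * (1.0 + r) / (1.0 - r)
    return z
```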

SEISMIC LITHOLOGIC MODELING

Seismic Lithology modeling is a process in which the synthetic response of an iteratively perturbed lithologic model
converges toward the seismic data. It is a forward process in which the input parameters for a 2-D layered-model
(interval velocity, density, and thickness) are varied one at a time at specified locations. The parameter perturbation
process is performed over several iterations through the full set of model parameters and terminates when a
satisfactory match is obtained between the model-derived synthetic and recorded seismic traces.

Compared with the Pseudo Acoustic process, the Seismic Lithology method produces a thin-layer, more highly
resolved velocity model from band-limited, noise-contaminated seismic data by simulating the interference effects of
closely spaced reflectors. This forward modeling technique is comparatively insensitive to random noise, since the 2-D
process searches for lateral continuity of model layers, rather than aiming for a perfect match between the synthetic
seismograms and recorded seismic traces.

THE L-1 NORM MODELING METHOD

The L-1 norm modeling method, commonly known as Sparse Spike Inversion (SSI) (Oldenburg et al., 1983; Ardali, 1987), assumes that the seismic data provide a reasonable estimate of the reflectivity sequence only within the signal band. The method computes broadband reflectivity functions by honoring the reflectivity spectrum within the seismic signal bandwidth, with the additional constraint that, in the time domain, the reflectivity function is composed of a series of isolated spikes. With this constraint, the process partly recovers the high and low frequencies of the reflectivity spectrum that are missing in seismic records. If additional impedance information is available (e.g., from sonic logs), it is also incorporated as a constraint on the reflectivity sequence. After L-1 norm modeling, the reflectivities are converted to impedances using the Pseudo Acoustic recursive technique.

While the Seismic Lithology technique is best suited to constructing detailed and laterally consistent impedance models, the L-1 norm approach is optimally used where the reservoir structure is characterized by a few major interfaces.


AMPLITUDE VERSUS OFFSET ANALYSIS

The amplitude of reflected P- or S-wave seismic signals varies with the offset distance between source and receiver.
That variation often is dominated by the dependence of reflectivity, R, on reflection angle, Θ. In certain depositional
settings, the variation of R with Θ provides an important clue to the presence of hydrocarbons. Such information is
normally suppressed in conventional seismic displays (stacked sections) in which the amplitude of each event
represents an average over all offsets. Amplitude-versus-offset analysis is designed to retrieve that angle-dependent
information, measure it, and display it in a form suited for interpretation.

Relationships between hydrocarbons and amplitude-versus-offset variations are verified by modeling. In this step, the
user inputs an estimate of the reservoir conditions and builds a set of synthetic offset gathers for comparison to the
seismically derived offset gathers.
In this manner, it is possible to compare several geological possibilities to the seismic data and arrive at a reasonable
list of possible reservoir conditions.
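
The text does not name a particular approximation for R as a function of Θ; one common choice for such modeling is Shuey's two-term approximation, R(Θ) ≈ R(0) + G sin²Θ, where R(0) is the normal-incidence reflectivity (intercept) and G the gradient. A minimal sketch of generating synthetic offset amplitudes for one interface (the intercept, gradient, and geometry below are hypothetical values, not derived from any real reservoir):

import numpy as np

def shuey_two_term(r0, g, theta_deg):
    # r0 : normal-incidence reflectivity (intercept); g : AVO gradient
    theta = np.radians(theta_deg)
    return r0 + g * np.sin(theta) ** 2

offsets = np.linspace(0.0, 3000.0, 13)                    # source-receiver offsets, m
depth = 2000.0                                            # reflector depth, m (straight-ray assumption)
theta = np.degrees(np.arctan(offsets / (2.0 * depth)))    # incidence angle at each offset
synthetic = shuey_two_term(r0=-0.08, g=-0.15, theta_deg=theta)

The negative intercept and gradient chosen here mimic a gas-sand style response; curves like this one would be compared against the seismically derived offset gathers.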


The basic velocity structure of a sonic log is provided by the very low frequencies; the higher frequencies provide
the detailed resolution.


[Figure: normal range of transit time for several materials.]


SIGNIFICANCE OF ATTRIBUTES

Attribute measurements based on complex trace analysis were defined by Taner et al., 1976. We will examine their
significance.

Reflection strength (amplitude of the envelope).

Reflection strength is independent of phase. It may have its maximum at phase points other than peaks or troughs of
the real trace, especially where an event is the composite of several reflections. Thus, the maximum reflection strength
associated with a reflection event may be different from the amplitude of the largest peak or trough.
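
In complex trace analysis, reflection strength is the modulus of the analytic (complex) trace. A minimal sketch using the Hilbert transform (the synthetic trace here is for illustration only):

import numpy as np
from scipy.signal import hilbert

dt = 0.002                                            # 2 ms sample interval
t = np.arange(0.0, 1.0, dt)
trace = np.exp(-((t - 0.5) / 0.05) ** 2) * np.cos(2.0 * np.pi * 30.0 * t)

analytic = hilbert(trace)                             # x(t) + i * H[x(t)]
reflection_strength = np.abs(analytic)                # envelope, independent of phase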

High reflection strength is often associated with major lithologic changes between adjacent rock layers, such as across
unconformities and boundaries associated with sharp changes in sea level or depositional environments.
High reflection strength also is often associated with gas accumulations.
Strength of reflections from unconformities may vary as the subcropping beds change, and reflection strength
measurement may aid in the lithologic identification of subcropping beds if it can be assumed that deposition is
constant above the unconformity so that all the change can be attributed to the subcropping bed.
Lateral variations in bed thickness change the interference of reflections; such changes usually occur over an
appreciable distance and so produce gradual lateral changes in reflection strength. Sharp local changes may indicate
faulting or hydrocarbon accumulations where trapping conditions are favorable.
Hydrocarbon accumulations, especially gas, may show as high-amplitude reflections or "bright spots". However, such
bright spots may be non-commercial, and conversely, some gas-productive zones may not have associated bright spots.

The strength of reflections from the top of massive beds tends to remain constant over a large region. Reflections of
nearly constant strength provide good references for time-interval measurements.

Instantaneous phase

Instantaneous phase emphasizes the continuity of events. Instantaneous phase is a value associated with a point in time
and thus is quite different from phase as a function of frequency, such as that given by the Fourier transform.
In phase displays, the phase corresponding to each peak, trough, zero-crossing, etc. of the real trace is assigned the
same color so that any phase angle can be followed from trace to trace.
Because phase is independent of reflection strength, it often makes weak coherent events clearer.
Phase displays are effective in showing discontinuities, faults, pinchouts, angularities, and events with different dip
attitudes which interfere with each other.
Prograding sedimentary layer patterns and regions of on-lap and off-lap layering often show with special clarity, so
that phase displays are helpful in picking "seismic sequence boundaries".
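
A minimal sketch of the computation (the cyclic color mapping used in phase displays is left out):

import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(trace):
    # angle of the analytic signal at each time sample, in radians;
    # independent of reflection strength, so weak coherent events keep full weight
    return np.angle(hilbert(trace))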

Instantaneous frequency

Instantaneous frequency is a value associated with a point in time, like instantaneous phase. Most reflection events
are the composite of individual reflections from a number of closely spaced reflectors which remain nearly constant in
acoustic impedance contrast and separation.
The superposition of individual reflections may produce a frequency pattern which characterizes the composite
reflection.
Frequency character often provides a useful correlation tool.
The character of a composite reflection will change gradually as the sequence of layers gradually changes in thickness
or lithology.
Variations, as at pinchouts and at the edges of hydrocarbon-water interfaces, tend to change the instantaneous frequency
more rapidly.
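
Instantaneous frequency is the time derivative of the unwrapped instantaneous phase; a minimal sketch:

import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(trace, dt):
    # d(phase)/dt, scaled from radians per second to Hz
    phase = np.unwrap(np.angle(hilbert(trace)))
    return np.gradient(phase, dt) / (2.0 * np.pi)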

A shift toward lower frequencies ("low frequency shadow") is often observed on reflections from reflectors below gas
sands, condensate and oil reservoirs.


Low-frequency shadows often occur only on reflections from reflectors immediately below the petroliferous zone, with
reflections from deeper reflectors appearing normal.
This observation is empirical; it has been made by many observers, but the mechanism involved is not understood.

Two types of explanations have been proposed:


(1) that a gas sand actually filters out higher frequencies because of (a) frequency-dependent absorption or (b)
natural resonance, or
(2) that travel time through the gas sand is increased by its lower velocity such that reflections from reflectors
immediately underneath are not summed properly.

These explanations seem inadequate to account for the observations.

Fracture zones in brittle rocks are also sometimes associated with low frequency shadows.

Weighted average frequency

Weighted average frequency emphasizes the frequency of the stronger reflection events and smooths irregularities
caused by noise.
The frequency values approximate "dominant frequency" values determined by measuring peak-to-peak times or
times between other similar phase points.
Like instantaneous frequency displays, weighted average frequency displays are sometimes excellent for enhancing
reflection continuity.
Hydrocarbon accumulations often are evidenced by low frequencies.
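
One plausible implementation, consistent with the description above, averages instantaneous frequency in a sliding window weighted by the envelope; the window length and weighting scheme here are assumptions, not a published standard:

import numpy as np
from scipy.signal import hilbert

def weighted_average_frequency(trace, dt, window=25):
    analytic = hilbert(trace)
    env = np.abs(analytic)                            # envelope weights
    phase = np.unwrap(np.angle(analytic))
    freq = np.gradient(phase, dt) / (2.0 * np.pi)     # instantaneous frequency, Hz
    kernel = np.ones(window)
    num = np.convolve(env * freq, kernel, mode="same")
    den = np.convolve(env, kernel, mode="same")
    return num / np.maximum(den, 1e-12)               # envelope-weighted average

Weighting by the envelope means the strong reflection events dominate the estimate, which is what suppresses the noise-driven irregularities of the raw instantaneous frequency.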

Apparent polarity

While all attribute measurements depend on the quality of the data, recording, and processing, apparent polarity
measurements are especially sensitive to data quality.
Interference may result in the reflection strength maximum occurring near a zero-crossing of the seismic trace so that
the polarity may change sign as noise causes the zero-crossing of the trace or the location of the reflection strength
maximum to shift slightly.

The analysis of apparent polarity assumes a single reflector, a zero-phase wavelet, and no ambiguity due to phase
inversion. However, inasmuch as most reflection events are composites of several reflections, polarity often lacks a
clear correlation with reflection coefficient and hence it is qualified as "apparent" polarity.
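
A minimal sketch consistent with this definition assigns the sign of the real trace at each local maximum of the envelope (the peak-picking details are an assumption):

import numpy as np
from scipy.signal import hilbert, argrelextrema

def apparent_polarity(trace):
    # sign of the real trace where the envelope (reflection strength) peaks
    env = np.abs(hilbert(trace))
    peaks = argrelextrema(env, np.greater)[0]   # sample indices of envelope maxima
    return peaks, np.sign(trace[peaks])

As the text warns, when an envelope maximum falls near a zero-crossing of the trace, small amounts of noise can flip the sign returned here, which is exactly why the measurement is sensitive to data quality.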

Polarity sometimes distinguishes between different kinds of bright spots.


Bright spots associated with gas accumulations in clastic sediments usually have lower acoustic impedance than the
surrounding beds and hence show negative polarity for reservoir-top reflections and positive polarity for reflections
from gas-oil or gas-water interfaces (often called "flat spots").

General Interpretation considerations

The various attributes reveal more as a set than they do individually. Features often are anomalous in systematic ways
on the various displays.
Sometimes the meaning of a variation is clear by itself, but often it becomes clear only when well data are correlated
with the seismic data (see Sheriff et al., 1977).
As additional data are assimilated, these attribute measurements become more interpretable. Those familiar with the
local geology find significance which others miss.
The use of lateral changes of pattern, especially of low-frequency zones, to define the limits of production is
obviously important.
