This document discusses reflection-mode ultrasound imaging. Ultrasound uses acoustic waves above 20 kHz to image the inside of the body. A short pulse is transmitted into the body and reflections from interfaces between tissues are detected over time. The time of reflections corresponds to depth, allowing a 1D image to be formed. Scanning the beam in 2D generates a 2D image. Reflections come from impedance mismatches between tissues and from scattering within tissues. Surface reflections appear brighter while volumetric scattering is weaker but provides intrinsic tissue information. Ultrasound images display the reflectivity profile of the object being imaged.


Chapter U

Reflection-mode ultrasound imaging


Contents
Introduction to real-time reflection-mode ultrasound imaging
Object: What does ultrasound image?
Plane wave propagation
Source considerations
B-mode scan: near-field analysis
A-mode scan: Diffraction analysis
Narrowband approximation (for amplitude modulated pulses)
Steady-state approximation (for narrowband pulse)
Image formation
Fresnel approximation in Cartesian coordinates
Fraunhofer approximation in Cartesian coordinates
Beam pattern in polar coordinates
Physical interpretation of beam pattern
Design tradeoffs
Time-delay, phase, propagation delay
Focusing (Mechanically)
Ideal deflection (beam steering)
Speckle Noise
Summary


© J. Fessler, September 21, 2009, 11:18 (student version)



Introduction to real-time reflection-mode ultrasound imaging


Outline
Overview
Source: Pulse and attenuation
Object: Reflectivity
Geometric imaging (PSF approximation)
Diffraction: Fresnel and Fraunhofer approximations
Noise
Phased-arrays (beamforming, dynamic focusing)
R̂, scan conversion
Overview
Ultrasound: acoustic waves with frequency > 20 kHz. Medical ultrasound typically uses 1-10 MHz.
Ultrasound imaging is fundamentally a non-reconstructive, or direct, form of imaging. (Minimal post-processing required.)
Two-dimensions of spatial localization are performed by diffraction, as in optics.
One-dimension of spatial localization is performed by pulsing, as in RADAR.
The ultrasonic wave is created and launched into the body by electrical excitation of a piezoelectric transducer.
Reflected ultrasonic waves are detected by the same transducer and converted into an electrical signal.
Basic ultrasound imaging system is shown below.
(Figure: block diagram of a basic imaging system. A pulser drives the transducer s(x, y) with pulse p(t); echoes from the patient along z return to the transducer and pass through a signal processor to a display.)

A pulser excites the transducer with a short pulse, often modeled as an amplitude modulated sinusoid: p(t) = a(t) e^{jω₀t}, where ω₀ = 2πf₀ is the carrier frequency, typically 1-10 MHz.
The ultrasonic pulse propagates into the body where it reflects off mechanical inhomogeneities.
Reflected pulses propagate back to the transducer. Because distance = velocity × time, a reflector at distance z from the transducer causes a pulse echo at time t = 2z/c, where c is the sound velocity in the body.
Velocity of sound is about 1500 m/s ± 5% in soft tissues of the body; very different in air and bone.
Reflected waves received at time t are associated with mechanical inhomogeneities at depth z = ct/2.
The wavelength λ = c/f₀ varies from 1.5 mm at 1 MHz to 0.15 mm at 10 MHz, enabling good depth resolution.
The cross-section of the ultrasound beam from the transducer at any depth z determines the lateral extent of the echo signal.
The beam properties vary with range and are determined by diffraction. (Determines PSF.)
We obtain one line of an image simply by recording the reflected signal as a function of time.
2D and 3D images are generated by moving the direction of the ultrasound beam.
Signal processing: bandpass filtering, gain control, envelope detection.
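The depth-time bookkeeping above (z = ct/2, λ = c/f₀) is easy to sketch directly. A minimal helper, assuming a typical soft-tissue sound speed of 1540 m/s; the function names are mine:

```python
C_TISSUE = 1540.0  # assumed average sound speed in soft tissue [m/s]

def echo_time(depth_m, c=C_TISSUE):
    """Round-trip echo time t = 2z/c for a reflector at depth z [m]."""
    return 2.0 * depth_m / c

def echo_depth(t_s, c=C_TISSUE):
    """Depth z = c t / 2 associated with an echo received at time t [s]."""
    return c * t_s / 2.0

def wavelength(f0_hz, c=C_TISSUE):
    """Wavelength lambda = c / f0 at carrier frequency f0 [Hz]."""
    return c / f0_hz
```

For example, a reflector 15 cm deep echoes back after about 195 μs, and a 5 MHz carrier has a wavelength of about 0.3 mm.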
History
started in mid 1950s
rapid expansion in early 1970s with advent of 2D real-time systems
phased arrays in early 1980s
color flow systems in mid 1980s
3D systems in 1990s
Active research field today including contrast agents (bubbles), molecular imaging, tissue characterization, nonlinear interactions, integration with other modalities (photo-acoustic imaging, combined ultrasound / X-ray tomosynthesis)



Example.

(Figure: two clinical ultrasound images; the second was acquired a month later.)


Object: What does ultrasound image?


Reflection-mode ultrasound images display the reflectivity of the object, denoted R(x, y, z).
The reflectivity depends on both the object shape and the material in a complex way.
Two important types of reflections are surface reflections and volumetric scattering.
Surface reflections or specular reflections
Large planar surface (relative to wavelength λ), i.e., a planar boundary between two materials of different acoustic impedances
(e.g., waves in a swimming pool reflecting off of a concrete wall).
(Figure: an incident plane wave p_inc in medium 1 (Z₁, c₁) meets an interface with medium 2 (Z₂, c₂) at angle θ_inc, producing a reflected wave p_ref at angle θ_ref and a transmitted / refracted wave p_trn at angle θ_trn; R denotes the reflectivity at the interface.)

p is pressure (force per unit area) [pascals: Pa = N/m² = J/m³ = kg/(m·s²)]
v is particle velocity [m/s].
p and v are signed scalar quantities that can vary over space and with time.
Z = p/v is the specific acoustic impedance [kg/(m²·s)] (analogous to Ohm's law: resistance = voltage / current)
For a plane harmonic wave: Z = ρ₀c, called the characteristic impedance
ρ₀ is density [kg/m³]
c is (wave) velocity [m/s]
Force: 1 dyne = 1 g·cm/s², 1 newton = 1 kg·m/s² = 10⁵ dyne

Boundary conditions [2, p. 88]:

Equilibrium of total pressure at the boundary: p_inc + p_ref = p_trn.
(The total pressure left of the interface is p_inc + p_ref, and pressure must be continuous across the interface [3, p. 324].)
Snell's law: sin θ_inc / sin θ_trn = c₁/c₂.
Continuity of particle velocity: v_inc cos θ_inc = v_ref cos θ_ref + v_trn cos θ_trn.
Angle of reflection: θ_ref = θ_inc (like a mirror).
From the picture we see that Z₁ = p_inc/v_inc, Z₁ = p_ref/v_ref, Z₂ = p_trn/v_trn. Substituting into the particle velocity condition:

(p_inc/Z₁) cos θ_inc = (p_ref/Z₁) cos θ_inc + (p_trn/Z₂) cos θ_trn,  so  1 + R = (Z₂ cos θ_inc)/(Z₁ cos θ_trn) (1 − R).

Thus the pressure reflectivity at the interface is

R = p_ref/p_inc = (Z₂ cos θ_inc − Z₁ cos θ_trn) / (Z₂ cos θ_inc + Z₁ cos θ_trn).

Only surfaces parallel to the detector (or wavefront) matter (others reflect away from the transducer), so θ_inc = θ_ref = θ_trn = 0. Thus the reflectivity or pressure reflection coefficient for waves at normal incidence to the surface is:

R = R₁₂ = p_ref/p_inc = (Z₂ − Z₁)/(Z₁ + Z₂) ≈ ΔZ/(2Z₀),

where Z₀ denotes the typical acoustic impedance of soft tissue and ΔZ = Z₂ − Z₁. Clearly −1 ≤ R ≤ 1, and R is unitless. Note that R₂₁ = −R₁₂.
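Evaluating R₁₂ for a few interfaces makes the magnitudes concrete. The sketch below uses textbook-typical characteristic impedances in MRayl (10⁶ kg/(m²·s)); treat the exact values as illustrative assumptions.

```python
def refl_coeff(z1, z2):
    """Pressure reflection coefficient R12 = (Z2 - Z1) / (Z1 + Z2), normal incidence."""
    return (z2 - z1) / (z1 + z2)

# Representative characteristic impedances [MRayl] (approximate textbook values)
Z = {"air": 0.0004, "fat": 1.38, "muscle": 1.70, "bone": 7.80}

for a, b in [("fat", "muscle"), ("muscle", "bone"), ("muscle", "air")]:
    print(f"{a} -> {b}: R = {refl_coeff(Z[a], Z[b]):+.3f}")
```

Soft-tissue interfaces reflect only a few to roughly ten percent of the pressure, bone much more (hence shadowing), and a tissue-air interface is almost totally reflecting, which is why a coupling gel is needed between transducer and skin.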



Typically ΔZ/Z₀ is only a few % in soft tissue, so tissues are weakly reflecting (not much energy loss). But shadows occur behind bones.

Also useful is the pressure transmittivity or pressure transmission coefficient:

τ₁₂ = p_trn/p_inc = (p_inc + p_ref)/p_inc = 1 + R₁₂ = 2Z₂/(Z₁ + Z₂) ≈ 1.

Note that τ₂₁ = 1 + R₂₁ = 1 − R₁₂.

It is fortunate that τ₁₂ ≈ 1! Ultrasound would be much more difficult otherwise.
Surface reflections are an extrinsic property, because they are related to the relative impedances between two tissues, rather than
representing just the characteristics of a single tissue.
The intensity of an ultrasonic wave is I = p²/(2Z). Thus the reflected and transmitted intensity ratios are

I_ref/I_inc = ((Z₂ − Z₁)/(Z₂ + Z₁))²,   I_trn/I_inc = 4Z₂Z₁/(Z₂ + Z₁)².

Note that I_inc = I_ref + I_trn, as one would expect from conservation of energy.
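The conservation identity follows from one line of algebra: (Z₂ − Z₁)² + 4Z₁Z₂ = (Z₁ + Z₂)². A numeric sketch (the helper names are mine):

```python
def intensity_refl(z1, z2):
    """Reflected intensity fraction: ((Z2 - Z1) / (Z2 + Z1))**2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

def intensity_trans(z1, z2):
    """Transmitted intensity fraction: 4 Z1 Z2 / (Z1 + Z2)**2."""
    return 4.0 * z1 * z2 / (z1 + z2) ** 2

# Energy conservation: the two fractions sum to 1 for any impedance pair
for z1, z2 in [(1.38, 1.70), (1.70, 7.80), (1.0, 1.0)]:
    assert abs(intensity_refl(z1, z2) + intensity_trans(z1, z2) - 1.0) < 1e-12
```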


Volumetric Scattering
On a microscopic level (less than or comparable to an ultrasonic wavelength), mechanical inhomogeneities inherent in tissue will
scatter sound.
Individual inhomogeneities are less than an ultrasound wavelength and distributed throughout the volume.
Backscatter coefficient = backscatter cross section per unit volume
These volumetric signals are very weak (typically 20 dB down from surface reflections) but are very useful for imaging because
they are an intrinsic property of the microstructure of tissue.
Volumetric scattering is nearly isotropic so that the backscattered component is always present and is representative of the tissue.
Volumetric scattering can give rise to speckle.
Can we see a tumor using ultrasound?

??

Summary
In reflection-mode ultrasound imaging, the images are representative reproductions of the reflectivity of the object. A cyst,
which is a nearly homogeneous fluid-filled region, has reflectivity nearly 0, so appears black on ultrasound image. Liver tissue,
which has complicated cellular structure with many small mechanical inhomogeneities that scatter the sound waves, appears as a
fuzzy gray blob (my opinion). Boundaries between organs or tissues with different impedances appear as brighter white curves in
the image.


Preview of A-mode scan


Assume the medium has a uniform sound speed c and is weakly reflecting, so we ignore 2nd order and higher reflections.
Also ignore (for now) attenuation.
Assume the transducer transmits an amplitude modulated pulse p(t) = a(t) e^{jω₀t}, where ω₀ = 2πf₀ is the carrier frequency
and a(t) is the envelope. In reality the modulation is sinusoidal, and one uses I,Q receiver processing for envelope detection.
(Figure: two ultrasonic pulses p(t) with their envelopes a(t) vs t [μsec], and the corresponding pulse magnitude spectra |P(f)| vs f [MHz]; the shorter pulse has the wider spectrum.)

Suppose at depths z₁, . . . , z_N there are interfaces with reflectivities R(z₁), . . . , R(z_N), i.e.,

R(z) = Σ_{n=1}^{N} R(z_n) δ(z − z_n). (Picture)

Then a (highly simplified) model for the signal received by the transducer is:

v(t) = K Σ_{n=1}^{N} R(z_n) p(t − 2z_n/c), (Picture)

where K is a constant gain factor relating to the impedance of the transducer, electronic preamplification, etc.
A natural estimate of the reflectivity is

R̂(z) = |v(2z/c)|, (Picture)

where |v(t)| is the envelope of the received signal.

One can display the estimated reflectivity R̂(z) as a function of depth z, simply by synchronizing the display device to show the amplitude of the received envelope signal |v(t)| (e.g., an analog scope trace). It is a plot of amplitude versus time (or depth), hence it is called an A-mode scan. It can be completely analog (and it was in the early days).

Is R̂(z) = R(z)? No. Even in this highly simplified model, there is blurring (in the z direction) due to the width of the pulse. Soon we will analyze the blur in all directions more thoroughly.
Also note that reflection coefficients can be positive or negative, but with envelope detection we lose the sign information. Hereafter we will ignore this detail and treat reflectivity R as a nonnegative quantity.
What happens if the sound velocity in some organ differs from the others?
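The simplified A-mode model v(t) = K Σₙ R(zₙ) p(t − 2zₙ/c) is easy to simulate. The sketch below uses a complex (analytic) pulse so that |v(t)| gives the envelope directly; the depths, reflectivities, and pulse parameters are illustrative assumptions.

```python
import numpy as np

C = 1540.0        # sound speed [m/s]
F0 = 2e6          # carrier frequency [Hz]
SIGMA = 0.4e-6    # Gaussian envelope width [s], a few carrier periods

def pulse(t):
    """Complex amplitude-modulated pulse p(t) = a(t) exp(j 2 pi f0 t)."""
    return np.exp(-(t / SIGMA) ** 2) * np.exp(2j * np.pi * F0 * t)

def a_mode(t, depths, refl, gain=1.0):
    """Simplified received signal: v(t) = K sum_n R(z_n) p(t - 2 z_n / c)."""
    v = np.zeros(t.shape, dtype=complex)
    for z_n, r_n in zip(depths, refl):
        v += gain * r_n * pulse(t - 2.0 * z_n / C)
    return v

t = np.arange(0.0, 130e-6, 25e-9)        # 0-130 usec at 40 MHz sampling
depths = np.array([0.02, 0.05, 0.08])    # reflector depths [m]
refl = np.array([0.5, -0.8, 0.3])        # signed reflection coefficients
v = a_mode(t, depths, refl)
z_hat = C * t / 2.0                      # time-to-depth mapping z = ct/2
R_hat = np.abs(v)                        # envelope detection: sign is lost
```

The envelope peaks land at the true depths, and the negative reflector shows up with magnitude 0.8: envelope detection discards the sign, as noted above.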
Synopsis of M-mode scan
If reflectivity is a function of time t, e.g., due to cardiac motion, then we have R(z; t). If the time scale is slow compared to the A-mode scan time (≈ 300 μsec), then just do multiple A-mode scans (1D) and stack them up to make a 2D image of R̂(z, t).


Illustration of A-mode scan


(Figure: stacked plots of the true reflectivity R(z) vs z [cm], the received signal v(t) vs t [μsec], and the estimated reflectivity R̂(z) vs z = ct/2 [cm].)

z = ct/2 (distance = rate × time, depth = distance / 2)
depth resolution vs pulse width
speckle
scan speed


Plane wave propagation


Assume a medium that is homogeneous, continuous, of infinite extent, and nondissipative (i.e., no energy is lost as the sound wave
propagates).
In ideal fluids (and for practical purposes in soft tissue), only longitudinal waves are propagated, i.e., the particles of the medium
are displaced from their equilibrium position in the direction of wave propagation only. Transverse waves, or shear waves cannot
be generated in an ideal fluid (essentially by definition of an ideal fluid).
If p(x, y, z, t) denotes the (acoustic) pressure at spatial coordinates (x, y, z) at time t, then after various linearizations (i.e., for small pressure changes) one can derive the simple wave equation, which must hold away from sources:

∇²p − (1/c²) ∂²p/∂t² = 0,

where

∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z².

Replacing ∇² with ∂²/∂z² yields the 1D wave equation, which has general solution

p(z, t) = φ_forward(t − z/c) + φ_backward(t + z/c),

where φ_forward and φ_backward are arbitrary twice differentiable functions.


Note that z/c is the time required for the wave to propagate the distance z.
One specific class of solutions to this equation is the monochromatic plane wave with frequency f:

p_f(z, t) = P(f) e^{j2πf(t − z/c)},

where P(f) is the amplitude, and for which φ_backward = 0. It is a simple calculus exercise to verify that

∇²p = (1/c²) ∂²p/∂t² = −((2πf)²/c²) p = −k²p,

where k = 2πf/c = 2π/λ is called the wave number. This confirms that plane waves satisfy the wave equation.
Because the simple wave equation is linear, any superposition of solutions is also a solution. Hence

p(z, t) = ∫ p_f(z, t) df = ∫ P(f) e^{j2πf(t − z/c)} df

is also a solution. Observe that p(z, t) = p(0, t − z/c), where p(0, t) = ∫ P(f) e^{j2πft} df = F⁻¹{P} = p(t).

Spherical waves

Another family of solutions to the wave equation is

p(r, t) = (1/r) φ_outward(t − r/c) + (1/r) φ_inward(t + r/c),

where r = √(x² + y² + z²).
A specific case is the spherical wave:

p(r, t) = (1/r) e^{j2πf(t − r/c)}.

Using the equality ∂r/∂x = x/r, one can verify that:

∇²p = (1/c²) ∂²p/∂t² = −k²p(r, t).


Source considerations
Now we begin to examine the considerations in designing the transducer and the transmitted pulse.
Transducer considerations
Definition of transducer: a substance or device, such as a piezoelectric crystal, microphone, or photoelectric cell, that converts
input energy of one form into output energy of another.
Transducer electrical impedance ∝ 1/area, so a smaller source means more noise (but better near-field lateral spatial resolution).
Higher carrier frequency means wider filter after preamplifier, so more noise (but better depth resolution).
Nonuniform gains for each element must be calibrated; errors in gains broaden PSF.
Pulse considerations

(Why a pulse? And what type?)

Consider an ideal infinite plane-reflector at a distance z from the transducer, and an acoustic wave velocity c.
If the transducer transmits a pulse p(t) (pressure wave), then (ignoring diffraction) ideally the received signal (voltage) would be

v(t) = p(t − 2z/c),

because 2z/c is the time required for the pulse to propagate from the transducer to the reflector and back.

Unfortunately, in reality the amplitude of the pressure wave decreases during propagation, and this loss is called attenuation.
It is caused by several mechanisms including absorption (wave energy converted to thermal energy), scattering (generation of
secondary spherical waves) and mode conversion (generation of transverse shear waves from longitudinal waves).
As a further complication, the effect of attenuation is frequency dependent: higher frequency components of the wave are attenuated
more. Thus, it is natural to model attenuation in the frequency domain to analyze what happens in the time domain.
Ideally the recorded echo could be expressed using the 1D inverse FT as follows:

v(t) = p(t − 2z/c) = ∫ P(f) e^{j2πf(t − 2z/c)} df.

A more realistic (phenomenological) model (but still ignoring frequency-dependent wave speed) accounts for the frequency-dependent attenuation as follows:

v(t) = ∫ e^{−2zα(f)} P(f) e^{j2πf(t − 2z/c)} df ≠ p(t − 2z/c),   (U.1)

where the amplitude attenuation coefficient α(f) increases with frequency |f|.


What are the units of α? ?? Why the factor of 2? ??
Attenuation causes two primary effects.
Signal loss (decreasing amplitude) with increasing depth z
Pulse dispersion due to frequency-dependent attenuation.

Narrowband pulses
The effect of signal loss is easiest to understand for a narrowband pulse. We say p(t) is narrowband if its spectrum is concentrated
near f ≈ f₀ and f ≈ −f₀, for some center frequency f₀.
(Figure: sketch of a narrowband spectrum P(f) concentrated near ±f₀, with the attenuation factor e^{−2zα(f)} overlaid.)


For a narrowband pulse, the following approximation to the effect of attenuation is reasonable:

e^{−2zα(f)} P(f) ≈ e^{−2zα(f₀)} P(f).   (U.2)

Substituting that approximation into (U.1) yields

v(t) ≈ e^{−2zα(f₀)} ∫ P(f) e^{j2πf(t − 2z/c)} df = e^{−2zα(f₀)} p(t − 2z/c).   (U.3)

In words, a narrowband pulse is simply attenuated according to the attenuation coefficient at the carrier frequency f₀; the pulse shape itself is not distorted (no dispersion). We will use this approximation throughout our discussion of the ultrasound PSF.
In (U.3), the attenuation increases with range z. Therefore one usually applies attenuation correction to try to compensate for this loss:

v_c(t) ≜ e^{2zα(f₀)}|_{z=ct/2} v(t) = e^{ctα(f₀)} v(t).   (U.4)
What is the drawback of using narrowband pulses? They are wider in time, providing poorer range resolution.

Furthermore, for smaller wavelengths (higher frequency) there is more attenuation, so less signal, so lower SNR. This is an example
of the type of resolution-noise tradeoff that is present in all imaging systems.
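The narrowband loss e^{−2zα(f₀)} and the correction (U.4) can be made concrete numerically. The sketch assumes a typical 1 dB/cm/MHz attenuation slope; the helper names are mine.

```python
import numpy as np

ALPHA_DB = 1.0       # attenuation slope [dB/cm/MHz], typical soft-tissue value
F0_MHZ = 5.0         # carrier frequency [MHz]
C_CM_US = 0.154      # sound speed [cm/usec]

def attenuation_gain(z_cm):
    """Round-trip amplitude factor e^{-2 z alpha(f0)} for depth z [cm].
    Converts dB to nepers via 1 dB = ln(10)/20 nepers."""
    alpha_np = ALPHA_DB * F0_MHZ * np.log(10) / 20.0   # nepers/cm at f0
    return np.exp(-2.0 * z_cm * alpha_np)

def tgc_gain(t_us):
    """Attenuation correction (U.4): gain e^{c t alpha(f0)} applied at echo time t,
    using z = c t / 2; inverts attenuation_gain exactly in this model."""
    return 1.0 / attenuation_gain(C_CM_US * t_us / 2.0)
```

At 5 MHz and 5 cm depth the round-trip loss is 2 × 5 × 5 × 1 = 50 dB, i.e., an amplitude factor of about 3 × 10⁻³, which is why the receive-chain dynamic range (and the TGC sliders on a scanner) matters.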
Dispersion
In practice, to provide adequate range resolution, medical ultrasound systems use fairly wideband pulses (cf. the earlier pulse spectra figure) that do not satisfy the narrowband approximation (U.2).
For general wideband pulses, it is difficult to simplify (U.1) to analyze the time-domain effects of frequency-dependent attenuation.
Qualitatively, because different frequency components are attenuated by different amounts, the pulse shape is distorted. Multiplication by H(f) = e^{−2zα(f)} in the frequency domain is equivalent to convolution with some impulse response h(t) in the time domain. This convolution will spread out the pulse, and the resulting distortion is called dispersion.
It is easiest to illustrate dispersion by evaluating (U.1) numerically, as illustrated in the following figure.

(Figure: simulation with attenuation of 1 dB/MHz/cm at f₀ = 1 MHz. Stacked plots of the reflectivity R(z) vs z [cm]; the hypothetical received signal v(t) with no attenuation; the hypothetical received signal under the narrowband assumption; the received signal with attenuation/dispersion; and the received signal v_c(t) after attenuation correction, all vs t [μsec].)


Amplitude modulated pulses

Although dispersion is challenging to analyze for general pulses, it is somewhat easier for amplitude modulated pulses of the form p(t) = a(t) cos(2πf₀t), where a(t) denotes the envelope of the pulse and f₀ denotes the carrier frequency.
By Euler's identity we can write cos(2πf₀t) = ½ e^{j2πf₀t} + ½ e^{−j2πf₀t}, and usually it is easier to analyze one of those terms at a time.
Therefore, often hereafter we consider amplitude modulated pulses of the form p(t) = a(t) e^{j2πf₀t}.
The corresponding spectrum is P(f) = A(f − f₀), where a(t) ↔ A(f) is an FT pair.

Define the recentered signal v_z(t) ≜ v(t + 2z/c). Without attenuation, we would have v_z(t) = p(t). Accounting for attenuation:

v_z(t) = ∫ e^{−2zα(f)} P(f) e^{j2πft} df = ∫ e^{−2zα(f)} A(f − f₀) e^{j2πft} df
       = e^{j2πf₀t} ∫ e^{−2zα(f′+f₀)} A(f′) e^{j2πf′t} df′ = a_z(t) e^{j2πf₀t},

by making the change of variables f′ = f − f₀, where a_z(t) = d_z(t) ∗ a(t) is the envelope for a reflection from depth z accounting for attenuation, and d_z(t) is the time-domain signal (dispersion function) with Fourier transform D_z(f) = e^{−2zα(f+f₀)}.
Note d₀(t) = δ(t).
Thus the envelope of the recentered received signal is

|v_z(t)| = |a_z(t)| = |d_z(t) ∗ a(t)|.

This depth-dependent blurring reduces depth spatial resolution.
One can use the above analysis to study dispersion effects (HW).
Example. Dispersion for a rect pulse envelope (which is not narrowband) is shown below.
(Each echo is normalized to have unity maximum for display.)
(Figure: dispersed pulse envelopes |v(t)| for depths z = 0, 4, 8, and 12 cm; the envelope spreads increasingly with depth.)

How can we reduce dispersion? ??


Gaussian envelopes
On a log scale, attenuation is roughly linear in frequency (about 1 dB/cm/MHz) over the frequency range of interest.
I.e., α(f) ≈ α|f| for f between 1 and 10 MHz, where α₁ ≜ (20 log₁₀ e) α is the slope in dB/cm/MHz, so for α₁ = 1: α = α₁/(20 log₁₀ e) ≈ 0.1 MHz⁻¹ cm⁻¹.
The property α(f) ≈ α|f| provides an opportunity to minimize dispersion: use pulses with Gaussian envelopes: A(f) = e^{−w²f²}, where w is related to the time-width of the envelope.
Assuming f₀ ≫ 0,

e^{−2zα(f+f₀)} A(f) = e^{−2zα|f+f₀|} A(f) ≈ e^{−2zα(f+f₀)} A(f).

(Do not use this approximation in HW.)

With this approximation:

e^{−2zα|f+f₀|} A(f) ≈ e^{−2zα(f+f₀)} A(f) = e^{−[2zα(f+f₀) + w²f²]}.

Complete the square in the exponent:

2zα(f + f₀) + w²f² = w²(f + f_z)² − w²f_z² + 2zα₀,

where α₀ = αf₀ = α(f₀) is the attenuation coefficient at the carrier frequency,
and f_z = αz/w² is an attenuation-induced (apparent) frequency shift. Thus

e^{−2zα(f+f₀)} A(f) ≈ e^{−2zα₀} A(f + f_z) e^{(wf_z)²}.

So in the time domain, for this Gaussian envelope model:

a_z(t) = ∫ e^{−2zα(f+f₀)} A(f) e^{j2πft} df ≈ e^{−2zα₀} e^{(wf_z)²} a(t) e^{−j2πf_z t},

which has no dispersion, just extra gain factors that can be compensated and a phase factor that disappears with envelope detection.
So in principle, using envelopes that are approximately Gaussian is attractive.
A typical imaging transducer has a fractional bandwidth of about 30-50%. This means that the envelope a(t) has a duration of about 2-3 periods of the carrier, i.e., 2-3 wavelengths depth resolution (cf. earlier figure).
Summary
Depth resolution is determined by the width of the acoustic pulse.
Resolution improves as the pulse becomes shorter / higher frequency.
Attenuation also increases with increasing frequency.
Attenuation causes signal loss and dispersion.
Gaussian pulse envelopes are less sensitive to dispersion effects.
Notes:
FT pair: e^{−|t|} ↔ 2/(1 + (2πf)²)
See [4] for an example of more sophisticated attenuation compensation.
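The complete-the-square step above is easy to verify symbolically; a sympy sketch:

```python
import sympy as sp

f, f0, z, w, alpha = sp.symbols("f f0 z w alpha", positive=True)

# Exponent of e^{-2 z alpha (f + f0)} e^{-w^2 f^2} before completing the square
exponent = 2 * z * alpha * (f + f0) + w**2 * f**2

# Claimed form, with fz = alpha z / w^2 and alpha0 = alpha f0
fz = alpha * z / w**2
alpha0 = alpha * f0
completed = w**2 * (f + fz) ** 2 - w**2 * fz**2 + 2 * z * alpha0

assert sp.expand(exponent - completed) == 0
```

So the attenuated spectrum is just a scaled, frequency-shifted Gaussian: no change of shape, hence no dispersion.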


B-mode scan: near-field analysis


Now we begin to study the PSF of reflection-mode ultrasound imaging, specifically the brightness mode scan (B-mode scan).
We begin with near-field analysis of a mechanically scanned transducer. This analysis is quite approximate, but still the process
is a useful preview to the more complete (but more complicated) diffraction analysis that follows. The steps are as follows.
Derive an approximate signal model.
Use that model to specify a (simple) image formation method.
Relate the expression for the formed image R̂(x, y, z) to the ideal image R(x, y, z) to analyze the PSF of the system.
Near-field signal model

(Figure: a transducer with aperture s(x, y) centered at (x₀, y₀) faces reflectors at depths z₁ and z₂; the received envelope |v(t)| shows echoes at times 2z₁/c and 2z₂/c whose amplitudes decay like e^{−2αz}/z.)

We first focus on the near field of a mechanically scanned transducer, illustrated above, making these simplifying assumptions.
Single transducer element
Face of transducer much larger than wavelength of propagating wave, so incident pressure approaches geometric extension of
transducer face s(x, y) (e.g., circ or rect function). Called piston mode.
Neglect diffraction spreading on transmit
Uniform propagation velocity c
Uniform linear attenuation coefficient α, assumed frequency independent, i.e., ignoring dispersion. (Focus on lateral PSF.)
Body consists of isotropic scatterers with scalar reflectivity R(x, y, z).
No specular reflections: structures small relative to wavelength, or large but very rough surfaces.
Amplitude-modulated pulse p(t) = a(t) e^{jω₀t}
Weakly reflecting medium, so ignore 2nd order and higher reflections. (See HW.)
Pressure propagation: approximate analysis
Suppose the transducer is translated to be centered at (x₀, y₀), i.e., s(x − x₀, y − y₀).
Let p_inc^{(x₀,y₀)}(x, y, z, t) denote the incident pressure wave that propagates in the z direction away from the transducer.
Assume that the pressure at the transducer plane (z = 0) is:

p_inc^{(x₀,y₀)}(x, y, 0, t) = s(x − x₀, y − y₀) p(t) = s(x − x₀, y − y₀) a(t) e^{jω₀t}.

Ignoring transmit spreading, the incident pressure is a spatially truncated (due to transducer size) and attenuated pressure wave:

p_inc^{(x₀,y₀)}(x, y, z, t) = p_inc^{(x₀,y₀)}(x, y, 0, t − z/c) e^{−αz} = s(x − x₀, y − y₀) p(t − z/c) e^{−αz},

where the time shift t − z/c is simple propagation and e^{−αz} is the attenuation.


We need to determine the reflected pressure p_ref^{(x₀,y₀)}(x, y, 0, t) incident on the transducer. We do so by using superposition. We first find p_ref^{(x₀,y₀)}(x, y, 0, t; x₁, y₁, z₁), the pressure reflected from a single ideal point reflector located at (x₁, y₁, z₁), and then compute the overall reflected pressure by superposition (assuming linearity, i.e., small acoustic perturbations):

p_ref^{(x₀,y₀)}(x, y, 0, t) = ∫∫∫ R(x₁, y₁, z₁) p_ref^{(x₀,y₀)}(x, y, 0, t; x₁, y₁, z₁) dx₁ dy₁ dz₁.

An ideal point reflector located at (x₁, y₁, z₁), i.e., for R(x, y, z) = δ(x − x₁, y − y₁, z − z₁), would exactly reflect whatever pressure is incident at that point, and produce a wave traveling back towards the transducer. If the point reflector is sufficiently far from the transducer plane, then the spherical waves are approximately planar by the time they reach the transducer. (Admittedly this seems to be contrary to the near-field assumption.) Thus we assume:

p_ref^{(x₀,y₀)}(x, y, z, t; x₁, y₁, z₁) = p_inc^{(x₀,y₀)}(x₁, y₁, z₁, t + (z − z₁)/c) e^{−α(z₁−z)} / (z₁ − z),

where the time shift t + (z − z₁)/c is simple propagation, e^{−α(z₁−z)} is attenuation, and the factor 1/(z₁ − z) is due to diffraction spreading of the energy on return. In particular, back at the transducer plane (z = 0):

p_ref^{(x₀,y₀)}(x, y, 0, t; x₁, y₁, z₁) = p_inc^{(x₀,y₀)}(x₁, y₁, z₁, t − z₁/c) e^{−αz₁} (1/z₁)
= s(x₁ − x₀, y₁ − y₀) p(t − 2z₁/c) e^{−2αz₁}/z₁.

Applying the superposition integral:

p_ref^{(x₀,y₀)}(x, y, 0, t) = ∫∫∫ R(x₁, y₁, z₁) p_ref^{(x₀,y₀)}(x, y, 0, t; x₁, y₁, z₁) dx₁ dy₁ dz₁
= ∫∫∫ R(x₁, y₁, z₁) s(x₁ − x₀, y₁ − y₀) p(t − 2z₁/c) (e^{−2αz₁}/z₁) dx₁ dy₁ dz₁.   (U.5)

The output signal from an ideal transducer would be proportional to the integral of the (reflected) pressure that impinges on its face. The constant of proportionality is unimportant for the purposes of qualitative visual display and resolution analysis. (It would affect quantitative SNR analyses.) For convenience we assume:

v(x₀, y₀, t) = (1 / ∫∫ s(x, y) dx dy) ∫∫ s(x − x₀, y − y₀) p_ref^{(x₀,y₀)}(x, y, 0, t) dx dy,

where we reiterate that the received signal depends on the transducer position (x₀, y₀), because we will be moving the transducer.
Under the (drastic) simplifying assumptions made above, the pressure p_ref^{(x₀,y₀)}(x, y, 0, t) is independent of x, y, so by (U.5) the recorded signal is simply:

v(x₀, y₀, t) = p_ref^{(x₀,y₀)}(·, ·, 0, t)
= ∫∫∫ R(x₁, y₁, z₁) s(x₁ − x₀, y₁ − y₀) e^{jω₀(t − 2z₁/c)} a(t − 2z₁/c) (e^{−2αz₁}/z₁) dx₁ dy₁ dz₁
≈ (e^{−αct}/(ct/2)) ∫∫∫ R(x₁, y₁, z₁) s(x₁ − x₀, y₁ − y₀) a(t − 2z₁/c) dx₁ dy₁ dz₁,   (U.6)

where we assume the pulse envelope is narrow, i.e., a(t) ≈ δ(t), so that the slowly varying gain term e^{−2αz₁}/z₁ can be evaluated at z₁ = ct/2 and pulled outside the integral.
(See the picture above for a sketch of the signal. Note the distance-dependent loss e^{−2αz₁}/z₁.)

To help interpret (U.6), consider the most idealized case where s(x, y) = δ₂(x, y) (tiny transducer) and a(t) = δ(t) (short pulse). Then by the Dirac impulse sifting property:

v(x₀, y₀, t) = R(x₀, y₀, z₁) (e^{−2αz₁}/z₁) |_{z₁=ct/2}.   (U.7)

Near-field image formation

How do we perform image formation, i.e., form an image R̂(x, y, z) from the received signal(s) v(x₀, y₀, t)?
Frequently this question is answered first by considering very idealized signal models such as (U.6) or (U.7).
In light of the ideal relationship (U.7), we must:
relate time to distance using z = ct/2 or t = 2z/c, and
try to compensate for the signal loss due to attenuation and spreading by multiplying by a suitable gain term.
(In practical systems, the gain as a function of depth is adjusted both automatically and by manual sliders.)
Rearranging (U.7) leads to the following very simple image formation relationship for estimating reflectivity:

R̂(x, y, z) ≜ (ct/2) e^{αct} |v(x, y, t)| |_{t=2z/c},   (U.8)

where (ct/2) e^{αct} is the gain term. This time/depth-dependent gain is called attenuation correction.

Note that we must translate (scan) the transducer to every (x, y) position where we want to observe R(x, y, z).

Near-field PSF

(Geometric PSF)

y, z) relate to the true reflectivity R(x, y, z)?


How does our estimated image R(x,
If we substituted the extremely approximate signal model (U.7) into the image formation expression (U.8) we would conclude
y, z) = R(x, y, z) .
erroneously that R(x,
Although simple measurement models are often adequate for designing simple image formation methods, when we want to understand the limitations of such methods usually we must analyze more accurate models.
Substituting the (somewhat more accurate) signal model (U.6) into the image formation expression (U.8) yields

R̂(x, y, z) = ∭ R(x_1, y_1, z_1) s(x_1 − x, y_1 − y) (z e^{2 α z}) (e^{−2 α z_1} / z_1) a((2/c)(z − z_1)) dx_1 dy_1 dz_1
           ≈ ∭ R(x_1, y_1, z_1) s(x_1 − x, y_1 − y) a((2/c)(z − z_1)) dx_1 dy_1 dz_1,

where the approximation is reasonable provided the pulse is sufficiently narrow (so that z_1 ≈ z wherever a((2/c)(z − z_1)) is nonzero, and the gain factors cancel the loss factors).
Under all of the (unrealistic) simplifying assumptions we have made, the PSF has turned out to be space invariant, and the final superposition integral simplifies to the form of a convolution:

R̂(x, y, z) ≈ (R ∗∗∗ h_Geometric)(x, y, z),

where the geometric PSF is given by:

h_Geometric(x, y, z) = s(x, y) a(2z/c).
This PSF is separable between the transverse plane (x, y) and range z.
Now we can address how system design affects the imaging PSF.
The lateral or transverse spatial resolution is determined by the transducer shape.
The depth resolution is determined by the pulse envelope.


Example. For a 4 mm square transducer, the PSF is 4 mm wide in the x, y plane.
A typical pulse envelope has a duration of 2-3 periods of its carrier, i.e., its width is roughly Δt ≈ 2/f_0, so the width of a(2z/c) is
roughly Δz = c Δt / 2 = c/f_0 = λ. If c = 1500 m/s and f_0 = 1 MHz, then the wavelength is λ = c/f_0 = 1.5 mm.
(Figure: near-field PSF h(x, 0, z); the lateral width is set by the aperture, the depth width by the pulse envelope.)

So we can improve the depth resolution by using a higher carrier frequency f0 . What is the tradeoff? More attenuation! So we
have a resolution-noise tradeoff. Such tradeoffs exist in all imaging modalities.
Although this simplified analysis suggests that the ideal transducer would be very small, the analysis assumed at the outset that the
transducer is large (relative to the wavelength)! So it is premature to draw definitive conclusions about designing transducer size.
However, when we properly account for diffraction, we will see that the PSF h is not space invariant, so it will not be possible
to write the superposition integral as a convolution. Virtually all of the triple convolutions in Ch. 9 and Ch. 10 of Macovski are
incorrect. But the superposition integrals that precede the triple convolutions are fine.
Even though the PSF will vary with depth z, it is still quite interpretable.
Using more detailed analysis, one can show that the geometric model is reasonable for z < D^2/(2λ) for a square transducer or
z < D^2/(4λ) for a circular transducer [3, p. 333]. The Fresnel region extends from that point out to z = D^2/λ. Beyond that is the
Fraunhofer region or far field.
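A quick worked example of these region boundaries, using made-up transducer numbers (the D and λ below are assumptions, not values from the notes):

```python
# Hypothetical example: region boundaries for a circular transducer.
D = 10.0    # transducer diameter [mm]
lam = 1.0   # wavelength [mm]

z_geom = D**2 / (4 * lam)   # geometric model reasonable for z below this [mm]
z_far = D**2 / lam          # Fraunhofer (far field) region beyond this [mm]
print(z_geom, z_far)        # the Fresnel region lies in between
```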
A-mode scan
If the transducer is held at a fixed position (x_0, y_0) and a plot of reflectivity vs depth, i.e., R̂(x_0, y_0, z) vs z, is made, then this is called an A-mode scan.

B-mode scan (Brightness mode; the usual mode)
Translate the transducer (laterally) to a different location (x, y) (usually fixing x or y and translating w.r.t. the other).
Typically x-motion by mechanical translation of the transducer, y-motion by manual selection of the operator.
Form lines of the image from the different positions.
Assume transducer motion is slow relative to pulse travel.
Everything is shift-invariant w.r.t. x and y due to scanning.
What if sound velocity c is not constant (i.e., varies in different tissue types)?


Depth variance of PSF


It would be nice if, in general for an A-mode scan, we could find a PSF h_psf(x, y, z) for which

|v_c(t)| = |(R ∗∗∗ h_psf)(0, 0, t c/2)|,

so that we could form a line of the image by:

R̂(0, 0, z) = |v_c(2z/c)| = |(R ∗∗∗ h_psf)(0, 0, z)|.

Unfortunately, for propagation models that are more realistic than those we used above, the PSF is depth-variant (varies with z), so we cannot express v_c(t) or R̂ as a 3D convolution like the above.
Nearly all equations in Ch. 9 and 10 of Macovski containing ∗∗∗ are incorrect! The integral equations are fine.
We will settle for describing the system through integrals like the following:

R̂(0, 0, z) ≈ ∭ R(x_1, y_1, z_1) e^{i k 2 r_1} b^2(x_1, y_1, z_1) a((2/c)(z − z_1)) dx_1 dy_1 dz_1.

Here:
- b(x, y, z) determines primarily the lateral resolution at any depth z and varies slowly with z. It is called the beam pattern. Both the transmit and receive operations have an associated beam pattern; the overall beam pattern is the product of the transmit beam pattern and the receive beam pattern. For a single transducer, these transmit and receive beam patterns are identical, so the PSF contains the squared term b^2.
- a((2/c)(z − z_1)) determines primarily the depth resolution.
- e^{i k 2 r_1}, where r_1 = sqrt(x_1^2 + y_1^2 + z_1^2), is an unavoidable (and unfortunate) phase term; k = 2π/λ is the wave number. (Its presence causes destructive interference, aka speckle.)

If we image by translating the transducer, then everything will be translation invariant w.r.t. x and y, so in particular:

R̂(x, y, z) ≈ ∭ R(x_1, y_1, z_1) e^{i 2 k r_1} b^2(x_1 − x, y_1 − y, z_1) a((2/c)(z − z_1)) dx_1 dy_1 dz_1
           = ∭ R(x_1, y_1, z_1) e^{i 2 k r_1} h(x − x_1, y − y_1, z − z_1; z_1) dx_1 dy_1 dz_1,

where

h(x, y, z; z_1) = b^2(x, y, z_1) a((2/c) z).

Due to the explicit dependence of the PSF on depth z1 , the PSF is shift variant or, in this case, depth dependent.
Mathematical interpretation: if R(x, y, z) = δ_3(x − x_1, y − y_1, z − z_1), then

R̂(x, y, z) ≈ b^2(x_1 − x, y_1 − y, z_1) a((2/c)(z − z_1)),

which is the lateral PSF b^2(·, ·, z_1) at depth z_1, translated to (x_1, y_1), and blurred out in depth z by the pulse a((2/c) ·).

Physical interpretation: the PSF h(x, y, z; z1 ) describes how much the reflectivity from point (x, y, z1 ) will contaminate our estimate of reflectivity at (0, 0, z).
Goals:
Find b, interpret, and simplify
Study how b varies with transducer size/shape.
Final form appears in (9.38):

b_Fraunhofer(θ) = (cos θ / λ) S_X(sin θ / λ).

Intermediate assumptions along the way are important to understand to see why in practice (and in the project) not everything
agrees with these predictions.
Mostly follow Macovski notation, filling in some details, and avoiding the potentially ambiguous notation f(t) ∗ g(t) ∗ h(t).


A-mode scan: Diffraction analysis


Diffraction: any deviation of light rays from rectilinear paths that cannot be interpreted as reflection or refraction.
Why diffraction? Goal: a more accurate PSF, because in reality the wavelength is fairly large relative to the aperture.
(f_0 = 1.5 MHz, c = 1500 m/s, λ = c/f_0 = 1 mm.) (AM radio: 540 kHz to 1.6 MHz.)
A major factor in determining spatial resolution is diffraction spreading. References: [5], [6].
Geometry

(Figure: transducer s(x, y) in the z = 0 plane, with transducer points P_0 = (x_0, y_0, 0) and P_0′ = (x_0′, y_0′, 0), a reflector at P_1 = (x_1, y_1, z_1), and distances r_01, r_10, r_1 with angles θ_01, θ_10, θ_1.)

The transducer defines the (x, y, 0) plane. Define: P_0 ≜ (x_0, y_0, 0), P_0′ ≜ (x_0′, y_0′, 0), P_1 ≜ (x_1, y_1, z_1).
Shorthand for radial distances:

r_01 = ‖P_0 − P_1‖ = ‖(x_0, y_0, 0) − (x_1, y_1, z_1)‖ = sqrt((x_1 − x_0)^2 + (y_1 − y_0)^2 + z_1^2).

Similarly define r_10 = ‖P_1 − P_0‖ and r_1 = ‖P_1‖.

Later we will assume that P_1 is sufficiently far from the transducer (relative to the transducer size) that

cos θ_01 ≈ cos θ_10 ≈ cos θ_1 = z_1 / r_1  and  r_01 ≈ r_10 ≈ r_1.

The latter approximation applies only within functions that vary slowly with r, like 1/r_01, but not in terms like e^{i k r_01}.
Superposition
The main ingredient of diffraction analysis is the Huygens-Fresnel Principle: superposition!
Pressure at P1 is superposition of contributions from each point on transducer, where each point can be thought of as a point source
emitting a spherical wave.
Superposition requires that we assume linearity of the medium, which means the pressure perturbations must be sufficiently
small. Modern ultrasound systems include harmonic imaging modes where nonlinear effects are exploited, not considered here.


Monochromatic case
Start with a monochromatic wave (called continuous-wave diffraction):

u(P, t) = U_a(P) cos(2π f t − φ(P)) = Real[U(P) e^{−i 2π f t}], where U(P) = U_a(P) e^{i φ(P)}

is a complex phasor, and the position is P = (x, y, z).
Note that everywhere the wave (pressure) oscillates at the same frequency; only the amplitude and phase differ from point to point.
Rayleigh-Sommerfeld Theory
Using:
- the linear wave equation
- the Helmholtz equation: (∇^2 + k^2) U = 0
- linearity and superposition
- Green's theorem
- ...
Goodman [5] shows that:

U(P_1) = (1/(iλ)) ∬ U(P_0) (cos θ_01 / r_01) e^{i k r_01} dx_0 dy_0 = ∬ h(P_1, P_0) U(P_0) dx_0 dy_0

for r_01 ≫ λ, where the point spread function for the phasor U(P) is

h(P_1, P_0) = (1/(iλ)) (cos θ_01 / r_01) e^{i k r_01}.

The wavenumber k is defined as k = ω_0/c = 2π/λ.


Physical interpretation of the above diffraction integral:
- ∬ means integrate over the transducer (or transducer plane).
- cos θ_01 is the obliquity factor (later assumed ≈ cos θ_1).
- 1/r_01 is the 1/r falloff of amplitude (conservation of energy on a sphere, whose surface area is proportional to r^2).
- e^{i k r_01} = e^{i ω_0 (r_01/c)} is the phase change due to propagation over distance r_01 (a time delay of r_01/c).
The reciprocity theorem of Helmholtz states that h(P_0, P_1) = h(P_1, P_0).
Propagation is shift-invariant; translating the entire coordinate system has no effect on wave propagation.
Polychromatic case
By Fourier decomposition, Goodman [5] shows that, assuming r_01 ≫ λ, for polychromatic waves the pressure at point P_1 relates to the pressure at the transducer plane as follows:

u(P_1, t) = ∬ (cos θ_01 / r_01) (1/(2π c)) (d/dt) u(P_0, t − r_01/c) dx_0 dy_0.   (Goodman:3-33)

This is the starting point for our analysis.


For the d/dt factor, cf. shaking a rope: a large slow displacement vs a small fast shake.
Note that we are ignoring attenuation to focus on diffraction effects.
We use u, not p, for pressure here, consistent with Goodman / Macovski, because we use P for points and p(t) for pulses.
Preview
Our strategy now will be to combine (Goodman:3-33) with the principles of superposition and reciprocity, by analogy with
the preceding near-field analysis. One approach is to use superposition first, and then make simplifying approximations [8].
An alternative derivation considered here is to first simplify (Goodman:3-33) by making several approximations, and then use
superposition and reciprocity.

Insonification (filling the volume with acoustic wavelets)

If the transducer is pulsed coherently over its face (piston mode) with output pressure p0 (t), then at the transducer plane:
u(P_0, t) = s(x_0, y_0) p_0(t),   (9.15)

i.e., at the plane z = 0 the pressure is zero everywhere except over the transducer face.

Narrowband approximation (for amplitude modulated pulses)


Assume we use an amplitude modulated pulse:

p_0(t) = a_0(t) e^{−i ω_0 t},

where a_0(t) includes the transducer's impulse response. In practice, one can determine the pulse envelope a_0(t) experimentally using wire phantoms.
From (Goodman:3-33), we see we will need derivatives of the pressure. These expressions simplify considerably if we assume that the pulse is narrowband. In short, a narrowband amplitude modulated pulse satisfies the following approximation:

(d/dt) [a_0(t) e^{−i ω_0 t}] ≈ −i ω_0 a_0(t) e^{−i ω_0 t}, i.e., (d/dt) p_0(t) ≈ −i ω_0 p_0(t).

In particular, because c = λ f_0, under the narrowband approximation the time derivative of the input pressure is:

(1/(2π c)) (d/dt) u(P_0, t) ≈ (−i ω_0/(2π c)) u(P_0, t) = (1/(iλ)) s(x_0, y_0) p_0(t).   (U.11)

To explore the narrowband approximation in the time domain, use the product rule:

p_1(t) ≜ (i/(2π f_0)) (d/dt) p_0(t) = a_1(t) e^{−i ω_0 t}, where a_1(t) ≜ (i/(2π f_0)) ȧ_0(t) + a_0(t).

One way of defining a narrowband pulse is to require that |ȧ_0(t)| ≪ f_0 |a_0(t)|, in which case a_1(t) ≈ a_0(t), so p_1(t) ≈ p_0(t).
More typically, we define a narrowband pulse in terms of its spectrum, namely that the width of the frequency response of a_0(t) is much smaller than the carrier frequency f_0. Because p_0(t) = a_0(t) e^{−i ω_0 t}, in the frequency domain P_0(f) = A_0(f + f_0).
By the derivative property of Fourier transforms:

P_1(f) = (i/(2π f_0)) (i 2π f) P_0(f) = (−f/f_0) A_0(f + f_0) ≈ (−(−f_0)/f_0) A_0(f + f_0) = A_0(f + f_0) = P_0(f),

because A_0(f + f_0) is concentrated near f = −f_0. Thus, taking the inverse FT: p_1(t) = (i/(2π f_0)) (d/dt) p_0(t) ≈ p_0(t).
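The claim p_1(t) ≈ p_0(t) is easy to check numerically. The following sketch assumes a Gaussian envelope whose bandwidth is well below the carrier (both values are made up for illustration):

```python
import numpy as np

f0 = 1e6                                   # carrier frequency [Hz]
t = np.linspace(-5e-6, 5e-6, 20001)        # time axis [s]
a0 = np.exp(-(t / 2e-6) ** 2)              # slow envelope: bandwidth << f0
p0 = a0 * np.exp(-1j * 2 * np.pi * f0 * t)

# p1 = (i / (2 pi f0)) d/dt p0, with the derivative evaluated numerically
p1 = 1j / (2 * np.pi * f0) * np.gradient(p0, t)

err = np.max(np.abs(p1 - p0)) / np.max(np.abs(p0))
print(err)   # small, because |da0/dt| << 2 pi f0 |a0|
```

The residual error is on the order of the envelope bandwidth divided by the carrier frequency, which is exactly the narrowband condition.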

Simplified incident pressure


At this point we also assume that cos θ_01 ≈ cos θ_10 ≈ cos θ_1 and r_01 ≈ r_10 ≈ r_1, within terms that vary slowly with those quantities. Combining (Goodman:3-33) with (U.11) leads to the following approximation for the incident pressure:

u(P_1, t) ≈ (cos θ_1 / r_1) ∬ (1/(iλ)) u(P_0, t − r_01/c) dx_0 dy_0, where r_01 = sqrt((x_0 − x_1)^2 + (y_0 − y_1)^2 + (0 − z_1)^2),
          = (1/(iλ)) (cos θ_1 / r_1) e^{−i ω_0 t} ∬ s(x_0, y_0) e^{i k r_01} a_0(t − r_01/c) dx_0 dy_0.   (U.12)


Steady-state approximation (for narrowband pulse)


The steady-state or plane-wave approximation is:

a_0(t − r_01/c) ≈ a_0(t − r_1/c).   (9.21)

This assumes the envelope of the waveform emitted from all parts of the transducer arrives at point P_1 at about the same time.
- Need pulse width τ ≫ D^2/(8 r_1 c), where D is the transducer diameter.
- If τ = 3/f_0, then we need r_1 ≫ D^2/(24 λ) ≈ 4 mm if D = 10 mm and λ = 1 mm.
- Accurate for long pulses (narrowband). But short pulses give better depth resolution...
- Poor approximation for large transducers or small depths z_1.
- Makes lateral resolution determined by the relative phases over the transducer, not by the pulse envelope.
Applying the steady-state or plane-wave approximation (9.21) to (U.12) yields the final incident pressure field approximation:

u(P_1, t) ≈ (1/(iλ)) e^{−i ω_0 t} (cos θ_1 / r_1) [ ∬ s(x_0, y_0) e^{i k r_01} dx_0 dy_0 ] a_0(t − r_1/c).
If point P_1 has reflectivity R(P_1), then by reciprocity the contribution of that (infinitesimal) point to the (differential) pressure reflected back to transducer point P_0 is (applying again the narrowband and steady-state approximations):

u(P_0, t; P_1) = (cos θ_10 / r_10) (1/(2π c)) (d/dt) [ R(P_1) u(P_1, t − r_10/c) ]
             ≈ R(P_1) (1/(iλ)) (cos θ_1 / r_1) u(P_1, t − r_10/c)   [using the narrowband approximation]
             ≈ R(P_1) (1/(iλ))^2 e^{−i ω_0 t} (cos θ_1 / r_1)^2 [ ∬ s(x_0′, y_0′) e^{i k r_0′1} dx_0′ dy_0′ ] e^{i k r_10} a_0(t − 2 r_1/c),   [using steady-state]

where r_0′1 denotes the distance from transmit point P_0′ = (x_0′, y_0′, 0) to P_1. Now apply superposition over all possible reflectors in the 3D object space:

u(P_0, t) = ∭ u(P_0, t; P_1) dP_1
          ≈ (1/(iλ))^2 e^{−i ω_0 t} ∭ R(P_1) (cos θ_1 / r_1)^2 [ ∬ s(x_0′, y_0′) e^{i k r_0′1} dx_0′ dy_0′ ] e^{i k r_10} a_0(t − 2 r_1/c) dP_1.

Signal model
Assuming transducer linearity, the output signal is (proportional to) the integral of the reflected pressure over the transducer face:

v(t) = ∬ s(x_0, y_0) u(P_0, t) dx_0 dy_0
     ≈ K (1/(iλ))^2 e^{−i ω_0 t} ∬ s(x_0, y_0) [ ∭ R(P_1) (cos θ_1/r_1)^2 ( ∬ s(x_0′, y_0′) e^{i k r_0′1} dx_0′ dy_0′ ) e^{i k r_10} a_0(t − 2 r_1/c) dP_1 ] dx_0 dy_0
     = K (1/(iλ))^2 e^{−i ω_0 t} ∭ R(P_1) (cos θ_1/r_1)^2 [ ∬ s(x_0′, y_0′) e^{i k r_0′1} dx_0′ dy_0′ ] [ ∬ s(x_0, y_0) e^{i k r_10} dx_0 dy_0 ] a_0(t − 2 r_1/c) dP_1.

Inner integral: contributions from transducer point P_0′ incident on volume point P_1.
Middle integral: contributions from volume point P_1 reflected back to transducer point P_0.
Outer integral: integrate over the transducer face for the output voltage.


Simplifying (the two aperture integrals are identical for a single transducer) yields the following ultrasound signal equation:

v(t) ≈ K e^{−i ω_0 t} ∭ (R(P_1) / r_1^2) e^{i k 2 r_1} b^2_Narrowband(x_1, y_1, z_1) a(t − 2 r_1/c) dP_1,   (U.14)

where a(t) ≜ (iλ)^2 a_0(t) absorbs the leading constants, and we define the (unitless) beam pattern by

b_Narrowband(x_1, y_1, z_1) ≜ (cos θ_1 / λ^2) ∬ s(x_0, y_0) e^{i k (r_01 − r_1)} dx_0 dy_0.   (U.15)

The expression (U.15) is suitable for numerical evaluation of transducer designs, but for intuition we want to simplify b_Narrowband.
What would the ideal beam pattern be? Perhaps b(x, y, z) = s(x, y), as in the geometric near-field analysis.
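As a sketch of such numerical evaluation (the transducer size, wavelength, and depth below are made-up values), the following evaluates (U.15) by direct quadrature for a square transducer and checks the on-axis value deep in the far field, where all aperture contributions add nearly in phase so that |b| ≈ D^2/λ^2:

```python
import numpy as np

lam = 1.0              # wavelength [mm] (hypothetical)
k = 2 * np.pi / lam
D = 4.0                # square transducer side [mm] (hypothetical)
z_eval = 200.0         # evaluation depth [mm], far beyond D^2/lam = 16 mm

# sample the aperture on a grid for quadrature
x0 = np.linspace(-D / 2, D / 2, 201)
X0, Y0 = np.meshgrid(x0, x0)
dA = (x0[1] - x0[0]) ** 2

def b_nb(x1, y1, z1):
    # Direct evaluation of (U.15): (cos th1 / lam^2) * sum of s e^{i k (r01 - r1)}
    r01 = np.sqrt((x1 - X0) ** 2 + (y1 - Y0) ** 2 + z1 ** 2)
    r1 = np.sqrt(x1 ** 2 + y1 ** 2 + z1 ** 2)
    cos1 = z1 / r1
    return cos1 / lam ** 2 * np.sum(np.exp(1j * k * (r01 - r1))) * dA

on_axis = abs(b_nb(0.0, 0.0, z_eval))
fraunhofer = D ** 2 / lam ** 2   # in-phase sum over the D x D aperture
print(on_axis, fraunhofer)       # nearly equal deep in the far field
```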
Image formation
The above analysis was for the transducer centered at (0, 0). Based on our earlier near-field geometric analysis, and the gain
corrections suggested by the signal equation above, the natural estimate of reflectivity is:
R̂(0, 0, z) ≜ (1/K) z^2 v(2z/c)   [gain = z^2/K]
           ≈ ∭ R(P_1) e^{i k 2 r_1} b^2_Narrowband(x_1, y_1, z_1) a((2/c)(z − r_1)) dP_1   [using z ≈ r_1 in the gain]
           = ∭ R(P_1) h(0, 0, z; P_1) dP_1,   (U.16)

where the (space-varying) PSF is:

h(x, y, z; P_1) ≜ e^{i k 2 r_1} b^2_Narrowband(x_1 − x, y_1 − y, z_1) a((2/c)(z − r_1)),

in which the three factors govern speckle, the lateral response, and the depth (range) response, respectively.

Above we have included x, y in the PSF for generality, assuming B-mode scanning. For A-mode, x = y = 0.
Note that h(·) depends on z_1, not z − z_1, revealing the depth dependence of the PSF, so it is not a convolution, even if we make the approximation z − r_1 ≈ z − z_1 in the range term.

The phase modulation e^{i k 2 r_1} contributes to speckle: destructive interference of reflections from different depths.
Because 2 k r = 2π r / (λ/2), this term wraps around 2π every half wavelength.
If the wavelength is 1 mm, then this term wraps 2π in phase every 0.5 mm!
To interpret the PSF, we would like to simplify its expression, particularly the beam pattern part.
Here we needed z^2 gain compensation because we accounted for diffraction spreading in both directions.
In practice we would need additional gain to compensate for attenuation.


Paraxial approximation
Near the axis, cos θ_1 ≈ 1, so

b_Narrowband(x_1, y_1, z_1) = (cos θ_1 / λ^2) ∬ s(x_0, y_0) e^{i k (r_01 − r_1)} dx_0 dy_0 ≈ b_Paraxial(x_1, y_1, z_1),

where we define

b_Paraxial(x_1, y_1, z_1) ≜ (1/λ^2) ∬ s(x_0, y_0) e^{i k (r_01 − r_1)} dx_0 dy_0 = [ s(x, y) ∗∗ (1/λ^2) e^{i k sqrt(x^2 + y^2 + z_1^2)} ] e^{−i k r_1},

where the 2D convolution ∗∗ is over x and y (evaluated at (x_1, y_1)). Convolution with such an exponential term is hard and non-intuitive. Thus we want to further simplify b_Paraxial and/or b_Narrowband.

Fresnel approximation in Cartesian coordinates


To simplify (U.15), we need to approximate the exponent r_01. Consider a Taylor series approximation:

r_01 = sqrt((x_1 − x_0)^2 + (y_1 − y_0)^2 + z_1^2) = z_1 sqrt(1 + [(x_1 − x_0)^2 + (y_1 − y_0)^2]/z_1^2) ≈ z_1 + [(x_1 − x_0)^2 + (y_1 − y_0)^2]/(2 z_1),

because sqrt(1 + t) ≈ 1 + t/2 − t^2/8 for small t.
To drop the 2nd-order term, we need k z_1 t^2/8 ≪ 1 for t = max[(x_1 − x_0)^2 + (y_1 − y_0)^2]/z_1^2 = r_max^2/z_1^2.
Thus we need z_1^3 ≫ k r_max^4/8 = π r_max^4/(4λ) ≈ r_max^4/λ, or z_1 ≫ r_max (r_max/λ)^{1/3}.
Combining all of the above approximations:

b_Paraxial(x, y, z) ≈ e^{i k (z − r_1)} b_Fresnel(x, y, z),

where

b_Fresnel(x, y, z) ≜ (1/λ^2) ∬ s(x_0, y_0) exp(i (k/(2z)) [(x − x_0)^2 + (y − y_0)^2]) dx_0 dy_0 = s(x, y) ∗∗ (1/λ^2) exp(i (k/(2z)) [x^2 + y^2]).

Applying this approximation to (U.16), the gain-compensated reflectivity estimate is:

R̂(0, 0, z) ≈ ∭ R(x_1, y_1, z_1) e^{i 2 k z_1} b^2_Fresnel(x_1, y_1, z_1) a((2/c)(z − z_1)) dx_1 dy_1 dz_1.

This cannot be written as a 3D convolution! (Because the lateral response b_Fresnel depends on z; see the figures below.)
b_Fresnel is still messy due to the convolution with a complex exponential having quadratic phase. (Hence no pictures yet...)
Focusing preview


b_Fresnel(x, y, z) = (1/λ^2) ∬ s(x_0, y_0) exp(i (k/(2z)) [(x − x_0)^2 + (y − y_0)^2]) dx_0 dy_0
  = (1/λ^2) exp(i (k/(2z)) [x^2 + y^2]) ∬ s(x_0, y_0) exp(i (k/(2z)) [x_0^2 + y_0^2]) exp(−i (2π/(λz)) [x x_0 + y y_0]) dx_0 dy_0
  = (1/λ^2) exp(i (k/(2z)) [x^2 + y^2]) F{ s(x, y) exp(i (k/(2z)) [x^2 + y^2]) } |_{(u, v) = (x/(λz), y/(λz))},

where F denotes the 2D Fourier transform.
To cancel the quadratic phase term inside the Fourier transform, one can use a spherical (acoustic) lens of radius R having thickness proportional to

R [1 − sqrt(1 − (x^2 + y^2)/R^2)] ≈ (x^2 + y^2)/(2R).   (U.17)

Fraunhofer approximation in Cartesian coordinates


We can rewrite the Fresnel approximation to the beam pattern as follows:

b_Fresnel(x, y, z) = (1/λ^2) ∬ s(x_0, y_0) exp(i (k/(2z)) [(x − x_0)^2 + (y − y_0)^2]) dx_0 dy_0
                  = e^{i k r^2/(2z)} (1/λ^2) ∬ s(x_0, y_0) e^{i k r_0^2/(2z)} exp(−i (2π/(λz)) [x x_0 + y y_0]) dx_0 dy_0,   (9.38)

where r^2 ≜ x^2 + y^2 and r_0^2 ≜ x_0^2 + y_0^2.
Ignoring the inner phase term e^{i k r_0^2/(2z)} in (9.38) leads to the Fraunhofer approximation to the beam pattern:

b_Fresnel(x, y, z) ≈ e^{i k r^2/(2z)} b_Fraunhofer(x, y, z),

b_Fraunhofer(x, y, z) ≜ (1/λ^2) ∬ s(x_0, y_0) exp(−i (2π/(λz)) [x x_0 + y y_0]) dx_0 dy_0 = (1/λ^2) S(u, v) |_{u = x/(λz), v = y/(λz)},   (9.39)

where S = F[s] is the 2D FT of the transducer. Recall k = ω_0/c = 2π/λ.

Note the importance of accurate notation: we take the FT of s(x, y), but evaluate the transform at spatial (x, y) arguments.
Ignoring the inner phase term is reasonable if k r_0^2/(2z) ≪ 1 (radian), i.e.,

z ≫ (π/λ) r_0,max^2 = (π/4) D_max^2/λ ≈ D_max^2/λ.

The range z ≫ D_max^2/λ is called the far field.

Example. For a D = 1 cm transducer and λ = 1 mm, we need z ≫ 10 cm.


Under the Fraunhofer approximation, after the usual gain correction, the reflectivity estimate for an A-scan becomes:

R̂(0, 0, z) ≈ ∭ R(x_1, y_1, z_1) e^{i k 2 r_1} b^2_Fraunhofer(x_1, y_1, z_1) a((2/c)(z − z_1)) dx_1 dy_1 dz_1.
Example: square transducer


If s(x, y) = rect(x/D) rect(y/D), then S(u, v) = D^2 sinc(D u) sinc(D v). So the far-field beam pattern is

b_Fraunhofer(x, y, z) = (1/λ^2) S(x/(λz), y/(λz)) = (D/λ)^2 sinc(D x/(λ z)) sinc(D y/(λ z)).   (9.41)


Beam pattern in polar coordinates


So far we treated everything in Cartesian coordinates, mostly as in Macovski.
For beam steering, polar coordinates (for the beam pattern) are probably more natural (for the r-θ sector scan format).
We want to simplify the narrowband beam pattern derived in (U.15) above:

b^Cartesian_Narrowband(x_1, y_1, z_1) = (cos θ_1 / λ^2) ∬ s(x_0, y_0) e^{i k (r_01 − r_1)} dx_0 dy_0.

To simplify analysis, assume the source is separable: s(x, y) = s_X(x) s_Y(y).
To concentrate on the beam pattern in the (x, z) plane, consider a thin (1D) transducer element: s_Y(y) = δ(y/λ) = λ δ(y).
(We need the λ for unit balance.)
Represent the (x, z) plane in polar coordinates: x = r sin θ, z = r cos θ. (Treat the object as 2D, so take y_1 = 0.)
Then we define the narrowband beam pattern in polar coordinates as:

b_Narrowband(r, θ) ≜ b^Cartesian_Narrowband(r sin θ, 0, r cos θ) = (cos θ / λ) ∫ s_X(x) e^{i k (d(x; r, θ) − r)} dx,

where d(x; r, θ) is the distance from the source point P_0 = (x, 0, 0) to the point P_1 = (r sin θ, 0, r cos θ) in the y = 0 plane:

d(x; r, θ) = r_01 = ‖(x, 0, 0) − (r sin θ, 0, r cos θ)‖ = sqrt((x − r sin θ)^2 + (r cos θ)^2) = sqrt(x^2 − 2 x r sin θ + r^2) = r sqrt(1 − 2 (x/r) sin θ + (x/r)^2).

Simplifying approximations: Fresnel and Fraunhofer


The integral above for b_Narrowband is too complicated to provide simple interpretation. To simplify, consider the Taylor series:

f(t) = f(0) + f′(0) t + (1/2) f″(0) t^2 + (1/3!) f‴(0) t^3 + (1/4!) f⁗(0) t^4 + ...,

where t = x/r and σ = sin θ. One can verify the following:

f(t) = sqrt(1 − 2 σ t + t^2)          f(0) = 1
f′(t) = (t − σ)/f(t)                  f′(0) = −σ = −sin θ
f″(t) = (1 − σ^2)/f^3(t)              f″(0) = 1 − σ^2 = cos^2 θ
f‴(t) = −3 (1 − σ^2) f′(t)/f^4(t)     f‴(0) = 3 σ (1 − σ^2) = 3 sin θ cos^2 θ
                                      f⁗(0) = 3 cos^2 θ (5 sin^2 θ − 1).

For |t| < 1: 1 − |t| ≤ f(t) ≤ 1 + |t| and −1 ≤ f′(t) ≤ 1. Thus

f(t) ≈ 1 − t sin θ + (1/2) t^2 cos^2 θ + (1/2) t^3 sin θ cos^2 θ + (3/4!) t^4 cos^2 θ (5 sin^2 θ − 1).


Applying this expansion to d yields

d(x; r, θ) = r f(x/r) ≈ r [1 − (x/r) sin θ + (1/2)(x/r)^2 cos^2 θ + (1/2)(x/r)^3 sin θ cos^2 θ + (3/4!)(x/r)^4 cos^2 θ (5 sin^2 θ − 1)]
           ≈ r − x sin θ + (x^2/(2r)) cos^2 θ.

Physical interpretations:
- r: propagation
- −x sin θ: steering
- (x^2/(2r)) cos^2 θ: focusing (curvature of the wavefront)

When is this approximation accurate?
Because d enters as e^{i k d}, we can ignore the 3rd-order (aberration?) term if k r (1/2)(x/r)^3 sin θ cos^2 θ ≪ 1 radian.
The maximum of (1/2) sin θ cos^2 θ occurs when sin^2 θ = 1/3, so (1/2) sin θ cos^2 θ ≤ 1/3^{3/2}. Thus k r (1/2)(x/r)^3 sin θ cos^2 θ ≤ k r (x/(sqrt(3) r))^3.
Thus we need r^2 ≫ k (x/sqrt(3))^3, or r ≫ sqrt(k (x/sqrt(3))^3), where x is half the width of the (centered) aperture.
E.g., for a 10 mm wide aperture and λ = 0.5 mm, r ≫ 50 mm.
But the 3rd-order term is 0 for θ = 0, so the 4th-order term is more important on axis.
For θ = 0, for the 4th-order term to be negligible we need r k (1/4!) 3 (x_max/r)^4 ≪ 1, or r^3 ≫ (2π/λ)(1/8) x_max^4 ≈ x_max^4/λ, i.e.,
r ≫ x_max (x_max/λ)^{1/3} (cf. the earlier condition for the Fresnel approximation).

Fresnel approximation in polar coordinates


(approximate circular wavefront by parabola)
For r ≫ x_max (x_max/λ)^{1/3}, we can safely use the above 2nd-order Taylor series approximation: d(x; r, θ) ≈ r − x sin θ + (x^2/(2r)) cos^2 θ,
leading to the following Fresnel approximation to the beam pattern:

b_Narrowband(r, θ) ≈ b_Fresnel(r, θ),

b_Fresnel(r, θ) ≜ (cos θ / λ) ∫ s_X(x) e^{i k (x cos θ)^2/(2r)} e^{−i k x sin θ} dx.   (9.38)

b_Fresnel is still messy because it involves a complex exponential with quadratic phase. (Hence no pictures yet...)
It is suitable for computation, but there remains room for refining intuition.
It is suitable for computation, but there remains room for refining intuition.
Fraunhofer approximation in polar coordinates
We can ignore the 2nd-order term if k x_max^2/(2r) ≪ 1, i.e., r ≫ π x_max^2/λ = (π/4) D^2/λ ≈ D^2/λ.

If N = D/λ (called the numerical aperture), then r ≫ N D = D^2/λ is the far field.

Thus in the far field we have:

b_Fresnel(r, θ) ≈ b_Fraunhofer(θ),

b_Fraunhofer(θ) ≜ (cos θ / λ) ∫ s_X(x) e^{−i k x sin θ} dx = (cos θ / λ) S_X(sin θ / λ),   (9.38)

where S_X = F[s_X]. In words: the (far-field) angular beam pattern is the FT of the aperture function, evaluated at sin θ/λ.
Note the importance of accurate notation: we take the FT of s_X(x), but evaluate the transform at a spatial (x-related) argument!
The Fraunhofer (far-field) beam pattern (in polar coordinates) is independent of r.


Example. If s_X(x) = rect(x/D), then S_X(u) = D sinc(D u), so the far-field beam pattern is

b_Fraunhofer(θ) = (cos θ / λ) S_X(sin θ / λ) = (cos θ / (λ/D)) sinc(sin θ / (λ/D)).

Note x/z = tan θ ≈ sin θ and cos θ ≈ 1 for θ ≈ 0. So the polar and Cartesian approximations are equivalent near the axis.
Note θ ∈ (−π/2, π/2), i.e., the above approximations are good for any angle, whereas the Cartesian forms are good only for small x/z.
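A numerical sketch of this far-field pattern for the D = 6λ case (the sampling grid is an arbitrary choice):

```python
import numpy as np

lam = 1.0        # wavelength (units arbitrary)
D = 6.0 * lam    # aperture width: D = 6 wavelengths

theta = np.radians(np.linspace(-90, 90, 10001))
# b(theta) = (cos th / lam) S_X(sin th / lam), with S_X(u) = D sinc(D u)
b = np.cos(theta) / lam * D * np.sinc(D * np.sin(theta) / lam)

peak_deg = np.degrees(theta[np.argmax(np.abs(b))])
first_null_deg = np.degrees(np.arcsin(lam / D))   # first zero: sin(theta) = lam/D
print(peak_deg, first_null_deg)                   # ~ (0.0, 9.6)
```

The first null at sin θ = λ/D is the zero described physically below: one full period of the incident wave across the aperture.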
The following figure shows b_Fraunhofer.
(Figure: angular far-field beam pattern b_Fraunhofer(θ)/(D/λ) for a rectangular transducer with D = 6 wavelengths, plotted for θ from −100 to 100 degrees.)

The following figure compares b_Fraunhofer and b_Fresnel.
(Figure: Fresnel and Fraunhofer beam patterns for D/λ = 8, shown vs x/λ and z/λ.)


Physical interpretation of beam pattern


We can understand the sinc response physically as well as through the mathematical derivation above.
A point reflector that is on-axis in the far field reflects a pressure wave that is almost a plane wave parallel to the transducer plane by the time it reaches the transducer. Being aligned with the transducer, a large pressure pulse produces a large output voltage.
On the other hand, for a point reflector that is off-axis in the far field, the approximate plane wave hits the transducer at an angle, so there is a (sinusoidal) mix of positive and negative pressures applied to the transducer. If the angle is such that there is an integer number of periods of the wave over the transducer, then there is no net pressure, so the output signal is 0. These are the zeros in the sinc function. If the angle is such that there are a few full periods and a fraction of a period left over, there will be a small net positive or negative pressure: these are the sidelobes.
Design tradeoffs
Why did we do all this math? The above simplifications finally led to an easily interpreted form for the lateral response as a
function of depth.
The width of the (unitless) sinc argument is about 1, so the (angular) beam width is about θ ≈ arcsin(λ/D).
Because sin θ = x/r = x/sqrt(x^2 + z^2) ≈ x/z, the beam width is Δx ≈ z θ ≈ λ z / D.

How can we use system design parameters to affect spatial resolution?
- Smaller wavelength λ: better lateral resolution (but more attenuation, so SNR decreases).
- A larger transducer gives better far-field resolution (but worse in the near field).
- Resolution degrades with depth z (beam spreading).
The Fraunhofer beam pattern is called the diffraction-limited response, because it represents the best possible resolution for a given transducer. "Best possible" has two meanings. One meaning is that the actual beam pattern will be at least as broad as the Fraunhofer beam pattern (i.e., a more precise calculation of the beam pattern that includes the phase term e^{i k r_0^2/(2z)} in the integral produces a beam pattern that is no narrower than the Fraunhofer beam pattern). The second is that even if we use a lens to focus, the size of the focal spot (i.e., the width of the beam pattern at the focal plane) will be no narrower than the Fraunhofer beam pattern.
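These tradeoffs can be tabulated directly from the far-field beam width Δx ≈ λz/D (the numbers below are illustrative only):

```python
# Far-field lateral beam width: Delta_x ~ lam * z / D (hypothetical numbers)
lam = 1.0                         # wavelength [mm]
for D in (5.0, 10.0):             # doubling D halves the far-field width...
    for z in (50.0, 100.0):       # ...while the width grows linearly with depth
        print(D, z, lam * z / D)  # beam width [mm]
```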
(Figure: beam extent vs depth; the beam width is roughly D in the near field, out to about D^2/λ, then spreads as λz/D in the far field.)
Effect of doubling the source size: narrower far-field beam pattern, but wider in the near field.
How to overcome tradeoff between far-field resolution and depth-of-field, i.e., how can we get good near-field resolution
even with a large transducer? Answer: by focusing.
Approximate beam patterns in Cartesian coordinates
It is also useful to express the Fresnel and Fraunhofer beam patterns in Cartesian coordinates. Using the approximation sin θ = x/r ≈ x/z leads to:

b_Fresnel(x, y, z) = (1/λ^2) ∬ s(x_0, y_0) exp(i (k/(2z)) [(x − x_0)^2 + (y − y_0)^2]) dx_0 dy_0 ≈ e^{i k r^2/(2z)} b_Fraunhofer(x, y, z),

b_Fraunhofer(x, y, z) = (1/λ^2) ∬ s(x_0, y_0) exp(−i (2π/(λz)) [x x_0 + y y_0]) dx_0 dy_0 = (1/λ^2) S(u, v) |_{u = x/(λz), v = y/(λz)}.


Time-delay, phase, propagation delay


For a narrowband amplitude modulated pulse, a small time delay is essentially equivalent to a phase change:

p(t) = a(t) e^{−i ω_0 t} ⟹ p(t − τ) = a(t − τ) e^{−i ω_0 (t − τ)} ≈ a(t) e^{−i ω_0 t} e^{i ω_0 τ} = p(t) e^{i ω_0 τ},

where e^{i ω_0 τ} is the phase factor.
Another way to modify the phase is to propagate the wave through a material having a different index of refraction, or equivalently a different sound velocity. For a thickness Δ of material:

p_0(t) = p(t − Δ/c_0) ≈ p(t) e^{i ω_0 Δ/c_0} = p(t) e^{i 2π Δ/λ}
p_1(t) = p(t − Δ/c_1) ≈ p(t) e^{i ω_0 Δ/c_1} = p_0(t) e^{i 2π (c_0/c_1 − 1) Δ/λ},

where the final exponential is the phase change relative to propagation in the reference medium.
By varying the thickness Δ over the transducer face, one can modify the phase of s(x, y) to make, for example, an acoustic lens.

Focusing (Mechanically) (1D analysis)


Suppose we choose the thickness above as a function of position x along the transducer such that (c_0/c_1 − 1) Δ(x) = −x^2/(2 z_f). Then this is equivalent to modifying the source such that

s_new(x) = s_orig(x) e^{−i k x^2/(2 z_f)}.

So for θ ≈ 0 (so cos θ ≈ 1), the resulting beam pattern is

b^new_Fresnel(r, θ) ≈ (cos θ/λ) ∫ s_new(x) e^{i k x^2/(2r)} e^{−i k x sin θ} dx = (cos θ/λ) ∫ s_orig(x) e^{i (k/2) x^2 [1/r − 1/z_f]} e^{−i k x sin θ} dx.

So in particular for r ≈ z_f:

b^new_Fresnel(r, θ) ≈ (cos θ/λ) ∫ s_orig(x) e^{−i k x sin θ} dx = b^orig_Fraunhofer(θ).

So at depth z_f (and nearby), even if this z_f is in the near field, the modified system achieves the diffraction-limited resolution (of about λ z_f/D), even for a large transducer.
This focusing could be done with a curved acoustic lens of appropriate radius and index of refraction.
In typical lens materials, the acoustic waves travel faster, i.e., c_1 > c_0, so use a thickness proportional to x^2 in 1D, or x^2 + y^2 in 2D.

(Figure: focused beam geometry, with focal depth z_f and focal spot width ≈ λ z_f/D.)

The key point is that this focusing technique works even if z_f is in the near field of the transducer!
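A numerical sketch of this focusing argument, comparing the Fresnel beam at r = z_f with and without the quadratic focusing phase (the aperture, wavelength, and focal depth below are made-up values, with z_f chosen well inside the near field):

```python
import numpy as np

lam = 0.5     # wavelength [mm]
k = 2 * np.pi / lam
D = 10.0      # aperture width [mm]; far field would start near D^2/lam = 200 mm
zf = 50.0     # focal depth [mm], well inside the near field

x = np.linspace(-D / 2, D / 2, 2001)
dx = x[1] - x[0]
sin_t = np.linspace(-0.2, 0.2, 2001)   # sin(theta) grid

def beam(s):
    # Fresnel-type beam pattern magnitude at range r = zf (theta ~ 0, cos ~ 1)
    ph = np.exp(1j * k * x ** 2 / (2 * zf))
    return np.array([abs(np.sum(s * ph * np.exp(-1j * k * x * st)) * dx / lam)
                     for st in sin_t])

def width(bp):   # full width at half maximum, measured in sin(theta)
    above = sin_t[bp > bp.max() / 2]
    return above[-1] - above[0]

b_unfoc = beam(np.ones_like(x))                   # unfocused flat aperture
b_foc = beam(np.exp(-1j * k * x ** 2 / (2 * zf)))  # quadratic focusing phase

w_unfoc, w_foc = width(b_unfoc), width(b_foc)
print(w_unfoc, w_foc)   # focusing sharply narrows the beam at the focal depth
```

The focused aperture cancels the quadratic phase at r = z_f, so its beam collapses to roughly the Fraunhofer sinc width (about λ/D in sin θ) even though z_f ≪ D^2/λ.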


What are the drawbacks?


- Worse far-field resolution (cf. the blurred distant background in a photograph).
- The focal depth z_f is fixed by the mechanical hardware choice.
- The F number is z_f/D. If the F number is large (≫ 1), then resolution degrades gradually on either side of the focal plane. But for z_f too small relative to D, the resolution degrades rapidly away from the focal plane.
- Phased arrays allow something like this with electronics, enabling variable depth focusing.

Skip wideband diffraction, compound scan


Example. (From Prof. Noll)

In these images, the transducer is indicated by the black line along the left margin and the transmitted wave is curved to focus at a particular point. Previously, we discussed that the depth resolution is determined by the envelope function (in this case a Gaussian). The lateral localization function is more complicated and is determined by diffraction.

Ideal deflection (beam steering)


Physical Beam Steering

The echo from a far-field reflector will approximate a plane wave impinging on the transducer plane. By using a wedge-shaped piece of material in which the velocity of sound is faster than in tissue, the wave fronts arriving from an angle θ_0 can be made parallel with the transducer, giving maximum signal for reflectors at that angle.
In particular, suppose we choose Δ in the phase/delay analysis above to vary linearly over the transducer face such that

(c_0/c_1 − 1) Δ(x) = β x, where β = sin θ_0

is the desired beam direction.
The equivalent corresponding time delay is τ(x) = (c_0/c_1 − 1) Δ(x)/c_0 = β x/c_0.
The resulting ideal (1D) beam-steering transducer function would be:

s^ideal_X(x) = e^{i 2π β x/λ} rect(x/D).

Note that there is no change in amplitude across the transducer, just in phase.

The corresponding far-field beam pattern would be:

b_Fraunhofer(θ) = (cos θ/λ) S^ideal_X(sin θ/λ) = (cos θ/(λ/D)) sinc((sin θ − sin θ_0)/(λ/D)),

which is peaked at sin θ = β, i.e., at θ = θ_0. So steering by phase delays works! Note the somewhat larger sidelobes.
[Figure: angular far-field beam pattern |s_b^Fraunhofer(θ)|/D for mechanical beam steering, with D = 6 wavelengths and θ0 = 45 degrees, plotted over θ from −100 to 100 degrees.]
This wedge-shaped acoustic lens (cf. prism) is fixed to a single angle θ0. A phased array allows one to vary the angle electronically in essentially real time.
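The steered Fraunhofer pattern can be checked numerically. A small sketch using the same illustrative parameters as the plot above (D = 6λ, θ0 = 45°); note that the cos θ obliquity factor pulls the peak slightly below θ0:

```python
import numpy as np

D_over_lam = 6.0                          # aperture width in wavelengths
theta0 = np.deg2rad(45.0)                 # steering angle
theta = np.deg2rad(np.linspace(-90, 90, 20001))

# far-field pattern: cos(theta) * sinc((sin theta - sin theta0) / (lambda/D));
# np.sinc(t) = sin(pi t)/(pi t), matching the normalized sinc used here
s = np.cos(theta) * np.sinc(D_over_lam * (np.sin(theta) - np.sin(theta0)))

peak_deg = np.rad2deg(theta[np.argmax(np.abs(s))])
print(peak_deg)   # close to 45; the cos(theta) factor shifts the peak down ~1 degree
```

The main lobe lands essentially at the design angle θ0, confirming that a linear phase across the aperture steers the beam.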


Speckle Noise
Speckle is really an artifact, because it is nonrandom in the sense that the same structures appear if the scan is repeated under the same conditions. But it is random in the sense that the scatterers are located randomly in tissue (i.e., they differ from patient to patient).
Summary: speckle noise leads to Rayleigh statistics and a disappointingly low SNR.
A 1D example
If R(x, y, z) = δ(x, y) R(z), then the demodulated baseband signal is
    v_c(t) = e^{−iω0 t} v(t) = ∫ R(z′) e^{−i2kz′} a(t − 2z′/c) dz′,
where k = 2π/λ is the wave number. If R(z) = Σ_l δ(z − z_l), then
    R̂(z) = v_c(2z/c) = ∫ Σ_l δ(z′ − z_l) e^{−i2kz′} a((2/c)(z − z′)) dz′ = Σ_l e^{−i4πz_l/λ} a((2/c)(z − z_l)).

For subsequent analysis, it is more convenient to express depth in terms of wavelengths. Let w = z/λ, w_l = z_l/λ, and h(w) = a(2w/f0), so that
    R̂(w) = Σ_l e^{−i4πw_l} h(w − w_l).

Without the phase terms we would get just a superposition of shifted h functions. But with them, we get destructive interference.
Example.
[Figure: one realization of random scatterer locations over 0 to 100 wavelengths. Top: the impulses |R(z)|. Middle: the envelope computed without the phase terms, a smooth superposition of shifted h functions. Bottom: |R̂(z)| computed with the phase terms, showing deep interference nulls, i.e., speckle.]
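This interference is easy to reproduce numerically. A minimal sketch, assuming a Gaussian pulse envelope h and uniformly distributed scatterer depths (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
wl = rng.uniform(0, 100, 80)              # scatterer depths w_l [wavelengths]
w = np.linspace(0, 100, 2000)             # depth axis [wavelengths]

def h(w, sigma=2.0):
    """Gaussian pulse envelope, a few wavelengths wide (illustrative)."""
    return np.exp(-w**2 / (2 * sigma**2))

shifts = h(w[None, :] - wl[:, None])      # shifted envelopes h(w - w_l)
phases = np.exp(-1j * 4 * np.pi * wl)     # phase terms e^{-i 4 pi w_l}

Rhat = np.abs((phases[:, None] * shifts).sum(axis=0))   # with phases: speckle
Rsum = shifts.sum(axis=0)                               # without phases: smooth

# by the triangle inequality, interference can only reduce the envelope
print(bool(np.all(Rhat <= Rsum + 1e-9)))  # True
```

Plotting Rhat versus Rsum reproduces the qualitative behavior in the figure: the incoherent sum is smooth, while the coherent sum fluctuates with deep nulls.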

How can we model this phenomenon so as to characterize its effects on image quality? Statistically!


Rayleigh distribution
From [9, Ex. 4.37, p. 229], if U and V are two zero-mean, unit-variance, independent Gaussian random variables, then W = √(U² + V²) has a Rayleigh distribution:
    f_W(w) = w e^{−w²/2} 1_{{w ≥ 0}},
for which E[W] = √(π/2) and σ_W² = 2 − π/2.
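These Rayleigh moments can be verified with a quick Monte Carlo simulation (sample size illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(1_000_000)        # U ~ N(0, 1)
v = rng.standard_normal(1_000_000)        # V ~ N(0, 1), independent of U
w = np.hypot(u, v)                        # W = sqrt(U^2 + V^2): Rayleigh

print(w.mean())                           # ~ sqrt(pi/2) ~ 1.2533
print(w.var())                            # ~ 2 - pi/2  ~ 0.4292
```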

Rayleigh statistics of sums of random phasors (for ultrasound speckle)

To examine the properties of R̂(z) for some given z, we assume the pulse envelope is broad enough that it encompasses several, say n, scatterers, at positions w_1, …, w_n. We can treat those positions as independent random variables, and a reasonable model is that they have a uniform distribution. Thus it is reasonable to model the corresponding phases φ_l = −4πw_l (modulo 2π) as i.i.d. random variables with Uniform(0, 2π) distributions. For a sufficiently broad envelope, we can treat h(·) as a constant and consider the following model for the envelope of a signal that is a sum of many random phasors:
    W_n = | √(2/n) Σ_{l=1}^n e^{iφ_l} |,
where the factor √(2/n) normalizes the variances computed below.
Mathematically, this is like a random walk in the complex plane.


Goal: to understand the statistical properties of W_n (and hence R̂). We will show that W_n is approximately Rayleigh distributed for large n. Expanding:
    W_n = | √(2/n) Σ_{l=1}^n e^{iφ_l} | = √( ( √(2/n) Σ_{l=1}^n cos φ_l )² + ( √(2/n) Σ_{l=1}^n sin φ_l )² ) = √(U_n² + V_n²),
where
    U_n ≜ √(2/n) Σ_{l=1}^n cos φ_l ,    V_n ≜ √(2/n) Σ_{l=1}^n sin φ_l .

Note E[U_n] = E[V_n] = 0 because E[cos(Θ + c)] = 0 for any constant c when Θ has a uniform distribution over [0, 2π]. (See [9, Ex. 3.33, p. 131].) Also, Var{U_n} = 2 Var{cos Θ} because the φ_l are i.i.d., where
    Var{cos Θ} = E[(cos Θ − E[cos Θ])²] = ∫₀^{2π} (cos θ − 0)² f_Θ(θ) dθ = (1/(2π)) ∫₀^{2π} cos² θ dθ = (1/(2π)) ∫₀^{2π} ½ (1 + cos 2θ) dθ = ½.
Thus Var{U_n} = Var{V_n} = 1. Furthermore, U_n and V_n are uncorrelated: E[U_n V_n] = 2 E[cos Θ sin Θ] = 0.
So to show that √(U_n² + V_n²) is approximately Rayleigh distributed, all that is left for us to show is that for large n, U_n and V_n are approximately (jointly) normally distributed.
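These moments are easy to spot-check by Monte Carlo simulation (the values of n and the trial count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 100, 100_000
phi = rng.uniform(0, 2 * np.pi, (trials, n))   # i.i.d. Uniform(0, 2pi) phases

Un = np.sqrt(2 / n) * np.cos(phi).sum(axis=1)
Vn = np.sqrt(2 / n) * np.sin(phi).sum(axis=1)

print(Un.mean(), Vn.mean())        # both ~ 0
print(Un.var(), Vn.var())          # both ~ 1
print((Un * Vn).mean())            # ~ 0: uncorrelated
```

A histogram of Un or Vn at this n is already visually indistinguishable from a standard normal, consistent with the CLT argument that follows.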
Bivariate central limit theorem (CLT) [10, Thm. 1.4.3]
Let (X_k, Y_k) be i.i.d. pairs of random variables with respective means μ_X and μ_Y, variances σ_X² and σ_Y², and correlation coefficient ρ, and define
    Z⃗_n = [ (1/√n) Σ_{k=1}^n (X_k − μ_X)/σ_X ,  (1/√n) Σ_{k=1}^n (Y_k − μ_Y)/σ_Y ]′.
As n → ∞, Z⃗_n converges in distribution to a bivariate normal random vector with zero mean and covariance matrix
    [ 1  ρ ]
    [ ρ  1 ].
In particular, if ρ = 0, then as n → ∞, the two components of Z⃗_n approach independent Gaussian random variables.
Hence the statistics of speckle are often assumed to be Rayleigh.
Signal to noise ratio (signal mean over signal standard deviation)
    SNR = E[W]/σ_W = √(π/2) / √(2 − π/2) = √( π/(4 − π) ) ≈ 1.91.    (9.72)
Low ratio! Averaging multiple (identically positioned) scans will not help. One can reduce speckle noise by compounding, meaning combining scans taken from different directions so that the distances r_{01}, and hence the phases, are different, e.g., [11]. See [12] for further statistical analysis of envelope-detected RF signals with applications to medical ultrasound.
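The 1.91 figure can also be checked by simulating random-phasor sums directly; a small Monte Carlo sketch (scatterer count and trial count illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 50, 200_000
phi = rng.uniform(0, 2 * np.pi, (trials, n))

# envelope of the sum of n unit random phasors; any overall scale
# factor cancels in the mean-over-standard-deviation ratio
w = np.abs(np.exp(1j * phi).sum(axis=1))

snr = w.mean() / w.std()
print(snr)                          # ~ sqrt(pi/(4 - pi)) ~ 1.91
```

Increasing n does not change this ratio, which is why simply averaging repeated identical scans cannot improve speckle SNR; only decorrelating the phases (compounding) helps.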


Summary

• Introduction to the physics of ultrasound imaging
• Derivation of the signal equation relating the quantity of interest, the reflectivity R(x, y, z), to the recorded signal v(t)
• Description of (simple!) image formation methods for A-mode and B-mode scans
• Analysis of the (depth dependent!) point spread function
• Analysis of (speckle) noise

Bibliography
[1] T. L. Szabo. Diagnostic ultrasound imaging: Inside out. Academic Press, New York, 2004.
[2] K. K. Shung, M. B. Smith, and B. Tsui. Principles of medical imaging. Academic Press, New York, 1992.
[3] J. L. Prince and J. M. Links. Medical imaging signals and systems. Prentice-Hall, 2005.
[4] D. I. Hughes and F. A. Duck. Automatic attenuation compensation for ultrasonic imaging. Ultrasound in Med. and Biol., 23:651–64, 1997.
[5] J. W. Goodman. Introduction to Fourier optics. McGraw-Hill, New York, 1968.
[6] M. Born and E. Wolf. Principles of optics. Pergamon, Oxford, 1975.
[7] A. D. Pierce. Acoustics: An introduction to its physical principles and applications. McGraw-Hill, New York, 1981.
[8] A. Macovski. Medical imaging systems. Prentice-Hall, New Jersey, 1983.
[9] A. Leon-Garcia. Probability and random processes for electrical engineering. Addison-Wesley, New York, 2nd edition, 1994.
[10] P. J. Bickel and K. A. Doksum. Mathematical statistics. Holden-Day, Oakland, CA, 1977.
[11] G. M. Treece, A. H. Gee, and R. W. Prager. Ultrasound compounding with automatic attenuation compensation using paired angle scans. Ultrasound in Med. Biol., 33(4):630–42, 2007.
[12] R. F. Wagner, M. F. Insana, and D. G. Brown. Statistical properties of radio-frequency and envelope-detected signals with applications to medical ultrasound. J. Opt. Soc. Am. A, 4(5):910–22, 1987.
