Cu Ultra PDF
U.2
[Figure: block diagram of a pulse-echo ultrasound system — a transducer with face s(x,y) insonifies the patient along the depth axis z; the echoes pass through a signal processor to a display.]
A pulser excites the transducer with a short pulse, often modeled as an amplitude-modulated sinusoid: p(t) = a(t) e^{j ω0 t},
where ω0 = 2π f0 is the carrier frequency, typically 1-10 MHz.
The ultrasonic pulse propagates into the body where it reflects off mechanical inhomogeneities.
Reflected pulses propagate back to the transducer. Because distance = velocity × time, a reflector at distance z from the transducer causes a pulse echo at time t = 2z/c, where c is the sound velocity in the body.
The velocity of sound is about 1500 m/s ± 5% in the soft tissues of the body; it is very different in air and bone.
Reflected waves received at time t are associated with mechanical inhomogeneities at depth z = ct/2.
The wavelength λ = c/f0 varies from 1.5 mm at 1 MHz to 0.15 mm at 10 MHz, enabling good depth resolution.
The cross-section of the ultrasound beam from the transducer at any depth z determines the lateral extent of the echo signal.
The beam properties vary with range and are determined by diffraction. (Determines PSF.)
We obtain one line of an image simply by recording the reflected signal as a function of time.
2D and 3D images are generated by moving the direction of the ultrasound beam.
Signal processing: bandpass filtering, gain control, envelope detection.
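As a quick worked example of the timing and wavelength relations above (the carrier frequency and reflector depth are assumed values, not from any particular system):

```python
c = 1500.0            # sound speed in soft tissue [m/s]
f0 = 5e6              # carrier frequency [Hz] (assumed)
z = 0.05              # reflector depth: 5 cm (assumed)

t_echo = 2 * z / c    # round-trip echo time t = 2z/c
lam = c / f0          # wavelength lambda = c/f0

print(t_echo * 1e6)   # ≈ 66.7 microseconds
print(lam * 1e3)      # ≈ 0.3 mm
```

So a 5 cm deep reflector echoes back in well under a tenth of a millisecond, which is why thousands of scan lines per second are feasible.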
History
started in mid 1950s
rapid expansion in early 1970s with advent of 2D real-time systems
phased arrays in early 1980s
color flow systems in mid 1980s
3D systems in 1990s
Active research field today including contrast agents (bubbles), molecular imaging, tissue characterization, nonlinear interactions, integration with other modalities (photo-acoustic imaging, combined ultrasound / X-ray tomosynthesis)
Example. (Figure: example clinical ultrasound images; the second panel is labeled "A month later...".)
U.3
U.4
[Figure: a wave p_inc incident at angle θ_inc on a planar interface between medium 1 (impedance Z1, speed c1) and medium 2 (Z2, c2), producing a reflected wave p_ref at angle θ_ref and a transmitted/refracted wave p_trn at angle θ_trn.]
Matching boundary conditions at the interface (continuity of pressure and of normal particle velocity) gives
(1 − R) (cos θ_inc)/Z1 = (1 + R) (cos θ_trn)/Z2, so 1 + R = (Z2 cos θ_inc)/(Z1 cos θ_trn) (1 − R).
Thus the pressure reflectivity at the interface is
R = (Z2 cos θ_inc − Z1 cos θ_trn) / (Z2 cos θ_inc + Z1 cos θ_trn).
Only surfaces parallel to the detector (or wavefront) matter (others reflect away from the transducer), so θ_inc = θ_ref = θ_trn = 0. Thus the reflectivity or pressure reflection coefficient for waves at normal incidence to the surface is:
R = R12 = p_ref / p_inc = (Z2 − Z1) / (Z1 + Z2) ≈ ΔZ / (2 Z0),
where ΔZ ≜ Z2 − Z1 and Z0 denotes the typical acoustic impedance of soft tissue. Clearly −1 ≤ R ≤ 1, and R is unitless. Note that R21 = −R12.
U.5
Typically ΔZ/Z0 is only a few % in soft tissue, so interfaces are weakly reflecting (not much energy loss). But shadows occur behind bones.
Continuity of pressure at the interface, p_inc + p_ref = p_trn, gives the pressure transmission coefficient:
p_trn / p_inc = 1 + R12 = 2 Z2 / (Z1 + Z2),
which can exceed 1. The corresponding intensity ratios are
I_ref / I_inc = ((Z2 − Z1)/(Z2 + Z1))²,  I_trn / I_inc = 4 Z2 Z1 / (Z2 + Z1)².
??
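A small numerical check of these coefficients; the impedance values below are representative textbook numbers for soft tissue and bone, assumed here for illustration:

```python
# acoustic impedances Z = rho*c [kg m^-2 s^-1] (representative values, assumed)
Z_tissue = 1.5e6   # soft tissue
Z_bone = 7.8e6     # bone

def coeffs(Z1, Z2):
    R = (Z2 - Z1) / (Z2 + Z1)             # pressure reflectivity R12
    T = 1 + R                             # pressure transmission 2*Z2/(Z1+Z2)
    I_ref = R ** 2                        # reflected intensity fraction
    I_trn = 4 * Z1 * Z2 / (Z1 + Z2) ** 2  # transmitted intensity fraction
    return R, T, I_ref, I_trn

R12, T, I_ref, I_trn = coeffs(Z_tissue, Z_bone)
print(round(R12, 3))                        # ≈ 0.677: strong reflection at tissue/bone
print(round(I_ref + I_trn, 6))              # 1.0: intensity is conserved
print(coeffs(Z_bone, Z_tissue)[0] == -R12)  # True: R21 = -R12
```

The large tissue/bone reflectivity is consistent with the shadows behind bones noted above; for two soft tissues the same formula gives |R| of only a few percent.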
Summary
In reflection-mode ultrasound imaging, the images are representative reproductions of the reflectivity of the object. A cyst,
which is a nearly homogeneous fluid-filled region, has reflectivity nearly 0, so appears black on ultrasound image. Liver tissue,
which has complicated cellular structure with many small mechanical inhomogeneities that scatter the sound waves, appears as a
fuzzy gray blob (my opinion). Boundaries between organs or tissues with different impedances appear as brighter white curves in
the image.
U.6
[Figure: two example pulses p(t) with their envelopes a(t) (left, amplitude vs. t in μsec) and the corresponding magnitude spectra |P(f)| (right, vs. f in MHz).]
Suppose at depths z1, . . . , zN there are interfaces with reflectivities R(z1), . . . , R(zN), i.e.,
R(z) = Σ_{n=1}^{N} R(zn) δ(z − zn). (Picture)
Then a (highly simplified) model for the signal received by the transducer is:
v(t) = K Σ_{n=1}^{N} R(zn) p(t − 2 zn / c),
where K is a constant gain factor relating to the impedance of the transducer, electronic preamplification, etc.
A natural estimate of the reflectivity is
R̂(z) = |v(t)| |_{t = 2z/c}, (Picture)
where |v(t)| is the envelope of the received signal.
Is R̂(z) = R(z)? No. Even in this highly simplified model, there is blurring (in the z direction) due to the width of the pulse. Soon we will analyze the blur in all directions more thoroughly.
Also note that reflection coefficients can be positive or negative, but with envelope detection we lose the sign information. Hereafter we will ignore this detail and treat reflectivity R as a nonnegative quantity.
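The simplified A-mode model above can be simulated directly. A minimal sketch using a complex Gaussian-envelope pulse, so that the magnitude of v(t) gives the envelope; the depths, reflectivities, and sampling rate are hypothetical:

```python
import numpy as np

c, f0, fs = 1500.0, 5e6, 100e6        # sound speed, carrier, sampling rate (assumed)
depths = [0.02, 0.05]                 # reflector depths z_n [m] (hypothetical)
refl = [0.5, -0.3]                    # reflectivities R(z_n); note the negative one

t = np.arange(0, 100e-6, 1 / fs)
sigma = 0.5e-6                        # envelope width [s]

def pulse(tau):                       # p(t - tau) = a(t - tau) exp(j 2*pi*f0 (t - tau))
    return np.exp(-(t - tau) ** 2 / (2 * sigma ** 2)) * np.exp(2j * np.pi * f0 * (t - tau))

# v(t) = K * sum_n R(z_n) p(t - 2 z_n / c), with K = 1
v = sum(R * pulse(2 * z / c) for R, z in zip(refl, depths))
env = np.abs(v)                       # envelope |v(t)|

z_hat = c * t[np.argmax(env)] / 2     # depth of the strongest echo
print(round(z_hat, 4))                # ≈ 0.02 m
# envelope detection loses the sign: the R = -0.3 reflector still shows up as +0.3
print(round(env[int(round(2 * depths[1] / c * fs))], 2))   # ≈ 0.3
```

Each recovered peak is a blurred copy of the true impulse in R(z): its width is the pulse width, which is exactly the range blur discussed above.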
What happens if sound velocity in some organ differs from others?
Synopsis of M-mode scan
If reflectivity is a function of time t, e.g., due to cardiac motion, then we have R(z; t). If the time scale is slow compared to the A-mode scan time (≈ 300 μsec), then just do multiple A-mode scans (1D) and stack them up to make a 2D image of R̂(z, t).
U.7
[Figure: simulated A-mode example — true reflectivity R(z) (z in cm), received signal v(t) (t in μsec), and estimated reflectivity R̂(z) at z = ct/2, showing the pulse-width blur.]
U.8
Small acoustic pressure perturbations p obey the wave equation
∇²p − (1/c²) ∂²p/∂t² = 0, where ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z².
For a monochromatic plane wave p_f(z, t) = P(f) e^{j 2π f (t − z/c)}:
(1/c²) ∂²p/∂t² = −((2π f)² / c²) p = −k² p = ∇² p,
where k = 2π f / c = 2π/λ is called the wave number. This confirms that plane waves satisfy the wave equation.
Because the simple wave equation is linear, any superposition of solutions is also a solution. Hence
p(z, t) = ∫ p_f(z, t) df = ∫ P(f) e^{j 2π f (t − z/c)} df
is also a solution. Observe that p(z, t) = p(0, t − z/c), where p(0, t) = ∫ P(f) e^{j 2π f t} df = F⁻¹{P} = p(t).
Spherical waves
Solutions of the form
p(r, t) = (1/r) ψ_outward(t − r/c) + (1/r) ψ_inward(t + r/c), where r = √(x² + y² + z²),
also satisfy the wave equation. An important monochromatic example is the outward spherical wave
p(r, t) = (1/r) e^{j 2π f (t − r/c)},
for which again (1/c²) ∂²p/∂t² = −k² p(r, t).
U.9
Source considerations
Now we begin to examine the considerations in designing the transducer and the transmitted pulse.
Transducer considerations
Definition of transducer: a substance or device, such as a piezoelectric crystal, microphone, or photoelectric cell, that converts
input energy of one form into output energy of another.
Transducer electrical impedance ∝ 1/area, so a smaller source means more noise (but better near-field lateral spatial resolution).
Higher carrier frequency means wider filter after preamplifier, so more noise (but better depth resolution).
Nonuniform gains for each element must be calibrated; errors in gains broaden PSF.
Pulse considerations
Consider an ideal infinite plane reflector at a distance z from the transducer, and an acoustic wave velocity c.
If the transducer transmits a pulse p(t) (pressure wave), then (ignoring diffraction) ideally the received signal (voltage) would be
v(t) = p(t − 2z/c),
because 2z/c is the time required for the pulse to propagate from the transducer to the reflector and back.
Unfortunately, in reality the amplitude of the pressure wave decreases during propagation, and this loss is called attenuation.
It is caused by several mechanisms including absorption (wave energy converted to thermal energy), scattering (generation of
secondary spherical waves) and mode conversion (generation of transverse shear waves from longitudinal waves).
As a further complication, the effect of attenuation is frequency dependent: higher frequency components of the wave are attenuated
more. Thus, it is natural to model attenuation in the frequency domain to analyze what happens in the time domain.
Ideally the recorded echo could be expressed using the 1D inverse FT as follows:
v(t) = p(t − 2z/c) = ∫ P(f) e^{j 2π f (t − 2z/c)} df.
A more realistic (phenomenological) model (but still ignoring frequency-dependent wave speed) accounts for the frequency-dependent attenuation as follows:
v(t) = ∫ e^{−2 z α(f)} P(f) e^{j 2π f (t − 2z/c)} df ≠ p(t − 2z/c),  (U.1)
where the factor e^{−2 z α(f)} models the round-trip attenuation.
Narrowband pulses
The effect of signal loss is easiest to understand for a narrowband pulse. We say p(t) is narrowband if its spectrum is concentrated near f ≈ f0 and f ≈ −f0, for some center frequency f0.
[Figure: sketch of |P(f)| concentrated near ±f0, with the attenuation factor e^{−2 z α(f)} overlaid.]
U.10
For a narrowband pulse, the following approximation to the effect of attenuation is reasonable:
e^{−2 z α(f)} P(f) ≈ e^{−2 z α(f0)} P(f),  (U.2)
so that
v(t) ≈ e^{−2 z α(f0)} p(t − 2z/c).  (U.3)
In words, a narrowband pulse is simply attenuated according to the attenuation coefficient at the carrier frequency f0; the pulse shape itself is not distorted (no dispersion). We will use this approximation throughout our discussion of the ultrasound PSF.
In (U.3), the attenuation increases with range z. Therefore one usually applies attenuation correction to try to compensate for this loss:
v_c(t) ≜ e^{2 z α(f0)} |_{z = ct/2} v(t) = e^{c t α(f0)} v(t).  (U.4)
What is the drawback of using narrowband pulses? They are wider in time, providing poorer range resolution.
Furthermore, for smaller wavelengths (higher frequency) there is more attenuation, so less signal, so lower SNR. This is an example of the type of resolution-noise tradeoff that is present in all imaging systems.
Dispersion
In practice, to provide adequate range resolution, medical ultrasound systems use fairly wideband pulses (cf. figure on U.6) that
do not satisfy the narrowband approximation (U.2).
For general wideband pulses, it is difficult to simplify (U.1) to analyze the time-domain effects of frequency-dependent attenuation.
Qualitatively, because different frequency components are attenuated by different amounts, the pulse shape is distorted. Multiplication by H(f) = e^{−2 z α(f)} in the frequency domain is equivalent to convolution with some impulse response h(t) in the time domain. This convolution will spread out the pulse, and the resulting distortion is called dispersion.
It is easiest to illustrate dispersion by evaluating (U.1) numerically, as illustrated in the following figure.
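A minimal numerical sketch of evaluating (U.1): apply the frequency-dependent attenuation e^{−2zα(f)} with α(f) = α1|f| in the FFT domain. The pulse parameters are assumed, and the 2z/c propagation delay is omitted since it only shifts the pulse in time:

```python
import numpy as np

c, f0, fs = 1500.0, 5e6, 100e6
z = 0.05                                  # depth [m] (assumed)
alpha1 = 1e-5                             # alpha1 ≈ 0.1 cm^-1 MHz^-1 in SI [m^-1 Hz^-1]

t = np.arange(0, 20e-6, 1 / fs)
sigma = 0.4e-6
p = np.exp(-(t - 5e-6) ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * f0 * (t - 5e-6))

f = np.fft.fftfreq(t.size, 1 / fs)
V = np.exp(-2 * z * alpha1 * np.abs(f)) * np.fft.fft(p)   # (U.1), minus the delay
v = np.real(np.fft.ifft(V))

pos = f > 0                               # compare spectral centroids over positive f
cent = lambda S: np.sum(f[pos] * np.abs(S[pos])) / np.sum(np.abs(S[pos]))
print(np.abs(v).max() < np.abs(p).max())  # True: the pulse is attenuated
print(cent(V) < cent(np.fft.fft(p)))      # True: the spectrum shifts downward
```

Because higher frequencies are attenuated more, the received spectrum is both weaker and downshifted, which is the dispersion distortion described above.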
[Figure: simulated effect of attenuation — reflectivity R(z) (z in cm); hypothetical received signal v(t) with no attenuation; with the narrowband assumption; with full attenuation/dispersion; and after attenuation correction v_c(t) (t in μsec).]
U.11
[Figure: pulse shape after propagating to depths z = 0, 4, 8, and 12 cm, showing progressive attenuation and dispersion (t in μsec).]
U.12
Gaussian envelopes
On a log scale, attenuation is roughly linear in frequency (about 1 dB/cm/MHz) over the frequency range of interest.
I.e., α(f) ≈ α1 |f| for f between 1 and 10 MHz, where α1 (20 log10 e) ≈ 1 dB/cm/MHz, so α1 ≈ 1/(20 log10 e) ≈ 0.1 MHz⁻¹ cm⁻¹.
The property α(f) ≈ α1 |f| provides an opportunity to minimize dispersion: use pulses with Gaussian envelopes, A(f) = e^{−w f²}, where w is related to the time-width of the envelope. Assuming f0 ≫ 0, the product e^{−2 z α1 |f|} A(f − f0) is (for f > 0) again a Gaussian of the same width, just with reduced amplitude and a downshifted center frequency, so the envelope shape is (nearly) preserved.
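The 1 dB/cm/MHz rule of thumb translates into a quick loss estimate (the carrier frequency and depth below are assumed example numbers):

```python
def roundtrip_loss_db(f_mhz, depth_cm, slope_db=1.0):
    """Round-trip attenuation using the ~1 dB/cm/MHz rule of thumb."""
    return slope_db * f_mhz * (2 * depth_cm)

loss = roundtrip_loss_db(5, 10)     # 5 MHz carrier, 10 cm deep reflector
amp = 10 ** (-loss / 20)            # corresponding amplitude factor
print(loss)   # 100.0 (dB)
print(amp)    # 1e-05
```

A 100 dB round trip at 5 MHz shows concretely why high-frequency probes are limited to shallow structures: the echo amplitude drops by five orders of magnitude.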
U.13
[Figure: mechanically scanned transducer with face s(x,y) centered at (x0, y0), insonifying reflectors at depths z1 and z2; the received envelope |v(t)| shows echoes at t = 2z1/c and 2z2/c with the distance-dependent loss e^{−2αz}/z.]
We first focus on the near field of a mechanically scanned transducer, illustrated above, making these simplifying assumptions.
Single transducer element.
Face of transducer much larger than the wavelength of the propagating wave, so the incident pressure approaches the geometric extension of the transducer face s(x, y) (e.g., circ or rect function). Called piston mode.
Neglect diffraction spreading on transmit.
Uniform propagation velocity c.
Uniform linear attenuation coefficient α, assumed frequency independent, i.e., ignoring dispersion. (Focus on lateral PSF.)
Body consists of isotropic scatterers with scalar reflectivity R(x, y, z).
No specular reflections: structures small relative to wavelength, or large but with very rough surfaces.
Amplitude-modulated pulse p(t) = a(t) e^{j ω0 t}.
Weakly reflecting medium, so ignore 2nd-order and higher reflections. (See HW.)
Pressure propagation: approximate analysis
Suppose the transducer is translated to be centered at (x0, y0), i.e., s(x − x0, y − y0).
Let p_inc^{(x0,y0)}(x, y, z, t) denote the incident pressure wave that propagates in the z direction away from the transducer.
Assume that the pressure at the transducer plane (z = 0) is:
p_inc^{(x0,y0)}(x, y, 0, t) = s(x − x0, y − y0) p(t).
Ignoring transmit spreading, the incident pressure is a spatially truncated (due to transducer size) and attenuated pressure wave:
p_inc^{(x0,y0)}(x, y, z, t) = s(x − x0, y − y0) e^{−α z} p(t − z/c).
U.14
We need to determine the reflected pressure p_ref^{(x0,y0)}(x, y, 0, t) incident on the transducer. We do so by using superposition. We first find p_ref^{(x0,y0)}(x, y, 0, t; x1, y1, z1), the pressure reflected from a single ideal point reflector located at (x1, y1, z1), and then compute the overall reflected pressure by superposition (assuming linearity, i.e., small acoustic perturbations):
p_ref^{(x0,y0)}(x, y, 0, t) = ∫∫∫ R(x1, y1, z1) p_ref^{(x0,y0)}(x, y, 0, t; x1, y1, z1) dx1 dy1 dz1.
An ideal point reflector located at (x1, y1, z1), i.e., for R(x, y, z) = δ(x − x1, y − y1, z − z1), would exactly reflect whatever pressure is incident at that point, and produce a wave traveling back towards the transducer. If the point reflector is sufficiently far from the transducer plane, then the spherical waves are approximately planar by the time they reach the transducer. (Admittedly this seems to be contrary to the near-field assumption.) Thus we assume:
p_ref^{(x0,y0)}(x, y, z, t; x1, y1, z1) = p_inc^{(x0,y0)}(x1, y1, z1, t + (z − z1)/c) [simple propagation] × e^{−α(z1 − z)} [attenuation] × 1/(z1 − z) [spreading],
where the 1/(z1 − z) is due to diffraction spreading of the energy on the return. In particular, back at the transducer plane (z = 0):
p_ref^{(x0,y0)}(x, y, 0, t; x1, y1, z1) = p_inc^{(x0,y0)}(x1, y1, z1, t − z1/c) e^{−α z1} / z1 = s(x1 − x0, y1 − y0) p(t − 2 z1/c) e^{−2 α z1} / z1.
Thus
p_ref^{(x0,y0)}(x, y, 0, t) = ∫∫∫ R(x1, y1, z1) s(x1 − x0, y1 − y0) p(t − 2 z1/c) (e^{−2 α z1} / z1) dx1 dy1 dz1.  (U.5)
The output signal from an ideal transducer would be proportional to the integral of the (reflected) pressure that impinges on its face. The constant of proportionality is unimportant for the purposes of qualitative visual display and resolution analysis. (It would affect quantitative SNR analyses.) For convenience we assume:
v(x0, y0, t) = (1 / ∫∫ s(x, y) dx dy) ∫∫ s(x − x0, y − y0) p_ref^{(x0,y0)}(x, y, 0, t) dx dy,
where we reiterate that the received signal depends on the transducer position (x0, y0), because we will be moving the transducer.
Under the (drastic) simplifying assumptions made above, the pressure p_ref^{(x0,y0)}(x, y, 0, t) is independent of x, y, so by (U.5) the recorded signal is simply:
v(x0, y0, t) = p_ref^{(x0,y0)}(·, ·, 0, t) = ∫∫∫ R(x1, y1, z1) s(x1 − x0, y1 − y0) p(t − 2 z1/c) (e^{−2 α z1} / z1) dx1 dy1 dz1.  (U.6)
(See picture above for a sketch of the signal. Note the distance-dependent loss e^{−2 α z1}/z1, roughly e^{−α c t}/(ct/2) at time t.)
To help interpret (U.6), consider the most idealized case where s(x, y) = δ2(x, y) (tiny transducer) and a(t) = δ(t) (short pulse). Then by the Dirac impulse sifting property:
v(x0, y0, t) = R(x0, y0, z1) (e^{−2 α z1} / z1) |_{z1 = ct/2}.  (U.7)
U.15
Near-field PSF
(Geometric PSF)
R̂(x, y, z) = ∫∫∫ R(x1, y1, z1) s(x1 − x, y1 − y) a(2z/c − 2z1/c) dx1 dy1 dz1.  (U.8)
This PSF is separable between the transverse plane (x, y) and range z.
Now we can address how system design affects the imaging PSF.
The lateral or transverse spatial resolution is determined by the transducer shape.
The depth resolution is determined by the pulse envelope.
U.16
Example. For a 4 mm square transducer, the PSF is 4 mm wide in the x, y plane.
A typical pulse envelope has a duration of 2-3 periods of its carrier, i.e., its width is roughly Δt = 2/f0, so the width of a(2z/c) is roughly Δz = c Δt / 2 = c/f0 = λ. If c = 1500 m/s and f0 = 1 MHz, then the wavelength is λ = c/f0 = 1.5 mm.
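The resolution numbers in this example follow from a couple of lines of arithmetic (same assumed values as above):

```python
c, f0 = 1500.0, 1e6
dt = 2 / f0          # pulse duration ≈ 2 carrier periods [s]
dz = c * dt / 2      # depth resolution: dz = c*dt/2
lam = c / f0         # wavelength
print(dz * 1e3, lam * 1e3)   # both ≈ 1.5 (mm)
```

At f0 = 10 MHz the same arithmetic gives Δz ≈ 0.15 mm, ten times better, at the cost of the extra attenuation discussed below.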
[Figure: near-field PSF h(x, 0, z) for this example — a rectangular lateral profile of width 4 mm and a pulse-envelope range profile of width ≈ λ.]
So we can improve the depth resolution by using a higher carrier frequency f0 . What is the tradeoff? More attenuation! So we
have a resolution-noise tradeoff. Such tradeoffs exist in all imaging modalities.
Although this simplified analysis suggests that the ideal transducer would be very small, the analysis assumed at the outset that the
transducer is large (relative to the wavelength)! So it is premature to draw definitive conclusions about designing transducer size.
However, when we properly account for diffraction, we will see that the PSF h is not space invariant, so it will not be possible
to write the superposition integral as a convolution. Virtually all of the triple convolutions in Ch. 9 and Ch. 10 of Macovski are
incorrect. But the superposition integrals that precede the triple convolutions are fine.
Even though the PSF will vary with depth z, it is still quite interpretable.
Using more detailed analysis, one can show that the geometric model is reasonable for z < D²/(2λ) for a square transducer or z < D²/(4λ) for a circular transducer [3, p 333]. The Fresnel region extends from that point out to z = D²/λ. Beyond that is the Fraunhofer region or far field.
A-mode scan
If the transducer is held at a fixed position (x0, y0) and a plot of reflectivity vs depth, i.e., R̂(x0, y0, z) vs z, is made, then this is called an A-mode scan.
B-mode scan (Brightness mode; the usual mode)
Translate the transducer (laterally) to different locations (x, y) (usually fixing x or y and translating w.r.t. the other).
Typically x-motion by mechanical translation of the transducer, y-motion by manual selection of the operator.
Form the lines of the image from the different positions.
Assume transducer motion is slow relative to pulse travel.
Everything is shift-invariant w.r.t. x and y due to the scanning.
What if sound velocity c is not constant (i.e., varies in different tissue types)?
U.17
R̂(0, 0, z) = ∫∫∫ R(x1, y1, z1) e^{−j 2 k r1} b²(x1, y1, z1) a((2/c)(z − z1)) dx1 dy1 dz1.
b(x, y, z) determines primarily the lateral resolution at any depth z and varies slowly with z. It is called the beam pattern.
Both the transmit and receive operations have an associated beam pattern.
The overall beam pattern is the product of the transmit beam pattern and the receive beam pattern.
For a single transducer, these transmit and receive beam patterns are identical, so the PSF contains the squared term b².
a((2/c)(z − z1)) determines primarily the depth resolution, where r1 = √(x1² + y1² + z1²).
e^{−j 2 k r1} is an unavoidable (and unfortunate) phase term, where k = 2π/λ is the wave number.
(Its presence causes destructive interference, aka speckle.)
If we image by translating the transducer, then everything will be translation invariant w.r.t. x and y, so in particular:
R̂(x, y, z) = ∫∫∫ R(x1, y1, z1) e^{−j 2 k r1} b²(x1 − x, y1 − y, z1) a((2/c)(z − z1)) dx1 dy1 dz1
= ∫∫∫ R(x1, y1, z1) e^{−j 2 k r1} h(x − x1, y − y1, z − z1; z1) dx1 dy1 dz1,
where
h(x, y, z; z1) ≜ b²(x, y, z1) a(2z/c).
Due to the explicit dependence of the PSF on depth z1, the PSF is shift variant or, in this case, depth dependent.
Mathematical interpretation: if R(x, y, z) = δ(x − x1, y − y1, z − z1), then
R̂(x, y, z) = b²(x1 − x, y1 − y, z1) a((2/c)(z − z1)),
which is the lateral PSF b²(·, ·, z1) at depth z1 translated to (x1, y1), and blurred out in depth z by the pulse a((2/c) ·).
Physical interpretation: the PSF h(x, y, z; z1) describes how much the reflectivity from point (x, y, z1) will contaminate our estimate of the reflectivity at (0, 0, z).
Goals:
Find b, interpret, and simplify.
Study how b varies with transducer size/shape.
Final form appears in (9.38):
b_Fraunhofer(θ) = (cos θ / λ) S_X(sin θ / λ).
Intermediate assumptions along the way are important to understand, to see why in practice (and in the project) not everything agrees with these predictions.
Mostly follow Macovski notation, filling in some details, and avoiding the potentially ambiguous notation f(t) ∗ g(t) h(t).
U.18
[Figure: geometry — transducer s(x, y) in the z = 0 plane with points P0 = (x0, y0, 0) and P0′ = (x0′, y0′, 0); reflector at P1 = (x1, y1, z1); distances r01, r1, r10 and angles θ01, θ1, θ10.]
The transducer defines the (x, y, 0) plane. Define: P0 ≜ (x0, y0, 0), P0′ ≜ (x0′, y0′, 0), P1 ≜ (x1, y1, z1).
Shorthand for radial distances:
r01 = ‖P0 − P1‖ = ‖(x0, y0, 0) − (x1, y1, z1)‖ = √((x1 − x0)² + (y1 − y0)² + z1²).
Later we will assume that P1 is sufficiently far from the transducer (relative to the transducer size) that
cos θ01 ≈ cos θ1 and r01 ≈ r1.
The latter approximation applies only within functions that vary slowly with r, like 1/r01, but not in terms like e^{−j k r01}.
Superposition
The main ingredient of diffraction analysis is the Huygens-Fresnel Principle: superposition!
Pressure at P1 is superposition of contributions from each point on transducer, where each point can be thought of as a point source
emitting a spherical wave.
Superposition requires that we assume linearity of the medium, which means the pressure perturbations must be sufficiently small. Modern ultrasound systems include harmonic imaging modes that exploit nonlinear effects; these are not considered here.
U.19
Monochromatic case
Start with a monochromatic wave (called continuous-wave diffraction):
u(P, t) = U_a(P) cos(2π f t + φ(P)) = Real[U(P) e^{j 2π f t}], where U(P) = U_a(P) e^{j φ(P)}
is a complex phasor, and the position is P = (x, y, z).
Note that everywhere the wave (pressure) is oscillating at the same frequency; the only differences are amplitude and phase.
Rayleigh-Sommerfeld theory
Using:
the linear wave equation
the Helmholtz equation: (∇² + k²) U = 0
linearity and superposition
Green's theorem
...
Goodman [5] shows that:
U(P1) = (1/(jλ)) ∫∫ U(P0) (cos θ01 / r01) e^{−j k r01} dx0 dy0
for r01 ≫ λ, where the point spread function for the phasor U(P) is
h(P1, P0) = (1/(jλ)) (cos θ01 / r01) e^{−j k r01}.
In the time domain this corresponds to (Goodman 3-33):
u(P1, t) = ∫∫ (cos θ01 / r01) (1/(2π c)) (d/dt) u(P0, t − r01/c) dx0 dy0.
Insonification
U.20
If the transducer is pulsed coherently over its face (piston mode) with output pressure p0(t), then at the transducer plane:
u(P0, t) = s(x0, y0) p0(t),  (9.15)
i.e., at the plane z = 0 the pressure is zero everywhere except over the transducer face.  (U.11)
To explore the narrowband approximation in the time domain, use the product rule:
p1(t) ≜ (1/(j 2π f0)) (d/dt) p0(t) = a1(t) e^{j 2π f0 t}, where a1(t) ≜ (1/(j 2π f0)) a0′(t) + a0(t).
One way of defining a narrowband pulse is to require that |a0′(t)| ≪ 2π f0 |a0(t)|, in which case a1(t) ≈ a0(t), so p1(t) ≈ p0(t).
More typically, we define a narrowband pulse in terms of its spectrum, namely that the width of the frequency response of a0(t) is much smaller than the carrier frequency f0. Because p0(t) = a0(t) e^{j 2π f0 t}, in the frequency domain P0(f) = A0(f − f0).
By the derivative property of Fourier transforms:
p1(t) = (1/(j 2π f0)) (d/dt) p0(t) ⟷ P1(f) = ((j 2π f)/(j 2π f0)) P0(f) = (f/f0) A0(f − f0) = A0(f − f0) + ((f − f0)/f0) A0(f − f0) ≈ A0(f − f0) = P0(f).
Thus for a narrowband pulse:
(1/(j 2π f0)) (d/dt) p0(t) ≈ p0(t).
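This narrowband approximation is easy to verify numerically for a Gaussian-envelope pulse (all pulse parameters below are assumed):

```python
import numpy as np

f0, fs = 5e6, 200e6
t = np.arange(0, 6e-6, 1 / fs)
a0 = np.exp(-(t - 3e-6) ** 2 / (2 * (0.6e-6) ** 2))   # slowly varying envelope
p0 = a0 * np.exp(2j * np.pi * f0 * t)

# p1 = (1/(j 2*pi*f0)) d/dt p0 should be ≈ p0 for a narrowband pulse
p1 = np.gradient(p0, 1 / fs) / (2j * np.pi * f0)
err = np.max(np.abs(p1 - p0)) / np.max(np.abs(p0))
print(err < 0.1)   # True: the relative error is only a few percent
```

Shrinking the envelope width (making the pulse wideband) makes `err` grow, which is exactly the failure mode of the narrowband assumption.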
U.21
(9.21)
Assumes the envelope of the waveform emitted from all parts of the transducer arrives at point P1 at about the same time.
Need pulse-width τ ≫ D²/(8 r1 c), where D is the transducer diameter.
If τ ≈ 3/f0, then we need roughly r1 ≫ D²/(24 λ), a few mm if D = 10 mm and λ = 1 mm.
Accurate for long pulses (narrowband). But short pulses give better depth resolution...
Poor approximation for large transducers or small depths z1.
Makes lateral resolution determined by the relative phases over the transducer, not by the pulse envelope.
Applying the steady-state or plane-wave approximation (9.21) to (U.12) yields the final incident pressure-field approximation:
u(P1, t) ≈ e^{j 2π f0 t} (cos θ1 / (jλ r1)) [∫∫ s(x0, y0) e^{−j k r01} dx0 dy0] a0(t − r1/c).
If point P1 has reflectivity R(P1), then by reciprocity the contribution of that (infinitesimal) point to the (differential) pressure reflected back to transducer point P0′ is (applying again the narrowband and steady-state approximations):
u(P0′, t; P1) = (cos θ10 / r10) (1/(2π c)) (d/dt) [R(P1) u(P1, t − r10/c)]
≈ R(P1) (cos θ1 / r1) u(P1, t − r10/c)  [using the narrowband approximation]
≈ e^{j 2π f0 t} R(P1) (cos θ1 / r1)² [∫∫ s(x0, y0) e^{−j k r01} dx0 dy0] e^{−j k r10} a(t − 2 r1/c),  [using steady state]
where a(t) ≜ (j/λ)² a0(t) (the two diffraction constants, one per propagation direction, are absorbed into the pulse). Now apply superposition over all possible reflectors in 3D object space:
u(P0′, t) = ∫∫∫ u(P0′, t; P1) dP1 ≈ e^{j 2π f0 t} ∫∫∫ R(P1) (cos θ1 / r1)² [∫∫ s(x0, y0) e^{−j k r01} dx0 dy0] e^{−j k r10} a(t − 2 r1/c) dP1.
Signal model
Assuming transducer linearity, the output signal is (proportional to) the integral of the reflected pressure over the transducer:
v(t) = ∫∫ s(x0′, y0′) u(P0′, t) dx0′ dy0′
= K e^{j 2π f0 t} ∫∫∫ R(P1) (cos θ1 / r1)² [∫∫ s(x0, y0) e^{−j k r01} dx0 dy0] [∫∫ s(x0′, y0′) e^{−j k r10} dx0′ dy0′] a(t − 2 r1/c) dP1.
U.22
The two bracketed aperture integrals are identical, so defining the (narrowband) beam pattern
b_Narrowband(x1, y1, z1) ≜ (cos θ1 / r1) e^{j k r1} ∫∫ s(x0, y0) e^{−j k r01} dx0 dy0,  (U.14)
we obtain
v(t) = K e^{j 2π f0 t} ∫∫∫ R(P1) e^{−j 2 k r1} b²_Narrowband(x1, y1, z1) a(t − 2 r1/c) dP1.  (U.15)
The expression (U.15) is suitable for numerical evaluation of transducer designs, but for intuition we want to simplify b_Narrowband.
What would the ideal beam pattern be? Perhaps b(x, y, z) = s(x, y), as in the geometric near-field analysis.
Image formation
The above analysis was for the transducer centered at (0, 0). Based on our earlier near-field geometric analysis, and the gain corrections suggested by the signal equation above, the natural estimate of reflectivity is:
R̂(0, 0, z) ≜ (1/K) z² v(2z/c)
≈ ∫∫∫ R(P1) e^{−j 2 k r1} b²_Narrowband(x1, y1, z1) a(2 (z − r1)/c) dP1  (U.16)
= ∫∫∫ R(P1) h(0, 0, z; P1) dP1,
where
h(x, y, z; P1) ≜ e^{−j 2 k r1} [speckle] b²_Narrowband(x1 − x, y1 − y, z1) [lateral] a(2 (z − r1)/c) [depth, range].
Above we have included x, y in the PSF for generality, assuming B-mode scanning. For A-mode, x = y = 0.
Note that h(·) depends on z1, not z − z1, revealing the depth dependence of the PSF, so it is not a convolution, even if we make the approximation z − r1 ≈ z − z1 in the range term.
The phase modulation e^{−j 2 k r1} contributes to speckle: destructive interference of reflections from different depths.
Because 2 k r = 2π r / (λ/2), this term wraps around 2π every half wavelength.
If the wavelength is 1 mm, then this term wraps 2π in phase every 0.5 mm!
To interpret the PSF, we would like to simplify its expression, particularly the beam-pattern part.
Here we needed z² gain compensation because we accounted for diffraction spreading in both directions.
In practice we would need additional gain to compensate for attenuation.
U.23
Paraxial approximation
Near the axis, cos θ1 ≈ 1, so
b_Narrowband(x1, y1, z1) ≈ b_Paraxial(x1, y1, z1) ≜ e^{j k r1} [ s(x, y) ∗∗ (1/z1) e^{−j k √(x² + y² + z1²)} ](x1, y1),
where the 2D convolution ∗∗ is over x and y. Convolution with such an exponential term is hard and non-intuitive. Thus we want to further simplify b_Paraxial and/or b_Narrowband.
To drop the 2nd-order term of the binomial expansion of the square root, we need k z1 θt⁴/8 ≪ 1, where θt² = max[(x1 − x0)² + (y1 − y0)²]/z1² = r²max/z1². Thus we need z1³ ≫ k r⁴max/8 = π r⁴max/(4λ) ≈ r⁴max/λ, or z1 ≫ rmax ∛(rmax/λ).
Combining all of the above approximations:
b_Paraxial(x, y, z) ≈ e^{j k (r1 − z)} b_Fresnel(x, y, z), where
b_Fresnel(x, y, z) ≜ (1/(λ² z²)) ∫∫ s(x0, y0) exp(−j (k/(2z)) [(x − x0)² + (y − y0)²]) dx0 dy0 = s(x, y) ∗∗ (1/(λ² z²)) exp(−j (k/(2z)) [x² + y²]).
With this approximation,
R̂(0, 0, z) ≈ ∫∫∫ R(x1, y1, z1) e^{−j 2 k z1} b²_Fresnel(x1, y1, z1) a((2/c)(z − z1)) dx1 dy1 dz1.
This cannot be written as a 3D convolution! (Because the lateral response b_Fresnel depends on z; see figures below.)
b_Fresnel is still messy due to the convolution with a complex exponential having quadratic phase. (Hence no pictures yet...)
Focusing preview
b_Fresnel(x, y, z) = (1/(λ² z²)) ∫∫ s(x0, y0) exp(−j (k/(2z)) [(x − x0)² + (y − y0)²]) dx0 dy0
= exp(−j (k/(2z)) [x² + y²]) (1/(λ² z²)) ∫∫ s(x0, y0) exp(−j (k/(2z)) [x0² + y0²]) exp(j (k/z) [x x0 + y y0]) dx0 dy0
= exp(−j (k/(2z)) [x² + y²]) (1/(λ² z²)) F{ s(x, y) exp(−j (k/(2z)) [x² + y²]) } |_{(x/(λz), y/(λz))}.
To cancel the quadratic phase term inside the Fourier transform, use a spherical (acoustic) lens of radius R having thickness proportional to
R − √(R² − (x² + y²)) ≈ (x² + y²)/(2R).  (U.17)
U.24
b_Fresnel(x, y, z) = (1/(λ² z²)) ∫∫ s(x0, y0) exp(−j (k/(2z)) [(x − x0)² + (y − y0)²]) dx0 dy0
= e^{−j k r²/(2z)} (1/(λ² z²)) ∫∫ s(x0, y0) e^{−j k r0²/(2z)} exp(j (k/z) [x x0 + y y0]) dx0 dy0,  (9.38)
where r² ≜ x² + y² and r0² ≜ x0² + y0².
Ignoring the inner phase term e^{−j k r0²/(2z)} in (9.38) leads to the Fraunhofer approximation to the beam pattern:
b_Fresnel(x, y, z) ≈ e^{−j k r²/(2z)} b_Fraunhofer(x, y, z), where
b_Fraunhofer(x, y, z) = (1/(λ² z²)) S(u, v) |_{u = x/(λz), v = y/(λz)},  (9.39)
and S = F[s]. Note the importance of accurate notation: we take the FT of s(x, y), but evaluate the transform at spatial (x, y) arguments.
Ignoring the inner phase term is reasonable if k r0²/(2z) ≪ 1 (radian), i.e.,
z ≫ (π/λ) r²0,max = (π/4) D²max/λ ≈ D²max/λ.
The range z ≫ D²max/λ is called the far field. There,
R̂(0, 0, z) ≈ ∫∫∫ R(x1, y1, z1) e^{−j 2 k z1} b²_Fraunhofer(x1, y1, z1) a((2/c)(z − z1)) dx1 dy1 dz1.
Example: square transducer
If s(x, y) = rect(x/D) rect(y/D), then S(u, v) = D² sinc(D u) sinc(D v). So the far-field beam pattern is
b_Fraunhofer(x, y, z) = (1/(λ² z²)) S(x/(λz), y/(λz)) = (D²/(λ² z²)) sinc(D x/(λz)) sinc(D y/(λz)).  (9.41)
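A numerical check of the Fraunhofer result along y = 0: the 1D aperture integral reduces to the predicted sinc (the aperture, wavelength, and depth values are assumed, and the overall 1/(λz)² gain is omitted):

```python
import numpy as np

lam, z, D = 0.5e-3, 0.2, 0.01           # wavelength, depth, aperture [m] (assumed)
x0 = np.linspace(-D / 2, D / 2, 4001)   # aperture samples
dx0 = x0[1] - x0[0]
x = np.linspace(-0.02, 0.02, 401)       # lateral positions at depth z

# 1D Fraunhofer integral: ∫ s(x0) exp(-j 2π x x0/(λz)) dx0 = D sinc(Dx/(λz))
b = np.array([np.sum(np.exp(-2j * np.pi * xi * x0 / (lam * z))) * dx0 for xi in x])
ref = D * np.sinc(D * x / (lam * z))    # np.sinc(t) = sin(πt)/(πt)
print(np.max(np.abs(b - ref)) < 1e-4)   # True: matches the sinc beam pattern
```

The first null sits at x = λz/D = 10 mm here, illustrating how a larger aperture narrows the far-field beam.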
U.25
For the polar-coordinate analysis, let d(x; r, θ) be the distance from a source point P0 = (x, 0, 0) on the aperture to a field point P1 = (r sin θ, 0, r cos θ) in the y = 0 plane:
d(x; r, θ) = √(r² − 2 x r sin θ + x²) = r √(1 − 2 (x/r) sin θ + (x/r)²) = r f(x/r), where f(t) ≜ √(1 − 2 t sin θ + t²).
One can show f(0) = 1, f′(0) = −sin θ, f″(0) = cos²θ, f‴(0) = 3 sin θ cos²θ, f⁗(0) = 3 cos²θ (5 sin²θ − 1).
Thus
f(t) ≈ 1 − t sin θ + (1/2) t² cos²θ + (1/2) t³ sin θ cos²θ + (3/4!) t⁴ cos²θ (5 sin²θ − 1).
U.26
Thus
d = r f(x/r) ≈ r [1 − (x/r) sin θ + (1/2)(x/r)² cos²θ + (1/2)(x/r)³ sin θ cos²θ + (3/4!)(x/r)⁴ cos²θ (5 sin²θ − 1)]
≈ r − x sin θ + (x²/(2r)) cos²θ.
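A quick numerical check of this expansion, with an assumed field point and aperture offset:

```python
import numpy as np

r, theta = 0.10, np.deg2rad(10.0)   # field point: 10 cm range, 10 deg off axis (assumed)
x = 0.005                           # aperture point 5 mm off center (assumed)

d_exact = np.sqrt(r**2 - 2 * x * r * np.sin(theta) + x**2)
d_approx = r - x * np.sin(theta) + x**2 * np.cos(theta)**2 / (2 * r)
print(abs(d_exact - d_approx) < 2e-6)   # True: error ~1 μm, small vs. a typical λ
```

The residual is dominated by the dropped third-order (x³/r²) term, consistent with the accuracy conditions derived next.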
Physical interpretations:
r : propagation
−x sin θ : steering
(x²/(2r)) cos²θ : focusing (curvature of the wavefront)
When is this approximation accurate?
Because d enters as e^{−j k d}, we can ignore the 3rd-order (aberration) term if k r (1/2)(x/r)³ sin θ cos²θ ≪ 1 radian.
The maximum of sin θ cos²θ occurs when sin²θ = 1/3, so (1/2) sin θ cos²θ ≤ 1/3^{3/2}. Thus k r (1/2)(x/r)³ sin θ cos²θ ≤ k r (x/(√3 r))³.
Thus we need r² ≫ k (x/√3)³, or r ≫ √(k (x/√3)³), where x is half the width of the (centered) aperture.
E.g., if 10 mm wide aperture and λ = 0.5 mm, then r ≫ 50 mm.
But the 3rd-order term is 0 for θ = 0, so the 4th-order term is more important on axis.
For θ = 0, for the 4th-order term to be negligible we need r k (3/4!) (xmax/r)⁴ ≪ 1, or r³ ≫ (2π/λ)(1/8) x⁴max ≈ x⁴max/λ, i.e., r ≫ xmax ∛(xmax/λ) (cf. the earlier condition for the Fresnel approximation).
b_Fresnel is still messy because it involves a complex exponential with quadratic phase. (Hence no pictures yet...)
It is suitable for computation, but there remains room for refining intuition.
Fraunhofer approximation in polar coordinates
We can ignore the 2nd-order term if k x²max/(2r) ≪ 1, i.e., r ≫ π x²max/λ = (π/4) D²/λ ≈ D²/λ. Then
b_Fraunhofer(θ) = (cos θ / λ) S_X(sin θ / λ),
where S_X = F[s_X]. In words: the (far-field) angular beam pattern is the FT of the aperture function, evaluated at sin θ/λ.
Note the importance of accurate notation: we take the FT of s_X(x), but evaluate the transform at a spatial (x) argument!
The Fraunhofer (far-field) beam pattern (in polar coordinates) is independent of r.
U.27
Example. If s_X(x) = rect(x/D), then S_X(u) = D sinc(D u), so the far-field beam pattern is
b_Fraunhofer(θ) = (cos θ / λ) S_X(sin θ / λ) = (D cos θ / λ) sinc(sin θ / (λ/D)).
Note x/z = tan θ ≈ sin θ and cos θ ≈ 1 for θ ≈ 0. So the polar and Cartesian approximations are equivalent near the axis.
Note θ ∈ (−π/2, π/2), i.e., the above approximations are good for any angle, whereas the Cartesian form is good only for small x/z.
The following figure shows b_Fraunhofer.
[Figure: angular far-field beam pattern b_Fraunhofer(θ)/D for a rectangular transducer with D = 6 wavelengths (θ in degrees), plus simulated beam cross-sections with x and z in wavelengths.]
U.28
[Figure: beam extent vs. depth z/D for source sizes D and 2D.]
Effect of doubling the source size: a narrower far-field beam pattern, but a wider near-field beam.
How to overcome the tradeoff between far-field resolution and depth-of-field, i.e., how can we get good near-field resolution even with a large transducer? Answer: by focusing.
Approximate beam patterns in Cartesian coordinates
It is also useful to express the Fresnel and Fraunhofer beam patterns (9.38) and (9.39) in Cartesian coordinates. Using the approximation sin θ = x/r ≈ x/z leads to:
b_Fresnel(x, y, z) = (1/(λ² z²)) ∫∫ s(x0, y0) exp(−j (k/(2z)) [(x − x0)² + (y − y0)²]) dx0 dy0 ≈ e^{−j k r²/(2z)} b_Fraunhofer(x, y, z),
b_Fraunhofer(x, y, z) = (1/(λ² z²)) S(u, v) |_{u = x/(λz), v = y/(λz)} = (1/(λ² z²)) ∫∫ s(x0, y0) exp(j (2π/(λz)) [x x0 + y y0]) dx0 dy0.
U.29
Another way to modify the phase is to propagate the wave through a material having a different index of refraction, or equivalently a different sound velocity. Writing Δ for the material thickness:
p0(t) = p(t − Δ/c0) ≈ p(t) e^{−j ω0 Δ/c0} = p(t) e^{−j 2π Δ/λ}  [tissue, speed c0]
p1(t) = p(t − Δ/c1) ≈ p(t) e^{−j ω0 Δ/c1} = p0(t) e^{−j 2π (c0/c1 − 1) Δ/λ}.  [lens material, speed c1]
By varying the thickness over the transducer face, one can modify the phase of s(x, y) to make, for example, an acoustic lens.
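The narrowband phase approximation above can be verified numerically. In this sketch the sound speeds, carrier frequency, and Gaussian envelope are assumed values chosen for illustration:

```python
import numpy as np

# Check: for p(t) = a(t) e^{j w0 t} with a slowly varying envelope,
# p(t - Delta/c1) ≈ p0(t) e^{-j 2π (c0/c1 - 1) Delta/λ}, where p0(t) = p(t - Delta/c0).
c0, c1 = 1500.0, 2400.0        # sound speed in tissue and in lens material (assumed, m/s)
f0 = 2e6                       # carrier frequency (assumed, Hz)
lam = c0 / f0                  # wavelength in tissue
w0 = 2 * np.pi * f0

a = lambda t: np.exp(-(t / 50e-6) ** 2)    # slowly varying envelope (50 µs scale)
p = lambda t: a(t) * np.exp(1j * w0 * t)

Delta = 0.2 * lam              # a thin slab, so the envelope shift is negligible
t = 1e-6
p1_exact = p(t - Delta / c1)
p1_approx = p(t - Delta / c0) * np.exp(-1j * 2 * np.pi * (c0 / c1 - 1) * Delta / lam)
print(np.isclose(p1_exact, p1_approx, rtol=1e-3))  # True
```

The carrier phases agree exactly; the only error is the tiny envelope shift $\Delta/c_0$ vs. $\Delta/c_1$, which is negligible for a narrowband pulse.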
To focus at depth $z_f$, apply the quadratic phase $s_{\mathrm{new}}(x) = s_{\mathrm{orig}}(x)\, e^{-\jmath k x^2/(2 z_f)}$. So in particular for $r \approx z_f$:
$$b^{\mathrm{new}}_{\mathrm{Fresnel}}(r, \theta) \approx \cos\theta \int s_{\mathrm{new}}(x)\, e^{\jmath k x^2/(2r)}\, e^{-\jmath k x \sin\theta}\, dx = \cos\theta \int s_{\mathrm{orig}}(x)\, e^{\jmath k (x^2/2)[1/r - 1/z_f]}\, e^{-\jmath k x \sin\theta}\, dx.$$
At $r = z_f$ the quadratic phase cancels, so at depth $z_f$ (and nearby), even if $z_f$ is in the near field, the modified system achieves the diffraction-limited resolution (of about $\lambda z_f / D$), even for a large transducer.
This focusing could be done with a curved acoustic lens of appropriate radius and index of refraction.
In typical lens materials the acoustic waves travel faster, i.e., $c_1 > c_0$, so use thickness proportional to $x^2$ in 1D or $x^2 + y^2$ in 2D.
[Figure: focused transducer of width $D$ producing a beam that narrows to width about $\lambda z_f / D$ at the focal depth $z_f$.]
The key point is that this focusing technique works even if zf is in near field of transducer!
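Here is a small 1-D Fresnel-integral sketch of this effect (all parameters are illustrative assumptions, in wavelength units): applying a quadratic focusing phase across a large aperture recovers the diffraction-limited sinc pattern at the focal depth $z_f$, even though $z_f$ is well inside the near field $D^2/\lambda$.

```python
import numpy as np

lam = 1.0                 # work in wavelength units
D = 32 * lam              # large aperture
zf = 200 * lam            # focal depth; near field, since D^2/lam = 1024
k = 2 * np.pi / lam

dx0 = 0.05
x0 = np.arange(-D / 2 + dx0 / 2, D / 2, dx0)     # aperture samples (midpoints)

def fresnel(x, z, focus=False):
    """|b_Fresnel(x, z)| for s = rect(x0/D), optionally with a quadratic
    focusing phase applied over the aperture (sign matched to the kernel)."""
    s = np.ones_like(x0, dtype=complex)
    if focus:
        s = s * np.exp(1j * k * x0**2 / (2 * zf))
    return abs(np.sum(s * np.exp(-1j * k * (x - x0)**2 / (2 * z))) * dx0)

# At the focus, |b| reduces to the diffraction-limited pattern D|sinc(D x/(lam zf))|,
# so the response is nearly zero at the expected first null x = lam*zf/D:
x_null = lam * zf / D
print(fresnel(x_null, zf, focus=True) < 0.05 * fresnel(0.0, zf, focus=True))  # True
# Without focusing, the on-axis response at zf is far below the focused peak D:
print(fresnel(0.0, zf, focus=False) < fresnel(0.0, zf, focus=True))           # True
```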
In these images, the transducer is indicated by the black line along the left margin and the transmitted wave is curved to focus at a particular point. Previously we discussed that the depth resolution is determined by the envelope function (in this case a Gaussian). The lateral localization function is more complicated and is determined by diffraction.
The echo from a far-field reflector will approximate a plane wave impinging on the transducer plane. By using a wedge-shaped piece of material in which the velocity of sound is faster than in tissue, the wave fronts arriving from an angle $\theta$ can be made parallel to the transducer, so the maximum signal comes from reflectors at that angle.
In particular, suppose we choose $\Delta$ in the phase/delay analysis above to vary linearly over the transducer face such that
$$(c_0/c_1 - 1)\, \Delta(x) = \alpha x,$$
where $\alpha = \sin\theta_0$ is the desired beam direction.
The corresponding time delay is $\tau(x) = (c_0/c_1 - 1)\, \Delta(x)/c_0 = \alpha x / c_0$.
The resulting ideal (1D) beam-steering transducer function would be:
$$s_X^{\mathrm{ideal}}(x) = e^{\jmath 2\pi \alpha x/\lambda}\, \mathrm{rect}\!\left(\frac{x}{D}\right),$$
whose far-field beam pattern
$$b_{\mathrm{Fraunhofer}}(\theta) = \cos\theta \; D\, \mathrm{sinc}\!\left(\frac{D(\sin\theta - \alpha)}{\lambda}\right)$$
is peaked at $\sin\theta = \alpha$, i.e., at $\theta = \theta_0$. So steering by phase delays works! Note the somewhat larger sidelobes.
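A numerical sketch of the steered pattern (assuming $D = 6\lambda$ and $\theta_0 = 45°$, as in the figure): the linear phase shifts the sinc mainlobe to $\sin\theta = \alpha$, while the $\cos\theta$ obliquity factor pulls the peak of the full pattern slightly toward broadside.

```python
import numpy as np

lam = 1.0
D = 6 * lam
theta0 = np.deg2rad(45.0)
alpha = np.sin(theta0)

def b_steered(theta):
    """Steered far-field pattern: cos(theta) * D * sinc(D (sin(theta) - alpha)/lam)."""
    return np.cos(theta) * D * np.sinc(D * (np.sin(theta) - alpha) / lam)

# The shifted sinc hits its peak value exactly at theta0:
print(np.isclose(abs(b_steered(theta0)), D * np.cos(theta0)))   # True
# The overall maximum lies within ~1 degree of theta0 (cos-theta tilt):
theta = np.linspace(-np.pi / 2, np.pi / 2, 20001)
theta_peak = theta[np.argmax(np.abs(b_steered(theta)))]
print(abs(theta_peak - theta0) < np.deg2rad(2))                 # True
```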
[Figure: angular far-field beam pattern $b_{\mathrm{Fraunhofer}}(\theta)/D$ for mechanical beam steering, with $D = 6\lambda$ and $\theta_0 = 45°$; the mainlobe is centered near $\theta_0$.]
This wedge-shaped acoustic lens (cf. prism) is fixed to a single angle 0 . A phased array allows one to vary the angle electronically
in essentially real time.
Speckle Noise
Speckle is really an artifact, because it is nonrandom in the sense that the same structures appear if the scan is repeated under the same conditions.
But it is random in the sense that the scatterers are located randomly in tissue (i.e., they vary from patient to patient).
Summary: speckle noise leads to Rayleigh statistics and a disappointingly low SNR.
A 1D example
If $R(x, y, z) = \delta(x, y)\, R(z)$ then
$$v_c(t) = e^{\jmath \omega_0 t} \int R(z)\, e^{-\jmath 2 k z}\, a\!\left(t - \frac{2z}{c}\right) dz,$$
and if $R(z) = \sum_l \delta(z - z_l)$, then the demodulated signal, viewed as a function of depth $z = ct/2$, is
$$\hat{R}(z) = \int \sum_l \delta(z' - z_l)\, e^{-\jmath 2 k z'}\, a\!\left(\frac{2(z - z')}{c}\right) dz' = \sum_l e^{-\jmath 4\pi z_l/\lambda}\, a\!\left(\frac{2(z - z_l)}{c}\right).$$
For subsequent analysis it is more convenient to express this in terms of wavelengths. Let $w = z/\lambda$, $w_l = z_l/\lambda$, and $h(w) = a(2w/f_0)$; then
$$\hat{R}(w) = \sum_l e^{-\jmath 4\pi w_l}\, h(w - w_l).$$
Without the phase term we would just get a superposition of shifted $h$ functions. But with it, we get destructive interference.
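This interference is easy to simulate. The sketch below assumes a Gaussian envelope $h$ and randomly placed scatterers; with the $e^{-\jmath 4\pi w_l}$ phase term, the magnitude of the superposition fluctuates and dips well below the phase-free sum:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_h = 2.0                               # envelope width in wavelengths (assumed)
h = lambda w: np.exp(-w**2 / (2 * sigma_h**2))

w_l = rng.uniform(0, 100, size=400)         # random scatterer depths (wavelengths)
w = np.linspace(0, 100, 2001)               # evaluation grid

# With phase: coherent sum with random phases e^{-j 4π w_l}
R_with = np.abs(np.sum(np.exp(-4j * np.pi * w_l[:, None]) * h(w - w_l[:, None]), axis=0))
# Without phase: smooth superposition of shifted envelopes
R_without = np.sum(h(w - w_l[:, None]), axis=0)

# Destructive interference makes the speckled signal much weaker on average:
print(R_with.mean() < 0.5 * R_without.mean())
```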
Example.
[Figure: three panels versus $z$ (in wavelengths, 0 to 100): the scatterer reflectivity $|R(z)|$, the superposition of envelopes without the phase term, and $|\hat{R}(z)|$ with the phase term, which shows speckle-like fluctuations due to destructive interference.]
How can we model this phenomenon so as to characterize its effects on image quality? Statistically!
Rayleigh distribution
From [9, Ex. 4.37, p. 229], if $U$ and $V$ are two zero-mean, unit-variance, independent Gaussian random variables, then $W = \sqrt{U^2 + V^2}$ has a Rayleigh distribution. Here, consider
$$U_n = \sqrt{\frac{2}{n}} \sum_{l=1}^{n} \cos\phi_l, \qquad V_n = \sqrt{\frac{2}{n}} \sum_{l=1}^{n} \sin\phi_l,$$
where the phases $\phi_l$ are i.i.d. and uniform over $[0, 2\pi]$.
Note $E[U_n] = E[V_n] = 0$ because $E[\cos(\phi + c)] = 0$ for any constant $c$ when $\phi$ has a uniform distribution over $[0, 2\pi]$. (See [9, Ex. 3.33, p. 131].) Also, $\mathrm{Var}\{U_n\} = 2\, \mathrm{Var}\{\cos\phi\}$ because the $\phi_l$ are i.i.d., where
$$\mathrm{Var}\{\cos\phi\} = E\!\left[(\cos\phi - E[\cos\phi])^2\right] = \int_0^{2\pi} \cos^2\!\phi\, \frac{1}{2\pi}\, d\phi = \frac{1}{2\pi} \int_0^{2\pi} \frac{1}{2}\left(1 + \cos(2\phi)\right) d\phi = \frac{1}{2}.$$
Thus $\mathrm{Var}\{U_n\} = \mathrm{Var}\{V_n\} = 1$. Furthermore, $U_n$ and $V_n$ are uncorrelated: $E[U_n V_n] = 2\, E[\cos\phi \sin\phi] = 0$.
So to show that $\sqrt{U_n^2 + V_n^2}$ is approximately Rayleigh distributed, all that is left for us to show is that for large $n$, $U_n$ and $V_n$ are approximately (jointly) normally distributed.
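A Monte Carlo sketch of this construction (the values of $n$ and the trial count are arbitrary assumptions): drawing i.i.d. uniform phases and forming $U_n$ and $V_n$ reproduces the moments derived above, and $W_n = \sqrt{U_n^2 + V_n^2}$ has mean close to the Rayleigh value $\sqrt{\pi/2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 100, 20_000
phi = rng.uniform(0, 2 * np.pi, size=(trials, n))   # i.i.d. uniform phases
U = np.sqrt(2 / n) * np.cos(phi).sum(axis=1)
V = np.sqrt(2 / n) * np.sin(phi).sum(axis=1)
W = np.sqrt(U**2 + V**2)

print(abs(U.mean()) < 0.03, abs(V.mean()) < 0.03)        # zero means
print(abs(U.var() - 1) < 0.05, abs(V.var() - 1) < 0.05)  # unit variances
print(abs((U * V).mean()) < 0.03)                        # uncorrelated
print(abs(W.mean() - np.sqrt(np.pi / 2)) < 0.02)         # Rayleigh mean sqrt(pi/2)
```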
Bivariate central limit theorem (CLT) [10, Thm. 1.4.3]
Let $(X_k, Y_k)$ be i.i.d. random vectors with respective means $\mu_X$ and $\mu_Y$, variances $\sigma_X^2$ and $\sigma_Y^2$, and correlation coefficient $\rho$, and define
$$\vec{Z}_n = \begin{bmatrix} \frac{1}{\sqrt{n}} \sum_{k=1}^{n} \frac{X_k - \mu_X}{\sigma_X} \\ \frac{1}{\sqrt{n}} \sum_{k=1}^{n} \frac{Y_k - \mu_Y}{\sigma_Y} \end{bmatrix}.$$
As $n \to \infty$, $\vec{Z}_n$ converges in distribution to a bivariate normal random vector with zero mean and covariance
$$\begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}.$$
In particular, if $\rho = 0$, then as $n \to \infty$ the two components of $\vec{Z}_n$ approach independent Gaussian random variables.
Hence the statistics of speckle are often assumed to be Rayleigh.
Signal to noise ratio
$$\mathrm{SNR} = \frac{E[W]}{\sigma_W} = \frac{\sqrt{\pi/2}}{\sqrt{(4 - \pi)/2}} \approx 1.91 \qquad (9.72)$$
Low ratio! Averaging multiple (identically positioned) scans will not help. One can reduce speckle noise by compounding,
meaning combining scans taken from different directions so that the distances r01 , and hence the phases, are different, e.g., [11].
See [12] for further statistical analysis of envelope detected RF signals with applications to medical ultrasound.
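A quick Monte Carlo check of (9.72), sampling a Rayleigh amplitude directly:

```python
import numpy as np

# For a Rayleigh amplitude W, SNR = E[W]/σ_W = sqrt(pi/2)/sqrt((4-pi)/2) ≈ 1.91,
# independent of the scale parameter.
rng = np.random.default_rng(2)
W = rng.rayleigh(scale=1.0, size=1_000_000)
snr = W.mean() / W.std()
print(round(snr, 2))   # ≈ 1.91
print(np.isclose(np.sqrt(np.pi / 2) / np.sqrt((4 - np.pi) / 2), 1.913, atol=1e-3))  # True
```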
Summary
Bibliography
[1] T. L. Szabo. Diagnostic ultrasound imaging: Inside out. Academic Press, New York, 2004.
[2] K. K. Shung, M. B. Smith, and B. Tsui. Principles of medical imaging. Academic Press, New York, 1992.
[3] J. L. Prince and J. M. Links. Medical imaging signals and systems. Prentice-Hall, 2005.
[4] D. I. Hughes and F. A. Duck. Automatic attenuation compensation for ultrasonic imaging. Ultrasound in Med. and Biol., 23:651-64, 1997.
[5] J. W. Goodman. Introduction to Fourier optics. McGraw-Hill, New York, 1968.
[6] M. Born and E. Wolf. Principles of optics. Pergamon, Oxford, 1975.
[7] A. D. Pierce. Acoustics: An introduction to its physical principles and applications. McGraw-Hill, New York, 1981.
[8] A. Macovski. Medical imaging systems. Prentice-Hall, New Jersey, 1983.
[9] A. Leon-Garcia. Probability and random processes for electrical engineering. Addison-Wesley, New York, 2nd edition, 1994.
[10] P. J. Bickel and K. A. Doksum. Mathematical statistics. Holden-Day, Oakland, CA, 1977.
[11] G. M. Treece, A. H. Gee, and R. W. Prager. Ultrasound compounding with automatic attenuation compensation using paired angle scans. Ultrasound in Med. Biol., 33(4):630-42, 2007.
[12] R. F. Wagner, M. F. Insana, and D. G. Brown. Statistical properties of radio-frequency and envelope detected signals with applications to medical ultrasound. J. Opt. Soc. Am. A, 4(5):910-22, 1987.