Chapter 20 - Digital Image Processing in Nuclear Medicine - 2012 - Physics in Nuclear Medicine

This document discusses digital image processing techniques used in nuclear medicine. It begins by explaining that digital images consist of a grid of pixels, each with a discrete x-y location and value representing detected events. Various microprocessors are used to perform tasks like image acquisition, reconstruction, processing and display. Digital image processing allows manipulations like edge sharpening and contrast enhancement, as well as 3D viewing and image fusion. All processing relies on binary number representation in computer systems.

Chapter 20
Digital Image Processing in Nuclear Medicine

Image processing refers to a variety of techniques that are used to maximize the information yield from a picture. In nuclear medicine, computer-based image-processing techniques are especially flexible and powerful. In addition to performing basic image manipulations for edge sharpening, contrast enhancement, and so forth, computer-based techniques have a variety of other uses that are essential for modern nuclear medicine. Examples are the processing of raw data for tomographic image reconstruction in single photon emission computed tomography (SPECT) and positron emission tomography (PET) (see Chapters 16 to 18), and correcting for imaging system artifacts (e.g., Chapter 14, Section B, and Chapter 18, Section D). Another important example is time analysis of sequentially acquired images, such as is done for extracting kinetic data for tracer kinetic models (see Chapter 21). Computer-based image displays also allow three-dimensional (3-D) images acquired in SPECT and PET to be viewed from different angles and permit one to fuse nuclear medicine images with images acquired with other modalities, such as computed tomography (CT) and magnetic resonance imaging (MRI) (see Chapter 19). Computer-based acquisition and processing also permit the raw data and processed image data to be stored digitally (e.g., on computer disks) for later analysis and display.

All of these tasks are performed on silicon-based processor chips, generically called microprocessors. The central processing unit (CPU) of a general-purpose computer, such as a personal computer, is called a general-purpose microprocessor. Such devices can be programmed to perform a wide variety of tasks, but they are relatively large and not very energy efficient. For very specific tasks, an application-specific integrated circuit (ASIC) often is used. ASICs are compact and energy efficient, but their functionality is hardwired into their design and cannot be changed. Examples of their uses include digitizing signals (analog-to-digital converters) and comparing signal amplitudes (pulse-height analyzers and multichannel analyzers). Other categories of microprocessors include digital signal processors (DSPs) and graphics processing units. These devices have limited programmability, but they are capable of very fast real-time signal and image processing, such as 3-D image rotation and similar types of image manipulations.

The technology of microprocessors and computers is undergoing continuous and rapid evolution and improvement, such that a "state-of-the-art" description rarely is valid for more than a year or, in some cases, even a few months. However, the end result is that the usage of computers and microprocessors in nuclear medicine is ubiquitous. They are used not only for acquisition, reconstruction, processing, and display of image data but also for administrative applications such as scheduling, report generation, and monitoring of quality control protocols.

In this chapter, we describe general concepts of digital image processing for nuclear medicine imaging. Additional discussions of specific applications are found in Chapters 13 to 19 and Chapter 21.
363
364 Physics in Nuclear Medicine

A.  DIGITAL IMAGES

1.  Basic Characteristics and Terminology

For many years, nuclear medicine images were produced directly on film, by exposing the film to a light source that produced flashes of light when radiations were detected by the imaging instrument. As with ordinary photographs, the image was recorded with a virtually continuous range of brightness levels and x-y locations on the film. Such images sometimes are referred to as analog images. Very little could be done in the way of "image processing" after the image was recorded.

Virtually all modern nuclear medicine images are recorded as digital images. This is required for computerized image processing. A digital image is one in which events are localized (or "binned") within a grid comprising a finite number of discrete (usually square) picture elements, or pixels (Fig. 20-1). Each pixel has a digital (nonfractional) location or address, for example, "x = 5, y = 6." For a gamma camera image, the area of the detector is divided into the desired number of pixels (Fig. 20-2). For example, a camera with a field-of-view of 40 cm × 40 cm might be divided into a 128 × 128 grid of pixels, with each pixel therefore measuring 0.3125 cm × 0.3125 cm. Each pixel corresponds to a range of possible physical locations within the image. If an event were determined to have interacted at a location x = 4.8 cm, y = 12.4 cm, the appropriate pixel location for this event would be

x-pixel location = int(4.8 cm / 0.3125 cm/pixel) = int(15.36) = 15

y-pixel location = int(12.4 cm / 0.3125 cm/pixel) = int(39.68) = 40

where int(x) denotes the nearest integer to x, and the pixels are labeled from 0-127 with the coordinate system defined as shown in Figure 20-2.

A similar format is used for digital multislice tomographic images, except that the discrete elements of the image would correspond to discrete 3-D volumes of tissue within a cross-sectional image. The volume is given by the product of the x- and y-pixel dimensions multiplied by the slice thickness. Thus they are more appropriately called volume elements, or voxels. However, when discussing an individual tomographic slice, the term pixel still is commonly used. In tomographic images, the "intensity" of each voxel may or may not have a discrete integer value. For example, voxel values for a reconstructed image will generally have noninteger values corresponding to the calculated concentration of radionuclide within the voxel.

Depending on the mode of acquisition (discussed in Section A.4), either the x-y address of the pixel in which each event occurs, or the pixel value, p(x, y), is stored in computer memory. For 3-D imaging modes, such as 3-D SPECT or PET, individual events are localized within a 3-D matrix of voxels, and the reconstructed value in a voxel is denoted as v(x, y, z). Depending on how data are acquired and processed by the imaging system, the pixel or voxel value may correspond to the number of counts, counts per unit time, the reconstructed pixel or voxel value, or absolute radionuclide concentrations (kBq/cc or µCi/cc).

Although most interactions between the user and a computer system involve conventional decimal numbers, the internal operations of the computer usually are performed using binary numbers. Binary number representation uses powers of 2, whereas the commonly used decimal number system uses powers of 10. For example, in decimal representation, the number 13 means [(1 × 10¹) + (3 × 10⁰)]. In the binary number system, the same number is represented as 1101, meaning [(1 × 2³) + (1 × 2²) + (0 × 2¹) + (1 × 2⁰)], or (8 + 4 + 0 + 1) = 13. Each digit in the binary number representation is called a bit (an abbreviation

FIGURE 20-1  A digital image consists of a grid or matrix of pixels, each of size L × L units. Each pixel has an x-y address location, with pixel value, p(x,y), corresponding to the number of counts or other quantity associated with that pixel.
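The mapping from a physical interaction position to a pixel address can be sketched in a few lines of code. This is a minimal illustration only; the function name pixel_index and the clamping of out-of-field events to the edge pixels are our own assumptions, not part of any particular camera's software.

```python
def pixel_index(position_cm, fov_cm=40.0, matrix=128):
    """Map a physical coordinate (cm) to a pixel index in 0 .. matrix-1.

    The pixel size is fov_cm / matrix (0.3125 cm for a 40-cm field-of-view
    and a 128 x 128 grid); the fractional pixel coordinate is rounded to
    the nearest integer, matching the int(x) convention of the text.
    """
    pixel_size = fov_cm / matrix                 # cm per pixel
    index = int(round(position_cm / pixel_size))
    return min(max(index, 0), matrix - 1)        # clamp to the valid range

# The event from the text: x = 4.8 cm, y = 12.4 cm
x_pix = pixel_index(4.8)     # 4.8 / 0.3125 = 15.36 -> pixel 15
y_pix = pixel_index(12.4)    # 12.4 / 0.3125 = 39.68 -> pixel 40

# Frame-mode bookkeeping (Section A.4): increment that pixel by one count
frame = [[0] * 128 for _ in range(128)]          # p(x, y), initially all zeros
frame[y_pix][x_pix] += 1
```

The final two lines show the per-event bookkeeping described later for frame-mode acquisition: the pixel containing the event is simply incremented by one count.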

FIGURE 20-2  Subdivision of the active area of the gamma camera detector for generating a digital image. For each detected event: (1) digitize the photomultiplier (PM) tube signals; (2) compute the event x-y location; (3) determine the x-y pixel location in the image matrix and increment that pixel by 1 count. The photomultiplier tube signals are analyzed using analog-to-digital converters to assign the digital matrix location for each detected event.

for "binary digit"). In general, an n-bit binary number can represent decimal numbers with values between zero and (2ⁿ − 1).

Binary numbers are employed in computer systems because they can be represented conveniently by electronic components that can exist only in an "on" or "off" state. Thus an n-bit binary number can be represented by the "on" or "off" state of a sequence of n such components. To communicate sensibly with the outside world, the binary numbers used within the computer must be converted into decimal integers or into decimal numbers and fractions. The latter are called floating point numbers. The methods by which binary numbers are converted to decimal format are beyond the scope of this presentation and can be found in more advanced texts on computer systems.

Digital images are characterized by matrix size and pixel depth. Matrix size refers to the number of discrete picture elements in the matrix. This in turn affects the degree of spatial detail that can be presented, with larger matrices generally providing more detail. Matrix sizes used for nuclear medicine images typically range from (64 × 64) to (512 × 512) pixels. Matrix size virtually always involves a power of 2 (2⁶ and 2⁹ in the previous examples) because of the underlying binary number system used in the computer.

Pixel depth refers to the maximum number of events that can be recorded per pixel. Most systems have pixel depths ranging from 8 bits (2⁸ = 256; counts range from 0 to 255) to 16 bits (2¹⁶ = 65,536; counts range from 0 to 65,535). Note again that these values are related to the underlying binary number system used in the computer. When the number of events recorded in a pixel exceeds the allowed pixel depth, the count for that pixel is reset to 0 and starts over, which can lead to erroneous results and image artifacts.

Pixel depth also affects the number of gray shades (or color levels) that can be represented within the displayed image. In most computer systems in use in nuclear medicine, 8 bits equals a byte of memory and 16 bits equals a word of memory. The pixel depth, therefore, frequently is described as "byte" mode or "word" mode.*

2.  Spatial Resolution and Matrix Size

The spatial resolution of a digital image is governed by two factors: (1) the resolution of the imaging device itself (such as detector or collimator resolution) and (2) the size of the pixels used to represent the digitized image. For a fixed field-of-view, the larger the number of pixels, that is, the larger the matrix size, the smaller the pixel size (Fig. 20-3). Clearly, a smaller pixel size can display more image detail, but beyond a certain point there is no further improvement because of resolution limitations of the imaging device itself. A question of practical importance is, At what point does this occur? That is, how many pixels are needed to ensure that significant detail is not lost in the digitization process? The situation is entirely analogous to that presented in Chapter 16 for sampling requirements in reconstruction tomography. In particular, Equation 16-13 applies, that is, the

*Most modern computer CPUs have 32-bit or 64-bit processors. This means they can process data 32 or 64 bits at a time; however, this is largely independent of image display and how pixel values are stored.

FIGURE 20-3  Digital images of the liver and spleen (posterior view) displayed with different matrix sizes (256 × 256, 128 × 128, 64 × 64, and 32 × 32). The larger the matrix size, the smaller the pixels and the more detail that is visible in the image. (Original image courtesy GE Medical Systems, Milwaukee, WI.)
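The arithmetic of Example 20-1 below, which follows the sampling condition of Equation 20-2 (FWHM ≳ 3d), can be sketched as follows. The function names are illustrative only.

```python
def pixel_size_mm(fov_mm, matrix):
    """Pixel size d for a given field-of-view and matrix dimension."""
    return fov_mm / matrix

def smallest_supported_fwhm_mm(fov_mm, matrix):
    """Smallest system FWHM that is sampled without resolution loss.

    From Equation 20-2 (d <= FWHM/3), the supported FWHM is >= 3d.
    """
    return 3.0 * pixel_size_mm(fov_mm, matrix)

# Example 20-1: a 30-cm (300-mm) diameter field-of-view
d64 = pixel_size_mm(300, 64)                    # 4.6875 mm per pixel
fwhm64 = smallest_supported_fwhm_mm(300, 64)    # 14.0625 mm (~14.06 mm)
fwhm128 = smallest_supported_fwhm_mm(300, 128)  # 7.03125 mm (~7.03 mm)
```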

linear sampling distance, d, or pixel size, must be smaller than or equal to the inverse of twice the maximum spatial frequency, kmax, that is present in the image:

d = 1/(2 × kmax)    (20-1)

This requirement derives directly from the sampling theorem discussed in Appendix F, Section C.

Once this sampling requirement is met, increasing the matrix size does not improve spatial resolution, although it may produce a cosmetically more appealing image with less evident grid structure. If the sampling requirements are not met (too coarse a grid), spatial resolution is lost. The maximum spatial frequency that is present in an image depends primarily on the spatial resolution of the imaging device. If the resolution of the device is specified in terms of the full width at half maximum (FWHM) of its line-spread function (Chapter 15, Section B.2), then the sampling distance (pixel size) should not exceed about one third of this value to avoid significant loss of spatial resolution, that is,

d ≲ FWHM/3    (20-2)

This applies for noise-free image data. With added noise it may be preferable to relax the sampling requirement somewhat (i.e., use larger pixels) to diminish the visibility of noise in the final digitized image.

EXAMPLE 20-1

What is the approximate spatial resolution that can be supported for a 30-cm diameter field-of-view using a 64 × 64 matrix? A 128 × 128 matrix? Assume that the original data are noise free.

Answer

64 × 64 matrix: A 64 × 64 image matrix results in a pixel size of 300 mm/64 = 4.69 mm. From Equation 20-2, this would be suitable for image resolution given by

FWHM ≳ 3 × pixel size = 14.06 mm

128 × 128 matrix:

FWHM ≳ 3 × 300 mm/128 = 7.03 mm

The values calculated in Example 20-1 represent the approximate levels of imaging system resolution that could be supported without loss of imaging resolution for the specified image and matrix sizes. The practical effects of undersampling depend as well on the information contained in the image and whether it has a significant amount of actual spatial frequency content near the

resolution limit of the imaging device. Practical experimentation sometimes is required to determine this for a particular type of imaging procedure.

3.  Image Display

Digital images in nuclear medicine are displayed on cathode ray tubes (CRTs) or flat-panel displays such as liquid crystal displays (LCDs). In addition to their use at the site of the imaging device, displays are an essential component of picture archival communications systems (PACS) networks, for remote viewing of images (see Section C). The spatial resolution of the display device should exceed that of the underlying images so as not to sacrifice image detail. In general, the display devices used in nuclear medicine computer systems and in radiology-based PACS networks comfortably exceed this requirement. Typical high-resolution CRTs have 1000 or more lines, and a typical LCD might have 1536 × 2048 elements.

Individual pixels in a digital image are displayed with different brightness levels, depending on the pixel value (number of counts or reconstructed activity in the pixel) or voxel value. On grayscale displays, the human eye is capable of distinguishing approximately 40 brightness levels when they are presented in isolation and an even larger number when they are presented in a sequence of steps separated by sharp borders. Image displays are characterized by the potential number of brightness levels that they can display. For example, an 8-bit grayscale display can potentially display 2⁸ = 256 different brightness levels. Such a range is more than adequate in comparison with the capabilities of human vision. In practice, the effective brightness scale often is considerably less than the physical limits of the display device because of image noise. For example, if an image has a root mean square noise level of 1%, then there are not more than 100 significant brightness levels in the image, regardless of the capabilities of the display device.

Digital images also can be displayed in color by assigning color hues to represent different pixel values. The human eye can distinguish millions of different colors, and color displays are capable of producing a broader dynamic range (i.e., number of distinguishably different levels) than can be achieved in black-and-white displays. For example, a true-color display with 24-bit graphics can generate nearly 16.8 million different colors [2²⁴ = (2⁸)³, in which the 3 represents the independently generated red, green, and blue color channels].

One commonly used color scale, the pseudocolor scale (sometimes known as the rainbow or spectrum color scale), assigns different colors from the visible spectrum, ranging from blue at the low ("cool") end, through green, yellow, and red ("hot"), for progressively increasing pixel values. This is an intrinsically nonlinear scale, because the viewer does not perceive equal significance for successive color steps. A somewhat more natural scale, the so-called heat or hot-body scale, assigns different shades of similar colors, such as red, yellow, and white, to progressively increasing pixel values, corresponding to the colors of an object heated to progressively higher temperature. In both examples, the colors are blended to produce a gradual change over the full range of the scale. Figure 20-4 shows the same image displayed with different color scales.

The major problem with the use of color scales to represent pixel count levels is that they are somewhat unnatural and also can produce contours, such as apparently sharp changes in pixel values, where none actually exist. A more practical use of color displays is for color coding a second level of information on an image. For example, in combined-modality imaging of PET or SPECT with CT (see Chapter 19), the anatomic (CT) image often is displayed using a standard gray scale, whereas the functional (PET) image is shown using a color scale. Such a display clearly differentiates between the two types of images, whereas a simple overlay of two grayscale images would be confusing.

Hard-copy images can be produced on black-and-white transparency film from a CRT display. Single-emulsion films are used to minimize blurring of the recorded image, especially when images are minified for compact display on a single sheet of film. The CRT display intensity must be calibrated to compensate for the sensitometric properties of the recording film to match the monitor display. Computer printers are now commonly used to record hard-copy images, and a range of different technologies and media are available depending on requirements such as quality (resolution and gray-scale range), cost, and printing speed.

4.  Acquisition Modes

Digital images are acquired either in frame mode or in list mode. In frame-mode acquisition, individual events are sorted into their

FIGURE 20-4  The same reconstructed transaxial image slice rendered in different color scales. A, Grayscale, high-intensity white; B, inverted grayscale, high-intensity black; C, hot-wire or hot-body scale; D, pseudocolor spectral scale. The slice is from a PET scan of the brain using the radiotracer 18F-fluorodeoxyglucose. (Original image courtesy Siemens Molecular Imaging, Knoxville, TN.)
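As an illustration of how a display program might implement such a color table, the following sketch maps a pixel value to an RGB triple using a crude three-segment approximation of the hot-body scale. Real color tables are more finely blended, and the function name is ours.

```python
def hot_body_rgb(value, max_value):
    """Map a pixel value to an RGB triple on a simple hot-body scale.

    Black -> red -> yellow -> white as the value increases: a crude
    three-segment approximation of the hot-body (heat) scale.
    """
    f = min(max(value / max_value, 0.0), 1.0)  # normalize to [0, 1]
    if f < 1 / 3:                              # black ramps up to red
        return (int(3 * f * 255), 0, 0)
    elif f < 2 / 3:                            # red ramps up to yellow
        return (255, int((3 * f - 1) * 255), 0)
    else:                                      # yellow ramps up to white
        return (255, 255, int((3 * f - 2) * 255))
```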

appropriate x-y locations within the digital image matrix immediately after their position signals are digitized. After a preset amount of time has elapsed or after a preset number of counts have been recorded, the acquisition of data for the image is stopped and the pixel values [p(x,y) = number of counts per pixel] are stored in computer memory.

When a series of such images is obtained sequentially, individual images in the sequence are referred to as "frames." Clearly, the image matrix size (e.g., 64 × 64, 128 × 128, and so forth) must be specified before the acquisition begins. Additionally, the time duration of the frame sets a limit on the temporal accuracy of the data. For example, if the frame is acquired during a 1-minute period, the number of counts recorded in each pixel represents the integrated number of counts during the 1-minute acquisition period and cannot be subdivided retrospectively into shorter time intervals. When faster framing rates are used, such as for cardiac blood-pool imaging, temporal sampling accuracy is improved, but the total counts per frame and per pixel are reduced compared with slower frame rates.

In list-mode acquisition, the incoming x and y position signals from the camera are digitized, but they are not sorted immediately into an image grid. Instead, the x and y position coordinates for individual events are stored, along with periodic clock markers (e.g., at millisecond intervals). This permits retrospective framing, with frame duration chosen after the data are acquired.

List-mode acquisition permits greater flexibility for data analysis. However, it is not an efficient method for using memory space during acquisition for conventional imaging, especially for high-count images, because every recorded event occupies a memory location. Thus a 1-million count 128 × 128 image recorded in list mode would require 1 million memory locations, whereas in frame mode the same image would require only approximately 16,000 memory locations. However, list mode can actually be more efficient in

some situations. This would apply, for example, if the average number of counts is less than 1 per pixel in an image frame. In this case, list mode would require fewer memory locations to record the image than frame mode. Such situations can arise (e.g., in fast dynamic studies).

Another commonly used acquisition mode is called gated imaging. In this mode, data are acquired in synchrony with the heart beat or with the breathing cycle, so that all images are acquired at the same time during the motion cycle. This helps reduce blurring and other possible image artifacts induced by body motion (see Fig. 15-1). To perform gated imaging, it is necessary to have some sort of monitoring system in place, such as electrocardiogram leads for the heart beat or a pneumatically operated "belly band" that produces an electrical signal when it expands during the breathing cycle.

In frame-mode gated imaging, the signals from the motion-monitoring device are used to initiate an image-acquisition cycle. The "cycle" may consist of several images, each of which represents the object at the same location, assuming the motion is repeated reproducibly during the cycle. With list-mode acquisition, retrospective synchronization is possible by recording the motion-monitoring signal along with the signals from the detector. In either case, data usually are acquired over a large number of cycles and the images are added together, until the total number of counts is sufficient to provide adequate counting statistics.

B.  DIGITAL IMAGE-PROCESSING TECHNIQUES

All modern nuclear medicine systems are provided with fairly sophisticated software for displaying and processing the images that are acquired on those systems. There also are many "third-party" software packages available that incorporate extensive tools for image processing and allow images from different manufacturers and different modalities to be analyzed. A variety of digital image-processing techniques are used in nuclear medicine, some of which are fully automatic (i.e., performed entirely by the computer), whereas others are interactive (i.e., require some user input). In this section, we describe briefly a subset of the major image-processing tools that are commonly used in nuclear medicine.

1.  Image Visualization

Commonly, a single projection image or, in the case of tomographic data, a set of contiguous image slices is displayed on the screen. The display of the images can be manipulated in a number of ways to aid in interpretation. This includes changing from a linear gray scale to a color scale or to a nonlinear (e.g., logarithmic) gray scale, or limiting the range of pixel values displayed. The latter is known as windowing. For example, if greater contrast is desired in one region of an image, the full brightness range of the display device can be used to display only the range of pixel values found within that region (Fig. 20-5). This increases the displayed contrast in the selected area, but other parts of the image may have diminished contrast as a result (i.e., the counts per pixel may be beyond the upper or lower range of the selected grayscale window). Whenever image data are displayed or reproduced, it is desirable to show a gray-scale or color-scale bar that has undergone the same manipulations as the images, so that the viewer can interpret the image in the context of how the display has been modified.

Tomographic nuclear medicine data consist of a 3-D volume that can be displayed in conventional transaxial views or in coronal or sagittal views (Fig. 20-6). Often it is useful to display all three views simultaneously on the screen. Typically, a point within the object is chosen using the cursor, and the three orthogonal images that pass through that point are displayed. As the cursor is moved, the transaxial, coronal, and sagittal images are updated. This is an efficient way of navigating through a large 3-D dataset. The dataset also can be resliced at an arbitrary orientation to provide oblique views. This is useful for objects whose line of symmetry does not fall naturally along one of the perpendicular axes of the 3-D volume. An example is reslicing the 3-D dataset to provide short-axis views of the heart.

Another useful visualization tool for 3-D tomographic datasets is the projection tool. This collapses the 3-D dataset into a single 2-D image for a specified viewing angle and allows all the data to be seen at once. An example is shown in Figure 20-7. A number of different algorithms are available to "render" the projection images. The simplest approach is to sum the intensities along the projection direction. This is essentially equivalent to what would be obtained by acquiring

FIGURE 20-5  Effect of changing the distribution of gray levels on image contrast. Left, Original image with uniform
distribution of gray levels. Center, Gray scale compressed (fewer levels) in high-count (dark) regions to improve the
visualization of soft tissues. Right, Gray scale compressed in low-count (light) regions to suppress soft tissues and
visualize only bone. (Image courtesy Siemens Medical Solutions, USA, Inc.)
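The windowing operation described in Section B.1 reduces to a simple per-pixel mapping. A minimal sketch follows; the names are illustrative, and here values below the window display as black and values above it as full brightness.

```python
def window_pixel(value, lower, upper, levels=256):
    """Map a pixel value into display levels 0 .. levels-1 using a window.

    Values at or below `lower` map to 0 (black); values at or above
    `upper` map to the brightest level; values in between are scaled
    linearly, spending the full brightness range on [lower, upper].
    """
    if value <= lower:
        return 0
    if value >= upper:
        return levels - 1
    return int((value - lower) / (upper - lower) * (levels - 1))

# Window in on the 100-200 count range of an image
low = window_pixel(50, 100, 200)    # below the window: displayed black
mid = window_pixel(150, 100, 200)   # mid-window: mid-gray
high = window_pixel(250, 100, 200)  # above the window: displayed white
```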

FIGURE 20-6  A, Orientation of transverse (also known as transaxial), coronal, and sagittal sections. B, Orthogonal views (transverse, coronal, sagittal) of an 18F-fluorodeoxyglucose PET brain study in which the imaging field-of-view covers the entire head. (A, Reproduced from http://en.wikipedia.org/wiki/File:Human_anatomy_planes.svg#file. B, Images courtesy CTI PET Systems, Inc., Knoxville, TN.)
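Extracting the three orthogonal views through a chosen voxel, as described in Section B.1, is simple array indexing. Below is a pure-Python sketch for a volume stored as volume[z][y][x]; the storage order and function name are our own assumptions.

```python
def orthogonal_views(volume, x, y, z):
    """Return (transaxial, coronal, sagittal) slices through voxel (x, y, z).

    The volume is a nested list indexed volume[z][y][x]:
      transaxial: fixed z, a (y, x) image;
      coronal:    fixed y, a (z, x) image;
      sagittal:   fixed x, a (z, y) image.
    """
    transaxial = volume[z]
    coronal = [plane[y] for plane in volume]
    sagittal = [[row[x] for row in plane] for plane in volume]
    return transaxial, coronal, sagittal

# A tiny 3 x 3 x 3 volume in which voxel (x, y, z) holds 100z + 10y + x
volume = [[[100 * z + 10 * y + x for x in range(3)]
           for y in range(3)] for z in range(3)]
transaxial, coronal, sagittal = orthogonal_views(volume, x=1, y=2, z=0)
```

As the cursor position (x, y, z) changes, a display program would simply recompute and redraw these three slices.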

FIGURE 20-7  Whole-body 18F-fluorodeoxyglucose PET study displayed as a single projection image (top) and as a series of individual coronal image slices (bottom row). The projection image shows the distribution of radiotracer in all slices (arrows correspond to metastatic disease in this patient with cancer), but image resolution and contrast are lost relative to the individual tomographic slices. Projection images often are a convenient way to initially view large tomographic datasets, after which individual tomographic slices can be used for more detailed examination or quantitative analysis. (Courtesy Dr. Magnus Dahlbom, University of California–Los Angeles.)
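For a single viewing direction, the summed-intensity and maximum-intensity renderings described in this section can be sketched as follows, collapsing along the y axis of a volume stored as volume[z][y][x]. This is illustrative only; a view from an arbitrary angle would first require resampling (rotating) the volume.

```python
def sum_projection(volume):
    """Collapse volume[z][y][x] along y by summing, giving a (z, x) image."""
    return [[sum(plane[y][x] for y in range(len(plane)))
             for x in range(len(plane[0]))]
            for plane in volume]

def max_projection(volume):
    """Maximum-intensity projection (MIP) along y, giving a (z, x) image."""
    return [[max(plane[y][x] for y in range(len(plane)))
             for x in range(len(plane[0]))]
            for plane in volume]
```

Repeating either rendering for a set of viewing angles, and playing the results in a loop, produces the rotating cine display mentioned in the text.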

2-D views of the object from many projection directions. Another approach is to display only the surface pixel values (surface rendering). An approach that highlights internal features is to display only the pixel with the maximum value along the projection direction (maximum intensity projection).

By computing projection views at a set of angles around the object and presenting them in a continuous loop, one can create movies in which it appears that the object is rotating in space. This sometimes is called cine mode. These and other rendering and display algorithms are discussed in some of the suggested readings at the end of the chapter.

Another important application of image processing is image arithmetic. There are a number of applications in which one wishes to see differences between images or to combine images acquired with different radionuclides or acquired with different modalities. Most image-processing software allows one to add, subtract, multiply, and divide single images or 3-D image volumes. These operations typically are applied on a pixel-by-pixel basis. Figure 20-8 is an example of a simple frame arithmetic operation: subtraction. The study illustrated is a visual stimulation study using 15O-labeled water as a flow tracer. Visual stimulation, created by having the subject view a strobe light, caused an increase in blood flow to the occipital (visual) cortex, while the remainder of the brain remained largely unaffected. Subtraction of an image taken from a resting control study from the image obtained in the stimulation study provides a display of the blood flow changes occurring as a result of stimulation.

FIGURE 20-8  Cerebral blood-flow images (H215O PET) acquired in the resting state (baseline, left) and during visual stimulation using a flashing light (center). The stimulus causes a small increase in blood flow in the visual cortex that is virtually invisible on the image acquired during visual stimulation; however, the increase is clearly visible when the resting-state image is subtracted from it (subtraction, right).
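The pixel-by-pixel subtraction illustrated in Figure 20-8 reduces to an element-wise operation. A minimal sketch follows; negative differences are retained here, although a display program might instead clip them to zero.

```python
def subtract_images(image_a, image_b):
    """Pixel-by-pixel difference of two equally sized 2-D images."""
    return [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(image_a, image_b)]

# Toy 2 x 2 example: the stimulus raises one pixel well above baseline
stimulation = [[10, 12], [11, 30]]
baseline    = [[10, 11], [11, 12]]
difference = subtract_images(stimulation, baseline)  # [[0, 1], [0, 18]]
```

Addition, multiplication, and division (e.g., the ventilation/perfusion parametric image discussed below) follow the same element-wise pattern.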

Most digital images in nuclear medicine are, in essence, pictures of the count density in the organ or tissue of interest. Instead of presenting the data in this format, one may desire to first process the image data on a pixel-by-pixel basis using a model that represents the functional process and display the calculated result. Such an image, in which the pixel values represent a calculated parameter, sometimes is called a parametric image. For example, a digital ventilation image can be divided by a perfusion image to produce a parametric image that shows the ventilation/perfusion ratio. Other examples of calculated functional parameters are discussed in Chapter 21.

2.  Regions and Volumes of Interest

Both PET and SPECT can provide semiquantitative, or when all appropriate corrections are applied, quantitative images of the radiotracer distribution in the body. Conventional 2-D images also can provide information about the relative concentration of radiotracer in different areas. Regions of interest (ROIs) are used to extract numerical data from these images. The size, shape, and position of ROIs can be defined and positioned by the user, using a selection of predefined geometric shapes (e.g., rectangles, circles). Alternatively, irregular ROIs can be created using a cursor on the image display. The computer then reports ROI statistics such as the mean pixel value, the standard deviation of the pixel values, and the number of pixels in the ROI (Fig. 20-9). Software tools that use edge-detection algorithms also are available for automated definition of ROIs (see Section B.5). Care must be taken in the use of ROIs to accurately place them on the tissues of interest, especially for applications in which radiotracer uptake or concentration are

[ROI statistics display — REGION OF INTEREST: Mean: 684.3 counts; Max: 723.5 counts; Min: 648.9 counts; S.D.: 18.3 counts; Number of pixels: 112; Area: 450 mm2]

FIGURE 20-9  Manually drawn region of interest (ROI) placed over the right striatum (a small, gray matter structure deep in the brain) in an 18F-fluoroDOPA PET brain study. This tracer reflects dopamine synthesis and accumulates in the dopaminergic (dopamine-producing) neurons of the striatum. Typically, ROI software programs provide a set of statistics on the number of counts per pixel per second (proportional to the radioactivity concentration) and the area of the ROI.
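The statistics reported in Figure 20-9 amount to simple operations over the pixels inside the ROI. A minimal sketch, assuming the ROI is represented as a boolean mask and assuming hypothetical pixel dimensions:

```python
import numpy as np

# Synthetic image of Poisson-distributed counts (stand-in for a real study).
rng = np.random.default_rng(0)
image = rng.poisson(lam=650, size=(128, 128)).astype(float)

# Boolean mask standing in for a user-drawn ROI (here, a simple rectangle).
roi = np.zeros(image.shape, dtype=bool)
roi[40:50, 60:72] = True

pixel_size_mm2 = 2.0 * 2.0  # assumed (hypothetical) pixel dimensions

values = image[roi]  # pixel values inside the ROI
stats = {
    "mean": values.mean(),
    "sd": values.std(ddof=1),  # sample standard deviation
    "min": values.min(),
    "max": values.max(),
    "n_pixels": values.size,
    "area_mm2": values.size * pixel_size_mm2,
}
```

An irregular hand-drawn ROI simply yields a different boolean mask; the statistics are computed the same way.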
20  •  Digital Image Processing in Nuclear Medicine 373

monitored during a longer period. Such measurements sometimes are made, for example, to observe and monitor the metabolic status of a tumor. Automated methods are provided on some computer systems to assist with this task. However, even with an automated method, one must be aware of and careful to avoid errors caused by the partial-volume effects discussed in Chapter 17, Section B.5. For example, an important question for staging in tumor imaging is the concentration of radiotracer in the tumor. A "small" tumor with high uptake (Bq/cm3) is different from a somewhat larger tumor with low uptake. An apparent increase in radiotracer concentration in a "small" tumor (one that is near the resolution limits of the imaging device) during a longer period actually could be due to tumor growth in size and an associated reduction in partial-volume effects. ROI analysis is therefore particularly difficult when the volume of tissue that is of interest is changing with time.

Most structures of interest extend in all three dimensions and cover multiple image planes in a tomographic dataset. To obtain the best signal-to-noise ratio (SNR), and to ensure that the ROI values are representative of the entire structure, it is necessary to draw ROIs on multiple contiguous image planes that contain the object. In some software these ROIs can be connected to form a volume of interest (VOI). Automated tools for defining VOIs also have been developed.

3.  Time-Activity Curves

As discussed in Chapter 21, the rate of change of radiotracer uptake in a specific organ or tissue often is of interest. To determine this, the data are acquired as a series of frames over time (see Section A.4). The data typically are analyzed by defining an ROI on one frame, or on the sum of all frames, and then copying the ROI across all of the frames. This is accurate provided that the patient has not moved between frames. The process is illustrated in Figure 20-10 for a time series of images of the brain following the injection of 18F-fluoroDOPA, a compound that localizes in the striatum, whose uptake is related to the rate at which dopamine is synthesized. The ROI data from a series of frames can be used to create a time-activity curve (TAC), showing the radiotracer concentration as a function of time in the tissue defined by the ROI. Figure 20-10 also shows the TAC extracted from the ROI data in the time series of images.

4.  Image Smoothing

All nuclear medicine computer systems provide image-smoothing algorithms. Figure 20-11 illustrates the effect of smoothing on an image that has a poor SNR in the unprocessed state. Smoothing operations are, in essence, techniques that average the local pixel values to reduce the effect of pixel-to-pixel variation. Two simple algorithms for 2-D images are 5-point and 9-point smoothing, in which a pixel value is averaged with its nearest 4 or 8 neighbors (Fig. 20-12).

In the previous examples, all four or eight neighboring pixels, as well as the center pixel, are given equal weight in the smoothing process. Image smoothing also can be performed using filters that are weighted according to the distance from the pixel that is being smoothed. One such example is a gaussian smoothing filter. In general, one can write the following:

smoothed image = original image ∗ smoothing filter        (20-3)

where ∗ represents the operation of convolution (see Appendix G). Although smoothing frequently produces a more appealing image by reducing noise (and improving the SNR), it also results in blurring and potential loss of image detail. Sometimes it is convenient to perform analytic studies (e.g., integrating pixel count values over an area) on an unsmoothed image after identifying ROIs on a smoothed image. In such applications, a practical compromise between resolution and visual appeal must be reached.

5.  Edge Detection and Segmentation

Edge detection and segmentation are two image-processing tools that can be used to assist in automatically defining ROIs. They also are used for classifying different types of tissue based on their radiotracer uptake and for defining the body and lung contours for attenuation correction (see Chapter 17, Section B.2).

Edge-detection algorithms work best with edges that are very clearly defined as a result of a sharp boundary in radiotracer uptake. One of the most common is the Laplacian technique. The 2-D Laplacian is defined by

Laplacian = ∂²/∂x² + ∂²/∂y²        (20-4)

where ∂²/∂x² represents the second partial derivative of the function [i.e., the pixel

[Figure 20-10 image panels acquired at 0.0, 0.75, 1.25, 1.75, 2.25, 2.75, 4.5, 7.5, 10.5, 13.5, 20, 30, 40, 50, 60, 75, 95, and 115 min; below them, a plot of radioactivity (µCi/mL) versus time (min) with curves for the striatum and cerebellum.]
FIGURE 20-10  Top, PET images of the same two-dimensional (2-D) slice through the brain at different times after administration of a bolus injection of 18F-fluoroDOPA. A region of interest (ROI) is drawn over the right striatum on the last image and then copied to all other time points. Bottom, Time-activity curve (TAC) showing the mean value in the ROI, converted with a calibration factor from counts per second per pixel to absolute concentration of radioactivity, versus time for the striatum. Also shown is a TAC for the cerebellum, taken from a different 2-D image slice, demonstrating how different brain regions can have different kinetics. Analysis of such TACs is discussed in Chapter 21. (Adapted from Cherry SR, Phelps ME: Positron emission tomography. In Sandler MP, Coleman RE, Patton JA, et al [eds]: Diagnostic Nuclear Medicine, 4th ed. Baltimore, Williams & Wilkins, 2002, p. 79.)
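The ROI-copying procedure behind a TAC can be sketched as follows; the dynamic frames, ROI mask, and calibration factor below are all hypothetical:

```python
import numpy as np

# Hypothetical dynamic study: 6 frames of a 64 x 64 slice.
n_frames = 6
frames = np.ones((n_frames, 64, 64))
# A small region with uptake rising linearly from frame to frame.
frames[:, 30:34, 30:34] = np.arange(1, n_frames + 1)[:, None, None]

# ROI defined once (e.g., on the summed image) and copied to every frame;
# this assumes the patient has not moved between frames.
roi = np.zeros((64, 64), dtype=bool)
roi[30:34, 30:34] = True

# Assumed calibration factor: counts/s/pixel -> radioactivity concentration.
calibration = 0.01

# Mean ROI value in each frame forms the time-activity curve.
tac = np.array([frame[roi].mean() for frame in frames]) * calibration
```

Plotting `tac` against the frame mid-times yields a curve like those in Figure 20-10; kinetic analysis of such curves is discussed in Chapter 21.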

FIGURE 20-11  Effect of image smoothing using a gaussian filter with a full width at half maximum of 4 mm (center)
and 8 mm (right). Smoothing improves the signal-to-noise ratio in the images but at the expense of spatial
resolution.
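Smoothing as described by Equation 20-3 is a convolution of the image with a filter. A minimal pure-NumPy sketch of equal-weight 9-point smoothing (edge pixels handled here by replication, which is one of several possible boundary conventions):

```python
import numpy as np

def nine_point_smooth(image):
    """Average each pixel with its 8 nearest neighbors (equal weights),
    i.e., convolution with a 3 x 3 kernel of 1/9."""
    padded = np.pad(image, 1, mode="edge")  # replicate edge pixels
    out = np.zeros_like(image, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + image.shape[0],
                          1 + dx : 1 + dx + image.shape[1]]
    return out / 9.0

# An isolated noise spike is spread out and reduced by smoothing.
noisy = np.array([[0.0, 0.0, 0.0],
                  [0.0, 9.0, 0.0],
                  [0.0, 0.0, 0.0]])
smoothed = nine_point_smooth(noisy)
```

A gaussian filter follows the same pattern but with weights that fall off with distance from the center pixel, giving the distance-weighted smoothing described in the text.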

[Figure 20-12 image panels: 5-point smoothing | 9-point smoothing]

FIGURE 20-12  Illustration of pixels used in 5-point and 9-point smoothing. The value in the pixel of interest (dark blue) is averaged with the values in the surrounding pixels (light blue).

values, p(x,y)], with respect to spatial coordinate x, and similarly for the spatial coordinate y. An analogous definition can be made for the 3-D Laplacian, which would be applied to 3-D datasets. As illustrated by the 1-D example presented in Figure 20-13, the point at which the Laplacian crosses zero represents a region with a high rate of change between neighboring pixel values and therefore reflects an edge. In practice, the operator often specifies a starting point for the algorithm, which then searches in all possible directions and constructs a line (edge) where the Laplacian crosses zero. Eliminating small changes (i.e., setting limits) in the Laplacian reduces the effect of noise. Figure 20-14 shows an image in which the lung contours have been defined automatically by the Laplacian algorithm.

The goal of image segmentation is to group all pixels that have certain defined characteristics. In nuclear medicine, this usually refers to pixels that have a certain range of pixel intensities and thus a certain level of radiotracer uptake. The simplest method of segmentation is just to select pixels having values within a specified range: A < p(x, y) < B. Because of image noise, this simple method rarely is sufficient for accurate segmentation. More sophisticated algorithms that consider the underlying resolution and noise properties of the images and that also seek clusters of contiguous pixels usually are employed. Edge-detection algorithms also can be used in segmentation. For example, an edge-detection algorithm can be used to define the contours of the lungs and all pixels within that contour defined as lung tissue. Other examples of edge-detection and segmentation algorithms are discussed in the suggested readings at the end of the chapter.

6.  Co-Registration of Images

It is quite common to perform multiple nuclear medicine imaging studies at different times on the same subject. This is useful, for example, to monitor the progression of Alzheimer's or Parkinson's disease, to measure the change in tumor blood flow or metabolism following cancer treatment, or to study the response of brain blood flow to various stimuli. To accurately compare nuclear medicine studies on the same subject performed at different times (e.g., by subtraction of the images), it is necessary that the images be accurately aligned. This is known as intrasubject intramodality image co-registration. Many algorithms have been developed to co-register nuclear medicine studies. They have been particularly successful in the brain, because the brain is rigidly held within the skull and the transformations required to co-register the images are limited to simple translations and rotations. Co-registration accuracy can be as high as 1 to 2 mm. Figure 20-15 shows co-registered PET images of the same subject acquired at different times.

Outside the brain, image co-registration becomes much more difficult, because organs can shift relative to each other depending on the exact positioning of the patient on the bed. In general, this requires nonlinear co-registration algorithms that attempt to "warp" one image to fit with the other. Intrasubject, cross-modality co-registration involves registering a nuclear medicine study with a study of the same subject performed with a different imaging modality (e.g., MRI or CT). This requires more sophisticated algorithms because the information content of the images as well as their spatial resolution and SNR characteristics are different. Nonetheless, algorithms have been successful for co-registering PET and SPECT studies onto MRI or CT images, thereby providing co-registered volumetric datasets that reflect both biologic function (PET or SPECT) and anatomy (CT or MRI). Again, because of its precise geometric relationship to the skull, the most successful applications of these algorithms have been in brain imaging.

There also is interest in intersubject co-registration of nuclear medicine images for comparisons between multiple subjects. For example, this can be used to create images based on a large database of subjects, showing the distribution of a radiotracer across a specific population of subjects. If such a database of images of normal subjects is available for a particular radiotracer, a patient then can be compared with the normal database to see if the distribution is significantly different from normal controls. A summary of modern image co-registration techniques is provided in reference 1.

FIGURE 20-13  A one-dimensional example showing a count profile of pixel value p versus pixel location x across a boundary with two different activity levels. The first and second (Laplacian) derivatives of the count profile with respect to x are shown. The first derivative is the difference between neighboring pixel values, or equivalently the local slope of the count profile. At an edge, pixel values are changing rapidly with location and the slope has a high absolute value. The second derivative, the Laplacian, is the difference between adjacent first derivative values. The x-location where this second derivative crosses zero defines the location of the edge.

C.  PROCESSING ENVIRONMENT

The digital image-processing environment may be limited to a single gamma camera, or, in a large department, it may involve a collection of gamma cameras and tomographic imaging devices. For all modes of imaging, digital image processing involves several steps: (1) acquisition, (2) processing, (3) display, (4) archiving (storing the raw or processed data or images), and (5) retrieval. Acquisition obviously takes place on the imaging device itself; however, it is useful if the other steps in the chain can be performed not only on the imaging system console but also on any other computers in the hospital or laboratory. In

FIGURE 20-14  Series of images illustrating the segmentation of the lungs on a transmission scan acquired on a single-photon emission computed tomography system. (Original image courtesy Dr. Freek Beekman, Delft University of Technology, The Netherlands.)
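The zero-crossing principle behind the Laplacian edge detector (Equation 20-4 and Figure 20-13) can be sketched in one dimension with a hypothetical count profile:

```python
import numpy as np

# Hypothetical 1-D count profile across a boundary between two activity levels.
profile = np.array([10.0, 10.0, 10.0, 12.0, 16.0, 19.0, 20.0, 20.0, 20.0])

first_deriv = np.diff(profile)       # local slope of the count profile
second_deriv = np.diff(first_deriv)  # discrete 1-D Laplacian

# The edge lies where the Laplacian changes sign from positive to negative;
# here the sign change falls in the middle of the ramp in `profile`.
edge_idx = np.where((second_deriv[:-1] > 0) & (second_deriv[1:] <= 0))[0]
```

In 2-D, the same sign-change test is applied along both coordinates, and small Laplacian values are suppressed (thresholded) to reduce the effect of noise, as described in the text.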

FIGURE 20-15  Top three rows, Co-registered slices from three 18F-fluorodeoxyglucose PET scans acquired at 1-year intervals on the same subject. Images in each column represent the same anatomic slice, after co-registration. Bottom row, Corresponding co-registered slices from a magnetic resonance imaging scan acquired at the time of the third PET scan (intermodality co-registration). Images were co-registered using the Automated Image Registration software developed by Roger Woods of the University of California-Los Angeles. Note the excellent agreement in structures included and their locations in each slice. Some images were truncated (particularly in the left column) because parts of the brain were outside the field-of-view in some scans. (From Woods RP, Mazziotta JC, Cherry SR: Optimizing activation methods: Tomographic mapping of functional cerebral activity. In Thatcher RW, Hallett M, Zeffiro T, et al [eds]: Functional Neuroimaging: Technical Foundations. San Diego, Academic Press, 1994, p. 54.)
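A toy illustration of intrasubject rigid co-registration, restricted to integer translations and a sum-of-squared-differences similarity measure. This is only a sketch; real registration algorithms also handle rotations, subpixel shifts, and more robust (including cross-modality) similarity measures:

```python
import numpy as np

def register_translation(fixed, moving, max_shift=3):
    """Brute-force search for the integer (dy, dx) shift that best aligns
    `moving` to `fixed` by minimizing the sum of squared differences."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.sum((fixed - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Hypothetical reference image with a bright square, and a "second study"
# that is the same image displaced by a known amount.
fixed = np.zeros((16, 16))
fixed[6:10, 6:10] = 1.0
moving = np.roll(np.roll(fixed, 2, axis=0), -1, axis=1)

shift = register_translation(fixed, moving)  # recovers the misalignment
```

Applying the recovered shift to `moving` brings the two studies into alignment, after which operations such as the subtraction in Figure 20-8 become meaningful.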

addition to freeing up the image acquisition computer for additional studies, it allows a variety of other activities to proceed simultaneously. For example, it allows a medical physicist to reprocess a study from his or her office while physicians are viewing the same images in the reading room or even at a different hospital and researchers are downloading the studies onto a computer in the research laboratory.

Many nuclear medicine departments therefore employ high-speed networks to connect their imaging systems together with other computer systems in the institution and, via the Internet, to the outside world. These departments use PACS to store and move images from acquisition sites to more convenient viewing stations and to provide a common basis for handling nuclear medicine and all other diagnostic imaging modalities.2 PACS systems in hospitals with large radiology and nuclear medicine departments must be capable of handling huge amounts of data, typically several gigabytes per day (1 gigabyte = 10⁹ bytes).

To facilitate the exchange and handling of images from multiple different imaging modalities and from different vendors each with their own custom software, image file format standards have been developed. The central standard in radiology is the Digital Imaging and Communications in Medicine (DICOM) standard described in reference 3, and all manufacturers producing diagnostic imaging equipment support this standard. The objective of DICOM is to enable vendor-independent communication not only of images but also of associated diagnostic and therapeutic data and reports.

A true archival system not only should store the image or processed image data but also should be organized around a logical retrieval system that permits correlation of images with other types of data (e.g., reports) for a given patient study. That is, it should have the capacity of a computer database system and interface seamlessly with radiology, nuclear medicine, and hospital information systems. In addition, the system must be capable of protecting patient information by providing access only to authorized users.

REFERENCES
1. Hill DLG, Batchelor PG, Holden M, Hawkes DJ: Medical image registration. Phys Med Biol 46:R1-R45, 2001.
2. Bick U, Lenzen H: PACS: The silent revolution. Eur Radiol 9:1152-1160, 1999.
3. Mildenberger P, Eichelberg M, Martin E: Introduction to the DICOM standard. Eur Radiol 12:920-927, 2002.

BIBLIOGRAPHY
General references on image processing
Gonzalez RC, Woods RE: Digital Image Processing, ed 3, Upper Saddle River, NJ, 2008, Pearson Prentice Hall. (Chapters 2, 3, 6, and 10 are especially relevant.)
Robb RA: Biomedical Imaging, Visualization, and Analysis, ed 2, New York, 1999, Wiley-Liss.
