Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation
DISCLAIMER OF WARRANTY
The technical descriptions, procedures, and computer programs in this book have been developed with the greatest of care,
and they have been useful to the authors in a broad range of applications; however, they are provided as is, without
warranty of any kind. Artech House, Inc. and the authors of the book titled Digital Processing of Synthetic Aperture Radar
Data: Algorithms and Implementation make no warranties, expressed or implied, that the equations, programs, and procedures
in this book or its associated software are free of error, or are consistent with any particular standard of merchantability, or
will meet your requirements for any particular application. They should not be relied upon for solving a problem whose
incorrect solution could result in injury to a person or loss of property. Any use of the programs or procedures in such a
manner is at the user's own risk. The editors, authors, and publisher disclaim all liability for direct, incidental, or consequential
damages resulting from use of the programs or procedures in this book or the associated software.
For a listing of recent titles in the Artech House Remote Sensing Library, turn to the back of this book.
Digital Processing of
Synthetic Aperture Radar Data
Algorithms and Implementation
Ian G. Cumming
Frank H. Wong
ARTECH HOUSE
BOSTON | LONDON
artechhouse.com
To our wives, Linda and Mabel, who patiently stood by us through the many years of preparation of this book
In memory of my parents, aunts, and uncles, who impressed on me the value of education and curiosity, and who
supported me through my many years of education (Ian Cumming)
In memory of my grandmother, Madame Lee Kam Lin, who, through no fault of her own, received no education
while living through one of the most tumultuous periods in China's history, but did value the importance of
education (Frank Wong)
Limit of Liability
The technical descriptions, procedures, and content featured in this book have been developed with the greatest
care and have been useful to the authors in a broad range of applications; however, they are provided as is,
without warranty of any kind. Artech House, Inc. and the authors of this book make no warranties, expressed or
implied, that the equations, procedures, or content in this book are free of error, or are consistent with any
particular standard of merchantability, or will meet your requirements for any application. They should not be
relied upon for solving a problem whose incorrect solution could result in injury to a person or loss of property.
Any use of the descriptions, procedures, and content of this book in such a manner is at the user's own risk.
The authors and publisher disclaim all liability for direct, incidental, or consequential damages resulting from your
use of the material contained in this book.
Contents
Foreword
Preface
Acknowledgments
1 Introduction
1.1 Brief Background of SAR
1.2 Radar in Remote Sensing
1.3 SAR Fundamentals
1.4 Spaceborne SAR Sensors
1.5 Outline of the Book
1.5.1 Example of a Spaceborne SAR Image
References
11 Comparison of Algorithms
11.1 Introduction
11.2 Recap of the Precision Processing Algorithms
11.2.1 Range Doppler Algorithm
11.2.2 Chirp Scaling Algorithm
11.2.3 Omega-K Algorithm
11.3 Comparison of Processing Functions
11.3.1 Form of Range Equation
11.3.2 Implementation of the Azimuth Matched Filter
11.3.3 Implementation of Range Cell Migration Correction
11.3.4 Implementation of Secondary Range Compression
11.4 Summary of Processing Errors
11.4.1 QPE in the Azimuth Matched Filter
11.4.2 QPE in Secondary Range Compression
11.4.3 Residual RCM
11.4.4 Examples of the Size of the Processing Errors
11.5 Computation Load
11.5.1 Basic Algorithm Operations
11.5.2 Range Doppler Algorithm
11.5.3 Chirp Scaling Algorithm
11.5.4 Omega-K Algorithm
11.6 Pros and Cons of Each Algorithm
11.6.1 Pros and Cons of the RDA
11.6.2 Pros and Cons of the CSA
11.6.3 Pros and Cons of the WKA
11.7 Summary
11.7.1 ASAR Image of the Strait of Messina
The introduction of spaceborne synthetic aperture radar (SAR) into Earth remote sensing can be dated back to
the SEASAT mission in 1978 - a 100-day episode that opened a door and enabled us to catch a glimpse of the
potential of SAR imaging. However, it was not until the 1991 launch of ERS-1 that SARs began to continuously
orbit the Earth and deliver images on a reliable, operational basis from aboard remote sensing satellites. Data
continuity, which has become indispensable to environmental monitoring and global change research, is guaranteed
by national and international space programs where radar remote sensing is of high priority.
Many new applications, never before dreamed of, have been developed since then. The most exciting one is
SAR interferometry, where the coherent nature of the complex SAR image is exploited to measure topography,
motion, or structural decorrelation of the Earth's surface. At present, with the availability of more than 13 years
of continuous SAR data, long-term studies are possible and decadal signals can be observed, be they indicators of
land subsidence of 1 mm per year or of variations in ocean wave systems. Up to 100 SAR images of the same
area often need to be processed, precisely registered, and evaluated to obtain a single result.
This has all become possible because SAR processing made a significant leap from the optical bench to the
world of digital signal processing (DSP). Only in the digital world are processed SAR images accessible in a
convenient and reproducible way; their large dynamic range remains undistorted, their phase is carefully preserved,
and the set of possible signal processing operations is not limited by physical restrictions. Also, digital SAR
processing continually benefits from the inflationary increase in computing power that we have been and still are
experiencing. In the beginning, supercomputers or dedicated DSP hardware were required; today, a SAR image can
be processed on a notebook computer in a reasonable time. The tempting possibilities of digital processing, plus
the ever-growing SAR data availability and user interest, stimulated SAR processor developments at many research
institutes and commercial companies. Several new algorithms have been designed to accommodate high resolution,
high bandwidth, and high phase accuracy requirements, as well as sophisticated imaging modes.
SAR image formation and data processing are different from many other remote sensing techniques, since the
imaging process is coherent. The most natural way to describe such a system and its signals is by complex-valued
functions. Hence, signal processing, rather than image processing, provides the appropriate tools. SAR processing
needs DSP, but DSP also profits from SAR. The SAR imaging principle and the processing algorithms on their
own have become an attractive class of DSP methods that can be transferred to other fields. The concepts of
SAR constitute a general imaging principle of a caliber quite comparable to tomography; hence SAR signal
processing is well worth studying from an educational point of view.
Having said this, it is surprising that only a very small number of books are available to describe digital
processing for remote sensing SARs in a comprehensive manner. Up until now, knowledge in this field has been
dispersed throughout journal papers, conference proceedings, internal reports, and patents. Therefore, this book by
Professor Cumming and Dr. Wong is special.
First, it is written by the developers of the first digital SAR processor for remote sensing and, therefore,
builds upon the longest possible experience. Second, it collects the current knowledge on SAR algorithms, and
presents it in a coherent, signal-processing-based terminology. It differs from classical radar books in that it
approaches the subject from the processing viewpoint rather than from the physical radar world. By focusing on
the important class of satellite SARs, with their relatively small squint and aperture, some of the concepts (and
sometimes baggage) inherited from conventional radar can be simplified and described in a clearer fashion. It is
common experience that, even though a technology has been fully developed, it takes longer for its exposition to
be clearly established. This book demonstrates that SAR processing algorithms have reached this level of
maturity. This is not only true for the stripmap imaging mode algorithms, but also for the more recent ScanSAR
algorithms. I hope that in a later edition of the book spotlight imaging will also be included.
It is evident that the authors have well-honed teaching skills and are not holding back any of their
knowledge (Ian Cumming is a professor and Frank Wong is a sessional lecturer at the University of British
Columbia). I have known both authors for several years and have always appreciated their dedication to
explaining complicated matters in an instructive and easy-to-understand way. Their book is a step-by-step, concise
yet complete, illustrative course in SAR processing, following a straight, logical line. The reader is taken on a
guided tour, starting from a review of relevant signal processing fundamentals, passing the traditional range
Doppler algorithm, reaching the most recent class of chirp scaling methods, and finally arriving at the field of
processing parameter estimation. All these stations are supported by numerous examples and illustrations. A set
of SAR data provided on a CD-ROM allows the reader to achieve valuable hands-on experience with the
processing algorithms.
I am sure that university teachers, postgraduate students, and engineers, whether novices or SAR
processing experts, will appreciate this book. Personally, I would have saved myself quite some effort had I
had access to a book like this when I entered the field of SAR processing.
We have written this book to record our experiences in processing synthetic aperture radar (SAR) data for
remote sensing applications. Most of the material has been published previously in technical literature, but is
gathered together here for the first time in a single reference work.
Our SAR work began at MacDonald Dettwiler (MDA) in 1977, in designing a digital processor for SEASAT
data. The work continued with processors developed for SIR-B, ERS-1, ERS-2, RADARSAT-1, and ENVISAT.
Several airborne SAR processors were built, the most recent being a dual-frequency, polarimetric, interferometric
system. The work continues to this day, with the development of the current RADARSAT-2 processor. This book
is an attempt to record the knowledge gained in this work during the last 27 years.
When we began our work, coherent optical SAR processors were the existing technology. Coming from digital
sonar backgrounds, it was a natural extension to apply digital signal processing (DSP) principles to SAR data.
Although there had undoubtedly been work on digital processors in the military, we were unaware of any such
developments, so we had a clean slate upon which to develop our SAR processor ideas.
Our experience has been mainly with the class of SARs that we refer to as "remote sensing" SARs. These
SARs make an image of the Earth's surface for applications such as mapping, geology, oceanography, forestry,
agriculture, and the like. Their resolutions are typically in the order of a few meters to a few tens of meters,
and swath widths are in the order of 2000 to 8000 samples, covering up to 150 km in ground range (even larger
swaths with ScanSAR).
There are significant differences between the processing of satellite and airborne SAR data, and it is difficult
to build a processor with enough generality to handle both types of data successfully. As satellite data is more
publicly available, we usually describe the algorithms from this point of view. When we can provide a simple
explanation without disturbing the flow of the book, we point out some differences with airborne data processing.
This book addresses SAR processing from a DSP perspective. It does not dwell upon radar systems
principles, except those needed to understand the properties of recorded SAR data.
Prospective Audience
The book is primarily directed towards practicing engineers working with SAR data, or research engineers
designing new processing algorithms. Most of the technical information needed to understand and design high
quality and/or high throughput SAR processors is presented in some detail. Some of the pertinent DSP principles
are included to help those without a strong DSP background.
In addition to processor designers, the book may also be of interest to applications specialists who need to
understand some of the properties of the SAR data to help in their image interpretation.
Digital SAR processing is a fascinating application of DSP principles. Indeed, a SAR processor uses a large
proportion of the algorithms described in standard DSP textbooks, and even adds a few new concepts. For this
reason, the book should be of interest to senior or postgraduate students studying DSP, who wish to learn from
an advanced example of applying DSP to a practical application.
The authors apologize for any errors that may be present in the book, and would be grateful to have them
brought to their attention.
We have tried to quote the most relevant and precedent references for the various technical issues. In many
cases, we have used the references most familiar to us, which means that in some cases our own writings have
been given more prominence than they deserve. We will be grateful to have other references pointed out to us.
Acknowledgments
We would like to acknowledge four groups of people who have been essential to the writing of this book. First,
the many people with whom we have worked over the years. Among them, John MacDonald, company founder
and visionary, who has always been a strong source of inspiration and support. He believed in us, and gave us
the resources to build the first commercial digital SAR processor in 1977, when many people thought it could not
be done. John Bennett, who was our group leader through the first 10 years, was usually one step ahead of us
with ideas and insight. Other members of the original team included Robert Deane, Robert Orth, Pietro Widmer,
and Pete McConnell, with whom we shared many ideas in the course of unraveling the mysteries of SAR data
processing.
As our market in SAR processors grew, many people joined the MDA team, and most are still working at
MacDonald Dettwiler in the SAR group. Some of the people who have made significant technical contributions to
SAR processing include David Stevens, Gordon Davidson, Martie Goulding, Paul Lim, and Tim Scheuer.
Most of our work has been sponsored by Canadian and European government organizations, especially the
Canada Centre for Remote Sensing, the Canadian Space Agency, and the European Space Agency. We have
always had close technical ties with our customers, and we would especially like to thank Keith Raney, Laurence
Gray, Paris Vachon, and Bob Hawkins of the Canadian agencies; and Rudolph Okkes, Jean-Claude DeGawe, and
Yves Desnos of ESA for the many fruitful collaborations we have enjoyed.
Two other agencies stand out as being influential. While we have only occasionally worked directly with
them, many algorithms have been developed in parallel, and much has been learned from each other's work. This
includes JPL, with Charlie Wu, Michael Jin, Dan Held, Paul Rosen, and Richard Carande developing technology
pertinent to this book. Also, there is the German Aerospace Research Center, DLR, where Richard Bamler,
Hartmut Runge, Michael Eineder, Alberto Moreira, Rolf Scheiber, and Josef Mittermayer have done seminal work
in SAR processor algorithm design. In particular, we are very grateful to Richard Bamler, who has kindly
provided the foreword for the book. Their names will appear throughout the reference lists we quote for various
algorithm developments.
Second, we would like to thank people who contributed material to the book. Kjell Magnussen of MDA was
very helpful in defining the Earth/satellite geometry models. We are grateful to Professor Fabio Rocca of the
Politecnico di Milano, who gave us a few extra insights on the WKA algorithm. Dr. Riccardo Lanari of
IREA-CNR in Napoli was very helpful in providing an explanation of the modified SPECAN algorithm, and Paul
Rosen of JPL provided an SRTM image for this section. Bob Hawkins of CCRS provided two Convair-580
airborne radar images. Gordon Staples provided many RADARSAT images, as well as the raw data CD. A
number of graduate students in the Radar Remote Sensing Group at UBC, including Shu Li, Millie Sikdar, Kaan
Ersahin and Yewlam Neo, helped by reading the chapters and providing some programs to read the RADARSAT
data and estimate Doppler parameters for some of the figures for Chapters 12 and 13.
Third, we would like to thank the many people who reviewed the manuscript during various stages of its
preparation. Ian Cumming was enjoying a sabbatical year at DLR during the first part of the writing, and many
individuals were very helpful in reviewing early stages of the work. Near the end of the writing, Juergen Holzner
of DLR did a very detailed review of the manuscript. Frank Wong spent a leave of absence in the Department
of Electrical Engineering at the National University of Singapore, when a couple of the earlier chapters were
written. Back at home, there were more reviewers, including Bernd Scheuchl and Yewlam Neo of UBC, Dave
Alton of the University of Calgary, and Martie Goulding, Paul Lim, and Norm Goldstein of MDA.
As to style, our proofreader, Eunice Ludlow, spent many hours making sure that a semitechnical reader had
a fair chance of understanding most of the sentences. She was diligent in telling us where to put commas, and
did not let us get away with sentences longer than three lines. Thanks also to Rebecca Allendorf and the
anonymous Artech House proofreaders, who did a diligent job persuading us to conform to a uniform style in the
manuscript.
Finally, our families deserve medals for their support, patience, and tolerance of our intense, irregular working
schedules.
On to the book, but first let's start off with a SAR image, taken by the Canada Centre for Remote Sensing
C-band polarimetric airborne radar on the Convair-580. The image was taken over the UBC Westham Island test
site on September 30, 2004.
The scene center is 49.2°N, 123.1°W. The swath width is 10 km, and the image was formed by the
on-board real-time processor with seven looks, with a nominal radiometric correction. The four polarimetric
channels are combined to form the black and white image displayed in Figure A.1.
The first real-time digital SAR processor built by MacDonald Dettwiler was delivered to CCRS in 1979, and
installed on the Convair-580. This image was made by the second real-time processor, which replaced the first
model in 1986.
Figure A.1: Convair-580 real-time processed image of the Ladner area of Delta,
BC. (Courtesy of Bob Hawkins of CCRS.)
Part I
Fundamentals of Synthetic
Aperture Radar
Chapter 1
Introduction
Radar was originally developed for military purposes during World War II. Its initial purpose was to track
aircraft and ships through heavy weather and darkness. It has experienced steady growth, with advances in
radio frequency (RF) technology, antennas, and, more recently, digital technology [1].
The original radar systems measured range to a target (radar scatterer) via time delay, and direction of a
target via antenna directivity. It was not long before Doppler shifts were used to measure target speed. Then it
was discovered that the Doppler shifts could be processed to obtain fine resolution in a direction perpendicular to
the range or beam direction. Through this latter concept, often credited to Carl Wiley of Goodyear Aerospace in
1951, it was found that two-dimensional images could be made of the targets and of the Earth's surface using
radar. The method was termed Synthetic Aperture Radar (SAR), referring to the concept of creating the effect of
a very long antenna by signal analysis. An account of the early development of SAR is given in the first few
chapters of Kovaly's collection [2].
In the 1950s and 1960s, the science of remote sensing was developing in the civilian community. From the
origins of aerial photography, digital scanners using several optical frequency bands were installed in aircraft and
satellites, and people began developing uses for the detailed wide-area images of the Earth's surface that were
acquired. Military SAR technology was released to the civilian community in the 1970s, and remote sensing
scientists found that the SAR images provided a complementary and useful addition to their optical sensors [3].
Much of the original SAR technology was developed on aircraft, but it was the first satellite SAR that really
drew the attention of the remote sensing community to this new class of sensor. In 1978, the NASA satellite
"SEASAT" showed the world how detailed images of the Earth's surface could be obtained. This program spurred
many technical developments in the remote sensing community, including work on digital SAR processors and
applications, such as measuring ocean wave length, height, and direction.
When SAR data are collected by the radar system, the data appear to be quite unfocused. In fact, the
received data look much like random noise. As in a hologram, the essential information lies in the phase of the
received data, and phase-sensitive processing is needed to obtain a focused image.
Using the principles of Fourier optics [4], it was found that the focusing could be accomplished using laser
beams and lenses. The received radar data were recorded on black and white film, and a laser beam was
collimated and shone through the film. Lenses provided a real-time two-dimensional Fourier transform of the
data, and diffraction gratings were used to focus the data. After using another set of lenses to perform Fourier
transforms, an image was obtained and recorded onto film. A detailed book on the optical processing of SAR
data was written by Harger in 1970 [5].
Optical SAR processors could produce a well-focused image, but required precise alignment of high-quality
lenses on a large optical bench. Even though the processing was done in real time (apart from the film
processing), a skilled operator was required to control image quality, and production was difficult to automate. In
addition, the output film limited the dynamic range of the final image [6].
With the leadup to the SEASAT mission, a concentrated effort was made to develop digital SAR processors.
The received radar data were digitized and stored on tape or disk. In the late 1970s, 256 KB of memory was
considered large for computers, and disk capacity and transfer speeds were very modest by today's standards.
Nevertheless, a digital SAR processor was built for SEASAT data in 1978, and it took 40 hours to process a
40x40 km image with 25-m resolution [7]. Today's computers can process the same data in a few tens of
seconds, using desktop workstations.
Developing digital SAR processor algorithms required a complete shift of paradigm from optical processing.
Word lengths, data scaling, corner turning, interpolation, and fast convolution were items to consider. After a
sequence of rapid prototyping, accurate processing was developed in the form of the range Doppler algorithm
(RDA), concurrently but separately developed by MacDonald Dettwiler (MDA) and the Jet Propulsion Lab (JPL)
in 1978. The benefits and potential of digital SAR processing were quickly recognized, and the digital method
soon became established as the standard.
Since 1978, the RDA has undergone a number of useful refinements, and other digital algorithms have been
developed, sometimes optimized for specific applications. The purpose of this book is to outline the various
algorithm developments since 1978, and to explain the ones used for remote sensing satellite data processing in
detail.
There is no need to explain the virtues of digital technology here. It is fair to say that a large proportion of
the innovations in radar systems in the last 30 years has arisen from the use of digital technology in radar
systems design, especially in the data processing. The speed of algorithm and system development is holding
steady, and more capable remote sensing radar systems are being designed every year.
The increased use of SAR in the remote sensing community is based upon three main principles:
1. Radar is an active sensor that provides its own illumination, so images can be acquired day or night.
2. The electromagnetic waves of common radar frequencies pass through clouds and precipitation with little or no deterioration.
3. The radar energy scatters off materials differently from optical energy, providing a complementary and sometimes better discrimination of surface features than optical sensors.
A current review of the applications of SAR in remote sensing is given in the Manual of Remote Sensing [8],
as well as on many Web sites, including the Canada Centre for Remote Sensing education site. The applications
include agriculture, soil moisture, forestry, geology, hydrology, flood and sea ice monitoring, oceanography, ship and
oil slick detection, snow and ice, land cover mapping, height mapping, and change detection (land subsidence,
glacier motion, volcanic activity). Even some subsurface features have been imaged by SAR, as the radar signals
can penetrate into some materials such as dry sand. Furthermore, researchers have found application in
bathymetry (measuring the underwater bottom topography) [9, 10].
In the remote sensing context, a SAR system makes an image of the Earth's surface from a spaceborne or
airborne platform. It does this by pointing a radar beam approximately perpendicular to the sensor's motion
vector, transmitting phase-encoded pulses, and recording the radar echoes as they reflect off the Earth's surface.
To form an image, intensity measurements must be taken in two orthogonal directions. In the SAR context,
one dimension is parallel to the radar beam, as the time delay of the received echo is proportional to the
distance or range along the beam to the scatterer. By measuring the time delay, the radar places the echo at the
correct distance from the sensor, along the image's x-axis. The geometric distortion caused by the fact that the
beam is not parallel with the ground, and is not exactly perpendicular with the motion vector of the sensor, is
corrected during the processing.
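The delay-to-range mapping described above is a one-line conversion. The sketch below is a hypothetical illustration (not code from the book) of the two-way relation t = 2R/c used to place each echo along the range axis; the 850-km slant range is an assumed, representative spaceborne value.

```python
# Hypothetical illustration (not from the book): mapping echo time delay
# to slant range. A scatterer at slant range R returns its echo after
# the two-way travel time t = 2R/c, so the processor places the echo at
# R = c*t/2 along the range axis.

C = 299_792_458.0  # speed of light (m/s)

def delay_to_slant_range(t_seconds):
    """Slant range (m) corresponding to a two-way echo delay (s)."""
    return C * t_seconds / 2.0

def slant_range_to_delay(r_meters):
    """Two-way echo delay (s) for a scatterer at slant range r (m)."""
    return 2.0 * r_meters / C

# A spaceborne SAR at roughly 850 km slant range sees each echo about
# 5.7 ms after the pulse is transmitted.
```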
The second dimension of the image is given by the travel of the sensor itself. As the sensor moves along in
a nominally straight line above the Earth's surface, the radar beam sweeps along the ground at approximately
the same speed. The radar system emits pulses of electromagnetic energy, and the echoes received from the
pulses are processed and placed in the image's y-axis, according to the sensor's current position, creating an
image with the correct geometric coordinates. The y-dimension is called azimuth (or along-track), an analogy to
the azimuth steering of a rotary radar beam. However, in SAR, the azimuth dimension is usually obtained by
linear motion of the sensor, not by rotation of the beam from a stationary sensor.
A SAR can be operated in a number of different ways, sometimes with different systems, or sometimes as
different modes within a single system. Some of the different modes of operation include:
Stripmap SAR: In this mode, the antenna pointing direction is held constant as the radar platform moves. The
beam sweeps along the ground at an approximately uniform rate, and a contiguous image is formed. A strip
of ground is imaged, and the length of the strip is only limited by how far the sensor moves or how long
the radar is left on. The azimuth resolution is governed by the antenna length.
ScanSAR: This mode is a variation of stripmap SAR, whereby the antenna is scanned in range several times
during a synthetic aperture. In this way, a much wider swath is obtained, but the azimuth resolution is
degraded (or the number of looks is reduced). The best azimuth resolution that can be obtained is that of
the stripmap mode multiplied by the number of swaths scanned.
Spotlight SAR: The resolution of the stripmap mode can be improved by increasing the angular extent of the
illumination on the area of interest (a spot on the ground). This can be done by steering the beam
gradually backwards as the sensor passes the scene. The beam steering has the transient effect of simulating
a wider antenna beam (i.e., a shorter antenna). However, the antenna must ultimately be steered forward
again, and a part of the ground is missed. This means that the coverage is not contiguous; only one spot
on the ground is imaged at a time.
Inverse SAR: So far, it has been assumed that the target is stationary and the SAR system is moving.
However, SAR also works if the target is moving and the radar system is stationary. This reversal of roles
leads to the term "Inverse SAR." An example is the tracking of satellites from a ground-based radar. The
concept can be generalized to the case where both the target and the sensor are moving, such as a ship in
heavy seas being imaged by an airborne or satellite SAR.
Bistatic SAR: This is a mode of SAR operation in which the receiver and the transmitter are at different
locations. In remote sensing SARs, the receiver is usually at approximately the same location as the
transmitter, which is referred to as monostatic.
Interferometric SAR (InSAR): This is a mode of SAR operation in which post-processing is used to extract
terrain height or displacement from the complex images. Two complex SAR images, acquired at the same spatial
positions (differential InSAR) or slightly different positions (terrain height InSAR) over the same area, are
conjugate multiplied. The result is an interferogram with contours of equal displacement or elevation.
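The stripmap and ScanSAR azimuth-resolution relations quoted in the list above can be put into numbers. The sketch below is illustrative only; the 10-m antenna and four sub-swaths are assumed parameters, not those of any particular sensor.

```python
# Illustrative sketch (not from the book): azimuth-resolution trade-off
# between stripmap and ScanSAR. Stripmap azimuth resolution is about
# half the antenna length; ScanSAR degrades it by roughly the number of
# sub-swaths scanned during one synthetic aperture.

def stripmap_azimuth_resolution(antenna_length_m):
    """Best-case stripmap azimuth resolution (m): about L / 2."""
    return antenna_length_m / 2.0

def scansar_azimuth_resolution(antenna_length_m, n_subswaths):
    """Approximate best ScanSAR azimuth resolution (m)."""
    return stripmap_azimuth_resolution(antenna_length_m) * n_subswaths

# A 10-m antenna gives about 5-m stripmap resolution; scanning 4
# sub-swaths widens the swath but coarsens azimuth resolution to ~20 m.
```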
This book deals primarily with stripmap and ScanSAR, operating in the monostatic mode, as these are the
dominant modes of operation of remote sensing SARs. The processing of spotlight SAR data is described in detail
in references [11-14]. The processing of inverse SAR data is covered in [15-17]. Bistatic SAR has not been used
in remote sensing applications so far, but interest in it is increasing; the concepts and processing are discussed
in [17-23]. The polar format algorithm (PFA) is more suited to processing spotlight SAR data [11-13]. The
processing of interferometric SAR data is discussed in [24, 25].
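The InSAR "conjugate multiply" step mentioned above can be sketched with synthetic data; the toy complex images and the linear phase ramp below are placeholders for illustration, not real SAR data or the book's own processing chain.

```python
import numpy as np

# Toy sketch with synthetic data (not from the book): an interferogram is
# formed by conjugate-multiplying two coregistered complex SAR images.
# The phase of the product is the interferometric phase, related to
# terrain height or displacement depending on the acquisition geometry.

rng = np.random.default_rng(0)
shape = (64, 64)
amplitude = 1.0 + rng.rayleigh(size=shape)        # speckle-like, > 0
common_phase = rng.uniform(-np.pi, np.pi, shape)  # random scene phase

# Simulated interferometric phase ramp between the two acquisitions,
# standing in for a topographic fringe pattern:
ramp = np.linspace(0.0, 4.0 * np.pi, shape[1])[None, :] * np.ones((shape[0], 1))

img1 = amplitude * np.exp(1j * common_phase)
img2 = amplitude * np.exp(1j * (common_phase + ramp))

interferogram = img1 * np.conj(img2)   # the conjugate multiply
insar_phase = np.angle(interferogram)  # wrapped to (-pi, pi]

# The random scene phase cancels in the product; only the (wrapped)
# ramp survives as interference fringes.
```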
SAR Resolution
At this stage, the signal processor makes its important contribution. The pulse lengths, used in practice, are too
long to provide a usable resolution, so pulse compression techniques are used to obtain fine resolution in the
range direction. The slant range resolution is equal to the speed of light divided by twice the processed range
bandwidth. Pulse compression techniques are described in Chapter 3.
Pulse compression is used in virtually all radar systems, not just in SARs. But the second contribution of
the signal processor is unique to SAR systems, and is the main distinguishing feature of such systems. In the
azimuth direction, the resolution of a simple radar is equal to the angular beamwidth multiplied by the range to
the scatterer. Even radar systems with an azimuth beamwidth that is only a fraction of a degree develop an
impractically coarse resolution in azimuth when the range exceeds a few kilometers.
The secret to obtaining good azimuth resolution is to recognize that each scatterer in the radar beam reflects
energy with a different Doppler shift, and to use this distinction to separate the received energy into fine cells in
the azimuth dimension. This involves the concept of a "synthetic aperture," which is discussed in Chapter 4 and
gives the SAR system its name. By utilizing this Doppler shift, an aperture of many kilometers can be
synthesized, with a correspondingly improved resolution.
After processing, the ultimate azimuth resolution is equal to one-half the antenna length, independent of
range. Thus, to get a finer resolution, the antenna is made shorter. This is a special property of SAR, and is
opposite to the normal antenna or lens principle that the resolution is finer for a longer antenna length or larger
lens aperture. However, if the antenna is made too short, or the operating range is made too long, the image
signal-to-noise ratio may drop below an acceptable limit. These properties are discussed in Chapter 4.
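The two resolution rules above can be sketched in a few lines of code. The 30-MHz bandwidth and 10-m antenna length below are illustrative values only, not the parameters of any particular sensor; the slant range resolution is taken in its standard pulse-compression form, c/(2 Br).

```python
# Illustrative sketch of the two SAR resolution rules (assumed parameter values).
c = 3.0e8  # speed of light (m/s)

def slant_range_resolution(bandwidth_hz):
    """Slant range resolution: speed of light over twice the processed
    range bandwidth (standard pulse-compression result)."""
    return c / (2.0 * bandwidth_hz)

def azimuth_resolution(antenna_length_m):
    """Stripmap SAR azimuth resolution: one-half the antenna length,
    independent of range."""
    return antenna_length_m / 2.0

rho_r = slant_range_resolution(30.0e6)  # 30-MHz bandwidth (assumed) -> 5.0 m
rho_a = azimuth_resolution(10.0)        # 10-m antenna (assumed)     -> 5.0 m
```

Note the inverted lens principle at work: halving the antenna length to 5 m would halve the azimuth resolution to 2.5 m, subject to the SNR limit mentioned above.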
Signal-to-Noise Ratio
Another important parameter of a SAR system is the signal-to-noise ratio (SNR) in the final image. Specifically,
the SNR of a SAR signal is governed by the "radar equation," which expresses the received power as a function
of transmitted power, the range to the scatterer, and a number of system and scatterer variables. The radar
equation is often expressed in terms of SNR of the final image, so that the amount of transmitted power
required to achieve a certain level of image quality can be computed. When the image consists of distributed
features, referred to as clutter, the SNR is
$$ \mathrm{SNR}_{\mathrm{clutter}} = \frac{P_{av}\, G^2\, \lambda^3\, \sigma_0\, c}{256\, \pi^3\, R^3\, K\, T\, B_r\, F_n\, L_s\, V \sin\theta_i} \qquad (1.1) $$
where the variables are defined as
Pav  Average transmitted power
G    Antenna gain
λ    Radar wavelength
σ0   Clutter backscatter coefficient
c    Speed of light
R    Range to reflector
K    Boltzmann's constant
T    Temperature of receiver
Br   Processed range bandwidth
Fn   Receiver noise figure
Ls   System losses
V    Platform velocity
θi   Incidence angle
Fa   Radar PRF
The antenna gain is a function of elevation and azimuth angles. In terms of elevation, the gain is taken at
the specific range of the target in question. In the case of azimuth, the gain must be taken as a weighted
average of the antenna pattern over the azimuth integration angle. The quantity KT is the thermal noise of an
ideal receiver at the nominal operating temperature, and Fn is the extra noise of the actual receiver compared to
the ideal. The quantity Ls represents losses within the system signal path.
When a discrete target is imaged, and its dimension is on the order of the SAR resolution or finer, so that most of
its energy appears in a single image sample, the target's signal-to-noise ratio is
$$ \mathrm{SNR}_{\mathrm{target}} = \frac{P_{av}\, G^2\, \lambda^3\, \sigma_t\, c}{256\, \pi^3\, R^3\, K\, T\, B_r\, F_n\, L_s\, V\, \rho_r\, \rho_a} \qquad (1.3) $$
In this form, the actual (unnormalized) radar cross section of the target, σt, is used. The quantities ρr and ρa are
the processed range and azimuth resolutions. Depending upon the application, these equations can be manipulated to obtain a value
for target/clutter ratio or target-to-clutter-plus-noise ratio.
One of the main differences between the SNR of a SAR and that of a conventional radar system is the
dependence of SNR on the range R. A conventional radar has an SNR proportional to $1/R^4$, because the radar
energy spreads proportional to $1/R^2$ in both the transmit and receive directions, according to the familiar inverse
square law (where the energy is distributed over the surface of a sphere of radius R). In contrast, the
SAR processor integrates energy in azimuth over a length proportional to range, thereby removing one of the R
terms in the energy spreading relationship. This results in an SNR proportional to $1/R^3$ for a SAR
system. Understanding the SNR equations is not necessary for producing a well-focused SAR image, but the $1/R^3$
law is needed to correct the radiometry of the final image.
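As a numerical check of this range dependence, the sketch below (with arbitrary ranges) compares the SNR penalty for doubling the range under the SAR $1/R^3$ law and the conventional-radar $1/R^4$ law.

```python
import math

def snr_change_db(r1, r2, range_exponent):
    """SNR change in decibels when the range goes from r1 to r2,
    for a system whose SNR is proportional to 1/R^range_exponent."""
    return 10.0 * math.log10((r1 / r2) ** range_exponent)

# Doubling the range costs about 9 dB for a SAR (1/R^3) versus
# about 12 dB for a conventional radar (1/R^4).
sar_loss = snr_change_db(1.0, 2.0, 3)           # about -9.0 dB
conventional_loss = snr_change_db(1.0, 2.0, 4)  # about -12.0 dB
```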
Range Cell Migration
The synthetic aperture processing takes place over many echoes in the received data. Owing to the motion of the
sensor over this synthetic aperture, the range of a target changes with time. The change creates a Doppler shift
in the received data, which forms the basis of the synthetic aperture processing. However, the change in range
also causes a phenomenon called "range cell migration" (RCM), which complicates the processing.
As radar echoes are received, they are sampled and placed in memory. The processing is two-dimensional,
and can often be separated into independent or separate processing in range and in azimuth. This separation is
particularly simple if the received energy does not change significantly in range over the course of the synthetic
aperture. How much is "significant" depends upon the fineness of the range samples. If the change in range, or
range migration, is larger than one sample (one cell), it is considered significant, and must be taken into account
in the processing. Often, the RCM is corrected in an explicit processing operation, leading to the term "range cell
migration correction" (RCMC).
RCMC is a challenging operation in SAR processing. The key is to correct the RCM accurately, without
unduly complicating the processing. The way that RCMC is performed is often the feature that distinguishes one
algorithm from another, as will be seen in the various algorithm descriptions in the book.
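A rough feel for when the RCM is "significant" can be had from a simple broadside, straight-line geometry. The sketch below compares the maximum range change over the aperture with the range cell size c/(2 fs); the 850-km range, 18-km aperture, and 20-MHz sampling rate are hypothetical, satellite-like numbers, not those of any specific system.

```python
import math

C = 3.0e8  # speed of light (m/s)

def range_cell_migration(r0, aperture_len):
    """Maximum slant range change over a synthetic aperture, for a target
    at closest range r0 seen broadside from a straight-line track."""
    return math.hypot(r0, aperture_len / 2.0) - r0

def migration_in_cells(r0, aperture_len, sample_rate_hz):
    """RCM expressed in range cells, where one cell spans c / (2 * fs)."""
    cell_size = C / (2.0 * sample_rate_hz)
    return range_cell_migration(r0, aperture_len) / cell_size

# Hypothetical values: 850-km range, 18-km aperture, 20-MHz sampling rate
n_cells = migration_in_cells(850.0e3, 18.0e3, 20.0e6)
# n_cells is well above one cell, so an explicit RCMC step would be needed here
```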
This book is aimed at the processing of SAR data from sensors devoted to remote sensing. An emphasis is
placed on satellite sensors because of their wide coverage and data availability.
To set the scene for the SAR processing algorithms in the body of the book, the main SAR sensors for
which the algorithms have been developed are introduced below. More details on the sensors are available in the
Manual of Remote Sensing [8] and Observation of the Earth and Its Environment [26].
SEASAT: SEASAT was the first civilian spaceborne SAR, and drew the attention of the remote sensing
community to the benefits of SAR in 1978. It operated at L-band (1.27 GHz) with an incidence angle of
23°. The large range cell migration of the received data was a significant factor in processor design. It
inspired a significant design effort in digital processing, and many processors in use today trace their
genealogy back to SEASAT.
SEASAT also inspired a wide-ranging development into the applications of SAR data in remote sensing,
with an initial emphasis on oceanography. SEASAT carried five complementary instruments designed primar-
ily for oceanographic applications:
o a radar altimeter to measure spacecraft height above the ocean surface, and hence the average geoid;
o a visible and infrared radiometer to identify cloud, land, and water features;
o an L-band, HH polarization SAR, with a fixed look angle, to monitor the global surface wave field and polar
sea ice conditions.
Magellan: The NASA/JPL Magellan spacecraft was launched on May 4, 1989, and arrived at Venus on August
10, 1990. The spacecraft was based on a Voyager bus and utilized a 4-m circular dish for the SAR antenna.
The radar frequency was 2.4 GHz (S-band). In the two-year SAR mapping mission, over 98% of the surface
was mapped with a resolution of 100 m.
ALMAZ: The Russian SAR satellite, ALMAZ, was launched in 1991, but it did not have much exposure in the
western remote sensing community. Its main unique feature was its operation at S-band (3.0 GHz).
SIR-C/X-SAR: SIR-C was the third of a series of NASA SAR systems to fly on the Space Shuttle. It had two
missions, one in April 1994 and one in October 1994. Its main features were operation at three frequencies,
L-, C-, and X-band, and the use of quad-polarization on L- and C-bands. The X-band SAR system was
provided by Germany and Italy. The multiple channels spawned a rich set of applications research projects,
which have helped define the requirements for subsequent SAR systems.
ERS-1/2: The European Remote Sensing satellite, ERS-1, launched in 1991, represented a major effort in global
cooperation. Twelve European nations and Canada combined to build a multi-instrument microwave satellite.
The SAR operated at C-band, and other instruments included an altimeter, radiometer, and scatterometer.
The SAR specifications were similar to SEASAT, with the exception of the wavelength.
ERS-2 was launched in 1995, with the same specifications as ERS-1. Its main unique feature was its
operation in tandem with ERS-1, which opened up the field of repeat-pass satellite SAR interferometry.
With a one-day interval between observations, images that were taken from almost the same vantage point
could be used to measure terrain height, and to detect changes in surface features.
Another interesting feature of ERS-1 is the use of the SAR instrument in a wave mode. Five-kilometer
imagettes can be received around the world on a routine basis, and used for ocean wave analysis, surface
wind estimation, and weather forecasting.
J-ERS: The Japanese Earth Remote Sensing satellite, J-ERS, was launched in 1992. It had similar specifications
to SEASAT, but had a larger incidence angle, optimized for geological applications.
RADARSAT-1: The RADARSAT-1 satellite was launched by the Canadian Space Agency in 1995. It operates
at C-band, and its main use is the daily mapping of Arctic ice. Its main innovation is a scanning mode of
operation, called ScanSAR, whereby very wide swaths are imaged by scanning the radar beam to different
elevation angles within the synthetic aperture time. Swath widths of 300 km are obtained from two scanned
beams and widths of 500 km are obtained with four scanned beams.
The versatility of RADARSAT is further enhanced by the use of seven standard beams with 25-m
resolution and 100-km swath width, three wide beams with 30-m resolution and 150-km swath width, and
five fine beams with 8-m resolution and 50-km swath width. Each set of beams covers a range of incidence
angle from 16° to 49°, and there are other experimental beams at extreme incidence angles.
A novel application of RADARSAT-1 was its use in the Antarctic Mapping Mission in 1997. The satellite
was turned in a complicated maneuver to point the antenna left, allowing areas near the South Pole to be
imaged for the first time.
SRTM: The Shuttle Radar Topography Mission in February 2000 represented the third flight of SIR-C and
X-SAR. A 60-m boom was added to the Shuttle cargo bay, to carry outboard antennas for simultaneous
reception of the C- and X-band SAR signals. This raised the technology of spaceborne interferometric SAR
to a new level, as the problem of temporal decorrelation was eliminated. The received data provides almost
complete topographic height maps within ±60° latitude, with a posting of 30 m and a vertical accuracy of
16 m.
ENVISAT/ASAR: The third European satellite SAR system was launched in February 2002, and is providing
good quality data. It builds upon ERS-1 and ERS-2 technology by providing two polarizations plus wide
swath and ScanSAR modes.
Future Satellite SAR Sensors
At the time of writing, a number of new SAR satellite systems are under development for launch in the
2005-2006 time frame. These include the Canadian C-band RADARSAT-2, the European X-band TerraSAR-X,
and the Japanese L-band ALOS/PALSAR.
The specifications of RADARSAT-2 are similar to RADARSAT-1, with the following notable improvements:
o A number of polarimetric modes, including a full quadrature polarimetric mode, over a smaller swath width;
o Resolutions as fine as 3 m;
The book is divided into three parts. The first part contains information needed to understand SAR processing
algorithms, including background on SAR systems, some signal processing fundamentals, and a description of the
SAR signals. The second part contains descriptions and analyses of the main SAR processing algorithms used in
satellite remote sensing. The third part includes algorithms for determining the main Doppler parameters, the
Doppler centroid, and the azimuth FM rate.
Each chapter is self-contained, and references are given to details needed from other chapters. Readers with a
SAR background can skip directly to the processing algorithm and parameter estimation chapters. More detailed
outlines of each chapter follow.
Chapter 1 - Introduction This chapter introduces SAR system concepts and some of the spaceborne sensors
that have been prominent in the remote sensing community.
Chapter 2 - Signal Processing Fundamentals The digital signal processing concepts needed for SAR
processing are introduced, with an emphasis on sampling, Fourier transforms, convolution, windows, inter-
polation, and point target analysis.
Chapter 3 - Pulse Compression of Linear FM Signals The properties of linear FM signals are introduced,
including their compression using matched filters. The use of windows and compression errors are discussed.
Chapter 4 - Synthetic Aperture Concepts The geometry of SAR data collection is discussed, leading to the
properties of the SAR signals and the concepts of synthetic aperture and resolution.
Chapter 5 - SAR Signal Properties A detailed look at the mathematical structure of SAR signals is given,
including the signal spectrum in the range Doppler and two-dimensional frequency domains. The effect of
squint on the signal properties is discussed, as well as the Doppler centroid and range migration.
Chapter 6 - The Range Doppler Algorithm The description of SAR processing algorithms begins with the
most common satellite SAR processing algorithm, the RDA. Details include the handling of squint, RCMC,
and multilook processing. In this and subsequent sections, a point target is processed to illustrate the
operation of the algorithm.
Chapter 7 - The Chirp Scaling Algorithm The CSA offers an incremental improvement in image quality
over the RDA, by replacing the RCMC interpolator with a scaling operation in the range time/azimuth
frequency domain.
Chapter 8 - The Omega-K Algorithm The wave equation or omega-K algorithm (ωKA) traces its heritage to
seismic processing techniques, where range migration is corrected by a range interpolation in the two-
dimensional frequency domain. It can handle the widest apertures and highest squints of all algorithms.
Chapter 9 - The SPECAN Algorithm This algorithm was developed to produce a quick-look image with
minimum use of memory and computing. A single and short FFT is used in the compression operation,
rather than the usual two long ones. As its image quality is a little less than that of the RDA, its image
quality is discussed in detail, especially phase and radiometric accuracy.
Chapter 10 - Processing ScanSAR Data ScanSAR processing is different from conventional processing
because each target experiences a segmented aperture in azimuth. Because the SPECAN algorithm naturally
operates with segmented apertures, it is often used for medium-quality ScanSAR processing. Other ScanSAR
processing algorithms are also discussed, including the full-aperture, short IFFT, chirp-z, and extended chirp
scaling algorithms.
Chapter 11 - Comparison of Algorithms The processing part closes with a comparison of the various
algorithms discussed, and suggests the best algorithm to use in specific cases.
Chapter 12 - Doppler Centroid Estimation Estimation of the Doppler centroid, the other main Doppler
parameter, is treated, as its value is needed to locate the azimuth processing band correctly.
Chapter 13 - Azimuth FM Rate Estimation The value of the azimuth FM rate must be obtained, so that
the phase of the azimuth matched filter can be defined and sharp image focus obtained. It is a function of
the effective velocity of the radar platform, and can be found from geometry models. If the estimate is not
accurate enough, it can be refined using measurements on the received radar data.
The first image produced by the RADARSAT-1 system is included in Figure 1.1. It heralded a new era in
operational satellite SAR imaging in 1995.
Figure 1.1: RADARSAT-1 image of Cape Breton Island in Nova Scotia.
(Copyright Canadian Space Agency, 1995.)
The data was received at the Gatineau ground station on November 28, 1995, and processed by the Canadian
CDPF processing center using the range Doppler algorithm (Chapter 6).
The scene covers the central and eastern part of Cape Breton Island in Nova Scotia. The scene center is
approximately 46°N, 60°W, and the image size is approximately 120 x 175 km. The image is processed to four
looks and the resolution is 25 m. The coal and steel town of Sydney is a few centimeters below the center of
the image. A southwest wind was blowing, giving some interesting features to the maritime parts of the scene.
Note that, because of file size and printing limitations, the quality of this image and other images portrayed
in the book is generally not representative of the full quality available from the sensor.
References
[2] J. J. Kovaly. Synthetic Aperture Radar. Artech House, Dedham, MA, 1976.
[3] K. Tomiyasu. Tutorial Review of Synthetic-Aperture Radar (SAR) with Applications to Imaging of the
Ocean Surface. Proc. IEEE, 66 (5), pp. 563-583, 1978.
[5] R. O. Harger. Synthetic Aperture Radar Systems: Theory and Design. Academic Press, New York, 1970.
[6] D. A. Ausherman. Digital Versus Optical Techniques in Synthetic Aperture Radar Data Processing. In
Application of Digital Image Processing (IOCC 1977), Vol. 119, pp. 238-256. SPIE, 1977.
[7] J. R. Bennett, I. G. Cumming, R. A. Deane, P. Widmer, R. Fielding, and P. McConnell. SEASAT Imagery
Shows St. Lawrence. Aviation Week and Space Technology, page 19 and front cover, February 26, 1979.
[8] F. M. Henderson and A. J. Lewis, editors. Manual of Remote Sensing, Volume 2: Principles and Applications
of Imaging Radar. John Wiley & Sons, New York, 3rd edition, 1998.
[9] W. A. Alpers and I. Hennings. A Theory of the Imaging Mechanism of Underwater Bottom Topography by
Real and Synthetic Aperture Radar. J. of Geophysical Research, 89 (C6), pp. 10529-10546, November 1984.
[10] R. Romeiser, O. Hirsch, and M. Gade. Remote Sensing of Surface Currents and Bathymetric Features in
the German Bight by Along-Track SAR Interferometry. In Proc. Int. Geoscience and Remote Sensing Symp.,
IGARSS'00, Vol. 3, pp. 1081-1083, Honolulu, HI, July 2000.
[11] W. G. Carrara, R. S. Goodman, and R. M. Majewski. Spotlight Synthetic Aperture Radar: Signal Processing
Algorithms. Artech House, Norwood, MA, 1995.
[13] M. Soumekh. Synthetic Aperture Radar Signal Processing with MATLAB Algorithms. Wiley-Interscience, New
York, 1999.
[14] G. Franceschetti and R. Lanari. Synthetic Aperture Radar Processing. CRC Press, Boca Raton, FL, 1999.
[15] D. L. Mensa. High Resolution Radar Cross-Section Imaging. Artech House, Norwood, MA, 1991.
[16] D. R. Wehner. High Resolution Radar. Artech House, Norwood, MA, 2nd edition, 1995.
[17] R. J. Sullivan. Microwave Radar Imaging and Advanced Concepts. Artech House, Norwood, MA, 2000.
[18] M. I. Skolnik. Radar Handbook. McGraw-Hill, New York, 2nd edition, 1990.
[19] D. Massonnet. Capabilities and Limitations of the Interferometric Cartwheel. IEEE Trans. on Geoscience
and Remote Sensing, 39 (3), pp. 506-520, March 2001.
[20] Y. Ding and D. C. Munson. A Fast Back-Projection Algorithm for Bistatic SAR Imaging. In Proc. Int.
Conf. on Image Processing, ICIP 2002, Vol. 2, pp. 449-452, Rochester, NY, September 22-25, 2002.
[22] D. D'Aria, A. Monti Guarnieri, and F. Rocca. Focusing Bistatic Synthetic Aperture Radar Using Dip Move
Out. IEEE Trans. on Geoscience and Remote Sensing, 42 (7), pp. 1362-1376, July 2004.
[23] O. Loffeld, H. Nies, V. Peters, and S. Knedlik. Models and Useful Relations for Bistatic SAR Processing.
IEEE Trans. on Geoscience and Remote Sensing, 42 (10), pp. 2031-2038, October 2004.
[24] R. Bamler and P. Hartl. Synthetic Aperture Radar Interferometry. Inverse Problems, 14 (4), pp. R1-R54,
1998.
[25] R. F. Hanssen. Radar Interferometry: Data Interpretation and Error Analysis. Kluwer Academic Publishers,
Dordrecht, the Netherlands, 2001.
[26] H. J. Kramer. Observation of the Earth and Its Environment: Survey of Missions and Sensors.
Springer-Verlag, Berlin, 1996.
Chapter 2
Signal Processing
Fundamentals
2.1 Introduction
This chapter provides the reader with the mathematical preliminaries required to understand the digital signal
processing (DSP) of SAR data. The reader is expected to be familiar with linear systems theory, as presented in
many excellent textbooks [1-5]. The purpose of this chapter is not to replace these books, but to review the
specific DSP tools and specialized mathematical operations used in SAR processing.
Fourier transforms, convolution, interpolation, and measurement of image quality parameters are the main
tools of SAR processing. These concepts are reviewed in this chapter. The chapter begins with a review of linear
convolution in Section 2.2. Convolution is frequently performed in the frequency domain via the Fourier transform.
This transform and its properties, relevant to SAR processing, are summarized in Section 2.3. When convolution is
performed via the Fourier transform, cyclic convolution is actually performed, and this is the topic of Section 2.4.
Section 2.5 summarizes the sampling theorem, and discusses the differences between real and complex
sampling. It also defines the meaning of baseband signals and the fundamental frequency range, and presents the
spectral properties of sampled signals.
Usually, a smoothing or tapering window is required to process the SAR data. Section 2.6 presents the time
and frequency properties of the Kaiser window, which is used to control the image quality of the processed data.
Interpolation is required at several stages of SAR processing. Section 2.7 discusses the sinc interpolation
kernel, one of the most widely used interpolators in engineering practice. Implementation issues are also addressed
- how interpolation can be performed efficiently and accurately.
A common method of evaluating the quality of a processed SAR image is to measure a set of image quality
parameters. The parameters represent the impulse response of the SAR system and the processor, and can be
measured from a processed point target. Section 2.8 discusses these parameters and how the measurements can be
made.
As the name of this book implies, SAR processing is to be performed on discrete-time signals with finite
length records, rather than on the continuous-time representations of the signals. However, in order to derive
certain properties or to analyze data, continuous time is often used for mathematical simplicity and/or
convenience. After obtaining a result with the continuous-time analysis, its discrete counterpart can always be
used to obtain an equivalent result.
A basic operation in signal processing is the convolution of a signal s(t) with a filter h(t). In continuous time,
the convolution is written as
$$ y(t) = s(t) \otimes h(t) = \int_{-\infty}^{\infty} s(u)\, h(t-u)\, du = \int_{-\infty}^{\infty} s(t-u)\, h(u)\, du \qquad (2.1) $$
where ® denotes the convolution operation and y(t) is the output signal. Usually, the time duration of the filter is
shorter than that of the signal. Let the filter be of duration T, centered at t = 0. Then, the integration limits of
(2.1) can be reduced to
$$ y(t) = \int_{t-T/2}^{t+T/2} s(u)\, h(t-u)\, du = \int_{-T/2}^{T/2} s(t-u)\, h(u)\, du \qquad (2.2) $$
where h1(t1) and h2(t2) are the respective one-dimensional filters. Then, the convolution with h(t1, t2) in (2.7) is
implemented by performing a one-dimensional convolution of the two-dimensional signal s(t1, t2) in the t1 direction with h1(t1), followed by a one-dimensional convolution in the t2 direction with h2(t2)
$$ y(t_1, t_2) = s(t_1, t_2) \otimes \left[ h_1(t_1) \otimes h_2(t_2) \right] = \left[ s(t_1, t_2) \otimes h_1(t_1) \right] \otimes h_2(t_2) \qquad (2.9) $$
The second equality follows from the fact that convolution is also associative.
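The separable two-pass scheme of (2.9) is easy to verify numerically. The sketch below (NumPy, used purely for illustration) convolves a small random array with h1 along the first axis and then h2 along the second, and checks the result against a brute-force 2-D convolution with the filter h(t1, t2) = h1(t1) h2(t2).

```python
import numpy as np

def conv2_separable(s, h1, h2):
    """Two 1-D passes: convolve along axis 0 with h1, then along axis 1
    with h2, implementing (2.9) for a separable filter."""
    step1 = np.apply_along_axis(np.convolve, 0, s, h1)
    return np.apply_along_axis(np.convolve, 1, step1, h2)

def conv2_direct(s, h):
    """Brute-force 2-D linear convolution, for checking the two-pass result."""
    y = np.zeros((s.shape[0] + h.shape[0] - 1, s.shape[1] + h.shape[1] - 1))
    for i in range(s.shape[0]):
        for j in range(s.shape[1]):
            y[i:i + h.shape[0], j:j + h.shape[1]] += s[i, j] * h
    return y

rng = np.random.default_rng(0)
s = rng.standard_normal((5, 6))            # a small test signal
h1 = np.array([1.0, 2.0, 1.0])             # filter in the t1 direction
h2 = np.array([1.0, -1.0])                 # filter in the t2 direction
y_sep = conv2_separable(s, h1, h2)
y_dir = conv2_direct(s, np.outer(h1, h2))  # same filter, applied in one 2-D pass
```

The two results agree to machine precision, and the two-pass form needs far fewer operations when the filters are long.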
In discrete time, the linear convolution of a signal s(n) with a filter h(n) can be written as
$$ y(n) = \sum_{m=n-(M-1)}^{n} s(m)\, h(n-m) \qquad (2.10) $$
in which the filter, of length M samples, is zero outside the interval n = 0 to M - 1. The length M is assumed
to be shorter than the signal length, K, which is the case for SAR data. The indexing in (2.10) corresponds to a
causal filter, where the output at time n uses signal values from time n and earlier.
A linear convolution between a K = 8 sample signal and an M = 3 sample filter is illustrated in Figure 2.1,
implementing (2.10). This is how the MATLAB "conv(s, h)" function works. The signal is used in normal-time
order, and the filter in reverse-time order. The filter is shown by circles in two locations, first as it just meets the
signal at n = 0, then, when it is at the end of the signal, at n = 9. Between these two locations it slides along,
one sample at a time, and the inner product is formed at each location to obtain one sample of the answer.
Figure 2.1: The signals in a linear convolution operation (× denotes an input sample, ○ a filter coefficient, and ◇ an output sample).
The outputs of the filter exist at times running from n = 0 ... 9. But note that the outputs at n = 0 and 1
and n = 8 and 9 represent "partial convolutions," in which the signal points are only multiplied by a subset of
the filter coefficients. These partial convolutions are represented by the smaller diamonds in the figure. The first
two points correspond to outputs when the initial conditions of the filter have not settled down.
In some applications, these partial convolution points are useful, and in others they are not wanted. In the
latter case, the terminology "good output points" is applied to the K - M + 1 = 6 outputs occurring at times n
= 2 . . . 7, and the other output points are discarded. The MATLAB "conv(s,h)" function produces all K + M -
1 = 10 output points shown in Figure 2.1. In using DFTs to implement a convolution, edge effects also exist. The
edges, if unwanted, are the "throwaway" regions that are discussed in Section 2.4.
An example of a one-dimensional linear convolution with K = 8 and M = 3 is
$$ \{1, 3, -1, 5, 2, 6, 4, -2\} \otimes \{1, 2, 3\} = \{1, 5, 8, 12, 9, 25, 22, 24, 8, -6\} \qquad (2.11) $$
The six good output points are {8, 12, 9, 25, 22, 24}. The edge or partial convolution points are {1, 5} and {8, -6}.
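The example in (2.11) can be reproduced in a few lines; here NumPy's convolve stands in for the MATLAB conv function mentioned above, with mode='valid' extracting the good output points.

```python
import numpy as np

s = [1, 3, -1, 5, 2, 6, 4, -2]  # K = 8 signal samples
h = [1, 2, 3]                   # M = 3 filter samples

full = np.convolve(s, h)                # all K + M - 1 = 10 outputs, as in (2.11)
good = np.convolve(s, h, mode='valid')  # the K - M + 1 = 6 "good" points
# full -> [1, 5, 8, 12, 9, 25, 22, 24, 8, -6]
# good -> [8, 12, 9, 25, 22, 24]
```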
Two-Dimensional Convolution
2.3.1 Continuous-Time Fourier Transform
The Fourier transform is considered first in the continuous-time domain. When time is continuous (i.e., not
sampled), it is referred to as the continuous-time Fourier transform, or simply the Fourier transform. The Fourier
transform gains its usefulness from the fact that a function g(t) can be represented by a summation of sinusoidal
functions, each having a different amplitude and phase. The function g(t), where t denotes continuous time, can
be complex. Each of these sinusoidal functions corresponds to a spectral component in the frequency domain in
which the spectrum of g(t) is represented.
The Fourier transform pair for the continuous-time case can be written as
$$ G(f) = \int_{-\infty}^{\infty} g(t) \exp\{-j\, 2\pi f t\}\, dt $$
$$ g(t) = \int_{-\infty}^{\infty} G(f) \exp\{+j\, 2\pi f t\}\, df $$
where j is the complex constant √−1. The first equation represents the forward transform in which the complex
spectrum G(f) is computed. The second equation represents the inverse Fourier transform in which the original
signal g(t) is reconstructed from its spectrum. Usually, this Fourier transform pair is denoted by
$$ g(t) \leftrightarrow G(f) $$
The Fourier transform can be easily extended to two dimensions, with t1 and t2 denoting the two time axes,
and f1 and f2 the corresponding frequency axes. The Fourier transform pair in two dimensions can be written as
$$ G(f_1, f_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(t_1, t_2) \exp\{-j\, 2\pi (f_1 t_1 + f_2 t_2)\}\, dt_1\, dt_2 $$
$$ g(t_1, t_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} G(f_1, f_2) \exp\{+j\, 2\pi (f_1 t_1 + f_2 t_2)\}\, df_1\, df_2 \qquad (2.18) $$
Next, the discrete Fourier transform (DFT) and its inverse (IDFT) are considered. These transforms are defined
for finite length or periodic sampled signals. For a discrete-time signal g(n) of length N, the DFT pair can be
written as
$$ \mathrm{DFT:}\quad G(k) = \sum_{n=0}^{N-1} g(n) \exp\left\{-j\, \frac{2\pi k n}{N}\right\} \qquad k = 0, \ldots, N-1 \qquad (2.19) $$
$$ \mathrm{IDFT:}\quad g(n) = \frac{1}{N} \sum_{k=0}^{N-1} G(k) \exp\left\{+j\, \frac{2\pi k n}{N}\right\} \qquad n = 0, \ldots, N-1 \qquad (2.20) $$
where the N values of G(k) are called spectral coefficients. Note that the scaling constant 1/N in (2.20) is needed
to recover g(n) with the correct amplitude, although it is ignored in some signal processing applications.
The first point in the time domain g(0) corresponds to time zero, and the evenly spaced time samples are
separated by 1/fs, where fs is the sampling rate. Similarly, the first point in the frequency domain G(0)
corresponds to zero frequency and the evenly spaced frequency samples are separated by fs/N. Thus, the spectral
sample G(k) corresponds to frequency k fs/N (i.e., the frequency of a complex sine wave that has k cycles per
record of N samples).
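This bin-to-frequency mapping can be demonstrated with a short NumPy experiment: a complex sine wave with k cycles per record produces a DFT peak at sample k, corresponding to the frequency k fs/N. The record length and sampling rate below are arbitrary choices.

```python
import numpy as np

N = 32       # record length (arbitrary)
fs = 1000.0  # sampling rate in Hz (arbitrary)
k = 5        # cycles per record

n = np.arange(N)
g = np.exp(2j * np.pi * k * n / N)  # complex sine wave with k cycles per record

G = np.fft.fft(g)
peak_bin = int(np.argmax(np.abs(G)))  # the energy concentrates at sample k
peak_freq = peak_bin * fs / N         # = k * fs / N = 156.25 Hz here
```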
Because of the 2π periodicity of the complex exponential function, the following properties of the DFT hold
for any integer M:
$$ g(n + MN) = g(n) \qquad (2.21) $$
$$ G(k + MN) = G(k) \qquad (2.22) $$
Note that (2.21) implies that the time sequence is periodic outside the analysis interval n = 0, ..., N-1. In
reality, the sequence is usually not periodic, but this assumption must be made when the DFT is used. The
leakage discussed at the end of the section is a consequence of a nonperiodic signal being assumed periodic.
Equation (2.22) shows that the spectrum of the discrete-time sequence is also periodic. The periodicity of the
spectrum comes from the theory of sampling, as opposed to the periodicity or finite length of the time sequence,
as explained in Section 2.5. Because of the periodicity of the spectrum, the energy observed at a specific DFT
output sample k can arise from a component in the continuous-time signal that has a frequency of k fs/N ±
M fs, for any integer M. The uncertainty of the value of M is referred to as a "spectral ambiguity." In some
applications, this value of M is unimportant, but in some SAR operations it is important, as noted in Section 5.4.
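The ambiguity is easy to exhibit: two continuous-time tones separated by exactly M fs produce identical samples, as in this small sketch (the tone and sampling rate are arbitrary).

```python
import numpy as np

N = 64
fs = 100.0       # sampling rate in Hz (arbitrary)
n = np.arange(N)
f0 = 3 * fs / N  # a tone that falls on DFT bin 3

g1 = np.exp(2j * np.pi * f0 * n / fs)         # tone at f0
g2 = np.exp(2j * np.pi * (f0 + fs) * n / fs)  # tone at f0 + M*fs, with M = 1

# The two sample sequences are identical, so no DFT can tell the tones apart
same = bool(np.allclose(g1, g2))
```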
The forward DFT and its inverse each require on the order of N^2 operations. When N can be factored into
many small factors, as when N is a power of 2, fast Fourier transform (FFT) methods exist to implement the
DFT or IDFT efficiently. The number of operations in the FFT or its inverse, the IFFT, is on the order of
N log2 N, when N is a power of 2. To be specific, the number of complex multiplications in a radix-2 FFT is
(N/2) log2 N, and the number of complex additions is N log2 N, where one complex multiplication consists of four real
multiplications and two real additions, and one complex addition consists of two real additions. Counting a real
addition or a real multiplication as one operation, a radix-2 FFT or IFFT requires 5N log2 N operations. FFTs
of other lengths can be almost as efficient as the radix-2 FFT, if the length is made up of small factors.
The MATLAB functions fft and ifft work with any array length, using a fast algorithm when available.
The analysis record length can be zero padded to a power of 2, or other efficient length, if needed (see Section
2.3.3).
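The operation counts above, and the zero-padding device, can be illustrated briefly; NumPy's fft plays the role of the MATLAB function here.

```python
import math
import numpy as np

def radix2_fft_ops(n):
    """Real-operation count 5 N log2(N) for a radix-2 FFT, as derived above."""
    return 5 * n * math.log2(n)

direct_ops = 1024 ** 2           # order N^2 for a direct 1,024-point DFT: 1,048,576
fast_ops = radix2_fft_ops(1024)  # 5 * 1024 * 10 = 51,200

# Zero padding a length-1000 record to the next power of 2 before transforming:
g = np.ones(1000)
G = np.fft.fft(g, n=1024)  # appends 24 zeros, then takes a 1,024-point FFT
```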
Two-Dimensional DFT
For a two-dimensional, discrete-time signal with dimensions N1 and N2, the two-dimensional DFT pair is written
as
$$ G(k_1, k_2) = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} g(n_1, n_2) \exp\left\{-j\, 2\pi \left( \frac{k_1 n_1}{N_1} + \frac{k_2 n_2}{N_2} \right)\right\} \qquad (2.23) $$
$$ g(n_1, n_2) = \frac{1}{N_1 N_2} \sum_{k_1=0}^{N_1-1} \sum_{k_2=0}^{N_2-1} G(k_1, k_2) \exp\left\{+j\, 2\pi \left( \frac{k_1 n_1}{N_1} + \frac{k_2 n_2}{N_2} \right)\right\} \qquad (2.24) $$
where each of the indices, k1, k2, n1, and n2, goes from zero to N1 - 1 or N2 - 1.
The purpose of this section is to summarize the properties of the Fourier transform that will be utilized in SAR
processing. The derivations are detailed in many textbooks specially devoted to the topic [6-8].
Again, the discussion is based on the continuous-time case; the properties also hold for the discrete-time case,
unless otherwise stated. In most cases, the properties are presented for the one-dimensional case. Generalization to
the two-dimensional case is straightforward. For the following properties, let g(t) ↔ G(f), g1(t) ↔ G1(f), and
g2(t) ↔ G2(f).
Complex conjugate: The complex conjugate of a signal is transformed into the complex conjugate of the
spectrum, with the frequency axis reversed

g*(t) ↔ G*(−f)   (2.25)
Linear operator: The Fourier transform of a sum is equal to the sum of the Fourier transforms

a g1(t) + b g2(t) ↔ a G1(f) + b G2(f)   (2.26)

for arbitrary constants a and b.
Scaling: A scaling in one domain corresponds to a "compression" or "expansion" in the other domain. For a
nonzero scaling factor a

g(a t) ↔ (1/|a|) G(f/a)   (2.27)

For |a| < 1, the signal is expanded in the time domain and compressed in the frequency domain, and vice
versa for |a| > 1.
Shifting/modulation: These are important properties in applications such as filter design and interpolation. To
shift a signal by a constant time t0 to the right, its spectrum can be multiplied by a linear phase function,
an exponential with a negative exponent. Similarly, shifting the spectrum to the right by f0 corresponds to
modulating the signal in the time domain by an exponential with a positive exponent. For the continuous
case,

g(t − t0) ↔ G(f) exp{ −j 2π f t0 }   (2.28)

g(t) exp{ j 2π f0 t } ↔ G(f − f0)   (2.29)

For the discrete-time case, all shifts are circular, equivalent to addressing the shifted indices modulo N.
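The circular nature of the discrete-time shift can be verified numerically. In this NumPy sketch (our own illustration), multiplying the spectrum by the linear phase of (2.28) with an integer shift n0 reproduces a circular shift of the samples:

```python
import numpy as np

N, n0 = 16, 3                       # record length and integer shift
rng = np.random.default_rng(2)
g = rng.standard_normal(N)

G = np.fft.fft(g)
k = np.arange(N)
# Apply the linear phase of (2.28), then invert: the result is g circularly
# shifted n0 samples to the right (indices addressed modulo N).
g_shift = np.fft.ifft(G * np.exp(-2j * np.pi * k * n0 / N)).real
print(np.allclose(g_shift, np.roll(g, n0)))   # True
```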
Zero values: The values G(0) and g(0) represent, for each of the real and imaginary parts, the areas under
the curves g(t) and G(f), respectively. For the continuous case,

G(0) = ∫ g(t) dt   (2.30)

g(0) = ∫ G(f) df   (2.31)

For the discrete-time case,

G(0) = Σ_{n=0}^{N−1} g(n)   (2.32)

g(0) = (1/N) Σ_{k=0}^{N−1} G(k)   (2.33)

The first spectral sample is the sum of the time samples, and the first time sample is the average of the
frequency samples.
Symmetry: If g(t) is real, then the spectrum G(f) has conjugate symmetry; that is, the real part of G(f) is
symmetrical about zero frequency and the imaginary part is antisymmetrical

G(f) = G*(−f)   (2.34)

For complex signals, this symmetry does not hold. This means that positive frequencies can be distinguished
from negative frequencies, so that positive and negative frequencies can represent independent information in
complex signals.
Parseval's relation: For the continuous case, the signal and spectral energies are related by

∫ |g(t)|² dt = ∫ |G(f)|² df   (2.35)

For the discrete-time case,

Σ_{n=0}^{N−1} |g(n)|² = (1/N) Σ_{k=0}^{N−1} |G(k)|²   (2.36)

This shows that the DFT and its inverse are energy-conserving operations: the total energy in the
N-point sequence is equal to the average energy in the spectral coefficients.
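The discrete form (2.36) is easy to confirm numerically; a NumPy sketch (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
g = rng.standard_normal(64) + 1j * rng.standard_normal(64)
G = np.fft.fft(g)

# Total energy of the samples equals the average energy of the DFT coefficients.
energy_time = np.sum(np.abs(g) ** 2)
energy_freq = np.mean(np.abs(G) ** 2)        # (1/N) * sum |G(k)|^2
print(np.isclose(energy_time, energy_freq))  # True
```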
Convolution: Convolution in one domain corresponds to multiplication in the other domain. For the
discrete-time case, the convolution is cyclic, as illustrated in Section 2.4. This is the most important
property utilized in SAR processing.
[Figure 2.2: Fourier transform pairs involving data skew and rotation; panels include (a) the time domain, original, and (b) the spectrum, original.]
In this section, some common Fourier transform pairs are illustrated. In most cases, the relations hold when the
time and frequency arrays are switched.
rect(x) = 1 if |x| < 0.5, and 0 otherwise   (2.47)

sinc(x) = sin(πx) / (πx)   (2.48)
[Figure 2.3: Two Fourier transform pairs. (Top) rect(t/T) in the time domain, extending from −T/2 to T/2, transforms to T sinc(fT) in the frequency domain. (Bottom) sinc(t/T) in the time domain, with a 3-dB width of 0.886 T, transforms to T rect(fT) in the frequency domain, extending from −1/(2T) to +1/(2T).]
In other words, the time duration is approximately the reciprocal of the bandwidth.
An important parameter of a signal is its "time bandwidth product" (TBP). As its name implies, it is the
product of the 3-dB width in time and the 3-dB bandwidth of the signal. The TBP is approximately unity
for these sinc and rectangular functions.
For a sinc-like pulse, the region between −T < t < T is called the main lobe,4 and each region on either
side of the main lobe contains spurious energy referred to as sidelobes. The ratio of the maximum sidelobe
power (at t ≈ ±1.5 T in the figure) to that of the peak power (at t = 0) is called the peak sidelobe ratio
(discussed in Section 2.8). For the sinc function, the ratio is −13 dB. For a function resembling a
rectangular function in the frequency domain, but tapering off towards both ends, the peak sidelobe ratio
will be reduced, but at the expense of increasing the 3-dB width. This effect is discussed further in Chapter
3.
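The −13-dB figure can be checked by sampling the sinc finely and comparing the first sidelobe peak (near t ≈ 1.43 for a unit-spaced sinc) with the main-lobe peak; a NumPy sketch (our own illustration):

```python
import numpy as np

t = np.linspace(-4.0, 4.0, 80001)    # fine grid covering several sidelobes
s = np.sinc(t)                       # numpy's sinc(x) = sin(pi x) / (pi x)

peak = np.max(np.abs(s))                  # main-lobe peak, 1.0 at t = 0
sidelobe = np.max(np.abs(s[t > 1.0]))     # largest sidelobe, near t = 1.43
pslr_db = 20 * np.log10(sidelobe / peak)
print(round(pslr_db, 1))                  # -13.3
```

The exact value is about −13.26 dB, usually quoted as −13 dB.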
Owing to the time/frequency duality, the same definitions apply to the first Fourier transform pair with
time replaced by frequency and vice versa. The same TBP also applies to this Fourier transform pair.
exp{ j 2π f0 t } ↔ δ(f − f0)   (2.49)

where δ is the delta function. A monochromatic function (i.e., a complex sine wave) transforms into a single
spike (delta function) in the frequency domain. However, in the discrete-time case, this only happens when
f0 = k fs/N, where fs is the sampling rate, N is the DFT length, and k is an integer. This only occurs
when there are exactly k cycles per analysis record, so that the frequency, f0, coincides with the frequency
of one of the DFT output samples. If this is not the case, spectral "leakage" occurs, and all DFT output
points contain some "misplaced" energy. The bulk of the energy lies in the neighborhood of f0, but
significant amounts of energy are spread through other parts of the spectrum.
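The on-bin/off-bin distinction can be demonstrated directly. In this NumPy sketch (our own illustration), a tone with exactly 10 cycles per record excites a single DFT bin, while 10.5 cycles spread energy into every bin:

```python
import numpy as np

fs, N = 256.0, 256
t = np.arange(N) / fs     # a one-second record: bin spacing fs/N = 1 Hz

# On-bin tone: f0 = 10 Hz is exactly k*fs/N with k = 10.
X_on = np.fft.fft(np.exp(2j * np.pi * 10.0 * t))
print(int(np.sum(np.abs(X_on) > 1e-6)))    # 1

# Off-bin tone: f0 = 10.5 Hz falls between bins, so energy leaks everywhere.
X_off = np.fft.fft(np.exp(2j * np.pi * 10.5 * t))
print(int(np.sum(np.abs(X_off) > 1e-6)))   # 256
```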
Σ_n δ(t − n Ts) ↔ fs Σ_m δ(f − m fs)   (2.50)

where Ts is the sampling interval 1/fs. This states that a pulse train in one domain is transformed into a
pulse train in the other domain, where consecutive pulses in the frequency domain are separated by the
sampling frequency, fs.
[Figure 2.4: Cyclic convolution. (a) The signal array is not zero padded; (b) the signal array is zero padded.]
The circular convolution implemented this way has an interesting property that distinguishes it from a linear
convolution. When the inner circle is located as shown, or one position clockwise, the filter samples simultaneously
overlap both the beginning and end of the signal array. If the signal array is periodic, the correct answer is
obtained. Most likely it is not periodic, and a "corrupted" answer is obtained at these two output points. These
points are a result of "circular convolution wraparound error," and should be discarded from the answer array.
They are called "throwaway" points. In the above eight-point example, there are two "throwaway" points and six
"good" points.
For example, when performed with DFTs, the linear convolution in (2.11) now becomes

{1, 3, −1, 5, 2, 6, 4, −2} ⊛ {1, 2, 3} = {9, −1, 8, 12, 9, 25, 22, 24}   (2.51)

where the three-point filter is zero padded to eight points. The six good points are the same as before, but the
edge points in (2.11) have been combined to give (1, 5) + (8, −6) = (9, −1), which are the wraparound points
in the cyclic convolution.
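The example (2.51) can be reproduced with the DFT convolution property; a NumPy sketch (our own illustration):

```python
import numpy as np

x = np.array([1, 3, -1, 5, 2, 6, 4, -2], dtype=float)
h = np.array([1, 2, 3], dtype=float)
N = len(x)

# Cyclic convolution: zero pad the filter to N via fft(h, N), multiply, invert.
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)).real
print(np.round(y).astype(int).tolist())   # [9, -1, 8, 12, 9, 25, 22, 24]

# The first len(h) - 1 = 2 outputs are the "throwaway" wraparound points;
# the remaining 6 "good" points agree with the linear convolution.
lin = np.convolve(x, h)
print(np.allclose(y[2:], lin[2:N]))       # True
```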
Zero Padding
In order to avoid any corrupted answers, zero padding has to be used to extend both sequences to length N = n1
+ n2 − 1, where n1 and n2 are the original sequence lengths. Such zero padding is illustrated in Figure 2.4(b).
The DFT length N is usually selected to be a power of 2 for computing efficiency. As an example, assume that
the sequence lengths n1 and n2 are 3100 and 900 samples, respectively. A suitable FFT length is then 4096, and both
signals should be zero padded to this length. The final answer contains 4096 samples, of which n1 − n2 + 1 =
2201 are full convolutions and 2(n2 − 1) = 1798 are partial convolutions, and the remaining N − n1 − n2 + 1 =
97 points are extraneous zeros.
The partial-convolution answers are usually not needed. Let n1 be the longer sequence. If a DFT is to be
used, the minimum DFT length is n1, and only the shorter array n2 need be zero padded to the DFT length. If
an FFT is used, then the n1 array may have to be zero padded to a convenient length, and the n2 array zero
padded to the same length. In either the DFT or FFT case, n1 − n2 + 1 correct full-convolution answers are
obtained from the cyclic convolution, as illustrated in Figure 2.4(a).
In MATLAB, the cyclic convolution is implemented by

    y = ifft( fft(x1, N) .* fft(x2, N) );

where the FFT length N is chosen to obtain the desired amount of zero padding and/or an efficient FFT length.
The zero padding is built in, and occurs at the end of each array in the MATLAB implementation.
Digital signal processing only works accurately and efficiently if the signal has been sampled correctly. This
means that the sampling rate should be high enough to preserve signal fidelity, but not so high as to be
inefficient. Aliasing occurs if the signal is not sampled fast enough. The sampling requirements depend on the
signal type, for example, real versus complex signals, baseband versus nonbaseband signals. In this section, these
effects are discussed in the time and in the frequency domains.
In DSP operations, it is important to consider the signal spectrum, to examine whether the signal contains the
expected information, before and after the operation. Indeed, some signal processing operations are applied directly
to the signal spectrum. The spectrum of a discrete-time signal is different from that of a continuous-time signal.
There are four cases to consider, depending on whether the signal is continuous or sampled, and whether it has
an infinite or finite duration. These four cases are illustrated in Figure 2.5.
It is assumed that the infinite-duration, continuous-time signal under consideration is bandlimited, which
means that the energy is confined to a limited region, ±fb, of the frequency axis. This is illustrated by the first
row of Figure 2.5. In this case, the signal extent is infinite, and its spectrum is continuous in frequency. In the
case of the signal in the second row, the signal is truncated, so that only a finite record is observable. When the
Fourier transform is taken, it is found that the spectrum is discrete in frequency, and is commonly known as a
Fourier series. 5
In the case of the third and fourth rows, the signal is sampled. This has the effect of replicating the
spectrum along the frequency axis, as shown in the lower half of Figure 2.5. The spectrum is repeated at the
sampling frequency fs, but only one cycle, covering a frequency span of fs, is observable in the DFT output. The
property of the spectrum repetition follows directly from (2.22) or (2.50). In DSP operations, only the last case is
of interest, since all signals analyzed are represented by discrete sample points in finite-length records.
When an N-point DFT is performed on a sampled signal, the first sample of the output array represents
zero frequency, and the last sample represents frequency (N − 1) fs/N ≈ fs. For analytical and illustrative
purposes, the left and right halves of the DFT output array are often interchanged. This is done by the
fftshift function in MATLAB. After this shift, the first sample in the array corresponds to frequency −fs/2
and the last sample to frequency (N − 2) fs/(2N) ≈ fs/2. For a real signal, only the first half of the DFT
output array is needed to describe the signal, as the second half is redundant.
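The frequency interpretation of the DFT bins, before and after the half-swap, can be written out for a small case; a NumPy sketch (our own illustration), with fs = 1000 Hz and N = 8:

```python
import numpy as np

fs, N = 1000.0, 8

# Natural DFT ordering: bin k represents frequency k*fs/N, from 0 to (N-1)*fs/N.
f_bins = np.arange(N) * fs / N          # 0, 125, 250, ..., 875 Hz

# After fftshift, the upper half maps to negative frequencies:
# from -fs/2 up to (N-2)*fs/(2N).
f_shifted = np.fft.fftshift(np.fft.fftfreq(N)) * fs   # -500, -375, ..., 375 Hz
print(f_shifted[0], f_shifted[-1])      # -500.0 375.0
```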
[Figure 2.5: Time-domain signals classified by two properties (sampled? finite length?), with the corresponding spectra: (1) not sampled, infinite length; (2) not sampled, finite length; (3) sampled, infinite length; (4) sampled, finite length. The spectra extend over ±fb; sampling replicates the spectrum at intervals of fs.]
The way information is stored in a given type of signal affects its sampling requirements. There are different
cases of sampling to consider: On one hand, there are differences between the sampling of real versus complex
signals; and, on the other hand, between the sampling of baseband versus nonbaseband signals. These differences
are illustrated in Figure 2.6, where spectral properties of continuous-time signals are shown.
Figure 2.6: The spectral differences between real and complex signals, and
between baseband and nonbaseband signals.
In the physical world, all signals are real; for example, the voltage measured at an electrical terminal. However,
within the digital signal processor, it is convenient, and often more efficient, to operate on complex signals. An
example is provided by the DFT, which inherently operates on complex sequences.
In practice, a real-valued signal is converted to a complex-valued signal that contains the same information,
using a complex demodulation process (also called quadrature demodulation). In this operation, the signal is
mixed (multiplied) with a cosine wave of the appropriate frequency, and the result, after filtering the higher
frequency component, is called the "real channel." In parallel, the signal is mixed with a sine wave of the same
frequency, and the result is called the "imaginary channel." These two signals are then sampled with synchronous
samplers to obtain one complex sample. Alternatively, the real signal can be sampled and the demodulation
performed in the discrete-time domain [20]. Quadrature demodulation is discussed further in Appendix 4A. The
conversion to a complex signal can also be performed using a Hilbert transformation, which generates a signal that is
phase shifted by π/2.
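The mixing step can be sketched numerically. In this NumPy example (our own illustration; the lowpass filter is omitted so that both mixing products remain visible), a real tone 60 Hz above a 1000-Hz carrier is mixed with cos and −sin of the carrier; the resulting complex signal has components at +60 Hz (the wanted baseband term) and at −2060 Hz (the 2fc term that the lowpass filter would remove):

```python
import numpy as np

fs, fc, N = 8000.0, 1000.0, 8000
t = np.arange(N) / fs
x = np.cos(2 * np.pi * (fc + 60.0) * t)     # real signal, 60 Hz above the carrier

# Quadrature demodulation: real channel (cos mixer) and imaginary channel
# (-sin mixer); together they form x * exp(-j 2 pi fc t).
i_ch = x * np.cos(2 * np.pi * fc * t)
q_ch = -x * np.sin(2 * np.pi * fc * t)
z = i_ch + 1j * q_ch

Z = np.fft.fft(z)
f = np.fft.fftfreq(N) * fs
top2 = f[np.argsort(np.abs(Z))[-2:]]        # the two strongest spectral lines
print(sorted(int(round(v)) for v in top2))  # [-2060, 60]
```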
Figure 2.6 illustrates the property that a real signal has a spectrum that is conjugate symmetric about zero
frequency, while complex signals do not have any spectral symmetry requirements.
Signal Bandwidth
It is important to know the bandwidth of a signal to determine its sampling requirements. For real signals, only
the positive frequencies are considered. If f2 is the frequency of the highest-frequency signal component, and f1 is
the frequency of the lowest (positive) frequency signal component, the bandwidth is f2 − f1.
For complex signals, negative as well as positive frequencies are considered. In this case, f1 is the frequency
of the lowest-frequency signal component, taking negative frequencies into account. Again, the bandwidth is f2 − f1.
In Figure 2.6, the bandwidth is shown by the extent of the solid horizontal line. In all four cases, the
bandwidth is about 75 Hz. The bandwidth appears quite well defined, with the energy captured almost completely
within the passband. However, this is not always the case. Furthermore, in theory, a finite-length signal cannot be
bandlimited. To circumvent the problem, it can be said that energy below a certain level can be ignored, so that
a finite-length signal can be effectively bandlimited. Hence, only energy above a "significant" level is considered.
This level is often considered to be 3 dB below the peak energy, although 10 dB is also used in some
applications. In practice, an analog signal often has some unwanted high frequency components, such as white
noise. In order to keep the sampling rate as low as possible, an antialiasing filter is used before the sampler, to
restrict the bandwidth to the areas of the spectrum that are of interest in the application.
For a given sampling rate, the fundamental range is defined as the lowest set of frequencies that can hold
unambiguous information about the signal. If fs is the sampling frequency, the fundamental range is from zero to
fs/2 for real signals. For complex signals, the fundamental range is from −fs/2 to +fs/2, although it can also
be considered to be between zero and fs. In Figure 2.6, the fundamental range is shown by the extent of the
horizontal dashed line, assuming that the sampling frequency is 200 Hz for the real case, and 100 Hz for the
complex case.
For real signals, a baseband signal is defined as one in which the lowest positive-frequency component is small
compared to the bandwidth. The energy can extend down to zero frequency, for example, when a white noise
signal is passed through a lowpass filter. In this case, the sampling rate is selected so that the significant energy
of the continuous-time signal lies within the fundamental range.
An example of the spectrum of a real baseband signal is given in Figure 2.6(a). If there is a significant gap
between zero frequency and the lowest positive frequency component in the signal, it is said to be a nonbaseband
signal. Nonbaseband signals are common in communications systems, when an information signal is modulated on
a high-frequency carrier. An example of the spectrum of a real, nonbaseband signal is given in Figure 2.6(c).
For complex signals, a baseband signal is defined to be one in which the significant energy of the
continuous-time signal lies entirely within the fundamental range for a given sampling rate. An example is given
in Figure 2.6(b). Conversely, a nonbaseband complex signal is one in which the significant energy does not all lie
within the fundamental range. An example is given in Figure 2.6(d).
The Nyquist sampling theorem states that a real, bandlimited, baseband, continuous-time signal must be sampled
at a rate greater than twice its highest frequency component, in order for the samples to correctly represent the
information in the continuous signal. This minimum sampling rate is called the Nyquist sampling rate. Another
way of expressing the sampling theorem for real signals is that more than two samples per cycle must be taken
of any sine wave present in the signal.
To illustrate this sampling requirement, consider the 300-Hz continuous-time sine wave drawn as the solid line
in Figure 2.7. Three cycles of the sine wave are shown, covering a time period of 10 ms. First, let the signal be
sampled at a rate of fs = 800 Hz, as shown by the asterisks in the figure. This sampling rate corresponds to
2.667 samples per cycle, which is higher than the Nyquist rate. An infinite number of sine waves can be drawn
through the given sample points. If it is understood that the frequency of the continuous sine wave lies between
zero and fs/2, that is, within the fundamental range, only one unique sine wave can be drawn through the
samples, the one given by the solid line. In this case, the frequency of the original signal (300 Hz) is within the
fundamental frequency range, and the signal reconstructed from the samples has the correct frequency.
[Figure 2.7: A 300-Hz sine wave (solid line) over a total time of 10 ms, sampled at 800 Hz (asterisks) and at 400 Hz (diamonds); the dashed curve is the aliased sine wave reconstructed from the 400-Hz samples.]
Now, let the same continuous-time signal be sampled at 400 Hz (or 1.333 samples per cycle), as shown by
the diamonds in Figure 2.7. When the lowest frequency sine wave is drawn through the samples, it is seen to
have a wrong frequency, as shown by the dashed curve. In other words, a different sine function is reconstructed
from the samples. This change in observed frequency is called aliasing, whereby the frequency of the original
signal has apparently been misrepresented by the sampling process. When sampled at 400 Hz, the apparent
frequency is only 400 - 300 = 100 Hz. This change in apparent frequency is quantified by (2.53).
For a real signal that is offset from baseband (called a nonbaseband or bandpass signal), the bandwidth rather
than the highest frequency component can be used to define the sampling requirements. In this case, the Nyquist
sampling theorem states that the sampling rate must be greater than twice the bandwidth of the real,
nonbaseband signal. However, this only works correctly when the spectrum of the continuous-time signal lies wholly
within an aliasing boundary. This is illustrated in Figure 2.8, where a real signal with a bandwidth of 150 Hz is
considered.
In Figure 2.8, the panels on the left side show the magnitude spectra of a continuous-time signal, when the
center frequency is varied, creating a nonbaseband signal. The signal in Row 1 has a center frequency of 100 Hz,
so it is at baseband, and subsequent rows represent signals with the center frequency (annotated by the variable Fc)
increasing by 50 Hz every row. The signal is to be sampled at 400 Hz, so the aliasing boundaries, shown by the
vertical dashed lines, are fs/2 = 200 Hz apart.
In Rows 5 and 9, the spectrum of the continuous-time signal lies wholly within aliasing boundaries. Thus, when
the signal is sampled, the spectral energy from the various components in the signal does not interfere with other
components. This means that the spectrum of the sampled signal, shown in the right column, is undistorted. There
is a difference between the cases of Rows 5 and 9, however, as the positive and negative frequencies of the
spectrum in Row 5 have to be interchanged for the signal to be equivalent to Row 1. Rows 5 and 9 behave
differently because of the fan-fold nature of the aliasing rule for real signals, which is illustrated later in Figure
2.10.
If the rule that the signal energy must lie wholly within a pair of aliasing boundaries is obeyed, the sampled
signal retains all the information content of the continuous signal. In practice, the nonbaseband signal is often
bandshifted to baseband before it is sampled, to avoid the problem of spectral distortion when the signal does not
lie within the aliasing boundaries.
For a complex signal, the Nyquist sampling rate is equal to the bandwidth of the signal. This is compatible with
the rule for real signals, when one considers that each complex sample (a real/imaginary pair) carries twice the
amount of information as one real sample. The effect of the sampling on the spectrum of the complex signal can
be seen in Figure 2.9. Here, the original signal has a bandwidth of 300 Hz, and is sampled at fs = 400 Hz. In
each row of the figure, the center frequency is increased from zero to 400 Hz in 100-Hz steps.
[Figure 2.9: Spectra of a complex signal of 300-Hz bandwidth sampled at fs = 400 Hz; in each row the center frequency Fc increases from 0 to 400 Hz in 100-Hz steps.]
Figure 2.9 illustrates a number of unique properties of complex signals. First, the spectrum is not
symmetrical about zero frequency, as discussed in Section 2.3.3. Second, the fundamental frequency range now
extends from −fs/2 to +fs/2, which is delineated by the leftmost pair of dashed lines in the figure. The dashed
lines are the aliasing boundaries, which are twice as far apart as in the case of real signals.
resolution as fine as possible. This involves a tradeoff, as lowering the sidelobes is accompanied by a widening of
the resolution. The Kaiser window is often used, as it has a parameter that can adjust the degree of weighting,
and thereby the sidelobe/resolution tradeoff [21].
When used in the time domain, with length T, the Kaiser window is defined by

wk(t, T) = I0( β √(1 − (2t/T)²) ) / I0(β),   −T/2 < t < T/2   (2.54)

where β is the adjustable roll-off or smoothing coefficient, and I0(·) is the zeroth-order modified Bessel function of the first kind [19].
Similarly, a Kaiser window of length F in the frequency domain is defined by

wk(f, F) = I0( β √(1 − (2f/F)²) ) / I0(β),   −F/2 < f < F/2   (2.55)
In MATLAB, the statement kaiser(N, beta) creates an N-element column vector with a smoothing coefficient
of beta. Figure 2.11 shows the shape of the Kaiser window for seven values of β. A commonly used value of β
is 2.5, for which the weighting at the edge of the window is one-third that of the peak.
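The one-third edge value follows from (2.54): at t = ±T/2 the argument of the numerator Bessel function is zero, so the edge weight is 1/I0(β) ≈ 0.304 for β = 2.5. A NumPy check (our own illustration; np.kaiser parallels MATLAB's kaiser):

```python
import numpy as np

N, beta = 256, 2.5
w = np.kaiser(N, beta)            # Kaiser window, peak value 1 at the center

edge = 1.0 / np.i0(beta)          # predicted edge weight from (2.54)
print(round(float(edge), 3), bool(np.isclose(w[0], edge)))   # 0.304 True
```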
Figure 2.11: Shape of the Kaiser window for various values of {3.
The Fourier transform or inverse Fourier transform of a Kaiser window gives a sinc-like function. When β =
0, the window reduces to a rectangular one, and the Fourier transform is shown in Figure 2.3. When β > 0, the
tapering at the ends broadens the 3-dB width of the transformed function, and at the same time lowers the peak
sidelobe ratio (see the description associated with Figure 2.3), as shown in Figure 2.12. The broadening is defined
as the ratio of the 3-dB width with windowing to that without windowing. These effects are discussed further in
Section 2.8.
[Figure 2.12: Broadening and peak sidelobe ratio with different Kaiser windows; both quantities are plotted against the Kaiser window parameter β.]
Two other common windows are the Hanning and Hamming windows [14]. In the time and frequency
domains, they can be represented by

wh(t, T) = α + (1 − α) cos(2πt/T),   −T/2 < t < T/2   (2.56)
For a Hanning window, α is 1/2, which gives a cosine squared window. For a Hamming window, α is 0.54, and
the window is a raised cosine squared. The window (2.56) is also called the general cosine window, and the α
parameter can be used to control the level of smoothing [15]. The general cosine window has the convenient
property that its inverse Fourier transform can be obtained in a closed form. However, these windows do not
have quite as good a resolution/sidelobe tradeoff as the Kaiser window, so the Kaiser window is preferred for
many DSP applications. Windows used in filtering are discussed further in Section 3.3.4.
2.7 Interpolation
Interpolation plays a major role in digital signal and image processing, since it is often necessary to shift sample
locations. Consider a signal, g(x), that is sampled at discrete positions x = i, where x is a continuous independent
variable and i is an integer (the sample number). The independent variable x = t fs is used instead of time, t, in
this section, since spatial coordinates are often used in practical applications. Also, let gd(i) be the sampled signal;
therefore, gd(i) = g(x) when x = i. Very often the signal is required at some other, noninteger values of x. The
problem is then to "resample" gd(i) using interpolation.
As discussed in Section 2.3.3, the value of g(x) for noninteger values of x can be obtained by applying the
DFT shifting/modulation property, or by zero padding the spectrum. However, these methods are not flexible
enough for general use, because the former gives pixels that are shifted by the same amount from the original
signal, and the latter gives pixels that are separated by 1/M subsamples, where M is the expansion factor (which
must be an integer).
In this section, a flexible way to obtain g(x) for any value of x is explained. An interpolator is used, which
should be as efficient and accurate as possible. The interpolation can be implemented via a convolution

g(x) = Σ_i gd(i) h(x − i)   (2.57)

where h(x) is called the interpolator or interpolation kernel. In practice, the kernel is an even function of x, so
that h(i−x) = h(x−i). A sample point at i is then weighted by the kernel weight h(i−x). The interpolated value,
g(x), at the interpolation point, x, is the sum of the products of the kernel weights and the samples in gd(i)
under the kernel, or a weighted sum of the samples in the neighborhood of x.
2.7.1 Sinc Interpolation
This section discusses a special interpolation kernel, its accuracy, and its implementation. The kernel is based on a
sinc function, which follows from Shannon's sampling theorem.
If a signal, g(x), is sampled at discrete and evenly spaced intervals, the sampling theorem states that g(x) can
be reconstructed without error, if both of the following conditions are satisfied:
o The signal is bandlimited; that is, its highest frequency is finite. Measurements made of any physical system
are indeed bandlimited.
o The sampling satisfies the Nyquist criterion. For a real signal, more than two samples per cycle must be
taken of the signal's highest frequency component. For a complex signal, the sampling rate must be
higher than the signal's bandwidth.
Reconstruction Equation
The sampling theorem states that the original signal, with the above two conditions satisfied, can be reconstructed
by a convolution. For a baseband signal, the interpolation kernel is a sinc function

h(x) = sinc(x) = sin(πx) / (πx)   (2.58)
The interpolation operation is illustrated in Figure 2.14, using an example in which the value of g(x) at x =
11.7 is sought. The continuous-time sinc interpolation kernel is shown in Figure 2.14(a). The interpolation kernel
is placed along the samples with its center at x = 11.7, as shown in Figure 2.14(b). The kernel weights are then
computed at the sample points, as shown by the asterisks. Finally, a weighted sum of the data points is computed,
according to (2.59). The interpolated result is marked by a diamond in Figure 2.14(c). By interpolating
along a dense set of x positions, a smooth curve is obtained through the given data points, as shown by the
dotted curve in Figure 2.14(c).
An infinite number of points is required in the kernel to obtain one particular point, g(x), exactly. In
practice, an infinite number of points is not available; besides, an interpolation that uses a large number of points
would be very costly. Fortunately, the kernel weight decreases with distance from the interpolation point, x,
suggesting that the kernel can be truncated without much loss in accuracy. In the example of Figure 2.14, the
interpolation kernel is limited to eight points, and data samples 8 to 15 are used to compute g(11.7).
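The truncated-kernel interpolation can be sketched as follows (a NumPy helper of our own; the test signal is a slow sinusoid, and the 8-point unweighted kernel is accurate only to a few percent, which is one motivation for the windowing and normalization discussed next):

```python
import numpy as np

def sinc_interp(gd, x, nk=8):
    """Interpolate the sampled signal gd at fractional position x,
    using a truncated, normalized sinc kernel of nk points."""
    i0 = int(np.floor(x)) - nk // 2 + 1   # first sample under the kernel
    i = np.arange(i0, i0 + nk)            # samples 8..15 for x = 11.7, nk = 8
    w = np.sinc(x - i)                    # kernel weights h(x - i)
    w /= np.sum(w)                        # normalize the sum of weights to one
    return np.sum(w * gd[i])

n = np.arange(32)
gd = np.cos(2 * np.pi * 0.05 * n)         # slow sinusoid, well oversampled
err = abs(sinc_interp(gd, 11.7) - np.cos(2 * np.pi * 0.05 * 11.7))
print(err < 0.1)   # True: within a few percent for this short kernel
```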
When short kernels are used, it is easy to apply interpolation to images, using one-dimensional interpolator
kernels, as in (2.14). Image rotations and skews can be obtained this way.
Kernel Normalization
Once the kernel is truncated, normalization is needed to unify the gain of the interpolator. If unnormalized, the
sum of weights on the samples is no longer equal to one, and differs between interpolation points x. It is best to
keep this sum constant, equal to one in this case, for all interpolation points. Assuming a uniform-magnitude
signal with magnitude Mu, the interpolated value is then equal to the product of Mu and the sum
of the weights. By normalizing the sum to one, the interpolated value is also
[Figure 2.14: (a) The sinc interpolation kernel; (b) operation of the interpolator, with the kernel centered at x = 11.7 over original samples 8 to 15; (c) the interpolated result.]
Mu, which is the desired result, since the interpolation should keep the uniform magnitude unchanged. A
normalization procedure is applied to modify the interpolated result to give

g'(x) = g(x) / S   (2.60)

where S is the sum of the kernel weights at a particular interpolation point

S = Σ_i sinc(x − i)   (2.61)
where i is now limited to the kernel size. Sometimes, it is desirable to keep the signal power constant. In this
case, the normalization constant is

S = [ Σ_i sinc²(x − i) ]^(1/2)   (2.62)
As the kernel length increases, the normalization constant in either case becomes closer to one, and the
distinction between these two forms of normalization reduces.
The procedure in (2.60) is equivalent to normalizing the interpolation kernel by dividing each weight h(x-i)
by S, so that the sum (or the sum of the squares) of the normalized weights is one.
When a function with sharp edges is interpolated using a truncated sinc kernel, the results exhibit a "ringing"
called the Gibbs phenomenon. To reduce this ringing, the interpolation kernel is weighted by a tapering window,
such as the Kaiser window discussed in Section 2.6. A weighted kernel is shown in Figure 2.15. The kernel
normalization also takes this window into account.
[Figure 2.15: A 16-point sinc interpolation kernel, unweighted and weighted by a tapering window, plotted against time (samples).]
For implementation efficiency, the interpolation kernel is generated at fine subsample intervals and stored in a
table. In this way, the sine function, window, and normalization do not have to be performed at each
interpolation point; only the appropriate coefficients need to be extracted from the table, using the table entries
nearest to the interpolation shift. Accuracy is affected by both the kernel length and the number of subsamples
in the table. Limiting the table size introduces a geometric error of, at most, one-half the subsample interval.
An example of a table of interpolator coefficients is given in Table 2.1. The shift is quantized to one-sixteenth
of a sample. The coefficients that are used for a given shift are taken from the corresponding row in the table.
The eight coefficients in each row define a specific interpolator. The coefficients in Row 1 give a shift of
one-sixteenth of a sample, while the coefficients in Row 16 produce a one-sample shift.
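A table in this spirit can be generated as follows (a NumPy sketch of our own; unlike Table 2.1 it omits the tapering window):

```python
import numpy as np

def interp_table(nk=8, nsub=16):
    """Rows of nk sinc interpolation coefficients; row m (m = 1..nsub)
    applies a shift of m/nsub of a sample, normalized as in (2.60)."""
    i = np.arange(nk) - nk // 2 + 1        # tap positions -3 ... +4
    rows = []
    for m in range(1, nsub + 1):
        w = np.sinc(m / nsub - i)          # kernel weights for this shift
        rows.append(w / np.sum(w))         # normalize the sum of weights to one
    return np.array(rows)

tab = interp_table()
# The last row is a pure one-sample shift: all weight on the tap at +1.
print(bool(np.allclose(tab[-1], (np.arange(8) - 3) == 1)))   # True
```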
What is the interpolation accuracy? Unfortunately, this is not an easy question to answer, since the original value
of g(x) is not known. Special test signals can be defined to examine the accuracy question, but another way is to
examine the spectra of the interpolation kernels.
Recall again that the signal g(x) is a summation of sinusoidal functions. The spectrum of the interpolation
Figure 2.17: Spectrum of a complex, nonbaseband signal to be interpolated,
and the magnitude response of the ideal interpolator.
There are two methods to perform the interpolation on nonbaseband signals, and the choice is
application-dependent:
o Translate the signal to baseband. This means shifting the signal to the left by Δf, the frequency offset
from baseband. By the Fourier transform shift property (2.29), the signal in the time domain is multiplied
by a linear phase term, exp{-j 2π Δf t}. If desired, the data spectrum can be bandshifted back to its
original center frequency after interpolation.
o Translate the baseband filter to the signal's center frequency, as in Figure 2.17. In the figure, the shift of the
filter spectrum is to the right by Δf. By the Fourier transform shift property (2.29), the interpolation
kernel in the time domain is multiplied by a linear phase term, exp{j 2π Δf t}, which makes the
interpolator coefficients complex.
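The first method can be sketched as follows, assuming a pure complex tone as the test signal; the truncated, unweighted 16-point sinc kernel is a simplification of the weighted kernel described earlier.

```python
import cmath
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def interp_baseband_shift(samples, x, df, length=16):
    """Interpolate complex samples (unit sample spacing) at fractional x.

    First method in the text: bandshift to baseband by the offset df,
    apply a truncated sinc kernel, then shift back to the original
    center frequency. The unweighted kernel is a simplification."""
    base = [s * cmath.exp(-2j * math.pi * df * n) for n, s in enumerate(samples)]
    n0 = int(math.floor(x))
    acc = 0j
    for k in range(n0 - length // 2 + 1, n0 + length // 2 + 1):
        if 0 <= k < len(base):
            acc += base[k] * sinc(x - k)
    return acc * cmath.exp(2j * math.pi * df * x)

# a complex tone offset 0.05 cycles/sample above the center frequency df
df = 0.3
sig = [cmath.exp(2j * math.pi * (df + 0.05) * n) for n in range(64)]
est = interp_baseband_shift(sig, 31.37, df)
exact = cmath.exp(2j * math.pi * (df + 0.05) * 31.37)
```

With the tapering window and a tabulated kernel added, this becomes the weighted, table-driven interpolator of Section 2.7.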
As the SAR system is linear, it is natural to characterize its performance through its impulse response. The
impulse response is the output of the system when an impulse is applied at the input. In SAR systems, the
impulse response is obtained by measuring the system response to a single, isolated scatterer on the ground, such
as a corner reflector. Such a small discrete scatterer is called a point target. Many important SAR image quality
parameters can be estimated from measurements made on the point target response. As the SAR product is
two-dimensional, some image quality parameters should be measured in two directions. This section describes
typical SAR image quality parameters and shows how they are measured.
In a processed SAR image, the signal from a point target is a sinc-like function, as in Figure 2.3. The most
important quality parameters that can be measured from the point target are (1) the impulse response width
(IRW) that defines the SAR resolution, and (2) the peak or integrated sidelobe ratio (PSLR or ISLR) that
pertains to the image contrast. To measure these parameters, the point target response must be interpolated in
the vicinity of the peak, as the peak is represented by only one or two samples in the processed data. The usual
practice is to extract a window of 16 x 16 to 64 x 64 samples centered on the peak, and base the analysis on an
interpolation or expansion of this "chip" by a factor of 8 or 16. A sinc-like interpolator, implemented by DFTs
and zero padding, works well for this expansion of the data points.
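The DFT zero-padding expansion can be sketched in one dimension as follows (a two-dimensional version applies the same step along each axis). The tone test signal and the expansion factor of 8 are illustrative; for a baseband tone, this interpolation is exact.

```python
import cmath
import math

def dft(x, inverse=False):
    """Direct O(N^2) DFT / inverse DFT of a complex sequence."""
    N = len(x)
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * math.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def expand(chip, factor):
    """Upsample a baseband complex chip by zero padding its DFT spectrum.

    Zeros are inserted at the high-frequency middle of the DFT array,
    and the result is rescaled so the original sample values are kept."""
    N = len(chip)
    X = dft(chip)
    pad = [0.0] * (N * factor - N)
    # split the spectrum at the Nyquist cell; the zeros go in the middle
    Xp = X[:N // 2] + pad + X[N // 2:]
    return [v * factor for v in dft(Xp, inverse=True)]

# a periodic baseband tone: zero-padding interpolation reproduces it exactly
N, factor, cyc = 16, 8, 3
chip = [cmath.exp(2j * math.pi * cyc * n / N) for n in range(N)]
fine = expand(chip, factor)
```

Every factor-th output sample equals an original chip sample, and the samples in between are the band-limited interpolation of the chip.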
Figure 2.18: (a) Magnitude spectrum of the point target chip (horizontal frequency in cells); (b) the processed point target (horizontal position in samples).
The two-dimensional interpolation is illustrated in Figure 2.18. On the left side, the spectrum of an ideal
point target is shown. This is obtained by taking a two-dimensional DFT of a 32 x 32 chip centered on the
target. At this point, the image data is complex, as it is more accurate to interpolate the complex data rather
than the real data obtained after detection.8 The data is at baseband, so the spectrum is centered about zero
frequency (the top left cell), and the spectrum is quite flat, as no weighting is used in this example.
The point target is expanded by zero padding the spectrum, using a property of the Fourier transform
outlined in Section 2.3.3. The white dashed lines delineate the null in the spectrum, where the zero padding must
be done. After zero padding and taking a two-dimensional IDFT, an expanded version of the point target is
obtained. A contour plot of the central samples of the expanded point target energy is shown in Figure 2.18(b).
Figure 2.19: Horizontal and vertical profiles of magnitude in decibels and phase in radians, taken through the peak of the expanded point target (distance in samples).
The detailed structure of the main lobe and the sidelobes can be obtained from the expanded data. For
example, horizontal and vertical profiles of magnitude and phase can be taken through the peak, as shown in
Figure 2.19. The following image quality parameters can be measured from the expanded impulse response:
IRW: The impulse response width is defined as the width of the main lobe of the impulse response, measured 3
dB below the peak value. In SAR processing, this is referred to as the image resolution (see Chapter 3). The
counterpart in optical images is the instantaneous field of view (IFOV). The units of the IRW are samples,
although it can also be expressed in the units of the system, such as meters. In Figure 2.19, the pulse is
close to a sinc function because the target spectrum was nearly flat. In this case, the IRW is 0.886 times the
oversampling ratio, which is typically 1.1 to 1.4. When weighting is applied, the IRW is broadened by about
20%, resulting in an IRW of 1.2 to 1.5 samples.
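Measuring the IRW from an expanded profile can be sketched as follows, assuming an ideal sinc main lobe; the oversampling ratio of 1.25 and the fine sampling interval are assumed example values.

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def irw(profile, dx):
    """3-dB impulse response width of a magnitude profile sampled at dx.

    Finds the peak, then linearly interpolates the -3 dB (1/sqrt(2))
    amplitude crossings on each side of it."""
    peak = max(range(len(profile)), key=lambda i: profile[i])
    thr = profile[peak] / math.sqrt(2.0)
    i = peak
    while profile[i] > thr:          # walk left to the threshold crossing
        i -= 1
    left = i + (thr - profile[i]) / (profile[i + 1] - profile[i])
    j = peak
    while profile[j] > thr:          # walk right to the threshold crossing
        j += 1
    right = j - (thr - profile[j]) / (profile[j - 1] - profile[j])
    return (right - left) * dx

osr = 1.25              # assumed oversampling ratio, within the 1.1-1.4 range
B = 1.0 / osr           # signal bandwidth in cycles per sample
dx = 1.0 / 64           # fine spacing of the expanded profile
prof = [abs(sinc(B * (m - 2048) * dx)) for m in range(4096)]
width = irw(prof, dx)   # close to 0.886 * osr samples for an unweighted sinc
```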
PSLR: The peak sidelobe ratio is the ratio between the height of the largest sidelobe and the height of the
main lobe, expressed in decibels (dB). For a sinc function resulting from the Fourier transform of a uniform
spectrum, the ratio is -13 dB, as discussed in Section 2.3.4. In SAR systems, the PSLR should be smaller
than this, so that small targets are not masked by adjacent strong targets. A commonly acceptable level of
PSLR is approximately -20 dB. This level can be achieved by using a tapered window in the processing (see
Chapter 3).
One-dimensional ISLR: Even though the signal is two-dimensional, it is often useful to start by analyzing the
sidelobe power in one dimension. The integrated sidelobe ratio can be obtained by integrating the power
(magnitude squared) of the impulse response over suitable regions of Figure 2.19(a) or Figure 2.19(b). If the
"main lobe" power is Pmain and the total power Ptotal, the one-dimensional ISLR is then
where the numerator is the total power of the sidelobes. It remains to define the main lobe width, which
can be taken as α times the IRW, centered around the peak, where α is a predefined constant, usually
between 2 and 2.5. Sometimes, the null-to-null definition, as presented in Section 2.3.4, is used.
The one-dimensional ISLR should be kept small, so that dark areas in the image are not filled in by
spillover from adjacent strong areas. A typical one-dimensional ISLR is - 17 dB, with the main lobe defined
as being within the null-to-null limits.
Ideally, Ptotal requires measuring the total power even beyond the chip boundaries. Measuring beyond the
chip boundaries is not necessary, since the sidelobes usually decay quickly. There is also the danger of
including extraneous targets in practice, if the integral is extended too far.
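The one-dimensional ISLR computation can be sketched as follows; the unweighted sinc response, the chip extent of ±32 samples, and the 16-times expansion are assumed example values, and the main lobe is taken null-to-null (±1 sample) as mentioned in the text.

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def islr_1d(power, peak, main_halfwidth):
    """One-dimensional ISLR in dB: sidelobe power over main lobe power.

    power is a magnitude-squared profile, peak the peak index, and
    main_halfwidth the main lobe half-width in profile indices."""
    p_main = sum(p for i, p in enumerate(power)
                 if abs(i - peak) <= main_halfwidth)
    p_total = sum(power)
    return 10.0 * math.log10((p_total - p_main) / p_main)

# power profile of an unweighted sinc impulse response, expanded 16 times
dx = 1.0 / 16
power = [sinc((m - 512) * dx) ** 2 for m in range(1025)]
# main lobe taken null-to-null (+/- 1 sample); about -10 dB for a raw sinc
val = islr_1d(power, 512, 1.0 / dx)
```

The roughly -10 dB value for an unweighted sinc shows why weighting is needed to reach the typical -17 dB level quoted above.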
Two-dimensional ISLR: The two-dimensional ISLR is similar to the one-dimensional ISLR, with the main lobe
defined within a rectangle α times the respective IRWs in each direction. The sidelobe power is then the
power of the entire two-dimensional image chip minus the main lobe power. Instead of measuring this
parameter in the above manner, it can be approximated from the two one-dimensional ISLRs, by assuming
the two-dimensional impulse response to be separable in the two directions, that is,
P(t1, t2) = P1(t1, B1) P2(t2, B2)   (2.64)
where Pi(ti, Bi), i = 1, 2, is a sinc-like function with bandwidth Bi. For example, if there is no weighting,
Pi(t, B) = sinc(π B t) for a baseband signal. In addition, if it is assumed that ISLRx = ISLRy and the x
and y dimensions have little correlation, then it can be shown that
Peak position: This is the position of the peak of the impulse response in the two-dimensional sample space.
The measurement is used for geometric calibration (i.e., to determine the geometric registration accuracy of
the image, either absolute or relative). The final processed image is usually tagged with geolocation tick
marks, such as latitude and longitude. To measure absolute accuracy, identifiable image points must be
compared against a map or surveyed locations.
Relative accuracy concerns the distance between points within a scene, and the existence of a constant bias
and/or rotation of the image is of no concern. In other words, a square on the ground should appear as a
square in the image regardless of its orientation in the image, and the measured size should agree with that
on the map.
Signal peak magnitude: This is the magnitude of the target response at its peak. It is used for radiometric
calibration purposes (e.g., to measure the radar cross section of a target). Sometimes, integrated magnitude
is preferred over peak magnitude for this measurement.
Phase: The phase is an important parameter in applications such as interferometry and polarimetry. Therefore, it
is important that a SAR processor preserve the phase information that is contained in the received data.
The phase can also be utilized in some steps of SAR processing, such as Doppler centroid estimation and
autofocus (see Chapters 12 and 13).
The phase characteristics of a point target measured in one direction can differ from the characteristics in
the other direction. This is because the compressed target has a phase ramp across the peak of the impulse
response, whose slope is proportional to the center frequency of the data in that direction. In each
direction, the regularity of the phase is an indicator of how well the data are compressed.
The phase at the peak of the compressed target is a single-valued parameter, as it is measured at the
maximum point of the two-dimensional impulse response. The phase is dependent upon the target position,
as discussed in Chapter 5.
In Figure 2.18(a), the two-dimensional spectrum has edges that are parallel to the axes, and the zeros can
easily be added for the interpolation operation. In SAR processing, the spectrum is often rotated, as in Figure
2.20(a). In this case, more care has to be taken in placing the zeros: they are placed adjacent to the white
dashed lines in the figure.
In order to perform point target analysis correctly, the data must be properly sampled in both directions.
Usually, the complex data are adequately sampled during the various processing stages. However, if the image is
detected before the point targets are analyzed, errors may occur as a result of undersampling, because the
detection operation widens the signal bandwidth. To keep the detected data from being aliased, the data should
be oversampled by a factor of two in each direction before the detection operation. However, because of
oversampling of the data and spectral rolloff at higher frequencies, upsampling by a factor of 1.4 in each
direction is usually sufficient to represent the point target properties adequately in detected data.
2.9 Summary
Mathematical preliminaries are presented in this chapter, which are needed to understand SAR processing in later
chapters. The important properties of convolution and the Fourier transform are summarized. The two properties
frequently utilized in later chapters are: (1) convolution in one domain is equivalent to multiplication in the other
domain, and (2) a shift in one domain is equivalent to a linear phase multiply in the other domain. Convolution
can be efficiently implemented via DFTs, in which case it has circular properties.
The spectrum of a discrete-time signal is discussed. The spectrum repeats at the sampling frequency. This
repeat property must be considered in several stages of SAR processing.
The sinc interpolation kernel, which originates from Shannon's sampling theorem, is presented. For
interpolation efficiency, the kernel is usually tabulated at selected subsample spacings, such as one-sixteenth of a
sample. The sinc coefficient values are weighted by a tapering window to reduce ringing, and are normalized to
preserve the average magnitude or average power in the interpolated data.
In SAR processing, the impulse response is typically a sinc-like function. The important image quality
parameters that can be measured from the impulse response, such as the IRW and the PSLR, are summarized.
For the Fourier transform of a rectangular window, the PSLR is -13 dB. This value is reduced by applying a
tapering window before the Fourier transform. A commonly acceptable PSLR is -20 dB. Various image quality
parameters are explained, many of which can be measured using a point target. A small chip is extracted around
the target and is zoomed by zero padding its spectrum. In order to measure the parameters accurately, the
measurement should be performed on complex data. If detected data are required, the data should be oversampled
before detection.
Spaceborne SARs have been used in a number of interesting extraterrestrial missions, including the 2004 Cassini
mission to Saturn. The 1990-1992 NASA/JPL Magellan mission to Venus was particularly successful. The scientific
objectives were to study landforms and tectonics, impact processes, erosion, deposition, chemical processes, and
model the interior of Venus.
The image in Figure 2.21 shows the 72-km diameter Wheatley crater, located in Asteria Regio at 17° N,
267° E on Venus. The image exhibits a radar-bright ejecta pattern and a generally flat floor with some rough
raised areas and faulting.
References
[1] T. Kailath. Linear Systems. Prentice Hall, Upper Saddle River, NJ, 1980.
[2] A. Papoulis. Probability, Random Variables and Stochastic Processes. McGraw-Hill, New York, 1984.
[3] E. W. Kamen. Fundamentals of Signals and Systems Using MATLAB. Prentice Hall, Upper Saddle River,
NJ, 1996.
[4] A. V. Oppenheim and A. S. Willsky. Signals and Systems. Prentice Hall, Upper Saddle River, NJ, 2nd
edition, 1996.
[5] B. P. Lathi. Signal Processing and Linear Systems. Oxford University Press, New York, 1998.
[6] A. Papoulis. The Fourier Integral and Its Applications. McGraw-Hill College Division, New York, 1962.
[7] E. O. Brigham. The Fast Fourier Transform: An Introduction to Its Theory and Application. Prentice Hall,
Upper Saddle River, NJ, 1974.
[8] R. N. Bracewell. The Fourier Transform and Its Applications. WCB/McGraw-Hill, New York, 3rd edition,
1999.
[9] J. G. Proakis and M. Salehi. Communication Systems Engineering. Prentice Hall, Upper Saddle River, NJ,
1993.
[10] S. S. Haykin. Communications Systems. John Wiley & Sons, New York, 4th edition, 2000.
[11] L. B. Jackson. Digital Filters and Signal Processing. Kluwer Academic Publishers, Boston, MA, 3rd edition,
1996.
[12] J. G. Proakis and D. G. Manolakis. Digital Signal Processing: Principles, Algorithms and Applications.
Prentice Hall, Upper Saddle River, NJ, 3rd edition, 1996.
[13] S. K. Mitra. Digital Signal Processing: A Computer-Based Approach. McGraw-Hill College Division, New
York, 2nd edition, 2001.
[14] A. V. Oppenheim, R. W. Schafer, and J. R. Buck. Discrete-Time Signal Processing. Prentice Hall, Upper
Saddle River, NJ, 2nd edition, 1999.
[16] V. K. Ingle and J. G. Proakis. Digital Signal Processing Using MATLAB V.4. Brooks/Cole Publishing Co.,
Pacific Grove, CA, 1st edition, 2000.
[17] J. H. McClellan, R. W. Schafer, and M. A. Yoder. DSP First: A Multimedia Approach. Prentice Hall, Upper
Saddle River, NJ, 1998.
[18] E. C. Ifeachor and B. W. Jervis. Digital Signal Processing: A Practical Approach. Pearson Education,
Harlow, England, 2nd edition, 2002.
[19] E. Kreyszig. Advanced Engineering Mathematics. John Wiley & Sons, New York, 7th edition, 1993.
[20] W. G. Carrara, R. S. Goodman, and R. M. Majewski. Spotlight Synthetic Aperture Radar: Signal Processing
Algorithms. Artech House, Norwood, MA, 1995.
[21] J. F. Kaiser. Nonrecursive Digital Filter Design Using the I0-sinh Window Function. In 1974 Inter. Conf.
on Circuits and Systems, pp. 20-23, April 22-25, 1974. Reprinted in "Selected Papers in Digital Signal
Processing, II", IEEE Press, New York, 1976.
Chapter 3
3.1 Introduction
Pulse compression is a signal processing technique used in radar, sonar, seismic, and other "probing" systems. In
this context, a "probing" system refers to a system in which a signal is transmitted, reflected from a remote ob-
ject, and received, in order to measure parameters of the object. A similar technique is used in "signaling"
systems, such as cell phones and the global positioning system. Pulse compression is a type of spread spectrum
method, designed to minimize peak power, maximize signal-to-noise ratio, and obtain fine resolution of the sensed
object (e.g., to achieve sensitive target detection or good image quality). Pulse compression, as described
in this chapter, is achieved with a matched filter. The discussion of matched filtering is based on linear FM
signals, which are of fundamental importance in the theory and applications of SAR systems.
Section 3.2 introduces the linear frequency-modulated (FM) signal and examines its properties. The signal is
examined in both the time and frequency domains, especially its frequency versus time characteristics. This leads
into matched filtering in Section 3.3, the tool with which high resolution is obtained in the signal processor.
Derivations of matched filtering for baseband and nonbaseband signals, in both the time and frequency domains,
are presented in detail. The discussion is based upon an exactly linear FM signal, but the theory developed can
be extended to a signal that is approximately linear FM as well. Section 3.4 discusses various ways to implement
the matched filter. In practice, errors in generating the matched filter will inevitably occur. The mismatch is often
quantified by a parameter called the quadratic phase error. Section 3.5 addresses this error and its effects on the
compression process.
Linear FM signals are very prominent in SAR systems, where the signal's instantaneous frequency is a linear
function of time. They are used in the transmitted signal to achieve a uniformly filled bandwidth, and they arise
in the received signal from sensor motion. This section discusses the characteristics of linear FM signals in the
time and frequency domains.
In the time domain, an ideal linear FM signal or pulse has a duration of T seconds with a constant amplitude, a
center frequency fcen Hz, and a characteristic phase φ(t), which varies with time in a specific manner.
Physical "probing" systems often transmit pulses of this form. For linear frequency modulation, the phase is a
quadratic function of time. When fcen is set to zero, the complex form of the signal is
s(t) = rect(t/T) exp{ j π K t² }   (3.1)
where t is the time variable in seconds, and K is the linear FM rate in hertz per second. An example of a
complex linear FM signal, with fcen = 0, is shown in Figure 3.1. Each of the real and imaginary parts oscillates
as a function of time, and the oscillation frequency increases away from the time origin.
The reasons that the signal is called linear FM, and K is called the linear FM rate, can be seen from the
frequency versus time characteristics in Figure 3.1(d). The phase of the pulse is given by the argument of the
exponential in (3.1)
φ(t) = π K t²   (3.2)
expressed in radians. This is a quadratic function of time, as shown in Figure 3.1(c). The instantaneous frequency
is the derivative with respect to time
f = (1/2π) dφ(t)/dt = (1/2π) d(π K t²)/dt = K t   (3.3)
expressed in Hz. This means that the frequency is a linear function of time t, with the slope K expressed in
hertz per second. The bandwidth is defined as the range of frequencies spanned by the significant energy of the
chirp, or the frequency excursion of the signal (for real signals, only positive frequencies are considered in this
definition). From Figure 3.1(d), the bandwidth is the product of the chirp slope and the chirp duration, and is
BW = |K| T   (3.4)
expressed in hertz. As shown in Section 3.3.2, the bandwidth governs the obtainable resolution.
Another important signal parameter is the time bandwidth product (TBP) that has been introduced in
Section 2.3.4. It is the product of the bandwidth |K|T and the chirp duration T, and is a dimensionless parameter
given by
TBP = |K| T × T = |K| T²   (3.5)
The TBP of the baseband linear FM signal in (3.1) can be measured by counting the number of zero
crossings of the real or imaginary part of the time-domain signal. When fcen = 0, the number of zero crossings
in the signal is approximately equal to one-half the TBP, or the number of complete cycles is approximately
equal to TBP/4. In the example of Figure 3.1, T = 7.24 µs and the bandwidth is 5.80 MHz, which gives a
TBP of 42. In Figure 3.1(a, b), the total number of zero crossings is 21. If the center frequency of the signal is
not zero, the number of zero crossings will be greater than TBP/2.
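The zero-crossing rule can be checked numerically with the Figure 3.1 parameters; counting sign changes of the real part of the pulse should give approximately TBP/2 crossings. A minimal sketch:

```python
import math

# chirp parameters from the Figure 3.1 example
T = 7.24e-6                 # pulse duration, s
BW = 5.80e6                 # bandwidth, Hz
K = BW / T                  # linear FM rate, Hz/s
tbp = BW * T                # time-bandwidth product, approximately 42

fs = 5.0 * BW               # five times oversampling, as in the figure
N = int(T * fs)
# real part of the baseband chirp, cos(pi K t^2), for t in [-T/2, T/2)
real = [math.cos(math.pi * K * (n / fs - T / 2.0) ** 2) for n in range(N)]
# count sign changes between adjacent samples
crossings = sum(1 for a, b in zip(real, real[1:]) if a * b < 0)
```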
Figure 3.1: The phase and frequency of a linear FM pulse. The signal has
been oversampled by a factor of 5 in order to see the amplitude structure
clearly in (a) and (b).
In summary, a linear FM signal has a quadratic phase, so that its frequency is a linear function of time. The
frequency slope is the linear FM rate. A linear FM signal is frequently called a "chirp," in analogy with a bird's
call. When the slope is positive, the signal is called an up chirp, the case shown in Figure 3.1. Similarly, when
the slope is negative, the signal is called a down chirp. In (3.1), the direction of the chirp is embedded in the
sign of K . Whether it is an up chirp or a down chirp will not affect the subsequent analysis.
The analytical form of the spectrum of a linear FM signal is often needed for SAR system analysis. The purpose
of this section is to derive an approximate analytical expression of the Fourier transform of the signal. The exact
derivation is not straightforward, and a convenient approximate expression can be obtained by the Principle of
Stationary Phase (POSP) [1-5].
Let g(t) be an FM signal, whose modulation is either linear, like (3.1), or approximately linear
g(t) = w(t) exp{ j φ(t) }   (3.6)
where w(t) is the real-valued envelope and φ(t) is the phase describing the signal modulation. It is assumed that
the envelope varies very slowly with time, compared with the variation of the phase.
The spectrum of this signal is then the Fourier transform of g(t)
G(f) = ∫ w(t) exp{ j θ(t) } dt   (3.7)
where the phase due to the Fourier transform, -2π f t, has been absorbed into a single phase term
θ(t) = φ(t) - 2π f t   (3.8)
The phase in the integrand contains quadratic and possibly higher order terms. Even for the simple case of a
linear FM signal, in which the phase is quadratic, an analytical form of the integral is quite difficult to derive by
conventional means.
The POSP can be briefly explained as follows. Figure 3.1 shows that the phase φ(t) of the signal is
"stationary" at some time, ts, when the derivative, dφ(t)/dt, is zero. In this case, the stationary point occurs at t
= 0. Around the neighborhood of this point, the phase is slowly varying, as are the amplitudes of the real and
imaginary parts. At other times, the phase varies quite rapidly. A similar observation is true for the phase θ(t) in
(3.8), but now the stationary point varies with f.
Assuming that w(t) is slowly varying, compared to the phase function, the integral (3.7) has the following
property. Where the phase O(t) is rapidly varying, the envelope w(t) is almost constant over one complete phase
cycle. Over this interval, the contribution to the integral (both real and imaginary parts) is almost zero, because
the positive and negative parts of the phase cycle cancel each other. Therefore, the contribution to the integral
lies mainly around the stationary phase point. A Taylor series expansion can be used on the integrand about this
point, resulting in an approximate analytical solution of G(f) in (3.7).
The result is stated here without going through the derivation, which is detailed in [4, 5]. The spectrum of
the signal is approximately given by
G(f) ≈ C1 W(f) exp{ j ( θ(f) ± π/4 ) }   (3.9)
where:
o C1 is a constant
C1 = sqrt( 2π / |φ''(ts)| )   (3.10)
which is usually ignored. The double prime denotes the second derivative with respect to t. Note that φ''(ts) =
θ''(ts).
o The expression t(f) to use in (3.11) and (3.12) is given by the time versus frequency relationship of the
signal. This relationship can be derived by finding the derivative of the complete integrand phase (3.8) at
the stationary point (i.e., when the derivative is zero)
dθ(t)/dt = 0   at t = ts   (3.13)
o The sign of π/4 is given by the sign of φ''(ts). As in the case of C1, this constant phase component can be
ignored without affecting most analyses.
For the linear FM pulse (3.1), the spectrum integral (3.7) becomes
G(f) = ∫ rect(t/T) exp{ j ( π K t² - 2π f t ) } dt   (3.14)
where the envelope w(t) is the "rect" function and the integrand phase is
θ(t) = π K t² - 2π f t   (3.15)
Setting the derivative of (3.15) to zero gives the time versus frequency relationship at the stationary point
f = K t   and   t = f / K   (3.16)
for the linear FM signal. This agrees with the frequency relation (3.3) found directly from the time domain
analysis.
The phase in the frequency domain is
θ(f) = -π f² / K   (3.17)
where the rect function defines the center frequency to be zero. Substituting (3.17) and (3.18) into (3.9), and
ignoring the constant C1 and the phase ±π/4, the integral (3.14) for the spectrum of the linear FM signal
becomes
G(f) ≈ rect( f / (K T) ) exp{ -j π f² / K }   (3.19)
Figure 3.2 illustrates the spectrum of the linear FM pulse, calculated using a DFT rather than using the POSP.
The pulse has the same structure as in Figure 3.1, except that the TBP has been raised to 720 to make the
POSP accurate, and the five times oversampling has been lowered to 1.25. An "fftshift" (a left/right half swap)
has been applied after the DFT in MATLAB, so that zero frequency lies in the center of the plotted array.
Figure 3.2: Spectrum of the linear FM pulse: (a) real part, (b) imaginary part, (c) magnitude, and (d) phase, plotted against normalized frequency.
o The real and imaginary parts of the spectrum, shown in Figure 3.2(a, b), have a similar linear FM structure
as the real and imaginary parts of the time domain signal in Figure 3.1(a, b). The differences are a π/4
phase change, and a change in the sign of the FM rate, compared to the time domain. Note that only the
central 20% of the spectrum is shown for clarity.
o The envelope in Figure 3.2(c) is approximately the same as the rectangular envelope of the time domain
signal of Figure 3.1. In other words, the envelope is approximately preserved between the two domains. The
dip in the magnitude spectrum is a result of the 1.25 times oversampling.
o The phase in Figure 3.2(d) is approximately quadratic in the frequency domain, as it is in the time domain.
This means that the frequency versus time relationship is f = K t, showing that there is a linear, one-to-one
relationship between time and frequency in linear FM signals.
The POSP is only an approximation. However, it is sufficiently accurate for analysis purposes if the FM
signal exhibits enough cycles. The number of cycles in the signal is given by TBP /4, and the POSP is reasonably
accurate if the TBP is greater than 100. The accuracy of the POSP is illustrated in Figure 3.3, where the solid
lines represent the DFT magnitude and unwrapped phase, and the dashed lines represent the POSP values for
various values of TBP.
Figure 3.3: Variation in DFT spectra with different TBPs.
The POSP also can be applied to an approximately linear FM signal in which the phase has higher order
components than quadratic. This property will be used in dealing with two-dimensional signals in subsequent
chapters.
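As a numerical check on the POSP envelope, nearly all of the energy of a large-TBP chirp should fall within the band |f| ≤ |K|T/2 predicted by the rect envelope of the POSP spectrum. A sketch using a direct O(N²) DFT in normalized units; the TBP of 200 and oversampling ratio of 1.25 are assumed example values:

```python
import cmath
import math

def dft(x):
    """Direct O(N^2) DFT of a complex sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

tbp, osr = 200.0, 1.25     # assumed example values (TBP large enough for POSP)
T = 1.0                    # pulse duration in normalized units
K = tbp / T ** 2           # FM rate so that |K| T * T = TBP
fs = osr * K * T           # sampling rate, oversampled above the bandwidth
N = int(fs * T)
x = [cmath.exp(1j * math.pi * K * (n / fs - T / 2) ** 2) for n in range(N)]
X = dft(x)

# in DFT cell order, the band |f| <= |K|T/2 occupies the first and last
# `band` cells (positive and wrapped negative frequencies)
band = int(N / osr / 2)
inband = sum(abs(X[k]) ** 2 for k in range(N) if k <= band or k >= N - band)
frac = inband / sum(abs(v) ** 2 for v in X)
```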
The sampling of real and complex signals is discussed in Section 2.5. For the complex linear FM signals discussed
in this section, the Nyquist sampling theorem states that the sampling rate must be greater than the bandwidth
of the signal.
For a complex, linear FM signal at baseband, the highest frequency is |K|T/2, which is one-half the
bandwidth. Therefore, the minimum complex sampling rate fs must be greater than the bandwidth |K|T. Another
way to examine the adequacy of the sampling rate is to look for a gap in the spectrum of the sampled signal. If
there is no gap, the sampling rate is too small. If the gap is greater than 20% of the sampling rate, the sampling
rate is larger than optimum for efficiency.
To measure the relative size of the energy gap, it is useful to define the oversampling factor or ratio as
αos = fs / ( |K| T )   (3.20)
Figure 3.4 illustrates the width of the energy gap for different oversampling ratios. The oversampling ratio is
decreased by 0.2 in each row, and the width of the energy gap decreases accordingly. Unlike Figure 3.2, the
frequency values are plotted in the order of the DFT output, to show the gap more clearly. In the top row, αos
= 1.4, and the gap is quite wide. In this case, the sampling rate is higher than necessary, and extra storage and
more computation is needed to process the signal. Another point of view is that the large gap represents unused
space in the spectrum.
In the second row of the figure, αos = 1.2, and a small but clear gap exists in the spectrum. This is a
reasonable value of oversampling, representing a sound efficiency/accuracy tradeoff. In the third row, αos = 1.0,
and the gap disappears. While, strictly speaking, no aliasing should occur in this case, a small amount of aliasing
does occur because of the leakage of frequencies beyond the nominal bandwidth limits.
In the bottom row, αos = 0.8, and serious aliasing results. As shown in the bottom left panel, the frequency
wraps around before -4 µs and after +4 µs. Energy from the lower and upper parts of the signal spectrum are
mixing together, and cannot be separated. This is seen in the "extra" energy in frequency cells 50 to 80 in the
bottom right spectrum.
Usually, the oversampling factor is chosen to be between 1.1 and 1.4, to obtain efficient use of the data
samples, but to still have an adequate gap in the spectrum. When a distinct gap exists, accurate signal processing
operations can be performed on the samples. For example, the continuous signal can be accurately reconstructed,
as discussed in Section 2.7.
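The dependence of the gap width on the oversampling ratio can be sketched by counting low-magnitude spectrum cells; the TBP of 100 and the 20% magnitude threshold are assumed example values, not figures from the text.

```python
import cmath
import math

def dft(x):
    """Direct O(N^2) DFT of a complex sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def gap_cells(osr, tbp=100.0, thresh=0.2):
    """Count DFT cells below thresh * peak magnitude for a baseband chirp
    sampled at oversampling ratio osr; these cells form the energy gap."""
    T = 1.0
    K = tbp / T ** 2
    fs = osr * K * T
    N = int(fs * T)
    x = [cmath.exp(1j * math.pi * K * (n / fs - T / 2) ** 2) for n in range(N)]
    mags = [abs(v) for v in dft(x)]
    level = thresh * max(mags)
    return sum(1 for m in mags if m < level)

wide = gap_cells(1.4)      # a clear gap, as in the top row of Figure 3.4
none = gap_cells(1.0)      # the gap closes at critical sampling
```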
Figure 3.4: Energy gap in the spectrum due to oversampling by the ratio αos (time in µs on the left; magnitude of spectrum against frequency in cells on the right).
It has been shown that the DFT treats time and frequency domain arrays as periodic and circular. The end of
each array is assumed to be "connected" to its beginning. However, it is sometimes important to know where the
real end of the array lies (e.g., where to place the zeros when zero padding).
In the case of the time domain array, the "ends" are at the first and last sample of the input to the DFT,
as the input has been obtained by truncating a longer signal array. While the DFT assumes the input signal is
periodic with a period N, it is usually not periodic, and a "discontinuity" exists at these end points of the
truncated array. If zeros are to be added to the time array in order to obtain a convenient transform length, to
equalize the signal and filter array lengths, or to increase the sample spacing in the spectrum, then the zeros
must be added at one of the two ends of the array. Note that the phase of the spectrum will depend upon
which end is chosen.
In the case of the frequency domain array, the "ends" of the spectrum are defined by the gaps discussed in
Section 3.2.3. For example, in the top row of Figure 3.4, the gap in the spectrum is centered on cell 65. In this
case, the main part of the spectrum is considered to begin at cell 90, and end at cell 40, with frequency
increasing from left to right. If the frequency array is to be interpreted (and processed) as one contiguous
spectrum, the left and right halves of the array can be interchanged when the gap is near the center of the DFT
output array. After the interchange, the gap is divided between both ends. This can be conveniently done in
MATLAB with the fftshift function.
When the time signal is real-valued, the gap in the spectrum is always centered on the midpoint of the DFT
output array, corresponding to frequency fs/2. The spectrum is conjugate symmetric about zero frequency, the
first sample in the DFT output array. However, if the time sequence is complex, the spectrum is not necessarily
symmetric, and the gap may lie anywhere in the spectrum. In the examples in Figure 3.4, the signals are
complex, but the gap still lies in the middle of the spectrum, because its center frequency is zero. When the
signal center frequency is not zero, the gap lies elsewhere, as shown in Figure 2.9. Sometimes, the location of the
gap is not known in advance and has to be estimated.
If signal processing operations are to be performed on the spectrum, the spectral samples must be considered
as continuous, starting from the end of the gap, proceeding across the end of the DFT array (if needed), and
ending at the beginning of the gap. An example is the design and application of a frequency domain filter, as
shown in Chapter 6. Another example is when the spectrum is to be zero padded to increase the time domain
sample spacing, when the zeros must be added to the middle of the gap.
Returning to the time domain, it was stated that the time discontinuities are at either end of the array. But
this is true for the input data only. This may not be true if the data are processed in the frequency domain, and
an inverse Fourier transform is performed to bring the data back to the time domain. In this case, the time axis
is circular in the final array. The problem is then how to determine the point of time discontinuity. This problem
will be encountered again in later chapters. As a very simple example, assume the following sequence of
operations: (1) Fourier transform the input data, (2) multiply by a linear phase ramp in the frequency domain,
and (3) take the inverse Fourier transform. These operations create a circular shift [see (2.28)], where the amount
of shift depends upon the slope of the phase ramp.
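This three-step sequence can be verified numerically. The sketch below uses a deliberately naive O(N²) DFT rather than an FFT so that it is self-contained; the array values and shift amount are arbitrary illustrations:

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]
N, m = len(x), 3   # m sets the slope of the linear phase ramp

# Steps (1)-(3): forward transform, linear phase ramp, inverse transform.
X = dft(x)
ramp = [cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N)]
y = idft([Xk * r for Xk, r in zip(X, ramp)])

# y is x circularly shifted right by m samples: [0, 0, 0, 1, 2, 3, 4, 0]
```

Note that the shift wraps around the array ends, which is exactly the circular time axis discussed above.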
In "probing" systems, a pulse of energy is used to measure parameters such as distance, speed, shape, or
reflectivity of a remote object. The received pulse must be strong enough and have a fine enough resolution for
the measurement to be useful. If the transmitted pulse has a duration of T , the "resolving power" before
compression is simply
ρ' = T   (3.21)
since each target occupies a time interval of T in the echo data. Two targets separated by this time in the echo
will not be exposed by the same pulse at any instant in time. Therefore, to obtain a fine resolution, a short pulse
must be used, or at least obtained by signal processing.
However, the SNR of the received signal must be high enough to obtain the required object parameters
accurately, a quality often in conflict with resolution. The SNR can be increased by raising the average transmitted
power. This can be done either by raising the peak power, or by increasing the length or duration of the transmit
event. As physical limitations often constrain the peak power, it is common to increase the length of the
transmitted signal, frequently to a length much greater than the length of a pulse having the desired resolution.
This is done by transmitting an "expanded" pulse and later compressing it to the desired resolution in the signal
processor. This technique is known as pulse compression.
There is an interesting analogy between pulse compression in radar systems and spread spectrum in
communications systems [6]. The objective of pulse compression is to obtain a fine resolution in time from the
received data, thus giving a well-focused radar image, while the objective of spread spectrum in communications is
to transmit a message in a noisy environment. In pulse compression, the signal is expanded and compressed in
the time domain, and the transmitted signal has a longer duration than the ultimate resolution. In spread
spectrum, the signal is spread and despread in the frequency domain, and the transmitted signal has a wider
bandwidth than the information in the message.
Pulse compression can be conveniently achieved with linear FM signals. To see how this is achieved, consider
the following points.
1. The shortest physically realizable signal has a TBP of approximately unity, as discussed in Section 2.3.4. In
other words, its duration is the inverse of the bandwidth. To synthesize a short duration (fine resolution)
pulse, a large bandwidth must be transmitted, received, and processed.
2. The shortest pulse of a given bandwidth is approximated by a sinc function. It has a very compact
distribution of energy in the time domain.
3. From Figure 2.3, it is noted that a sinc function is obtained by the inverse Fourier transform of a
rectangular function. Although not shown in the figure, the phase of the rectangular function must
correspond to that of a sine wave (i.e., it must have a linear phase), and the frequency of the sine wave
dictates the time position of the sinc function.
Thus, to achieve good pulse compression, the received signal should be processed into a signal with a
spectrum whose magnitude is reasonably flat, and whose phase has constant and linear terms only. Section 3.2
has shown how a linear FM signal has a nearly flat spectrum. It is about as near to being flat as can be
obtained with a finite length signal. Its flatness is obtained by a uniform sweep of frequencies, from the beginning
to the end of the signal band.
The uniform sweep of frequencies is obtained in the linear FM signal by having a quadratic component in its
time domain phase. The spectrum of the linear FM signal, as derived by the POSP, has a quadratic phase as
well. To give the spectrum a flat shape with a linear phase in this case, the spectrum can be multiplied by a
signal with a similar spectrum, but with a phase given by the conjugate of the quadratic component of the
signal's phase. The resulting signal phase is then linear. The inverse Fourier transform will then give the desired
sinc function.
This is the essence of pulse compression, as applied in the frequency domain. The spectrum of the signal is
multiplied by a frequency domain filter that has the conjugate quadratic phase property. Pulse compression is
also known as "matched filtering." This terminology arises from the fact that the filter is "matched" to the
expected phase of the received signal.
Another way of looking at matched filtering is rooted in communications practice. If a desired signal is
buried in noise within a received signal vector, its presence and the time of its occurrence can be found by
cross-correlating the received signal with a conjugate replica of the desired signal. A spike in the form of a sinc
function appears in the output whenever the signal is found in the received data. In this context, the correlation
"filter" is "matched" to the expected phase properties of the signal. In general, if sr(t) is the received signal and
g(t) is the signal replica, the matched filtering operation can be implemented by a correlation
sout(t) = ∫ sr(u) g*(u − t) du   (3.22)
Note the presence of the complex conjugate of g(t), which is the usual convention for the definition of correlation.
If the filter is applied with a convolution rather than a correlation, the filter kernel is the time-reversed
complex conjugate of the replica
h(t) = g*(−t)   (3.23)
The distinction between correlation and convolution has been made in Section 2.2. In this book, a "matched
filter" means a "convolution filter," and the convolution integral is written as
sout(t) = sr(t) ⊗ h(t) = ∫ sr(u) h(t − u) du   (3.24)
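The equivalence between correlating with the conjugate replica and convolving with its time-reversed conjugate can be checked on a toy example. The four-sample "replica" below is an arbitrary illustration (far shorter than any practical chirp), and the function names are ours:

```python
import cmath

def conv(a, b):
    """Full linear convolution: out[n] = sum_k a[k] * b[n - k]."""
    out = [0j] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def matched_filter(a, g):
    """Apply the matched filter for replica g by convolving a with the
    time-reversed complex conjugate of g, which is the same as
    correlating a with the conjugate replica."""
    h = [v.conjugate() for v in reversed(g)]
    return conv(a, h)

# A 4-sample quadratic-phase "chirp" replica, and an echo delayed by 2 samples:
g = [cmath.exp(1j * cmath.pi * 0.25 * n * n) for n in range(4)]
sr = [0, 0] + g + [0, 0]

out = matched_filter(sr, g)
peak = max(range(len(out)), key=lambda n: abs(out[n]))
# The peak lands at index len(g) - 1 + delay = 5, with magnitude sum(|g|^2) = 4.
```

The peak location marks the delay of the replica within the received data, which is the detection property described above.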
A quantitative measure of the degree of pulse compression is the compression ratio. It is the length of the
original signal divided by the 3-dB width of the compressed pulse, and is approximately equal to the TBP of the
uncompressed pulse. A compression ratio on the order of hundreds or even thousands can be achieved using a
linear FM pulse. The characteristics of the linear FM pulse in both the time and frequency domains are
described in Section 3.2. The purpose of this section is to derive an expression for the matched filter output via
the time domain, first for baseband signals and then for nonbaseband signals.
Baseband Signals
Let the transmitted signal s(t) be given by (3.1). If the echo is received a time t0 later, the target's echo can be
expressed as
sr(t) = rect{(t − t0)/T} exp{+jπK(t − t0)²}   (3.25)
The matched filter is the time-reversed, complex conjugate of s(t), with t0 set to zero
h(t) = s*(−t) = rect(t/T) exp{−jπKt²}   (3.26)
and the matched filter output is given by the convolution (3.24), which is solved in Appendix 3A. The
compressed output is approximately given by the sinc function
sout(t) ≈ T sinc{KT(t − t0)}   (3.27)
where the approximation is accurate for a large TBP. An expanded view of the compression example of Appendix
3A is shown in Figure 3.5, where the TBP = 100, t0 has been set to zero, and the result is oversampled by a
factor of eight for clarity.
In (3.25) and (3.26), the signal and the matched filter are assumed to have the same duration T. If not, as
in some SAR cases, there is an additional but usually negligible quadratic phase term in the impulse response, as
derived in Appendix 3A.2.
Note that (3.27) is a real function with a phase of either 0 or π, because the matched filter has been
explicitly designed for this signal. In practice, however, every target will have a different phase, and the
compressed pulse will be complex.
Figure 3.5: Expanded view of the compressed pulse, showing the 3-dB resolution.
Pulse Resolution
The resolution is defined as the spread between the two −3-dB points of the compressed signal. It is the width
of the pulse measured a factor of 0.707 below the peak magnitude, as illustrated in Figure 3.5. From Figure 2.3,
the 3-dB resolution, expressed in time units, is given by
ρ = 0.886 / (|K|T) ≈ 1 / (|K|T)   (3.28)
As an approximation, the 0.886 factor is sometimes ignored, especially when a broadening factor caused by a
smoothing window is included (see Section 3.3.4). Since |K|T is the chirp bandwidth, the resolution is the
reciprocal of the bandwidth. A wider bandwidth gives a finer resolution. The resolution is also called the impulse
response width (IRW), since (3.27) represents the response to a point target after matched filtering (recall Section
2.8).
Ignoring the 0.886 factor, the compression ratio is
Cr = ρ'/ρ ≈ T / [1/(|K|T)] = |K|T²   (3.29)
which is equal to the TBP of the uncompressed pulse. As an example, let K = 0.41 × 10¹² Hz/s, and T = 42 ×
10⁻⁶ s, as in RADARSAT's medium resolution beams. The bandwidth is then 17.2 MHz, the resolution is 0.058 µs,
and the compression ratio is 723. To express the compression concept in another way, one can view the
expansion/compression operation as synthesizing a transmitted pulse with an extremely short duration of 1/(|K|T)
= 0.058 µs, which is 1/723 of the length of the actual transmitted pulse.
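These RADARSAT-derived numbers can be reproduced with a few lines of arithmetic (a minimal sketch; the variable names are ours):

```python
K = 0.41e12   # FM rate, Hz/s
T = 42e-6     # pulse duration, s

bandwidth = abs(K) * T    # chirp bandwidth |K|T, Hz
rho = 1.0 / bandwidth     # resolution with the 0.886 factor ignored, s
Cr = abs(K) * T ** 2      # compression ratio (3.29), equal to the TBP

print(f"bandwidth         = {bandwidth / 1e6:.1f} MHz")   # ~17.2 MHz
print(f"resolution        = {rho * 1e6:.3f} us")          # ~0.058 us
print(f"compression ratio = {Cr:.0f}")                    # ~723
```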
Figure 3.6: Matched filtering of a baseband linear FM pulse (time axes relative to t0, in µs).
Figure 3.6 illustrates the matched filtering operation. The real part of the received linear FM signal from a point
target is shown in Figure 3.6(a), with the time delay removed. The amplitude of the compressed pulse is shown in
Figure 3.6(b). Using the point target analysis techniques of Section 2.8, the magnitude and phase of the
compressed pulse can be observed in more detail in Figure 3.6(c, d). The received signal is 7.2 µs long, and the
3-dB width of the compressed pulse is about 0.17 µs, for a compression ratio of about 42, the TBP. In Figure
3.6(c), it is seen that the first sidelobe, which is also the peak sidelobe, is −13 dB with respect to the main lobe,
a characteristic of the inverse Fourier transform of a rectangular function. This ratio is called the peak sidelobe
ratio (PSLR), as discussed in Section 2.8.
In the present example, the phase across the main lobe and every second sidelobe is zero, since the real part
is positive and the imaginary part of the compressed pulse is zero. This effect occurs because the signal is
noise-free, and the matched filter has the conjugate phase of the signal. The phase across the odd sidelobes is π
radians, since the real part there is negative, while the imaginary part is still zero (round-off errors may exist in
the imaginary part to disrupt this ideal phase pattern).
Figure 3.7: Matched filtering of a noisy baseband linear FM pulse: (b) real part of compressed signal; (d) compressed signal phase (expanded). (Time axes relative to t0, in µs.)
In practical situations, noise is always present in the received signal. To illustrate the effects of received
noise, Gaussian random noise is added to both the real and imaginary parts of the received signal of (3.25). The
noise standard deviation is 0.75 of the signal amplitude, corresponding to a received SNR of +2.5 dB. The results
are shown in Figure 3.7, which should be compared with Figure 3.6. In Figure 3.7(a), the distortion in the
received signal is noticeable. However, after the compression is performed in Figure 3.7(b), the results are not
much worse than the noise-free case.
This noise immunity is obtained because the filter is matched only to the signal, not to the noise; the filter is
uncorrelated with the noise. The matched filter "collects" most of the signal components into a single peak, but
leaves the noise components randomly distributed in the output array. In the presence of Gaussian additive noise,
it is known that the optimum receiver in terms of maximizing the SNR at the peak of the compressed pulse is
the matched filter receiver [7-9] . For the purpose of understanding the principle of SAR processing, it is not
necessary to consider noise in the subsequent development.
The quality of the compression operation and, in particular, how well the phase of the matched filter
matches that of the signal, can be observed in the regularity of the sidelobes and the phase pattern. This is
illustrated by comparing Figure 3.7 with Figure 3.6, where it can be seen that the effect of the noise is most
apparent in the sidelobes in part (c) and the phase in part (d) of the figures.
Nonbaseband Signals
In the time domain, a nonbaseband signal can be viewed as one in which the time of zero frequency is offset
from the pulse center (see Section 2.5.2). Let tc be the time offset referenced to t = 0 at the pulse center. Then
the transmitted signal s(t) is given by [recall (3.1)]
s(t) = rect(t/T) exp{+jπK(t − tc)²}   (3.30)
The analysis of pulse compression is very similar to that for a baseband signal discussed in Section 3.3.2. If the
echo is received a time t0 later, the target's echo can be expressed as [recall (3.25)]
sr(t) = rect{(t − t0)/T} exp{+jπK(t − t0 − tc)²}   (3.31)
The matched filter is again the time-reversed complex conjugate of the pulse replica
h(t) = s*(−t) = rect(t/T) exp{−jπK(t + tc)²}   (3.32)
Following the development of Appendix 3A leading to (3.27), the compressed signal is found to be
sout(t) = T exp{−j2πKtc(t − t0)} sinc{KT(t − t0)}   (3.33)
Figure 3.8: Matched filtering of a nonbaseband linear FM pulse.
Comparing this result with (3.27), the pulse is still compressed to time t0, which is the center of the pulse,
but there is now a linear phase ramp of −2πKtc(t − t0) through the peak of the pulse, as a result of the exponential
term in (3.33). The factor −Ktc has a physical interpretation: it is the center frequency of the demodulated received
data, or the offset frequency of the compressed pulse. It is negative because the center of the pulse is to the left
of the zero frequency position in this up chirp example, as shown in Figure 3.8(a).
Note that zero frequency of the signal sr(t) does not occur at t0, but rather at t0 + tc. It is often desirable
to register the compressed data to this zero frequency position. This can be conveniently implemented via
matched filtering in the frequency domain.
In this section, the matched filter for a linear FM signal and its output are derived directly in the frequency
domain. The derivation is given for baseband and nonbaseband signals.
Baseband Signals
The time domain matched filter, discussed in Section 3.3.2, can be applied with a time domain convolution. It can
also be applied with a frequency domain fast convolution, giving the same results. In addition to being applied in
the frequency domain, the matched filter can also be designed directly in the frequency domain, with little loss of
accuracy. It is sometimes convenient to do so. The derivation starts with the same signal from a point target
given in (3.25). In this derivation, all constant factors are ignored, which is immaterial to the development.
Applying the POSP to (3.25), the signal spectrum is approximately given by
Sr(f) = rect{f/(KT)} exp{−jπf²/K} exp{−j2πf t0}   (3.34)
The additional linear phase term in the last exponential is due to the offset of the target to from time 0, as seen
from the Fourier transform shift property.
The matched filter is designed to cancel the quadratic phase term in (3.34), which is done by setting
H(f) = rect{f/(KT)} exp{+jπf²/K}   (3.35)
Note that the quadratic phase term is independent of the target position t0, which allows the cancellation. The
spectrum of the signal, after the matched filter is applied, is given by
Sout(f) = Sr(f) H(f) = rect{f/(KT)} exp{−j2πf t0}   (3.36)
as the quadratic phase terms in (3.34) and (3.35) cancel each other, leaving the linear registration term. Figure
3.9 illustrates an example of the spectrum, where the dominance of the sinusoidal (linear phase) term is seen
after the frequency domain matched filter is applied.
Figure 3.9: Spectrum of the compressed target after the frequency domain matched filter is applied (horizontal axes: frequency in cells).
The inverse Fourier transform of (3.36) gives the compressed pulse
sout(t) = |K| T sinc{KT(t − t0)}   (3.37)
This result is the same as that found in the time domain (3.27), except for the additional gain |K|. This gain
arises in the frequency-domain derivation, because one of the constants that is ignored in the POSP equals
1/√|K| [recall (3.10)]. This constant should appear in the signal spectrum (3.34), and could be included in the
matched filter (3.35) to remove the |K| factor. However, this constant is often ignored, because other normalization
criteria are used in practice to set the matched filter gain.
Nonbaseband Signals
The above derivation assumes a "baseband" signal, that is, one with a zero center frequency. For a nonbaseband
signal, the spectrum is rotated, and the signal's center frequency is no longer zero. According to the Fourier
transform properties, this spectral shift introduces a linear phase in the time domain, which passes through zero
at the compressed target peak. This result has already been shown in Section 3.3.2, with some algebraic
manipulations, via the time domain convolution. This section derives the same result using a frequency domain
implementation of the matched filter.
Beginning with the received nonbaseband signal of (3.31) and applying the POSP, the received signal
spectrum is given by [recall (3.34)]
Sr(f) = rect{(f + Ktc)/(KT)} exp{−jπf²/K} exp{−j2πf(t0 + tc)}   (3.38)
and, from (3.36), the spectrum after the matched filter multiply is
Sout(f) = rect{(f + Ktc)/(KT)} exp{−j2πf(t0 + tc)}   (3.40)
The compressed target is registered to to+tc in the presence of a spectral shift in the received signal, that is,
the target is registered to where its zero frequency occurs in the received signal. This property holds, even if zero
frequency is outside the signal's bandwidth. Only the slope of the phase ramp through the compressed target is
affected by a spectral shift in the received signal.
Sometimes, it is desired to register the target to the time of reception of the beginning (or the middle) of
the received signal. To achieve this, the filter in (3.35) can be multiplied by a linear phase modulation
exp{−j2πΔt f}, where Δt represents the time difference between the zero frequency time and the desired registration
time.
Up to now, both time and frequency are assumed to be continuous variables, so aliasing has not been
considered. In practice, the sampling rate is also important in the frequency domain filter design. The sampling
and aliasing of SAR signals are discussed in Chapter 5.
So far, the matched filters discussed have a constant magnitude. In Figure 3.6, it is shown that the resulting
PSLR is −13 dB when the envelope of the spectrum is approximately rectangular. This PSLR is usually
considered to be too high, since it masks out nearby weaker targets in the image. One way to reduce the PSLR
is to apply a smoothing window to the matched filter in the frequency domain, in order to reduce the leakage of
energy from the main lobe to the sidelobes.
A window is a symmetrical real function that applies weights to the signal spectrum. The weight is maximum
in the middle (the peak) of the signal spectrum, and rolls off toward the edges of the spectrum. Recalling the
discussion of the spectrum gap of Section 3.2.3, the window must be rotated so that the peak of the window is
aligned with the peak of the spectrum, which is assumed to be one-half the sampling rate away from the middle
of the gap.
The window has the effect of "smoothing" the spectrum, that is, reducing the discontinuity at the edges of
the spectrum. This reduces the leakage of energy in the main lobe of the resulting compressed pulse, but at the
expense of degraded resolution. The resolution is degraded because the window reduces the effective signal
bandwidth used in the compression.
Examples of windows are Taylor, Chebyshev, Hanning, Hamming, and Kaiser [10]. In this section, attention
will be given to the Kaiser window, because it has the following desirable properties:
1. The Kaiser window is nearly optimal in the sense of creating a compressed pulse with the largest energy in
the main lobe for a given ISLR. This property comes from the prolate spheroidal wave functions [11], of
which the Kaiser window is an approximation.
2. The Kaiser window has an adjustable parameter β, which allows a trade-off between resolution and sidelobe
levels to suit different applications.
Equations (2.54) and (2.55) give the formulations of the Kaiser window in the time domain and frequency
domain, respectively. Kaiser windows of different values of β have been plotted in Figure 2.11. With the
weighting, the frequency domain matched filter of (3.35) becomes
H(f) = Wk(f, |K|T) exp{+jπf²/K}   (3.42)
Because of the one-to-one correspondence between time and frequency for linear FM signals, the window can also
be applied in the time domain. The matched filter also can be designed in the time domain without windowing,
and application of the window can be deferred until the data is processed in the frequency domain. The matched
filter designed in the time domain (3.26) with windowing becomes
h(t) = wk(t, T) exp{−jπKt²}   (3.43)
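A Kaiser window can be generated from the zeroth-order modified Bessel function I0, which is itself computable from its power series. The sketch below (function names ours, window length an arbitrary illustration) uses the standard Kaiser definition rather than the book's (2.54)/(2.55), which are not reproduced here:

```python
import math

def bessel_i0(x):
    """Zeroth-order modified Bessel function I0(x), summed from its power series."""
    term, total, k = 1.0, 1.0, 0
    while term > 1e-12 * total:
        k += 1
        term *= (x / (2.0 * k)) ** 2   # next series term: ((x/2)^2k / (k!)^2)
        total += term
    return total

def kaiser(N, beta):
    """Length-N Kaiser window with shape parameter beta."""
    return [bessel_i0(beta * math.sqrt(1.0 - (2.0 * n / (N - 1) - 1.0) ** 2))
            / bessel_i0(beta)
            for n in range(N)]

# A short window with the typical beta = 2.5 discussed in the text:
w = kaiser(9, 2.5)
# w peaks at 1.0 in the middle and rolls off symmetrically toward the edges.
```

Larger β gives faster roll-off at the edges, lowering the sidelobes at the cost of resolution, which is the trade-off described in the text.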
Applications of windows in these two domains are illustrated in Figure 3.10. The left column shows the
window in the time domain and its effect on tapering the edges of the signal. The right column shows the win-
dow in the frequency domain and its effect on tapering the edges of the signal spectrum. For simplicity, a
baseband signal is assumed.
As in (3.36), the signal spectrum after the weighting and matched filter are applied is
Sout(f) = Sr(f) H(f) = Wk(f, |K|T) exp{−j2πf t0}   (3.44)
The impulse response of the compressed pulse is then given by the inverse Fourier transform of the window
sout(t) = p(t − t0)   (3.45)
It is a sinc-like function centered at t0, but with a broader main lobe and lower sidelobes than the actual sinc
function. For a rectangular window, the impulse response is given by (3.37), ignoring a multiplicative complex
amplitude. Let γw be the IRW broadening factor due to window weighting. The resolution in (3.28) can be
written as:
ρ = 0.886 γw / (|K|T)   (3.46)
In specifying system performance, restrictions are placed on both the IRW and PSLR. Therefore, a tradeoff
has to be performed when selecting the parameters of a window. For the Kaiser window, the PSLR and IRW
broadening (or resolution broadening) are given in Figure 2.12, with the broadening referenced to that of the
rectangular window (i.e., β = 0). A typical choice of β is 2.5. This window gives a PSLR of −21 dB
(approximately one-tenth in amplitude), and a resolution with a factor of γw = 1.18 times that of a rectangular
window. This is another reason why the 0.886 factor can be ignored in (3.28), since the broadening effect of the
window tends to compensate it (the factor is 0.886 × 1.18 = 1.05 in this example).
Figure 3.10: Application of the window in the time domain (left column) and in the frequency domain (right column), and its tapering effect on the signal and its spectrum.
The oversampling ratio in (3.20) is defined as the ratio of the sampling rate to the signal bandwidth. It can also
be defined as the ratio of the resolution to the sample spacing, as shown below.
Let Δt equal the sample spacing measured in time units, 1/fs. Combining this with (3.20) and (3.46), the
following ratio is obtained:
ρ / Δt = 0.886 γw αos ≈ αos   (3.47)
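Using the RADARSAT parameters quoted earlier, together with the γw = 1.18 broadening of a β = 2.5 Kaiser window, this ratio can be checked numerically (a sketch; the variable names are ours):

```python
fs = 18.5e6      # sampling rate, Hz (RADARSAT example)
K = 0.41e12      # FM rate, Hz/s
T = 42e-6        # pulse duration, s
gamma_w = 1.18   # IRW broadening of the beta = 2.5 Kaiser window

bandwidth = abs(K) * T             # chirp bandwidth, Hz
alpha_os = fs / bandwidth          # oversampling ratio (3.20)
dt = 1.0 / fs                      # sample spacing, s
rho = 0.886 * gamma_w / bandwidth  # windowed resolution (3.46), s

print(round(alpha_os, 3))   # ~1.074
print(round(rho / dt, 3))   # 0.886 * gamma_w * alpha_os, ~1.123
```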
The matched filter (3.26) can be applied in the time domain, using a linear convolution. However, as the matched
filter is usually very long in SAR applications, the matched filter is normally applied in the frequency domain.
The discussion below is based upon a baseband signal, but can be generalized to a nonbaseband signal.
There are three options to generate the frequency domain matched filter:
Option 1: Take the DFT of the zero padded, complex conjugate of the time-reversed pulse replica, which is a
copy of the transmitted pulse.
Option 2: Take the complex conjugate of the DFT of the zero padded pulse replica (not time-reversed).
Option 3: Generate the matched filter directly in the frequency domain, using assumed linear FM characteristics.
In Options 1 and 2, the replica has to be zero padded to the selected FFT length before the FFT is taken.
Because the throwaway region is equal to the replica length (minus one), the FFT length should be a few times
longer than the replica length to obtain efficient processing.
The location of the filter throwaway region, discussed in Section 2.4, is different in each design. If the zero
padding is done at the end of the replica array, the circular convolution throwaway region is at the start of the
IDFT output array in Option 1 and at the end of the IDFT output array in Option 2. It is convenient if the
matched filter throwaway is at the end of the IDFT output array, which is why Option 2 is sometimes preferred.
In Option 3, the throwaway region is split between the two ends of the IDFT output array.
Options 1 and 2 have the additional advantage that the pulse does not need to be exactly linear FM,
because the processor simply uses the pulse chirp replica in the received auxiliary data. Option 1 will not be
discussed any further since its properties are similar to those of Option 2.
In Option 2, the time domain pulse replica, s(t), can be generated from a mathematical expression such as (3.1),
if known, or can be obtained from the radar system itself. A tapering window, w(t, T), is then applied, which
gives the weighted replica
h'(t) = w(t, T) s(t)   (3.48)
In this case, the time-reversed complex conjugate of h'(t) is the time domain matched filter, as given in (3.43).
On the other hand, the frequency domain matched filter is the complex conjugate of the DFT of h'(t), with no
axis reversal.
Consider the RADARSAT medium resolution example, which has the following baseband chirp parameters:
chirp duration T = 42 µs, FM rate K = 0.41 × 10¹² Hz/s, and sampling rate fs = 18.5 MHz. The bandwidth is
then 17.2 MHz, the oversampling factor is 1.07, and the number of samples in the chirp is 777. Let w(t, T) be a
Kaiser window with roll-off coefficient of β = 2.5.
The time domain filter is transformed into the frequency domain by a DFT
H2(f) = {fft[h'(t), Nfft]}*   (3.49)
In this equation, the MATLAB notation is used, which indicates that the time domain array h'(t) is zero padded
at its end to an FFT length of Nfft. In this numerical example, Nfft is 2048.
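Option 2 can be sketched in pure Python with a naive DFT standing in for MATLAB's fft. The four-sample replica and eight-point transform length below are toy values, far smaller than the 777-sample replica and 2048-point FFT of the example, and the function names are ours:

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, standing in for MATLAB's fft."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def option2_filter(replica, nfft):
    """Option 2: zero pad the replica at its end to length nfft, take the
    DFT, and conjugate the result, as in (3.49)."""
    padded = list(replica) + [0j] * (nfft - len(replica))
    return [X.conjugate() for X in dft(padded)]

# Toy sizes: a 4-sample quadratic-phase replica and an 8-point transform.
replica = [cmath.exp(1j * cmath.pi * 0.25 * n * n) for n in range(4)]
H2 = option2_filter(replica, 8)
```

No time reversal appears anywhere; the conjugation of the whole spectrum takes its place, which is what distinguishes Option 2 from Option 1.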
Figure 3.11 shows the frequency response H2(f) of the matched filter, generated using Option 2. The
magnitude drops off at the center of the FFT output array, which corresponds to frequency components at either
end of the pulse replica. This drop-off is largely due to the application of the Kaiser window. The shape of the
window is apparent in the frequency domain, where the window "begins" at sample 1190 and "ends" at sample
860, as shown by the dashed lines. A ringing effect, caused by the truncation of the pulse, is seen near the
"ends" of the spectrum. Between samples 990 and 1060, the matched filter energy is near zero, corresponding to
frequencies absent in the replica because of the oversampling factor.
Figure 3.11: Magnitude and phase of the matched filter in the frequency
domain, designed using Option 2.
The phase, shown in the bottom panel of Figure 3.11, is also a quadratic function of frequency, but it is not
symmetrical about zero frequency, as it is in Figure 3.2. This is because the time domain array has been zero
padded, which shifts the stationary phase (zero Doppler) point in the replica away from the middle of the array.
This introduces a linear phase ramp into the spectrum, tilting the phase curve. Note that there is no ringing in
the phase of the spectrum.
In Option 3, the matched filter is generated directly in the frequency domain, assuming linear FM characteristics
of the pulse, as described in Section 3.3.3. This version of the filter, H3(f), is given in (3.42). Using the same
simulation parameters as above, the magnitude of the frequency-domain filter is shown in Figure 3.12(a).
To implement this filter, the frequency variable, f, must be specified. Its range of values should span the
sampling frequency over the filter array. As the variable f is defined circularly within the array, a discontinuity
inevitably exists. This discontinuity introduces a phase discontinuity within the array. The phase discontinuity
should be placed at the null point of the spectrum of the filter array. Letting fs be the sampling rate, this point
corresponds to ±fs/2 Hz in the present example, as shown in Figure 3.12(b). This choice is due to the fact that
the pulse replica is at baseband and, in the frequency domain, the first sample corresponds to zero frequency.
The phase is shown in Figure 3.12(c). Comparing Figures 3.12(c) and 3.11(b), it is seen that the phase is not
skewed, but is symmetrical about zero frequency.
As a final point, Figure 3.13 illustrates the positions of compressed targets in the IFFT output array using
matched filter Options 2 and 3. Three targets are considered, each of which occupies N = 401 samples in the
signal space, as shown in Figure 3.13(a). The zero frequency point of the target lies Nzo = (NL - NR) /2 samples
to the right of the midpoint of the target exposure.
Figure 3.12: Matched filter designed directly in the frequency domain (Option 3): (a) magnitude, with half the signal bandwidth marked; (b) location of the phase discontinuity; (c) phase.
Figure 3.13: Compressed target positions showing the throwaway regions, TA,
for baseband signals.
Figure 3.13(b) shows that each target in the output array is registered to the position of its leading edge in
the input array, when Option 2 is used. In this case, the throwaway region is TA = N - 1 = 400 samples long,
and is located at the end of the output array. The target registration and the throwaway region size and location
are also independent of Nzo in this case.
Figure 3.13(c) shows that each target in the output array is registered to its zero frequency position in the
input array, when Option 3 is used. In this case, the throwaway regions are split between each end of the output
array. If the target is centered on zero frequency, the throwaway regions are split equally on either end of the
output array, with TAL = TAR = (N-1)/2 = 200 samples. When the zero frequency point of the target is shifted
by Nzo samples to the right of the center of the exposure, the left end throwaway region is TAL = (N-1)/2 +
Nzo samples long, while at the right end, the throwaway region is TAR = (N - 1)/2 - Nzo samples. In this
example, the throwaway on the right is shorter than that on the left, because the zero frequency point is shifted
to the right of center.
It is often preferred to register each target to its zero frequency position (see Chapter 6), which Option 3
does. If desired, a phase ramp can be applied to the matched filter in the frequency domain, to place the whole
throwaway region at the end of the output array.
An intuitive way of remembering the location of the throwaway region is as follows. If the first sample of
the exposure of Target A is located at the first sample of the input array, the throwaway region ends at the
sample to the left of the peak sample of that target. Similarly, if the last sample of the exposure of Target C is
placed in the last sample of the input array, the throwaway region begins one sample to the right of the peak of
that target. This rule applies to both Options 2 and 3, when the samples in the inverse FFT (IFFT) output
array are addressed in a circular fashion.
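The Option 2 registration rule above can be demonstrated with a short sketch (the FFT length, FM rate, and target position are illustrative assumptions, not the book's code; the filter is formed by conjugating the FFT of the zero-padded replica, as in Option 2):

```python
import numpy as np

N = 401                    # target extent in samples, as in Figure 3.13
Nfft = 2048                # processing array length (an assumed value)
k = 0.8 / N                # FM rate in cycles/sample^2 (keeps the chirp below Nyquist)
n = np.arange(N) - (N - 1) / 2
replica = np.exp(1j * np.pi * k * n**2)        # baseband chirp replica

p = 700                    # leading edge of the target in the input array
x = np.zeros(Nfft, dtype=complex)
x[p:p + N] = replica       # one point target

# Option 2 matched filter: conjugate of the FFT of the zero-padded replica
H = np.conj(np.fft.fft(replica, Nfft))
y = np.fft.ifft(np.fft.fft(x) * H)

peak = int(np.argmax(np.abs(y)))
print(peak)                # 700: the target is registered to its leading edge
# The last N - 1 = 400 samples of y form the circular-wraparound throwaway region.
```

The peak lands at the leading-edge sample regardless of where the zero frequency point lies within the exposure, consistent with the Option 2 behavior described above.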
3.5 FM Rate Mismatch
Sometimes the matched filter used in the compression operation is not accurate. Three parameters are required to
define the matched filter for a linear FM signal: the duration, the center frequency, and the FM rate. The parameter that
tends to cause the most serious errors is the FM rate. In this section, the effects of FM rate errors on four
important image quality parameters (IRW, PSLR, registration, and phase) are examined. The discussion is
presented separately for a baseband signal and a nonbaseband one.9 The effects of the FM rate error are
somewhat different in these two cases.
(a) |QPE| slightly < 0.28π radians; (b) |QPE| slightly > 0.28π radians
Figure 3.15: Similar impulse responses, with different first sidelobe positions.
Because of the jumps in the PSLR curve, the ISLR is a more meaningful measurement to quantify the
energy that leaks into the sidelobes. The sidelobe behavior discussed above also affects the ISLR measurement,
but a simple way of making the measurement more consistent is to define the edges of the main lobe as a fixed
fraction of the ideal resolution, independent of FM rate error, as suggested in Section 2.8.
The choice of the integration limits is arbitrary. To compute the ISLR in Figure 3.14(c), the integration
limits for Pmain in (2.63) are defined to be a = 2.8 times the IRW. It is seen that the ISLR rises by about 3.3
dB for an FM rate error that causes the IRW to broaden by 10%.
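A sketch of this fixed-main-lobe ISLR measurement (the function name, the oversampling ratio, and the sinc test profile are illustrative assumptions; the factor a = 2.8 follows the text):

```python
import numpy as np

def islr_db(profile, ideal_irw_samples, a=2.8):
    """ISLR = 10*log10(sidelobe energy / main-lobe energy), with the main
    lobe taken as a fixed multiple a of the *ideal* IRW, so the measurement
    stays consistent when an FM rate error broadens the response."""
    power = np.abs(profile) ** 2
    peak = int(np.argmax(power))
    half = int(round(a * ideal_irw_samples / 2))
    main = power[max(0, peak - half):peak + half + 1].sum()
    return 10 * np.log10((power.sum() - main) / main)

# Example: ideal sinc-shaped compressed pulse, oversampled by 16;
# the 3-dB width of a sinc is 0.886 samples at unit bandwidth.
osr = 16
t = np.arange(-400, 401) / osr
profile = np.sinc(t)
print(islr_db(profile, ideal_irw_samples=0.886 * osr))
```

For the unweighted sinc the measured value comes out near -10 dB; the point of the fixed main-lobe width is that the same limits are reused when the response is broadened by an FM rate error.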
The matched filtering process in the presence of an FM rate error can be analyzed in both the time and
frequency domains. The time domain derivation is considered first, with the received signal given by (3.25). The
matched filter is defined by (3.26), with K replaced by its erroneous value, K + ΔK:

h(t) = rect( t/T ) exp{ -jπ (K + ΔK) t² }   (3.51)
Using the frequency domain approach, the spectrum of the matched filter, obtained in the same way as (3.34), is

H(f) = rect( f / ((K + ΔK) T) ) exp{ jπ f² / (K + ΔK) }   (3.53)

     ≈ rect( f / (|K| T) ) exp{ jπ f²/K } exp{ -jπ (ΔK/K²) f² }   (3.54)
The last step is obtained by recognizing that the rectangular envelopes of the signal and the filter are
approximately equal, and that 1/(K + ΔK) ≈ 1/K - ΔK/K² when |ΔK| << |K|. Then the matched filter output in
the frequency domain is approximately
Sout(f) = Sr(f) H(f)
        ≈ rect( f / (|K| T) ) exp{ -jπ (ΔK/K²) f² } exp{ -j2π f t0 }   (3.55)
The matched filter output is then transformed into the time domain by an inverse Fourier transform

sout(t) = ∫_{-|K|T/2}^{+|K|T/2} exp{ -jπ (ΔK/K²) f² } exp{ j2π f (t - t0) } df   (3.56)
By substituting f = Ku for a linear FM signal, where u is a dummy time variable, and recognizing that the
Fourier transform of an even function is proportional to its inverse Fourier transform, the sout(t) obtained in
(3.56) is the same as that given by (3.52).
The presence of a QPE alone, caused by ΔK, will not change the compressed peak position for a baseband
signal, hence the compressed peak is still centered at t = t0. Only a linear phase component will change the
peak position.
In some applications, phase accuracy is also important. It can be shown that the phase error at the peak of
the compressed target, due to a ΔK error, is one-third of the QPE. The derivation, using (3.52), is given in
Appendix 3B. Therefore, a QPE magnitude of π/2 introduces a phase error of π/6, or 30°, at the target peak.
This may be too large for some applications because, even though a 10% broadening of the magnitude
response may be acceptable, a phase error this large may not be.
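The one-third relation can be checked numerically. The sketch below (discrete, with illustrative parameters; not the book's code) compresses a baseband chirp with a matched filter whose FM rate is in error, with ΔK chosen from the QPE relation QPE = πΔK(T/2)² to give a QPE of π/2:

```python
import numpy as np

N = 1001                              # pulse length in samples
k = 0.8 / N                           # FM rate, cycles/sample^2 (TBP about 800)
t = np.arange(N) - (N - 1) / 2

qpe = np.pi / 2                       # chosen QPE of pi/2
dk = qpe / (np.pi * (N / 2)**2)       # FM rate error producing that QPE

s = np.exp(1j * np.pi * k * t**2)            # received baseband pulse (t0 = 0)
h = np.exp(1j * np.pi * (k + dk) * t**2)     # replica with the wrong FM rate
Nfft = 4096
y = np.fft.ifft(np.fft.fft(s, Nfft) * np.conj(np.fft.fft(h, Nfft)))

peak = int(np.argmax(np.abs(y)))
ratio = np.angle(y[peak]) / qpe
print(peak)                # 0: no registration shift for a baseband signal
print(ratio)               # close to -1/3: peak phase error magnitude is about QPE/3
```

The peak stays at zero lag, and the measured phase at the peak is close to one-third of the QPE (the small departure from exactly 1/3 is the higher-order QPE³/105 term of Appendix 3B).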
Consider a nonbaseband signal (3.31) and its spectrum (3.38). Again, let the FM rate error in the matched filter
be ΔK. Using the frequency domain approach as in Section 3.3.3, the output signal is [recall (3.41)]
sout(t) = ∫_{-|K|T/2 - Ktc}^{+|K|T/2 - Ktc} exp{ -jπ (ΔK/K²) f² } exp{ j2π f (t - t0 - tc) } df   (3.57)
By substituting u = f + K tc, collecting terms that are independent of u, and simplifying, the output signal is
sout(t) ≈ exp{ jπ ΔK tc² } exp{ -j2π K tc (t - t0 - tc + ΔK tc/K) }
          × ∫_{-|K|T/2}^{+|K|T/2} exp{ -jπ (ΔK/K²) u² } exp{ j2π u (t - t0 - tc + ΔK tc/K) } du   (3.58)
The target is compressed to time t = t0 + tc - ΔK tc/K. At this time, the phase given by the first exponential
term is π ΔK tc². Including the QPE, the total phase is then

φ = π ΔK tc² - QPE/3   (3.59)
where the last term is one-third of the QPE, as derived in Appendix 3B.
Comparing (3.58) with the case of no phase error (3.41), the former has an additional time shift of ΔK tc/K
and an additional phase of π ΔK tc². This time shift can also be derived from the frequency versus time relation-
ship t = f/K for a linear FM signal, by differentiating t with respect to K. This shift does not affect the IRW
broadening and the sidelobe effects, compared to a baseband signal. In other words, the IRW broadening and
PSLR effects, as shown in Figure 3.14 for a baseband signal, are still applicable to a nonbaseband signal.
The time domain approach would give the same results as shown in (3.58) and (3.59).
Using the phase error relation (3.50) for linear FM signals and the resulting IRW broadening of Figure 3.14(a), a
rule of thumb can be established to relate the allowable FM rate error to the TBP of the signal (3.5). If the
allowable QPE is set to Qπ, the relative FM rate error criterion can be expressed as

|ΔK / K| < 4Q / TBP   (3.60)
This means that signals with longer durations or higher bandwidths are more sensitive to FM rate errors, and
care must be taken to use accurate parameters in the compression process.
Figure 3.14 shows that Q is 0.41 for an IRW broadening of 5%, and 0.55 for a broadening of 10%. Using Q
= 0.5 as an intermediate value (giving a broadening of 8%), a simple rule of thumb is that the FM rate error
should be limited to |ΔK/K| < 2/TBP for moderate focusing accuracy. For high focusing accuracy (IRW
broadening less than 2%), the rule that |ΔK/K| should be less than 1/TBP can be used.
Using the RADARSAT example of Section 3.3.2, where K = 0.41 x 10¹² Hz/s and T = 42 µs, the TBP is
723. Consequently, the FM rate has to be accurate to within 0.28% for moderate focusing accuracy, or 0.14% for
high focusing accuracy. Note that these figures were obtained assuming a Kaiser window with β = 2.5. If a smaller β
is used, the error criterion is more sensitive than the values given above. Note also that the TBP is measured
before the window is applied. The window has the effect of reducing the effective TBP of the signal.
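The arithmetic above can be reproduced directly (a short sketch using the quoted RADARSAT numbers):

```python
# Rule of thumb |dK/K| < 2/TBP (moderate) or 1/TBP (high accuracy),
# evaluated for the RADARSAT example quoted in the text.
K = 0.41e12                      # range FM rate, Hz/s
T = 42e-6                        # pulse duration, s
TBP = abs(K) * T**2              # time-bandwidth product, from (3.5)
print(round(TBP))                # -> 723
print(round(2 / TBP * 100, 2))   # moderate accuracy: |dK/K| < 0.28 %
print(round(1 / TBP * 100, 2))   # high accuracy:     |dK/K| < 0.14 %
```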
3.6 Summary
In this chapter, a linear FM signal is introduced, along with its time and frequency characteristics. There exists a
one-to-one correspondence between the time and frequency domains for FM signals. In the case of a linear FM
signal, this relationship is linear, which leads to a simple duality between the time and frequency domains. For
example, the shape of the envelope and the phase profile are approximately the same in both domains.
The analytical form of the spectrum and the time and frequency relationship of an FM signal can be
derived by the POSP. The important equations of linear FM pulse compression derived in this chapter are
summarized in Table 3.1. The equations assume an up chirp and a target positioned at to = 0.
How fast must a signal be sampled to avoid aliasing? A complex signal must be sampled at a rate greater
than its bandwidth. In practice, it is customary to use a sampling rate 1.1 to 1.4 times the
bandwidth. With this oversampling, an energy gap exists in the signal spectrum. This gap marks the point where the
frequency can be considered to be discontinuous for DSP operations.
In pulse compression or matched filtering, the most important quality parameter is resolution. It is defined as
the 3-dB width of the compressed pulse, which is approximately the reciprocal of the bandwidth. The compres-
sion ratio is the ratio of the length of the transmitted pulse to the width of the compressed pulse, and can be
on the order of several hundred. The matched filter is the time-reversed complex conjugate of the transmitted
pulse, and can be implemented either in the time domain or in the frequency domain. Examples are given of the
compression of linear FM signals.
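The equivalence of the two implementations can be illustrated with a short sketch (parameters are illustrative): the matched filter is the time-reversed complex conjugate of the pulse, and linear convolution equals frequency-domain multiplication when the FFT length is at least 2N - 1.

```python
import numpy as np

N = 256
t = np.arange(N) - (N - 1) / 2
pulse = np.exp(1j * np.pi * (0.5 / N) * t**2)   # linear FM pulse
h = np.conj(pulse[::-1])                        # matched filter impulse response

y_time = np.convolve(pulse, h)                  # time-domain implementation
M = 512                                         # FFT length >= 2N - 1: no wraparound
y_freq = np.fft.ifft(np.fft.fft(pulse, M) * np.fft.fft(h, M))[:2 * N - 1]

print(np.allclose(y_time, y_freq))              # True
print(int(np.argmax(np.abs(y_time))))           # peak at full overlap, sample N - 1
```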
Finally, the effects of an FM rate error in the matched filter are investigated for two different cases:
baseband and nonbaseband signals. It is found that the IRW broadening is the same for the two cases, and
that the QPE should be less than π/2 for an IRW broadening of less than 8%. To meet this QPE limit, the FM
rate should be accurate to within 2/TBP, and to 1/TBP for a more stringent QPE and hence IRW broadening
limit. For a nonbaseband signal, the compressed signal will be shifted by an amount proportional to the spectrum
shift. The phase is also distorted by the FM rate error. The phase distortion is one-third the QPE for a
baseband signal, plus an additional component for a nonbaseband one.
An example of a SAR image taken by the ENVISAT/ASAR sensor is shown in Figure 3.16. The radar was
operating with an HH polarization. The image was acquired on October 31, 2004 at 16:33 GMT at the ESA
ground station in Kiruna, Sweden, on Track 384, Orbit 13,965.
It is a ScanSAR Wide Swath product, processed with the ESA PF-ASAR v-3.08 SAR processor built by
MacDonald Dettwiler. The SPECAN algorithm is used, with seven range looks and three azimuth looks. The
processed resolution is approximately 150 m, but the pixels have been averaged by a factor of five in both
dimensions for display purposes.
The scene center is approximately 80° N, 70° W. The scene is approximately 400 km on each side. The top
left of the image is pointing to the north.
The ice-filled water aligned vertically just right of center in the scene is the Nares Strait, which separates
the Canadian Ellesmere Island on the left from northern Greenland on the right. The bright area on the far right
is the Greenland icecap, whose rough surface gives a very strong radar backscatter. A number of the icecap
glaciers flow directly into the sea, such as the one at the upper right of the scene.
Table 3.1: Summary of Linear FM Pulse Compression Equations
Linear FM signal:   sr(t) = rect( t/T ) exp{ jπ K (t - t0)² }
This appendix derives the matched filter output in the convolution integral in (3.24). There are two cases to
consider: (1) the case where the signal and matched filter durations are the same length; (2) the case where one
duration is longer than the other. The solutions are similar, except that there is an extra quadratic phase term in
the latter case.
The derivation of the matched filter output (3.24) begins with the signal and matched filter in (3.25) and (3.26),
both having the same duration T.
sout(t) = ∫_{-∞}^{+∞} rect( u/T ) rect( (t - u)/T ) exp{ jπ K u² } exp{ -jπ K (t - u)² } du   (3A.3)
This integral only has a contribution when the two rect functions overlap. The integral can be evaluated by
separating it into two parts, one part where the signal is to the left of the matched filter, and the other part
where it is to the right, corresponding to two cases of overlap. Changing the integration limits accordingly, the
integral becomes
sout(t) = exp{ -jπ K t² } [ rect( (t + T/2)/T ) ∫_{-T/2}^{t+T/2} exp( j2π K t u ) du
          + rect( (t - T/2)/T ) ∫_{t-T/2}^{T/2} exp( j2π K t u ) du ]   (3A.4)

        = (T + t) rect( (t + T/2)/T ) sinc[ K t (T + t) ]
          + (T - t) rect( (t - T/2)/T ) sinc[ K t (T - t) ]   (3A.5)
The output sout(t) is a real function, and can be written in the more compact form

sout(t) = (T - |t|) rect( t/(2T) ) sinc[ K t (T - |t|) ]   (3A.6)
The function represented by (3A.6) can be viewed as having a slowly varying part, or envelope, (T - |t|)
rect{ t/(2T) }, and a rapidly varying part arising from the sinc function. The envelope is a triangular function,
shown in Figure 3A.1(a). The triangular shape of the envelope arises from the fact that the fraction of the two
functions, sr(t) and h(t), that overlap one another decreases linearly from the middle, as the uniform-magnitude
functions are shifted in relation to one another. The parameters are taken from a radar with a TBP of 100, and
the peak values are normalized to unity for clarity.
Figure 3A.1: (a) The triangular envelope; (b) the rapidly varying part; (c) the combined function, for a TBP of 100. The time axis is in µs.
The rapidly varying part shown in Figure 3A.1(b) can be approximately represented by three sinc functions,
which peak at times t = -T, 0, and +T, respectively. The effective duration of each sinc function is much
smaller than T, assuming that the signal has a large TBP.
The combined function is illustrated in Figure 3A.1(c). The contributions of the sinc functions that peak at
-T and +T are negligible, since the triangular function is close to zero in the vicinity of these points. For values
of TBP less than 100, the outer sinc functions start becoming noticeable. In addition, the envelope is nearly equal
to T in the vicinity of the sinc function that peaks at t = 0, where |t| << T. Under these conditions of TBP >
100, the compressed output can be approximated by

sout(t) ≈ T sinc( K T t )
An expanded view of the curve in Figure 3A.1(c) was shown in Figure 3.5. The effect of the approximation is
too small to be seen for the large TBP used in this example.
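The closed form can be checked by brute-force quadrature of the convolution integral. The sketch below uses illustrative parameters (TBP = KT² = 100, as in Figure 3A.1) and NumPy's normalized sinc, sin(πx)/(πx), which matches the convention used here:

```python
import numpy as np

K = 1.0e12            # FM rate, Hz/s
T = 10e-6             # pulse duration, so TBP = K*T^2 = 100
du = 1e-9             # quadrature step
u = np.arange(-T / 2, T / 2, du)

def s_out(t):
    """Direct quadrature of the matched filter convolution integral."""
    ov = np.abs(t - u) <= T / 2                    # overlap of the two rects
    ph = np.pi * K * (u[ov]**2 - (t - u[ov])**2)
    return np.sum(np.exp(1j * ph)) * du

def closed_form(t):
    """(T - |t|) rect(t/(2T)) sinc[K t (T - |t|)]."""
    return (T - abs(t)) * np.sinc(K * t * (T - abs(t))) if abs(t) < T else 0.0

for tt in [0.0, 0.3e-6, 2e-6]:
    print(abs(s_out(tt) - closed_form(tt)) < 1e-3 * T)   # True at each test time
```

The quadrature and the closed form agree to well under 0.1% of the peak value T at each test time.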
The above derivation assumes that the signal and filter have the same durations, that is, the rectangular functions
are the same length. This requires that the integral of (3A.3) be partitioned into two parts. Next, consider the
case where the rect functions of (3A.3) have different lengths. Then, the partition of the integral is no longer
necessary, and it is simpler to evaluate.
The convolution integral is obtained by removing the rectangular window in one of the functions. For
example, the rectangular function can be removed from sr(t) in (3.25), which is equivalent to assuming that the
duration of the signal sr(t) is longer than the duration of the filter h(t). Again, letting t0 = 0, the matched filter
output is
φ = QPE/3 - QPE³/105 + ... ≈ QPE/3
Therefore, a QPE magnitude of π/2 would introduce an absolute phase error of approximately π/6, or 30°, at
the target peak.
Chapter 4
4.1 Introduction
The purposes of this chapter are to explain the term "synthetic aperture" in the radar context, and to derive
associated parameters such as azimuth bandwidth and resolution. The chapter begins with an explanation of the
SAR geometry in Section 4.2 and the special terminology that is used in the imaging radar. Then, Section 4.3
outlines the "range equation," detailing how the distance from the sensor to the target changes with time.
The next three sections describe how the SAR signal is acquired. First, the form of the transmitted radar
pulse is given in Section 4.4, pointing out the relationship between transmitted bandwidth and achievable
processed range resolution. The received echo from the pulse is a convolution of the pulse and the ground
reflectivity.
Second, the form of the SAR signal in the azimuth direction is discussed in Section 4.5. The notion of pulse
coherency and the timing of the transmitted pulses are presented. The timing, as characterized by the PRF, is
affected by many of the SAR system design parameters, and its choice is quite restricted in a satellite sensor.
After discussing the factors affecting the received signal strength, the important signal parameters of exposure
time, Doppler frequency, and bandwidth are discussed.
Third, Section 4.6 explains how the received signal can be considered as a two-dimensional signal, and how it
is written into the range and azimuth dimensions of the signal processor memory. This structure is needed so
these signals can be processed into a two-dimensional image of the Earth's surface. The concept of the impulse
response of the SAR sensor is introduced, and typical aircraft and satellite SAR parameters are given.
The central idea of SAR processing is based upon matched filtering of the received SAR signal in both the
range and azimuth directions. Matched filtering is possible because the acquired SAR data are modulated in these
directions with an appropriate phase function. The modulation in range is provided by the phase encoding of a
transmitted pulse, while the modulation in azimuth is created by the motion of the radar platform. 1 The phase
contains the most important information in the signal, so the phase characteristics of these modulations are
examined throughout this chapter. More details of the SAR signal properties are given in Chapter 5.
So far, the groundwork for matched filtering and range resolution has been established. By now, the concept
that a high azimuth resolution can also be obtained by matched filtering should be apparent. The classical limit
of azimuth resolution, which is one-half the antenna aperture, is derived in Section 4.7.1 from the viewpoint of
processed bandwidth and SAR system velocities.
Finally, the foregoing discussion leads to the concept of synthetic aperture, which is presented in Section
4.7.2. The signal processor operates on a group of signals obtained during the time that the sensor illuminates a
selected target, and in doing so, creates the effect that would be obtained by a single antenna with a very long
aperture. This concept of synthetic aperture also leads to an alternate derivation of azimuth resolution.
The chapter ends with three appendices. The first appendix derives a simple form of the range equation for
a satellite orbit that is locally circular, and an Earth that is locally spherical. It justifies the approximate radar
velocity used in Section 4.3.1. The second appendix describes quadrature demodulation in detail, including how to
correct for calibration errors between the two quadrature channels. The third appendix takes another look at the
meaning of "synthetic aperture," this time from an antenna viewpoint.
4.2 SAR Geometry
The purposes of this section are to describe SAR data acquisition geometry and to define the geometry-related
terms used in the text.
Figure 4.1: SAR data acquisition geometry, showing the slant range (before processing) and the plane of zero Doppler.
The terms used to describe the SAR geometry are defined as follows.
Target: This is a hypothetical point on the Earth's surface that the SAR system is imaging. The SAR system
actually images an area on the ground, but to develop the SAR equations, a single representative point on
the ground is considered. This point is called a "point target" or "point scatterer," or simply "target" or
"scatterer."
Beam footprint: As the platform advances, pulses of electromagnetic energy are transmitted at regular intervals
towards the ground. During the transmission of a particular pulse, the radar antenna projects a beam onto
an area of the ground referred to as the beam "footprint." The position and shape of the footprint is
dictated by the antenna beam pattern and the sensor/Earth geometry. This footprint is said to be
"illuminated" by the radar beam.
Nadir: The nadir is the point on the Earth's surface directly below the sensor, so that the "normal" to the
Earth's surface at the nadir passes through the sensor. For a spherical Earth model, the vector from the
sensor to the Earth's center intersects the Earth's surface at the nadir. This is not true for an ellipsoidal
model.
Radar track: As the nadir point moves along the Earth's surface, it traces out the radar track.
Platform velocity: This is the velocity, denoted by Vs, of the platform along the flight path.
Beam velocity: This is the velocity, denoted by Vg, with which the zero Doppler line sweeps along the
ground.
For the satellite case, Vs is the orbital velocity, which can be expressed in either Earth centered inertial
(ECI) coordinates or Earth centered rotating (ECR) coordinates. The set of ECI axes does not move with
Earth rotation, while the set of ECR axes does, as discussed in Chapter 12. For a circular orbit with a
constant angular velocity, Vs is a constant in ECI coordinates, but varies in ECR coordinates, due to the
difference in Earth tangential speeds at different latitudes.3 From now on, Vs is assumed to be expressed in
ECR coordinates, unless otherwise stated, as this simplifies some formulations.
The beam velocity, Vg, is the speed of the zero Doppler line along the Earth's surface. Assuming the
satellite attitude is controlled so that the beam center is approximately steered to zero Doppler (or other
suitable reference), Vg can be considered to be the velocity of the beam sweeping along the surface. For a
satellite with an altitude of 800 km, Vg is about 12% less than Vs, because the orbit "circumference" is
greater than the track circumference. In addition, Vg varies around the orbit as the Earth's radius and
tangential speed change.
For the aircraft case, Vs is the nominal aircraft speed relative to the Earth. It can be assumed that Vg =
Vs for the aircraft case, because the Earth's curvature is small compared to the SAR geometry. The true
aircraft speed varies, but is compensated by changing the PRF to make the "pulses" evenly spaced on the
ground.
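A simplified sketch of the Vs/Vg relationship for a circular orbit, ignoring Earth rotation and using an assumed mean Earth radius: the nadir point moves on a circle of radius Re while the satellite moves on one of radius Re + h, so Vg ≈ Vs Re/(Re + h).

```python
import math

Re = 6371e3                      # assumed local Earth radius, m
h = 800e3                        # orbit altitude, m
mu = 3.986004418e14              # Earth's gravitational parameter, m^3/s^2

Vs = math.sqrt(mu / (Re + h))    # circular orbital speed
Vg = Vs * Re / (Re + h)          # nadir/beam speed along the ground
print(round(Vs), round(Vg))                  # roughly 7500 and 6600 m/s
print(round(100 * (1 - Vg / Vs), 1))         # about 11-12% less, as stated above
```

This reproduces the "about 12% less" figure quoted in the text for an 800-km orbit; the actual Vg also varies with Earth rotation, latitude, and local Earth radius, which this sketch ignores.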
Azimuth: In the context of SAR processing, this is a direction aligned with the relative platform velocity vector
(or sensor velocity vector in ECR coordinates). It can be considered as a vector parallel to the net sensor
motion, as in Figure 4.1, or as a vector in the slant range plane, as in Figure 4.2.
Zero Doppler plane: This is the plane containing the sensor that is perpendicular to the platform velocity
vector (in ECR coordinates). It is approximately perpendicular to the azimuth axis, where the approximation
comes from the fact that the platform may be climbing or descending. The intersection of this plane with
the ground is called the zero Doppler line. When this line crosses the target, the relative radial velocity of
the sensor, with respect to the target, is zero.
Range of closest approach: The distance from the radar to the target varies with time as the platform moves.
When the range is a minimum (when the zero Doppler line crosses the target), it is called the range of
closest approach, denoted by R0 in Figure 4.1.
Position of closest approach: The position of closest approach is the position of the radar when it is closest
to the target, as shown by point P2 in Figure 4.1. Note that the target may not be illuminated when the
sensor is at this point, because of beam squint.
Zero Doppler time: This is the time of closest approach, measured relative to an arbitrary time origin.4 Most
SAR processing algorithms, including the ones discussed in this book, register targets to positions
corresponding to their zero Doppler times, referred to as "compression to zero Doppler."
Beamwidth: The radar beam can be viewed as a cone, and the footprint viewed as the intersection of the cone
with the ground. The beam has two significant dimensions: its angular width in the azimuth and elevation
planes, respectively. In each plane, the half-power beamwidth, or simply beamwidth, is defined by the angle
subtended by the beam "edges," in which the beam edge is defined when the radiation strength is 3 dB
below the maximum.5
In azimuth, with a uniformly driven aperture, the beamwidth is approximately the wavelength divided by
the antenna length in this direction. In elevation, the beamwidth governs the width of the imaged "range
swath." Its formula is more complicated, since the elevation radiation pattern is usually shaped with a
nonuniform aperture.
The radar beamwidth is not affected by Earth curvature or rotation, but it is shown later that the
exposure time, the azimuth bandwidth, and the resolution are affected (see Section 4.5.5).
Target trajectory: The range from the radar to a target changes during the time that the target is illuminated
by the radar beam. When drawn on a two-dimensional plot versus range and azimuth, the locus of received
target energy is curved, and is referred to as the target trajectory in signal space [see Figure 4.2(b)].
Beam center crossing time: This is the difference between the time when the zero Doppler line crosses the
target, and the time when the beam centerline crosses the target. It is positive when the beam points back-
wards relative to the zero Doppler line; in other words, when the beam center crosses the target after the
zero Doppler line crosses the target. It is sometimes referred to as the beam center offset time.
Signal space and image space: There are two two-dimensional spaces used for the SAR data in the signal
processor. The signal space contains the received SAR data, and the image space contains the processed
data. If the data in the signal space are displayed, features imaged by the radar are not recognizable. The features
will emerge only after extensive processing is performed on the input data. The processed data are defined in
the image space, since the data now form a meaningful image (see Figure 4.2).
(a) Data acquisition; (b) SAR signal space; (c) SAR image space
Figure 4.2: Definitions of range at different points in the SAR system and
processor.
Range: First, the generic term "range" can mean slant range or ground range, as shown in Figure 4.1. The
former is measured along the radar line-of-sight, while the latter is measured along the ground. Because all
SAR processing operations use the slant range definition, the usual convention is that "range" defaults to
"slant range" when not specified. Second, there are two cases to consider in the definition of range: signal
space and image space. In signal space, range is a distance measured from the radar antenna to the target
on the ground. It is not orthogonal to the azimuth axis unless the squint angle, as defined in Figure 4.1, is
zero. This range direction is called the radar "line-of-sight" - it is approximately along the beam centerline
or boresight, but the direction varies with the location of the target within the beam. After the SAR
processing, the image is registered to the azimuth position of closest approach, and to the range of closest
approach. At this point, the range axis is perpendicular to the azimuth axis.6
Figure 4.2 shows the difference between range in the input signal space and in the final image after
compression to zero Doppler. Figure 4.2(a) shows the physical coordinates, with four targets on the Earth's
surface. The antenna is assumed to look ahead (i.e., squinted forward), as in Figure 4.1, except that the
antenna is looking left in this figure. The radar is moving down the page, and the beam center crosses
Targets A and B at the same time. Later it crosses Target D, and finally Target C. The range R is
measured along the radar beam, as in Figure 4.1.
In Figure 4.2(b), the target trajectories are shown in the signal memory at the input to the SAR processor.
They are located according to their range (horizontally) and their beam center crossing time (vertically). In
this memory, the range R' is relative to the first sample, as controlled by the range gate delay, RGD

R' = R - RGD c/2   (4.1)

The RGD is the difference in time between the transmission of the pulse and the recording of the first
sample of the associated echo, and c = 2.997925 x 10⁸ m/s is the speed of light.
In Figure 4.2(c), the targets are focused in image space to positions according to their zero Doppler times. Range
now lies in the zero Doppler direction (see R0 in Figure 4.1). The zero Doppler time is independent of the
antenna squint angle, so the target positions in the final image do not depend upon the squint angle. Similar to
the signal space, the range variable R0' is relative to the first processed sample, and for a given target, its slant
range of closest approach is R0 = R0' + RGD c/2.
Slant range plane: This is the plane containing the relative (ECR) sensor velocity vector and the slant range
vector for a given target. The orientation of this plane, relative to the local vertical, changes with targets at
different ranges, R0.
Ground range: This is the projection of slant range onto the ground. If the image is to be presented in a
maplike format, the slant range variable is converted to ground range. Assuming that the data are registered
to zero Doppler, ground range is the direction orthogonal to the azimuth axis and parallel to the Earth's
surface, with its origin at the nadir point, as shown in Figure 4.1.
Squint angle: This is the angle, θsq, that the slant range vector makes with the zero Doppler plane,7 and is an
important component in the description of the beam pointing direction. It is measured in the slant range
plane. If viewed from above (i.e., projected to the ground plane), it coincides with the beam yaw angle. The
squint angle depends upon the target range, R0, for a given beam pointing direction.
Note that the zero Doppler time of a target is independent of the squint angle, but the beam center
crossing time does depend upon the squint angle. Since the zero Doppler plane in ECR coordinates (from
which θsq is measured) accounts for Earth curvature and rotation, θsq is not the squint angle in inertial
space. The computation of θsq from beam pointing and Earth/platform geometry is discussed in Chapter
12.
Cross range: This is a direction orthogonal to the radar's line-of-sight. Unless the squint angle is zero, the cross
range and azimuth axes are not parallel. Theoretically, "azimuth" resolution is developed along the cross
range axis instead of the azimuth axis. But in stripmap SAR, the cross range resolution does not differ
significantly from the azimuth resolution, because the squint angle is usually small. Since this book
concentrates on processing stripmap data, the generic definition of "azimuth resolution" is used throughout
this book, and the cross range/azimuth distinction is pointed out as necessary.
The SAR image formation steps produce an image in slant range and azimuth coordinates, as in Figure 4.2(c). It
is often desirable to resample the image to coordinates corresponding to those of a map or an optical sensor,
where the range and azimuth axes have equal scales. In this postprocessing step, the concept of ground range
arises, which is a distance measured along the surface of the Earth, approximately perpendicular to the azimuth
direction. The conversion from slant range to ground range provides a geometrically realistic image, approximately
aligned with the radar track. 8
Figure 4.3: Satellite cross-track geometry, illustrating slant range, R0, versus
ground range, G, and the associated sample spacings, ΔR and ΔG.
For the case of zero squint and a locally circular Earth, the relationship between slant range and ground
range coordinates is illustrated in Figure 4.3. Let the line joining the radar and the Earth's center intersect the
Earth at Point E. Ground range is the arc length along the Earth's surface from E to the target. It is marked
by G in the figure, and βe is the angle subtended by G at the Earth's center. Re is the local radius of the
Earth, taken at the scene center. Let h be the altitude of the platform with respect to E, θn the off-nadir
angle, and θi the incidence angle. For the locally circular Earth approximation, the geometric variables in Figure
4.3 are related by the law of cosines

R0² = Re² + (Re + h)² − 2 Re (Re + h) cos βe        (4.2)

the law of sines

sin θn / Re = sin θi / (Re + h) = sin βe / R0        (4.3)

and by

G = Re βe        (4.4)

The incidence angle, θi, is larger than the off-nadir angle, θn, by the angle, βe. The difference is negligible in the
airborne case, but is a few degrees in the satellite case.
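As a numerical sketch of the slant-to-ground range conversion of (4.2) to (4.4), the following code converts an assumed slant range into a ground range and incidence angle; the Earth radius, altitude, and slant range values are illustrative assumptions, not parameters of any particular mission.

```python
# Sketch of the slant-to-ground range conversion of (4.2)-(4.4).
# Re, h, and R0 below are illustrative assumptions, not mission values.
import math

Re = 6378e3   # local Earth radius (m), assumed
h  = 800e3    # platform altitude above point E (m), assumed
R0 = 850e3    # slant range to the target (m), assumed

# Law of cosines (4.2) gives the Earth-centre angle beta_e subtended by G:
cos_be = (Re**2 + (Re + h)**2 - R0**2) / (2.0 * Re * (Re + h))
beta_e = math.acos(cos_be)

G = Re * beta_e                                    # ground range (4.4)
theta_n = math.asin(Re * math.sin(beta_e) / R0)    # off-nadir angle, law of sines (4.3)
theta_i = theta_n + beta_e                         # incidence angle exceeds theta_n by beta_e

print(round(G / 1e3, 1), "km ground range,", round(math.degrees(theta_i), 2), "deg incidence")
```

For these assumed values, the ground range is about 271 km and the incidence angle about 21°, a few degrees larger than the off-nadir angle, as described above.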
An expanded view of the target area is shown on the right side of Figure 4.3. The length, ΔR, represents
the distance between two slant range samples, and the dotted line is a small part of a "constant range" circle.
The ground is assumed to be locally flat, and the dashed line is the local vertical. Then ΔG is the distance along
the ground represented by the range sample.
For a given radar mode, the slant range sample spacing, ΔR, is a constant, and the ground sample spacing,
ΔG, varies with the local incidence angle
ΔG = ΔR / sin θi        (4.5)
The quantities ΔR and ΔG can be thought of as one range resolution cell, and 1/sin θi gives the ground range
resolution as a factor of the slant range resolution. Equation (4.5) shows how the ground range resolution
degrades with a decreasing off-nadir angle, with the worst case occurring when θi is zero. This is unlike optical
sensors, which enjoy the best resolution when looking straight down.
The variation in ground range resolution with range is largest when the incidence angle is small. It is
interesting to consider the RADARSAT-1 beams, as RADARSAT exhibits a large variety of beams with different
incidence angles. There are seven regular beams, called S1 to S7, and three wide swath beams, called W1 to W3
[3]. The largest variation of ground range resolution occurs in Beam W1, which is the widest non-ScanSAR beam
with the smallest incidence angle. Figure 4.4 shows the slant range, incidence angle, and ground range resolution
for the W1 beam. As the slant range resolution is 13.6 m for this RADARSAT-1 beam, the ground range
resolution varies from 27 m at far range to 40 m at near range.
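The quoted resolution figures follow directly from (4.5). A minimal check, using representative near- and far-range incidence angles for a low-incidence beam (the angles here are assumed for illustration):

```python
# Ground range resolution from slant range resolution via (4.5).
import math

dR = 13.6  # slant range resolution (m), as quoted for the RADARSAT-1 W1 beam

for theta_i_deg in (20.0, 30.0):   # representative near/far incidence angles (assumed)
    dG = dR / math.sin(math.radians(theta_i_deg))
    print(theta_i_deg, "deg ->", round(dG, 1), "m")
```

This gives about 39.8 m at the smaller angle and 27.2 m at the larger one, consistent with the 40 m / 27 m figures above.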
A generic satellite geometry is illustrated in Figure 4.5. The satellite orbit is approximately a low-eccentricity
ellipse, defined by the lengths and orientation of the semimajor and semiminor axes, and by the inclination of the
orbit plane with respect to the equator. 9
Figure 4.4: The variation of ground range resolution for the RADARSAT-1
Wl beam.
The choice of the orbit parameters for remote-sensing SARs involves a number of complicated tradeoffs, and
a few considerations are mentioned here. The orbit altitude above the Earth's surface is often around 800 km,
being a compromise between power requirements and atmospheric drag. The orbit eccentricity is close to zero, so
that the altitude is nearly constant around the orbit. If the orbit is circular, the square of the orbit period, P, is
related to the cube of the orbit radius, Rs, by
P² = (4π² / µe) Rs³        (4.6)
where µe = 3.9860 × 10¹⁴ m³/s² is the gravitational constant of the Earth, and the period is expressed in
seconds. This corresponds to a satellite angular velocity of

ωs = 2π / P = √( µe / Rs³ )        (4.7)
radians per second, and to a satellite inertial velocity of

Vs = ωs Rs = √( µe / Rs )        (4.8)
For example, a satellite with a nominal altitude of 800 km (i.e., an orbit radius of 7168 km) has a period of
100.66 minutes, an angular velocity of 1.0403 mrad/s, and a tangential speed of 7457 m/s, assuming a circular
orbit.
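The numbers in this example can be reproduced from the circular-orbit relations above; a minimal check, assuming a perfectly circular orbit:

```python
# Circular-orbit period, angular velocity, and inertial speed for an
# orbit radius of 7168 km (800 km nominal altitude), per (4.6) and (4.7).
import math

mu_e = 3.9860e14        # Earth's gravitational constant (m^3/s^2)
Rs   = 7168e3           # orbit radius (m)

ws = math.sqrt(mu_e / Rs**3)   # angular velocity (rad/s)
P  = 2.0 * math.pi / ws        # orbit period (s)
Vs = ws * Rs                   # tangential (inertial) speed (m/s)

print(round(P / 60.0, 2), "min,", round(ws * 1e3, 4), "mrad/s,", round(Vs), "m/s")
```

This reproduces the period of 100.66 minutes, the angular velocity of 1.0403 mrad/s, and the tangential speed of 7457 m/s quoted above.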
The orbit inclination is usually set to approximately 98°, so that the orbit is Sun-synchronous. With this
inclination, the Earth's oblateness causes the orbit plane to precess once per year, so that it has a fixed angle
with respect to the Sun. This simplifies the power collection strategy of the solar panels, an important issue, since
the radar coverage and SNR are directly proportional to the available power. An inclination greater than 90°
indicates that the satellite is orbiting the Earth in a westerly direction. As a point on the Earth's surface is
moving eastward, the average relative target velocity and the average beam velocity are higher than if the satellite
were moving eastward. This gives a slightly higher coverage rate over the Earth's surface.
Figure 4.5: A generic satellite geometry. Latitude and longitude lines are
drawn 10° apart (1,100 km at the equator).
The single most important parameter in SAR processing is the slant range from the sensor to the target. This
range varies with azimuth time, and is defined by the so-called range equation. 10 As the sensor approaches the
target through the motion of the radar platform, the range decreases with every pulse. After the sensor passes
the target, the range increases with every pulse.
This change in range has two important implications. It causes a phase modulation from pulse to pulse,
which is necessary to obtain fine azimuth resolution in the SAR processor. However, it also causes the received
data to be skewed in computer memory, an effect called range cell migration (RCM). As shown later, this
range/azimuth coupling must be taken into account in the SAR processing.
To get the exact range equation, one must be able to model the sensor motion, plus the motion of the
target or surface, if any. This can get quite complicated but, in most cases, the simple geometry of Figure 4.1
can be used, with an appropriate choice of sensor velocity. This results in a hyperbolic form of the range
equation, which allows the signal properties in various domains to be represented conveniently and the processing
equations to be derived easily (see Chapter 5). For these reasons, this section focuses its discussion on a
hyperbolic model.
To develop the hyperbolic form of the range equation, a simplified form of the geometry of Figure 4.1 is
considered. In this case, the flight path is assumed to be locally straight, and the Earth is assumed to be locally
flat and not rotating. This is a good model for the aircraft case, where the distances are much shorter than the
satellite case, and the aircraft follows the Earth's atmosphere as it rotates.
Assuming a velocity, Vr, pertaining to the simplified case, the distance X in Figure 4.1 equals Vr η, where η
is the azimuth time referenced to the time of closest approach. Then, using the Pythagorean theorem, the range
to the target, R(η), is given by the hyperbolic equation

R²(η) = R0² + Vr² η²        (4.9)
where R0 is the slant range when the radar is closest to the target, that is, R0 is the range of closest approach.
For the aircraft case, the beam is assumed to be stationary with respect to the flight direction, so that the
geometry of Figure 4.1 remains stable for a given radar scene. The variable, Vr, is the nominal aircraft speed,
and also equals the speed of the beam footprint along the surface. However, in the satellite case, the geometry is
more complicated, as the orbit is curved, the Earth's surface is curved, and the Earth is rotating independently
of the satellite orbit [4, 5].
When an accurate, curved-geometry model is used, it turns out that the range equation is still very close to a
hyperbola, within the limits of the target exposure time. Thus, the parameter Vr can still be used as a type of
"velocity" in the satellite case, provided it is interpreted in a special way. Specifically, when R²(η) is expressed as
a power series in η, the cubic and higher terms are very small for typical remote sensing SARs, particularly the
C-band and higher frequency SARs. Then, Vr² is the quadratic coefficient of (4.9), and can be calculated from a
geometry model as one-half the second derivative of R²(η).
With this assumption, the hyperbolic range equation (4.9) holds for a satellite, except that Vr is not a
physical velocity, but a pseudovelocity, selected so that the hyperbola (4.9) models the actual range equation. The
parameter Vr has been called the "radar velocity," "effective velocity," or "speed parameter" by several authors [4].
A better term may be "effective radar velocity," although some authors prefer not to call it a velocity at all [6].
Important differences from the aircraft case include the fact that Vr varies with range, and varies slowly with
azimuth as the satellite orbit and the Earth rotation component change. Its numerical value lies between the
satellite platform velocity, Vs, and the lower speed, Vg, with which the beam moves along the ground. The
hyperbolic model is adequate over the duration of the target exposure time, which is typically on the order of a
second.
To see how the effective radar velocity, Vr, relates to the physical velocities of the satellite and the beam, it
is useful to consider the two geometry models in Figure 4.6. Figure 4.6(a) shows the radar/beam geometry in the
slant range plane, assuming a curved orbit and a curved Earth. The satellite moves with tangential velocity, Vs,
and the beam footprint moves along the surface with a velocity, Vg. Assume that the beam centerline, CB, makes
a squint angle, θsq, with respect to the zero Doppler line, CA, and illuminates a point, B, on the ground.
Now, can this curved geometry be related to the rectilinear geometry of Figure 4.6(b)? A rectilinear
geometry can be formed out of the curved geometry if tangential lines are drawn through points C and A, and
the angle θsq is increased by swinging the vector CB outwards, until it meets the tangential line through A
(keeping its length R the same). Using the primed notation for the rectilinear geometry in Figure 4.6(b), the
vectors C'A' and C'B' are the same length as their associated vectors in Figure 4.6(a), but the distance, Xr, is
larger than Xg because the angle ACB has increased. The new angle A'C'B' is used in SAR processing, and it is
called θr. Distances and angles have been distorted in Figure 4.6(b), but time is unaltered, that is, the time taken
for the satellite to move from C to D is the same as that from C' to D', and is the same as it takes the beam
centerline to go from A to B and from A' to B'.
By comparing the two geometries in Figure 4.6, it can be seen that A'B' > AB, and that C'D' < CD, so
that the effective radar velocity satisfies Vg < Vr < Vs. Note that Vg < Vs models the property that the satellite
attitude changes by 2π radians every orbit. As shown in Appendix 4A, an approximation for Vr is

Vr ≈ √( Vs Vg )        (4.10)
Note that Vs and Vg vary with orbit position and range, because the magnitude of the Earth's tangential velocity
changes with latitude, as well as its direction relative to the satellite velocity vector. In this way, Vr changes with
time and range, and must be updated around the orbit. The main approximation in (4.10) comes from the fact
that the orbit is not circular. An example in Section 13.3 indicates that the approximation is accurate to about
0.6% for typical RADARSAT parameters, and that Vr changes by up to 1% over a swath width of 300 km.
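The geometric-mean approximation (4.10) can be exercised in a few lines. In this sketch the satellite speed is taken from the circular-orbit example above, and the beam footprint speed is assumed to be about 12% lower, a typical low-Earth-orbit figure used here purely for illustration:

```python
# Effective radar velocity from the approximation Vr ~ sqrt(Vs * Vg) of (4.10).
import math

Vs = 7457.0        # satellite inertial speed (m/s), from the circular-orbit example
Vg = 0.88 * Vs     # beam footprint speed, assumed ~12% lower (illustrative)

Vr = math.sqrt(Vs * Vg)

print(round(Vr), "m/s")          # lies between Vg and Vs
print(round(Vs / Vr, 3))         # ratio that scales the squint angles
```

The ratio Vs/Vr ≈ 1.066 for these assumed speeds is the roughly 6% scaling between the rectilinear and physical squint angles mentioned later in this section.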
While the approximation (4.10) is not accurate enough for calculating the azimuth matched filter coefficients
in precision SAR processing, it is adequate for analysis purposes, such as finding the exposure time and Doppler
bandwidth in Section 4.5.5. For precision SAR processing, the velocity, Vr, must be calculated from a refined
geometric model, as shown in Chapter 12.
Figure 4.6: Geometry models: (a) curved Earth geometry; (b) rectilinear geometry.
Note that Vr and Vg vary with range. They are calculated at the zero Doppler point of the target, so they
are a function of R0, and do not vary along the target exposure. That means Vr is a constant for a particular
target, an important fact to note in a point target simulator. This property is also used when the target's
spectrum is derived in Chapter 5.
Which velocity is to be used depends upon the application. As shown later, Vr is the velocity used to obtain
the RCM and the azimuth FM rate in the azimuth SAR processing, and Vs is used to obtain the Doppler
bandwidth. Finally, when ground resolution and ground distance are concerned, Vg is used.
To help understand the physical meaning of the various velocities, distances, and angles in Figure 4.6, it is useful
to examine the relationship between the squint angles defined in the two geometries.
Using small angle approximations in the curved Earth geometry of Figure 4.6(a), the squint angle, θsq, is
defined as

sin θsq = Xg / R(η) = − Vg η / R(η)        (4.11)

and a companion angle, θg, is defined using the satellite's along-track distance, Xs

sin θg = Xs / R(η) = − Vs η / R(η)        (4.12)

The negative signs are due to the fact that for a positive θsq and θg, the radar looks ahead, and hence the
position of closest approach, where η is defined to be zero, has not been reached yet. In other words, η is
negative for positive squint angles. Note that θg is not the radar incidence angle, because Figure 4.6 depicts the
geometry in the slant range plane.
In the rectilinear geometry of Figure 4.6(b), a new squint angle, θr, can be defined that is useful for analysis
and, sometimes, for data processing. The new squint angle is defined in the equivalent rectilinear geometry using

sin θr = Xr / R(η) = − Vr η / R(η)        (4.13)

Similar to the variable Vr, the angle θr is not a physical angle, but serves a useful purpose in SAR signal
analysis. Since this is the angle often used in SAR system analysis instead of the physical squint angle, it is
called the "squint angle" for brevity.
Combining (4.11) and (4.13), and using small angle approximations, the following ratios are equal

θsq : θr : θg = Vg : Vr : Vs = Xg : Xr : Xs        (4.14)

From (4.10) and (4.14), it is found that the squint angle, θr, is a scaled version of the physical squint angle, θsq

θr = (Vr / Vg) θsq = (Vs / Vr) θsq        (4.15)

For a typical satellite case, θr is about 6% larger than θsq, but the cosines of these two angles differ by less than
0.08% for a squint angle as large as 6°. However, it is important to distinguish between these angles when their
sines or tangents are invoked.
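The claim about the cosines can be verified directly; the 1.066 scale factor below is a typical Vs/Vr ratio, assumed here for illustration:

```python
# Verify that cos(theta_r) and cos(theta_sq) differ by less than 0.08%
# for a 6-degree squint, using an assumed Vs/Vr ratio of 1.066.
import math

scale = 1.066                       # assumed Vs/Vr ratio, per (4.15)
theta_sq = math.radians(6.0)        # physical squint angle
theta_r  = scale * theta_sq         # rectilinear-geometry squint angle

rel_diff = (math.cos(theta_sq) - math.cos(theta_r)) / math.cos(theta_sq)
print(round(100 * rel_diff, 3), "%")
```

The relative difference comes out below 0.08%, even though θr itself is 6.6% larger than θsq, illustrating why only the sines and tangents of the two angles need to be distinguished.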
The following formulas are also useful. Using (4.13), cos θr can be written

cos θr = √( 1 − (Vr η / R(η))² ) = R0 / R(η)        (4.16)
Since the hyperbolic equation assuming rectilinear geometry is used in the processing, the angle θr is relevant,
but the angles θsq and θg are rarely used. For example, the cross track direction is at an angle θr with respect to
azimuth (Section 5.5 addresses this), hence the effective velocity and ground velocity components in the cross
range direction are Vr cos θr and Vg cos θr, respectively, and the latter velocity component is used to derive the
(cross range) resolution.
Equation (4.17) can also be obtained by rearranging the range equation (4.9) as

R(η) = R0 / cos θr        (4.18)
A transmitted pulse can be described by

spul(τ) = wr(τ) cos[ 2π (P0 + P1 τ + P2 τ² + · · ·) ]        (4.19)

where τ is the range time, and Pn are the phase coefficients, when the signal phase is expressed as a power
series. The pulse envelope, wr(τ), is usually approximated by a rectangular function
wr(τ) = rect( τ / Tr )        (4.20)
where Tr is the pulse duration. Even if the pulse envelope is not quite rectangular, it is usually safe to assume a
rectangular envelope when defining the matched filter for the processing. In early radar systems, the pulse was
generated by an analog Surface Acoustic Wave (SAW) device [7], but now it is generated by a digital synthesizer.
The most commonly used pulse has a linear FM characteristic

spul(τ) = wr(τ) cos( 2π f0 τ + π Kr τ² )        (4.21)

where Kr is the FM rate of the range pulse. Here, τ is referenced to the center of the pulse for convenience. 11
In this form, the phase coefficients are P0 = 0, P1 = f0, P2 = Kr/2, and Pn = 0 for n > 2. For ease of analysis,
this simple linear FM form is assumed from now on, unless otherwise stated.
The instantaneous frequency of the signal, spul(τ), varies with fast time, τ. For a linear FM pulse, given by
(4.21), the instantaneous frequency is fi = f0 + Kr τ. As the radar wavelength is c/fi, the wavelength also varies
within the pulse. However, after demodulation, the variation of wavelength is not apparent, and the variation is
not used in any of the processing equations. Unless otherwise stated, the definition of λ used in this book is the
wavelength corresponding to the center frequency, λ = c/f0.
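The linear sweep of instantaneous frequency can be demonstrated with a short simulation. The FM rate, duration, and sampling rate below are arbitrary illustrative values; the pulse is generated at baseband (with f0 removed), as it would appear after demodulation:

```python
# Baseband linear FM pulse: the instantaneous frequency sweeps
# from -Kr*Tr/2 to +Kr*Tr/2 across the pulse.
import numpy as np

Kr = 4.0e11    # FM rate (Hz/s), illustrative
Tr = 10e-6     # pulse duration (s), illustrative
Fs = 100e6     # complex sampling rate (Hz), well above the 4 MHz bandwidth

t = np.arange(-Tr / 2, Tr / 2, 1 / Fs)   # fast time, referenced to the pulse centre
s = np.exp(1j * np.pi * Kr * t**2)       # phase = pi*Kr*t^2, per (4.21) at baseband

# Instantaneous frequency = (1/2pi) d(phase)/dt, estimated by differencing:
fi = np.diff(np.unwrap(np.angle(s))) * Fs / (2 * np.pi)

print(round(fi[0] / 1e6, 2), "to", round(fi[-1] / 1e6, 2), "MHz")  # approx -2 to +2
```

The estimated frequency runs linearly across the pulse, spanning the bandwidth |Kr| Tr (4 MHz for these assumed values).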
The choice of the sign of the FM rate is up to the system designer (the sign is embedded in the parameter,
Kr). When the sign is positive, the pulse is an "up chirp" because the pulse frequency increases with time.
Similarly, when the sign is negative, the pulse is said to be a "down chirp." The direction of the chirp affects
neither the structure of the SAR processing nor the quality of the processed image.
The signal bandwidth is a very important parameter, since it governs the range resolution and the sampling
requirements. The signal or pulse bandwidth is given by |Kr| Tr. When the demodulated, received signal is
sampled, the complex sampling rate, Fr, must be higher than the bandwidth to prevent aliasing. The range
oversampling ratio, αos,r, is the sampling rate divided by the signal bandwidth, and, in practice, is usually
between 1.1 and 1.2. Thus, the complex range sampling rate is
Fr = αos,r |Kr| Tr        (4.22)

After pulse compression, the slant range resolution is

ρr = 0.886 γw,r c / ( 2 |Kr| Tr )        (4.23)

where γw,r is the IRW broadening factor due to a processing window, and the factor of two in the denominator
accounts for the two-way path taken by the radar signal. Similar to (3.29), the pulse compression ratio is
approximately the time bandwidth product

TBPr = |Kr| Tr²        (4.24)
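These relations can be tied together in a few lines; the FM rate, pulse duration, and oversampling values below are illustrative assumptions, not the parameters of any particular radar:

```python
# Bandwidth, sampling rate, range resolution, and TBP for an assumed chirp.
Kr = 0.5e12          # FM rate (Hz/s), assumed
Tr = 34e-6           # pulse duration (s), assumed
alpha_os_r = 1.15    # range oversampling ratio, a typical value
gamma_w_r = 1.0      # IRW broadening factor (no processing window)
c = 3.0e8            # speed of light (m/s), approximate

B = abs(Kr) * Tr                          # signal bandwidth: 17 MHz here
Fr = alpha_os_r * B                       # complex sampling rate (4.22)
rho_r = 0.886 * gamma_w_r * c / (2 * B)   # slant range resolution (4.23)
TBP = B * Tr                              # time-bandwidth product (4.24)

print(round(Fr / 1e6, 2), "MHz,", round(rho_r, 2), "m,", round(TBP))
```

For these assumed values the sampling rate is about 19.6 MHz, the slant range resolution about 7.8 m, and the compression ratio about 578.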
Figure 4.7 shows how the data are acquired across a range swath. The radar beam has a certain 3-dB width in
the elevation plane, called the "elevation beamwidth." The beam illuminates a section of the ground, lying
between "near range" and "far range" in the figure. At a given instant in time, the pulse has a finite extent,
between the two dashed arcs in the figure.
The pulse expands outward in concentric spheres, expanding at the speed of light. The lower dashed arc in
Figure 4.7 shows the pulse at the instant it reaches the ground, at a time t 1 after it leaves the transmitting
antenna. At time t2, a fraction of a millisecond later, the trailing edge of the pulse passes the "far range" point.
In this way, each point on the ground, between near range and far range, is illuminated by the beam for a
duration of Tr. Note that, at any instant, only a portion of the beam footprint is illuminated by the pulse, and
this portion sweeps outwards at the speed of light divided by sin θi, where θi is the local beam incidence angle
shown in Figure 4.3. The reflected energy at any illumination instant is a convolution of the pulse waveform and
the ground reflectivity, gr, within the illuminated patch
sr(τ) = gr(τ) ⊗ spul(τ)        (4.25)
This energy arrives back at the receiving antenna between times 2t1 and 2t2. The receiver starts sampling
a few microseconds before 2t1 and finishes a few microseconds after 2t2, thereby recording the ground
reflections between near range and far range. If the elevation beam is too wide in relation to the interpulse
period, range ambiguities may occur, which result from the mixing of reflected energy from consecutive pulses at
the receiver.
Consider a point target at a distance, Ra, away from the radar, with a magnitude, A0, which models the
backscatter coefficient, σ0. This means that gr(τ) = A0 δ(τ − 2Ra/c) in (4.25), where 2Ra/c is the delay time of
the signal for that reflector. The signal received from the point target, from (4.21) and (4.25), is then
sr(τ) = A0 wr(τ − 2Ra/c) cos( 2π f0 (τ − 2Ra/c) + π Kr (τ − 2Ra/c)² + ψ )        (4.26)
Figure 4.7: Illustrating the radar beam's 3-dB elevation beamwidth and the
radar pulse spreading outward in concentric spheres.
The scattering process may cause a phase change in the radar signal upon reflection from the surface, which is
accounted for by the variable ψ in the equation. The present analysis is unaffected, as long as the phase change
is constant for a given reflector within the radar illumination time. Note that all variables in (4.26) are real.
The echo, sr(τ), contains a high frequency component, cos(2π f0 τ), which is the radar carrier frequency, and
a low frequency component consisting of the rest of the terms in (4.26). Appendix 4B shows how the high
frequency component is removed by a quadrature demodulation process, so that the maximum signal frequency is
on the order of the transmitted signal bandwidth.
The data often has a radiometric variation in the range direction caused by several factors. First, the echo
power is inversely proportional to the fourth power of the slant range. Second, the elevation beam pattern of
Figure 4.7 is not uniformly weighted. The antenna gain at the upper elevation angle is sometimes designed to be
higher than that at lower angles, to compensate for the 1/R⁴ law. Third, the reflectivity of the ground is a
function of the beam incidence angle, θi. Finally, there is a geometrical term, 1/sin θi, which arises when the
ground area is converted to an equivalent area, perpendicular to the radar beam. These effects, if uncorrected,
will cause a variation of intensities across the range swath in the processed image. The correction can be
performed in the processor, assuming a knowledge of the above factors.
In the previous section, the signal received from a single pulse was discussed. As the sensor advances along its
path, subsequent pulses are transmitted and received by the radar. The pulses are transmitted every 1/PRF of a
second, where PRF is the pulse repetition frequency. But before getting too far into a discussion of azimuth
parameters, an intuitive explanation of Doppler frequency is needed.
Consider a radar that transmits a pure tone, which is generated by a local oscillator. The signal is transmitted
through the antenna, and the resulting electromagnetic (EM) wave travels to the ground where it hits an object
and is reflected (scattered). The reflected EM wave travels back to the antenna, where it is converted into a
voltage. The received signal has the same waveform as the transmitted signal, but is much weaker and has a
frequency shift governed by the relative speed of the sensor (antenna) and the scatterer. If the distance from the
antenna to the scatterer is decreasing, the frequency of the received signal increases. Conversely, if the
distance to the scatterer is increasing, the frequency of the received signal decreases. The situation is analogous to
a high (low) frequency heard from the siren on an approaching (receding) ambulance.
It is this frequency, governed by the relative speed of the sensor and the target, which is called the SAR
Doppler frequency, in analogy with the well-known effect in physics. This discussion has taken a few shortcuts,
but does give an intuitive description of the Doppler effect in coherent radars. The radar is "coherent" if the
consistent timing of the local oscillator allows one to observe the change in phase and frequency of the received
signal very accurately.
The following points are omitted from the above discussion:
1. The radar system generates and transmits a finite-duration pulse, rather than a pure tone.
2. The radar electronics upconverts the pulse to a very high frequency (the radar carrier frequency), and
subsequently downconverts the received signal to the original frequency (or even lower, to baseband).
3. The pulse has a linear FM waveform rather than a tone, and, after reception and downconversion, it is
converted (compressed) to a sharp impulse that has the approximate shape of a sinc function.
4. The Doppler frequency is a function of the carrier frequency rather than the original baseband pulse
frequency.
5. The pulses are repeated at a precisely controlled time interval, called the pulse repetition interval (PRI) .
The inverse of this interval is the PRF.
Despite these simplifications in the analogy, the concept of Doppler frequency is still valid in the modulated,
pulsed radar. The effect of the pulses is to sample the waveform representing the Doppler-shifted, received signal,
with the sampling frequency being the PRF.
The sampling of the continuous signal creates an aliasing effect when the Doppler frequency exceeds the
sampling frequency (the samples are complex, so the aliasing rules, including the folding or Nyquist frequency,
follow the rules for complex signals) . It is the sampling that profoundly affects how the Doppler frequency is
observed and how it is estimated.
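For complex (I/Q) samples taken at the PRF, an observed Doppler frequency is ambiguous modulo the PRF. A small sketch of this wrap-around, using arbitrary example frequencies:

```python
# Observed (aliased) Doppler frequency for complex sampling at the PRF.
def observed_doppler(f_eta, prf):
    """Wrap a true Doppler frequency into the interval (-PRF/2, +PRF/2)."""
    return f_eta - prf * round(f_eta / prf)

PRF = 1700.0  # Hz, illustrative
for f_true in (800.0, 2500.0, -900.0):
    print(f_true, "->", observed_doppler(f_true, PRF))
```

A true Doppler of 2500 Hz and one of −900 Hz are both observed as 800 Hz, indistinguishable from a genuine 800 Hz signal; resolving this ambiguity is part of the Doppler centroid estimation problem discussed later.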
The transmitted pulses are evenly spaced, as shown in Figure 4.8, with each pulse represented by (4.19) or (4.21).
"Coherent" means that the start time and phase of each pulse is carefully controlled. The receiver and
demodulator must also maintain high timing accuracy. This coherency is an important property, necessary to
obtain high azimuth resolution in the SAR system.
Figure 4.8: Timing of transmitted radar pulses (not to scale).
When the radar is not transmitting, it can receive echoes reflected back from objects and surfaces on the
ground. A timeline of transmitted pulses and received echoes is shown in Figure 4.9. In an airborne case, each
echo is received directly after the transmitted pulse, before another pulse is transmitted. In a spaceborne case, the
echo from a specific pulse is received after 6 to 10 intervening pulses have been transmitted, because of the much
longer ranges involved. The number of intervening pulses can be determined from the sensing geometry.
For a satellite SAR with a PRF of 1700 Hz and a pulse duration of 34 µs , the time available to receive the
echo is 554 µs, although a few microseconds are needed at either end of the receive window to switch the signal
path. This time allows a slant range swath width of up to 80 km, although other constraints usually keep it
smaller, such as a varying satellite altitude that requires the receive window to be moved.
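The 554 µs figure and the swath limit follow from simple timing arithmetic, sketched here for the PRF and pulse duration just quoted:

```python
# Receive-window length and the slant range swath it can support.
prf = 1700.0        # pulse repetition frequency (Hz)
Tr = 34e-6          # pulse duration (s)
c = 3.0e8           # speed of light (m/s), approximate

window = 1.0 / prf - Tr       # time available to receive the echo
swath = window * c / 2.0      # slant range extent (two-way propagation)

print(round(window * 1e6, 1), "us,", round(swath / 1e3, 1), "km")
```

This gives about 554 µs and 83 km of slant range; the switching guard time at either end of the window reduces the usable swath toward the 80 km quoted above.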
Figure 4.9: Illustrating the transmit and receive cycles of a pulsed radar.
Between successive pulses, the radar platform advances in the azimuth direction by a small amount. The
separation between the footprints of each pulse, also known as azimuth sample spacing in the input data, is the
footprint velocity divided by the PRF. For an aircraft, the footprint velocity equals the platform velocity, but for
a satellite, the footprint velocity is about 12% less than the satellite velocity, as discussed in Section 4.2. The
separation of footprints is generally about 40% of the SAR antenna length, although it can be smaller than 40%
in aircraft cases (because aircraft SARs do not come close to hitting the range and azimuth ambiguity limitations
[8]). The separation is about 4 m for the ERS/ENVISAT satellites and 5 m for RADARSAT.
The azimuth sampling rate or PRF is selected by considering the following parameters and criteria:
Nyquist sampling rate: The PRF should be larger than the significant azimuth signal bandwidth, as it
corresponds to a complex sampler. The azimuth oversampling factor, αos,a, is usually about 1.1 to 1.4. If the
PRF is too low, azimuth ambiguities caused by aliasing will be troublesome. The azimuth oversampling ratio
is usually higher than the range oversampling ratio because the azimuth spectrum rolls off more slowly than
the range spectrum.
Range swath width: The sampling window can be up to 1/PRF − Tr seconds long, corresponding to a slant
range interval of (1/PRF − Tr) c/2 meters. The PRF should be low enough so that most or all of the near
range to far range interval illuminated by the beam (the swath width) falls within the receive window, as
shown in Figure 4.9. If the PRF is too large in relation to the echo duration, range ambiguities occur
because of echoes from different pulses overlapping in the receive window. If the range ambiguities are too
large and the PRF cannot be lowered, the antenna's elevation beamwidth can be reduced by making the
antenna wider or by adjusting the antenna weighting.
Receive window timing: The significant energy from the ground must arrive at the receiving antenna between
the pulse times. Unlike the previous criterion, which concerns the length of the receive window, this concerns
the start time of the window. The start time is particularly affected by the PRF in the satellite case when
a given transmitted pulse is not received until several pulse intervals have elapsed.
Nadir return: Sometimes a significant amount of energy arises from ground reflections at the nadir point, and
causes a bright streak in the image. This nadir return is bright because, when the incidence angle is small,
each range cell covers a large area and specular reflections occur. This energy is unwanted in satellite SARs,
since it is range ambiguous, and it is usually possible to choose a PRF for which the nadir return does not
fall within the receive window (or at least not within the main part of the imaged swath).
Each of these criteria is in conflict with some or all of the other criteria, so a compromise is needed,
especially in the satellite case. The tradeoff involves many of the SAR system parameters, notably the platform
height and velocity, operating range, radar wavelength, antenna length, and swath width. The tradeoff is mainly
between range ambiguity levels, azimuth ambiguity levels, and swath width, and results in a lower limit for the
antenna area [8, 9]. However, there have been systems built that accept more compromises, and that use an
antenna area smaller than this lower limit [10].
In the aircraft case, these restrictions are usually not a limiting factor, because the platform velocity is lower,
and the beam geometry restricts the swath width to well below the ambiguity limit. This means that the PRF
can be made higher than that needed to support the azimuth bandwidth. A higher PRF allows the transmission
of a higher average power, without raising the peak power or the pulse length, thereby improving the SNR.
When this occurs, the azimuth signal can be filtered and the sampling rate reduced, to increase the efficiency of
subsequent processing steps. This filtering and sample rate reduction is called "presumming," which lowers the
PRF, resulting in more efficient SAR processing.
As the platform advances, a target on the ground is illuminated by many hundreds of pulses. For each pulse, the
strength of the signal varies, primarily because of the azimuth beam pattern. 12 The azimuth beam pattern for a
zero squint case is shown in the top part of Figure 4.10 for three positions of the sensor, as seen in the slant
range plane. At sensor position A, the target is just entering the main lobe of the beam. The received signal
strength is shown in the middle part of the figure. The signal strength increases until the target lies in the center
of the beam, as shown at position B.
After the beam center crossing time, the signal strength decreases until the target lies in the first null of the
beam pattern, when the sensor is at position C. From then on, a small amount of energy will be received from
sidelobes of the beam pattern. The energy in the outer edges of the main lobe, as well as the energy in the
sidelobes, contribute to the azimuth ambiguities in the processed image, as discussed in Chapter 5. Doppler
centroid estimation errors aggravate these ambiguities (see Chapter 12).
The bottom part of the figure shows the Doppler frequency history of the target. The Doppler frequency is
proportional to the target's radial velocity with respect to the sensor. When the target is approaching the radar,
the Doppler frequency is positive, and it is negative when the target is receding. Thus the frequency versus time
curve has a negative slope.
Figure 4.10: Azimuth beam pattern and its effect upon signal strength and
Doppler frequency.
Revisiting the middle part of Figure 4.10, note that the received signal strength is governed by the azimuth
beam pattern. As most SAR antennas are unweighted in the azimuth plane, the one-way beam pattern is
approximately a sinc function [9]

    pa(θ) ≈ sinc(0.886 θ / βbw)   (4.27)

where θ is the angle measured from boresight in the slant range plane, βbw is the azimuth beamwidth, 0.886 λ/La,
and La is the antenna length along the azimuth direction. The received signal strength is given by the square of
pa(θ), because of the two-way propagation of the radar energy, and is usually expressed as a function of azimuth
time, η

    wa(η) = pa²{θ(η)}   (4.28)

where R(ηc) is the slant range to the target at the time it is illuminated by the beam center, and θsq,c is the
value of θsq at this time. In the rectilinear geometry discussed in Section 4.3, ηc can be expressed as

    ηc = -R(ηc) sin θr,c / Vr   (4.30)
Other azimuth parameters derived in this section include the exposure time, FM rate, and Doppler bandwidth.
These parameters depend on the antenna squint, and are evaluated at the beam center, when η = ηc. Hence, the
squint angle at the beam center, θr,c, is used instead of the general variable, θr, for these calculations.
Doppler Centroid
The Doppler centroid frequency at η = ηc is proportional to the rate of change of R(η), which is given by (4.9)

    fηc = -(2/λ) dR(η)/dη |η=ηc = -2 Vr² ηc / (λ R(ηc)) = +(2 Vr/λ) sin θr,c   (4.33)

in units of hertz. Equation (4.13) has been used to obtain the final equality. It is possible to express the Doppler
frequency in terms of the physical satellite velocity, Vs, and the physical squint angle, θsq,c. Using (4.15), fηc can
be written as

    fηc = (2 Vs sin θsq,c) / λ   (4.34)

This expression can be visualized from the fact that Vs sin θsq,c is the radial velocity of the radar along the
line-of-sight to the target, where Vs is in ECR coordinates.
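Equation (4.34) is straightforward to evaluate numerically. The sketch below uses the satellite column of Table 4.1, with Vs taken 6% higher than Vr (note 1 of the table); the 2° physical squint angle is an assumed value for illustration only:

```python
import math

# Doppler centroid from (4.34): f_etac = 2 * Vs * sin(theta_sq_c) / lambda.
Vr = 7100.0                      # effective radar velocity, m/s (Table 4.1)
Vs = 1.06 * Vr                   # physical satellite velocity, m/s
wavelength = 0.057               # C-band wavelength, m
theta_sq_c = math.radians(2.0)   # assumed physical squint angle

f_etac = 2.0 * Vs * math.sin(theta_sq_c) / wavelength
print(round(f_etac))             # Doppler centroid, Hz (several kHz)
```

Even a squint of a few degrees thus places the Doppler centroid many times the PRF away from zero, which is why centroid estimation (Chapter 12) matters.
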
Doppler Bandwidth
The Doppler bandwidth is the frequency excursion of the target signal while the target is illuminated by the
beam. In the slant range plane, the beam sweeps past the target through the angle θbw (Vr/Vg), giving

    Δfdop ≈ (2 Vr cos θr,c / λ) θbw (Vr/Vg) = (2 Vs cos θr,c / λ) θbw   (4.35)

in which the scaling factor, Vr/Vg, is due to the rectilinear geometry assumption. This equation makes use of the
fact that the bandwidth is the frequency excursion experienced by the target during the time in which the target
is illuminated by the 3-dB width of the radar beam, θbw = 0.886 λ/La. Therefore, the Doppler bandwidth is

    Δfdop ≈ 0.886 (2 Vs cos θr,c) / La   (4.36)

Two other important parameters are the target exposure time, Ta, and the azimuth FM rate, Ka. The exposure time,
as defined by how long the target stays in the 3-dB beam limits, is given by

    Ta = 0.886 λ R(ηc) / (La Vg cos θr,c)   (4.37)

In the above equation, 0.886 λ/La is the azimuth beamwidth, so 0.886 R(ηc) λ/La is the projection of
this beamwidth on the ground. This projection is lengthened in azimuth by the factor 1/cos θr,c for a nonzero
squint angle. Again, note that the use of the velocity, Vg, the smallest of the three velocities defined in Figure
4.6, takes into account the satellite attitude drift through which the sensor nadir remains pointed at the local
vertical as it proceeds around its orbit. This attitude drift has the effect of lengthening the exposure time.
Azimuth FM Rate
The azimuth FM rate, Ka, is the derivative of the azimuth frequency, where the azimuth frequency is -2/λ
times the first derivative of range. Assuming the velocity approximation (4.10), the FM rate is

    Ka ≈ 2 Vr² cos² θr,c / (λ R(ηc))   (4.38)

An alternate derivation of the Doppler bandwidth of (4.36) is obtained by multiplying (4.37) and (4.38).
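As a quick numerical check of this alternate derivation, multiplying the FM rate by the exposure time for the two columns of Table 4.1 reproduces the tabulated Doppler bandwidths to within rounding of the table entries:

```python
# Doppler bandwidth as the product of azimuth FM rate and exposure time,
# checked against the values listed in Table 4.1.
cases = [
    ("airborne", 131.0, 3.4, 443.0),     # Ka (Hz/s), Ta (s), bandwidth (Hz)
    ("satellite", 2095.0, 0.64, 1338.0),
]
for name, Ka, Ta, df_table in cases:
    df = Ka * Ta
    print(name, round(df), "Hz vs table", round(df_table), "Hz")
```
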
This section first shows how the received radar signal is configured as a two-dimensional signal, within the signal
processor. Then, the important SAR sensor impulse response is presented. Finally, typical values of the parameters
of the two-dimensional signal are given.
In a simple sense, the received radar signal is one-dimensional - a voltage as a function of time. In accordance
with the transmit/receive cycles of Figure 4.9, the waveform of the received signal could take the form shown in
Figure 4.11, where each segment of the signal represents the ground echo received during one pulse cycle. The
gaps between each segment represent the time when the receiver is turned off, which includes the pulse
transmission time, plus an allowance for switching of the signal paths. The format of the signal in Figure 4.11 is
how it might appear in a one-dimensional storage medium, such as a magnetic tape.
Figure 4.11: The received radar signal, shown as a one-dimensional function of time.
To see how this signal can be considered as a two-dimensional signal, it is useful to re-examine the SAR data
collection scenario, as portrayed in Figure 4.12. For simplicity, assume that the radar beamwidth is finite in
azimuth. When the sensor is at Point A, a target is just entering the radar beam. The received signal from the
target, which is part of the echo from a given transmitted pulse, is written into one row of SAR signal memory.
While this memory may actually be on tape or in downlink memory for the time being, it can be considered as
being in the memory at the input to the SAR signal processor.
Then, as the sensor advances, more pulses are transmitted, and the associated echoes are written into
successive rows in the signal memory. When the sensor is at Point B, the target leaves the beam, and the last
received energy of that target is written into SAR signal memory. Naturally, the signal memory contains data
from many targets, not just the one shown in the figure. The azimuth beamwidth is not finite in practice, which
means that energy received from the azimuth sidelobes from each target is also recorded before A and after B.
Going back to the "one-dimensional" signal, shown in Figure 4.11 , one can also think of this format as
two-dimensional, where the received signal is sampled and written into a computer memory, as in Figure 4.13.
The data from each segment or pulse are written into a new row in memory. The beginning of each row occurs
at a fixed time delay with respect to the pulse transmission time, referred to as the range gate delay in (4.1). In
this way, the horizontal axis in Figure 4.13 represents the travel time, r, or "range" from the sensor to the
ground. From another viewpoint, one can consider a single column in Figure 4.13, in which each sample
corresponds to the same range from the sensor. A column is sometimes called a range gate, and a row called a
range line.
Now consider the vertical axis of the two-dimensional memory of Figure 4.13. While each sample in a given
column is at the same range, the sensor has moved a small amount in the azimuth direction from one sample to
the next, so this vertical axis can be labeled "azimuth," or azimuth time, η. Data have now been recorded,
corresponding to two near-orthogonal directions on the Earth's surface, which is appropriate, as the objective is to
make a two-dimensional image of the Earth's surface.
In this way, one can see how a two-dimensional signal is obtained from the radar system, with the
coordinates being range time and azimuth time.14 The range time is also called "fast time" and the azimuth time
is called "slow time," because the range distance is related to range time by the speed of light, and the azimuth
distance is related to azimuth time by the much slower forward motion of the beam footprint.
Figure 4.12: How the received SAR data are placed in a two-dimensional
signal memory.
Another view of the two-dimensional memory is given in Figure 4.14, where the locus of energy from a
single point target is outlined. Items to note are the extent of the echo in the range dimension (the transmitted
pulse duration), the extent of the echo in the azimuth direction (the exposure time or the synthetic aperture
length), and the range migration. The sketch is not to scale, as the range and azimuth extents of the energy are
usually many hundreds of samples. However, the range migration may only be a few cells.
To summarize, the radar samples originate from the sampling of a continuous-time analog signal (i.e., the
echo received from a transmitted pulse), and these samples are written along the horizontal range axis. In the
azimuth direction, the signal is inherently in discrete-time from the outset, owing to the discrete nature of the
transmitted pulse events. The azimuth samples are written along the vertical azimuth axis in the two-dimensional
signal memory.
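The row-by-row arrangement described above can be sketched in a few lines; the sample counts and sample values here are hypothetical:

```python
# Arranging the one-dimensional echo stream of Figure 4.11 into the
# two-dimensional (azimuth x range) signal memory of Figure 4.13.
samples_per_range_line = 4      # range samples kept per pulse (hypothetical)
raw_stream = list(range(12))    # three pulses' worth of received samples

# One row per pulse (a range line), one column per range gate.
signal_memory = [
    raw_stream[i:i + samples_per_range_line]
    for i in range(0, len(raw_stream), samples_per_range_line)
]
print(signal_memory)    # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]

# A single column: every sample lies at the same range from the sensor.
range_gate = [row[2] for row in signal_memory]
print(range_gate)       # [2, 6, 10]
```
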
The received signal, sr, contains the radar carrier, cos(2π f0 τ), which is removed before the sampling by a
quadrature demodulation process, as discussed in Appendix 4B. The demodulated baseband signal from a single
point target can be represented by the complex signal

    sbb(τ, η) = A0 wr(τ - 2R(η)/c) wa(η - ηc)
                × exp{-j 4π f0 R(η)/c} exp{j π Kr (τ - 2R(η)/c)²}   (4.39)

Figure 4.13: How the voltage of the received radar signal of Figure 4.11 is
written into the two-dimensional signal processor memory.
Figure 4.14: The locus of energy of a single point target in the two-dimensional
signal processor memory, within the nominal target exposure time (the target
energy extends beyond these limits with a smaller magnitude).
If A0 is ignored, (4.39) is the impulse response of a point target having unity amplitude. Thus, the important
SAR sensor impulse response is given by

    himp(τ, η) = wr(τ - 2R(η)/c) wa(η - ηc)
                 × exp{-j 4π f0 R(η)/c} exp{j π Kr (τ - 2R(η)/c)²}   (4.42)
To model the signal received from a general ground surface, the ground reflectivity is convolved with this
impulse response in two dimensions to give the baseband SAR signal data
    sbb(τ, η) = g(τ, η) ⊗ himp(τ, η) + n(τ, η)   (4.43)
where n(r, 17) is an additional noise component that is present in all practical systems. The noise originates
mainly from the front end receiver electronics, and can be modeled as Gaussian white noise. The SAR system
model corresponding to (4.43) is shown in Figure 4.15. In the following development of matched filtering, the
noise can be ignored. In simulation experiments, one may wish to include noise in order to see the power of
matched filtering at work-the desired signal emerges from the noise floor, as illustrated in Figure 3.7.
Figure 4.15: The SAR system model, in which the ground reflectivity, g(τ, η), is
convolved with the impulse response, himp(τ, η), and noise, n(τ, η), is added.
SAR processing is assumed to start with this demodulated baseband signal. SAR processing algorithms
attempt to solve for g(τ, η), which is a deconvolution process. The difficulty, and the challenge, lies in the fact
that the impulse response is both range and azimuth dependent, and contains a range-varying RCM.
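For simulation purposes, the impulse response (4.42) can be evaluated directly on a (τ, η) grid. The sketch below assumes idealized rectangular range and azimuth envelopes; the chirp parameters Kr and Tr are hypothetical, while R0, Vr, and Ta are loosely based on the satellite column of Table 4.1:

```python
import cmath
import math

# Point-target impulse response (4.42) with unit-amplitude rectangular
# envelopes.  Kr and Tr are assumed values, not from the text.
c = 3.0e8        # speed of light, m/s
f0 = 5.3e9       # carrier frequency, Hz
Kr = 4.0e11      # range chirp FM rate, Hz/s (assumed)
Tr = 2.5e-6      # pulse duration, s (assumed)
R0 = 850e3       # slant range of closest approach, m
Vr = 7100.0      # effective radar velocity, m/s
Ta = 0.64        # target exposure time, s

def R(eta):
    """Hyperbolic range equation, R(eta) = sqrt(R0^2 + Vr^2 * eta^2)."""
    return math.sqrt(R0 ** 2 + (Vr * eta) ** 2)

def h_imp(tau, eta, eta_c=0.0):
    """Evaluate the impulse response (4.42) at one (tau, eta) sample."""
    trange = tau - 2.0 * R(eta) / c
    wr = 1.0 if abs(trange) <= Tr / 2 else 0.0        # range envelope
    wa = 1.0 if abs(eta - eta_c) <= Ta / 2 else 0.0   # azimuth envelope
    phase = -4.0 * math.pi * f0 * R(eta) / c + math.pi * Kr * trange ** 2
    return wr * wa * cmath.exp(1j * phase)

sample = h_imp(2.0 * R0 / c, 0.0)      # center of the target energy locus
print(round(abs(sample), 6))           # unit magnitude there
```

Sweeping tau and eta over a grid traces out the energy locus of Figure 4.14, including the range migration caused by R(η).
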
Table 4.1 gives a representative set of SAR parameter values for both a spaceborne and an airborne case. For
the spaceborne case, a set of parameters broadly representative of the SEASAT, J-ERS, ERS, RADARSAT, and
ENVISAT remote sensing satellites is used, assuming a C-band wavelength and a 10-m antenna.
For the aircraft case, a generic X-band system with a 1-m antenna is considered. The table is separated into
those parameters which primarily affect range processing, and those which mainly affect azimuth processing. The
squint in the satellite case is caused by Earth rotation, assuming no yaw steering. In the airborne case, the
squint is caused by sidewinds and physical antenna motion.
The purpose of this section is to derive the azimuth resolution obtainable from the SAR system. The azimuth
resolution is derived from the concept of azimuth bandwidth here, and is derived from antenna concepts in
Appendix 4C.
In the range direction, the received signal has FM characteristics, inherited from the transmitted pulse. A high
resolution can be obtained by matched filtering. The resolution is governed by the pulse bandwidth, as shown in
(3.28). In time units (seconds), it is 0.886 times the reciprocal of the range bandwidth in hertz. Multiplying
further by c/2 gives the resolution in slant range units (meters).
In the azimuth direction, the beamwidth is given by 0.886 λ/La. Without SAR processing, the azimuth
resolution is the projection of the beamwidth onto the ground

    0.886 R(ηc) λ / La   (4.44)
This is called the resolution of a real aperture radar. It is on the order of several hundreds of meters in an
airborne case, and several kilometers in a satellite case.
Table 4.1: Representative SAR Parameter Values

                                               Airborne   Satellite
Range parameters
  Slant range of scene center      R(ηc)       30         850       km
  Altitude                                     10         800       km
  Transmitted pulse duration       Tr          10         40        µs
Azimuth parameters
  Effective radar velocity (1)     Vr          250        7100      m/s
  Radar center frequency           f0          9.4        5.3       GHz
  Radar wavelength                 λ           0.032      0.057     m
  Azimuth FM rate                  Ka          131        2095      Hz/s
  Synthetic aperture length (2)    Ls          0.85       4.8       km
  Target exposure time             Ta          3.4        0.64      s
  Antenna length                   La          1          10        m
  Doppler bandwidth                Δfdop       443        1338      Hz
  Azimuth sampling rate (PRF)      Fa          600        1700      Hz
  Squint angle (3)                 θr,c        <8         <4        deg

Notes:
(1) Vs is 6% higher and Vg is 6% lower than Vr in the satellite case.
(2) See Section 4.7.2.
(3) θr,c ≈ θsq,c.
In Section 4.5, it is shown that the signal in the azimuth direction is also frequency modulated by virtue of
the motion of the platform. Hence, as in the range direction, one would expect to obtain a high resolution by
matched filtering. The azimuth resolution that can be obtained (in time units) is 0.886 times the reciprocal of the
bandwidth (4.36). In distance units, the obtainable resolution is

    ρa = (0.886 Vg cos θr,c / Δfdop) γw,a ≈ (La/2)(Vg/Vs) γw,a   (4.45)

where γw,a is the broadening factor that accounts for a tapering window applied in the azimuth processing. The
resolution can also be expressed in terms of the synthetic angle, θsyn, shown in Figure 4.16. Because of the
difference between the sensor velocity, Vs, and the beam footprint velocity, Vg, the synthetic angle differs from
the beamwidth, θbw = 0.886 λ/La, by the ratio Vs/Vg

    θsyn = (Vs/Vg) θbw = 0.886 λ Vs / (La Vg)   (4.46)

Substituting this angle into (4.35), the Doppler bandwidth, Δfdop, can be expressed in terms of θsyn as

    Δfdop = (2 Vg cos θr,c / λ) θsyn   (4.47)

By combining the first part of (4.45) and (4.47), the desired result is obtained, showing that the resolution is

    ρa = 0.886 λ / (2 θsyn)   (4.48)

The resolution is now independent of the squint angle. Substituting (4.46) into (4.48), the azimuth resolution in
(4.45) can be obtained, except for the broadening factor, γw,a. The broadening is not represented in (4.48),
because the Doppler spectrum is assumed flat in this derivation; the broadening is introduced once the data are
processed with a tapering window. This form of the resolution equation is used more frequently in spotlight SAR
[11, 12] and inverse SAR (ISAR) [7, 13] than in stripmap SAR (see Section 1.3).
Figure 4.16: Antenna azimuth beamwidth and synthetic angle. For clarity,
the beamwidth and synthetic angle are exaggerated, and a zero squint angle
is used.
4.7.2 Synthetic Aperture
The purpose of this section is to explain the term "synthetic aperture" in the SAR context. This gives another
derivation of the azimuth resolution.
The azimuth resolution of a conventional radar, or a SAR before processing, is given by the azimuth
beamwidth. The beamwidth is determined by the radar wavelength, λ, and the antenna length or aperture, La.
Both these parameters are fixed for a given radar system. To improve the resolution, it would be desirable to
reduce the effective beamwidth.
Similar to creating (synthesizing) a narrow pulse in range, the trick is to use signal processing to synthesize
a narrow beamwidth in azimuth. Since the beamwidth is inversely proportional to the antenna aperture,
synthesizing a narrow beamwidth is equivalent to synthesizing a large aperture. In practice, the synthesized
aperture can be several hundred meters long in the airborne case, and several thousand meters long in the
spaceborne case, whereas the antenna's real aperture, La, is only on the order of 1 to 15 m in length.
The synthetic aperture is shown by Ls in Figure 4.16. It is the length of the sensor path during the time
that a target stays within the radar beam. This length governs the amount of data that is available for
processing from a given target. The synthetic aperture, Ls, is given by

    Ls = θbw R(ηc) Vs / (cos θr,c Vg) = 0.886 R(ηc) λ Vs / (La cos θr,c Vg)   (4.49)

where θbw = 0.886 λ/La, and the ratio, Vs/Vg, is due to the difference between the beamwidth, θbw, and the
synthetic angle, θsyn.
Appendix 4C shows that this definition of synthetic aperture gives a null-to-null beamwidth of λ/Ls. From
the sinc function presented in Section 2.3.4, the synthesized half-power beamwidth is

    φs = 0.886 λ / (2 Ls)   (4.50)

where the factor of two accounts for the two-way path of the radar signal, as shown in Appendix 4C. Assuming
an antenna having a beamwidth, φs, the azimuth resolution is then R(ηc) φs. Then, using (4.49) and (4.50), the
azimuth resolution in (4.45) can be obtained, except for the broadening factor, γw,a. There is no broadening
represented in (4.50), because the aperture illumination is assumed to be uniform in this derivation.
As an example, let La = 10 m, λ = 0.057 m, R(ηc) = 850 km, Vg/Vs = 0.88, and assume a small squint angle
and no weighting. Then, the obtainable resolution from (4.45) is 4.4 m, the real aperture required to obtain
this resolution from (4.44) is about 9.8 km, and the synthetic aperture from (4.49) is about 4.9 km. This example
shows that the equivalent real aperture is twice the length of the SAR synthetic aperture, an effect due to
the two-way signal path.
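These figures can be reproduced with a few lines (a small squint angle and no weighting are assumed, so γw,a = 1 and cos θr,c = 1):

```python
# Reproducing the example: La = 10 m, lambda = 0.057 m, R(eta_c) = 850 km,
# Vg/Vs = 0.88, small squint, no weighting.
La = 10.0
lam = 0.057
R_etac = 850e3
Vg_over_Vs = 0.88

rho_a = 0.5 * La * Vg_over_Vs                    # (4.45): azimuth resolution
L_real = 0.886 * R_etac * lam / rho_a            # (4.44) solved for aperture
L_syn = 0.886 * R_etac * lam / La / Vg_over_Vs   # (4.49), small squint

print(rho_a)                   # 4.4 m
print(round(L_real / 1e3, 1))  # about 9.8 km
print(round(L_syn / 1e3, 1))   # about 4.9 km
```
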
Compression Ratio
Using the parameters in Table 4.1, the compression ratios for the airborne and satellite cases are summarized
in Table 4.2. The total compression ratio in each case is on the order of 1 million. While the energy of a target
is spread throughout an area of 1000 × 1000 samples in the raw data, the energy can be compressed to a point
with a processing gain of a million. This is the magic of matched filtering in two dimensions!
Table 4.2: Compression Ratios
4.8 Summary
In this chapter, the geometry of the SAR data collection is discussed. Of particular importance is the origin of
the frequency modulations in range and in azimuth. The range modulation is achieved by the design of the
transmitted pulse. The azimuth modulation is introduced by the motion of the platform. The SAR signal is the
convolution of the ground reflectivity with the SAR system impulse response, which is range-varying and possibly
azimuth-varying. SAR processors solve for the ground reflectivity from this convolution equation.
An important result, derived in this chapter, is that the available azimuth resolution is about one-half the
antenna length. Two different methods have been shown to arrive at the same result. One is by utilizing the
bandwidth of a signal captured within the exposure time. Another method starts with the definition of synthetic
aperture and then derives the resolution from the synthesized, much narrower, beamwidth.
It appears that SAR violates two commonsense principles:
Resolution versus range: In general, the closer a sensor is to a target, the more the target details are
revealed. But in SAR, the azimuth bandwidth, and hence the resolution, is independent of range, and this
can be explained as follows. The exposure time is proportional to range, but the azimuth FM rate is
inversely proportional to range; hence, the signal bandwidth, being the product of the exposure time and
FM rate, is independent of range. Note, however, that the SNR decreases with R3 , and under power-limiting
conditions, this SNR loss obscures target details at longer ranges.
Sensor size: In general, a larger sensor can "see" more details than a smaller sensor (as in a telescope or
microscope). This is true for a real aperture radar, but the opposite is true if the received data are
processed in SAR mode. The beamwidth increases with a smaller antenna and, in turn, the exposure time
and signal bandwidth increase, leading to a finer resolution. However, ambiguities and SNR place a lower
limit on the antenna size.
The important equations derived in this chapter are summarized in Table 4.3. In this table, Vs = Vr = Vg is
assumed for an airborne case. The azimuth resolution, ρa, is defined in terms of different variables, namely, the
Doppler bandwidth, Δfdop, the antenna aperture, La, and the synthetic angle, θsyn. When the squint angle is
small, cos θr,c can be assumed to equal one in the table.
Table 4.3: Summary of Key Synthetic Aperture Equations
ScanSAR is a mode of operation in which the radar beam is scanned in elevation (i.e., in range) to obtain a
wider swath image than can be obtained by a single unsteered beam. With an unsteered beam, the swath width
is limited to approximately 110 km by ambiguities, as explained in Section 4.5. The scene shown in Figure 4.17
was collected using the ScanSAR Narrow mode of RADARSAT-1, in which the Wl and W2 beams are used to
obtain a swath width of 290 km in ground range. More details on ScanSAR data and its signal processing are
given in Chapter 10.
This scene was collected on July 6, 2004, during ascending orbit number 45,255. The scene is centered at
49.3° N, 124.4° W, near the town of Qualicum Beach on Vancouver Island. The image is portrayed along the
path of the satellite; the north direction lies 11° clockwise from the top of the image.
The image was processed by Radarsat International, using the SPECAN algorithm. The original processed
resolution is 50 m, with two range looks and two azimuth looks. However, for the purpose of displaying the whole
300-km image (originally processed to 13,500 pixels), the samples have been averaged by a factor of six in each
direction, so that the observed resolution is about 300 m with a smoothness corresponding to about 100 looks.
The city of Vancouver is on the mainland, near the right of the scene, and the city of Victoria is near the
south end of the island. The 2500-m Coast Mountains dominate the upper part of the scene. The Olympic
Peninsula in Washington State is at the bottom edge of the scene, and the San Juan Islands are near the
bottom right of the scene.
In wide swath satellite images, there is a larger range of incidence angles than found in regular beam images.
The effect of the changing incidence angles is seen in the mountains in this image. At near range (on the left),
the incidence angle is approximately 20°, and the large layover gives high contrast to the mountains. At far range,
the incidence angle is approximately 49°, and the layover is much less. Therefore, even though the mountains on
the right are higher, the contrast caused by layover is much less than on the left of the scene.
References
[1] D. Massonnet. Capabilities and Limitations of the Interferometric Cartwheel. IEEE Trans. on Geoscience and
Remote Sensing, 39 (3), pp. 506-520, March 2001.
[2] T. Amiot, F. Douchin, E. Thouvenot, J.-C. Souyris, and B. Cugny. The Interferometric Cartwheel: A
Multipurpose Formation of Passive Radar Microsatellites. In Proc. Int. Geoscience and Remote Sensing Symp.,
IGARSS'02, Vol. 1, pp. 435-437, Toronto, June 2002.
[3] R. K. Raney, A. P. Luscombe, E. J. Langham, and S. Ahmed. RADARSAT. Proc. of the IEEE, 79 (6), pp.
839-849, 1991.
[4] J. Curlander and R. McDonough. Synthetic Aperture Radar: Systems and Signal Processing. John Wiley &
Sons, New York, 1991.
[5] R. K. Raney. A Comment on Doppler FM Rate. International Journal of Remote Sensing, 8 (7), pp.
1091-1092, January 1987.
[6] R. K. Raney. Radar Fundamentals: Technical Perspective. In Manual of Remote Sensing, Volume 2:
Principles and Applications of Imaging Radar, F. M. Henderson and A. J. Lewis (ed.), pp. 9-130. John Wiley
& Sons, New York, 3rd edition, 1998.
[7] D. R. Wehner. High Resolution Radar. Artech House, Norwood, MA, 2nd edition, 1995.
[8] R. W. Bayma and P. A. Mcinnes. Aperture Size and Ambiguity Constraints for a Synthetic Aperture Radar.
In Synthetic Aperture Radar, J. J. Kovaly (ed.). Artech House, Dedham, MA, 1978.
[9] S. W. McCandless. SAR in Space - The Theory, Design, Engineering and Application of a Space-Based
SAR System. In Space-Based Radar Handbook, L. J. Cantafio (ed.), chapter 4. Artech House, Norwood, MA,
1989.
[10] A. Freeman, W. T. K. Johnson, B. Huneycutt, R. Jordan, S. Hensley, P. Siqueira, and J. Curlander. The
"Myth" of the Minimum SAR Antenna Area Constraint. IEEE Trans. Geoscience and Remote Sensing, 38 (1),
pp. 320-324, January 2000.
[11] W. G. Carrara, R. S. Goodman, and R. M. Majewski. Spotlight Synthetic Aperture Radar: Signal Processing
Algorithms. Artech House, Norwood, MA, 1995.
[13] R. J. Sullivan. Microwave Radar Imaging and Advanced Concepts. Artech House, Norwood, MA, 2000.
Equation (4.10) states that the effective radar velocity, Vr, is approximately the geometric mean of the satellite
velocity, Vs, and the beam velocity over the ground, Vg. The relationship is accurate for zero Doppler pointing
and a satellite orbit that can be assumed to be circular within the target exposure time. The purpose of this
appendix is to prove this relationship. It should be emphasized that the approximation is adequate for simple
geometric analyses, but is not sufficiently accurate for precision focusing of the SAR data.
Figure 4A.1: The SAR geometry, showing the satellite velocity and the beam
footprint velocity.
Figure 4A.1 can be used to illustrate the satellite/Earth geometry of the radar system. Let C be a target
on the Earth's surface under consideration, and A be the satellite position when the target at C is at zero
Doppler. The relative orbit time at A is set to zero. Let the satellite, traveling with angular velocity ωs, advance
from position A to B in a time η. All velocities, including Vs, are in ECR coordinates, in which Earth rotation
is not a factor.

Assume a right-handed, Earth-centered coordinate system, in which the z-axis points from the center of the
Earth to Point A, the y-axis points in the direction of the satellite ECR velocity vector at Point A, and the
x-axis points to the right to complete the orthogonal system. At η = 0, the satellite position is [0 0 H]^T. Then,
the position of the satellite at time η is at B, given by

    P_B = [0, H sin(ωs η), H cos(ωs η)]^T   (4A.1)

and the position of the target is

    P_C = [Re sin βe, 0, Re cos βe]^T   (4A.2)

where H is the local orbit radius, Re is the local Earth radius at the target, and βe is the angle between OC
and the orbit plane.
The range to the target at η = 0 is R0, and the range at time η is

    R²(η) = |P_B - P_C|² = H² + Re² - 2 H Re cos βe cos(ωs η)
          ≈ R0² + H Re cos βe ωs² η²   (4A.4)

where the last equation uses the triangle relationship R0² = H² + Re² - 2 H Re cos βe, together with the
small-angle approximation cos(ωs η) ≈ 1 - ωs² η²/2. From the figure, H ωs is the
satellite velocity, Vs, arising from the locally circular orbit assumption, and Re ωs cos βe is the beam footprint
velocity, Vg. This value of Vg assumes that the Earth is locally spherical in the vicinity of C, so that Vg is
measured parallel to Vs.
Hence the final result is

    R²(η) ≈ R0² + Vs Vg η²   (4A.5)

which shows that the effective radar velocity, Vr, in (4.9), is equal to √(Vs Vg) under the conditions of a locally
circular orbit and an antenna centered on zero Doppler.
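A quick numerical check, using the satellite figures of Table 4.1 (Vs about 6% higher and Vg about 6% lower than Vr, per note 1 of the table), shows how closely the geometric-mean relation holds for those rounded values:

```python
import math

# Check of Vr ~ sqrt(Vs * Vg) using the satellite values of Table 4.1.
Vr = 7100.0        # effective radar velocity, m/s
Vs = 1.06 * Vr     # physical satellite velocity, m/s
Vg = 0.94 * Vr     # beam footprint velocity, m/s

Vr_approx = math.sqrt(Vs * Vg)
error_pct = abs(Vr_approx - Vr) / Vr * 100.0
print(round(Vr_approx, 1))   # within a fraction of a percent of 7100
print(round(error_pct, 2))
```
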
The pulses transmitted and received by the radar system are real signals. This appendix explains how the
received signal can be bandshifted to obtain a complex baseband signal by a quadrature demodulation process.
The demodulation removes the high frequency carrier, but may create some signal errors, so the compensation of
the errors is also discussed.

Let a general real-valued signal, having a high frequency carrier and a low frequency modulation, be represented
by

    x(τ) = cos{2π f0 τ + φ(τ)}   (4B.1)

where the frequency of the carrier, f0, is several orders of magnitude higher than the bandwidth of the
modulation, φ(τ) (GHz versus MHz).
Figure 4B.1 shows that the quadrature demodulation process produces two channels that represent a
complex-valued output [11]. First, consider the upper channel, where the signal is multiplied by cos(2π f0 τ). Using
the trigonometric identity

    cos A cos B = (1/2) cos(A - B) + (1/2) cos(A + B)   (4B.2)

the product of the signal and the local oscillator can be written as

    xc1(τ) = x(τ) cos(2π f0 τ) = (1/2) cos{φ(τ)} + (1/2) cos{4π f0 τ + φ(τ)}   (4B.3)

Figure 4B.1: The quadrature demodulation process. Each channel is mixed to
baseband, lowpass filtered, and digitized by an ADC to form the I (in-phase)
and Q (quadrature) components.
The first cosine term of (4B.3) has an upper frequency governed by the bandwidth of φ(τ), while the second
cosine term has a much higher frequency, centered around 2f0. Therefore, the second term can be removed by a
lowpass filter, giving the result

    xc2(τ) = (1/2) cos{φ(τ)}   (4B.4)

Similarly, the lower channel of Figure 4B.1 is multiplied by -sin(2π f0 τ), and the following trigonometric
identity is used

    cos A sin B = (1/2) sin(A + B) - (1/2) sin(A - B)   (4B.5)

to express the signal as the sum of high and low frequency components. The result of the multiplication is

    xs1(τ) = -x(τ) sin(2π f0 τ) = (1/2) sin{φ(τ)} - (1/2) sin{4π f0 τ + φ(τ)}   (4B.6)

and, after lowpass filtering,

    xs2(τ) = (1/2) sin{φ(τ)}   (4B.7)

The signals xc2(τ) and xs2(τ) are then sampled by the analog-to-digital converters (ADCs), at a rate at least
equal to the bandwidth of φ(τ). Because of the cosine and sine multiplications, the two signals are in phase
quadrature, and represent a complex signal

    x3(τ) = xc2(τ) + j xs2(τ) = (1/2) exp{j φ(τ)}   (4B.8)

The two individual signals are called the quadrature components of the complex signal, or the I and Q channels,
for in-phase and quadrature. The signal x3(τ) is the required baseband signal, which is used in the processing of
the SAR signals.
When demodulation is applied to the real SAR echo data from a point target (4.32) to obtain the complex
baseband signal, the phase term, φ(τ), is

    φ(τ) = π Kr (τ - 2R(η)/c)² - 4π f0 R(η)/c   (4B.9)
Frequency mixing: The input signal in analog form goes through two channels. One channel is multiplied by
cos(2π f0 τ), and the other channel by -sin(2π f0 τ). A constant phase error can be introduced in the
multiplier in each channel, caused by the electronics. The phase difference between these two paths, rather
than the absolute phase angles, is of importance for the SAR processing. Let this phase difference be Δθ.
The frequency mixing can then be viewed as follows. The upper channel is multiplied by cos(2π f0 τ), and
the lower channel by -sin(2π f0 τ + Δθ). The additional phase, Δθ, is an error. In other words, the
signals in the two channels are no longer orthogonal.
Lowpass filter: Ideally, the gains of the lowpass filters in the two channels are equal. Because of imbalance in
the electronics, this may not always be the case. When this imbalance occurs, the signal powers after
filtering are not equal. The ratio of the two gains, rather than the absolute values of the gains, is of
importance. Similarly, each channel has a DC bias, and the two biases can be different.

Analog-to-digital conversion: In the ADCs, the gains of the two channels may not be balanced, and there
may be a timing error between the two channels.
The corrections for gain, DC bias, and phase should be performed in the following order: bias removal in
both channels, power balancing between the channels, and phase correction in one channel. The first two
corrections can be performed quite easily:

1. Estimate the DC bias of each channel, for example, as the mean of that channel over a large block of data.

2. Subtract the estimated bias from the samples of each channel.

3. Determine the relative gain of the channels, after the bias removal, and scale one channel so that the powers are balanced.
The phase correction is not as straightforward. One method is described below, assuming that bias removal and
power balancing have been performed. Let I and Q denote the pixel intensity in the I and Q channels, A the
magnitude, and θ the phase, all treated as random variables over the data set. Then the channel intensities can
be written

    I = A cos θ   (4B.10)

    Q = A sin(θ + Δθ)   (4B.11)

where the phase error, Δθ, has been included. The probability distribution of A is immaterial in the subsequent
analysis. The angle θ has a uniform distribution between -π and π. The two random variables, A and θ, are
statistically independent. The random variables, I and Q, each have zero mean, because of the cosine and sine
factors.
For perfect orthogonality, the cross channel covariance must be zero. Hence, any nonorthogonality can be
detected from the off-diagonal terms of the covariance matrix, and the angle, Δθ, can be determined. Let the
covariance be C, which can be expressed by

C = E{Ĩ Q̃}   (4B.12)
where E is the expectation over the received data set. Substituting (4B.10) and (4B.11) into (4B.12), and
recognizing the fact that Ã and θ̃ are statistically independent, the covariance is

C = E{ Ã² cos θ̃ sin(θ̃ + Δθ) }
  = E{Ã²} E{ cos θ̃ sin(θ̃ + Δθ) }
  = (1/2) E{Ã²} E{ sin(2θ̃ + Δθ) + sin Δθ }   (4B.13)

Recognizing that E{sin(2θ̃ + Δθ)} = 0 and E{sin Δθ} = sin Δθ, the covariance is

C = (1/2) E{Ã²} sin Δθ   (4B.14)

Because θ̃ is uniformly distributed, E{Ĩ²} = E{Q̃²} = (1/2) E{Ã²}, so that

E{Ã²} = E{Ĩ²} + E{Q̃²}   (4B.15)
Hence, the required phase correction, Δθ, can be determined by combining (4B.12), (4B.14), and (4B.15)

Δθ = arcsin[ 2 E{Ĩ Q̃} / ( E{Ĩ²} + E{Q̃²} ) ]   (4B.16)

The phase correction needs to be applied to one channel only, for example, the Q channel. Using the Δθ of
(4B.16), the phase error for this channel can be expressed using

Q = A sin(θ + Δθ)   (4B.17)

where the ˜ has been dropped because the equation now refers to each particular pixel. The desired result is

Q' = A sin θ   (4B.18)

where Δθ has been compensated. Now, Q' has to be expressed in terms of the known variables I, Q, and Δθ.
Equation (4B.17) can be rewritten as

Q = A sin θ cos Δθ + A cos θ sin Δθ   (4B.19)

Recognizing that A cos θ = I and A sin θ = Q', the corrected channel is obtained from (4B.19) as

Q' = ( Q − I sin Δθ ) / cos Δθ   (4B.20)

The data, I and Q', now form an orthogonal set. It is seen that if Δθ = 0 to start with, then Q' = Q, and no
correction is needed. Subsequently, in this book, it is assumed that the input data have been properly corrected.
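As a concrete illustration of the procedure above, the following minimal sketch simulates non-orthogonal channels per (4B.10) and (4B.11), estimates Δθ from the cross-channel covariance, and corrects the Q channel only. The magnitude distribution, phase error value, and sample count are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
dtheta = 0.1                      # true phase error (radians), known only to the simulator
A = rng.rayleigh(1.0, n)          # arbitrary magnitude distribution (immaterial to the estimate)
theta = rng.uniform(-np.pi, np.pi, n)

I = A * np.cos(theta)             # equation (4B.10)
Q = A * np.sin(theta + dtheta)    # equation (4B.11): non-orthogonal channels

# Estimate the phase error from the cross-channel covariance:
# C = E{I Q} = (1/2) E{A^2} sin(dtheta), with E{A^2} = E{I^2} + E{Q^2}
C = np.mean(I * Q)
est = np.arcsin(2.0 * C / (np.mean(I**2) + np.mean(Q**2)))

# Correct the Q channel only: Q' = (Q - I sin(est)) / cos(est)
Qc = (Q - I * np.sin(est)) / np.cos(est)
```

With a large number of samples, the estimate converges to the true Δθ, and the cross covariance of the corrected channels approaches zero.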
In Section 4.7, the SAR azimuth resolution is derived from the processed bandwidth, using the resolution formula
from the pulse compression development of Section 3.3. In this appendix, the azimuth resolution is derived from a
different viewpoint, using antenna beamwidth concepts. In doing so, an intuitive explanation of the term "synthetic
aperture" is given. For simplicity, the development assumes a zero squint case, but it can be extended to a
nonzero squint easily.
The presentation starts by taking a brief look at antenna theory, and discusses the meaning of "aperture" and
the resolving power of an antenna. Then, the operations with which the SAR processor creates or synthesizes an
antenna are examined, which gives an alternate way of deriving azimuth resolution.
Consider a radar antenna consisting of a linear array of identical radiating elements, as shown in Figure
4C.1. The length of the antenna, La, is called the real aperture (or simply the aperture) of the antenna. This is
in analogy with the aperture or diameter of a lens, and is the "opening" through which the sensor views the
imaged terrain.
Consider the far-field radiation pattern of the antenna beam as it strikes the Earth's surface. The main lobe
of the pattern illuminates a patch of the ground at any instant, and, in a simple sense, this patch is what is
"observed" at that time. Consequently, the azimuth extent of this patch defines the resolving power of the
antenna in the azimuth direction. More specifically, the 3-dB width of the radiation pattern is usually taken as
the resolution of the unprocessed received signal.
[Figure 4C.1: Linear array antenna geometry, showing the direction perpendicular to the viewing direction, the angle θ, and the Earth's surface.]
Suppose it is desired to measure the far-field strength of the radiated energy at a point on the ground, using
a field strength meter, placed as shown in Figure 4C.1. Consider a line from the field strength meter to the
center of the antenna that makes an angle θ with the normal to the surface of the antenna. The far-field
assumption states that the rays from each element to the field strength meter can be considered to be parallel.
The distance from each element to the field strength meter is R0 + xθ, assuming that the angle θ is small.
Assuming that the radiation from each element is of equal amplitude at the field strength meter, and neglecting
the constant phase due to the range Ro, the net voltage is given by the sum of the radiation from all of the
radiating elements
Pa(θ) = ∫_{−La/2}^{+La/2} exp{ −j 2π x θ / λ } dx = La sinc( La θ / λ )   (4C.2)
From Figure 4C.2, it can be seen that the vector sum is zero (i.e., the circle is completed) if the phasors
from the first and last radiating elements are aligned in almost the same direction. Beginning from the antenna
boresight and moving outwards, the circle is first completed when the path difference, La θ for small θ, between
the two ends of the aperture to the field strength meter, is one wavelength. This occurs at the beam angle of
θ = λ/La, so, from symmetry, the beamwidth between the nulls adjacent to the main lobe is 2λ/La. This agrees
with the previous result.
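The null position predicted above can be checked numerically by discretizing the aperture integral of (4C.2). The aperture length and wavelength below are illustrative assumptions, not values from the text.

```python
import numpy as np

wavelength = 0.06   # [m] C-band-like wavelength, illustrative
La = 10.0           # [m] real aperture length, illustrative

def pattern(theta, n=20000):
    # Midpoint-rule discretization of (4C.2):
    # Pa(theta) = integral over [-La/2, +La/2] of exp(-j 2 pi x theta / wavelength) dx
    dx = La / n
    x = -La / 2 + dx * (np.arange(n) + 0.5)
    return np.sum(np.exp(-2j * np.pi * x * theta / wavelength)) * dx

theta_null = wavelength / La          # predicted first-null angle (radians)
boresight = abs(pattern(0.0))         # pattern peak, equal to La
null = abs(pattern(theta_null))       # pattern value at the predicted null
```

At θ = λ/La the phasors from the aperture complete one full circle, so the integral vanishes, confirming the null-to-null beamwidth of 2λ/La.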
How does the SAR signal processor contribute to the resolution? In the SAR system, there are two main
differences with respect to the antenna model described in the preceding section. First, the location of the
radiating elements in the SAR case is given by the location of the sensor when the pulses are transmitted and
received. Thus, the antenna phase center location at each pulse epoch is analogous to each element in Figure
4C.1, as each pulse is acting as one contribution to the received signal in the SAR system.
Second, the signal strength is being observed at the receiver, rather than on the ground. This means that the
ranges in the analysis of the preceding section must be doubled. Then, in order to complete the analogy with the
previous section, the field strength meter in Figure 4C.1 is replaced by an ideal reflector (corner reflector), and
the field strength is measured by a voltmeter in the SAR receiver.
Figure 4C.3: Sensor locations where the data are collected, illustrating the
concept of synthetic aperture.
This analogy is expressed in Figure 4C.3, where the synthetic array length is given by the distance that the
sensor travels while the corner reflector is illuminated by the radar beam. The length Ls = 0.886 λ R0 / La is used,
which corresponds to the length where the received signal strength is within 6 dB of its maximum.
The corner reflector is located a distance X away from the central axis of the synthetic array. The
perpendicular distance from the corner reflector to the array is R0. The distances from the corner reflector to
either end of the synthetic array are R1 and R2. Then, for large range distances, the total path length difference is
ΔR = 2 ( R1 − R2 ) ≈ 2 Ls X / R0   (4C.3)
Using the same argument as that associated with Figure 4C.2, the first null occurs when the corner reflector is at
Xnull = R0 λ / (2 Ls)   (4C.4)
and the null-to-null separation of two corner reflectors located symmetrically about the central axis is 2 Xnull.
Thus, taking the resolution as 0.886 times one-half the null-to-null distance, the azimuth resolution of the
processed SAR data is
ρa = 0.886 Xnull = 0.886 R0 λ / (2 Ls) = 0.886 R0 λ La / (2 × 0.886 λ R0) = La / 2   (4C.5)
Section 4.7 shows that the achievable resolution is better than this, by a ratio of footprint velocity to satellite
velocity.
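The chain from (4C.4) to (4C.5) can be walked through with numbers. The wavelength and slant range below are illustrative assumptions, not values from the text; the final result La/2 is independent of them.

```python
wavelength = 0.0566   # [m] C-band wavelength, illustrative
R0 = 850e3            # [m] slant range of closest approach, illustrative
La = 10.0             # [m] real aperture length, illustrative

Ls = 0.886 * wavelength * R0 / La        # synthetic aperture length (6-dB footprint)
Xnull = R0 * wavelength / (2.0 * Ls)     # first-null offset, equation (4C.4)
rho_a = 0.886 * Xnull                    # azimuth resolution, equation (4C.5)
```

The R0 and λ factors cancel, leaving ρa = La/2 regardless of range and wavelength, which is the classic SAR result.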
An alternate way to arrive at the resolution is to follow the development leading to (4C.2), in which La is
replaced by Ls, and the phase inside exp{·} is increased by a factor of two due to the two-way path. Then the
synthesized radiation pattern is
Ps(θ) = pa²(θ) ∫_{−Ls/2}^{+Ls/2} exp{ −j 4π x θ / λ } dx ≈ Ls sinc( 2 Ls θ / λ )   (4C.6)

where pa²(θ) is the two-way pattern of the real aperture, evaluated at each element position, and the integral is
the Fourier transform of a rectangular function, a sinc function. The width of pa²(θ) is much wider than that of
the sinc function, because the former is the beamwidth of the original real aperture, while the latter is the
synthesized beamwidth. The factor, pa²(θ), can then be ignored; hence, an approximation sign is used in the
last step.
Drawing a further analogy between the synthetic aperture, Ls, and a real aperture antenna, it would take a
conventional antenna of length 2 Ls to obtain a resolution of La/2 without SAR processing. The synthetic array is
only one-half this length, because the SAR system benefits from the two-way radar path length.
Chapter 5
5.1 Introduction
SAR data are acquired in the two-dimensional time domain. The data are often transformed into other domains
for processing efficiency reasons. Two of the domains considered here are the range Doppler domain and the two-
dimensional frequency domain. In order to develop the signal processing algorithms for focusing SAR data, it is
important to understand the characteristics of the received SAR signal, and the form of the important processing
parameters in these domains. These are the topics of this chapter.
Analytical expressions are derived for the signal from a point target in the range time, azimuth frequency
domain. This domain is called the "range Doppler" domain, since azimuth frequency is synonymous with Doppler
frequency. Expressions are also derived for the signal properties in the two-dimensional frequency domain. These
derivations will enable the design of appropriate matched filters, to be addressed in Chapter 6. The derivations,
presented in Section 5.2, start with a low squint case and proceed to a higher squint case that requires a more
complicated form of the signal spectrum.
Another important parameter in SAR processing is the Doppler centroid, which is the azimuth frequency or
Doppler shift when the point target is in the center of the beam. Section 5.4 discusses the significance of the
Doppler centroid to the characteristics of SAR data. It also addresses the Doppler frequency aliasing that occurs
as a result of the azimuth sampling of the data by the radar pulses.
The most important relationship in SAR processing is the instantaneous range of a point target with respect
to the sensor, from which the signal phase characteristics can be determined. When the range of the received
energy from the target is drawn versus time, the curve passes through several range cells, and hence one refers to
the range variation by the name range cell migration (RCM). The RCM effects in the different signal domains are
discussed in Section 5.5, as they have a significant impact on the processing.
Examples of simulated point targets and their spectra in the two domains are shown in Section 5.6. In
Section 5.7, an introduction to SAR processing is given by discussing two simple algorithms, one with accurate
focusing but poor efficiency, and the other with good efficiency but moderate focusing. This leads to a search for
other algorithms that offer both accuracy and efficiency. Finally, Section 5.8 summarizes this chapter, including
tabulating the key signal properties.
Most SAR processing algorithms work in the frequency domain for reasons of efficiency. The most important of
these is matched filtering convolution efficiency, but there is also range cell migration correction (RCMC)
efficiency. When the data are in the azimuth frequency domain, targets having the same slant range of closest
approach share the same migration trajectory, making RCMC simpler to apply in this domain. Thus, it is useful
to derive the spectrum of the SAR signal in the range time, Doppler frequency domain, and in the
two-dimensional frequency domain. Solutions for the simple low-squint case are presented first in this section,
followed in Section 5.3 by the more complicated general case.
When the first satellite SAR processing was done digitally in the late 1970s, a small squint angle was
assumed. With this assumption, the signal properties, and consequently the matched filters, take on a relatively
simple form and can easily be derived [1- 3]. This section presents these simple solutions first, to allow the reader
to gain an insight into the derivations. The derived spectrum can be useful in a first order analysis of a SAR
processor.
Starting from the hyperbolic form of the range equation (4.9), it is useful to make the parabolic
approximation

R(η) ≈ R0 + Vr² η² / (2 R0)   (5.1)
This approximation is valid for a low squint and moderate aperture length, such as ERS or RADARSAT, since
the terms higher than the second order in the series expansion (5.1) are very small. Then, the baseband received
signal (4.39) can be approximated by
s0(τ, η) ≈ A0 wr( τ − 2R(η)/c ) wa( η − ηc ) exp{ −j 4π R0 / λ } exp{ −j π Ka η² } exp{ j π Kr ( τ − 2R(η)/c )² }   (5.2)

where the azimuth FM rate is

Ka ≈ 2 Vr² / (λ R0)   (5.3)
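The quality of the parabolic approximation (5.1) can be checked against the hyperbolic range equation (4.9). The ERS-like parameter values below are illustrative assumptions, not values from the text.

```python
import numpy as np

R0 = 850e3        # [m] closest-approach slant range, illustrative
Vr = 7100.0       # [m/s] effective radar velocity, illustrative
eta = np.linspace(-0.5, 0.5, 1001)   # [s] azimuth time about closest approach

R_hyp = np.sqrt(R0**2 + Vr**2 * eta**2)     # hyperbolic range equation (4.9)
R_par = R0 + Vr**2 * eta**2 / (2.0 * R0)    # parabolic approximation (5.1)

max_err = np.max(np.abs(R_hyp - R_par))     # worst-case approximation error [m]
```

For these low-squint, moderate-aperture parameters the worst-case error is a tiny fraction of a wavelength, which is why the terms above second order can be neglected.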
The range Doppler spectrum can be obtained by applying the principle of stationary phase (POSP) directly to
(5.2), as discussed in Chapter 3. This results in the familiar relationship between frequency and time, shown in
the bottom part of Figure 4.10

fη ≈ −Ka η   (5.4)

or

η ≈ −fη / Ka   (5.5)
This means that the Doppler centroid frequency and Doppler centroid time are related by

fηc ≈ −Ka ηc   (5.6)

or

ηc ≈ −fηc / Ka   (5.7)
The azimuth phase in the range Doppler domain then takes on a very simple form

θrd(τ, fη) ≈ −(4π R0 / λ) + π fη² / Ka + π Kr [ τ − 2 Rrd(fη)/c ]²   (5.8)
where Rrd is the RCM in this domain. The RCM is now obtained by combining (5.1), (5.3), and (5.5), resulting
in

Rrd(fη) ≈ R0 + ( λ² R0 / (8 Vr²) ) fη²   (5.10)

which is parabolic in fη. The parabolic range equation (5.1) in η is transformed into a parabolic range equation
in fη.
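The equivalence of the two parabolas can be verified numerically: under the time/frequency mapping fη = −Ka η, with Ka = 2Vr²/(λR0) assumed as the azimuth FM rate, the RCM of (5.10) reproduces the parabolic range of (5.1). All parameter values are illustrative.

```python
import numpy as np

c = 299792458.0
f0 = 5.3e9                              # [Hz] C-band carrier, illustrative
wavelength = c / f0
R0 = 850e3                              # [m] closest-approach range, illustrative
Vr = 7100.0                             # [m/s] effective velocity, illustrative
Ka = 2.0 * Vr**2 / (wavelength * R0)    # azimuth FM rate, assumed form of (5.3)

eta = np.linspace(-0.4, 0.4, 801)       # [s] azimuth time
f_eta = -Ka * eta                       # time/frequency mapping (5.4)

R_par = R0 + Vr**2 * eta**2 / (2.0 * R0)                       # parabola in eta, (5.1)
R_rd = R0 + wavelength**2 * R0 * f_eta**2 / (8.0 * Vr**2)      # parabola in f_eta, (5.10)
```

Substituting fη = −Ka η into (5.10) cancels algebraically back to (5.1), so the two arrays agree to machine precision.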
Finally, the two-dimensional frequency spectrum can be derived by taking a range Fourier transform of the range
Doppler signal, once again applying the POSP to (5.9). The resulting phase function in this two-dimensional
frequency domain is

θ2df(fτ, fη) ≈ −(4π R0 / c)(f0 + fτ) + π fη² / Ka − π fτ² / Kr − ( π λ² R0 / (2 c Vr²) ) fτ fη²   (5.11)

Note that the migration of the target trajectory is not represented in the range frequency envelope, Wr, which is
only a function of fτ but not of fη. The migration is embedded in the phase, θ2df, given by (5.11).
When the beam squint is higher, or the aperture is very wide, a more complicated form of cross coupling
between the range and azimuth domains must be recognized. To derive the signal spectrum analytically in the
range Doppler domain, one is tempted to use a direct azimuth Fourier transform. The difficulty encountered with
this operation is the presence of cross coupling between the range and azimuth in the raw signal domain.
The cross coupling can be best illustrated using the raw signal of a point target, as shown in Figure 5.1.
When a particular range gate is examined, for example, the one shown by the vertical dashed line, a phase
change between azimuth samples can be seen. The phase change is due to the RCM, and becomes bigger as the
amount of RCM increases. This phase change is in addition to the normal azimuth phase encoding that results
from the demodulation process (which is shown on the right-hand side of the figure) . In other words, the cross
coupling caused by the RCM creates an additional azimuth phase term, which affects the azimuth FM rate.
Jin and Wu at JPL were the first to recognize the coupling between the range phase and azimuth phase in
squinted data. In spite of the difficulty mentioned above, they actually derived the range Doppler formulation, via
a direct azimuth Fourier transform, by using a parabolic approximation of the range equation. Their results are
presented in an influential paper [4]. Their concise derivation is expanded in [5].
In this section, an intuitive derivation is used, which also has the benefit of keeping the hyperbolic form of
the range equation. To arrive at an accurate analytical form of the azimuth Fourier transformed signal, the
following operations can be used:

1. Perform a range Fourier transform.
2. Perform an azimuth Fourier transform.
3. Perform a range inverse Fourier transform.

After the first two steps, the two-dimensional spectrum is obtained. The final step gives the range Doppler
representation of the signal.
In the derivation, the purpose of the initial range Fourier transform is to "expand" the range signal, so that
the spread of energy in the range direction is independent of azimuth. Figure 5.2 shows that the "skewness" of
the signal data is removed by the range Fourier transform, regardless of whether the data have been range
compressed or not. Consequently, each range frequency cell contains the full azimuth exposure, which allows the
application of the POSP in the subsequent azimuth Fourier transform. The remaining sections derive the
appropriate expressions in the three steps.
[Figure 5.2: Target energy before and after the range Fourier transform (FT), showing that the skew of the target energy is removed by the transform.]
In this section, a closed form expression for the range Fourier transform of the baseband received signal, s0(τ, η),
is derived, using the POSP. The range Fourier transform is given by

S0(fτ, η) = ∫_{−∞}^{∞} s0(τ, η) exp{ −j 2π fτ τ } dτ   (5.14)
As discussed in Section 4.3.1, R(η) is a function of the effective velocity, Vr. The parameter Vr is range
variant, but remains constant for the duration of the pulse. For this reason, R(η) is not a function of τ in the
derivative of (5.16). To apply the POSP, one must find the range time when the derivative is zero

τ = 2 R(η)/c + fτ / Kr   (5.17)

Using this expression for τ, the solution to the integral (5.14) can be written as
S0(fτ, η) = A0 A1 Wr(fτ) wa(η − ηc) × exp{ −j 4π (f0 + fτ) R(η) / c } exp{ −j π fτ² / Kr }   (5.18)

where A1 is a constant, and Wr(fτ) = wr(fτ/Kr) is the envelope of the range frequency spectrum. The constant,
A1, contains a phase term of ±π/4, which is not important to the following analysis.
Equation (5.18) represents the signal after the range Fourier transform. In this section, a closed form solution of
the azimuth Fourier transform of S0(fτ, η) is derived, again using the POSP. The azimuth Fourier transform is
given by

S2df(fτ, fη) = ∫_{−∞}^{∞} S0(fτ, η) exp{ −j 2π fη η } dη   (5.19)
Using the phase of S0 from (5.18), the phase of the Fourier integrand is

θ(η) = −( 4π (f0 + fτ) / c ) R(η) − π fτ² / Kr − 2π fη η   (5.20)

Substituting the instantaneous slant range of (4.9) into (5.20), the derivative of θ(η) with respect to η can be
obtained

dθ(η)/dη = −( 4π (f0 + fτ) / c ) Vr² η / √( R0² + Vr² η² ) − 2π fη   (5.21)

which is zero when

fη = −2 Vr² (f0 + fτ) η / ( c √( R0² + Vr² η² ) )   (5.22)

or

η = −c R0 fη / ( 2 Vr² (f0 + fτ) √( 1 − c² fη² / (4 Vr² (f0 + fτ)²) ) )   (5.23)

The two equations above give the one-to-one correspondence between time and azimuth frequency in the
two-dimensional frequency domain, and show that the relationship is dependent upon range frequency, fτ.
Finally, the solution to (5.19) can be expressed as

S2df(fτ, fη) = A0 A1 A2 Wr(fτ) Wa(fη − fηc) exp{ j θa(fτ, fη) }   (5.24)

where A2 is a constant, Wa(fη − fηc) is the envelope of the azimuth frequency spectrum, centered at the
Doppler centroid frequency fηc, and θa(fτ, fη) is the phase angle after the Fourier transform. The constant, A2,
has an unimportant phase of ±π/4. The envelope, Wa(fη), is given by substituting η from (5.23) into wa(η).
Similarly, the phase angle, θa(fτ, fη), of the azimuth Fourier transform result is obtained by combining (5.23),
(4.9), and (5.20)

θa(fτ, fη) = −( 4π R0 (f0 + fτ) ) / ( c √( 1 − c² fη² / (4 Vr² (f0 + fτ)²) ) )
           + ( π c R0 fη² ) / ( Vr² (f0 + fτ) √( 1 − c² fη² / (4 Vr² (f0 + fτ)²) ) ) − π fτ² / Kr   (5.26)

Let D2df(fτ, fη, Vr) denote the square root factor

D2df(fτ, fη, Vr) = √( 1 − c² fη² / (4 Vr² (f0 + fτ)²) )   (5.27)
It is instructive to examine the geometric interpretation of this factor. Substituting (5.22) into (5.27) and
simplifying, D2df becomes

D2df(fτ, fη, Vr) = R0 / R(η) = cos θr   (5.28)

From (4.16), D2df(fτ, fη, Vr) is simply the cosine of the squint angle, θr, at azimuth time, η, for rectilinear
geometry. Since the range migration can be represented by R0/cos θr, D2df(fτ, fη, Vr) is called the migration
factor in the two-dimensional frequency domain.
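This interpretation is easy to verify numerically: choosing an azimuth time η, computing fη from the stationary-phase relation (5.22), and substituting into the square root factor of (5.27) reproduces R0/R(η), the cosine of the instantaneous squint angle. All parameter values below are illustrative assumptions.

```python
import numpy as np

c = 299792458.0
f0 = 5.3e9           # [Hz] carrier frequency, illustrative
f_tau = 4.0e6        # [Hz] a range frequency offset, illustrative
R0 = 850e3           # [m] closest-approach range, illustrative
Vr = 7100.0          # [m/s] effective velocity, illustrative

eta = 0.3                                   # [s] azimuth time, illustrative
R_eta = np.sqrt(R0**2 + Vr**2 * eta**2)     # hyperbolic range equation (4.9)

# Azimuth frequency at this time, from the stationary-phase relation (5.22)
f_eta = -2.0 * Vr**2 * (f0 + f_tau) * eta / (c * R_eta)

# Square root (migration) factor of (5.27) ...
D2df = np.sqrt(1.0 - c**2 * f_eta**2 / (4.0 * Vr**2 * (f0 + f_tau)**2))

# ... equals the cosine of the instantaneous squint angle, R0 / R(eta)
cos_squint = R0 / R_eta
```

The identity holds exactly for any fτ and η, since c²fη²/(4Vr²(f0+fτ)²) reduces to Vr²η²/R²(η) when (5.22) is substituted.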
Equation (5.26) shows that the phase in the two-dimensional frequency domain is proportional to
R0 D2df(fτ, fη, Vr). Unlike the range equation, (5.47), the factor D2df, the cosine of the squint angle, appears in
the numerator here. A geometric interpretation of this property is given in the appendix of Chapter 8. The phase
term given by (5.26), from which the phase coupling is derived, has a geometric interpretation introduced by
Raney [8]. When data are acquired at a nonzero squint angle, the effective wavelength in the two-dimensional
frequency domain or range Doppler domain is altered from the original wavelength. This geometric interpretation
is discussed in Section 8.4.4.
The range frequency envelope, Wr(fτ), in (5.24) does not exhibit a shift due to RCM. The phase, θa(fτ, fη),
in (5.26) contains the azimuth modulation, the RCM, as well as the coupling between range and azimuth (this
coupling is addressed by secondary range compression, as presented in the next section). To highlight the different
phase terms, the phase, θa(fτ, fη), of (5.26) can be rewritten as

θa(fτ, fη) = −( 4π R0 f0 / c ) √( D²(fη, Vr) + 2 fτ / f0 + fτ² / f0² ) − π fτ² / Kr   (5.29)

where

D(fη, Vr) = √( 1 − c² fη² / (4 Vr² f0²) )   (5.30)
Equation (5.24), with (5.26) or (5.29), represents the two-dimensional signal spectrum. It is an important
expression when processing is performed in the two-dimensional frequency domain, such as in the omega-K
algorithm described in Chapter 8. In the above derivation, no approximations related to the squint angle have
been made. Hence, the equation is an accurate representation of the signal spectrum, regardless of the squint, as
long as the range locus can be represented by a hyperbola. The availability of this expression allows one to
derive matched filters directly in the two-dimensional frequency domain, which makes processing in this domain
attractive.
In the digital processor implementation, it does not matter whether the range Fourier transform or the
azimuth Fourier transform is performed first. In either case, the two-dimensional signal spectrum can be
accurately represented by the above equations.
The equations in the preceding section represent the signal in the two-dimensional frequency domain. To arrive at
the signal representation in the range Doppler domain, a range inverse Fourier transform is performed on
S2df(fτ, fη)

Srd(τ, fη) = ∫_{−∞}^{∞} S2df(fτ, fη) exp{ j 2π fτ τ } dfτ   (5.31)

The analytical solution can be obtained by the POSP. However, a direct application of this method results in a
quartic equation in fτ. Although a closed form solution of a quartic equation exists [9], the algebra is tedious. To
circumvent the detailed algebraic manipulations, an approximation can be made to the phase term, θa(fτ, fη), in
the integrand.
By expanding the square root term of (5.29) in a power series of fτ, and keeping terms up to fτ², θa(fτ, fη)
becomes

θa(fτ, fη) ≈ −( 4π R0 f0 / c ) [ D(fη, Vr) + fτ / ( f0 D(fη, Vr) ) − c² fη² fτ² / ( 8 Vr² f0⁴ D³(fη, Vr) ) ] − π fτ² / Kr   (5.32)
Reference [10] also gives a Taylor series expansion of the phase function (5.29). The higher order terms missing
from (5.32) can be ignored when
(5.33)
This assumption, the only one used so far in the derivations, fails for high squint angles as fη increases. A
derivation that keeps more higher order terms in the expansion of (5.32) is presented in [11].
The first term inside the square brackets in (5.32) is due to the azimuth modulation, the second term is due
to the RCM, and the last term is due to the cross coupling between range and azimuth. The cross coupling is
particularly important to the target focusing when the squint is high. From the last term of (5.32), the cross
coupling phase is

π c R0 fη² fτ² / ( 2 Vr² f0³ D³(fη, Vr) )   (5.34)

This phase represents an additional range modulation due to the squint, which appears after the azimuth Fourier
transform. The origin of this phase coupling can also be visualized from a signal processing point of view, as
described in Appendix 5A.
Using (5.32), the phase in the integrand in (5.31) is

θ(fτ) = −( 4π R0 f0 / c ) [ D(fη, Vr) + fτ / ( f0 D(fη, Vr) ) − c² fη² fτ² / ( 8 Vr² f0⁴ D³(fη, Vr) ) ] − π fτ² / Kr + 2π fτ τ   (5.35)
Now, the POSP can be applied with ease. The derivative of θ(fτ) with respect to fτ is given by

dθ(fτ)/dfτ = −4π R0 / ( c D(fη, Vr) ) + 2π Z fτ − 2π fτ / Kr + 2π τ   (5.36)

where

Z = c R0 fη² / ( 2 Vr² f0³ D³(fη, Vr) )   (5.37)

Setting this derivative to zero, the stationary point is

fτ = ( Kr / (1 − Kr Z) ) [ τ − 2 R0 / ( c D(fη, Vr) ) ]   (5.38)
Substituting this expression into (5.35) and manipulating the terms, the solution to the range inverse Fourier
transform (5.31) can be expressed as

Srd(τ, fη) = A0 A3 wr[ τ − 2 R0 / ( c D(fη, Vr) ) ] Wa(fη − fηc)
           × exp{ −j 4π R0 f0 D(fη, Vr) / c } exp{ j π Km [ τ − 2 R0 / ( c D(fη, Vr) ) ]² }   (5.39)

where A3 is a constant having an unimportant phase of ±π/4, and the new range FM rate is

Km = Kr / (1 − Kr Z)   (5.40)
Equation (5.39) represents the target spectrum in the range Doppler domain. The same result would have been
obtained if an azimuth Fourier transform had been applied directly to the raw data, s0(τ, η), but the
mathematical development is difficult.
The term inside the square brackets in the range envelope, wr, represents the RCM shift. The first
exponential term represents the azimuth modulation, −4π R0 D(fη, Vr) f0/c, caused by the range migration. The
second exponential term represents the range modulation. Note that the range phase in the second exponential
term is zero along the target trajectory, that is, at τ = 2 R0 / [c D(fη, Vr)].
The following points should be noted about the range Doppler spectrum.
o Typically, |Kr| << 1/Z, so that |Kr Z| << 1. From (5.40), it can be seen that Km is only slightly different
from Kr. However, the difference may be large enough to cause misfocusing.
o The length of the pulse envelope is altered by a factor 1/(1 − Kr Z). Since |Kr Z| << 1, the change in
length is not significant.
o The FM rate of the radar pulse is altered by the same factor 1/(1 − Kr Z). This change in FM rate is
caused by the coupling between range and azimuth. The change is range dependent through the R0 term in
Z. The factor 1/Z is called the secondary range compression (SRC) filter among the SAR community. This
filter was first discovered by Jin and Wu [4], and is explored in detail in Chapter 6. The new FM rate, Km,
can be viewed as the combined FM rate of the radar pulse and the SRC filter.
o It should be emphasized that the range Doppler spectrum, Srd(τ, fη), given in (5.39), is not exact. It has
made use of the approximation in (5.33). This equation is the starting point of some processing algorithms,
such as the range Doppler and chirp scaling to be discussed in later chapters. For very high squints, the
approximation may not hold, and the validity of the approximation should be analyzed in each case.
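A rough feel for the size of the SRC effect can be obtained numerically, assuming the forms D(fη, Vr) = √(1 − c²fη²/(4Vr²f0²)) and Z = cR0fη²/(2Vr²f0³D³) used above. All parameter values are illustrative, RADARSAT-like assumptions, not values from the text.

```python
import numpy as np

c = 299792458.0
f0 = 5.3e9              # [Hz] C-band carrier, illustrative
R0 = 850e3              # [m] slant range, illustrative
Vr = 7100.0             # [m/s] effective velocity, illustrative
Kr = 4.0e11             # [Hz/s] range chirp FM rate, illustrative
f_eta = 2000.0          # [Hz] azimuth (Doppler) frequency, illustrative

D = np.sqrt(1.0 - c**2 * f_eta**2 / (4.0 * Vr**2 * f0**2))   # migration factor (5.30)
Z = c * R0 * f_eta**2 / (2.0 * Vr**2 * f0**3 * D**3)         # assumed form of (5.37)
Km = Kr / (1.0 - Kr * Z)                                     # modified FM rate (5.40)
```

For these numbers Kr·Z is of the order of 10⁻⁵, so Km differs from Kr only by parts in 10⁵; yet over a long chirp such a fractional FM-rate mismatch can accumulate enough quadratic phase error to defocus the compressed pulse, which is why SRC matters at higher squints.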
The azimuth frequency versus time relationship in the range Doppler domain can be obtained by substituting
(5.38) into (5.22) and (5.23), but the algebraic manipulations are tedious. A shortcut is as follows. The term
τ − 2 R0/[c D(fη, Vr)] in (5.39) represents the RCM. After range compression, the energy is focused at the range
position, τ = 2 R0/[c D(fη, Vr)], that is, at the range position where fτ = 0, according to (5.38). Another way
to visualize this effect is to recall that range compression serves to remove the quadratic phase modulation, so
that a radar transmitting a very narrow pulse is emulated with a center frequency of f0.
Setting fτ = 0 in (5.22) and (5.23), azimuth frequency and time are then related by

fη = −2 Vr² f0 η / ( c √( R0² + Vr² η² ) )   (5.41)

or

η = −c R0 fη / ( 2 Vr² f0 √( 1 − c² fη² / (4 Vr² f0²) ) )   (5.42)
The first equation can also be derived by differentiating the range equation

fη = −(2/λ) dR(η)/dη   (5.43)

The relationship can also be written as

fη = −Ka,dop η   (5.44)

where Ka,dop is a special FM rate used just to express this azimuth time to frequency relationship. It is given by

Ka,dop = 2 Vr² f0 / ( c R(ηc) )   (5.45)

and is a slowly varying function of ηc. In fact, if the range equation is a parabola, R(ηc) in the denominator is
replaced by R0, and then Ka,dop equals Ka in (5.3), and is independent of ηc.
It can be seen that this FM rate is related to the signal FM rate, Ka in (4.38), by

Ka,dop = Ka / cos² θr,c   (5.46)

More details of this relationship are given in Appendix 5B, where a geometric interpretation of the difference
between Ka and Ka,dop is included.
The RCM in the range Doppler domain can now be found. From the term inside the square brackets in (5.39),
the RCM in this domain is

Rrd(fη) = R0 / √( 1 − c² fη² / (4 Vr² f0²) ) = R0 / D(fη, Vr)   (5.47)

The last equality in (5.47) is obtained by virtue of the definition of D(fη, Vr), given in (5.30). From the
above equation, it is seen that D(fη, Vr) should be the cosine of the instantaneous squint angle. This is indeed
true, as can be derived by substituting (5.41) into (5.30), as is done in the two-dimensional case with
D2df(fτ, fη, Vr). For the above reason, D(fη, Vr) is called the migration factor in the range Doppler domain.
The received signal experiences a Doppler shift because of the relative motion of the sensor and the target. This
Doppler shift exists in the received, demodulated signal, but it is hardly noticeable because the shift is very small
compared to the radar's pulse bandwidth. However, when observed in the azimuth direction, this Doppler shift is
very noticeable, and is fundamental to the azimuth phase encoding and azimuth processing. An important
parameter of the Doppler shift is the average Doppler shift, called the Doppler centroid.
As a result of the PRF sampling of the radar data, the Doppler centroid can be aliased, so that its
apparent value may be different from the absolute one. In this section, the implications of azimuth aliasing and
how it affects the apparent Doppler centroid are discussed. How the Doppler centroid is affected by the antenna
pointing angle, and the implications of the Doppler centroid varying with range, are also discussed.
This azimuth aliasing has a number of interesting properties, and implications for the signal processor. The
frequency used in Section 5.3.2 to derive the azimuth spectrum is the absolute azimuth frequency, which is what
the frequency would be had there been no aliasing. However, the radar system and signal processor only observe
azimuth frequencies in the range ( -Fa/2, + Fa/2] (or the range ( 0, + Fa ], depending upon the interpretation of
the digital frequency space).
The ambiguous aliasing of azimuth frequencies is illustrated in Figure 5.3. The antenna pattern is not shown,
in order to emphasize the aliasing effect. Figure 5.3(a, b) shows the azimuth signal, sampled at two different
rates. For clarity, the signals have been drawn as continuous lines, rather than as individual samples. In Figure
5.3(a), a high sampling rate has been used, equal to the Nyquist rate, and no aliasing has occurred. In Figure
5.3(b), the signal has been subsampled by a factor of four, and aliasing has occurred. This is seen by the fact
that the phase pattern of the signal is repeated four times in Figure 5.3(b). Aliasing has replaced the unique
waveform of the top panel by the ambiguous waveform in the middle panel.
This aliasing also becomes apparent when the frequency of the signal is observed. In Figure 5.3(c), the
dashed line gives the frequency of the unaliased signal of Figure 5.3(a), while the solid line gives the frequency
of the undersampled or aliased signal of Figure 5.3(b). Note that the frequencies are correctly represented in the
undersampled signal when the frequencies in the unaliased signal lie in the range (−Fa/2, +Fa/2], but outside
this range, the frequencies are folded or aliased into the (−Fa/2, +Fa/2] range. Again, the aliasing of this linear
FM signal makes the signal appear to "repeat itself" four times in the frequency domain.
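The folding of an absolute azimuth frequency into the observable interval (−Fa/2, +Fa/2] can be sketched with a small helper; the function name and PRF value below are hypothetical illustrations, not from the text.

```python
def apparent_centroid(f_abs, Fa):
    """Fold an absolute azimuth frequency into the observable baseband
    interval (-Fa/2, +Fa/2] imposed by the PRF sampling."""
    f = ((f_abs + Fa / 2.0) % Fa) - Fa / 2.0
    if f == -Fa / 2.0:          # map the open lower edge to +Fa/2
        f = Fa / 2.0
    return f

Fa = 1700.0                                 # [Hz] PRF, illustrative
# An absolute frequency of 2500 Hz lies outside (-850, +850] and aliases:
f_obs = apparent_centroid(2500.0, Fa)
ambiguity = round((2500.0 - f_obs) / Fa)    # the integer number of PRFs folded away
```

Here 2500 Hz is observed as 800 Hz, one PRF below its true value; the integer PRF offset is exactly the Doppler ambiguity discussed later in this chapter.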
The "PRF time" is defined to be the time that it takes for the azimuth chirp to cover one PRF of the
frequency band, equal to 32 samples in Figure 5.3. The PRF time is introduced to emphasize the aliasing effect
caused by the "PRF sampling" of the radar system. This time, denoted by ΔηPRF, can be obtained from (5.42)

ΔηPRF = c R(ηc) Fa / ( 2 f0 Vr² cos² θr,c )   (5.48)
With Ka given by (4.38), it is interesting to note that the PRF time can be rewritten as

ΔηPRF = Fa / Ka   (5.49)
Because the antenna pattern has azimuth sidelobes that extend well beyond the width of the main beam,
there is always a certain degree of aliasing in practical situations as a result of the PRF constraints. The aliasing
causes "ghost targets" to appear in the compressed data, because the matched filter perceives the phase pattern
of each of the aliased parts of the signal to be identical. The aliasing is further illustrated in Figure 5.4, where
the azimuth beam pattern and compressed data are shown. The figure assumes that the antenna has a
sinc-squared beam pattern and is pointing to zero Doppler.
[Figure 5.3: Aliasing of the sampled azimuth signal. The upper panels show the azimuth signal sampled at the Nyquist rate and subsampled by a factor of four; the bottom panel plots the un-aliased frequencies (dashed) and the aliased frequencies (solid) against azimuth (samples).]
The data are compressed to zero Doppler, which means that target energy appears where the phase patterns
in Figure 5.4(a) are stationary. The main part of the target energy is compressed to Sample 64 in Figure 5.4(c),
but portions of the target energy also get registered to Samples 0, 32, 96, and 128. The weaker groups of target
energy are aliases of the main target, and are sometimes referred to as ghost targets. The ghost targets are
azimuth ambiguities and are separated by the PRF time, 32 samples in our example. As most of the received
signal energy is confined within one PRF time [see Figure 5.4(b)], the ambiguity power will be small.
[Figure 5.4: Illustration of azimuth ambiguities caused by aliasing of the azimuth chirp. (a) Phase of the azimuth signal; (b) two-way antenna beam pattern, with the aliased regions and the PRF time marked; (c) compressed target and aliases (ghosts). Horizontal axis: azimuth (samples).]
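The ghost-target mechanism described above can be reproduced numerically. The sketch below builds an azimuth chirp whose sampled version repeats exactly every PRF time (32 samples), weights it with an assumed sinc-squared beam pattern, and compresses it against the chirp replica; ambiguity peaks then appear one PRF time away from the main response:

```python
import numpy as np

N = 128                  # azimuth samples in the exposure (4 PRF times)
PRF_TIME = 32            # samples per PRF time, as in the example above
ka = 1.0 / PRF_TIME      # normalized FM rate: the sampled chirp repeats every 32 samples

n = np.arange(N)
chirp = np.exp(1j * np.pi * ka * (n - N // 2) ** 2)   # aliased azimuth chirp

# Stand-in sinc-squared two-way beam pattern, peaked at the beam center (assumed shape)
beam = np.sinc((n - N // 2) / 40.0) ** 2
signal = beam * chirp

# Compress by correlating against the full-length chirp replica
out = np.abs(np.correlate(signal, chirp, mode="full"))
lags = np.arange(-N + 1, N)

main_lag = lags[np.argmax(out)]   # main target compresses to zero lag
i0 = N - 1                        # index of zero lag in the 'full' output
ghost = out[i0 + PRF_TIME]        # ambiguity response one PRF time away
```

The ghost response is weaker than the main peak because only the beam energy beyond one PRF time contributes to it.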
[Figure 5.5: Doppler centroid with and without squint. (a) Real part of signal (zero squint); (b) signal spectrum (zero squint); (c) real part of signal (with squint); (d) signal spectrum (with squint). Axes: time (samples) and frequency (cells). The vertical dashed line indicates the time that the beam center crosses the target.]
A signal with a nonzero Doppler centroid is shown in the bottom row of Figure 5.5. The antenna beam is
squinted forward, so that the point of closest approach occurs around Sample 48, at a time after the beam center
crosses the target. The closest approach occurs three-quarters of the way through the exposure, so the Doppler
centroid is at the normalized frequency of 0.25 cycles/sample, or one-quarter of a PRF. In Figure 5.5(d), it is
seen that the peak frequency occurs at Cell 16, one-quarter of the way along the frequency axis. Note that when
the beam is squinted forward, the Doppler centroid frequency is positive, which is in agreement with (4.34).
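This squinted-spectrum behavior is easy to reproduce. The sketch below synthesizes a 64-sample chirp with closest approach at Sample 48, applies a stand-in beam weighting peaked at the beam center (Sample 32), and locates the spectral peak near Cell 16 (0.25 cycles/sample); the FM rate is an assumed value chosen to match this geometry:

```python
import numpy as np

N = 64
ka = 0.25 / 16.0    # normalized azimuth FM rate (cycles/sample^2); assumed value
n = np.arange(N)

# Closest approach at sample 48: the instantaneous frequency is -ka*(n - 48),
# which passes through +0.25 cycles/sample at the beam center (sample 32)
phase = -np.pi * ka * (n - 48) ** 2
beam = np.hanning(N + 1)[:N]        # stand-in beam weighting, peaked at sample 32
signal = beam * np.exp(1j * phase)

spec = np.abs(np.fft.fft(signal))
peak_cell = int(np.argmax(spec))
fdc = float(np.fft.fftfreq(N)[peak_cell])   # estimated Doppler centroid (cycles/sample)
```

The spectral peak lands near Cell 16 of 64, i.e., near one-quarter of a PRF.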
There are several main points of interest about the Doppler centroid:
Doppler ambiguity: Because of the possibility of azimuth aliasing, the Doppler centroid, as observed in the
azimuth spectrum, is not unique. An ambiguity exists because the true center frequency may not lie within
the fundamental frequency range observed in the spectra, which is (−Fa/2, +Fa/2].
Fractional and integer PRF parts: Because of this ambiguity, it is convenient to express the unaliased
centroid in units of PRFs, and consider two components, the fractional PRF part and the integer PRF
part.
Processing requirements: Some functions in the signal processing chain (e.g., basic azimuth compression)
require that only the fractional part of the centroid be known. Other functions (e.g., RCMC and SRC)
require that both parts of the centroid be known.
Discontinuity in spectra: The azimuth spectrum has a minimum at a certain frequency, as illustrated in
Figure 5.5. Because of symmetry, the minimum lies ±PRF/2 away from the frequency of maximum
energy, or Doppler centroid. In order to process the main part of the azimuth energy over one contiguous
frequency band, the spectrum must be separated into two parts at its minimum point [Cell 32 in Figure 5.5(b)
and Cell 48 in Figure 5.5(d)], and the two parts swapped left to right, so that the spectrum has a
"continuous" horizontal frequency axis when used in the signal processor.
Estimation: The Doppler centroid can be found from a geometry model using the satellite's orbit and attitude
data. However, this estimate is usually not accurate enough, which means that the centroid must be
estimated from the received radar data. Different algorithms are needed to estimate the fractional and integer
parts of the centroid. This is discussed in Chapter 12.
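The splitting-and-swapping of the spectrum at its minimum, described in the list above, amounts to a circular shift; a minimal sketch, with a synthetic spectrum shape standing in for real data:

```python
import numpy as np

def split_at_minimum(spec, centroid_cell):
    """Circularly rotate a one-PRF-long azimuth spectrum so that it is cut at
    its minimum (PRF/2 away from the Doppler centroid) and the centroid lands
    at the middle cell, giving one contiguous frequency band."""
    N = len(spec)
    return np.roll(spec, N // 2 - centroid_cell)

# Example matching Figure 5.5(d): centroid at Cell 16 of a 64-cell spectrum
base = np.hanning(65)[:64]          # synthetic spectrum shape, peaked at cell 32
spec = np.roll(base, 16 - 32)       # move the peak (centroid) to cell 16
centered = split_at_minimum(spec, 16)
```

After the shift, the centroid sits at the middle cell and the spectrum minimum sits at the array boundary, so the frequency axis is continuous across the band being processed.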
The Doppler ambiguity can be seen in the bottom two rows of Figure 5.3. In this example, the length of the
data is four PRF times, and the unaliased stationary phase point can be in any one of five positions (five,
because both ends of the exposure are included). When the aliased spectrum is observed, one cannot tell which is
the true stationary point. It is the function of the Doppler ambiguity resolver to find the correct
ambiguity.
The Doppler ambiguity has been introduced by a simple example in the previous section. A more formal
explanation now follows. Figure 5.6 illustrates the absolute Doppler centroid frequency in the general case.
Because of the PRF sampling of the signal, the spectrum is aliased. The spectrum at "baseband" is the only part
that is visible in the received data. The Doppler centroid frequency, estimated from the location of the peak in
the observed spectrum, lies within ±0.5 Fa, where Fa is the PRF. This observed Doppler centroid frequency is
called the fractional PRF part of the Doppler centroid, because its possible range is one PRF. Using a prime to
distinguish it from the absolute centroid, it is denoted by f'_ηc.
Because of the aliasing, the observed spectrum in Figure 5.6 looks identical when the absolute or unaliased
Doppler centroid frequency, f_ηc, is changed by an integer multiple of the PRF. The frequency difference between
f'_ηc and f_ηc is an integer multiple of the PRF. This integer multiple, denoted by Mamb, is called the Doppler
ambiguity. In other words, as in (2.52), the unaliased Doppler centroid is given by

f_{\eta_c} = f'_{\eta_c} + M_{amb}\,F_a    (5.50)

where the fractional PRF part lies in the range

-\frac{F_a}{2} < f'_{\eta_c} \le +\frac{F_a}{2}    (5.51)
Note that the ambiguity number indicates the position along the frequency axis of the absolute Doppler centroid.
The spectral energy of the received data usually spans more than one ambiguity number, as illustrated in Figure
5.6.
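The decomposition into fractional and integer PRF parts in (5.50) can be sketched as follows; the centroid and PRF values used are hypothetical:

```python
def split_centroid(f_abs, prf):
    """Decompose an absolute Doppler centroid into a fractional PRF part in
    (-prf/2, +prf/2] and the integer ambiguity number Mamb, per (5.50)."""
    m = round(f_abs / prf)
    f_frac = f_abs - m * prf
    if f_frac <= -prf / 2:      # keep the fractional part in (-prf/2, +prf/2]
        f_frac += prf
        m -= 1
    return f_frac, m

# A hypothetical absolute centroid of -2350 Hz observed with a 1000-Hz PRF
f_frac, mamb = split_centroid(-2350.0, 1000.0)
```

Here the observed (fractional) centroid is -350 Hz and the ambiguity number is -2, so the absolute centroid is recovered as -350 + (-2)(1000) = -2350 Hz.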
Figure 5.6: Absolute Doppler spectrum and the aliased, observed version.
The ambiguity number is a function of platform/Earth geometry, and the radar beam's attitude angles. In
some sensors, such as the European Earth remote sensing satellites, ERS and ENVISAT, the beam is steered in
the yaw coordinate as a function of latitude to compensate for the effect of Earth rotation. This yaw steering
has the effect of making the Doppler ambiguity number equal to zero most of the time. In ERS, yaw steering is
done to align the beams of other sensors, but it has the secondary and beneficial effect of making the unaliased
Doppler centroid near zero. This reduces the range and azimuth coupling in the signal spectrum and simplifies
the SAR processing.
In some satellite sensors, such as RADARSAT-1 (or ERS in "roll/tilt" mode), the beam is not steered to zero
Doppler, but is controlled to point perpendicular to the satellite heading. In this case, Earth rotation creates an
effective yaw of up to 4°, and the Doppler ambiguity number varies in a regular fashion around the orbit. The
ambiguity number is near zero at the extreme latitudes, and is at a maximum at the equator. In RADARSAT,
the ambiguity number typically varies between ±9.
In SAR processing, the absolute or unaliased Doppler centroid frequency is required for some operations.
Chapter 12 shows how this ambiguity can be resolved. In summary, the role of the Doppler centroid estimator is
to enable each frequency cell to be labeled with the unambiguous or absolute frequency value, as in the right
hand side of Figure 5.6.
In processing algorithms operating in the range frequency domain, the Doppler centroid should not vary much
over a processing range swath. If it does, and the PRF is not high enough, the energies at the beginning and
end of the azimuth spectra will get mixed, and the azimuth ambiguity level will rise (see Section 5.5). However,
the amount of centroid variation with range is beyond the control of the processing algorithm; it is strictly a
property of the SAR data collection geometry. This section explains the origin of the Doppler centroid variation
with range, and how it is affected by antenna yaw and pitch.
Consider the two-dimensional spectra of two targets, one at near range and one at far range. The far range
target is assumed to have a different Doppler centroid than the near range target, as illustrated in Figure 5.7.
Each panel represents a typical spectrum in the presence of squint. The gap in the azimuth spectrum has a
slope, owing to the fact that the Doppler centroid frequency (4.33) is a function of the wavelength, c/(f0 + fτ), in
this domain. The gaps in the spectra are important to the processing algorithm, as these gaps are needed to keep
the energy from the end of the exposure from wrapping around into the beginning of the exposure. If wrapping
does occur, the phase history of the signal becomes confused, and azimuth ambiguities will be noticed.
In Figure 5.7, it is seen that the gaps in the azimuth spectra of the two targets do not coincide. When the
two targets are processed in the same data block, the two-dimensional spectrum is the sum of the two individual
spectra, by the superposition principle. When these two spectra are summed, the azimuth spectral gap disappears because the
centroids are too far apart. Hence, it is not always possible to identify a common gap in the azimuth spectrum
when processing a large range swath. This problem will be addressed when describing each of the SAR processing
algorithms in subsequent chapters.
Antenna Yaw
To see how antenna attitude affects the Doppler centroid as a function of range, consider first the case of pure
yaw.

[Figure 5.9: The effect of antenna pitch on the Doppler centroid variation. The geometry shows the radar, the slant range, R, and the ground offset traversed by the beam footprint.]
Antenna Pitch
In contrast, if squint is obtained by pitching the antenna, the beam center offset time, as shown by the time to
traverse X_g in Figure 5.9, is

\eta_c = -\frac{X_g}{V_g} = -\frac{(h+d)\,\tan\theta_{pit}}{V_g}    (5.56)

where h is the platform altitude, θ_pit is the pitch angle, and V_g is the footprint velocity on the ground.
The beam center offset time in (5.56) is approximately constant with range, because the beam centerline is
nearly parallel with the zero Doppler line. The Doppler centroid frequency is then obtained by substituting (4.30)
into (5.56), and then using (4.10)
f_{\eta_c} = -\frac{2\,V_r^2}{\lambda R(\eta_c)}\,\eta_c = -\frac{2\,V_r^2}{\lambda R(\eta_c)}\left[-\frac{(h+d)\,\tan\theta_{pit}}{V_g}\right] = \frac{2\,V_s\,(h+d)\,\tan\theta_{pit}}{\lambda R(\eta_c)}    (5.57)
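Equations (5.56) and (5.57) can be exercised numerically. The parameter values below are assumed, illustrative stand-ins (not taken from the text), but they produce a centroid of roughly the 1300-Hz magnitude quoted later for a 0.36° pitch:

```python
from math import tan, radians

def pitch_doppler(h_plus_d, pitch_deg, Vs, Vg, lam, R):
    """Beam center offset time (5.56) and the resulting Doppler centroid
    (5.57) for a pitched antenna, using Vr^2 = Vs*Vg."""
    eta_c = -h_plus_d * tan(radians(pitch_deg)) / Vg     # (5.56)
    f_etac = -2.0 * (Vs * Vg) / (lam * R) * eta_c        # (5.57)
    return eta_c, f_etac

# Assumed RADARSAT-like values (illustrative only)
eta_c, f_etac = pitch_doppler(h_plus_d=800e3, pitch_deg=0.36,
                              Vs=7550.0, Vg=6900.0, lam=0.0566, R=1000e3)
```

A positive (upward) pitch gives a negative beam center offset time and hence a positive Doppler centroid, on the order of 1.3 kHz for these assumed values.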
Attitude Compensation
The purpose of the above analysis is to illustrate the different effects of antenna yaw and pitch on the Doppler
centroid frequency and especially its variation with range. These effects are illustrated in Figure 5.10, where the
RADARSAT-1 W3 beam is chosen as an example. The most northerly point of the orbit is chosen to minimize
the effects of Earth rotation (its contribution is zero when the yaw and pitch are zero). It is noted that a yaw
angle of 0.5° (counterclockwise as seen from above) creates a Doppler frequency of about 1300 Hz. The variation
of Doppler is almost linear with range, with a small positive slope (see the dashed line). In contrast, a pitch of
0.36° (satellite tilting up) causes a Doppler shift of about the same amount, but with a small negative slope (see
the solid line).
The geometric approximations used for a curved Earth in the derivations are sufficient to serve the above
purpose. References [5, 12] give a rigorous derivation of Doppler frequency for a curved and rotating Earth. No
approximations are made to obtain the data in Figure 5.10, except to assume that the orbit is circular.
Because of the different effects of yaw and pitch, it is apparent that the satellite attitude can be controlled
to achieve certain effects, such as making the Doppler centroid constant with range, or setting it to an arbitrary
value [13]. In ERS, the attitude is "yaw steered" to make the Doppler centroid approximately zero, so that the
front and rear beams of the scatterometer are aligned. This also has the effect of simplifying the SAR processing,
as the coupling between range and azimuth is largely eliminated, and RCM is quite small. RADARSAT-2 will
also have yaw steering.
The effect of yaw and pitch steering on the Doppler centroid is illustrated in Figure 5.11. Here, the
RADARSAT-1 W3 beam is used again, but the latitude is set to 77° (ascending orbit) to obtain a moderate
Earth rotation component in the Doppler centroid. When the satellite attitude is zero, the Doppler centroid is
approximately -2000 Hz (see the dashed curve). A change in the yaw angle would be sufficient to move the
Doppler centroid to near zero, but the slope of Doppler versus range would not be exactly zero.
[Figure 5.10: Doppler centroid frequency versus slant range (980 to 1090 km) for specific yaw and pitch values; example of the RADARSAT-1 W3 beam at latitude 81.46°. Dashed curve: pitch = 0.000°, yaw = 0.500°; solid curve: pitch = 0.360°, yaw = 0.000°.]
However, by adjusting the yaw and pitch appropriately, both the Doppler offset and the slope can be set to
zero, so that the beam centerline coincides with the zero Doppler line. This adjustment is illustrated by the
dot-dash line, where a yaw of 0.676° is applied, and by the solid line, where a pitch of 0.064° is added to drive
both the Doppler offset and the slope to zero. Note that roll has little effect upon the Doppler centroid, but the
roll angle is very important to obtain correct illumination of the desired range swath. Yaw and pitch steering is
discussed further in Section 12.3.2.
[Figure 5.11: Doppler centroid frequency versus slant range (980 to 1090 km) for the RADARSAT-1 W3 beam at latitude 77°. Dashed curve: pitch = 0.000°, yaw = 0.000°; dot-dash curve: pitch = 0.000°, yaw = 0.676°; solid curve: pitch = 0.064°, yaw = 0.676°.]
The instantaneous slant range, R(η), changes with azimuth time, η, according to (4.9), in which R(η) is expressed
as a hyperbolic function of η. The equation represents the target trajectory, in distance units, as a function of
azimuth time. The separation between range samples is c/(2Fr), where Fr is the range sampling rate. This means
that in signal memory, the trajectory migrates through range cells during the exposure time of the target, hence
the name "range cell migration," or RCM.
This migration complicates the processing, but, ironically, it is an essential feature of SAR. It is this variation
of slant range with time that imposes an FM characteristic on the signal in the azimuth direction. The purpose
of this section is to examine further the form of the RCM in the different domains.
5.5.1 RCM Components
The hyperbolic form of the slant range equation can be expanded in a power series, resulting in a linear RCM
component, a quadratic RCM component, and higher order terms. These components are examined in this section.
The first generation of satellite SAR processors used the power series expansion of the range equation in the
time domain and the range Doppler domain. Later, it was discovered that the hyperbolic form could be kept in
all domains, thereby improving the processing accuracy, even for wide aperture systems. However, the power series
expansion is sometimes useful for analysis purposes.
Compared to (5.1), where the expansion was taken around time η = 0, more accuracy can be retained by
expanding about the beam center time, η_c. Expanding (4.9) about η_c and using (4.30), R(η) can be expressed as

R(\eta) = R(\eta_c) + \frac{V_r^2\,\eta_c}{R(\eta_c)}\,(\eta-\eta_c) + \frac{1}{2}\,\frac{V_r^2\cos^2\theta_{r,c}}{R(\eta_c)}\,(\eta-\eta_c)^2 + \cdots
        = R(\eta_c) - V_r\sin\theta_{r,c}\,(\eta-\eta_c) + \frac{1}{2}\,\frac{V_r^2\cos^2\theta_{r,c}}{R(\eta_c)}\,(\eta-\eta_c)^2 + \cdots    (5.58)

in which the higher order terms can usually be ignored, if the exposure time is moderate. Note that, while the
higher order terms can be ignored in RCM equations because they are very small compared to the range cell
size, they usually cannot be ignored in azimuth phase equations (e.g., through the range dependent azimuth FM
rate, Ka).
The linear and quadratic components of RCM are illustrated in Figure 5.12 for the C-band satellite SAR
parameters given in Table 4.1. The squint angle is set to 0.3°, so that the quadratic RCM relative to the linear
part is observable in the plot. The linear component is tangent to the RCM trajectory at the beam center offset
time, η_c, and is represented by the (η − η_c) term in (5.58). The quadratic component is approximately given by
the difference between the target trajectory and the linear component, and is represented by the (η − η_c)² term
in (5.58).
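The claim that the higher order terms of (5.58) are negligible over a moderate exposure can be checked against the exact hyperbola; the parameters below are assumed C-band-like values:

```python
import numpy as np

Vr, R0 = 7100.0, 850e3    # effective radar velocity and closest-approach range (assumed)
eta_c = 0.3               # beam center offset time (s), a small squint

def slant_range(eta):     # hyperbolic range equation, per (4.9)
    return np.sqrt(R0**2 + Vr**2 * eta**2)

Rc = slant_range(eta_c)
slope = Vr**2 * eta_c / Rc              # first-order coefficient in (5.58)
curv = Vr**2 * (R0 / Rc) ** 2 / Rc      # Vr^2 cos^2(theta_rc) / R(eta_c)

eta = np.linspace(eta_c - 0.32, eta_c + 0.32, 101)   # ~0.64-s exposure
series = Rc + slope * (eta - eta_c) + 0.5 * curv * (eta - eta_c) ** 2
max_err = float(np.max(np.abs(slant_range(eta) - series)))
```

For these values the second-order series tracks the hyperbola to well under a millimeter, far below a range cell, while the linear RCM slope is many meters per second.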
It is necessary to find the total RCM within the target exposure, Ta, to understand the magnitude of the
correction problem. For the low-squint case, where cos θ_r,c ≈ 1 and the zero Doppler point lies within the
exposure (i.e., |η_c| < Ta/2), the total RCM is given by
\Delta R_{total} \approx \frac{V_r^2}{2\,R(\eta_c)}\left(\frac{T_a}{2} + |\eta_c|\right)^2    (5.59)

[Figure 5.12: Total RCM trajectory and its quadratic component versus range migration (m) over the exposure; the linear component is the tangent at the beam center time.]
In this case, the total RCM is dominated by the quadratic component. In the higher squint case, the total RCM
is given by

\Delta R_{total} = R\!\left(|\eta_c| + \frac{T_a}{2}\right) - R\!\left(|\eta_c| - \frac{T_a}{2}\right) \approx \frac{V_r^2}{R(\eta_c)}\,T_a\,|\eta_c|    (5.60)

and is dominated by the linear component. The quadratic component of the RCM is

\Delta R_{quad} = \frac{1}{2}\,\frac{V_r^2\cos^2\theta_{r,c}}{R(\eta_c)}\left(\frac{T_a}{2}\right)^2    (5.61)
Substituting for Ta from (4.37) and simplifying, the quadratic component of RCM is

\Delta R_{quad} = \frac{0.886^2\,\lambda^2\,R(\eta_c)}{8\,L_a^2}\left(\frac{V_r}{V_g}\right)^2    (5.62)

where La is the antenna length in the azimuth direction. Unlike the linear component of RCM, the quadratic
component is independent of squint for a given slant range R(η_c), and is independent of platform velocity.
In some cases where the wavelength is short, as at X-band, it may be possible to ignore the quadratic RCM.
By considering only the linear RCM, the processing can be made simpler. However, careful analysis must be performed
before discarding the quadratic part. As an example, consider the X-band airborne system shown in Table
4.1 with the following parameters: λ = 0.032 m, R(η_c) = 30,000 m, and La = 1 m. From (5.62), ΔR_quad is found
to be 3 m. This amount of quadratic RCM may or may not be negligible, depending upon the range resolution
of the radar.
As another example, consider a C-band satellite system with the following parameters: λ = 0.056 m, R(η_c) =
850 km, and La = 10 m. Then, ΔR_quad is found to be 3 m over the full exposure time. This amount of quadratic
RCM is smaller than the typical slant range resolution of 8 m, and need not be corrected. However, as it is
simple to do, a precision processor would normally provide this correction.
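Both worked examples can be verified with the small-squint form of (5.62); the velocity ratio (Vr/Vg)², close to unity, is taken as 1 here (an assumption):

```python
def quad_rcm(lam, R, La, v_ratio=1.0):
    """Quadratic RCM over the full exposure, per (5.62):
    0.886^2 lam^2 R(eta_c) / (8 La^2), times (Vr/Vg)^2 (taken as 1 here)."""
    return 0.886**2 * lam**2 * R / (8.0 * La**2) * v_ratio**2

x_band = quad_rcm(lam=0.032, R=30e3, La=1.0)      # airborne example
c_band = quad_rcm(lam=0.056, R=850e3, La=10.0)    # satellite example
```

The X-band case comes out at about 3.0 m, and the C-band case at about 2.6 m with a unit velocity ratio, consistent with the roughly 3 m quoted in the text.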
The slant range equation in the range Doppler domain is given by (5.47), in which R_rd(f_η) versus f_η is
approximately hyperbolic. The difference between the range hyperbola in the azimuth time domain and azimuth
frequency domain is that the curvature of the hyperbola decreases with range in the time domain but increases
with range in the frequency domain, as shown in Figure 5.13. The above can be observed by power series
expansions of R(η) and R_rd(f_η), as shown in (5.1) and (5.10), respectively. In the time domain, the range variable,
R0, is in the denominator; therefore the hyperbola opens up with increasing R0. In the range Doppler domain, the
reverse is true, since the variable, R0, is in the numerator. Note also that the exposure time increases with range
in the time domain, and the azimuth bandwidth is a constant in the azimuth frequency domain.
[Figure 5.13: How the target trajectories vary between the azimuth time and frequency domains; the trajectories are plotted against range in each domain.]
[Figure 5.14: Illustrating the collapsing of multiple targets into one trajectory in the range Doppler domain; panel (c) shows the R/D (range Doppler) domain.]
To transform the data to the range Doppler domain, an azimuth Fourier transform is performed. By virtue of
the Fourier transform shift theorem, the energy of all these targets will be positioned in exactly the same
frequency samples, as shown in Figure 5.14(c). The only difference is that each target will have a
different linear phase component, exp(j 2π f_η η_k), where η_k is the time shift of the kth target.
The azimuth time delay of each target means an extra linear phase term has to be included in the azimuth
spectra of (5.24) and (5.39). Then, by the superposition principle, the total signal spectrum is simply the sum of
the individual spectra, each of which has the same region of support (the region with significant energy) in the
range Doppler or two-dimensional frequency domains.
To summarize, all targets having the same range of closest approach collapse into one trajectory in the
azimuth frequency domain; for example, the range Doppler domain. This important property is exploited in some
SAR processing algorithms to gain computing efficiency.
The target trajectory in the range Doppler domain wraps around the azimuth frequency axis, as a result of the
aliasing caused by the PRF sampling of the data. This effect is illustrated in Figure 5.15.
[Figure 5.15: Aliasing of the target azimuth spectrum, creating azimuth ambiguities. (a) Before azimuth aliasing; (b) after azimuth aliasing. Axes: range (cells) versus azimuth frequency (Hz).]
The left side of Figure 5.15 shows how the target trajectory would appear in the range Doppler domain after
range compression, if there were no PRF sampling. The target trajectory as a continuous function of azimuth
frequency is given in (5.47). The Doppler centroid is 500 Hz, and the width of the curve indicates how the signal
strength drops off away from the beam center. In this example, the Doppler bandwidth is assumed to be 2500
Hz, which is chosen to be larger than normal for illustration purposes.
The right side of the figure shows the effect of the azimuth sampling, when the PRF is 1000 Hz. The
segment of the energy within ±PRF/2 of the Doppler centroid represents the main part of the Doppler energy,
and is labeled as Segment 1. This spans the frequencies from 0 to 1000 Hz, which are taken as the fundamental
frequency range in this example. Then, Segments 0 and 2 of the original trajectory are aliased into the same
fundamental frequency range.
The important thing to note in Figure 5.15(b) is that the ambiguous parts of the target energy, Segments 0
and 2, have a different RCM as a function of measured f_η, compared to the main segment. As the RCM
correction processing cannot distinguish between the different segments or ambiguities, the RCM correction curve
must be selected to correct or focus the energy of the main part but not the other two parts, which are azimuth
ambiguities (recall Section 5.4.1).
In the processed image, the ambiguities corresponding to Segments 0 and 2 of the received data will appear
as ghost images, as shown in Figure 5.4. They may be out of focus in range and in azimuth. They will be
displaced in range by a few cells, but will be displaced in azimuth by one "PRF time" (recall Section 5.4.1). The
ambiguity power decreases with an increase in the PRF, and is kept below −20 dB in practice.
In this section, examples are used to illustrate the spectra of targets in the two domains that have been
analyzed mathematically in the first half of this chapter. Two cases are considered: the low squint case and the
high squint case.
The SAR signal characteristics in the different domains are illustrated with simulated airborne C-band data sets.
The simulation parameters are listed in Table 5.2. The selected range sampling rate gives a range oversampling of
1.2, and the selected PRF an azimuth oversampling rate of 1.3. There are two datasets, one with a zero squint,
and the other with a significant squint. The parameters, especially the range FM rate and range sampling rate,
are chosen to be lower than normal in order to reduce the simulation size.
Figure 5.16 illustrates the characteristics of the signal received from a single point target. The data have been
demodulated to baseband in range. The zero Doppler time is at the middle of the azimuth exposure, so the data
are also at baseband in azimuth. Figure 5.16(a) shows that the region of energy is rectangular, as the RCM is too
small to be noticed.
small to be noticed. The range envelope is taken from the pulse characteristics, and is assumed to be uniform. In
the azimuth direction, the signal extent is given by the azimuth beam pattern, and the energy decays away from
the beam center point.
Next, the phase is examined. A contour of constant phase is given by

-\pi K_a\,\eta^2 + \pi K_r\,\tau^2 = \alpha_{const}    (5.63)

Recalling that Ka is always positive and Kr is signed, the constant phase contours are either hyperbolae or
ellipses, depending upon the sign of Kr.
Assuming an up chirp where Kr is positive, the phase contours are hyperbolae, as shown in Figure 5.16(b).
The azimuth phase in the first term in (5.63) is a good approximation for zero squint, as shown in (5.2).
Similarly, for a down chirp where Kr is negative, the phase contours are ellipses, as shown in Figure 5.16(c).
Note that the phase contours are aliased, as evidenced by the superfluous "saddles" and "bull's eyes" in Figure
5.16(b, c). This occurs because the phase represents only part of the information in the complex signal, so
plotting phase alone is equivalent to representing a complex signal by a real signal, and aliasing can result. Note
that the printing process can also cause aliasing.
The range Doppler domain spectrum can be obtained by performing a direct azimuth Fourier transform on
the raw signal. The results for an up chirp are shown in Figure 5.17(a, b). The azimuth oversampling manifests
itself in the form of a "gap" at each range gate. Recall that a frequency discontinuity has to be assumed when
the data are processed. The gap is where the azimuth frequency discontinuity should be placed in the processing.
For this zero squint case, the gap, and hence the discontinuity, is at the middle of the transformed array, in
agreement with Figure 5.5(a). The constant phase contours in Figure 5.17(b) are now ellipses (instead of
hyperbolae).
[Figure 5.17: Azimuth spectra of single point target, zero squint, up chirp case. (a) Magnitude; (b) phase of the range Doppler (RD) spectrum.]
The purpose of SAR processing is to solve the convolution equation given by (4.43), which is rewritten here
without the noise term

s_{bb}(\tau,\eta) = g(\tau,\eta) \otimes h_{imp}(\tau,\eta)    (5.64)

where the baseband signal, s_bb(τ, η), is the recorded data, h_imp(τ, η) is the impulse response (4.42) of a unity
amplitude point target at specific coordinates (τ, η), and the ground reflectivity, g(τ, η), is to be determined. In
general, accuracy and efficiency in obtaining the solution are important characteristics of any SAR processor.
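As a sketch of the signal model in (5.64), the example below forms the baseband signal of a single point target by two-dimensional (circular) convolution via FFTs. The small separable chirp standing in for the impulse response is an assumption for illustration only, since the true h_imp is range dependent:

```python
import numpy as np

def conv2d(g, h):
    """Circular 2-D convolution via FFTs, modeling (5.64): the baseband
    signal is the ground reflectivity convolved with the impulse response."""
    return np.fft.ifft2(np.fft.fft2(g) * np.fft.fft2(h, s=g.shape))

# Toy scene: a single unit point target, and a small separable chirp
# standing in for the impulse response (assumed, for illustration)
g = np.zeros((64, 64), dtype=complex)
g[32, 32] = 1.0
t = np.arange(16) - 8
h = np.outer(np.exp(1j * np.pi * 0.05 * t**2),
             np.exp(1j * np.pi * 0.05 * t**2))
s_bb = conv2d(g, h)
```

For the unit point target, the recorded signal is just the impulse response shifted to the target coordinates, which is the relationship the matched-filter processors of the following chapters invert.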
Before the mainstream SAR processing algorithms are described, it is useful to have a brief look at some
simple and historical algorithms, as these help understand the complexities of processing SAR data, and how to
approach the better algorithms.
This section first presents a two-dimensional filter, which carefully matches the impulse response for a single
target. This matched filter focuses the data faithfully, but the computational requirement is prohibitively high.
Then, a simple algorithm on the other end of the accuracy scale is discussed. It requires only a running average
or boxcar filter (a filter of equal weights or coefficients), which can be applied in one or two dimensions. Its
computational requirement is extremely low, but the attainable resolution is poor. Finally, the section gives an
overview of processing in the other domains, which leads into the mainstream processing algorithms that are the
topics of the remaining chapters.
[Figure 5.18: Single point target time domain characteristics, nonzero squint case. (a) Magnitude; (b) phase, up chirp; (c) phase, down chirp.]
A straightforward way to compress the SAR data is by processing with a two-dimensional matched filter in the
time domain. The filter is the time reversed, complex conjugate of the SAR system impulse response function,
h_imp(τ, η), given by (4.42). The advantage of this method is that if the filter is changed for each output pixel,
each target in the image will be compressed as accurately as possible, including the correct compensation of the
range/azimuth coupling. However, the efficiency is very low because advantage cannot be taken of fast
convolution. This is because h_imp(τ, η) is range dependent, and possibly azimuth dependent as well, so a new
matched filter has to be generated for each output point.
Two possible improvements can be made to increase efficiency. The first one is to use range invariance
regions, in which the filter is held constant over a certain number of range cells. The second improvement is to
perform range compression first, using fast convolution. However, because of the RCM, the azimuth matched filter
is still two-dimensional, even though its length in the range direction is much reduced. This means that the
two-dimensional matched filter cannot be directly decomposed into two one-dimensional matched filters, a feature
that would improve efficiency.
[Figure 5.19: Azimuth spectra of single point target, nonzero squint, up chirp case. Axes: range frequency (samples) versus azimuth frequency (samples).]
As an example, let the pulse duration be 40 µs, the range sampling rate be 24 MHz, the exposure time be
0.64 s, and the PRF be 1700 Hz (as in Table 4.1). Then, the range matched filter consists of 960 samples, and
the azimuth matched filter is 1088 samples long. The complicating factor in using a two-dimensional matched
filter lies in the RCM, mostly its linear component. The trajectory of a point target migrates through range
gates; that is, its energy is not confined to a single range gate. The azimuth matched filter should then follow
this curved trajectory, as shown in Figure 5.12. From (5.58), the linear RCM slope is |V_r sin θ_r,c|, and the linear
RCM extent over the exposure time is

\Delta R_{lin} = |V_r \sin\theta_{r,c}|\,T_a    (5.65)

For V_r = 7100 m/s, θ_r,c = 4°, and T_a = 0.64 s, ΔR_lin is found to be 317 m. At a range sampling rate of 24
MHz, the range sample spacing is 6.25 m; therefore, the above value of ΔR_lin corresponds to 51 range samples. That
means the azimuth matched filter is now 51 x 1088 samples, even when applied to range compressed data.
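The filter-size arithmetic above can be reproduced directly:

```python
from math import sin, radians

pulse_dur, fs_range = 40e-6, 24e6     # pulse duration and range sampling rate
exposure, prf = 0.64, 1700.0          # exposure time and PRF (as in Table 4.1)
vr, squint_deg = 7100.0, 4.0          # effective radar velocity and squint angle

range_taps = round(pulse_dur * fs_range)            # range matched filter length
azimuth_taps = round(exposure * prf)                # azimuth matched filter length
rcm_lin = vr * sin(radians(squint_deg)) * exposure  # linear RCM extent (5.65), m
cell_m = 3.0e8 / (2.0 * fs_range)                   # range sample spacing, m
rcm_cells = round(rcm_lin / cell_m)                 # linear RCM in range samples
```

This reproduces the 960-tap range filter, the 1088-tap azimuth filter, and the 317 m (51-cell) linear RCM extent quoted above.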
When range invariance regions for the azimuth matched filter are used, the region size can be computed by
limiting the phase error between the signal and the matched filter to a certain value. Even if the invariance
region size may be large, the two-dimensional matched filtering problem still has to be overcome. A time
domain processor is simply not efficient enough in practice, and commercial
processors do not use this approach.
Another algorithm, called the rectangular algorithm, ignores the RCM completely, and applies the range
matched filter and the azimuth matched filter one after the other (in either order). This algorithm does not
yield the full resolution in range, because of the trajectory misalignment to the azimuth axis, or in azimuth,
because of the reduction in azimuth bandwidth captured by the azimuth matched filter.
A natural question from the above discussion is whether the data can be skewed so that the trajectories are
aligned with the vertical axis. Unfortunately, the RCM is not linear; it also has a quadratic component and
possibly higher order components as well, which may be significant. The best that can be done is to remove the
linear component in the time domain, since the RCM cannot be removed completely and simultaneously for all
targets without an excessive amount of computation.
The disadvantages of this method are:
- For each target, a residual quadratic RCM exists, which may not be negligible.
- Because of the skew, when the data are accessed down a column in the azimuth direction, the range of each
sample changes. Since the coefficients of the azimuth matched filter are range dependent, the filter eventually
becomes mismatched. In the skewed space, the matched filter should be both range and azimuth dependent.
The method of correcting for the linear RCM component may not be suitable for high resolution processing,
but the method will be addressed again in Chapter 9, when only low-to-medium resolution imagery is required.
The foregoing analysis shows that the matched filter cannot be decomposed into a range filter and an
azimuth filter directly. In the range Doppler algorithm, which is the subject of Chapter 6, it shall be shown how
the decomposition can be performed by correcting the RCM as an intermediate step.
The image shown in Figure 5.20 is an example of an image processed with a simple, time-domain algorithm. The
operators of the Canada Centre for Remote Sensing Convair-580 aircraft required a real-time processor on board
so they could verify their data takes on the fly. Image quality is not so important, as precision processing is done
on the ground, including the radiometric correction.
The real-time processor was built by MacDonald Dettwiler in 1986, and operates on 4096 range cells. It can
be configured to process all 4096 cells in one swath, or to process two channels of 2048 cells each, as in the
present example. The sampling rate is 37.5 MHz, giving a sample spacing of 4 m. The SAR processing is
performed with a time-domain correlation, with no RCMC. The antenna is steered to zero Doppler, so the linear
component of RCM is small. The quadratic component of RCM is approximately 6 m at 15 km range [see
(5.62)], so minimal image degradation results from the lack of RCMC. As an option, nearest neighbor
interpolation can be done, to provide a ground range display, as in the present case. Seven looks are taken in
azimuth.
The C-band radar is fully polarimetric, and two real-time processors operate on all four channels. The image
shown here is a color composite of the four channels, but is reduced to black and white for the book. This gives
an additional multilooking effect.
The image covers the east end of Boundary Bay, just north of the Canada-U.S. border. The data were
acquired on September 30, 2004. The scene center is 49.1°N, 122.8°W. The swath width is approximately 10 km,
and the top of the image is pointing approximately eastward. The radar illumination is from the left. Two small
rivers are seen emerging from farmland, with adjoining suburbs on the left and right of the scene.
This section describes a simple algorithm in which SAR data are partially focused to a low resolution using an
averaging filter. Although there is some degree of focusing, it is still known as "unfocused SAR" [5, 14-16]. The
principle is extremely simple in that it only uses a boxcar filter. This way of processing is usually applied to
range compressed data, and hence the boxcar filter is applied in the azimuth direction only.
To illustrate the principle, consider the azimuth linear FM signal in Figure 5.21, shown after downconverting
the signal to baseband. The boxcar filter integrates the signal, and obtains a peak at the stationary (zero
Doppler) point of the signal. The resolution and sidelobe ratio of the output of the filter is shown in Figure 5.22.
The resolution gets better as the filter length increases, up to the point when multiple oscillations of the signal
are included in the integral.
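The principle can be sketched numerically. The FM rate, PRF, and boxcar length below are hypothetical; the boxcar length is chosen so that the quadratic phase across the window stays below about π/4, before the oscillations start to cancel:

```python
import numpy as np

ka = 2000.0      # azimuth FM rate (Hz/s), hypothetical
prf = 1000.0     # hypothetical PRF (Hz)
eta = (np.arange(1024) - 512) / prf              # azimuth time axis (s)
sig = np.exp(-1j * np.pi * ka * eta**2)          # baseband azimuth chirp

# Boxcar length ~ sqrt(PRF^2 / Ka) samples keeps the quadratic
# phase across the window below about pi/4.
L = 21
out = np.abs(np.convolve(sig, np.ones(L), mode='same'))
peak = int(np.argmax(out))
print(peak)      # at (or next to) the zero Doppler sample, index 512
```

The integrated output peaks where the chirp is stationary, i.e., at the zero Doppler point, as stated above.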
[Figure 5.21: azimuth linear FM signal at baseband - (a) real part of signal; (b) imaginary part of signal; horizontal axes in time (samples)]
    ρa ≈ 0.5 √(λ R)    (5.66)

where R is the slant range to the target. Thus, the resolution worsens with
range and is generally much inferior to proper SAR processing. However, the
number of computations is relatively small, requiring only four adds for each
output point.
Figure 5.22: Resolution and sidelobe ratio for unfocused SAR processing.
A two-dimensional, time-domain, matched filtering approach has been discussed, which yields high resolution but
lacks efficiency. On the other hand, the unfocused SAR and rectangular algorithm offer high efficiency, but the
resolution is poor. Thus, other algorithms should be sought that offer both accuracy and efficiency. In these
algorithms, data are processed in domains other than the two-dimensional time domain.
One suitable domain is the range Doppler domain (range time versus azimuth frequency). In this domain, all
targets having the same slant range of closest approach are collapsed into one single trajectory, as shown in
Figure 5.14. Therefore, correcting the migration of a single trajectory in this domain has the effect of correcting
a whole family of trajectories. Furthermore, the correction can accommodate the range dependence of RCM, since
the data are processed in the range time domain.
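The collapse can be illustrated numerically: the trajectory range Rrd = R0/D(fη, Vr) depends only on the slant range of closest approach R0 and the azimuth frequency fη, not on the target's beam crossing time. The migration factor below is the accurate form from Table 5.4, and the parameter values are hypothetical:

```python
import math

def migration_factor(f_eta, v_r, f0, c=3.0e8):
    """Accurate migration factor D(f_eta, Vr) of the range Doppler domain."""
    return math.sqrt(1.0 - (c * f_eta)**2 / (4.0 * v_r**2 * f0**2))

v_r, f0, r0 = 7100.0, 5.3e9, 850e3   # hypothetical C-band spaceborne values
for f_eta in (0.0, 1000.0, 2000.0):
    r_rd = r0 / migration_factor(f_eta, v_r, f0)   # trajectory range at f_eta
    print(f_eta, round(r_rd - r0, 2))              # RCM in meters
```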
The algorithm operating in this domain is named the range Doppler algorithm (RDA). Its main advantage is
its ability to produce high resolution imagery accurately and efficiently, as long as the aperture is not too wide
or the squint not too large. It is accurate because it can accommodate range varying parameters, such as Doppler
centroid, azimuth FM rate, and RCM. At the same time, it is efficient because it allows the two dimensions to be
processed separately. The RDA seems to be the most widely used algorithm for high precision processing of
remote sensing satellite SAR data. In the presence of a moderate amount of squint, the RDA requires an SRC
step, which has been introduced in Section 5.3. Chapter 6 gives a detailed description of the algorithm.
There are algorithms that process the data using more than one domain, and one example is the chirp
scaling algorithm (CSA). The RDA requires interpolation in the RCMC step. The CSA eliminates the
interpolation step, and processes the data partly in the range Doppler domain and partly in the two-dimensional
frequency domain. In the two-dimensional frequency domain, the algorithm performs "bulk" operations that are
assumed to be range independent. The range dependent residual corrections are then performed in the range
Doppler domain. Chapter 7 gives a detailed description of the CSA.
Another domain that can provide efficient and accurate processing is the two-dimensional frequency domain.
The omega-K algorithm (ωKA) processes the data in this domain. Its advantage lies in its ability to handle very
high squint and wide apertures. But the disadvantage lies in the fact that targets at different ranges are mixed in
this domain; hence, the algorithm holds some processing parameters to be range invariant, even if they are not.
Chapter 8 gives a detailed description of the ωKA.
The RDA, CSA, and ωKA algorithms are designed for high resolution processing. For medium to low
resolution processing, an algorithm named "SPECAN" is more efficient. It relies on a time domain linear RCMC,
an "azimuth deramp," and FFTs of short segments of the deramped signal. The azimuth deramp operation is a
simple phase multiply, using a chirp whose slope is opposite to that of the azimuth signal. The efficiency is gained
because only a single set of short FFTs are needed. The deramping effectively transforms the data into the
frequency domain, and hence only one FFT is needed for each segment, instead of the usual FFT/IFFT
combination. Chapter 9 gives a detailed description of the SPECAN algorithm.
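The deramp-plus-FFT idea can be sketched on a single azimuth chirp. The FM rate, PRF, and target time below are hypothetical; after the deramp multiply, the target becomes a constant-frequency tone that a single FFT locates:

```python
import numpy as np

ka = 1500.0                 # azimuth FM rate (Hz/s), hypothetical
prf = 1200.0
n = 512
eta = (np.arange(n) - n // 2) / prf

eta_0 = 0.05                # target's zero Doppler time (s), hypothetical
sig = np.exp(-1j * np.pi * ka * (eta - eta_0)**2)    # target chirp

# Deramp: multiply by a chirp of opposite phase slope; the target
# becomes a tone at frequency ka * eta_0.
deramped = sig * np.exp(1j * np.pi * ka * eta**2)
spec = np.abs(np.fft.fft(deramped))
f_axis = np.fft.fftfreq(n, d=1.0 / prf)
f_peak = f_axis[np.argmax(spec)]
print(f_peak)   # close to ka * eta_0 = 75 Hz
```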
5.8 Summary
The spectrum of the signal received from a point target has been derived in the range Doppler domain and in
the two-dimensional frequency domain. Coupling exists between the range and azimuth frequency coordinates in
these domains, to the extent that the range chirp FM rate is distorted in the two-dimensional frequency domain.
It is instructive to summarize the important equations in the various domains, as shown in Tables 5.3, 5.4, and
5.5.
RCM is an important property of SAR signals. It can be expanded in a power series, with the linear and
quadratic components separated. The power series expansion is mostly for analytical purposes, and the more
accurate hyperbolic form is usually kept for processing. A key efficiency feature of the range Doppler and
two-dimensional frequency domains is that all targets having the same slant range of closest approach will
collapse into one single trajectory, provided that parameters, such as the Doppler centroid and radar velocity, do
not change with azimuth time.
The Doppler centroid frequency is an important parameter of SAR signals. It has two components: (1) the
fractional PRF part, which is the spectral peak position wrapped around to baseband; and (2) the Doppler
ambiguity resulting from the sampling of the data by the PRF. Unfortunately, the observer can only view one
PRF cycle of the Doppler spectrum, and the Doppler ambiguity has to be resolved.
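The decomposition of the Doppler centroid into an ambiguity number and a fractional part can be sketched as follows; the wrapping convention to [−PRF/2, +PRF/2) is one common choice, and the numeric values are hypothetical:

```python
def split_doppler_centroid(f_eta_c, prf):
    """Split the Doppler centroid into the ambiguity number and the
    fractional (baseband) part: f_eta_c = m_amb * PRF + f_frac,
    with f_frac wrapped toward [-PRF/2, +PRF/2)."""
    m_amb = round(f_eta_c / prf)
    f_frac = f_eta_c - m_amb * prf
    return m_amb, f_frac

print(split_doppler_centroid(2350.0, 1000.0))   # (2, 350.0)
print(split_doppler_centroid(-1840.0, 1700.0))  # (-1, -140.0)
```

Only `f_frac` is observable from one PRF cycle of the spectrum; the integer `m_amb` must be resolved separately, as the text notes.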
Table 5.3: Summary of Signal Properties in the Azimuth Time Domain
(left: low squint approximation; right: general squinted case)

  Linear RCM:        −Vr sin θr,c (η − ηc)            |  −Vr sin θr,c (η − ηc)
  Quadratic RCM:     [Vr²/(2R0)] (η − ηc)²            |  [Vr² cos²θr,c/(2R(ηc))] (η − ηc)²
  Range FM rate:     Kr                               |  Km = Kr/(1 − Kr Z)
  Azimuth FM rate:   Ka ≈ 2Vr²/(λ R0)                 |  Ka = 2Vr² cos²θr,c/(λ R(ηc))
  Azimuth phase:     −4πR0/λ − (2πVr²/(λ R0)) η²      |  −4πR(η)/λ
Table 5.4: Summary of Signal Properties in the Range Doppler Domain
(left: approximate form; right: accurate form)

  Migration factor:           D ≈ 1 − c²fη²/(8Vr²f0²)           |  D(fη,Vr) = √(1 − c²fη²/(4Vr²f0²))
  Slant range:                Rrd(fη) ≈ R0 + λ²R0fη²/(8Vr²)     |  Rrd(fη) = R0/D(fη,Vr)
  Range FM rate:              Kr                                |  Km = Kr/(1 − Kr Z)
  Azimuth freq. vs. time:     fη = −Ka η                        |  fη = −2Vr²η f0/(c √(R0² + Vr²η²))
  Azimuth time vs. freq.:     η = −fη/Ka                        |  η = −cR0 fη/(2 f0 Vr² D(fη,Vr))
The above properties will be encountered again in subsequent chapters when different processing algorithms
are discussed. The frequency versus time characteristics will be exploited in focusing the SAR data.
In choosing a SAR processor, accuracy and efficiency are the two major factors. Two simple algorithms were
discussed that were either accurate or efficient, but not both. Examples of accurate and efficient algorithms are
the RDA, ωKA, and CSA, each of which is suitable for high resolution processing under appropriate conditions.
Each of these algorithms has its own advantages and disadvantages, due to inevitable approximations made in
each. They process the data in different domains, and efficiency is gained by utilizing special signal properties in
these domains.
If efficiency is the prime requirement, and if the azimuth resolution can be relaxed by several factors, the
SPECAN algorithm is appropriate. Its details are discussed in Chapter 9.
Table 5.5: Summary of Signal Properties in the Two-Dimensional Frequency Domain
(left: approximate form; right: accurate form)

  Migration factor:           D2df ≈ 1 − c²fη²/(8Vr²(f0+fτ)²)   |  D2df = √(1 − c²fη²/(4Vr²(f0+fτ)²))
  Range FM rate:              Kr                                |  Km = Kr/(1 − Kr Z)
  Azimuth freq. vs. time:     fη = −Ka η                        |  fη = −2Vr²η (f0+fτ)/(c √(R0² + Vr²η²))
  Azimuth time vs. freq.:     η = −fη/Ka                        |  η = −cR0 fη/(2 (f0+fτ) Vr² D2df)
References
[1] J. R. Bennett and I. G. Cumming. Digital Techniques for the Multilook Processing of SAR Data with
Application to SEASAT-A. In Fifth Canadian Symp. on Remote Sensing, Victoria, BC, August 1978.
[2] I. G. Cumming and J. R. Bennett. Digital Processing of SEASAT SAR Data. In IEEE 1979 International
Conference on Acoustics, Speech and Signal Processing, Washington, D.C., April 2-4, 1979.
[3] C. Wu, K. Y. Liu, and M. J. Jin. A Modeling and Correlation Algorithm for Spaceborne SAR Signals.
IEEE Trans. on Aerospace and Electronic Systems, AES-18 (5), pp. 563-574, September 1982.
[4] M. J. Jin and C. Wu. A SAR Correlation Algorithm Which Accommodates Large Range Migration. IEEE
Trans. Geoscience and Remote Sensing, 22 (6), pp. 592-597, November 1984.
[5] J. Curlander and R. McDonough. Synthetic Aperture Radar: Systems and Signal Processing. John Wiley &
Sons, New York, 1991.
[6] R. K. Raney. A New and Fundamental Fourier Transform Pair. In Proc. Int. Geoscience and Remote Sensing
Symp., IGARSS'92, pp. 106-107, Clear Lake, TX, May 1992.
[7] R. K. Raney, H. Runge, R. Bamler, I. G. Cumming, and F. H. Wong. Precision SAR Processing Using
Chirp Scaling. IEEE Trans. Geoscience and Remote Sensing, 32 (4), pp. 786-799, July 1994.
[8] R. K. Raney. Radar Fundamentals: Technical Perspective. In Manual of Remote Sensing, Volume 2:
Principles and Applications of Imaging Radar, F. M. Henderson and A. J. Lewis (ed.), pp. 9-130. John Wiley
& Sons, New York, 3rd edition, 1998.
[9] S. M. Selby. Standard Mathematical Tables. CRC Press, Boca Raton, FL, 1967.
[10] A. M. Smith. A New Approach to Range Doppler SAR Processing. International Journal of Remote
Sensing, 12 (2), pp. 235-251, 1991.
[11] G. W. Davidson, I. G. Cumming, and M. R. Ito. A Chirp Scaling Approach for Processing Squint Mode
SAR Data. IEEE Trans. on Aerospace and Electronic Systems, 32 (1), pp. 121-133, January 1996.
[12] R. K. Raney. Doppler Properties of Radars in Circular Orbits. Int. J. of Remote Sensing, 7 (9), pp.
1153-1162, 1986.
[13] G. W. Davidson and I. G. Cumming. Signal Properties of Squint Mode SAR. IEEE Trans. on Geoscience
and Remote Sensing, 35 (3), pp. 611-617, May 1997.
[14] M. I. Skolnik. Radar Handbook. McGraw-Hill, New York, 2nd edition, 1990.
In this appendix, the phase of the SAR signal from a point target in the two-dimensional frequency domain (5.34)
is derived from a signal processing point of view. The derivation invokes the two-dimensional Fourier transform
skew property presented in Section 2.3.3, allowing one to easily visualize the range/azimuth coupling in the signal.
There are two steps in the derivation. First, a target without any RCM is assumed, so that its energy lies
within one range cell, but the range modulation still follows the hyperbolic range equation. The two-dimensional
spectrum of this hypothetical signal can be obtained easily by the POSP. Second, the RCM is reinstated, using
its linear component only. The above-mentioned Fourier transform property can then be utilized to formulate the
skewed spectrum.
For simplicity, the derivation uses range compressed data. As shown in (5.15) and (5.26), the range
Figure 5A.1: Fourier transform pairs with and without RCM: (a) target trajectory without RCM; (b) unskewed spectrum; (c) target trajectory with linear RCM; (d) skewed spectrum.
From (2.40), the two-dimensional Fourier transform of s1(τ, η) is a skewed version of S0(fτ, fη) in the
azimuth frequency direction
(5A.7)
This Fourier transform pair is shown in Figure 5A.1(c, d). Without going into a detailed analysis, one can
observe that a range frequency slice, shown by the thick dashed line in Figure 5A.1(d), contains a phase
modulation caused by the azimuth frequency skew. This extra modulation is the elusive phase coupling term,
which can be removed by "Secondary Range Compression." As can be observed in the figure, the phase coupling
depends upon the azimuth FM rate. Then, because the azimuth FM rate is range dependent, the phase coupling
is also range dependent. Thus, without going through the rest of the mathematical development, the origin of the
phase coupling can be easily seen from the figure.
Proceeding with the derivation, equations (5A.2), (5A.3), and (5A.7) are combined to obtain
(5A.8)
    D = √(1 − c²fη²/(4Vr²f0²))    (5A.10)
the phase in (5A.9) can be written as
    θ1(fτ, fη) = −(4π R0 f0/c) √( D² − [c²/(4Vr²f0²)] (2 fτ fη a + fτ² a²) )    (5A.11)
Expanding the square root as a power series in fτ up to the second order, the phase becomes

    θ1(fτ, fη) ≈ −(4π R0 f0/c) [ D − c² (2 fτ fη a + fτ² a²)/(8 Vr² f0² D) − c⁴ fτ² fη² a²/(32 Vr⁴ f0⁴ D³) ]    (5A.12)

Substituting the linear RCM slope, a = −fηc/f0, and collecting terms gives

    θ1(fτ, fη) = −(4π R0/c) f0 D − 2π [c R0 fη fηc/(2 Vr² f0² D)] fτ + π [c R0 fηc²/(2 Vr² f0³ D³)] fτ²    (5A.13)
Comparing with (5.34), the third term in (5A.13) is the range/azimuth cross coupling. The only difference
between the two equations is that the variable, fη, is replaced by the constant Doppler centroid frequency, fηc, in
(5A.13). The discrepancy is due to the linear RCM approximation used to derive (5A.13). The factor D is
actually the cosine of the squint angle, θr, as has been established in the text, and is weakly dependent on fη for
the squints under consideration. If fη = fηc is used for the factor D in (5A.10), then the phase coupling becomes
independent of azimuth frequency, which is one form of SRC used in the processor discussed in Chapter 6, and is
the form presented in [4, 5].
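For a feel of the size of the coupling, the fτ² coefficient of (5A.13) can be interpreted as a residual chirp whose FM rate (called `k_src` below; the name and the parameter values are ours, for illustration only) is evaluated as:

```python
import math

c = 3.0e8
f0, vr, r0 = 5.3e9, 7100.0, 850e3       # hypothetical C-band values
f_eta_c = 2000.0                        # hypothetical Doppler centroid (Hz)

d = math.sqrt(1.0 - (c * f_eta_c)**2 / (4.0 * vr**2 * f0**2))
# The f_tau^2 (cross coupling) term of (5A.13) is
#   pi * [c R0 f_eta_c^2 / (2 Vr^2 f0^3 D^3)] * f_tau^2,
# so the equivalent FM rate of the coupling is:
k_src = 2.0 * vr**2 * f0**3 * d**3 / (c * r0 * f_eta_c**2)
print(k_src)   # about 1.5e16 Hz/s for these values
```

An FM rate this large compared to a typical range chirp rate means the coupling is a mild perturbation at low squint, consistent with SRC being needed only as the squint grows.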
Now examine the linear fτ term in (5A.13). The coefficient inside the brackets is the RCM

    RCM = −c R0 fη fηc/(2 Vr² f0² D)    (5A.14)
Combining (5A.6), (5A.10), (5A.14), and finally using (5.41), the anticipated result follows.
The RCM is linear and has a slope, a, which agrees with the stated assumptions.
This derivation does not show the nonlinear nature of the RCM; the phase coupling is an approximation
resulting from the linear RCM assumption. To be accurate, the derivation given in Section 5.3 should be used
(this is discussed further in Chapter 6). Nevertheless, the derivation in this appendix gives an illustration of the
origin of the coupling from the signal processing viewpoint, by using a powerful Fourier transform property.
Two definitions of azimuth FM rate are used in Section 5.3.3. The azimuth FM rate of the signal from a point
target is given by its frequency rate at the time the beam center crosses the target. It is found from the second
derivative of range [see (4.38)]

    Ka = 2 Vr² cos²θr,c/(λ R(ηc))    (5B.1)

where the target has a range of R(ηc) at the beam center crossing time, ηc.
Another definition of FM rate relates the Doppler frequency at the beam center, fηc, to the beam center
crossing time

    fηc = −Ka,dop ηc    (5B.2)

where

    Ka,dop = 2 Vr² cos θr,c/(λ R0)    (5B.3)

as derived in (5.45).
If the signal is accurately represented by a parabolic range equation, these two FM rates are equal. There is
a difference between these two FM rates when the range equation deviates from the simple parabolic form (5.1),
which it does when the squint angle increases. This appendix gives a geometric interpretation of the difference
between them.
5B.1 Geometric Interpretation from Slopes
The signal's Doppler frequency is found from the first derivative of R(η)

    fη = −(2/λ) dR(η)/dη = −2 Vr² η/(λ √(R0² + Vr² η²))

This frequency is represented by the solid curve in Figure 5B.1, which gives the frequency/time relation of the
point target after the time of closest approach is passed.
[Figure 5B.1: azimuth frequency versus azimuth time, showing the beam center crossing time ηc, the local slope of the curve, and the average slope −Ka,dop since zero Doppler]
Two slopes are shown in the figure. The first is the local slope of the frequency/time curve, taken at the
beam center crossing time. The negative of this slope is the Ka version of the FM rate, defined in (5B.1).
The second slope is the average slope since the zero Doppler time. The negative of this slope is the Ka,dop
version of the FM rate. When it is evaluated at Point A, at η = ηc, Ka,dop in (5B.3) is obtained.
5B.2 Discussion
The slope, −Ka, is the most important one, and must always be used when generating or analyzing the matched
filter phase. The slope, −Ka,dop, is not used very often. It is used only to find the beam center offset time, ηc,
when the Doppler centroid frequency is known. It is shown in Chapter 12 that an accurate estimate of fηc can be
found in practice, but there is no practical way of finding ηc accurately, except through −Ka,dop.
The difference between the two FM rates is given by the cosine squared of the squint angle, and it is usually
small. For example, when the squint angle is 5°, its cosine squared is 0.9924, and the two FM rates differ by
only 0.76%. However, this difference is important when considering the matched filter phase.
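As a quick check of the numbers quoted above:

```python
import math

theta = math.radians(5.0)
ratio = math.cos(theta)**2            # Ka / Ka,dop at a 5 degree squint
print(round(ratio, 4), round((1.0 - ratio) * 100, 2))   # 0.9924 0.76
```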
Even though the signal represented by the curved frequency/time history of Figure 5B.1 has a hyperbolic range
equation, a segment of it can often be represented accurately by a parabola. An example is when the processed
aperture is only a small fraction of the beam offset, so only a small segment of the curve is needed for the
matched filter. The hyperbolic equation is needed for the whole curve, but the local parabolic approximation may
be adequate to obtain the signal phase over the processed aperture.
When the signal is approximated by such a local parabola, the signal frequency is approximated by a local
linear ramp as shown by the dotted line. This ramp has a slope, Ka . For analytical purposes (such as the
quadratic phase error), for simulations, and sometimes even for azimuth matched filter generation, the signal is
then modeled using the quadratic phase component only

    s(η) ≈ exp{−jπ Ka (η − ηc)²}
6.1 Introduction
The range Doppler algorithm (RDA) was developed in 1976-1978 for processing SEASAT SAR data [1-8]. The
first digitally processed spaceborne SAR image was made with this algorithm in 1978 [9], and it is still in
widespread use today. The algorithm is designed to achieve block processing efficiency, using frequency domain
operations in both range and azimuth, while maintaining the simplicity of one-dimensional operations. It takes
advantage of the approximate separability of processing in these two directions, allowed by the large difference in
time scales of the range and azimuth data, and by the use of range cell migration correction (RCMC) between
the two one-dimensional operations.
Block processing efficiency is also achieved for the RCMC operation because it is done in the range time and
azimuth frequency domain. This domain is called the "range Doppler" domain, since azimuth frequency is
synonymous with Doppler frequency. The algorithm is called the range Doppler algorithm because RCMC is
performed in this domain, which is the most distinguishing feature of the algorithm.
Energy from point targets, at the same range but separated in azimuth, is transformed to the same location
in the azimuth frequency domain. Therefore, correction of one target trajectory in this domain effectively corrects
a family of target trajectories that have the same slant range of closest approach. This is a key feature of the
algorithm, which allows RCMC to be implemented efficiently in the range Doppler domain.
For implementation efficiency, all matched filter convolutions are performed as multiplies in the frequency
domain. Matched filtering and RCMC depend on range varying parameters. The RDA also distinguishes itself
among frequency domain algorithms by its ability to explicitly accommodate range variation of parameters with
relative ease. This is another key feature of the algorithm. All operations are performed with one-dimensional
data arrays, hence achieving processing simplicity and efficiency.
In 1984, JPL added a modification called secondary range compression (SRC) to handle data with a
moderate amount of squint [10]. SRC, introduced in Section 5.3, is applied to the range compression matched
filter, and compensates the range and azimuth coupling of the target's phase history. It helps to remove phase
distortions related to the coupling in squinted or in large aperture datasets.
The purpose of this chapter is to present the processing steps of the RDA in detail, and to illustrate the
steps with point target simulations. Section 6.2 highlights the processing steps of the RDA. Section 6.3 gives a
detailed description of the RDA for the low squint case, and derives the processing steps. Section 6.4 turns to the
more general case of processing squinted data, including various ways to implement SRC.
The concept of multilook processing is presented in Section 6.5. Selected parts of the data spectrum are
processed independently, then summed incoherently, to reduce a phenomenon called "speckle noise." This lowers
the resolution of the SAR image, but generally leads to better image interpretation.
This section describes the RDA, using the processing steps depicted in Figure 6.1. Figure 6.l(a) gives a block
diagram of the basic RDA algorithm, a version suited to processing data with relatively small squint angles and
short aperture lengths. Figure 6.l(b) shows the SRC modification needed to process squinted data, with the SRC
implemented accurately in the two-dimensional frequency domain. Figure 6.l(c) shows an approximate
implementation of the SRC in the range frequency domain, which is more efficient, compared to the
implementation of Figure 6.l(b).
The three implementations depicted in Figure 6.1 share most processing steps, as they only differ in the SRC
implementation.
1. Range compression is performed with a fast convolution when the data are in the azimuth time domain. In
other words, a range FFT is performed followed by a range matched filter multiply, and finally a range
IFFT, to complete the range compression. This is done in implementations (a) and (c), but not in (b).
2. An azimuth FFT transforms the data into the range Doppler domain, where Doppler centroid estimation and
most of the subsequent operations are performed (see Chapter 12 for Doppler estimation).
3. RCMC, which is range time and azimuth frequency dependent, is performed in the range Doppler domain,
where a family of target trajectories at the same range are transformed into one single trajectory (see
Section 5.5.2). RCMC straightens out these trajectories so that they now run parallel to the azimuth
frequency axis.
4. Azimuth matched filtering can be conveniently performed as a frequency domain matched filter multiply at
each range gate.
5. The final step is an azimuth IFFT to transform the data back to the time domain, resulting in a compressed
   complex image. Detection and look summation can be done at this stage, if desired.
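The five steps can be sketched end-to-end on a simulated point target. This is a toy sketch with hypothetical parameters and zero squint: RCMC is applied as a phase ramp shift in the range frequency domain, and the azimuth matched filter uses the parabolic approximation, which is adequate for these parameters:

```python
import numpy as np

# Hypothetical L-band airborne geometry (chosen only to exercise the steps)
c, f0 = 3.0e8, 1.0e9
wavelen = c / f0
vr, r0 = 200.0, 10000.0                    # platform speed, closest range
kr, tr, fs = 4.0e12, 5.0e-6, 24.0e6        # range chirp rate, width, sampling
prf, na, nr = 200.0, 1024, 256
ka = 2.0 * vr**2 / (wavelen * r0)          # azimuth FM rate (parabolic model)

eta = (np.arange(na) - na // 2) / prf      # azimuth time; target at eta = 0
tau = 2.0 * 9500.0 / c + np.arange(nr) / fs
r_eta = np.sqrt(r0**2 + (vr * eta)**2)     # hyperbolic range equation

# Simulate the raw data of a single broadside point target, as in (6.1)
dt = tau[None, :] - 2.0 * r_eta[:, None] / c
raw = (np.abs(dt) < tr / 2) * np.exp(1j * np.pi * kr * dt**2) \
      * np.exp(-1j * 4.0 * np.pi * f0 * r_eta[:, None] / c)

# 1. Range compression by fast convolution
t_ref = ((np.arange(nr) + nr // 2) % nr - nr // 2) / fs
ref = (np.abs(t_ref) < tr / 2) * np.exp(1j * np.pi * kr * t_ref**2)
data = np.fft.ifft(np.fft.fft(raw, axis=1) * np.conj(np.fft.fft(ref)), axis=1)

# 2. Azimuth FFT into the range Doppler domain
data = np.fft.fft(data, axis=0)

# 3. RCMC: pull each azimuth-frequency row back to R0 (phase ramp shift)
f_eta = np.fft.fftfreq(na, d=1.0 / prf)
delta_r = wavelen**2 * r0 * f_eta**2 / (8.0 * vr**2)
f_tau = np.fft.fftfreq(nr, d=1.0 / fs)
ramp = np.exp(1j * 4.0 * np.pi * f_tau[None, :] * delta_r[:, None] / c)
data = np.fft.ifft(np.fft.fft(data, axis=1) * ramp, axis=1)

# 4. Azimuth matched filter (parabolic approximation)
data *= np.exp(-1j * np.pi * f_eta[:, None]**2 / ka)

# 5. Azimuth IFFT: compressed complex image
img = np.abs(np.fft.ifft(data, axis=0))
az_pk, rg_pk = np.unravel_index(int(np.argmax(img)), img.shape)
print(az_pk, rg_pk)   # expected near (512, 80): eta = 0, range R0
```

The target focuses at its zero Doppler time and its slant range of closest approach, which is the essence of the basic RDA of Figure 6.1(a).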
Figure 6.1: Functional block diagram of the RDA, showing three implementations: (a) range compression, azimuth FFT, RCMC, azimuth compression, azimuth IFFT and look summation; (b) as (a), but with range compression performed without the IFFT, and SRC Option 2 with the range IFFT applied after the azimuth FFT; (c) as (a), but with SRC Option 3 incorporated into the range compression.
The following sections discuss each processing step in turn, including the two different implementations of the
SRC. All the processing steps are illustrated with simulated airborne C-band data, using the parameters listed in
Table 6.1. Some of these parameters are not realistic, as the range chirp length and azimuth aperture length were
reduced to keep the simulations and figures to a reasonable size. More importantly, they serve the purpose of
illustrating the three implementations shown in Figure 6.1.
Table 6.1: C-Band Airborne SAR Parameters Used in the Simulations
The selected range sampling rate gives a range oversampling of 1.2, and the selected PRF an azimuth
oversampling rate of 1.25. The range resolution is 2.7 m and the azimuth resolution 1.7 m, without considering
aperture weighting. There are two datasets, one with a low squint and the other with a large squint, to illustrate
the requirement for SRC. The corresponding beam center crossing times and frequencies for these two cases are
also shown in the table. It is assumed that the Doppler centroid frequency is not range varying, and is fixed at
2 Vr sin θr,c/λ for all ranges.
The low squint case is considered first - it is simpler, as SRC is not required. The processing steps follow those
depicted in the basic RDA in Figure 6.l(a).
The data received from the radar system are referred to as "signal data" or "raw data." The data are first
demodulated to baseband, so that the nominal center range frequency is zero. The demodulated radar signal,
s0(τ, η), received from a point target can be modeled as (4.39)

    s0(τ, η) = A0 wr[τ − 2R(η)/c] wa(η − ηc) exp{−j4π f0 R(η)/c} exp{jπ Kr (τ − 2R(η)/c)²}    (6.1)
A linear FM radar pulse is assumed, having an FM rate, Kr. The two w terms model the magnitudes of the
range and azimuth signals, and are often neglected in the signal analysis. The instantaneous slant range, R(η), is
given by (4.9)

    R(η) = √(R0² + Vr² η²)    (6.2)
Figure 6.2: Positions of three targets used in the simulation.
To examine some of the signal and algorithm properties, three targets are simulated. The targets are located
so that there is overlap in range and azimuth in the received signal. Targets A and B have the same slant range
of closest approach, and B and C cross the beam center at the same azimuth time, as shown in Figure 6.2.
After range compression and azimuth FFT, Targets A and B still overlap each other in the range Doppler
domain, but Target C is separate from the other two targets. This positioning is designed to illustrate the effect
of target energy overlap in various stages of the processing, and to have separate targets to analyze when needed.
Figure 6.3 depicts the magnitude and phase of the received raw data. Prior to range compression, the
targets are mixed where they overlap each other, compared with the one-target case in Section 5.6. A moderate
amount of RCM is visible in the figure, due to the squint angle of 3.5°. The real and imaginary parts of the
signal still exhibit some of the FM rate characteristics, despite the mixing of the three targets. The RCM and
phase patterns are noticeable because only three targets are simulated. If the number of targets is increased
substantially, these patterns disappear, and the raw data no longer contain recognizable patterns or information.
Figure 6.3(c) shows that the signal magnitude has a scalloped appearance, which is due to the targets
interfering with one another. The nonoverlapping part of Target C can be seen as a thin margin of about 20
range samples at the extreme right in the signal. In the subsequent sections, it is shown how the SAR processing
focuses these targets to their respective positions in the image.
Figure 6.3: Simulated radar signal (raw) data with three targets - the low squint case: (a) real part; (b) imaginary part; (c) magnitude; (d) phase.
Different options of matched filter generation and implementation are discussed in Section 3.4. Option 2 is selected
for range compression in this section, so that the throwaway, equal to the pulse replica length minus one, is at
the end of the compressed data array. In this option, the filter is generated by taking the complex conjugate of
the FFT of the zero padded pulse replica (6.1), where the zeros are added to the end of the replica array. A
tapered window can be applied, either in the time or frequency domain. In Section 6.4, it is shown how SRC can
be incorporated into this matched filter, using a simple adjustment to the chirp FM rate, Kr.
Let S0(fτ, η) be the range Fourier transform of s0(τ, η) of (6.1), and G(fτ) be the frequency domain
matched filter defined in (3.19). The output of the range matched filter can be expressed as

    src(τ, η) = IFFT{S0(fτ, η) G(fτ)} = A0 pr[τ − 2R(η)/c] wa(η − ηc) exp{−j4π f0 R(η)/c}    (6.3)
where the compressed pulse envelope, pr(τ), is the IFFT of the window, Wr(fτ). For a rectangular window, pr(τ)
is a sinc function, and for a tapered window it is a sinc-like function with lower sidelobes. The slant range
resolution, derived in Section 4.4, is
    ρr = (c/2) · 0.886 γw,r/(|Kr| Tr)    (6.4)
where γw,r is the IRW broadening factor due to the tapering window, |Kr| Tr is the chirp bandwidth, and the
factor c/2 expresses the resolution in distance rather than time units. The broadening factor, γw,r, equals one
when a rectangular window is used, and results in a PSLR of −13 dB. Instead, a Kaiser window with roll-off
coefficient of β = 2.5 is used in the simulation experiments. Then, γw,r = 1.18, corresponding to an 18%
broadening of the IRW, and the resulting PSLR is −21 dB (recall Figure 2.12).
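The Option 2 filter generation and its throwaway placement can be sketched as follows; the chirp parameters and target delay are hypothetical, and the Kaiser window is omitted for brevity:

```python
import numpy as np

fs, kr, tr = 24.0e6, 4.0e12, 5.0e-6          # hypothetical chirp parameters
n_rep = int(round(tr * fs))                  # replica length: 120 samples
n_fft = 512                                  # processing array length

t = (np.arange(n_rep) - n_rep / 2) / fs
replica = np.exp(1j * np.pi * kr * t**2)

# Option 2: conjugate of the FFT of the replica, zero padded at the end
g = np.conj(np.fft.fft(replica, n_fft))

delay = 200                                  # target delay in samples
sig = np.zeros(n_fft, dtype=complex)
sig[delay:delay + n_rep] = replica

out = np.fft.ifft(np.fft.fft(sig) * g)
peak = int(np.argmax(np.abs(out)))
print(peak)   # the peak sits at the target delay; the throwaway
              # (n_rep - 1 samples) is at the end of the array
```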
The factors in (6.3) have the following physical interpretations. The first factor, A0, is the overall gain, which
includes the scattering coefficient. It can be assumed to equal unity in subsequent discussions. The second factor,
pr[τ − 2R(η)/c], is the sinc-like range envelope, which incorporates the target range migration via the azimuth
varying parameter, 2R(η)/c. The third and fourth factors are the azimuth gain and phase, which are unaffected
by range compression.
Figure 6.4 shows the range compression results of the three simulated point targets of Figure 6.2. Targets A and B have the same slant range of closest approach, while Target C is at a different range. Each trajectory shows the RCM effect in the time domain. The magnitude image shows the interference between the two targets, A and B, which are almost coincident in this domain. The azimuth modulation is clearly visible in the real part of the compressed data, especially for the separate Target C.
In the basic version of the RDA, designed in 1978 to process SEASAT data, a low squint angle is assumed. The
low squint assumption makes the processing easy to understand - the analysis is extended to the general case
with a higher squint in Section 6.4. SRC can be ignored in some low squint cases, but not in high squint or wide
aperture cases. The conditions under which SRC can be ignored are analyzed in Section 6.4.
In low squint cases, the antenna beam points close to the zero Doppler direction. The range equation can be approximated by the parabolic equation (5.1), if the aperture is not too large

R(η) ≈ R0 + Vr² η² / (2 R0)   (6.5)
II>
'?5.
50 50
I
'
'
! 100 100 A. B
!
§ 150 160 C
V
I 200 200 I i•
•
~
250 250 I
20 40 60 80 100 20 ,o 60 80 100
Range (samples) --> Range (samples) - - >
The approximation is justified from the assumption that R0 >> Vr η, but one must be cautious in using this approximation. When the higher order terms are negligible compared to the range resolution, they can be ignored as far as RCMC is concerned. However, before discarding them in the azimuth matched filter, their magnitudes must be compared to the wavelength, since 2R(η)/λ is the important phase term in the filter. In this low squint airborne C-band example, the residual range error introduces an insignificant phase error at the ends of the processing aperture (the residual error is mainly cubic). As this size of error introduces negligible effects to the image quality, the parabolic approximation can be used in this case.
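The comparison described above can be sketched numerically. The geometry below is an assumed zero squint C-band case (at zero squint the residual is dominated by the quartic term; the cubic term mentioned in the text arises once the aperture is offset by squint):

```python
import numpy as np

# Hyperbolic range equation versus the parabolic approximation (6.5),
# with assumed, illustrative airborne C-band parameters
lam = 0.0566        # wavelength (m)
R0 = 20e3           # slant range of closest approach (m)
Vr = 250.0          # effective radar velocity (m/s)
Ta = 2.0            # exposure (aperture) time (s)

eta = np.linspace(-Ta / 2, Ta / 2, 1001)
R_hyp = np.sqrt(R0**2 + Vr**2 * eta**2)    # hyperbolic form
R_par = R0 + Vr**2 * eta**2 / (2 * R0)     # parabolic form (6.5)

dR_max = np.max(np.abs(R_hyp - R_par))     # residual range error (m)
phase_err = 4 * np.pi * dR_max / lam       # phase error (rad), from the 2R/lambda term
```

Here the residual is well under a millimeter and the phase error is a few hundredths of a radian at the aperture edges, so the parabola is safe for both RCMC and the matched filter in this case.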
Combining (6.3) and (6.5), the range compressed signal can be expressed as

src(τ, η) ≈ A0 pr[τ − 2R(η)/c] wa(η − ηc)
            × exp{−j 4π f0 R0 / c} exp{−j π (2Vr²/(λR0)) η²}   (6.6)
The azimuth phase modulation is now apparent in the second exponential phase term. Since the phase is a function of η², the signal has linear FM characteristics, with the linear FM rate being

Ka ≈ 2 Vr² / (λ R0)   (6.7)

The FM rate is derived in Section 4.5.5, resulting in (4.38). Equation (6.7) is obtained by assuming that cos³ θr,c in (4.38) is equal to unity for small squint angles.
An azimuth FFT is then performed on each range gate to transform the data into the range Doppler domain. In deriving the signal in this domain, only the second exponential phase term in (6.6) is important, as the first exponential term is a constant for a given target. By applying the POSP, as in Section 5.2, the relationship between azimuth frequency and time is

fη = −Ka η   (6.8)

By substituting η = −fη/Ka into (6.6), the data after the azimuth FFT can be expressed as

S1(τ, fη) = A0 pr[τ − 2Rrd(fη)/c] Wa(fη − fηc)
            × exp{−j 4π f0 R0 / c} exp{j π fη² / Ka}   (6.9)
The azimuth beam pattern, wa(η − ηc), is now transformed into Wa(fη − fηc), with its shape preserved. There are two exponential terms in the above equation. The first one carries the inherent phase information of the target, and is important in applications such as interferometry and polarimetry, but is not important in an intensity image. The second phase term is the azimuth modulation, which also has linear FM characteristics in fη. The RCM in the range envelope, Rrd(fη), is now expressed in the range Doppler domain. Its locus can be obtained by combining (6.5) and (6.8) to give

Rrd(fη) ≈ R0 + λ² R0 fη² / (8 Vr²)   (6.10)
Figure 6.5 shows the structure of the data after the azimuth FFT. Again, the azimuth modulation is clearly
visible in the real part of the compressed data. It is important to note that Targets A and B, which have the
same slant range of closest approach, now follow the same trajectory in the range Doppler domain. The
modulation in the left trajectory of Figure 6.5(b) is the result of these two targets interfering with one another.
Therefore, correcting the range migration of this one trajectory has the effect of correcting a family of
trajectories, whose members share the same range of closest approach.
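The azimuth FM rate (6.7) and the RCM locus (6.10) can be evaluated with assumed satellite-like parameters to get a sense of the migration involved:

```python
import numpy as np

# Azimuth FM rate (6.7) and RCM displacement (second term of (6.10)),
# with assumed, satellite-like illustrative parameters
lam = 0.0566        # wavelength (m)
R0 = 850e3          # slant range (m)
Vr = 7100.0         # effective radar velocity (m/s)
Fa = 1700.0         # PRF (Hz)

Ka = 2 * Vr**2 / (lam * R0)                  # azimuth FM rate (Hz/s)
f_eta = np.linspace(-Fa / 2, Fa / 2, 501)    # one PRF cycle at baseband
dR = lam**2 * R0 * f_eta**2 / (8 * Vr**2)    # migration relative to R0 (m)

rcm_at_edge = dR[-1]                         # RCM at the edge of the PRF cycle
```

With these numbers the locus migrates by roughly 5 m across one PRF cycle, which is several range cells at a fine range spacing; this is the migration that RCMC must remove.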
Figure 6.5: Simulated data in the range Doppler domain, with Targets A and B following the same trajectory and Target C a separate one (axes: range and azimuth frequency in samples).
There are two ways to implement range cell migration correction (RCMC). In the first option, RCMC is performed by a range interpolation operation in the range Doppler domain. As discussed in Section 2.7, an interpolator based on the sinc function can be conveniently implemented. The sinc kernel is truncated and weighted by a tapering window, such as a Kaiser window.
The amount of RCM to correct is given by the second term in (6.10)

ΔR(fη) = λ² R0 fη² / (8 Vr²)   (6.11)

This equation represents the target displacement as a function of azimuth frequency, fη. Note that ΔR(fη) is also a function of R0; that is, it is range variant. Since one of the dimensions in the data is range time, the RDA can correctly implement the range variation of the RCM in the range Doppler domain.
Another RCMC implementation involves the assumption that the RCM is range invariant, at least over a finite range region. In this case, the RCMC can be implemented using an FFT, linear phase multiply, and IFFT technique. The phase multiplier, for a given fη, is given by

Grcmc(fτ) = exp{ j 4π fτ ΔR(fη) / c }   (6.12)

To apply RCMC using this implementation, small range blocks can be used, with the correction amount held constant within each block. However, this implementation has the disadvantage that the blocks have to overlap in range, and the efficiency gain may not be worth the added complexity. From now on, the sinc interpolation option is assumed.
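The FFT, linear phase multiply, IFFT option can be sketched as follows. The sign convention assumed here moves an echo toward smaller range by the correction amount; the parameters and the 3-cell shift are illustrative:

```python
import numpy as np

# Range-invariant RCMC by FFT, linear phase multiply, and IFFT (toy example)
c = 2.9979e8
Fr = 60e6                           # range sampling rate (Hz), assumed
N = 256

line = np.zeros(N, dtype=complex)
line[100] = 1.0                     # a point echo at range cell 100

shift_cells = 3.0                   # RCM to correct, in range cells
dR = shift_cells * c / (2 * Fr)     # the same shift in meters

f_tau = np.fft.fftfreq(N, d=1 / Fr)               # range frequency axis (Hz)
ramp = np.exp(1j * 4 * np.pi * f_tau * dR / c)    # linear phase multiplier
corrected = np.fft.ifft(np.fft.fft(line) * ramp)

new_cell = int(np.argmax(np.abs(corrected)))      # echo moved to cell 97
```

Because the shift here is a whole number of cells, the phase ramp reduces to an exact circular shift; a fractional dR would realize the subsample shifts that otherwise require interpolation.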
Assuming the RCMC interpolation is applied accurately, the signal becomes

S2(τ, fη) = A0 pr(τ − 2R0/c) Wa(fη − fηc)
            × exp{−j 4π f0 R0 / c} exp{j π fη² / Ka}   (6.13)

Note that the range envelope, pr, is now independent of azimuth frequency, showing that the RCM has been corrected. In addition, the energy is now centered at τ = 2R0/c, the range of closest approach.
The Doppler centroid is an important parameter in the RCMC operation, as can be seen from (6.11), which specifies the amount of RCM as a function of azimuth frequency. But as the frequency concerned is the absolute azimuth frequency, fη, the correct Doppler ambiguity must be known to implement RCMC correctly.

The effect of an azimuth ambiguity on RCMC can be seen in the range Doppler domain of Figure 6.6. The PRF sampling has the effect of aliasing ambiguous energy into a single PRF cycle. Normally, the PRF is high enough so that most of the signal energy is limited to one PRF cycle only. However, Figure 6.6 presents an exaggerated case where significant energy is present over more than two PRFs, in order to illustrate the effect of a Doppler ambiguity error.
Figure 6.6: Effect of a Doppler ambiguity error on RCMC: (a) before RCMC; (b) RCMC to Trajectory 0; (c) RCMC to Trajectory 1 (axes: range in samples; azimuth frequency in Hz).
In Figure 6.6, it is assumed that the PRF is 1000 Hz, and that the absolute Doppler centroid is 1500 Hz. Figure 6.6(a) shows the target trajectory after the azimuth FFT but before RCMC, as in Figure 5.15. Three parts of the target trajectory are shown, labeled 0, 1, and 2. The width of the trajectory represents the strength of the signal at the various azimuth frequencies. Locus #1 represents the main target energy, and includes energy spanning one PRF, centered on the Doppler centroid. The energy shown as Loci #0 and #2 are azimuth ambiguities coming from the edges of the radar beam.
The three parts of the locus of target energy have different RCM slopes because of their different absolute
azimuth frequencies. As the objective is to focus or compress the energy from the main target ambiguity, the
RCMC correction curve should be taken from Locus #1. However, if the Doppler ambiguity estimate is wrong, an
error will occur in RCMC, as the wrong slope will be used.
Figure 6.6(b) illustrates the case where the Doppler ambiguity has been estimated incorrectly by one unit, with the result that Segment #0 has been corrected properly, but the main energy in Segment #1 has not. This results in an azimuth misregistration of the image by Vg Fa/Ka per ambiguity error, where Vg is the radar beam speed projected onto the ground. A small range misregistration also occurs.
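A quick evaluation of Vg Fa/Ka with assumed C-band satellite values shows the scale of this misregistration:

```python
# Azimuth misregistration per unit Doppler ambiguity error, Vg * Fa / Ka
# (all values assumed for illustration)
Vg = 6716.0     # beam velocity on the ground (m/s), roughly 88% of orbital speed
Fa = 1700.0     # PRF (Hz)
Ka = 2100.0     # azimuth FM rate (Hz/s)

misreg = Vg * Fa / Ka    # meters of azimuth shift per ambiguity error
```

The shift is more than 5 km per ambiguity, so even a single ambiguity error is a gross geolocation error.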
There is also a loss of focus in both range and azimuth because the energy in Segment #1 resides in more
than one range cell. This causes a direct smearing in range, but also a loss of azimuth resolution because of the
reduced azimuth bandwidth in each range cell. For C-band satellite SARs, the broadening with a one-ambiguity
error is not very significant, but the broadening becomes significant when the ambiguity error is greater than one,
or if the synthetic aperture is longer.
Finally, Figure 6.6(c) illustrates the result of RCMC when the correct ambiguity is chosen for the processing.
Weaker energy from the ambiguities of Segments #0 and #2 also appears in the image, but has a small range
displacement as a result of the uncorrected RCM and a large azimuth displacement given by the "PRF time," as
illustrated in Figures 5.3 and 5.4. The ambiguities are also defocused compared to the main energy. A SAR
design goal is to keep the ambiguities small (e.g., less than −20 dB) by the design of the antenna beam pattern
and selection of the PRF.
From this discussion, it is apparent that the RCM correction curve should be taken from azimuth frequencies
lying within ±Fa/2 of the absolute Doppler centroid, so that the correct curve is used, and the correction
"discontinuity" is as far as possible from the center of the main target energy. The same argument applies to the
choice of matched filter discontinuity in azimuth compression. In the present example, the discontinuity is at 0 or
1000 Hz. Techniques for Doppler centroid estimation are discussed in Chapter 12.
There are three issues in the generation and application of the RCMC interpolator: the kernel length, the shift quantization, and the coefficient values (the sinc window).
To perform the interpolation efficiently, the interpolation kernel can be tabulated at predefined subsample (quantization) intervals. In this way, the sinc function does not need to be generated at every interpolation point; only nearest neighbor indexing into the tabulated function is required. This introduces a maximum geometric distortion of 0.5/Nsub of a range cell, where Nsub is the number of subsamples (typically 16). The generation of the kernel coefficients is described in Section 2.7.
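A sketch of such a table, using an assumed 8-point Kaiser-weighted sinc kernel at Nsub = 16 subsample offsets, followed by a lookup-based interpolation of a test signal (the kaiser_weight helper is a hypothetical name, not from the book):

```python
import numpy as np

def kaiser_weight(x, half_len, beta):
    # Kaiser window evaluated at continuous positions x (in samples);
    # hypothetical helper, not a function from the book
    r = np.clip(x / half_len, -1.0, 1.0)
    return np.i0(beta * np.sqrt(1.0 - r**2)) / np.i0(beta)

n_pts = 8      # kernel length (points)
n_sub = 16     # subsample quantization intervals
beta = 2.5     # Kaiser roll-off for the kernel weighting (assumed)

offsets = np.arange(n_pts) - n_pts // 2 + 1    # sample positions -3 .. +4
table = np.zeros((n_sub, n_pts))
for k in range(n_sub):
    frac = k / n_sub                           # subsample shift in [0, 1)
    x = offsets - frac
    taps = np.sinc(x) * kaiser_weight(x, n_pts / 2, beta)
    table[k] = taps / taps.sum()               # normalize for unity gain at DC

# Lookup-based interpolation of a slowly varying signal at position 10 + 5/16
sig = np.cos(2 * np.pi * 0.05 * np.arange(32))
est = float(np.dot(table[5], sig[10 + offsets]))
true_val = np.cos(2 * np.pi * 0.05 * (10 + 5 / 16))
```

At run time, each RCMC output sample then costs one table lookup and an n_pts-point inner product.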
For efficiency considerations, the size of the interpolator kernel should be as short as possible. However, a
short kernel has two drawbacks: the loss of radiometric and phase accuracy, and the introduction of radiometric
artifacts known as paired echoes.
Figure 6.7: Range compressed pulse of a target migrating through range cells (axis: range in cell number).
The paired echo effect is illustrated in Figures 6.7 and 6.8. Figure 6.7 shows part of the exposure of the range compressed pulse of a target, as it migrates through two range cells. Ideally, the RCM correction would be done for each azimuth sample by applying a shift equal to the distance of the peak from the range cell boundary.
The RCMC results are shown in Figure 6.8, where the top row shows the magnitude of the signal viewed
along the azimuth direction in one range cell, and the bottom row shows the results of the subsequent azimuth
compression. Column (a) represents the ideal RCMC case, where the top curve shows that the energy at the peak
of the compressed pulse has been smoothly extracted at each azimuth sample. The curve is actually the slowly
varying azimuth beam profile.
If, instead, the RCMC shift is quantized to the nearest integer sample (i.e., nearest neighbor interpolation),
the shift amount is represented by the open circles in Figure 6.7. In this case, an amplitude modulation occurs as
the target migrates between the center and edge of each range cell. When the peak of the range compressed
pulse lies in the center of a range cell, shown by the vertical dashed lines, the target has maximum energy. When
the target lies at the cell edges, the energy is down by a few decibels, given by the shape of the range
compressed pulse. The resulting modulation is shown in Column (b) of Figure 6.8, where the target has migrated
through seven range cells during the course of its whole exposure. This modulation gives rise to the paired echoes
seen in the bottom panel of Column (b).
Figure 6.8: Paired echoes caused by modulation when RCMC is not accurate.
In practice, a compromise is made between no interpolation (nearest neighbor selection), and perfect interpolation (infinite length). Usually, a four- or eight-point interpolator is chosen to give reasonable accuracy. The solid dots in Figure 6.7 show the case where the RCMC shift is quantized to 1/4 of a cell, although a quantization of 1/16 or 1/32 of a cell is used in practice. In Column (c) of Figure 6.8, the results of a four-point interpolator are shown. It is seen that the modulation is lower than in Column (b), and the paired echoes are reduced to 28 dB below the target peak.
The paired echo power is usually not noticeable in SAR images, because it tends to be masked by the adjacent radar clutter. The exception is when there is a bright discrete target against a dark background. In many cases, the interpolator length is dictated by the radiometric and phase accuracy requirements, rather than by the paired echo power.
The RCMC algorithm operates on the data in the range Doppler domain, shown in Figure 6.5. After the RCMC interpolator has been applied, the data take on the straightened form shown in Figure 6.9. Even though there are three targets, only two are distinguishable, because Targets A and B are still colocated at this point in the processing.
Note that a small amount of ambiguous energy is visible in the magnitude plot. It has been shifted left or
right by nine cells, because the RCMC is not correct for this part of the target trajectory (recall Figure 6.6).
Figure 6.9: Data after RCMC, with the trajectories of Targets A, B, and C straightened (axes: range and azimuth frequency in samples).
Image registration operations can be incorporated into the RCMC interpolation, namely, slant range to ground range (SRGR) conversion and scaling the sample spacing to a map grid. SRGR conversion is discussed in Chapter 4. The natural slant range output sample spacing is given by c/(2Fr), and in ground range by c/(2Fr sin θi), where θi is the range-dependent incidence angle, shown in Figure 4.3. Both factors require interpolation in the range direction, and therefore can be combined with the RCMC operation. These two operations have not been considered in the simulation experiments.
Figure 6.10 shows the phase of Target C in both the range and azimuth directions. In Figure 6.10(a), a range profile is taken of the range compressed pulse, at the high-energy region around the Doppler centroid. The phase plot shows that the data is at baseband, as the phase is approximately flat through the main lobe. However, because the RCMC operation has some error, a small amount of phase distortion can be seen. In comparison, the phase was not distorted after range compression, as in the ideal case shown in Figure 3.6(d).
Figure 6.10: (a) Phase of a range profile through the main lobe; (b) phase of an azimuth profile (axes: range and azimuth in samples).
The azimuth phase is shown in Figure 6.10(b). The phase is not at baseband in azimuth, because of the nonzero squint angle; this is seen by the nonzero slope of the phase at the center of the exposure (at Sample 128). The quadratic phase modulation is observable, since azimuth compression has not been performed yet. The cusp in the phase curve is where the frequency discontinuity lies. It is shown later that, in the general case of nonzero squint, the range phase is altered by the azimuth matched filter, and is not at baseband either.
In the presence of a significant error in performing RCMC, the uncorrected migration causes both the range and
azimuth IRW to be broadened. The origin and amount of broadening is explained in this section.
The range broadening is caused by the misalignment of the range compressed data, because the azimuth compression operation effectively sums the data in the azimuth direction. When RCMC is done correctly, as in Figure 6.9, summing in azimuth does not broaden the range response. The sum will be a sinc-like function, similar to the range impulse response of a single line after range compression. However, the target trajectory is skewed in the presence of an uncorrected RCM component, as shown in Figure 6.5, resulting in a range broadening after the azimuth sum. The following assumes the uncorrected RCM component to be linear, as this usually dominates over the quadratic component.
The azimuth broadening is due to the reduction in azimuth bandwidth that exists in a single range cell. When RCMC is performed correctly, as in Figure 6.9, the target energy is captured completely within a single range cell. Hence, the full azimuth bandwidth is utilized in azimuth compression. When RCMC is not performed correctly, the azimuth exposure is spread among multiple range cells. This reduces the azimuth bandwidth captured within the range cell where the target energy is supposed to be. Since resolution is inversely proportional to bandwidth, as shown in (4.45), an azimuth broadening occurs.
Figure 6.11 shows the range and azimuth broadening for three different degrees of azimuth weighting (Kaiser β = 0, 2.5, and 5). The Kaiser window simulates the combined effect of the azimuth beam profile and the
azimuth compression weighting. The horizontal axis is the amount of uncorrected RCM that exists within the
azimuth processing time. The following general rule can be stated. The uncorrected RCM should be kept to
within 0.5 range resolution elements, to keep the range and azimuth broadening due to RCMC errors less than
2%.
Figure 6.11: IRW broadening versus residual RCM (in range resolution elements), for azimuth weightings of β = 0, 2.5, and 5.
The IRW broadening is mildly dependent on the weighting function used in the azimuth compression. When
the weighting is heavier, the edges of the azimuth exposure are de-emphasized, which results in less severe
broadening for a given RCMC error. However, for weighting values that are in common use (β < 3), the effect is
not large.
The broadening is the same in each direction, as a result of the symmetry of the signal spectrum of a compressed target in the two-dimensional frequency domain. This can be illustrated as follows. Assuming a zero squint case for simplicity, the spectrum after the azimuth matched filter multiply is represented by Wr(fτ) Wa(fη). The phase is zero in each direction, as a result of a perfect matched filter application. The compression is then completed by a two-dimensional IFFT. However, when a residual RCM is introduced, the spectrum is expressed by

Srcm(fτ, fη) = Wr(fτ) Wa(fη) exp{−j 2π fτ Δτ(fη)}   (6.14)

where Δτ(fη) represents the residual azimuth-dependent RCM. Assuming the residual RCM is linear in azimuth, Δτ(fη) can be written as α fη, where α is proportional to the slope of the linear RCM. After the substitution, (6.14) becomes

Srcm(fτ, fη) = Wr(fτ) Wa(fη) exp{−j 2π α fτ fη}   (6.15)

When the two windows have the same shape relative to the range and azimuth bandwidths, the spectrum has a symmetrical phase ramp in fτ and fη. The phase ramp creates an impulse response expansion in the time domain, where the "stretching" in range and azimuth directions are equal, relative to their respective resolutions. In other words, the symmetry of the two-dimensional frequency domain phase ramp transforms to an equal broadening in the time domain.
In multilook processing (see Section 6.5), only part of the azimuth bandwidth is processed at a time. The
range broadening, caused by an RCMC error, is similar to that in the full aperture case (see Figure 6.11),
because a similar summation of energy in azimuth is performed by the look summation operation. However, the
percent azimuth broadening is much reduced because the processed resolution is larger, and the RCMC error is
smaller within the time of the shorter matched filter. Therefore, range broadening is the main limiting factor in
the amount of residual RCM that can be tolerated in the multilook case.
In azimuth, there is usually some latitude in the choice of processed resolution. This is because the azimuth signal bandwidth is often greater than needed to make the azimuth resolution the same as the range resolution. "Full-resolution" (or "single-look") processing can be done, using all the bandwidth, and achieving a resolution close to the theoretical limit of one-half the antenna length. On the other hand, "multilook" processing can be done, in which the data are processed to a resolution less than this limit, to obtain a less noisy image. One-look processing is discussed in this section, and multilook processing in Section 6.5.
After RCMC, a matched filter is applied to focus the data in the azimuth direction. Since the data after RCMC are in the range Doppler domain (6.13), it is convenient and efficient to implement the azimuth matched filter in this domain; that is, as a function of slant range, R0, and azimuth frequency, fη. The matched filter is the complex conjugate of the second exponential term in (6.13)

Haz(fη) = exp{ −j π fη² / Ka }   (6.16)

in which Ka is a function of R0, as shown in (6.7). This version of the matched filter is implemented, as in Option 3 of Section 3.4.
The Doppler centroid is an important parameter in generating the matched filter, in much the same way as it is for RCMC. Recall that a point of discontinuity in the fη array must be selected, due to the wraparound in the frequency domain. This point is taken to be one-half of a PRF away from the Doppler centroid frequency.
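One way to build the fη axis with the wraparound placed half a PRF from the centroid, and the matched filter (6.16) on it, is sketched below (PRF and centroid values follow the Figure 6.6 example; Ka is an assumed value):

```python
import numpy as np

Fa = 1000.0      # PRF (Hz), as in the Figure 6.6 example
f_dc = 1500.0    # absolute Doppler centroid (Hz)
Ka = 2000.0      # azimuth FM rate (Hz/s), assumed for illustration
N = 256

# Baseband FFT bins, unwrapped into [f_dc - Fa/2, f_dc + Fa/2] so the
# discontinuity sits half a PRF away from the Doppler centroid
f_base = np.fft.fftfreq(N, d=1 / Fa)
f_eta = f_base + Fa * np.round((f_dc - f_base) / Fa)

H_az = np.exp(-1j * np.pi * f_eta**2 / Ka)    # azimuth matched filter (6.16)
```

Each range gate's azimuth spectrum is then multiplied by H_az before the azimuth IFFT.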
The compressed result is registered to zero Doppler with this filter, as discussed in Section 3.3.3. The
registration is correct because the phase of each target is canceled by the matched filter, except for a linear
phase component that gives each target its unique position in the output array.
Weighting can be applied in the azimuth compression process. Because the azimuth beam profile already applies a significant amount of weighting in the single-look case, only a small amount of additional matched filter weighting is generally needed. As the magnitude of the two-way beam pattern at the edges of the processed beamwidth is approximately one-half of its peak value, the effective beam weighting is equivalent to a Kaiser window with a coefficient, β, equal to 1.8. Therefore, if a total weighting is desired that is equivalent to β = 2.5, only a light additional weighting need be applied with the matched filter.

Because of the existing antenna weighting, either a moderate window can be used to give a small additional amount of tapering, or no window need be used.
To perform azimuth compression, the data after RCMC, S2(τ, fη) in (6.13), are multiplied by the frequency domain matched filter, Haz(fη). The result is

S3(τ, fη) = S2(τ, fη) Haz(fη)
          = A0 pr(τ − 2R0/c) Wa(fη − fηc) exp{−j 4π f0 R0 / c}   (6.17)

A final azimuth IFFT then transforms the data back to the time domain

sac(τ, η) = A0 pr(τ − 2R0/c) pa(η)
            × exp{−j 4π f0 R0 / c} exp{j 2π fηc η}   (6.18)

where pa is the amplitude of the azimuth impulse response, a sinc-like function similar to pr. These envelopes show that the target is now positioned at τ = 2R0/c and η = 0. Recalling that η is relative to the time of closest approach when zero Doppler occurs for the given target, it is seen that the target is registered to its zero Doppler position. There are two exponential terms in (6.18). The first one is the target phase due to its range position, R0. The second one is a linear phase term, due to a nonzero Doppler centroid, fηc. The phase of this term is zero at the peak position, where η = 0.
It must be emphasized that the phase mentioned above is an approximation when the parabolic form of the range equation given in (6.5) or (6.10) is used. When this approximation is used, the processor may not be phase preserving for nonzero squints. The current practice is to use the hyperbolic form of the phase for the matched filter for low squint cases when phase precision is needed.
Simulation Results
Figure 6.12 shows the results of the full-aperture or single-look azimuth compression, without any extra window weighting. The image data are often stored in complex form, and the image is commonly referred to as a single-look complex (SLC) product. The azimuth sample spacing is Vg/Fa, where Vg is the velocity of the beam projected onto the Earth's surface. This velocity is about 88% of the satellite velocity, for orbit heights of 800 km. An azimuth interpolation can be applied to adjust the final azimuth sample spacing, if needed.
Figure 6.12(a) shows the three compressed targets, whose positions agree with Figure 6.2. Recall that Targets
B and C were placed parallel to the beam centerline, and the beam squint is small (3.5°) in this case. As the
compression operation registers the targets to zero Doppler, they are correctly registered with a small azimuth
offset, even though the beam centerline crossed them at the same time.
Figure 6.12: Single-look azimuth compression results, showing the compressed Targets A, B, and C, the spectrum of Target C, and expanded views of the compressed Target C (axes include range time in samples and range frequency in samples).
Target C is analyzed by selecting a 16 × 16 chip centered on the peak, and upsampling by a factor of 16 (slightly better accuracy is obtained if a 32 × 32 chip size is used). The results are shown in the bottom panels of Figure 6.12. The orientation of the range sidelobes, shown in Figure 6.12(c, d), is skewed by an angle equal to the beam squint angle. This skew is caused by the variation of the azimuth FM rate with range in the matched filter, which interacts with the nonzero Doppler centroid to change the registration in the azimuth direction. In principle, the azimuth FM rate should be a constant for a particular target. By varying the FM rate with range, a mismatch is introduced into the sidelobes of the target. From (6.7), the FM rate mismatch at a range shift, ΔR0, from the center of the target is

ΔKa ≈ −(2 Vr² / (λ R0²)) ΔR0   (6.19)
The geometric misregistration caused by a filter mismatch on a nonbaseband signal is presented in Section 3.5, and (3.58) shows that this misregistration is a linear function of ΔKa. The filter mismatch increases linearly with range ΔR0 away from the target range, as Vr is almost constant, resulting in a linear slope of the sidelobes. The misregistration is also a linear function of the Doppler offset [the parameter, tc, in (3.58)], so that the range sidelobe skew angle is proportional to the squint angle.
This skew also causes a range spectrum shift in the target spectrum, as shown in Figure 6.12(b) (recall
Figure 2.2). The skew is more severe for a high squint, and care must be taken in placing the zeros when the
spectrum is zero padded.
Figure 6.13: Profiles through the peak of compressed Target C: (a) range profile; (b) azimuth profile; (c) range phase; (d) azimuth phase (magnitudes in dB; phases in degrees; axes in samples).
To examine the response in more detail, range and azimuth profiles are taken through the peak of the compressed target, as shown in Figure 6.13. The small asymmetry in the sidelobes is caused by inaccuracies in applying RCMC, and in matching of the cross coupling phase terms in the signal. The range sidelobes appear smaller than they really are because the profile is not aligned with the actual sidelobe axis.
The theoretical range resolution is 0.886 × 1.2 × 1.18 = 1.25 range samples, where 0.886 is the resolution without weighting, 1.2 is the range oversampling factor, and 1.18 is the IRW broadening due to the use of β = 2.5 in the Kaiser window. The measured range resolution is 1.24 samples, which agrees well with the theoretical value. The PSLR is less than −20 dB, agreeing with the value expected from the Kaiser window.
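The quoted IRW arithmetic, for both this range figure and the azimuth figure discussed next, checks out directly:

```python
# Theoretical IRW values quoted in the text, in output samples
rng_irw = 0.886 * 1.2 * 1.18    # unweighted IRW x oversampling x Kaiser(2.5) broadening
az_irw = 0.886 * 1.185          # unweighted IRW x antenna-pattern broadening
```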
The measured azimuth resolution is 1.05 samples, which is the combined effect of an IRW of 0.886 samples for a rectangular window, and a broadening factor of 1.185 due to the antenna pattern (no extra weighting was used in azimuth). Refer to Section 3.5 for a discussion of broadening caused by FM rate errors. The azimuth PSLR of −20 dB is correct and is due to the antenna pattern.
In Figure 6.13(c), it can be seen that the range phase has a slope after azimuth compression, which can be compared with Figure 6.10(a), where the range phase was flat after RCMC. As a result, the range spectrum is shifted from baseband. This phase ramp is caused by the fact that the azimuth matched filter FM rate changes with range, while all parts of a given target (including the sidelobes) have a constant azimuth FM rate. According to (3.59), the azimuth FM rate mismatch, ΔKa, introduces a phase error, and (6.19) shows that ΔKa increases linearly with range from the peak position. Therefore, a linear phase ramp is imposed across the compressed target in the range direction. This property also holds for other processing algorithms discussed in this book.
Looking at the azimuth phase in Figure 6.13(d), it can be seen that the quadratic phase present in Figure 6.10(b) has been removed by the azimuth matched filter. The residual phase is linear, due to the nonzero Doppler centroid, represented by the last exponential term in (6.18).
An example of a low squint SAR image processed to a single-look detected product is shown in Figure 6.14. The squint angle is approximately 1.6°. The data were collected on June 16, 2002 with RADARSAT-1's Fine Beam 2. The range bandwidth is 30.3 MHz, giving a resolution of approximately 6 meters slant range or 10 meters ground range. The full azimuth bandwidth is processed in one look, giving a resolution of approximately 9 meters. Other parameters of the scene are given in Appendix A at the back of the book. The image was processed by MacDonald Dettwiler with the RDA.
The scene is centered at 49.3° N, 123.3° W , and covers the city of Vancouver. The image has been resampled
to a 6.25 meter north-south grid. English Bay is in the upper left, where six large freighters are anchored. The
University of British Columbia is located on the peninsula on the left of the scene, and the airport is at the
bottom, surrounded by the Fraser River.
In the discussion of the low squint case in Section 6.3, the range equation is assumed to be parabolic with time, as shown by the approximation in (6.5). The parabolic model corresponds to a linear FM signal in the time domain, which transforms to a linear FM signal in the azimuth frequency domain (derived in Section 5.2). This one-to-one relationship between time and frequency is linear, which means that a graph of instantaneous frequency versus time is a straight line.
When the squint angle increases, the range equation should be expressed in the more accurate, hyperbolic
form (4.9). There is still a one-to-one relationship between time and frequency, but it is now slightly nonlinear
[refer to (5.41) and (5.42)]. This has two effects on the SAR processor. First, the range used in RCMC and the
azimuth matched filter has to be modified by a small amount, according to the new range equation. Second,
there is now a stronger cross coupling between range and azimuth, as derived in Section 5.3 and Appendix 5A. A
filter needs to be applied to correct the misfocusing caused by this coupling. The filtering is called secondary
range compression (SRC) and is the main topic of this section.
The modifications to the processing equations needed to handle higher squint are presented in this section. The
signal representation is first reviewed, which explains the origin of the cross coupling.
Signal Representation
Consider the range Doppler representation of the signal of a point target before range compression (5.39):

    Srd(τ, fη) = A0 Wr(τ - 2 Rrd(fη)/c) Wa(fη - fηc)
                 × exp{-j 4π Ro D(fη, Vr) f0 / c} exp{jπ Km [τ - 2 Rrd(fη)/c]²}    (6.20)

where the constants A0 to A5 of (5.39) are absorbed into A0, and the factor 1/(1 - Kr Z) in Wr(·) is
approximated by unity.
The last exponential term in (6.20) shows that the original range chirp FM rate, Kr, has been changed to
Km in the range Doppler domain. This FM rate can be expressed as

    Km = Kr / (1 - Kr/Ksrc)    (6.21)

    Ksrc(Ro, fη) = 2 Vr² f0³ D³(fη, Vr) / (c Ro fη²)    (6.22)

In (6.21), the term 1/Z of (5.40) is replaced by Ksrc to use a variable in units of FM rate. In fact, Ksrc is the
FM rate of the SRC filter.
In the range Doppler domain, Km is significantly different from Kr when the squint is appreciable. This
means that range compression using the original FM rate would result in defocusing, mainly in the range
direction. One interpretation of the origin of the defocusing is that the squint results in a very low azimuth TBP
in each range cell once range compression has been performed. This means that the range phase is altered by the
azimuth FFT, as the POSP is not accurate at low TBPs.
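To make the relationships in (6.21) and (6.22) concrete, the following sketch computes Km from Kr and Ksrc for a set of illustrative C-band parameters (the numbers are assumptions chosen for illustration, not values from the book's tables):

```python
import numpy as np

# Illustrative spaceborne C-band values (assumptions, not the book's tables).
c  = 3.0e8       # speed of light (m/s)
f0 = 5.3e9       # radar center frequency (Hz)
Vr = 7100.0      # effective radar velocity (m/s)
R0 = 850e3       # slant range of closest approach (m)
Kr = 20e12       # transmitted range chirp FM rate (Hz/s)

def D(f_eta):
    """Migration factor of (6.24): cosine of the instantaneous squint angle."""
    return np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))

def K_src(f_eta):
    """SRC filter FM rate of (6.22), evaluated at range R0."""
    return 2.0 * Vr ** 2 * f0 ** 3 * D(f_eta) ** 3 / (c * R0 * f_eta ** 2)

def K_m(f_eta):
    """Modified range FM rate in the range Doppler domain, (6.21)."""
    return Kr / (1.0 - Kr / K_src(f_eta))

# At an azimuth frequency typical of a slightly squinted geometry, Km differs
# from Kr by a small fraction that grows with squint.
print(K_m(2000.0) / Kr)
```

Since Ksrc is positive and much larger than Kr here, the ratio Km/Kr is only slightly above unity; it is the residual quadratic phase associated with this small difference that SRC must remove.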
Modification to RCMC
In the RDA, RCMC is done in the range Doppler domain. Following (5.47), the hyperbolic range equation is written as

    Rrd(fη) = Ro / D(fη, Vr)    (6.23)

where

    D(fη, Vr) = √(1 - c² fη² / (4 Vr² f0²))    (6.24)

which is a little less than unity; it is the cosine of the instantaneous squint angle corresponding to the azimuth
frequency, fη. The range migration is equal to

    Rrd(fη) - Ro = Ro [1 - D(fη, Vr)] / D(fη, Vr)    (6.25)
in the range Doppler domain. In the high squint case, the RCMC operation should use this modified expression.
By substituting (6.24) into (6.23), expanding the square root term of D(fη, Vr), and ignoring terms higher
than second order in fη, the range migration (6.25) reduces to the low squint form of (6.11).
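As a rough numerical check, the hyperbolic range migration of (6.25) can be compared against the parabolic low squint form of (6.11); the parameter values below are illustrative assumptions:

```python
import numpy as np

# Illustrative spaceborne C-band values (assumptions).
c, f0, Vr, R0 = 3.0e8, 5.3e9, 7100.0, 850e3
lam = c / f0                                # radar wavelength (m)

f_eta = np.linspace(-1000.0, 1000.0, 201)   # azimuth frequency axis (Hz)

# Hyperbolic form, (6.23)-(6.25).
D = np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))
rcm_hyp = R0 * (1.0 - D) / D

# Low squint (parabolic) approximation of (6.11).
rcm_par = lam ** 2 * R0 * f_eta ** 2 / (8.0 * Vr ** 2)

# Several meters of migration, but the two forms agree closely at low squint.
print(rcm_hyp.max(), np.abs(rcm_hyp - rcm_par).max())
```

Over this modest azimuth bandwidth the two curves differ by well under a range sample, which is why the parabolic form suffices at low squint; the gap grows as fη (and hence squint) increases.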
The phase of the azimuth matched filter of (6.16) needs to be modified to include the more accurate D(fη, Vr)
term. The modified filter is

    Haz(fη) = exp{j 4π Ro D(fη, Vr) f0 / c}    (6.26)

A window function centered on the Doppler centroid frequency is also applied. Again, if the square root term in
D(fη, Vr) is expanded and the terms higher than second order in fη are ignored, the azimuth matched filter
reduces to the low squint version given in (6.16).
Equation (6.20) shows that the range matched filter in the range Doppler domain should have an FM rate of Km
instead of the original value of Kr. If range compression has been done using an FM rate of Kr, a filter can be
applied to compensate for the difference. This filter also has a linear FM rate, given by Ksrc of (6.22). In other
words, the data can be compressed by first using a filter with an FM rate of Kr (i.e., the normal range
compression), and then compressed again using a second filter with an FM rate of Ksrc. This explains the origin of
the term "secondary range compression."
The form of the cross coupling is derived in Section 5.3.3. The cross coupling phase term is expressed as a
function of range frequency by (5.34), which can be canceled using the filter

    Hsrc(fτ) = exp{-jπ fτ² / Ksrc(Ro, fη)}    (6.27)

which is also expressed as a function of range frequency. However, note that the filter depends on the range, Ro,
and the azimuth frequency, fη, through the variable Ksrc(Ro, fη).
The range compression filter can be combined with the SRC filter, when both are expressed in the range
frequency domain. The combined filter is

    H(fτ) = exp{jπ fτ² [1/Kr - 1/Ksrc(Ro, fη)]}
          = exp{jπ fτ² / Km(Ro, fη)}    (6.29)

as the two FM rates, Kr and Ksrc, can be combined using (6.21). Note that the combined filter is also a
function of Ro and fη.
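A minimal sketch of the combined filter of (6.29), built as a phase-only multiply in the range frequency domain (all parameter values are illustrative assumptions):

```python
import numpy as np

# Illustrative spaceborne C-band values (assumptions).
c, f0, Vr, R0, Kr = 3.0e8, 5.3e9, 7100.0, 850e3, 20e12

def combined_filter(f_tau, f_eta):
    """Combined range compression + SRC filter of (6.29), in range frequency."""
    D = np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))
    Ksrc = 2.0 * Vr ** 2 * f0 ** 3 * D ** 3 / (c * R0 * f_eta ** 2)
    Km = Kr / (1.0 - Kr / Ksrc)          # combine the two FM rates via (6.21)
    return np.exp(1j * np.pi * f_tau ** 2 / Km)

f_tau = np.linspace(-15e6, 15e6, 1024)   # range frequency axis (Hz)
H = combined_filter(f_tau, f_eta=2000.0)
print(np.allclose(np.abs(H), 1.0))       # phase-only filter: unit magnitude
```

Because the filter is a pure phase function of fτ, applying it in the range frequency domain costs one complex multiply per sample; the Ro and fη dependence enters only through the scalar Km.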
If SRC is not applied, the range FM rate mismatch between the signal and the range matched filter is Kr - Km. The resulting
quadratic phase error (QPE) is

    Δφsrc = π (Kr - Km) (Tr/2)²
          = -π Kr² (Tr/2)² / [Ksrc (1 - Kr/Ksrc)]  ≈  -π Kr² (Tr/2)² / Ksrc    (6.30)

where the approximation is allowed as Ksrc >> |Kr|. This error measure is used below to determine the
requirements for SRC, including the approximations allowed.
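The QPE of (6.30) can be evaluated numerically to judge whether SRC is needed. The following sketch uses assumed, illustrative parameters and the common rule of thumb that the QPE should stay below about π/4 (see Figure 3.14):

```python
import numpy as np

# Illustrative values (assumptions): C-band spaceborne geometry, 25 us chirp.
c, f0, Vr, R0 = 3.0e8, 5.3e9, 7100.0, 850e3
Kr, Tr = 20e12, 25e-6      # range FM rate (Hz/s) and pulse duration (s)

def qpe_no_src(f_eta):
    """Approximate QPE of (6.30) incurred by skipping SRC."""
    D = np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))
    Ksrc = 2.0 * Vr ** 2 * f0 ** 3 * D ** 3 / (c * R0 * f_eta ** 2)
    return -np.pi * Kr ** 2 * (Tr / 2.0) ** 2 / Ksrc

# At an appreciably squinted azimuth frequency the QPE is well beyond pi/4,
# so SRC would be required for these (assumed) parameters.
print(abs(qpe_no_src(2000.0)) / np.pi)   # in units of pi radians
```

Since Ksrc shrinks as fη² grows, the uncompensated QPE grows rapidly with squint, which is the quantitative basis of the squint angle limits discussed later in this section.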
The SRC filter is defined by (6.22) and (6.27), where it is noted that it is a function of both Ro and fη. Ideally,
it should be applied in the range Doppler domain, so that these dependencies can be accommodated. Because the
dependence on Ro is weak, and the dependence on fη is small in some cases, a number of implementation options
can be considered, with different efficiency and accuracy properties.
Option 1: Apply SRC along with the RCMC interpolator in the range Doppler domain.
Option 2: Apply SRC as a phase multiply in the two-dimensional frequency domain. Note that in this domain,
range time and range frequency cannot both be accessed simultaneously, so a fixed Ro has to be assumed.
Range invariance regions can be used to update the Ro dependence, as needed.
Option 3: Apply SRC in the range frequency, azimuth time domain, assuming that the azimuth frequency
dependence can be ignored as well. This is the simplest implementation, since the SRC filter can be
combined with the range matched filter. It is accurate at the reference frequency, but has errors at other
frequencies.
A summary of each SRC implementation option is given in Table 6.2. The second and third columns state
whether the implementation accommodates the dependence of SRC on the Ro and fη variables. The fourth column
gives the domain in which the SRC implementation takes place (T = time or space, F = frequency). The fifth
column states whether SRC is combined with another operation (e.g., with the RCMC interpolator or with the
range compression filter) or is implemented alone (e.g., by a phase multiply). The sixth column gives the relative
accuracy and the seventh column gives the extra computing required for SRC. More details are given in the
following paragraphs.
SRC Option 1
As SRC depends on Ro and fη, the most accurate implementation of SRC is performed in the range time,
azimuth frequency domain. In this domain, it can be incorporated into the RCMC interpolator [11]. Since both
the SRC filter and the RCMC interpolator are convolution filters operating along range time, the two filters can
be combined. The efficiency is lower than the other options, because the combined filter is longer than the
RCMC interpolator, and because its coefficients have to be updated quite frequently in range and in azimuth.
SRC Option 2
Option 2 is represented by the middle column of Figure 6.1. Here, SRC is implemented in the two-dimensional
frequency domain. In this domain, a fixed value of Ro is needed, and is usually taken to be the midrange value.
The question is how wide the range swath can be before the SRC error gets too large. Where ΔRo is the offset
of the target from the reference range, Ro, the QPE is

    Δφsrc,R = π [Km(Ro, fη) - Km(Ro + ΔRo, fη)] (Tr/2)²    (6.31)

where Km is written as a function of both Ro and fη. The maximum value of ΔRo is one-half the swath width.
If the range swath is too wide, then it can be divided into contiguous segments and processed in range
invariance regions. To determine if this is needed, Δφsrc,R can be evaluated to see if it meets the requirements of
the application (see Figure 3.14). Often, Ro can be kept constant over the whole swath, as the dependence of
SRC on Ro is usually weak.
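A sketch of how (6.31) can be used to size range invariance regions; all parameter values are illustrative assumptions, and `qpe_range_offset` is a hypothetical helper name:

```python
import numpy as np

# Illustrative spaceborne C-band values (assumptions).
c, f0, Vr = 3.0e8, 5.3e9, 7100.0
Kr, Tr = 20e12, 25e-6      # range FM rate (Hz/s), pulse duration (s)

def K_m(R0, f_eta):
    """Km of (6.21)-(6.22), as a function of the range of closest approach."""
    D = np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))
    Ksrc = 2.0 * Vr ** 2 * f0 ** 3 * D ** 3 / (c * R0 * f_eta ** 2)
    return Kr / (1.0 - Kr / Ksrc)

def qpe_range_offset(R0, dR0, f_eta):
    """QPE of (6.31) from using the reference range R0 instead of R0 + dR0."""
    return np.pi * (K_m(R0, f_eta) - K_m(R0 + dR0, f_eta)) * (Tr / 2.0) ** 2

# Check a 50 km swath (+/- 25 km about the reference range) at a squinted
# azimuth frequency; if this exceeded the application's QPE budget, the swath
# would be split into range invariance regions.
print(abs(qpe_range_offset(850e3, 25e3, 2000.0)) / np.pi)   # in units of pi
```

For these assumed parameters the QPE stays comfortably below π/4 across the half swath, illustrating why the Ro dependence of SRC can often be ignored over the whole swath.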
Another, lesser effect is that the effective radar velocity varies with range. The QPE due to an error, ΔVr, in
this velocity is

    Δφsrc,V = π [Km(Vr) - Km(Vr + ΔVr)] (Tr/2)²    (6.32)
In Option 2, the normal, low squint range matched filter with FM rate, Kr, is first applied in the top block
of Figure 6.1(b). At this stage, the quadratic phase component of the received pulse is properly canceled.
However, when the data are transformed to the azimuth frequency domain in the second block, the range data
take on a small quadratic phase component, which would appear as defocusing if a range IFFT were done.
In the third block, the SRC filter is applied, which again cancels the quadratic phase in range.
As a minor modification to Option 2, the range compression phase multiply can be postponed until after the
azimuth FFT, at which time the combined range compression and SRC filter is applied using the FM rate, Km.
Sometimes, Option 2 is modified by completing the range compression operation with an IFFT in the first
block. This is done in airborne applications where autofocus algorithms need range compressed data in the
azimuth time domain. Note that this method is not efficient, since an extra set of FFTs and IFFTs is needed
after range compression. This implementation of SRC has the interesting feature of illustrating how the data are
originally focused by the range compression, defocused by the azimuth FFT, then refocused by the SRC
operation.
Note that SRC may be required when the aperture is wide, even if the radar beam points to zero Doppler.
This is because the target passes through zero Doppler at only one instant in time, and at other times it is not
at zero Doppler. Thus, the received signal contains a range of azimuth frequencies, over which the SRC
requirements can become significant.
SRC Option 3
Another approximation was made in the early versions of the RDA, and is still being used to process spaceborne
SAR data. This option is depicted in Figure 6.1(c), where the SRC step is implemented by combining it with the
range compression filter, using (6.29). Besides a fixed Ro, the main approximation is that a fixed Doppler centroid
frequency, fηc, is used instead of the variable, fη, in (6.22), thus making Ksrc and Km constants for all azimuth
frequencies.4 Owing to the difference between Km and Kr, the combined range compression filter (6.29) will
introduce IRW broadening in the range compressed data in the time domain, but the azimuth FFT will remove
most of it.
Using this approximation, applying SRC requires no more computing than the low squint algorithm, since
the combined range/SRC matched filter needs to be calculated only once, and no further SRC-specific operations
are needed. To use this approximation, the following phase error criterion should be satisfied:

    |Δφsrc,f| = |π [Km(Ro, fη) - Km(Ro, fηc)] (Tr/2)²|  ≤  π/2    (6.33)

This criterion allows a range IRW broadening of up to 8% at the ends of the processed aperture, where |fη - fηc|
is greatest. The broadening reduces to zero at fη = fηc. When the aperture weighting is considered, the overall
broadening is less than 4% after azimuth compression.
In addition to the range broadening, a phase error also occurs, which leads to a smaller azimuth broadening.
It can be shown from (6.21) and (6.22) that Km(Ro, fη) is approximately a quadratic function of fη. Then, from
(6.33), the QPE varies along the azimuth exposure, also in a quadratic manner. This QPE introduces a phase
error of Δφsrc,f/3, as shown in Appendix 3B. That means that a QPE is introduced into the azimuth direction,
resulting in an azimuth broadening. As an example, let the QPE, Δφsrc,f, be π/2 at the ends of the exposure time,
introducing a phase error of π/6 at these points. This results in an azimuth broadening of less than 1%, as seen
from Figure 3.14, and a phase error of π/18 in the azimuth compressed pulse.
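The criterion of (6.33) can be checked by scanning Km over the processed azimuth band; all numbers below are illustrative assumptions:

```python
import numpy as np

# Illustrative spaceborne C-band values (assumptions).
c, f0, Vr, R0 = 3.0e8, 5.3e9, 7100.0, 850e3
Kr, Tr = 20e12, 10e-6       # range FM rate (Hz/s), pulse duration (s)

def K_m(f_eta):
    """Km of (6.21)-(6.22) at the fixed reference range R0."""
    D = np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))
    Ksrc = 2.0 * Vr ** 2 * f0 ** 3 * D ** 3 / (c * R0 * f_eta ** 2)
    return Kr / (1.0 - Kr / Ksrc)

f_etac = 2000.0                                    # assumed Doppler centroid (Hz)
f_eta = f_etac + np.linspace(-400.0, 400.0, 161)   # processed azimuth band (Hz)

# QPE of (6.33): mismatch from fixing Km at the Doppler centroid (Option 3).
qpe = np.pi * (K_m(f_eta) - K_m(f_etac)) * (Tr / 2.0) ** 2

ok = np.max(np.abs(qpe)) <= np.pi / 2              # criterion (6.33)
print(ok)
```

The QPE is zero at fη = fηc and grows toward the band edges; when the maximum stays within π/2, the simple Option 3 implementation can be used for these (assumed) parameters.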
In Figure 6.1 and Section 6.4.2, different options of SRC implementation are presented. This section discusses
when SRC is needed and, if needed, which option should be used. Only Options 2 and 3 are considered; Option 1 is
the most accurate, but its extra accuracy is usually not needed, and its implementation is more complex.
Figure 6.15 gives the percentage range IRW broadening versus squint angle for typical airborne and satellite
cases in different frequency bands. The typical radar and geometry parameters in Table 4.1 are used, rather than
the simulation parameters of Table 6.1. Additional sensor frequencies have been added to those in Table 4.1 (with
the same physical parameters), and a wider range of squint angles has been used.
Airborne Cases
Figure 6.15(a,b) shows two demanding airborne radar cases in which the achievable azimuth resolution is 0.5 m.
The squint angle is varied up to 25°. For the X-band case, Figure 6.15(a) shows that SRC can be ignored for
squint angles less than 4°, if the allowable range IRW broadening is 5%. Beyond 4° squint, the approximate SRC
of Option 3 can be used. If the squint goes beyond 16°, the accurate SRC implementation of Option 2 is needed.
For the C-band airborne case, Figure 6.15(b) shows that the corresponding squint angle limits are 2° and 6°,
respectively. These limits are tighter than the X-band case, because of the longer aperture. For L-band airborne
radars with Table 4.1 parameters, the accurate SRC is needed for all squint angles, even zero squint, because of
the very long exposure time (the L-band case is not shown in the figure).
Satellite Cases
Figure 6.15(c,d) shows two typical satellite radar cases, in which the achievable resolution is 5 m. The squint
angle is varied up to 10°. For C-band satellites, Figure 6.15(c) shows that SRC can be ignored for squint angles
less than 3.5°, for IRW broadening less than 5%. Beyond 3.5°, a version of SRC should be used. For C-band
satellite SAR parameters, the dependence of SRC on fη is weak, so the approximation of Option 3 is adequate for
squint angles normally experienced in practice.
For L-band satellites, Figure 6.15(d) shows that SRC can be ignored for squint angles less than 1°. The
approximate SRC of Option 3 can be used for squint angles between 1° and 3°. Beyond 3° squint, the extra
accuracy of Option 2 is needed. For X-band satellites with Table 4.1 parameters, SRC is only needed for squint
angles above 5° (the X-band case is not shown in the figure).
Note that when the accurate Option 2 form of SRC is used, the IRW broadening is negligible for all the
airborne and satellite cases shown in the graphs, so Option 1 is not needed, except in extreme cases.
[Figure 6.15 plots: (a) Airborne case, X-band; (b) Airborne case, C-band; (c) Satellite case, C-band; (d) Satellite case, L-band. Each panel plots range IRW broadening (%) versus squint angle (degrees), with regions labeled "No SRC" and "Approx SRC".]
Figure 6.15: Range IRW broadening versus squint angle for the cases of no
SRC and the approximate SRC of Option 3.
Figure 6.1 shows the block diagrams of the two different SRC implementations of the RDA. Figure 6.1(b) shows
the more accurate Option 2 implementation of the SRC in the azimuth frequency domain, and Figure 6.1(c)
shows the more efficient Option 3 implementation, in which the fη dependence is ignored. The Ro dependence is
ignored in both cases.
Again, the three point targets in Figure 6.2 are used to illustrate the effects of each of the two SRC
implementations, and the C-band airborne radar and geometry parameters in Table 6.1 were used in the
simulations. In this section, a higher squint angle of 21.9° is used, and Target C is moved down so that it is still
in the center of the beam at the same time as Target B.
Figure 6.16 shows the raw data. Even though the range gate window remains fixed in time, the data appear
squinted in the figure because only the energy from the three targets is shown. The resulting skew of the raw
data is quite noticeable compared to the low squint case of Figure 6.3.
In the low squint simulation experiment of Section 6.3, the QPE for ignoring SRC is only 0.06π, but now it
is 2.3π, which introduces a significant amount of range IRW broadening if SRC is not used. The applications of
SRC with Options 2 and 3 are illustrated next.
Figure 6.17 shows the results of the more accurate Option 2 implementation. A fixed Ro is used, set to the range
of Targets A and B. Figure 6.17(a) shows that the data are correctly range compressed using a range matched
filter FM rate, Kr.5
Then, an azimuth FFT is performed on the data. It is seen that a significant broadening has occurred in
Figure 6.17(b), unlike the low squint case of Figure 6.5.6 This broadening is a result of the limited exposure of
the target in each range cell, but it can be viewed as an error in range compression, as the broadening takes the
form of a linear FM signal in range. This linear FM form is explained in Section 5.3.
[Figure 6.16 panels: (a) Real part; (b) Imaginary part. Axes: range and azimuth in samples.]
Figure 6.16: Simulated raw data from three targets, with a larger squint angle of 21.9°.
Next, SRC is applied in the two-dimensional frequency domain. It can be seen in Figure 6.17(c) that the
broadening introduced by the azimuth FFT has been corrected by the SRC. This correction can be viewed as a
recompression of the data, hence the name "secondary range compression." Finally, Figure 6.17(d) shows that the
targets are compressed to their correct zero Doppler positions (see Figure 6.2) by the subsequent RCMC and
azimuth compression operations.
[Figure 6.17 panels, including: (c) Data after SRC; (d) Data after RCMC and azimuth compression. Axes: range (samples) versus azimuth (samples).]
To observe the compressed target properties more clearly, Target C is simulated on its own. Figure 6.18(a)
shows the spectrum of a 96 × 96 chip centered on the target. The spectrum is rotated by the negative of the
squint angle. Figure 6.18(b) shows the compressed target. An expanded radiometric scale is used to portray the
extended sidelobes more clearly. The figure reveals the existence of two sets of azimuth sidelobes. One set of
sidelobes runs vertically, parallel to the azimuth axis, and the second set is skewed by the squint angle.
[Figure 6.18 panels, including: (d) Contours of magnitude. Axes: range frequency (samples) and range (samples).]
The target is then zoomed for analysis. Figure 6.18(c) shows the magnitude spectrum of a 16 × 16 chip
centered on the target. Figure 6.18(d) shows the interpolated contours of target energy, obtained by zero padding
the spectrum in the energy gaps and inverse transforming. If the ISLR is to be measured in one dimension, the
target should be rotated and skewed so that the major sidelobes are aligned with the horizontal and vertical
axes. After the rotation, the measured IRWs and PSLRs agree with the theoretical values, indicating high quality
compression.
The two sets of azimuth sidelobes in Figure 6.18(b) can be explained as follows. The more prominent
sidelobes that are skewed from the vertical axis are caused by the vertical skew in the spectrum of Figure
6.18(a) (recall Figure 2.2(c, d)). The vertical sidelobes are caused by the azimuth frequency discontinuity that
must be assumed in the RCMC and azimuth compression operations. Because these processing steps are applied
in the range Doppler domain, the azimuth frequency discontinuity occurs at a fixed azimuth frequency sample in
the two-dimensional frequency domain, shown at Sample 26 of the azimuth frequency axis of Figure 6.18(a). This
discontinuity should theoretically vary with range frequency, fτ, since the Doppler centroid frequency, fηc, in this
domain is 2 (f0 + fτ) Vr sin θr,c / c. Because of this Doppler centroid frequency variation, the spectrum is skewed in
azimuth frequency.
In a simple sense, the spectrum can be viewed as a superposition of two parts, as shown in Figure 6.19: a
skewed part caused by the spectrum skew [Figure 6.19(a)] and an orthogonal part caused by the fixed azimuth
frequency sample where the azimuth frequency discontinuity occurs [Figure 6.19(b)]. These parts give the two
azimuth sidelobe axes. In two-dimensional frequency domain approaches, such as the WKA discussed in Chapter 8,
the azimuth frequency discontinuity is able to vary with range frequency. In these approaches, only the skewed
axis is present, so the small vertical sidelobes can be considered as a minor artifact in the RDA, as well as in
the CSA.
[Figure 6.19 sketches: (a) Skewed spectrum; (b) Orthogonal spectrum. Labels: target energy, zero intensity, and the assumed azimuth frequency discontinuity.]
Figure 6.20 shows the results of the approximate implementation of SRC, where the azimuth frequency
dependence of SRC is ignored. In this case, the SRC is applied in the range compression operation, by modifying
the range FM rate from Kr to Km, according to (6.21).
Figure 6.20(a) shows that the target is not properly compressed in the azimuth time domain when using the
modified FM rate. However, after the azimuth FFT, the target is properly range compressed, as shown in Figure
6.20(b). It can be considered that the modification of the range FM rate from Kr to Km imparts a
"predistortion" to the range pulse, so that it is properly compressed after the azimuth FFT.
Figure 6.20(c) shows the data after RCMC, and Figure 6.20(d) the data after azimuth compression. With
these parameters, the final result does not differ significantly from the case where the more accurate form of SRC
is used. The approximate SRC can be used here because the quadratic phase error, Δφsrc,f in (6.33), is found to
be 0.11π radians at the maximum azimuth frequency, fη. This gives a negligible range broadening at the extreme
azimuth frequency bins, and less broadening at other bins. After azimuth compression, the range broadening is
therefore also negligible.
[Figure 6.20 image panels. Axes: range (samples) and azimuth (samples).]
An L-band airborne radar dataset is selected to illustrate the RDA with SRC Option 2. The data were acquired by
a multimode, dual-band SAR system built for a remote sensing/surveillance application by MacDonald Dettwiler in
2001. It has an azimuth beamwidth of 13°, and is fully polarimetric.
The 1000-pixel, 10-km wide, L-band, VV polarization image in Figure 6.21 was produced by a real time
processor. The primary processing is 4-look, with a resolution of 3 m, using the RDA with SRC Option 2, but the
pixels have been averaged 4 × 4 for display purposes. The processor can also produce images with 8-look/6 m
resolution and 16-look/18 m resolution.
Figure 6.21: An L-band airborne image processed with the RDA, Option 2.
(Courtesy of MacDonald Dettwiler.)
This section discusses a processing method to reduce the speckle noise that is an inherent and characteristic
feature of radar images. The method relies on processing more than one "look," and is hence called multilook
processing. This mode of processing is not affected as much by the squint angle, because the focusing
requirements are not as stringent as in the one-look case.
Note that even though multilook processing is illustrated with the RDA in this chapter, it can be applied to
other algorithms as well.
Speckle is a multiplicative noise that arises in SAR because the relative phase of individual scatterers within a
resolution cell is strongly dependent upon the radar viewing angle. This imparts a randomness to the coherent
sum of the scatterers, so that the magnitude of each pixel obeys a Rayleigh distribution [12]. A statistical model
of speckle noise is presented in [13, 14]. One of the original and most detailed descriptions of coherent speckle is
given in [15].
The traditional method of reducing speckle noise is by processing images taken from different parts of the
signal spectrum, and averaging the detected results [16]. This is called multilooking, and is discussed below. A
number of more advanced speckle reduction algorithms are available, such as the Lee and Frost filters [17, 18],
wavelet-based filters [19, 20], and simulated annealing [13]. A review is given in [21]. These methods fall under the
category of postprocessing, that is, processing after a well focused image is generated, and are not discussed in
this book.
In most SAR systems, the inherent azimuth resolution is finer than the range resolution, so looks are usually
extracted in the azimuth direction, although they can be taken in both directions. For illustration, azimuth looks
are assumed in this section. In this case, the azimuth spectrum is divided into separate parts using bandpass
filters. Because of the linear FM correspondence between frequency and time, this operation corresponds to
dividing the azimuth beamwidth into sectors. As each sector corresponds to a different beam look direction, the
data corresponding to each sector are referred to as "looks," which explains the origin of the term "multilook
processing."
Looks are extracted from the azimuth spectrum using fixed bandwidth bandpass filters, with the filter
bandwidth, FL, selected according to the desired azimuth resolution. From (4.45), the azimuth resolution, ρa, is
related to FL by

    ρa = ( 0.886 Vg cos θr,c / FL ) γw,a    (6.34)

where γw,a is the IRW broadening factor created by the combined effect of the tapering window and the antenna
beam pattern. The bandpass filters are also called "look extraction filters."
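Rearranging (6.34) gives the look extraction filter bandwidth needed for a desired per-look resolution; the values below are illustrative assumptions:

```python
import numpy as np

# Assumed, illustrative values.
Vg = 6800.0                  # beam footprint velocity on the ground (m/s)
theta_rc = np.deg2rad(2.0)   # beam center squint angle
gamma_wa = 1.2               # IRW broadening factor of window and beam pattern
rho_a = 9.0                  # desired per-look azimuth resolution (m)

# (6.34) rearranged for the look extraction filter bandwidth FL.
FL = 0.886 * gamma_wa * Vg * np.cos(theta_rc) / rho_a
print(FL)   # look bandwidth in Hz
```

With these assumed numbers, each look occupies roughly 800 Hz of the azimuth spectrum; the number of looks that fit is then set by the processed Doppler bandwidth and the chosen look overlap.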
The bandpass filters are positioned symmetrically about the Doppler centroid, as shown in Figure 6.22, with
a small look overlap. For clarity, the figure shows the spectrum rotated so that the Doppler centroid is at the
middle of the array. In practice, looks can wrap around from one end of the array to the other, if they span a
PRF boundary. The center frequency of Look k is denoted by Fk,cen.
[Figure 6.22 sketch: azimuth frequency axis, showing the look bandwidth of each look extraction filter, the processed Doppler bandwidth, and the PRF.]
Figure 6.22: Location of the look extraction filters in the azimuth frequency
domain (three-look case).
(6.35)
and from (5.44), the look center time, Tk,cen, corresponding to the look center frequency, Fk,cen, is

    Tk,cen = Fk,cen / Ka,dop    (6.36)
Figure 6.23 illustrates a typical relationship between looks in the time domain and looks in the frequency
domain. Three looks are used in the example, and the figure shows the limits of each look in the two domains,
assuming no look overlap. In the time domain of Figure 6.23(a), the look center times and durations both vary
with slant range, in accordance with (6.35) and (6.36). In the frequency domain of Figure 6.23(b), the looks
occupy a constant bandwidth, and the look center frequencies remain parallel to the beam center frequency,
which is approximately a linear function of range. Since fη is directly related to squint angle, each look can be
viewed as acquiring data from a different squint angle.
[Figure 6.23 sketches: (a) Looks in the time domain; (b) Looks in the frequency domain. Labels: Looks 1 to 3 and the illuminated Doppler spectra, plotted versus range.]
Multilook azimuth compression differs from single-look processing in the look extraction, detection, and summation
operations, as shown in the block diagram of Figure 6.24. The matched filter is first applied to the whole spectrum.
It is a "phase only" filter, which removes the quadratic phase from the whole spectrum. Then, the
"magnitude only" filters of Figure 6.22 are applied for each look, to extract the desired parts of the spectrum.
The magnitude function includes weighting for sidelobe control, and the looks are usually overlapped by 10% to
20%.
Inverse FFTs are then used to complete the compression of each look. Only the properly compressed points
in the IFFT output array are kept for the look summation operation. The data in each look are automatically
registered if the matched filter FM rate is correct, because of the common phase function used for the whole
spectrum in the matched filter phase multiply. After the IFFT, the partial correlations are removed from the
output array. The location of these throwaway regions is different for each look (see Section 6.5.4).
At this point, the data of each look are still complex. The data are then detected to remove the phase
information, an essential step in the speckle removal process.
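The processing chain of Figure 6.24 can be sketched on a simulated azimuth chirp: a phase-only matched filter over the whole spectrum, magnitude-only look extraction, per-look IFFT and detection, then power-domain summation as in (6.38). The chirp and look parameters are illustrative assumptions, with a zero Doppler centroid and no look overlap:

```python
import numpy as np

N, prf, Ka = 1024, 1000.0, 400.0          # samples, PRF (Hz), azimuth FM rate (Hz/s)
t = (np.arange(N) - N // 2) / prf         # azimuth time axis (s)
sig = np.exp(1j * np.pi * Ka * t**2) * (np.abs(t) < 0.4)   # azimuth chirp, TBP = 256

f = np.fft.fftfreq(N, 1.0 / prf)          # azimuth frequency axis (Hz)
S = np.fft.fft(sig)
S = S * np.exp(1j * np.pi * f**2 / Ka)    # phase-only matched filter on whole spectrum

n_looks, FL = 3, 100.0                    # three looks of 100 Hz each, no overlap
looks = []
for k in range(n_looks):
    center = (k - (n_looks - 1) / 2) * FL # look centers: -100, 0, +100 Hz
    mask = np.abs(f - center) <= FL / 2   # magnitude-only look extraction filter
    looks.append(np.abs(np.fft.ifft(S * mask))**2)   # per-look IFFT and detection

sls = np.sqrt(np.sum(looks, axis=0))      # incoherent (power domain) look summation
print(int(np.argmax(sls)))                # target compresses near sample N // 2 (t = 0)
```

Because the quadratic phase is removed by one common phase multiply before look extraction, the three looks are automatically registered and their detected peaks line up at the same azimuth sample.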
[Figure 6.24 block diagram: range Doppler domain data pass through a matched filter phase multiply, then look extraction, IFFT, and detection for each of Looks 1 to Nlooks, and finally look summation to produce the summed data.]
There are two ways to do the detection, which affects the statistics of the look-summed data. Let s(k, n) be
the complex data at Sample n in Look k, Nlooks be the total number of looks, and sls(n) be the look-summed
data. One method is to sum the magnitudes of the pixel values of each look

    sls(n) = Σ (k = 1 to Nlooks) | s(k, n) |    (6.37)

Another way is to sum the magnitude squared of the looks, and then take a square root

    sls(n) = √[ Σ (k = 1 to Nlooks) | s(k, n) |² ]    (6.38)
Both methods leave the data in the magnitude (voltage) domain, where the distribution of the pixel values gives
a better grayscale for visual image interpretation than a power domain representation.
The second method has some advantages, and this method is assumed from here on. First, the number of
square roots is reduced. Second, a parameter to evaluate the number of independent looks can be computed
readily from the look-summed data, as it is only easy to evaluate statistical properties of random variables that
are summed in the power domain. Third, more emphasis is given to the middle looks, where the azimuth beam
pattern is stronger and the SNR is higher. Fourth, the radar cross section of a target is best estimated in the
power domain [13].
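The two detection choices of (6.37) and (6.38) can be compared directly on simulated complex look data (an illustrative sketch, with unit-variance Gaussian speckle):

```python
import numpy as np

rng = np.random.default_rng(0)
n_looks, n = 4, 10000
# Complex Gaussian look data (pure speckle, unit variance per component).
s = rng.standard_normal((n_looks, n)) + 1j * rng.standard_normal((n_looks, n))

sls_mag = np.sum(np.abs(s), axis=0)               # (6.37): sum of magnitudes
sls_pow = np.sqrt(np.sum(np.abs(s)**2, axis=0))   # (6.38): root of summed powers

# Both are voltage-domain images; (6.38) needs one square root per pixel
# instead of one per look per pixel, and its power-domain sum has simple
# chi-squared statistics.
print(sls_mag.shape, sls_pow.shape)
```

Note that the magnitude sum of (6.37) is always at least as large as the root-of-power sum of (6.38) (the L1 norm dominates the L2 norm over the looks), so the two detectors differ in scale as well as in statistics.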
Look Summation
Next, the looks are added together incoherently to lower the speckle noise.7 Before adding, the looks must be
registered, as shown in the top panel of Figure 6.25. The registration of each look is staggered, according to the
different center frequencies in (6.36).
[Figure: (a) two azimuth blocks, with the processed looks of the first and second blocks staggered in azimuth time.]
Figure 6.25: Stitching and summing of individual looks from two azimuth blocks.
(6.39)
where N_fft and N_ifft are the lengths of the forward and inverse FFTs. Note that N_ifft does not necessarily correspond to the look bandwidth, as the IFFT may be zero padded to a convenient length.
The effectiveness of look summation in reducing speckle noise depends on how independent the looks are from
each other. A quantitative measure of the number of independent looks is the equivalent number of looks (ENL).
When the looks are summed in the power domain, but the square root in (6.38) is not taken, the ENL is defined
by
    ENL = ( μ_{N_looks} / σ_{N_looks} )²        (6.40)

where μ_{N_looks} is the mean and σ_{N_looks} is the standard deviation of the look-summed data. In this way, ENL is defined by the probability density function of the look-summed data. The result is derived in the following manner, assuming power law detection.
Let x_real(k, n) and x_imag(k, n) be the real part and imaginary part of Sample n in Look k. Then the look-summed data is

    z(n) = Σ_{k=1}^{N_looks} [ x_real²(k, n) + x_imag²(k, n) ]        (6.41)
Assuming that x_real(k, n) and x_imag(k, n) are each of Gaussian distribution, with zero mean and unit standard deviation, and are independent from look to look (i.e., no look overlap), then z(n) has a chi-squared distribution with 2N_looks degrees of freedom [the factor 2 accounts for both the real and imaginary parts in (6.41)]. Its mean and standard deviation are [22]

    μ_{N_looks} = 2 N_looks    and    σ_{N_looks} = 2 sqrt(N_looks)        (6.42)
In this case of independent looks, the ENL is equal to N_looks. This is an expected result, since the variance of
the sum of N independent Gaussian variables, having the same mean and variance, is reduced by a factor of N.
The purpose of multilook processing is to reduce speckle noise. Compared to one-look processing, the noise
reduction factor is given by the ratio
    ( σ_{N_looks} / μ_{N_looks} ) / ( σ_1 / μ_1 )        (6.43)
which is the inverse of the square root of the ENL. The denominators, μ_{N_looks} and μ_1, can be viewed as normalization constants, so that the two variances can then be compared on the same scale.
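The ENL definition (6.40) can be checked numerically. The sketch below (an illustrative simulation, not from the book) draws independent unit-variance complex Gaussian looks, sums them in the power domain as in (6.41), and recovers ENL ≈ N_looks:

```python
import numpy as np

rng = np.random.default_rng(0)
n_looks, n_samples = 4, 200_000

# Independent complex Gaussian looks: zero mean, unit-variance real/imag parts
x = rng.normal(size=(n_looks, n_samples)) + 1j * rng.normal(size=(n_looks, n_samples))

# Power-domain look sum (6.41): chi-squared with 2*n_looks degrees of freedom,
# so the mean is 2*n_looks and the standard deviation is 2*sqrt(n_looks)
z = np.sum(np.abs(x) ** 2, axis=0)

enl = (z.mean() / z.std()) ** 2   # (6.40): close to n_looks = 4
```

With the square root of (6.38) applied before the statistics are taken, the simple closed-form relation to N_looks no longer holds, which is why the ENL is measured on the power-domain sum.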
The ENL is reduced when the looks overlap, although the effective overlap is reduced by the weighting of
the look extraction filters. This effect can be seen in Figure 6.26, where a four-look case is shown. If there is no
look overlap, the equivalent number of looks is four, which gives the maximum noise reduction. However, when
the looks are overlapped, the ENL reduces because the looks are no longer independent. In the limit of 100%
overlap, the ENL = 1, as Looks 2, 3, and 4 do not add any further information. These graphs do not include the
effect of antenna weighting, which reduces the ENL if the look weighting is not adjusted.
[Figure 6.26: Equivalent number of looks versus percentage look overlap (0 to 100%) for the four-look case, with and without a Kaiser window.]
When weighting is used in the look extraction filters, the ENL is larger for a given overlap, because the overlapping (correlated) portions of the looks are de-emphasized. For example, a weighting of β = 2, with 20% overlap, gives almost the same noise reduction as the no-overlap case, and is a sensible value to use. In a sense, look overlap has the effect of recovering information lost at the look edges caused by weighting.
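The dependence of ENL on look overlap can be reproduced with a small simulation (a simplified sketch with unweighted, rectangular look windows; the parameters are illustrative, not from the book). Looks are cut from a white spectrum, so overlapped looks share frequency bins and are therefore correlated:

```python
import numpy as np

rng = np.random.default_rng(1)

def enl_for_overlap(overlap, n_looks=4, look_len=256, trials=2000):
    """ENL of n_looks looks extracted from a white spectrum with the
    given fractional overlap between adjacent looks (no window applied)."""
    step = max(1, int(look_len * (1 - overlap)))
    z = []
    for _ in range(trials):
        spec_len = step * (n_looks - 1) + look_len
        spec = rng.normal(size=spec_len) + 1j * rng.normal(size=spec_len)
        looks = [np.fft.ifft(spec[k * step : k * step + look_len])
                 for k in range(n_looks)]
        z.append(np.sum(np.abs(np.array(looks)) ** 2, axis=0))
    z = np.concatenate(z)
    return (z.mean() / z.std()) ** 2

enl_0 = enl_for_overlap(0.0)    # independent looks: close to 4
enl_50 = enl_for_overlap(0.5)   # 50% overlap: noticeably lower
```

With no overlap the looks come from disjoint bands of the white spectrum and the ENL approaches four; at 50% overlap adjacent looks are correlated and the ENL drops, qualitatively matching Figure 6.26.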
Three point targets in the same range cell are simulated to illustrate the compression and registration of data in
multilook processing. First, the scene is set by reviewing the compression, registration, and throwaway region in
the single-look processing case.
Figure 6.27 shows the results of one-look processing of the three targets. The demodulated signal data is placed
in a 512-point processing array, as shown in Figure 6.27(a). The data are simulated with a zero Doppler centroid
and a rectangular envelope, for simplicity.
[Figure 6.27(a): Real part of the raw data in the 512-sample azimuth array, showing the throwaway (TA) regions at either end and the good points between them; horizontal axis: azimuth time (samples), 0 to 500.]
Target E is fully exposed and is centered in the input array. Target D is not completely captured at the
start of the input array, and similarly Target F at the end. The captured signal bandwidths of Targets D and F
are 80% of the bandwidth of Target E.
The matched filter is generated using Option 3 of Section 3.4. It is applied with a frequency domain phase
multiply, and then an IFFT is performed to obtain the compressed output array shown in Figure 6.27(b). Before
any data is thrown away, the targets are registered to the same place in the output array as their zero Doppler
position in the input array.
Target E is properly compressed to the midpoint of the output array. The two edge targets, D and F, are
also compressed, but not as well as E, owing to the fact that the input array does not contain their whole
exposure. It can be observed that their amplitudes are only 80% of that of Target E, and that their IRWs are
wider, because of a reduction of signal bandwidth.
As the Doppler centroid is zero in this case, the throwaway region, (TA), is evenly split between the two
ends of the output array. A nonzero Doppler centroid affects only the location of the throwaway regions, as
explained in Section 3.4.8
The two partially exposed targets, D and F , are discarded in the throwaway regions. As the processing
arrays are four times longer than the matched filter length, the good points of the circular convolution consist of
three-quarters of the output array, as indicated by the arrows in Figure 6.27(b).
Multilook processing is illustrated with the same three targets. The spectrum of the signal data is shown in
Figure 6.28(a), which has a zero center frequency. The ragged appearance of the magnitude spectrum is caused
by the interference between the three targets.
target (Target D) appears in Look 3, rather than Look 1, because the looks are in reverse order in the time domain.
Figure 6.29(g) shows the look-summed data, after the three looks of Figure 6.29(b, d, f) are added
incoherently. Target E is fully look-summed, but Targets D and F are not. Fully look-summed data for these two
end targets are available only by processing adjacent azimuth blocks (recall Figure 6.25). When the next azimuth
block is processed, another look of Target F is generated, and the summed data will portray Target F at full
strength.
To examine the registration rules of the individual looks, it is instructive to look at the equivalent time domain
matched filter for each look. These are shown in Figure 6.30, where it has been assumed that the replica is taken
from Target E , as in the single-look case. Each time domain filter is obtained by taking the IFFT of the
frequency domain matched filter, when it is delimited by the appropriate look window. Because Look 1 has the
lowest frequency, its matched filter comes from the end of the signal array. The window effect can also be
observed in these time domain filters, which roll off from the center towards the edges.
Comparing the Look 1 filter to Target E in the top panel, it is seen that the filter is matched to the right
one-third of the signal, which also means that the right part of Target D (in Figure 6.27) can be focused by this
filter. Similarly, the Look 2 filter is matched to the middle, and the Look 3 filter is matched to the left one-third
of Target E. The corresponding parts of Target F are also matched.
In multilook processing, the throwaway region is different from single-look processing. First, the amount of
throwaway is reduced because a shorter matched filter is used in the multilook case. Second, the location of the
throwaway region is different for each look. Assuming that there is an odd number of symmetrically located looks,
the location of the center look's throwaway is still the same as described in Section 3.4 for the single-look case.
That is, it is one-half the length of the look's time domain matched filter, located at either end of the IFFT
output array for compression to the Doppler centroid. For the other looks, the throwaway location is offset from
that of the center look by:
    Δτ_k = ( f_k − f_ηc ) / K_a        (6.44)

where f_k is the center frequency of Look k from (6.36), f_ηc is the Doppler centroid, and K_a is the azimuth FM rate.
This time increment represents the "delay" in the compressed data corresponding to this look.
The amount of azimuth broadening due to QPE and look misregistration is summarized in Table 6.3, using
the typical airborne and satellite parameters given in Table 4.1. The azimuth resolution includes an 18% IRW
broadening due to a processing window. The broadening due to look misregistration dominates over that due to
QPE, because the data comes from more widely separated parts of the Doppler spectrum. This fact is useful for
autofocus algorithms (see Section 13.4.2).
Table 6.3: Broadening Due to Azimuth FM Rate Error in the Multilook Case
A RADARSAT-1 high resolution (Fine Beam 2) scene of Vancouver, Canada, is selected to illustrate the multilook processing with the RDA (see Figure 6.32). The data were acquired on June 16, 2002, and were processed to a four-look, detected format by MacDonald Dettwiler, using the RDA with SRC Option 3. The
image has been resampled to a 12.5-m north-south grid, with a resolution of approximately 20 m.
It is the same scene that is shown in Figure 6.14. This version is approximately 24 km wide, about one-half
the full swath width. The scene covers 29.5 km in the vertical direction, from North Vancouver to the town of
Ladner in the south. More details of the scene can be found in Chapter 12 and Appendix A.
Figure 6.32: Four-look version of the Vancouver scene, processed by the RDA
with SRC Option 3. (Copyright Canadian Space Agency, 2002.)
6.6 Summary
This chapter introduces the RDA and explains its major steps in detail, including range compression, RCMC,
azimuth compression, and multilook processing. Mathematical expressions are given for the processing steps and for
the signals at critical points in the processing. Simulations are used to verify the equations and to illustrate the
signal structures, including the case where the beam has a moderate squint angle.
The role of SRC in squint or wide-aperture data processing is highlighted, and various ways of implementing
it are discussed. The importance of good interpolator design in RCMC is shown, as well as the significance of
processing the correct Doppler ambiguity. In Chapter 7, an algorithm that eliminates the RCMC interpolator is
discussed. In Chapters 12 and 13, the various methods for estimating the Doppler parameters of each radar scene
are presented.
Multilook processing to reduce speckle noise is presented. The processing steps are the same as one-look
processing until the azimuth IFFT stage. Shorter IFFTs are taken, and then the looks are added incoherently. The
looks can be interpreted as acquiring data from different squint angles.
While the RDA is the most popular algorithm in use today for processing civilian satellite SAR data, there exist a number of other good processing algorithms that have been developed since the introduction of the RDA. Some of these are optimized for higher throughput and some for handling special cases, like higher squint. These
algorithms are discussed in the next few chapters.
The important processing equations derived in this chapter are summarized in Table 6.4. An up chirp is assumed in range. The range migration parameter D listed is a function of f_η, R_0, and V_r.
Table 6.4 column headings: Operation | Low Squint | Approximate SRC | Exact SRC
References
[1] C. Wu. A Digital System to Produce Imagery from SAR Data. In AIAA Conference: System Design Driven by Sensors, October 1976.
[2] C. Wu. Processing of SEASAT SAR Data. In SAR Technology Symp., Las Cruces, NM, September 1977.
[3] I. G. Cumming and J. R. Bennett. Digital Processing of SEASAT SAR Data. In IEEE 1979 International Conference on Acoustics, Speech and Signal Processing, Washington, D.C., April 2-4, 1979.
[4] J. R. Bennett and I. G. Cumming. A Digital Processor for the Production of SEASAT Synthetic Aperture Radar Imagery. In Proc. SURGE Workshop, ESA Publication No. SP-154, Frascati, Italy, July 16-18, 1979.
[5] C. Wu, K. Y. Liu, and M. J. Jin. A Modeling and Correlation Algorithm for Spaceborne SAR Signals. IEEE Trans. on Aerospace and Electronic Systems, AES-18 (5), pp. 563-574, September 1982.
[6] B. C. Barber. Theory of Digital Imaging from Orbital Synthetic Aperture Radar. International Journal of Remote Sensing, 6, pp. 1009-1057, 1985.
[7] A. M. Smith. A New Approach to Range Doppler SAR Processing. International Journal of Remote Sensing, 12 (2), pp. 235-251, 1991.
[8] R. Bamler, H. Breit, U. Steinbrecher, and D. Just. Algorithms for X-SAR Processing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'93, Vol. 4, pp. 1589-1592, Tokyo, August 1993.
[9] J. R. Bennett, I. G. Cumming, R. A. Deane, P. Widmer, R. Fielding, and P. McConnell. SEASAT Imagery Shows St. Lawrence. Aviation Week and Space Technology, page 19 and front cover, February 26, 1979.
[10] M. J. Jin and C. Wu. A SAR Correlation Algorithm Which Accommodates Large Range Migration. IEEE Trans. Geoscience and Remote Sensing, 22 (6), pp. 592-597, November 1984.
[11] A. R. Schmidt. Secondary Range Compression for Improved Range Doppler Processing of SAR Data with High Squint. Master's thesis, The University of British Columbia, September 1986.
[12] N. L. Johnson, S. Kotz, and N. Balakrishnan. Continuous Univariate Distributions. John Wiley & Sons, New York, 2nd edition, 1994.
[13] C. Oliver and S. Quegan. Understanding Synthetic Aperture Radar Images. Artech House, Norwood, MA, 1998.
[14] R. K. Raney. Radar Fundamentals: Technical Perspective. In Manual of Remote Sensing, Volume 2: Principles and Applications of Imaging Radar, F. M. Henderson and A. J. Lewis (eds.), pp. 9-130. John Wiley & Sons, New York, 3rd edition, 1998.
[15] J. W. Goodman. Statistical Properties of Laser Speckle Patterns. In Laser and Speckle Related Phenomena, J. C. Dainty (ed.). Springer-Verlag, London, 1984.
[16] R. K. Moore. Trade-Off Between Picture Element Dimensions and Noncoherent Averaging in Side-Looking Airborne Radar. IEEE Trans. on Aerospace and Electronic Systems, 15, pp. 697-708, September 1979.
[17] V. S. Frost, J. A. Stiles, K. S. Shanmugan, and J. C. Holtzman. A Model for Radar Images and Its Application to Adaptive Filtering of Multiplicative Noise. IEEE Trans. Pattern Anal. Mach. Intell., 4, pp. 157-166, 1982.
[18] J.-S. Lee. A Simple Speckle Smoothing Algorithm for Synthetic Aperture Radar Images. IEEE Trans. Systems, Man and Cybernetics, 13, pp. 85-89, 1983.
[19] M. Simard. Extraction of Information and Speckle Noise Reduction in SAR Images Using the Wavelet Transform. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'98, Vol. 1, pp. 4-6, Seattle, WA, July 1998.
[20] Z. Zeng and I. G. Cumming. Modified SPIHT Encoding for SAR Image Data. IEEE Trans. on Geoscience and Remote Sensing, 39 (3), pp. 546-552, March 2001.
[21] Y. Dong, A. K. Milne, and B. C. Forster. A Review of SAR Speckle Filters: Texture Restoration and Preservation. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'00, Vol. 2, pp. 633-635, Honolulu, HI, July 2000.
[22] S. M. Selby. Standard Mathematical Tables. CRC Press, Boca Raton, FL, 1967.
Chapter 7
The Chirp Scaling Algorithm
7.1 Introduction
The range Doppler algorithm presented in Chapter 6 was the first algorithm developed for civilian satellite SAR
processing. It is still the most widely used algorithm because of its favorable tradeoff between maturity, simplicity,
efficiency, and accuracy. However, under certain conditions, two disadvantages can become apparent. First, a high
computing load is experienced when a long kernel is used to obtain high accuracy in the RCMC operation.
Second, it is not easy to incorporate the azimuth frequency dependence of SRC, which can limit its accuracy in
certain high squint and wide-aperture cases.
The chirp scaling algorithm (CSA) was developed specifically to eliminate the interpolator used for RCMC [1].
It is based on a scaling principle described by Papoulis [2], whereby a frequency modulation is applied to a
chirp-encoded signal to achieve a shift or scaling of the signal. Using this "chirp scaling" principle, the required
range-variant RCMC shift can be implemented, using phase multiplies instead of a time-domain interpolator. The
algorithm has the additional benefit that SRC can be made azimuth frequency dependent. This benefit arises
because the data are available in the two-dimensional frequency domain at a convenient stage in the processing.
The maximum shift or change of scale implemented by the frequency modulation cannot be too large, or the
associated change in the signal's center frequency and bandwidth would become problematic. This restriction is
neatly avoided by applying RCMC in two steps, whereby only the difference in RCM between signals at different
ranges is corrected by the chirp scaling operation. After this is done, the RCM of all signals is the same, and is
simply corrected in the two-dimensional frequency domain, using a phase multiply. These two steps are referred to
as differential RCMC, and bulk RCMC, respectively.
A high-level block diagram of the CSA algorithm is given in Figure 7.1. The basic operations are implemented
using only four FFTs and three phase multiplies. The steps in the CSA algorithm are:
1. An azimuth FFT to transform the data into the range Doppler domain.
2. The chirp scaling is applied, using a phase multiply to equalize the range migration of all target trajectories.
This is the first phase function.
3. A range FFT is performed to transform the data into the two-dimensional frequency domain.
4. A phase multiply is performed with a reference function, which applies range compression, SRC, and bulk
RCMC in the same operation. This is the second phase function.
5. A range IFFT is performed to transform the data back to the range Doppler domain.
6. A phase multiply is performed to apply azimuth compression with a range-varying matched filter. A phase
correction is also required, as a result of the chirp scaling applied in step 2, which can be incorporated into
the same phase multiply. This is the third and last phase function.
7. The final step is an azimuth IFFT to transform the compressed data back to the two-dimensional time
domain, which is the SAR image domain.
Note that range compression is not performed first, as it is in the RDA. This is because the data must
retain the range chirp encoding, in order for the scaling of step 2 to work. If, for some reason, the data have
already been range compressed, the chirp must be reintroduced into the data by a range expansion operation.
Note that a bulk azimuth compression term can be incorporated into the second phase function of step 4,
although it does not appear to be advantageous to do this.
It can be seen that the key processing operations are performed in different domains. Specifically, the first
phase function is applied in the range-time, azimuth-frequency domain (the range Doppler domain), the second in
the two-dimensional frequency domain, and the third back in the range Doppler domain. In this way, the CSA
can be considered as a hybrid algorithm, sharing characteristics of both range Doppler and two-dimensional
frequency domain processing.
The CSA uses specific characteristics of the raw SAR data to obtain a processing algorithm that requires no
interpolation operations to focus the SAR image. However, it is common practice to resample the compressed
image data into ground range coordinates or a specific map grid, and an interpolator is normally used for this
operation. This postfocusing interpolation is used in all algorithms.
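The seven steps listed above can be sketched as a pipeline (a structural sketch only, not the book's code; the three phase functions phi1 to phi3 are assumed to be precomputed arrays, and their derivation occupies the rest of this chapter):

```python
import numpy as np

def csa_focus(data, phi1, phi2, phi3):
    """Structural sketch of the CSA (steps 1-7).
    data: 2-D complex array, azimuth along axis 0, range along axis 1.
    phi1, phi2, phi3: hypothetical precomputed phase functions,
    broadcastable to data's shape."""
    s = np.fft.fft(data, axis=0)     # 1. azimuth FFT -> range Doppler domain
    s = s * phi1                     # 2. chirp scaling (differential RCMC)
    s = np.fft.fft(s, axis=1)        # 3. range FFT -> 2-D frequency domain
    s = s * phi2                     # 4. range compression, SRC, bulk RCMC
    s = np.fft.ifft(s, axis=1)       # 5. range IFFT -> range Doppler domain
    s = s * phi3                     # 6. azimuth compression + phase correction
    return np.fft.ifft(s, axis=0)    # 7. azimuth IFFT -> SAR image domain
```

With all three phase functions set to unity, the pipeline reduces to the identity, which is a convenient sanity check of the FFT bookkeeping.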
[Figure 7.1: Block diagram of the CSA. Azimuth FFT → first phase function: chirp scaling for differential RCMC (range Doppler domain) → range FFT → second phase function: reference function multiply for bulk RCMC, RC, and SRC (two-dimensional frequency domain) → range IFFT → third phase function: azimuth compression and phase correction (range Doppler domain) → azimuth IFFT → compressed data (SAR image domain).]
In this chapter, Section 7.2 explains the concept of chirp scaling, which is the fundamental building block of
the CSA. Section 7.3 shows how chirp scaling can be applied to the important RCMC step in SAR processing.
Section 7.4 derives the scaling function. Section 7.5 discusses the range processing steps, which involve the first
and second phase functions. It also discusses the azimuth processing steps, which involve the third phase function.
Section 7.6 illustrates the operation of the algorithm with simple processing examples.
Historical Note
As far as is known, the CSA was developed concurrently, but independently, by two groups. The concept of
applying chirp scaling to RCMC was conceived in Canada by Keith Raney, and a working CSA algorithm was
developed by Ian Cumming and Frank Wong at MacDonald Dettwiler. The same concept was conceived in
Germany by Hartmut Runge, and developed into the CSA algorithm by Richard Bamler and his group at the
German Aerospace Center, Oberpfaffenhofen (DLR). The two groups met at IGARSS'92 in Houston and discovered
each others' work [3, 4]. They agreed to cooperate, and published a joint paper a year later [1]. The algorithm
was patented by each group in 1993 [5, 6].
Related Work
A number of applications and extensions of the algorithm have been reported since the CSA was introduced for
satellite SAR processing. In this chapter, only the basic principles of the CSA algorithm are described. The
following additional references are included for those readers who wish to delve deeper into variations of the
algorithm:
o Extended chirp scaling (ECS) for ScanSAR, stripmap, and spotlight modes [17- 19];
To explain the chirp scaling concept, it is useful to revisit the range compression of a single point target. Assuming that the transmitted pulse is linear FM, the ideal received signal from a point target, after demodulation, can be expressed as

    s_0(τ) = rect[ (τ − τ_a) / T_r ] exp[ jπ K_r (τ − τ_a)² ]        (7.1)

where τ_a is the time (range) of occurrence of the target (the rect function is centered at τ = τ_a and has a duration of T_r), and K_r is the range FM rate. The signal's spectrum is centered at baseband, and the range frequency is zero at τ = τ_a. As a convention, assume that the target is to be registered to this zero frequency point by the compression operation. To achieve this registration, the compression can be implemented in the frequency domain, using the matched filter

    G(f_r) = rect( f_r / F_r ) exp[ jπ f_r² / K_r ]        (7.2)

where f_r is the range frequency variable, and F_r is the range sampling rate. If the oversampling ratio is large, the matched filter extent, F_r, can be replaced by the chirp bandwidth, |K_r| T_r.
If the intent is to compress the target to a position slightly shifted from the zero frequency point, the
following cases can be considered: {1) the simple case where the shift is a constant, and (2) the more
complicated case where the shift is a linear function of range. In either case, the shift could be implemented
using an interpolation operation, but the purpose of this section is to examine the use of chirp scaling to
implement the shift.
To implement a constant shift, the Fourier transform shift property, discussed in Section 2.3.3, can be utilized;
that is, a linear phase ramp can be applied to the frequency domain matched filter (7.2). Because of the linear
frequency encoding in the signal, a nearly equivalent operation is to apply the linear phase ramp in the time
domain. The time domain phase ramp is called a "scaling function," since it adjusts or scales the frequency (and ultimately the position) of the radar signal.1
To illustrate this effect, let the scaling function be

    s_p(τ) = exp( j2π K_r Δτ τ )        (7.3)

which depends upon K_r, and contains a shift parameter, Δτ. Multiplying the radar signal, s_0(τ), by s_p(τ) results in the scaled signal

    s_1(τ) = rect[ (τ − τ_a) / T_r ] exp[ jπ K_r (τ − τ_a + Δτ)² ] exp[ jπ K_r (2τ_a Δτ − Δτ²) ]        (7.4)
By comparing the (τ − τ_a)² phase factor of (7.1) with the (τ − τ_a + Δτ)² factor of (7.4), it is seen that the position of zero frequency has been shifted by Δτ time units to the left.2 This has been achieved by applying a frequency shift of

    f_sc = K_r Δτ        (7.5)

to the signal via (7.3). This frequency shift is called the "scaling function frequency." Besides the shift, there is a residual phase given by the last exponential term of (7.4), which can be removed by a subsequent phase compensation.
Geometric Interpretation
A geometric interpretation of the method is illustrated by the frequency/time diagram of Figure 7.2, where an up chirp is assumed (K_r > 0). The scaling function shifts the frequency of the signal up when Δτ is positive. This moves the zero frequency point of the signal to a time τ_b to the left of τ_a. Then the matched filter (7.2) will compress the scaled signal to a time τ_b instead of τ_a, a left shift of Δτ.
Figure 7.3 shows a simple simulation of the compression of a chirp signal, in which the compressed target is to be moved by 50 samples to the left by the chirp scaling. The FM rate, K_r, is 100 Hz/s, and the sampling rate, F_r, is 800 Hz. The signal duration is 700 samples, and the bandwidth is 87.5 Hz (the oversampling ratio is 9.14; a high value is used for illustration purposes). The shift parameter is Δτ = 50/F_r = 62.5 ms, and the scaling function frequency is f_sc = K_r Δτ = 6.25 Hz, or 0.8% of the sampling rate. The scaling function is shown in Figure 7.3(b), and has 6.25 cycles over 800 samples, or 1 second.
[Figure 7.2: Frequency/time diagram of the scaling operation, showing the scaling function, the original signal, and the scaled signal.]
Figure 7.3(a, c) shows the chirp signal, before and after the scaling function has been applied. The zero
frequency positions of the signals are at Samples 400 and 350, respectively. Figure 7.3(d) shows the results of
compressing the signals of Figure 7.3(a, c), the original signal, and the scaled signal. It can be seen that the
targets are compressed to Samples 400 and 350, respectively, as expected.
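The example of Figure 7.3 is easy to reproduce numerically. The sketch below (an illustrative reconstruction using the stated parameters, not the book's code) applies the scaling function (7.3) and the matched filter (7.2); the compressed peak moves from Sample 400 to Sample 350:

```python
import numpy as np

Fr = 800.0              # range sampling rate (Hz)
Kr = 100.0              # chirp FM rate (Hz/s)
N = 800                 # array length (1 second of data)
t = np.arange(N) / Fr   # time axis (s)

tau_a = 400 / Fr        # zero-frequency time of the original chirp (Sample 400)
T = 700 / Fr            # pulse duration (700 samples)

# Original up chirp with a rect envelope centred on tau_a, as in (7.1)
env = np.abs(t - tau_a) <= T / 2
s0 = env * np.exp(1j * np.pi * Kr * (t - tau_a) ** 2)

# Scaling function (7.3): shifts the zero-frequency point 50 samples left
delta_tau = 50 / Fr                                  # 62.5 ms
s1 = s0 * np.exp(2j * np.pi * Kr * delta_tau * t)    # f_sc = 6.25 Hz

# Frequency domain matched filter (7.2) for an up chirp
f = np.fft.fftfreq(N, d=1.0 / Fr)
G = np.exp(1j * np.pi * f ** 2 / Kr)

def compress(s):
    return np.abs(np.fft.ifft(np.fft.fft(s) * G))

peak0 = int(np.argmax(compress(s0)))   # original target: Sample 400
peak1 = int(np.argmax(compress(s1)))   # scaled target: Sample 350
```

The 6.25-Hz frequency shift is a tiny fraction of the 800-Hz sampling rate, so the baseband matched filter easily accommodates the shifted band, as discussed in the next paragraph.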
Of course, the shifting can also be performed by an interpolation operation. However, it requires more
arithmetic operations when the interpolation kernel is long, and reduces the signal bandwidth when the
interpolation kernel is short (recall Figure 2.16). There is also a frequency/time difference between interpolation
and scaling. In interpolation, the entire pulse envelope is shifted in time, but the frequency extent remains the
same. In contrast, the envelope of the signal stays at the same position in the chirp scaling case, as a
time-domain phase multiply is used [compare Figure 7.3(a) and Figure 7.3(c)]. However, the frequency band of
the signal is shifted in the chirp scaling case, which can cause the signal to partially shift out of the band of
the matched filter.
The frequency shift is apparent in Figure 7.2. Figure 7.3(c) shows that the signal is no longer at baseband,
but is shifted a small amount. As long as the shift is so small that the resulting signal frequencies are still
within ±0.5Fr, and the baseband matched filter is wide enough to accommodate the frequency shift, the chirp
scaling will produce a well-compressed pulse with an accurate time shift. If the frequency shift is so large that
some of the signal frequencies are shifted outside the bandwidth of the matched filter, the filter should be shifted
in frequency to accommodate the frequency shift of the data. However, as discussed later, this may not be
possible when the shift varies with range.
In the RCMC operation, the required shift is not constant, but varies with range, and, in fact, the variation is very nearly linear. In this case, the scaling function turns out to be a linear FM signal, as derived below.
In specifying the shift needed for differential RCMC, it is convenient to choose a reference range time, τ_ref, for which the shift is zero. The reference range time is usually taken to be the midswath range. The required shift is then proportional to the time away from this reference. To simplify the mathematical notation, let τ′ = τ − τ_ref be the time referenced to this range where the shift is zero.
[Figure 7.3: (a) real part of the original signal; (b) real part of the scaling function; (c) real part of the scaled signal; (d) original and scaled signals after compression, with peaks at Samples 400 and 350, respectively.]
Figure 7.3: A simple example of target shift after scaling and compression.
Figure 7.4 shows a set of three equally-spaced targets, having the same FM rate, K_r, and duration, T_r, but placed at different ranges (α and K_r are positive in this example). Examining Target C with zero frequency occurring at τ′ = τ′_a, its signal can be expressed as

    s_0(τ′) = rect[ (τ′ − τ′_a) / T_r ] exp[ jπ K_r (τ′ − τ′_a)² ]        (7.6)

If this target is compressed directly with the matched filter (7.2), its peak will occur at the zero frequency position τ′_a. If it is to be compressed to the position τ′_b, the frequency of the signal should be scaled so that the new zero frequency will occur at τ′_b.
[Figure 7.4: Frequency/time diagram of the three targets and the linear FM scaling function (dash-dot line), showing the total bandwidth after chirp scaling.]
Figure 7.4: Action of a linear FM scaling function.
where Δτ(τ′_b) is the required time shift, which can be achieved by a frequency scaling of Δf(τ′_b). Note that Δτ(τ′_b) is the time τ′_a minus τ′_b, a positive value in agreement with the convention used in Figure 7.2.
Suppose that Target A is to be shifted right by the same amount, and Target B is to remain where it is. Then, the required shift of each target can be expressed by α τ′. The shift of each target is proportional to its new zero frequency time, τ′; that is, the shift is proportional to range. In the example, α = Δτ(τ′_b)/τ′_b, or τ′_b = τ′_a/(1 + α).
To achieve this shift, the frequency to be added to the signal is

    f_sc(τ′) = α K_r τ′        (7.8)

which is the required frequency of the scaling function. The phase of the scaling function is

    φ_sc(τ′) = π α K_r τ′²        (7.9)

which is a quadratic function of time. Hence, the scaling function has a linear FM form, with the FM rate

    K_sc = α K_r        (7.10)

This linear FM form of the scaling function is shown by the dash-dot line in Figure 7.4. Note that α and K_r are positive in this example.
The zero frequency position of Target A is moved to the right, that of Target C is moved to the left, and
that of the center Target B remains unchanged because it is at the reference time. It can be seen that the linear
FM scaling function shifts the targets by an amount proportional to their distance from the reference position.
Examining Target C again, the scaled signal is

    s_1(τ′) = rect[ (τ′ − τ′_a) / T_r ] exp[ jπ (1 + α) K_r ( τ′ − τ′_a/(1 + α) )² ] exp[ jπ α K_r τ′_a² / (1 + α) ]        (7.11)
o The linear FM rate of each target is changed from K_r to (1 + α)K_r by the scaling function. Since the exposure time of the target is unchanged by the scaling function, the target bandwidth is also changed by the factor 1 + α. In this example, the FM rate and bandwidth of the target have increased.
o The first exponential term of (7.11) shows that the target is compressed to τ′ = τ′_a/(1 + α), which is τ′_b in Figure 7.4. Therefore, the target shift is proportional to range, which is the purpose of the scaling function. In other words, the range axis is scaled by the factor 1/(1 + α). As α is positive in this example, the scaling involves a compression of the axis. The scaling actually takes place during the range compression operation.
o The frequency band of each target has been shifted. The frequency band of Target A has been shifted down, while that of Target C has been shifted up. This means that the overall bandwidth needed to cover all targets is increased by 2 |K_r| max(|Δτ|), or 2 max(|Δτ|)/T_r when expressed as a fraction of the range bandwidth. The value of max(|Δτ|) is |α| times one-half the swath width (in time units). Hence, it is important to keep α small, so that the expanded bandwidth still lies within the bandwidth of the matched filter and aliasing does not occur.
o The second exponential term in (7.11) is independent of time, and represents a residual phase. This phase
can be removed by multiplying the compressed data by a phase compensation term.
Figure 7.5 shows the results of a simulation of the chirp scaling of the three equally-spaced targets shown in
Figure 7.4. The center target is used as the reference, corresponding to zero shift. A linear FM scaling function is
applied so that the target at the left is shifted to the right by 10 samples, and the target at the right shifted to
the left by 10 samples. As the targets are originally spaced by 200 samples, this represents a 5% compression of
the range axis.
In this section, the principles of chirp scaling have been illustrated, assuming a shift that varies linearly with
time or range. This is the "constant scaling" case, where the time axis is stretched or compressed uniformly along
its length. In practice, situations will be met where the required shift has small quadratic or higher order terms. Then, the scaling of the axis is not quite uniform, as is the case in the last part of Section 7.5.1. As these nonlinear terms are usually very small in practice, the principles discussed in this section still apply.
In summary, three different forms of the scaling function have been discussed, distinguished by the degree of its phase:
Constant scaling of a range line means that the whole line is stretched or compressed by a uniform factor,
so that the range shift varies linearly with range. Range variant scaling means that the stretch or compression
factor varies along the range line, so that the range shift includes quadratic and higher order terms. In practice,
it turns out that the range varying component is very small, so the shift is very nearly a linear function of range
(i.e., the scaling is nearly a constant).
[Figure 7.5: Chirp scaling simulation: (a) real part of the original signal (raw data); (b) real part of the scaling function; (c) real part of the scaled signal; (d) original and scaled signals after compression. Horizontal axes: range samples, 0 to 500.]
Section 5.5 has shown how RCMC could be efficiently implemented in the range Doppler domain, taking
advantage of the congruence of target energy for targets at the same range. Section 6.3.4 has shown that RCMC
could be applied by a range-varying shift along a line of constant azimuth frequency, using an interpolator. The
required shift is approximately a linear function of range. This section shows how the chirp scaling principle can
be applied to the RCMC operation.
The foregoing discussion indicates that RCMC is a good candidate for chirp scaling implementation, whereby the range interpolation can be implemented by a scaling operation, with higher accuracy and efficiency than a conventional interpolator [1]. However, two additional conditions must be met before chirp scaling can be used.
First, the data must have a chirp encoding in the range direction. Second, the chirp scaling shift must be small
enough so that the spread of frequencies does not exceed the range sampling rate and cause aliasing.
The second condition can be met by separating RCMC into two components. In one component, the RCM of a reference trajectory (at midrange) is corrected. Then, in the chirp scaling operation, only the difference between
each trajectory and the reference trajectory needs to be corrected. In this way, the shift required by the chirp
scaling operation is quite small, and the bandwidth increase is minor.
In Section 7.3.1, the concept of dividing RCMC into bulk and differential components is discussed. Then, after revisiting the form of the signal in the range Doppler domain in Section 7.3.2, the chirp scaling function is derived in Section 7.4. First, a simple low squint, narrow-aperture example is considered, in which the RCM is a quadratic function of azimuth frequency, and the effective radar velocity is assumed constant. In this case, a linear FM scaling function can be used for the RCMC. Then, a more general case is considered, in which an accurate hyperbolic range equation is assumed, and Vr is allowed to vary with range. In this case, the chirp
scaling principle is still applicable, but the scaling function takes on a more complicated form. Finally, the size of
the differential component of the RCM is given in Section 7.4.1 for typical airborne and satellite cases.
For a target whose range of closest approach is R0, the range equation in the range Doppler domain is given approximately by

R_rd(f_η) ≈ R0 + [ c² R0 / (8 Vr² f0²) ] f_η²   (7.12)

as given by (5.10). The approximation is valid for low squint angles and small apertures. The range migration is given by the second term in (7.12), and is a linear function of the slant range of closest approach, R0, and a quadratic function of azimuth frequency, f_η.
Recall that in the range Doppler domain, each horizontal line has a constant f_η, and RCMC is usually implemented line-by-line in this horizontal direction. As RCM is a linear function of range in (7.12), this suggests that a linear scaling function can be used to perform the RCMC, similar to the example of the three targets in Figure 7.4.
The RCM expressed in the second term of (7.12) can be considered to be the total RCM. Using chirp scaling to correct the total RCM risks shifting the signal outside the frequency band of the baseband range matched filter (see Figures 7.4 and 7.5). A solution is to divide the RCM into two parts, a "bulk RCM" representing the RCM of a reference or middle target, and a "differential RCM." The bulk RCM is the same for all targets, and the differential RCM represents the remaining part. The differential RCM is range dependent, and is much smaller than the bulk RCM. Each part is then corrected separately, using different types of operations.
The concept of bulk and differential RCM is illustrated in Figure 7.6. Figure 7.6(a) shows the loci of energy of three targets at different ranges (the data are shown range compressed for convenience). The vertical axis is Doppler frequency, and each target has the same Doppler bandwidth (the Doppler centroid frequency is set to zero for simplicity). The curvature of each target is different, as the quadratic coefficient R0/Vr² increases with range in (7.12). Now, if the middle target is selected as the reference target, the bulk RCM is defined as the RCM of this target. The bulk RCM is range invariant, that is, it is the same for all targets, as shown in Figure 7.6(b). Then, when the bulk RCM component is removed from each target, the remaining or differential RCM is obtained, as shown in Figure 7.6(c). The size of the differential RCM is exaggerated in the figure, but, in fact, is only a small fraction of the bulk RCM.
When RCMC is performed, the shift can be done with respect to an arbitrary origin. To set this origin, reference points can be chosen in range and in azimuth where the RCMC is zero. As one of the main objectives is to minimize the frequency shift introduced by the chirp scaling operation, the reference points are selected so as to minimize the differential RCMC. Suitable selections are as follows:
Reference target: The target at the midrange of the swath is a suitable choice for the reference target, as the differential RCMC is then symmetrical. It is the target that is used to define the bulk RCM.
Reference range: The reference range is the range of closest approach, R0, of the reference target.
Reference azimuth frequency: The reference azimuth frequency, f_η_ref, is chosen to be the Doppler centroid frequency, f_η_c, at midrange (i.e., the centroid of the reference target), in order to minimize the differential RCM.
Using these reference points and conventions, the components of RCM can be defined as follows.
Total RCM: To minimize the size of the RCMC shifts, the total RCM of each target can be defined as the target's range, minus its range at the reference azimuth frequency

RCM_total(R0, f_η) = [ c² R0 / (8 Vr² f0²) ] (f_η² − f_η_ref²)   (7.13)
Bulk RCM: In order to separate this total RCM into bulk and differential components, the bulk RCM can be defined as the total RCM of the target at the reference range, Rref

RCM_bulk(f_η) = RCM_total(Rref, f_η) = [ c² Rref / (8 Vr_ref² f0²) ] (f_η² − f_η_ref²)   (7.14)
Figure 7.6: The total RCM expressed as the sum of a range-invariant bulk
RCM and a range-variant differential RCM.
Differential RCM: The differential RCM is then simply found by subtracting the bulk RCM from the total RCM (7.13) to obtain

RCM_diff(R0, f_η) = [ c² / (8 f0²) ] ( R0/Vr² − Rref/Vr_ref² ) (f_η² − f_η_ref²)   (7.15)
Discussion
1. Both the bulk and differential corrections of all targets are zero at the reference azimuth frequency, f_η = f_η_ref. This means that each target will be compressed to the range it had at this frequency. If the targets are registered to R0, their range at f_η = 0, the bandwidth shift will be too large.
2. The total RCM can be corrected by shifting in two steps: by the bulk RCM, and by the differential RCM. These two components of RCMC can be implemented in either order.
3. The bulk RCM is the same for each target, so it can be corrected by a linear phase multiply in the range frequency domain.
4. The differential RCM varies with range, and is zero at R0 = Rref. It can be corrected by a chirp scaling operation in the range time domain.
5. If Vr is constant, the differential RCM (7.15) is a linear function of the range of closest approach of each target, R0, at a fixed azimuth frequency, f_η.
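The decomposition of (7.13) to (7.15) is easy to check numerically. The sketch below uses assumed C-band-like parameter values (not those of Table 4.1) and a constant Vr, and verifies discussion points 4 and 5: the differential RCM is zero at the reference range and is linear in R0.

```python
import numpy as np

# Assumed illustrative parameters (not from Table 4.1)
c, f0 = 3.0e8, 5.3e9           # speed of light, radar center frequency (Hz)
Vr = 7100.0                    # effective radar velocity (m/s), constant here
R_ref = 850e3                  # reference (midrange) slant range (m)
f_eta_ref = 300.0              # reference azimuth frequency (Hz)

def rcm_total(R0, f_eta):
    """Total RCM of (7.13): range minus range at the reference azimuth
    frequency, using the low squint quadratic range equation (7.12)."""
    k = c**2 / (8 * Vr**2 * f0**2)
    return k * R0 * (f_eta**2 - f_eta_ref**2)

def rcm_bulk(f_eta):
    """Bulk RCM of (7.14): the total RCM evaluated at the reference range."""
    return rcm_total(R_ref, f_eta)

def rcm_diff(R0, f_eta):
    """Differential RCM of (7.15): total minus bulk."""
    return rcm_total(R0, f_eta) - rcm_bulk(f_eta)

f_eta = 900.0                                   # a frequency off the centroid
print(rcm_diff(R_ref, f_eta))                   # 0.0: zero at the reference range
# Linear in R0: doubling the offset from R_ref doubles the correction
print(2 * rcm_diff(R_ref + 1e3, f_eta), rcm_diff(R_ref + 2e3, f_eta))
```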
In the general radar case, the differential and bulk RCMs take on more complicated forms, which are
discussed in the next section.
D(f_η, Vr) = [ 1 − c² f_η² / (4 Vr² f0²) ]^(1/2)   (7.17)

Km = Kr / [ 1 − Kr c R0 f_η² / (2 Vr² f0³ D³(f_η, Vr)) ]   (7.18)
In the current context, the key factor in (7.16) is the range migration parameter, D, which appears in the range envelope, the azimuth phase, and the range phase components of (7.16). Specifically, it gives the more accurate hyperbolic form of the range equation in the range Doppler domain

R_rd(f_η) = R0 / D(f_η, Vr)   (7.19)

Note that this equation is equivalent to the simpler quadratic range equation (7.12), when the square root in (7.19) is expanded up to the first term
[ 1 − c² f_η² / (4 Vr² f0²) ]^(−1/2) ≈ 1 + c² f_η² / (8 Vr² f0²)   (7.20)
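The closeness of the quadratic approximation at moderate azimuth frequencies can be seen with a quick numerical comparison of the hyperbolic form (7.19) against the expansion (7.20), under assumed parameter values:

```python
import numpy as np

# Assumed illustrative parameters
c, f0 = 3.0e8, 5.3e9      # speed of light, radar center frequency (Hz)
Vr = 7100.0               # effective radar velocity (m/s)
f_eta = 1200.0            # azimuth frequency (Hz)
R0 = 850e3                # range of closest approach (m)

# Range migration parameter D of (7.17)
D = np.sqrt(1 - c**2 * f_eta**2 / (4 * Vr**2 * f0**2))

# Hyperbolic range equation (7.19) vs. quadratic approximation (7.12)/(7.20)
R_hyp = R0 / D
R_quad = R0 * (1 + c**2 * f_eta**2 / (8 * Vr**2 * f0**2))

# The two RCM values agree to a small fraction of a millimeter here,
# while the RCM itself is several meters
print(R_hyp - R0, R_quad - R0)
```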
With the new range equation (7.19), the low squint RCM equations, (7.12) to (7.15), can be replaced with
their more accurate forms.
Total RCM: Following (7.13), the total range migration with respect to the reference azimuth frequency is
RCM_total(R0, f_η) = R0 / D(f_η, Vr) − R0 / D(f_η_ref, Vr)   (7.21)
Bulk RCM: Evaluating this function at the reference range, Rref (where Vr = Vr_ref), the bulk RCM is defined as

RCM_bulk(f_η) = Rref / D(f_η, Vr_ref) − Rref / D(f_η_ref, Vr_ref)   (7.22)
Differential RCM: The differential RCM at the range R0 is then found by subtracting the bulk RCM (7.22) from the total RCM (7.21) to get

RCM_diff(R0, f_η) = R0 [ 1/D(f_η, Vr) − 1/D(f_η_ref, Vr) ] − RCM_bulk(f_η)   (7.23)
Bear in mind that Vr is a function of range in each of these equations, taking its value at the range in the corresponding numerator.
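The hyperbolic decomposition (7.21) to (7.23) can be sketched in code. The parameter values below, including the slow linear growth of Vr with range, are assumptions for illustration only; the sketch checks that the differential RCM is a small fraction of the bulk RCM, as stated above.

```python
import numpy as np

# Assumed illustrative parameters
c, f0 = 3.0e8, 5.3e9                  # speed of light, center frequency (Hz)
f_eta_ref = 750.0                     # reference azimuth frequency (Hz)
R_ref = 850e3                         # reference (midrange) slant range (m)

def D(f_eta, Vr):
    """Range migration parameter of (7.17)."""
    return np.sqrt(1 - c**2 * f_eta**2 / (4 * Vr**2 * f0**2))

def Vr_of(R0):
    """Hypothetical slow growth of effective radar velocity with range."""
    return 7100.0 + 2e-4 * (R0 - R_ref)

def rcm_total(R0, f_eta):             # (7.21), Vr taken at the target's range
    Vr = Vr_of(R0)
    return R0 / D(f_eta, Vr) - R0 / D(f_eta_ref, Vr)

def rcm_bulk(f_eta):                  # (7.22), evaluated at the reference range
    return rcm_total(R_ref, f_eta)

def rcm_diff(R0, f_eta):              # (7.23), total minus bulk
    return rcm_total(R0, f_eta) - rcm_bulk(f_eta)

f_eta = 1350.0
bulk = rcm_bulk(f_eta)
diff = rcm_diff(R_ref + 25e3, f_eta)  # target 25 km beyond midrange
print(bulk, diff)                     # differential RCM << bulk RCM
```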
Because of the order in which the Fourier transforms are taken in the CSA, the differential RCMC is done before
the bulk RCMC, as seen in the flow diagram of Figure 7.1. Following the concepts discussed in Figure 7.6, the
RCM of three targets is illustrated in Figure 7.7 (for clarity, the energy spread in range is not shown, as if
range compression were already done). In this case, realistic radar parameters have been used, with a PRF of
1500 Hz, a Doppler bandwidth of 1200 Hz, and a Doppler centroid at midrange of 750 Hz. The Doppler centroid
has been given a small increase with range. Note that the units on the horizontal axis are arbitrary-they are
selected to make the various components of RCM clear.
In Figure 7.7, the solid curves represent the loci of targets at near range, midrange, and far range. The reference target is taken to be the middle target, and the reference range, Rref, is the range of closest approach of that target. The dashed curves represent the residual migration of the targets, after differential RCMC has been applied. The residual migration is the same at all ranges. As the differential RCMC is zero for the reference target, the dashed line is coincident with the solid line at this range. Finally, the vertical dotted lines represent the locus of the target after the bulk RCMC has also been applied, when all migration has been corrected (the bulk RCMC is actually performed in the two-dimensional frequency domain).
Now consider the CSA scaling function required to achieve the differential RCMC. As a general case, consider the right target in Figure 7.7. At an arbitrary azimuth frequency, f_η, its trajectory lies at Point P1. Its slant
Figure 7.7: Illustration of RCM and its correction, shown in the range Doppler domain.
range of closest approach is R0, and the range coordinate of Point P1 is R0/D(f_η, Vr) (remember that Vr is constant along a given target trajectory). The purpose of differential RCMC is to shift Point P1 to Point P2, and the purpose of bulk RCMC is to shift Point P2 to Point P3.
At Point P1, the differential RCM, RCM_diff(R0, f_η), is given by (7.23). To achieve the differential RCMC, the range position of the target energy must be shifted left by this amount. As range compression registers the target to its zero frequency position, the scaling function must shift the zero frequency position of the uncompressed target to the left by this amount. To implement this differential shift, the scaling function must have a frequency of

f_sc(f_η) = Km Δτ = (2 Km / c) RCM_diff(R0, f_η)   (7.24)

at range, R0/D(f_η, Vr), and azimuth frequency, f_η. In this equation, the factor 2/c converts the range shift to time units, and the range FM rate factor, Km, of (7.18) converts the time shift to frequency units. The time shift, Δτ, is given by

Δτ = (2/c) RCM_diff(R0, f_η)   (7.25)
This time shift should be expressed as a function of range time (instead of R0) at the azimuth frequency of interest, f_η, and the expression is derived as follows. The range time of Point P2 is given by the time of Point P3, plus the bulk RCM time shift
τ_P2 = (2/c) [ R0 / D(f_η_ref, Vr) + RCM_bulk(f_η) ]   (7.26)
Let the range time coordinate be translated so that a new range time, τ', is referenced to Point Q1 (similar to the way that τ' was used in Figure 7.4)

τ' = τ − (2/c) Rref / D(f_η, Vr_ref)   (7.27)
Then, combining (7.23), (7.25), (7.26), and (7.27) and simplifying, the differential shift in time units is

Δτ(τ', f_η) = [ D(f_η_ref, Vr)/D(f_η, Vr) − 1 ] τ' + (2 Rref/c) [ D(f_η_ref, Vr) / (D(f_η, Vr) D(f_η_ref, Vr_ref)) − 1/D(f_η, Vr_ref) ]   (7.28)

where the first term can be written as a τ', representing the scaling at the reference azimuth frequency.
To obtain the expression for the scaling function, the frequency (7.24) is integrated from the reference point, Q1, where the differential RCMC is zero [see (7.9)]. The integral is

S_sc(τ', f_η) = exp{ j 2π ∫₀^τ' Km Δτ(u, f_η) du }   (7.29)

where the value of Δτ is given by (7.28). At τ' = 0, the scaling function is unity (zero phase), as expected.
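For the constant-Vr case, the integral (7.29) can be evaluated numerically and compared against the closed-form quadratic phase it reduces to. A sketch, with an assumed Km and illustrative frequencies:

```python
import numpy as np

# Assumed illustrative parameters
c, f0 = 3.0e8, 5.3e9
Vr_ref = 7100.0                       # constant effective velocity (linear FM case)
f_eta, f_eta_ref = 1350.0, 750.0
Km = 5e11                             # modified range FM rate (Hz/s), constant here

def D(f):
    """Range migration parameter (7.17) at the reference velocity."""
    return np.sqrt(1 - c**2 * f**2 / (4 * Vr_ref**2 * f0**2))

alpha = D(f_eta_ref) / D(f_eta) - 1   # linear scaling coefficient, per (7.28)

tau = np.linspace(0, 20e-6, 2001)     # tau' axis from the reference point Q1
dtau_shift = alpha * tau              # differential shift, first term of (7.28)

# Phase of the scaling function: 2*pi times the integral of Km*dtau, per (7.29),
# computed with the trapezoid rule (exact here, since the integrand is linear)
seg = 0.5 * (Km*dtau_shift[1:] + Km*dtau_shift[:-1]) * np.diff(tau)
phase_num = 2*np.pi * np.concatenate(([0.0], np.cumsum(seg)))

# Closed form for a linear FM scaling function: pi*Km*alpha*(tau')^2
phase_closed = np.pi * Km * alpha * tau**2

print(np.max(np.abs(phase_num - phase_closed)))   # negligible difference
```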
The transmitted range chirp does not have to be exactly linear FM for the chirp scaling approach to work
effectively. An error analysis done by Davidson, with RADARSAT parameters, showed that the chirp could have a
nonlinear component as large as 1% of the size of the linear part. The correction for the nonlinear chirp is done
by adding a nonlinear component to the scaling function [21]. If the nonlinear component of the chirp is larger
than 1% of the linear part, one option is to compress the data, then reexpand with a linear FM rate (although
in this case, other algorithms may be more efficient).
The important differential RCM parameters, the maximum correction, the maximum scaling change, and the
increase in bandwidth, can be computed for a given SAR system. The results are shown in Table 7.1 for the
typical aircraft and satellite SAR parameters of Table 4.1.
Table 7.1: Summary of Chirp Scaling Parameters

  Parameter                  Aircraft   Satellite
  Swath width                10         50          km
  Squint angle, θ_r,c        8          4           deg
  Range resolution, ρ_r      1.8        8           m

Notes: (1) From (7.23). (2) It is equal to the maximum differential RCM divided by half the swath width. Also, it is the coefficient of τ' in (7.28).
In the cases considered, the differential correction should not be ignored, since it can be on the order of one or more range resolution elements (especially in the aircraft case). In relative terms, the differential correction
becomes larger at longer wavelengths and finer range resolutions. However, it may be small enough to ignore at
small squint angles and short wavelengths.
In both the cases considered, increases in range bandwidth are small enough so that they can easily be
absorbed by the range oversampling ratio. Each situation should be evaluated individually, especially for an
L-band airborne case, since the differential RCM is approximately proportional to wavelength squared, as shown in
(7.15) where c/ Jo is the wavelength.
The differential RCM is drawn in Figure 7.8 for the aircraft and satellite cases of Table 4.1, as a function of
range and azimuth frequency. The range on the horizontal axis is the distance from the reference range. The
azimuth frequency (as annotated at the right end of each line) is the frequency from the reference azimuth
frequency. It is seen that the differential RCM is very nearly a linear function of range, with a slope proportional
to the distance from the azimuth reference frequency.
[Figure 7.8: Differential RCM (samples) versus range from the reference (km), for the aircraft case (left) and the satellite case (right). Each line is annotated with its azimuth frequency offset from the reference (Hz): up to ±240 for the aircraft case, and up to ±800 for the satellite case.]
This section discusses the range and azimuth processing steps in the CSA. For the purposes of discussion, the
range processing is considered to consist of blocks 2 to 5 in the block diagram of Figure 7.1, and the azimuth
processing corresponds to blocks 6 and 7 (after the azimuth FFT in block 1). In particular, the phase functions
involved in processing blocks 2, 4, and 6 are examined in this section.
In range processing, the cases of linear and nonlinear FM scaling functions are considered.4
First, consider the linear FM case, in which Vr and Km are assumed invariant with range. With these assumptions, the second term on the right side of (7.28) vanishes, and the scaling function of (7.29) reduces to

S_sc(τ', f_η) = exp{ jπ Km [ D(f_η_ref, Vr_ref)/D(f_η, Vr_ref) − 1 ] (τ')² }   (7.30)

The range processing then consists of the following steps:
1. Multiply the signal in the range Doppler domain by the scaling function (7.30).
2. Perform a range FFT to transform the data to the two-dimensional frequency domain.
3. Perform range matched filtering, SRC, and bulk RCMC. These are phase multiplies in the two-dimensional frequency domain, and can be combined into one phase function. A range smoothing window can be applied in this step for sidelobe reduction.
4. Perform a range IFFT to transform the data back to the range Doppler domain.
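The four steps above can be sketched as a chain of elementwise multiplies and FFTs along the range axis. This is a structural sketch only: the phase functions are passed in as precomputed unit-magnitude arrays (random placeholders here), so the example checks the data flow and energy conservation, not focusing quality.

```python
import numpy as np

def csa_range_processing(data_rd, ssc, theta_2df):
    """Blocks 2 to 5 of Figure 7.1, applied along the range axis (axis 1).

    data_rd   : array (n_az, n_rg), signal in the range Doppler domain
    ssc       : unit-magnitude chirp scaling phase function, per (7.30)
    theta_2df : unit-magnitude phase function combining the range matched
                filter, SRC, and bulk RCMC in the 2-D frequency domain
    """
    s1 = data_rd * ssc                 # Step 1: chirp scaling multiply
    s2 = np.fft.fft(s1, axis=1)        # Step 2: range FFT
    s3 = s2 * theta_2df                # Step 3: RC + SRC + bulk RCMC multiply
    return np.fft.ifft(s3, axis=1)     # Step 4: range IFFT, back to range Doppler

rng = np.random.default_rng(0)
n_az, n_rg = 8, 64
data = rng.standard_normal((n_az, n_rg)) + 1j*rng.standard_normal((n_az, n_rg))
ssc = np.exp(1j * rng.uniform(0, 2*np.pi, (n_az, n_rg)))     # placeholder phases
theta = np.exp(1j * rng.uniform(0, 2*np.pi, (n_az, n_rg)))

out = csa_range_processing(data, ssc, theta)
# Unit-magnitude multiplies and an FFT/IFFT pair preserve total signal energy
print(np.allclose(np.sum(np.abs(out)**2), np.sum(np.abs(data)**2)))
```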
The details of the range processing steps are as follows. After the scaling function multiply of Step 1, the
scaled signal in the range Doppler domain is
S1(τ', f_η) = Srd(τ, f_η) S_sc(τ', f_η)   (7.31)
where Srd is defined in (7.16), and the relation between r and r' is given in (7.27).
Then, in Step 2, a range Fourier transform is performed on the scaled signal, S1(τ', f_η). After multiplying Srd(τ, f_η) by S_sc(τ', f_η), and using the POSP to evaluate the Fourier transform integral, it can be shown that the scaled signal in the two-dimensional frequency domain is
S2(f_τ, f_η) = A1 Wr(f_τ) Wa(f_η − f_η_c)
   × exp{ −j (4π R0 f0 / c) D(f_η, Vr) }
   × exp{ −jπ [ D(f_η, Vr) / (Km D(f_η_ref, Vr)) ] f_τ² }
   × exp{ −j (4π/c) [ R0 / D(f_η_ref, Vr_ref) ] f_τ }
   × exp{ −j (4π Rref/c) [ 1/D(f_η, Vr_ref) − 1/D(f_η_ref, Vr_ref) ] f_τ }
   × exp{ j (4π Km/c²) [ 1 − D(f_η, Vr_ref)/D(f_η_ref, Vr_ref) ] [ R0/D(f_η, Vr) − Rref/D(f_η, Vr) ]² }   (7.32)
where A1 is an immaterial complex constant. Note that the chirp scaling function uses the fixed value of Vr_ref for Vr, but the signal retains the varying value of Vr.
The five exponential terms in the above equation can be interpreted as follows:
o The first exponential term contains the azimuth modulation. It is approximately a quadratic function of
azimuth frequency, and is range dependent. It will be dealt with in the azimuth processing stage.
o The second exponential term represents the range modulation after the scaling. It is quadratic in f_τ, but has a small range and azimuth dependence due to the Km and D factors. When this range modulation is transformed back to the range time domain, it will be seen that the fraction D(f_η_ref, Vr)/D(f_η, Vr) represents the scaling factor 1 + a in (7.11), and includes the range/azimuth coupling (i.e., corrected by SRC in the RDA).
o The third exponential term is a linear phase that represents the target position of R0/D(f_η_ref, Vr_ref). This is where the peak of the target will be located after range compression.
o The fourth exponential term is the bulk RCM of (7.22), which is approximately quadratic in f_η. The scaling removes the range variant differential RCM, leaving a range invariant RCM, which is represented by this bulk RCM.
o The fifth exponential term contains a residual phase, which is a function of range and azimuth. It is not wanted in the final image, and can be compensated at the azimuth processing stage.
In Step 3 (the fourth block in Figure 7.1), the range compression, SRC, and the bulk RCM are applied by a
single phase multiply, which compensates (removes) the second and fourth exponential terms of (7.32). This
results in the range-compensated signal in the range Doppler domain
S3(f_τ, f_η) = A1 Wr(f_τ) Wa(f_η − f_η_c)
   × exp{ −j (4π R0 f0 / c) D(f_η, Vr) }
   × exp{ −j (4π/c) [ R0 / D(f_η_ref, Vr_ref) ] f_τ }
   × exp{ j (4π Km/c²) [ 1 − D(f_η, Vr_ref)/D(f_η_ref, Vr_ref) ] [ R0/D(f_η, Vr) − Rref/D(f_η, Vr) ]² }   (7.33)
Recall that Km is assumed to be range invariant while, in fact, it should be range varying. Therefore, it is reasonable to use the midrange value of Km to minimize the error. Another solution to this range-varying problem has been reported in [14], in which a nonlinear FM component is incorporated into the raw data during processing. This extra component interacts with the chirp scaling to remove the range dependence.
The range processing is completed in Step 4 with a range IFFT, resulting in the signal in the range Doppler
domain
S4(τ, f_η) = A2 p_r( τ − (2/c) R0/D(f_η_ref, Vr_ref) ) Wa(f_η − f_η_c)
   × exp{ −j (4π R0 f0 / c) D(f_η, Vr) }
   × exp{ j (4π Km/c²) [ 1 − D(f_η, Vr_ref)/D(f_η_ref, Vr_ref) ] [ R0/D(f_η, Vr) − Rref/D(f_η, Vr) ]² }   (7.34)
where A2 is a complex constant resulting from the range inverse Fourier transform involving A1. The range envelope, p_r(τ), is the inverse Fourier transform of Wr(f_τ) in (7.33). The envelope, p_r, is a sinc-like function, since the data are now range compressed. Now, only the azimuth modulation and residual phase terms remain, and they are both range dependent.
In the general case, both the effective radar velocity, Vr, and the range FM rate, Km, vary with range. In this case, the scaling function frequency contains terms of higher order. The integral in (7.29) can be evaluated by expanding both Km and Δτ(τ', f_η) in (7.28) as a power series in τ'. Note that it is not necessary to expand these terms analytically in order to implement the scaling, but it is useful for analysis. A polynomial fit for the integrand can be computed inside the computer program, for example, by using the polyfit function in MATLAB.
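The polynomial-fit approach can be sketched with NumPy's polyfit and polyint in place of MATLAB's polyfit. The integrand below is a hypothetical, mildly nonlinear shift profile chosen for illustration, not one derived from radar parameters:

```python
import numpy as np

tau = np.linspace(-10e-6, 10e-6, 501)       # tau' axis about the reference (s)
u = tau * 1e6                               # work in microseconds for conditioning

# Hypothetical integrand Km(tau')*dtau(tau', f_eta): a dominant linear term
# plus a small quadratic perturbation, standing in for range-varying Km and Vr
integrand = 5e6 * tau + 4e10 * tau**2       # illustrative numbers only

coef = np.polyfit(u, integrand, deg=3)      # power-series fit in tau' (via u)
# Integrate the polynomial analytically; polyint leaves the constant term at
# zero, so the resulting phase vanishes at tau' = 0, as (7.29) requires.
phase = 2*np.pi * 1e-6 * np.polyval(np.polyint(coef), u)

# Analytic integral of the same integrand, for comparison
exact = 2*np.pi * (5e6/2 * tau**2 + 4e10/3 * tau**3)
print(np.max(np.abs(phase - exact)))        # negligible fit error
```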
The form of the signal in the range Doppler domain after RCMC is the same as (7.32), with the exception of the fifth exponential term, which represents the residual phase. The residual phase can be derived by assuming that the frequency of the scaling function is locally linear; that is

f_sc(τ', f_η) ≈ g0/2 + g1 τ'   (7.35)

The coefficients g0 and g1 vary with τ' and f_η, due to variations in the local linear fit. The scaling function is then

S_sc(τ', f_η) = exp{ jπ [ g0 τ' + g1 (τ')² ] }   (7.36)
With this approximation, and following the derivation leading to the fifth exponential term in (7.32), it can be shown that the residual phase is

φ_res = (4/c²) [ π Km g1 / (π Km + g1) ] [ R0/D(f_η, Vr) − Rref/D(f_η, Vr_ref) + c g0/(4 g1) ]² − π g0²/(4 g1)   (7.37)
There is a geometric interpretation of the c g0/(4 g1) term inside the square brackets. From (7.35), the scaling function frequency is zero at τ' = −g0/(2 g1). Therefore, the reference range time has been shifted to the left by this amount. When multiplied by c/2, the shift is converted to distance (each term in the square brackets has units of range).
Note that the range varying component of the differential RCMC is very small in practice, so the RCMC is
very nearly a linear function of range.
There is a subtle point about the range scaling concerning target registration. The chirp scaling has the effect that each target is compressed to the associated vertical dotted line in Figure 7.7. This range registration corresponds to the range of closest approach, divided by D(f_η_ref, Vr), the cosine of the beam squint angle, θ_r. Therefore, there is a range scaling change of D(f_η_ref, Vr) in the final image. Since Vr varies a small amount with range in the satellite case, so does the scaling, D(f_η_ref, Vr).5
The azimuth processing consists of the following steps:
o Azimuth matched filtering, in which the filter is the complex conjugate of the first exponential term in (7.34). The filter is implemented as a phase multiply, and a weighting can be applied in this step.
o Residual phase correction, in which the multiplier is the complex conjugate of the second exponential term in (7.34) for a linear FM scaling function. For a nonlinear scaling function, the correction term is exp{−j φ_res}, where φ_res is given by (7.37). This phase multiply can be combined with the azimuth matched filter multiply.
o Azimuth IFFT, in which the final data are transformed back to the azimuth time domain, the final image domain. Similar to the RDA, multilook processing can be performed by taking shorter azimuth IFFTs at this stage.
The result is the final compressed signal

s_ac(τ, η) = A4 p_r( τ − (2/c) R0/D(f_η_ref, Vr_ref) ) p_a(η) exp{ j θ(τ, η) }   (7.38)

where p_a(η) is the IFFT of the window Wa(f_η) in (7.34), and is a sinc-like function, A4 is a complex constant due to A3 and the azimuth inverse Fourier transform, and θ(τ, η) is the target phase. As in the RDA processor, (1) the signal is not at baseband in azimuth, due to the azimuth center frequency f_η_c, and (2) the signal is not at baseband in range, due to a range varying azimuth matched filter (see Figure 6.13). The angle θ(τ, η) accounts for these nonbaseband characteristics, as explained in the description accompanying Figure 6.13.
Note that all the phases, including the target phase, 4π R0/λ, have been compensated. Therefore, the final phase obtained is relative to the target phase at the range of closest approach, R0. For applications such as interferometry or polarimetry, where the actual target phase is required, the total phase has to be reinstated by multiplying (7.38) by exp{ j 4π R0/λ }.
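Reinstating the absolute phase is a single complex multiply per pixel. A minimal sketch (the carrier frequency and range values are assumptions for illustration):

```python
import numpy as np

c = 3.0e8
f0 = 9.6e9                         # X-band carrier frequency (assumed)
wavelength = c / f0
R0 = 700e3                         # range of closest approach of the pixel (m)

# A compressed pixel whose phase is relative to the target phase at R0
pixel = 0.8 - 0.3j

# Reinstate the total phase by multiplying by exp{j*4*pi*R0/lambda}
restored = pixel * np.exp(1j * 4 * np.pi * R0 / wavelength)

# The magnitude is unchanged; only the phase is shifted
print(abs(restored) - abs(pixel))  # ~0, to floating point rounding
```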
Because one processing step is done in the two-dimensional frequency domain, one of the main advantages of
the RDA is not available, namely that the variation of Doppler centroid frequency with range can be easily
accommodated. When the data are in the two-dimensional frequency domain in the CSA, a Doppler centroid
frequency that varies too much over the processed range swath can cause energy at different ranges to overlap.
This effect can be observed in Figure 5.7. If the change in azimuth centroid frequency is greater than the
azimuth oversampling ratio (the vertical gap in Figure 5.7), a form of azimuth aliasing will occur when the range
FFT is taken in Step 3. Looking at a middle row in Figure 5.7, energy from one end of the exposure of the
near-range target is mixed with the energy from the other end of the exposure of the far-range target.
This problem can be overcome in two ways, each of which carries a small processing penalty. One solution is
to replicate the spectral lines, so that different phase functions can be applied to each end of the trajectories,
since they are a function of absolute Doppler frequency [6]. Another solution is to process the data in smaller
range blocks, which is done in other algorithms for various reasons.
In this section, processing examples using real data and point targets are presented.
Point target experiments were performed, using the C-band satellite parameters of Table 4.1. The squint angle
was doubled to 8°, in order to accentuate the differential RCM. The results of a target 20 km off the 1000 km
reference range are shown in the following figures.
Figure 7.9 shows the real part of the raw data after the azimuth FFT (i.e., the raw data in the range Doppler domain). The center of the target exposure, the Doppler centroid, is at azimuth frequency Sample 397,
and the discontinuity in the spectrum is at Sample 140. The range profiles at three different azimuth frequencies
are also plotted. These plots show that the range encoding has a linear FM modulation structure, which is
important to the application of chirp scaling.
To observe the effect of bulk and differential RCMC, these two components of RCM are shown in Figure
7.10. With these parameters, the bulk RCM is ± 38 samples, while the differential RCM is ± 0.8 range samples.
In the next two figures, the results of the CSA are shown with and without applying the differential RCMC
scaling operation.
Figure 7.9: Real part of the raw signal in the range Doppler domain.
[Figure 7.10: The bulk and differential RCM components of the simulated target, plotted as range migration (samples) versus azimuth frequency (samples).]
Figure 7.11 shows the range compressed data processed by the CSA, but without applying chirp scaling. In
other words, only the bulk RCMC was performed, not the differential. The differential RCM is about ± 0.8 range
samples, which is just large enough to cause noticeable range and azimuth broadening. The sidelobes of the slices
at Samples 70 and 212 show some distortion, as the slices are taken well off the peak of the energy.
[Figure panels: (a) RD spectrum after range compression; (b) azimuth frequency sample 70, peak at range sample 17.7; (c) azimuth frequency sample 212, peak at range sample 16.4.]
Figure 7.11: Range compressed data in the range Doppler domain, without performing chirp scaling for differential RCMC. The residual RCM is approximately 1.5 range samples.
Figure 7.12 shows the same data but with chirp scaling applied. A nonlinear scaling function was used,
which included phase terms up to the third order in range time. The figure shows that the RCMC has
straightened out the target trajectory in the range Doppler domain, so that azimuth compression can be correctly
applied.
[Figure panels: (a) RD spectrum after range compression; (b) azimuth frequency sample 70, peak at range sample 17.0.]
Figure 7.12: Range compressed data in the range Doppler domain, with differential RCMC performed by chirp scaling.
Finally, Figure 7.13 shows the compressed target after the azimuth IFFT. The well balanced sidelobe
structure shows that the target has been properly compressed. The sidelobe alignments are orthogonal, angled
according to the radar squint angle.6 The range and azimuth resolutions agree with the theoretical values. An
accurate measurement of the target phase at the peak of the compressed pulse shows that it is within three
degrees of the theoretical value of zero. This is too small a discrepancy to be significant in most applications.
[Figure panels: (a) expanded target; (b) expanded target contours; (e) range phase; (f) azimuth phase. Axes in range and azimuth samples.]
Figure 7.13: Analysis of a point target compressed with the CSA algorithm.
The data for Figure 7.14 were collected by the X-SAR radar system on February 18, 2000, during the Shuttle Radar Topography Mission (SRTM). The image was processed to four looks using the CSA, and was kindly provided by Helko Breit and Michael Eineder of the Remote Sensing Technology Institute of DLR.
The scene covers the area of St. Petersburg in Russia, with the scene center approximately at 59.95°N and
30.10°E. The city center is on the right of the scene, and the frozen Gulf of Finland in the Baltic Sea is on the
left. The scene size is approximately 26 km (range) by 40 km (azimuth), and represents just over one-half of the
X-SAR swath width. North is approximately up in the image. The radar illumination is from the south. Relevant
X-SAR parameters are given in Table 7.2.
Table 7.2: X-SAR Operating Parameters
Swath width: 50 km
Pulse duration, Tr: 40 µs
Range resolution, ρr: 16 m
Maximum differential RCM: 7 m
Figure 7.14: X-SAR scene of St. Petersburg, processed by the CSA algorithm
at DLR. (Courtesy of German Aerospace Center.)
7.7 Summary
The concept of implementing the RCMC data shift using chirp scaling is presented in this chapter. It is based on shifting the zero frequency position of the linear FM signal used in the transmitted pulse. The shift is implemented by multiplying the received data by a scaling function in the range Doppler domain. The multiplication is done before pulse compression, but the results of the shift are not observed until after the range compression is completed.
Table 7.3: Summary of CSA Processing Equations

Differential RCM:
RCM_diff(fη) = R0 [ 1/D(fη, Vr) − 1/D(fη_ref, Vr) ] − RCM_bulk(fη)

Scaled FM rate:
K_m D(fη_ref, Vr) / D(fη, Vr)

Scaling function:
S_sc(τ′, fη) = exp{ −jπ K_m [ D(fη_ref, Vr_ref) / D(fη, Vr_ref) − 1 ] (τ′)² }

Residual phase:
φ_res = (4π K_m / c²) [ 1 − D(fη, Vr_ref) / D(fη_ref, Vr_ref) ] [ R0 / D(fη, Vr) − R_ref / D(fη, Vr) ]²

Scaling function:
S_sc(τ, fη) = exp{ j 2π ∫₀^τ K_m Δτ(τ′, fη) dτ′ }

Scaling function (1):
S_sc(τ, fη) = exp{ jπ [ q0 τ′ + q1 (τ′)² ] }

Residual phase:
φ_res = (4/c²) · π K_m q1 / (π K_m + q1) · [ R0 / D(fη, Vr) − R_ref / D(fη, Vr_ref) + c q0 / (4 q1) ]² − π q0² / (4 q1)

Note: (1) This scaling function is the linear one that is assumed for the purposes of correcting the residual phase.
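The quadratic-phase scaling function of Table 7.3 can be evaluated numerically. The sketch below (Python with NumPy, not part of the book's software; the function names and the Km value used in any example are illustrative placeholders) computes the migration parameter D(fη, Vr) and the chirp scaling phase function.

```python
import numpy as np

def migration_factor(f_eta, Vr, f0, c=3e8):
    """Migration parameter D(f_eta, Vr) used throughout the CSA."""
    return np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))

def chirp_scaling_function(tau_prime, f_eta, Km, f_eta_ref, Vr_ref, f0, c=3e8):
    """Chirp scaling phase function S_sc(tau', f_eta) of Table 7.3:
    a quadratic phase whose curvature depends on azimuth frequency,
    shifting each trajectory by the differential RCM."""
    D = migration_factor(f_eta, Vr_ref, f0, c)
    D_ref = migration_factor(f_eta_ref, Vr_ref, f0, c)
    alpha = D_ref / D - 1.0   # scaling coefficient; zero at f_eta = f_eta_ref
    return np.exp(-1j * np.pi * Km * alpha * tau_prime ** 2)
```

At the reference azimuth frequency the coefficient vanishes, so the function is unity and no scaling is applied, as expected from the table.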
References
[1] R. K. Raney, H. Runge, R. Bamler, I. G. Cumming, and F. H. Wong. Precision SAR Processing Using Chirp Scaling. IEEE Trans. Geoscience and Remote Sensing, 32 (4), pp. 786-799, July 1994.
[2] A. Papoulis. Systems and Transforms with Applications in Optics. McGraw-Hill, New York, 1968.
[3] H. Runge and R. Bamler. A Novel High Precision SAR Focusing Algorithm Based On Chirp Scaling. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'92, pp. 372-375, Clear Lake, TX, May 1992.
[4] I. G. Cumming, F. H. Wong, and R. K. Raney. A SAR Processing Algorithm with No Interpolation. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'92, pp. 376-379, Clear Lake, TX, May 1992.
[5] R. Bamler and H. Runge. Method of Correcting Range Migration in Image Generation in Synthetic Aperture Radar. U.S. Patent No. 5,237,329. Patent Appl. No. 909,843, filed July 7, 1992, granted August 17, 1993. The patent is assigned to DLR. An earlier successful patent application was filed in Germany on July 8, 1991.
[6] R. K. Raney, I. G. Cumming, and F. H. Wong. Synthetic Aperture Radar Processor to Handle Large Squint with High Phase and Geometric Accuracy. U.S. Patent No. 5,179,383. Patent Appl. No. 729,641, filed July 15, 1991, granted January 12, 1993. The patent is assigned to the Canadian Space Agency.
[7] M. Y. Jin, F. Cheng, and M. Chen. Chirp Scaling Algorithms for SAR Processing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'93, Vol. 3, pp. 1169-1172, Tokyo, Japan, August 1993.
[8] Y. Huang and A. Moreira. Airborne SAR Processing Using the Chirp Scaling and a Time Domain Subaperture Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'93, Vol. 3, pp. 1182-1184, Tokyo, Japan, August 1993.
[9] F. Impagnatiello. A Precision Chirp Scaling SAR Processor Extension to Sub-Aperture Implementation on Massively Parallel Supercomputers. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'95, Vol. 3, pp. 1819-1821, Florence, Italy, July 1995.
[10] O. Loffeld, A. Hein, and F. Schneider. SAR Focusing: Scaled Inverse Fourier Transformation and Chirp Scaling. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'98, Vol. 2, pp. 630-632, Seattle, WA, July 1998.
[11] C. Ding, H. Peng, Y. Wu, and H. Jia. Large Beamwidth Spaceborne SAR Processing Using Chirp Scaling. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'99, Vol. 1, pp. 527-529, Hamburg, Germany, June 1999.
[12] F. H. Wong and T. S. Yeo. New Applications of Non-Linear Chirp Scaling in SAR Data Processing. IEEE Trans. on Geoscience and Remote Sensing, 39 (5), pp. 946-953, May 2001.
[13] D. W. Hawkins and P. T. Gough. An Accelerated Chirp Scaling Algorithm for Synthetic Aperture Imaging. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 1, pp. 471-473, Singapore, August 1997.
[14] G. W. Davidson, I. G. Cumming, and M. R. Ito. A Chirp Scaling Approach for Processing Squint Mode SAR Data. IEEE Trans. on Aerospace and Electronic Systems, 32 (1), pp. 121-133, January 1996.
[15] W. Hong, J. Mittermayer, and A. Moreira. High Squint Angle Processing of E-SAR Stripmap Data. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'00, pp. 449-552, Munich, Germany, May 2000.
[16] E. Gimeno and J. M. Lopez-Sanchez. Near-Field 2-D and 3-D Radar Imaging Using a Chirp Scaling Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'01, Vol. 1, pp. 354-356, Sydney, Australia, July 2001.
[17] A. Moreira, J. Mittermayer, and R. Scheiber. Extended Chirp Scaling Algorithm for Air and Spaceborne SAR Data Processing in Stripmap and ScanSAR Imaging Modes. IEEE Trans. on Geoscience and Remote Sensing, 34 (5), pp. 1123-1136, September 1996.
[18] J. Mittermayer, R. Scheiber, and A. Moreira. The Extended Chirp Scaling Algorithm for ScanSAR Data Processing. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'96, pp. 517-520, Konigswinter, Germany, March 1996.
[19] J. Mittermayer and A. Moreira. A Generic Formulation of the Extended Chirp Scaling Algorithm (ECS) for Phase Preserving ScanSAR and SpotSAR Processing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'00, Vol. 1, pp. 108-110, Honolulu, HI, July 2000.
[20] D. Fernandes, G. Waller, and J. R. Moreira. Registration of SAR Images Using the Chirp Scaling Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'96, Vol. 1, pp. 799-801, Lincoln, NE, July 1996.
[21] G. W. Davidson, F. H. Wong, and I. G. Cumming. The Effect of Pulse Phase Errors on the Chirp Scaling SAR Processing Algorithm. IEEE Trans. on Geoscience and Remote Sensing, 34 (2), pp. 471-478, March 1996.
[22] J. Mittermayer, A. Moreira, and R. Scheiber. Reduction of Phase Errors Arising from the Approximations in the Chirp Scaling Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'98, Vol. 2, pp. 1180-1182, Seattle, WA, July 1998.
[23] A. Moreira and Y. Huang. Airborne SAR Processing of Highly Squinted Data Using a Chirp Scaling Approach with Integrated Motion Compensation. IEEE Trans. Geoscience and Remote Sensing, 32 (5), pp. 1029-1040, September 1994.
[24] A. Gallon and F. Impagnatiello. Motion Compensation in Chirp Scaling SAR Processing Using Phase Gradient Autofocusing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'98, Vol. 2, pp. 633-635, Seattle, WA, July 1998.
[25] A. Moreira and R. Scheiber. Doppler Parameter Estimation Algorithms for SAR Processing with the Chirp Scaling Approach. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'94, Vol. 4, pp. 1977-1979, Pasadena, CA, August 1994.
[26] W. G. Carrara, R. S. Goodman, and R. M. Majewski. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Artech House, Norwood, MA, 1995.
[27] J. Mittermayer and A. Moreira. Spotlight SAR Processing Using the Extended Chirp Scaling Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 4, pp. 2021-2023, Singapore, August 1997.
[28] J. Mittermayer, A. Moreira, and O. Loffeld. Spotlight SAR Data Processing Using the Frequency Scaling Algorithm. IEEE Trans. Geoscience and Remote Sensing, 37 (5), pp. 2198-2214, September 1999.
[29] H. Breit, B. Schattler, and U. Steinbrecher. A High Precision Workstation-Based Chirp Scaling SAR Processor. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 1, pp. 465-467, Singapore, August 1997.
[30] A. Moreira, R. Scheiber, J. Mittermayer, and R. Spielbauer. Real-Time Implementation of the Extended Chirp Scaling Algorithm for Air and Spaceborne SAR Processing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'95, Vol. 3, pp. 2286-2288, Florence, Italy, July 1995.
[31] W. Hughes, K. Gault, and G. J. Princz. A Comparison of the Range-Doppler and Chirp Scaling Algorithms with Reference to RADARSAT. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'96, Vol. 2, pp. 1221-1223, Lincoln, NE, July 1996.
Chapter 8
The Omega-K Algorithm
8.1 Introduction
At this point in the book, two high-precision SAR processing algorithms have been presented, the range Doppler algorithm (RDA) and the chirp scaling algorithm (CSA). A comparable algorithm, the omega-K algorithm (ωKA), is described in this chapter. In order to set the stage for the ωKA, some relevant properties of the RDA and the CSA are given here.
RDA: The range Doppler algorithm, presented in Chapter 6, was the first digital processing algorithm developed
for satellite SAR data. It is still the most widely used algorithm because of its favorable tradeoff between
efficiency, accuracy, maturity, and ease of implementation. One of the key features of the RDA is how an
interpolator is used in the range Doppler domain to implement RCMC efficiently and accurately, in the face
of the range variations of range migration and Doppler centroid.
The range-azimuth coupling in the received data is a function of azimuth frequency and range time, with the azimuth frequency having a strong dependence and range time a weak one. The SRC is usually applied in the range frequency and azimuth time domain [see Figure 6.1(c)], in which the two dependencies are ignored. Even in the more accurate form of the RDA, in which SRC is implemented in the two-dimensional frequency domain, the range time dependence is not compensated [see Figure 6.1(b)]. This means that the range-azimuth coupling is not accurately compensated when the azimuth beamwidth is wide.
CSA: The chirp scaling algorithm, presented in Chapter 7, replaces the RCMC interpolator of the RDA with a
more accurate phase multiply. The CSA implements the SRC with a phase multiply in the two-dimensional
frequency domain, allowing an azimuth frequency dependence, but once more ignoring the range dependence.
The CSA assumes a specific form of the SAR signal in the range Doppler domain, (7.16). Reexamining its
derivation in Section 5.3, it is noted that the approximation in (5.33) has been used in the range IFFT to
transform the signal from the two-dimensional frequency domain back into the range Doppler domain. This
approximation may not be adequate for wide apertures or high squint angles.
To avoid these deficiencies, the ωKA uses a special operation in the two-dimensional frequency domain, which corrects the range dependence of the range-azimuth coupling, as well as the azimuth frequency dependence. In addition, an accurate form of the signal properties, derived in Section 5.3, can be used [i.e., the approximation in (5.33) is no longer needed]. This gives the ωKA the ability to process data acquired over wide azimuth apertures or high squint angles. However, the ωKA assumes that the effective radar velocity, Vr, is range invariant, which limits its ability to handle large range swaths rather than its ability to handle wide apertures.
Some comparisons of the ωKA with other processing algorithms can be found in [1-3]. A more detailed comparison is given in Chapter 11.
A high-level block diagram of two implementations of the ωKA is given in Figure 8.1. The most accurate implementation is given in Figure 8.1(a). Under certain conditions, an approximate form of the ωKA is sufficiently accurate, and is shown in Figure 8.1(b).
The accurate implementation consists of the following major steps.
1. A two-dimensional FFT is performed to transform the SAR signal data into the two-dimensional frequency
domain.
2. The first key focusing step is a reference function multiply. The reference function is computed for a
selected range, usually the midswath range, so that it compensates the phase at that range, including the
phase resulting from the frequency modulation in range, RCM, range-azimuth coupling, and the frequency
modulation in azimuth. After its application, a target at the reference range is correctly focused, but targets
away from that range are only partially focused.
3. The second key step, the Stolt interpolation, focuses the remainder of the targets using an interpolation in the range frequency direction. One can view the reference function multiply of Step 2 as bulk focusing and the Stolt interpolation as differential focusing.
4. A two-dimensional IFFT is performed to transform the data back to the time domain, that is, the SAR
image domain.
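The four steps above can be sketched in NumPy. This is an illustrative outline rather than the book's implementation: the frequency axes are assumed to be already fftshift-ordered (monotonically increasing), a simple linear interpolator stands in for a proper Stolt interpolation kernel, and all names (omega_k_accurate, R_ref, and so on) are placeholders.

```python
import numpy as np

def omega_k_accurate(data, f_tau, f_eta, f0, Vr, c, R_ref):
    """Sketch of the accurate omega-K form.
    data: baseband signal, azimuth (rows) x range (columns), time domain.
    f_tau, f_eta: increasing range/azimuth frequency axes (fftshift order)."""
    # Step 1: two-dimensional FFT into the 2-D frequency domain.
    S = np.fft.fftshift(np.fft.fft2(data))

    # Step 2: reference function multiply (bulk compression) at R_ref.
    F_tau, F_eta = np.meshgrid(f_tau, f_eta)
    root = np.sqrt((f0 + F_tau) ** 2 - (c * F_eta) ** 2 / (4 * Vr ** 2))
    S = S * np.exp(1j * 4 * np.pi * R_ref / c * root)

    # Step 3: Stolt mapping along the range frequency axis: the sample at
    # f_tau maps to f_tau' = root - f0; resample each azimuth-frequency
    # line back onto the uniform f_tau grid (differential compression).
    f_tau_new = root - f0
    S_stolt = np.empty_like(S)
    for i in range(S.shape[0]):
        # interpolate real and imaginary parts separately
        S_stolt[i] = (np.interp(f_tau, f_tau_new[i], S[i].real)
                      + 1j * np.interp(f_tau, f_tau_new[i], S[i].imag))

    # Step 4: two-dimensional IFFT back to the SAR image domain.
    return np.fft.ifft2(np.fft.ifftshift(S_stolt))
```

In a practical processor the interpolation kernel, windowing, and edge handling would all need more care; the sketch only mirrors the four-step structure.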
The approximate implementation, which removes the Stolt interpolation in favor of a simpler phase multiply, consists of the following steps.
1. A two-dimensional FFT and reference function multiply are performed in the same way as in the accurate implementation.
2. A range IFFT is performed to transform the data to the range Doppler domain.
[Block diagram: both forms begin in the time domain with a two-dimensional FFT, followed by a reference function multiply (bulk compression) in the two-dimensional frequency domain. The accurate form then applies the Stolt mapping along the range frequency axis (differential compression) and a two-dimensional IFFT to reach the compressed image in the SAR image domain. The approximate form instead applies a range IFFT, a differential azimuth matched filter multiply in the range Doppler domain, and an azimuth IFFT to reach the compressed image.]
Figure 8.1: The major operations in the (a) accurate and (b) approximate
forms of the omega-K algorithm.
3. A range dependent azimuth matched filter is applied to remove the residual azimuth modulation after the
reference function multiply, as in the CSA.
4. A final azimuth IFFT is performed to transform the compressed data back to the time domain.
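The approximate form can be sketched in the same style. Again this is illustrative only: the residual azimuth modulation removed in Step 3 is taken as the range-dependent term −4π(R0 − R_ref) f0 D(fη, Vr)/c that the bulk compression leaves behind, and the names and parameter values are placeholders.

```python
import numpy as np

def omega_k_approximate(data, f_tau, f_eta, f0, Vr, c, R_ref, slant_range):
    """Sketch of the approximate omega-K form: the Stolt interpolation is
    replaced by a range-dependent phase multiply in the range Doppler
    domain. slant_range holds R0 for each range gate."""
    # Step 1: 2-D FFT and reference function multiply (bulk compression).
    S = np.fft.fftshift(np.fft.fft2(data))
    F_tau, F_eta = np.meshgrid(f_tau, f_eta)
    root = np.sqrt((f0 + F_tau) ** 2 - (c * F_eta) ** 2 / (4 * Vr ** 2))
    S = S * np.exp(1j * 4 * np.pi * R_ref / c * root)

    # Step 2: range IFFT to the range Doppler domain.
    S_rd = np.fft.ifft(np.fft.ifftshift(S, axes=1), axis=1)

    # Step 3: range-dependent azimuth matched filter, conjugate of the
    # residual azimuth modulation -4*pi*(R0 - R_ref)*f0*D(f_eta,Vr)/c.
    D = np.sqrt(1.0 - (c * f_eta[:, None]) ** 2 / (4 * Vr ** 2 * f0 ** 2))
    dR = slant_range[None, :] - R_ref
    S_rd = S_rd * np.exp(1j * 4 * np.pi * dR * f0 * D / c)

    # Step 4: azimuth IFFT back to the image (time) domain.
    return np.fft.ifft(np.fft.ifftshift(S_rd, axes=0), axis=0)
```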
It can be seen from the steps above that the key processing operations in either implementation are
performed in the two-dimensional frequency domain. Because of this, one of the main advantages of the RDA is
not available, namely, that the variation of Doppler centroid frequency with range can be easily accommodated.
This problem also exists in the CSA, and methods of solving it are addressed in Section 7.5.2.
Unlike the CSA, the ωKA works equally well if the data have already been range compressed. As in other
algorithms, it is common practice to resample the compressed image data into ground range coordinates, or a
specific map grid, and an interpolator is normally used for this operation. Since this step is not related to SAR
focusing, it is not included in Figure 8.1.
Historical Note
The WKA has its roots in seismic signal processing. To gather seismic data, a set of geophones are placed on the
surface of the ground in a straight line. After a charge is detonated at a position along the line, the geophones
are used to receive the echoes from geological features under the surface (a linear FM vibration and pulse compression can also be used). The analogy with the SAR system is that each seismic geophone position can be viewed as the SAR platform location where a radar pulse is received. One difference from the seismic case is that
it takes the SAR platform a finite time to move from one position to the next. However, the movement of the
platform is not an issue, as the received echoes are tied to each individual pulse transmission in the SAR
processing.
Techniques developed for processing seismic data, using the acoustic wave equation, are presented in [4, 5]. Stolt developed an accurate frequency domain solution to the wave equation, using a step now called Stolt mapping [6]. In 1987, Hellsten and Anderson were the first in the SAR world to use Stolt mapping, although they did not appear to recognize the seismic work [7]. They used the omega-K approach for processing the wide-aperture Carabas data [8]. In 1987, Rocca and his colleagues recognized the seismic analogy, and took advantage of Stolt's method to solve the electromagnetic wave equation [9-11]. The key step in the ωKA is applying the Stolt interpolation to the range frequency variable. The range frequency axis is resampled or mapped to a new axis, so that only a linear phase remains in the two-dimensional spectrum in either direction. An approximation whereby the interpolation could be avoided was also developed [shown in Figure 8.1(b)] [2, 9, 12]. Munson takes a tomographic or back propagation approach to the same solution [13]. Soumekh [14] has also been a proponent of the wave equation approach.
The formulation of the algorithm was originally derived using the wave equation approach. The wave equation approach allows the ωKA to handle wide aperture imaging, and in seismics, the aperture can be as wide as 180°.
It is called the omega-K algorithm because the data are manipulated in the two-dimensional frequency domain. In this domain, one dimension is range angular frequency, ω, and the other dimension can be thought of as the azimuth wavenumber, k. The azimuth wavenumber has units of cycles per meter, a spatial form of the frequency variable. In this book, a different approach is taken, whereby the ωKA is derived from a signal processing viewpoint, to make the derivation compatible and comparable with the algorithms presented in other chapters. To this end, range and azimuth frequencies are used in the formulation in this book.
The algorithm has also been called the range migration algorithm in [12], as one of its distinct advantages lies in correcting the range varying RCM over wide apertures, provided that the constant-velocity assumption is valid. In this book, the term ωKA is retained, out of respect for the original papers.
Since its development, the ωKA has been applied to stripmap [9], spotlight [12, 15-17], and interferometric data processing [18]. It has also been applied to a hybrid mode between stripmap and spotlight [19].
Following this introduction, the reference function multiply is discussed in Section 8.2, and the concept of the
Stolt interpolation is explained in Section 8.3. Then, in Section 8.4 and Appendix 8A, different interpretations of
the Stolt interpolation are given to help explain this operation. The errors resulting from the constant-velocity
assumption are analyzed in Section 8.5.
In Section 8.6, an approximate form of the algorithm is given, whereby the Stolt interpolation is replaced by
a phase multiply for efficiency. Finally, the operation of the algorithm is illustrated with simple processing
examples in Section 8.7.
8.2 Reference Function Multiply
The first main focusing step of the ωKA, the reference function multiply (RFM), is applied in the two-dimensional frequency domain (see Figure 8.1). In this domain, the baseband uncompressed SAR signal is

S_2df(fτ, fη) = A Wr(fτ) Wa(fη − fη_c) exp{j θ_2df(fτ, fη)}   (8.1)

as derived in (5.24) and (5.26) of Section 5.3. The factor A combines the constant terms A0, A1, and A2 of (5.24). Assuming that the range pulse is an up chirp and the range equation is hyperbolic, (4.9), the phase term in (8.1) of a target at range, R0, is

θ_2df(fτ, fη) = −(4π R0 / c) √[ (f0 + fτ)² − c² fη² / (4 Vr²) ]   (8.2)
Note that the Doppler centroid frequency, fη_c, varies with range frequency, fτ, as it depends upon the radar wavelength. This means that the two-dimensional spectrum is skewed in azimuth, as illustrated in Figure 5A.1(d) and shown again later in this chapter.
Note that most of the variables in (8.2) are defined in the two-dimensional frequency domain, as needed for the RFM operation in this domain. However, the range and effective velocity variables, R0 and Vr, are defined in the range time domain, and their range variation cannot be accommodated in the range frequency domain. The best phase compensation that can be applied in the frequency domain is achieved by setting the range and effective radar velocity variables to their midrange or reference values. This makes the phase of the RFM filter equal to

θ_ref(fτ, fη) = +(4π R_ref / c) √[ (f0 + fτ)² − c² fη² / (4 Vr_ref²) ]   (8.3)
Using this RFM filter has the effect of canceling the phase at the reference range, which focuses the data correctly at that range. After the RFM filtering, the phase remaining in the two-dimensional signal spectrum is approximately

θ(fτ, fη) ≈ −(4π (R0 − R_ref) / c) √[ (f0 + fτ)² − c² fη² / (4 Vr²) ]   (8.4)
where the approximation is due to the assumption that Vr is independent of range. The consequences of this
assumption are addressed in the error analysis of Section 8.5.
Applying the filter (8.3) is called "bulk compression," as the phase at the reference range is used to correct
the phase over the whole data block. The residual phase is zero at the reference range because of the (Ro -
Rref) factor in (8.4), but a residual phase exists for targets at other ranges. It is necessary to correct the residual
phase in a subsequent operation in order to obtain precise focusing over the whole scene.
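The cancellation described here is easy to verify numerically. The sketch below (Python with NumPy; the function names are illustrative) evaluates the phases (8.2) and (8.3) and their sum, the residual phase of (8.4), which vanishes when R0 = R_ref.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def sqrt_term(f_tau, f_eta, f0, Vr):
    """The square-root factor common to (8.2)-(8.4)."""
    return np.sqrt((f0 + f_tau) ** 2 - (C * f_eta) ** 2 / (4.0 * Vr ** 2))

def residual_phase(f_tau, f_eta, R0, R_ref, f0, Vr):
    """Phase left after the RFM, eq. (8.4): the target phase (8.2) at R0
    plus the compensating reference phase (8.3) at R_ref
    (here with Vr_ref taken equal to Vr)."""
    theta_target = -4 * np.pi * R0 / C * sqrt_term(f_tau, f_eta, f0, Vr)
    theta_ref = +4 * np.pi * R_ref / C * sqrt_term(f_tau, f_eta, f0, Vr)
    return theta_target + theta_ref  # = -4*pi*(R0 - R_ref)/C * sqrt_term
```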
Discussion
It should be emphasized that the focusing is exact at the reference range only if the square root term in (8.2)
accurately represents the signal phase. The form of the phase in (8.2) arises from assuming that the range
equation is hyperbolic. This assumption is usually valid over wide apertures, so this property of the ωKA means that the focusing is accurate for larger apertures and bigger squint angles than it is in the RDA or the CSA. If
the hyperbolic range equation is not accurate, higher order terms can be included in the range equation, although
this makes the analytical derivation of the phase in the signal spectrum (8.2) difficult. If an analytical solution is
not available, the mapping for the Stolt interpolation can be derived numerically, as long as the assumption is
made that the residual phase is a linear function of (Ro - Rref), as in (8.4).
Apart from the bulk compression, the use of the RFM to provide phase compensation at the reference range has the important effect of moving the high frequency range modulation of (8.2) to baseband, since R0 − R_ref of (8.4) has a zero mean value. This means the interpolator needed in the next step does not have to contend with the rapid phase modulation that exists at the radar center frequency, f0, so that a simple baseband interpolator can be used.
Note that the spectrum is derived using the assumption that the azimuth location of the target is at η = 0 at the point of closest approach, for simplicity. If this is not the case, a residual term that is linear in fη will appear. This term is left out of subsequent equations, but is included wherever necessary to make the phase contours more realistic.
8.3 Stolt Interpolation
After the reference function multiply, targets at the reference range are properly focused. The need now is to focus the targets at other ranges. This is done by a mapping or warping of the range frequency axis, using an interpolator developed by Stolt [6]. The mapping changes the phase content of the data in the two-dimensional frequency domain. The mapping serves to cancel the residual quadratic and higher order phase modulations of (8.4), since it adjusts the azimuth phase as well as the range phase.
The Stolt interpolation performs the differential RCMC, differential SRC, and differential azimuth compression.
The interpolation implements a mapping, which can be viewed as a change of variables in the range frequency
domain. It is described mathematically and graphically in this section, and physical interpretations are given in
Section 8.4.
The Stolt interpolation has been called by different names, such as Stolt migration, Stolt mapping, Stolt
transformation, and Stolt change of variables. In this book, the terms "Stolt mapping" and "Stolt interpolation"
are used interchangeably; the former term has the connotation of a geometric transformation, while the latter term suggests a numerical process.
The phase that remains after the RFM in (8.4) represents the residual RCM, the residual range-azimuth coupling, and the residual azimuth modulation. Because of the square root factor of (8.4), the phase is nonlinear in fτ. If an inverse range DFT were applied at this point, the target would be unfocused when (R0 − R_ref) ≠ 0. An elegant way to avoid this is to modify the range frequency axis, by replacing the square root factor of (8.4) with the shifted and scaled range frequency variable, f0 + fτ′, so that

√[ (f0 + fτ)² − c² fη² / (4 Vr²) ] = f0 + fτ′   (8.5)

This substitution is in effect a mapping of the original range frequency variable, fτ, into a new range frequency variable, fτ′. The form of the mapping is illustrated in Figure 8.2 for the typical C-band satellite parameters given in Table 4.1 (Vr = 7100 m/s, f0 = 5.3 GHz, and a range bandwidth of 20 MHz). The maximum squint angle considered is 9°, corresponding to |fη| = 40 kHz, about twice as high as normally found in practice.
At the reference range, the (R0 − R_ref) coefficient in (8.4) reduces the shift or scaling to zero. At other ranges, the effect is proportional to the distance of the target from the reference range.
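The mapping (8.5) is a one-line computation. In the sketch below (illustrative Python; the function name is a placeholder), stolt_map returns fτ′ for given fτ and fη. With the C-band parameters quoted above, it gives no shift at fη = 0 and a shift of several tens of MHz at |fη| = 40 kHz.

```python
import numpy as np

def stolt_map(f_tau, f_eta, f0, Vr, c=3e8):
    """New range frequency f_tau' from eq. (8.5):
    f0 + f_tau' = sqrt((f0 + f_tau)^2 - c^2 f_eta^2 / (4 Vr^2))."""
    return np.sqrt((f0 + f_tau) ** 2 - (c * f_eta) ** 2 / (4 * Vr ** 2)) - f0
```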
To illustrate the azimuth dependence of the amount of shift, the fτ′ variable of Figure 8.2 is plotted against fη in Figure 8.3 for five values of fτ. These curves represent vertical slices taken from Figure 8.2(a). These curves are essentially parabolic, as noted in (8.15). The parabola is plotted with a dashed line in Figure 8.3 for the fτ = 0 case. The dashed line is hardly visible, as it nearly coincides with the solid line (except at the right end), showing that the fη dependence is essentially quadratic for moderate squint.
[Figure 8.3: Frequency shift fτ′ (MHz) versus azimuth frequency fη (kHz, 0 to 35). fτ is varied linearly in steps of 5.0 MHz, from fτ = +10 MHz (top curve) downward; a dashed parabola is plotted over the fτ = 0 case.]
By differentiating (8.5) with respect to fτ, the slope of the mapping is found to be

dfτ′/dfτ = 1 / √[ 1 − c² fη² / (4 Vr² (f0 + fτ)²) ]   (8.6)

and is drawn in Figure 8.4. It is seen that the slope is unity for fη = 0, and increases approximately quadratically with fη. The nonunity slope away from fη = 0 represents the differential RCMC that is performed by the Stolt interpolation.
As (f0 + fτ) ≫ c|fη|/(2Vr) in (8.6), the slope is effectively independent of fτ at low squint angles. However, as |fη| increases, the slope takes on a small variation with fτ (a small change of slope can just be seen in the top curve in Figure 8.4). The variation of the slope with fτ represents the range-azimuth coupling, and is the means whereby the Stolt interpolation implements differential SRC.
When the slope of the mapping is different from unity, a change in energy occurs during the compression
process, because the independent variable in the matched filter integration has changed scale (i.e., the region of
support of the range matched filter has been scaled). Fortunately, the change in energy is small and is usually
ignored in the processing. For example, at |fη| = 35 kHz (for the C-band satellite example and at a squint angle of about 8°), the change in scale is only about 1%.
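The 1% figure can be checked directly from (8.6). The sketch below (illustrative Python) evaluates the slope of the mapping for the C-band example parameters.

```python
import numpy as np

def stolt_slope(f_tau, f_eta, f0, Vr, c=3e8):
    """Slope of the Stolt mapping, eq. (8.6)."""
    return 1.0 / np.sqrt(1.0 - (c * f_eta) ** 2
                         / (4.0 * Vr ** 2 * (f0 + f_tau) ** 2))
```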
After the mapping, the phase function of the partially compressed signal of (8.4) becomes

θ(fτ′, fη) = −(4π (R0 − R_ref) / c) (f0 + fτ′)   (8.7)

which is linear in the new range frequency variable, fτ′. This means the mapping performs the differential compression by removing the phase terms higher than the linear term. The remaining linear term represented in (8.7) defines the range position, R0, of the target. After a two-dimensional inverse Fourier transform, the target will be well focused and correctly registered. The range registration is at R0, where the received target echo has zero azimuth frequency, and the azimuth registration is at η = 0, also where the azimuth frequency is zero (recall the definition of the origin of η in Section 4.3.1).
Note that this correct compression assumes that Vr is independent of range. For typical satellite SAR parameters, this assumption is satisfactory. If really necessary, the variation of Vr can be accommodated using a final differential azimuth compression in the range Doppler domain (before the final azimuth IFFT), but using the normal ωKA with narrower invariance regions is probably the simpler way to go.
[Figure 8.4 plots the slope of the mapping against range frequency fτ (−10 to +10 MHz) for fη varied in steps of 6.7 kHz, from fη = 0 (slope near unity) up to fη = 40 kHz (slope near 1.016).]
Figure 8.4: The slope of the Stolt mapping, which represents the change of scale along the range frequency axis.
Several interpretations of the Stolt mapping exist in the seismic processing literature [20, 21]. In this section, several alternative interpretations are given, taken from signal processing viewpoints. The first one is derived from the Fourier transform shift/modulation property given in Section 2.3.3. The second interpretation uses the skew of the two-dimensional spectrum, derived in the appendix of Chapter 5. Another interpretation is based on an imaging geometry perspective. First, the stage is set by examining the components of the Stolt mapping.
An interpretation can be given to the components of the phase function (8.4) that remains after the RFM. Following the development leading to (5.32), the function can be expanded as a power series up to quadratic terms in fτ and fη

θ(fτ, fη) ≈ −(4π (R0 − R_ref) / c) [ f0 D(fη, Vr) + fτ / D(fη, Vr) − c² fη² fτ² / (8 Vr² f0³ D³(fη, Vr)) ]   (8.8)

where D(fη, Vr) is the migration parameter introduced in (5.30) of Section 5.3

D(fη, Vr) = √[ 1 − c² fη² / (4 Vr² f0²) ]   (8.9)
There are three terms in the square brackets of (8.8). The first term represents the residual azimuth modulation at a range, R0, away from the reference range, R_ref. The second term represents the residual RCM, which is a linear function of the range distance away from R_ref. The third term represents the residual range-azimuth coupling to be corrected by SRC.
The important point about the expansion (8.8) is that it separates these major components of the Stolt
mapping. In (8.4), the components are all mixed together in the square root term, but, after the expansion, the
biggest part of each component is isolated on the right hand side of (8.8). Equation (8.8) is useful for geometric
interpretation of some processing steps in this section, and for error analysis in Section 8.5.
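The accuracy of the expansion (8.8) can be checked numerically against the exact square root of (8.4). The sketch below (illustrative Python; "bracket" here means the quantity inside the square brackets of (8.8), without the −4π(R0 − R_ref)/c factor) compares the two for representative parameters.

```python
import numpy as np

def D(f_eta, Vr, f0, c=3e8):
    """Migration parameter, eq. (8.9)."""
    return np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))

def exact_bracket(f_tau, f_eta, f0, Vr, c=3e8):
    """Square-root factor of (8.4)."""
    return np.sqrt((f0 + f_tau) ** 2 - (c * f_eta) ** 2 / (4.0 * Vr ** 2))

def expanded_bracket(f_tau, f_eta, f0, Vr, c=3e8):
    """Three terms of (8.8): azimuth modulation, RCM, SRC coupling."""
    d = D(f_eta, Vr, f0, c)
    return (f0 * d                                 # azimuth modulation
            + f_tau / d                            # residual RCM (linear in f_tau)
            - c ** 2 * f_eta ** 2 * f_tau ** 2
              / (8.0 * Vr ** 2 * f0 ** 3 * d ** 3))  # range-azimuth coupling
```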
8.4.2 Interpretation Via Fourier Transform Properties
One may be curious to know why a range frequency mapping performs the residual RCMC, SRC, and azimuth compression simultaneously. For ease of explanation, a low squint case is assumed, in which the SRC can be ignored. The following discussion can be generalized to include the SRC.
Beginning with the expanded form of the phase after the RFM, (8.8), and ignoring the range-azimuth coupling represented by the third term, the phase of a point target after RFM is approximately

θ(fτ, fη) ≈ −2π Δτ [ f0 D(fη, Vr) + fτ / D(fη, Vr) ]   (8.10)

where Δτ = 2(R0 − R_ref)/c is the range "time" of the target measured from the reference range.
The phase (8.10) is portrayed in Figure 8.5, where the real part of the signal from a point target away from
the reference range is "imaged" in Figure 8.5(a). The phase is dominated by two features. In range frequency, the
phase is linear, that is, the horizontal slices shown in Figure 8.5(b) are sine waves, but the frequency of the sine
waves varies with azimuth frequency. This feature is represented by the fτ/D(fη, Vr) term in (8.10). This term is linear in fτ with slope 1/D(fη, Vr). The frequency of the signal (i.e., the frequency of the "range frequency" signal) represents the range position of the target, which becomes evident when the range inverse DFT is taken. The result after the range IDFT is shown as the wider line in Figure 8.5(c), where the residual RCM is clearly seen as a function of azimuth frequency.
The second feature of the phase function of (8.10) is that when a slice is taken vertically in Figure 8.5(a),
along the azimuth frequency axis, a nonlinear phase is observed, as in Figure 8.5(c). The nonlinear component
represents modulation that is a result of the residual RCM and azimuth focusing.
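These two features can be sketched numerically. In the following, all parameter values are illustrative assumptions (C-band-like numbers, not taken from the book's tables): the migration parameter D(f_η, Vr) of (8.9) is evaluated across azimuth frequency, and the f_τ/D term of (8.10) gives the residual RCM in range time.

```python
import numpy as np

# Sketch of the two features of (8.10): the slope 1/D(f_eta, Vr) of the
# linear range-frequency phase, and the residual RCM delta_tau/D it implies.
# All parameter values are illustrative, not from the book's tables.
c = 3.0e8               # speed of light, m/s
f0 = 5.3e9              # carrier frequency, Hz (C-band, assumed)
Vr = 7100.0             # effective radar velocity, m/s (assumed)
R0_minus_Rref = 20e3    # target offset from the reference range, m (assumed)

f_eta = np.linspace(-1000.0, 1000.0, 5)    # azimuth frequencies, Hz

# D(f_eta, Vr) = sqrt(1 - c^2 f_eta^2 / (4 Vr^2 f0^2)), the migration parameter
D = np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))

delta_tau = 2.0 * R0_minus_Rref / c        # range "time" from the reference range
residual_rcm_time = delta_tau / D          # the f_tau/D term: RCM vs. f_eta

print(D)
print(residual_rcm_time - delta_tau)       # grows with |f_eta| (quadratically)
```

Since D is slightly below unity away from f_η = 0, the target's apparent range time Δτ/D increases quadratically with azimuth frequency, which is the residual RCM seen in Figure 8.5(c).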
To understand the Stolt mapping, the quantity inside the large brackets of (8.10) can be interpreted as a new
range frequency variable, f0 + f_τ',

f0 + f_τ' = f0 D(f_η, Vr) + f_τ / D(f_η, Vr)   (8.11)

as in (8.5), or
[Figure 8.5: illustration of the Stolt mapping: (a) signal after RFM; (b) range frequency slices; (c) after range IDFT; (d) scaled data; (e) range frequency slices; (f) after range IDFT; (g-i) after the range frequency shift. Axes are range frequency, range time, and azimuth frequency.]
f_τ' = −f0 [1 − D(f_η, Vr)] + f_τ / D(f_η, Vr)   (8.12)

where f_τ' is the mapped range frequency to be compared with f_τ. The mapping is performed by a single range
frequency interpolation operation. For the purposes of illustration, however, the mapping will be considered in
two parts: differential azimuth compression and differential RCMC, corresponding to the two terms in (8.12).
The first term in (8.12) is independent of range frequency and hence does not affect the RCMC. Let the residual
RCM, which is represented by the second term, be considered first. This term is linear in f_τ, with a slope of
1/D(f_η, Vr). The slope starts at unity when f_η = 0, and has a small quadratic increase with azimuth frequency
|f_η|. Thus, the RCMC part of the mapping can be described by

f_τ'' = f_τ / D(f_η, Vr) ≈ f_τ [ 1 + c² f_η² / (8 Vr² f0²) ]   (8.13)

With this substitution, the phase of (8.10) becomes

θ(f_τ'', f_η) ≈ −2π Δτ f_τ'' − 2π Δτ f0 D(f_η, Vr)   (8.14)

so that the waveform is a sine wave in the new range frequency variable. The frequency of the waveform along
each range frequency line is the same at each azimuth frequency, but the phase of the waveform does vary with
azimuth frequency through the last term in (8.14).
This scaling of the range frequency axis is illustrated in the second row of Figure 8.5, where constant
frequency waveforms can be seen along any horizontal slice of Figure 8.5(d), such as those drawn in Figure
8.5(e). Note that the scaling operation stretches the axis, because D is always less than unity. In this simple
model, the stretching is constant along the range frequency axis and can be implemented by a simple increase of
the sampling rate along this axis.
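The RCMC part of the mapping can be sketched as a per-line resampling of the range frequency axis. In this sketch the test signal and the D values are hypothetical, and np.interp stands in for the higher-quality interpolation kernel a real processor would use:

```python
import numpy as np

# Sketch: stretch the range frequency axis by 1/D(f_eta, Vr) at each
# azimuth frequency line, as in (8.13). Signal and D values are illustrative.
n_rg = 256
f_tau = np.arange(n_rg) - n_rg // 2           # range frequency bins
delta_tau = 0.05                               # target offset in range "time" units
D_per_line = np.array([1.0, 0.95, 0.9])        # assumed D(f_eta, Vr) per line

# Each line has phase slope -2*pi*delta_tau/D, as in the f_tau/D term of (8.10)
signal = np.exp(-2j * np.pi * delta_tau * f_tau[None, :] / D_per_line[:, None])

mapped = np.empty_like(signal)
for i, D in enumerate(D_per_line):
    # new variable f_tau'' = f_tau / D, so sample the old line at f_tau * D
    src = f_tau * D
    mapped[i] = np.interp(src, f_tau, signal[i].real) \
                + 1j * np.interp(src, f_tau, signal[i].imag)

# After mapping, every line is (nearly) the same sine wave in f_tau'':
# the adjacent-bin phase step at the band center is the same on each line.
slope = np.angle(mapped[:, n_rg // 2 + 1] * np.conj(mapped[:, n_rg // 2]))
print(slope)
```

Before the mapping the phase slopes differ by the factor 1/D from line to line; after the mapping they agree to within the (linear) interpolation error.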
The effect of the constant frequency is seen in Figure 8.5(f), where the inverse range Fourier transform
registers the target to its proper range position, the same for each azimuth frequency. In this way, the
substitution of variables achieves the differential RCMC. The registration is explained mathematically by the
Fourier transform modulation/shift property

G(f_τ) exp(−j2π f_τ Δτ)  ⇔  g(τ − Δτ)
Figure 8.5(f) also shows a vertical slice that reveals the residual azimuth modulation. It is essentially quadratic,
agreeing with Figure 8.3.
When the range-azimuth coupling is also considered [the third term of (8.8)], an additional term involving f_τ² is
included in (8.14), and the differential SRC would be apparent. This addition does not affect the discussion of
differential RCMC above.
To consider the residual azimuth modulation, note that the first term in (8.12) represents a constant shift of the
range frequency variable by the amount

f_τshift = f0 [D(f_η, Vr) − 1] ≈ −c² f_η² / (8 Vr² f0)   (8.15)
Combining (8.12), (8.13), and (8.15), the mapped frequency variable, f_τ', consists of a shift and a scaling

f_τ' = f_τshift + f_τ''   (8.16)
In the example of Figure 8.5, the shift is implemented by resampling the phase function with the appropriate
shift. The result is shown in Figure 8.5(g), where it is seen that the phase is now linear in both range frequency
and azimuth frequency. The horizontal slices shown in Figure 8.5(h) show that the frequency is still the same as
in Figure 8.5(e), but Figure 8.5(i) shows that the azimuth phase is now linear with azimuth frequency, so that
the quadratic component of Figure 8.5(f) has been removed by the shift. This means the residual azimuth
compression has been performed, and the remaining linear phase term represents the azimuth position of the
target (the linear azimuth phase term has been omitted from the equations of this section for simplicity).
If the residual RCM and range-azimuth coupling are small enough to be ignored, the differential focusing is
achieved with the shift (8.15) alone. This is the basis of an approximate version of the WKA, to be discussed in
Section 8.6. The shift can be implemented by a phase multiply in the range Doppler domain, which is much
simpler than implementing the full interpolation.
The effect of the shift can also be seen through the Fourier transform shift/modulation property,
whereby the range IDFT changes the range frequency shift, f_τshift, into a phase that varies with azimuth
frequency. Substituting (8.15) for f_τshift, the correcting phase function is

θ_corr(Δτ, f_η) = 2π Δτ c² f_η² / (8 Vr² f0)   (8.17)

in the range Doppler domain. It cancels the quadratic part of the residual azimuth modulation, thus performing
the differential azimuth compression. This is the correct form of the differential azimuth compression matched
filter in the range Doppler domain.
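A small sketch of this cancellation, using assumed C-band-like parameters (not the book's tables): the quadratic part of the residual azimuth phase −2πΔτf0D(f_η, Vr) is exactly the θ_corr of (8.17), so removing it leaves only a constant plus a tiny quartic term.

```python
import numpy as np

# Sketch: the correcting phase of (8.17) removes the quadratic part of the
# residual azimuth modulation -2*pi*delta_tau*f0*D(f_eta, Vr).
# Parameters are illustrative (C-band-like), not from the book's tables.
c = 3.0e8
f0 = 5.3e9              # carrier frequency, Hz (assumed)
Vr = 7100.0             # effective radar velocity, m/s (assumed)
R0_minus_Rref = 20e3    # offset from the reference range, m (assumed)
delta_tau = 2.0 * R0_minus_Rref / c

f_eta = np.linspace(-1200.0, 1200.0, 201)            # azimuth frequencies, Hz
D = np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))

residual = -2.0 * np.pi * delta_tau * f0 * D         # residual azimuth phase
theta_corr = 2.0 * np.pi * delta_tau * c ** 2 * f_eta ** 2 / (8.0 * Vr ** 2 * f0)

# Removing theta_corr leaves only a constant plus a tiny quartic term
after = residual - theta_corr + 2.0 * np.pi * delta_tau * f0
print(np.max(np.abs(after)))   # small compared to the tens of radians removed
```

For these numbers the correction removes on the order of 50 rad of quadratic phase at the band edge, while the quartic leftover is below a milliradian.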
8.4.3 Interpretation Via the Shape of the Spectrum
While the phase was examined in Section 8.4.2, it is also interesting to look at the effect of the Stolt mapping
on the region of support of the two-dimensional spectrum of the data. The Stolt mapping results in a skew of
the spectrum, and this leads to another interpretation via the Fourier transform properties.
The skew is actually a quadratic function of azimuth frequency via the D(f_η, Vr) term in (8.13), but can
appear nearly linear when the linear term of the range equation dominates over the quadratic term. This happens
when the Doppler centroid is much larger than the Doppler bandwidth. To see the effect of the linear term
compared with the quadratic term, it is useful to expand D(f_η, Vr) in (8.9) about the Doppler centroid, f_ηc

D(f_η, Vr) ≈ D(f_ηc, Vr) − [c² f_ηc / (4 Vr² f0² D(f_ηc, Vr))] (f_η − f_ηc)
            − [c² / (8 Vr² f0² D³(f_ηc, Vr))] (f_η − f_ηc)²   (8.18)

When f_ηc = 0, the middle or linear term of (8.18) vanishes, and the quadratic form used in (8.15) is obtained.
But when significant squint is present, the linear term can dominate over the quadratic term. Substituting (8.18)
for D(f_η, Vr) in (8.15), the effect of the linear term in f_τshift on the skew becomes apparent. This effect is
illustrated in Figure 8.6 for the zero and high squint cases, in the top and bottom rows, respectively.
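The accuracy of this expansion is easy to check numerically. The centroid and parameter values below are assumptions chosen for illustration of a squinted case:

```python
import numpy as np

# Numeric check of the Taylor expansion (8.18) of D(f_eta, Vr) about the
# Doppler centroid f_etac (illustrative squinted-case parameters).
c = 3.0e8
f0 = 5.3e9              # carrier frequency, Hz (assumed)
Vr = 7100.0             # effective radar velocity, m/s (assumed)
f_etac = 2000.0         # Doppler centroid, Hz (assumed, squinted case)

def D(f_eta):
    return np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))

f_eta = f_etac + np.linspace(-500.0, 500.0, 11)
Dc = D(f_etac)
lin = -(c ** 2 * f_etac) / (4.0 * Vr ** 2 * f0 ** 2 * Dc)       # linear coeff.
quad = -(c ** 2) / (8.0 * Vr ** 2 * f0 ** 2 * Dc ** 3)          # quadratic coeff.
approx = Dc + lin * (f_eta - f_etac) + quad * (f_eta - f_etac) ** 2

err = np.max(np.abs(approx - D(f_eta)))
print(err)   # the quadratic expansion is extremely accurate here
```

With a nonzero centroid the linear coefficient is orders of magnitude larger than the quadratic one over a typical Doppler bandwidth, which is why the skew appears nearly linear in the squinted case.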
For the case of no squint, Figure 8.6(a) shows the region of support of its signal spectrum after the RFM,
with the phase contours superimposed. The slightly curved phase contours indicate that the target is not properly
focused. The quadratic component is generally very small, compared to the range bandwidth, but is exaggerated
here to illustrate the differential azimuth compression effect of a target away from the reference range.
Figure 8.6(b) shows the shape of the spectrum after the Stolt mapping (i.e., after the interpolation of the
range frequency variable). The phase contours are now equally spaced and parallel straight lines. If either a
vertical profile (along azimuth frequency) or a horizontal profile (along range frequency) is extracted, the phase
will be linear, as given in (8.7). The compressed target is shown in Figure 8.6(c).
[Figure 8.6 panels: two-dimensional spectra with phase contours; axes are range frequency (cells), azimuth frequency (cells), and range (samples × 8).]
Figure 8.6: Two-dimensional spectra of a target and the impulse response after
compression. The top row (a-c) is the zero squint case, while the bottom row
(d- f) illustrates the skew of the spectrum when significant squint is present.
The solid lines in the two leftmost columns represent phase contours.
In the top row of Figure 8.6, the Stolt mapping does not skew the spectrum; only a small quadratic
mapping is observed. When the squint is significant, the Stolt mapping introduces a skew in the spectrum as it
performs the differential focusing. In Figure 8.6(d), the range frequency span is independent of azimuth frequency,
but the Stolt mapping of Figure 8.6(e) produces a noticeable skew in the spectrum given by the linear term of
(8.18). This also results in a skew of the sidelobe axes, as seen in Figure 8.6(f). The sidelobes are parallel with
the spectrum borders in Figure 8.6(f), as in the Fourier transform pairs of Figure 2.2.
8.4.4 Interpretation Via Imaging Geometry
Figure 8.7 shows a squinted imaging geometry with the wavefronts of the radar pulse traveling at an angle, θ_r,
with respect to the range axis (the general squint angle used in Chapter 4). The range axis is assumed to be
orthogonal to the azimuth axis, and the two axes form the slant range plane. Two wavefronts are shown in the
figure, representing two consecutive maxima of the electric field vector at one instant in time. The wavefronts are
spherical, but, because only a small portion is shown, they appear linear in the figure.
[Figure 8.7: squinted wavefront geometry, showing the azimuth axis, the range axis (orthogonal to azimuth), and the effective wavelength λ_r.]
Figure 8.7: Illustrating the effect of squint on the wavelengths observed along
the range and azimuth axes.
The apparent wavelength of the radar signal can be observed along different directions. If the transmission
frequency is f0, the wavelength along the direction of propagation is

λ = c / f0   (8.19)

From the geometry of Figure 8.7, the wavelength observed along the azimuth direction is

λ_η = λ / sin θ_r   (8.20)

By the same argument, the wavelength, λ_r, observed along the range axis is

λ_r = λ / cos θ_r   (8.21)

Raney calls λ_r the effective wavelength [22]. The directions of λ_η and λ_r then form a pair of orthogonal axes,
and, from the right angle triangle rule, it can be shown that

1/λ² = 1/λ_η² + 1/λ_r²   (8.22)
Let f0' be the frequency observed along the range axis, so that λ_r = c/f0'. This is the axis along which the SAR
image should be generated, since this axis is perpendicular to azimuth. Noting that λ_η = 2 Vr / f_η, as found from
(4.34) and (8.20), and substituting for λ, λ_η, and λ_r into (8.22), the following result can be obtained

(f0')² = f0² − c² f_η² / (4 Vr²)   (8.23)
In a SAR case, where the transmitted pulse occupies a certain frequency span, f0 in the above equation is
replaced by f0 + f_τ, and f0' is replaced by f0 + f_τ'. Then the result is the same as the Stolt mapping (8.5). This
shows that the Stolt mapping is replacing frequencies measured along the direction of propagation by frequencies
observed along the range axis (perpendicular to azimuth).
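The wavelength relations above can be checked numerically. This sketch uses an assumed 5° squint angle and C-band carrier, and verifies that (8.22) holds and that (8.23) reproduces the effective-wavelength frequency c/λ_r:

```python
import numpy as np

# Numeric check that the orthogonal-axes wavelength relation (8.22) gives
# (8.23), the geometric form of the Stolt mapping. Squint and frequencies
# are illustrative assumptions.
c = 3.0e8
f0 = 5.3e9                        # carrier frequency, Hz (assumed)
Vr = 7100.0                       # effective radar velocity, m/s (assumed)
theta_r = np.deg2rad(5.0)         # squint angle (assumed)

lam = c / f0                      # (8.19) wavelength along propagation
lam_eta = lam / np.sin(theta_r)   # (8.20) wavelength along azimuth
lam_r = lam / np.cos(theta_r)     # (8.21) effective wavelength along range

# Right angle triangle rule (8.22): 1/lam^2 = 1/lam_eta^2 + 1/lam_r^2
lhs = 1.0 / lam ** 2
rhs = 1.0 / lam_eta ** 2 + 1.0 / lam_r ** 2
print(lhs, rhs)

# (8.23): with f_eta = 2*Vr/lam_eta, f0'^2 = f0^2 - c^2 f_eta^2 / (4 Vr^2)
f_eta = 2.0 * Vr / lam_eta
f0_prime = np.sqrt(f0 ** 2 - c ** 2 * f_eta ** 2 / (4.0 * Vr ** 2))
print(f0_prime, c / lam_r)        # the two agree: f0' = f0 * cos(theta_r)
```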
It is interesting to note that the phase, θ_2dc(f_τ, f_η), in (8.2) can be derived simply from the effective
wavelength [i.e., θ_2dc(f_τ, f_η) = −4π R0/λ_r], instead of from the lengthy derivation using the POSP, detailed in
Section 5.3. The POSP is used in the derivation of Section 5.3 because it conforms to the signal processing point
of view of this book.
A parallel geometric interpretation is given in Appendix 8A. It is described in the wavenumber domain, as
given by the original authors [9-11].
8.5 The Effect of the Constant Velocity Assumption
Assuming that the approximate version of the algorithm shown in Figure 8.1(b) is not used, the only significant
approximation in the WKA is that of a constant effective radar velocity, Vr. For an airborne case, this
approximation poses no problem, since the effective velocity is quite constant. The approximation can introduce
significant errors in the satellite case, where Vr can vary by 0.15% over a slant range swath of 50 km. The errors
caused by this approximation are analyzed in this section.
The Stolt interpolation can be viewed as a compensation that removes the square root term of θ_RFM(f_τ, f_η)
in (8.4). By expanding the square root, as in (8.8), the remaining RCM, range-azimuth coupling, and azimuth
modulation terms are explicitly shown. It is convenient to use these expanded terms in the error analysis in order
to examine what happens when an incorrect radar velocity is used.
The dominant range migration component is the linear RCM. Over the exposure time, Ta, the linear migration
obtained from (5.58) is

RCM_lin(η) = −Vr sin θ_r,c Ta   (8.24)

Hence, a velocity error caused by the range invariance assumption introduces a migration correction error of

ΔRCM_lin(η) = −ΔVr sin θ_r,c Ta   (8.25)
As shown in Figure 6.11, if the RCMC error is kept below one-half of a range cell, the resolution
broadening remains below 2%.
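As a numeric sketch of this error budget, with illustrative satellite numbers (the velocity, squint, exposure time, and range cell size below are assumptions, not the values of the book's Table 4.1):

```python
import numpy as np

# Sketch of the migration correction error (8.25) caused by a velocity error
# under the range invariance assumption. All numbers are illustrative.
Vr = 7100.0                     # effective velocity, m/s (assumed)
dVr = 0.0015 * Vr               # 0.15% variation over the swath, as in the text
theta_rc = np.deg2rad(4.0)      # squint angle at beam center (assumed)
Ta = 0.8                        # exposure time, s (assumed)

rcm_lin = -Vr * np.sin(theta_rc) * Ta     # (8.24) linear RCM over the exposure
d_rcm = -dVr * np.sin(theta_rc) * Ta      # (8.25) its error from delta-Vr
print(rcm_lin, d_rcm)                     # error is 0.15% of the linear RCM

# Keep |d_rcm| below half a range cell for < 2% broadening (Figure 6.11)
range_cell = 7.5                # range cell size, m (assumed)
print(abs(d_rcm) < 0.5 * range_cell)
```

With these numbers the migration correction error is under a meter, consistent with the sub-meter RCMC error quoted for the C-band example later in this section.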
The purpose of SRC is to correct the cross coupling between the range and azimuth variables. The phase term in
the cross coupling is given by (5.34). In that equation, D³(f_η, Vr) can be approximated by cos³ θ_r,c, according
to the geometric interpretation of D(f_η, Vr), as explained in Section 5.3.3. Then, (5.34) is written as

θ_cc = −π c R0 f_τ² f_η² / (2 Vr² f0³ cos³ θ_r,c)   (8.26)

In this case, a velocity error, ΔVr, introduces a quadratic phase error (QPE) in the SRC filter of

θ_src,qpe = π c R0 f_η² (|Kr| Tr)² ΔVr / (4 Vr³ f0³ cos³ θ_r,c)   (8.27)

where |Kr| Tr is the range bandwidth. It is shown in Figure 3.14(a) that the QPE should be kept to about 0.5π
for the resolution broadening to be small, although phase error considerations may require a somewhat smaller
QPE, such as 0.1π (see Appendix 3B).
Hence, a velocity error, ΔVr, introduces a QPE in the azimuth matched filter of

θ_am,qpe = π (dKa/dVr) ΔVr η²   (8.30)

Again, for the azimuth IRW broadening to be small, the QPE has to be kept to within 0.5π (or 0.1π if phase
is important).
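A sketch of the azimuth QPE of (8.30) follows, under the assumption that the azimuth FM rate Ka is proportional to Vr² (so that dKa/dVr = 2 Ka / Vr). The low squint form of Ka and all parameter values are illustrative, not those of Table 4.1; whether the resulting QPE is negligible depends strongly on the exposure time and the actual velocity variation, so each case should be checked.

```python
import numpy as np

# Sketch of the azimuth matched filter QPE of (8.30). The low squint FM rate
# Ka = 2*Vr^2/(lambda*R0) and dKa/dVr = 2*Ka/Vr (Ka proportional to Vr^2)
# are assumed here; all numbers are illustrative.
c = 3.0e8
lam = c / 5.3e9                  # wavelength (C-band, assumed)
Vr = 7100.0                      # effective velocity, m/s (assumed)
R0 = 850e3                       # slant range, m (assumed)
Ka = 2.0 * Vr ** 2 / (lam * R0)  # azimuth FM rate, Hz/s (low squint form)
dVr = 0.0015 * Vr                # 0.15% velocity error (assumed)

Ta = 0.8                         # exposure time, s (assumed)
eta = Ta / 2.0                   # evaluate the QPE at the edge of the exposure

qpe = np.pi * (2.0 * Ka / Vr) * dVr * eta ** 2
print(qpe / np.pi)               # QPE in units of pi; keep below 0.5 (or 0.1)
```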
If the azimuth compression error is too big, the effect of the varying Vr can be compensated using the
following modification. The two-dimensional IFFT at the bottom of Figure 8.1(a) is replaced by a range IFFT, a
secondary azimuth compression, and an azimuth IFFT. The secondary azimuth compression is a phase multiply in
the range Doppler domain that compensates the change in FM rate caused by the varying Vr. However, it does
not compensate the varying Vr effects in RCMC or SRC. The same compensation can be applied in the
approximate version of the WKA, where it is simply combined with the differential azimuth matched filter
multiply in Figure 8.1(b).
The effective velocity can vary in the order of 0.15% for typical C-band satellite SAR cases with a range swath
of 50 km. Assuming this variation, the error magnitudes using the satellite parameters of Table 4.1 with a
maximum squint of 4° can be found. The RCMC error is less than 1 m. The QPEs in the SRC and in the
azimuth matched filter are each much less than 0.01π, and can therefore be neglected.
In this example, the only significant error is the residual RCM, but it is still less than the resolutions
typically used. Therefore, the WKA is able to tolerate the range varying effective radar velocity over a 50-km
swath for the C-band satellites. However, each case must be evaluated separately, especially for L-band and/or
finer resolutions.
8.6 The Approximate Form of the WKA
The Stolt interpolation is a time-consuming step in the WKA. The interpolation implements differential RCMC,
SRC, and azimuth compression. The largest of these terms is the differential azimuth compression. If the
differential RCMC and SRC are small enough to be ignored, the Stolt interpolation is replaced with a phase
multiply that only implements the differential azimuth compression. This is equivalent to the approximation that
the RCM and the range-azimuth coupling are independent of range [2, 9, 12].
The processing steps of the approximate algorithm are shown in Figure 8.1(b). The approximation and its
relation to the RDA and CSA are outlined in this section.
The approximation has two components. It is convenient to examine the approximation by seeing its effect on the
range-azimuth coupling and on the RCMC separately.
The first approximation involves ignoring the residual range-azimuth coupling in (8.8), which means that the
differential SRC is not performed. Recall that the SRC has been performed, using the reference range in the
RFM operation. The residual can be ignored if the quadratic phase error criterion in (6.31) is satisfied over the
range swath. If this criterion is not satisfied, then the data have to be processed in range segments, with a
different RFM for each segment.
Using (8.8), the Stolt mapping of (8.5) without the residual range-azimuth coupling is given by

f_τ' ≈ f_τ / D(f_η, Vr) − f0 [1 − D(f_η, Vr)]   (8.32)

The first term corresponds to the residual RCM and the second to the residual azimuth compression. The
coefficient of f_τ in the above equation is dependent on the azimuth frequency, hence an interpolation is still
required to perform the differential RCMC.
The second component of the approximation is the effect of ignoring the differential RCM. This is valid if the
coefficient of f_τ in (8.32) is assumed to be independent of azimuth frequency. Since the processing is centered on
the Doppler centroid, it is reasonable to use f_ηc in the place of f_η. This results in the following approximate
expression for the Stolt mapping:

f_τ' ≈ f_τ / D(f_ηc, Vr) − f0 [1 − D(f_η, Vr)]   (8.33)
After applying this approximate mapping, the signal spectrum phase of (8.7) becomes
(8.34)
Then, taking the range IFFT of the spectrum with the above phase, the signal in the range Doppler domain is
given by:
(8.35)
where the sinc-like range envelope, p_r(τ), is the inverse Fourier transform of W_r(f_τ) in (8.1). Since the range
envelope is independent of azimuth frequency, no residual RCM is involved in (8.35). Only the range dependent
differential azimuth compression remains. Therefore, the data is compressed by a simple range dependent matched
filter, with a phase equal to the complex conjugate of the last term of (8.35). The range dependence of Vr is
accommodated at the same time, in contrast to the normal WKA, where the variation of Vr cannot be
accommodated without algorithm modification.
Applying the matched filter to compensate the phase in (8.35) can be done with a phase multiply in the range
Doppler domain. Looking back at the Stolt interpolation, the matched filtering is equivalent to the interpolation
being reduced to a constant shift in range frequency, with the shift amount being a function of azimuth
frequency. This is seen from (8.35), where, for a fixed D (i.e., constant f_η), the phase varies linearly with R0.
Recalling Figure 8.2(a), it is noted that the range shift is constant if each line in the figure has a slope of unity.
If the "D" factor in (8.35) is proportional to f_η², the mapping in Figure 8.3 is quadratic.
The approximation assumes that the residual RCM and the range-azimuth coupling are both range invariant,
which may be valid for low squint angles or a narrow swath width. Each set of radar parameters should be
evaluated before applying this low squint approximation. At worst, it means that the range invariance regions
must be kept small, as the approximation does not introduce errors at the reference range.
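The size of the ignored differential RCM term can be gauged by comparing the exact mapping of (8.5) with a shift-only version that keeps f0[D − 1] but drops the 1/D stretch. The parameters in this sketch are illustrative assumptions:

```python
import numpy as np

# Sketch comparing the exact Stolt mapping (8.5) with the shift-only
# approximation of the approximate WKA. Parameters are illustrative.
c = 3.0e8
f0 = 5.3e9                                # carrier frequency, Hz (assumed)
Vr = 7100.0                               # effective velocity, m/s (assumed)
f_tau = np.linspace(-50e6, 50e6, 5)       # range frequencies, Hz (100 MHz band)
f_eta = 1500.0                            # one azimuth frequency, Hz (assumed)

D = np.sqrt(1.0 - (c * f_eta) ** 2 / (4.0 * Vr ** 2 * f0 ** 2))

exact = np.sqrt((f0 + f_tau) ** 2 - (c * f_eta) ** 2 / (4.0 * Vr ** 2)) - f0
shift_only = f0 * (D - 1.0) + f_tau       # keeps the shift, drops the 1/D stretch

err = exact - shift_only                  # residual RCM term left uncorrected
print(err)                                # zero at band center, grows with f_tau
```

The error vanishes at the band center and grows approximately linearly with range frequency, which is the range invariant RCM that the approximation ignores.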
Figure 8.2(b) can be viewed as showing the mapping error, since it gives the difference between the exact Stolt mapping and its shift-only approximation.
[Figure 8.8: simulation geometry, showing the radar and the seven point targets, A through G, laid out in the range direction.]
The squint angle is set to 5°. Even at this small squint angle, the effect of the skew in the spectrum shown
in Figure 8.6 is noticeable. The swath width is set to 3.8 km (3072 range samples), which is wide enough to
illustrate the misfocusing of targets away from the reference range when only the reference function multiply is
performed. The reference range is chosen to be at the center target, Target D.
The reference function multiply is the bulk focusing step, and is applied in the two-dimensional frequency domain.
To see what effect the RFM step has on the target focusing, a two-dimensional IFFT is taken after the RFM
without doing the Stolt interpolation. The focusing results are shown in Figure 8.9(a). It is seen that Target D is
compressed properly, but the other targets exhibit an azimuth smearing proportional to their distance from the
reference range. The smearing is mainly caused by the residual azimuth modulation.
Then, to complete the focusing, the accurate Stolt interpolation is applied after the RFM, and the
two-dimensional IFFT is again taken. Figure 8.9(b) shows that all targets are now properly focused. In this figure,
Target A appears first in azimuth and Target G appears last, agreeing with the geometric layout of the targets in
Figure 8.8.
To check on the focusing in more detail, Target A in Figure 8.9(b) is analyzed with an interpolator, and the
results are shown in Figure 8.10. The measured slant range resolution is 1.25 samples, and the measured azimuth
resolution is 1.21 samples. These values agree with the theoretical values. All six other targets are similarly well
compressed.
The resolutions and sidelobes are slightly asymmetrical between range and azimuth, because different
weighting values are used. Weighting is applied in the range IFFT for sidelobe reduction, with a Kaiser window
coefficient of β = 2.5. In azimuth, only the beam pattern is used for the weighting.
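The range weighting described above can be sketched as follows. Only β = 2.5 is taken from the text; the window length, the 8× oversampling (used only to reveal the sidelobes), and the peak sidelobe measurement are illustrative choices:

```python
import numpy as np

# Sketch: a Kaiser window with beta = 2.5 applied across the occupied range
# spectrum before the IFFT, to reduce the sidelobes of the impulse response.
n, pad = 64, 8
beta = 2.5
w = np.kaiser(n, beta)

def impulse_response(band):
    spec = np.zeros(n * pad, dtype=complex)
    spec[:n] = band                       # occupied band, zero padded 8x
    return np.abs(np.fft.fftshift(np.fft.ifft(spec)))

unweighted = impulse_response(np.ones(n))
weighted = impulse_response(w)

def psl_db(resp):
    """Peak sidelobe level relative to the mainlobe peak, in dB."""
    peak = int(np.argmax(resp))
    i = peak
    while resp[i + 1] < resp[i]:          # walk down to the first null
        i += 1
    return 20.0 * np.log10(np.max(resp[i + 1:]) / resp[peak])

print(round(psl_db(unweighted), 1), round(psl_db(weighted), 1))
```

The unweighted (rectangular) spectrum gives the familiar sinc sidelobes near −13 dB; the Kaiser weighting trades a slightly wider mainlobe for substantially lower sidelobes, which is the range/azimuth asymmetry noted above.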
Figure 8.9: Compressed targets, (a) without and (b) with Stolt interpolation.
[Figure 8.10: measured impulse response of Target A: contour plot and range/azimuth profiles, with magnitude in dB versus range and azimuth (samples).]
Shape of the Spectrum
It is also interesting to look at the spectra of a target before and after the Stolt interpolation. For example,
considering Target A again, its spectral properties are obtained by taking the two-dimensional Fourier transform.
The results are shown in Figure 8.11. The spectra in the left column are shown before the Stolt interpolation,
while the spectra in the right column are shown after the Stolt interpolation.
The top row shows the spectra as they occur naturally in the simulation, with a range center frequency of
zero and an arbitrary Doppler centroid. The gaps are a result of the oversampling in range and azimuth. In the
bottom row, the spectra are shifted to the center of the plot, so they can be compared with Figure 8.6. Other
targets show similar spectral properties.
In the left column, the edges of the range spectrum are independent of azimuth frequency, as the RFM does
not change the envelope of the original spectrum. The edges of the azimuth spectrum have a small variation with
range frequency, as the Doppler centroid depends upon the wavelength. After the Stolt interpolation, the spectrum
is skewed, showing the effect of the interpolation, which is an azimuth frequency dependent shift and scaling of
the range spectrum, in accordance with Figure 8.6(e). The skew has a quadratic component with azimuth
frequency, as well as a linear component, but in many examples, the linear component dominates, so the quadratic
component is hardly visible.
Figure 8.11: The spectra of Target A, before and after Stolt interpolation.
It is also useful to examine the phase of the spectra before and after Stolt interpolation. To do this, only a single
target is simulated, so that its phase is not distorted by other nearby targets. Target A is selected, away from the
reference range.
Figure 8.12 shows the phase characteristics of the spectrum after the RFM but before the Stolt interpolation.
Figure 8.12(a) shows the wrapped phase. The phase is blanked out beyond the signal bandwidth, where the phase
is unreliable. Figure 8.12(b) shows an expanded plot of the phase of the lower left 64 x 64 samples of Figure
8.12(a). The phase is not well represented in the figure, because the vertical frequency is near the 0.5 cycles per
sample aliasing limit, but a small quadratic component can be seen in Figure 8.12(b).
Two arbitrary phase profiles are examined. Figure 8.12(c) shows a horizontal slice, taken at azimuth
frequency sample 150 of Figure 8.12(a), revealing the shape of the phase along the range frequency direction. In
Figure 8.12(e), the difference between adjacent phase samples is taken, showing the local frequency of the
waveform. These panels show a linear phase trend (i.e., a constant frequency), caused by the target's range
position. This indicates that most of the range modulation has been removed by the RFM, and that other
modulation, arising from the range-azimuth coupling, is small.
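The phase-difference diagnostic used in these panels can be sketched on a synthetic spectrum; here a pure linear phase plus a small quadratic term stands in for the simulated target:

```python
import numpy as np

# Sketch of the Figure 8.12(c, e) diagnostic: differencing adjacent samples
# of the unwrapped phase gives the local frequency of the waveform, so a
# linear phase trend shows up as a constant. Test signal is synthetic.
n = 256
k = np.arange(n)
phase = -0.2 * 2 * np.pi * k + 1e-5 * (k - n / 2) ** 2   # linear + small quadratic
signal = np.exp(1j * phase)

local_freq = np.diff(np.unwrap(np.angle(signal))) / (2 * np.pi)  # cycles/sample
print(local_freq[:3])   # near the -0.2 cycles/sample linear trend
```

A flat local-frequency profile indicates pure linear phase (target position only), while a sloped profile reveals residual linear FM, exactly the distinction drawn between panels (e) and (f) of Figure 8.12.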
[Figure 8.12: phase of Target A after RFM, before Stolt interpolation: (a) phase contours; (b) phase contours, expanded; (c) phase, range profile; (d) phase, azimuth profile; (e) frequency, range profile; (f) frequency, azimuth profile. Horizontal axes are range frequency sample and azimuth frequency sample.]
However, it is seen that the quadratic phase modulation is not removed in the azimuth direction. Figure
8.12(d) is a phase plot along the azimuth frequency direction, taken at range frequency sample 150 [a vertical
slice from Figure 8.12(a)]. The phase appears to be almost linear, as the linear component dominates over the
quadratic component. The linear component represents the azimuth position of the target. However, the phase
difference plot in Figure 8.12(f) shows that there is a significant linear FM or quadratic phase component in the
azimuth direction, a result of the residual azimuth modulation.
To show the effect of the Stolt mapping, Figure 8.13 shows the spectrum phase characteristics after the mapping.
The phase characteristics in Figure 8.13(a-f) are counterparts to those in Figure 8.12(a-f). The important point
to observe is that the phase contours are now parallel, evenly spaced straight lines, as shown in Figure 8.13(b).
This is in agreement with Figure 8.6(e). Hence, the phase in either direction is linear, that is, it contains no
residual FM components, as shown in Figure 8.13(e, f). This shows that the Stolt interpolation has completed the
focusing, and only the phase arising from the target location remains in the spectrum.
Figure 8.13: Phase of Target A, after Stolt interpolation.
Finally, the approximate version of the WKA is simulated. The Stolt interpolation is simplified to apply a differential
azimuth compression only, using a range frequency shift with a phase multiply in the range Doppler domain, as
described in Section 8.6. The resulting impulse responses are shown in Figure 8.14, using the same seven targets.
[Figure 8.14: impulse responses with the approximate WKA; broadening of (a) Target D: range 0.0%, azimuth 0.0%; (b) Target E: range 1.3%, azimuth 0.6%; (c) Target F: range 4.1%, azimuth 3.8%; (d) Target G: range 10.2%, azimuth 9.3%. Axes are range and azimuth (samples).]
It is seen that Target D at the reference range is correctly focused, but the quality of the focusing
deteriorates as the target gets further from the reference range. The amount of residual RCM varies linearly with
distance from the reference range, and its effect on focusing varies quadratically with distance, as seen in Figure
6.11. The residual RCM for the two edge targets, A and G, is about 1.4 range resolution elements.
Note that the errors in the approximate form are much less when the radar parameters are less demanding,
such as lower resolution, less squint, and narrower range swaths.
Because the WKA is suited to wide aperture, squinted, and spotlight SAR processing, an X-band airborne
spotlight radar image is selected to illustrate a product of the algorithm (the same radar system as in Figure
6.21).
The scene in Figure 8.15 is from the area of Ann Arbor, Michigan, and was processed by MacDonald
Dettwiler. The processing is single-look, with a resolution of 1.8 m, using the approximate form of the WKA. The
image is 660 × 1000 pixels, covering an area of 1 by 1.5 km.
Figure 8.15: An X-band airborne spotlight radar image processed with the
WKA algorithm. Courtesy of MacDonald Dettwiler.
8.8 Summary
The details of the WKA SAR processing algorithm have been presented. It is a frequency domain processing
algorithm consisting of two key steps: reference function multiply and Stolt interpolation. The reference function
multiply is performed in the two-dimensional frequency domain, and focuses targets at a chosen reference range.
The Stolt interpolation performs a mapping of the range frequency axis, which achieves the differential focusing
of targets away from this reference. The algorithm is able to handle wide apertures and high squint angles
properly, but not a range varying effective velocity, which limits the width of the satellite range swath that can
be processed in one block.
The algorithm originates from seismic signal processing. The Stolt interpolation was originally derived using a
wave equation approach, but it has been derived here from signal processing principles. Physical interpretations of
the key Stolt interpolation have been presented. Basically, the mapping compensates the residual RCM,
range-azimuth coupling, and azimuth modulation by a change of variables along the range frequency axis.
An approximate form of the WKA is valid under some circumstances. The approximation reduces the Stolt
mapping to a constant shift along the range frequency axis at each azimuth frequency. This allows the Stolt
interpolation operation to be replaced by a simpler phase multiply in the range Doppler domain. The
approximate form implements the residual azimuth compression only, ignoring the residual RCM and
range-azimuth coupling away from the reference range. When the radar parameters permit, the approximate form
can be very accurate. The accuracy of the regular and approximate forms of the WKA have been illustrated by
simulation experiments.
The only drawback in the WKA algorithm is that it relies on a constant effective radar velocity. For an
airborne case, this poses no problem. For a satellite case, the effective velocity does vary with range, so the
resulting errors have to be analyzed on a case-by-case basis, especially when a high resolution is needed. For a
typical C-band satellite case, in which the velocity changes by up to 0.15% over the swath width, the algorithm
can still focus the SAR data properly over a 50-km swath.
The important processing equations of the WKA are summarized in Table 8.1. All units are phase, expressed
in radians, except the Stolt mapping, which is range frequency, expressed in hertz.
Table 8.1: Summary of the WKA processing equations.

Bulk focused signal after RFM:
    θ_RFM(f_τ, f_η) = −[4π(R0 − Rref)/c] √[ (f0 + f_τ)² − c² f_η² / (4 Vr²) ]

Residual range-azimuth coupling after RFM:
    [4π(R0 − Rref)/c] · c² f_τ² f_η² / (8 Vr² f0³ D³(f_η, Vr))

Stolt mapping (range frequency):
    f_τ' = √[ (f0 + f_τ)² − c² f_η² / (4 Vr²) ] − f0

Signal phase after Stolt mapping:
    θ_Stolt(f_τ', f_η) = −[4π(R0 − Rref)/c] (f0 + f_τ')
References
[1] T. E. Scheuer and F. H. Wong. Comparison of SAR Processors Based on a Wave Equation Formulation. In
Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'91, Vol. 2, pp. 635-639, Espoo, Finland, June 1991.
[2] R. Bamler. A Systematic Comparison of SAR Focusing Algorithms. In Proc. Int. Geoscience and Remote
Sensing Symp., IGARSS'91, Vol. 2, pp. 1005-1009, Espoo, Finland, June 1991.
[3] R. Bamler. A Comparison of Range-Doppler and Wavenumber Domain SAR Focusing Algorithms. IEEE
Trans. on Geoscience and Remote Sensing, 30 (4), pp. 706-713, July 1992.
[4] I. R. Mufti. Recent Development in Seismic Migration. In Time Series Analysis: Theory and Practice 6, O.
D. Anderson, J. K. Ord, and E. A. Robinson (eds.). Elsevier Science Publishers B.V., North-Holland, 1985.
[5] J. F. Claerbout. Imaging the Earth's Interior. Blackwell Science, Oxford, 1985.
[6] R. H. Stolt. Migration by Transform. Geophysics, 43 (1), pp. 23-48, February 1978.
[7] H. Hellsten and L. E. Anderson. An Inverse Method for the Processing of Synthetic Aperture Radar Data.
Inverse Problems, No. 3, pp. 111-124, 1987.
[8] L. M. H. Ulander and H. Hellsten. System Analysis of Ultra-Wideband VHF SAR. In IEE International
Radar Conference, RADAR'97, Conf. Publ. No. 449, pp. 104-108, Edinburgh, Scotland, October 14-16, 1997.
[9] C. Cafforio, C. Prati, and F. Rocca. Full Resolution Focusing of SEASAT SAR Images in the
Frequency-Wave Number Domain. In Proc. 8th EARSeL Workshop, pp. 336-355, Capri, Italy, May 17-20, 1988.
[10] F. Rocca, C. Cafforio, and C. Prati. Synthetic Aperture Radar: A New Application for Wave Equation
Techniques. Geophysical Prospecting, 37, pp. 809-830, 1989.
[11] C. Cafforio, C. Prati, and F. Rocca. SAR Data Focusing Using Seismic Migration Techniques. IEEE Trans.
on Aerospace and Electronic Systems, 27 (2), pp. 194-207, March 1991.
[12] W. G. Carrara, R. S. Goodman, and R. M. Majewski. Spotlight Synthetic Aperture Radar: Signal Processing
Algorithms. Artech House, Norwood, MA, 1995.
[13] D. C. Munson, J. D. O'Brien, and W. K. Jenkins. A Tomographic Formulation of Spotlight Mode Synthetic
Aperture Radar. Proc. of the IEEE, 71, pp. 917-925, 1983.
[14] M. Soumekh. Synthetic Aperture Radar Signal Processing with MATLAB Algorithms. Wiley-Interscience, New
York, 1999.
[15] C. Prati and F. Rocca. Focusing SAR Data with Time-Varying Doppler Centroid. IEEE Trans. on
Geoscience and Remote Sensing, 30 (3), pp. 550-559, May 1992.
[16] M. M. Goulding, D. R. Stevens, and P. R. Lim. The SIVAM Airborne SAR System. In Proc. Int.
Geoscience and Remote Sensing Symp., IGARSS'01, Vol. 6, pp. 2763-2765, Sydney, Australia, July 2001.
[17] J. Steyn, M. M. Goulding, D. R. Stevens, P. R. Lim, J. Steinbacher, J. Tofil, T. Durak, and K. Wesolowicz.
Design Approach to the SIVAM Airborne Multi-Frequency, Multi-Mode SAR System. In Proc. European
Conference on Synthetic Aperture Radar, EUSAR'02, Koln, Germany, June 2002.
[18] C. Prati, F. Rocca, A. Monti Guarnieri, and E. Damonti. Seismic Migration for SAR Focusing:
Interferometric Applications. IEEE Trans. Geoscience and Remote Sensing, 28 (4), pp. 627-640, 1990.
[19] D. P. Belcher and C. J. Baker. High Resolution Processing of Hybrid Strip-Map/Spotlight Mode SAR. IEE
Proc., Radar, Sonar, Navig., 143 (6), pp. 366-374, 1996.
[20] J. H. Chun and C. A. Jacowitz. Fundamentals of Frequency Domain Migration. Geophysics, 46, pp. 717-733,
1981.
[21] O. Yilmaz. Seismic Data Processing. SEG Publications, Tulsa, OK, 1987.
[22] R. K. Raney. Radar Fundamentals: Technical Perspective. In Manual of Remote Sensing, Volume 2:
Principles and Applications of Imaging Radar, F. M. Henderson and A. J. Lewis (eds.), pp. 9-130. John Wiley
& Sons, New York, 3rd edition, 1998.
[23] L. M. H. Ulander and H. Hellsten. Calibration of the CARABAS VHF SAR System. In Proc. Int.
Geoscience and Remote Sensing Symp., IGARSS'94, Vol. 1, pp. 301-303, Pasadena, CA, August 1994.
[24] L. M. H. Ulander and P.-O. Forlind. Precision Processing of CARABAS HF/VHF-Band SAR Data. In
Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'99, Vol. 1, pp. 47-49, Hamburg, Germany, June
1999.
Appendix 8A
The original derivation of the WKA was done in the wavenumber domain [9-11]. The wavenumber domain is a
two-dimensional frequency domain, which uses different notations from the frequency domains used in the body of
this book. The derivation is based upon the properties of propagation of a plane wave, and its phase when it
hits the target. The purpose of this appendix is to present the "wavenumber" derivation of the Stolt mapping of
(8.5).
The symbols used in this appendix are:
x distance along the azimuth direction, m
r distance along the range direction (orthogonal to azimuth), m
θ instantaneous squint angle, measured in the slant range plane, rad
λ radar wavelength, m
λx, λr wavelengths measured along the x-axis and r-axis, m
ω angular frequency of the radar wave, rad/s
kx, kr wavenumbers along the x-axis and r-axis, rad/m
Figure 8A. l shows the propagation geometry of a plane wave, which is a local approximation to a spherical
wave. The approximation is valid for a small region around the radar line of sight, but it is used here over a
whole plane. The vertical axis represents azimuth position. The direction of propagation is the vector from the
sensor to the target, at the time of transmitting the pulse and receiving its echoes (using the start-stop
assumption).
These two vectors form a plane that is commonly called the slant range plane, in which most of the SAR
geometry models are formulated. The horizontal axis is the intersection of the slant range plane and the zero
Doppler plane, and can be interpreted as a range axis that is perpendicular to the azimuth axis. The angle, θ, is
the instantaneous squint angle, measured in the slant range plane. The instantaneous squint angle varies with
time, since the sensor moves in relation to the target. The two dashed lines, perpendicular to the direction of
propagation, represent two wave fronts, separated by the radar wavelength, λ.
For the moment, a monochromatic wave can be assumed with a frequency, f0, which is expressed as

ω = 2π f0 = 2π c / λ (8A.1)

in angular frequency units. The parameter c is the speed of the radar wave.
Figure 8A.1: Propagation geometry of a plane wave, showing the azimuth axis, x, the range axis, r
(orthogonal to azimuth), the radar position, the squint angle, θ, and two wave fronts separated by λ.
The frequency of the signal can also be expressed as a wavenumber, which has units of radians per meter, rather
than radians per second. The wavenumber is normally defined to be the angular frequency divided by velocity, or
equivalently, 2π divided by the wavelength. However, as two-way propagation is considered in the SAR case, the
definition is modified to be the angular frequency divided by the effective velocity, where the effective velocity of
the wave is c/2. With this understanding, the wavenumber is 4π divided by the wavelength. The use of
wavenumber directs attention to the spatial aspects of the SAR formulation.
The radar wavelength, λ, is a spatial unit measured along the direction of propagation, and is the separation
of consecutive maxima of the electric field vector along this direction. Considering the two-dimensional wavefront
in the vicinity of the target, the "wavelength" can also be measured parallel to the azimuth x-axis and parallel
to the orthogonal range r-axis in Figure 8A.1. These special wavelengths are called λx and λr, respectively, and
lead to the concept of spatial frequencies or wavenumbers, kx and kr, along these axes. The azimuth wavenumber is

kx = 4π / λx (8A.2)

Azimuth wavenumber is similar to azimuth frequency, used in the body of the chapter, except that it has
"geometric" units of radians per meter rather than temporal units of cycles per second. Similarly, the range
wavenumber, kr, is

kr = 4π / λr (8A.3)
The planar geometry of Figure 8A.1 shows that the azimuth "wavelength," λx, is related to the radar
wavelength and the instantaneous squint angle by

λx = λ / sinθ (8A.4)

Using (8A.1), (8A.2), and (8A.4), the squint angle can be expressed as

sinθ = λ / λx = c kx / (2ω) (8A.5)
With the relationships above established, the propagation of the plane wave can now be formulated. Consider
Figure 8A.1 to portray a snapshot of the radar/target geometry at the time when the radar pulse is transmitted.
Because of the start-stop assumption, the geometry of Figure 8A.l is also assumed to be valid as the wave hits
the target.
Consider a plane wave that is transmitted at time, t = -to. The phase of the plane wave can be observed at
any time, t, and any place, (r, x), in the slant range plane. Its starting phase is assumed to be zero, and the
phase, as a function of time and location, is given by the plane wave model
φr,x(t) = ω (t + t0) − kr r − kx (x + x0) (8A.6)
Consider the time origin, t = 0, to be the time that the plane wave reaches the spatial origin, (r = 0, x = 0).
As the distance AB is x0 sinθ, the time increment from −t0 to zero is given by

t0 = 2 x0 sinθ / c (8A.7)

Combining (8A.5) and (8A.7), the relationship

ω t0 = kx x0 (8A.8)

is obtained, which is also a consequence of triangle ABO being similar to the triangle involving λ and λx in
Figure 8A.1. Using this result, (8A.6) can be simplified to
φx,r = ω t − kx x − kr r (8A.9)

which is the usual form given in the literature. It means that the phase at the spatial origin is also zero at t =
0. When observed along the r-axis, the phase is φr = ω t − kr r, and after quadrature demodulation, the phase
along the r-axis is simply φr = −kr r.
From (8A.5), the cosine of the squint angle is given by

cosθ = √[ 1 − kx² c² / (4 ω²) ] (8A.10)

Noting that kx = 4π fη / (2Vr) [see the development leading to (8.23)], and using (8A.1) and (8A.2),
kx c/(2ω) is found to be equal to λ fη / (2Vr). Substituting this result into (8A.10), the cosine expression is
seen to be the range migration parameter, D(fη, Vr), used in (5.27).
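This identity can be checked numerically. The sketch below uses assumed illustrative values for the carrier frequency, effective velocity, and azimuth frequency, and confirms that the cosine of (8A.10), with kx expressed through the azimuth frequency, reproduces the migration parameter D(fη, Vr):

```python
import numpy as np

# Assumed illustrative values (not taken from the text)
c, f0, Vr = 2.9979e8, 5.3e9, 7100.0
f_eta = 1500.0                      # azimuth frequency, Hz

lam = c / f0                        # radar wavelength
omega = 2 * np.pi * c / lam         # angular frequency, as in (8A.1)
kx = 4 * np.pi * f_eta / (2 * Vr)   # azimuth wavenumber from f_eta

# cos(theta) from (8A.10)
cos_theta = np.sqrt(1 - kx**2 * c**2 / (4 * omega**2))

# Range migration parameter D(f_eta, Vr) of (5.27)
D = np.sqrt(1 - c**2 * f_eta**2 / (4 * Vr**2 * f0**2))

print(cos_theta, D)   # the two expressions agree
```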
Figure 8A.2: The Stolt mapping: (a) the (ω, kx) domain; (b) the (kr, kx) domain.
The interpretation of the Stolt mapping can be explained by the following simplified geometric analogy,
whereby the Stolt mapping "corrects" the geometry of the received data. The SAR signal is acquired in the (t,
x) domain, where t is the time of the radar signal (proportional to range along the radar line of sight), and x is
the azimuth position of the sensor. The t and x axes are not orthogonal, since t is along the beam vector, not
usually perpendicular to azimuth. Range migration distorts the geometry in this domain.
The (w, kx) domain is the two-dimensional Fourier transform of the (t, x) domain, where w is the frequency
of the transmitted signal along the beam direction, and kx is the spatial frequency along the azimuth direction.
Operating in the two-dimensional frequency domain, the nonorthogonality of the received signal axes is rectified
by the Stolt mapping of the (w, kx) domain signal into a (kr, kx) domain signal, by an interpolation along the
w-axis. When the mapped signal in the (kr, kx) domain is inverse Fourier transformed, the SAR image is obtained
in the orthogonal (r, x) spatial frame. In this way, the Stolt mapping changes data in a nonorthogonal (signal)
space into focused data in an orthogonal (image) space. This also means that the data are registered to their zero
Doppler position.
It remains to derive the instantaneous squint angle in the (kr, kx) domain. Using (8A.2) and (8A.3), the
relationship

kx / kr = λr / λx = tanθ (8A.16)

is obtained, in which the last equality comes from the plane wave geometry of Figure 8A.1, or from using (8A.4)
and (8A.11). The squint angle, θ, is indicated in Figure 8A.2(b) at the start and end of the target exposure.
Chapter 9
9.1 Introduction
The SEASAT, ERS-1, and ERS-2 satellites have provided images of high quality and sparked many developments
in radar remote sensing. These satellites, with the data acquired in the stripmap mode, provide multilook images
of 25-m resolution in a 100-km swath.
The success of the satellites has prompted further developments.
o For data acquired in a nonsquinted, stripmap mode, the most common processing algorithm is the RDA.
However, the RDA is a precision processing algorithm, and does not usually yield images in real time. Can a
faster algorithm be found that can produce images in real time, perhaps at a lower resolution? With such an
algorithm, the operator can browse through the acquired data and decide if a high resolution image of a
particular area is desired [1].
o Can the swath width be increased, perhaps at the expense of reducing the resolution? In the stripmap mode,
each target is swept by the complete footprint; that is, the exposure time is proportional to the size of the
azimuth footprint. If the full resolution is not required, then each target does not have to be illuminated by
the entire exposure time of the beam. The reduction in the exposure time means that the beam can spend
time elsewhere, illuminating another part of the Earth. This is the basic idea behind another SAR operating
mode-the Scanning Synthetic Aperture Radar (ScanSAR) mode of RADARSAT.
It turns out that quick-look processing of stripmap data and the operational processing of ScanSAR data can
take advantage of the same algorithm, known as SPECtral ANalysis, or SPECAN, for short. The algorithm is
more efficient and requires less memory than the RDA, while producing the image quality required for these
moderate-resolution applications.
Most new SAR satellites are designed to have a ScanSAR mode, with a swath width of up to 500 km, and
a resolution between 50 m and 100 m [2]. Examples are RADARSAT, SIR-C, and ENVISAT. The ENVISAT
satellite also has a Global Monitoring Mode (GMM) with a swath width of 400 km and resolution of 1000 m.
ScanSAR has been one of the most popular imaging modes of RADARSAT-1, and researchers have even been
successful in processing the data in interferometry mode [3, 4]. The ScanSAR mode of operation, and its data
processing, is presented in Chapter 10.
A high-level block diagram of the SPECAN algorithm is given in Figure 9.1. The range compression operation is
usually the same as in the RDA algorithm. The remaining operations are unique to the SPECAN quick-look
processing algorithm.
The central feature of the SPECAN algorithm is the way it does azimuth compression. The SPECAN
algorithm relies on a "deramping" operation, followed by an FFT. This is explained in Section 9.2, where the
equivalence between these operations and time domain convolution is established. As in the RDA, multilook
processing can also be performed, and is addressed in Section 9.3. If single-look complex processing is to be done,
the block labeled "Multilooking" is replaced by "Phase compensation," which is discussed in Section 9.6.
A key property of the SPECAN algorithm is its computing efficiency. The parameters that affect efficiency
are discussed in Section 9.4. Range cell migration correction (RCMC) is discussed in Section 9.5, where it is
shown that RCMC is limited to a linear correction only, which is sometimes a limitation on image quality. After
the image is formed, the registration skew caused by the linear RCMC is corrected by a deskewing operation,
which is also discussed in Section 9.5.
Image degradations, such as those caused by phase, azimuth FM rate, and Doppler centroid errors, are
examined in Section 9.7. Section 9.8 illustrates the operation of the SPECAN algorithm, first using a point target
simulation, then using ERS-1 stripmap data. A summary of the algorithm properties is given in Section 9.9.
Figure 9.1: High-level block diagram of the SPECAN algorithm: range compression; linear RCMC; deramp,
weight, and FFT; descalloping; multilooking (or phase compensation); and deskewing and stitching, producing
the SAR image.
Historical Note
The original concept for the SPECAN algorithm came from the stretch and step transform methods of processing
linear FM signals [5, 6]. This included the deramping concept, reported in the 1970s [7-9]. The SPECAN
algorithm, in its present form, was developed by MacDonald Dettwiler (MDA) and the European Space
Technology Center (ESTEC) in 1979 in a project to design and build a real-time SAR processor [10, 11]. The
main innovations made in 1979 were the multilooking concept and the methods of applying linear RCM,
descalloping, and deskewing. Later, a graduate student at the University of British Columbia (UBC), Vancouver,
Canada, compared various linear FM processing methods, and produced the first public document on the
SPECAN algorithm [12].
The basic SPECAN algorithm is presented in this section. By examining the equation for the time-domain
matched filtering of a linear FM signal, a fast way of performing the matched filtering, using one short FFT
rather than two long ones can be deduced. A key step is deramping, whereby the linear FM signal is converted
into a sine wave with a frequency proportional to the position of the signal in the input array. A single FFT
then compresses the signal and registers the targets to their correct positions. Because the FFT operation is
compressing the signal by spectral analysis of its deramped frequencies, the algorithm has been given the name
"SPECAN."
After the mathematical principles are established, frequency/time diagrams are used to give a geometrical
interpretation of the operation of the algorithm. This helps to understand the effects of parameters such as FFT
length, output sample indices, and FFT spacing.
9.2.1 From Convolution to SPECAN
As seen in Chapter 3, compression is performed by convolving the received signal with the time-reversed complex
conjugate of the ideal received signal from a single point target. It is illustrative to start with the time-domain
convolution equations to develop the SPECAN algorithm for azimuth compression.
Let the demodulated, received signal be sr(η'), and the matched filter be

h(η') = rect(η'/T) exp{jπ Ka (η')²} (9.1)

as in (3.26), where T is the matched filter duration. The duration, T, is long enough that the matched filter
frequencies span at least the bandwidth of the signal. The compressed signal is the convolution between sr(η')
and h(η')

sout(η') = ∫_{−T/2}^{T/2} sr(η' − u) h(u) du = ∫_{η'−T/2}^{η'+T/2} sr(u) h(η' − u) du (9.2)

Substituting (9.1) into (9.2) and simplifying, the compressed signal becomes

sout(η') = exp{jπ Ka (η')²} ∫_{η'−T/2}^{η'+T/2} sr(u) exp{jπ Ka u²} exp{−j2π Ka η' u} du (9.3)
Examination of (9.3) reveals an alternate way of performing the convolution. There are two operations
represented by the two exponentials in the integrand. The first exponential is a phase multiply applied to the
signal, sr(η'), and the second exponential (along with the integral) represents a Fourier transform. In the discrete
case, the Fourier transform is implemented with a DFT of duration T, using an FFT for efficiency. From here
on, an FFT is assumed in the discussion.
Note also that the exponential, to the left of the integral in (9.3), represents a phase change proportional to
the square of the position of the compressed target. If multilook processing is done, this phase can be ignored,
but if single-look complex output is wanted, this phase must be compensated, as is discussed in Section 9.6.
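The equivalence between the direct convolution of (9.2) and the rearranged phase-multiply-plus-Fourier-kernel form of (9.3) can be verified numerically at a single output time. All parameter values below (FM rate, PRF, filter duration, target position) are assumed for illustration:

```python
import numpy as np

# Illustrative parameters (assumed, not from the text)
Ka = 2000.0            # azimuth FM rate, Hz/s
Fa = 1700.0            # sampling rate (PRF), Hz
T = 0.4                # matched filter duration, s
eta_p = 0.05           # output time at which to evaluate, s
eta_d = 0.02           # target's zero-Doppler time, s

def sr(t):
    # echo from a point target at eta_d (beam pattern ignored)
    return np.exp(-1j * np.pi * Ka * (t - eta_d)**2)

u = np.arange(-T/2, T/2, 1/Fa)             # discrete filter support

# Direct discrete form of the convolution (9.2)
h = np.exp(1j * np.pi * Ka * u**2)
direct = np.sum(sr(eta_p - u) * h) / Fa

# Rearranged form of (9.3): residual phase x (deramp + Fourier kernel)
v = eta_p - u                              # same integration points
specan = np.exp(1j * np.pi * Ka * eta_p**2) * np.sum(
    sr(v) * np.exp(1j * np.pi * Ka * v**2)
          * np.exp(-2j * np.pi * Ka * eta_p * v)) / Fa

print(np.allclose(direct, specan))         # the two forms agree
```

The agreement is exact up to floating-point rounding, because the rearrangement of (9.3) is an algebraic identity applied term by term to the integrand of (9.2).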
In range, the transmitted pulse usually has a linear FM characteristic, in which case the SPECAN algorithm
can be applied. However, it is seen later that the SPECAN algorithm loses much of its efficiency when
full-resolution processing is done, so SPECAN is not usually used in range processing. In azimuth, the signal is
always approximately linear FM, and less than full-resolution processing is often done, so the discussion of the
SPECAN algorithm concentrates on azimuth processing. In addition, azimuth compression provides a further
challenge, since it has the added complexity of the variation of the Doppler centroid and FM rate, of RCM, and
of multilook processing.
The geometric interpretation of the two key operations in (9.3), the phase multiply and the FFT, are now
examined. Frequency/time diagrams of the signal are used to show why the first operation is called "deramping"
and the second operation achieves the desired compression or focusing.
First, consider a single point target whose frequency versus time characteristic is shown in Figure 9.2(a). Its
linear FM character is evident, and the sloped line is referred to as a "ramp" (see Figure 4.10). The target is
assumed to extend over a finite time, Ta, representing the time that the radar beam exposes the target with
significant energy. Because of the requirement for oversampling, Ta is less than one PRF time by about 20% to
30%. The duration, Ta, is used to place an upper limit on the part of the received target energy that is
processed.
The frequency versus time characteristic of the phase multiplier, the first exponential in the integrand in
(9.3), is shown in Figure 9.2(b). The phase multiplier has the following properties.
o Its time extent overlaps that of the signal.
o Its duration is longer than one PRF time, so it is aliased in frequency by the sampling effect of the PRF.
The dashed lines show the frequency before aliasing, and the solid lines show the frequency after aliasing.
o Its FM rate or slope is equal in magnitude but opposite in sign to that of the signal.
o It does not necessarily have the same time origin or time of zero frequency as the signal - its time origin can
be arbitrary.
(a) Signal
(b) Deramp function
(c) Signal after deramping but before aliasing
(d) Signal after deramping and after aliasing
Figure 9.2: Frequency versus time characteristics of one target, before and
after deramping.
When the signal (a) is multiplied by the deramp function (b), the instantaneous frequency of the product is
equal to the sum of the two individual frequencies. The frequency/time diagram of this product is shown in
Figure 9.2(c). Again, the frequency is aliased by the PRF sampling and the final result, after aliasing, is shown in
Figure 9.2(d). Since the original ramp of the target has been removed by the phase multiply, the phase function
is called the "deramp function," or sometimes the "reference function," and the phase multiply is referred to as
"deramping."
Multiple Targets
Now, consider a realistic case of having multiple targets in the same range cell. Let sr(η') be a signal consisting
of a series of targets that are evenly spaced in the azimuth direction, as illustrated by the frequency versus time
characteristics of Figure 9.3(a). The beam is assumed to be centered on zero Doppler - the effects of the
Doppler centroid are discussed later.
Figure 9.3: Frequency versus time characteristics of a group of evenly spaced targets: (a) signal before
deramping; (b) deramp function; (c) signal after deramping.

The deramp function is

hdr(η') = exp{jπ Ka (η')²} (9.4)
which is the same as the time domain matched filter of (9.1), but without the envelope. Again, hdr(η') is a linear
FM signal, with a slope equal to the negative of the slope of the targets, as shown in Figure 9.3(b). With this
choice of slope, each target is changed into a monochromatic signal (i.e., a sine wave) by the phase multiply, as
shown in Figure 9.3(c). Note that the phase multiply moves the frequency of some targets beyond the PRF
limits. This frequency is shown by the dashed lines, but the PRF aliasing moves the frequency to a value shown
by the solid lines.
Before aliasing, the targets are arranged in a ladder shape, bounded by the two slanted dotted lines with
slope Ka. The vertical or frequency separation between these dotted lines is one PRF, and the horizontal or
azimuth time separation between them is one PRF time, given by Fa/Ka. Figure 9.3(c) also shows that the time,
Ta, of the significant beam exposure is less than the PRF time, meaning that the processed Doppler bandwidth is
less than the PRF, and the signal is oversampled by an appropriate amount.
The deramped signal of Figure 9.3(c) is

sdr(η') = sr(η') hdr(η') (9.5)

and the next operation is an FFT. Most of the energy of each target is compressed into one sample in the FFT
output array, and the location of the target is given by its deramped frequency. As an example, let sr(η') be the
echo received from a target with zero Doppler at time ηd

sr(η') = exp{−jπ Ka (η' − ηd)²} (9.6)
The antenna beam pattern has been ignored in this equation, since it is not important in the following discussion.
Then multiplying (9.6) by (9.4) and simplifying, the deramped signal is

sdr(η') = exp{−jπ Ka (ηd)²} exp{+j2π Ka ηd η'} (9.7)

The first exponential term in (9.7) is a constant phase term, with a phase proportional to the square of the
target's zero Doppler time, ηd. The second exponential term is a complex sine wave, which shows that the target's
frequency is monochromatic, equal to +Ka ηd. After the FFT, the target is compressed to a frequency cell
corresponding to this frequency value.
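This behavior is easy to demonstrate: deramping a shifted chirp produces a tone at +Ka ηd, and an FFT places the target in the bin nearest that frequency. The PRF, FM rate, FFT length, and target time below are assumed demonstration values:

```python
import numpy as np

# Assumed demonstration parameters
Fa = 1700.0                        # PRF, Hz
Ka = 2095.0                        # azimuth FM rate, Hz/s
N = 512                            # FFT length
eta = np.arange(N) / Fa            # time axis of one FFT segment
eta_d = 0.1                        # target's zero-Doppler time, s

sr = np.exp(-1j * np.pi * Ka * (eta - eta_d)**2)   # target echo, (9.6)
sdr = sr * np.exp(1j * np.pi * Ka * eta**2)        # deramp, (9.7)

# After deramping the target is a tone at +Ka*eta_d; the FFT places it
# in the output bin nearest that frequency
peak_bin = np.argmax(np.abs(np.fft.fft(sdr)))
expected = round(Ka * eta_d / (Fa / N))   # tone frequency / bin spacing
print(peak_bin, expected)
```

The peak appears at the bin nearest Ka ηd / (Fa/N), confirming that the target's position is encoded in its deramped frequency.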
In this algorithm, some confusion lies in the interpretation of "frequency" and "time." Initially, the input
signal is in the time domain. After compression, the signal is again in the time domain. Before the FFT, one
might interpret the deramped signal to be in the frequency domain, although no physical interpretation justifies
this point. Regardless of the terminology used, the important point to note is that a monochromatic signal in one
domain is transformed to a position in the other domain, determined by the frequency of the deramped signal.
The position of each target is unique after the FFT, as each target has its own frequency in the deramped
signal, governed by the relative time shift between the target and the reference function.
It should be emphasized that the algorithm is designed to apply to linear FM signals. However, if the signals
have a small nonlinear FM component, the degradation that results from using the SPECAN algorithm can be
quite small. Specifically, when low-to-medium resolution processing is considered for quicklook or ScanSAR
applications, the requirement for linearity in the signal's FM characteristic is not too demanding.
Now examine the aliasing effects in Figure 9.3(c). The signal energy represented by the heavy dashed lines is
outside the PRF interval in its unaliased form, but, after aliasing, the energy is shifted vertically into the PRF
interval. This is shown by the solid lines in Figure 9.3(c), which are repeated in Figure 9.4 (with five more
targets).
Figure 9.4: Frequency versus time characteristics of a group of evenly spaced
targets.
The parallelograms indicated by the dashed lines in Figure 9.4 delineate the data processing regions. Given
this configuration of deramped targets within the parallelograms, the FFT length, the position of each FFT, and
the set of valid FFT output points must be chosen.
These questions are answered in this and following sections. The choice of FFT length is discussed first. The
factors to consider are:
o Azimuth resolution;
o Prevention of aliasing;
o Computing efficiency.
The resolution is governed by the processed bandwidth, or, in the context of Figure 9.4, how much of the
target's exposure is captured by each individual FFT. As in the RDA and as discussed in Section 4.7.1, the
resolution is given by 0.886 times the beam velocity, divided by the processed bandwidth. For an Nfft point FFT,
the duration is Nfft/Fa, and the processed bandwidth is Ka Nfft/Fa, where Fa is the sampling rate, or PRF, and
Ka is the azimuth FM rate. Weighting is used for side lobe control, which typically reduces the effective
processed bandwidth by about 10% to 20%. The azimuth resolution obtained by a given FFT length is

ρt = 0.886 γw,a Fa / (Ka Nfft) (9.8)

in time units, where γw,a is the IRW broadening factor due to the weighting [see (3.46)]. The resolution can be
converted to distance units [see (4.45)]

ρa = 0.886 γw,a Vg cosθr,c Fa / (Ka Nfft) (9.9)

where Vg is the radar beam's ground velocity, and θr,c is the squint angle.
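The trade between FFT length and resolution can be tabulated with the rule stated above (0.886 times the beam velocity divided by the processed bandwidth, broadened by the weighting factor). The PRF and FM rate below match the generic C-band numbers used later in the chapter; the ground velocity, broadening factor, and zero squint are assumed:

```python
import math

# Generic C-band values; Vg and gamma_w are assumed for illustration
Fa = 1700.0          # PRF, Hz
Ka = 2095.0          # azimuth FM rate, Hz/s
Vg = 6800.0          # beam ground velocity, m/s (assumed)
gamma_w = 1.18       # IRW broadening factor of the weighting (assumed)
theta_rc = 0.0       # squint angle, rad (zero squint assumed)

for Nfft in (64, 256, 1024):
    bw = Ka * Nfft / Fa                       # processed bandwidth, Hz
    rho_t = 0.886 * gamma_w / bw              # resolution in time units, s
    rho_a = rho_t * Vg * math.cos(theta_rc)   # resolution in distance, m
    print(Nfft, round(rho_a, 1))
```

For these assumed numbers the azimuth resolution improves from roughly 90 m at Nfft = 64 to under 6 m at Nfft = 1024, illustrating why short FFTs suit quick-look and ScanSAR products.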
As a second consideration governing FFT length, energy from more than one target should not appear in the
same FFT output cell. For example, Targets A and B in Figure 9.4 appear in the same output cell of the FFT
and will be mixed together. This is a form of aliasing, as either one or both of Targets A and B have arrived
at their present frequency by aliasing of the deramped signal. If the FFT length is longer than one PRF time,
almost every target will be mixed with others. Therefore, the PRF time is an upper bound for the length of the
FFTs.
In addition, when computing efficiency is addressed, it turns out that efficiency poses a more severe constraint
upon the longest practical FFT length. Efficiency is considered in Section 9.4, where it is shown that efficiency
considerations suggest that 70% of the exposure time, Ta, is a reasonable upper limit for the FFT length. The
lower limit is governed by the minimum acceptable resolution, which might call for an FFT length as short as
5% of the exposure time.
The SPECAN algorithm can be modified to yield high resolution imagery. A SPECAN-based algorithm,
combined with a step transform, has been applied to high resolution processing of both stripmap and spotlight
SAR data. The high resolution is obtained by a coherent summation of the FFTs [12-15].
The sample spacing at the FFT output can be derived as follows. While an FFT spans a variable number of
input (time) samples in Figure 9.4, the output space of the FFT always spans one PRF in the frequency domain.
Therefore, the output sample spacing is Fa/Nfft in frequency units, which corresponds to

Δy = Fa / (Nfft Ka) (9.10)

in time units. If Nfft is not an efficient FFT length, the data can be zero padded to a better length, and the
new length used in this formula.
Comparing the sample spacing, (9.10), with the resolution, (9.8), the ratio of resolution to sample spacing is

ρt / Δy = 0.886 γw,a (9.11)

The ratio should be larger than one, as this is approximately the oversampling ratio [see (3.47)]. If the FFT
weighting is kept constant, the oversampling ratio does not vary with range.
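The ratio of (9.11) can be checked directly; because the Fa, Ka, and Nfft factors cancel, it depends only on the weighting. The FFT length and broadening factor below are assumed:

```python
# Output sample spacing versus resolution (illustrative values)
Fa = 1700.0          # PRF, Hz
Ka = 2095.0          # azimuth FM rate, Hz/s
Nfft = 256           # assumed FFT length
gamma_w = 1.18       # assumed IRW broadening factor

dy = Fa / (Nfft * Ka)                          # (9.10): spacing, time units
rho_t = 0.886 * gamma_w * Fa / (Ka * Nfft)     # (9.8): resolution, time units

ratio = rho_t / dy   # (9.11): Fa, Ka, and Nfft cancel, leaving 0.886*gamma_w
print(round(ratio, 3))
```

With γw,a = 1.18 the ratio is about 1.045, slightly larger than one, as required of an oversampling ratio.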
Fan-Shaped Distortion
It is important to note that the azimuth output sample spacing is a function of the azimuth FM rate, Ka, which
varies with range. This property is distinct from the RDA, CSA, or WKA, where the output sample spacing is
constant. To equalize the output sample spacing in the SPECAN algorithm, an azimuth interpolation is needed.
This may not involve additional computing, since the output is usually interpolated to a convenient sample
spacing or map grid, in any case.
If uncorrected, the variable sample spacing creates a fan-shaped distortion in the processed image. An
approximate way of alleviating the distortion is to increase the size of the SPECAN FFTs as Ka decreases, to
keep the denominator in (9.10) constant. If variable-length FFTs are available, the FFT size can be increased one
sample at a time, which leads to a small quantization effect that may be permissible in quicklook images. An
interesting alternative is provided by the use of the chirp-z transform, whose variable transformation properties
can be used to equalize the output sample spacing (see Section 10.6).
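The FFT-resizing idea can be sketched as follows. The variation of Ka across the swath is assumed for illustration; the FFT length is scaled to hold the product Nfft·Ka, the denominator of (9.10), roughly constant, which keeps the output sample spacing nearly uniform with range:

```python
# Equalizing azimuth output sample spacing by varying the FFT length
Fa = 1700.0                           # PRF, Hz
Ka_ref, N_ref = 2095.0, 256           # reference range cell (assumed)

# Azimuth FM rate decreasing across the swath (assumed variation)
Ka_swath = [2095.0, 2060.0, 2025.0, 1990.0]

spacings = []
for Ka in Ka_swath:
    # keep Nfft * Ka (the denominator of (9.10)) roughly constant
    Nfft = round(N_ref * Ka_ref / Ka)
    dy = Fa / (Nfft * Ka)             # output spacing, time units
    spacings.append(dy)
    print(Ka, Nfft, round(dy * 1e3, 4), "ms")
```

Without the correction, a 5% drop in Ka would stretch the spacing by 5%; with the integer FFT resizing the residual spread is only a fraction of a percent, the small quantization effect mentioned above.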
9.2.5 Number of Good FFT Output Points
As in other matched filtering operations, some of the results of the SPECAN algorithm must be thrown away,
because they represent partial or mixed convolutions. The number of good points from each FFT can be deduced
from frequency/time diagrams.
Having chosen an FFT length, the initial position of the FFT and the selection of the valid FFT output
points must be decided. The position of the first FFT is rather arbitrary. Usually, it is placed at the beginning
of the collected data, although it could be selected to obtain a specific scene start time. In the example of
Figure 9.5, an FFT length of 33% of Ta has been chosen, and the first FFT has been aligned with the
beginning of the collected data.
Figure 9.5: Frequency versus time diagram of the first FFT, showing the partially exposed targets (A, B, J,
and K), the fully exposed targets (C through I), the heavy slanted line of slope Ka, and the band of good
output points.
From the figure, it can be seen that Targets A and B are picked up by the FFT. However, their energy
does not fill the whole FFT input array, and they are referred to as "partially exposed" targets. This means that
they are not compressed to their potential resolution, and they should not be used in the output image. The
same is true for Targets J and K. The fully exposed targets are C, D, E, F, G, H, and I. These targets are
compressed to the full resolution offered by the bandwidth captured by the FFT, and they should be retained in
the output image.
Now the number and location of the good points in the FFT output array must be determined. The number
of good points out of each FFT can be deduced geometrically from Figure 9.5. Consider a line connecting the ends of Targets C and I, as shown by the heavy slanted line in the figure. This slanted line completes a triangle whose horizontal part forms the last 67% of Target I, and the vertical part is aligned with the end of the FFT (the end of Target C).
The slope of the slanted line is Ka, as it is parallel to the inclined dashed line of the parallelogram. Then, as the horizontal side of the triangle has a duration of Ta − Nfft/Fa, the vertical side of the triangle has a frequency span of Ka (Ta − Nfft/Fa). The vertical side represents the frequency span of the well-focused points appearing at the FFT output, and the frequency/time domain of these "good points" is indicated by the heavier rectangle, to the left of the triangle in the figure.
The height of the vertical side of the heavy triangle can be divided by Fa to turn this frequency interval into a fraction of the sampling rate, and then multiplied by Nfft to find the number of good points out of the FFT

Ngood = Nfft (Ta − Nfft/Fa) Ka/Fa    (9.12)
assuming that Nfft < Fa Ta. This number is not an integer in general; it can be rounded to the nearest integer
at the data extraction stage.
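Equation (9.12) is easy to check numerically. The sketch below is an illustration, not the book's software; it evaluates Ngood using the generic C-band parameters quoted in the text (Fa = 1700 Hz, Ka = 2095 Hz/s, an aperture of Fa Ta = 1088 samples).

```python
# Number of good points out of each SPECAN FFT, from (9.12):
#   Ngood = Nfft * (Ta - Nfft/Fa) * Ka / Fa,  valid for Nfft < Fa*Ta.
# Parameter values follow the C-band example in the text (Fa = 1700 Hz,
# Ka = 2095 Hz/s, aperture Fa*Ta = 1088 samples).

def n_good(n_fft, f_a=1700.0, k_a=2095.0, t_a=1088.0 / 1700.0):
    """Number of well-focused points at the FFT output (not rounded)."""
    assert n_fft < f_a * t_a, "FFT must be shorter than the processed aperture"
    return n_fft * (t_a - n_fft / f_a) * k_a / f_a

if __name__ == "__main__":
    for n_fft in (128, 256, 544):
        frac = n_good(n_fft) / n_fft   # fraction of good output points
        print(f"Nfft = {n_fft:4d}: Ngood = {n_good(n_fft):6.1f} ({100*frac:.0f}%)")
```

Consistent with Figure 9.6, the percentage of good points falls as the FFT length grows, while the absolute number of good points peaks near one-half of the aperture.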
The number of good points at the output of the FFT is plotted in Figure 9.6 for a generic C-band satellite, with parameters shown in Table 4.1. The PRF is 1700 Hz and the FM rate is 2095 Hz/s. The exposure time is 1088 samples. The dashed line shows that, as the FFT length increases, the percentage of good FFT output points decreases. This effect can be seen in Figure 9.5, where the height of the heavy rectangle (the number of good points) is directly affected by the FFT length (the width of the heavy rectangle).
The solid line in Figure 9.6 shows that the total number of good points increases with FFT length up to a
certain length, then decreases. In the limiting case where the FFT covers the whole processed aperture, Ta, there
is only one good point. It is shown later that the number of good points has a profound effect upon the
efficiency of the SPECAN algorithm.
To determine which of the FFT output points are selected to be the Ngood points, the Doppler centroid and the
initial frequency, or time origin, of the reference function must be taken into account. Considering a target in
Figure 9.3(a), the frequency at the middle of its processed aperture before deramping is equal to the Doppler
centroid value used in the processing. To get the frequency of the target after deramping, ftar,dr, the frequency of the reference function at the time of the middle of the target must be added

ftar,dr = fηc − Ka (ηmid − ηramp0)    (9.13)
[Plot: the number of good points out of each FFT (solid line) and the percentage of good points out of each FFT (dashed line), versus FFT length, for Fa Ta = 1088 samples.]
Figure 9.6: The number of good points in the FFT output versus the FFT length.
where fηc is the Doppler centroid frequency used in the processing, ηmid is the time at the middle of the target exposure, and ηramp0 is the time that the reference function passes through zero frequency. The deramped frequency computed from (9.13) must be interpreted modulo the PRF.
The task now is to find the indices of the FFT output array where the energy of the first and last fully compressed targets lie. These are Targets C and I in Figure 9.5, respectively. For example, the mid-time of Target C equals the time at the end of the FFT minus one-half of the exposure time. The deramped frequencies of these targets can be computed from (9.13).
The output of the FFT has Nfft points, with the kth element in the output array (1 ≤ k ≤ Nfft) corresponding to a deramped frequency of (k − 1) Fa/Nfft. Therefore, the array index of a target is given by

k = (Nfft/Fa) ftar,dr + 1    (9.14)
The range of good points often extends over the end of the FFT output array, in which case modulo(Nfft)
indexing must be used to extract the good points.
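The bookkeeping of (9.13) and (9.14) can be sketched as below. The function names and the η values in the example are illustrative assumptions, not from the text; the modulo operations implement the modulo-PRF interpretation of the deramped frequency and the modulo-Nfft wrapping of the output index described above.

```python
# Deramped target frequency (9.13) and its FFT output index (9.14).
# The eta values used to exercise these functions are illustrative only.

def deramped_freq(f_dc, k_a, eta_mid, eta_ramp0, prf):
    """f_tar,dr = f_dc - Ka*(eta_mid - eta_ramp0), interpreted modulo the PRF."""
    return (f_dc - k_a * (eta_mid - eta_ramp0)) % prf

def fft_index(f_tar_dr, n_fft, f_a):
    """1-based array index k = (Nfft/Fa)*f_tar,dr + 1, wrapped modulo Nfft."""
    k = n_fft / f_a * f_tar_dr + 1.0
    return (k - 1.0) % n_fft + 1.0

if __name__ == "__main__":
    f_dr = deramped_freq(f_dc=300.0, k_a=2095.0, eta_mid=0.1,
                         eta_ramp0=0.0, prf=1700.0)
    print(f_dr, fft_index(f_dr, n_fft=256, f_a=1700.0))
```

The fractional index returned here would be rounded to the nearest integer at the data extraction stage, as noted for (9.12).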
As Target I is the last good point processed by the first FFT, where should the next FFT be placed so that a
continuous output scene is obtained? The answer can be deduced from Figure 9.7, where the second FFT is
placed so that Target I is the first good point out of the second FFT. To achieve this, the second FFT is placed
so that its end corresponds to the end of the processed exposure of Target I. This choice of FFT separation guarantees frequency continuity in the good points selected, and hence, spatial or time continuity of the targets in the output array. The good points in the first FFT (C to I) occupy one frequency span, and the good points in the second FFT (I to O, with N and O wrapped around) occupy another frequency span. These two frequency spans are continuous (contiguous) at Target I.3
[Figure 9.7: Frequency/time diagram showing the placement of FFT 1 and FFT 2, with a gap between them.]
To change the FFT length in Figure 9.7 (e.g., to shorten it), the start of FFT 1 and the end of FFT 2 are kept in their same locations. The end of FFT 1 is moved left and the start of FFT 2 is moved right, to obtain the required FFT length. This keeps Target I as the point of continuity between the two FFTs, and ensures continuity of the processed image.
The separation in time between the beginning of the first FFT and the beginning of the second FFT can be deduced as follows. The Ngood good points out of the first FFT cover a frequency span of Fa Ngood/Nfft. This in turn corresponds to a time span of Fa Ngood/(Nfft Ka). This value is then multiplied by Fa to convert to samples, to obtain the delay of the second FFT with respect to the first FFT

NFFT_delay = Ngood Fa² / (Nfft Ka)    (9.15)
Note that Ngood is the FFT "delay" in the FFT output space, and the remaining factor in (9.15) serves to convert
from output samples to FFT input samples. Substituting for Ngood from (9.12), it is seen that NFFT_delay is a
linear function of FFT length
NFFT_delay = Fa Ta − Nfft    (9.16)
The delay between adjacent FFTs is illustrated by the solid curve in Figure 9.8, using the same parameters as in Figure 9.6. As the FFT length increases, the FFT delay goes down linearly, according to (9.16). This affects efficiency, which is dramatically seen when the FFT delay is expressed in time units (see the dash-dot line in the figure). This means that a shorter image section is produced by each FFT as the FFT length increases, resulting in a low computing efficiency for FFT lengths that are more than one-half of the processed aperture.
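Substituting (9.12) into (9.15) confirms the linearity of (9.16). The sketch below is an illustrative numerical check, using the C-band parameters of Figure 9.6.

```python
# Check that the FFT delay (9.15), NFFT_delay = Ngood*Fa^2/(Nfft*Ka),
# reduces to the linear form (9.16), NFFT_delay = Fa*Ta - Nfft,
# once Ngood from (9.12) is substituted.

F_A, K_A = 1700.0, 2095.0        # PRF (Hz) and azimuth FM rate (Hz/s), from the text
T_A = 1088.0 / F_A               # processed aperture time (s)

def fft_delay(n_fft):
    n_good = n_fft * (T_A - n_fft / F_A) * K_A / F_A   # (9.12)
    return n_good * F_A**2 / (n_fft * K_A)             # (9.15)

for n_fft in (100, 300, 544, 800):
    assert abs(fft_delay(n_fft) - (F_A * T_A - n_fft)) < 1e-6   # (9.16)
```

With an aperture of 1088 samples, the delay equals the FFT length at Nfft = 544, which is why gaps appear below one-half of the aperture and overlaps above it.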
[Plot: the FFT delay in samples (solid line), the FFT delay in seconds ×60 (dash-dot line), and the FFT overlap (dashed line), versus FFT length, for Fa Ta = 1088 samples.]
Figure 9.8: The delay between adjacent FFTs versus the FFT length (single-look case).
Overlap of FFTs
Note that FFTs 1 and 2 have a gap between them in Figure 9.7. If the FFT length is less than one-half of the
processed aperture, there will be gaps between the FFTs, and if the FFT length is greater than one-half of the
processed aperture, there will be overlap between the FFTs. The overlap is Nfft − NFFT_delay samples, a linear function of FFT length. This is illustrated by the dashed line in Figure 9.8, where the processed aperture is 1088 samples. For this curve, negative values indicate a gap between successive FFTs, and positive values indicate an overlap between the FFTs, given by the number of samples shown on the vertical axis.
The implications of the gaps and overlaps are discussed in later sections, as computing efficiency and image
quality are affected by the FFT length and corresponding overlap.
To obtain continuity in the output image, the good points of each FFT are stitched together. An example of
stitching the FFT results is illustrated in Figure 9.9, in accordance with the target placement shown in Figure
9.7. The targets are assumed to be of equal strength, and the antenna pattern is ignored in order to illustrate
the magnitudes of partial targets.
Figure 9.9(a) shows the FFT 1 output array after unwrapping. The partially exposed targets, A, B, J, and K, do not have the resolution or magnitude of the fully exposed targets C to I. Figure 9.9(b) shows the FFT 2 output array after unwrapping and registering to the output array of FFT 1. The partially exposed targets are now G, H, P, and Q, and the fully exposed ones are I to O. Then, the partially exposed targets are discarded, and the beginning of the good points of FFT 2 are stitched to the end of the good points of FFT 1.
The stitched results are shown in Figure 9.9(c). The stitching position is at the peak of Target I, and only the fully exposed targets are retained. It can be seen that the output is continuous through the stitching point.
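The extraction-and-stitching step can be illustrated with a toy sketch. This is not the book's implementation: the FFT outputs below are stand-in integer arrays and the starting indices are arbitrary; the point is the modulo-Nfft extraction of good points and their concatenation into a continuous output line.

```python
import numpy as np

# Toy sketch of stitching: extract the Ngood good points from each
# (already computed) FFT output array, using modulo-Nfft indexing, and
# concatenate them. k_first is the 0-based index of the first good point;
# all values here are illustrative, not taken from the text.

def extract_good(fft_out, k_first, n_good):
    n_fft = len(fft_out)
    idx = (k_first + np.arange(n_good)) % n_fft   # wrap past the array end
    return fft_out[idx]

def stitch(fft_outputs, k_firsts, n_good):
    """Concatenate the good points of successive FFTs into one output line."""
    return np.concatenate([extract_good(out, k, n_good)
                           for out, k in zip(fft_outputs, k_firsts)])

out1 = np.arange(8)          # stand-ins for two 8-point FFT output arrays
out2 = np.arange(8) + 100
line = stitch([out1, out2], k_firsts=[6, 1], n_good=3)
print(line)                  # good points of out1 wrap around its array end
```

The wrap in the first extraction mirrors the situation described above, where the range of good points extends over the end of the FFT output array.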
In Figure 9.7, the gap between the two FFTs represents unused information in the received radar data. A simple
way of making use of this unused information is to place another FFT in the gap between FFT 1 and FFT 2.
This is done in Figure 9.10, where a fourth FFT is included to complete the illustration. The FFTs are now
contiguous, and they are renumbered for convenience.
[Plot panels: (a) the output of the first FFT, with Targets A to K; (b) the output of the second FFT, with Targets G to Q; the targets are of equal magnitude.]
Figure 9.9: Stitching together of two FFT outputs to form an output array that is continuous in azimuth.
It is instructive now to examine the good points out of each FFT, and see how many times each target has
been properly processed. The number is listed in Table 9.1. Targets A and B are not fully processed at all. Targets C, D, and E are processed in one FFT only, representing the startup conditions of the processor. All the other targets are processed in at least two FFTs and represent fully processed data.5
This means that if the final image is started at Target F, two complete looks can be assembled for each
target. To do this, the good output points of FFT 1 are divided into two groups, called Look 1 and Look 2.
Only Look 1 is used from this first FFT. The good output of each successive FFT is divided into two looks in a
similar fashion. This grouping of good FFT output points into looks is illustrated in Figure 9.11. The lines
showing the targets have been removed for clarity, but the arrangement of targets is the same as in the
preceding figures.
The good FFT output points are separated into looks, and stitched together to form a continuous output
image, as illust rated in Figure 9.12. The data from the looks are added incoherently to form the image, with the
additions proceeding vertically in Figure 9.12. If desired, the looks can be weighted unevenly to achieve certain
effects. For example, Bamler and Eineder have proposed a weighting scheme to obtain equal SNR from each output sample [16].
Figures 9.10 and 9.11 illustrate a two-look case in which the FFT length equals one-third of the processed
beamwidth. The FFTs are contiguous, with no overlap. This scheme can easily be generalized into an arbitrary
number of looks. In general, when there is no FFT overlap, it can be seen from Figure 9.10 that the FFT length for the multilook case is equal to the duration of the processed beamwidth divided by (Nlooks + 1), where Nlooks is the number of looks. As in the RDA, the looks can be overlapped to equalize the utilization of data, because the weighting function reduces the emphasis of the look edges. A moderate amount of look overlap tends to produce the best image quality for a given resolution.
[Figure 9.10: Frequency/time diagram showing four contiguous FFTs (FFT 1 to FFT 4) covering the target trajectories, with no gaps between them.]
[Frequency/time diagram showing the good points per look, with the good points of each of FFT 1 to FFT 4 divided into Look 1 and Look 2.]
Figure 9.11: Division of the good points in the FFT outputs into looks.
[Diagram: the Look 1 and Look 2 outputs of FFT 1 to FFT 4, aligned in azimuth before summation.]
Figure 9.12: Division of the good FFT output points into looks, and alignment of the looks before summation, to form a contiguous multilook image.
The main factor affecting the efficiency of the SPECAN algorithm is the FFT delay or offset, as it governs the
number of FFTs that must be computed per second. It is a function of the number of good points extracted
per look. Starting with the FFT delay (9.15) in the single-look case, the delay is divided by the number of looks
to obtain the FFT delay in the multilook case
NFFT_delay = Ngood Fa² / (Nlooks Nfft Ka)    (9.17)
where Ngood is the total number of good points per FFT defined in (9.12).
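The contiguity condition quoted earlier, Nfft equal to the processed beamwidth divided by (Nlooks + 1), follows from (9.17). The sketch below is an illustrative check, using the C-band parameters of Figure 9.6, that this FFT length makes the multilook delay equal to the FFT length.

```python
# Multilook FFT delay (9.17): NFFT_delay = Ngood*Fa^2 / (Nlooks*Nfft*Ka).
# Substituting Ngood from (9.12) gives (Fa*Ta - Nfft)/Nlooks, so the FFTs
# are contiguous (delay == Nfft) when Nfft = Fa*Ta/(Nlooks + 1).

F_A, K_A = 1700.0, 2095.0
T_A = 1088.0 / F_A

def ml_fft_delay(n_fft, n_looks):
    n_good = n_fft * (T_A - n_fft / F_A) * K_A / F_A   # (9.12)
    return n_good * F_A**2 / (n_looks * n_fft * K_A)   # (9.17)

for n_looks in (1, 2, 3):
    n_fft = F_A * T_A / (n_looks + 1)                  # contiguous-FFT length
    assert abs(ml_fft_delay(n_fft, n_looks) - n_fft) < 1e-6
```

For two looks this gives an FFT length of one-third of the aperture, matching the two-look example of Figures 9.10 and 9.11.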
The FFT delay in the multilook case is plotted in Figure 9.13. The horizontal and vertical scales have been normalized to be a fraction of the processed beamwidth (i.e., a fraction of Ta), as the FFT delay is relatively insensitive to the various parameters used in C-band satellite SARs.
[Plot: the FFT delay as a fraction of Ta versus the FFT length as a fraction of Ta, for different numbers of looks.]
Figure 9.13: FFT delay as a fraction of the processed aperture. The crosses "x" indicate the FFT length that leads to contiguous FFTs, while the circles "o" indicate the FFT length when the overlap is 40%.
Another way to view efficiency is by examining the overlap between looks. This is plotted in Figure 9.14, where the one-look case is also included (in the one-look case, the variable is "FFT overlap," rather than "look overlap"). Compared to Figure 9.8, the overlap curves are nonlinear, because the vertical scale has been normalized by the FFT length. In practice, a look overlap between 0% and 40% is commonly used.
Efficiency Factor
For a given number of looks, the main efficiency factor in (9.17) is Ngood/Nfft, which is the fraction of FFT output points that can be used to form the image. The FFT length, Nfft, is a direct function of the azimuth
resolution, so it is convenient to evaluate computing efficiency as a function of resolution. This is done in Figure
9.15, where the generic C-band satellite SAR parameters of Table 4.1 are used.
[Figure 9.15: the efficiency factor, Ngood/Nfft, plotted versus azimuth resolution (m), from 0 to 160 m.]
It can be seen that, as the processed resolution approaches the fineness of the theoretical resolution (about 7
m in this case), the efficiency drops drastically. The resolution can be characterized by the ratio of the FFT duration, Nfft/Fa, to the processed aperture time, Ta. When the FFT length is relatively short compared to the aperture time, the processing efficiency is high, and when the FFT length approaches the aperture time, the efficiency
drops. For this reason, the SPECAN algorithm would not normally be used to obtain resolutions finer than those
corresponding to about 70% of the effective target exposure time, Ta .
Another way to measure computing requirements is to count the number of FFTs per second, or the number of
arithmetic operations per second. The number of FFTs per second per range cell is given by
NFFTs/s = Fa / NFFT_delay    (9.18)
and the number of real operations per second per range cell for the FFTs is
Nops = 5 Nfft log2(Nfft) NFFTs/s    (9.19)
This does not count the reference function multiply or the other operations in Figure 9.1, which require about 20% additional arithmetic. However, comparing the FFT operations is a simple and effective way of comparing algorithm efficiencies.
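Equations (9.18) and (9.19) combine into a small operations-count calculator. The sketch below is illustrative only: it assumes the single-look delay of (9.16) and counts only the FFT arithmetic, as in the text.

```python
import math

# Operation count per range cell, from (9.18) and (9.19):
#   NFFTs/s = Fa / NFFT_delay,   Nops = 5 * Nfft * log2(Nfft) * NFFTs/s.
# Single-look case, with NFFT_delay from (9.16); C-band parameters from the text.

F_A, T_A = 1700.0, 1088.0 / 1700.0

def ops_per_second(n_fft):
    delay = F_A * T_A - n_fft                            # (9.16), single look
    ffts_per_s = F_A / delay                             # (9.18)
    return 5.0 * n_fft * math.log2(n_fft) * ffts_per_s   # (9.19)

for n_fft in (128, 256, 512):
    print(f"Nfft = {n_fft}: {ops_per_second(n_fft)/1e3:.1f} kops/s per range cell")
```

As expected from Figure 9.15, the count grows rapidly as the FFT length approaches the aperture length, because the delay in the denominator shrinks.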
From (9.17) and (9.18), it can be seen that, for a given resolution, the SPECAN computing requirements are
directly proportional to the number of looks
NFFT/s = Nlooks Nfft Ka / (Ngood Fa)    (9.20)
Varying the number of looks provides a flexible tradeoff between image quality and computing requirements in the SPECAN algorithm. It has been shown that the quality of a SAR image is proportional to the number of looks divided by the resolution [17]. For example, to obtain the fastest processing at the expense of image quality, the minimum number of looks is taken (i.e., one look). The use of a coarse resolution will also reduce the
computation load, as seen in Figure 9.15.
In Figure 9.7, it is shown how gaps appear between the FFTs, when only one look is used and the
resolution is coarser than that corresponding to half the processed bandwidth. In Figure 9.10, it is shown how
FFTs can be placed in between other FFTs to obtain more than one look, and to obtain better image quality.
How many looks is it sensible to take, if better image quality is desired at a given resolution?
Experience has shown that better image quality is obtained by inserting more looks into the processing chain,
up to the point where the looks overlap by approximately 20%. Normally, one would not expect image quality to
be improved by overlapping looks, but a small amount of overlap between the FFTs is beneficial, when weighting
is used to taper or deemphasize the data at each end of the FFT input array. The overlap tends to recover
information lost by the weighting.
The tradeoff between efficiency (operations per second) and image quality (azimuth resolution and number of
looks) is illustrated in Figure 9.16. The lower curve is the single-look case, giving the fastest computing but the
lowest image quality. The middle curve is the multilook case, where the looks are overlapped by 20%, which
represents the highest image quality for the given resolution. In between these two cases, the number of
operations varies in proportion to the number of looks, according to (9.20).
Figure 9.16: Number of million real operations per second required for the
FFTs per range cell.
Since the curves are nonlinear, the efficiency of the SPECAN algorithm tends to favor the use of coarser
resolutions with more looks. For example, 50-m resolution with eight looks provides more efficient computing than
25-m resolution with four looks. This is in contrast to the RDA, whose computing requirements are a weaker
function of resolution and number of looks.
The RDA computing requirements for azimuth compression are also drawn in Figure 9.16. The multilook
RDA case is given by the line with the square symbols, which can be compared with the multilook SPECAN
case, the line with the diamond symbols. It is seen that, for coarser resolutions, the RDA takes about twice the
FFT arithmetic, which is a direct result of the RDA's need for doing an inverse FFT as well as a forward FFT.
However, at finer resolutions, the efficiency factor of Figure 9.15 takes effect, and the SPECAN algorithm becomes
less efficient than the RDA.
When single-look processing is done (the circles in Figure 9.16), the comparison leans even more towards the
SPECAN algorithm. This is because the SPECAN arithmetic is directly proportional to the number of looks,
whereas in the RDA, only the number of IFFTs is proportional to the number of looks, while the forward FFTs
remain the same.
In summary, the SPECAN algorithm is more efficient than the RDA for any resolutions coarser than about
70% of the finest resolution. However, there is a small image quality penalty to pay, which is discussed in Section
9.7. The SPECAN efficiency is highest when fewer looks are taken, which explains its popularity in quicklook
processors.
Memory Requirements
The multilook SPECAN algorithm was originally developed for spaceborne use, so it was considered important to minimize the amount of memory needed, as well as to minimize the number of arithmetic operations.
In fast algorithms, the memory requirements are proportional to the processing block length, which is
generally equal to the longest FFT used. In the SPECAN algorithm, the FFT length is dictated by the resolution
and is less than the full matched filter length, while in fast convolution algorithms, like the RDA, the FFT
length is usually two to four times the full matched filter length. Thus, the SPECAN algorithm requires the
lowest amount of memory among common fast SAR processing algorithms.
In the RDA, CSA, and WKA, RCMC is performed efficiently with controllable accuracy. However, because the
data are not in the true azimuth frequency domain, only a simplified version of RCMC can be performed
efficiently in the SPECAN algorithm. Fortunately, since the SPECAN algorithm is designed for high efficiency at
a moderate resolution, the limited RCMC accuracy can usually be accepted. In this section, the form of RCMC
that can be done efficiently with the SPECAN algorithm is explained, and its accuracy analyzed.
According to (5.58), the RCM of a target can be decomposed into a linear component, a quadratic component,
and small higher order terms. In the SPECAN algorithm, the data are represented in the time domain, so the
"bulk-processing" efficiency of performing RCMC in the frequency domain is not easily available. However, RCMC
can be performed with bulk-processing efficiency by restricting the correction to the linear component only.
This simplified RCMC is performed using a range shift, with the amount of shift equal to a linear function
of azimuth. This is called "linear RCMC," since it corrects the linear component of the RCM only. For a target
with zero Doppler time ηd, the linear RCM (5.58) is modeled as

ΔRlin(η) = −(λ fηc/2) (η − ηd)

Since Vr and fηc are range dependent, the RCMC is range variant, but it can be performed approximately in a
range-invariant fashion, using a constant shift for a group of range cells. The linear RCMC can either be
implemented by a time domain interpolation or by a linear phase shift in the range frequency domain, assuming
that the shift is constant within a range invariance region.
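A minimal sketch of linear RCMC via a range frequency phase ramp is given below. This is not the book's implementation: the shift-per-line slope is an arbitrary illustrative value, and the shift is applied per azimuth line rather than held constant over an invariance region; a shift of d samples corresponds to multiplying the range spectrum by exp(−j2πfd).

```python
import numpy as np

# Sketch of linear RCMC implemented as a shift in the range frequency
# domain: multiplying the range spectrum by exp(-j*2*pi*f*d) delays the
# signal by d samples. The shift grows linearly with azimuth; the slope
# value used below is illustrative only.

def linear_rcmc(data, shift_per_line):
    """data: 2-D complex array (azimuth lines x range samples)."""
    n_az, n_rg = data.shape
    f = np.fft.fftfreq(n_rg)                 # range frequency (cycles/sample)
    out = np.empty_like(data, dtype=complex)
    for i in range(n_az):
        d = shift_per_line * i               # range shift linear in azimuth
        spec = np.fft.fft(data[i])
        out[i] = np.fft.ifft(spec * np.exp(-2j * np.pi * f * d))
    return out
```

Because the shift is applied as a phase ramp, subsample (noninteger) shifts come for free, which is the usual reason for preferring the range frequency domain over a time domain interpolator.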
After the linear component of RCM is corrected, a residual quadratic component remains. The residual RCM is bounded by the quadratic RCM, ΔRquad, given by (5.62) and shown in Figure 5.12. As cited in Section 5.5.1, ΔRquad is approximately 3 m for a typical C-band spaceborne case, which is an order of magnitude smaller than the resolution usually processed by the SPECAN algorithm. This justifies ignoring the quadratic component of RCM for C-band satellite SARs.
9.5.2 Skewing and Deskewing
The linear RCMC results in a skew of the data, and its effect must be offset by a deskewing operation after
azimuth compression. To illustrate the geometry of the skewing and deskewing, the same squinted data acquisition
geometry as in Figure 4.2 is used.
The effects of the linear RCMC, azimuth compression, and deskewing operations of the SPECAN algorithm
are illustrated in Figure 9.17. Figure 9.17(a) shows the received target trajectories, assuming their range displacement is approximately a linear function of azimuth. The required linear RCMC shift is shown by the arrows in Figure 9.17(a), and the shifted results are shown in Figure 9.17(b). After linear RCMC, the targets are aligned
in cross range instead of in azimuth, but the axis is usually referred to as "azimuth" when the squint is small.
Targets that were in the same range gate before RCMC have been shifted to other ranges (e.g., see Target C
compared with Target A).
[Figure 9.17: (a) the received target trajectories, with the linear RCMC shift shown by arrows; (b) the trajectories after linear RCMC; (c) the compressed targets, skewed in range, with the deskew shift shown by arrows; (d) the final deskewed image, with Targets A, B, C, and D properly positioned.]
Azimuth compression is now performed by the SPECAN algorithm. The algorithm registers the targets to their zero Doppler azimuth locations, as discussed in Section 9.2. This means the four targets are compressed to their correct azimuth time coordinates, as shown in Figure 9.17(c). However, the data are skewed in range as a result of the earlier linear RCMC operation.
After azimuth compression, the skew can be corrected by inverting the linear RCMC operation. This is done
by linearly shifting each range line by an amount equal and opposite to the shift of the linear RCMC, as shown
by the arrows in Figure 9.17(c). This operation is called "deskewing." The final compressed image is shown in Figure 9.17(d), with the four targets properly positioned as in Figure 4.2(c).
Another important aspect of Figure 9.17(b) must be emphasized. The trajectories after the linear RCMC are aligned with the vertical axis, so that azimuth compression can be done along each vertical column. However, targets along a given column originate from different range gates. In other words, when the azimuth compressor moves down in azimuth, it traverses ranges as well. Since the azimuth FM rate, and often the Doppler centroid,
are functions of range, the deramp function and throwaway regions should be updated regularly in azimuth. The
position, and is portrayed by the solid line in Figure 9.19(a).
The purpose of phase compensation is to make the phase at the peak of the compressed target the same as if it
were compressed by a conventional matched filter.7 Since the third exponential term is zero at the peak, it does not require compensation. However, the first two exponential terms in (9.24) do affect the phase at the target peak and can be compensated.
The phase at the peak should be made zero, so that the remaining phase is that due to the target's complex scattering coefficient and its slant range. To achieve this, the compensation is given by the conjugate of the first two exponential terms in (9.24), with ηd replaced by the time variable, η′. The replacement is made so that the compensation applies to every output sample, not just the target considered in (9.24).
After the compensation, the signal is

s′(η′) ≈ ρa(η′ − ηd) exp[ jπ Ka (η′ − ηd)² ] exp[ −j2π Ka (η1 + Tfft/2 − ηd) (η′ − ηd) ]    (9.25)

where ρa(·) is the compressed target envelope, with its peak at η′ = ηd, and the last step is obtained after some algebraic manipulations. The target phase after compensation is
illustrated in Figure 9.19(b). Note that the compensated phase is zero at the peak of the compressed target, η′ = ηd, as shown by the dashed line.
[Plot: the target phase versus output sample, (a) before and (b) after compensation.]
Figure 9.19: The target phase (a) before and (b) after compensation.
The first exponential phase term in (9.25) is quadratic, and the second phase term is linear in η′. These terms are discussed next.
The linear phase term, −2π Ka (η1 + Tfft/2 − ηd) (η′ − ηd), has a slope that depends upon the position of the FFT, η1, relative to the zero Doppler time of the target, ηd. Examining the frequency portion of the expression, Ka (η1 + Tfft/2 − ηd), note that η1 + Tfft/2 gives the time of the center of the FFT, and η1 + Tfft/2 − ηd gives the same time but referenced to the zero Doppler time, ηd, of the target. When this time is multiplied by Ka, it is converted to frequency, corresponding to the Doppler frequency of the target at the center of the FFT. This dependence of the slope of the linear term on the target position can be seen in the solid lines in Figure 9.19(b).
This means the phase correction reinstates the Doppler centroid frequency of the target to the value it had
before the deramping, taking into account the fact that the FFT truncates the target exposure. A conventional
matched filter would give the same linear phase term, had it been used to process the data limited by the FFT
extent.
The quadratic phase term represents a small phase modulation that is mainly of interest across the main lobe of
the impulse response of the compressed target. Where does it come from?
Recall that the processed signal length is the FFT length in the SPECAN algorithm, but the matched filter
length corresponds to the total beamwidth. This is because the deramp function compresses all targets within the
FFT window, whose frequencies span the total beamwidth. As explained in Appendix 3A, the quadratic phase
term arises from the fact that the duration of the filter is longer than the duration of the signal.
How big is the quadratic phase term across the main lobe of a target? According to (9.25), the maximum quadratic phase over a resolution element, ρa,t, is π Ka (ρa,t/2)², which increases with the square of ρa,t, or decreases with the square of the FFT length. In some cases, especially for low resolution, the quadratic phase can be significant. For a typical satellite C-band example, let the azimuth FM rate be 2095 Hz/s, the beam ground velocity be 6700 m/s, and the processed resolution be 100 m. Then, the 3-dB IRW occupies a time span of 100/6700, or 0.015 seconds. The maximum quadratic phase is 2095 π (0.015/2)² = 0.37 rad, or 21°.
Fortunately, the quadratic phase term is the same for all targets, as the solid lines in Figure 9.19(b) all have
the same second derivative. In interferometric SAR, if both passes are processed to the same resolution with the
FFT locations synchronized, the quadratic phase disappears when the interferogram is formed.
Discussion
It is important to note that the phase properties discussed above are a function of the way the SPECAN FFT extracts the data, with a different Doppler centroid for each target. Compressing the targets with the SPECAN
combination of deramping, FFT, and phase compensation gives a result that is the same as when a short burst
of data is convolved with a long matched filter. This is the same scenario found in ScanSAR data collection, so
the same phase properties are obtained when ScanSAR data are processed (see Chapter 10).
Some image quality issues that are unique to the SPECAN algorithm are discussed in this section. Some of the points also pertain to burst-mode SAR data.
In general, the image quality is more difficult to maintain than in the RDA, because of the time-varying
Doppler of the processed targets. On the other hand, image quality is less important, because the SAR data are
usually processed to a coarser resolution (e.g., to get a quicklook image) . This section focuses on the following
issues: (1) the effects of frequency discontinuities, (2) the effects of an azimuth FM rate error, and (3) radiometric
effects caused by processing targets with a varying Doppler centroid.
Figure 9.9 shows how FFT output arrays are stitched together to form a continuous image of the ground.
Unfortunately, this stitching has phase implications, since some targets are inevitably split into two parts by the
stitch. Each part originates from a different FFT, so that parts of the stitched target come from different regions
of the Doppler spectrum. Consider the stitching that takes place at Target I in Figures 9.7 and 9.9. The last good point of FFT 1 comes from the leading edge of the beam, while the first good point of FFT 2 comes from
the trailing edge of the beam. In this section, the unavoidable frequency discontinuities that occur at the stitching
points are discussed.
What happens to Target I when the good output points of FFTs 1 and 2 are stitched together? Figure 9.20
shows the results when Target I lies half-way between the last good sample taken from FFT 1 and the first good
sample from FFT 2 (i.e., the stitching occurs at the center of the target). First, the point target response is
examined when the output of each FFT is analyzed separately. This is done with a point target analysis program
that uses some of the "bad" points off the end of Target I (this provides an accurate analysis as the exposure of
Target I is fully captured by each of the two FFTs). Figure 9.20(a, b) shows the magnitude and phase of Target
I. The solid lines show the response taken from FFT 1, while the dashed lines show the response taken from
FFT 2. In Figure 9.20(a), the dashed line is not seen, since it lies on top of the solid line, showing that the
magnitude responses are the same from each FFT.
Figure 9.20: Phase and frequency discontinuities that occur at stitching points.
However, in Figure 9.20(b), where the phase histories are shown before compensation, it can be seen that the
phases are different. Each plot is a linear ramp having the same slope of -π radians per sample, as the
deramped Target I has the same frequency in each FFT. The phases differ by a constant, because of the
different FFT start times, as shown by the second exponential term in (9.24). Next, the phases are
compensated using (9.25). The compensated phases are shown in Figure 9.20(c), and the following points are
noted:
o The phase values at the peaks of the target are the same (the peak of Target I lies at Sample 2.5, where
the phase is zero), but are different at other places. This agrees with our definition of phase preservation,
which pertains to the peak of each target.
o The slope of the phase ramps (the frequencies) are different, although they are the same before the
compensation.
Thus, the phase compensation has given the compressed target the same phase and phase slope as if a proper
matched filter had been applied (i.e., an RDA matched filter with a length and location corresponding to the
SPECAN FFT).
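The constant phase offset between FFTs, caused by the different start times, can be illustrated with a toy deramped target. This is only a sketch: the sampling rate, tone frequency, and FFT length below are assumed illustrative values, not parameters from the text.

```python
import numpy as np

Fa = 1000.0   # assumed azimuth sampling rate (Hz)
f0 = 125.0    # assumed frequency of the deramped (constant-tone) target
N = 64        # SPECAN FFT length

def fft_peak_phase(start_sample):
    # One SPECAN FFT window of the deramped target: a pure tone whose
    # frequency is the same in every FFT, but whose start time differs.
    n = start_sample + np.arange(N)
    tone = np.exp(2j * np.pi * f0 * n / Fa)
    spec = np.fft.fft(tone)
    k = np.argmax(np.abs(spec))
    return np.angle(spec[k])

p1 = fft_peak_phase(0)    # FFT 1
p2 = fft_peak_phase(20)   # FFT 2, starting 20 samples later

# The magnitude responses are identical, but the peak phases differ by
# a constant given by the start-time term, 2*pi*f0*start/Fa (modulo 2*pi).
offset = (p2 - p1) % (2 * np.pi)
predicted = (2 * np.pi * f0 * 20 / Fa) % (2 * np.pi)
```

Running this shows the measured peak-phase difference matching the start-time term, which is the constant offset visible between the two curves of Figure 9.20(b).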
The output array after stitching is examined next. The severe frequency discontinuity at the stitching point
will affect any point target analysis done near the stitching point. So, to examine the point target properties after
the stitching, the interpolated responses can be stitched together. The first half of the stitched result will be
taken from the solid curves of Figure 9.20(a, c), while the second half will be taken from the dashed curves. The
stitching point is at Sample 2.5 in the figure.
The magnitude of the stitched result remains unchanged, as shown in Figure 9.20(d). The magnitude always
remains unchanged, regardless of the stitching point, because each target in the good FFT output points is fully
compressed.
Figure 9.20(e) shows that the phase of the stitched result is continuous, but the frequency or phase slope in
the left half is different from that in the right half. This agrees with the change in processed Doppler between
FFTs 1 and 2 discussed above. If, however, the stitch occurs off the peak of the point target, the phase also
becomes discontinuous at the stitching point. This is illustrated in Figure 9.20(f), where the stitching point has
been moved slightly to the left of the peak.
Owing to these phase and frequency discontinuities, caution must be exercised when performing point target
analysis on the output array. When a target is close to a boundary, there probably will be distortions in the
measurements. The worst case occurs when the target is split in half, as is the case in Figure 9.20(e).
If a target of interest happens to lie on a stitching boundary, can its image quality still be measured?
Fortunately, yes, provided that the data are kept in complex form and a record of the stitching positions is
maintained. Figure 9.20(b) reveals a solution to this measurement problem. First, the phase compensation must
be reversed (or the compensation not done in the first place). Second, the peak position, ηd, of the target to be
analyzed is estimated. Finally, a phase compensation is performed that removes the constant phase term given by
the second exponential in (9.24). After the phase compensation, the two phase histories will be the same, so both
the phase and the frequency will be continuous. This continuity allows accurate point target analysis, even though
the peak phase has been altered, since the phase change can be calculated. Note that this procedure does not
correct the whole dataset, only the target being analyzed.
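The repair procedure can be sketched with the same toy model of a deramped target: each FFT sees a constant tone, and the start-time-dependent constant phase (the second exponential in (9.24)) is removed from each segment using the estimated peak frequency. All numerical values here are assumed for illustration.

```python
import numpy as np

Fa = 1000.0     # assumed azimuth sampling rate (Hz)
f_est = 117.0   # estimated deramped frequency of the target under analysis
N = 32

def fft_segment(start):
    # uncompensated phase history of the target as seen by one FFT
    n = start + np.arange(N)
    return np.exp(2j * np.pi * f_est * n / Fa)

seg1 = fft_segment(0)     # samples contributed by FFT 1
seg2 = fft_segment(100)   # samples contributed by FFT 2 (later start time)

# Remove the constant, start-time-dependent phase from each segment, so that
# the two phase histories agree and the stitched target can be analyzed.
fix1 = seg1 * np.exp(-2j * np.pi * f_est * 0 / Fa)
fix2 = seg2 * np.exp(-2j * np.pi * f_est * 100 / Fa)
```

After the correction, both segments have the same phase and frequency, so a point target analysis across the stitch is no longer distorted.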
Although the topic of interferometric SAR (InSAR) is beyond the scope of this book, it is worth mentioning
how phase is handled in the SPECAN algorithm if the output data are used for InSAR processing [18]. In
InSAR, data are collected for two passes over the same area, using slightly different off-nadir angles. The two sets
of data are registered and conjugate multiplied to form an interferogram. The two sets of data should be
synchronized, so that an FFT in one dataset has the same Doppler centroid as the corresponding FFT in the
other dataset. In this way, the phase slopes of each target are the same in the two datasets. Furthermore, the
phase and frequency discontinuities are the same and therefore cancel each other out.
Note that there also may be phase distortions caused by the approximations and assumptions made in the
SPECAN algorithm. First, the quadratic RCM and higher components are ignored. Second, the data are assumed
to have perfect linear FM characteristics. Unfortunately, the amount of phase distortion is different in each
situation, depending upon the radar and processing parameters, such as resolution, squint, and wavelength, so a
simulation should be done in each situation to quantify the distortion.
As in other algorithms, errors in the azimuth FM rate can cause the azimuth IRW to broaden because of: (1) the
FM rate error in the compression operation, and (2) the look misregistration in multilook processing. These two
factors are addressed here with the aid of frequency/time diagrams, which add an intuitive dimension to the
explanation. Note that other errors, such as Doppler centroid errors, also contribute to azimuth IRW broadening,
but the azimuth FM rate tends to be the dominant error.
The SPECAN algorithm uses the deramp function, which turns the linear FM point target history into a
constant frequency signal. The correctly deramped signal is illustrated by the thick horizontal line in Figure 9.21.
The subsequent FFT yields the best focusing when this line is horizontal. However, if the slope of the deramp
function is Kamr instead of Ka, the resulting product will not have a zero slope (i.e., constant frequency).
Rather, the slope will be

ΔKa = Ka - Kamr    (9.26)
In the presence of an FM rate mismatch, ΔKa, the deramped signal will no longer be monochromatic, but its
frequency will spread, as shown by the dashed line in Figure 9.21. This spread will cause IRW broadening when
the FFT is taken.
As the deramping/FFT of the SPECAN algorithm acts like a matched filter, the IRW broadening behaves
the same as discussed in Section 3.5. It is a function of quadratic phase error (QPE) at the ends of the
processed aperture, as quantified in Figure 3.14. In the SPECAN single-look case, the processed aperture time is
Tfft, and the QPE is

QPE = π ΔKa (Tfft/2)²    (9.27)
which is the same as in the RDA, except that the matched filter duration is defined by the FFT length, Tfft.
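This broadening can be sketched numerically. In the toy example below (the sampling rate, FM rate, and mismatch are assumed values), the target chirp is deramped with a mismatched rate; the residual chirp spreads the FFT output, and the QPE of (9.27) quantifies the mismatch.

```python
import numpy as np

Fa = 1000.0    # assumed azimuth sampling rate (Hz)
Ka = 2000.0    # assumed azimuth FM rate (Hz/s)
N = 256        # FFT length
Tfft = N / Fa
t = (np.arange(N) - N / 2) / Fa

def width_3db(dKa):
    # Deramp the target chirp with rate Ka + dKa; a nonzero dKa leaves a
    # residual chirp, spreading the FFT output (broadening the IRW).
    sig = np.exp(1j * np.pi * Ka * t**2) * np.exp(-1j * np.pi * (Ka + dKa) * t**2)
    spec = np.abs(np.fft.fft(sig, 8 * N))       # oversampled for measurement
    return np.count_nonzero(spec > spec.max() / np.sqrt(2))

def qpe(dKa):
    # quadratic phase error at the ends of the processed aperture, per (9.27)
    return np.pi * abs(dKa) * (Tfft / 2) ** 2

w_matched = width_3db(0.0)
w_error = width_3db(400.0)   # FM rate error of 400 Hz/s
```

With this mismatch the QPE is about 20.6 rad, far beyond the usual tolerance, and the measured 3 dB width grows accordingly.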
The energy of successive targets is extracted from different parts of the nonuniform Doppler spectrum. The two-way
beam pattern has the form of a sinc-squared function, which has its maximum at the beam center and tapers off
at both ends, as shown in Figure 4.10. As a result of this nonuniformity, a radiometric variation in the form of a
periodic scalloping exists because of the time-varying Doppler extraction of the SPECAN algorithm. The purpose
of this section is to explain the nature of the scalloping, and to show how it can be compensated.
The origin of scalloping for the one-look case is illustrated first. Consider three targets extracted by one specific
FFT, for example, Targets C, F, and I of Figure 9.5. The energy of the three targets lies at the beginning, at
the middle, and at the end of the processed beamwidth, respectively, during the time corresponding to the FFT
input domain. These locations are shown in Figure 9.22, where the solid curve represents the two-way beam
pattern corresponding to the target reception times or angles.
Figure 9.22: The location of the energy of three targets captured by one FFT,
with the correct Doppler centroid (solid curve), and with a Doppler centroid
estimation error (dashed curve).
It can be seen that the received energy of the middle target, F, is higher than that of the two edge targets,
C and I, because the beam energy is higher where the FFT extracts the energy for this target. After the
deramping and FFT operations, these targets are compressed into the first, middle, and last of the selected points
of the FFT output array, and the energy of each target is proportional to the received energy at the appropriate
beam times or Doppler frequencies. Specifically, the energy of each target is obtained by integrating the energy of
the part of the beam pattern spanned by the target during the time of the FFT.
To perform the integral, the two-way beam pattern of (4.27) and (4.28) can be used, where the time, η′, is
measured with respect to the beam center. Ignoring the target radar cross section, which is assumed to be
constant in the analysis, the energy of a compressed target centered at η′, with FFT duration Tfft = Nfft/Fa,
is

E(η′) = ∫_{η′ - Tfft/2}^{η′ + Tfft/2} w_a(u) du    (9.31)

where w_a is the antenna pattern. The energy, E(η′), is a function of time, as the integration limits depend upon
the target's azimuth location in relation to the FFT position.
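The integral (9.31) can be evaluated numerically with a toy beam model. The sinc-squared shape, its width, and the window length below are assumed illustrative values, with azimuth time normalized to the processed beamwidth.

```python
import numpy as np

def beam(u):
    # assumed two-way azimuth beam pattern (sinc-squared), with u the time
    # from the beam center, in units of the processed beamwidth
    return np.sinc(1.33 * u) ** 2

Tfft = 0.3   # FFT window length as a fraction of the processed beamwidth

def energy(eta):
    # E(eta): beam energy integrated over one FFT window centered at eta,
    # a numerical counterpart of (9.31)
    u = np.linspace(eta - Tfft / 2, eta + Tfft / 2, 501)
    return beam(u).mean() * Tfft

E_edge = energy(-(0.5 - Tfft / 2))   # target extracted at the beam edge
E_mid = energy(0.0)                  # target extracted at the beam center
```

As expected, the mid-beam target (like Target F) receives more energy than the edge targets (like Targets C and I), which is the origin of the scalloping.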
For a continuum of targets that would be experienced in practice, the offset of the target from the beam
center, η′, varies in a sawtooth fashion, as shown in Figure 9.23(a). Then, the integral (9.31) gives a periodic
variation in energy, as shown by the dashed line in Figure 9.23(b). The period of the energy variation is one
FFT cycle, corresponding to the processed beamwidth less the FFT length (i.e., from Target C to I in Figure
9.22). This periodic structure is known as "scalloping," and can be corrected if the spectrum of the received
energy and the FFT locations are known.
To compensate the data so that the output has a constant energy level, the data are multiplied by the
inverse of the energy profile (9.31). The compensation curve, E⁻¹(η′), is shown by the dotted line in Figure
9.23(b). If the compensation curve is correct, the result will be constant with time, as shown by the solid line in
Figure 9.23(b). Note that one requirement for this ideal compensation is that the received energy be a symmetric
function over each FFT cycle, which requires that the Doppler centroid be known correctly.
Effect of a Doppler Centroid Estimation Error
What happens to the radiometric compensation when there is a Doppler centroid error? As discussed in Chapter
12, there is always some estimation error. Because of the one-to-one correspondence between frequency and time,
a Doppler centroid frequency error translates to a beam center offset time error. This time error results in a
shift of the selection of the good points out of each FFT, or a shift of the beam profile in relation to the
extracted targets, as illustrated by the dashed curve in Figure 9.22. This leads to an error in the applied
radiometric correction curve, as outlined in Figure 9.24.
Figure 9.24(a) shows how the extraction of each target by the FFT has been shifted by the Doppler
centroid error. Relative to the beam center, the extraction points are shifted to the left in Figure 9.22, or shifted
down in Figure 9.24(a), compared to the case with no Doppler estimation error. This results in a skewing of the
extracted energy within the FFT cycles, as shown by the dashed line in Figure 9.24(b). Note how the cyclic
pattern of the received energy lacks the symmetry of the case with no Doppler estimation errors. In particular, a
discontinuity exists in the processed energy at the FFT cycle boundary, which creates a sensitive compensation
situation.
The compensation function is taken to be the same as the previous one, since it is not known that an
estimation error has occurred. The compensation function is therefore no longer well matched to the actual energy
profile. The energy profile is now E(η′ + Δη), where Δη is the error in the beam center offset time, and η′ is
measured from the center of the assumed profile, which is still E(η′). The compensated energy profile is then
E(η′ + Δη) E⁻¹(η′).
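The size of the resulting mismatch can be sketched with an assumed sinc-squared beam and illustrative window parameters: the compensated profile E(η′ + Δη)E⁻¹(η′) is continuous within an FFT cycle but jumps where η′ wraps around at the cycle boundary.

```python
import numpy as np

def beam(u):
    # assumed two-way beam pattern, u measured from the beam center
    # in units of the processed beamwidth
    return np.sinc(1.33 * u) ** 2

Tfft = 0.3                 # FFT window as a fraction of the processed beamwidth
span = 0.5 - Tfft / 2      # extraction offset sweeps +/- span over one FFT cycle

def energy(eta):
    u = np.linspace(eta - Tfft / 2, eta + Tfft / 2, 501)
    return beam(u).mean() * Tfft

def residual(eta, d_eta):
    # actual energy E(eta + d_eta), corrected by the assumed profile 1/E(eta)
    return energy(eta + d_eta) / energy(eta)

def boundary_jump(d_eta):
    # jump in the compensated energy where eta wraps from +span to -span
    return abs(residual(span, d_eta) - residual(-span, d_eta))

jump_no_error = boundary_jump(0.0)
jump_with_error = boundary_jump(0.05)   # centroid error of 5% of the beamwidth
```

With no centroid error the compensated profile is exactly flat at the boundary; with the error, a sizable discontinuity appears, matching the behavior described for Figure 9.24(b).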
Figure 9.23: The scalloping effect and its compensation, for the case of no
Doppler centroid estimation error.
Because E(η′) is periodic, the compensated result displays a periodic variation in energy as a function of
time, as shown by the solid line in Figure 9.24(b). The discontinuity in the output energy is the same size as the
discontinuity before compensation, since the compensation function is a continuous curve. The discontinuity tends
to create the most noticeable radiometric artifact, so its size should be minimized by making the Doppler
estimator as accurate as possible.
The scalloping effect is reduced in multilook processing, especially when the looks are evenly spaced. This is
because the variation in extracted target position within an FFT cycle is less when more looks are taken. This
effect is illustrated in Figure 9.25 for the two-look case, where the solid lines trace the center of each FFT
relative to the beam center as a function of time. Comparing this two-look case with the one-look case of Figure
9.23(a), it is seen that the centroid of each FFT (or the combined centroid of the two FFTs) moves less than in
the one-look case. This is because only one-half of the number of good points are extracted from each FFT
output for each look in the two-look case, so the next FFT does not swing back as far.
Figure 9.25: Position of the target center of two looks, as a function of time.
For N looks and a look spacing of Ts, which is NFFTsep in (9.17) divided by the PRF, the energy profile
becomes
Figure 9.27: Radiometric discontinuity versus FFT length for a varying num-
ber of looks, assuming a Doppler centroid estimation error of 5% of the PRF.
To determine the requirements on the accuracy of the scalloping correction, consider the following points.
o Abrupt discontinuities in radiometry tend to be more noticeable than slow variations in radiometry of the
same size.
o Discontinuities tend to be most noticeable when they occur at the same range, as they create a linear
feature in the image.
To investigate the requirements experimentally, a uniform scene is simulated using random numbers that have
the statistical distribution of N-look SAR data [19]. Discontinuities of increasing size are simulated, creating lines
running in the azimuth direction. The simulated discontinuities in radiometry have sizes of 0.20, 0.33, 0.46, 0.59,
0.72, 0.85, and 0.98 dB, increasing from left to right in Figures 9.28 to 9.31. In Figure 9.28, single-look data are
simulated, and it appears that only the rightmost two or three discontinuities are discernable. Thus for one-look
data, it is proposed that the discontinuities be kept to below 0.6 dB.
However, because the eight-look data are smoother, the eye can pick up smaller radiometric changes. It can
be seen from Figure 9.31 that the rightmost six discontinuities are visible. This suggests that the radiometric
discontinuity should be kept below 0.25 dB in this case. Examining the two-look and four-look cases as well, it
was found that the limit on the size of the radiometric discontinuity should be approximately given by the values
in the second column of Table 9.2.
Table 9.2: Radiometric discontinuity limits and the required Doppler centroid accuracy versus number of looks.

Number of looks   Discontinuity limit (dB)   Doppler centroid accuracy (% of PRF)
       1                   0.60                            2.0
       2                   0.45                            2.8
       4                   0.35                            4.5
       8                   0.25                            6.3
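The visibility experiment above can be sketched as follows. N-look SAR intensity over a uniform scene is modeled as gamma-distributed speckle with shape N, following the statistical model of [19]; a radiometric step is inserted and then measured back from the sample means. The sample count is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_step_db(n_looks, step_db, n=200_000):
    # Two halves of a uniform scene: N-look intensity speckle is modeled as
    # gamma(N, 1/N) (unit mean); the right half carries the radiometric step.
    left = rng.gamma(n_looks, 1.0 / n_looks, n)
    right = rng.gamma(n_looks, 1.0 / n_looks, n) * 10.0 ** (step_db / 10.0)
    return 10.0 * np.log10(right.mean() / left.mean())

measured = measure_step_db(8, 0.25)   # the 8-look limit from Table 9.2
```

Even a 0.25 dB step, near the visibility threshold for eight-look data, is easily recovered from the sample means; it is the eye's sensitivity to a linear feature, not the statistics, that sets the limit.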
However, while the radiometric requirements become more demanding when there are more looks, it can be
seen from Figures 9.26 and 9.27 that the inherent scalloping is less when more looks are taken. Putting these
criteria together, it turns out that the scalloping is more sensitive to Doppler centroid errors when fewer looks
are taken. The required Doppler centroid accuracies for one, two, four and eight looks are shown in the third
column of Table 9.2, expressed as a percent of the PRF. This required Doppler centroid accuracy assumes that
the beam pattern is known accurately-if it is not, additional compensation errors will occur.
Finally, the scalloping is also a function of the SNR of the received signal. So far, a high SNR has been
assumed, whereby the received signal spectrum follows the Doppler spectrum of the antenna beam. When the
SNR is reduced (e.g., when the backscatter is low over smooth water), the received signal spectrum becomes
flatter, thus lowering the variation of E(η′) with time. If the low SNR is not recognized, the scalloping will be
overcorrected, because the correction based upon the high SNR has too much variation.
If there is a Doppler centroid error in the low SNR case, the compensation is also wrong, but the sensitivity
to Doppler estimation errors is reduced because the Doppler spectrum is flatter.
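A toy spectrum model illustrates the flattening. The sinc-squared beam and the 0 dB noise floor below are assumed values: adding a flat noise floor to the beam-shaped signal spectrum reduces its peak-to-trough variation, so a correction designed for the high-SNR case overcorrects.

```python
import numpy as np

u = np.linspace(-0.5, 0.5, 1001)     # processed beamwidth (normalized time)
beam = np.sinc(1.33 * u) ** 2        # assumed two-way beam (signal spectrum)

def peak_to_trough(noise_floor):
    # received azimuth spectrum = beam-shaped signal + flat noise floor
    s = beam + noise_floor
    return s.max() / s.min()

v_high_snr = peak_to_trough(0.0)   # variation assumed by the correction
v_low_snr = peak_to_trough(1.0)    # 0 dB SNR at beam center: much flatter
```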
Therefore, the received SNR, as well as the beam pattern and the Doppler centroid, should be known in
order to provide an accurate radiometric compensation. However, when the SNR is very low, image quality is
reduced, and the radiometric scalloping effect is less important.
The operation of the SPECAN algorithm is illustrated in this section, using simulated point targets and an ERS-1
SAR scene.
An experiment is performed with simulated point targets, using the C-band satellite SAR parameters shown in
Table 4.1. The squint angle is chosen to be 3°, corresponding to a Doppler centroid frequency of more than 13
kHz. A data block of 220 azimuth samples is simulated to give an azimuth resolution of approximately 27 m, and
the data are zero padded to an FFT length of 256 samples. The off-nadir angle is 20°, making the ratio of slant range to
ground range approximately 2.6. The slant range resolution is processed to approximately 10 m, so that square
pixels are obtained in ground range.
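The quoted slant-to-ground ratio can be checked with spherical-Earth geometry: the incidence angle exceeds the off-nadir angle because of Earth curvature. The orbit height below is an assumed typical value for a C-band satellite, not a parameter given in the text.

```python
import math

Re = 6371.0e3          # mean Earth radius (m)
h = 800.0e3            # assumed orbit height (m)
off_nadir = math.radians(20.0)

# Law of sines in the satellite / target / Earth-center triangle gives
# the sine of the incidence angle at the target.
sin_inc = (1.0 + h / Re) * math.sin(off_nadir)

ratio = 1.0 / sin_inc        # ratio of ground-range to slant-range cell sizes
ground_res = 10.0 * ratio    # 10 m slant resolution projected onto the ground
```

This gives a ratio of about 2.6 and a ground range resolution of about 26 m, close to the 27 m azimuth resolution, so the pixels are approximately square on the ground.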
Figure 9.32: Simulated data: (a) after range compression, (b) after linear
RCMC, and (c) after deramping and the SPECAN FFT.
Targets of equal magnitudes are evenly spaced in the simulation, as shown in Figure 9.32. Their slant range
positions are staggered so that they all lie in the same range gate after linear RCMC. The data are first range
compressed, as shown in Figure 9.32(a). The range migration is noticeable, being about two range cells within the
220 azimuth samples. After the linear RCMC, the data are correctly aligned in azimuth, as shown in Figure
9.32(b). The data are then deramped and FFTed, resulting in the targets being compressed into the same range
cell, as shown in Figure 9.32(c).
A vertical slice of the FFT output is shown in Figure 9.33(a), to illustrate the magnitudes of the compressed
targets. The dashed line is the two-way antenna pattern. The magnitude of each target depends upon the integral
of the dashed line over the FFT length. Then the incompletely compressed samples are thrown away, and the
data wraparound removed, to obtain a contiguous set of compressed targets, as shown in Figure 9.33(b). After the
descalloping or radiometric correction, the targets will have equal strength.
Figure 9.33: Magnitudes of the compressed targets at the SPECAN FFT output, with the two-way antenna pattern (dashed) and the throwaway regions indicated (azimuth time in seconds).
Figure 9.35 shows an ERS-1 stripmap image processed using the SPECAN algorithm. The SPECAN algorithm is
used in range as well as in azimuth. The processor was programmed by Cathy Vigneron and Terry Ngo at UBC,
using the IDL environment. The resolution, number of looks, and image size are operator selectable; in this case,
the resolution is 60 m in range and azimuth, with 15 looks taken in azimuth.
The SAR signal data was acquired by ERS-1 on April 24, 1992 (orbit 4044, frame 981). The scene center is
approximately 49.2° N, 122.6° W , with north towards the top right corner. The city of Vancouver is near the
upper left of the scene, with the suburbs and eventually farmland extending to the east and south. Mountains
with elevations up to 1400 m fill the upper portions of the scene.
The city center is at (4.5,4.8) cm (from the left and top margins of the scene, respectively). Six large ships
are anchored in English Bay, to the west of the city center. The University of British Columbia is located at
(2.5,4.7) cm. The Straits of Georgia are to the west of the university, where a bright area shows surface
roughness caused by wind flowing out of Howe Sound. The Fraser River is seen flowing through the lower half of
the scene.
The SPECAN algorithm has been used in a number of quicklook applications by MacDonald Dettwiler,
including hardware processors delivered to ESA in 1984 and to the Brazilian ground station in 1992 for ERS-1.
Figure 9.35: ERS scene of Vancouver, processed by the SPECAN algorithm
at UBC. (Copyright European Space Agency, 1992.)
9.9 Summary
This chapter introduces the SPECAN algorithm and explains its major steps, including linear RCMC, deramping,
FFT, multilook processing, phase correction, and descalloping. Error sources are discussed and simulations are
performed to illustrate the operation of the algorithm.
The main features of the SPECAN algorithm are:
o A deramping operation turns each target into a sine wave of a specific frequency, and an FFT "compresses"
the sine wave into a sinc-like function for each discrete target. This is possible because of the linear FM form
of the received signal, and the algorithm is very efficient, since it requires fewer, shorter FFTs than the
precision RDA, CSA, and WKA algorithms.
o The resolution is given by the bandwidth before deramping, as captured by the length of the FFT. Unlike
the precision algorithms, the resolution and sample spacing are functions of the azimuth FM rate, which is
in turn a function of slant range.
o The FFTs capture a different part of the Doppler spectrum for each successive target. This leads to
radiometric, frequency, and phase changes that can be partially corrected.
o FFTs are stitched together to form a contiguous image. Because of the jump in Doppler frequency between
FFTs, the frequency of a target at a stitching point is distorted. The distortion can cause measurement
errors in point target analysis. The frequency distortion can be corrected, but the correction is specific to
each target.
o A radiometric variation or scalloping exists because the processed parts of successive targets come from
different parts of the antenna beam. This effect can be largely corrected if the Doppler centroid, the beam
shape, and the SNR are known.
o Since processing is performed in the azimuth time domain, only the linear component of the RCM can be
efficiently corrected. This is usually satisfactory, as the quadratic component is often small, compared to the
resolution typically processed with the algorithm (e.g., in the C-band satellite SAR case).
o Multilooking can be efficiently achieved by placing the FFTs closer together in the compression step.
o The Doppler centroid plays two parts in the algorithm. It governs the FFT throwaway region and the
antenna pattern correction. The required Doppler centroid estimation accuracy is higher than in the precision
algorithms, because of the time-varying FFT extraction frequency.
o The algorithm is very efficient in providing medium-to-low resolution images. This fact, plus its less than
ideal image quality properties, means that it is most suited to quicklook applications. Its worth has also been
proven in ScanSAR data processing, as discussed in Chapter 10.
The important processing equations derived in this chapter are summarized in Table 9.3.
References
[2] A. P. Luscombe. Taking a Broader View: Radarsat Adds ScanSAR to Its Operations. In Proc. Int.
Geoscience and Remote Sensing Symp., IGARSS'88, Vol. 2, pp. 1027-1032, Edinburgh, Scotland, September
1988.
[3] R. Bamler, D. Geudtner, B. Schattler, P. Vachon, U. Steinbrecher, J. Holzner, J. Mittermayer, H. Breit, and
A. Moreira. RADARSAT ScanSAR Interferometry. In Proc. Int. Geoscience and Remote Sensing Symp.,
IGARSS'99, Vol. 3, pp. 1517-1521, Hamburg, Germany, June 1999.
[4] J. Holzner and R. Bamler. Burst-Mode and ScanSAR Interferometry. IEEE Trans. on Geoscience and Remote
Sensing, 40 (9), pp. 1917-1934, September 2002.
[5] M. I. Skolnik. Radar Handbook. McGraw-Hill, New York, 2nd edition, 1990.
[6] W. J. Caputi. Stretch: A Time-Transformation Technique. IEEE Trans. on Aerospace and Electronic Systems,
AES-7, pp. 269-278, March 1971.
[7] R. P. Perry and H. W. Kaiser. Digital Step Transform Approach to Airborne Radar Processing. In IEEE
National Aerospace and Electronics Conference, pp. 280-287, May 1973.
[8] J. C. Kirk. A Discussion of Digital Processing in Synthetic Aperture Radar. IEEE Trans. on Aerospace and
Electronic Systems, 10 (3), pp. 326-337, May 1975.
[9] R. P. Perry and L. W. Martinson. Radar Matched Filtering, chapter 11 in "Radar Technology," E. Brookner
(ed.), pp. 163-169. Artech House, Dedham, MA, 1977.
[10] I. G. Cumming and J. Lim. The Design of a Digital Breadboard Processor for the ESA Remote Sensing
Satellite Synthetic Aperture Radar. Technical report, MacDonald Dettwiler, Richmond, BC, July 1981. Final
report for ESA Contract No. 3998/79/NL/HP(SC).
[11] R. Okkes and I. G. Cumming. Method of and Apparatus for Processing Data Generated by a Synthetic
Aperture Radar System. European Patent No. 0048704. Patent on the SPECAN algorithm, filed September 15,
1981, granted February 20, 1985. The patent is assigned to the European Space Agency.
[12] M. Sack, M. Ito, and I. G. Cumming. Application of Efficient Linear FM Matched Filtering Algorithms to
SAR Processing. IEEE Proc-F, 132 (1), pp. 45-57, 1985.
[13] K. H. Wu and M. R. Vant. Extensions to the Step Transform SAR Processing Technique. IEEE Trans. on
Aerospace & Electronic Systems, 21 (3), pp. 338-344, May 1985.
[14] X. Sun, T . S. Yeo, C. Zhang, Y. Lu, and P. S. Kooi. Time-Varying Step-Transform Algorithm for High
Squint SAR Imaging. IEEE Trans. on Geoscience and Remote Sensing, 37 (6), pp. 2668-2677, November 1999.
[15] T. S. Yeo, N. L. Tan, and C. B. Zhang. A New Subaperture Approach to High Squint SAR Processing.
IEEE Trans. on Geoscience and Remote Sensing, 39 (5), pp. 954-967, May 2001.
[16] R. Bamler and M. Eineder. Optimum Look Weighting for Burst-Mode and ScanSAR Processing. IEEE
Trans. Geoscience and Remote Sensing, 33, pp. 722-725, 1995.
[17] R. K. Moore. Trade-Off Between Picture Element Dimensions and Noncoherent Averaging in Side-Looking
Airborne Radar. IEEE Trans. on Aerospace and Electronic Systems, 15, pp. 697-708, September 1979.
[18] R. Bamler and P. Hartl. Synthetic Aperture Radar Interferometry. Inverse Problems, 14 (4), pp. R1-R54,
1998.
[19] C. Oliver and S. Quegan. Understanding Synthetic Aperture Radar Images. Artech House, Norwood, MA,
1998.
Chapter 10
Processing ScanSAR Data
10.1 Introduction
Normally, SAR data is acquired by transmitting a periodic sequence of pulses, which are processed into a
continuous image. This is referred to as the "stripmap" imaging mode. However, not all the transmitted pulses are
needed to form a continuous image. In Chapter 9, it is shown that if an image with a low resolution in azimuth
is acceptable, gaps can exist between sections of the acquired data, as illustrated in Figure 9.7. If the radar
system is operated in this mode, a burst of pulses is transmitted, corresponding to the time of FFT 1 in Figure
9.7. The transmitter is then turned off until the time of FFT 2 is reached. These ON/OFF cycles are repeated
during the data collection period. This type of SAR operation is referred to as "burst mode" operation.
Burst mode was first used in the Magellan mission to Venus in 1990, to conserve transmit power and
downlink capacity [1]. However, if these restrictions do not apply, the radar system can use the gaps to collect
additional information [2, 3]. As an example, the gaps can be used to illuminate other range swaths, where low
resolution images can also be formed. By stitching these images together, a wide swath image is obtained, which
covers a range extent that is impossible to image with conventional SAR operation. Another example is the
acquisition of data from other polarizations during the gaps.
This mode is called "ScanSAR." It is included in most modern satellite SAR systems to offer wide swath
coverage as an option [4]. It is not used in airborne SARs, because airborne systems use a lower PRF, and can
image as far as the radar beam can see without resorting to scanning techniques. ScanSAR was first used in
SIR-C in 1994 [5, 6], and subsequently in RADARSAT [7] and ENVISAT [8-10].
In modern satellites, the most common form of burst mode operation is ScanSAR, so the term "Processing
ScanSAR Data" is used as the title of this chapter, although, strictly speaking, "Burst mode processing" is the
more appropriate title. When ScanSAR data are processed, each subswath is processed separately. After image
formation, the subswath images are combined into a wide-swath image. Each subswath of ScanSAR data consists
of a set of bursts (consecutive sequences of echo lines), which are processed the same as burst-mode data.
The purpose of this chapter is to discuss the signal processing of burst- mode data. Different algorithms have
been developed for this purpose.
o The "Full-Aperture" algorithm, using existing RDA or CSA processors with zeros filling the interburst gaps;
o The SPECAN algorithm, which is the most efficient of the algorithms, especially when resampling is omitted;
o The modified SPECAN algorithm, using the chirp-z transform to eliminate the fan-shaped distortion;
o The short IFFT (SIFFT) algorithm, which is an evolution of the RDA using reduced-length IFFTs;
o The extended chirp scaling (ECS) algorithm, which combines the chirp scaling and SPECAN algorithms.
Outline of Chapter
Each of these algorithms is discussed in this chapter. Since ScanSAR images have been used in interferometric
applications [11], phase preservation is also considered. The description of each algorithm is rather brief, because,
in most cases, they are made up from parts of algorithms already described.
Section 10.2 describes how data are acquired in the ScanSAR mode, illustrating how the Doppler frequencies
vary with target position. Section 10.3 gives the form of the optimally focused impulse response of a target in a
single burst. One way to process the data is to zero pad the interburst pauses, and then process the data using
a conventional algorithm, such as the RDA or CSA. Section 10.4 discusses this method and its properties. Section
10.5 summarizes how the SPECAN algorithm can be applied to ScanSAR processing. Section 10.6 explains how
the SPECAN algorithm can be modified to remove the fan-shaped distortion. Sections 10.7 and 10.8 present the
SIFFT and ECS algorithms, respectively. Section 10.9 discusses how compressed targets from consecutive bursts
are stitched together into the final image.
The burst timing is governed by the following criteria, both of which vary with range.
Burst start time: Normally, a burst transmission does not begin until all the echoes from the previous burst
(the traveling pulses) have been received, although there is some advantage to starting the transmissions
earlier to aid roll angle estimation [12, 13].
Burst duration: Burst durations need not be the same in each subbeam. In fact, they can be adjusted in order
to maintain the same resolution in each subswath, since the azimuth FM rate is a function of range.
ScanSAR Modes
The various ScanSAR modes of RADARSAT-1 and ENVISAT's Advanced SAR (ASAR) are summarized in Table
10.1. The ground range swath width, the number of subbeams, the number of radar pulses per subbeam, and the
number of azimuth looks are listed [14, 15]. The number of pulses per beam is an interesting parameter, as it
limits the azimuth resolution and affects how looks can be taken, especially in the case of the ENVISAT GMM
mode.
Multilook Processing
In ScanSAR operation, azimuth multilooking can be achieved in one of two ways. If the burst durations are
short, so that two or more burst cycles of a given beam can be completed within the time of the system's
synthetic aperture, an independent look can be extracted from each burst (see Figures 9.10 and 9.11). If the
burst duration is long, so that each burst is longer than needed for a single azimuth look, more than one look
may be extracted from each burst, at the cost of increased sensitivity to scalloping.
To guarantee a continuous ground coverage in the first case, the following condition on burst length and
aperture length must be satisfied. Let T_b be the burst duration, T_g be the gap duration, and N_looks be the
number of looks. Then, from Figure 9.10, it can be seen that the target exposure time, T_a, must be at least

$$ T_a \;\ge\; (N_{looks} + 1)\, T_b \;+\; N_{looks}\, T_g \qquad (10.1) $$

in order to obtain N_looks independent looks with an azimuth resolution governed by T_b.
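The worst-case counting argument behind this minimum exposure time (taken here as T_a = (N_looks + 1) T_b + N_looks T_g) can be checked numerically. The sketch below uses made-up burst timing values, not from any sensor, and slides the exposure window across the burst schedule:

```python
import numpy as np

# Illustrative timing values (not from any sensor): with an exposure of
# Ta = (n_looks + 1)*Tb + n_looks*Tg, any exposure window contains at
# least n_looks complete bursts, however the window is positioned.
Tb, Tg, n_looks = 0.5, 1.5, 2            # burst length, gap, looks (seconds)
Ta = (n_looks + 1) * Tb + n_looks * Tg   # minimum exposure time

period = Tb + Tg
burst_starts = np.arange(0.0, 20.0, period)

def complete_bursts(t0):
    """Number of bursts lying entirely inside the exposure [t0, t0 + Ta]."""
    return int(np.sum((burst_starts >= t0) & (burst_starts + Tb <= t0 + Ta)))

# Worst case occurs when the exposure starts just after a burst begins,
# so that burst is lost; sweep the window start over one burst cycle.
worst = min(complete_bursts(t0) for t0 in np.linspace(0.0, period, 200))
print(worst)   # never fewer than n_looks complete bursts
```

When the exposure starts exactly at a burst start, one extra full burst fits, which corresponds to the limiting Targets 1, 9, 17, and 25 in Figure 10.2 being exposed for three full bursts in the two-look case.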
This relationship between burst length and azimuth beamwidth is illustrated in Figure 10.2, where one beam
of the two-look case is shown. The figure shows 25 equally spaced targets that are displaced in azimuth by small
amounts. Range compression and RCMC have been performed, so the targets are all in the same single range
cell.
The light lines show the azimuth exposure time of each target, as if the SAR were operating in continuous
mode, and the heavier parts of each line show the part of each target actually exposed by the bursts. It can be
seen that each target is exposed for at least two full bursts, so that two looks can be processed for each target
(Targets 1, 9, 17, and 25 are limiting cases, being exposed for three full bursts).
If only one azimuth look is needed for the burst length shown in Figure 10.2, the even-numbered bursts need
not be transmitted. Range looks can be processed if the power supply and data handling capacity can accommo-
date more range bandwidth than is required for the specified range resolution. Range multilooking requires
different processing and is not discussed here.
Figure 10.3 shows how the spectra of the targets vary with azimuth position. Within a given burst, each
successive target is received at a higher Doppler frequency. Each target is later captured at a lower frequency in
the following burst, as long as it stays within the beam exposure. This unusual pattern of target Doppler
histories has a major effect on the processing algorithms.
[Figure 10.2: Azimuth exposure of the 25 equally spaced targets, showing the bursts (heavy line segments) within each target's exposure and marking the data used in the simulation.]
Figure 10.3: The spectra of targets in each burst, illustrating how the Doppler
frequencies change from target to target.
The azimuth signal received from a point target during one burst can be written as

$$ s(\eta') \;=\; \mathrm{rect}\!\left\{\frac{\eta' - \eta_c}{T_b}\right\} \exp\{-j \pi K_a\, (\eta' - \eta_d)^2\} \qquad (10.2) $$

where η' is azimuth time, η_c is the time of the burst center, K_a is the azimuth FM rate, and η_d is the zero Doppler time of the target (equal to η_c in this case).
Ideally, the matched filter should be s*(−η'), with the same duration and bandwidth as s(η'). However, this
choice of filter is not practical, since it requires a different matched filter to be used for every output sample
(i.e., for every target). This is because the Doppler frequency (or time offset, η' − η_d) of every target is different
within this burst, as illustrated in Figure 10.3. When the matched filter changes with each target, the efficiencies
of fast convolution are not available, and computing times are much greater than those of FFT-based techniques.
To regain part of the fast convolution efficiency obtained in continuous-mode processing, the bandwidth of
the matched filter can be widened to match several consecutive targets in a burst, despite their differing Doppler
frequencies. To achieve this, the matched filter is extended in time to

$$ m_p(\eta') \;=\; \mathrm{rect}\!\left\{\frac{\eta'}{T_a}\right\} \exp\{j \pi K_a\, (\eta')^2\} \qquad (10.3) $$

where the matched filter duration, T_a, is greater than the burst duration, T_b. The new duration is chosen to cover
the combined signal bandwidth of a specific group of targets to be compressed. All targets in the burst are often
compressed in the same operation, in which case the filter duration is matched to the full exposure time of a
target in the stripmap mode, given in (4.37).
With T_a > T_b, the compressed target from a single burst is derived in Appendix 3A.2, and is

$$ s(\eta') \otimes m_p(\eta') \;\approx\; T_b\, \mathrm{sinc}\{K_a T_b (\eta' - \eta_d)\}\; \exp\{-j 2\pi K_a (\eta_c - \eta_d)(\eta' - \eta_d)\}\; \exp\{j \pi K_a (\eta' - \eta_d)^2\} \qquad (10.4) $$

where the beam pattern is assumed to be constant over the burst. Note that η' − η_d is the relative time
measured from the zero Doppler time, η_d, and similarly, η_c − η_d is the relative time of the burst center
measured from η_d. The following points can be deduced from (10.4).
o A linear phase term is present, with a slope of −2π K_a (η_c − η_d);
o There is a quadratic phase term, exp{jπ K_a (η' − η_d)²}.
Note that the peak phase and the quadratic phase term are the same for every target, but the slope of the
linear phase term varies with target position. The slope is a function of the target's Doppler frequency at the
center of the burst, K_a (η_c − η_d), and is zero when the target's time of closest approach, η_d, is at η_c. The
quadratic phase term arises from the fact that the duration of the filter is longer than the signal, as explained in
Appendix 3A. An example of the quadratic phase is shown in Figure 10.4, and its size is discussed in Section 9.6.
Raw data of the first 13 targets of Figure 10.2 are simulated and compressed, using the matched filter
(10.3). The results of Targets 1 and 9 from Burst 2 are examined. The point of closest approach of Target 1
occurs in the middle of Burst 2, so that its "local" Doppler centroid is zero. The magnitude and phase of the
compressed target are shown in Figure 10.4(a, b). The magnitude response is the well-known sinc function arising
from a rectangular-weighted aperture. The phase and the phase slope at the target peak are zero, but the
quadratic phase component is observable in Figure 10.4(b). However, the quadratic component is quite small - its
size is less than 1° over the resolution width of the main lobe.
In contrast, Target 9 is illuminated by the beam edge in Burst 2, which gives it a substantial local Doppler
centroid. In this case, the linear phase dominates, and it is hard to observe the quadratic phase in Figure 10.4(d).
The phase at the peak is still zero, and the phase curvature is less than 1°, the same as Target 1. Note that the
responses in Figure 10.4 are oversampled, because the burst bandwidth of one target is smaller than the data
bandwidth.
[Figure 10.4: Magnitude (dB) and phase (degrees) of the compressed Targets 1 and 9, plotted against time in samples.]
In summary, the use of a practical matched filter that compresses all the targets within a single burst works
well. Comparing the result with the ideal matched filter of Figure 3.6, the only difference is the addition of a
small quadratic phase term.
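The single-burst compression of (10.2) to (10.4) can be reproduced with a short simulation. In this sketch the FM rate, durations, and sampling rate are illustrative values, not taken from the figures, and the extended filter of (10.3) is applied by direct convolution:

```python
import numpy as np

# Illustrative parameters only (not from any sensor or from Figure 10.2).
fs = 200.0                       # azimuth sampling rate (samples/s)
Ka = 40.0                        # azimuth FM rate (Hz/s)
Tb, Ta = 0.5, 2.0                # burst and extended-filter durations (s)
eta = np.arange(-Ta, Ta, 1/fs)   # azimuth time axis

eta_c, eta_d = 0.0, 0.1          # burst center and target zero Doppler times
burst = (np.abs(eta - eta_c) <= Tb/2).astype(float)
s  = burst * np.exp(-1j*np.pi*Ka*(eta - eta_d)**2)         # signal, as (10.2)
mp = (np.abs(eta) <= Ta/2) * np.exp(1j*np.pi*Ka*eta**2)    # filter, as (10.3)

out = np.convolve(s, mp)                 # compressed response, as in (10.4)
t = -2*Ta + np.arange(out.size)/fs       # time axis of the full convolution
k = np.argmax(np.abs(out))
peak_time = t[k]                         # sinc peak, at the zero Doppler time
phase_at_peak = np.angle(out[k])         # zero phase at the peak
print(round(peak_time, 3))               # prints 0.1, i.e., eta_d
```

The compressed magnitude is the sinc envelope of (10.4), the peak falls at η_d, and the phase at the peak is zero, consistent with the Target 1 behavior described above.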
Note that when the time-bandwidth product of a single burst is very low, as in the ENVISAT/ASAR global
monitoring mode, conventional matched filtering techniques do not provide a well defined impulse response. An
alternate approach has been proposed to handle this case, using an inversion technique [16].
The first algorithm considered is called "Full-Aperture" or "Coherent Multiple Burst" processing [17, 18]. Once the
ScanSAR data are acquired for a given subswath, the gaps are filled with zeros, and processed as stripmap,
single-look data. Any of the high precision algorithms described in previous chapters (the RDA, CSA, or the
WKA) can be used. While this is not a very efficient approach, it has the advantage that existing stripmap SAR
processors can be used for ScanSAR data with very little modification.
The entire set of burst exposures for each target are processed coherently, as long as the bursts maintain
their correct separation (if not, an interpolator can be used to reinstate the signal timing). The coherent
processing preserves the phase information, and yields a product that shares the same geometric and overall
spectral properties of the associated stripmap images [19]. However, the coherently processed bursts result in an
impulse response that contains intermodulations. These can be visually disturbing (the authors call them "nasty"),
but if a detected image is the desired final product, the modulations can be reduced by a lowpass filtering
operation.
The form of the modulations can be best explored with a simulation. Assume that the burst length and data
gap are each 256 samples long, and the data length is 1792 samples, corresponding to four bursts and three gaps.
This burst configuration is shown in the left panels of Figure 10.5. When the full-aperture processing is applied
to the data, the responses of Targets A, B, and C are shown in the right panels of the figure.
The simulation illustrates the following properties of the full-aperture burst-mode processing.
o The first column shows that the collected data is modulated by a rectangular wave, in addition to the
azimuth beam pattern. The burst locations are independent of the target locations. The targets are separated
by 128 samples in azimuth, as noted by the shift of the azimuth beam pattern going down the rows. The
target exposure time is longer than shown in Figure 10.2, to illustrate the effects more clearly. The
oversampling ratio is about 18%.
o The second column shows the spectrum of each target. Because of the linear FM property of the azimuth
signal, a modulation is seen that is similar to the modulation in the time domain. The modulation caused by
the beam pattern is stationary in the frequency domain, while the modulation caused by the bursts is a
function of the target locations.
o In the absence of bursts, the impulse response is a narrow sinc function, with a resolution governed by the
total bandwidth and the beam pattern. The central fine "spike" in the responses in the third column arises
from this narrow sinc function.
o However, the burst modulation has the effect of replicating the sinc function along the time axis, with an
envelope of a wider sinc function, shown as a dotted line (see subsequent discussion).
o The envelope is the same for all three targets, but the fine details of the impulse response vary with the
target location because the "beam pattern" varies with the target position.
In other words, these properties are a direct consequence of the full-aperture (continuous-mode) signal being
multiplied by a pulse train that serves to "cut out" bursts from the continuous-mode data. Such a pulse train is
shown in Figure 10.6(a), and its inverse Fourier transform is shown as the solid line in Figure 10.6(b). A sinc
function is overplotted with a dotted line, showing that the envelope of the inverse transform is given by the
impulse response of a single burst.
[Figure 10.5: Exposures (left column), spectra (center column), and compressed responses (right column) of Targets A, B, and C, with panels (d) Exposure of Target B, (e) Spectrum of Target B, (f) Compression of Target B, (g) Exposure of Target C, and (h) Spectrum of Target C, plotted against time (samples) and frequency (cells).]
[Figure 10.6: (a) The pulse train window versus time (samples); (b) the FFT or IFFT of the pulse train window versus frequency (cells), with a sinc function overplotted as a dotted line.]
This is a result of a Fourier transform property discussed in Chapter 2. The burst modulation of Figure
10.6(a) can be represented by an impulse train convolved with a rectangular function, with a width equal to one
burst. Therefore, its Fourier transform is the product of the Fourier transforms of the two functions individually.
The Fourier transform of the impulse train is also an impulse train with a spacing inversely proportional to its
spacing in the time domain, as shown by (2.50). The Fourier transform of the rectangular function is a sinc
function, as in Figure 2.3, with a width inversely proportional to the width of the rectangular function. The
resulting Fourier transform is shown in Figure 10.6(b).
In essence, the narrow sinc functions in the right column originate from the extent of all the bursts (i.e., the
total target exposure), and the wide sinc function (i.e., the envelope) originates from the extent of a single burst.
A complete mathematical development is given in [19].
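This transform-pair property is easy to verify numerically. The sketch below builds a pulse train with the 256-sample bursts of the simulation, placed here in a 2048-point window (an assumption for this sketch, so that the transformed impulse train lands on integer frequency cells), and checks that the spectral impulses follow the single-burst envelope:

```python
import numpy as np

# Pulse train: bursts of 256 samples repeating every 512 samples, in a
# 2048-point window so the impulse spacing is exactly N/period = 4 cells.
N, burst, period = 2048, 256, 512
train = np.zeros(N)
for start in range(0, N, period):
    train[start:start + burst] = 1.0

spec = np.abs(np.fft.fft(train))
# The transform is an impulse train (every 4 cells) weighted by the
# transform of one rectangular burst, whose nulls fall every N/burst = 8
# cells: cell 4 carries a strong impulse, cell 3 is between impulses, and
# cell 8 coincides with a null of the single-burst envelope.
print(spec[4] > 100*spec[3], spec[8] < 1e-6*spec[4])   # prints: True True
```

The dominant term is still at DC, and the envelope nulls every 8 cells reproduce the "wide sinc" dotted line of Figure 10.6(b).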
This algorithm has been used to produce interferograms from ScanSAR data [18]. The interferogram has the
correct differential phase, as long as the burst timing is synchronized and the images are well registered. A
lowpass filter can be used to reduce the "spikiness" of the product, since the complex conjugate multiply in the
interferogram formation process downconverts the frequency of the complex SAR images to baseband. This
filtering is achieved by the usual interferogram smoothing process [11].
It is shown in Chapter 9 that the SPECAN algorithm is the most efficient one for processing continuous-mode
data. It is also shown that there can be gaps between the FFTs when the required azimuth resolution is modest
and number of looks is reduced, as shown in Figure 9.7. In the beginning of this chapter, this fact is used to
explain why burst mode data can produce a continuous image.
Consequently, the SPECAN algorithm is considered a natural candidate for processing ScanSAR data, since
the FFTs can simply be applied to each of the bursts separately. The approach is simplest when the FFT length
equals the burst length, although zero padding may be used to get efficient FFT lengths, such as powers of two.
The processing steps are the same as those described in Figure 9.1, when the FFT length is set to the burst
length. Each FFT processes one full burst at a time, and the processed results are stitched together to form a
continuous image.
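A minimal sketch of this per-burst SPECAN processing is given below. The FM rate, burst layout, target positions, and exposure time are invented for illustration; each burst is deramped about its center, transformed with one FFT, and the FFT outputs are stitched into a single azimuth line:

```python
import numpy as np

# Illustrative parameters (not from any sensor).
fs, Ka = 100.0, 25.0                  # azimuth sample rate (Hz), FM rate (Hz/s)
burst_len, gap_len, n_bursts = 128, 128, 4
targets = [0.3, 3.1, 5.9]             # assumed target zero Doppler times (s)
Texp = 1.8                            # assumed target exposure time (s)

period = burst_len + gap_len
rows = []
for b in range(n_bursts):
    eta = (b*period + np.arange(burst_len)) / fs     # time axis of this burst
    eta_c = eta.mean()                               # burst center time
    # Range compressed azimuth signal of the exposed targets (beam pattern
    # replaced by a simple rectangular exposure window).
    sig = sum(np.exp(-1j*np.pi*Ka*(eta - ed)**2) * (np.abs(eta - ed) <= Texp/2)
              for ed in targets)
    deramped = sig * np.exp(1j*np.pi*Ka*(eta - eta_c)**2)   # deramp about eta_c
    rows.append(np.abs(np.fft.fftshift(np.fft.fft(deramped))))

image = np.concatenate(rows)   # stitched low resolution azimuth line
print(image.size)              # prints 512, i.e., n_bursts * burst_len
```

After deramping, each target becomes a tone at a frequency proportional to its offset from the burst center, so the FFT output of each burst maps directly to azimuth position, and the rows can be stitched side by side.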
Multilook processing can be done in either of two ways. If there is more than one burst per aperture, the
outputs of the associated FFTs are detected and added together before stitching. Another way is to shorten the
FFT length, so that two or more FFTs are done per burst, accepting the associated reduction in resolution. If
there is more than one burst per aperture, these two approaches can be combined to obtain even more looks.
SPECAN is the most efficient algorithm for ScanSAR data, but it does suffer from some image quality effects. One
disadvantage is that only linear RCMC can be efficiently performed. However, if only low or medium resolution is
required, higher order RCMC may not be necessary. Another disadvantage is that the output data have to be
resampled, since the output sample spacing depends on the azimuth FM rate. This can be done approximately by
using zero padding to make small adjustments to the FFT length as range varies.
A number of SPECAN ScanSAR processors have been delivered to ground stations for RADARSAT-1 and
ENVISAT/ASAR by MacDonald Dettwiler. The SPECAN algorithm has been used for SIR-C ScanSAR processing
at JPL [20], and for RADARSAT ScanSAR processing in the Alaska SAR Facility [21, 22].
The algorithm has been used to process data from the first burst-mode SAR mission, the Magellan mission
to Venus [23] (see Section 2.9.1). More accurate but less efficient algorithms that remove some of the
disadvantages of the SPECAN algorithm are presented in the next three sections.
The azimuth focusing of each individual burst acquired in the ScanSAR mode can be efficiently carried out by
applying the SPECAN algorithm, which is described in Section 10.5. However, the presence of a range dependent
deramping function, which accounts for the variation of the azimuth FM rate, causes an undesirable range
dependent scaling of the azimuth pixel dimension - referred to as "fan-shaped distortion," as discussed in Section
9.2.4. This effect can be compensated, using an interpolation that resamples the data to a constant azimuth pixel
spacing. However, this is not a trivial operation in terms of both computational efficiency and precision, since too
short an interpolation kernel can introduce artifacts in the image, while long interpolation kernels increase
computation requirements.
An extension of the basic SPECAN algorithm has been proposed that is referred to as the modified
SPECAN algorithm [24, 25]. This algorithm is based on replacing the standard Fourier transform used in the
SPECAN procedure with a scaled Fourier transform (SCFT), whose kernel includes a range dependent scaling
factor to adjust to a constant azimuth pixel spacing. The SCFT operation is efficiently carried out by applying
the chirp-z transform technique [26], which has also been applied in the SAR context for correcting RCM [27].
To illustrate the principle of the modified SPECAN approach, it is convenient to start with the expression of the
azimuth component of a range compressed, point target signal
$$ a(x' - x,\, x,\, r) \;=\; w_a\!\left\{\frac{x' - x}{X_s}\right\} \mathrm{rect}\!\left\{\frac{x'}{X_B}\right\} \exp\!\left\{-j\, \frac{2\pi\, (x' - x)^2}{\lambda\, r}\right\} \qquad (10.5) $$

where x' is the azimuth spatial variable, and x and r are the azimuth and range coordinates of the target,
respectively. The variable X_B is the extent of the burst modeled by the "rect" function, and X_s = 0.886 λ r / L_a
is the length of the full aperture (the antenna beamwidth on the ground), where λ is the radar wavelength, and
L_a is the length of the radar antenna in the azimuth direction. All these variables have units of meters. The
function w_a{·} represents the two-way antenna pattern that is responsible for the scalloping effect. This pattern
and other nonessential amplitude factors have been ignored in the subsequent analysis.
The first step of the modified algorithm is the same as the basic SPECAN algorithm, and involves the
multiplication of the burst signal by the phase factor, exp{j2π (x')²/(λ r)}, referred to as the deramping step.
The resulting deramped signal is then transformed in the azimuth direction, using a scaled Fourier transform
instead of the standard Fourier transform used in the basic SPECAN algorithm. The SCFT utilizes the scaling
property given in (2.27).
In particular, the kernel of the SCFT is given by exp{−j2π ξ K(r) x'}, where ξ is the spatial frequency
variable with units of cycles per meter. The kernel includes the unitless, range-dependent scaling factor

$$ K(r) \;=\; \frac{r_0}{r} \qquad (10.6) $$

where r_0 is a fixed reference range that is selected according to the desired output azimuth pixel spacing.
Assuming the effective radar velocity, V_r, is constant, the factor K(r) is proportional to the range-varying
azimuth FM rate.
The expression for the SCFT of the deramped signal is

$$ \bar{a}(\bar{\xi} - x,\, x,\, r) \;=\; \mathrm{SCFT}\!\left\{ a(x' - x, x, r)\, \exp\!\left\{ j\, \frac{2\pi (x')^2}{\lambda r} \right\} \right\} \;=\; \int a(x' - x, x, r)\, \exp\!\left\{ j\, \frac{2\pi (x')^2}{\lambda r} \right\} \exp\{-j 2\pi\, \xi\, K(r)\, x'\}\, dx' \qquad (10.7) $$

The integral can be changed to the form of the conventional Fourier transform by introducing the scaled
variable

$$ \bar{\xi} \;=\; \xi\, K(r)\, X_s\, \frac{L_a}{2} \;=\; \xi\, X\, \frac{L_a}{2} \qquad (10.8) $$

Then, when 2ξ̄/(L_a X_s) is substituted for ξ K(r) in (10.7), the result of the transform (except for a nonessential
amplitude factor) is the compressed pulse

$$ \bar{a}(\bar{\xi} - x,\, x,\, r) \;\approx\; \mathrm{sinc}\!\left\{ 0.886\, \frac{2 X_B}{L_a X_s}\, (\bar{\xi} - x) \right\} \qquad (10.9) $$

which is wider than the one-look sinc function obtained in the stripmap case by the ratio X_s/X_B. Note that X
= K(r) X_s = 0.886 λ r_0 / L_a is the synthetic aperture length at the reference range.
Discrete-Time Equivalent
Consider the sampled version of (10.9), which is relevant to the digital implementation of the procedure. The
discrete output is represented by

$$ \bar{a}(k\, \Delta\bar{\xi} - x,\, x,\, r) \;\approx\; \mathrm{sinc}\!\left\{ 0.886\, \frac{2 X_B}{L_a X_s}\, (k\, \Delta\bar{\xi} - x) \right\} \qquad (10.10) $$

where k represents the integer data index, and Δξ̄ is the output azimuth pixel spacing. By assuming an original
(raw) data spatial sampling spacing of Δx', equal to the Nyquist limit, L_a/2, the output sample spacing is

$$ \Delta\bar{\xi} \;=\; \frac{X}{X_B}\, \Delta x' \;=\; \frac{X}{X_B}\, \frac{L_a}{2} \qquad (10.11) $$

By inspecting (10.11), it is evident that the output sampling spacing, Δξ̄, is independent of range - it can be
chosen by selecting the factor r_0 in the expression for X, in (10.6) and (10.8). Note that the factor X/X_B
represents the subsampling allowed by the reduced bandwidth of the burst signal.
Finally, by substituting (10.11) into (10.10), the compressed result is

$$ \bar{a}(k\, \Delta\bar{\xi} - x,\, x,\, r) \;\approx\; \mathrm{sinc}\!\left\{ 0.886\, \frac{2 X_B}{L_a X_s} \left( k\, \Delta x'\, \frac{X}{X_B} - x \right) \right\} \qquad (10.12) $$
Chirp-z Transform
It is important to note that the SCFT operation in (10.7) can be efficiently computed by applying the chirp-z
transform algorithm, which reduces the integration in (10.7) to a convolution step [26]. The convolution can be
implemented with FFTs, if desired.
The core of the chirp-z transform algorithm is the identity

$$ \exp\{-j 2\pi\, \xi\, K(r)\, x'\} \;=\; \exp\{-j \pi K(r)\, \xi^2\}\; \exp\{-j \pi K(r)\, (x')^2\}\; \exp\{j \pi K(r)\, (\xi - x')^2\} \qquad (10.13) $$
Making use of the above identity in (10.7), one obtains

$$ \bar{a}(\bar{\xi} - x,\, x,\, r) \;=\; \exp\{-j \pi K(r)\, \xi^2\} \int a(x' - x, x, r)\, \exp\!\left\{ j\, \frac{2\pi (x')^2}{\lambda r} \right\} \exp\{-j \pi K(r)\, (x')^2\}\, \exp\{j \pi K(r)\, (\xi - x')^2\}\, dx' \qquad (10.14) $$

where the exponential term not involving x' has been taken outside of the integral. The equation can be written
in a convolution form [as in (2.1)]

$$ \bar{a}(\bar{\xi} - x,\, x,\, r) \;=\; \exp\{-j \pi K(r)\, \xi^2\} \left[ \left( a(x' - x, x, r)\, \exp\!\left\{ j\, \frac{2\pi (x')^2}{\lambda r} \right\} \exp\{-j \pi K(r)\, (x')^2\} \right) \otimes\, \exp\{j \pi K(r)\, (x')^2\} \right] \qquad (10.15) $$

where the symbol ⊗ represents a convolution operation in the azimuth direction. The result is the sinc function
given in (10.9).
Note that the units of the three exponents on the right side of (10.13) and subsequent equations are not in
radians. However, the identity is valid and the exponential on the left side is in radians. It is not required that
the arguments of the three exponents on the right side are in radians; what is required is that the result of their
multiplication has an argument in radians. Indeed, this argument is the kernel of the SCFT. This is the key trick
of the chirp-z transform, which allows implementation of a scaled transform via a convolution operation.
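The chirp-z mechanics can be demonstrated with a discrete sketch. Here the scaled DFT of an arbitrary sequence is computed once by direct summation and once by the three chirp-z steps of (10.13), with the kernel exponents written in sample units; the length and the scaling factor K are illustrative assumptions:

```python
import numpy as np

N = 128
K = 0.8                                # illustrative scaling factor K(r)
n = np.arange(N, dtype=float)          # input sample positions x'
m = np.arange(N, dtype=float)          # output sample positions
rng = np.random.default_rng(0)
a = rng.standard_normal(N) + 1j*rng.standard_normal(N)   # deramped burst samples

# Direct (slow) scaled DFT: sum_n a[n] exp(-j 2 pi K m n / N).
direct = np.array([np.sum(a * np.exp(-2j*np.pi*K*mm*n/N)) for mm in m])

# Chirp-z via (10.13): K m n = (K/2) * (m^2 + n^2 - (m - n)^2).
g = a * np.exp(-1j*np.pi*K*n**2/N)                 # scaling kernel multiply
h = np.exp(1j*np.pi*K*np.arange(-N+1, N)**2/N)     # conjugated scaling kernel
conv = np.convolve(g, h)[N-1:2*N-1]                # outputs for m = 0 .. N-1
czt = np.exp(-1j*np.pi*K*m**2/N) * conv            # phase compensation
# In a production implementation the convolution is done with FFTs after
# zero padding both sequences, as noted in the text.
print(np.allclose(czt, direct))   # prints True
```

The three factors individually have non-radian "phases", but their product reproduces the scaled transform kernel exactly, which is the point made above.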
Processing Procedure
1. The range compressed burst signal is multiplied by the deramping function, exp{j2π (x')²/(λ r)}, and by the
scaling kernel, exp{−jπ K(r) (x')²}.
2. The result is convolved with the kernel exp{jπ K(r) (x')²}, which is the conjugate of the first scaling
kernel. The convolution can be implemented by FFTs.
3. Finally, the result of the convolution is multiplied by the phase compensation factor, exp{−jπ K(r) ξ²}.
The procedure is outlined in the block diagram of Figure 10.7, where the dashed box includes the chirp-z
transform processing elements. Note that the convolution operation included in (10.15) has been implemented by a
fast convolution in the frequency domain. The two signals to be convolved must be doubled in length by zero
padding, to avoid circular convolution wraparound errors.
Finally, it should be mentioned that the technique described only accounts for the focusing operation of the
single burst. Further steps are required for performing the radiometric correction and the stitching of the burst
images.
[Figure 10.7: Block diagram of the modified SPECAN algorithm. The range compressed burst signal is multiplied by the deramping function and the scaling kernel, then passed through an azimuth FT, a multiply by the conjugated scaling kernel, an azimuth inverse FT, and a phase compensation; the dashed box (the chirp-z transform) implements the SCFT.]
The modified SPECAN algorithm has been used to process the ScanSAR data acquired during the SRTM mission.
Because of the interferometric application, the algorithm had to pay particular attention to phase and registration
accuracy. Because of the huge data volume, computational efficiency was another factor in the choice of
algorithm.
An example of an SRTM image is shown in Figure 10.8. The data were acquired on February 16, 2000. The
image is 1219 samples or 37 km in range by 1064 lines or 32 km in azimuth. The scene center is at 37.8° N,
122.2° W. The resolution is 30 m and north is to the upper right of the image.
The scalloping seen in the image is a result of using two looks for some pixels and three for others. The
different number of looks is a result of the scanning pattern changing in relation to the aperture length as range
increases. All of the available looks were retained in the image product to maximize the phase accuracy in the
subsequent interferometric processing [29].
The RDA has been a popular algorithm for SAR processing, with a well proven track record of accuracy and
efficiency. It is therefore natural to ask if the RDA can be used to process ScanSAR data in a different way than
the full-aperture algorithm. It turns out that the RDA can be used to process ScanSAR data, with minor
modifications. The main modification is to shorten the IFFT in the final step of the RDA, which leads to the
name "Short IFFT" or SIFFT algorithm [30-32].
Figure 10.8: SRTM image of San Francisco processed at JPL by the modified
SPECAN algorithm. (Courtesy of Paul Rosen, JPL.)
In applying the RDA to burst-mode data, it would be desirable to modify the RDA so that each burst of a target is compressed without interference from other bursts, while retaining the efficiency of frequency domain matched filtering.
Figure 10.9 summarizes the processing steps in the SIFFT algorithm that is designed to meet these goals.
Zero padding is used to fill in the gaps between the bursts, as in the full-aperture algorithm. Weighting must be
done in the time domain, because the target exposures do not fill the full spectrum, and vary from target to
target. Then, an azimuth FFT is taken that spans as many bursts as desired. The choice of the forward FFT
length is mainly an efficiency issue, as in the RDA.
[Figure 10.9: Block diagram of the SIFFT algorithm: raw radar data; range compression with SRC; weighting and zero filling; azimuth FFT; RCMC; azimuth matched filter multiply; a bank of short IFFTs (IFFT 1, IFFT 2, ...); stitching of the results; compressed data.]
With the exception of the weighting and zero padding, the SIFFT algorithm is the same as the RDA, up to
and including the azimuth matched filter multiply. The approximate version of SRC shown in Figure 6.1(c) is
used. The ScanSAR operation is only used in satellite systems, and, in this case, the approximate form of the
SRC is sufficient (see Section 6.4.2). The algorithm can be modified to use the accurate form of SRC shown in
Figure 6.1(b), if needed.
The main divergence from the RDA occurs in the IFFT steps of azimuth compression. The key idea is to
adjust the inverse transform length so that, when one burst of a target is fully captured by a particular IFFT,
little or no energy from adjacent bursts of the same target is present in the IFFT input data. In this way, each
IFFT compresses a group of targets without interference from other bursts, and an accurate single-burst impulse
response is obtained. In essence, the IFFT is acting like a bandpass filter to extract target energy in the
segmented form of Figure 10.10. The filter is time varying, in the sense that each successive IFFT is applied to a
different frequency band.
This algorithm is referred to as the short IFFT algorithm, because the IFFTs are shorter than the normal
single-look version of the RDA. Short IFFTs are also used in the multilook version of the RDA described in Sec-
tion 6.5, but in the SIFFT implementation, the locations of the IFFTs are specifically tailored to the spectral
properties of the targets created by the bursts.
Figure 10.10: Placement of the short IFFTs with respect to the spectrum of
the targets.
The rules for choosing the length and location of IFFTs can be deduced from Figure 10.10. This figure is a
version of Figure 10.3, redrawn with the roles of target number and azimuth time interchanged. The horizontal
axis (target number) can be thought of as azimuth time after compression.
Figure 10.10 shows the distribution of target spectral energy when a forward FFT is taken of the six bursts
shown in Figure 10.2, so that all 25 targets are captured as completely as possible, given the burst data
collection. A possible choice of IFFT locations in the frequency domain is shown on the right side of the figure.
It can be seen that IFFT 1 captures the complete energy of a single burst of Targets 1-5 (in Burst 3), Targets
9-13 (in Burst 4), and Targets 17-21 (in Burst 5).
Note that the length of the IFFT must be at least as long as the frequency span of one burst, in order to
capture the full burst exposure of at least one target. If the IFFT length is this minimum length, the efficiency is
very low, as only one output sample is completely compressed from each burst (e.g., Targets 1, 9, 17, and 25).
The efficiency can be improved by using a longer IFFT, which compresses a group of output samples from each
burst.
The IFFT length can be increased up to the point where its frequency extent covers the bandwidth of one
target (in one burst), plus the equivalent bandwidth of the gap between the bursts. The maximum IFFT length
is illustrated in Figure 10.10. This upper limit is set so that a target that is fully exposed in one burst is not
contaminated by a partial exposure of the same target from a different burst. If a partial exposure from a
second burst is included, the impulse response is contaminated because the second burst involves noncontiguous
frequencies of the same target.
Within these two limits, the IFFT length can be selected to get the best processing efficiency. In practice,
these length limits should be made more conservative because of the spreading of target energy into neighboring
frequency bins, caused by the finite-length bursts. Thus, the lower IFFT length limit should be a little larger
than the minimum cited above, and the upper limit a little shorter.
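As a rough numerical illustration of these limits (every parameter value below is invented, not taken from any sensor), the minimum and maximum IFFT lengths can be expressed in frequency bins of the forward FFT:

```python
import numpy as np

# Illustrative parameters only.
prf = 1700.0        # azimuth sampling rate (Hz)
Ka = 2000.0         # azimuth FM rate (Hz/s)
Tb = 0.15           # burst duration (s)
Tg = 0.15           # gap duration (s)
n_fft = 2048        # forward azimuth FFT length
df = prf / n_fft    # frequency bin spacing (Hz)

# Minimum: the IFFT must span one burst's bandwidth, Ka*Tb.
min_len = int(np.ceil(Ka*Tb/df))
# Maximum: one burst's bandwidth plus the equivalent bandwidth of the gap.
max_len = int(np.floor(Ka*(Tb + Tg)/df))
# Conservative guard band against spectral leakage from finite-length bursts.
guard = int(0.05 * min_len)
print(min_len + guard, max_len - guard)
```

With Tb = Tg, as here, the maximum length is about twice the minimum, which is consistent with the factor-of-two output oversampling noted for the two-beam configuration of Figure 10.10.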
As Target 5 is the last target in Burst 3 to be fully captured by IFFT 1, IFFT 2 should be placed so that
Target 5 is the first of the next group of targets in Burst 3 to be fully captured. From Figure 10.10, it can be
seen how IFFT 2 captures Targets 5-9, 13-17, and 21-25, so that contiguous coverage can be obtained in the
output image. Similarly, IFFTs 3 and 4 are placed as shown in the figure, so that IFFTs 1 to 4 capture all the
Doppler energy generated by the SAR beam, during the time interval of the associated forward transform.
Figure 10.10 portrays the two-beam, two-burst per aperture case, a common operating mode of RADARSAT.
In this case, every target has at least two fully-exposed bursts, and two azimuth looks can be taken. If
single-look processing is desired, only two out of the four IFFTs are needed - the two closest to the Doppler
centroid should be chosen to maximize the SNR. For SARs operating with other burst cycles, convenient IFFT
lengths and locations can be found.
Note that the use of an IFFT length greater than the minimum results in oversampling of the data at the
output. In the two-beam configuration of Figure 10.10, the oversampling is a factor of two. Zero padding can be
used to increase the IFFT length for efficiency reasons, at the expense of additional oversampling. If desired, the
oversampling can be reduced by lowpass filtering the output data.
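If the lowpass-and-decimate route is taken, the reduction can be done in the frequency domain. The sketch below uses a brick-wall filter for simplicity; a practical processor would use a tapered transition band, and the block length and tone are illustrative only:

```python
import numpy as np

def reduce_oversampling(x, factor=2):
    """Remove output oversampling from an IFFT output block by
    lowpass filtering in the frequency domain, then decimating.
    Brick-wall filter: keep only the central 1/factor of the spectrum."""
    n = len(x)
    X = np.fft.fftshift(np.fft.fft(x))
    keep = n // factor
    lo = (n - keep) // 2
    X_lp = X[lo:lo + keep]              # discard the outer frequency bins
    # Inverse transform at the reduced length, i.e., the decimated output
    return np.fft.ifft(np.fft.ifftshift(X_lp)) / factor

# A 2x-oversampled 820-point block (the two-beam case) reduces to 410 samples
block = np.exp(2j * np.pi * 0.05 * np.arange(820))
out = reduce_oversampling(block, factor=2)
print(len(out))   # 410
```

Discarding outer bins and inverse transforming at the shorter length performs the lowpass filtering and the decimation in a single step.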
Simulation results using the SIFFT algorithm are presented in Section 10.9. Note that the SIFFT concept is
applied to the RDA, although it can just as easily be applied to the CSA.
In summary, the concept of the SIFFT algorithm is to use short IFFTs to reduce or eliminate the spectral
mixing of individual targets from different bursts. In the next section, a different way of achieving this separation
of burst energy is outlined, by using a full-bandwidth IFFT, followed by the SPECAN algorithm for the final
azimuth processing step.
In the previous section, it is shown how short IFFTs can be used to isolate disjoint target spectra caused by
burst operation. Another method has been developed, using the SPECAN algorithm for the final azimuth
compression. This procedure has been applied to the CSA at DLR, and is called the Extended Chirp Scaling
algorithm (ECSA) [33-36].
The main steps in the ECSA are outlined in Figure 10.11. Essentially, the first five blocks are the same as
the CSA shown in Figure 7.1, but with the azimuth compression replaced by a frequency scaling operation that
equalizes the azimuth FM rate at each range cell. Then, the SPECAN algorithm is applied to the individual
bursts, as in Section 10.5. More details of each step are given in the following.
1. Azimuth FFT: The bursts can be transformed one burst at a time, or a number of bursts together. In
the latter case, sufficient zeros are inserted between each burst to create adequate separation of the bursts
after the azimuth IFFT of Step 5. Azimuth weighting can be applied, at this stage or during the final
SPECAN step.
2. Range chirp scaling: With the data in the range Doppler domain, range chirp scaling can be applied in
the same way as in the CSA discussed in Chapter 7. The perturbation function can be either linear FM, or
nonlinear if a higher accuracy is needed. This accomplishes differential RCMC, but also can be used to scale
the range axis, if desired.
Raw radar data -> (1) Azimuth FFT on burst data -> (2) Range chirp scaling for differential RCMC -> (3) Range FFT, phase multiply, and range IFFT -> (4) Phase correction and azimuth chirp scaling -> (5) Azimuth IFFT -> (6) SPECAN -> Compressed image data. (Side labels in the figure group the blocks into chirp scaling processing, preconditioning of the data for SPECAN processing, and SPECAN processing.)
Figure 10.11: Steps in the ECS algorithm for processing ScanSAR data.
3. Range processing: The data are transformed into the two-dimensional frequency domain, using a range
FFT, and a phase multiply is used to perform bulk RCMC, range compression, and an azimuth-frequency
dependent SRC. Then, a range IFFT returns the data to the range Doppler domain. The azimuth modulation
is left alone at this stage.
4. Phase correction and azimuth chirp scaling: This step involves a phase multiply that has two
components. One component corrects a range-dependent phase, introduced by the range chirp scaling opera-
tion, as in the CSA. A second component is unique to the ECSA. It is called azimuth chirp scaling, and has
the function of changing the range-dependent, hyperbolic azimuth phase modulation into a linear FM that is
independent of range. It also can be used to register the azimuth data as needed.
5. Azimuth IFFT: An azimuth IFFT returns the data to the time or spatial domain, where the bursts are
once again separated.
6. SPECAN: The SPECAN processing proceeds with a deramping or reference function multiply, followed by
an azimuth FFT. One FFT is applied to each burst to complete the azimuth focusing in the SAR image
domain. The SPECAN operations are similar to those described in Chapter 9, except that the azimuth FM
rate has been equalized beforehand.
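The six numbered steps can be sketched as a processing skeleton. All four phase arrays are placeholders (the actual chirp scaling, RCMC/SRC, and azimuth scaling phase functions are derived in Chapters 7 and 9 and in [33-36]), and the per-burst FFTs of Step 6 are collapsed into a single full-length FFT for brevity:

```python
import numpy as np

def ecsa_skeleton(raw, phi_cs, phi_2d, phi_corr, deramp):
    """Skeleton of the ECS algorithm flow of Figure 10.11.
    raw      : 2D burst data, azimuth x range (zero padded between bursts)
    phi_cs   : range chirp scaling phase, range Doppler domain     (Step 2)
    phi_2d   : bulk RCMC / range compression / SRC phase, 2D
               frequency domain                                    (Step 3)
    phi_corr : residual phase correction + azimuth chirp scaling   (Step 4)
    deramp   : SPECAN deramp reference, time domain                (Step 6)
    All four phase arrays are placeholders with the same shape as raw."""
    S = np.fft.fft(raw, axis=0)            # 1. azimuth FFT on burst data
    S *= phi_cs                            # 2. range chirp scaling (differential RCMC)
    S2 = np.fft.fft(S, axis=1)             # 3. range FFT ...
    S2 *= phi_2d                           #    ... bulk RCMC, range compression, SRC ...
    S = np.fft.ifft(S2, axis=1)            #    ... range IFFT
    S *= phi_corr                          # 4. phase correction + azimuth chirp scaling
    s = np.fft.ifft(S, axis=0)             # 5. azimuth IFFT (bursts separated again)
    return np.fft.fft(s * deramp, axis=0)  # 6. SPECAN: deramp, then azimuth FFT

# Shape check with dummy unit phases
naz, nrg = 64, 32
ones = np.ones((naz, nrg), dtype=complex)
img = ecsa_skeleton(ones, ones, ones, ones, ones)
print(img.shape)   # (64, 32)
```

With unit phases the forward and inverse transforms cancel, which makes the skeleton convenient for verifying array shapes and transform axes before the real phase functions are inserted.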
The steps above are the same as the corresponding steps in the CSA and SPECAN algorithms, with the
following difference. The azimuth scaling in Step 4 replaces the range-dependent azimuth compression of the CSA,
with a differential compression that leaves the signal with a linear FM equal to that at the reference range. This
preconditions the signal to the form needed for SPECAN processing, because SPECAN assumes linear FM. It also
has the advantage that the interpolation step, normally needed in the SPECAN algorithm, is no longer necessary.
The azimuth scaling has the effect of stretching the near-range targets in the time domain after the IFFT,
where the azimuth FM rate is decreased by the scaling. This governs how many zeros must be inserted between
blocks in Step 1. The radiometric correction for scalloping (antenna pattern correction) and the stitching together
of the bursts must still be done, as in all ScanSAR processing algorithms.
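The reason SPECAN needs the FM rate equalized beforehand can be seen from a one-line identity: multiplying a linear FM signal exp(jπKa(t-tc)²) by the deramp reference exp(-jπKa t²) leaves a pure tone at frequency -Ka tc, so a single FFT maps each target to a frequency bin proportional to its azimuth position. A minimal numeric sketch (the rate Ka, target positions, and burst length are illustrative, not from the text):

```python
import numpy as np

n = 512                       # azimuth samples in one burst (illustrative)
t = np.arange(n) - n // 2     # azimuth time, in samples
Ka = 0.25 / n                 # equalized azimuth FM rate, cycles/sample^2

# Two point targets at azimuth positions tc = -100 and +60 samples
burst = sum(np.exp(1j * np.pi * Ka * (t - tc) ** 2) for tc in (-100, 60))

# SPECAN: deramp with the common reference, then a single azimuth FFT
spec = np.abs(np.fft.fft(burst * np.exp(-1j * np.pi * Ka * t ** 2)))

# exp(j*pi*Ka*(t-tc)^2) * exp(-j*pi*Ka*t^2) = const * exp(-2j*pi*Ka*tc*t),
# a tone at -Ka*tc cycles/sample, i.e., FFT bin (-Ka*tc*n) mod n
for tc in (-100, 60):
    print(int((-Ka * tc * n) % n))   # 25, then 497: where the spectral peaks appear
```

If the FM rate were still range dependent, a single deramp reference could not turn every target into a pure tone, which is exactly what the azimuth scaling of Step 4 repairs.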
Discussion
One may ask why the SPECAN procedure must be introduced into the flow of Figure 10.11, when the range and
azimuth FFTs and IFFTs have already been done. The reason is that the coherent azimuth processing should not
take place across bursts, or else the modulation described in Section 10.4 will occur. The bursts should be
processed separately, which is accomplished efficiently using the SPECAN algorithm.
Compared with the SPECAN algorithm of Section 10.5, the ECSA offers the high accuracy of RCMC and
SRC, as well as the registration convenience of the scaling. It also has slightly more accuracy than the RDA
versions because of the avoidance of interpolation operations.
Comparing the ECSA with the SIFFT algorithm, the ECSA offers the possibility of higher efficiency because
the gaps in the bursts need not be filled in fully. However, this efficiency is burdened by increased complexity,
because both the IFFT and the SPECAN operations are needed for azimuth compression.
The ECSA has been used for Spotlight SAR processing, illustrating its versatility [37, 38] . It has also been
used for interferometry, where the range and azimuth scaling are helpful in providing accurate image
coregistration [39].
In all SAR processing algorithms, processed data have to be stitched together in azimuth to form a contiguous
output image. This is a natural operation in fast convolution, when the azimuth data are broken into blocks for
efficient processing (refer to Section 2.4). This is a straightforward operation in continuous-mode processors like
the RDA, unless the azimuth processing parameters change between blocks.
In burst-mode processing, stitching also has to be done, but there is a significant difference in the phase
properties of the results. The difference occurs because stitches can join targets between one burst and the next.
These stitches involve a jump in Doppler frequency across the bursts, which can upset interpolation operations
such as point target analysis. If the SIFFT algorithm is used, then the phase properties are also changed, because
the IFFT locations change between groups of targets within one burst.
This effect can best be seen in a simulation, using the SIFFT algorithm as an example. Thirteen targets are
simulated, as shown in Figure 10.12, which is a subset of Figure 10.10. The burst length is 256 samples in the
time domain, and the data gap is also 256 samples. The FFT length is 2048 (four bursts plus four data gaps).
The bandwidth of each target occupies 410 frequency samples, not counting frequency leakage. Hence, each short
IFFT can be up to 820 samples long, as is the case shown in Figure 10.12.
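The length bookkeeping in this example can be checked directly; in this two-burst-per-aperture case the maximum short-IFFT length is twice the per-burst target bandwidth, which is also the source of the factor-of-two output oversampling noted earlier:

```python
# Bookkeeping for the SIFFT simulation example (numbers from the text)
burst_len = 256            # burst length, time-domain samples
gap_len   = 256            # gap between bursts, samples
n_bursts  = 4
fft_len   = n_bursts * (burst_len + gap_len)   # forward FFT length
target_bw = 410            # frequency samples occupied by one target exposure
ifft_len  = 2 * target_bw  # maximum short-IFFT length in the two-burst case

print(fft_len)                 # 2048
print(ifft_len)                # 820
print(ifft_len / target_bw)    # 2.0 -> output oversampling factor
```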
To illustrate the stitching operation, consider IFFTs 1 and 2 in Figure 10.12. The outputs of these IFFTs are
shown in the top two panels of Figure 10.13. The horizontal lines above the targets indicate the burst from
which each group of targets originates: the solid part of the line shows which targets are fully exposed, while the
dashed parts of each line indicate which targets are partially exposed. In this example, which uses the maximum
length IFFT, each partially exposed target comes from two bursts.
It is noted that Targets 1-5 and 9-13 are cleanly compressed in IFFT 1, whereas Targets 6-8 are not. The
reason for this is apparent in Figure 10.12, where Targets 1-5 and 9-13 are fully exposed in contiguous samples
of the domain of IFFT 1, whereas the exposures of Targets 6-8 are separated into two sections, with a Doppler
gap between them.
Figure 10.12: The thirteen simulated targets in Bursts 3 and 4, with the partially exposed targets indicated (time axis in samples, 250-550).
Figure 10.13: Compressed outputs of IFFT 1 and IFFT 2, and the stitched result, for Bursts 2-4 (time axis in samples, 250-550).
Looking at the IFFT 2 output in Figure 10.13(b), it is seen that the targets that were corrupted in IFFT 1 are
now compressed correctly, while those compressed correctly in IFFT 1 are now corrupted.3 Upon examining this
pattern of fully and partially compressed targets, it is evident how to combine the outputs of IFFT 1 and IFFT
2 to form a contiguous set of well compressed targets in the output array. The output arrays are truncated at
Targets 5 and 9 and joined together, as shown in Figure 10.13(c). It can be seen that a contiguous set of well
compressed targets is obtained after the stitching.
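The truncate-and-join operation itself is simple; what matters is choosing the cut points at fully compressed targets and recording them for the later phase compensation. A sketch (the array contents and cut index are illustrative stand-ins, not the simulated data):

```python
import numpy as np

def stitch(blocks, cuts):
    """Join overlapping IFFT output blocks at the given cut indices
    (in output-array coordinates), keeping only the well-compressed
    span of each block. Returns the stitched line and the stitch
    points, which must be recorded for later phase compensation."""
    pieces, points = [], []
    start = 0
    for block, cut in zip(blocks, cuts + [None]):
        pieces.append(block[start:cut])
        if cut is not None:
            points.append(cut)
            start = cut
    return np.concatenate(pieces), points

# Two 12-sample blocks covering the same output span; cut at index 7
a = np.full(12, 1.0)    # stand-in for the IFFT 1 output
b = np.full(12, 2.0)    # stand-in for the IFFT 2 output
line, pts = stitch([a, b], [7])
print(len(line), pts)   # 12 [7]
```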
A frequency discontinuity exists when a target is stitched across two different bursts (or even across two IFFTs in
the same burst in the SIFFT case). This is exemplified by Target 9, one-half of which originated from Burst 4
(extracted by IFFT 1) and the other half from Burst 3 (extracted by IFFT 2). The frequency discontinuity is
similar to that shown in Figure 9.20. If the stitching point occurs near a point target of interest, the frequency
discontinuity causes the interpolation in the point target analysis to be inaccurate. Provided that a record of the
stitching points is maintained, accurate point target analysis can still be performed by applying a phase
compensation, as described in Section 9.7.1.
When these stitched outputs are used for interferometric processing, the phase errors in the two images may
cancel each other, if the stitches are exactly aligned. However, since aligning the stitches may be difficult to
achieve, a safer procedure is as follows. The IFFTs are moved closer together so that there is a small overlap in
the number of good points from adjacent IFFTs. The interferogram can be formed away from the stitching
points, avoiding the phase error caused by the image stitching. The interferograms from adjacent bursts share a
common area, where they can be stitched, thereby avoiding the phase errors that are a function of image
stitching [11] . It should also be remembered that the frequency discontinuity experienced at the stitching point is
city of Buffalo is at the east end of Lake Erie. The Niagara River joins the two lakes, with the city of Niagara
Falls about half way between the lakes.
References
[1] W. T. K. Johnson. Magellan Imaging Radar Mission to Venus. Proceedings of the IEEE, 79 (6), pp. 777-790, June 1991.
[2] R. K. Moore, J. P. Claassen, and Y. H. Lin. Scanning Spaceborne Synthetic Aperture Radar with Integrated Radiometer. IEEE Trans. on Aerospace and Electronic Systems, AES-17, pp. 410-421, May 1981.
[3] K. Tomiyasu. Conceptual Performance of a Satellite Borne, Wide Swath Synthetic Aperture Radar. IEEE Trans. on Geoscience and Remote Sensing, 19 (2), pp. 108-116, April 1981.
[4] A. P. Luscombe. Taking a Broader View: Radarsat Adds ScanSAR to Its Operations. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'88, Vol. 2, pp. 1027-1032, Edinburgh, Scotland, September 1988.
[5] B. L. Honeycutt. Spaceborne Imaging Radar-C Instrument. IEEE Trans. on Geoscience and Remote Sensing, 27 (2), pp. 164-169, March 1989.
[6] Special Issue on SIR-C/X-SAR. IEEE Trans. on Geoscience and Remote Sensing, 33 (4), pp. 817-956, July 1995.
[7] R. K. Raney, A. P. Luscombe, E. J. Langham, and S. Ahmed. RADARSAT. Proc. of the IEEE, 79 (6), pp. 839-849, 1991.
[8] S. Karnevi, E. Dean, D. J. Q. Carter, and S. S. Hartley. ENVISAT's Advanced Synthetic Aperture Radar: ASAR. ESA Bulletin, 76, pp. 30-35, 1994.
[9] J.-L. Suchail, C. Buck, J. Guijarro, and R. Torres. The ENVISAT-1 Advanced Synthetic Aperture Radar Instrument. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'99, Vol. 2, pp. 1441-1443, Hamburg, Germany, June 1999.
[30] F. H. Wong, D. R. Stevens, and I. G. Cumming. Phase-Preserving Processing of ScanSAR Data with a Modified Range Doppler Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 2, pp. 725-727, Singapore, August 1997.
[31] I. G. Cumming, Y. Guo, and F. H. Wong. A Comparison of Phase-Preserving Algorithms for Burst-Mode SAR Data Processing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 2, pp. 731-733, Singapore, August 1997.
[32] S. Albrecht and I. G. Cumming. The Application of the Momentary Fourier Transform to SAR Processing. IEE Proc.: Radar, Sonar and Navigation, 146 (6), pp. 285-297, December 1999.
[33] A. Moreira, J. Mittermayer, and R. Scheiber. Extended Chirp Scaling Algorithm for Air- and Spaceborne SAR Data Processing in Stripmap and ScanSAR Imaging Modes. IEEE Trans. on Geoscience and Remote Sensing, 34 (5), pp. 1123-1136, September 1996.
[34] J. Mittermayer, R. Scheiber, and A. Moreira. The Extended Chirp Scaling Algorithm for ScanSAR Data Processing. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'96, pp. 517-520, Konigswinter, Germany, March 1996.
[35] J. Mittermayer, A. Moreira, and R. Scheiber. Reduction of Phase Errors Arising from the Approximations in the Chirp Scaling Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'98, Vol. 2, pp. 1180-1182, Seattle, WA, July 1998.
[36] J. Mittermayer and A. Moreira. A Generic Formulation of the Extended Chirp Scaling Algorithm (ECS) for Phase Preserving ScanSAR and SpotSAR Processing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'00, Vol. 1, pp. 108-110, Honolulu, HI, July 2000.
[37] J. Mittermayer and A. Moreira. Spotlight SAR Processing Using the Extended Chirp Scaling Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 4, pp. 2021-2023, Singapore, August 1997.
[38] J. Mittermayer, A. Moreira, and O. Loffeld. High Precision Processing of Spotlight SAR Data Using the Extended Chirp Scaling Algorithm. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'98, pp. 561-564, Friedrichshafen, Germany, May 1998.
[39] J. Mittermayer and A. Moreira. The Extended Chirp Scaling Algorithm for ScanSAR Interferometry. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'00, pp. 197-200, Munich, Germany, May 2000.
Chapter 11
Comparison of Algorithms
11.1 Introduction
Three high resolution SAR processing algorithms have been presented in the preceding chapters: the RDA, CSA,
and WKA. The purpose of this chapter is to compare the properties of these algorithms, to give the reader
some guidance in selecting the most appropriate algorithm.
Sections 11.2 and 11.3 provide a brief recap of the three algorithms, first grouped by algorithm, then
grouped by the main processing functions such as azimuth compression, RCMC, and SRC. This involves some
repetition, but provides a convenient reference. Section 11.4 quantifies the main errors in each algorithm, using
satellite and airborne systems as examples. Section 11.5 gives a comparison in terms of the number of arithmetic
operations for a typical processing block size. Finally, Section 11.7 summarizes the pros and cons of each
algorithm and gives a few guidelines to help the reader select a suitable algorithm for their application.
The main operations and features of the RDA, CSA, and WKA are summarized in this section.
The main features of the RDA are to perform RCMC in the range Doppler domain and to limit all operations
to one dimension at a time. Radar scatterers that have the same slant range of closest approach have a common
locus of energy after transformation to the range Doppler domain. Processing efficiency results because one
RCMC operation achieves the correction of a whole family of targets within an azimuth processing block.
The RDA is distinguished among frequency domain algorithms by its explicit ability to accommodate range
variations of parameters with relative ease. For example, changes in RCMC and azimuth compression can be
made to track changes in the effective radar velocity, Vr, the Doppler centroid, fηc, and the azimuth FM rate,
Ka. This flexibility is a result of processing the data in the range Doppler domain, in which one of the axes is
range time.
Secondary range compression (SRC) is an additional step applied to correct for range/azimuth coupling of
the target's phase history. It can be implemented in different ways. An efficient way is to combine it with the
range compression filter, but this requires the approximation that the SRC is independent of azimuth frequency. A
more accurate way is to implement it in the two-dimensional frequency domain, so that its azimuth frequency
dependence can be accommodated.
The "chirp scaling" principle allows small, nonlinear changes of scale of a function, assuming the data are encoded
in linear FM chirps. In the CSA, chirp scaling is used to apply differential RCMC efficiently and accurately with
a simple phase multiply.
The CSA starts with an azimuth Fourier transform on the uncompressed range data. The data are now in
the range Doppler domain, where the scaling or perturbation function is applied. This has the effect of
performing a differential RCMC, leaving a bulk RCM in the data. Next, range compression and the
range-independent components of RCM, range-azimuth coupling, and azimuth modulation are corrected by a
reference function multiply (RFM) in the two-dimensional frequency domain. Differential azimuth compression (but
not differential SRC) is later applied in the range Doppler domain.
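The scaling property underlying the CSA can be demonstrated in one dimension. Multiplying a linear FM chirp of rate K centered at tc by a small perturbation chirp exp(jπqK(t - tref)²) changes its rate to K(1+q) and moves its phase center to (tc + q tref)/(1+q), a shift of approximately -q(tc - tref), proportional to the distance from the reference. After compression, the peak appears at the shifted position, which is how the perturbation applies a differential shift with a simple phase multiply. A sketch with illustrative numbers (K, q, and the positions are not from the text):

```python
import numpy as np

N  = 4096
t  = np.arange(N) - N // 2          # time, in samples
K  = 4e-4                           # chirp FM rate, cycles/sample^2
q  = 0.05                           # perturbation scaling factor
Tp = 1024                           # chirp duration, samples

def chirp(tc, rate):
    """Finite-duration linear FM chirp centered at tc."""
    return np.where(np.abs(t - tc) <= Tp // 2,
                    np.exp(1j * np.pi * rate * (t - tc) ** 2), 0)

tc = 500                            # target offset from the reference (tref = 0)
sig = chirp(tc, K) * np.exp(1j * np.pi * q * K * t ** 2)   # perturbation multiply

# Compress with a filter matched to the perturbed rate K(1+q)
ref = chirp(0, K * (1 + q))
out = np.abs(np.fft.ifft(np.fft.fft(sig) * np.conj(np.fft.fft(ref))))

peak = int(np.argmax(out))          # circular lag of the correlation peak
print(peak, round(tc / (1 + q)))    # ~476 in both: the center moved by about -q*tc
```

The target at the reference position (tc = 0) would be left in place by the same multiply, so the operation applies exactly the range-dependent differential shift that RCMC requires.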
11.2.3 Omega-K Algorithm
The WKA processes the data in the two-dimensional frequency domain. An RFM is applied to focus targets at a
reference range. Other targets are not focused at this stage, and the amount of defocusing increases with distance
away from the reference range.
A Stolt interpolation is then applied to focus the remaining targets. It is a one-dimensional interpolation
applied in the range frequency direction. It has the effect of applying a differential correction to the data to
account for the range variation of the azimuth FM rate, RCMC, and SRC. However, the variation of Vr is not
taken into account.
The Stolt interpolation is the most computationally difficult step in the algorithm, and an approximation can
be made that avoids it. The approximate form proceeds by performing an IFFT in range after the bulk
compression is done. Next, a phase multiply is applied in the range Doppler domain, which corrects the
differential azimuth compression, but not the differential RCMC or SRC. Operating in the range Doppler domain
also allows the variation of Vr to be taken into account.
This section compares the algorithms in terms of the most distinctive processing functions. These functions are
the azimuth matched filter, RCMC, and SRC operations. Table 11.1 summarizes these basic operations in the
various algorithms, and lists which parameters are range variant or invariant in these operations. In the table, a
linear FM perturbation function is assumed in the CSA, and the accurate form of the WKA is assumed not to
include a residual azimuth compression to correct for the change in Vr with range.
In the discussion, two types of range variance are distinguished:
Directly range variant: The processing function uses range variant parameters, such as the azimuth linear FM
rate, the SRC FM rate, and the RCM. When one of these parameters is assumed to be range invariant, the
corresponding error can be expressed explicitly as a function of the offset, ΔR0, from the reference range.
Indirectly range variant: The above parameters also can be functions of another parameter that is in turn a
function of range. An example is Vr in a satellite case. In this example, the error in the parameter of
interest due to a range invariance assumption of Vr is quantified as a function of the error ΔVr.
Table 11.1: Comparison of Processing Functions
Notes: The range invariances in both cases (1) and (2) can be compensated.
In case (1), a residual azimuth compression can be performed by a phase
multiply. In case (2), a nonlinear perturbation function can be used.
P.S. means "Power Series."
The range equation is the main model from which many of the processing functions are calculated.
RDA: A hyperbolic form is usually used in the processing equations. A parabolic form is sometimes used for
simplicity, which can provide a good approximation for low squint angles and/or narrow apertures.
CSA: A hyperbolic form is generally used. As in the RDA, a parabolic form can be used in some cases, which
makes the derivation of the perturbation function easier.
The azimuth matched filter coefficients are a function of the range-dependent azimuth modulation. The matched
filter can be applied in one operation, or can be divided into range-invariant (bulk) and range-varying (residual)
components, and applied in two stages.
RDA: Its coefficients can be generated at each range gate, whereby all range varying parameters, such as fηc and
Vr, can be accommodated.
CSA: The matched filter is applied in two stages. The bulk matched filtering is done by a two-dimensional
frequency domain RFM. The residual is performed by a second RFM, after the data are transformed back
to the range Doppler domain, where all range-dependent parameters can be accommodated.
WKA: There are two stages of azimuth matched filtering. The bulk filtering is done by a two-dimensional
frequency domain RFM. In the accurate WKA, the residual filtering is done via a range frequency
interpolation in the Stolt mapping step. In the approximate form of the WKA, the residual filtering is done
with a phase multiply in the range Doppler domain.
The presence of RCM is the main feature of the received data that complicates SAR processing. It is corrected
differently in each algorithm, to the extent that it is often the most distinguishing feature of the algorithm.
RDA: The RCMC operation is performed in the range Doppler domain using an interpolation in the range time
variable. By operating in this domain, the migration of all forms of the range equation can be accurately
corrected.
CSA: The main feature in the CSA is that RCMC is performed without the use of an interpolator. The RCMC
consists of two parts. A differential RCMC is first performed by a chirp scaling RFM in the range Doppler
domain. Then, the bulk RCMC is performed by an RFM in the two-dimensional frequency domain.
WKA: RCMC is also done in two stages, but in a different order than in the CSA. The bulk RCM is first
corrected by a two-dimensional frequency domain RFM. The residual RCMC is then done in the same
domain using a range frequency interpolation in the Stolt mapping step. Operating in this domain does not
allow Vr to vary with range. In the approximate form of the algorithm, the residual RCM is ignored.
RDA: An approximate but efficient form of the SRC is implemented by combining it with the range matched
filter. In this implementation, the filter is assumed to be range time and azimuth frequency invariant. An
accurate form is to implement it in the two-dimensional frequency domain, where the azimuth frequency
dependency can be taken into account. In either case, the SRC is assumed to be independent of range and
Vr.
CSA: SRC is implemented in the two-dimensional frequency domain. Again, this implementation assumes range
and Vr invariance.
WKA: Similar to azimuth matched filtering, there are two stages of SRC. The bulk of SRC is done by a
two-dimensional frequency domain RFM. The residual SRC is done in the same domain via a range
frequency interpolation in the Stolt mapping step. Again, the SRC does not allow Vr to vary with range. The
SRC in the WKA is the most accurate among the algorithms presented, because it allows for range
dependence. In the approximate form of the algorithm, the residual SRC is ignored.
11.4 Summary of Processing Errors
The main SAR processing operations are range matched filtering, SRC, azimuth matched filtering, and RCMC.
The slant range, R0, and effective radar velocity, Vr, are two key parameters used as inputs in these operations.
Keeping them constant would introduce QPEs and a residual RCM. Recall that the QPE should be kept to
within 0.5π, and the residual RCM within 0.5 of a range resolution element.
The fη-dependent SRC has an additional error when its approximate form (combined with the range matched
filter) is used, since, in this form, fη is not allowed to vary. The magnitudes of these errors are investigated in
this section for X-, C-, and L-band airborne and spaceborne systems. The errors due to range, velocity, and
azimuth frequency changes are denoted by ΔR0, ΔVr, and Δfη, respectively.
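The two acceptance rules just quoted (QPE within 0.5π radians, residual RCM within half a range resolution element) are simple to encode; a small helper of this kind, hypothetical but using exactly those thresholds, is convenient when tabulating errors for a new system:

```python
import math

def acceptable(qpe_rad, residual_rcm_m, range_res_m):
    """Check the two rules of thumb: quadratic phase error within
    0.5*pi radians, residual RCM within half a range resolution element."""
    return qpe_rad <= 0.5 * math.pi and residual_rcm_m <= 0.5 * range_res_m

print(acceptable(0.3 * math.pi, 0.6, 1.5))   # True: both within limits
print(acceptable(0.7 * math.pi, 0.6, 1.5))   # False: QPE too large
```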
The quadratic phase at the end of the aperture is π Ka (Ta/2)², where Ta is the exposure time. The azimuth FM
rate, Ka, given by (4.38), is rewritten here, explicitly showing its dependence on R0 and Vr:

K_a(R_0, V_r) = \frac{2\, V_r^2 \cos^3 \theta_{r,c}}{\lambda\, R_0}   (11.1)

Since Ka is inversely proportional to R0, the QPE in the azimuth direction due to ΔR0 is

\Delta \phi_{a,R} = -\pi\, K_a(R_0, V_r)\, \frac{\Delta R_0}{R_0} \left( \frac{T_a}{2} \right)^2   (11.2)

The three processing algorithms can accommodate a variant R0, as summarized in Table 11.1, so it is not a
problem here. However, the above analysis is useful when the use of range invariance regions for the azimuth
matched filter is considered.
Similarly, the QPE error in the azimuth direction due to ΔVr is

\Delta \phi_{a,V} = 2\pi\, K_a(R_0, V_r)\, \frac{\Delta V_r}{V_r} \left( \frac{T_a}{2} \right)^2   (11.3)
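Since Ka is proportional to Vr², a fractional velocity error ΔVr/Vr produces an FM rate error of 2 Ka ΔVr/Vr and hence an end-of-aperture QPE of magnitude 2π Ka (ΔVr/Vr)(Ta/2)². A numeric sketch with illustrative spaceborne C-band-like parameters (these are assumed values, not those used for Tables 11.2 and 11.3):

```python
import math

# Illustrative spaceborne C-band parameters (assumed, not from Table 4.1)
Vr    = 7100.0              # effective radar velocity, m/s
R0    = 850e3               # slant range of closest approach, m
lam   = 0.057               # wavelength, m
theta = math.radians(4.0)   # squint angle at beam center
Ta    = 0.64                # azimuth exposure time, s
dV_Vr = 0.0015              # 0.15% velocity variation across the swath

# Azimuth FM rate, (11.1): Ka = 2 Vr^2 cos^3(theta) / (lam * R0)
Ka = 2 * Vr**2 * math.cos(theta)**3 / (lam * R0)

# QPE magnitude due to the velocity error: Ka is proportional to Vr^2,
# so dKa = 2 Ka dVr/Vr, and QPE = pi * dKa * (Ta/2)^2
qpe = 2 * math.pi * Ka * dV_Vr * (Ta / 2)**2

print(round(Ka))                  # ~2066 Hz/s
print(round(qpe / math.pi, 2))    # ~0.63 (in pi radians): above the 0.5*pi limit
```

With these illustrative numbers the velocity-induced QPE already exceeds the 0.5π limit, which is why a residual azimuth compression or range invariance regions are needed in such cases.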
The SRC FM rate, Ksrc, is given by (6.22). It is a function of R0, Vr, and fη, as follows:

K_{src}(R_0, V_r, f_\eta) = \frac{2\, V_r^2\, f_0^3\, D^3(f_\eta, V_r)}{c\, R_0\, f_\eta^2}   (11.4)

where, from the definition of the Doppler centroid,

f_{\eta_c} = \frac{2\, V_r \sin \theta_{r,c}}{\lambda}   (11.5)

Recall that the combined range chirp FM rate [see (5.40)] in the range Doppler domain is

K_m = \frac{K_r}{1 - K_r / K_{src}(R_0, V_r, f_\eta)} \approx K_r + \frac{K_r^2}{K_{src}(R_0, V_r, f_\eta)}   (11.6)

The approximation is obtained because |Ksrc| >> |Kr|. Combining (11.4) and (11.6), the respective errors in Km
due to ΔR0, ΔVr, and Δfη can be obtained. The respective QPEs in the range direction are:

\Delta \phi_{src,R} = \pi \left. \frac{K_r^2}{K_{src}(R_0, V_r, f_\eta)}\, \frac{\Delta R_0}{R_0} \left( \frac{T_r}{2} \right)^2 \right|_{f_\eta = f_{\eta_c}}   (11.7)

\Delta \phi_{src,V} = -2\pi \left. \frac{K_r^2}{K_{src}(R_0, V_r, f_\eta)}\, \frac{\Delta V_r}{V_r} \left( \frac{T_r}{2} \right)^2 \right|_{f_\eta = f_{\eta_c}}   (11.8)

\Delta \phi_{src,f} = -\pi \left. \frac{K_r^2}{K_{src}^2(R_0, V_r, f_\eta)}\, \Delta K_{src,f} \left( \frac{T_r}{2} \right)^2 \right|_{f_\eta = f_{\eta_c}}   (11.9)

where Tr is the pulse duration. In the last equation, ΔKsrc,f is at its maximum when Δfη is one half of the
Doppler bandwidth, Δfdop.
The residual RCM can be derived in the azimuth frequency domain, which is the domain of the RCMC in the
three algorithms. For squinted cases, the main contribution to the residual RCM is the linear component over the
Doppler bandwidth, Δfdop, and this component is analyzed below.
The slant range equation in the range Doppler domain is given by (6.23):

R_{rd}(f_\eta) = \frac{R_0}{D(f_\eta, V_r)}   (11.10)

The linear component of the RCM over the Doppler bandwidth is then

R_{rcm}(R_0, V_r) = \frac{R_0\, \lambda^2 f_{\eta_c}^2}{4\, V_r^2\, D^3(V_r, f_{\eta_c})}\, \frac{\Delta f_{dop}}{f_{\eta_c}} = \frac{R_0 \sin^2 \theta_{r,c}}{\cos^3 \theta_{r,c}}\, \frac{\Delta f_{dop}}{f_{\eta_c}}   (11.11)

The last step is obtained by recognizing that D(Vr, fηc) = cos θr,c, as explained in Section 5.3.3, and by seeing
that λ² fηc² / (4 Vr²) = sin² θr,c, by virtue of (11.5).
The residual RCM, due to ΔR0 and ΔVr, respectively, is given by

\Delta R_{rcm,R} = \frac{\sin^2 \theta_{r,c}}{\cos^3 \theta_{r,c}}\, \frac{\Delta f_{dop}}{f_{\eta_c}}\, \Delta R_0   (11.12)

\Delta R_{rcm,V} = -2\, \frac{R_0 \sin^2 \theta_{r,c}}{\cos^3 \theta_{r,c}}\, \frac{\Delta f_{dop}}{f_{\eta_c}}\, \frac{\Delta V_r}{V_r}   (11.13)
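The residual linear RCM scales as R0 sin²θr,c/cos³θr,c times the fractional Doppler offset Δfdop/fηc, so a range-offset error ΔR0 produces a residual of that geometric factor times ΔR0, as in (11.12). Evaluating with illustrative spaceborne-like numbers (assumed values, not those of Table 4.1):

```python
import math

# Illustrative spaceborne parameters (assumed, not from Table 4.1)
Vr    = 7100.0              # effective radar velocity, m/s
lam   = 0.057               # wavelength, m
theta = math.radians(4.0)   # squint angle at beam center
dfdop = 1200.0              # processed Doppler bandwidth, Hz
dR0   = 20e3                # error in the range invariance assumption, m

# Doppler centroid from the squint geometry: f_eta_c = 2 Vr sin(theta) / lam
f_etac = 2 * Vr * math.sin(theta) / lam

# Residual linear RCM caused by dR0, per (11.12):
# dRrcm_R = (sin^2 / cos^3)(theta) * (dfdop / f_etac) * dR0
geom = math.sin(theta)**2 / math.cos(theta)**3
dRrcm_R = geom * (dfdop / f_etac) * dR0

print(round(f_etac))        # ~17378 Hz
print(round(dRrcm_R, 2))    # ~6.77 m of residual RCM
```

Even a modest squint of 4 degrees makes the residual several range cells for a 20 km invariance region at these assumed parameters, illustrating why squinted cases need tighter invariance regions.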
The above analysis is pertinent to the linear RCM component only, because only the first derivative is considered
in (11.11). For the cases considered, its sensitivity to ΔR0 and ΔVr can be ignored.
The errors defined above are computed using the airborne and spaceborne parameters shown in Table 4.1, and are
summarized in Tables 11.2 and 11.3. Note that squint angles are assumed to be 8° and 4° for the airborne and
spaceborne cases, respectively.
The conclusions drawn in this section are based on the two typical examples, but the reader should perform
an independent error analysis for each new system under consideration. In the analysis, Vr is assumed to vary by
0.15% across the range swath width in the spaceborne case.
The performance of the algorithms is comparable, with the following exceptions:
o From Table 4.1, it can be deduced that the range resolution for the airborne case is 1.5 m, and is 7.5 m for
the spaceborne case. In both cases, the approximate WKA gives a residual RCM of more than one range
resolution element for C- and L-band. Therefore, the approximate WKA algorithm should be ruled out,
except perhaps for X-band, or range invariance regions have to be used. For the C-band example, the
residual RCM is 1 m, or 0.7 of a resolution element, and hence it is slightly above the 0.5 resolution
element threshold.
o For the airborne L-band case, the RDA algorithm using the approximate SRC form gives an unacceptably
large QPE of more than 10π. For airborne C-band, it is 0.7π, again more than the acceptable value of 0.5π.
In other words, this form can be used for X-band only in the airborne case.
For the satellite case, the approximate SRC can be used for X-band and C-band, but not L-band. For L-band,
the QPE is 0.7π.
o In the spaceborne case, Δφa,V has to be compensated. As mentioned, it can be done by a phase multiply
in the range Doppler domain before the final azimuth Fourier transform. However, for L-band, there is still a
residual RCM of 8.2 m, corresponding to several range resolution elements. A residual RCMC can also be
performed in the range Doppler domain by an interpolator, but this will increase the number of computations, and
perhaps defeat the purpose of the algorithm.
o In the L-band satellite case, a nonlinear perturbation function in the CSA is preferred to eliminate the Vr
variance in the SRC, as the residual RCM is 8.2 m, which is more than one resolution element.
o In the satellite case, and for all three bands, the accurate form of the WKA requires a residual azimuth
compression, since the residual QPE is more than 0.5π.
Table 11.2: Algorithm Errors for the Airborne Case (QPEs in π radians; residual RCM in m; X-, C-, and L-band)
Table 11.3: Algorithm Errors for the Spaceborne Case (QPEs in π radians; residual RCM in m)
Notes:
(1) A nonlinear perturbation function would remove this residual RCM.
(2) A residual azimuth compression would remove this QPE.
The purpose of this section is to estimate the number of floating point operations (FLOPs) in each of the SAR
processing algorithms. Each FLOP can either be a real multiply or a real add. 1
Each of the SAR algorithms requires some or all of the following basic operations.
FFT and IFFT: An FFT or IFFT of length N requires 5N log2 (N) FLOPs.
Chirp Scaling
Azimuth FFT = 5 Nrg Naz log2(Naz) / 10^9 = 1.01
CS phase multiply = 6 Nrg Naz / 10^9 = 0.10
Subtotal = 1.11
Two-dimensional spectrum processing
Range FFT = 5 Naz Nrg log2(Nrg) / 10^9 = 1.01
Phase multiply = 6 Nrg Naz / 10^9 = 0.10
Range IFFT = Range FFT = 1.01
Subtotal = 2.11
Residual azimuth processing
Phase multiply = 6 Nrg,out Naz / 10^9 = 0.08
Azimuth IFFT = 5 Nrg,out Naz log2(Naz) / 10^9 = 0.76
Subtotal = 0.83
Total = 4.05
The final residual azimuth processing is performed on the number of output range samples, Nrg,out, after the
range matched filter throwaway. The final phase multiply accounts for both residual phase correction due to the
earlier chirp scaling operation, and the differential azimuth phase compression.
Two implementations of the SRC in the RDA are considered: approximate and accurate. Similarly, two forms
of the WKA are considered: approximate and accurate, without and with the Stolt interpolation, respectively.
The final residual azimuth processing is performed on the number of output range samples, Nrg,out, after the
range matched filter throwaway.
Accurate Form, with Stolt Interpolation
Range FFT = 5 Nrg Naz log2(Nrg) / 10^9 = 1.01
Azimuth FFT = 5 Nrg Naz log2(Naz) / 10^9 = 1.01
RFM phase multiply = 6 Nrg Naz / 10^9 = 0.10
Stolt interpolation = 2 (2 Mker - 1) Nrg Naz / 10^9 = 0.50
Range IFFT = Range FFT = 1.01
Azimuth IFFT = 5 Nrg,out Naz log2(Naz) / 10^9 = 0.76
Total = 4.38
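The printed totals can be reproduced from the per-operation rules (5N log2 N FLOPs per length-N FFT, 6 FLOPs per complex multiply, and 2(2 Mker - 1) FLOPs per interpolated point). The block sizes below are inferred so as to match the printed subtotals and are not stated in this excerpt: Nrg = Naz = 4096, Nrg,out = 3072, and an interpolation kernel length Mker = 8.

```python
from math import log2

Nrg, Naz, Nrg_out, Mker = 4096, 4096, 3072, 8   # inferred block sizes (see above)

def fft_flops(rows, n):
    """FLOPs for `rows` FFTs (or IFFTs) of length n."""
    return 5 * rows * n * log2(n)

def pmul_flops(rows, n):
    """FLOPs for a pointwise complex phase multiply on a rows-by-n array."""
    return 6 * rows * n

# Chirp scaling algorithm (GFLOPs)
csa = (fft_flops(Nrg, Naz) + pmul_flops(Nrg, Naz)       # azimuth FFT + CS multiply
       + fft_flops(Naz, Nrg) + pmul_flops(Nrg, Naz)     # range FFT + 2D-domain multiply
       + fft_flops(Naz, Nrg)                            # range IFFT
       + pmul_flops(Nrg_out, Naz)                       # residual phase multiply
       + fft_flops(Nrg_out, Naz)) / 1e9                 # azimuth IFFT

# Accurate WKA with Stolt interpolation (GFLOPs)
wka = (fft_flops(Naz, Nrg) + fft_flops(Nrg, Naz)        # range + azimuth FFTs
       + pmul_flops(Nrg, Naz)                           # RFM phase multiply
       + 2 * (2 * Mker - 1) * Nrg * Naz                 # Stolt interpolation
       + fft_flops(Naz, Nrg)                            # range IFFT
       + fft_flops(Nrg_out, Naz)) / 1e9                 # final azimuth IFFT

print(round(csa, 2), round(wka, 2))   # 4.05 4.38
```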
The final azimuth IFFT is performed on the number of output range samples, Nrg,out, after the range
matched filter throwaway.
o If range compression has already been performed (e.g., onboard the platform) or a chirp was not used, the
data have to be expanded with a chirp before the CSA is applied.
o If the range matched filter is very long, efficiency suffers because the extra data corresponding to the range
throwaway must be retained in memory and be used in all computations up until the range IFFT.
o Because processing is partly done in the two-dimensional frequency domain, the algorithm cannot easily cope
with large changes in the Doppler centroid over range. For example, if the azimuth oversampling ratio is
20%, spectral mixing will occur if the Doppler centroid changes by more than 20% over the processed range
swath. When the Doppler centroid varies by more than the oversampling ratio, range invariance regions have
to be used, incurring an appreciable efficiency penalty. This is the drawback of any algorithm performing
processing, even partly, in the two-dimensional frequency domain.
o Because the data enter the two-dimensional frequency domain in an uncompressed state, larger array sizes are
needed to accommodate the range matched filter throwaway region compared with the RDA.
o SRC is assumed to be range and Vr invariant, since only the bulk correction is applied.
o The perturbation function is only linear FM when the pulse chirp is linear FM, and when the RCM varies
linearly with range in the case of a range-invariant Vr [see (6.23)]. If these conditions are not satisfied, a
nonlinear FM perturbation function can be used. This is more complicated and involves an approximation in
the final phase correction.
o The WKA gives exact SAR processing for all squint angles and aperture widths, as long as the hyperbolic
range equation holds and there is no variation of Vr with range. These assumptions are valid for airborne
systems, and for satellite systems over a limited swath width.
o Unlike the RDA or CSA, the SRC is applied accurately (including its slant range and azimuth frequency
dependencies), apart from the constant Vr assumption.
o The approximate form of the WKA is very efficient. It can be used when the differential RCMC and SRC
are negligible.
o A range frequency interpolation is needed for the accurate form of the WKA. Interpolators are not as easy
or accurate to apply as phase multiplies.
o Again, because processing is done in the two-dimensional frequency domain, the algorithm cannot cope with
rapid changes in the Doppler centroid.
o If the data are not already range compressed, larger array sizes are needed in the two-dimensional frequency
domain to accommodate the range matched filter throwaway region. In the WKA, data can be range
compressed prior to the entry into the two-dimensional frequency domain, at the expense of an extra set of
range FFTs and IFFTs.
o The range variation of Vr is not accounted for in the algorithm, although an additional differential azimuth
compression step can be added before the final azimuth IFFT.
11.7 Summary
The three SAR precision processing algorithms, RDA, CSA, and WKA, are compared in this chapter. Their
similarities and differences are highlighted, including variations of each algorithm. Assumptions and approximations
are different in each algorithm. Using airborne and spaceborne examples, the errors due to these approximations
are analyzed and tabulated. The computation load of the basic operations of FFTs, phase multiplies, and
interpolations required for each algorithm are tabulated.
Although many detailed considerations must be examined in order to select the best algorithm for a specific
application, the reader can begin with some high-level guidelines:
o For medium or low resolution processing, the efficiency of the SPECAN algorithm cannot be surpassed. It is
a good choice for quicklook systems in which the user may wish to browse through the low resolution
products before proceeding with high resolution processing.
SPECAN is also suitable for ScanSAR processing, although some of the other algorithms in Chapter 10 are useful
in specialized applications.
o The RDA is the most widely used algorithm for high resolution processing of satellite SAR data. It is
conceptually the simplest, and can accommodate range varying parameters in the processing.
For airborne systems, analyses have to be performed to see which version of SRC can be applied. Most likely, the
approximate form of SRC is adequate for X-band, but not for L-band. For the latter, SRC should be
implemented in the azimuth frequency domain.
For satellite systems, approximate SRC is adequate for X-band and C-band systems, and perhaps for L-band,
too. However, with the increasing resolution of sensors, analysis should be performed for each case, to see which
form of SRC should be applied.
o The CSA is a useful alternative to the RDA. It requires no interpolation for the RCMC, which slightly
improves the image quality. In this algorithm, SRC is azimuth frequency dependent.
o For SARs with a wide beamwidth or moderate to high squint, the WKA is an excellent choice, since its
only limitation is that Vr is assumed to be range invariant. The approximate form of the algorithm can be
used for smaller beamwidths or squints.
This partial-swath ENVISAT Advanced Synthetic Aperture Radar (ASAR) image shows the Strait of Messina,
separating the east coast of Sicily and the toe of mainland Italy. The city of Messina, where the Italian ferries
land in its protected harbor, is on the west side of the Strait. The radar polarization is VV, which portrays the
water features with a higher intensity. Evidence of internal waves is seen near the south end of the Strait of
Messina.
The data were collected on descending orbit # 5787 on April 9, 2003, using Image Swath 2. The scene center
is at 38.1° N, 15.6° E. The Strait of Messina is 3 to 8 km wide, and connects the Tyrrhenian Sea (at the top) with
the Ionian Sea (at the bottom). The image is approximately 32 km wide, and the original image has been
averaged by a factor of two in each direction for display purposes.
Figure 11.1: ENVISAT/ASAR image of the Strait of Messina. (Copyright
European Space Agency, 2003.)
Part III
Doppler Parameter
Estimation
Chapter 12
Doppler Centroid Estimation
12.1 Introduction
An essential part of SAR processing is the estimation of the Doppler parameters of the received data: the
Doppler centroid frequency and the azimuth FM rate [1, 2]. The estimation of the Doppler centroid is covered in
this chapter, while the estimation of the azimuth FM rate is described in Chapter 13.
Because the azimuth (i.e., Doppler) signal is observed in a sampled fashion, it is useful to consider the Doppler
frequency as having two components. The sampling rate is the PRF, which limits the highest observable Doppler
frequency. In the received signal, only frequencies between -0.5 PRF and +0.5 PRF can be observed.
The component corresponding to these frequencies is referred to as the baseband or fractional PRF part of the
Doppler frequency.
Frequencies outside this range are not directly observable, but are important for SAR processing. For
convenience, they are quantized to an integer multiple of the PRF, and are referred to as the Doppler ambiguity.
For processing, the average or center Doppler frequency must be known, which is referred to as the Doppler
centroid frequency, or simply the Doppler centroid.
More details of the Doppler centroid are given in Section 5.4, where it is shown that the centroid can be
expressed as
f_ηc = f'_ηc + Mamb Fa    (12.1)
where f'_ηc is the fractional PRF part, Mamb is the ambiguity number, and Fa is the PRF. The fractional PRF
part sets the azimuth matched filter center frequency and throwaway region. The total or absolute centroid, f_ηc,
is used in range cell migration correction and in secondary range compression.
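The decomposition (12.1) amounts to rounding the absolute centroid to the nearest multiple of the PRF. A minimal sketch, using as sample values the 14,300 Hz centroid and 1300 Hz PRF quoted later in this chapter:

```python
def split_centroid(f_abs, prf):
    """Split an absolute Doppler centroid into its fractional PRF part
    and its ambiguity number, per (12.1): f_abs = f_frac + m_amb * prf."""
    m_amb = round(f_abs / prf)      # Doppler ambiguity (integer)
    f_frac = f_abs - m_amb * prf    # baseband part, within +/- 0.5 PRF
    return f_frac, m_amb

f_frac, m_amb = split_centroid(14300.0, 1300.0)
print(f_frac, m_amb)  # 0.0 11
```

The inverse operation, f_abs = f_frac + m_amb * prf, recovers the absolute centroid once the ambiguity has been estimated.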
Despite many advances in SAR processing and data handling, many production SAR processing systems for
satellite SAR data tend to suffer from unreliable Doppler centroid estimates in a number of the scenes processed.
Poor estimates affect registration and focusing, and raise the noise and ambiguity levels in the processed image,
sometimes to the point of seriously affecting image quality [3].
The Doppler centroid is difficult to estimate accurately for two reasons. First, the satellite system does not
have sufficiently accurate attitude measurements or beam pointing knowledge to calculate the centroid from
geometry alone. Second, the Doppler estimation result has a considerable dependence on the scene content.
When the estimate of either component of the Doppler centroid is made from the received data, the
algorithm can fall into one of two categories - a magnitude-based approach or a phase-based approach. The basic
algorithms used in each of these approaches are discussed in this chapter. There are other more complicated
approaches that use the methods of this chapter as basic building blocks. One of the most comprehensive
approaches is represented by the work of Dragosevic, in which geometry models, along-track filters, and data
estimators are used to track Doppler centroid changes over time [4, 5].
The system factors that affect the Doppler frequency can be examined through a model of the geometry of the
satellite orbit, the radar beam pointing direction, and the intersection of the beam with the rotating Earth's
surface.
Satellite/Earth Geometry
The geometry model begins with a description of the satellite orbit, as sketched in Figure 12.1. The satellite
position and velocity (the satellite ephemeris) are described by state vectors [6]. These quantities are estimated by
satellite tracking and control stations, and are included as engineering data in the SAR signal records. State
vectors are usually given for a set of coarse time intervals, for example, 30-second intervals. The satellite position
and velocity can be calculated for arbitrary intermediate times using interpolation techniques.
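As one possible interpolation scheme, a cubic Hermite fit uses both the positions and velocities at two epochs, so the interpolated trajectory passes through the given state vectors with the given velocities. This is only a sketch; operational processors may use higher-order or orbit-propagator-based interpolation.

```python
import numpy as np

def hermite_state(t, t0, p0, v0, t1, p1, v1):
    """Interpolate position and velocity at time t from state vectors
    (p0, v0) at t0 and (p1, v1) at t1, using a cubic Hermite polynomial."""
    dt = t1 - t0
    s = (t - t0) / dt                      # normalized time in [0, 1]
    # Hermite basis functions and their derivatives
    h = (2*s**3 - 3*s**2 + 1, s**3 - 2*s**2 + s,
         -2*s**3 + 3*s**2, s**3 - s**2)
    dh = (6*s**2 - 6*s, 3*s**2 - 4*s + 1,
          -6*s**2 + 6*s, 3*s**2 - 2*s)
    p = h[0]*p0 + h[1]*dt*v0 + h[2]*p1 + h[3]*dt*v1
    v = (dh[0]*p0 + dh[1]*dt*v0 + dh[2]*p1 + dh[3]*dt*v1) / dt
    return p, v

# two state vectors 30 seconds apart (position in km, velocity in km/s)
p0 = np.array([7178.0, 0.0, 0.0]);  v0 = np.array([0.0, 7.5, 0.0])
p1 = p0 + 30.0 * v0;                v1 = v0.copy()
p, v = hermite_state(15.0, 0.0, p0, v0, 30.0, p1, v1)
```

For the constant-velocity test case above, the interpolator reproduces the straight-line motion exactly at the midpoint.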
The RADARSAT-1 satellite provides a convenient example for illustrating Doppler calculations. For
illustration, a locally circular orbit is assumed [7]. It has an average height above the surface of approximately
800 km, and the orbit plane has an inclination of 98.6° with respect to the equator.
Figure 12.1: Sketch of the orbit of a satellite and the radar beam.
After specifying the orbit, the beam pointing direction must also be given. This requires knowledge of the
satellite attitude, the local vertical, and the beam off-nadir angle. With a given pointing direction, the intersection
of the beam center with the Earth's surface can be calculated for a variety of off-nadir angles corresponding to
the slant ranges of interest.
The Doppler frequency can then be found as a function of range. The Doppler frequency mainly depends on
the yaw and pitch of the satellite (the roll angle has little effect), and attitude rates and accelerations can be
used to define how the Doppler centroid changes with time.
A sketch of the geometry of the radar beam and how it intersects the Earth's surface is shown in Figure
12.2. The radar position is at Point P1 at the time when the target is illuminated by the center of the radar
beam. Another significant sensor location is Point P2, which is the position of the radar when the zero Doppler
plane crosses the target. As the beam is pointed forward in this example, Point P2 is reached after Point P1. The
figure also illustrates the slant range vector before processing, R, and the effective slant range after processing to
zero Doppler, Ro.
A top view of the radar footprint is shown in Figure 12.3, illustrating the effect of satellite pitch and yaw. The
lower footprint illustrates the case where the pitch and yaw are zero. This zero beam attitude would give a zero
Doppler centroid frequency, before the effect of Earth's rotation is taken into account. The size of the footprint
depends on a number of system parameters, such as radar wavelength, antenna dimensions, and range to the
target. In C-band satellites, a typical footprint size is 120 by 5 km.
Figure 12.3 also illustrates what happens to the beam footprint when the pitch and yaw are nonzero. A
positive pitch corresponds to the leading edge of the satellite being tilted upwards. Such a tilt moves the beam
forward, more or less parallel to its zero pitch position, which increases the Doppler frequency (the radar system
is assumed to be moving upwards in this drawing). The increase of Doppler is smaller at far range than at near
range, because the azimuth FM rate decreases with range.
The displacement of the footprint is also illustrated when the yaw is increased. Yaw is a rotation about the
satellite nadir line, and positive yaw moves the beam forward in our notation. 1 With this rotation, the effect of
the yaw increases with range (see Figure 12.13). In the simpler geometry of an airborne SAR, an antenna yaw
creates a Doppler that is approximately independent of range. Details of how these models are used in Doppler
centroid calculations are given in Section 12.3 and Appendix 12A.
Figure 12.2: Geometry of the radar beam, showing the slant range vector
(before processing) and the plane of zero Doppler.
Figure 12.3: Top view of the radar beam footprint on the Earth's surface,
showing the effects of (1) zero attitude, (2) positive pitch, and (3) positive
yaw.
This chapter covers the estimation of the Doppler centroid frequency needed for SAR processing. The estimate
can be made from geometry models and their related measurements, and from measurements on the received
data itself. In the latter case, the baseband component and the ambiguity are estimated using different algorithms.
A road map of Chapter 12 is given in Figure 12.4. First, a summary of the Doppler centroid accuracy
requirements is given in Section 12.2. The calculation of the Doppler centroid from the satellite orbit model and
attitude measurements is outlined in Section 12.3, with some of the mathematical details given in Appendix 12A.
An example of Doppler centroid variations around the orbit is given for RADARSAT parameters.
The estimation of the Doppler centroid from the received data is presented in the next two sections. In
Section 12.4, the estimation of the baseband centroid from the received data is outlined. There are two methods:
the original method, based on the shape of the azimuth spectrum (Section 12.4.1), and a method based on the
phase properties of the received signal (Section 12.4.2).
1
The senses of yaw and pitch are selected to conform to the right-hand coordinate system defined in Appendix 12A. A positive
yaw or pitch represents a clockwise rotation about the corresponding axis, when viewed along the direction of the axis.
Figure 12.4: Road map of Chapter 12, showing the introduction (Section 12.1),
the accuracy requirements (12.2), the geometry models (12.3, with the mathe-
matical development in Appendix 12A), the estimation of the baseband compo-
nent (12.4: the spectral fit and phase increment methods), the estimation of the
Doppler ambiguity (12.5: magnitude-based and phase-based methods, including
wavelength diversity, MLCC, and MLBF), the global fit to the Doppler surface
(12.6: spatial diversity and quality measures; fitting with polynomials and ge-
ometry models), and the summary (12.8).
In Section 12.5, the estimation of the Doppler ambiguity is covered, with one method based on the range
displacement between azimuth looks (a magnitude approach in Section 12.5.1), and three methods based on the
phase of the received signal (Section 12.5.2). A unique estimation bias property of two of the phase-based
methods is explained in Appendix 12B. A final method is based on observing the change of image parameters
when the PRFs are changed, which is mainly applicable to ScanSAR data (Section 12.5.6).
Finally, a global estimation procedure is outlined in Section 12.6 that is suitable for SAR processors operating
one frame at a time. A spatial diversity approach combined with quality checks is used to get sound estimates
over a wide area of the scene. The global fit can be achieved using polynomial surfaces, or using a geometry
model in which pitch and yaw values are found that fit the observed Doppler values (Section 12.7). A short
summary in Section 12.8 completes the chapter.
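The polynomial form of the global fit can be sketched with a linear least-squares solve. The monomial basis, degree, and synthetic test surface below are illustrative assumptions; a production global fit would add the quality measures and outlier screening described in Section 12.6.

```python
import numpy as np

def design_matrix(rg, az, deg=2):
    # one column per monomial rg^i * az^j with i + j <= deg
    return np.column_stack([rg**i * az**j
                            for i in range(deg + 1)
                            for j in range(deg + 1 - i)])

def fit_doppler_surface(rg, az, fdc, deg=2):
    """Least-squares polynomial surface fit to spatially diverse
    Doppler centroid estimates (a sketch only)."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(rg, az, deg), fdc, rcond=None)
    return coeffs

# synthetic estimates on a grid, following a known quadratic surface
rg, az = np.meshgrid(np.linspace(0, 500, 11), np.linspace(0, 800, 13))
rg, az = rg.ravel(), az.ravel()
truth = 300.0 + 0.5*rg - 0.2*az + 1e-4*rg*az
coeffs = fit_doppler_surface(rg, az, truth)

# evaluate the fitted surface at one (range, azimuth) position
pred = design_matrix(np.array([100.0]), np.array([200.0])) @ coeffs
```

Because the synthetic surface lies inside the polynomial model, the fit recovers it essentially exactly; with real estimates, the residuals provide one of the quality measures used to reject outliers.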
The baseband centroid and the Doppler ambiguity are estimated from different algorithms, and their accuracy
requirements differ.
Azimuth Ambiguities
Figure 12.5 illustrates the origin of azimuth ambiguities and the effects when the data are processed with an
incorrect Doppler centroid. Figure 12.5(a) shows the magnitude of a target's echo as a function of azimuth time.
When only one target is present and the signal is observed in the time domain, the effect of aliasing is not
apparent. The sidelobes of the antenna can be seen, which lead to the ambiguities. An unweighted sinc-squared
azimuth beam pattern is assumed in this example, so the first sidelobes have a magnitude of -26 dB with respect
to the beam center. In some satellites, such as RADARSAT-1, the azimuth aperture is weighted, and the sidelobe
magnitudes are lower.
Ambiguities are caused by the aliasing of the Doppler spectrum, as explained in Section 5.4.1. The "main"
spectral energy is found around the time that the beam center crosses the target - after matched filtering, it
creates the main or nonambiguous response of the target. The extent of this main energy is indicated by the
arrow labeled "A" in Figure 12.5(a).
In addition to this main energy, the parts of the received signal that lead to four ambiguities are shown,
labeled "Aamb." Ambiguities beyond these are usually not significant. The signal ambiguities are spaced one "PRF
time" from each other, which is 920 samples in this example. There is a gap between each ambiguity, as the
arrows cover only the time region corresponding to the processed portion of the Doppler spectrum. The azimuth
oversampling ratio is 1.3, and the 6-dB two-way beamwidth covers 700 samples, typical values for C-band
satellites.
Figure 12.5: (a) Target magnitude versus azimuth time, showing the main en-
ergy region "A," the ambiguity regions "Aamb," and the displaced regions "B"
and "Bamb"; (b) Doppler frequency versus azimuth time; (c) compressed target
with the correct Doppler centroid, showing the main response and the ambigui-
ties; (d) compressed target with a Doppler centroid error.
Figure 12.5(b) shows the dependence of the Doppler frequency of the target on the data collection time.
Without loss of generality, zero Doppler antenna pointing is assumed. The graph indicates how the data are
aliased by the sampling process, as complex samples can only portray frequencies between ±0.5 cycles per sample.
The aliasing converts the four regions marked "Aamb" in the time domain of Figure 12.5(a) into the same band
as the region marked "A" in the frequency domain. Ambiguities arise because the azimuth matched filter focuses
each of the "A" and "Aamb" regions, as explained below.
In azimuth compression, a matched filter is convolved with the signal shown in Figure 12.5(a). When the
Doppler centroid is estimated correctly, the filter matches the signal within the region marked by "A," and this
part of the signal is compressed into a narrow sinc function, shown as the "main response" in Figure 12.5(c). The
length of the horizontal arrow in Figure 12.5(a) is the matched filter duration, which defines the processed
bandwidth.
The phase of the filter also correctly matches other parts of the signal, denoted by the "Aamb" regions in
Figure 12.5(a). The compression of this energy leads to focused sinc functions in Figure 12.5(c), which are
multiples of the PRF time away from the main response. In other words, they are misregistered by these
amounts, and are referred to as "ambiguities" of the main response. The focusing of the ambiguities is correct,
apart from a relatively minor error caused by incorrect RCMC of the ambiguities (recall Figure 5.15 and Section
6.3.4). The focusing is good, because the center frequency and azimuth FM rate of the aliased signal are the
same at these ambiguities as at the central signal region.
The magnitude of each ambiguous target is lower than the magnitude of the correct target. The magnitude
can be found by integrating the beam pattern of Figure 12.5(a) within the regions delineated by the arrows,
taking the matched filter weighting into account. In Figure 12.5(c), the strength of the largest ambiguity is -28
dB with respect to the main energy, when the azimuth oversampling ratio is 1.3.
Now assume that the baseband centroid estimate has an error of 0.30 of the PRF. In this case, the main part of
the signal is extracted by the matched filter at the times spanned by "B" in Figure 12.5(a), while the ambiguities
are extracted from the signal phase history at the times indicated by "Bamb." The compressed results with the
Doppler centroid error are shown in Figure 12.5(d). The registration is the same as in Figure 12.5(c), because the
filter compresses to zero Doppler. Compared with Figure 12.5(c), two effects are seen: (1) the main target power
decreases and (2) the ambiguity power increases. The strength of the largest ambiguity has risen to -12 dB with
respect to the main signal.
The signal-to-ambiguity ratio can be defined in two ways. In the first, the signal magnitude in the main
response can be divided by the magnitude of the largest of the ambiguities, as in the previous paragraph. This
parameter indicates how the ambiguity of a strong discrete target would appear in a dark area of the image (see
Figure 12.8). In the second, the power in the main response can be divided by the sum of the powers of all the
ambiguities. This parameter is a measure of how distributed clutter is spread throughout the scene by the
ambiguities.
The Doppler centroid error also affects the image SNR. As the noise power is unaffected by the Doppler
centroid, the reduction in SNR is given by the change in signal power in the main response in Figure 12.5(d),
compared with the value in Figure 12.5(c).
Accuracy Requirement
The signal-to-first-ambiguity ratio and the change in SNR are quantified in Figure 12.6 for a sinc-squared azimuth
beam pattern. The Doppler estimation error is varied between 0 and 0.5 of a PRF, using azimuth oversampling
ratios between 1.1 and 1.4. The effect on the signal-to-first-ambiguity ratio is shown in Figure 12.6(a) and the
change in SNR in Figure 12.6(b).
The specification of the allowed Doppler centroid estimation error can be deduced from these results. For
example, if the signal-to-first-ambiguity ratio must not degrade by more than 3 dB, the estimation error must be
less than 7.5% of the PRF. Similarly, if the SNR must not degrade by more than 1 dB, the estimation error
must be less than 19% of the PRF.
Another way of looking at the Doppler centroid accuracy specification is as follows. A typical specification
quoted for the Doppler centroid is that it should be accurate to ±5% of the PRF for regular beam processing.
In this case, with an oversampling of 1.3, the signal-to-ambiguity ratio is lowered by 1.4 dB, and the SNR
degradation is less than 0.1 dB.
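The trade-off in Figure 12.6 can be approximated by integrating an assumed beam pattern over the band selected by the matched filter and over its aliased replicas. The sinc-squared pattern and its width below are assumptions for illustration, so the absolute decibel values will not match the figure, but the degradation with increasing centroid error shows the same trend.

```python
import numpy as np

def sig_to_amb_db(osr, fc_err, n_amb=4, n_pts=4001):
    """Signal-to-first-ambiguity ratio (dB) for a matched filter placed at
    an erroneous centroid fc_err (in PRF units), with azimuth oversampling
    ratio osr and an assumed sinc-squared two-way beam power pattern."""
    bp = 1.0 / osr                   # processed bandwidth, PRF units
    width = bp / 1.2067              # sets the pattern's 6-dB width to bp
    f = np.linspace(-bp/2, bp/2, n_pts) + fc_err
    df = f[1] - f[0]
    pattern = lambda x: np.sinc(x / width) ** 2
    main = pattern(f).sum() * df     # energy extracted from region "A"
    ambs = [pattern(f + k).sum() * df    # energy from aliased "Aamb" regions
            for k in range(-n_amb, n_amb + 1) if k != 0]
    return 10 * np.log10(main / max(ambs))

r_good = sig_to_amb_db(1.3, 0.0)
r_bad = sig_to_amb_db(1.3, 0.3)   # ratio degrades with centroid error
```

With a 0.3 PRF centroid error, the matched filter band slides onto the pattern skirt while an aliased replica slides toward the mainlobe, so the ratio drops by many decibels, as in Figure 12.5(d).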
Figure 12.6: (a) Signal-to-first-ambiguity ratio and (b) change in SNR result-
ing from a Doppler centroid error. The horizontal axis is the fractional PRF
error, from 0 to 0.5.
Figure 12.8: Look-summed images processed with Doppler centroid errors of
(a) zero, (b) -0.10 PRF, (c) -0.20 PRF, and (d) -0.30 PRF. The axes are
range and azimuth (cells).
The size of the ambiguities can be roughly estimated from these images. The end of the railroad jetty is at
sample 355, and its ambiguous energy creates a ghost image at sample 132. The PRF time is (355 - 132) = 223
samples in the look-summed image, or 892 samples in the original image. The strength of the first ambiguity with
respect to the energy in the main image is -22, -15, and -8 dB for Doppler centroid errors of 0.10, 0.20, and
0.30 PRFs, respectively. Even with no Doppler centroid error [Figure 12.8(a)], the ambiguity is just noticeable in the
dark water. This is a result of the PRF used for this scene, and is not a fault of the processor.
These results are a little different from those shown in Figure 12.6, because RADARSAT-1 has aperture
weighting in its antenna. If the antenna is weighted in azimuth, the results of Figure 12.6 should be recalculated
using the actual beam pattern. The associated analysis can be used to design the antenna weighting to meet a
specific requirement on ambiguities.
12.2.2 Doppler Ambiguity Accuracy Requirements
The Doppler ambiguity is expressed by an integer, which is the absolute centroid divided by the PRF, rounded
to the nearest integer. Because it is an integer, the centroid error caused by an ambiguity error is an integer
multiple of the PRF. If the ambiguity error is nonzero, a focusing error occurs in both range and azimuth, and
a registration error occurs, mainly in azimuth. The focusing and registration effects are proportional to the
exposure time. An error of one ambiguity causes a relatively small focusing error for C-band and higher
frequency satellites, but is noticeable for L-band satellites. The azimuth registration error is equal to the distance
corresponding to one PRF time, for each unit of ambiguity error, and is appreciable even for the higher
frequency satellites.
Because the ambiguity number is relatively easy to estimate correctly, and has a large effect on azimuth
registration, it is generally accepted that there should be no error in this parameter.
This section shows how the Doppler centroid is calculated from a geometry model, given the satellite orbit, the
satellite attitude, and the pointing direction of the radar beam. Assuming beam symmetry in azimuth, the
centroid corresponds to the Doppler frequency of a target lying on the beam centerline, and is different at
different ranges.
The relative velocity between the sensor and a beam-center target on the Earth's surface must be computed
in order to find the centroid. The Doppler frequency is then given by
f_ηc = -2 Vrel / λ    (12.2)
where λ is the radar wavelength and Vrel is the difference between the satellite and target's velocities, after each
is projected upon the beam view vector (see Appendix 12A). In other words, the relative velocity is the rate of
change of the range to the target.
For an aircraft case, the Doppler centroid can be computed readily from
f_ηc = 2 Vr sin(θ_r,c) / λ
where Vr is the aircraft velocity and θ_r,c is the squint angle. However, for a satellite case, the computation is
more complicated because of the curvature and rotation of the Earth. Comparing (12.2) to (4.33) and (4.34),
the relative velocity between the sensor and the target is
Vrel = Vr^2 η_c / R(η_c)
that is, the rate of change of the hyperbolic range equation, evaluated at the beam center crossing time η_c.
Figure 12.9: Sketch of the satellite path around one-quarter of the orbit,
showing the beam vector, a potential target, and the target's velocity. The
annotated parameters are a satellite inclination of 98.6 deg, a beam off-nadir
angle of 32 deg, and an average satellite height of 800 km.
The relative velocity varies around the orbit, even if the orbit is circular, because the Earth's surface velocity
and the angle between the satellite and target's velocity vectors vary with latitude. Figure 12.9 shows a "side
view" of the satellite orbit, viewed along the negative x-axis of the Earth-Centered Inertial (ECI) coordinate
system described in Appendix 12A. The prime meridian corresponds to zero longitude - it is where the satellite
crosses the equator in this example, that is, it is at the ascending node.
One-quarter of the satellite orbit is shown in Figure 12.9, starting from the ascending node and heading
northwest. Targets lying on the beam centerline for a specific off-nadir angle are shown by small circles for six
positions along the orbit. The off-nadir angle is defined in Figure 4.3. The vectors from the satellite to the targets
are shown by the dashed lines, defining the target range and the beam angle. The target "velocity" in inertial
space is shown by the vectors to the right of each target.
The next step is to calculate the Doppler centroid frequency from the relative sensor/target velocity, for
arbitrary satellite positions around the orbit, and for arbitrary beam pointing angles. A plot of Doppler frequency
versus beam off-nadir angle is computed at a given satellite position.
Figure 12.10: Flow chart of the Doppler centroid calculation procedure, from
defining the satellite orbit and geometry parameters (Step 1) to drawing
graphs of the Doppler centroid versus slant range (Step 7).
A flow chart of the calculation procedure is shown in Figure 12.10, which uses the following steps:
Step 1 Define the satellite orbit and the other geometry parameters.
Step 2 Select an orbit time to calculate the Doppler centroid, and specify the satellite yaw and pitch at this
time.
Step 3 Define a set of beam off-nadir angles, which specify a set of target positions along the beam centerline.
Perform Steps 4 and 5 for each off-nadir angle.
Step 4 Using the beam pointing angle, find the target location by calculating the intersection of the beam
pointing vector with the Earth's surface.
Step 5 Calculate the range to the target, the relative satellite/target velocity, and the Doppler frequency of the
target.
Step 6 Using the set of beam off-nadir angles, fit a low-order polynomial to the Doppler frequency versus slant
range (this form is needed by many SAR processors).
Step 7 Draw graphs of the Doppler centroid versus slant range.
Figure 12.11: Flow chart of the Doppler frequency calculation of Steps 4 and 5:
(1) rotate the beam direction in the satellite frame by the given pitch and yaw;
(2) rotate the satellite position S, velocity V, and antenna view vector U to
ECOP coordinates; (3) rotate the satellite position S, velocity V, and antenna
view vector U to ECI coordinates, correcting for geodetic latitude; (4) solve for
the intersecting point P on the Earth's surface and the range R from S to P;
(5) find the velocity Q of point P and rotate it from ECR to ECI coordinates;
(6) find the relative velocity Vrel between S and P and the Doppler frequency
-2 Vrel / λ of point P.
A flow chart of the Doppler frequency calculation of Steps 4 and 5 is given in Figure 12.11, and the
mathematical details are given in Appendix 12A. The procedure consists of a sequence of coordinate rotations
and translations to get the radar beam's "view vector" into ECI coordinates. Then, a quadratic equation is solved
to find the point where the beam intersects the Earth's surface, defined by an ellipsoid. This defines the target
location. Then, the target's position and velocity are rotated into ECI coordinates. ECR coordinates can also be
used at this stage, by rotating the satellite's position and velocity into the target's frame of reference.
the satellite and target positions and velocities expressed in the same coordinate system, the velocities are
projected along the beam vector to find the relative velocity, and the calculation of Doppler frequency follows
from (12.2).
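The core of Steps 4 to 6 can be sketched in a few lines. For simplicity, the snippet works entirely in an Earth-fixed (ECR-like) frame, in which a surface-fixed target has zero velocity, so the explicit coordinate rotations of Figure 12.11 are bypassed; the ellipsoid semi-axes and the C-band wavelength are assumed values.

```python
import numpy as np

A_E, B_E = 6378137.0, 6356752.0   # assumed ellipsoid semi-axes (m)

def beam_range_and_doppler(sat_pos, sat_vel, look_dir, wavelength):
    """Intersect the beam view vector with the Earth ellipsoid, then form
    the Doppler frequency from the relative velocity, per (12.2).
    Vectors are in an Earth-fixed frame (m, m/s), so the target's own
    velocity is zero."""
    u = look_dir / np.linalg.norm(look_dir)
    # scale the axes so the ellipsoid becomes a unit sphere, then solve
    # the quadratic |(S + R u) / axes|^2 = 1 for the slant range R
    w = np.array([1.0/A_E, 1.0/A_E, 1.0/B_E])
    sp, up = sat_pos * w, u * w
    a, b, c = up @ up, 2.0 * (sp @ up), sp @ sp - 1.0
    disc = b*b - 4.0*a*c
    if disc < 0.0:
        raise ValueError("beam does not intersect the ellipsoid")
    R = (-b - np.sqrt(disc)) / (2.0 * a)   # nearer root: first crossing
    # relative velocity = rate of change of range = -u . (Vsat - Vtarget)
    v_rel = -(sat_vel @ u)
    return R, -2.0 * v_rel / wavelength    # slant range and Doppler (12.2)

sat = np.array([A_E + 800e3, 0.0, 0.0])   # 800-km altitude over the equator
vel = np.array([0.0, 7500.0, 0.0])        # along-track velocity (m/s)
nadir = np.array([-1.0, 0.0, 0.0])
R, fdc = beam_range_and_doppler(sat, vel, nadir, 0.0566)
print(round(R), round(fdc))  # 800000 0 -> zero Doppler for a nadir beam
```

Tilting the look vector forward (adding a component along the velocity) makes the range rate negative and the Doppler frequency positive, matching the sign convention of the positive-pitch discussion earlier in the chapter.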
With this geometry model, the Doppler centroid can be calculated for a variety of SAR imaging conditions or
for specific received data sets. For example, for a fixed beam off-nadir angle and satellite attitude, the Doppler
centroid around the whole orbit can be found. In this way, the range and azimuth dependence of the centroid
can be found, which is useful for the Doppler modeling described in Section 12.7.
Figure 12.12: Doppler centroid frequency around the orbit for three beam
off-nadir angles. There is approximately 1.3 kHz per ambiguity.
An example of the Doppler variation around a circular RADARSAT orbit is shown in Figure 12.12, for beam
off-nadir angles of 16°, 32°, and 52°, with the attitude set to zero.4 The Doppler centroid reaches a maximum
near the equator, where the relative velocity between the satellite and the Earth's surface is greatest. The
Doppler offset increases with range, so that the largest offset for this orbit at an off-nadir angle of 52° is 14,300
Hz, representing 11 PRFs when the PRF is 1300 Hz (assuming zero attitude). The Doppler centroid is zero when
the satellite is at its most northerly or southerly point in the orbit, if the attitude is zero (at this point in the
orbit, the satellite and target velocity vectors are parallel). The small asymmetry in the curves is due to the right
pointing of the antenna, and to the satellite attitude being referenced to the local vertical rather than to the
Earth's center.
As another example, the Doppler centroid can be plotted versus slant range or beam off-nadir angle for a
fixed satellite position, as illustrated in Figure 12.13. The plot shows the ascending pass when the satellite is just
over one-eighth of the way around the orbit and the target latitude is at approximately 49° N. This is
approximately the geometry when the Vancouver scene of Figure 12.7 was collected. The satellite altitude is 800
km, and the beam off-nadir angle is varied from 16° to 52°, corresponding to slant ranges from 836 to 1471 km.
Figure 12.13: Doppler frequency versus beam off-nadir angle for an ascending
pass at 48° N. The satellite yaw and pitch values are set to -1 °, 0°, or +1°.
The Doppler centroid shown as the solid line in Figure 12.13 is obtained when the satellite attitude is zero.
A polynomial can be fitted to the curve to obtain a simple expression of Doppler centroid versus slant range. A
cubic polynomial fits the geometry model to within 2 Hz over the range swath, and this is accurate enough for
SAR processing purposes.
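The cubic fit described above can be sketched as follows; the Doppler values are a made-up smooth trend, not the book's data, and fitting in a centered, rescaled variable is used to keep the polynomial fit well conditioned.

```python
import numpy as np

# Hypothetical Doppler centroid samples (Hz) from a geometry model across
# the swath; the trend is illustrative only.
slant_range = np.linspace(836e3, 1471e3, 9)      # m, as in the text
x = (slant_range - slant_range.mean()) / 1e3     # centered slant range (km)
f_dc = -6900.0 + 12.0 * x - 3.0e-3 * x**2        # made-up smooth trend (Hz)

# Fit a cubic, as suggested in the text, and check the worst-case residual
# against the 2-Hz figure quoted for the geometry model.
coeffs = np.polyfit(x, f_dc, deg=3)
max_err = np.max(np.abs(np.polyval(coeffs, x) - f_dc))
```

The fitted coefficients give a compact Doppler-versus-slant-range expression that a processor can evaluate per range cell.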
that its average spectrum, G(f_η), is flat. For brevity, only the azimuth dimension, η, is considered. The reflectivity
is complex and is assigned a random phase, because the slant range to the scattering centers of each pixel is
random at the wavelength level.
To model the received signal, the ground reflectivity is convolved with the azimuth impulse response of the
radar's data acquisition system, h_imp(η), in (4.43). Finally, complex Gaussian random noise, n(η), is added, with a
flat spectrum, N(f_η). Then, the spectrum of the received signal is the product of the spectrum of the ground
model, G(f_η), and the spectrum corresponding to the impulse response of the radar system, H_imp(f_η), plus the
noise spectrum.
The properties of the azimuth signal and its spectrum are illustrated in Figure 12.14, where the Doppler
centroid frequency, f_ηc, is set to 0.375 of the PRF, and the SNR is set to -6 dB. Figure 12.14(a) shows the real
part of the impulse response, h_imp(η), which is the ideal signal received from a point target (after demodulation).
The envelope of the impulse response is a sinc-squared function, as in (4.28). The 6-dB two-way beamwidth
corresponds to 250 samples, and the PRF time is 325 samples, giving an azimuth oversampling ratio of 1.3. The
duration of the signal is 1024 samples, corresponding to a total frequency extent of 3.15 PRFs, so that a
portion of the ambiguous part of the signal is included in the simulation. Note that the zero Doppler point in
the signal (Sample 635) is shifted to the right of the peak by an amount proportional to the Doppler centroid
frequency.
[Figure 12.14 panels: (a) real part of the impulse response; (b) system transfer function; (c) received signal (SNR = -6.0 dB); (d) spectrum of one line of the received signal; (e) spectrum averaged over 200 lines; (f) output of the power balancing filter. Horizontal axes: azimuth time (samples) and azimuth frequency (cells).]
Figure 12.14: Estimation of the position of the peak of the signal spectrum.
The system transfer function, |H_imp(f_η)|², is shown in Figure 12.14(b), with its peak at the position of the
Doppler centroid. By the POSP, the envelope is taken from the time function in Figure 12.14(a), but with the
ambiguous energy folded in. In the power-domain plot of Figure 12.14(b), the spectral shape can be
approximated by one cycle of a sine wave (the high frequency modulation in the figure is a sampling/DFT
artifact).
Figure 12.14(c) shows the magnitude of the received radar data for one range line. Its power spectrum,
given by |G(f_η) H_imp(f_η) + N(f_η)|², is shown in Figure 12.14(d). Since both G(f_η) and N(f_η) are white
noise-type spectra, the spectrum shape follows that of |W_a(f_η - f_ηc)|² of Figure 12.14(b) plus the flat noise
spectrum. However, the observed spectrum is very noisy and the sine wave shape is not evident, as only one line
is analyzed.
It remains to determine the peak of the signal spectrum in the presence of noise. Because the single-line
spectrum of Figure 12.14(d) is very noisy, the first step is to obtain the power spectrum as the average over a
number of range cells. An average of 200 range cells is shown in Figure 12.14(e). A raised sine wave, whose peak
is shifted by 0.375 of the sampling frequency, is overplotted in white. The experiments show that the "sine wave
on a pedestal" is a reasonable model of the noisy, ambiguous power spectrum.
Because the expected magnitude spectrum is symmetrical about its peak, a simple way of finding the
centroid of the noisy spectrum is by power balancing - finding the point along the frequency axis that divides
the power into two equal parts, taking the circular property of the spectrum into account. The power balancing
is performed using a circular convolution of the received, averaged power spectrum with the filter

    F_pb(f_η) = +1 for 0 ≤ f_η < F_a/2,  -1 for F_a/2 ≤ f_η < F_a   (12.5)

where F_a is the PRF. Each of the +1 and -1 sections of F_pb(f_η) spans one-half of the Doppler spectrum.
By convolving the averaged spectrum in Panel (e) with F_pb(f_η), a filtered result is obtained, as shown in
Figure 12.14(f). The frequency where the power is balanced is indicated by the zero crossing with the negative
slope. A vertical dashed line is plotted at the correct result in Figure 12.14(f) (at 0.375 of the PRF), showing
that the zero crossing has obtained the correct baseband Doppler centroid to within 1% of the PRF.
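The power balancing idea can be sketched on a synthetic "sine wave on a pedestal" spectrum. This is an illustrative Python sketch, not the book's implementation; the spectrum model, noise level, and cell counts are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                       # frequency cells across one PRF (assumed)
k_true = 96                   # true centroid cell: 0.375 * N

# "Sine wave on a pedestal" model of the averaged, ambiguous power
# spectrum, with a little residual estimation noise.
k = np.arange(N)
spectrum = 1.5 + np.cos(2 * np.pi * (k - k_true) / N)
spectrum += 0.02 * rng.standard_normal(N)

# Power balancing: at each cell, compare the total power in the half
# spectrum on either side (circularly).  The centroid is where the two
# halves balance, i.e., the zero crossing with negative slope.
half = N // 2
m = np.arange(1, half + 1)
balance = np.array([
    spectrum[(i + m) % N].sum() - spectrum[(i - m) % N].sum()
    for i in range(N)
])
crossings = np.where((balance >= 0) & (np.roll(balance, -1) < 0))[0]
k_hat = crossings[0]          # estimated centroid cell
```

The `balance` array plays the role of the circular convolution with the +1/-1 filter, and the negative-slope zero crossing lands at the peak of the symmetric spectrum.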
The estimator, F_pb(f_η), acts as a lowpass filter, as is evident from the fact that the result in Figure 12.14(f)
is very smooth compared to the signal spectrum. A somewhat better approach is to use the derivative of
|W_a(f_η)|² as the estimating filter, which can be approximated by a cosine wave (see examples in the next few
paragraphs). As before, the centroid estimate is found from the zero crossing. The results obtained are quite
similar to the results using the filter of (12.5), which is the derivative of a triangular function [10]. When a sine
wave model is used, the estimation of the spectral peak can be found simply as the phase angle of the
fundamental harmonic of the magnitude spectrum (i.e., the phase angle of the DFT coefficient corresponding to
one cycle per record) [11].
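The fundamental-harmonic variant is very compact. The sketch below (illustrative, with an assumed noise-free spectrum model) recovers the peak position of a raised sine wave from the phase of the DFT coefficient at one cycle per record.

```python
import numpy as np

N = 512
k_true = 192                  # peak cell: 0.375 * N

# Pure "sine wave on a pedestal" spectrum model.
k = np.arange(N)
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * (k - k_true) / N)

# For this model, the DFT coefficient at one cycle per record is
# proportional to exp(-j 2*pi*k_true/N), so its phase encodes the peak.
c1 = np.fft.fft(spectrum)[1]
peak_fraction = (-np.angle(c1) / (2 * np.pi)) % 1.0   # fraction of the PRF
```

Here `peak_fraction` comes out at 0.375, the assumed peak position, without any explicit search over frequency.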
The magnitude-based estimator was run on each 5-km block of the Vancouver scene described in Section 12.2.1,
using the sine wave fit discussed in the preceding paragraphs. The range-compressed image of the scene is shown
in Figure 12.15. Four estimation results are shown in Figure 12.16, choosing representative blocks that illustrate
various estimator properties. In each panel, a dashed line is drawn to indicate the fit of the sine wave model.
[Figure 12.15: range-compressed image of the Vancouver scene, divided into estimation blocks; slant range in blocks of 655 cells, 4717 m per division.]
[Figure 12.16: averaged azimuth power spectra of four representative blocks, with the fitted sine wave model overlaid; for example, the spectrum of the block at Row 8, Column 11 gives an estimate of DC = 338 Hz with an error of 0 Hz. Horizontal axes: frequency (Hz).]
The two left-side panels of Figure 12.16 illustrate estimation results from areas of the scene in the water.
Data from these areas have a very low SNR, in the order of -12 dB. In the data of Figure 12.16(a), there is some
land on the leading edge of the block where the Doppler frequency is above average, which has biased the
estimate high by over 300 Hz. In Figure 12.16(c), there are no visible features in the water, so the spectrum is
symmetrical. However, the low SNR gives a rather high standard deviation to the estimate, which is about 20 Hz.
Also, the receiver attenuation is high in the Figure 12.16(a) case, which means that the water area is dominated
by noise from the 4-bit ADC. In this case, the strong, flat spectrum of the ADC noise hides the spectral shape of
the radar returns from the water. The attenuation is lower in the case of Figure 12.16(c), so the spectral shape of
the radar signal scattering off the water is evident. In water areas, range-traveling waves and currents can bias
the Doppler estimate, although the effect is probably very small in the present example.
Figure 12.16(b, d) illustrates the spectrum of land areas. The SNR is quite high, in the order of +8 dB (the
vertical scale of these plots is compressed compared with the water areas). There is a very bright discrete
scatterer (the large Port Mann Bridge) near the leading edge of the block in Figure 12.16(b), which has biased the
estimate high by about 80 Hz [the bright scatterer can be seen at coordinates (8, 9.5) in Figure 12.15 - only half of
the bridge return is in the current estimation block]. The strong reflection from the partially exposed target has
distorted the spectral shape and the sine wave model does not fit well. The data in Figure 12.16(d) include an
area of reasonably low contrast, with suburbs, farmland, and trees. Because the spectrum is quite symmetrical
and the SNR is high, the estimate is more accurate, with an error close to zero.
The spectral fit method has been proven to work well in practice, and is used in many processors. The
problem with partially exposed strong targets can be alleviated by averaging over a larger area, especially in
azimuth. However, the averaging area cannot be made too large, or the range and azimuth dependence of the
Doppler centroid will be masked. An alternative that avoids this problem is the spatially selective, model fitting
approach discussed in Section 12.6. With these techniques, the Doppler estimation error can be kept below 0.5%
of the PRF.
Many of the biased estimates are a result of partial exposures of bright targets when the data have not
been azimuth compressed. Estimating the Doppler centroid after azimuth compression is another approach that
removes the effect of partial exposures, as the energy of each target (and the spectral shape) is concentrated in a
few pixels [12]. However, there is the problem (in estimating the centroid at this stage) that the azimuth spectral
shape is affected by the azimuth matched filter weighting, which can mask the shape of the spectra of the
received data. Even if the matched filter weighting is removed, there is still the effect of ambiguities, which
change the spectral shape when a different Doppler centroid value is used in the processing. This means that one
has to iterate to home in on the centroid estimate if azimuth compressed data is used, which may not be worth
the extra work.
Madsen introduced a new Doppler centroid estimation algorithm in 1989 [13]. In this method, the phase of the
complex radar data is utilized to estimate the fractional part of the Doppler centroid. It may be called the
"phase increment" method, as the differences of the signal phase from sample to sample are used by the
estimator.
In this section, a point target near zero Doppler is used to illustrate the principle of the method. From
(4.39), the azimuth part of the received signal is
    s(η) = w_a(η - η_c) exp{ -j (4π f₀ / c) R(η) }   (12.6)

The signal is approximately linear FM, with an average FM rate proportional to the second derivative of R(η) in
the middle of the exposure, as in (4.38). The magnitude of the signal reaches a maximum at its midpoint, η =
η_c, the time that the target crosses the beam centerline. The envelope, w_a(η - η_c), is assumed to be symmetrical
about this time.
As "frequency" is proportional to the "differential of phase," the Doppler frequency of the signal can be
estimated by measuring the change of signal phase between one azimuth sample and the next. This measurement
is illustrated in Figure 12.17, where a point target with a Doppler centroid of 0.375 of the PRF is simulated.
The magnitude and the phase of the signal are shown in the top two panels, where two PRF times of the signal
are simulated. As usual, the phase is wrapped into the interval (-π, +π].
[Figure 12.17: simulated point target with a Doppler centroid of 0.375 of the PRF. (a) Magnitude of the point target; (b) wrapped phase of the signal; (c) phase difference per sample, wrapped (dashed) and unwrapped (solid). Horizontal axis: time, 0 to 500 samples.]
The phase differences per sample are shown by the dashed line in the bottom panel of Figure 12.17. The
phase differences are a linear function of time, because the quadratic form of the signal phase is used. When the
phase differences are taken, the wrapping persists, and the phase increments are all in the interval (-π, +π]. The
phase difference at the peak of the target exposure can be found from the plot, which is Δφ = 2.36 = 0.75π
radians in this example. This corresponds to a baseband Doppler frequency of

    f'_ηc = (Δφ / 2π) F_a = 0.375 F_a   (12.7)
Going beyond this simple example, there are a number of problems that must be faced in practice. First, in
real data, there are many targets present, as well as noise, so the phase difference curve is quite random.
Therefore, a procedure must be implemented to find the average phase increment over all targets. Averaging works
because each increment is weighted by the signal strength, which comes from the beam pattern that is
symmetrical about the beam center. Therefore, the expected value of the phase increment equals the phase
increment at the center of the target's exposure.
Second, the averaging cannot be done on the phase increment directly, since the wrapping of the phase upsets
the average. This can be seen from the lower panel of Figure 12.17, where both the wrapped (dashed line) and
unwrapped (solid line) phases are shown. The average of the unwrapped phase gives the correct answer, but the
average of the wrapped phase does not. However, when many targets are present with noise, phase unwrapping
may not work, as false 2π jumps may occur. The way to avoid this problem is to find the angle of the average
complex signal increment rather than the average of the angles of the signal increments. While the interchanging
of the angle and the average is not equivalent because of the nonlinearity of the arctan function, the expected
values are the same, because of the symmetry of the arctan and the beam pattern.
If s(η) is a sample of the radar signal, and s(η + Δη) is the next sample in the azimuth direction, the conjugate
product

    s*(η) s(η + Δη)   (12.8)

is a vector whose angle is the phase difference between the two samples (Δη is the azimuth sampling interval of
1/PRF). These "signal difference" vectors can be averaged or summed as complex numbers without incurring
phase unwrapping problems. Then the "average phase increment" is the angle of the sum

    C = Σ_η s*(η) s(η + Δη)   (12.9)

where s* denotes the complex conjugate of s. Note that the sum can be taken instead of the average, since only
the angle is of interest. The estimate C is called the average cross correlation coefficient (ACCC) at lag one,
because the sum (12.9) is the component of the autocorrelation function of the signal s(η) when the shift is one
sample.
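The lag-one sum can be sketched on a simulated azimuth signal. The following Python fragment is illustrative only: the PRF, FM rate, envelope width, and noise level are assumed values chosen to resemble the earlier simulation, not the book's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
Fa = 1300.0                   # PRF (Hz), assumed
N = 1024
eta = np.arange(N) / Fa       # azimuth time (s)
f_dc = 0.375 * Fa             # true baseband Doppler centroid (487.5 Hz)
Ka = -1300.0                  # azimuth FM rate (Hz/s), assumed

# Linear FM signal with a sinc-squared envelope centered mid-exposure,
# plus complex Gaussian noise.
t = eta - eta.mean()
envelope = np.sinc(t * Fa / 250.0) ** 2
phase = 2 * np.pi * (f_dc * eta + 0.5 * Ka * t**2)
noise = 0.3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
s = envelope * np.exp(1j * phase) + noise

# ACCC: sum the lag-one conjugate products as in (12.9); the angle of the
# sum, scaled by Fa / (2*pi), is the baseband centroid estimate.
C = np.sum(np.conj(s[:-1]) * s[1:])
f_dc_hat = np.angle(C) / (2 * np.pi) * Fa
```

Because the envelope weights each increment by the signal strength, the angle of the summed vector lands near the phase increment at the center of the exposure, giving an estimate close to 0.375 of the PRF.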
The operation of this estimator is illustrated in Figure 12.18, where each fine line represents one of the
signal increment vectors on the right hand side of (12.9). The data in Figure 12.18(a) are taken from the point
target example
Figure 12.18: Polar plot of azimuth increment vectors and their average.
of Figure 12.17 (for clarity, only every fourth vector is plotted). The length of each vector is taken from the
magnitude profile in Figure 12.17(a). Because of the weighting imposed by the varying vector lengths, the sum of
the vectors yields the vector with the direction shown by the thicker line in the figure. The angle of this vector is
3π/4, or 3/8 of 2π, indicating a baseband Doppler frequency of 0.375 PRF, the correct answer.
In Figure 12.18(b), the effect of noise is illustrated by using data from the simulation of Figure 12.14. The
signal increment vectors (the fine lines) are taken from 1024 samples in one range cell and appear to be very
random. However, when the signal increment vectors are averaged over 200 range cells, the average vector (the
thick line) gives a good estimate of the centroid. In this example, the estimate is 0.373 PRFs, very close to the
correct answer.
The solution for the baseband Doppler frequency is now formalized [10]. The expected value of the angle of the
ACCC,

    φ_accc = angle{ C }   (12.10)

equals the phase increment at the center of the exposure, η = η_c, when the spectral symmetry conditions apply.
For a sampling interval of Δη, the average phase increment is then

    φ_accc = -(4π f₀ / c) (dR(η)/dη)|η=η_c Δη   (12.11)
           = -(2π / F_a) K_a,dop η_c   (12.12)

where the differential is computed from the phase expression in (12.6), and K_a,dop is the average azimuth FM
rate [refer to (5.45) and Appendix 5B]. Then, as the Doppler frequency is related to η_c by f_ηc = -K_a,dop η_c
[recall (5.44)], the Doppler frequency is

    f'_ηc = (φ_accc / 2π) F_a   (12.13)

Because φ_accc is wrapped within the interval (-π, +π], the Doppler centroid estimate (12.13) is wrapped
within (-F_a/2, +F_a/2]. This is the fractional PRF or baseband part of the centroid. It is denoted by f'_ηc
instead of f_ηc, which is the absolute Doppler centroid.
Similar to the magnitude-based method discussed in Section 12.4.1, strong, partially exposed targets in the
received data tend to bias the centroid estimate. To lessen this effect, Madsen proposes the use of the signs of
the real and imaginary parts in the cross correlation. Let

    y = +1 if x > 0,  -1 otherwise   (12.14)

where x is either the real or imaginary part. Then, the cross correlation uses the y values, which are either -1s
or +1s. This simplifies the implementation, but some phase noise is introduced.
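The sign-quantized variant can be sketched as follows. This is an illustrative simulation, not Madsen's implementation: the PRF, centroid, noise level, and number of range lines are all assumed values, and averaging over lines with random start phases stands in for averaging over clutter.

```python
import numpy as np

rng = np.random.default_rng(3)
Fa, N, lines = 1300.0, 512, 64     # PRF (Hz), samples, range lines (assumed)
f_dc = 0.125 * Fa                  # true baseband centroid (162.5 Hz)

def sign_quantize(s):
    # The sign operator (12.14), applied to real and imaginary parts.
    return np.where(s.real > 0, 1.0, -1.0) + 1j * np.where(s.imag > 0, 1.0, -1.0)

# Accumulate lag-one products of the sign-quantized signal over many lines.
C = 0j
for _ in range(lines):
    phase = 2 * np.pi * f_dc * np.arange(N) / Fa + rng.uniform(0, 2 * np.pi)
    s = np.exp(1j * phase) + 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    q = sign_quantize(s)
    C += np.sum(np.conj(q[:-1]) * q[1:])

f_dc_hat = np.angle(C) / (2 * np.pi) * Fa
```

Even though each sample is reduced to one of four quadrant values, the averaged increment angle still recovers the centroid, at the cost of the extra phase noise noted in the text.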
As a final note, the magnitude-based or phase-based estimators discussed above can be applied to raw signal
data or to range-compressed data. In the raw data case, the biasing effect of strong targets is reduced, but the
range dependence of the centroid is more diffuse. In practice, either option works well.
The ACCC phase increment method was applied to the same 5-km blocks (1024 by 566 samples) of the
Vancouver scene, as in Section 12.4.1. The phase-increment estimates agreed with the spectral fit estimates to
within 0.3 Hz, with the largest deviations occurring in the water areas. The signal increment vectors are shown in
Figure 12.19 for the same four blocks as in Figure 12.16. Refer to the description of the data given earlier for
comments on the results -the point here is that the phase increment results are essentially the same as the
spectral fit results.
[Figure 12.19: polar plots of the signal increment vectors and their averages for the same four blocks as Figure 12.16. Example estimates: DC = 951 Hz (error = 330 Hz), DC = 446 Hz (error = 82 Hz), and DC = 338 Hz (error = 0 Hz). Axes: real part versus imaginary part.]
There is a theoretical reason why the sine wave fit to the power spectrum gives the same answer as the
correlation at lag +1. It is because the power spectrum is the Fourier transform of the autocorrelation function
[14]. Therefore, if the power spectrum is a sine wave on a pedestal, and an IFFT is performed, three terms are
obtained - the zero-lag term, which equals the total power, then terms at lag -1 and +1. The term at lag +1 is
the ACCC estimate. Therefore, the two methods are essentially equivalent [10], as has been verified experimentally.
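The Fourier-transform relationship behind this equivalence can be checked numerically. The sketch below (illustrative, on arbitrary random data) shows that the lag-one term of the inverse DFT of the power spectrum equals the circular sum of lag-one conjugate products.

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.standard_normal(256) + 1j * rng.standard_normal(256)

# Wiener-Khinchin: the inverse DFT of the power spectrum is the circular
# autocorrelation, so its lag-one term equals the circular sum of lag-one
# conjugate products (the ACCC numerator).
acf = np.fft.ifft(np.abs(np.fft.fft(s)) ** 2)
accc = np.sum(np.conj(s) * np.roll(s, -1))
```

The two quantities agree to machine precision, which is the discrete form of the equivalence argued in the text (up to the circular end effect, which is negligible for long records).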
Similar to the estimation of the fractional PRF part of the Doppler centroid, the attitude measurements may not
be accurate enough to estimate the Doppler ambiguity, and an improved measurement can be obtained from the
received data. This section gives examples of the magnitude-based and phase-based Doppler ambiguity resolvers
(DARs).
Figure 6.6(c) shows that after proper RCMC, the trajectory of a point target is aligned with the azimuth
frequency axis in the range Doppler domain. With an error ΔM_amb in the Doppler ambiguity, a residual skew
exists, as shown in Figure 6.6(b). This skew introduces two adverse effects. First, there is range broadening in the
impulse response, since azimuth compression integrates the skewed data in the azimuth direction. Second, there is
azimuth broadening caused by a reduction in the azimuth bandwidth in each range cell, although this may not
be noticed in multilook processing.
The function of the DAR is to estimate the correct ambiguity number, M_amb. The magnitude-based algorithm
uses the correlation between two azimuth looks to measure misregistration in the range direction [15]. The
principle is illustrated using a single target. With an incorrect ambiguity, the two looks are misregistered in range,
as shown in Figure 12.20. Each look is compressed to the range position corresponding to the center of the look
energy. The misregistration between the two looks, ΔR, is a measure of the error in M_amb.6
The misregistration caused by an ambiguity error can be determined as follows. The slope of the slant range
equation (5.47) in the range Doppler domain is

    dR_rd(f_η)/df_η = λ² R₀ f_η / [4 V_r² D³(f_η, V_r)] ≈ λ² R₀ f_η / (4 V_r²)   (12.15)

in units of range (meters) per azimuth frequency (hertz). The approximation in the last step is made by
assuming D³(f_ηc, V_r) ≈ 1. With an error of ΔM_amb, the residual slope after RCMC is

    λ² R₀ ΔM_amb F_a / (4 V_r²)   (12.17)
[Figure 12.20: target trajectory with residual RCM in the range Doppler (azimuth frequency) domain. Look 1 and Look 2 compress to range positions separated by the misregistration ΔR.]
Let the two azimuth look centers be at f_ηc ± Δf_a/2, where Δf_a is the frequency separation between the
looks. The look misregistration is the slope multiplied by the look separation, and can be expressed in slant range
as

    ΔR = λ² R₀ ΔM_amb F_a Δf_a / (4 V_r²)   (12.18)

Finally, when the misregistration, ΔR, is measured, the Doppler ambiguity error is obtained as

    ΔM_amb = round{ 4 V_r² ΔR / (λ² R₀ F_a Δf_a) }   (12.19)
In practice, the two looks are usually placed as follows. The Doppler spectrum corresponding to the processed
bandwidth is divided into two nonoverlapping halves, so that each look occupies one-half of the processed Doppler
spectrum. The look separation is then one-half of the azimuth bandwidth.
Although the above discussion is based upon a single target, the algorithm also works in practice in the
presence of multiple targets or generic clutter. In this case, the misregistration between the two looks can be
measured by correlating the magnitude images of each look in the range direction, averaging over many range
lines. The algorithm works best with high contrast scenes, but does work with low contrast scenes as long as
enough averaging is done.
Referring to (12.19), the algorithm does not require any iterations, as ΔM_amb can be determined from a
single measurement of ΔR. However, if the error is in the order of a few ambiguities, the impulse response of
each target will be broadened in range, thus reducing the accuracy in the range correlation. In this case,
iterations may be required, whereby the newly obtained ΔM_amb is used to perform the RCMC and the
correlation is repeated.
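The forward and inverse relations (12.18) and (12.19) can be sketched numerically. The parameter values below are assumed, C-band-like illustrations, not the values of Table 12.1.

```python
# Illustrative C-band parameters (assumed, not from Table 12.1).
wavelength = 0.0566   # m
R0 = 1.0e6            # slant range of closest approach (m)
Vr = 7100.0           # effective radar velocity (m/s)
Fa = 1300.0           # PRF (Hz)
dfa = Fa / 2.0        # look separation: half the processed bandwidth (Hz)

def look_misregistration(dM):
    # Forward model (12.18): range misregistration for ambiguity error dM.
    return wavelength**2 * R0 * dM * Fa * dfa / (4.0 * Vr**2)

dR_measured = look_misregistration(-6)   # pretend this came from correlation

# Invert with (12.19) to recover the ambiguity error.
dM_hat = round(4.0 * Vr**2 * dR_measured / (wavelength**2 * R0 * Fa * dfa))
```

With these assumed parameters, each ambiguity error shifts the looks by roughly 13 m in range, so a measured ΔR of several tens of meters rounds cleanly to the correct ΔM_amb.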
A RADARSAT Example
The operation of the magnitude-based DAR method is illustrated using the RADARSAT-1 scene of Vancouver,
shown in Figure 12.7. It is an 8 m resolution, FINE-mode scene, which has the most stringent focusing
requirements of the current C-band satellite SARs. The scene is taken from an ascending orbit, at a latitude of
49° N. There are a number of bright ships anchored in English Bay, west of the city center. As the ships are
stationary and the calm surrounding water gives a dark background, it is a convenient dataset to illustrate
manual estimation of the Doppler ambiguity.
Figure 12.21 is a range-compressed segment of the scene that contains the five ships (Row 10, Column 2 in
Figure 12.15). The strong discrete targets appear as long streaks, which are easily picked out by eye. As the
RCM skew is clearly visible in the data, the absolute Doppler centroid can be estimated directly by measuring
the skew angle, which is the linear RCM slope. A similar method is illustrated in [16].
From Figure 12.21, the slope of the bright targets is measured to be approximately 0.034 range samples per
azimuth sample. This slope is positive - that is, the range increases with azimuth time. This corresponds to a
backward squint of the antenna. Multiplying by c/(2Fr) to change range samples into meters, and multiplying by
the PRF to change azimuth samples into seconds, the slope is approximately 198 m/s, using the parameters in
Table 12.1.
From (5.58), the RCM slope is -V_r sin θ_r,c m/s. Therefore, the squint angle is approximately -1.6°, which is
consistent with the Earth rotation component for an ascending orbit at 49° N. Using this slope in (4.33), the
Doppler centroid is estimated to be approximately -7000 Hz (consistent with Figure 12.12 at 45° on the
horizontal axis). This value is close enough to the correct value (-6980 Hz) to obtain the correct Doppler
ambiguity of -6.
This method is useful for illustrating the principle, but cannot be easily automated, as it depends on
identifying strong point targets in the scene. Sometimes, straight line extraction techniques, such as the Hough or
Radon transforms, can be helpful. For automatic operation, the look correlation method or the phase-based
methods of the next section can be implemented.
Figure 12.22 shows the results of the azimuth look correlation algorithm. The azimuth looks are extracted
with a separation of 500 Hz. RCMC is implemented with ambiguity errors of -1, 0, and +1 PRFs. A
one-ambiguity error gives a misregistration in range of 2.1 range samples, shown as vertical dashed lines. The
correlation is measured over a block of 1536 range lines by 1536 range cells containing water, city, and farmland.
The figure shows clear correlation peaks for each ambiguity error, even though the peak becomes broader
with an ambiguity error. Therefore, the range displacement, ΔR, can be readily measured and the correct
ambiguity number obtained. Again, the answer is -6.
The principle of phase-based DAR methods is that the absolute Doppler centroid is a linear function of the
radar carrier frequency, f₀. In a chirped pulse, the range frequency varies between f₀ ± |K_r| T_r/2, where K_r is
the pulse FM rate, T_r is the pulse duration, and |K_r| T_r is the pulse bandwidth. This frequency variation is
only a few tens of megahertz for remote sensing radars.
If the ACCC angle is plotted versus range frequency, the average slope of the values can be used to find the
absolute Doppler centroid, even though it is aliased. This is done by finding the intercept of the sloped line with
the f_τ = 0 axis, then finding the value of the ambiguity number that makes the intercept closest to zero. The
slope in radians per hertz is generally not wrapped, as the pulse bandwidth is very small compared to the carrier
frequency, so the change in the ACCC angle over the range bandwidth is small. If a wrap does occur, it is easy
to recognize and unwrap, as long as the standard deviation of the ACCC angle is moderate.
There are three ways to determine the variation of the Doppler centroid versus range frequency:
WDA: This is the first phase-based DAR, developed at the German Aerospace Center, DLR [17]. It is called the
"wavelength diversity" algorithm (WDA), because it examines the Doppler properties of the signal as a
function of range frequency (i.e., radar wavelength). A range FFT transforms the data into the range
frequency domain, and the ACCC angle is calculated for each range frequency bin. The slope of the ACCC
versus frequency is measured, from which the absolute Doppler centroid is calculated.
MLCC: This is the counterpart of the wavelength diversity algorithm, implemented in the range time domain.
Range compression is done with two range looks, separated by a frequency Δf_r. This emulates two radars
with different center frequencies. The ACCC angle is found for each look, and the difference between the two
angles, divided by Δf_r, gives an estimate of the slope of Doppler centroid versus range frequency. Another way to
explain this is that the effective carrier frequency after the ACCC angle subtraction is lowered to Δf_r
instead of the original transmission frequency f₀ in (12.11). Because of the much lower frequency, the
angle difference is very small and phase wrapping is not a problem.
[Figure 12.22: azimuth look correlation versus range displacement (samples) for ambiguity errors of -1, 0, and +1. The correlation peak shifts and broadens as the ambiguity error increases.]
MLBF: In this method, two range-compressed looks are also used. The two looks are multiplied or mixed
together to generate a beat signal,7 which is a low bandwidth signal whose average frequency is called the
"beat frequency." The magnitude spectrum of this signal usually exhibits a distinct peak, whose frequency is
proportional to the absolute centroid. Again, another way to explain this method is that the effective carrier
frequency is lowered to Δf_r by the mixing, which eliminates the wrapping of the azimuth frequencies.
These algorithms share some common modules, which are highlighted in Figure 12.23. The boxes in the figure
represent the functions called by the algorithms. The MLCC and MLBF algorithms each use two range looks, but
in the MLBF, the beat frequency between the two looks is measured, rather than measuring the frequency of
each look. All of the algorithms invoke the ACCC angle computation, with the WDA operating in the range
frequency domain and the MLCC and MLBF operating in the range time domain.
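The carrier-scaling idea shared by these algorithms can be sketched with two ideal range looks. The following Python fragment is illustrative: the carrier, look separation, PRF, and centroid are assumed values, and each look is modeled as a pure azimuth exponential whose Doppler scales with its own carrier frequency.

```python
import numpy as np

# Assumed parameters: C-band carrier, range-look separation, PRF.
f0 = 5.3e9            # carrier frequency (Hz)
dfr = 12.0e6          # separation of the two range-look centers (Hz)
Fa = 1300.0           # PRF (Hz)
N = 4096
f_abs = -6980.0       # true absolute Doppler centroid (Hz), many PRFs wide
eta = np.arange(N) / Fa

# Each range look behaves like a radar at carrier f0 +/- dfr/2, so its
# azimuth Doppler is f_abs scaled by its own carrier frequency.
look1 = np.exp(2j * np.pi * f_abs * (f0 + dfr / 2.0) / f0 * eta)
look2 = np.exp(2j * np.pi * f_abs * (f0 - dfr / 2.0) / f0 * eta)

# The beat signal oscillates at f_abs * dfr / f0 -- far below the PRF, so
# it is unaliased; scaling back up recovers the absolute centroid.
beat = look1 * np.conj(look2)
f_beat = np.angle(np.sum(np.conj(beat[:-1]) * beat[1:])) / (2.0 * np.pi) * Fa
f_abs_hat = f_beat * f0 / dfr
```

Even though each look, sampled at the PRF, is aliased by several ambiguities, the beat frequency of about -16 Hz is not, and scaling by f₀/Δf_r recovers the assumed absolute centroid of -6980 Hz.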
These DAR estimators are not very accurate compared to the baseband estimation algorithms of Section 12.4.
While the accuracy of the baseband algorithms can be in the order of a few hertz, the accuracy of the DAR
algorithms is more like several hundred hertz. But the DAR estimators only have to be accurate to within
PRF/2 in order to estimate the correct ambiguity. The ambiguity estimate can be made more reliable by
estimating the baseband centroid, f'_ηc, first, and subtracting it from the absolute Doppler frequency, f_ηc, estimated
by the DAR algorithms. This procedure is convenient because the baseband estimate is a byproduct of the DAR
algorithms.
When the difference is divided by the PRF, the answer should be close to an integer, so that the ambiguity estimate is obtained by a rounding operation

M_{amb} = \text{round} \left[ \frac{f_{\eta_c} - f'_{\eta_c}}{F_a} \right]    (12.20)
The remainder

\frac{f_{\eta_c} - f'_{\eta_c}}{F_a} - M_{amb}    (12.21)
is a measure of the accuracy of f_ηc. An example of an accuracy criterion is that the remainder should be less than 0.33 of a PRF. However, this criterion cannot be used as the only one, as an f_ηc error close to any integer multiple of the PRF will also give a low remainder.
The ambiguity estimator is simplified by the fact that the ambiguity number changes infrequently over the
SAR scene. It jumps by an integer where the baseband centroid wraps. The places where the wrapping occurs
are usually easy to recognize in the baseband estimates, so a geometry model can be used to remove the jump
from the ambiguity estimation procedure. When the wraps are compensated, only one ambiguity number needs to
be estimated over the whole scene, and the averaging area can be extended to the scene boundaries.
Once the correct ambiguity is obtained, the ambiguity number and the baseband estimate are combined to get an accurate estimate of f_ηc using

f_{\eta_c} = f'_{\eta_c} + M_{amb}\, F_a    (12.22)
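As a numeric sketch of (12.20) to (12.22): the PRF is an assumed RADARSAT-1-like value, and the two centroid estimates echo the Row 8, Column 11 example later in this section.

```python
# Sketch of the Doppler ambiguity arithmetic of (12.20)-(12.22).
# The PRF is an assumed RADARSAT-1-like value.
PRF = 1257.0      # azimuth sampling rate, Fa (Hz) -- assumed
f_abs = -7440.0   # coarse absolute centroid from a DAR estimator (Hz)
f_base = 338.0    # accurate baseband centroid estimate (Hz)

# (12.20): the scaled difference should be close to an integer
M_amb = round((f_abs - f_base) / PRF)

# (12.21): the remainder measures the consistency of the DAR estimate
remainder = (f_abs - f_base) / PRF - M_amb

# (12.22): combine the ambiguity number with the baseband estimate
f_refined = f_base + M_amb * PRF

print(M_amb, round(remainder, 2), f_refined)   # -6 -0.19 -7204.0
```

The refined value keeps the few-hertz accuracy of the baseband estimate, while the DAR estimate only needed to be within half a PRF.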
The details of each DAR algorithm are described in the following sections.
12.5.3 Wavelength Diversity Algorithm
In the wavelength diversity algorithm, the slope of the ACCC angle versus range frequency is used to estimate
the range frequency dependence of the Doppler centroid, which leads to the estimate of the absolute Doppler centroid. The slope is measured as follows. A range FFT is performed on the input data, and the ACCC angle is
found for each range frequency cell, by finding the signal increments in the azimuth direction, as in (12.9). The
angle of the average phase increment is found. The slope of the angle versus range frequency is found by
regression. Sometimes the increments have to be averaged over a few range frequency cells to avoid wrapping
when the angles are found.
The absolute Doppler centroid can be found from the slope, as explained in the following equations. The ACCC angle (12.12) can be written as

\phi_{accc} = -\frac{2\pi}{F_a}\, \frac{2 V_r^2 f_0}{c\, R(\eta_c)}\, \eta_c    (12.23)
where f_0 is the nominal or average radar frequency. For a chirped radar, f_0 should be replaced by the instantaneous range frequency, f_0 + f_τ, where f_τ is the baseband pulse frequency. As an example from RADARSAT-1, f_0 = 5.300 GHz, and f_τ sweeps from −15 MHz to +15 MHz in the fine resolution mode.
Substituting the instantaneous range frequency for f_0 in (12.23), the range frequency dependence of the ACCC angle is expressed by

\phi_{accc}(f_\tau) = -\frac{2\pi}{F_a}\, \frac{2 V_r^2 (f_0 + f_\tau)}{c\, R(\eta_c)}\, \eta_c    (12.24)

and the slope of φ_accc versus f_τ is

\alpha = \frac{d\phi_{accc}(f_\tau)}{d f_\tau} = -\frac{2\pi}{F_a}\, \frac{2 V_r^2}{c\, R(\eta_c)}\, \eta_c    (12.25)
Then, substituting the average azimuth FM rate, K_a,dop = 2 V_r² f_0 / [c R(η_c)], back in and recognizing that the absolute Doppler frequency, f_ηc, equals −K_a,dop η_c, the absolute Doppler centroid estimate is obtained from the slope, α, using

\hat{f}_{\eta_c} = \frac{F_a\, f_0}{2\pi}\, \alpha    (12.26)
This derivation is equivalent to projecting the slope measured around the radar center frequency, f_0, back along the range frequency axis to f_τ = 0. As the Doppler frequency is proportional to the radar frequency, the projected line should go through the (0,0) point of the graph when the Doppler centroids at f_1 and f_2 are expressed correctly in absolute azimuth frequency terms. When the projected line is offset vertically until it does pass through (0,0), the vertical coordinate of the line at f_0 gives the absolute Doppler frequency, f_ηc. This is illustrated in Figure 12.24, where f_1 and f_2 represent the lower and upper frequencies in the transmitted chirp, and the slope is measured between these frequencies.
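Equation (12.26) amounts to a single scaling of the measured slope. A sketch using the slope measured in the RADARSAT example of Figure 12.26 (−7.02 mrad/MHz); the PRF value is an assumption:

```python
import math

# Sketch of (12.26): scale the ACCC-angle slope to an absolute centroid.
Fa = 1257.0             # PRF (Hz) -- an assumed RADARSAT-1-like value
f0 = 5.300e9            # radar center frequency (Hz)
alpha = -7.02e-3 / 1e6  # measured slope d(phi_accc)/d(f_tau), rad/Hz

f_abs = Fa * f0 * alpha / (2.0 * math.pi)   # (12.26), with f_os = 0
print(round(f_abs), round(f_abs / Fa, 2))   # about -7443 Hz, about -5.92 PRFs
```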
In SAR satellites, the azimuth boresight angle of the radar beam can vary as the chirp sweeps through its frequencies. This means that in (12.23) to (12.25), the beam center offset, η_c, may have a small dependence on the radar transmission frequency, f_0 + f_τ. This has the effect of moving the points at f_1 and f_2 up or down a little in Figure 12.24.
This movement upsets the projection of the slope to the zero range frequency axis, biasing the estimate in (12.26). The bias can be modeled as an offset frequency in the estimate, which can be represented by adding a bias term to (12.26)

\hat{f}_{\eta_c} = \frac{F_a\, f_0}{2\pi}\, \alpha + f_{os}    (12.27)
The compensation for this offset is an important part of the wavelength diversity algorithm. Unfortunately, it appears to be difficult to obtain a consistent value of f_os for the current satellite radar systems. The offset frequency also applies to the MLCC algorithm. A derivation of the offset frequency, f_os, is given in Appendix 12B.
[Figure 12.24: projection of the ACCC angle slope along the range frequency axis, with f_0 marked; horizontal axis: range frequency.]
During the application of the wavelength diversity algorithm in the range frequency domain, the baseband centroid can also be estimated conveniently by averaging the ACCC over all range frequency cells and converting to an angle. This gives the same answer as when the ACCC angle is measured in the range time domain. The range transform blocks should not be so large as to mask a nonlinear variation of the baseband centroid over range.
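This time/frequency equivalence is just Parseval's theorem, and can be checked numerically. The two vectors below are synthetic stand-ins for adjacent azimuth samples of one range line (all values invented):

```python
import numpy as np

# Check: the ACCC summed over range frequency cells has the same angle as
# the ACCC summed over range time samples (Parseval's relation).
rng = np.random.default_rng(0)
n = 256
g1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
# second azimuth sample: rotated copy plus a little noise
g2 = np.exp(1j * 0.3) * g1 + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

accc_time = np.sum(np.conj(g1) * g2)        # range time domain
G1, G2 = np.fft.fft(g1), np.fft.fft(g2)
accc_freq = np.sum(np.conj(G1) * G2) / n    # range frequency domain

print(np.angle(accc_time), np.angle(accc_freq))   # identical angles, near 0.3
```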
RADARSAT Results
Figure 12.26 shows the results of the wavelength diversity algorithm operating on the 1024 x 655 sample block of
RADARSAT data from Row 8 and Column 11 of Figure 12.15, the same data used in Figure 12.16(d) and Fig-
ure 12.19(d). After the range FFT is taken, the average CCC is found in each range frequency cell, and
converted to an angle. These values are shown as the "noisy" signal in the plot. There is a clear linear trend of
ACCC versus range frequency, and a straight line is obtained with a least squares fit within the central 90% of
the spectrum. The fitted line has a slope of - 7.02 mrad/MHz and is shown as the dashed line in the graph.
Assuming that f_os = 0 for the moment, the absolute Doppler centroid is estimated from (12.27) to be −7440 Hz, or −5.92 PRFs.
[Plot: ACCC angle versus range frequency (MHz), with a fitted line of slope −7.02 mrad/MHz shown dashed; annotations: est F_frac = 338 Hz, fit error = 59 mrad, cubic fit = 7 mrad, remainder = 0.19 PRFs, M_amb = −6.]
Figure 12.26: Estimation of the absolute Doppler centroid from Row 8, Column 11, using the wavelength diversity algorithm.
Next, the CCCs are averaged over all range frequency cells. The average is converted to a wrapped angle and, from (12.13), the baseband Doppler centroid is found to be 338 Hz (the baseband estimate is not affected by f_os). Combining these two estimates in (12.20), the ambiguity number is found to be −6. The remainder (12.21) is 0.19 PRFs, giving some credibility to the ambiguity estimate. More useful measures of estimation accuracy are the RMS deviation between the ACCC values and the straight line and the cubic error of the fit. These values are 59 and 7 mrad, respectively, indicating good quality estimates.
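These quality measures can be sketched as follows. The synthetic ACCC angles, the noise level, and the particular definition of the cubic error (the cubic-term contribution at the band edge) are illustrative assumptions:

```python
import numpy as np

# Fit-quality measures for the WDA: RMS deviation about the fitted line,
# and a cubic-error measure from a third-order fit (definitions assumed).
rng = np.random.default_rng(1)
f_mhz = np.linspace(-13.5, 13.5, 200)   # central 90% of a 30-MHz band
slope_true = -7.02e-3                   # rad/MHz, as in Figure 12.26
angles = 0.5 + slope_true * f_mhz + 0.05 * rng.standard_normal(f_mhz.size)

# Linear least-squares fit gives the slope used in (12.26)
slope, intercept = np.polyfit(f_mhz, angles, 1)
rms_dev = np.sqrt(np.mean((angles - (slope * f_mhz + intercept)) ** 2))

# Cubic coefficient of a third-order fit flags nonlinear (biased) blocks
cubic_coeff = np.polyfit(f_mhz, angles, 3)[0]
cubic_err = abs(cubic_coeff) * 13.5 ** 3   # contribution at the band edge

print(round(slope * 1e3, 2), round(rms_dev * 1e3), round(cubic_err * 1e3, 1))
```

A block whose RMS deviation or cubic error greatly exceeds typical values would be rejected.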
[Plot: ACCC angle versus range frequency (MHz); annotations: fit deviation = 56 mrad, remainder = −0.47 PRFs, F_cen = −10.18 PRFs.]
Figure 12.27: Bias of the Doppler centroid estimate due to a bright, partially
exposed target in Row 8, Column 10.
An example of a biased ambiguity estimate is given in Figure 12.27. It is taken from Row 8, Column 10, the
same block of data analyzed in Figure 12.16(b) and Figure 12.19(b). The bright, partially exposed target in this
data block has caused a substantial bias in the ambiguity estimate. The large cubic fit error of 19 mrad could
be used to reject this estimate.
When the ambiguity is estimated over all 228 blocks of the scene, the correct ambiguity is found in 87 of the blocks. The histogram of the block ambiguity estimates is shown in Figure 12.28. About 20% of the estimates are in error by more than one unit. Most of the large errors are in the low SNR areas in the water, and these could be rejected by the quality measures of the RMS deviations or the SNR. The average of the remaining blocks gives the correct ambiguity of −6.
Smaller blocks than are normally recommended were used in these experiments to illustrate some of the noise and biasing effects of the estimates. In practice, larger blocks should be used for the ambiguity estimates, perhaps as big as 4096 by 1024 samples in the present dataset. This reduces the effect of bright, partially exposed targets, as 4096 samples is approximately six times the target exposure time. An alternative is to use the smaller blocks in combination with the quality-guided, spatial diversity approach of Section 12.6.
12.5.4 Multilook Cross Correlation Algorithm
The MLCC algorithm uses two range looks to isolate the effect of range frequency on the Doppler centroid. The
operation of the algorithm is illustrated by considering a point target. Ignoring the target's magnitude and range
envelope, the range compressed signals of the two looks of the target can be written as
s_1(\eta) = w_a(\eta - \eta_c) \exp\left\{ -j\, \frac{4\pi}{c} \left( f_0 - \frac{\Delta f_r}{2} \right) R(\eta) \right\}    (12.28)

s_2(\eta) = w_a(\eta - \eta_c) \exp\left\{ -j\, \frac{4\pi}{c} \left( f_0 + \frac{\Delta f_r}{2} \right) R(\eta) \right\}    (12.29)

The azimuth phase histories of the target are different in each look because of the frequency shift, Δf_r. As in (12.12), the ACCC angles of the two looks are
[Figure 12.28: Histogram of the WDA block ambiguity estimates; horizontal axis: WDA Ambiguity Number, M_amb.]
\phi_{L1} = \angle \left\langle C_1(\eta) \right\rangle = -\frac{2\pi}{F_a}\, K_{a1,dop}\, \eta_c    (12.30)

\phi_{L2} = \angle \left\langle C_2(\eta) \right\rangle = -\frac{2\pi}{F_a}\, K_{a2,dop}\, \eta_c    (12.31)

where K_a1,dop and K_a2,dop are the average FM rates of the two looks evaluated at their respective center frequencies, given by

K_{a1,dop} = \frac{2 V_r^2}{c\, R(\eta_c)} \left( f_0 - \frac{\Delta f_r}{2} \right)    (12.32)

K_{a2,dop} = \frac{2 V_r^2}{c\, R(\eta_c)} \left( f_0 + \frac{\Delta f_r}{2} \right)    (12.33)

so that their difference is

K_{a2,dop} - K_{a1,dop} = \frac{2 V_r^2}{c\, R(\eta_c)}\, \Delta f_r = \frac{\Delta f_r}{f_0}\, K_{a,dop}    (12.34)

where K_a,dop is simply the average of K_a1,dop and K_a2,dop. Finally, the difference between the two ACCC angles is

\Delta\phi = \phi_{L2} - \phi_{L1} = -\frac{2\pi}{F_a} \left( K_{a2,dop} - K_{a1,dop} \right) \eta_c    (12.35)
In practice, this difference angle is estimated from the data by computing the average of the conjugate-multiplied signal vectors and extracting the angle

\Delta\phi = \angle \left\langle C_2(\eta)\, C_1^*(\eta) \right\rangle    (12.36)
to avoid wraparound when the phases are subtracted. Substituting (12.34) into (12.35), the difference angle can be rewritten as

\Delta\phi = -\frac{2\pi}{F_a}\, \frac{\Delta f_r}{f_0}\, K_{a,dop}\, \eta_c = \frac{2\pi}{F_a}\, \frac{\Delta f_r}{f_0}\, f_{\eta_c}    (12.37)

where the relation f_ηc = −K_a,dop η_c has been utilized in the last equality.
Since Δφ is much less than a radian in practice, angle wraparound is not a problem when the angle is extracted after the differences are averaged. The absolute Doppler centroid frequency is estimated by

\hat{f}_{\eta_c} = \frac{F_a\, f_0}{2\pi\, \Delta f_r}\, \Delta\phi + f_{os}    (12.38)
where an offset frequency has been added, as in the wavelength diversity algorithm (12.27). This result can be related to the result (12.26) of the wavelength diversity algorithm by noting that Δφ/Δf_r is equivalent to the slope α. This is expected, as the MLCC algorithm is analogous to the wavelength diversity algorithm implemented in the range time domain.
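The chain (12.28) to (12.38) can be exercised end to end on a simulated point target. In this numpy sketch every parameter value (PRF, look separation, velocity, range, beam crossing time) is an illustrative assumption, and f_os is set to zero:

```python
import numpy as np

# MLCC sketch on a synthetic point target (all parameters are assumed).
c = 3.0e8
f0 = 5.3e9     # radar center frequency (Hz)
dfr = 13.0e6   # look separation, Delta f_r (Hz) -- assumed
Fa = 1257.0    # PRF (Hz) -- assumed
V = 7100.0     # effective radar velocity (m/s)
R0 = 850.0e3   # slant range of closest approach (m)
eta_c = 3.5    # beam center crossing time (s): a strongly squinted case

N = 512
eta = eta_c + (np.arange(N) - N // 2) / Fa   # azimuth time axis
R = np.sqrt(R0 ** 2 + (V * eta) ** 2)        # hyperbolic range equation

# Range-compressed looks (12.28)-(12.29), magnitude terms ignored
s1 = np.exp(-1j * 4 * np.pi * (f0 - dfr / 2) * R / c)
s2 = np.exp(-1j * 4 * np.pi * (f0 + dfr / 2) * R / c)

# Per-look azimuth phase increments (the ACCC terms)
c1 = np.conj(s1[:-1]) * s1[1:]
c2 = np.conj(s2[:-1]) * s2[1:]

# (12.36): average the conjugate-multiplied increments, then take the angle
dphi = np.angle(np.mean(c2 * np.conj(c1)))

# (12.38) with f_os = 0: scale the difference angle to an absolute centroid
f_est = Fa * f0 * dphi / (2 * np.pi * dfr)

# True absolute Doppler centroid, for comparison
f_true = -2 * f0 / c * V ** 2 * eta_c / np.sqrt(R0 ** 2 + (V * eta_c) ** 2)
print(round(f_est), round(f_true))
```

The estimate agrees with the true centroid to within a few hertz here, even though the centroid lies several PRFs from baseband; real data adds noise, so the averaging over many range cells is essential.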
The estimate of the absolute Doppler centroid of (12.38) is not accurate enough to provide the baseband centroid, but it does serve to find the Doppler ambiguity. As in (12.20), the ambiguity is best estimated by subtracting the baseband estimate, dividing by F_a, and then rounding. As in (12.13), the baseband estimate can be found within the MLCC algorithm by averaging the ACCCs of the two looks and converting to an angle.9 With this baseband estimate, the absolute Doppler centroid can be refined according to (12.20) and (12.22).
The accuracy of the MLCC method depends upon the range look bandwidth and the look separation, Δf_r. The optimal separation of the looks is found to be Δf_r = (2/3)|K_r|T_r, and the optimal look bandwidth is |K_r|T_r/3, where |K_r|T_r is the total range bandwidth [18].
In summary, the difference between the two ACCC angles gives an approximate estimate of the absolute
Doppler centroid, while their average gives an accurate estimate of the fractional PRF part. Combining the two
gives the ambiguity number estimate, which is then used to get an updated absolute Doppler centroid estimate. A
beam-specific offset frequency must be included in the estimation procedure.
RADARSAT Results
An MLCC estimator was run on the 228 blocks of the Vancouver RADARSAT scene. The range spectrum is shown in Figure 12.29 for the Row 8, Column 11 block used in Figure 12.16(d). The shape of the range spectrum is largely governed by the weighting function used in range compression, a Kaiser window with β = 2.2. For simplicity, the two range looks are extracted from the first and second halves of the spectrum. Note that, because of the nonuniform shape of the spectrum, the effective look separation, Δf_r, has to be calibrated for the specific spectral shape of the data. The calibration is more reliable if the range spectrum is flat.
Next, the angles of the sum and difference of the signal increments were found in each range cell. This was done to examine the variability of the individual angles, and the results are plotted in Figure 12.30. The standard deviations of the sum and difference angles provide a useful quality measure.
To form the MLCC estimate, the increment vectors are averaged over range first, then summed and differenced, and the angles are found. The difference angle gives an absolute centroid estimate of −6113 Hz for Row 8, Column 11 using (12.38). The sum of the increment vectors gives a baseband centroid of 338 Hz, the same as the other estimators. When the baseband estimate and an f_os offset are combined with the absolute estimate, the estimated ambiguity number is −6.1, before rounding.
Next, the MLCC estimator was tested with the block of Row 8, Column 10 [used in Figure 12.16(b)]. The results are shown in Figure 12.31. The bias in the estimates is considerable, caused by the bright, partially exposed discrete target of the Port Mann Bridge. The absolute Doppler centroid estimate is biased by six PRFs, and should definitely be rejected. It can be rejected by the standard deviation or the spectral fit criterion. On the other hand, the bias would be much less if the estimation block extent were extended in azimuth by a few thousand lines.
After the MLCC estimator was run on all the 5-km blocks in the Vancouver scene, a histogram of the estimated ambiguity numbers was plotted in Figure 12.32. The offset frequency was arbitrarily set to −380 Hz, the value needed to balance the estimates that were one unit high with the ones that were one unit low. About 26% of the estimates are in error by more than one unit; this percentage will come down when larger block sizes are used.
The apparent value of f_os is different from the value found in the WDA. This is because the weighting applied to the range frequency axis is quite different in the two algorithms. This is especially true when the antenna pointing angle is a nonlinear function of transmission frequency, as explained in Appendix 12B. The weighting may also account for the wider spread in the histogram values.

It seems that the MLCC algorithm does not provide any additional information beyond the WDA, but it is convenient to use it in combination with the MLBF algorithm.
[Plot: range spectrum (MHz) with the two look positions, Look 1 and Look 2, marked; annotations: MLCC/MLBF results, F_frac = 338 Hz, F_abs MLCC = −6113 Hz, F_abs MLBF = −7487 Hz.]
Figure 12.29: Range spectrum and the two look positions, Row 8, Column 11.
Figure 12.30: MLCC: Angles of the sum and differences of the ACCC of the
two looks for Row 8, Column 11.
12.5.5 Multilook Beat Frequency Algorithm
In the MLBF algorithm, the same two looks are used as in the MLCC algorithm, but a different way of finding the Doppler frequency difference caused by the range frequency shift is employed. By multiplying (beating) the two looks together, a difference or beat frequency emerges, which can be estimated by a Fourier transform or by the ACCC method.10 This different way of measuring the look frequency difference gives the algorithm useful properties that are complementary with those of the MLCC algorithm.
The generation of the beat frequency can best be understood by examining what happens to a single point target. Treating each range look as an azimuth time series in one range cell, the MLBF algorithm begins by multiplying the two signals given in (12.28) and (12.29), conjugating the signal of the first look. The resulting beat signal for the point target is given by

s_{beat}(\eta) = s_1^*(\eta)\, s_2(\eta) = \left| w_a(\eta - \eta_c) \right|^2 \exp\left\{ -j\, \frac{4\pi\, \Delta f_r}{c}\, R(\eta) \right\}    (12.40)
Figure 12.31: MLCC: Angles of the sum and differences of the ACCC of the
two looks for Row 8, Column 10.
Because the linear component of R(η) tends to dominate the higher order range migration terms in high frequency SARs, the signal frequencies expressed by (12.40) are confined to a narrow bandwidth within the effective target exposure time. In other words, the beat signal is approximately a sine wave. If the frequency of the sine wave can be estimated, the absolute Doppler centroid can be found.
The average beat frequency, derived from the phase in (12.40), is

f_{beat} = -\frac{2\, \Delta f_r}{c} \left. \frac{dR(\eta)}{d\eta} \right|_{\eta = \eta_c} = \frac{\Delta f_r}{f_0}\, f_{\eta_c}    (12.41)

The differential is taken of the hyperbolic range equation (4.9), as explained in Appendix 5B. Therefore, the frequency of the beat signal is proportional to the absolute Doppler frequency, and the latter is then estimated by

\hat{f}_{\eta_c} = \frac{f_0}{\Delta f_r}\, f_{beat}    (12.42)
The value of the beat frequency, f_beat, can be estimated in a number of ways. One way is to measure the ACCC angle of the beat signal, and another way is to find the location of the peak in the magnitude spectrum using an FFT.
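Both routes can be tried on a simulated point target (every parameter value is an illustrative assumption, not a book value):

```python
import numpy as np

# MLBF sketch on a synthetic point target (all parameters are assumed).
c = 3.0e8
f0, dfr, Fa = 5.3e9, 13.0e6, 1257.0   # carrier, look separation, PRF
V, R0, eta_c = 7100.0, 850.0e3, 3.5   # velocity, closest range, beam time

N = 1024
eta = eta_c + (np.arange(N) - N // 2) / Fa
R = np.sqrt(R0 ** 2 + (V * eta) ** 2)
s1 = np.exp(-1j * 4 * np.pi * (f0 - dfr / 2) * R / c)
s2 = np.exp(-1j * 4 * np.pi * (f0 + dfr / 2) * R / c)

# (12.40): beat the looks together; the product is a low-frequency tone
sbeat = np.conj(s1) * s2

# Route 1: locate the peak of the FFT magnitude spectrum
freqs = np.fft.fftfreq(N, d=1.0 / Fa)
f_beat_fft = freqs[np.argmax(np.abs(np.fft.fft(sbeat)))]

# Route 2: ACCC angle of the beat signal (no cell quantization)
incr = np.conj(sbeat[:-1]) * sbeat[1:]
f_beat_accc = Fa * np.angle(np.mean(incr)) / (2 * np.pi)

# (12.42): scale the beat frequency up to the absolute Doppler centroid
f_est = f0 / dfr * f_beat_accc
print(round(f_beat_fft, 1), round(f_beat_accc, 1), round(f_est))
```

The FFT route is quantized to cells of F_a/N, while the ACCC route is not.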
[Figure 12.32: Histogram of MLCC ambiguity number estimates, M_amb; annotations: F_os = −380 Hz, look separation = 0.735, outliers = 26%.]
Discussion
As in the other algorithms, the MLBF requires averaging to improve its accuracy. In the implementation, the beat signal is generated and its FFT is taken in every range cell. The power spectrum is averaged over a group of range cells. Then, the frequency of the peak of the spectrum is found, which gives the midrange estimate of f_ηc. Finally, the estimated range-dependent baseband centroid, f'_ηc, guides the more refined estimate of f_ηc as a function of range, as in the MLCC algorithm. In the MLBF, the baseband centroid, f'_ηc, is easily estimated by finding the angle of the ACCC summed over the two looks, the same as in the MLCC algorithm.
Unlike the MLCC algorithm, there is no offset frequency that needs to be included in the MLBF estimate.
This important property is shown in Appendix 12B. In fact, when the MLCC and MLBF algorithms are used
together, the difference in their estimates can be used to find the offset frequency, although experience has shown
that many scenes have to be averaged to obtain a reliable estimate of the offset frequency. The offset frequency
can vary with the beam off-nadir angle, and possibly even change with time, as the beam pattern ages.
An interesting property that demonstrates how the MLBF algorithm is complementary with the MLCC
algorithm is that the MLBF works best for high contrast scenes, while the MLCC works best for low contrast
scenes [18]. The MLBF works best when there is a single dominant scatterer in each range cell, which corresponds
to the ideal scenario represented by the point target analysis of (12.40) to (12.42). When additional scatterers are
present in a range cell, intermodulation frequencies arise from the beating of one target with another, which obscure the desired beat frequency [recall that the beat frequency of (12.41) arises from the beating of only a single target with itself]. In low contrast scenes, where there are many scatterers of similar magnitude in each
range cell, their intermodulations can mask the desired beat frequency to the point where there is not a distinct
peak in the beat signal spectrum.
Another interesting property that is also complementary with the MLCC algorithm is that the MLBF
algorithm is not very sensitive to strong partially exposed targets. The reason can be deduced from (12.41). If
the slope of R( TJ) does not vary much through the exposure, then the beat frequency corresponding to the
beginning of the exposure is not much different from that at the end of the exposure.
RADARSAT Results
The MLBF algorithm was run on all the 5-km blocks of the Vancouver RADARSAT scene. The same two blocks
discussed in the MLCC results are again highlighted here. The shape of the beat signal spectrum is the key
point to examine. The Row 8, Column 11 block shown in Figure 12.33 has moderate contrast. The beat
frequency is clearly visible, but is not too high above the surrounding pedestal.
Each cell along the beat frequency axis, denoted by the ×'s, corresponds to a beat frequency change of 1.23 Hz for the FFT length of 1024 points. It takes a change in Doppler frequency of 500 Hz to shift the peak by one cell, so each ambiguity corresponds to 2.5 frequency cells. Thus, the peak location has to be estimated to within one cell to find the correct ambiguity with this 1024-point FFT. A longer FFT length would make this selectivity finer, as well as provide more averaging.
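This cell arithmetic can be verified directly; the PRF and the effective look separation below are assumed values chosen to reproduce the quoted figures:

```python
# Beat-frequency FFT cell arithmetic. The PRF and the effective look
# separation are assumed values chosen to reproduce the quoted figures.
Fa = 1257.0    # PRF (Hz) -- assumed
f0 = 5.3e9     # radar center frequency (Hz)
dfr = 13.0e6   # effective look separation (Hz) -- assumed
N = 1024       # FFT length

cell = Fa / N                     # beat-frequency spacing of one FFT cell
dopp_per_cell = cell * f0 / dfr   # Doppler change that moves the peak 1 cell
cells_per_amb = Fa / dopp_per_cell

print(round(cell, 2), round(dopp_per_cell), round(cells_per_amb, 1))
# -> 1.23 500 2.5
```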
A curve-fitting procedure or the ACCC angle can be used to get a better estimate of the frequency of the beat signal, avoiding problems with double peaks and FFT cell quantization. The ACCC estimates of the beat frequency are shown by the shorter vertical bar in Figures 12.33 and 12.34, while the frequency estimates made from the FFT are shown by the longer bar. The ACCC estimates were found to have a slightly smaller standard deviation.
By simply taking the FFT cell with the highest energy in this example, the ambiguity number is - 6.2 before
rounding. When rounded to -6, the correct answer is obtained. Even though there is no offset frequency to be
calibrated, it is important to note that the effective look separation has to be calibrated, which may be different
from the MLCC calibration.
The results for the Row 8, Column 10 block are shown in Figure 12.34. In this block, there is a strong partially exposed scatterer, which gives a much higher peak in the beat spectrum. The peak cell is one sample to the right of the Column 11 case, giving an ambiguity number of −6.7 before rounding. This demonstrates the reduced sensitivity of the MLBF to partially exposed targets (compare with the MLCC result of Figure 12.31).
The histogram of the ambiguity number estimates of all the blocks is given in Figure 12.35. The most
common answer is the correct one with 110 correct blocks, and the other estimates are symmetrically distributed
about this value. About 18% of the blocks are outliers that can be rejected by quality criteria, such as the height
of the spectral peak and the SNR.
ScanSAR data is distinguished by the presence of gaps in the received data. When using the MLBF DAR
estimation method, the gaps can cause problems if the beat frequency is estimated using FFTs. If the FFT
length is set to the burst length, the FFT output cell spacing is too coarse to estimate the ambiguity. If longer
FFTs are taken across several bursts, zeros must be added to fill the gaps, in order to make the FFT coherent.
But long FFTs result in a modulation effect, discussed in Section 10.4, which may affect the accuracy of locating
the spectral peak.
[Plot: spectrum of the beat signal; annotations: peak FFT beat frequency = 17.2 Hz, ACCC beat frequency = 16.7 Hz, MLBF F_abs = −7487 Hz, Row 8, Column 11.]
Figure 12.33: Spectrum of the beat signal of the two looks (Row 8, Column 11).
[Plot: spectrum of the beat signal versus beat frequency (Hz); annotations: ACCC beat frequency, MLBF F_abs = −7938 Hz, Row 8, Column 10.]
Figure 12.34: Spectrum of the beat signal of the two looks (Row 8, Column 10).
The solution is to use the ACCC method of estimating frequency in the MLBF algorithm, as it is not
affected by the data gaps, as long as the phase increment is not calculated across the gap. Another alternative is
to use the MLCC or wavelength diversity algorithm, which can be applied to the bursts individually, with
averaging from one burst to the next.
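A sketch of this gap-aware ACCC estimator on a synthetic burst-mode tone (the burst layout, PRF, and tone frequency are invented for illustration):

```python
import numpy as np

# Gap-aware ACCC frequency estimation for burst-mode (ScanSAR-like) data.
# The burst layout, PRF, and tone frequency are invented for illustration.
Fa = 1257.0     # PRF (Hz) -- assumed
f_tone = -18.0  # beat-signal frequency to recover (Hz)
t = np.arange(4096) / Fa
x = np.exp(2j * np.pi * f_tone * t)

# Valid-data index ranges; everything between them is a gap
bursts = [(0, 600), (1500, 2100), (3000, 3600)]

# Accumulate phase increments only within bursts, never across a gap
acc = 0.0 + 0.0j
for start, stop in bursts:
    seg = x[start:stop]
    acc += np.sum(np.conj(seg[:-1]) * seg[1:])

f_est = Fa * np.angle(acc) / (2.0 * np.pi)
print(round(f_est, 2))   # -> -18.0
```

Because no increment straddles a gap, the estimate is unbiased by the burst structure, unlike a long FFT over zero-filled gaps.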
Another method of Doppler ambiguity estimation is available if multiple PRFs are used in the same dataset. This happens naturally in ScanSAR data, as the PRF is changed between the subbeams.

Provided that there is an overlap area between the ScanSAR subbeams, the images formed in these two subbeams are misregistered in the presence of an ambiguity error [19]. Assuming that the antenna boresight does not change appreciably between the subbeams, the Chinese remainder theorem can be used to determine the absolute Doppler centroid, by utilizing the knowledge of the fractional PRF part and the respective PRF of each subbeam [20]. Another method relies on finding the ambiguity that gives the minimum discontinuities of the Doppler centroids between the subbeams [21]. Experience with wide-swath, five-beam ENVISAT/ASAR data is presented in [22].
[Histogram of ACCC measurements of the MLBF ambiguity number, M_amb; annotations: look separation = 0.735, outliers = 18%.]
Figure 12.35: Histogram of MLBF ambiguity estimates over the whole Vancouver scene.
Use of multiple looks: The MLCC and MLBF algorithms use two range looks to emulate radars with two
different center frequencies. They estimate the change in Doppler frequency between the looks to find the
absolute Doppler frequency. The wavelength diversity algorithm works in the range frequency domain, and
finds the change in Doppler frequency versus range frequency from the slope of angles in the range
frequency domain. In the two-look case, the effective separation between the looks must be estimated.
MLCC versus WDA: The MLCC and WDA algorithms are very similar in their operation and their accuracy.
The WDA is not affected by the weighting of the range spectrum, because only phase is measured, so look
separation does not need calibrating. The main advantage of the MLCC is that it can be conveniently used
in conjunction with the MLBF algorithm.
Bright targets: The MLBF algorithm benefits from the presence of bright targets, as one dominant target in
each range cell creates the purest beat spectrum. In addition, the MLBF algorithm does not suffer as much
from partially exposed bright targets as the other algorithms, because the beat frequency is fairly constant
throughout the whole of the target exposure.
Scene contrast: A scene can have many bright discrete targets, or can have fairly uniform radiometry. These
are referred to as high and low contrast scenes, respectively. The wavelength diversity and MLCC algorithms
work well for low contrast scenes, because partial exposures are not so much of an issue when bright targets
are not present. On the contrary, the MLBF algorithm works well for high contrast scenes, where isolated
bright targets give a clearer picture of the beat frequency spectral peak. The use of the two algorithms
together is recommended to reduce the sensitivity to scene contrast.
Offset frequency: An offset frequency exists in the wavelength diversity and MLCC algorithms, which can be troublesome to estimate consistently. The offset frequency exists when the beam boresight varies with radar frequency. The MLBF algorithm does not suffer from such a bias, as shown in Appendix 12B.
Application to ScanSAR data: The WDA and MLCC algorithms are based upon estimating the phase increments between azimuth samples, and are not bothered by the gaps in ScanSAR data. The same is true of the MLBF algorithm, as long as the ACCC angle rather than an FFT is used to estimate the beat frequency.
The ability of each algorithm to measure the Doppler ambiguity number correctly depends upon the radar
parameters, especially the wavelength. Table 12.2 shows the measurement sensitivities of the different DAR algorithms for different sensors and radar bands. The range and azimuth bandwidths, the range sampling rates, and
the PRFs are taken from Table 4.1.
[Table 12.2: Measurement sensitivities of the DAR algorithms for various satellites and radar bands (columns: satellite, residual RCM over the processed aperture, look misregistration in range resolution elements, ACCC angle change Δφ, and beat frequency cells per ambiguity).]
The numerical values given in Table 12.2 represent the measurement increment needed to distinguish a change in Doppler centroid of one PRF (i.e., a one-ambiguity error). A larger numerical value is favorable, since it means a higher sensitivity in the estimation procedure. However, it also means more image quality degradation if the ambiguity estimate is wrong. For example, it is difficult to estimate the ambiguity number of an X-band satellite to the nearest integer, but the image quality implications of a one-ambiguity error are correspondingly less. It is difficult to compare the sensitivities across the algorithms because the measurement techniques are different.
The second column in the table represents the residual RCM in meters over the processed aperture (assumed to be 80% of the PRF). The residual RCM is the RCM remaining after RCMC has been performed, assuming a Doppler centroid error of one PRF. For the look correlation method, where the range displacement is measured between two azimuth looks, the range misregistration is about 50% of this value, assuming the look separation is one-half of the processed bandwidth.
The third column gives the look misregistration in terms of range resolution elements, as the resolution governs the ability to measure registration. Normally, a misregistration of more than one resolution element is measurable by the correlation. It is seen that the misregistration is more than one range resolution element for all cases except the X-band satellite case. This is primarily because the exposure time is so short for the X-band satellite. For each of the two L-band cases, the misregistration is large because of the relatively long exposure time, which makes the ambiguity easier to estimate.
The fourth column pertains to the WDA and MLCC algorithms, and shows the amount of ACCC angle variation over two-thirds of the processed range bandwidth. Again, the larger values in this column mean that the ACCC angle measurement (or the slope) should be able to estimate the Doppler ambiguity with greater reliability. As before, the radar wavelength is the dominant parameter affecting this sensitivity. Although the Δφ values appear to be small, they can be measured within the required tolerance with enough averaging. For the WDA and MLCC algorithms, the uncertainty in the offset frequency and partial target exposures tend to be more of a problem than the ability to measure the ACCC angle.
For the MLBF algorithm, Column 5 shows the number of frequency cells by which the beat frequency changes when there is a one-ambiguity error in the Doppler centroid. A 4096-point FFT is assumed. Because there are more than six cells in each case, the ambiguity number should be reliably estimated with this transform length,
as long as enough averaging is done. When the ACCC is used to estimate the beat frequency, the change in
average phase per ambiguity is the same as in Column 4.
In this section and the next, four approaches are described to improve the estimation accuracy. The concepts are
[11, 23]:
1. The concept of "spatial diversity" is used, in which baseband Doppler estimates are obtained from
widely-spread areas of the scene, rather than from concentrated areas, such as the first 1024 or 4096 lines.
2. Measurements are made to judge the quality of the estimates, and are used to identify and reject those
parts of the received data that create the main estimation anomalies.
3. A geometry model is incorporated that can compute a Doppler surface, given the satellite attitude during
the data acquisition of the scene.
4. A "global" model-based fitting procedure is used that fits a Doppler centroid surface over a whole frame of
data in one operation.
The geometry model has the advantage of reducing the dimensionality of the estimation problem, and of
imposing a physical reality on the answer. Experiments with SRTM/X-SAR and RADARSAT data indicate the
Doppler centroid estimation accuracy can be 5 Hz or better with difficult scenes, which is accurate enough to
allow high image quality in the processed SAR images.
The first two of these concepts are outlined in this section, including the concept of spatial diversity, the use
of image blocks to obtain the diversity, and the measurement of parameters that predict the accuracy of each
block. The final concepts of using a global fitting procedure with geometry models are presented in Section 12.7.
The concept of spatial diversity is based on the premise that there are parts of a radar scene which provide
good baseband Doppler estimates, and other parts that provide noisy or biased estimates. Poor estimates arise
primarily from areas of the scene where the backscatter is very weak (the received SNR is low), from areas
where strong discrete targets are present, and from areas where the received energy is changing abruptly. The low
SNR areas raise the standard deviation of the block estimates, and the strong targets and the varying radiometry
can introduce large biases into the Doppler estimates.
In present practice, the first 1024 or 4096 lines of each processed frame are typically used to form the
Doppler estimates. This approach inadvertently uses areas of the scene that bias the estimator. To avoid the bad
areas, a "spatial diversity" approach can be used. In this approach, the whole scene is divided up into blocks or
subscenes, the baseband estimators are applied to each block separately, and only blocks that provide reliable
answers are included in the global estimation procedure.
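A minimal sketch of the block-partitioning step, assuming the scene is held as a 2-D complex array and `est_fn` stands in for any of the baseband estimators (both names are hypothetical):

```python
import numpy as np

def blockwise_estimates(data, est_fn, blk_az=1024, blk_rg=256):
    """Apply a baseband Doppler estimator independently to each block.

    data   : 2-D complex array, azimuth along axis 0, range along axis 1
    est_fn : stand-in for any baseband estimator; maps a block to hertz
    Returns an (azimuth-blocks x range-blocks) array of estimates.
    """
    n_az = data.shape[0] // blk_az
    n_rg = data.shape[1] // blk_rg
    est = np.empty((n_az, n_rg))
    for i in range(n_az):
        for j in range(n_rg):
            block = data[i*blk_az:(i+1)*blk_az, j*blk_rg:(j+1)*blk_rg]
            est[i, j] = est_fn(block)
    return est
```

Contiguous blocks are shown here; sparse or overlapping layouts only change the index arithmetic.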
The spatial distribution of blocks can be sparse, contiguous, or overlapping, depending upon the scene size
and the computing resources available. If contiguous blocks are used, a suitable size is 256 × 1024 samples (range
where γ can be calculated from the beam pattern, and the PRF can be estimated from the data. If a strong
part of the received data is examined, an upper bound of γ can be obtained. From the data example in Figure
12.16(d), the upper bound appears to be approximately 0.25 for this PRF and beam.
The SNR estimate can then be obtained from the Fourier coefficients as

SNR = 10 log10{ ρs / (1 − ρs) }

where ρs is the ratio of the measured relative amplitude of the spectrum variation to its noise-free value, γ.
Spectral Distortion
The spectral distortion is estimated by the root mean square (RMS) deviation between the averaged spectra and
the fitted curve. The RMS deviation is divided by the average height of the spectrum and multiplied by 100 to
express the deviation as a percentage.
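This normalized RMS measure is straightforward to compute; a sketch, with hypothetical argument names:

```python
import numpy as np

def spectral_distortion_percent(avg_spectrum, fitted_curve):
    """RMS deviation between the averaged spectrum and the fitted curve,
    normalized by the average spectrum height and expressed in percent."""
    rms = np.sqrt(np.mean((avg_spectrum - fitted_curve) ** 2))
    return 100.0 * rms / np.mean(avg_spectrum)
```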
Azimuth Gradient
The azimuth gradient is measured by the changes in radiometry, averaged over 1-km subblocks of the
range-compressed data. The range gradient tends not to bias the estimators because range compression localizes
the radiometric effects. High azimuth gradients are used to reject blocks for both the baseband and ambiguity
estimators, and contrast is used to select between the WDA/MLCC and MLBF ambiguity estimators.
Contrast
Very low SNR, very high SNR, high azimuth radiometric gradients, and high spectral distortion are the main
criteria that are used to reject baseband Doppler estimates using the quality measures. A degree of correlation
exists between these variables, but they each contain some independent information. The local standard deviation
of block estimates is also a useful quality measure. Scatter plots can be used to assess the utility of the various
quality measures, as seen in the examples in [11].
The WDA and MLCC Doppler ambiguity estimators use average phase increments from one range line to the
next to obtain estimates of the absolute Doppler centroid. The best quality measure is found to be the standard
deviation of the average phase increments over the range cells. The MLBF estimator works by finding the
frequency of the strongest discrete signal when the azimuth signals of two range looks are multiplied. In this case,
the best quality measure is the ratio between the peak energy and the surrounding spectral energy. In addition,
the quality measures used for the baseband estimates are useful for the ambiguity estimates.
By examining scatter plots of the quality parameters for a number of scenes, appropriate thresholds for the
quality parameters can be found. By setting conservative thresholds, scene-independent values emerge, and can be
used to set the initial rejection mask. This mask is used in the automatic surface fitting procedure discussed in
Section 12.7.3.
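A sketch of how such an initial rejection mask might be built from per-block quality arrays; the threshold names and values are placeholders, not the calibrated, scene-independent values discussed above:

```python
import numpy as np

def initial_rejection_mask(snr_db, fit_std_pct, az_grad, th):
    """Build the initial mask of accepted blocks from per-block quality
    arrays. Very low and very high SNR, high spectral-fit deviation, and
    high azimuth radiometric gradient all cause rejection. The threshold
    values passed in `th` are placeholders, not calibrated values."""
    ok = (snr_db > th['snr_min']) & (snr_db < th['snr_max'])
    ok &= fit_std_pct < th['fit_std_max']
    ok &= np.abs(az_grad) < th['az_grad_max']
    return ok
```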
After obtaining the fractional PRF estimates for each block in the scene and the overall ambiguity estimate, the
baseband estimates are unwrapped, and blocks with bad estimates are rejected. Then, a two-dimensional surface
of Doppler centroid versus range and azimuth is fitted. The simplest approach is to fit a low-order polynomial
surface

Fbb(ri, ai) = c0 + cr1 ri + cr2 ri^2 + cr3 ri^3 + ca1 ai + ca2 ai^2 + car ri ai        (12.46)

to the block estimates, where ri and ai are the range and azimuth block numbers relative to the center block.
Figure 12.37: Doppler centroid estimates using the sine wave fit method, taken
over contiguous 5-km blocks of the Vancouver scene. The gray scale is in hertz.
The coefficient c0 is the average Doppler frequency over the whole scene. The coefficient cr1 accounts for most
of the sizable variation of Doppler with range, and cr2 and cr3 allow small quadratic and cubic components in
the range variation. Normally, a linear term ca1 is sufficient to model a slowly-varying azimuth drift in the
Doppler centroid over 100 or 200 km, but a quadratic component ca2 is useful to follow the faster Doppler
changes caused by the operation of the attitude control system (as frequently happened in the SRTM mission).
Finally, a cross coupling term car is introduced to model a range slope that changes with azimuth, as happens
when the satellite latitude changes or the antenna yaw angle drifts.
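Using the coefficient names above, the seven-term polynomial surface can be evaluated as follows (a sketch; the convention of block numbers relative to the center block is taken from the text):

```python
def fbb_surface(c, r, a):
    """Evaluate the seven-coefficient polynomial Doppler surface.

    c    : [c0, cr1, cr2, cr3, ca1, ca2, car]
    r, a : range and azimuth block numbers relative to the center block
    """
    c0, cr1, cr2, cr3, ca1, ca2, car = c
    return (c0 + cr1*r + cr2*r**2 + cr3*r**3
            + ca1*a + ca2*a**2 + car*r*a)
```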
A Nelder-Mead simplex direct search method can be used to estimate the coefficients in (12.46) [24]. A
Newton-Raphson steepest descent method can be used to provide faster convergence. Because of the sensitivity of
the optimization method to initial conditions, it is found best to fit Fbb first using just one coefficient, c0, then fit
using three, then five, and finally seven coefficients. At each stage, coefficients are added using the order on the
right-hand side of (12.46). The coefficient values for each fit are used as the initial conditions for the subsequent
fit. Alternatively, the geometry model of Section 12.3 can be used to provide the initial values of the coefficients,
assuming a zero or nominal satellite attitude.
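The staged fitting strategy can be sketched with SciPy's Nelder-Mead implementation (the equivalent of the MATLAB FMINSEARCH function noted in [24]); the function name, coefficient ordering of the stages, and option values here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def staged_surface_fit(r, a, fbb_meas):
    """Fit the seven polynomial surface coefficients in stages of 1, 3, 5,
    and 7 free coefficients, seeding each stage with the previous result,
    to reduce sensitivity to the initial conditions.

    r, a     : 1-D arrays of block coordinates (relative to center block)
    fbb_meas : 1-D array of baseband Doppler block estimates (Hz)
    """
    def model(c7):
        c0, cr1, cr2, cr3, ca1, ca2, car = c7
        return (c0 + cr1*r + cr2*r**2 + cr3*r**3
                + ca1*a + ca2*a**2 + car*r*a)

    coeffs = np.zeros(7)
    for n in (1, 3, 5, 7):
        def cost(c):
            trial = coeffs.copy()
            trial[:n] = c          # free coefficients for this stage
            return np.mean((model(trial) - fbb_meas) ** 2)
        res = minimize(cost, coeffs[:n], method='Nelder-Mead',
                       options={'xatol': 1e-6, 'fatol': 1e-8, 'maxiter': 4000})
        coeffs[:n] = res.x
    return coeffs
```

Coefficient bounds deduced from the attitude constraints can be enforced by penalizing the cost function outside the limits.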
Attitude and attitude rate constraints are applied, which are ±0.5° and ±0.01°/s, respectively, in the case of
RADARSAT-1. From these limits, extreme values of the coefficients in (12.46) are deduced, and used to limit the
parameter search procedure.
The surface-fit method is applicable to frame-based SAR processors. If the SAR data is to be processed in a long
continuous strip, better results can be obtained if the Doppler centroid is updated as each new group of range
lines is processed. The spatial diversity approach can be used on blocks representing the new range lines, and a
Kalman filter can be used to update the surface fit [25]. Since the rate of change of the Doppler centroid is
quite small in the absence of satellite maneuvers, it is simpler to use either the ca1 term in (12.46) or attitude
rate terms (see the next section) to model the Doppler drift. A model-based filtering approach has been
successfully used for long strips of ScanSAR data [5].
The satellite/Earth geometry model of Section 12.3 can be used to find the Doppler centroid of each block, for a
given satellite orbit and beam attitude angle. Given the block estimates from the data, the attitude and attitude
rate parameters can be adjusted to get the best fit between the model and the good block estimates.
The absolute Doppler centroid is found at each block (ri, ai) assuming a zero or nominal attitude, to get a
nominal Doppler surface, Fza(ri, ai). Then, attitude angles and their rates are added to the model
(12.47)
where φ is the yaw angle, ψ is the pitch angle, and the primes denote time derivatives. If the satellite is yaw
steered, Fza(ri, ai) is near zero and can be omitted from (12.47). Small biases in the yaw steering can be
absorbed into the yaw and pitch estimates. The range dependence of F(ri, ai) is given by the beam geometry,
while the azimuth dependence is determined by the change in the Earth rotation component and by the attitude
rates.
The search procedure described in Section 12.7.3 is applied to find the attitude parameters on the right-hand
side of (12.47) that best fit the block estimates. Attitude and attitude rate limits are applied to the search space.
If the satellite is not under maneuver, the second derivative terms can be omitted. As the effects of yaw and
pitch on the Doppler centroid are quite cross coupled, it is helpful to make the parameter space orthogonal
before applying the optimization algorithm.
Both the polynomial model (12.46) and the geometry model (12.47) are found to work well in the surface
fitting procedure. The polynomial model is simpler to program, and a simplified geometry model can be used for
the constraints. It also provides an expression for the centroid in a form that is simple to use in the SAR
processor. The geometry model has the advantage of providing a more physical interpretation for the results, and
physical constraints can be directly applied. A suitable surface fitting procedure is discussed in the next section.
The Doppler calculations using the geometry model can be embedded in an automatic fitting procedure, as shown
in the block diagram of Figure 12.38. Once the individual block estimates, Fbb,measured(ri, ai), are available
from Step 1, the objective is to find the best fit over the whole scene of the surface (12.47), using the parameter
set (φ, φ', φ'', ψ, ψ', ψ'').
The baseband estimates often extend around a PRF boundary, as is the case when the ambiguity number
changes within a scene. In this case, the baseband estimates must be unwrapped, with the aid of the nominal
Doppler surface available from the zero-attitude geometry. This does not present a problem, as the centroid
changes across range and azimuth in a smooth, predictable way.
The ambiguity number, Mamb, of the scene can be estimated at this stage, since the geometry model
inherently works with the absolute Doppler estimates

Fabs(ri, ai) = Fbb(ri, ai) + Mamb Fa        (12.48)

where Fa is the PRF.
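The unwrapping of a block estimate onto the PRF interval nearest a model prediction, and the associated ambiguity number, can be sketched as follows (function and variable names assumed):

```python
import numpy as np

def absolute_centroid(fbb, f_model, prf):
    """Unwrap a baseband (fractional-PRF) block estimate onto the PRF
    interval nearest the geometry-model prediction, returning the
    ambiguity number and the absolute Doppler centroid.

    fbb     : baseband estimate in (-PRF/2, +PRF/2), in hertz
    f_model : absolute centroid predicted by the geometry model (Hz)
    prf     : pulse repetition frequency (Hz)
    """
    m_amb = int(np.round((f_model - fbb) / prf))
    return m_amb, float(fbb + m_amb * prf)

print(absolute_centroid(-150.0, -7050.0, 1700.0))  # → (-4, -6950.0)
```

Because the centroid changes smoothly across range and azimuth, this per-block unwrapping is well behaved even when the ambiguity number changes within a scene.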
In Step 2, the quality parameters are calculated, and thresholds are set to control the initial block rejections.
The data SNR, the spectral fit standard deviation, and the azimuth radiometric gradient can be used effectively
to determine the quality of the baseband estimates. In Step 3, a fitting mask is calculated, which indicates the
blocks that are used in the first iteration of the surface fit.
Steps 4, 5, and 6 constitute an iterative fitting procedure, whereby the free parameters of the model are
adjusted using a search procedure, to minimize the rms deviations between the model and the measured block
centroids. Normally, the pitch and yaw (and their rates) are used in the model, but the second derivatives can
also be used if a satellite attitude maneuver is anticipated. In Step 5, the geometry model of Figure 12.11 is used
to calculate the Doppler centroid frequency of each block, using the procedure outlined in Section 12.3. Only
those blocks within the mask are used to find the rms deviation between the surface fit and the block estimates
in Step 6.
In Step 7, a decision is made as to whether the rms deviation is small enough. If it is not small enough, the
block or group of blocks having the largest deviation can be removed from the fitting mask in Step 8. Steps 3 to
7 are then repeated until a satisfactory fit is obtained. The termination decision of Step 7 can include other
factors, such as maximum number of iterations, leveling out of the rms deviations, minimum percentage of blocks,
or minimum degree of spatial diversity retained in the mask.
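Steps 3 to 8 can be sketched as a generic loop; `fit_fn` and `eval_fn` are placeholders for whichever surface model (polynomial or geometry) is used, and the termination guards follow the factors listed above:

```python
import numpy as np

def iterative_surface_fit(fit_fn, eval_fn, est, mask, rms_tol=10.0, max_iter=20):
    """Iterative surface fit with block rejection (Steps 3-8).

    fit_fn(est, mask) fits the surface model to the masked block
    estimates; eval_fn(params) evaluates it on the full block grid.
    The worst-fitting block is removed until the rms deviation is
    acceptable, with guards on iteration count and blocks retained.
    """
    for _ in range(max_iter):
        params = fit_fn(est, mask)
        resid = eval_fn(params) - est
        rms = float(np.sqrt(np.mean(resid[mask] ** 2)))
        if rms <= rms_tol or mask.sum() <= mask.size // 2:
            break
        # Step 8: drop the masked block with the largest deviation
        worst = np.unravel_index(
            np.argmax(np.where(mask, np.abs(resid), -np.inf)), est.shape)
        mask[worst] = False
    return params, mask, rms
```

The 50% floor on retained blocks stands in for the "minimum percentage of blocks" and spatial-diversity criteria mentioned above.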
Figure 12.38: Block diagram of the automatic surface fitting procedure. Step 1: measure the Doppler centroid
over spatially diverse blocks. Step 2: calculate quality parameters of each block estimate (e.g., SNR, Fit_std,
Rad_grad) and set quality thresholds. Step 3: calculate the fitting mask. Steps 4 to 8: iterate the surface fit,
removing poorly fitting blocks from the mask. Step 9: fit a low-order polynomial to the Doppler centroid versus
range.
The geometry model approach using state vectors is used to fit a surface to the block estimates of Figure 12.37.
About 20% of the blocks are rejected in the iterative procedure, and the surface fit of Figure 12.39 is obtained.
The RMS deviation between the fitted surface and the individual block estimates is 7 Hz, and the surface fit is
within 1 Hz of the best fit that could be found manually.
Figure 12.39: The fitted Doppler surface using the satellite state vectors. The
fitted attitude values are Yaw0 = 0.0118° and Pitch0 = −0.0072°, and the gray
scale is in hertz.
A surface fit was also made using the circular orbit geometry model. The two surfaces agreed to tenths of a
hertz, but the yaw and pitch values required to get the best fit differ by 0.001° and the rates differ by 0.0001 °/s.
The state vectors showed that the radius of the orbit was decreasing by 8.7 m/s, or 130 m over the 15-second
scene. This gives an impression of the deviation of the circular orbit approximation from the actual orbit.
A fit was also made using the polynomial surface of Section 12.7.1. The fitted surface agrees with the
geometry model surface to better than 0.2 Hz. This illustrates that if the only objective is to get an accurate
Doppler surface, it does not matter what model is used.
12.8 Summary
The Doppler centroid is an important parameter for accurate SAR processing. In principle, it can be computed
from the satellite orbit and attitude information in the ephemeris data. However, the derived centroid is generally
not accurate enough for focusing the SAR data, because of limitations in the model and errors in the ephemeris
data. In this case, the estimate has to be refined using measurements made on the SAR data. In this chapter, a
number of methods to estimate the Doppler centroid from geometry and from received data are presented.
The Doppler centroid consists of two parts-the fractional PRF or baseband part, and the integer PRF part
or the Doppler ambiguity. A magnitude-based method to estimate the fractional PRF part is based on locating
the peak of the power spectrum by fitting a sine wave. A phase-based method utilizes the fact that the average
azimuth phase increment of the received signal is the same as the increment at the center of the radar beam.
The two methods are essentially the same. Both methods work well, although they can be biased by partially
exposed targets and low values of SNR.
A magnitude-based ambiguity estimation method works by compressing the SAR data using two azimuth
looks, and then measuring the range misregistration between the two looks. Phase-based methods have also been
developed, such as the wavelength diversity, MLCC, and MLBF algorithms. They all measure the Doppler centroid
as a function of the transmission frequency, but the measurement methods differ. The WDA method works in the
range frequency domain where the slope of the centroid versus the transmission frequency can be measured
directly. In contrast, the MLCC and MLBF algorithms utilize two range looks to emulate two different
transmission frequencies, and the difference between the centroids is measured.
The magnitude-based method works well for high contrast scenes. The phase-based methods seem to be more
sensitive, and can be used to handle both low contrast and high contrast scenes.
When estimating the Doppler centroid over a whole frame of data, it is helpful to take a global view to fit
a Doppler surface over the whole frame in one estimation procedure. Areas of the scene that bias the estimates
can be recognized using quality measures, and excluded from the estimation process. A geometry model can be
used to reduce the dimensionality of the parameter space, and force a realistic solution on the Doppler surface.
Important equations for Doppler centroid estimation are summarized in Table 12.3.
References
[1] F. K. Li, D. N. Held, J. Curlander, and C. Wu. Doppler Parameter Estimation for Spaceborne Synthetic
Aperture Radars. IEEE Trans. on Geoscience and Remote Sensing, 23 (1), pp. 47-56, January 1985.
[2] C. Y. Chang and J. C. Curlander. Doppler Centroid Ambiguity Estimation for Synthetic Aperture Radars.
In Proceedings of the International Geoscience and Remote Sensing Symposium, IGARSS'89, pp. 2567-2571,
Vancouver, BC, 1989.
[3] D. Esteban Fernandez, P. J. Meadows, B. Schaettler, and P. Mancini. ERS Attitude Errors and Its Impact
on the Processing of SAR Data. In CEOS SAR Workshop, Toulouse, France, October 26-29, 1999. ESA-
CNES. http://www.estec.esa.nl/ceos99/papers/p027.pdf.
[4] M. Dragosevic. On Accuracy of Attitude Estimation and Doppler Tracking. In CEOS SAR Workshop,
Toulouse, France, October 26-29, 1999. ESA-CNES. http://www.estec.esa.nl/ceos99/papers/p164.pdf.
[5] M. Dragosevic and B. Plache. Doppler Tracker for a Spaceborne ScanSAR System. IEEE Trans. on Aerospace
and Electronic Systems, 36 (3), pp. 907- 924, July 2000.
[6] J. B.-Y. Tsui. Fundamentals of Global Positioning System Receivers: A Software Approach. John Wiley &
Sons, New York, 2000.
[7] R. K. Raney. Doppler Properties of Radars in Circular Orbits. Int. J. of Remote Sensing, 7 (9), pp.
1153-1162, 1986.
[8] H. Fiedler, E. Boerner, J . Mittermayer, and G. Krieger. Total Zero Doppler Steering. In Proc. European
Conference on Synthetic Aperture Radar, EUSAR'04, pp. 481-484, Ulm, Germany, May 2004.
[9] C. Oliver and S. Quegan. Understanding Synthetic Aperture Radar Images. Artech House, Norwood, MA, 1998.
[10] R. Bamler. Doppler Frequency Estimation and the Cramer-Rao Bound. IEEE Trans. on Geoscience and
Remote Sensing, 29 (3), pp. 385-390, May 1991.
[11] I. G. Cumming. A Spatially Selective Approach to Doppler Estimation for Frame-Based Satellite SAR
Processing. IEEE Trans . on Geoscience and Remote Sensing, 42 (6), June 2004.
[12] M. Dragosevic. Attitude Estimation and Doppler Tracking. In CEOS SAR Workshop, (ESA), Ulm, Germany,
May 27-28, 2004.
[13] S. N. Madsen. Estimating the Doppler Centroid of SAR Data. IEEE Trans. on Aerospace and Electronic
Systems, 25 (2), March 1989.
[14] A. Papoulis. Probability, Random Variables and Stochastic Processes. McGraw-Hill, New York, 1984.
[15] I. G. Cumming, P. F. Kavanagh, and M. R. Ito. Resolving the Doppler Ambiguity for Spaceborne Synthetic
Aperture Radar. In Proceedings of the International Geoscience and Remote Sensing Symposium, IGARSS'86,
pp. 1639-1643, Zurich, Switzerland, September 8-11, 1986.
[16] J. Holzner and R. Bamler. Burst-Mode and ScanSAR Interferometry. IEEE Trans. on Geoscience and
Remote Sensing, 40 (9), pp. 1917-1934, September 2002.
[17] R. Bamler and H. Runge. PRF-Ambiguity Resolving by Wavelength Diversity. IEEE Trans. on Geoscience
and Remote Sensing, 29 (6), pp. 997-1003, November 1991.
[18] F. H. Wong and I. G. Cumming. A Combined SAR Doppler Centroid Estimation Scheme Based upon
Signal Phase. IEEE Trans. on Geoscience and Remote Sensing, 34 (3), pp. 696-707, May 1996.
[19] M. Y. Jin. PRF Ambiguity Determination for RADARSAT ScanSAR System. In Proc. Int. Geoscience and
Remote Sensing Symp., IGARSS'94, Vol. 4, pp. 1964-1966, Pasadena, CA, August 1994.
[20] C. Y. Chang and J. C. Curlander. Application of the Multiple PRF Technique to Resolve Doppler Centroid
Estimation Ambiguity for Spaceborne SAR. IEEE Trans. on Geoscience and Remote Sensing, 30 (5), pp.
941-949, September 1992.
[21] C. S. Purry, K. Dumper, G. C. Verwey, and S. R. Pennock. Resolving Doppler Ambiguity for ScanSAR
Data. In Proceedings of the International Geoscience and Remote Sensing Symposium, IGARSS'00, Vol. 5, pp.
2272-2274, Honolulu, HI, July 24-28, 2000.
[22] C. Cafforio, P. Guccione, and A. Monti Guarnieri. Doppler Centroid Estimation for ScanSAR Data. IEEE
Trans. on Geoscience and Remote Sensing, 42 (1), pp. 14-23, January 2004.
[23] I. G. Cumming. Model-Based Doppler Estimation for Frame-Based SAR Processing. In Proc. Int. Geoscience
and Remote Sensing Symp., IGARSS'01, Vol. 6, pp. 2645-2647, Sydney, Australia, July 2001.
[24] J. A. Nelder and R. Mead. A Simplex Method for Function Minimization. Computer Journal, 7, pp.
308-313, 1965. Available in MATLAB as the function FMINSEARCH.
[25] O. Loffeld. Estimating Time Varying Doppler Centroids With Kalman Filters. In Proc. Int. Geoscience and
Remote Sensing Symp., IGARSS'91, Vol. 2, pp. 1043-1046, Espoo, Finland, June 1991.
[26] I. M. Yaglom and A. Shields. Geometric Transformations. The Mathematical Association of America, 1962.
The flow chart of the main Doppler frequency calculations has been given in Figure 12.11. The details of each
step in the flow are described in this appendix. The frames of reference used in the mathematical development
are listed in Table 12A.1, where the ascending node is assumed to be at the Greenwich meridian. The frames are
sketched in Figure 12A.1, where all "views" are towards the Earth's center from the equator at the Greenwich
meridian.
Table 12A.l: Center and Orientation of Frames of Reference
Step 1: Rotate Beam by the Satellite Pitch and Yaw and Translate to the Earth Center (to
Frame El)
The geometric development begins in the satellite-centered frame of reference, referred to as Frame S0, in which x
points up, away from the Earth's center, z points "ahead" in the plane of the satellite orbit, perpendicular to x,
and y points to the right, completing the orthogonal, right-handed frame. For illustration purposes, a circular
orbit is assumed, and the satellite position and velocity state vectors are
S0 = [ 0   0   0 ]'        position        (12A.1)

V0 = [ 0   0   Vsat ]'        velocity        (12A.2)

where Vsat = √(µe/Rs) is the scalar value of the satellite velocity for an orbit of radius Rs, µe = 3.986 × 10^14 m³/s²
is the gravitational constant of the Earth, and [ . ]' denotes the transpose.
It is assumed that the radar antenna is attached to the satellite body in such a way that the azimuth
boresight lies in the x, y plane for all elevation angles, and that the specific pointing angle under consideration
is defined by the unit view vector

U0r = [ −cos(α)   sin(α)   0 ]'        view vector        (12A.3)

where α is the "nadir" angle between the local vertical and the beam direction, positive for right-pointing
antennas.
Now assume that the satellite is subject to an arbitrary yaw, φ, and pitch, ψ. The beam view vector must
be rotated using two transformations. First, the view vector is rotated clockwise around the positive y-axis by the
The second step in transforming to the ECI frame of reference is to rotate the ECOP frame counterclockwise
around the x-axis by the angle κ. This is achieved by the transformation

        [ 1      0         0     ]
T23 =   [ 0   cos(κ)   −sin(κ)   ]        (12A.14)
        [ 0   sin(κ)    cos(κ)   ]

to obtain

S3 = T23 S2        satellite position        (12A.15)

V3 = T23 V2        satellite velocity        (12A.16)

in Frame E3. This frame corresponds to a "conventional" view of the Earth, with z pointing north, y pointing
east, and x pointing from the Earth's center to the equator at the Greenwich meridian. The two vectors (S3, V3)
represent the satellite's state vector, assuming it is expressed in ECI coordinates.
The satellite attitude is often expressed in a frame of reference in which the local vertical is normal to the
Earth's ellipsoid (geodetic reference), rather than pointing to the Earth's center (geocentric reference). The
geodetic reference is not used in the current development, but if the satellite attitude is expressed in geodetic
coordinates, then the unit view vector can be compensated at this point in the transformations. The difference in
viewing angle definitions is illustrated in Figure 12A.2.
Figure 12A.2: The local vertical expressed in geodetic coordinates, compared with the "vertical" expressed in
geocentric coordinates, shown relative to the tangent to the surface and the center of the Earth.
If ϑsat_lat and ϑsat_long are the latitude and longitude of the satellite, this compensation can be achieved by
first rotating the ECI coordinates clockwise around the z-axis to the satellite's longitude using

        [  cos(ϑsat_long)   sin(ϑsat_long)   0 ]
Tz3 =   [ −sin(ϑsat_long)   cos(ϑsat_long)   0 ]        (12A.18)
        [        0                 0          1 ]

The beam is then tilted slightly towards the equator by the angle

φg = ϑlat_geodetic − ϑlat_geocentric        (12A.19)

Finally, the beam is rotated back around the z-axis to ECI coordinates using Tz3⁻¹. The net result is
view vector (12A.21)
Note that the geodetic compensation angle, φg, is zero at the equator and the poles, and has a maximum of
0.194° at ±45° latitude. The effect of the transformation (12A.21) is to rotate the view vector so that the satellite
pitch and yaw, which are originally specified with respect to the local horizontal, are correct in the ECI frame.
Figure 12A.3: Intersection of the beam view vector, R3 U3g, with the Earth's
surface.
The next step is to locate the target on the Earth's surface. Referring to Figure 12A.3, the satellite position is
represented by the vector S3 from (12A.15), and the beam range vector by R3 U3g, where R3 is the unknown
range to the target. The target position is given by the sum of these vectors

P3 = S3 + R3 U3g        target position        (12A.22)

and is found by solving for the intersection of the beam range vector with the Earth ellipsoid

P3(x)² / Ae²  +  P3(y)² / Ae²  +  P3(z)² / Be²  =  1        (12A.23)

where Ae = 6,378,137.0 m and Be = 6,356,752.3142 m are the equatorial and polar radii in meters of the
WGS-84 ellipsoid.
Because the ellipsoid is quadratic, the intersection is solved by finding the smallest root of the quadratic
equation

R3² + 2 F R3 + G = 0        (12A.24)

F = [ S3 ∘ U3g + E S3(z) U3g(z) ] / [ 1 + E U3g(z)² ]        (12A.25)

G = [ S3 ∘ S3 − Ae² + E S3(z)² ] / [ 1 + E U3g(z)² ]        (12A.26)

where ∘ is the dot product, and S3(z) is the z component of S3. The parameter, E, is defined for the WGS-84
ellipsoid as

E = e² / (1 − e²) = (Ae² − Be²) / Be² = 0.0067395        (12A.27)
Having found the range to the target, the location of the target in ECI coordinates, P3, can be found by
extrapolating by this distance along the view vector, U3g, starting from the satellite position, S3, as in Figure
12A.3. The extrapolation is done by (12A.22).
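Using the quadratic of (12A.24), the intersection computation is compact; a sketch with the WGS-84 constants, assuming `s3` and `u3g` are NumPy vectors in ECI coordinates:

```python
import numpy as np

AE = 6378137.0               # WGS-84 equatorial radius (m)
BE = 6356752.3142            # WGS-84 polar radius (m)
E = (AE**2 - BE**2) / BE**2  # ellipsoid parameter, approx. 0.0067395

def target_on_ellipsoid(s3, u3g):
    """Intersect the beam view vector with the WGS-84 ellipsoid.

    Substituting the target position (satellite position s3 plus R3 times
    the unit view vector u3g) into the ellipsoid equation gives a
    quadratic R3^2 + 2*F*R3 + G = 0; the smallest root is the near-side
    intersection. Returns (R3, target position), both in ECI coordinates.
    """
    denom = 1.0 + E * u3g[2]**2
    f = (np.dot(s3, u3g) + E * s3[2] * u3g[2]) / denom
    g = (np.dot(s3, s3) - AE**2 + E * s3[2]**2) / denom
    r3 = -f - np.sqrt(f*f - g)   # smallest root of the quadratic
    return r3, s3 + r3 * u3g
```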
The target's velocity must now be found in the ECI coordinates. The target is assumed to be stationary with
respect to the Earth's surface, but if it is not stationary, a suitable component can be included in (12A.30). The
magnitude of the velocity is a function of the target's latitude, and the direction of the velocity is a function of
the target's longitude. The target rotates with the Earth around the polar axis, with a radius

(12A.29)

when it is at longitude zero, that is, at y = 0. The variable ωe = 7.2921 × 10⁻⁵ rad/s is the Earth's rotation rate in
an inertial reference frame. To get the target velocity into the ECI coordinates of the satellite, its velocity vector
must be rotated about the polar axis, z, by the target's ECI east longitude
(12A.31)
        [ cos(ϑtar_long)   −sin(ϑtar_long)   0 ]
Tz  =   [ sin(ϑtar_long)    cos(ϑtar_long)   0 ]        (12A.32)
        [       0                  0          1 ]
To calculate the target's Doppler frequency, the relative velocity of the satellite with respect to the target must
be found. This is done by projecting each of the velocities along the radar view vector, and subtracting them.
This projection can be done in either the ECI or ECR frames. In the ECI frame, the relative velocity is
obtained from

Vrel = V3 ∘ U3g − Q3 ∘ U3g = (V3 − Q3) ∘ U3g        (12A.34)

The Doppler frequency of the target in the center of the beam (the Doppler centroid) is then

fηc = −2 Vrel / λ        (12A.35)
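The projection in (12A.34) and the scaling of (12A.35) reduce to a single dot product; a sketch, with the vector names assumed to be NumPy arrays:

```python
import numpy as np

def doppler_centroid(v3, q3, u3g, wavelength):
    """Relative velocity along the view vector and the resulting Doppler
    centroid: V_rel = (V3 - Q3) . U3g, then f_dc = -2 V_rel / lambda."""
    v_rel = np.dot(v3 - q3, u3g)
    return -2.0 * v_rel / wavelength
```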
In the WDA and MLCC Doppler ambiguity estimation algorithms, an offset frequency has to be accounted for in
estimating the absolute Doppler centroid, but not in the MLBF. The offset frequency originates from the fact that
the antenna's boresight angle varies slightly with the transmission frequencies that are present in the radar pulse.
This means that the line joining the two dots in Figure 12.25 is not straight and/or the projection of the line
does not pass through (0,0) .
The purpose of this appendix is to derive the offset frequency in the MLCC algorithm caused by this
variation of boresight angle, and explain why this does not affect the MLBF. The WDA is not included in the
derivation, since it is affected by the boresight angle variation in the same way as the MLCC algorithm.
To illustrate the concept, consider the case in which the boresight angle varies linearly with transmitted
frequency. In this case, the beam center crossing time, ηc, has an additional component, αt fτ, where fτ is the
range frequency and αt is a constant in units of seconds per hertz.
In the MLCC and MLBF algorithms, two range looks are used. The range compressed signals of the two
looks from a point target in (12.28) and (12.29) are now

sv,1(η) = wa[ η − (ηc − αt Δfτ/2) ] exp{ −j (4π/c) (f0 − Δfτ/2) R(η) }        (12B.1)

sv,2(η) = wa[ η − (ηc + αt Δfτ/2) ] exp{ −j (4π/c) (f0 + Δfτ/2) R(η) }        (12B.2)

The use of the subscript v in a variable means it is different from that presented in Sections 12.5.4 and 12.5.5,
and without this subscript it is the same.
In each equation, the added component from the boresight variation lies in the azimuth envelope, wa. The
term −αt Δfτ/2 is the shift in the azimuth antenna boresight for Look 1 expressed in terms of azimuth time,
and similarly +αt Δfτ/2 is the shift for Look 2.
Following the development in Section 12.5.4, (12.30) and (12.31) now become

φv,L1 = −2π Ka1,dop (ηc − αt Δfτ/2) / Fa        (12B.3)

φv,L2 = −2π Ka2,dop (ηc + αt Δfτ/2) / Fa        (12B.4)

where Ka1,dop and Ka2,dop are given by (12.32) and (12.33), respectively. Then, the difference between the two
angles, Δφv, is:

Δφv = φv,L2 − φv,L1
    = −2π (Ka2,dop − Ka1,dop) ηc / Fa − π (Ka1,dop + Ka2,dop) αt Δfτ / Fa        (12B.5)

Following the development leading to (12.37), Δφv can be written as

Δφv ≈ −2π Δfτ Ka,dop ηc / (f0 Fa) − 2π Ka,dop αt Δfτ / Fa
    = +2π Δfτ fηc / (f0 Fa) − 2π Ka,dop αt Δfτ / Fa
    = +[ 2π Δfτ / (f0 Fa) ] (fηc − αt f0 Ka,dop)        (12B.6)
The last term of (12B.6) represents a bias in the ACCC-based estimate, which can be expressed as a
frequency offset

(12B.8)
Finally, the fractional PRF part of the centroid, f'ηc, has to be determined in order to obtain the PRF
ambiguity. In Section 12.5.4, this is obtained by averaging the two ACCC angles, as shown in (12.39).
It remains to see if the offset frequency would affect the averaged result and hence the accuracy of f'ηc. By
averaging the two ACCC angles given in (12B.3) and (12B.4), as in (12.39), the fractional PRF part, f'v,ηc, can be
obtained by:

f'v,ηc        (12B.9)
where (12.30), (12.31), (12.34), and (12.39) have been used in the last step.
The last term in (12B.9) represents the difference between f'v,ηc and the true fractional PRF part f'ηc. Combining
(12B.7) and (12B.9), the difference can be expressed in terms of the offset frequency

f'v,ηc − f'ηc = − fos Δfτ / (2 f0)        (12B.10)
It is found from experiments that fos is at most several thousand hertz for ERS-1, ERS-2, J-ERS, and
RADARSAT-1. Since f0 is several orders of magnitude larger than Δfτ, the difference is much less than 1 Hz
and is therefore negligible. The final result is then

f'v,ηc ≈ f'ηc        (12B.11)

In other words, the offset frequency would not affect the accuracy in estimating the fractional PRF part.
Although the derivation presented above is based upon a linear model, ηc = αt fτ, the same reasoning can be
applied to a nonlinear model, where an offset frequency is also expected. In practice, the variation can be quite
nonlinear, as seen from measurements on the ERS-1 antenna shown in Figure 12B.1. The dashed line in this
figure is obtained from ground measurements of the ERS-1 antenna made before launch. The boresight angle of
the antenna was measured at five frequencies within the radar band.
The two other curves were obtained using the MLCC algorithm on received ERS-1 data. Look bandwidths of
3.56 MHz were used. First, the position of Look 1 was fixed at the low frequency end of the range spectrum
and the position of Look 2 was varied along the spectrum axis. The offset frequency was measured, and
converted to an equivalent beam pointing angle. Second, the position of Look 2 was fixed at the upper end of
the range spectrum, and the position of Look 1 was varied. The two experiments gave similar results, and the
agreement with the prelaunch measurements indicate that Doppler centroid estimators can make precise
measurements of in-orbit antenna properties.
Finally, the offset frequency must be calibrated using measurements on operational data. The absolute
Doppler centroid must be determined from another method that is not affected by the variation of the boresight
angle due to transmission frequency, and hence is independent of the offset frequency. Examples of Doppler
estimation methods that are independent of the offset frequency are the magnitude approach discussed in Section
12.5.1 and the MLBF method discussed in Section 12.5.5. The Doppler centroid of the same scene is then
obtained using the WDA or the MLCC algorithm. The difference between the offset-independent estimate and the
offset-dependent estimate gives the calibrated offset frequency.
Because the MLCC and WDA methods weight the data along the range frequency axis differently, the value
of offset frequency is different for the two algorithms. This is especially true when the pointing angle is a
nonlinear function of transmission frequency. The calibration should be obtained for many scenes, and then an
average is taken to improve the calibration accuracy. In the case of the many different radar beams and modes in
satellites such as RADARSAT and ENVISAT, the calibration can be quite time-consuming.
[Figure 12B.1: Beam pointing angle versus transmission frequency for the ERS-1 antenna. The dashed line shows the prelaunch ground measurement; the other curves show MLCC-based measurements made from received data.]
The MLBF algorithm begins by beating the two range looks given in (12B.1) and (12B.2). The resultant beat signal, s_v,b(η), for a point target has the azimuth envelope

  w'_a(η − η_c) = w_a[ η − (η_c − α_t Δf_r / 2) ]  w_a[ η − (η_c + α_t Δf_r / 2) ]   (12B.13)

The displacement, α_t Δf_r, is so small that the following approximation can be made:

  w'_a(η − η_c) ≈ | w_a(η − η_c) |²   (12B.14)

which is the same as the envelope in (12.40). Hence the MLBF algorithm is not biased by an offset frequency.
Chapter 13

Azimuth FM Rate Estimation

13.1 Introduction
In addition to the Doppler centroid frequency, another Doppler parameter that is very important to SAR
processing is the azimuth FM rate, Ka. It governs the phase of the azimuth matched filter, and has a
fundamental effect on the focus of the processed image. The effective radar velocity, Vr, must be estimated in
order to compute the azimuth FM rate. The accuracy requirements of the azimuth FM rate are discussed in
Section 13.2.
Unlike the Doppler centroid, the effective radar velocity can often be estimated to sufficient accuracy for
SAR processing from the orbit data. This is covered in Section 13.3 for the satellite case. For the airborne case,
the velocity simply comes from the aircraft navigation system, and generally does not need further refining.
If the orbit or navigation data are not accurate enough, the estimate can be refined using measurements
made from the received SAR data. This is referred to as "autofocus" and is discussed in Section 13.4. Again,
autofocus methods generally fall into two categories - magnitude-based approaches and phase-based approaches.
The magnitude-based methods include contrast maximization (Section 13.4.1) and azimuth look misregistration
(Section 13.4.2). Two phase-based methods are briefly discussed in Section 13.4.3. A short summary in Section 13.5
completes the chapter.
13.2 Accuracy Requirements of the Azimuth FM Rate

The effective radar velocity in the hyperbolic range equation (4.9) is a critical parameter for the focusing of SAR data. Errors in the velocity are the main source of azimuth FM rate errors, which introduce phase errors into the azimuth matched filter. Velocity errors also have a small effect on the amount of range cell migration correction.
The following equations are used to model the phase error. The matched filter phase follows the hyperbolic range equation,

  φ_mf(η) = (4π / λ) R(η)   (13.1)

and is dominated by its quadratic component,

  φ_mf(η) ≈ π K_a η²,  with  K_a ≈ 2 Vr² cos²θ_r,c / (λ R(η_c))   (13.2)

The phase error, Δφ, in the matched filter caused by a velocity error, ΔVr, is dominated by the quadratic component

  Δφ(η) ≈ π ΔK_a η²,  where  ΔK_a ≈ 2 K_a ΔVr / Vr,  since K_a is proportional to Vr²   (13.3)
To measure the effect of the error, the quadratic phase error (QPE) discussed in Section 3.5 is used. Following (3.50), the QPE is the quadratic component of the phase error at the end of the processed aperture, assuming the error is zero at the middle. It is given by

  QPE = π ΔK_a (T_a / 2)² = [ 4π Vr cos²θ_r,c / (λ R(η_c)) ] ΔVr (T_a / 2)²   (13.4)

where T_a is the exposure time given by (4.37), and ΔK_a is the azimuth FM rate error caused by ΔVr. The QPE introduces an azimuth IRW broadening, and should be kept to within π/2 for a broadening of less than 8%, as shown in Figure 3.14(a).
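To make the size of these quantities concrete, the following sketch evaluates (13.4) for an illustrative set of C-band satellite parameters. The numbers below are assumptions chosen for illustration, not the entries of Table 4.1.

```python
import math

def qpe(v_r, d_vr, wavelength, slant_range, theta_rc, t_a):
    """Quadratic phase error (13.4) at the edge of the processed aperture,
    in radians, caused by an effective radar velocity error d_vr."""
    return (4.0 * math.pi * v_r * math.cos(theta_rc) ** 2
            / (wavelength * slant_range)) * d_vr * (t_a / 2.0) ** 2

# Illustrative C-band satellite values (assumed, not from Table 4.1):
v_r = 7100.0         # effective radar velocity (m/s)
wavelength = 0.0566  # C-band wavelength (m)
slant_range = 850e3  # slant range R(eta_c) (m)
theta_rc = 0.0       # squint angle (rad), zero for simplicity
t_a = 0.6            # exposure time (s)

err = qpe(v_r, 0.005 * v_r, wavelength, slant_range, theta_rc, t_a)
print(err / math.pi)  # QPE in units of pi; well above the 0.5 pi limit
```

With a 0.5% velocity error, the QPE here is well above 0.5π, consistent with the observation that such errors cause noticeable IRW broadening.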
In addition, ΔVr introduces an RCMC error in which the linear term dominates. From (5.58), the residual linear RCM over the exposure time, T_a, is given by (13.5), but its effect on image focus is much smaller than the effect of the phase error.
To illustrate the size of these effects, consider the parameters for the aircraft and satellite cases shown in Table 4.1, and assume that ΔVr is 0.5% of Vr (ΔKa is 1% of Ka). The resulting QPEs for the X-, C-, and L-band cases are shown in Table 13.1. It is observed that the QPEs for all cases are more than 0.5π, and hence would introduce more than 8% IRW broadening. This result can also be observed from the 1/TBP values. As shown in Section 3.5.3, ΔKa/Ka should be less than 2/TBP, which is violated in all of the cases here.
The range resolution, derived from the signal bandwidth in Table 4.1, is 1.5 m in the aircraft case and 7.5 m
in the satellite case. Hence, the residual linear RCM for all cases, except the aircraft L-band, is less than one
resolution element. The results in Table 13.1 show that the QPE is the limiting factor in these examples.
From another point of view, the QPE limit can be specified, and the required accuracy of Vr can be found for the different cases. For example, if the QPE limit is 0.5π, the required accuracies of Vr are shown in Table 13.2. In this table, ΔKa/Ka is the same as 2/TBP (see Table 13.1), the requirement for less than 8% IRW broadening.
Discussion
The accuracy guidelines in Tables 13.1 and 13.2 are given for nominal sensor parameters. Especially for those
sensors whose accessible swath varies over a wide range, the accuracy values can vary considerably, and should be
checked for the specific imaging conditions used. The velocity accuracy requirement is most stringent for L-band
data, since the exposure time is relatively long. Note that some applications require more accurate focusing than that provided by a QPE of 0.5π rad.
For airborne cases, an azimuth FM rate error is not caused by an aircraft velocity error alone, but also by radial acceleration. Reference [1] gives a treatment of the subject, and the associated geometric registration
problems.
13.3 Estimating Vr from Orbit Data

The effective radar velocity, Vr, is the main parameter from which the azimuth FM rate is computed. For airborne systems, it is simply the aircraft velocity. However, for satellite systems, it is different from the satellite velocity, Vs, and the difference is enough to cause serious image degradations (recall Figure 4.6). The computation of Vr for satellite systems can be quite complicated due to the Earth's curvature and rotation.
This section presents a simple method to compute the effective radar velocity in the hyperbolic slant range equation for the satellite case, assuming that the satellite state vectors are known. The algorithm first uses the state vectors to generate the instantaneous slant range as a function of azimuth time over the exposure time. This is followed by fitting the set of slant ranges with a hyperbolic model, which is used in most SAR processing algorithms. The hyperbolic range equation of (4.9) is

  R²(η) = R_0² + Vr² η²   (13.6)

where R_0 is the slant range when the radar is closest to the target, and η is the azimuth time relative to the zero Doppler time (the time of closest approach).

Table 13.2: Azimuth FM Rate and Vr Accuracies for a QPE of 0.5π

An accurate model of the range equation can be obtained with
detailed mathematical manipulations [2], but ultimately the hyperbolic approximation (13.6) is adopted in most
precision SAR processing algorithms.
The inputs to the hyperbolic fitting method are the orbit state vectors and satellite attitude information spaced
at suitable intervals over the duration of the scene (see Section 12.1.2). The satellite positions, but not the
velocities, are used in this method.
Starting with a target at the scene center, several evenly spaced orbit times are selected, which roughly span the time that the target is exposed by the radar beam. For example, let five orbit times, η_1 to η_5, be chosen, as shown in Figure 13.1. The middle time, η_3, is approximately the time that the beam centerline crosses the scene center target. The satellite position must be found at these five times from the state vectors. If the state vectors are given with short time intervals, such as a few seconds, a polynomial fit can be used to extract the state vectors for the selected time points. If the state vector intervals are too long to use a polynomial fit, an orbit propagator should be used to find the precise satellite positions.
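The polynomial interpolation of state vector positions can be sketched as follows. The state vectors below are a toy orbit model invented for illustration, not real ephemeris data.

```python
import numpy as np

# Hypothetical state vectors: satellite positions (m) given every 4 s.
t_sv = np.arange(0.0, 33.0, 4.0)
pos_sv = np.stack([7.1e3 * t_sv,              # along-track motion
                   7.17e6 - 4.0 * t_sv ** 2,  # slowly varying radial component
                   1.0e3 * t_sv], axis=1)     # cross-track motion

def interp_position(t_sv, pos_sv, t, order=3):
    """Fit a low order polynomial per coordinate to the state vector
    positions and evaluate it at the requested times."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.empty((t.size, 3))
    for k in range(3):
        coeff = np.polyfit(t_sv, pos_sv[:, k], order)
        out[:, k] = np.polyval(coeff, t)
    return out

# Positions at five evenly spaced times spanning the target exposure:
eta = np.linspace(12.0, 20.0, 5)
sat_pos = interp_position(t_sv, pos_sv, eta)
print(sat_pos.shape)  # → (5, 3)
```

For state vectors spaced a few seconds apart, a third order fit of this kind reproduces the positions essentially exactly; for longer spacings an orbit propagator is needed, as noted above.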
Using the satellite positions at the selected time points, the effective radar velocity can then be obtained by
the following method.
1. Specify the satellite attitude at the central, beam center time. If the attitude measurements are not available,
the attitude can be set either to zero or to a nominal value.
2. Using a circular orbit model with approximately the same radius as the state vector at the central time, an
orbit time corresponding to the state vector position at the central time, and a beam nadir angle that gives
the desired slant range, compute the beam view vector in ECR coordinates. This can be done using the
method in Appendix 12A.2
[Figure 13.1: The satellite orbit, the Earth's surface, and the five selected orbit times used to find the ranges from the satellite to the scene center target.]
3. Using the central state vector position, find the target location by solving for the intersection of the view vector with the Earth's surface, using the method in Appendix 12A.3. Find the range from the satellite to the target.
4. Using the satellite position at the four other state vector times, find the ranges, R(η_i), to the same target, as in Figure 13.1.
5. Use the interpolated state vectors to find the time and satellite position when the range to the target is a minimum. This gives the time of zero Doppler, η_0, and the range of closest approach, R_0, corresponding to the target.
6. Using the time of zero Doppler and the range of closest approach from the previous step as the origin of a hyperbola (13.7), find the coefficient, Vr², that gives the best fit of the hyperbolic model to the five range points:

  R²(η) = R_0² + Vr² (η − η_0)²   (13.7)

This gives an estimate of the effective radar velocity, Vr, at the specified range.
7. Repeat the steps above for several other ranges (changing the off-nadir angle in Step 2), to obtain Vr for a set of ranges covering the imaged swath.
8. Fit a low order polynomial through the Vr values, so that Vr is available as a function of slant range, as in Figure 13.2.
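The hyperbolic fit in Steps 5 and 6 can be sketched as a parabola fit: since R²(η) = R_0² + Vr²(η − η_0)² is quadratic in η, the curvature coefficient of a parabola fitted to R² gives Vr² directly, and the vertex gives η_0 and R_0. The numbers below are synthetic test values, not taken from the book.

```python
import numpy as np

def fit_hyperbola(eta, r):
    """Fit R^2(eta) = R0^2 + Vr^2 (eta - eta0)^2 to range samples.
    R^2 is a parabola in eta; its curvature coefficient is Vr^2."""
    c2, c1, c0 = np.polyfit(eta, np.asarray(r) ** 2, 2)
    vr = np.sqrt(c2)                         # effective radar velocity
    eta0 = -c1 / (2.0 * c2)                  # zero Doppler time
    r0 = np.sqrt(c0 - c1 ** 2 / (4.0 * c2))  # range of closest approach
    return vr, eta0, r0

# Synthetic check: five ranges generated from a known hyperbola.
vr_true, eta0_true, r0_true = 7100.0, 0.12, 850e3
eta = np.linspace(-0.4, 0.4, 5)
r = np.sqrt(r0_true ** 2 + vr_true ** 2 * (eta - eta0_true) ** 2)
vr, eta0, r0 = fit_hyperbola(eta, r)
print(round(vr, 3), round(eta0, 3), round(r0, 3))
```

In this noise-free check the fit recovers the generating parameters; with real state vectors, the residual error comes from the ephemeris and the orbit propagator, as discussed below.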
The method is simple and easy to implement. It takes into account the general case of nonzero Doppler
pointing, an ellipsoidal Earth, and an elliptical orbit, with orbit variations such as acceleration in any direction.
Complicated geometric formulations, especially when the satellite altitude is changing, can be avoided. The source
of estimation error originates from the orbit state vectors and/or the orbit propagator. This error can be removed
using an autofocus algorithm, as discussed in Section 13.4.
Note that the method is not intended to find the Doppler centroid accurately, as the beam center crossing
time is only provided approximately. This is allowed, since v;. is only weakly dependent on the squint angle.
Using this approach, the effective radar velocity, v;., can be calculated for nominal satellite cases. The velocity is
plotted as a function of range in Figure 13.2, for a simple circular orbit with approximately the orbit parameters
of the Vancouver scene. The orbit radius is 7167 km, corresponding to a nominal satellite altitude of 800 km. The orbit inclination is 98.6°. The orbit period is 100.63 minutes.
The beam nadir angle is varied from 18° to 42° in steps of 2°, to give the values of slant range indicated in
the figure. The satellite is just over one-eighth of the way around the orbit from the ascending node (i.e., it is at
48.6° N geocentric latitude), as it is for the Vancouver scene. A third order polynomial is fitted to the 13 calculated values. The RMS deviation between the 13 values and the polynomial fit is 0.07 m/s, which is negligible.
[Figure 13.2: Effective radar velocity, Vr, versus slant range, at a satellite hour angle of 48.6°, with the third order polynomial fit; the sqrt(Vs Vg) approximation is shown as a dashed line.]
It is shown in Appendix 4A that an approximation for Vr is sqrt(Vs Vg), where Vs is the satellite orbit velocity in the Earth centered rotating coordinate system, and Vg is the beam footprint velocity. This approximation is plotted as a dashed line in Figure 13.2. It is low by about 43 m/s, or 0.6%, in this case (the accuracy varies around the orbit). This is accurate enough for simple geometry models, but not accurate enough for precision SAR image focusing.
The effective radar velocity for the same circular orbit is given in Figure 13.3, to illustrate typical variations
that can be expected around the orbit. Note that orbits are not circular in general, so the actual state vectors
should be used to calculate Vr. Three beam nadir angles are considered, 18°, 30°, and 42°, approximately
corresponding to slant ranges of 846, 944, and 1138 km. The asymmetry of the curves between the northerly and
southerly parts of the orbit is due to the right-pointing direction of the radar antenna.
[Figure 13.3: Effective radar velocity, Vr, versus angle around the orbit (0° to 360°), for near range (18°), mid-range (30°), and far range (42°) beam nadir angles, with the Vancouver result marked. The velocity varies between about 7000 and 7120 m/s, and the curves are asymmetric between the northerly and southerly parts of the orbit.]
13.4 Autofocus

After the effective radar velocity and the azimuth FM rate are calculated from geometry, a more accurate estimate sometimes has to be obtained from the received data, in order to obtain the sharpest image focus. A number of ways of refining these estimates are presented in this section. These automatic focusing methods are generically referred to as "autofocus" methods. For analytical purposes, only the linear FM component of the azimuth matched filter phase (13.1) is considered, since the quality of focusing depends mainly on this component.
Autofocus can be classified into two categories: parametric and nonparametric. In the former, the phase correction is assumed to be a polynomial of azimuth time, η, and the order of the polynomial is assumed to be known. For example, the order is generally two (quadratic phase or linear FM) for the satellite case. For nonparametric autofocus, the phase correction is assumed to vary in a general way with η, as opposed to following a polynomial.
Similar to Doppler centroid estimation, autofocus can also be classified into magnitude-based and phase-based.
Two commonly used magnitude-based autofocus algorithms for stripmap data are described in the next two
sections. Two phase-based algorithms developed for spotlight SAR are also highlighted, since they can be modified
for stripmap data.
13.4.1 Contrast Maximization

A matched filter has the effect of redistributing the received energy in the output array. The energy of a target, which has the expected quadratic phase history, gets compacted into a few cells, while the energy from noise gets rearranged in a random fashion. Note that the energy from the target and the noise within the filter bandwidth are approximately conserved, so the integrated target-to-noise power ratio does not change appreciably if the oversampling ratio is small. If the oversampling is large, the SNR is improved, as happens in the case of presumming of airborne radar data.
If the focusing is good, most of the target energy is compressed into one cell. If the focus is not good, the target energy is spread into more cells. This suggests a way of focusing the matched filter by measuring the power-to-mean ratio of the target, which is a type of contrast measurement. If the focusing is good, the contrast will be a maximum. When the focusing deteriorates, the contrast goes down. A suitable quantitative measure for the energy distribution or image contrast is

  C = E{ I² } / [ E{ I } ]²   (13.8)

where I is the pixel intensity and E{·} denotes the spatial average.
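A minimal sketch of a contrast measure of this form follows; taking I as the pixel intensity (magnitude squared) is an assumption about the exact definition.

```python
import numpy as np

def contrast(image):
    """Contrast measure of the form (13.8): C = E{I^2} / [E{I}]^2,
    with I taken here as the pixel intensity (magnitude squared)."""
    i = np.abs(np.asarray(image)) ** 2
    return np.mean(i ** 2) / np.mean(i) ** 2

# A well focused target (energy in one cell) versus a defocused one
# (the same energy spread over eight cells):
focused = np.zeros(64)
focused[32] = 8.0
defocused = np.zeros(64)
defocused[28:36] = np.sqrt(8.0)
print(contrast(focused), contrast(defocused))  # → 64.0 8.0
```

Spreading the same energy over more cells lowers C, which is exactly the behavior the autofocus search exploits.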
13.4.2 Azimuth Look Misregistration (Map Drift)

As explained in Section 6.5.5, an azimuth FM rate error, ΔKa, causes an azimuth misregistration between two azimuth looks. This misregistration can be measured in processed data, and can be used to correct the focus of the image. This autofocus method may not depend as much on the presence of strong scatterers in the scene, and many processors have used this method successfully [3, 4].
[Figure 13.4: Simulation of the contrast measure: (a) compressed target with no FM rate error; (b) compressed target with a 0.5% FM rate error, both versus time in samples; (c) contrast measurement versus percentage FM rate error, from −0.5% to +0.5%.]
The method has been referred to as the "map drift" algorithm, because the misregistered looks drift away from their correct map position. Equation (6.46) shows the relationship between the misregistration, Δη, and the error, ΔKa, in the linear FM rate.

Let Ka be the actual FM rate of the data, and Ka,mf be the value used in the azimuth matched filter. Let Δfa be the frequency separation between the centers of the two looks, as shown in Figure 13.5(a). Denoting Look 1 as the look at the lower frequency end of the spectrum and Look 2 the higher end (after spectrum unwrapping, as in Figure 6.28), the misregistration in time of Look 2 with respect to Look 1 is

  Δη = − ΔKa Δfa / ( Ka Ka,mf )   (13.9)

Since Ka,mf = Ka + ΔKa, the error in the azimuth FM rate can be estimated from the misregistration by

  ΔKa ≈ − ( Ka,mf² / Δfa ) Δη   (13.10)
[Figure 13.5: (a) The azimuth spectrum split into Look 1 and Look 2, with their centers separated by Δfa, versus frequency in cells; (b) the compressed looks, showing their misregistration in azimuth time (samples).]
Simulation Example
A simulation is performed with random data to illustrate the correlation method. The ground reflectivity model of Section 12.4.1 is used, which is representative of a high contrast scene. A single range cell with 4096 azimuth samples and an azimuth FM rate error of −1% are simulated. The correlation result is shown in Figure 13.6. The vertical lines show the expected correlation lag with various FM rate errors. The correlation peak is close to the lag expected with an FM rate error of −1%. In practice, a better answer is obtained when the correlation is
averaged over many range cells, as done with RADARSAT data in Figure 13.7.
In principle, the method does not require an iterative approach, because the measured shift gives the sign and magnitude of the FM rate error directly via (13.10). However, the correlation function can be quite noisy for some scenes, depending on the scene content and the averaging area, and the peak is less discernible when the FM rate error is large. An iterative scheme improves the reliability of the method.
The method is parametric, since it is used to estimate the quadratic component of the phase error only. The
method can be extended to detect higher order phase errors by using multiple looks. With this extension, it is
called "multiple aperture map drift," with each look corresponding to a subaperture [5]. Then, the map drift
algorithm is performed on all permutations of pairs of looks. In general, N looks can be used to measure a
phase error of order N. However, the look length decreases as the number of looks increases; thus, the sensitivity
decreases due to a broader IRW in each look.
The map drift method of autofocus is quite similar to the magnitude-based DAR presented in Section 12.5.1.
In the DAR, two detected azimuth looks are correlated in the range direction, whereas in the map drift
algorithm, the two looks are correlated in the azimuth direction. The same two looks can be used for both
purposes. A two-dimensional cross correlation in range and azimuth can be performed; the displacement in range
is used to estimate the Doppler ambiguity, and the displacement in azimuth is used to estimate the FM rate
error.
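The look misregistration principle can be sketched with a simulated point target. The chirp sign convention, the sampling parameters, and the approximate inversion ΔKa ≈ −Ka,mf² Δη / Δfa used below are illustrative assumptions, not the book's exact implementation.

```python
import numpy as np

fs = 1024.0                      # azimuth sampling rate (PRF), Hz
n = 4096
eta = (np.arange(n) - n / 2) / fs
ka_true = 600.0                  # true azimuth FM rate, Hz/s
ka_mf = 1.01 * ka_true           # matched filter rate with a +1% error
T = 1.2                          # target exposure time, s
bw = ka_true * T                 # azimuth signal bandwidth, Hz
s = (np.abs(eta) <= T / 2) * np.exp(-1j * np.pi * ka_true * eta ** 2)

# Compress each half-band look with the (slightly wrong) matched filter.
f = np.fft.fftfreq(n, 1.0 / fs)
S = np.fft.fft(s)
H = np.exp(-1j * np.pi * f ** 2 / ka_mf)  # conjugate of the reference spectrum
l1 = np.abs(np.fft.ifft(np.where((f > -bw / 2) & (f < 0), S * H, 0))) ** 2
l2 = np.abs(np.fft.ifft(np.where((f >= 0) & (f < bw / 2), S * H, 0))) ** 2

# Cross correlate the detected looks and locate the peak lag.
xc = np.fft.ifft(np.fft.fft(l2) * np.conj(np.fft.fft(l1))).real
lag = int(np.argmax(xc))
lag = lag - n if lag > n // 2 else lag
d_eta = lag / fs                 # misregistration of Look 2 w.r.t. Look 1

# Convert the misregistration to an FM rate error estimate.
d_fa = bw / 2                    # separation of the look center frequencies
dka_est = -ka_mf ** 2 * d_eta / d_fa
print(dka_est / ka_true)         # close to +0.01, the simulated 1% error
```

With lag quantized to whole samples the recovered error is within a fraction of the true 1%; averaging over many range cells, as described above, refines the lag estimate.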
The map drift method is magnitude-based, since the look correlation is determined from the image pixel
intensities. Other methods based on the complex image data are described in Section 13.4.3.
A RADARSAT Example
The Vancouver RADARSAT scene of Section 12.4 is used to test the contrast and map drift autofocus
algorithms. Table 13.2 shows that for C-band satellites, the accuracy requirement for Ka should be approximately
0.23%. Autofocus is sometimes needed in practice, since it is not uncommon to have an initial estimate of
azimuth FM rate found from geometry that has an error of up to 0.5%.
The autofocus results for one block of the Vancouver scene are shown in Figure 13.7. The range compressed
data is taken from Column 10, Rows 1-2 of Figure 12.15, using 512 range cells and 2048 azimuth samples. The
area has medium contrast, consisting mainly of suburban houses, farmland, parks, and trees along the
Canadian-U.S. border. The matched filter FM rate is changed in increments of approximately 2 Hz/s, over a
range of values between - 1% and +1% of the nominal value.
[Figure 13.6: Cross correlation of the two looks for the simulated data (SNR = −3.2 dB), versus correlation offset (lag in samples). The vertical lines mark the lags expected for azimuth FM rate errors of −2%, −1%, +1%, and +2%.]
Figure 13.7(a) shows the correlation function for five cases of matched filter FM rate from 1716 to 1750 Hz/s.
Usually the correlation peak is well defined, and the look shift corresponding to the maximum correlation can be
found to approximately 0.2 of a sample (the azimuth sample spacing is the same as in the original data).
Sometimes the peak is not well defined, so a quality measure, such as the width and height of the peak, can be
used to qualify the result. The peak position is estimated using a local parabolic fit, and is shown as a vertical
dashed line in the figure.
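The local parabolic fit of the peak position can be sketched with a standard three point vertex formula (a common interpolator, not necessarily the book's exact fit):

```python
def parabolic_peak(y_m1, y_0, y_p1):
    """Vertex offset, in samples relative to the center sample, of the
    parabola through three equally spaced values around a maximum."""
    denom = y_m1 - 2.0 * y_0 + y_p1
    if denom == 0.0:
        return 0.0
    return 0.5 * (y_m1 - y_p1) / denom

# Three samples of a parabola whose true peak is +0.2 samples off center:
f = lambda x: 1.0 - (x - 0.2) ** 2
offset = parabolic_peak(f(-1.0), f(0.0), f(1.0))
print(offset)  # ≈ 0.2
```

This kind of fit is what allows the look shift to be measured to a fraction of a sample, as quoted above.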
After making a number of correlation measurements, the look displacement can be plotted against FM rate, as in Figure 13.7(b). A straight line is fitted to the measurements, and the zero crossing gives the FM rate estimate. The theoretical slope of the line is also plotted, using a dashed line. The agreement between the slope of the straight line fit and the theoretical slope is another useful quality measure.
If the slope error is small, the FM rate can be estimated to better than 0.1%. In this case, the estimate is
1733.6 Hz/s. If the correlation peak is very well defined, the straight line fit can be avoided, as the FM rate
error can be found from one correlation using (13.10).
Figure 13.7(c) shows the results of the contrast optimization method for FM rate errors varying between −1% and +1% of the nominal value. The contrast (13.8) is plotted for various values of Ka in the matched filter, and a parabola is fitted to estimate the azimuth FM rate corresponding to the maximum contrast. The contrast peak is relatively broad, but the peak can be measured accurately with a parabolic fit. The estimated FM rate is 1733.0 Hz/s in this case. If the peak is well defined, the estimation error is usually less than 0.1%.
Discussion
The results of Figure 13.7 are quite good because of the medium-to-high contrast in this particular scene. For a
lower contrast scene, the amount of averaging has to be increased to achieve the same accuracy. More averaging
may be needed in L-band cases, where the accuracy requirements are tighter.
The global fitting approach of Section 12.6, including quality measures, could also be applied to the autofocus estimates. The estimates first should be converted to the effective radar velocity, Vr, which varies slowly over the whole scene. However, the variability of the estimates is usually much less than that of the Doppler centroid estimates, so the global fitting approach is not as necessary in the autofocus case.
The autofocus results are converted to effective radar velocity, and are plotted with the predicted values of
Vr from the state vector model and the circular orbit model in Figure 13.8.
[Figure 13.7: (a) Look cross correlations versus time shift in samples; (b) look misregistration measurements from (a), with a straight line fit and the theoretical slope shown dashed, giving an estimated FM rate of 1733.6 Hz/s; (c) contrast measurements versus azimuth FM rate (1715 to 1750 Hz/s), with a parabola fit giving an estimated FM rate of 1733.0 Hz/s.]
Figure 13.8: Effective radar velocity from state vectors, circular orbit model,
and autofocus.
13.4.3 Phase-Based Autofocus Methods

A number of phase-based autofocus methods have been developed for spotlight SAR systems. In this mode, all targets are illuminated within the same exposure time, since the beam is constantly steered towards the scene of interest. This means each target is subjected to the same phase errors throughout its exposure. This property makes the methods well suited for processing spotlight SAR data. The methods cannot be applied directly to stripmap processors, since targets are exposed at different azimuth times in this mode. For this reason, only a summary is provided for each method.

In these autofocus methods, a deramping process is used to perform the azimuth matched filtering, which keeps the data available for a time varying phase measurement. Two commonly used methods are highlighted next.
Phase Difference Method

The phase difference method is parametric, since it is designed to estimate the quadratic component of the azimuth phase error [5]. Two azimuth looks are used, as in the map drift approach. Instead of correlating the magnitude of the two compressed looks, one look is multiplied by the complex conjugate of the other after deramping. A Fourier transform is then performed to estimate the frequency of the resulting beat signal, as in the MLBF DAR algorithm. The beat signal will exhibit a peak in the spectrum if the estimation area is dominated by a strong discrete target. The FM rate error, ΔKa, can be obtained from the peak position.
As multiple targets cause cross-beating that obscures the peak of the beat spectrum, as in the MLBF
algorithm, the algorithm does not work well if applied to all the data in one operation. To minimize the
cross-beating, the azimuth history of strong targets can be extracted from the received data. The method is
applied to the extracted targets, and the beat signal spectrum is averaged over targets. Another method is to
partition the data into azimuth blocks, and to perform autofocus processing for each block in turn. Only targets
that are fully exposed in a block are considered. Similar to the map drift algorithm, multiple azimuth looks can
be used to estimate higher order phase errors.
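A sketch of the beat measurement for a single extracted target follows. The residual quadratic phase error model and all parameters below are illustrative assumptions.

```python
import numpy as np

fs = 512.0                  # azimuth sampling rate, Hz
T = 1.0                     # full deramped aperture time, s
dk_true = 40.0              # residual (uncompensated) FM rate error, Hz/s
n_half = int(fs * T) // 2
t = (np.arange(n_half) - n_half / 2) / fs   # common time axis of one look

# After deramping, a residual FM rate error dk leaves exp(j*pi*dk*eta^2).
# Split the aperture into two azimuth looks and align them in time:
look1 = np.exp(1j * np.pi * dk_true * (t - T / 4) ** 2)
look2 = np.exp(1j * np.pi * dk_true * (t + T / 4) ** 2)

# Beating the looks gives a tone whose frequency is dk * T / 2.
beat = look2 * np.conj(look1)
nfft = 8192
spectrum = np.abs(np.fft.fft(beat, nfft))
f_beat = np.fft.fftfreq(nfft, 1.0 / fs)[int(np.argmax(spectrum))]
dk_est = 2.0 * f_beat / T
print(dk_est)  # → 40.0
```

With multiple targets, the cross-beating terms obscure this peak, which is why the method is applied to extracted targets or to azimuth blocks, as described above.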
Phase Gradient Autofocus

The phase gradient autofocus (PGA) algorithm is nonparametric, in that a polynomial form is not assumed for the phase error [5, 6]. The concept is based upon the fact that a properly deramped signal of a point target has a sinusoidal form, with a frequency proportional to the position of the compressed target. The average frequency of the deramped target can be found using an FFT. With this knowledge, the expected frequency can be removed from the deramped data, and any remaining phase terms can be considered as a phase error. After the removal of the average frequency, the phase increments between consecutive samples are found. The phase increments should be zero at each sample, and their departure from zero at specific parts of the phase history indicates the phase correction needed.
To estimate this phase correction, the deramped data are first compressed by an FFT, and strong targets are
identified and selected. An azimuth window centered around each selected target is applied, to isolate it from
nearby ones. The centering helps to remove the frequency of the target due to its position. An IFFT is taken,
and the phase increments (i.e., frequency) of the target should ideally be zero. The autocorrelation function is
computed to find the phase increments. By integrating the phase difference and averaging over all selected targets,
the phase correction is obtained. The algorithm is iterative, in that the targets are recompressed after the phase
correction, and the estimation is repeated. To improve the isolation of each target, the window size is reduced at
each iteration.
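One PGA iteration can be sketched as follows; the target model, window size, and averaging details are simplifications of the published algorithm, chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
k = np.arange(n)
# A common, slowly varying azimuth phase error applied to every range line:
phase_err = 2.0 * np.pi * 5e-5 * (k - n / 2) ** 2

# Simulated range lines, each holding one strong point target:
g = np.zeros((8, n), dtype=complex)
for i in range(8):
    img = np.zeros(n, dtype=complex)
    img[rng.integers(40, n - 40)] = 1.0
    # Defocus: apply the phase error in the azimuth phase history domain.
    g[i] = np.fft.ifft(np.fft.fft(img) * np.exp(1j * phase_err))

def pga_iteration(g):
    """One PGA iteration: shift the strongest target of each line to cell 0
    (removing its position frequency), window it, return to the phase
    history domain, and average the phase increments over all lines."""
    n = g.shape[1]
    shifted = np.empty_like(g)
    for i, row in enumerate(g):
        shifted[i] = np.roll(row, -int(np.argmax(np.abs(row))))
    w = np.zeros(n)
    w[:32] = 1.0
    w[-32:] = 1.0                      # circular isolation window around cell 0
    hist = np.fft.fft(shifted * w, axis=1)
    corr = np.sum(hist[:, 1:] * np.conj(hist[:, :-1]), axis=0)
    grad = np.angle(corr)              # estimated phase gradient
    return np.concatenate(([0.0], np.cumsum(grad)))

phase_est = pga_iteration(g)
print(phase_est.shape)  # → (256,)
```

Up to an irrelevant constant and linear term, the integrated gradient tracks the applied phase error; repeating the iteration with a shrinking window, as described above, refines the estimate.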
Discussion
These two methods share a number of similar characteristics. The phase difference method assumes that the
phase error is dominated by a quadratic term, which happens if the radar velocity is incorrect. The PGA method
is more general, and is aimed at the radar platform not following an ideal flight path, where the phase history
takes on nonquadratic forms over parts of the received data. The methods of measuring the phase errors are also
different.
Since targets occur at different azimuth times in stripmap data, the algorithms cannot be applied directly to
all of the data. One way is to extract the azimuth history of strong targets, and apply the algorithms to the
extracted targets. Another method is to partition the data into azimuth blocks, and perform autofocus processing
for each block separately. Only targets that are fully exposed in a block are considered.
13.5 Summary
The effective radar velocity that gives the azimuth FM rate is very important for sharp image focus. It can be computed from the satellite orbit and attitude information in the ephemeris data, using a geometry model. This is done at different ranges, to get a parameterized model of azimuth FM rate to use in the SAR processor. However, the derived parameters may not be accurate enough for focusing the SAR data, due to limitations in the model and errors in the ephemeris data. In this case, the estimates of the parameters have to be refined using measurements made on the SAR data, referred to as "autofocus."
Several autofocus methods are discussed, which are used to refine the geometry estimates. One method works
by optimizing the contrast of the processed image. Map drift is another magnitude-based autofocus method, in
which the SAR data are compressed using two azimuth looks, and the azimuth misregistration between the two
looks is then measured. Phase-based autofocus methods such as PD and PGA have been developed for spotlight
SAR, but can be adapted to stripmap SAR.
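The map-drift measurement summarized above reduces to a peak search: the two looks are compressed separately, and the shift that maximizes the cross-correlation of their magnitudes gives the misregistration. A minimal 1-D sketch (the function name and peak-finding details are illustrative assumptions):

```python
import numpy as np

def look_misregistration(look1, look2):
    """Azimuth shift between two look images, found as the peak of the
    cross-correlation of their mean-removed magnitudes."""
    a = np.abs(look1) - np.mean(np.abs(look1))
    b = np.abs(look2) - np.mean(np.abs(look2))
    c = np.correlate(a, b, mode="full")
    # Lag zero sits at index len(b) - 1 of the full correlation.
    return int(np.argmax(c)) - (len(b) - 1)
```

In the autofocus loop, the measured shift is then converted into a correction of the azimuth FM rate (and hence the effective radar velocity), using the known separation of the two look center frequencies.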
To illustrate that SAR systems come in many shapes and sizes, Figure 13.9 features the miniature MiSAR system
built by EADS Defense Electronics in Ulm, Germany, for use in a pilotless surveillance drone.
The antennas and gimbal assembly are shown in Figure 13.9. The horns are 15 cm long in the azimuth
direction, quite a contrast from the 15-m antenna on RADARSAT. There are two antennas, one for transmitting
and one for receiving, since the transmit and receive events overlap in time in this continuous-wave design.
An image taken by the MiSAR system in 2004 is shown in Figure 13.10. The radar frequency is Ka band.
The swath width can vary from 0.5 to 2 km, and the operating range is up to 5 km. The total weight is less
than 4 kg and the power consumption is less than 100 W.
Figure 13.9: The MiSAR system with 15-cm transmitting and receiving horn
antennas.
Figure 13.10: An image taken by MiSAR. (Courtesy of EADS, Germany.)
References
[1] C. Oliver and S. Quegan. Understanding Synthetic Aperture Radar Images. Artech House, Norwood, MA, 1998.
[2] J. Curlander and R. McDonough. Synthetic Aperture Radar: Systems and Signal Processing. John Wiley &
Sons, New York, 1991.
[3] J. R. Bennett and I. G. Cumming. A Digital Processor for the Production of SEASAT Synthetic Aperture
Radar Imagery. In Proc. SURGE Workshop, ESA Publication No. SP-154, Frascati, Italy, July 16-18, 1979.
[4] J. C. Curlander, C. Wu, and A. Pang. Automated Processing of Spaceborne SAR Data. In Proceedings of
the International Geoscience and Remote Sensing Symposium, IGARSS'82, Vol. 1, pp. 3-6, Munich, Germany,
June 1982.
[5] W. G. Carrara, R. S. Goodman, and R. M. Majewski. Spotlight Synthetic Aperture Radar: Signal Processing
Algorithms. Artech House, Norwood, MA, 1995.
A CD containing raw signal data from RADARSAT-1 is included with this book. The data were acquired on
June 16, 2002, between 02:03:50 and 02:04:05 GMT, in ascending orbit #34522, using FINE Beam 2 (Near). These
signal data were used to produce the image in Figure 6.32, and in the estimation experiments of Chapters
12 and 13.
The signal data are stored in the file dat_01.001, which consists of approximately 19,400 records. There is one
record per range line, and every eighth record contains a replica of the transmitted pulse. There are 9288
complex echo samples per range line, stored as unsigned integers. The records without the pulse replica contain
18,818 bytes, starting with 192 bytes of header data and 50 bytes of auxiliary data, followed by 18,576 bytes of
echo data. The records with the pulse replica contain 21,698 bytes, starting with 192 bytes of header data and 50
bytes of auxiliary data, then 2880 bytes of the pulse replica, followed by 18,576 bytes of echo data.
To process this dataset, a number of radar parameters are needed. The important ones are listed in Table
A.1. The values for azimuth FM rate and Doppler centroid are approximate values at the center of the swath.
Apart from these parameters, the system attenuation values must be extracted from the accompanying CD, so the
gain can be corrected during the processing.
The data are in CEOS format, and a MATLAB program to read the data in this file is posted on the
Artech House Web site: https://www.artechhouse.com/ Please refer to this Web site for more detailed
instructions on reading the CD, and for any parameter updates.
The data on the CD have been kindly provided by Gordon Staples of Radarsat International. The Canadian
Space Agency holds the copyright of the data, and it is provided on the condition that it only be used for
educational purposes by the owners of this book. Copying of the CD for distribution to other persons is strictly
prohibited.
List of Acronyms
DC direct current
EM electromagnetic (wave)
FM frequency modulation
RF radio frequency
TA throwaway region
This section contains a list of the major symbols used in Chapters 2 to 13.
Ka1,dop Average azimuth FM rate of point target for the first range look for Doppler ambiguity estimation, hertz per second
Doppler ambiguity
FFT length
IFFT length
t Time, seconds
θsq,c Squint angle of the radar beam center in curved Earth geometry, measured from zero Doppler, radians
Special Issue on SIR-C/X-SAR. IEEE Trans. on Geoscience and Remote Sensing, 33 (4), pp. 817-956, July 1995.
S. Albrecht and I. G. Cumming. The Application of the Momentary Fourier Transform to SAR Processing. IEE
Proc: Radar, Sonar and Navigation, 146 (6), pp. 285-297, December 1999.
W. A. Alpers and I. Hennings. A Theory of the Imaging Mechanism of Underwater Bottom Topography by Real
and Synthetic Aperture Radar. J. of Geophysical Research, 89 (C6), pp. 10529-10546, November 1984.
T. Amiot, F. Douchin, E. Thouvenot, J.-C. Souyris, and B. Cugny. The Interferometric Cartwheel: A Multipurpose
Formation of Passive Radar Microsatellites. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'02, Vol.
1, pp. 435-437, Toronto, June 2002.
D. A. Ausherman. Digital Versus Optical Techniques in Synthetic Aperture Radar Data Processing. In Application
of Digital Image Processing (IOCC 1977), Vol. 119, pp. 238-256. SPIE, 1977.
R. Bamler. A Systematic Comparison of SAR Focusing Algorithms. In Proc. Int. Geoscience and Remote Sensing
Symp., IGARSS'91, Vol. 2, pp. 1005-1009, Espoo, Finland, June 1991.
R. Bamler. Doppler Frequency Estimation and the Cramer-Rao Bound. IEEE Trans. on Geoscience and Remote
Sensing, 29 (3), pp. 385-390, May 1991.
R. Bamler. A Comparison of Range-Doppler and Wavenumber Domain SAR Focusing Algorithms. IEEE Trans. on
Geoscience and Remote Sensing, 30 (4), pp. 706-713, July 1992.
R. Bamler. Adapting Precision Standard SAR Processors to ScanSAR. In Proc. Int. Geoscience and Remote
Sensing Symp., IGARSS'95, Vol. 3, pp. 2051-2053, Florence, Italy, July 1995.
R. Bamler, H. Breit, U. Steinbrecher, and D. Just. Algorithms for X-SAR Processing. In Proc. Int. Geoscience and
Remote Sensing Symp., IGARSS'93, Vol. 4, pp. 1589-1592, Tokyo, August 1993.
R. Bamler and M. Eineder. Optimum Look Weighting for Burst-Mode and ScanSAR Processing. IEEE Trans.
Geoscience and Remote Sensing, 33, pp. 722-725, 1995.
R. Bamler and M. Eineder. ScanSAR Processing Using Standard High Precision SAR Algorithms. IEEE Trans.
Geoscience and Remote Sensing, 34 (1), pp. 212-218, January 1996.
R. Bamler and P. Hartl. Synthetic Aperture Radar Interferometry. Inverse Problems, 14 (4), pp. R1-R54, 1998.
R. Bamler and H. Runge. Method of Correcting Range Migration in Image Generation in Synthetic Aperture
Radar. U.S. Patent No. 5,237,329. Patent Appl. No. 909,843, filed July 7, 1992, granted August 17, 1993. The
patent is assigned to DLR. An earlier successful patent application was filed in Germany on July 8, 1991.
R. Bamler and H. Runge. PRF-Ambiguity Resolving by Wavelength Diversity. IEEE Trans. Geoscience and
Remote Sensing, 29 (6), pp. 997-1003, November 1991.
B. C. Barber. Theory of Digital Imaging from Orbital Synthetic Aperture Radar. International Journal of Remote
Sensing, 6, pp. 1009-1057, 1985.
D. K. Barton. Modern Radar System Analysis. Artech House, Norwood, MA, 1988.
D. C. Bast and I. G. Cumming. RADARSAT ScanSAR Roll Angle Estimation. In Proc. Int. Geoscience and
Remote Sensing Symp., IEEE/CRSS, IGARSS'02, Vol. 1, pp. 152-154, Toronto, June 24-28, 2002.
R. W. Bayma and P. A. McInnes. Aperture Size and Ambiguity Constraints for a Synthetic Aperture Radar. In
Synthetic Aperture Radar, J. J. Kovaly (ed.). Artech House, Dedham, MA, 1978.
D. P. Belcher and C. J. Baker. High Resolution Processing of Hybrid Strip-Map/Spotlight Mode SAR. IEE Proc.,
Radar, Sonar, Navig., 143 (6), pp. 366-374, 1996.
J. R. Bennett and I. G. Cumming. Digital Techniques for the Multilook Processing of SAR Data with Application
to SEASAT-A. In Fifth Canadian Symp. on Remote Sensing, Victoria, BC, August 1978.
J. R. Bennett and I. G. Cumming. A Digital Processor for the Production of SEASAT Synthetic Aperture Radar
Imagery. In Proc. SURGE Workshop, ESA Publication No. SP-154, Frascati, Italy, July 16-18, 1979.
J. R. Bennett, I. G. Cumming, R. A. Deane, P. Widmer, R. Fielding, and P. McConnell. SEASAT Imagery Shows
St. Lawrence. Aviation Week and Space Technology, page 19 and front cover, February 26, 1979.
M. Born and E. Wolf. Principles of Optics. Cambridge University Press, Cambridge, England, 7th edition, 1999.
R. N. Bracewell. The Fourier Transform and Its Applications. WCB/ McGraw-Hill, New York, 3rd edition, 1999.
H. Breit, B. Schattler, and U. Steinbrecher. A High Precision Workstation-Based Chirp Scaling SAR Processor. In
Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 1, pp. 465-467, Singapore, August 1997.
E. O. Brigham. The Fast Fourier Transform: An Introduction to Its Theory and Application. Prentice Hall, Upper
Saddle River, NJ, 1974.
C. Cafforio, P. Guccione, and A. Monti Guarnieri. Doppler Centroid Estimation for ScanSAR Data. IEEE Trans.
on Geoscience and Remote Sensing, 42 (1), pp. 14-23, January 2004.
C. Cafforio, C. Prati, and F. Rocca. Full Resolution Focusing of SEASAT SAR Images in the Frequency-Wave
Number Domain. In Proc. 8th EARSel Workshop, pp. 336-355, Capri, Italy, May 17-20, 1988.
C. Cafforio, C. Prati, and F. Rocca. SAR Data Focusing Using Seismic Migration Techniques. IEEE Trans. on
Aerospace and Electronic Systems, 27 (2), pp. 194-207, March 1991.
W. J. Caputi. Stretch: A Time-Transformation Technique. IEEE Trans. on Aerospace and Electronic Systems,
AES-7, pp. 269-278, March 1971.
W. G. Carrara, R. S. Goodman, and R. M. Majewski. Spotlight Synthetic Aperture Radar: Signal Processing
Algorithms. Artech House, Norwood, MA, 1995.
C. Y. Chang and J. C. Curlander. Doppler Centroid Ambiguity Estimation for Synthetic Aperture Radars. In
Proceedings of the International Geoscience and Remote Sensing Symposium, IGARSS'89, pp. 2567-2571,
Vancouver, BC, 1989.
C. Y. Chang and J. C. Curlander. Application of the Multiple PRF Technique to Resolve Doppler Centroid
Estimation Ambiguity for Spaceborne SAR. IEEE Trans. on Geoscience and Remote Sensing, 30 (5), pp. 941-949,
September 1992.
C. Y. Chang, M. Y. Jin, Y.-L. Lou, and B. Holt. First SIR-C ScanSAR Results. IEEE Trans. on Geoscience and
Remote Sensing, 34 (5), pp. 1278-1281, September 1996.
J. H. Chun and C. A. Jacewitz. Fundamentals of Frequency Domain Migration. Geophysics, 46, pp. 717-733, 1981.
I. G. Cumming. Model-Based Doppler Estimation for Frame-Based SAR Processing. In Proc. Int. Geoscience and
Remote Sensing Symp., IGARSS'01, Vol. 6, pp. 2645-2647, Sydney, Australia, July 2001.
I. G. Cumming. A Spatially Selective Approach to Doppler Estimation for Frame-Based Satellite SAR Processing.
IEEE Trans. on Geoscience and Remote Sensing, 42 (6), June 2004.
I. G. Cumming and D. C. Bast. A New Hybrid-Beam Data Acquisition Strategy to Support ScanSAR Radiometric
Calibration. IEEE Trans. on Geoscience and Remote Sensing, 42 (1), pp. 3-13, January 2004.
I. G. Cumming and J. R. Bennett. Digital Processing of SEASAT SAR Data. In IEEE 1979 International
Conference on Acoustics, Speech and Signal Processing, Washington, D.C., April 2-4, 1979.
I. G. Cumming, Y. Guo, and F. H. Wong. A Comparison of Phase-Preserving Algorithms for Burst-Mode SAR
Data Processing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 2, pp. 731-733, Singapore,
August 1997.
I. G. Cumming, Y. Guo, and F. H. Wong. Analysis and Precision Processing of Radarsat ScanSAR Data. In
Geomatics in the Era of Radarsat, GER'97, Ottawa, ON, May 25-30, 1997. Published on CD-ROM and available
on the UBC RRSG Web site.
I. G. Cumming, P. F. Kavanagh, and M. R. Ito. Resolving the Doppler Ambiguity for Spaceborne Synthetic
Aperture Radar. In Proceedings of the International Geoscience and Remote Sensing Symposium, IGARSS'86, pp.
1639-1643, Zurich, Switzerland, September 8-11, 1986.
I. G. Cumming and J. Lim. The Design of a Digital Breadboard Processor for the ESA Remote Sensing Satellite
Synthetic Aperture Radar. Technical report, MacDonald Dettwiler, Richmond, BC, July 1981. Final report for ESA
Contract No. 3998/79/NL/HP(SC).
I. G. Cumming, F. H. Wong, and R. K. Raney. A SAR Processing Algorithm with No Interpolation. In Proc. Int.
Geoscience and Remote Sensing Symp., IGARSS'92, pp. 376-379, Clear Lake, TX, May 1992.
J. Curlander and R. McDonough. Synthetic Aperture Radar: Systems and Signal Processing. John Wiley & Sons,
New York, 1991.
J. C. Curlander, C. Wu, and A. Pang. Automated Processing of Spaceborne SAR Data. In Proceedings of the
International Geoscience and Remote Sensing Symposium, IGARSS'82, Vol. 1, pp. 3-6, Munich, Germany, June
1982.
D. D'Aria, A. Monti Guarnieri, and F. Rocca. Focusing Bistatic Synthetic Aperture Radar Using Dip Move Out.
IEEE Trans. on Geoscience and Remote Sensing, 42 (7), pp. 1362-1376, July 2004.
G. W. Davidson. Image Formation from Squint Mode Synthetic Aperture Radar Data. PhD thesis, Dept. of
Electrical and Computer Eng., University of British Columbia, Vancouver, BC, September 1994.
G. W. Davidson and I. G. Cumming. Signal Properties of Squint Mode SAR. IEEE Trans. on Geoscience and
Remote Sensing, 35 (3), pp. 611-617, May 1997.
G. W. Davidson, I. G. Cumming, and M. R. Ito. A Chirp Scaling Approach for Processing Squint Mode SAR
Data. IEEE Trans. on Aerospace and Electronic Systems, 32 (1), pp. 121-133, January 1996.
G. W. Davidson, F. H. Wong, and I. G. Cumming. The Effect of Pulse Phase Errors on the Chirp Scaling SAR
Processing Algorithm. IEEE Trans. on Geoscience and Remote Sensing, 34 (2), pp. 471-478, March 1996.
Y.-L. Desnos, H. Laur, P. Lim, P. Meisl, and T. Gach. The ENVISAT-1 Advanced Synthetic Aperture Radar
Processor and Data Products. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'99, Vol. 3, pp.
1683-1685, Hamburg, Germany, June 1999.
C. Ding, H. Peng, Y. Wu, and H. Jia. Large Beamwidth Spaceborne SAR Processing Using Chirp Scaling. In
Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'99, Vol. 1, pp. 527-529, Hamburg, June 1999.
Y. Ding and D. C. Munson. A Fast Back-Projection Algorithm for Bistatic SAR Imaging. In Proc. Int. Conf. on
Image Processing, ICIP 2002, Vol. 2, pp. 449-452, Rochester, NY, September 22-25, 2002.
R. C. Dixon. Spread Spectrum Systems with Commercial Applications. Wiley-Interscience, New York, 3rd edition,
1994.
Y. Dong, A. K. Milne, and B. C. Forster. A Review of SAR Speckle Filters: Texture Restoration and
Preservation. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'00, Vol. 2, pp. 633-635, Honolulu, HI,
July 2000.
M. Dragosevic. On Accuracy of Attitude Estimation and Doppler Tracking. In CEOS SAR Workshop, Toulouse,
France, October 26-29, 1999. ESA-CNES. http://www.estec.esa.nl/ceos99/papers/p164.pdf.
M. Dragosevic. Attitude Estimation and Doppler Tracking. In CEOS SAR Workshop, (ESA), Ulm, Germany, May
27-28, 2004.
M. Dragosevic and B. Plache. Doppler Tracker for a Spaceborne ScanSAR System. IEEE Trans. on Aerospace and
Electronic Systems, 36 (3), pp. 907-924, July 2000.
C. Elachi. Spaceborne Radar Remote Sensing: Applications and Techniques. IEEE Press, New York, 1987.
D. Fernandes, G. Waller, and J. R. Moreira. Registration of SAR Images Using the Chirp Scaling Algorithm. In
Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'96, Vol. 1, pp. 799-801, Lincoln, NE, July 1996.
D. Esteban Fernandez, P. J. Meadows, B. Schaettler, and P. Mancini. ERS Attitude Errors and Its Impact on the
Processing of SAR Data. In CEOS SAR Workshop, Toulouse, France, October 26-29, 1999. ESA-CNES.
http://www.estec.esa.nl/ceos99/papers/p027.pdf.
H. Fiedler, E. Boerner, J. Mittermayer, and G. Krieger. Total Zero Doppler Steering. In Proc. European Conference
on Synthetic Aperture Radar, EUSAR'04, pp. 481-484, Ulm, Germany, May 2004.
G. Franceschetti and R. Lanari. Synthetic Aperture Radar Processing. CRC Press, Boca Raton, FL, 1999.
A. Freeman, W. T. K. Johnson, B. Honeycutt, R. Jordan, S. Hensley, P. Siqueira, and J. Curlander. The "Myth"
of the Minimum SAR Antenna Area Constraint. IEEE Trans. Geoscience and Remote Sensing, 38 (1), pp. 320-324,
January 2000.
V. S. Frost, J. A. Stiles, K. S. Shanmugan, and J. C. Holtzman. A Model for Radar Images and Its Application to
Adaptive Filtering of Multiplicative Noise. IEEE Trans. Pattern Anal. Mach. Intell., 4, pp. 157-166, 1982.
A. Gallon and F. Impagnatiello. Motion Compensation in Chirp Scaling SAR Processing Using Phase Gradient
Autofocusing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'98, Vol. 2, pp. 633-635, Seattle, WA,
July 1998.
E. Gimeno and J. M. Lopez-Sanchez. Near-Field 2-D and 3-D Radar Imaging Using a Chirp Scaling Algorithm. In
Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'01, Vol. 1, pp. 354-356, Sydney, Australia, July 2001.
J. W. Goodman. Statistical Properties of Laser Speckle Patterns. In Laser and Speckle Related Phenomena, J. C.
Dainty (ed.). Springer-Verlag, London, 1984.
M. M. Goulding, D. R. Stevens, and P. R. Lim. The SIVAM Airborne SAR System. In Proc. Int. Geoscience and
Remote Sensing Symp., IGARSS'01, Vol. 6, pp. 2763-2765, Sydney, Australia, July 2001.
A. Monti Guarnieri. Residual SAR Focusing: An Application to Coherence Improvement. IEEE Trans. Geoscience
and Remote Sensing, 34 (1), pp. 201-211, January 1996.
A. Monti Guarnieri and P. Guccione. Optimal "Focusing" for Low Resolution ScanSAR. IEEE Trans. on
Geoscience and Remote Sensing, 39 (3), pp. 479-491, March 2001.
A. Monti Guarnieri and C. Prati. ScanSAR Focusing and Interferometry. IEEE Trans. Geoscience and Remote
Sensing, 34 (4), pp. 1029-1038, July 1996.
R. F. Hanssen. Radar Interferometry: Data Interpretation and Error Analysis. Kluwer Academic Publishers,
Dordrecht, the Netherlands, 2001.
R. O. Harger. Synthetic Aperture Radar Systems: Theory and Design. Academic Press, New York, 1970.
D. W. Hawkins and P. T. Gough. An Accelerated Chirp Scaling Algorithm for Synthetic Aperture Imaging. In
Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 1, pp. 471-473, Singapore, August 1997.
R. K. Hawkins and P. W. Vachon. Modelling SAR Scalloping in Burst Mode Products from RADARSAT-1 and
ENVISAT. In Proc. CEOS Workshop on SAR, London, September 2002. ESA Publication SP-520.
S. S. Haykin. Communications Systems. John Wiley & Sons, New York, 4th edition, 2000.
H. Hellsten and L. E. Anderson. An Inverse Method for the Processing of Synthetic Aperture Radar Data. Inverse
Problems, No. 3, pp. 111-124, 1987.
F. M. Henderson and A. J. Lewis, editors. Manual of Remote Sensing, Volume 2: Principles and Applications of
Imaging Radar. John Wiley & Sons, New York, 3rd edition, 1998.
S. Hensley, P. Rosen, and E. Gurrola. The SRTM Topographic Mapping Processor. In Proc. Int. Geoscience and
Remote Sensing Symp., IGARSS'00, Vol. 3, pp. 1168-1170, Honolulu, HI, July 2000.
J. Holzner and R. Bamler. Burst-Mode and ScanSAR Interferometry. IEEE Trans. on Geoscience and Remote
Sensing, 40 (9), pp. 1917-1934, September 2002.
B. L. Honeycutt. Spaceborne Imaging Radar-C Instrument. IEEE Trans. on Geoscience and Remote Sensing, 27
(2), pp. 164-169, March 1989.
W. Hong, J. Mittermayer, and A. Moreira. High Squint Angle Processing of E-SAR Stripmap Data. In Proc.
European Conference on Synthetic Aperture Radar, EUSAR '00, pp. 449-552, Munich, Germany, May 2000.
Y. Huang and A. Moreira. Airborne SAR Processing Using the Chirp Scaling and a Time Domain Subaperture
Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'93, Vol. 3, pp. 1182-1184, Tokyo, August
1993.
W. Hughes, K. Gault, and G. J. Princz. A Comparison of the Range-Doppler and Chirp Scaling Algorithms with
Reference to RADARSAT. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'96, Vol. 2, pp. 1221-1223,
Lincoln, NE, July 1996.
E. C. Ifeachor and B. W. Jervis. Digital Signal Processing: A Practical Approach. Pearson Education, Harlow,
England, 2nd edition, 2002.
V. K. Ingle and J. G. Proakis. Digital Signal Processing Using MATLAB V.4. Brooks/Cole Publishing Co., Pacific
Grove, CA, 1st edition, 2000.
L. B. Jackson. Digital Filters and Signal Processing. Kluwer Academic Publishers, Boston, MA, 3rd edition, 1996.
M. J. Jin and C. Wu. A SAR Correlation Algorithm Which Accommodates Large Range Migration. IEEE Trans.
Geoscience and Remote Sensing, 22 (6), pp. 592-597, November 1984.
M. Y. Jin. PRF Ambiguity Determination for RADARSAT ScanSAR System. In Proc. Int. Geoscience and Remote
Sensing Symp., IGARSS'94, Vol. 4, pp. 1964-1966, Pasadena, CA, August 1994.
M. Y. Jin, F. Cheng, and M. Chen. Chirp Scaling Algorithms for SAR Processing. In Proc. Int. Geoscience and
Remote Sensing Symp., IGARSS'93, Vol. 3, pp. 1169-1172, Tokyo, Japan, August 1993.
N. L. Johnson, S. Kotz, and N. Balakrishnan. Continuous Univariate Distributions. John Wiley & Sons, New York,
2nd edition, 1994.
W. T. K. Johnson. Magellan Imaging Radar Mission to Venus. Proceedings of the IEEE, 79 (6), pp. 777-790, June
1991.
R. L. Jordan. The SEASAT-A Synthetic Aperture Radar System. IEEE Trans. Oceanic Eng., 5 (2), pp. 154-164,
1980.
R. L. Jordan, B. L. Honeycutt, and M. Werner. The SIR-C/X-SAR Synthetic Aperture Radar System. Proc. IEEE,
79 (6), pp. 827-838, 1991.
T. Kailath. Linear Systems. Prentice Hall, Upper Saddle River, NJ, 1980.
J. F. Kaiser. Nonrecursive Digital Filter Design Using the I0-sinh Window Function. In 1974 Intern. Conf. on
Circuits and Systems, pp. 20-23, April 22-25, 1974. Reprinted in "Selected Papers in Digital Signal Processing, II",
IEEE Press, New York, 1976.
E. W. Kamen. Fundamentals of Signals and Systems Using MATLAB. Prentice Hall, Upper Saddle River, NJ,
1996.
S. Karnevi, E. Dean, D. J. Q. Carter, and S. S. Hartley. ENVISAT's Advanced Synthetic Aperture Radar: ASAR.
ESA Bulletin, 76, pp. 30-35, 1994.
E. L. Key, E. N. Fowle, and R. D. Haggarty. A Method of Designing Signals of Large Time-Bandwidth Product.
IRE Intern. Conv. Record, (4), pp. 146-154, March 1961.
J. C. Kirk. A Discussion of Digital Processing in Synthetic Aperture Radar. IEEE Trans. on Aerospace and
Electronic Systems, 10 (3), pp. 326-337, May 1975.
H. J. Kramer. Observation of the Earth and Its Environment: Survey of Missions and Sensors. Springer-Verlag,
Berlin, 1996.
E. Kreyszig. Advanced Engineering Mathematics. John Wiley & Sons, New York, 7th edition, 1993.
R. Lanari. A New Method for the Compensation of the SAR Range Cell Migration Based on the Chirp-Z
Transform. IEEE Trans. Geoscience and Remote Sensing, 33 (5), pp. 1296-1299, September 1995.
R. Lanari, S. Hensley, and P. A. Rosen. Chirp-Z Transform Based SPECAN Approach for Phase Preserving
ScanSAR Image Generation. IEE Proc. Radar, Sonar and Navigation, 145 (5), pp. 254-261, 1998.
B. P. Lathi. Signal Processing and Linear Systems. Oxford University Press, New York, 1998.
J.-S. Lee. A Simple Speckle Smoothing Algorithm for Synthetic Aperture Radar Images. IEEE Trans. Systems,
Man and Cybernetics, 13, pp. 85-89, 1983.
K. Leung, M. Chen, J . Shimada, and A. Chu. RADARSAT Processing System at ASF. In Proc. Int. Geoscience
and Remote Sensing Symp., IGARSS'96, Vol. 1, pp. 43-47, Lincoln, NE, July 1996.
K. Leung, M. Jin, C. Wong, and J. Gilbert. SAR Data Processing for the Magellan Prime Mission. In Proc. Int.
Geoscience and Remote Sensing Symp., IGARSS'92, pp. 606-609, Clear Lake, TX, May 1992.
F. K. Li, D. N. Held, J. Curlander, and C. Wu. Doppler Parameter Estimation for Spaceborne Synthetic Aperture
Radars. IEEE Trans. on Geoscience and Remote Sensing, 23 (1), pp. 47-56, January 1985.
O. Loffeld. Estimating Time Varying Doppler Centroids With Kalman Filters. In Proc. Int. Geoscience and
Remote Sensing Symp., IGARSS'91, Vol. 2, pp. 1043-1046, Espoo, Finland, June 1991.
O. Loffeld, A. Hein, and F. Schneider. SAR Focusing: Scaled Inverse Fourier Transformation and Chirp Scaling. In
Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'98, Vol. 2, pp. 630-632, Seattle, WA, July 1998.
O. Loffeld, H. Nies, V. Peters, and S. Knedlik. Models and Useful Relations for Bistatic SAR Processing. IEEE
Trans. on Geoscience and Remote Sensing, 42 (10), pp. 2031-2038, October 2004.
J. M. Lopez-Sanchez and J. Fortuny. 3-D Radar Imaging Using Range Migration Techniques. IEEE Trans. on
Antennas and Propagation, 48, pp. 728-737, May 2000.
A. P. Luscombe. Taking a Broader View: Radarsat Adds ScanSAR to Its Operations. In Proc. Int. Geoscience and
Remote Sensing Symp., IGARSS'88, Vol. 2, pp. 1027-1032, Edinburgh, Scotland, September 1988.
S. N. Madsen. Estimating the Doppler Centroid of SAR Data. IEEE Trans. on Aerospace and Electronic Systems,
25 (2), March 1989.
B. R. Mahafza. Radar Systems Analysis and Design Using MATLAB. Chapman and Hall/CRC Press, Boca Raton,
FL, 2000.
S. R. Marandi. RADARSAT Attitude Estimates Based on Doppler Centroid of Satellite Imagery. In Proceedings of
the International Geoscience and Remote Sensing Symposium, IGARSS'97, Vol. 1, pp. 493-497, Singapore, August
3-8, 1997.
P. Martyn, J. Williams, J. Nicoll, R. Guritz, and T. Bicknell. Calibration of the RADARSAT SWB Processor at
the Alaska SAR Facility. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'99, pp. 2355-2359,
Hamburg, Germany, June 1999.
D. Massonnet. Capabilities and Limitations of the Interferometric Cartwheel. IEEE Trans. on Geoscience and
Remote Sensing, 39 (3), pp. 506-520, March 2001.
S. W. McCandless. SAR in Space - The Theory, Design, Engineering and Application of a Space-Based SAR
System. In Space-Based Radar Handbook, L. J. Cantafio (ed.), chapter 4. Artech House, Norwood, MA, 1989.
J. H. McClellan, R. W. Schafer, and M. A. Yoder. DSP First: A Multimedia Approach. Prentice Hall, Upper
Saddle River, NJ, 1998.
A. D. McGoey-Smith and M. R. Vant. Modification of the SAR Step Transform Algorithm. IEEE Trans. on
Aerospace and Electronic Systems, 28 (3), pp. 666-674, July 1992.
D. L. Mensa. High Resolution Radar Cross-Section Imaging. Artech House, Norwood, MA, 1991.
S. K. Mitra. Digital Signal Processing: A Computer-Based Approach. McGraw-Hill College Division, New York, 2nd
edition, 2001.
J. Mittermayer and A. Moreira. Spotlight SAR Processing Using the Extended Chirp Scaling Algorithm. In Proc.
Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 4, pp. 2021-2023, Singapore, August 1997.
J. Mittermayer and A. Moreira. A Generic Formulation of the Extended Chirp Scaling Algorithm (ECS) for Phase
Preserving ScanSAR and SpotSAR Processing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'00,
Vol. 1, pp. 108-110, Honolulu, HI, July 2000.
J. Mittermayer and A. Moreira. The Extended Chirp Scaling Algorithm for ScanSAR Interferometry. In Proc.
European Conference on Synthetic Aperture Radar, EUSAR'00, pp. 197-200, Munich, Germany, May 2000.
J. Mittermayer, A. Moreira, and O. Loffeld. High Precision Processing of Spotlight SAR Data Using the Extended
Chirp Scaling Algorithm. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'98, pp. 561-564,
Friedrichshafen, Germany, May 1998.
J. Mittermayer, A. Moreira, and O. Loffeld. Spotlight SAR Data Processing Using the Frequency Scaling
Algorithm. IEEE Trans. Geoscience and Remote Sensing, 37 (5), pp. 2198-2214, September 1999.
J. Mittermayer, A. Moreira, and R. Scheiber. Reduction of Phase Errors Arising from the Approximations in the
Chirp Scaling Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'98, Vol. 2, pp. 1180-1182,
Seattle, WA, July 1998.
J. Mittermayer, R. Scheiber, and A. Moreira. The Extended Chirp Scaling Algorithm for ScanSAR Data
Processing. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'96, pp. 517-520, Konigswinter,
Germany, March 1996.
A. Monti Guarnieri and Y.-L. Desnos. Optimizing Performances of the ENVISAT ASAR ScanSAR Modes. In Proc.
Int. Geoscience and Remote Sensing Symp., IGARSS'99, pp. 1758-1760, Hamburg, Germany, June 1999.
R. K. Moore. Trade-Off Between Picture Element Dimensions and Noncoherent Averaging in Side-Looking
Airborne Radar. IEEE Trans. on Aerospace and Electronic Systems, 15, pp. 697-708, September 1979.
R. K. Moore, J. P. Claassen, and Y. H. Lin. Scanning Spaceborne Synthetic Aperture Radar with Integrated
Radiometer. IEEE Trans. on Aerospace and Electronic Systems, AES-17, pp. 410-421, May 1981.
A. Moreira and Y. Huang. Airborne SAR Processing of Highly Squinted Data Using a Chirp Scaling Approach
with Integrated Motion Compensation. IEEE Trans. Geoscience and Remote Sensing, 32 (5), pp. 1029-1040,
September 1994.
A. Moreira, J. Mittermayer, and R. Scheiber. Extended Chirp Scaling Algorithm for Air and Spaceborne SAR
Data Processing in Stripmap and ScanSAR Imaging Modes. IEEE Trans. on Geoscience and Remote Sensing, 34
(5), pp. 1123-1136, September 1996.
A. Moreira and R. Scheiber. Doppler Parameter Estimation Algorithms for SAR Processing with the Chirp
Scaling Approach. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'94, Vol. 4, pp. 1977-1979,
Pasadena, CA, August 1994.
A. Moreira, R. Scheiber, J. Mittermayer, and R. Spielbauer. Real-Time Implementation of the Extended Chirp
Scaling Algorithm for Air and Spaceborne SAR Processing. In Proc. Int. Geoscience and Remote Sensing Symp.,
IGARSS'95, Vol. 3, pp. 2286-2288, Florence, Italy, July 1995.
I. R. Mufti. Recent Development in Seismic Migration. In Time Series Analysis: Theory and Practice 6, O. D.
Anderson, J. K. Ord, and E. A. Robinson (eds.). Elsevier Science Publishers B.V., North-Holland, 1985.
J. A. Nelder and R. Mead. A Simplex Method for Function Minimization. Computer Journal, 7, pp. 308-313, 1965.
Available in MATLAB as the function FMINSEARCH.
Y. Nemoto, H. Nishino, M. Ono, H. Mizutamari, K. Nishikawa, and K. Tanaka. Japanese Earth Resources
Satellite-1 Synthetic Aperture Radar. Proc. of the IEEE, 79 (6), pp. 800-809, 1991.
R. Okkes and I. G. Cumming. Method of and Apparatus for Processing Data Generated by a Synthetic Aperture
Radar System. European Patent No. 0048704. Patent on the SPECAN algorithm, filed September 15, 1981,
granted February 20, 1985. The patent is assigned to the European Space Agency.
C. Oliver and S. Quegan. Understanding Synthetic Aperture Radar Images. Artech House, Norwood, MA, 1998.
A. V. Oppenheim, R. W. Schafer, and J. R. Buck. Discrete-Time Signal Processing. Prentice Hall, Upper Saddle
River, NJ, 2nd edition, 1999.
A. V. Oppenheim and A. S. Willsky. Signals and Systems. Prentice Hall, Upper Saddle River, NJ, 2nd edition,
1996.
M. P. G. Otten. Comparison of SAR Autofocus Algorithms. Proc. Military Microwaves, pp. 362-367, 1990.
R. Ottolini. Synthetic Aperture Radar Data Processing. SEP Reports, Stanford University, SEP-56, pp. 203-214,
1988.
A. Papoulis. The Fourier Integral and Its Applications. McGraw-Hill College Division, New York, 1962.
A. Papoulis. Systems and Transforms with Applications in Optics. McGraw-Hill, New York, 1968.
A. Papoulis. Probability, Random Variables and Stochastic Processes. McGraw-Hill, New York, 1984.
R. P. Perry and H. W. Kaiser. Digital Step Transform Approach to Airborne Radar Processing. In IEEE National
Aerospace and Electronics Conference, pp. 280-287, May 1973.
R. P. Perry and L. W. Martinson. Radar Matched Filtering, chapter 11 in "Radar Technology," E. Brookner (ed.),
pp. 163-169. Artech House, Dedham, MA, 1977.
C. Prati and F. Rocca. Focusing SAR Data with Time-Varying Doppler Centroid. IEEE Trans. on Geoscience and
Remote Sensing, 30 (3), pp. 550-559, May 1992.
C. Prati, F. Rocca, A. Monti Guarnieri, and E. Damonti. Seismic Migration for SAR Focusing: Interferometric
Applications. IEEE Trans. Geoscience and Remote Sensing, 28 (4), pp. 627-640, 1990.
J. G. Proakis and D. G. Manolakis. Digital Signal Processing: Principles, Algorithms and Applications. Prentice
Hall, Upper Saddle River, NJ, 3rd edition, 1996.
J. G. Proakis and M. Salehi. Communication Systems Engineering. Prentice Hall, Upper Saddle River, NJ, 1993.
C. S. Purry, K. Dumper, G. C. Verwey, and S. R. Pennock. Resolving Doppler Ambiguity for ScanSAR Data. In
Proceedings of the International Geoscience and Remote Sensing Symposium, IGARSS'00, Vol. 5, pp. 2272-2274,
Honolulu, HI, July 24-28, 2000.
L. R. Rabiner, R. W. Schafer, and C. M. Rader. The Chirp-Z Transform and Its Applications. Bell System Tech.
J., 48, pp. 1249-1292, 1969.
R. K. Raney. Doppler Properties of Radars in Circular Orbits. Int. J. of Remote Sensing, 7 (9), pp. 1153-1162,
1986.
R. K. Raney. A Comment on Doppler FM Rate. International Journal of Remote Sensing, 8 (7), pp. 1091-1092,
January 1987.
R. K. Raney. A New and Fundamental Fourier Transform Pair. In Proc. Int. Geoscience and Remote Sensing
Symp., IGARSS'92, pp. 106-107, Clear Lake, TX, May 1992.
R. K. Raney. Radar Fundamentals: Technical Perspective. In Manual of Remote Sensing, Volume 2: Principles and
Applications of Imaging Radar, F. M. Henderson and A. J. Lewis (eds.), pp. 9-130. John Wiley & Sons, New York,
3rd edition, 1998.
R. K. Raney, I. G. Cumming, and F. H. Wong. Synthetic Aperture Radar Processor to Handle Large Squint with
High Phase and Geometric Accuracy. U.S. Patent No. 5,179,383. Patent Appl. No. 729,641, filed July 15, 1991,
granted January 12, 1993. The patent is assigned to the Canadian Space Agency.
R. K. Raney, A. P. Luscombe, E. J. Langham, and S. Ahmed. RADARSAT. Proc. of the IEEE, 79 (6), pp.
839-849, 1991.
R. K. Raney, H. Runge, R. Bamler, I. G. Cumming, and F. H. Wong. Precision SAR Processing Using Chirp
Scaling. IEEE Trans. Geoscience and Remote Sensing, 32 (4), pp. 786-799, July 1994.
F. Rocca, C. Cafforio, and C. Prati. Synthetic Aperture Radar: A New Application for Wave Equation Techniques.
Geophysical Prospecting, 37, pp. 809-830, 1989.
E. Rodriguez and J. Martin. Satellite Interferometer Radar for Topographic Mapping. IEE Proceedings-F, 139, pp.
147-159, 1992.
R. Romeiser, O. Hirsch, and M. Gade. Remote Sensing of Surface Currents and Bathymetric Features in the
German Bight by Along-Track SAR Interferometry. In Proc. Int. Geoscience and Remote Sensing Symp.,
IGARSS'00, Vol. 3, pp. 1081-1083, Honolulu, HI, July 2000.
P. A. Rosen, S. Hensley, E. Gurrola, F. Rogez, S. Chan, J. Martin, and E. Rodriguez. SRTM C-Band Topographic
Data: Quality Assessments and Calibration Activities. In Proc. Int. Geoscience and Remote Sensing Symp.,
IGARSS'01, Vol. 2, pp. 739-741, Sydney, Australia, July 2001.
H. Runge and R. Bamler. A Novel High Precision SAR Focusing Algorithm Based On Chirp Scaling. In Proc. Int.
Geoscience and Remote Sensing Symp., IGARSS'92, pp. 372-375, Clear Lake, TX, May 1992.
M. Sack, M. Ito, and I. G. Cumming. Application of Efficient Linear FM Matched Filtering Algorithms to SAR
Processing. IEE Proceedings-F, 132 (1), pp. 45-57, 1985.
T. E. Scheuer and F. H. Wong. Comparison of SAR Processors Based on a Wave Equation Formulation. In Proc.
Int. Geoscience and Remote Sensing Symp., IGARSS'91, Vol. 2, pp. 635-639, Espoo, Finland, June 1991.
A. R. Schmidt. Secondary Range Compression for Improved Range Doppler Processing of SAR Data with High
Squint. Master's thesis, The University of British Columbia, September 1986.
S. M. Selby. Standard Mathematical Tables. CRC Press, Boca Raton, FL, 1967.
S. Silver, editor. Microwave Antenna Theory and Design. Dover, New York, 1965.
M. Simard. Extraction of Information and Speckle Noise Reduction in SAR Images Using the Wavelet Transform.
In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'98, Vol. 1, pp. 4-6, Seattle, WA, July 1998.
A. M. Smith. A New Approach to Range Doppler SAR Processing. International Journal of Remote Sensing, 12
(2), pp. 235-251, 1991.
M. Soumekh. A System Model and Inversion for Synthetic Aperture Radar Imaging. IEEE Trans. on Image
Processing, 1 (1), pp. 64-76, 1992.
M. Soumekh. Synthetic Aperture Radar Signal Processing with MATLAB Algorithms. Wiley-Interscience, New York,
1999.
J. Steyn, M. M. Goulding, D. R. Stevens, P. R. Lim, J. Steinbacher, J. Tofil, T. Durak, and K. Wesolowicz. Design
Approach to the SIVAM Airborne Multi-Frequency, Multi-Mode SAR System. In Proc. European Conference on
Synthetic Aperture Radar, EUSAR'02, Köln, Germany, June 2002.
G. W. Stimson. Introduction to Airborne Radar. Scitech Pub Inc., 2nd edition, 1998.
R. H. Stolt. Migration by Fourier Transform. Geophysics, 43 (1), pp. 23-48, February 1978.
J.-L. Suchail, C. Buck, J. Guijarro, and R. Torres. The ENVISAT-1 Advanced Synthetic Aperture Radar
Instrument. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'99, Vol. 2, pp. 1441-1443, Hamburg,
Germany, June 1999.
R. J. Sullivan. Microwave Radar Imaging and Advanced Concepts. Artech House, Norwood, MA, 2000.
X. Sun, T. S. Yeo, C. Zhang, Y. Lu, and P. S. Kooi. Time-Varying Step-Transform Algorithm for High Squint
SAR Imaging. IEEE Trans. on Geoscience and Remote Sensing, 37 (6), pp. 2668-2677, November 1999.
K. Tomiyasu. Tutorial Review of Synthetic-Aperture Radar (SAR) with Applications to Imaging of the Ocean
Surface. Proc. IEEE, 66 (5), pp. 563-583, 1978.
K. Tomiyasu. Conceptual Performance of a Satellite Borne, Wide Swath Synthetic Aperture Radar. IEEE Trans.
on Geoscience and Remote Sensing, 19 (2), pp. 108-116, April 1981.
J. B.-Y. Tsui. Fundamentals of Global Positioning System Receivers: A Software Approach. John Wiley & Sons,
New York, 2000.
L. M. H. Ulander and P.-O. Forlind. Precision Processing of CARABAS HF/VHF-Band SAR Data. In Proc. Int.
Geoscience and Remote Sensing Symp., IGARSS'99, Vol. 1, pp. 47-49, Hamburg, Germany, June 1999.
L. M. H. Ulander and H. Hellsten. Calibration of the CARABAS VHF SAR System. In Proc. Int. Geoscience and
Remote Sensing Symp., IGARSS'94, Vol. 1, pp. 301-303, Pasadena, CA, August 1994.
L. M. H. Ulander and H. Hellsten. System Analysis of Ultra-Wideband VHF SAR. In IEE International Radar
Conference, RADAR'97, Conf. Publ. No. 449, pp. 104-108, Edinburgh, Scotland, October 14-16, 1997.
A. Vidal-Pantaleoni and M. Ferrando. A New Spectral Analysis Algorithm for SAR Data Processing of ScanSAR
Data and Medium Resolution Data Without Interpolation. In Proc. Int. Geoscience and Remote Sensing Symp.,
IGARSS'98, Vol. 2, pp. 639-641, Seattle, WA, July 1998.
J. L. Walker. Range-Doppler Imaging of Rotating Objects. IEEE Trans. on Aerospace and Electronic Systems, 16
(1), pp. 23-52, January 1980.
D. R. Wehner. High Resolution Radar. Artech House, Norwood, MA, 2nd edition, 1995.
F. H. Wong and I. G. Cumming. A Combined SAR Doppler Centroid Estimation Scheme Based upon Signal
Phase. IEEE Trans. on Geoscience and Remote Sensing, 34 (3), pp. 696-707, May 1996.
F. H. Wong, D. R. Stevens, and I. G. Cumming. Phase-Preserving Processing of ScanSAR Data with a Modified
Range Doppler Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'97, Vol. 2, pp. 725-727,
Singapore, August 1997.
F. H. Wong, N. L. Tan, and T. S. Yeo. Effective Velocity Estimation for Spaceborne SAR. In Proc. Int.
Geoscience and Remote Sensing Symp., IGARSS'00, Vol. 1, pp. 90-92, Honolulu, HI, July 2000.
F. H. Wong and T. S. Yeo. New Applications of Non-Linear Chirp Scaling in SAR Data Processing. IEEE Trans.
on Geoscience and Remote Sensing, 39 (5), pp. 946-953, May 2001.
J. M. Wozencraft and I. M. Jacobs. Principles of Communication Engineering. John Wiley & Sons, New York,
1965.
C. Wu. A Digital System to Produce Imagery from SAR Data. In AIAA Conference: System Design Driven by
Sensors, October 1976.
C. Wu. Processing of SEASAT SAR Data. In SAR Technology Symp., Las Cruces, NM, September 1977.
C. Wu, K. Y. Liu, and M. J. Jin. A Modeling and Correlation Algorithm for Spaceborne SAR Signals. IEEE
Trans. on Aerospace and Electronic Systems, AES-18 (5), pp. 563-574, September 1982.
K. H. Wu and M. R. Vant. Extensions to the Step Transform SAR Processing Technique. IEEE Trans. on
Aerospace & Electronic Systems, 21 (3), pp. 338-344, May 1985.
I. M. Yaglom and A. Shields. Geometric Transformations. The Mathematical Association of America, 1962.
T. S. Yeo, N. L. Tan, and C. B. Zhang. A New Subaperture Approach to High Squint SAR Processing. IEEE
Trans. on Geoscience and Remote Sensing, 39 (5), pp. 954-967, May 2001.
Z. Zeng and I. G. Cumming. Modified SPIHT Encoding for SAR Image Data. IEEE Trans. on Geoscience and
Remote Sensing, 39 (3), pp. 546-552, March 2001.
About the Authors
Ian G. Cumming received his B.Sc. in engineering physics from the University of Toronto and a
Ph.D. in computing and automation from Imperial College, University of London. He joined MacDonald
Dettwiler in 1977, where he has developed SAR signal processing algorithms, including Doppler estima-
tion and autofocus routines. He has been involved in algorithm design of the digital SAR processors
for SEASAT, SIR-B, ERS-1/2, J-ERS-1, and RADARSAT, as well as several airborne radar systems.
In 1993, Dr. Cumming joined the Department of Electrical and Computer Engineering at the University of
British Columbia, where he holds the MacDonald Dettwiler/NSERC Industrial Research Chair in Radar Remote
Sensing. The Radar Remote Sensing Laboratory supports a research staff of eight engineers and students,
working in the fields of SAR processing, SAR data encoding, satellite SAR two-pass interferometry, airborne
along-track interferometry, polarimetric radar image classification, and SAR Doppler estimation. The work of the
Radar Remote Sensing Group is described at http://www.ece.ubc.ca/sar/.
He was a visiting scientist at the German Aerospace Center, DLR, in Oberpfaffenhofen for one year in 1999.
Away from work, he enjoys hiking, skiing, and travel.
Frank H. Wong received his B.Eng. from McGill University in electrical engineering, his M.Sc. from Queen's
University in electrical engineering, and his Ph.D. in computer science from the University of British Columbia.
He joined MacDonald Dettwiler in 1977, and worked in the Landsat and SPOT imaging area for the first few
years. Then his interest switched to SAR, and he has been working on airborne and satellite SAR processing and
Doppler estimation ever since. He is affiliated with the Radar Remote Sensing Group at the University of British
Columbia, where he has also been a sessional lecturer on image processing for 18 years. He was a visitor to the
National University of Singapore for a year in 1999, where he did research in bistatic SAR processing. After
work, he enjoys chess, bridge, and table tennis.
Index
velocity
beam, 116
effective radar, 126-129
orbital, 116
platform, 116
zero Doppler
compression to, 119
line, 117
plane of, 117
time of, 117
Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation, Ian G. Cumming and
Frank H. Wong
Handbook of Radar Scattering Statistics for Terrain: Software and User's Manual, F. T. Ulaby and M. C. Dobson
Microwave Remote Sensing: Fundamentals and Radiometry, Volume I, F. T. Ulaby, R. K. Moore, and A . K. Fung
Microwave Remote Sensing: Radar Remote Sensing and Surface Scattering and Emission Theory, Volume II, F.
T. Ulaby, R. K. Moore, and A. K. Fung
Microwave Remote Sensing: From Theory to Applications, Volume III, F. T. Ulaby, R. K. Moore, and A. K. Fung
Radargrammetric Image Processing, F. W. Leberl
Understanding Synthetic Aperture Radar Images, Chris Oliver and Shaun Quegan
For further information on these and other Artech House titles, including previously considered out-of-print
books now available through our In-Print-Forever® (IPF®) program, contact:
Artech House
46 Gillingham Street
London SW1V 1AH UK
Phone: +44 (0)20-7596-8750
Fax: +44 (0)20-7630-0166
e-mail: [email protected]