Remote Sensing
1. Remote Sensing
Introduction and definition of remote sensing terms, Remote Sensing System, Electromagnetic
radiation and spectrum, spectral signature, Atmospheric windows, Different types of platforms.
Sensors and their characteristics, orbital parameters of a satellite. The "multi" concept in remote
sensing.
2. Image Interpretation
Principles of interpretation of aerial and satellite images, equipment and aids required for
interpretation, ground truth – collection and verification, advantages of multi-date and multi-band
images. Digital image processing concepts.
Remote Sensing
Remote Sensing is defined as the science and technology by which the characteristics of objects
of interest can be identified, measured, or analyzed without direct contact.
Electromagnetic radiation which is reflected or emitted from an object is the usual source of
remote sensing data.
In current usage, the term "remote sensing" generally refers to the use of satellite- or aircraft-
based sensor technologies to detect and classify objects on Earth, including on the surface and in
the atmosphere and oceans, based on propagated signals (e.g. electromagnetic radiation).
There are two types of remote sensing technology: active and passive.
RADAR and LiDAR are examples of active remote sensing, where the time delay between
emission and return is measured, establishing the location, speed and direction of an object.
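To make the time-delay measurement concrete, here is a minimal sketch (not tied to any particular sensor) that converts a measured round-trip delay to range; the 66.7 microsecond delay is an arbitrary illustrative value.

```python
# Minimal sketch: range from an active sensor's round-trip time delay.
# The pulse travels out to the target and back, so one-way range = c*t/2.

C = 299_792_458.0  # speed of light in m/s

def range_from_delay(delay_s: float) -> float:
    """Return the one-way distance (m) for a measured round-trip delay (s)."""
    return C * delay_s / 2.0

# Example: a 66.7 microsecond echo corresponds to roughly 10 km.
print(range_from_delay(66.7e-6))  # ~9998 m
```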
Remote sensing is defined as the technique of obtaining information about objects through the
analysis of data collected by special instruments that are not in physical contact with the objects
of investigation.
There are four basic components of a remote sensing system (Fig. 1): (1) a target; (2) an energy
source; (3) a transmission path; and (4) a satellite sensor (e.g., Landsat, SPOT, or the SIR-C
radar) which records the intensity of electromagnetic radiation (sunlight) reflected from the earth
at different wavelengths.
Remote sensing is the process of acquiring details about an object without physical, on-site
observation, using satellites or aircraft. Remote sensors are mounted on aircraft or satellites to
gather data by detecting energy reflected from the Earth.
Remote Sensing Techniques. Remote sensing utilizes satellite and/or airborne sensors to
collect information about a given object or area. Passive sensors (e.g., spectral imagers) detect
natural radiation that is emitted or reflected by the object or area being observed.
The advantages of remote sensing include the ability to collect information over large spatial
areas; to characterize natural features or physical objects on the ground; to observe surface areas
and objects on a systematic basis and monitor their changes over time; and the ability to integrate
this data with other data sets.
A key feature of remote sensing by earth observation satellites is the ability to observe a broad
area at one time; for example, a single scene covering Tokyo, Yokohama and the surrounding
area of about 75 km × 75 km was taken by the Japanese Earth Resources Satellite-1 (JERS-1).
Remote sensing applications. Agriculture - Satellite and airborne images are used as mapping
tools to classify crops, examine their health and viability, and monitor farming practices.
Difference between Photogrammetry and Remote Sensing
Photogrammetry is the science of measuring distances, angles, areas, etc. in photographs.
“Remote Sensing”, on the other hand, can deal with all other sensors (e.g. RADAR, SONAR,
imaging, etc.) and doesn't necessarily have spatial measurement as its main concern.
Difference between GIS and remote sensing.
A geographic information system (GIS) is a computer-based tool for mapping and analyzing
features and events on earth. On the other hand, remote sensing is the science of collecting data
regarding an object or a phenomenon without any physical contact with the object.
Active and passive remote sensing.
Remote sensing instruments are of two primary types - active and passive. Active sensors
provide their own source of energy to illuminate the objects they observe. Passive sensors, on
the other hand, detect natural energy (radiation) that is emitted or reflected by the object or scene
being observed.
Remote sensing tools
What are the tools of remote sensing? Passive remote sensing works with sunlight which is
reflected by the surface of the earth; sensors detect this light and store it.
Three remote sensing tools that geographers can use
The operation of remote sensing is carried out by sensors, sonar systems, and cameras carried
on aircraft and satellites. Remote sensing allows the mapping of ocean floors without having to
go into the depths of the ocean.
Good resolution for a photo
For a 4" x 6" print, the image resolution should be 640 x 480 pixels minimum. For a 5" x 7"
print, the image resolution should be 1024 x 768 pixels minimum. For an 8" x 10" print, the
image resolution should be 1536 x 1024 pixels minimum. For a 16" x 20" print, the image
resolution should be 1600 x 1200 pixels minimum.
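These minimums can be checked with simple arithmetic: required pixels = print size in inches × pixel density. A minimal sketch, assuming a 300 DPI print-quality target (the DPI value is an assumption of this example, not a figure from the text above):

```python
# Minimal sketch: pixels needed for a print at a target density (DPI).
# The 300 DPI default is a common print-quality rule of thumb, assumed
# here for illustration rather than taken from the text above.

def pixels_needed(width_in: float, height_in: float, dpi: int = 300) -> tuple[int, int]:
    """Return (width_px, height_px) required for the given print size."""
    return round(width_in * dpi), round(height_in * dpi)

print(pixels_needed(4, 6))   # (1200, 1800)
print(pixels_needed(8, 10))  # (2400, 3000)
```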
There are four primary types of "resolution" for rasters:
Spatial.
Spectral.
Radiometric.
Temporal.
Remote Sensing Systems (RSS) is a private research company founded in 1974 by Frank
Wentz. It processes microwave data from a variety of NASA satellites. Most of their research is
supported by the Earth Science Enterprise program. The company is based in Santa Rosa,
California.
GPS and remote sensing
Remote sensing relies on various devices and instruments, such as satellites and aerial cameras,
for capturing imagery. GPS stands for Global Positioning System; it is widely used in many
applications related to surveying and navigation. GIS stands for Geographical Information
System.
Difference between GIS and GPS
GPS uses satellites that orbit Earth to send information to GPS receivers that are on the ground.
The information helps people determine their location. GIS stands for Geographical Information
System. GIS is a software program that helps people use the information that is collected from
the GPS satellites.
Atmospheric window
Atmospheric windows are wavelengths at which electromagnetic radiation from the sun can
penetrate the Earth's atmosphere. Remote sensing takes advantage not only of the visible
spectrum (red, green and blue) but also of non-visible light.
Two windows in the Earth's atmosphere
Atmospheric Windows. The Earth's atmosphere absorbs electromagnetic radiation at most
infrared, ultraviolet, X-ray, and gamma-ray wavelengths, so there are only two atmospheric
windows, in the radio and visible wavebands, suitable for ground-based astronomy.
Clouds and the atmospheric window
There is a gap in infrared absorption by carbon dioxide and water vapor: the atmospheric
window. When present, clouds can “close” the atmospheric window, because they can absorb IR
at these wavelengths, while carbon dioxide and water vapor cannot.
Wavelength range of the atmospheric window
This means that, in the scarcely absorbed continuum of wavelengths (8 to 14 μm), the radiation
emitted by the earth's surface into a dry atmosphere, and by the cloud tops, mostly passes
unabsorbed through the atmosphere and is emitted directly to space; there is also partial window
transmission in the far infrared.
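As a rough illustration, a simple lookup can test whether a wavelength falls inside a window. The ranges below are approximate illustrative values (an assumption of this sketch), not a definitive transmission model:

```python
# Minimal sketch: test whether a wavelength lies in an approximate
# atmospheric window. The ranges are rough illustrative values, not a
# real transmission model.

WINDOWS_UM = [
    (0.4, 0.7),   # visible
    (0.7, 1.1),   # near-infrared
    (8.0, 14.0),  # thermal infrared window discussed above
]

def in_window(wavelength_um: float) -> bool:
    """Return True if the wavelength (micrometres) lies in a listed window."""
    return any(lo <= wavelength_um <= hi for lo, hi in WINDOWS_UM)

print(in_window(10.0))  # True: inside the 8-14 um thermal window
print(in_window(6.5))   # False: a strong water-vapour absorption region
```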
Infrared pass through the atmosphere
Some wavelengths of infrared radiation pass through the Earth's atmosphere, while others are
blocked - this gives rise to 'infrared windows' which can be measured from the ground.
Importance of atmospheric windows in remote sensing
Atmospheric Window. One important practical consequence of the interaction of
electromagnetic radiation with matter and of the detailed composition of our atmosphere is that
only light in certain wavelength regions can penetrate the atmosphere well. These regions are
called atmospheric windows.
How the atmosphere absorbs radiation
Our atmospheric gases absorb different narrow bands of incoming solar radiation. The
smaller molecules of oxygen and nitrogen absorb very short wavelengths of solar radiation,
while the larger molecules of water vapor and carbon dioxide absorb primarily longer infrared
radiant energy.
Remote sensing platforms
There are three broad categories of remote sensing platforms: ground based, airborne, and
satellite. Ground based - A wide variety of ground based platforms are used in remote
sensing. Some of the more common ones are hand held devices, tripods, towers, and cranes.
Meaning of platform in remote sensing
Remote sensing platforms can be defined as the structures or vehicles on which remote
sensing instruments (sensors) are mounted. Thus, the higher the sensor is mounted, the larger
the ground footprint of each measurement and the more synoptic the view obtained.
Ground based platform
The vehicle or carrier for a remote sensor to collect and record energy reflected or emitted from a
target or surface is called a platform. Ground based sensors may be placed on a ladder,
scaffolding, a tall building, a cherry-picker, a crane, etc.
Orbital perturbations. Atmospheric drag is a non-conservative force and will continuously
take energy away from the orbit. Counterintuitively, the effect of atmospheric friction is to speed
up the motion of the satellite as it spirals inward.
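This seemingly paradoxical speed-up follows from the circular-orbit relation v = sqrt(mu/r): as drag lowers the orbital radius r, the speed v rises. A minimal sketch (the altitudes are arbitrary examples):

```python
import math

# Minimal sketch: circular orbital speed v = sqrt(mu / r) around Earth,
# showing why a satellite spiralling inward under drag moves faster.

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def circular_speed(altitude_m: float) -> float:
    """Orbital speed (m/s) for a circular orbit at the given altitude."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

# Drag lowers the orbit from 500 km to 400 km: the speed increases.
print(circular_speed(500e3))  # ~7616 m/s
print(circular_speed(400e3))  # ~7673 m/s
```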
Orbital Parameters of the Earth
Three parameters that describe the Earth's orbit around the sun: (1) eccentricity, (2) axial tilt
(or obliquity), and (3) time of perihelion (or precession).
Slant range in satellite communication
In radio electronics, especially radar terminology, slant range is the line-of-sight distance along
a slant direction between two points which are not at the same level relative to a specific datum.
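As a simple sketch of the geometry, assuming a flat-Earth approximation (an assumption of this example, which ignores Earth curvature and refraction), the slant range follows from the height difference and the elevation angle:

```python
import math

# Minimal sketch: slant range under a flat-Earth approximation.
# For a height difference h viewed at elevation angle theta,
# slant_range = h / sin(theta). Curvature and refraction are ignored.

def slant_range(height_diff_m: float, elevation_deg: float) -> float:
    """Line-of-sight distance (m) for a height difference and elevation angle."""
    return height_diff_m / math.sin(math.radians(elevation_deg))

# Example: an aircraft 10 km above the radar, seen at 30 degrees elevation.
print(slant_range(10_000, 30))  # 20000.0 m
```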
“Multi” concept for remote sensing
The “multi” concept for remote sensing applications refers to multisource, multiscale,
multipolarization, multifrequency, and multitemporal imagery. The accuracy in classifying a
scene can be increased by using images from several sensors operating at different wavelengths
of the electromagnetic spectrum.
With a growing number of satellite sensors the coverage of the earth in space, time and the
electromagnetic spectrum is increasing fast. To be able to utilize all this information, a number
of approaches for data fusion have been presented. The “multi” concept for remote sensing
applications refers to multisource, multiscale, multipolarization, multifrequency, and
multitemporal imagery. We present and discuss methods for multisource image analysis and
provide a tutorial on data fusion for remote sensing. The main focus is on methods
for multisource, multiscale, and multitemporal image classification. Guidelines for
choosing the best architecture and approach to data fusion for a given application are provided.
Earth observation is currently developing more rapidly than ever before. During the last decade
and the near future the number of satellites has been growing steadily, and the coverage of the
Earth in space, time and the electromagnetic spectrum is increasing correspondingly fast. The
accuracy in classifying a scene can be increased by using images from several sensors operating
at different wavelengths of the electromagnetic spectrum. The interaction between the
electromagnetic radiation and the earth’s surface is characterized by certain properties at
different frequencies of electromagnetic energy. Sensors with different wavelengths provide
complementary information about the surface. In addition to image data, prior information about
the scene might be available in the form of map data from Geographic Information Systems
(GIS). The merging of multisource data can create a more consistent interpretation of the scene
compared to an interpretation based on data from a single sensor. This development opens up
the potential for a significant change in how earth observation data are analysed. Traditionally,
such data have been analysed one satellite image at a time. The emerging, exceptionally good
coverage in space, time and the spectrum enables analysis of time series of
data, combining different sensor types, combining imagery of different scales, and better
integration with ancillary data and models. Thus, data fusion to combine data from several
sources is becoming increasingly more important in many remote sensing applications.
The variety of different sensors already available or being planned creates a number of
possibilities for data fusion to provide better capabilities for scene interpretation. This is referred
to as the “multi” concept in remote sensing. The “multi” concept includes: multitemporal,
multispectral/multifrequency, multipolarization, multiscale, and multisensor image analysis. In
addition to the concepts discussed here, imaging using multiple incidence angles can also
provide additional information.
Image interpretation
Image interpretation is the process of examining an aerial photo or digital remote sensing
image and manually identifying the features in that image. The image characteristics used are:
size, shape, tone/color, texture, shadow, association, and pattern.
Elements of image interpretation
The most basic are the elements of image interpretation: location, size, shape, shadow,
tone/color, texture, pattern, height/depth and site/situation/association. They are routinely used
when interpreting aerial photos and analyzing photo-like images.
Visual interpretation
Visual image interpretation is a first analysis approach to remote sensing imagery. Here, the
size, shape, and position of objects as well as the contrast and colour saturation are analysed. The
height of objects can be determined by indirect visual analysis.
Features used to identify satellite images
To unlock the rich information comprising a satellite image, you need to begin with five
basic steps:
1. Look for a scale.
2. Look for patterns, shapes and textures.
3. Define colors (including shadows).
4. Find north.
5. Consider your prior knowledge.
Principles of interpretation of aerial and satellite image
The interpretation of satellite imagery and aerial photographs involves the study of various
basic characteristics of an object with reference to spectral bands, which is useful in visual
analysis. The basic elements are shape, size, pattern, tone, texture, shadow, location, association
and resolution.
Types of aerial photographs
Aerial photographs are classified into the following types: (i) vertical photographs, (ii) low
oblique photographs, and (iii) high oblique photographs.
Elements of interpretation
Location
There are two primary methods to obtain a precise location in the form of coordinates: 1) survey
in the field using traditional surveying techniques or global positioning system instruments, or
2) collect remotely sensed data of the object, rectify the image, and then extract the desired
coordinate information. Most scientists who choose option 1 now use relatively inexpensive GPS
instruments in the field to obtain the desired location of an object. If option 2 is chosen, most
aircraft used to collect the remotely sensed data have a GPS receiver.
Size
The size of an object is one of the most distinguishing characteristics and one of the more
important elements of interpretation. Most commonly, length, width and perimeter are measured.
To be able to do this successfully, it is necessary to know the scale of the photo. Measuring the
size of an unknown object allows the interpreter to rule out possible alternatives. It has proved to
be helpful to measure the size of a few well-known objects to give a comparison to the unknown
object. For example, field dimensions of major sports like soccer, football, and baseball are
standard throughout the world. If objects like this are visible in the image, it is possible to
determine the size of the unknown object by simply comparing the two.
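The arithmetic behind such comparisons is simple: ground size equals the measurement on the photo multiplied by the scale denominator. A minimal sketch (the 1:20,000 scale and the measurement are arbitrary illustrative values):

```python
# Minimal sketch: ground size from a photo measurement and photo scale.
# ground_size = photo_measurement * scale_denominator
# The 1:20,000 scale is an arbitrary illustrative value.

def ground_size_m(photo_mm: float, scale_denominator: int) -> float:
    """Convert a measurement on the photo (mm) to ground distance (m)."""
    return photo_mm / 1000.0 * scale_denominator

# A feature measuring 5.25 mm on a 1:20,000 photo is ~105 m long,
# about the length of a standard soccer pitch.
print(ground_size_m(5.25, 20_000))  # 105.0
```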
Shape
There is an infinite number of uniquely shaped natural and man-made objects in the world. A
few examples of shape are the triangular shape of modern jet aircraft and the shape of a common
single-family dwelling. Humans have modified the landscape in very interesting ways that have
given shape to many objects, but nature also shapes the landscape in its own ways. In general,
straight, rectilinear features in the environment are of human origin; nature produces more
subtle shapes.
Shadow
Virtually all remotely sensed data are collected within 2 hours of solar noon to avoid extended
shadows in the image or photo. This is because shadows can obscure other objects that could
otherwise be identified. On the other hand, the shadow cast by an object acts as a key to its
identification, since the length of the shadow can be used to estimate the height of the object,
which is vital for its recognition. Take, for example, the Washington
Monument in Washington D.C. While viewing this from above, it can be difficult to discern the
shape of the monument, but with a shadow cast, this process becomes much easier. It is a good
practice to orient the photos so that the shadows are falling towards the interpreter. A
pseudoscopic illusion can be produced if the shadow is oriented away from the observer. This
happens when low points appear high and high points appear low.
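A minimal sketch of the height-from-shadow estimate mentioned above; the shadow length and sun elevation are illustrative values (in practice the sun elevation is derived from the acquisition time and location):

```python
import math

# Minimal sketch: object height from shadow length and sun elevation.
# height = shadow_length * tan(sun_elevation). Values are illustrative.

def height_from_shadow(shadow_len_m: float, sun_elev_deg: float) -> float:
    """Estimate object height (m) from its shadow length (m)."""
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))

# A 120 m shadow with the sun 45 degrees above the horizon suggests a
# ~120 m tall object, the right order of magnitude for a tall monument.
print(height_from_shadow(120, 45))  # ~120.0
```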
Tone and color
Real-world materials like vegetation, water and bare soil reflect different proportions of energy
in the blue, green, red, and infrared portions of the electromagnetic spectrum. An interpreter can
document the amount of energy reflected from each at specific wavelengths to create a spectral
signature. These signatures can help to understand why certain objects appear as they do on
black and white or color imagery. These shades of gray are referred to as tone. The darker an
object appears, the less light it reflects. Color imagery is often preferred because, as opposed to
shades of gray, humans can detect thousands of different colors. Color aids in the process of
photo interpretation.
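To make the idea of a spectral signature concrete, the following sketch computes the widely used normalized difference vegetation index (NDVI) from red and near-infrared reflectance; the reflectance values are illustrative, not measured data:

```python
import numpy as np

# Minimal sketch: NDVI = (NIR - Red) / (NIR + Red). Healthy vegetation
# reflects strongly in the near-infrared and absorbs red light, giving
# values near +1. Reflectance values below are illustrative.

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Compute NDVI pixel-wise, guarding against division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

red = np.array([0.05, 0.20, 0.10])  # vegetation, bare soil, water-like
nir = np.array([0.50, 0.30, 0.05])
print(ndvi(nir, red))  # approx [0.82, 0.20, -0.33]
```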
Texture
This is defined as the “characteristic placement and arrangement of repetitions of tone or color in
an image.” Adjectives often used to describe texture are smooth (uniform, homogeneous),
intermediate, and rough (coarse, heterogeneous). It is important to remember that texture is a
product of scale. On a large scale depiction, objects could appear to have an intermediate texture.
But, as the scale becomes smaller, the texture could appear to be more uniform, or smooth. A
few examples of texture are the “smoothness” of a paved road, or the “coarseness” of a pine
forest.
Pattern
Pattern is the spatial arrangement of objects in the landscape. The objects may be arranged
randomly or systematically. They can be natural, as with a drainage pattern of a river, or man-
made, as with the squares formed from the United States Public Land Survey System. Typical
adjectives used in describing pattern are: random, systematic, circular, oval, linear, rectangular,
and curvilinear to name a few.
Height and depth
Height and depth, also known as “elevation” and “bathymetry”, is one of the most diagnostic
elements of image interpretation. This is because any object, such as a building or an electric
pole that rises above the local landscape will exhibit some sort of radial relief. Also, objects that
exhibit this relief will cast a shadow that can also provide information as to its height or
elevation. A good example of this would be buildings of any major city.
Site/situation/association
Site has unique physical characteristics which might include elevation, slope, and type of surface
cover (e.g., grass, forest, water, bare soil). Site can also have socioeconomic characteristics such
as the value of land or the closeness to water. Situation refers to how the objects in the photo or
image are organized and “situated” with respect to each other. Most power plants have materials
and buildings associated in a fairly predictable manner. Association refers to the fact that when
you find a certain activity within a photo or image, you usually encounter related or “associated”
features or activities. Site, situation, and association are rarely used independent of each other
when analyzing an image. An example of this would be a large shopping mall. Usually there are
multiple large buildings, massive parking lots, and it is usually located near a major road or
intersection.
Equipment and aids
The National Geospatial-Intelligence Agency (NGA) is a combat support agency under the
United States Department of Defense and a member of the United States Intelligence
Community, with the primary mission of collecting, analyzing, and distributing geospatial
intelligence (GEOINT) in support of national security. NGA was known as the National
Imagery and Mapping Agency (NIMA) until 2003.
NGA headquarters, also known as NGA Campus East, is located at Fort Belvoir North Area in
Virginia. The agency also operates major facilities in the St. Louis, Missouri area, as well as
support and liaison offices worldwide. The NGA headquarters, at 2.3 million square feet
(214,000 m2), is the third-largest government building in the Washington metropolitan area after
The Pentagon and the Ronald Reagan Building.
In addition to using GEOINT for U.S. military and intelligence efforts, the NGA provides
assistance during natural and man-made disasters, and security planning for major events such as
the Olympic Games.
In September 2018, researchers at the National Geospatial-Intelligence Agency released a high-
resolution terrain map (detail down to the size of a car, and less in some areas) of Antarctica,
named the "Reference Elevation Model of Antarctica" (REMA).
Ground truth – collection and verification
In remote sensing, "ground truth" refers to information collected on location. Ground
truth allows image data to be related to real features and materials on the ground.
The collection of ground truth data enables calibration of remote-sensing data, and aids in the
interpretation and analysis of what is being sensed.
Ground truth in image processing
"Ground truth" means a set of measurements that is known to be much more accurate than
measurements from the system you are testing. For example, suppose you are testing a stereo
vision system to see how well it can estimate 3D positions. ... In such cases the "ground truth" is
the known parameters of the model.
More specifically, ground truth may refer to a process in which a "pixel" on a satellite image is
compared to what is there in reality (at the present time) in order to verify the contents of the
"pixel" on the image (noting that the concept of a "pixel" is somewhat ill-defined). In the case of
a classified image, ground truth allows the accuracy of the supervised classification performed
by the remote sensing software to be determined, and therefore helps minimize errors in the
classification such as errors of commission and errors of omission.
Ground truth is usually done on site, performing surface observations and measurements of
various properties of the features of the ground resolution cells that are being studied on the
remotely sensed digital image. It also involves taking the geographic coordinates of the ground
resolution cell with GPS technology and comparing them with the coordinates of the "pixel"
being studied, as provided by the remote sensing software, to understand and analyze the
location errors and how they may affect a particular study.
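A minimal sketch of that coordinate comparison, using the haversine great-circle distance (the coordinates are made-up illustrative values):

```python
import math

# Minimal sketch: location error between a GPS-surveyed ground point and
# the coordinate the software reports for the corresponding pixel.
# Uses the haversine great-circle distance; coordinates are illustrative.

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance (m) between two lat/lon points in degrees."""
    r = 6_371_000.0  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

gps_point = (42.6975, 23.3242)    # field GPS reading (illustrative)
pixel_point = (42.6977, 23.3245)  # coordinate reported for the pixel

print(f"{haversine_m(*gps_point, *pixel_point):.1f} m")  # ~33 m location error
```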
Ground truth is important in the initial supervised classification of an image. When the identity
and location of land cover types are known through a combination of field work, maps, and
personal experience these areas are known as training sites. The spectral characteristics of these
areas are used to train the remote sensing software using decision rules for classifying the rest of
the image. These decision rules such as Maximum Likelihood Classification, Parallelepiped
Classification, and Minimum Distance Classification offer different techniques to classify an
image. Additional ground truth sites allow the analyst to establish an error matrix which
validates the accuracy of the classification method used. Different classification methods may
have different percentages of error for a given classification project. It is important to choose a
classification method that works best with the number of classes used while producing the least
amount of error.
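A minimal sketch of the minimum-distance decision rule followed by an error matrix; the class means, pixels, and ground-truth labels are illustrative stand-ins for real training statistics and field data:

```python
import numpy as np

# Minimal sketch: minimum-distance-to-means classification plus an error
# (confusion) matrix. All values are illustrative stand-ins.

class_means = np.array([  # mean (red, NIR) reflectance per class
    [0.05, 0.50],         # 0: vegetation
    [0.20, 0.30],         # 1: bare soil
    [0.08, 0.04],         # 2: water
])

def min_distance_classify(pixels: np.ndarray) -> np.ndarray:
    """Assign each pixel to the class with the nearest spectral mean."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

pixels = np.array([[0.06, 0.48], [0.22, 0.28], [0.07, 0.06], [0.15, 0.35]])
truth = np.array([0, 1, 2, 0])  # ground-truth labels from field visits
pred = min_distance_classify(pixels)

# Error matrix: rows = ground truth, columns = predicted class. The last
# (mixed) pixel is vegetation classified as soil: an error of omission
# for vegetation and an error of commission for soil.
n = len(class_means)
matrix = np.zeros((n, n), dtype=int)
np.add.at(matrix, (truth, pred), 1)
print(matrix)
print("overall accuracy:", np.trace(matrix) / matrix.sum())  # 0.75
```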
Ground truth also helps with atmospheric correction. Since images from satellites have to pass
through the atmosphere, they can be distorted by atmospheric absorption, so ground truth can
help fully identify objects in satellite photos.
Advantages of multi-date and multi-band images in remote sensing
Bands in remote sensing
Electromagnetic radiance can be reflected by objects at various wavelengths along a spectrum.
Satellite or airborne sensors are able to detect a limited set of such wavelength values, which are
attributed to certain spectral bands.
Remote sensing in image processing
Remote sensing is the acquisition of physical data of an object without touch or contact. SAR
interferometry, laser altimetry, and high-resolution imaging allow a very detailed, three-
dimensional reconstruction of the earth's surface.
Satellite based terrain classifications can often be improved by using multi-date imagery which
provides good contrast between vegetation communities and surface materials. Using this
concept, a quantitative assessment was made of organic terrain in northern Canada in order to
predict off-road mobility conditions. A number of vegetation communities were found to be good
indicators of ground conditions, and the potential for off-road mobility of those areas dominated
by each vegetation community was assessed in the field. A digital multi-date satellite analysis
was then performed to demonstrate the usefulness of such a technique for off-road mobility
planning in northern areas where ground information is generally scarce. A principal component
analysis was used to identify the two most important MSS bands for each image. The selected
spectral data from two different images were then combined and subjected to an unsupervised
cluster analysis. The resulting color coded images were then related to ground conditions and
were found to be directly useful as thematic mobility maps for operational planning in such
environments.
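A minimal sketch of a comparable processing chain, assuming scikit-learn and a random array standing in for stacked multi-date bands (none of this is the original study's code):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Minimal sketch of a multi-date unsupervised classification chain:
# PCA to extract the most important components, then k-means clustering.
# The random array stands in for two co-registered multi-date images.

rng = np.random.default_rng(0)
pixels = rng.random((10_000, 8))       # 10,000 pixels x 8 stacked bands

pca = PCA(n_components=2).fit(pixels)  # two most informative components
components = pca.transform(pixels)

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(components)

# Each cluster would then be related to field-checked ground conditions
# and colour-coded to produce a thematic (e.g., mobility) map.
print(np.bincount(clusters))           # pixel count per spectral cluster
```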
The monitoring of crops is of vital importance for food and environmental security in a global
and European context. The main goal of this study was to assess the crop mapping performance
provided by the 100 m spatial resolution of PROBA-V compared to coarser resolution data (e.g.,
PROBA-V at 300 m) for a 2250 km2 test site in Bulgaria. The focus was on winter and summer
crop mapping with three to five classes. For classification, single- and multi-date spectral data
were used as well as NDVI time series. Our results demonstrate that crop identification using
100 m PROBA-V data performed significantly better in all experiments compared to the
PROBA-V 300 m data. PROBA-V multispectral imagery, acquired in spring (March) was the
most appropriate for winter crop identification, while satellite data acquired in summer (July)
was superior for summer crop identification. The classification accuracy from PROBA-V 100 m
compared to PROBA-V 300 m was improved by 5.8% to 14.8% depending on crop type.
Stacked multi-date satellite images with three to four images gave overall classification
accuracies of 74%–77% (PROBA-V 100 m data) and 66%–70% (PROBA-V 300 m data) with
four classes (wheat, rapeseed, maize, and sunflower). This demonstrates that three to four image
acquisitions, well distributed over the growing season, capture most of the spectral and temporal
variability in our test site. Regarding the PROBA-V NDVI time series, useful results were only
obtained if crops were grouped into two broader crop type classes (summer and winter crops).
Mapping accuracies decreased significantly when mapping more classes. Again, a positive
impact of the increased spatial resolution was noted. Together, the findings demonstrate the
positive effect of the 100 m resolution PROBA-V data compared to the 300 m for crop mapping.
This has important implications for future data provision and strengthens the arguments for a
second generation of this mission originally designed solely as a “gap-filler mission”.
Digital Image Processing
Earth observation satellites have been used for many decades in a wide field of applications.
Remote sensing images are representations of parts of the earth surface as seen from space.
The images may be analog or digital. Aerial photographs are examples of analog images while
satellite images acquired using electronic sensors are examples of digital images. A digital
image is a two-dimensional array of pixels.
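A minimal sketch of that idea with NumPy; the values are synthetic 8-bit "digital numbers", not real sensor data:

```python
import numpy as np

# Minimal sketch: a digital image as a two-dimensional array of pixels.
# The values are synthetic 8-bit "digital numbers", not real sensor data.

rows, cols = 4, 5
image = np.arange(rows * cols, dtype=np.uint8).reshape(rows, cols)

print(image.shape)  # (4, 5): rows x columns
print(image[2, 3])  # 13: the pixel value at row 2, column 3
print(image.max())  # 19: the brightest digital number in the scene
```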
In computer science, digital image processing is the use of computer algorithms to
perform image processing on digital images. It allows a much wider range of algorithms to
be applied to the input data and can avoid problems such as the build-up of noise and signal
distortion during processing.
Digital image classification. Spectral differentiation is based on the principle that objects of
different composition or condition appear as different colors in a multispectral or
hyperspectral image.
Types of image processing
There are two types of methods used for image processing namely, analogue and digital image
processing. Analogue image processing can be used for the hard copies like printouts and
photographs.
Elements of digital image processing
Elements of digital image processing systems: The basic operations performed in a digital
image processing system include (1) acquisition, (2) storage, (3) processing, (4)
communication, and (5) display.
Types of Image Files
JPEG (or JPG) - Joint Photographic Experts Group.
PNG - Portable Network Graphics.
GIF - Graphics Interchange Format.
TIFF - Tagged Image File.
PSD - Photoshop Document.
PDF - Portable Document Format.
EPS - Encapsulated Postscript.
AI - Adobe Illustrator Document.
Image classification
Image classification refers to the task of extracting information classes from a multiband
raster image. The resulting raster from image classification can be used to create thematic
maps. The recommended way to perform classification and multivariate analysis is through
the Image Classification toolbar.
Classification in image processing?
Image classification refers to the labelling of images into one of a number of predefined
categories. A classification workflow includes image sensors, image pre-processing, object
detection, object segmentation, feature extraction, and object classification. Many classification
techniques have been developed.
Fundamental steps in Digital Image Processing:
Image Acquisition. This is the first step of the fundamental steps of digital image processing.
Image Enhancement (a contrast-stretch sketch follows this list).
Image Restoration.
Color Image Processing.
Wavelets and Multiresolution Processing.
Compression.
Morphological Processing.
Segmentation.
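As a minimal sketch of the enhancement step, here is a linear contrast stretch, one common enhancement technique (the input array is synthetic, standing in for a low-contrast band):

```python
import numpy as np

# Minimal sketch of image enhancement: a linear contrast stretch.
# Pixel values are rescaled so the darkest pixel maps to 0 and the
# brightest to 255. The input is a synthetic low-contrast band.

def contrast_stretch(band: np.ndarray) -> np.ndarray:
    """Linearly rescale a band to the full 0-255 display range."""
    band = band.astype(float)
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo) * 255).astype(np.uint8)

low_contrast = np.array([[90, 100], [110, 120]], dtype=np.uint8)
print(contrast_stretch(low_contrast))
# [[  0  85]
#  [170 255]]
```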