RS & GIS Unit-1

UNIT – I

Concept of Remote Sensing

Remote sensing is a powerful technique that allows us to gather information about an object or area
without being in physical contact with it. It provides a valuable tool for observing and
understanding the Earth's surface and atmosphere from a distance, and plays a crucial role in
various scientific, environmental, and commercial applications.

 Data Acquisition from a Distance: The fundamental idea is to obtain information without
direct physical interaction. This is typically achieved using sensors mounted on platforms
like satellites, aircraft, or drones.
 Electromagnetic Radiation: A primary method involves detecting and measuring
electromagnetic radiation (EMR) that is reflected or emitted from the Earth's surface. This
includes visible light, infrared radiation, and microwaves.

 Sensors: Sensors are the instruments that detect and record EMR. They can be:
 Passive sensors: These detect naturally emitted or reflected radiation (e.g.,
sunlight).
 Active sensors: These emit their own energy and then measure the reflected
signal (e.g., radar).
 Data Analysis:
o The collected data is then processed and analyzed to extract meaningful information
about the target object or area.

Applications:

Remote sensing has a wide range of applications across various fields, including:

 Environmental Monitoring:
o Tracking deforestation, monitoring water quality, assessing climate change impacts.
 Agriculture:
o Crop monitoring, yield prediction, precision agriculture.
 Disaster Management:
o Mapping flood zones, monitoring wildfires, assessing earthquake damage.
 Urban Planning:
o Land use analysis, urban growth monitoring.
 Geology:
o Mineral exploration, geological mapping.
 Oceanography:
o Sea surface temperature monitoring, ocean current tracking.

Elements involved in remote sensing

Remote sensing is the science of acquiring information about the Earth's surface without
actually being in contact with it. This is done by sensing and recording reflected or emitted
energy, and then processing, analyzing, and applying that information. In much of remote sensing,
the process involves an interaction between incident radiation and the targets of interest.

1. Energy Source or Illumination (A) – the first requirement for remote sensing is to have an
energy source which illuminates or provides electromagnetic energy to the target of interest.
2. Radiation and the Atmosphere (B) – as the energy travels from its source to the target, it
will come in contact and interact with the atmosphere. This interaction may take place a
second time as the energy travels from the target to the sensor.

3. Interaction with the Target (C) - once the energy makes its way to the target through the
atmosphere, it interacts with the target depending on the properties of both the target and the
radiation.

4. Recording of Energy by the Sensor (D) - after the energy has been scattered by, or emitted
from the target, we require a sensor (remote - not in contact with the target) to collect and
record the electromagnetic radiation.

5. Transmission, Reception, and Processing (E) - the energy recorded by the sensor has to be
transmitted, often in electronic form, to a receiving and processing station where the data are
processed into an image (hardcopy and/or digital).

6. Interpretation and Analysis (F) - the processed image is interpreted, visually and/or
digitally or electronically, to extract information about the target which was illuminated.

7. Application (G) - the final element of the remote sensing process is achieved when we
apply the information we have been able to extract from the imagery about the target in order
to better understand it, reveal some new information, or assist in solving a particular problem.

ELECTROMAGNETIC SPECTRUM

The electromagnetic spectrum is essentially the range of all types of electromagnetic radiation
(EMR). This radiation travels in waves and spans an enormous range of wavelengths and
frequencies.

 The electromagnetic spectrum is fundamental to many technologies and scientific fields.


 Remote sensing relies heavily on detecting and analyzing EMR across different parts of the
spectrum.
 It plays a crucial role in communication, medicine, astronomy, and many other areas.

Electromagnetic Radiation:

o This is a form of energy that travels through space as waves.


o It includes everything from radio waves to gamma rays.
 Wavelength and Frequency:
o These are key properties of EMR.
o Wavelength is the distance between successive wave crests.
o Frequency is the number of waves that pass a point in a given time.
o Wavelength and frequency are inversely related: shorter wavelengths mean higher
frequencies, and vice versa (a short calculation follows this list).
 The Spectrum:
o The electromagnetic spectrum organizes EMR by wavelength or frequency.
o It's divided into distinct regions, including:
 Radio waves: Longest wavelengths, lowest frequencies.
 Microwaves: Used in cooking and communication.
 Infrared radiation: Associated with heat.
 Visible light: The portion we can see with our eyes.
 Ultraviolet radiation: From the sun, can cause sunburn.
 X-rays: Used in medical imaging.
 Gamma rays: Shortest wavelengths, highest frequencies.
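
The inverse relation follows from c = wavelength x frequency, where c is the speed of light
(about 3 x 10^8 m/s), so either quantity determines the other. A minimal Python sketch of the
conversion; the two wavelengths are illustrative examples, not tied to any particular sensor:

    C = 3.0e8  # speed of light, m/s

    def frequency_hz(wavelength_m):
        """Frequency (Hz) of EMR with the given wavelength (m)."""
        return C / wavelength_m

    print(frequency_hz(0.55e-6))  # green light (~0.55 um): ~5.5e14 Hz
    print(frequency_hz(0.056))    # C-band microwave (~5.6 cm): ~5.4e9 Hz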

Remote sensing terminology & units

Remote sensing requires familiarity with its specialized terminology and the units used to measure
various parameters.
Units of Measurement:

 Wavelength:
o Measured in micrometers (µm) or nanometers (nm).
 Frequency:
o Measured in Hertz (Hz).
 Spatial Resolution:
o Measured in meters (m).
 Radiance/Irradiance:
o Radiance is measured in watts per square meter per steradian (W·m⁻²·sr⁻¹);
irradiance is measured in watts per square meter (W·m⁻²).
 Degrees:
o Used to measure angles, such as those related to sensor viewing angles.

ENERGY INTERACTIONS WITH THE ATMOSPHERE

Before radiation used for remote sensing reaches the Earth's surface, it has to travel through
some distance of the Earth's atmosphere. Particles and gases in the atmosphere can affect the
incoming light and radiation. These effects are caused by the mechanisms of scattering and
absorption.

SCATTERING

Scattering occurs when particles or large gas molecules present in the atmosphere interact
with and cause the electromagnetic radiation to be redirected from its original path. How
much scattering takes place depends on several factors including the wavelength of the
radiation, the abundance of particles or gases, and the distance the radiation travels through
the atmosphere.

RAYLEIGH SCATTERING

Rayleigh scattering occurs when particles are very small compared to the wavelength of the
radiation. These could be particles such as small specks of dust, or nitrogen and oxygen
molecules. Rayleigh scattering causes shorter wavelengths of energy to be scattered much
more than longer wavelengths, and it is the dominant scattering mechanism in the upper
atmosphere. The fact that the sky appears "blue" during the day is because of this
phenomenon: as sunlight passes through the atmosphere, the shorter (blue) wavelengths
of the visible spectrum are scattered more than the other visible wavelengths.
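
Because Rayleigh scattering intensity varies roughly as 1/wavelength^4, a quick back-of-the-envelope
calculation shows how strongly blue light dominates. A minimal Python sketch; the wavelength
values are representative, not measured:

    blue, red = 0.45, 0.65        # representative wavelengths, micrometers
    ratio = (red / blue) ** 4     # Rayleigh intensity goes as 1/wavelength**4
    print(f"Blue is scattered about {ratio:.1f}x more strongly than red")  # ~4.4x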

ATMOSPHERIC INTERACTIONS WITH ELECTROMAGNETIC RADIATION

All electromagnetic radiation detected by a remote sensor has to pass through the atmosphere
twice: before and after its interaction with the Earth's surface. This passage alters the
speed, frequency, intensity, spectral distribution, and direction of the radiation.

As a result, atmospheric scattering and absorption occur. These effects are most severe in the
visible and infrared wavelengths, the range most crucial to remote sensing. During the
transmission of energy through the atmosphere, light interacts with gases and particulate
matter in a process called atmospheric scattering.

The two major processes in scattering are selective scattering and non-selective scattering.
Rayleigh, Mie, and Raman scattering are of the selective type. Non-selective scattering is
independent of wavelength. It is produced by particles whose radii exceed 10 micrometers,
such as the water droplets and ice fragments present in clouds.

Satellite Orbits

Satellite orbits are the paths that satellites follow around the Earth or other celestial bodies.
These paths are determined by a balance between the satellite's velocity and the gravitational
pull of the body it orbits.
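
For a circular orbit, this balance fixes the period through Kepler's third law,
T = 2*pi*sqrt(a^3/mu), where a is the orbital radius and mu is Earth's gravitational parameter.
A minimal Python sketch using standard constants; the altitudes are illustrative:

    import math

    MU = 3.986004e14     # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6378.0e3   # Earth's equatorial radius, m

    def orbital_period_min(altitude_km):
        """Period (minutes) of a circular orbit: T = 2*pi*sqrt(a^3/mu)."""
        a = R_EARTH + altitude_km * 1e3
        return 2 * math.pi * math.sqrt(a**3 / MU) / 60.0

    print(orbital_period_min(500))     # a typical LEO: ~95 minutes
    print(orbital_period_min(35786))   # GEO: ~1436 minutes (one sidereal day)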

Types of Satellite Orbits:

 Low Earth Orbit (LEO):


o Altitude: Typically 160 to 2,000 kilometers.
o Characteristics: Short orbital periods, used for Earth observation, imaging, and the
International Space Station.
o Advantages: High-resolution imaging.
 Medium Earth Orbit (MEO):
o Altitude: Typically 2,000 to 35,786 kilometers.
o Characteristics: Used for navigation systems like GPS.
o Advantages: Wider coverage area than LEO.
 Geostationary Orbit (GEO):
o Altitude: Approximately 35,786 kilometers.
o Characteristics: Satellites appear stationary from the ground, used for
communication and weather monitoring.
o Advantages: Continuous coverage of a specific area.
 Polar Orbit:
o Characteristics: Satellites pass over the Earth's poles, used for Earth observation and
mapping.
o Advantages: Scans the entire Earth.
 Sun-Synchronous Orbit (SSO):
o Characteristics: A special type of polar orbit where the satellite passes over the same
point on Earth at the same local time each day, used for Earth observation.
o Advantages: Consistent lighting conditions for imaging.

Sensor resolution

Sensor resolution refers to the level of detail captured by remote sensing devices. Four kinds
of resolution are commonly distinguished: spatial, spectral, radiometric, and temporal. Spatial
resolution tells us how fine the detail in an image is, while spectral resolution tells us how
well the sensor can separate different wavelengths of light. By understanding sensor resolutions,
we can better analyze remote sensing data to learn about things like land cover, vegetation
health, and environmental changes.

Types of Sensor Resolution:

 Spatial Resolution:
o This refers to the size of the smallest object that can be distinguished in an image.
o It's often expressed as the size of a pixel on the ground (e.g., 1 meter, 30 meters).
o A higher spatial resolution means finer detail can be seen.
 Spectral Resolution:
o This describes a sensor's ability to distinguish between different wavelengths of
electromagnetic radiation.
o Sensors can capture data in specific "bands" of the spectrum.
o High spectral resolution means a sensor can detect very subtle differences in
wavelengths, which is useful for identifying materials with distinct spectral
signatures.
 Radiometric Resolution:
o This refers to a sensor's sensitivity to differences in the intensity of radiation.
o It's often expressed as the number of "bits" used to record the data (a short
example follows this list).
o Higher radiometric resolution means a sensor can detect very subtle variations in
brightness or energy levels.
 Temporal Resolution:
o This describes how often a sensor captures images of the same area.
o A high temporal resolution means frequent revisits, which is useful for monitoring
changes over time.
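
Bit depth maps directly to the number of distinguishable brightness levels (2 raised to the
number of bits). A minimal Python sketch; the bit depths are generic examples, not tied to
specific sensors:

    for bits in (6, 8, 12, 16):
        print(f"{bits}-bit data: {2**bits} grey levels")
    # 6-bit: 64, 8-bit: 256, 12-bit: 4096, 16-bit: 65536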

Types of Sensors

In remote sensing, sensors are the instruments that detect and measure electromagnetic radiation.
They can be broadly categorized into two main types: passive and active sensors.

1. Passive Sensors: Passive sensors detect naturally occurring energy, such as sunlight reflected
from the Earth's surface or thermal radiation emitted by objects.

Characteristics:
 They rely on external energy sources.
 They operate primarily in the visible, infrared, and microwave portions of the
electromagnetic spectrum.

Examples:
o Radiometers: Measure the intensity of electromagnetic radiation.
o Spectrometers/Hyperspectral sensors: Measure radiation across many narrow
spectral bands, allowing for detailed analysis of material composition.
o Imaging sensors (like those in digital cameras): Capture images by detecting
reflected light.

Applications:

 Monitoring vegetation health.


 Measuring land surface temperature.
 Observing cloud cover.

2. Active Sensors: Active sensors provide their own source of energy to illuminate the target.
They then measure the energy that is reflected or backscattered to the sensor.

Characteristics:
o They emit their own radiation.
o They can operate day or night and in various weather conditions.
Examples:
o Radar (Radio Detection and Ranging):
 Emits microwave radiation and measures the backscattered signal.
 Used for mapping terrain, monitoring weather, and detecting objects.
o Lidar (Light Detection and Ranging):
 Emits laser pulses and measures the time it takes for the pulses to return.
 Used for creating high-resolution 3D maps, measuring atmospheric aerosols,
and studying vegetation structure.
Applications:
o Creating digital elevation models.
o Measuring forest canopy height.
o Monitoring ice thickness.

Remote Sensing Platforms and Sensors

The base on which remote sensors are placed to acquire information about the Earth's surface
is called a platform. Platforms can be stationary, like a tripod (for field observation) or a
stationary balloon, or mobile, like aircraft and spacecraft.

The types of platforms depend upon the needs as well as constraints of the observation
mission. There are three main types of platforms, namely

1) Ground borne
2) Airborne
3) Space borne

GROUND BORNE PLATFORMS:

• Ground-based platforms are used to record detailed information about the objects or features
of the Earth's surface.

• These are developed for the scientific understanding of signal-object and signal-sensor
interactions.

• They include both laboratory and field studies, and are used both in designing sensors and in
identifying and characterizing land features.

Example: Handheld platform, cherry picker, towers, portable masts and vehicles etc.

AIR BORNE PLATFORMS

• Airborne platforms were the sole non-ground-based platforms for early remote sensing
work.

• Aircraft remote sensing systems may also be referred to as sub-orbital, airborne, or aerial
remote sensing systems.

• At present, aeroplanes are the most common airborne platform.

• Observation platforms include balloons, drones ("short sky spy") and high-altitude sounding
rockets. Helicopters are occasionally used.

SPACE BORNE PLATFORMS

• In space-borne remote sensing, sensors are mounted on board a spacecraft (space shuttle or
satellite) orbiting the Earth.

• Space-borne (satellite) platforms have a high one-time cost but a relatively low cost per unit
area of coverage, and they can acquire imagery of the entire Earth without requiring permission.

• Space-borne imaging ranges from altitudes of about 250 km to 36,000 km.

Space-borne remote sensing provides the following advantages:

• Large area coverage;

• Frequent and repetitive coverage of an area of interest;

• Quantitative measurement of ground features using radiometrically calibrated sensors;

• Semi-automated computerized processing and analysis;

• Relatively lower cost per unit area of coverage;

• Satellites can cover far more area than aircraft.

IRS satellites

India's remote sensing program was developed with the idea of applying space technologies for
the benefit of humankind and the development of the country. The program involved the
development of three principal capabilities. The first was to design, build and launch satellites to
a Sun-synchronous orbit. The second was to establish and operate ground stations for spacecraft
control, data transfer along with data processing and archival. The third was to use the data obtained
for various applications on the ground.

India demonstrated the ability of remote sensing for societal applications by detecting coconut
root-wilt disease from a helicopter-mounted multispectral camera in 1970. This was followed by
flying two experimental satellites, Bhaskara-1 in 1979 and Bhaskara-2 in 1981. These satellites
carried optical and microwave payloads.

India's remote sensing programme under the Indian Space Research Organization (ISRO) began in
1988 with IRS-1A, the first of a series of indigenous, state-of-the-art operational remote
sensing satellites, which was successfully launched into a polar Sun-synchronous orbit on
March 17, 1988, from the Soviet cosmodrome at Baikonur.

Remote sensing data interpretation

Remote sensing data interpretation is the process of extracting meaningful information from
remotely sensed imagery. This involves analyzing the data to identify and understand features,
patterns, and changes on the Earth's surface. It's a crucial step in utilizing the vast amount of
data collected by satellites and aircraft.

Visual interpretation techniques in remote sensing involve the analysis of images by a human
interpreter to identify and understand features on the Earth's surface. This process relies on
recognizing patterns, shapes, and other visual cues.

Techniques and Considerations:


 Image Scale:
o Determines the level of detail visible. Large-scale images show more detail.
 Image Resolution:
o Spatial, spectral, radiometric, and temporal resolution affect image quality and
interpretability.
 Stereoscopic Viewing:
o Using overlapping images to create a 3D view, useful for terrain interpretation.
 Image Enhancement:
o Techniques like contrast stretching improve feature visibility.
 Knowledge of the Area:
o Familiarity with the geography and land cover is essential for accurate interpretation.
Applications:Visual interpretation is used in:
 Land use and land cover mapping.
 Environmental monitoring.
 Geological mapping.
 Disaster assessment.

Basic Elements of Visual Interpretation:

 Tone/Color:
o This refers to the brightness or hue of objects in an image. Variations help
distinguish different features.
 Shape:
o The form or outline of an object. Regular shapes often indicate human-made
features, while irregular shapes are common in natural features.
 Size:
o The relative or absolute dimensions of objects. Comparing sizes aids in
identification.
 Pattern:
o The spatial arrangement of objects. Repetitive patterns can reveal land use types.
 Texture:
o The roughness or smoothness of an area, determined by tonal variations.
 Shadow:
o Shadows provide information about object height and shape, but can also obscure
features.
 Association:
o The relationship between objects and their surroundings. Certain features are often
found together.

 Site:
o The geographic location and environmental context of the observed item.

CONVERGING EVIDENCE

Converging evidence is a very important concept for ensuring the accuracy and reliability of
interpretations. It essentially means using multiple, independent sources of remote sensing data,
or combining remote sensing data with other forms of information, to validate a particular
finding.
Multiple Sensor Data: Using data from different sensors that capture different parts of the
electromagnetic spectrum. For example:
 Combining optical imagery (visible and infrared) with radar data. Optical imagery can
provide information about surface reflectance, while radar can provide information about
surface roughness and structure.
 Using data from sensors with different spatial resolutions to provide both broad and detailed
views.
Temporal Data: Analyzing changes over time by comparing images taken at different dates. This
can help confirm trends or identify anomalies.
Ground Truthing: Comparing remote sensing data with on-the-ground observations. This is
crucial for validating interpretations and ensuring accuracy.
Combining with Other Data: Integrating remote sensing data with other sources of information,
such as:
 Geographic information system (GIS) data.
 Geological maps.
 Meteorological data.
 Historical records.

INTERPRETATION FOR TERRAIN EVALUATION


Terrain evaluation using remote sensing data involves interpreting various features and
characteristics of the Earth's surface to understand its suitability for specific purposes. This
process combines visual analysis, digital image processing, and geological/geomorphological
knowledge.
Data Preprocessing:

 Geometric correction (removing distortions).


 Radiometric correction (adjusting for atmospheric effects).
 Image enhancement (contrast stretching, filtering).

Digital Elevation Models (DEMs):

 Generating DEMs from stereo imagery or LiDAR data.


 Deriving terrain derivatives (slope, aspect, curvature); a minimal slope computation is sketched below.
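
A slope grid, for instance, can be derived from a DEM with simple finite differences. A minimal
Python sketch; the 3x3 elevation grid and the 30 m cell size are made-up illustrative values:

    import numpy as np

    def slope_degrees(dem, cell_size):
        """Slope (degrees) from a gridded DEM via finite differences."""
        dz_dy, dz_dx = np.gradient(dem, cell_size)   # elevation change per meter
        return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    dem = np.array([[100.0, 101.0, 103.0],
                    [100.0, 102.0, 105.0],
                    [101.0, 103.0, 106.0]])
    print(slope_degrees(dem, cell_size=30.0))        # 30 m grid spacing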

Terrain Evaluation Applications:

 Landslide Hazard Assessment: Identifying areas prone to landslides based on slope,
geology, and rainfall.
 Flood Risk Assessment: Mapping flood-prone areas based on DEMs and hydrological
modeling.
 Infrastructure Planning: Evaluating terrain suitability for roads, pipelines, and other
infrastructure.
 Resource Exploration: Identifying areas with potential mineral or groundwater resources.
 Environmental Impact Assessment: Assessing the impact of development projects on the
terrain.
 Agricultural Land Suitability: Assessing soil properties, slope, and water availability.

SPECTRAL PROPERTIES OF SOIL, WATER, AND VEGETATION

The spectral properties of soil, water, and vegetation are fundamental to interpreting remote
sensing data. Each of these Earth surface components interacts with electromagnetic radiation
in unique ways, creating distinct spectral signatures.

1. Vegetation:

 Visible Light (0.4 - 0.7 µm): Chlorophyll, the pigment responsible for photosynthesis,
strongly absorbs red and blue light, while reflecting green light. This is why healthy
vegetation appears green.
 Near-Infrared (NIR) (0.7 - 1.3 µm): Vegetation exhibits high reflectance in the NIR region
due to the internal cellular structure of leaves. This high reflectance is a key indicator of
healthy vegetation.

 Shortwave Infrared (SWIR) (1.3 - 2.5 µm): Water content within plant leaves strongly
influences reflectance in the SWIR region. Water absorption bands occur at approximately
1.4, 1.9, and 2.7 µm. Changes in water content, such as those caused by stress or disease,
can be detected in this region.
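
The red absorption and high NIR reflectance described above are exactly what vegetation indices
such as the widely used NDVI exploit. A minimal Python sketch; the reflectance arrays are
made-up illustrative values:

    import numpy as np

    def ndvi(nir, red):
        """NDVI = (NIR - Red) / (NIR + Red); near +1 for dense, healthy vegetation."""
        return (nir - red) / (nir + red + 1e-10)   # epsilon avoids divide-by-zero

    red = np.array([0.05, 0.10, 0.30])   # illustrative red reflectances
    nir = np.array([0.50, 0.40, 0.32])   # illustrative NIR reflectances
    print(ndvi(nir, red))                # high values indicate vigorous vegetation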

2. Soil:

 Factors Affecting Soil Reflectance:


o Moisture content: Increased moisture decreases reflectance.
o Organic matter: Darker soils with high organic matter have lower reflectance.
o Iron oxide: Iron oxides can cause reddish or yellowish hues and affect reflectance.
o Texture: Soil texture (sand, silt, clay) influences reflectance.
 General Trend:
o Soil reflectance generally increases with increasing wavelength.
o Water absorption features can be observed in the SWIR region.

3. Water:

 Visible Light:
o Clear water absorbs most of the incident light, especially in the red and infrared
regions.
o Blue and green light are reflected more, which is why water appears blue or green.
o Turbidity (suspended sediments) increases reflectance across the visible spectrum.
 NIR and SWIR:
o Water strongly absorbs NIR and SWIR radiation. This property is used to delineate
water bodies in remote sensing images.
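
Because water absorbs NIR while vegetation and soil reflect it strongly, a green/NIR contrast
such as the commonly used NDWI can separate water from land. A minimal Python sketch; the
reflectance values are made-up:

    import numpy as np

    def ndwi(green, nir):
        """NDWI = (Green - NIR) / (Green + NIR); positive values suggest open water."""
        return (green - nir) / (green + nir + 1e-10)

    green = np.array([0.08, 0.12])   # illustrative: a water pixel, a vegetation pixel
    nir   = np.array([0.02, 0.45])
    print(ndwi(green, nir))          # water positive, vegetation strongly negative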

DIGITAL IMAGE PROCESSING

Digital image processing is a critical field that involves using computer algorithms to manipulate
and analyze digital images. It's a fundamental technology used in a vast array of applications, from
everyday photography to complex scientific research.

 Digital Image: A digital image is represented as a grid of pixels (picture elements), each
with numerical values representing brightness and color.
 Algorithms: Digital image processing relies on algorithms to perform various operations on
these pixel values.
 Purpose: The goals of digital image processing include:
 Improving image quality.
 Extracting information from images.
 Automating image-based tasks.
Image Enhancement: Techniques to improve the visual quality of an image (a short contrast-stretch sketch follows this list), such as:
 Contrast adjustment.
 Brightness adjustment.
 Noise reduction.
 Sharpening.
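
As a concrete example, a percentile-based linear contrast stretch remaps the occupied part of
the histogram onto the full 0-255 display range. A minimal Python sketch; the tiny sample band
is made-up:

    import numpy as np

    def linear_stretch(img, lo_pct=2, hi_pct=98):
        """Percentile-based linear contrast stretch to the full 0-255 range."""
        lo, hi = np.percentile(img, (lo_pct, hi_pct))
        out = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
        return (out * 255).astype(np.uint8)

    band = np.array([[30., 40., 45.],
                     [50., 60., 70.],
                     [75., 80., 90.]])
    print(linear_stretch(band))   # dark, low-contrast values spread across 0-255
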
Image Restoration: Techniques to remove or reduce degradations in an image, such as:

 Blurring.
 Noise.
 Distortion.
 Image Segmentation: Dividing an image into regions or segments, each corresponding to
an object or feature.
 Image Classification: Assigning labels or categories to pixels or regions in an image.
 Feature Extraction: Identifying and extracting specific features from an image, such as
edges, corners, or textures.
 Image Compression: Reducing the size of an image file for storage or transmission.

Digital image processing involves a wide range of techniques, and understanding the distinction
between qualitative and quantitative analysis, as well as the role of pattern recognition, is crucial.

1. Qualitative Analysis:

o Qualitative analysis focuses on the descriptive aspects of an image. It involves
interpreting the visual characteristics of features without necessarily assigning
numerical values.
o It's about "what" and "where" things are, rather than "how much."
 Examples:
o Identifying different types of land cover (e.g., forest, urban, water) based on their
visual appearance.
o Describing the texture of a soil surface.
o Visually interpreting geological structures like faults and folds.
 Techniques:
o Visual interpretation.
o Image enhancement to improve visual clarity.
o Feature recognition based on shape, color, and texture.

2. Quantitative Analysis:

o Quantitative analysis involves extracting numerical measurements from an image. It
focuses on "how much" or "how many."
o It provides objective and measurable data.
 Examples:
o Measuring the area of a forest.
o Calculating the density of urban development.
o Determining the spectral reflectance of a surface.
o Measuring the changes in pixel values over time.
 Techniques:
o Image classification.
o Spectral analysis.
o Statistical analysis of pixel values.
o Change detection.

3. Pattern Recognition:

o Pattern recognition is the process of identifying regularities in data. In digital image
processing, this involves detecting and classifying patterns within an image.
o It's essential for both qualitative and quantitative analysis.
 Role:
o It enables automated or semi-automated analysis of images.
o It helps to identify and classify objects, features, and anomalies.
 Techniques:
o Machine learning algorithms (e.g., convolutional neural networks).
o Feature extraction.
o Image segmentation.
o Statistical pattern recognition.

Classification Techniques and Accuracy Estimation in digital image processing (DIP)

To work with classification and accuracy estimation in digital image processing,
particularly within the context of remote sensing, it is essential to understand the core concepts.

Classification Techniques in DIP:

 Supervised Classification:
 Minimum Distance: Assigns pixels to the class with the closest spectral distance
based on a predefined distance metric (e.g., Euclidean); a minimal sketch follows this list.
 Maximum Likelihood: Uses statistical probability to classify pixels based on the
assumption that data follows a normal distribution within each class.
 Support Vector Machines (SVM): A powerful technique that finds the optimal
hyperplane to separate different classes, often used for complex classification
problems.
 Unsupervised Classification:
 K-means Clustering: Partitions pixels into a predefined number of clusters based on
spectral similarity.
 Hierarchical Clustering: Creates a hierarchical tree structure by merging or
splitting clusters based on their similarity.
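
A minimal Python sketch of the minimum-distance rule, assuming class mean vectors have already
been estimated from training samples (all band values and class means are made-up):

    import numpy as np

    def minimum_distance_classify(pixels, class_means):
        """Assign each pixel to the class with the nearest spectral mean (Euclidean).
        pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands)."""
        d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
        return d.argmin(axis=1)          # index of the closest class mean per pixel

    means = np.array([[0.05, 0.02],      # illustrative water mean (red, NIR)
                      [0.08, 0.50]])     # illustrative vegetation mean
    pix = np.array([[0.06, 0.05],
                    [0.07, 0.45]])
    print(minimum_distance_classify(pix, means))   # -> [0 1]: water, vegetation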

Accuracy Estimation in DIP:

 Confusion Matrix: A table that compares the classified pixels to known ground truth data,
allowing calculation of various accuracy metrics (a computational sketch follows this list).
 Accuracy Metrics:
 Overall Accuracy: The percentage of correctly classified pixels across all classes.
 Producer's Accuracy (Class-specific): Represents the proportion of pixels correctly
classified within a specific class.
 User's Accuracy (Class-specific): Represents the proportion of pixels classified as a
specific class that actually belong to that class.
 Kappa Coefficient: A measure of agreement that considers the chance of correct
classification by random assignment.
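
All of these metrics can be read directly off the error matrix. A minimal Python sketch,
assuming rows hold reference (ground truth) classes and columns hold classified classes; the
2x2 matrix is made-up:

    import numpy as np

    def accuracy_metrics(cm):
        """Overall, producer's and user's accuracies, and kappa from an error matrix
        (rows = reference classes, columns = classified classes)."""
        total = cm.sum()
        overall = np.trace(cm) / total
        producers = np.diag(cm) / cm.sum(axis=1)   # per reference class
        users = np.diag(cm) / cm.sum(axis=0)       # per classified class
        chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
        kappa = (overall - chance) / (1 - chance)
        return overall, producers, users, kappa

    cm = np.array([[50, 5],
                   [10, 35]])            # illustrative 2-class error matrix
    print(accuracy_metrics(cm))          # overall = 0.85, kappa ~ 0.69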

Important aspects of accuracy assessment:

 Reference Data Collection: Gathering ground truth data through field surveys or high-
resolution imagery to compare with the classified image.
 Sampling Design: Selecting a representative set of sample points for accuracy assessment,
often using stratified random sampling to ensure adequate representation of each class.
 Error Matrix Analysis: Interpreting the confusion matrix to identify areas of
misclassification and potential issues with the classification process.

