NTA UGC NET/JRF Environmental Science (89) - Complete Guide
https://www.youtube.com/channel/UCrPRG_Wut_UrDhD_MTvQ6GA
NTA- UGC NET/JRF
ENVIRONMENTAL SCIENCES - COMPLETE GUIDE
Author: Madhuraj P K
Madhurendu, Mandur PO, Kannur, 670501
Mob: 8547867160, 8943671971
Language: English
Publisher: Envirobooks
Cover Design: Jyotish V M
ISBN: 978-93-5396-804-5
Edition: First
Price: 400
Printer Details: Printing park, Thalassery
Contents
Syllabus -
1. Geology 1-9
2. Soil Chemistry 10-12
3. Global Circulation 13-16
4. Air Pollution 17-24
5. Atmospheric Chemistry 25-31
6. Noise Pollution 32-34
7. Water Pollution 35-46
8. Pesticides 47-52
9. Municipal Solid Waste Management (MSWM) 53-61
10. Remote Sensing and GIS 62-69
11. Environmental Impact Assessment (EIA) 70-81
12. Environmental Analytical Chemistry 82-87
13. Environmental Laws, Conventions, and Agreements 88-100
14. Environmental Issues 101-107
15. Energy and Environment 108-120
16. Environmental Biology 121-157
17. Statistics 158-178
18. Previous Year Questions and Answers 179-209
UNIVERSITY GRANTS COMMISSION
SYLLABUS
ENVIRONMENTAL SCIENCE (89)
The syllabus contains ten units:
GEOLOGY
The Earth was formed by a gravity-driven process known as accretion. The originally homogeneous Earth
developed a dense core and a light crust by differentiation, which resulted in the formation of three
distinct geological layers: the Crust (rich in silicon, aluminium, and oxygen), the Mantle (rich in
magnesium, silicon, and oxygen), and the Core (rich in iron).
Geothermal Gradient
It is the rate of increase in temperature with depth in the Earth's interior (~1 °C per 40 m).
Much of the heat inside the Earth's interior is generated by the decay of radioactive elements. Continental
crust has a higher concentration of radioactive elements than oceanic crust.
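As a rough sketch (not from this guide), the gradient quoted above can be used to estimate crustal temperature at depth; the surface temperature and the assumption of a constant linear gradient are illustrative simplifications:

```python
# Illustrative only: linear extrapolation of the ~1 degC per 40 m gradient.
# The 15 degC surface temperature is an assumed default, not a book figure.

def temperature_at_depth(depth_m, surface_temp_c=15.0, gradient_c_per_m=1.0 / 40.0):
    """Linear estimate of crustal temperature (degC) at a given depth in metres."""
    return surface_temp_c + depth_m * gradient_c_per_m
```

For example, at 4 km depth this simple model gives roughly 100 °C above the surface temperature, which is why deep mines require active cooling.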
Goldschmidt Classification of Elements
1. Lithophile elements
These elements readily combine with oxygen (e.g., Aluminium, Boron, and Lithium).
2. Siderophile elements
These elements readily dissolve in iron, either as solid solutions or in the molten state (e.g., Osmium,
Gold, and Platinum).
3. Chalcophile elements
These elements readily combine with chalcogens other than oxygen (e.g., Silver, Arsenic, Cadmium,
Lead, and Mercury).
4. Atmophile or Volatile elements
These are elements with low boiling points (e.g., Nitrogen and Hydrogen).
Seismic discontinuities
1. The Moho discontinuity is the boundary between the crust and the mantle. The term "discontinuity"
refers to a surface at which seismic waves change velocity.
2. The Conrad discontinuity occurs between the upper and lower crust.
3. The Repetti discontinuity lies between the upper and lower mantle.
Continental Crust
It is a layer of granitic, sedimentary and metamorphic rocks that form the continents. Continental crust
is less dense than oceanic crust but is considerably thicker (35-40 km). This layer is sometimes called sial
because its bulk composition is richer in silicates and aluminum minerals and has a lower density
compared to the oceanic crust, called sima. The average density of continental crust is about 2.83 g/cm3.
The area of shallow seabed close to shores is known as continental shelves.
Types of Rocks
1. Igneous rocks are crystalline solids that form when magma cools and solidifies. Intrusive igneous rocks
such as diorite, gabbro, granite, and pegmatite are formed when magma solidifies below the Earth’s crust,
whereas Extrusive igneous rocks such as rhyolite, andesite, basalt, obsidian, and pumice are formed when
lava solidifies on or above the Earth's surface.
2. Metamorphic Rocks form as a result of the transformation of existing rock types under extreme
conditions. Foliated metamorphic rocks such as gneiss, phyllite, schist, and slate have a layered or banded
appearance, whereas Non-foliated metamorphic rocks such as marble and quartzite do not appear
banded or layered.
Metamorphosis
Limestone → Marble or Travertine
Shale → Slate
Sandstone → Quartzite
3. Sedimentary rocks, such as sandstone, limestone, and shale are formed by the accumulation and
lithification of sediments. The sedimentary rock cover of the continents of the Earth's crust is extensive
(73% of the Earth's current land surface), but the total contribution of sedimentary rocks is estimated to
be only 8% of the total volume of the crust. Sedimentary rocks are only a thin veneer over a crust
consisting mainly of igneous and metamorphic rocks.
2. Frost Wedging
Caused by freeze-thaw action of water which is trapped inside the cracks on the rock.
3. Temperature changes
Weathering due to temperature changes is most common in deserts, where temperature can change
drastically from day to night. This extreme variation in temperature causes expansion and contraction of
rock, which will lead to gradual weathering of the material.
4. Salt Wedging
It occurs when the salt crystallizes out of solution as the water evaporates out of cracks on the rocks. The
process exerts pressure on the surroundings and weakens the rock.
5. Abrasion
Abrasion occurs when rocks collide against one another while they are being transported by water, glacial
ice, etc.
Faults
Faults are fractures or cracks in the Earth's crust where rocks on either side of the crack have slid past each
other. In an active fault, the pieces of the Earth's crust along a fault move over time and can cause
earthquakes. Inactive faults used to have movement at some time in the past, but they no longer move.
There are three major types of faults found in the Earth's crust: normal faults, reverse faults, and
strike-slip faults.
2. Normal faults
Normal faults are formed when two blocks of crust pull apart as the crust stretches, often forming a valley. Normal faults
will not make an overhanging rock ledge.
Coal
In the process of coalification, peat or organic matter is altered to lignite, lignite is altered to sub-
bituminous, sub-bituminous coal is altered to bituminous coal, and bituminous coal is altered to
anthracite.
1. Anthracite
Anthracite is the highest rank of coal which is black and lustrous, often referred to as hard coal. It contains
a high percentage of fixed carbon (80-95%) and low volatile matter. It ignites slowly and burns with a
blue flame, indicating high efficiency and little or no pollutant emission. In India, small deposits of
anthracite are found in Jammu and Kashmir.
2. Bituminous
Bituminous coal usually has a high heating value and is widely used for electricity generation. It contains
40-80% carbon with very high calorific value due to the high proportion of carbon and low moisture.
3. Lignite
Lignite is also known as “Brown coal”, which contains 40-55% carbon with high moisture content (over
35%). It undergoes spontaneous combustion causing fire accidents in mines.
4. Peat
Peat or organic matter is the first stage of the coalification process. It contains a high percentage of
impurities and less than 50% of carbon. Sufficient moisture content and high volatile matter produce
smoke while burning.
Types of Magma
Basaltic: 45-55% SiO2, high iron content, 1000-1200 °C, low viscosity
Andesitic: 55-65% SiO2, intermediate iron content, 800-1000 °C, intermediate viscosity
Rhyolitic: 65-75% SiO2, low iron content, 650-800 °C, high viscosity
Longitudinal river zonation (from source to mouth): the rhithron zone (high DO) lies above the knick
point and the potamon zone (sandy, low DO) below it; the riverine and transitional zones lie toward the
mouth.
1. Aquifer
An aquifer is a saturated formation of earth material that not only stores water but also yields it in
sufficient quantities. This layer of rocks transmits water easily due to its high-water permeability.
Unconsolidated deposits of sand and gravel form good aquifers.
2. Aquitard
The underground formation through which only seepage is possible due to its low permeability. Water
may seep into an aquifer through an aquitard, such as a layer of sandy clay.
3. Aquiclude
Aquiclude is a geological formation that is porous but not permeable. Such rocks may bear water but do
not yield the same as they are impermeable.
4. Aquifuge
The underground formations, such as Granite and Quartzite belong to this category of rocks that are
neither porous nor permeable.
Types of Lakes
1. Monomictic lake
Monomictic lakes are holomictic lakes that mix from top to bottom during one mixing period every year.
Monomictic lakes may be subdivided into cold and warm types.
2. Dimictic lake
Dimictic lakes mix from the surface to the bottom twice every year. Dimictic lakes are holomictic, a
category which includes all lakes which mix one or more times per year. This type of mixing happens
in temperate regions during spring and autumn.
3. Meromictic lake
The meromictic lake has layers of water that never intermix. The accumulation of dissolved CO2 in such
lakes is potentially dangerous due to the limnic eruption which releases a large quantity of carbon dioxide
to the lake surroundings replacing oxygen needed for life.
4. Amictic lake
Amictic lakes are commonly referred to as lakes that never mix, and are perennially sealed off by ice
from most of the annual seasonal variations in temperature.
5. Kettle lake
A kettle is a depression or hole in an outwash plain formed by retreating glaciers or by draining
floodwaters. Many are filled with water, and are then called "kettle lakes".
6. Graben lake
When adjacent crustal blocks separate along fault lines, the steep, narrow depression between them can
fill with water to form a graben lake.
7. Maar lake
A maar is a broad, low-relief volcanic crater caused by a phreatomagmatic eruption (an explosion that
occurs when groundwater comes into contact with hot lava or magma).
Estuary
An estuary is a partially enclosed coastal body of brackish water with one or more rivers or streams
flowing into it. Estuaries form a transition zone between river environments and maritime environments.
There are different types of estuaries, each of which is created differently. The four main types of
estuaries are coastal plain estuaries/drowned river valleys, tectonic estuaries, bar-built estuaries and fjords.
Coastal plain estuaries form from rising sea level, which fills an already existing river valley with water,
creating an estuary. Tectonic estuaries form on faults, where tectonic activity has created a space that can
be filled in with water. The San Francisco Bay is an example of a tectonic estuary. Bar-built estuaries form
behind some sort of natural bar between the estuary and the ocean, such as a spit. Fjords are valleys that
were, at one time, carved out by glaciers and were then filled in with water.
Deltas
A river delta is a landform created by the deposition of sediments carried by a river as the flow leaves its
mouth and enters slower-moving or stagnant water. This occurs where a river enters an ocean, sea,
estuary, lake, reservoir, etc.
1. Arcuate or fan-shaped
The land around the river mouth arches out into the sea and the river splits many times on the way to
the sea, thus creating a fan effect. The Nile and Ganges deltas are both examples of arcuate deltas.
2. Cuspate
Cuspate delta is formed when the material brought down by a river is spread out evenly on either side of
its channel. The Tiber delta is an example of a cuspate delta.
3. Bird's foot
When the river splits on the way to the sea, each part of the river juts out into the sea resembling the
bird’s foot. The Mississippi delta of the US is an example of Bird’s foot delta.
Moraine
A moraine is any glacially formed accumulation of unconsolidated glacial debris that occurs in glaciated
regions on earth. There are five types of moraine formations:
Miscellaneous
Desert slope profile (figure): cliff, talus slope, and pediment slope.
Mineral Hardness (Mohs Scale)
Talc 1
Gypsum 2
Calcite 3
Fluorite 4
Apatite 5
Feldspar 6
Quartz 7
Topaz 8
Corundum 9
Diamond 10
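The ordinal nature of the scale above can be sketched in a short snippet — a mineral scratches another only if it sits strictly higher on the scale (the dictionary structure and function name here are illustrative, not from the guide):

```python
# Mohs hardness values as listed in the scale above (ordinal, not linear).
MOHS = {"talc": 1, "gypsum": 2, "calcite": 3, "fluorite": 4, "apatite": 5,
        "feldspar": 6, "quartz": 7, "topaz": 8, "corundum": 9, "diamond": 10}

def scratches(a, b):
    """True if mineral a can scratch mineral b (a is strictly harder)."""
    return MOHS[a] > MOHS[b]
```

This is how the scale is used in practice: quartz scratches feldspar, but talc scratches nothing above it.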
• Hamada is a type of desert landscape consisting of high and barren rocks where most of the sand
has been removed by deflation.
• The black soil or regur soil is formed by the weathering of igneous rocks and the cooling of lava
after a volcanic eruption. The soil in the Deccan Plateau consists of black basalt soil rich in
humus, iron and also contains a high quantity of magnesia, lime, and alumina. Columnar jointing
is a characteristic of basalt rocks. Basalt and Gabbro have the same mineral composition. Gabbro
is also known as black granite.
• Pycnocline is the layer where the density gradient is greatest within a body of water
• In open oceans, the temperature falls rapidly in the thermocline (500-800m) layer
• A structural trap is a type of geological trap which forms as a result of changes in the structure of
the subsurface, due to tectonic, diapiric, gravitational and compactional processes. These
changes block the upward migration of hydrocarbons and can lead to the formation of a
petroleum reservoir.
• Solifluction is the gradual movement of wet soil or other materials down a slope, especially in
places where frozen subsoil acts as a barrier to the percolation of water.
• Permafrost is a ground of rock and soil with a temperature that remains at or below 0°C for two
or more years. Most of the permafrost regions are located at high latitudes. At low latitudes,
alpine permafrost occurs at higher elevations.
• A caldera is a large cauldron-like hollow that forms shortly after the emptying of a magma
chamber or reservoir in a volcanic eruption.
• Cirque is the amphitheater-like valley formed by glacial erosion.
• Esker is a long winding ridge of stratified sand and gravel that generally occurs in glaciated or
formerly glaciated areas.
• Lahar is the destructive mudflow or debris flow from volcanoes along a river valley.
• A kame is a glacial landform of an irregularly shaped hill or mound composed of sand and gravel
that have been formed by glacial deposition.
• Subduction is a geological process that takes place at convergent boundaries of tectonic plates
where one plate moves under the other and is forced to sink due to gravity.
• Island arcs are long chains of active volcanoes with intense seismic activity. They are situated
along convergent plate boundaries, such as those of the Ring of Fire.
• Mantle convection is the source of energy for large scale tectonic movements.
• A fold is an undulating or wave-like structure that forms when rocks deform due to stress and
pressure. The formation of folds can be of two types: syncline (youngest rock occurs at the core
of the fold) and anticline (oldest rock occurs at the core of the fold)
SOIL CHEMISTRY
Humic substances (HS) are a group of organic materials present in the soil whose major components are
humin, humic acid, and fulvic acid. Humin is the insoluble component of soil organic matter whose main
function is to improve the water-holding capacity of the soil. Humic acids are a mixture of weak aliphatic
and aromatic organic acids that are soluble in water under alkaline conditions. Humic acid polymers are
capable of readily binding with clay minerals to form a stable organic complex, and they also show a
strong tendency to form salts with inorganic clay complexes. Fulvic acids are a mixture of aliphatic and
aromatic organic acids that are soluble in water at all pH conditions. They are smaller than humic acids
but are highly reactive due to the presence of a large number of carboxyl and hydroxyl groups. The
oxygen content of fulvic acids is higher than that of humic acids. Fulvic acid particles easily enter the
plant system owing to their small size.
Soil Macronutrients: C, H, O, N, P, K, Ca, Mg, S
Soil Micronutrients: Fe, Mn, B, Mo, Cu, Zn, Cl, Ni, Co, Na, Si
Soil Microorganisms can be classified into bacteria, fungi, algae, and protozoa. Some of the important
microorganisms and their functions are given below.
1. Azotobacter
Azotobacter, a free-living soil microbe, has one of the highest respiratory rates known among organisms
and requires a large amount of organic carbon for its growth. It fixes atmospheric nitrogen into ammonia
non-symbiotically.
Azotobacters are found worldwide, in climates ranging from extremely northern Siberia to Egypt and
India.
2. Mycorrhizae
It is the symbiotic association between a fungus and a plant. The majority of the world’s plants are
mycorrhizal with varying degrees of benefits derived from this association. The most common
mycorrhizal symbioses involve arbuscular mycorrhizae (crop species) and ectomycorrhiza (woody
species). Mycorrhizae play crucial roles in plant nutrition, soil biology, and soil chemistry. Other benefits
of the mycorrhizal association include protection against pathogens; enhanced tolerance to pollutants
and higher resistance to water stress, high soil temperature, and extreme soil pH.
3. Rhizobia
Rhizobia infect roots of leguminous plants creating nodules where Nitrogen is fixed. When the legume
dies, the nodules also break down releasing rhizobia back to the soil where they can live either
individually or reinfect a new host plant.
4. Frankia
Frankia lives in a symbiotic relationship with actinorhizal plants, such as Coriaria, Bayberry, and Alder. These
bacteria convert atmospheric nitrogen into ammonia using the enzyme nitrogenase. The Frankia
symbiosis is often utilized in land reclamation and restoration process using Casuarinales trees to hold
soil.
Soil Profile
The soil profile is a vertical section of the soil that depicts all of its horizons having different properties
and characteristics than the adjacent layers above or below.
O (Organic)
A (Surface)
B (Subsoil)
C (Substratum)
R (Bedrock)
Clay Minerals
Clay minerals are formed by the diagenetic and hydrothermal alteration of rocks. Most clay minerals are
described as hydrous-alumino-silicates. Structurally, clay particles are less than 2 microns in size and are
composed of planes of cations arranged in sheets, which may be tetrahedrally or octahedrally coordinated
with oxygen. They exhibit high dry strength and slow dilatancy. Among the different types of clay minerals,
silicate clays and their types are the most important from the exam point of view.
Soil alkalinity
Inland soil alkalinity is caused by sodium carbonate (Na2CO3) or sodium bicarbonate (NaHCO3) present
in the soil. Saline-alkaline (Usar) soils have high pH and undesirable salts on their surface. The addition
of gypsum (CaSO4·2H2O) or pyrite (FeS2) is a conventional method for the reclamation of such soils.
In this method, calcium replaces the exchangeable sodium and thereafter leaching of the exchanged
sodium takes place by flooding or extensive irrigation. Biological reclamation using cyanobacteria is
another effective approach. Cyanobacteria could be used to reclaim alkaline soils because they grow
successfully on such soils where most plants fail to grow.
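The exchange step described above — calcium displacing the adsorbed sodium, which is then leached away — can be written schematically (a textbook-style sketch, not an equation taken from this guide):

```latex
2\,\mathrm{Na^{+}}\text{-clay} + \mathrm{CaSO_4}
\;\longrightarrow\;
\mathrm{Ca^{2+}}\text{-clay} + \mathrm{Na_2SO_4}\ (\text{removed by leaching})
```

The divalent Ca²⁺ binds to the clay exchange sites more strongly than Na⁺, and the soluble sodium sulphate is flushed out by flooding or irrigation.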
Miscellaneous
• Lysimeter is used for determining the amount of evapotranspiration that takes place from plants. By
recording the amount of precipitation that an area receives and the amount lost through the soil,
the amount of water lost to evapotranspiration can be calculated.
• Agrobacterium tumefaciens is a plant pathogenic bacterium that is known as “nature’s genetic
engineer”. It manipulates the plant genome by injecting its DNA segment into the plant cells.
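The water-balance calculation described in the lysimeter bullet above can be sketched as follows (the variable names and units are illustrative, not from the guide):

```python
# Simple lysimeter water balance: evapotranspiration over a period equals
# precipitation minus drainage minus the change in stored soil water.
# All quantities in mm of water over the same period (an assumed convention).

def evapotranspiration_mm(precip_mm, drainage_mm, storage_change_mm):
    """Water-balance estimate of evapotranspiration (mm)."""
    return precip_mm - drainage_mm - storage_change_mm
```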
GLOBAL CIRCULATION
The winds are deflected to the right of their intended path in the northern hemisphere and to the left in
the southern hemisphere due to Coriolis force (CF). The impact of the Coriolis effect helps to define
prevailing wind patterns around the globe. The wind blows from a region of high pressure to a region of
low pressure due to Pressure gradient force (PGF). When Coriolis force and pressure gradient force
come into balance, the air which begins to move parallel to the isobars is known as Geostrophic wind.
The wind which is a balance of the pressure gradient force, Coriolis force, and centrifugal force (CFF) is
known as Gradient wind. The flow of gradient wind stays parallel to the height of contours.
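The geostrophic balance described above — the pressure gradient force balancing the Coriolis force — takes the following standard form, which is not given in the guide (symbols assumed here: ρ is air density, p is pressure, and f = 2Ω sin φ is the Coriolis parameter at latitude φ):

```latex
u_g = -\frac{1}{\rho f}\frac{\partial p}{\partial y}, \qquad
v_g = \frac{1}{\rho f}\frac{\partial p}{\partial x}, \qquad
f = 2\Omega \sin\varphi
```

Because f changes sign across the equator, the same pressure field drives winds deflected to the right in the northern hemisphere and to the left in the southern hemisphere, as stated above.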
Horse latitudes
Subtropical regions where sinking air creates areas of high pressure with calm winds and little
precipitation. The horse latitudes are regions located at about 30 degrees north and south of the equator.
Hadley cells
Hadley cell is a global scale atmospheric air circulation that features air rising near the equator and flowing
poleward at a height of 10-15 km above the Earth's surface, then descending in the subtropics and returning
equatorward.
Polar cells
The polar cells are found between about 60° latitude and the poles in each hemisphere. Cold and dense air in
these cells sinks and travels towards the equator.
Ferrel cells
In the Ferrel cells, air flows poleward and eastward near the surface and equatorward and westward at
higher altitudes. This movement is the reverse of the airflow in the Hadley cell. Ferrel cell is a thermally
indirect cell because it is driven by the motions of the cells on either side.
Wind Rose
A wind rose is a diagram that summarizes the frequency distribution of wind direction and speed at a
location over a period of time.
Local winds
Mesoscale winds or local winds are formed due to temperature and pressure differences, topographical
variations, and the shape and height of Earth's surface features. Such winds occur in a small spatial scale
with horizontal dimensions generally ranging from tens to a few hundred kilometers.
Local Winds
1. Katabatic wind: occurs in places such as Greenland and Antarctica where, on calm and clear nights,
heat is rapidly lost through ground radiation.
2. Convection wind: includes land and sea breezes.
3. Prevailing wind: also known as permanent wind or planetary wind; subdivided into trade winds, polar
winds, and anti-trade winds.
4. Periodic wind: also known as seasonal winds or monsoons; they blow from water bodies to land.
El Niño and La Niña
El Niño and La Niña events are a part of the global climate system which occurs in the Pacific Ocean
when the atmosphere above it changes from their normal state for several seasons. El Niño is the warm
phase and La Niña is the cold phase of the El Niño–Southern Oscillation (ENSO) cycle. El Niño and
La Niña episodes typically last 9-12 months. El Niño occurs more frequently than La Niña, roughly
twice as often. El Niño events are associated with the warming of the
central and eastern tropical Pacific, while La Niña events are the reverse, i.e. sustained cooling of the
same areas.
Gulf Stream
The Gulf Stream begins as a westward-moving current of warm water that flows north of South America
and into the Caribbean Sea. It becomes the Gulf Stream proper when it turns northwards along the east
coast of the United States of America. London and Paris enjoy mild winter climates because of the Gulf
Stream. Its presence has contributed to the development of intense cyclones of all types, both in the
atmosphere and in the ocean. The Gulf Stream is also an important potential source of renewable power
generation.
Scales of Meteorology
Weather phenomena occur at a variety of scales of motion. The major three meteorological scales are
microscale, mesoscale, and synoptic scale.
1. Microscale
Microscale meteorological events take place in areas of less than 10 km, and they generally last less than
one hour. Mixing and dilution process in the atmosphere, near ground turbulence, and gas exchange
between soil and vegetation are the important microscale events.
2. Mesoscale
Mesoscale phenomena generally range from ten kilometers to several hundred kilometers, and they last
more than one hour. Sea breezes and squall lines are the most prominent mesoscale phenomena.
3. Synoptic scale
Synoptic-scale phenomena, also known as cyclonic-scale phenomena, control day-to-day weather changes.
The synoptic-scale events, such as a hurricane, cyclone, high and low-pressure areas, and mid-latitude
depressions typically last more than 24 hours.
Miscellaneous
• The weather front is a boundary that separates masses of air of different densities and is the
principal cause of meteorological phenomena.
• In mesoscale meteorological systems, vertical velocity often equals or exceeds horizontal velocity.
Rising thermals are susceptible to non-hydrostatic processes such as buoyant acceleration, as are
winds accelerating through narrow mountain passes.
• The hydrostatic approximation is the assumption that the atmosphere is in hydrostatic
equilibrium. It expresses a balance between the vertical component of the pressure gradient force and
gravity. As an example, the pressure gradient force prevents gravity from collapsing Earth’s
atmosphere into a thin shell, while gravity prevents the pressure gradient force from diffusing the
atmosphere into space.
• Eddies are mesoscale ocean phenomena arising from instabilities of the ocean currents. They
are temporary loops of swirling water, which can be imaged by the seismic oceanography method.
• Venturi effect is the increase in velocity of a fluid or gas due to constriction of flow. In narrowing
terrains mountain winds can accelerate in speed due to this effect.
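The hydrostatic approximation mentioned in the bullets above reduces, in standard notation (not given in the guide; p is pressure, ρ density, g gravitational acceleration, z height), to the balance:

```latex
\frac{\partial p}{\partial z} = -\rho g
```

When vertical accelerations are comparable to g — as in strong thermals or flow through narrow passes — this balance fails, which is why such mesoscale flows are called non-hydrostatic.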
AIR POLLUTION
The stream of emitted gases, in the form of vapor or smoke, dispersing into the air from its source of
production is known as a plume. The rate of decrease of temperature with altitude in the stationary
atmosphere at a given time and location is known as the Environmental Lapse Rate (ELR). The average
ELR is assumed to be 6.49 °C/km. An adiabatic process is one in which the expansion and contraction
of a thermodynamic system take place without exchanging any heat or mass between the system and its
surroundings.
When a parcel of air expands, it pushes on the air around it, doing thermodynamic work. Since the
parcel does work but gains no heat, it loses internal energy so that its temperature decreases. The process
of expanding and contracting without exchanging heat is an adiabatic process. The adiabatic lapse rate in
the case of a rising air plume is assumed to be 9.8 °C/km. The Environmental Lapse Rate and Adiabatic
Lapse rate govern the diffusion or dispersion of air pollutants into the atmosphere.
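The 9.8 °C/km figure follows from the dry adiabatic lapse rate, the ratio of gravitational acceleration to the specific heat capacity of dry air at constant pressure — a standard derivation not shown in the guide:

```latex
\Gamma_d = \frac{g}{c_p}
\approx \frac{9.81\ \mathrm{m\,s^{-2}}}{1004\ \mathrm{J\,kg^{-1}\,K^{-1}}}
\approx 9.8\ \mathrm{K\,km^{-1}}
```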
3. Mass transfer due to bulk motion in the x-direction far out-shadows the contribution due to mass
diffusion.
C: concentration of the emission (micrograms/cubic meter) at any point x meters downwind of the
source, y meters laterally from the centerline of the plume, and z meters above ground level.
Q: quantity or mass of the emission (in grams) per unit of time (seconds)
µ: wind speed (in meter per second)
h: height of the source above ground level (m). σy and σz are the standard deviations of a statistically
normal plume in the lateral and vertical dimensions, respectively. They are the functions of x.
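The variables defined above are those of the standard Gaussian plume dispersion equation (with a ground-reflection term); the equation itself does not appear here, so its usual form is reproduced as a reference:

```latex
C(x,y,z) = \frac{Q}{2\pi\,\mu\,\sigma_y \sigma_z}
\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
\left[
\exp\!\left(-\frac{(z-h)^2}{2\sigma_z^2}\right)
+ \exp\!\left(-\frac{(z+h)^2}{2\sigma_z^2}\right)
\right]
```

The second exponential in the bracket accounts for the plume "reflecting" off the ground, and σy, σz grow with downwind distance x, spreading and diluting the plume.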
Atmospheric Inversion
In normal atmospheric conditions, the temperature falls by 6.49°C for an increase of 1km in altitude.
This vertical temperature gradient is called 'lapse rate' or ‘Environmental lapse rate’. Inversion occurs
when the cold layer of air at the ground level is covered by a warmer air at a higher level. This leads to
an increase in atmospheric temperature with respect to the increase in altitude. Inversion prevents the
vertical movement of air and the pollutants are concentrated below the inversion layer. In this case, the
atmosphere is said to be stable as it’s free of mixing and turbulence. Consequently, the pollutants will not
disperse into the higher altitude. Inversion generally occurs between the months of October and
February. The phenomenon leads to the accumulation of smoke and other air pollutants at the ground
level and it often prevents solar rays from heating the ground and surrounding air. Fog is generally
associated with inversions. Narrow valleys are more prone to inversion as horizontal air movement is
restricted. During the winter season (November to March) in North India, the prevailing atmospheric
inversion limits the dispersal of pollutants as the upper-level air descends to ground level. There
are two types of inversions: Radiation inversion and Subsidence inversion.
TYPES OF PLUMES
❖ Looping plumes are formed under strong lapse conditions (ELR> ALR) when the atmosphere
is highly unstable.
❖ Weak lapse conditions result in limited vertical mixing of gases. In such slightly stable
environments, the plume tends to attain a cone-like structure (coning plume).
❖ Fanning plumes are obtained under extreme inversion conditions. The stable environmental
conditions present just above the stack prevent upward movement of the plume.
❖ Lofting plume is considered to be suitable for industrial areas as it is ideal for the dispersion of
pollutants. It is formed when a strong super adiabatic lapse rate exists just above the stack and
negative lapse rate (inversion) exists just below the opening of the stack.
❖ Fumigating plume is obtained in the condition opposite to that of lofting plume, i.e., negative
lapse rate (inversion) exists just above the stack and a strong super adiabatic lapse rate exists just
below the stack. It is the most lethal form of plume among all other types.
❖ Trapping Plume is obtained when the inversion layer exists above and below the stack, the plume
neither goes upward nor goes downward, rather, it gets trapped between inversion layers.
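The lapse-rate comparisons above can be sketched in a toy stability check. This is a deliberate simplification: real plume shapes also depend on conditions above versus below the stack opening, which a single ELR number cannot capture.

```python
# Toy stability check based on the ELR vs ALR comparisons in the text.
ALR = 9.8  # dry adiabatic lapse rate, degC per km

def stability(elr_c_per_km):
    """Crude stability class from the environmental lapse rate (degC/km)."""
    if elr_c_per_km > ALR:
        return "unstable"        # strong lapse: favours looping plumes
    if elr_c_per_km < 0:
        return "inversion"       # temperature rises with height: fanning plumes
    return "slightly stable"     # weak lapse: coning plumes
```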
Keeling Curve
Radiation emission from Earth’s surface takes place in the infrared region (between 200 and 2500 cm−1)
in contrast with light emission from the sun which takes place in the visible region. The atmospheric
carbon dioxide absorbs Earth’s infrared radiation at the vibrational frequencies near the surface, which
leads to warming of the lower atmosphere. An increase in the concentrations of atmospheric CO2,
methane (CH4), nitrous oxide (N2O), and other greenhouse gases has enhanced the absorption rate of
infrared radiation in the lower atmosphere, thus creating Global warming effect. Among the greenhouse
gases, carbon dioxide is of greatest concern because of its long residence time in the atmosphere and
ability to exert a larger overall warming influence than all other gases combined. The Global Warming
Potential (GWP) was developed to compare the global warming impacts of different gases. GWP is a
measure of how much energy the emissions of 1 ton of a gas will absorb over a given period of time,
relative to the emissions of 1 ton of carbon dioxide (CO2). GWP values are generally determined for
time horizons of 20, 100, and 500 years. The following table shows the GWP of different greenhouse
gases for a time horizon of 100 years.
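As an illustration of how GWP converts mixed emissions to CO2-equivalents: the 100-year values used below (CH4 = 28, N2O = 265, as in IPCC AR5) are assumptions for this sketch, not figures from the guide's table.

```python
# Assumed 100-year GWP values (IPCC AR5, without climate-carbon feedbacks);
# other assessment reports quote somewhat different numbers.
GWP_100 = {"CO2": 1, "CH4": 28, "N2O": 265}

def co2_equivalent(emissions_t):
    """Convert {gas: tonnes emitted} to total tonnes of CO2-equivalent."""
    return sum(mass * GWP_100[gas] for gas, mass in emissions_t.items())
```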
The oceans are enormous sinks of carbon and they take up about a third of carbon dioxide produced
from anthropogenic activities. Carbon dioxide dissolves in the ocean to form carbonic acid (H2CO3),
bicarbonate (HCO3−), and carbonate (CO32−). The bicarbonate ion is the most abundant form of carbon
in the ocean, with a relative concentration of 91%, followed by the carbonate ion (8%) and CO2 (1%). Since
CO2 is an acid gas, the uptake of anthropogenic CO2 uses up carbonate ions and lowers the oceanic pH
causing ocean acidification.
Coral bleaching
Delicate coral reefs are home to millions of fish and other marine organisms. Coral bleaching is a global
issue caused by a rise in seawater temperatures. When water is warmer than the average levels, the
endosymbiotic relationship between coral polyps and algae breaks down, causing the expulsion of the
algal partner, which is responsible for generating up to 90% of the coral's energy by living inside the
tissues of the polyps.
5. Carbon monoxide (CO)
The main anthropogenic sources of carbon monoxide are waste incinerators, power plants, and petrol
vehicles that are not fitted with a catalytic converter. The incomplete combustion of various other fuels
(including wood, coal, charcoal, oil, paraffin, propane, natural gas, and trash) also results in the
production of carbon monoxide. CO poisoning can even cause death because of its ability to bind to
hemoglobin very strongly. The affinity between hemoglobin and carbon monoxide is almost 230 times
stronger than the affinity between hemoglobin and oxygen. In the atmosphere, carbon monoxide
molecules are generally short-lived but have an important role in ground-level ozone formation.
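The 230-fold affinity figure quoted above can be expressed as a Haldane-type equilibrium ratio. This is a sketch: the constant M is taken from the text, and the function name is invented here.

```python
# Haldane-type relation: [HbCO]/[HbO2] = M * pCO/pO2, with M the relative
# affinity of haemoglobin for CO over O2 (230 per the text above).
M = 230.0

def hbco_to_hbo2_ratio(p_co, p_o2):
    """Equilibrium carboxy- to oxy-haemoglobin ratio from partial pressures."""
    return M * p_co / p_o2
```

The relation shows why CO is so dangerous: even a partial pressure of CO 230 times smaller than that of oxygen yields equal amounts of HbCO and HbO2.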
6. Hydrocarbons
Hydrocarbons play a pivotal role in atmospheric chemistry. They are found in the atmosphere as trace
gases with methane (1.77 ppm) being the predominant hydrocarbon molecule. In nature, vegetation is
the main emitter of hydrocarbons, e.g., terpenes such as α-pinene, β-pinene, limonene, myrcene, ocimene, and α-terpinene, as well as isoprene. These are the most reactive compounds in the atmosphere due to the presence of olefinic bonds. The formation of blue haze in the atmosphere above some dense vegetation is due to the reaction between natural hydrocarbons and atmospheric radicals.
Gaseous and volatile organic hydrocarbons are of particular interest in air pollution studies. In the
atmosphere, hydrocarbons alone produce no adverse effects. They are of concern when they undergo
chemical reactions in the presence of sunlight and NOx forming photochemical oxidants. Some of the
hydrocarbon compounds present in the atmosphere are aldehydes, ketones, chlorofluorocarbons (CFCs), CH3Br, CF3Br, polychlorinated biphenyls (PCBs), carbon tetrachloride, etc.
The ozone depletion potential (ODP) of chlorofluorocarbons is defined as the ratio of the impact on
ozone from a specific chemical to the impact from an equivalent mass of CFC-11. ODP value depends
on the species’ reactivity, atmospheric residence time, and molecular mass. CFCs are unreactive in the troposphere but undergo photolytic decomposition in the stratosphere to produce chlorine radicals, which react with ozone molecules and break them apart. From an industrial-environmental perspective, hydrofluorocarbons (HFCs) are generally considered as substitutes for chlorofluorocarbons as they contain no chlorine atoms and have zero ODP value. Ozone layer depletion leads to an increase in UV-
B radiation reaching the Earth’s surface. Exposure to this harmful radiation can cause skin cancer, eye
cataracts, weakening of immune systems, damage to crops, and reductions in primary producers
(plankton) in the ocean.
Smog
Smog is a type of critical air pollution, originally named for the mixture of smoke and fog in the air. Two
distinct types of smog are recognized in the atmosphere: classical smog and photochemical smog.
Classical smog
- The main components are fog and coal smoke (SO2). It was first observed in London (1952).
- SO2 reacts with humidity in the air to form a sulphuric acid fog, which deposits on the particulates.
- Early morning hours of winter months are susceptible to the formation of classical smog.
- Chemically, it is reducing in nature and causes bronchial irritation.

Photochemical smog
- The word smog is a misnomer here, as it does not involve any smoke or fog. It was first observed in Los Angeles (1943).
- In the presence of UV radiation, NOx and hydrocarbons undergo photochemical reactions to produce toxic secondary air pollutants, such as PAN (peroxyacyl nitrates), aldehydes, ketones, and ozone.
- The secondary air pollutants are lachrymatory (eye irritants).
- It is oxidizing in nature due to the high concentration of photochemical oxidants.
Criteria air pollutants
Criteria pollutants are a set of air pollutants with national air quality standards that define permissible
concentrations of these substances in ambient air. The set of criteria pollutants include carbon monoxide,
lead, nitrogen dioxide, ozone, particulate matter, and sulfur dioxide. Exposure to these substances can
cause health effects, environmental effects, and property damage.
Miscellaneous
❖ Formaldehyde is a major indoor pollutant that causes Sick building syndrome (SBS), i.e., a
medical condition in which people in a building suffer from symptoms of illness or feel unwell
for no apparent reason.
❖ Water vapor is the most abundant greenhouse gas and the largest contributor to the natural greenhouse effect, but it is not a significant driver of global warming due to its short lifetime in the atmosphere.
❖ The GWP of greenhouse gases are in the order of CO2<CH4<N2O<CFC<HFC<SF6
❖ Atmospheric brown cloud is a layer of air pollution containing aerosols such as soot or dust that
absorb as well as scatter incoming solar radiation. Aerosols in brown clouds primarily consist of black carbon and organic carbon. It is prevalent in tropical regions having a lengthy dry season.
❖ Benzo[a]pyrene (BaP) is formed from incomplete combustion of organic matter at a
temperature between 300°C-600°C.
❖ Frontal inversions form at the frontal surface in the upper atmosphere between the cool and
warm air separating two air masses. Similar inversions occur on a more local scale at the sea
breeze front.
ATMOSPHERIC CHEMISTRY
A plot of the lifetimes of different atmospheric chemical species shows a huge variation in their residence
time in the atmosphere. The lifetime of a chemical determines the spatial and temporal variability of the
species. Long-lived gases such as CFCs and N2O are well mixed in the troposphere, whereas free radicals exhibit much more temporal and spatial variability owing to their short life span.
The stratosphere lies above the troposphere and extends from 13 km to about 44 km. It has the highest
concentration of ozone (O3) molecules in the atmosphere.
The mesosphere is the third highest layer of Earth's atmosphere. In this layer, the temperature drops
with increasing altitude to the mesopause, which marks the top of this middle layer of the atmosphere.
It is the coldest region of Earth's atmosphere, with an average temperature of around −85 °C near the mesopause.
The thermosphere extends from the mesopause (80 km) to the thermopause (500–1000 km).
Considerable height variation is observed in the location of thermopause due to changes in solar activity.
In this hottest layer of the atmosphere, even the absorption of small amounts of solar radiation can
significantly increase the air temperature as there are relatively few molecules and atoms. The
temperature in this layer can rise as high as 1500 °C. The layer is devoid of clouds and water vapor.
However, non-hydrometeorological phenomena such as the aurora borealis and aurora australis are
sometimes observed in the thermosphere. The International Space Station orbits in this layer, between
350 and 420 km.
The exosphere is the outermost layer of Earth's atmosphere. This layer is composed of extremely low
densities of hydrogen, helium, and several heavier molecules including nitrogen, oxygen, and carbon
dioxide closer to the exobase (i.e. the upper limit of atmosphere). The atoms and molecules in this layer
can travel considerable distances without colliding with one another.
Ionosphere
Ionosphere (75-1000 km) is defined as the layer of the Earth's atmosphere that is prone to ionization by
solar and cosmic radiations. Within the ionosphere, there are several distinct layers (D, E, F1, and F2) that influence the propagation of radio waves through the atmosphere.
1. D Region
D layer is found at altitudes between about 50 and 90 kilometers and the region attenuates high-frequency
radio waves only during the day. Lyman-alpha radiation is primarily responsible for the formation of this region through the ionization of nitric oxide gas. The greater the number of gas molecules present in the
layer, the higher the number of collisions with free electrons.
Attenuation = k/f²
Where: k = constant, f = frequency of operation (Hz)
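A quick numerical sketch of this relation in Python (the constant k is arbitrary here, so only the scaling with frequency is meaningful):

```python
# Sketch of the D-layer attenuation relation: attenuation = k / f^2.
# The value of k is illustrative; only the frequency dependence matters.

def d_layer_attenuation(freq_hz, k=1.0e12):
    """Relative D-region attenuation for a signal of frequency freq_hz (Hz)."""
    return k / freq_hz ** 2

# Doubling the operating frequency cuts the attenuation to one quarter.
a_5mhz = d_layer_attenuation(5e6)
a_10mhz = d_layer_attenuation(10e6)
print(a_10mhz / a_5mhz)  # -> 0.25
```

This is why higher-frequency signals pass through the D layer with far less daytime loss.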
2. E Region
E layer exists above the D layer at altitudes between about 100 and 125 kilometers, and it can be observed
during day and night. Instead of attenuating the radio waves, this layer mainly reflects them, often to a
degree where they are returned to earth. The phenomenon of reflection is dependent upon the frequency
and the angle of incidence of incoming radiations.
3. F Region
F region is the most important layer in the context of high-frequency radio communications. The lower
part of the region is known as the F1 layer and the higher part is known as the F2 layer. Generally, the
F1 layer is situated at an altitude of 300 km and is followed by an F2 layer above it at around 400 km.
Most of the high-frequency waves penetrate through the F1 layer to the F2 layer, which is the most
reflecting layer for high-frequency waves that helps in establishing worldwide radio communication.
Types of UV Radiation
- UVA (315 nm – 400 nm): Causes premature aging and wrinkling of the skin. Accounts for approximately 95 percent of the UV radiation reaching the Earth's surface.
- UVB (280 nm – 315 nm): More dangerous than UVA and the prime cause of skin cancers and cataracts.
- UVC (100 nm – 280 nm): A lethal form of ultraviolet radiation, but it fails to reach the earth’s surface due to absorption in the atmosphere by ozone.
At higher altitudes, Chlorofluoro Carbons are exposed to an intense flux of ultraviolet radiation. As a
consequence, the photolytic decomposition of CFC molecules takes place.
The tropospheric abundance of CFCs has been declining slowly from the peak that occurred in 1992-1994, whereas the abundance of HCFCs in the troposphere continues to increase. Springtime Antarctic ozone depletion caused by halogens has been significant
throughout the last decade. Polar stratospheric clouds (PSCs), also known as nacreous clouds, are made
up mostly of supercooled droplets of water and nitric acid and are implicated in the formation of the
ozone hole in the Antarctic and Arctic regions. The heterogeneous chemical reactions that occur on the
surface of PSCs lead to the production of Cl• in the stratosphere, which directly destroys the ozone layer
by turning it into ordinary oxygen molecules.
The Dobson unit is the basic measure used in ozone research to determine the thickness of the
stratospheric ozone layer. One Dobson unit is equivalent to a layer of pure ozone 0.01 mm thick at
standard temperature and pressure. The average thickness of the ozone layer above the earth's surface is
about 300 Dobson Units or 3 millimeters. Equivalent effective stratospheric chlorine (EESC) is a
parameter that is used to quantify man-made ozone depletion and its changes with time. It provides an
estimate of the total effective amount of chlorine and bromine present in the stratosphere.
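The Dobson-unit conversion above amounts to simple arithmetic; a minimal sketch in Python (the function name is illustrative):

```python
# Dobson unit conversion: 1 DU corresponds to a 0.01 mm layer of pure
# ozone at standard temperature and pressure.

def ozone_thickness_mm(dobson_units):
    """Thickness (mm) of the ozone column if compressed to pure ozone at STP."""
    return dobson_units * 0.01

print(ozone_thickness_mm(300))  # -> 3.0, the average column thickness in mm
```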
CFC Number
Chlorofluorocarbons (CFCs) contain carbon and some combination of fluorine and chlorine atoms.
To find the number of a given CFC molecule, add 90 to its designation number: the resulting three digits x, y, and z give the number of carbon, hydrogen, and fluorine atoms, respectively, with chlorine filling the remaining bonds. Thus, CFCl3 is CFC-11 (11 + 90 = 101: one carbon, no hydrogen, one fluorine) and C2Cl3F3 is CFC-113 (113 + 90 = 203: two carbons, no hydrogen, three fluorines).
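The numbering follows the standard "rule of 90": adding 90 to the CFC number yields three digits giving the counts of carbon, hydrogen, and fluorine atoms, with chlorine filling the remaining bonds. A minimal Python sketch (the function name is illustrative):

```python
# "Rule of 90": CFC number + 90 = a three-digit code xyz, where
# x = carbon atoms, y = hydrogen atoms, z = fluorine atoms;
# all remaining bonds are occupied by chlorine.

def cfc_number(carbon, hydrogen, fluorine):
    """CFC designation number from atom counts (chlorine fills the rest)."""
    return 100 * carbon + 10 * hydrogen + fluorine - 90

print(cfc_number(1, 0, 1))  # CFCl3    -> 11  (CFC-11)
print(cfc_number(2, 0, 3))  # C2Cl3F3  -> 113 (CFC-113)
```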
NO2 + hν → NO + O
The atomic oxygen then combines with molecular oxygen (O2) to form ground-level ozone.
O + O2 + M → O3 + M
O3 + NO → NO2 + O2
The energy-absorbing molecule (M) is required to stabilize the ozone molecule from undergoing rapid
decomposition. Under normal conditions, the ozone formed will quickly react with NO to produce NO2
and O2, but when hydrocarbons are abundant in the atmosphere, NO is instead consumed by peroxyacyl radicals (RCO3•), allowing the O3 formed from O and O2 to accumulate. As a result, the ozone concentration in the
atmosphere reaches dangerous levels.
Structure of PAN
The overall photochemical reaction leading to the formation of photochemical smog can be represented as:
The presence of excessive O3, along with aldehydes, ketones, and PAN constitute photochemical smog
in the atmosphere.
O3 + hν → O(¹D) + O2
O + O2 + M → O3 + M
O(¹D) + H2O → 2 OH•
(Excited O(¹D) atoms that are collisionally quenched to ground-state O re-form ozone via the second reaction.)
OH• reacts with volatile organic compounds (VOCs) by abstracting a hydrogen atom. A few compounds like chlorofluorocarbons (CFCs), nitrous oxide (N2O), and carbon dioxide (CO2) do not react at all or react very slowly with the hydroxyl radical. The rate of oxidation of methane (CH4) by OH• is much lower than that of other organic compounds. This is the reason why methane has a higher concentration than any other trace gas in the troposphere.
The above figure shows a low concentration of hydroxyl radicals near the ground over the Manaus rain forest in Brazil. Vegetation emits various organic compounds into the atmosphere (e.g. isoprene), which react with hydroxyl radicals present over the region and reduce the concentration of OH•. The major atmospheric pollutants that react with OH• are carbon monoxide (CO) and ozone (O3).
Hydroxyl radicals can oxidize carbon monoxide into carbon dioxide. Reactions of OH• with atmospheric trace gases usually produce hydroperoxyl radicals (HO2), which are recycled back to OH by reaction with nitric oxide (NO). Although OH is the most important atmospheric oxidant, its nighttime concentration can fall close to zero due to the absence of the solar radiation required for its formation. Nighttime tropospheric chemistry is dominated by the nitrate radical (NO3•) and the ozone molecule (O3).
Nitrate radical (NO3•) is generated at night by the reaction of NO2 with O3. This reaction takes place
during the daytime too, but NO3 gets instantly photolyzed by solar radiation.
NO2 + O3 ---------- > NO3 + O2
NO3• further reacts with NO2 and establishes an equilibrium with N2O5
NO3 + NO2 ⇌ N2O5
NO3• generally does not perform hydrogen abstraction reactions like its daytime counterpart, OH•.
However, the reaction between dimethyl sulfide (emitted by oceanic plankton) and NO3 is an exception.
NO3 often acts as the dominant sink for isoprene and terpenoid compounds released by plants. The
atmospheric concentrations of NO3 and N2O5 are quantified using variants of cavity ring-down
spectroscopy (CRDS) and cavity-enhanced absorption spectroscopy (CEAS).
Nitrogen (N) forms oxides in which nitrogen exhibits oxidation numbers from +1 to +5. Gaseous nitric oxide (NO) is the most thermally stable oxide of nitrogen with an unpaired electron. Two molecules of NO can combine to form a dimer (N2O2) by coupling their unpaired electrons: 2 NO ⇌ N2O2.
Hydroperoxyl radicals (HO2) can be regarded as an important chemical reservoir for hydroxyl radicals
(OH). Gaseous hydroperoxyl is involved in reaction cycles that cause the destruction of stratospheric
ozone. In the troposphere, they are generated as a byproduct of the oxidation of carbon monoxide and
hydrocarbons by hydroxyl radicals. As HO2 is quite reactive, it acts as a ‘cleanser’ of the atmosphere by
degrading certain organic pollutants.
NOISE POLLUTION
According to the World Health Organization, sound intensity levels below 70 dB are not harmful to living beings, regardless of how long or consistent the exposure is. However, exposure for more than 8 hours to constant noise beyond 85 dB can cause noise-induced hearing loss.
The Noise Pollution (Regulation and Control) Rules, 2000, were legislated by the government of India
to abate the harmful effects of noise. The law set regulations on the use of loudspeakers, megaphones,
and any other form of public address system, along with banning the use of the same from 10 PM to 6
AM.
Under these Rules, all areas have been divided into 4 categories, viz.,
1) Industrial Areas (A)
2) Commercial Areas (B)
3) Residential Areas (C)
4) Silence Zones (D)
An ambient air quality standard (AAQS) in respect of noise was specified to each of these areas during
day (6 AM to 10 PM) and night (10 PM to 6 AM) times. It has also become mandatory to designate an
area of 100 meters around hospitals, educational institutions, and courts as silence zones.
The decibel scale and intensity level values are objective measures of sound. The decibel rating of a given sound can be determined by measuring its intensity level. The loudness of a sound, however, is subjective: it cannot be measured directly and varies from person to person. The unit phon is used to indicate an individual’s perception of loudness. 1 phon is equivalent to 1 decibel at 1000 Hz.
The sone scale is associated with the loudness of a sound. The scale is based on the observation that a
10-phon increase in a sound level is most often perceived as a doubling of loudness. According to the
sone scale, 1 sone sound is defined as a sound whose loudness is equal to 40 phons.
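The two scales can be related directly: since 1 sone is defined as 40 phons and each 10-phon increase doubles the perceived loudness, sones = 2^((phons − 40)/10). A minimal sketch in Python:

```python
# Phon-to-sone conversion: 1 sone = 40 phons, and every 10-phon
# increase is perceived as a doubling of loudness.

def sones_from_phons(phons):
    """Loudness in sones for a given loudness level in phons."""
    return 2 ** ((phons - 40) / 10)

print(sones_from_phons(40))  # -> 1.0 (definition of 1 sone)
print(sones_from_phons(50))  # -> 2.0 (10 phons louder = twice as loud)
print(sones_from_phons(60))  # -> 4.0
```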
While studying long-term trends in environmental noise, a single-value descriptor like Leq (Equivalent Level) is used to define an entire day's noise history. Another useful set of parameters is the Ln values or
statistical noise levels, such as L10, L50, and L90.
Percentile levels
Leq is the preferred method to describe sound levels that vary over time, resulting in a single decibel value which takes into account the total sound energy over the period of interest. Leq is the imaginary constant noise level that would result in the same total sound energy being produced over a given period. The meter first converts the dB values to their corresponding sound energies (squared sound pressures), sums them, divides by the number of samples, and finally converts this mean energy back to decibels. An equivalent continuous A-weighted sound pressure level is a common measurement used in industry to characterize noise levels in loud environments.
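The energy-averaging described above can be sketched in a few lines of Python (sample values are hypothetical; real meters also apply A-weighting and fixed sampling intervals):

```python
import math

def leq(levels_db):
    """Equivalent continuous level: energy-average of sampled sound levels (dB)."""
    # Convert each dB value to relative energy, average, convert back to dB.
    mean_energy = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# A constant 60 dB history gives Leq = 60 dB; one loud event dominates the average.
print(round(leq([60, 60, 60, 60]), 1))  # -> 60.0
print(round(leq([60, 60, 60, 90]), 1))  # -> 84.0
```

Note how a single 90 dB sample pulls the Leq far above the arithmetic mean of the dB values, because the averaging is done on energies, not on decibels.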
L10 is the noise level exceeded for 10% of the measurement duration. It is also called the ‘average peak level’. This parameter is generally used to give an indication of the upper limit of fluctuating noise due to intermittent events such as traffic congestion.
L50 is the noise level exceeded for 50% of the measuring time. It is statistically the middle point of the noise readings and represents the median of the fluctuating noise levels.
L90, also known as the ‘average background level’, is the noise level exceeded for 90% of the measuring time. It generally indicates the ambient noise level of an environment.
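These percentile levels can be computed from sampled readings; a sketch in Python using a simple nearest-rank method (the sample values are hypothetical):

```python
# Ln is the level exceeded for n% of the measurement time, i.e. the
# (100 - n)th percentile of the sampled levels (nearest-rank method).

def ln_level(samples_db, n):
    """Noise level (dB) exceeded for n% of the time."""
    loudest_first = sorted(samples_db, reverse=True)
    rank = max(1, round(n / 100 * len(loudest_first)))
    return loudest_first[rank - 1]

samples = [55, 57, 58, 60, 62, 63, 65, 68, 72, 80]  # hypothetical dB readings
print(ln_level(samples, 10))  # L10, average peak level   -> 80
print(ln_level(samples, 50))  # L50, median level          -> 63
print(ln_level(samples, 90))  # L90, background level      -> 57
```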
Auditory Perception
Our ears are sound pressure receptors: the eardrums are vibrated by the sound pressure as a sound field quantity. The perceived sound consists of periodic acoustic pressure vibrations superimposed on the surrounding static air pressure. It is a misconception that a particular sound source (e.g. a jet plane) has a fixed dB value, because the sound level depends on the distance between the sound source and the listener. The sound level usually drops by 6 dB for each doubling of the distance from the source.
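The 6 dB drop per doubling of distance follows from spherical spreading of a point source in a free field, Lp(r) = Lp(r_ref) − 20·log10(r/r_ref); a minimal sketch in Python (distances and levels are illustrative):

```python
import math

def level_at_distance(level_ref_db, r_ref, r):
    """Sound level at distance r for a point source measured at r_ref (free field)."""
    return level_ref_db - 20 * math.log10(r / r_ref)

# Each doubling of distance lowers the level by about 6 dB.
print(round(level_at_distance(100, 1, 2), 1))  # -> 94.0
print(round(level_at_distance(100, 1, 4), 1))  # -> 88.0
```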
Sound Pressure Level (dB) Sound pressure (Pa) Permissible exposure time
88 0.50 4 Hours
91 0.71 2 Hours
94 1 1 Hour
WATER POLLUTION
Drinking water, also known as potable water, is water from any source that is safe for drinking and
cooking purposes. It includes treated or untreated water supplied by any means for human consumption.
The quality standards for drinking water in India are prescribed by the Bureau of Indian Standards in IS
10500: 2012.
Selected limits from IS 10500: 2012 (acceptable limit / permissible limit, mg/L):
- Boron: 0.5 / 1
- Calcium: 75 / 200 (excess causes poor lathering and deterioration of the quality of clothes; incrustation in pipes; scale formation)
- Chloramines (as Cl2): 4.0 / no relaxation
Contamination in water is confirmed by the presence of Escherichia coli (E. coli), a species of fecal
coliform bacteria. Although E. coli is the more precise indicator of fecal pollution, the count of
thermotolerant coliform bacteria is also considered as an alternative. A specific strain of E. coli bacteria
known as E. coli O157: H7 is responsible for most of the disease outbreaks. According to the Bureau of
Indian Standards, all water intended for drinking must not contain E. coli or thermotolerant coliform in
any 100 ml sample. Potable water quality tests for total coliform bacteria include membrane filtration,
multiple tube fermentation, and MPN (Most Probable Number) methods. Bacteria are usually removed
from water by disinfection and/or filtration. Filtration alone may not be effective, but it helps in removing
sediments that shelter the bacteria. The three chemicals most commonly used as primary disinfectants
are chlorine, chlorine dioxide, and ozone. Other disinfectants include iodine, ultraviolet light, and
physical methods such as boiling or steam sterilization. Monochloramine is generally used as a residual
disinfectant for distribution.
Water chlorination
Water chlorination is a chemical disinfection method in which chlorine or chlorine compounds such as sodium hypochlorite are added to water. Chlorine is a strong oxidizing agent that kills disease-causing
pathogens, such as bacteria, viruses, and protozoans via oxidation of organic molecules. In particular,
chlorination is used to prevent the spread of waterborne diseases such as cholera, dysentery, and typhoid.
When added to water, chlorine is converted to an equilibrium mixture of hypochlorous acid (HOCl)
and hydrochloric acid (HCl). In acidic solution, the major species are Cl2 and HOCl, whereas in alkaline
solution only ClO− (hypochlorite ion) is present.
Cl2 + H2O ⇌ HOCl + HCl
The electrically neutral chlorine and hypochlorous acid molecules can easily penetrate the negatively charged surface of pathogens, causing the disintegration of lipids in the cell wall.
Shock chlorination is a method used to reduce the bacterial and algal residue in swimming pools, water
wells, and springs. The procedure involves adding a large amount of hypochlorite into water. The
hypochlorite can be in the form of a powder or a liquid such as chlorine bleach (solution of sodium
hypochlorite or calcium hypochlorite in water). Shock-chlorinated water should not be used until the sodium hypochlorite concentration in the water reduces to 3 ppm or until the calcium hypochlorite concentration reduces to 0.2 to 0.35 ppm.
Chlorination is one of the most common disinfection methods. However, the reaction of chlorine with
organic matter in water has raised concerns, because such a reaction can result in the formation of
carcinogenic trihalomethanes, chemical compounds in which three of the four hydrogen atoms of methane (CH4) are replaced by halogen atoms.
Wastewater Treatment
A typical wastewater treatment process involves a combination of physical, chemical, and biological
processes and operations to remove solids, organic matter, and nutrients from the wastewater. The
wastewater passes through different stages of treatment in order of increasing treatment level before
coming out as treated effluent, which is safe enough to be discharged into the environment. The
treatment process takes place in a wastewater treatment plant (WWTP), often referred to as a Water
Resource Recovery Facility (WRRF) or a Sewage Treatment Plant (STP). The four major treatment processes are preliminary treatment, primary treatment, secondary or biological treatment, and tertiary
treatment.
1. Preliminary Treatment
The objective of preliminary treatment is the removal of floating materials, settleable inorganic solids,
and oily substances. Preliminary treatment operations are carried out in equipment such as screeners,
grit chambers, and skimming tanks.
Screening is the first unit operation used at wastewater treatment plants. In this stage, the effluent is
passed through a screener having uniform-sized openings for removing floating materials and suspended
particles. The screeners are classified as coarse (75-150 mm), medium (20-50 mm) or fine (less than 20
mm), depending on the size of the openings.
Grit chambers are generally used to remove heavy inorganic materials having a specific gravity between
2.4 and 2.7 (e.g., sand and ash). The chamber works on the principle of sedimentation due to the force
of gravity.
When the effluent is passed through Skimming Tanks, the greasy and oily substances rise and remain
on the surface of the wastewater until removed. The tank contains three interconnected compartments.
The rising air bubbles from the bottom of the tank effectively coagulate and solidify the oily and greasy
materials present in the wastewater.
2. Primary Treatment
Primary treatment involves temporarily holding the sewage in a quiescent basin. The treatment is applied
to fine suspended organic solids that cannot be removed in the preliminary treatment. The sedimentation
of organic particles is influenced by certain factors, such as size, shape, viscosity, specific gravity, and flow
velocity of effluent. Sometimes, certain chemicals are used to facilitate the sedimentation process of
colloidal wastes. The chemical-aided sedimentation or chemical precipitation technique involves three
stages: coagulation, flocculation, and sedimentation. Two types of chemicals are used in chemical-aided sedimentation: coagulants (e.g., aluminum sulfate, iron salts, and sodium carbonate) and coagulant aids (e.g., activated silica, weighting agents, and polyelectrolytes). The remaining liquid may be discharged or
subjected to secondary treatment.
Several biological processes are used in secondary treatment. They differ primarily in the manner in which oxygen is supplied to the microorganisms and in the rate at which organisms decompose the organic matter. Biological oxidation
processes are sensitive to temperature and, between 0 °C and 40 °C, the rate of biological reactions
increases with temperature.
Aerobic Processes
- Suspended growth systems: Activated sludge process, plug-flow process, oxidation ditch, deep shaft, aerobic lagoons, contact stabilization, and sequencing batch reactor
- Attached growth systems: Trickling filters, roughing filters, and rotating biological contactors (RBC)
Aerated Lagoon
Trickling filter
Biofilms having a thickness in the range of 70-100 µm are considered to be ideal for the treatment
process. As the biofilm layer thickens, it settles to the bottom of the tank and forms part of the secondary
sludge. In the case of high hydraulic loading rates, a special type of trickling filters known as Roughing
Filters is used to reduce the organic matter in downstream processing. Roughing filters provide low
maintenance treatment when high water quality is not needed. It can also reduce the number of
pathogens in the water, as well as the amount of iron and manganese.
6. Anaerobic Digestion
The anaerobic treatment process is usually preferred for effluents that are highly polluted with organic
content. The anaerobic process can decrease BOD from thousands to hundreds of mg/L. Anaerobic digestion takes place in an airtight reactor, into which sludge is introduced.
Afterward, the contents of the reactor are thoroughly mixed and heated to yield methane (CH4) and
carbon dioxide (CO2) as the end products.
Tertiary Treatment
Tertiary treatment or advanced treatment can further reduce organics, turbidity, nitrogen, phosphorus,
metals, and pathogens present in the wastewater. The major tertiary treatment processes are:
1. Filtration
Sand filtration removes much of the residual suspended matter. The removal of residual toxins is carried
out through the activated carbon adsorption method.
2. Lagoons or ponds
Large man-made lagoons or ponds are highly aerobic systems colonized by native macrophytes and filter-
feeding invertebrates such as Daphnia and species of Rotifera, which greatly assist in removing fine
particulates.
cyanobacteria. The decomposition of the algae by bacteria consumes huge quantities of dissolved oxygen from the water, leading to the death of most or all of the animals and thus adding more organic matter for the bacteria to decompose.
4. Nitrogen removal
Decomposition products of proteins and urea are the major forms of biological nitrogen present in the
wastewater. Biological nitrogen removal (BNR) is carried out by oxidation of this nitrogen (first converted to ammonium) into nitrite (NO2−), most often facilitated by Nitrosomonas spp., followed by the oxidation of nitrite into nitrate (NO3−) by Nitrospira spp. The final step is the denitrification process in which nitrate is reduced into
nitrogen gas under anaerobic conditions with the help of certain genera of bacteria, such as Aerobacter,
Bacillus, Brevibacterium, Lactobacillus, Micrococcus, Pseudomonas and Spirillum.
5. Phosphorus removal
Phosphorus removal is achieved through a process called enhanced biological phosphorus removal. A
certain type of bacteria called polyphosphate-accumulating organisms (PAOs) are used in this method
as they can absorb and accumulate large quantities of phosphorus within their cells (up to 20 percent of
their mass). Once the bacteria have accumulated high biomass and fertilizer value, they can be separated from the treated water for application in agriculture. The phosphate-rich sewage sludge is also referred to as biosolids.
6. Disinfection
A substantial reduction in the number of microorganisms is achieved through disinfection methods. The
commonly used disinfectants are ozone, chlorine, ultraviolet light, and sodium hypochlorite. The
disinfected water is safe to be discharged back into the environment for later uses, such as drinking,
bathing, irrigation, etc.
PESTICIDES
Pesticides are chemical substances used for controlling pests and weeds. Most pesticides are intended to
serve as plant protection products, which generally protect plants from weeds, fungi, or insects.
Herbicides account for 80% of the total pesticide use. There are three prominent chemical families of
insecticides- organochlorines, organophosphates, and carbamates. Organochlorine hydrocarbons (e.g.
DDT) can be further classified into dichlorodiphenylethanes, cyclodiene compounds, and other related
compounds. They kill organisms by disrupting the sodium/potassium balance of the nerve fiber.
Organophosphate and carbamates are relatively less toxic than organochlorines. They can inhibit the
enzyme acetylcholinesterase, allowing acetylcholine to transfer nerve impulses indefinitely, causing a
variety of symptoms such as weakness or paralysis. Important herbicide families are phenoxy and benzoic
acid herbicides (e.g. 2,4-D), triazines (e.g. atrazine), ureas (e.g. diuron), and chloroacetanilides (e.g. alachlor). Phenoxy compounds can selectively kill broad-leaf weeds by disrupting the plant's nutrient transport system.
1. Organochlorines
Organochlorine pesticides belong to the class of persistent organic pollutants (POPs) with Endosulfan
and lindane being the most biodegradable pesticides in the group that are still in use. Organochlorines
are classified into three subgroups:
Exposure for 2-8 hours to chlorinated cyclodienes can damage the functioning of the central nervous system (CNS). CNS symptoms in animals poisoned by chlorinated cyclodienes include tremors,
convulsions, ataxia, and changes in EEG patterns. Polychlorinated biphenyls (PCBs) were once
commonly used for manufacturing electrical insulators and heat transfer agents. Their use has generally
been phased out due to health concerns.
2. Organophosphate
Organophosphates are esters of phosphoric acid and are relatively less toxic to humans, although certain organophosphates, such as triorthocresyl phosphate, mipafox, and trichlorfon, are neurotoxic. Organophosphates can contaminate drinking water by moving through the soil to the groundwater.
Organophosphate works by irreversible inactivation of the enzyme acetylcholinesterase, which is essential
for nerve functioning in humans, insects, and many other animals. This group of chemicals is degraded
rapidly by hydrolysis on exposure to sunlight, air, and soil.
3. Carbamates
Carbamate pesticides are derived from carbamic acid (NH2COOH). They kill insects by reversible
inactivation of the enzyme acetylcholinesterase. The most potent chemicals among the group- aldicarb,
and carbofuran- are capable of inhibiting mammalian acetylcholinesterase enzymes.
4. Phenoxy herbicides
Phenoxy herbicides are the most widely used family of herbicides worldwide. This family of chemicals
is related to the growth hormone indoleacetic acid (IAA). When sprayed on monocotyledonous (grass)
crops such as wheat or corn, they selectively kill broad-leaf weeds by stimulating uncontrolled growth,
leaving the crops relatively unaffected. Phenoxy herbicides can be further classified into phenoxyacetic,
phenoxybutyric and phenoxypropionic subtypes.
2,4-Dichlorophenoxyacetic acid (2,4-D), one of the best-studied agricultural chemicals, belongs to the
family of Phenoxy herbicides. During the Vietnam War, 2,4-D was mixed with a similar chlorophenoxy
compound- 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), to make the defoliant Agent Orange. It was used
to remove the leaves of trees and other dense tropical foliage that provided enemy cover. 2,4-D may also
contain dioxin impurities depending on the production method.
Polychlorinated biphenyls (PCBs): PCBs are used as heat exchange fluids in electrical transformers and capacitors, and as additives in paint, carbonless copy paper, and plastics. Persistence varies with the degree of halogenation, with an estimated half-life of 10 years. PCBs are toxic to fish at high doses and are associated with spawning failure at low doses. Human exposure occurs through food and is associated with reproductive failure and immune suppression. Immediate effects of PCB exposure include pigmentation of the nails and mucous membranes and swelling of the eyelids, along with fatigue, nausea, and vomiting.
Dichlorodiphenyltrichloroethane (DDT): In 1962, the American biologist Rachel Carson published Silent Spring, describing the impact of DDT spraying on the environment and human health. DDT's persistence in the soil for 10-15 years after its initial application has resulted in the spread of DDT residues throughout the world, including the Arctic, even though it has been banned or severely restricted in many countries. DDT can be detected in foods from all over the world, and food-borne DDT remains the greatest source of human exposure.
Dioxins: Dioxins are by-products of the partial combustion of materials containing chlorine. They are typically emitted from the burning of hospital waste, municipal waste, and hazardous waste. Dioxins deposited on plants as airborne dust or pesticide residue enter the food chain, where they bond stably with lipids. Dioxin's half-life in the human body is approximately 7-11 years. Their high toxicity leads to reproductive problems, immune system damage, hormone interference, and cancer.
Polychlorinated dibenzofurans: These are by-products of high-temperature processes such as incomplete combustion in waste incinerators or automobiles, pesticide production, and polychlorinated biphenyl production. Structurally similar to dioxins, the two compounds share toxic effects. Furans persist in the environment and are classified as possible human carcinogens.
*In 2001, this list was expanded to include some polycyclic aromatic hydrocarbons (PAHs), brominated flame retardants, and other compounds.
Arsenic (As)
Arsenic (atomic number 33) is a silver-grey, brittle, crystalline solid. The highly toxic inorganic arsenic compounds are primarily used for wood preservation, whereas the less toxic organic arsenic substances are mostly used as pesticides, especially on cotton plants. The most common forms of arsenic present in natural water bodies are arsenite (AsO3^3-) and inorganic arsenate (AsO4^3-), generally represented as As3+ and As5+. The trivalent species (arsenite) is more toxic than the pentavalent form (arsenate) because it is more efficient at causing DNA damage. Although As5+ tends to be less toxic than As3+, it is thermodynamically more stable and is a major groundwater contaminant.
Lead (Pb)
Lead commonly exists in the +2 oxidation state and is one of the most widely and evenly distributed trace
metals. Lead from car exhaust, dust, and gases from various industrial sources can contaminate soils,
especially those with high organic content. Lead poisoning or Plumbism is a type of metal poisoning
caused by lead in the body, which affects the peripheral nervous system and the central nervous system.
Mercury (Hg)
Mercury occurs in deposits throughout the world, mostly as cinnabar (mercuric sulfide). Three soluble forms of Hg exist in the soil environment: the most reduced form, Hg0 metal, and the mercurous (Hg+) and mercuric (Hg2+) ions. The mercurous ion is relatively unstable and, under oxidizing conditions, especially at low pH, it disproportionates into Hg0 and Hg2+. Methylation of mercury by anaerobic soil bacteria results in the formation of methylmercury [CH3Hg]+, the most toxic form of mercury present in the environment. Mercury poisoning can result from exposure to water-soluble forms of mercury (such as mercuric chloride or methylmercury), from inhalation of mercury vapor, or from ingesting any form of mercury. Prolonged inhalation of elemental mercury (Hg0) vapors, released from broken thermometers or found in higher concentrations near gold mines, can cause tremors, gingivitis, and excitability. Ingestion of mercuric salts, e.g. HgCl2, can damage the gastrointestinal tract.
Cadmium (Cd)
Cadmium is a severe pulmonary and gastrointestinal irritant. Human exposure to cadmium occurs through inhalation, notably of cigarette smoke, and through the ingestion of food. Exposure is commonly assessed by measuring cadmium levels in blood or urine. Several regulatory agencies have designated cadmium as a
carcinogen with lungs being its primary target. Chronic inhalation exposure to cadmium particulates is
generally associated with changes in pulmonary function and chest radiographs that are consistent with
emphysema.
Chromium (Cr)
Chromium (Cr) is a naturally occurring lustrous, brittle, and hard metal present in the Earth’s crust. Its
oxidation states vary from chromium (II) to chromium (VI). Elemental chromium [Cr(0)] does not occur naturally; naturally occurring chromium is usually present as trivalent Cr(III). Hexavalent Cr(VI) is the most toxic form in the group and is almost entirely derived from human activities. Chromium (III) is an essential nutrient for humans, and its deficiency may cause heart conditions, metabolic disruptions, and diabetes. Chromium (VI) is hazardous to human health, mainly for people who work in the steel and textile industries. People who smoke cigarettes also have a higher chance of exposure to chromium. Chromium (VI) is known to cause various health effects; as a component of leather products, it can cause allergic reactions such as skin rash.
The solid waste management (SWM) system is a combination of interrelated functional elements.
In India, about 90% of the municipal waste collected by the authorities is dumped in low-lying areas on the outskirts, with no facilities for leachate collection and treatment. As a result, leachate containing heavy metals percolates into groundwater resources, rendering them unfit for drinking. The landfill gases (CH4 and CO2) escape into the atmosphere, adding to greenhouse emissions, though they could otherwise be used as thermal fuel.
Over the years, the problems caused by MSW escalated and were highlighted by civic and environmental activists. This resulted in the framing of the Municipal Solid Wastes (Management and Handling) Rules, 2000.
Heating value
The heating value (kJ/kg) of waste materials is determined experimentally using the bomb calorimeter test. This test evaluates the potential of waste materials to be used as fuel for incineration.
3. Composting
Composting is the biological decomposition of organic matter into humus like material under controlled
conditions of ventilation, temperature, and moisture. An efficient composting process can yield a stable,
odor-free soil conditioner as its final product. With the rising interest in organic agriculture, the
production of organic-grade MSW compost for agriculture is also gaining popularity because of its
positive effect on biological, physical, and chemical soil properties.
4. Incineration
Incineration refers to the controlled burning of hospital and other biological wastes at high temperature (1200-1500°C) to sterilize, stabilize, and reduce their volume. In this process, most combustible materials such as paper or plastics are converted into carbon dioxide and ash.
5. Gasification
This is the partial combustion of organic materials containing a large amount of carbon. At high temperature (roughly 1000°C) they form a gas comprising mainly carbon dioxide (CO2), carbon monoxide (CO), nitrogen (N2), hydrogen (H2), water vapor, and methane (CH4).
Solid recovered fuel (SRF) is a high-quality alternative to fossil fuel. It is produced mainly from commercial waste, including paper, card, wood, textiles, and plastic. SRF has gone through additional processing to improve its quality and value; it has a higher calorific value than RDF and is used in facilities such as cement kilns.
7. Pyrolysis
Pyrolysis is the thermal degradation of carbonaceous organic materials in the absence of oxygen, at temperatures between 200 and 900°C. The process produces solids (charcoal, biochar), liquids, and non-condensable gases (H2, CH4, CnHm, CO, CO2, and N2).
1. Hydrolysis and fermentation of solid and dissolved organic components into volatile fatty acids,
alcohols, hydrogen, and carbon dioxide.
2. The products derived from the first stage are then converted into acetic acid, hydrogen, and carbon dioxide by an acidogenic group of bacteria.
3. Methanogenic bacteria convert acetic acid into methane and carbon dioxide, while hydrogenotrophic bacteria convert hydrogen and carbon dioxide into methane.
Non-methane organic compounds (NMOCs) and Landfill gas (LFG) can contaminate groundwater in
unlined landfills and landfills with no or inadequate LFG collection systems. The contamination takes
place through three transfer mechanisms:
The control of escaping LFG by proper extraction allows its use as a source of energy. Generally, landfill gas is extracted during the operation phase by means of gas wells that are drilled by auger and driven into the landfill at a spacing of 40-70 m. If the landfill gas is not utilized, it should be burnt by flaring. Efficient utilization of landfill gas can notably reduce fossil fuel consumption for the production of electricity and heat.
Leachate Formation
Leachate is a foul-smelling liquid comprising soluble components of waste and its degradation products that are harmful to the environment. The generation of leachate is caused principally by precipitation
percolating through waste deposited in a landfill and is controlled by a group of factors, such as water
availability, landfill surface condition, refuse state, and condition of surrounding strata. The major toxic
substances present in leachate are ammonia and heavy metals.
The water balance for a landfill should be negative or zero so that no excess leachate is produced.
Lo = I - E - aW
where
Lo = free leachate retained at the site (equivalent to leachate production minus leachate leaving the site);
I = total liquid input;
E = evapotranspiration losses;
a = absorption capacity of waste;
W = weight of waste disposed.
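As a hedged sketch, the balance above can be evaluated numerically; the function name and all input values below are invented for illustration and are not from the text.

```python
# Hedged sketch of the landfill water balance: Lo = I - E - a*W.
# All numeric inputs below are invented for illustration.

def free_leachate(I, E, a, W):
    """Free leachate retained at the site (Lo).

    I : total liquid input (m^3)
    E : evapotranspiration losses (m^3)
    a : absorption capacity of the waste (m^3 per tonne)
    W : weight of waste disposed (tonnes)
    """
    return I - E - a * W

# A design is acceptable when Lo <= 0 (no excess leachate).
Lo = free_leachate(I=5000.0, E=1200.0, a=0.25, W=16000.0)
print(Lo)  # -200.0 -> no excess leachate is produced
```

A positive Lo would signal that the site produces more leachate than the waste can absorb, calling for better capping or drainage.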
An appropriate way for controlling the migration of leachate and its toxic constituents into underlying
aquifers or nearby rivers is to use landfill liners to prevent the movement. Modern landfills generally
require a layer of compacted clay with a minimum required thickness and a maximum allowable
hydraulic conductivity, overlaid by a high-density polyethylene geomembrane. Chipped or waste tires are
generally used to support and insulate the liner.
1. Natural liners
These are generally less permeable, resistant to chemical attack and have good sorption properties.
Commonly used natural liners are compacted clay or shale, bitumen or soil sealants, etc. They do not
act as true containment barriers because leachate may migrate through them.
3. Composite liners
A composite liner combines a geomembrane with a natural clay or geosynthetic clay liner. This system is more efficient at reducing leachate migration into the subsoil than either a clay liner or a single geomembrane layer.
The leachate containing a high concentration of undesirable substances then undergoes a treatment
process for their safe discharge into the environment. The common treatment processes applied to
leachate are:
1. Leachate recirculation
It is defined as the practice of returning leachate to the landfill from which it has been abstracted. The
process reduces the hazardous nature of leachate and helps to wet the waste thereby increasing the rate
of biological degradation. This method of repeatedly reapplying leachate to waste masses saves offsite
disposal costs and boosts landfill gas production.
2. Biological treatment
The high concentration of volatile fatty acids (VFA) in leachate makes it easily biodegradable. The
common methods, such as aerated lagoons, activated sludge process, and rotating biological contactors
are used to remove BOD, ammonia, and suspended solids from the leachate.
3. Physicochemical treatment
Physicochemical processes are generally carried out to remove different substances, such as heavy metals
and other chemicals that remain even after the biological degradation of leachate. These processes
include flocculation-precipitation, adsorption, and reverse osmosis.
Recycling Programmes
Recycling programs are formulated and carried out according to the needs and priorities of the
communities. The major elements of a recycling program include source separation, curbside collection,
material resource facilities, and full stream processing.
1. Source separation
The segregation of recyclable and reusable materials at the point of generation is referred to as source
separation. It involves voluntary or mandated separation of recyclable materials into their own specific
containers.
2. Curbside collection
Kerbside collection or curbside collection is a service provided to households for collecting source-separated recyclables on a regular basis. Purpose-built vehicles are used to pick up household waste in containers prescribed by the municipality. Today, kerbside collection is often employed by local authorities as a strategy for collecting recyclable items directly from consumers.
1. Clean MRF
A clean MRF accepts source-separated materials from municipal solid waste generated by either
residential or commercial sources. In a single-stream type of clean MRF processing, all recyclable
materials are mixed. Whereas in dual-stream MRFs, source-separated recyclables are delivered in a
mixed container stream (glass, ferrous metal, aluminum and other non-ferrous metals, and plastics) and
a mixed paper stream (corrugated cardboard boxes, newspapers, magazines, office paper, and junk mail).
2. Dirty MRF
A Dirty MRF or Mixed-waste processing facility (MWPF) involves the manual and mechanical separation
of recyclable materials from a mixed solid waste stream. MWPFs can recycle wastes at much higher rates than curbside or other waste collection systems, recovering 5%-45% of the incoming materials as recyclables; the remainder is sent to landfills.
Recycling processes for common materials (semi-mechanical process):
Glass: The waste glass cullet is sorted according to color and melted in an oven at 1400°C. Substances such as soda ash, potassium carbonate, borax, and lime are added to enhance the hardness of the glass. Refining and molding steps then yield new clear, green, and brown glass jars and bottles.
Metals: Ferrous and non-ferrous metals are usually recycled into sanitary and gas fittings, funnels, buckets and storage bins, reinforced steel bars, hand tools, etc. They are first melted in a crucible in a coal furnace, and the molten metal is cast into the desired mold to make ingots of the required shapes and sizes. New and melted recycled metals are mixed together in a 3:1 ratio for better-quality products.
Plastic: High-density polyethylene (HDPE) and polyethylene terephthalate (PET) plastics now hold a stronger place in the market. Uses of recycled HDPE include non-food bottles, drums, toys, pipes, sheets, and plastic pallets; uses of recycled PET include plastic fibers, injection molding, non-food-grade containers, and chemicals.
Batteries: Battery recycling is paramount due to concerns over toxic compounds, including lead, cadmium, and mercury, present in many batteries. Battery reprocessing includes breaking open the batteries, neutralizing the acid, chipping the container for recycling, and smelting the lead to produce recyclable lead.
Composting
Compost is the organic matter that has been decomposed in a process called composting. It is peaty
humus, dark in color with a crumbly texture and earthy odor, and resembles topsoil. In India, organic materials constitute about 70% of the total weight of municipal solid wastes.
Composting Process
The success of composting largely depends on the various processes taking place within the system. All these processes and reactions can be classified into two:
1. Biological Processes
In the mesophilic, or moderate-temperature, phase, the compost bacteria combine oxygen with carbon to produce carbon dioxide and energy. A part of the energy is used by microorganisms for their own reproduction and growth, and the rest is released as heat. Initially, when the degradation begins, mesophilic bacteria such as E. coli and other bacteria from the human intestinal tract proliferate and raise the temperature of the composting mass up to 44°C. The increase in temperature, in turn, inhibits the growth of the mesophilic bacteria, which are then replaced by thermophilic bacteria.
The thermophilic or high-temperature phase occurs in the upper portion of a compost pile at a
temperature range of 44-52°C. The heating phase usually lasts from a few days to several weeks or months. After the completion of this stage, the manure will appear to have been digested, but the coarser organic materials will not be.
The degradation of coarser organic materials takes place in the cooling phase with the help of fungi and macroorganisms such as earthworms and sowbugs. The mesophilic microorganisms displaced by the thermophilic bacteria also migrate back into the compost during this phase.
The final stage of the composting process is known as the curing or maturing stage. This phase usually
lasts 3-4 weeks and the long curing period ensures the destruction of pathogens as a result of microbial
competition. Uncured compost contains high levels of organic acids and phytotoxins that deplete the
oxygen and nitrogen content of the soil.
2. Chemical Processes
A major portion of municipal solid wastes contains an adequate amount of biodegradable form of
carbon. The microorganisms present in the compost convert a small portion of the carbon into microbial
cells and a significant portion into carbon dioxide which is lost to the atmosphere. Eventually, the
degradation of carbon decreases the weight and volume of the feedstock. The carbon-nitrogen (C:N) ratio is an important factor determining the rate of decomposition. An initial ratio of 30:1 is generally considered ideal. Higher ratios result in slow decomposition, while ratios below 25:1 may produce unpleasant odors.
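Since the C:N ratio of a blended feedstock is the total carbon divided by the total nitrogen, a quick check can be sketched as follows; the material names and the carbon/nitrogen mass fractions are assumed, literature-style values, not figures from this book.

```python
# Blended C:N ratio of a compost feedstock mix (illustrative only).
# Carbon and nitrogen mass fractions are assumed typical values.

materials = [
    # (name, mass_kg, carbon_fraction, nitrogen_fraction)
    ("dry leaves",  50.0, 0.45, 0.0075),  # roughly 60:1 on its own
    ("food scraps", 30.0, 0.40, 0.0270),  # roughly 15:1 on its own
]

total_C = sum(m * c for _, m, c, _n in materials)
total_N = sum(m * n for _, m, _c, n in materials)
ratio = total_C / total_N

# Rule of thumb from the text: ~30:1 is ideal; much higher is slow,
# below ~25:1 tends to produce odors.
print(f"blend C:N = {ratio:.1f}:1")  # about 29:1, close to the ideal
```

Blending a high-carbon material with a high-nitrogen one in this way is the usual route to the target ratio.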
In general, the moisture content of compostable materials is lower than the ideal requirement (50-60%). To hasten the decomposition rate, moisture content can be increased by the controlled addition of water, though excess water creates anaerobic conditions leading to the rotting of the feedstock. Because the amount of water evaporated from the compost usually exceeds the moisture input, adding water is often necessary to maintain the ideal composting rate. Composting is an aerobic process that requires a 10-15% oxygen concentration and enough void space for the free circulation of oxygen and carbon dioxide. These requirements are met via mechanical aeration of the compost pile, by turning the materials over with a front-end loader or by mechanical agitation with a special compost turner, with frequent turning to expose the microbes to the atmosphere.
A pH between 6 and 8 is considered optimum for the biodegradation of feedstock. It can be adjusted by adding lime (to increase the pH) or sulphur (to decrease it). The carbon dioxide generated by microorganisms often combines with water to form carbonic acid within the compost, causing swings in pH during the degradation process, but organic materials are naturally well buffered against pH changes.
❖ Composting technologies
• Windrow composting - Most common and cheapest technology
• In-vessel composting system - All environmental conditions can be controlled
• Vertical composting reactor - Neither temperature nor oxygen can be maintained
• Horizontal composting reactor - Effective in dealing with the heterogeneity of MSW
• Rotating drum - Retains the material for only a few hours or days
Vermicompost
The end-product of the breakdown of organic matter by earthworms is known as vermicast, which is
used as a valuable compost known as ‘vermicompost’. The commonly used earthworm species are red
wigglers (Eisenia fetida or Eisenia andrei), though European nightcrawlers (Eisenia hortensis or
Dendrobaena veneta) could also be used.
1. Surface dwellers
These epigeic earthworms are inhabitants of the organic layer of the soil and are not common in most
agricultural soils. Perionyx excavatus, Eisenia fetida, Eudrilus eugeniae, and Lumbricus rubellus are
examples of surface-dwelling earthworms that are usually darker in color, mostly purplish or reddish. In
organic farming practices, mulching is highly recommended as it provides food and shelter for surface-
dwelling earthworms.
2. Sub-surface dwellers
These endogeic species of earthworms live in horizontal and vertical burrows known as drilospheres that
are created by them in the soil. They assist in soil comminution, i.e. the breaking down of soil into finer structures. They feed primarily on partially decomposed organic matter that is already incorporated in
the soil. The cast deposit by these earthworms is directly related to the bulk density of the soil. Species
such as Lampito mauritii, Octochaetona serrata, Lumbricus terrestris, Pontoscolex corethrurus, and Octochaetona thurstoni are examples of endogeic earthworms.
3. Subsoil dwellers
Subsoil-dwellers or anecic species live in permanent vertical burrows that can be 5 or 6 feet deep. Their
burrows are capped by crop residue that they pull to the entrance.
The nightcrawler (Lumbricus terrestris) is the most prominent member of the group.
The most widely used species of earthworms for composting purposes are Eisenia andrei, Eisenia fetida, Eudrilus eugeniae, Lampito mauritii, Lumbricus rubellus, Metaphire posthuma, Perionyx excavatus, and Polypheretima elongata.
In India, endemic species such as Perionyx excavatus and Lampito mauritii are also used in composting
units owing to their capability to tide over local adverse soil conditions in which exotic species fail to
survive. For example, the native earthworm species Octochaetona serrata is adapted to live in red laterite
soil, which is acidic. Despite the diversity of endemic compost worms, exotic species of earthworms are preferred in India due to their conspicuous action, voracious feeding habits, and high reproductive potential. Since 1982, the African nightcrawler (Eudrilus eugeniae) has been promoted for waste degradation in India because it outperformed local species in laboratory experiments. The red worm or tiger worm (Eisenia fetida) is also used in certain areas of India for waste degradation.
Types of Orbits
1. Polar orbits
The satellites functioning in near-polar orbits are able to scan virtually every part of the Earth as the Earth rotates underneath them. Typically, a satellite in such an orbit moves in a near-circle about 1000 km above the ground, and it takes approximately 90 minutes to complete one orbit. Measurements of atmospheric ozone concentration and temperature are made using near-polar orbiting satellites. E.g., PSLV, SPOT, IERS
2. Sun-synchronous orbits
Sun-synchronous satellites pass over the same spots on the ground at the same solar time of day. This enables us to compare images from the same season over several years. These satellites orbit at an altitude
between 600 km and 800 km. This orbit is also useful for imaging, spy, and weather satellites because
every time the satellite is overhead, the surface illumination angle on the planet underneath it will be
nearly the same. E.g., Landsat 7, CloudSat, Cartosat
3. Geosynchronous orbits
Also known as a geostationary orbit, as the satellite appears stationary to an observer at a point on the earth. Satellites in these orbits circle the Earth in its equatorial plane with the same angular speed and in
the same direction as the Earth rotates about its own axis. These satellites are placed at an altitude of
approximately 35,800 kilometers directly over the equator. Geostationary satellites are considered ideal
for communication, broadcasting, and GPS services. E.g., GSAT, INSAT, INTELSAT, EDUSAT
Thermal remote sensing has wide-ranging applications in the study of agriculture, forestry, water
resources, forest fires, and volcanic eruptions. It can also be used to detect water stress in crops and
evapotranspiration in river basins, which is important for watershed management. Airborne thermal infrared sensors (e.g. TIMS or ATLAS) are generally used for measuring the change in the surface temperature of a particular area. The data are then analyzed using the thermal response number (TRN), defined as the amount of net radiation required to change one unit of surface temperature; it is equal to the daily total net radiation divided by the daily temperature range. Rugged and barren terrains are found to have the lowest TRN values, while forests have the highest. Absorption of radiation by water vapor and other atmospheric gases restricts sensors to recording thermal images in two wavelength windows, 3 to 5 µm and 8 to 15 µm, and this atmospheric absorption also makes thermal IR imagery difficult to interpret and process.
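Following the definition above (daily total net radiation divided by daily temperature range), a minimal TRN calculation might look like this; the radiation and temperature values are invented for illustration.

```python
# Thermal response number (TRN), per the definition in the text:
# daily total net radiation divided by the daily temperature range.
# Input values below are invented for illustration.

def trn(daily_net_radiation, daily_temp_range):
    """TRN; units follow the inputs (e.g. MJ m^-2 per K)."""
    return daily_net_radiation / daily_temp_range

barren = trn(12.0, 20.0)  # large diurnal temperature swing -> low TRN
forest = trn(12.0, 5.0)   # damped temperature swing -> high TRN
print(barren, forest)     # 0.6 2.4
```

The larger TRN for the forest surface matches the text's observation that forests have the highest TRN and barren terrain the lowest.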
2. Beamwidth
It determines the spatial resolution in the direction of flight, referred to as Azimuth Resolution.
Microwave Bands
Band Wavelength Use
X 2.4- 3.75 cm Generally used for military reconnaissance, mapping, and
surveillance.
C 3.75- 7.5 cm The penetration capability of this band is limited and is restricted to
the upper layer of vegetation. Sometimes used for sea ice
surveillance.
S 7.5- 15 cm Used for medium-range meteorological applications such as rainfall
measurement.
L 15- 30 cm It can penetrate vegetation to support surface observation, e.g., soil
moisture, monitoring ice sheets, and glacier dynamics.
P 30- 100 cm It has a significant penetration capability. Widely used for estimating
biomass and tree height.
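As a small illustrative helper (not a standard API), the band table above can be encoded as a lookup by wavelength:

```python
# Illustrative lookup of radar band letter by wavelength (cm),
# encoding the table above; lower band edge treated as inclusive.

BANDS = [
    # (band, min_cm, max_cm)
    ("X", 2.4, 3.75),
    ("C", 3.75, 7.5),
    ("S", 7.5, 15.0),
    ("L", 15.0, 30.0),
    ("P", 30.0, 100.0),
]

def band_for(wavelength_cm):
    for name, lo, hi in BANDS:
        if lo <= wavelength_cm < hi:
            return name
    return None  # outside the tabulated range

print(band_for(5.6))   # C  (e.g. a C-band SAR wavelength near 5.6 cm)
print(band_for(23.5))  # L  (vegetation-penetrating wavelengths)
```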
Radar Range Resolution
Range resolution is defined as the minimum distance at which the radar system can distinguish between two scatterers on the target. Search radars usually have poor resolution and will distinguish two targets only about 100 meters apart. The range resolution of a radar depends on a number of factors, such as transmitted pulse width, pulse compression, beam width, type of target, size of the target, radar receiver efficiency, and resolution of the radar display unit. The equation for calculating the range resolution of a radar is
given below.
RRes = cτ/2
where τ is the transmitted pulse width and c is the velocity of light in free space.
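A quick numeric check of the formula, assuming a hypothetical 1 µs pulse:

```python
# Range resolution R = c * tau / 2, from the formula above.
# The pulse width used below is a hypothetical example value.

C = 3.0e8  # speed of light in free space, m/s

def range_resolution(pulse_width_s):
    return C * pulse_width_s / 2.0

# A 1-microsecond pulse separates scatterers about 150 m apart,
# the same order of magnitude as the search-radar figure above.
print(range_resolution(1e-6))  # 150.0
```

Shortening the pulse (or applying pulse compression) is what drives the resolution down toward metre scales.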
Ground Truth
In remote sensing, the term ground truth refers to a process in which a ‘pixel’ on a satellite image is
compared to what is there in reality in order to verify the contents of the ‘pixel’ on the image. The method
is used for accuracy assessment, calibration of remote sensing sensors, and analysis of data. Ground truth
also helps to correct distortion in the images due to absorption in the atmosphere.
RESOURCESAT-1 (also known as IRS-P6) is an advanced remote sensing satellite built by the Indian
Space Research Organization (ISRO). The main objective of the mission is to provide continued remote
sensing data services on an operational basis for integrated land and water resources management with
enhanced data quality.
1. A high-resolution linear imaging self-scanner (LISS-IV) with a spatial resolution of 5.8 m and a swath
of 70 km.
2. A medium-resolution linear imaging self-scanner (LISS-III) with a spatial resolution of 23.5 m and a swath of 140 km.
3. An AwiFS (Advanced Wide Field Sensor) with a spatial resolution of 56 m and a swath of 740 km.
Possible applications include crop monitoring and condition assessment, crop canopy water stress, forest
inventory, fire damage analysis, environmental monitoring, land use, soil contamination, oil spills and
disaster monitoring, environmental impact assessment, rock type mapping, mining pollution assessment,
coal fire analysis, landslide risk, road network, 3D building models and cartography.
2. Resourcesat-2
Resourcesat-2 is virtually identical to Resourcesat-1, but it has a few sensor enhancements, such as an increase of the LISS-4 multispectral swath from 23 km to 70 km and improved radiometric accuracy, from 7 bits to 10 bits for LISS-3 and LISS-4 and from 10 bits to 12 bits for AWiFS. Besides, suitable changes, including miniaturization of the payload electronics, have been made in Resourcesat-2. Its major objective is to provide data with enhanced multispectral and spatial coverage.
3. Cartosat-1
CARTOSAT–1 is a stereoscopic Earth observation satellite in a sun-synchronous orbit and is the first
Indian Remote Sensing Satellite capable of providing in-orbit stereo images. The satellite has two
panchromatic cameras with 2.5 m spatial resolution and a swath of 30 km for acquiring the information
required for generating digital elevation models, orthoimage products, and value-added products for
various applications of geographical information system (GIS).
70
4. Cartosat-2
Cartosat-2 carries a panchromatic (PAN) camera that takes black and white pictures of the earth in the
visible region of the electromagnetic spectrum. The swath covered by this high-resolution PAN camera
is 9.6 km and its spatial resolution is 0.81 m.
5. RISAT-1
Radar Satellite-1 (RISAT-1) is a Microwave Remote Sensing Satellite carrying a Synthetic Aperture Radar
(SAR) Payload operating in C-band (5.35 GHz), which enables imaging of the surface features during
both day and night under all weather conditions. Active Microwave Remote Sensing provides cloud
penetration and day-night imaging capability. These unique characteristics of C-band (5.35GHz)
Synthetic Aperture Radar enable applications in agriculture, particularly paddy monitoring in Kharif
season and management of natural disasters like floods and cyclones.
RISAT-2 is a radar imaging satellite with all-weather capability to take images of the earth. This satellite enhances ISRO's capability for disaster management applications.
6. Megha-Tropiques
Megha-Tropiques is an Indo-French Joint Satellite Mission for studying the water cycle and energy
exchanges in the tropics. The main objective of this mission is to understand the life cycle of convective
systems that influence the tropical weather and climate, and their role in associated energy and moisture
budget of the atmosphere in tropical regions.
Megha-Tropiques provides scientific data on the contribution of the water cycle to the tropical
atmosphere, with information on condensed water in clouds, water vapor in the atmosphere,
precipitation, and evaporation. With its circular orbit inclined 20° to the equator, Megha-Tropiques is a unique satellite for climate research that should also aid scientists seeking to refine prediction models.
7. Landsat 8
Landsat 8 launched on February 11, 2013, from Vandenberg Air Force Base, California, with the
extended payload fairing (EPF) from United Launch Alliance, LLC.
The Landsat 8 satellite payload consists of two science instruments—the Operational Land Imager (OLI)
and the Thermal Infrared Sensor (TIRS). These two sensors provide seasonal coverage of the global
landmass at a spatial resolution of 30 meters (visible, NIR, SWIR); 100 meters (thermal); and 15 meters
(panchromatic). Landsat 8 bands reflect environmental changes and help identify differences in the
surface of the Earth. It will continue to obtain valuable data and imagery to be used in agriculture,
education, and science.
Map Scale
Map scale refers to the ratio between a distance on the map and the corresponding distance on the ground; e.g., on a 1:100000 scale map, 1 cm on the map equals 1 km on the ground. Large scale maps are usually prepared for small areas, as they contain a large amount of detail, while small scale maps are prepared for large areas and contain a small amount of detail.
Scale Formula
The simplified formula for the scale of a vertical aerial photograph is the ratio of the camera focal length (f) to the flying height of the aircraft platform above the terrain (H):
S = f / H
or, when the average ground elevation (h) above datum is taken into account,
Scale (RF) = f (focal length) / [H (flight altitude) − h (average ground elevation)]
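The scale formula can be sketched numerically; the camera and flight figures below are illustrative, not taken from the text:

```python
def photo_scale(focal_length_m: float, flight_altitude_m: float,
                ground_elevation_m: float = 0.0) -> float:
    """Representative fraction of a vertical aerial photo: S = f / (H - h)."""
    return focal_length_m / (flight_altitude_m - ground_elevation_m)

# A 152 mm mapping camera flown at 3,800 m over terrain averaging 760 m
# elevation (illustrative values):
s = photo_scale(0.152, 3800.0, 760.0)
print(f"1:{round(1 / s)}")  # -> 1:20000
```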
End lap
Practically all projects require more than a single pair of photographs for creating the three-dimensional
effect necessary for mapping. End lap, also known as forward overlap, is the overlapping portion of two
successive aerial photos. Normally, end lap ranges between 55 and 65% of the length of a photo.
Side lap
Side lap encompasses the overlapping areas of photographs between adjacent flight lines. Usually, side
lap ranges between 20 and 40% of the width of a photo. This type of overlap is needed to make sure that
there are no gaps in the coverage.
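Given the overlap percentages above, the spacing between successive exposures (the air base) and between adjacent flight lines follows by simple arithmetic. A sketch with illustrative numbers; function names are my own:

```python
def air_base(ground_coverage_m: float, end_lap: float) -> float:
    """Distance flown between successive exposures along a flight line."""
    return ground_coverage_m * (1.0 - end_lap)

def line_spacing(ground_coverage_m: float, side_lap: float) -> float:
    """Distance between adjacent flight lines."""
    return ground_coverage_m * (1.0 - side_lap)

# A photo covering 4,600 m of ground, with 60% end lap and 30% side lap:
print(round(air_base(4600.0, 0.60)))      # -> 1840
print(round(line_spacing(4600.0, 0.30)))  # -> 3220
```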
3. Aspect
Aspect values indicate the directions in which the physical slopes face. Aspect is measured clockwise from north (0°), through a full circle back to north at 360°. Flat areas are assigned an aspect value of -1 in most GIS packages.
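The compass convention above can be sketched from the terrain gradients of a single cell; this is a minimal illustration (function and inputs are my own), not a GIS implementation:

```python
import math

def aspect(dz_dx: float, dz_dy: float) -> float:
    """Compass direction a slope faces, 0-360 deg clockwise from north.

    dz_dx, dz_dy are the elevation gradients toward east and north.
    Flat cells (no gradient) return -1, a common GIS convention.
    """
    if dz_dx == 0.0 and dz_dy == 0.0:
        return -1.0
    # The gradient points uphill, so the slope faces the opposite way.
    east, north = -dz_dx, -dz_dy
    return math.degrees(math.atan2(east, north)) % 360.0

print(aspect(0.1, 0.0))  # ground rising to the east faces west -> 270.0
print(aspect(0.0, 0.0))  # flat cell -> -1.0
```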
4. Atmospheric Window
Atmospheric windows are the ranges of wavelengths of the electromagnetic spectrum that can be
transmitted through the Earth's atmosphere. The upper atmosphere blocks 100% of the gamma rays, x-
rays, and most ultra-violet light. But visible light can easily pass through. The wavelengths of the EM
spectrum that are absorbed by atmospheric gases such as water vapor, carbon dioxide, and ozone are
known as absorption bands. The atmosphere is nearly opaque to EM radiation in part of the mid-IR and
all of the far-IR regions.
Radio window
The radio window is the range of wavelengths from about 1 mm to 100 m. The low-frequency (long-wavelength) end of the window is limited by signal absorption in the ionosphere, while the high-frequency (short-wavelength) end is limited by signal attenuation caused by water vapor and carbon dioxide in the atmosphere.
5. Digital Elevation Model (DEM)
A digital elevation model (DEM) is a representation of the elevation of a terrain surface; most modern DEMs use remote sensing rather than direct survey data. The DEM of a particular area can be acquired through
techniques, such as photogrammetry, lidar, IfSAR, land surveying, etc. The acquired data is then
represented as a raster (a grid of squares) or as a vector-based triangular irregular network (TIN). Discrete
rasters are used to represent distinct categories where you can distinguish each thematic class, e.g. a grid
cell representing a soil type. Integers are usually used to represent classes in discrete data. On the other
hand, gradually changing data, such as elevation, temperature or aerial photographs are represented by
continuous rasters (non-discrete). This type of surface generally has no distinct boundaries. A continuous
raster surface is generally derived from a fixed registration point. For example, digital elevation models
use sea level as a registration point.
6. Contour Line
A contour line, also known as an isoline or isopleth, is a line of constant value for a mapped variable such as elevation or temperature. Simply, it is a line on a map joining points of equal height above or below sea level.
7. Light Detection and Ranging (LiDAR)
LiDAR is a remote sensing method that uses laser pulse measurements to identify heights, depths, and
other properties of features on the Earth’s surface. Topographic LiDAR uses a near-infrared laser to
map the land, while bathymetric LiDAR uses water-penetrating green light to measure seafloor and
riverbed elevations. This method produces more accurate shoreline maps, makes digital elevation
models for use in geographic information systems, and assists in emergency response operations.
8. Overlay Analysis
It is a method by which thematic maps with relevant class values are overlaid. Depending on the arithmetic operation chosen, the class values are added, subtracted, multiplied, or divided. An overlay creates a composite map by combining the geometry and attributes of the input data sets. Tools for overlaying both vector and raster data are available in most GIS software.
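The cell-by-cell arithmetic behind a raster overlay can be sketched as follows; the raster values are invented for illustration, and real GIS packages provide equivalent tools:

```python
# Additive overlay of two small thematic rasters (illustrative values).

slope_risk = [[1, 2],
              [3, 1]]   # e.g. reclassified slope classes
soil_risk  = [[2, 2],
              [1, 3]]   # e.g. reclassified soil erodibility classes

# Combine the rasters cell by cell to get a composite risk score.
composite = [[a + b for a, b in zip(r1, r2)]
             for r1, r2 in zip(slope_risk, soil_risk)]
print(composite)  # -> [[3, 4], [4, 4]]
```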
❖ Screening
This step categorizes projects and determines whether a full-scale environmental impact assessment is
required or not. The output from the screening process is often a document called an Initial
Environmental Examination or Evaluation (IEE). The main conclusion will be a classification of the
project according to its likely environmental sensitivity.
Screening Stage
Different types of tools used during the screening method are:
1. Inclusive project list (projects must undergo EIA) and exclusive project list (projects exempted from
EIA)
2. Case-by-case examinations to determine whether projects may have significant environmental effects; if so, the project should undergo EIA.
❖ Scoping
Scoping is the process of determining the most critical issues to be considered and this step will involve
community participation to some degree. It establishes the content and scope of an EIA report. It is at
this early stage that EIA can most strongly influence the outline proposal. Two major types of scoping
method are closed scoping and open or public scoping.
1. Closed scoping
In this method, the content and scope of an EIA Report is pre-determined by law and modified through
closed consultations between a developer and the competent authority.
2. Open or public scoping
The scoping process begins with the preparation of a scope outline through informal consultation with
environmental and health authorities. The outline is then made available for the public and is compiled
with an extensive list of concerns, which is followed by the evaluation of relevant concerns to establish
key issues. The key issues are further organized into impact categories and the outline is amended
accordingly. A ‘Terms of reference’ (ToR) is then developed for impact analysis and the progress is
monitored. If necessary, revising is carried out. Another major activity of scoping is to identify key interest
groups (both governmental and non-governmental) and to establish good lines of communication.
❖ Impact Analysis
In this phase, the likely impacts are analyzed in greater detail in accordance with the terms of reference
specifically established for this purpose. Impact analysis is conducted using various tools, such as
checklists, matrices, networks, overlays and geographical information systems (GIS), expert systems, and
professional judgment.
Impacts are typically characterized along the following dimensions:
Type: Biophysical, social, health, or economic
Nature: Direct or indirect, cumulative, etc.
Magnitude: High, moderate, low
Extent: Local, regional, trans-boundary, or global
Timing: Immediate / long term
Duration: Temporary / permanent
Uncertainty: Low likelihood / high probability
Reversibility: Reversible / irreversible
Significance: Unimportant / important
• Checklists
Checklists are comprehensive lists of environmental effects and impact indicators designed to stimulate the analyst to think broadly about the possible consequences of contemplated actions. The main types of checklist include:
1. Simple checklist
A list of environmental parameters with no guidelines on how they are to be measured and interpreted.
2. Descriptive checklist
This method involves the identification of environmental parameters and guidelines on how to measure
data on particular parameters.
3. Questionnaire
This is based on a set of questions to be answered. Some of the questions may concern indirect impacts
and possible mitigation measures. They may also provide a scale for classifying estimated impacts from
highly adverse to highly beneficial.
• Matrices
Matrices are two-dimensional tables that facilitate the identification of impacts arising from the interaction
between project activities and specific environmental components. The entries in the cell of the matrix
can be either qualitative or quantitative estimates of impact. The three important types of matrices used
in EIA are:
1. Magnitude matrices
These go beyond the mere identification of impacts by recording them according to their magnitude, importance, and time frame (e.g., short, medium, or long term).
2. Quantified Matrix
The Leopold Matrix is one of the best-known types of quantified matrix. It was developed by Leopold et al. for the US Geological Survey. This matrix is based on a horizontal list of 100 project actions and a vertical list of 88 environmental components.
3. Weighted Matrix
In this method, importance weightings are assigned to environmental components, and sometimes to project components. The impact of the project on each environmental component is then assessed and multiplied by the appropriate weighting.
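The weighted-matrix calculation reduces to a weighted sum; a minimal sketch in which all component names, weights, and impact scores are invented for illustration:

```python
# Weighted-matrix scoring sketch: impact scores are multiplied by
# importance weights and summed into an overall figure.
# All values below are illustrative, not from any real assessment.

weights = {"air quality": 0.4, "water quality": 0.35, "noise": 0.25}
impact_scores = {"air quality": -2, "water quality": -1, "noise": -3}  # -5..+5

total = sum(weights[c] * impact_scores[c] for c in weights)
print(round(total, 2))  # -> -1.9 (net adverse impact)
```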
• Network
The network is an alternative for illustrating the secondary and subsequent effects of actions on
environmental elements. This method involves the construction of a network tracing such effects. The
advantage of a network approach is that it permits clear tracing of high-order effects of initial actions;
indeed, mitigation and control measures can also be illustrated.
Sorenson network is probably the best-known approach used to illustrate and understand primary,
secondary and tertiary impacts of developmental activity. It also identifies feasible mitigation measures
and assesses multiple impacts at the same time. However, the applications of the Sorenson network are
limited by inadequate data availability and reference networks relevant to the local environment.
Battelle Environmental Evaluation System is designed to assess the impacts of water resource
development projects, water quality management plans, highways, nuclear power plants, and other
projects. In an Environmental Evaluation System, the major concerns are separated into four categories:
Ecology, Physical/chemical, Aesthetics and Human Interest, and Social. Each category is further broken
down into a number of environmental components, and for each component an index of environmental quality normalized to a scale ranging from 0 to 1 is developed. The environmental indicator is defined as
the difference in environmental quality between before and after the impact states. Each environmental
component has a weighting factor (relative importance). From this, the overall impact of the project is
calculated by summing the weighted impacts indicators.
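The Battelle aggregation described above amounts to summing weighted changes in environmental quality (EQ). A sketch in which the component names, weights, and EQ values are invented for illustration:

```python
# Battelle-style aggregation sketch: for each component, the change in
# environmental quality (with-project minus without-project) is multiplied
# by the component's importance weight, then everything is summed.
# All figures below are illustrative.

components = {
    # name: (weight, EQ_without_project, EQ_with_project)
    "dissolved oxygen":  (31, 0.8, 0.5),
    "species diversity": (14, 0.7, 0.6),
    "visual quality":    (10, 0.9, 0.9),
}

net_impact = sum(w * (after - before)
                 for w, before, after in components.values())
print(round(net_impact, 2))  # -> -10.7 (net loss of weighted quality)
```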
❖ Impact Mitigation
Mitigation measures are considered after environmental impact evaluation. An important outcome of
this stage will be recommendations for mitigating measures. This would be contained in the
Environmental Impact Statement. Clearly, the aim of this stage will be to introduce measures that
minimize any identified adverse impacts and enhance positive impacts. Even measures that are not immediately economically viable should not be dropped, because in the long run they can make the project both economically and environmentally viable.
• Contingent Valuation
This approach involves asking people to directly report their willingness to pay (WTP) for the use or
conservation of natural goods. In other words, it is a method of estimating the value that a person places
on a good.
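Aggregating stated willingness-to-pay is simple arithmetic: the survey mean is scaled up to the affected population. The figures below are invented for illustration:

```python
# Contingent valuation sketch: aggregate stated willingness-to-pay (WTP)
# from a survey. All numbers are illustrative.

wtp_responses = [50, 0, 120, 80, 40, 60, 0, 100]  # currency units per household

mean_wtp = sum(wtp_responses) / len(wtp_responses)

# Scaling mean WTP to the affected population gives a rough total value.
households = 10_000
print(mean_wtp, mean_wtp * households)  # -> 56.25 562500.0
```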
❖ Decision-making
The decision is taken by a manager or a committee, or personnel from the concerned ministry who had
not been associated with the EIA during its preparation. In general, the decision-maker has three choices: to accept the proposal, to reject it, or to accept it subject to modifications.
A comprehensive EIA report incorporates data from all four seasons of a year, whereas a Rapid EIA (i.e., one completed within 3 months) is based on only one season's data. It should be noted that the Environment Ministry has recently diluted the comprehensive EIA requirements; therefore, in most cases only a Rapid EIA is required. The completed report is then submitted to the regulatory agency, which decides whether the project must go for formal EIA or not.
Screening is the primary stage of EIA, which decides whether the project requires EIA or not. Based on the government rules, projects are generally categorized into two groups.
1. Category A projects (with potentially significant impacts) are appraised at the national level by the Ministry of Environment, Forest and Climate Change on the recommendation of an Expert Appraisal Committee (EAC). Examples include:
• All new National Highways are classified as Category A. In addition, expansion of National
Highways greater than 30 KM, involving additional Right of Way (ROW) greater than 20m,
involving the land acquisition and passing through more than one State are categorized as
Category A.
• Crude oil refineries and installations for the gasification and liquefaction of 500 tonnes or more
of coal or bituminous shale per day.
• Thermal power stations and other combustion installations (including cogeneration) with a heat
output of not less than 300 megawatts (equivalent to the gross electrical output of 140 MWe for
steam and single-cycle gas turbines power stations) and nuclear power stations and other nuclear
reactors.
• Construction of airports with a basic runway length of 2,100 meters or more.
• Waste-processing and disposal installations for the incineration, chemical treatment or landfill
of hazardous, toxic or dangerous wastes.
• Large dams and other impoundments designed for the holding back or permanent storage of
water.
• Industrial plants for the production of pulp, paper, and board from timber or similar fibrous
materials.
• Municipal wastewater treatment plants with a capacity exceeding 150,000 population equivalent.
2. Category B projects (with potentially less significant impacts) are evaluated and given clearance by
state-level authorities, the State Environment Impact Assessment Authority (SEIAA) and State Expert
Appraisal Committee (SEAC). Under Category B projects, there was only one distinction (coal and non-coal mines), with all non-coal mine leases below 50 ha clubbed under Category B. Environmental clearance (EC) is compulsory for mining of minor minerals in areas less than or equal to five hectares.
Category B projects are further categorized as ‘B1’ and ‘B2’ (except for township and area development
projects). The MoEF is assigned the task of issuing appropriate guidelines from time to time for such
projects.
The projects categorized as B1 require an EIA report for appraisal and also have to undergo a public
hearing process. But those falling under B2 are exempted from requirements of both EIA and public
consultation. If the project clears the screening stage, the developer will have to conduct a Preliminary
Assessment, which will predict the extent of the impacts and would briefly evaluate the importance for
decision-makers. After reviewing the preliminary report, the competent authority will decide if there is a
need for comprehensive EIA or Rapid EIA.
Scoping is an important stage prior to the main EIA process. During scoping, the study team interacts and engages in discussions with various stakeholders, such as developers, investors, regulatory agencies, scientific institutions, and local people. The numerous concerns and issues raised by different stakeholders are investigated and addressed by the study team. The team then selects the primary impacts for the main EIA to focus on and determines detailed and comprehensive Terms of Reference (ToR).
It is during the main EIA stage that the key impacts on the environment, such as changes in air quality and noise levels, impacts on wildlife and biodiversity, impacts on local communities' settlement patterns, changes in employment statistics, and changes in water consumption and availability, are formally identified. This is followed by "prediction", in which the impacts are characterized quantitatively as well as qualitatively. The predicted adverse impacts are then evaluated to determine whether they can be significantly mitigated. This stage also involves estimating mitigation costs and benefit-cost analysis.
Once the above-mentioned procedures are over, the next part is documentation, which is called the EIA
report. The report contains an executive summary of the project. The project developer would now
submit 20 copies of the executive summary to SPCB (State Pollution Control Board). It is now the
responsibility of the SPCB to conduct public hearing.
The Public hearing is organized within 30 days after the release of official notification at the site or in its
close proximity for ascertaining concerns of local stakeholders. SPCB fixes a date for Public Hearing
and informs the proponent to advertise in the local newspapers inviting the public for the hearing. Once
the hearing is completed, the SPCB forwards the minutes of the hearing along with the No objection
certificate to MoEF. In MoEF, the application is evaluated by an Impact Assessment Agency (IAA). The
IAA has the complete right of entry and inspection of the sites or factory premises prior to, during or
after the commencement of the project. The team carries out a technical assessment and gives its
recommendations within 90 days. On the basis of this, the MOEF grants the environmental clearance
which is valid for a period of seven years for the commencement of the project.
Under the Environment Protection Act, 1986 of India, notification was issued in February 1991, for the
regulation of activities in the coastal area by the Ministry of Environment and Forests (MoEF). As per
the notification, the coastal land up to 500m from the High Tide Line (HTL) and a stretch of 100m along
banks of creeks, estuaries, backwater, and rivers subject to tidal fluctuations is called the Coastal
Regulation Zone (CRZ). Under this regulation, coastal areas have been classified as CRZ-1, CRZ-2, CRZ-
3, CRZ-4.
CRZ-1: These ecologically sensitive areas are essential in maintaining the ecosystem of the coast. They
lie between the low and high tide lines. Exploration of natural gas and extraction of salt are permitted.
CRZ-2: These areas are urban coastal areas. As per CRZ, 2011 Notification, for CRZ-II (Urban) areas,
Floor Space Index (FSI) or the Floor Area Ratio (FAR) had been frozen as per the 1991 Development
Control Regulation (DCR) levels. In the CRZ, 2018 Notification, it has been decided to de-freeze the
same and permits FSI for construction projects, as prevailing on the date of the new notification. This
will enable the redevelopment of these areas to meet emerging needs.
CRZ-3: Rural and urban localities that fall outside the CRZ1 and CRZ2. Only certain activities related to
agriculture and some public facilities are allowed in this zone. Two separate categories have now been
stipulated as below:
(a) CRZ-III A – These are densely populated rural areas with a population density of 2161 or more per square kilometer as per the 2011 Census. Such areas shall have a No Development Zone (NDZ) of 50 meters
from the HTL as against 200 meters from the High Tide Line stipulated in the CRZ Notification, 2011,
since such areas have similar characteristics as urban areas.
(b) CRZ-III B – Rural areas with a population density of below 2161 per square kilometer as per the
2011 Census. Such areas shall continue to have an NDZ of 200 meters from the HTL.
CRZ-4: This zone lies in the aquatic area up to the territorial limit. Fishing and allied activities are permitted in this zone, but no solid waste should be let off in it. This zone has been redefined since the 1991 notification, in which it covered the coastal stretches of the Andaman & Nicobar and Lakshadweep islands.
2. Temporary tourism facilities are also now permissible in the "No Development Zone" (NDZ) of the
CRZ-III areas as per the Notification. However, a minimum distance of 10 m from HTL should be
maintained for setting up of such facilities.
3. CRZ Clearances streamlined: The procedure for CRZ clearances has been streamlined. Only such
projects/activities, which are located in the CRZ-I (Ecologically Sensitive Areas) and CRZ IV (area
covered between Low Tide Line and 12 Nautical Miles seaward) shall be dealt with for CRZ clearance
by the Ministry of Environment, Forest and Climate Change. The powers for clearances with respect to
CRZ-II and III have been delegated at the State level with necessary guidance.
4. A No Development Zone (NDZ) of 20 meters has been stipulated for all Islands: For islands close to
the mainland coast and for all Backwater Islands in the mainland, in wake of space limitations and unique
geography of such regions, bringing uniformity in treatment of such regions, NDZ of 20 m has been
stipulated.
5. Pollution abatement has been accorded special focus: In order to address pollution in Coastal areas,
treatment facilities have been made permissible activities in CRZ-I B area subject to necessary safeguards
and precautions.
Environmental Audit
Environmental auditing is a management tool intended to provide information on environmental
performance to the concerned people at the right time. This audit includes an analysis of the technical,
procedural and decision-making aspects of the EIA. It also encompasses all kinds of activities related to
the environmental measures of an organization.
Eco-mark Logo
An earthen pot symbolizes the Eco-mark scheme: it is made from a renewable resource (earth), does not produce hazardous waste, and consumes little energy in its making. The Central Pollution Control Board has set up guidelines for environmentally friendly products. Such products can obtain environmental labeling from the Government of India if they meet the criteria set under Indian law.
Strategic Environmental Assessment (SEA) typically comprises the following stages:
▪ "Screening", investigation of whether the plan or programme falls under the SEA legislation,
▪ "Scoping", defining the boundaries of investigation, assessment and assumptions required,
▪ "Documentation of the state of the environment", effectively a baseline on which to base
judgments,
▪ "Determination of the likely (non-marginal) environmental impacts", usually in terms of
Direction of Change rather than firm figures,
▪ Informing and consulting the public,
▪ Influencing "Decision taking" based on the assessment and,
▪ Monitoring of the effects of plans and programmes after their implementation.
1. Flame photometry
Flame photometry or flame atomic emission spectrometry (FAES) is an emission technique in which the sample solution is aspirated into a flame, commonly fueled by gas mixtures such as air-acetylene or nitrous oxide-acetylene, and is suited to the determination of alkali metals such as Na and K in solutions. When Na+ and K+ are excited, they emit spectra with sharp, bright lines at 589 and 768 nm, respectively. Within certain limits, the amount of light given off by the excited atoms is proportional to the concentration of the metal ions in the solution.
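The proportionality noted above is what makes calibration possible: within the linear range, an unknown concentration can be read off a calibration line fitted to standards. A sketch with invented instrument readings, using a least-squares slope through the origin:

```python
# Flame photometry calibration sketch. Standard concentrations and
# intensity readings below are invented for illustration.

standards = [(0.0, 0.0), (2.0, 15.0), (4.0, 30.0), (8.0, 60.0)]  # (mg/L Na, intensity)

# Least-squares slope through the origin: k = sum(x*y) / sum(x*x)
k = sum(c * i for c, i in standards) / sum(c * c for c, _ in standards)

def concentration(intensity: float) -> float:
    """Read an unknown concentration off the calibration line."""
    return intensity / k

# An unknown sample reading 45.0 intensity units:
print(concentration(45.0))  # -> 6.0 (mg/L)
```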
3. X-ray Diffraction
X-ray diffraction analysis (XRD) is one of the microstructural analysis methods used for the study of
structures and atomic spacing of unknown crystalline materials (e.g., minerals and inorganic compounds).
In this method the crystalline structure causes a beam of incident X-rays to diffract into many specific
directions.
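The diffraction directions mentioned above relate to atomic spacing through Bragg's law (nλ = 2d sin θ), which underlies XRD analysis. A small sketch solving for the lattice spacing, using the well-known Cu K-alpha wavelength and an illustrative peak position:

```python
import math

# Bragg's law: n * lambda = 2 * d * sin(theta).
# XRD instruments usually report the 2-theta angle, so halve it first.

def lattice_spacing(wavelength_nm: float, two_theta_deg: float, n: int = 1) -> float:
    """Return the lattice spacing d (same units as the wavelength)."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength_nm / (2.0 * math.sin(theta))

# Cu K-alpha radiation (~0.154 nm) diffracting at an illustrative
# 2-theta of 30 degrees:
print(round(lattice_spacing(0.154, 30.0), 3))  # -> 0.298 (nm)
```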
4. IR spectroscopy
The IR spectroscopy is based on the concept that molecules tend to absorb specific frequencies of light
that are characteristic of the corresponding structure of the molecules, and this method is often used to
identify structures of functional groups. It covers a range of techniques but is mostly based on absorption
spectroscopy.
5. Raman spectroscopy
Raman spectroscopy is a spectroscopic technique typically used to determine vibrational, rotational and
other low-frequency modes of molecules. This method is generally used to establish a structural
fingerprint by which molecules can be identified. The Stokes and Anti-Stokes lines are the two sets of
lines found in Raman Spectra. The former are those lines having wavelengths longer than that of the
incident light, whereas the latter have wavelengths shorter than the incident light.
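Raman shifts are conventionally reported in wavenumbers (cm⁻¹) relative to the excitation line, so a Stokes line (longer wavelength) gives a positive shift. A sketch with illustrative wavelengths:

```python
# Raman shift in wavenumbers relative to the excitation line.
# Example wavelengths below are illustrative.

def raman_shift_cm1(excitation_nm: float, scattered_nm: float) -> float:
    """Shift = 1/lambda_excitation - 1/lambda_scattered, in cm^-1.

    1 nm^-1 equals 1e7 cm^-1, hence the conversion factor.
    """
    return (1.0 / excitation_nm - 1.0 / scattered_nm) * 1e7

# 532 nm excitation with a Stokes line observed at 563 nm:
print(round(raman_shift_cm1(532.0, 563.0)))  # -> 1035 (cm^-1, positive: Stokes)
```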
9. Electrophoresis
Electrophoresis is a technique commonly used in the lab to separate charged molecules according to
their size. Gel electrophoresis is another method for separation and analysis of macromolecules (DNA,
RNA, and proteins) and their fragments, based on their size and charge. It is used in clinical chemistry
to separate proteins by charge or size.
Paper chromatography
Paper chromatography is a separation technique in which the stationary phase is a strip of paper and the mobile phase is a solvent, typically water or a mixture of solvents in water. Mixtures are separated because some components will be more attracted to the stationary phase
(and stick to it) while some components will be more attracted to the mobile phase (and travel with it).
The samples to be separated must be spotted onto the paper (stationary phase). By placing one edge of
the paper into a small amount of liquid (mobile phase), the paper will wick up the liquid. The
components that are attracted more to the paper will move very little, if at all. The components that are
more attracted to the mobile phase will travel with it, at different rates depending on the level of attraction.
This component traveling process is called elution. A component with a given solubility travels along
with the mobile phase at one rate, regardless of what other components are present in the sample.
14. Thin-layer chromatography (TLC)
Thin-layer chromatography (TLC) is a technique used to separate non-volatile mixtures. It is performed
on a sheet of glass, plastic or aluminum foil, which is coated with a thin layer of adsorbent material: silica
gel, aluminum oxide (alumina), or cellulose. This layer of adsorbent is known as the stationary phase.
After the sample has been applied on the plate, a solvent or solvent mixture (known as the mobile phase)
is drawn up the plate via capillary action. Separation is achieved since different analytes ascend the TLC
plate at different rates. The mobile phase has different properties from the stationary phase. For example,
non-polar mobile phases (e.g. heptane) are used with polar stationary phases such as silica gel.
The spots are visualized after the completion of the experiment. This can be done simply by projecting
ultraviolet light onto the sheet. When sheets are treated with phosphor, dark spots appear on the sheet
where compounds absorb the light impinging on certain areas. Chemical processes can also be used to
visualize the spots. For example, anisaldehyde forms colored adducts with many compounds. Similarly,
sulfuric acid will char most organic compounds leaving a dark spot on the sheet.
Retardation factor
A solvent in chromatography is the liquid in which the paper is placed, and the solute is the ink that is
being separated. The retardation factor (Rƒ) is defined as the ratio of the distance traveled by the solute
to the distance traveled by the solvent. It is used in chromatography to quantify the amount of retardation
of a sample in a stationary phase relative to a mobile phase. The term retention factor is sometimes used
synonymously with the retardation factor in regard to planar chromatography.
The Rf value will always be in the range 0 to 1. If Rƒ value of a solution is zero, then the solute will
remain in the stationary phase and is immobile. If Rƒ value = 1 then the solute will have no affinity for
the stationary phase and travels with the solvent front.
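The Rf definition above reduces to a simple ratio of measured distances; a sketch with illustrative values:

```python
# Retardation factor from distances measured on the chromatogram.

def rf(solute_distance_cm: float, solvent_distance_cm: float) -> float:
    """Rf = distance travelled by solute / distance travelled by solvent front."""
    return solute_distance_cm / solvent_distance_cm

# An illustrative dye spot moving 4.5 cm while the solvent front moves 9.0 cm:
print(rf(4.5, 9.0))  # -> 0.5 (always between 0 and 1)
```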
Scanning Electron Microscopy (SEM)
A scanning electron microscope produces images of a sample by scanning its surface with a focused beam of electrons, which interact with atoms in the sample. The electron beam is scanned in a raster scan pattern, and the position of the beam is combined
with the intensity of the detected signal to produce an image. In the most common SEM mode, secondary
electrons emitted by atoms excited by the electron beam are detected using an Everhart-Thornley
detector. SEM can achieve resolution better than 1 nanometer.
Article 48A
The article states that “The State shall endeavor to protect and improve the environment and to
safeguard the forests and wildlife of the country.”
Article 51A
It deals with the fundamental duties of the citizens and states that “It shall be the duty of every citizen of
India to protect and improve the natural environment including forests, lakes, rivers, and wildlife and to
have compassion for living creatures.”
Article 253
Article 253 states that “Parliament has the power to make any law for the whole or any part of the country
for implementing any treaty, agreement or convention with any other country.” In simple words, this
article suggests that in the wake of the Stockholm Conference of 1972, Parliament has the power to
legislate on all matters linked to the preservation of the natural environment.
1. The specified endemic plants in Schedule VI are prohibited from cultivation and planting: Beddome's cycad (Cycas beddomei), Blue Vanda (Vanda coerulea), Kuth (Saussurea lappa), Ladies' slipper orchids (Paphiopedilum spp.), Pitcher plant (Nepenthes khasiana), and Red Vanda (Renanthera imschootiana).
2. The Chief Wildlife Warden may, if he is satisfied that any wild animal specified in Schedule I has become dangerous to human life or is so disabled or diseased as to be beyond recovery, by order in writing and stating the reasons therefor, permit any person to hunt such animal or cause such animal to be hunted.
3. It shall be lawful for the Chief Wildlife Warden to grant a permit, by an order in writing stating the
reasons therefor, to any person, on payment of such fee as may be prescribed, for the purpose of
education, scientific research, translocation of wild animals, and Collection of specimens (Section 38-1).
4. The Chief Wild Life Warden may with the previous permission of the State Government, grant to
any person a permit to pick, uproot, acquire or collect from a forest land or the area specified under
section 17A or transport, subject to such conditions as may be specified therein, any specified plant for
the purpose of education, scientific research, collection, preservation, and display in a herbarium of any
scientific institutions; or propagation by a person or an institution approved by the Central Government
in this regard.
5. The State Government may, by notification, declare its intention to constitute any area, other than an area comprised within any reserve forest or the territorial waters, as a sanctuary (Section 18) if it considers that such area is of adequate ecological, faunal, floral, geomorphological, natural, or zoological significance, for the purpose of protecting, propagating or developing wildlife or its environment.
6. Whenever it appears to the State Government that an area, whether within a sanctuary or not, is, by
reason of its ecological, faunal, floral, geomorphological, or zoological association of importance, needed
to be constituted as a National Park for the purpose of protecting, propagating or developing wildlife
therein or its environment, it may, by notification, declare its intention to constitute such area as a
National Park (Section 35).
7. The State Government may, by notification, declare any area closed to hunting for such period as may
be specified in the notification. No hunting of any wild animal shall be permitted in a closed area during
a particular period. (Section 36)
Section 2 of the Act restricts the State Government from de-reserving forests or using forest land for non-forest purposes. "Non-forest purpose" means the breaking up or clearing of any forest land or portion thereof for (a) the cultivation of tea, coffee, or horticultural crops, or (b) any purpose other than reafforestation.
Section 3 of the Act deals with the constitution of an Advisory Committee. It gives Central Government
the power to constitute a committee of such number of persons as it may deem fit to advise the
Government.
Section 3A deals with Penalty for contravention of the provisions of the Act.
Section 3B deals with cases in which the offense is made by Authorities or Government Departments.
Section 4 deals with the power to make rules. It states that the Central Government may, by notification
in the Official Gazette, make rules for carrying out the provisions of this Act.
Section 5 deals with repeal and saving. The Forest (Conservation) Ordinance, 1980 is hereby repealed.
National Biodiversity Authority (NBA): The National Biodiversity Authority is a statutory autonomous
body under the Ministry of Environment, Forests and climate change, Government of India established
in 2003. All matters relating to requests for access by foreign individuals, institutions or companies, and
all matters relating to the transfer of results of research to any foreigner will be dealt with by the National
Biodiversity Authority.
State Biodiversity Boards (SBB): All matters relating to access by Indians for commercial purposes will
be under the purview of the State Biodiversity Boards (SBB). The Indian industry will be required to
provide prior intimation to the concerned SBB about the use of a biological resource. The State Board
will have the power to restrict any such activity, which violates the objectives of conservation, sustainable
use and equitable sharing of benefits.
The Water (Prevention & Control of Pollution) Act, 1974 is a comprehensive legislation for maintaining or restoring the wholesomeness of water in the country. The Central and State Pollution Control Boards were constituted under this Act. The Act was amended in 1978 and 1988 to clarify certain ambiguities and to vest more powers in the Pollution Control Boards. Under the Water Act, sewage or pollutants cannot be discharged into water bodies, including lakes, and it is the duty of the State Pollution Control Board to intervene and stop such activity.
1. The ambit of the rules has been expanded by including other wastes such as waste tyres, paper waste, metal scrap, and used electronic items.
3. The basic infrastructure necessary to safeguard health and the environment in the waste processing industry has been prescribed as Standard Operating Procedures (SOPs). An SOP is a set of step-by-step instructions compiled by an organization to help workers carry out complex routine operations.
4. The List of Waste Constituents with Concentration Limits has been revised as per international standards and drinking water standards. Items such as waste edible fats and oils of animal or vegetable origin, household waste, and other chemical wastes, especially in solvent form, are prohibited from import.
1. Phase-out the use of chlorinated plastic bags, gloves, and blood bags within two years.
2. Existing incinerators are to achieve the standards for retention time in the secondary chamber and for dioxins and furans within two years.
3. Bio-medical waste has been classified into 4 categories instead of 10 to improve the segregation of
waste at source. The amended rules stipulate that generators of bio-medical waste such as hospitals,
nursing homes, clinics, and dispensaries will not use chlorinated plastic bags and gloves beyond
March 27, 2019, in medical applications, to save the environment. Blood bags have been exempted from phase-out, as per the amended BMW Rules, 2018.
Blue category (Glassware): Broken or discarded and contaminated glass, including waste medicine vials and ampoules, except those contaminated with cytotoxic wastes. Treatment and disposal: Disinfection (by soaking the washed glass after cleaning with detergent and Sodium Hypochlorite treatment) or through autoclaving or microwaving or hydroclaving, and then sent for recycling.
The bins and bags used in hospitals should carry the biohazard symbol indicating the nature of waste to
the patients and the public.
The Ministry of Environment, Forest and Climate Change notified the Construction & Demolition
Waste Management Rules, 2016 as an initiative to tackle the issues of pollution and waste management.
The rules apply to everyone who generates construction and demolition waste. Large generators of waste (20 tonnes or more in one day, or 300 tonnes per project in a month) shall segregate the waste into streams such as concrete, soil, steel, wood and plastics, bricks and mortar. Besides, the authorities shall
procure and utilize 10-20% materials made from construction and demolition waste in municipal and
government contracts.
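The "large generator" threshold above reduces to a simple check. A minimal Python sketch (the function name and argument units are my own; the thresholds are the ones quoted in the rules):

```python
def is_large_generator(tonnes_per_day: float, tonnes_per_project_month: float) -> bool:
    """True if a generator crosses the C&D Waste Management Rules, 2016
    thresholds for a 'large generator': 20 tonnes or more in one day,
    or 300 tonnes per project in a month."""
    return tonnes_per_day >= 20 or tonnes_per_project_month >= 300

# A site clearing 25 t in a day must segregate its waste into the
# prescribed streams; a small 5 t/day, 100 t/month project need not.
```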
The Manufacture, Storage and Import of Hazardous Chemical Rules, 2000
The regulation was first enacted in 1989 by the Ministry of Environment & Forests (MoEF) and later
amended in 1994 and 2000. It regulates the manufacture, storage, and import of hazardous chemicals in
India. The transport of hazardous chemicals must meet the provisions of the Motor Vehicles Act, 1988.
The Noise Pollution (Regulation and Control) (Amendment) Rules, 2000, 2010
These rules lay down such terms and conditions as are necessary to reduce noise pollution, such as banning the use of loudspeakers or public address systems during night hours (between 10:00 p.m. and 6:00 a.m.) on or during any cultural or religious festive occasion. According to the rules, a loudspeaker, any sound-producing system, or a sound amplifier shall not be used at night except in closed premises for communication within, such as auditoriums, conference rooms, community halls, and banquet halls, or during a public emergency. The noise level at the boundary of a public place where a loudspeaker or public address system is being used shall not exceed 10 dB(A) above the ambient noise standard of that area, or 75 dB(A), whichever is lower.
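The boundary-noise condition is a "whichever is less" cap and can be sketched directly in Python (the ambient standards in the usage comments are illustrative values, not quoted from the rules):

```python
def loudspeaker_limit_db(ambient_standard_db: float) -> float:
    """Permissible level at the boundary of a public place where a
    loudspeaker is in use: 10 dB(A) above the area's ambient noise
    standard, or 75 dB(A), whichever is less."""
    return min(ambient_standard_db + 10, 75)

# e.g. an area with a 65 dB(A) ambient standard is capped at 75 dB(A),
# while one with a 50 dB(A) standard is capped at 60 dB(A).
```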
2. Involvement of local communities in the protection, conservation, and management of forests through
the Joint Forest Management Programme. In this scheme, the village committee is known as the Forest
Protection Committee (FPC), with which the Forest Department enters into a JFM agreement. Villagers
agree to assist in the safeguarding of forest resources by providing protection from fire, grazing, and illegal
harvesting, in exchange for non-timber forest products and a share of the revenue from the sale of timber products.
3. Meeting the requirement of fuelwood, fodder and minor forest produce of the rural and tribal
populations.
4. Conservation of Biological Diversity and Genetic Resources of the country through ex-situ and in-situ
conservation measures.
5. A significant contribution to the maintenance of the environment and ecological stability.
In 1972 in Stockholm, Sweden, the United Nations hosted its first Conference on the Human
Environment, the official declaration of which is commonly called the Stockholm Declaration of 1972.
The declaration was the first major international document to recognize that both developing and
industrialized economies contribute to environmental problems, and it noted that most environmental
problems in developing economies occur because of underdevelopment. One of the seminal issues that emerged from the conference was the recognition of poverty alleviation as a prerequisite for protecting the environment. The Indian Prime Minister Indira Gandhi, in her influential speech at the conference, brought forward the connection between ecological management and poverty alleviation.
Montreal Protocol (1987)
The Montreal Protocol on substances that deplete the Ozone layer (a protocol to the Vienna Convention
for the Protection of the Ozone Layer) is an international treaty designed to protect the ozone layer by
phasing out the production of numerous substances that are responsible for ozone depletion. Climate
projections indicate that the ozone layer will return to 1980 levels between 2050 and 2070. Owing to its
widespread adoption and implementation, the protocol has been hailed as an example of exceptional
international co-operation.
The Cartagena Protocol on Biosafety is an international agreement which aims to ensure the safe
handling, transport, and use of living modified organisms (LMOs) resulting from modern biotechnology
that may have adverse effects on biological diversity, taking also into account risks to human health.
The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits
Arising from their Utilization (ABS) provides a transparent legal framework for the effective
implementation of one of the three objectives of the CBD (Convention on Biological Diversity), i.e. the fair and equitable sharing of benefits
arising out of the utilization of genetic resources.
The Kyoto Protocol is an international treaty that commits state parties to reduce greenhouse gas
emissions. It is based on the scientific consensus that global warming is occurring and it is extremely
likely that human-made CO2 emissions have predominantly caused it. The Kyoto Protocol applies to the
six greenhouse gases listed in Annex A: Carbon dioxide (CO2), Methane (CH4), Nitrous oxide (N2O),
Hydrofluorocarbons (HFCs), Perfluorocarbons (PFCs), and Sulphur hexafluoride (SF6).
Carbon trading is the process of buying and selling permits and credits to emit carbon dioxide. Countries
with commitments under the Kyoto Protocol have accepted targets for limiting or reducing emissions.
These targets are expressed as levels of allowed emissions or ‘assigned amounts.’ The allowed emissions
are divided into ‘assigned amount units’ (AAUs). Countries that have emission units to spare (i.e.
emissions permitted to them but not "used") are allowed to sell this excess capacity to countries that are
over their targets. Thus, a new commodity was created in the form of emission reductions or removals.
Since carbon dioxide is the principal greenhouse gas, it was termed carbon trading. Carbon is now
tracked and traded like any other commodity. This is often termed the carbon market.
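The AAU bookkeeping described above can be sketched with hypothetical figures (the country names and all quantities below are invented purely for illustration, in Mt CO2-equivalent):

```python
# Hypothetical assigned amounts (AAUs) and actual emissions, Mt CO2e.
assigned = {"CountryA": 500, "CountryB": 300}
emitted = {"CountryA": 450, "CountryB": 340}

# Positive surplus = spare AAUs that may be sold; negative = shortfall
# that must be covered by buying units from another party.
surplus = {c: assigned[c] - emitted[c] for c in assigned}

# CountryA has 50 spare units; CountryB is 40 over its target, so up
# to 40 units change hands for B to comply.
units_traded = min(surplus["CountryA"], -surplus["CountryB"])
```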
The other units which may be transferred under the scheme are the following.
1. A removal unit (RMU), generated on the basis of land use, land-use change and forestry (LULUCF) activities such as reforestation.
2. An emission reduction unit (ERU), generated by a joint implementation project.
3. A certified emission reduction (CER), generated from a clean development mechanism project activity.
Clean Development Mechanism (CDM)
The Clean Development Mechanism (CDM) allows a country with an emission-reduction or emission-
limitation commitment under the Kyoto Protocol to implement an emission-reduction project in
developing countries. Such projects can earn saleable certified emission reduction (CER) credits, each
equivalent to one tonne of CO2, which can be counted towards meeting Kyoto targets. A CDM project
activity might involve, for example, a rural electrification project using solar panels or the installation of
more energy-efficient boilers.
Earth Summit at Johannesburg (2002)
The United Nations World Summit on Sustainable Development, also called Earth Summit 2002, was
an international convention on the environment and sustainable development held in Johannesburg,
South Africa. The Johannesburg Declaration contains targets and timetables for achieving the goals of
Agenda 21.
RIO+20
Rio+20 or Earth Summit 2012 was the third international conference on sustainable development aimed
at reconciling the economic and environmental goals of the global community. The primary result of the
conference was the nonbinding document "The Future We Want," a 49-page working paper. In it, the
heads of state of the 192 governments in attendance renewed their political commitment to sustainable
development and declared their commitment to the promotion of a sustainable future.
United Nations Framework Convention on Climate Change
The UNFCCC objective is to "stabilize greenhouse gas concentrations in the atmosphere at a level that
would prevent dangerous anthropogenic interference with the climate system". The framework sets non-
binding limits on greenhouse gas emissions for individual countries and contains no enforcement
mechanisms. Instead, the framework outlines how specific international treaties (called "protocols" or
"Agreements") may be negotiated to specify further action towards the objective of the UNFCCC. One
of the first tasks set by the UNFCCC was for signatory nations to establish national inventories of greenhouse gas (GHG) emissions.
The Millennium Development Goals (MDGs) were eight international development goals that all 191
UN member states have agreed to achieve by the year 2015. The goals had been established following
the Millennium Summit of the United Nations in 2000.
As the MDG era came to a conclusion with the end of the year 2015, world leaders adopted the bold and transformative 2030 Agenda for Sustainable Development, which came into effect in 2016. The Eight Millennium Development Goals are given below.
1. To eradicate extreme poverty and hunger
2. To achieve universal primary education
3. To promote gender equality and empower women
4. To reduce child mortality
5. To improve maternal health
6. To combat HIV/AIDS, malaria, and other diseases
7. To ensure environmental sustainability
8. To develop a global partnership for development
The IPCC was established in 1988 by the World Meteorological Organization (WMO) and the United
Nations Environment Programme (UNEP). IPCC is dedicated to providing the world with an objective,
scientific view of climate change, its natural, political and economic impacts and risks, and possible
response options. IPCC reports cover the "scientific, technical and socio-economic information relevant
to understanding the scientific basis of risk of human-induced climate change, its potential impacts and
options for adaptation and mitigation." The IPCC does not carry out original research, nor does it
monitor climate or related phenomena itself. Rather, it assesses published literature including peer-
reviewed and non-peer-reviewed sources.
The 2009 United Nations Climate Change Conference, commonly known as the Copenhagen Summit
recognized that climate change is one of the greatest challenges of the present day and that actions should
be taken to keep any temperature increases to below 2 °C. Besides, it significantly advanced the
negotiations on the infrastructure needed for effective global climate change cooperation, including
improvements to the Clean Development Mechanism of the Kyoto Protocol, and also produced the
Copenhagen Accord, which expressed a clear political intent to constrain carbon emissions and respond to climate change in both the short and long term.
The United Nations Environment Programme (UNEP) coordinates the United Nations' environmental activities and assists developing countries in implementing environmentally sound policies
and practices. UNEP's activities cover a wide range of issues regarding the atmosphere, marine and
terrestrial ecosystems, environmental governance, and green economy. UNEP has aided in the
formulation of guidelines and treaties on issues such as the international trade in potentially harmful
chemicals, transboundary air pollution, and contamination of international waterways. The International
Cyanide Management Code, a program of best practice for the use of chemicals at gold mining
operations, was developed under UN Environment's aegis.
The International Geosphere-Biosphere Programme (IGBP) was a research programme on the phenomenon of global change that ran from 1987 to 2015. The IGBP aimed to describe and understand how physical, chemical and biological processes regulate the Earth system.
ENVIRONMENTAL ISSUES
National Action Plan on Climate Change (NAPCC)
The National Action Plan on Climate Change was officially launched in 2008 for promoting development objectives while also yielding co-benefits for addressing climate change effectively. The action plan outlines a number of steps to simultaneously advance India's development and climate change-related objectives. It encompasses a range of measures. The core of the implementation of the Action Plan is constituted by the following eight missions:
1. National Solar Mission
The major objective of the mission is to enhance the share of solar energy in the total energy requirement
of the country, while also expanding the scope of other renewable sources. The NAPCC sets the solar
mission a target of delivering 80% coverage for all low-temperature (<150 °C) applications of solar energy in urban areas, industries, and commercial establishments, and a target of 60% coverage for medium-temperature (150 °C to 250 °C) applications.
2. National Mission for Enhanced Energy Efficiency
The NAPCC recommends mandating specific energy consumption decreases in large energy-consuming
industries, with a system for companies to trade energy-saving certificates, financing for public-private
partnerships to reduce energy consumption through demand-side management programs in the
municipal, buildings, and agricultural sectors, and energy incentives including reduced taxes on energy-
efficient appliances.
3. National Mission on Sustainable Habitat
NMSH aims at promoting energy efficiency as a core component of urban planning by extending the
existing Energy Conservation Building Code, strengthening the enforcement of automotive fuel economy
standards, and using pricing measures to encourage the purchase of efficient vehicles and incentives for
the use of public transportation. The NAPCC also emphasizes waste management and recycling.
4. National Water Mission
With water scarcity projected to worsen as a result of climate change, the plan sets a goal of 20%
improvement in water use efficiency through pricing and other measures.
5. National Mission for Sustaining the Himalayan Ecosystem
6. National Mission for a Green India
7. National Mission for Sustainable Agriculture
The National Mission for Sustainable Agriculture (NMSA) has been formulated for enhancing agricultural
productivity especially in rainfed areas focusing on integrated farming, water use efficiency, soil health
management, and synergizing resource conservation. The mission also aims to support climate
adaptation in agriculture through the development of climate-resilient crops, expansion of weather
insurance mechanisms, and agricultural practices.
8. National Mission on Strategic Knowledge for Climate
The mission aims at gaining a better understanding of climate science, impacts, and challenges. The plan envisions a new
Climate Science Research Fund, improved climate modeling, and increased international collaboration.
It encourages private sector initiatives to develop adaptation and mitigation technologies. It also aimed
at networking existing knowledge institutions, capacity building & improving understanding of key climate
processes and climate risks.
Gandhamardan Movement
Gandhamardan, one of the bauxite rich hill ranges situated in Sambalpur and Bolangir region of western
Odisha, was regarded by tribals and peasants as their mother who provides them with food, firewood,
fodder, and also water for cultivation and drinking purposes. The five-year-long sustained campaign led by the local people saw BALCO wind up its operation to mine 213 million tonnes of bauxite. It was a major victory both for the local forest-dependent people and for the fragile ecology of western Odisha.
Chipko Movement (1973)
Chipko Movement, started in the Chamoli district of Uttarakhand, was a non-violent movement led by
Sunderlal Bahuguna, alongside Chandi Prasad Bhatt and Gaura Devi. It aimed at the protection and
conservation of trees and forests from being destroyed. The name of the movement originated from the
word “embrace” (Chipko) as the villagers used to hug the trees and protect them from woodcutters.
Appiko Movement (1983)
On Sep.8, 1983, Panduranga Hegde, the fiery activist, started the Appiko movement. The main purpose
of the movement was to prevent the commercial felling of trees in the Uttara Kannada district of
Karnataka.
Greater one-horned rhinoceros once roamed from Pakistan to the Indo-Burmese border, and in parts of
Nepal, Bangladesh and Bhutan. Due to hunting and habitat loss, their population was reduced to fewer than 200 individuals in northern India and Nepal by the early 20th century. Kaziranga National Park in Assam, India, holds about
70% of its world population. Indian Rhino Vision 2020 is an ambitious effort to attain a wild population
of at least 3,000 greater one-horned rhinos spread over seven protected areas in the Indian state of Assam
by the year 2020.
Indian Crocodile Conservation (1975)
The Indian Crocodile Conservation Project was an in-situ conservation initiative set up in 1975 under
the auspices of the Government of India, initially at Odisha's Satkosia Gorge Sanctuary. Sixteen crocodile rehabilitation centers and five crocodile sanctuaries, including the National Chambal Sanctuary where gharials are bred in captivity, were established in India between 1975 and 1982. As of 1999, gharials
were also kept in the Madras Crocodile Bank Trust, Mysore Zoo, Jaipur Zoo and Kukrail Gharial
Rehabilitation Centre in India.
GRIHA is an acronym for Green Rating for Integrated Habitat Assessment. GRIHA is a performance-
oriented system where points are earned for meeting the design and performance intent of the criteria.
GRIHA is a 100-point system consisting of some core points, which are mandatory, while the rest are
optional. Different levels of certification (one star to five stars) are awarded based on the number of points earned. The minimum score required for certification is 25 points.
The GRIHA V 2015 rating system consists of 31 criteria categorized under various sections, such as Site
Planning, Construction Management, Occupant Comfort and Well-being, Sustainable Building
Materials, Performance Monitoring and Validation, and Innovation.
GRIHA V 3 rating system consists of 34 criteria covering various subjects such as sustainable site
planning, energy, and water optimization, sustainable building materials, waste management, and building
operations & maintenance.
In its recent judgment, the Supreme Court has banned the sale and registration of motor vehicles
conforming to the emission standard Bharat Stage-IV in the entire country from April 1, 2020. The
major difference between the existing BS-IV and the forthcoming BS-VI norms is the sulfur content of the fuel: while BS-IV fuels contain 50 parts per million (ppm) of sulfur, BS-VI grade fuel contains only 10 ppm. Also, the harmful NOx (nitrogen oxides) from diesel cars can be brought down
by nearly 70%.
Cetane Number
Cetane is a colorless, liquid hydrocarbon that ignites easily under compression. Cetane number is the
rating assigned to diesel fuel to rate its combustion quality. Pure cetane represents the highest purity of
diesel fuel possible, and thus has a cetane rating of 100. A typical diesel engine operates well with a CN
from 48 to 50. Fuels with a lower CN cause longer ignition delays. Generally, alkyl nitrates (principally
2-ethylhexyl nitrate) and di-tert-butyl peroxide are used as additives to raise the cetane number. Dimethyl
ether is a potential diesel fuel as it has a high cetane rating (55-60) and can be produced as biofuel.
Octane number
Octane number is a standard measure of the performance of an engine or aviation fuel. It is based on a
scale on which isooctane is 100 (minimal knock) and heptane is 0 (bad knock). The higher the octane
number, the more compression the fuel can withstand before igniting, i.e. gasoline with an octane
number of 92 has the same knock as a mixture of 92% isooctane and 8% heptane. The use of gasoline with lower octane numbers often leads to engine knocking.
The most common type of octane rating worldwide is the Research Octane Number (RON). RON is
determined by running the fuel in a test engine with a variable compression ratio under controlled
conditions and comparing the results with those for mixtures of iso-octane and n-heptane.
The Compression ratio is varied during the test in order to challenge the fuel's antiknocking tendency as
an increase in the compression ratio will increase the chances of knocking.
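The reference scale itself can be expressed as a one-line function: by definition, an isooctane/n-heptane reference mixture has a RON equal to its isooctane percentage. (Real gasoline blending is non-linear, so this is only the definition of the scale, not a blending model.)

```python
def reference_ron(isooctane_vol_fraction: float) -> float:
    """RON of an isooctane/n-heptane reference mixture: isooctane is
    rated 100 and n-heptane 0, so RON equals the isooctane volume %."""
    return 100.0 * isooctane_vol_fraction

# A fuel that knocks like a 92% isooctane / 8% n-heptane blend is RON 92.
```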
Antiknock agents are additives used to reduce engine knocking by raising the temperature and pressure at which auto-ignition occurs, thereby increasing the fuel's octane rating. Some of the antiknock agents in use are:
1. Tetraethyllead - Still in use as a high-octane additive. To avoid deposits of lead inside the engine, lead
scavengers such as Tricresyl phosphate, 1,2-Dibromoethane and 1,2-Dichloroethane are added to the
gasoline together with tetraethyllead. Tetraethyllead is highly toxic, with as little as 6-15 mL being enough
to induce severe lead poisoning.
2. Alcohol
3. Ferrocene
4. Iron pentacarbonyl
6. Toluene
7. Isooctane
8. BTEX - a hydrocarbon mixture of benzene, toluene, xylene, and ethyl-benzene, also called gasoline
aromatics.
The Calculated Carbon Aromaticity Index (CCAI) and the Calculated Ignition Index (CII) are both calculated from the density and kinematic viscosity of the fuel. The normal ignition quality value lies somewhere between 800 and 880; a lower value indicates better ignition quality. Fuels with a CCAI higher than 880 are often problematic or even unusable in a diesel engine.
Environmental Disasters
1. Love Canal Disaster (1978 – 2004)
Love Canal is a neighborhood in Niagara Falls, New York. In the 1890s, the place was envisioned as a model planned community. After the partial development and subsequent demise of the project, the canal became a dumpsite during the 1920s for municipal refuse for the City of Niagara Falls. During the
1940s, the canal was purchased by Hooker Chemical Company, now Occidental Chemical Corporation,
which used the site to dump 21,800 tons of chemical byproducts from the manufacturing of dyes,
perfumes, and solvents for rubber and synthetic resins. In 1953, the Hooker Chemical Company, then
the owners and operators of the property, covered the canal with earth and sold it to the city for one
dollar. In the late 1950s, about 100 homes and a school were built at the site. In 1978, a record amount of rainfall triggered the leaching and upward percolation of the buried chemical substances. This event displaced numerous families, leaving them with long-standing health issues, including high white blood cell counts and leukemia. Subsequently, the federal
government passed the Superfund law. The resulting Superfund cleanup operation continued until 2004.
2. Minamata Disaster (1956)
Minamata disease is methylmercury (MeHg) poisoning that first occurred in humans who ingested fish
and shellfish contaminated by Hg discharged in wastewater in Minamata City, Kumamoto prefecture,
Japan. It was caused by the release of methylmercury in the industrial wastewater from the Chisso
Corporation's chemical factory, which continued from 1932 to 1968. Minamata is a neurological disease
with symptoms such as ataxia, numbness in the hands and feet, general muscle weakness, narrowing of
the field of vision, and damage to hearing and speech. In extreme cases, insanity, paralysis, coma, and
death follow within weeks of the onset of symptoms. A second outbreak of Minamata disease was also reported in Niigata, Japan, in 1965. Over the past 36 years, of the 2252 patients who have been officially
recognized as having Minamata disease, 1043 have died.
Petroleum
Petroleum is a yellowish-black liquid consisting of naturally occurring hydrocarbons of various molecular
weights. Due to its high energy density, easy transportability, and relative abundance, oil has become the
world's most important source of energy since the mid-1950s. The name petroleum covers both naturally occurring unprocessed crude oil and petroleum products made by refining crude oil. The
components of petroleum are separated via fractional distillation, i.e. a process by which components in
a chemical mixture are separated into different parts according to their different boiling points using a
fractionating column.
The consumer products derived from petroleum include gasoline (petrol), kerosene, asphalt, and
chemical reagents that are used to make plastics, pesticides, and pharmaceuticals. Petroleum is usually
found in porous rock formations in the upper strata of some areas of the Earth's crust and in oil sands
(tar sands).
Gasoline
Gasoline or petrol is used primarily as a fuel in spark-ignited internal combustion engines. It is a mixture
of paraffin (alkanes), olefins (alkenes) and cycloalkanes (naphthenes). Gasoline also contains benzene
and other known carcinogens.
In October 2007, the Government of India decided to make 5% ethanol blending (with gasoline)
mandatory.
Ethanol fuel
It is made by the catalytic hydration of ethylene with sulfuric acid as the catalyst. Bioethanol is a form of
renewable energy that can be produced from very common crops, such as hemp, sugarcane, potato,
cassava, and corn, through a process called ethanol fermentation.
In India, seeds of the Jatropha curcas plant are used for the production of bio-diesel in remote rural and
forest communities to meet the fuel requirements.
Shale oil
Shale oil is produced from shale rock fragments by pyrolysis, hydrogenation, or thermal dissolution.
These processes convert the organic matter within the rock (kerogen) into synthetic oil and gas. The
World Health Organization classifies shale oil as a Group 1 carcinogen (carcinogenic to humans).
Kerogen is the most abundant source of organic compounds on earth, in which Petroleum and natural
gas are formed. Kerogen may be classified based on its origin: lacustrine (e.g. algal), marine (e.g.
planktonic), and terrestrial (e.g. pollen and spores).
Gas Hydrates
Gas hydrate is a solid, ice-like form of water that traps gas molecules (usually huge quantities of methane) in its molecular cavities. It is found several hundred meters below the seafloor and in association with permafrost in the Arctic. It is not stable at normal sea-level pressures and
temperatures.
ONGC has struck gas hydrate reserves in the deep sea off the Andhra Pradesh coast. The reserves are
located in the Krishna-Godavari basin, which came into the limelight about a decade ago. These natural
gas hydrates are seen as a potentially vast energy resource, but an economical extraction method has so
far proven elusive.
Hydrocarbon clathrates are a bane to the petroleum industry because they can form inside gas pipelines
creating obstructions. Deep-sea deposition of carbon dioxide clathrate has been proposed as a method
to remove this greenhouse gas from the atmosphere and control climate change.
Calorific value
The calorific value (heating value or energy value) of a fuel is the amount of heat released during the
combustion of a specified amount of it. The values are conventionally measured using a bomb
calorimeter.
(Table: Higher (HHV) and Lower (LHV) heating values of some common fuels at 25 °C; table not reproduced here.)
Gross calorific value (GCV) is the amount of heat released by the complete combustion of a unit of
natural gas. It is also known as the Higher Heating Value (HHV).
Net Calorific Value (NCV), also known as lower heating value (LHV), is determined by subtracting the
heat of vaporization of water from the higher heating value. It treats any H2O formed as vapor, and the
energy required to vaporize water is not released as heat. Natural gas prices are decided on the basis of
GCV and NCV.
LHV = HHV - Hv × (nH2O,out / nfuel,in)
where Hv is the heat of vaporization of water, nH2O,out is the number of moles of water vaporized, and nfuel,in is the number of moles of fuel combusted.
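As a worked example of the relation, take methane (the numerical values are approximate textbook figures assumed for illustration): CH4 + 2 O2 -> CO2 + 2 H2O, so two moles of water form per mole of fuel.

```python
# Approximate values at 25 degC, for illustration only.
HHV = 890.0        # kJ/mol, higher heating value of methane
Hv = 44.0          # kJ/mol, heat of vaporization of water
n_h2o_out = 2      # mol of water formed per mol of fuel combusted
n_fuel_in = 1      # mol of fuel combusted

# LHV = HHV - Hv * (n_h2o_out / n_fuel_in)
LHV = HHV - Hv * n_h2o_out / n_fuel_in   # 802.0 kJ/mol
```

This is close to the commonly cited LHV of methane, around 802 kJ/mol.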
Most applications that burn fuel produce water vapor, which is unused and thus wastes its heat content.
In such applications, the lower heating value must be used to give a 'benchmark' for the process.
However, for true energy calculations in some specific cases, the higher heating value is correct. This is
particularly relevant for natural gas, whose high hydrogen content produces much water when it is burned
in condensing boilers and power plants with flue-gas condensation that condenses the water vapor
produced by combustion, recovering heat which would otherwise be wasted.
Hydro-Electricity
Hydroelectricity is the application of hydropower to generate electricity. The output power of a hydroelectric turbine is calculated using the following formula:
P = η × ρ × g × Q × H
Where,
P = Electric power in W
η = Global efficiency ratio
ρ = Density of water (kg/m3)
g = Acceleration due to gravity (m/s²)
Q = Flow rate in the pipe (m3/s)
H = Waterfall height, or head (m)
For example, the power output of a turbine that is 85% efficient, with a flow rate of 80 cubic meters per
second (2800 cubic feet per second) and a head of 145 meters (480 feet), is 97 Megawatts.
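The 97 MW figure can be checked directly; a short Python sketch (assuming fresh water at 1000 kg/m3 and g = 9.81 m/s²):

```python
def hydro_power_w(efficiency: float, flow_m3_s: float, head_m: float,
                  rho: float = 1000.0, g: float = 9.81) -> float:
    """Turbine output P = eta * rho * g * Q * H, in watts."""
    return efficiency * rho * g * flow_m3_s * head_m

p = hydro_power_w(efficiency=0.85, flow_m3_s=80, head_m=145)
# about 9.67e7 W, i.e. roughly 97 MW
```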
Tidal Energy
Tidal forces are periodic variations in gravitational attraction exerted by celestial bodies. A spring tide is a tide in which the difference between high and low tide is the greatest. It occurs when the moon, the sun, and the Earth are aligned. When this is the case, their collective gravitational pull on the Earth's water is strengthened.
A neap tide is a tide in which the difference between high and low tide is the least. It occurs twice a month
when the sun and moon are at right angles to the Earth. When this is the case, their total gravitational
pull on the Earth's water is weakened because it comes from two different directions.
The time between two successive high tides (or between two successive low tides) is about 12 hours 25 minutes, and the interval between a high tide and the following low tide is about 6 hours 12.5 minutes.
Tidal energy generation is achieved using a Tidal Barrage or a Tidal Stream Generator. The latter makes use of the kinetic energy of moving water to power turbines, in a similar way to wind turbines that use the wind; some tidal stream generators can be built into the structures of existing bridges or are entirely submerged, thus avoiding concerns over the impact on the natural landscape. The former makes use of the potential energy difference between high and low tides. When using tidal barrages to generate power, the potential energy of a tide is captured through the strategic placement of specialized dams: when the sea level rises and the tide begins to come in, the water is channeled into a large basin behind the dam, holding a large amount of potential energy. With the receding tide, this energy is converted into mechanical energy as the water is released through large turbines that create electrical power through the use of generators. As of March 2017, India
announced that stretches of its 7,500 km long coastline where the high tide rises more than 5 meters above the low tide can potentially be harnessed for tidal power. The Ministry of New and Renewable Energy estimated that the country can produce 7,000 MW of power in the Gulf of Khambhat in Gujarat, 1,200 MW in the Gulf of Kutch in Gujarat, and about 100 MW in the Gangetic delta of the Sundarbans in West Bengal.
Wind Power
Wind power, a form of energy conversion in which turbines convert the kinetic energy of wind into
mechanical or electrical energy that can be used for power. The set-up consists of many individual wind
turbines that are connected to the electric power transmission network. The onshore wind farm is an
inexpensive source of electric power, but it is having environmental impacts such as habitat loss as they
are constructed in wild and rural areas.
Wind resources are calculated based on the average wind speed and the distribution of wind speed values
occurring within a particular area. Areas are generally grouped into wind power classes that range from
1 to 7. A wind power class of 3 or above (or a mean wind of 5.1–5.6 m/s) is suitable for utility-scale wind
power generation.
Wind Power Classification
Class    Wind speed (m/s)    Resource potential
6        6.4-7.0             Outstanding
7        7.0-9.4             Superb
The kinetic energy of a mass m of air moving at speed v is E = ½mv². The volume of air passing in time t through an area A (taken perpendicular to the direction of the wind) is Avt, so its mass is m = Avtρ and

E = ½(Avtρ)v²

where ρ is the density of air and ½ρv² is the kinetic energy of the moving air per unit volume. Power is energy per unit time, so the wind power incident on A (e.g. A equal to the rotor area of a wind turbine) is:

P = E/t = ½ρAv³

Wind power is proportional to the third power of the wind speed; the available power increases eightfold when the wind speed doubles.
The power coefficient Cp measures the fraction of the available wind power extracted by a turbine:

Cp = PT / PWind

The Betz limit gives the maximum possible Cp = 16/27 ≈ 0.593, so about 59% is the best efficiency at which a conventional wind turbine can extract power from the wind.
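The cubic dependence of wind power on speed and the Betz limit can be illustrated with a short sketch; the rotor diameter and wind speeds below are hypothetical, chosen only for illustration:

```python
import math

RHO_AIR = 1.225  # kg/m^3, sea-level air density

def wind_power(speed, rotor_diameter):
    """Wind power incident on the rotor disc: P = 0.5 * rho * A * v^3 (W)."""
    area = math.pi * (rotor_diameter / 2.0) ** 2
    return 0.5 * RHO_AIR * area * speed ** 3

BETZ_LIMIT = 16.0 / 27.0  # maximum extractable fraction, ~0.593

p5 = wind_power(5.0, rotor_diameter=100.0)
p10 = wind_power(10.0, rotor_diameter=100.0)
print(p10 / p5)                          # ~8: doubling the speed gives 8x power
print(round(p10 * BETZ_LIMIT / 1e6, 2))  # best-case extractable power, in MW
```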
Geothermal Energy
Heat is produced in Earth's interior mainly by the radioactive decay of potassium, thorium, and uranium in the crust and mantle, and by friction along the margins of continental plates. Dry steam is the oldest form of geothermal technology; it takes steam out of the ground and uses it to directly drive a turbine.
Flash plants pump hot fluids under high pressure into a tank at the surface held at a much lower pressure,
causing some of the fluid to rapidly vaporize, or "flash." Geothermal power is cost-effective, reliable,
sustainable, and environmentally friendly, but has historically been limited to areas having a recent
volcanic activity or those near tectonic plate boundaries.
Solar Energy
Solar radiation can be converted either into thermal energy (heat) or into electrical energy. The former
is achieved using solar thermal collectors.
The efficiency of a PV cell indicates how much electrical power a cell can produce for a given solar
irradiance. The basic expression for maximum efficiency of a photovoltaic cell is given by the ratio of
output power to the incident solar power (radiation flux times area). The actual efficiency is influenced
by temperature, irradiance, and spectrum.
Another important parameter used to determine solar cell performance is the Fill Factor (FF). It is the ratio of the actual maximum obtainable power (Pmax) to the product of the open-circuit voltage (Voc) and the short-circuit current (Isc):

FF = Pmax / (Voc × Isc)

This is a key parameter in evaluating the performance of solar cells.
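A minimal sketch of the fill-factor and efficiency calculations; the cell parameters below (Voc = 0.6 V, Isc = 9 A, Pmax = 4.32 W, a 0.024 m² cell under 1000 W/m² irradiance) are hypothetical illustration values, not data for any real cell:

```python
# FF = Pmax / (Voc * Isc); cell efficiency = Pmax / (irradiance * area).
# All numeric values here are hypothetical, chosen for round results.
def fill_factor(p_max, v_oc, i_sc):
    return p_max / (v_oc * i_sc)

def cell_efficiency(p_max, irradiance, area):
    return p_max / (irradiance * area)

ff = fill_factor(p_max=4.32, v_oc=0.6, i_sc=9.0)
eta = cell_efficiency(p_max=4.32, irradiance=1000.0, area=0.024)
print(round(ff, 2))   # 0.8
print(round(eta, 2))  # 0.18
```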
The most important part of a photovoltaic cell is the specially treated semiconductor comprising two distinct layers (p-type and n-type). The minimum energy required by a photon to remove an electron from the crystal structure is known as the band-gap energy. A photon with less energy than the band gap cannot free an electron, while photon energy in excess of the band gap is dissipated as heat. The theoretical efficiency of silicon PV cells is about 33%. The most commonly used material for commercial solar cells is silicon (Si); others include gallium arsenide (GaAs), aluminium gallium arsenide (AlGaAs), and cadmium telluride (CdTe).
Solar Ponds
A solar pond is an artificially constructed large scale solar thermal collector. The depth of the pond is
usually between 1 to 4 meters. The bottom of the pond is generally lined with a durable plastic liner.
Salts like magnesium chloride, sodium chloride or sodium nitrate are dissolved in the pond water, and
the concentration is densest at the bottom (20% to 30%) and gradually decreasing to almost zero at the
top. A typical solar pond consists of three zones with a varying salt gradient.
1. The Upper Convective Zone, typically 0.3 m thick, has low salinity and is almost at ambient temperature. The layer is kept as thin as possible by using wave-suppressing mesh or by placing windbreaks near the ponds.
2. Non-Convective Zone is much thicker and occupies more than half the depth of the pond.
3. The lower convective zone has the densest salt concentration and serves as the heat storage zone. It is
almost as thick as the middle non-convective zone. Salt concentration and temperatures are nearly
constant in this zone.
(Figure: Solar Pond)
When solar radiation strikes the pond, most of it is absorbed at the bottom surface of the pond. The temperature of the dense salt layer at the bottom therefore increases. If the pond contained no salt, the bottom
layer would be less dense than the top layer as the heated water expands. The less-dense layer would
then rise up and the layers would mix. But the salt density difference keeps the layers of the solar pond
separate. The denser salt water at the bottom prevents the heat from being transferred to the top layer of
freshwater by natural convection, due to which the temperature of the lower layer may rise to as much
as 95°C.
India was the first Asian country to have established a solar pond in Bhuj, in Gujarat. The project was
sanctioned under the National Solar Pond Programme by the Ministry of Non-Conventional Energy
Sources in 1987 and completed in 1993.
Nuclear Energy
Nuclear power can be obtained from nuclear fission, nuclear decay, and nuclear fusion reactions.
Presently, the vast majority of electricity from nuclear power is produced by nuclear fission of uranium
and plutonium. Nuclear fission is the splitting of the nucleus of an atom into two smaller nuclei. The
Liquid Drop model can be used to estimate the energy released in fission. For 235U it is about 200 MeV
per nucleus and it mostly comes from the Coulomb energy.
The fusion of deuterium with tritium results in creating helium-4, freeing a neutron, and releasing 17.59
MeV as the kinetic energy of the products while a corresponding amount of mass disappears.
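To put the ~200 MeV per fission figure in perspective, a back-of-the-envelope conversion (assuming ~200 MeV per 235U fission and using Avogadro's number) gives the energy released per kilogram of fuel fissioned:

```python
# Energy density of 235U fission, assuming ~200 MeV released per nucleus.
AVOGADRO = 6.022e23      # nuclei per mole
EV_TO_J = 1.602e-19      # joules per electron-volt
MOLAR_MASS_U235 = 235.0  # g/mol

energy_per_fission = 200e6 * EV_TO_J              # ~3.2e-11 J per nucleus
nuclei_per_kg = AVOGADRO / MOLAR_MASS_U235 * 1000  # ~2.56e24 nuclei per kg

energy_per_kg = energy_per_fission * nuclei_per_kg
print(f"{energy_per_kg:.2e} J/kg")  # ~8.2e13 J/kg, about 82 TJ per kg
```

That is roughly a million times the energy density of a chemical fuel such as methane.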
Nuclear fuel
Nuclear fuel is a material that can be consumed by nuclear fission or fusion to derive nuclear energy. The only naturally occurring fissionable nuclear fuel is uranium, which contains 238U (99.3%), 235U (0.7%), and trace amounts of 234U. Other fissionable materials, such as 239Pu and 233U, can be produced artificially from 238U and 232Th, respectively.
The uranium is extracted from open-pit or in-situ leach mines. It is then converted into a stable and
compact form known as "yellowcake" for the ease of transport. In the next step, the yellowcake is
converted to uranium hexafluoride for enriching it with 3-5% uranium-235. Finally, the enriched uranium
is converted back to the ceramic uranium oxide (UOx) form and shaped into rods.
In modern light-water reactors the fuel rods will typically spend 3 operational cycles (about 6 years) inside
the reactor, generally, until about 3% of the uranium has been fissioned. Afterward, they will be moved
to a spent fuel pool which provides cooling for the thermal energy and shielding for ionizing radiation.
After about 5 years in a spent fuel pool, the fuel becomes radioactively and thermally cool enough to
handle and can be moved to dry storage casks or reprocessed. As opposed to light water reactors that
use uranium-235 (0.7% of all-natural uranium), fast breeder reactors use uranium-238 (99.3% of all-
natural uranium) or thorium.
1. Moderator
The material in the core which slows down the neutrons released from fission. It is usually ordinary (light) water, but may be heavy water or graphite.
2. Control rods
These are made with neutron-absorbing material such as cadmium, hafnium or boron, and are inserted
or withdrawn from the core to control the rate of reaction.
3. Coolant
A fluid circulating through the core to transfer heat away from it. In light-water reactors the water moderator also functions as the primary coolant. In a Pressurized Water Reactor (PWR), a separate secondary circuit carries the heat to the turbine as steam, whereas in a Boiling Water Reactor (BWR) the primary coolant itself boils into steam in the core.
4. Containment
The structure around the reactor and associated steam generators that are designed to protect it from
outside intrusion and to protect those who live outside from the effects of radiation in case of any serious
malfunction inside. It is typically a meter-thick concrete and steel structure.
2. Advanced gas-cooled reactor (AGR)
In the AGR, carbon dioxide coolant carries heat to steam generators that sit outside the reactor core but still within the steel-lined, reinforced concrete pressure vessel. The AGR was designed to have a high thermal efficiency of about 41%, which is better than that of modern pressurized water reactors, whose typical thermal efficiency is 34%.
3. Pressurized heavy water reactor (PHWR)
The PHWR reactor generally uses natural uranium (0.7% U-235) oxide as fuel, hence it requires a more
efficient moderator such as heavy water (D2O). It also produces more energy per kilogram of mined
uranium than other designs. The pressure tube is designed in such a way that the reactor can be refueled
progressively without shutting down.
5. Light water graphite-moderated reactor/ High Power Channel Reactor/ RBMK
This is a Soviet design, developed from plutonium production reactors. Their main attraction is the use
of light water and unenriched uranium. It employs long (7 meters) vertical pressure tubes running
through graphite moderator, and is cooled by water. The water is allowed to boil in the core at 290°C.
The fuel pellets are made of uranium dioxide powder and are loaded into assemblies of nuclear fuel
rods.
In RBMKs, the generation of steam in the coolant water creates a void, a bubble that does not absorb neutrons. The resulting reduction in moderation by the light water is irrelevant, as the graphite still moderates the neutrons; however, the loss of absorption dramatically alters the balance of neutron production, causing a runaway condition in which more and more neutrons are produced and their density grows exponentially. Such a condition is called a positive void coefficient, and the RBMK has the highest positive void coefficient of any commercial reactor ever designed. A series of critical safety flaws
have also been identified with the RBMK design, though some of these were corrected following the
Chernobyl disaster.
6. Fast neutron reactors (FNR)
A fast-neutron reactor (FNR) is a category of the nuclear reactor in which the fission chain reaction is
sustained by fast neutrons. Such a reactor needs no neutron moderator but requires fuel (plutonium-
239) that is relatively rich in fissile material when compared to that required for a thermal-neutron
reactor.
A breeder reactor is a nuclear reactor capable of generating more fissile material than it consumes. These
devices are able to achieve this feat because their neutron economy is high enough to breed more fissile
fuel than they use from fertile material like uranium-238 or thorium-232. Interest in breeders declined
after the 1960s as more uranium reserves were found and new methods of uranium enrichment reduced fuel costs. A fast breeder reactor (FBR) uses fast (unmoderated) neutrons to breed fissile plutonium and possibly higher transuranics from fertile uranium-238. The fast spectrum is flexible enough that it can also breed fissile uranium-233 from thorium.
Electricity Generation Efficiencies
Electric power plant efficiency (η) is defined as the ratio between the useful electricity output from the
generating unit, in a specific time, and the energy value of the energy source supplied to the unit in the
same time period. For electricity generation based on steam turbines, about 65% of all primary energy is wasted as heat. The maximum theoretical efficiency is defined in more detail by the Rankine cycle.
ENVIRONMENTAL BIOLOGY
Speciation is an evolutionary process by which populations evolve to become distinct species. There are
four geographic modes of speciation in nature based on the extent to which speciating populations are
isolated from one another.
1. Allopatric Speciation
Allopatric Speciation occurs as a result of geographic isolation of populations due to habitat
fragmentation (e.g. mountain formation). The isolated populations then undergo genotypic or
phenotypic divergence as (a) they become subjected to dissimilar selective pressures; (b) they
independently undergo genetic drift; (c) different mutations arise in the two populations. When the
populations come back into contact, they have evolved such that they are reproductively isolated and are
no longer capable of exchanging genes.
E.g. Darwin's finches, which may have speciated allopatrically because of volcanic eruptions that divided
populations.
(Figure: Modes of Speciation)
2. Peripatric Speciation
This is a subform of allopatric speciation. The new species are formed in isolated, smaller peripheral
populations that are prevented from exchanging genes with the main population. Founder effect and
Genetic drift are considered as important driving forces of peripatric speciation. Case studies include
Mayr's investigation of bird fauna; the Australian bird Petroica multicolor, and reproductive isolation in
populations of Drosophila subject to population bottlenecks.
3. Parapatric Speciation
Partial geographical separation of a population often leads to the parapatric type of speciation. Individuals of each species may come into contact or cross habitats from time to time, but reduced fitness of the heterozygote leads to selection for behaviors or mechanisms that prevent interbreeding. Ring species such as Larus gulls have been claimed to illustrate parapatric speciation.
4. Sympatric Speciation
Sympatric Speciation occurs due to reproductive isolation within a single population without geographic
isolation. The new species are generally the descendants of a single ancestral species which occupied the
same geographic location. The best-known example of sympatric speciation is that of the Cichlids of East
Africa inhabiting the Rift Valley lakes.
Energy loss also occurs through the expulsion of undigested food (egesta) by excretion or regurgitation.
The carnivores (i.e. secondary consumer) and omnivores then consume the primary consumers, with
some energy passed on and some lost. Tertiary consumers, which may or may not be apex predators,
then consume the secondary consumers. Finally, the decomposers, such as Saprotrophic bacteria and
fungi break down the organic matter of the dead tertiary consumers and release nutrients into the soil.
The unidirectional flow of energy through various trophic levels in an ecosystem can be explained with
the help of various energy flow models.
Ecological Succession
The term ‘Ecological succession’ is defined as the process of change in the species structure of an
ecological community over time. The time scale can be decades (e.g. after a wildfire), or even millions
of years after a mass extinction. Succession that begins at new habitats, such as newly exposed rock or
sand surfaces, lava flows, newly exposed glacial tills, etc., uninfluenced by pre-existing communities is
called primary succession. The stages of primary succession include pioneer microorganisms, plants
(lichens and mosses), grassy stage, smaller shrubs, and trees (i.e. climax community).
Succession that follows the disruption of a pre-existing community is called secondary succession. The
dynamics of secondary succession are strongly influenced by pre-disturbance conditions, such as soil
development, seed banks, remaining organic matter, and residual living organisms. The succession of
micro-organisms including fungi and bacteria occurring within a microhabitat is known as
microsuccession or serule.
Hydrarch Succession
Plant succession that starts in relatively shallow water, such as ponds and lakes, and culminates in a mature forest is called hydrarch succession. A hydrosere is a hydrarch succession which starts in freshwater ecosystems, such as ponds, pools, lakes, and marshes. When the succession starts in a saline water ecosystem, it is termed a halosere.
The whole process of hydrosere succession is subdivided into a number of sub-stages:
1. Phytoplankton stage
This is the initial stage of hydrosere succession, which takes place in ponds devoid of nutrients and life forms. Initially, phytoplankton consisting of microscopic algae begin to multiply, and they quickly form the pioneer community. The death and decomposition of these organisms enrich the aquatic habitat with minerals.
2. Submerged stage
Rooted and submerged hydrophytes, such as Hydrilla, Potamogeton, and Vallisneria start to grow in the
pond which is now shallower and rich in nutrients. These plants grow at various depths mostly rooted in
the muddy or sandy bottom depending on the species and also on the clearness or turbidity of water.
3. Floating stage
In the beginning, the submerged and floating plants grow intermingled, but as the succession progresses
the submerged plants are replaced entirely. Their broad leaves floating on the water surface check the
penetration of light to the deeper layer of water. The most tolerant species in the area are able to
reproduce and perpetuate. This may be one of the main causes responsible for the death of submerged
plants. Some of the important floating plants are Nelumbum, Trapa, Pistia, Nymphaea, Limnanthemum, etc.
4. Reed-swamp stage
When the ponds and lakes become too shallow, the habitat becomes less suited to the floating plants.
Under these conditions floating plants are gradually replaced by amphibious plants, such as Bothrioclova,
Typha, Phragmites (Reed), etc. The plants of the swamp stage transpire huge quantities of water and also
produce abundant organic matter.
5. Sedge Marsh or Meadow stage
The newly formed marshy soil will be too dry for the plants of the pre-existing community to thrive. Now
the plants well adapted to the new habitat begin to appear. Important plants that are well suited to marshy
habitat are members of the Cyperaceae and Gramineae: species of sedge (Carex) and rush (Juncus), and species of Themeda, Iris, Dichanthium, Eriophorum, Cymbopogon, Teucrium, Cicuta, etc.
6. Woodland stage
The woodland plant community produces more shade and absorbs and transpires large quantities of water. Some of the prominent members of this group are Butea, Acacia, Cassia, Terminalia, Salix, Cephalanthus, etc.
7. Climax forest
In the climax stage of hydrarch succession, the level of the soil rises much above the water level by
progressive accumulation of humus and soil particles. In the newly formed dry and well-aerated habitat, a self-maintaining and self-reproducing plant community consisting mostly of woody trees develops in the form of a mesophytic forest.
Xerarch Succession
The succession initiated with the establishment of pioneer communities in dry areas is called Xerosere.
When succession takes place on bare rocks it is called Lithosere and when it occurs in a sandy area, like
sand dunes, the succession is called Psammosere. The stages involved in xerarch succession are:
1. Crustose lichen stage
Crustose lichens (e.g., Rhizocarpon and Lecidea) are the pioneer colonizers on bare rock surfaces. These
plants grow only when water becomes available in the habitat. They occur in the form of membranous
crusts and during the dry season, though they appear to be desiccated, remain alive. Their spongy nature
helps them absorb excess amounts of water and minerals. The lichens also secrete carbonic acid which
causes weathering of the rock and subsequent formation of a thin layer of soil on the rock surface.
2. Foliose Lichens Stage
Now the habitat is suitable for the growth of foliose and fruticose lichens (e.g., Dermatocarpon, Parmelia,
Umbilicaria, etc.). They also secrete carbonic acid which further pulverizes or loosens the rocks into
small particles.
3. Moss Stage
The accumulation of enough rock particles, humus, and moisture creates a condition suitable for the
growth of xerophytic mosses. They cover the previous lichens and successfully compete with them for
water and mineral nutrients. Like lichens, mosses are also adapted to survive in extreme drought. They
develop rhizoids which can penetrate deep into the rocky soil. Eventually, the habitat becomes wetter as
a result of the formation of a thick mat by decaying older parts of mosses.
4. Herbs Stage
As the soil thickness increases, the herbaceous vegetation, which consists mainly of annual and perennial
herbs, develops rapidly. The increased moisture content of the soil further enhances the growth of herbs.
5. Shrub Stage
Xerophytic shrubs gradually replace the herbaceous vegetation and establish a community. The roots of shrubs also reach the surface of unpulverized rock and corrode sufficient quantities of rock particles, which adds to the soil mass. Decaying leaves, twigs, and roots of these shrubs also enrich the soil with humus.
6. Forest Stage
The shade-tolerant seedlings of mesophytic trees grow densely and become dominant. Under the shade
of these trees, some shade loving herbs and shrubs also flourish. Complete harmony is developed among
various plant communities and their environment. This climax stage remains unchanged unless disturbed
by some major environmental changes.
Mechanisms of Succession
1. Nudation
Initiation of ecological succession on a bare site that is usually formed after a major environmental
disturbance such as volcanic eruption or wildfire.
2. Migration
Migration refers to the arrival of propagules such as seeds or spores to the bare site through mediums
like air and water.
3. Ecesis
Ecesis is the successful establishment of species on the harsh environmental conditions of the bare site.
4. Aggregation
After ecesis, the individuals of the same species increase their number as they stay close to one another.
5. Competition
As vegetation becomes well established, the various species in different communities begin to compete for space, light, water, and nutrients.
6. Reaction
It is the most important stage in ecological succession: the living organisms themselves modify the surrounding environment. Due to these modifications, the present community becomes unsuited to the changed environmental conditions and is entirely replaced by another. The whole sequence of communities replacing one another in a given area is called a sere. A seral community is an intermediate stage found in ecological succession in an ecosystem advancing towards its climax community. In many cases, more than one seral stage evolves until climax conditions are attained.
7. Stabilization
The final stage in ecological succession in which the climax community maintains equilibrium with the
prevailing environmental conditions for longer periods.
Species Diversity
Species diversity is the number of different species that are represented in a given community. It consists
of three components: species richness, taxonomic or phylogenetic diversity, and species evenness.
1. Species richness
Species richness is simply a count of species, i.e. the number of different species represented in an ecological community, landscape, or region. Because species richness does not take the abundances of the species into account, it is not the same thing as diversity, which does. It is a popular diversity index in ecology because it can be used when abundance data are not available for the datasets of interest.
2. Phylogenetic diversity
It is a measure of biodiversity which incorporates phylogenetic difference between species. The
phylogenetic dimension of biodiversity reflects evolutionary differences among species based on times
since divergence from a common ancestor and represents a comprehensive estimate of phylogenetically
conserved ecological and phenotypic differences among species.
3. Species evenness
Species evenness is a diversity index that quantifies how close in numbers each species in an environment
is. The evenness of a community is mathematically represented by Pielou's evenness index:

J' = H' / H'max

where H' is the value of the Shannon diversity index and H'max is its maximum possible value (equal to ln S, where S is the total number of species).
Diversity Indices
Diversity indices are used for the mathematical measurement of species diversity in a given community, based on species richness (the number of species present) and species abundance (the number of individuals per species). Two types of diversity indices are used in ecological calculations: dominance indices and information statistic indices.
1. The Simpson index (D) is a dominance index because it gives more weight to common or dominant species; a few rare species with only a few representatives will not greatly affect it. It takes into account the number of species present, as well as the abundance of each species. There are two different formulae for calculating D; either is acceptable, but use one consistently:

D = Σ n(n − 1) / [N(N − 1)]   or   D = Σ (n/N)²

where n is the number of individuals of a particular species and N is the total number of individuals of all species. Here 0 represents infinite diversity and 1 no diversity; that is, the higher the value of D, the lower the diversity. This is not intuitive, so D is often subtracted from 1 (Simpson's index of diversity, 1 − D), or the reciprocal of the index is taken (Simpson's reciprocal index, 1/D).

Species                      n     n(n − 1)
Syzygium aromaticum          1     0
Bambusa bambos               1     0
Artocarpus heterophyllus     3     6
Total (N)                    15    64

D = 64/210
Simpson's index D ≈ 0.3
The values D, 1 − D, and 1/D all quantify the same biodiversity in different ways, so it is important to be clear about which index is actually being used in comparative studies.
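The arithmetic can be sketched in Python. Only three of the species rows survive in the table above; the abundances 8 and 2 below are hypothetical stand-ins for the missing rows, chosen solely so that the totals match N = 15 and Σn(n − 1) = 64:

```python
# Simpson's dominance index: D = sum(n*(n-1)) / (N*(N-1)).
# Abundances 1, 1, 3 are as listed in the table; 8 and 2 are
# hypothetical values for the two species rows lost from the text.
abundances = [1, 1, 3, 8, 2]

N = sum(abundances)                                        # 15
D = sum(n * (n - 1) for n in abundances) / (N * (N - 1))   # 64/210

print(round(D, 2))      # 0.3
print(round(1 - D, 2))  # 0.7  (Simpson's index of diversity)
print(round(1 / D, 2))  # 3.28 (Simpson's reciprocal index)
```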
2. The Shannon index (H) is an information statistic index; it assumes that all species are represented in a sample and that they are randomly sampled. It takes into account the number of species present, as well as the abundance of each species:

H = −Σ pi ln pi

where pi is the proportion (n/N) of individuals of one particular species (n) relative to the total number of individuals found (N), ln is the natural log, and Σ is the sum over all s species present.
Typical values of H are generally between 1.5 and 3.5 in most ecological studies, and the index is rarely
greater than 4. The Shannon index increases as both the richness and the evenness of the community
increase. Even the rare species with one individual contribute some value to the Shannon index.
Species                      Abundance (n)    pi ln pi
Lantana camara               50               −0.347
Adhatoda vasica              30               −0.361
Syzygium aromaticum          10               −0.230
Bambusa bambos               9                −0.217
Artocarpus heterophyllus     1                −0.046
Total (N)                    100              −1.201

Putting the values into the formula for the Shannon index, H = 1.201.
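The Shannon calculation for the table above, together with Pielou's evenness from the earlier section, can be sketched as:

```python
import math

# Shannon index H = -sum(p_i * ln(p_i)) for the abundances in the table.
abundances = [50, 30, 10, 9, 1]
N = sum(abundances)

H = -sum((n / N) * math.log(n / N) for n in abundances)
print(round(H, 3))  # 1.201

# Pielou's evenness J' = H / H_max, where H_max = ln(s) for s species.
J = H / math.log(len(abundances))
print(round(J, 2))  # 0.75
```

H well below its maximum of ln 5 ≈ 1.61 reflects the dominance of the first two species.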
Alpha diversity (α-diversity) is the diversity of a region or a local species pool. In other words, alpha
diversity is the number of species found in a particular area or ecosystem.
Beta diversity (β-diversity) is defined as the difference in species composition between communities and
is closely related to many facets of ecology and evolutionary biology. Simply, it is the ratio between
regional and local species diversity.
Gamma diversity (γ-diversity) is the total species diversity in a landscape (γ) and is determined by two
different things: the mean species diversity in sites or habitats at a more local scale (α) and the
differentiation among those habitats (β).
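Under Whittaker's multiplicative definition (β = γ / mean α), the relationship between the three levels can be illustrated with two sites; the species lists below are invented purely for illustration:

```python
# Whittaker's beta diversity: beta = gamma / mean(alpha).
# The two site species lists are hypothetical.
site_a = {"Lantana camara", "Adhatoda vasica", "Syzygium aromaticum"}
site_b = {"Syzygium aromaticum", "Adhatoda vasica",
          "Bambusa bambos", "Artocarpus heterophyllus"}

gamma = len(site_a | site_b)               # regional richness: 5 species
alpha = (len(site_a) + len(site_b)) / 2.0  # mean local richness: 3.5
beta = gamma / alpha

print(gamma, alpha, round(beta, 2))  # 5 3.5 1.43
```

A beta value near 1 means the sites share most species; higher values indicate greater turnover between habitats.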
Biodiversity Gradient
Biodiversity gradient or Latitudinal Gradients of Biodiversity are biogeographic patterns that quantify the
ways in which taxonomic, phylogenetic, functional, genetic, or phenetic biodiversity changes with the
latitudinal position on the surface of the earth. In general, species richness increases from the poles toward the equator, i.e. it decreases with increasing latitude. A few groups contradict this pattern, including parasites, aquatic flora, North American wasps, and marine birds. There are
several theories to explain why species diversity is greater in the tropics than in cooler regions. Almost
90% of this variation can be explained by four variables: seasonal temperature, precipitation,
evapotranspiration, and elevation. Besides, the relative environmental stability of the tropics also enables
species to specialize to a greater extent.
Rapoport's Rule
The rule states that latitudinal ranges of plants and animals are generally smaller at lower latitudes than
at higher latitudes. According to this theory, species can have narrower tolerances in more stable climates,
leading to smaller ranges and allowing the coexistence of more species.
“50/500” rule proposed by Australian geneticist Ian Franklin suggests that a minimum population size of
50 is necessary to combat inbreeding and a minimum of 500 individuals are needed to reduce genetic
drift.
In general, species with high reproductive capacities, known as r-selected species (such as arthropods and rodents), tend to have lower MVPs than species with low reproductive capacities (K-selected species), mainly because K-selected species do not breed until individuals are several years old. An MVP of
500 to 1,000 has often been given as an average for terrestrial vertebrates when inbreeding or genetic
variability is ignored.
The authors of island biogeography theory have also predicted that larger islands will have more species
than smaller islands (assuming these islands are comparably isolated), and isolated islands will have fewer
species than islands closer to source regions (assuming these islands are equally large). Thus, the more
isolated islands are likely to have a greater number of endemic species and low gene flow and connectivity
with surrounding islands.
The relationship between species richness and area is well known in ecology and is represented as

S = cA^z

where S is species richness, A is area, and c and z are constants. The equation is often presented in its logarithmic form:

log10 S = log10 c + z log10 A
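A small sketch of S = cA^z; the constants c = 10 and z = 0.25 are illustrative choices (values of z around 0.2 to 0.35 are commonly reported for islands):

```python
import math

# Species-area relationship S = c * A**z (c and z are illustrative).
def species_richness(area, c=10.0, z=0.25):
    return c * area ** z

s_small = species_richness(16.0)    # 10 * 16**0.25 = 20.0
s_large = species_richness(256.0)   # 10 * 256**0.25 = 40.0
print(s_small, s_large)  # a 16-fold increase in area only doubles richness

# Equivalent logarithmic form: log10 S = log10 c + z * log10 A
lhs = math.log10(s_small)
rhs = math.log10(10.0) + 0.25 * math.log10(16.0)
print(math.isclose(lhs, rhs))  # True
```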
Extinct (Ex)
A taxon is Extinct when there is no reasonable doubt that the last individual has died. A taxon is
presumed Extinct when exhaustive surveys in known and/or expected habitat, at appropriate times
(diurnal, seasonal, annual), throughout its historic range have failed to record an individual.
Endangered (EN)
The endangered species are under very high risk of extinction due to rapid population declines of 50 to
more than 70 percent over the past 10 years (or three generations), a current population size of fewer
than 250 individuals, or other factors.
Vulnerable (VU)
A designation applied to species that possess a high risk of extinction as a result of rapid population
declines (30-50%) over the past 10 years (or three generations), a current population size of fewer than
1,000 individuals, or other factors.
Data Deficient (DD)
A taxon in this category may be well studied, and its biology well known, but appropriate data on
abundance and/or distribution are lacking. Data Deficient is therefore not a category of threat.
on the tiny fragments of grassland scattered across South and Southeast Asia. The major threat to their
population is the land conversion for intensive agriculture, particularly for dry season rice production.
Poaching also continues to be a problem in Southeast Asia, while the South Asian population is down to
less than 350 adult birds, about 85% of which are found in India.
5. Jerdon's courser (Rhinoptilus bitorquatus)
Jerdon's courser (Rhinoptilus bitorquatus) is a nocturnal bird endemic to southern India. It has a very
limited geographical range around the Godavari river valley near Sironcha and Bhadrachalam, and from
the Cuddapah and Anantapur areas in the valley of the Pennar River. The construction of the Somasila
Dam led to the relocation of residents of 57 villages into the region that the courser inhabits. In
addition, extensive quarrying threatens their habitat. The scrub habitat preferred by the bird has declined
due to increased agricultural activity.
Biodiversity hotspot
A biodiversity hotspot is a biogeographic region with significant levels of biodiversity that is threatened
by human habitation. To qualify as a hotspot, a region must contain at least 0.5% or 1,500 species of
vascular plants as endemics, and it has to have lost at least 70% of its primary vegetation. 36 biodiversity
hotspots have been so far identified around the globe. Some of these hotspots support up to 15,000
endemic plant species and some have lost up to 95% of their natural habitat. India's territory overlaps
four biodiversity hotspots, viz. the Eastern Himalaya, Western Ghats, Indo-Burma, and Sundaland.
Ecotone
An ecotone is a transition zone between two biological communities. Habitat transitions for animals in
aquatic-terrestrial ecotones are more dramatic than for animals in purely terrestrial ecotones because the
habitats vary more substantially in a set of characteristics, such as temperature, oxygen content, chemical
conditions (e.g., pH, salinity, osmotic/ionic state, etc.), physical support for the animal (buoyancy), and predators.
The riparian zone is a major transitional zone for all types of aquatic systems.
The edge effect is defined as the influence of the two bordering communities on each other. It results in
greater species diversity and density at the transition zone than in adjoining communities.
Ecological Niche
The ecological niche is a term for the position of a species within an ecosystem. It describes both the
range of conditions necessary for the persistence of the species and its ecological role in the ecosystem.
It also describes how an organism or population responds to the distribution of resources and
competitors. A wide array of abiotic factors (e.g., soil type and climate) also influence a species’ niche.
The fundamental niche (FN) is the whole set of conditions under which an animal can survive and reproduce
itself, whereas the realized niche (RN) is the set of conditions actually used by a given animal after
interactions with other species have been taken into account.
FN ≥ RN
According to niche breadth, species are classified into two categories: specialist species (having narrow
niches) and generalist species (having broad niches).
Competitive Exclusion Principle (Gause's Principle)
Gause's principle states that two species competing for the same limited resource cannot stably coexist: if one
species has even the slightest advantage over another, the one with the advantage will dominate in the
long term. This will lead either to the extinction of the weaker competitor or to an evolutionary or
behavioral shift toward a different ecological niche. Examples of Gause's principle include Darwin's
finches and MacArthur's warblers.
The competitive exclusion principle is based on a mathematical model developed independently by Vito
Volterra and Alfred Lotka. This model is expressed in the equation for logistic population growth:
dN/dt = rN(1 − N/K)
where N is the number of individuals in a population (or the density), K is the carrying capacity of the
environment for that species, r denotes the intrinsic rate of population growth, and dN/dt represents the
change in population with time.
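A minimal numerical sketch of logistic growth. The parameter values (r, K, and the starting population) are hypothetical, chosen only for illustration:

```python
def logistic_step(N, r, K, dt=0.1):
    """One Euler step of the logistic equation dN/dt = r*N*(1 - N/K)."""
    return N + r * N * (1 - N / K) * dt

# Hypothetical parameters: r = 0.5 per year, K = 1000 individuals
N, r, K = 10.0, 0.5, 1000.0
for _ in range(400):           # simulate 40 years in steps of 0.1 year
    N = logistic_step(N, r, K)
# N grows sigmoidally and levels off at the carrying capacity K
```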
The Lotka-Volterra predator-prey model rests on the following assumptions:
1. The prey population finds ample food at all times.
2. In the absence of predators, the prey population x would grow proportionally to its size, dx/dt = αx,
α > 0. The coefficient α was named the coefficient of auto-increase.
3. In the absence of prey, the predator population y would decline proportionally to its size, dy/dt = −γy,
γ > 0.
4. When both predator and prey are present, a decline in the prey population and a growth in the
predator population will occur, each at a rate proportional to the frequency of encounters between
individuals of the two species (−βxy for prey, δxy for predators, β, δ > 0).
Prey equation
When the interaction rate is adjoined to the natural growth rate, the change in the prey's numbers is given
by its own growth minus the rate at which it is preyed upon:
dx/dt = αx − βxy
Predator equation
The predator equation expresses the change in the predator population as growth determined by the
food supply minus natural death:
dy/dt = δxy − γy
Here, δxy represents the growth of the predator population; α, β, γ, δ are positive real parameters
describing the interaction of the two species.
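The prey and predator equations can be iterated together to produce the characteristic population cycles. The parameter values, initial densities, and time step below are hypothetical, chosen only for illustration:

```python
def lv_step(x, y, alpha, beta, gamma, delta, dt):
    """One Euler step of the Lotka-Volterra predator-prey system:
        dx/dt = alpha*x - beta*x*y    (prey)
        dy/dt = delta*x*y - gamma*y   (predator)
    """
    dx = (alpha * x - beta * x * y) * dt
    dy = (delta * x * y - gamma * y) * dt
    return x + dx, y + dy

# Hypothetical parameters and initial densities
alpha, beta, gamma, delta = 1.1, 0.4, 0.4, 0.1
x, y = 10.0, 5.0
for _ in range(2000):
    x, y = lv_step(x, y, alpha, beta, gamma, delta, dt=0.005)
# The trajectory orbits the equilibrium (x*, y*) = (gamma/delta, alpha/beta)
print(f"equilibrium: prey = {gamma/delta:.2f}, predators = {alpha/beta:.2f}")
```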
In the analogous Lotka-Volterra competition model,
dN1/dt = r1N1(1 − (N1 + α12N2)/K1)
dN2/dt = r2N2(1 − (N2 + α21N1)/K2)
where α12 represents the effect species 2 has on the population of species 1 and α21 represents the effect
species 1 has on the population of species 2.
N species equation
This is a generalized equation for any number (N) of species competing against one another. The
equation for any species i becomes
dNi/dt = riNi(1 − Σj αijNj/Ki), with αii = 1.
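A small numerical sketch of the two-species competition case. All coefficients here are hypothetical; alpha[i][j] is the per-capita effect of species j+1 on species i+1:

```python
def competition_rates(N, r, K, alpha):
    """dNi/dt = r_i*N_i*(1 - sum_j alpha_ij*N_j / K_i), with alpha_ii = 1."""
    n = len(N)
    return [
        r[i] * N[i] * (1 - sum(alpha[i][j] * N[j] for j in range(n)) / K[i])
        for i in range(n)
    ]

# Two competitors with illustrative parameters
N = [50.0, 50.0]
r = [1.0, 1.0]
K = [100.0, 100.0]
alpha = [[1.0, 0.6],   # effect of species 2 on species 1 is 0.6
         [0.7, 1.0]]   # effect of species 1 on species 2 is 0.7
for _ in range(5000):  # Euler integration with dt = 0.01
    dN = competition_rates(N, r, K, alpha)
    N = [n_i + d * 0.01 for n_i, d in zip(N, dN)]
# Because alpha12 * alpha21 < 1, both species coexist below their K values
```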
Niche Differentiation
The long-term coexistence of ecologically similar species is achieved through niche differentiation (also
known as niche segregation, niche separation, and niche partitioning), a process by which competing
species use the environment differently in a way that helps them to coexist.
Many studies have shown that potential competitors show differences in patterns of resource use (i.e.
resource partitioning). For example, in a detailed study of bumblebees in the mountains of Colorado, it
was observed that different bumblebees in this area appear to be adapted to specific species of plants
having different corolla lengths in their flowers. Hence, the species having long proboscis consumed
nectar from the flowers with the long corolla, while the species with short proboscis preferred flowers
with short corolla.
The case study of character displacement in Darwin's finches of the Galapagos Islands is another classical
example of resource partitioning. A single species of seed-eating finch, which originally colonized the
Galapagos Islands, was faced with a diverse range of seed types and sizes. However, the beak of the
founding species only allowed it to eat a small subset of the available seed types and sizes. The advantages
gained by individuals that were able to exploit slightly different seed types drove the evolution of many
new species, each with different shaped beaks enabling them to specialize in a particular size of the seed.
Although the term ‘resource’ generally stands for food, species can partition other non-consumable
objects such as parts of the habitat. For example, warblers are thought to coexist because they nest in
different parts of trees.
Population Ecology
A population is a group of organisms of the same species occupying a particular space at a particular
time. Its major characteristics are:
1. Geographic Distribution
2. Density
3. Dispersion
4. Growth Rate
5. Age Structure
Carrying Capacity and Maximum Sustainable Yield (MSY)
The carrying capacity of an area represents the maximum number of individuals of a particular species
that the area can support indefinitely without degrading the environment. It is represented by K in the
logistic growth equation.
Maximum Sustainable Yield (MSY) is a well-known concept in fisheries science. MSY is defined as the
maximum catch (in numbers or mass) that can be removed from a population over an indefinite period.
The key assumption behind all sustainable harvesting models such as MSY is that populations of
organisms grow and replace themselves, that is, they are renewable resources. The simplest way to
model harvesting is to modify the logistic equation so that a certain number of individuals is continuously
removed:
dN/dt = rN(1 − N/K) − H
where H represents the number of individuals being removed from the population, i.e., the harvesting
rate. An equilibrium population size is attained when the harvest rate becomes equal to the population
growth rate.
In the logistic growth model, there is a point at which the population exhibits the maximum growth rate.
This point, at which the harvest equals the maximum sustainable yield (MSY), occurs where the
population size is half the carrying capacity (N = K/2).
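Since the logistic growth rate rN(1 − N/K) is maximal at N = K/2, the corresponding yield works out to MSY = rK/4. A one-line check with hypothetical r and K:

```python
def logistic_growth_rate(N, r, K):
    """Instantaneous growth rate dN/dt = r*N*(1 - N/K)."""
    return r * N * (1 - N / K)

r, K = 0.5, 1000.0                        # hypothetical values
MSY = r * K / 4                           # yield at N = K/2
assert MSY == logistic_growth_rate(K / 2, r, K)
# A constant harvest H <= MSY can be sustained indefinitely;
# H > MSY always exceeds growth and drives the stock to collapse.
```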
Organisms that exhibit r-selected traits range from bacteria and diatoms to insects and grasses, to
various semelparous cephalopods and small mammals, particularly rodents. K-selected organisms
include elephants, humans, and whales, but also smaller, long-lived organisms such as Arctic terns,
parrots, and eagles.
In reality, it is not surprising that many organisms cannot be classified clearly into either category. A
number of organisms adopt an intermediate strategy or even adopt different strategies depending on local
environmental conditions at any given time. For example, trees exhibit traits of K-strategists, such as
longevity and strong competitiveness. In reproduction, however, trees typically produce thousands of
offspring and disperse them widely, which is characteristic of r-strategists. Similarly, sea turtles,
organisms with a long lifespan and large size, produce a large number of unnurtured offspring. Unlike
r-selected species, K-selected species are not effective colonizers. Instead, they tend to be found in climax
communities (stable and long-established ecological communities).
The Intermediate Disturbance Hypothesis posits that intermediate levels of disturbance in a landscape
create patches at different levels of succession, promoting the coexistence of colonizers and competitors
at the regional scale.
Metapopulation Ecology
A metapopulation is a spatially structured population that persists over time as a set of local populations
with limited dispersal among them. A species with a metapopulation structure lives in a habitat made up
of patches. At a given instant, only some of the patches will be occupied. There will be limited migration
among these local populations. The persistence of metapopulations largely depends on the balance
between local extinction and colonization rates. Migration among local populations that prevents local
extinctions is called the ‘rescue effect’. A turnover event occurs when a habitat patch becomes
unoccupied through extinction and is then recolonized by individuals from other local populations.
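The balance between local extinction and colonization described here is formalized by the classical Levins metapopulation model (the model is not named in the text, and the rates below are hypothetical):

```python
def levins_step(p, c, e, dt=0.01):
    """Euler step of the Levins model dp/dt = c*p*(1 - p) - e*p,
    where p is the fraction of occupied habitat patches,
    c the colonization rate, and e the local extinction rate."""
    return p + (c * p * (1 - p) - e * p) * dt

c, e = 0.4, 0.1        # hypothetical rates; persistence requires c > e
p = 0.05               # start with 5% of patches occupied
for _ in range(5000):  # run to equilibrium
    p = levins_step(p, c, e)
# p settles at the equilibrium occupancy p* = 1 - e/c = 0.75
```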
Genetic Drift
Genetic drift is the change in the frequency of an existing gene variant in a population due to random
sampling of organisms. The alleles in the offspring are a sample of those in the parents, and chance has
a role in determining whether a given individual survives and reproduces. Genetic drift often takes place
through two mechanisms:
1. Bottleneck effect
The bottleneck effect occurs when a population's size is drastically reduced by a catastrophic event (e.g.,
a natural disaster), leaving a small surviving population with much less genetic variation than the original
population.
2. Founder effect
The founder effect occurs when a small group of individuals breaks off from a larger population to
establish a new colony. The smaller population size of colonizers results in reduced genetic variation
from the original population. The founder effect is similar in concept to the bottleneck effect, but it
occurs via a different mechanism- colonization rather than a catastrophe.
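Both mechanisms can be illustrated with a simple Wright-Fisher sketch, in which each generation's allele frequency is a binomial resample of the previous one. The population sizes and random seed are arbitrary assumptions:

```python
import random

def drift(p, pop_size, generations, seed=42):
    """Wright-Fisher genetic drift: resample the allele frequency
    from 2N gene copies each generation (N diploid individuals)."""
    rng = random.Random(seed)
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        if p in (0.0, 1.0):    # allele lost or fixed: drift stops
            break
    return p

small = drift(0.5, pop_size=10, generations=100)    # founder-sized colony
large = drift(0.5, pop_size=2000, generations=100)  # large source population
# With only 20 gene copies the small population typically drifts to loss
# or fixation, while the large one stays near the starting frequency 0.5.
```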
Biomes
Biomes are distinct biological communities that have formed in response to a shared physical climate.
They constitute the largest geographic biotic unit. The major Terrestrial biomes are:
1. Tundra
Tundra is a type of biome where the tree growth is hindered by low temperatures and short growing
seasons. The ecotone between the tundra and the forest is known as tree line or timberline. There are
three types of Tundra regions existing around the globe: Arctic tundra, alpine tundra, and Antarctic
tundra.
Arctic tundra is situated to the north of the taiga belt covering vast areas of northern Russia and Canada.
This region is characterized by permafrost (permanently frozen soil) which makes it impossible for trees
to grow. Precipitation is only about 150–250 mm per year. The biodiversity of the Arctic tundra
is low, with just 700 species of vascular plants and only 48 species of land mammals, but during summer
millions of birds migrate to the marshes in these regions. Poikilotherms such as frogs or lizards are absent
in the Arctic tundra.
A major threat to the Tundra biome is global warming and subsequent melting of permafrost. About
one-third of the world's soil-bound carbon is in taiga and tundra areas. When the permafrost melts, this
carbon is released to the atmosphere as carbon dioxide and methane, thereby augmenting the
greenhouse effect.
Antarctic tundra is found in Antarctica and on several Antarctic and sub-Antarctic islands. This continent,
too cold to support any vegetation, is mostly covered by ice fields. However, some remaining areas of
rocky soil support plant life (e.g., lichens, mosses, liverworts, etc.). Antarctic hair grass (Deschampsia
antarctica) and Antarctic pearlwort (Colobanthus quitensis) are the only flowering plants found on the
entire continent. Unlike the Arctic tundra, the Antarctic regions are devoid of large mammal fauna. Sea
mammals and sea birds including seals and penguins inhabit areas near the shore.
Alpine tundra does not have trees because the climate and soil at high altitude block tree growth. The
lack of permafrost distinguishes Alpine tundra from the rest. The soil found here is better drained than
arctic soils. The stunted forests growing at the forest-tundra ecotone are known as Krummholz.
2. Taiga
The taiga, also known as boreal forest or snow forest, is found throughout the high northern latitudes
between the tundra and the temperate forest. This biome is dominated by coniferous forests consisting
mostly of pines, spruces, and larches. It is the world's largest land biome occupying about 17 percent of
Earth’s land surface area. The taiga experiences relatively low precipitation throughout the year (generally
200–750 mm). Soil is spodosol, i.e., acidic with a strongly leached surface layer. A distinctive feature of
the flora of the taiga is the abundance and diversity of mosses, which form one-third of the ground cover.
Fire has been one of the most important factors shaping the composition and development of boreal
forest stands. The dominant fire regime in the boreal forest is high-intensity crown fires or severe surface
fires of very large size, often more than 10,000 ha, and sometimes more than 400,000 ha. Such fires can
kill entire stands.
3. Temperate forests
The temperate forest forms a belt between the tropical and boreal regions and is widely distributed in
Eastern Asia, Central, and Western Europe. It is the second-largest terrestrial biome on the planet.
Temperate forests are characterized as regions with high levels of precipitation, humidity, and a variety
of deciduous trees that shed their leaves in fall and bud new leaves in spring. This forest is further
classified into 4 subtypes:
Deciduous forests are composed mainly of broad-leaved trees, such as maple and oak, which shed all
their leaves once in a year. They are typically found in the middle-latitude regions with temperate
climates.
Coniferous forests are composed of cone-bearing needle-leaved or scale-leaved evergreen trees, such as
pine, spruce, fir, sequoia, and redwood.
Broadleaved and mixed forests, as the name implies, are composed of conifers and broadleaved trees
growing together in the same area, e.g., maple, birch, beech, poplar, elm, and pine.
Temperate rainforests are composed of evergreen trees and found only in very wet coastal areas.
4. Tropical rainforests
Tropical rainforests are the most biodiverse terrestrial biome typically found in equatorial regions. The
annual rainfall in tropical rainforests ranges from 125 to 660 cm, which often results in poor soils due to
intense leaching of soluble nutrients from the ground. Soils in the tropical regions are categorized into
ultisols and oxisols. Tropical rainforests are characterized by a vertical layering of vegetation that forms
distinct habitats for animals within each layer. The forest floor receives only 2% of the sunlight. Only
plants adapted to low light can survive in this region. The movement of large animals such as okapi, tapir,
Sumatran rhinoceros, etc., takes place on the forest floor. Buttress roots are a common feature of many
tropical rainforest tree species.
5. Deserts
Deserts cover about one-fifth of the Earth's surface and occur where rainfall is less than 50 cm/year.
Subtropical hot deserts can have daytime soil surface temperatures above 60°C and night-time
temperatures dropping down to 0°C. Most xerophytic desert plants have adopted crassulacean acid
metabolism (CAM), which allows them to open their stomata during the night for CO2 to enter and close
them during the day to reduce evapotranspiration.
In Cacti, a group of desert specialists, the leaves have been dispensed with and the chlorophyll is
displaced into the trunks, the cellular structure of which has been modified to store water. They also
possess spines for shade and waxy skin to seal in moisture. Phreatophytes are a characteristic of arid or
desert zones. These plants are adapted to arid environments by growing extremely long roots, allowing
them to acquire moisture at or near the water table. Animals adapted to live in deserts are called
xerocoles. Most of them are crepuscular or nocturnal. Some smaller desert animals burrow below the
surface of the soil or sand to escape the high temperatures at the desert surface. Kangaroo rats
manufacture water metabolically from the digestion of dry seeds. Other common adaptations seen in
desert animals include big ears, light-colored coats, humps to store fat, etc. There are four major types
of deserts on Earth:
Hot and dry deserts: The Sahara is one such desert. Others appear in Australia, South Asia, and Central
and South America.
Cold winter deserts or semiarid deserts: The average temperature ranges between 21–27°C, while the
annual rainfall is between 0.8 and 1.5 inches (about 2–4 cm). Semiarid deserts are found in North America,
Russia, northern Asia, Europe, and Greenland.
Coastal deserts occur in moderately cool to warm areas such as the Nearctic and Neotropical realm. A
good example is the Atacama of Chile.
Cold deserts are characterized by cold winters with snowfall and high overall rainfall throughout the
winter and occasionally over the summer. The largest of these deserts are the Gobi Desert in northern
China and southern Mongolia, the Taklimakan Desert in western China, the Turkestan Desert, and the
Great Basin Desert of the United States.
6. Chaparral
Chaparral is a shrub forest found in California, along the Mediterranean Sea, and along the southern
coast of Australia. It is shaped by the Mediterranean climate (mild, wet winters and hot dry summers)
and wildfire, featuring summer-drought-tolerant plants with hard sclerophyllous evergreen leaves.
Fire-return intervals range between 30 and 50+ years. Some plants produce seeds that
germinate only after a hot fire. There are two types of Chaparral plant species:
Sprouters (obligate): They have dormant meristems on buried lignotubers, and also have storage and
intact roots, which help them respond quickly.
Seeders (facultative and obligate): These plants have long-lived seeds in the seed bank or aerially. The
germination of these seeds is induced by fire.
Mature chaparral stands that have survived for greater intervals between fires are characterized by nearly
impenetrable dense thickets. These plants are highly flammable during the late summer and autumn
months.
7. Temperate grasslands
Temperate grasslands are areas of open grassy plains that are sparsely populated with trees. They are
known as prairies in North America and steppes in Eurasia. The natural factors that impact temperate
grassland biomes are tornadoes, blizzards, and fires. The major locations of Temperate grasslands are
given below.
Argentina - Pampas
Australia - Downs
Russia - Steppes
Savannas are grasslands with scattered trees, generally occurring in Africa, South America, and northern
Australia. Charismatic animals including elephant, giraffe, lion, and cheetah make their homes in the
savanna. The plants here have well-developed root systems that allow them to quickly re-sprout after a
fire.
The South American savanna type, the cerrado, has a high density of trees, similar to, or greater than,
that found in South American tropical forests. Two factors common to all savanna environments
are rainfall variations from year to year and dry season wildfires. Serengeti National Park in Tanzania is
one of the famous savannas in the world.
Precipitation and temperature are the two most important climatic variables that determine the type of
biome in a particular location. For example, tropical rainforests have high average temperatures and high
precipitation, whereas subtropical deserts have much lower precipitation and greater variation in average
temperature.
The major Aquatic biomes are:
1. Ocean
The pelagic zone of oceans consists of the water column of the open ocean. The benthic zone extends
along the ocean bottom from the shoreline to the deepest parts of the ocean floor. This is a nutrient-rich
portion of the ocean because of the dead organisms that fall from the upper layers of the ocean. Due to
the high level of nutrient availability, a diversity of sponges, sea anemones, marine worms, sea stars,
fishes, and bacteria thrive in this region.
The photic zone is the portion of the ocean that light can penetrate (approximately 200 m). At depths
greater than 200 m, less than 1% of sunlight penetrates into the next portion of the ocean known as
aphotic (or profundal) zone. The portion of the ocean that lies deeper than about 2,000 m and remains in
perpetual darkness is known as the abyssal zone. The deepest region of the ocean, lying within oceanic
trenches, is known as the hadal zone. It extends from a depth of around 6,000 to 11,000 meters.
The mid-ocean ridges of the world are connected and form a single global mid-oceanic ridge system that
is a part of every ocean and is the longest mountain range in the world. This feature is formed where
seafloor spreading takes place along a divergent plate boundary.
The ocean is divided into different zones based on water depth and distance from the shoreline
The intertidal zone is the area where the ocean meets the land between high and low tides. The tough
exoskeletons of shoreline crustaceans protect them from desiccation (drying out) and wave damage in
this zone.
The neritic zone, also called coastal waters, is the relatively shallow part of the ocean situated from the
intertidal zone to depths of about 200 m. The zone is relatively stable and provides a well-illuminated
environment for marine life. The water contains silt and is well-oxygenated. Zooplankton, protists, small
fishes, and shrimp are found in the neritic zone and are the base of the food chain for most of the world’s
fisheries. Corals are also found more in the neritic zone than in the intertidal zone.
The oceanic zone is typically defined as the area of the ocean lying beyond the continental shelf and is
relatively less productive. The mixing of warm and cold waters takes place in this zone due to the action
of ocean currents.
Coral Reefs
Coral reefs, often called the “rainforests of the sea”, form some of Earth's most diverse ecosystems. Most of
the reef-building or hermatypic corals live only in the photic zone. The coral organisms (members of
phylum Cnidaria) are colonies of saltwater polyps that secrete a calcium carbonate skeleton. The gradual
accumulation of these calcium-rich skeletons forms the underwater reef. Coral polyps are not
photosynthetic, but they form a symbiotic association with zooxanthellae. These organisms live within
the polyps' tissues and provide organic nutrients (glucose and amino acids) that nourish the polyp.
2. Wetlands
Wetlands are characterized by soil that is periodically or permanently saturated with water. They have
also been described as ecotones, acting as a transition zone between dry land and water bodies. Wetlands
are generally formed due to flooding. The duration of flooding or prolonged soil saturation by
groundwater determines whether the resulting wetland has aquatic, marsh or swamp vegetation. Bogs
and fens are formed due to the accumulation of peat in the region. Wetlands are an important carbon
sink storing approximately 44.6 million tonnes of carbon per year globally. However, some wetlands are
a significant source of methane emissions and some are also emitters of nitrous oxide.
Mangrove
Mangrove forests only grow at tropical and subtropical latitudes near the equator in coastal saline or
brackish water. Mangroves are a group of salt-tolerant trees, generally called halophytes. The anoxic
sediments under mangroves act as sinks for a variety of heavy metals, which have been scavenged from
the water by the colloidal particles in the sediments.
Black Mangroves (e.g. Avicennia germinans) are easily identified by the presence of pneumatophores,
tubular, bristle-like roots that stick out vertically from the sediment and trap oxygen for the oxygen-starved
roots. The Black Mangrove is adapted to high saline conditions and these trees grow in isolated groups
or in woodland formations. Individual trees are fairly large and may grow up to 20-25 meters in height.
Red Mangroves (e.g. Rhizophora mangle) are evergreen trees, which are immediately recognized by their
elaborate prop and aerial root systems that stabilize the trees. The roots contain a waxy substance to
prevent salt intake. When salt gets through, it is deposited in older leaves and the tree then sheds them.
The tree also creates a propagule resembling an elongated seed pod that is, in reality, a living tree. The
fully-grown propagule on the mangrove is capable of rooting and producing a new tree even after floating
over a year in brackish water.
White Mangroves (e.g. Laguncularia racemosa) normally grow in the back portion of mangrove swamps
and remain unaffected by tidal inundation except during spring tides. The tree grows up to 18 meters.
The leaves are adapted to the saline environment by developing special openings (glands) to expel salt.
Mangrove zonation is the predictable and discrete ordering of mangrove species caused by a unique
intertidal environment. The red mangrove is closest to the water, while the buttonwood mangrove is
found farthest from the water. Their positions depend on land elevation, water and soil salt levels, and
tidal changes.
3. Estuary
An estuary is a partially enclosed coastal body of brackish water having one or more rivers or streams
flowing into it with a free connection to the open sea. Estuaries form a transition zone between river
environments and maritime environments.
4. Lakes and Ponds
The topmost zone near the shore of a lake or pond is the littoral zone. This zone is the warmest and is
dominated by rooted and floating aquatic plants, grazing snails, clams, insects, crustaceans, fishes, and
amphibians. The near-surface open water surrounded by the littoral zone is called the limnetic zone,
which is dominated by both phytoplankton and zooplankton. The much colder and denser portion of a
lake or pond is called the profundal zone.
Eutrophication is the process by which a lake or pond becomes enriched with minerals and nutrients
(phosphorus and nitrogen), which induce excessive growth of algae (an algal bloom).
Ecosystem Services
Ecosystem services are the many and varied benefits that humans freely gain from the natural
environment and from properly-functioning ecosystems. They support directly or indirectly our survival
and quality of life. Ecosystem services can be categorized into four main types: Provisioning services,
cultural services, regulating services, and supporting services. To help inform decision-makers, many
ecosystem services are being assigned economic values.
Traditional Knowledge naturally includes a deep understanding of ecological processes and the ability to
sustainably extract useful products from the local habitat. Biopiracy refers to an illegitimate appropriation
of traditional knowledge. It is most often associated with western biotech companies that forage the flora
of biodiversity-rich developing countries to exploit and commercialize biological substances that have
been used by Indigenous people for generations.
Indian products, such as the neem tree, tamarind, turmeric, and Darjeeling tea have all been patented
by multinational companies for different biomedical purposes.
Restoration Ecology
Ecological restoration is the rehabilitation, reclamation, recreation, and recovery of degraded
lands. These efforts may be conducted on either a small-scale (e.g. tree planting) or may involve major
human and technical efforts (e.g., the re-creation of wetlands, acid lake neutralization, etc.).
Enhancement is the process through which only a few ecosystem processes or species are ‘restored’.
However, the system remains far from its pristine state.
Rehabilitation helps to significantly improve the ecosystem, but it remains quite distinct from its pre-
degradation conditions. This is generally carried out in areas that have been strip-mined.
Reclamation stabilizes land and restores sufficient soil to revegetate the land, without attempting to
restore the conditions before mining.
Replacement builds a new community that meets some set of conservation objectives but is different
from the degraded one. E.g., constructed wetlands around some lakes fit into this category.
Finally, Restoration rebuilds an ecosystem that differs little from the pristine ecosystem that was degraded.
For animals, there are translocation and reintroduction methods (using captive-bred animals). Restorative
projects can also increase the effective size of a population by adding suitable habitat and decrease
isolation by creating habitat corridors that link isolated fragments.
Bioremediation
Bioremediation is the use of microorganisms, plants, or microbial or plant enzymes to destroy or
immobilize waste materials by altering environmental conditions. One of the major advantages of
bioremediation is the savings in cost and in the time spent by workers to clean a contaminated site. The
contaminants known to be biologically degraded by microorganisms are categorized into five groups.
Bioremediation techniques themselves fall into two broad categories, in-situ and ex-situ:
1. In-situ bioremediation means that treatment occurs at the site of contamination, without translocation
of the polluted materials. This technique focuses on initiating or enhancing the degradation of the
contaminants in soils and groundwater. The most effective methods of in-situ bioremediation are the
following:
Bioventing
This process involves supplying the necessary amount of oxygen and nutrients through wells to
contaminated soil for stimulating the growth of indigenous bacteria while minimizing the volatilization
and release of contaminants to the atmosphere. Oxygen is most commonly supplied through direct air
injection into residual contamination in soil by means of wells. It is applied for simple hydrocarbons and
can also be used where the contamination is deep under the surface. In many soils, effective oxygen
diffusion for desirable rates of bioremediation extends to a range of only a few centimeters to about 30
cm into the soil, although depths of 60 cm and greater have been effectively treated in a few cases.
Biosparging
It involves the injection of air under pressure below the water table to increase the oxygen concentration
in groundwater, which will enhance the rate of biological degradation of contaminants by naturally
occurring bacteria. Biosparging also increases the contact between soil and groundwater. Another benefit
of this technique is that water does not have to be extracted. However, there is still very little known about
the extent to which air injection causes a flow of ground-water.
In situ biodegradation
This method involves supplying oxygen and nutrients by circulating aqueous solutions through
contaminated soils to stimulate naturally occurring bacteria to degrade organic contaminants. It can be
used for biodegradation in the soil and the groundwater.
Bioaugmentation
In this method, bacterial cultures are added to a contaminated medium. Two factors limit the use of
added microbial cultures in a land treatment unit: (a) nonindigenous cultures rarely compete well enough
with an indigenous population to develop and sustain useful population levels and (b) most soils with
long-term exposure to biodegradable waste have indigenous microorganisms that are effective degraders
if the land treatment unit is well managed. Bioaugmentation is commonly used in municipal wastewater
treatment to restart activated sludge bioreactors.
2. Ex-Situ Bioremediation involves the treatment of contaminated soil or water once it has been
excavated or pumped out of the location where it was found. The techniques used for Ex-situ type of
bioremediation are the following;
Landfarming
Landfarming is limited to the treatment of superficial (10–35 cm) soil. In this method, contaminated soil
is excavated and spread over a prepared bed and periodically tilled until pollutants are degraded by
indigenous microorganisms through aerobic degradation.
Composting
Composting is a technique that involves combining contaminated soil with nonhazardous organic
materials such as manure or agricultural wastes. The presence of these organic materials supports the
growth of a rich microbial population and elevated temperature facilitates composting.
Biopiles
Biopiles are a hybrid of landfarming and composting techniques in which solid-phase biological process
is carried out for converting contaminants to low-toxicity byproducts. They are used to reduce
concentrations of petroleum constituents in excavated soils through the use of biodegradation.
Bioreactors
Slurry reactors or aqueous reactors have proven to be more effective and efficient against a wider range of
pollutants. A slurry bioreactor may be defined as a containment vessel and apparatus used to create a
three-phase (solid, liquid, and gas) mixing condition to increase the bioremediation rate of soil-bound
and water-soluble pollutants. Bioreactors are used primarily to treat volatile organic compounds (VOCs)
and fuel hydrocarbons in soil and groundwater. The process is less effective for pesticides.
The rate and extent of biodegradation are higher in a bioreactor system than in in situ or in solid-phase
systems because the contained environment is efficiently manageable and more controllable.
Anaerobic bacteria have recently been used for the bioremediation of polychlorinated biphenyls (PCBs) in river sediments and for the dechlorination of the solvents trichloroethylene (TCE) and chloroform.
Ligninolytic fungi such as the white rot fungus (Phanerochaete chrysosporium) have the ability to degrade an
extremely diverse range of persistent or toxic environmental pollutants.
Methylotrophs are aerobic bacteria that grow by utilizing methane as a source of carbon and energy. They
can degrade a wide range of compounds, including the chlorinated aliphatics trichloroethylene and 1,2-
dichloroethane.
Superbug is a constructed bacterium, e.g. Pseudomonas putida, which can degrade hydrocarbons found
in petroleum wastes.
Phytoremediation
Phytoremediation is a bioremediation process that uses various types of plants for in situ removal,
degradation, and containment of contaminants in soils, surface waters, and groundwater. The method is
well suited for the use at very large field sites where other methods of remediation are not cost-effective
or practicable. Some of the plant species used for phytoremediation include,
1. Indian mustard (Brassica juncea L.)
2. Willow (Salix species)
3. Poplar tree (Populus deltoides)
4. Indian grass (Sorghastrum nutans)
5. Sunflower (Helianthus annuus L.)
Technique: Phytoextraction (or Phytoaccumulation). Plant mechanism: plants accumulate contaminants in the roots and aboveground shoots or leaves; these plants are then removed for disposal or recycling. Surface medium: soils.
Technique: Phytotransformation. Plant mechanism: uptake of organic contaminants from soil or water and their transformation to a more stable, less toxic, or less mobile form (e.g., the metal chromium can be reduced from hexavalent to trivalent chromium). Surface medium: surface water, groundwater.
Technique: Phytostabilization. Plant mechanism: reduction of the mobility and migration of contaminated soil. Surface medium: soils, groundwater, mine tailings.
https://www.youtube.com/channel/UCrPRG_Wut_UrDhD_MTvQ6GA
STATISTICS
Statistics is a form of mathematical analysis that uses quantified models, representations and synopses
for a given set of experimental data or real-life studies. In statistics, an attribute is a characteristic of an
object. Attributes are closely related to variables. A variable is a logical set of attributes. For example, if
you are collecting data on damaged products, attribute data simply classify the output as damaged or not
damaged. If you gather variable data, you can find out how bad each damaged product is: 30 percent, 40
percent, and so on.
Types of Variables
In scientific research, a variable is something that we measure, but also something that we can
manipulate and control for. The two main variables in an experiment are the independent variable and
the dependent variable. An independent variable is a variable that is being manipulated in an experiment
in order to observe the effect on a dependent variable, whereas the dependent variable is a variable that depends on the independent variable.
For example, an entomologist wants to find out if the brightness of light has any effect on a moth being
attracted to the light. The brightness of the light, which is controlled by the scientist, is an independent
variable. How the moth reacts to the different light levels is considered the dependent variable.
Categorical Variables
Categorical variables are numeric or nominal variables that take a finite, limited, countable number of
possibilities or values (e.g. the number of students in a class). They can be further categorized as nominal and ordinal variables.
1. Nominal variable
These categories of variables differ in name only, not in value. Examples include Gender (male and
female), eye color (blue, brown, green, etc.), surgical outcome (dead or alive), etc.
2. Ordinal variables
There is an inherent order to the relationship among different categories. E.g. Stages of cancer (stage I,
II, III, IV), Education level (elementary, secondary, college), etc.
Continuous variables
Continuous variables are also known as quantitative variables having an infinite number of value
possibilities. There always exists a third value between any two values on it. Continuous variables can be
further categorized as either interval or ratio variables.
1. Interval
Variables having constant and equal distances between values, but the zero point is arbitrary. E.g.
Intelligence (IQ test score of 100, 110, 120, etc.), pain level (1-10 scale), etc.
2. Ratio
Variables having equal intervals between values, the zero point is meaningful, and the numerical
relationships between numbers are meaningful. E.g. weight (50 kilos, 100 kilos, 150 kilos, etc.), pulse
rate, etc.
1. Arithmetic Mean
The arithmetic mean is the same as the average of data. It is calculated by adding together the given
values in a data set and then dividing that total by the number of given values. Thus, the mean of n observations x1, x2, ..., xn is given by:
x̄ = (x1 + x2 + ... + xn) / n
2. Median
If we arrange n observations in ascending or descending order, then the middle value is the median. If the total number of terms (n) is odd, the median is the value of the ((n + 1)/2)th term. If the total number of terms (n) in a data set is even, the median is the average of the (n/2)th and ((n/2) + 1)th terms.
3. Mode
The mode is the most frequently occurring value in the data set. It is useful when there are a lot of
repeated values in a dataset.
The harmonic mean has the least value when compared to the geometric mean and the arithmetic mean.
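The measures above, including the stated ordering of the harmonic, geometric, and arithmetic means, can be checked with Python's standard statistics module; the data set below is hypothetical, chosen only for illustration:

```python
import statistics

# Hypothetical data set: saplings surviving in 7 sample plots
data = [12, 15, 12, 18, 20, 12, 16]

mean = statistics.mean(data)      # sum of the values / number of values
median = statistics.median(data)  # middle value of the sorted data
mode = statistics.mode(data)      # most frequently occurring value

# For positive data the means are ordered: harmonic <= geometric <= arithmetic
hm = statistics.harmonic_mean(data)
gm = statistics.geometric_mean(data)
```

For this data set the mean and median are both 15 and the mode is 12, while the harmonic and geometric means fall below the arithmetic mean, as the text states.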
Measures of Dispersion
The measures of dispersion show the scatterings of the data. More precisely, it measures the degree of
variability in the given observation on a variable from their central value (usually the mean or the median).
2. Quartile Deviation
The quartiles (Q) divide an ordered data set into four equal parts through three values: Q1, Q2, and Q3. Quartile Deviation (QD) is half of the difference between the upper and lower quartiles:
QD = (Q3 − Q1) / 2
Q1 is the middle number between the smallest number and the median of the data;
Q3 is the middle number between the median and the largest number.
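A minimal sketch of this quartile calculation, following the "middle number of each half" convention described above; the data set is hypothetical:

```python
def middle(values):
    """Median of a list: the middle value, or the average of the two middle values."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

data = sorted([2, 4, 6, 8, 10, 12, 14])
n = len(data)

lower_half = data[: n // 2]        # values below the median
upper_half = data[(n + 1) // 2:]   # values above the median

q1 = middle(lower_half)            # middle number between the minimum and the median
q3 = middle(upper_half)            # middle number between the median and the maximum
qd = (q3 - q1) / 2                 # quartile deviation: half the interquartile range
```

Here Q1 = 4, Q3 = 12, and QD = 4.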
3. Mean Deviation
For n observations x1, x2, ..., xn, the mean deviation about their mean x̄ is given by:
MD = (Σ |xi − x̄|) / n
4. Standard Deviation and Variance
Let x1, x2, ..., xn be n observations with x̄ as the mean. The variance, denoted by σ2, is given by:
σ2 = (Σ (xi − x̄)2) / n
The standard deviation σ is the positive square root of the variance.
Standard Error
The standard error of the mean (SE) measures how far the sample mean is likely to be from the population mean:
SE = σ / √n
Where σ is the standard deviation of the population and n is the size (number of observations) of the sample. In most cases, the standard deviation of the population is unknown; hence, the standard error of the mean is usually estimated as the sample standard deviation divided by the square root of the sample size:
SE = S / √n
Where S is the sample standard deviation and n is the number of observations of the sample.
A confidence interval (CI) is a range of values that is likely to contain the true value of an unknown population parameter; it proposes a range of plausible values for that parameter. The general formula for estimating the CI of a mean is:
CI = x̅ ± Z × (S / √n)
Confidence levels of 90%, 95%, and 99% are often used in analyses.
CI Z Value
90% 1.645
95% 1.960
99% 2.576
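A sketch of the standard error and a 95% confidence interval for a sample mean, using the Z value from the table above; the dissolved-oxygen readings are hypothetical:

```python
import math
import statistics

# Hypothetical sample: 25 dissolved-oxygen readings (mg/L)
sample = [7.2, 6.8, 7.5, 7.1, 6.9] * 5
n = len(sample)

mean = statistics.mean(sample)
s = statistics.stdev(sample)   # sample standard deviation (n - 1 in the divisor)
se = s / math.sqrt(n)          # estimated standard error of the mean

z = 1.960                      # Z value for the 95% confidence level
ci_low, ci_high = mean - z * se, mean + z * se
```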
When the long tail of a distribution is on the negative side of the peak, it’s called negatively skewed. The
distribution is positively skewed when the long tail is on the positive side of the peak.
Kurtosis is a statistical measure that defines how heavily the tails of a distribution differ from the tails of a normal distribution. It characterizes the relative peakedness or tailedness of a distribution. The kurtosis of a distribution falls into one of three categories: mesokurtic (kurtosis equal to that of the normal distribution), leptokurtic (heavier tails and a sharper peak than the normal distribution), and platykurtic (lighter tails and a flatter peak than the normal distribution).
Moment ratio and Percentile Coefficient of kurtosis are used to measure the kurtosis. Moments are a set
of statistical parameters to describe the shape of a distribution.
Probability Theory
Probability is the chance of occurrence of a certain event expressed quantitatively, i.e., probability is a quantitative measure of certainty.
? A bag contains 5 white and 3 black balls, and two balls are drawn at random. What is the probability
that both are of the same color?
The probability of drawing two white balls is (5/8) × (4/7) = 5/14 and of drawing two black balls is (3/8) × (2/7) = 3/28; the balls are drawn without replacement, so the probability of the second draw is conditional on the first. The probability that both balls are of the same color is therefore 5/14 + 3/28 = 13/28.
? A box contains 2 red, 3 black, and 4 blue balls. 3 balls are randomly drawn from the box. What is the
probability that the balls are of different colors?
nCr = n! / (r! × (n − r)!)
The total number of ways of drawing 3 balls out of 9 is 9C3 = 84. One red, one black, and one blue ball can be chosen in [(2C1) × (3C1) × (4C1)] = 24 ways.
The probability that the balls are of different colors is 24/84= 2/7
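Both answers can be checked by brute-force enumeration of all equally likely draws; this is only a verification sketch, not an efficient method:

```python
from fractions import Fraction
from itertools import combinations

# Question 1: 5 white and 3 black balls, two drawn at random
bag1 = ['W'] * 5 + ['B'] * 3
pairs = list(combinations(range(len(bag1)), 2))                 # 8C2 = 28 equally likely draws
same = sum(1 for i, j in pairs if bag1[i] == bag1[j])           # 5C2 + 3C2 = 13 same-colour draws
p_same = Fraction(same, len(pairs))                             # 13/28

# Question 2: 2 red, 3 black, 4 blue balls, three drawn at random
bag2 = ['R'] * 2 + ['K'] * 3 + ['U'] * 4
triples = list(combinations(range(len(bag2)), 3))               # 9C3 = 84 equally likely draws
distinct = sum(1 for t in triples if len({bag2[i] for i in t}) == 3)  # 2 x 3 x 4 = 24
p_distinct = Fraction(distinct, len(triples))                   # 24/84 = 2/7
```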
1. Normal Distribution
The normal distribution is a symmetric, bell-shaped probability distribution. E.g., heights, blood pressure, and IQ scores follow the normal distribution. The mean and the standard deviation are the two parameters of the normal distribution. The Empirical Rule for the Normal Distribution helps to determine the proportion of values that fall within specific numbers of standard deviations from the mean: about 68% of values fall within 1, about 95% within 2, and about 99.7% within 3 standard deviations of the mean.
A Z-score (standard score) indicates how many standard deviations an observation lies from the mean. For example, a Z-score of 1.5 indicates that the observation is 1.5 standard deviations above the mean. As opposed to this, a negative Z-score represents a value below the average. The following formula can be used to calculate the standard score for an observation:
Z = (X − μ) / σ
2. Lognormal Distribution
A variable X follows a lognormal distribution if its logarithm is normally distributed. If Z is the standard normal variable, and μ and σ are the mean and standard deviation of the logarithm of X, then the distribution can be expressed as X = e^(μ + σZ).
3. Binomial Distribution
The binomial distribution can be simply stated as the probability of a SUCCESS or FAILURE outcome
in an experiment or survey that is repeated multiple times. The probability distribution of the random variable X is given by the formula:
P(X = x) = nCx × p^x × q^(n − x)
Where,
n = the number of trials , x = 0, 1, 2, ... n, p = the probability of success in a single trial, q = the probability
of failure in a single trial (i.e. q = 1 − p), nCx is a combination, P(X) gives the probability of successes in
n binomial trials. Binomial distributions must also meet the following criteria:
1. The number of trials is fixed.
2. Each observation or trial is independent. In other words, none of the trials should have an effect on
the probability of the next trial.
3. The probability of success is exactly the same from one trial to another.
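A small sketch of the binomial formula with a hypothetical fair-coin example:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) = nCx * p**x * q**(n - x), where q = 1 - p."""
    q = 1 - p
    return comb(n, x) * p**x * q**(n - x)

# Hypothetical example: probability of exactly 2 heads in 4 fair-coin tosses
p_two = binomial_pmf(2, 4, 0.5)                    # 4C2 * 0.5^2 * 0.5^2 = 0.375

# The probabilities over all possible x sum to 1
total = sum(binomial_pmf(x, 4, 0.5) for x in range(5))
```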
4. Poisson Distribution
This probability model can be used when the outcome of an experiment is a random variable taking
positive integer values and where the only information available is a measurement of its average value.
Some of the common examples of Poisson distribution include the number of births per hour during a
given day, the number of particles emitted by a radioactive source in a given time, the number of
mutations in given regions of a chromosome, etc.
If X = the number of events in a given interval and λ is the mean number of events per interval, then:
P(X = x) = (λ^x × e^−λ) / x!
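The Poisson probability mass function can be sketched as follows; the average emission rate is hypothetical:

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(X = x) = lam**x * exp(-lam) / x!"""
    return lam**x * exp(-lam) / factorial(x)

# Hypothetical: a radioactive source emits on average 3 particles per second;
# probability of observing exactly 2 particles in one second:
p_two = poisson_pmf(2, 3.0)

# Probabilities over all counts sum to 1 (summed far into the tail)
total = sum(poisson_pmf(x, 3.0) for x in range(60))
```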
The t-distribution underlies three common t-tests:
1. The 1-Sample t-test for comparing a single mean with a specified value.
2. The 2-Sample independent sample (unpaired) t-test for comparing two means.
3. The 2-Sample “correlated sample” t-test (paired) for comparing two means with correlated or
repeated-measures data.
Properties of a t-Distribution
1. It is bell-shaped and symmetric about a mean of zero.
2. Its exact shape depends on the degrees of freedom (df).
3. Its tails are heavier than those of the standard normal distribution, and it approaches the standard normal distribution as df increases.
4. The variance is always greater than one but approaches 1 when df gets bigger.
The goal of the one-sample t-test is to compare a sample mean to a specific value (e.g., a population
parameter; a neutral point on a Likert-type scale, chance performance, etc.), based on the assumptions
that subjects are randomly drawn from a population and the distribution of the mean being tested is
normal. The one-sample t-test requires the following statistical assumptions:
1. Random and independent sampling.
2. The population from which the sample is drawn is normally distributed.
Note: The one-sample t-test is generally considered robust against violation of the normality assumption when N > 30.
The null hypothesis (H0) and the alternative hypothesis (H1) of the one-sample T-test can be expressed
as:
H0: µ = x ("the sample mean is equal to the [proposed] population mean")
H1: µ ≠ x ("the sample mean is not equal to the [proposed] population mean")
t = (x̅ − μ) / (s / √n), df = n − 1
Where t = t score, x̅ = sample mean, μ = population mean, s = sample standard deviation, n = sample size. The
calculated t value is then compared to the critical t value from the t distribution table with degrees of
freedom (df = n - 1) and chosen confidence level.
If the calculated t value is greater than the critical t value, then we reject the null hypothesis.
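The one-sample procedure can be sketched with the standard library alone; the sample values are hypothetical, and 2.365 is the tabulated critical t for df = 7 at the two-tailed 0.05 level:

```python
import math
import statistics

# Hypothetical sample: BOD of 8 river-water samples, tested against mu = 30 mg/L
sample = [31.2, 29.8, 33.1, 30.5, 32.4, 31.8, 30.9, 32.3]
mu = 30.0

n = len(sample)
xbar = statistics.mean(sample)
s = statistics.stdev(sample)

t = (xbar - mu) / (s / math.sqrt(n))  # t = (x-bar - mu) / (s / sqrt(n))
df = n - 1

t_critical = 2.365                    # t table: df = 7, two-tailed, alpha = 0.05
reject_h0 = abs(t) > t_critical       # reject H0 if |t| exceeds the critical value
```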
The Independent Samples (unpaired) t-test compares the means of two independent groups in order to
determine whether there is statistical evidence that the associated population means are significantly
different.
Two samples are referred to as independent if the observations in one sample are not in any way related
to the observations in the other (e.g. giving treatment A to the first group and treatment B to the second
group)
The null hypothesis (H0) and the alternative hypothesis (H1) of the Independent Samples t-Test can be
expressed in two different but equivalent ways:
H0: µ1 = µ2 ("the means of the two populations are equal"), or equivalently H0: µ1 − µ2 = 0 ("the difference between the two population means is 0")
H1: µ1 ≠ µ2 ("the means of the two populations are not equal")
The test statistic is computed as:
t = (x̅1 − x̅2) / (sp × √(1/n1 + 1/n2)), where sp = √[((n1 − 1)s12 + (n2 − 1)s22) / (n1 + n2 − 2)]
Where,
x̅1 = mean of first sample, x̅2 = mean of second sample, n1 = sample size of first sample, n2 = sample size of second sample, s1 = standard deviation of first sample, s2 = standard deviation of second sample, sp = pooled standard deviation
The calculated t value is then compared to the critical t value from the t distribution table with degrees
of freedom (df = n1 + n2 – 2) and chosen confidence level. If the calculated t value is greater than the
critical t value, then we reject the null hypothesis.
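A sketch of the pooled (equal-variance) computation with hypothetical measurements; 2.228 is the tabulated critical t for df = 10 at the two-tailed 0.05 level:

```python
import math
import statistics

# Hypothetical plant heights (cm) under two independent treatments
a = [12.1, 13.4, 11.8, 12.9, 13.0, 12.5]
b = [10.9, 11.5, 10.2, 11.8, 10.7, 11.1]

n1, n2 = len(a), len(b)
x1, x2 = statistics.mean(a), statistics.mean(b)
s1, s2 = statistics.stdev(a), statistics.stdev(b)

# Pooled standard deviation (equal-variance assumption)
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

t = (x1 - x2) / (sp * math.sqrt(1 / n1 + 1 / n2))
df = n1 + n2 - 2

reject_h0 = abs(t) > 2.228   # critical t for df = 10, two-tailed, alpha = 0.05
```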
When the two independent samples are assumed to be drawn from populations with unequal variances
(i.e., σ12 ≠ σ22), the test statistic t is computed as:
t = (x̅1 − x̅2) / √(s12/n1 + s22/n2)
The calculated t value is then compared to the critical t value from the t distribution table, with the degrees of freedom approximated from the sample variances and sample sizes (the Welch–Satterthwaite approximation).
The Paired Samples t-Test is used to compare means on the same or related subject over time or in
differing circumstances; subjects are often tested in a before-after situation. Hence, in a paired sample t-
test, each subject or entity is measured twice, resulting in pairs of observations. It does not assume that
the variance of both populations is equal.
The null hypothesis (H0) and the alternative hypothesis (H1) can be expressed as:
H0: µ1 = µ2 ("the paired population means are equal")
Or, equivalently, H0: µd = 0 ("the mean of the paired differences is 0")
H1: µ1 ≠ µ2 ("the paired population means are not equal")
The test statistic for a chi-square goodness of fit test is:
χ2 = Σ (Oi − Ei)2 / Ei
where Oi is the observed number of cases in category i, and Ei is the expected number of cases in category i.
? Using the data in the following table, conduct a chi-square goodness of fit test to determine whether
the sample does provide a good match to the known age distribution of Bangalore women. Use the 0.05
level of significance.
H0: The age distribution of respondents in the sample is the same as the age distribution of Bangalore women, based on the Census.
H1: The age distribution of respondents in the sample differs from the age distribution of women in the
Census.
The chi-square test should always be conducted using the actual number of cases, rather than the
percentages. Let the 20-24 age group be category i = 1. The 20-24 age group contains 18% of all the
women, and if the null hypothesis were to be exactly correct, there would be 18% of the 490 cases in
category 1.
E1 = 490 × 18/100 = 490 × 0.18 = 88.2
Similarly, for the second category 25-34, there would be 50% of the total number of cases.
E2 = 490 ×50/100= 490 × 0.50 = 245.0
And finally, for the third category (the remaining 32% of cases):
E3 = 490 × 32/100 = 490 × 0.32 = 156.8
Summing (Oi − Ei)2/Ei over the three categories gives χ2 = 7.202.
The next step is to decide whether this is a large or a small χ2 value. The level of significance requested
is the α = 0.05 level. The number of degrees of freedom is the number of categories minus one. There
are k = 3 categories into which the ages of women have been grouped so that there are (d = k −1 = 3−1
= 2) two degrees of freedom. The critical value of χ2, in this case, is 5.991. Since 7.202 > 5.991 the null
hypothesis can be rejected, and the research hypothesis is accepted at the 0.05 level of significance.
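The same goodness-of-fit procedure can be sketched in code. The expected proportions (18%, 50%, and 32% of 490 cases) and the critical value 5.991 follow the worked example; the observed counts here are hypothetical, since the original data table is not reproduced:

```python
# Hypothetical observed counts for the three age categories (they total 490)
observed = [110, 230, 150]
# Expected proportions from the Census: 18%, 50%, and 32%
proportions = [0.18, 0.50, 0.32]

n = sum(observed)
expected = [n * p for p in proportions]        # [88.2, 245.0, 156.8]

# Chi-square statistic: sum of (Oi - Ei)^2 / Ei over the categories
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1                         # k - 1 = 2

reject_h0 = chi_sq > 5.991                     # critical value at alpha = 0.05, df = 2
```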
Level of Significance
The level of significance is defined as the probability of rejecting a null hypothesis by the test when it is
really true, which is denoted as α. The relationship between the level of significance and the confidence
level is c=1−α. For example, a significance level of 0.05 indicates a 5% risk of concluding that a difference
exists when there is no actual difference. The significance level for a study is chosen before data collection
and is typically set to 5% or much lower depending on the field of study.
The common level of significance and the corresponding confidence level are given below:
1. The level of significance 0.10 is related to the 90% confidence level.
2. The level of significance 0.05 is related to the 95% confidence level.
3. The level of significance 0.01 is related to the 99% confidence level.
7. F Distribution
A statistical F-test uses an F statistic to compare two variances, s12 and s22, by dividing them. Its result is always a positive number (because variances are always positive).
The null hypothesis assumes that the variances are equal, i.e. H0: S12 = S22
The alternate hypothesis states that the variances are unequal. H1: S12 ≠ S22
The equation for comparing two variances with the F-test is:
F = S12 / S22
Where S12 is assumed to be larger sample variance and S22 is the smaller sample variance. Degree of
freedom (df1) = n1 – 1 and Degree of freedom (df2) = n2 – 1, where n1 and n2 are the sample sizes. If
the F statistic is greater than the critical value at the required level of significance, we reject the null
hypothesis.
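A minimal F-ratio sketch with hypothetical replicate data; putting the larger variance in the numerator keeps F at or above 1, as the formula requires:

```python
import statistics

# Hypothetical replicate measurements from two analytical methods
method1 = [4.2, 4.6, 4.1, 4.9, 4.4, 4.5]
method2 = [4.30, 4.40, 4.35, 4.42, 4.38, 4.41]

v1 = statistics.variance(method1)   # sample variance s1^2
v2 = statistics.variance(method2)   # sample variance s2^2

# Larger variance in the numerator, so the ratio is at least 1
f = max(v1, v2) / min(v1, v2)
df1 = len(method1) - 1
df2 = len(method2) - 1

# Compare f with the tabulated critical F(df1, df2) at the chosen significance level
```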
8. Correlation and Regression
Two variables are said to be in correlation if the change in one variable results in a corresponding change
in the other variable. Correlation is a statistical technique that can show whether and how strongly pairs
of variables are related. Correlation is said to be Positive or direct when the values increase together, and
it is negative when one value decreases as the other increases, and so-called inverse or contrary
correlation.
Correlation can have the following values:
+1 is a perfect positive correlation
0 means there is no correlation
−1 is a perfect negative correlation
If the change in one variable tends to bear a constant ratio to the change in the other variable, the
correlation is said to be linear. Correlation is said to be non-linear if the amount of change in one variable
does not bear a constant ratio to the amount of change in the other variable. If only two variables are
studied, it is a case of a simple correlation. In multiple correlations, three or more variables are studied
simultaneously. In partial correlation, we have more than two variables but consider only two variables
to be influencing each other, the effect of the other variables being kept constant.
The most common methods of determining correlation are the scatter diagram, Karl Pearson's coefficient of correlation, and Spearman's rank correlation coefficient.
In order to quantify the relationship between the variables, a measure called the correlation coefficient
developed by Karl Pearson is used. The coefficient of correlation ‘r’ is always a number between -1 and
+1, which indicates to what extent two variables are related.
Suppose that there are two variables X and Y, each having n values X1, X2, . . . , Xn and Y1, Y2. . . , Yn
respectively. Let the mean of X be x̅ and the mean of Y be Ȳ. Then Pearson's r is:
r = Σ(Xi − x̅)(Yi − Ȳ) / √[Σ(Xi − x̅)2 × Σ(Yi − Ȳ)2]
1. If r = +1, there is a perfect positive correlation.
2. If r = −1, there is a perfect negative correlation.
3. If r = 0, there is no correlation.
? A study is conducted involving 10 students to investigate the association between statistics and science
tests. Is there a relationship between the degrees gained by the 10 students in statistics and science tests?
Let us consider that x denotes statistics degrees and y denotes science degree. Calculating x̅ and Ȳ;
The calculation shows a strong positive correlation (0.761) between the student's statistics and science
degrees.
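Pearson's r can be sketched directly from its defining sums; the marks below are hypothetical (they are not the 10-student data of the worked example, which are not reproduced here):

```python
import math

# Hypothetical marks of 6 students in statistics (x) and science (y)
x = [55, 60, 71, 48, 80, 66]
y = [58, 62, 68, 50, 77, 70]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n

# Numerator: sum of products of deviations from the means
num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
# Denominator: product of the root sums of squared deviations
den = math.sqrt(sum((xi - xbar) ** 2 for xi in x)) * \
      math.sqrt(sum((yi - ybar) ** 2 for yi in y))

r = num / den   # always lies between -1 and +1
```

For these data r is strongly positive, indicating that students who score well in statistics tend to score well in science.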
Regression Analysis
The term regression literally means “stepping back towards the average”. It is used to explain variability
in the dependent variable by means of one or more independent or control variables. In regression
analysis, the independent variable is known as regressor or predictor or explanatory variable, and the
dependent variable is known as regressed or explained variable. The simple regression model is used to
study relationships between two continuous (quantitative) variables. In a cause and effect relationship,
the independent variable (x) is the cause, and the dependent variable (y) is the effect. Mathematically,
the regression model is represented by the following equation:
y = β0 + β1x1 + ε1
Where,
x = independent variable, y = dependent variable, β1 = slope of the regression line, β0 = the intercept point of the regression line and the y-axis.
The least-squares estimates of the coefficients are:
β0 = Ȳ − β1 x̅
β1 = (n Σxy − Σx Σy) / (n Σx2 − (Σx)2)
Where n = number of cases or individuals, Σxy = sum of the products of the dependent and independent variables, Σx = sum of the independent variable, Σy = sum of the dependent variable, Σx2 = sum of squares of the independent variable.
Coefficient of Determination (R2) is the direct indicator of how good our model is in terms of accuracy
and precision. Technically, it is the measure of the variance in response to variable ‘y’ that can be
predicted using the predictor variable ‘x’.
R2 is basically the square of the correlation coefficient. Using regression outputs, we can also calculate R2 with the following formula:
R2 = 1 − (SSresidual / SStotal)
where SSresidual is the residual sum of squares and SStotal is the total sum of squares.
The value of R2 lies between 0 and 1. The higher the value of R2, the better will be the prediction and
strength of the model.
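The least-squares formulas above can be sketched as follows; the dose-yield data are hypothetical:

```python
# Hypothetical data: fertiliser dose x (kg/ha) and crop yield y (t/ha)
x = [10, 20, 30, 40, 50]
y = [2.1, 2.9, 3.6, 4.2, 5.1]

n = len(x)
sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)

# Least-squares slope and intercept
b1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b0 = sum_y / n - b1 * sum_x / n

# Coefficient of determination: R^2 = 1 - SS_residual / SS_total
ybar = sum_y / n
ss_tot = sum((yi - ybar) ** 2 for yi in y)
ss_res = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
r2 = 1 - ss_res / ss_tot
```

For these data the fitted slope is 0.073 t/ha per kg/ha and R2 is close to 1, so the linear model predicts yield well.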
A regression model containing more than one regressor variable is called a multiple regression model; for example:
y = β0 + β1X1 + β2X2 + ε
In the above equation, suppose y denotes the yield, X1 denotes the temperature, and X2 denotes the
catalyst concentration. This is a multiple linear regression model with two regressor variables. The term
linear is used because the equation is a linear function of the known parameters β0, β1& β2, and ε is an
error term.
In the analysis of variance (ANOVA), the total amount of variability among observations is measured by summing the squares of the differences between each xij and the overall mean x̄:
SST = Σi Σj (xij − x̄)2
This total sum of squares partitions as SST = SSG + SSE:
SST measures the variation of the data around the overall mean x̄
SSG measures the variation of the group means around the overall mean
SSE measures the variation of each observation around its group mean
Degrees of Freedom
k − 1 for SSG, since it measures the variation of the k group means about the overall mean
n − k for SSE, since it measures the variation of the n observations about k group means
n − 1 for SST, since it measures the variation of all n observations about the overall mean
Two-way (or multi-way) ANOVA is an appropriate analysis method for a study with a quantitative
outcome and two (or more) categorical explanatory variables.
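The sums-of-squares decomposition above can be verified numerically; the three treatment groups below are hypothetical:

```python
import statistics

# Hypothetical one-way layout: yields under k = 3 treatments
groups = [
    [5.1, 5.5, 5.3, 5.7],
    [6.0, 6.4, 6.1, 6.3],
    [4.8, 5.0, 4.9, 5.1],
]

all_obs = [x for g in groups for x in g]
n, k = len(all_obs), len(groups)
grand_mean = statistics.mean(all_obs)

# SST: every observation about the grand mean
sst = sum((x - grand_mean) ** 2 for x in all_obs)
# SSG: group means about the grand mean (weighted by group size)
ssg = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# SSE: each observation about its own group mean
sse = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

# One-way ANOVA F statistic: between-group over within-group mean square
f_stat = (ssg / (k - 1)) / (sse / (n - k))
```

The identity SST = SSG + SSE holds (up to floating-point rounding), and the degrees of freedom add up as (k − 1) + (n − k) = n − 1.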
Among the following chemical species, the pH of natural waters is controlled mainly by?
1. H2CO3
2. HCO3-
3. CO32-
Ans: H2CO3, HCO3-, and CO32- (all of the above)
Alkalinity in most natural surface and groundwater is mainly derived from the dissolution of
carbonate minerals, and from CO2 present in the atmosphere and in the soil above the water table.
Three carbonate species (H2CO3, HCO3-, and CO32-) contribute to total alkalinity, their relative proportions being dependent on pH and temperature. At near-neutral values of pH, dissolved bicarbonate (HCO3-) is the dominant ion. A significant contribution from CO32-, and other anions,
emerges only at pH levels greater than approximately 9.0. In more acidic streams, a greater
proportion of dissolved CO2 is present as H2CO3.
Label on paper products with the Mobius arrows contained within a circle indicates that the
product is
1. Made up of recycled material
2. Recyclable after use
Ans: Made up of recycled material (a Mobius loop on its own indicates a recyclable product, whereas the loop contained within a circle indicates a product made from recycled material)
In a flat-plate solar collector, the energy losses are due to?
1. Conduction
2. Convection
3. Radiation
Ans: Conduction, Convection, and Radiation
Heat loss from any solar water heating system takes the three modes of heat transfer: radiation,
convection, and conduction. The conduction heat losses occur from sides and the back of the
collector plate. The convection heat loss takes place from the absorber plate to the glazing cover and
can be reduced by evacuating the space between the absorber plate and the glazing cover and by
optimizing the gap between them. The radiation loss occurs from the absorber plate due to the plate
temperature.
Which wavelength is most useful for imaging during cloud covered conditions?
1. 4.0 cm
2. 4 nm
3. 0.4 m
4. 0.04 cm
Ans: 4.0 cm (centimetre-wavelength microwave radiation penetrates cloud cover, so microwave sensors can image the surface under cloud-covered conditions)
Ans: Ozonization- Cyanides, Evaporation- Chlorinated organics, Fluidized bed incineration- PCB, Carbon adsorption- Aqueous with metals
Most aerosols scatter sunlight and thereby exert a negative RF. Black carbon (BC) aerosol in the atmosphere is estimated to have a large positive
climate forcing (heating effect). Explosive volcanic eruptions inject substantial amounts of sulfur
dioxide (SO2) and ash into the stratosphere. SO2 oxidizes to form sulfuric acid (H2SO4) which
condenses, forming new particles or adding mass to preexisting particles. These aerosols scatter
sunlight back to space, creating a negative RF that cools the planet.
The online monitoring technique for NOx is based on the chemiluminescence of which of the
following molecules in the excited state?
1. Nitric oxide
2. Ozone
3. Nitrogen dioxide
Ans: Nitrogen dioxide
When nitric oxide reacts with ozone, nitrogen dioxide in the excited state is produced. These
excited molecules return to the ground state by emitting light in the region of 600-2800 nm.
The power output (P) from an ideal magnetohydrodynamic power generating plant varies
with the strength of magnetic field as.
1. P∝B2
2. P∝ B 1/2
3. P∝ B3
Ans: P ∝ B2
The specific power (power per unit mass of working fluid) of the MHD generator is approximately given by:
P = σ μ2 B2 / ρ
Where μ is the fluid velocity, B is the magnetic flux density, σ is the electrical conductivity of the conducting fluid, and ρ is the density of the fluid; hence P ∝ B2.
In a nuclear fusion reactor based on deuterium+ tritium fuel, the proportion of these isotopes
of hydrogen in the fuel is as
1. 80:20
2. 60:40
3. 75:25
4. 50:50
Ans: 50: 50
In a commercial fusion power station, the fuel will consist of a 50-50 mixture of deuterium and
tritium (D-T) because this mixture fuses at the lowest temperature and its energy yield is the largest
compared with other fusion reactions.
Ozone in the stratosphere is not removed by catalytic cycles involving homogeneous gas phase
reactions of
1. SOx Families
2. HOx Families
3. BrOx Families
Ans: SOx Families
Ozone can be destroyed by a number of free radical catalysts; the most important ones are the
hydroxyl radical (OH·), nitric oxide radical (NO·), chlorine radical (Cl·) and bromine radical (Br·).
If arctic ice is replaced by forests, which of the following situation will arise?
1. It will decelerate global warming
2. It will accelerate global warming
3. It will not have any effect on global warming
Ans: It will accelerate global warming
Ice reflects sunrays; if replaced by dense forests, it will absorb more heat, as the dark region absorbs
more heat. This will eventually aid global warming.
Which of the following types of motion contributes to vertical transport of latent and sensible
heat in surface layers?
1. Thermals
2. Molecular conduction and diffusion
3. Microscale turbulence
4. Deep convection
Ans: Deep convection
Deep convection refers to the thermally driven turbulent mixing that moves air parcels from the
lower to the upper atmosphere. In the tropics, it generally involves the vertical ascent of warm moist
air and, ultimately, precipitation.
Ans: Fall- Rock, Flows- Soil, Subsidence- Cave, Lateral movement- Block
R.J. Chorley (1985) has classified mass movement and mass wasting phenomena on the basis of the
direction of movement, type of movement, and presence of a transporting agent.
If mean annual precipitation and mean annual temperature are the lowest, then which type of a
major vegetation dominates?
1. Tundra
2. Taiga
3. Desert
4. Chaparral
Ans: Tundra
The dominant mechanism(s) of deposition of aerosol particles in the size range 5-10 μm in the
respiratory tract are
1. Sedimentation
2. Impaction
3. Diffusion
Ans: Impaction
It is widely accepted that the mechanisms of inertial impaction, gravitational sedimentation, and
Brownian diffusion mainly govern the deposition of aerosol particles in the lungs. Inertial impaction
is a velocity-dependent mechanism and causes most of the particles larger than ~5 μm to deposit in
the upper respiratory tract. Brownian diffusion and gravitational sedimentation are time-dependent
mechanisms. Brownian diffusion primarily affects small particles (<~0.5 μm). Sedimentation is the
gravitational settling of particles and it mainly affects particles in the size range of 1–5 μm.
Ans: NO3- and NH4+
Most of the crop plants prefer nitrogen in nitrate form, but paddy and a few other higher plants
prefer nitrogen in ammoniacal form.
The vertical temperature gradient in the stratosphere strongly inhibits vertical mixing, in contrast to
the situation in the troposphere. The stability of the stratosphere results in a strongly layered structure
in which thin layers of aerosol can persist for a long time. The stratosphere is a region of intense
interactions among radiative, dynamical, and chemical processes, in which horizontal mixing of
gaseous components proceeds much more rapidly than vertical mixing. The stratosphere is warmer
than the upper troposphere, primarily because of a stratospheric ozone layer that absorbs solar
ultraviolet radiation.
Ans: A plan of action to prevent an emergency and steps to be taken when emergencies occur.
The pH of dust-free and unpolluted rainwater is 5.6 due to dissolution of
1. NO2
2. SO2
3. CO2
Ans: CO2 (Carbon dioxide dissolves into rainwater and forms weak carbonic acid)
Name the low heat thermal process that destroys the pathogen in biomedical waste by heating
that occurs inside the waste material.
1. Hydroclave
2. Microwave
3. Pyrolysis
4. Internal combustion
Ans: Microwave
The processes that utilize heat to disinfect are grouped into two categories:
1. Low-heat systems (operating between 93–177 °C) use steam, hot water, or electromagnetic radiation
to heat and decontaminate the waste. Autoclaves and microwaves are low-heat systems.
Autoclaving is a low heat thermal process, which uses steam for the disinfection of waste. Autoclaves
are of two types depending on the method they use for removal of air pockets. They are gravity flow
autoclave and vacuum autoclave.
Microwaving is a process which disinfects the waste by moist heat and steam generated by microwave
energy. Unlike other thermal treatment systems, which heat wastes externally, microwave heating
occurs inside the waste material.
Ans: N2 and O2
The permanent gases whose percentages do not change from day to day are nitrogen, oxygen, and
argon. Nitrogen accounts for 78% of the atmosphere, oxygen 21%, and argon 0.9%.
According to Darcy’s law, the relationship between flow velocity and hydraulic gradient is
1. Non-linear
2. Logarithmic
3. Linear
Ans: Linear
Darcy’s Law is an empirical relationship for liquid flow through a porous medium. A common
application is groundwater flow through an aquifer. It gives an idea about the relationship among the
flow rate of the groundwater, the cross-sectional area of the aquifer perpendicular to the flow, the
hydraulic gradient, and the hydraulic conductivity of the aquifer.
Darcy’s equation
Q = KA (hL/L)
hL = head loss over a horizontal length, L, in the direction of flow (hL in ft and L in ft)
Darcy's Law is valid only for laminar flow, which occurs for Reynolds numbers less than 1. The
Reynolds number is the ratio between the inertial forces in a fluid and the viscous forces.
Most practical applications of groundwater flow have Re < 1, and thus can be modeled with Darcy's
Law. The ratio hL/L is often called the hydraulic gradient (i). With i = hL/L, Darcy's Law can be
written as Q = KAi. Darcy's law is a linear flow law: the Darcy velocity (v) varies linearly with
the hydraulic gradient i, where v = −Ki.
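The Q = KAi form above can be sketched numerically; the aquifer values in the example below are illustrative assumptions, not figures from the text.

```python
def darcy_flow(K, A, hL, L):
    """Volumetric flow Q (m^3/s) through a porous medium via Darcy's law.

    K  : hydraulic conductivity (m/s)
    A  : cross-sectional area perpendicular to the flow (m^2)
    hL : head loss (m) over a horizontal length L (m)
    """
    i = hL / L          # hydraulic gradient (dimensionless)
    return K * A * i

# Hypothetical sandy aquifer: K = 1e-4 m/s, A = 200 m^2,
# 2 m of head loss over 1 km of flow path.
Q = darcy_flow(K=1e-4, A=200.0, hL=2.0, L=1000.0)
# Q = 1e-4 * 200 * 0.002 = 4e-5 m^3/s
```

The linearity the text mentions is visible directly: doubling hL (and hence i) doubles Q.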
According to the Water Act (1974), any liquid, gaseous, or solid substance discharged from any
premises used for carrying on an industrial operation or process or treatment is called
1. Contaminants
2. Sewage
3. Trade effluent
4. Hazardous effluent
Ans: Trade effluent
The depth in a water column at which light intensity is just sufficient for the rate of photosynthesis
to equal the rate of respiration is called?
1. Aphotic depth
2. Decomposition zone
3. Compensation depth
Ans: Compensation depth
Key attributes of a restored ecosystem include the absence of threats, physical conditions, species
composition, community structure, ecosystem function, and external exchanges. These attributes in
combination can then be used to derive a five‐star rating that enables practitioners, regulators, and
industry to track restoration progress over time and between sites.
Sorghum is a
1. C4 Plant
2. CAM Plant
3. C3 Plant
Ans: C4 plant
C3 plants are the most common and the most efficient at photosynthesis in cool, wet climates. E.g.,
Sunflower, Spinach, Beans, Rice, Cotton, etc.
C4 plants are most efficient at photosynthesis in hot, sunny climates. E.g., Sugarcane, Sorghum,
Maize, etc.
CAM plants are adapted to avoid water loss during photosynthesis, so they do best in deserts. E.g.,
cacti, orchids, etc.
A SWOT analysis is used to evaluate the internal strengths and weaknesses, and the external
opportunities and threats in an organization's environment.
A dwarf forest that occurs in regions with winter rain-summer drought and is adapted to fire is
1. Chaparral forest
2. Temperate forest
3. Deciduous forest
Ans: Chaparral forest
Which of the following biomass fuels has the highest energy content (MJ/Kg)?
1. Coconut and groundnut shells
2. Unsorted domestic refuse
3. Wood (air dry)
Ans: Coconut and groundnut shells (shell residues have a higher calorific value than air-dry wood
or unsorted domestic refuse)
Kleptoparasitism refers to
1. Facultative parasitism
2. Obligate parasitism
3. Ectoparasitism
4. Stealing food from another predator’s catch
Ans: Stealing food from another predator’s catch
Obligate parasites are completely dependent on the host in order to complete their life cycle. Over
time, they have evolved so that they can no longer exist without the existence of the host. Head lice
are obligate parasites
Ectoparasites are parasites that live on the outside of the host’s body, such as lice and ticks.
Endoparasites, like nematodes and hookworms, live inside the host.
Facultative parasites do not rely on the host in order to complete their life cycle; they can survive
without the host, and only sometimes perform parasitic activities. Certain plants, fungi, animals, and
microbes can be facultative parasites. A specific example is the nematode species Strongyloides
stercoralis.
Kleptoparasitism, literally meaning parasitism by theft, is a form of resource acquisition where one
animal takes resources from another. Although kleptoparasitism of food (i.e., kleptoparasitic
foraging) is the best-known example, the stolen resources may be food or another resource such as
nesting materials.
Ans: Soft clay- Very high, Clay- High, Silt- Medium, Sandy Clay- Low
Ans: Fuel load, vegetation type, and distance from the habitations
Greenhouse gas CO2 has a very strong absorption band in the wavelength region
1. 4-10 µm
2. >13.0 µm
3. 2- 3.5 µm
4. 1-4 µm
Ans: >13.0 µm
Carbon dioxide absorbs infrared radiation (IR) in three narrow bands centred near 2.7, 4.3, and
15 micrometres (µm). Outside these bands, most outgoing thermal radiation escapes absorption by CO2;
about 8% of the available black-body radiation is picked up by these "fingerprint" bands of CO2.
The spectral response pattern of leaves in healthy vegetation in the SWIR region of EMR is primarily
a function of
1. Chlorophyll content
2. Leaf parenchyma
3. Water content of the leaf
Ans: Water content of the leaf
The leaf pigments, cell structure, and water content all impact the spectral reflectance of
vegetation. In the visible bands, the reflectance is relatively low as the majority of light is absorbed
by the leaf pigments.
For healthy vegetation, the reflectance is much higher in the near-infrared (NIR) region than in the
visible region due to the cellular structure of the leaves, specifically the spongy mesophyll. The
reflectance in the shortwave infrared wavelengths (SWIR) is related to the water content of the
vegetation and its structure. Water has strong absorption bands around 1.45, 1.95, and 2.50 µm.
Assertion (A): The occurrence of acid rain over the Indian landmass is extremely rare
Reason (R): The alkaline dust and ammonia (NH3) produced from agricultural areas
neutralise the acid formation in the atmosphere
Ans: Both (A) and (R) are true. R is the correct explanation of A
As per the amended rules for the implementation of Forests Rights Act of 2006, the transit
passes for the transport of minor forest produce are issued by?
1. Gram Sabha
2. Forest Department
3. Biodiversity Monitoring Committee
Ans: Gram Sabha
According to FRA, the committee constituted under the gram sabha will prepare conservation and
management plan for community forest resources, and the gram sabha will approve all decisions of
the committee pertaining to the issue of transit permits, use of income from the sale of forest produce
or modification of management plans.
Which forest type has the highest percentage cover in the country?
1. Tropical deciduous
2. Tropical evergreen
3. Tropical thorn
Ans: Tropical deciduous
Which one among the following do all the terrestrial biomes have in common?
1. Annual average rainfall in excess of 25 cm
2. Clear boundaries between adjacent biomes
3. Biodiversity pattern that is directly proportional to latitude
4. Biodiversity pattern that is inversely proportional to latitude
Ans: Biodiversity pattern that is inversely proportional to latitude (species richness generally
decreases from the equator towards the poles)
Ans: Vermiculite (150-160), Smectite (100-120), Illite (20-40) and Kaolinite (5-25)
A spoil tip is a pile built of accumulated waste material removed during mining. They trap solar
heat, making it difficult (although not impossible) for vegetation to take root; this encourages erosion
and creates dangerously unstable slopes. Existing techniques for regreening spoil tips include the use
of geotextiles (polypropylene or polyester) to control erosion as the site is resoiled and simple
vegetation such as grass is seeded on the slope.
What percentage of the total solar radiation at the top of the atmosphere is in the visible range?
1. 7%
2. 41%
3. 36%
Ans: 41%
Assertion (A): In the Himalayas, north-facing slopes have luxuriant growth of forests, whereas
shrubby drought-resistant vegetation inhabits the south-facing slopes
Reason (R): South-facing slopes in the northern hemisphere receive more sunlight than nearby
north-facing slopes and, therefore, are warmer and drier
Ans: Both (A) and (R) are correct. (R) is the correct explanation of (A)
Assertion (A): Biogas production is reduced in winters
Reason (R): Methanogenesis occurs at mesophilic temperatures
Ans: Both (A) and (R) are correct. (R) is the correct explanation of (A)
Which method is most appropriate for separating tin cans from aluminium cans in municipal
waste?
1. Shear shredding
2. Manual sorting
3. Magnetic field separation
Ans: Magnetic field separation (tin cans are steel-based and ferromagnetic; aluminium cans are not)
A lake has 2.5 mgL-1 of dissolved organic carbon. The dissolved organic matter concentration in
the lake is approximately
1. 6.25 mgL-1
2. 4.25 mgL-1
3. 3.25 mgL-1
Ans: 4.25 mgL-1 (Organic matter = Total Organic Carbon (TOC) x 1.72)
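The conversion used in this answer can be expressed as a one-line helper; the 1.72 multiplier is the conventional carbon-to-organic-matter factor quoted above.

```python
def organic_matter(toc_mg_per_l, factor=1.72):
    """Estimate organic matter (mg/L) from total organic carbon (mg/L)
    using the conventional OM ~ 1.72 x TOC conversion factor."""
    return toc_mg_per_l * factor

om = organic_matter(2.5)
# 2.5 * 1.72 = 4.3 mg/L, for which ~4.25 mg/L is the closest option
```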
Remote sensing data from which of the following satellites is most suitable for urban ecological
studies?
1. RISAT-2
2. CARTOSAT-2
Ans: CARTOSAT-2
The Cartosat-2 series of satellites is a set of agile satellites with less inertia providing high-resolution
images at better than 0.8 m resolution. It helps to meet the user demands for cartographic
applications at cadastral level, urban planning, and rural development, utility mapping and
infrastructure development, etc.
A stream of waste water having BOD of 20.0 mg/L discharges water at the rate of 1.0 m3/s into
river with flow rate 8.0m3/s and BOD of 5.0 mg/L. Assuming complete and instantaneous
mixing, what is the resultant BOD just downstream from the point of discharge?
1. ~12.5 mg/L
2. ~6.7 mg/L
3. ~15.0 mg/L
4. ~ 7.7 mg/L
Ans: ~ 6.7 mg/L
Resultant BOD = (Q1 × BOD1 + Q2 × BOD2) / (Q1 + Q2)
= [(1.0 × 20.0) + (8.0 × 5.0)] / (1.0 + 8.0) = 60/9 = 6.66 ≈ 6.7 mg/L
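The flow-weighted mixing calculation generalizes to any two streams with the same concentration units; a minimal sketch:

```python
def mixed_concentration(q1, c1, q2, c2):
    """Concentration after complete, instantaneous mixing of two streams.

    q1, q2 : flow rates (same units, e.g. m^3/s)
    c1, c2 : concentrations (same units, e.g. mg/L)
    """
    return (q1 * c1 + q2 * c2) / (q1 + q2)

# Waste stream: 1.0 m^3/s at 20 mg/L BOD; river: 8.0 m^3/s at 5 mg/L BOD.
bod = mixed_concentration(q1=1.0, c1=20.0, q2=8.0, c2=5.0)
# (20 + 40) / 9 = 6.67 mg/L
```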
In the model of r and K selection, the rates of increase of two species are portrayed as functions of
1. Population density
2. Resource density
3. Habitat
4. Size
Ans: Population density and Resource density
Ans: PAHs- HPLC, Heavy metals- AAS, Sulphate ions- UV- Vis spectrophotometer, Nucleotides-
Electrophoresis.
According to the National Ambient Air Quality Standards (NAAQS), the annual average concentrations
(µg/m3) for SO2 and NO2 in ecologically sensitive areas are
1. 20 and 60 respectively
2. 80 and 47 respectively
3. 20 and 30 respectively
Ans: 20 and 30 respectively
Agencies responsible for air quality standard creation and monitoring include the Central Pollution
Control Board (CPCB) and several State Pollution Control Boards (SPCBs). All of these entities fall
under the control of the Ministry of Environment and Forest (MoEF).
Assertion (A): Refuse derived fuel from municipal solid waste has higher energy content than
raw municipal solid waste
Reason (R): Combustible organic material gets concentrated in final pellets produced from
municipal solid waste
Ans: Both (A) and (R) are true and (R) is the correct explanation of (A)
The maximum concentration of hydroxyl radical in earth’s atmosphere is observed at
1. The equator, in middle of troposphere
2.The equatorial surface
3. The poles, in middle of troposphere
Atmospheric Scale Height (H) is the height over which pressure decreases by a factor of
1. e
2. 10
3. π
Ans: e
Scale height is the vertical distance over which the density and pressure fall by a factor of 1/e. These
values fall by an additional factor of 1/e for each additional scale height H.
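The scale-height relation can be illustrated numerically. In the sketch below, H = 8 km and sea-level pressure are assumed typical values for Earth's lower atmosphere, not figures from the text.

```python
import math

def pressure_at_height(z, p0=101325.0, H=8000.0):
    """Barometric pressure (Pa) from the scale-height relation
    p(z) = p0 * exp(-z / H), with z and H in metres."""
    return p0 * math.exp(-z / H)

# Over one scale height, pressure falls by a factor of e:
ratio = pressure_at_height(8000.0) / pressure_at_height(0.0)
# ratio = 1/e, about 0.368
```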
Which among the following cannot be considered as VOCs?
1. Polycyclic Aromatic Hydrocarbons
2. Halocarbons
3. Oxygenates
4. Non-methane Hydrocarbons
Ans: Polycyclic Aromatic Hydrocarbons (PAHs are semi-volatile rather than volatile organic compounds)
In ecological sampling, the Importance Value Index refers to the sum of which of the following?
1. Relative density
2. Relative frequency
3. Relative Dominance
Ans: All of the Above
Which among the following radioactive species have a half-life of more than 10 years?
1. Kr-91
2. Sr-90
3. Cs-137
Ans: Sr-90 (half-life ≈ 29 years) and Cs-137 (half-life ≈ 30 years); Kr-91 decays within seconds.
Consider the following gases: CO, CH4, HCFCs, CO2, SO3. The gases removed from the
atmosphere by the process of oxidation are
Ans: CO, CH4, and HCFCs
Solar radiation on a rectangular module (1.5 m × 2 m) of photovoltaic cells is 550 W/m2. If the
efficiency of the cell is 12%, what is the power output of the module?
Ans: 198 W (Power = 550 W/m2 × 3 m2 × 0.12 = 198 W)
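The module output follows directly from irradiance × area × efficiency; a minimal sketch of that calculation:

```python
def pv_power(irradiance_w_m2, area_m2, efficiency):
    """Electrical output (W) of a PV module: P = G * A * eta."""
    return irradiance_w_m2 * area_m2 * efficiency

# 1.5 m x 2 m module, 550 W/m^2 irradiance, 12% cell efficiency.
p = pv_power(550.0, 1.5 * 2.0, 0.12)
# 550 * 3 * 0.12 = 198 W
```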
Sound emitted from a line source attenuates at which of the following rates as a result of
geometrical spreading?
1. ≈ 3 dB per doubling the distance from the source
2. ≈ 6dB per doubling the distance from the source
3. ≈ 5 dB per doubling the distance from the source
Ans: ≈ 3 dB per doubling the distance from the source (a line source spreads cylindrically, so the
level falls by about 3 dB per doubling of distance, versus about 6 dB for a point source)
Identify the correct sequence of the components of Producer Gas from biogasification of coal
in decreasing order of their concentrations
1. CH4> H2>CO>N2
2. H2>CO>CH4>N2
3. N2>CO>H2>CH4
Ans: N2>CO>H2>CH4
Producer gas is fuel gas that is manufactured from material such as coal, as opposed to natural gas.
The average composition of ordinary producer gas: CO2 (5.8%), O2(1.3%) CO (19.8%), H2(15.1%),
CH4(1.3%), N2(56.7%).
Which of the following particulate control device has the poorest removal efficiency for particles
in the sub-micron range?
1. Electrostatic Precipitator
2. Fabric filter
3. Cyclone
4. Venturi Scrubber
Ans: Cyclone
215
An electrostatic precipitator is a type of filter (dry scrubber) that uses static electricity to remove soot
and ash from exhaust fumes before they exit the smokestacks. Electrostatic precipitators are
extremely effective and are capable of removing more than 99% of particulate matter.
Fabric filters (also called baghouses) are devices that remove particulate from a gas stream by passing
the dirty air through a layer of filter cloth. Fabric filters are typically used when very high
efficiencies are required, the gas is always above its dewpoint, volumes are reasonably low, and
temperatures are relatively low.
Cyclone separators or simply cyclones are separation devices (dry scrubbers) that use the principle
of inertia to remove particulate matter from flue gases. Cyclone separators are one of many air
pollution control devices known as pre-cleaners since they generally remove larger pieces of
particulate matter. Most cyclones are built to control and remove particulate matter that is larger
than 10 micrometers in diameter. They are generally able to remove somewhere between 50-99%
of all particulate matter in the flue gas.
A venturi scrubber is designed to effectively use the energy from the inlet gas stream to atomize the
liquid being used to scrub the gas stream. This type of technology is a part of the group of air
pollution controls collectively referred to as wet scrubbers. Venturi scrubbers PM collection
efficiencies range from 70 to greater than 99 percent, depending upon the application. Collection
efficiencies are generally higher for PM with aerodynamic diameters of approximately 0.5 to 5 μm.
In the case of which of the following fossil fuels, the difference between the Gross Calorific Value
and Net Calorific Value is maximum?
1. Natural Gas
2. Petrol
3. Diesel
4. Coal
Ans: Coal
Assertion (A): Between the organophosphorus pesticides malathion and parathion, the former is
less toxic than the latter
Reason (R): Malathion is hydrolyzed by enzymes possessed by mammals to produce relatively
non-toxic products.
Ans: Both (A) and (R) are true and (R) is the correct explanation of (A)
How many years will it take for the population of a country to double, if it grows at the rate of
4% per year?
1. 17.3 years
2. 13.8 years
3. 15 years
Ans: 17.3 years (doubling time = ln 2 / 0.04 ≈ 17.3 years)
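For continuous growth at rate r, the exact doubling time is ln(2)/r; the familiar "rule of 70" is a quick approximation of the same formula. A minimal check for r = 4% per year:

```python
import math

def doubling_time(rate):
    """Exact doubling time ln(2)/r for continuous growth at rate r
    (e.g. r = 0.04 for 4% per year)."""
    return math.log(2) / rate

t = doubling_time(0.04)      # about 17.3 years
rule_of_70 = 70 / 4          # quick approximation: 17.5 years
```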
What is the total power per unit area available in a wind stream blowing at a speed of 5 m/s?
(Given that the density of air = 1.226 kg/m³)
Ans: 76.6 W/m²
Power per unit area = ½ × ρ × V³ = ½ × 1.226 × 5³ = 76.6 W/m²
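The ½ρV³ relation can be sketched directly; note the cubic dependence on wind speed, which is why small speed changes matter so much for wind resources.

```python
def wind_power_density(rho, v):
    """Kinetic power per unit swept area of a wind stream (W/m^2):
    P/A = 0.5 * rho * v^3, with rho in kg/m^3 and v in m/s."""
    return 0.5 * rho * v**3

p = wind_power_density(rho=1.226, v=5.0)
# 0.5 * 1.226 * 125 = 76.625, i.e. about 76.6 W/m^2
```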
Artesian Well is a
1. Natural well in fault planes
2. Water underneath the surface having sufficient hydraulic pressure
3. Water underneath the surface having very low hydraulic pressure
Ans: Water underneath the surface having sufficient hydraulic pressure
Artesian well- a well from which water flows under natural pressure without pumping. It is dug or
drilled wherever a gently dipping, permeable rock layer (such as sandstone) receives water along its
outcrop at a level higher than the level of the surface of the ground at the well site.
The Beaufort scale is an empirical measure that relates wind speed to observed conditions at sea or
on land. Its full name is the Beaufort wind force scale.
Which of the following surfaces exhibits maximum variation in albedo with respect to the angle of
incidence of solar radiation?
1. Water
2. Bare soil
3. Vegetation
Ans: Water
Which of the following is/are the characteristic(s) of the direct band gap semiconductor GaAs?
1. Less sharp absorption band
2. Large values of extinction coefficient
Ans: 1&2
Which is the country in India whose project was approved under the Clean Development
Mechanism (CDM) of Kyoto Protocol?
The ethical view which argues that any resource of nature is meant for human use/consumption
is termed as
1. Stewardship
2. Utilitarian
3. Ecocentric
4. Biocentric
Ans: Utilitarian
Ecocentrism is a term used in ecological political philosophy to denote a nature-centered, as opposed
to the human-centered, system of values. The justification for ecocentrism usually consists in an
ontological belief and subsequent ethical claim. Environmental stewardship refers to responsible use
and protection of the natural environment through conservation and sustainable practices. The term
biocentrism encompasses all environmental ethics that "extend the status of moral object from
human beings to all living things in nature".
Which of the following measures of skewness is based on the distance of upper and lower
quartiles from the median value in the data set?
1. Karl Pearson’s Coefficient of Skewness
2. Kelly’s Measure of Skewness
3. Bowley’s Coefficient of Skewness
Ans: Bowley’s Coefficient of Skewness
Bowley's coefficient of skewness is quartile-based: Sk = (Q3 + Q1 − 2 × Median) / (Q3 − Q1). Its sign
indicates whether a distribution is positively or negatively skewed.
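Bowley's quartile-based measure can be sketched as follows; the sample quartile values are illustrative, not from the text.

```python
def bowley_skewness(q1, median, q3):
    """Bowley's quartile coefficient of skewness:
    Sk = (Q3 + Q1 - 2*median) / (Q3 - Q1), bounded between -1 and +1."""
    return (q3 + q1 - 2.0 * median) / (q3 - q1)

# Symmetric data give 0; a median closer to Q1 indicates positive skew:
sk = bowley_skewness(q1=10.0, median=12.0, q3=20.0)
# (20 + 10 - 24) / 10 = 0.6
```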
About 90% of the Earth’s total atmospheric mass is confined within the first
1. 5 Km
2. 10 Km
3. 20 Km
Ans: 20 Km