PRINCIPLES OF PHYSICS-Revised Edition simplified

The New Lower Secondary Curriculum in Uganda emphasizes a shift from knowledge-based learning outcomes to a focus on skills and deeper understanding in Physics education. This competency-based approach encourages students to apply physics concepts to real-life situations, fostering essential skills such as analytical reasoning and problem-solving. The curriculum aims to equip students not only with knowledge but also with the ability to address complex societal challenges, contributing to Uganda's socio-economic development.


PRINCIPLES AND PERSPECTIVES OF PHYSICS

The New Lower Secondary Curriculum sets new expectations for learning, with a shift from Learning Outcomes that focus mainly on knowledge to those that focus on skills and deeper understanding. These new Learning Outcomes require a different approach to assessment. The “Learning Outcomes” in this manuscript are therefore set out in terms of Knowledge, Understanding, Skills, Values, and Attitudes, referred to by the letters k, u, s, v/a.
Knowledge (K): The retention of information.
Understanding (U): Putting knowledge into a framework of meaning – the development of a ‘concept’.
Skills (S): The ability to perform a physical or mental act or operation.
Values (V): The inherent or acquired behaviours or actions that form the character of an individual.
Attitudes (A): A set of emotions, beliefs or behaviours toward a particular object, person, thing or event.

The new lower secondary curriculum being implemented in Uganda marks a significant shift towards competency-based
learning, emphasizing the application of knowledge, skills, and problem-solving abilities. The focus is no longer just on
memorizing formulas and concepts but on nurturing students' ability to apply Physics in real-life contexts. This curriculum
fosters the development of essential competencies such as analytical reasoning, creativity, and collaboration. Physics, as a core
scientific discipline, offers learners the tools to explore and understand the fundamental principles governing the natural
world, from forces and motion to energy transformations and the properties of matter. By engaging in hands-on experiments,
collaborative projects, and practical problem-solving tasks, students will be encouraged to investigate real-world challenges,
thereby developing the skills necessary for lifelong learning and future careers in science and technology.
Ultimately, this learner-centered approach aims to equip students with not only knowledge of physics but also the capacity to
tackle complex societal problems with innovative solutions, ensuring they contribute meaningfully to Uganda's socio-economic
development.

Yasson TWINOMUJUNI. +(256)-772-938844 // 0752-938844
Contents
SENIOR ONE ...................................................................................................................................................................14
1.1 INTRODUCTION TO PHYSICS................................................................................................................14
INTRODUCTION ..........................................................................................................................................................14
Branches of Physics............................................................................................................................................................14
Importance of Studying Physics ....................................................................................................................................15
Basic Laboratory Rules .....................................................................................................................................................15
First Aid Measures ..............................................................................................................................................................16
1.2 Measurements in Physics ..................................................................................................................................16
Introduction ...........................................................................................................................................................................16
INTERNATIONAL SYSTEM OF UNITS (S.I UNIT) .......................................................................................17
MEASUREMENT OF AREA OF AN OBJECT ....................................................................................................18
VOLUME: .............................................................................................................................................................................19
Volume of irregular Shaped Objects............................................................................................................................19
MASS ......................................................................................................................................................................................20
TIME ........................................................................................................................................................................................21
SIGNIFICANT FIGURES...............................................................................................................................................23
DENSITY...............................................................................................................................................................................24
EXPERIMENT TO DETERMINE DENSITY OF REGULAR OBJECTS .................................................25
DENSITY OF MIXTURES ............................................................................................................................................27
RELATIVE DENSITY .....................................................................................................................................................28
Ocean Currents and Water Density ..............................................................................................................................29
1.3 STATES OF MATTER .............................................................................................................................................30
STRUCTURE OF MATTER .........................................................................................................................................30
MATTER................................................................................................................................................................................30
The Particle Theory of Matter ........................................................................................................................................30
Different states of matter ..................................................................................................................................................31
Plasma......................................................................................................................................................................................31
The nature of plasma and why it is described as the fourth state of matter ..................................................32
Kinetic theory .......................................................................................................................................................................32
Assumptions of the Kinetic Theory..............................................................................................................................32
Particle theory to explain states of matter ..................................................................................................................32

Changes of State of Matter: Water and Ice................................................................................................................32
Why heat is taken in and given out during phase changes ..................................................................................34
Melting and Boiling (Heat Absorption): ....................................................................................................................34
Importance of Changes of State in Everyday Life..................................................................................................34
Making Ice Cream...............................................................................................................................................................35
Brownian motion .................................................................................................................................................................35
Brownian motion experiment .........................................................................................................................................35
Causes of Brownian motion ............................................................................................................................................36
Effects of Brownian motion ............................................................................................................................................36
Diffusion .................................................................................................................................................................................36
Importance of Diffusion ...................................................................................................................................................36
Demonstration of Diffusion in Gases ..........................................................................................................................37
Demonstration of diffusion in liquids..........................................................................................................................37
Factors that affect the rate of diffusion in fluids .....................................................................................................37
Comparison of Diffusion in Liquids and Gases ......................................................................................................38
Investigate the rates of diffusion of ammonia gas and hydrochloric acid gas ............................................38
Speed of Diffusion ..............................................................................................................................................................38
Linking diffusion to biological processes: Transpiration and Osmosis..........................................................39
1.4 EFFECTS OF FORCES ............................................................................................................................................39
THE EFFECTS OF FORCES ........................................................................................................................................40
Distinguishing between Mass and Weight ................................................................................................................40
Why weight depends on the force of gravity ............................................................................................................41
Types of Friction: ................................................................................................................................................................41
Factors affecting Friction .................................................................................................................................................42
Types of contact forces .....................................................................................................................................................42
Categorizing Forces ............................................................................................................................................................43
Demonstrating the Effects of Forces on Objects .....................................................................................................43
Molecular Behavior of Adhesion and Cohesion......................................................................................................43
Molecular Mechanisms behind Cohesion and Adhesion .....................................................................................44
Behavior of liquids on the surface ................................................................................................................................45
Ways of reducing surface tension ..............................................................................................................45
Experiments to demonstrate surface tension.............................................................................................................46

CAPILLARITY/CAPILLARY ACTION ......................................................................................................46
Application of capillarity..................................................................................................................................................46
1.5 TEMPERATURE MEASUREMENTS ..............................................................................................................47
Introduction ...........................................................................................................................................................................47
How Temperature Scales Are Established.................................................................................................................48
Fixed Points ...........................................................................................................................................................................48
Division of the Scale / Thermometer scales ..............................................................................................................48
Use of Thermometric Properties....................................................................................................................................48
Thermometric properties ..................................................................................................................................................49
LOWER FIXED POINT ..................................................................................................................................................49
UPPER FIXED POINT.....................................................................................................................................................49
Thermometric liquids and their properties ................................................................................................................50
Why water is not used as a thermometric liquid ......................................................................................50
CLINICAL THERMOMETER .....................................................................................................................................51
Effect of heat on matter.....................................................................................................................................................51
Properties/qualities of a thermometer ..........................................................................................................................51
Atmospheric Temperature ...............................................................................................................................................52
1.6 HEAT TRANSFER ....................................................................................................................................................53
Introduction ...........................................................................................................................................................................53
MODES OF HEAT TRANSFER..................................................................................................................................53
Factors affecting conduction in metals .......................................................................................................................53
Experiment to compare conduction in metals ..........................................................................................................53
Application of heat conduction ......................................................................................................................................54
Experiment to show that water is a poor conductor of heat ................................................................................54
Convection .............................................................................................................................................................................54
Experiment to demonstrate convection in liquids ...................................................................................................55
Explanation of convection currents ..............................................................................................................................55
Application of convection ................................................................................................................................................55
Convection in gases ............................................................................................................................................................55
Application of convection in gases...............................................................................................................................56
SEA AND LAND BREEZES.........................................................................................................................................56
RADIATION ........................................................................................................................................................................57

Good and Bad absorbers of heat radiation .................................................................................................................57
Experiment about Heat Radiation ................................................................................................................................58
Comparison of radiation of different surfaces ..........................................................................................................58
Application of radiation ....................................................................................................................................................58
Black and dull surfaces .....................................................................................................................................................59
Polished and white surfaces ............................................................................................................................................59
The vacuum flask ................................................................................................................................................................60
Choice of clothes .................................................................................................................................................................60
GREENHOUSE EFFECT AND GLOBAL WARMING....................................................................................61
Role of heat transfer in the greenhouse effect ..........................................................................................................61
Global Warming ..................................................................................................................................................................62
How global warming relates to heat transfer ............................................................................................................62
Consequences of Global Warming and Heat Transfer ..........................................................................................62
Disruption of Climate Systems ......................................................................................................................................62
1.7 EXPANSION OF SOLIDS, LIQUIDS, AND GASES .................................................................................63
Introduction ...........................................................................................................................................................................63
Uses of a bimetallic strip (application of expansion of solids) ..........................................................................64
EXPANSION IN FLUIDS ..............................................................................................................................................64
Experiment to demonstrate expansion in liquids ....................................................................................................64
Application of expansion property of liquids ...........................................................................................................65
Water as matter ....................................................................................................................................................................65
Anomalous Expansion of Water ....................................................................................................................................65
Why does it happen? ..........................................................................................................................................................66
Application of anomalous behavior of water............................................................................................................66
Disadvantages of anomalous behavior of water ......................................................................................................66
EXPANSION OF GASES ...............................................................................................................................................66
Experiment to demonstrate expansion in gases .......................................................................................................67
Application of expansion of air......................................................................................................................................67
1.8 NATURE OF LIGHT; REFLECTION OF LIGHT AT PLANE SURFACES....................................68
LIGHT AND SOURCES OF LIGHT ...........................................................................................................68
Sources of light ....................................................................................................................................................................68
NATURAL SOURCES OF LIGHT.............................................................................................................................68

Natural Light Sources ........................................................................................................................................................68
Artificial Light Sources: ...................................................................................................................................................69
Categories of sources of light .........................................................................................................................................69
Light as Energy ....................................................................................................................................................................70
Interaction with Matter ......................................................................................................................................................70
RAYS AND BEAMS ........................................................................................................................................................70
Beams ......................................................................................................................................................................................70
Types of Beams....................................................................................................................................................................71
Applications of rays and beams .....................................................................................................................................71
RECTILINEAR PROPAGATION OF LIGHT ..........................................................................................71
EXPERIMENT TO SHOW THAT LIGHT TRAVELS IN A STRAIGHT LINE ....................................71
Formation of Shadows.......................................................................................................................................................72
Types of Shadows ...............................................................................................................................................................72
Factors Influencing Shadow Formation......................................................................................................................72
Distance between the object and the surface ............................................................................................................72
Examples of Shadow Formation....................................................................................................................................73
Importance of Shadows.....................................................................................................................................................73
ECLIPSES..............................................................................................................................................................................73
Solar Eclipse..........................................................................................................................................................................73
Types of Solar Eclipses .....................................................................................................................................................73
Hybrid Solar Eclipse ..........................................................................................................................................................74
Formation of a Solar Eclipse...........................................................................................................................................74
Lunar Eclipse ........................................................................................................................................................................74
Formation of a Lunar Eclipse .........................................................................................................................................74
Frequency and Observation .............................................................................................................................................75
THE PINHOLE CAMERA .............................................................................................................................................75
How a Pinhole Camera Works .......................................................................................................................................75
Characteristics of images produced by Pinhole Cameras ....................................................................................75
Applications of Pinhole Cameras ..................................................................................................................................76
Building a Pinhole Camera ..............................................................................................................................................76
Reflection of light by plane surfaces ...........................................................................................................................77
The Nature of Plane Surfaces .........................................................................................................................................77

Laws of Reflection ..............................................................................................................................................................77
Types of Reflection ............................................................................................................................................................78
Specular Reflection/Regular reflection .......................................................................................................................78
Diffuse Reflection/Irregular reflection ........................................................................................................................78
Applications of Reflection by Plane Surfaces ..........................................................................................................78
Application of diffuse reflection....................................................................................................................................78
Experiment to verify laws of reflection ......................................................................................................................79
NATURE OF IMAGE FORMED ................................................................................................................................79
BY A PLANE MIRROR ..................................................................................................................................................79
Images formed in two plane mirrors inclined at 90° ..............................................................................79
Image formed by an inclined mirror at an angle θ .................................................................................................80
Periscope.................................................................................................................................................................................80
Other uses of plane mirrors include: ............................................................................................................................80
SENIOR TWO .....................................................................................................................................................................81
2.1 WORK, ENERGY, AND POWER ......................................................................................................................81
SUN AS SOURCE OF ENERGY ................................................................................................................................81
EFFECTS OF SOLAR ENERGY ................................................................................................................................81
FORMS OF ENERGY ......................................................................................................................................................81
ENERGY CONCEPT........................................................................................................................................................82
RENEWABLE AND NONRENEWABLE ENERGY .........................................................................................82
The primary sources of energy .......................................................................................................................................82
The secondary sources of energy ..................................................................................................................................82
WORK DONE, FORCE, AND DISTANCE MOVED ........................................................................................84
Force.........................................................................................................................................................................................84
ENERGY ................................................................................................................................................................................85
FORMS OF ENERGY ......................................................................................................................................................86
Relationship between Work, Energy, and Power....................................................................................................87
MECHANICAL FORMS OF ENERGY ...................................................................................................................88
KINETIC ENERGY...........................................................................................................................................................88
POTENTIAL ENERGY .................................................................................................................................89
ENERGY INTERCHANGE ...........................................................................................................................................89
PRINCIPLES OF CONSERVATION OF ENERGY ...........................................................................................89

MACHINES ..........................................................................................................................................................................90
PRINCIPLE OF MACHINES ........................................................................................................................................90
TERMS USED IN MACHINES ...................................................................................................................................90
LEVERS .................................................................................................................................................................................92
PULLEY SYSTEMS .........................................................................................................................................................92
APPLICATIONS OF PULLEY SYSTEMS .............................................................................................................94
INCLINED PLANES ........................................................................................................................................................95
WHEEL AND AXLE ........................................................................................................................................................96
GEARS ....................................................................................................................................................................................97
SCREWS ................................................................................................................................................................................98
HYDRAULIC PRESS OR LIFT................................................................................................................................. 100
2.2 TURNING EFFECT OF FORCES, CENTRE OF GRAVITY, AND STABILITY ....................... 101
Moment of a Force (Torque): ....................................................................................................................................... 101
Factors affecting the Turning Effect of a Force ....................................................................................... 102
Application of the Principle of Moments ................................................................................................................. 103
Conditions for Equilibrium: .......................................................................................................................................... 103
Center of Gravity ............................................................................................................................................................... 104
Concepts of Center of Gravity...................................................................................................................................... 104
Concepts of Stability and equilibrium ....................................................................................................................... 106
Factors affecting Stability .............................................................................................................................................. 107
Applications of Stability ................................................................................................................................................. 108
2.3 PRESSURE IN SOLIDS AND FLUIDS .......................................................................................................... 110
Pressure in solids ............................................................................................................................................................... 110
PRESSURE IN LIQUIDS.............................................................................................................................................. 111
Effect of pressure on fluids............................................................................................................................................ 111
Experiment to show that pressure in liquids increases with increase in depth (h) ................................... 111
Experiment to show that pressure is independent of cross-sectional area and shape of container ....... 112
PASCAL’S PRINCIPLE OR LAW OF LIQUID PRESSURE ..................................................................... 112
Experiment to verify the principle of transmission of pressure in liquids ................................................... 112
Application of Pascal’s principle ................................................................................................................ 113
HYDRAULIC PRESS/MACHINE ............................................................................................................................ 113
Hydraulic lift ...................................................................................................................................................................... 114

ATMOSPHERIC PRESSURE ..................................................................................................................................... 114
MEASUREMENT OF ATMOSPHERIC PRESSURE ...................................................................................... 115
Simple barometer .............................................................................................................................................................. 115
Structure of the Atmosphere ......................................................................................................................................... 116
Applications of Atmospheric pressure ...................................................................................................................... 117
LIFT PUMP......................................................................................................................................................................... 118
Drinking straw .................................................................................................................................................................... 118
The siphon ............................................................................................................................................................................ 118
Applications of siphon principle.................................................................................................................................. 118
Manometer ........................................................................................................................................................................... 119
FLUID MOTION: BERNOULLI EFFECT ............................................................................................... 120
Floating and Sinking ........................................................................................................................................................ 122
SINKING AND FLOATING ....................................................................................................................................... 122
ARCHIMEDES’ PRINCIPLE ...................................................................................................................... 123
Experiment to verify Archimedes’ principle .......................................................................................................... 123
Application of Archimedes’ principle ....................................................................................................................... 123
FLOATING OBJECTS................................................................................................................................................... 124
Law of floatation ............................................................................................................................................................... 125
Application of law of floatation ................................................................................................................................... 125
Balloons and airships ....................................................................................................................................................... 126
Hydrometers ........................................................................................................................................................................ 126
Motion of a body through fluids .................................................................................................................................. 126
2.4 MECHANICAL PROPERTIES OF MATERIALS AND HOOKE’S LAW ..................................... 127
Bricks and blocks as building materials ................................................................................................................... 129
Glass as a building material........................................................................................................................................... 129
Concrete ................................................................................................................................................................................ 130
NOTCH AND NOTCH EFFECTS ............................................................................................................................ 132
STRUTS AND TIES........................................................................................................................................................ 132
HOOKE’S LAW OF ELASTICITY .......................................................................................................................... 134
Hooke’s Law Applications ............................................................................................................................................ 135
Hooke’s Law Disadvantages......................................................................................................................................... 135
2.5 REFLECTION OF LIGHT AT CURVED SURFACES ............................................................................ 137

Plane Mirror vs Spherical Mirror ................................................................................................................................ 138
Characteristics of Concave and Convex Mirrors ....................................................................................................... 138
Concave Mirror Definition ............................................................................................................................................ 138
Image Formation by Spherical Mirrors.......................................................................................................................... 139
Guidelines for rays falling on the concave and convex mirrors ...................................................................... 139
IMAGE FORMATION BY CONCAVE MIRROR ................................................................................................. 140
OBJECT AT INFINITY ................................................................................................................................................. 140
OBJECT BEYOND THE CENTRE OF CURVATURE................................................................................... 140
OBJECT BETWEEN THE CENTRE OF CURVATURE AND FOCUS .................................................. 140
OBJECT AT THE FOCUS ........................................................................................................................................... 140
CONCAVE MIRROR IMAGE FORMATION SUMMARY .............................................................................. 140
IMAGE FORMATION BY CONVEX MIRROR .................................................................................................... 141
OBJECT AT INFINITY ................................................................................................................................................. 141
OBJECT AT A FINITE DISTANCE ........................................................................................................................ 141
Everyday Examples of Static Electricity ............................................................................................................. 163
Shocks from Door Handles or Metal Objects ................................................................................................... 163
Hair Standing on End ................................................................................................................................................... 163
Static Cling in Clothing................................................................................................................................................ 163
Sparks from Touching Electronics......................................................................................................................... 164
The Attraction of Dust and Small Debris ........................................................................................................... 164
Use of Electrostatic Precipitators ............................................................................................................................ 164
The Functioning of Copy Machines and Printers ........................................................................................... 164
Spark Discharge When Fueling Vehicles ............................................................................................................ 164
Lightning ............................................................................................................................................................................. 169
The Earth’s orbit about the Sun & the Moon’s orbit about the Earth ..................................................................... 178
Day and night ................................................................................................................................................................................ 178
Seasons in some parts of the earth ........................................................................................................................................ 178
Implication of season on activities on earth ...................................................................................................................... 179
Relative motion of the sun and moon and eclipse .......................................................................................................... 179
Characteristics of inner and outer planets .......................................................................................................................... 179
Explain why Earth is the only planet that supports life ................................................................................................ 179

The Asteroid Belt ........................................................................................................................................................................ 181
The asteroid belt as a failed planet ......................................................................................................................... 182
Origin and structure of the universe ....................................................................................................................... 182
The Earth's Axis ........................................................................................................................................................................... 182
Rising and Setting of the Sun ................................................................................................................................................. 183
The Earth's Orbit.......................................................................................................................................................................... 183
Moon & Earth ............................................................................................................................................................................... 183
Phases of the Moon..................................................................................................................................................................... 184
Phases of the Moon as it orbits around Earth ................................................................................................................... 184
Gravitational Field Strength .................................................................................................................................................... 184
Gravitational Attraction of the Sun ...................................................................................................................................... 185
Orbits & Conservation of Energy ......................................................................................................................................... 186
Conservation of Energy ............................................................................................................................................................ 186
The Sun ........................................................................................................................................................................................... 186
Our Sun ........................................................................................................................................................................................... 187
VAPOURS........................................................................................................................................................................... 264
Saturated and Unsaturated Vapours ........................................................................................................................... 264
Relation to Boiling and Evaporation.......................................................................................................................... 265
Introduction ................................................................................................................................................................................... 267
Average stars ................................................................................................................................................................................. 268
Massive stars ................................................................................................................................................................................. 268
The Milky Way ............................................................................................................................................................................ 273
Satellites .......................................................................................................................................................................................... 274
Sizes and altitude of satellites................................................................................................................................................. 275
Importance of artificial satellites ........................................................................................................................................... 278
Satellites in Navigation ............................................................................................................................................................. 278
Satellite Communication .......................................................................................................................................................... 281
One-way Satellite Communication ......................................................................................................................... 282
International Space Station ...................................................................................................................................................... 284
4.5 ATOMIC MODELS ................................................................................................................................................. 328
Introduction ......................................................................................................................................................................... 328

STRUCTURE OF AN ATOM ..................................................................................................................................... 328
Isotopes ................................................................................................................................................................................. 329
Applications of Isotopes ................................................................................................................................................. 330
A nuclide............................................................................................................................................................................... 330
Electron Emission ............................................................................................................................................................. 330
Electron Emission And Its Types ................................................................................................................................ 330
What is Electron Emission? .......................................................................................................................................... 331
Photo-Electric Emission / Effect ................................................................................................................................. 331
PHOTOELECTRIC EFFECT ...................................................................................................................................... 331
Applications of the Photoelectric Effect ................................................................................................................... 331
Thermionic emission ........................................................................................................................................................ 332
Applications of thermionic emission ......................................................................................................................... 332
Cathode Rays ...................................................................................................................................................................... 332
THE C.R.O (Cathode Ray Tube) ................................................................................................................................ 333
X – RAYS ............................................................................................................................................................................ 334
Characteristics of X-Rays .............................................................................................................................................. 334
Production ............................................................................................................................................................................ 334
Applications of X-rays .................................................................................................................................................... 334
Safety and Risks ................................................................................................................................................................ 335
Types of X – rays .............................................................................................................................................................. 335
Applications of x-rays ..................................................................................................................................................... 335
Health hazards of X – rays............................................................................................................................................. 335
Safety precautions of X – rays ..................................................................................................................................... 336
4.6 NUCLEAR PROCESSES ...................................................................................................................................... 336
Introduction ......................................................................................................................................................................... 336
RADIOACTIVITY .............................................................................................. 336
What is Radioactivity? .................................................................................................................................................... 337
Application of alpha particles ....................................................................................................................................... 338
Beta Decay (β).................................................................................................................................................................... 338
Beta-minus Decay ............................................................................................................................................................. 338
Applications of beta particles ....................................................................................................................................... 339
Gamma Decay (γ) ............................................................................................................................................................. 339

GAMMA RAYS ................................................................................................................................................................ 339
Applications of gamma rays.......................................................................................................................................... 339
Understanding the decay curve .................................................................................................................................... 340
Uses and dangers of radioactivity .......................................................................... 341
Dangers of radioactivity ................................................................................................................................................. 341
Nuclear fission and fusion ............................................................................................................................................. 341
Nuclear energy ................................................................................................................................................................... 342
Nuclear reactors ................................................................................................................................................................. 342
Nuclear Reactions ............................................................................................................................................................. 343
Procedures in a Nuclear Chain Reaction .................................................................................................................. 343
Nuclear Fusion Reaction ............................................................................................................................................... 343
Applications of Radioactivity ....................................................................................................................................... 344
Dangers Associated with Radioactivity .................................................................................................................... 344
Social, Political, and Environmental Dimensions of Nuclear Power Use ................................................... 344
Nuclear and radiation accidents and incidents ....................................................................................................... 344
4.5 DIGITAL ELECTRONICS ................................................................................................................................... 344
Introduction ......................................................................................................................................................................... 345
Potential Divider ................................................................................................................................................................ 345
Potential Divider .......................................................................................................................................................................... 345
Applications of Potential Dividers.............................................................................................................................. 346
Variable Resistor / Rheostat .......................................................................................................................................... 346
Potentiometer ...................................................................................................................................................................... 347
Application of Potential Dividers................................................................................................................................ 347
Potential Divider in Sensory Circuits ........................................................................................................................ 347
Binary systems and logic gates .................................................................................................................................... 348
Logic gates ........................................................................................................................................................................... 348
Digital electronics ............................................................................................................................................................. 348
Continuous signals ............................................................................................................................................................ 348
Boolean Laws ..................................................................................................................................................................... 350
Logic Gates realization of Boolean Expressions ................................................................................................... 350
Logic Gates .......................................................................................................................................................................... 350
Types of Logic Gates ....................................................................................................................................................... 351

Derived Logic Gates ........................................................................................................................................................ 351
Properties of XOR Gate .................................................................................................................................................. 354
Construction of digital logic gates on breadboards .............................................................................................. 361
BUFFER ............................................................................................................................................................................... 361
INVERTER ......................................................................................................................................................................... 361
AND Gate............................................................................................................................................................................. 362
NAND Gate ......................................................................................................................................................................... 362
OR Gate ................................................................................................................................................................................ 362
NOR Gate ............................................................................................................................................................................. 362
XOR Gate ............................................................................................................................................................................. 363
XNOR Gate ......................................................................................................................................................................... 363

SENIOR ONE
1.1 INTRODUCTION TO PHYSICS
Learning Outcomes
a) Understand the meaning of physics, its branches and why it is important to study Physics (u, v/a)
b) Understand why it is important to follow the laboratory rules and regulations (u, v/a)

INTRODUCTION
Physics is a fundamental science that explores the nature of matter, energy, and their interactions. Defined as the study of physical reality, it
encompasses various phenomena, including motion, force, and the four fundamental forces of nature: gravitation, electromagnetism, and the strong
and weak nuclear forces. The discipline seeks to understand how these elements behave through space and time, providing a framework for
interpreting the universe. The laws of physics are expressed with precision, allowing scientists to predict outcomes and understand complex systems.
This scientific study not only addresses the mechanics of everyday objects but also delves into advanced concepts like quantum entanglement and
entropy, which reveal the intricate connections and behaviors of particles at a fundamental level. In essence, physics serves as a bridge between the
observable world and the theoretical constructs that describe it, making it a cornerstone of scientific inquiry and technological advancement.

Branches of Physics
Physics is a vast field of science that explores the fundamental principles governing the universe. It is traditionally divided into several branches, each
focusing on different aspects of physical phenomena. The main branches include classical mechanics, which studies the motion of objects;
thermodynamics, which deals with heat and energy transfer; and electromagnetism, which examines electric and magnetic fields and their interactions.
In addition to these, optics focuses on the behavior of light, while acoustics studies sound waves and their properties. Modern physics introduces
concepts such as quantum mechanics, which explores the behavior of particles at the atomic and subatomic levels, and relativity, which addresses the
effects of gravity and the curvature of space-time.

Importance of Studying Physics
Studying physics is essential for understanding the fundamental principles that govern our universe. It fosters curiosity about how the world works,
from the smallest particles to the vast cosmos. This knowledge not only satisfies intellectual curiosity but also equips students with critical quantitative
reasoning and problem-solving skills applicable in various fields beyond physics. Moreover, physics is the foundation of many scientific disciplines,
making it crucial for advancements in technology, engineering, and medicine. By learning physics, students develop a systematic approach to
identifying and solving complex problems, which is invaluable in today’s job market. The versatility of a physics education opens doors to diverse
career opportunities, from research and academia to engineering and data science.

Basic Laboratory Rules

Activity: To observe a laboratory and identify safety rules therein.


Your teacher will take you to the laboratory or science room, where you will be conducting physics experiments.
• Identify some apparatus you may have seen before.
• Identify some situations that may pose danger to learners while in the laboratory.
• List at least five safety rules you think should be observed while in the laboratory.
• Compare and discuss your findings with other groups in your class.

A laboratory is a room in which science experiments or investigations are conducted. While in the laboratory, you are expected to observe certain measures to avoid accidents that may harm you or damage the apparatus. The list of laboratory safety rules and regulations is quite long and
you will be learning more rules and regulations throughout your study in Physics. Laboratory safety is paramount to prevent accidents and ensure a
secure working environment. Basic laboratory rules emphasize hygiene and appropriate behavior, such as prohibiting eating, drinking, or applying
makeup in labs with hazardous materials. These practices help minimize the risk of contamination and exposure to harmful substances. It is crucial
to handle equipment and materials with care. For instance, avoid touching hot objects to prevent burns, and never force glass tubing into stoppers,
as this can lead to breakage and injury. Wearing personal protective equipment (PPE) and maintaining a clean workspace are also essential components
of lab safety. Lastly, always perform a risk assessment before beginning any experiment. This proactive approach allows individuals to identify potential
hazards and implement necessary precautions, ensuring a safer laboratory experience for everyone involved. By adhering to these basic rules, lab
personnel can significantly reduce the likelihood of accidents and promote a culture of safety.

Importance of following Laboratory rules and regulations


Following laboratory rules and regulations is crucial for maintaining a safe and efficient working environment. These guidelines are designed to
prevent accidents, such as spills and chemical exposures, which can pose serious risks to personnel and the surrounding environment. By adhering to
these safety protocols, employees can significantly reduce the likelihood of incidents that could lead to injuries or environmental damage. Laboratory
safety is essential for protecting productivity and financial resources. Accidents can result in costly downtime, damage to equipment, and potential
legal liabilities. By prioritizing safety, laboratories can ensure smooth operations and safeguard their investments. In addition to protecting individuals
and resources, following laboratory safety rules fosters a culture of responsibility and awareness. When everyone in the lab commits to these practices,
it creates a collaborative environment where safety is a shared priority, ultimately leading to more successful research outcomes and a healthier
workplace.

First Aid Measures

Activity: To define first aid and interpret hazard symbols


Materials: First aid kit, chart-showing hazard symbols
• In groups of three, discuss what first aid is and why it is important to have adequate knowledge on first aid.
• Open the first aid kit provided to you and identify all the items in it and their uses.
• Now, discuss each hazard symbol shown on the chart provided and suggest why it is important to understand them.

In laboratory settings, safety is paramount, and understanding first aid measures is crucial for preventing and treating accidents. To minimize risks,
individuals should wear appropriate personal protective equipment (PPE), including long sleeves, long pants, closed-toed shoes, and safety goggles.
These barriers help protect against chemical spills, burns, and other hazards. In the event of an accident, immediate action is essential. For chemical
exposure to the skin, use the safety shower to flush the affected area thoroughly until help arrives. If chemicals enter the eyes, rinse them under
running water for at least 15 minutes. A well-stocked first aid kit should be readily available in the lab, containing essential supplies like antiseptics,
bandages, and sterile cotton. Being prepared can significantly reduce the severity of injuries and promote a safer laboratory environment.

1.2 Measurements in Physics


Learning Outcomes
a) Understand how to estimate and measure physical quantities: length, area, volume, mass, and time (u, s, gs)
b) Explain how to choose the right measuring instrument and units, and how to use the instruments to ensure accuracy (u, s)
c) Appreciate that the accuracy of measurements may be improved by making several measurements and taking an average value (gs, v/a)
d) Identify potential sources of error in measurement and devise strategies to minimize them (u, s, v/a)
e) Understand the scientific method and explain the procedures used in relation to the study of Physics (u)
f) Know that practical investigations involve a ‘fair test’, analysis, prediction and justification of results, and observations, and apply
learning in practice (k, s)
g) Record data in graphs and charts and look for trends (u,s)
h) Understand and be able to use scientific notation and significant figures (u,s)
i) Understand density and its application to floating and sinking(u)
j) Determine densities of substances and relate them to purity (u, s,gs)
k) Understand the global nature of ocean currents and how they are driven by changes in water density and temperature (u,s)

Introduction
Measurements in physics are essential for quantifying physical quantities, allowing scientists to compare unknown values against established standards.
This process involves determining the size or magnitude of an object, such as length, mass, or time, by using a known reference. For instance,
measuring the length of a table requires comparing it to a standard unit, like meters. In physics, there are seven fundamental physical quantities,
including length, mass, and electric current, each measured in specific units. These units provide a consistent framework for expressing measurements,
enabling clear communication and understanding among scientists. The precision of these measurements is crucial, as they form the basis for
hypotheses, theories, and laws that describe the behavior of matter and energy. Recent studies have revealed that the universe is expanding at a rate
that challenges current physical models, highlighting the importance of accurate measurements in advancing our understanding of cosmology.
Basic fundamental quantities of physics:
In physics, fundamental quantities are the essential building blocks that cannot be derived from other quantities. There are seven recognized
fundamental quantities: length, mass, time, electric current, thermodynamic temperature, amount of substance, and luminous intensity. Each of these
quantities has a specific unit of measurement, forming the basis for all other derived quantities in physics. Length is measured in meters, mass in
kilograms, and time in seconds. Electric current is quantified in amperes, while thermodynamic temperature is expressed in kelvins. The amount of
substance is measured in moles, and luminous intensity is quantified in candelas. These fundamental quantities provide a standardized framework for
scientific measurement and experimentation.

INTERNATIONAL SYSTEM OF UNITS (S.I UNIT)


The International System of Units (SI) is the modern metric system and serves as the global standard for measurement. Abbreviated from the French
"Système International d'Unités," SI is utilized worldwide to express physical quantities, ensuring consistency and clarity in scientific communication.
Established to facilitate international collaboration, the SI system includes seven base units: meter (length), kilogram (mass), second (time), ampere
(electric current), kelvin (temperature), mole (amount of substance), and candela (luminous intensity). These units can be combined to form derived
units, such as newtons for force and joules for energy, making SI versatile for various scientific fields. In 2018, significant revisions were made to the
SI, redefining the base units based on fundamental constants of nature. This evolution underscores the system's commitment to precision and
adaptability, reinforcing its role as the cornerstone of global measurement standards.

LENGTH
Length is a measure of the distance between two points; breadth, width, height, radius, depth and diameter are all lengths. The S.I unit of length is the metre (m). Other units: kilometres (km), centimetres (cm), millimetres (mm), inches, yards, miles, etc. 1 km = 1000 m; 1 m = 100 cm = 1000 mm; 1 cm = 10 mm.
Very small lengths are measured in micrometres (μm) and nanometres (nm). 1 m = 1,000,000 μm = 10⁶ μm; 1 m = 1,000,000,000 nm = 10⁹ nm
Example, Convert the following measurements.
(a) 20mm to metres.
1m =1000mm; 20mm = 0.02m
(b) 0.8m to centimeters
1m = 100cm; 0.8m = 0.8×100cm = 80cm
Length is measured using; Metre rule, Tape measure, Calipers, Micrometer screw gauge, and Thread.
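To make these unit conversions concrete, here is a small illustrative Python sketch (not part of the syllabus text; the function names are invented for this example):

```python
# Illustrative length-unit conversions (1 m = 100 cm = 1000 mm); names are examples only.
MM_PER_M = 1000
CM_PER_M = 100

def mm_to_m(length_mm):
    """Convert a length in millimetres to metres."""
    return length_mm / MM_PER_M

def m_to_cm(length_m):
    """Convert a length in metres to centimetres."""
    return length_m * CM_PER_M

print(mm_to_m(20))   # 0.02  -> 20 mm = 0.02 m
print(m_to_cm(0.8))  # 80.0  -> 0.8 m = 80 cm
```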

CALIPERS: These are used to measure distances on solid objects where an ordinary metre rule cannot be applied. They are made of a pair of hinged steel jaws, which are closed until they touch the object in the desired position. Calipers are of two types, namely:
i) Engineer’s calipers,
ii) Vernier calipers

MICROMETER SCREW GAUGE: This is used to measure small distances such as the diameter of a piece of wire, a bicycle spoke, a pin or a needle. The instrument reads to 2 decimal places in mm. It consists of a spindle, which can be screwed in and out, fitted with a scaled thimble.

Activity
1. James found that the perimeter of his farming plot was approximately 200 strides. His stride was 0.75m long. What was the perimeter of the plot?
2. Estimate the width of your desk, the classroom window and the classroom using a ruler or any other available measuring instrument, and record your answers in S.I units.
3. Suggest a method you can use to estimate the width of a page of your book.

MEASUREMENT OF AREA OF AN OBJECT


Area is the quantity that expresses the extent of a given surface on a plane. It is a derived quantity of length. The SI unit of area is the square metre,
written as m2. It can also be measured in multiples and sub-multiples of m2, for example, cm2 and km2.
Area is a measure of the extent of a two-dimensional surface or shape. It quantifies the amount of space enclosed within the boundaries of a flat object,
such as a rectangle, circle, or triangle.
Area is an important concept in various fields, including mathematics, physics, engineering, and everyday life.
Units of Area: Area is measured in square units, reflecting the number of unit squares that fit into the shape. Common units of area include:
Square meters (m²) in the metric system. Square centimeters (cm²) and square millimeters (mm²) for smaller areas. Square kilometers (km²) for large
areas.
Square feet (ft²) and square inches (in²) in the imperial system. Acres and hectares for measuring land areas.

Calculating Area
The method to calculate the area depends on the shape of the object.
Name the different objects you know around you and establish their areas.

Importance and Applications of area.


Architecture and Construction: Calculating the area is crucial for designing floor plans, determining the amount of materials needed (e.g., paint,
flooring), and estimating costs.
Agriculture: Farmers need to know the area of their fields to manage planting, fertilization, and irrigation effectively.
Real Estate: The area of land and buildings is a fundamental factor in property valuation and transactions.
Clothing and Textiles: Manufacturers calculate the area of fabric needed to produce garments and other textile products.
Science and Engineering: In physics, area calculations are used in various contexts, such as determining the pressure exerted on surfaces (pressure
= force/area) and in fluid dynamics.
Everyday Life: Understanding area helps in tasks like planning gardens, arranging furniture, and organizing spaces efficiently.

Visualization and Understanding:


Visualizing area often involves tiling a shape with unit squares. For instance, a rectangle with dimensions 3 meters by 4 meters can be covered by 12
unit squares of 1 square meter each, thus having an area of 12 square meters.

Practical Examples: A rectangular garden: If the garden is 5 meters long and 3 meters wide, the area is 5×3=15 square meters.
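The area calculations described above can be checked with a short illustrative Python sketch (the function names are invented for this example; π is taken from Python's math module):

```python
# Illustrative area calculations in consistent square units.
import math

def rectangle_area(length, width):
    """Area of a rectangle = length x width."""
    return length * width

def circle_area(radius):
    """Area of a circle = pi * r^2."""
    return math.pi * radius ** 2

print(rectangle_area(5, 3))      # 15 -> the 5 m by 3 m garden has area 15 m^2
print(rectangle_area(3, 4))      # 12 -> the tiling example: 12 unit squares
print(round(circle_area(7), 1))  # 153.9 -> a circle of radius 7 cm, in cm^2
```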

VOLUME:
Volume is a measure of the amount of space an object or substance occupies.
It is a three-dimensional quantity, representing the extent of an object in the three dimensions of space: length, width, and height. The concept of
volume applies to both solid objects and fluids (liquids and gases).

Units of Volume: Volume can be expressed in various units, depending on the system of measurement:
Metric System: Cubic meters (m³), Liters (L), where 1 liter = 1 cubic decimeter (dm³) = 1,000 cubic centimeters (cm³), Milliliters (mL), where 1
milliliter = 1 cubic centimeter (cm³).
NB: 1 litre = 1000 cm³, 1 cm³ = 1000 mm³, 1 m³ = 100 cm × 100 cm × 100 cm = 1,000,000 cm³
Importance and Applications:
Understanding volume is crucial in everyday activities like cooking (measuring ingredients), filling fuel tanks, and packing. Volume calculations are
essential in various scientific fields, including chemistry (stoichiometry and reactions involving gases), physics (density calculations), and engineering
(designing containers, buildings, and other structures). Dosage of liquid medications, blood transfusions, and intravenous fluids are often calculated
based on volume.
Industries dealing with liquids (e.g., oil, beverages) and gases need to measure and control volumes accurately for production, storage, and
distribution.

Volume of irregular Shaped Objects


The volume of irregularly shaped objects is obtained by using the displacement method.

Procedure
An overflow can is filled with water up to the level of its spout.
The irregularly shaped object is tied onto a string and carefully lowered into the water in the overflow can, displacing some of the water.
The water flowing out of the can through the spout is collected using a measuring cylinder.
The volume of the water collected is determined.

The volume of the liquid displaced is equal to the volume of the irregular object (stone).

METHOD II
Procedure
Water is poured into a measuring cylinder and the volume noted on its scale.
A thread is tied around the irregular object.
The solid (object) is lowered into the water in the cylinder and the 2nd reading noted.
Volume of the solid is obtained from,
V= Volume of 2nd reading - Volume 1st reading
Therefore, V = V2 – V1
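As an illustrative aid, the displacement rule V = V₂ − V₁ can be expressed as a tiny Python sketch (the function name and the sample readings are only examples):

```python
# Illustrative sketch of the displacement method: V = V2 - V1.
def volume_by_displacement(first_reading_cm3, second_reading_cm3):
    """Volume of the submerged solid, in cm^3, from the two cylinder readings."""
    return second_reading_cm3 - first_reading_cm3

# Sample readings only: 360 cm^3 before and 468 cm^3 after lowering the solid.
print(volume_by_displacement(360, 468))   # 108 cm^3
```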

Volume of liquids
To measure fixed volumes, the following vessels are used:
Volumetric flask, measuring cylinder, beaker, pipettes etc
To measure varying volumes use a burette.
Activity:
1. Use the rectangular box below to answer questions that follow.
Find the volume;

(i) in cm3
Volume = L x W x H
V= 8cm x 5cm x 10cm
V= 400cm3
(ii) In m3
Volume in m3
1 m³ = 1,000,000 cm³
400cm3= 0.0004m3

2. A cuboid has dimensions 2 cm by 10 cm. Find its width in metres if it occupies a volume of 80 cm³.
Solution
V = L × W × H
V = 2 cm × W × 10 cm = 80 cm³
W = 4 cm
Width in metres:
4 cm = 4/100 m = 0.04 m
3(a) Find the volume of water in a cylinder of radius 7 cm if the water stands to a height of 10 cm.
Volume = πr²h = (22/7) × 7 cm × 7 cm × 10 cm = 1540 cm³
(b) A stone of volume 12 cm³ is lowered into a measuring cylinder of radius 7 cm containing water. Find the height through which the water level rises.
Volume V = πr²h, so 12 = (22/7) × 7² × h, giving h ≈ 0.078 cm
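The worked volume examples above can be verified with a short illustrative Python sketch (function names are invented; π is taken as 22/7 to match the text):

```python
# Illustrative checks of the worked examples; pi is taken as 22/7 to match the text.
def cuboid_volume(length, width, height):
    """Volume of a rectangular box = L x W x H."""
    return length * width * height

def cylinder_volume(radius, height, pi=22 / 7):
    """Volume of a cylinder = pi * r^2 * h."""
    return pi * radius ** 2 * height

print(cuboid_volume(8, 5, 10))               # 400 cm^3
print(cuboid_volume(8, 5, 10) / 1_000_000)   # 0.0004 m^3 (1 m^3 = 1,000,000 cm^3)
print(round(cylinder_volume(7, 10), 1))      # 1540.0 cm^3
```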

MASS
Mass is the property of a body that measures its inertia; it is commonly taken as a measure of the amount of matter the body contains, and it is what causes the body to have weight in a gravitational field.
Mass is a fundamental property of physical objects that quantifies the amount of matter they contain. It is a measure of an object's resistance to
acceleration when a force is applied, and it is a key component in understanding and describing the dynamics of objects and systems in physics.

Characteristics of Mass:
Mass is a measure of inertia, which is the resistance of an object to changes in its state of motion. The more massive an object, the more force is
required to accelerate it.
Mass determines the strength of the gravitational force an object experiences in a gravitational field. The gravitational force between two objects is
proportional to the product of their masses and inversely proportional to the square of the distance between them.
Mass is conserved in isolated systems, meaning it cannot be created or destroyed. This principle is fundamental in classical mechanics and is extended
in the form of mass-energy equivalence in relativity. Inertia is not a force at all, but a property that all objects have because they have mass: the more mass something has, the more inertia it has, and the harder it is to change its state of motion.

Units of Mass:
Mass is measured in units such as:
Kilograms (kg): The base unit of mass in the International System of Units (SI).
Grams (g): Commonly used for smaller masses, where 1 kilogram = 1,000 grams.
Metric tons (t): Used for large masses, where 1 metric ton = 1,000 kilograms.
Pounds (lb): Used primarily in the United States, where 1 pound is approximately 0.453592 kilograms.

Types of Mass:
Inertial Mass: A measure of an object's resistance to acceleration when a force is applied. It is defined by Newton's second law of motion, F=ma,
where F is the force applied, m is the inertial mass, and a is the acceleration.
Gravitational Mass: A measure of the strength of an object's interaction with a gravitational field.

Importance and Applications of mass:


Understanding mass is essential for analyzing forces, motion, and energy in physical systems. It plays a critical role in Newton's laws of motion,
momentum, and kinetic energy.
Mass is a key parameter in studying celestial bodies, including their formation, dynamics, and interactions. It influences the structure and evolution
of stars, galaxies, and the universe.
Accurate measurement and control of mass are crucial in designing and manufacturing products, from small electronic components to large structures
like buildings and bridges.
Mass is commonly encountered in everyday activities, such as cooking (measuring ingredients), transportation (vehicle weight and fuel efficiency),
and health (body weight).
The mass of a body is measured using the following instruments: (1) beam balance, (2) lever-arm balance, (3) top-pan balance.

TIME
Time is the measure of the interval between events, or the duration between events.
Time is a fundamental concept that quantifies the progression of events from the past through the present to the future. It is a continuous, measurable
quantity used to sequence events, compare the durations of events or the intervals between them, and quantify the motions of objects.

Characteristics of Time:



Time is often perceived as moving in a linear fashion from the past to the present and into the future. Time can be measured using various units and
instruments, with seconds being the base unit in the International System of Units (SI).
In most everyday contexts and classical physics, time is perceived as unidirectional and irreversible, meaning events move forward and cannot be
reversed.
Units of Time: Time is measured in units such as seconds (s); the base unit of time in the SI system. Minutes (min): 1 minute = 60 seconds, Hours
(h): 1 hour = 60 minutes = 3,600 seconds.
Days, weeks, months, and years: Larger units of time based on the Earth's rotation and orbit around the Sun.

Measurement of Time:
Watches: Devices designed to measure and display time accurately. They range from mechanical clocks to electronic digital watches.
Atomic Clocks: Extremely precise timekeeping devices that use the vibrations of atoms (often cesium or rubidium) to measure time. These are used
for scientific research and to maintain the accuracy of time standards.
Calendars: Systems for organizing days and larger units of time into a coherent structure. The Gregorian calendar is the most widely used today.

Importance and Applications of time


Time is essential for organizing daily activities, schedules, and routines. It helps in planning, coordinating, and managing tasks and events.
Accurate time measurement is crucial for experiments, observations, and the functioning of various technologies, including GPS, communication
networks, and computer systems.
Time is a key variable in the laws of motion, thermodynamics, and quantum mechanics. It plays a crucial role in understanding the universe's structure
and dynamics.
Time is a critical factor in financial markets, production schedules, project management, and service delivery.

Theories and Concepts:


Newtonian Time: In classical mechanics, time is considered absolute and universal, flowing at a constant rate regardless of the observer's state of
motion.
Relativistic Time: According to Einstein's theory of relativity, time is relative and can vary depending on the observer's velocity and the presence
of gravitational fields. Time dilates, or stretches, for objects moving at high speeds or in strong gravitational fields.
Arrow of Time: This concept describes the one-way direction or asymmetry of time, primarily observed through the increase of entropy as stated in
the second law of thermodynamics, where systems evolve from order to disorder.

Practical Examples:
Daily Activities: Waking up at 7:00 AM, attending a meeting at 2:00 PM, or catching a flight scheduled for 6:00 PM.
Scientific Experiments: Measuring the reaction time in a chemical experiment or the half-life of a radioactive substance.
Technological Systems: Synchronizing data across global networks using Coordinated Universal Time (UTC).

Scientific notation and significant figures


A number is in scientific notation (standard form) when it is written as a number between 1 and 10 (1 ≤ A < 10) multiplied by a power of 10.
Scientific notation is used for writing down very large and very small measurements.
Example:



(i) 598,000,000m = 5.98 x 108m
(ii) 0.00000087m = 8.7 x 10-7m
(iii) 60220m = 6.022 x 104m
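For comparison, Python's built-in scientific ("e") number format expresses the same idea; the following is only an illustrative sketch, not part of the original text:

```python
# Illustrative use of Python's scientific ("e") format for the three examples above.
for value in (598_000_000, 0.00000087, 60_220):
    print(f"{value:.3e}")   # prints 5.980e+08, 8.700e-07 and 6.022e+04
```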

SIGNIFICANT FIGURES
Significant figures are the digits in a number that carry meaning contributing to its measurement accuracy. This includes all the non-zero digits, any
zeros between them, and any trailing zeros in a decimal number.

Importance in Measurements
Significant figures reflect the precision of a measurement. When recording measurements, the number of significant figures indicates how accurate
the measurement is.
The more significant figures in a number, the more precise the measurement is.

Rules for Determining Significant Figures


Non-Zero Digits: Always significant. Example: 123.45 has five significant figures.
Leading Zeros: Not significant; they only indicate the position of the decimal point.
Example: 0.0032 has two significant figures.
Captive (or trapped) Zeros: Zeros between non-zero digits are significant.
Example: 1002 has four significant figures.
Trailing Zeros: With a Decimal Point: Significant.
Example: 2.300 has four significant figures.
Without a Decimal Point: Not significant unless explicitly indicated.
Example: 2300 has two significant figures, but if written as 2300. (With a decimal), it has four significant figures.

Significant Figures in Calculations:


Addition and Subtraction:
The result should be reported with the same number of decimal places as the number with the fewest decimal places.
Example: 12.11 + 1.2 = 13.3 (one decimal place, matching the least precise term).

Multiplication and Division: The result should have the same number of significant figures as the number with the fewest significant figures in
the calculation.
Example: 2.5 × 3.42 = 8.6 (two significant figures).
Rounding Rules:
When rounding a number to a certain number of significant figures: Look at the digit immediately after the last significant figure. If it's 5 or greater,
round up the last significant figure.
If it's less than 5, keep the last significant figure as it is.
Exact Numbers:
Numbers that are counted (e.g., 3 apples) or defined quantities (e.g., 1 inch = 2.54 cm) have an infinite number of significant figures and do not limit
the precision of calculated results.



Use in Reporting Scientific Data: In scientific publications and data reports, using the correct number of significant figures ensures that the data is not
over-interpreted and reflects the actual precision of the measurement instruments.

Common Mistakes: Overestimating the precision by reporting too many significant figures. Misinterpreting leading or trailing zeros.

Example:
If you measure a length as 0.00456 metres, the significant figures are 4, 5 and 6, giving three significant figures. This indicates that the measuring instrument can resolve to the nearest hundred-thousandth of a metre (0.00001 m). Understanding significant figures is critical in maintaining the integrity of scientific
calculations and reporting. It helps in avoiding the illusion of precision where there is none, ensuring the accuracy and reliability of scientific results.
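One common way to round a value to a chosen number of significant figures is sketched below in Python; this is an illustrative recipe, not a method prescribed by the text, and the helper name round_sig is invented:

```python
# Illustrative helper for rounding to a given number of significant figures.
from math import floor, log10

def round_sig(value, sig_figs):
    """Round value to sig_figs significant figures (invented helper name)."""
    if value == 0:
        return 0.0
    decimals = sig_figs - 1 - floor(log10(abs(value)))
    return round(value, decimals)

print(round_sig(28.76, 3))      # 28.8
print(round_sig(0.0032456, 2))  # 0.0032 (leading zeros are not significant)
print(round_sig(123.456, 2))    # 120.0
```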
Activity
Write the following to the stated significant figures
a) 28.8 to 3 s.f b) to 2 s.f c) 4.027 x10-2 to 3 s.f

DENSITY
Density is a measure of how much mass is contained in a given volume.
It is a key physical property of materials and substances and is defined as the mass per unit volume.
The concept of density helps to understand how compact or spread out the mass in a material is.
Formula for Density:
Density = mass ÷ volume, i.e. ρ = m/V
Units of Density:
The units of density depend on the units used for mass and volume: In the metric system, density is commonly expressed in kilograms per cubic meter
(kg/m³) or grams per cubic centimeter (g/cm³). For example, water has a density of about 1 g/cm³ or 1000 kg/m³. In the imperial system, density
can be expressed in pounds per cubic foot (lb/ft³) or pounds per cubic inch (lb/in³).

Characteristics of Density:
Density is an intrinsic property of a substance, meaning it does not depend on the amount of substance present. It is a characteristic property that can
be used to identify materials. The density of substances, especially gases, can change with temperature and pressure. For example, heating a substance
typically decreases its density because the volume increases while the mass remains constant. Density allows for comparison of different materials.
For example, lead is denser than aluminum, meaning a given volume of lead has more mass than the same volume of aluminum.

Applications of Density:
Density is used to identify substances and verify their purity. For instance, gold's high density can help distinguish it from less dense metals.
Objects with lower density than the fluid they are in will float, while those with higher density will sink. This principle is crucial in designing ships,
submarines, and hot air balloons.
Knowing the density of materials helps in calculating loads, stresses, and stability in construction projects.
Density is used to estimate quantities and manage resources, such as determining the fuel efficiency of vehicles or the storage capacity for liquids.

Practical Examples:
Water has a standard density of 1 g/cm³ at 4°C. This property is often used as a reference point.



The density of air at sea level is approximately 1.225 kg/m³. This value is important in fields like meteorology and aviation.
Gold has a high density of about 19.32 g/cm³, which is why gold objects feel heavy for their size.
Different types of wood have varying densities. For example, balsa wood is less dense and therefore lighter than oak wood.

Calculating Density - Example:


Imagine you have a metal block with a mass of 200 grams and a volume of 50 cubic centimeters.
Solution: Density = mass ÷ volume = 200 g ÷ 50 cm³ = 4 g/cm³. This means the metal block has a density of 4 grams per cubic centimetre.
Understanding density is essential in many scientific, engineering, and everyday contexts, allowing for the assessment and comparison of different
materials and their properties.
Other units for density: g/cm³
E.g., if the density of a liquid is 0.8 g/cm³, this means that 0.8 g of the liquid occupies a volume of 1 cm³.
Example:
Find the density of a substance of;
(i) Mass 9 kg and volume 3 m³: Density = 9 kg ÷ 3 m³ = 3 kg/m³
(ii) Mass 100 g and volume 10 cm³: Density = 100 g ÷ 10 cm³ = 10 g/cm³

Converting density from g/cm3 to kg/m3


The density of the substance in g/cm3 is multiplied by 1000 in order to convert to kg/m 3.And to convert from kgm-3 to gcm-3, divide by 1000.
Example:
The density of water is 1.0 g/cm³. Find its density in kg m⁻³.
Density = 1.0 g/cm³ = 1.0 × 1000 kg m⁻³ = 1000 kg m⁻³.
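The density formula and the g/cm³ to kg/m³ conversion described above can be summarised in a short illustrative Python sketch (function names are invented for this example):

```python
# Illustrative density helpers: density = mass / volume, and the x1000 conversion.
def density(mass, volume):
    """Density in the units implied by mass and volume (e.g. g and cm^3 -> g/cm^3)."""
    return mass / volume

def g_per_cm3_to_kg_per_m3(density_g_cm3):
    """Convert g/cm^3 to kg/m^3 by multiplying by 1000."""
    return density_g_cm3 * 1000

print(density(96, 12))               # 8.0 g/cm^3 (the steel example below)
print(g_per_cm3_to_kg_per_m3(1.0))   # 1000 kg/m^3, the density of water
print(g_per_cm3_to_kg_per_m3(8.0))   # 8000 kg/m^3
```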

1). A piece of steel has a volume of 12 cm³ and a mass of 96 g. Find its density
(a) in g/cm³: Density = 96 g ÷ 12 cm³ = 8 g/cm³
(b) in kg/m³: 8 g/cm³ = 8 × 1000 = 8000 kg/m³
2) The oil level reading in a burette is 25 cm³. 50 drops of oil fall from the burette. If the volume of one drop is 0.1 cm³, what is the final oil level reading in the burette?
Volume of one oil drop = 0.1 cm³, volume of 50 oil drops = 0.1 × 50 cm³ = 5 cm³
Final level = 25 cm³ + 5 cm³ = 30 cm³ (a burette is graduated downwards, so the reading increases as oil runs out)

Activity:
A measuring cylinder has a water level reading of 13 cm³. What will be the new water level if a metallic block of mass 1.6 g and density 0.8 g/cm³ is added?

EXPERIMENT TO DETERMINE DENSITY OF REGULAR OBJECTS


The mass of the solid is obtained using a beam balance.
The volume of the object is obtained by measuring its dimensions (length, width and height) using a ruler, Vernier calipers or both, and then substituting the dimensions into the appropriate formula for volume.

HOW TO DETERMINE DENSITY OF AN IRREGULAR OBJECT (e.g. a stone)



The mass of a solid is measured using a beam balance.
Its volume is obtained using displacement method.

The density is then obtained from: density = mass ÷ volume.


Density of liquids
i) The volume V of the liquid is measured using a measuring cylinder.
ii) The liquid is poured into a beaker of known mass m₁.
iii) The mass M of the beaker containing the liquid is obtained using a beam balance.
iv) The mass of the liquid is (M − m₁).
v) Density of the liquid = (M − m₁) ÷ V
Density of Air
i) A round-bottomed flask is weighed when full of air, and weighed again after the air has been removed with a vacuum pump.
ii) The difference between the two readings gives the mass of the air.
iii) The volume of the air is obtained by filling the same flask with water and measuring the volume of this water using a measuring cylinder.
iv) The volume of the water is taken as the volume of the air.
v) The density is then calculated from: density of air = mass of air ÷ volume of air

Examples
1. A Perspex box has a 10 cm square base and contains water to a height of 10 cm. A piece of rock of mass 600 g is lowered into the water and the level rises to 12 cm.
(a) What is the volume of water displaced by the rock?
V =L x w x h =10 x10 x (12-10) =200 cm3
(b) What is the volume of the rock?
Volume of rock= volume of water displaced =200cm3

Alternatively
Volume of water before adding the rock, V1 = L x W x H = (10 x 10 x 10) cm = 1000cm3
Volume of water after adding the rock V2 = L x W x H = (10 x 10 x 12) cm3 = 1200cm3
Volume of water displaced V= V2 – V1= (1200 – 1000) cm3 = 200cm3
(c) Calculate the density of the rock: Density = mass ÷ volume = 600 g ÷ 200 cm³ = 3 g/cm³
2. A Perspex box having 6cm square base contains water to a height of 10cm. Find the volume of water in the box.
Volume of water in the box =L x w x h, V=6cm x 6cm x 10cm =360cm3
3. A stone of mass 120 g is lowered into the box and the level of water rises to 13 cm.
(i) Find the new volume reading:
V₂ = L × W × H = 6 cm × 6 cm × 13 cm = 468 cm³
(ii) Find the volume of the stone:
Volume of the stone = volume of displaced water = V₂ − V₁ = (468 − 360) cm³ = 108 cm³
(iii) Calculate the density of the stone: Density = 120 g ÷ 108 cm³ ≈ 1.11 g/cm³



4. A steel box of dimensions 100 cm by 20 cm by 40 cm has a mass of 560 g.
Find its density (i) in g/cm³ (ii) in kg/m³
(i) Volume = L × W × H = (100 × 40 × 20) cm³ = 80,000 cm³; Density = 560 g ÷ 80,000 cm³ = 0.007 g/cm³
(ii) In kg/m³: Density = 0.007 × 1000 = 7 kg/m³
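The Perspex-box calculations above can be reproduced with a brief illustrative Python sketch (the variable and function names are invented for this example):

```python
# Illustrative reproduction of the Perspex-box examples (displacement + density).
def density(mass_g, volume_cm3):
    """Density in g/cm^3 from mass in g and volume in cm^3."""
    return mass_g / volume_cm3

# Example 1: 10 cm x 10 cm base, water rises from 10 cm to 12 cm, rock mass 600 g.
rock_volume = 10 * 10 * (12 - 10)            # 200 cm^3 of water displaced
print(density(600, rock_volume))             # 3.0 g/cm^3

# Example 3: 6 cm x 6 cm base, water rises from 10 cm to 13 cm, stone mass 120 g.
stone_volume = 6 * 6 * 13 - 6 * 6 * 10       # 108 cm^3 of water displaced
print(round(density(120, stone_volume), 2))  # 1.11 g/cm^3
```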

DENSITY OF MIXTURES
A mixture is obtained by putting together two or more substances such that they do not react with one another. The density of the mixture lies
between the densities of its constituent substances and depends on their proportions. It is assumed that the volume of the mixture is equal to the sum of the volumes of the individual constituents.
Density of mixture = (mass of the mixture) ÷ (volume of the mixture)
Suppose two substances are mixed as follows:
Substance X: mass M₁, volume V₁, density ρ₁ = M₁/V₁
Substance Y: mass M₂, volume V₂, density ρ₂ = M₂/V₂
Then the density of the mixture = (M₁ + M₂) ÷ (V₁ + V₂)

Example: Two liquids X and Y are mixed to form a solution. If the density of X = 0.8 g cm⁻³ and its volume = 100 cm³, and the density of Y = 1.5 g cm⁻³ and its volume = 300 cm³, find:
(i) The mass of liquid X: mass = density × volume = 0.8 × 100 = 80 g
(ii) The mass of liquid Y: mass = density × volume = 1.5 × 300 = 450 g
(iii) The density of the mixture = (80 + 450) g ÷ (100 + 300) cm³ = 530 ÷ 400 = 1.325 g cm⁻³
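Assuming, as above, that the volumes of the constituents simply add, the mixture rule can be sketched in Python as follows (the function name is invented for this example):

```python
# Illustrative mixture rule: total mass divided by total volume (volumes assumed to add).
def mixture_density(constituents):
    """constituents: list of (density, volume) pairs in consistent units."""
    total_mass = sum(d * v for d, v in constituents)
    total_volume = sum(v for _, v in constituents)
    return total_mass / total_volume

# Liquids X (0.8 g/cm^3, 100 cm^3) and Y (1.5 g/cm^3, 300 cm^3), as in the example above.
print(round(mixture_density([(0.8, 100), (1.5, 300)]), 3))   # 1.325 g/cm^3
```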

1. 100 cm3 of fresh water of density 1 000 kgm–3 is mixed with 100 cm3 of seawater of density 1030 kgm–3. Calculate the density of the
mixture.
2. Bronze is made by mixing molten copper and tin. If 100 kg of the mixture contains 80% by mass of copper and 20% by mass of tin,
calculate the density of bronze. (Density of copper is 8 900 kgm–3 and density of tin 7 000 kgm–3)
3. A density bottle has a mass of 17.5 g when empty. When full of water, its mass is 37.5 g. When full of liquid X, its mass is 35 g. If the
density of water is 1 000 kgm–3, find the density of liquid X.

RELATIVE DENSITY
Relative density, also known as specific gravity, is a dimensionless quantity that compares the density of a substance to the density of a
reference substance, typically water for liquids and solids, and air for gases. It indicates whether a substance is more or less dense than the reference
substance without requiring units.
The relative density (RD) is calculated using the formula:
RD = density of the substance ÷ density of the reference substance
For liquids and solids, the reference substance is usually water, which has a density of approximately 1 g/cm³ (or 1000 kg/m³) at 4°C.

Characteristics of relative density


Since relative density is a ratio of two densities, it has no units.
It allows for easy comparison of the density of a substance to the reference substance.
If R.D > 1, the substance is denser than the reference substance.
If R.D < 1, the substance is less dense than the reference substance.
When measuring relative density, it is important to specify the temperatures and pressures of both the substance and the reference substance, as
density can change with these conditions.

Applications of Relative Density:


• Helps in identifying materials and verifying their purity. For example, the relative density of pure gold is about 19.3.
• Used in industries like mining, oil, and chemicals to assess material properties.
• Determines whether an object will float or sink in a fluid. An object with an RD less than 1 will float in water.
• Used in quality control processes to ensure consistency and standards in manufacturing.

The relative density of water is 1, as it is the reference substance. Ethanol has a relative density of about 0.79, meaning it is less dense than water and
will float on it. Mercury has a relative density of about 13.6, making it much denser than water. This high density allows mercury to be used in
barometers and other instruments.
Calculating Relative Density
Suppose you have a liquid with a density of 850 kg/m³. To find its relative density, compare it with water: RD = 850 ÷ 1000 = 0.85. This means the liquid has a relative density of 0.85 and is less dense than water.
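The ratio definition of relative density can be captured in a very small illustrative Python sketch (names are invented; water at 1000 kg/m³ is taken as the reference):

```python
# Illustrative relative density (specific gravity) as a dimensionless ratio.
WATER_DENSITY_KG_M3 = 1000   # reference density of water

def relative_density(density_kg_m3, reference=WATER_DENSITY_KG_M3):
    """Relative density = density of substance / density of reference."""
    return density_kg_m3 / reference

print(relative_density(850))     # 0.85 -> less dense than water, so it floats
print(relative_density(13600))   # 13.6 -> mercury-like, much denser than water
```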
Importance of R.D in Different Fields:
• Used to determine the concentration of solutions and the purity of substances.
• Helps in studying the stratification of lakes and oceans where water density varies with temperature and salinity.
• Essential in designing systems involving fluid flow, such as pipelines and hydraulics.

Ocean Currents and Water Density


Ocean currents are a critical component of Earth's climate system, influenced by various factors, including changes in water density. Ocean currents
are large-scale movements of water within the oceans, driven by various forces such as wind, Earth's rotation (Coriolis Effect), temperature gradients,
salinity differences, and tides.

Density-Driven Currents (Thermohaline Circulation):


Ocean water density is primarily influenced by temperature and salinity: Colder water is denser than warmer water. Water with higher salinity is
denser than water with lower salinity.
Thermohaline circulation, also known as the "global conveyor belt," is a density-driven ocean circulation system. It plays a key role in distributing
heat and regulating climate across the globe. Cold, salty water in the Polar Regions sinks due to its higher density, driving deep ocean currents. As
this water moves along the ocean floor, it eventually warms and rises, creating a global circulation pattern.

Role of Water Density in Ocean Currents:


In the Polar Regions, where water is cold and salty, the increased density causes the water to sink, initiating deep ocean currents that travel across
the globe. Changes in water density also affect surface currents. For instance, when surface water becomes denser, it can sink, driving vertical currents
that influence surface water movement. Upwelling occurs when deeper, colder and nutrient-rich water rises to the surface, often driven by wind
patterns and changes in water density. Downwelling occurs when surface water sinks due to increased density, often because of cooling or increased
salinity.

Impact of Ocean Currents on the Warming of the North Atlantic Due to Climate Change
The Atlantic Meridional Overturning Circulation (AMOC), a vital system that transports warm water from the tropics to the North Atlantic, is showing
signs of weakening. This slowdown is primarily driven by increased ocean heat content and the influx of cold freshwater from melting Arctic ice
sheets. As glaciers and sea ice continue to melt, the balance of warm and cold water is disrupted, threatening the stability of the AMOC. The
consequences of a weakened AMOC could be severe, potentially leading to drastic climate shifts not only in the North Atlantic region but globally.
Climate scientists warn that if this current collapses, it could impact weather patterns, sea levels, and marine ecosystems for centuries.
Climate change is causing accelerated melting of polar ice, particularly in Greenland. This influx of freshwater into the North Atlantic reduces the
salinity of seawater, decreasing its density and potentially disrupting the sinking of water that drives the AMOC. A reduction in the density of North
Atlantic waters can weaken the AMOC, leading to a slowdown in the circulation. A weakened AMOC could reduce the transport of warm water to the
North Atlantic, potentially leading to regional cooling in Europe and North America, despite global warming.

A weakened AMOC could lead to a rise in sea levels along the North Atlantic coast due to changes in ocean circulation patterns. Changes in ocean
currents can alter atmospheric circulation, potentially leading to more intense storms and hurricanes in the North Atlantic region. Changes in ocean
currents and temperature can impact marine life, including fish populations and migratory patterns, affecting ecosystems and human activities like
fishing.
As polar ice melts, it reduces the reflective surface area, causing more sunlight to be absorbed by the ocean, further increasing temperatures and
accelerating ice melt. This feedback loop can further weaken ocean currents by altering water density.
Increased freshwater input from melting ice can further reduce the salinity and density of ocean water, amplifying the weakening of the AMOC and
potentially leading to more pronounced climate impacts.
Reducing greenhouse gas emissions is critical to slowing the rate of climate change and its impact on ocean currents. Global efforts to limit temperature
rise can help mitigate the risks associated with the disruption of ocean circulation patterns. Ocean currents, driven by changes in water density, are
integral to the global climate system.

1.3 STATES OF MATTER


Learning Outcomes
a) Understand the meaning of matter(u)
b) Understand that atoms are the building blocks from which all matter is made; appreciate that the states of matter have different properties
(k,u)
c) Apply the particle theory to explain diffusion and Brownian motion and their applications(s)
d) Understand how the particle theory of matter explains the properties of solids, liquids and gases, changes of state, and diffusion(u)
e) Understand the meaning and importance of plasma in physics (u, v/a)

STRUCTURE OF MATTER
Matter is the substance that constitutes the physical universe, encompassing everything that has mass and occupies space. It forms the basis of all
objects, living or non-living, and exists in various states: solid, liquid, gas, and plasma. Composed of tiny particles called atoms and molecules, matter
interacts through physical and chemical processes, giving rise to the diverse materials and phenomena we observe in the world around us. In short, matter is
anything that occupies space and has mass; it is built from small particles called atoms and exists in different states.

MATTER
Matter is fundamentally defined as anything that occupies space and has mass. This definition encompasses all physical substances, from the smallest
particles to the largest structures in the universe. Matter has a physical presence, meaning it takes up space in the universe. Whether it's a solid,
liquid, gas, or plasma, matter displaces a volume in space, which is a key characteristic of its existence. Mass is a measure of the amount of matter in
an object. It is an intrinsic property of matter that does not change regardless of the object’s location in the universe. Mass gives matter inertia, the
resistance to changes in motion, and is directly related to the gravitational force an object experiences. The physical objects and materials around
us, such as glass, water and air, manifest the existence of matter in its common states. The process of subdividing matter into smaller and smaller units
cannot continue indefinitely, which suggests that matter is not continuous but is made up of discrete smaller parts.

The Particle Theory of Matter


The particle theory of matter, also known as the kinetic theory of matter, is a scientific theory that explains the properties and behavior of matter in
terms of small, discrete particles.

Whether an object is a solid, liquid, or gas, it is composed of tiny particles (atoms, molecules, or ions). The particles of matter are always moving. The
speed of their movement depends on the state of matter: in solids, particles vibrate about fixed positions; in liquids, particles move past one another but stay
close together; and in gases, particles move rapidly and are widely spaced.

The particles attract each other:


There are forces of attraction between particles that vary in strength depending on the state of matter: strongest in solids, weaker in liquids, and weakest
in gases.
Energy and temperature affect particle movement: as temperature increases, the particles gain energy and move faster. This energy is known as kinetic
energy.

Different states of matter


Matter can exist in several different states, primarily as solids, liquids, gases, and
plasma. The state of matter is determined by the arrangement and energy of its particles.
In solids, particles are closely packed together in a fixed arrangement. They vibrate but do not move freely. Solids have a definite shape and volume.
Solids have a tightly packed and orderly arrangement of particles called a lattice. Strong intermolecular forces hold the particles together.
Solids are formed through processes like the cooling of liquids (solidification), deposition of gases, or directly from chemical reactions (e.g., precipitates
formed during chemical reactions). Metals (like iron and copper), minerals (like quartz and diamond), ice, wood, glass, and plastics are common solids.
Solids find extensive use in construction (building materials), manufacturing (machinery parts), electronics (semiconductor materials), and daily
objects (furniture, utensils).

In liquids, particles are close together but can move past each other. They have weaker intermolecular forces compared to solids. Liquids take the
shape of their container. They have a fixed volume that remains constant. Liquids flow and can be poured. Liquids can be compressed slightly compared
to gases. Liquids are formed when solids melt or when gases condense. They can also be created through chemical reactions that produce liquid
products. Water, oil, milk, and alcohol are common liquids. Liquids are crucial in industries such as food and beverage (processing and packaging),
pharmaceuticals (drug formulations), automotive (engine lubricants), and cosmetics (lotions, creams).

In gases, gas particles are widely spaced and move freely. They have weak intermolecular forces and no fixed arrangement. Gases take the shape of
their container. They expand to fill the available space. Gases can be compressed significantly under pressure. Gases mix readily with each other. Gases
are formed when substances vaporize (evaporate from liquids), sublimate (turn from solids directly into gases), or when gases are released during
chemical reactions. Oxygen, nitrogen, hydrogen, and carbon dioxide are common gases. Gases have diverse uses, including in energy production
(natural gas, hydrogen fuel), healthcare (medical gases like oxygen), manufacturing (industrial gases for welding and cutting), and refrigeration
(cooling gases like Freon). Examples: Oxygen, nitrogen, carbon dioxide, and steam.

Plasma
Plasma consists of ionized particles (positively charged ions and free electrons) produced at high energy levels. Plasma conducts electricity and responds to
magnetic fields. The presence of ions and free electrons distinguishes plasma from gases. Plasma is often at very high temperatures. Plasma is formed when
gases are heated to extremely high temperatures or subjected to strong electromagnetic fields, causing ionization of the particles. Lightning, auroras,
stars (like the Sun), fluorescent lights, neon signs, and plasma TVs are examples of natural and artificial plasmas.

Plasma finds applications in technologies like plasma cutting and welding, fluorescent lighting, plasma TVs, semiconductor manufacturing, and
experimental fusion reactors.
Plasma is a state of matter similar to gas but with ionized particles, meaning the electrons are separated from atoms. This creates a mixture of positively
charged ions and free electrons. Plasma is electrically conductive and responds to magnetic fields.

The nature of plasma and why it is described as the fourth state of matter
Plasma is often referred to as the fourth state of matter because it has distinct properties that set it apart from solids, liquids, and gases. Plasma is
created when a gas is energized to the point where electrons are stripped away from atoms, resulting in a soup of free electrons and positively charged
ions. Unlike gases, plasma is a good conductor of electricity due to the presence of free-moving charged particles. Plasma can be influenced by magnetic
fields, which can cause it to move or change shape. Plasma is the most common state of matter in the universe, found in stars, including our Sun, and
interstellar space. Plasma has much higher energy levels than the other states of matter, which is why it requires significant energy input (such as
heat or electrical discharge) to form.

Kinetic theory
The kinetic theory of matter is a fundamental concept in physics and chemistry that helps explain the behavior of gases, liquids, and solids based on
the movement and interactions of their constituent particles.
According to the kinetic theory, all matter is made up of tiny particles (atoms, molecules, or ions) that are in constant motion. The particles in a
substance are constantly moving and colliding with each other and the walls of their container.

Assumptions of the Kinetic Theory


The kinetic theory makes several assumptions about the behavior of particles in matter:
Particles are in constant, random motion. Particles possess kinetic energy due to their motion. Collisions between particles are elastic, meaning
energy is conserved during collisions. The average kinetic energy of particles is directly proportional to temperature (Kelvin scale).
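The last assumption can be given a rough quantitative form: for an ideal gas, the average translational kinetic energy of one particle is (3/2)k_BT, where k_B is the Boltzmann constant and T is the absolute temperature. The short sketch below simply evaluates this relation; it is an illustration of the proportionality, not part of the syllabus derivation.

# Average translational kinetic energy of an ideal-gas particle: KE = (3/2) * k_B * T
K_B = 1.380649e-23  # Boltzmann constant in joules per kelvin

def mean_kinetic_energy(temp_kelvin):
    """Average kinetic energy per particle (in joules) at absolute temperature T."""
    return 1.5 * K_B * temp_kelvin

for T in (273, 373, 573):  # roughly 0 degC, 100 degC and 300 degC
    print(T, "K ->", mean_kinetic_energy(T), "J")

Doubling the absolute temperature doubles the average kinetic energy, which is exactly what "directly proportional" means here.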

Particle theory to explain states of matter


In gases, particles are widely spaced and move freely. They have high kinetic energy and are constantly moving in random directions. Gas particles
collide with each other and the walls of the container, creating pressure.
In liquids, particles are closer together compared to gases. They have moderate kinetic energy and move past each other, allowing liquids to flow.
Liquid particles also exhibit random motion but with less freedom compared to gases.
In solids, particles are tightly packed and vibrate in fixed positions. They have low kinetic energy and limited movement. However, solid particles still
vibrate and can transmit vibrations (heat) through the substance.

Changes of State of Matter: Water and Ice


When water and ice undergo changes in their state of matter due to heating and cooling, they demonstrate the fundamental principles of phase
transitions. These processes include melting, boiling, condensing, and freezing. Understanding these transitions, helps explain the energy changes
involved, specifically why heat is absorbed or released.

Melting: Ice to Water

Melting occurs when solid ice is heated, causing it to change into liquid water.
Molecular Explanation:
Ice is a solid where water molecules are arranged in a fixed, crystalline structure, held together by strong hydrogen bonds. As heat is added to ice,
the energy causes the water molecules to vibrate more vigorously. When the temperature reaches 0°C (32°F), the vibrations are strong enough to
break some of the hydrogen bonds, allowing the molecules to move more freely. This marks the transition from solid to liquid.

Energy Involvement:
Heat Absorption: During melting, ice absorbs heat from the surroundings. This energy is used to break the intermolecular bonds between water
molecules, rather than increasing the temperature. This absorbed energy is known as the latent heat of fusion. The temperature of the substance
remains constant at 0°C until all the ice has melted, after which the temperature of the liquid water begins to rise.

Boiling: Water to Steam


Boiling occurs when liquid water is heated to its boiling point, turning it into steam (water vapor).
Molecular Explanation:
In liquid water, molecules are close together but can move around each other freely. As heat is added, the molecules move faster, increasing the kinetic
energy of the water. When the temperature reaches 100°C (212°F) at standard atmospheric pressure, the energy is sufficient to overcome the
intermolecular forces completely, allowing the molecules to escape into the air as steam.

Energy Involvement:
Heat Absorption: During boiling, water absorbs a significant amount of heat, which is used to break the bonds that hold the molecules in the liquid
state. This energy is known as the latent heat of vaporization. Like melting, the temperature remains constant at 100°C during the phase change until
all the water has boiled off.
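The heat absorbed during these two phase changes can be estimated with Q = mL, where L is the latent heat of fusion or of vaporization. The sketch below uses commonly quoted approximate values for water (about 334 kJ/kg for fusion and about 2,260 kJ/kg for vaporization); the numbers are illustrative rather than exact.

# Heat absorbed during a phase change: Q = m * L (the temperature does not change).
L_FUSION = 334_000         # J/kg, approximate latent heat of fusion of ice
L_VAPORIZATION = 2_260_000 # J/kg, approximate latent heat of vaporization of water

def heat_for_phase_change(mass_kg, latent_heat):
    """Energy needed to change the state of a given mass, with no temperature rise."""
    return mass_kg * latent_heat

print("Melting 0.5 kg of ice:", heat_for_phase_change(0.5, L_FUSION), "J")          # 167,000 J
print("Boiling 0.5 kg of water:", heat_for_phase_change(0.5, L_VAPORIZATION), "J")  # 1,130,000 J

Notice that boiling a given mass of water takes several times more energy than melting the same mass of ice, which is why the temperature stays fixed for so long while water boils away.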

Condensation: Steam to Water


Condensation is the reverse of boiling, where steam (water vapor) cools down and changes back into liquid water.
Molecular Explanation:
As steam cools, the kinetic energy of the water molecules decreases. At a certain point, the molecules slow down enough that the intermolecular forces
can pull them back together, forming liquid water. This occurs at the condensation point, which is the same temperature as the boiling point (100°C
under standard conditions).

Energy Involvement:
During condensation, steam releases the latent heat of vaporization to the surroundings. This released energy is the same amount that was absorbed
during boiling. The temperature remains constant during condensation until all the steam has converted back into liquid water.

Freezing: Water to Ice


Freezing occurs when liquid water is cooled and changes into solid ice.
Molecular Explanation:
As liquid water cools, the molecules lose kinetic energy and move more slowly.
At 0°C, the water molecules begin to arrange themselves into a crystalline structure, forming ice. The hydrogen bonds that were partially broken
during melting are reformed, holding the molecules in a fixed position.

Energy Involvement:
During freezing, water releases the latent heat of fusion, the same energy that was absorbed during melting. This energy is released into the
surroundings as the water molecules settle into the solid structure. The temperature remains constant at 0°C during the phase change until all the
water has frozen.

Why heat is taken in and given out during phase changes

Melting and Boiling (Heat Absorption):


Boiling: This is the process by which a liquid, when heated, changes to the gaseous state at a fixed temperature; for example, pure water at 100°C changes to
vapour by the process of boiling. There is no change in temperature at the boiling point because the heat supplied is used to weaken the cohesive forces of
attraction between molecules, with the remainder converted to kinetic energy. When a substance melts or boils, energy is required to break the
intermolecular bonds that hold the particles in the solid or liquid state. This energy does not increase the temperature but is instead used to change the
state of the substance. This is why heat is absorbed during melting and boiling.

Condensing and Freezing (Heat Release):


When a substance condenses or freezes, the intermolecular bonds are reformed. The energy that was absorbed during melting or boiling is now
released back into the surroundings. This release of energy explains why heat is given out during condensation and freezing.

Importance of Changes of State in Everyday Life


Changes of state, such as melting, freezing, condensation, and evaporation, are essential in many everyday processes. They play crucial roles in
maintaining life, influencing weather patterns, and supporting various practical activities.

Control of Body Temperature in Mammals


Sweating and Evaporation:
Mammals, including humans, rely on the evaporation of sweat to regulate body temperature. When the body overheats, sweat glands produce sweat,
which is mostly water. As the sweat evaporates from the skin's surface, it absorbs heat from the body, cooling it down. This process is crucial for
maintaining a stable internal temperature, especially in hot environments or during physical exertion.

Breathing and Heat Exchange:


Mammals also lose heat through the process of respiration. When air is exhaled, it often carries moisture from the lungs. The evaporation of this
moisture helps cool the body. The balance between heat production (through metabolism) and heat loss (through evaporation and radiation) is vital
for homeostasis.

Rain and the Water Cycle


Evaporation: The water cycle is driven by changes in the state of water. The sun’s heat causes water from oceans, lakes, and rivers to evaporate,
turning it into water vapor (gas) that rises into the atmosphere.
Condensation: As the water vapor rises and cools, it condenses into tiny droplets, forming clouds. This change of state from gas to liquid is crucial
in cloud formation.
Precipitation: When the droplets in clouds combine and grow large enough, they fall to the Earth as precipitation (rain, snow, sleet, or hail). This
process returns water to the surface, replenishing water bodies and supporting life on land.

Freezing and Melting in the Water Cycle
In colder regions, precipitation can fall as snow or ice. This frozen water eventually melts during warmer periods, feeding rivers and lakes, contributing
to the continuous cycle of water.
Cooling Drinks with Ice
Melting Ice: Adding ice to a drink cools it down through the process of melting. As the ice absorbs heat from the drink, it changes from a solid to a
liquid, lowering the temperature of the drink. The melting process requires energy, known as the latent heat of fusion, which is absorbed from the
drink, thus cooling it effectively.
Practical importance:
This is a practical application of phase change that people use daily, especially in warm weather, to keep beverages at a refreshing temperature.

Making Ice Cream


Freezing: The process of making ice cream involves freezing a mixture of cream, sugar, and flavorings. As the liquid mixture cools, it undergoes a
phase change from liquid to solid. To achieve the right texture, ice cream is churned during freezing to prevent the formation of large ice crystals.
This results in a smooth, creamy consistency.
Use of Salt in Ice Cream Making: Often, salt is added to ice surrounding the ice cream mixture to lower the freezing point of water. This allows the ice
cream mixture to freeze at a lower temperature, speeding up the process and improving texture.
Importance: Understanding the freezing process is crucial in producing ice cream with the desired consistency and flavor, making it a popular treat
worldwide.

Brownian motion
Brownian motion is the random movement of particles suspended in a fluid (liquid or gas) as they collide with the fast-moving molecules of the fluid.
“Brownian motion refers to the random movement displayed by small particles that are suspended in fluids. It is commonly
referred to as Brownian movement”. This motion is a result of the collisions of the particles with other fast-moving particles in the fluid. It is
a direct observation of the kinetic theory of matter at work. Brownian motion is named after the Scottish botanist Robert Brown, who first observed that
pollen grains suspended in water move in random directions. The random motion is caused by the uneven and continuous bombardment of the
suspended particles by the molecules of the surrounding fluid, which are in constant motion. Brownian motion provided evidence for the existence of
atoms and molecules and supported the kinetic theory of matter, which posits that matter is made up of small particles in constant motion.

Brownian motion experiment


When smoke particles are suspended in air in an illuminated glass cell and observed through a microscope, they are seen as bright specks in a state of
continuous random motion. The specks appear bright because the smoke particles scatter the light falling on them. Their random motion is due to the
smoke particles being bombarded by air molecules, which are themselves moving randomly. When the temperature of the glass cell is increased, the
random motion increases (the smoke particles are seen to move faster), showing that an increase in temperature increases the kinetic energy of the
molecules.
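The jerky paths seen in the smoke-cell experiment can be imitated with a very simple two-dimensional random-walk model. This is only an illustrative simulation (each step stands in for the net effect of many molecular collisions, and the step sizes are arbitrary); it is not a description of the actual experiment.

import random

def brownian_path(steps, step_size=1.0):
    """Trace a 2D random walk: each step mimics the net kick from many collisions."""
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        x += random.uniform(-step_size, step_size)
        y += random.uniform(-step_size, step_size)
        path.append((x, y))
    return path

# A higher temperature is mimicked by a larger step size (faster, more energetic kicks).
cool_path = brownian_path(100, step_size=0.5)
warm_path = brownian_path(100, step_size=1.5)
print("End point (cool):", cool_path[-1])
print("End point (warm):", warm_path[-1])

The "warm" walk wanders much further from its starting point, mirroring the faster motion of smoke particles in a heated cell.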

Brownian motion is fundamental to understanding processes such as
diffusion, where particles spread from areas of higher concentration to lower
concentration, and is used in various scientific fields to study particle
dynamics and fluid behavior.

Causes of Brownian motion


The speed of the motion is inversely related to the size of the particles, i.e. small particles exhibit faster movements. This is because the speed gained from a
given transfer of momentum is inversely proportional to the mass of the particle, so lighter particles obtain greater speeds from collisions. The speed of the
Brownian motion is also inversely related to the viscosity of the fluid: the lower the viscosity of the fluid, the faster the Brownian movement. Viscosity is a
quantity that expresses the magnitude of the internal friction in a liquid; it is a measure of a fluid's resistance to flow.

Effects of Brownian motion


Brownian movement causes the particles in a fluid to be in constant motion.
This prevents particles from settling down, leading to the stability of colloidal solutions. A true solution can be distinguished from a colloid with the
help of this motion. Albert Einstein’s paper on Brownian motion provided vital evidence for the existence of atoms and molecules. The kinetic theory of
gases, which explains the pressure, temperature, and volume of gases, is built on the same picture of particles in constant random motion.

Diffusion
Diffusion is the process by which particles (such as molecules or ions) spread out from an area of high concentration to an area of low concentration.
This movement occurs until the particles are evenly distributed, reaching a state of equilibrium. Diffusion is a fundamental concept in chemistry,
physics, and biology, describing how substances move within liquids, gases, and even across cell membranes. Diffusion occurs down a concentration
gradient, which means particles move from a region where they are more concentrated to a region where they are less concentrated. The steeper the
concentration gradient (the greater the difference in concentration between the two areas), the faster the rate of diffusion. Particles in liquids and
gases are in constant, random motion due to their kinetic energy. This random movement is what drives diffusion, as particles naturally spread out to
occupy all available space.

Diffusion continues until there is no net movement of particles, meaning the concentration of particles is the same throughout the space. At this point,
dynamic equilibrium is achieved, where particles continue to move, but there is no overall change in concentration.
Diffusion in Gases: When you open a bottle of perfume, the scent molecules diffuse through the air. Initially, the concentration of perfume molecules
is high near the bottle, but they gradually spread out and can be smelled throughout the room.
Diffusion in Liquids: If you drop a dye into a glass of water, the dye molecules will diffuse throughout the water until the color is evenly distributed.
The dye moves from an area of high concentration (where it was dropped) to areas of lower concentration.
Biological Diffusion: In living organisms, diffusion is crucial for processes such as gas exchange in the lungs, where oxygen diffuses from the
alveoli (where its concentration is high) into the blood (where its concentration is low), while carbon dioxide diffuses in the opposite direction.

Importance of Diffusion
Diffusion is vital for transporting substances within cells, including nutrients, gases, and waste products. Diffusion helps maintain homeostasis in
organisms by ensuring that essential molecules like oxygen and glucose are evenly distributed where needed.

Diffusion is utilized in various industries, such as in the purification of gases, separation processes, and in creating concentration gradients for chemical
reactions.
In summary, diffusion is a passive, natural process driven by the random motion of particles, allowing substances to move from areas of high
concentration to areas of low concentration until they are evenly distributed. This process is essential in both natural and industrial systems, playing
a key role in everything from cellular function to the distribution of scents in a room.
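The idea that random motion alone is enough to even out a concentration difference can be illustrated with a minimal one-dimensional model: particles start in the leftmost of a row of boxes and hop randomly left or right. The function and parameter names below are hypothetical, and the sketch is a toy model rather than a simulation of any particular experiment.

import random

def diffuse(boxes=10, particles=500, hops_per_particle=200):
    """Particles all start in box 0 and hop randomly; the counts gradually even out."""
    positions = [0] * particles              # every particle starts in the leftmost box
    for _ in range(hops_per_particle):
        for p in range(particles):
            step = random.choice((-1, 1))    # random hop to a neighbouring box
            new = positions[p] + step
            if 0 <= new < boxes:             # the walls of the container block the hop
                positions[p] = new
    return [positions.count(b) for b in range(boxes)]

print(diffuse())  # roughly equal counts in every box: net movement was from high to low concentration

No particle "knows" where the others are; the even spread emerges purely from random motion, which is exactly the point made in the paragraph above.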

Demonstration of Diffusion in Gases


Two gas jars are used, one full of brown nitrogen dioxide (or bromine) gas and the other full of air. When the cover separating them is removed and the
gases mix, both jars become filled with the brown gas. The lighter gas diffuses faster than the heavier gas.

Demonstration of diffusion in liquids


Half fill a glass beaker with water.
Gently and slowly, lower potassium permanganate crystals to the bottom of the beaker (for example, down a tube).
After some time, the purple colour spreads throughout the beaker. This is due to diffusion of the dissolved particles through the liquid.
The rate of diffusion increases when the temperature is increased.

Factors that affect the rate of diffusion in fluids


Diffusion is the process by which particles spread from areas of high concentration to areas of low concentration, occurring in both liquids and gases.
The rate of diffusion in fluids is influenced by several factors:
Temperature: Higher temperatures increase the kinetic energy of particles, causing them to move more rapidly. This increased movement accelerates
the rate of diffusion. In contrast, at lower temperatures, particles move more slowly, and the rate of diffusion decreases.
Concentration Gradient: The concentration gradient is the difference in concentration between two regions. The greater the concentration
difference, the faster the rate of diffusion. Diffusion occurs more rapidly when there is a steep concentration gradient, as particles move quickly to
balance concentrations.
Molecular Size: Smaller molecules diffuse faster than larger molecules because they encounter less resistance and can move more easily through
the fluid. Larger molecules move more slowly, reducing the rate of diffusion.
Medium of Diffusion: The rate of diffusion is also affected by whether the medium is a liquid or a gas. Gases typically allow faster diffusion due to
the greater distance between particles, which leads to fewer collisions.
In liquids, particles are closer together, leading to more frequent collisions and slower diffusion.
Viscosity of the Fluid: Viscosity refers to the "thickness" or resistance to flow in a fluid. Higher viscosity fluids (e.g., honey) slow down the
diffusion process because particles move more slowly through the dense medium. Lower viscosity fluids (e.g., water) allow for faster diffusion because
particles can move more freely.
Nature of the Diffusing Substance: The chemical nature of the diffusing substance can also impact the rate of diffusion. For instance, substances
that are more soluble in the fluid will diffuse faster. Polar substances may diffuse more slowly in non-polar solvents and vice versa.

Surface Area: The rate of diffusion increases with the surface area over which diffusion can occur. Larger surface areas provide more space for
particles to move and spread out.


Comparison of Diffusion in Liquids and Gases


A gas jar containing brown bromine gas and covered with a sheet of
cardboard is placed in contact with an open end of a gas jar of the same
diameter with the mouth smeared with grease. The cardboard is removed and
the jars pressed together tightly.

Observation and Explanation


The bromine gas spreads into gas jar B at a greater rate than it returns to gas jar A because of the high concentration of bromine particles in A. Likewise,
air spreads into gas jar A at a greater rate than it returns to B because of the high concentration of air particles in B. A homogeneous pale brown mixture
forms in the two jars, and because this happens in a very short time, it suggests that the random movement of particles in gases is much more rapid than
diffusion in liquids. Performing the same experiment with the jars held vertically instead of horizontally slows down the rate of diffusion because of the
different densities of the gases: the less dense gas diffuses much faster into the more dense gas. The characteristic smell of cooking gas used in laboratories
can be detected when there is a leakage because the gas diffuses into the air.

Investigate the rates of diffusion of ammonia gas and hydrochloric acid gas
Soak a piece of cotton wool in concentrated solution of hydrochloric acid and another in concentrated ammonia solution.
Care should be taken while handling the two solutions because of their burning effect on the skin. Simultaneously insert the soaked cotton wool pieces
at the opposite ends of the horizontal glass tube and cork it.
Observation and Explanation
A white deposit of ammonium chloride forms on the walls of the glass tube in the region nearer the end containing the hydrochloric acid. This suggests that
although both gases diffused, the ammonia gas did so at a higher rate than the hydrochloric acid gas.
Conclusion
Different gases have different rates of diffusion. A gas of high density has heavier particles or molecules, hence moves more slowly than a
lighter one.
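This dependence on particle mass is often summarised by Graham's law, which states that the rate of diffusion of a gas is inversely proportional to the square root of its molar mass. Graham's law is not named in the experiment above, so the sketch below is an extension: it applies the law to ammonia (molar mass about 17 g/mol) and hydrogen chloride (about 36.5 g/mol).

import math

def rate_ratio(molar_mass_1, molar_mass_2):
    """Graham's law: rate1 / rate2 = sqrt(M2 / M1)."""
    return math.sqrt(molar_mass_2 / molar_mass_1)

M_NH3, M_HCL = 17.0, 36.5  # g/mol, approximate molar masses
print(rate_ratio(M_NH3, M_HCL))  # about 1.47: ammonia diffuses roughly 1.5 times faster,
# which is why the white ring of ammonium chloride forms nearer the hydrochloric acid end.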

Speed of Diffusion
Gases: Diffusion occurs much faster in gases than in liquids. This is because gas particles are further apart, with more kinetic energy and fewer
collisions between particles, allowing them to spread quickly.
Liquids: In liquids, particles are closer together and move more slowly due to intermolecular forces, leading to slower diffusion rates.

Molecular Interaction:

Gases: In gases, the interaction between molecules is minimal because the particles are far apart, leading to fewer collisions and a faster spread of
particles.
Liquids: In liquids, molecules interact more frequently due to closer proximity, which hinders the free movement of particles and slows down
diffusion.

Density and Medium Resistance:


Gases: The lower density of gases means there is less resistance to the movement of particles, which facilitates faster diffusion.
Liquids: The higher density and viscosity of liquids create more resistance, slowing down the diffusion process.

Linking diffusion to biological processes: Transpiration and Osmosis

Transpiration: Transpiration is the process by which water vapor diffuses from plant leaves into the atmosphere through stomata.
The rate of transpiration depends on factors similar to diffusion, such as temperature, humidity, and the concentration gradient between the inside of
the leaf and the surrounding air.
In this process, water molecules move from areas of high concentration inside the leaf (where water is abundant) to low concentration outside the leaf
(in the air), driven by diffusion.

Osmosis: Osmosis is a special type of diffusion involving the movement of water molecules across a semi-permeable membrane from an area of lower
solute concentration to an area of higher solute concentration.
In biological systems, osmosis is crucial for maintaining cell turgor pressure, nutrient absorption, and waste removal. For example, in plant roots,
water diffuses into root cells via osmosis because the soil has a higher water potential (lower solute concentration) compared to the inside of the root
cells (higher solute concentration).

Diffusion is a fundamental process that is faster in gases than in liquids due to differences in particle movement and density. Temperature, concentration
gradients, molecular size, viscosity, and the nature of the diffusing substance all influence the rate of diffusion in fluids. In biological systems, diffusion
plays a critical role in processes like transpiration in plants and osmosis in both plants and animals, which are vital for maintaining homeostasis and
supporting life functions.

1.4 EFFECTS OF FORCES


Learning Outcomes
a) Know that a force is a push or a pull and that the unit of force is the Newton(k)
b) Know the effects of balanced and unbalanced forces on objects (k,s)
c) Understand the existence of the force of gravity and distinguish between mass and weight(u)
d) Appreciate that the weight of a body depends on the size of the force of gravity acting upon it (k, u,v/a)
e) Understand the concept of friction in everyday life contexts (u)
f) Understand the meaning of adhesion and cohesion as forms of molecular forces (u)
g) Explain surface tension and capillarity in terms of adhesion and cohesion and their application (u, v/a)

THE EFFECTS OF FORCES

Force is a physical quantity that causes an object to change its state of motion (speed up, slow down, remain stationary, or change direction) or alter
its shape. It is a vector quantity, which means it has both magnitude (size) and direction. The unit of force in the International System of Units (SI) is
the newton (N).

One newton is defined as the force required to accelerate a 1-kilogram mass by 1 meter per second squared (1 N = 1 kg m/s²).
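This definition rests on the relation F = ma. The tiny sketch below simply checks it numerically; the function name is illustrative.

# One newton: the force that gives a 1 kg mass an acceleration of 1 m/s^2 (F = m * a).
def force_newtons(mass_kg, acceleration_ms2):
    return mass_kg * acceleration_ms2

print(force_newtons(1, 1))  # 1 N, by definition
print(force_newtons(2, 3))  # 6 N is needed to accelerate a 2 kg mass at 3 m/s^2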

Two natural forces that we have experienced are the force of gravity and magnetic forces. These two forces act at a distance and do
not require direct contact between the objects to function. Gravity produces a force that pulls objects towards each other, like a person towards the
ground. It is the force that keeps the Earth revolving around the sun and it's what pulls you toward the ground when you fall. Magnetism produces a
force that can either pull opposite ends of two magnets together or push the matching ends apart. A magnet also attracts (or pulls toward) objects
made of metal.

Variety of forces and instances where they occur


Gravitational Force: The force of attraction between two masses. It keeps planets in orbit around the sun and causes objects to fall to the ground.
The Earth's gravity pulling an apple down from a tree. Gravity is a natural force that pulls objects toward each other. It is one of the four fundamental
forces of nature, and it is responsible for the attraction between masses. Sir Isaac Newton described the concept of gravity in his law of universal
gravitation, which states that every mass in the universe attracts every other mass with a force that is proportional to the product of their masses
and inversely proportional to the square of the distance between their centers.
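Newton's law of universal gravitation can be written F = G m1 m2 / r². The short sketch below evaluates it with commonly quoted approximate values for the gravitational constant and for the Earth's mass and radius, and recovers the familiar result of roughly 9.8 N of gravitational pull per kilogram near the Earth's surface; the numbers are approximations used for illustration.

# F = G * m1 * m2 / r**2  (Newton's law of universal gravitation)
G = 6.674e-11      # N m^2 / kg^2, gravitational constant (approximate)
M_EARTH = 5.97e24  # kg, approximate mass of the Earth
R_EARTH = 6.371e6  # m, approximate radius of the Earth

def gravitational_force(m1, m2, r):
    """Attractive force between two masses separated by distance r."""
    return G * m1 * m2 / r**2

print(gravitational_force(M_EARTH, 0.1, R_EARTH))  # about 0.98 N on a 100 g apple
print(gravitational_force(M_EARTH, 1.0, R_EARTH))  # about 9.8 N per kilogram near the surface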

Importance of gravity
Gravity is what keeps planets in orbit around stars, moons in orbit around planets, and holds galaxies together. On Earth, gravity gives weight to
physical objects and causes objects to fall toward the ground when dropped.

Distinguishing between Mass and Weight


Mass is a measure of the amount of matter in an object. It is a scalar quantity, meaning it has magnitude but no direction. The SI unit of mass is the
kilogram (kg). Mass is an intrinsic property of an object and does not change regardless of location (whether on Earth, the Moon, or in space). It
represents how much "stuff" is in an object, and it is directly proportional to the amount of inertia the object has (resistance to changes in its state
of motion). Therefore, mass is a measure of the amount of inertia contained in an object.

Weight is the force exerted on an object due to gravity. It is a vector quantity, meaning it has both magnitude and direction (toward the center of
the gravitational source). The SI unit of weight is the Newton (N), the same as any other force. Weight is calculated by multiplying the mass of an
object by the acceleration due to gravity at a given location: W = mg, where W is the weight of the object, m is the mass of the object, and g is the
acceleration due to gravity (approximately 10 m/s² on Earth). Example: An object with a mass of 10 kg has a weight of 10 kg × 10 m/s² = 100 N on Earth.

Differences
Mass is constant regardless of where an object is located, while weight varies depending on the gravitational field strength (which changes
depending on where you are, such as on Earth, the Moon, or in space). Mass is a measure of the amount of matter while weight is the force
exerted by gravity on that mass.

On Earth: A person with a mass of 70 kg will have a weight of 70 kg × 10 m/s² = 700 N. This means that the Earth's gravity is pulling the person
downward with a force of 700 newtons.
On the Moon: The gravitational acceleration on the Moon is about 1.6 m/s², which is approximately 1/6th of Earth's gravity. The same 70 kg person
on the Moon would have a weight of 70 kg × 1.6 m/s² = 112 N. Although the person's mass remains 70 kg, their weight decreases because the Moon's
gravitational force is weaker.
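The same comparison can be set out as a short calculation of W = mg, using the rounded values of g quoted above (10 m/s² on Earth and about 1.6 m/s² on the Moon); the function and variable names are illustrative.

# Weight W = m * g: the mass stays the same, but g (and hence the weight) changes.
G_EARTH = 10.0  # m/s^2, rounded value used in this text
G_MOON = 1.6    # m/s^2, approximate lunar surface gravity

def weight(mass_kg, g):
    return mass_kg * g

mass = 70.0  # kg, the same person in both places
print("Weight on Earth:", weight(mass, G_EARTH), "N")  # 700.0 N
print("Weight on Moon: ", weight(mass, G_MOON), "N")   # 112.0 N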

Why weight depends on the force of gravity


Gravitational Force Varies: The gravitational force exerted by a massive body like Earth depends on the mass of the planet and the distance from
its center. The stronger the gravitational force (g), the greater the weight of an object. For example, Earth’s gravity is stronger than the Moon’s because
Earth has more mass. As a result, an object will weigh more on Earth than on the Moon, even though its mass remains the same. Since weight is
directly proportional to the gravitational force, any change in g results in a corresponding change in weight. If gravity is stronger (higher g), the
weight increases. If gravity is weaker (lower g), the weight decreases. The object weighs less on the Moon because the Moon’s gravitational force is
weaker.

Outer Space: In the microgravity environment of outer space, where g is nearly zero, an object’s weight would be nearly zero, making it effectively
"weightless." However, its mass remains unchanged. This is why objects weigh differently on Earth, the Moon, or other celestial bodies, even though
their mass remains constant. The stronger the gravitational force, the greater the weight; the weaker the gravitational force, the lesser the weight.

Frictional Force: Friction is the force that opposes the relative motion or tendency of such motion of two surfaces in contact. It acts parallel to the
surfaces and opposite to the direction of motion.

Types of Friction:
Static Friction: The frictional force that prevents two surfaces from sliding past each other. It must be overcome for motion to start.
Kinetic (Sliding) Friction: The frictional force that opposes the movement of two surfaces sliding past each other.
Rolling Friction: The frictional force that opposes the rolling of an object over a surface.
Fluid Friction: The frictional force that opposes the movement of an object through a fluid (liquid or gas).

Friction in Everyday Life


Walking and Running: Friction between your shoes and the ground allows you to push off the ground and move forward. Without friction, your
feet would slip, making walking or running impossible. On a slippery surface like ice, the friction is reduced, making it difficult to walk without
slipping.
Driving and Braking: The friction between a car’s tires and the road allows the car to move forward when the wheels rotate. When you press the
brake pedal, the brake pads apply friction to the wheels, slowing the car down. On a wet or icy road, friction is reduced, which can cause the tires to
skid, making it harder to control the vehicle.
Holding Objects: When you grasp an object, friction between your fingers and the object helps you hold onto it. The rougher the surface, the more
friction, making it easier to grip. It is easier to hold a dry, rough object like a piece of wood than a wet, smooth object like a glass bottle.
Writing and Drawing: Friction between the tip of a pen or pencil and the paper allows you to write or draw. The friction causes the writing
material (ink or graphite) to leave a mark on the paper. Writing on a smooth, glossy surface like glass is difficult because of the lack of friction.

Heating by Friction: When two surfaces rub against each other, the friction between them can convert kinetic energy into heat. This is why rubbing
your hands together quickly can make them warm. In some traditional methods of starting a fire, sticks are rubbed together to create heat by friction,
eventually igniting dry leaves or tinder.
Sports and Athletics: In many sports, friction plays a crucial role. For example, the grip of athletes' shoes on the track or field affects their
performance. The design of sports equipment like balls and bats also considers friction. Soccer players wear cleats with spikes to increase friction
between their shoes and the grass, improving their grip and preventing slipping.

Furniture and Appliances: Moving heavy furniture across a floor involves friction. Sliding friction occurs when you push the furniture, making it
difficult to move. Rollers or casters reduce friction, making it easier to move heavy objects. Placing furniture sliders under heavy items like sofas
reduces friction, allowing you to push or pull them with less effort.

Factors affecting Friction


Surface Texture: Rougher surfaces have higher friction because they have more microscopic bumps and irregularities that catch on each other.
Sandpaper has more friction than smooth paper because of its rough texture.
Normal Force: The greater the force pressing two surfaces together, the higher the friction. This is why heavier objects experience more friction.
Pushing a heavy box across the floor is harder than pushing a lighter one because the heavier box presses down more, increasing friction.
Surface Area: While surface area does not directly affect friction in most cases (since friction is generally independent of the contact area), it can
play a role in certain situations, such as when dealing with pressure-sensitive surfaces. Wide tires on a car can increase friction slightly, improving
grip on certain terrains.
Materials in Contact: Different materials have different coefficients of friction. For instance, rubber on asphalt has high friction, while ice on metal
has low friction. Car tires are made of rubber, which provides good friction on asphalt, while skates are designed to minimize friction on ice.

Reducing and Increasing Friction


Lubrication: Applying a lubricant like oil or grease between two surfaces reduces friction by creating a thin layer that separates the surfaces,
allowing them to slide more easily. Oil in a car engine reduces friction between moving parts, preventing wear and tear.
Friction as a Nuisance: Friction causes wear and tear on machine parts, leading to maintenance issues. It reduces efficiency by converting useful
kinetic energy into unwanted heat.

Types of contact forces


There are six kinds of forces which act on objects when they come into contact with one another. Remember, a force is either a push or a pull. The six
contact forces are: normal force, applied force, frictional force, tension force, spring force, and air resistance (resisting) force. Magnetic and electrostatic
forces, included below for comparison, act at a distance and are therefore non-contact forces.
Tension Force: The force transmitted through a string, rope, cable, or wire when forces acting from opposite ends pull it tight. The force in a rope
holding a hanging object like a chandelier.
Normal Force: The support force exerted upon an object that is in contact with another stable object, example a book resting on a table experiences
a normal force from the table that balances its weight.
Applied Force: A force that is applied to an object by a person or another object, like in pushing a door open or pulling a cart.
Air Resistance Force: A type of frictional force that acts against an object as it moves through the air like in a parachute slowing down a skydiver’s
descent.
Magnetic Force: The force of attraction or repulsion that arises between electrically charged particles because of their motion. It is related to the
magnetic fields generated by magnets. Example A magnet sticking to a refrigerator door.

Electrostatic Force: The force between charged particles. Like charges repel, while opposite charges attract. Example a balloon sticking to a wall after
being rubbed on hair.

Categorizing Forces
Contact and Non-Contact Forces
Contact Forces: These forces occur when two objects are physically touching each other. Examples:
Frictional Force: When you slide a book across a table.
Tension Force: A rope pulling a car.
Normal Force: A chair pushing up against your body as you sit.
Applied Force: Pushing a shopping cart.
Non-Contact Forces: These are forces that act even when the objects are not physically touching. Examples:
Gravitational Force: The Earth's gravity pulling objects toward its center.
Magnetic Force: A magnet attracting a metal object from a distance.
Electrostatic Force: A charged balloon attracting small pieces of paper without touching them.

Demonstrating the Effects of Forces on Objects


Forces can affect objects in several ways:
Change in Speed: A force can cause an object to accelerate (increase speed), decelerate (decrease speed), or come to a stop. Example: applying the
brakes to a moving car reduces its speed due to friction.
Change in Direction: A force can change the direction in which an object is moving. Example: a soccer player kicking a ball causes it to change
direction.
Change in Shape: A force can deform an object, temporarily or permanently. Example: squeezing a rubber ball changes its shape, and stretching a
rubber band elongates it.
Start or Stop Motion: A force can set an object in motion from rest or bring a moving object to a halt. Example: pushing a stationary box causes it
to start moving, while friction can stop it from moving.

Examples of the Effects of Forces


Change of Movement:
Acceleration: Pressing the gas pedal in a car increases the speed of the car.
Deceleration: Applying brakes on a bicycle slows it down.
Change of Shape:
Compression: Squeezing a sponge compresses it, reducing its volume.
Tension: Stretching a rubber band increases its length.
Stopping an Object:
Frictional Force: A sliding book eventually stops due to the frictional force between the book and the table surface.
Air Resistance: air resistance slows down a falling object like a feather.

Molecular Behavior of Adhesion and Cohesion


Cohesion: Cohesion is the attractive force between molecules of the same substance. It is the force that holds molecules together within a material,
like water molecules sticking to each other.

Molecular Behavior: Cohesion occurs because of intermolecular forces, such as hydrogen bonding, Van der Waals forces, or dipole-dipole
interactions. In water, for instance, hydrogen bonds between the slightly positive hydrogen atoms of one water molecule and the slightly negative
oxygen atoms of another water molecule create strong cohesive forces.
Adhesion: Adhesion is the attractive force between molecules of different substances. It is the force that causes one substance to stick to another,
such as water molecules sticking to a glass surface.
Molecular Behavior: Adhesion occurs due to similar intermolecular forces as cohesion, but they act between different substances. For example,
the polar nature of water allows it to form hydrogen bonds with the molecules of other polar substances, such as glass, which leads to adhesion.

Molecular Mechanisms behind Cohesion and Adhesion

Cohesion in Water: Water molecules are polar, with a partial negative charge on the oxygen atom and partial positive charges on the hydrogen
atoms. These polar charges allow water molecules to form hydrogen bonds with each other, leading to strong cohesive forces. This cohesion is
responsible for phenomena like surface tension, where the surface of water resists external force because the water molecules are strongly attracted
to each other.

Adhesion of Water to other surfaces: When water meets a different material, such as glass, the polar water molecules can form hydrogen bonds
with the polar molecules of the glass surface. This interaction creates adhesion, which can be observed when water spreads out on a glass surface
rather than forming droplets.

Practical implications of Cohesion and Adhesion


Capillary Action: Capillary action is the ability of a liquid to flow in narrow spaces without the assistance of external forces (like gravity). It occurs
due to the combination of cohesive and adhesive forces.
Molecular Behavior: Adhesion causes the liquid to cling to the walls of a narrow tube or pore, while cohesion pulls other liquid molecules along,
leading to the upward movement of the liquid.
Plants: Capillary action helps transport water from the roots to the leaves in plants through thin tubes called xylem.
Paper Towels: When you dip a paper towel in water, the water moves upward against gravity due to capillary action.
Wetting: Wetting occurs when a liquid spreads out over a solid surface, which is a balance between cohesion (which resists spreading) and adhesion
(which promotes spreading).
Molecular Behavior: If the adhesive forces between the liquid and the surface are stronger than the cohesive forces within the liquid, the liquid
will spread out (wetting the surface). If cohesive forces are stronger, the liquid will bead up.
Water on Glass: Water wets glass surfaces because the adhesive forces between water and glass are stronger than the cohesive forces within the
water.
Waterproof Materials: On surfaces coated with hydrophobic (water-repellent) materials, water beads up instead of spreading out due to weak
adhesion and strong cohesion.
Meniscus Formation: The meniscus is the curve seen at the surface of a liquid in response to its container.
Molecular Behavior: The shape of the meniscus depends on the balance between cohesion and adhesion. If adhesion is stronger (as with water in
glass), the liquid will curve upwards (concave meniscus). If cohesion is stronger (as with mercury in glass), the liquid will curve downwards (convex
meniscus).
Water in a Glass Tube: Water forms a concave meniscus in a glass tube because it adheres strongly to the glass.
Mercury in a Glass Thermometer: Mercury forms a convex meniscus because it has stronger cohesive forces than adhesive forces with the glass.

Applications and Implications
In Biology: Plant Transport: Adhesion and cohesion are vital for the movement of water and nutrients in plants.
Cell Membranes: Adhesion and cohesion play roles in the behavior of water and other substances at cellular surfaces.
Paints and Coatings: The effectiveness of paints and coatings depends on adhesion to surfaces, which is why surface preparation is crucial.
Adhesives: Glues and tapes rely on strong adhesive forces to stick to surfaces and cohesive forces to stay intact.

In Technology: Microfluidics: The design of devices that manipulate small amounts of liquids relies heavily on controlling adhesion and cohesion
at small scales.
Printing Technologies: Ink adhesion to paper or other materials is crucial for clear and lasting prints.

Behavior of liquids on the surface


When water is dropped on a glass surface, it wets the glass and spreads out in a thin layer because the adhesive force between the water molecules and the
glass is greater than the cohesive force between the water molecules. When mercury is dropped on a glass surface, it forms spherical droplets or a large
flattened drop because the cohesive forces between mercury molecules are greater than the adhesive forces between mercury and glass.

Surface Tension: This is the effect of force on the surface of a liquid, which makes it behave like a stretched elastic skin. Surface
tension is the result of the cohesive forces between liquid molecules at the surface. It causes the surface of a liquid to behave like a stretched elastic
membrane. At the surface of a liquid, molecules experience a net inward force because they are only attracted to other molecules beside and below
them. This creates a tension on the surface. Surface tension allows water to form droplets on surfaces. Small objects, like a paper clip, can float on the
surface of water if they do not break the surface tension.

Other effects of surface tension


One significant effect of surface tension is its ability to support small objects, such as water striders, allowing them to "float" on the surface without
sinking. This occurs because the surface tension creates a sort of elastic layer that resists external forces. Surface tension plays a role in various
scientific and industrial applications, including the development of pore structures in materials like coal, where it influences adsorption and
wettability.

Explanation of surface tension


Surface tension arises because the molecules on a liquid surface are slightly further apart than those in the bulk, like the atoms in a stretched wire. They
therefore experience net attractive forces from their neighbours in the liquid surface. These forces stretch the surface layer of molecules, making it behave
like a stretched elastic skin.

Ways of reducing of surface tension


Reducing the surface tension of water can be achieved through several methods that do not involve detergents. One effective approach is to mix vinegar
with water. Adding a tablespoon or two of vinegar to a bowl before filling it with water significantly decreases surface tension, resulting in fewer
bubbles forming on the surface.
Another method is to heat the water. Increasing the temperature lowers the surface tension, making it easier for the water to spread and interact with
other substances. Spreading oil or oily compounds on the water's surface can also reduce surface tension, as these substances disrupt the cohesive
forces between water molecules. Electrifying the water can lead to a reduction in surface tension. This method alters the molecular interactions within
the water, promoting a more fluid surface.

Experiments to demonstrate surface tension
Some water is poured into a clean trough.
It is left to settle and a piece of filter paper (blotting paper) is placed on the water surface.
A needle (or pin) is carefully placed on top of the filter paper. As the paper becomes soaked it sinks, leaving the needle resting on the water surface.

Observation
The needle floats on the surface of the water and remains floating so long
as the water surface is not broken.
When the surface of the water where the needle lies is observed
carefully (a magnifying lens would help), the water surface is found
to be slightly depressed and stretched like an elastic skin. When drops of paraffin or soap solution are put on the surface of the water around the
needle, the needle sinks on its own after a few seconds. If, alternatively, the tip of the needle is depressed lightly into the water, the needle sinks very
quickly to the bottom of the water.

Explanation
The steel needle or the razor blade floats because the surface of the water
behaves like a fully stretched, thin, elastic skin. This skin always has a tendency to shrink, i.e., to have a minimum surface area or elastic membrane.
The force which causes the surface of a liquid to behave like a stretched elastic skin is called surface tension. This force is due to the force of attraction
between individual molecules of the liquid (cohesion).
The needle or the blade sinks when a drop of kerosene or soap solution is put in the liquid near the needle because the kerosene or soap solution
reduces the surface tension of the water. When the tip of the needle is pressed into the water, it pierces the surface skin and sinks.

CAPILLARITY/CAPILLARY ACTION

This is the rise or depression of a liquid in a capillary tube or narrow bore.
The rise of water in a capillary tube occurs because the cohesive force between the water molecules is less than the adhesive force between the molecules
of glass and water. It is also for this reason that water spreads over a glass surface. When similar capillary tubes are dipped in mercury, each surface is
depressed below the outside level in the beaker and the surface curves downwards.
Mercury is depressed more in a narrow tube than in a wide one. This is because the cohesive forces between molecules of mercury are greater than the
adhesive forces between molecules of mercury and glass. It is also for this reason that mercury does not wet glass but forms droplets on it.
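For a liquid that wets the tube, the height of capillary rise is often estimated with Jurin's relation, h = 2γ cos θ / (ρ g r). This relation is not derived in the text above; the sketch below is an illustrative estimate that assumes clean glass (contact angle close to 0°) and typical approximate values for water.

import math

def capillary_rise(surface_tension, radius, density=1000.0, g=9.8, contact_angle_deg=0.0):
    """Jurin's relation: h = 2 * gamma * cos(theta) / (rho * g * r)."""
    theta = math.radians(contact_angle_deg)
    return 2 * surface_tension * math.cos(theta) / (density * g * radius)

GAMMA_WATER = 0.072  # N/m, approximate surface tension of water at room temperature
print(capillary_rise(GAMMA_WATER, 0.0005))  # ~0.029 m (about 3 cm) in a tube of 0.5 mm radius
print(capillary_rise(GAMMA_WATER, 0.0001))  # ~0.147 m: a narrower tube gives a higher rise

The second line shows the key point of the section: the narrower the bore, the higher the liquid rises.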

Application of capillarity
Capillarity, or capillary action, is a physical phenomenon where liquids move through narrow spaces without the assistance of external forces like
gravity. This occurs due to the interplay of cohesive and adhesive forces, allowing liquids to rise or fall in small tubes or porous materials. One of the
most common applications of capillarity is in everyday writing instruments, such as pens, where ink rises through the nib due to capillary action.
Capillarity plays a crucial role in the natural world; for instance, it enables water to travel from the roots to the leaves of plants, ensuring their survival.
Other applications include the functioning of blotting paper, which absorbs ink through capillary action, and the operation of kerosene lamps, where
the fuel rises through the wick to sustain the flame.

Disadvantages of capillarity
One significant drawback is its tendency to mix liquids unintentionally. In situations where two liquids must be kept separate, capillary action can
lead to unwanted mixing, undermining the desired outcome.
Controlling the flow rate of liquids moved by capillary action can also be challenging. The rate at which a liquid moves through narrow spaces depends
on several factors, including the tube diameter and the properties of the liquid.
House bricks and concrete are porous, so capillary action is likely to draw water upwards from the ground through them, making the building damp
(wet). This problem is overcome by laying a damp-proof course, a waterproof layer made from plastic, in the layers of bricks at the bottom of the house.

1.5 TEMPERATURE MEASUREMENTS


Learning Outcomes
a) Understand the difference between heat and temperature (u)
b) Understand how temperature scales are established (u)
c) Calibrate a thermometer and use it to measure temperature (s, u)
d) Compare the qualities of thermometric liquids (u, s, v/a)
e) Describe the causes and effects of the daily variations in atmospheric temperature (u, v/a)

Introduction
Heat and temperature are closely related but fundamentally different concepts in thermodynamics and in daily life. Heat is a form of energy transferred
between two systems or objects due to a temperature difference. It is energy in transit and flows from a hotter object to a cooler one. Units: joules (J) or calories
(cal). Temperature is a measure of the average kinetic energy of the particles in a substance. It represents how hot or cold an object is, independent
of the object's size or material.
Unit: degrees Celsius (°C), Fahrenheit (°F), or kelvin (K). Symbol: T.
Heat is a type of energy: it is a quantity that flows into or out of a system during processes such as conduction, convection, or radiation. Temperature is
not energy but a scalar measure of thermal intensity; it reflects the energy state of a system.

Heat depends on: the temperature difference (heat flows down a gradient, from higher to lower temperature); the mass (larger masses contain more thermal energy
at the same temperature); and the material properties (the specific heat capacity determines how much heat a material absorbs or releases).
Temperature is independent of the object's mass or material; it measures only the energy per particle. Heat is measured using a calorimeter by
observing changes in temperature and applying the formula Q = mcΔT, where m is the mass, c is the specific heat capacity, and ΔT is the temperature change.
Temperature is measured directly using a thermometer or other temperature sensors.
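As a quick, hedged illustration of the relation Q = mcΔT quoted above, the short Python sketch below computes the heat needed to warm a sample of water. The mass, specific heat capacity, and temperature values are illustrative assumptions, not data from this manuscript.

```python
# Minimal sketch of Q = m * c * delta_T (heat gained or lost by a sample).
# The numbers are illustrative assumptions: 0.5 kg of water, with c taken
# as roughly 4200 J/(kg K), warmed from 20 degC to 80 degC.

def heat_transferred(mass_kg, specific_heat, delta_t):
    """Return the heat in joules for a temperature change delta_t."""
    return mass_kg * specific_heat * delta_t

m = 0.5            # kg of water (assumed)
c = 4200           # J/(kg K), approximate value for water
delta_t = 80 - 20  # temperature rise in degrees Celsius

print(f"Heat required: {heat_transferred(m, c, delta_t):.0f} J")  # about 126000 J
```

The same relation, rearranged, is what a calorimeter measurement uses to find c when Q, m, and ΔT are known.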

Heat is transferred between objects or systems by: conduction, through direct contact, i.e. molecular vibrations; convection, through fluid movement; and
radiation, through electromagnetic waves. Temperature itself does not transfer; it is the driving factor for heat transfer. When you heat
water, energy (heat) flows from the flame to the water, increasing its internal energy. The temperature of the water increases as its particles move
faster, reflecting the rise in average kinetic energy.

How Temperature Scales Are Established
Temperature scales provide a standard for measuring temperature. The establishment of a temperature scale involves defining fixed reference points
and a method to interpolate between them.
Choice of Reference Points
Reference points are critical in defining a temperature scale. These are specific, reproducible temperatures based on natural phenomena:

Fixed Points
Fixed points in a thermometer are essential reference temperatures used for calibration. Typically, two primary fixed points are established: the
freezing point of water (the ice point) (0°C or 32°F) and the boiling point of water (the steam point) (100°C or 212°F). These points are chosen due to
their reproducibility and ease of measurement, making them reliable benchmarks for temperature scales. When constructing a thermometer, these
fixed points help define the scale. The length of the liquid column in the thermometer at these temperatures allows for the accurate measurement of
intermediate temperatures. For instance, the distance between the freezing and boiling points, called the fundamental interval, can be divided into equal
intervals, creating a consistent scale for temperature readings.
The ice point is defined as the temperature at which pure melting ice exists at normal atmospheric pressure, typically 0°C (32°F). The steam point is
the temperature at which pure water boils, usually at 100°C (212°F) under the same conditions.
In some advanced thermometers, additional fixed points may be utilized, such as the triple point of water, which occurs at approximately 0.01°C
(273.16 K) and is used for more precise measurements. Fixed points are crucial for the accurate functioning of thermometers, allowing for standardized
temperature measurement in scientific and industrial contexts.

 Freezing Point of Water: The temperature at which pure water freezes under standard atmospheric pressure (0°C or 273.15 K).
 Boiling Point of Water: The temperature at which pure water boils under standard atmospheric pressure (100°C or 373.15 K).
 Triple Point of Water (273.16 K): A unique condition where water coexists as a solid, liquid, and gas. This is a universal reference point
for the Kelvin scale.
 Absolute Zero: The theoretical temperature where particle motion ceases, defined as 0 Kelvin (K) or −273.15°C.

Division of the Scale / Thermometer scales


Once the reference points are chosen, the temperature range between them is divided into equal intervals:
 Celsius Scale (°C): Defined based on the freezing point (0°C) and boiling point (100°C) of water, divided into 100 equal parts.
 Fahrenheit Scale (°F): Defined with the freezing point of water at 32°F and boiling point at 212°F, divided into 180 equal parts.
Conversion: T(°F)=T(°C)×9/5 +32
 Kelvin Scale (K): Based on absolute zero (0 K) and the triple point of water (273.16 K).
One Kelvin is the same size as one Celsius degree. Conversion: T(K)=T(°C)+273.15
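The conversions quoted above are easy to check with a couple of lines of code; the Python sketch below is a minimal illustration of the Celsius-to-Fahrenheit and Celsius-to-Kelvin relations given in this section.

```python
# Minimal sketch of the temperature-scale conversions given above.

def celsius_to_fahrenheit(t_c):
    return t_c * 9 / 5 + 32    # T(degF) = T(degC) x 9/5 + 32

def celsius_to_kelvin(t_c):
    return t_c + 273.15        # T(K) = T(degC) + 273.15

for t_c in (0, 37, 100):       # ice point, body temperature, steam point
    print(f"{t_c} degC = {celsius_to_fahrenheit(t_c)} degF "
          f"= {celsius_to_kelvin(t_c)} K")
# 0 degC = 32.0 degF = 273.15 K, and 100 degC = 212.0 degF = 373.15 K
```

Because one kelvin is the same size as one Celsius degree, the Kelvin conversion is simply a shift of the scale's zero.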

Use of Thermometric Properties


Thermometric properties are essential for measuring temperature across various fields, including manufacturing, scientific research, and medicine.
These properties refer to physical characteristics that change with temperature, such as volume, resistance, and pressure. For instance, thermal
expansion is a common thermometric property utilized in everyday applications, such as in thermometers and temperature-sensitive devices.
 In manufacturing, precise temperature control is crucial for processes like metal forging and chemical reactions.
 In scientific research, thermometric properties enable accurate data collection in experiments, ensuring reliable results.

 In the medical field, thermometers that utilize these properties are vital for monitoring patient health, particularly in diagnosing fevers or
other conditions.
 Additionally, advancements in luminescence thermometry, which leverages the properties of materials like Nd3+ doped phosphors,
showcase the growing potential of thermometric techniques.
Temperature scales are tied to measurable physical properties that change consistently with temperature.

Thermometric properties
Volume of a Liquid (e.g., mercury or alcohol in a thermometer): Liquids expand linearly with temperature.
Electrical Resistance: Resistance of a material (e.g., platinum in a resistance thermometer) changes with temperature.
Gas Pressure: The pressure of an ideal gas at constant volume is directly proportional to temperature.
Radiation Intensity: The intensity of thermal radiation emitted by an object relates to its temperature (used in pyrometers).

LOWER FIXED POINT


This is the temperature of pure melting ice at standard atmospheric pressure. The standard atmospheric pressure is 76 cmHg (760 mmHg). On the
Fahrenheit scale the lower fixed point is 32 °F, on the Celsius scale it is 0 °C, and on the Kelvin scale it is 273 K.

DETERMINATION OF LOWER FIXED POINT


Procedure;
a) The filter funnel is supported on a retort stand as shown above.
b) A thermometer is placed in the funnel and surrounded with pure melting ice.
c) The thermometer is adjusted so that the mercury thread is clearly seen.
d) The point at which the mercury thread is steady is marked off with a scratch as the lower fixed
point.

UPPER FIXED POINT


This is the temperature of steam above water boiling under standard atmospheric pressure. On the Celsius scale it is 100 °C, on the Kelvin scale it is 373 K,
and on the Fahrenheit scale it is 212 °F.
Determination of upper fixed point using a hypsometer
A hypsometer is a double-walled vessel made from a round-bottomed flask.
Procedure
a) Partly fill vessel with water and arrange the apparatus as in the
diagram.
b) Gently heat water in vessel using a Bunsen flame to its boiling
point.
c) Adjust the thermometer so that mercury thread is seen clearly
when water is boiling.
d) Mark the steady point of mercury thread as the upper fixed
point.
e) With the upper and lower fixed points marked on the
thermometer, the distance between them is divided into 100
equal degrees so that the thermometer gets the Celsius scale. Hence, it is said to be calibrated.

Using an uncalibrated thermometer to measure temperature:
The fundamental interval is the difference between the upper fixed point and the lower fixed point. It is
divided into one hundred equal parts to calibrate the thermometer on the Celsius scale, and each part is called a degree.

Activity:
1. The top of the mercury thread of a given thermometer is 3 cm from the ice point. If the fundamental interval is 5 cm,
determine the unknown temperature θ. (Answer: 60 °C; a worked sketch of this calculation follows the list.)
2. The lengths of the mercury thread at the lower fixed point and the upper fixed point are 2 cm and 8 cm respectively for a certain
liquid X. Given that the length of the mercury thread at an unknown temperature θ is 6 cm, determine the value of θ.
3. Find the temperature in °C if the length of the mercury thread is 7 cm from the ice point and the fundamental interval is 20 cm.
4. Find the unknown temperature θ given the following lengths of mercury.
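A minimal worked sketch of the calibration calculation: because the fundamental interval is divided into 100 equal degrees, the unknown temperature is the fraction of the interval covered by the mercury thread, multiplied by 100. The helper function below is illustrative (not from the manuscript) and checks question 1 of the Activity.

```python
# Unknown temperature on an uncalibrated thermometer:
# theta = (length of thread above the ice point / fundamental interval) x 100 degC.

def temperature_from_thread(length_above_ice_cm, fundamental_interval_cm):
    """Fraction of the 100 equal degrees covered by the mercury thread."""
    return (length_above_ice_cm / fundamental_interval_cm) * 100

# Activity question 1: thread is 3 cm above the ice point, interval is 5 cm.
print(temperature_from_thread(3, 5), "degC")   # 60.0 degC, as given
```

For question 2, the length above the ice point is 6 cm - 2 cm and the fundamental interval is 8 cm - 2 cm, so the same fraction gives θ of about 66.7 °C.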

Thermometric liquids and their properties


Thermometric liquids are essential for accurate temperature measurement in thermometers. The most common examples include mercury and
ethanol.
Historically, mercury was favored due to its high boiling point and low freezing point, allowing it to remain liquid over a wide temperature
range. However, its toxicity has led to a decline in its use, with ethanol becoming a popular alternative, especially in household thermometers. A
good thermometric liquid must possess several key properties. It should have a high boiling point and a low freezing point to function effectively
across various temperatures. Additionally, it should be non-corrosive to the thermometer's container and non-toxic to ensure safety. Visibility is also
crucial; the liquid should be opaque or colored for easy reading. Uniform thermal expansion is another important characteristic, allowing for precise
temperature readings. These properties ensure that thermometric liquids provide reliable and accurate measurements in various applications.

Water is not used as thermometric liquid


Water is not used as a thermometric liquid due to several significant limitations. Firstly, its freezing point is 0 °C and its boiling point is
100 °C, which restricts it to a narrow range of temperatures; this makes it unsuitable for applications requiring measurements beyond these limits.
Water also does not expand uniformly with temperature (between 0 °C and 4 °C it actually contracts on warming, as described later under the
anomalous expansion of water), and its expansion per degree is small. This results in reduced sensitivity, making it less effective for precise
temperature readings than liquids like mercury or alcohol, which expand more and more uniformly. Moreover, water is transparent, making it
difficult to read measurements accurately in a thermometer; in contrast, liquids like mercury are opaque and provide clearer readings. These
factors collectively explain why water is not a preferred choice for thermometric applications.

CLINICAL THERMOMETER
A clinical thermometer is a specialized device designed to measure the body
temperature of humans and animals. Typically constructed as a long, narrow
glass tube with a bulb at one end containing mercury or another liquid, it
provides an accurate reading of temperature. The thermometer retains the
highest temperature reached until it is manually reset, making it a reliable
tool for diagnosing fever and monitoring health. The standard range for
clinical thermometers is usually between 35°C to 42°C (95°F to 107.6°F),
which covers the normal and elevated body temperatures. These
thermometers are essential in medical settings, as they help healthcare professionals assess a patient's condition effectively.
Digital and infrared thermometers have gained popularity due to their ease of use and quick readings. However, traditional clinical thermometers
remain a trusted choice for many, especially in home healthcare settings, ensuring accurate temperature monitoring. The glass from which the tube
is made is very thin, so body heat reaches the mercury quickly and the thermometer reads body temperature promptly. When the thermometer bulb is placed in the mouth
or armpit, the mercury expands and is forced past the constriction along the tube. When the thermometer is removed, the bulb cools and the mercury in it contracts
quickly; the mercury column breaks at the constriction, leaving mercury in the tube so the reading can still be taken. The constriction prevents the mercury from flowing back
into the bulb when the thermometer is temporarily removed from the patient's mouth or armpit. The thermometer is reset by shaking the mercury back into the bulb.

Effect of heat on matter


When a solid is heated, the cohesive forces between its molecules are weakened and the molecules begin to vibrate vigorously causing the solid to
change into a liquid state. The temperature at which a solid changes into liquid is called the melting point. At melting point the temperature remains
constant until the solid has melted. When the entire solid has melted and more heat is applied, the temperature rises. The heat gained weakens the
cohesive forces between the liquid molecules considerably causing the molecules to move faster until the liquid changes into gaseous state.
The temperature at which a liquid changes into gaseous state is called the boiling point. At boiling point temperature of the liquid remains constant
since heat supplied weakens the cohesive forces of attraction in liquid molecules.
As temperature increases, particles in solids, liquids, and gases move apart, causing the substance to expand. Gases expand the most, followed by
liquids, while solids expand the least. This principle is crucial in engineering and construction, where materials must accommodate changes in
temperature. Lastly, heat can induce chemical changes, altering the composition of substances. For example, heating can cause reactions such as
combustion or decomposition. These effects are essential in various fields, including chemistry, physics, and engineering, as they influence material
properties and behaviors.
If the heated substance is water its temperature rises with time as shown below

Properties/qualities of a thermometer
A thermometer is an essential instrument for measuring temperature, and its
effectiveness is determined by several key properties. Firstly, quick action is
crucial; a good thermometer should provide temperature readings in the shortest
time possible, allowing for immediate assessment. This is particularly important
in clinical settings where timely decisions are vital. Another significant quality
is sensitivity, which refers to the thermometer's ability to detect small changes in temperature. A sensitive thermometer can accurately reflect even
minor fluctuations, making it reliable for precise measurements. Low thermal capacity is important, as it ensures that the thermometer does not absorb
much heat, allowing it to respond quickly to temperature changes. Accuracy is paramount; a thermometer must provide measurements that closely

align with accepted standard values. All these qualities, namely quick action, sensitivity, low thermal capacity, and accuracy, make a thermometer a reliable tool for
various applications, from medical to scientific uses.

Atmospheric Temperature
Atmospheric temperature refers to the temperature at various levels of the Earth's atmosphere, influenced by factors such as solar radiation and
altitude. The atmosphere is divided into distinct layers: the
troposphere, stratosphere, mesosphere,
thermosphere, and exosphere, each exhibiting unique
temperature profiles. For instance, temperatures decrease
with altitude in the troposphere but increase in the
stratosphere due to the absorption of ultraviolet radiation.
Earth's average temperature has risen significantly, with an
increase of approximately 2°F since 1850. This warming
trend is linked to human activities and has profound
implications for global weather systems. Monitoring
atmospheric temperature helps scientists predict climate
changes and assess their impacts on ecosystems and human
life.

The Earth’s atmosphere is a mixture of gases and vapour (air), together with some amount of aerosols (dust, smoke, and condensation products of vapour). The percentage
composition of the main gases of a dry atmosphere changes only slightly up to an altitude of about 100 km (the homosphere). At an altitude of 20-25 km lies the ozone
layer, which protects living things on the Earth from harmful shortwave radiation. The proportion of light gases increases above 100 km (the heterosphere), and at very
high altitudes helium and hydrogen prevail; some molecules decay into atoms and ions, forming the ionosphere. One of the most important characteristics of the climate
is the humidity of the air, i.e. the content of water vapour in it. Absolute humidity can be defined either as the mass of water vapour per unit volume or as the mass
of water vapour per unit mass of dry air. Relative humidity is the ratio of the absolute humidity to the absolute humidity at saturation. The highest mean humidity at
the Earth's surface, about 30 g/m³ (absolute), occurs in the equatorial area; the lowest relative humidity (about 2 × 10⁻⁵ %) is found in Antarctica. Clouds are
aggregations of water drops and ice crystals suspended in the atmosphere. The diameters of cloud droplets are of the order of several micrometres (μm); enlarged drops
fall out as rain, snow, or hail. The size of raindrops varies from 0.5 to 6-7 mm; rainfall made of smaller drops is called drizzle. Cooling of air below 0 °C brings
about snowfall. Atmospheric pressure is the pressure of the air on objects within it and on the Earth's surface. The pressure at each point of the atmosphere is equal
to the weight of the air column lying above it; it is measured in pascals (1 Pa = 1 N/m²).
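As a small illustration of the definition of relative humidity given above, the sketch below computes it from an assumed absolute humidity and an assumed saturation value; both numbers are made up for illustration and are not measurements from this text.

```python
# Relative humidity = absolute humidity / absolute humidity at saturation.
# Both input values below are illustrative assumptions.

absolute_humidity = 15.0     # g of water vapour per m^3 of air (assumed)
saturation_humidity = 30.0   # g/m^3 that the same air could hold (assumed)

relative_humidity = absolute_humidity / saturation_humidity * 100
print(f"Relative humidity = {relative_humidity:.0f} %")   # 50 %
```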

1.6 HEAT TRANSFER
Learning Outcomes
a) Understand how heat energy is transferred and the rate at which
transfer takes place (u,s)
b) Understand what is happening at particle level when conduction,
convection, and radiation take place and their application (k, u,v/a)
c) Understand that greenhouse effect and global warming are aspects
related to heat transfer on the earth surface (u,v/a)

Introduction
Heat transfer is a fundamental concept in thermal engineering, focusing on the
generation, use, and exchange of thermal energy between physical systems. It occurs due to temperature differences, with heat naturally flowing from
hotter to cooler bodies. The three primary mechanisms of heat transfer are conduction, convection, and radiation. Conduction involves direct contact
between materials, allowing heat to transfer through molecular interactions. Convection, on the other hand, occurs in fluids where warmer areas rise and
cooler areas sink, creating a circulation pattern. Radiation is the transfer of heat through electromagnetic waves, allowing energy to travel through a
vacuum. Understanding heat transfer is crucial in various applications, from designing efficient heating systems to improving industrial processes.

Heat is a form of energy which is transferred from one place to another due to a difference in temperature between the two points.

MODES OF HEAT TRANSFER


Heat is transferred in three different ways, namely conduction, convection, and radiation.

Conduction
Conduction is the flow of heat from a region of high temperature to a region of low temperature through matter, without the movement of the matter as a whole.
In metals, for example, when one end is heated the molecules there vibrate faster about their mean positions and pass on the heat to the molecules in the cooler parts of
the metal. Free electrons, which move about within the metal, also carry energy from the hot end to the cold end. Heat conduction is best in metals and worst in gases,
because the molecules of a gas are so widely spaced that heat is passed on between them only very slowly.

Factors affecting conduction in metals


The conduction of heat and electricity in metals is influenced by several key factors. The material's nature plays a crucial role; pure metals typically
exhibit superior conductivity compared to those with impurities. Impurities disrupt the flow of electrons, reducing overall conductivity.
The physical dimensions of the metal, including its cross-sectional area and length, significantly affect conduction rates. A larger cross-sectional area
allows more electrons to flow, enhancing conductivity, while a longer length can impede it.
Temperature is a vital factor; as temperature increases, the vibrational energy of atoms rises,
which can scatter electrons and reduce conductivity. A greater temperature difference across
the material can enhance heat transfer. Understanding these factors is essential for
optimizing the performance of metals in various applications, from electrical wiring to heat
exchangers.

Experiment to compare conduction in metals


Procedure
a) Match sticks are fixed with wax to one end of each rod, and the rods are placed on a tripod
stand with their other (free) ends put together.

b) The free ends are heated together with a Bunsen flame.
c) Heat is conducted along each rod towards the waxed end.
d) The match stick on the copper rod drops off first, which shows that of all the metals tested, copper is the best conductor of heat.

Application of heat conduction


Heat conduction is a fundamental process that plays a crucial role in various everyday applications. One of the most common examples is cooking,
where heat transfers from a burner to metal utensils, allowing food to cook evenly. For instance, when frying an egg in a hot pan, the heat from the
stove is conducted through the pan's material, cooking the egg efficiently.
Another significant application of heat conduction is in medical treatments, such as using hot water bags to alleviate pain. The heat from the bag is
conducted through the skin, providing relief to sore muscles or joints.
Additionally, in electronics, heat conduction is vital for cooling systems in computers and servers. Heat generated by processors is dissipated through
conductive materials, preventing overheating and ensuring optimal performance.

Explain why metals feel colder when touched than bad conductors
Metals often feel colder to the touch than materials like wood or plastic, even when they are at the same room temperature. This phenomenon is
primarily due to their high thermal conductivity. Metals are excellent conductors of heat, meaning they can transfer heat away from your skin much
more efficiently than poor conductors. When you touch a metal surface, it quickly absorbs heat from your skin, creating a sensation of coldness. In
contrast, materials like wood do not conduct heat as effectively, so they retain more of your body heat, making them feel warmer. This difference in
heat transfer rates is why metals can feel significantly cooler than other materials, even if they are at the same temperature. The sensation of coldness
when touching metal is a result of its ability to draw heat away from your skin rapidly, contrasting with the slower heat transfer of less conductive
materials.

Note: Liquids and gases transfer heat very slowly. This is because their molecules are relatively far apart and they do not have free electrons as metals do, so
heat is passed on only by collisions between the molecules themselves.

Experiment to show that water is a poor conductor of heat


Procedure
A piece of ice, held down (for example by a small piece of wire gauze), is placed at the bottom of a test tube of water, and the tube is slanted as
shown in the diagram. The upper part of the tube is heated; convection currents are seen near the top and the water there begins to boil, yet the
ice at the bottom remains unmelted. This shows that water is a poor conductor of heat.

Convection

Convection is a fundamental process of heat transfer that occurs in fluids, including both liquids and gases. It involves the movement of heated particles
within the fluid, which creates a cycle of rising and sinking. When a fluid is heated, its molecules move faster and the heated fluid becomes less dense, causing it
to rise, while the cooler, denser fluid sinks, leading to a continuous circulation pattern. This process is essential in various natural phenomena and
everyday applications. For instance, convection is responsible for weather patterns, ocean currents, and even the functioning of household appliances
like ovens and heaters. In a lava lamp, for example, the heated wax rises and cools, creating a mesmerizing display of convection in action.

Experiment to demonstrate convection in liquids


Procedure
The apparatus is arranged as in the diagram above.
By means of a straw, potassium permanganate crystals are placed in the water at the bottom of the flask. When
heat is applied, purple streaks are observed moving upwards in the middle of the flask and
downwards at the sides of the flask in a circular pattern. The purple streaks show the convection
currents.

Explanation of convection currents


Convection currents are the movement of fluid caused by differences in temperature and
density, resulting in the transfer of heat. This process, also known as convection heat transfer,
occurs in liquids and gases, where warmer, less dense fluid rises while cooler, denser fluid
sinks. This cycle creates a continuous flow, effectively distributing heat throughout the fluid.
Convection currents can also be observed in a lava lamp, where heated wax rises and cools as
it reaches the top, creating a mesmerizing display. In nature, convection currents play a crucial role in weather patterns and ocean currents, influencing
climate and ecosystems. In the Earth's mantle, convection currents are responsible for the movement of tectonic plates, driving geological processes
such as earthquakes and volcanic activity. Understanding convection currents is essential in various fields, including meteorology, oceanography, and
geology, as they significantly impact our environment and daily life. In the flask experiment above, the hot water rising from the bottom is displaced by the
denser cold water sinking from the top; this continuous displacement of hot water by cold water sets up the convection currents seen as the purple loops.

Application of convection
In electric kettles: When warming a liquid, the heating element of an electric kettle is placed at the bottom.
Domestic hot water system: Cold water is supplied to the boiler along the cold-water
supply pipe. On warming, the water in the boiler expands and becomes
less dense, so it rises. As more cold water is supplied to the boiler, hot water is
displaced upwards and supplied to the hot-water taps along the hot-water pipes, while the
denser cold water moves downwards to take its place. The ventilation pipe is used to release steam.

Convection in gases
Experiment to demonstrate convection in gases:

A smouldering (lighted) piece of paper produces smoke at chimney A. The movement of the smoke from A to B, across point X, and out through chimney C shows convection.
The smoke moves by convection because the air above the candle warms up,
becomes less dense, and rises out through C. Denser cold air, carrying
the smoke from the paper, is drawn in through chimney A and across X to replace the risen air,
setting up convection currents.

Application of convection in gases


Convection is a crucial process for heat transfer in gases, relying on the
movement of molecules to distribute thermal energy. One significant
application of convection is in ventilation systems. In homes, ventilators
installed at the top of rooms allow warmer, stale air to escape, promoting
the influx of cooler air from below. This natural circulation enhances indoor air quality and comfort. Another application is in meteorology, where
convection plays a vital role in weather patterns. Warm air rises, creating convection currents that can lead to the formation of clouds and precipitation.
This process is essential for understanding phenomena such as thunderstorms and rain. Additionally, convection is utilized in cooking, particularly in
convection ovens. These appliances use fans to circulate hot air, ensuring even cooking and browning of food. By harnessing convection, chefs can
achieve better results in less time, showcasing the versatility of this heat transfer method in everyday life. Other common applications include; Chimneys
in kitchens and factories, Ventilation pipes in VIP latrines, Ventilators in houses, and Sea and land breezes.

SEA AND LAND BREEZES


Sea breeze: Sea breezes are local winds that occur during hot summer days, resulting from the unequal heating of land and water. The land heats
up faster than the sea because it has a lower specific heat capacity, and so it becomes warmer than the sea. The warm air over the land rises and is replaced by the cold
air from the sea. As the sun heats the land more quickly than the adjacent body of water, the air above the land warms up, becomes less dense, and
rises. This creates a low-pressure area over the land. Meanwhile, the cooler, denser air over the water moves in to fill this void, resulting in a breeze
that flows from the sea to the land. The sea breeze circulation consists of two opposing flows: the surface flow, which is the sea breeze itself, and an
upper-level return flow. This dynamic system not only provides a refreshing coolness
to coastal areas but also plays a significant role in local weather patterns, often
leading to increased cloud formation and precipitation in the afternoon.

Land breeze: Land breezes are local winds that occur at night when the
land cools more rapidly than the adjacent body of water. At night, land loses
heat faster than seawater (which has a high specific heat capacity), causing the land to become
cooler than the sea. As a result, the air above the sea is relatively warm and less
dense, so it rises, and the colder air from over the land moves in to replace it,
resulting in the land breeze. As the temperature over the land decreases,
the cooler, denser air flows from the land to the sea, creating a pressure
difference. This phenomenon is most prevalent in coastal areas, where the
temperature disparity between land and water is significant. In contrast, sea breezes occur during the day when the land heats up faster than the
water. The warm air over the land rises, creating a low-pressure area, while the cooler air over the sea moves in to replace it. This cycle of air
movement is essential for regulating coastal temperatures and can influence local weather patterns. Both land and sea breezes are examples of how
temperature differences can create wind patterns, demonstrating the dynamic relationship between land and water in shaping our environment.

Ventilation: During hot days, rooms get heated up, which is why they are usually provided with ventilators high up in the walls, through which warm air
finds its way out while fresh air enters through the doors and windows. In this way, a circulation of air by convection is set up.

RADIATION
Radiation is the emission or transmission of energy in the form of waves or
particles, a phenomenon that is integral to our environment. It can originate
from natural sources, such as cosmic rays and radon gas, or be produced by
human-made devices. Radiation plays a role in various fields, including
medicine, where it is used for diagnostic imaging and cancer treatment. Recent
advancements in cancer treatment have led to the development of a
revolutionary one-second therapy that could potentially replace traditional
radiation methods.
Radiation is energy that travels through space or material in the form of waves or particles. It can originate from unstable atoms
undergoing radioactive decay or be produced by machines. Common sources of radiation include natural elements like radon gas, cosmic rays, and
medical x-rays. Radiation encompasses various types, including electromagnetic radiation (like light and heat) and particle radiation. This energy can
penetrate different materials, making it significant in fields such as medicine, astronomy, and environmental science. For instance, astronomers
recently demonstrated infrared radiation to educate the public about its properties and applications. While exposure to certain types of radiation can
be harmful, many forms are natural and essential for life. Awareness and education about radiation help mitigate risks and promote safety in its use.
This is the transfer of heat from a region of high temperature to that of low temperature by means of electromagnetic waves.

Good and Bad absorbers of heat radiation


Heat radiation absorption varies significantly among materials, influencing thermal comfort and energy efficiency. Good absorbers, such as matte black
surfaces, excel at absorbing and emitting heat. This is why people often wear dark clothing in winter; it helps retain warmth by absorbing heat
radiation effectively. Shiny and light-colored surfaces, like white or silver, are poor absorbers and emitters of heat. They reflect most thermal radiation,
making them ideal for hot weather, as they help keep the body cool. The relationship between absorption and emission is crucial; materials that are
good at absorbing heat tend to be equally proficient at emitting it. This principle is vital in various applications, from clothing choices to architectural
design, where thermal management is essential.
Good absorbers such as matte black surfaces couple strongly with the radiation field, which is why they absorb and emit thermal radiation so readily and
reach thermal equilibrium quickly.

Poor absorbers, such as shiny and light-coloured surfaces, reflect most thermal radiation; this is why bright white clothes are recommended on hot days,
since they reflect sunlight and keep the wearer cooler, while dark clothing worn in winter helps retain warmth by absorbing radiation effectively. Metals,
although good conductors of heat, can be poor absorbers of radiation if their surfaces are shiny,
because they reflect visible light and thermal radiation.

Experiment about Heat Radiation


Some surfaces absorb heat radiation better than others as illustrated in the
experiment.
A source of heat is placed midway between a polished surface and a dull (black) surface. A cork is fixed
with wax to the back of each of the two surfaces and the apparatus is left standing for a few minutes.
After a few minutes, the wax on the dull black surface begins to melt and its cork
eventually falls off, while the wax on the polished surface remains unmelted for some
time. This shows that a dull black surface is a good absorber of heat radiation while a
polished surface is a poor absorber, because shiny surfaces reflect heat radiation instead of absorbing it.

Comparison of radiation of different surfaces


Requirements: a Leslie cube and a thermopile.
A thermopile is an instrument that converts heat energy into electrical energy. One
face of the cube is dull black, another is dull white, and another is made
shiny (highly polished).
The cube is filled with hot water and the radiation from each face is detected in turn by the
thermopile. When much radiant heat falls on the thermopile, it registers
a large deflection of the pointer. With the different faces of the cube made to face
the thermopile one at a time, the following observations are noted:
The greatest deflection of the pointer is obtained when the dull black face
is towards the thermopile, and the least deflection when the highly polished
shiny face is towards it. A dull surface is therefore a good radiator (emitter) of heat radiation, while a polished shiny surface is a poor emitter of
heat radiation.

Application of radiation
Heat radiation, the transfer of thermal energy through electromagnetic waves, has numerous practical applications across various fields. One notable
use is in automotive cooling systems, where car radiators are painted black to maximize heat dissipation. This enhances the cooling effect, ensuring
optimal engine performance. In the energy sector, steam generators in power plants rely on heat radiation to convert water into steam efficiently. This
process is crucial for electricity generation, showcasing the importance of thermal radiation in industrial applications. Solar energy harnesses heat
radiation from the sun, converting it into usable energy for heating and electricity. Other applications include personal heating and cooling systems,
where infrared radiation is utilized for comfort. Innovations in materials, such as metal surfaces, are also emerging, allowing for precise control of
thermal radiation, which could revolutionize energy efficiency in various technologies. Heat radiation plays a vital role in enhancing efficiency and
comfort in everyday life.

Black and dull surfaces
Black and dull surfaces play a significant role in thermal radiation absorption and emission.
These surfaces are known to be excellent absorbers of heat, making them ideal for applications
where heat retention is crucial. For instance, solar panels often utilize dark, matte finishes to
maximize energy absorption from sunlight, enhancing their efficiency. In addition to
absorption, black and dull surfaces are also effective emitters of infrared radiation. This
property is beneficial in various heating applications, such as radiators and heat exchangers,
where efficient heat transfer is necessary. The ability of these surfaces to emit heat effectively
ensures that they can maintain optimal temperatures in various environments. Alternatively,
shiny and light-colored surfaces are poor at absorbing and emitting heat, making them less
suitable for applications requiring efficient thermal management.

Polished and white surfaces


Polished surfaces reflect light effectively due to their smoothness and evenness. When light hits
a polished surface, it encounters minimal irregularities, allowing for specular reflection, where
light bounces off at a consistent angle. This is in contrast to dull surfaces, which scatter light in
various directions due to their rough texture, resulting in less brightness and a darker
appearance. White surfaces, while they can be flat and non-reflective, appear white because they
scatter light diffusely. The irregularities in their texture cause light to reflect in multiple
directions, preventing the formation of a clear image or mirror-like effect. This is why white surfaces do not shine like polished ones. The degree of
reflectivity in surfaces is influenced by their texture. Polished surfaces reflect light sharply, while white, flat surfaces scatter light, giving them a
distinct appearance. These are the very reasons why white-painted buildings keep cool in summer; why roofs and fuel tanks are made of aluminium or painted silver to reflect
radiant heat; why white-coloured clothes are worn in summer to keep us cool; and why teapots, kettles and saucepans are made of aluminium or are silvered so that they retain
heat for a long time.

The vacuum flask
A vacuum flask, commonly known as a thermos, is an essential household item
designed to maintain the temperature of its contents, whether hot or cold, for
extended periods. These flasks utilize a double-wall construction that creates
a vacuum between the walls, minimizing heat transfer. This technology not
only keeps beverages hot for hours but also preserves cold drinks, making
them ideal for various activities, from daily commutes to outdoor adventures.
Available in a range of styles, sizes, and materials, vacuum flasks cater to
diverse preferences. Retailers offer options from sleek stainless steel designs
to colorful, durable plastic models. Popular brands, such as Smith Creek,
Always, etc. provide rugged, insulated bottles perfect for outdoor enthusiasts,
ensuring that drinks remain at the desired temperature. With the growing
demand for portable beverage solutions, vacuum flasks have become
indispensable for anyone looking to enjoy their favorite drinks on the go,
whether it's hot coffee in the morning or refreshing iced tea during a summer
hike.

How a flask minimizes heat loss


A vacuum flask, commonly known as a thermos, is designed to minimize heat loss through three primary mechanisms: conduction, convection, and
radiation. Its double-walled construction features a vacuum between the two layers, which effectively prevents heat transfer by conduction. Since a
vacuum contains no air or matter, it eliminates the medium through which heat can be conducted. Additionally, the inner surfaces of the flask are
often coated with a reflective material, such as silver. This reflective layer significantly reduces heat loss due to radiation by reflecting infrared
radiation back into the flask. The insulating stopper or cork at the top of the flask minimizes heat loss through convection, as it prevents warm air
from escaping and cold air from entering. Together, these features make the vacuum flask an efficient container for maintaining the temperature of
liquids, whether hot or cold, for extended periods.
Through the function of the various parts of the vacuum flask, heat loss by conduction, convection and radiation are minimized.
 The cork. This minimizes heat loss by conduction and convection,
 Vacuum prevents heat loss by conduction and convection,
 Silvered walls minimize heat loss by radiation,
 Vacuum seal keeps air out of the vacuum,
 Asbestos anti – shock pad keeps the walls apart to avoid damage,
 The thermos flask becomes useless when the vacuum seal breaks, because the vacuum no longer exists and heat loss by conduction
and convection will occur.

Choice of clothes
The choice of clothing significantly impacts heat transfer through conduction, convection, and radiation. In colder climates, thick materials like wool
are preferred as they trap air, providing insulation and reducing heat loss. This trapped air acts as a barrier, minimizing heat transfer through
conduction, which occurs when two materials are in direct contact. When it comes to heat radiation, clothing also plays a crucial role. The human
body emits thermal radiation, and lightweight or reflective fabrics can help retain this heat. For instance, wearing dark colors can absorb more heat

from the sun, while lighter colors reflect it, influencing comfort levels in varying temperatures. Additionally, during activities like ironing, conduction
is the primary method of heat transfer. The metal iron, being a good conductor, transfers heat directly to the fabric, making it essential to choose
appropriate materials that can withstand this heat without damage. The choice of cloth one puts on depends on conditions of the environment. On hot
days, a white cloth is preferable because it reflects most of the heat radiations falling on it. On cold days, a dull black woolen cloth is preferred because
it absorbs most of the heat incident on it and can retain it for a longer time.

GREENHOUSE EFFECT AND GLOBAL WARMING


The greenhouse effect and global warming are closely related phenomena that
affect how heat is transferred on Earth's surface, leading to significant changes in
the planet's climate. They are interconnected through the mechanisms of heat
transfer and atmospheric composition.
The greenhouse effect is a natural process that warms the Earth by trapping heat
in the atmosphere. The Earth absorbs energy from the Sun and, in turn, re-radiates this energy back
towards space as infrared radiation (longwave radiation). Some of the re-radiated heat
is trapped by greenhouse gases in the atmosphere, such as carbon dioxide (CO₂),
methane (CH₄), nitrous oxide (N₂O), and water vapour, which absorb infrared radiation
and prevent it from escaping back into space. As a result, the Earth's surface
remains warmer than it would be without these gases, creating a climate that supports life. This trapped heat warms the lower atmosphere, raising
the overall temperature of Earth’s surface and creating a balance between incoming and outgoing radiation. Without this natural greenhouse effect,
Earth would be much colder and likely uninhabitable, with an average temperature around -18°C (0°F), instead of the current +15°C (59°F).
However, human activities, such as burning fossil fuels and deforestation, have significantly increased the concentration of greenhouse gases in the
atmosphere. This enhancement of the greenhouse effect leads to global warming, a long-term rise in Earth's average temperatures. Over the past
century, the planet has warmed rapidly, resulting in changes to weather patterns, rising sea levels, and increased frequency of extreme weather events.
Addressing global warming requires reducing greenhouse gas emissions and transitioning to sustainable energy sources. Knowledge about greenhouse
effect is crucial for developing effective strategies to combat climate change and protect our planet for future generations.

Role of heat transfer in the greenhouse effect


The greenhouse effect is a natural process that warms the Earth’s surface. It occurs when greenhouse gases, such as carbon dioxide and methane, trap
heat from the sun in the atmosphere. This heat transfer primarily happens through radiation, where the Earth absorbs solar energy and re-emits it as
infrared radiation.
Greenhouse gases absorb this infrared radiation, preventing it from escaping back into space, thus warming the atmosphere.
In addition to radiation, heat transfer in the greenhouse effect also involves conduction and convection. Conduction occurs when heat is transferred
through direct contact between molecules, while convection involves the movement of warm air rising and cooler air sinking, creating circulation
patterns. Together, these processes contribute to the overall warming of the planet.

Global Warming
Global warming refers to the long-term increase in Earth's average
temperature, primarily driven by the rising concentrations of greenhouse gases,
such as carbon dioxide (CO2). These gases trap heat in the atmosphere, leading to
a warming effect that disrupts natural climate patterns. Over the past century,
human activities, particularly the burning of fossil fuels and deforestation, have
significantly contributed to this phenomenon. While global warming
specifically addresses temperature rise, climate change encompasses broader
shifts in weather patterns and seasonal changes. These alterations can result in
extreme weather events, rising sea levels, and disruptions to ecosystems. The
consequences of global warming are increasingly evident, with record heat waves and unpredictable weather becoming more common. Addressing
global warming requires urgent action to reduce greenhouse gas emissions and transition to sustainable energy sources.

How global warming relates to heat transfer


As concentrations of greenhouse gases rise, more infrared radiation is trapped in the atmosphere, leading to higher surface temperatures. This disrupts
the natural balance of heat transfer, where less heat is lost to space and more is retained near the Earth's surface. Global warming leads to more
uneven heat distribution across the planet. Polar Regions, for example, are warming faster than tropical regions, a phenomenon known as polar
amplification. This change affects global heat circulation and weather patterns. Warmer surface temperatures increase the rate of evaporation and the
amount of heat transferred by convection. This can lead to more intense storms, as the atmosphere holds more moisture and energy. The oceans absorb
much of the heat from global warming, transferring it via ocean currents. Warmer ocean temperatures lead to changes in marine ecosystems, melting
ice sheets, and rising sea levels due to thermal expansion.

Consequences of Global Warming and Heat Transfer


The excess heat from global warming melts glaciers and polar ice, leading to sea level rise. This is also an example of latent heat transfer, where
energy is absorbed to change the state of water from solid (ice) to liquid, without changing its temperature. As the atmosphere becomes warmer, more
energy is available for weather systems. This results in more frequent and intense heatwaves, droughts, hurricanes, and storms due to more efficient
heat transfer through convection and radiation.

Disruption of Climate Systems


Global warming disturbs large-scale climate systems, such as the jet stream and ocean currents like the Gulf Stream, altering the heat distribution
across the planet and contributing to unpredictable weather patterns. Solar energy reaches the Earth, and the planet re-emits this energy as infrared
radiation. Greenhouse gases trap some of this radiation, warming the surface and lower atmosphere through radiative heat transfer. Global warming
occurs when human activities increase the concentration of these gases, enhancing the natural greenhouse effect and disrupting the Earth’s energy
balance, leading to excess heat retention. Changes in conduction, convection, and radiation all contribute to altered weather patterns and the broader
climate effects observed because of global warming. Both the greenhouse effect and global warming are fundamentally tied to how heat is transferred
on Earth, shaping the climate and life-supporting conditions.

1.7 EXPANSION OF SOLIDS, LIQUIDS, AND GASES
Learning Outcomes
a) Understand that substances expand on heating, and recognize
some applications of expansion (u,s)
b) Understand the effect and consequences of changes in heat on
volume and density of water (u,s)
c) Know about the anomalous expansion of water between 0ºC and
4 ºC and its implications (u, k,v/a)

Introduction
When substances are heated, their particles (atoms or molecules) gain
kinetic energy and move more rapidly. This increased motion causes the
particles to push farther apart from one another, leading to an increase in
the substance's volume. This phenomenon is called thermal expansion. Thermal expansion occurs because, as the temperature rises, the bonds
between the particles in solids, liquids, or gases allow for more movement. In solids, this movement is limited, but they still expand slightly. Liquids
and gases, having weaker bonds, expand more significantly. The degree of expansion varies based on the material and its state (solid, liquid, or gas).
For example:
Solids (like metals) expand when heated, but the expansion is usually small. Metals exhibit a fascinating property known as thermal expansion,
which causes them to increase in size when heated. This phenomenon occurs because the heat energy causes the atoms within the metal to vibrate
more vigorously, leading to an increase in the distance between them. For instance, when a metal rod is heated, it expands and can push a pointer on
a measuring device, demonstrating this principle visually. While most metals expand when heated, some alloys exhibit unique behaviors, such as
negative thermal expansion, where they contract instead.
Liquids expand more noticeably than solids. A common example is mercury in a thermometer rising as it heats up.
Gases expand the most because their particles are far apart and free to move; this is why sealed containers
can burst if heated too much, as the gas inside expands. It is also why structures such as bridges need expansion joints to allow for the expansion of solids. Expansion is an increase in the size of a substance.
When heated, solids increase in size in all directions. Expansion of solids can be illustrated using a metal ball with a ring. The metal ball passes
through the ring when it is cold but when heated, the ball does not pass through the ring any more, showing that it has expanded. It passes through
the hole again when it cools, meaning that the metal contracts when it loses heat
Experiment to demonstrate that metals expand at different rates when heated equally
Metals exhibit varying rates of expansion when subjected to the same temperature increase, a phenomenon
that can be effectively demonstrated through a bimetallic strip experiment. This strip consists of two different
metals, such as copper and iron, bonded together. When heated, each metal expands at its own rate due to
their distinct coefficients of thermal expansion. As a result, the
strip bends, illustrating the concept of differential expansion.
Another engaging experiment involves heating a metal ball
and a ring. When both are heated to the same temperature, the
ball expands enough to no longer fit through the ring,
showcasing how thermal expansion affects solids. This experiment highlights the intuitive understanding
of expansion, as the ball can pass through the ring when cooled. Different metals expand at different

rates when equally heated. This can be shown using a strip made of two metals, such
as copper and iron, bonded tightly together (a bimetallic strip). When the bimetallic strip
is heated, the copper expands more than the iron and the strip bends, with the copper on the outer (convex) side of the bend.
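A rough numerical sketch of why the strip bends: linear expansion follows change in length = α × original length × temperature rise, and the two metals have different expansion coefficients α. The coefficient values, strip length, and temperature rise below are typical textbook assumptions, not figures quoted in this manuscript.

```python
# Differential expansion of the two metals in a bimetallic strip (sketch).
# Linear expansion: delta_L = alpha * L0 * delta_T.
# The coefficients are typical textbook values, assumed for illustration.

alpha_copper = 17e-6   # per degC (assumed)
alpha_iron = 12e-6     # per degC (assumed)

L0 = 0.10              # m, original length of the strip (assumed)
delta_T = 100          # degC temperature rise (assumed)

dL_copper = alpha_copper * L0 * delta_T
dL_iron = alpha_iron * L0 * delta_T

print(f"Copper expands by about {dL_copper * 1000:.2f} mm")  # ~0.17 mm
print(f"Iron expands by about   {dL_iron * 1000:.2f} mm")    # ~0.12 mm
# The copper side becomes slightly longer than the iron side,
# so the bonded strip is forced to curve, copper on the outside.
```

Even though both changes are well under a millimetre, the difference is enough to bend a thin bonded strip visibly, which is what the fire-alarm and thermostat applications below exploit.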

Uses of a bimetallic strip (application of expansion of solids)


Fire alarm: Heat from the source makes the bi metallic strip bend and completes the
electric circuit and the bell rings.
Thermostat: This device makes temperature of appliances or room constant. The
thermostat uses a bimetallic strip in the heating circuit of a flat iron. When the flat iron
reaches the required temperature, the strip bends and breaks the circuit at the contact and
switches off the heater. The strip makes contact again after cooling a little and the heater is on again.
A nearly steady temperature results. If the control knob screwed, the strip has to bend more to break
the circuit and this needs higher temperature.

Disadvantages of expansion
Expansion can cause a number of problems. Railway lines are constructed with gaps left between
consecutive rails so that on hot days, when the rails expand, they have enough room for expansion.
If no gaps were left in the rails, they would bend (buckle) on hot days.
Steel bridges
These are constructed in such a way that one end is
rested on rollers and the other end is normally fixed.
This is to ensure that the structure can contract and
expand freely at various temperatures without
damaging the bridge.
Transmission cables
Wires or cables in transmission or telephone cables are normally not pulled tightly during installation in
order to allow room for expansion and contraction during extreme weather conditions.

EXPANSION IN FLUIDS
Thermal expansion helps in describing how matter increases in size; length, area, or volume, when subjected to temperature changes. This phenomenon
is particularly evident in liquids, which expand when heated. As a liquid's temperature rises, the kinetic energy of its molecules increases, causing
them to move apart and occupy more space. This results in a rise in the liquid's level within a container, which may also expand due to the heat. For
instance, in engineering, bimetallic strips are used in thermostats, bending more with greater temperature changes. Additionally, the brine fluids
market is projected to experience significant growth, driven by the increasing demand for thermal management solutions in various industries.
When liquids or gases (fluids) get hot, they expand just as solids do, but their expansion is greater than that of solids for the same amount of heat

Experiment to demonstrate expansion in liquids


Procedure

The flask is filled completely with coloured water. A narrow tube is passed through
the hole in the cork and the cork is fitted tightly. The initial level of the water in the narrow tube
is noted. The water is heated from the bottom of the flask while its level is observed. The
level of water in the tube first drops and then rises steadily as heating is continued. The water
level first drops because the flask expands first; the water then expands steadily with
continued heating and rises up the tube.

Application of expansion property of liquids

One prominent example is the liquid thermometer, which utilizes the thermal expansion
of substances like mercury or alcohol. As the temperature rises, these liquids expand
and rise within a calibrated glass tube, providing an accurate reading of the
temperature. Another significant application is in engine coolants. As engine
temperatures increase, the coolant liquid expands, allowing it to circulate effectively and
absorb excess heat. This expansion is vital for maintaining optimal engine performance and preventing overheating, which can lead to severe engine
damage.

Water as matter
Water occupies a very important place in our lives and is usually thought of as a typical liquid, so much so that the search for extraterrestrial life begins with a search for water. In reality, water is one of the most unusual liquids you will ever encounter, and life on earth depends on its unique properties. As a gas (water vapour), water is one of the lightest gases known, yet as a liquid it is denser than its solid form, ice. These unusual properties of water have a significant bearing on us. In this section, let us take an in-depth look at the anomalous expansion of water.

Anomalous Expansion of Water


Most substances expand when heated, so their density decreases, and they contract when cooled, so their density increases. Water behaves in this ordinary way only down to 4 °C. As water is cooled, its density gradually increases until, at 4 °C, it reaches a maximum. What water does next is surprising: on cooling further towards 0 °C, the water expands, meaning that its density decreases as it is cooled from 4 °C to 0 °C. The graph below illustrates this behaviour.
The effect of this expansion is that the coldest water is always found at the surface. Since water at 4 °C is the densest, it settles at the bottom of the water body, while the lightest layer, which is also the coldest, accumulates at the top. So in winter the top of the water is always the first to freeze over. Since ice and water are both poor conductors of heat, this top layer of ice insulates the rest of the water body from the cold of the winter, thereby protecting all the life in the water body. Now you can truly appreciate how essential the anomalous properties of water are for life.

Why does it happen?
As shown below, a water molecule is made of one oxygen atom combined with two hydrogen atoms. At ordinary temperatures the molecules are held together in the liquid state by intermolecular attractions, the most important being the hydrogen bond: the attraction between a hydrogen atom of one water molecule and the oxygen atom of a neighbouring molecule. In the liquid state the molecules are constantly moving about and being rearranged. On cooling, the molecules lose energy, move more slowly and pack closer together, so the density of the water increases. At 4 °C the density reaches its maximum; the molecules can squeeze together no further. On cooling below 4 °C, the hydrogen bonds begin to lock the molecules into the open, ordered lattice found in ice. This lattice holds the molecules slightly farther apart than they are in the liquid, so the water expands a little, and on freezing the open lattice becomes fixed, which is why ice is less dense than water. It is like people packed into a busy subway car: more of them fit in when they keep their hands in their pockets than when they all join hands at arm's length; the hand-holding arrangement is more rigid and ordered, but it occupies more space. Water experiences this same effect.

Application of anomalous behavior of water


The anomalous expansion of water is a unique property where water expands instead of contracting as it cools from 4°C to 0°C. This phenomenon has
significant implications for aquatic life and ecosystems. When water freezes, the ice formed is less dense than liquid water, allowing it to float. This
insulating layer of ice protects the underlying water, maintaining a stable environment for aquatic organisms during cold weather. In lakes and
ponds, the top layer of water cools first, forming ice while the warmer water remains below. This stratification is crucial for the survival of fish and
other aquatic life, as it prevents the entire body of water from freezing solid.

Disadvantages of anomalous behavior of water


Anomalous expansion of water, where water expands upon freezing, presents several disadvantages. One significant issue is structural damage. When
water freezes in pipes or containers, it expands, leading to bursts and leaks. This can result in costly repairs and water damage, particularly in colder
climates where temperatures frequently drop below freezing.
The expansion of water can disrupt natural ecosystems. Ice formation on lakes and rivers can create pressure on banks, potentially leading to erosion
and habitat destruction. This phenomenon can also affect aquatic life, as the ice cover can limit oxygen exchange, impacting fish and other organisms.
The anomalous expansion also makes water unsuitable as a thermometric liquid: because it does not expand uniformly, and actually contracts as it is warmed from 0 °C to 4 °C, a water thermometer would give misleading readings and complicate temperature measurement.

EXPANSION OF GASES
When a gas is heated, the kinetic energy of its particles increases, causing them to move more rapidly and spread farther apart, so the volume of the gas increases. At constant pressure, all gases expand by nearly the same fraction for the same rise in temperature, so there is a consistent relationship between temperature and volume. Because the cohesion between gas molecules is extremely weak, a gas expands far more than a solid for the same temperature rise, typically by a factor of the order of a hundred for common solids. This principle is at work in everyday situations, from inflating balloons to industrial processes involving gas storage and transportation. A familiar example is the hot air balloon: as the air inside the balloon is heated, it expands and becomes less dense than the cooler air outside, allowing the balloon to rise.
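The consistent relationship between temperature and volume mentioned above can be put in a simple quantitative form for a fixed mass of gas at constant pressure (a relation often called Charles's law, given here only as a forward-looking sketch): V₁/T₁ = V₂/T₂, with the temperature T measured on the kelvin scale. For example, 600 cm³ of air at 27 °C (300 K), warmed at constant pressure to 87 °C (360 K), expands to V₂ = 600 × 360 ÷ 300 = 720 cm³.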

Experiment to demonstrate expansion in gases


In the set-up shown above, the flask is warmed slightly and air bubbles are seen escaping from the open end of the tube, which dips into water. This shows that air expands when heated. In the second set-up, when the source of heat is removed and the flask is allowed to cool, for example by pouring cold water over it, the water level rises in the tube. This shows that air contracts when cooled.

Application of expansion of air


One of the most common examples is the phenomenon of sea breezes. During the day, the land heats up faster than the sea, causing the warm air
above the land to rise. This creates a low-pressure area, prompting cooler air from the sea to rush in, resulting in a refreshing sea breeze. This natural
occurrence is crucial for coastal climates and influences local weather patterns.
In engineering, thermal expansion is vital for designing structures and systems. For instance, railway tracks are constructed with expansion joints to
accommodate the expansion and contraction of metal due to temperature changes. Similarly, in automotive engineering, the air in tires expands when
heated, increasing pressure, which must be monitored for safety and performance. Hot air balloons utilize the principle of air expansion. When air in
the balloon is heated, it expands and becomes less dense (it becomes lighter than the cooler air outside) and as a result the balloon rises up. This
principle of thermal expansion is fundamental in various applications, showcasing its importance in both natural phenomena and technological
advancements.

Sample items
Item 1: A factory uses solar panels whose collecting surfaces are made of different materials: aluminum, copper, and a surface painted black. In order to test their heat absorption capacities, a thermometer is placed on each material to record the surface temperature at 10-minute intervals.
Task: As a physics student,
(a). How would you ensure accurate temperature readings for each material, considering environmental factors like wind and humidity?
(b). If the black paint reaches a temperature of 80°C while the aluminum reaches only 60°C under the same conditions, explain why this difference occurs.

Item 2: A water tank is connected to the solar panels to store heat energy for later use. Heat is transferred through the tank by convection: the water at the top is at 70°C while the water at the bottom is at 40°C. In order to study heat transfer further, a steel rod and a wooden rod were inserted partially into the water to compare conduction through them.
Task: As a physics student,
(a). Compare the rates of heat transfer through the steel rod and the wooden rod and explain the reason for the difference.
(b). How does the process of convection help maintain a relatively uniform temperature in the water tank over time?
Item 3: The facility's solar energy storage system uses a network of steel pipes that transports heated water and air. During the hottest times of the day, the steel pipes expand due to thermal energy. Similarly, the volumes of the water and the air also change with temperature.
Task: As a physics student,
(a). If the steel pipe expands by 0.2% of its length when the temperature increases by 50°C, determine the original length of the pipe if the expanded length is 10.02 meters.
(b). Explain why the volume of air in the pipes changes more significantly than the volume of water when the temperature increases.
(c). Discuss how such thermal expansion might affect the long-term durability of the pipes and what design strategies can be employed to minimize potential damage.
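One possible layout of the working for Item 3(a), assuming the 0.2% expansion is measured relative to the original length L: expanded length = L × (1 + 0.002), so 10.02 = 1.002 × L and L = 10.02 ÷ 1.002 = 10.0 m. The original length of the pipe is therefore 10 meters.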

1.8 NATURE OF LIGHT; REFLECTION OF LIGHT AT PLANE SURFACES


Learning Outcomes
a) Know illuminated and light source objects in everyday life (u,s)
b) Understand how shadows are formed and that eclipses are natural forms of shadows (u)
c) Understand how the reflection of light from plane surfaces occurs and how we can make use of this (k, u, s,gs)

Light is a form of electromagnetic radiation that is visible to the human eye. It plays a crucial role in our daily lives, enabling us to see
and perceive the world around us. Light travels in waves and exhibits both particle-like and wave-like properties, a phenomenon known as wave-
particle duality. It moves at a constant speed in a vacuum, approximately 299,792 kilometers per second (186,282 miles per second), making it the
fastest thing in the universe. Light is essential not only for vision but also for various natural processes, such as photosynthesis in plants, and it is
fundamental to many technologies, from medical imaging to telecommunications.
It is a form of energy that travels in waves and can behave both as a wave and as a particle, known as a photon. The visible
spectrum of light ranges from about 400 to 700 nanometers in wavelengths, which corresponds to the different colors we see, from violet to red.
Beyond the visible spectrum, light also includes ultraviolet, infrared, and other forms of electromagnetic radiation that are not visible to the human
eye.
Light is a form of energy that enables us to see.
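As a quick illustration of how fast this is: the Sun is about 150 million kilometres from the Earth, so sunlight takes roughly 150,000,000 km ÷ 300,000 km/s = 500 seconds, a little over 8 minutes, to reach us, while light reflected from this page reaches your eye in an immeasurably small fraction of a second.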

SOURCES OF LIGHT


Light sources can be categorized into two main types: natural and
artificial. Natural sources include celestial bodies like the Sun, which
is the primary source of light for Earth, providing sunlight and solar
radiation. Other astronomical objects, such as stars, also emit light,
although their contribution is minimal compared to the Sun. Artificial
light sources, on the other hand, are human-made and include devices
like light bulbs, torches, and lamps. These sources are essential for
daily activities, especially in the absence of natural light. They work
by converting electrical energy into visible light, making it possible to
illuminate spaces at any time.

Sources of light
Light sources can be categorized into natural and artificial types.

NATURAL SOURCES OF LIGHT



The universe is filled with objects that emit light. Some light from these sources reaches the earth. The following things in nature have the ability to
emit light:
The Sun is the major source of light for the earth. The sun is a massive sphere of extremely hot gas, at the centre of which nuclear fusion releases enormous amounts of energy. This energy comes out as heat and light, and the light from the sun is one of the major factors behind the sustainability of life on earth. Every other star also produces light, but because of the huge distances only a tiny amount of it reaches the earth. The moon provides light as well, but it cannot
produce light on its own. The light that we get from the moon is sunlight reflected from its surface. Some living organisms have the ability to produce light too; this is called bioluminescence, and it is the effect of certain chemical reactions within the organism. Fireflies, jellyfish, glow-worms, certain deep-sea creatures and microorganisms can be cited as examples. Certain other natural phenomena also emit light. Fire is a natural source of light produced by combustion, where substances like wood or fossil fuels burn, releasing energy in the form of light and heat. Lightning, seen during a thunderstorm, is a natural electrical discharge that produces a sudden and intense burst of light.

Artificial Light Sources:


The light sources produced artificially fall under three broad categories (incandescent, luminescent and gas discharge sources), alongside several other important types:
Incandescent Sources: When certain objects are heated to a high temperature, they begin to emit light; both infrared and visible light are produced in the process. Examples: a candle flame, an incandescent (filament) lamp.
Luminescent Sources: Light can be produced by exciting charges in a luminescent material, most commonly by passing a current through it. Example: a fluorescent tube light.
Gas Discharge Sources: Passing electricity through certain gases at very low pressure can also produce light. Examples: a neon lamp, a sodium lamp.
Bioluminescence: Some organisms, like fireflies, certain types of fungi, and deep-sea creatures, produce their own light through chemical reactions within their bodies.
Fluorescent Lamps: Fluorescent lamps produce light by passing an electric current through a gas, typically mercury vapor, which emits ultraviolet light. This ultraviolet light then excites a phosphor coating on the inside of the tube, producing visible light.
Light Emitting Diodes (LEDs): LEDs are highly energy-efficient light sources that produce light by passing an electric current through a semiconductor material. They are widely used in various applications, from home lighting to electronic displays.
Lasers: Lasers (Light Amplification by Stimulated Emission of Radiation) are unique light sources that produce a highly focused, intense and coherent beam of light, meaning the light waves are in phase and travel in the same direction. Unlike traditional light sources such as bulbs, lasers generate light through the stimulation of atoms in optical materials like glasses, crystals, or gases: when electrons in these materials absorb energy, they emit photons in step with one another, resulting in a narrow beam with a specific wavelength. Lasers are utilized in fields such as medicine for precise surgical procedures, in telecommunications for high-speed data transmission, and in manufacturing for cutting and engraving materials. Advances in laser technology, such as laser-sustained plasma sources, are also enhancing capabilities in scientific research and industrial applications by providing high brightness and broad spectral coverage.

Neon Lights: These are used mainly for signage and decorative purposes. They work by passing an electric current through a gas (such as neon),
causing it to emit light. Both natural and artificial light sources are crucial in various aspects of life, from supporting ecosystems to enabling human
activities and technological advancements.

Categories of sources of light


A luminous source of light is one which produces its own light, e.g. a star, the sun, a bulb, a candle, etc.
A non-luminous source of light is one which does not produce its own light but reflects light from luminous objects, e.g. mirrors, the moon, car reflectors, etc.
Transparent objects: These are objects which allow light to pass through them so that things can be seen clearly through them, e.g. the windscreen of a car, ordinary glass, pure water, etc.
Translucent objects: These are objects which allow only some light to pass through them and scatter it, so that things cannot be seen clearly through them, e.g. bathroom glass, tinted glass, tracing paper, etc.
Opaque objects: These are objects which do not allow light to pass through them, e.g. wood, concrete, etc.

Wave Nature of Light
Electromagnetic Waves: Light is a type of electromagnetic wave, which means it consists of oscillating electric and magnetic fields that propagate
through space. These waves can travel through a vacuum, unlike sound waves, which require a medium.
Wavelength and Frequency: The wave nature of light is characterized by its wavelength (the distance between successive crests of a wave) and
frequency (the number of wave crests that pass a given point per second). The color of visible light is determined by its wavelength, with violet light
having the shortest wavelength and red light the longest.
Interference and Diffraction: The wave nature of light is evident in phenomena such as interference and diffraction. Interference occurs when
two or more light waves overlap and combine, creating patterns of constructive and destructive interference. Diffraction is the bending of light waves
around obstacles or through small openings, leading to characteristic patterns.
Polarization: Polarization is another wave-related property, where the orientation of the light wave's oscillations can be restricted to a single plane.
Polarized sunglasses, for example, reduce glare by blocking certain orientations of light waves.
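The wavelength and frequency described above are linked to the speed of light by the relation c = fλ (speed = frequency × wavelength). As a short worked illustration, red light of wavelength 700 nm (7.0 × 10⁻⁷ m) travelling at 3.0 × 10⁸ m/s has a frequency of about f = c ÷ λ = (3.0 × 10⁸) ÷ (7.0 × 10⁻⁷) ≈ 4.3 × 10¹⁴ Hz.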

Light as Energy
Light is just one part of the electromagnetic spectrum, which includes other forms of electromagnetic radiation such as radio waves, microwaves,
infrared, ultraviolet, X-rays, and gamma rays. These different types of radiation vary in wavelength and frequency but all share the same basic
properties as light. Light transfers energy as it travels, which can be absorbed, reflected, or transmitted by different materials. This energy transfer is
responsible for various effects, such as heating objects, driving photosynthesis, and enabling vision.

Interaction with Matter


When light interacts with matter, it can be reflected (bounced back) or refracted (bent). Reflection occurs when light hits a surface and bounces off at
the same angle, while refraction occurs when light passes from one medium to another, changing speed and direction. Light can also be absorbed by
matter, transferring its energy to the material. This absorbed energy can later be emitted as light, often at a different wavelength, a process observed
in phenomena like fluorescence and phosphorescence. The nature of light, with its dual wave-particle characteristics, plays a fundamental role in our
understanding of the physical universe, influencing everything from the behavior of atoms to the structure of the cosmos.

RAYS AND BEAMS


A ray is an idealized model of light as a straight line that shows the
direction of light's propagation. In diagrams, rays are often represented
by arrows pointing in the direction the light is traveling.
Rays are used to illustrate how light travels, particularly in
understanding how light interacts with different media, such as
reflecting off surfaces or refracting through lenses. In reality, light is
not a single line but spreads out as a wavefront. However, treating light
as rays simplifies the analysis of optical systems, such as lenses, mirrors,
and prisms. The concept of rays is foundational in geometrical optics,
where it is used to analyze and predict the paths that light will take through various optical devices. For example, in the study of lenses, rays help
determine how an image will be formed by a lens system.

Beams
A beam of light is a collection of rays that are closely aligned and travel together in a single direction.

Beams are broader than rays and can spread out over a distance. A laser beam, for example, is a concentrated stream of light rays that stay together
over long distances.

Types of Beams
Parallel Beam: A beam in which all rays are parallel to each other. This is
typical of light emitted by lasers or light that has passed through a
collimating lens. Parallel beams do not converge or diverge, making them
ideal for long-distance propagation.
Diverging Beam: A beam in which the rays spread out from a common
point, causing the beam to become wider as it travels. An example of this is
light from a flashlight or the beams from a spotlight.
Converging Beam: A beam in which the rays come together or converge
at a point. This occurs when light passes through a converging lens, such as in a magnifying glass, where the light focuses to a single point.

Applications of rays and beams


Beams are commonly used in various lighting applications, such as in flashlights, headlights, and stage lighting. The properties of the beam, whether it is narrow or broad, focused or diffused, determine how it illuminates an area. In optical communication, beams of light, such as laser beams, are
used to transmit information over long distances, as they can be directed with high precision and remain coherent over great distances. Beams of light
are also used in medical procedures, such as in laser surgery, where a focused beam of light is used to make precise incisions, and in industrial
processes, like cutting or welding materials. Both concepts are essential in optics, with rays being crucial for understanding the principles of reflection,
refraction, and image formation, and beams being important for practical applications like illumination, communication, and precision cutting.

RECTILINEAR PROPAGATION OF LIGHT


This is the phenomenon whereby light travels in a straight line. The word rectilinear literally means “straight” in geometry, and the rectilinear propagation of light means that light travels from its source in straight lines. Because light does not bend around obstacles, we are unable to see around the corners of objects on which the light falls. Two notable phenomena related to the rectilinear propagation of light are reflection and refraction. Reflection can be demonstrated using a mirror, while refraction can be observed when a person puts a hand into a container of water: the hand appears bent and smaller. Let us perform an experiment to understand the rectilinear propagation of light better.

EXPERIMENT TO SHOW THAT LIGHT TRAVELS IN A STRAIGHT LINE


Procedure
Three (3) identical cardboards A, B and C each with a hole in its centre are
arranged with the holes in a straight line as shown above.
A source of light is placed behind cardboard A and an observer in front of
C. The observer is able to see the light from the source because light travels
in a straight line. If one of the cardboards is displaced such that the holes
are not in the straight line, the observer will see no light. This shows that
light travels in a straight line.

Formation of Shadows
Shadows are fascinating phenomena that occur when light is obstructed by an opaque
object. When light rays encounter such an object, they cannot pass through, resulting
in a region devoid of light behind it. This area is what we recognize as a shadow.
Shadows consist of three distinct regions: the umbra, where the light is completely blocked; the penumbra, where the light is partially obstructed; and the antumbra, which occurs beyond the tip of the umbra when the light source has a larger apparent size than the object blocking it.
The characteristics of shadows can change based on the light source's angle. For instance, when
the light is low on the horizon, shadows appear longer, while a higher light source creates shorter
shadows. This dynamic nature of shadows makes them an intriguing subject for both scientific
study and artistic expression. The formation of shadows is a common optical phenomenon that
occurs when an opaque object blocks the path of light, preventing it from reaching a surface on
the other side. Shadows are an integral part of how we perceive light and objects in our
environment.

Types of Shadows
Shadows can vary in shape, size, and intensity, depending on several factors:
Umbra: The umbra is the darkest part of the shadow, where the light source is
completely blocked by the opaque object. In this region, no direct light from the source
reaches the surface. The umbra is sharp and well defined when the light source is small
and far away.
Penumbra: Surrounding the umbra is the penumbra, a region where only part of the
light source is blocked. In the penumbra, the light is partially obstructed, resulting in
a lighter, more diffused shadow. The penumbra occurs because light sources often have
a finite size, and some light rays can partially reach the surface around the edges of the object.
Antumbra: In some cases, particularly when the light source appears larger than the object blocking it, an antumbra can form. This is a region beyond the umbra and penumbra where the shadow appears lighter, because the object blocks only the central part of the light source and a ring of light remains visible around its edges.

Factors Influencing Shadow Formation


Size of the Light Source:
Point Source: A small or point light source, like a distant star, creates sharp, well-defined shadows because the light rays are almost parallel.
Extended Source: A larger light source, such as the Sun or a lamp, produces softer, more diffused shadows with a noticeable penumbra, as the light
rays spread out more and cover different angles.

Distance between the object and the surface


Close Distance: When the object is close to the surface where the shadow falls, the shadow appears sharp and well defined.
Far Distance: If the object is farther from the surface, the shadow becomes larger and more diffused, as the light has more space to spread out and
partially fill in the shadowed area.

Angle of the Light Source
Low Angle: When the light source is at a low angle (close to the horizon, such as during sunrise or sunset), shadows are elongated and stretched
out.
High Angle: When the light source is directly overhead, shadows are shorter and fall directly beneath the object.
Shape of the Object: The shape of the object blocking the light influences the shape of the shadow. Simple shapes like circles or squares cast shadows that closely resemble their form, while complex objects create intricate shadow patterns.
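A rough quantitative sketch of the effect of the light source's angle described above (assuming a vertical object of height h and a source at an elevation angle θ above the horizon): the shadow length is approximately h ÷ tan θ. For a 1 m stick, the shadow is about 1 ÷ tan 60° ≈ 0.6 m when the source is high in the sky (θ = 60°), but about 1 ÷ tan 20° ≈ 2.7 m when it is low (θ = 20°), which is why shadows lengthen towards sunrise and sunset.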

Examples of Shadow Formation


During a solar eclipse, the Moon passes between the Earth and the Sun, casting a shadow on the Earth. The umbra causes a total eclipse in specific
areas, while the penumbra causes a partial eclipse in others. Similarly, a lunar eclipse occurs when the Earth casts its shadow on the Moon. Common
objects like trees, buildings, and people cast shadows in everyday life. The changing position of these shadows throughout the day provides information
about the time of day and the position of the Sun.

Importance of Shadows
Shadows play a crucial role in how we perceive the depth and shape of objects. They provide visual cues that help us understand the three-dimensional
structure of our environment.
Artists and photographers often use shadows to create contrast, mood, and emphasis in their work. The interplay of light and shadow adds depth and
texture to images. Shadows are also important in scientific observations, such as in the study of celestial bodies. For example, the measurement of
shadows can help determine the size and distance of planets and moons. Shadows are formed when an opaque object blocks light from reaching a
surface, creating regions of darkness. The characteristics of a shadow, such as its sharpness, size, and intensity, depend on factors like the light source,
the distance between the object and the surface, and the shape of the object. Shadows are not only a natural consequence of light but also a powerful
tool in visual perception and artistic expression.
Shadows are formed when light rays are obstructed by an opaque object.

ECLIPSES
Eclipses are fascinating astronomical events that occur when one celestial body moves into the shadow of another, blocking or obscuring light. There
are two primary types of eclipses visible from Earth: solar eclipses and lunar eclipses. Each type of eclipse is the result of specific alignments between
the Sun, Earth, and Moon.

Solar Eclipse
A solar eclipse occurs when the Moon passes between the Earth and the Sun, casting a shadow on the Earth. This alignment blocks the Sun's light
from reaching certain areas of the Earth, leading to different types of solar eclipses.

Types of Solar Eclipses


Total Solar Eclipse: Occurs when the Moon completely covers the Sun, as seen from Earth. During a total solar eclipse, the Moon’s umbra (the
darkest part of its shadow) touches the Earth's surface.
The sky darkens as if it were twilight, and the Sun’s corona, or outer atmosphere, becomes visible as a halo around the Moon.
This type of eclipse can only be observed from a narrow path on the Earth’s surface, known as the path of totality.
Partial Solar Eclipse: Occurs when only a part of the Sun is obscured by the Moon. In this case, the Moon’s penumbra (the lighter part of its shadow)
falls on the Earth, leading to only a portion of the Sun being covered. A partial solar eclipse is visible over a much larger area than a total eclipse but
is less dramatic.

Annular Solar Eclipse: Occurs when the Moon is directly in front of the Sun but is too far from Earth to completely cover it. This results in a ring
of sunlight, known as an “annulus” or “ring of fire,” being visible around the dark disk of the Moon. The Moon’s apparent size is smaller than the
Sun’s, leading to this distinctive appearance.

Hybrid Solar Eclipse


A rare type of eclipse that transitions between a total and an annular eclipse along different sections of the path: in some areas viewers experience a total eclipse, while in others they see an annular eclipse.

Formation of a Solar Eclipse


A solar eclipse can only occur during a new moon when the Sun, Moon, and Earth
are aligned in a straight line, with the Moon between the Sun and Earth. The
Moon's orbit is slightly tilted relative to the Earth’s orbit around the Sun. Because
of this tilt, solar eclipses don’t occur every month. They only happen when the
new moon occurs near one of the points where the Moon's orbit crosses the
Earth's orbital plane, called nodes. The shadow cast by the Moon consists of two
parts: the umbra and the penumbra. The umbra causes the total eclipse, while
the penumbra causes a partial eclipse.

Lunar Eclipse
A lunar eclipse occurs when the Earth passes between the Sun and the Moon, casting a shadow on the Moon. This alignment prevents sunlight from
directly reaching the Moon, causing it to darken and sometimes take on a reddish color.
Types of Lunar Eclipses
Total Lunar Eclipse: Occurs when the entire Moon passes through the Earth's umbra. During a total lunar eclipse, the Moon can appear red or
coppery, a phenomenon known as the "Blood Moon." This reddish color is due to the Earth’s atmosphere bending and filtering sunlight, allowing
only the red wavelengths to reach the Moon. Total lunar eclipses are visible from anywhere on the night side of the Earth.
Partial Lunar Eclipse: Occurs when only a portion of the Moon
enters the Earth's umbra. This results in a part of the Moon being
darkened, while the rest remains illuminated by direct sunlight.
Penumbral Lunar Eclipse: Occurs when the Moon passes through
the Earth’s penumbra, the outer part of its shadow. A penumbral
lunar eclipse is subtle, with the Moon slightly darkening, and is often
difficult to observe without careful attention.

Formation of a Lunar Eclipse


A lunar eclipse can only occur during a full moon when the Sun, Earth,
and Moon are aligned, with the Earth between the Sun and the Moon. Similar to solar eclipses, lunar eclipses do not occur every month because of the
tilt of the Moon's orbit. They happen when the full moon occurs near the nodes of the Moon's orbit. The Earth's shadow is divided into the umbra and
penumbra. When the Moon passes through the umbra, a total or partial lunar eclipse occurs, while passage through the penumbra results in a
penumbral eclipse.

Frequency and Observation
Solar eclipses are rarer for any given location on Earth because the path of totality is narrow. On average, a total solar eclipse occurs somewhere on
Earth about every 18 months, but it may take several decades for one to be visible from the same location.
Lunar eclipses are more common than solar eclipses and are visible from anywhere on the night side of Earth. A total lunar eclipse occurs roughly
once every 2.5 years from any given location.

Significance of Eclipses
Eclipses provide valuable opportunities for scientific study, such as observing the Sun's corona during a total solar eclipse or studying the Earth's
atmosphere by analyzing the light filtered during a lunar eclipse. Eclipses are extraordinary celestial events that occur due to the specific alignment
of the Sun, Earth, and Moon.

THE PINHOLE CAMERA


A pinhole camera is a simple and fascinating optical device that demonstrates the
basic principles of image formation. It is an early form of camera that does not
use lenses but relies on a tiny aperture, or "pinhole," to project an image onto a
surface.

Components of a Pinhole Camera


The pinhole camera typically consists of a light-tight box or container. The container can be made from various materials, such as cardboard, metal,
or plastic, and must be sealed to prevent light from entering except through the pinhole. The pinhole is a small, precisely made hole in one side of
the box. It is the only entry point for light. The size of the pinhole affects the clarity and brightness of the image produced. It can be made by
puncturing a thin sheet of metal or foil, or using a fine needle to create a tiny hole. Inside the box, opposite the pinhole, there is a surface where the
image is projected. This surface can be a photographic film, a piece of photographic paper, or a screen. The image forms on this surface as light passes
through the pinhole and projects onto it. Some pinhole cameras include a lens cap or cover to protect the pinhole from light when not in use, allowing
the user to open it only when taking a photograph.

How a Pinhole Camera Works


Light from the scene outside the camera enters through the pinhole. Because the pinhole is small, it allows only a narrow beam of light to pass through,
which projects an inverted image of the scene onto the image surface inside the camera. The light rays from different parts of the scene pass through
the pinhole and travel in straight lines to the opposite side of the box. The point where light rays from each part of the scene converge on the image
surface forms a reversed and inverted image.
Exposure Time: The exposure time required for a pinhole camera to capture an image depends on the size of the pinhole and the intensity of the
light. Since pinhole cameras generally have a small aperture, the exposure time can be relatively long, ranging from several seconds to minutes or
even hours.

Characteristics of images produced by Pinhole Cameras


The image formed in a pinhole camera exhibits several distinctive characteristics due to the fundamental principles of optics and light behavior.
Inverted and Reversed Image: The image formed in a pinhole camera is inverted, meaning it appears upside down compared to the actual scene.
This occurs because light rays traveling from the top of the scene through the pinhole cross over and hit the bottom part of the image surface, and

vice versa. Similarly, the left side of the scene ends up on the right side of the image surface. The image is also reversed left to right. This is a
consequence of the straight-line paths of light rays crossing through the pinhole and projecting onto the image surface.
Softness and Blurriness: The image produced by a pinhole camera is generally less sharp compared to images from modern cameras with lenses.
This softness results from the diffraction of light waves around the edges of the pinhole. The smaller the pinhole, the greater the diffraction effect,
leading to a blurrier image. Due to the diffraction, the edges of objects in the image may appear fuzzy. This effect is more pronounced if the pinhole
is not perfectly round or if the image surface is not flat.
Large Depth of Field: One of the notable characteristics of a pinhole camera image is its large depth of field. Objects at varying distances from the
camera are all in focus, unlike cameras with lenses that have a limited depth of field and require precise focusing.
Uniform Focus: Because only a very narrow bundle of rays from each point of the scene passes through the pinhole, every part of the scene is rendered with the same soft focus, and there is no focal length to adjust.
Image Size and Sharpness: The size of the image depends on the distance between the pinhole and the image surface (and on how far away the object is): a longer distance between the pinhole and the image surface gives a larger image. The size of the pinhole, by contrast, controls a trade-off between sharpness and brightness: a smaller pinhole produces a sharper image but admits less light, so it requires a longer exposure time, while a larger pinhole creates a brighter image with reduced sharpness.
Exposure Time: Due to the small size of the pinhole, pinhole cameras generally require longer exposure times compared to cameras with lenses.
The longer exposure is needed to gather enough light to form a visible image on the photographic paper or film.
Distortion: The image may exhibit some distortion, particularly if the image surface is not perfectly flat or if the pinhole is irregular. This distortion
can include curvature or stretching, depending on the shape and positioning of the image surface relative to the pinhole.
Light Intensity: The brightness of the image is influenced by the size of the pinhole. A smaller pinhole allows less light to enter, resulting in a
dimmer image. To compensate, the exposure time must be increased.

Applications of Pinhole Cameras


Educational Tool: Pinhole cameras are commonly used in educational settings to teach the principles of optics, light, and image formation. They
provide a hands-on way to understand how cameras work without the complexity of lenses and electronic components.
Artistic Photography: Many artists and photographers use pinhole cameras to create unique and artistic images. The distinctive softness and
characteristic look of pinhole photographs are valued for their aesthetic qualities.
Scientific Experiments: Pinhole cameras can be used in scientific experiments to study light behavior, image formation, and optical properties.
They are also used in experiments related to the study of astronomical phenomena.
Historical and Cultural Interest: Pinhole cameras are historically significant as one of the earliest forms of photographic devices. They offer
insight into the evolution of photography and the development of optical technology.

Building a Pinhole Camera


Materials: A simple pinhole camera can be constructed from a cardboard box or a tin can, a thin sheet of metal or foil for the pinhole, and photographic
paper or film.
Construction: Cut a small hole in one side of the box or can for the pinhole. Attach the pinhole to the hole and seal the box to prevent any light
leakage. Place the photographic paper or film on the opposite side of the box.

Exposure: Point the pinhole camera at the desired scene and expose the
photographic paper or film for the required amount of time. Develop the image
according to the photographic process used. A pinhole camera is a simple yet
powerful tool that demonstrates the fundamental principles of photography and
optics. Its basic design, reliance on light, and ability to produce unique images
make it a valuable educational and artistic device.

MAGNIFICATION: This is the ratio of image size to object size.


By proportionality, m = v/u = h1/h0, where v is the image distance (from the pinhole to the screen), u is the object distance (from the object to the pinhole), h1 is the height of the image and h0 is the height of the object.

Example (Try it out in groups)


An object 2 cm high forms an image on the screen of a pinhole camera. If the distance from the pinhole to the screen is 24 cm and the distance between the object and the pinhole is 6 cm, find the magnification of the image and the size of the image. (Answers: m = 4 and h = 8 cm)
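One way to set out the working for this example: m = v/u = 24 cm ÷ 6 cm = 4, and since m is also the ratio of image height to object height, the image height is h = m × 2 cm = 8 cm, in agreement with the answers quoted.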

Reflection of light by plane surfaces


Reflection of light is a fundamental optical phenomenon that occurs when light waves encounter a surface and bounce back into the original medium.
This process allows us to see objects around us by reflecting light into our eyes. When light reflects off plane surfaces such as mirrors or flat sheets
of metal the behavior of the reflected light follows specific principles governed by the laws of reflection.

The Nature of Plane Surfaces


A plane surface is a flat, smooth surface that reflects light in a predictable manner. Unlike curved surfaces, which can distort light,
plane surfaces reflect light at a consistent angle, making them ideal for creating clear and accurate images. Common examples of plane surfaces include
mirrors, calm water surfaces, and flat glass windows. These surfaces reflect light in a manner that adheres to the fundamental principles of geometric
optics.

Laws of Reflection

First Law of Reflection: The angle of incidence is equal to the angle of reflection.
Explanation: When a light ray strikes a plane surface, the angle at which it approaches the surface (the angle of incidence) is equal to the angle at
which it bounces away from the surface (the angle of reflection). These angles are measured with respect to the normal, which is an imaginary line
perpendicular to the surface at the point of incidence.
Second Law of Reflection: The incident ray, the reflected ray, and the normal at the point of incidence all lie in the same plane.
The path of the incoming and outgoing light rays, along with the normal to the surface, are
all contained within the same plane. This ensures that the reflection is predictable and
consistent.

Types of Reflection

Specular Reflection/Regular reflection


This occurs on smooth, shiny surfaces such as mirrors and polished metals. In specular reflection, parallel light rays are reflected parallel to each
other, preserving the image's clarity and details. This creates a clear and defined
image with minimal distortion.

Diffuse Reflection/Irregular reflection


This occurs on rough or matte surfaces where light rays are scattered in many
directions. This type of reflection happens because the surface irregularities cause the
incident rays to reflect at various angles. This produces a less clear image, with light
being spread out, which helps in illuminating spaces and reduces glare.

Applications of Reflection by Plane Surfaces


Reflection on a plane mirror is a fundamental optical phenomenon where light rays strike a smooth surface and are reflected back, creating a virtual
and erect image. This process adheres to the law of reflection, which states that the angle of incidence equals the angle of reflection. The characteristics
of plane mirrors include producing images that are the same size as the object and laterally
inverted.
They are commonly used in everyday items such as bathroom mirrors, where they assist in personal
grooming. Plane mirrors are integral in optical devices like telescopes and periscopes, enhancing
visibility and image clarity. In technology, plane mirrors are utilized in flashlights and car
headlights to direct light beams effectively. Their ability to reflect light efficiently makes them
essential in various fields, including photography and projection systems, where they help in
gathering and directing light for optimal performance.

Application of diffuse reflection


Diffuse reflection occurs when light strikes a rough surface and scatters in multiple directions, rather than reflecting at a single angle. This
phenomenon is crucial in various applications, enhancing visibility and safety in everyday environments. For instance, dry asphalt roads utilize
diffuse reflection to minimize glare, allowing drivers to see clearly without being blinded by sunlight. In addition to road safety, diffuse reflection
plays a significant role in architectural design and interior lighting. By employing materials that promote diffuse reflection, spaces can achieve a
softer, more uniform light distribution, reducing harsh shadows and creating a more inviting atmosphere. This is particularly beneficial in offices
and homes, where comfort and visibility are paramount.
Moreover, diffuse reflection is utilized in advanced technologies, such as machine learning for surface defect detection. By analyzing the scattered light
patterns, systems can identify imperfections on various surfaces, enhancing quality control in manufacturing processes.
In photography, for instance, diffusers are employed to soften and evenly distribute light, minimizing harsh shadows and creating a more flattering
illumination for subjects. This technique is essential for achieving high-quality images, particularly in portrait and product photography. Diffuse
reflection plays a significant role in the design of auditoriums and theaters. By ensuring that sound and light are evenly distributed throughout the
space, it enhances the overall auditory and visual experience for audiences. Diffuse reflective photoelectric switches are increasingly utilized in smart
home systems for effective lighting control, displaying the versatility and importance of diffuse reflection in modern technology.


Experiment to verify laws of reflection


• Draw lines AB and ON perpendicular to each other on a white sheet of paper.
• Measure an angle of incidence i = 30° from the normal ON and draw the line IO.
• Place the white sheet of paper on a soft board and fix pins P1 and P2 vertically along the line IO.
• Place a plane mirror along AB with its reflecting surface facing you.
• Looking into the mirror from the other side of the normal, fix pins P3 and P4 so that they appear to be in line with the images of P1 and P2, and draw the line OR through them.
• Measure the angles i and r using a protractor.
• The procedure above is repeated for angles of incidence of 45° and 40°.
• It is observed that the angle of incidence i is equal to the angle of reflection r, and since IO, ON and OR are all drawn on the same sheet of paper, the incident ray, the normal and the reflected ray lie in the same plane, hence verifying the laws of reflection.

NATURE OF IMAGE FORMED BY A PLANE MIRROR
Firstly, the image is virtual, meaning it cannot be projected onto a screen,
as it appears to be located behind the mirror. This occurs because the light
rays reflecting off the mirror diverge, creating the illusion of an image at
a specific location. Secondly, the image is upright and maintains the same
orientation as the object. This means that if the object is positioned
upright, the image will also appear upright. However, it is important to
note that the image is laterally inverted, which means that the left and right sides are reversed. For example, if a person raises their right hand, the
image in the mirror will show the left hand raised.
Lastly, the size of the image is identical to that of the object, and it is located at the same distance from the mirror as the object itself. These
characteristics make plane mirrors unique in their ability to reflect images accurately while maintaining specific optical properties.
Note: The line joining any point on the object to its corresponding point on the image cuts the mirror at 90°. Distance BC = distance CD.
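For example, a student standing 2 m in front of a plane mirror sees their image 2 m behind the mirror, so the image appears 4 m away from them; if they move 0.5 m towards the mirror, the image also moves 0.5 m towards it.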

Images formed in two plane mirrors inclined at 90°


When two mirrors are inclined at 90° to each other, three images are formed: two by a single reflection (one in each mirror) and a third by two successive reflections.
Images formed in parallel mirrors
An infinite number of images is formed when an object is placed between two parallel mirrors, because each image seen in one mirror acts as a virtual object for the other mirror. The object O gives rise to image I1 in mirror m1 and image I2 in mirror m2. I1 then acts as a virtual object to give an image I(1,2) in mirror m2, just as I2 gives an image I(2,1) in mirror m1. In turn, I(1,2) gives I(1,2,1) after a further reflection in m1, while I(2,1) gives I(2,1,2) after a further reflection in m2, and so on indefinitely.


Images formed by two mirrors inclined at an angle θ


The number of images formed by two mirrors inclined at an angle θ is given by n = (360/θ) − 1, where n is the number of images formed and θ is measured in degrees. As the mirrors are turned towards the parallel position, θ approaches zero and n grows without limit, so an infinite number of images is formed when two plane mirrors are parallel. The images lie on a straight line through the object, perpendicular to the mirrors.
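A short worked illustration of the formula: for two mirrors inclined at θ = 60°, n = (360 ÷ 60) − 1 = 5 images are formed, while for θ = 90°, n = (360 ÷ 90) − 1 = 3 images, which agrees with the three images described above for mirrors at right angles.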

Periscope
A periscope is an optical device designed to observe objects that are not in the direct line of sight,
making it particularly useful in various applications, including submarines and photography. It
typically consists of a long tube with two mirrors positioned at 45-degree angles at each end. When
light from an object enters the periscope, it reflects off the first mirror, travels down the tube, and
then reflects off the second mirror, allowing the observer to see the image above or around an obstacle.
The working principle of a periscope relies on the laws of reflection. The arrangement of mirrors
ensures that the image remains upright, providing a clear view of the surroundings. This technology
has evolved, with modern adaptations like the periscope camera in smartphones, which utilizes prisms
to capture high-quality images while maintaining a slim design. Periscopes are essential tools in both
military and civilian contexts, enabling observation in challenging environments. It is mainly used in submarines.

Other uses of plane mirrors include:


Kaleidoscopes utilize plane mirrors to create mesmerizing visual patterns, displaying the beauty of symmetry and reflection. In the realm of solar
energy, plane mirrors are employed in solar cookers, where they concentrate sunlight to generate heat for cooking. This eco-friendly application
highlights the mirrors' ability to harness solar power efficiently. Additionally, plane mirrors are commonly found in security systems, providing a
wider field of view in retail stores and public spaces, enhancing safety and surveillance. Plane mirrors serve crucial roles in various fields, including
optics, interior design, and physics research, demonstrating their importance in both practical and creative applications.
They are used behind the pointers of measuring instruments to prevent errors due to parallax when reading the scale, and in optical lever instruments to magnify small angles of rotation.
They are used in small shops, supermarkets, takeaways and salons, where multiple reflections give the impression of a larger, better-stocked space.

Activity:
1. Two plane mirrors are inclined at an angle of 50° to one another. Find the number of images formed by these mirrors.
2. Two plane mirrors are inclined at an angle θ to each other. If the number of images formed between them is 12, find the angle of inclination θ.


SENIOR TWO
2.1 WORK, ENERGY, AND POWER
Learning Outcomes
a) Know that the sun is our major source of energy, and the different forms of energy(k)
b) Know that energy can be changed from one form into another and understand the law of conservation of energy (k,u)
c) Understand the positive and negative effects of solar energy(u)
d) Understand the difference between renewable and nonrenewable energy resources with respect to Uganda. (u, v/a)
e) Know and use the relationship between work done, force, and distance moved, and time taken (k,s)
f) Understand that an object may have energy due to its motion or its position and change between kinetic and positional potential energy
(u,s)
g) Know the mathematical relationship between positional potential energy and kinetic energy, and use it in calculations (k, u, s,gs)
h) Understand the meaning of machines and explain how simple machines simplify work (u,s)
i) Understand the principles behind the operation of simple machines (u, s, gs)

SUN AS SOURCE OF ENERGY


The sun is our primary source of energy, essential for sustaining life on Earth. It radiates light and heat, known as solar radiation, which drives various
processes within the Earth system. This energy is crucial for photosynthesis, allowing plants to grow and produce oxygen, which is vital for animal
life. Solar energy manifests in different forms, primarily as visible light and infrared radiation. Visible light is what we see, while infrared radiation
is felt as heat. Both forms are integral to Earth's climate system, influencing weather patterns and the hydrologic cycle. The sun's energy warms the
planet, making it habitable and supporting diverse ecosystems. In addition to solar energy, alternative energy sources such as wind, hydroelectric,
and biomass also play a role in our energy landscape. However, the sun remains the ultimate source, powering nearly all forms of energy we utilize,
making it indispensable for life on Earth.

EFFECTS OF SOLAR ENERGY


Solar energy offers numerous advantages, making it a popular choice for sustainable energy solutions. As the most abundant and fastest-growing
energy source, it generates minimal greenhouse gas emissions, significantly reducing our carbon footprint. Solar panels can be installed in diverse
applications, from residential rooftops to large solar farms, providing flexibility in energy generation. Solar energy can lead to substantial savings on
electricity bills, making it a cost-effective option in the long run. However, solar energy also has its drawbacks. One significant disadvantage is its
dependence on weather conditions; cloudy days or storms can disrupt energy production. The initial installation costs can be high, which may deter
some homeowners or businesses from making the switch. The production and disposal of solar panels can pose environmental challenges, including
the use of hazardous materials. While solar energy presents a promising path toward sustainability, it is essential to weigh both its benefits and
limitations to make informed decisions about energy use.

FORMS OF ENERGY
Energy exists in various forms, primarily categorized into potential and kinetic energy. Potential energy is stored energy, which can be further divided
into several types, including chemical, gravitational, mechanical, and nuclear energy. For instance, chemical energy is stored in the bonds of molecules,
while gravitational energy is related to an object's position in a gravitational field. On the other hand, kinetic energy is the energy of motion. This
includes thermal energy, which is related to the temperature of an object, and electrical energy, which powers our homes and devices. Other forms of
kinetic energy include sound energy, which is produced by vibrating objects, and radiant energy, which is carried by electromagnetic waves, such as
light.


ENERGY CONCEPT
Energy is a fundamental concept in physics, defined as the ability to do work. One of the key principles governing energy is the law of conservation
of energy, which states that energy cannot be created or destroyed; it can only be transformed from one form to another. For
instance, when you turn on a light bulb, electrical energy is converted into light and heat energy. This transformation illustrates how energy changes
forms while the total amount remains constant. Whether it’s kinetic energy from a moving car or potential energy stored in a compressed spring, the
energy can shift between forms but the overall energy in a closed system stays the same.

RENEWABLE AND NONRENEWABLE ENERGY


Renewable resources are those that can be replaced naturally over time, such as solar, wind, and hydroelectric power. These
resources are sustainable and do not deplete when used, making them essential for a sustainable energy future.
Nonrenewable resources are finite and cannot be replaced within a human timescale. Examples include fossil fuels like coal,
oil, and natural gas. Once these resources are consumed, they take millions of years to form again, leading to concerns about their long-term
availability and environmental impact.
Renewable energy sources offer a continuous supply and lower environmental costs, while nonrenewable sources contribute to pollution and climate change. The distinction between the two lies in their replenishment capabilities and environmental impacts: renewable sources replenish themselves naturally and can be used without risk of depletion, whereas the extraction and use of nonrenewable resources often result in significant pollution and greenhouse gas emissions, raising concerns about exhaustion and environmental degradation. Transitioning to renewable energy is therefore crucial for reducing our carbon footprint and ensuring a sustainable future for generations to come.
In Uganda, the distinction between renewable and nonrenewable energy resources is crucial for sustainable development. Renewable energy sources,
such as hydropower, biomass, solar, and wind, can be replenished naturally over time.
Currently, hydropower dominates Uganda's energy landscape, accounting for about 84% of electricity generation. This reliance on renewable resources
helps reduce carbon emissions and environmental degradation. Nonrenewable energy resources, like fossil fuels, are finite and cannot be replenished
once depleted. Uganda has limited nonrenewable energy sources, primarily relying on biomass for cooking and heating. However, the environmental
costs associated with nonrenewable energy, including pollution and greenhouse gas emissions, pose significant challenges.

The primary sources of energy


Primary energy sources are essential for generating the energy carriers that power our daily lives. The main categories include fossil fuels, nuclear
energy, and renewable sources. Fossil fuels, such as petroleum, natural gas, and coal, account for a significant portion of global energy consumption,
providing about 80% of the energy in the United States. Renewable energy sources, which are increasingly gaining traction, include wind, solar,
hydropower, biomass, and geothermal energy. These sources are sustainable and have a lower environmental impact compared to fossil fuels. Biomass,
for instance, is derived from organic materials and can be converted into fuels or used directly for heating. Nuclear energy, derived from radioactive
minerals, also plays a crucial role in the energy mix, providing a stable and low-emission power source.

The secondary sources of energy


Secondary sources of energy are derived from the transformation of primary energy resources. Primary energy sources, such as coal, natural gas, and
sunlight, are harnessed to produce secondary energy forms like gasoline, electricity, hydrogen, and refined biofuels. These secondary sources are
essential for modern energy consumption, as they provide the energy needed for transportation, heating, and electricity generation. For instance, coal
can be burned to generate electricity, which is then distributed for residential and industrial use. Similarly, biomass, a renewable organic material, can be converted into liquid or gaseous fuels, contributing to the secondary energy supply. As the world shifts towards cleaner energy solutions, the
role of secondary energy sources will continue to evolve, emphasizing the importance of efficient conversion processes and innovative technologies in
meeting global energy demands.

Advantages of Fossil Fuels: High energy density, Established infrastructure and technology, and Reliable and consistent power generation
Disadvantages: Significant greenhouse gas emissions, Air pollution and health impacts, and Finite resource leading to depletion concerns
Nuclear Energy: Generated through nuclear fission, where atomic nuclei (typically uranium-235 or plutonium-239) are split to release energy.
Advantages of Nuclear Energy: Low greenhouse gas emissions during operation, High energy density and reliable power generation, and Long-
term energy supply with abundant fuel resources
Disadvantages: Radioactive waste disposal issues, High initial capital costs, and Risk of nuclear accidents (e.g., Chernobyl, Fukushima)

Renewable Energy
Solar Energy: Energy from the sun captured using solar panels or photovoltaic cells.
Advantages: Abundant and inexhaustible source, Low operating costs after installation, and No greenhouse gas emissions during operation
Disadvantages: Intermittent energy supply (dependent on weather and time of day), High initial installation costs, and Requires significant space
for large-scale installations
Wind Energy: Energy from wind captured using wind turbines.
Advantages: Renewable and abundant, Low operating costs after installation, and No greenhouse gas emissions during operation
Disadvantages: Intermittent energy supply (dependent on wind availability), Noise and visual impact concerns, and Requires suitable locations with
consistent wind
Hydropower: Energy from moving water, typically harnessed using dams on rivers.
Advantages: Reliable and consistent power generation, Can provide large-scale power, and No greenhouse gas emissions during operation
Disadvantages: Ecological impact on aquatic ecosystems, Displacement of communities and wildlife, and High initial construction costs
Biomass Energy: Energy from organic materials (plant and animal matter), including wood, agricultural residues, and biofuels.
Advantages: Can use waste materials, reducing landfill use, Renewable if managed sustainably, and Can reduce greenhouse gas emissions if replacing
fossil fuels
Disadvantages: Air pollution from burning biomass, Land and water resource competition with food production, and Can contribute to deforestation
if not managed sustainably
Geothermal Energy: Energy from heat stored within the Earth, harnessed using geothermal power plants or heat pumps.
Advantages: Reliable and consistent power generation, Low greenhouse gas emissions, and Small land footprint compared to other renewables
Disadvantages: Limited to regions with accessible geothermal resources, High initial capital costs, and Potential for induced seismic activity
Emerging and Alternative Energy Sources
Hydrogen Energy: Energy from hydrogen, used in fuel cells to generate electricity or as a direct fuel.
Advantages: High energy density, Can be produced from various resources (including water and renewable energy), and No greenhouse gas emissions
when used in fuel cells
Disadvantages: High production and storage costs, Infrastructure for widespread use is still developing, and Energy-intensive production process if
not using renewable sources
Tidal and Wave Energy: Energy from ocean tides and waves captured using specialized turbines and generators.
Advantages: Predictable and reliable energy source, High-energy potential in coastal areas, and No greenhouse gas emissions during operation
Disadvantages: High initial capital costs, Environmental impact on marine ecosystems, and Limited to suitable coastal locations

WORK DONE, FORCE, AND DISTANCE MOVED


Work Done
Work is said to be done when a force is applied to an object and the object moves in the direction of the applied force. It is a measure of energy
transfer, quantifying how much energy is used to move an object.
Mathematically, work done (W) is expressed as: W=Fdcos(θ), where F represents the magnitude of the applied force in newtons (N), d is the
displacement or distance moved in meters (m), and θ is the angle between the direction of the force and the direction of displacement.
The cosine term accounts for the fact that not all the force may contribute to motion in the direction of displacement. For example, if the force and
displacement are in the same direction, the angle is 0° and cos 0° = 1. In this case, the work done is maximized. Conversely, if the force is perpendicular to the displacement (θ = 90°), no work is done because cos 90° = 0.
Work = force × distance moved in the direction of the force, i.e. W = F × d. Work is measured in joules (J), where 1 J = 1 N × 1 m.
The SI unit of work is the joule (J). A joule is the work done when a force of 1 newton moves a body through a distance of 1 metre.
1 joule = 1 newton × 1 metre. Bigger units used are the kilojoule (1 kJ = 1 000 J) and the megajoule (1 MJ = 1 000 000 J).
To illustrate, consider lifting an object against gravity. Here, the force applied is equal to the object’s weight, and the work done is proportional to
the height the object is lifted. Similarly, pushing a cart along a flat surface involves work done, provided the cart moves in the direction of the applied
force. However, holding a stationary object does not constitute work because, despite the exertion of force, there is no displacement.

Example: Find the work done in lifting a mass of 2 kg vertically upwards through 10 m. (g = 10 m/s2)
Solution: To lift the mass upwards against gravity, a force equal to its own weight is exerted.
Applied force = weight = mg = 2kg × 10N/kg = 20 N
Work done = F × d = 20 N × 10 m = 200 Nm = 200 J
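The same calculation can be checked with a short program. The following is a minimal Python sketch (the function name, the optional angle argument, and the second sample call are illustrative choices, not taken from the text); it evaluates W = F × d × cos θ and reproduces the 200 J answer above.

import math

def work_done(force_n, distance_m, angle_deg=0.0):
    """Work done by a force: W = F * d * cos(theta), in joules."""
    return force_n * distance_m * math.cos(math.radians(angle_deg))

weight = 2 * 10                            # lifting force = weight of 2 kg at g = 10 N/kg
print(work_done(weight, 10))               # 200.0 J, as in the worked example
print(work_done(20, 10, angle_deg=90))     # ~0 J: a force perpendicular to the motion does no work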

Force
Force is defined as a physical quantity that is exerted on an object that can cause it to accelerate, change direction, or deform. It is a vector quantity,
meaning it has both magnitude and direction. According to Newton's Second Law of Motion, force (F) is mathematically described as: F=m×a.
Here, m is the mass of the object in kilograms (kg), and a is the acceleration produced by the force in metres per second squared (m/s²). The unit of force is the newton (N), where 1 N = 1 kg⋅m/s².
Forces can be classified as contact forces and non-contact forces. Contact forces, such as friction, tension, and normal force, occur when objects are in
physical contact. Non-contact forces, such as gravitational, electromagnetic, and nuclear forces, act at a distance without direct contact. In the context
of work, force is crucial because it drives the displacement of an object. Without force, no movement or work is possible.

Distance Moved
Distance moved refers to the measure of how far an object travels during motion. It is a scalar quantity, meaning it has only magnitude and no
direction. In relation to work, the distance moved must occur in the direction of the applied force for work to be done. If an object is stationary, no
matter how much force is applied, the work done remains zero because there is no displacement.
For example, when a person pushes a box across a floor, the distance the box travels contributes to the work done. However, if the person simply
pushes against a wall, no work is done because the wall does not move, even though a force is applied. Similarly, moving an object a greater distance
requires more work if the force remains constant.


Example:
a) A horizontal pulling force of 60 N is applied through a spring to a block on a frictionless table, causing the block to move by a distance
of 3 m in the direction of the force. Find the work done by the force.
Solution: The work done = F × d = 60 N × 3 m = 180 Nm = 180 J
b) A horizontal force of 75 N is applied on a body on a frictionless surface. The body moves a horizontal distance of 9.6 m. Calculate the
work done on the body.
Solution: Work = force × distance = 75 N × 9.6 m = 75 × 9.6 Nm = 720 J

Interrelationship between Work, Force, and Distance


The relationship between work, force, and distance is interdependent. Work done is directly proportional to both the force applied and the distance
moved in the direction of the force. If either force or distance is zero, the work done will also be zero. The angle between the force and the displacement
direction (θ) plays a critical role in determining how much of the force contributes to the displacement.

Practical Applications
In transportation, these principles help design engines and vehicles that efficiently convert fuel energy into motion. Lifting equipment such as cranes
and pulleys rely on calculations of work and force to safely move heavy loads. In sports, athletes apply these concepts to improve performance, such
as sprinters using force over distance to maximize speed. Even in everyday tasks like pushing furniture or climbing stairs, these principles are at play.
Work is done when a force acts on an object and causes it to move. The work done by a force is equal to the product of the force and the displacement
of the object in the direction of the force.
Equation: W=Fdcosθ, where: W is the work done, F is the magnitude of the force, d is the displacement of the object; θ is the angle between the
force and the displacement direction. Units: The SI unit of work is the joule (J), where 1 J=1 Nm
A joule is the work done when a force of one newton moves its point of application through a distance of one metre in the direction of the force.
Other units include the kilojoule (kJ) and the megajoule (MJ), where 1 kJ = 1 000 J and 1 MJ = 1 000 000 J.
e.g. Pushing a box with a force of 10 newtons over a distance of 5 metres in the direction of the force does 50 joules of work.

ENERGY
Energy is defined as the ability to do work. It’s a Scalar quantity.
It exists in various forms, including potential, kinetic, thermal, electrical, chemical, and nuclear energy. Each type of energy can be transformed from
one form to another, enabling processes that power our daily lives, from heating our homes to fueling vehicles. Potential energy is stored energy,
while kinetic energy is the energy of motion. For instance, a rock perched on a hill has potential energy, which converts to kinetic energy as it rolls
down. This interplay between different energy forms is crucial for understanding how systems operate and interact in nature. Energy is not only a
quantitative property but also a vital resource for societal development. As we harness and convert energy efficiently, we can improve industrial
processes, enhance building technologies, and promote sustainability, ensuring a balanced approach to energy consumption and environmental
stewardship.

Example:
A towing truck was used to tow a broken car through a distance of 30 m. The tension in the towing chain was 2 000 N. If the total friction is 150 N,
determine. (a) Work done by the pulling force. (b) Work done against friction. (c) Useful work done.
Solution:
(a) Work done by the pulling force, W = F × d = 2 000 N × 30 m = 60 000 J
(b) Work done against friction, W = Fr × d (where Fr is the frictional force) = 150 N × 30 m = 4 500 J
(c) Useful work done = Fd – Frd = (60 000 – 4 500) J = 55 500 J
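As a quick check of the arithmetic above, here is a minimal Python sketch (variable names are chosen here for illustration only) that recomputes the three quantities for the towing example.

pull_force = 2000      # N, tension in the towing chain
friction = 150         # N, total frictional force
distance = 30          # m, distance the car is towed

work_by_pull = pull_force * distance            # 60 000 J
work_against_friction = friction * distance     # 4 500 J
useful_work = work_by_pull - work_against_friction

print(work_by_pull, work_against_friction, useful_work)   # 60000 4500 55500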

FORMS OF ENERGY
Energy exists in various forms, primarily categorized as potential and
kinetic energy. Potential energy is stored energy, often associated with
an object's position or state relative to the ground or in a gravitational
field. For instance, gravitational potential energy is related to an object's
height, while chemical energy is stored in the bonds of atoms and
molecules, ready to be released during chemical reactions. Kinetic energy,
on the other hand, is the energy produced by bodies due to their state of
motion. Any moving object, from a rolling ball to flowing water, possesses
kinetic energy. Thermal energy, which is the energy of heat, results from the movement of particles within a substance. Other forms of energy include
electrical energy, which powers our homes and devices, and nuclear energy, derived from the nucleus of atoms. Each form plays a crucial role in our
daily lives and technological advancements, highlighting the diverse ways energy can be harnessed and utilized.

Kinetic Energy:
The energy produced by bodies due to their state of motion or possessing velocity. When an object is in motion, there is energy
associated with that object. Moving objects are capable of causing a change, or, put differently, of doing work. For example, think of a wrecking ball.
Even a slow-moving wrecking ball can do a lot of damage to another object, such as an empty house. However, a wrecking ball that is not moving
does not do any work (does not knock in any buildings). The energy associated with an object’s motion is called kinetic energy. A speeding bullet,
a walking person, and electromagnetic radiation like light all have kinetic energy. Another example of kinetic energy is the energy associated with
the constant, random bouncing of atoms or molecules. This is also called thermal energy; the greater the thermal energy, the greater the kinetic
energy of atomic motion, and vice versa. The average thermal energy of a group of molecules is what we call temperature, and when thermal energy
is being transferred between two objects, it’s known as heat. Mathematically: Kinetic Energy (KE) = ½mv², where m is the mass and v is the velocity. This formula illustrates that kinetic energy increases with the square of the object's speed, meaning that even small increases in velocity
can lead to significant increases in kinetic energy. Kinetic energy is not a force; rather, it can be transferred to or from an object through forces acting
upon it. For example, when a car accelerates, work is done on it, resulting in an increase in its kinetic energy. Alternatively, when brakes are applied,
kinetic energy is transformed into other forms, such as heat.
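To make the v² dependence concrete, here is a minimal Python sketch (the 1 000 kg mass and the speeds are illustrative values, not from the text); doubling the speed quadruples the kinetic energy.

def kinetic_energy(mass_kg, speed_ms):
    """Kinetic energy in joules: KE = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_ms ** 2

car = 1000                                   # kg, an illustrative mass
for v in (10, 20, 40):                       # speeds in m/s
    print(v, "m/s ->", kinetic_energy(car, v), "J")
# 10 m/s -> 50000.0 J, 20 m/s -> 200000.0 J, 40 m/s -> 800000.0 J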

Potential Energy:
The energy an object possesses due to its position or configuration in a
gravitational field. Potential energy is a form of stored energy that arises from
an object's position, state, or arrangement within a system. It is fundamentally
linked to the forces acting upon the object, such as gravitational, elastic, or
electric forces. For instance, a steel ball held at a height possesses gravitational
potential energy due to its elevated position relative to the ground. The higher
the ball is lifted, the more potential energy it accumulates, which can be converted into kinetic energy when it falls. This energy is not only confined to gravitational scenarios; it also applies to elastic materials. A compressed
spring, for example, stores potential energy that can be released when the spring returns to its original shape.
For example, gravitational potential energy is given by: Potential Energy (PE) = mgh, where m is the mass, g is the acceleration due to gravity, and h is the height above the reference point.

Example:
a) Calculate the work done by a weight lifter in raising a weight of 400 N through a vertical distance of 1.4 m.
Solution: Work done against gravity = Force x displacement = mg × h = 400 N × 1.4 m = 560 J
b) A force of 200 N was applied to move a log of wood through a distance of 10 m. Calculate the work done on the log.
Solution: W = F × d = 200 N × 10 m = 2 000 J
Activity:
1. Calculate the work done by a force of 12 N when it moves a body through a distance of 15 m in the direction of that force.
2. Determine the work done by a person pulling a bucket of mass 10 kg steadily from the well through a distance of 15 m.
3. A car moves with uniform speed through a distance of 40 m and the net resistive force acting on the car is 3 000 N.
(a) What is the forward driving force acting on the car? Explain your answer.
(b) Calculate the work done by the driving force. (c) State the useful work done.
4. A student of mass 50 kg climbs a staircase of vertical height 6 m. Calculate the work done by the student.
5. A block was pushed by a force of 20 N through a distance of 9 m. Calculate the work done.

Power is the rate at which work is done or the rate at which energy is transferred or converted.
It measures how quickly work can be performed or energy can be used.
Mathematically, Power = work done ÷ time = (force × distance) ÷ time = F × v, where P is the power, W is the work done, t is the time taken, F is the force and v is the velocity. The SI unit of power is the watt (W), where 1 W = 1 J/s. If 100 joules of work is done in 10 seconds, the power is 100 J ÷ 10 s = 10 W. Other units are the kilowatt (kW) and the megawatt (MW), where 1 kW = 1 000 W and 1 MW = 1 000 000 W.
A watt is the power developed when one joule of work is done in one second. i.e. 1W = 1Js-1.

Example
What is the power of a boy lifting a 300 N block through 10 m in 10 s?
Solution: Force = 300 N, Distance = 10 m, Time = 10 s
Work done by the boy = F × d = 300 N × 10 m = 3 000 J. Power = W/t = 3 000 J ÷ 10 s = 300 W
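The boy's power can be verified with the short Python sketch below (a minimal illustration; the helper names are invented here). It also shows the equivalent P = F × v form for a body moving at a steady speed.

def power_from_work(work_j, time_s):
    """Power as the rate of doing work: P = W / t (watts)."""
    return work_j / time_s

def power_from_force(force_n, speed_ms):
    """Equivalent form for steady motion: P = F * v (watts)."""
    return force_n * speed_ms

work = 300 * 10                         # lifting 300 N through 10 m = 3000 J
print(power_from_work(work, 10))        # 300.0 W, as in the example
print(power_from_force(300, 10 / 10))   # same answer via the average speed of 1 m/s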

Relationship between Work, Energy, and Power


Work and Energy: Work is a means of transferring energy. When work is done on an object, energy is transferred to or from that object. The work-
energy theorem states that the work done on an object is equal to the change in its kinetic energy.
Energy and Power: Power quantifies how quickly energy is used or transferred. Higher power means more energy is used or transferred in a given
amount of time.
Examples
Work: Lifting a weight off the ground involves doing work against the force of gravity.
Energy: A roller coaster at the top of a hill has high potential energy, which converts to kinetic energy as it descends.
Power: A high-powered engine can do more work in a shorter time, enabling a car to accelerate rapidly.

Estimating the power of an individual climbing a flight of stairs


Activity: To estimate the power of an individual climbing a flight of stairs
Materials: stopwatch, weighing machine, tape measure
Procedures
a) Find a set of stairs that you can safely walk and run up. If there are no stairs at your school, improvise some in the school garden.
b) Count their number, measure the vertical height of each stair, and then find the total height of the stairs in metres.
c) Let one member weigh himself/herself on a weighing machine and record the weight.
d) Let him/her first walk and then run up the stairs. Using a stopwatch, record the time taken in seconds to walk up and to run up the stairs.
e) Calculate the work done in walking and in running up the stairs. Let each group member do the activity. Do different members do the same work in walking and running up the stairs? Explain any disparity in the work done by various group members.
f) Calculate the power developed by each individual in walking and running up the stairs. Which one required more power, walking or running
up the flight of stairs? Why?
Note
From your discussion, you should have established that: height moved up, h = number of steps (n) × height of one step (d), i.e. h = n × d. If the time taken to move through the height h is t, then
Power = work done against gravity ÷ time = mgh/t = Wh/t = Wnd/t,
where P = power, W = weight, d = height of one step, and n = number of steps. If d is in metres, W in newtons and t in seconds, then the power is in watts.

Example:
A girl whose mass is 60 kg can run up a flight of 35 steps, each 10 cm high, in 4 seconds. Find the power of the girl. (Take g = 10 m/s².)
Solution: Force overcome (weight) = mg = 60 kg × 10 N/kg = 600 N. Total height = 10 cm × 35 = 350 cm = 3.5 m
Work done by the girl = F × d = 600 N × 3.5 m = 2 100 J. Power = work done ÷ time = 2 100 J ÷ 4 s = 525 W. The power of the girl is 525 W.
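A short Python sketch of the stair-climbing estimate follows (illustrative only; it assumes g = 10 m/s² as used throughout this chapter, and reproduces the girl in the example above).

g = 10                     # N/kg

def stair_power(mass_kg, steps, step_height_m, time_s):
    """P = work against gravity / time = m*g*(n*d) / t."""
    height = steps * step_height_m
    return mass_kg * g * height / time_s

print(stair_power(60, 35, 0.10, 4))   # 525.0 W, matching the worked example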

ACTIVITY:
1. Janelle is 42 kg. She takes 10 seconds to run up two flights of stairs to a landing, a total of 5.0 metres vertically above her starting point. What
power does the girl develop during her run?
2. Student A lifts a 50 newton box from the floor to a height of 0.40 metres in 2.0 seconds. Student B lifts a 40 newton box from the floor to a height
of 0.50 metres in 1.0 second. Which student has more power than the other?

MECHANICAL FORMS OF ENERGY


Mechanical energy is a fundamental concept in physics, encompassing two primary forms: kinetic energy and potential energy. Kinetic energy refers
to the energy of motion, while potential energy is the energy stored in an object due to its position or state, such as a compressed spring or a stretched
rubber band. Together, these two forms constitute the total mechanical energy of a system. The principle of conservation of mechanical energy states
that in a closed system, the total mechanical energy remains constant provided no external forces do work on it. This principle is crucial in various
applications, from engineering to natural phenomena, as it helps predict how energy transforms and transfers within systems

KINETIC ENERGY
Recall that kinetic energy is given by KE = ½mv², where m is the mass of the body and v is its speed or velocity.
Activity


1. Find the kinetic energy of a body of mass 2 kg moving with a speed of 4 m/s. (16 J)
2. A boy of mass 60 kg is running at a speed of 10 m/s. Find his kinetic energy. (3 000 J)
3. A ball of mass 50 kg moves with a kinetic energy of 3 125 J. Calculate the speed with which it moves. (≈ 11 m/s)

POTENTIAL ENERGY
Recall that potential energy is the work done in raising a body: PE = work done = F × d, but F = mg and d = h, so PE = mgh, where h is the height above the ground. When the body is allowed to fall, its potential energy reduces as it approaches the ground.

Activity
1. A stone of mass 8kg is lifted through a height of 2 metres. Find the potential energy the stone develops (Take g = 10m/s2). P.E = mgh = 8 x
10 x 2 = 160J
2.A girl of mass 40kg is 15 metres above the ground. Find the potential energy she possesses.
P.E = mgh = 40 x 10 x 15 = 400 x 15 = 6000J

ENERGY INTERCHANGE
In the gravitational field, energy changes from one form to another. Consider a stone released from rest at a point Y above the ground. The stone has maximum potential energy at Y, where it is at rest above the ground. At an intermediate point P, the stone has both potential and kinetic energy, and by the time it hits the ground at X it has lost all its potential energy. This potential energy is converted to kinetic energy, which is maximum as the stone hits the ground.
P.E at Y = K.E at X, so mgh = ½mv², hence 2gh = v² and v = √(2gh), where v is the speed with which the stone lands on the ground.
Activity
A stone of mass 1 kg falls from rest from a height of 120 m above the ground.
(a) Find its potential energy before it begins to fall. (P.E = 1 200 J)
(b) If at some instant the stone is falling with a velocity of 2 m/s, find its kinetic energy. (K.E = 2 J)
(c) Find the velocity with which it hits the ground. (v ≈ 49 m/s)
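The conversion of potential energy to kinetic energy during a fall can be tabulated with a short Python sketch (illustrative only; g = 10 m/s² and air resistance is ignored). It reproduces the answers for the 1 kg stone dropped from 120 m in the activity above.

import math

g = 10            # m/s^2
m, h0 = 1, 120    # kg, m (the stone in the activity)

print("PE at the top:", m * g * h0, "J")               # 1200 J
print("Landing speed:", math.sqrt(2 * g * h0), "m/s")  # about 49 m/s

# At any height h on the way down, PE + KE stays equal to the initial PE:
for h in (120, 80, 40, 0):
    pe = m * g * h
    ke = m * g * (h0 - h)            # the PE lost so far appears as KE
    print(h, "m:", pe, "J PE +", ke, "J KE =", pe + ke, "J total")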

PRINCIPLES OF CONSERVATION OF ENERGY


It states that energy can neither be created nor destroyed but can be transformed
from one form to another.
The principle of conservation of energy is a fundamental concept in physics, stating that the
total energy in an isolated system remains constant over time. This means that
energy cannot be created or destroyed; it can only change forms. For instance, potential energy can be converted into kinetic energy, but the total
amount of energy remains unchanged. In practical terms, this principle is crucial for understanding various physical phenomena. For example, in a
closed system, the energy provided must equal the energy transferred to the surroundings. This concept is essential in fields ranging from engineering
to environmental science, as it helps in analyzing energy efficiency and sustainability.


Activity
1. Explain why a swinging pendulum eventually stops after some time.
(i) Describe the energy changes that occur from the instant a stone is released from a height h until it reaches the ground. (ii) Given that the height in (i) was 20 m, calculate the speed with which the stone hits the ground.
2. A boy climbs some stairs. Each step rises 20 cm and there are 10 steps. If the boy has a mass of 50 kg: (a) How much work does he do in climbing the stairs?
(b). Calculate the power developed if he took 10 seconds in climbing.
3. A machine lifts a load of 2500N through a vertical height of 3m in 1.5s. Find i) The power developed by a machine, (ii) Using the same power how
long would it take to lift 6000N through a vertical height of 5m
4. A force of 500N displaced a mass of 20kg through a distance of 4m in 5 seconds; find (i) the work done (ii) power developed
5. A pump is rated 400 W. How many kilograms of water can it raise in one hour through a height of 72 m?

MACHINES
Machines are devices designed to modify motion and force to perform work more efficiently. They can range from complex systems to simple machines,
which have few or no moving parts. Simple machines include levers, pulleys, inclined planes, wedges, screws, and wheels. These basic devices play a
crucial role in simplifying tasks by altering the direction or magnitude of a force. By using simple machines, individuals can accomplish physical tasks
with less effort. For instance, a lever allows a person to lift heavy objects with minimal force by distributing the weight. Similarly, an inclined plane
reduces the effort needed to raise an object by spreading the distance over
which the force is applied. In essence, simple machines enhance our ability to
perform work efficiently, making everyday tasks easier and more manageable.
When using a machine, a force is applied at one point (EFFORT) to overcome
another force (LOAD) at another point. A machine is used to convert energy
from one form to another and amplify the force used.

PRINCIPLE OF MACHINES
The principle of machines is fundamentally rooted in the law of conservation
of energy, which states that energy cannot be created or destroyed, but
can only be transformed. This principle implies that in an ideal machine, the work output is equal to the work input, meaning that the energy
used to perform a task is conserved throughout the process. Machines operate on the concept of mechanical advantage, allowing a
smaller effort to move a larger load over a greater distance. This is achieved through various simple machines, such as levers, pulleys,
and inclined planes, which enhance efficiency and reduce the effort required to perform work. In essence, machines are designed to facilitate tasks by
altering the direction and magnitude of forces, making it easier to overcome resistance. Hence, it states that a small applied force (effort) moves through a large distance to produce a bigger force that moves a load through a small distance.

TERMS USED IN MACHINES


A lever is a solid bar that pivots around a point called the fulcrum, allowing users to lift heavy loads with less effort.
The effort refers to the force applied to the lever to move the load, while the load is the weight or resistance that needs to be overcome.
The mechanical advantage measures how much a machine amplifies the input force. It is often expressed as the force ratio, i.e. the ratio of the load to the effort (mechanical advantage = load ÷ effort).


The velocity ratio describes the ratio of the distance moved by the effort to the distance moved by the load, providing insight into the efficiency of
the machine. Velocity Ratio doesn’t have units.
Work input (W.I): This is the work done by the effort to overcome the load. It’s the product of effort and the distance moved by the effort. The SI
unit of work input is a joule (J).
Work output (W.O): This is the work done by the machine to overcome the load.
It’s the product of load and the distance moved by the load. The SI unit of work output is a joule (J). Work output is also referred to as work done on
the load.
Energy wasted: This is the difference between Work input and Work output.
Efficiency: This is the ratio of work output to work input of a machine, expressed as a percentage: efficiency = (work output ÷ work input) × 100%.
In practice, the efficiency of a machine is always less than 100%, because some energy is wasted in overcoming friction between the movable parts of the machine, and some is wasted in lifting useless loads, e.g. the strings in a pulley system. The efficiency of a machine can be increased by lubricating the movable parts (oiling and greasing) and by using light materials for the useless loads.
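The quantities defined above can be gathered in a minimal Python sketch (the function name is invented here for illustration): MA = load ÷ effort, VR = effort distance ÷ load distance, and efficiency = work output ÷ work input as a percentage. The sample call reproduces question 8 of the activity that follows.

def machine_summary(load_n, load_dist_m, effort_n, effort_dist_m):
    """Return MA, VR, work output (J), work input (J) and efficiency (%)."""
    ma = load_n / effort_n
    vr = effort_dist_m / load_dist_m
    w_out = load_n * load_dist_m
    w_in = effort_n * effort_dist_m
    efficiency = 100 * w_out / w_in          # equivalently 100 * ma / vr
    return ma, vr, w_out, w_in, efficiency

# Question 8 below: load 60 N raised 2 m by an effort of 20 N moving 8 m
print(machine_summary(60, 2, 20, 8))   # (3.0, 4.0, 120, 160, 75.0)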

Activity
1. An effort of 200 N moves a distance of 1.5 m to lift a load of 480 N through 1 m. Calculate its Mechanical Advantage, Velocity ratio, Work output, Work input, and Efficiency.
2. A lever is used to overcome a load of 50 N by applying an effort of 10 N. Find (i) the mechanical advantage of the lever system, and (ii) the efficiency of the system if the velocity ratio is 6.
3. In a machine, 50 N are used to overcome a load of 20 kg. If the 20 kg load moves a distance of 5 cm whenever the 50 N effort moves a distance of 25 cm, calculate the Mechanical advantage, the velocity ratio and the efficiency.
4. An effort of 100 N is used to raise a load of 200 N. If the effort moves through a distance of 4 m, calculate the distance moved by the load if the velocity ratio is 5, and the energy wasted by the machine.
5. In a machine which is 75% efficient, an effort of 300 N is used to lift a load of 900 N. If the load is moved through a distance of 2 m, calculate its Mechanical Advantage, Velocity ratio, Work output, Work input, and Efficiency.
6. An effort of 100 N moves through 12 cm while moving a load of 400 N through 2 cm. Calculate its Mechanical Advantage, Velocity ratio, Work output, Work input, and Efficiency.
7. A water pump raises 2 000 kg of water through a vertical height of 22 m. If the efficiency of the water pump is 80%, calculate the work input.
8. A simple machine raises a load of 60 N through a distance of 2 m by an effort of 20 N which moves through a distance of 8 m. Calculate the machine's efficiency.
9. A simple machine raises a load of 300 kg through 0.5 m when an effort of 150 N is applied through a distance of 12.5 m. Calculate the work input into the machine, the work output by the machine and the efficiency of the machine.

Simple machines are fundamental devices that simplify work by utilizing principles of mechanical advantage and leverage. They allow users to apply
a smaller force over a greater distance, making tasks more efficient. The six primary types of simple machines include the lever, inclined
plane, wedge, screw, pulley, and wheel and axle. Each of these machines operates by altering the direction or magnitude of the force applied.
For instance, a lever consists of a rigid beam pivoting around a fulcrum, enabling a small input force to lift a heavier load. Similarly, a pulley changes
the direction of the force, allowing for easier lifting of objects. While simple machines do not reduce the total work required, they make it more
manageable by redistributing the effort needed.


LEVERS
Levers are simple machines that operate based on the principle of torque, which is the force required to rotate an object around a pivot point. By
utilizing a lever, one can reduce the amount of force needed to lift a load, effectively multiplying the applied force. This is achieved by increasing the
distance over which the force acts, allowing for greater efficiency in moving heavy objects. There are three classes of levers, each defined by the
relative positions of the load, effort, and fulcrum.
In a first-class lever, the fulcrum is positioned between the load and the effort, while in a second-class lever, the load is between the fulcrum and the
effort. The third-class lever has the effort applied between the load and the fulcrum. Each configuration offers unique advantages in terms of force and
distance.
FIRST CLASS LEVERS
This is the type of lever where the pivot is between the load and the effort (LPE). Examples include: crowbar, beam balance, pair of scissors, pair of pliers, seesaw, claw hammer, shears, and secateurs.

SECOND CLASS LEVERS


This is the type of lever where the load is between the pivot and the effort (PLE).
Examples include: wheelbarrow, nutcracker, bottle opener, and office punching machine.

THIRD CLASS LEVERS: This is the type of lever where the effort is between the load and the pivot (LEP). Examples include: spade, pair of tongs, tweezers, forceps, broom, fishing rod, and stapling machine. The operation of a lever depends on the principle of moments. The efficiency of a lever can be increased by increasing the effort distance (the distance of the effort from the turning point).

PULLEY SYSTEMS
Pulleys are simple machines that consist of one or more wheels over which a rope or chain is looped, facilitating the lifting of heavy objects. The
fundamental principle behind a pulley system is to reduce the amount of force needed to lift a load by distributing the weight across multiple segments
of rope. When one end of the rope is pulled down, the load on the opposite end is lifted, demonstrating the mechanical advantage pulleys provide. In
an ideal scenario, where pulleys and ropes are considered weightless and frictionless, the effort required to lift an object decreases as more pulleys
are added. This is because each additional pulley increases the length of rope used, effectively reducing the force needed to raise the load. Overall,
pulley systems exemplify the principles of mechanical advantage, allowing users to lift heavy objects with less effort, making them invaluable in
various applications, from construction to everyday tasks.
There are three types of pulley systems, namely: the single fixed pulley, the single movable pulley, and the block and tackle pulley system.

SINGLE FIXED PULLEY:


This is a type of pulley system in which the pulley is fixed on the
rigid support. In a single fixed pulley, the load is tied to one end
and the effort applied to another end of the rope. As the rope is
pulled downwards, the load is raised upwards.
Therefore, a single fixed pulley eases work by changing the
direction of the application of the effort.
Assuming there is no friction in the groove and the rope is weightless, at equilibrium the effort is equal to the load. In real practice, however, the effort is always greater than the load because part of it is used to overcome friction in the groove. Therefore, the mechanical advantage is always less than 1. However, the distance moved by the effort is always equal to the distance moved by the load.

SINGLE MOVABLE PULLEY:


This is the type of pulley system in which the pulley moves along with the rope. One end of the rope is fixed to a rigid support and the effort is applied
on the other end. The advantage of this pulley system is that less effort is required to lift the load, thus raising it easily. Assuming there is no friction in the groove and the rope is weightless, at equilibrium the load is supported by two sections of the rope, so the effort is half the load and the ideal mechanical advantage is 2. In real practice, the effort is more than this because part of it is used to overcome friction in the groove and to lift the weight of the pulley block. Therefore, the mechanical advantage is always less than 2. However, the distance moved by the effort is always twice the distance moved by the load.

BLOCK AND TACKLE PULLEY SYSTEM


This is the type of pulley system where two or more pulleys are combined to form a machine with
high velocity ratio and high mechanical advantage.
It consists of two blocks namely; fixed block, and movable block.
A single rope called the “tackle” joins these blocks.

NOTE: Velocity ratio is equal to the number of strings supporting the movable block.
Velocity ratio is also equal to the number of pulleys on
the system. The effort applied is equal to the tension in
each string supporting the movable block. For an odd
number of pulleys in the system, the fixed block should have one more pulley than the movable block. For
an even number of pulleys in the system, the fixed and the movable blocks should have the same number of
pulleys.

PASSING THE STRINGS (ROPES):


If the number of pulleys is odd (velocity ratio is odd), the string must be tied and start from the movable block. If the number of pulleys is even
(velocity ratio is even), the string must be tied and start from the fixed block.
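A block and tackle calculation can be sketched as below in Python (illustrative only; the function name is invented here). The velocity ratio is taken as the number of strings supporting the movable block, and the efficiency follows from MA ÷ VR; the numbers reproduce question 2 of the activity that follows.

def pulley_efficiency(load_n, effort_n, strings_on_movable_block):
    """Efficiency (%) of a block and tackle: (MA / VR) * 100."""
    ma = load_n / effort_n
    vr = strings_on_movable_block        # VR = number of supporting strings
    return 100 * ma / vr

# Question 2 below: load 500 N, effort 200 N, velocity ratio 5
print(pulley_efficiency(500, 200, 5))    # 50.0 %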

Activity
1. A block and tackle pulley system shown in the figure below is used to lift a load of 220N when an effort of 110N is applied.
(i) State the velocity ratio of the system.
(ii) Calculate the mechanical advantage of the system. (iii) Calculate the efficiency of the pulley system.
2. A pulley system of velocity ratio 5 is used to lift a load of 500N. The effort needed is found to be 200N. Draw the arrangement of the above system,
and calculate the efficiency of the system.
3. A man uses a block and tackle pulley system to raise a load of 720N through a distance of 10m using an effort of 200N. If the pulley system has a
velocity ratio of 5, find the efficiency and energy wasted.
4. A block and tackle pulley system with a velocity ratio of 5 and 60% efficient is used to lift a load of 60kg through a vertical height of 2m. Calculate
the effort that must be applied on the system. If the weight of the pulley system is 4N, calculate efficiency of the system.
5. A block and tackle pulley system is used to lift a mass of 2000kg. If this machine has a velocity ratio of 5 and an efficiency of 80%. Sketch a possible
arrangement of the pulleys, calculate the mechanical advantage of the system, and determine the effort applied.
6. An effort of 125N is used to lift a load of 500N through a height of 2.5m using a pulley system. If the distance moved by the effort is 15m, calculate
the efficiency of the pulley system.
7. An effort of 50N is required to raise a load of 200N using a pulley system of velocity ratio 5. a) Draw a diagram to show the pulley system.
b) Find the efficiency of the system.
c) Calculate the work wasted when the load is raised through 120cm.
d) Give two reasons why the efficiency of the above pulley is less than 100%.
8. A pulley system of velocity ratio 3 supports a load of 20 N. Given that the tension in each string is 8 N, calculate the efficiency of the pulley system, the distance moved by the effort if the load moves through a distance of 2 m, and the weight of the lower pulley.

Experiment to show the variation of mechanical advantage or efficiency of pulley


system with the load / Experiment to determine efficiency of a pulley system
 A known load (L) is placed on the load pan.
 Known weights are added on the effort pan until the load just begins to rise upwards.
 The total weight (E) on the effort pan is noted and recorded.
 The experiment is repeated with different loads.
 The results are put in a suitable table including values of mechanical advantage and
efficiency.
 A graph of efficiency against load is plotted.
 As the load increases, the efficiency of the pulley system also increases. This is because;
when the load is small, a large effort is used to overcome friction force between moving
parts and lift the weight of the movable block. This leads to a small mechanical
advantage and small efficiency for a small load.
 When the load is increased, the friction force and the weight of the movable block become very small compared to the load. Therefore, a large portion of the effort is used to lift the load while only a small portion of the effort overcomes friction and lifts the weight of the movable block. This leads to a large mechanical advantage and a large efficiency for a large load.

APPLICATIONS OF PULLEY SYSTEMS


One of the most common uses is in elevators, where pulleys help raise and lower the cabin efficiently. This mechanism not only makes it easier to
transport people and goods between floors but also enhances safety and reliability in high-rise buildings.
In construction, pulleys are vital for lifting heavy materials, such as steel beams and concrete blocks. Cranes equipped with pulley systems can move
these loads vertically and horizontally, significantly improving productivity on job sites. Pulleys are found in exercise equipment, allowing users to
perform strength training with adjustable resistance. Everyday applications of pulleys extend to simple tasks, such as raising flagpoles or pulling
curtains. Their ability to change the direction of force makes them invaluable in both industrial and domestic settings, showcasing their versatility
and importance in modern life. They are used at construction sites to lift heavy building materials from the ground, raising (hoisting) flags, lifts and
elevators, cranes for loading and offloading ships at ports, fetching water from underground wells, and drawing stage curtains in theatres.


INCLINED PLANES
Inclined planes are fundamental simple machines characterized by a sloped
surface that facilitates the movement of heavy objects to higher elevations
with reduced effort. When an object is placed on an inclined plane, the force
of gravity acting on it is divided into two components: one parallel to the
slope, causing acceleration down the incline, and another perpendicular to
the slope, which is countered by the normal force. Examples of inclined
planes include: a staircase, raising cows onto a truck using a sloping piece of
wood, and sloping roads in mountains. This division of forces allows for
easier lifting of loads compared to lifting them vertically.

Work done along an inclined plane


Activity: To determine the work done along an inclined plane
Materials: Spring balance, one piece of wood of about 10 cm, a wedge, ruler, trolley/ piece of wood/mass hanger/stone.
Procedures
a) Make an inclined plane by putting a piece of wood on a wedge.
b) Attach the mass hanger to a spring balance (calibrated in newtons).
c) Measure the length of the incline and its height, and record them as d (cm) and h (cm).
d) Pull the spring balance, with the object attached, from the bottom of the incline and note down the force used in pulling, F (N).
e) Change the length from cm to m and find the work done using the formula work = F × d (in joules, J).
f) Using the above skills, approximate the amount of work you do when climbing a slope of 100 m long.

Note: Work done by the applied force is given by Work done = Fd.
The work done against the gravitational force is given by Work done = weight of the object x vertical height. Work = mgh.
If the inclined plane is frictionless: Work done by the applied force = work done against gravity.
In case there is some frictional force opposing the sliding of the object along the plane:
Work done by the applied force > Work done against gravity
Work done against friction = Work done by applied force – work done against gravity

Example
A box of mass 100 kg is pushed by a force of 920 N up an inclined plane of length 10 m. The box is raised through a vertical distance of 6 m.
Determine (i) the work done by the applied force, (ii) the work done against the gravitational force. (iii) the difference in work done. Why do the
answers to (i) and (ii) in part (a) differ?
Solution:
(a) (i) Work done by the applied force = F × d = 920 N × 10 m = 9 200 J
(ii) Work done against gravity = F × d = mg × h = 100 kg × 10 N/kg × 6 m = 6 000 J
(iii) The difference in work done = 9 200 J – 6 000 J = 3 200 J. This work is used to overcome the friction between the box and the surface of the inclined plane. The useful work done is 6 000 J.
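The arithmetic of this inclined-plane example can be checked with the short Python sketch below (illustrative values; g = 10 m/s² as in the text). It separates the work done by the applied force into useful work against gravity and work wasted against friction.

g = 10                         # N/kg

applied_force = 920            # N, push along the plane
slope_length = 10              # m
mass, height = 100, 6          # kg, m

work_applied = applied_force * slope_length    # 9200 J
work_against_gravity = mass * g * height       # 6000 J (useful work)
work_against_friction = work_applied - work_against_gravity

print(work_applied, work_against_gravity, work_against_friction)   # 9200 6000 3200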

The applications of inclined planes are vast and varied.


They are commonly used in everyday scenarios, such as ramps for loading goods into trucks or wheelchairs, and in construction, where they assist in
moving materials to elevated areas. Inclined planes played a crucial role in monumental constructions, such as the Egyptian pyramids, where they
enabled workers to transport heavy stones to great heights with less physical strain.


Activity:
1. A box of mass 50 kg is pushed with a uniform speed by a force of 200 N up an inclined plane of length 20 m to a vertical height for 8 m.
Calculate the: (a) Work done to move the box up the inclined plane. (b) Work done if the box was lifted vertically upwards.
2. A body of mass 85 kg is raised through a vertical height 6 m through an inclined plane with base length of 8m. Calculate the:(a) Slant distance.(b)
Work done by the force 150 N.(c) Work done, if the body was lifted vertically upwards. (d) Work done against friction. (e) Frictional force between
the body and the track.
3. A block of mass 60 kg was raised through a vertical height of 7 m. If the slant height of a frictionless track is 21 m, and the force used to push
the block up the plane is 800 N, calculate the work done in pushing the block.
4. A car engine offers a thrust of 2 500 N to ascend a sloping road 1.1 km long. At the top of the slope, the driver realized that the altitude change was 200 m. If the mass of the car is 1.2 tonnes, calculate: (a) the work done by the car engine, (b) the work done against resistance.

An inclined plane allows a load to be raised using a smaller effort than would be needed to lift it vertically upwards.
Velocity ratio of an inclined plane: velocity ratio = length of plane ÷ vertical height = l/h.
ACTIVITY
1. A brick of weight 20N is lifted through a height of 3m along a smooth inclined
plane of length 15m by applying an effort of 5N. Calculate the efficiency.
2. A woman uses an inclined plane to lift a load of 500N through a vertical distance
of 4 m. The inclined plane makes an angle of 30° to the horizontal. If the efficiency of the inclined plane is 72%, calculate the effort needed to raise the
load.
3. An effort of 50N is used to move a 300N box along an inclined, which rises vertically 1m for every 8m distance along the plane. Find the efficiency
of the inclined plane.
4. A trolley of weight 10N is pulled from the bottom to the top of the inclined plane by a steady force of 2N. If the height and the distance moved by
the force are 2m and 20m respectively, calculate the efficiency of the inclined
plane.

WHEEL AND AXLE


The wheel and axle is a fundamental mechanical device that simplifies
movement by reducing friction. It consists of a circular wheel attached to a
central axle, allowing the wheel to rotate around the axle. This design enables
heavy loads to be transported with minimal effort, as the wheel rolls over
surfaces rather than dragging.

Applications of the wheel and axle are vast and varied.

In transportation, vehicles such as cars, bicycles, and trains utilize this mechanism to facilitate movement. It plays a crucial role in machinery, where
it is used in systems like conveyor belts and pulleys. A modern innovation integrates electric motors into the axle structure, enhancing the efficiency of electric vehicles. The wheel and axle is also applied in screwdrivers, steering wheels in cars, and the windlass used to draw water from wells.


It consists of a wheel of large radius attached to an axle of small radius. The wheel and axles have a common axis of rotation. The effort is applied to
one end of the rope passing over the wheel of radius (R), while the load is applied at the other end of the rope passing over the axle of radius, (r). The
wheel and the axle are circular; therefore, for one complete turn, the effort moves through a distance equal to the circumference of the wheel (2πR), while the load moves through a distance equal to the circumference of the axle (2πr).
Velocity ratio of a wheel and axle: velocity ratio = circumference of wheel ÷ circumference of axle = 2πR/2πr = R/r.
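A minimal Python sketch of the wheel-and-axle relations follows (the function name is invented here): VR = R/r and efficiency = (MA/VR) × 100%. The sample call reproduces question 1 of the activity below (wheel radius 50 cm, axle radius 10 cm, load 400 N, effort 100 N).

def wheel_and_axle(load_n, effort_n, wheel_radius, axle_radius):
    """Return (velocity ratio, mechanical advantage, efficiency %)."""
    vr = wheel_radius / axle_radius      # VR = R / r
    ma = load_n / effort_n
    return vr, ma, 100 * ma / vr

print(wheel_and_axle(400, 100, 50, 10))   # (5.0, 4.0, 80.0)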

ACTIVITY
1. A machine consisting of a wheel of radius 50cm and axle of radius 10cm is used to
lift a load of 400N with an effort of 100N. Calculate the efficiency of the machine.
The figure beside shows a wheel and axle. When an effort of 300 N is applied, a load of 900 N is raised. Calculate the efficiency.
2. A machine consists of a wheel of 40cm and an axle of radius 10cm. If an effort of 20N raises a load of 60N, find the efficiency of the machine.
3. The system below is a wheel and axle of radii 40cm and 4cm respectively.
Assuming that the efficiency of the system is 50%,
Calculate the energy wasted.
4. A wheel and axle machine is constructed from a wheel of diameter 20 cm mounted on an axle of diameter 4 cm. Calculate the velocity ratio of the machine, and the mechanical advantage of the machine if it is 100% efficient. Explain why the actual mechanical advantage of this machine is likely to be less than the value obtained above.
5. A common windlass is used to raise a load of 480 N by an application of an effort of 200 N at right angles to the handle. If the handle has a radius of 33 cm from the axis and the radius of the axle is 11 cm, calculate the velocity ratio and the efficiency of the windlass.

GEARS
A gear is a device consisting of a set of toothed wheels that control the movement (speed) of a machine. Gears are essential mechanical components
designed to transfer rotary motion and torque between shafts. They consist of toothed wheels that interlock, allowing for the efficient transmission of
power. The working principle of gears relies on the meshing of these teeth, which can alter speed, torque, and direction of motion. Different types of
gears, such as spur, bevel, and worm gears, serve specific functions based on their design and application.

The applications of gears are vast and varied.


In the automotive industry, gears are crucial for transmissions, differentials, and steering systems,
enabling vehicles to operate smoothly. Industrial machinery also heavily relies on gears for functions
like conveyor systems, pumps, and mixers, where precise control of speed and torque is necessary.
Additionally, gears are utilized in household appliances, robotics, and aerospace technology,
showcasing their versatility across multiple sectors. In daily life, gears are applied in clocks, bicycles, motorcycles, watches, and motors.


In gears, the effort is applied to the shaft of the small gear (wheel), called the driving wheel, and the load is applied to the shaft of the large gear (wheel), called the driven wheel. The more teeth a gear has, the lower its speed of rotation; the fewer teeth it has, the higher its speed of rotation. Therefore, the fastest gear is the driving wheel with the smallest number of teeth.
Velocity ratio of a gear system
Velocity ratio of a gear system = number of teeth on the driven gear ÷ number of teeth on the driving gear = driven ÷ driving.
If two simple machines are combined together, the overall velocity ratio is equal to the product of the individual velocity ratios of the two machines.
For the gear pair shown: velocity ratio = driven gear teeth ÷ driving gear teeth = 36 ÷ 9 = 4.
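A similar hedged sketch for gears follows, assuming the 36-tooth/9-tooth pair from the figure and an illustrative efficiency of 85%; the helper function below is hypothetical, not from the text.

```python
# Illustrative sketch: VR of a gear system = driven teeth / driving teeth,
# and MA = efficiency x VR. The 85% figure is an assumed value for illustration.

def gear_system(driven_teeth, driving_teeth, efficiency_percent):
    velocity_ratio = driven_teeth / driving_teeth
    mechanical_advantage = (efficiency_percent / 100) * velocity_ratio
    return velocity_ratio, mechanical_advantage

# Gear pair from the figure: driven gear 36 teeth, driving gear 9 teeth
print(gear_system(36, 9, 85))   # (4.0, 3.4)
```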

Activity (Work out in groups)


1. A driving wheel of 25 teeth interlocks with another wheel of 100 teeth. The gear system has an efficiency of 85%. Calculate the velocity ratio, and
Mechanical advantage of the system.
2. A bicycle has 120 teeth in the driven gears and 40 teeth in the driving gears. Calculate the velocity ratio and mechanical advantage if the bicycle
is 80% efficient.
3. In a gear system, the driven wheel has 40 teeth and the driving wheel has 10 teeth. The system is used to carry a load of 300N when an effort of
100N is applied. Determine the velocity ratio and the efficiency.
4. A bicycle has a chain wheel with 32 teeth, and the driven wheel has 80 teeth. If the efficiency is 88%, find the velocity ratio and Mechanical
advantage.
5. Two gear wheels A and B, with 20 teeth and 10 teeth respectively, are meshed together such that an effort of 160N applied to one wheel raises a load of 400N on the other wheel. If wheel B drives A, calculate the velocity ratio of the system and the efficiency.

SCREWS
Screws are essential fasteners characterized by their external threads,
which allow them to align and hold multiple components securely.
The working principle of a screw involves converting rotational
motion into linear motion, enabling it to penetrate materials and
create a strong bond. This mechanism provides greater holding power
compared to nails, making screws ideal for applications requiring
disassembly or adjustments.
Various types of screws cater to specific materials and functions. For instance, wood screws are designed for fastening wood, while concrete screws
are tailored for securing objects to concrete surfaces. Specialized screws like lag screws are used in heavy-duty applications, such as fixing structural
components in construction. Screws find applications across diverse industries, including automotive, aerospace, and electronics. They are crucial in
assembling machinery, securing components like gears and bearings, and even in furniture construction.


Pitch of a screw: This is the distance between any two successive threads of a screw.
In order to use a screw, a screw driver or brace or screw jack is used to drive screws in and out of the
material. An effort is applied on the handles of those devices above to drive the screw (load) in and out of the
material.
When the handle moves through one complete turn (complete circular path), the screw enters or leaves the
wood through a distance equal to the pitch of the screw.
Distance moved by the effort in one complete turn is equal to the circumference of a circle described by the
handle, where radius, R of the circle is equal to the length of the lever arm. Distance moved by the load
(screw) in one complete turn is equal to the pitch of the screw (Pitch).
𝑐𝑖𝑟𝑐𝑢𝑚𝑓𝑒𝑟𝑒𝑛𝑐𝑒 2𝜋𝑅
𝑉𝑒𝑙𝑜𝑐𝑖𝑡𝑦 𝑟𝑎𝑡𝑖𝑜 𝑜𝑓 𝑎 𝑠𝑐𝑟𝑒𝑤 = =
𝑝𝑖𝑡𝑐ℎ 𝑝𝑖𝑡𝑐ℎ
NOTE: The velocity ratio of a screw is always very large because the length of the handle is much greater than the pitch of the screw. The efficiency is always very low because there is a great deal of friction between the threads. This friction, however, helps screws to hold materials firmly together.
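The screw relation above can be illustrated with a short, assumed Python helper; the lever-arm and pitch values are taken from question 1 of the activity below.

```python
import math

# Illustrative sketch: velocity ratio of a screw = 2*pi*R / pitch,
# where R is the length of the lever arm (values from question 1 below).

def screw_velocity_ratio(lever_arm, pitch):
    return (2 * math.pi * lever_arm) / pitch

print(round(screw_velocity_ratio(lever_arm=56, pitch=4), 1))   # about 88.0
```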

ACTIVITY

1. In a screw jack, the length of the lever arm is 56cm and a pitch of 4cm. It is used to lift a load. Calculate its velocity ratio.
2. A screw of pitch 5cm is used to lift a load of 890.8N in a car jack. The lever makes a circle of circumference 10cm and has an efficiency of 85%.
Calculate the velocity ratio of screw, and Mechanical advantage of screw.
3. A screw has a pitch of 5mm. If an effort of 30N is rotated through one turn of radius 50cm to lift a load of 750N, find the efficiency.
4. A screw with a lever arm of 56 cm has two successive threads which are 2.5 mm apart. It is used to lift a load of 800 N. If it is 25% efficient, calculate the mechanical advantage of the screw and the velocity ratio.
5. The handle of a screw jack is 14 cm long. The screw jack is used to drive a screw of pitch 20 cm. If an effort of 5 N is applied on the jack to move a screw of 15 N, calculate the efficiency.
6. A screw has 6 successive threads and describes a circular path of diameter 0.28 mm when a screw driver is attached to it. Determine the velocity ratio of the machine if the distance between the 6 threads is 0.12 mm.
7. The pitch of a screw jack is 2.5 mm. With a lever arm 56 cm long, the jack is used to lift a car of mass 790 kg. If the screw jack is 75% efficient, determine the velocity ratio and the effort applied.
8. The pitch of a bolt is 1 mm. To tighten the bolt, the mechanic uses a spanner with a long arm of length 80 cm. Calculate the velocity ratio of the spanner.
9. A screw jack is found to be 70% efficient. If an effort of 20 N is used to lift a vehicle of 5000 N and the pitch of the screw is 2 mm, what is the length of the lever arm?
10. A screw of pitch 2.5 cm is used to raise a load of 200 kg when an effort of 50 N is applied to the screw arm of length 20 cm. Calculate the efficiency.
11. A screw jack of pitch 2.5 mm is operated by a force of 100 N acting at a distance of 7 cm from the axis about which the handle rotates and lifts a car weighing 792 kg. Calculate its efficiency.


HYDRAULIC PRESS OR LIFT


A hydraulic press operates on Pascal's principle, which states that pressure applied to a confined fluid is transmitted undiminished throughout the
fluid. This mechanism allows a small force applied on a smaller piston to generate a much larger force on a larger piston, enabling the press to perform
heavy-duty tasks. The hydraulic system consists of a pump, cylinders, and fluid, which work together to amplify force efficiently. These systems utilize
hydraulic fluid to transfer power, enabling machinery to perform various tasks efficiently. The components include pumps, valves, actuators, and
reservoirs, which work together to convert mechanical energy into hydraulic energy and vice versa. In mobile machinery, such as construction
equipment and agricultural vehicles, hydraulics provide the necessary force for lifting, digging, and moving materials. Industries like automotive and
aerospace rely on hydraulic systems for precise control in braking and flight control mechanisms. Recent trends indicate a shift towards smart hydraulic
systems, integrating
advanced technologies for improved efficiency and performance. In manufacturing, they are used for metal forming, stamping, and molding processes,
allowing for precise shaping of materials. The automotive industry utilizes hydraulic lifts for vehicle maintenance and repairs, providing easy access
to undercarriages. Hydraulic presses are employed in recycling operations to compact materials, optimizing space and facilitating transportation.

OPERATION OF THE SYSTEM


It’s a device used to lift a large load using a small effort.
Velocity ratio of a hydraulic press = area of the output (load) piston ÷ area of the input (effort) piston = (diameter of output piston ÷ diameter of input piston)².
Let d1 be the distance moved by the small piston and A1 its area, and let d2 be the distance moved by the large piston and A2 its area. When an effort is applied to the small piston, the volume of liquid pushed down by the small piston equals the volume of liquid that lifts the large piston and its load, so A1d1 = A2d2. Since the pistons are circular, their areas are those of circles. This means that the velocity ratio depends on the relative sizes of the pistons: a larger output piston area compared with the input piston area results in a smaller movement of the output piston, but a higher mechanical advantage.
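As a hedged sketch of these relations, the short Python helpers below (hypothetical functions, not from the text) compute the velocity ratio from the piston diameters and the efficiency from the load and effort of question 2 in the activity that follows.

```python
# Illustrative sketch: for a hydraulic press the velocity ratio equals the area
# ratio of the pistons, i.e. VR = (output diameter / input diameter)^2, and
# efficiency = MA / VR x 100%.

def hydraulic_press_vr(d_in, d_out):
    return (d_out / d_in) ** 2

def efficiency(load, effort, velocity_ratio):
    mechanical_advantage = load / effort
    return mechanical_advantage / velocity_ratio * 100

# Question 2 of the activity below: cylinders of diameter 10 cm and 100 cm,
# effort 20 N lifting a load of 400 N
vr = hydraulic_press_vr(d_in=10, d_out=100)
print(vr, efficiency(400, 20, vr))   # 100.0 and 20.0 %
```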

ACTIVITY

1. The radius of the effort piston of a hydraulic lift is 1.4 cm while that of the load piston is 7.0 cm. This machine is used to raise a load of 1200 N. Given that the machine is 80% efficient, calculate the velocity ratio and the effort applied.
2. A hydraulic press is used to lift 400 N using an effort of 20 N. The diameter of the large cylinder is 100 cm and the diameter of the small cylinder is 10 cm. Find the efficiency.
3. A hydraulic machine has a ram cylinder (large cylinder) of diameter 30 cm and a pump cylinder (small cylinder) of diameter 2 cm. If the effort applied to the small piston is 70 N and the efficiency of the machine is 80%, calculate the velocity ratio and the load lifted.
4. A hydraulic machine has a large cylinder of diameter 30 cm and a small cylinder of diameter 1 cm. Given that the machine is 80% efficient and that the effort applied on the small piston is 50 N, calculate the maximum load that can be raised.


5. The area of the effort piston of a hydraulic lift is 56 cm² while that of the load piston is 224 cm². This machine is used to raise a load of 300 kg through a height of 2.5 mm. Given that the machine is 75% efficient, calculate the velocity ratio, the effort applied, and the distance moved by the effort.

2.2 TURNING EFFECT OF FORCES, CENTRE OF GRAVITY, AND STABILITY


Learning Outcomes
a) Understand the turning effect of forces and its applications (u, s,v/a)
b) Understand and apply the concept of centre of gravity (u, s,v/a)

The turning effect of forces, also known as torque, is a fundamental concept in physics that describes how forces can cause an object to rotate around
an axis. When a force is applied at a distance from the pivot point, it generates a turning effect proportional to both the magnitude of the force and
the distance from the pivot, known as the lever arm.
The turning effect of a force, often referred to as torque or moment of
force, is a measure of how effectively a force can cause an object to
rotate around an axis or pivot point.

Definition of Torque
Torque (τ) is defined mathematically as: 𝜏 = 𝐹 × 𝑑 × 𝑠𝑖𝑛(𝜃)
Where: F is the magnitude of the force applied, d is the perpendicular distance from the line of action of the force to the axis of rotation (this is also
known as the lever arm or moment arm), and θ (theta) is the angle between the force vector and the line from the pivot to the point where the force
is applied.

Moment of a Force (Torque):


The moment (or torque) is calculated using the formula:
Moment (torque) = force × perpendicular distance from the pivot, i.e. τ = F × d,
where τ is the moment or torque (measured in Newton-meters, Nm), F is the force (measured in Newtons, N), and d is the perpendicular distance from the pivot (measured in meters, m). If the force is applied directly along the lever arm (θ = 0° or 180°), no torque is produced because sin(0°) or sin(180°) equals zero.
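A minimal sketch of the torque formula follows, with assumed force, distance and angle values chosen purely for illustration.

```python
import math

# Illustrative sketch of the torque formula above: tau = F x d x sin(theta).
# The force and distance values are assumed, not taken from the text.

def torque(force_N, distance_m, angle_deg=90):
    return force_N * distance_m * math.sin(math.radians(angle_deg))

print(torque(50, 0.30))        # 15.0 Nm: force perpendicular to the lever arm
print(torque(50, 0.30, 0))     # 0.0 Nm: force along the lever arm produces no torque
```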

Terminologies
Direction: Torque is a vector quantity, meaning it has both magnitude and direction. The direction of torque is perpendicular to both the force vector
and the lever arm, following the right-hand rule for vectors.


Magnitude: The magnitude of torque depends not only on the force applied but also on how far from the pivot point this force is applied and at what
angle it acts.
Lever Arm: The lever arm is crucial because it determines how much of the force is actually effective in causing rotation. For instance, pushing on
the end of a door far from the hinges will be more effective than pushing near the hinges.
Equilibrium: For an object to be in rotational equilibrium, the sum of all torques acting on it must be zero. This principle is used in static equilibrium
problems where objects are not accelerating or rotating.

Practical Examples:
Wrench on a Nut: When you use a wrench to tighten a nut, the force you apply at the end of the wrench handle creates a larger torque because the
lever arm is longer, making it easier to turn the nut.
Seesaw: On a seesaw, if one person is heavier but closer to the pivot, they might not go down if the lighter person is further away, balancing the
torque on both sides.
Doors: Pushing on a door closer to the hinge will require more force to move it compared to pushing at the outer edge where the torque is maximized
due to the longer lever arm.
Opening a Door: When you push a door, the door rotates around its hinge (the pivot). The further from the hinge you apply the force, the easier it
is to open the door, because the turning effect (moment) increases.

Factors affecting the Turning Effect of a Force


Magnitude of the Force: A larger force will produce a greater turning effect.
Distance from the Pivot: The farther the force is applied from the pivot, the
greater the moment.
Direction of the Force: For maximum turning effect, the force must be applied
perpendicular to the object. If it's not, only the perpendicular component of the
force contributes to the turning effect.
This concept is crucial in mechanical systems, where controlling torque is essential for the operation of levers, gears, and other rotational systems.

Moment of a force about a point is the product of the force and the perpendicular distance of its line of action from the pivot.
Its S.I. unit is the Nm.
The Principle of Moments, also known as the Law of Moments, is a fundamental concept in mechanics that states:
For an object to be in equilibrium, the sum of the clockwise moments about a pivot must equal the sum of the counterclockwise
moments about the same pivot.
In simple terms, this means that if an object is balanced (i.e., not rotating), the turning effects of the forces acting in one direction (clockwise) must
be exactly balanced by the turning effects of the forces acting in the opposite direction (counterclockwise).

Concepts
Moment of a Force: the moment is the turning effect of a force. It is given by the formula: Moment=Force x Perpendicular Distance from the Pivot
The unit of moment is Newton-meters (Nm).
Clockwise Moment: When a force causes an object to rotate in the direction of a clock's hands, it is a clockwise moment.
Counterclockwise Moment: When a force causes an object to rotate in the opposite direction to a clock's hands, it is a counterclockwise moment.


Principle of Moments Formula:


For an object to be in equilibrium (not rotating), the sum of the moments in the clockwise direction must equal the sum of the moments in the
counterclockwise direction.
Mathematically, ∑Clockwise Moments=∑Counterclockwise Moments
F1 × d1 + F2 × d2 + ⋯ = F3 × d3 + F4 × d4 + ⋯
where F1, F2, … are forces causing clockwise rotation, d1, d2, … are their corresponding perpendicular distances from the pivot, F3, F4, … are forces causing counterclockwise rotation, and d3, d4, … are their corresponding perpendicular distances from the pivot.

Application of the Principle of Moments


Seesaw: On a seesaw, two children sit on either side of the pivot (the fulcrum). If one child is heavier, they will sit closer to the pivot, while the
lighter child will sit further away to balance the seesaw. The clockwise moment from the heavier child is balanced by the counterclockwise moment
from the lighter child. This keeps the seesaw level. If the moments are not equal, the seesaw will tip toward the side with the greater moment.
Lever: When using a lever to lift a heavy object, the effort force applied on one side must create a moment equal to the moment created by the weight
of the object on the other side for the system to balance. This explains how levers can be used to lift heavy loads with smaller forces, by increasing
the distance from the pivot where the force is applied.
Balancing Beams: In mechanical structures, such as bridges or cranes, the principle of moments is used to ensure that all forces acting on the
structure are balanced to prevent rotation or collapse.

Example
Consider a seesaw where a 60 N child sits 2 m from the pivot on one side, and a 40 N child sits on the other side. To balance the seesaw, we can find
the distance the second child should sit from the pivot.

Using the principle of moments: Clockwise Moment=Counterclockwise Moment


For the 60 N child: 60 N×2 m=120 Nm
Let d be the distance for the 40 N child: 40 N×d=120 Nm
Solving for d: d = 120 Nm ÷ 40 N = 3 m. So the 40 N child should sit 3 meters from the pivot to balance the seesaw.
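The seesaw calculation above can be reproduced with a small illustrative helper (a hypothetical function, not part of the text).

```python
# Illustrative check of the seesaw example above using the principle of moments:
# weight_1 x distance_1 = weight_2 x distance_2.

def balancing_distance(weight_1, distance_1, weight_2):
    """Distance at which weight_2 must sit to balance weight_1 placed at distance_1."""
    return (weight_1 * distance_1) / weight_2

print(balancing_distance(60, 2, 40))   # 3.0 m, as found above
```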

Conditions for Equilibrium:


There are two conditions for an object to be in static equilibrium:
Translational Equilibrium: The net force acting on the object must be zero, meaning there is no linear motion.
Rotational Equilibrium: The sum of the clockwise moments must equal the sum of the counterclockwise moments, ensuring there is no rotation.
Importance of the Principle of Moments:
Engineering and Construction: Ensures structures like bridges, cranes, and buildings are stable and balanced.
Mechanical Systems: Helps in designing levers, gears, and other rotating systems for effective functioning.
Daily Life: From using wrenches to balance scales, the principle is applied in various practical situations.
In summary, the Principle of Moments governs rotational equilibrium, ensuring that when forces act on a system at different distances from a pivot,
the system remains balanced if the clockwise and counterclockwise turning effects (moments) are equal.


Examples: 1. Forces of 8N, PN and 4N act on a body as shown;

Find the value of P if the system is in equilibrium.


Sum of anticlockwise moments = sum of clockwise moments
4 × 40 + P × 20 = 8 × 30, so 160 + 20P = 240 and P = 4 N.
2. The forces below act on a plank as shown. If the body is in equilibrium, find the distance x.
Anticlockwise moments = clockwise moments
6 × 2 + 4 × 1 = 8x, so 16 = 8x and x = 2 m.

Center of Gravity
The center of gravity (CoG) is a crucial concept in physics, representing the point where the total weight of a body is considered to act. It is influenced
by the distribution of mass within the object and is essential for understanding balance and stability.
According to Newton's law of universal gravitation, every mass attracts every other mass, which means that the CoG plays a significant role in how
objects interact under gravitational forces. In practical applications, such as walking,
the center of mass (CoM) shifts dynamically, allowing for forward motion. This
movement is vital for maintaining balance and coordination, particularly in clinical
settings where understanding these mechanics can aid in rehabilitation. Moreover,
the CoG is not static; it can change based on the object's shape and mass distribution.

The center of gravity (CG) is the point in a body or system where the
entire weight of the object can be considered to be concentrated for
purposes of analysis.
When a force such as gravity acts on an object, it is as if all the mass of the object is located at this single point. Understanding the center of gravity
is crucial for maintaining balance and stability in structures and moving objects.

Concepts of Center of Gravity


Center of Gravity: The point through which the resultant of the gravitational forces on an object acts. For objects in uniform gravity fields, this
coincides with the center of mass.
The center of gravity is the point at which the weight of an object is evenly distributed in all directions, and around which the object balances in any
orientation. It is the theoretical point where the gravitational force acts on an object.
Center of Mass: The point where the entire mass of the body can be considered to be concentrated for purposes of analyzing its motion. In a uniform
gravitational field, the center of mass and center of gravity are the same point.
Properties of CoG bodies
Symmetry: In symmetric objects like spheres or cubes with uniform density, the CoG is at the geometric center.
Stability: The position of the CoG affects an object's stability. Lowering the CoG (or moving it closer to the base of support) generally increases
stability.


Motion: When an object moves, rotates, or is acted upon by forces, the CoG represents the point about which motion or forces can be simplified. For
instance, in projectile motion, the trajectory of the CoG follows a parabolic path.

Practical Examples:
 Human Body: The CoG in humans changes with posture. Standing straight, it's roughly at the navel, but when bending, it shifts. Athletes
use knowledge of CoG for balance in sports like gymnastics or diving.
 Vehicles: In automotive design, engineers consider the CoG to ensure stability, particularly in turns, to prevent rollovers. Lowering the CoG
(e.g., by lowering the engine or battery placement in electric cars) improves handling.
 Aircraft: The balance of an aircraft around its CoG is critical for flight control. Loading must be done so that the CoG remains within the
aircraft's center of gravity envelope for safe and efficient flight.
 Cranes and Lifting: When lifting heavy loads, ensuring the CoG of the load and machinery is aligned or controlled is vital to prevent tipping.

Experimental Determination
Physical Balance: Objects can be balanced on a point or line; the point where they balance is their CoG.
Plumb Line Method: For irregular shapes, suspend the object from various points, and the CoG will be where the plumb lines intersect.
Density Mapping: With more complex or unevenly distributed mass, one might use imaging or computational methods to estimate density distribution
and hence CoG.
Importance
 Design and Engineering: Understanding CoG is crucial for the design of structures, vehicles, and everyday objects to ensure they are stable
and functional.
 Physics and Dynamics: It's essential for predicting and analyzing how systems will behave under various forces or in different environments.
 Safety: In safety assessments, from car crash tests to structural integrity of buildings, knowing the CoG helps in predicting outcomes of
forces applied to systems.

How to Find the Center of Gravity:


For simple shapes with uniform mass, the center of gravity can be found geometrically. For example:
Rectangle or Cube: The center of gravity is located at the intersection of the diagonals.
Circle or Sphere: The center of gravity is at the geometric center.
For more complex or irregular objects, the center of gravity can be determined experimentally by balancing the object at different points or through
calculation, often using integration for continuous mass distributions.
Balance and Stability:
If the center of gravity is low and close to the base of an object, the object tends to be more stable.
If the center of gravity is high or located away from the base, the object is less stable and more prone to tipping over.
Practical Examples of Center of Gravity
Balancing Objects: For a seesaw to balance, the center of gravity of the entire system must be located directly over the pivot point.
Vehicles: In cars and trucks, a lower center of gravity improves stability and reduces the risk of rollover. That’s why race cars are designed with low
profiles to keep the center of gravity close to the ground.
Athletics and Sports: In gymnastics, athletes must keep their center of gravity within their base of support to maintain balance during movements.
In sports like high jump, athletes manipulate their body position to raise their center of gravity for better jumps.


Construction: Cranes and tall buildings need careful design to ensure that their center of gravity remains within their base of support, preventing
them from toppling over.
Application of the Center of Gravity in Physics and Engineering:
Aviation: The center of gravity is critical in aircraft design. The position of the CG affects an airplane's stability and control. If it’s too far forward or
backward, the plane could become unstable or difficult to control.
Robotics: Robots are designed with low centers of gravity to avoid tipping over when moving or carrying objects.
Construction Equipment: Cranes and heavy lifting machines are designed with counterweights to adjust the center of gravity for stability during
operation.
The center of gravity is the point where the total weight of an object appears to act.
For uniform objects, it is typically located at the geometric center, while for irregular objects; it is closer to the heavier side.
Stability and balance are directly related to the position of the center of gravity relative to the base of support.
A lower center of gravity typically increases stability, while a higher one decreases it.

Experimental Method to determine the Center of Gravity of an Irregular Object


Materials Needed:
The irregular object (e.g., a cardboard cut-out), A piece of string, A plumb line (a string with a weight at the end), A pin or nail, and a marker
Procedure:
 Suspend the irregular object from any point along its edge using a pin or nail. This point of suspension should be arbitrary, and it will act
as a temporary pivot point.
 Hang a plumb line from the same pin or nail that is used to suspend the object. The plumb line will point straight down due to gravity.
 Once the object is hanging freely and the plumb line is steady, use the marker to draw a line along the path of the plumb line on the surface
of the object. This line represents one potential path through the center of gravity.
 Choose another point along the edge of the object and repeat the process. Suspend the object from this new point, hang the plumb line, and
draw another line along the plumb line.
 The point where the two lines intersect is the center of gravity of the irregular object. This intersection point is where the entire weight of
the object is effectively concentrated.
 To improve accuracy, you can suspend the object from a third point and draw a third line. All lines should intersect at the same point,
confirming the location of the center of gravity.
The stability of a body refers to its ability to maintain equilibrium and resist disturbances that might cause it to tip, fall, or
rotate.

Concepts of Stability and equilibrium


Equilibrium: A body is in equilibrium when the sum of the forces and the sum of the moments acting on it are zero.
Stability and equilibrium are fundamental concepts that describe the behavior of systems when subjected to disturbances. A system is in stable
equilibrium when, if displaced from its equilibrium position, it experiences a net force or torque that acts in the opposite
direction of the displacement. This tendency to return to the original state signifies stability, much like a ball resting at the bottom of a bowl.
Similarly, unstable equilibrium occurs when a slight disturbance leads the system further away from its original position. For
instance, a ball balanced on top of a hill exemplifies this, as any small push will cause it to roll away.
A stable equilibrium ensures that systems can maintain their state, while unstable equilibrium indicates a propensity for change, highlighting the
dynamic nature of physical systems.


Stable Equilibrium: If the body is slightly disturbed, it returns to its


original position. This usually occurs when the center of gravity is low and
the base of support is wide.
Unstable Equilibrium: If the body is disturbed, it moves further away
from its original position. This occurs when the center of gravity is high or
the base of support is narrow.
Neutral Equilibrium: If the body is disturbed, it remains in its new position. This is typical of objects whose centre of gravity neither rises nor falls when they are displaced, such as a ball resting on a flat, level surface.
Center of Gravity: The location of the center of gravity plays a crucial
role in stability. A lower center of gravity typically increases stability, while
a higher center of gravity decreases it. This is because a lower center of gravity means that the object has a greater resistance to tipping.
Base of Support: The area beneath an object that includes all points of contact with the ground. A wider base of support generally leads to greater
stability. If the center of gravity falls within this base, the object will be stable; if it falls outside, the object will topple.

Factors affecting Stability


Stability refers to an object's ability to maintain its equilibrium when subjected to external forces. Three critical factors affecting stability are the
center of gravity, the base of support, and the line of gravity.
Firstly, the position of the center of gravity plays a significant role; a lower center of gravity enhances stability, making it less likely for the object to
topple.
Secondly, the base of support is crucial; a wider base increases stability, particularly when aligned with the direction of the applied force.
Lastly, the line of gravity, which should ideally fall within the base of support, is essential for maintaining equilibrium.
If the line of gravity extends beyond the base, the object is at risk of tipping over.
When the center of gravity is directly above the center of the base of support, the object is more stable. If it shifts outside this area, the object becomes
unstable.

Distribution of Mass:
How mass is distributed affects stability. For example, a low and wide shape is generally more stable than a tall and narrow shape because the mass
distribution lowers the center of gravity.
Stability Analysis:
Stability Triangle: To visualize stability, consider a triangle formed by the points of contact with the ground. If the center of gravity lies within
this triangle, the object is stable.
Tipping Point: The tipping point is reached when the line of action of the weight (a vertical line passing through the center of gravity) falls outside
the base of support.
Dynamic Stability: In moving objects (like vehicles), stability is affected by factors such as speed, momentum, and the forces acting during
acceleration or deceleration.


Applications of Stability
Engineering and Architecture: Structures like bridges, buildings, and towers must be designed to ensure that their center of gravity remains within
their base of support to prevent collapse.
Vehicle design: Car designs focus on lowering the center of gravity to improve handling and reduce the risk of rollover.
Sports and Physical Activities: Athletes and performers often use techniques to lower their center of gravity to maintain balance during dynamic movements, such as in gymnastics or martial arts.
Robotics and Medicine: Robots are designed with stability in mind to prevent tipping during movement or when carrying loads. Stability also matters in imaging technologies such as MRI and X-rays, which rely on stable magnetic fields and precise measurements to produce accurate diagnostic images, aiding in effective patient treatment.
Stability plays a vital role in renewable energy systems, such as wind turbines and solar panels. Ensuring these systems remain stable under varying
environmental conditions maximizes efficiency and energy output.

Illustration

Determination of mass of a beam or rod or any straight material


Determining the mass of a beam, rod, or any straight material can be accomplished using several methods, depending on the available tools and the
precision required.
Using a Balance Scale:
Method:1
Measure the length of the beam or rod using a ruler or measuring tape.
Use a balance scale to weigh the object directly. Ensure the balance is calibrated and zeroed before use. Record the mass displayed on the scale.
This method is the most straightforward for solid and homogenous materials, giving a direct measurement of mass.
Calculating Mass from Volume and Density:
If the beam or rod is of uniform cross-section and material, you can calculate its mass using the formula: Mass=Volume x Density
Method:2
Measure the dimensions of the beam or rod.
For a cylindrical rod: Volume = πr²h, where r is the radius and h is the height (length).
For a rectangular beam: Volume = width × height × length.
Find the density of the material (usually available in material property tables).
Multiply the volume by the density to get the mass.
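A short illustrative sketch of Method 2 follows, assuming a rod of chosen dimensions and a typical density of about 7800 kg/m³ (roughly that of steel); the function name is an assumption made for illustration.

```python
import math

# Illustrative sketch of Method 2 (mass = volume x density) for a cylindrical rod.
# The rod dimensions and the density (roughly that of steel) are assumed values.

def mass_of_cylindrical_rod(radius_m, length_m, density_kg_per_m3):
    volume = math.pi * radius_m ** 2 * length_m     # V = pi * r^2 * h
    return volume * density_kg_per_m3

print(round(mass_of_cylindrical_rod(0.01, 2.0, 7800), 2))   # about 4.9 kg
```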


Using a Lever Balance


This method is useful for longer beams or rods where it might be difficult to use a standard scale.
Method:3
Set up a lever balance with the beam positioned on the fulcrum (pivot point).
Place known weights on one side of the lever.
Adjust the position of the known weights until the lever is balanced.
Calculate the mass of the beam using the principle of moments. Taking moments about the pivot: mass of beam × distance of the beam's centre of gravity from the pivot = mass of known weights × their distance from the pivot, so mass of beam = (mass of weights × distance of weights from pivot) ÷ (distance of the beam's centre of gravity from the pivot).
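A short illustrative sketch of Method 3 follows, assuming the beam is balanced on the pivot by known masses hung on the other side; the numbers are chosen only for illustration.

```python
# Illustrative sketch of Method 3: taking moments about the pivot,
# mass of beam x distance of its centre of gravity = mass of weights x their distance.
# The numbers below are assumed for illustration.

def beam_mass(known_mass_kg, distance_of_weights_cm, distance_of_cg_cm):
    return (known_mass_kg * distance_of_weights_cm) / distance_of_cg_cm

# 2 kg of known masses 30 cm from the pivot balance a beam whose centre of
# gravity lies 20 cm from the pivot on the other side
print(beam_mass(2.0, 30, 20))   # 3.0 kg
```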

Using Water Displacement Method (for Irregular Shapes):


If the beam or rod has an irregular shape, you can measure its mass based on its volume via the water displacement method.
Method:4
Fill a graduated cylinder with water and record the initial volume.
Submerge the beam or rod in the water and measure the new water level.
The volume of water displaced equals the volume of the beam or rod.
Use the density of the material to calculate the mass using:
Mass=Volume displaced x density

Activity
A uniform beam 5 m long and weighing 10 kg is carried by two men, each 1 m from either end of the beam. A mass of 5 kg rests 2 m from one end. Draw a diagram showing all the forces acting on the beam and determine the reactions of the men on the beam.


2.3 PRESSURE IN SOLIDS AND FLUIDS


Learning Outcomes
a) Understand that pressure is the result of a force applied over an area (u,s)
b) Understand the effect of depth on the pressure in a fluid and the implications of this (u,s)
c) Understand the nature of the atmosphere and how atmospheric pressure is measured (u,s)
d) Know the structure of the atmosphere and the significance of the different layers (k, u,v/a)
e) Understand the use of the Bernoulli effect in devices like aerofoils and Bunsen burner jets(u)
f) Understand the concept of sinking and flotation in terms of forces acting on a body submerged in a fluid(u)
g) Understand and apply the Archimedes’ principle in different situations (u, s,v/a)

Pressure in solids
Pressure in solids is defined as the force exerted normally or perpendicularly per unit area,
represented by the formula 𝑃 = 𝐹/𝐴, where P is pressure in pascals, F is the force in
newtons, and A is the area in square meters.
When a force is applied to a solid, it creates stress on the material, which can lead to
deformation or failure if the pressure exceeds the material's strength. In practical applications,
pressure in solids is significant in various fields, including engineering and construction. For
instance, when designing structures, engineers must calculate the pressure exerted by loads to
ensure stability and safety.

Pressure is defined as force acting normally per unit area.


The SI unit is Nm⁻² (newton per square metre), also called the pascal (Pa). Other units: the kilopascal (kPa) or kilonewton per square metre.
Note: The pressure increases when the surface area is decreased. This can be demonstrated using a needle and a nail, a sharp panga against a blunt panga, a high-heeled shoe against gumboots, a bicycle tire against a tractor tire, etc. When the same force is applied at the top of the needle and the nail, one feels more pain from the needle than from the nail. This is because the surface area of the tip of the needle is smaller, so the pressure is higher. The same reasoning, applied in reverse, explains why a tractor, whose wide tires spread its weight over a large area and so exert a low pressure, can move through a muddy area more easily than a bicycle.
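A minimal sketch of P = F/A follows, using rough assumed tip areas for a needle and a nail to illustrate why the needle hurts more.

```python
# Illustrative sketch of P = F/A: the same force on a smaller area gives a much
# larger pressure. The tip areas below are rough assumed values.

def pressure(force_N, area_m2):
    return force_N / area_m2          # pressure in N/m^2 (Pa)

same_force = 10                            # 10 N applied to a needle tip and to a nail tip
print(round(pressure(same_force, 1e-8)))   # needle tip ~1e-8 m^2 -> about 1 000 000 000 Pa
print(round(pressure(same_force, 1e-6)))   # nail tip   ~1e-6 m^2 -> about    10 000 000 Pa
```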

Activity (Work in groups)


1. A car piston exerts a force of 200 N on a cross-sectional area of 40 cm². Find the pressure exerted by the piston.
2. The pressure exerted on a foot pedal of cross-sectional area 5 cm² is ______. Calculate the force.
3. Explain why dams and building foundations have wider bases than tops.
4. Explain why big carriage trailers have small tires at the back and big tires in front.

Minimum and maximum pressure


Pressure is minimum when area is maximum, and on the other hand, pressure is maximum when area is minimum.
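A small illustrative sketch for the activity below, assuming the box rests flat on one of its faces; the helper function is hypothetical.

```python
# Illustrative sketch for the activity below: a box exerts minimum pressure when
# resting on its largest face and maximum pressure when resting on its smallest face.

def min_max_pressure(weight_N, dims_m):
    a, b, c = dims_m
    face_areas = [a * b, b * c, a * c]
    return weight_N / max(face_areas), weight_N / min(face_areas)

# Question (a) below: box 5 m x 1 m x 2 m weighing 60 N
print(min_max_pressure(60, (5, 1, 2)))   # (6.0, 30.0) N/m^2
```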

Activity
a). A box measures 5 m by 1 m by 2 m and has a weight of 60 N while resting on a surface. What is the minimum pressure it exerts?


b). A box of dimensions of 6m x 2m x 4m exerts its weight of 400N on the floor. Determine its maximum pressure, minimum pressure, and density

PRESSURE IN LIQUIDS
Pressure in liquids is a fundamental concept in physics, defined as the force exerted per unit area. This pressure arises from the weight of the liquid
above an object, leading to an increase in pressure with depth. The formula for calculating this pressure is given by 𝑃 = ℎ𝜌𝑔, where P is pressure,
ρ (rho) is the fluid density, g is the acceleration due to gravity, and h is the depth of the fluid. As liquid particles are in constant motion, they collide
with surfaces, exerting pressure on them.
This phenomenon is crucial in various applications, from understanding buoyancy where objects float or sink based on the pressure differences to
industrial processes that rely on fluid dynamics. In summary, liquid pressure is influenced by depth and density, playing a vital role in both natural
and engineered systems.
Consider a column of liquid of height h above the base of a cylinder, as shown. The pressure on the base, of cross-sectional area A, is due to the weight W of the liquid above it. At a given point the pressure acts equally in all directions and depends on the depth (h) of the liquid and the density (ρ) of the liquid.
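A minimal sketch of P = hρg follows, taking g = 10 m/s² (as elsewhere in this text) and using the values of question 1 in the activity below.

```python
# Illustrative sketch of P = h * rho * g (g taken as 10 m/s^2, as elsewhere in this text).

def liquid_pressure(depth_m, density_kg_per_m3, g=10):
    return depth_m * density_kg_per_m3 * g     # pressure in N/m^2 (Pa)

# Question 1 of the activity below: liquid X of density 800 kg/m^3, depth 400 cm
print(liquid_pressure(4.0, 800))   # 32000 N/m^2
```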

Activity
1. The density of liquid X is 800 kgm⁻³. It was poured into a container to a depth of 400 cm. Calculate the pressure it exerts at the bottom of the container.
2. The tank shown contains mercury and water. The density of mercury is 13600 kgm⁻³ and that of water is 1000 kgm⁻³. Find the total pressure exerted at the bottom.
3. A cylindrical vessel of cross-sectional area 50 cm² contains mercury to a depth of 2 cm. Calculate the pressure that the mercury exerts on the vessel and the weight of mercury in the vessel. (Density of mercury = 13600 kgm⁻³.)

Effect of pressure on fluids


The pressure in a fluid increases with depth due to the weight of the fluid above. This phenomenon is a fundamental principle in fluid mechanics,
where the pressure at a given depth can be calculated using the formula 𝑃 = 𝑃𝑂 + ℎ𝜌𝑔. Here, 𝑃𝑂 is the atmospheric pressure, ρ (rho ) is the
fluid density, ( g ) is the acceleration due to gravity, and ( h ) is the depth. As one descends underwater, the cumulative w eight of the water above
exerts greater pressure on the body. This increase in pressure can lead to
physiological effects, such as ear discomfort or the need for equalization during
diving. Additionally, the increased pressure affects buoyancy, making it harder to
float as depth increases.

Experiment to show that pressure in liquids increases with increase in depth (h)
Three equally sized holes A, B and C are made on a tall can at different depths h 1,
h2 and h3 as shown in the figure above. The holes are blocked with cork and the
can is filled with water. The holes are unblocked and the sizes of water jets noted
Observation


The speed with which water spurts out is greatest for the lowest jet, showing that pressure increases with depth.
NB: Pressure does not depend on the shape or cross-sectional area of the container. This can be illustrated using communicating tubes.

Experiment to show that pressure is independent of the cross-sectional area and shape of the container
The liquid is allowed into the tubes A, B and C as shown in the figure. The liquid reaches
the same height h in all the tubes.
Since the tubes are of different cross sectional area and shape. It follows that pressure
does not depend on shape and cross sectional area.
PASCAL’S PRINCIPLE OR LAW OF LIQUID PRESSURE
The principle of transmission of pressure in liquids, known as Pascal's Principle, states
that any change in pressure applied to an enclosed fluid is transmitted equally
throughout the fluid. This means that if pressure is applied at any point in a confined
liquid, the same pressure change is felt equally at every point within that liquid. Pascal's
Principle is fundamental in understanding how hydraulic systems operate. For instance,
in hydraulic lifts, a small force applied to a small area can create a much larger force
over a larger area, allowing heavy objects to be lifted with minimal effort. This principle
highlights the incompressibility of liquids, which allows for efficient pressure
transmission. As a result, Pascal's Principle is not only crucial in physics but also has
practical applications in various engineering fields, including hydraulics and fluid
mechanics. The principle assumes that all liquids are incompressible.

Experiment to verify the principle of transmission of pressure in liquids


To verify the principle of transmission of pressure in liquids, one can conduct a simple
experiment based on Pascal's Law. This principle states that when pressure is applied to
a confined fluid, it is transmitted equally in all directions. For the experiment, take a
closed container filled with water and attach several small tubes to it at different heights.
When pressure is applied to the water in the container, observe the water level in each
tube.


The water will rise to the same height in all tubes, demonstrating that the pressure is transmitted uniformly throughout the liquid.
This experiment not only illustrates Pascal's Law but also highlights the behavior of fluids under
pressure. It serves as a fundamental concept in hydraulics, where understanding pressure
transmission is crucial for designing systems like hydraulic lifts and brakes.
The piston is moved in such a way that it pushes “the plunger” to compress the liquid. The pressure
caused is transmitted equally throughout the liquid. This can be observed by having all holes pouring
out the liquid at the same rate when the piston is pushed in; hence pressure in liquid is equally
transmitted.

Application of the Pascal’s principle:


Pascal's principle, which states that pressure applied to a confined fluid is transmitted undiminished
throughout the fluid, has numerous practical applications across various fields.
One of the most notable applications is in hydraulic systems, such as hydraulic lifts and brakes.
These systems utilize the principle to amplify force, allowing heavy objects to be lifted with minimal effort, making them essential in automotive and
construction industries.
Another significant application is in the design of hydraulic presses, which are used in
manufacturing processes to shape and mold materials. By applying a small force on a smaller piston,
a much larger force can be exerted on a larger piston, enabling the compression of metals and
plastics efficiently. Pascal's principle is crucial in medical devices like syringes and blood pressure
monitors. In syringes, the principle allows for the easy injection of fluids, while in blood pressure
monitors, it helps measure the pressure exerted by blood on the vessel walls, providing vital health
information. Some machines where the Pascal’s principle are used include; hydraulic car jacks,
shock absorbers (cylinders), Hydraulic car brakes, Hydraulic press, Hydraulic lifts, and Hydraulic
bulldozers, and many others.

HYDRAULIC PRESS/MACHINE
A hydraulic press consists of two connected cylinders of different bores, filled with a liquid or any other incompressible fluid and fitted with pistons as shown in the figure. When a force F is exerted on the liquid via piston A, the pressure produced is transmitted equally throughout the liquid to piston B, which supports a load W. The force created at B raises the load or squeezes a hard substance. Since P1 = P2, it follows that F1/A1 = F2/A2.
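A minimal sketch of this relation follows (the helper function is hypothetical), using the values of question 1 in the activity below.

```python
# Illustrative sketch of the relation above: since P1 = P2, F1/A1 = F2/A2,
# so the force produced at the large piston is F2 = F1 x (A2/A1).

def output_force(effort_N, effort_area, load_area):
    return effort_N * (load_area / effort_area)   # areas in the same units

# Question 1 of the activity below: 10 N on piston A of area 2 m^2, piston B of area 150 m^2
print(output_force(10, 2, 150))   # 750 N
```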

Activity
1. The cross-sectional area of piston A is 2 m² and the force applied at piston A is 10 N. Calculate the force on B, given that its cross-sectional area is 150 m².
2. Calculate the weight B lifted by a piston of area 480 cm² when a force of 20 N is applied on a piston of area 40 cm², as shown below.
3. Calculate the weight W raised by a force of 56 N applied on a small piston of area 14 m². Take the area of the large piston to be 42 m².
4. A force of 32 N applied on a piston of area 8 cm² is used to lift a load W acting on a large area of 640 cm². Determine the value of W.


Hydraulic lift
This is commonly used in garages; it lifts cars so that repairs and service on them
can be done easily underneath the car
A force applied to the small piston, raises the large piston, which lifts the car. One
valve allows the liquid to pass from the small cylinder to the wider one. A second
valve allows more liquid (usually oil) to pass from oil reservoir on the left to the
small cylinder. When one valve is open, the other must be shut.

FORCE PUMP
A force pump is a mechanical device designed to move liquids, particularly water, by
utilizing pressure. It consists of a solid plunger and a foot valve, which work together
to create a flow of liquid. When the plunger is pushed down during the downward
stroke, it forces water out through a side valve, allowing it to be expelled from the
pump. This action is crucial for applications such as draining deep mines or supplying
water to elevated areas. The operation of a force pump relies on the principle of
pressure. Unlike lift pumps, which depend on atmospheric pressure to draw water,
force pumps can push water to greater heights without being limited by air pressure.
This makes them particularly advantageous in scenarios where water needs to be
transported over long distances or to significant elevations.

ATMOSPHERIC PRESSURE
Atmospheric pressure, the weight of the air above us, plays a crucial role in weather patterns and can significantly affect our health. Changes in
barometric pressure can lead to various physical symptoms, including headaches, joint pain, and fatigue. When atmospheric pressure drops, it can
cause the tissues in our bodies to expand, leading to discomfort and pain, particularly in those with pre-existing conditions like arthritis. One notable
phenomenon related to atmospheric pressure is the bomb cyclone, a rapidly intensifying storm that occurs when the pressure drops significantly
within a short time. This drastic change can lead to severe weather conditions, including heavy rainfall and strong winds, impacting daily life and
safety. As we continue to study these changes, we can better prepare for their implications on both health and the environment. The earth is surrounded
by a sea of air called atmosphere. Air has weight therefore it exerts pressure at the surface of the earth. The pressure this air exerts on the earth’s
surface is called atmospheric pressure.

Atmospheric pressure is the pressure exerted by the weight of air on all objects on earth’s surface.
The higher you go, the less dense the atmosphere; therefore atmospheric pressure decreases at high altitude and increases at low altitude.
The value of standard atmospheric pressure is about 101,325 N/m² (Pa).
Experiment to demonstrate the existence of atmospheric pressure;
Crushing can experiment or collapsing can experiment
A metal can, with its tight stopper removed, is heated until the small quantity of water in it boils.
When the steam has driven out all the air, the cork is tightly replaced and the heat removed at the same time.
Cold water is poured over the can. This causes the steam inside to condense reducing air pressure inside the can
The can collapses inwards. This is because the atmospheric pressure outside exceeds the reduced pressure inside the can.


MEASUREMENT OF ATMOSPHERIC PRESSURE


Atmospheric pressure is the force exerted by the weight of air above a specific
point on the Earth's surface. It is a crucial factor in weather patterns and is
measured using various units, including pascals (Pa), millibars (hPa), and
inches of mercury (inHg). At sea level, standard atmospheric pressure is
approximately 101,325 Pa, equivalent to 1,013.25 hPa or 29.92 inHg. The
primary instrument for measuring atmospheric pressure is the barometer.
Traditional mercury barometers utilize a column of mercury to indicate
pressure changes, while modern electronic barometers often employ silicon
capacitive pressure sensors for enhanced accuracy and stability. These devices
help meteorologists monitor weather conditions and predict phenomena such
as storms and cyclones. Atmospheric pressure is essential for various
applications, including aviation, meteorology, and environmental science, as it influences both weather systems and climate patterns. Atmospheric
pressure is measured using an instrument called Barometer.
Types of barometers: 1. Simple barometer 2. Fortin barometer 3. Aneroid barometer
Units of pressure: Nm⁻², Pa, atmospheres

Simple barometer
A simple barometer is made by completely filling a thick walled glass tube of uniform bore about 1m
long with mercury. The tube is tapped from the open side and inverted several times to expel any air
bubbles trapped in mercury. It is inverted over a dish containing mercury as shown in the diagram.
The mercury level falls, leaving a column “h” of about 76 cm. The height “h” gives the atmospheric pressure as 76 cmHg. The empty space created above the mercury in the tube is a vacuum called the Torricellian vacuum.
The vertical height of the mercury will remain constant if the tube is tilted as in (2), provided the top of the tube is not less than 76 cm above the level of mercury in the dish. If it is tilted so that “h” would be less than 76 cm, the mercury completely fills the tube. This shows that the vacuum was a true vacuum and that the column of mercury is supported by atmospheric pressure: atmospheric pressure P₀ = hρg = barometer height × density of the liquid × acceleration due to gravity.

Example
The column of mercury supported by the atmospheric pressure is 76cm. Find column of water that the atmospheric pressure will support in the same
place. Comment on your answer.
P = hρg = 0.76 × 13600 × 10 = 103,360 Nm⁻². At the same place the atmospheric pressure is the same, so for water: P = hρg, giving 103,360 = h × 1000 × 10, so h = 103,360 ÷ 10,000 ≈ 10.34 m.
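A short illustrative sketch that reproduces the worked example above (the function name is an assumption made for illustration):

```python
# Illustrative sketch reproducing the worked example above: converting a 76 cm
# mercury column to pressure with P = h * rho * g (g = 10 m/s^2) and finding the
# equivalent water column.

def column_pressure(height_m, density_kg_per_m3, g=10):
    return height_m * density_kg_per_m3 * g

p_atm = round(column_pressure(0.76, 13600))    # 103360 N/m^2
water_height = p_atm / (1000 * 10)             # height of the equivalent water column
print(p_atm, round(water_height, 2))           # 103360 and about 10.34 m
```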


The answer to the question above explains why water is not used in a barometer: the column would be too long. Water is
not used in barometers primarily due to its low density compared to mercury. The density of water is approximately 1,000 kg/m³, which means that
a water column would need to be over 10 meters tall to balance atmospheric pressure. This impractical height makes water an unsuitable choice for
barometric measurements. Water has a tendency to vaporize under low-pressure conditions, which can lead to inaccuracies in pressure readings.
Unlike mercury, which remains liquid and stable in a vacuum, water can form vapor bubbles that disrupt the measurement process. Lastly, the physical
properties of water, such as its susceptibility to temperature changes and evaporation, further complicate its use in barometers. These factors
collectively make mercury the preferred liquid for barometric instruments, ensuring accurate and reliable atmospheric pressure readings.
The atmosphere is the layer of gases surrounding Earth, held in place by gravity. It plays a crucial role in supporting life by providing oxygen,
regulating temperature, and protecting the planet from harmful solar radiation. The Earth's atmosphere is composed of several layers, each with
distinct characteristics and functions. These layers are primarily differentiated based on temperature gradients and extend from the Earth's surface to
the outer reaches of space.

Structure of the Atmosphere


Earth's atmosphere is a complex structure composed of five distinct layers: the troposphere, stratosphere, mesosphere, thermosphere, and exosphere.
Each layer is characterized by variations in temperature, chemical composition, and density, playing a crucial role in sustaining life on our planet.
The troposphere, the lowest layer, is where weather occurs and where most of the atmosphere's mass is concentrated. Above it lies the stratosphere,
which contains the ozone layer that protects us from harmful ultraviolet radiation. The mesosphere follows, where temperatures decrease with altitude,
and it is here that meteors burn up upon entering the atmosphere. Higher still, the thermosphere experiences a dramatic increase in temperature and
is where the auroras occur. Finally, the exosphere is the outermost layer, gradually fading into space.

The layers of the atmosphere


1. Troposphere: Extends from the surface to about 8-15 km (5-9 miles). Temperature decreases with altitude (by approximately 6.5°C per kilometer). The troposphere is the lowest layer of
the atmosphere, where all weather phenomena occur (clouds, rain, storms, and wind).
It contains about 75% of the atmosphere's mass and 99% of its water vapor. Air
pressure and density are highest at this level.
Significance: Supports life by containing the gases essential for breathing,
especially oxygen and nitrogen. Plays a key role in the hydrological cycle, enabling
weather and precipitation. Turbulent mixing of air occurs here, which helps in the
redistribution of heat and moisture across the planet. Airplanes fly near the top of the
troposphere (the tropopause) to avoid turbulence and storms.

2. Stratosphere:
About 15 to 50 km (9 to 31 miles). Temperature increases with altitude due to the absorption of ultraviolet (UV) radiation by the ozone layer. The stratosphere is
the second layer and is characterized by stable air and very little mixing of gases. It contains the ozone layer, which absorbs and scatters the sun’s
harmful UV radiation.


Significance: The ozone layer plays a critical role in protecting living organisms from dangerous ultraviolet radiation. Since there is very little
turbulence in the stratosphere, it is a favorable region for jet aircraft and high-altitude balloon flights. The temperature increase in this layer prevents
vertical mixing, separating it from the troposphere and ensuring weather stays in the lower layers.

3. Mesosphere: About 50 to 85 km (31 to 53 miles). Temperature decreases with altitude; it is the coldest layer, with temperatures dropping to around -90°C (-
130°F) near the mesopause (the boundary between the mesosphere and thermosphere). The mesosphere is where most meteoroids burn up upon
entering Earth's atmosphere due to increased friction with the particles present.
Significance: The burning of meteors in this layer protects the Earth from impacts. The mesosphere is poorly understood because it is difficult to study: it is too high for weather balloons and too low for satellites. Extremely cold temperatures play a role in the formation of noctilucent clouds, which
are the highest clouds in the atmosphere, made of ice crystals.
4. Thermosphere: About 85 to 600 km (53 to 373 miles). Temperature: Increases significantly with altitude, reaching up to 2,500°C (4,500°F) or
higher. Despite these high temperatures, the thermosphere would not feel hot to humans because the air density is so low. The thermosphere is where
solar activity strongly influences temperature and energy. The lower part of the thermosphere contains the ionosphere, a region filled with charged
particles.
Significance: The ionosphere is crucial for radio communication because it reflects radio waves back to Earth, allowing long-distance communication.
This is also the region where auroras (northern and southern lights) occur, caused by the interaction between solar winds and Earth's magnetic field.
The thermosphere provides protection from harmful solar and cosmic radiation. The International Space Station (ISS) and many low Earth orbit
satellites operate in this region.

5. Exosphere: From about 600 km (373 miles) up to 10,000 km (6,200 miles) or more. The temperature continues to rise but is not easily defined
due to the extremely low density of particles. The exosphere is the outermost layer of the Earth's atmosphere, where atoms and molecules can escape
into space. It gradually fades into the vacuum of space.
Significance: The exosphere serves as the transition zone between Earth's atmosphere and outer space. Spacecraft and satellites orbit the Earth
within this layer, as air resistance is almost nonexistent. It contains very few particles, mostly hydrogen and helium, and is where Earth's atmosphere
thins out into the void of space.
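As a simple illustration of the lapse rate quoted for the troposphere above, the short Python sketch below estimates the temperature near the tropopause. The surface temperature of 25°C and the tropopause height of 12 km are assumed round figures chosen only for illustration.

# Estimate of the temperature near the tropopause, assuming the constant
# lapse rate of 6.5°C per kilometre quoted in the text.
LAPSE_RATE = 6.5          # °C lost per kilometre of altitude
surface_temp = 25.0       # °C, assumed surface temperature (illustrative)
tropopause_height = 12    # km, within the 8-15 km range given above

temp_at_tropopause = surface_temp - LAPSE_RATE * tropopause_height
print(f"Estimated temperature at {tropopause_height} km: {temp_at_tropopause:.1f} °C")
# Prints about -53 °C, which is why the top of the troposphere is very cold.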

Additional Layers and Boundaries


1. Tropopause: The boundary between the troposphere and stratosphere, where the temperature stops decreasing with altitude.
2. Stratopause: The boundary between the stratosphere and mesosphere, where the temperature is highest before it begins to drop again in the
mesosphere.
3. Mesopause: The boundary between the mesosphere and thermosphere, marking the coldest point in Earth's atmosphere.
4. Ionosphere: While not a distinct layer, the ionosphere overlaps the thermosphere and part of the mesosphere. It is an electrically charged layer
that plays a vital role in radio communication, GPS, and satellite signals.
5. Magnetosphere: Though technically not part of the atmosphere, the magnetosphere is the region where Earth’s magnetic field influences charged
particles, protecting the planet from solar winds and cosmic radiation.

Applications of Atmospheric pressure


Used in syringes. When the piston is pulled up, the pressure inside the cylinder decreases, allowing liquid to be drawn in due to the higher
atmospheric pressure outside. This principle is essential in medical and laboratory settings for administering medications and conducting experiments.


Used in straws. When a person sucks on a straw, they reduce the air pressure inside it. The higher atmospheric pressure outside pushes the liquid
up into the straw, allowing for easy drinking. This simple yet effective mechanism demonstrates how atmospheric pressure can facilitate fluid
movement.
Atmospheric pressure influences weather patterns. Low-pressure systems often lead to cloud formation and precipitation, while high-pressure systems
typically bring clear skies.

Other uses of atmospheric pressure include rubber suckers, the bicycle pump, the lift pump, the force pump, the siphon, and the water supply system, among others.

LIFT PUMP
A lift pump, also known as a suction pump, operates on the principle of atmospheric
pressure. When the plunger is pulled upward, it creates a vacuum that opens the lower
valve, allowing water to enter the pump. As the plunger moves up, the upper valve
remains closed, trapping the water inside the pump. This process continues until the
plunger is pushed down, which forces the water out through the upper valve. The lift
pump can raise water to a height where the atmospheric pressure can balance the weight
of the water column. This height is typically around 10 meters, since atmospheric pressure can only support a column of water up to about this limit. If the water source is deeper, the
pump may require additional mechanisms to function effectively. While lift pumps are
efficient for shallow water extraction, they have limitations, such as their inability to lift
water from great depths without additional assistance.
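The 10-metre figure can be checked with a minimal Python sketch, assuming atmospheric pressure of about 1.0 × 10⁵ N/m², water density 1000 kg/m³ and g ≈ 10 m/s² (the rounded values used in the worked examples of this book).

# Maximum height of water that atmospheric pressure can support: P = hρg, so h = P / (ρg)
P_ATM = 1.0e5       # N/m², approximate atmospheric pressure
RHO_WATER = 1000    # kg/m³
G = 10              # m/s², rounded

max_lift = P_ATM / (RHO_WATER * G)
print(f"Maximum theoretical lift of a suction pump: {max_lift:.0f} m")   # about 10 m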

Drinking straw
When drinking through a straw, sucking draws some of the air in the straw into the lungs.
This leaves the space in the straw partially evacuated, so the atmospheric pressure pushing down on the liquid in the container
becomes greater than the pressure of the air remaining in the straw, and the liquid is pushed up the straw.

The siphon
This is used to take liquid out of vessels (e.g. an aquarium or a petrol tank).
How a siphon works
The pressure at A and D is atmospheric; therefore the pressure at E is atmospheric pressure plus the pressure due to the column of water DE. Hence, the water at E can push its way out against atmospheric pressure.
NB: To start the siphon, the tube must be full of liquid and end A must be below the liquid level in the tank.
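As a hedged numerical sketch of this pressure argument, suppose the column DE corresponds to a drop of 0.5 m of water (a hypothetical figure chosen only for illustration).

# Excess pressure at the siphon outlet over atmospheric pressure: ρg × (height of column DE)
RHO_WATER = 1000    # kg/m³
G = 10              # m/s²
h_DE = 0.5          # m, assumed height of the water column DE

excess_pressure = RHO_WATER * G * h_DE
print(f"Excess pressure driving the siphon: {excess_pressure:.0f} N/m²")   # 5000 N/m²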

Applications of siphon principle


Automatic flushing tank: This uses siphon principle. Water drips slowly from a tap into the tank.
The water therefore rises up the tube until it reaches and fills the bend. In the pipe, the siphon
action starts and the tank empties (the water level falls to the end of the tube). The action is then repeated again and again.


Flushing tank of water closet: This also uses the siphon principle. When the chain
or handle is pulled, water is raised to fill the bend in the tube.
The siphon action at once starts and the tank empties.

Water supply system


Water supply in towns often comes from a reservoir on high ground. Water flows from it through a pipe to any tap or storage tank that is below the level of the reservoir.
In very tall buildings it may be necessary to first pump water to a large tank on the roof.
Reservoirs of water for supply and for hydroelectric power stations are often made in mountainous areas.
The dam must be thicker at the bottom than at the top to withstand the large water pressure at the bottom.
Atmospheric pressure at sea level is about 760 mmHg. When you move to the top of a high mountain, the pressure reduces to about 600 mmHg. This shows that pressure reduces with increase in altitude.

Manometer
It is a U-shaped tube containing a liquid such as mercury or water.
One limb is connected to the gas or air cylinder whose pressure P is required.
The second limb is left open to the atmosphere.
Using a metre rule, the difference h in the liquid levels is measured. Since Pressure at B = Pressure at C, the gas pressure is P = H + h (when B is above A) or P = H − h (when B is below A), where H is the atmospheric pressure expressed as a height of the same liquid.

Example: 1. A man blows into one end of a water U-tube manometer until the levels differ by 40.0 cm. If the atmospheric pressure is 1.01 × 10⁵ N/m² and the density of water is 1000 kg/m³, calculate his lung pressure.
Lung pressure = H + hρg = 1.01 × 10⁵ + (0.40 × 1000 × 10) = 105,000 N/m².
Therefore, lung pressure = 105,000 N/m².

2. A manometer contains water. When the tap is opened, the difference in the water levels is 54.4 cm. The height of the mercury column in the barometer was recorded as 76 cm. What is the pressure, in cmHg, at points A, B and C?
Pressure at A = pressure at B = H + h.
The water column is first converted into an equivalent height of mercury: h₁ρ₁g = h₂ρ₂g, so h × 13600 × 10 = 54.4 × 1000 × 10, giving h = 4 cm of mercury.
Therefore, at B, P = H + h = 76 + 4 = 80 cmHg.
3. The difference in pressure between the peak of a mountain and the foot of the mountain is 5.0 × 10⁴ N/m². Given that the density of air is 1.3 kg/m³, calculate the height of the mountain.


Difference in pressure = hρg, so 5.0 × 10⁴ = h × 1.3 × 10, giving h = 3846.15 m ≈ 3.85 km.
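The manometer examples above can also be checked with a short Python sketch; it simply applies P = H + hρg (or its cmHg form) to the same numbers used in Examples 1 and 2.

G = 10   # m/s², rounded as in the worked examples

def gas_pressure_pa(atmospheric_pa, level_difference_m, liquid_density):
    # Absolute gas pressure when the level in the open limb stands higher by level_difference_m
    return atmospheric_pa + level_difference_m * liquid_density * G

# Example 1: water manometer, 40 cm level difference, atmospheric pressure 1.01 × 10⁵ N/m²
print(gas_pressure_pa(1.01e5, 0.40, 1000))     # 105000.0 N/m²

# Example 2: convert the 54.4 cm water column into mercury (density 13600 kg/m³),
# then add the 76 cmHg barometric height.
h_mercury_cm = 54.4 * 1000 / 13600             # ≈ 4.0 cm of mercury
print(76 + h_mercury_cm)                       # ≈ 80.0 cmHg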

Comparison of densities of liquids using Hare’s apparatus


Liquids of different densities are placed in glass pots as shown above. When the gas tap is opened, each liquid rises to a different height, h₁ and h₂. Since both columns are supported by the same reduction in pressure from the gas supply,
pressure due to column 1 = pressure due to column 2, i.e. h₁ρ₁g = h₂ρ₂g, so ρ₁/ρ₂ = h₂/h₁.
The densities of the two liquids are therefore inversely proportional to the heights of their columns.
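A minimal sketch of this comparison, assuming water (density 1000 kg/m³) rises to h₁ = 20 cm in one tube and the test liquid rises to h₂ = 25 cm in the other; both heights are hypothetical illustration values.

# Hare's apparatus: the same pressure reduction supports both columns,
# so h₁ρ₁ = h₂ρ₂ and therefore ρ₂ = ρ₁ × h₁ / h₂.
RHO_WATER = 1000    # kg/m³
h_water = 0.20      # m, height of the water column (assumed)
h_liquid = 0.25     # m, height of the unknown liquid's column (assumed)

rho_liquid = RHO_WATER * h_water / h_liquid
print(f"Density of the liquid: {rho_liquid:.0f} kg/m³")   # 800 kg/m³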

FLUID MOTION: BERNOULLI EFFECT


The Bernoulli effect is a fundamental principle in fluid dynamics that explains how changes in fluid
speed can affect pressure. According to Bernoulli's principle, as the velocity of a fluid increases, its pressure decreases. This phenomenon is crucial
in the functioning of devices like aerofoils and Bunsen burner jets. In aerofoils, the shape of the wing causes air to travel faster over the top surface
than the bottom. This speed difference creates lower pressure above
the wing, generating lift and allowing airplanes to fly. The design of
the aerofoil is essential for optimizing this effect, ensuring efficient
flight. Similarly, in a Bunsen burner, when the gas valve is opened,
gas flows through a narrow opening, increasing its velocity. This
rapid flow decreases the pressure, drawing in air through the side
openings, which mixes with the gas to create a controlled flame.
Thus, the Bernoulli effect plays a vital role in both aviation and
combustion technologies.

Bernoulli’s effect and its applications


Consider an ideal, incompressible fluid moving along a streamline. If the
fluid moves from point 1 to point 2, the sum of pressure energy, kinetic
energy, and potential energy remains constant.
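Written in symbols, with P the pressure, ρ the density, v the speed and h the height above a reference level, Bernoulli's equation for two points 1 and 2 on the same streamline is:
P₁ + ½ρv₁² + ρgh₁ = P₂ + ½ρv₂² + ρgh₂, i.e. P + ½ρv² + ρgh = constant.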

Applications of Bernoulli's Principle:


Bernoulli's principle, which states that an increase in fluid speed results
in a decrease in pressure, has numerous practical applications across
various fields. One of the most notable applications is in aviation, where
it explains how aircraft generate lift. The shape of an airplane wing
causes air to travel faster over the top surface, resulting in lower pressure
above the wing compared to the higher pressure below, thus lifting the plane.
In addition to aviation, Bernoulli's principle is utilized in fluid dynamics to analyze flow behavior in pipes and ducts. Devices like venturimeters,
which measure fluid flow rates, rely on this principle to determine velocity changes as fluid passes through constricted sections. Furthermore, it plays
a crucial role in sports, such as in the design of golf balls and soccer balls, where surface features are optimized to enhance lift and reduce drag.


Overall, Bernoulli's principle is fundamental in engineering, transportation, and sports, demonstrating its wide-ranging impact on technology and
daily life.
Airplane Wings (Lift Force): One of the most famous applications of Bernoulli's effect is in aerodynamics, particularly the generation of lift in aircraft
wings. The wings of an airplane are designed such that air moves faster over the top surface (longer curved path) and slower beneath the wing (shorter
path).

According to Bernoulli’s principle


The faster airflow on top results in lower pressure. The slower airflow
beneath the wing results in higher pressure. This pressure difference
creates an upward force, known as lift, which allows the airplane to fly.
Venturi Effect (Flow through Narrow Tubes)
A Venturi tube is a device that demonstrates Bernoulli's principle. As a
fluid enters a constricted section of the tube, its velocity increases and
pressure decreases. This phenomenon is used in various applications,
such as:
Carburetors: In engines, the Venturi effect helps mix fuel with air.
Medical Devices: Venturi masks used in oxygen therapy regulate airflow and pressure to deliver the desired oxygen concentration to patients.
Spray Bottles and Atomizers: Spray bottles and atomizers (e.g., perfume sprays) rely on the Bernoulli principle to function. When air is forced
through a narrow tube, the velocity of the air increases, reducing the pressure at the top of the fluid in the bottle. This pressure difference pulls the
liquid up the tube, where it is broken into small droplets and sprayed out.
Chimneys and Drafts: The Bernoulli effect is also responsible for the draft in chimneys. Wind blowing across the top of a chimney causes the air to
move faster, resulting in a lower pressure at the top. The pressure difference between the inside and outside of the chimney draws air (and smoke)
upward, helping ventilate the building.
Curve Balls in Sports: In baseball, cricket, and other sports, Bernoulli’s principle explains the movement of curved balls. A spinning ball creates
differing air velocities on either side of the ball (due to the Magnus effect). The side where air moves faster experiences lower pressure, causing the
ball to curve toward the lower-pressure side.
Fluid Flow in Pipes (Taps and Nozzles): When water flows through a pipe and exits through a narrow nozzle, the velocity of the water increases,
and pressure drops. This is why water flows faster through narrow openings, and is also seen in household faucets and garden hoses.
Blood Flow in Arteries: In human physiology, Bernoulli's principle applies to blood flow through arteries. As blood flows through narrower
sections of an artery, its velocity increases and pressure decreases. This effect is critical for understanding blood pressure and flow in cardiovascular
diseases such as aneurysms and stenosis (narrowing of blood vessels).
Sailing (Wind Force on Sails): In sailing, the shape of the sails creates a Bernoulli effect where air moves faster across the curved side of the sail,
generating lift. This helps propel the boat forward even when the wind isn't directly behind the sail.
Pitot Tubes (Velocity Measurement): A Pitot tube is a device that measures the velocity of a fluid, often used in aircraft to determine airspeed.
The tube compares the static pressure of the fluid with the dynamic pressure due to the fluid’s motion, using Bernoulli’s principle to calculate the
velocity.
Hydraulic Jump: A hydraulic jump is a sudden transition from fast to slow-moving water, often seen in rivers or spillways. Bernoulli’s equation
helps explain the energy transfer during this process, especially the change in velocity and pressure that causes the jump.
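For the Pitot tube described above, Bernoulli's principle for incompressible flow gives the flow speed as v = √(2(P_total − P_static)/ρ). The short Python sketch below uses hypothetical pressure readings and an assumed air density of about 1.2 kg/m³ near sea level.

import math

def flow_speed(p_total, p_static, rho):
    # Flow speed in m/s from the stagnation (total) and static pressures, incompressible flow
    return math.sqrt(2 * (p_total - p_static) / rho)

RHO_AIR = 1.2    # kg/m³, approximate air density near sea level
# Assumed readings: total pressure 101 900 N/m², static pressure 101 325 N/m²
print(f"{flow_speed(101900, 101325, RHO_AIR):.1f} m/s")    # about 31 m/s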


Limitations of Bernoulli's Principle:


Viscous Fluids: Bernoulli's principle assumes an ideal, incompressible, and non-viscous fluid. In reality, viscosity (fluid friction) can affect how
pressure and velocity interact.
Turbulent Flow: The principle works best for steady, laminar flow. In turbulent flow, energy is lost due to eddies and vortices, making Bernoulli’s
principle less applicable.
Compressibility: The principle assumes incompressibility, meaning the fluid’s density remains constant. For highly compressible fluids (like gases
at high velocities), corrections need to be made.
Conclusion
Bernoulli's effect is a cornerstone of fluid dynamics with wide-ranging applications in both natural phenomena and technology. It provides insights
into the behavior of fluids under varying pressures and velocities, helping explain everything
from airplane flight to the simple functioning of spray bottles.

Floating and Sinking


The concepts of floating and sinking are primarily determined by buoyancy and density.
Buoyancy refers to the upward force exerted by a fluid, which counteracts the
weight of an object. When an object is placed in water, it will float if its density is less than
that of water. Conversely, if the object's density is greater, it will sink. For example, a rubber
duck floats because it is less dense than water, while a rock sinks due to its higher density.
This principle applies to various objects, from everyday items to large vessels like cruise ships.
Despite their massive weight, ships float because their overall density, including the air inside,
is less than the water they displace.
Floating and sinking are fascinating concepts that help us understand how objects interact with water. The key factor determining whether an object
floats or sinks is its density compared to that of water. If an object has a higher density than water, it will sink; conversely, if it has a lower density,
it will float. This principle is known as buoyancy. For example, a heavy stone sinks because its density is greater than that of water. In contrast, a
large ship, despite its weight, floats because its overall density, including the air inside, is less than that of the water it displaces.
This balance between weight and the upward push of water is crucial for floating. Engaging children in hands-on activities, such as testing various
objects in water, can enhance their understanding of these concepts.

SINKING AND FLOATING


The concepts of sinking and flotation are governed by the forces acting on a body
submerged in a fluid. When an object is placed in a fluid, two primary forces come into
play: the downward gravitational force and the upward buoyant force. The gravitational
force is determined by the object's weight, while the buoyant force is equal to the weight
of the fluid displaced by the object, as described by Archimedes' Principle. If the weight
of the object exceeds the buoyant force, it will sink. Conversely, if the buoyant
force is greater than the object's weight, it will float. This balance of forces is crucial in
determining whether an object will remain submerged, float, or sink. In summary, the
interaction between gravitational and buoyant forces dictates the behavior of objects in fluids, illustrating the fundamental principles of buoyancy and
fluid mechanics.


Up-thrust: It is the upward force exerted on a body by a fluid, arising because the fluid pressure on the lower surface of the body is greater than the pressure on its upper surface. When any object is immersed or submerged in a fluid, its weight appears to have been reduced because it experiences an upthrust from the fluid.

ARCHIMEDES PRINCIPLE
According to Archimedes, the ancient Greek mathematician, any object submerged in a fluid experiences an upward buoyant force equal to the weight
of the fluid it displaces. This principle explains why objects float or sink in water. If the weight of the displaced fluid is greater than the weight of the
object, the object will float; otherwise, it will sink. This principle has numerous applications, from designing ships and submarines to understanding
the behavior of hot air balloons. For instance, a hot air balloon rises because the weight of the air it displaces is greater than the weight of the balloon
itself. Archimedes' principle not only enhances our understanding of buoyancy but also serves as a cornerstone in various scientific and engineering
fields, illustrating the profound impact of Archimedes' discoveries on modern science.
Hence, Archimedes’ principle states that when a body is wholly or
partially immersed in a fluid, it experiences an up thrust equal to the
weight of the fluid displaced. i.e. up thrust = weight of fluid displaced.

Experiment to verify Archimedes’ principle


An object is weighed in air using a spring balance to obtain its weight W1.
The eureka can is completely filled with the liquid and a beaker is put under its spout.
The body is then immersed in the liquid.
The new weight w2 is also read from the spring balance.
The liquid collected in the small beaker is weighed to determine its weight W3.
It is obtained that W3 = W1 – W2, i.e. 𝑊1 = 10𝑁,
𝑊2 = 6𝑁,𝑊3 = 10 − 6 = 4𝑁
The weight of the body when completely immersed or submerged is called the apparent weight (6N). The apparent weight is less than the weight of
the body because when the body is immersed it experiences an up thrust force (U=4N).

Activity (In groups)


1. A glass block weighs 25 N. When wholly immersed in water, the block appears to weigh 15 N. Calculate the upthrust.
2. A body weighs 1 N in air and 0.3 N when wholly immersed in water. Calculate the weight of water displaced.
3. A metal weighs 20 N in air and 15 N when fully immersed in water. Calculate the weight of displaced water, the volume of displaced water, the volume of the metal, and the density of the metal. (Density of water = 1000 kg/m³.) A worked sketch of question 3 is given below.
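A worked Python sketch of question 3 of the activity, taking g ≈ 10 m/s² and the density of water as 1000 kg/m³ (the rounded values used elsewhere in this book).

G = 10              # m/s²
RHO_WATER = 1000    # kg/m³

weight_in_air = 20.0      # N
weight_in_water = 15.0    # N

upthrust = weight_in_air - weight_in_water          # = weight of displaced water (Archimedes' principle)
volume_displaced = upthrust / (RHO_WATER * G)       # m³; this is also the volume of the metal
density_metal = (weight_in_air / G) / volume_displaced   # mass of metal ÷ its volume

print(upthrust)            # 5.0 N
print(volume_displaced)    # 0.0005 m³
print(density_metal)       # 4000.0 kg/m³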

Application of Archimedes’ principle


This principle is crucial in various real-life applications, particularly in the design and operation of ships and submarines. By understanding buoyancy,
engineers can create vessels that float and navigate effectively, ensuring they displace enough water to support their weight.
Additionally, Archimedes' principle aids in swimming, as it explains how the human body can float in water. The principle also provides a method
for calculating the volume of irregularly shaped objects. By submerging such objects in water, one can measure the displaced water volume, which
corresponds to the object's volume. Another application is determining the density of irregularly shaped objects. By measuring the weight of an object
in air and then in water, one can calculate its density based on the volume of water displaced. This method is particularly useful in various scientific
fields, including material science and engineering. It is also employed in hydraulic lifts and hydrometers, showcasing its significance in both everyday


applications and advanced engineering solutions, in addition to the measurement of the relative density of solids and of liquids.

Measurement of relative density of a solid


Weigh the object in air and note its weight Wa.
Weigh the object in water and note its weight Ww.
Determine the upthrust U = Wa − Ww.
Relative density of the solid: R.D = weight in air ÷ upthrust = Wa / (Wa − Ww).

Determination of RD of a liquid
Weigh the object to find its weight in air, Wa, using a spring balance.
Weigh the object in the liquid whose R.D is to be determined; call this Wl.
Weigh the object in water; call this Ww.
Find the upthrust in the liquid = Wa − Wl.
Find the upthrust in water = Wa − Ww.
Obtain the R.D of the liquid from R.D = (Wa − Wl) / (Wa − Ww).

Activity
1. An object weighs 5.6N in air, 4.8N in water and 4.6N when immersed in a liquid. Find the R.D of the liquid.
2. An object weighs 100N in air and 20N in a liquid of RD 0.8. Find its weight in water.
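A minimal Python sketch of the relative-density formulae above, applied to question 1 of the activity (weights of 5.6 N in air, 4.8 N in water and 4.6 N in the liquid).

def relative_density_of_liquid(w_air, w_liquid, w_water):
    # R.D of a liquid = upthrust in the liquid ÷ upthrust in water
    return (w_air - w_liquid) / (w_air - w_water)

def relative_density_of_solid(w_air, w_water):
    # R.D of a solid = weight in air ÷ upthrust in water
    return w_air / (w_air - w_water)

print(f"{relative_density_of_liquid(5.6, 4.6, 4.8):.2f}")   # 1.25
print(f"{relative_density_of_solid(5.6, 4.8):.2f}")         # 7.00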

FLOATING OBJECTS
Floating objects are a fascinating aspect of physics, primarily explained by the principles of buoyancy
and density. An object floats when it is positively buoyant, meaning it is less dense than the fluid it is
in. According to Archimedes' Principle, a submerged object experiences an upthrust equal to the weight of the fluid it displaces. If the weight of the fluid displaced by the fully submerged object is greater than the weight of the object, it
will float. The concept of floatation is crucial in various applications, from designing boats to
understanding natural phenomena. For instance, wood floats on water because its density is lower than
that of water, allowing it to displace enough water to support its weight. Alternatively, objects denser
than water, like stones, will sink.
Two vertical forces act on an object immersed in water: its weight (W) and the upthrust (U). If W is less than U the object rises; if W is equal to U the object floats; and if W is greater than U the object sinks. Therefore, for a floating object the weight equals the upthrust. From Archimedes' principle, the upthrust is equal to the weight of the fluid displaced. Therefore, for floating objects, the weight of the object is equal to the weight of the fluid displaced.


Law of floatation
The principle of floatation, rooted in Archimedes' principle, states that an object will
float in a fluid if the buoyant force acting on it is equal to its weight. This buoyant
force arises from the pressure difference exerted by the fluid on the object, which is
greater at the bottom than at the top. Consequently, when an object is placed in a
liquid and floats, it displaces fluid whose weight is equal to its own weight. For
an object to float, its density must be less than that of the fluid. This is why large
ships, despite their heavy weight, can float; they are designed to displace enough
water to counterbalance their weight. Similarly, icebergs float because they displace
a volume of water equal to their weight, with a significant portion submerged. In summary, the principle of floatation explains how objects interact
with fluids, emphasizing the relationship between buoyancy, weight, and density.
Hence the law of floatation states that a floating object displaces its
own weight of the fluid in which it floats.

Experiment to verify law of floatation


To verify the law of flotation, a simple experiment can be conducted using a
eureka can and a solid object. First, fill the eureka can with water up to the
spout level. Next, gently lower the object into the water until it floats. As the
object displaces water, the displaced water will flow out of the spout. Collect this
water in a measuring container to determine the volume of water displaced.
According to the law of flotation, an object will float if the weight of the water
displaced is equal to the weight of the object. By measuring the weight of the
object and the volume of water displaced, you can confirm this principle. Weigh the displaced water. It is found that the weight of water displaced
= weight of object measured in air. If the object's density is less than that of the fluid, it will float; if greater, it will sink. This experiment effectively
demonstrates Archimedes' principle and the fundamental concepts of buoyancy and flotation.

Application of law of floatation


Ship: A ship floats when the up thrust of the water it displaces equals its weight i.e. Weight of floating ship =weight of water displaced.
While a ship is being loaded, it sinks lower and displaces more water to balance the extra load. Although solid steel does not float, a steel ship floats. This is because the ship is hollow and most of its interior contains air, so its average density is less than the density of water; the hollow hull displaces many times the volume of water that the steel alone would.
Submarines
A submarine has ballast tanks which can be filled with water or air. When full of
water, the average density of the submarine is slightly greater than the density of sea
water and it sinks. When air is pumped into the tanks, the average density of the
submarine falls until it’s the same or slightly less than that of water around it. The
submarine therefore stays at one depth or rises to the surface.


Balloons and airships


A balloon is an airtight, light bag filled with hydrogen or helium. These gases are less dense than air. An airship is a large balloon with a motor to move it and fins to steer it. The downward force on the balloon equals the weight of the bag plus the weight of the gas in it, while the upthrust equals the weight of the air displaced. The balloon rises if the upthrust is greater than the downward force, and the lifting force = upthrust − total weight = weight of air displaced − (weight of bag + weight of gas).
Balloons that carry passengers control their weight by dropping ballast to make them rise and by letting gas
out of the gas bag to make them fall. As the balloon rises, the atmospheric pressure on it becomes less. The gas
in the balloon tends to expand. Therefore the gas bag must not be filled completely when the balloon is on the
ground.
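A minimal sketch of this force balance, using purely hypothetical round figures: a 10 m³ gas bag, surrounding air of density 1.2 kg/m³, helium of density 0.18 kg/m³, a bag (plus fittings) weighing 50 N, and g ≈ 10 m/s².

G = 10               # m/s²
RHO_AIR = 1.2        # kg/m³, approximate density of the surrounding air
RHO_HELIUM = 0.18    # kg/m³, approximate density of helium

volume = 10.0          # m³ of gas in the bag (assumed)
weight_of_bag = 50.0   # N, bag plus fittings (assumed)

upthrust = RHO_AIR * volume * G              # weight of air displaced
weight_of_gas = RHO_HELIUM * volume * G
lifting_force = upthrust - (weight_of_bag + weight_of_gas)

print(f"Lifting force: {lifting_force:.0f} N")   # 52 N upwards; the balloon rises because this is positive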

Hydrometers
A hydrometer is a floating object used to find the density of liquids by noting how far it sinks in them. No
weighing is necessary. It consists of a long glass tube with a bulb at the bottom. Mercury or lead shot is placed in the
bulb so that the hydrometer floats upright. The stem is long and thin and is graduated. The thin stem means
that the hydrometer is sensitive i.e. it sinks to different levels even in two liquids whose densities are almost
the same.
Uses of a hydrometer
It is used for measuring the densities of milk (lactometer), beer, wines, and the acid in car batteries (the acid in a fully charged accumulator should have a density of about 1.25 g/cm³; if it falls below about 1.18 g/cm³, the accumulator needs recharging).

Motion of a body through fluids


The motion of a body through fluids, such as air or water, involves complex interactions that can
significantly affect its movement. When a body moves relative to a fluid, it encounters resistance, known
as drag, which necessitates the application of force in the direction of motion. This resistance is influenced
by factors such as the body's shape, size, and the fluid's viscosity. Fluid dynamics, the study of fluids in
motion, encompasses two critical elements: viscosity and flow regimes. Viscosity refers to a fluid's
resistance to deformation, while flow regimes describe the patterns of fluid movement, which can be
laminar or turbulent. In practical applications, such as in pharmacy and medicine, the principles of fluid
dynamics are crucial. They help in designing drug delivery systems and understanding how substances
move through the human body, ensuring effective therapeutic outcomes.

Action of motion
When a body falls through a fluid, it is acted on by three forces: the weight of the body W, the viscous (drag) force Fv, and the upthrust U. The weight acts downwards towards the earth; the upthrust acts upwards, and the viscous force acts in the direction opposite to the body's motion. As the body falls it first accelerates, with a net resultant force F = W − (Fv + U). As the body continues to fall, the drag increases until the weight of the body W = Fv + U. At this stage the resultant (net) force on the body is zero, and the body moves with a uniform velocity called the terminal velocity.
Terminal velocity is the steady speed achieved by an object falling freely through a gas or liquid. When an object is dropped, it accelerates due to
gravity until the force of air resistance equals the gravitational pull. At this point, the object stops accelerating and continues to fall at a constant
speed, known as terminal velocity. The terminal velocity of an object depends on its mass, shape, and cross-sectional area. Objects with larger cross-
sectional areas or higher drag coefficients, such as a parachute, will fall more slowly than denser, more streamlined objects like a rock. For example,


a skydiver in a spread-eagle position experiences a lower terminal velocity compared to a head-down position due to increased air resistance. It
illustrates the balance between gravitational force and air resistance, providing insight into the dynamics of free fall.

Terminal velocity: This is a constant or uniform velocity with which a body falling through a fluid moves such that the upward forces acting on
it are equal to its weight.
The uniform velocity attained by a body falling through a fluid when the net force on the body is zero. In case of a balloon or a rain drop falling, the
resisting force or retarding force on the body is called air resistance.

2.4 MECHANICAL PROPERTIES OF MATERIALS AND HOOKE’S LAW


Learning Outcomes
a) Understand how the mechanical properties of common materials can be utilised in physical structures (u, s, v/a)
b) Understand that the tensile strength of materials is determined by the properties of the substances they are composed of (u)
c) Understand that heating changes the structure and properties of some materials(u)
Mechanical properties of matter refer to the behaviors and characteristics of materials when they are subjected to mechanical
forces. Key properties include strength, ductility, hardness, impact resistance, and fracture toughness. These characteristics determine how materials
respond to various forces, influencing their suitability for specific structural applications. For instance, metals like steel exhibit high tensile strength
and ductility, making them ideal for construction. Materials such as concrete provide excellent compressive strength but are less ductile. This
combination of properties allows engineers to select materials that can withstand specific loads and environmental conditions. Additionally, innovative
materials like bamboo are gaining attention for their high mechanical properties and sustainability. As a natural fiber, bamboo can serve as a
reinforcement in composite materials, offering an eco-friendly alternative to synthetic fibers. By evaluating these mechanical properties, engineers can
design safer, more efficient structures that meet modern demands. These properties are crucial in determining how materials will react under different
conditions of stress, strain, and temperature, and they are fundamental in fields such as materials science, engineering, and physics. Here are some of
the key mechanical properties of matter:

Elasticity
Elasticity is the ability of a material to return to its original shape and size after the removal of a deforming force.
Hooke's Law: Within the elastic limit, the deformation (strain) of a material is directly proportional to the applied force (stress).
Plasticity: Plasticity is the ability of a material to undergo permanent deformation without breaking when a force is applied beyond its elastic limit.
Ductility: The ability of a material to be stretched into a wire.
Malleability: The ability of a material to be hammered or rolled into thin sheets. Example: Metal being shaped into a car body panel.
Strength
Strength is the ability of a material to withstand an applied force without failure or plastic deformation.
Types:
Tensile Strength: Resistance to breaking under tension.
Compressive Strength: Resistance to breaking under compression.
Shear Strength: Resistance to breaking under shear stress. Example: Steel beams in construction must have high tensile and compressive strength
to support loads.


Hardness: Hardness is the resistance of a material to deformation, particularly permanent deformation, scratching, cutting, or abrasion. Example:
Diamond is the hardest known material and can scratch almost any other material.
Toughness: Toughness is the ability of a material to absorb energy and plastically deform without fracturing. It is a measure of how much energy a
material can absorb before failure. Example: Rubber has high toughness because it can absorb significant energy before breaking.
Brittleness: Brittleness is the tendency of a material to break or shatter without significant plastic deformation when subjected to stress. Example:
Glass and ceramics are brittle materials; they break easily under stress without significant deformation.
Ductility: Ductility is the ability of a material to undergo significant plastic deformation before rupture or fracture. Elongation and Reduction in Area:
Measures of ductility. Example: Copper is highly ductile and can be drawn into thin wires.
Malleability: Malleability is the ability of a material to withstand deformation under compressive stress, often characterized by its ability to form a
thin sheet when hammered or rolled. Example: Gold is highly malleable and can be hammered into very thin sheets.
Creep: Creep is the slow, permanent deformation of a material under constant stress over a long period, typically at high temperatures. Example:
Turbine blades in jet engines undergo creep at high temperatures and stress during operation.
Fatigue: Fatigue is the weakening or failure of a material caused by cyclic loading, leading to the accumulation of damage and eventual fracture.
Fatigue Limit: The stress level below which a material can withstand an infinite number of cycles without failing. Example: Metal parts in machinery
can fail due to fatigue after repeated loading and unloading cycles.

Applications and Importance


Mechanical properties of materials such as tensile strength, ductility, hardness, and toughness are essential for classifying materials and predicting
their behavior under stress. For instance, materials with high tensile strength are ideal for construction and manufacturing, where they must withstand
significant loads without deforming. In industries like aerospace and automotive, understanding mechanical properties is vital for ensuring safety
and performance. Components must be selected based on their ability to endure fatigue and impact, which directly relates to their mechanical
characteristics. Additionally, advancements in materials science, such as the development of conjugated polymers, showcase how tailored mechanical
properties can enhance product functionality, such as in resistance training equipment.

Other properties of materials


Strength: It is the property of a material that makes it require a large force to break. A material which has this property is said to be strong, e.g. concrete, metals, etc.
Stiffness: It is the property of material that makes it resist being bent. Materials with this property are said to be stiff e.g. steel, iron and concrete.
Ductility: It is a property of materials that makes it possible to be molded in different shapes and sizes or rolled into sheets, wires or useful shapes
without breaking. Materials which have this property are called ductile materials e.g. Copper wire, Soft iron wire etc.
Brittleness: This is the ability of a material to break suddenly when force is applied on it. Materials which have this property are called brittle
materials e.g. bricks, chalks, glass, charcoal etc.
Elasticity: This property makes material stretch when force is applied on it and regains original size and shape when the force is removed. Materials
with this property are called elastic materials e.g. rubber, copper spring etc.
Plasticity: This is the property which makes a material remain stretched (deformed) permanently even when the applied force is removed. Materials which have this property are called plastic materials, e.g. plasticine, clay, putty, tar, etc.
Hardness: This is a measure of how difficult it is to scratch a surface of a material. Hard materials include; metals, stones etc.
Timber as a building material: It is used for making furniture, walls, bodies of vehicles, bridges, making ceilings etc.


Bricks and blocks as building materials


Bricks and blocks are fundamental stone-like building materials used in the construction of bridges, walls, floors, etc., known for their durability and
structural integrity. Typically made from clay, concrete, or other composite materials, bricks serve as essential components for walls, pavements, and
various architectural elements. Their load-bearing capabilities allow them to support timber floor joists and roof rafters, creating stable structures.
One of the significant advantages of brick and block construction is their fire resistance. These materials are non-combustible, which enhances the
safety of buildings by reducing the risk of fire spread. Additionally, advancements in technology have led to innovative alternatives, such as 3D
printable glass bricks, which can withstand pressures similar to traditional concrete blocks, showcasing the evolving nature of building materials.
Bricks and blocks remain vital in modern construction, combining traditional methods with contemporary innovations to meet the demands of safety,
sustainability, and structural performance.

Bricks and blocks are increasingly favored in construction due to their numerous advantages.
One of the primary benefits is their durability; they can withstand harsh weather conditions, including extreme temperatures and natural disasters
like cyclones and wildfires. This resilience makes them a reliable choice for both residential and commercial buildings.
In addition to durability, bricks and blocks offer excellent thermal mass, which helps regulate indoor temperatures. This property allows buildings
to retain heat during colder months and stay cooler in the summer, leading to energy efficiency and reduced heating and cooling costs. Furthermore,
their fire resistance and sound insulation capabilities enhance safety and comfort within structures.
Lastly, bricks and blocks are versatile materials that can be used in various applications, from load-bearing walls to decorative facades.
Disadvantages: They are brittle; they need firing, which can make them expensive; and they are not suitable under very wet conditions, i.e. they can soften and weaken.

Glass as a building material


Glass has emerged as a versatile building material, utilized in various applications such as insulation, structural components, external glazing, and
cladding. Its unique properties stem from its amorphous structure, which allows for transparency while providing strength and durability. This non-
crystalline solid is primarily composed of silica, which is heated to high temperatures and then rapidly cooled, resulting in its distinctive
characteristics. In modern architecture, glass not only enhances aesthetic appeal but also contributes to energy efficiency. Its insulating properties
help regulate indoor temperatures, reducing the need for excessive heating or cooling. Innovations in glass technology, such as 3D printable glass
bricks developed by MIT, demonstrate its potential to withstand pressures comparable to traditional materials like concrete.
Glass is used as a building material because it has a number of desirable properties, which include;
It is transparent, few chemicals react with it, it can be melted and formed into various shapes, its surface is hard and difficult to scratch, and it can
be re-enforced (strengthened).
Advantages of using Glass as building materials
One of the primary benefits is its ability to allow natural light into spaces, which enhances the ambiance and reduces reliance on artificial lighting.
This not only creates a more inviting environment but also contributes to energy efficiency, as natural light can significantly lower electricity
consumption.
Advancements in thermal insulating glass technology enable buildings to maintain comfortable indoor temperatures. This type of glass reflects heat
back into the room during colder months, minimizing energy loss and reducing heating costs.
Additionally, glass can be designed to absorb, refract, or transmit light, offering architects unparalleled versatility in design. Finally, glass is an
excellent electrical insulator, making it a safe choice for modern buildings. Its combination of beauty, energy efficiency, and safety solidifies glass as
a vital material in contemporary architecture.


Concrete
Concrete is a vital building material composed of a mixture of
cement, water, and aggregates such as sand and gravel. This
composite material hardens over time through a chemical process
known as hydration, resulting in a strong and durable substance.
Its versatility makes it suitable for various construction applications,
from residential buildings to large infrastructure projects. One of the
key advantages of concrete is its resistance to fire, water, and pests,
making it a safe choice for structural integrity.
Concrete can withstand extreme weather conditions, including high
winds and heavy rainfall. Its non-combustible nature further
enhances its appeal in construction, ensuring safety in case of fire.
Moreover, advancements in concrete technology have led to innovative uses, such as incorporating materials that allow it to function as a battery.
Concrete is strong under compression but weak under tension. It can withstand tensile forces when it is reinforced.
Reinforced concrete is a composite building material that combines the high compressive strength of concrete with the tensile strength of steel. This
synergy allows the two materials to work together effectively, making reinforced concrete ideal for various structural applications. The steel
reinforcement, typically in the form of bars or mesh, compensates for concrete's inherent weakness in tension, enabling it to withstand bending and
stretching forces.
The properties of reinforced concrete make it a preferred choice in construction. It is durable, resistant to weathering, and can be molded into various
shapes, allowing for architectural flexibility. Its ability to absorb energy makes it suitable for seismic-resistant structures, enhancing safety in
earthquake-prone areas. Reinforced concrete's unique combination of strength, durability, and versatility has made it a cornerstone of modern
construction, used in everything from residential buildings to bridges and skyscrapers.

Advantages of reinforcing concrete


One of the primary advantages of reinforced concrete is its ability to prevent cracking and structural failure. The steel reinforcement absorbs tensile
stresses, which concrete alone cannot handle effectively. This enhancement not only increases the durability of the structure but also extends its
lifespan, making it a cost-effective option in the long run. Reinforced concrete can support heavier loads and be molded into various shapes, allowing
for innovative architectural designs.

BEAMS
A beam is a crucial structural element in construction and engineering, primarily
designed to resist lateral loads applied across its axis. Its primary mode of deflection
is bending, making it essential for maintaining the integrity of various structures.
Beams are typically horizontal and serve as a load-bearing component, transferring
forces to columns or walls. There are various types of beams, including simply
supported, cantilever, and continuous beams, each serving specific structural needs.
They are engineered to withstand vertical loads, shear forces, and bending moments,
ensuring safety and efficiency in load distribution.


A beam is a long piece of material, e.g. wood, metal, concrete, etc. It is usually horizontal and supported at both ends. It carries the weight of part of the building or other structure.
When a force is applied on a beam and it bends, one side of the beam is compressed (under compression), the other side is stretched (under tension), and its centre is unstretched (neutral).
AB – Under compression
DC – Under tension
EF – unstretched i.e. it neither under tension nor under compression.
The neutral axis of a beam does not resist any forces and can therefore be removed without weakening the strength of the beam.

GIRDERS
Girders are essential structural components in construction, serving as the primary
support for buildings and bridges. Defined as large horizontal beams, girders bear significant vertical loads and can accommodate dynamic and rolling loads, making
them crucial for structural integrity. They connect to smaller beams, forming a
framework that distributes weight effectively throughout the structure. Unlike standard
beams, which primarily support smaller loads, girders are designed to handle
concentrated forces and larger spans. This distinction allows girders to play a pivotal role
in maintaining stability and safety in various constructions, from skyscrapers to bridges.
Typically made from materials like steel or reinforced concrete, girders are engineered to withstand substantial stress.
Their robust design ensures that they can support the weight of the structure above while
transferring loads to the foundation below, making them indispensable in modern
architecture and engineering. Hence, a girder is a beam in which the material’s neutral axis
can be removed.

Examples of Girders
I-shaped girder: used in the construction of large structures like bridges. Other examples include the hollow tube girder (hollow cylinder), the square beam/girder, the triangular beam/girder and the L-shaped girder.
Advantages of hollow beams
Hollow beams, particularly hollow structural sections (HSS), offer numerous advantages in construction and engineering. One of the primary benefits
is their uniform strength distribution, which enhances structural integrity. This design minimizes the likelihood of bending or deformation under
load, making them ideal for various applications.
Additionally, hollow beams are lightweight, facilitating easier transportation and installation. Their reduced weight does not compromise strength,
allowing for significant load-bearing capabilities while conserving material. This efficiency translates to cost savings in both material and labor. Hollow
beams are versatile and aesthetically appealing, fitting seamlessly into modern architectural designs. They can be easily fabricated and customized,


making them a popular choice for builders and architects alike. Furthermore, their recyclability aligns with sustainable construction practices,
contributing to a greener environment.
Disadvantages of solid beams
They are heavier, economically expensive and sometimes weak

Disadvantages of a material used in the neutral axis


Material placed along the neutral axis is wasteful and unnecessary, since the neutral axis does not resist any forces.

NOTCH AND NOTCH EFFECTS


A notch, which is a geometric discontinuity in a material, can lead to stress concentration, resulting in uneven stress distribution. This phenomenon,
known as the notch effect, can significantly reduce the loading capacity of a component, making it more susceptible to fatigue and fracture. When a
material experiences a notch, the critical gross stress is often lower than the critical net stress, indicating that the presence of the notch weakens the
material's overall strength. This effect is particularly pronounced in metals, where notches can decrease ductility and increase the likelihood of brittle
fracture. Hence, a notch is a cut at a weak point of a material; it is either a crack or a scratch on the surface of the material. A notch weakens the strength of a material more when it lies in a region under tension than when it is under compression. Notch effect: This is the effect that the notch has on the strength of the material, i.e. the notch weakens the strength of the material.

WAYS OF REDUCING NOTCH EFFECTS


One effective method is to apply compressive mean stress (all its parts are under compression), which can diminish or even eliminate stress
concentrations around notches.
Optimizing the shape of notches by using larger circular arcs can help reduce stress concentration factors. Techniques such as roller burnishing can
also enhance fatigue strength by smoothing out surface imperfections that contribute to notch effects.
Incorporating these methods into design and manufacturing processes can lead to more robust components, ultimately improving their performance and longevity. Using laminated materials rather than solid materials in construction also reduces notch effects.

Structures: A structure consists of pieces of materials joined together in a particular way. The pieces of materials used to strengthen structures are
called girders.
Examples of structures: Both the upper and lower parts of the buildings are under compression. The bridge is weak under tension

STRUTS AND TIES


Struts and ties are fundamental concepts in structural engineering, representing different types of
forces within a structure. A strut is a structural member that experiences compressive forces, while
a tie is one that experiences tensile forces. Struts typically appear in areas where loads are being
pushed together, while ties are found where loads are being pulled apart. For example, in a truss
bridge, the diagonal members act as struts, providing compression, while the horizontal members
serve as ties, holding the structure together under tension. Strut and tie modeling (STM) simplifies
complex stress patterns into a triangulated framework, making it easier to visualize and calculate
forces. This method is particularly useful in reinforced concrete design, where understanding the
interplay of struts and ties can lead to more efficient and safer structures.
Tie: A tie is a girder under tension and can be replaced by a string.


Strut: A strut is a girder under compression

How to identify struts and ties in a structure


Remove each girder, one at a time, from the framework and note the effect this has on the framework.
If the joints that the girder connected move further apart, the girder was a tie; otherwise the girder was a strut.
Experiment to distinguish between a tie and a strut
Two straws are fixed to the side of a piece of soft board to form a framework.
A small load is added at the end B; the structure supports the load.
The straw AB is then replaced by a string of the same length.
If the structure still supports the load, then AB is under tension and is therefore a tie.
If, instead, the structure does not support the load and collapses, then AB was under compression and is therefore a strut.


HOOKE’S LAW OF ELASTICITY


Hooke's Law of elasticity is a fundamental principle in physics that describes the
relationship between the force applied to an elastic object and the resulting
deformation. Formulated by Robert Hooke, the law states that the extension or
compression of a spring (or any elastic material) is directly proportional to the force
applied, as long as the material remains within its elastic limit. This relationship
can be expressed mathematically as 𝐹 = 𝑘𝑥, where F is the force applied, k is
the spring constant, and x is the displacement from the original position. It helps
in designing structures and mechanical systems that can withstand forces without
permanent deformation.

Hooke's law states that the strain of a material is proportional to the applied stress, within the elastic limit of that material. When elastic materials are stretched, the atoms and molecules deform while the stress is applied, and when the stress is removed they return to their initial state. The figure shows three conditions of a spring: the spring at rest with no load applied, the spring elongated by an amount x under a load of 1 N, and the spring elongated by 2x under a load of 2 N. Depending on the material, different springs have different spring constants, which can be calculated; substituting such pairs of values into the Hooke's law equation gives the spring constant for the material in question.

Example: A force of 500 N stretches a spring by 5 cm. Find the spring constant of the spring.
Solution:
The spring is displaced by 5 cm, but the unit of the spring constant is newtons per metre, so the distance must first be converted to metres: 5 cm = 0.05 m. Substituting the values into F = kx and considering magnitudes only (the negative sign in F = −kx merely indicates that the restoring force opposes the displacement), we get
500 N = k × 0.05 m, so k = 500 ÷ 0.05 = 10,000 N/m.
Therefore, the spring constant of the spring is 10,000 N/m.
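The same calculation expressed as a very short Python sketch; within the elastic limit the spring constant is simply k = F/x (magnitudes only), and the 500 N and 5 cm figures are those of the worked example.

def spring_constant(force_n, extension_m):
    # Spring constant k = F / x, valid within the elastic limit (magnitudes only)
    return force_n / extension_m

print(spring_constant(500, 0.05))   # 10000.0 N/m, agreeing with the worked example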

The figure below shows the stress-strain curve for low carbon steel.
The material exhibits elastic behaviour up to the yield strength point, after which the material
loses elasticity and exhibits plasticity. From the origin up to the proportional limit (just below the yield strength), the graph is a straight line, showing that the material follows Hooke's law. Beyond the elastic limit, between the proportional limit and the yield strength, the material loses its elasticity and exhibits plasticity. The portion of the curve from the origin to the proportional limit lies in the elastic range, while the portion from the proportional limit to the rupture (fracture) point lies in the plastic range.


The material's ultimate strength is given by the maximum ordinate (stress) value on the stress-strain curve, and the ordinate at the point of rupture gives the rupture strength.

Hooke’s Law Applications


Hooke's Law states that the strain in a solid is directly proportional to the applied stress, provided the material remains within its elastic limit. For
instance, in engineering, Hooke's Law is crucial for designing vehicle suspension systems, ensuring that vehicles can absorb shocks and provide a
smooth ride. In material science, it helps in understanding the strength and elasticity of materials before they are used in construction or
manufacturing. This ensures safety and durability in structures and products.
Hooke's Law is applied in the medical field, particularly in understanding the mechanics of the heart and lungs. It explains the length-tension
relationship in cardiac muscles and the elastic recoil of lung tissues, which are vital for effective respiration.
It is the fundamental principle behind the manometer, the spring scale, and the balance wheel of a clock. Hooke's law also sets the foundation for seismology, acoustics and molecular mechanics.

Hooke’s Law Disadvantages


Following are some of the disadvantages of Hooke’s Law:
Hooke's law ceases to apply beyond the elastic limit of a material.
Hooke's law is accurate for solid bodies only when the forces and deformations are small.
Hooke's law is not a universal principle; it applies to a material only while it is not stretched far beyond its elastic limit.
Example
1. An elastic wire of original length 10 cm stretches to 12 cm when a force of 3 N is applied to it. Find its extension and its elastic constant k.
Extension e = l − l0 = 12 − 10 = 2 cm = 0.02 m. Using F = ke: 3 = k × 0.02, so k = 150 N/m.
2. A spring extends by 0.5 cm when a load of 0.4 N hangs on it. Find the load required to cause an extension of 1.5 cm, and the additional load needed to produce that extension.
Method 1: Using F = ke, 0.4 = k × 0.5, so k = 0.8 N/cm; then F2 = ke2 = 0.8 × 1.5 = 1.2 N.
Method 2: F2 = 1.2 N, so the additional load required = (1.2 − 0.4) = 0.8 N.
Activity
A spring has an unstretched length of 12 cm. When a force of 8 N is attached to it, its length becomes 6 cm. Find the extension produced, the spring constant, and the extension that would be produced by a force of 12 N.
Experiment to verify Hooke’s law
To verify Hooke's Law, an experiment can be conducted using a spring or a thin wire. Hooke's
Law states that the extension (x) of an elastic material is directly proportional to the applied
force (F), expressed as F = kx, where k is the spring constant.
In the experiment, a spring is suspended vertically next to a metre rule, and a mass hanger is attached to its lower end. A pointer fixed to the bottom end of the spring is used to obtain readings on the scale, as shown. The initial position P0 of the pointer is read and recorded. As weights are added to the hanger, the new position of the pointer is read from the metre rule and recorded after each weight is added, and a table of results is drawn up.


By plotting the graph of applied force against the extension, a straight line should be
observed, confirming that the extension is proportional to the force applied, as long as
the elastic limit is not exceeded. This simple experiment demonstrates the fundamental
principles of elasticity and the validity of Hooke's Law.
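In practice the spring constant is read off as the slope (gradient) of this force–extension graph. The sketch below assumes a set of hypothetical readings (not data from the text) and fits a straight line through them; the gradient is k, and the intercept should be close to zero if the elastic limit was not exceeded.

```python
import numpy as np

# Hypothetical readings from a Hooke's law experiment (illustrative only).
force_n     = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # applied load F in newtons
extension_m = np.array([0.0, 0.021, 0.039, 0.061, 0.080])  # measured extension x in metres

# Least-squares straight line F = k*x + c; the gradient k is the spring constant.
k, c = np.polyfit(extension_m, force_n, 1)
print(f"spring constant k ≈ {k:.1f} N/m, intercept ≈ {c:.3f} N")
```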

Deformation: Deformation, often referred to as strain, is the change in the size and shape of an object due to a change in temperature or an applied force. Depending on the size of the object, the material and the force applied, various forms of deformation may occur.
Based on these factors, deformation is classified into the following:

Elastic deformation: This is deformation which occurs before the elastic limit is reached. The wire regains its shape and size after the deforming force is removed, and the energy is stored as elastic potential energy.
Elastic deformation is therefore a temporary change of a material's shape that is self-reversing once the force or load is removed. It alters the shape of a material upon the application of a force within its elastic limit, and the material regains its original dimensions after the load is released; the deformation is reversible and non-permanent. Elastic deformation of metals and ceramics is commonly seen at low strains, and their elastic behaviour is generally linear.
Plastic deformation: This is permanent deformation which occurs once the elastic limit has been exceeded. The mechanisms that cause plastic deformation differ widely: plasticity in metals is a consequence of dislocations, while in brittle materials such as concrete, rock and bone, plasticity occurs through the slippage of microcracks.

STRESS, STRAIN AND YOUNG’S MODULUS


Stress, strain, and Young's modulus are fundamental concepts in material science
and engineering that describe how materials respond to applied forces. Stress is
defined as the force applied per unit area of a material, leading to internal forces
between atoms. When a material is subjected to stress, it undergoes strain, which
is the measure of deformation resulting from that stress. Strain can be tensile
(stretching) or compressive (squeezing), depending on the nature of the applied force. Young's modulus, also known as the modulus of elasticity,
quantifies a material's stiffness and is defined as the ratio of stress to strain
in the elastic region of the material's deformation. This relationship is crucial
for understanding how materials behave under load and is essential for
designing structures and components that can withstand various forces
without permanent deformation.
Consider a force F acting on a material, e.g. a wire of length l and cross-sectional area A, so that it extends by a length e.
Stress for the wire is defined as the ratio of the applied force on a material to its cross-sectional area,
i.e. stress = force / area. SI unit: N/m²


Strain is the ratio of the extension of a material to its original length, i.e. strain = extension / original length. Strain has no units.
Young's modulus is defined as the ratio of stress to strain: E = stress / strain. SI unit: N/m²
Young's modulus is determined when the elastic limit is not exceeded, and its value is a constant for a given material.
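The three definitions can be turned directly into a short calculation. The sketch below is an added illustration using made-up values (a 20 N force on a wire of cross-sectional area 1 cm², original length 2 m, extension 1 mm); it is not one of the activity questions that follow.

```python
def stress(force_n, area_m2):
    """Stress = force / cross-sectional area, in N/m^2."""
    return force_n / area_m2

def strain(extension_m, original_length_m):
    """Strain = extension / original length (no units)."""
    return extension_m / original_length_m

def youngs_modulus(force_n, area_m2, extension_m, original_length_m):
    """Young's modulus E = stress / strain, in N/m^2."""
    return stress(force_n, area_m2) / strain(extension_m, original_length_m)

F, A, e, L = 20.0, 1e-4, 1e-3, 2.0   # illustrative values in SI units
print("stress =", stress(F, A), "N/m^2")                  # 200000.0
print("strain =", strain(e, L))                           # 0.0005
print("E      =", youngs_modulus(F, A, e, L), "N/m^2")    # 400000000.0
```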
Activity (Work out in groups)
1. A force of 20 N acting on a wire of cross-sectional area 10 cm² makes its length increase from 3 m to 5 m. Find the stress.
2. A copper wire of length 10 cm is subjected to a force of 2 N. If the cross-sectional area is 5 cm² and the force causes an extension of 0.2 cm, calculate the tensile stress, tensile strain and Young's modulus.
3. A mass of 200 kg is placed at the end of a wire 15 cm long and of cross-sectional area 0.2 cm². If the mass causes an extension of 1.5 cm, calculate the tensile stress and tensile strain.
4. A mass of 200 g is placed at the end of a wire 15 cm long and of cross-sectional area 0.2 cm². If the mass causes an extension of 1.5 cm, calculate the tensile stress, tensile strain and Young's modulus.

2.5 REFLECTION OF LIGHT AT CURVED SURFACES


Learning Outcomes
a) Understand reflection of light and the formation of images by curved mirrors(u)
b) Use ray diagrams to show how images are formed by curved mirrors and the nature of the images(s)
c) Determine the focal length of concave mirrors using a variety of methods. (s,gs)

Introduction
Reflection of Light by Curved Surfaces
Reflection of light refers to the bouncing back of light rays when they hit a surface. In the context of curved surfaces, such as concave and convex
mirrors, the reflection of light follows specific patterns depending on the shape of the surface.
A mirror is a surface that reflects almost all incident light. Mirrors come in two types: those with a flat surface, known as plane mirrors, and those with a curved surface, called spherical mirrors. In this section, we will explore two specific types of spherical mirrors: convex mirrors and concave mirrors. We will also delve into the concept of ray diagrams, which help us understand how light behaves when it interacts with these mirrors. The reflection of light by curved surfaces primarily involves concave and convex mirrors. Concave mirrors, which curve inward, focus parallel rays of light to a single point known as the focal point. This property makes them useful in applications like telescopes and shaving mirrors, where magnified images are desired. Convex mirrors bulge outward and cause light rays to diverge. This results in a wider field of view, making them ideal for use in vehicle side mirrors and security applications. The images formed by convex mirrors are smaller and appear farther away than they actually are. Both types of mirrors adhere to the law of reflection, where the angle of incidence equals the angle of reflection.

Mirrors
A mirror plays a fascinating role in reflecting light, resulting in the formation of images. When an object is placed in front of a mirror, we observe its
reflection. Incident rays originate from the object, and the reflected rays converge or appear to diverge to create the image. Images formed by mirrors
can be classified as real images or virtual images. Real images are produced when light rays converge and intersect, while virtual images are formed
when light rays appear to diverge from a point. Ray diagrams are employed to comprehend the behaviour of light and better understand image
formation. These diagrams use lines with arrows to represent incident and reflected rays, allowing us to trace their paths and interactions with the
mirror. By interpreting ray diagrams, we gain valuable insights into how images are formed and a deeper understanding of how our eyes perceive
objects through reflection.

Plane Mirror vs Spherical Mirror


A plane mirror is a flat, smooth reflective surface with a clear, undistorted reflection. When an object is reflected in a plane mirror, it always forms a
virtual image that is upright, of the same shape and size as the object.
On the other hand, a spherical mirror exhibits a consistent curvature. It possesses a constant radius of curvature (In the context of spherical mirrors,
the radius of curvature refers to the distance between the centre of the spherical mirror and its curved surface.). Spherical mirrors can create both
real and virtual images, depending on the position of the object and the mirror. Spherical mirrors are further categorized into concave and convex
mirrors, each with distinct properties and image formation
characteristics.

In the upcoming sections, we will detail the characteristics of


convex and concave mirrors, along with a comprehensive
understanding of the images formed by these mirrors when the
object is placed at various positions with the help of ray
diagrams.

Characteristics of Concave and Convex Mirrors

Concave Mirror Definition


A concave mirror is a curved mirror where the reflecting surface is on the inner side of the curved shape. It has a surface that curves inward, resembling
the shape of the inner surface of a hollow sphere. Concave mirrors are also converging mirrors because they cause light rays to converge or come
together after reflection. Depending on the position of the object and the mirror, concave mirrors can form both real and virtual images.
Characteristics of Concave Mirrors

Converging Mirror: A concave mirror is often referred to as a converging mirror because when light rays strike and reflect from its reflecting
surface, they converge or come together at a specific point known as the focal point. This property of concave mirrors allows them to focus light to a
point.

Magnification and Image Formation: When a concave mirror is placed very close to the object, it forms a magnified, erect, and virtual image.
The image appears larger than the actual object and is upright. The virtual image is formed as the reflected rays appear to diverge from a point behind
the mirror.

Changing Distance and Image Properties: As the distance between the object and the concave mirror increases, the size of the image decreases.
Eventually, at a certain distance, the image transitions from virtual to real. In this case, a real and inverted image is formed on the opposite side of
the mirror.


Versatile Image Formation: Concave mirrors have the ability to create images that can vary in size, from small to large, and in nature, from real
to virtual. These characteristics make concave mirrors useful in various applications such as telescopes, shaving mirrors, and reflecting headlights.

Convex Mirror Definition


A convex mirror is a curved mirror with the reflecting surface on the curved shape’s outer side. It has a surface that curves outward, resembling the
shape of the outer surface of a sphere. Convex mirrors are also known as diverging mirrors because they cause light rays to diverge or spread out after
reflection. Convex mirrors always form virtual, erect, and diminished images, regardless of the object’s position. They are commonly used in
applications requiring a wide field of view, such as rear-view mirrors and security mirrors.
Characteristics of Convex Mirrors

Diverging Mirror: A convex mirror is commonly referred to as a diverging mirror because when light rays strike its reflecting surface, they diverge
or spread out. Unlike concave mirrors, convex mirrors cause light rays to diverge from a specific focal point.

Virtual, Erect, and Diminished Images: Regardless of the distance between the object and the convex mirror, the images formed are always
virtual, erect, and diminished. The image appears upright, smaller than the actual object, and behind the mirror. When traced backwards, the virtual
image is formed by the apparent intersection of diverging rays.

Wide Field of View: One of the significant characteristics of convex mirrors is their ability to provide a wide field of view. Due to the outwardly
curved shape, convex mirrors can reflect a broader area compared to flat or concave mirrors. This property makes them useful when a larger
perspective is required, such as in parking lots, intersections, or surveillance systems.

Image Distance and Size: Convex mirrors always produce virtual images closer to the mirror than the object. The image formed by a convex mirror
appears diminished or smaller than the object. This reduction in image size allows a greater expanse of the reflected scene to be captured within the
mirror’s field of view.

Image Formation by Spherical Mirrors


By understanding some crucial guidelines for ray incidence on concave and convex mirrors, we can predict and analyze the behaviour of light rays,
aiding in constructing accurate ray diagrams and comprehending image formation processes.

Guidelines for rays falling on the concave and convex mirrors


Oblique Incidence: When a ray strikes a concave or convex mirror at its pole, it is reflected obliquely, making the same angle with the principal axis as the incident ray. This follows from the law of reflection: the angle of incidence is equal to the angle of reflection, maintaining the symmetry of the incident and reflected rays.
Parallel Incidence: When a ray parallel to the principal axis strikes a concave or convex mirror, the reflected ray follows a specific path. In the case of a concave mirror, the reflected ray passes through the focus on the principal axis. For a convex mirror, the reflected ray appears to diverge from the focus located behind the mirror.
Focus Incidence: When a ray passes through the focus of a concave mirror (or is directed towards the focus of a convex mirror) and strikes the mirror, the reflected ray will be parallel to the principal axis. This characteristic holds for both concave and convex mirrors and is crucial in determining the path of reflected rays.
Centre of Curvature Incidence: A ray passing through the centre of curvature of a spherical mirror will retrace its path after reflection. This
principle illustrates that when a ray hits the mirror’s centre of curvature, it undergoes reflection and follows the exact same path in the opposite
direction.


IMAGE FORMATION BY CONCAVE MIRROR


The object’s position in relation to a concave mirror affects the type and
characteristics of the image formed. Different scenarios result in different types of
images:

OBJECT AT INFINITY

A real and inverted image is formed at the focus when the object is placed at
infinity. The size of the image is significantly smaller than that of the object.

OBJECT BEYOND THE CENTRE OF CURVATURE


When the object is positioned beyond the centre of curvature, a real image is formed
between the centre of curvature and the focus. The size of the image is smaller
compared to that of the object.

Object at the Centre of Curvature


When the object is placed at the centre of curvature, a real, inverted image is formed at the centre of curvature. The size of the image remains the same as that of
the object.

OBJECT BETWEEN THE CENTRE OF CURVATURE AND FOCUS


If the object is located between the centre of curvature and the focus, a real, inverted image is formed beyond the centre of curvature. The size of the image is larger compared
to that of the object.

OBJECT AT THE FOCUS


When the object is positioned exactly at the focus, a real image is formed at infinity.
The size of the image is much larger than that of the object.

Object between the Focus and the Pole


Placing the object between the focus and the pole results in the formation of a virtual
and erect image. The size of the image is larger compared to that of the object.

CONCAVE MIRROR IMAGE FORMATION SUMMARY


IMAGE FORMATION BY CONVEX MIRROR


A convex mirror produces specific characteristics in the images
formed. Let’s explore the types of images formed by a convex mirror.

OBJECT AT INFINITY
When the object is positioned at infinity, a virtual image is formed
at the focus of the convex mirror. The size of the image is
significantly smaller than that of the object.

OBJECT AT A FINITE DISTANCE


When an object is placed at a finite distance from the mirror, a virtual
image is formed between the pole and the focus of the convex mirror.
The size of the image is smaller than that of the object.
It’s important to note that in both cases, the images formed by a
convex mirror are always virtual and erect. The nature of a convex
mirror causes light rays to diverge upon reflection, creating virtual
images with reduced sizes. Understanding these principles helps us
accurately predict the characteristics of images formed by convex
mirrors.

REFLECTION LAWS FOR CURVED SURFACES


Reflection from curved surfaces still obeys the laws of reflection:
Law 1: The angle of incidence (angle between the incident ray and the normal) equals the angle of reflection (angle between the reflected ray and
the normal).
Law 2: The incident ray, the normal to the surface at the point of incidence, and the reflected ray all lie in the same plane.

Concepts of Curved Mirrors


Focal Point (F): The point where parallel light rays meet (for concave mirrors) or appear to diverge from (for convex mirrors) after reflection.
Center of Curvature (C): The center of the sphere of which the curved mirror is a part. For concave mirrors, it is in front of the mirror; for convex
mirrors, it is behind.
Radius of Curvature (R): The distance between the mirror’s surface and its center of curvature.
Principal Axis: A straight line passing through the center of curvature and the focal point, perpendicular to the surface of the mirror.

Real Image vs. Virtual Image


Real Image: Formed when reflected rays converge and meet at a point; can be projected on a screen (seen with concave mirrors).
Virtual Image: Formed when reflected rays diverge and appear to originate from a point behind the mirror; cannot be projected on a screen (seen
with both concave and convex mirrors).


Applications of Curved Mirrors


Concave Mirrors:
One of the most common uses is in automobile headlights, where they reflect and direct light to illuminate the road effectively. Similarly, they are
found in torchlights and railway engines, enhancing visibility in low-light conditions. In personal care, concave mirrors serve as makeup mirrors,
allowing users to see enlarged reflections for precise application. They are also utilized in shaving mirrors, providing a clear view for grooming.
Beyond personal use, concave mirrors play a crucial role in scientific instruments like telescopes and microscopes, where they enhance image clarity
by focusing light. Additionally, concave mirrors are employed in solar concentrators and searchlights, demonstrating their versatility in both everyday and specialized applications.
Concave mirrors are used to form optical cavities, which are important in laser construction. Some dental mirrors use a concave surface to provide a
magnified image. The mirror landing aid system of modern aircraft carriers also uses a concave mirror.
Convex Mirrors
Convex mirrors are used in vehicle side mirrors because they offer a wider field of view, helping drivers see more of the road. A convex mirror also lets motorists see around a corner.
Commonly used in security mirrors in stores or parking lots for better surveillance due to the wide-angle reflection. The passenger-side mirror on
a car is typically a convex mirror. In some countries, these are labeled with the safety warning "Objects in mirror are closer than they appear", to
warn the driver of the convex mirror's distorting effects on distance perception. Convex mirrors are preferred in vehicles because they give an upright
(not inverted), though diminished (smaller), image and because they provide a wider field of view as they are curved outwards.
These mirrors are often found in the hallways of various buildings (commonly known as "hallway safety mirrors"),
including hospitals, hotels, schools, stores, and apartment buildings. They are usually mounted on a wall or ceiling where hallways intersect each
other or where they make sharp turns. They are useful for people to look at any obstruction they will face on the next hallway or after the next turn.
They are also used on roads, driveways, and alleys to provide safety for road users where there is a lack of visibility, especially at curves and turns.
Convex mirrors are used in some automated teller machines as a simple and handy security feature, allowing the users to see what is happening behind
them. Similar devices are sold to be attached to ordinary computer monitors. Convex mirrors make everything seem smaller but cover a larger area of
surveillance.

Determine the focal length of concave mirrors using a variety of methods


Determining the Focal Length of Concave Mirrors
The focal length (f) of a concave mirror is the distance from the
mirror's surface to its focal point, where parallel light rays converge
after reflection. Several methods can be used to determine the focal
length of concave mirrors, including direct experimentation with light
sources, practical observation of image formation, and mathematical
calculations.

Using a Distant Object (Sunlight or Distant Light Source)


Method Overview: This is one of the simplest methods and relies on
the fact that light rays from distant objects (such as the sun) are nearly parallel when they reach the mirror. These parallel rays converge at the focal
point after reflection.


Procedures:
Take the concave mirror outside on a sunny day or use a distant artificial light source.
Position a screen or white paper in front of the mirror.
Adjust the position of the screen until a sharp, bright spot of light (the focused rays) appears on the screen. This point is where the parallel rays
converge.
Measure the distance from the mirror’s surface to the screen. This distance is the focal length (f).
Notes: This method works well for objects far away because the incoming rays are nearly parallel.
The sharpness of the focused spot helps in accurately determining the focal length. Typically used to determine the focal length of mirrors used in
telescopes or headlights.

Using the Image of a Nearby Object (Object-Image Distance Method)

Method Overview: This method uses an object at a finite distance from the mirror and applies the mirror formula to calculate the focal length based on the object distance and image distance. Mirror formula: 1/f = 1/u + 1/v, where f is the focal length, u is the object distance (distance from the object to the mirror), and v is the image distance (distance from the image to the mirror).

Procedures:
1. Place the lamp-box well outside the approximate focal length
2. Move the screen until a clear inverted image of the crosswire is obtained.
3. Measure the distance u from the crosswire to the mirror, using the metre rule.
4. Measure the distance v from the screen to the mirror.
5. Calculate the focal length of the mirror using 1/f = 1/u + 1/v.
6. Repeat this procedure for different values of u.
7. Calculate f each time and then find an average value.
Notes: This method requires careful measurement of the object and image distances.
The object should not be placed too close to the focal point to avoid complications in obtaining a sharp image.
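Steps 5–7 can be carried out with a short helper like the one below. It is a sketch only: the (u, v) readings shown are hypothetical, not measurements from the text, and the function name is arbitrary.

```python
def focal_length(u_cm: float, v_cm: float) -> float:
    """Focal length from one (u, v) pair using the mirror formula 1/f = 1/u + 1/v."""
    return 1.0 / (1.0 / u_cm + 1.0 / v_cm)

# Hypothetical (u, v) readings in cm from repeating the experiment:
readings = [(30.0, 20.0), (40.0, 17.3), (25.0, 23.0)]

f_values = [focal_length(u, v) for u, v in readings]
f_mean = sum(f_values) / len(f_values)
print("f for each pair (cm):", [round(f, 1) for f in f_values])
print(f"mean focal length ≈ {f_mean:.1f} cm")
```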

Using the Focal Length Formula and Parallax Method


Method Overview: This method uses a bright object and the absence of parallax between the object and its image to locate the centre of curvature of the mirror, from which the focal length is obtained.
Procedures:
a) Place a bright object (such as a light bulb or candle) at a measured distance in front of the concave mirror.
b) Stand behind the object and look into the mirror. Move your head slightly from side to side.
c) Adjust the position of the object until there is no parallax between the object and its image (they appear to coincide and move together); the object is then at the centre of curvature.
d) Measure the distance from the mirror to the object; this distance is the radius of curvature r, and the focal length is f = r/2.

Notes: This method works well for concave mirrors with shorter focal lengths.
Parallax refers to the apparent shift in the relative positions of two objects when viewed from different angles. When the parallax between the object and its image is zero, the two coincide, and the object is located at the centre of curvature.


Using Ray Diagrams and Graphical Methods


Method Overview: This method is based on drawing ray diagrams to scale and using the properties of concave mirrors to graphically find the focal
point.
Procedures:
a) Draw a principal axis on a sheet of paper.
b) Mark the centre of curvature and the mirror on the axis, and draw the object to scale.
c) Draw two rays from the top of the object: one ray parallel to the principal axis, which reflects through the focal point, and another ray passing through the focal point, which reflects parallel to the principal axis.
d) Where the reflected rays intersect is where the image is formed.
e) Measure, to scale, the distance from the mirror to the focal point to determine the focal length.
Notes: This is a theoretical method that helps visualize how light behaves with concave mirrors.
It’s useful for learners to understand ray diagrams and the relationship between object distance and image formation.
Using an Optical Bench Setup (Laboratory Method)
Method Overview:
This method uses precise instruments in a laboratory setting to measure focal length more accurately.
Equipment: Optical bench (a straight, graduated track for positioning mirrors and objects), Concave mirror, Light source, Measuring instruments (ruler
or caliper).

Procedures:
a) Place the concave mirror on the optical bench at a fixed position.
b) Position an object (light source or object with clear edges) at a specific distance from the mirror.
c) Move a screen along the optical bench to capture the sharp image.
d) Record the object distance and image distance.
e) Use the mirror formula to calculate the focal length.
Notes: This method is precise and commonly used in physics labs. The optical bench allows for fine adjustments and accurate measurements.
Conclusion:
The focal length of a concave mirror can be determined using a variety of methods, each suited to different contexts:
The distant object method is simple and effective for sunlight or distant sources.
The object-image distance method and parallax method provide more accurate results for nearby objects.
Ray diagrams help with conceptual understanding, while the optical bench method is precise and ideal for controlled experiments in a lab setting. By using one or more of these methods, learners can gain a deep understanding of how concave mirrors focus light and how focal length plays a critical role in optical applications.

Construction of ray diagrams to scale.


Example
An object 4 cm high is placed 30 cm from a concave mirror of focal length 10 cm. By construction, find the position, nature and size of the image (scale 1:5).
(Graph: scale ray diagram for the construction)
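The result of the scale construction can be cross-checked with the mirror formula 1/f = 1/u + 1/v met earlier in this unit. The short sketch below is added for checking only and is not part of the original example; it confirms a real, inverted image 15 cm from the mirror and 2 cm high.

```python
# Cross-check of the construction example: object 4 cm high, u = 30 cm, f = 10 cm (concave).
f, u, h_o = 10.0, 30.0, 4.0

v = 1.0 / (1.0 / f - 1.0 / u)    # rearranged from 1/f = 1/u + 1/v
m = v / u                        # magnification
h_i = m * h_o                    # image height

print(f"image distance v = {v:.1f} cm (real, formed in front of the mirror)")
print(f"magnification = {m:.2f}, image height = {h_i:.1f} cm (inverted, diminished)")
```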


Questions
1. An object 4 cm high is placed 2.4 cm from a concave mirror of focal length 8 cm. Draw a ray diagram to find the position, size and nature of the image. (Scale: 1 cm = 2 cm)
2. An object of height 10 cm is placed at a distance of 60 cm from a convex mirror of focal length 20 cm. By scale drawing, find the image position, height, nature and magnification. (Scale: 1 cm = 5 cm)
MAGNIFICATION
This is the ratio of the image height to the object height: M = hI / ho, where hI is the image height and ho is the object height.
OR
This is the ratio of the image distance from the mirror to the object distance from the mirror: M = v / u, where v is the image distance and u is the object distance.
Example 1
An object 10 cm high is placed at a distance of 20 cm from a convex mirror of focal length 10 cm. Draw a ray diagram to locate the position of the image, and calculate the magnification. (Scale: 1 cm = 5 cm)
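The ray-diagram answer can also be cross-checked numerically. The sketch below is an added illustration, not part of the original example; it uses the mirror formula together with the sign convention given later in this unit, taking the focal length of the convex mirror as negative, so a negative image distance indicates a virtual image behind the mirror.

```python
# Example 1 checked with the mirror formula (convex mirror: f taken as negative).
f, u, h_o = -10.0, 20.0, 10.0    # focal length, object distance, object height (cm)

v = 1.0 / (1.0 / f - 1.0 / u)    # from 1/f = 1/u + 1/v  =>  v ≈ -6.67 cm (virtual)
m = abs(v) / u                   # magnification M = v/u (magnitudes)
h_i = m * h_o                    # image height

print(f"image distance v ≈ {v:.2f} cm (behind the mirror)")
print(f"magnification M ≈ {m:.2f}, image height ≈ {h_i:.2f} cm (erect, diminished)")
```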

MEASURING FOCAL LENGTH OF A CONCAVE MIRROR


METHOD 1: Distant object method (rough method)
Hold a concave mirror so that it faces a distant object. Hold a white screen in front of the mirror so that it receives the rays reflected from the mirror without blocking the light travelling from the object to the mirror. Move the screen to different distances from the mirror until a sharp image is formed on the screen. Measure the distance from the screen to the mirror with a metre rule.
Repeat the experiment several times and find the average
value of the distance between the screen and the mirror.
This is the focal length (f) of the mirror

METHOD 2: Using illuminated object at c


With the mirror facing the illuminated object, adjust the distance between them until a sharp image is formed on the screen alongside the object. Measure the distance between the object and the mirror. Repeat the experiment several times and find the average value. This average is the radius of curvature, so the focal length is obtained from r = 2f, i.e. f = r/2.

MIRROR FORMULA METHOD


Two pins are required, one acts as an object pin and the other as a search pin.


The object pin is placed in front of the mirror between F and C so that a magnified real image is formed beyond C. The search pin is then placed so that there is no parallax between it and the real image, as shown in the figure above. The distance of the object pin from the mirror, u, and that of the search pin, v, are measured. Several pairs of object and image distances are obtained in this way and the results recorded in a suitable table. A mean value of the focal length f is obtained from the mirror formula 1/f = 1/u + 1/v.
Sign convention
 All distances are measured from the pole of the mirror
 Distances of real objects and images are positive
 Distance of virtual objects and images are negative
 A concave mirror has a real focus therefore focal length is positive
 A convex mirror has a virtual focus therefore focal length is negative
By scale drawing (using graph paper)
1. Find the focal length of a concave mirror from the following results:
a) Object distance u = 30 cm, image distance v = 20 cm
b) Object distance u = 8 cm, image distance v = 24 cm
2. Find the image distance when an object is placed:
a) 12 cm from a concave mirror of focal length 8 cm
b) 10 cm from a convex mirror of focal length 10 cm.
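As an illustration of how the sign convention is applied in such calculations (a sketch added to these notes; the function names are arbitrary), the snippet below works through parts 1(a) and 2(b); the remaining parts follow the same pattern.

```python
def focal_length(u: float, v: float) -> float:
    """1/f = 1/u + 1/v with signed distances (real: positive, virtual: negative)."""
    return 1.0 / (1.0 / u + 1.0 / v)

def image_distance(f: float, u: float) -> float:
    """Solve 1/f = 1/u + 1/v for v, with signed distances."""
    return 1.0 / (1.0 / f - 1.0 / u)

# 1(a): real object 30 cm and real image 20 cm from a concave mirror.
print("1(a) f =", round(focal_length(30, 20), 1), "cm")               # 12.0 cm

# 2(b): convex mirror, so f = -10 cm; object 10 cm in front of it.
print("2(b) v =", round(image_distance(-10, 10), 1), "cm (virtual)")  # -5.0 cm
```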

2.1 MAGNETS AND MAGNETIC FIELDS


Learning outcomes
a) Know that a small number of materials are magnetic, but most are not (k)
b) Know how magnets can be made and destroyed (k, s)
c) Understand the behaviour of magnets and magnetic fields(u)
d) Know that the earth is a magnet and how a compass is used to determine direction (k, s)

Introduction
The force which causes attraction or repulsion by a magnet is called magnetic force. A magnet has two types of poles, a north pole and a south pole.
Like poles repel while unlike poles attract. Some materials are attracted by a magnet while others are not. Those which are attracted are called
magnetic materials while those not attracted are called non-magnetic materials. Magnetic and non-magnetic materials are classified based on their
interaction with a magnetic field. Magnetic materials are those that can be attracted to a magnet, while non-magnetic materials do not exhibit this
property. The distinction between these two types is due to the atomic structure and the alignment of electrons within the material, which determines
whether or not it responds to a magnetic field.


Magnetic Materials

Magnetic materials are primarily composed of metals that have magnetic properties. These materials contain regions called magnetic domains, which
are groups of atoms with aligned magnetic moments. In the presence of a magnetic field, these domains align themselves in the direction of the field,
allowing the material to be attracted to the magnet. Common magnetic materials are iron, nickel, and cobalt. Magnetic materials are often used in
applications that require the manipulation of magnetic fields, such as in electromagnets, motors, and electronic devices. Magnetic materials are
essential components in various technologies, ranging from electronics to renewable energy systems. These materials can be classified into three main
categories: ferromagnetic, paramagnetic, and diamagnetic.

Ferromagnetic materials, such as iron, cobalt, and nickel, exhibit strong magnetic properties and can be permanently magnetized. They are commonly
used in the production of permanent magnets and magnetic storage media. Paramagnetic materials, on the other hand, have unpaired electrons that
create a weak magnetic moment. They are attracted to magnetic fields but do not retain magnetization once the external field is removed. Diamagnetic
materials, in contrast, are characterized by their weak repulsion to magnetic fields, resulting from the paired electrons that create no net magnetic
moment.
Introduction to Magnets and Magnetic Fields
Magnets and magnetic fields are fundamental concepts in physics, deeply connected to the force of magnetism, one of the four fundamental forces in
nature. Magnets are objects that produce a magnetic field, an invisible area of influence that exerts forces on certain materials like iron, nickel, cobalt,
and other magnets. Magnetic fields are essential in understanding how magnets interact with their environment and are responsible for many
technological applications, such as electric motors, magnetic resonance imaging (MRI), and data storage devices.
Magnets are fascinating objects that produce invisible magnetic fields, which can attract ferromagnetic materials like iron, nickel, and cobalt. These
magnetic fields arise whenever electric charges are in motion. The strength of a magnetic field increases with the amount of charge in motion,
demonstrating a direct relationship between electric currents and magnetism.
Magnetic fields exert forces on other magnetic materials without requiring physical contact, allowing magnets to attract or repel each other. This
phenomenon is not magic; it is a fundamental aspect of physics that governs how magnets interact. The lines of force in a magnetic field exit from one
pole of the magnet and enter through the other, creating a continuous loop.
The Earth's magnetic field, generated in its outer core, also protects our planet from solar radiation, highlighting the importance of magnetism in
both technology and nature.

Properties of Magnets
A magnet has two distinct poles: a north pole and a south pole. These poles are where the magnetic force is strongest. The behavior of a magnet is
governed by its poles, following the basic rule of magnetism: like poles repel each other, while opposite poles attract. This means that the north pole
of one magnet will repel another north pole but attract a south pole. Interestingly, if a magnet is cut into smaller pieces, each piece will still retain a
north and south pole, illustrating that magnetic poles always exist in pairs.

Determining the polarity of magnets


Using a compass. Bring one end of the magnet near the compass: the north-seeking end of the compass needle is attracted towards the magnet's south pole and repelled by its north pole, so the direction in which the needle swings identifies the polarity of that end. This method is effective and requires no specialized equipment.


Using a marked magnet. By bringing the north pole of a labeled magnet close to an unmarked magnet, you can observe the attraction or repulsion.
If the two magnets attract, the end of the unmarked magnet is the south pole; if they repel, it is the north pole.

The Nature of Magnetic Fields


A magnetic field is the region around a magnet where its magnetic forces
can be felt. This field is represented visually by magnetic field lines, which
depict the direction and strength of the field. These lines emerge from the
north pole of a magnet and curve around to enter the south pole, forming
continuous loops. The density of these lines indicates the strength of the
magnetic field: the closer the lines, the stronger the field. The lines are most
concentrated near the poles, where the magnetic forces are strongest, and
they spread out as they move farther away from the magnet.

Magnets possess distinct properties that define their behavior and


applications.
The primary property of magnets is attraction; they can attract ferromagnetic
materials such as iron and nickel. Additionally, magnets exhibit repulsion, where
like poles (north-north or south-south) push each other away, while opposite poles
(north-south) attract. This duality is crucial in various applications, from simple toys
to complex machinery.
When suspended freely, a magnet aligns itself along the Earth's magnetic field, with
its north pole pointing towards the magnetic north. This property is utilized in
compasses, aiding navigation. Magnets are widely used in everyday life, including
in electric motors, generators, and magnetic storage devices. Their unique
properties not only enhance technology but also open avenues for innovative applications, such as in renewable energy solutions.

How Magnetic fields are produced


Magnetic fields are generated by moving electric charges, which is why they are closely associated with electricity. In a magnet, the movement of
electrons within the atoms, particularly their spin and orbital motion, creates the magnetic field. In larger-scale systems, such as electromagnets,
magnetic fields are produced when an electric current flows through a conductor, such as a coil of wire. The strength of this field can be increased by
increasing the current or by using a ferromagnetic material like iron as a
core.
Magnetic fields are generated whenever electric charges are in motion. This
phenomenon occurs at both macroscopic and microscopic levels. For
instance, the movement of electrons around an atomic nucleus creates a
magnetic field, while larger currents flowing through wires also produce
magnetic fields.
When electric current flows through a conductor, such as a wire, it
generates a magnetic field that encircles the wire. This effect can be

amplified by coiling the wire into loops or coils, which concentrates the magnetic field lines and enhances the overall strength of the magnetic field
produced.
Additionally, intrinsic magnetic moments of elementary particles, such as electrons, contribute to the creation of magnetic fields. These moments arise
from the particles' spin and charge, further illustrating the fundamental relationship between electricity and magnetism. Thus, magnetic fields are a
direct result of moving charges and the properties of particles at the quantum level.

Types of Magnetic Materials


Magnetic materials can be classified into several categories based on their
magnetic properties. The primary types include diamagnetic, paramagnetic,
ferromagnetic, and ferrimagnetic materials. Ferromagnetic materials, such
as iron, cobalt, and nickel, have strong magnetic properties because their
atomic magnetic moments align to form magnetic domains. When exposed to
an external magnetic field, these domains align in the same direction, creating
a strong overall magnetic field. Paramagnetic materials, such as aluminum
and platinum, have weak and temporary alignment with the magnetic field,
while diamagnetic materials, like copper and bismuth, are repelled by the field due to induced opposing magnetic moments.

Diamagnetic materials, such as copper and bismuth, exhibit a weak repulsion to magnetic fields, making them the least magnetic. In contrast,
paramagnetic materials, like aluminum and platinum, are weakly attracted to magnetic fields and only exhibit magnetism in the presence of an external
field. Ferromagnetic materials, including iron, cobalt, and nickel, display strong magnetic properties and can retain magnetization even after the
external field is removed. Ferrimagnetic materials, found in compounds like magnetite, have opposing but unequal magnetic moments that result in a net
magnetization. Understanding these types of magnetic materials is crucial for
various applications, from electronics to data storage technologies.
Magnetic Fields in Everyday Life
From the simple fridge magnets that hold cherished photos to the powerful
magnets used in MRI machines, their utility is both diverse and essential. These
magnetic forces are generated by moving electric charges and magnetic materials,
impacting objects around them.
One of the most familiar uses of magnets is in compasses, which rely on Earth's
geomagnetic field to guide navigation. This natural magnetic force allows us to
orient ourselves and find our way, demonstrating the importance of magnetism in
daily activities. Additionally, various household items, such as magnetic locks and hangers, utilize these forces for convenience and security. Moreover,
exposure to electromagnetic fields is a common aspect of modern life, with potential effects on our health. Understanding magnetic fields helps us
appreciate their significance, from practical applications to their influence on our well-being. Magnetic fields are present in many everyday scenarios.
For example, the Earth itself acts like a giant magnet, with a magnetic field generated by the motion of molten iron in its outer core. This magnetic
field protects the planet from harmful solar winds and enables navigation using compasses. Magnetic fields are also utilized in technology, such as in
electric motors, which convert electrical energy into mechanical energy, and in magnetic storage devices, where data is stored by aligning tiny
magnetic domains.


Applications of Magnets and Magnetic Fields


Magnets play a crucial role in various applications that permeate our daily lives. From small toys to heavy machinery, magnets are integral in devices
such as refrigerators, speakers, and magnetic locks. Their ability to attract or repel materials without direct contact makes them invaluable in numerous
sectors, including healthcare, where magnetic resonance imaging (MRI) utilizes strong magnetic fields to create detailed images of the body. In
engineering, solenoids and electromagnets are essential for controlling magnetic fields, enabling their use in transformers and electric motors. These
applications highlight the versatility of magnets in converting electrical energy into mechanical energy, which powers countless devices. Moreover,
advancements in quantum magnetism are paving the way for innovative technologies, enhancing our understanding of material properties at the
quantum level. As research continues, the potential applications of magnets and magnetic fields are set to expand, promising exciting developments
in technology and science. In transportation, magnetic levitation (maglev) trains use magnets to lift and propel the train, reducing friction and allowing
for high speeds. Magnets are also essential in renewable energy technologies, such as wind turbines, where they are used to generate electricity.

Magnetisation
Magnetization is a fundamental concept in magnetism, representing the density of magnetic dipole moments within a magnetic
material. It is quantified as the ratio of the magnetic moment to the volume of the material, indicating how strongly a material can be magnetized.
This property can arise from either permanent magnetic dipoles or induced dipoles when exposed to an external magnetic field. Different materials
exhibit varying responses to magnetization, categorized as diamagnetic, paramagnetic, and ferromagnetic. Diamagnetic materials are weakly repelled
by magnetic fields, while paramagnetic materials are weakly attracted. Ferromagnetic materials, on the other hand, can retain magnetization even
after the external field is removed, making them essential in various applications, from electric motors to data storage. Magnets can be created through
several methods that align the magnetic domains in a material, which are small regions where magnetic fields from individual atoms are aligned.
When these domains are organized in the same direction, the material becomes magnetized. The common ways magnets are made:
1. Magnetizing by Stroking
This involves taking a strong magnet and repeatedly
stroking it along the surface of a ferromagnetic material
(like iron or steel). The repeated stroking aligns the
magnetic domains in the material in the direction of the
stroking motion. Over time, this process turns the
material into a magnet. The strength of the resulting
magnet depends on the material's properties and the
consistency of the stroking motion.
This technique involves stroking a steel bar with a permanent magnet, which causes the magnetic domains within the steel to align. As these domains
align, a north (N) pole and a south (S) pole are induced in the steel bar, effectively turning it into a magnet. For instance, increasing the number of
strokes or using a stronger permanent magnet can lead to a more powerful magnet. Additionally, stroking in one direction consistently ensures that
the external magnetic field is effectively applied, further promoting alignment of the magnetic domains. This method is not only practical but also
serves as an excellent educational tool for understanding the principles of magnetism and the behavior of magnetic materials.

2. Magnetizing by Electrical Current


Electromagnets are created by passing an electric current through a coil of wire wound around a ferromagnetic core. The electric current generates a
magnetic field that aligns the domains in the core material. This method is widely used in industries because it allows for control over the strength of
the magnet by adjusting the current. Electromagnets are commonly used in electric motors, generators, and lifting equipment.


Action: When an electric current flows through a conductor, it generates a


magnetic field around it. This phenomenon can be observed by placing a
magnetic compass near a current-carrying wire, which will deflect due to the
magnetic field produced. To create a magnet using this method, a coil of wire is
often employed. When an electric current passes through the coil, it induces a
magnetic field, effectively turning the coil into an electromagnet. This method
is particularly effective for magnetizing ferromagnetic materials, as the
magnetic field aligns the material's internal dipoles, enhancing its magnetic
properties. Overall, the electrical method of magnetization is not only efficient
but also allows for the creation of stronger magnets, making it a preferred choice
in various applications, from industrial uses to everyday electronics.

3. Magnetizing by Induction
When a ferromagnetic material is placed in a strong magnetic field, it can become magnetized
through induction. The external magnetic field forces the domains in the material to align in the
same direction. This process can occur naturally, such as when a piece of iron comes into contact
with a strong magnet, or artificially in controlled environments.
Action : Magnetizing by induction is a process that transforms magnetic materials, such as iron
and steel, into magnets without direct contact with a magnetic source. This phenomenon occurs
when a ferromagnetic material is exposed to a magnetic field, causing its magnetic domains (small regions within the material that act like tiny magnets) to align in the direction of the external
field. For example, when a magnet is brought close to a nail, the magnetic field induces
magnetism in the nail, resulting in the attraction of small metal pins to it. This method is widely
used in creating artificial magnets and is fundamental in various applications, from household items to advanced technologies. Induction magnetization
is not limited to physical contact; even a strong magnetic field can induce magnetism in materials at a distance.

4. Magnetizing by Heating and Cooling


Certain materials can be magnetized by heating them above their Curie temperature (the temperature at which their magnetic properties are lost) and
then cooling them in the presence of a magnetic field. This method realigns the domains during the cooling process, resulting in a permanent magnet.
Action
Magnetizing by heating and cooling is a fascinating process that significantly affects the
strength and behavior of magnets. When a neodymium magnet is heated above its Curie
point, it loses its magnetism due to increased thermal agitation, which disrupts the
alignment of its magnetic domains. Upon cooling, these domains can realign, allowing
the magnet to regain some of its magnetic properties. Temperature plays a crucial role
in the performance of permanent magnets. Heating a magnet can weaken its magnetic
field, as the increased kinetic energy causes the molecules to move more rapidly,
disrupting their alignment. If the temperature exceeds the magnet's maximum operating
limit but remains below the Curie temperature, irreversible performance losses may
occur.


Demagnetisation
Magnets can lose their magnetism or become demagnetized through processes that disrupt the alignment of their magnetic domains.
1. Heating Beyond the Curie Temperature
Magnets can be significantly affected by temperature, particularly when heated beyond their Curie temperature. The Curie temperature is the point at
which a magnet's magnetic domains become disordered due to increased thermal energy. As the temperature rises, the kinetic energy of the magnet's
molecules increases, causing them to vibrate more vigorously. This disruption leads to a weakening of the magnet's strength and magnetic field.
When a magnet is heated above its Curie temperature, it undergoes irreversible changes. The magnetic domains, which are responsible for the magnet's
alignment and strength, become misaligned and lose their ability to maintain a magnetic field. Even if the magnet is cooled back down, it may not
regain its original magnetic properties, as the domains may not realign properly.

2. Mechanical Shock
Striking a magnet repeatedly or subjecting it to strong mechanical vibrations can disorient the aligned magnetic domains. This process diminishes the
magnet’s strength over time, and with enough shocks, the magnet can become fully demagnetized. Dropping a magnet onto a hard surface is a common
example of how mechanical shock can destroy a magnet. Magnets, particularly permanent magnets, can be surprisingly vulnerable to mechanical
shock, which can lead to demagnetization. One of the primary causes of this phenomenon is the physical impact from dropping or striking the magnet.
Such shocks can disrupt the alignment of the magnetic domains within the material, leading to a loss of magnetism. Additionally, exposure to strong
external magnetic fields can also demagnetize a magnet. When a magnet is subjected to another magnetic field, it can interfere with its own magnetic
alignment, causing a reduction in strength. Moreover, temperature fluctuations can exacerbate the effects of mechanical shock. High temperatures can
weaken the magnet's structure, making it more susceptible to damage from physical impacts.

3. Exposure to a Strong Opposing Magnetic Field


When a magnet is exposed to a strong external magnetic field that opposes its own field,
the domains can become misaligned, leading to partial or complete demagnetization. This
principle is used in demagnetization processes, such as in degaussing coils. Magnets can
be destroyed or demagnetized when exposed to a strong opposing magnetic field. This
phenomenon occurs because the external magnetic field can disrupt the alignment of the
magnetic domains within the magnet. When these domains, which are responsible for the
magnet's magnetic properties, become misaligned, the magnet loses its ability to attract
or repel other magnetic materials.
Several factors contribute to the demagnetization of permanent magnets. Besides exposure to strong magnetic fields, heat can also play a significant
role. When a magnet is heated above its Curie temperature, the thermal energy can cause the magnetic domains to become disordered, leading to a
loss of magnetism. Additionally, physical shocks or vibrations can further exacerbate this process by causing misalignment of the magnetic domains.


4. Corrosion or Oxidation
Magnets, especially those made of iron or iron-based alloys, can lose their
magnetism when exposed to moisture or corrosive environments. Magnets can be
significantly weakened by corrosion and oxidation, particularly when they are
made from ferromagnetic materials like iron. When exposed to moisture, these
magnets undergo oxidation, resulting in the formation of iron oxide, commonly
known as rust. This rust is non-magnetic, which means that as the magnet rusts,
it loses its attractive power and overall functionality. Corrosion not only diminishes
the magnetic strength but also affects the durability of the magnet. Rusty magnets
are more susceptible to further damage, which can lead to complete
demagnetization over time. Factors such as temperature fluctuations and physical
shocks can exacerbate this process, making it crucial to protect magnets from harsh environments. To prevent corrosion, it is advisable to apply
protective coatings or use coverings like rubber. Corrosion alters the material's structure and disrupts the alignment of magnetic domains. Protective
coatings, such as nickel plating, are often applied to prevent this. Proper storage and maintenance can significantly extend the life of magnets, ensuring
they retain their magnetic properties for longer periods.

5. Passing Electrical Current


If a magnet is placed in the vicinity of an alternating current (AC) circuit or
subjected to high-frequency electromagnetic waves, the domains can become
disoriented due to the rapidly changing magnetic field. This process is a common
cause of demagnetization in magnets used near electrical devices. Magnets can be
demagnetized through various methods, one of which involves passing an
electrical current through them. When current flows through a wire, it generates
a magnetic field that can interfere with the existing magnetic field of the magnet, potentially leading to its demagnetization.

6. Time and Natural Decay


Over long periods, magnets naturally lose their magnetism as the alignment of their magnetic domains gradually becomes randomized. This process, called magnetic decay, is accelerated in unstable materials and in environments with fluctuating temperatures or external magnetic fields. Elevated temperatures supply thermal energy that disturbs the domain alignment, and physical impacts, such as striking a magnet with a hammer or subjecting it to mechanical stress, misalign its internal structure and further diminish its strength. Unlike radioactive decay, which follows a predictable half-life, the loss of magnetism in a permanent magnet occurs at an unpredictable rate, making it difficult to determine exactly when the magnet will lose its effectiveness. Overall, while magnets are designed to retain their strength for extended periods, time and natural decay gradually weaken them, so careful handling and storage are needed to prolong their lifespan.


Magnetism and Magnetic Poles


Magnets are objects that produce a magnetic field, an invisible force that exerts attraction or repulsion on other materials. The magnetic field is generated by the movement of electric charges within the material, particularly the alignment of electron spins. Every magnet has two poles, called the north pole and the south pole; these are the regions where the magnetic force is strongest, and they dictate how magnets interact with each other. Opposite poles (north and south) attract each other, while like poles (north and north, or south and south) repel. This behavior is summarized in the basic rule of magnetism: like poles repel, and unlike poles attract. Magnetic poles always exist in pairs; even if a magnet is broken into smaller pieces, each piece will still have a north and a south pole. The Earth itself acts as a giant magnet, with its magnetic poles located near the Arctic and Antarctic Circles. Its magnetic field influences various natural processes, including the orientation of compasses and the behavior of charged particles in the atmosphere. Interestingly, Earth's magnetic poles are not fixed; they can shift over time due to changes in the planet's core dynamics.

The Magnetic Field


The magnetic field is the region around a magnet where its magnetic
forces can be detected. This field is represented by magnetic field
lines, which visually map the direction and strength of the field.
Magnetic field lines emerge from the north pole and loop around to
enter the south pole, forming closed loops. The density of these lines
indicates the strength of the field; they are closer together near the
poles, where the field is strongest, and spread out as they move
farther away. The field lines never cross, as each line represents a
distinct direction of force at any point.
A magnetic field is a physical field that exerts a magnetic influence on moving electric charges, electric currents, and magnetic
materials.
It is generated whenever electric charges are in motion, with the strength of the field increasing as more charge is set in motion. This invisible field
surrounds magnets and can exert forces on other magnetic materials without direct contact. Magnetic fields are vector fields, meaning they have both
direction and magnitude. They are observable in the vicinity of magnets, electric currents, or changing electric fields. The Earth itself has a magnetic
field, which plays a crucial role in influencing various geological and atmospheric processes.

Behavior of Magnets in a Magnetic Field


When a magnet is placed in a magnetic field, it experiences forces that depend on the orientation of its poles relative to the field. If the magnet’s north
pole is near the north pole of another magnet, they will repel each other, causing the magnet to move away. If the north pole of one magnet is near
the south pole of another, they will attract and move closer. These interactions demonstrate the alignment tendencies of magnetic dipoles in external


magnetic fields. Additionally, the strength of the magnetic interaction diminishes with distance,
following an inverse square law. Magnets exhibit fascinating behavior when placed in a magnetic
field, which can be categorized into permanent and temporary types.
Permanent magnets retain their magnetic properties indefinitely, while temporary magnets, such
as electromagnets, only exhibit magnetism when an electric current is applied. The strength and
direction of a magnet's influence are quantified by its magnetic moment, which is determined by
the alignment of electron spins within the material. When magnets interact with a magnetic field,
they can attract or repel other magnetic materials, such as iron, nickel, and cobalt. This interaction
is governed by the orientation of the magnetic field lines, which emerge from the magnet's north
pole and re-enter at the south pole. The behavior of magnets is not only limited to solid materials;
researchers have even discovered ways to manipulate the movement of bacteria using magnetic
fields, showcasing the diverse applications of magnetism in science.
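The inverse square relationship mentioned above can be illustrated numerically: if the separation between two interacting poles doubles, the force falls to a quarter of its original value. The short sketch below simply rescales a reference force and does not depend on any particular pole strength; the numbers and variable names are illustrative assumptions, not measurements.

```python
def scaled_force(reference_force: float, reference_distance: float, new_distance: float) -> float:
    """Rescale a force that obeys an inverse square law: F is proportional to 1/r^2."""
    return reference_force * (reference_distance / new_distance) ** 2

# Suppose two magnets exert a force of 0.8 N on each other when 2 cm apart.
f_at_2cm = 0.8
print(scaled_force(f_at_2cm, 0.02, 0.04))  # 0.2 N   (doubling the distance quarters the force)
print(scaled_force(f_at_2cm, 0.02, 0.06))  # ~0.089 N (tripling the distance gives one ninth)
```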

Magnetic Materials and Magnetization


Magnetic materials are characterized by the presence of atomic magnetic dipoles,
which can align to produce a net magnetism that is observable on a macroscopic scale.
The origin of magnetism is rooted in the orbital and spin motions of electrons, as well
as their interactions. This alignment of dipoles is crucial for the material to exhibit
magnetic properties. There are several classes of magnetic materials, including
ferromagnetic, paramagnetic, and diamagnetic substances. Ferromagnetic materials,
like iron, can become permanently magnetized, while paramagnetic materials exhibit
weak magnetism in the presence of an external magnetic field. Diamagnetic materials,
on the other hand, are repelled by magnetic fields.

Electromagnetic Interaction
Magnets and magnetic fields are closely related to electricity through the concept of
electromagnetism. A current-carrying conductor generates a magnetic field around it, and
the strength and direction of this field depend on the current's magnitude and direction.
This principle is the basis of electromagnets, which are temporary magnets created by
passing an electric current through a coil of wire. The strength of the magnetic field can
be amplified by increasing the current or by adding a ferromagnetic core inside the coil.
Electromagnetic interaction is a fundamental force that governs the behavior of electrically
charged particles. This interaction is crucial in determining how particles, such as electrons and protons, interact with one another through
electromagnetic fields. It is responsible for a variety of phenomena, including the formation of chemical bonds and the rigidity of solids, as it holds
electrons within atoms. The electromagnetic force is mediated by photons, which are particles of light. When charged particles interact, they can emit
or absorb photons, facilitating the transfer of energy and momentum.
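As a rough numerical illustration of how an electromagnet's strength grows with current, turn density, and a ferromagnetic core, the sketch below uses the ideal long-solenoid formula B = μ0·μr·n·I. The chosen numbers (1000 turns per metre, 2 A, and a relative permeability of about 200 for the core) are assumptions made only for illustration, and the formula ignores real effects such as core saturation.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def solenoid_field(turns_per_metre: float, current_a: float, relative_permeability: float = 1.0) -> float:
    """Magnetic flux density inside a long ideal solenoid: B = mu_0 * mu_r * n * I."""
    return MU_0 * relative_permeability * turns_per_metre * current_a

# Air-cored coil: 1000 turns per metre carrying 2 A.
print(solenoid_field(1000, 2))        # ~2.5e-3 T
# The same coil with an assumed iron core (relative permeability ~200, saturation ignored).
print(solenoid_field(1000, 2, 200))   # ~0.5 T
```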


Earth’s Magnetic Field


The Earth itself behaves like a giant magnet, with a magnetic field generated by the motion of molten iron and nickel in its outer core. This field
extends far into space and protects the planet from harmful solar wind and cosmic radiation. The Earth’s magnetic poles are not fixed and gradually
shift over time. The interaction of the Earth’s magnetic field with charged particles in the atmosphere gives rise to phenomena like the auroras
(Northern and Southern Lights).
The Earth's magnetic field, also known as the geomagnetic field, originates from the
outer core, where liquid iron generates electric currents. These currents create a
magnetic field that extends into space, forming the magnetosphere. This region acts as
a protective shield, safeguarding our planet from harmful solar and cosmic radiation.
The field resembles that of a giant bar magnet, with lines of force running from the magnetic pole in the southern hemisphere, around the planet, and back into the magnetic pole in the northern hemisphere.
Its dynamic nature is influenced by the movement of the fluid outer core around the
solid inner core, which maintains the field's strength and stability. Despite its
importance, the exact origins of the Earth's magnetic field remain a mystery, with
ongoing research exploring its complexities.

Non-Magnetic Materials
Non-magnetic materials are materials that do not exhibit any magnetic properties. In these materials, the magnetic domains do not align in the presence
of a magnetic field, or they do not have magnetic domains at all. Non-magnetic materials may include non-metals as well as certain metals. Non-metals
such as wood, plastic, and rubber are inherently non-magnetic because they do not contain any elements with magnetic properties. Some metals, like
copper, aluminium, and zinc, are also non-magnetic due to their atomic structure, which prevents the alignment of magnetic domains. These materials
are characterized by their inability to be magnetized by external magnetic fields, making them distinct from magnetic materials, which can be attracted
or repelled by magnets. The molecular structure of non-magnetic materials often results in balanced electron spins, which prevents them from
responding to magnetic fields. This property makes non-magnetic materials useful in various applications, such as electrical insulation and structural
components in devices where magnetic interference must be minimized. In everyday life, non-magnetic materials are prevalent and essential. For
instance, kitchen utensils made of plastic or aluminum, furniture made of wood, and packaging materials like paper all fall into this category.

Characteristics of Non-Magnetic Materials


One key characteristic of non-magnetic materials is their weak response to magnetic fields. For instance, gold is classified as a diamagnetic metal, showing only a slight repulsion when exposed to a magnet. This behavior is typical of many non-magnetic metals, which remain unaffected by external magnetic forces. In practical applications, non-magnetic materials are crucial in various industries, including electronics and construction, where magnetic interference must be minimized.
Examples of Non-Magnetic Materials:
 Copper: a non-magnetic metal commonly used in electrical wiring due to its excellent conductivity and lack of magnetic interference.
 Aluminium: a non-magnetic metal that is lightweight and often used in manufacturing and packaging.
 Zinc: a non-magnetic metal used for galvanization to prevent rusting in steel and iron.
 Wood: an organic, non-metallic material that is non-magnetic and often used in construction and manufacturing.
 Rubber: a flexible, non-magnetic material often used for insulation and protective coverings.

How to Identify Magnetic and Non-Magnetic Materials


The classification of materials into magnetic and non-magnetic can be done using a simple experiment with a magnet. By bringing a magnet close to
various materials, learners can observe whether the material is attracted to the magnet (indicating it is magnetic) or not (indicating it is non-magnetic).
Magnetic materials, such as iron, nickel, and cobalt, are attracted to magnets due to their atomic structure, which allows their electrons to align in a
way that creates a magnetic field. Common examples include steel and certain rare earth metals. To test if an object is magnetic, bring a magnet close
to it. If the object is attracted, it is magnetic; if not, it is likely non-magnetic. Non-magnetic materials, such as aluminum, copper, and plastic, do not
exhibit this attraction and cannot be magnetized.
Magnetic properties of Steel and Iron
The magnetic properties of iron and steel differ significantly because of their composition. Iron, a pure element, is easily magnetized and exhibits strong ferromagnetic properties, but it tends to lose its magnetism quickly once the external magnetic field is removed. This makes iron suitable for applications requiring temporary magnetism. In contrast, steel, an alloy composed primarily of iron and carbon, is harder to magnetize but has a higher retentivity than pure iron, so it keeps its magnetism after the magnetizing field is removed. The carbon and other alloying elements in steel also enhance its structural integrity, making it more durable while still maintaining magnetic properties. When bars of iron and steel of the same size are placed in contact with a pole of a permanent magnet, as shown below, and dipped into iron filings, more filings cling to the iron bar than to the steel bar. When the bars are removed from the magnet, nearly all the filings fall off the iron, while few if any fall off the steel. It can be concluded that the induced magnetism of iron is stronger than that of steel, but steel retains its magnetism far better once the magnet is removed. From this experiment, iron can be regarded as a soft magnetic material and steel as a hard magnetic material.
Magnetic materials are classified into two main categories: hard and soft magnetic materials, each with distinct properties and applications. Hard
magnetic materials, such as neodymium and ferrite, exhibit permanent magnetism, meaning they retain their magnetization even in the absence of an
external magnetic field. This characteristic makes them ideal for applications like permanent magnets in motors and generators. In contrast, soft
magnetic materials, such as iron and silicon steel, possess temporary magnetism. They can be easily magnetized and demagnetized, which allows
them to respond quickly to changing magnetic fields. This property is crucial in applications like transformers and inductors, where efficient magnetic
flux management is essential. The key difference lies in their coercivity: hard materials have high coercivity, while soft materials have low coercivity.

Differences Between Temporary and Permanent Magnets


Magnets are materials that produce a magnetic field, attracting certain materials such as iron, nickel, and cobalt. They are broadly classified into
temporary magnets and permanent magnets based on their ability to retain magnetic properties.
Retention of Magnetism
The primary distinction between temporary and permanent magnets lies in their ability to retain magnetism. Permanent magnets are materials that
retain their magnetic properties even after the external magnetizing force is removed. They are inherently magnetic due to their atomic structure,
where magnetic domains are aligned and remain fixed. Examples include neodymium, alnico, and ferrite magnets. In contrast, temporary magnets
exhibit magnetism only when exposed to an external magnetic field. Once the external field is removed, their magnetic properties quickly diminish or
disappear entirely. Soft iron is a common example of a temporary magnet.

Material Composition
The composition of the material plays a critical role in determining whether a magnet is temporary or permanent. Permanent magnets are typically
made from hard magnetic materials, such as steel, cobalt, and certain alloys, which have a high resistance to demagnetization. These materials require
a significant amount of energy to align their magnetic domains, making them stable over time. On the other hand, temporary magnets are made from
soft magnetic materials like soft iron. These materials have low resistance to demagnetization, allowing their magnetic domains to realign easily under
an external field but revert to a random state when the field is removed.


Strength and Stability


Permanent magnets generally provide consistent and stable magnetic fields over time, making them suitable for applications where a constant magnetic
force is required, such as in electric motors, generators, and loudspeakers. Their strength is a result of the permanent alignment of magnetic domains.
Temporary magnets, however, exhibit varying magnetic strength depending on the intensity of the external field. Their magnetic properties are
transient, which makes them ideal for applications like electromagnetic cranes, which require controlled magnetism.

Cost and Durability


Permanent magnets tend to be more expensive to produce due to the use of rare-earth elements or specialized alloys. However, they offer durability
and long-term performance, making them cost-effective in the long run for certain applications. Temporary magnets are usually less expensive, as
they are made from readily available materials. However, their limited use in retaining magnetism and susceptibility to external conditions like
temperature make them less durable for applications requiring sustained magnetic properties.
Applications
The differences in magnetic retention and material properties lead to distinct applications for each type of magnet. Permanent magnets are used in
devices requiring a stable magnetic field, such as compasses, electric motors, and magnetic storage media. They are also found in everyday objects
like refrigerator magnets and magnetic locks. Conversely, temporary magnets are used in applications where the magnetic field needs to be easily
turned on and off, such as in electromagnets used in electric bells, maglev trains, and scrapyard cranes. Their ability to lose magnetism quickly when
the external field is removed is advantageous in such contexts.

Dependence on External Factors


Temporary magnets are highly dependent on external factors such as the presence of an external magnetic field or electric current. Without these
factors, they cannot maintain their magnetic properties. In contrast, permanent magnets are less influenced by external conditions. However, their
performance can be slightly affected by extreme temperatures or physical damage, which may cause partial demagnetization.

Magnetic Domain Alignment


In permanent magnets, the alignment of magnetic domains is fixed and remains consistent without external influence. This alignment is achieved during manufacturing through processes such as heating the material above its Curie temperature and then allowing it to cool in a strong magnetic field, which locks the domains into alignment. In temporary magnets, the magnetic domains are randomly aligned in the absence of an external field and only temporarily align when subjected to magnetizing forces.
Magnetic screening (shielding)
Magnetic shielding, also known as magnetic screening, is a technique used to exclude
magnetic fields from specific areas by redirecting their field lines. This process is
essential in protecting sensitive electronic devices, such as alternating-current
measuring instruments, from stray external magnetic fields that could interfere with
their operation. By utilizing materials with high magnetic permeability, like steel or
iron, magnetic shielding effectively reroutes the magnetic field, preventing it from
penetrating the shielded area. There are two primary types of magnetic shielding:
passive and active. Passive shielding involves using materials to absorb and
redirect magnetic fields, while active shielding employs controlled currents to neutralize magnetic fields. Although static magnetic fields cannot be


completely blocked, they can be significantly reduced through these methods, enhancing the performance and longevity of devices like Hall thrusters
in aerospace applications.

Magnetic screening, also known as magnetic shielding, is the process of reducing or blocking the influence of a magnetic field
in a specific area by redirecting or absorbing the magnetic field lines. This technique is used to protect sensitive equipment, devices, or
spaces from unwanted magnetic interference, which can cause functional disruptions or inaccuracies. Magnetic shielding is achieved by using materials
that have high magnetic permeability, such as soft iron, mu-metal, or certain ferromagnetic alloys, which attract and channel magnetic field lines
away from the protected area.
The principle of magnetic shielding lies in the ability of high-permeability materials to provide a low-resistance path for magnetic field lines. When a
magnetic field encounters a shielding material, the field lines are concentrated within the material instead of passing through the shielded region.
This occurs because the shielding material has a higher capacity to "conduct"
magnetic flux compared to air or non-magnetic materials. As a result, the field
intensity inside the shielded area is significantly reduced or eliminated,
depending on the effectiveness of the shielding material and its configuration.
For instance, in medical imaging technologies like Magnetic Resonance Imaging
(MRI) machines, magnetic shielding ensures that external magnetic fields do not
distort the images or compromise the machine's performance. Similarly, in
scientific laboratories, sensitive equipment such as electron microscopes and
superconducting quantum interference devices (SQUIDs) require shielding to
maintain accurate measurements in environments exposed to fluctuating magnetic fields. The design of magnetic shielding depends on factors such
as the strength and frequency of the magnetic field, the level of shielding required, and the size and shape of the area to be protected. Common
shielding configurations include enclosures, sheets, or layers of high-permeability materials placed around the object or space. For low-frequency
magnetic fields, materials like mu-metal are often used because of their excellent shielding properties, while for high-frequency fields, conductive
materials like copper or aluminum are more effective due to their ability to reflect electromagnetic waves.

2.2 ELECTROSTATICS
Learning Outcomes
a) Understand everyday effects of static electricity and explain them in terms of the build-up and transfer of electrical charge (u,s)
b) Apply knowledge of electrostatic charge to explain the operation of devices like lightning conductors (u, s, v/a)
Introduction
Electrostatics is a branch of physics that focuses on the study of electric charges at rest. It examines the forces and interactions between stationary
electric charges, which can either attract or repel each other. This phenomenon occurs when there are no moving charges, establishing a static
equilibrium. One of the most common examples of electrostatics is the behavior of materials when rubbed together, such as a plastic rod rubbed with fur or a glass rod rubbed with silk. This friction generates static electricity, leading to observable effects like the attraction of small particles or the discharge of sparks. The fundamental principle governing electrostatics is Coulomb's Law, which quantifies the electrostatic force between two charges. In short, electrostatics is the study of charge at rest.
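Coulomb's Law can be written as F = k·q1·q2/r², where k ≈ 8.99 × 10⁹ N·m²/C². The short sketch below evaluates it for two small charges; the charge values and separation are arbitrary illustrative numbers, not data from this text.

```python
K = 8.99e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1_c: float, q2_c: float, separation_m: float) -> float:
    """Magnitude of the electrostatic force between two point charges (Coulomb's Law)."""
    return K * abs(q1_c * q2_c) / separation_m ** 2

# Two charges of 1 microcoulomb each, held 10 cm apart.
print(coulomb_force(1e-6, 1e-6, 0.1))  # ~0.9 N
```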


Structure of an atom
Atoms are composed of three primary particles: protons, neutrons, and electrons. The nucleus,
located at the center of the atom, contains protons, which carry a positive charge, and neutrons,
which are neutral. This dense core is surrounded by a cloud of electrons, which are negatively
charged and occupy various energy levels around the nucleus. Protons and neutrons are
relatively heavy compared to electrons, which are much lighter and exist in constant motion.
The number of protons in the nucleus defines the atomic number and determines the element's
identity. For instance, hydrogen has one proton, while carbon has six. Neutrons contribute to
the atomic mass and can vary in number, leading to different isotopes of an element.
Static electricity is a common phenomenon resulting from an imbalance of electric charges on the surface of materials. When two objects come into
contact and are then separated, electrons may transfer from one to the other, leading to one object becoming positively charged and the other negatively
charged. This charge buildup can create noticeable effects, such as the familiar shock felt when touching a doorknob after walking on a carpet. The
effects of static electricity can be both beneficial and hazardous. On one hand, it can be harnessed in applications like photocopiers and air purifiers.
On the other hand, static discharges can damage sensitive electronic components, ignite flammable materials, and disrupt industrial processes.
Moreover, static electricity can lead to physical phenomena, such as electrostatic attraction or repulsion, causing materials to stick together or repel
each other.

Effects of Static Electricity


Static electricity is the buildup of electric charges on the surface of materials, typically resulting from friction, separation, or induction. When two
objects come into contact and then separate, electrons can transfer from one surface to another, creating an imbalance in electric charge. This imbalance
often results in various observable and measurable effects, which can be categorized as beneficial, disruptive, or even hazardous, depending on the
context.
1. Attraction and Repulsion of Objects
One of the most noticeable effects of static electricity is the attraction or repulsion of objects. When an object becomes statically charged, it can attract
neutral objects or repel other charged objects with like charges. For instance, a statically charged balloon can stick to a wall or attract small pieces of
paper. This phenomenon occurs because the electric field created by the charged object induces opposite charges on nearby neutral objects, leading
to an attractive force. Similarly, objects with similar charges repel each other due to electrostatic forces. This effect is commonly observed in everyday
life, such as when combing hair on a dry day, where strands of hair repel each other and stand up due to static charge buildup. While often harmless,
such effects can also cause challenges in industrial processes, where unwanted static attraction can interfere with material handling and packaging.

2. Static Electricity Discharges


The sudden release of static electricity, known as an electrostatic discharge (ESD), is another significant effect. When a charged object comes into
proximity with a conductor or another object with a different charge, the electric field can cause electrons to jump across the gap, resulting in a
discharge. This is often experienced as a small shock when touching a doorknob after walking on a carpeted floor. While such discharges are usually
harmless to humans, they can have serious implications in certain environments. In industrial and technological settings, ESD can damage sensitive
electronic components, such as semiconductors, integrated circuits, and microchips. These devices are vulnerable because the high voltage associated
with ESD can cause permanent damage to their intricate structures. Consequently, static electricity control measures, such as grounding, ionization,
and antistatic materials, are widely used to prevent such effects.


3. Hazards in Flammable Environments


Static electricity can pose significant safety risks in environments containing flammable gases, vapors, or dust. A static discharge in such settings can
ignite combustible materials, leading to fires or explosions. For example, in oil and gas facilities, the movement of fluids through pipes can generate
static charges, and if these charges accumulate and discharge near flammable substances, they can trigger catastrophic accidents. Similarly, static
electricity is a concern in industries such as grain handling and chemical manufacturing, where combustible dust or volatile chemicals are present.
To mitigate these risks, industries implement strict safety measures, including proper grounding, bonding of equipment, and the use of antistatic
clothing and tools. These measures help dissipate static charges before they can accumulate to dangerous levels.

4. Disruption in Everyday Activities


Static electricity can also interfere with routine activities and the performance of certain technologies. For instance, in the printing and textile
industries, static charges can cause materials to stick together, leading to production inefficiencies and defects. Similarly, in photography or electronics
manufacturing, dust particles attracted by static electricity can compromise product quality. In addition, static electricity can affect consumer
electronics, such as touchscreens, where it may cause erratic behavior or temporary malfunctions. Although these effects are typically minor, they
highlight the need for effective static control in modern technology.

5. Positive Applications of Static Electricity


Despite its potential hazards, static electricity also has numerous beneficial applications. For example, it is used in electrostatic precipitators to control
air pollution. In these devices, static charges are applied to particulate matter in exhaust gases, causing the particles to be attracted to oppositely
charged plates, effectively removing them from the air stream. This technology is widely used in industries to reduce emissions. Static electricity is
also used in photocopiers and laser printers, where it plays a key role in transferring toner to paper to create printed images and text. Additionally, it
is employed in electrostatic painting, where charged paint particles are sprayed onto objects, ensuring an even and efficient coating.

6. Biological Effects
Static electricity can affect living organisms, including humans. Low-level discharges, such as those experienced when touching a charged object, are
generally harmless but can be startling or uncomfortable. Prolonged exposure to strong static electric fields, however, may cause mild discomfort or
physiological effects, such as tingling sensations. In rare cases, static electricity can interfere with medical devices like pacemakers, emphasizing the
need for precautions in healthcare settings.

7. Environmental Impact
In natural phenomena, static electricity contributes to significant events such as lightning. Lightning occurs when charges build up in clouds due to
friction between air molecules and water droplets. When the charge difference between the cloud and the ground becomes too great, a massive
discharge occurs, releasing energy in the form of light, heat, and sound. While awe-inspiring, lightning can cause severe damage to property,
ecosystems, and human life.

Insulators and Conductors:


Insulators and conductors are two fundamental classifications of materials based on their ability to transmit electrical current or heat. Understanding
these materials and their properties is critical in science, engineering, and daily life, as they form the foundation of numerous applications in
technology, safety, and energy management.


Properties and Effects


Insulators are materials that do not allow the free flow of electric current or heat. This resistance to conduction occurs because their
atomic structure does not have free electrons available to move easily between atoms. Instead, electrons in insulators are tightly bound to their atoms,
which inhibits the transmission of electrical energy. Common examples of insulators include rubber, glass, plastic, wood, and ceramics. In daily life,
insulators play a crucial role in ensuring safety and efficiency in electrical and thermal systems. For instance, electrical wires are coated with insulating
materials such as plastic or rubber to prevent accidental electric shocks and to ensure that electricity flows only through the conductor inside.
Similarly, insulators are used in home appliances, such as oven handles or electrical outlets, to prevent the transfer of heat or electricity to unintended
areas. Insulators are also significant in reducing energy loss. For example, insulating materials such as fiberglass or foam are used in building
construction to minimize heat transfer, keeping indoor spaces warm in winter and cool in summer. This reduces energy consumption for heating and
cooling, making buildings more energy-efficient and environmentally friendly.

Properties and Effects


Conductors are materials that allow the free flow of electric current or heat due to the presence of free electrons in their
atomic structure. These electrons can move easily between atoms, facilitating the transfer of energy. Metals such as copper, aluminum, silver, and
gold are excellent electrical conductors, while materials like iron and steel are good conductors of both heat and electricity. Conductors are
indispensable in daily life, particularly in electrical and electronic systems. Copper, for example, is widely used in electrical wiring due to its excellent
conductivity, affordability, and flexibility. Aluminum is another common conductor, often used in high-voltage power lines because it is lightweight
and relatively inexpensive. The ability of conductors to efficiently transmit electricity ensures the operation of household appliances, lighting,
communication devices, and industrial equipment.
In addition to electricity, conductors play a vital role in heat transfer. For instance, metals like aluminum and stainless steel are used in cookware
because they efficiently conduct heat, ensuring that food cooks evenly. Heat conductors are also employed in radiators, heat exchangers, and engines,
where rapid and effective heat dissipation is necessary to maintain performance and safety.

Combined Use of Insulators and Conductors


The combined use of insulators and conductors is central to the design of most electrical and electronic systems. For example, electrical circuits
require conductive materials to allow current flow, but they also need insulating materials to protect users and to prevent short circuits. A typical
example is a power cord, where copper wires conduct electricity while the plastic coating serves as an insulator to prevent accidental shocks. Similarly,
in thermal applications, the strategic use of conductors and insulators enhances energy efficiency and functionality. For example, a thermos flask uses
insulating materials to keep beverages hot or cold by minimizing heat transfer, while the metallic interior ensures even temperature distribution.

Daily Challenges and Solutions


While insulators and conductors are essential, they also present challenges in everyday life. Poor insulation in electrical systems can lead to energy
loss, higher electricity bills, and even fire hazards. Similarly, improper use of conductors can result in accidental electric shocks or equipment damage.
To address these challenges, manufacturers and engineers have developed advanced materials and technologies. For instance, improved insulating
materials, such as polymer composites, are now used in high-voltage transmission systems to enhance safety and efficiency. Innovations in conductive
materials, such as superconductors, offer the potential for more efficient energy transmission with minimal resistance.

Environmental and Industrial Implications


The choice of conductors and insulators also has significant environmental and industrial implications. Metals like copper and aluminum require
energy-intensive mining and manufacturing processes, which contribute to environmental degradation. Recycling these materials, however, reduces
the environmental impact and conserves resources. Similarly, insulating materials, particularly plastics, can pose environmental challenges due to
their non-biodegradable nature. The development of biodegradable or recyclable insulators is an area of ongoing research to address these concerns.

Examples in Nature and Everyday Life


Nature provides its own examples of insulators and conductors. For instance, air is a natural insulator, preventing the easy transfer of electricity and
heat. This property is harnessed in power lines, where air serves as an insulating medium between live wires and the ground. In contrast, water,
particularly saltwater, acts as a conductor due to the presence of dissolved ions. This principle is critical in understanding lightning strikes, where
the conductive properties of water in the atmosphere and the ground facilitate the discharge of electricity.
In daily life, the principles of conductors and insulators are evident in activities as simple as cooking, where metal pots conduct heat while plastic or
wooden handles prevent burns. Similarly, the clothes we wear, often made of insulating materials like wool or synthetic fibers, help regulate body
temperature by trapping heat.

Everyday Examples of Static Electricity


Static electricity is a common phenomenon that we encounter in our daily lives, often without realizing it. It occurs when there is an imbalance of
electric charges on the surface of materials, typically resulting from friction or the movement of electrons between objects. While it is a natural
occurrence, static electricity can manifest in a variety of ways, some of which are harmless and even useful, while others can be inconvenient or
potentially hazardous.

Shocks from Door Handles or Metal Objects


One of the most common and familiar experiences with static electricity occurs when a person touches a metal doorknob or any other conductive
surface after walking on a carpet or rubbing against a synthetic fabric. This sudden shock is the result of an electrostatic discharge (ESD), where
electrons rapidly flow from the person to the metal object, neutralizing the charge imbalance. The shock itself is typically harmless but can be startling
or uncomfortable. This phenomenon occurs more frequently in dry weather, as low humidity levels prevent the dissipation of charge, leading to an
accumulation of static electricity on the body.

Hair Standing on End


Another well-known example of static electricity is when a person’s hair stands up after brushing or combing it, especially during dry weather. This
effect occurs because combing or brushing the hair causes the transfer of electrons between the hair and the comb, creating an imbalance of electric
charges. The individual strands of hair become charged with the same type of charge (either positive or negative), and because like charges repel each
other, the hair fibers push away from each other, causing the hair to stand upright. This can be a temporary effect, but it is often most noticeable
when the environment is dry, which is conducive to the buildup of static charges.

Static Cling in Clothing


Static electricity is the main cause of the frustrating problem of static cling in clothing, particularly when wearing synthetic fabrics like polyester.
When clothes made of synthetic materials rub against each other in the dryer or during wear, electrons are transferred between them, causing one


item to become negatively charged and the other positively charged. As a result, the clothes attract each other, causing them to cling together. This
effect can be especially troublesome when clothing sticks to the skin or other garments. To mitigate static cling, fabric softeners or dryer sheets are
often used, as they help neutralize the static charge by adding a conductive layer to the fabric.

Sparks from Touching Electronics


Another common example of static electricity in daily life occurs when individuals touch sensitive electronic devices, such as computers, televisions,
or cell phones, and experience a brief spark or discharge. This occurs when a person has accumulated a significant static charge by walking on a
carpet, rubbing clothing, or moving through an area with low humidity. The charged person then touches an electronic device, and the charge flows
into the device, causing a visible spark. While this is often harmless to the person, it can be damaging to sensitive electronic components, leading to
malfunctions or, in extreme cases, complete failure of the device. Therefore, electronics manufacturers take precautions such as designing equipment with grounding systems to minimize the effects of static electricity.

The Attraction of Dust and Small Debris


Static electricity also causes dust and small particles to be attracted to certain surfaces. This effect can be seen when cleaning televisions, computer
screens, or even plastic items like packaging materials. The buildup of static charge on a surface can attract dust particles from the air, causing them
to stick to the surface. This is particularly evident when cleaning electronics, where small particles may cling to the screen or keyboard despite efforts
to wipe them away. The effect is most noticeable in environments with low humidity, as moisture in the air typically helps to reduce static charge
accumulation.

Use of Electrostatic Precipitators


In some households or industries, static electricity is intentionally used for practical purposes, such as in electrostatic precipitators. These devices are
designed to remove dust and particulate matter from the air by charging the particles and then using an oppositely charged collector plate to attract
and capture them. This process is commonly used in air purifiers, where static electricity is harnessed to remove allergens, smoke, and other pollutants
from the air. Similarly, industrial electrostatic precipitators are employed to reduce air pollution from smokestacks by capturing harmful particulates
before they are released into the atmosphere.

The Functioning of Copy Machines and Printers


Static electricity also plays a critical role in the operation of photocopiers and laser printers. These machines rely on the principle of electrostatic
charge to transfer toner onto paper. In a photocopier or laser printer, a drum or belt is charged with static electricity, and a laser or light source
selectively discharges areas of the drum, forming an image. The toner, which is charged with the opposite type of charge, is then attracted to the
discharged areas, and the image is transferred to the paper. This process, known as xerography, is efficient and widely used in modern printing
technology.

Spark Discharge When Fueling Vehicles


When fueling a vehicle at a gas station, static electricity can build up on the body of the car or on the person’s clothes, especially when moving in and
out of the vehicle. If this static charge discharges when coming into contact with the gas nozzle, it can cause a small spark. Although rare, this spark
could ignite flammable vapors, leading to a fire or explosion. To prevent this, it is important to touch a metal surface away from the fuel nozzle to
discharge any built-up static before refueling, ensuring a safe fueling process.


Charging by Friction
When two different materials are rubbed together, electrons are transferred from one material to the other.
Charging by friction, also known as triboelectric charging, occurs when
two insulating materials are rubbed together, resulting in the transfer
of electrons from one object to another. This process leads to one object
becoming positively charged and the other negatively charged. The
material losing electrons becomes positively charged, while the
material gaining electrons becomes negatively charged. The
phenomenon is commonly observed in everyday situations, such as
when a plastic comb is rubbed through hair, causing the comb to attract
hair strands due to the opposite charges. The effectiveness of charging by friction depends on the materials involved.
Different insulating materials have varying tendencies to gain or lose electrons, which is described by the triboelectric series. When two materials
from this series are rubbed together, the one higher on the list will typically lose electrons, while the one lower will gain them.
This method of charging is significant in understanding static electricity and has practical applications in various fields, including electronics and
material science.
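The idea of the triboelectric series can be captured in a few lines of code: whichever of two rubbed materials sits higher in the series tends to lose electrons and become positively charged. The ordering used below is one simplified, illustrative version; published series differ in detail, so treat it as an assumption rather than a definitive ranking.

```python
# Simplified, illustrative triboelectric series, from "tends to become positive"
# down to "tends to become negative". Real published series vary.
TRIBO_SERIES = ["glass", "human hair", "nylon", "wool", "silk",
                "paper", "cotton", "hard rubber", "polyester", "PVC"]

def predict_charges(material_a: str, material_b: str) -> dict:
    """Predict which material becomes positive and which negative when the two are rubbed."""
    if TRIBO_SERIES.index(material_a) < TRIBO_SERIES.index(material_b):
        return {material_a: "positive (loses electrons)", material_b: "negative (gains electrons)"}
    return {material_b: "positive (loses electrons)", material_a: "negative (gains electrons)"}

# Rubbing a glass rod with silk: glass tends to become positive, silk negative.
print(predict_charges("glass", "silk"))
```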

Friction Generates Electric Charge


When two materials rub against each other, electrons are transferred from one material to another. The material that gains electrons becomes negatively
charged, while the material that loses electrons becomes positively charged. On insulators, electric charge can build up because these materials do not
allow electrons to move freely. For example, when rubbing a balloon, the balloon (an insulator) can retain the charge.
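Because each electron carries a charge of about 1.6 × 10⁻¹⁹ C, even a very large number of transferred electrons corresponds to a tiny total charge. The sketch below turns a hypothetical electron count into coulombs; the number of electrons transferred is assumed purely for illustration.

```python
ELECTRON_CHARGE = 1.602e-19  # magnitude of the charge on one electron, in coulombs

def net_charge(electrons_transferred: int) -> float:
    """Total charge (in coulombs) gained by the material that receives the electrons."""
    return electrons_transferred * ELECTRON_CHARGE

# Suppose rubbing a balloon transfers about 1e12 electrons onto it (an assumed figure).
q = net_charge(10**12)
print(f"{q:.2e} C")  # ~1.6e-07 C, i.e. a fraction of a microcoulomb
```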

Charging by Induction
Charging by induction involves bringing a charged object close to a neutral object without direct contact. The presence of the charged object causes a
redistribution of charges within the neutral object. This results in the neutral object acquiring a charge opposite to that of the charged object.
Example: If a negatively charged rod is brought near a neutral metal sphere, electrons in the sphere will be repelled, causing a positive charge to
appear on the side closest to the rod.

Principle of Charging by Induction


The fundamental principle behind induction is the ability of a charged object to influence the distribution of charges within a nearby neutral object.
When a charged object (either positively or negatively) is brought close to a neutral object, the electric field from the charged object exerts a force on
the electrons within the neutral object. If the charged object is positively charged, it will attract electrons from the neutral object, causing them to
accumulate on the side of the object closest to the charged object, creating a negative charge on that side. Conversely, the side of the neutral object
farthest from the charged object will have a deficit of electrons, resulting in a positive charge. This separation of charges creates a dipole within the
neutral object, with the side nearest to the charged object becoming negatively charged and the far side becoming positively charged. This
redistribution of charges within the object does not involve any physical transfer of electrons from the charged object but rather a rearrangement of
electrons within the neutral object.

Procedures Involved in Charging by Induction


 The process begins when a charged object, such as a charged rod (either positively or negatively charged), is brought near a neutral
conducting object, such as a metal sphere or a metal plate.
 The electric field from the charged object causes the electrons in the neutral object to move. If the charged object is negatively charged, it
repels the electrons in the neutral object, pushing them away, which leaves a positive charge near the charged object. If the charged object
is positively charged, it attracts the electrons of the neutral object, causing them to move toward the region near the charged object, creating
a negative charge.
 As a result of the redistribution of electrons, the neutral object develops two regions: a region of negative charge (closer to the charged
object) and a region of positive charge (further from the charged object). This is known as electrostatic induction.
 If the neutral object is connected to the ground during induction, the object will lose or gain electrons through the ground, depending on
the type of charge of the external object. For example, if a negatively charged object is brought near the neutral object, the neutral object’s
electrons will be repelled to the ground, leaving behind a positively charged object. If the object is then disconnected from the ground
while the external charge is still nearby, the object will retain a net positive charge.
 After the charged object is removed, the redistributed charges within the neutral object remain in their new positions. If the object has
been grounded, the flow of charge will stop once the grounding is removed. If the object has not been grounded, the redistribution of
charges results in a net charge on the object.
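The steps above can be summarized as a small decision rule: without earthing, induction only separates charge within the object, which stays neutral overall; with earthing (the earth connection being removed before the rod), the object is left with a net charge opposite to that of the inducing rod. The function below is a conceptual sketch of that logic under those assumptions, not a physical simulation.

```python
def final_charge_after_induction(rod_charge: str, earthed_during_induction: bool) -> str:
    """Sign of the net charge left on a conductor after charging by induction.

    rod_charge: "positive" or "negative" (the charge on the inducing rod).
    earthed_during_induction: True if the conductor was earthed while the rod
    was nearby and the earth connection was removed before the rod.
    """
    if not earthed_during_induction:
        # Charges only separate within the object; it remains neutral overall.
        return "neutral (charges separated but no net charge)"
    # Earthing lets like charge escape, leaving the opposite charge behind.
    return "negative" if rod_charge == "positive" else "positive"

print(final_charge_after_induction("negative", earthed_during_induction=True))   # positive
print(final_charge_after_induction("negative", earthed_during_induction=False))  # neutral
```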
Types of Induction
There are two main types of induction:
Electrostatic Induction (Conduction without Touching): This occurs when a charged object induces a charge separation within a nearby
object, leading to an induced charge. In this case, the object does not physically touch the charged body, but the redistribution of charges creates a
net charge on the object.
Induced Charge Separation: This involves the temporary rearrangement of charges within an object. When a charged body is brought near a
neutral object, it causes the electrons within the neutral object to shift. However, once the charged object is removed, the induced charges disappear,
and the object returns to its original state.
Applications of Charging by Induction
 Capacitors use the principle of charging by induction to store energy. When two conductive plates are placed close together and connected
to a power source, the electric field from the power source induces charges on the plates. The positive plate attracts electrons, and the
negative plate repels them, creating a potential difference between the plates. The energy is stored as electrostatic energy in the form of a
charge imbalance.
 The electrophorus is a device used to generate static electricity by induction. It consists of a charged plate and a metal disc. By touching
the disc to the charged plate, charge is induced on the metal disc. This induced charge can then be transferred to other objects.
 A lightning rod works on the principle of induction. It is designed to provide a safe path for the discharge of electricity from a storm cloud
to the ground. The charged cloud induces charges on the lightning rod, causing the rod to become oppositely charged. This attracts the
lightning strike to the rod, where it is then conducted safely to the ground.
 The concept of inductive charging, which is commonly used in wireless charging devices, is based on induction. In these systems, an
alternating current (AC) in the charging pad creates a changing magnetic field that induces a current in a coil within the device, thus
charging it without the need for physical connections.
 Induction is used in devices like electrostatic precipitators, which remove particulate matter from the air in industrial settings. These
devices induce charges on particles, causing them to move toward oppositely charged plates where they are collected and removed from
the air.


Charge Transfer
Charges can be transferred between objects through direct contact or by induction. When two objects come into contact, electrons can move from one
to the other, transferring the charge.
Electrons: The primary particles responsible for electric charge are electrons. Electrons carry a negative charge. The movement of electrons from one
object to another creates static electricity.

Gold Leaf Electroscope


The gold leaf electroscope is a simple yet effective device used to detect electric charges. It consists
of two thin gold leaves suspended from a conducting rod, all housed within a glass container.
When an electrically charged object is brought near the electroscope, the leaves repel each other
due to the like charges, causing them to spread apart. This movement indicates the presence of
an electric charge. The gold leaf electroscope serves not only to detect charge but also to
determine its polarity. By touching the electroscope with a charged object, one can observe
whether the charge is positive or negative based on the behavior of the leaves. If the leaves diverge
further, it indicates a stronger charge. This device has practical applications in educational
settings, helping students understand electrostatics and the principles of charge detection. Its
simplicity and effectiveness make it a valuable tool in physics demonstrations.
Function of a Gold Leaf Electroscope:
The gold leaf electroscope is a sensitive device used to detect, measure, and observe electric charges. It consists of a metal rod with a metal cap at one
end and thin gold leaves at the other, housed in a transparent insulating case to protect the leaves from disturbances.
When a charged object is brought near the cap, charges in the electroscope redistribute, causing the gold leaves to repel each other. This divergence
indicates the presence and magnitude of the charge. The electroscope can also determine the type of charge on an object by observing the behavior of
the leaves when a charged object is brought close.
In addition to detecting charges, the electroscope is used to compare the relative magnitude of charges and distinguish between conductors and
insulators. A conductor transfers charge to the electroscope, causing the leaves to diverge, while an insulator does not affect the leaves. Furthermore,
the electroscope has been historically used to detect ionizing radiation, as radiation neutralizes its charge, causing the leaves to collapse.

CHARGING A GOLD LEAF ELECTROSCOPE BY INDUCTION


Charging it positively
Bring a negatively charged rod near the cap of the gold leaf electroscope. Positive charges are attracted to the cap while negative charges are repelled to the plate and the gold leaf. The leaf diverges because of the repulsion between the like charges on the plate and the leaf. Earth the electroscope while the negatively charged rod is still in place; electrons on the plate and leaf flow to the earth and the leaf collapses. Remove the earth connection and then the negatively charged rod. The positive charges left on the cap spread out over the plate and leaf, so the leaf diverges again, showing that the electroscope is now positively charged.


Charging it negatively
 Get an uncharged gold leaf electroscope.
 Bring a positively charged rod near its cap.
 Negative charges are attracted to the cap while positive charges are repelled to the leaf and brass plate, so the leaf diverges.
 Earth the electroscope while the positively charged rod is still in place.
 Negative charges flow from the earth to neutralize the positive charges on the plate and leaf, and the leaf collapses.
 Remove the earth connection and then the positively charged rod; the negative charges on the cap spread out over the leaf and plate, so the electroscope is now negatively charged.

Testing for presence of charge


When a negatively charged rod is brought near the cap of a negatively charged gold leaf electroscope, the leaf increases in divergence as the rod is lowered towards the cap. An increase in divergence is the only sure confirmation that a body carries charge of the same sign as that on the electroscope; a decrease in divergence can be caused either by an opposite charge or by an uncharged conductor.

Distribution of charge on a conductor


Hollow conductor
When the proof plane is placed on the outside surface of a charged hollow conductor (a)
and charge is transferred to the uncharged G.L.E, the leaf diverges.
This proves that charge was present on the outside of the surface.
When the proof plane is placed on the inside of a charged hollow conductor and
transferred to the uncharged G.L.E, the leaf does not diverge. Therefore, charge resides
on the outside surface of the hollow charged conductor.

Curved bodies
A surface with a large curvature has a small radius, while a surface with a small curvature has a large radius; curvature is therefore inversely proportional to radius, and a flat surface has no curvature. Surface charge density is directly proportional to curvature, so a region of small curvature carries a small charge density, while a sharply curved region, such as a point, carries a large charge density.


Note. Surface charge density is the charge per unit area of the surface
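As a simple worked example of surface charge density, σ = Q/A, the sketch below spreads an assumed charge uniformly over a sphere; both the charge and the radius are illustrative values chosen for this example.

```python
import math

def surface_charge_density(total_charge_c: float, area_m2: float) -> float:
    """Surface charge density sigma = Q / A, in coulombs per square metre."""
    return total_charge_c / area_m2

# An assumed 20 nC spread uniformly over a sphere of radius 5 cm.
radius = 0.05
area = 4 * math.pi * radius ** 2           # ~0.0314 m^2
print(surface_charge_density(2e-8, area))  # ~6.4e-7 C/m^2
```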

Action of points
Charge concentrates at sharp points. This creates a very strong electrostatic field at the points, which ionizes the surrounding air molecules, producing positive and negative ions. Ions with the same charge as that on the sharp point are repelled away, forming an electric wind that can blow a candle flame, as shown in the diagram below, while ions of the opposite charge are attracted to and collected at the point.
Therefore, a charged sharp point acts as:
 a sprayer of its own charge, in the form of an electric wind;
 a collector of unlike charges.
This spraying off and collecting of charge by sharp points is known as corona discharge.
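One way to see why charge escapes from sharp points is to model the point as a tiny sphere held at the same potential as the rest of the conductor; for an isolated sphere the surface field is roughly E ≈ V/r, so a smaller radius gives a much stronger field. The sketch below compares a blunt end and a sharp point against the approximate breakdown field of dry air (about 3 × 10⁶ V/m). The radii and potential are assumptions made purely for illustration.

```python
AIR_BREAKDOWN_FIELD = 3e6  # approximate breakdown field of dry air, V/m

def surface_field(potential_v: float, radius_m: float) -> float:
    """Approximate surface field of an isolated charged sphere: E = V / r."""
    return potential_v / radius_m

potential = 3.0e4  # 30 kV, an assumed potential on the conductor
for label, radius in [("blunt end, r = 10 cm", 0.10), ("sharp point, r = 0.5 mm", 0.0005)]:
    e_field = surface_field(potential, radius)
    ionizes = e_field > AIR_BREAKDOWN_FIELD
    print(f"{label}: E = {e_field:.2e} V/m, ionizes air: {ionizes}")
```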

Application of action of points (corona discharge)


 Used in a lightning conductor.
 Used in electrostatic generators.
 Used in electrostatic photocopying machines.
 Used in discharging aircraft: aircraft become electrified in flight, with the charge remaining on the outer surface, so they are discharged after landing before passengers are allowed to board.

Lightning
One of the most dramatic and powerful examples of static electricity is lightning, a natural phenomenon that occurs when static electricity builds up
in the atmosphere. During thunderstorms, the movement of air and water droplets causes friction, resulting in a build-up of static charges in the
clouds. Once the charges reach a critical level, the electric field becomes strong enough to overcome the insulating properties of the air, causing a
discharge in the form of a lightning strike. This discharge is a massive release of energy, producing a bright flash of light, a loud thunderclap, and
significant heat. While lightning is awe-inspiring, it is also extremely dangerous and can cause fires, injuries, and even fatalities. Safety precautions,
such as staying indoors during a storm, are essential to minimize the risks associated with lightning.

Occurrences of Lightning in Uganda


Uganda experiences frequent and intense lightning due to its equatorial location, high temperatures, and proximity to Lake Victoria, which create ideal
conditions for storm formation. The country’s rainy seasons and mountainous regions, such as the Rwenzori and Elgon areas, further contribute to the
development of thunderstorms and lightning. Charged cumulonimbus clouds generate lightning through static electricity created by the collision of
ice particles and water droplets. Lightning frequently strikes populated areas, schools, and farmlands, causing fatalities, injuries, and damage to
property and infrastructure. Rural areas are particularly vulnerable due to a lack of lightning protection systems and proper grounding in buildings.
Efforts to reduce the impact of lightning include awareness campaigns, installation of lightning arrestors, and enforcement of safety measures, although
significant challenges remain in improving protection, especially in resource-limited regions.


Uganda experiences frequent thunderstorms, especially during the rainy seasons (March to May and September to November). Lightning is common in
areas with high humidity and warm temperatures. Lightning strikes can cause fires, damage to buildings, and even fatalities. Understanding the
patterns and frequency of lightning can help in disaster preparedness and safety.

Causes of Lightning
Lightning is a form of static discharge that occurs when electric charges build up in clouds and the atmosphere. This buildup is caused by friction
between different layers of air masses. As air currents cause collisions between ice crystals and water droplets in thunderstorms, static electricity
accumulates. When the charge becomes strong enough, it is released as lightning. Lightning is a spectacular natural phenomenon caused by the
buildup of static electricity within thunderstorm clouds. As turbulent winds move water droplets, ice crystals, and graupel (a type of soft hail) within
the cloud, they collide and create an imbalance of electrical charges. This process leads to the accumulation of negative charges at the cloud's base
and positive charges at the top. When the difference in charge becomes significant, a discharge occurs, resulting in lightning. This discharge can
happen within the cloud (intra-cloud lightning) or between the cloud and the ground. The rapid expansion of heated air from the lightning bolt creates
a shockwave, which we hear as thunder.

Lightning Conductors
Lightning conductors are metal rods placed on buildings and structures to safely conduct lightning strikes into the ground. They provide a path of
low resistance for the electrical charge, preventing damage to the structure. Lightning conductors are installed at the highest points of a building and
connected to a grounding system that disperses the charge safely into the earth. A lightning conductor, commonly known as a lightning rod, is a metal
rod installed at the highest point of a building to protect it from lightning strikes. It is typically made of copper or aluminum and is connected to the
ground through a thick wire. This system is crucial for safeguarding structures from the destructive power of lightning. The operation of a lightning
conductor is based on the principle of induction. When a charged cloud passes overhead, the conductor acquires an opposite charge, which helps to
attract the lightning bolt. Upon striking the rod, the electrical energy is safely channeled down the conductor and into the ground, preventing damage
to the building and its occupants. In addition to protecting buildings, lightning conductors are essential for safeguarding lives and reducing the risk
of fire.
Factors Influencing Lightning Occurrence
Topography: Elevated regions and mountainous areas are more prone to thunderstorms and lightning.
Humidity: High humidity levels, common in tropical climates, contribute to the formation of thunderstorms and lightning.
Temperature: Warm temperatures increase the likelihood of convection currents, which can lead to thunderstorm development.

Causes of Lightning in Uganda


Lightning in Uganda is influenced by several environmental factors, including ground elevation, latitude, prevailing wind currents, and relative
humidity. The country's unique geography, characterized by its proximity to both warm and cold bodies of water, contributes to the frequency of
thunderstorms, which are the primary conditions for lightning strikes. Recent studies indicate that climate change is exacerbating extreme weather
patterns in Uganda, leading to an increase in lightning occurrences.
Meteorologists have noted that as weather conditions become more unpredictable, the risk of deadly lightning strikes rises, posing a significant threat
to communities. Additionally, traditional beliefs in Uganda often attribute lightning to supernatural forces, reflecting cultural interpretations of natural
phenomena.

Impact of Lightning in Uganda


Lightning significantly impacts Uganda, causing fatalities, injuries, and extensive damage to infrastructure, agriculture, and the economy. Schools and
rural areas are particularly vulnerable, with poorly grounded buildings and open spaces increasing the risk. Fatalities and injuries often occur, while
property damage includes fires, destroyed electronics, and disrupted communication networks. Agriculture suffers from crop loss, livestock deaths,
and risks to farmers working in fields, leading to food insecurity and economic setbacks.
The psychological effects of lightning, such as fear and trauma, affect communities, while educational disruptions occur when schools are struck.
Environmentally, lightning can ignite fires, leading to habitat destruction and contributing to climate change. Efforts to mitigate these effects include
installing lightning arrestors, raising awareness, and enforcing safety standards, though more resources are needed to reduce the ongoing risks.

Lightning Safety and Preparedness


Lightning safety and preparedness involve several key measures to minimize the risk of injury and damage during storms. Seeking shelter indoors or
in a car is the safest option, avoiding open fields and tall trees. It's important to wait at least 30 minutes after the last thunderclap before leaving
shelter. Installing lightning rods and grounding systems in buildings, along with regular maintenance, can prevent structural damage. Creating
emergency plans, educating the public, and monitoring weather reports can further enhance safety. Outdoor equipment should be lightning-resistant,
and first aid knowledge, such as performing CPR, is essential in the event of a lightning strike. Community and government efforts should focus on
improving infrastructure and spreading awareness to ensure widespread preparedness.

Components of a Lightning Conductor System


• The lightning rod: a metal rod, usually made of copper or aluminum, mounted at the highest point of a building or structure. It serves as the point where lightning is most likely to strike and is designed to attract and intercept lightning strikes.
• The metal conductors: conductors (usually copper or aluminum) that connect the lightning rod to the ground. They create a low-resistance path for the electrical charge to travel from the lightning rod to the ground.
• The grounding system: metal rods or plates buried in the ground that disperse the electrical charge safely into the earth. The grounding system must be in good contact with the soil to conduct the charge effectively.
• Bonding: the connection of the lightning conductor system to other conductive parts of the building, such as metal plumbing or electrical systems. This ensures that all conductive parts of the building are at the same electrical potential, reducing the risk of side flashes or damage.

How Lightning Conductors Work


The lightning rod is designed to attract lightning because it is the highest point of the structure. While lightning rods do not prevent lightning from
striking, they provide a controlled point of entry. When lightning strikes, the rod intercepts the electrical discharge and prevents it from hitting other
parts of the building. The metal conductors create a direct path for the electrical energy to travel from the lightning rod to the ground. This pathway
is designed to be low in electrical resistance. The electrical energy travels along the conductors, bypassing sensitive parts of the building and avoiding
potential damage. Once the electrical energy reaches the ground, the grounding system disperses it safely into the earth. The metal rods or plates in
the ground conduct the charge into the soil, where it is absorbed and neutralized. Proper grounding ensures that the electrical energy does not cause
harm to the building’s structure, electrical systems, or occupants.


By directing lightning safely into the ground, lightning conductors prevent lightning from causing fires by striking flammable materials or electrical
systems. The system also reduces the risk of secondary fires caused by electrical surges or damage to wiring. Lightning conductors help prevent power
surges and electrical damage by directing lightning away from electrical components and wiring. They protect sensitive equipment and electronics
from potential damage due to lightning-induced electrical surges. By preventing lightning strikes from entering the building, lightning conductors
reduce the risk of injuries or fatalities caused by electrical discharge. The system helps maintain a safe environment for occupants during
thunderstorms.

ELECTRIFICATION
Electrification is the process of transforming a neutral body into a charged one,
a phenomenon that occurs universally. This process involves the transfer of
electric charge, typically through the movement of electrons. When an object
gains or loses electrons, it becomes electrically charged, resulting in an
imbalance of charges. This imbalance can lead to static electricity, where charges
remain on the surface of materials until they find a path to neutralize. Static
electricity arises from the interaction between negatively charged objects and
positively charged ones, often through friction. For instance, rubbing materials together can cause electrons to transfer, creating a charge imbalance.
This principle is illustrated in the triboelectric series, which ranks materials based on their tendency to gain or lose electrons.
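The sketch below is one way to picture a triboelectric series in code. The ordering used is a simplified, illustrative series (not an authoritative list); materials nearer the start of the list tend to lose electrons when rubbed against materials nearer the end.

```python
# A simplified triboelectric series (illustrative ordering, not exhaustive):
# materials nearer the start tend to LOSE electrons (become positive) when
# rubbed against materials nearer the end, which tend to GAIN electrons.
SERIES = ["glass", "fur", "wool", "silk", "paper", "cotton", "rubber", "polythene"]

def rub(material_a, material_b):
    """Predict the sign of charge each material acquires after rubbing."""
    ia, ib = SERIES.index(material_a), SERIES.index(material_b)
    if ia == ib:
        return f"No significant transfer between {material_a} and {material_b}"
    loser, gainer = (material_a, material_b) if ia < ib else (material_b, material_a)
    return f"{loser} becomes positive (loses electrons); {gainer} becomes negative (gains electrons)"

print(rub("fur", "rubber"))   # fur becomes positive; rubber becomes negative
```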

Methods of producing Electric charges


Electric charges can be produced through several methods. The first is charging by friction,
where two uncharged objects are rubbed together, causing electrons to transfer from
one object to the other. This results in one object becoming positively charged and the
other negatively charged. Another method is charging by conduction, which occurs
when a charged object comes into direct contact with an uncharged object. The
charged object transfers some of its charge to the uncharged object, resulting in both
objects carrying a similar charge. Charging by induction involves bringing a charged object close to an uncharged conductive material without direct
contact. This causes a redistribution of charges within the uncharged object, leading to a temporary charge separation.

Electrification by friction
Electrification by friction is a fundamental process in physics where two bodies, when
rubbed together, exchange electrons. This interaction results in one body acquiring a
negative charge and the other a positive charge, with both charges being equal in
magnitude. During this process, the body that loses electrons becomes positively
charged, while the one that gains electrons becomes negatively charged. This transfer
of electrons occurs due to the differing affinities of materials for electrons, a concept
known as the triboelectric effect. For instance, when rubber is rubbed against fur,
electrons move from the fur to the rubber, charging them oppositely. The resulting static
electricity can lead to various effects, such as attracting light objects or causing small


shocks.

Electrification by conduction
Electrification by conduction is a process that occurs when a charged object comes into direct contact with a neutral object. This contact allows for
the transfer of electrons between the two objects. If the charged object is negatively charged, electrons will flow from it to the neutral object, imparting
a negative charge to the latter. Conversely, if the charged object is positively charged, electrons will move from the neutral object to the charged one,
leaving the neutral object positively charged. This method of charging is straightforward and relies on the principle that like charges repel and
opposite charges attract. The result is that the neutral object acquires the same type of charge as the charged object it contacted. This phenomenon is
fundamental in electrostatics and has practical applications in various fields, including electronics and materials science.
Note: The insulated stand prevents flow of charge away from the conductor. To charge the conductor negatively by conduction, a negatively charged rod is brought into contact with it.

Electrification by induction
Charging the body positively.
Procedure
Put the conductor on an insulated stand as in (a)
Bring a negatively charged rod near the conductor.
The positive and negative charges separate as shown in (a)
Earth the conductor by momentarily touching it with a finger and electrons flow
from it to the earth as in (b) in presence of the charged rod.
Remove the charged rod. The conductor is left positively charged.

Charging the body negatively by induction


Procedure
Put the conductor on an insulated stand as in (a).
Bring a positively charged rod near the conductor.
The positive and negative charges separate as shown in (a)
Earth the conductor by momentarily touching it with a
finger; electrons flow from the earth to the conductor as in (b) in
the presence of the charged rod.
Remove the charged rod. The conductor is left
negatively charged.


Charging two bodies simultaneously with opposite charges


Support two uncharged bodies (X) and (Y) in contact on an insulated stand as shown in (a).
• Bring a positively charged rod near the two bodies.
• Positive and negative charges separate as in (b): negative charges are attracted towards the rod and positive charges are repelled to the far body.
• Separate (X) from (Y) in the presence of the inducing charge.
• Remove the inducing charge.
• One body is left positively charged and the other negatively charged: the body nearer the rod acquires a negative charge and the body farther from the rod acquires a positive charge.

Conductors and Insulators


Conductors and insulators are two categories of materials that differ in
their ability to allow the flow of electric charges (electrons) or heat. Understanding the distinction between them is essential in fields such as electricity,
electronics, and thermodynamics.

Conductors are materials that allow the flow of electric charges (usually electrons) or heat through them easily. They have free-moving charge carriers,
which can be electrons or ions that enable electricity or heat to pass through.
Characteristics of Conductors:
• Conductors have many free electrons (delocalized electrons) in their atomic structure. These electrons are loosely bound to their atoms and can move freely when an electric field is applied.
• Conductors offer very little resistance to the flow of electric current because the free electrons can move with minimal hindrance. This makes them highly efficient at transmitting electricity.
• In addition to conducting electricity, conductors can also efficiently transfer heat. Metals, for instance, can quickly transfer thermal energy from one part of the material to another.
Examples: Copper, aluminum, gold, and silver are excellent electrical conductors because they have free-moving electrons. While non-metallic,
graphite is a good conductor due to the presence of free electrons in its structure. Ionized gases in the plasma state (e.g., in stars or neon signs) can
conduct electricity due to the presence of free-moving ions and electrons.


Applications of Conductors:
Copper and aluminum are commonly used in electrical wiring and power lines because of their excellent conductivity. Conductors are used in circuit
boards, connectors, and various electrical components. Metal conductors like aluminum are used in heat sinks to dissipate heat from electronics like
CPUs and GPUs.
Insulators are materials that resist the flow of electric charges or heat. They do not allow electrons or ions to move freely, which makes them poor
conductors of electricity or heat.

Characteristics of Insulators
In insulators, the electrons are tightly bound to their respective atoms and cannot move freely. This prevents the movement of electric charges and
stops the flow of current. Insulators have very high resistance to electric current. This makes them effective at blocking or containing electrical
charges. Insulators are also poor conductors of heat. They resist the transfer of thermal energy, making them good for thermal insulation.

Examples: Materials such as rubber, glass, wood, plastic, and ceramics are excellent insulators of electricity and heat.
Although technically a gas, air acts as an insulator, preventing the free flow of electricity (which is why air gaps are used in electrical insulators).
Dry paper is a good insulator and is used in various applications where electrical insulation is necessary.

Applications of Insulators:
Plastic and rubber are often used to coat electrical wires to prevent accidental contact with live conductors and to contain electrical energy within the
system. Insulators such as fiberglass, foam, and wool are used in building construction to prevent the loss of heat in homes and offices.
Insulators are used in capacitors as dielectric materials, which can store electrical energy.

Comparison between Conductors and Insulators


Property | Conductors | Insulators
Electrical conductivity | High (due to free electrons) | Low (due to tightly bound electrons)
Resistance | Low (minimal resistance to current flow) | High (resist the flow of current)
Heat conductivity | Good (thermal energy transfers easily) | Poor (resist the transfer of heat)
Examples | Copper, aluminum, gold, silver | Rubber, glass, plastic, wood
Electron movement | Free electrons move easily | Electrons are tightly bound and immobile


Semiconductors:
In addition to conductors and insulators, semiconductors represent a special class of materials that have properties between conductors and insulators.
Their conductivity can be controlled or modified by adding impurities (a process called doping), applying voltage, or changing temperature.
Semiconductors are the foundation of modern electronics, used in devices like transistors and diodes.
Examples of Semiconductors: Silicon and germanium.
Applications: Semiconductors are the building blocks of computer chips, solar cells, and other electronic devices.

ELECTRIC FIELDS
This is a region around a charged body where electric forces are experienced. Electric fields may be represented by field lines. Field lines are lines
drawn in an electric field such that their direction at any point gives the direction of the electric field at that point. The direction of the field at any given
point is the direction of the force on a small positive charge placed at that point.
Properties of electric field lines
• They begin and end on equal quantities of charge.
• They are in a state of tension which causes them to shorten.
• They repel one another sideways.
Field patterns
(a) An isolated charge
(b) Unlike charges close together
(c) Like charges close together
A neutral point is a region where the resultant electric field is zero i.e. field
lines cancel each other and therefore no resultant electrostatic force exists.
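A neutral point can also be located numerically. The sketch below assumes Coulomb's law for the field of a point charge (E = kq/r², which is not introduced in this chapter) and scans the line between two equal positive charges for the point where their fields cancel; the charge values and spacing are purely illustrative.

```python
K = 8.99e9          # Coulomb constant, N m^2 C^-2 (assumed, not given in this chapter)

def field_at(x, charges):
    """Signed x-component of the net field of point charges [(q, position), ...]."""
    E = 0.0
    for q, xq in charges:
        r = x - xq
        E += K * q * (1 if r > 0 else -1) / r ** 2   # field points away from a positive charge
    return E

charges = [(1e-9, 0.0), (1e-9, 1.0)]    # two equal +1 nC charges, 1 m apart (illustrative)
# Scan the region between the charges for the point where the fields cancel
best = min((x / 1000 for x in range(1, 1000)), key=lambda x: abs(field_at(x, charges)))
print(f"Neutral point near x = {best:.3f} m (the midpoint, as symmetry predicts)")
```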

2.3 THE SOLAR SYSTEM


Learning Outcomes
a) Know the relative sizes, positions, and motions of the earth, sun and moon (k, u)
b) Understand how day and night occur and demonstrate the phases of the moon (u, s)
c) Understand the roles of the sun, earth and moon in explaining time, seasons, eclipses, and ocean tides (k, u, s)
d) Know the components of the solar system and their positions(k)
e) Know the main characteristics of the inner and outer planets in the solar system(k)
f) Understand the various views about the origin and structure of the universe(k, v/a)


The solar system


The solar system is a vast and intricate collection of celestial bodies, primarily centered around the Sun, a G-type main-sequence star that contains
99.86% of the system's mass. The Sun's gravitational pull governs the orbits of various components, including eight planets, which are categorized
into terrestrial (Mercury, Venus, Earth, and Mars) and gas giants (Jupiter, Saturn, Uranus, and Neptune). In addition to the planets, the solar system is
home to five officially recognized dwarf planets, such as Pluto and Eris, along with hundreds of moons that orbit these planets. Furthermore, countless
asteroids and comets populate the space between the planets, contributing to the dynamic nature of our solar system. Together, these components
create a complex and fascinating environment, showcasing the diversity and beauty of the cosmos.

The planets in the solar system


The solar system consists of eight planets: Mercury, Venus, Earth,
Mars, Jupiter, Saturn, Uranus, and Neptune. These planets can be
categorized into two groups: terrestrial and gas giants. The four inner
planets—Mercury, Venus, Earth, and Mars are terrestrial,
characterized by their rocky surfaces. In contrast, the outer
planets—Jupiter, Saturn, Uranus, and Neptune—are gas giants,
composed mainly of gases and lacking solid surfaces. Mercury, the
closest planet to the Sun, experiences extreme temperature
fluctuations, while Venus, often called Earth's twin, has a thick
atmosphere that traps heat. Earth is unique for its liquid water and
life, and Mars is known for its red surface and potential for past life. The gas giants, particularly Jupiter, are massive and have numerous moons, with
Saturn famous for its stunning rings. In addition to these eight planets, the solar system contains five officially recognized dwarf planets, hundreds of
moons, and countless asteroids and comets, showcasing the vast diversity of celestial bodies orbiting our Sun.

THE BRIGHT PLANETS IN THE SKY

In January, the night sky offers a spectacular view of several bright planets, with Venus shining as the most brilliant. Known as the "Evening Star,"
Venus reflects about 70% of the sunlight that reaches it, making it a dazzling sight in the western sky after sunset. Its steady golden light is
complemented by the nearby presence of Saturn, which also adds to the celestial display. Jupiter and Mars are other notable planets visible this month.
Jupiter, the largest planet in our solar system, reflects around 34% of sunlight, making it a prominent feature in the night sky. Mars, while not as
bright as Venus or Jupiter, can still be spotted with relative ease. For stargazers, January is an excellent time to observe these planets. With clear
skies, the combination of Venus, Saturn, Jupiter, and Mars creates a stunning panorama that captivates both amateur and seasoned astronomers alike.

Motion of planets around the sun


Orbital motion refers to the movement of an object in space as it travels forward while being pulled by gravity toward another object. The motion of
planets around the Sun is governed by gravitational forces and follows specific patterns described by Kepler's Laws. Each planet orbits the Sun in a
counterclockwise direction when viewed from above the Sun's north pole. These orbits are elliptical, with the Sun located at one of the foci, which
means that the distance between a planet and the Sun varies throughout its orbit. Kepler's First Law states that planets move in elliptical orbits, while
the Second Law indicates that a line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time. This means that
planets move faster when they are closer to the Sun and slower when they are farther away. The Third Law relates the time a planet takes to orbit the
Sun to its average distance from the Sun, establishing a predictable relationship between these two factors. Newton later accounted for Kepler's findings by explaining that the
gravitational force between two objects depends on their masses and the distance between them. This interplay of forward motion and gravitational
pull keeps planets in stable orbits.
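Kepler's Third Law can be checked with a few lines of code. The sketch below uses the orbital distances and periods listed in the table later in this chapter (years converted to days) and shows that T²/a³ works out to roughly the same value for every planet.

```python
# Kepler's third law: T^2 is proportional to a^3, so T^2 / a^3 should come out
# (nearly) the same for every planet. Distances (million km) and periods (days)
# are taken from the orbital table later in this chapter.
planets = {
    "Mercury": (57.9, 88),
    "Venus":   (108.2, 225),
    "Earth":   (149.6, 365),
    "Mars":    (227.9, 687),
    "Jupiter": (778.6, 11.9 * 365),
    "Saturn":  (1433.5, 29.5 * 365),
}

for name, (a, T) in planets.items():
    print(f"{name:8s} T^2 / a^3 = {T**2 / a**3:.4f}")
# Every ratio comes out close to 0.04 (days^2 per (million km)^3), as the law predicts.
```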

The Earths’ orbit about the sun & Moons’ orbit about the earth.
The Earth revolves in an orbit around the Sun in 365.25 days, measured with reference to the stars, at a speed ranging from 29.29 to 30.29 km/s. The
extra 6 hours, 9 minutes (roughly 0.25 days) per year adds up to about one extra day every fourth year, which is designated a leap year, with the extra day added as
February 29th. The Moon takes about one month to orbit the Earth (27.3 days to complete a revolution relative to the stars, but 29.5 days from one
New Moon to the next). As the Moon completes each 27.3-day orbit around Earth, both the Earth and the Moon are also moving around the Sun.
Gravitational attraction keeps the Earth and the Moon in their orbits.
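The leap-year rule described above can be written as a short sketch. It implements only the simple four-year rule given in the text; the civil (Gregorian) calendar refines this slightly by skipping most century years.

```python
def is_leap_year(year):
    """Simple 4-year rule from the text: the ~0.25 extra day per orbit
    accumulates into one whole day every fourth year (added as February 29th).
    (The Gregorian calendar refines this by skipping most century years.)"""
    return year % 4 == 0

extra_days = 4 * 0.25     # four years of quarter-days = 1 whole day
print(extra_days)         # 1.0
print(is_leap_year(2024)) # True
print(is_leap_year(2023)) # False
```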

Day and night


The formation of day and night on Earth is a result of its rotation on an imaginary
line known as the axis. As the Earth spins, different parts of the planet are exposed
to sunlight, creating the cycle of day and night. When a specific area faces the
Sun, it experiences daylight, while the opposite side remains in darkness,
resulting in nighttime. This rotation occurs approximately every 24 hours, which
is why we have a consistent cycle of day and night. The spherical shape of the
Earth ensures that only half of it can receive sunlight at any given moment. As the
Earth continues to rotate, the areas that were once in darkness gradually move into the light, transitioning from night to day. Daytime is when
you can see the sun from where you are, and its light and heat can reach you. Nighttime is when the sun is on the other side of the Earth from you,
and its light and heat do not get to you. Over a year, the length of the daytime in the part of the Earth where you live changes. Days are longer in
the summer and shorter in the winter.

Seasons in some parts of the earth


The formation of seasons on Earth is primarily due to the planet's axial tilt of
approximately 23.5 degrees. This tilt causes different parts of the Earth to receive
varying amounts of sunlight throughout the year. As Earth orbits the Sun, the
angle at which sunlight strikes the surface changes, leading to the distinct
seasons we experience: spring, summer, autumn, and winter. During summer,
the hemisphere tilted towards the Sun receives more direct sunlight, resulting
in warmer temperatures. Conversely, during winter, the same hemisphere is tilted
away from the Sun, leading to cooler temperatures and shorter days. The transition periods of spring and autumn occur as the Earth moves
between these extremes, with varying sunlight and temperatures. This consistent pattern of changing seasons is a result of Earth's revolution


around the Sun combined with its axial tilt, creating a dynamic and ever-changing climate that influences ecosystems and human activities alike.

Implication of season on activities on earth


The Earth's seasons are a direct result of its axial tilt as it orbits the Sun. This tilt causes variations in sunlight distribution, leading to distinct
seasonal changes. When the Northern Hemisphere tilts towards the Sun, it experiences summer, while the Southern Hemisphere endures winter,
and vice versa. This phenomenon not only affects temperature but also influences the length of daylight, impacting various ecosystems and human
activities. Seasonal changes significantly affect environmental conditions, including precipitation, soil moisture, and river flows. For instance,
spring brings increased rainfall, promoting plant growth, while autumn leads to leaf fall and preparation for winter dormancy. These changes are
crucial for agriculture, as farmers plan planting and harvesting schedules based on seasonal patterns. Moreover, climate change is altering
traditional seasonal behaviors, leading to unpredictable weather patterns. Understanding these implications is vital for adapting agricultural
practices and managing natural resources effectively.

Item: Discuss the impact and implications of changing seasons to the human and other activities on Earth.

Relative motion of the sun and moon and eclipse


The Sun is the largest of the Sun, Earth and Moon. The Earth revolves around the Sun and rotates about its own axis. The Moon revolves around the
Earth while both move around the Sun. When the Sun, the Earth and the Moon lie in a straight line, a shadow is cast either on the Earth (by the Moon) or on
the Moon (by the Earth). This is referred to as an eclipse. During a solar eclipse, the Moon moves between the Earth and the Sun and blocks the sunlight. The
shadow is formed on Earth. During a lunar eclipse, the Earth blocks the sun's light from reaching the moon. The shadow is formed on the moon
as the Earth blocks light from reaching the moon. Since we are standing on Earth, what we see is that the moon gets dark. Other kinds of eclipses
happen too.

Characteristics of inner and outer planets


The solar system is divided into two distinct groups of planets: the inner and outer planets. The four inner planets; Mercury, Venus, Earth, and Mars
are characterized by their rocky composition, solid surfaces, and metallic cores. They have shorter orbits around the Sun, slower rotation speeds, and lack ring
systems. Their proximity to the Sun results in higher temperatures and a more solid structure compared to their outer counterparts. In contrast, the
outer planets; Jupiter, Saturn, Uranus, and Neptune; are significantly larger and primarily composed of gases like hydrogen and helium. These gas
giants have longer orbits and faster rotation rates, which contribute to their dynamic atmospheres. Unlike the inner planets, the outer planets possess
extensive ring systems and numerous moons, showcasing their complex gravitational interactions.
Hence (i) Inner planets are denser than outer planets, (ii) Outer planets are made of gas, ice, and rocks, whereas the inner planets are made of
iron, nickel, and silicates, (iii) Inner planets have very few to no moons around them, whereas the outer planets have dozens of moons orbiting them.

Explain why Earth is the only planet that supports life


Earth is the only planet known to support life, primarily due to its unique combination of factors. First, Earth is situated in the habitable zone,
where temperatures allow for liquid water to exist. This is crucial, as water is a fundamental requirement for all known life forms. Additionally,
Earth has a diverse ecosystem that provides various habitats and resources necessary for survival. The planet's atmosphere is rich in oxygen,
which is essential for the respiration of most living organisms. This atmosphere also protects life from harmful solar radiation and helps regulate
temperature. Earth’s geological activity, driven by plate tectonics, contributes to a dynamic environment that fosters biodiversity. While scientists
continue to search for life beyond Earth, no other planet in our solar system has demonstrated the same capacity to support life, making Earth
truly unique in the cosmos. Earth is the third planet from the Sun. It is one of the inner planets. As far as we know, Earth is also the only planet


that has liquid water. Earth's atmosphere has oxygen. The water and oxygen are crucial to life, as we know it. Therefore, the Earth is able to
support life in it.

FORMATION OF OCEAN TIDES


Ocean currents are driven by wind, water density differences, and tides.
Oceanic currents describe the movement of water from one location to
another. Ocean tides are primarily caused by the gravitational pull of the Moon
and, to a lesser extent, the Sun. This gravitational attraction generates a tidal
force that results in the regular rise and fall of ocean waters, known as tides.
As the Earth rotates, the Moon's gravitational influence causes the water to
bulge out on the side of the Earth facing the Moon, creating a high tide. At the same time, on the opposite side, another high tide occurs due to
the centrifugal force created by the Earth-Moon system. Sir Isaac Newton highlighted the interplay between gravitational forces and the Earth's
rotation. The result is a cyclical pattern of high and low tides, which can vary in intensity depending on the alignment of the Earth, Moon, and
Sun.

Gravitational Pull of the Moon and Sun


According to Newton, tides result from the gravitational attraction exerted by these celestial bodies. The Moon, being closer to Earth, has a more
significant impact, causing the oceans to bulge on both the side facing the Moon and the opposite side. As the Earth rotates, these bulges (direct
tidal bulge) create high and low tides. The Sun also exerts a gravitational force, but its effect is less pronounced due to its greater distance. When
the Sun and Moon align during full and new moons, their combined gravitational pull results in higher high tides, known as spring tides. Conversely,
when they are at right angles (at the first and third quarters of the Moon), the Sun's pull partly counteracts the Moon's, leading to lower high tides
and less extreme tidal ranges, known as neap tides.

Centrifugal Force from Earth's Rotation


Centrifugal force, from a Latin term meaning "center fleeing," describes the apparent force that
pushes objects outward when they follow a curved path. This phenomenon is particularly
evident due to Earth's rotation. As the planet spins, objects on its surface experience a
force directed away from the axis of rotation, which is most pronounced at the equator
and diminishes towards the poles. This force is often referred to as a "fictitious" force
in Newtonian mechanics, as it arises from the perspective of a rotating frame of reference.
While gravity pulls objects toward the Earth's center, centrifugal force acts outward,
creating a balance that affects everything from the shape of the Earth to the behavior of
objects in motion. Thus, tides form two bulges on opposite sides of the planet: one caused
by the Moon's gravity and the other by centrifugal force.

Earth's Rotation and Tidal Cycles


Earth's rotation plays a crucial role in the formation of tidal cycles, which are primarily influenced by the gravitational pull of the Moon and, to a
lesser extent, the Sun. As the Earth rotates on its axis approximately every 24 hours, it passes through these tidal bulges, resulting in two high tides
and two low tides each day for most coastal areas. This cyclical pattern is essential for understanding ocean dynamics and coastal ecosystems. Tidal
friction, caused by the interaction between the Earth's rotation and the gravitational forces exerted by the Moon, leads to a gradual slowing of the
Earth's spin. Over long periods, this effect can result in significant changes in the length of a day. Additionally, the alignment of the Earth, Moon, and
Sun can amplify tidal effects, creating higher tides when they are in a straight line. As Earth rotates, different areas move in and out of the tidal bulges,
leading to the cyclical rise and fall of sea levels. Most coastal areas experience two high
tides and two low tides every 24 hours and 50 minutes. This is called a semi-diurnal
tide. Some areas have one high and one low tide per day (diurnal tide) or mixed patterns
(mixed tide).
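Because a semi-diurnal pattern repeats every 24 hours 50 minutes, successive high tides arrive roughly 12 hours 25 minutes apart. The sketch below works out a few high-tide times from an assumed, purely illustrative first high tide.

```python
from datetime import datetime, timedelta

# Semi-diurnal pattern: two high tides per tidal "day" of 24 h 50 min,
# so successive high tides are roughly 12 h 25 min apart.
TIDAL_DAY = timedelta(hours=24, minutes=50)
HIGH_TIDE_INTERVAL = TIDAL_DAY / 2

first_high_tide = datetime(2025, 3, 1, 6, 0)   # assumed/illustrative starting time
for n in range(4):
    print(first_high_tide + n * HIGH_TIDE_INTERVAL)
# Each high tide arrives about 25 minutes later each half-day, which is why
# tide times drift later from one day to the next.
```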

CORIOLIS EFFECT
The Coriolis Effect is a phenomenon that causes moving objects, such as air currents and
planes, to appear to curve rather than travel in a straight line. This effect arises from the
rotation of the Earth on its axis. In the Northern Hemisphere, moving objects are deflected
to the right, while in the Southern Hemisphere, they are deflected to the left. This
deflection influences weather patterns and ocean currents, playing a crucial role in the dynamics of our atmosphere. Without the Coriolis Effect, air
would flow directly from high-pressure areas to low-pressure areas, leading to a very different weather system. Instead, the curved paths created by
this effect contribute to the formation of cyclones and anticyclones, which are essential for understanding global weather patterns. The rotation of
Earth causes the Coriolis effect, which deflects the movement of water. This affects the direction and flow of tidal currents, especially in large ocean
basins.
The Asteroid Belt
The asteroid belt is a vast region of space located between the orbits of Mars and Jupiter,
containing the majority of asteroids in our Solar System. This belt is estimated to harbor between
1.1 and 1.9 million asteroids larger than 1 kilometer (0.6 miles) in diameter, along with millions
of smaller fragments. These celestial bodies are remnants from the early solar system, composed
of materials that never coalesced into a planet due to the gravitational influence of nearby Jupiter.
The asteroid belt serves as a boundary between the inner rocky planets and the outer gas giants,
marking a significant transition in the solar system's structure. Its formation is still a subject of
research, with theories suggesting it began as an empty space that later filled with debris from
the solar system's formation.
The asteroid belt is a sparsely populated region within the solar system occupied by asteroids that orbit the Sun in a roughly
ring-shaped band, held in their orbits by gravity. Asteroids are small rocky bodies, sometimes composed of
iron and nickel, which orbit the Sun. The asteroid belt exists between the orbits of Mars and Jupiter, between 330 million and 480 million
kilometers from the Sun.


The asteroid belt a failed planet


The asteroid belt, located between Mars and Jupiter, is often described as the remnants of
a failed planet. This theory suggests that the gravitational influence of Jupiter prevented
the coalescence of planetesimals, which are the building blocks of planets. Instead of
forming a single, large body, these fragments remain scattered in a vast region of space.
The belt contains millions of asteroids, varying in size from small rocks to dwarf planet-
sized objects like Ceres. Despite its vastness, the total mass of the asteroid belt is
insufficient to form a planet. The gravitational forces exerted by Jupiter disrupt the orbits
of these celestial bodies, maintaining their status as a collection of debris rather than a
cohesive planet. While the asteroid belt may be seen as a failed planet, it also serves as a crucial area for understanding the early solar system's
formation and evolution, offering insights into the processes that shaped our planetary neighborhood. Astronomers now believe the asteroid belt
never gravitationally accreted into a planet, but was kept from doing so because of the massive gravity from Jupiter's mass.

Origin and structure of the universe


The origin and structure of the universe are primarily explained by the Big Bang
theory, which posits that the universe began approximately 13.7 billion years ago
from an infinitely dense and hot core. This initial explosion led to the rapid
expansion of space, cooling the universe and allowing matter to form. As the
universe expanded, it transitioned from a hot, formless state into a structured
cosmos filled with galaxies, stars, and planets. Cosmologists study the universe's
evolution, seeking to understand the large-scale structures we observe today. They
explore concepts such as dark matter, which constitutes about 80% of the
universe's mass, and its role in shaping cosmic structures. Theories of inflation
further augment the Big Bang model, explaining the uniformity and distribution of
galaxies. The prevailing idea about how the universe was created is called the big-bang theory. Although the ideas behind the big-bang theory feel
almost mystical, they are supported by Einstein’s theory of general relativity. The big-bang theory proposes the universe was formed from
an infinitely dense and hot core of the material. The bang in the title suggests there was an explosive, outward expansion of all matter and
space that created atoms. Spectroscopy confirms that hydrogen makes up about 74% of all matter in the universe. Since its creation, the universe has
been expanding for 13.8 billion years and recent observations suggest the rate of this expansion is increasing.

The Earth's Axis


The Earth is a rocky planet that revolves in a near circular orbit around the Sun. It rotates on its axis, which is a line through the north and south poles.
The axis is tilted at an angle of approximately 23.5° from the vertical. The Earth completes one full rotation on its axis in approximately 24 hours (1
day). This rotation creates the apparent daily motion of the Sun rising and setting. Rotation of the Earth on its axis is therefore responsible for
the periodic cycle of day and night.


Rising and Setting of the Sun


The Earth's rotation on its axis makes the Sun look as if it moves from east to west.
At the equinoxes, the Sun rises exactly in the east and sets exactly in the
west. An equinox (meaning 'equal night') is when day and night are
approximately of equal length. However, the exact locations where the Sun rises
and sets change throughout the seasons. In the northern hemisphere (above the
equator): In summer, the sun rises north of east and sets north of west.
In winter, the sun rises south of east and sets south of west. The Sun rises in the
east and sets in the west. Its approximate area changes throughout the year. The
Sun is highest above the horizon at noon (12 pm).
In the northern hemisphere, the daylight hours are longest around 21st June. This day is known as the Summer Solstice and is when the
Sun reaches its highest point in the sky for the year. The daylight hours then decrease to their shortest around 21st December. This is known as the Winter
Solstice and is when the Sun is at its lowest point in the sky for the year.

The Earth's Orbit


The Earth orbits the Sun once in approximately 365 days. This is 1 year. The
combination of the orbiting of the Earth around the Sun and the Earth's tilt
creates the seasons. Seasons in the Northern hemisphere are caused by the tilt of
the Earth. Over parts B, C and D of the orbit, the northern hemisphere is
tilted towards the Sun. This means daylight hours are more than hours of
darkness. This is spring and summer. The southern hemisphere is
tilted away from the Sun. This means the days are shorter than the nights. This
is autumn and winter. Over parts F, G and H of the orbit, the northern
hemisphere is tilted away from the Sun. The situations in both the northern
and southern hemisphere are reversed. It is autumn and winter in the
northern hemisphere, but at the same time, it is spring and summer in the southern hemisphere. At C: This is the summer solstice. The northern
hemisphere has the longest day, whilst the southern hemisphere has its shortest day. At G: This is the winter solstice. The northern hemisphere
has its shortest day, whilst the southern hemisphere has its longest day. At A and D: Night and day are equal in both hemispheres. These are
the equinoxes.

Moon & Earth


The Earth and Moon have a fascinating origin story that dates back approximately 4.6 billion years. Scientists believe that the Earth formed from a
rotating disk of dust and gas surrounding the early Sun. This process led to the creation of the Moon, which is unique in its size relative to Earth,
being about 27% of Earth's diameter. This significant ratio is unlike any other moon-planet relationship in our solar system. The Moon plays a crucial
role in making Earth more habitable. It influences ocean tides, stabilizes Earth's axial tilt, and serves as a historical record of our solar system's
evolution. The Moon orbits Earth approximately every 27.3 days, creating a rhythmic cycle that affects various natural phenomena.
The Moon, Earth's only natural satellite, plays a crucial role in our planet's ecosystem. With a mass of 1.2% that of Earth and a diameter of
3,474 km (2,159 mi), it orbits at an average distance of about 239,000 miles (385,000 kilometers). This proximity influences various natural phenomena,
including ocean tides, which are essential for marine life and coastal ecosystems. The Moon is not just a beautiful sight in the night sky; it also serves
as a historical record of our solar system. Its surface, marked by craters and maria, provides insights into the early solar system's conditions. The


Moon's phases, from new to full, have guided human calendars and cultural practices for centuries, and named full moons and other lunar events continue to captivate skywatchers.
The Moon travels around the Earth in a roughly circular orbit once a month. This takes 27-29 days. The Moon also rotates on its own axis once a month,
so it always has the same side facing the Earth. We never see the hemisphere that is always facing away from Earth, although astronauts have orbited
the Moon and satellites have photographed it. The Moon shines with reflected light from
the Sun; it does not produce its own light.

Phases of the Moon


The way the Moon's appearance changes across a month, as seen from Earth, is called its
periodic cycle of phases.

Phases of the Moon as it orbits around Earth


The phases of the Moon are a fascinating aspect of our night sky, occurring in a regular cycle
over approximately 29.5 days. There are eight distinct phases: New Moon, Waxing Crescent,
First Quarter, Waxing Gibbous, Full Moon, Waning Gibbous, Third Quarter, and Waning
Crescent. Each phase represents the varying amounts of sunlight reflecting off the Moon's surface as it orbits Earth. During the
New Moon phase, the Moon is not visible from Earth, as the illuminated side faces away from us. As it transitions to the Waxing Crescent, a sliver
of light becomes visible. The First Quarter marks the halfway point where half of the Moon is illuminated, leading to the Full Moon, when the
entire face is lit. After this, the Moon begins to wane, transitioning through the Waning Gibbous and Third Quarter phases before returning to the
Waning Crescent.
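A rough way to estimate the phase from the number of days since the last New Moon, using the 29.5-day cycle given above, is sketched below; the phase boundaries used are approximate and purely illustrative.

```python
# Rough phase from days elapsed since the last New Moon (29.5-day cycle).
# The eight named phases each occupy about 29.5 / 8 = 3.7 days.
PHASES = ["New Moon", "Waxing Crescent", "First Quarter", "Waxing Gibbous",
          "Full Moon", "Waning Gibbous", "Third Quarter", "Waning Crescent"]

def moon_phase(days_since_new_moon):
    cycle_position = (days_since_new_moon % 29.5) / 29.5      # 0 .. 1 through the cycle
    index = int((cycle_position * 8) + 0.5) % 8               # nearest of the 8 phases
    return PHASES[index]

print(moon_phase(0))     # New Moon
print(moon_phase(7.4))   # First Quarter
print(moon_phase(14.8))  # Full Moon
```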

Gravitational Field Strength


The strength of gravity on different planets affects an object's weight on that planet.
Weight is defined as the force acting on an object due to gravitational attraction.
Planets have strong gravitational fields. Hence, they attract nearby masses with a strong gravitational force. Because of weight: Objects stay firmly on
the ground, Objects will always fall to the ground, and Satellites are kept in orbit.
Objects are attracted towards the centre of the Earth due to its gravitational field strength.
Both the weight of any body and the value of the gravitational field strength, g, differ between the surface of the Earth and the surface of other bodies
in space, including the Moon, because of the planet or moon's mass. The greater the mass of the planet, the greater its gravitational field strength.
Higher gravitational field strength means a larger attractive force towards the centre of that planet or moon, g varies with the distance from a planet,
but on the surface of the planet, it is roughly the same. The strength of the field around the planet decreases as the distance from the planet increases.
However, the value of g on the surface varies dramatically for different planets and moons. The gravitational field strength (g) on the Earth is
approximately 10 N/kg. The gravitational field strength on the surface of the Moon is less than on the Earth. This means it would be easier to lift a
mass on the surface of the Moon than on the Earth. The gravitational field strength on the surface of the gas giants (e.g. Jupiter and Saturn) is more than
on the Earth. This means it would be harder to lift a mass on the gas giants than on the Earth.
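The effect of different values of g can be shown with the relation weight = mass × gravitational field strength (W = mg). In the sketch below, the Earth value of about 10 N/kg comes from the text, while the Moon and Jupiter values are typical approximate figures quoted only for illustration.

```python
# Weight W = m * g. The g values are approximate surface field strengths (N/kg):
# Earth (≈10) is from the text; the Moon and Jupiter values are typical figures.
g_values = {"Earth": 10, "Moon": 1.6, "Jupiter": 25}

mass = 60                                      # kg, e.g. a student
for body, g in g_values.items():
    print(f"Weight on {body}: {mass * g:.0f} N")
# The same 60 kg mass weighs about 600 N on Earth, about 96 N on the Moon
# (easier to lift) and about 1500 N on Jupiter (much harder to lift).
```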

Value for g on the different objects in the Solar System


An object's mass remains the same at all points in space.
However, on a planet such as Jupiter its weight will be much greater; for example, a person's weight on Jupiter would be so large that they would find it difficult to stand up.


Gravitational Attraction of the Sun


There are many orbiting objects in our solar system and they each orbit a different type of planetary body.

Orbiting Objects or Bodies in Our Solar System


In our Solar System, the term "orbit" refers to the gravitational path that various celestial
bodies follow around the Sun. This includes not only the eight major planets; Mercury, Venus,
Earth, Mars, Jupiter, Saturn, Uranus, and Neptune; but also a multitude of smaller objects such
as asteroids, comets, and moons. Each of these bodies has a unique orbital trajectory
influenced by gravitational forces. Beyond the main planets, the Solar System is populated
with millions of asteroids, primarily located in the "main-belt" between Mars and Jupiter.
There are thousands of smaller bodies, particularly in the Kuiper Belt and Oort Cloud,
which exhibit eccentric orbits. These irregular paths can sometimes be attributed to past
stellar flybys that have altered their trajectories.
For example, consider a planet orbiting the Sun. In order to orbit a body such as a star or a planet, there has to be a force pulling the object towards that body, and
gravity provides this force. The force that keeps a planet in orbit around the Sun is therefore the gravitational attraction of the Sun. The
gravitational force exerted by the larger body on the orbiting object is always attractive and always acts towards the centre of the larger body, so the
force on the orbiting object is directed from the object towards the centre of the Sun. This gravitational force keeps the body moving in a circular path.
Gravitational attraction likewise causes the Moon to orbit around the Earth.

Sun's Gravitational Field & Distance


As the distance from the Sun increases, the strength of the Sun's gravitational field acting on
the planet decreases, and the orbital speed of the planet decreases. To keep an object in a
circular path, it must have a centripetal force. For planets orbiting the Sun, this force
is gravity. Therefore, the strength of the Sun's gravitational field at the
planet determines how much centripetal force acts on the planet.
This strength decreases the further away the planet is from the Sun, and so the centripetal force is weaker. A weaker centripetal force can only hold a planet in orbit at a smaller
orbital speed, so the planets further away from the Sun have smaller orbital speeds. This also equates to a longer orbital duration.

Table of Orbital Distance, Speed and Duration

Planet | Orbital distance (million km) | Orbital speed (km/s) | Orbital duration
Mercury | 57.9 | 47.9 | 88 days
Venus | 108.2 | 35.0 | 225 days
Earth | 149.6 | 29.8 | 365 days
Mars | 227.9 | 24.1 | 687 days
Jupiter | 778.6 | 13.1 | 11.9 years


Saturn | 1433.5 | 9.7 | 29.5 years
Uranus | 2872.5 | 6.8 | 84 years
Neptune | 4495.1 | 5.4 | 165 years
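Treating each orbit as a circle, the orbital speed should be roughly the circumference divided by the period, v = 2πr/T. The sketch below checks a few rows of the table above using this relation.

```python
import math

# Check that orbital speed ≈ circumference / period (v = 2πr / T), treating the
# orbits as circles. r comes from the "Orbital distance" column (million km)
# and T from the "Orbital duration" column of the table above.
SECONDS_PER_DAY = 86_400

def orbital_speed_km_s(r_million_km, period_days):
    circumference_km = 2 * math.pi * r_million_km * 1e6
    return circumference_km / (period_days * SECONDS_PER_DAY)

print(f"Earth:   {orbital_speed_km_s(149.6, 365):.1f} km/s")          # table lists 29.8
print(f"Mercury: {orbital_speed_km_s(57.9, 88):.1f} km/s")            # table lists 47.9
print(f"Jupiter: {orbital_speed_km_s(778.6, 11.9 * 365):.1f} km/s")   # table lists 13.1
```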

NOTE: It is important to refer to the force of gravity as 'gravitational attraction', 'strength of the Sun's gravitational field' or
'the force due to gravity'. Avoid terms such as 'the Sun's gravity' or, even more vaguely, 'the force from the Sun'.

Orbits & Conservation of Energy


An object in an elliptical orbit around the Sun travels at a different speed depending
on its distance from the Sun. Although these orbits are not circular, they are
still stable. For a stable orbit, the radius must change if the comet's orbital
speed changes.
As the comet approaches the Sun: The radius of the orbit decreases, the orbital speed increases due to the Sun's strong gravitational pull. As
the comet travels further away from the Sun: the radius of the orbit increases, and the orbital speed decreases due to a weaker gravitational
pull from the Sun. Comets travel in highly elliptical orbits, speeding up as they approach the Sun.

Conservation of Energy
Although an object in an elliptical orbit, such as a comet, continually changes its speed, its energy must still be conserved. Throughout the orbit, the
gravitational potential energy and kinetic energy of the comet changes.
As the comet approaches the Sun:
It loses gravitational potential energy and gains kinetic energy; this causes the comet to speed up. This increase in speed causes a slingshot effect, and
the body will be flung back out into space again, having passed around the Sun. As the comet moves away from the Sun: It gains gravitational
potential energy and loses kinetic energy, this causes it to slow down. Eventually, it falls back towards the Sun once more. In this way, a stable
orbit is formed.
Remember that an object's kinetic energy is defined by E = ½mv², where m is
the mass of the object and v is its speed. Therefore, if the speed of an object
increases, so does its kinetic energy. Its gravitational potential energy therefore
must decrease for energy to be conserved.
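A simple numerical sketch of this energy bookkeeping is shown below. The comet mass and potential energy values are purely illustrative; the point is that the kinetic energy gained near the Sun equals the potential energy lost.

```python
import math

# Energy conservation for a comet in an elliptical orbit: as it falls toward the
# Sun it loses gravitational potential energy and gains an equal amount of
# kinetic energy. All numbers here are illustrative, not real data.
m = 1e13                      # comet mass, kg (illustrative)

ke_far = 0.5 * m * 1_000**2   # kinetic energy when far from the Sun (v = 1 km/s)
pe_far = -2.0e19              # potential energy when far (illustrative)
pe_near = -3.0e19             # potential energy when near (lower, i.e. more negative)

total = ke_far + pe_far                       # conserved total energy
ke_near = total - pe_near                     # KE gained equals PE lost
v_near = math.sqrt(2 * ke_near / m)

print(f"Speed near the Sun ≈ {v_near:.0f} m/s (up from 1000 m/s far away)")
```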

The Sun
The Sun is the star at the center of our Solar System, a massive sphere of hot plasma
that generates energy through nuclear fusion. Part of this energy is emitted from
its surface as visible light, ultraviolet, and infrared radiation, providing most of the energy for life on Earth. This process heats the Sun to
incandescence, producing the light and warmth essential for life on Earth. Comprising over 99% of the Solar System's mass, the Sun's gravitational
pull keeps planets, moons, and other celestial bodies in orbit. As the closest star to Earth, the Sun plays a crucial role in sustaining life. It provides
the energy necessary for photosynthesis, allowing plants to grow and, in turn, supporting the entire food chain. The Sun's energy also influences
Earth's climate and weather patterns, making it a vital component of our ecosystem. The Sun is a medium-sized star consisting of
mainly hydrogen and helium. It radiates most of its energy in the infrared, visible and ultraviolet regions of the electromagnetic spectrum.


Our Sun
Stars come in a wide range of sizes and colours, from yellow stars to red dwarfs,
from blue giants to red supergiants. These can be classified according to their colour.
Warm objects emit infrared and extremely hot objects emit visible light as well.
Therefore, the colour they emit depends on how hot they are. A star's colour is
related to its surface temperature. A red star is the coolest (at around 3000 K).
A blue star is the hottest (at around 30 000 K). The colour of a star correlates to its
temperature.

Nuclear Fusion in Stars


Nuclear fusion is the fundamental process that powers stars, including our Sun.
It occurs when two light atomic nuclei, typically hydrogen, collide and merge
to form a heavier nucleus, such as helium. This reaction releases vast amounts
of energy in the form of light and heat, which is essential for maintaining the
star's temperature and preventing gravitational collapse. In the core of stars,
extreme temperatures and pressures facilitate these fusion reactions. The
energy produced not only sustains the star's brightness but also contributes to
the creation of heavier elements through a process known as stellar nucleosynthesis. As stars evolve, they can fuse heavier elements, enriching the
universe with diverse chemical elements.
In the centre of a stable star, hydrogen nuclei undergo nuclear fusion to form helium. Deuterium and tritium are both isotopes of hydrogen; they can be formed through other fusion reactions in the star. A huge amount of energy is released in the reaction, and this provides a pressure that prevents the star from collapsing under its own gravity. A key example is the fusion of deuterium and tritium to form helium with the release of energy.
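The equation for this reaction does not appear in this extract, so the standard deuterium–tritium fusion equation is written out here for completeness:
²₁H + ³₁H → ⁴₂He + ¹₀n + energy
One deuterium nucleus fuses with one tritium nucleus to give a helium-4 nucleus and a neutron, with a large release of energy.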
ECLIPSES
Eclipses are fascinating astronomical events that occur when one celestial body
moves into the shadow of another, temporarily obscuring it. On Earth, we
primarily experience two types of eclipses: solar and lunar. A solar eclipse
happens when the Moon passes directly between the Sun and Earth, blocking
the Sun's light and casting a shadow on Earth. This alignment can result in a
total, partial, or annular eclipse, depending on the positions of the Sun, Moon,
and Earth. Conversely, a lunar eclipse occurs when the Earth comes between the
Sun and the Moon, causing the Earth's shadow to fall on the Moon. This can only happen during a full moon and can also be total or partial. Both types
of eclipses offer unique opportunities for observation and study, captivating skywatchers and scientists alike.


SOLAR ECLIPSE
A solar eclipse is a fascinating celestial event that occurs when the Moon
passes directly between the Earth and the Sun, temporarily obscuring the
Sun's light. This alignment causes the Moon to cast a shadow on specific
areas of the Earth, resulting in a partial or total blockage of sunlight for
observers in those regions. There are three main types of solar eclipses: total,
partial, and annular. A total solar eclipse occurs when the Moon completely
covers the Sun, allowing viewers in the path of totality to experience
darkness during the day. A partial eclipse happens when only a portion of the Sun is obscured, while an annular eclipse occurs when the Moon is too
far from Earth to completely cover the Sun, resulting in a "ring of fire" effect. Solar eclipses are relatively rare events, with total eclipses visible from
only a limited area on Earth. They offer a unique opportunity for scientific observation and public engagement with astronomy.

LUNAR ECLIPSE
A lunar eclipse is a fascinating astronomical event that occurs when the Earth
positions itself directly between the Sun and the Moon. This alignment causes
the Earth to cast a shadow on the Moon, resulting in a temporary darkening of
its surface. Unlike solar eclipses, which can only be viewed from specific
locations on Earth, lunar eclipses are visible from anywhere on the night side
of the planet.
During a lunar eclipse, the Moon can take on a striking reddish hue, often referred to as a "blood moon." This happens because sunlight passing through the Earth's atmosphere is bent (refracted) towards the Moon, while Rayleigh scattering removes most of the blue light, so mainly red light reaches the lunar surface.
The result is a captivating display that has intrigued humanity for centuries. Lunar eclipses can occur several times a year, but total eclipses, where the Moon is completely covered by Earth's shadow, are less frequent.

ANNULAR ECLIPSE
An annular solar eclipse is a fascinating celestial event that occurs when the
Moon passes between the Earth and the Sun, but is at or near its farthest
point from Earth. During this type of eclipse, the Moon appears smaller than
the Sun, resulting in a striking "ring of fire" effect, where the outer edges
of the Sun remain visible around the Moon. This phenomenon contrasts with
a total solar eclipse, where the Moon completely covers the Sun. The annular
eclipse is a unique spectacle, as it allows observers in specific regions to
witness this stunning visual display. A notable recent annular solar eclipse occurred on October 14, 2023, crossing parts of North, Central, and South America. Observers in the path of annularity experienced the full effect of the ring of fire, while those outside this path saw a partial eclipse.


ACTIVITY
1. Imagine you are tasked with leading a space mission to explore Mars and establish a permanent colony on one of Jupiter's moons, Europa. Task;
i). What challenges might you face in landing a rover on Mars and sustaining human life on Europa?
ii). How would you address issues like communication delays, extreme weather conditions, resource management (food, water, oxygen), and
the design of long-term habitats?
2. A massive rogue planet enters the solar system, altering the gravitational balance. Task;
i). How might this affect the orbits of planets, including Earth?
ii). If a rare planetary alignment occurred during this event, how could it influence Earth's tides and climate?
iii). What technologies or strategies would you use to monitor and mitigate these effects?
3. NASA has discovered a moon with liquid water beneath its icy surface and potential signs of microbial life. Simultaneously, a solar storm is predicted
to disrupt Earth's satellites and power grids. Task;
i). Design an experiment to search for life on the moon and explain how the findings might reshape our understanding of extraterrestrial life.
ii). What measures would you propose to protect critical infrastructure on Earth from the solar storm?
4. Your spacecraft, en route to Jupiter for an asteroid mining mission, faces a fuel shortage. Task;
i). What alternative energy sources could you rely on to complete the mission?
ii). If the Sun lost its ability to produce energy, how could humanity develop artificial technologies to sustain life in the solar system?
iii). What ethical considerations should guide the use of resources from an asteroid?
5. You discover artifacts from an ancient civilization suggesting advanced knowledge of the solar system, including planetary orbits and eclipses.
Meanwhile, a private company claims ownership of a newly discovered planet. Task;
i). How would you interpret the ancient artifacts, and what significance could they hold for modern science?
ii). Based on current space treaties and international law, how would you address the company’s ownership claim?
iii). What steps could humanity take to ensure fair use and preservation of extraterrestrial resources?


SENIOR THREE VOLUME 2: S3-S4

3.1 LINEAR AND NON-LINEAR MOTION


Learning Outcomes
a) Understand and apply the relationship between speed, distance, and time (u, s)
b) Understand the terms: linear motion, speed, average speed, acceleration, and be able to investigate resistance to motion (u,s)
c) Know and use the equations of motion (u,s)
d) Understand the acceleration of bodies moving in a circle and the effect of gravity and air resistance on moving bodies (u,s)
e) Understand linear momentum and that it is conserved during collisions (u,s)
f) Understand that momentum is conserved during a collision and the implication of this (u, s,v/a)
g) Understand and apply Newton's laws of motion (u,s,v/a)
h) Understand the differences between vector and scalar quantities, and give examples of each(u)
i) Understand that a number of forces acting on a body can be represented by a single resultant force (u)

Introduction
Linear motion, also known as rectilinear motion, refers to the movement of an object along a straight line in one dimension. This type of motion can
be described mathematically using a single coordinate, making it simpler to analyze compared to more complex movements. In linear motion, the
velocity of the object remains constant unless acted upon by an external force, as stated in Newton's First Law of Motion. In practical terms, linear
motion can be observed in various everyday scenarios, such as a car driving down a straight road or a ball rolling along a flat surface. The key
characteristics of linear motion include position, displacement, velocity, and acceleration, all of which can be graphically represented to illustrate the
object's movement over time. It is the simplest type of motion, involving objects moving in a straight line, either at a constant speed or accelerating.
Linear motion can be described using concepts such as velocity, acceleration, displacement, and time. It is a fundamental concept in classical mechanics
and is governed by Newton's laws of motion.

Quantities in Linear Motion


In linear motion, the relationship between speed, distance, and time is fundamental.
Speed is defined as the ratio of distance travelled to the time taken, expressed mathematically as speed = distance/time. This relationship allows us to solve various rate problems effectively. For instance, if a person travels at a speed of 30 km/h, we can determine how far they will go in a specific time frame by rearranging the formula to distance = speed × time. When analyzing motion, it is essential to distinguish between constant speed and accelerated motion. Constant speed means that the distance covered in each unit of time remains the same, while accelerated motion involves changes in speed, either increasing or decreasing. This can be visualized using distance-time and speed-time graphs, where straight lines represent constant speed and curves indicate acceleration.
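As a short worked example (the figures are illustrative only): a cyclist who rides at a steady speed of 30 km/h for 2.5 hours covers
distance = speed × time = 30 km/h × 2.5 h = 75 km.
Rearranging the same relation, the time needed to cover 90 km at 30 km/h is time = distance ÷ speed = 90 ÷ 30 = 3 hours.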

Speed, a scalar quantity, measures how fast an object covers distance, while average speed is calculated as the total distance traveled
divided by the total time taken. Acceleration, on the other hand, is the rate of change of velocity over time. It indicates how quickly
an object speeds up, slows down, or changes direction. In scenarios where velocity remains constant, acceleration is zero. Investigating resistance to
motion involves understanding the forces that oppose an object's movement, such as friction and air resistance. These forces can significantly

affect an object's speed and acceleration, making it essential to consider them in any analysis of linear motion. Understanding these concepts is crucial
for applications in physics and engineering.

Definitions of key terms:


Displacement (s): Displacement is the change in position of an object along a straight line. It
is a vector quantity, meaning it has both magnitude (distance) and direction. Example: If a car
moves 5 km east, its displacement is 5 km to the east, regardless of the path taken. The SI unit of
displacement is the meter (m).
Distance: Distance is the total length of the path traveled by an object, irrespective of direction.
Unlike displacement, distance is a scalar quantity; it only has magnitude, not direction.
Example: If a person walks 5 meters forward and then 5 meters back, the total distance is 10
meters, but the displacement is 0 meters.
Velocity (v): Velocity is the rate of change of displacement with respect to time. It is a vector quantity, meaning it describes both speed and direction. Average velocity is calculated as Vavg = Δs/Δt, where Δs = change in displacement and Δt = change in time. Instantaneous velocity is the velocity of an object at a particular moment in time. The SI unit of velocity is the meter per second (m/s).
Speed: Speed is the rate at which an object covers distance. It is a scalar quantity and only has magnitude. Speed is the total distance traveled divided by the time taken. The SI unit of speed is also meters per second (m/s).
Acceleration (a): Acceleration is the rate of change of velocity with respect to time. It is a vector quantity and can describe changes in speed or direction. Uniform acceleration occurs when the velocity of an object changes by the same amount each second. Average acceleration is given by a = Δv/Δt, where Δv = change in velocity and Δt = time interval. The SI unit of acceleration is meters per second squared (m/s²).
Time (t): Time measures the duration of motion. It is a scalar quantity and is usually measured in seconds (s).

Linear motion, also known as rectilinear motion, refers to the movement of an object along a straight line. This type of motion
can be categorized into three primary types: constant velocity motion, uniformly accelerated motion, and free fall.
Uniform motion refers to the movement of an object traveling in a straight line at a
constant speed. In this type of motion, the object covers equal distances in equal intervals
of time, meaning its velocity remains unchanged. This can be represented graphically as
a straight line on a distance-time graph, indicating that the object is not accelerating or
decelerating. For example, a car moving at a steady speed of 60 km/h on a straight
highway exemplifies uniform motion. In contrast, non-uniform motion occurs when an
object’s speed or direction changes, resulting in varying distances covered in equal time
intervals.


Constant velocity motion occurs when an object moves at a steady speed in a straight line, while uniformly accelerated motion involves a consistent
change in velocity over time. Free fall describes the motion of an object under the influence of gravity alone. In practical applications, linear motion
systems are essential in various technologies, including automation and robotics. These systems often incorporate components like linear bearings,
actuators, and slides, which facilitate smooth and precise movement. Examples include screw drives and linear motors, which are crucial for tasks
requiring accurate positioning.

Non-uniform motion refers to the movement of an object that changes its


speed or direction over time. Unlike uniform motion, where an object travels equal
distances in equal intervals, non-uniform motion involves varying distances covered in
the same time frame. This can occur due to acceleration or deceleration, resulting in a
change in velocity. For example, consider a car navigating through city traffic. It may
speed up on a clear road and slow down at traffic lights, demonstrating non-uniform
motion. The velocity of the car is not constant, as it fluctuates based on external
conditions. Graphically, non-uniform motion can be represented by a curve on a
distance-time graph, indicating the changes in speed.

Uniform Linear Motion: This occurs when an object moves along a straight path at a constant velocity,
meaning it covers equal distances in equal time intervals. In this type of motion, acceleration is zero because
the velocity does not change.
Equations: Displacement: s=vt, where v is constant velocity, and t is the time.
Non-Uniform Linear Motion: This type of motion occurs when an object moves along a straight path but
with changing velocity. It experiences acceleration or deceleration. The velocity changes with time due to the
influence of forces such as gravity, friction, or external applied forces.
Equations of Motion for Uniformly Accelerated Linear Motion:
These equations are crucial for solving problems involving linear motion with constant acceleration. They relate displacement, velocity, acceleration,
and time:
1st Equation (velocity–time relationship): v = u + at, where v = final velocity, u = initial velocity, a = acceleration and t = time.
2nd Equation (displacement–time relationship): s = ut + ½at², where s = displacement, u = initial velocity, t = time, and a = acceleration.
3rd Equation (velocity–displacement relationship): v² = u² + 2as, where v = final velocity, u = initial velocity, a = acceleration and s = displacement.
These equations assume that acceleration is constant and the motion occurs in a straight line.
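A brief worked example may help (the numbers are illustrative only). A bus travelling at u = 5 m/s accelerates uniformly at a = 2 m/s² for t = 6 s:
v = u + at = 5 + (2 × 6) = 17 m/s
s = ut + ½at² = (5 × 6) + ½ × 2 × 6² = 30 + 36 = 66 m
As a check, v² = u² + 2as gives 25 + 2 × 2 × 66 = 289 = 17², which is consistent.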

Examples of Linear Motion in Real Life:


Free Fall: When an object falls under the influence of gravity, it experiences linear motion with constant acceleration (due to gravity, g ≈ 9.8 m/s²). The motion of the object can be described using the equations of motion, where u = 0 (when starting from rest), a = g, and t = time taken to fall.
A Car on a straight road: If a car is moving at a constant speed along a straight road, it is in uniform linear motion. If the car accelerates or
decelerates (e.g., during braking), it is in non-uniform linear motion.


Motion of a Train: A train moving between stations experiences both uniform and non-uniform linear motion. It accelerates when leaving the station,
travels at constant speed, and decelerates when approaching the next station.
Projectile Motion (Horizontal Component): In projectile motion, the horizontal component of motion (if air resistance is neglected) is uniform
linear motion, as no force acts horizontally after the object is launched. The area under the velocity-time graph represents the displacement.

Acceleration-Time Graph:
For constant acceleration, the acceleration-time graph is a horizontal line. If acceleration is zero, the graph lies on the time axis.

ACTIVITY
1. A car starts from rest and is accelerated uniformly at a rate of 1 m/s² for 20 seconds. Find: (a) its final velocity, (b) the distance covered.
2. A car moving at 20 m/s accelerates uniformly for 4 seconds. If the acceleration is 2 m/s², find the final velocity and the distance travelled.
3. A body moving with a velocity of 20 m/s accelerates to a velocity of 40 m/s in 5 seconds. Find the acceleration and the distance travelled in the 5 s.
4. A body at rest at a height of 20 m falls freely to the ground. Calculate the velocity with which it hits the ground and the time it takes before striking the ground.
Graphs of motion: Distance – time graphs
i) Body at rest: if a body is at rest, its distance from a certain point does not change as time passes.
ii) Body moving with uniform velocity: if a body is moving with the same velocity, it travels equal distances in equal intervals of time, i.e. its distance increases by equal amounts in equal intervals of time.
iii) Body moving with non-uniform velocity: varying distances are moved in equal intervals of time.
iv) Body moving with decreasing velocity (retardation): for a body whose velocity is decreasing, the graph bends towards the horizontal.

Velocity time graphs


i) Body moving with uniform velocity
ii) Body moving with uniform acceleration
iii) Body moving with uniform deceleration
The area under a velocity–time graph gives the distance covered by the body. The slope (gradient) of a velocity–time graph gives the acceleration.
A velocity-time graph is a crucial tool in physics that illustrates how an object's velocity
changes over time. The graph's shape provides valuable insights into the object's motion.
For instance, a horizontal line indicates constant velocity, meaning the object is moving
at a steady speed without acceleration. A straight diagonal line signifies uniform
acceleration, where the object's velocity increases or decreases at a constant rate. The slope of the line on a velocity-time graph is particularly

significant, as it represents the object's acceleration. A steeper slope indicates greater acceleration, while a flatter slope suggests less acceleration. If
the graph curves, it indicates that the acceleration is changing over time, which can occur in various real-world scenarios.

Activity
1). A car starts from rest and steadily accelerates for 10s to a velocity of 20m/s. It continues with this velocity for a further 20s before it is brought
to rest in 20s
a) Draw a velocity time graph to represent this motion.
b) Calculate (i) acceleration, (ii) deceleration, (iii) distance travelled, (iv) average speed
2). A car starts from rest and accelerates to a velocity of 30 m/s in 10 s. It continues at uniform velocity for 30 s and then decelerates so that it stops in 20 s. (a) Draw a velocity–time graph to represent its motion.
(b) Calculate the acceleration, deceleration, distance travelled and average speed.
3). A racing car starts from rest and moves with uniform acceleration of 3 m/s² for 4 seconds. It then moves with uniform velocity for 2 seconds. It is
brought to rest after a further 2 seconds. Task:
a) Draw a velocity time graph for motion of the car
b) Find total distance travelled
c) Average speed
4). The graph below represents a velocity time graph of a body in motion. Task:
a). Describe the motion of the body,
b). Calculate the total distance travelled, and the average speed.
5). A body of mass 60 kg starts moving with an initial velocity of 15 m/s and accelerates at a rate of 4 m/s² for 5 s, then maintains a constant velocity for another 5 s and is brought to rest in 7 s. Task:
a). Draw a velocity–time graph to represent this motion, b). Calculate the total distance travelled,
c). Calculate the retarding force.

MOTION UNDER GRAVITY


Motion under gravity refers to the movement of an object influenced solely by gravitational force. This phenomenon occurs when an object is in free
fall, meaning the only force acting on it is gravity, which accelerates the object towards the Earth at a constant rate, denoted as 'g' (approximately
9.81 m/s²). As an object falls, its velocity increases due to this constant acceleration. This relationship is crucial in understanding various physical
concepts, including projectile motion and the behavior of celestial bodies. Newton's laws of motion provide a framework for analyzing these
movements, linking the gravitational force to the motion of both falling objects and orbiting planets.
For a body falling under gravity, the acceleration due to gravity is taken as positive, but for a body thrown vertically upwards it is taken as negative. At maximum height, the body is momentarily at rest, so its final velocity is 0 m/s.
Equations of motion for a body falling freely under gravity, g


Falling bodies (downwards): v = u + gt; s = ut + ½gt²; v² = u² + 2gs
Bodies thrown vertically upwards: v = u − gt; s = ut − ½gt²; v² = u² − 2gs
The acceleration of free fall is constant. For a body falling from rest (u = 0 m/s), s = ut + ½gt² reduces to s = ½gt².
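For instance (values chosen for illustration): a ball thrown vertically upwards at u = 10 m/s, taking g = 9.8 m/s², rises until v = 0, so
maximum height = u²/2g = 10²/(2 × 9.8) ≈ 5.1 m, and time to reach the top = u/g = 10/9.8 ≈ 1.0 s.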
ACTIVITY
1). A stone is released from rest at a point 20 m above the ground so that it falls freely vertically downwards. Find the time it takes to land on the ground and the velocity with which it lands.
2). A ball is thrown vertically upwards with an initial velocity of 30 m/s. Task: Determine
(a). the maximum height reached, (b). the time taken to reach the maximum height, (c). the time taken to return to the starting point.
3). A stone is thrown vertically upwards with an initial velocity of 14 m/s. Neglecting air resistance, Task: determine the maximum height reached and the time taken before it returns to the ground.

An experiment to determine the acceleration due to gravity (g) can be conducted using a simple pendulum.
The experiment involves measuring the period of oscillation of the pendulum and using the formula derived from the relationship between the
pendulum's length and its oscillation period.
Materials Required
1. String or thread (approximately 1 meter long)
2. Small spherical bob (a metal or wooden ball)
3. Stopwatch
4. Meter ruler
5. Stand or firm support to suspend the pendulum
6. Protractor (optional, for measuring small angular displacements)

Method I
a) Attach the bob to the end of the string, ensuring it is securely tied.
b) Fix the other end of the string to a stand or any rigid support so that the pendulum can swing
freely.
c) Measure the length (L) of the pendulum from the point of suspension to the center of mass of the bob using the meter ruler.
d) Pull the bob slightly to one side through a small angle so that it undergoes simple harmonic motion.
e) Release the bob without applying any force, allowing it to swing freely.
f) Use the stopwatch to time 20 complete oscillations (one complete oscillation is the motion from one extreme point to the other and back again).
g) Record the time, t, taken for these 20 oscillations.
h) Determine the period (T) of one oscillation using T = t/20.
i) Repeat the timing for 20 oscillations at least three times for the same length of the pendulum, to calculate the average period (T).
j) Adjust the string length to different values (e.g., 0.5, 0.6, 0.7, 0.8 and 0.9 m) and repeat the above procedure for each length.


k) Formula for g: the period of a simple pendulum is given by T = 2π√(L/g).
l) Using the measured length (L) and the calculated period (T), compute g for each length from g = 4π²L/T², and take the average of the values of g obtained from the different lengths to minimize errors.
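As an illustration (the readings are invented, not real data): suppose L = 0.80 m and the average period is T = 1.80 s. Then
g = 4π²L/T² = (39.5 × 0.80)/(1.80)² ≈ 31.6/3.24 ≈ 9.75 m/s²,
which is close to the accepted value of 9.8 m/s².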

Precautions
1. Ensure the pendulum swings in a single plane without any external interference.
2. Keep the amplitude of oscillation small (less than 15°) to satisfy the small-angle approximation.
3. Measure the length of the pendulum accurately from the point of suspension to the center of the bob.
4. Start timing after the pendulum has settled into a regular oscillatory motion.
5. Use a stopwatch with good precision and minimize reaction time errors.

The calculated value of g should be close to the standard value of 9.8 m/s², depending on the accuracy of the measurements and the environmental conditions.

Method II: To determine the acceleration due to gravity (g) using a simple pendulum bob
 Attach a small, dense bob to a string of length L and suspend it from a fixed point.
 Measure the length of the string from the point of suspension to the center of the bob.
 Displace the bob slightly and release it to swing back and forth as a pendulum.
 Using a stopwatch, measure the time (t) it takes for the pendulum to complete 10 oscillations. Divide the total time by 10 to get the period (T) of one oscillation.
 Repeat for several different lengths L and tabulate the results, including values of T².
 Plot a graph of T² against L.
 Calculate the slope S from the graph.
 Determine the value of the acceleration due to gravity from g = 4π²/S, since T² = (4π²/g)L (a sample calculation is sketched below).
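If the readings are processed on a computer, the straight-line fit of T² against L can be done with a short script. The following is a minimal sketch in Python, assuming NumPy is available; the lengths and periods in it are invented sample values, not measured data.

import numpy as np

# Invented sample readings (illustrative only)
L = np.array([0.5, 0.6, 0.7, 0.8, 0.9])        # pendulum lengths in metres
T = np.array([1.42, 1.55, 1.68, 1.80, 1.90])   # measured periods in seconds

T_squared = T ** 2
# Best-fit straight line through the (L, T^2) points: slope S and intercept
slope, intercept = np.polyfit(L, T_squared, 1)

g = 4 * np.pi ** 2 / slope                     # from T^2 = (4*pi^2/g) * L
print(f"slope = {slope:.2f} s^2/m, g = {g:.2f} m/s^2")

With these sample values the script gives a slope of about 4.0 s²/m and g ≈ 9.8 m/s², which is what a careful experiment should give.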
Projectile motion
In projectile motion, the horizontal velocity of the body remains the same throughout the whole journey (trajectory). The acceleration due to gravity continues to act on the body vertically downwards and does not affect the horizontal motion. For the vertical motion, the distance fallen is s = h = ½gt², and the horizontal distance covered is vx × t, where vx is the horizontal velocity of the body and t is the time of flight.
This motion occurs when an object, such as a ball, is launched with an initial force and continues its path due to inertia, while gravity pulls it downward. The resulting path is typically parabolic, although strictly the parabola is an approximation rather than an exact representation.
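As a short worked example (illustrative figures): a ball is thrown horizontally at vx = 15 m/s from the top of a cliff 20 m high, taking g = 9.8 m/s² and ignoring air resistance.
Time of flight: 20 = ½ × 9.8 × t², so t = √(40/9.8) ≈ 2.0 s.
Horizontal distance: vx × t = 15 × 2.0 ≈ 30 m.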


ACTIVITY
1. An object is dropped from a helicopter. If the object hits the ground after 2 seconds, calculate the height from which the object was dropped.
2. An object is dropped from a helicopter at a height of 45 m above the ground.
a) If the helicopter is at rest, how long does the object take to reach the ground, and what is its velocity on arrival?
b) If the helicopter is descending at 1 m/s when the object is released, what would be the final velocity of the object?
3. An object is released from an aircraft travelling horizontally with a constant velocity of 200 m/s at a height of 500 m. Ignoring air resistance;
a) How long does it take the object to reach the ground?
b) Find the horizontal distance covered by the object between leaving the aircraft and reaching the ground.

TICKER – TAPE TIMER


Motion under gravity can be effectively analyzed using a ticker tape timer, a device that records an object's movement over time. As an object falls, the ticker tape produces a series of dots at regular intervals, representing its changes in position. A ticker timer is a steel strip which vibrates rapidly and prints dots on a length of paper tape pulled through it. It prints dots at a fixed rate of f dots per second (its frequency). A ticker timer is used to measure the speed or velocity and the acceleration of bodies in motion.
The distance between these dots indicates the object's velocity; larger gaps mean a higher speed, which is characteristic of free fall under gravity. To calculate the acceleration, one can use the formula acceleration = (v − u)/t, where v is the final velocity, u is the initial velocity, and t is the time taken. By measuring the distances between the dots on the ticker tape and the corresponding time intervals, students can work out the acceleration due to gravity. If n dots are printed on a tape at a frequency f along a given length, then the time between any two successive dots is the period, given by T = 1/f. This implies that the total time taken to print n dots is t = (n − 1)T = (n − 1)/f.
Velocity = distance/time = d ÷ [(n − 1)/f] = df/(n − 1), where n is the number of dots, f is the frequency of the ticker timer, and d is the distance travelled.
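A short worked example (illustrative figures): suppose a ticker timer of frequency f = 50 Hz prints n = 6 dots over a length of tape d = 10 cm = 0.10 m.
Time taken: t = (n − 1)/f = 5/50 = 0.1 s.
Average velocity: v = d/t = 0.10/0.1 = 1.0 m/s.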

Experiment with a ticker timer


The paper tape is pulled by a trolley moving down an inclined plane as shown
in the figure.
Different dot patterns are obtained depending on the motion of the trolley.
When the trolley moves with uniform speed, the spacing between successive dots is the same throughout.


When the trolley is accelerating, the spacing between the dots gets bigger and bigger.
When the trolley is decelerating, the spacing between successive dots gets smaller and smaller.

Sample Activity
1. The paper tape shown below was made by a trolley moving with uniform acceleration. If the ticker timer operated with a frequency of 100 Hz, determine:
i) the initial velocity, ii) the final velocity, iii) the acceleration.
Solution: initial velocity u = 0.5 m/s; final velocity v = 1 m/s; acceleration a = 5 m/s².
2. Below is a tape made by a ticker timer of frequency 50 Hz. Calculate:
i) the initial velocity, ii) the final velocity, iii) the acceleration of the trolley.
Solution: initial velocity u = 1.67 m/s; final velocity v = 4.25 m/s; acceleration a = 1.77 m/s².
3. The ticker timer below printed dots. Assuming it vibrates at a frequency of 20 Hz, calculate:
i) the initial velocity, ii) the final velocity, iii) the acceleration.
Solution: initial velocity u = 10 m/s; final velocity v = 12 m/s; acceleration a = 4.5 m/s².
COLLISION AND LINEAR MOMENTUM
Linear momentum is a fundamental concept in physics that describes the quantity of motion possessed by an object. It is defined as the product
of an object's mass and its velocity, expressed mathematically as 𝑝 = 𝑚𝑣, where p is momentum, m is mass, and v is velocity. Linear
momentum is a vector quantity, meaning it has both magnitude and direction, and its direction aligns with the direction of the object’s velocity.
Momentum plays a central role in understanding motion and collisions because it is conserved in isolated systems, as described by the law of
conservation of momentum. This principle states that the total momentum of a system remains constant if no external forces
act upon it. This conservation law underpins much of classical mechanics, including phenomena such as collisions, rocket propulsion, and motion
in closed systems. For example, in a collision, the combined momentum of all objects before the impact equals their combined momentum afterward,

provided no external forces are involved. Linear momentum also helps quantify the impact force of moving objects, with higher momentum indicating
a greater force needed to stop the object.

Momentum has both magnitude and direction (it is a vector quantity), and its direction is the same as the direction of the velocity. The total linear momentum of a system remains constant if no external forces act on it (conservation of momentum). The SI unit of momentum is kg m/s.
Momentum and Force:
Newton's second law can be written in terms of momentum. The net force acting on an object is equal to the rate of change of its momentum: F = dp/dt, where F is the net force and dp/dt is the rate of change of momentum. This indicates that force is responsible for changing an object's momentum over time.
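A brief worked example (illustrative figures): a ball of mass 0.5 kg speeds up from 4 m/s to 10 m/s in 0.3 s.
Change in momentum: Δp = m(v − u) = 0.5 × (10 − 4) = 3 kg m/s.
Average force: F = Δp/Δt = 3/0.3 = 10 N.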

Collisions
Collisions in physics refer to the sudden and forceful interaction between two or more bodies, such as balls or a golf club striking a ball. These events
occur over a relatively short time frame, during which the bodies exert forces on each other. Collisions occur when two or more bodies interact over
a short period, exerting forces on each other that significantly change their motion. They are fundamental events studied in physics, involving the
principles of Newtonian mechanics and the laws of conservation of momentum and energy.
Collisions can occur in everyday scenarios, such as a car crash or two balls hitting each other,
and on a microscopic scale, such as in the interaction of gas molecules.

Collisions are categorized into three main types based on the conservation of kinetic
energy. In elastic collisions, both momentum and kinetic energy are conserved. Such
collisions are typically observed in ideal systems, such as two billiard balls striking each
other or molecules in a gas. On the other hand, inelastic collisions conserve momentum
but not kinetic energy, as some energy is dissipated as heat, sound, or deformation. For instance, in a car crash, part of the kinetic energy is converted
into the deformation of the vehicles and other forms of energy. A subset of inelastic collisions is the perfectly inelastic collision, where the
colliding objects stick together after impact and move as a single entity, resulting in the maximum loss of kinetic energy.

The study of collisions relies heavily on the law of conservation of momentum, which states that in an isolated system with no external forces,
the total momentum before and after a collision remains constant. This principle can be expressed mathematically for two colliding bodies as
𝑚1 𝑢1 + 𝑚2 𝑢2 = 𝑚1 𝑣1 + 𝑚2 𝑣2, where 𝑚1 and 𝑚2 are the masses, 𝑢1 and 𝑢2 are the initial velocities, and 𝑣1 and 𝑣2 are the
final velocities of the objects. In elastic collisions, kinetic energy is also conserved, while in inelastic collisions, the energy lost is accounted for in
other forms, such as deformation or heat.
A critical parameter in collision analysis is the coefficient of restitution (e), which measures the elasticity of the collision. This coefficient is the
ratio of the relative speed of separation to the relative speed of approach. Values of e range from 0 to 1, where e=1 represents a perfectly elastic
collision, 0<e<1 represents an inelastic collision, and e=0 indicates a perfectly inelastic collision. The impulse experienced during a collision, which
is the product of the force and the duration of impact, causes a change in momentum and is another important aspect of collision dynamics.

Real-life examples of collisions include macroscopic interactions like a bat striking a ball in sports, vehicular accidents, and even celestial events like
asteroid impacts. At the microscopic level, elastic collisions are central to the behavior of gas molecules and are modeled in kinetic theory. Collisions

are also studied in high-energy particle physics, such as those conducted in particle accelerators, where subatomic particles collide to reveal
fundamental properties of matter.
The study of collisions has significant applications in various fields. In traffic safety, understanding collision mechanics aids in the design of safer
vehicles, airbags, and crumple zones that absorb impact forces, reducing injury. In sports, analyzing collisions between balls, players, and equipment
helps improve performance and strategies. In astrophysics, studying celestial collisions, such as asteroid impacts or star mergers, provides insights
into the evolution of the universe. Similarly, material science benefits from understanding energy dissipation during collisions, which is vital for
designing durable and energy-absorbing materials.
A collision occurs when two or more objects come into contact with each other for a short period, exchanging forces and energy.
During a collision, the momentum of the involved objects changes, but the total momentum of the system (if isolated) remains conserved. Collisions
are categorized into two main types based on the conservation of kinetic energy:
a) Elastic Collision. In an elastic collision, both momentum and kinetic energy are conserved. The objects bounce off each other without any lasting
deformation or generation of heat.
Examples: For two objects, 1 and 2, with masses m1 and m2, and initial velocities v1 and v2, the following conservation laws apply:
Conservation of momentum: m1v1 + m2v2 = m1v1′ + m2v2′, where v1′ and v2′ are the velocities of the objects after the collision.
Conservation of kinetic energy: ½m1v1² + ½m2v2² = ½m1v1′² + ½m2v2′²

b) Inelastic Collision
In an inelastic collision, momentum is conserved, but kinetic energy is not. Some kinetic energy is transformed into other forms of energy, such as
heat, sound, or deformation of the objects. In a perfectly inelastic collision, the colliding objects stick together after impact and move as a single
object.
Examples: For two objects undergoing a perfectly inelastic collision: Conservation of momentum: 𝑚1 𝑣1 + 𝑚2 𝑣2 = (𝑚1 + 𝑚2 )𝑣𝑓 , where vf
is the final velocity of the combined mass after the collision. Unlike elastic collisions, the kinetic energy before and after the collision is different.
Some of it is lost during the collision:
KE lost = KE(initial) − KE(final)

Example: A truck of mass 2.5 tonnes moving at 0.5 m/s collides with another truck of mass 2 tonnes moving at 1.2 m/s in the opposite direction. If they undergo a perfectly inelastic collision on impact, determine the velocity with which they move after the collision. (Answer: about 0.26 m/s in the direction of the 2-tonne truck.)
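A brief working for the answer quoted above, taking the direction of the 2-tonne truck as positive:
Total momentum before = (2000 × 1.2) − (2500 × 0.5) = 2400 − 1250 = 1150 kg m/s.
Combined mass after impact = 2500 + 2000 = 4500 kg.
Common velocity = 1150/4500 ≈ 0.26 m/s, in the direction the 2-tonne truck was moving.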

Conservation of Momentum
The law of conservation of momentum is a fundamental principle in physics. It states that in an isolated system (i.e., no external forces), the total
linear momentum of the system remains constant, regardless of the interactions within the system. This is true for both elastic and inelastic collisions.
Conservation of Momentum in Collisions: For two objects 1 and 2, with initial momenta p1 and p2, and final momenta p1′ and p2′, the total momentum before and after the collision is conserved:
p1 + p2 = p1′ + p2′. In vector form: m1v1 + m2v2 = m1v1′ + m2v2′
Applications of Collisions and Momentum
a) Car Crashes


Momentum conservation helps engineers understand how forces are distributed during car crashes. In inelastic collisions, energy is absorbed through
vehicle deformation, minimizing the impact on passengers.
b) Sports
In sports like pool or snooker, players rely on elastic collisions to control the balls' motion. The angles and speeds are crucial to achieve the desired
outcomes based on momentum transfer.
c) Space Exploration
In space, momentum conservation is key in propulsion. Rockets expel gas backward, which provides the forward momentum needed to propel them
forward (action-reaction pairs from Newton’s third law).

Activity
1.A body of mass 2kg travelling at 8m/s collides with a body of mass 3kg travelling at 5m/s in the same direction. If after
collision the two bodies move together. Calculate the velocity with which the two bodies move.
2.A body of mass 20 kg travelling at 5m/s collides with another stationary body with a mass of 10kg and they move
separately in the same direction. If the velocity of the 20 kg mass after collision was 3m/s, calculate the velocity with
which 10kg mass will move.
3.A body of mass 8kg travelling at 20m/s collides with a stationary object and they move together with a velocity of 15m/s.
Calculate the mass of the stationary body.

NEWTON’S LAWS OF MOTION


Isaac Newton's three laws of motion form the foundation of classical mechanics, describing the relationship between the motion of an object and the
forces acting on it.
Newton’s First Law of Motion (Law of Inertia)
Newton's First Law of Motion, also known as the Law of Inertia, states that an object at rest will remain at rest, and an object in motion
will continue to move at a constant velocity in a straight line unless acted upon by a net external force. This principle highlights
the natural tendency of objects to resist changes in their state of motion. Inertia, the property that describes this resistance, is fundamental to
understanding motion. For example, a stationary ball will not roll until a force, such as a kick, is applied. Similarly, a rolling ball will not stop or
change direction unless friction or another force intervenes.
Explanation: This law explains the concept of inertia, which is the tendency of an object to resist changes in its state of motion. If no external force
(like friction, air resistance, or a push/pull) acts on an object, it will either remain still or continue moving in a straight line at constant speed.
Inertia is the tendency of the body to remain at rest or if moving, to continue in its motion in a straight line with uniform
velocity. The larger the mass of the body, the greater is its inertia, therefore the mass of the body is a measure of its inertia.
Examples: A book on a table will remain there unless someone pushes it. A car in motion will continue moving unless brakes (external force) are
applied to slow it down or stop it.
This principle has numerous applications in everyday life.

Applications of the Law: One prominent example is the use of seatbelts and airbags in vehicles. When a car suddenly stops, the body tends to
continue moving forward due to inertia. Seatbelts restrain passengers, preventing injury, while airbags provide additional protection by cushioning

the impact. Another application can be observed in sports, such as baseball. A stationary baseball will not move until a player exerts force by hitting
it with a bat. Similarly, when a ball is thrown, it will continue to travel in a straight line until gravity or air resistance alters its path.

Newton’s Second Law of Motion (Law of Force and Acceleration)


Newton's Second Law of Motion is a fundamental principle in physics that describes how forces affect the motion of objects. It states that the rate of
change of momentum of a body is directly proportional to the applied force and takes place in the direction of the applied
force. This relationship can be expressed with the formula F = ma, where F represents force, m is mass, and a is acceleration. In practical terms,
this means that a greater force will result in a greater acceleration, while a heavier object will require more force to achieve the same acceleration as
a lighter one. For example, pushing a car requires significantly more force than pushing a bicycle due to the car's greater mass.
From the law; the acceleration of an object is directly proportional to the net force acting on it and inversely proportional to its mass. The direction
of the acceleration is in the direction of the net force. Mathematically: F=ma. Where: F = net force acting on the object (in newtons, N) m = mass of
the object (in kilograms, kg), a = acceleration of the object (in meters per second squared, m/s 2)
Explanation: If a force is applied to an object, it will accelerate in the direction of the force. The larger the force, the greater the acceleration.
However, if the object is more massive (heavier), the same force will result in less acceleration. Thus, heavier objects are harder to move or stop than
lighter ones.
Examples: A small car requires less force to accelerate compared to a large truck. If you push a shopping cart with a strong force, it accelerates
quickly; with less force, it moves slower.

Application of the law


For instance, when kicking a ball, the force exerted determines how fast and far the ball travels. Similarly, pushing a shopping cart illustrates the law;
an empty cart accelerates more easily than a loaded one due to its lower mass. In automotive engineering, racing cars are designed to maximize
acceleration by optimizing force and minimizing weight, showcasing the law's relevance in high-performance scenarios. Engineers design vehicles
based on the mass and the forces needed for efficient acceleration and braking, balancing speed and control. Additionally, in aerospace, rockets utilize
Newton's Second Law to propel themselves into space. The thrust generated by the engines must overcome the rocket's mass to achieve the desired
acceleration.

Newton’s Third Law of Motion (Action and Reaction)


Newton's Third Law of Motion is a fundamental principle in physics that states, "For every action, there is an equal and opposite reaction."
This means that whenever one object exerts a force on another, the second object exerts a force of equal strength in the opposite direction on the first
object. This interaction highlights the symmetry present in nature. For example, when a person jumps off a small boat, they push down on the boat
(action), causing the boat to move backward (reaction). Similarly, in rocket propulsion, the engines expel gas downwards, which results in the rocket
being propelled upwards. It emphasizes that forces always occur in pairs, ensuring that motion and stability are maintained in our physical world.
Explanation: When one object exerts a force on a second object, the second object exerts an equal but opposite force on the first object. Forces
always come in pairs. If you push on something, it pushes back with the same amount of force in the opposite direction.

Examples: When you jump off a boat onto the shore, the boat moves backward. Your feet push on the boat (action), and the boat pushes back on
your feet (reaction), causing it to move. A rocket works by expelling gases downward (action), and as a result, the rocket is pushed upward (reaction).

Applications of Newton's Third Law (Action-Reaction):


One of the most prominent examples is rocket propulsion. When a rocket expels gas downwards, the reaction force pushes the rocket upwards, allowing
it to ascend into space. In everyday life, this law is evident in activities such as walking. As a person pushes their foot backward against the ground,
the ground exerts an equal and opposite force that propels them forward. Similarly, swimming relies on this principle; when a swimmer pushes water
backward, the water pushes them forward. Additionally, Newton's Third Law is crucial in sports. Athletes utilize this law to enhance their performance,
whether it's a basketball player jumping or a sprinter pushing off the starting blocks. Airplanes generate lift by pushing air downward with their
wings (action), and the air pushes the airplane upward (reaction), allowing it to fly.

Activity:
1. A 20 kg mass travelling at 5 m/s is accelerated to 8 m/s in 10 s. Task: Determine
(i). the change in momentum, (ii). the rate of change of momentum, (iii). the applied force.
2. A body of mass 600 g moving at 10 m/s is accelerated uniformly at 2 m/s² for 4 s. Task: (i). Calculate the change in momentum, (ii). the rate of change of momentum, (iii). the force acting on the body.
3. A one-tonne car travelling at 20 m/s is accelerated at 2 m/s² for 5 seconds. Calculate (i) the change in momentum, (ii) the rate of change of momentum, (iii) the accelerating force acting on the body.
4. A block of mass 500 g is pulled from rest on a horizontal frictionless bench by a steady force (F) and travels 8 m in 2 seconds. Find (i) the acceleration, (ii) the value of F.

Motion of a Body in a Lift (Elevator)


When a person (or any object) is inside a lift (elevator), the motion of the lift affects the forces acting on the body. Specifically, the apparent weight of
the person changes depending on the motion of the lift. This phenomenon is explained by Newton's second law of motion and the concepts of weight
and normal force. The real weight of a person is the force due to gravity acting on their mass, given by W = mg, where W is the weight, m is the mass
of the person, g is the acceleration due to gravity (approximately 9.8 m/s2)

Forces Acting on the Person in the Lift:


Weight (W = mg): The gravitational force pulling the person downward. Normal force (R): the force exerted by the floor of the lift on the person, which opposes the weight. This is the force you feel as your "apparent weight."
Situations
1. Lift at Rest or Moving with Constant Velocity
When the lift is at rest or moving at constant velocity (either upward or downward), the acceleration is zero. According to Newton’s first law, there is
no change in motion, so the forces are balanced. 𝑅 = 𝑊 = 𝑚𝑔. Apparent weight (what you feel) is equal to your actual weight.
2. Lift Accelerating Upwards
When the lift accelerates upward, there is an additional upward force due to the acceleration of the lift. The normal force (R) increases because the lift
floor pushes harder against the person.
Using Newton’s second law: 𝑅 = 𝑚𝑔 + 𝑚𝑎 = 𝑚(𝑔 + 𝑎), where: a is the acceleration of the lift upwards. Apparent weight increases
because the normal force is greater than the gravitational force alone. You feel heavier during upward acceleration.
3. Lift Accelerating Downwards
When the lift accelerates downward, the normal force decreases because the lift floor is not pushing as hard against the person. Again, applying
Newton's second law: 𝑅 = 𝑚𝑔 − 𝑚𝑎 = 𝑚(𝑔 − 𝑎 ).
Apparent weight decreases because the normal force is less than the gravitational force. You feel lighter during downward acceleration.


4. Free Fall (Lift Cable Breaks)


If the lift were in free fall (e.g., the lift cable snaps, and the lift falls under gravity alone), the acceleration of the lift would be equal to the acceleration
due to gravity g.
In this case: 𝑅 = 𝑚(𝑔 − 𝑔) = 0. Apparent weight is zero. The person would feel weightless because there is no normal force acting on them
(both the person and the lift are falling at the same rate). This is similar to the sensation astronauts experience in free fall in space.

Summary of Apparent Weight in Different Scenarios:


Lift's Motion | Apparent Weight | Feeling
At rest or constant velocity | Apparent weight = Actual weight | Normal (as if on solid ground)
Accelerating upwards | Apparent weight > Actual weight | Heavier
Accelerating downwards | Apparent weight < Actual weight | Lighter
Free fall (cable breaks) | Apparent weight = 0 | Weightless

Conclusion: The apparent weight of a person in a lift changes depending on the motion of the lift. This change is due to the varying normal force,
which either increases or decreases depending on whether the lift is accelerating upwards or downwards.
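A short worked example (illustrative figures): a person of mass 50 kg stands in a lift, taking g = 9.8 m/s².
At rest: R = mg = 50 × 9.8 = 490 N.
Accelerating upwards at 2 m/s²: R = m(g + a) = 50 × (9.8 + 2) = 590 N (the person feels heavier).
Accelerating downwards at 2 m/s²: R = m(g − a) = 50 × (9.8 − 2) = 390 N (the person feels lighter).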

Activity:
Task: Establish the reaction on a woman of mass 70 kg standing in a lift if the lift is
(a) at rest
(b) ascending upwards with uniform acceleration of 4m/s2
(c) moving down wards with uniform acceleration of 4m/s2

EXPLOSION AND RECOIL VELOCITY


Explosions can be understood through the lens of momentum changes and Newton's laws of motion. When an explosion occurs, forces are exerted,
and objects are propelled in different directions, often with rapid acceleration. Key examples of these concepts include firing a gun, rocket propulsion,
and a balloon releasing air. Momentum is the product of an object's mass and velocity, and it is a conserved quantity in isolated systems. In the
situation of explosions, the total momentum before and after an explosion must remain constant (assuming no external forces act on the system). This
principle is known as the conservation of momentum.

Before an explosion, the total momentum of the system is usually zero or some constant value.
During the explosion, forces act on different parts of the system, causing them to move in opposite directions. After the explosion, the sum of the
momenta of all the parts must equal the total momentum before the explosion.
Example of Conservation of Momentum in an Explosion
If a firecracker explodes, the fragments move in different directions. While each fragment has its own momentum, the vector sum of the momenta of
all fragments equals the momentum of the firecracker before the explosion, maintaining the system’s overall momentum.

ACTIVITY
A bullet of mass 55 g is fired from a gun of mass 10 kg with a muzzle velocity of 400 m/s. Determine the recoil velocity of the gun. (Answer: 2.2 m/s in the opposite direction.)
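A brief working for the answer quoted above: before firing, the total momentum of the gun–bullet system is zero, so afterwards
momentum of bullet = momentum of gun: 0.055 × 400 = 10 × v, giving v = 22/10 = 2.2 m/s, in the direction opposite to the bullet.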


Newton's Laws of Motion and Explosions


a) Newton’s First Law (Law of Inertia)
Newton's first law states that an object at rest will remain at rest, and an object in motion will remain in motion with a constant velocity unless acted
upon by an external force.
Application to Explosions: Before an explosion, all parts of the object (e.g., gun, rocket, and balloon) are at rest or moving with constant velocity.
The explosion introduces a force, overcoming the inertia of the object and causing its parts to move rapidly. Without the explosion, the object would
remain at rest.
b) Newton’s Second Law (Law of Force and Acceleration)
Newton’s second law explains how the force applied to an object is directly proportional to the acceleration it experiences and inversely proportional
to its mass: 𝐹 = 𝑚𝑎
In the context of explosions, the rapid release of energy exerts large forces on the object, causing a dramatic acceleration of its parts.

Application to Explosions: In a gun, the expanding gases from the explosion inside the barrel exert a force on the bullet, accelerating it out of
the barrel. The greater the force applied to the bullet (which is proportional to the energy of the explosion), the greater the acceleration, and the faster
the bullet will move. In a rocket, the combustion of fuel generates a high-pressure gas that accelerates out of the rocket's engine, pushing the rocket
in the opposite direction.

c) Newton’s Third Law (Action-Reaction Law)


Newton’s third law states that for every action, there is an equal and opposite reaction.
Application to Explosions: This law is critical in understanding how explosions cause movement. When a force is exerted on one part of the system,
an equal and opposite force is exerted on another part. These forces lead to changes in momentum, propelling objects in different directions.

Examples of Newton’s Third Law in Explosions:


Firing a Gun: The exploding gunpowder exerts a force on the bullet, propelling it forward. The gun exerts an equal and opposite force on the shooter
(recoil), pushing the gun backward. Before the shot, the gun-bullet system has zero momentum (if at rest). After firing, the momentum of the bullet
moving forward is equal and opposite to the momentum of the gun recoiling backward.
When the bullet leaves the barrel, the total momentum must be conserved. Therefore, as the bullet moves forward, the gun kicks backwards (recoils) with a velocity called the recoil velocity.

Rocket Propulsion:
The rocket engine expels gases downward at high speed due to the combustion of fuel. An equal and opposite force pushes the rocket upward. Before
ignition, the rocket and gases are stationary, so the total momentum is zero. After ignition, the upward momentum of the rocket is balanced by the
downward momentum of the expelled gases, conserving the total momentum of the system.

Balloon Releasing Air:


When a balloon is released, the air inside rushes out through the opening. The balloon moves in the opposite direction as a response to the force
exerted by the escaping air. The total momentum before the balloon is released is zero. Once the air escapes, the momentum of the air rushing out is
balanced by the momentum of the balloon moving in the opposite direction.

Conservation of Momentum in Explosions



In all explosions, the law of conservation of momentum applies. This means that the total momentum of the system before the explosion must equal
the total momentum after the explosion, provided no external forces are involved.
Before the Explosion: The system (e.g., gun and bullet, rocket and gases, balloon and air) may have zero momentum if at rest or some constant
momentum if in motion.
After the Explosion: The different parts of the system move in different directions, but the vector sum of all the momenta remains equal to the
initial momentum.
In an isolated system, no momentum is gained or lost during an explosion, but the energy from the explosion redistributes the
momentum among the system's components.

Energy in Explosions: While momentum is conserved in explosions, kinetic energy often changes. In most explosions, energy stored in the form of
chemical or nuclear potential energy is rapidly converted into kinetic energy and heat. This release of energy is responsible for the high velocities and
large forces associated with explosions.
In a gun firing, the chemical energy in the gunpowder converts into the kinetic energy of the bullet. In rocket propulsion, the chemical energy
from burning fuel converts into the kinetic energy of the expelled gases and the rocket.

Activity
1. A bullet of mass 50 g is fired with a velocity of 400 m/s from a gun of mass 5 kg. Calculate the recoil velocity
of the gun.
2. A 50 kg girl jumps out of a rowing boat of mass 300 kg onto the bank with a horizontal velocity of 3 m/s.
With what velocity does the boat begin to move backwards?
3. (a) Outline the similarities and the differences between elastic and inelastic collisions.
(b) Fatimah of mass 60 kg running at 64 km/h jumps onto a stationary trolley of mass 20 kg. If the collision
is perfectly inelastic, find the loss in kinetic energy and the final kinetic energy.
4. A block of mass 500 g is pulled from rest along a rough table by a force, F. It moves 10 m in 4 s. If the
friction force on the table is 2 N, determine: (a) the acceleration of the block, (b) the value of the force.
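Items 1 and 2 above are momentum-conservation problems; a minimal Python sketch of how they could be checked numerically (assuming each system starts at rest):

# Item 1: recoil velocity of the gun
m_b, v_b, m_g = 0.050, 400.0, 5.0            # bullet mass (kg), bullet velocity (m/s), gun mass (kg)
print(-(m_b * v_b) / m_g)                    # -4.0, i.e. 4 m/s backwards

# Item 2: velocity of the boat after the girl jumps out
m_girl, v_girl, m_boat = 50.0, 3.0, 300.0    # kg, m/s, kg
print(-(m_girl * v_girl) / m_boat)           # -0.5, i.e. 0.5 m/s backwards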


3.2 REFRACTION, DISPERSION, AND COLOUR


Learning Outcomes
a) Understand that light may be refracted as it passes from one medium to another and that this has both consequences and uses (u, s, v/a)
b) Understand the concept of refractive index (u, s)
c) Understand the concept of total internal reflection (u)
d) Know that white light can be split into coloured light by refraction (k, s)
e) Know that white light results from the superimposition of light of all colours of the visible spectrum (u, k)
f) Determine the refractive index of glass (s, gs)

Refraction of Light
Refraction is the bending or change in direction of a ray of light as it passes from one transparent medium into another medium with a different optical
density. This phenomenon occurs due to the change in the speed of light when it moves from one medium to another. Refraction is a fundamental
concept in optics and is responsible for many optical phenomena and devices such as lenses, prisms, and magnifying glasses.
Concepts in Refraction
Optical Density: Optical density refers to how much a medium slows down light as it passes through. The denser the medium, the more it slows
light. Light travels fastest in a vacuum and slower in materials like air, water, or glass.
Speed of Light: The speed of light in a vacuum is approximately 3 × 10⁸ m/s. When light enters a denser medium (like water or glass) from a less dense medium (like air), its speed decreases, and it bends toward the normal. Conversely, when light enters a less dense medium from a denser one, its speed increases, and it bends away from the normal.
The Normal Line:
The normal is an imaginary line perpendicular to the boundary between two media at the point where the light ray strikes the surface.
Refraction is measured with respect to this normal line.
Snell's Law of Refraction:
The amount of bending (refraction) of light as it passes from one medium to another is governed by Snell's Law, which is mathematically expressed as: n₁ sin θ₁ = n₂ sin θ₂, where n₁ is the refractive index of the first medium, n₂ is the refractive index of the second medium, θ₁ is the angle of incidence (the angle between the incident ray and the normal), and θ₂ is the angle of refraction (the angle between the refracted ray and the normal).
The refractive index (n) is a measure of how much light slows down in a given medium and is defined as: n = c/v, where c is the speed of light in a vacuum and v is the speed of light in the medium. A medium with a higher refractive index has a lower light speed, causing more bending of light.
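As a small illustration of the definition n = c/v (a sketch only, assuming a glass of refractive index 1.5):

c = 3.0e8              # speed of light in a vacuum, m/s
n_glass = 1.5          # assumed refractive index of the glass
print(c / n_glass)     # 2.0e8 m/s: light travels more slowly in the denser medium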
Refraction at an Interface: When light passes through the interface of two materials, the following scenarios occur:
a) Light Passing from a Less Dense to a Denser Medium (e.g., Air to Water):
Light slows down when it enters a denser medium, causing it to bend toward the normal. Example: A pencil partially submerged in water appears
bent or broken at the water’s surface because the light rays refract as they move from water (denser) to air (less dense).
b) Light Passing from a Denser to a Less Dense Medium (e.g., Water to Air):
Light speeds up when it moves into a less dense medium, causing it to bend away from the normal.
Example: When you look at a coin at the bottom of a pool, it appears shallower than it actually is due to the refraction of light as it exits the water.


REFRACTION AT PLANE SURFACES

This is the bending of light rays as they pass from one medium to another of different optical density.
Refraction can also be defined as the change in speed of light when it
moves from one medium to another of different optical densities. When a
ray of light enters an optically denser medium, it is bent towards the
normal and when it enters a less dense medium it is bent away from the
normal

CONSEQUENCES OF REFRACTION OF LIGHT

Refraction of light is the bending of light rays as they transition between different
media, resulting in a change in their path. This phenomenon is crucial in various
applications, including the creation of lenses, magnifying glasses, and prisms. Our eyes
also rely on refraction to focus light, enabling us to see clearly. The consequences of
refraction extend beyond optical devices. It plays a significant role in natural
occurrences, such as the formation of rainbows and the twinkling of stars.
Additionally, optical illusions like mirages and looming effects are direct results of light
refraction, showcasing its impact on our perception of reality.

Why do stars twinkle?


As a star’s light penetrates the Earth’s atmosphere, each individual stream of starlight is refracted (caused to change direction slightly) by the various temperature and density layers in Earth’s atmosphere. You might think of it as the light travelling a zig-zag path to our eyes, instead of the straight path the light would travel if Earth didn’t have an atmosphere.
The atmosphere of the Earth is made of different layers. It is affected by winds, varying temperatures, and different densities as well. When light from a distant source (a star) passes through our turbulent (moving air) atmosphere, it undergoes refraction many times. When we finally perceive this light from a star, it appears to be twinkling! This is because some light rays reach us directly, while others bend away from us and back toward us. This happens so fast that it gives a twinkling effect.

LAWS OF REFRACTION OF LIGHT


 The incident ray, the refracted ray and the normal at the point of incidence all lie in the same plane.
 The ratio of the sine of the angle of incidence to the sine of the angle of refraction is constant for any given pair of media (Snell’s law), i.e. sin i / sin r = n, a constant, where n is the refractive index of the medium containing the refracted ray.
Refractive index: It is the ratio of the sine of the angle of incidence to the sine of the angle of refraction for a ray of light moving from one medium to another of different optical density.


ACTIVITY
Light is incident at an angle of 60° from air onto a glass material having a refractive index n = 1.5. Determine the angle of refraction, if the ray of light moves from air into glass.
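A minimal Python sketch of this calculation, using Snell's law in the form sin i / sin r = n for light passing from air into the glass:

import math
n = 1.5
i = math.radians(60)                           # angle of incidence
r = math.degrees(math.asin(math.sin(i) / n))   # angle of refraction
print(round(r, 1))                             # about 35.3 degrees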

EXPERIMENT TO VERIFY SNELL’S LAW


To determine the refractive index of a glass block by measuring the angles of incidence and refraction using Snell's Law.

Materials Required:
Rectangular glass block, A4 paper, Protractor, Pencil, Ruler, Pins (4),
Cork board, Light source (optional for more precise light ray tracing)

Procedure:
 Place the glass block on a sheet of A4 paper and trace its
outline using a pencil. Label the block edges as PQRS.
 Mark a point E on one side (say PS) of the outline where the ray of light will enter the block.
 Use a protractor to draw a normal line (N) perpendicular to
side PS at point E.
 Draw an incident ray (AE) at an angle of incidence (i) to the
normal, typically around 30∘
 Fix two pins along the incident ray AE so that the light travels through them into the block.
 Place the glass block carefully over its outline.
 Look through the opposite side of the block (side QR) and fix two more pins such that they align with the light ray exiting the block.
 Ensure the pins form a straight line with the exiting (emergent) ray.
 Remove the glass block carefully and trace the path of the refracted ray exiting from side QR.
 Measure the angle of incidence (i) between the incident ray and the normal.
 Measure the angle of refraction (r) between the refracted ray and the normal at point E.
 Repeat the experiment by varying the angle of incidence (i) and measuring the corresponding angle of refraction (r).
 Tabulate the values of i, r, sin i and sin r.
 Plot a graph of sini (y-axis) against sinr (x-axis).
 The slope of the graph gives the refractive index (n) of the glass block.

Precautions:
 Ensure the glass block is stable and aligned correctly over its traced outline.
 Pins should be inserted vertically for accurate alignment.
 Avoid parallax error when aligning the pins or measuring angles.
 Repeat the experiment for accuracy and use average values.


i (°)    r (°)    sin i    sin r
10
20
30
40
50
A straight-line graph through the origin verifies Snell’s law. The slope of the graph gives the refractive index of the glass: slope = refractive index.
Absolute refractive index
This is the ratio of the sine of the angle of incidence to the sine of the angle of refraction for a ray of light moving from air (or a vacuum) into another medium of different optical density. For an absolute refractive index, the angle of incidence i must be measured in air or a vacuum.

REFRACTION ON PLANE PARALLEL BOUNDARIES


Refraction at plane parallel boundaries is a fundamental concept in optics, describing how light behaves when transitioning between different media.
When an electromagnetic wave encounters a boundary, part of its energy is reflected while the remainder is transmitted into the new medium. This
phenomenon is governed by Snell's Law, which states that the ratio of the sine of the angle of incidence to the sine of the angle of refraction is constant
and depends on the refractive indices of the two media. In the case of plane parallel plates, the emergent ray
remains parallel to the incident ray but is laterally displaced. This displacement is proportional to the
thickness of the plate and the angle of incidence.

EFFECTS OF REFRACTION ON PLANE SURFACES


Refraction at plane surfaces is a fundamental concept in optics, describing how light changes direction when
it passes from one medium to another. When light enters a denser medium, such as water or glass, it bends
towards the normal line, which is an imaginary line perpendicular to the surface at the point of incidence.
This bending occurs because light travels faster in less optically dense media, like air, compared to denser
ones. The degree of bending is governed by Snell's Law, which states that the ratio of the sine of the angle of
incidence to the sine of the angle of refraction is constant for any two media. This principle explains everyday phenomena, such as the apparent
bending of a pencil submerged in water.
This optical illusion is primarily due to a phenomenon known as refraction. Refraction occurs when light travels from one medium to another, such
as from air to water, causing it to change speed and direction. As light passes from the air into the water, it slows down and bends. The part of the
pencil submerged in water appears to shift position relative to the part above
the water. This bending of light creates the illusion that the pencil is broken
or distorted at the water's surface.

REAL AND APPARENT DEPTH


Real depth and apparent depth are fundamental concepts in optics,
particularly when discussing how light behaves as it passes through different
media. Real depth refers to the actual physical distance from the surface of
a medium, such as water, to an object submerged within it.


In contrast, apparent depth is the perceived distance, which is often less than the real depth due to the refraction of light. For example, when observing an object underwater, the light rays bend as they exit the water, causing the object to appear closer to the surface than it truly is. The refractive index of water, approximately 1.33, plays a crucial role in this phenomenon, making the apparent depth only about three-quarters of the real depth. An object O placed below a water surface therefore appears to be nearer to the top when viewed from above. The depth at which the object appears to lie is called the apparent depth, while the actual depth of the object below the liquid surface is called the real depth.
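A short sketch of this relationship, using the standard result that, for viewing from directly above, refractive index = real depth / apparent depth (the 2 m depth below is an assumed figure):

n_water = 1.33
real_depth = 2.0                         # meters, assumed
apparent_depth = real_depth / n_water
print(round(apparent_depth, 2))          # about 1.5 m, roughly three-quarters of the real depth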

Determination of refractive index using a triangular prism


A prism is placed on a white sheet of paper and its outline drawn as shown. Two
object pins P and Q are fixed upright on side AB and while looking through the
prism from side AC, two other pins R and S are fixed such that they appear to be in
line with images of P and Q. The prism is removed and a line drawn through P and
Q and another drawn through R and S. Points E and F are joined by a straight line
and normal NN’ drawn at a point E as shown. Angle i and r are measured, and
recorded. The procedure is repeated to obtain different values of i and r and the
results tabulated.
i (°)    r (°)    sin i    sin r

A graph of sin i against sin r is plotted, and the slope of the graph is the refractive index of the prism.

TOTAL INTERNAL REFLECTION


Total internal reflection (TIR) is a fascinating optical phenomenon that occurs when light travels from a denser medium, such as water or glass, to a
less dense medium, like air. This process results in the complete reflection of light at the boundary, provided the angle of incidence exceeds a specific
threshold known as the critical angle. When this condition is met, no light escapes into the second medium, making TIR a crucial concept in optics.
TIR has significant applications in various fields, particularly in fiber optics,
where it enables efficient light transmission over long distances. It is also utilized in polarizing prisms and other optical devices, enhancing their functionality. TIR plays a role in advanced technologies, such as
terahertz applications, where phase modulation is essential for radar
detection and biomedical imaging.

Conditions for total internal reflection to occur


Firstly, light must travel from a denser medium, such as glass, to a less dense
medium, like air. This transition is crucial because it allows the light to
reflect entirely rather than refract into the second medium.
Secondly, the angle of incidence must exceed a certain threshold known as the critical angle. When the angle of incidence is greater than this critical
angle, the light does not pass through the boundary but instead reflects back into the denser medium. This critical angle varies depending on the
refractive indices of the two media involved.


CRITICAL ANGLE
The critical angle is a fundamental concept in optics, defined
as the angle of incidence at which light traveling from a
denser medium to a rarer medium refracts at an angle of 90
degrees. This phenomenon occurs when the light ray strikes
the boundary between the two media, such as water and air,
at a specific angle. Beyond this critical angle, total internal
reflection occurs, meaning that all the light is reflected back
into the denser medium rather than refracted.
Consider a ray of light moving from glass (denser) to air
(rarer). As the angle of incidence increases, the angle of
refraction also increases until it reaches 90 degrees. At this point, the critical angle is reached, and
any further increase in the angle of incidence results in total internal reflection. The critical angle
is crucial in various applications, including fiber optics and optical devices, where controlling light
behavior is essential for efficient transmission and imaging.
From Snell’s law: ng sin c = na sin 90°; since na = 1 and sin 90° = 1, this gives ng sin c = 1, so c = sin⁻¹(1/ng).

ACTIVITY:
A ray of light incident on one side of an equilateral glass prism of refractive index 1.52 is critically refracted at the second surface of the prism.
Determine the angle of incidence. (29.5°)
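A sketch of this calculation, under the usual assumption that the angle of refraction at the first face and the critical angle at the second face add up to the 60° angle of the equilateral prism:

import math
n = 1.52
A = 60.0                                     # prism angle in degrees
c = math.degrees(math.asin(1 / n))           # critical angle, about 41.1 degrees
r1 = A - c                                   # refraction angle at the first face
i = math.degrees(math.asin(n * math.sin(math.radians(r1))))
print(round(i, 1))                           # about 29.4, i.e. roughly the quoted 29.5 degrees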
Experiment to determine the critical angle of a glass block
The critical angle is the angle of incidence in a denser medium at which the angle
of refraction in the less dense medium becomes 90°. Beyond this angle, total
internal reflection occurs. The experiment described below outlines a practical
procedure to determine the critical angle for a glass block.
Materials Required
The materials include a semi-circular glass block, a ray box or laser pointer, a
protractor, a ruler, a white sheet of paper, and a pencil. The semi-circular glass
block is used to eliminate refraction at the curved surface, ensuring that light
enters the block perpendicularly.

Procedure
 Start by placing the white sheet of paper on a flat surface and tracing the outline of the semi-circular glass block. Position the glass block
back on its traced outline and mark the center of the flat surface of the block.
 Use a protractor to draw a normal line perpendicular to the flat surface of the glass at this center point.
 Direct a thin ray of light from the ray box or laser pointer toward the flat surface of the glass block, ensuring the light ray meets the block
at the center where the normal line is drawn.


 Adjust the angle of incidence (the angle between the incident ray and the normal) starting at a small value, such as 10°, and gradually
increase it. Observe the path of the refracted ray as it exits through the curved surface of the block into the air.
 Measure the angle of incidence at which the refracted ray disappears and travels along the boundary. Record this angle as the critical angle,
C.
 Repeat the experiment several times for different angles of incidence, and take the average of the recorded angles. Ensure all angles are
measured carefully with a protractor to minimize errors.
 Using the critical angle C, the refractive index (n) of the glass can be calculated using the relationship n = 1/sin C, where n is the refractive index of the denser medium (glass) relative to air.
Precautions
To achieve reliable results, ensure the light ray is thin and precise, use a clean glass block to prevent scattering of light, and avoid parallax error
while measuring angles with the protractor.
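Once an average critical angle has been measured, the refractive index follows directly from n = 1/sin C; a sketch assuming a measured value of C = 42° (an illustrative figure, not a prescribed result):

import math
C = 42.0                            # assumed measured critical angle in degrees
n = 1 / math.sin(math.radians(C))
print(round(n, 2))                  # about 1.49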
APPLICATION OF TOTAL INTERNAL REFLECTION
Total internal reflection (TIR) occurs when a light ray traveling from a denser medium to a less dense medium is completely reflected back into the
denser medium, provided the angle of incidence exceeds the critical angle.
TIR in fiber optics. Optical fibers rely on total internal reflection to transmit light signals over long distances with minimal loss of energy. These
fibers, made of glass or plastic, have a core with a higher refractive index surrounded by a cladding of lower refractive index. Light entering the fiber
at an appropriate angle reflects internally and travels through the core, enabling high-speed data transfer in telecommunications and internet
infrastructure. Fiber optics are also used in medical devices like endoscopes, allowing doctors to view internal organs and tissues with minimal
invasiveness.

TIR in prismatic devices such as binoculars and periscopes. Prisms are designed to exploit total internal reflection to redirect light paths effectively.
For instance, binoculars use roof prisms to fold the optical path, allowing a more compact design while maintaining image clarity. Similarly, periscopes
in submarines use prisms to provide a clear line of sight from below the water’s surface.

TIR is also integral to the functioning of retroreflectors, which are used in road safety devices, vehicle reflectors, and bicycle lights. These devices
reflect light back toward its source, regardless of the incident angle, by utilizing a combination of mirrors or prisms that depend on total internal
reflection. This ensures visibility and enhances safety during nighttime or low-light conditions.

TIR in the realm of lasers and optical instruments, total internal reflection plays a critical role. Devices like ring resonators and optical waveguides
use TIR to confine and direct light within specific paths, enabling precise control over laser beams or light transmission. These technologies are
essential in scientific research, industrial applications, and advanced imaging systems.
Gemstones and jewelry also benefit from the principles of TIR. The sparkle and brilliance of diamonds are largely due to their high refractive
index, which facilitates total internal reflection. Light entering a well-cut diamond undergoes multiple internal reflections before emerging, creating
the characteristic sparkle that makes diamonds highly desirable.
TIR in optical sensors and measurement systems. Instruments like refractometers use TIR to measure the refractive index of liquids or solids,
which is useful in industries such as food production, pharmaceuticals, and chemical analysis. TIR-based sensors are used in security systems and
motion detectors.
EFFECTS OF TOTAL INTERNAL REFLECTION


The mirage: A mirage is an optical phenomenon that occurs when light rays bend due to variations in air temperature. This bending, or refraction,
creates the illusion of water or a distorted image of the sky on the ground. Here's a detailed explanation:
Mirages Formation
Temperature Gradient: Mirages typically occur on hot days when the ground heats the air just above it. This creates a temperature gradient, with
hot air near the surface and cooler air above.
Refraction of Light: Light travels at different speeds through hot and cool air. When light passes from cooler to hotter air, it bends or refracts. This
bending causes the light to follow a curved path.
Total Internal Reflection: If the temperature gradient is steep
enough, light rays can bend so much that they reflect off the
boundary between the hot and cool air layers. This reflection can
make the sky appear on the ground, creating the illusion of water.
Types of Mirages
Inferior Mirages: These are the most common and occur when
the ground is much hotter than the air above. They make distant
objects appear lower than they are, often creating the illusion of
water on the road.
Superior Mirages: These occur when a layer of cold air lies
beneath a layer of warmer air. They make objects appear higher than they are and can sometimes create multiple images of the same object.
Fata Morgana: This is a complex form of superior mirage that can create stacked images of distant objects, often seen over the horizon at sea.
Real-World Examples
Desert Mirages: Travelers in deserts often see what appears to be water in the distance. This is an inferior mirage caused by the hot sand heating
the air above it.
Highway Mirages: On hot days, drivers may see what looks like a pool of
water on the road ahead. This is also an inferior mirage.

DISPERSION OF WHITE LIGHT

Dispersion occurs when white light separates into its constituent colors as it
passes through a medium, such as a prism. This phenomenon happens
because different colors of light travel at different speeds when they move
through a medium, causing them to refract (bend) at different angles. When
white light enters a prism, it slows down and bends due to the change in medium from air to glass. Different colors (wavelengths) of light bend by
different amounts. For example, violet light bends the most, while red light bends the least. As the light exits the prism, the colors spread out to form
a spectrum, ranging from red to violet. The visible spectrum includes the following colors in order: red, orange, yellow, green, blue, indigo, and violet (often remembered by the acronym ROYGBIV).
This effect is primarily caused by refraction, which is the change in direction of light as it travels from one medium to another. The index of
refraction varies for different colors, leading to the separation of light into a spectrum. Shorter wavelengths, such as violet and blue, are
slowed down more and refract more sharply than longer wavelengths like red and orange. This difference in refraction angles causes the white light
to spread out into its individual colors, creating a spectrum. When the light exits the prism, the colors separate further due to a second refraction at
the boundary between the glass and air.


Practical applications of dispersion of white light


One significant application is in fields like optics and spectroscopy: in spectrometers, prisms and diffraction gratings are used to analyze the spectral composition of light,
which helps in identifying materials or chemical compositions. By scanning the grating, these instruments can identify the composition of materials
based on their light absorption and emission characteristics.
Dispersion plays a crucial role in medical imaging techniques, such as quantitative phase imaging, which enhances the visualization of biological
samples. Moreover, the principles of light dispersion are harnessed in optical technologies, including fiber optics and sensors, improving data
transmission and detection capabilities.
Dispersion also underpins the functioning of devices like prisms in
binoculars and the design of lenses in cameras to minimize chromatic
aberration (unwanted dispersion of light).

Everyday Examples
Rainbows: A rainbow is formed through the interaction of sunlight
with water droplets in the atmosphere, involving the processes of
refraction, reflection, and dispersion. When sunlight enters a water
droplet, it slows down and bends due to refraction, as light moves from
the less dense air into the denser water. Inside the droplet, the light
undergoes dispersion, splitting into its constituent colors because different wavelengths of light refract by different amounts: shorter wavelengths, such as violet and blue, refract more than longer wavelengths like red and orange. The light then reflects off the inner surface of the droplet, reversing its path; this internal reflection further enhances the dispersion of colors, creating the vibrant arc we see as a rainbow. The spherical shape of raindrops plays a crucial role in this process, as it allows for the consistent bending and reflecting of light. As the reflected light exits the droplet, it refracts again, bending away from the droplet and further separating into distinct colors. This combination of refraction, internal reflection,
and dispersion creates the vibrant spectrum of colors that form a rainbow. The angle at which the light exits the droplet determines the position of
the colors, with red appearing on the outer edge and violet on the inner edge. Rainbows are typically seen when the observer has their back to the
sun, as the refracted light from millions of droplets converges to create a circular arc of colors.

A pure spectrum is a display of distinct and separate colors produced when white light is dispersed through a prism or diffraction grating,
with each wavelength clearly visible and not overlapping. It consists of the full range of visible colors arranged in order of increasing
wavelength, from violet to red, without blending between them.

CDs and DVDs: The reflective surfaces of these discs cause dispersion due to diffraction, which also separates light into its colors.
Crystal Chandelier Effects: Light passing through the faceted surfaces of a crystal chandelier disperses into a colorful display.

COLORS OF OBJECTS


The color of an object is determined by the wavelengths of light it reflects, absorbs, or transmits. When white light, which contains all
colors, strikes an object, certain wavelengths are absorbed while
others are reflected. For instance, a red ball appears red because it
absorbs green, blue, and violet wavelengths, reflecting only the red
wavelengths back to our eyes. This interaction between light and
objects is fundamental to our perception of color. The wavelengths
that are reflected determine the color we see. For example, a blue
object reflects blue light and absorbs other colors, while a yellow
object reflects yellow light. The colour it transmits or reflects e.g. an
object appears blue because it reflects blue light into the eyes and
absorbs the other colours of the spectrum. Similarly, an object appears red because it reflects red light into the eyes and absorbs all other colours.
This principle applies to all colored objects, making color perception a fascinating aspect
of physics and biology. A white object reflects all the colours of the spectrum into the eyes
and absorbs none.

Types of colours
Primary colours of light are red, green, and blue. These colours of light cannot be produced by mixing other coloured lights together, making them unique in the colour spectrum. They serve as the building blocks for all other colours, including the secondary colours.
Secondary colours of light are formed by mixing two primary colours of light in equal proportions. For instance, combining red and green light produces yellow, green and blue give peacock blue (cyan), and blue mixed with red gives magenta. (These differ from the painter’s primary and secondary colours, which describe the subtractive mixing of pigments.)

Complementary colours
Complementary colors are pairs of colors that, when combined, produce a grayscale color
such as white or black. In the realm of physics, these colors are defined as those that are
opposite each other on the color wheel. For instance, red pairs with cyan, magenta with
green, and blue with yellow. When two complementary colors of light are mixed in equal
intensities, they create white light. This phenomenon is rooted in the additive color theory,
which explains how different wavelengths of light combine to form new colors. The mixing
of complementary colors results in a uniform frequency distribution, effectively canceling out the original colors. In practical applications, such as art
and design, complementary colors are used to create contrast and visual interest.
These are two different colours of light which, when added, produce white light. One of them is a secondary colour and the other must be a primary colour.
The pairs are: red + peacock blue → white light; green + magenta → white light; blue + yellow → white light.
From the complementary colours it is noted that when the three primary colours are added together, they produce white light.


Coloured objects in white light


A coloured object reflects and transmits its own colour and absorbs the other colours incident on it. When white light, which contains all the colors of the
visible spectrum, shines on an object, the color we perceive depends on which wavelengths of light are absorbed and which are reflected by the object.
How Colored Objects Appear in White Light
White light is a combination of all the colors of the rainbow, from red to violet. This means it contains all wavelengths of visible light. When white
light hits an object, the object absorbs some wavelengths and reflects others. The color of the object is determined by the wavelengths of light it
reflects. A red object appears red because it reflects red wavelengths and absorbs other colors. A blue object reflects blue wavelengths and absorbs
the rest. A green object reflects green wavelengths and absorbs other colors.
White and Black Objects: White objects appear white because they reflect all wavelengths of light equally. Black objects appear black because they
absorb all wavelengths and reflect none.

Examples: When white light shines on a red apple, the apple absorbs all colors except red, which it reflects. This is why the apple looks red to our
eyes. The sky appears blue because molecules in the atmosphere scatter blue light from the sun more than they scatter red light.
Colored Light on Colored Objects: If you shine colored light (e.g., blue light) on an object, the object's appearance can change. For example, a red
object under blue light will appear black because it absorbs the blue light and reflects no light.

FILTERS (COLOUR)
Colour filters are essential tools in physics that manipulate light by
selectively absorbing certain wavelengths while allowing others to
pass through. The primary colours of light (red, green, and blue) differ from the primary pigments used in art and printing (cyan, magenta, and yellow). When light passes through a colour filter, it is subjected to a
process known as colour separation by subtraction, where the filter
absorbs all colours except for the one it is designed to transmit. For
instance, a yellow filter allows yellow light to pass while absorbing other wavelengths, such as blue.
This selective absorption is crucial in various applications, from photography to scientific experiments, where specific colours are needed for analysis
or aesthetic purposes. Constructed from materials like dyed glass or plastic, colour filters are vital in optics, enhancing our understanding of light and
its properties.
A filter is a coloured sheet of plastic or glass material which allows light of its own type to pass through it and absorbs the rest of the coloured lights
i.e. a green filter transmits only green, a blue transmits only blue, a yellow filter transmits red, green and yellow lights.

MIXING OF COLOURED PIGMENTS


Mixing colored pigments involves the principles of subtractive color mixing,
where pigments absorb certain wavelengths of light and reflect others. When
pigments are combined, they absorb more wavelengths, resulting in a new color.
For instance, mixing yellow and blue pigments produces green because yellow
absorbs blue light while blue absorbs yellow light, leaving only green to be
reflected. In contrast, additive color mixing occurs with light, where different
wavelengths are combined to create new colors.


For example, mixing red and green light results in yellow. This distinction is crucial in various applications, such as painting, printing, and
photography, where understanding how colors interact can lead to desired outcomes. The rarity of certain colors in nature, like blue, can also be
explained through physics. Blue pigments are less common because they require specific structural properties to reflect blue light, making them a
unique phenomenon in the natural world.
A pigment is a substance which gives its colour to another substance.
A pigment absorbs all the colours except its own, which it reflects.
Mixing coloured pigments is called colour mixing by subtraction. A mixture of all the coloured pigments appears black because none of the colours is reflected.

APPEARANCE OF COLOURED PIGMENTS IN WHITE LIGHT
The appearance of color pigments in white light is a fascinating
interplay of reflection and absorption. White light, which consists of
all visible wavelengths, appears white because it reflects all colors
equally. When an object is white, it reflects the entire spectrum of
light, allowing us to perceive it as such. In contrast, colored
pigments work by selectively absorbing certain wavelengths of light
while reflecting others. For instance, a red pigment absorbs blue and
green wavelengths but reflects red light. This selective absorption is what gives the pigment its color. Therefore, when white light hits a red object,
the red wavelengths are reflected back to our eyes, making the object appear red. This principle of color subtraction explains why objects appear
colored under white light.
The interaction between light and pigments is essential in various applications, from art to technology, influencing how we perceive and utilize color
in our environment. The appearance of a pigment in white light is determined by the specific wavelengths it reflects. When illuminated with white
light, a pigment absorbs certain colours and reflects others, creating the perception of its colour. This interaction between the pigment and the various
wavelengths in white light allows us to see the pigment in its characteristic hue. A colour pigment reflects only its own colour.

COLOURS ON A TELEVISION
Televisions produce color images through a combination of light-emitting units and the RGB (red, green, blue) color model. In traditional CRT (cathode
ray tube) televisions, three electron guns shoot beams of electrons onto a phosphorescent screen coated with red, green, and blue phosphors. By
varying the intensity of each beam, the television can create a wide spectrum of colors through additive color mixing. Modern flat-screen TVs, such as
LCDs and OLEDs, utilize a different technology. LCDs use liquid crystals that manipulate light from a backlight, while OLEDs consist of organic compounds
that emit light directly. Each pixel in these displays is divided into sub-pixels, each corresponding to red, green, or blue. By adjusting the brightness
of these sub-pixels, the television can produce millions of colors, creating vibrant and detailed images.


3.3 LENSES AND OPTICAL INSTRUMENTS


Learning Outcomes
a) Know the properties of converging and diverging lenses, and how they are used in everyday life (k,u,s)
b) Understand how lenses are used in optical systems such as the magnifying glass, correcting sight in the human eye and in camera lenses
(k, u, v/a)
LENSES
A lens is a transparent optical device designed to refract light, forming images by focusing or dispersing light rays. Typically made from glass or
plastic, lenses are essential in various applications, including cameras, microscopes, and eyeglasses. The fundamental principle behind a lens is
refraction, which occurs when light passes through the curved surfaces of the lens, bending the light rays to create a clear image. Lenses come in
various types, including convex and concave, each serving distinct purposes. Convex lenses converge light rays, making them ideal for magnifying
objects, while concave lenses diverge light rays, often used to correct nearsightedness. The careful design and manufacture of lenses involve precise
grinding or molding to achieve the desired optical properties.
There are two primary types of lenses: converging (convex) lenses and diverging
(concave) lenses.

Convex/converging lenses
A converging lens, also known as a convex lens, is an optical device that focuses parallel
rays of light to a single point known as the focal point. When an object is placed outside
this focal point, the lens produces a real, inverted image. Conversely, if the object is
positioned within the focal length, the lens generates a virtual, upright, and magnified
image. This property makes converging lenses essential in various applications, including magnifying glasses, cameras, and microscopes. The behavior
of light through a converging lens can be understood through ray diagrams, which illustrate how light rays bend as they pass through the lens. The
lens's curved surfaces cause the rays to converge, allowing for precise image formation.

Concave/diverging lenses
A diverging lens, also known as a concave lens, is designed to spread out parallel
rays of light that pass through it. This lens is thinner at its center than at its edges,
causing light rays to diverge as if they originated from a single point on the opposite
side of the lens.
This unique property makes diverging lenses essential in various optical
applications, such as eyeglasses for nearsightedness. When analyzing the behavior
of light through a diverging lens, ray diagrams are often used. These diagrams illustrate how light rays refract and appear to diverge, creating a virtual
image that is upright and smaller than the object. The position of this virtual image depends on the distance of the object from the lens, with closer
objects resulting in a larger virtual image.


HOW A LENS WORKS


Lenses function primarily through the principle of refraction, which is the
bending of light rays as they pass through a transparent material. When
light enters a lens, it changes direction due to the lens's curved surfaces,
allowing it to focus or disperse the light to form an image. This process is
essential in various optical devices, including cameras, glasses, and
microscopes. There are two main types of lenses: convex and concave.
Convex lenses, which are thicker in the center, converge light rays to a focal point, making them ideal for magnifying objects. In contrast, concave
lenses are thinner in the center and diverge light rays, which can be useful for correcting nearsightedness.

TERMS USED IN LENSES


Principal Axis
The principal axis is an imaginary straight line that passes through the center of the lens and connects its two curved surfaces. It is the reference line
for analyzing lens behavior.
Optical Center
The optical center is the point in the lens through which light rays pass without being refracted. For thin lenses, it lies in the geometric center of the
lens.
Focal Point (Principal Focus)
The focal point is a specific point where parallel rays of light converge (convex lens) or appear to diverge from (concave lens) after passing through
the lens. It is denoted as F.
Focal Length (f)
The focal length is the distance between the optical center of the lens and its focal point. It determines the lens's ability to converge or diverge light.
A shorter focal length results in greater convergence or divergence.
Radius of Curvature (r)
This refers to the radius of the sphere from which the lens surface is a segment. Each surface of a lens has its radius of curvature.
Aperture
The aperture of a lens is the effective diameter of the lens through which light passes. It determines the amount of light that enters the lens.
Power of a Lens


The power of a lens measures its ability to converge or diverge light, expressed in diopters (D). It is the reciprocal of the focal length (in meters): Power = 1/f(m).
Magnification
Magnification refers to how much larger or smaller an image is compared to the object.
It is calculated as: m = image height / object height = image distance / object distance.
Axis of the Lens
The axis of the lens is the line perpendicular to the lens surface at its center. It helps in aligning the lens correctly during imaging or analysis.
Chromatic Aberration
This is a lens defect caused by different wavelengths of light being refracted by different amounts, resulting in color fringes in the image.
Spherical Aberration
A lens defect where light rays passing through the edges of a lens focus at different points than rays passing through the center, leading to a blurred
image.
Principal Focus
This is the specific point where light rays parallel to the principal axis converge (convex lens) or appear to diverge from (concave lens).

Focal plane of a lens


The focal plane of a lens is a critical concept in photography, representing the flat area where light
rays converge to create a sharp image. This plane is positioned at a specific distance from the lens,
known as the focal length, which is essential for achieving the desired focus in an image.
Understanding the focal plane allows photographers to manipulate depth of field and focus, enhancing
the overall composition of their work. While the focal plane itself remains flat, specialized lenses, such
as tilt-shift lenses, enable photographers to tilt or rotate this plane relative to the camera sensor. This
capability allows for creative adjustments in focus, making it possible to achieve unique perspectives
and control the plane of focus in ways that standard lenses cannot.

Image formation in lenses


Image formation in convex (converging) lenses can produce both real and virtual images depending on the object's distance from the lens. When an object is placed beyond the principal focus, a real, inverted image is formed on the other side of the lens; when the object is placed between the optical centre and the focus, the image is virtual, upright and magnified. Concave (diverging) lenses, by contrast, consistently produce virtual images that are erect and diminished. The image appears on the same side as the object, and the light rays diverge, making it impossible to project
the image onto a screen. Interestingly, even a half lens can form an image, albeit fainter, demonstrating the efficiency of lens design.


Drawing ray diagrams


Ray diagrams are essential tools in optics, illustrating how light travels
from an object to an observer. To create a ray diagram, start by selecting
a point on the top of the object and drawing incident rays towards the
optical device.
 The first ray travels parallel to the principal axis, refracting
through the focal point on the opposite side.
 The second ray heads towards the focal point before refracting
parallel to the axis.
 The third ray passes through the center of the lens without
bending.

Scale drawing of ray diagrams


Rules: A lens is represented by a line on a graph
paper. Scale must be used.
An object 5 cm tall is placed 15 cm away from a lens of focal length 10 cm. By construction, determine the position, size and nature of the final image (use a scale of 1 cm : 5 cm).
Solution
From the scale drawing: u = 15.0 cm, v = 30.0 cm
m = v/u = 30/15 = 2; the image is real, inverted and magnified.
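The graphical answer can be checked numerically with the thin-lens formula 1/f = 1/u + 1/v (real-is-positive sign convention); a minimal sketch:

f, u = 10.0, 15.0                   # focal length and object distance in cm
v = 1 / (1 / f - 1 / u)             # image distance, from 1/v = 1/f - 1/u
m = v / u                           # magnification
print(round(v, 1), round(m, 1))     # about 30.0 cm and 2.0: a real, inverted, magnified image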

Activity
1. A simple magnifying glass of focal length 5cm forms an erect image of the object 25cm from the lens. By graphical
method, find the distance between the object and image, and the magnification
2. An erect object 5cm high is placed at a point 25cm from a convex lens. A real image of the object is formed 25 cm high.
Construct a ray diagram and use it to find the focal length of the lens
3. An object is placed at right angles to the principal axis of a thin converging lens of focal length 10cm. A real image of
height 5cm is formed at 30cm from the lens. By construction, find the position and height of the object.
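One way to check item 1 by calculation, using the thin-lens formula with the erect (virtual) image taken as v = -25 cm; a sketch only, for comparison with the graphical answer:

f, v = 5.0, -25.0                        # focal length and virtual image distance in cm
u = 1 / (1 / f - 1 / v)                  # object distance, from 1/u = 1/f - 1/v
separation = abs(v) - u                  # object and image are on the same side of the lens
m = abs(v) / u                           # magnification
print(round(u, 2), round(separation, 2), round(m, 1))   # about 4.17 cm, 20.83 cm, and m = 6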


Determination of focal length using a distant object

a) A lens is set up in a suitable holder; a plane mirror may be placed behind it so that light passing through the lens is reflected back.
b) Position the convex lens in front of a distant object, such as a tree or a building, ensuring that the object is far enough away to be considered at infinity.
c) The light rays from this distant object will converge after passing through the lens, forming a sharp image on a screen placed behind the lens.
d) The distance between the lens and the screen is measured; this gives the focal length (f) of the lens.

Determining the focal length of a converging lens by measuring object and image distances
a) The lens is set up in front of an illuminated
object so that a real image is formed on a white
screen placed on the opposite side.
b) The lens is adjusted by moving the position of
the screen until a clear image is formed.
c) The object distance u is measured from the lens
center to the object.
d) The image distance v is also measured and recorded.
e) The experiment is repeated with several different values of u, and the corresponding values of v are obtained; the results are entered in a table of results, including values of 1/u and 1/v.
f) A graph of 1/u against 1/v is plotted, and the intercepts on the two axes are read off as c₁ and c₂.
g) The mean value of the intercepts, c = ½(c₁ + c₂), is determined.
h) The focal length is calculated from f = 1/c.

Alternatively
a) The experiment is repeated with several different values of u, and the corresponding values of v are obtained; the results are entered in a table of results, including values of uv and (u + v).
b) A graph of uv against (u + v) is plotted, and the slope is obtained.
c) The slope is determined from slope = Δ(uv)/Δ(u + v).
d) The focal length is calculated from f = slope.
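A sketch of the same determination done numerically rather than graphically, since 1/f = 1/u + 1/v gives f = uv/(u + v) for each pair of readings; the (u, v) values below are hypothetical illustrative readings:

pairs = [(15.0, 30.0), (20.0, 20.0), (30.0, 15.0), (25.0, 16.7)]   # hypothetical (u, v) readings in cm
estimates = [u * v / (u + v) for u, v in pairs]                    # one estimate of f per pair
f = sum(estimates) / len(estimates)                                # mean focal length
print(round(f, 1))                                                 # close to 10 cm for these values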


POWER OF A LENS
The power of a lens in optics is defined as the ability of a lens to bend light rays. Mathematically, it is expressed as Power (P) = 1/f, where P represents the power of the lens and f is its focal length measured in meters. A lens with a shorter focal length has a higher optical
power, indicating a greater ability to converge or diverge light. The unit of
measurement for lens power is the diopter (D), which corresponds to a focal length
of one meter. For instance, a lens with a focal length of 0.5 meters has a power of
+2 diopters, while a lens with a focal length of 2 meters has a power of +0.5
diopters. In practical terms, the power of a lens influences how effectively it can
focus light, impacting image quality and clarity in various optical systems.
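A short sketch of the diopter calculation for the two focal lengths quoted above:

for f in (0.5, 2.0):          # focal lengths in meters
    print(f, 1 / f)           # 0.5 m gives +2 D, 2.0 m gives +0.5 D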

Application of lenses
 Far-sighted individuals struggle to focus on nearby objects because
their eyes focus images behind the retina. Convex lenses help by
converging the light before it enters the eye, enabling proper focus
on the retina. Corrective glasses for far-sighted individuals. Lenses
are vital for individuals with refractive errors. Convex lenses are
used in eyeglasses to correct hyperopia (farsightedness) and
presbyopia, allowing clearer vision.
 Microscopes and telescopes utilize lenses to magnify objects,
enabling scientists and astronomers to explore the microscopic
world and distant celestial bodies, respectively. Converging lenses
are used to magnify distant celestial objects (telescopes) or tiny
biological samples (microscopes) by forming real or virtual
magnified images. Astronomers use telescopes to view distant stars
and planets; scientists use microscopes to view cells and
microorganisms.
 In magnifying Glasses, converging lenses are used in magnifying glasses to enlarge the appearance of small objects by converging light rays
to produce a larger, upright virtual image. They are used to read small text or examine tiny objects.
 In cameras, converging lenses focus light from the object onto the film or sensor, forming a sharp, inverted image of the object. The lens
system allows for the adjustment of focus. Almost all camera systems, whether digital or film-based, use converging lenses. Cameras also
rely on lenses to focus light and create sharp images, making them essential tools for photography.
 In a projector, a convex lens is used to focus and enlarge an image or video onto a large screen. The projector forms a real image by
directing light through a converging lens system. Used in classrooms, movie theaters, and presentations.

Uses of Diverging Lenses:


 Near-sighted individuals (myopia) have trouble focusing on distant objects because their eyes focus images in front of the retina. Concave
lenses spread the light rays slightly before they enter the eye, pushing the focal point back onto the retina. Concave lenses are widely used
in eyeglasses for people with near-sightedness to correct their vision.
 Peepholes in Doors use diverging lenses to provide a wide-angle view of the outside. This gives the viewer a broader view of the area outside
the door, despite the small size of the peephole. Peepholes are commonly installed in doors to allow the user to safely view who or what is
outside.
 Laser Beam Expanders, diverging lenses are used in combination with other lenses to expand or diverge a narrow laser beam. This is
useful in adjusting the width and focus of laser beams applications. Laser optics systems, such as those in scientific instruments and medical
devices.
 In some optical devices, such as binoculars, diverging lenses are used in the eyepiece to widen the viewer’s field of view while maintaining
image clarity. Enhances viewing experience by providing a wider perspective in binoculars.

THE HUMAN EYE


The human eye reacts to visible light, enabling us to perceive our surroundings. The
eye consists of several key components, including the cornea, pupil, lens,
and retina, all of which work in harmony to provide clear vision. The
cornea and lens focus light onto the retina, where photoreceptors known
as rods and cones convert light into electrical signals. In addition to
vision, the human eye helps regulate circadian rhythms, influencing our
sleep-wake cycles.
The iris is the coloured part of the eye that regulates light entry by adjusting the size of the pupil. This dynamic control allows the eye to adapt to
varying light conditions, ensuring optimal vision.
The lens is located behind the iris. It focuses light onto the retina,
enabling clear images of objects at different distances. It works in conjunction with the cornea, which also helps to refract light. Together, these
structures ensure that light rays converge accurately on the retina, where photoreceptor cells convert them into electrical signals. These signals are
then transmitted to the brain, allowing us to perceive the world around us. The intricate functions of the iris and lens highlight the remarkable design
of the human eye, facilitating our ability to see and interpret our environment.

Accommodation. This is the process by which the human eye changes the shape (and hence the focal length) of its lens to focus the image on the retina. This process enables the eye to see both near and far objects clearly.
EYE DEFECTS AND THEIR CORRECTIONS
The normal eye can see clearly objects placed anywhere from the near point (about 25 cm from the eye) out to infinity (the far point); to see an object in greater detail, it is brought to the near point.
SHORT SIGHTEDNESS (MYOPIA)
Myopia, commonly known as shortsightedness, is a refractive error that affects the ability to see distant objects clearly. This condition occurs when
the eyeball is elongated or the cornea is overly curved, causing light to focus in front of the retina instead of directly on it. As a result, individuals
with myopia struggle to see faraway objects, such as street signs or faces in a crowd. Symptoms of myopia typically include blurred vision at a distance,
squinting, and eye strain. It is increasingly prevalent, particularly among children, due to factors like excessive screen time and limited outdoor

activities. Treatment options for myopia include corrective concave (diverging) lenses, such as glasses or contact lenses, and in some cases,
refractive surgery. Lifestyle changes, like spending more time
outdoors and reducing screen exposure, can also help manage the
condition and slow its progression.

LONG SIGHTEDNESS
Long sightedness (hyperopia), is a common vision condition where
distant objects may be seen more clearly than those that are near.
This occurs when the eyeball is too short or the cornea has too
little curvature, causing light rays to focus behind the retina
instead of directly on it. As a result, individuals may experience
difficulty with tasks such as reading or sewing, leading to eye
strain and discomfort. Symptoms of long sightedness can include
blurred vision, headaches, and difficulty concentrating on close-up
tasks. Children may not always recognize their vision problems,
which can affect their learning and development. Regular eye
examinations are crucial for early detection and management.
Treatment options for hyperopia include corrective lenses, such
as glasses or contact lenses, which help focus light correctly on the
retina. In some cases, refractive surgery may be considered to
provide a more permanent solution. To correct the defect, a convex
(converging) lens is placed in front of the eye; it converges the incoming
rays before they enter the eye so that light from a near object appears to
come from a point farther away and is focused on the retina, as shown above.

Similarities and differences between the eye and camera


Both systems utilize lenses to focus light onto a light-sensitive surface; in the eye, this is the retina, while in a camera, it is the sensor or film. Both
have mechanisms to control the amount of light entering; the iris in the eye and the diaphragm in a camera. However, there are notable differences
between the two. The human eye is a dynamic organ capable of adjusting focus and exposure in real-time, allowing for continuous visual processing.
In contrast, cameras typically capture a single still image at a time. Furthermore, while the eye has a single lens, cameras can have multiple lenses to
enhance image quality and versatility.


3.4 GENERAL WAVE PROPERTIES


Learning Outcomes
a) Understand that energy is transferred by waves, and these may
be transverse or longitudinal (k,u)
b) Know and use the relationship between velocity, frequency, and
wavelength (k, s)
c) Understand the propagation, properties, and uses of
electromagnetic waves, and that white light is a mixture of
frequencies but that light from a laser is a single frequency (k,
u,v/a)

Introduction
Waves are disturbances or oscillations that transfer energy from one place to another without the transfer of matter. They occur in various forms and
can be found in many aspects of daily life, including sound, light, water waves, and even seismic waves. The study of wave properties is fundamental
in understanding how energy propagates through different media. A wave is a disturbance that travels through a medium (such as air, water, or a
solid material) or through a vacuum (as in the case of electromagnetic waves) by transferring energy.

What is a wave?
A wave is a disturbance that travels through a medium, transferring energy from one location to another without the permanent displacement of
particles. This phenomenon can be observed in various forms, such as sound waves in air, water waves on the surface of a lake, and light waves in a
vacuum. Each type of wave exhibits unique characteristics, including wavelength, amplitude, and frequency, which define its behavior and properties.
Waves can be classified into two main categories: mechanical and electromagnetic. Mechanical waves, like sound and water waves, require a medium
to propagate, while electromagnetic waves, such as light, can travel through a vacuum. The study of waves is essential in various fields, including
physics, engineering, and acoustics, as they play a crucial role in communication, energy transfer, and many natural phenomena.

How Waves Transmit Energy


Waves are fundamental phenomena that transmit energy through various
mechanisms. In mechanical waves, energy is transferred through the
vibrations of particles within a medium, such as air or water. For instance,
sound waves propagate by causing air molecules to oscillate, transferring
energy from one location to another without moving the particles
themselves. Electromagnetic waves, on the other hand, transfer energy
through oscillating electric and magnetic fields. This type of wave does not
require a medium, allowing it to travel through the vacuum of space. The
energy carried by these waves is directly related to their frequency and
amplitude; higher frequency waves, such as gamma rays, carry more energy
compared to lower frequency waves like radio waves.


The basic features of waves


Waves are characterized by several key features.
The wavelength (λ) is the distance between two
successive peaks (maxima) or troughs (minima) in
a wave.
The amplitude (A) represents the maximum
height of the wave from its equilibrium position,
indicating the energy carried by the wave; higher
amplitudes correspond to greater energy.
Frequency (f) refers to the number of wave cycles that pass a given point per unit time, typically measured in hertz (Hz).
Frequency, f = (number of oscillations made, n) ÷ (time taken, t), i.e. f = n/t.
Frequency is related to the period T by T = 1/f, where T is the time for one complete oscillation.

The period, the inverse of frequency, is the time it takes for one complete wave cycle to pass.
Period of a wave is expressed as T = (time taken to complete the oscillations, t) ÷ (number of oscillations, n), i.e. T = t/n, with t in seconds.
The speed of a wave is determined by the medium through which it travels, influencing how quickly energy is transferred.
Wave speed refers to the distance a wave travels per unit time. It is typically measured in meters per second (m/s).
Speed = total distance ÷ time taken, so v = nλ/t = (n/t)λ = fλ, since f = n/t.

Example
A vibrator produces waves which travel a distance of 35.0 cm in 2 seconds. If the distance between two successive wave crests is 5.0 cm, determine the frequency of the vibrator.
Solution: the speed of the wave is given by v = d/t, where d is the distance moved and t is the time taken. v = 35.0/2 = 17.5 cm s⁻¹. Using v = fλ, f = v/λ = 17.5/5.0 = 3.5 Hz.
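As a quick check of this example, the relationships v = d/t and v = fλ can be evaluated with a short Python script. This is only an illustrative sketch; the variable names are ours, not from the text.

# Minimal sketch: checking the worked example above using v = d/t and v = f * wavelength.
# Units are kept in centimetres and seconds, as in the example.
distance_cm = 35.0        # distance travelled by the waves (cm)
time_s = 2.0              # time taken (s)
wavelength_cm = 5.0       # distance between successive crests (cm)

speed = distance_cm / time_s         # v = d/t  -> 17.5 cm/s
frequency = speed / wavelength_cm    # f = v / wavelength -> 3.5 Hz
period = 1 / frequency               # T = 1/f

print(f"speed = {speed} cm/s, frequency = {frequency} Hz, period = {period:.3f} s")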

ACTIVITY
1. A wave makes 10 cycles in 5 seconds. If the wavelength is 2cm, calculate its frequency, period and velocity.
2. A radio station produces waves of wavelength 2000 cm and speed 3.0 × 10⁸ m s⁻¹. Calculate the frequency, the periodic time, and the number of
cycles completed in 5 hours.
3. James made 30 consecutive troughs. The total distance between the troughs is 50cm and the frequency of the wave is 18Hz.
Determine the velocity of the waves made.


Classification of waves
Mechanical waves are disturbances that travel through a medium,
transferring energy from one location to another. Unlike
electromagnetic waves, which can propagate through a vacuum,
mechanical waves require a material medium, such as air, water, or
solid substances. Common examples include sound waves, which travel
through air, and water waves, which move across the surface of a body
of water. There are two primary types of mechanical waves: transverse
and longitudinal. In transverse waves, the oscillation occurs
perpendicular to the direction of wave propagation, as seen in waves
on a string.
In contrast, longitudinal waves, like sound waves, involve oscillations parallel to the direction of travel, compressing and rarefying the medium.
Mechanical waves play a crucial role in various natural phenomena, including seismic activity during earthquakes and the propagation of sound in
musical instruments.

Electromagnetic waves (EM waves) are a fundamental aspect of


physics, created by the interplay between electric and magnetic fields.
When charged particles accelerate, they generate changing electric
fields, which in turn induce changing magnetic fields. This reciprocal
relationship allows for the formation of EM waves that propagate
through space. These waves travel at the speed of light and encompass
a broad spectrum, including radio waves, microwaves, infrared, visible
light, ultraviolet, X-rays, and gamma rays. Each type of EM wave has distinct properties and applications, from communication technologies using
radio waves to medical imaging utilizing X-rays. The electromagnetic spectrum illustrates the range of these waves, highlighting their varying
wavelengths and frequencies.
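The relationship v = fλ applies to electromagnetic waves as well, with v equal to the speed of light (about 3.0 × 10⁸ m/s). The short Python sketch below converts a few representative wavelengths to frequencies; the wavelength values are rough, illustrative figures rather than data taken from this text.

# Minimal sketch: f = c / wavelength for a few representative electromagnetic waves.
SPEED_OF_LIGHT = 3.0e8   # meters per second (approximate)

representative_wavelengths_m = {
    "radio wave": 100.0,       # roughly 100 m
    "microwave": 0.01,         # roughly 1 cm
    "visible light": 5.0e-7,   # roughly 500 nm
    "X-ray": 1.0e-10,          # roughly 0.1 nm
}

for name, wavelength in representative_wavelengths_m.items():
    frequency = SPEED_OF_LIGHT / wavelength   # c = f * wavelength
    print(f"{name}: wavelength = {wavelength} m, frequency = {frequency:.2e} Hz")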

Transverse Waves
Transverse waves are a fundamental concept in physics, characterized by
oscillations that occur perpendicular to the direction of wave propagation.
Energy is transmitted as the particles of the medium move up and down
while the wave itself moves horizontally. Examples: waves on the surface of
water, vibrations in a string (e.g., a guitar string), and light waves.

This means that as the wave travels, the particles of the medium move up and
down or side to side, creating a distinct wave pattern. Common examples of
transverse waves include surface ripples on water and waves on a string. In contrast to longitudinal waves, where particle movement is parallel to the
wave's direction, transverse waves exhibit unique properties. They can transfer energy efficiently, making them essential in various applications, from
musical instruments to electromagnetic radiation, such as light waves.


Longitudinal Waves
Longitudinal waves are a type of mechanical wave where the oscillation
of particles occurs parallel to the direction of wave propagation. This
means that as the wave travels, the medium's particles move back and
forth in the same direction as the wave itself. Common examples of
longitudinal waves include sound waves, where compressions and
rarefactions travel through air or other media. Other examples include
seismic P-waves (primary waves) and compressional waves in a spring
or slinky. In contrast to transverse waves, where particle displacement is perpendicular to wave motion, longitudinal waves are characterized by their
ability to transmit energy through various materials. This property makes them essential in fields such as acoustics and seismology, where
understanding wave behavior is crucial for applications like sound transmission and earthquake analysis. Recent studies have explored the propagation
of longitudinal waves in stressed materials, such as rocks, and their implications for detecting structural defects. In a sound wave, for example,
vibrating air molecules collide with neighboring molecules, transferring energy as the wave moves through the medium. The energy is transmitted as
the particles push and pull on each other, compressing and expanding as the wave travels. The higher the amplitude (compression and rarefaction
intensity), the more energy the wave transmits.

The ripple tank


A ripple tank is a shallow glass container filled with water, designed to
visually demonstrate the fundamental properties of waves. This
educational tool allows students and researchers to observe various wave
phenomena, including reflection, refraction, interference, and
diffraction. By using an overhead light source, the ripples created in the
water can be illuminated, making it easier to study their behavior. In a
ripple tank, waves are generated by a vibrating source, creating patterns
that can be analyzed. For instance, when waves encounter obstacles or
openings, they exhibit diffraction, spreading out and creating
interference patterns. Ripple tanks are widely used in physics education,
providing a hands-on approach to learning about wave mechanics.

PROGRESSIVE AND STATIONARY WAVES

Progressive waves are defined as waves that travel continuously through a medium in the same direction without any change in amplitude. These
waves are essential for transferring energy from one location to another, while the medium itself remains largely undisturbed. The particles in the
medium oscillate around their equilibrium positions, allowing energy to propagate without the actual movement of matter. There are two primary
types of progressive waves: transverse and longitudinal. In transverse waves, the oscillation occurs perpendicular to the direction of wave travel, while
in longitudinal waves, the oscillation occurs parallel to the direction of travel. Both types exhibit unique characteristics, such as wavelength, frequency,
and speed, which are crucial for understanding wave behavior.

The mathematical representation of progressive waves involves equations that describe their properties, including wave speed and frequency.


Stationary (Standing) Waves

Standing waves, also known as stationary waves, are unique


waveforms that do not appear to travel through space. A standing
wave is the superposition of two travelling waves of the
same frequency and amplitude travelling in opposite
directions. Unlike traveling waves, which propagate from one point
to another, standing waves oscillate in place, creating fixed points
known as nodes where there is no movement. This phenomenon
occurs when two waves of equal frequency and amplitude travel in
opposite directions and interfere with each other. Standing waves can be described as the superposition of these counterpropagating waves. They can
manifest in various forms, including longitudinal waves, like sound, and transverse waves, such as those seen in water. A common example of standing
waves is found in musical instruments, where strings vibrate to produce sound without the wave traveling along the string. The study of standing
waves is essential in understanding resonance and wave behavior in different mediums, making it a fundamental concept in physics.

Differences between standing and progressive waves

Standing and progressive waves exhibit distinct characteristics, primarily


in their energy transfer and movement. Progressive waves, also known as
traveling waves, propagate through a medium, transferring energy from
one point to another. They are characterized by continuous motion,
allowing them to carry energy across distances, such as ocean waves
moving across the sea surface. In contrast, standing waves, or stationary
waves, do not propagate. They form when two waves of the same frequency
interfere, creating fixed points called nodes where there is no movement. The energy in standing waves oscillates between maximum and minimum
values but does not result in a net transfer of energy. Both wave types share similarities, such as having defined wavelengths and frequencies. However,
the key difference lies in their behavior: progressive waves convey energy, while standing waves remain fixed in space, oscillating around specific
points.

ELECTROMAGNETIC WAVES
Electromagnetic waves are a fundamental aspect of
physics, representing a form of radiation that travels
through the universe without the need for a medium.
These waves are generated by electrically charged
particles that undergo acceleration, creating
oscillating electric and magnetic fields. This unique
property allows electromagnetic waves to propagate
through the vacuum of space, making them essential
for various applications, from communication to
medical imaging. The electromagnetic spectrum


encompasses all types of electromagnetic radiation, ranging from radio waves to gamma rays. Each type of wave has distinct characteristics and energy
levels, influencing how they interact with matter. For instance, visible light, a small portion of the spectrum, is crucial for human vision, while X-rays
are utilized in medical diagnostics. Recent advancements in materials science have led to the development of flexible electromagnetic metamaterials,
inspired by natural phenomena like the color-changing abilities of chameleons. These innovations hold promise for enhancing electromagnetic control
and expanding the potential applications of electromagnetic waves in technology.

Applications
Electromagnetic waves are integral to modern technology, influencing various aspects of daily life. They encompass a broad spectrum, including radio
waves, microwaves, and infrared radiation, each serving unique applications. For instance, radio waves facilitate wireless communication, powering
devices like mobile phones, radios, and televisions. This connectivity has transformed how we communicate and access information.
Microwaves, another type of electromagnetic wave, are widely recognized for their role in cooking, particularly in microwave ovens. They generate
heat by agitating water molecules in food, providing a quick and efficient cooking method. Beyond culinary uses, microwaves are also essential in
telecommunications, enabling Wi-Fi and Bluetooth technologies. Electromagnetic waves extend their utility to fields such as agriculture, where remote
sensing technologies monitor crop health, and astronomy, where telescopes utilize various wavelengths to explore the universe. A radio wave can
broadcast FM and AM radio signals, but it can also do more. Our televisions use radio waves to receive the signal from various TV stations (assuming
you’re watching analog TV rather than streaming). Finally, radio waves have military applications in radar, which is why radar and radio
begin with the same three letters. An infrared wave generates heat, and it is also used in TV remote controls.
In today’s age, Bluetooth technology has primarily
replaced infrared. Yet, infrared waves are necessary for
creating heat-vision and night-vision cameras. That’s
because every living creature emits heat, which is infrared
waves. That means anything that gets hot is producing a
lot of infrared waves. An x-ray is simply a form of EM wave
used for internal photography. It uses a penetrating form
of EM radiation to take pictures inside your body. The
medical field uses the same EM radiation type as a cancer
treatment. The radiation is concentrated in a high-energy
form to eliminate cancer cells.

Effects of electromagnetic waves


Electromagnetic waves, while integral to modern technology, pose potential health risks that warrant attention. The World Health Organization (WHO)
has reported symptoms associated with exposure, including headaches, anxiety, and fatigue. Although scientific evidence remains inconclusive
regarding long-term effects, the possibility of adverse health outcomes cannot be dismissed. High levels of electromagnetic radiation, particularly from
radio frequencies, can lead to tissue heating, affecting biological systems. This heating can damage skin and other tissues, raising concerns about
prolonged exposure. Ionizing radiations, such as gamma rays, can cause cellular damage and increase the risk of cancer by inducing DNA mutations.

LASER BEAM

The term "laser" stands for "Light Amplification by Stimulated Emission of Radiation," which is, in a nutshell, how lasers work. Light
particles (called photons) are excited with current causing them to emit energy in the form of light. This light forms the laser beam. Lasers are


remarkable devices that emit coherent light through a process known as


stimulated emission. This phenomenon occurs when excited electrons transition
from a higher energy state to a lower one, releasing energy in the form of photons.
The emitted light is unique because all the photons are in phase, traveling in the
same direction and at the same wavelength, which is what makes laser light so
powerful and precise. The optical oscillator within a laser amplifies this light,
maintaining its coherence. This property allows lasers to be used in various
applications, from medical procedures to telecommunications. The ability to focus
laser beams into narrow, intense streams of light makes them invaluable in cutting,
welding, and even in scientific research. Under specific conditions, laser beams
can cast shadows, behaving like opaque objects.

A laser beam is a focused, coherent stream of light produced by a device called a laser (Light Amplification by Stimulated
Emission of Radiation).

Nature or Characteristics of Laser beam


 Coherence: Unlike regular light, which consists of many different wavelengths, a laser beam is made up of light waves that are
synchronized. This means all the light waves have the same wavelength (color) and move in step with each other, resulting in a highly
focused and intense beam.
 Monochromatic: Laser light is usually monochromatic, meaning it is made up of one single wavelength or color. This is different from
white light, which is made up of many different wavelengths.
 Directionality: Laser beams are directional. The light emitted from a laser does not spread out much, allowing it to travel long distances
with minimal dispersion. This is why laser beams appear as a narrow, straight line.
 High Intensity: Because the light is focused and not dispersed,
a laser beam can carry a lot of energy in a small area, making it
extremely intense and useful in applications requiring precision.
How Laser Beams are produced
 Stimulated Emission: A material inside the laser (such as a
gas, crystal, or semiconductor) is excited by an external energy
source. When the atoms in the material return to a lower energy
state, they emit photons. These photons stimulate other atoms
to emit more photons of the same wavelength and phase,
creating an amplified, coherent beam of light.
 Optical Cavity: The light bounces between two mirrors inside
the laser, further amplifying the beam. One of the mirrors is partially transparent, allowing the laser beam to escape and be used.

Applications:
 In manufacturing, high-powered lasers are employed for cutting, welding, and brazing metals, enabling intricate designs and efficient
production processes.


 In medicine, lasers facilitate surgeries, such as eye procedures, by providing targeted treatment with minimal damage to surrounding
tissues.
 In the realm of metrology and geophysics, laser beams are essential for surveying and construction, allowing professionals to draw straight
lines and measure distances accurately.
 Lasers play a crucial role in data storage and communications, where they enable high-speed data transfer and secure information
transmission.

PROPERTIES OF WAVES

Reflection, refraction, and diffraction are fundamental concepts in wave physics


that describe how waves interact with different media.
Reflection occurs when a wave encounters a boundary and changes direction,
bouncing back into the original medium. This phenomenon is crucial in various
applications, from acoustics to optics, as it determines how sound and light behave
when they meet surfaces.

Refraction, on the other hand, involves a change in direction as waves pass


from one medium to another, influenced by the media's properties. This
bending of waves is essential in understanding phenomena such as the bending
of light in lenses and the behavior of sound in different environments.

Diffraction refers to the spreading of waves when they encounter obstacles


or openings. These three processes (reflection, refraction, and diffraction) play a vital role in wave behavior, impacting technologies ranging from
telecommunications to medical imaging. Sound waves are diffracted more
than light waves because their wavelength is much greater than that of
light; this is why sound can be heard around corners.
When waves undergo diffraction, their wavelength and velocity remain
constant.

INTERFERENCE OF WAVES: This is the superposition of two
identical waves travelling in the same direction to form a single
wave with a larger or smaller amplitude. For a steady interference
pattern, the two waves must be coherent (same frequency and a
constant phase difference).
Wave interference is a fundamental phenomenon in physics that
occurs when two or more waves meet while traveling through the
same medium. This interaction can lead to two primary outcomes: constructive interference, where the waves combine to produce a wave of greater
amplitude, and destructive interference, where they cancel each other out, resulting in a wave of lower amplitude.


The nature of the interference depends on the relative phase


and amplitude of the interacting waves. For interference to
occur, at least two coherent waves are required. When these
waves overlap, their amplitudes add or subtract, creating a
new resultant wave. This principle is crucial in various
applications, from understanding sound and light behavior
to advanced technologies like AI devices that utilize wave
interference in magnetic materials.

CONSTRUCTIVE INTERFERENCE
Constructive interference of waves occurs when two or more
waves overlap in such a way that their crests and troughs align, resulting in a wave of greater amplitude (The resulting amplitude is the sum of the
individual amplitudes). This phenomenon is often described as the waves being "in-
phase," meaning they share the same phase relationship. When the peaks of the
waves coincide, the energy is effectively combined, leading to a stronger resultant
wave. This type of interference is crucial in various fields, including physics and
engineering, as it can enhance signals in communication systems or amplify sound
in acoustics. For instance, in musical instruments, constructive interference can
create richer tones when sound waves from different sources combine.

DESTRUCTIVE INTERFERENCE
Destructive interference of waves occurs when two waves overlap in such a way that
they cancel each other out, resulting in a reduced amplitude or even complete
cancellation. This phenomenon is most evident when the crest of one wave aligns
with the trough of another, leading to a net displacement that is less than either
wave alone. In essence, the waves are said to be 180 degrees out of phase, which is
a critical condition for achieving destructive interference. This type of interference
is not just a theoretical concept; it has practical implications in various fields,
including acoustics and optics. For instance, noise-canceling headphones utilize
destructive interference to reduce unwanted ambient sounds by generating sound
waves that are out of phase with the noise.

VIBRATION IN STRINGS
When a string is plucked, it vibrates at multiple frequencies, with the lowest
frequency known as the fundamental frequency. This happens because standing waves form on the string, and these occur only when a
whole number of half-wavelengths fits between the fixed ends. The tension, mass, and length of the string significantly influence its vibrational
characteristics. Thicker and heavier strings vibrate more slowly, resulting in lower pitches, while tighter strings produce higher frequencies. This
principle is evident in instruments like guitars, where strings increase in thickness from the high-E to the low-E string. The energy transfer in vibrating
strings exemplifies the continuous transformation of energy, making them a classic study in physics. Recent advancements in string technology, such
as engineered resonators, have led to strings that can vibrate longer at ambient temperatures, showcasing the ongoing exploration of string dynamics


in both music and physics. Many musical instruments use stretched strings to
produce sound. A string can be made to vibrate by plucking it, as in a guitar or
a harp, or by striking it, as in a piano. Different instruments produce sounds of
different qualities even when they play the same note.

3.5 SOUND WAVES


Learning Outcomes
a) Understand that sound is an example of a wave form that requires a medium through which to travel, and determine its velocity in air by
the echo method (k, s)
Introduction
Sound waves are vibrations that travel through various media, including gases, liquids, and solids. These waves consist of alternating compressions
and rarefactions, creating regions of high and low pressure that propagate at specific speeds. For instance, sound travels faster in water than in air,
illustrating the influence of the medium on sound wave velocity. Recent advancements in sound wave research have led to exciting developments.
Engineers at the University of Connecticut are exploring wave control and energy localization, which could have significant implications for various
technologies. Researchers have successfully simulated chaotic sound
wave propagation, confirming theories of acoustic turbulence, which
could enhance our understanding of sound behavior in complex
environments.

What are sound waves?


Sound waves are patterns of disturbance caused by the movement of
energy through various media, such as air, water, or solids. These
waves consist of alternating compressions and rarefactions, which


are regions of high and low pressure, respectively. As sound waves travel, they transfer energy from one location to another, allowing us to perceive
sounds. There are two primary types of waves: transverse and longitudinal. Sound waves are classified as longitudinal waves, meaning the particle
displacement is parallel to the direction of wave propagation. This characteristic allows sound to travel efficiently through different materials, with
speed varying based on the medium's density and elasticity. Sound waves are a type of mechanical wave that propagate through a medium (such as
air, water, or solids) due to the vibration of particles within the medium. They are longitudinal in nature, meaning that the oscillations of the particles
occur parallel to the direction of the wave's propagation. Sound waves play a crucial role in human communication, music, and technology and are
vital for understanding phenomena related to hearing and acoustics.

How Sound is produced as a Form of Energy


Sound energy is a form of energy produced by the vibrations of objects. When an object vibrates, it creates pressure waves in the surrounding medium,
such as air, water, or solids. These waves travel through the medium, allowing sound to be heard when they reach our ears. The energy from these
vibrations is transferred in waves, which is why sound can travel over distances. The process begins when a force, such as a plucked guitar string or
a vibrating speaker cone, causes an object to vibrate. This mechanical energy is then converted into sound energy, which propagates as longitudinal
waves. The frequency of these waves determines the pitch of the sound, while the amplitude relates to its loudness. It plays a crucial role in
communication, entertainment, and even in technologies like sonar and ultrasound.

Production of Sound
Sound is a fascinating phenomenon produced by the vibration of
matter. Various actions, such as plucking, scratching, blowing,
hitting, rubbing, and shaking objects, lead to these vibrations. For
instance, when a guitar string is plucked, it vibrates, creating
sound waves that travel through the air, allowing us to hear
music. Similarly, blowing into a flute causes air to vibrate within
the instrument, producing melodious sounds. The science behind
sound production is rooted in the movement of air particles. When
an object vibrates, it disturbs the surrounding air, causing particles to collide and transmit energy in the form of sound waves.
This principle is essential in understanding how we perceive sound in our environment. In summary, sound is an energy form generated by vibrations,
and its production can be observed in everyday activities, from
playing musical instruments to the simple act of speaking.

TRANSMISSION OF SOUND
Sound transmission is the process by which sound waves propagate
through various mediums, such as air, liquids, and solids. These
waves can interfere with one another, resulting in amplitudes that
may be greater or smaller than the original waves. When sound waves
reach the ear, they are converted into nerve impulses, which travel to the brain for interpretation, allowing us to perceive sounds. In architectural
contexts, sound transmission refers to how sound travels through building elements like walls and floors. Sound transmission is crucial for designing
spaces that minimize unwanted noise and enhance acoustic comfort. Interestingly, animals also utilize sound transmission for communication. For


instance, elephants can send low-frequency sound waves that travel several kilometers, enabling
them to communicate over long distances. When an object vibrates, it creates compressions and
rarefactions in the air. Molecules in the air collide with adjacent molecules, transferring energy.
The compressions (areas of high pressure) move outward as sound waves, while the rarefactions
(areas of low pressure) follow. This process continues as molecules pass kinetic energy from one
to another.

Experiment to Show That Sound Cannot Pass Through a


Vacuum
Objective:
To demonstrate that sound waves require a medium (air, water, or solids) to travel and cannot propagate through a vacuum.
Apparatus:
Bell jar (glass jar with a vacuum pump), Electric bell or buzzer (small, battery-operated), Vacuum pump (used to remove air from the jar), Power
supply or batteries for the bell, and Glass plate (to seal the bell jar)
Procedure:
 The setup involves placing an electric bell inside an airtight bell jar connected to a vacuum pump. Turn on the power supply to the electric bell; as the bell rings, sound waves travel through the air in the jar and the loud sound is heard.
 However, as the vacuum pump removes the air from the jar, the sound diminishes until it becomes inaudible. The hammer can still be seen striking the gong, but its sound no longer reaches your ears.
 Gradually allow air back into the bell jar through the vacuum pump. As the air re-enters, the loud sound of the bell will become audible
again.
Explanation:
 Before removing the air: Sound waves travel through the air in the bell jar. These waves are longitudinal waves that require a medium (air
in this case) to propagate. Hence, the sound of the bell can be heard.
 After removing the air: As the air is removed from the bell jar by the vacuum pump, the medium through which the sound waves travel is
reduced. In a complete vacuum, there are no air particles to transmit the sound vibrations, so the sound cannot travel, and the bell becomes
inaudible.
 When air is reintroduced: Once the air is let back into the jar, the medium for sound transmission is restored, and the sound waves can
once again travel to your ears, making the bell audible.
 This experiment demonstrates that sound cannot travel through a vacuum because there is no medium (such as air) for the sound waves to
propagate through. Sound requires a medium like air, water, or solids to travel, unlike light, which can travel through a vacuum.

Transmission of sound through air, water, and solids


Sound is transmitted through waves that travel across different mediums: gases, liquids, and solids. In air, sound waves travel relatively slowly due
to the greater distance between molecules. This allows for the propagation of sound, but at a limited speed of approximately 343 meters per second.
In contrast, sound travels faster in water, reaching speeds of about 1,480 meters per second, as the molecules are closer together, facilitating quicker
energy transfer. When sound waves move through solids, they travel even faster, often exceeding 5,000 meters per second. This is because the tightly
bonded molecules in solids allow for more efficient transmission of sound energy. The differences in speed across these mediums highlight the
importance of molecular structure in sound propagation.
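To illustrate these figures, the short Python sketch below compares how long sound takes to cover one kilometre in each medium, using the approximate speeds quoted above.

# Minimal sketch: travel time of sound over 1 km, t = d / v, using approximate speeds.
distance_m = 1000.0

approximate_speeds_m_per_s = {
    "air": 343.0,
    "water": 1480.0,
    "a typical solid": 5000.0,
}

for medium, speed in approximate_speeds_m_per_s.items():
    travel_time = distance_m / speed   # t = d / v
    print(f"Sound takes about {travel_time:.2f} s to travel 1 km in {medium}.")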


An experiment to determine the transmission of sound through air, water, and solids
By utilizing simple materials like tuning forks, students can observe how sound waves travel differently through various mediums.
When a tuning fork is struck and held near the surface of water, its
vibrations can be felt more strongly than in air, illustrating how
effectively water transmits sound. To measure the speed of sound in air,
students can conduct a straightforward experiment using two blocks of
wood and a stopwatch: strike the blocks together, record the time the
sound takes to travel a known distance, and then calculate the speed from
speed = distance ÷ time. This experiment
highlights that sound travels approximately four times
faster in water and about thirteen times faster in solids like wood compared to air. Through these experiments, students gain a deeper understanding
of sound wave behavior and the factors influencing sound transmission across different mediums.
Sound propagation and density
Sound propagation is significantly influenced by the density of the medium through which it travels. Denser materials can transmit sound waves more
effectively, but this relationship is nuanced. For instance, while sound travels faster in liquids than in gases, denser solids may exhibit slower sound
transmission due to their molecular structure. Larger molecules in a dense medium require more energy to transmit sound, resulting in a slower speed
of sound. The interplay between density and pressure also plays a crucial role in sound propagation. In gases, higher density corresponds to higher
pressure, which can enhance sound transmission. However, the absolute pressure in a sound wave remains relatively constant, indicating that other
factors, such as temperature, also affect sound speed.

Sound propagation and pressure


Sound propagation is intricately linked to air pressure,
although the speed of sound remains relatively constant
regardless of pressure changes. This phenomenon occurs
because sound waves are essentially pressure variations
traveling through a medium, such as air. While the speed
of sound is primarily influenced by temperature, the
intensity of sound is more closely related to the density of
the air, which is affected by pressure. In lower pressure
environments, sound waves can travel further due to
reduced atmospheric attenuation. This means that in high-
pressure conditions, sound may not propagate as effectively, as the increased density can dampen the sound intensity. Conversely, in low-pressure
scenarios, sound waves experience less resistance, allowing them to travel greater distances. Understanding how sound propagation depends on air pressure is important in areas
like acoustics, meteorology, and even animal communication, where sound plays a vital role in navigation and interaction.
Sound propagation can be affected by pressure, but the speed of sound in air is primarily determined by temperature. Sound is a vibration that travels
through particles in a medium, such as air, as a wave. This wave is created by a series of compressions and rarefactions in the medium. Compression


is a region of high pressure, and rarefaction is a region of low pressure. In an ideal gas, pressure and density both contribute equally to the velocity
of sound, so the effect of pressure cancels itself out. However, pressure can affect sound in other ways, such as harmonic distortion and shock
waves. High amplitude sound waves can distort as they travel through a medium, such as water. This is because sound speed is a function of the
pressure associated with the wave.

Vibrations and Pitch


Frequency of Vibration: The frequency at which an object vibrates directly
influences the pitch of the sound produced. Faster vibrations produce higher
frequencies and thus higher pitches. Example: A shorter guitar string vibrates
faster than a longer one, producing a higher pitch. This is why instruments with
shorter strings (like violins) produce higher notes than those with longer strings
(like cellos).

Loudness and Pitch


Pitch is determined by the frequency of sound waves; higher frequencies produce
higher pitches, while lower frequencies yield lower pitches. For instance, a thin
string vibrates faster, creating a higher pitch compared to a thicker string, which
vibrates more slowly. On the other hand, loudness is related to the amplitude of sound waves. Greater amplitude results in louder sounds, while
smaller amplitude produces softer sounds. This relationship means that two sounds can have the same pitch but differ in loudness, affecting how we
perceive them in various environments.

Frequency and pitch of sound waves


Frequency, measured in hertz (Hz), refers to the number of vibrations or cycles a sound
wave completes in one second. A high frequency indicates a rapid vibration, which we
perceive as a high-pitched sound, such as a whistle. Low-frequency sound waves vibrate
more slowly, resulting in lower-pitched sounds, like a bass drum. This connection
between frequency and pitch is crucial in various fields, from music to acoustics.
Musicians rely on this principle to create harmonious sounds, while scientists study it to
understand how sound interacts with different environments. For instance, the pitch of a
sound can change based on the medium through which it travels, affecting how we perceive it.


Types of sound waves


Sound waves are vibrations that travel through various mediums, characterized by their
frequency, amplitude, and wavelength. They can be classified into three main types:
audible, infrasonic, and ultrasonic waves. Audible sound waves fall within the frequency
range of 20 Hz to 20 kHz, which is detectable by the human ear. Infrasonic waves
(subsonic), have frequencies below 20 Hz and are often associated with natural
phenomena like earthquakes. Ultrasonic waves exceed 20 kHz and are utilized in various
applications, including medical imaging and cleaning. In terms of wave types, sound
waves are primarily longitudinal waves, where particle oscillations occur parallel to the
direction of wave travel. This characteristic allows sound to propagate efficiently through gases, liquids, and solids. Understanding these types of sound waves is
crucial in fields such as acoustics, where they are applied in designing soundproof rooms, musical instruments, and audio technology.
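The frequency boundaries above (about 20 Hz and 20 kHz) lend themselves to a simple classification rule. The Python sketch below is a minimal illustration using those thresholds; the example frequencies are arbitrary.

# Minimal sketch: classifying a sound by its frequency using the ranges given above.
def classify_sound(frequency_hz: float) -> str:
    if frequency_hz < 20:
        return "infrasonic"
    elif frequency_hz <= 20_000:
        return "audible"
    else:
        return "ultrasonic"

for f in [5, 440, 18_000, 40_000]:   # example frequencies in hertz
    print(f"{f} Hz -> {classify_sound(f)}")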

Applications of ultrasonic sound waves


Ultrasonic sound waves have a wide range of applications across
various fields, showcasing their versatility and effectiveness. In
medicine, ultrasound is primarily known for its diagnostic
capabilities, particularly in prenatal care, where it provides
crucial images of developing fetuses. It is used to treat soft
tissue injuries and alleviate pain associated with conditions like
bursitis and collagen diseases. Beyond healthcare, ultrasonic
waves play a vital role in navigation and underwater
exploration. By emitting sound waves and measuring their return time, these waves help map underwater terrains and locate objects. Furthermore,
ultrasonic technology is employed in industrial settings for nondestructive testing, ensuring the integrity of materials without causing damage.
Ultrasound also finds applications in cleaning, mixing, and communication, demonstrating its broad utility.

Loudness of sound
Loudness is the subjective perception of sound pressure, distinguishing how
loud or soft a sound appears to a listener. While closely related to sound
intensity, loudness is not synonymous with it. Instead, it reflects the
auditory sensation produced by sound waves, influenced by their
amplitude. The greater the amplitude, the louder the sound perceived by the
human ear. For instance, musicians and sound engineers often manipulate
loudness to create a balanced auditory experience. Tools like sound pressure
level (SPL) meters or smartphone apps can help measure loudness, ensuring that
audio levels are appropriate for different environments. Advancements in technology, such
as equal loudness apps, enhance listening experiences by equalizing sound frequencies.

Amplitude and loudness


A larger amplitude corresponds to a louder sound, while a smaller amplitude results in a softer sound. This relationship is fundamental to how we
perceive sound, as the amplitude reflects the pressure variations exerted on our ears. Human ears are remarkably sensitive, capable of detecting a vast
range of loudness levels. The intensity of sound waves, which is directly related to their amplitude, plays a significant role in this perception. As sound
waves travel, their amplitude diminishes, leading to a decrease in loudness. This understanding is used in the design of hearing aids and cochlear implants. These devices
rely on manipulating sound waves to enhance auditory experiences for individuals with hearing impairments.

Amplitude is a physical property of a sound wave that indicates the maximum


displacement of particles in the medium caused by the wave. It determines the energy
carried by the wave, with higher amplitudes representing greater energy. On a graph,
amplitude is the height of the wave from its equilibrium point, and it is a key factor
influencing the perception of sound intensity.

Loudness, on the other hand, is the subjective perception of how strong or soft a sound
appears to a listener. It depends on the amplitude of the sound wave but is also
influenced by factors like frequency and individual sensitivity to sound. For instance,
the human ear is more sensitive to frequencies between 2,000 and 5,000 Hz, making
sounds in this range seem louder even at the same amplitude as lower or higher frequencies.

While amplitude is an objective measure often expressed in terms of sound pressure levels, loudness is measured in psychoacoustic units like phons
or sones. A sound wave with a higher amplitude generally produces a louder sound, but this relationship is not perfectly linear due to the complexities
of human hearing.

Amplitude and intensity


Amplitude refers to the maximum displacement of particles in a medium caused by a
sound wave, essentially measuring the wave's energy. The greater the amplitude, the
louder the sound perceived. Sound intensity is defined as the sound power
per unit area, indicating how much energy passes through a specific
area over time. As amplitude increases, so does intensity. This relationship can be
mathematically expressed, showing that intensity is proportional to the square of the
pressure amplitude. This means that even small increases in amplitude can lead to
significant increases in intensity, affecting how we perceive sound. These concepts
help in various fields, from acoustics to neuroscience, where changes in sound
intensity can influence brain function and perception. On a graph, amplitude is
represented by the height of the wave, and larger amplitudes indicate stronger vibrations. Intensity measures the amount of energy transmitted by
the wave per unit area per unit time. It is a physical quantity that depends on both the square of the amplitude and the frequency of the wave. Intensity
is expressed in watts per square meter (W/m²) and provides an objective measure of the sound's power as it spreads through a medium. The
relationship between amplitude and intensity is quadratic, meaning that if the amplitude of a sound wave doubles, its intensity increases by a factor
of four. While amplitude represents the energy within the wave, intensity accounts for how this energy is distributed in space, making it a key factor
in understanding the propagation of sound.
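Because intensity is proportional to the square of the amplitude, doubling the amplitude multiplies the intensity by four. The Python sketch below illustrates this quadratic relationship; the proportionality constant is arbitrary, since real values depend on the medium and the frequency.

# Minimal sketch: intensity is proportional to the square of the amplitude (I ∝ A²).
k = 1.0   # arbitrary proportionality constant, for illustration only

for amplitude in [1.0, 2.0, 3.0]:
    relative_intensity = k * amplitude ** 2
    print(f"amplitude = {amplitude} -> relative intensity = {relative_intensity}")
# Doubling the amplitude (1.0 -> 2.0) raises the relative intensity from 1.0 to 4.0.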


SPEED OF SOUND WAVES


The speed of sound waves varies significantly depending on the medium through which they travel. In dry air at sea level, sound travels at
approximately 761 mph (or 343 meters per second). This speed can fluctuate based on environmental factors, primarily temperature and humidity.
For instance, sound travels faster in warmer air due to increased molecular activity, while higher humidity can also enhance sound speed. In contrast,
sound waves move more rapidly in solids than in liquids or gases. This is attributed to the rigidity and density of the material; sound waves can
transmit energy more efficiently in denser and more rigid substances. For example, sound waves can travel at speeds of 4 to 7 km/s in solids, compared
to 1.5 km/s in water.
Factors affecting speed of sound in a medium
 The speed of sound in a medium is influenced by (i) temperature, (ii) density, and (iii) elasticity. As temperature increases, the speed of
sound also rises because warmer molecules vibrate more rapidly, facilitating quicker energy transfer. In colder conditions, sound travels
slower due to reduced molecular motion. (A rough numerical illustration of this temperature effect is given in the sketch after this list.)
 Density plays a crucial role as well. Sound travels faster in denser materials, but this is nuanced by the material's elasticity. For instance,
while sound moves quickly through solids due to their high density and elasticity, it travels slower in gases, where lower density and
elasticity hinder sound propagation.
 Pressure can also affect sound speed, particularly in gases. Increased pressure typically raises density, which can enhance sound speed.
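As a rough numerical illustration of the temperature effect, a commonly used approximation for the speed of sound in air is v ≈ 331 + 0.6T (in m/s, with T in °C). This formula is not derived in this text and is assumed here purely for the sketch below.

# Minimal sketch: approximate speed of sound in air at different temperatures.
# The formula v ≈ 331 + 0.6*T (T in °C) is a common approximation, assumed for illustration.
def speed_of_sound_air(temperature_c: float) -> float:
    return 331.0 + 0.6 * temperature_c   # meters per second

for temp_c in [0, 15, 25, 35]:
    print(f"At {temp_c} °C, sound travels at about {speed_of_sound_air(temp_c):.1f} m/s in air.")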

ACTIVITY
Kate and Kitty stood some distance apart beside a long metal rail on a still day. Kate placed her ear against the rail while Kitty gave the rail a
sharp knock with a hammer. Two sounds, separated by a time interval of 0.5 s, were heard by Kate. If the speed of sound in air is 330 m s⁻¹ and that in the metal rail is 5300 m s⁻¹, establish the actual distance between the two.

Reflection of sound waves


The reflection of sound is a fundamental phenomenon where sound waves encounter a barrier and bounce back within the same medium. It has
applications, from architectural acoustics to musical performances. The effectiveness of sound reflection is influenced by the surface's shape and
material; flat surfaces tend to reflect sound more efficiently than irregular ones. In practical terms, sound reflection can lead to phenomena such as
echoes and reverberation. An echo occurs when sound waves reflect off a distant surface, returning to the listener after a noticeable delay. Reverberation
is the persistence of sound in a space due to multiple reflections, creating a rich auditory experience. Sound reflection is essential for designing spaces
that optimize acoustics, such as concert halls and recording studios.

EXPERIMENT TO VERIFY THE LAWS OF REFLECTION OF SOUND


 Put a ticking clock at the end of a cardboard tube resting on a table, and make the tube face
a hard plane surface, e.g. a sheet of plywood.
 Put your ear near a second cardboard tube and move it from side to side until
the ticking sound of the clock is heard loudest.
 Measure angle i and r, which are the angles of incidence and
reflection. From the experiment, sound is heard distinctly due to
reflection.
 The angle of incidence (i) and the angle of reflection (r) are found to be equal, and the incident direction, the normal XY, and the reflected direction lie in the same plane. This verifies the laws of reflection.


ECHOES
Echoes occur when sound waves reflect off surfaces and return to the listener. This reflection is crucial in acoustics, where it influences the design of
concert halls and auditoriums. The quality of sound in these spaces is significantly affected by how echoes are managed, ensuring that music is heard
clearly and harmoniously. Beyond acoustics, echoes serve practical purposes in scientific measurements. For instance, they are employed in sonar
technology to determine distances and map underwater terrains. By analyzing the time it takes for sound waves to return after bouncing off an object,
scientists can calculate its distance and shape. This principle is also utilized in medical imaging techniques, such as ultrasound, where echoes help
create detailed images of internal body structures. Echoes are being explored in fields like superconductivity and gravitational wave detection,
showcasing their versatility and importance in modern physics.

The occurrence and strength of echoes


Echoes result from the reflection of sound waves off surfaces. When a sound
is produced, it travels through the air until it encounters a surface, such as a wall
or a mountain. If the surface is hard and smooth, it reflects the sound waves
effectively, allowing the listener to hear the echo after a brief delay. The strength
of an echo is influenced by the absorptive properties of the reflecting surface;
denser and smoother surfaces yield stronger echoes. In practical applications,
echoes are used in acoustics, particularly in concert halls and auditoriums, where
sound quality is paramount. The design of these spaces often considers how echoes will enhance or detract from the auditory experience. Additionally,
in medical imaging, ultrasound technology utilizes echoes to create images of internal structures, demonstrating the versatility of this phenomenon
across various fields.

Investigating the velocity of sound in air using the echo method


The echo method is a practical way to measure the velocity of sound in air. It involves producing a sound, allowing it to travel a known distance (d) to a reflecting surface, and measuring the time (t) it takes for the sound to return as an echo. The velocity of sound (v) can be calculated using the formula v = 2d/t, where:
d = one-way distance travelled by the sound to the reflecting surface, t = time taken for the echo to return
Assumptions
The sound travels at a constant speed in the medium (air) under controlled conditions (e.g., similar temperature, humidity). The distance measured is
the one-way distance to the reflecting surface; thus, the total distance for the echo is twice the one-way distance.
Equipment Needed
A sound source, measuring tape, and stopwatch or timer for measuring the time interval
Procedure
 Choose an open area where a hard surface (like a wall or a large rock) can reflect sound waves.
 Measure the distance (d) from the sound source to the reflecting surface.
 Generate a loud sound (such as striking two pieces of hardboard together or blowing a whistle) at the chosen distance from the reflecting surface.
 Start the stopwatch at the moment the sound is made and stop it when the echo is heard. Record the time (t) taken for the echo to return.
 Use the formula v = 2d/t to calculate the speed of sound in air; the factor of 2 appears because the measured time t is for the sound to travel to the wall and back. (A short calculation sketch follows this procedure.)
 Repeat the experiment to obtain two or more values of the velocity, and determine the average speed.
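A minimal Python sketch of the calculation step is given below; the distance and echo times are made-up example measurements, not real data.

# Minimal sketch: speed of sound by the echo method, v = 2d / t.
distance_to_wall_m = 85.0            # one-way distance d to the reflecting wall (m)
echo_times_s = [0.50, 0.49, 0.51]    # measured times for the echo to return (s)

speeds = [2 * distance_to_wall_m / t for t in echo_times_s]   # v = 2d / t for each trial
average_speed = sum(speeds) / len(speeds)

print("Speed from each trial (m/s):", [round(v, 1) for v in speeds])
print(f"Average speed of sound ≈ {average_speed:.1f} m/s")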
Other applications of sound waves/Echoes


Submarine Navigation(SONAR)
Submarines use sound waves, specifically sonar (Sound Navigation and Ranging), for navigation and obstacle detection underwater. Sonar systems
emit sound pulses and measure the time it takes for the echoes to return after reflecting off objects. This helps determine distances, locate other
vessels, and navigate safely in the depths where visibility is limited.
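Sonar ranging uses the same echo relationship: the one-way distance is d = v × t / 2, where v is the speed of sound in water (about 1480 m/s, as quoted earlier). The Python sketch below uses an illustrative echo time, not a real measurement.

# Minimal sketch: estimating the distance to an underwater object from a sonar echo.
SPEED_OF_SOUND_WATER = 1480.0   # meters per second (approximate)
echo_time_s = 0.8               # time between sending the pulse and receiving the echo (s)

distance_m = SPEED_OF_SOUND_WATER * echo_time_s / 2   # the pulse travels there and back
print(f"The object is about {distance_m:.0f} m away.")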
Fisheries: Locating Schools of Fish
Fisheries rely on echo sounders, a type of sonar technology, to locate schools of fish. Sound waves are transmitted into the water, and the reflected
signals from fish or other underwater objects provide information about their position and size. This helps optimize fishing efforts while reducing
overexploitation.
Oceanography: Mapping the Ocean Floor
Oceanographers use sound waves for seabed mapping, employing techniques like multibeam sonar. These systems emit sound waves that travel to the
ocean floor and return as echoes, creating detailed topographic maps. This data is essential for studying marine ecosystems, underwater navigation,
and geological research.
Ultrasound Applications
Ultrasound, which involves high-frequency sound waves exceeding 20,000 Hz, is widely used in various fields due to its precision and non-invasive
nature. These sound waves travel through different materials, reflecting back when they encounter changes in density, which enables detailed analysis
and effective applications.
Medical Imaging
In healthcare, ultrasound is a critical tool for creating images of internal structures, such as organs, tissues, and developing fetuses. The machine
emits high-frequency sound waves that penetrate the body and reflect off tissues, producing echoes that are converted into visual images. This method
is non-invasive, radiation-free, and widely used in prenatal care, abdominal imaging, and cardiac examinations.
Industrial Cleaning
Ultrasound is employed in industrial cleaning to remove dirt, grease, and contaminants from intricate or delicate items, such as surgical tools, jewelry,
and machine parts. High-frequency sound waves create microscopic bubbles in a cleaning solution, a process known as cavitation. These bubbles
collapse, producing intense pressure that effectively cleans without damaging the items.
Non-Destructive Testing (NDT)
Ultrasound is a vital technique in NDT for inspecting materials like metals, composites, and concrete without causing damage. High-frequency sound
waves are directed into the material, and the reflected waves are analyzed to detect cracks, voids, or other imperfections. This method is extensively
used in industries like aerospace, construction, and manufacturing for quality assurance and safety checks.
Applications of Sound Spectrum:
In music, the spectrum is used to analyze the different harmonic components of musical notes and instruments. The richness of a musical sound comes
from the combination of multiple harmonics, each contributing a unique frequency to the sound.
The sound spectrum is crucial in speech analysis. Different vowels and consonants have characteristic frequency patterns, and the study of these
patterns helps in fields like speech recognition and phonetics.
In audio engineering, the sound spectrum is analyzed and adjusted to enhance or reduce certain frequencies (e.g., equalization) to improve sound
quality for music, radio, or film.
REFRACTION OF SOUND WAVES
Refraction of sound waves is a fascinating phenomenon that occurs
when sound travels through different media or varying
temperatures. This bending of sound waves is primarily due to
changes in wave speed, which can be influenced by factors such as
air temperature.
For instance, sound waves may refract when moving over water, as
they transition between air and the water surface, altering their speed and direction. The critical angle plays a significant role in sound refraction,
determining how sound waves behave as they encounter different mediums. When sound waves reach this angle, they can either reflect or refract,
depending on the properties of the media involved.
Refraction occurs when the speed of sound waves changes. The speed of sound in air is affected by temperature, so sound waves are refracted when they pass through regions of air at different temperatures. This explains why it is easier to hear sound from distant sources at night than during the day.
During the day, the ground is hot, making the layers of air near the ground warm while the air above is generally cool. The wave fronts from the source are refracted away from the ground. At night, the ground is cool, so the layers of air near it are cool while the air above is warmer. The wave fronts from the source are refracted towards the ground, making it easier to hear sound waves over long distances.
DIFFRACTION OF SOUND
This refers to the spreading of sound waves around corners or through gaps when the sound waves have a wavelength similar to the size of the gap. It is due to diffraction that a person behind a house can hear sound from inside. Diffraction of sound waves occurs when sound waves encounter obstacles or
openings. This bending and spreading of waves allows sound to travel around corners and through small openings, making it possible to hear sounds
even when the source is not directly in line with the listener. The extent of
diffraction is influenced by the wavelength of the sound; longer wavelengths,
such as those produced by lower frequencies, diffract more easily than shorter
wavelengths. For instance,
when sound waves pass
through a doorway, they
spread out into the adjacent
room, allowing us to hear
conversations or music from
another space. This property of
sound diffraction is crucial in various applications, from architectural acoustics to audio engineering,
as it helps in designing spaces that enhance sound quality.
INTERFERENCE OF SOUND
When two sound waves from two different sources overlap, they produce regions of loud sound and regions of quiet sound. The regions of loud sound
are said to undergo constructive interference while regions of quiet are said to undergo
destructive interference. Sound wave interference occurs when two or more sound waves
occupy the same space, leading to interactions that can alter their amplitudes.
Constructive interference happens when waves combine to create a larger amplitude,
while destructive interference occurs when waves cancel each other out, resulting in a
smaller amplitude. The interference pattern is influenced by the wavelength of the sound
waves. As the wavelength increases, the points of constructive and destructive
interference become more spaced out, leading to fewer interference points. This principle
is often demonstrated in experiments using speakers, where sound waves from two sources meet and interact.
Reverberation
Reverberation is the persistence of sound after its source has stopped, caused by multiple reflections of the sound waves off surfaces in an enclosed
space. Unlike an echo, which is a distinct repetition of a sound, reverberation involves a continuous blending of the sound as it reflects and gradually
diminishes in intensity.
Reverberation is the overlapping and continuation of sound caused by reflection in a confined space. It creates a sense of fullness or richness in the
sound but does not produce a distinct repetition of the sound. Echo, on the other hand, is a distinct repetition of the original sound. For an echo to
occur, the reflected sound must travel a longer distance (typically more than 17 meters) so that it is heard separately from the original sound.
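As a rough check of the 17-metre figure: the ear hears an echo as a separate sound only if it arrives roughly 0.1 s after the original, and taking the speed of sound in air as about 340 m/s, the reflected sound must cover an extra distance of about 340 × 0.1 = 34 m; since this is a there-and-back path, the reflecting surface must be at least about 34 ÷ 2 = 17 m away.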
Causes of Reverberation
Large rooms or spaces with high ceilings tend to have longer reverberation times because sound waves travel longer distances before being absorbed
or dissipated. Irregularly shaped rooms can also contribute to reverberation by creating complex patterns of reflected sound. Hard, reflective surfaces
like glass, concrete, or marble reflect sound more effectively, leading to more reverberation. Soft, absorbent surfaces like carpets, curtains, and foam
panels absorb sound energy, reducing reverberation by preventing multiple reflections. The larger the distance between surfaces (walls or ceiling),
the longer it takes sound waves to bounce back and reach the listener, which can lead to a more noticeable reverberation effect.
Examples of Reverberation
 Reverberation in concert halls gives music a rich, full quality by allowing sound to linger slightly after it is produced. However, the
reverberation time must be carefully controlled to avoid creating a muddled or indistinct sound.
 Due to their large size and hard stone surfaces, cathedrals typically have long reverberation times. This is why spoken words or music in
such spaces often sounds echoed and prolonged.
 Recording studios are often designed to have minimal reverberation. This allows for cleaner recordings of sound without the unintended
effects of reflections from walls or other surfaces. Acoustic treatments like foam panels and baffles are used to absorb sound and reduce
reverberation time.
Effects of Reverberation on Sound Quality
 In many cases, reverberation can enhance the quality of sound, especially in music. The reflection of sound waves can create a sense of
spaciousness and depth, giving the sound a more natural and immersive quality.
 Too much reverberation can distort sound. When reverberation time is excessively long, individual sounds (such as spoken words) can
overlap and become difficult to distinguish, leading to a loss of clarity and intelligibility.
 In environments with excessive reverberation, speech may sound garbled or echoed because the reflections of earlier sounds interfere with
new ones, making it harder to understand what is being said.
Controlling Reverberation
 Soft materials, like foam or fabric-covered panels, can be installed on walls or ceilings to absorb sound waves and reduce reverberation.
These panels are commonly used in recording studios, theaters, and auditoriums to control the acoustics of the room.
 Adding carpets or curtains can also absorb sound, helping to reduce the amount of reflection and thus minimizing reverberation in smaller
spaces like homes or offices.
 Designing rooms with irregular shapes or adding furniture can break up sound waves and prevent them from reflecting as strongly. This is
useful in reducing reverberation in spaces where clear, crisp sound is important, such as conference rooms or classrooms.
3.6 HEAT QUANTITIES AND VAPOURS
Learning Outcomes
a) Understand and use the concepts of heat capacity and latent heat (k, u,s)
b) Know and explain the implications of the high values of the specific latent heat and the specific heat capacity of water (k,u)
c) Carry out calculations and investigations on specific heat capacity and specific latent heat (u,s)
d) Understand the concept of latent heat and change of state, and use them to explain melting and boiling point (u,s)
e) Understand the meaning of saturated and unsaturated vapours, saturated vapour pressure, and how these terms relate to boiling and
evaporation(u)
f) Appreciate the cooling effect of evaporation and how this contributes to maintaining constant body temperature (k, u, s,v/a)
Introduction
Heat capacity is a fundamental physical property that quantifies the
amount of heat energy required to change the temperature of a
substance by one degree Celsius (or Kelvin). It is typically expressed
in units such as calories per degree or joules per degree. There are
two main types of heat capacity: specific heat capacity and molar
heat capacity. Specific heat capacity refers to the heat required to
raise the temperature of one gram of a substance, while molar heat
capacity pertains to one mole of a substance. In practical
applications, heat capacity is significant in designing systems such
as heat pumps and thermal storage solutions, ensuring efficient
energy use and temperature regulation in various environments.
Heat energy
Heat energy, often referred to simply as heat, is the transfer of energy from a higher temperature body to a lower temperature one. This process is fundamental in thermodynamics, where heat is defined as energy in transit, distinct from energy stored within a system. The movement of tiny particles (atoms, molecules, or ions) within solids, liquids, and gases generates thermal energy, which directly influences temperature. There are three primary
mechanisms of heat transfer: conduction, convection, and radiation. Conduction occurs through direct contact, convection involves the movement of
fluids, and radiation transfers energy through electromagnetic waves. Heat energy is
produced from various sources, including the sun, combustion of fuels, and chemical
reactions.
Heat energy is the energy transferred between objects at different
temperatures, playing a crucial role in thermodynamics. It is the capacity to do
work by moving matter.
Heat capacity refers to the amount of heat energy required to raise the
temperature of a substance by one degree Celsius. This property varies among
materials, influencing how they store and transfer thermal energy. For instance, substances with high heat capacity can absorb more heat without a
significant temperature change, making them ideal for applications requiring thermal stability.
Specific heat capacity, closely related to heat capacity, measures the heat
required to change the temperature of a unit mass of a substance by one degree
Celsius.
How does heat energy flow?
Heat energy flows through three primary mechanisms: conduction, convection, and radiation.
Conduction occurs when heat is transferred directly between substances in contact, such as a
metal spoon in a hot pot of soup. The heat moves from the hotter substance to the cooler one
until thermal equilibrium is reached. Convection involves the movement of heat through fluids (liquids and gases) as warmer, less dense areas rise
and cooler, denser areas sink. This process is commonly observed in boiling water or atmospheric circulation. Radiation, on the other hand, is the
transfer of heat through electromagnetic waves, such as infrared
radiation from the sun warming the Earth. Unlike conduction and
convection, radiation does not require a medium to transfer heat,
allowing energy to travel through the vacuum of space.
Heat capacity
Heat capacity is a fundamental physical property of
matter, defined as the amount of heat energy required to raise the temperature of an object by one degree Celsius. It is typically
expressed in units such as calories per degree or joules per degree. This property is vital in various scientific and engineering applications, as it helps
predict how substances respond to heat.
Specific heat capacity, a related concept, refers to the heat required to raise the temperature of one unit of mass of a substance
by one degree Celsius.
For instance, water has a high specific heat capacity, making it effective for temperature regulation in natural and engineered systems. Recent
advancements in technology, such as the world's largest CO2 based seawater heat pump in Denmark, highlight the importance of heat capacity in
sustainable energy solutions. These innovations aim to efficiently heat thousands of homes while minimizing environmental impact, showcasing the
practical applications of heat capacity in modern society.
To investigate how different liquids of the same volume acquire heat
 An experiment can be conducted using equal amounts of various liquids, such as water, oil, and alcohol.
 Begin by measuring 100 mL of each liquid and placing them in identical calorimeters.
 Use a thermometer to monitor the temperature of each liquid as they are heated uniformly, ensuring that they all receive the same amount of heat energy.
 As the liquids are heated, record the temperature changes at regular intervals.
This data will reveal how quickly each liquid's temperature rises.
 The experiment shows that different liquids warm up at different rates due to their unique molecular structures and intermolecular forces. For instance, water may heat up more slowly than alcohol, indicating a higher specific heat capacity.
 By analyzing the temperature data, one can conclude that equal volumes of different liquids rise in temperature at different rates for the same heat supplied, demonstrating the principles of thermodynamics and material properties.
Heat Capacity
Different substances have different heat capacities due to variations in molecular structure and bonding. Solids, liquids, and gases also differ, with gases generally having higher specific heat capacities per unit mass than many solids. Heat capacity can change with temperature; as substances heat up, their ability to store thermal energy may vary.
Heat capacity, C, of an object is given by C = heat supplied (J) / change in temperature (K), that is, C = Q/Δθ. The SI unit of heat capacity is the joule per kelvin (J/K or J K⁻¹).
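As a quick numerical illustration of this definition (the heat and temperature values are made up): if 8,400 J of heat raises an object's temperature by 4 K, then C = 8,400 ÷ 4 = 2,100 J/K. The same arithmetic in Python:

```python
# Heat capacity from the definition C = Q / delta_theta (illustrative numbers)
heat_supplied = 8400.0      # J
temperature_rise = 4.0      # K
heat_capacity = heat_supplied / temperature_rise
print(f"C = {heat_capacity:.0f} J/K")   # -> 2100 J/K
```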
Investigating the effect of mass on the heat capacity of a substance
Objective: to investigate how increasing the mass of a substance affects its overall heat capacity while keeping other factors constant.
Variables
 Independent Variable: Mass of the substance.
 Dependent Variable: Heat capacity (C) and temperature change
 Controlled Variables: Heat source, heating duration, and starting temperature.
Experimental Setup
Materials: A substance with a known specific heat capacity (e.g., water), A heat source (e.g., a heater),Insulated containers, A thermometer or
temperature probe, Measuring scales.
Procedure:
a) Measure and record different masses of the substance (100 g, 200 g, 300 g).
b) Place each sample in an insulated container.
c) Heat each sample with a constant heat source for a fixed time (e.g., 5 minutes).
d) Record the temperature change (Δθ) for each mass.
e) Tabulate the results, including values of θ1, θ2, and Δθ = (θ2 − θ1).
f) Plot a graph of mass (m) against temperature change (θ2 − θ1).
g) Examine the nature of the graph obtained and draw a conclusion from it.
Expected Results
As the mass increases, the total heat capacity will increase proportionally.
The temperature change (Δθ) for larger masses will be smaller if the same amount of heat is applied, demonstrating that larger masses require more
energy to achieve the same temperature rise.
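The expected trend can be illustrated numerically. For a fixed heat input Q, the temperature rise of water follows Δθ = Q/(mc), so doubling the mass halves the rise. The short Python sketch below assumes water (c ≈ 4,200 J/(kg·°C)) and a hypothetical heat input of 8,400 J.

```python
# Illustrating why a larger mass gives a smaller temperature rise for the same heat.
# The heat input is a hypothetical fixed value; c is the specific heat capacity of water.

c_water = 4200.0        # J/(kg·°C)
heat_input = 8400.0     # J, same heat supplied to every sample (hypothetical)

for mass in (0.1, 0.2, 0.3):   # kg, i.e. 100 g, 200 g, 300 g as in the procedure
    delta_theta = heat_input / (mass * c_water)
    print(f"mass = {mass:.1f} kg -> temperature rise = {delta_theta:.1f} °C")
# Output: 20.0 °C, 10.0 °C, 6.7 °C — the rise falls as the mass increases.
```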
Specific heat capacity
Specific heat capacity is defined as the amount of heat required to raise the temperature of a unit mass of a substance by 1°C.
This property, denoted by the symbol 'c', varies among different materials, influencing how they respond to heat. For instance, water has a notably
high specific heat capacity, which allows it to absorb and retain heat longer than many other substances. This characteristic plays a role in regulating
climate and weather patterns. The specific heat capacity of various materials can be found in comprehensive tables, which list values for common substances. For example, aluminum has a specific heat capacity of about 0.89 J/(g·K), while concrete has a slightly lower value of about 0.88 J/(g·K).
Specific heat capacity is defined by the formula c = ΔQ/(mΔθ), where c = specific heat capacity (measured in joules per kilogram per degree Celsius, J/(kg·°C)), m = mass of the substance (measured in kilograms, kg), and ΔQ and Δθ are as defined earlier.
Different substances have distinct specific heat capacities. For example: water = 4,200 J/(kg·°C) (high specific heat, which is why it moderates temperatures) and iron = 450 J/(kg·°C) (lower specific heat compared to water).
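As a quick illustration of Q = mcΔθ using the two values quoted above, the sketch below compares the heat needed to warm 2 kg of water and 2 kg of iron by 10 °C; the mass and temperature rise are illustrative choices, not values from the text.

```python
# Comparing heat required, Q = m * c * delta_theta, for water and iron (illustrative case)
specific_heats = {"water": 4200.0, "iron": 450.0}   # J/(kg·°C), values quoted in the text

mass = 2.0            # kg (illustrative)
delta_theta = 10.0    # °C (illustrative)

for substance, c in specific_heats.items():
    q = mass * c * delta_theta
    print(f"{substance}: Q = {q:.0f} J")   # water: 84000 J, iron: 9000 J
```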
Determining the Specific Heat Capacity of a Substance by the Method of Mixtures
The method of mixtures is a practical approach to determine the specific heat capacity (c)
of a substance. It is based on the principle of conservation of energy, where heat lost by
a hotter object equals the heat gained by a cooler one, assuming no heat loss to the
surroundings.
Principle
The heat lost by the hotter substance is given to the cooler substance until thermal equilibrium is reached. Mathematically: For a substance, heat
transfer is calculated as Q=mcΔθ
Materials
A calorimeter (insulated container), Thermometer, Balance (to measure mass), A hot solid (e.g., a metal block), A known volume of water (to act as
the cooler substance), and Heat source (to heat the solid).
Procedure
 Measure the mass of the solid, 𝑚𝑠 and its initial temperature 𝜃0 after heating.
 Measure the mass of water,𝑚𝑤 and initial temperature of the water 𝜃1 in the calorimeter.
 Quickly place the heated solid into the water in the calorimeter.
 Stir gently and record the final equilibrium temperature of the mixture.
Energy Balance
Assume no heat is lost to the surroundings or the calorimeter.
Calculate the heat lost by the solid: 𝑄𝑠 = 𝑚𝑠 𝑐𝑠 (𝜃0 − 𝜃2 )
Calculate the heat gained by the water: 𝑄𝑤 = 𝑚𝑤 𝑐𝑤 (𝜃2 − 𝜃1 )
Since Qs = Qw, rearrange to solve for cs = mw cw (θ2 − θ1) / [ms (θ0 − θ2)].
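A minimal sketch of this calculation with hypothetical readings, neglecting the heat absorbed by the calorimeter as the energy balance above does.

```python
# Method of mixtures: c_s = m_w*c_w*(theta_2 - theta_1) / (m_s*(theta_0 - theta_2))
# All readings below are hypothetical illustrative values.

c_water = 4200.0    # J/(kg·°C)
m_solid = 0.20      # kg, mass of the heated solid
theta_0 = 100.0     # °C, initial temperature of the solid
m_water = 0.50      # kg, mass of water in the calorimeter
theta_1 = 25.0      # °C, initial temperature of the water
theta_2 = 28.0      # °C, final equilibrium temperature

c_solid = (m_water * c_water * (theta_2 - theta_1)) / (m_solid * (theta_0 - theta_2))
print(f"Specific heat capacity of the solid ≈ {c_solid:.0f} J/(kg·°C)")
```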
Expected Outcome
The specific heat capacity of the solid can be calculated accurately if heat losses are minimized. Typical values of specific heat capacities for common
materials (e.g., metals) can be compared to validate the results.
Sources of Error
 Heat loss to the environment or calorimeter.
 Inaccurate measurement of temperatures or masses.
 Uneven heat distribution in the solid or water.
Applications of High Specific Heat Capacity
 In heating systems, for instance, materials with high specific heat capacity, like water, are preferred for their ability to store and transfer
heat efficiently. This property is also essential in cooking, where understanding the specific heat of different materials helps determine how
quickly food heats up or cools down.
 In construction, materials with high specific heat, such as wood, serve as effective insulators. During hot summer months, wooden houses
maintain cooler indoor temperatures, showcasing the practical benefits of specific heat capacity in energy efficiency. Large bodies of water
absorb and store heat from the sun, moderating coastal temperatures and creating a more stable climate. This helps prevent extreme
temperature fluctuations in coastal regions. The ocean acts as a heat reservoir, absorbing heat during the summer and releasing it during
the winter, influencing weather patterns and climate stability.
 Moreover, specific heat capacity is vital in material characterization across industries. By measuring this property, scientists can identify
and select materials suitable for specific applications, enhancing performance in fields ranging from engineering to environmental science.
High specific heat capacity of water
Water's high specific heat capacity gives it various applications across multiple industries.
 One of the most significant uses is in cooling systems, particularly in nuclear power plants, where water absorbs excess heat, preventing
overheating. This property is also vital in automotive and marine engines, where water is used to maintain optimal operating temperatures,
ensuring safety and efficiency.
 In everyday life, water's ability to retain heat is harnessed in heating systems, such as radiators and central heating. These systems rely on
water to absorb and distribute heat effectively, providing warmth in homes and buildings.
 Additionally, water is commonly used in firefighting, as it can absorb large amounts of heat, helping to extinguish flames more effectively.
 Water's high specific heat capacity is essential for maintaining stable body temperatures in living organisms, allowing for proper
physiological functions. This characteristic underscores the importance of water in both industrial and biological contexts.
Effects of ocean currents
The ocean plays a role in influencing climate and weather patterns on land. By absorbing solar radiation, it stores heat and redistributes moisture
globally, which drives various weather systems. Ocean currents act like a conveyor belt, transporting warm water and precipitation from the equator
toward the poles, while cold water flows back, maintaining a balance in temperature and climate. These currents are slower than winds due to friction,
moving at angles to wind direction. They significantly impact regional climates, affecting everything from rainfall patterns to temperature variations.
For instance, disruptions in these currents due to climate change could lead to drastic temperature shifts, particularly in Europe. Ocean currents are
vital for marine ecosystems, transporting nutrients and dissolved gases essential for marine life.
The effects of ocean currents are profound, influencing global climate,
weather patterns, marine ecosystems, and human activities. Ocean currents are
continuous, directed movements of seawater driven by forces such as wind,
Earth's rotation, temperature differences, salinity gradients, and the
gravitational pull of the Moon and Sun.
The primary effects:
 Regulation of Global Climate
Ocean currents play a crucial role in regulating Earth's climate by
redistributing heat. Warm currents, such as the Gulf Stream, transport heat
from the equator toward the poles, warming regions like Western Europe.
Conversely, cold currents, such as the California Current, carry cooler waters
from polar regions toward the equator, moderating temperatures in coastal
areas. These currents maintain the balance between tropical and polar temperatures, preventing extreme climate conditions.
 Impact on Marine Ecosystems
Currents influence nutrient distribution, supporting marine life. Upwelling currents, where deep, nutrient-rich water rises to the surface, promote the
growth of phytoplankton, forming the base of the marine food chain. This supports thriving fisheries, as seen off the coasts of Peru and Namibia.
Conversely, strong currents can disrupt ecosystems by altering habitats or causing displacement of marine species.
 Weather and Storm Patterns
Ocean currents significantly impact atmospheric weather systems. For example, warm ocean currents can fuel tropical storms and hurricanes by
providing the necessary heat and moisture. Cold currents, in contrast, often stabilize the atmosphere, reducing storm activity. Phenomena such as El
Niño and La Niña, caused by changes in Pacific Ocean currents, lead to widespread weather disruptions, including droughts, floods, and temperature
anomalies.
 Navigation and Human Activities
Currents affect shipping and navigation. Sailors and mariners historically relied on predictable ocean currents to reduce travel time and fuel
consumption. Today, shipping routes and oil exploration efforts consider current patterns for efficiency and safety. Additionally, strong currents like
the Kuroshio Current pose challenges to navigation and fishing industries due to their intensity.
 Sediment Transport and Coastal Erosion
Ocean currents contribute to the transport of sediments along coastlines,
shaping the geography of beaches and estuaries. Longshore currents, for
instance, move sand parallel to the shore, which can lead to erosion in
some areas and deposition in others. These processes influence coastal
development and require careful management to mitigate erosion and
protect habitats.
The land and sea heat up and cool at different rates
The differential heating of land and sea is a fascinating phenomenon that
significantly impacts our climate and weather patterns. During the summer, land heats up much faster than water due to its composition and physical
properties. Land is a better conductor of heat, allowing it to absorb solar energy more quickly. This rapid heating results in higher temperatures on
land compared to the cooler, slower-heating ocean.
At night, land cools down more quickly than water. The heat
absorbed during the day dissipates rapidly from the land, while
the ocean retains heat longer due to its higher specific heat
capacity. This difference in cooling rates contributes to the
formation of sea breezes, where cooler air from the sea moves
toward the warmer land, creating a refreshing breeze on hot days.
LATENT HEAT
Latent heat refers to the hidden energy absorbed or
released by a substance during a phase change, such as melting or vaporization, without altering its temperature or pressure.
This is crucial in understanding various physical processes, particularly in meteorology and thermodynamics. For instance, when ice melts into water,
it absorbs latent heat, which is essential for the transition from solid to liquid. The significance of latent heat extends to climate science, where it
plays a vital role in energy exchange between the Earth's surface and the
atmosphere. Solar radiation is converted into both sensible and latent
heat, influencing weather patterns and climate dynamics. These
processes help in effective water resource management and predicting
weather phenomena. In engineering, phase change materials (PCMs)
utilize latent heat to enhance energy efficiency in thermal management
systems. By absorbing or releasing heat during phase transitions, PCMs
can help regulate temperatures in various applications, from building
materials to refrigeration systems.
Demonstrating Latent Heat
Latent heat refers to the heat absorbed or released by a substance during
a phase change (e.g., melting, boiling) without a change in temperature. The
demonstration of latent heat typically involves observing these phase changes and
measuring the heat involved.
Objective: To demonstrate latent heat during melting (solid to liquid) and boiling
(liquid to gas) processes and understand the concept of heat absorption without a
temperature change.
Materials: Ice cubes or a block of ice, Water, A thermometer or temperature probe,
A heat source (e.g., Bunsen burner or electric heater), Beaker, and Stopwatch
(optional)
Procedure
1. Demonstrating Latent Heat of Fusion (Melting Ice)
 Place a few ice cubes in a beaker.
 Insert a thermometer into the beaker to measure the temperature.
 Gradually heat the beaker using a heat source.
 Observe the temperature as the ice begins to melt into water.
Observation: The temperature remains constant at 0∘C while the ice melts, despite
continuous heating. The absorbed heat is used to overcome intermolecular forces, not
to raise the temperature.
2. Demonstrating Latent Heat of Vaporization (Boiling Water)
 Fill a beaker with water and measure its temperature.
 Heat the water gradually until it starts boiling.
 Monitor the temperature as the water boils and turns to steam.
Observation: The temperature remains constant at 100∘C during boiling, even
though heat is continuously supplied. The heat absorbed is used to break bonds in the
liquid, converting it into vapor.
Experiment to determine the total heat required to convert ice to steam
This experiment involves measuring the total heat required to convert a given mass of ice at 0 °C into steam at 100 °C. The total heat comprises three parts: the heat to melt the ice into water, the heat to raise the temperature of the water to boiling point, and the heat to convert the water to steam.
Apparatus: Ice (at 0∘C), Beaker, Thermometer, Bunsen burner or electrical heater, Stirring rod, Stopwatch, Measuring scale
(to measure the mass of ice), Heatproof mat, and Data table or recording sheet
Procedure
PART 1: Mass Measurement
a) Weigh a specific mass of ice (e.g., m=200g) using a scale.
b) Measure and record the temperature of ice as 0∘C by storing it in a controlled environment like an ice bath.
PART 2: Melting Ice to Water
c) Place the ice in a beaker.
d) Insert the thermometer into the ice-water mixture to monitor temperature changes.
e) Heat the beaker gently using a Bunsen burner or electrical heater.
f) Stir continuously to ensure uniform heat distribution.
g) Record the time (t) and temperature when all the ice has melted (final temperature remains 0 ∘C).
HINT: Heat supplied by the heater is absorbed by the ice (and the beaker). Qfusion = power × time = Pt
PART 3: Heating Water to Boiling Point
h) Continue heating the water in the beaker.
i) Monitor the temperature using the thermometer.
j) Record the time (t) and the temperature once the water reaches 100 ∘C
HINT: Heat supplied,𝑸𝒉𝒆𝒂𝒕𝒊𝒏𝒈 = 𝒎𝒄∆𝜽
PART 4: Converting Water to Steam
k) Keep heating until the water is completely converted to steam.
l) Measure the time taken (t) for the water to evaporate fully, while maintaining the temperature at 100 ∘C.
HINT: Heat supplied to covert boiling water into steam,𝑸𝒗𝒂𝒑𝒐𝒓𝒊𝒔𝒂𝒕𝒊𝒐𝒏 = 𝒎𝒍𝒗
Heat Calculations
The total heat required can be calculated as Qtotal = Qfusion + Qheating + Qvaporization, where Qfusion = m·lf is the heat required to melt the ice (lf is the latent heat of fusion, about 334 J/g); Qheating = m·c·Δθ is the heat required to raise the temperature of the water (c is the specific heat capacity of water, about 4.18 J/(g·°C), and Δθ = (100 − 0) °C); and Qvaporization = m·lv is the heat required to vaporize the water (lv is the latent heat of vaporization, about 2260 J/g).
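Using the three terms above with the quoted constants and the 200 g of ice suggested in the procedure, here is a short Python check of the totals.

```python
# Total heat to turn ice at 0 °C into steam at 100 °C:
# Q_total = m*l_f + m*c*delta_theta + m*l_v  (constants as quoted in the text)

m = 200.0            # g of ice (the example mass suggested in the procedure)
l_f = 334.0          # J/g, latent heat of fusion
c_water = 4.18       # J/(g·°C), specific heat capacity of water
delta_theta = 100.0  # °C, from 0 °C to 100 °C
l_v = 2260.0         # J/g, latent heat of vaporization

q_fusion = m * l_f
q_heating = m * c_water * delta_theta
q_vaporization = m * l_v
q_total = q_fusion + q_heating + q_vaporization

print(f"Q_fusion       = {q_fusion:,.0f} J")        # 66,800 J
print(f"Q_heating      = {q_heating:,.0f} J")       # 83,600 J
print(f"Q_vaporization = {q_vaporization:,.0f} J")  # 452,000 J
print(f"Q_total        = {q_total:,.0f} J")         # 602,400 J
```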
Precautions
 Use a consistent heat source to ensure even heating.
 Avoid water loss due to splashing.
 Monitor the temperature closely to ensure accurate data collection.
 Minimize heat loss to the environment by using an insulated setup, if possible.
The latent heat of fusion refers to the energy required to change ice at 0°C into liquid water at
the same temperature. During this phase change, the ice absorbs heat without a change in
temperature. This energy is essential for breaking the hydrogen bonds that hold the water molecules
in a solid structure. As heat is applied to the ice, it begins to melt, transitioning into water. This process continues until all the ice has converted to
liquid. The temperature remains constant at 0°C during this phase, despite the continuous absorption of heat. Once all the ice has melted, the resulting
water can then increase in temperature as it continues to absorb heat. This increase occurs at a rate of 1.00 cal/g·°C, leading to further heating until
it reaches its boiling point at 100°C.
The latent heat of vaporization is a crucial process in the transformation of water into steam. When liquid water reaches 100°C, it requires an
additional 540 calories of heat per gram to convert into steam without changing its temperature. This energy is absorbed by the water molecules,
allowing them to overcome the hydrogen bonds that hold them together in the liquid state. During this phase change, the temperature remains constant
at 100°C, even as heat is continuously supplied. This phenomenon illustrates the concept of latent heat, which refers to the hidden energy required
for a substance to change its state without altering its temperature. Once the water molecules gain enough energy, they break free from the liquid
phase and enter the gaseous state as steam. This process is essential in various applications, including cooking, power generation, and even weather
patterns, highlighting the importance of understanding latent heat
in both scientific and practical contexts.
ENERGY IN THE STORM
Hurricanes derive their energy primarily from warm ocean waters.
When surface water temperatures rise, the storm acts like a straw,
drawing in heat energy from the ocean. This process generates
moisture in the air, which is crucial for storm development. As the
warm, moist air rises, it cools and condenses, forming clouds and
thunderstorms. The condensation of water vapor releases latent
heat, further warming the surrounding air. This additional heat
causes more air to rise, creating a cycle that intensifies the storm.
The continuous influx of warm, moist air fuels the hurricane, allowing it to grow stronger and more organized. As the storm progresses, the energy
from the ocean is transformed into powerful winds and heavy rainfall. This dynamic process illustrates the intricate relationship between ocean
temperatures and storm intensity, highlighting the significant impact of climate change on hurricane formation and strength.
Stearic acid changes with temperature
Stearic acid, a saturated fatty acid, exhibits distinct
changes in its physical state with temperature variations.
The melting point of stearic acid is typically around
69.3°C, while its freezing point can vary slightly
depending on purity. During the phase transition from
solid to liquid, the temperature remains constant,
indicating that energy is being absorbed to break
intermolecular forces rather than increasing kinetic
energy. In a cooling curve experiment, stearic acid is
cooled at a constant rate, starting from a liquid state above its freezing point. As it cools, the temperature decreases steadily until it reaches the melting
point, where it flattens out, signifying the phase change. Once fully solidified, the temperature continues to drop, demonstrating the cooling process
of the solid state.
Experiment to investigate how stearic acid changes with temperature
Materials Needed: Stearic acid (solid), Heating apparatus (water bath or hot plate), Thermometer, Beaker, Ice bath (optional, for cooling), Stirring
rod, and Scale (for measuring mass)
Procedure
 Place a known mass of stearic acid in a beaker.
 Set up the heating apparatus to gradually heat the beaker while monitoring the temperature with a thermometer.
 Gradually heat the stearic acid while stirring gently to ensure uniform temperature distribution.
 Record the temperature at regular intervals (e.g., every 1-2 minutes).
 Observe the state of the stearic acid (solid, liquid) at each temperature.
 Continue heating until you notice a change in state from solid to liquid.
 Carefully note the temperature at which the stearic acid begins to melt (melting point).
 Once the stearic acid is fully melted, remove it from the heat and allow it to cool naturally or place it in an ice bath.
 Record the temperature at regular intervals as it cools and observe the state change back to solid, and note the temperature at which the
stearic acid begins to solidify (freezing point).
Findings and Explanation
 Melting Phase: As the temperature increases and reaches the melting point, stearic acid absorbs heat (latent heat of fusion) without a
temperature increase. During this phase change, the energy is used to break the intermolecular forces holding the solid structure, allowing
particles to move more freely in the liquid state.
 Liquid Phase: Once completely melted, the stearic acid exists as a liquid, and further heating raises its temperature towards its boiling point (if heating continues). The liquid absorbs heat, increasing the kinetic energy of its particles.
 Cooling Phase: When cooling, stearic acid loses heat (latent heat of solidification) as it transitions from liquid to solid. During this phase
change, the energy released allows the particles to arrange themselves into a solid structure, which involves the formation of intermolecular
bonds.
 Freezing Phase: The temperature remains constant during the phase change from liquid to solid, as the energy is released rather than
resulting in a temperature decrease.
 The investigation demonstrates that stearic acid undergoes phase changes (melting and freezing) at specific temperatures, during which it
absorbs or releases heat without a change in temperature. This behavior illustrates the concepts of latent heat and the energy dynamics
involved in phase transitions.
SPECIFIC LATENT HEAT
Specific latent heat (L) is the amount of energy required to change the phase of a unit mass of a substance without altering its temperature. This energy
transfer occurs during phase changes, such as melting or boiling, where the temperature remains constant despite the absorption or release of heat.
Expressed mathematically as Q = mL, so that L = Q (J) / m (kg), where Q is the heat (thermal) energy, m is the mass, and L is the specific latent heat. For
instance, the specific latent heat of fusion refers to the energy needed to convert a solid into a liquid at a constant temperature, while the specific
latent heat of vaporization pertains to the energy required to transform a liquid into a gas. These concepts are essential in various applications, including thermal energy storage systems, where materials like molten salts are utilized for efficient heat management. Units: joules per kilogram (J/kg). The
formula to calculate latent heat is: Q=mL, Where: Q = heat energy absorbed or released (in joules, J), m = mass of the substance (in kilograms, kg),
L = specific latent heat of the substance (in J/kg). Let us assume you are melting 0.5 kg of ice at 0°C. The latent heat of fusion for ice is approximately
334,000 J/kg.
Q = mLf = 0.5 kg × 334,000 J/kg = 167,000 J.
This means 167,000 joules of heat are required to melt 0.5 kg of ice without raising its temperature.
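The same worked example as a short Python check; the 0.5 kg mass and the value 334,000 J/kg are the figures quoted above.

```python
# Q = m * L_f for melting 0.5 kg of ice at 0 °C
mass = 0.5                      # kg
latent_heat_fusion = 334000.0   # J/kg
q = mass * latent_heat_fusion
print(f"Heat required = {q:,.0f} J")   # 167,000 J
```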
Specific Latent Heat of Fusion:
The specific latent heat of fusion is the amount of energy required to convert a unit mass of a solid into a liquid at a constant
temperature.
This process occurs without any change in temperature, highlighting the idea of "hidden" energy. For instance, the specific latent heat of fusion
for water (the change from solid to liquid at its melting point, e.g., ice to water) is approximately 334 kJ/kg (334,000 J/kg), meaning that this amount of energy
is needed to melt 1 kg of ice into water. For example, materials like sodium nitrate and nanoparticles are being explored for their thermal performance
in heat storage, leveraging their latent heat properties to enhance efficiency. In practical terms, the concept of latent heat is vital in everyday
phenomena, such as ice melting or the freezing of water, and plays a significant role in climate and weather patterns, influencing how energy is
transferred in the environment. Mathematically, Q = mLf, where Lf = Q (J) / m (kg).
Experimental investigation of the specific latent heat of fusion for ice
Apparatus: Ice, Calorimeter (or insulated container), Water at room temperature, Thermometer, Balance (for mass), and Joulemeter or heating
element
Method:
 Measure the mass of the calorimeter and stirrer, mc; add some warm water and weigh again. The mass of the water, mw, is the difference between the two readings.
 Record the initial temperature of the water in calorimeter 𝜽𝟏
 Add ice at 0 °C into the water and allow it to melt completely while stirring to ensure even distribution of temperature.
 Record the final temperature 𝜽𝟐 of the water after all the ice has
melted.
 Measure the mass of the calorimeter and its contents 𝒎𝟐 to determine
the mass of the melted ice, from 𝒎𝒊𝒄𝒆 = 𝒎𝟐 − (𝒎𝒄 +
𝒎𝒘 ).
Using the formula 𝑸 = 𝒎𝒄𝜟𝜽, to find the energy lost by the water as it
cools.
Use Q= 𝑚𝑙𝑓 ,to calculate the specific latent heat of fusion for ice, where Q is the
energy absorbed by the ice to melt.
From energy conservation, energy lost by the warm water and calorimeter = energy gained by the ice.
Total energy gained by the ice to melt and then warm from 0 °C to the final temperature θ2: E = mice lf + mice cwater θ2.
Energy lost by the calorimeter and the warm water: Esupplied = (mc cc + mw cw)(θ1 − θ2).
Finally, use the equation mice lf + mice cwater θ2 = (mc cc + mw cw)(θ1 − θ2) to find the latent heat of fusion of ice, lf.
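A minimal sketch of the final step, rearranging the energy-balance equation above to make lf the subject; the specific heat capacities are standard textbook values and all masses and temperatures are hypothetical readings.

```python
# Latent heat of fusion of ice from the energy balance:
# m_ice*l_f + m_ice*c_w*theta_2 = (m_c*c_c + m_w*c_w)*(theta_1 - theta_2)
# All readings are hypothetical illustrative values.

c_w = 4200.0    # J/(kg·°C), specific heat capacity of water
c_c = 390.0     # J/(kg·°C), specific heat capacity of copper (calorimeter)
m_c = 0.10      # kg, mass of calorimeter + stirrer
m_w = 0.20      # kg, mass of warm water
m_ice = 0.03    # kg, mass of ice that melted
theta_1 = 40.0  # °C, initial temperature of warm water and calorimeter
theta_2 = 25.0  # °C, final temperature of the mixture

energy_lost = (m_c * c_c + m_w * c_w) * (theta_1 - theta_2)
l_f = (energy_lost - m_ice * c_w * theta_2) / m_ice
print(f"Latent heat of fusion ≈ {l_f:,.0f} J/kg")   # close to the accepted ~334,000 J/kg
```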
Specific Latent Heat of Vaporization
Specific latent heat of vaporization is the amount of heat energy required to convert a unit mass of a substance from its liquid state to vapor without
any change in temperature. This process occurs at a constant temperature, which is crucial for understanding phase changes in thermodynamics. For
instance, when water boils, it absorbs heat energy, allowing it to transition into steam while remaining at 100°C. This latent heat is significant in various applications, including meteorology, cooking, and industrial processes. The specific latent heat of vaporization for water is approximately 2260 kJ/kg (2,260,000 J/kg), meaning that 2260 kilojoules of energy are needed to vaporize one kilogram of water at its boiling point (e.g., water to steam). This high energy requirement explains why boiling water takes time, even when heat is continuously applied.
Experimental investigation of specific latent heat (latent heat of vaporization)
Apparatus
Calorimeter, lagging, beaker, conical flask fitted with stopper and
delivery tube or steam generator, steam trap, retort stand, heat source,
thermometer accurate to 0.1 °C and electronic balance.
Procedure
a) Half fill the conical flask or steam generator with water and fit
with the delivery tube.
b) Heat until steam issues freely.
c) Find the mass of the empty calorimeter with stirrer, 𝑚𝑐
d) Half fill the calorimeter with water cooled to approximately 10 °C below room temperature.
e) Find the mass of the water plus calorimeter, 𝑚0
f) The mass of the cooled water, 𝑚𝑤 = (𝑚0 − 𝑚𝑐 ).
g) Record the temperature of the calorimeter plus water, 𝜃𝑜
h) Allow dry steam to pass into the water in the calorimeter until the temperature has
risen by about 20 °C.
i) Remove the steam delivery tube from the water, taking care not to remove any water from the calorimeter in the process.
j) Record the final temperature of the calorimeter plus water plus condensed steam, 𝜃1 . The fall in temperature of the steam is, ∆𝜃 =
(100 − 𝜃1 )°C
k) The rise in the temperature of the calorimeter plus water, ∆𝑇 = 𝜃1 − 𝜃𝑜
l) Find the mass of the calorimeter plus water plus condensed steam, 𝑚1 . Hence the mass of the condensed steam 𝑚𝑠 = (𝑚1 − 𝑚𝑜 )
Results
Mass of the calorimeter 𝑚𝑐
Mass of the water plus calorimeter 𝑚0
Mass of the cooled water 𝑚𝑤 = (𝑚0 − 𝑚𝑐 )
Temperature of the calorimeter plus water 𝜃𝑜
Final temperature of the calorimeter plus water plus condensed steam 𝜃1
Fall in temperature of the steam ∆𝜃 = (100 − 𝜃1)°C
Rise in the temperature of the calorimeter plus water ∆𝑇 = 𝜃1 − 𝜃𝑜
Mass of the calorimeter plus water plus condensed steam 𝑚1
Mass of the condensed steam 𝑚𝑠 = (𝑚1 − 𝑚𝑜 )
Calculations
Assume heat losses to the surroundings cancel heat gains from the surroundings. Given that the specific heat capacity of water 𝑐𝑤 and the specific
heat capacity of copper 𝑐𝑐 are already known, the specific latent heat of vaporization of water 𝐿𝑉 may be calculated from the following equation:
Energy lost by steam = energy gained by calorimeter + energy gained by the water
Energy lost by steam = ms LV + ms cw(100 − θ1)
Energy gained by calorimeter + energy gained by the water = (mw cw + mc cc)(θ1 − θo)
Therefore ms LV + ms cw(100 − θ1) = (mw cw + mc cc)(θ1 − θo), from which LV can be calculated.
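A minimal sketch of the corresponding calculation for steam, rearranging the balance above to make LV the subject; again the specific heat capacities are standard textbook values and the readings are hypothetical.

```python
# Latent heat of vaporization of water from:
# m_s*L_v + m_s*c_w*(100 - theta_1) = (m_w*c_w + m_c*c_c)*(theta_1 - theta_0)
# All readings are hypothetical illustrative values.

c_w = 4200.0     # J/(kg·°C), specific heat capacity of water
c_c = 390.0      # J/(kg·°C), specific heat capacity of copper (calorimeter)
m_c = 0.10       # kg, calorimeter + stirrer
m_w = 0.20       # kg, cooled water in the calorimeter
m_s = 0.007      # kg, mass of condensed steam
theta_0 = 15.0   # °C, initial temperature of calorimeter + water
theta_1 = 35.0   # °C, final temperature after the steam has condensed

energy_gained = (m_w * c_w + m_c * c_c) * (theta_1 - theta_0)
l_v = (energy_gained - m_s * c_w * (100.0 - theta_1)) / m_s
print(f"Latent heat of vaporization ≈ {l_v:,.0f} J/kg")   # close to the accepted ~2,260,000 J/kg
```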
Effect of latent heat of vaporization of steam
This latent heat refers to the amount of energy required to convert water at its boiling point (100 °C) into steam without changing its temperature.
This energy helps in heat transfer applications, making steam an efficient medium for transporting thermal energy. The high latent heat of vaporization
also explains why steam burns can be particularly dangerous. When steam comes into contact with skin, it releases a substantial amount of energy,
leading to severe burns. This energy transfer occurs as the steam condenses back into water, highlighting the importance of understanding steam
properties in safety protocols.
The latent heat of vaporization of steam is the amount of heat energy required to convert water into steam at a constant temperature and pressure
without changing its temperature. For water, this value is approximately 2260 kJ/kg at 100°C (373 K) under standard atmospheric pressure. This
physical property has profound effects in various scientific, industrial, and natural processes.
Industrial Applications
Steam is widely used in industries for heating and sterilization processes. The large amount of heat released when steam condenses to water (equal to
the latent heat of vaporization) makes it an efficient heat transfer agent. For instance, steam is used in food processing, chemical production, and
power plants to provide uniform and rapid heating. Steam turbines in
thermal power plants utilize the high energy content of steam to generate
electricity. Water is boiled into steam, which expands and turns turbines.
The high latent heat ensures efficient energy transfer in these systems. In
industries like textiles, paper manufacturing, and pharmaceuticals, steam
is employed to provide controlled drying. The heat released during
condensation aids in rapid drying without overheating the materials.
Climatic and Environmental Effects
When water evaporates, it absorbs latent heat from its surroundings,
causing a cooling effect. This principle is why sweating helps regulate body temperature and why evaporation moderates temperatures in lakes, rivers,
and oceans. During the evaporation of water from Earth's surface (oceans, lakes, vegetation), latent heat is absorbed. When the water vapor condenses
into clouds in the upper atmosphere, the heat is released, driving atmospheric convection currents. This mechanism fuels weather systems like
hurricanes and thunderstorms. The latent heat of vaporization is fundamental to the water cycle. Energy absorbed during evaporation and released
during condensation helps maintain the Earth's energy balance.
Boiling and Cooking Processes
The latent heat of steam enables faster cooking compared to boiling water. Steam carries more energy, which transfers efficiently to food. This is why
steamers and pressure cookers are effective in preparing food quickly. Steam burns are more severe than burns from boiling water. This is because
when steam condenses on the skin, it releases a significant amount of latent heat, delivering more energy to the tissue.
Steam Engines and Energy Efficiency
The energy stored in steam's latent heat is used to perform mechanical work. High-pressure steam delivers energy through expansion, making steam
engines historically significant in driving industrialization. Steam systems often incorporate condensers to recover latent heat from steam and improve
energy efficiency. This recovered heat is reused, minimizing energy losses.
Refrigeration and Air Conditioning
Although refrigeration primarily depends on latent heat of vaporization of refrigerants, the principle is comparable to that of steam. In large buildings
and industrial plants, cooling towers use the evaporation of water to dissipate heat. The high latent heat of vaporization ensures that even a small
amount of water can absorb significant amounts of heat.
Energy and Efficiency Considerations
The high energy required to vaporize water makes processes like desalination energy-intensive. However, it also ensures that steam-based processes
deliver substantial heat transfer efficiency.
Technologies like concentrated solar power plants use steam to convert solar energy into electricity, leveraging the latent heat of vaporization for
thermal energy storage and transfer.
Natural Disasters and Steam Explosions
When water is suddenly exposed to high temperatures (e.g., volcanic activity or industrial accidents), it can vaporize rapidly, releasing large amounts
of energy. This can lead to violent steam explosions. In volcanic systems, water heated to its boiling point and beyond contributes to explosive eruptions
due to rapid vaporization and the energy released.
Applications of high specific latent heat of water
When water evaporates from oceans and lakes, it absorbs large amounts of heat, which is released during condensation, forming clouds and
precipitation. The release of latent heat during condensation powers storms and weather systems, influencing local and global weather patterns.
Systems that use the cooling effect of water evaporation (like swamp coolers) take advantage of water’s high latent heat to provide efficient cooling in
dry climates. Many industrial processes use water to absorb heat, leveraging its latent heat properties for effective temperature control. Cooking
methods that rely on boiling or steaming utilize the high latent heat of water, allowing food to cook evenly without exceeding 100 °C (at sea level).
Oceans' role in regulating global temperatures through several mechanisms
Oceans absorb a significant amount of solar energy, storing heat
in their vast bodies of water. This heat is distributed throughout
the ocean currents, helping to moderate temperatures across the
globe. Ocean currents act like conveyor belts, transporting warm
water from the equator to the poles and bringing cold water from
the poles back to the equator. This circulation helps balance
temperature differences and influences climate patterns, such as
the Gulf Stream, which warms parts of North America and Europe.
The oceans produce water vapor through evaporation, which
plays a key role in regulating temperature and weather patterns.
Water vapor is a greenhouse gas, trapping heat in the atmosphere
and contributing to the Earth’s energy balance. Oceanic currents
help redistribute heat around the planet. For example, warm
surface currents transfer heat to the atmosphere, while colder, denser water sinks and moves along the ocean floor, affecting global climate systems.
Oceans absorb a large portion of atmospheric carbon dioxide, which helps mitigate climate change. However, increased CO2 levels can lead to ocean
acidification, affecting marine ecosystems, influencing seasonal weather patterns, such as monsoons, and long-term climate phenomena which affect
global temperatures and weather systems.
Latent heat and change of state
Latent heat refers to the energy absorbed or released by a substance during a
phase change, such as melting or boiling, without altering its temperature.
This hidden energy helps in understanding how materials transition
between solid, liquid, and gas states. The amount of latent heat involved in
these changes depends on the number and strength of molecular bonds within
the substance. For instance, the latent heat of fusion is the energy required to
convert a solid into a liquid at its melting point, while the latent heat of
vaporization is the energy needed to transform a liquid into a gas at its boiling point. These processes are reversible; a liquid can solidify back into a
solid when cooled, releasing the same amount of latent heat.
Kinetic theory and change of state of a substance
The kinetic molecular theory posits that matter is composed of constantly moving, invisible particles. As temperature and pressure change, the kinetic
energy of these particles also fluctuates, leading to changes in state. For instance, when a solid is heated, its particles gain energy, causing them to
vibrate more vigorously until they break free from their fixed positions, resulting in melting. When a gas is cooled, the particles lose energy, slowing
down and coming closer together, which can lead to condensation. The relationship between pressure, temperature, and volume is crucial during
these transitions, as described by the ideal gas law.
Boiling and melting
The kinetic theory of matter explains the behavior of
particles in different states: solids, liquids, and
gases. In solids, particles are closely packed and
vibrate in fixed positions, resulting in a definite
shape and volume. When heat is applied, the
particles gain energy, leading to melting, the
transition from solid to liquid. At the melting point,
the added energy breaks the bonds holding the
particles together, allowing them to move more
freely. Boiling, on the other hand, occurs when a
liquid reaches its boiling point. At this temperature, the particles gain enough energy to overcome the attractive forces between them, transitioning
into the gas phase. This process involves the formation of vapor bubbles
within the liquid, which rise and escape into the air. Both melting and boiling
illustrate how energy input affects particle movement and state changes,
highlighting the dynamic nature of matter as described by the kinetic theory.
Application of latent heat in refrigerators
Latent heat in the functioning of refrigerators works through the process of
phase change. In refrigeration systems, latent heat is the energy absorbed or
released when a refrigerant transitions between liquid and gas states. This
phase change is essential for effectively transferring heat from the interior of
the refrigerator to the outside environment, thereby maintaining a cool
temperature inside. When warm products, such as food items, are placed in a
refrigerator, they transfer heat to the refrigerant. As the refrigerant absorbs
this heat, it evaporates, changing from a liquid to a gas. This process requires latent heat, which is drawn from the surrounding environment,
effectively cooling the interior. The gas is then compressed and condensed back into a liquid, releasing the absorbed heat outside. Integrating latent
heat cold storage systems in refrigerated warehouses can enhance energy efficiency and reduce operational costs, especially during peak demand
periods.
Working of refrigerator
Refrigerators operate on the principle of heat transfer using a refrigerant that circulates through a closed system. The process begins when the
refrigerant, initially in a low-pressure gas form, enters the compressor. It is heated and pressurized, transforming it into a high-pressure gas. This gas
then flows into the condenser coils, where it releases heat to the outside environment and condenses into a liquid. Once in liquid form, the refrigerant
travels to the expansion valve, where it experiences a drop in pressure, causing it to partially evaporate and cool rapidly. This cold refrigerant then enters the evaporator
coils inside the refrigerator, absorbing heat from the interior and lowering the
temperature. The cycle continues as the refrigerant returns to the compressor, ready to
repeat the process. This efficient cycle of evaporation and condensation is what keeps
our food and beverages cool, demonstrating the remarkable science behind everyday
appliances.
VAPOURS
Vapours are defined as substances in the gas phase that exist below their critical
temperature, distinguishing them from gases. A vapor is a gaseous phase of a substance
that typically exists as a liquid or solid at room temperature. It can coexist with its
condensed phases, meaning it can transition back to liquid or solid under certain
conditions, such as changes in pressure or temperature. A gas is defined as a single
substance in the gaseous state at a given temperature and pressure, without any
coexistence with its liquid or solid forms. Gases do not condense into liquids or solids under normal conditions, making them more stable in their
gaseous state. In summary, while both vapors and gases are in the gaseous state, vapors are associated with substances that can transition between
phases, whereas gases are stable, single-phase substances. This characteristic allows vapours to be condensed into liquids under certain conditions.
The process of crystallization from vapour involves gas molecules attaching to a surface and arranging themselves into a crystal structure, which is
crucial in various scientific applications. Vapour pressure refers to the pressure exerted by vapour in equilibrium with its liquid phase. This pressure
is solely influenced by temperature, affecting phenomena such as evaporation rates.

Saturated and Unsaturated Vapours


A saturated vapour is a vapour that is in contact with its own liquid within a confined space, so that the space above the liquid can hold no more
vapour molecules at a given temperature. The pressure exerted by this saturated vapour is called the saturated vapour pressure (s.v.p.) of the liquid.
The vapour is said to be saturated when the number of molecules escaping from the liquid per unit time is equal to the number returning to the liquid
per unit time. The saturated vapour is thus said to be in a state of dynamic equilibrium with its own liquid. Saturated vapour pressure increases with
temperature: as the temperature rises, more molecules escape the liquid, so the air can hold more water vapour. This is crucial in understanding
weather patterns and humidity levels.
An unsaturated vapour exists when the vapour is not in contact with its own liquid in a confined space, meaning the space can still take up more
vapour. It is not in dynamic equilibrium with its own liquid: the rate at which the liquid evaporates is greater than the rate at which the vapour
condenses. The pressure exerted by a vapour which is not in contact with its own liquid in a confined space is called the unsaturated vapour pressure.
An unsaturated vapour can be brought to saturation by decreasing the volume of the space or by lowering the temperature, which is essential in
various applications, including HVAC systems and environmental science. Understanding the


differences between these types of vapours is vital for fields such as meteorology, engineering, and environmental studies, as they influence processes
like condensation, evaporation, and humidity control.

Vapor Pressure: When a liquid evaporates in a closed container, the vapor formed above the liquid exerts a pressure. According to kinetic molecular
theory, the molecules of the vapor are in constant motion and will hence exert a pressure just like the molecules of a gas. This pressure is called the
vapor pressure of the liquid.
Saturated vapour pressure (SVP) is the pressure exerted by a vapour when it is in equilibrium with its liquid at a specific temperature. At this
point, the maximum amount of vapour that can exist above the liquid has been reached. SVP depends on temperature: as temperature increases, the
kinetic energy of the liquid's molecules increases, leading to higher evaporation rates, and hence a higher saturated vapour pressure.

Relation to Boiling and Evaporation


Evaporation is the process where molecules at the surface of a liquid gain enough energy to escape into the vapour phase. Evaporation can occur at
any temperature, as long as the vapour above the liquid is unsaturated. It only affects surface molecules, and it continues until the vapour becomes
saturated, meaning the rate of condensation matches the rate of evaporation. Evaporation is a slower process and does not require the liquid to reach
its boiling point. Boiling occurs when the vapour pressure of the liquid equals the external (atmospheric) pressure. At this point, bubbles of vapour
can form within the liquid and rise to the surface. The boiling point is the temperature at which the saturated vapour pressure of the liquid equals
the atmospheric pressure. Unlike evaporation, which only happens at the surface, boiling occurs throughout the entire liquid because bubbles of
vapour can form inside the liquid once the saturated vapour pressure matches the external pressure.

Boiling and Evaporation


Boiling is the rapid change of a liquid into a gas throughout the entire liquid. According to particle theory, as a liquid is heated, its particles gain
kinetic energy and move faster. At the boiling point, the particles have enough energy to overcome the intermolecular forces holding them together
in the liquid. At this temperature, the liquid's vapour pressure becomes equal to the external (atmospheric) pressure, and bubbles of vapour form
within the liquid and rise to the surface to escape into the gas phase. Boiling happens at a specific temperature (the boiling point), which depends on
the external pressure. Evaporation is the slow change of a liquid into a gas at the surface of the liquid. Unlike boiling, evaporation can occur at any
temperature. The particles at the surface of the liquid with the highest kinetic energy are able to escape the liquid phase and enter the gas phase. This
process happens because some surface particles have enough energy to overcome the intermolecular forces binding them to the liquid, even at lower
temperatures. Evaporation only occurs from the surface and does not require the liquid to reach its boiling point.

Perspiration in Mammals
Mammals, including humans, use perspiration (sweating) to regulate their body
temperature, especially in hot environments or during physical exertion. When
mammals sweat, their bodies release water onto the surface of the skin. The evaporation
of this sweat cools the body. The water molecules in sweat absorb latent heat from the
skin as they transition from liquid to gas. This energy is used to break the bonds between
the water molecules, allowing them to evaporate. As a result, heat is removed from the
body, leading to a cooling effect and helping to maintain a constant internal temperature.
This process is part of thermoregulation, ensuring that mammals stay within a safe
temperature range despite environmental changes.
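
A small illustrative calculation makes this concrete. The sketch below uses Q = mL with an approximate latent heat of vaporisation of water near skin temperature; the heat load is an assumed example value, not a figure from this text.

# Illustrative sketch: mass of sweat that must evaporate to remove a given heat load
L_vap = 2.4e6        # J/kg, approximate latent heat of vaporisation of water near skin temperature
Q = 500e3            # J, example heat load to be removed (assumed value)

m = Q / L_vap        # from Q = m * L
print(f"Sweat evaporated: {m*1000:.0f} g")   # roughly 0.2 kg for this example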


Cooling Effect of Evaporation


The cooling effect of evaporation occurs when a liquid transitions into a gas. This process requires energy, which is drawn from the surrounding
environment, resulting in a decrease in temperature. For instance, when alcohol evaporates quickly, it cools the surface of the skin, demonstrating
how rapid evaporation enhances cooling. Evaporative cooling is not only a natural occurrence but also a principle utilized in various technologies.
Systems designed for evaporative cooling, such as those used in air conditioning, leverage this effect to lower temperatures efficiently. As water
evaporates, it absorbs heat from the remaining liquid, leading to a cooler environment. Innovations, like self-cooling artificial grass, harness this
principle to combat urban heat. By allowing rainwater to evaporate, these surfaces can remain significantly cooler than traditional materials,
showcasing the practical applications of evaporative cooling in managing extreme weather conditions.

How Evaporation Causes Cooling


Evaporation is a natural process that leads to cooling, due to the energy dynamics of liquid molecules. When a liquid, such as water, is exposed to air,
the molecules at the surface gain energy from their surroundings. This energy allows them to overcome intermolecular forces and transition into
vapor. As these high-energy molecules escape, the average kinetic energy of the remaining liquid decreases, resulting in a drop in temperature. This
cooling effect is evident in everyday scenarios, such as when sweat evaporates from our skin, providing a refreshing sensation on a hot day. The
process is not only limited to water; any liquid can exhibit this cooling effect during evaporation. During evaporation, the molecules with the highest
energy (highest velocity) are the ones that escape the liquid's surface. As the more energetic molecules leave the liquid, they take their heat energy
with them. The average kinetic energy (which determines temperature) of the remaining molecules in the liquid decreases, leading to a cooling effect.
This process of removing higher-energy particles reduces the temperature of the remaining liquid. For example, if water evaporates from your skin,
the surface temperature of your skin drops because the water molecules absorb heat from the skin to evaporate.
Why water boils at a temperature less than 100°C at the top of a mountain
The boiling point of water decreases at higher altitudes because of the reduction in atmospheric pressure. At sea level, atmospheric pressure is higher,
and water boils at 100°C because the vapour pressure of the water must match the atmospheric pressure to allow bubbles of vapour to form within
the liquid. At higher altitudes, the atmospheric pressure is lower, so water needs less energy (i.e., lower temperature) to reach the point where its
vapour pressure equals the external pressure. For example, on top of a mountain, where the atmospheric pressure is much lower than at sea level,
water may boil at 90°C or even lower, depending on the height.
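
The sketch below illustrates this idea numerically: a liquid boils at the temperature where its saturated vapour pressure first reaches the external pressure. The saturated vapour pressure values used are approximate figures for water and are included only for illustration.

# Illustrative sketch: boiling point is where the saturated vapour pressure (SVP)
# of water reaches the external pressure. SVP values below are approximate (kPa).
svp = {60: 19.9, 70: 31.2, 80: 47.4, 90: 70.1, 100: 101.3}

def boiling_point(external_pressure_kpa):
    # return the first listed temperature whose SVP reaches the external pressure
    for T in sorted(svp):
        if svp[T] >= external_pressure_kpa:
            return T
    return None

print(boiling_point(101.3))   # sea level          -> about 100 degrees C
print(boiling_point(70.0))    # high mountain top  -> about 90 degrees C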

How a Pressure Cooker Works


A pressure cooker is a specialized kitchen appliance that utilizes steam and high pressure
to cook food more quickly than traditional methods. A pressure cooker is a sealed cooking
pot that increases the pressure inside by trapping steam. This elevated pressure allows the
temperature of boiling water to rise above its normal boiling point of 100°C, speeding up
the cooking process. When a liquid inside the pressure cooker is heated, it boils and turns
into steam. This steam cannot escape, leading to an increase in pressure within the chamber.
As the pressure rises, the boiling point of water also increases, allowing food to cook at
higher temperatures. The pressure might rise to 15 psi (pounds per square inch) above
atmospheric pressure, which increases the boiling point of water to about 121°C (250°F).
At this higher temperature, food cooks faster because the increased kinetic energy of the
water molecules speeds up the chemical reactions involved in cooking. The higher pressure also forces moisture into the food, further accelerating


the cooking process, while helping to retain moisture and flavor in the food. The high-pressure steam penetrates food more effectively, resulting in
tender and flavorful dishes. Pressure cookers can significantly reduce cooking times for various foods, from beans to tough cuts of meat.
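
The 121°C figure quoted above can be checked roughly with the Clausius-Clapeyron relation. The sketch below treats the molar latent heat of vaporisation of water as a constant (an assumed approximation), so the result is only an estimate.

# Illustrative sketch: boiling point of water at pressure-cooker pressure, using
# the Clausius-Clapeyron relation with a constant latent heat (approximation).
import math

R = 8.314            # J/(mol K)
L_molar = 40_660.0   # J/mol, approximate molar latent heat of vaporisation of water (assumed)
T1 = 373.15          # K, boiling point at 1 atm
P_ratio = 2.0        # about 15 psi above atmospheric is roughly 2 atm

inv_T2 = 1.0 / T1 - R * math.log(P_ratio) / L_molar
T2 = 1.0 / inv_T2
print(f"Estimated boiling point: {T2 - 273.15:.0f} degrees C")   # about 121 degrees C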

3.7 STARS AND GALAXIES


Learning Outcomes
a) Know the source of energy in stars and appreciate the importance of
the energy produced by the sun to the people on Earth (k,u)
b) Appreciate that stars vary in colour and brightness(u)
c) Know that stars have life cycles and that the fate of stars (white dwarfs,
neutron stars and black holes) depends on their initial size (k,u)

Introduction
A star is a luminous spheroid of plasma held together by self-gravity. The nearest
star to Earth is the Sun. Many other stars are visible to the naked eye at night;
their immense distances from Earth make them appear as fixed points of light. The most prominent stars have been categorised into constellations
and asterisms, and many of the brightest stars have proper names. Astronomers have assembled star catalogues that identify the known stars and
provide standardized stellar designations. About 4,000 of these stars are visible to the naked eye, all within the Milky Way galaxy. A star's life begins
with the gravitational collapse of a gaseous nebula of material largely comprising hydrogen, helium, and trace heavier elements. Its total mass mainly
determines its evolution and eventual fate. A star shines for most of its active life due to the thermonuclear fusion of hydrogen into helium in its core.
This process releases energy that traverses the star's interior and radiates into outer space. At the end of a star's lifetime as a fusor, its core becomes
a stellar remnant: a white dwarf, a neutron star, or, if it is sufficiently massive, a black hole. A galaxy is a massive system composed of stars, stellar
remnants, interstellar gas, dust, and dark matter, all bound together by gravity. Meanwhile, studies reveal that isolated dwarf galaxies have experienced
an early end to their star-forming activities, highlighting the complex dynamics of gas and star formation. As we continue to explore the cosmos, tools
like the Hubble Space Telescope unveil intricate details of galaxies, allowing us to appreciate the vastness and beauty of the universe.

The stars
A star is a massive, self-luminous celestial body composed primarily of hydrogen and helium gas. These luminous spheroids generate energy
through nuclear fusion in their cores, which produces the light and heat we observe from Earth. The most familiar star is our Sun, the closest star to
our planet, which plays a crucial role in sustaining life. Stars vary
in size, temperature, and brightness, and they can be classified into
different types based on these characteristics. Some stars are visible
to the naked eye, while others require telescopes to be seen. The
life cycle of a star includes stages such as formation, main sequence,
and eventual death, which can result in phenomena like supernovae
or the formation of black holes.
Classification of stars
Stellar classification is a systematic approach to categorizing stars
based on their temperature, color, and luminosity. The primary
classification system divides stars into seven main types: O, B, A, F, G, K, and M. This sequence, from hottest to coolest, is often remembered by the


mnemonic “Oh Be A Fine Guy, Kiss Me.” O and B stars are rare, extremely hot, and luminous, while M stars are the most common and cooler. Each
class is further divided based on luminosity, indicated by Roman numerals. For instance, supergiants are classified as I, bright giants as II, giants as
III, and subgiants as IV. This classification not only helps astronomers understand the physical properties of stars but also their lifecycle and evolution.

The standard classes are (temperatures are in Kelvin):


O – Blue Stars: > 30,000 K
B – Blue-White Stars: 10,000 – 30,000 K
A – White Stars: 7,500 – 10,000 K
F – Yellow-White Stars: 6,000 – 7,500 K
G – Yellow Stars: 5,200 – 6,000 K
K – Orange Stars: 3,700 – 5,200 K
M – Red Stars: 2,400 – 3,700 K
Additional classes:
L – Sub-Red Dwarf Stars: 1,300 – 2,400 K
T – Brown Dwarf Stars: 700 – 1,300 K
Y – Sub-Brown Dwarf Stars: < 700 K

Average stars
An average star, often referred to as an intermediate-mass star, has an initial mass ranging from
0.5 to 8 times that of the Sun. These stars are primarily powered by nuclear fusion occurring in
their cores, which allows them to shine brightly for billions of years. The Sun itself is considered
an average star, a classification that highlights its typical characteristics compared to the vast
diversity of stars in the universe. Stars are born in stellar nurseries, such as the Orion Nebula,
where clouds of gas and dust collapse under gravity to form new stars. Throughout their life cycle,
average stars spend the majority of their time in the main sequence phase, where they fuse
hydrogen into helium. Eventually, they will exhaust their nuclear fuel and undergo significant
transformations, leading to their eventual demise.

Massive stars
Massive stars, defined as those with a mass at least eight times
that of the Sun, play a crucial role in the universe. They form from the densest regions
of gas and dust clouds, undergoing rapid fusion processes that create heavier elements.
Unlike smaller stars, massive stars burn hotter and faster, leading to a relatively short
lifespan. The life cycle of a massive star culminates in a spectacular event known as a
supernova. As these stars exhaust their nuclear fuel, they undergo gravitational collapse,
resulting in an explosive release of energy that can outshine entire galaxies. This process
not only disperses heavy elements into space but also contributes to the formation of new
stars and planetary systems. Despite their rarity, massive stars are significant sources of
light and energy in the cosmos. Their existence influences the evolution of galaxies and
the chemical composition of the universe, making them essential to our understanding of
stellar life cycles.


Sources of energy in stars


Stars, including our Sun, derive their energy primarily from nuclear fusion, a process where hydrogen nuclei combine to form helium. This reaction
occurs in the core of stars, where extreme temperatures and pressures facilitate the fusion of light elements. The most common fusion process in stars
is the proton-proton chain, which converts hydrogen into helium while releasing vast amounts of energy in the form of light and heat. In addition to
the proton-proton chain, stars also utilize the CNO cycle and the triple alpha process for energy production. The CNO cycle involves carbon, nitrogen,
and oxygen as catalysts in the fusion of hydrogen into helium, while the triple alpha process fuses helium nuclei into carbon. These nuclear reactions
are fundamental to the life cycle of stars and contribute to the synthesis of heavier elements.
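
The energy released by each fusion reaction can be estimated from the mass defect using E = Δmc². The sketch below uses standard approximate atomic masses and is included purely as an illustration.

# Illustrative sketch: energy released when four hydrogen nuclei fuse into one helium
# nucleus, estimated from the mass defect with E = delta_m * c**2.
u_to_kg = 1.6605e-27         # kg per atomic mass unit
c = 3.0e8                    # m/s, speed of light

mass_4H = 4 * 1.00783        # u, four hydrogen-1 atoms (approximate)
mass_He = 4.00260            # u, one helium-4 atom (approximate)
delta_m = (mass_4H - mass_He) * u_to_kg

E = delta_m * c**2                           # joules per reaction
print(f"Energy released: {E:.2e} J")         # about 4.3e-12 J (roughly 27 MeV)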

The sun as source of energy


The sun is the ultimate source of energy for Earth, driving essential processes that sustain life.
Through nuclear fusion, hydrogen nuclei combine to form helium, releasing vast amounts of energy, which
radiates as sunlight. This solar radiation is crucial for the Earth’s climate system, influencing
weather patterns and supporting photosynthesis in plants, which forms the foundation of our
food chain. Harnessing solar energy has become increasingly important as we seek sustainable
alternatives to fossil fuels. Technologies such as solar panels convert sunlight into electricity,
providing a clean and renewable energy source. Innovations in solar energy, including the
potential to produce hydrogen fuel through water splitting, promise to revolutionize our
energy landscape. As we face the challenges of climate change and dwindling resources, the
sun remains a powerful ally.

Importance of solar energy


One of the primary advantages of solar energy is its environmental benefit: solar power significantly
reduces carbon emissions, helping combat climate change. As a renewable energy
source, it is inexhaustible and can be harnessed anywhere sunlight is available,
making it a versatile option for energy generation. Solar energy is a reliable and cost-
effective alternative to traditional fossil fuels. By investing in solar systems,
homeowners can reduce their electricity bills and enjoy long-term savings. This energy
independence not only fosters economic stability but also enhances national security
by reducing reliance on imported fuels. In addition to its economic and environmental
benefits, solar energy promotes energy innovation. As technology advances, solar
solutions become more efficient and accessible, paving the way for a cleaner, more sustainable future.

The size of the sun


The Sun is an immense celestial body, measuring approximately 864,400 miles (1,391,000 kilometers) in diameter. This size makes it about 109 times
wider than Earth, showcasing its colossal scale. In terms of mass, the Sun is about 333,000 times as massive as our planet, highlighting its dominant
presence in the solar system. With a mean radius of about 432,450 miles (696,000 kilometers), the Sun's vastness is difficult to comprehend. It contains
99.86% of the total mass of the solar system, making it the central gravitational anchor for all planets, including Earth. The Sun's surface temperature
reaches about 10,000 degrees Fahrenheit (5,500 degrees Celsius), contributing to its brightness and energy output.


Size of the sun relative to other stars


The Sun, with a diameter of approximately 864,000 miles (1.39 million kilometers), is often
perceived as a massive celestial body. However, when compared to other stars in the Milky
Way galaxy, it is considered average-sized. For instance, the largest known star, UY Scuti,
is about 1,700 times larger than the Sun, showcasing the vast range of star sizes in the
universe. In the grand scheme of stellar sizes, the Sun is classified as a G-type main-sequence
star, or G dwarf. While it plays a crucial role in our solar system, providing light and heat
to Earth, it pales in comparison to giants like Stephenson 2-18, which boasts a diameter of
2,150 solar radii. This stark contrast highlights the diversity of stars, from small red dwarfs
to colossal supergiants. The reason other stars appear smaller than the Sun to an observer
on Earth primarily boils down to distance. The Sun is approximately 93 million miles away,
while even the closest star, Proxima Centauri, is over 4 light-years away. This immense
distance means that, despite their actual sizes, stars appear as mere points of light in the night sky. Additionally, the Sun is a medium-sized main
sequence star, situated in the upper range of stellar sizes. In fact, about 80% of stars in the universe are smaller than the Sun. While some stars are
indeed larger, their vast distances from Earth make them appear much smaller than they truly are. Moreover, atmospheric conditions can cause stars
to twinkle, further diminishing their apparent size. This twinkling effect can obscure the differences in brightness and size among stars, making them
all seem relatively uniform to the naked eye.

Variation in color and brightness of the stars


Stars exhibit a fascinating variation in color and brightness, which are key indicators of their temperature and intrinsic properties. The hottest stars,
with surface temperatures reaching up to 40,000°C, appear blue or blue-white, while the coolest stars, around 3,000°C, shine in shades of red. This
color spectrum not only reflects the star's temperature but also provides insights into its life cycle and evolutionary stage. Brightness in stars is
influenced by both luminosity and distance from Earth. A star's luminosity is its inherent brightness, while apparent brightness is how bright it
appears from our perspective. For instance, a faint star that is relatively close can outshine a more luminous star that is much farther away. This
complexity in brightness helps astronomers understand the vast distances and varying characteristics of celestial bodies.

The brightness and color of stars as observed from Earth are influenced by several key factors. Primarily, a star's luminosity, which is the total energy
it emits, plays a crucial role. A more luminous star will appear brighter
than a less luminous one, regardless of distance. However, the distance from
the observer also significantly affects apparent brightness; as light travels
through space, it diminishes in intensity, making distant stars appear
fainter. In addition to luminosity and distance, a star's color is determined
by its surface temperature. Hotter stars emit blue light, while cooler stars
emit red light. This relationship is governed by Wien's Law, which states
that the peak wavelength of radiation emitted by a black body is inversely
proportional to its temperature. Thus, the color observed can provide
insights into a star's temperature and, consequently, its size and age.
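
Wien's Law can be written as λ_max = b/T, where b ≈ 2.898 × 10^-3 m K. The short sketch below, with assumed example temperatures, shows why hot stars peak at blue wavelengths and cool stars at red ones.

# Illustrative sketch: Wien's displacement law, lambda_max = b / T
b = 2.898e-3                         # m*K, Wien's displacement constant

for name, T in (("hot blue star", 10_000), ("the Sun", 5_800), ("cool red star", 3_000)):
    lam_nm = b / T * 1e9             # peak wavelength in nanometres
    print(f"{name}: T = {T} K -> peak wavelength about {lam_nm:.0f} nm")
# roughly 290 nm (ultraviolet/blue), 500 nm (green-white), 970 nm (infrared/red)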


Why do the stars appear to twinkle?


Stars appear to twinkle due to the effects of Earth's atmosphere on the light
they emit. As starlight travels through the atmosphere, it encounters varying
temperatures and densities of air, which bend and distort the light. This
phenomenon, known as refraction, causes the light to shift in color and
intensity, creating the twinkling effect we observe from the ground. Planets
do not twinkle in the same way as stars. This is because planets are closer to
Earth and appear as small disks rather than point sources of light. The light
from planets travels through less atmosphere, resulting in less distortion and
a steadier appearance. The twinkling of stars is more pronounced when they are near the horizon, where their light passes through a greater thickness
of atmosphere. Thus, the shimmering beauty of stars is a captivating reminder of the complex interactions between celestial bodies and our planet's
atmosphere.

The Lifecycle of a star


The life cycle of a star is a fascinating journey that begins in a nebula, a
vast cloud of gas and dust. Within these stellar nurseries, gravity pulls
matter together, forming a protostar. As the protostar accumulates mass, it
ignites nuclear fusion in its core, entering the main sequence phase, where
it spends the majority of its life. The duration of this phase varies
significantly based on the star's mass; larger stars burn through their fuel
quickly, while smaller stars, like red dwarfs, can last for billions of years.
As stars exhaust their nuclear fuel, they undergo dramatic transformations.
Massive stars swell into red supergiants before their cores collapse,
resulting in a supernova explosion. This cataclysmic event disperses elements into space, contributing to the formation of new stars and planets. In
contrast, smaller stars end their lives as white dwarfs, slowly cooling over time.

Lifecycle of small and average stars


The lifecycle of small and average stars, such as our Sun, follows a fascinating seven-stage process. It begins in a stellar nursery, where gas and dust
coalesce to form a protostar. As the protostar accumulates mass, it heats up and eventually ignites nuclear fusion in its core, entering the main
sequence phase. This stage can last billions of years, during which the star steadily fuses hydrogen into helium. As the hydrogen supply diminishes,
the star transitions into the red giant phase. Here, the core contracts and heats up, causing the outer layers to expand. For small stars, this is followed
by the planetary nebula phase, where the outer layers are expelled, leaving behind a hot core. This remnant cools and becomes a white dwarf,
eventually fading into a black dwarf over billions of years. In contrast, larger stars may end their lives in spectacular supernova explosions, but small
and average stars conclude their lifecycle more quietly, contributing to the cosmic recycling of elements.


Lifecycle of massive stars


The lifecycle of massive stars begins in nebulae, where large clouds of hydrogen condense
under gravity. These stars burn hotter and faster than their low-mass counterparts, rapidly
undergoing nuclear fusion. As they evolve, they enter the Main Sequence phase, where they
spend a significant portion of their lives. Once a massive star exhausts its hydrogen fuel, it
undergoes a series of fusion processes, creating heavier elements.
This evolution diverges from that of low-mass stars after carbon fusion. Instead of stabilizing,
massive stars swell into red giants, eventually leading to the formation of iron in their cores.
The end of a massive star's life is marked by a spectacular supernova explosion, which occurs
when the core collapses under gravity. This event not only disperses elements into space,
enriching the interstellar medium, but also leaves behind remnants such as neutron stars or
black holes, continuing the cycle of stellar evolution.

Supernova
A supernova is one of the most spectacular events in the universe, marking the explosive death of a star. This phenomenon occurs when a star exhausts
its nuclear fuel, leading to a dramatic increase in brightness and energy output. Supernovae can outshine entire galaxies for a brief period, releasing
an immense amount of light and energy, making them the brightest explosions observed in the cosmos. There are two primary types of supernovae:
Type I and Type II. Type I supernovae occur in binary star systems, where one star siphons material from its companion until it reaches a critical mass
and explodes. In contrast, Type II supernovae result from the collapse of massive stars that have depleted their nuclear fuel. Both types scatter essential
elements like hydrogen, helium, and carbon into space, contributing to the
formation of new stars and planets. The remnants of a supernova often form a
nebula, a vast cloud of gas and dust that can give rise to new celestial bodies.
Brightness of supernova
Supernovae are among the most luminous events in the universe, capable of
outshining entire galaxies. Their peak optical luminosity can reach between
5 × 10^43 and 2 × 10^46 erg s^-1, making them incredibly bright celestial
phenomena. For instance, a supernova occurring 100 light-years away would
appear about 100 times brighter than a full moon, showcasing their immense
brightness. The brightness of a supernova is not just a fleeting moment; it can
last for weeks or even months before gradually fading. This dramatic increase in luminosity is often accompanied by a rapid change in brightness,
with some supernovae exhibiting a change of up to three magnitudes within just 15 days. Such rapid fluctuations highlight the dynamic nature of
these stellar explosions. In summary, supernovae are extraordinary cosmic events that illuminate the night sky, providing valuable insights into
stellar evolution and the universe's expansion. Their brightness serves as a reminder of the powerful forces at play in the cosmos.
Black hole and neutron star
Black holes and neutron stars are two of the universe's most enigmatic entities, each representing extreme states of matter and gravity. A neutron
star, formed from the remnants of a supernova, is incredibly dense, with a sugar-cube-sized amount of its material weighing as much as a mountain.
In contrast, black holes are regions in space where gravity is so intense that not even light can escape, resulting from the collapse of massive stars.
Astronomers have long been intrigued by the mass gap between neutron stars and black holes. The heaviest known neutron star teeters on the brink


of becoming a black hole, highlighting the delicate balance of gravitational forces at play. Observations of neutron star mergers have provided
insights into this transition, revealing that such collisions can lead to the formation of black holes.

Formation of black holes
Black holes are fascinating cosmic entities formed primarily through the
death of massive stars. When these stars exhaust their nuclear fuel, they
undergo a dramatic transformation. The core collapses under its own
gravity, leading to a supernova explosion that ejects the outer layers of
the star. What remains is an incredibly dense core, which can become a black hole if its mass exceeds a certain threshold. There are different types of
black holes, with stellar black holes being the most common. These form when a dying star leaves behind a core of at least about three times the mass of our Sun. In contrast,
supermassive black holes, found at the centers of galaxies, may have formed through more complex processes, possibly involving the merging of
smaller black holes or the influence of dark matter. The study of black holes not only enhances our understanding of the universe but also challenges
our grasp of physics, particularly regarding gravity and spacetime.

Galaxies
A galaxy is a massive system composed of stars, stellar remnants, interstellar
gas, dust, and dark matter, all bound together by gravity. The term "galaxy"
originates from the Greek word for "milky," reflecting our own Milky Way
galaxy. Galaxies can vary significantly in size and structure, containing
anywhere from millions to trillions of stars, and they often exist in vast cosmic
clusters. Galaxies are categorized into several types, including spiral,
elliptical, and irregular. Spiral galaxies, like the Milky Way, feature distinct
arms that wind outward from the center, while elliptical galaxies appear more
rounded and lack the defined structure of spirals. Irregular galaxies do not fit neatly into these categories and often exhibit chaotic shapes.

The Milky Way


The Milky Way Galaxy, a barred spiral galaxy, is home to our Solar System and
contains approximately 100 billion stars, including our Sun. Spanning about
100,000 light-years in diameter, it features a distinct hazy band of light visible
from Earth, which is composed of countless stars and cosmic dust. This galaxy
is estimated to be around 13.6 billion years old, making it a significant part of
our cosmic history. At the center of the Milky Way lies a supermassive black
hole, surrounded by a bulge of older stars. The galaxy's structure includes large
spiral arms that extend outward, hosting various celestial phenomena such as
nebulae and star clusters. The Milky Way is accompanied by several satellite
galaxies, with the Large and Small Magellanic Clouds being the most notable.


3.8 SATELLITES AND COMMUNICATION


Learning Outcomes
a) Understand what artificial satellites are and how we make use of them in
research and in everyday life (u,s)
b) Appreciate the importance of space exploration (u,v/a)

Introduction
Satellites play a crucial role in modern communication, serving as relay stations in
space that facilitate the transfer of voice, video, and data across vast distances. By
orbiting the Earth, these artificial satellites enable seamless connectivity for various
applications, including television broadcasting, internet services, and emergency
communications. The technology behind satellite communication involves
transmitting signals from ground stations to satellites, which then relay the information back to other locations on Earth. This system allows for global
coverage, making it indispensable for remote areas where traditional communication infrastructure may be lacking. Innovations, such as the HMD Off
Grid, a compact satellite communication device for smartphones, highlight the ongoing advancements in this field. As satellite technology continues
to evolve, it promises to enhance connectivity and accessibility, further integrating into our daily lives and supporting critical services worldwide.

Satellites
A satellite is defined as an object that orbits a larger celestial body. This can include natural satellites, like the Moon, which orbits Earth, and artificial
satellites, which are human-made objects designed to orbit planets or other celestial bodies. According to NASA, satellites maintain their orbits through
a balance between their speed and the gravitational pull of the body they orbit. Artificial satellites serve various purposes, including communication,
weather monitoring, and scientific research. They are crucial for modern technology, enabling global communication networks and providing data for
climate studies. The term "satellite" can also refer to any object that orbits another, such as Earth itself, which is a satellite of the Sun. These artificial
satellites are launched into space and can orbit Earth or other planets. As of May 2024, there are approximately 9,900 active satellites in various orbits,
highlighting their significance in modern technology and science. Satellites can be categorized into different types, including weather satellites, which
are essential for forecasting and monitoring climate patterns. There are two main types: polar orbiting and geostationary satellites, each with unique
characteristics that allow them to gather data effectively. Organizations like NOAA utilize satellite data to provide timely environmental information.
Recent advancements in satellite technology include the launch of mega constellations, such as China's Thousand Sails and SpaceX's Starlink, aimed
at enhancing global internet connectivity and monitoring capabilities.

Artificial satellite
Artificial satellites are human-made objects launched into orbit around celestial bodies,
primarily Earth. Since the launch of Sputnik 1 by the Soviet Union in 1957, which was
the first artificial satellite, the number of active satellites has surged to over 3,000.
These satellites serve various purposes, including communication, weather monitoring,
navigation, and scientific research. Satellites can be classified into two categories:
natural and artificial. Recent advancements in satellite technology include the
development of Gen-3 satellites by BlackSky Technology, set for launch in February
2025, and innovative projects like Proba-3, which aims to create artificial solar eclipses for solar studies. As technology evolves, artificial satellites


will continue to play a vital role in understanding our planet and beyond. They are categorized into several types based on their functions. Navigation
satellites, for instance, assist in determining precise locations on Earth, essential for GPS systems. Communication satellites relay and amplify signals,
enabling global telecommunications, including television and internet services. Weather satellites monitor atmospheric conditions, providing vital
data for forecasting and climate research.
Earth observation satellites capture images and data about the planet's surface, aiding in environmental monitoring and disaster management. Lastly,
astronomical satellites are designed to observe celestial phenomena, contributing to our understanding of the universe. These satellites operate in
various orbits, including geostationary and polar orbits, each serving specific purposes.

Natural satellite
Natural satellites, commonly known as moons, are celestial bodies that orbit larger astronomical
entities, such as planets. These moons vary significantly in size, shape, and composition, with
some, like Saturn's Titan, being larger than the planet Mercury. Currently, Saturn boasts the
highest number of moons, with 146 confirmed natural satellites, showcasing the diversity of
these celestial companions. Most natural satellites are solid bodies and typically lack substantial
atmospheres. They play crucial roles in their respective planetary systems, influencing
phenomena such as tides and geological activity.
For instance, Earth's Moon not only stabilizes our planet's axial tilt but also regulates ocean tides, making Earth more habitable. In total, over 300
natural satellites orbit six planets and seven dwarf planets in our solar system. Understanding these celestial bodies enhances our knowledge of
planetary formation and the dynamics of our solar system.

Sizes and altitude of satellites


Artificial satellites vary significantly in size and altitude,
depending on their purpose and orbit. Low-Earth orbit (LEO)
satellites, such as the International Space Station (ISS), typically
orbit at altitudes ranging from 200 to 2,000 kilometers
(approximately 124 to 1,243 miles). The ISS, for instance,
maintains an altitude of about 400 kilometers (248 miles) and
travels at speeds of around 28,000 kilometers per hour (17,500
miles per hour). Medium Earth orbit (MEO) satellites, which include
navigation systems like GPS, operate at altitudes between 2,000
kilometers (1,200 miles) and just below geosynchronous orbit at
35,786 kilometers (22,236 miles). In contrast, geostationary
satellites remain fixed relative to the Earth's surface at
approximately 35,786 kilometers, providing consistent coverage over specific areas.
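
These altitudes determine a satellite's speed and orbital period through v = √(GM/r) and T = 2π√(r³/GM). The sketch below, using standard values for the Earth, roughly reproduces the ISS and geostationary figures quoted above; it is an illustration rather than part of the original text.

# Illustrative sketch: orbital speed and period from a satellite's altitude
import math

GM = 3.986e14                # m^3/s^2, Earth's gravitational parameter
R_earth = 6.371e6            # m, mean radius of the Earth

def orbit(altitude_km):
    r = R_earth + altitude_km * 1e3
    v = math.sqrt(GM / r)                    # orbital speed, m/s
    T = 2 * math.pi * math.sqrt(r**3 / GM)   # orbital period, s
    return v * 3.6, T / 60                   # km/h and minutes

for name, h in (("ISS (LEO)", 400), ("geostationary", 35_786)):
    v_kmh, period_min = orbit(h)
    print(f"{name}: speed about {v_kmh:,.0f} km/h, period about {period_min:,.0f} minutes")
# ISS: roughly 27,600 km/h and 92 minutes; geostationary: about 1,436 minutes (one sidereal day)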


Satellite and orbits


Satellites play vital roles in modern technology, orbiting Earth in various
paths dictated by gravity. These orbits are classified primarily into
three categories: low Earth orbit (LEO), medium Earth orbit (MEO), and
geosynchronous orbit (GEO). Each type serves distinct functions, from
communication and weather monitoring to navigation and Earth
observation. Low Earth orbit, typically ranging from 160 to 2,000
kilometers above the Earth, is favored for satellites that require close
proximity for high-resolution imaging, such as those used in
environmental monitoring. In contrast, geosynchronous satellites,
positioned approximately 36,000 kilometers above the Earth, maintain
a fixed position relative to the planet, making them ideal for telecommunications. The European Space Agency (ESA) and NASA are at the forefront of
satellite technology, continuously launching new satellites to enhance our understanding of Earth and beyond. As satellite technology evolves, so too
does our ability to monitor and respond to global challenges.
Types of Satellites by Orbit
Low Earth Orbit (LEO): Low Earth Orbit (LEO) satellites are positioned at altitudes
ranging from 200 to 2,000 kilometers above Earth, making them crucial for various
applications. These satellites complete an orbit approximately every 90 to 128 minutes,
allowing them to provide real-time data and communication services. The European
Space Agency (ESA) emphasizes the importance of LEO for scientific research, Earth
observation, and telecommunications. The LEO economy is rapidly evolving, driven by
advancements in technology and increasing demand for satellite services. This
economic sector encompasses the production, distribution, and trade of goods and
services in LEO, attracting significant investment from various industries. As more
companies launch LEO satellites, the potential for innovation and growth in this area
continues to expand. Investors are increasingly recognizing the opportunities within
the LEO satellite market, as these satellites enhance broadband connectivity and support military reconnaissance. The future of LEO satellites promises
to reshape industries and improve global communication networks.

Medium Earth Orbit (MEO): Altitude: roughly 2,000–35,786 km (navigation constellations such as GPS orbit near 20,000 km). Applications: Navigation
systems (e.g., GPS), enabling global positioning and low-latency data communication.
Balances coverage area and data transmission rates, requiring fewer satellites than LEO
systems. Limitation: Weaker signals and longer transmission delays compared to LEO.
Medium Earth Orbit (MEO) satellites operate at altitudes ranging from 2,000 to 36,000
kilometers (1,243 to 22,300 miles) above Earth. These satellites have orbital periods of
less than 24 hours, with the lowest MEO altitude allowing for a minimum period of
about 2 hours. This unique positioning enables MEO satellites to provide a balance
between coverage and latency, making them ideal for various applications, including
Global Navigation Satellite Systems (GNSS) and Internet of Things (IoT) services. MEO


satellites are particularly significant for delivering high-speed connectivity in remote


areas where traditional fiber optics are impractical. They utilize Ku-band and Ka-band
frequencies to ensure robust data transmission. Recent advancements have seen countries
like China launching their first MEO broadband satellites, highlighting the growing
interest and investment in this orbital range. As the demand for satellite-based services
continues to rise, MEO satellites are poised to play a crucial role in enhancing global
communication and navigation capabilities.
Geostationary Orbit (GEO): Geostationary orbit (GEO) is a unique orbital position
approximately 22,300 miles (35,800 kilometers) above the Earth's equator. In this orbit,
satellites maintain a fixed position relative to the Earth's surface, appearing stationary to
observers on the ground. This characteristic makes GEO ideal for telecommunications,
weather monitoring, and broadcasting, as a single satellite can cover about one-third of
the Earth. Satellites in GEO are launched into a temporary orbit before being maneuvered into their designated "slot." The orbital period of these
satellites matches the Earth's rotation, allowing them to maintain a constant view of the same geographic area. This stability is crucial for applications
requiring continuous data transmission and monitoring. As technology advances, the role of GEO satellites continues to expand, enhancing global
communication and observation capabilities.
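
The geostationary altitude itself can be recovered by requiring the orbital period to equal one sidereal day. The sketch below uses standard constants and is included only as an illustration.

# Illustrative sketch: deriving the geostationary altitude from a one-sidereal-day
# period, using r = (GM * T^2 / (4 * pi^2)) ** (1/3)
import math

GM = 3.986e14                # m^3/s^2, Earth's gravitational parameter
T = 86_164.0                 # s, one sidereal day
R_earth_eq = 6.378e6         # m, equatorial radius of the Earth

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_earth_eq) / 1e3
print(f"Geostationary altitude: about {altitude_km:,.0f} km")   # close to 35,786 km
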
Sun-Synchronous Orbit (SSO): Sun-synchronous orbit (SSO), also known as Helio synchronous orbit, is a specialized polar orbit that allows
satellites to maintain a consistent angle with respect to the Sun. This unique characteristic
enables satellites to pass over the same point on Earth at the same local solar time, making
SSO ideal for Earth observation and reconnaissance missions. Typically, SSO satellites
operate at altitudes between 600 and 800 kilometers, with a near-polar inclination of roughly 98
degrees. The precession of the satellite's orbital plane matches the Earth's revolution
around the Sun, ensuring that the satellite's observations are consistent throughout the
year. This is particularly beneficial for applications such as weather monitoring,
agricultural assessments, and environmental studies, where consistent lighting
conditions are crucial. Recent advancements in SSO technology have led to successful
launches, such as the Sentinel-1C satellite for the European Union's Copernicus program,
further enhancing our ability to monitor and understand our planet.
Applications: Cyclone forecasting, wildfire monitoring, deforestation tracking, and change
detection. Limitation: Smaller regional coverage, requiring multiple satellites for
continuous monitoring.
Geostationary Transfer Orbit (GTO): A transitional orbit used to transfer satellites to GEO or higher altitudes. Minimizes resource use during the
transition phase. This orbit allows satellites to transition from a lower orbit to a geostationary orbit, where they can maintain a fixed position relative
to the Earth's surface. In GTO, a satellite is launched into an elliptical orbit, reaching its highest point, or apogee, before firing its engines to circularize
its orbit at geostationary altitude. Satellites in geostationary orbit (GEO) orbit the Earth at approximately 35,786 kilometers above the equator,
synchronizing their rotation with the planet. This unique positioning enables them to provide consistent coverage over specific areas, making them
ideal for telecommunications, weather monitoring, and broadcasting services. Satellites such as NOAA's GOES-U demonstrate the importance
of GTO in modern space operations. As satellite launches continue to evolve, GTO remains a vital step in ensuring effective satellite deployment and
functionality.


Specialized Satellite Types


Earth Observation Satellites:
Purpose: Monitor environmental changes, support emergency response, and provide
weather and GIS data. Categories: Weather satellites (e.g., GEO), remote sensing
satellites (e.g., LEO, GEO), and GIS satellites for mapping. Example: EOS SAT-1,
designed for precision agriculture and forestry monitoring.
Navigation Satellites: Functions: Provide geolocation and positioning data.
Types: Global Navigation Satellite System (GNSS): Offers worldwide coverage (e.g.,
GPS, Galileo, BeiDou). Regional Navigation Satellite System (RNSS): Covers specific
regions (e.g., India’s IRNSS).
Astronomical Satellites: Function as space telescopes, free from atmospheric
distortions.
Applications: Study celestial bodies, map stars, monitor black holes, and conduct climate research. Examples: Climate research satellites for ocean and
atmospheric studies, and biosatellites for biological experiments.

Importance of artificial satellites


Satellites serve various essential functions; they facilitate in-flight phone communications on airplanes and provide voice communication in rural areas
where traditional networks may be lacking. Satellites are crucial for weather forecasting, offering accurate and timely information about climate
patterns and natural disasters. Earth observation satellites monitor environmental changes, such as wildfires, volcanic eruptions, and the health of
ecosystems. This data is invaluable for scientific research and helps combat climate change by mapping plant life and tracking geological changes.
Military organizations also utilize satellite imagery for intelligence purposes, showcasing the diverse applications of satellite technology. Moreover,
satellites support navigation systems like GPS, enabling precise location tracking for transportation and logistics. Overall, the multifaceted uses of
satellites underscore their importance in communication, environmental
monitoring, and scientific advancement, making them indispensable tools
in our interconnected world.

Satellites in Navigation
Satellite navigation systems, commonly known as satnav, utilize a
network of satellites to provide precise positioning and timing
information globally. These systems, classified as Global Navigation
Satellite Systems (GNSS), enable users to determine their location
anywhere on Earth by receiving signals from multiple satellites. The most
well-known GNSS is the Global Positioning System (GPS), which consists
of a constellation of satellites that continuously broadcast navigation
signals. The operation of satellite navigation relies on the principle of
trilateration, where the position of a receiver is calculated from the time it takes for signals from several satellites to reach it. This technology is crucial
for various applications, including aviation, maritime navigation, and personal navigation devices.


Global Positioning System(GPS)


The Global Positioning System (GPS) is a vital US-owned
utility that offers positioning, navigation, and timing (PNT)
services globally. This satellite-based navigation system,
originally known as Navstar GPS, consists of a constellation
of at least 24 operational satellites orbiting the Earth. These
satellites work in conjunction with ground monitoring
stations and receivers to provide users with precise location
data. GPS technology operates by triangulating signals from
multiple satellites, allowing users to determine their exact
position anywhere on the planet. This capability has
revolutionized various sectors, including transportation,
agriculture, and emergency services, enhancing efficiency
and safety. The GPS market is projected to grow significantly, with estimates reaching USD 472.16 billion by 2033. GPS continues to evolve, integrating
with other systems like Europe’s Galileo, ensuring users benefit from highly accurate and reliable navigation services worldwide.
GPS, or the Global Positioning System, is a global navigation satellite system that provides location, velocity and time synchronization. GPS is
everywhere. You can find GPS systems in your car, your smartphone and your watch. GPS helps you get where you are going, from point A to point B.
The Global Positioning System (GPS) is a navigation system using satellites, a receiver and algorithms to synchronize location, velocity and time data
for air, sea and land travel. The satellite system consists of a constellation of 24 satellites in six Earth-centered orbital planes, each with four satellites,
orbiting at 13,000 miles (20,000 km) above Earth and traveling at a speed of 8,700 mph (14,000 km/h). While we only need three satellites to produce
a location on earth’s surface, a fourth satellite is often used to validate the information from the other three. The fourth satellite also moves us into
the third-dimension and allows us to calculate the altitude of a device.

What are the three elements of GPS?


GPS is made up of three different components, called segments, that work together to provide location information. The three segments of GPS are:
Space (satellites): The satellites circling the Earth, transmitting signals to users on geographical position and time of day.
Ground control: The Control Segment is made up of Earth-based monitor stations, master control stations and ground antenna. Control activities
include tracking and operating the satellites in space and monitoring transmissions. There are monitoring stations on almost every continent in the
world, including North and South America, Africa, Europe, Asia and Australia.
User equipment: GPS receivers and transmitters, including items like watches, smartphones and telematic devices.

How does GPS technology work?


GPS works through a technique called trilateration. Used to calculate location, velocity and elevation, trilateration collects signals from satellites to
output location information. It is often mistaken for triangulation, which is used to measure angles, not distances. Satellites orbiting the earth send
signals to be read and interpreted by a GPS device, situated on or near the earth’s surface. To calculate location, a GPS device must be able to read the
signal from at least four satellites. Each satellite in the network circles the earth twice a day, and each satellite sends a unique signal, orbital parameters


and time. At any given moment, a GPS device can read the signals from six
or more satellites. A single satellite broadcasts a microwave signal which is
picked up by a GPS device and used to calculate the distance from the GPS
device to the satellite. Since a GPS device only gives information about the
distance from a satellite, a single satellite cannot provide much location
information. Satellites do not give off information about angles, so the
location of a GPS device could be anywhere on a sphere’s surface area. When
a satellite sends a signal, it creates a circle with a radius measured from
the GPS device to the satellite. When we add a second satellite, it creates a
second circle, and the location is narrowed down to one of two points where
the circles intersect. With a third satellite, the device’s location can finally
be determined, as the device is at the intersection of all three circles. We
live in a three-dimensional world, which means that each satellite produces a sphere, not a circle. The intersection of three spheres produces two
points of intersection, so the point nearest Earth is chosen. As a GPS device moves, the radius (distance to the satellite) changes. When the radius
changes, new spheres are produced, giving us a new position. We can use that data, combined with the time from the satellite, to determine velocity,
calculate the distance to our destination and the ETA. GPS is a powerful and dependable tool for businesses and organizations in many different
industries. Surveyors, scientists, pilots, boat captains, first responders, and workers in mining and agriculture, are just some of the people who use
GPS on a daily basis for work. They use GPS information for preparing accurate surveys and maps, taking precise time measurements, tracking position
or location, and for navigation. GPS works at all times and in almost all weather conditions.
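
The sphere-intersection idea described above can be turned into a small numerical example. The sketch below is a simplified, hypothetical illustration of trilateration: the satellite positions and the receiver position are made-up values, and the receiver clock error that real GPS must also solve for is ignored.

# Illustrative sketch: finding a position from distances to four satellites (trilateration).
# Positions are made-up example values; real GPS also solves for the receiver clock error.
import numpy as np

sats = np.array([[15.0,   0.0, 20.0],
                 [-10.0, 12.0, 18.0],
                 [  5.0, -14.0, 22.0],
                 [ -8.0,  -6.0, 25.0]])     # satellite positions (arbitrary units)
true_pos = np.array([1.0, 2.0, 3.0])        # "unknown" receiver position, used to fake the ranges
d = np.linalg.norm(sats - true_pos, axis=1) # measured distances to each satellite

# Subtracting the first sphere equation from the others gives a linear system A x = b
A = 2 * (sats[1:] - sats[0])
b = (d[0]**2 - d[1:]**2) + np.sum(sats[1:]**2, axis=1) - np.sum(sats[0]**2)

print(np.linalg.solve(A, b))                # recovers approximately [1. 2. 3.]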

There are five main uses of GPS:


Location: Determining a position.
Navigation: Getting from one location to another.
Tracking: Monitoring object or personal movement.
Mapping: Creating maps of the world.
Timing: Making it possible to take precise time measurements.
GPS use cases include:
Emergency response: During an emergency or natural disaster, first responders use GPS for mapping, following and predicting weather, and keeping
track of emergency personnel. In the EU and Russia, the eCall regulation relies on GLONASS technology (a GPS alternative) and telematics to send
data to emergency services in the case of a vehicle crash, reducing response time.
Entertainment: GPS can be incorporated into games and activities like Pokémon Go and Geocaching.
Health and fitness: Smartwatches and wearable technology can track fitness activity (such as running distance) and benchmark it against a similar
demographic.
Construction, mining and off-road trucking: From locating equipment to measuring and improving fleet asset allocation, GPS enables companies to
increase return on their assets.
Transportation: Logistics companies implement telematics systems to improve driver productivity and safety. A lorry tracker can be used to support
fleet route optimization, fuel efficiency, driver safety and fleet compliance.
Other industries where GPS is used include: agriculture, autonomous vehicles, sales and services, the military, mobile communications, security, and
fishing.

How accurate is GPS?


GPS device accuracy depends on many variables, such as the number of satellites available, the ionosphere, the urban environment and more. Some
factors that can hinder GPS accuracy include: Physical obstructions: Arrival time measurements can be skewed by large masses like mountains,
buildings, trees and more. Atmospheric effects: Ionospheric delays, heavy storm cover and solar storms can all affect GPS devices. Ephemeris: The
orbital model within a satellite could be incorrect or out-of-date, although this is becoming increasingly rare. Numerical miscalculations: This might
be a factor when the device hardware is not designed to specifications. Artificial interference: These include GPS jamming devices or spoofs. Accuracy
tends to be higher in open areas with no adjacent tall buildings that can block signals. This effect is known as an urban canyon. When a device is surrounded by large buildings, like in downtown Manhattan or Toronto, the satellite signal is first blocked, and then bounced off a building, where it
is finally read by the device. This can result in miscalculations of the satellite distance.

Satellite Communication
Satellite communication is transporting information from one place to another using a communication satellite in orbit around the Earth. Watching
the English Premier League every weekend with your friends would have been impossible without this. A communication satellite is an artificial
satellite that transmits the signal via a transponder by creating a channel between the transmitter and the receiver at different Earth locations.
Telephone, radio, television, internet, and military applications use satellite communications. Believe it or not, more than 2000 artificial satellites are
hurtling around in space above your heads.

How Satellite Communications Work?


The communication satellites are similar to the space mirrors that help us bounce signals such as radio, internet data, and television from one side of
the earth to another. Three stages are involved, which explain the working of satellite communications.
These are: Uplink, Transponders, and Downlink
Consider an example of signals from a television. In the first
stage, the signal from the television broadcast on the other
side of the earth is first beamed up to the satellite from the
ground station on the earth. This process is known as uplink.
The second stage involves transponders such as radio
receivers, amplifiers, and transmitters. These transponders
boost the incoming signal and change its frequency so that
the outgoing signal does not interfere with the incoming one. Depending on the
incoming signal sources, the transponders vary. The final
stage involves a downlink in which the data is sent to the
other end of the receiver on the earth. It is important to understand that usually, there is one uplink and multiple downlinks.

The need for satellite communication becomes evident when we want to transmit the signal to far-off places, where the Earth’s curvature comes into
play. This obstruction is overcome by putting communication satellites in space to transmit the signals across the curvature. Satellite communication
uses two types of artificial satellites to transmit the signals:
Passive Satellites: If you put a hydrogen balloon that has a metallic coating
over it up in the air, it technically becomes a passive satellite. Such a balloon
can reflect microwave signals from one place to another. The passive satellites
in space are similar. These satellites just reflect the signal back towards the
Earth without amplification. Since the satellite orbit height can range from
2000 to 35786 km, attenuation due to the atmosphere also comes into play,
and due to this, the received signal is often very weak.
Active Satellites: Active satellites, unlike passive satellites, amplify the
transmitted signals before re-transmitting them back to Earth, ensuring excellent
signal strength. Passive satellites were the earliest communication satellites, but now almost all new ones are active satellites.


To avoid mixing and interference of signals, every user is allocated a specific frequency for transmitting. This frequency allocation is done by the International Telecommunication Union. Geosynchronous satellites are of note here. A geostationary orbit lies at about 35786 km above Earth's surface. If you can
spot such a satellite with a telescope from Earth, it will appear stationary to you. The satellite’s orbital period and the Earth’s rotational rate are in
sync.
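As a rough check of the 35786 km figure quoted above, the short sketch below estimates the geostationary orbital radius from the requirement that the orbital period equals one sidereal day; the gravitational parameter, sidereal day and equatorial radius used are standard values assumed here rather than taken from this text.

```python
import math

# Estimate the geostationary altitude from the condition that the orbital period
# equals one sidereal day. Standard constants (assumed, not taken from this text):
GM_EARTH = 3.986004e14       # Earth's gravitational parameter, m^3 s^-2
SIDEREAL_DAY = 86164.1       # s
EQUATORIAL_RADIUS = 6.378e6  # m

# Newton's form of Kepler's third law: T^2 = 4*pi^2*r^3 / GM  =>  r = (GM*T^2 / 4*pi^2)^(1/3)
r = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - EQUATORIAL_RADIUS) / 1000

print(f"Geostationary altitude is about {altitude_km:.0f} km")   # about 35786 km
```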
Satellite Communication Services
There are two categories in which satellite communication services can be classified:
One-way Satellite Communication
In one-way satellite communication, communication takes place between
one or more earth stations through a satellite. The signal travels from the
transmitter at the first earth station, via the satellite, to the receiver at the
second earth station. The transmission of the signal is unidirectional.

Two-Way Satellite Communication


In two-way satellite communication, the information is exchanged between any two earth
stations. It can be said that there is a point to point connectivity. The signal is transmitted
from the first earth station to the second earth station such that there are two uplinks and
two downlinks between the earth stations and the satellite.

Trilateration is defined as the process of determining the location based on


the intersections of the spheres. The distance between the satellite and the receiver
is calculated by considering a 3-D sphere such that the satellite is located at the centre of
the sphere. Using the same method, the distance for all the 3 GPS satellites from the receiver
is calculated. The parameters that can be calculated after trilateration include the time of
sunrise and sunset, speed, and the distance from the GPS receiver to the destination.
GPS systems are remarkably versatile and can be found in almost any industry sector. They can be used to map forests, help farmers harvest their
fields and navigate aeroplanes on the ground or in the air.

Space exploration
Space exploration is the investigation of the universe beyond Earth's atmosphere, utilizing both
crewed and uncrewed spacecraft. This endeavor has a rich history, particularly in the latter half of
the 20th century when advancements in rocket technology enabled humanity to overcome
gravitational forces and achieve orbital velocities. The first manned mission to the moon in 1969
marked a significant milestone, showcasing the potential of space travel. The motivations for
exploring space are vast. Organizations like NASA emphasize that space exploration inspires global
unity, drives groundbreaking discoveries, and creates new opportunities for technological
advancements. The ongoing missions and research not only expand our understanding of the
cosmos but also lead to innovations that benefit life on Earth. Despite its potential, human
spaceflight poses risks and challenges, as evidenced by historical incidents. By studying celestial bodies and phenomena, we unlock the mysteries of black holes, planets, and the origins of life. This knowledge not only satisfies human curiosity but also inspires future generations to pursue careers
in science and technology. Space exploration has tangible benefits for life on Earth. Research conducted in space has led to advancements in health
care, such as improved vaccines and treatments for conditions like bone loss and eye disorders. The technologies developed for space missions often
find applications in everyday life, enhancing weather forecasting, climate monitoring, and telecommunications. Space exploration fosters international
collaboration, uniting nations in the pursuit of knowledge and innovation. As we continue to explore the cosmos, we not only gain insights into the
universe but also create opportunities for technological advancements that can improve life on our planet.

Hubble Space Telescope (HST)


It is a space telescope and the first sophisticated optical observatory to be placed into orbit
around Earth. It is one of the largest and most versatile space telescopes, and many research
papers have been based on its data. Hubble has made over 1.4 million observations and has
helped track interstellar and celestial bodies in space.
Since its launch in 1990, the Hubble Space Telescope (HST) has revolutionized our
understanding of the universe.
Operating in low Earth orbit, Hubble captures stunning images and data across the near-
infrared to ultraviolet spectrum. Its large reflecting mirror gathers light from distant celestial
objects, allowing scientists to explore phenomena such as the formation of stars, the evolution
of galaxies, and the expansion of the universe. Hubble's contributions are vast, including the
precise measurement of Hubble's constant, which describes the rate of expansion of the universe. Its collaboration with the James Webb Space
Telescope (JWST) enhances our observational capabilities, providing complementary data that deepens our cosmic insights. As a joint project of NASA
and the European Space Agency (ESA), Hubble continues to operate and inspire new generations of astronomers. Its legacy is not just in the discoveries
made but in the questions it raises about the cosmos, ensuring its place in the annals of space exploration. As the first true space-based observatory,
Hubble has provided unprecedented high-resolution images and data across ultraviolet, optical, and near-infrared wavelengths. This capability allows
astronomers to explore celestial phenomena with clarity unattainable from ground-based telescopes, which are hindered by Earth's atmosphere.
Hubble's contributions to astronomy are vast, including the discovery of new moons in our solar system and the observation of distant galaxies, which
have helped refine our understanding of cosmic expansion. Its ability to be serviced by astronauts has ensured that it remains at the forefront of
astronomical research, with multiple upgrades enhancing its capabilities over the years. In collaboration with other observatories like the James Webb
Space Telescope, Hubble continues to provide complementary insights into the cosmos, solidifying its status as one of the most important scientific
instruments in history.

Observations of Hubble Space Telescope


The Hubble Space Telescope was the main instrument in discovering the moons
Hydra and Nix around Pluto. It has observed six galaxies merging together, and
HST has detected concentrations of black holes. The Hubble Space Telescope can
also detect infrared radiation.
Hubble's constant was determined after HST observed Cepheid variables in
nearby galaxies. Hubble has recorded the birth of stars through turbulent clouds
of gas and dust. Hubble was central to discovering dusty disks and
stellar nurseries throughout the Milky Way. Using Hubble, the presence of black holes has been confirmed.


International Space Station


The International Space Station (ISS) is a remarkable achievement in human engineering
and international collaboration, orbiting approximately 250 miles above Earth. As the
largest space station ever constructed, it serves as a unique platform for scientific research
in microgravity, enabling astronauts to conduct experiments that would be impossible on
Earth. The ISS is equipped with living quarters, a gym, and even a 360-degree bay
window, making it a home for astronauts from various countries. The ISS is a multi-
national project, primarily built by the United States and Russia, with contributions from
other nations. It plays a crucial role in advancing our understanding of space and human health, as well as fostering international cooperation in
science and technology. Educational initiatives, such as STEM programs, leverage the ISS as a classroom, inspiring the next generation of scientists
and engineers. Through its ongoing missions, the ISS continues to push the boundaries of human exploration and research, making it a vital asset for
future space endeavors. The International Space Station (ISS) is a huge spacecraft orbiting around our planet. It revolves around the Earth with a
consistent speed and direction. It acts as an all-in-one place where astronauts can live and conduct experiments. The space station is a unique science
laboratory where a broad spectrum of experiments is carried out. Many nations are involved with the maintenance of the craft. Since its launch, the
space station has been extended with newer modules and equipment. Astronauts are deployed to assemble and disassemble parts. Overall it took forty-
two space flights to attach the main parts of the space station. It is regulated by a public association of multiple space research organizations: NASA
(United States of America), ESA (Europe), Roscosmos (Russia), CSA(Canada) and JAXA(Japan).
The ownership of the space station is regulated under intergovernmental agreements and treaties.
It revolves around the Earth at a mean altitude of about 402 km. The average speed is about 17,500 mph. Therefore, it completes one full revolution
around the Earth in 90 minutes. The station acts as a microgravity research lab for astronomy, physics, astrobiology, meteorology, etc. The ISS is used
for the trials of spaceship equipment and systems that are needed for Mars and Moon missions. NASA predominantly uses the station to understand the
effects of working and living in space. Such invaluable information will further demystify the conditions necessary for humans to survive on other planets.
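The quoted altitude, speed and 90-minute period can be cross-checked with a short calculation, assuming for simplicity a circular orbit and a standard mean Earth radius; the sketch below is illustrative only.

```python
import math

# Rough check: an orbit about 402 km up at about 17,500 mph takes roughly 90 minutes.
EARTH_RADIUS_KM = 6371              # mean Earth radius (standard value, not from this text)
altitude_km = 402
speed_km_per_h = 17500 * 1.609344   # convert mph to km/h (exact factor)

circumference_km = 2 * math.pi * (EARTH_RADIUS_KM + altitude_km)
period_minutes = circumference_km / speed_km_per_h * 60
print(f"Orbital period is about {period_minutes:.0f} minutes")  # about 91 minutes
```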

Essential Parts of the International Space Station


Nodes are modules that connect individual parts of the space station. Solar arrays extend out from the main structure and are used to accumulate solar energy for generating electricity. The arrays are joined to the space station with the help of an extended truss. There are also radiators on the truss, which help to regulate the station's temperature. Robotic arms are attached outside the station. They were used to build the whole space station.
Robotic arms also help to move astronauts during spacewalks. Other than these use cases, such automated arms are extensively used in laboratory
science experiments.
Docking ports are another crucial part of the entire space station. They enable the station to connect external spacecraft and satellites for various
purposes. New visitors and crew members enter through the docking ports. Supplies are also delivered through docking ports.

Uses of International Space Station


The ultimate goal of the International Space Station is to facilitate long-term space exploration and create useful inventions for the goodness of
humanity. There are six cutting-edge laboratories enabling premium research projects in various fields of science and technology. Complex and volatile
experiments can be easily conducted in microgravity spaces. Especially in medicine, we are witnessing revolutionary research projects that were
impossible to perform in earthly conditions. Studies on hyper and microgravity help understand the effects of alien conditions on the human body.
Cultivating protein crystals in space could help scientists develop better treatment solutions for many diseases that have no cure. In addition, there are numerous innovative space research projects that are designed to study celestial bodies. The International Space Station acts as a doorway to new
horizons in space exploration. It is a place where we can learn and experiment on living and surviving on alien planets. The long-term consequences
of weightlessness and radiation on the human body are the most crucial research areas, which will allow us to prepare astronauts for crewed
interplanetary missions.

Significant Contributions of International Space Station


The International Space Station is aiding in the research of water purification technology. In the space station, scientists are able to conduct advanced
experiments which are not possible on the Earth. In space, the external variables are significantly less compared to the Earth. Fewer constraints mean
less probability of errors. High-quality protein crystals are being developed in the space station. Space has the perfect conditions to examine these
structures. Microgravity is conducive to the optimal cultivation of rare and complex protein crystal structures. These are some of the crucial substances
in medical diagnosis. Hematopoietic prostaglandin D synthase, an essential component in diagnosing muscular dystrophy, was developed here. Other contributions include space station ultrasound technology, eye surgery methods using space station hardware and tools (a helmet feeding highly efficient image-processing chips), robotic arms that can be used to operate on complex tumours, and research on the characteristics of various fluids to improve existing medical devices.
They are developing fine-tuned diets and exercises for preventing bone loss or degradation.
Practical experiences in the space station help better understand osteoporosis development.
The International Space Station gives excellent opportunities for students to conduct their own scientific experiments in space. It helps to monitor
water quality. It also enables us to monitor and predict natural calamities from space. There have been projects for developing optimal methods for cultivating crops in space. In turn, this work also helps to find solutions for mould prevention in medical labs, homes and large-scale food storage.


SENIOR FOUR

4.1 INTRODUCTION TO CURRENT ELECTRICITY


Learning Outcomes
a) Understand what e.m.f is (u)
b) Understand that cells convert chemical energy into
electrical energy, producing current and also the force
needed to create a flow of current in a circuit (u, s)
c) Understand that electric cells are very useful but have their limitations (u, v/a)
d) Understand the nature of electric current, its sources, what makes it flow around circuits and how it is measured (u, s)
e) Know that some materials are electrical conductors and others are insulators (k)
f) Recognize, understand and apply knowledge of series and parallel circuits (k, u, s,v/a)
g) Appreciate that circuits may be represented as circuit diagrams consisting of an agreed set of symbols to represent components (k, u, s)

Introduction
Current electricity is the flow of electric charge, primarily electrons, through a conductor, such as a wire. This flow is measured in amperes (amps),
which quantify the number of charges passing a point in the circuit per second. Understanding electric current is essential for grasping how electrical
circuits function, as it directly relates to the voltage and resistance present in the system. Electric current can be compared to water flow; just as water
flows through pipes, electric current flows through conductors. The relationship between voltage (the potential difference) and current is described
by Ohm's Law, which states that current is directly proportional to voltage and inversely proportional to resistance. This fundamental principle is
crucial for designing and analyzing electrical circuits. In summary, current electricity is a vital concept in physics and engineering, underpinning the
operation of countless devices and systems in our daily lives. Understanding its principles allows for advancements in technology and energy efficiency.
Electromotive force (e.m.f) refers to the voltage generated by a source that drives electric current in a circuit. It is a measure of the energy supplied
per coulomb of charge and can be thought of as the "push" that moves electrons through a conductor.

Sources of e.m.f
Chemical Sources
Batteries: Batteries convert chemical energy into electrical energy through electrochemical
reactions. Example: In a common alkaline battery, zinc and manganese dioxide undergo a chemical
reaction, resulting in a flow of electrons from the anode (negative terminal) to the cathode (positive
terminal).
Fuel Cells: Fuel cells convert the chemical energy of fuels (like hydrogen or methanol) directly into
electricity through a chemical reaction with oxygen, typically producing water as a byproduct.
Example: Hydrogen fuel cells generate e.m.f. by combining hydrogen and oxygen, producing electricity, water, and heat.
Electromagnetic Sources
Generators: Generators convert mechanical energy into electrical energy through electromagnetic induction, where a conductor (like a wire) moves
through a magnetic field, inducing an e.m.f. in the conductor. Example: In a simple AC generator, a coil of wire rotates within a magnetic field,
producing alternating current due to the changing magnetic flux.


Alternators: Similar to generators, alternators produce alternating current by rotating coils within a magnetic field, inducing an e.m.f. that changes
direction with time. Example: Alternators are commonly used in vehicles to charge batteries and power electrical systems while the engine runs.

Thermoelectric Sources
Thermocouples: Thermocouples generate e.m.f. by exploiting the Seebeck effect, where a voltage is produced in a circuit made of two different
conductors when there is a temperature difference between their junctions. Example: Used in temperature measurement, where the voltage produced
correlates to the temperature difference, allowing for precise readings.
Thermoelectric Generators (TEGs): TEGs convert heat energy directly into electrical energy using thermoelectric materials that generate e.m.f.
when subjected to a temperature gradient.
Example: Used in space applications and waste heat recovery systems
to convert waste heat into usable electrical energy.
Photovoltaic Sources.
Photovoltaic (PV) sources are essential in harnessing solar energy,
converting sunlight directly into electricity through solar cells. A PV
cell, typically small and made from semiconductor materials, is the
fundamental unit of this technology. When sunlight strikes the cell, it
excites electrons, generating an electric current. The efficiency of PV
systems is a primary focus of ongoing research, aiming to make solar
energy more cost-competitive with traditional energy sources. Various materials are utilized in PV cells, each contributing to the overall performance
and efficiency of solar panels. As the demand for clean, renewable energy grows,photovoltaic sources are increasingly recognized for their cost-
effectiveness and environmental benefits.
Solar Cells: Solar cells convert light energy directly into electrical energy through the photovoltaic effect, where photons knock electrons loose from
atoms in a semiconductor material, creating a flow of current. Example: Used in solar panels to generate electricity from sunlight for residential,
commercial, and utility applications.

Piezoelectric Sources
Piezoelectric Materials: Piezoelectric materials generate e.m.f. when mechanical stress is applied, causing a displacement of charges within the
material. Example: Used in sensors and actuators, as well as in applications like energy harvesting from vibrations or mechanical movements. Various
sources of e.m.f. are vital for supplying electrical energy in different applications. Chemical reactions in batteries and fuel cells, mechanical movement
in generators and alternators, thermal gradients in thermocouples and TEGs, light energy in solar cells, and mechanical stress in piezoelectric materials
all serve to produce voltage that drives electric current in circuits. These are devices that can produce electricity by chemical action.
Different types of electric cells
Simple cells, Dry cells, Leclanche cells (wet cells), Lead acid accumulator and Alkaline cells (Nickel iron cells).
Simple cells
A simple cell consists of a copper plate as the anode (positive terminal) and a zinc plate as the cathode (negative terminal), dipped in dilute sulphuric acid as the electrolyte.
The more reactive metal in the reactivity series forms the cathode; since zinc is higher than copper in the reactivity series, zinc forms the cathode and copper the anode.
How a simple cell works?


The electrolyte undergoes ionization. The zinc plate slowly dissolves and goes into the solution
as zinc ions which displace hydrogen ions to form zinc sulphate. The displaced hydrogen ions
move to the copper plate. They gain electrons and become neutralized. Pairs of these hydrogen atoms
combine to form hydrogen gas, which appears as bubbles on the copper plate. This flow of electrons from the
cathode to anode causes the flow of electricity from the anode to the cathode hence if a voltmeter
is connected between anode and cathode, it deflects.

Defects of a simple cell


The two primary faults are polarization and local action. Polarization occurs when the cell is in
use, leading to the accumulation of gases, particularly hydrogen, at the electrodes. This buildup reduces the current flow, causing the cell to operate
less effectively over time. As the reaction progresses, the voltage drops, making it challenging to
maintain a consistent output.
Prevention/ reduction of polarization
This can be reduced by occasional brushing of the anode, and by adding a depolarizer, e.g. potassium
dichromate (K2Cr2O7), which oxidizes the hydrogen to water.
Local action refers to the unintended reactions occurring at the electrodes, which lead to
energy loss and reduced efficiency. It is caused by impurities in the electrode materials:
the zinc electrode (cathode) gradually wears away as impurities on the plate react with
the acid to produce hydrogen bubbles, resulting in unwanted side reactions that consume reactants without contributing
to the intended electrochemical process.
Prevention of local action: The zinc plate is cleaned in concentrated acid and then rubbed with mercury. The mercury zinc amalgam covers up
the impurities thereby preventing their contact with acid electrolyte.

PRIMARY AND SECONDARY CELLS


Primary cell: This is one in which current is produced as a result of irreversible reactions, i.e. it cannot be recharged when it runs down, e.g. simple
cells, dry cells and Leclanché cells. When the cell runs down, it implies that the ions in the electrolyte have all reacted and can no longer release electrons.
Primary cells are electrochemical cells that generate electric energy through irreversible chemical reactions. Once the chemical reactants are exhausted,
these cells cannot be recharged and must be replaced. Primary cells are commonly used in various applications due to their convenience and ease of
use.

Dry cell as a primary cell


It is used in torches, radios and has an E.M.F of 1.5 volts.
It uses Ammonium chloride jelly as the electrolyte.
The anode is the carbon rod placed in the centre of the Zinc container which forms the cathode.
Action of a dry cell
The source of energy is chemical action between Zinc and ammonium chloride jelly. As a result,
hydrogen gas is produced which collects at the carbon rod and polarizes the cell. The manganese
(iv) oxide oxidizes the hydrogen to water in the cell and enables it to supply current for some time.
Local action cannot be completely stopped in dry cells and therefore the cell deteriorates with time.


The other different types of primary cells include:


Alkaline Cells: These cells typically consist of zinc (anode) and manganese dioxide (cathode) with an alkaline electrolyte
(potassium hydroxide). They have a higher energy density than other primary cells, a stable voltage output over a longer period,
and better performance at lower temperatures compared to other types. They are widely used in household devices like
remote controls, toys, and flashlights.
Zinc-Carbon Cells: This traditional type consists of a zinc anode, carbon rod as the cathode, and an electrolyte made
of a paste of ammonium chloride or zinc chloride. They have a lower cost compared to alkaline cells, but a lower
energy density and shorter shelf life, and their voltage drops quickly under load, making them less suitable for high-drain
applications. They are commonly used in low-drain devices like clocks, flashlights, and remote controls.

Lithium Cells: These cells use lithium metal or lithium


compounds as the anode and various materials (like manganese
dioxide or cobalt oxide) as the cathode, often with a lithium salt
electrolyte. They have a high energy density and long shelf life, can operate in a wide temperature
range, and generally have a stable voltage output. Used in a variety of applications, including
watches, cameras, calculators, and medical devices.
Mercury Cells: Composed of mercury (anode), zinc
(cathode), and a potassium hydroxide electrolyte.
They have a stable voltage output and high energy density, but raise environmental concerns due to mercury toxicity.
Previously used in hearing aids, cameras, and some types of watches, but usage has declined due to
environmental regulations.

Silver-Oxide Cells: These cells have a silver oxide cathode,


zinc anode, and an alkaline electrolyte (typically potassium
hydroxide). They have high energy density and stable voltage
output, more expensive than other primary cell types. Commonly used in watches, hearing aids, and other
small electronic devices due to their compact size and reliability.

Secondary cells
Secondary cells, commonly known as rechargeable batteries, are essential components in modern technology. Unlike
primary cells, which are designed for single-use, secondary cells can be charged and discharged multiple times.
This capability is achieved by reversing the chemical reactions that occur during discharge, allowing the battery to
restore its energy. There are various types of secondary cells, including lithium-ion, lead-acid, and flow batteries.
Each type has unique characteristics, such as energy density and internal resistance, which make them suitable for
different applications. For instance, lithium-ion batteries are widely used in portable electronics due to their high
energy density and low self-discharge rates. The development of advanced materials, such as carbon nanotubes, is enhancing the performance of secondary batteries, making them more efficient and longer-lasting. As technology continues to evolve, the importance
of secondary cells in energy storage and sustainability will only increase, driving innovation in this vital sector.

Lead acid accumulators as a secondary cell


Lead-acid accumulators, commonly known as lead-acid batteries, are a type of rechargeable battery that utilizes
a chemical reaction to store and release electrical energy. They consist of two lead-based plates immersed in a
sulfuric acid solution, with a negative electrode made of spongy lead to facilitate the formation and dissolution
of lead. While lead-acid batteries have a relatively low energy density compared to modern rechargeable
batteries, they excel in providing high surge currents, making them ideal for automotive applications. These
batteries have a moderate lifespan and are not subject to the memory effect seen in nickel-based systems,
allowing for better charge retention. Flooded lead-acid batteries, or wet cell batteries, are the most traditional and widely recognized type, used
extensively in vehicles and backup power systems. During discharge, both electrodes change to lead sulphate and the acid becomes more dilute. When
fully charged, the relative density of the acid is 1.25 which falls to 1.18 at cell discharge.

Care and maintenance of lead acid accumulators


The liquid level must be maintained using distilled water. This ensures that the electrodes are not exposed. The cell should be charged if the relative
density of the electrolyte falls below 1.25.
The battery should be kept clean so that current does not leak away across the casing and across the terminals. The positive terminal should not be connected directly to the negative terminal. When this is done (a short circuit), too much current is taken from the cell, which tends to destroy it. The battery should be charged regularly and should not be left in an uncharged condition for a long time. The effect of the charging current is to change the lead sulphate on the positive plate back into lead(IV) oxide and that on the negative plate back into spongy lead, returning the sulphate to the electrolyte.

The Nickel- Cadmium Alkaline Cells or Nife Accumulators


Nickel-Cadmium (NiCd) batteries, also known as NiFe accumulators, are a type of rechargeable battery that utilizes nickel oxide hydroxide and metallic
cadmium as electrodes. These batteries are known for their durability and ability to deliver high discharge rates, making them suitable for various
applications, including power tools, emergency medical equipment, and two-way radios. NiCd batteries operate effectively in extreme temperatures
and have a long cycle life, which contributes to their popularity in industrial settings. However, they do have some drawbacks, such as the memory
effect, which can reduce their capacity if not properly maintained. Environmental concerns regarding cadmium, a toxic heavy metal, have led to
increased scrutiny and regulations surrounding their disposal and recycling.

Electric Current
Electric Current is the rate of flow of electrons in a conductor. The SI unit of electric current is the ampere: 1 A = 1 C s⁻¹. Electrons are minute
particles that exist within the molecular structure of a substance. Sometimes, these electrons are tightly held, and other times they are loosely held.
When electrons are loosely held by the nucleus, they are able to travel freely within the limits of the body. Electrons are negatively charged particles
hence when they move, a number of charges moves, and we call this movement of electrons as electric current. It should be noted that the number of
electrons that are able to move governs the ability of a particular substance to conduct electricity. Some materials allow the current to move better
than others. Based on the ability of the material to conduct electricity, materials are classified into conductors and insulators.


Conductors: These materials allow the free flow of electrons from one particle to another. Conductors allow for charge transfer through the free
movement of electrons. The flow of electrons inside the conducting material or conductor generates an electric current. The force that is required to
drive the current flow through the conductor is known as voltage. Conductors of electricity are classified into two main categories: good conductors
and bad conductors. Good conductors, such as copper and
aluminum, iron, silver and gold, allow electric current to
flow through them easily due to their abundance of free-
moving electrons. Silver is the best conductor of electricity.
This property makes them ideal for electrical wiring and
components, ensuring efficient energy transfer. Bad
conductors, or insulators, resist the flow of electricity.
Materials like rubber, glass, and wood fall into this category,
as they have high resistance and few free electrons.
Insulators are materials that restrict the free flow of electrons
from one particle to another. The particles of the insulator do
not allow the free flow of electrons; subsequently, the charge is seldom distributed evenly across the surface of an insulator. Examples of Insulators:
Plastic, Wood and Glass

Current flow in a conductor


Electric current is the flow of charged particles, primarily electrons, through a conductor. In metallic
conductors, these electrons move in response to an electric field, creating a chain reaction of mutual
repulsion that allows them to flow almost simultaneously. This movement is essential for the functioning
of electrical circuits, where current travels from a higher potential (positive terminal) to a lower potential
(negative terminal).The behavior of current flow can vary depending on the type of electricity. Direct
current (DC) flows uniformly through the entire cross-section of a conductor, while alternating current
(AC) experiences the skin effect, where it tends to flow near the surface of the conductor at higher
frequencies. Understanding these dynamics is crucial for designing efficient electrical systems.

What is an electromotive force?


Electromotive force (EMF) refers to the potential difference generated by a source, such as a battery
or generator, that drives electric charges through a circuit. It is the energy supplied per unit
charge to move charges from the negative to the positive terminal of the source, enabling the flow
of current. EMF is the maximum voltage the source can provide when no current flows and is
measured in volts (V). When a circuit is complete, the actual voltage available to the external
circuit is slightly less than the EMF due to energy losses within the source caused by its internal
resistance. EMF acts as the driving energy behind the flow of electric current in a circuit. If a force
acts on electrons to make them move in a particular direction, then up to some extent random
motion of the electrons will be eliminated. An overall movement in one direction is achieved. The force that acts on the electrons to make them move
in a certain direction is known as electromotive force, and its quantity is known as voltage and is measured in volts.


Unit of Electric Current
The magnitude of electric current is measured in coulombs per second. The SI unit of
electric current is Ampere and is denoted by A. Ampere is defined as one coulomb of
charge moving past a point in one second. If there are 6.241 × 10¹⁸ electrons flowing past a point in one second, then the electrical current flowing is one ampere. The unit ampere is widely used within electrical and electronic technology
along with the multipliers like milliamp (0.001A), microamp (0.000001A), and so forth.
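The electron count quoted above can be related to current using the elementary charge (a standard constant, not given in this text); a minimal sketch:

```python
# Relating an electron count to current: I = n * e / t
E_CHARGE = 1.602176634e-19   # elementary charge in coulombs (standard constant)

n = 6.241e18                 # electrons passing a point
t = 1.0                      # in one second
current = n * E_CHARGE / t
print(f"I = {current:.3f} A")   # approximately 1.000 A
```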

Visualizing Electric Current


To gain a deeper understanding of what an electric current is and how it behaves in a conductor, we can use the water pipe analogy of electricity.
Certainly, there are some limitations, but it serves as a very basic illustration of current and current flow.
Conventional current flow versus electron flow
There is a lot of confusion around conventional current flow and electron flow.

Conventional Current Flow


The conventional current flow is from the positive to the negative terminal and
indicates the direction in which positive charges would flow.

Electron Flow
The electron flow is from negative to positive terminal. Electrons are negatively
charged and are therefore attracted to the positive terminal as unlike charges
attract.

Properties of Electric Current


After we define electric current, let us learn the properties of electric current. Electric current is an important quantity in electronic circuits. We have
adapted electricity in our lives so much that it becomes impossible to imagine life without it. Therefore, it is important to know what is current and
the properties of the electric current. We know that electric current is the result of the flow of electrons. The work done in moving the electron stream
is known as electrical energy. Electrical energy can be converted into other forms of energy such as heat energy, light energy, etc. For example, in an
iron box, electric energy is converted to heat energy. Likewise, the electric energy in a bulb is converted into light energy.
There are two types of electric current known as alternating current (AC) and direct current (DC). The direct current can flow only in one direction,
whereas the alternating current periodically reverses direction.
Direct current is seldom used as a primary energy source in industries. It is mostly used in low voltage applications such as charging batteries, aircraft
applications, etc.
Alternating current is used to operate appliances for both household and industrial and commercial use. The electric current is measured in ampere.
One ampere of current represents one coulomb of electric charge moving past a specific point in one second. The conventional direction of an electric
current is the direction in which a positive charge would move. Henceforth, the current flowing in the external circuit is directed away from the
positive terminal and toward the negative terminal of the battery.


Effects of Electric Current


After defining electric current, let us learn the various effects of electric current. When a current flows through a conductor, there are a number of signs
which tell whether a current is flowing or not. The following are the most prominent effects:

Heating Effect of Electric Current


When our clothes are crumpled, we use the iron box to make our clothes crisp and neat. Iron box works on the principle of heating effect of current.
There are many such devices that work on the heating effect. When an electric current flows through a conductor, heat is generated in the conductor.
The heat energy produced is given by the equation H = I²Rt (the corresponding power dissipated as heat is P = I²R).
The heating effect depends on the following factors:
 The time for which the current flows. The longer the current flows in a conductor, the more heat is generated.
 The electrical resistance of the conductor. The higher the resistance, the more heat is produced.
 The amount of current. The larger the current, the more heat is produced.
If the current is small, then the amount of heat generated is likely to be very small and may not be noticed. However, if the current is larger, then a noticeable amount of heat is generated.
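A short, hypothetical worked example of the heating-effect relation H = I²Rt:

```python
# Heat produced in a resistor: H = I^2 * R * t  (all values hypothetical)
current = 2.0       # A
resistance = 10.0   # ohms
time = 60.0         # s

heat_joules = current**2 * resistance * time
print(f"Heat produced = {heat_joules:.0f} J")   # 2^2 x 10 x 60 = 2400 J
```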

Magnetic Effect of Electric Current


Another prominent effect that is noticeable when an electric current flows through the conductor is the build-up of the magnetic field. We can observe
this when we place a compass close to a wire carrying a reasonably large direct current, and the compass needle deflects. The magnetic field generated
by a current is put to good use in a number of areas. By winding a wire into a coil, the effect can be increased, and an electromagnet can be made.

Chemical Effect of Electric Current


When an electric current passes through a solution, the solution ionizes and breaks down into ions. This is because a chemical reaction takes place
when an electric current passes through the solution.
Electroplating and electrolysis are the applications of the chemical effect of electric current.
Electric current is the flow of electric charge through a conductor, typically measured in amperes (A). This flow of charge is caused by a difference in
electric potential (voltage) across the conductor, which creates an electric field, pushing charges from one point to another.

Types of Electric Current


Direct Current (DC): Electric charge flows in a single, constant direction.
Applications: Commonly used in batteries, electronic devices, and DC motors.
Alternating Current (AC): Electric charge periodically changes direction.
Applications: Widely used for household and industrial power supply because AC can be easily transformed to different voltages.

Measuring Electric Current


Electric current is measured in amperes (A), where one ampere represents one coulomb of charge passing through a point in the conductor per second.
It can be measured using an ammeter in series with the circuit.
Electric Current Flow: Defined as the flow of positive charge from the positive to the negative terminal. Electron Flow: Actual flow of electrons,
which move from the negative to the positive terminal.


Ohm’s Law
Electric current (I) in a conductor is directly proportional to the voltage (V) across it and inversely proportional to the resistance (R) of the conductor:
I = V/R, or equivalently V = IR, where I is current, V is voltage, and R is resistance.
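A minimal sketch of Ohm's law as a calculation, using hypothetical values:

```python
# Ohm's law: V = I * R, so I = V / R  (hypothetical values)
voltage = 12.0      # V
resistance = 4.0    # ohms

current = voltage / resistance
print(f"I = {current} A")   # 3.0 A
```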
Applications of Electric Current
 Provides energy to devices like lights, motors, and electronic devices.
 Used in industrial processes to extract or refine metals.
 Provides energy for electric heaters and light sources.
Safety Considerations
Electric current can be dangerous if not managed properly. High current can cause burns, shocks, or fires, so insulation, grounding, and circuit
protection (fuses, circuit breakers) are essential for safe operation.

Electromotive Force
Electromotive force is defined as the electric potential produced by either an electrochemical cell or by changing the magnetic field.
E.M.F is the commonly used acronym for electromotive force.
A generator or a battery is used for the conversion of energy from one form to another. In these devices, one terminal becomes positively charged
while the other becomes negatively charged. Therefore, the electromotive force is the work done on a unit electric charge.
Electromotive force is used in the electromagnetic flowmeter which is an application of Faraday’s law.
 Terminal voltage (V) is defined as the potential difference across the terminals of a load when the circuit is on while EMF (E) is defined as
the maximum potential difference that is delivered by the battery when there is no flow of current.
 A voltmeter is used for measuring the terminal voltage whereas a potentiometer is used for measuring the EMF.

Symbol for Electromotive Force
The electromotive force symbol is ε.
What is Electromotive Force Formula?
The formula for electromotive force is ε = V + Ir, where V is the terminal voltage of the cell, I is the current in the circuit, r is the internal resistance of the cell and ε is the electromotive force. The unit of electromotive force is the volt. EMF is numerically the number of joules of energy given by the source per coulomb of charge moved around the circuit: 1 volt = 1 joule per coulomb (J C⁻¹).
EMF is therefore given as the ratio of work done to charge moved: e.m.f = work done (joules) / charge (coulombs), expressed in J C⁻¹.

Measuring the e.m.f of a cell


To determine the electromotive force (e.m.f.) of a cell, a simple circuit can be set up with a voltmeter and variable resistor (rheostat) to measure the
voltage across the cell when different currents are drawn. This method is known as the voltmeter and ammeter method.
Apparatus
A cell, Voltmeter, Ammeter, Bulb or Rheostat or variable resistor, Connecting wires, and Switch
Procedure
 Connect the cell, ammeter, voltmeter, and rheostat in a circuit.
 The voltmeter should be connected in parallel across the cell, while the ammeter is in series with the cell and rheostat.


 Close the switch to complete the circuit, allowing current to flow.


 Adjust the rheostat to change the current flowing through the circuit.
 For each position of the rheostat, record the current I (from the ammeter)
and the voltage V (from the voltmeter).
 Repeat for several different settings to obtain a range of I and V readings.
 Open the switch between readings to prevent the cell from discharging too
quickly and affecting accuracy.
 Plot a graph of V (on the y-axis) against I (on the x-axis). The graph should
yield a straight line with a negative slope.
 The intercept on the V-axis (where I = 0) gives the e.m.f. (E) of the cell.
 The magnitude of the slope of the line gives the internal resistance r of the cell, since the relationship between V and I follows the equation V = E − Ir.
Explanation
The e.m.f. of the cell is the open-circuit voltage (the voltage when no current flows), represented by the y-intercept on the V-I graph. This is because,
at zero current, the voltage across the cell is entirely due to the e.m.f., without any voltage drop across the internal resistance of the cell.
This method allows for both the e.m.f. and internal resistance to be determined accurately by analyzing the straight-line graph obtained.
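The straight-line analysis described above can also be carried out numerically. The sketch below fits V = E − Ir to a set of hypothetical readings using the standard least-squares formulas; the numbers are invented for illustration and chosen to lie exactly on a line.

```python
# Fit V = E - I*r to hypothetical (I, V) readings: the intercept gives the e.m.f. E
# and the magnitude of the slope gives the internal resistance r.
I = [0.10, 0.20, 0.30, 0.40, 0.50]   # ammeter readings, A (hypothetical)
V = [1.45, 1.40, 1.35, 1.30, 1.25]   # voltmeter readings, V (hypothetical)

n = len(I)
mean_I = sum(I) / n
mean_V = sum(V) / n
slope = sum((i - mean_I) * (v - mean_V) for i, v in zip(I, V)) / \
        sum((i - mean_I) ** 2 for i in I)
intercept = mean_V - slope * mean_I

print(f"e.m.f. E is about {intercept:.2f} V")               # 1.50 V
print(f"internal resistance r is about {-slope:.2f} ohm")   # 0.50 ohm
```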

Cells in Series and Parallel


Cells generate electricity by means of chemical reactions. A battery is made up of one or more electrochemical cells. Every cell has two terminals, namely:
Anode: the terminal from which current flows out into the external circuit, i.e. it provides the channel for current to enter the circuit or device.
Cathode: the terminal at which current flows back out of the circuit and into the cell, i.e. it provides the return channel from the circuit or device.
There are two simple ways of connecting cells:
Series Connection: the components are connected one after another in a sequential chain.
Parallel Connection: the components are connected side by side across the same two points.

Cells in Series Connection


In series, cells are joined end to end so that the same current flows through each cell. When the cells are connected in series, the e.m.f. of the battery is equal to the sum of the e.m.f.s of the individual cells. Suppose we have multiple cells arranged in such a way that the positive terminal of one cell is connected to the negative terminal of the next, and so on; then we say that the cells are connected in series.

Equivalent EMF/Resistance of Cells in Series


If E is the overall e.m.f. of a battery made up of n cells, and E1, E2, E3, …, En are
the e.m.f.s of the individual cells, then E = E1 + E2 + E3 + … + En.
Similarly, if r1, r2, r3, …, rn are the internal resistances of the individual cells, then the internal
resistance of the battery is equal to the sum of the internal resistances of the individual
cells, i.e. r = r1 + r2 + r3 + … + rn.


Cells in Series
In a series arrangement, cells are connected end-to-end, with the positive terminal of one cell connected to the negative terminal of the next cell. This
increases the total voltage of the battery pack while keeping the current the same as that of a single cell.
Advantages of Cells in Series
Increased Voltage: The total voltage of the series connection is the sum of the individual cell voltages. For example, three 1.5V cells in series provide
a total of 4.5V.
Useful for applications requiring higher voltages, like some power tools or motor-driven devices.
Series connections are straightforward and do not require complex wiring or additional circuitry, making them easy to implement.

Disadvantages of Cells in Series


Single Cell Failure Affects the Whole Chain: If one cell fails or is depleted, it breaks the circuit, and the entire battery stops working. This limits the
reliability of the series configuration, especially if using cells with different capacities.
Cells in series must have the same capacity; otherwise, cells with lower capacities will discharge faster and potentially get damaged, affecting overall
performance.

Cells in Parallel Connection


Cells are in parallel combination if the current is divided among the various cells. In a
parallel combination, all the positive terminals are connected together and all the
negative terminals are connected together.
Equivalent EMF/Resistance of Cells in Parallel
If the e.m.f. of each cell is identical, then the e.m.f. of a battery made up of n cells connected in parallel is equal to the e.m.f. of each cell. The resultant internal resistance of the combination is given by 1/r = 1/r1 + 1/r2 + 1/r3 + … + 1/rn.

Equivalent EMF/Resistance of Cells in Series and Parallel


Assume the e.m.f. of each cell is E and the internal resistance of each cell is r. When n cells are connected in series, the total e.m.f. of the battery is Et = nE and the equivalent internal resistance is rt = nr. When m identical cells are connected in parallel, the equivalent internal resistance of the parallel battery is given by 1/rt = m/r, i.e. rt = r/m (while the e.m.f. remains E).
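A minimal sketch of these series and parallel rules, using hypothetical cell values:

```python
# Equivalent e.m.f. and internal resistance for identical cells
# (E = 1.5 V and r = 0.3 ohm per cell are hypothetical values).
E, r = 1.5, 0.3

# n cells in series: e.m.f.s add and internal resistances add.
n = 3
E_series, r_series = n * E, n * r

# m identical cells in parallel: e.m.f. unchanged, internal resistance divided by m.
m = 3
E_parallel, r_parallel = E, r / m

print(f"series:   {E_series} V, {r_series:.1f} ohm")      # 4.5 V, 0.9 ohm
print(f"parallel: {E_parallel} V, {r_parallel:.1f} ohm")  # 1.5 V, 0.1 ohm
```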
Cells in Parallel
In a parallel arrangement, all positive terminals are connected together, and all negative terminals are connected together. This increases the current
capacity while keeping the voltage the same as that of a single cell.
Advantages of Cells in Parallel
Increased Current Capacity: The total current output is the sum of the currents from each cell, making parallel connections ideal for applications
that need higher current, like high-power lights or appliances. Extends battery life by providing a larger total charge (amp-hour rating).
If one cell fails, the others can still provide power, allowing the device to continue functioning.
Ideal for applications requiring long-term reliability, as individual cell failures have less impact on the circuit.


Disadvantages of Cells in Parallel


The voltage remains the same as that of a single cell, which may be insufficient for devices requiring higher voltages. Not suitable for applications
requiring a voltage boost.
Parallel-connected cells may require special balancing during charging to avoid overcharging or undercharging certain cells. Without proper
management, cells can discharge unevenly, leading to inefficiency and shortened cell lifespan.

4.2 VOLTAGE, RESISTANCE AND OHM’S LAW


Learning Outcomes
a) Understand electrical resistance, how it is measured, its relationship to current and voltage, and the factors that affect it (k, u, s)
b) Know the function and use of a diode, transistor, thermistor, LDR, LED and potentiometer (k, s)
Electrical work and voltage
Electrical Energy
Electrical energy is the work done in moving an electric charge by an electric force. The SI unit of electrical energy is the Joule (J). This electrical
energy is often accompanied by a rise in temperature, so this energy may be given out as heat energy. This explains why wires become hot when electricity
passes through them. As the current is switched on, electrons start moving through the wire. Due to the resistance of the wire, the electrons are opposed as they
move and they collide with the molecules of the wire. They lose some of their kinetic energy to the molecules of the wire, which causes a rise in
temperature (heat energy).

Electrical Power
This is the rate of doing work on a charged particle. The SI unit of electrical power is the watt (W).
Calculating electrical energy is essential for understanding power consumption in various applications. The fundamental equation for electric energy
is E = P × t, where E represents energy in kilowatt-hours (kWh), P is power in kilowatts (kW), and t is time in hours. This formula allows users to
determine how much energy is consumed over a specific period, making it crucial for managing electricity costs. To further break it down, one
kilowatt-hour equals 1,000 watts used for one hour. This means that if an appliance operates at 1 kW for one hour, it consumes 1 kWh of energy.
Additionally, electrical power can be calculated by multiplying voltage (in volts) by current (in amps), P = VI, providing another method to assess
energy consumption in electrical circuits.
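A short, hypothetical example of the E = P × t calculation (the conversion 1 kWh = 3.6 × 10⁶ J is standard):

```python
# Energy consumed: E = P * t (in kWh when P is in kW and t is in hours)
power_kw = 1.0    # a 1 kW appliance (hypothetical)
hours = 3.0

energy_kwh = power_kw * hours
energy_joules = energy_kwh * 3.6e6    # 1 kWh = 3.6 x 10^6 J
print(energy_kwh, "kWh =", energy_joules, "J")   # 3.0 kWh = 10800000.0 J
```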

Electrical resistance
Electrical resistance is the opposition encountered by the flow of charge, which
can be influenced by the material's properties, temperature, and physical
dimensions. The unit of measurement for resistance is the ohm (Ω), and it plays
a crucial role in determining how much current will flow for a given voltage,
as described by Ohm's Law. Different materials exhibit varying levels of
resistance; for instance, silver has the highest conductivity, followed by copper
and gold. This variance is essential in electrical engineering, as it helps in
selecting appropriate materials for wiring and components to optimize
performance and efficiency.


Types of resistors
Resistors are essential components in electronic circuits, serving to control current flow. They can
be categorized into two main types: fixed and variable resistors. Fixed resistors maintain a constant
resistance, while variable resistors allow for adjustable resistance levels, making them versatile for
various applications. Among the specialized types of resistors are thermistors, which change
resistance with temperature, and varistors, which protect circuits from voltage spikes.
Photoresistors, or light-dependent resistors (LDRs), adjust their resistance based on light exposure, making them ideal for light-sensing applications.
Surface mount resistors are compact and designed for modern electronic devices, facilitating efficient circuit design.
Fixed resistors are resistors with a specific, fixed value; they are one of the most widely used types of resistor and are used in electronic circuits to set the correct conditions in a circuit. Variable resistors consist of a fixed resistor element and a slider which taps onto the main resistor element. In simple terms, a variable resistor is a potentiometer with only 2 connecting wires instead of 3.

Factors affecting the resistance of a wire


The resistance of a wire is influenced by several key factors. Firstly, the length of the conductor: longer wires exhibit greater resistance because of the increased distance electrons must travel. Secondly, the cross-sectional area: a wire with a larger cross-sectional area allows more electrons to flow simultaneously, thereby reducing resistance. Thirdly, the material of the wire: different materials have varying resistivities, with metals like copper and aluminium offering lower resistance compared to insulators. Finally, temperature significantly affects resistance; as the temperature rises, the metal atoms vibrate more vigorously, impeding the flow of electrons and increasing resistance. In summary, the resistance of a wire is determined by its length, cross-sectional area, material composition, and temperature.
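These factors are summarised by the standard relation R = ρL/A (resistivity × length ÷ cross-sectional area). That formula is not quoted in the paragraph above, but it is consistent with it, and the minimal Python sketch below uses it with a typical handbook value for the resistivity of copper to show how length and thickness change the resistance.

```python
import math

def wire_resistance(resistivity, length_m, diameter_m):
    """R = rho * L / A for a wire of circular cross-section."""
    area = math.pi * (diameter_m / 2) ** 2
    return resistivity * length_m / area

# Typical handbook resistivity of copper in ohm metres; treat as illustrative.
RHO_COPPER = 1.7e-8

# Doubling the length doubles R; doubling the diameter quarters R.
print(f"1 m of 0.5 mm copper wire: {wire_resistance(RHO_COPPER, 1.0, 0.5e-3):.4f} ohm")
print(f"2 m of 0.5 mm copper wire: {wire_resistance(RHO_COPPER, 2.0, 0.5e-3):.4f} ohm")
print(f"1 m of 1.0 mm copper wire: {wire_resistance(RHO_COPPER, 1.0, 1.0e-3):.4f} ohm")
```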

To investigate the factors affecting the resistance of a wire


Conduct a practical experiment focusing on three main variables:
length, cross-sectional area, and material.
The resistance of a wire is directly proportional to its length; as
the length increases, so does the resistance. This relationship can
be demonstrated by measuring the resistance of wires of varying
lengths while keeping other factors constant. A wire with a larger
cross-sectional area will have lower resistance compared to a
thinner wire of the same material and length. This is because a
thicker wire allows more electrons to flow through it, reducing
resistance. The material of the wire significantly influences
resistance. Different materials have varying atomic structures,
affecting how easily electrons can move.


PART A (using nickel wire, SWG 28)
a) Measure a length of wire, L = 20.0 cm, and connect it in the circuit with the voltmeter across it, as shown.
b) Close the switch, then read and record both the ammeter reading, I, and the voltmeter reading, V.
c) Repeat the experiment, adjusting the length of the wire to L = 30.0, 40.0, 50.0, 60.0 and 70.0 cm.
d) Tabulate the results, including the values of V/I.
e) Plot a graph of V/I against L.
f) Draw a line of best fit. This should be a straight line that passes through the origin of the graph.
g) The line of best fit shows that the resistance is directly proportional to the wire length. NOTE: R = V/I.

PART B
a) Using the same experimental setup shown above,
b) connect another wire of SWG 28 and, using a length of L = 40.0 cm, repeat the same procedure, reading and recording both I and V.
c) Repeat procedure (b) with different wires of SWG 30 and SWG 32.
d) Tabulate your results, including values of V/I.
e) Plot a graph of V/I against SWG number.
f) Determine the slope and give your conclusions (a short data-processing sketch follows below).
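One possible way of processing the Part A readings is sketched below in Python: it works out R = V/I for each length and estimates the slope of a best-fit line through the origin. The readings in the lists are invented for illustration only; in the laboratory you would replace them with your own measured values.

```python
# Processing sketch for PART A: compute R = V/I for each length and
# estimate the slope of R against L for a line through the origin.
# The readings below are invented for illustration only.

lengths_cm = [20.0, 30.0, 40.0, 50.0, 60.0, 70.0]
voltages   = [0.44, 0.66, 0.88, 1.10, 1.32, 1.54]   # V (illustrative)
currents   = [0.20, 0.20, 0.20, 0.20, 0.20, 0.20]   # I (illustrative)

resistances = [v / i for v, i in zip(voltages, currents)]   # R = V / I

# Least-squares slope for a line constrained through the origin:
# slope = sum(L * R) / sum(L^2)
slope = sum(l * r for l, r in zip(lengths_cm, resistances)) / \
        sum(l * l for l in lengths_cm)

for l, r in zip(lengths_cm, resistances):
    print(f"L = {l:5.1f} cm   R = V/I = {r:.2f} ohm")
print(f"Slope (ohm per cm): {slope:.3f}  ->  R is proportional to L")
```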

Arrangement of resistors in circuits


Series arrangement of resistors
In a series arrangement, resistors are connected end-to-end, creating a
single path for current to flow. This means that the same current passes
through each resistor sequentially. The total resistance in a series circuit
is simply the sum of the individual resistances, which can be calculated
using the formula: 𝑅𝑡𝑜𝑡𝑎𝑙 = 𝑅1 + 𝑅2 + 𝑅3 + . . . + 𝑅𝑛. According to Ohm's Law,
the voltage drop across a resistor is proportional to its resistance, meaning larger resistors will
have a greater voltage drop. This arrangement is commonly used in applications where the same
current is required through multiple components, such as in string lights or certain types of sensors.
The following conclusions can be drawn from the above investigation: in a series circuit all the appliances work simultaneously when the switch is closed, and all of them stop working when the switch is open. If any one of the appliances goes out of order, the others stop working too. Because the bulbs were not glowing very brightly, it can be concluded that in series the appliances do not work to their full capacity. The same current flows through each resistor in series. Individual resistors in series do not receive the total source voltage, but divide it between them.


Example
Suppose the voltage output of the battery is 12.0 V, and resistances R1 = 1.00 Ω, R2 = 6.00 Ω and R3 = 13.0 Ω are connected in series. Determine (a) the total resistance, (b) the current flowing in the circuit, (c) the voltage drop across each resistor, and show that these add to equal the voltage output of the source, (d) the power dissipated by each resistor, and (e) the power output of the source, and show that it equals the total power dissipated by the resistors.
Solution
(a) The total resistance is the sum of the individual resistances: Rs = R1 + R2 + R3 = 1.00 Ω + 6.00 Ω + 13.0 Ω = 20.0 Ω.
(b) The current is found using Ohm’s law, V = IR. Entering the value of the applied voltage and the total resistance yields the current for the circuit: I = V/Rs = 12.0 V / 20.0 Ω = 0.600 A.
(c) The voltage drop across a resistor is given by Ohm’s law, V = IR. Entering the current and the value of each resistance yields V1 = IR1 = (0.600 A)(1.00 Ω) = 0.600 V, V2 = IR2 = (0.600 A)(6.00 Ω) = 3.60 V and V3 = IR3 = (0.600 A)(13.0 Ω) = 7.80 V. The three IR drops add to 12.0 V, as predicted: V1 + V2 + V3 = (0.600 + 3.60 + 7.80) V = 12.0 V.
(d) The easiest way to calculate the power in watts (W) dissipated by a resistor in a d.c. circuit is to use Joule’s law, P = IV, where P is electric power. In this case each resistor carries the same full current. Substituting Ohm’s law V = IR into Joule’s law gives the power dissipated by each resistor: P1 = I²R1 = (0.600 A)²(1.00 Ω) = 0.360 W, P2 = I²R2 = (0.600 A)²(6.00 Ω) = 2.16 W and P3 = I²R3 = (0.600 A)²(13.0 Ω) = 4.68 W. Power can also be calculated using either P = IV or P = V²/R, where V is the voltage drop across the resistor (not the full voltage of the source); the same values are obtained.
(e) The easiest way to calculate the power output of the source is to use P = IV, where V is the source voltage. This gives P = (0.600 A)(12.0 V) = 7.20 W. Note that the total power dissipated by the resistors is also 7.20 W, the same as the power put out by the source: P1 + P2 + P3 = (0.360 + 2.16 + 4.68) W = 7.20 W. Power is energy per unit time (watts), so conservation of energy requires the power output of the source to equal the total power dissipated by the resistors.
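The arithmetic of this worked example can be checked with a few lines of Python, as in the sketch below, which uses the same battery voltage and resistor values as the example.

```python
# Series circuit check using the values from the worked example above.
V_source = 12.0                  # battery voltage in volts
resistors = [1.00, 6.00, 13.0]   # R1, R2, R3 in ohms

R_total = sum(resistors)                     # series: R = R1 + R2 + R3
I = V_source / R_total                       # the same current flows everywhere
drops = [I * R for R in resistors]           # V = I * R for each resistor
powers = [I ** 2 * R for R in resistors]     # P = I^2 * R for each resistor

print(f"Total resistance: {R_total:.1f} ohm")
print(f"Circuit current:  {I:.3f} A")
print(f"Voltage drops:    {[round(v, 2) for v in drops]} V  (sum = {sum(drops):.1f} V)")
print(f"Powers:           {[round(p, 3) for p in powers]} W (sum = {sum(powers):.2f} W)")
print(f"Source power P = IV = {I * V_source:.2f} W")
```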

Parallel arrangement of resistors


Resistors are connected in parallel when their terminals are linked, allowing multiple pathways for current to flow. In this arrangement, the voltage
across each resistor remains constant, while the total current is the sum of the currents through each resistor. This configuration results in a lower
total resistance compared to individual resistors, making it an efficient choice for various electrical applications. When resistors are connected in
parallel, the total resistance (R_T) is given by the reciprocal of the sum of the reciprocals of each resistor’s resistance (R1, R2, R3, etc.): 1/R_T = 1/R1 + 1/R2 + 1/R3 + ⋯ + 1/Rn. This means that adding more resistors in parallel
decreases the overall resistance, which can be advantageous in circuits requiring higher current flow. The
parallel arrangement of resistors is crucial in electronics, allowing for increased current capacity and
flexibility in circuit design.
Resistors are in parallel when each resistor is connected directly to the voltage source by connecting wires
having negligible resistance. Each resistor thus has the full voltage of the source applied to it. Each resistor
draws the same current it would if it alone were connected to the voltage source (provided the voltage
source is not overloaded). For example, an automobile’s headlights, radio, and so on, are wired in parallel, so that they utilize the full voltage of the
source and can operate completely independently. The same is true in your house, or any building.
The following conclusions can be drawn from the above investigation:


In a parallel circuit all the appliances work independently; if one appliance goes out of order, the others continue working. This means that each appliance in a parallel circuit can be operated independently by its own switch. Since the bulbs glow brightly, each appliance gets enough electrical energy and hence works to its full capacity.
Example
Suppose the voltage output of the battery and the resistances in the parallel connection are the same as those previously considered in the series connection: V = 12.0 V, R1 = 1.00 Ω, R2 = 6.00 Ω and R3 = 13.0 Ω. Determine (a) the total resistance, (b) the total current, (c) the currents in each resistor, and show that these add to equal the total current output of the source, (d) the power dissipated by each resistor, and (e) the power output of the source, and show that it equals the total power dissipated by the resistors.
Solution
(a) The total resistance for a parallel combination of resistors is found from 1/R_P = 1/R1 + 1/R2 + 1/R3 = 1/1.00 + 1/6.00 + 1/13.0 = 1.2436 Ω⁻¹ (each intermediate answer is shown with an extra digit). Inverting gives R_P = 0.8041 Ω, or R_P = 0.804 Ω to the correct number of significant figures. R_P is, as predicted, less than the smallest individual resistance.
(b) The total current can be found from Ohm’s law, substituting R_P for the total resistance: I = V/R_P = 12.0 V / 0.804 Ω = 14.92 A. The current for each device is much larger than for the same devices connected in series (see the previous example), because a circuit with parallel connections has a smaller total resistance than the same resistors connected in series.
(c) The individual currents are easily calculated from Ohm’s law, since each resistor gets the full voltage: I1 = V/R1 = 12/1.00 = 12.0 A, I2 = V/R2 = 12/6.00 = 2.00 A and I3 = V/R3 = 12/13.0 = 0.92 A. The total current is the sum of the individual currents: I1 + I2 + I3 = 14.92 A. This is consistent with conservation of charge.
(d) The power dissipated by each resistor can be found using any of the equations relating power to current, voltage and resistance, since all three are known. Using P = V²/R, since each resistor gets the full voltage: P1 = 12²/1.00 = 144 W, P2 = 12²/6.00 = 24.0 W and P3 = 12²/13.0 = 11.1 W. The power dissipated by each resistor is considerably higher in parallel than when connected in series to the same voltage source.
(e) The total power can be calculated in several ways. Choosing P = IV and entering the total current yields P = IV = (14.92 A)(12.0 V) = 179 W. The total power dissipated by the resistors is also 179 W: P1 + P2 + P3 = 144 W + 24.0 W + 11.1 W = 179 W. This is consistent with the law of conservation of energy. Note that both the currents and the powers in parallel connections are greater than for the same devices in series.
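The same check can be made for the parallel case. The Python sketch below reproduces the calculations of the worked example above using the same values.

```python
# Parallel circuit check using the values from the worked example above.
V_source = 12.0
resistors = [1.00, 6.00, 13.0]

R_parallel = 1.0 / sum(1.0 / R for R in resistors)   # 1/Rp = sum of 1/Ri
currents = [V_source / R for R in resistors]         # each branch gets full voltage
powers = [V_source ** 2 / R for R in resistors]      # P = V^2 / R per branch
I_total = sum(currents)

print(f"Parallel resistance: {R_parallel:.3f} ohm")   # about 0.804 ohm
print(f"Branch currents:     {[round(i, 2) for i in currents]} A")
print(f"Total current:       {I_total:.2f} A")
print(f"Total power:         {sum(powers):.0f} W  (also IV = {I_total * V_source:.0f} W)")
```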

ACTIVITY
Two resistors connected in series (R1,R2) are connected to two resistors that are connected in parallel (R3,R4). The series-parallel combination is
connected to a battery. Each resistor has a resistance of 10.0 Ω. The wires connecting the resistors and the battery have negligible resistance. If a current of 2.0 A runs through resistor R1, what is the voltage supplied by the voltage source?


Internal resistance of a cell


The internal resistance of a cell is a factor that affects its performance and efficiency. It refers
to the resistance encountered by the flow of current within the cell's electrolyte and
electrodes. This resistance can lead to a voltage drop when the cell is under load, meaning the
voltage output is lower than the electromotive force (EMF) when no current is flowing.
Typically, the internal resistance of a lead-acid cell is relatively low, while dry cells often
exhibit higher internal resistance. For instance, an AA dry cell generally has an EMF of about
1.5 volts and an internal resistance around 1 ohm.
A cell has two terminals, positive and negative. The positive terminal is known as the cathode and the negative terminal as the anode; both are the electrodes of the cell. Two or more cells combined in series or in parallel form a battery. Internal resistance is measured in ohms. When the terminals of the cell are connected by a wire to make a closed circuit, electric current flows from the positive terminal of the cell to the negative terminal through the wire, and at the same time positive ions in the electrolyte flow from lower to higher potential, so some resistance is offered to the current flow.

Internal Resistance
Internal resistance is the opposition to the flow of current that arises, when the circuit is closed, from the electrolyte and electrodes present in the cell or battery. It is present within the cell itself and is measured in ohms. A fresh cell has a low internal resistance, but this increases with continuous use. The potential difference across the terminals drops as the current flows.

Formula for Internal Resistance of a Cell


ℰ: emf of a cell (in volts),
V: Potential difference across a cell (in Volts),
I: Current flowing through a conductor,
r: Internal Resistance, R: External resistance.
Emf is the work done by the cell to carry a unit charge through the closed circuit.
So, it is the sum of the work done to carry the charge through the conducting wire
with external resistance(R) and cell with internal resistance (r).
ε = V + V′ …………(1). By Ohm’s law, V = IR ………..(2) and V′ = Ir ……………(3).
From equations (2) and (3): ε = IR + Ir = I(R + r), so I = ε/(R + r) ……….(4).
Substituting (4) into (2): V = IR = Rε/(R + r). Now, from equation (1), V = ε − V′ = ε − Ir, so Ir = ε − V and r = (ε − V)/I.
Hence V = −Ir + ε, where V is the potential difference across the resistor R, ε is the e.m.f. of the cell and r is the internal resistance. This is in the form y = mx + c (the equation of a straight-line graph).
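The relations ε = I(R + r) and V = ε − Ir can be illustrated numerically. The Python sketch below assumes an e.m.f. of 1.5 V and an internal resistance of 1 Ω (illustrative values only) and shows how the terminal potential difference falls as more current is drawn.

```python
# Terminal p.d. of a cell with internal resistance, using epsilon = I(R + r)
# and V = epsilon - I*r. The e.m.f. and resistances are illustrative values.

emf = 1.5   # e.m.f. of the cell in volts (e.g. a dry cell)
r = 1.0     # internal resistance in ohms (assumed)

for R in [10.0, 5.0, 2.0, 1.0, 0.5]:     # external load resistances in ohms
    I = emf / (R + r)                    # current drawn from the cell
    V = emf - I * r                      # terminal potential difference
    print(f"R = {R:4.1f} ohm  I = {I:.3f} A  terminal V = {V:.3f} V")

# As R decreases, the current rises and the terminal p.d. falls below the e.m.f.
```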


Experiment to determine the Internal resistance of a cell


a) With the switch K open, read and record the voltmeter reading and call it E₀.
b) Close the switch K and adjust the value of the variable resistor R until the voltmeter reading is V = 0.20 V.
c) Read and record the ammeter reading, I.
d) Repeat the procedure for different values of V = 0.30, 0.40, 0.50, 0.60, 0.70 and 0.80 V.
e) Tabulate the results in a table of results.
f) Plot a graph of V against I.
g) Determine the slope and make conclusions.
Note: At the point where the line meets the current axis, (the x-axis intercept) the
maximum current is drawn from the cell. This happens when the load
resistance, R=0Ω. This would be achieved by short circuiting the cell (this should
be avoided as the cell could overheat and it is potentially dangerous).
The maximum current is called the short circuit current, ISC. To find the
internal resistance of the cell the gradient of the line is calculated. This has a
negative value. The internal resistance of the cell is the same value but without
the negative sign. For example, if the slope of the line is −4 then the internal
resistance is 4Ω. The internal resistance can also be found by dividing the EMF, E, by the short circuit current, ISC.

Relationship between current, voltage and resistance


The relationship between voltage, current and resistance is described by Ohm’s law. The equation I = V/R tells us that the current, I, flowing through a circuit is directly proportional to the voltage, V, and inversely proportional to the resistance, R.
Ohm’s law states that the voltage across a conductor is directly proportional to the current flowing through it, provided all physical conditions and
temperatures remain constant. In certain components, increasing the current raises the temperature. An example of this is the filament of a light bulb,
in which the temperature rises as the current is increased. In this case, Ohm’s law cannot be applied. The lightbulb filament violates Ohm’s Law.

Experimental Verification of Ohm’s Law


Ohm’s Law can be verified by the following experiment.
Apparatus: resistor, ammeter, voltmeter, battery, plug key and rheostat.
Procedure
a) Initially, the key K is closed and the rheostat is adjusted to get the
minimum reading in ammeter A and voltmeter V.
b) The current in the circuit is increased gradually by moving the
sliding terminal of the rheostat. During the process, the current flowing in the circuit and the corresponding value of potential difference
across the resistance wire R are recorded.
c) Different sets of values of voltage, V, and current, I, are obtained and a table of results tabulated.


d) A graph of the current (I) against the potential difference (V) is plotted, it will be a straight line. This shows that the current is proportional
to the potential difference.

Water pipe analogy for Ohm’s law


The relationship between current, voltage and resistance is expressed by Ohm’s Law. This states that the
current flowing in a circuit is directly proportional to the applied voltage and inversely proportional to
the resistance of the circuit, provided the temperature remains constant.
Ohm’s Law: Current (I) = Voltage (V) / Resistance (R)
To increase the current flowing in a circuit, the voltage must be increased, or the resistance
decreased. A simple electrical circuit is depicted in Figure (a). The flow of electricity through this
circuit is further illustrated by analogy to the pressurized water system in Figure (b). In the
electrical circuit the power supply generates electrical pressure (voltage), equivalent to the pump
creating water pressure in the pipe; the current is equivalent to the rate of flow of water; and the
light bulb provides the resistance in the same way as the restriction in the water system. The
ammeter is equivalent to the flow meter and the voltmeter measures the difference in electrical
pressure each side of the restriction in the water system.
There will be a drop in voltage due to the energy used up in driving the current through the light
bulb, which has a higher resistance than the wire in the circuit. Similarly, the water pressure at (A) will be less than at (B). The overall resistance of
an object depends on a number of properties including its length, cross-sectional area and the type of material. The longer a conductor, the greater
its resistance; for example, a two metre wire has twice the resistance of a one metre wire of similar properties. The larger the cross-section of a
conductor, then the lower its resistance: overhead power cables have a much lower resistance than a lamp flex of the same length. Different materials
also have different abilities to conduct electricity. Metals conduct very well but materials such as ceramics or glass do not usually conduct electricity
at all and are known as insulators.
Animals contain a high proportion of liquid that will conduct electricity well; however, skin, fat, bone and hair are poor conductors. Electrical current
will take the path of least resistance through animal tissue, with the result that only a small proportion of the measured current will penetrate the
brain. Animals with heavy fleeces, thick skin, fat layers or thick skulls will have a high electrical resistance. Table 1 shows how the relationship
between current, voltage and resistance differs when
stunning sheep of different physical condition. In this
example, the minimum current required for an
effective stun is one amp.


Ohmic and Non Ohmic conductors


Ohmic conductors, such as resistors and standard wires, adhere to Ohm's Law, which states that the current through a conductor is directly proportional
to the voltage across it. This results in a linear current-voltage (I-V) relationship, represented graphically as a straight line. In contrast, non-ohmic
conductors do not follow Ohm's Law. Their resistance can vary with changes in voltage or current, leading to a non-linear I-V relationship. Examples
of non-ohmic conductors include diodes and thermistors, which exhibit unique behaviors under different electrical conditions.

Thermistors
A thermistor is a type of semiconductor resistor whose resistance varies significantly with
temperature changes. The term "thermistor" combines "thermal" and "resistor,"
highlighting its primary function as a temperature-sensitive device. Unlike standard
resistors, thermistors exhibit a much stronger response to temperature fluctuations,
making them ideal for precise temperature measurements. Thermistors can be classified
into two main types: Negative Temperature Coefficient (NTC) and Positive Temperature
Coefficient (PTC). NTC thermistors decrease in resistance as temperature increases, while
PTC thermistors do the opposite. This unique property allows thermistors to be used in
various applications, including temperature sensing, circuit protection, and in medical devices for monitoring body temperature. In recent years, the
demand for disposable thermistors has increased, particularly in hospital settings, where they are favored for their convenience and hygiene. Leading
companies in the thermistor market include Vishay, Littelfuse, and TDK, reflecting the growing importance of these devices in modern technology.

Transistors
A transistor is a crucial
semiconductor device that
functions as a switch or
amplifier for electronic signals.
It regulates the flow of electric
current or voltage, making it a
fundamental building block of
modern electronics.
Transistors can control the flow of electricity in circuits, allowing them to amplify signals or act as on/off switches. There are various types of
transistors, including Bipolar Junction Transistors (BJTs) and Field-Effect Transistors (FETs), each serving specific applications. The working principle
of a transistor involves the manipulation of charge carriers (electrons and holes) within a semiconductor material, enabling it to either allow or block
current flow based on input signals. Transistor diagrams typically illustrate the three terminals: the collector, emitter, and base for BJTs, or the source,
gate, and drain for FETs. Understanding transistors is essential for grasping the operation of countless electronic devices, from simple circuits to
complex computing systems.

Diodes
A diode is a two-terminal electronic component that conducts electricity primarily in one direction: it has low resistance to current flow in one direction and high resistance in the other. A diode is a semiconductor device that functions as a one-way switch for electrical current, allowing it to flow primarily in one
direction. Typically made from silicon, diodes have two terminals: the anode (positive) and the cathode (negative). When a voltage is applied in the


forward direction, the diode conducts electricity; however, it blocks current when
the voltage is reversed, demonstrating its asymmetric conductance. Diodes are widely
used in various applications, including rectification in power supplies, signal
modulation, and protection circuits. They are essential components in converting
alternating current (AC) to direct current (DC) and are found in devices such as radios,
televisions, and computers. Testing diodes is crucial for ensuring their functionality.
A multimeter can be used to check for proper conduction in the forward direction
and to confirm that the diode blocks current in the reverse direction.
Diodes can be made of either of the two semiconductor materials, silicon and
germanium. When the anode voltage is more positive than the cathode voltage,
the diode is said to be forward-biased, and it conducts readily with a relatively
low-voltage drop. Likewise, when the cathode voltage is more positive than the
anode, the diode is said to be reverse-biased. The arrow in the diode symbol
represents the direction of conventional current flow when the diode conducts.

Applications and Uses of diodes


The most basic function would be changing AC current to DC current by removing some part of the signal. This functionality would make them
rectifiers. They are used in electrical switches and are used in surge protectors because they can prevent a spike in the voltage. Diodes help in
performing digital logic. Millions of diodes are used similar to logic gates and used in modern processors. They are used for isolating signals from a
supply. For example, one of the major uses of diodes is to remove negative signals from AC current. This is known as signal demodulation. This
function is basically used in radios as a filtering system in order to extract radio signals from a carrier wave. They are also used in creating power
supplies and voltage doublers. Using a full wave rectifier will help to deliver a more stable voltage. Combination of a diode with a capacitor will help
to multiply a small AC voltage and create a very high voltage. Light-emitting diodes (LEDs) are used in sensors, laser devices and many other light-illumination devices. Zener diodes are used as voltage regulators, varactors are used in electronic tuning, and varistors are used in suppressing surges on AC lines. Diodes are also basic building blocks of transistors and op-amps.

LDR
An LDR (Light Dependent Resistor), as the name states, is a special type of resistor that works on the principle of photoconductivity, which means that its resistance changes according to the intensity of light falling on it. Its resistance decreases with an increase in the intensity of light. It is often used as a light sensor, in light meters and automatic street lights, and wherever light sensitivity is needed.
LDR is also known as a Light Sensor. LDR are usually available in 5mm, 8mm, 12mm,
and 25mm dimensions. A Light Dependent Resistor (LDR), also known as a
photoresistor, is a passive electronic component that alters its resistance based on
light intensity. When exposed to light, the resistance of an LDR decreases, allowing more current to flow through the circuit. This property makes
LDRs essential in various applications, including light sensing, automatic lighting systems, and camera exposure controls. LDRs are commonly used in
devices that require light detection, such as streetlights that turn on at dusk and off at dawn. They can also be found in alarm systems and light


meters, where they help in monitoring ambient light levels. The simplicity and effectiveness of LDRs make them a popular choice for hobbyists and
professionals alike. Their ability to adjust resistance based on light intensity enables innovative solutions across various fields.
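A common way of using an LDR is in a voltage divider with a fixed resistor, so that a voltage reading indicates the light level. The Python sketch below estimates the LDR's resistance from such a reading; the supply voltage, fixed resistor and example readings are all hypothetical values, and the divider arrangement itself is an assumed circuit rather than one described in this text.

```python
# Sketch: estimating an LDR's resistance from a voltage-divider reading.
# Assumes the LDR is in series with a fixed resistor across a supply and
# that the voltage across the fixed resistor is measured (e.g. by an ADC).
# All component values here are hypothetical.

V_SUPPLY = 5.0       # supply voltage in volts
R_FIXED = 10_000.0   # fixed series resistor in ohms

def ldr_resistance(v_measured):
    """Return the LDR resistance implied by the measured divider voltage."""
    if v_measured <= 0 or v_measured >= V_SUPPLY:
        raise ValueError("measured voltage must lie between 0 V and the supply")
    # Divider: v_measured = V_SUPPLY * R_FIXED / (R_FIXED + R_ldr)
    return R_FIXED * (V_SUPPLY - v_measured) / v_measured

for v in [0.5, 2.5, 4.5]:   # dark, dim, bright (illustrative readings)
    print(f"V = {v:.1f} V  ->  LDR resistance ~ {ldr_resistance(v):,.0f} ohm")
```

The printed values show the expected trend: a high LDR resistance in the dark and a low resistance in bright light.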

LED
Light Emitting Diodes (LEDs) are semiconductor devices that convert electrical energy
into light through a process called electroluminescence. When an electric current flows
through the LED, it excites electrons in
the semiconductor material, causing
them to release energy in the form of
light. This technology operates in
contrast to photodiodes, which
convert light into electricity. LEDs are
known for their energy efficiency,
producing light up to 90% more
efficiently than traditional incandescent bulbs. This efficiency not only reduces energy consumption
but also leads to lower electricity bills and a smaller carbon footprint. LEDs have a longer lifespan,
often lasting tens of thousands of hours, making them a cost-effective lighting solution. As the demand
for sustainable and efficient lighting solutions grows, LEDs are becoming increasingly popular in
residential, commercial, and industrial applications. Their versatility and performance make them a
key player in the future of lighting technology.

Potentiometer
A potentiometer is a three-terminal variable resistor that plays a crucial role in controlling
electrical devices. Common applications include volume controls in audio equipment and
speed regulation in fans. By adjusting the position of a wiper along a resistive element,
users can modify the resistance and, consequently, the voltage output in a circuit. In
addition to its control functions, a potentiometer is instrumental in measuring unknown
voltages by comparing them with known references. This capability makes it valuable in
determining electromotive force (EMF) and internal resistance in various electronic
applications. Recent advancements have led to the development of self-powered
potentiometric sensors, which can monitor changes in concentration without external
power. These innovations highlight the versatility of potentiometers, extending their use
into fields such as wearable technology for real-time monitoring of ions in human sweat,
showcasing their importance in both everyday devices and cutting-edge research.
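When a potentiometer is used as a voltage divider with a negligible load, the output voltage is simply the input voltage multiplied by the fraction of the track between the wiper and the reference end. The Python sketch below illustrates this; the supply voltage and track resistance are illustrative values.

```python
# Sketch: output voltage of a potentiometer used as a voltage divider.
# The wiper splits the resistive track into two parts; the fraction x
# (0 = one end, 1 = the other) sets the output. Values are illustrative,
# and the load on the wiper is assumed to be negligible.

V_IN = 9.0            # supply across the full track, in volts
R_TRACK = 10_000.0    # total track resistance in ohms (nominal)

def pot_output(x):
    """Return the wiper voltage for wiper position x (0.0 to 1.0)."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("wiper position must be between 0 and 1")
    r_lower = x * R_TRACK                 # resistance between wiper and 0 V
    return V_IN * r_lower / R_TRACK       # simple divider: Vout = Vin * x

for position in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"wiper at {position:.2f} -> output {pot_output(position):.2f} V")
```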


4.2 ELECTROMAGNETIC EFFECTS


Learning Outcomes
a) Investigate the behaviour of magnets and magnetic fields(s)
b) Understand that a current carrying conductor produces a magnetic field that
can be detected. (u,s)
c) Understand the application of electromagnets in devices such as motors,
bells, and generators (u,s)
d) Understand the difference between a.c. and d.c (u)
e) Know how a.c. and d.c. can be inter-converted using rectifiers and inverters
(k)
f) Understand the action and applications of transformers (u, s,v/a)

ELECTROMAGNETISM
Electromagnetism is a fundamental branch of physics that explores the interactions between electrically charged particles through electromagnetic
fields. This interaction is crucial as it governs the behavior of charged particles, influencing everything from atomic structure to the forces that hold
matter together. The electromagnetic force is responsible for keeping electrons in orbit around atomic nuclei, making it essential for the stability of
matter. The relationship between electricity and magnetism is a key aspect of
electromagnetism. Moving electric charges generate magnetic fields, which can, in
turn, exert forces on other moving charges. This principle underlies many
technologies, including electric motors and generators, showcasing the practical
applications of electromagnetic theory. As one of the four fundamental forces of
nature, electromagnetism plays a vital role in various scientific fields, including
materials science and superconductivity

Magnetic fields
Magnetic fields are regions around magnetic materials or moving electric charges where magnetic forces
are exerted. They arise whenever electric charges are in motion, with the strength of the field increasing
as more charge is set in motion. This phenomenon is fundamental in physics, influencing the behavior
of charged particles and electric currents. The force exerted by magnetic fields can cause charged
particles to move in circular or helical paths. This principle is crucial in various applications, from
electric motors to particle accelerators.
Magnetic fields play a significant role in
astronomical phenomena, such as the
formation of stars and planets, by influencing the dynamics of charged particles in space.

Magnetic Field of a Current Carrying Conductor (Wire)


When an electric current flows through a wire, it creates a magnetic field that forms
concentric circles around the wire. The interaction between a magnetic field and a
current-carrying conductor results in a magnetic force. This force is crucial in various


applications, including electric motors and generators, where it enables the conversion of electrical energy into mechanical energy. The direction of the magnetic field can be determined using the right-hand rule, which states: “If a current-carrying conductor is held in the right hand with the thumb kept straight and pointing in the direction of the electric current, then the fingers will curl in the direction of the magnetic field lines.”

Magnetic field due to current through a circular loop
When an electric current flows through a circular loop, it generates a magnetic field
that is most concentrated at the center of the loop. The magnetic field lines form
closed loops around the wire, and their direction can be determined using the right-
hand thumb rule: if the thumb points in the direction of the current, the curled
fingers indicate the direction of the magnetic field. The strength of the magnetic
field at the center of the loop is directly proportional to the current and inversely
proportional to the radius of the loop. Every point on the current-carrying wire gives rise to a magnetic field of concentric circles around it; these circles become larger and larger as we move away from the wire, and by the time we reach the centre of the circular loop the arcs of these big circles appear as straight lines.

Magnetic field due to current flowing in a solenoid


A solenoid is essentially a coil of wire, and when an electric current passes through it, a magnetic
field is created. Inside the solenoid, the magnetic field is nearly uniform and parallel to the axis of
the coil. This uniformity makes solenoids particularly useful in various applications, such as
electromagnets and inductors. When a ferromagnetic material, like iron, is placed inside the
solenoid, it becomes magnetized, significantly enhancing the magnetic field strength. The magnetic
field due to a solenoid is crucial for advancements in technology, including electric motors and
magnetic resonance imaging (MRI).

Force on a Current Carrying Conductor in a magnetic field


Procedure
Take a small aluminium rod about 5 cm long and label its ends A and B. Using two connecting wires, hang it horizontally from a stand, as shown in the figure. Place a strong horseshoe magnet so that the rod lies between the two poles with the magnetic field directed upwards; to do this, put the magnet's north pole vertically below and the south pole vertically above the aluminium rod (as shown in the figure). Connect the aluminium rod, battery, switch and rheostat in series (as in the figure) and allow a current to flow through the aluminium rod from end B to end A.
Observation: The rod is displaced towards the left. Now reverse the direction of the current through the rod (from end A to end B) and notice the direction of its displacement: the rod is now displaced towards the right.


Conclusion: The displacement of the aluminium rod suggests that a force is exerted on the current-carrying conductor when it is placed in a magnetic
field. It also shows that when the direction of the current through the conductor is reversed, the direction of force is also reversed.
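The size of this force is usually written as F = BIL sin θ for a straight conductor of length L carrying current I at an angle θ to a field of flux density B. That formula is not quoted in the passage above, but it is the standard relation behind the observation, and the Python sketch below uses it with illustrative values.

```python
# Sketch: magnitude of the force on a straight current-carrying conductor
# in a magnetic field, using the standard relation F = B * I * L * sin(theta).
# Field strength, current and rod length are illustrative values.

import math

def force_on_conductor(B, I, L, angle_deg=90.0):
    """F = B * I * L * sin(angle between the current and the field)."""
    return B * I * L * math.sin(math.radians(angle_deg))

B = 0.2    # magnetic flux density in tesla (assumed)
I = 1.5    # current in amperes (assumed)
L = 0.05   # length of rod in the field, 5 cm

print(f"Force on the rod: {force_on_conductor(B, I, L):.4f} N")
# Reversing the current (I -> -I) reverses the sign, i.e. the direction,
# of the force, matching the observation in the experiment above.
print(f"Reversed current: {force_on_conductor(B, -I, L):.4f} N")
```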

Magnetic field and number of turns of the coil


The magnitude of the magnetic field adds up as the number of turns of the coil increases. If there are n turns in the coil, the magnitude of the magnetic field will be n times the field due to a single turn. The factors affecting the field at the centre of a circular coil are summarised below (a short calculation sketch follows the list).
 The radius of the coil: the strength of the magnetic field is inversely proportional to the radius of the coil. If the radius increases, the magnetic field strength at the centre decreases.
 The number of turns in the coil: as the number of turns increases, the magnetic field strength at the centre increases, because the current in each circular turn has the same direction, so the field due to each turn adds up.
 The strength of the current flowing in the coil: as the current increases, the strength of the magnetic field also increases.
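For a flat circular coil these proportionalities are captured by the standard result B = μ₀NI/(2r) for the field at the centre. This formula is not quoted above, but it agrees with the three points listed, as the Python sketch below illustrates with made-up values.

```python
# Sketch: field at the centre of a flat circular coil, B = mu0 * N * I / (2 * r).
# This standard formula matches the proportionalities listed above
# (B increases with N and I, and decreases with r). Values are illustrative.

import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, in T m/A

def field_at_centre(turns, current_a, radius_m):
    return MU0 * turns * current_a / (2 * radius_m)

print(field_at_centre(turns=1,  current_a=2.0, radius_m=0.05))   # a single turn
print(field_at_centre(turns=50, current_a=2.0, radius_m=0.05))   # 50 times stronger
print(field_at_centre(turns=50, current_a=2.0, radius_m=0.10))   # larger radius, weaker field
```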

The Electric Motor


An electric motor is a vital machine that transforms electrical energy into mechanical
energy, primarily in the form of rotational motion. This conversion is achieved
through the interaction of magnetic fields and electric currents, allowing electric
motors to power a wide range of applications, from household appliances to electric
vehicles. There are two main types of electric motors: direct current (DC) motors and
alternating current (AC) motors. DC motors utilize direct current to generate motion,
while AC motors operate on alternating current.

Operation of a simple Electric Motor


A simple electric motor operates on the principles of
electromagnetism, converting electrical energy into
mechanical motion. At its core, the motor consists of a coil
of wire, a power source (like a battery), and a permanent
magnet. When electricity flows through the coil, it
generates a magnetic field, turning the coil into an
electromagnet. When current is supplied to the armature,
one end becomes a north pole, and the other end a south
pole, magnetically. Since the armature lies in a magnetic field, there are magnetic forces acting on the armature, such that the north pole end of the
armature is attracted to the south pole of the external field. A similar attractive force will exist between the south pole end of the armature and the
north pole of the field. These two forces create a torque that causes the armature to rotate. The split ring commutator causes the armature current to
reverse direction when the armature turns and the brushes contact alternate ring segments, which causes the polarity of the armature to reverse.
Thus, as the north pole end of the armature swings by the south pole of the field, it reverses to a south pole itself and is now repelled by the field,


and pushed toward the other pole of the field. The continual reversal of the polarity of the armature every time it has turned through 180° is what permits the rotation to continue.

Electromagnetic Induction
Electromagnetic induction is the generation of electromotive force (emf) across a conductor when it is exposed to a changing magnetic field. This
phenomenon occurs when there is relative motion between a magnetic field and an electrical conductor, leading to the production of voltage and,
consequently, electric current in a closed circuit. The principle of electromagnetic induction is the backbone of various technologies, including electric
generators and motors. These devices convert mechanical energy into electrical energy and vice versa, showcasing the practical applications of this
principle in everyday life. Faraday's Law of Induction quantifies this relationship, stating that the induced emf in a circuit is directly
proportional to the rate of change of the magnetic field. In addition to its applications in energy generation, electromagnetic induction is
also utilized in fields such as geophysics and archaeology, where it aids in locating unmarked graves and mapping cemeteries.

Electromagnets: An electromagnet is a type of magnet in which the magnetic field is


produced by an electric current. Unlike permanent magnets, electromagnets can be turned
on and off and their strength can be varied by changing the current flowing through them.
The principle behind electromagnets is Ampère's Law, which states that an electric current
flowing through a conductor produces a magnetic field around it.

Making an electromagnet
Materials required: insulated copper wire, an iron core, a power source, a switch (optional, to control the flow of current), and connecting wires.
Procedures:
a) Take a piece of soft iron, such as a nail or iron rod. The core helps concentrate
the magnetic field generated by the coil of wire.
b) Take the insulated copper wire and begin wrapping it tightly around the iron core.
c) Make multiple turns of the wire around the core to form a coil. The more turns in the coil, the stronger the electromagnet will be.
d) Ensure the wire is neatly wound, as overlapping wires can reduce efficiency. Leave both ends of the wire free to connect to the power
source.
e) Attach the two free ends of the wire to the terminals of the battery or power supply.
f) Connect the switch between one end of the wire and the battery to control the current flow.
Observation
When the switch is closed; When the current flows through the coil, the iron core becomes magnetized, and the electromagnet can attract
ferromagnetic materials such as paperclips or nails. Disconnect the power source or turn off the switch to stop the flow of current. The iron core will
lose its magnetism almost instantly, as it is not a permanent magnet.
Principle of Operation of Electromagnets
According to Ampere’s Law, when electric current flows through a conductor, it creates a circular magnetic field around the wire. Coiling the wire
increases the strength of the magnetic field because the fields from each turn of the wire combine, creating a stronger overall effect. The iron core
becomes magnetized due to the alignment of its magnetic domains, significantly amplifying the magnetic field produced by the coil.


Applications of Electromagnets
 The most significant uses is in electric motors and generators, where they convert electrical energy into mechanical energy and vice versa.
This principle is foundational in powering everything from household appliances to large industrial machines.
 In addition to motors, electromagnets are integral to devices like relays, electric bells, and buzzers, which rely on magnetic fields to operate.
 They are also essential in audio technology, found in headphones and loudspeakers, where they convert electrical signals into sound.
Electromagnets are utilized in data storage devices, enhancing the efficiency of information retrieval and storage.
 In healthcare, powerful electromagnets are employed in MRI machines, generating strong magnetic fields for imaging.

Uses in Home Appliances: Most of the electric appliances used in the home rely on electromagnetism as their basic working principle. Examples include the electric fan, electric doorbell, induction cooker and magnetic locks. In an electric fan, electromagnetic forces keep the motor rotating, which makes the blades of the fan rotate. In an electric doorbell, when the button is pressed the coil is energized by the electromagnetic force and the bell sounds.
Uses in Medical Field: The uses of electromagnets are also seen in the medical field. MRI scan which is short for Magnetic Resonance Imaging is a
device that uses electromagnets. The device can scan all the tiny details in the human body with the help of electromagnetism.
Uses in Memory Storage Devices and Computer Hardware: Magnetic storage devices such as hard disks and magnetic tape store data in the form of bits and bytes using electromagnetism. In earlier times, electromagnets also played a major role in the data storage of VCPs and VCRs.
Uses in Communication Devices and Power Circuits: Without electromagnets, the mobiles and the telephones we used to make phone calls
over a long distance could not have taken shape. The electromagnetic pulses and the interaction of the signals make mobiles and telephones very
handy.

Some of the applications of Electromagnets


The electric bell: An electric bell changes electrical energy to sound energy. When the switch is
closed, the circuit is completed. Current flows through the electromagnet and it is magnetized. The
core attracts the soft-iron armature to which the hammer is attached, and the hammer in turn hits the gong so that sound is heard. The contact is then broken and the electromagnet is demagnetized. The spring pulls back the soft iron and contact is regained. The process repeats, producing a continuous ringing sound.

The telephone receiver: A telephone receiver changes electric energy to sound energy. A
microphone changes sound energy to electric energy. The varying current from the microphone passes
through the coils of the electromagnet. This magnetizes the electromagnet which pulls the diaphragm
towards itself at varying distances depending on the strength of the current through the electromagnet
from the microphone. The diaphragm moves in and out and produces sound waves at the same
frequency as those that entered the microphone.


Generators
Electric generators are essential devices that convert mechanical energy into electrical energy, operating on the principle of
electromagnetic induction. They do not create electricity; instead, they harness energy from an external source, such as motion
or fuel, to generate electrical power. The core components of a generator include an engine, an alternator, and a fuel source . The
engine provides the mechanical energy needed to rotate the alternator, which contains magnets and coils of wire. As the engine
turns the alternator, the movement induces an electrical current in the coils, effectively transforming mechanical energy int o
electrical energy. Generators are widely used in various applications, from powering homes during outages to supplying energy
for industrial operations.

Electromagnetic induction
Electromagnetic induction uses the relationship between electricity and magnetism
whereby an electric current flowing through a single wire will produce a magnetic field
around it. If the wire is wound into a coil, the magnetic field is greatly intensified producing
a static magnetic field around itself forming the shape of a bar magnet giving a distinct
North and South pole.

Air-core Hollow Coil


The magnetic flux developed around the coil is proportional to the amount of current flowing in the coil’s windings, as shown. If additional layers
of wire are wound upon the same coil with the same current flowing through them, the
static magnetic field strength would be increased. Therefore, the magnetic field strength
of a coil is determined by the ampere turns of the coil. With more turns of wire within the
coil, the greater the strength of the static magnetic field around it. But what if we reversed
this idea by disconnecting the electrical current from the coil and instead of a hollow core
we placed a bar magnet inside the core of the coil of wire. By moving this bar magnet “in”
and “out” of the coil a current would be induced into the coil by the physical movement
of the magnetic flux inside it.
If we kept the bar magnet stationary and moved the coil back and forth within the
magnetic field an electric current would be induced in the coil. Then by either moving the
wire or changing the magnetic field we can induce a voltage and current within the coil
and this process is known as Electromagnetic Induction and is the basic principle of operation of transformers, motors and generators.

Electromagnetic Induction by a Moving Magnet


Faraday’s Law of Electromagnetic Induction.
When the magnet is moved “towards” the coil, the pointer or needle of the Galvanometer, which is basically a very sensitive centre zeroed moving-
coil ammeter, will deflect away from its centre position in one direction only. When the magnet stops moving and is held stationary with regards to
the coil the needle of the galvanometer returns back to zero as there is no physical movement of the magnetic field.


When the magnet is moved “away” from the coil in the other direction, the needle
of the galvanometer deflects in the opposite direction with regards to the first indicating
a change in polarity. Then by moving the magnet back and forth towards the coil the
needle of the galvanometer will deflect left or right, positive or negative, relative to the
directional motion of the magnet.
If the magnet is now held stationary and ONLY the coil is moved towards or away
from the magnet the needle of the galvanometer will also deflect in either direction. Then
the action of moving a coil or loop of wire through a magnetic field induces a voltage in
the coil with the magnitude of this induced voltage being proportional to the speed or
velocity of the movement.
The faster the movement of the magnetic field, the greater will be the induced emf or voltage in the coil. For Faraday’s law to hold true there must be “relative motion” or movement between the coil and the magnetic field, and either the magnetic field, the coil, or both can move.

Faraday’s Law of Induction


From the above description, there is a relationship between an electrical voltage and a changing magnetic field to which Faradays law of
electromagnetic induction states: “that a voltage is induced in a circuit whenever relative motion exists between a conductor and a
magnetic field and that the magnitude of this voltage is proportional to the rate of change of the flux”.
Hence, Electromagnetic Induction is the process of using magnetic fields to produce voltage, and in a closed circuit, a current.
But a changing magnetic flux produces a varying current through the coil which itself will produce its own magnetic field as we saw in the
Electromagnets tutorial. This self-induced emf opposes the change that is causing it and the faster the rate of change of current the greater is the
opposing emf. This self-induced emf will, by Lenz’s law oppose the change in current in the coil and because of its direction this self-induced emf is
generally called a back-emf.
Lenz’s Law states that: “the direction of an induced emf is such that it always opposes the change that is causing it”. An induced current will always OPPOSE the motion or change which started the induced current in the first place, and this idea is central to the analysis of inductance.
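Faraday's law for a coil of N turns is often written as induced e.m.f. = −N × (change in flux) ÷ (time taken), with the minus sign expressing Lenz's law. The Python sketch below illustrates this average-value form with invented flux changes; it is not a calculation taken from this text.

```python
# Sketch of Faraday's law for a coil:
# average induced emf = -N * (change in flux) / (time taken).
# The flux values and timings are illustrative, not measurements from the text.

def induced_emf(turns, flux_initial_wb, flux_final_wb, time_s):
    """Average induced e.m.f. for a uniform change of flux through N turns."""
    return -turns * (flux_final_wb - flux_initial_wb) / time_s

# Magnet pushed quickly into a 200-turn coil: flux rises from 0 to 2 mWb in 0.1 s.
print(induced_emf(200, 0.0, 2e-3, 0.1))    # -4.0 V

# The same change made ten times more slowly gives a ten times smaller emf,
# illustrating that the emf depends on the *rate* of change of flux.
print(induced_emf(200, 0.0, 2e-3, 1.0))    # -0.4 V
```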

Types of generators
The simple A.C. generator
Alternating Current (AC) generators are essential devices that convert mechanical energy into electrical energy, producing an alternating current
output. The simplest form of an AC generator consists of a rectangular coil rotating within a uniform magnetic field, typically provided by permanent
magnets. This rotation induces an electric current in the coil due to electromagnetic induction. AC generators are particularly advantageous because
they can transmit electricity over long distances with minimal energy loss, making them more efficient than their DC counterparts. They are widely
used in various applications, from powering homes to supplying electricity for industrial operations.
The ac generator / The alternator
An alternating current (ac) generator is a device that produces a potential difference. A simple ac generator consists of a coil of wire rotating in a
magnetic field. Cars use a type of ac generator called an alternator to keep the battery charged and to run the electrical system while the engine is
working.


Action: The slip rings maintain constant contact with the same sides of the coil. As one side of the coil moves up through the magnetic field, a
potential difference is induced (created) in one direction. As the rotation continues
and that side of the coil moves down, the induced potential difference reverses
direction. This means that the alternator produces a current that is constantly
changing. This is alternating current or ac.
Alternator output on a graph
The output of an alternator can be represented on a potential difference-time graph,
with potential difference on the vertical axis and time on the horizontal axis. The
graph shows an alternating sine curve. The maximum potential difference or current
can be increased by: increasing the rate of rotation, increasing the strength of the
magnetic field, and increasing the number of turns on the coil. The diagram shows
four different positions of the coil in an alternator, and the corresponding potential
difference produced.
The potential difference-time graph for an alternator
 A - The coil is at 0°. The coil is moving parallel to the direction of the
magnetic field, so no potential difference is induced.
 B - The coil is at 90°. The coil is moving at 90° to the direction of the
magnetic field, so the induced potential difference is at its maximum.
 C - The coil is at 180°. The coil is moving parallel to the direction of
the magnetic field, so no potential difference is induced.
 D - The coil is at 270°. The coil is moving at 90° to the direction of
the magnetic field, so the induced potential difference is at its
maximum. Here, the induced potential difference is in
the opposite direction to what it was at B.
 A - The coil is at 360°, ie it is back at its starting point, having done a full rotation. The coil is moving parallel to the direction of the
magnetic field, so no potential difference is induced.
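The sinusoidal output described above can be tabulated at the labelled coil positions. The Python sketch below assumes an illustrative peak potential difference and evaluates emf = emf_max × sin(angle) at each position.

```python
# Sketch: output of a simple alternator at the coil positions described above.
# The induced emf varies as emf_max * sin(angle); emf_max here is illustrative.

import math

EMF_MAX = 10.0   # peak potential difference in volts (assumed)

for angle_deg in [0, 90, 180, 270, 360]:
    emf = EMF_MAX * math.sin(math.radians(angle_deg))
    if abs(emf) < 1e-9:      # tidy up floating-point round-off at 0, 180, 360 deg
        emf = 0.0
    print(f"coil at {angle_deg:3d} deg -> induced p.d. = {emf:6.2f} V")

# 0, 180 and 360 degrees give zero output (coil moving parallel to the field);
# 90 and 270 degrees give maximum output, with opposite signs (A.C.).
```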

The DC Generator
A direct current (dc) generator is another device that produces a potential difference. A
simple dc generator consists of a coil of wire rotating in a magnetic field. However, it
uses a split ring commutator rather than the two slip rings found in alternating current
(ac) generators. Some bike lights use a type of dc generator called a dynamo to run the
lamps while the wheels are turning.

The Dynamo
In a dynamo, a split ring commutator changes the coil connections every half turn. As
the induced potential difference is about to change direction, the connections are
reversed. This means that the current to the external circuit is always in the same direction.


Dynamo output on a graph


The output of a dynamo can be shown on a potential difference-time graph. The graph
shows a curve that stays in the same direction all the time. The shape of the curve can
be thought of as a sine curve, with the negative part reflected in the time axis. The
maximum potential difference or current can be increased by: increasing the rate of
rotation, increasing the strength of the magnetic field, and increasing the number of
turns on the coil
The diagram shows four different positions of the coil in a dynamo, and the
corresponding potential difference produced.
The potential difference-time graph for a dynamo
A - The coil is at 0°. The coil is moving parallel to the direction of the magnetic field, so
no potential difference is induced.
B - The coil is at 90°. The coil is moving at 90° to the direction of the magnetic field, so
the induced potential difference is at its maximum.
C - The coil is at 180°. The coil is moving parallel to the direction of the magnetic field, so
no potential difference is induced.
D - The coil is at 270°. The coil is moving at 90° to the direction of the magnetic field, so
the induced potential difference is at its maximum. Here, the induced potential difference
is in the same direction as at B.
A - The coil is at 360°, ie it is back at its starting point, having done a full rotation. The
coil is moving parallel to the direction of the magnetic field, so no potential difference is
induced.

Working Principle of a D.C. Motor


When the D.C. motor is connected to a power source, current flows through the brushes into the commutator and then into the rotor windings. The
current flowing through the rotor creates a magnetic field around the rotor. This field interacts with the magnetic field of the stator (from the
permanent magnets or field coils). The interaction between the rotor's magnetic field and the stator's magnetic field produces a torque on the rotor,
causing it to rotate. As the rotor turns, the commutator periodically reverses the direction of the current in the rotor windings. This reversal ensures
that the magnetic forces continue to push the rotor in the same direction, allowing it to keep spinning. The rotor continues to spin as long as current
is supplied. The speed of the motor can be controlled by varying the voltage or current supplied.

Applications of D.C. Motors


DC motors are commonly found in machines such as centrifugal pumps, lifts, and weaving machines, where they provide reliable performance in demanding environments. Their ability to deliver high starting torque makes them ideal for heavy-duty applications like cranes and elevators. In the automotive sector, DC motors power electric vehicles and drive systems such as windshield wipers, electric windows, seat adjustments and electric traction. They are widely used in household appliances, including hair dryers, fans and vacuum cleaners, showing their adaptability in both industrial and domestic settings. In industrial equipment they drive conveyor belts, hoists and machine tools, and brushed DC motors are increasingly used in robotic arms and mobile robots, where precise control and responsiveness are crucial.
Advantages of A.C over D.C


 Power losses during transmission are small
 A.C. can easily and cheaply be stepped up and stepped down using transformers
 The frequency of the supply can be precisely controlled
 Thinner cables can be used, since stepping up the voltage reduces the current carried

Rectifiers and Inverters


A rectifier is an essential electronic device that converts alternating current (AC)
into direct current (DC). Rectifiers typically utilize one or more P-N junction
diodes, which allow current to flow in one direction while blocking it in the
opposite direction, effectively transforming the oscillating AC into a steady DC
output. There are several types of rectifiers, including half-wave and full-wave
rectifiers. Half-wave rectifiers only utilize one half of the AC cycle, while full-wave
rectifiers use both halves, resulting in a more efficient conversion.
Each type has its specific applications, ranging from simple power supplies to
complex electronic circuits. In addition to their primary function, rectifiers play a
vital role in various fields, including telecommunications and power systems.

Inverters are essential electronic devices that convert direct current (DC)
into alternating current (AC), enabling the use of DC power sources to
operate household appliances and electronic equipment. This conversion is
crucial for integrating renewable energy systems, such as solar panels, into
the electrical grid, as most household devices require AC power. The
functionality of inverters extends beyond mere conversion; they also
manage the speed and torque of electric motors, making them vital in
various applications, from industrial machinery to electric vehicles. As the
demand for renewable energy solutions grows, inverter manufacturers are
adapting to technological shifts and market dynamics, facing challenges in
production and innovation. In addition to their role in renewable energy, inverters are increasingly popular in automotive applications, allowing users
to power devices while on the road.

Transformers
The transformer, in simple terms, is a device that steps up or steps down voltage. It works on the principle of Faraday’s law of electromagnetic induction and mutual induction. There are usually two coils – a primary coil and a secondary coil – wound on the transformer core, which is built up from laminated strips. The two coils have a high mutual inductance. When an alternating current passes through the primary coil, it creates a varying magnetic flux. As per Faraday’s law of electromagnetic induction, this changing flux induces an EMF (electromotive force) in the secondary coil, which is linked to the primary coil through the core by mutual induction. When a transformer is working: a primary potential difference drives an alternating current through the primary coil, the primary coil current produces a magnetic field,

which changes as the current changes, the iron core increases the strength of the magnetic
field, the changing magnetic field induces a changing potential difference (voltage) in the
secondary coil, and the induced potential difference produces an alternating current in
the external circuit.

Mutual induction
Mutual induction occurs when two coils are arranged close together and a changing current in one coil (the primary) induces an e.m.f., and hence a current, in the other coil (the secondary). The changing current in the primary produces a changing magnetic flux that links the secondary coil, and it is this changing flux that induces the e.m.f.

Efficiency of Transformer
The efficiency of a transformer is also known as commercial efficiency and is represented by the letter ‘η’. It is defined as the ratio of power output (in W or kW) to power input (in W or kW). Hence, the efficiency of a transformer may be expressed as follows:
Efficiency (η) = (Power Output / Power Input) × 100%
If the transformer is ideal, i.e. 100 percent efficient (no energy losses), the power input equals the power output:
IpVp = IsVs …… (1)
For an ideal transformer, εp = Vp, and, by approximation, if the secondary is an open circuit or the current drawn from it is modest, εs = Vs.
The voltage across the secondary coil is then given by the turns ratio: Vs/Vp = Ns/Np ………. (2)
Combining equations (1) and (2) for the ideal case: Is/Ip = Vp/Vs = Np/Ns
In general, Efficiency (η) = (IsVs / IpVp) × 100%

Energy Losses in a Transformer


We assumed an ideal transformer in the previous equations (one without any energy losses). However, some energy losses do occur in a real transformer for the following reasons:
Flux Leakage: Because some flux leaks from the core, not all of the flux generated by the primary coil links the secondary coil. This occurs as a result of poor core design or the presence of air gaps in the core. It can be reduced by winding the primary and secondary coils over each other and by designing the core well.
Windings Resistance: Because the wire used for the windings has some electrical resistance, energy is wasted as heat generated in the windings. This is reduced in high-current, low-voltage windings by using thick wire made of a highly conductive material.
Eddy Currents: The alternating magnetic flux creates eddy currents in the iron core, resulting in energy losses through heating. By using a laminated
core, the impact is decreased.
Hysteresis Loss: In each AC cycle, the alternating magnetic field reverses the magnetization of the core. The loss of energy in the core occurs as heat
owing to hysteresis loss, which is minimized by employing a magnetic material with a low hysteresis loss.


Application of Transformer
The following are some of the most common uses of transformers:
 Increasing or reducing the voltage level in an AC circuit to ensure the correct operation of the circuit’s various electrical components,
 Blocking DC from flowing from one circuit to another while allowing AC to pass,
 Isolating (electrically separating) two circuits from each other,
 Stepping up the voltage at the electric power plant before transmission and distribution can take place (transformers also act as voltage regulators),
 Making it possible to transmit electrical energy efficiently through wires over long distances,
 Providing several different voltages in radio and TV receivers, using transformers with multiple secondaries.
Examples
1. A transformer has 600 turns on the primary winding and 20 turns on the secondary winding. Determine the secondary voltage if the secondary circuit is open and the primary voltage is 140 V.
Given: number of turns on the primary coil (Np) = 600, number of turns on the secondary coil (Ns) = 20, primary voltage (Vp) = 140 V
Solution:
From the formula Vs/Vp = Ns/Np, Vs = 140 × 20/600 = 4.67 V

2. A transformer has a primary coil with 1600 turns and a secondary coil with 1000 turns. If the current in the primary coil is 4 A, what is the current in the secondary coil?
Given: primary coil (Np) = 1600 turns, secondary coil (Ns) = 1000 turns, current in the primary coil (Ip) = 4 A
Solution:
For an ideal transformer, Is/Ip = Np/Ns, so Is = 4 × 1600/1000 = 6.4 A. The current in the secondary coil is 6.4 A.
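
The two worked examples above can be checked with a short calculation. The Python sketch below is illustrative only; it assumes an ideal transformer, so that Vs/Vp = Ns/Np and Is/Ip = Np/Ns.

def secondary_voltage(Vp, Np, Ns):
    # Ideal transformer: Vs / Vp = Ns / Np
    return Vp * Ns / Np

def secondary_current(Ip, Np, Ns):
    # Ideal transformer (no losses): Ip * Vp = Is * Vs, so Is / Ip = Np / Ns
    return Ip * Np / Ns

# Example 1: 600 primary turns, 20 secondary turns, 140 V across the primary
print(round(secondary_voltage(140, 600, 20), 2))    # -> 4.67 (volts)

# Example 2: 1600 primary turns, 1000 secondary turns, 4 A in the primary
print(round(secondary_current(4, 1600, 1000), 2))   # -> 6.4 (amperes)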

Activity
1. A transformer steps down a voltage from 240 V to 12 V for a radio. If the primary winding has 300 turns, how many turns are on the secondary winding?
2. An electric power generator produces 24 kW at 240 V. The voltage is stepped up to 4000 V. If the transformer is 100% efficient, calculate the current in the secondary coil.
3. A transformer that is 80% efficient is connected to a 240 V A.C. supply to operate a heater of resistance 240 Ω. If the current flowing in the primary circuit is 5 A, calculate the potential difference across the heater.


4.3 ELECTRIC ENERGY DISTRIBUTION AND CONSUMPTION


Learning Outcomes
a) Understand the distribution of electricity from the source to consumer units (u)
b) Understand the energy transformations in common domestic electrical devices and how energy can be saved (u)
c) Understand how to use mains electricity safely and know the insulation colour codes used in domestic wiring (u, k, s)
d) Know the dangers of mains electricity and understand how these may be minimised by safety devices, and by sensible precautions (k, u, v/a)
e) Know how to read a domestic electricity meter and its significance (k, u, s)
f) Appreciate the importance of the use of energy saving appliances (u, s, v/a)

Introduction
Electrical energy is the work done by electric charges, calculated by multiplying power by the time the current flows. This energy is derived from the
potential or kinetic energy of charged particles. The rate at which this energy is transferred within a circuit is known as electric power, measured in
watts. One watt is defined as one joule per second, illustrating the relationship between energy and time. Understanding the distinction between
energy and power is crucial. Energy represents the total work done, while power indicates how quickly that work is performed. For instance, a light
bulb may consume a certain amount of energy over time, but its power rating tells us how much energy it uses per second. As global electricity
consumption continues to rise, with annual figures exceeding 25,000 terawatt-hours, the efficient use of electrical energy and power becomes increasingly
important for sustainable development and energy management.

The mains electricity supply


Mains electricity refers to the electrical power supplied to homes and businesses through an electric grid. This system is essential for daily activities,
providing the energy needed for lighting, heating, and powering appliances. The specifications for mains electricity vary by country, with different plugs, voltages, and frequencies in use. For instance, in Uganda, the mains supply operates at 240 volts and a frequency of 50 hertz, while in the
United States, it is typically 110-120 volts at 60 hertz. The term "mains" originates from the historical use of the word to describe principal channels
for conveying resources, such as water. This terminology reflects the importance of a reliable electricity supply in modern infrastructure.
Uganda's electricity generation is predominantly reliant on hydropower, which accounts for over 84% of the country's electricity supply. The nation
is endowed with abundant water resources, making hydropower a viable and sustainable energy source. This reliance on hydropower has facilitated
significant electrification progress, outpacing population growth between 2017 and 2019. In addition to hydropower, Uganda also utilizes thermal
power, which contributes around 100 MW to the energy mix. Other sources include biomass, solar energy, and cogeneration, with grid-connected solar
contributing approximately 60 MW. Despite these diverse sources, Uganda still faces challenges with one of the lowest electrification rates in Africa,
largely due to an overreliance on traditional biomass. Uganda is exploring nuclear energy as a potential source to diversify its electricity generation
portfolio, aiming to enhance energy security and meet growing demand.


Transmission and Distribution of Electricity


Transmission refers to the high-voltage movement of electricity over long distances, akin to
motorways transporting vehicles quickly across the country. This system connects power
plants to substations, where the voltage is reduced for safe distribution. Once electricity
reaches substations, the distribution phase begins. This involves lower-voltage power lines
that deliver electricity to homes and businesses. The distribution network is essential for
providing reliable access to electricity, adapting to local demand and ensuring that power is
available where it is needed most. As the energy landscape evolves, particularly with the rise
of renewable sources, the transmission and distribution systems face new challenges. Utilities
and system operators must continuously balance supply and demand, integrating advanced
technologies to enhance efficiency and reliability in delivering electricity to consumers.

The grid system


Uganda's electricity sector is primarily powered by hydroelectric sources,
contributing 1,023.59 MW, followed by thermal (100 MW), cogeneration (63.9
MW), and grid-connected solar (60 MW). Despite these resources, only about
42% of the population had access to electricity by 2021, highlighting a
significant gap in energy availability. To address this, isolated grid systems
are emerging as a viable solution, particularly in rural areas where
traditional grid expansion is challenging. The Twaake integrated energy
minigrid project has gained recognition for its innovative approach to
delivering clean and economical energy, showcasing the potential of
decentralized energy solutions. Furthermore, Uganda's participation in the Eastern Africa Power Pool (EAPP) aims to enhance regional power trade
and improve electricity access. As the country explores alternative technologies, including solar photovoltaic systems, the potential for increased
electricity access remains promising, paving the way for a more sustainable energy future.

Electricity is transmitted at high voltages primarily to enhance efficiency and minimize energy losses. When electrical power travels over long distances,
resistance in the transmission lines can lead to significant energy loss. By increasing the voltage, the current flowing through the wires is reduced,
which directly decreases the power loss due to resistance. This principle is rooted in Ohm's Law, where power loss is proportional to the square of
the current. High-voltage transmission systems utilize transformers to step up the voltage for long-distance travel and then step it down for safe
distribution to consumers. This method not only conserves energy but also allows for the transmission of larger amounts of power over vast distances,
making it essential for modern electrical grids. In summary, high-voltage transmission is a critical strategy for efficient energy distribution, ensuring
that electricity can be delivered effectively while minimizing waste and maintaining system reliability.
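
As a rough numerical illustration of this point, the sketch below compares the I²R loss in the same transmission line when the same power is sent at two different voltages. The power, line resistance and voltages used are assumed example figures, not data from the text.

def line_loss(power_w, voltage_v, line_resistance_ohm):
    current = power_w / voltage_v                 # I = P / V
    return current ** 2 * line_resistance_ohm    # P(loss) = I^2 * R

P = 1_000_000   # 1 MW of power to be transmitted (assumed)
R = 5.0         # resistance of the transmission line in ohms (assumed)

for V in (11_000, 132_000):   # assumed transmission voltages
    loss = line_loss(P, V, R)
    print(f"At {V} V: current = {P / V:.1f} A, "
          f"loss = {loss / 1000:.2f} kW ({100 * loss / P:.2f}% of the power sent)")

Raising the voltage from 11 kV to 132 kV (a factor of 12) cuts the current by the same factor and the I²R loss by a factor of 144 in this example.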

Domestic Electricity
Domestic electricity in Uganda is governed by specific tariffs aimed at making power more accessible to households. Customers consuming up to 100
units per month can purchase the first 15 units at a subsidized rate of Ush 250 per kilowatt-hour. This lifeline tariff is designed to support low-income
households, allowing them to manage their energy costs effectively. Despite the availability of electricity, many Ugandans still rely on solid fuels like
firewood and charcoal for cooking, highlighting a significant gap in energy access. The demand for electricity is growing rapidly, with household


consumption increasing at an annual rate of 13%. Appliances account for a significant portion, approximately 64.7% of total electricity usage. This
highlights the importance of understanding energy consumption patterns to manage costs and promote efficiency. Domestic electric circuits are
designed to supply power safely throughout homes. These circuits consist of various components, including switches, wires, and safety devices like
fuses and circuit breakers.

Power rating on each appliance


Understanding the power ratings of common household appliances
is essential for managing energy consumption and costs. For
instance, dishwashers typically consume between 1200 to 1500
watts, while electric clothes dryers can use a significant amount of
energy, ranging from 1800 to 5000 watts. In contrast, gas-heated
dryers are more efficient, requiring only 300 to 400 watts. When
planning for energy efficiency, it's crucial to consider the wattage
of all appliances. For example, a coffee maker generally uses about
600 to 1200 watts, while a clothes washer consumes around 350 to
500 watts. These figures can help homeowners make informed
decisions about their energy use and potential savings. By being aware of these power ratings, individuals can optimize their energy consumption,
potentially leading to lower utility bills and a reduced environmental impact.

The Meaning of 5kWh Appliance


A "5 kWh appliance" refers to an appliance whose use consumes five kilowatt-hours of electrical energy. The term "kWh" stands for kilowatt-hour, a unit of energy that quantifies how much energy an appliance uses over time. For example, if an appliance operates at a power of 1 kW for 5 hours, it will consume 5 kWh of energy. Understanding kWh is essential for managing energy costs. The cost of electricity is typically calculated from kWh usage, so an appliance that consumes 5 kWh will contribute to your monthly electricity bill according to your local rate. For instance, at a rate of UGX 0.10 per kWh, consuming 5 kWh of energy would cost UGX 0.50. In practical terms, 5 kWh of energy can power various household devices, such as a refrigerator or a washing machine, for a limited time. Knowing the energy consumption of your appliances helps in making informed decisions about energy use and efficiency.

Electricity Meters
Electricity meters are essential devices that measure the amount of electrical energy consumed
by a residence or business. They operate with a high level of accuracy, ensuring that consumers
are billed correctly for their energy usage. Traditional analog meters have largely been replaced by digital meters, which utilize electronic components
to provide precise readings. These digital meters not only record energy consumption but also transmit data directly to utility companies, streamlining
the billing process. Digital meters come in various types, including single-phase and three-phase meters, catering to different energy needs. Three-
phase meters are commonly used in commercial settings, measuring the power of three-phase electrical supplies. The transition to smart meters has
further enhanced energy management, allowing users to monitor their consumption in real-time and make informed decisions about their energy use.


Calculating the Cost of Electricity


ELECTRICAL ENERGY: Electrical energy is the work done in moving an electric charge by an electric force. The SI unit of electrical energy is the Joule (J). The transfer of electrical energy in a conductor is accompanied by a rise in temperature, so this energy may be given out as heat energy. This explains why wires become hot when electricity passes through them.

Calculating electrical circuits


Find the total or effective resistance in the circuit. When finding the current through a resistor in parallel, first find the potential difference (voltage) across the parallel combination. The power dissipated in any resistor is P = I²R. The power expended in the whole circuit is P = IV.

Activity (In groups)


1. In the diagram below, a battery of e.m.f. 12 V and internal resistance 0.6 Ω is connected to resistors. Determine: (i) the current through the circuit, (ii) the current through the 4 Ω and 6 Ω resistors, (iii) the power dissipated in the 4 Ω resistor, (iv) the power expended in the circuit (a worked sketch follows this activity).
2. A dry cell of e.m.f. E and internal resistance r drives a current of 0.25 A through a resistor of 5.5 Ω, and drives a current of 0.3 A through a resistor of 4.5 Ω. Determine the e.m.f. E, the internal resistance r, and the power developed in the 5.5 Ω resistor.
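
A possible working for an item of the type in question 1 is sketched below. The original circuit diagram is not reproduced here, so the layout is assumed: the 4 Ω and 6 Ω resistors are taken to be in parallel with each other and in series with the 0.6 Ω internal resistance. Adjust the combination to match the actual diagram before relying on the numbers.

E = 12.0            # e.m.f. in volts
r = 0.6             # internal resistance in ohms
R1, R2 = 4.0, 6.0   # assumed to be connected in parallel

R_parallel = (R1 * R2) / (R1 + R2)   # effective resistance of the parallel pair
I_total = E / (R_parallel + r)       # (i) current through the circuit
V_parallel = I_total * R_parallel    # p.d. across the parallel combination
I1 = V_parallel / R1                 # (ii) current through the 4 ohm resistor
I2 = V_parallel / R2                 # (ii) current through the 6 ohm resistor
P1 = I1 ** 2 * R1                    # (iii) power dissipated in the 4 ohm resistor
P_total = E * I_total                # (iv) power expended in the whole circuit

print(I_total, I1, I2, P1, P_total)  # 4.0 A, 2.4 A, 1.6 A, 23.04 W, 48.0 W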

Commercial Electricity
Electricity and its Units
Power is the rate at which electrical energy is used, while electricity (energy) is the actual amount of energy consumed. Imagine a 100-foot pipe filled with water: when you open a valve at one end, water immediately flows out of the other end, because the pressure wave travels through the whole 100 feet of pipe. Likewise, electricity is the flow of electrons through a conductor. Electricity (Energy) = Power × Time
The unit of electricity is the kilowatt-hour (kWh). This is the amount of energy used when 1000 W of power is supplied for one hour, so using 1000 W for 1 hour consumes 1 unit or 1 kWh of electricity. So, if a 100 W bulb is lit for 5 hours, the total electricity it consumes is:
Electricity = 100 W × 5 h = 500 Wh = 0.5 kWh or 0.5 unit

Power Consumption and its Units


Power consumption is the amount of energy used per unit of time to operate something. Consider an office building: it needs a continuous supply of electrical power, and that power must always be available for it to work properly. Power is always measured in watts (W) or kilowatts (kW). There are
1000 watts in 1 kilowatt. Let’s say, your washing machine has a power rate of 1.5kW which means it consumes electricity at the rate of 1500 watts.
Similarly, if you buy a 100W bulb, it does not mean it consumes 100 watts of electricity. It means it consumes at the rate of 100W.
In Uganda, electricity boards such as UMEME sell electricity. They use our meters to estimate the electrical energy consumed. The energy consumed is
measured in kilowatt-hours (kWh).
Energy Consumption (kWh) =Power Rating (kW) ×Usage Time (hours).
Base Charge=Energy Consumption (kWh) ×Rate per kWh
A kilowatt-hour is the amount of electrical energy consumed by a device of power 1000W in one hour. All electrical appliances are marked (rated)
showing the power rating in Watts and voltage in Volts. An electrical appliance rated 240V, 60W means that the appliance supplies or consumes 60J
every second when connected to a 240V mains supply.


Example
An electric kettle labelled 5 kW is connected to an electric power supply for 20 minutes. Determine the amount of energy consumed and the cost of electricity used during this time if each kWh costs UGX 780/=.
Solution: Energy consumed = power × time = 5 × (20/60) = 1.67 kWh.
Cost = electrical energy × rate of electricity = 1.67 × 780 = UGX 1302.6/=
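
The same calculation can be written as a short routine. The Python sketch below is illustrative only and uses the figures from the example above; it converts the power rating and running time into kilowatt-hours and multiplies by the tariff.

def electricity_cost(power_kw, hours, rate_per_kwh):
    energy_kwh = power_kw * hours                # Energy (kWh) = power (kW) x time (h)
    return energy_kwh, energy_kwh * rate_per_kwh

# 5 kW kettle run for 20 minutes at UGX 780 per kWh
energy, cost = electricity_cost(5, 20 / 60, 780)
print(f"Energy = {energy:.2f} kWh, cost = UGX {cost:.0f}")

This prints about 1.67 kWh and UGX 1300; the worked example rounds the energy to 1.67 kWh before multiplying, which gives UGX 1302.6.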

ACTIVITY
1. How much will it cost to run four bulbs rated at 40W each for 2 days, if the cost of each unit of electricity is UGX750?
2. Find the cost to run two bulbs rated at 60W each and an electric iron rated at 120W for 35 minutes, if the unit is UGX420/=.
3. An electrical heater is rated at 3000 W, 240 V. Find the total number of units it consumes in 1.5 hours and the cost of electricity if each unit costs UGX 10,000/= after using the heater for 3 hours every day for a month.
4. An electrical heater supplies or consumes 3000J every second when connected to a 240V mains supply. Janelle paid an electricity bill of UGX2800/=
after using two identical bulbs for 2 hours every day for 20 days at a cost of 2600shs per unit. Determine the power consumption by each of the bulbs.
5. Find the cost of running five 60W lamps and four 100W lamps for 8 hours if the electrical energy costs UGX500/= per unit.
6. Mr. Peter uses 5 kettles of 800W each, a flat iron of 1000W, 4 bulbs of 60W each and 2 bulbs of 75W each. If they are used for 6 hours every day for 30 days and one unit of electricity costs 200shs, find the total cost of running the appliances.
7. A television is rated 240V, 600W.
(a) What do you understand by the statement above?
(b) Calculate the current flowing through the TV.
(c) Calculate the resistance of the television.
(d) Calculate the cost of running the television for 600 minutes if the unit cost is 600shs.

Energy Transformations in Common Electric Devices


Energy transmission is a fundamental concept in the operation of electric devices, where electrical energy is transformed into various other forms.
Common household appliances such as flat irons, electric toasters, and stoves convert electrical energy into thermal energy, enabling cooking and
heating. Similarly, televisions transform electrical energy into sound and light, providing entertainment and information. The process of energy
transfer in electric circuits is crucial for understanding how these devices function. The energy transferred to a resistor, for instance, is calculated by
multiplying electric power by time, illustrating the relationship between energy consumption and device operation. Various energy transformation
devices play a significant role in our daily lives, converting electrical energy into light, heat, sound, and mechanical energy. This transformation is
essential for the functionality of appliances like light bulbs, heaters, and fans, showcasing the versatility and importance of energy transmission in
modern technology.

Safety Measures
Electrical safety is crucial in both workplace and home environments to prevent accidents and injuries. Here are some essential safety measures to
consider. First, always keep electrical equipment away from water to avoid electrocution. Ensure that all equipment is unplugged safely after use, and
avoid overloading outlets, which can lead to fires. In the workplace, maintain a safe distance from power lines, ideally staying at least 10 feet away.
Use protective gear, such as non-conductive gloves and goggles, when working with electrical equipment. Regularly inspect electrical tools and cords
for damage, and ensure that all equipment is properly grounded. At home, follow appliance instructions carefully and utilize fuses and circuit breakers


to protect against power surges. By adhering to these safety tips, you can significantly reduce the risk of electrical hazards and create a safer
environment for everyone.
Transmission of electricity
Electricity generated at power stations is stepped up to higher voltages before transmission using step-up transformers. The power transmitted is usually alternating current, and it is stepped down using step-down transformers as it reaches factories, industries, towns and homes. Transmission can be either overhead or underground.
How power losses are reduced during transmission of electricity:
Electricity is transmitted at high voltages to reduce the power loss due to the heating effect in the transmission cables. The transmission cables are also made thick to reduce their resistance, hence minimizing power loss.

House wiring (domestic electrical installation)
Electricity is connected in a house by thick cables called the mains
from the electricity poles to the meter box or fuse box and then to
the main distribution box. From here, electricity is supplied to the
electrical appliances. The electricity supply cables in a house consist of the live, neutral and earth wires. Domestic electrical installation
involves a systematic approach to distributing energy throughout
a home, powering everything from lights to appliances. Key
considerations include selecting the right wire types,
understanding circuit layouts, and adhering to safety standards.
When wiring a house, several essential components must be included to ensure safety and functionality. The circuit breaker panel serves as the heart
of the electrical system, distributing power through various circuits to different areas of the home. Standard residential wiring consists of wires,
cables, outlets, switches, and electrical panels, all of which must comply with local codes. In kitchens, for instance, it is crucial to have a minimum of
eight circuits, with kitchen lighting on a separate circuit rated for 15 or 20 amps.
Ground Fault Circuit Interrupter (GFCI) outlets are also required in areas prone to moisture, such as kitchens and bathrooms, to prevent electrical
shocks. Additionally, using the correct gauge wire is vital; 14-gauge wire is typically used for lighting circuits, while 12-gauge is suitable for general-
purpose outlets. Following these guidelines ensures a safe and efficient electrical system in your home.

Precautions taken when wiring a house


When wiring a house, safety should always be the top priority. Here are ten essential tips to ensure safe residential electrical wiring. First, always turn
off the power at the circuit breaker or fuse box before starting any work. This simple step can prevent serious accidents. Additionally, using insulated
tools is crucial to protect against electrical shocks. Wearing safety glasses and protective clothing is another important precaution. These items can
shield you from debris and potential electrical hazards. Always test the wires with a voltage tester before touching them, ensuring they are not live.
Having the right tools on hand can also make the process smoother and safer. Finally, ensure that all wiring complies with local codes and regulations.
Proper grounding of electrical systems is essential to prevent shocks and fire hazards. By following these guidelines, you can significantly reduce the
risks associated with electrical wiring in your home.


Dangers of mains electricity


Mains electricity poses significant dangers, including electric shocks, fires, and electrical burns. The high voltage, typically around 220-240 V, makes
contact with live wires particularly hazardous. Surges in electricity can lead to severe consequences, such as fires or electric shocks, emphasizing the
importance of safety measures. To mitigate these risks, built-in safety features like fuses and earth wires are essential. Fuses work by melting when a
high current flows through them, effectively cutting off the electricity supply to the appliance and making it safe to touch. This mechanism is crucial
in preventing potential accidents. Common hazards include
overloaded sockets and faulty appliances, which can exacerbate the
dangers of mains electricity. Awareness and adherence to safety
guidelines are vital for preventing electrical accidents in homes and
workplaces. Always engage licensed professionals for electrical
installations and repairs to ensure safety.

How to Use Mains Electricity Safely


When working with mains electricity, safety should always be your
top priority. Start by following appliance instructions carefully to
ensure proper usage. Avoid overloading outlets, as this can lead to
electrical fires. If you encounter exposed voltages, keep the device turned off and maintain a safe distance from other objects to prevent accidental
contact. In addition, always keep electrical equipment away from water to reduce the risk of electrocution. Unplug devices safely and ensure that cords
are installed properly and kept tidy to avoid tripping hazards. Familiarize yourself with the electrical systems in your home or workplace, and engage
only licensed professionals for any installation or repair work. Lastly, always consider using safety signs and tags to identify electrical hazards. By
adhering to these guidelines, you can significantly reduce the risk of electrical accidents and ensure a safer environment for yourself and others.

Domestic Wiring in Mains Electricity


A typical home electrical system consists of a live wire, a neutral wire and an earth wire, which deliver 240-volt power to the various appliances and lighting fixtures. Familiarizing yourself with the basics of home electrical wiring, including wire types and installation techniques, can empower you to tackle minor electrical projects safely. The three primary types of electrical wires are live (hot) wires, neutral wires, and earth (grounding) wires. Live wires are typically red (or brown in the newer code), neutral wires are black (or blue in the newer code), and earth wires are green or green/yellow.

Sample item.
A family in a downtown slum in Uganda uses a 2 kW heater for 10 hours and a 100 W lamp for 8 hours each day for 2 months. At the same time, an electric immersion heater was immersed in 0.05 kg of oil placed in a lagged copper calorimeter. The temperature of the oil rose from 20 °C to 80 °C in 100 seconds. The specific heat capacity of the oil is 2000 J kg⁻¹ K⁻¹ and the heat capacity of the calorimeter is 400 J K⁻¹. As a physics student, help to determine (a) the power supplied by the immersion heater, and (b) the cost of running the heater for 100 s each day over the 2-month period. HINT: ERA cost of a unit is UGX 780.5/=.
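
One possible working for this sample item is sketched below. It assumes that 2000 J kg⁻¹ K⁻¹ is the specific heat capacity of the oil, that 400 J K⁻¹ is the heat capacity of the calorimeter, and that the two-month period is taken as 60 days with the heater run for 100 s each day at the quoted ERA rate; check these assumptions against the intended question before using the numbers.

m_oil = 0.05    # mass of oil in kg
c_oil = 2000    # assumed specific heat capacity of the oil, J kg^-1 K^-1
C_cal = 400     # assumed heat capacity of the calorimeter, J K^-1
dT = 80 - 20    # temperature rise in kelvin
t = 100         # heating time in seconds

# (a) Power supplied by the immersion heater
heat_supplied = (m_oil * c_oil + C_cal) * dT   # energy absorbed in joules
power = heat_supplied / t                      # power in watts
print(f"Power supplied = {power:.0f} W")       # -> 300 W

# (b) Cost of running the heater for 100 s per day over 2 months (taken as 60 days)
days = 60
rate = 780.5                           # UGX per kWh (ERA unit cost)
energy_kwh = power * t * days / 3.6e6  # joules converted to kWh
print(f"Energy = {energy_kwh:.3f} kWh, cost = UGX {energy_kwh * rate:.2f}")  # 0.5 kWh, UGX 390.25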


Insulation and Color Codes


Color coding in insulation and control cables is essential for ensuring safety
and compliance in electrical installations. By using standardized color
codes, manufacturers and technicians can easily identify and connect cables
correctly, reducing the risk of hazardous mistakes. Common wire colors
include black, red/brown, white, green/yellow, and blue, each
serving a specific purpose in electrical wiring. In addition to safety, proper
insulation and color coding contribute to the efficiency of electrical
systems, ensuring that installations meet local codes and regulations.
Circuit breaker: A circuit breaker is an essential electrical safety device
designed to protect circuits from damage caused by excessive current. It
functions as an automatic switch that interrupts the flow of electricity
during overloads, short circuits, or ground faults. This mechanism prevents
potential hazards such as electrical fires and equipment damage, making circuit breakers crucial for both residential and industrial applications.
Various types of circuit breakers are available, including single-pole and multi-pole options, catering to different electrical needs. For instance, the
Square D by Schneider Electric Homeline 20 Amp One-Pole Circuit Breaker is specifically designed for overload and short-circuit protection. The market
for industrial miniature circuit breakers is projected to grow significantly, driven by increasing demand for reliable electrical protection. They serve
as a resettable fuse, automatically cutting off power when necessary, thus safeguarding both
people and property from electrical hazards.

FUSE
A fuse is a crucial electrical safety device designed to protect circuits from overcurrent
conditions. When excessive current flows through a fuse, it heats up and eventually melts,
creating an open circuit that halts the flow of electricity. This mechanism prevents potential
damage to electrical components and reduces the risk of fire hazards. Fuses come in various
types and ratings, tailored to specific applications and current levels. They are essential in
both residential and industrial settings, ensuring that electrical systems operate safely and
efficiently. When a fuse blows, it must be replaced to restore functionality, making regular
maintenance and checks important for electrical safety. The demand for fuses is on the rise, driven by increased electrification and stringent safety
standards. As technology advances, the electric fuse market is projected to grow significantly, highlighting the importance of these devices in modern
electrical systems.
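
A practical point that follows from this section is choosing a fuse rating for an appliance. A common rule of thumb, not stated explicitly in the text and offered here only as an assumption, is to compute the normal operating current I = P/V and pick the next standard fuse rating above it. The sketch below uses an assumed set of standard plug-fuse ratings.

def suggest_fuse(power_w, voltage_v, standard_ratings=(3, 5, 13)):
    # Normal operating current drawn by the appliance
    current = power_w / voltage_v
    # Choose the smallest standard rating that exceeds the operating current
    for rating in standard_ratings:
        if rating > current:
            return current, rating
    return current, None   # no plug fuse is large enough; a dedicated circuit is needed

# A 2 kW kettle on a 240 V supply (example figures)
current, fuse = suggest_fuse(2000, 240)
print(f"Operating current = {current:.2f} A, suggested fuse = {fuse} A")   # 8.33 A -> 13 A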


4.5 ATOMIC MODELS


Learning Outcomes
a) Understand the structure of an atom in terms of a positive nucleus and negative electrons (u)
b) Understand the terms: atomic number, mass number, and isotopes, and use them to represent different nuclides (k, u)
c) Understand the methods by which electrons are ejected from atoms of matter and how these electrons are useful (u, v/a)

Introduction
Atomic models are essential in understanding the structure and behavior of atoms, the
fundamental building blocks of matter. Over the centuries, these models have evolved
significantly, reflecting advancements in scientific knowledge. Early theories, such as
Dalton's solid sphere model, proposed that atoms were indivisible particles. However, this
notion was challenged by J.J. Thomson's discovery of electrons, leading to the plum
pudding model, which depicted atoms as a mix of positive and negative charges. The most
significant shift came with Ernest Rutherford's gold-foil experiment in 1911, which
revealed the existence of a dense nucleus at the center of the atom, surrounded by orbiting
electrons. This nuclear model laid the groundwork for Niels Bohr's model, which
introduced quantized electron orbits. Today, the electron cloud model further refines our
understanding, depicting electrons as existing in probabilistic clouds rather than fixed paths, illustrating the complexity of atomic structure.

STRUCTURE OF AN ATOM
The structure of an atom can be described as consisting of two main regions: a positively
charged nucleus at the center and negatively charged electrons surrounding the nucleus.
The Nucleus (Positive): The nucleus is a small, dense core located at the center of the atom.
It contains protons and neutrons: Protons are positively charged particles. Neutrons have no
charge (they are neutral). The overall charge of the nucleus is positive due to the presence of
protons. The number of protons in the nucleus is called the atomic number, which defines the
element (e.g., hydrogen has 1 proton, carbon has 6 protons). The nucleus accounts for almost
all of the atom's mass, but it occupies only a tiny fraction of the atom's volume.
Electrons (Negative): Surrounding the nucleus are electrons, which are negatively charged particles. Electrons are much lighter than protons and
neutrons (about 1/1836th of the mass of a proton). Electrons are arranged in regions of space called electron shells or energy levels, which surround the nucleus. The number of electrons in a neutral atom equals the number of protons, which balances the atom's overall charge.

Electrostatic Forces: The negatively charged electrons are attracted to the positively charged nucleus by the electrostatic force (Coulomb force),
which holds the atom together. Despite this attraction, electrons do not "fall" into the nucleus because they occupy specific energy levels or orbitals,
which are governed by the principles of quantum mechanics.
Atomic Number (Z): The atomic number is the number of protons in the nucleus of an atom.
It is represented by the symbol Z. The atomic number determines the identity of the element. For example: Hydrogen (H) has an atomic number of 1
because it has 1 proton. Carbon (C) has an atomic number of 6, meaning it has 6 protons. The atomic number also equals the number of electrons in a
neutral atom, since protons and electrons balance each other’s charge.


Mass Number (A): The mass number is the total number of protons and neutrons in an atom's nucleus. It is represented by the symbol A. Mass
number = Protons (Z) + Neutrons (N). For example, carbon-12 (C-12) has a mass number of 12 because it has 6 protons and 6 neutrons. Unlike the
atomic number, which is unique to each element, the mass number can vary among atoms of the same element, depending on the number of neutrons.
Isotopes: Isotopes are atoms of the same element (same number of protons) that have different numbers of neutrons. Isotopes have the same atomic
number (Z) but different mass numbers (A). Because the number of neutrons varies, isotopes of an element may have different physical properties,
such as different masses, but they behave chemically in the same way. For example, carbon has three naturally occurring isotopes:
Carbon-12 (C-12): 6 protons and 6 neutrons.
Carbon-13 (C-13): 6 protons and 7 neutrons.
Carbon-14 (C-14): 6 protons and 8 neutrons.
All three isotopes have 6 protons (atomic number = 6) but different mass numbers due to varying numbers of neutrons.
Representing Nuclides
Nuclides are represented using their atomic and mass numbers. The standard notation writes the chemical symbol X with the mass number A as a superscript and the atomic number Z as a subscript in front of it, where A is the mass number (protons + neutrons), Z is the atomic number (number of protons), and X is the chemical symbol of the element.
Examples of Nuclide Representation:
Carbon-12: A = 12 (6 protons + 6 neutrons), Z = 6 (6 protons). Carbon-14: A = 14 (6 protons + 8 neutrons), Z = 6 (6 protons). Uranium-238: A = 238 (92 protons + 146 neutrons), Z = 92 (92 protons).
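
Because the number of neutrons is simply A − Z, nuclide bookkeeping is easy to automate. The short Python sketch below is only an illustrative aid; it takes a chemical symbol, atomic number and mass number and reports the proton and neutron counts for the examples above.

def describe_nuclide(symbol, Z, A):
    # A = mass number (protons + neutrons), Z = atomic number (protons)
    neutrons = A - Z
    return f"{symbol}-{A}: Z = {Z} protons, N = {neutrons} neutrons"

for symbol, Z, A in [("C", 6, 12), ("C", 6, 14), ("U", 92, 238)]:
    print(describe_nuclide(symbol, Z, A))

# C-12: Z = 6 protons, N = 6 neutrons
# C-14: Z = 6 protons, N = 8 neutrons
# U-238: Z = 92 protons, N = 146 neutrons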

Charged atoms
A charged atom, commonly referred to as an ion, occurs when an atom
gains or loses electrons, resulting in a net electric charge. Ions can be
either positively charged, known as cations, or negatively charged,
called anions. This charge imbalance is crucial in various chemical
reactions and physical processes, as charged atoms interact through
Coulomb forces, attracting or repelling each other based on their
charges.

Isotopes
Isotopes are unique nuclear species of the same chemical element, characterized by having the same atomic number but differing in the number of
neutrons. This variation in neutron count results in different atomic masses for isotopes of the same element. For example, carbon has isotopes such
as carbon-12 and carbon-14, which are essential in various scientific applications, including radiocarbon dating. The significance of isotopes extends
beyond chemistry; they play crucial roles in fields like medicine, where radioactive isotopes are used in diagnostic imaging and cancer treatment.
Additionally, isotopes are vital in nuclear energy production and research, providing insights into atomic structure and behavior. Isotopes of the same
element behave identically in chemical reactions because they have the same electron configuration. Isotopes have different masses, and this can affect
their behavior in physical processes (e.g., heavier isotopes may diffuse more slowly).


Applications of Isotopes
In medicine, radioactive isotopes are integral to nuclear medicine, providing diagnostic information and treatment options for diseases such as cancer
and cardiovascular conditions. These isotopes, found in radiopharmaceuticals, allow for targeted therapies and precise imaging of organ functions. In
agriculture, isotopes are utilized to improve crop yields and manage pest control. They help trace nutrient uptake in plants and monitor the
effectiveness of fertilizers. Isotopes are employed in food processing to ensure safety and extend shelf life through radiation. Beyond health and
agriculture, isotopes are valuable in environmental studies, where they trace pollution sources and study ecological processes. For instance, carbon-
14 is used in carbon dating to determine the age of archaeological finds, providing insights into historical timelines. Overall, isotopes are indispensable
tools across multiple disciplines, driving advancements and enhancing our understanding of the world. Carbon-14 is used in radiocarbon dating to
determine the age of ancient biological materials. Isotopes like Iodine-131 are used in
medical imaging and treatment, especially in cancer therapy. Uranium-235 is used as fuel
in nuclear reactors.

A nuclide
A nuclide is a specific type of atom defined by its unique combination of protons (Z),
neutrons (N), and the energy state of its nucleus. Each nuclide represents a distinct nuclear
configuration, which can influence its stability and behavior. For instance, isotopes are
nuclides of the same element that differ in neutron count, leading to variations in mass
and stability. The most stable nuclide is nickel-62, which possesses the highest binding energy per nucleon, making it particularly resilient against decay. Nuclide symbols are
used to represent these atomic species, incorporating the atomic number and mass
number.

Electron Emission
Electron emission is defined as the ejection of electrons from the surface of a material. This process can be stimulated by various factors, including
temperature elevation, radiation, and electric fields. The emitted electrons can play a crucial role in numerous applications, such as in vacuum tubes,
cathode ray tubes, and advanced energy conversion systems. There are several types of electron emission, including thermionic emission, where
electrons are released due to thermal energy, and photoemission, which occurs when electrons absorb photon energy. Field emission, another
significant type, involves the emission of electrons induced by an external electrostatic field. Each of these processes has unique characteristics and
applications, making them essential in fields like electronics and materials science.

Electron Emission and Its Types


Electron emission is the ejection of an electron from the surface of matter. We know that the electrons are attracted to the protons in the nucleus of an
atom. It is this attraction that holds the electrons in place. But if the electrons gain sufficient energy from an external source, the electrons can escape
the metal surface. In this section, we will explore the definition and the types of electron emission.


What is Electron Emission?


Electrons are negatively charged sub-atomic particles responsible for the generation of electricity and magnetism. Metals have free electrons that can
move from one atom to another within the metal. In fact, this is responsible for the excellent electrical conductivity of metals. But if the free electrons try to escape
the metal surface, they are unable to do so. This is because when these negatively charged particles (electrons) try to leave the metal, the surface of
the metal acquires a positive charge. Due to the attraction between the negative and the positive charges, the electrons are pulled back into the metal.
There exist no forces to pull them forward. The electrons are thus forced to stay inside the metal due to the attractive forces. This barrier provided by
the metal surface to prevent escaping of free electrons is called the surface barrier. However, the surface barrier can be broken by providing a certain
minimum amount of energy to the free electrons which increases their kinetic energy and
consequently helps them escape the metal surface. This minimum amount of energy is known as the
work function of the metal. And when the work function is provided to the metal, the consequent
liberation of electrons from the metal surface is known as electron emission.

Photo-Electric Emission / Effect


The photoelectric effect is where electrons are emitted from a material when it is exposed to
electromagnetic radiation, such as ultraviolet light. This effect occurs when light waves strike a
metal surface, causing the ejection of electrons. The process is contingent upon the frequency of the
incident light; only light with a frequency above a certain threshold can initiate electron emission.
Einstein's explanation of the photoelectric effect demonstrated that light can behave both as a wave and as a particle, leading to the development of
solar cells and digital cameras.

PHOTOELECTRIC EFFECT
The photoelectric effect is a phenomenon in which electrons are ejected from the
surface of a metal when light is incident on it. These ejected electrons are
called photoelectrons. It is important to note that the emission of photoelectrons
and the kinetic energy of the ejected photoelectrons is dependent on the frequency of
the light that is incident on the metal’s surface. The process through which
photoelectrons are ejected from the surface of the metal due to the action of light is
commonly referred to as photoemission. The photoelectric effect occurs because the
electrons at the surface of the metal tend to absorb energy from the incident light and use it to overcome the attractive forces that bind them to the
metallic nuclei.
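
Although the text does not quote it, the energy balance described here is summarised by Einstein's photoelectric equation: the maximum kinetic energy of a photoelectron is hf − φ, where h is Planck's constant, f is the frequency of the incident light and φ is the work function of the metal. The sketch below evaluates this with an assumed example work function; no emission occurs when the light is below the threshold frequency.

h = 6.63e-34   # Planck's constant in J s
e = 1.6e-19    # electronic charge in coulombs (used to convert eV to joules)

def max_kinetic_energy(frequency_hz, work_function_ev):
    # Einstein's photoelectric equation: KE_max = h*f - work function
    ke = h * frequency_hz - work_function_ev * e
    return ke if ke > 0 else None   # None means the light is below the threshold frequency

work_function = 2.3   # assumed example work function in eV
for f in (4.0e14, 7.5e14):   # a lower (visible) and a higher (ultraviolet) frequency
    ke = max_kinetic_energy(f, work_function)
    if ke is None:
        print(f"f = {f:.1e} Hz: below the threshold frequency, no photoelectrons")
    else:
        print(f"f = {f:.1e} Hz: photoelectrons emitted with KE up to {ke:.2e} J")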

Applications of the Photoelectric Effect


Used to generate electricity in solar panels. These panels contain metal combinations that allow electricity generation from a wide range of
wavelengths. Motion and Position Sensors: In this case, a photoelectric material is placed in front of a UV or IR LED. When an object is placed in
between the Light-emitting diode (LED) and sensor, light is cut off, and the electronic circuit registers a change in potential difference. Lighting
sensors, such as the ones used in smartphones, enable automatic adjustment of screen brightness according to the lighting. This is because the
amount of current generated via the photoelectric effect is dependent on the intensity of light hitting the sensor. Digital cameras can detect and record
light because they have photoelectric sensors that respond to different colours of light. X-Ray Photoelectron Spectroscopy (XPS): This technique
uses X-rays to irradiate a surface and measure the kinetic energies of the emitted electrons. Important aspects of the chemistry of a surface can be


obtained, such as elemental composition, chemical composition, the empirical formula of compounds and chemical state. Photoelectric cells are used
in burglar alarms. Used in photomultipliers to detect low levels of light. Used in video camera tubes in the early days of television.
Night vision devices are based on this effect. The photoelectric effect also contributes to the study of certain nuclear processes. It takes part in the
chemical analysis of materials since emitted electrons tend to carry specific energy that is characteristic of the atomic source.

Thermionic emission
Thermionic emission is the process by which charged particles, primarily electrons, are
released from a heated electrode. When a metal, typically a cathode, is heated to high
temperatures of around 1,000 °C (1,800 °F) or more, its thermal energy provides sufficient
kinetic energy for some electrons to overcome the work function of the material and escape
into the surrounding vacuum. This phenomenon, historically known as the Edison effect, has
significant applications in various technologies, including vacuum tubes and electron guns.
In these devices, thermionic emission is harnessed to create a flow of electrons, which can
be manipulated for various purposes, such as amplification and signal processing.
Thermionic emission is used in structured materials, such as superlattices of elemental
metals and compound semiconductors, enhancing our understanding of electron behavior
and potential applications in energy conversion processes.

Applications of thermionic emission


Thermionic emission refers to the release of electrons from heated materials, a phenomenon that plays a vital role in various electronic applications.
This process is fundamental to the operation of traditional electron tubes, which serve as sources of electrons in devices like vacuum tubes and cathode
ray tubes. These tubes are essential in amplifying signals and generating images in
older television and computer monitors. In addition to electron tubes, thermionic
emission is utilized in power electronics and high-frequency vacuum transistors,
which are crucial for modern communication technologies. Furthermore, thermionic
converters leverage this emission for electricity generation, converting heat directly
into electrical energy, making them valuable in energy-efficient systems. Moreover,
thermionic emission devices are lightweight and resilient, making them suitable for
use in spacecraft, where they can withstand the harsh conditions of space travel.

Cathode Rays
Cathode rays, or electron beams, are streams of electrons emitted from the negative
electrode (cathode) in a vacuum tube. When a high voltage is applied across two
electrodes in a low-pressure gas environment, these electrons travel towards the
positively charged anode, creating a visible beam. Cathode rays are also fundamental to
the operation of cathode-ray tubes (CRTs), which were widely used in early television
sets and computer monitors, where they produced images by directing electron beams
onto a fluorescent screen. A cathode-ray tube (CRT) is a vacuum tube in which an
electron beam, deflected by applied electric or magnetic fields, produces a trace on a fluorescent screen. In a cathode ray tube, electrons are accelerated
from one end of the tube to the other using an electric field.


When the electrons hit the far end of the tube they give up all the energy they carry due to their speed and this is changed to other forms such as
heat. A small amount of energy is transformed into X-rays. The cathode ray tube (CRT) is an evacuated glass envelope containing an electron gun (a source of electrons) and a fluorescent screen, usually with internal or external means to accelerate and deflect the electrons. Light is produced when the electrons hit the fluorescent screen. The electron beam is deflected and modulated in a manner that allows an image to appear on the screen. The
picture may reflect electrical wave forms (oscilloscope), photographs (television, computer monitor), echoes of radar-detected aircraft, and so on.
Uses of Cathode Ray Tube
Used as the display in traditional television (TV) sets. X-rays are produced when fast-moving cathode rays are stopped suddenly. The screen of a cathode ray oscilloscope, and the monitor of a computer, are coated with fluorescent substances; when the cathode rays fall on the screen, pictures become visible.

Properties of Cathode Rays


Cathode rays consist of electrons, so they carry a negative charge. In a vacuum, cathode rays
travel in straight lines from the cathode to the anode. Cathode rays can be deflected by electric
and magnetic fields, further proving they carry charge. When cathode rays strike certain surfaces, they transfer energy and may produce light or heat.
The study of cathode rays led to the discovery of the electron as a fundamental particle, a major milestone in atomic physics and the foundation for
various technologies, including the cathode-ray tube (CRT) used in early televisions and oscilloscopes.

THE C.R.O (Cathode Ray Oscilloscope)


The electron gun, this consists of the following parts, the cathode – used to emit
electrons, the control grid – this is connected to low voltage supply and is used
to control the number of electrons passing through it towards the anode. The
anode – the anode is used to accelerate the electrons and also focus the electrons
into a fine beam. Since the grid controls the number of electrons moving towards the anode, it consequently controls the brightness of the spot on the screen. As the grid is made more negative, it repels most of the electrons, allowing only a few to reach the screen, so the spot appears dim. When it is made more positive, it lets more electrons through, brightening the spot on the screen.
Deflecting system: This consists of the x and y plates. They are used to deflect the electron beam horizontally and vertically respectively.
Fluorescent screen: This is where the electron beam is focused to form a bright spot. The zinc sulphide coating on the fluorescent screen converts the kinetic energy of the electrons into light, producing a bright spot where the beam lands.
Uses of the C.R.O: An unknown p.d. can be connected to the Y-plates and measured from the deflection of the spot (the number of centimetres deflected multiplied by the voltage gain). The C.R.O is also used to study the waveforms of currents and voltages, and in the manufacture of TV sets.
Example
A C.R.O with the time base switched on is connected across a power supply; the waveform shown below is obtained. The distance between each grid line is 1 cm.
(i) Identify the type of voltage generated by the power source: an alternating voltage.
(ii) Find the amplitude of the voltage generated if the voltage gain is 5 V per cm.
Amplitude = 2 cm; 1 cm = 5 V, so 2 cm is equivalent to (5 × 2) = 10 V


(iii) Calculate the frequency of the power source if the time-base setting on the C.R.O is 5.0 × 10⁻³ s per cm and two complete cycles span 8 cm of the trace.
Time for 2 cycles = 8 × 5.0 × 10⁻³ = 0.04 s. Time for 1 cycle = 0.02 s. Frequency = 1/0.02 = 50 Hz
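
The two readings in this example can be generalised into a small helper. The Python sketch below is illustrative only; the time-base value and trace length are those inferred from the working above, and the routine simply converts centimetres on the screen into a peak voltage and a frequency.

def cro_readings(amplitude_cm, gain_v_per_cm, cycle_length_cm, timebase_s_per_cm):
    peak_voltage = amplitude_cm * gain_v_per_cm    # peak (amplitude) in volts
    period = cycle_length_cm * timebase_s_per_cm   # time for one cycle in seconds
    frequency = 1 / period                         # frequency in hertz
    return peak_voltage, frequency

# Values from the worked example: amplitude 2 cm at 5 V/cm,
# one cycle spanning 4 cm (two cycles in 8 cm) at 5.0 ms per cm
peak, freq = cro_readings(2, 5, 4, 5.0e-3)
print(f"Peak voltage = {peak} V, frequency = {freq:.0f} Hz")   # 10 V, 50 Hz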

X – RAYS
X-rays are a form of high-energy electromagnetic radiation, characterized by their
ability to penetrate various materials, including human tissue. This unique property
allows X-ray imaging to create detailed pictures of the inside of the body, making it
an invaluable tool in medical diagnostics. Commonly used to examine bones and
joints, X-rays help healthcare providers identify fractures, infections, and other
conditions. The process of obtaining X-ray images is quick and painless, utilizing
invisible energy beams that produce images on film or digital media. Recent advancements in technology have expanded the applications of X-rays,
including their use in space exploration. X-rays play a crucial role in both medical and scientific fields, enhancing our understanding of the human
body and the cosmos. X-rays are electromagnetic radiation produced when fast-moving electrons are stopped by dense matter. In the X-ray tube, electrons from the hot cathode are accelerated across the vacuum by a large potential difference. On reaching the anode, the fast-moving electrons hit the tungsten target, which decelerates them, resulting in the production of X-rays. The target should have a high melting point because very high temperatures build up when it is struck, and the high melting point enables the target to withstand these temperatures.

Characteristics of X-Rays
X-rays have enough energy to pass through most objects, including human tissue. This makes them valuable for medical imaging and security scans.
X-rays are ionizing, meaning they have enough energy to remove tightly
bound electrons from atoms, thus ionizing them. This property is both useful
in applications like cancer treatment and potentially harmful, as it can
damage or destroy living tissue.

Production
X-rays are produced through the acceleration of electrons in an X-ray tube,
where a potential difference is applied. This process begins with thermionic
emission, where a current heats the cathode filament, releasing electrons.
These electrons are then accelerated towards a target material, typically
tungsten, within the tube anode. Upon collision with the tungsten nuclei,
the high-energy electrons undergo rapid deceleration, resulting in the emission of X-rays. There are two primary types of radiation generated during
this interaction: characteristic radiation, which occurs when electrons displace inner-shell electrons in the target, and Bremsstrahlung radiation,
produced when electrons are deflected by the electric field of the nuclei. The production of X-rays is crucial in various fields, particularly in medical
imaging, where they allow for the visualization of internal structures.

Applications of X-rays
X-rays are extensively used in radiography to create images of the inside of the body, helping in the diagnosis and monitoring of conditions such as
fractures, infections, and tumors. Techniques like computed tomography (CT) use X-rays to create detailed cross-sectional images. Dentists use X-rays
to inspect teeth and jaw structures for cavities, bone loss, and other issues. X-ray machines are used in airports and other secure areas to scan luggage
and packages for dangerous items.


X-rays are employed in non-destructive testing to inspect the integrity of materials and structures, such as pipelines, aircraft, and buildings. X-ray
crystallography is a technique used to determine the atomic and molecular structure of a crystal by scattering X-rays on the crystal and analyzing the
diffraction pattern.

Safety and Risks


While X-rays are invaluable in many fields, exposure to high doses or prolonged exposure to X-rays can be harmful, increasing the risk of cancer and
other health issues. Therefore, safety protocols are essential to minimize exposure, such as using lead aprons in medical settings and maintaining
adequate shielding in industrial applications.

Types of X – rays
X-rays are a form of electromagnetic radiation that can penetrate various materials, making them invaluable in medical imaging. There are several
types of X-rays, each serving a specific purpose in diagnosing health conditions. The most common types include plain radiography, which captures
static images of the body, and computed tomography (CT) scans, which provide detailed cross-sectional images. Fluoroscopy is another type that
produces real-time moving images, allowing doctors to observe the function of organs and systems. Other specialized X-rays include abdominal, chest,
and dental X-rays, each targeting specific areas for evaluation. For instance, abdominal X-rays help diagnose issues in the digestive system, while chest
X-rays are crucial for assessing lung conditions.

X-rays are a crucial part of the electromagnetic spectrum, categorized into two main types: hard and soft X-rays. Hard X-rays possess higher energies,
typically above 10 keV, and shorter wavelengths, making them ideal for penetrating dense materials. This characteristic allows hard X-rays to be
extensively used in medical imaging, such as CT scans and cancer treatments, where detailed images of bones and tumors are required. In contrast,
soft X-rays have lower energies and longer wavelengths, generally ranging from 0.1 to 10 keV. These properties make soft X-rays particularly effective
for imaging softer tissues, such as those found in biological samples. Researchers are increasingly utilizing soft X-rays in advanced imaging techniques,
allowing for the observation of living cells and tissues without causing significant damage.

Applications of x-rays
X-rays are a vital tool in both medical and non-medical fields, with diverse applications that significantly impact health and safety. In medicine, X-rays
are primarily used for diagnostic purposes, helping to identify conditions such as broken bones, infections, and tumors. They play a crucial role in
cancer treatment, where high-energy X-rays are employed in radiation therapy to target and destroy cancerous cells by damaging their DNA. Beyond
healthcare, X-rays have important applications in security and industrial settings. They are utilized in security systems to scan luggage and cargo,
ensuring safety in transportation. Additionally, X-rays are employed in research and industrial imaging, allowing for non-destructive testing of
materials and components. Advancements in X-ray imaging technology promise to enhance the precision and
effectiveness of X-ray applications, paving the way for improved diagnostics and treatment options in the future.

Health hazards of X – rays


X-rays are invaluable in medical imaging, aiding in the diagnosis of various conditions. However, exposure to ionizing radiation from X-rays poses
significant health risks. High levels of radiation can lead to acute effects such as skin reddening, hair loss, and even severe tissue damage. These effects
are rare but can occur with excessive exposure. Long-term exposure to X-rays is particularly concerning, as it has been linked to an increased risk of
cancer. The dose-dependent nature of these risks means that the likelihood of adverse effects varies based on the amount of radiation absorbed and
the specific body part being imaged. For instance, radiation to the head and neck can result in complications like dry mouth and swallowing difficulties.


While X-rays are essential for diagnosing serious health issues, it is crucial to balance their benefits against potential health hazards. Patients should
discuss their imaging needs with healthcare providers to minimize unnecessary exposure.

Safety precautions of X – rays


X-rays are a valuable diagnostic tool, but they come with safety considerations due to their potential to cause DNA mutations, which can lead to cancer.
Patients should be informed about the risks, especially pregnant women, as radiation can harm a developing fetus. It is crucial to communicate these
risks before undergoing any X-ray procedure. To minimize exposure, healthcare professionals must adhere to strict safety protocols. This includes
using protective equipment such as lead aprons and collars for patients, and ensuring that no personnel routinely hold or support animals or film
during X-ray procedures. Proper structural shielding, including lead-lined walls and floors, is essential to protect both patients and staff from
unnecessary radiation exposure. Patients should also be advised to remove any metal objects, such as jewelry and eyeglasses, before the procedure.

4.6 NUCLEAR PROCESSES


Learning Outcomes
a) Understand the processes of nuclear fission and fusion and the associated energy changes(u)
b) Understand the spontaneous and random nature of nuclear decay and interpret decay data in terms of half-life (k, u,s)
c) Know the applications of radioactivity and the dangers associated with exposure to radioactive materials. (k,u)
d) Understand and appreciate that there are significant social, political, and environmental dimensions associated with use of nuclear power.
(u,v/a)

Introduction
Nuclear processes encompass a variety of reactions that involve the interaction of
atomic nuclei. A nuclear reaction occurs when two nuclei collide or when a
nucleus interacts with a subatomic particle, resulting in the formation of new
nuclides. The two primary types of nuclear reactions are fission and fusion. In
fission, a heavy nucleus, such as uranium, splits into smaller nuclei upon neutron
collision, releasing significant energy in the form of heat and radiation. Fusion
involves the merging of two light nuclei to form a heavier nucleus, a process that
powers stars, including our Sun. Both fission and fusion yield millions of times
more energy than conventional energy sources, making them crucial for energy
production. Understanding these nuclear processes is essential for advancements
in energy technology and medical applications, as they hold the potential for
sustainable energy solutions and innovative treatments.

RADIOACTIVITY
Radioactivity is the phenomenon where unstable atomic nuclei lose energy by emitting radiation, a process known as radioactive decay. This decay can
result in the release of various types of radiation, including alpha and beta particles. Materials that contain these unstable nuclei are classified as
radioactive, and their emissions can have significant implications for health and safety. The applications of radioactivity are vast, spanning medical,
industrial, and research fields. Radioisotopes are crucial in medical diagnostics and therapies, allowing for targeted treatments and imaging
techniques.


What is Radioactivity?
Due to nuclear instability, an atom's nucleus exhibits the phenomenon of radioactivity: energy is lost as radiation emitted from the unstable nucleus. Two competing forces act in the nucleus, the electrostatic force of repulsion between the protons and the strong nuclear force of attraction that binds the nucleons together, and both are extremely strong. The chance of instability increases as the nucleus becomes larger, because the repulsion between the many protons grows while the short-range nuclear attraction does not keep pace. That is why the atoms of very heavy elements such as uranium and plutonium are extremely unstable and undergo radioactivity.

Alpha Decay (α)


In alpha decay, the nucleus emits an alpha particle, which consists of two protons and two neutrons (like a helium nucleus). This emission reduces
the mass of the nucleus and transforms the atom into a different element with a lower atomic number. Example: Uranium-238 decays to Thorium-234
through alpha emission. An alpha particle is a helium atom, which has lost 2 electrons. An alpha particle has mass 4 and atomic number 2, which is
positively charged. Alpha particles (α) are a type of ionizing radiation that consists of two protons and two neutrons bound together. They are identical
to the nucleus of a helium-4 atom and carry a +2 charge. An alpha particle is made up of two protons and two neutrons, giving it a relatively large
mass. Since it has two protons, it carries a positive charge of +2. Alpha particles are relatively heavy compared to other types of radiation, such as
beta particles or gamma rays. Their mass is about 4 atomic mass units. They typically have kinetic energies in the range of 4-9 MeV (mega-electron
volts), which is quite high in energy compared to other forms of radiation.
Alpha particles have low penetrating power. They can be stopped by a few centimeters of air, a sheet of paper, or even the outer dead layer of human
skin. This limited range means alpha radiation is generally not harmful when outside the body but can be very dangerous if alpha-emitting materials
are inhaled, ingested, or enter the bloodstream.
Alpha particles have high ionizing power because of their large mass and charge. As they travel through matter, they create many ion pairs by knocking
electrons off atoms, causing strong ionization in a short distance. This ionization capability makes alpha particles effective at damaging biological
tissues if they come into direct contact with living cells.
Due to their mass, alpha particles travel relatively slowly compared to beta particles and gamma rays. Their speed is usually around 5-10% the speed
of light. Emission of an alpha particle from a radioactive nucleus reduces the atomic number of the element by 2 and the atomic mass by 4. This
process transforms the original atom into a different element, typically shifting it down two places in the periodic table (e.g., Uranium-238 decaying
to Thorium-234).
Due to their high ionizing power, alpha particles can be detected with devices sensitive to ionization, such as Geiger-Müller tubes and scintillation
counters.

When an unstable nucleus emits an alpha particle, the mass number reduces by 4 and the atomic number by 2. For example, a radioactive substance ²³⁸₉₂X undergoes decay and emits an alpha particle to form Y.
Write an equation for the process.

²³⁸₉₂X → ²³⁴₉₀Y + ⁴₂He
Mass numbers: 238 = x + 4, so x = 234. Atomic numbers: 92 = y + 2, so y = 90.


Application of alpha particles


In healthcare, targeted alpha therapy is a promising treatment for solid tumors. This method involves attaching alpha-emitting radionuclides to tumor-
targeting molecules, such as antibodies, allowing for precise destruction of cancer cells while minimizing damage to surrounding healthy tissue. Alpha emitters such as Americium-241 are commonly used in smoke detectors, where their ionizing properties help detect smoke particles.
Furthermore, alpha radiation is utilized in thickness gauges to ensure quality control in manufacturing processes.

Beta Decay (β)


In beta decay, a neutron in the nucleus transforms into a proton, releasing a beta particle (which is an electron or positron) and an antineutrino or
neutrino. This process increases the atomic number by one without changing the atomic mass, turning the atom into a different element. Example:
Carbon-14 decays into Nitrogen-14 through beta emission.
Beta particles are fast moving electrons. When radioactive nuclei decay by emitting a beta particle, mass number is not affected but the atomic number
increases by one.
Beta particles (β) are high-energy, high-speed particles emitted from the nucleus of an atom undergoing radioactive decay. They are of two
types: beta-minus (β−) particles, which are electrons, and beta-plus (β+) particles, which are positrons.
A beta-minus particle is an electron, carrying a negative charge (-1). A beta-plus particle, or positron, is the antimatter counterpart of an electron,
carrying a positive charge (+1). Beta particles are much lighter than alpha particles, with a mass approximately 1/1836 of a proton’s
mass. This small mass allows them to travel faster than alpha particles. The energy of beta particles can vary widely, generally in the range of a few
keV (kilo-electron volts) to several MeV (mega-electron volts), depending on the specific decay process. Beta particles have moderate penetrating
power. They can penetrate several millimeters into human tissue or a few centimeters of air.
Unlike alpha particles, beta particles need denser materials to stop them, such as plastic, glass, or a thin sheet of metal like aluminium foil. Beta radiation can cause
burns or radiation damage to skin if in close contact, and it can penetrate superficial tissue layers. Beta particles have moderate ionizing power, less
than alpha particles but greater than gamma rays. They can ionize atoms and molecules as they pass through matter, but since they are lighter, they
tend to spread their energy over a greater distance. Due to their low mass, beta particles travel at relatively high speeds, often close to the speed of
light, depending on their energy level.
Beta-minus Decay: In beta-minus decay, a neutron in the nucleus converts into a proton, releasing a beta-minus particle (electron) and an
antineutrino. This increases the atomic number of the element by 1, changing it into a different element (e.g., Carbon-14 decays into Nitrogen-14).
Beta-plus Decay: In beta-plus decay, a proton converts into a neutron, emitting a beta-plus particle (positron) and a neutrino. This decreases the atomic
number by 1, also resulting in a new element.
Beta particles can be detected using Geiger-Müller counters, scintillation detectors, and other radiation-detecting devices that measure ionization. Beta
particles are light, high-speed particles with moderate ionizing power and penetrating ability. They can travel through air and penetrate shallow
materials but are stopped by denser materials like metal or plastic. Beta radiation is useful in medical applications and scientific research but requires
shielding to prevent tissue damage.

E.g. an unstable nucleus ²²⁶₈₈X decays to form a nucleus Y by emitting a beta particle. Write down an equation for the process.

²²⁶₈₈X → ²²⁶₈₉Y + ⁰₋₁e
Mass numbers: 226 = n + 0, so n = 226. Atomic numbers: 88 = m + (−1), so m = 89.
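The nucleon bookkeeping used in the alpha and beta examples above can be automated with a few lines of code. The Python sketch below only illustrates the balancing rule (mass numbers and atomic numbers must balance on both sides); the function name is our own.

```python
# Mass-number and atomic-number bookkeeping for single decays (illustrative helper).
def daughter_nuclide(mass_number, atomic_number, decay_type):
    """Return (A, Z) of the daughter nucleus after one decay event."""
    if decay_type == "alpha":          # emits a 4/2 He nucleus
        return mass_number - 4, atomic_number - 2
    if decay_type == "beta-minus":     # emits a 0/-1 electron
        return mass_number, atomic_number + 1
    if decay_type == "gamma":          # no change in A or Z
        return mass_number, atomic_number
    raise ValueError("unknown decay type")

print(daughter_nuclide(238, 92, "alpha"))        # (234, 90), as in the alpha example
print(daughter_nuclide(226, 88, "beta-minus"))   # (226, 89), as in the beta example
```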


Applications of beta particles


In the medical field, beta-emitting radiopharmaceuticals are increasingly utilized in cancer treatments, particularly for conditions like eye and bone
cancer. Techniques such as radio immunotherapy and bone-seeking radiopharmaceutical therapy leverage the properties of beta particles to target and
destroy malignant cells effectively. In addition to their therapeutic uses, beta particles serve as valuable tools in industrial applications. They are
employed in quality control processes, such as measuring the thickness of materials, ensuring product consistency and safety. Furthermore, beta
particles play a role in nuclear reactors, contributing to clean energy generation.

Gamma Decay (γ)


In gamma decay, the nucleus emits a gamma ray, a high-energy form of electromagnetic radiation. Gamma emission usually occurs after alpha or beta
decay when the nucleus remains in an excited energy state and needs to release excess energy to stabilize. Example: Cobalt-60, after beta decay, emits
gamma rays as it stabilizes.

GAMMA RAYS
These are electromagnetic radiation with the shortest wavelength. When unstable nuclei decay by emitting gamma rays, the mass and atomic number
are not affected. Gamma radiation (γ) is a form of high-energy electromagnetic radiation emitted by the nuclei of radioactive atoms. Unlike alpha and
beta particles, gamma rays have no mass and no charge, which gives them unique properties. Gamma rays are photons, packets of electromagnetic energy similar to visible light but with much higher energy. They carry no electrical charge, meaning they are neutral, and they are massless, which allows them to travel at the speed of light. Gamma rays have very high energy, typically ranging from keV (kilo-electron volts) to several MeV
(mega-electron volts). Their energy is higher than that of X-rays, making them one of the most energetic forms of electromagnetic radiation. Gamma
rays have extremely high penetrating power. They can pass through most materials, including human tissue, air, and even thin sheets of metal. Thick,
dense materials such as lead or several centimeters of concrete are needed to significantly reduce or block gamma radiation. This high penetrability
makes them more hazardous when exposure is uncontrolled. While gamma rays have less ionizing power than alpha or beta particles, they can still
ionize atoms indirectly. As they pass through matter, they can eject electrons from atoms, which then cause secondary ionization. This indirect ionizing
capability means that while gamma rays do not cause as much direct ionization as alpha or beta particles, they can still penetrate deeply and cause
widespread damage within materials, including biological tissue. Gamma rays, being electromagnetic waves, travel at the speed of light (approximately
3.0×108 ms-1 in a vacuum). Gamma radiation is often emitted alongside alpha or beta decay when the nucleus has excess energy. The emission of
gamma rays allows the nucleus to transition from an excited, higher-energy state to a lower-energy, more stable state without changing the atomic
number or mass of the nucleus. Gamma radiation can be detected with instruments sensitive to high-energy photons, such as Geiger-Müller counters,
scintillation detectors, and gamma spectrometers.

Applications of gamma rays


In medicine, they are crucial for imaging different organs, such as the brain, liver, and kidneys, using specific chemical forms. Additionally, gamma
rays play a significant role in radiotherapy, targeting and destroying cancerous cells, making them essential in oncology. Beyond healthcare, gamma
rays are utilized for sterilizing medical equipment and food, ensuring safety and longevity. They also serve as tracers in medical studies and are
employed in blood flow research. In environmental science, aerial and ground-based gamma-ray spectroscopy aids in geological mapping and mineral
exploration, helping identify contamination. Moreover, advancements in technology have led to innovative applications, such as drone-integrated
gamma-ray sensors for mineral exploration and contamination mapping. Used in cancer treatment (radiotherapy) and in diagnostic imaging (e.g.,
gamma cameras in nuclear medicine). Used for inspecting materials and structures (e.g., radiography in welding) and in food sterilization. Important


in astronomical observations and fundamental physics research. Gamma rays are high-energy, penetrating electromagnetic waves that are highly
effective at passing through most materials and pose a significant health risk due to their deep penetrative ability. Though less ionizing than alpha or
beta particles, they can still cause extensive tissue damage when absorbed and thus require dense shielding for safe handling.

Half-Life of a Radioactive Material


The half-life of a radioactive material is the time required for half of the radioactive atoms in a sample
to decay. During this time, the amount of the original radioactive substance decreases, while the
amount of decay products increases. Each type of radioactive material has a characteristic half-life,
which can range from fractions of a second to millions of years.

Understanding the decay curve


Radioactive decay follows an exponential decay model, meaning that the amount of substance does
not decrease by a fixed amount each time but rather by a fixed percentage. After each half-life, 50%
of the remaining radioactive atoms decay, reducing the sample amount progressively but never completely to zero. If a sample has a half-life of 10
years, then after 10 years, half of the original material will remain. After 20 years (two half-lives), only a quarter (25%) of the original material will
remain, and so on. A longer half-life generally indicates greater stability, as the material decays more slowly. Conversely, a short half-life means the material is less stable and decays quickly.
Suppose we have 100 grams of a radioactive substance with a half-life of 5 years: After
5 years: 50 grams remain, after 10 years: 25 grams remain, and after 15 years: 12.5
grams remain. This pattern continues, with each interval of 5 years reducing the
remaining amount by half. When a graph of count rate (or of the number of radioactive nuclei remaining) against time is drawn, the half-life of the radioactive nuclide can be determined from the graph.
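As a minimal numerical illustration of the 100 g example above (half-life of 5 years), the Python sketch below evaluates N = N₀ × (1/2)^(t/T½) at a few times; the variable names are illustrative.

```python
# N = N0 * (1/2) ** (t / half_life), using the 100 g / 5-year example above.
N0 = 100.0        # initial mass in grams
half_life = 5.0   # years

for t in (0, 5, 10, 15):
    remaining = N0 * 0.5 ** (t / half_life)
    print(t, "years:", remaining, "g remain")   # 100.0, 50.0, 25.0, 12.5
```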

Activity 1
In a certain town, people are concerned about the waste disposal from the factory into
the nearby lake, which is their source of water for home use. They raised this issue to
the chairperson Local Council 1 (LC1) who directed the management of the factory to stop disposing waste into the lake. A scientist was contacted to
investigate the presence of radioactive material in the water. The scientist found out that the water was indeed radioactive as shown in Table 1 and
Table 2.

Table 1
Time/days:                   0      5      10     15     20     25     30
Activity/counts per minute:  1200   740    440    260    160    90     60
Although the water from the lake remains radioactive for a long time, the scientist recommended that water will be safe for use again when the activity
is less than 38 counts per minute.
Task: As a student of physics;
(a) Advise the chairperson LC1 about the time the community will have to wait for the water to be safe for use again.
(b) Sensitize the members of the community about the risks associated with radioactive materials and how such materials should be handled.


Activity 2
The following values were obtained from the readings of a rate meter for a radioactive isotope of iodine, as shown in Table 2.
Table 2
Time (hours):          0     2500   5000   7500   10000   12500   17250   20000
Disintegrations/min:   15    11     8      5      4.0     3.2     1.6     1.2
Estimate the half-life of iodine and suggest the possible precautions to undertake.
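One possible way to estimate a half-life from tabulated activity readings, such as those in Table 1 or Table 2, is to note that A = A₀ × (1/2)^(t/T½), so a plot of ln A against time is a straight line. The Python sketch below applies this idea to the Table 1 readings; it is one approach a student might try, not a prescribed method.

```python
import math

# Table 1 readings: activity in counts per minute against time in days.
times = [0, 5, 10, 15, 20, 25, 30]
activity = [1200, 740, 440, 260, 160, 90, 60]

# Least-squares fit of ln(A) = ln(A0) - (ln 2 / T_half) * t
n = len(times)
ys = [math.log(a) for a in activity]
x_mean = sum(times) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(times, ys)) / \
        sum((x - x_mean) ** 2 for x in times)
half_life = -math.log(2) / slope
print("Estimated half-life:", round(half_life, 1), "days")

# Time for the activity to fall below the recommended 38 counts per minute.
t_safe = half_life * math.log(activity[0] / 38) / math.log(2)
print("Water safe after roughly", round(t_safe), "days")
```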

Applications of Half-Life
Carbon dating: The half-life of Carbon-14 (about 5,730 years) is used in radiocarbon dating to estimate the age of ancient organic materials (a short illustrative calculation is given below).
Medical treatments: Short half-life isotopes are often used in medical imaging and treatment so that they decay quickly and minimize radiation exposure to the patient.
Nuclear waste management: The half-lives of radioactive materials determine how long nuclear waste must be safely stored before it becomes harmless.

Importance of Understanding Half-Life
Understanding half-life helps predict how long radioactive materials remain hazardous, manage exposure risks, and utilize these materials safely in applications ranging from medicine to archaeology.
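As a brief illustration of the carbon-dating idea listed above: if a sample retains a fraction f of its original Carbon-14, its age is t = T½ × log₂(1/f). The fractions used in the sketch below are assumed values chosen only for illustration.

```python
import math

HALF_LIFE_C14 = 5730  # years, the approximate value quoted above

def age_from_fraction(fraction_remaining):
    """Age of a sample from the fraction of its original Carbon-14 still present."""
    return HALF_LIFE_C14 * math.log(1 / fraction_remaining, 2)

print(round(age_from_fraction(0.25)))  # 11460 years (two half-lives)
print(round(age_from_fraction(0.10)))  # roughly 19000 years
```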
Nuclear energy is the energy stored in the core (nucleus) of an atom, the tiny particle of which all matter in the universe is made. Almost all the mass of an atom is concentrated in its nucleus, which is made up of neutrons and protons. An enormous amount of energy is stored in the bonds that bind the nucleus together. Nuclear energy is released by nuclear reactions, either fission or fusion.
In nuclear fusion, atoms combine to form a larger atom. In nuclear fission, the division of atoms takes place to form smaller atoms by releasing energy.
Nuclear power plants produce energy using nuclear fission. The Sun produces energy using the mechanism of nuclear fusion.

Uses and dangers of radioactivity


In medicine, radioisotopes like cobalt-60 are utilized to suppress immune reactions in organ transplants, while nuclear medicine employs radioisotopes
for diagnosis and therapy, revolutionizing patient care. In industry, radiation is essential for quality control and material testing, ensuring safety and
reliability in products. Food irradiation uses radioactive sources to eliminate harmful bacteria and extend shelf life, enhancing food safety. Radioactivity
is pivotal in research, aiding scientists in developing new technologies and medicines that improve health outcomes. The energy sector also harnesses
radiation for electricity generation, contributing to a sustainable energy future.

Dangers of radioactivity
Radioactivity poses significant health risks due to its ability to damage living tissue and DNA. Ionizing radiation, which includes gamma rays and X-
rays, can alter atomic structures, leading to cellular damage. High doses of radiation are particularly dangerous, as they can result in severe health
issues, including cancer and even death. Historical cases have shown that exposure to radiation can lead to burns, hair loss, and increased cancer rates
among affected individuals. In radiation emergencies, radioactive dust and smoke can spread over large areas, contaminating food and water supplies.
Inhalation or ingestion of radioactive materials can lead to acute health effects, including radiation sickness. The long-term consequences may include
chronic health conditions and increased cancer risk.

Nuclear fission and fusion


Nuclear fission: This is the splitting of the nucleus of a heavy atom into two lighter nuclei. The process can be started by bombarding a heavy nucleus with a neutron. The products are two lighter nuclei and extra neutrons, which can make the process continue. Nuclear fission is a
powerful reaction in which the nucleus of an atom, typically uranium-235, splits into two or more smaller nuclei, releasing a significant amount of


energy. This process occurs when a neutron collides with a heavy nucleus, causing it to
become unstable and break apart. The fission reaction not only produces smaller atoms,
known as fission products, but also emits gamma photons and additional neutrons, which
can trigger further fission events in a chain reaction. Nuclear fission is the fundamental
process that powers nuclear reactors, providing a reliable source of energy for electricity
generation. Most nuclear power plants utilize uranium as fuel, capitalizing on the energy
released during fission to produce steam that drives turbines. As the demand for energy
continues to grow, advancements in nuclear technology, including Generation IV
reactors, aim to enhance safety and efficiency in harnessing this energy source.

Nuclear Fusion: Nuclear fusion is a process where two light atomic nuclei combine to
form a heavier nucleus, releasing vast amounts of energy in the process. This reaction is
the fundamental energy source of stars, including our Sun, where hydrogen atoms fuse
to create helium. The mass difference between the reactants and the product results in
energy release, as described by Einstein's equation, E=mc².
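Since the paragraph above quotes Einstein's equation E = mc², a tiny numerical sketch may help; the mass difference used below is an assumed, illustrative value.

```python
# E = m * c**2 for an assumed (illustrative) mass difference.
c = 3.0e8          # speed of light in m/s
delta_m = 1.0e-29  # assumed mass difference in kg

energy_j = delta_m * c ** 2
print(energy_j)    # about 9.0e-13 J
```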

Nuclear energy
Nuclear energy is a vital clean energy source, providing around 9% of the world's
electricity and supporting climate goals and national security. It is generated through
nuclear fission, where the nucleus of an atom is split to release energy. This process not
only produces electricity but also contributes significantly to low-carbon energy
production, accounting for about one-quarter of the world's low-carbon electricity. As
global demand for energy rises, nuclear power is experiencing a resurgence, with
projections indicating record production levels by 2025. Increased investment and
advancements in technology are driving this growth, positioning nuclear energy as a key
player in the clean energy transition. Uganda is on a promising path towards harnessing
nuclear energy, with plans to begin power generation by 2031. The journey began with
the enactment of the Atomic Energy Act in 2008, which established a regulatory framework for nuclear energy use. The government aims to generate
at least 1,000 megawatts (MW) from nuclear sources, diversifying its energy portfolio to meet growing demands. In 2024, Uganda hosted an
International Atomic Energy Agency (IAEA) review to assess its uranium production capabilities, a crucial step in its nuclear ambitions. The first
nuclear project is planned for Buyende, located about 150 km from Kampala, marking a significant milestone in the country's energy strategy. Uganda
is exploring potential partnerships, including a nuclear deal with Russia, to bolster its nuclear infrastructure.

Nuclear reactors
Nuclear reactors are essential devices that initiate and control nuclear fission, a process where atoms split to release energy. This energy is harnessed
for various applications, including commercial electricity generation and marine propulsion. By August 2023, the United States operated 93 commercial
nuclear reactors across its 54 power plants, highlighting the significance of nuclear energy in the national energy landscape. Recent discussions have
emerged regarding the extension of the operational life of nuclear reactors, such as those in Heysham, where opinions diverge between supporting
the extension for energy needs and concerns over safety and environmental impact. Meanwhile, Canada is exploring the commercialization of small


reactor designs, which could revolutionize energy production with a 15-year fuel cycle. Mobile nuclear reactors are also gaining attention for their potential
to provide reliable energy solutions for military operations, addressing the growing demand for portable power sources in various industries.

Nuclear Reactions
Nuclear reactions cause changes in the nucleus of atoms, which in turn leads to changes in the atom itself. Nuclear reactions convert 1 element into a
completely different element. If a nucleus merely interacts with another particle and then separates without the character of either nucleus changing, the process is called nuclear scattering rather than a nuclear reaction, and it does not imply radioactive decay. One of the most notable nuclear reactions is the fission reaction that occurs in fissionable materials, producing induced nuclear fission.
A nuclear chain reaction occurs when a single nuclear reaction causes one or more subsequent reactions, leading to a self-sustaining series of reactions.
This process is fundamental to both nuclear reactors and nuclear weapons.
The most common type of nuclear chain reaction involves nuclear fission,
where a heavy nucleus, such as uranium-235 or plutonium-239, splits into
smaller nuclei, releasing energy and neutrons. These released neutrons can
then initiate further fission reactions in nearby nuclei, perpetuating the chain
reaction.

Procedures in a Nuclear Chain Reaction


A neutron collides with a heavy atomic nucleus (e.g., uranium-235), causing
it to become unstable and split into two smaller nuclei (fission products)
along with a release of energy and additional neutrons. This fission releases
a large amount of energy in the form of kinetic energy and gamma radiation.
The fission event releases two or three neutrons, which can then collide with
other nearby fissile nuclei, causing them to undergo fission as well. If each fission event leads to at least one more fission, the reaction becomes self-
sustaining, creating a chain reaction. A sufficient amount of fissile material (critical mass) is required to sustain the chain reaction, as too few atoms
would make it difficult for neutrons to continue colliding with nuclei. In nuclear reactors, the rate of the chain reaction is regulated using control
rods that absorb excess neutrons, allowing the reaction to proceed at a steady rate to produce energy safely. In nuclear weapons, the chain reaction is
uncontrolled, leading to an explosive release of energy within a fraction of a second.
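The statement that the reaction is self-sustaining "if each fission event leads to at least one more fission" can be pictured with a simple multiplication-factor model: if each fission causes, on average, k further fissions, then generation g contains N₀ × k^g fissions. The Python sketch below is a toy model of that reasoning, not a reactor simulation.

```python
# Toy chain-reaction model: fissions in generation g = N0 * k**g
def fissions_per_generation(n0, k, generations):
    return [n0 * k ** g for g in range(generations)]

print(fissions_per_generation(1, 2.0, 6))  # grows rapidly: 1, 2, 4, 8, 16, 32
print(fissions_per_generation(1, 1.0, 6))  # steady (critical): 1, 1, 1, 1, 1, 1
print(fissions_per_generation(1, 0.5, 6))  # dies away (sub-critical)
```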

Nuclear Reactions -Types


Inelastic scattering: This process takes place when energy is transferred to the target nucleus. It occurs above a threshold energy, called the inelastic threshold energy, which corresponds to the energy of the first excited state of the nucleus.
Elastic scattering: This occurs when energy is exchanged between an incident particle and a target nucleus without exciting it. It is the most important process for slowing down neutrons. In elastic scattering, the total kinetic energy of the system is conserved.
Transfer reactions: The absorption of a particle followed by the discharge of one or two particles is referred to as a transfer reaction.
Capture reactions: When a nucleus captures a neutral or charged particle and then discharges γ-rays, the process is termed a capture reaction. Radioactive nuclides are produced by neutron capture reactions.

Nuclear Fusion Reaction


This is the union of two light atomic nuclei to form a heavier nucleus. It involves the release of energy.


Applications of Radioactivity
Radioactive isotopes are used in radiotherapy to target and destroy cancerous cells.
Tracers with radioactive isotopes are used in imaging techniques like PET scans.
Radioisotopes help in non-destructive testing of materials for flaws.
Nuclear power plants use fission to generate electricity.
Carbon-14 dating in archaeology and other isotopic dating techniques are used in geology and paleontology.

Dangers Associated with Radioactivity


Exposure to high levels of radioactivity poses significant health risks.
High exposure can cause acute radiation syndrome (nausea, vomiting, hair loss, and even death). Increases the risk of cancer, genetic mutations, and
damage to tissues and organs.
Radioactive waste can contaminate soil, water, and ecosystems, impacting plant, animal, and human life. Proper shielding, limiting exposure time,
and maintaining distance are essential safety practices when working with radioactive materials.

Social, Political, and Environmental Dimensions of Nuclear Power Use


Many communities have concerns about nuclear safety, often influenced by past nuclear accidents (e.g., Chernobyl, Fukushima). Ensuring transparency
and safety is crucial to gaining public support. The dual-use nature of nuclear technology (for power generation and weaponry) raises concerns about
the spread of nuclear weapons. Treaties such as the Non-Proliferation Treaty (NPT) regulate nuclear technology and promote peaceful use. Long-lived
radioactive waste requires secure, stable storage to prevent environmental contamination over thousands of years. Nuclear energy has low greenhouse
gas emissions compared to fossil fuels, making it attractive for reducing climate impact.

Nuclear and radiation accidents and incidents


Nuclear and radiation accidents have raised significant safety concerns since the inception of nuclear power. As of 2014, over 100 serious incidents
have been recorded, with 57 classified as severe. The Chernobyl disaster in 1986 remains the most catastrophic, resulting in widespread radiation
exposure and long-term health effects. In contrast, the Fukushima accident in 2011, while serious, did not lead to the same level of health threats for
workers as seen in Chernobyl. The Three Mile Island incident in the United States also marked a pivotal moment in nuclear safety, where a partial
meltdown resulted in a minor radiation release. These events have prompted international organizations like the IAEA to enhance emergency
preparedness and response strategies.

4.5 DIGITAL ELECTRONICS


Learning Outcomes
a) Understand how resistors are used to make potential dividers in control and logic circuits (u,s)
b) Understand elementary logic and memory circuits that exploit devices such as bistable and astable switches, logic gates and resistors as
potential dividers (u,s)
c) Know that logic circuits are able to store and process binary information and that this can be exploited in an increasingly wide variety of
digital instruments (k, u,s)


Introduction
Digital electronics is a vital field that focuses on the study and application of digital signals. It encompasses the design and engineering of devices that
utilize these signals, which are essential for modern computing and communication systems. Digital electronics operates on binary data, allowing for
efficient processing and manipulation of information, such as numbers and text. The discipline includes various components, such as logic gates,
combinational and sequential circuits, which form the backbone of digital systems. Recent advancements in materials science are pushing the
boundaries of digital electronics, particularly in high-temperature applications. Innovations beyond traditional silicon technologies are being explored
to enhance the resilience and performance of electronic devices, ensuring that digital
electronics continues to evolve and meet the demands of modern technology.

Potential Divider
A potential divider, also known as a voltage divider, is a fundamental electronic circuit
that reduces a higher voltage to a lower voltage. This is achieved by using resistors in
series, where the output voltage is a fraction of the input voltage, determined by the
resistor values. This simple yet effective design is widely utilized in various
applications, including sensor circuits and signal processing. In practical terms, a
potential divider allows for the adjustment of voltage levels, making it essential in
electronic devices. For instance, it can be used to calibrate sensor outputs or to create reference voltages for operational amplifiers. It is based on the
principle that the potential drop across a segment of a wire of uniform cross-section carrying a constant current is directly proportional to its length.
It is used in the volume control knob of music systems. Sensory circuits using light-dependent resistors and thermistors also use potential dividers.
They can be used as audio volume controls, to control the temperature in a freezer or monitor changes in light in a room. Two resistors divide up the
potential difference supplied to them from a cell. The proportion of the available p.d. that the two resistors get depends on their resistance values.
• Vin = p.d. supplied by the cell
• Vout = p.d. across the resistor of interest (R1)
• R1 = resistance of the resistor of interest
• R2 = resistance of the other resistor
so that Vout = Vin × R1 / (R1 + R2).

Potential Divider
A potential divider consists of two resistors (R1 and R2 = S) in series. The current I through both resistors is the same. The potential difference across resistor R1 is V1 and across R2 = S is V2. The potential differences across the resistors can be written using Ohm's law.
From Ohm's law, V1 = I R1 and V2 = I R2; dividing the two equations gives V1/V2 = R1/R2, and since V = V1 + V2, the output taken across R2 is V2 = V × R2 / (R1 + R2).
Using these equations, it can be seen that the total potential difference (V) is divided between the two resistors according to the ratio of their resistances. By choosing appropriate resistor values, the potential difference across each resistance can be set as required. A potential divider (or voltage divider) is a simple circuit used to divide an input voltage into smaller output voltages.


This is achieved by connecting two or more resistors (or other components) in series across
a voltage source. The divided voltage is taken from the junction between these components,
and its value depends on their relative resistances.
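A small numerical sketch of the divider relation described above, Vout = Vin × R2/(R1 + R2), with component values chosen purely for illustration:

```python
# Potential divider with the output taken across R2: Vout = Vin * R2 / (R1 + R2)
def divider_output(v_in, r1, r2):
    return v_in * r2 / (r1 + r2)

# Example: a 12 V supply across a 10 kΩ and a 20 kΩ resistor in series.
print(divider_output(12.0, 10e3, 20e3))   # 8.0 V across the 20 kΩ resistor
```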

Applications of Potential Dividers


Adjustable Voltage Supplies (Variable Voltage Sources): A variable resistor
(potentiometer) can be used in a potential divider to produce a range of output voltages.
They are used in power supplies to adjust the output voltage according to specific
requirements. For example, in lab equipment where different voltages are needed for testing circuits.
Volume Control in Audio Equipment: A potentiometer is used as a potential divider to adjust the voltage that controls sound intensity. In radios,
speakers, and other audio devices, the volume control knob is essentially a variable potential divider that adjusts the audio signal level reaching the
speaker.
Sensors and Transducers: Many sensors, such as thermistors (temperature sensors) and light-
dependent resistors (LDRs), are connected in a potential divider circuit to produce an output voltage
that varies with environmental conditions. In light-sensitive or temperature-sensitive circuits,
potential dividers provide a varying voltage output that can be used in control systems, like
adjusting brightness in automatic lighting or regulating temperature.

Battery Level Indicators: A potential divider can scale down the voltage of a battery to a safer, readable level for a microcontroller or ADC (Analog-
to-Digital Converter). Used in battery-operated devices to monitor and display battery voltage levels. For example, in mobile phones, laptops, and
electric vehicles, battery indicators often rely on a potential divider to measure the state of charge.
Reference Voltage for Comparators: A potential divider can create a fixed reference voltage for a comparator circuit. In control systems, such
as thermostats, this reference voltage is compared with an input signal to trigger an action, like turning a heater on or off based on temperature levels.
Signal Conditioning: Potential dividers can be used to scale down higher voltages to a level that’s compatible with low-voltage circuitry or sensitive
input devices. Common in digital electronics, where signals need to be scaled to safe levels for microcontrollers or analog-to-digital converters without
damaging sensitive components.
Level Shifting: In some digital circuits, a potential divider adjusts signal voltages to match different logic levels. Essential in interfacing components
that operate at different voltage levels, such as connecting 5V and 3.3V logic circuits in integrated systems like microcontrollers, sensors, and
communication modules.

Variable Resistor / Rheostat


A variable resistor (rheostat) can be used to control current in a circuit. A variable resistor consists of
a length of resistance wire and an adjustable sliding contact. Without switching off the circuit, the
resistance can be varied using a sliding contact. A rheostat is made using a resistance wire, which is
wound around circular insulation. A sliding contact is placed in the wire to change the length of the
resistor. An end of the wire and sliding contact is connected to the circuit. As the length of the resistance wire is changed, the resistance also changes.
The resistance can be set to any value from nearly zero to the total resistance of the wire in the variable resistor. A rheostat is used in car lighting
systems to change the brightness of the lights.


Potentiometer
The design of a potentiometer is similar to that of a variable resistor. All the
three points, both the ends of resistance wire and the adjustable contact, are
connected to the circuit. Two terminals and the contact are connected to the
circuit. The length of the wire can be changed by the sliding contact. The
resistance increases as the length of the wire increases. The resistance can be
set to any value from zero to the total resistance of the wire. Potentiometers are
often used, for example, to change the volume in a speaker system.

Application of Potential Dividers


Potential dividers are widely used in sensory circuits. The change in the physical property of a sensor has to be processed before it can be displayed
or measured. Light-dependent resistors and thermistors are two examples of sensory
devices whose resistances vary with light and temperature respectively. The
resistance of a light-dependent resistor decreases as the light intensity increases.
The resistance of the thermistor decreases with rise in temperature. A potential
divider can be used to process the information obtained from these sensory devices.
Let us consider a potential divider circuit in which a sensory device is placed in the position of S.

Potential Divider in Sensory Circuits


The voltage across the sensory device (Vs) can be written as Vs = V × S / (R + S), where V is the supply p.d., R is the fixed resistor and S is the resistance of the sensory device.

The magnitude of Vs depends on the relative resistance of R and S. We can note that, as the resistance of the sensory device (S) increases, the voltage across it also increases.
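A hedged sketch of how such a sensing divider behaves, using assumed sensor resistances (these values are illustrative and are not taken from the thermistor table referred to below):

```python
# Voltage across a sensor S placed in series with a fixed resistor R.
def sensor_voltage(v_supply, r_fixed, r_sensor):
    return v_supply * r_sensor / (r_fixed + r_sensor)

# Assumed values: a 10 V supply and a 5 kΩ fixed resistor.
for r_sensor in (1e3, 5e3, 10e3, 20e3):
    print(r_sensor, "ohms ->", round(sensor_voltage(10.0, 5e3, r_sensor), 2), "V")
```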
Example
A potential divider circuit can be used inside a refrigerator to switch on the cooling
circuit when the temperature is high (more than 3°C).

Potential Divider Using a Thermistor


The characteristics of the thermistor are given in the table below. Let the voltage across
the cooling circuit be VCC and the resistance of cooling circuit is 5kΩ. In order for the
cooling circuit to operate, it needs a potential difference of 5 V or more.


Hence, this potential divider meets the requirements of the refrigerator. These circuits can be further modified to suit different applications. For
example, switching off a heater when the temperature rises above a set value. The same arrangement can also be used for switching lights off in the daytime and on at night (using an LDR in place of the thermistor).

Binary systems and logic gates


The binary number system is fundamental to digital electronics, utilizing only two
digits: 0 and 1. This base-2 system is ideal for representing digital signals, as it aligns
perfectly with the two states of electronic components—on and off. Each binary digit,
or bit, can represent a true or false condition, making it essential for data processing
and storage in computers. In digital applications, binary numbers are used to perform
various operations, including arithmetic and logical functions. The simplicity of the
binary system allows for efficient encoding and decoding of information, which is
crucial in modern computing. For instance, complex data structures and algorithms rely
on binary representations to function effectively. Moreover, the conversion between
binary and other number systems, such as decimal, is a common task in digital electronics. Understanding binary is vital for anyone working in
technology, as it forms the backbone of all digital systems.
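Because the passage mentions conversion between binary and decimal, here is a brief Python sketch of both directions (an optional, supplementary illustration):

```python
# Decimal to binary by repeated division by 2, and binary to decimal by positional weights.
def to_binary(n):
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # remainder gives the next bit
        n //= 2
    return bits or "0"

def to_decimal(bits):
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)   # shift left and add the new bit
    return value

print(to_binary(13))       # '1101'
print(to_decimal("1101"))  # 13
```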

Logic gates
Logic gates are fundamental components in digital electronics, designed to perform Boolean functions through various electronic components like
diodes, transistors, and resistors. These gates process binary inputs to produce a single binary output, enabling complex computations and decision-
making processes in digital systems. The primary types of logic gates include AND, OR, NOT, XOR, NAND, NOR, and XNOR. Each gate has distinct
functions: for instance, the AND gate outputs a high signal only when all inputs are high, while the OR gate outputs high if at least one input is high.
Understanding these gates is crucial for designing and analyzing digital circuits, as they form the building blocks of more complex systems.

Digital electronics
Digital electronics is a vital branch of electronics focused on the study and
application of digital signals, which are represented as discrete binary values
(0s and 1s). This field encompasses the design and engineering of devices that
utilize these signals, making it essential for modern technology. Digital
electronics is foundational in various applications, from computers to
communication systems, where information is processed and manipulated
efficiently. One of the key advantages of digital electronics is its ability to
handle information that is inherently digital, such as numbers and text. This allows for precise and reliable data processing, which is crucial in
today’s digital age. The use of logic gates, which form the basis of digital circuits, enables the creation of complex systems that can perform a variety
of functions. Digital electronics is a type of electronics that deals with the digital systems, which processes the data/information in the form of
binary (0s and 1s) numbers, whereas analog electronics deals with the analog system, which processes the data/information in the form of continuous
signals.

Continuous signals
A continuous signal is a function f(t) whose value is defined for all time t; in other words, a continuous signal is a quantity that varies continuously with the independent variable, time. Example: Figure 1.1(a) shows a continuous signal.


Figure 1.1(a): Continuous signal.

Digital signals


A digital signal is a quantized discrete time signal. Example: Figure (b) shows
the discrete and digital signals.

Figure (b1): Discrete signal.

Digital signal
Boolean algebra
Boolean algebra is a branch of Algebra (Mathematics) that deals with operations on logical values with Boolean variables; Boolean variables are
represented as binary numbers, which takes logic 1, and logic 0 values.

Hence, the Boolean algebra is also called two-valued logic, Binary Algebra
or Logical Algebra.
Great mathematician George Boole introduced the Boolean algebra in 1847.
Boolean algebra is fundamental to the development of digital electronic systems, and it is provided for in all programming languages.
Set theory and statistics fields also use Boolean algebra for the representation, simplification and analysis of mathematical quantities.

Classifications of Logic levels


Positive logic
Logic 0 = False, 0V, Open Switch, OFF ; Logic 1= True, +5V, Closed Switch, ON
Negative logic
Logic 0 = True, +5V, Closed Switch, ON; Logic 1= False, 0V, Open Switch, OFF
Boolean algebra differs from normal (elementary) algebra. The latter deals with numerical operations such as addition, subtraction, multiplication and division on decimal numbers, while the former deals with logical operations such as conjunction (AND), disjunction (OR) and negation (NOT). In the present context, positive logic has been used for the entire discussion, representation and simplification of Boolean variables.

Boolean Algebra Rules and properties


Boolean variables take only two values, logic 1 and logic 0, called binary numbers.
Basic operations of Boolean algebra are complement of a variable, ORing and ANDing of two or more variables.
Mathematical description of Boolean operations using variables is called Boolean expression.
The complement of a variable is represented by an over-bar: Y = Ā.

Truth table for NOT (complement):
A    Y = Ā
0    1
1    0


ORing of variables is represented by a plus symbol (+): A + B = Y (output).
ANDing of variables is represented by a dot symbol (.): A . B = Y (output).

Truth table for OR:
A    B    Y = A + B
0    0    0
0    1    1
1    0    1
1    1    1

Truth table for AND:
A    B    Y = A . B
0    0    0
0    1    0
1    0    0
1    1    1

Boolean operations are different from binary operations. Example: 1 + 1 = 10 in binary addition, but 1 + 1 = 1 in Boolean algebra.
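The NOT, OR and AND truth tables above can be reproduced with a few lines of Python, using the language's built-in Boolean operators:

```python
# Reproduce the OR, AND and NOT truth tables with Python's Boolean operators.
def truth_table(name, op):
    print("Truth table for", name)
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, int(op(a, b)))

truth_table("OR",  lambda a, b: a or b)
truth_table("AND", lambda a, b: a and b)

print("Truth table for NOT")
for a in (0, 1):
    print(a, int(not a))
```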

The present chapter deals with the simplification of Boolean expressions and representation using sum of product form and product
of sum forms.

Boolean Laws
Law-1: Commutative law
Changing the order of the variables does not affect the result when performing OR or AND operations on Boolean expressions,
i.e., A . B = B . A and A + B = B + A
Law-2: Associative law
The result is independent of how the variables are grouped,
i.e., A . (B . C) = (A . B) . C and A + (B + C) = (A + B) + C

Logic Gates realization of Boolean Expressions


The logic gate is the basic building block of any digital circuit. A logic gate may have one or more inputs but only one output. The relationship between input and output is based on a certain logic, which corresponds to the Boolean operations AND, OR and NOT. Based on these Boolean operations, the gates are named the AND gate, OR gate and NOT gate. These three are called the basic gates, and further gates can be derived from them: the NAND gate, NOR gate, EXOR (XOR) gate and XNOR gate. NAND and NOR gates are called universal gates, because by using only NAND gates (or only NOR gates) we can realize all the basic gates, and hence any Boolean expression. The logic gates, their truth tables, expressions and symbols are summarized in the sections that follow.

Logic Gates
Logic gates are the fundamental components of all digital circuits and
systems. In digital electronics, there are seven main types of logic
gates used to perform various logical operations. A logic gate is basically an
electronic circuit designed by using components like diodes, transistors,
resistors, capacitors, etc., and capable of performing logical operations. In
this article, we will study the definition, truth table, and other related
concepts of logic gates. So let’s start with the basic introduction of logic
gates. A logic gate is an electronic circuit designed by using electronic
components like diodes, transistors, resistors, and more.


As the name implies, a logic gate is designed to perform logical operations in digital systems such as computers and communication systems. The building blocks of a
digital circuit are therefore logic gates, which execute the numerous logical operations required by the circuit. A logic gate can take one or more inputs but
produces only one output. The output of a logic gate depends on the combination of inputs and on the logical operation that the gate performs. Logic gates use
Boolean algebra to execute logical processes, and they are found in nearly every digital gadget we use regularly: telephones, laptops, tablets and memory devices all
contain them in their architecture.

Types of Logic Gates


A logic gate is a digital component that manipulates data: it applies a fixed logical rule to its inputs to decide what output to produce, and in this way governs
the flow of information through a circuit. Logic gates can be classified into the following major types:
Basic Logic Gates
There are three basic logic gates: the AND gate, the OR gate and the NOT gate.
Universal Logic Gates: In digital electronics, the NAND gate and the NOR gate are considered universal logic gates.

Derived Logic Gates


The following two are the derived logic gates used in digital systems: the XOR gate and the XNOR gate.

AND Gate
In digital electronics, the AND gate is one of the basic logic gates; it performs the logical multiplication of the inputs applied to it. It generates a high (logic 1)
output only when all of its inputs are high (logic 1); otherwise, the output of the AND gate is low (logic 0).
Properties of AND Gate:
The following are the two main properties of the AND gate:
An AND gate can accept two or more inputs at a time; when all of the inputs are logic 1, the output of the gate is logic 1.
The operation of an AND gate is described by a mathematical expression, called the Boolean expression of the AND gate.
For a two-input AND gate, the Boolean expression is Y = A·B, where A and B are the inputs to the AND gate and Y denotes its output.
We can extend this expression to any number of input variables, for example Y = A·B·C·D.

Truth Table of AND Gate (in set theory, AND corresponds to intersection; for two independent events A and B, the probability of A AND B is found by multiplication):
The truth table of a two-input AND gate is given below:
Input Output=A.B
A B A AND B
0 0 0
0 1 0
1 0 0
1 1 1


Symbol of AND Gate:


The logic symbol of a two input AND gate is shown in the following figure.
Symbol of Two-Input AND Gate
OR Gate
In digital electronics, the OR gate is the basic logic gate that produces a
low (logic 0) output only when all of its inputs are low (logic 0).
For all other input combinations, the output of the OR gate is high (logic 1).
An OR gate can be designed to have two or more inputs but
only one output.
The primary function of the OR gate is to perform the logical sum (addition) operation.
Properties of OR Gate:
An OR gate has the following two properties:
 It can have two or more input lines at a time.
 When all of the inputs to the OR gate are low (logic 0), its output is low (logic 0).
The operation of an OR gate can be described mathematically by an expression called the Boolean expression of the OR gate.
The Boolean expression for a two-input OR gate is Y = A + B.
The Boolean expression for a three-input OR gate is Y = A + B + C.
Here, A, B and C are inputs and Y is the output variable. We can extend this Boolean expression to any number of input variables.
Truth Table of OR Gate:
The truth table of an OR gate describes the relationship between inputs and output. The following is the truth table for the two-input OR gate:

Input Output, Y
A B A OR B
0 0 0
0 1 1
1 0 1
1 1 1

Symbol of OR Gate (in set theory, OR corresponds to union; hence the addition symbol):


The logic symbol of a two-input OR gate is shown in the following figure.
Symbol of Two-Input OR Gate


NOT Gate (in set theory, NOT A means the complement of A)


In digital electronics, the NOT gate is another basic logic gate; it produces the complement of
the input signal applied to it. It has only one input and one output. The output of the NOT gate
is the complement of its input: if we apply a low (logic 0) input, the gate
gives a high (logic 1) output, and vice versa. The NOT gate is also known as an
inverter, as it performs the inversion operation.
Symbol of NOT Gate
The logic circuit symbol of a NOT gate is shown in the following figure. Here, A is the input line
and Y is the output line.
Properties of NOT Gate:
The output of a NOT gate is the complement (inverse) of the input applied to it. A NOT gate has only one input and one output.
The logical operation of the NOT gate is described by its Boolean expression, which is given by 𝑌 = 𝐴̅
The bar over the input variable A represents the inversion operation.

Truth Table of NOT Gate:


The truth table describes the relationship between input and output. The following is the truth table for the NOT gate:
Input Output, 𝑌 = 𝐴 ̅

A NOT A
0 1
1 0

NOR Gate: The NOR gate is a type of universal logic gate that can take two or more
inputs but one output. It is basically a combination of two basic logic gates i.e., OR gate
and NOT gate. Thus, it can be expressed as, NOR Gate = OR Gate + NOT Gate
In other words, a NOR gate is an OR gate followed by a NOT gate.
The following are two important properties of the NOR gate:
 A NOR gate can have two or more inputs and gives a single output.
 A NOR gate gives a high or logic 1 output only when it’s all inputs are low or logic 0.
Similar to basic logic gates, we can describe the operation of a NOR gate using a mathematical equation called Boolean expression of the NOR gate.
The Boolean expression of a two-input NOR gate is given below: C = $\overline{A + B}$
In this Boolean expression, the variables A and B are the input variables while the variable C is the output variable.
Truth Table of NOR Gate:
The following is the truth table of a two-input NOR gate showing the relationship between its inputs and output:
Input Output
A B A NOR B
0 0 1
0 1 0
1 0 0
1 1 0
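Since the NOR gate is universal, the NOT, OR and AND gates can all be realized from NOR gates alone. The following Python sketch is illustrative only (all function names are defined within the sketch) and checks the standard constructions against the ordinary definitions:

```python
# Building NOT, OR and AND from NOR gates only and checking them against
# the ordinary definitions. Illustrative sketch.
from itertools import product

def NOR(a, b):
    return 1 - (a | b)

def NOT_from_nor(a):
    return NOR(a, a)                      # NOT A = A NOR A

def OR_from_nor(a, b):
    return NOT_from_nor(NOR(a, b))        # A OR B = NOT(A NOR B)

def AND_from_nor(a, b):
    return NOR(NOT_from_nor(a), NOT_from_nor(b))   # A AND B = (NOT A) NOR (NOT B)

for a, b in product([0, 1], repeat=2):
    assert NOT_from_nor(a) == 1 - a
    assert OR_from_nor(a, b) == (a | b)
    assert AND_from_nor(a, b) == (a & b)

print("NOT, OR and AND realized correctly using NOR gates only.")
```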


Symbol of the NOR Gate


NAND Gate
In digital electronics, the NAND gate is another type of universal logic gate used to
perform logical operations. The NAND gate performs the inverted operation of the AND
gate. Similar to NOR gate, the NAND gate can also have two or more input lines but
only one output line. The NAND gate is also represented as a combination of two basic
logic gates namely, AND gate and NOT gate. Hence, it can be expressed as NAND
Gate = AND Gate + NOT Gate

Properties of NAND Gate:


The following are the two key properties of NAND gate:
 NAND gate can take two or more inputs at a time and produces one output based on the combination of inputs applied.
 NAND gate produces a low (logic 0) output only when all of its inputs are high (logic 1).
We can describe the operation of a NAND gate through a mathematical equation called its Boolean expression.
The Boolean expression of a two-input NAND gate is Y = $\overline{A \cdot B}$.
In this expression, A and B are the input variables and Y is the output variable. We can extend this relation to any number of input variables such as three,
four, or more.

Truth Table of NAND Gate:


The truth table is a table of inputs and output that describes the operation of the NAND gate and shows the logical relationship between them:
Input Output, Y = $\overline{A \cdot B}$

A B A NAND B
0 0 1
0 1 1
1 0 1
1 1 0

Symbol of NAND Gate: The logic symbol of a NAND gate is represented as an AND gate with a bubble on its output end, as depicted in the following
figure. It is the symbol of a two-input NAND gate.
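Similarly, because the NAND gate is universal, the NOT, AND and OR gates can each be built from NAND gates alone. The Python sketch below is illustrative only and compares the NAND-based constructions with the ordinary gate definitions:

```python
# Building NOT, AND and OR from NAND gates only, and checking the results
# against the ordinary definitions. Illustrative sketch.
from itertools import product

def NAND(a, b):
    return 1 - (a & b)

def NOT_from_nand(a):
    return NAND(a, a)                     # NOT A = A NAND A

def AND_from_nand(a, b):
    return NOT_from_nand(NAND(a, b))      # A AND B = NOT(A NAND B)

def OR_from_nand(a, b):
    return NAND(NOT_from_nand(a), NOT_from_nand(b))   # A OR B = (NOT A) NAND (NOT B)

for a, b in product([0, 1], repeat=2):
    assert NOT_from_nand(a) == 1 - a
    assert AND_from_nand(a, b) == (a & b)
    assert OR_from_nand(a, b) == (a | b)

print("NOT, AND and OR realized correctly using NAND gates only.")
```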

XOR Gate
In digital electronics, there is a specially designed logic gate named the XOR gate, which is used in digital circuits to perform the modulo-2 sum (addition without carry). It is also
referred to as the Exclusive OR gate or EX-OR gate. The XOR gate takes two inputs at a time and gives one output.
The output of the XOR gate is high (logic 1) only when its two inputs are dissimilar.

Properties of XOR Gate


The following two are the main properties of the XOR gate:
 It can accept only two inputs at a time. There is nothing like a three or more input XOR gate.
 The output of the XOR gate is logic 1 or high, when its inputs are dissimilar.


The operation of the XOR gate can be described through a mathematical equation called its Boolean expression. The Boolean expression for the output of the XOR
gate is Y = $A\overline{B} + \overline{A}B$, where Y is the output variable and A and B are the input variables. This expression is often abbreviated as Y = A ⊕ B.
Truth Table of XOR Gate:
The truth table is a table of inputs and output that describe the relationship between them and the operation of the XOR gate for different input
combinations. The truth table of the XOR gate is given below:
Input Output, Y = $A\overline{B} + \overline{A}B$

A B A XOR B
0 0 0
0 1 1
1 0 1
1 1 0
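The XOR truth table above agrees with the sum-of-products expression Y = A·B̄ + Ā·B. The short Python sketch below (illustrative only) checks this for every input combination:

```python
# Checking that the XOR gate matches its sum-of-products Boolean expression
# Y = A·(NOT B) + (NOT A)·B for every input combination. Illustrative sketch.
from itertools import product

for a, b in product([0, 1], repeat=2):
    xor_gate = a ^ b                               # exclusive OR
    expression = (a & (1 - b)) | ((1 - a) & b)     # A·B̄ + Ā·B
    print(f"A={a} B={b}  XOR={xor_gate}  A·B̄+Ā·B={expression}")
    assert xor_gate == expression
```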

XNOR Gate
The XNOR gate is another special-purpose logic gate used to
implement the Exclusive NOR operation in digital circuits. It is also called the Ex-NOR or
Exclusive NOR gate.
It is a combination of two logic gates, namely the XOR gate and the NOT gate. Thus, it
can be expressed as XNOR Gate = XOR Gate + NOT Gate.
The output of an XNOR gate is high (logic 1) when both of its inputs are the same; otherwise the output is low (logic 0). Hence, the XNOR gate is used
as a similarity (equality) detector circuit.
Properties of XNOR Gate:
The following are two key properties of XNOR gate:
 XNOR gate takes only two inputs and produces one output.
 The output of the XNOR gate is high or logic 1 only when it has similar inputs.
The operation of the XNOR gate can be described through a mathematical equation called the Boolean expression of the XNOR gate:
Y = $AB + \overline{A}\,\overline{B}$. Here, A and B are the inputs and Y is the output.

Truth Table of XNOR Gate:


The truth table of the XNOR gate is given below. It describes the relationship between the inputs and output of the XNOR gate.
Input Output
A B A XNOR B
0 0 1
0 1 0
1 0 0
1 1 1
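The XNOR truth table can likewise be checked against its Boolean expression Y = A·B + A̅·B̅ and against its definition as an XOR gate followed by a NOT gate. An illustrative Python check:

```python
# Checking the XNOR gate against its Boolean expression Y = A·B + (NOT A)·(NOT B)
# and against its definition as XOR followed by NOT. Illustrative sketch.
from itertools import product

for a, b in product([0, 1], repeat=2):
    xnor_gate = 1 - (a ^ b)                          # NOT(A XOR B)
    expression = (a & b) | ((1 - a) & (1 - b))       # A·B + Ā·B̄
    print(f"A={a} B={b}  XNOR={xnor_gate}  A·B+Ā·B̄={expression}")
    assert xnor_gate == expression
```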


Symbol of XNOR Gate: The logic symbol of the XNOR gate is shown in the following figure. Here,
A and B are inputs and Y is the output.
Applications of Logic Gates
Logic gates are the fundamental building blocks of all digital circuits
and devices, such as computers.
Here are some key digital devices whose circuits are designed using logic gates: computers, microprocessors, microcontrollers, digital and
smart watches, and smartphones.

Logic gates:
Logic gates are digital circuits that perform logical operations on the inputs provided to them and produce the appropriate output.
Universal gates: gates built by merging fundamental gates (NAND = AND + NOT, NOR = OR + NOT) from which any other logical operation can be realized; the universal gates are the NAND and NOR gates.
What is the output of a NOT gate when an input of 0 is applied? The NOT gate is an inverter, so if 0 is applied as the input, the output will be 1.
Which logic gate is known as the "inverter"? The NOT gate is also known as an inverter; its output is the inverse of its input.
What is the Boolean expression for the OR gate? If A and B are the inputs, the OR gate output is Y = A + B.
What is the Boolean expression for the XNOR gate? If A and B are the inputs, the XNOR gate output is Y = A·B + A̅·B̅.

LOGIC DIAGRAM
Realize the following Boolean expression using only NAND gates.
Y = AB + BC + AC
Logic diagram:
Step 1: Replace the basic gates by their NAND equivalents.
Step 2: Eliminate any pair of single-input NAND gates (inverters) connected in series, since they cancel each other.
Step 3: Draw the resultant logic circuit. (A short software check of this realization is given below.)
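The correctness of this NAND-only realization can be confirmed by comparing it with the original AND-OR expression for all inputs. The Python sketch below is illustrative; it assumes the standard two-level NAND-NAND form Y = NAND(NAND(A,B), NAND(B,C), NAND(A,C)):

```python
# Verifying that the NAND-only realization of Y = AB + BC + AC agrees with
# the original AND-OR expression for every input combination. Illustrative sketch.
from itertools import product

def NAND(*inputs):
    out = 1
    for x in inputs:
        out &= x
    return 1 - out          # NAND = NOT(AND of all inputs)

for a, b, c in product([0, 1], repeat=3):
    y_and_or = (a & b) | (b & c) | (a & c)                  # original expression
    y_nand   = NAND(NAND(a, b), NAND(b, c), NAND(a, c))     # NAND-only circuit
    assert y_and_or == y_nand

print("The NAND-only circuit realizes Y = AB + BC + AC correctly.")
```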


BISTABLE CIRCUITS
A Bistable is a digital circuit that has two inputs and a digital output.
The SET input makes the output Logic 1 (HIGH) and the output will
stay in this state until forced to change. The RESET input makes the
output Logic 0 (LOW) and the output will also stay in this state until
forced to change. The output of a Bistable circuit is stable in both
states - it can remain as either Logic 1 or Logic 0 indefinitely until
either the SET or the RESET input initiates a change of state. The name means that the circuit has two stable states.

The terms Bistable, Latch and Flip-Flop are sometimes used interchangeably to describe Bistable circuits. However, each of these terms does have a
specific meaning.
1) A bistable circuit is the most basic circuit with SET and RESET inputs and the output immediately responds to a change in the inputs.
2) A latch is very similar to the basic bistable circuit but includes an ENABLE to control the state of the output.
3) A flip-flop is a bistable circuit where the output changes on the rising (usually) edge of a clock pulse.

BISTABLE BASICS
A Bistable has two inputs called Set (S) and Reset (R).
The output is called Q. There is often a second output, which is the opposite of Q or, in logic terms, NOT Q.
The NOT Q output is written as 𝑄̅ and pronounced "Q-bar"
In the most common bistables, Set and Reset are usually LOW and must go HIGH to change the output.

NOTE
1. In normal operation, SET and RESET are usually both held LOW.
2. Q̅ is always the opposite of Q.
3. SET and RESET should not both be HIGH at the same time; if they are, the state of the outputs is undefined.
4. There are also bistables where SET and RESET are normally HIGH and go LOW to change the output.


The timing diagram shows how the SET and RESET inputs cause Q
and Q̅ to change. The first time SET (red line) goes HIGH it makes
the OUTPUT (green line) go HIGH. Making SET go HIGH again has
no further effect: the OUTPUT stays HIGH. Making RESET (blue
line) go HIGH makes the OUTPUT go LOW. Making RESET go HIGH
again has no further effect: the OUTPUT stays LOW. The SET and
RESET pulses can be momentary, as shown by the final RESET
pulse, which is just a very narrow, short pulse. A bistable is
particularly useful in an alarm circuit, where one input (the sensor
or detector) will SET the alarm ringing and a different input (the security
officer's key) will RESET the alarm to silent.

4043 Bistable IC
A basic bistable can be built from logic gates but is also available
on a dedicated IC such as the 4043
The 4043 IC contains four separate bistables each with a SET, a
RESET and a single output. As shown in the pin layout, only
output Q is available. There is no Q-bar output.
SET is normally LOW, making SET go HIGH forces the output Q
HIGH. RESET is normally LOW, making RESET go HIGH forces the
output Q LOW. Making both SET and RESET HIGH at the same time
is a disallowed state - in this case the output Q goes HIGH with the final state being determined by which input goes LOW first. The 4043 IC also has an
ENABLE input. This input controls the tristate output of all four bistables together. When the ENABLE is HIGH, the outputs of each bistable are either
HIGH or LOW as expected. When the enable is LOW the outputs are not connected to the bistables and simply float to any value. A simple test circuit is
shown with the ENABLE connected HIGH and two inputs provided by push buttons.

Using 4013 as a Bistable


A bistable can easily be built from a 4013 D-type flip-flop IC. The 4013 has a SET and RESET
as expected and has outputs Q and Q̅, making it preferable to the 4043 IC in some cases.
There are two other inputs called CLOCK (CK) and DATA (D) that are not used and must be
connected to ground when the 4013 is used as a simple bistable.
The 4013 IC contains two separate flip-flops and so can be used to provide two separate
bistables that operate completely independently. SET is normally LOW; making SET go HIGH
forces the output Q = HIGH and Q̅ = LOW. RESET is normally LOW; making RESET go HIGH forces the output Q = LOW and Q̅ = HIGH.

Making both SET and RESET HIGH at the same time is a disallowed state -
in this case, the outputs Q and Q̅ both go HIGH, with the final state being
determined by which input goes LOW first.
The diagram shows a simple test circuit with CK and D connected to ground.

Simple logic gate Bistable


At the heart of a bistable circuit are two inverting logic gates. The output
of each logic gate is connected to the input of the other logic gate. Such a
circuit has two states where it is stable. The most basic logic gate bistable
is made from two NOT gates as shown in the diagram.
Situation 1: Assume A = 0 and therefore B = 1. B = 1 and therefore C =
0 ... C is connected to A, and so A = 0 as required.
Situation 2: Assume A = 1 and therefore B = 0. B = 0 and therefore C =
1 ... C is connected to A, and so A = 1 as required.
Whether A = 0 or A = 1, the circuit works in both cases. To make this
circuit a bistable, simply make one of the NOT gate outputs Q and the
other Q̅.
This is not a good circuit. To SET or RESET the bistable requires input A
or input B to be forced into either a HIGH or LOW state - but the inputs
are also the outputs of the other logic gates and forcing the outputs of
logic gates to be either HIGH or LOW can lead to problems.
If we assume situation 1, where A = 0, and we force A to be HIGH so that
A = 1, then the output C will try to stay LOW; so is A HIGH or LOW? An
indeterminate state can result and the bistable will either fail to work or
be unreliable or, in the worst-case scenario, the logic gates will be damaged by having their outputs forced HIGH or LOW. Altogether, not good.

NOR gate Bistable


Consider the function of the NOR gate. When A = 0 (as shown circled in red), the
NOR gate acts like a NOT gate, with B and Q being opposite in both cases. Therefore, the
NOR gate can replace the NOT gate in the simple logic bistable. However, when A = 1,
Q = 0 irrespective of the state of B; therefore A acts like a RESET. This is an
excellent bistable circuit. When SET and RESET are both LOW, the NOR gates act as NOT
gates and the bistable has two stable states. SET and RESET can safely be made HIGH as they are not directly connected to the outputs of the NOR gates
- they are only inputs.


Situation 1: Consider SET = 0, RESET = 0, Q = 0 and therefore Q̅ = 1. Making SET = 1 forces Q̅ = 0. Both inputs to the right-hand
NOR gate are now LOW and so Q = 1. The feedback ensures that at least one of the inputs of the left-hand NOR gate is now HIGH and so Q̅ =
0. Therefore, making SET = 1 forces Q = 1 as required.

Situation 2: Consider SET = 0, RESET = 0, Q = 1 and therefore Q̅ = 0. Making
RESET = 1 forces Q = 0. Both inputs to the left-hand NOR gate are now LOW and
so Q̅ = 1. The feedback ensures that at least one of the inputs of the right-hand
NOR gate is now HIGH and so Q = 0. Therefore, making RESET = 1 forces Q = 0
as required.
Logic circuits or devices where the inputs are normally LOW and go
HIGH to make something happen are often referred to as having "NOR gate logic" or "NOR logic", because in the NOR gate bistable SET and RESET
are normally LOW. Note that the 4043 and 4013 ICs employ NOR gate logic.
The NOR gate bistable circuit is shown - with suitable inputs and outputs - and is equivalent to the circuit described above but is drawn differently.
Pull-down resistors ensure SET and RESET are normally LOW and go HIGH when the buttons are pressed (NOR logic).
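The SET/RESET behaviour of the NOR gate bistable can also be modelled in software. The following Python sketch is a minimal illustration (the cross-coupled NOR gates are iterated a few times so that the feedback settles); it reproduces the sequence described in the timing diagram:

```python
# Simple simulation of a NOR-gate SR bistable (SR latch). Illustrative sketch:
# the two cross-coupled NOR gates are iterated so the feedback loop settles.
def NOR(a, b):
    return 1 - (a | b)

def sr_latch_nor(set_in, reset_in, q, q_bar):
    """Return the new (Q, Q-bar) after applying SET and RESET to the latch."""
    for _ in range(4):                 # iterate until the feedback settles
        q_bar = NOR(set_in, q)         # left-hand NOR gate drives Q-bar
        q = NOR(reset_in, q_bar)       # right-hand NOR gate drives Q
    return q, q_bar

# Start with Q = 0, then pulse SET, then pulse RESET (as in the timing diagram).
q, q_bar = 0, 1
for set_in, reset_in in [(1, 0), (0, 0), (0, 1), (0, 0)]:
    q, q_bar = sr_latch_nor(set_in, reset_in, q, q_bar)
    print(f"SET={set_in} RESET={reset_in} -> Q={q}, Q_bar={q_bar}")
```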

NAND gate Bistable


Consider the function of the NAND gate. When A = 1 (as shown circled in red), the
NAND gate acts like a NOT gate, with B and Q being opposite in both cases. Therefore, the
NAND gate can replace the NOT gate in the simple logic bistable if A is held HIGH. However,
when A = 0, Q = 1 irrespective of the state of B; therefore A acts like a SET.
When SET and RESET are both HIGH the NAND gates act as NOT gates and the bistable has
two stable states as before. SET and RESET can safely be made LOW as
they are not directly connected to the output of the NAND gates - they are
only inputs.
Note that in this case the SET and RESET inputs are normally HIGH and
must go LOW to cause a change to happen - this type of logic is called
"NAND gate logic" or "NAND logic". Having the normal state of the
inputs as HIGH does not seem obvious at first but this is very similar to
the monostable circuit where the trigger is held HIGH and goes LOW to
start the monostable. NAND gate logic is quite common in more advanced
digital circuits.
Situation 1: Consider SET = 1, RESET = 1, Q = 0 and therefore Q̅ = 1. Making SET = 0 forces Q = 1. Both inputs to the left-hand NAND gate are
now HIGH and so Q̅ = 0. The feedback ensures that at least one of the inputs of the right-hand NAND gate is now LOW and so Q = 1. Therefore,
making SET = 0 forces Q = 1 as required.
Situation 2: Consider SET = 1, RESET = 1, Q = 1 and therefore Q̅ = 0. Making RESET = 0 forces Q̅ = 1. Both inputs to the right-hand NAND
gate are now HIGH and so Q = 0. The feedback ensures that at least one of the inputs of the left-hand NAND gate is now LOW and so Q̅ = 1.
Therefore, making RESET = 0 forces Q = 0 as required.
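A matching software model of the NAND gate bistable, with its active-LOW SET and RESET inputs, is sketched below (illustrative only; the feedback loop is again iterated until it settles):

```python
# Simulation of a NAND-gate SR bistable, where SET and RESET are active-LOW.
# Illustrative sketch of the cross-coupled NAND latch described above.
def NAND(a, b):
    return 1 - (a & b)

def sr_latch_nand(set_in, reset_in, q, q_bar):
    """SET and RESET are normally 1 (HIGH); taking one LOW changes the output."""
    for _ in range(4):                  # let the cross-coupled feedback settle
        q = NAND(set_in, q_bar)         # gate driving Q
        q_bar = NAND(reset_in, q)       # gate driving Q-bar
    return q, q_bar

# Pulse SET LOW, release, then pulse RESET LOW, then release.
q, q_bar = 0, 1
for set_in, reset_in in [(0, 1), (1, 1), (1, 0), (1, 1)]:
    q, q_bar = sr_latch_nand(set_in, reset_in, q, q_bar)
    print(f"SET={set_in} RESET={reset_in} -> Q={q}, Q_bar={q_bar}")
```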

The 4044 IC is functionally equivalent to the 4043 described above except that it uses NAND
Logic and the pin layout is slightly different. The NAND gate bistable circuit is shown - with
suitable inputs and outputs - and is equivalent to the circuit described above but is drawn
differently. You have to convince yourself it is the same circuit! Pull up resistors ensure SET
and RESET are normally HIGH and go LOW when the buttons are pressed (NAND Logic).

Construction of digital logic gates on breadboards


Digital logic gates are what allow a computer to perform computations using transistors. This is done by using the binary or base 2 number system. A
transistor only has two states which are on or off. However, when multiple transistors are used together the output of the two inputs can be changed
in a logical way. This allows for AND, NAND, OR, NOR, XOR, and XNOR logic. Most logic gates can be made using multiple circuit configurations. Many
people understand the basics of digital logic gates but not enough to actually implement them at the transistor level. The logic gate needs to be wired
in a configuration so the output can be sent further down the circuit. There are also more advanced configurations that allow for lower power
consumption and faster switching times. A circuit diagram is also provided so there is no ambiguity as to where to connect the wires. The name,
symbol and truth table for each logic gate are shown in the picture above. Often, circuits are not drawn at the transistor level and just the logic
gate symbol is used. The truth table is very important, as it makes clear
what the output should be based on the inputs. This is very useful when
trying to build larger circuits such as the half adder, full
adder, calculators and computer processors; a software sketch of a half adder is given after this paragraph.
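As an illustration of how individual gates combine into such larger circuits, the following Python sketch (gate behaviour is modelled as simple functions, not at the transistor level) builds a half adder from an XOR gate and an AND gate and prints its truth table:

```python
# A half adder built from an XOR gate (sum) and an AND gate (carry).
# Illustrative gate-level sketch, not a transistor-level model.
from itertools import product

def XOR(a, b):
    return a ^ b

def AND(a, b):
    return a & b

def half_adder(a, b):
    """Add two one-bit numbers, returning (sum, carry)."""
    return XOR(a, b), AND(a, b)

print("A B | Sum Carry")
for a, b in product([0, 1], repeat=2):
    s, c = half_adder(a, b)
    print(f"{a} {b} |  {s}    {c}")
```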

BUFFER
A buffer is a simple circuit where the input value is the same as the output
value: if the input is off the output will be off, and if the input is on
the output will be on. There are many reasons why a buffer may be added to a circuit. In most cases, it is to ensure a strong output signal: the input
voltage or current may be low, and running the signal through a buffer helps ensure a consistent output level. There is more than one way to
make a buffer.

INVERTER
The inverter uses one transistor to take an input signal and make the output
signal the opposite: if the input is off the output will be on, and if
the input is on the output will be off. Many people say that a NAND gate is an
AND gate with an inverter. The two gates do have inverted logic, but at the
transistor level it is the other way round: a NAND gate can be made
with two transistors, while an AND gate requires three transistors to
send an output further down the circuit, because the AND gate is essentially a NAND gate
followed by an inverter. In the two-transistor AND-style circuit the LED is flipped, as the current flows
through the circuit toward the second ground; in the three-transistor AND gate that current can be used as an output signal.


AND Gate
An AND gate is only on when inputs A and B are both on. In all other cases, it is
off. It can be made using many different transistor configurations. The case above
uses three transistors. The first two transistors from the left are the inputs. Right
now both resistors are plugged in meaning the inputs are on. This makes it so
the output is ON which is represented by the yellow LED. A and B are turned on
by providing a positive voltage of over 0.6 volts into the base of the transistor.

NAND Gate
The NAND gate is a universal logic gate. This means that it can be used to make
all the other types of logic gates. It is turned off when both inputs are on. In all
other cases, it is turned on. In the photo, both inputs A and B are on which is
why the LED is turned off. This is a simple logic gate that only requires two
transistors. Using just two transistors it sends the output further down the circuit
which is desired when incorporating the NAND logic gate into larger circuit
designs.

OR Gate
The OR gate is off when inputs A and B are both off. In all other cases, it is on. It
is important to note that the OR gate is on during the AND condition. If you want
the output to be on only when one of the inputs is on the exclusive OR gate is
needed. In the picture, both inputs A and B are on which is why the LED is on.
The OR gate can also be made from three NAND gates or from two NOR gates.
The "OR gate 2" circuit shown above requires only three transistors, and that is the
recommended way of making an OR gate from individual transistors.

NOR Gate
The NOR gate is another universal logic gate. This means that it can make all the
other types of logic gates. Like the NAND gate it also only requires two transistors.
In the picture, the orange wire connects the collectors of both transistors. There is
also ground going to the emitter of both transistors. This makes it so if any input
is on the output will be off. The only case where the output will be on is when both
inputs are off.

XOR Gate
The XOR gate is also called an exclusive OR gate. It is similar to an OR gate but is not on
when both inputs A and B are on. In the picture, only input A is on which is why the
LED is on. This is a common logic gate to see used to build half adders and full adders.
It can be built in multiple configurations including using NAND and NOR gates. The one
shown uses 6 transistors and the output can be sent to be used as inputs in other circuits.

XNOR Gate
The XNOR gate is also called the exclusive NOR gate. This means that it is not on when
only input A or only input B is on. When both inputs are on or off the output will be on.
This circuit only requires 5 transistors and the output can be sent to other circuits. In
the photo inputs A and B are both on which is why the LED is lit up.


Yasson Twinomujuni (Mr.) holds a Bachelor of Science with Education (BSE) from Gulu University (GU) and a Master of Business Administration (MBA) from
Mount Kenya University (MKU), with vast experience in teaching, guiding and directing Physics for over 20 years, having taught in a number of
schools both within and in the central region since 2002. You can reach out to him for assistance in all aspects related to Physics.

“Inspiring greatness”
For inquiries: Call/WhatsApp: +(256)-772-938844 // 752-938844
