
AUTOMOBILE ENGINEERING

Basic Fundamentals to Advanced


Concepts of Automobile Engineering
Prabhu TL

Nestfame Creations Pvt. Ltd.


[Automobile Engineering]
Copyright © [2021] Prabhu TL. All rights reserved.
Publisher - Nestfame Creations Pvt. Ltd.
Publisher Website - www.nestfamecreations.com
The contents of this book may not be reproduced, duplicated or transmitted
without direct written permission from the Author.
Under no circumstances will any legal responsibility or blame be held against
the publisher for any reparation, damages, or monetary loss due to the
information herein, either directly or indirectly.
Author - Prabhu TL
Indexer - Akshai Kumar RY

Legal Notice:
This book is copyright protected. This is only for personal use. You cannot
amend, distribute, sell, quote or paraphrase any part or the content within this
book without the consent of the author.
Disclaimer Notice:
Please note that the information contained within this document is for educational
and entertainment purposes only. Every attempt has been made to provide
accurate, up-to-date, reliable and complete information. No warranties of any
kind are expressed or implied. Please consult a licensed professional before
attempting any techniques outlined in this book.
By reading this document, the reader agrees that under no circumstances is
the author responsible for any losses, direct or indirect, which are incurred
as a result of the use of information contained within this document,
including, but not limited to, errors, omissions, or inaccuracies.
PREFACE
Automobile or Automotive Engineering has gained recognition and
importance ever since motor vehicles capable of transporting passengers have
been in vogue. Now, due to the rapid growth of auto-component
manufacturers and automobile industries, there is great demand for
automobile engineers. Automobile Engineering, also known as Automotive
Engineering or Vehicle Engineering, is one of the most challenging careers in
the field of engineering, with a wide scope.

This branch deals with designing, developing, manufacturing, testing,
repairing and servicing automobiles such as cars, trucks, motorcycles and
scooters, together with their related engineering subsystems. To achieve the
right blend of manufacturing and design, Automobile Engineering draws on
different elements of engineering such as mechanical, electrical,
electronic, software and safety engineering.

To become a proficient automobile engineer, specialized training is essential;
it is a profession that requires a great deal of hard work, dedication,
determination and commitment.

The major task of an automobile engineer is the designing, developing,
manufacturing and testing of vehicles from the concept stage to the
production stage.
The automotive industry is one of the largest and most important industries in
the world. Cars, buses, and other engine-based vehicles abound in every
country on the planet, and the industry is continually evolving, with electric cars,
hybrids, self-driving vehicles, and so on. Technologies that were once
thought to be decades away are on our roads right now. Engineers,
technicians, and managers are constantly needed in the industry, and, often,
they come from other areas of engineering, such as electrical engineering,
process engineering, or chemical engineering. Introductory books like this
one are very useful for engineers who are new to the industry and need a
tutorial.
Also valuable as a textbook for students, this introductory volume not only
covers the basics of automotive engineering, but also the latest trends, such as
self-driving vehicles, hybrids, and electric cars. Not only useful as an
introduction to the science or a textbook, it can also serve as a valuable
reference for technicians and engineers alike. The volume also goes into
other subjects, such as maintenance and performance. Data has always been
used in every company, irrespective of its domain, to improve operational
efficiency and engine performance. This work deals with the details of
various automotive systems, with a focus on designing the components of
these systems to suit working conditions on the road.
Whether a textbook for the student, an introduction to the industry for the
newly hired engineer, or a reference for the technician or veteran engineer,
this volume is the perfect introduction to the science of automotive
engineering.
TABLE OF CONTENTS
FUNDAMENTALS
1. What Is Automobile Engineering?
2. Anti-Lock Braking System (ABS)
3. Adaptive Cruise Control
4. CRDI (Common Rail Direct Injection)
5. DTSI (Digital Twin Spark Ignition System)
6. Electromagnetic Brake
7. Spark Plug
8. Turbocharger
9. Windshield Washer
10. Blink Code In Antilock Braking System (ABS)
11. Working Of A Car
12. Layout Of A Car
13. Battery Introduction
14. Advantages Of Using Rechargeable Batteries
15. Are Primary And Rechargeable Batteries
Interchangeable Amongst Each Other?
16. The Batteries Work Better In Different Devices
17. Types Of Batteries
18. Battery Principle Of Operation
19. Brake Introduction
20. Types Of Brakes
21. Frictional, Pumping, Electromagnetic Brakes
22. Hydraulic Brake
23. Air Brake System
24. Clutch Introduction
25. Requirements Of A Good Clutch
26. Different Types Of Clutch
27. Fluid Coupling
28. Differential Introduction
29. Advantages & Disadvantages Of Front Wheel Drive
30. Advantages & Disadvantages Of Rear Wheel Drive
31. Advantages & Disadvantages Of All Or 4- Wheel Drive
32. Need Of A Differential
33. Construction And Working Of Differential Assembly
34. Components Of Automobile Engine
35. Engine Problems
36. How To Produce More Engine Power
37. Efficiency Of The Engine
38. Overall Power Loss In Engine
39. Gear Introduction
40. Types Of Gears
41. Terminology Of Spur Gear
42. Use Of Gear Advantage Of Teeth On Gear
43. Gear Ratio
44. How Does A Gear Ratio Affect Speed?
45. How Does Gear Ratio Affects Torque
46. Gear Train
47. Simple & Compound Gear Train
48. Planetary Or Epicyclic Gear Train
49. Reverted Gear Train
50. Mechanical Advantage
51. Suspension System Introduction
52. Principle Of Suspension System
53. Components Of Suspension System
54. Common Problems Of The Suspension System
55. Preventive Measures For Suspension System
56. Comparison Between Macpherson Double Wishbone
Suspension Systems
57. Transmission System Introduction
58. Need For A Transmission
59. Types Of Transmission System
60. Manual Transmission
61. Components Of Manual Transmission
62. Working Of Manual Transmission
63. Five-Speed Manual Transmission
64. Double Clutching
65. Synchronized Transmission
66. Automatic Transmission
67. Planetary Gear Sets
68. Clutches & Bands, Torque Converter, Valve Body
69. Comparison Between Manual & Automatic
Transmission
70. Semi-Automatic Transmission
71. Dual Clutch Transmission
72. Sequential Transmission
73. Continuously Variable Transmissions
74. Future Developments In Automotive Transmission
Systems
75. Terms Connected With I. C. Engines:
76. Fuel Supply System In Spark Ignition Engine
77. Fuel Supply System In Diesel Engine
78. Carburetor
79. MPFI
80. Cooling System
81. Air Cooling System
82. Water Cooling System
83. Lubrication
84. Types Of Lubricating Systems
85. Need Of Lubrication System
86. Additives In Lubricating Oil
87. Valve Operating Systems
88. Anti-Friction Bearings
89. Straight-Tooth Spur & Helical Spur Gears
90. Straight-Tooth Bevel, Spiral Bevel & Hypoid Gears
91. Four-Wheel Drive (4WD) And All-Wheel Drive (AWD)
92. Steering Mechanisms
93. Wheel Alignment
94. Toe & Caster
95. Effect Of Improper Alignment On Vehicle
96. Vehicle Rollover
97. Hotchkiss Suspensions
98. Disc Brakes
99. Lead-Acid Batteries
100. Nickel-Cadmium (NiCd) Batteries
101. Nickel-Metal Hydride (NiMH) Batteries
102. Lithium-Ion (Li-Ion)/Lithium Polymer Batteries
103. Dual Hybrid Systems
104. Tire
105. Lean-Burn NOx-Reducing Catalysts, "DeNOx"
106. Automobile History - Top 10 Interesting Facts
107. Difference Between Turbocharging And Supercharging
108. Why Diesel Cannot Be Used In Petrol Engine?
109. What Is Scavenging?
110. Why Petrol Cannot Be Used In Diesel Engine?
111. Flywheel
112. Unmanned Aerial Vehicles
113. Laser Ignition System
114. Technology Of Hydrogen Fuelled Rotary Engine
115. Common Rail Type Fuel Injection System
116. DISI Turbo | Direct Injection Spark Ignition Technology
117. Biotech Materials | Bio-Plastics | Bio-Fabrics
118. Hybrid Synergy Drive (HSD) Technology
119. Ultimate Eco Car Challenge - Development
120. Fuel Cell Technology
121. World’s First Air-Powered Car | Zero Emissions
122. Air Car Is Heading For Mass Production
123. Air-Powered Car Coming To Hit 1000-Mile Range
124. Future Of Car Infotainment Systems
125. Fuel Cell Car | How Fuel Cell Works | Detail
Explanation
126. How Fuel Cells Work
127. Kinetic Energy Recovery System | KERS | Formula One (F1)
128. New Battery Technology | Fast Recharge 3d Film
Technology
129. Advanced Battery Storage Technology
130. New Powerful Capacitors Nano Composite Processing
Technique
131. RTV Molding | Urethane Casting | Room Temperature
Vulcanized
132. TDI BlueMotion Technologies
133. Turbocharged Stratified Injection (TSI) Engines | TSI Technology
134. Turbocharged Direct Injection (TDI) Diesel Engines
135. Fuel Cell-Powered Electric Vehicles | Mercedes Benz F-
Cell World Drive
136. Driver Assistance Technologies
137. Compressed Air Cars | Air Motion Racing Car
138. Idling Devices Of Automobile | Anti-Dieseling Device
139. Choking Device | Functions Of Choking Device
140. Antilock Or Antiskid Device | Anti-Lock Braking
System In Detail
141. Power Steering | Electronic Power Steering
142. Components Of Automatic Transmission System
143. Safety Systems In Vehicles | Seat Belts | Air Bags
144. Engine Speed Governors | Speed Control Governor |
Speed Limiters
145. Steering Systems
146. Gorilla Glass Manufacturing Process
147. Gorilla Glass History | Gorilla Glass Scratch | Gorilla
Glass Touch Screen
148. Magnetic Bearing Technology
149. Artificial Photosynthesis
150. The Future Of Bicycling Hydration
151. Wireless Battery Charger
152. Hydraulic Hybrid Vehicles
153. NVH | Noise, Vibration And Harshness
154. Durability Analysis
155. What Is NVH
156. NVH Terms | NVH Terminology - 1
157. NVH Terms | NVH Terminology - 2
158. Aerogel | World’s Lightest Material
159. LED Light Bulbs | Bonded Fin Heat Sink
160. Nano-Nuclear Batteries | Beta-Voltaic Power
161. Self-Driving Car Technology
162. Electro Chromatic Auto Dimming Mirror
163. Rain Sensors
164. Tandem Wipers | Windshield Wiper Blades
165. Ambient Light Sensor
166. Optoelectronic Materials
167. Opto Electronics | Fiber Optics Technology
168. Hybrid Drive Trains | Hybrid Vehicles
169. Variable Turbocharger Geometry (VTG)
170. Trends In Common Rail Fuel Injection System
171. Chassis Frame | Frame Rails | Auto Chassis
172. Types Of Chassis Frame | Auto Chassis
173. Piston-Engine Cycles Of Operation
174. Engine Components And Terms
175. Crankcase Disc-Valve And Reed-Valve Inlet Charge
Control
176. Engine Torque
177. Engine Power
178. Engine Cylinder Capacity
179. Compression-Ratio
180. Digital Engine Control Systems
181. Digital Engine Control
182. Digital Engine Control Features
183. Control Modes For Fuel Control
184. Engine Crank
185. Engine Warm-Up
186. Open-Loop Control
187. Closed-Loop Control
188. Acceleration Enrichment
189. Deceleration Leaning
190. Idle Speed Control
191. Idle Air Control.
192. EGR Control
193. Electronic Ignition Control
194. Closed-Loop Ignition Timing
195. Integrated Engine Control System
196. Evaporative Emissions Canister Purge
197. Automatic System Adjustment
198. System Diagnosis
199. Summary Of Control Modes
200. Improvements In Electronic Engine Control
201. Flywheel Energy Storage
202. General Characteristics Of Wheel Suspensions
203. Independent Wheel Suspensions – General
204. Steering System

ADVANCED CONCEPTS
1. Motor Technology – The ‘Centre’ of an Electric Vehicle Efficiency
2. Electromagnetic Stir Casting: An Approach to Produce Hybrid
Metal Matrix Composite (MMC)
3. Challenges and Opportunities in lithium-ion battery technologies
for electric vehicles
4. 90 Degree Steering Mechanism
5. Thermoelectric Cooler : A new horizon in Mechanical and
Electronics Engineering
6. Performance And Cost Of Other Types Of Light-Duty Vehicles
7. Emissions Performance
8. Safety Of Lightweight Vehicles
9. Spark Ignition and Diesel Engines
10. Battery Technologies
11. Technologies for Advanced Vehicles Performance and
Cost Expectations
12. Materials Selection Criteria
13. Aerodynamic Drag Reduction
14. Rolling Resistance Reduction
15. Improvements To Spark Ignition Engines
16. Reducing Mechanical Friction
17. Reducing Pumping Loss
18. DISC and Two-Stroke Engines
19. Electric Drivetrain Technologies
20. Battery Characteristics
21. Bringing an Advanced Battery to Market
22. Other Engine And Fuel Technologies
23. Improvements To Automatic Transmissions

AUTOMOTIVE ENGINES
1. Engine & Working Principles
2. Constructional Features of IC Engine
3. Principles of Operation Of IC Engines: Four-Stroke Cycle Diesel
Engine
4. Two-Stroke Cycle Diesel Engine:
5. Four-Stroke Spark Ignition Engine
6. Two-Stroke Cycle Petrol Engine
7. Comparison Of CI And SI Engines
8. Advantages and Disadvantages Of Two-Stroke Cycle Over Four-
Stroke Cycle Engines
9. Internal Combustion Engines

HYBRID ELECTRIC VEHICLES


1. Introduction to Trends and Hybridization Factor for Heavy-Duty
Working Vehicles
2. Introduction to Development of Bus Drive Technology towards
Zero Emissions: A Review
3. Introduction to Advanced Charging System for Plug-in Hybrid
Electric Vehicles and Battery Electric Vehicles
4. Introduction to Hybrid Energy Storage System for a Coaxial
Power-Split Hybrid Powertrain
5. Introduction to Performance Analysis of an Integrated Starter-
Alternator-Booster for Hybrid Electric Vehicles
6. Introduction to Design, Optimization and Modelling of High Power
Density Direct-Drive Wheel Motor for Light Hybrid Electric
Vehicles

AUTOMOTIVE TRANSMISSIONS
1. Automotive Clutches
2. Clutch Construction
3. Coil Spring Pressure Plate
4. Diaphragm Pressure Plate
5. Flywheel
6. Pilot Bearing
7. Clutch Operation
8. Pressure Plate Adjustment
9. Hydraulic Clutch
10. Slipping
11. Grabbing
12. Dragging
13. Abnormal Noises
14. Pedal Pulsation
15. Clutch Overhaul
16. Manual Transmissions
17. Transmission Construction
18. Transmission Gears
19. Synchronizers
20. Shift Forks, Shift Linkage and Levers
21. Transmission Types
22. Auxiliary Transmissions
23. Transmission Troubleshooting
24. Transmission Overhaul
25. Automatic Transmissions
26. Torque Converters
27. Planetary Gearsets
28. Clutches and Bands
29. Overrunning Clutch
30. Hydraulic System of an Automatic Transmission
31. Automatic Transmission Service
32. Electronic Systems of an Automatic Transmission
33. Transaxles

VEHICLE DYNAMICS
1. Suspensions, Functions, and Main Components
2. Desired Features of Suspension Systems
3. Functions and Basic Principles Suspension systems
4. Functions of Suspension Systems
5. Classification of Suspensions
6. Basic Principles of Suspension System
7. Lateral Acceleration.
8. Springs
9. Sprung and Unsprung Weight
10. Shock Absorbers
11. Control Arms
12. Ball Joints
13. Steering Knuckles
14. Types of Suspension Systems
15. 4WD Suspensions

IC ENGINES
1. Detonation or Knocking in IC Engines
2. Cetane Number - Rating of CI Engine Fuels
3. Ignition System of Petrol Engines
4. Definition and Classification of I.C. Engines
5. Difference between four stroke and two stroke engines:-
6. Efficiency of an IC Engine
7. Brake power of IC Engine
8. Comparison of Petrol and Diesel Engines
9. Difference between SI and CI engines
10. I.C Engines Important definitions and formulas
11. Internal Combustion Engines- Basic Differences
12. Air Standard Otto Cycle
13. Governing of IC Engines
14. Carburetor of an IC Engine
15. Air Standard Cycles
16. Spark Plug in IC Engines
17. Supercharging of IC Engines
18. Two Stroke vs Four Stroke Engines
19. Octane Number - Rating of S.I. Engine Fuels
20. Valve Timing Diagram of Diesel Engine
21. Thermodynamic Tests for I.C. Engines
22. Lubrication of IC Engines
23. Testing of IC Engines
24. Indicated power of an IC Engine
25. Scavenging of IC Engines
26. Valve Timing Diagram of Petrol Engine
27. Sequence of Operations in IC Engine

AUTOMOBILE FUEL AND


LUBRICANTS
1. Petroleum
2. Petroleum Refining and Formation Process
3. Visbreaking, thermal cracking, and coking
4. Cracking methodologies
5. Polymerization in Petroleum Refinery
6. Alkylation
7. Isomerization Process
8. Gasoline blending
9. Bearing Lubrication
10. Lubricant base stocks
11. Engine Friction and Lubrication
12. Hydrodynamic Lubrication (HL)
13. Pressure-Viscosity Coefficient and Characteristics of
Lubricants
14. Function of Lubrication system
15. Requirements and characteristics of lubricants
16. Determining the Cause of Oil Degradation
17. Additives in lubricating oils
18. Lubricant additives, explained
19. Synthetic Lubricants
20. Lubricants : Classification and properties
21. Requirements and properties of lubricants
22. Lubricants Testing
23. What is Grease?
24. Fuel Thermochemistry
25. Relative Density
26. Fuel Calorific Values
27. Flash point and fire point
28. Vapor pressure
29. Fuel viscosity control
30. What is API Gravity?
31. Aniline point
32. Copper Strip Corrosion
33. SI ENGINE
34. What is octane rating?
35. Rating of CI Engine Fuels:
36. Diesel Fuel Cetane

AUTOMOTIVE ELECTRICAL AND


ELECTRONICS SYSTEMS
1. What Is Battery And Why It Is Used?
2. Types Of Batteries
3. Battery Working Principle
4. History Of The Battery
5. Testing, Charging And Replacing A Battery
6. Starter
7. The Starting System
8. Ignition Switch
9. Difference Between Alternator & Generator
10. What Is The Charging System?
11. Charging System Components
12. How It Works - The Cut-Out
13. The Voltage Regulator
14. Interior Lighting:
15. Exterior Lighting
16. Design Of Lighting System:
17. Dashboard Gauges
18. How Electronic Ignition System Works?
19. Electronic Ignition System Main Components
20. Three Types Of Vehicle Ignition Systems And How
They Work
21. The Evolution Of Fuel Injection
22. Multi-Point Fuel Injection (MPFI)
23. Different Types Of Sensors Used In Automobiles
24. Oxygen Sensor Working And Applications
25. Hot Wire Anemometer
26. Vehicle Speed Sensor (VSS)
27. Accelerometers:
28. Crankshaft Position Sensor
29. Microcontroller Vs Microprocessor
30. What Is Keyless Entry And How Does It Work?

URBAN TRANSPORTATION SYSTEM


1. What Is Urbanization?
2. 7 Transportation Challenges in Urban Areas
3. Changing Urban Transportation Systems for Improved Quality of
Life
4. Urban Transport Challenges
5. Automobile Dependency
6. Congestion
7. Mitigating Congestion
8. The Urban Transit Challenge
9. Global Urbanization
10. Evolution of Transportation and Urban Form
11. The Spatial Constraints of Urban Transportation
12. Transportation and the Urban Structure
13. Process and Top 5 stages of Transportation Planning
14. The Benefits Of Urban Mass Transit
15. Effects of public policy
16. Mass Transit Finance
17. Marketing Mass Transit
18. Trip characteristics
19. The Future Of Mass Transportation
20. 7 Problems of Urban Transport
21. Role of Transport in Urban Growth
22. 8 Helpful Steps for Solving the Problems of Urban
Transport
23. Vehicle To Vehicle Communication
24. Components of an Urban Transit System
25. What is a BRT Corridor?
26. The Land Use – Transport System
27. Urban Land Use Models
28. Transportation and Urban Dynamics
29. Transportation-Land Use Interactions
30. Land Requirement and Consumption
31. Spatial Form, Pattern and Interaction
32. Environmental Externalities of Land Use

Vehicle Design Data Characteristics


1. Vehicle Chassis And Frame Design
2. Gross Vehicle Weight Rating
3. Speed Limits
4. What Is The Maximum Acceleration
5. Types Of Gears
6. Resistance To Motion
7. Vehicle Power Requirements
8. Force Required To Accelerate A Load
9. Vehicle Acceleration And Maximum Speed Modeling And
Simulation
10. Curve Interpolation Methods And Options
11. What Is The Mean Effective Pressure (MEP) Of An Engine?
12. What Is Engine Capacity (cc):
13. How Engine Capacity Affects Its Performance:
14. What Is Bore-Stroke Ratio?
15. How Rod Lengths And Ratios Affect Performance
16. Best Rod Ratio?
17. Oversquare Vs Undersquare
18. Piston Motion Equations
19. Gear Ratios
20. Classical Design Of Gear Train
21. How To Determine Gear Ratio
22. Vehicle Performance
23. Coordinate Systems
What is Automobile Engineering?
Automobile Engineering is a branch of engineering which deals with designing, manufacturing and
operating automobiles. It is a segment of vehicle engineering which deals with motorcycles, buses,
trucks, etc. It includes mechanical, electrical, electronic, software and safety elements.
Skills Required:

● Artistic

● Creative

● Technical knowledge

● Effective planner

● Precision

● Meticulous

● Systematic

● Punctual

● Team worker
Automotive engineering is one of the most exciting professions you can choose. From the global
concerns of sustainable mobility, and teaching cars to drive themselves, to working out how we’ll get
around on the surface of Mars, automotive engineering is all about the future.
The challenges facing personal mobility are endless. Automotive engineers work in every area of the
industry, from the look and feel of current cars, to the safety and security of new forms of transport.
Attempting to make cars as fast as possible whilst keeping them fuel efficient may seem like an
impossible task, but this is the kind of problem automotive engineers deal with every day.
The work of an automotive engineer breaks down into three categories:
Design
Designing new products and improving existing ones
Research and Development
Finding solutions to engineering problems
Production
Planning and designing new production processes

What does an automotive engineer really do?


They study
One of the first steps in becoming an automotive engineer is going to university. Most automotive
engineers start out by studying Mechanical Engineering, but increasingly more specific Automotive
Engineering degrees are becoming available.
Don’t just look to apply in your home country - the automotive industry is truly international, and
studying abroad might be your way into this popular job market. For a growing list of courses available
worldwide
If you’re not sure that university is right for you, you could also explore apprenticeships as your route
into automotive.
Before you get to university, the most important subject area to focus on is STEM (Science,
Technology, Engineering, and Maths). Once at university, taking an internship can be a really important
step on your route into automotive. Having the right internship on your CV shows the
industry how passionate and dedicated you are about your career.

They think big


The automotive industry represents some of the largest companies in the world, from car manufacturers
to fuel specialists. As an engineer you can expect to work for one of these industrial titans.

They work in a global profession


Automotive engineers and automotive companies exist all over the world, based in completely different
cultures and speaking totally different languages. The automotive engineer needs to know how to
communicate on a global level and have a horizon broader than just their own culture.

They do more
Automotive engineers are forward thinking people. They are dynamic, visionary, and are employed
based on their ability to think outside the box. One way to expand your horizons, engage your passion,
and to start thinking like an automotive engineer is to get involved with extracurricular activities and
competitions.
Additional skills and activities
The variety of skills and tasks automotive engineers get involved with is almost endless; here are some
examples to get you started.
● Developing new test procedures, using both conventional and innovative methods

● Bringing new products to market and being involved in problem-solving and project
management
● Devising and organising tests, to answer questions from clients, consumers and other
engineers involved in vehicle development
● Anticipating vehicle or component behaviour in different conditions with computer
modelling software
● Analysing and interpreting technical data into reports or presentations and answering any
queries about the results
● Building an individual specialism within a larger team and working independently

● Contributing to regular team meetings to update colleagues on progress, problems and new
developments
● Managing all details of projects, including projected costs

● Recognising the benefits of engineering developments to related departments in order to


market projects and secure internal funding
● Negotiating costs of development and engineering work with commercial departments

● Monitoring any related systems or engineering issues associated with the component and
final product
● Supervising technical staff, engineers or designers (dependent upon specific role)

● Operating in cross-functional or internationally-based teams to design experiments in order


to test the validity and competence of new technology.

Employment Opportunities:
Automobile engineering is a huge industry. There are a great number of employment opportunities in the
following fields:

● Private national and multinational automobile companies

● Service stations

● Private transport companies

● Defence services

● Self-employment by setting up automobile garage or maintenance workshops


Scope:
There are plenty of employment opportunities for qualified people, who can choose a career in the
automobile industry that leads to a bright future.
Who Is This Career For?
A career as an automobile engineer is for people who are driven and passionate about cars. They
must have considerable understanding and interest in mechanics, electronics, and mathematics as
these are vital skills required for this career path. Automobile engineers must be organized
individuals who are able to work in a methodical manner.
People in this career are required to communicate with other professionals on a regular basis, both
from within the field and outside it. Hence, this career is only for those with fluent communication
skills. While automobile engineers must be innovative, eager workers, they must not get carried
away.
Want to know more about it?
Automobile engineers hold a wide variety of responsibilities. Their primary purpose is to optimize
the feasibility and design of automobiles while keeping costs to an absolute minimum.
A typical professional in this field spends a lot of time on researching and designing both systems
and machines for automobiles. The designs are initially done in the form of drawings and blueprints.
Automobile engineers then apply physical and mathematical principles to these plans to make sure
they are viable. The planning is done after considerable research, and then altered again after linking
the plans to the available research.

Once the planning process is done, the designing begins. Automobile engineers are responsible for
transforming their plans and research into a viable end product. They must oversee the entire process
of manufacturing, with meticulous attention to detail.
After the end product is manufactured, the most important part of an automobile engineer’s job
begins. Testing is a rigorous process that must be done with utmost care. This procedure generally
entails focusing on each and every component of an automobile to ensure it is able to function in
every imaginable condition in a safe and secure manner.

Automobile engineers generally tend to specialize in a particular area. The most common areas of
specialization include exhaust systems, engines and structural designs. No matter what an engineer
decides to specialize in, he or she is almost always required to work on all three aspects of the
automobile engineering process: research, design and testing.

There is also often a financial side to this job, which involves preparing costs of buying materials and
producing systems. It is also important to realize the legal aspects of this job. Automobile engineers
must be up to date with all safety regulations, so that they do not violate legislation related to
automobile engineering procedure.

Automobile engineers generally know they want to get into the field at a fairly young age, so they are
generally people who studied natural sciences and mathematics in high school. This gives them the
edge to get a degree in engineering, which is an essential prerequisite to become an automobile
engineer.

Additionally, while it is not necessary, a master's degree in a field such as
automotive engineering gives prospective automobile engineers a distinct advantage.
How is Life?
Automobile engineers work varying hours per week depending on the amount of work they are
assigned during a particular week. They may work anywhere between a 40-hour and a 55-hour
week, but may be required to work overtime if there are outstanding deadlines or
emergencies.

Automobile engineers spend a lot of time at their desks in front of a computer. When they are not
doing research in their office, they are at plants monitoring the manufacturing and testing of the
automobiles they planned. This may involve spending lengthy periods of time in noisy, dirty
factory environments.
What Perks come along with this career?
Automobile engineering is a career path that no one can deny is important. There are millions of
vehicles on roads in every corner of the world, and automobile engineers are the people responsible
for that. They feel an immense amount of satisfaction when they see a machine as intricate as a
modern automobile completed, knowing they are among the people who contributed towards its design.
Automobile engineers earn a considerable salary, more so than many other types of engineers. They
have a fair amount of job security as they begin gaining experience. People in the field are generally
passionate about automobiles, and so have the added advantage of working with something they truly
appreciate.
What Downsides are there in this career?
The job often requires automobile engineers to work under the immense pressure of tight deadlines.
Moreover, they have the lives of millions of people in their hands as they do their work. The slightest
mistakes in planning, designing or testing could be catastrophic.
Automobile engineers often have to deal with noisy factory conditions for extended periods of time.
They have to pay scrupulous attention to detail at every aspect of their job, which can often get
monotonous.
How is Competition?
While the automobile industry has been on a decline in the past five years in most areas of the world,
the number of people who opt for this career is quite low because of the high level of training and
specialization required to become successful. As a result, there is a fair amount of competition in the
field, especially for the most lucrative jobs. The number of jobs in the field is likely to increase at a
slow pace in the next few years.
Locations where this career is good?
In the USA, the Midwest is the best place to be an automobile engineer because of the concentration
of automobile manufacturing firms in the region. In Europe, Germany is the leading automobile
manufacturer.
There is a large demand for automobile engineers in Japan, South Korea, China and India.
Anti-Lock Braking System (ABS)
It is a safety system in automobiles. It prevents the wheels from locking while braking. The purpose of
this is to allow the driver to maintain steering control under heavy braking and, in some situations, to
shorten braking distances (by allowing the driver to hit the brake fully without skidding or loss of
control).

How Do Wheels Lock?


During braking, the wheels lock if the brake force applied is more than the friction between the road and
the tyre. This often happens in a panic braking situation, especially on a slippery road. When the front
wheels lock, the vehicle slides in the direction of motion; when the rear wheels lock, the vehicle swings
around. It is impossible to steer around an obstacle with the wheels locked. Locked wheels can thus result
in an accident, and skidding also reduces tyre life.

What Does ABS Do?


The system detects when a wheel is about to lock and momentarily releases the pressure on the locking
wheel. The brakes are reapplied as soon as the wheel has recovered.
A toothed wheel (pole wheel) is fitted to the rotating wheel hub. A magnetic sensor mounted on each
wheel in close proximity to the teeth generates electrical pulses when the pole wheel rotates.
The rate at which the pulses are generated (the frequency) is a measure of wheel speed. This signal is read
by the electronic control unit (ECU). When a wheel is about to lock, the ECU sends an electrical signal to the
modulator valve solenoid, which releases pressure from the brake chamber. When the wheel recovers
sufficiently, the brake pressure is reapplied by switching off the signal to the modulator valve.
The modulator valve has an additional ‘hold’ state which maintains the pressure in the brake chamber,
thus optimizing the braking process. The cycling of the modulator valve (5 to 6 times per second) is
continued until the vehicle comes to a controlled stop.
With ABS, the vehicle remains completely stable even when the driver continues to press the brake
pedal during braking, thus avoiding accidents.

Components:
The anti-lock braking system consists of the following components.
Wheel Speed Sensor
The wheel speed sensor consists of a permanent magnet and coil assembly. It generates electrical pulses
when the pole wheel rotates. The rate at which the pulses are generated is a measure of wheel speed.
The voltage induced increases with the speed of rotation of the wheel and reduces with increasing gap
between the pole wheel and the sensor.
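As a rough illustration of the relationship described above, the following sketch converts the sensor pulse frequency into a wheel speed for a 100-tooth pole wheel. The tooth count, wheel radius and sample values are assumptions for illustration only, not figures from any particular vehicle.

```c
#include <stdio.h>

#define POLE_WHEEL_TEETH   100     /* teeth on the pole wheel (assumed)        */
#define WHEEL_RADIUS_M     0.5     /* effective rolling radius in m (assumed)  */
#define PI                 3.14159265358979

/* Convert the sensor pulse frequency (pulses per second) into a wheel speed:
 * one wheel revolution produces POLE_WHEEL_TEETH pulses. */
double wheel_speed_kmh(double pulse_frequency_hz)
{
    double rev_per_s = pulse_frequency_hz / POLE_WHEEL_TEETH;
    double m_per_s   = rev_per_s * 2.0 * PI * WHEEL_RADIUS_M;
    return m_per_s * 3.6;          /* m/s -> km/h */
}

int main(void)
{
    /* e.g. 442 pulses per second is about 50 km/h for the assumed geometry */
    printf("%.1f km/h\n", wheel_speed_kmh(442.0));
    return 0;
}
```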
Pole Wheel
The pole wheel is a toothed wheel made of ferrous material. It normally has teeth on its face; in cases
where it is not possible to install the sensor parallel to the axle, pole wheels are designed with
teeth on the periphery. The pole wheel fitted on standard 9-20 and 10-20 tyres normally has 100 evenly
spaced teeth. Pole wheels with 80 evenly spaced teeth are used for vehicles with smaller tyre sizes.

Sensor Extension Cable


The sensor extension cable is a two-core cable which connects the wheel speed sensor to the Electronic
Control Unit. The inner core sheathing is of EPDM rubber and the outer sheathing is polyurethane,
which provides abrasion resistance to the cable. The cable has a moulded plug with two pins that connects
to the control assembly. The two cores are brown and black in colour.
Electronic Control Unit
The ECU is the core component of the ABS. The wheel speed sensor signals are the inputs to the
Electronic Control Unit. The ECU computes wheel speeds, wheel deceleration and acceleration. If any
wheel tends to lock, the ECU actuates the corresponding modulator valve to prevent wheel lock. The
ECU is normally mounted in the driver's cabin.

The ECU consists of seven major circuits:


> Input circuit
> Master circuit
> Slave circuit
> Driver circuit
> Feedback circuit
> Power supply circuit
> Fail safe circuit
The functions of the ECU are:
> It receives the wheel speed signals from the sensors. The wheel speed signals are processed and
appropriate output signals are sent to the modulator valves in the event of a wheel lock.
> It continuously monitors the status and operation of ABS components and wiring.
> It alerts the driver in the event of occurrence of any electrical fault in the ABS system by actuating a
warning lamp.
> It disconnects the exhaust brakes during ABS operations.
> It enables the service technician to read the faults in the system either through a diagnostic controller
or a blink code lamp.
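A much-simplified sketch of the per-wheel decision the ECU makes is shown below. It is not any manufacturer's algorithm; the slip thresholds and the valve commands (build, hold, release) are illustrative assumptions based on the description above.

```c
#include <stdio.h>

/* Modulator valve states described in the text */
typedef enum { VALVE_BUILD, VALVE_HOLD, VALVE_RELEASE } valve_cmd_t;

#define SLIP_RELEASE_THRESHOLD  0.25   /* assumed slip above which the wheel is near lock  */
#define SLIP_REAPPLY_THRESHOLD  0.10   /* assumed slip below which pressure is rebuilt     */

/* Longitudinal slip: 0 = free rolling, 1 = fully locked wheel */
static double wheel_slip(double vehicle_speed, double wheel_speed)
{
    if (vehicle_speed <= 0.1)
        return 0.0;
    return (vehicle_speed - wheel_speed) / vehicle_speed;
}

/* One ABS cycle for a single wheel: decide the modulator valve command */
valve_cmd_t abs_cycle(double vehicle_speed, double wheel_speed)
{
    double slip = wheel_slip(vehicle_speed, wheel_speed);

    if (slip > SLIP_RELEASE_THRESHOLD)
        return VALVE_RELEASE;          /* wheel about to lock: dump pressure   */
    if (slip > SLIP_REAPPLY_THRESHOLD)
        return VALVE_HOLD;             /* wheel recovering: hold pressure      */
    return VALVE_BUILD;                /* wheel rolling freely: reapply brakes */
}

int main(void)
{
    /* vehicle at 20 m/s, wheel decelerating towards lock-up: slip 0.35 -> VALVE_RELEASE */
    printf("%d\n", abs_cycle(20.0, 13.0));
    return 0;
}
```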

Modulator Valve Cable


The modulator valve cable has three cores: two solenoid interface lines and a common ground
line. The inner core sheathing is of EPDM type and the outer sheathing is polyurethane, which provides
abrasion resistance to the cable.
The cable has a three-pin moulded socket connected to the modulator valve solenoid at one end and an
interlock connector with a locking feature at the other end. The cores are brown, blue and green.
Modulator Valve
The ABS modulator valve regulates the air pressure to the brake chamber during ABS action. During normal
braking it allows air to flow directly from inlet to delivery. The modulator valve cannot automatically apply
the brakes, or increase the brake application pressure above the level applied by the driver through the
dual brake valve.

There is an inlet port, a delivery port and an exhaust passage.


> The inlet port is connected to the delivery of quick release valve or relay valve.
> The delivery port is connected to the brake chamber.
> The exhaust passage vents air from the brake chambers.

The modulator valve has two solenoids. By energizing the solenoids, the modulator valve can be
switched to any of the following modes:
> Pressure
> Pressure hold
> Pressure release

Quick Release Valve


Quick release valves are fitted in the air braking system to release the air from the brake chamber quickly
after the brake pedal is released. This prevents delay in brake release due to long piping runs or multiple
brake chambers being exhausted through the brake valve.

Relay Valve
The relay valve provides a means of quickly admitting air to and releasing it from the brake chamber, in
accordance with the signal pressure from the delivery of the dual brake valve. Air from the reservoir
passes through the valve into the brake chamber. The pressure applied to the brake is equal to the signal
pressure from the dual brake valve. When the brake pedal is released the signal pressure is released.
The pressure in the brake chamber is released directly through the exhaust port of the relay valve.
Warning Lamp
Vehicles are fitted with an ABS warning lamp. It is an amber LED indicator lamp which lights up
when the system has detected any electrical fault. The ABS warning lamp is located on the instrument panel
in front of the driver.

Blink Code Lamp


This lamp is green in colour and is used to indicate the stored faults in the system to the service
technician when the blink code switch is operated. The nature of the fault in the system can be diagnosed
from the number of flashes.
Off Highway Switch
This is an optional switch in front of the driver which can be switched ON when the vehicle is
operating off-highway. In this mode, ABS control will allow higher wheel slip to achieve a shorter
stopping distance than with normal ABS control.

Blink Code Switch


A momentary switch that grounds the ABS Indicator Lamp output is used to place the ECU into the
diagnostic blink code mode and is typically located on the vehicle's dash panel.
ADAPTIVE CRUISE CONTROL
Adaptive Cruise Control (ACC) is an automotive feature that allows a
vehicle’s cruise control system to adapt the vehicle speed to the environment.
A radar system attached to the front of the vehicle is used to detect whether
slower moving vehicles are in the ACC vehicle path. If a slower moving
vehicle is detected, the ACC system will slow the vehicle down and control
the clearance, or time gap, between the ACC vehicle and the forward vehicle.
If the system detects that the forward vehicle is no longer in the ACC vehicle
path, the ACC system will accelerate the vehicle back to its set cruise control
speed. This operation allows the ACC vehicle to autonomously slow down and
speed up; control is achieved via engine throttle control and limited brake
operation.

HOW DOES IT WORK?


The radar headway sensor sends information to a digital signal processor,
which in turn translates the speed and distance information for a longitudinal
controller. The result? If the lead vehicle slows down, or if another object is
detected, the system sends a signal to the engine or braking system to
decelerate. Then, when the road is clear, the system will re-accelerate the
vehicle back to the set speed.
The adaptive cruise control (ACC) system depends on two infrared sensors to
detect cars up ahead. Each sensor has an emitter, which sends out a beam of
infrared light energy, and a receiver, which captures light reflected back from
the vehicle ahead.
The first sensor, called the sweep long-range sensor, uses a narrow infrared
beam to detect objects six to 50 yards away. At its widest point, the beam
covers no more than the width of one highway lane, so this sensor detects
only vehicles directly ahead and doesn't detect cars in other lanes. Even so, it
has to deal with some tricky situations, like keeping track of the right target
when the car goes around a curve. To deal with that problem, the system has
a solid-state gyro that instantaneously transmits curve-radius information to
the sweep sensor, which steers its beam accordingly.
Another challenge arises when a car suddenly cuts in front of an ACC-
equipped car. Because the sweep sensor's beam is so narrow, it doesn't "see"
the other car until it's smack in the middle of the lane. That's where the other
sensor, called the cut-in sensor, comes in. It has two wide beams that "look"
into adjacent lanes, up to a distance of 30 yards ahead. And because it ignores
anything that isn't moving at least 30 percent as fast as the car in which it is
mounted, highway signs and parked cars on the side of the road don't confuse
it.
Information from the sensors goes to the Vehicle Application Controller
(VAC), the system's computing and communication centre. The VAC reads
the settings the driver has selected and figures out such things as how fast the
car should go to maintain the proper distance from cars ahead and when the
car should release the throttle or downshift to slow down. Then it
communicates that information to devices that control the engine and the
transmission.
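A minimal sketch of the 'proper distance' logic described above is shown below: it converts the measured range to the vehicle ahead and the car's own speed into a time gap, then nudges the speed command accordingly. The 1.8-second target gap and the gain are assumptions used only for illustration.

```c
#include <stdio.h>

#define TARGET_GAP_S  1.8    /* assumed desired time gap to the lead vehicle              */
#define GAP_GAIN      2.0    /* assumed m/s of speed correction per second of gap error   */

/* Compute the commanded speed (m/s) from the set speed, own speed and the
 * measured range to the vehicle ahead. A negative range means no target. */
double acc_speed_command(double set_speed, double own_speed, double range_m)
{
    if (range_m < 0.0 || own_speed < 1.0)
        return set_speed;                      /* no target: plain cruise control  */

    double gap_s = range_m / own_speed;        /* current time gap                  */
    double cmd   = own_speed + GAP_GAIN * (gap_s - TARGET_GAP_S);

    return (cmd < set_speed) ? cmd : set_speed;   /* never exceed the set speed     */
}

int main(void)
{
    /* set speed 30 m/s, driving at 25 m/s, lead vehicle 30 m ahead:
     * gap is 1.2 s, so the command drops to 23.8 m/s */
    printf("%.1f m/s\n", acc_speed_command(30.0, 25.0, 30.0));
    return 0;
}
```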
There are several inputs:
System on/off: If on, denotes that the cruise-control system should maintain
the car speed.
Engine on/off: If on, denotes that the car engine is turned on; the cruise-
control system is only active if the engine is on.
Pulses from wheel: A pulse is sent for every revolution of the wheel.
Accelerator: Indication of how far the accelerator has been pressed.
Brake: On when the brake is pressed; the cruise-control system temporarily
reverts to manual control if the brake is pressed.
Increase/Decrease Speed: Increase or decrease the maintained speed; only
applicable if the cruise-control system is on.
Resume: Resume the last maintained speed; only applicable if the cruise-
control system is on.
Clock: Timing pulse every millisecond.
There is one output from the system:
Throttle: Digital value for the engine throttle setting.
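To make the input and output signals listed above concrete, here is a minimal sketch of one control step that turns them into a throttle value. The structure and the proportional gain are illustrative assumptions, not the actual controller design.

```c
#include <stdbool.h>
#include <stdio.h>

/* Inputs listed above, grouped into one structure */
typedef struct {
    bool   system_on;       /* cruise control enabled by the driver         */
    bool   engine_on;       /* system is only active if the engine runs     */
    bool   brake_pressed;   /* brake temporarily reverts to manual control  */
    double wheel_speed_kmh; /* derived from the wheel pulses                */
    double set_speed_kmh;   /* maintained speed (set / resume / +/-)        */
} cruise_inputs_t;

#define KP 2.0              /* assumed proportional gain, % throttle per km/h */

/* One control step: returns the throttle output (0..100 %) */
double cruise_step(const cruise_inputs_t *in, double driver_throttle)
{
    if (!in->system_on || !in->engine_on || in->brake_pressed)
        return driver_throttle;                 /* manual control              */

    double error    = in->set_speed_kmh - in->wheel_speed_kmh;
    double throttle = KP * error;               /* simple proportional law     */

    if (throttle < 0.0)   throttle = 0.0;       /* cruise control never brakes */
    if (throttle > 100.0) throttle = 100.0;
    return throttle;
}

int main(void)
{
    cruise_inputs_t in = { true, true, false, 95.0, 100.0 };
    printf("throttle = %.1f %%\n", cruise_step(&in, 10.0));  /* 5 km/h error -> 10 %% */
    return 0;
}
```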
ADAPTIVE CRUISE CONTROL FEATURES
- Maintains a safe, comfortable distance between vehicles without driver
interventions
- Maintains a consistent performance in poor visibility conditions.
- Maintains a continuous performance during road turns and elevation
changes
- Alerts drivers by way of automatic braking.
PHYSICAL LAYOUT
The ACC system consists of a series of interconnecting components and
systems. The method of communication between the different modules is via
a serial communication network known as the Controller Area Network
(CAN).
ACC Module – The primary function of the ACC module is to process the
radar information and determine if a forward vehicle is present. When the
ACC system is in 'time gap control', it sends information to the Engine
Control and Brake Control modules to control the clearance between the
ACC Vehicle and the Target Vehicle.
Engine Control Module – The primary function of the Engine Control
Module is to receive information from the ACC module and Instrument
Cluster and control the vehicle's speed based on this information. The Engine
Control Module controls vehicle speed by controlling the engine's throttle.
Brake Control Module – The primary function of the Brake Control Module
is to determine vehicle speed via each wheel and to decelerate the vehicle by
applying the brakes when requested by the ACC Module. The braking system
is hydraulic with electronic enhancement, such as an ABS brake system, and
is not a full-authority brake-by-wire system.
Instrument Cluster – The primary function of the Instrument Cluster is to
process the Cruise Switches and send their information to the ACC and
Engine Control Modules. The Instrument Cluster also displays text messages
and tell-tales for the driver so that the driver has information regarding the
state of the ACC system.
CAN – The Controller Area Network (CAN) is an automotive standard
network that utilizes a 2 wire bus to transmit and receive data. Each node on
the network has the capability to transmit 0 to 8 bytes of data in a message
frame. A message frame consists of a message header, followed by 0 to 8
data bytes, and then a checksum. The message header is a unique identifier
that determines the message priority. Any node on the network can transmit
data if the bus is free. If multiple nodes attempt to transmit at the same time,
an arbitration scheme is used to determine which node will control the bus.
The message with the highest priority, as defined in its header, wins the
arbitration and is transmitted. The losing node retries its message as soon
as it detects a bus-free state.
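The frame layout just described (an identifier that also sets the priority, 0 to 8 data bytes, and a checksum) can be pictured as a simple data structure, shown below. This is a schematic view for illustration only; it is not the bit-accurate CAN frame format or any particular controller's API.

```c
#include <stdint.h>
#include <stdio.h>

/* Schematic CAN message frame: header (identifier), 0..8 data bytes, checksum */
typedef struct {
    uint32_t id;        /* message identifier; lower value = higher priority */
    uint8_t  dlc;       /* data length code, 0..8 bytes                      */
    uint8_t  data[8];   /* payload                                           */
    uint16_t checksum;  /* placeholder for the CRC carried on the bus        */
} can_frame_t;

/* Arbitration: when two nodes transmit at once, the frame with the lower
 * identifier (higher priority) wins the bus and the other node retries. */
const can_frame_t *arbitrate(const can_frame_t *a, const can_frame_t *b)
{
    return (a->id < b->id) ? a : b;
}

int main(void)
{
    can_frame_t brake_req = { .id = 0x120, .dlc = 2, .data = { 0x01, 0x40 } };
    can_frame_t acc_gap   = { .id = 0x300, .dlc = 1, .data = { 0x03 } };

    /* the lower identifier (0x120) wins the arbitration */
    printf("winner: 0x%X\n", (unsigned)arbitrate(&brake_req, &acc_gap)->id);
    return 0;
}
```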
Cruise Switches – The Cruise Switches are mounted on the steering wheel
and have several buttons which allow the driver to command operation of the
ACC system. The switches include:
'On': place system in the 'ACC standby' state
'Off': cancel ACC operation and place system in the 'ACC off' state
'Set +': activate ACC and establish set speed or accelerate
'Coast': decelerate
'Resume': resume to set speed
'Time Gap +': increase gap
'Time gap –': decrease gap

ADVANTAGES
1. The driver is relieved from the task of careful acceleration, deceleration
and braking in congested traffics.
2. A highly responsive traffic system that adjusts itself to avoid accidents can
be developed.
3. Since the braking and acceleration are done in a systematic way, the fuel
efficiency of the vehicle is increased.
DISADVANTAGES
1. A low-cost version has not yet been realized.
2. A high market penetration is required if a society of intelligent vehicles is
to be formed.
3. Encourages the driver to become careless. It can lead to severe accidents if
the system is malfunctioning.
4. The ACC systems developed so far do not enable vehicles to cooperate with other
vehicles, and they do not respond directly to traffic signals.
CRDI (Common Rail Direct Injection)
CRDi stands for Common Rail Direct Injection, meaning direct injection of the fuel into the cylinders
of a diesel engine via a single common line, called the common rail, which is connected to all the fuel
injectors.

Whereas ordinary diesel direct fuel-injection systems have to build up pressure anew for each and every
injection cycle, the new common rail (line) engines maintain constant pressure regardless of the
injection sequence. This pressure then remains permanently available throughout the fuel line. The
engine's electronic timing regulates injection pressure according to engine speed and load. The
electronic control unit (ECU) modifies injection pressure precisely and as needed, based on data
obtained from sensors on the cam and crankshafts. In other words, compression and injection occur
independently of each other. This technique allows fuel to be injected as needed, saving fuel and
lowering emissions.

A more accurately measured and timed mixture spray in the combustion chamber, which significantly reduces
unburned fuel, gives CRDi the potential to meet future emission guidelines such as Euro V. CRDi
engines are now being used in almost all Mercedes-Benz, Toyota, Hyundai, Ford and many other diesel
automobiles.

History
The common rail system prototype was developed in the late 1960s by Robert Huber of Switzerland
and the technology further developed by Dr. Marco Ganser at the Swiss Federal Institute of Technology
in Zurich, later of Ganser-Hydromag AG (est.1995) in Oberägeri. The first successful usage in a
production vehicle began in Japan by the mid-1990s. Modern common rail systems, whilst working on
the same principle, are governed by an engine control unit (ECU) which opens each injector
electronically rather than mechanically. This was extensively prototyped in the 1990s with
collaboration between Magneti Marelli, Centro Ricerche Fiat and Elasis. The first passenger car that
used the common rail system was the 1997 model Alfa Romeo 156 2.4 JTD, and later on that same year
Mercedes-Benz C 220 CDI.
Common rail engines have been used in marine and locomotive applications for some time. The
Cooper-Bessemer GN-8 (circa 1942) is an example of a hydraulically operated common rail diesel
engine, also known as a modified common rail. Vickers used common rail systems in submarine
engines circa 1916. Early engines had a pair of timing cams, one for ahead running and one for astern.
Later engines had two injectors per cylinder, and the final series of constant-pressure turbocharged
engines were fitted with four injectors per cylinder. This system was used for the injection of both
diesel oil and heavy fuel oil (600cSt heated to a temperature of approximately 130 °C). The common
rail system is suitable for all types of road cars with diesel engines, ranging from city cars such as the
Fiat Nuova Panda to executive cars such as the Audi A6.

Operating Principle
Solenoid or piezoelectric valves make possible fine electronic control over the fuel injection time and
quantity, and the higher pressure that the common rail technology makes available provides better fuel
atomisation. In order to lower engine noise, the engine's electronic control unit can inject a small
amount of diesel just before the main injection event ("pilot" injection), thus reducing its explosiveness
and vibration, as well as optimizing injection timing and quantity for variations in fuel quality, cold
starting and so on. Some advanced common rail fuel systems perform as many as five injections per
stroke.
Common rail engines require a very short (less than 10 seconds) heating-up time, or none at all, depending on
ambient temperature, and produce lower engine noise and emissions than older systems. Diesel engines
have historically used various forms of fuel injection. Two common types include the unit injection
system and the distributor/inline pump systems (See diesel engine and unit injector for more
information). While these older systems provided accurate fuel quantity and injection timing control,
they were limited by several factors:

• They were cam driven, and injection pressure was proportional to engine speed. This typically meant
that the highest injection pressure could only be achieved at the highest engine speed and the maximum
achievable injection pressure decreased as engine speed decreased. This relationship is true with all
pumps, even those used on common rail systems; with the unit or distributor systems, however, the
injection pressure is tied to the instantaneous pressure of a single pumping event with no accumulator,
and thus the relationship is more prominent and troublesome.
• They were limited in the number and timing of injection events that could be commanded during a
single combustion event. While multiple injection events are possible with these older systems, it is
much more difficult and costly to achieve.

• For the typical distributor/inline system, the start of injection occurred at a pre-determined pressure
(often referred to as: pop pressure) and ended at a pre-determined pressure. This characteristic resulted
from "dummy" injectors in the cylinder head which opened and closed at pressures determined by the
spring preload applied to the plunger in the injector. Once the pressure in the injector reached a pre-
determined level, the plunger would lift and injection would start.

In common rail systems, a high-pressure pump stores a reservoir of fuel at high pressure — up to and
above 2,000 bar (roughly 29,000 psi). The term "common rail" refers to the fact that all of the fuel injectors are
supplied by a common fuel rail which is nothing more than a pressure accumulator where the fuel is
stored at high pressure. This accumulator supplies multiple fuel injectors with high-pressure fuel. This
simplifies the purpose of the high-pressure pump in that it only has to maintain a commanded pressure
at a target (either mechanically or electronically controlled). The fuel injectors are typically ECU-
controlled. When the fuel injectors are electrically activated, a hydraulic valve (consisting of a nozzle
and plunger) is mechanically or hydraulically opened and fuel is sprayed into the cylinders at the
desired pressure. Since the fuel pressure energy is stored remotely and the injectors are electrically
actuated, the injection pressure at the start and end of injection is very near the pressure in the
accumulator (rail), thus producing a square injection rate. If the accumulator, pump and plumbing are
sized properly, the injection pressure and rate will be the same for each of the multiple injection events.
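As a rough sketch of how a commanded rail pressure might be derived from engine speed and load, the code below looks up a target pressure in a small map. The map values and axis points are purely illustrative assumptions, not calibration data for any real engine.

```c
#include <stdio.h>

/* Illustrative rail-pressure map: target pressure in bar indexed by
 * engine speed (rows) and load (columns). Values are assumptions only. */
static const double speed_axis[3]  = { 1000.0, 2500.0, 4000.0 };  /* rpm    */
static const double load_axis[3]   = { 20.0,   50.0,   90.0   };  /* % load */
static const double rail_map[3][3] = {
    {  300.0,  600.0,  900.0 },
    {  700.0, 1100.0, 1500.0 },
    { 1000.0, 1500.0, 1800.0 },
};

/* Nearest-point lookup (a real ECU would interpolate between map cells) */
static int nearest(const double axis[3], double v)
{
    int best = 0;
    for (int i = 1; i < 3; i++)
        if ((axis[i] - v) * (axis[i] - v) < (axis[best] - v) * (axis[best] - v))
            best = i;
    return best;
}

double target_rail_pressure_bar(double rpm, double load_pct)
{
    return rail_map[nearest(speed_axis, rpm)][nearest(load_axis, load_pct)];
}

int main(void)
{
    /* mid speed, mid load -> 1100 bar in the assumed map */
    printf("%.0f bar\n", target_rail_pressure_bar(2600.0, 55.0));
    return 0;
}
```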
Advantages
CRDi engines are advantageous in many ways. Cars fitted with this new engine technology are
believed to deliver 25% more power and torque than the normal direct injection engine. It also offers
superior pick up, lower levels of noise and vibration, higher mileage, lower emissions, lower fuel
consumption, and improved performance.

Disadvantages
Like all good things, this engine also has a few disadvantages. The key
disadvantage of the CRDi engine is that it is costlier than a conventional engine. The list also includes
a higher degree of engine maintenance and costly spare parts. Also, this technology cannot be employed on
ordinary engines.

Applications
The most common applications of common rail engines are marine and locomotive applications. Also,
in the present day they are widely used in a variety of car models ranging from city cars to premium
executive cars.
However, most of the car manufacturers have started using the new engine concept and are appreciating
the long term benefits of the same. The technology that has revolutionized the diesel engine market is
now gaining prominence in the global car industry.

CRDi technology revolutionized diesel engines and, through the introduction of GDI technology, petrol
engines as well.
The introduction of CRDi brings many advantages:
• More power is developed.
• Increased fuel efficiency.
• Reduced noise and greater stability.
• Reduced pollutants and exhaust particulates.
• Enhanced exhaust gas recirculation.
• Precise injection timing; pilot and post injection improve the combustion quality.
• Finer pulverization of the fuel, and very high injection pressures can be achieved.
• The powerful microcomputer makes the whole system more refined.
• Torque at lower engine speeds is doubled.
The main disadvantage is that this technology increases the cost of the
engine. Also, this technology cannot be employed on ordinary engines.
DTSI (Digital Twin Spark Ignition System)
It is very interesting to know about complete combustion in automobile engineering, because in actual
practice, perfect combustion is not at all possible due to various losses in the combustion chamber as
well as design of the internal combustion engine. Moreover the process of burning of the fuel is also
not instantaneous. However, an alternative solution is to make the combustion of fuel as fast as
possible. This can be done by using two spark plugs which spark alternately at a certain time interval
so as to increase the diameter of the flame and burn the fuel almost instantaneously. This system is called DTSI
(Digital Twin Spark Ignition). In this system, due to the twin sparks, combustion is more complete.
This chapter presents the working of the digital twin spark ignition system: how the twin sparks are produced
at 20,000 volts, their timing, efficiency, advantages and disadvantages, the diameter of the flame, how
more complete combustion is possible, and how to decrease smoke and exhaust emissions from the exhaust pipe of a
bike using the twin spark system.

How Does It Work?


Digital Twin Spark ignition engine has two Spark plugs located at opposite ends of the combustion
chamber and hence fast and efficient combustion is obtained. The benefits of this efficient combustion
process can be felt in terms of better fuel efficiency and lower emissions. The ignition system on the
Twin spark is a digital system with static spark advance and no moving parts subject to wear. It is
mapped by the integrated digital electronic control box which also handles fuel injection and valve
timing. It features two plugs per cylinder.

This innovative solution, which also entails a special configuration of the hemispherical combustion chambers and piston heads, ensures a fast, wide flame front when the air-fuel mixture is ignited; it therefore needs less ignition advance and allows relatively lean mixtures to be used. The technology offers something of the light weight and power of a two-stroke engine together with a significant power boost, i.e. a considerably better power-to-weight ratio than quite a few four-stroke engines.
Moreover, such a system can adjust idling speed & even cuts off fuel feed when the accelerator pedal is
released, and meters the enrichment of the air-fuel mixture for cold starting and accelerating purposes;
if necessary, it also prevents the upper rev limit from being exceeded. At low revs, the over boost is
mostly used when overtaking, and this is why it cuts out automatically. At higher speeds the over boost
will enhance full power delivery and will stay on as long as the driver exercises maximum pressure on
the accelerator.
Main characteristics
• Digital electronic ignition with two plugs per cylinder and two ignition distributors.
• Twin overhead cams with camshaft timing variation.
• Injection fuel feed with integrated electronic twin spark ignition.
• A high specific power.
• Compact design and Superior balance.

Construction
A digital twin spark ignition engine has two spark plugs, located on opposite sides of the combustion chamber. This DTS-i arrangement gives a higher combustion rate because the charge is ignited from two points at once. The engine burns the fuel at roughly double the normal rate, which enhances both engine life and fuel efficiency. The system is mapped by the digital electronic control box, which also handles fuel injection and valve timing.

A microprocessor continuously senses the engine speed and load and responds by altering the ignition timing, thereby optimizing power and fuel economy.
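The mapping described above can be pictured as a two-dimensional lookup table indexed by engine speed and load. The sketch below shows one hypothetical way such a table could be interpolated in software; the breakpoints and advance values are invented illustrative numbers, not the actual map of any DTS-i engine.

# Minimal sketch of an ignition-advance lookup table (hypothetical values).
# Rows = load (%), columns = engine speed (rpm); entries = spark advance
# in degrees before top dead centre. A real ECU map is far denser.
RPM_POINTS  = [1000, 3000, 5000, 7000]
LOAD_POINTS = [20, 60, 100]
ADVANCE = [  # degrees BTDC, one row per load point
    [12, 22, 30, 34],   # 20 % load
    [10, 18, 26, 30],   # 60 % load
    [ 8, 15, 22, 26],   # 100 % load
]

def interp(x, xs, ys):
    """Piecewise-linear interpolation of y(x) over sorted breakpoints xs."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])

def spark_advance(rpm, load_pct):
    """Bilinear lookup: first along rpm for each load row, then along load."""
    by_load = [interp(rpm, RPM_POINTS, row) for row in ADVANCE]
    return interp(load_pct, LOAD_POINTS, by_load)

print(round(spark_advance(4000, 75), 1), "deg BTDC")   # prints about 20.7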
Advantages
• Less vibration and noise.
• Longer life of engine parts such as piston rings and valve stems.
• Decrease in specific fuel consumption.
• No overheating.
• Increased thermal efficiency of the engine, even under high loads.
• Better starting of the engine in winter, in cold climatic conditions or at very low temperatures, because of the increased compression ratio.
• Because of the twin sparks, the diameter of the flame grows rapidly, resulting in near-instantaneous burning of the fuel. The force exerted on the piston therefore increases, leading to better work output.

Disadvantages
• Higher NOx emissions.
• If one spark plug gets damaged, both usually have to be replaced.
• The cost is relatively higher.
Electromagnetic Brake
Electromagnetic brakes are brakes that work on electric and magnetic power; they operate on the principle of electromagnetism. Because they are essentially frictionless, they are more durable, have a longer life span and need less maintenance. These brakes are an excellent replacement for conventional brakes owing to their many advantages. The reason for implementing them in automobiles is to reduce brake wear: since braking is frictionless, there is no wear and no frictional heating of the brake surfaces. They can be used in heavy vehicles as well as in light vehicles.
Electromagnetic brakes are much more effective than conventional brakes, and the time taken to apply them is also shorter. There is very little need for lubrication. Electromagnetic brakes give better performance at lower cost, which is what today's vehicles need. There are many more advantages besides, which is why electromagnetic brakes are an excellent replacement for conventional brakes.

Electromagnetic brakes are part of today's automobiles. An electromagnetic braking system for automobiles such as cars is an effective braking system, and by using electromagnetic brakes we can increase the life of the braking unit. The working principle of this system is that when magnetic flux passes through and perpendicular to the rotating wheel, eddy currents are induced that flow so as to oppose the rotation of the wheel or rotor. These eddy currents try to stop the rotating wheel or rotor, with the result that the wheel or rotor comes to rest.
HISTORY
It has been found that electromagnetic brakes can develop a retarding (negative) power equal to nearly twice the maximum power output of a typical engine, and at least three times the braking power of an exhaust brake (Reverdin 1994). This performance makes electromagnetic brakes a much more competitive candidate for alternative retardation equipment compared with other retarders. By using electromagnetic brakes as supplementary retardation equipment, the friction brakes can be used less frequently and therefore practically never reach high temperatures. The brake linings would last considerably longer before requiring maintenance, and the potential "brake fade" problem could be avoided.

In research conducted by a truck manufacturer, it was shown that the electromagnetic brake assumed 80% of the duty which would otherwise have been demanded of the regular service brake (Reverdin 1974). Furthermore, electromagnetic brakes prevent the danger that can arise from prolonged use of the friction brakes beyond their capability to dissipate heat.

This is most likely to occur while a vehicle is descending a long gradient at high speed. In a study of a vehicle with 5 axles, weighing 40 tons and powered by an engine of 310 b.h.p., travelling down a gradient of 6% at a steady speed between 35 and 40 m.p.h., it was calculated that the braking power necessary to maintain this speed is of the order of 450 hp. The brakes would therefore have to absorb about 300 hp, meaning that each brake on the 5 axles must absorb about 30 hp, which is more than a friction brake can normally absorb without self-destruction.
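The figures quoted above can be checked with a quick calculation: the power a descending vehicle must dissipate to hold a steady speed is roughly the weight component along the slope times the road speed, less whatever is absorbed by rolling resistance, aerodynamic drag and the engine. The sketch below reproduces that order-of-magnitude estimate; the allowance of 300 hp for the wheel brakes is taken from the text, while the split between gravity power and other resistances is an assumption for illustration.

# Rough check of the braking-power example above (40 t, 6 % grade, ~37 mph).
G = 9.81  # m/s^2

def downhill_power_kw(mass_kg, grade_percent, speed_mph):
    """Power (kW) that gravity feeds into the vehicle on a constant downgrade."""
    speed_ms = speed_mph * 0.44704
    slope = grade_percent / 100.0          # small-angle approximation
    return mass_kg * G * slope * speed_ms / 1000.0

total_kw = downhill_power_kw(40_000, 6, 37.5)    # about 395 kW
total_hp = total_kw / 0.7457                     # about 530 hp gross
# Engine braking, rolling resistance and drag absorb part of this, leaving
# the order of 300 hp for the wheel brakes, i.e. roughly 30 hp per brake
# on a 10-brake, 5-axle combination (as stated in the text).
print(f"gravity power ~{total_hp:.0f} hp;"
      f" per brake if brakes take 300 hp: {300 / 10:.0f} hp")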

The magnetic brake is well suited to such conditions since it will independently absorb more than 300
hp (Reverdin 1974). It therefore can exceed the requirements of continuous uninterrupted braking,
leaving the friction brakes cool and ready for emergency braking in total safety.

The installation of an electromagnetic brake is not very difficult if there is enough space between the gearbox and the rear axle. It does not need a subsidiary cooling system, nor does it rely on the efficiency of engine components for its operation, as exhaust and hydrokinetic brakes do. The exhaust brake is an on/off device and hydrokinetic brakes have a very complex control system; the electromagnetic brake control system, by contrast, is a simple electric switching system, which gives it superior controllability.

CONSTRUCTION
The construction of the electromagnetic braking system is very simple. The parts needed are an electromagnet, a rheostat, sensors and a magnetic insulator. A cylindrical, ring-shaped electromagnet with a winding is placed parallel to the rotating wheel disc (rotor). The electromagnet is fixed, like a stator, and the coils are wound around it. These coils are connected to an electrical circuit containing a rheostat, which is linked to the brake pedal; the rheostat controls the current flowing through the coils and hence the magnetic flux. The magnetic insulator prevents the magnetization of other parts, such as the axle, and also acts as a support frame for the electromagnet. The sensors are used to indicate any disconnection in the circuit; if there is an error they give an alert, so accidents can be avoided.

WORKING PRINCIPLE
The working principle of the electric retarder is based on the creation of eddy currents within a metal disc rotating between two electromagnets, which set up a force opposing the rotation of the disc. If the electromagnet is not energized, the disc rotates freely and accelerates uniformly under the action of the weight to which its shaft is connected. When the electromagnet is energized, the rotation of the disc is retarded and the energy absorbed appears as heating of the disc. If the current exciting the electromagnet is varied by a rheostat, the braking force varies in direct proportion to the value of the current. The development of this invention began when the French company Thelma, associated with Raoul Sarasin, developed and marketed several generations of electric brakes based on the functioning principle described above. A typical retarder consists of a stator and a rotor. The stator holds 16 induction coils, energized separately in groups of four. The coils are made of varnished aluminium wire mounted in epoxy resin.

The stator assembly is supported resiliently through anti-vibration mountings on the chassis frame of the vehicle. The rotor is made up of two discs, which provide the braking force when subjected to the electromagnetic influence as the coils are excited. Careful design of the fins, which are integral to the discs, permits independent cooling of the arrangement.
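The statement above that the braking force varies in direct proportion to the exciting current can be turned into a very small control model. The sketch below is purely illustrative: the proportionality constant, wheel radius and vehicle mass are invented, and a real retarder's torque also depends on rotor speed and temperature, which are ignored here.

# Toy model of an eddy-current retarder: braking torque taken as directly
# proportional to coil current (per the text), everything else idealised.
def retarder_torque_nm(coil_current_a, k_nm_per_a=45.0):
    """Braking torque for a given excitation current (hypothetical constant k)."""
    return k_nm_per_a * coil_current_a

def deceleration_ms2(coil_current_a, wheel_radius_m=0.5, vehicle_mass_kg=16_000):
    """Deceleration if the retarder torque acted directly at the driven wheels."""
    force = retarder_torque_nm(coil_current_a) / wheel_radius_m
    return force / vehicle_mass_kg

for amps in (10, 20, 40):   # stepping the rheostat up
    print(amps, "A ->", round(deceleration_ms2(amps), 3), "m/s^2")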
ADVANTAGES
1. Electromagnetic brakes can develop a retarding power equal to nearly twice the maximum power output of a typical engine, and at least three times the braking power of an exhaust brake.

2. Electromagnetic brakes work in a relatively cool condition and satisfy all the energy requirements of braking at high speeds, completely without the use of friction. Because of their specific installation location (in the transmission line of rigid vehicles), electromagnetic brakes have better heat dissipation capability and avoid the problems that friction brakes face.

3. Electromagnetic brakes have been used as supplementary retardation equipment in addition to the regular friction brakes on heavy vehicles.

4. Electromagnetic brakes have great braking efficiency and have the potential to regain energy lost in braking.

5. Their component cost is low.

DISADVANTAGES
1. The installation of an electromagnetic brake is very difficult if there is not enough space between the gearbox and the rear axle.
2. A separate compressor is needed.
3. Maintenance of equipment components such as hoses and valves has to be done periodically.
4. Grease or oil cannot be used on it.
APPLICATIONS
1. Used in crane control systems.
2. Used in winch control.
3. Used in lift control.
4. Used for automation purposes.

Many new technologies are arriving in the world, and they have a large effect; most industries have gained new faces because of them. The automobile industry is one of them. There is a boom in the world's automobile industry, so a great deal of research is going on here, and as an important part of the automobile, brakes are also seeing innovation. The electromagnetic brake is one such innovation.
An electromagnetic braking system for automobiles such as cars is an effective braking system, and by using electromagnetic brakes we can increase the life of the braking unit. The working principle of this system is that when the electromagnetic flux passes through and perpendicular to the rotating wheel, an eddy current is induced in the wheel or rotor. This eddy current flows so as to oppose the rotation and tries to stop the rotating wheel or rotor, with the result that the wheel or rotor comes to rest.
SPARK PLUG
A spark plug is a device used to produce an electric spark to ignite the compressed air-fuel mixture inside the cylinder. The spark plug is screwed into the top of the cylinder so that its electrodes project into the combustion chamber.
A spark plug consists of mainly three parts:
1. Center electrode or insulated electrode.
2. Ground electrode or outer electrode.
3. Insulation separating the two electrodes.

The upper end of the centre electrode is connected to the spark plug terminal, to which the cable from the ignition coil is connected. It is surrounded by an insulator. The lower half of the insulator is fastened to a metal shell. The lower portion of the shell has a short electrode attached to one side and
bent in towards the centre electrode, so that there is a gap between the two electrodes. The two
electrodes are thus separated by the insulator. The sealing gaskets are provided between the insulator
and the shell to prevent the escape of gas under various temperature and pressure conditions. The lower
part of the shell has screw threads and the upper part is made in hexagonal shape like a nut, so that the
spark plug may be screwed in or unscrewed from the cylinder head.

Cleaning the Spark Plug


Due to the combustion of fuel in the cylinder, carbon particles deposit on and around the electrodes; these not only reduce the plug gap but can also prevent the spark from occurring. If a spark still occurs, it may be too weak to ignite the fuel. Hence the spark plug has to be cleaned. Carbon deposits can form for various reasons, such as the nature of the fuel, the mixture strength or the lubricating oil. The spark plug can be cleaned with sandpaper.

TURBOCHARGER
A turbocharger, or turbo, is a forced induction device used to allow more power to be produced by an engine of a given size. A turbocharged engine can be more powerful and efficient than a naturally aspirated engine because the turbine forces more intake air, and proportionately more fuel, into the combustion chamber than atmospheric pressure alone could. Turbos are commonly used on truck, car, train and construction equipment engines, and are popularly used with Otto cycle and Diesel cycle internal combustion engines.
There are two ways of increasing the power of an engine. One would be to make the fuel-air mixture richer by adding more fuel; this increases power, but at the cost of fuel efficiency and higher pollution levels, which is prohibitive. The other is to somehow increase the volume of air entering the cylinder and to increase the fuel intake proportionately, raising power without hurting fuel efficiency or the environment. This is exactly what turbochargers do: they increase the volumetric efficiency of an engine.
In a naturally aspirated engine, the downward stroke of the piston creates an area of low pressure in order to draw air into the cylinder through the intake valves. Because the pressure in the cylinder cannot go below 0 psi (a perfect vacuum) and atmospheric pressure is relatively constant (about 15 psi), there is a limit to the pressure difference across the intake valves and hence to the amount of air entering the combustion chamber. The ability to fill the cylinder with air is its volumetric efficiency. If we can increase the pressure difference across the intake valves in some way, more air will enter the cylinder, increasing the volumetric efficiency of the engine.
It increases the pressure at the point where air is entering the cylinder, thereby increasing the pressure
difference across the intake valves and thus more air enters into the combustion chamber. The
additional air makes it possible to add more fuel, increasing the power and torque output of the engine,
particularly at higher engine speeds.
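To a first approximation, the air mass trapped per intake stroke scales with the absolute pressure in the intake manifold (at a given charge temperature), and power scales roughly with that air mass, since fuel is added in proportion. The sketch below shows this back-of-the-envelope scaling; the cylinder volume, charge temperature and the naturally aspirated power figure are simplifying assumptions for illustration, not measured data.

# Back-of-the-envelope effect of boost on air mass and power,
# using the ideal-gas relation m = pV/(RT) for the trapped charge.
R_AIR = 287.0   # J/(kg*K)

def charge_mass_g(manifold_pressure_kpa, cyl_volume_l=0.5, charge_temp_k=320.0):
    """Air mass (grams) trapped per cylinder per intake stroke (100 % VE assumed)."""
    p = manifold_pressure_kpa * 1000.0
    v = cyl_volume_l / 1000.0
    return p * v / (R_AIR * charge_temp_k) * 1000.0

na_mass    = charge_mass_g(101)    # naturally aspirated, ~1 atm absolute
turbo_mass = charge_mass_g(160)    # ~1.6 bar absolute with boost
gain = turbo_mass / na_mass

# If fuelling rises in proportion, power scales roughly with air mass.
na_power_hp = 100.0                # illustrative naturally aspirated output
print(f"air per stroke: {na_mass:.2f} g -> {turbo_mass:.2f} g  (x{gain:.2f})")
print(f"rough boosted power: {na_power_hp * gain:.0f} hp")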

Turbochargers were originally known as Turbo superchargers when all forced induction devices were
classified as superchargers; nowadays the term "supercharger" is usually applied to only mechanically-
driven forced induction devices. The key difference between a turbocharger and a conventional
supercharger is that the latter is mechanically driven from the engine, often from a belt connected to the
crankshaft, whereas a turbocharger is driven by the engine's exhaust gas turbine. Compared to a
mechanically-driven supercharger, turbochargers tend to be more efficient but less responsive.
HISTORICAL PERSPECTIVE
The turbocharger was invented by Swiss engineer Alfred Büchi, who applied for a patent for it in 1905. Diesel ships and locomotives with turbochargers began appearing in the 1920s.

AVIATION:
During the First World War, French engineer Auguste Rateau fitted turbochargers to Renault engines powering various French fighters, with some success. In 1918, General Electric engineer Sanford Moss attached a turbo to a V12 Liberty aircraft engine. The engine was tested at Pikes Peak in Colorado at 4,300 m to demonstrate that it could eliminate the power losses usually experienced in internal combustion engines as a result of reduced air pressure and density at high altitude.
Turbochargers were first used in production aircraft engines in the 1920s, although they were less
common than engine-driven centrifugal superchargers. The primary purpose behind most aircraft-based
applications was to increase the altitude at which the airplane could fly, by compensating for the lower
atmospheric pressure present at high altitude.

PRODUCTION AUTOMOBILES:
The first turbocharged diesel truck was produced by Schweizer Maschinenfabrik Saurer (Swiss Machine Works Saurer) in 1938. The first production turbocharged automobile engines came from General Motors in 1962. At the Paris auto show in 1974, during the height of the oil crisis, Porsche introduced the 911 Turbo, the world's first production sports car with an exhaust turbocharger and pressure regulator. This was made possible by the introduction of a waste gate to direct excess exhaust gases away from the exhaust turbine. The world's first production turbo diesel automobiles were the Garrett-turbocharged Mercedes 300SD and the Peugeot 604, both introduced in 1978. Today, most automotive diesels are turbocharged.
1962 Oldsmobile Cutlass Jet fire
1962 Chevrolet Corvair Monza Spyder
1973 BMW 2002 Turbo
1974 Porsche 911 Turbo
1978 Saab 99
1978 Peugeot 604 turbo diesel
1978 Mercedes-Benz 300SD turbo diesel (United States/Canada)
1979 Alfa Romeo Alfetta GTV 2000 Turbodelta
1980 Mitsubishi Lancer GT Turbo
1980 Pontiac Firebird
1980 Renault 5 Turbo
1981 Volvo 240-series Turbo

OPERATING PRINCIPLE
A turbocharger is a small radial fan pump driven by the energy of the exhaust gases of an engine. A
turbocharger consists of a turbine and a compressor on a shared shaft. The turbine converts exhaust heat
to rotational force, which is in turn used to drive the compressor. The compressor draws in ambient air
and pumps it in to the intake manifold at increased pressure resulting in a greater mass of air entering
the cylinders on each intake stroke.

The objective of a turbocharger is the same as a supercharger; to improve the engine's volumetric
efficiency by solving one of its cardinal limitations. A naturally aspirated automobile engine uses only
the downward stroke of a piston to create an area of low pressure in order to draw air into the cylinder
through the intake valves. Because the pressure in the atmosphere is no more than 1 atm
(approximately 14.7 psi), there ultimately will be a limit to the pressure difference across the intake
valves and thus the amount of airflow entering the combustion chamber.
Because the turbocharger increases the pressure at the point where air is entering the cylinder, a greater
mass of air (oxygen) will be forced in as the inlet manifold pressure increases. The additional air flow
makes it possible to maintain the combustion chamber pressure and fuel/air load even at high engine
revolution speeds, increasing the power and torque output of the engine. Because the pressure in the
cylinder must not go too high to avoid detonation and physical damage, the intake pressure must be
controlled by venting excess gas. The control function is performed by a waste gate, which routes some
of the exhaust flow away from the turbine. This regulates air pressure in the intake manifold.

COMPONENTS OF A TURBOCHARGER
The turbocharger has four main components. The turbine (almost always a radial turbine) and the impeller/compressor wheel are each contained within their own folded conical housing on opposite sides of the third component, the centre housing/hub rotating assembly. The housings fitted around the compressor impeller and turbine collect and direct the gas flow through the wheels as they spin. Their size and shape dictate some performance characteristics of the overall turbocharger: the turbine and impeller wheel sizes determine the amount of air or exhaust that can flow through the system and the relative efficiency at which they operate. Generally, the larger the turbine wheel and compressor wheel, the larger the flow capacity. The centre hub rotating assembly houses the shaft which connects the compressor impeller and turbine; it must also contain a bearing system to suspend the shaft, allowing it to rotate at very high speed with minimal friction. The fourth component is the waste gate, which regulates the exhaust flow.

TURBINE WHEEL:
The Turbine Wheel is housed in the turbine casing and is connected to a shaft that in turn rotates the
compressor wheel.

COMPRESSOR WHEEL (IMPELLER)


Compressor impellers are produced using a variant of the aluminium investment casting process. A
rubber former is made to replicate the impeller around which a casting mould is created. The rubber
former can then be extracted from the mould into which the metal is poured. Accurate blade sections
and profiles are important in achieving compressor performance. Back face profile machining
optimizes impeller stress conditions. Boring to tight tolerance and burnishing assist balancing and
fatigue resistance. The impeller is located on the shaft assembly using a threaded nut.
WASTE GATES:
On the exhaust side, a waste gate provides a means to control the boost pressure of the engine. Some commercial diesel applications do not use a waste gate at all; this type of system is called a free-floating turbocharger. However, the vast majority of gasoline performance applications require waste gates. Waste gates provide a means to bypass exhaust flow around the turbine wheel. Bypassing this energy (i.e. exhaust flow) reduces the power driving the turbine wheel to match the power required for a given boost level.
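Conceptually, the waste gate closes a loop between measured boost and a target boost: the further the measured pressure rises above the target, the more exhaust is bypassed around the turbine. The sketch below shows one hypothetical proportional control step of that kind; the gain, limits and pressure values are illustrative and are not taken from any real boost controller.

# Minimal sketch of proportional waste-gate control (hypothetical values).
def wastegate_opening(measured_boost_kpa, target_boost_kpa=60.0, gain=2.5):
    """Return waste-gate opening in percent (0 = fully closed, 100 = fully open).

    When boost exceeds the target, the gate opens further, bypassing exhaust
    around the turbine and so reducing the power that drives the compressor.
    """
    error = measured_boost_kpa - target_boost_kpa
    opening = gain * error
    return max(0.0, min(100.0, opening))   # clamp to the physical range

for boost in (40, 60, 70, 95):             # kPa gauge boost readings
    print(boost, "kPa ->", wastegate_opening(boost), "% open")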
ADVANTAGES
1. More specific power over naturally aspirated engine. This means a turbocharged engine can achieve
more power from same engine volume.
2. Better thermal efficiency over both naturally aspirated and supercharged engine when under full load
(i.e. on boost). This is because the excess exhaust heat and pressure, which would normally be wasted,
contributes some of the work required to compress the air.
3. Weight/Packaging. Smaller and lighter than alternative forced induction systems and may be more
easily fitted in an engine bay.
4. Fuel Economy. Although adding a turbocharger itself does not save fuel, it will allow a vehicle to
use a smaller engine while achieving power levels of a much larger engine, while attaining near normal
fuel economy while off boost/cruising. This is because without boost, less fuel is used to create a proper
air/fuel ratio.

DISADVANTAGES
1. Lack of responsiveness if an incorrectly sized turbocharger is used. A turbocharger that is too large reduces throttle response, as it builds up boost slowly (otherwise known as "lag"), although it may produce more peak power.
2. Boost threshold – A turbocharger starts producing boost only above a certain rpm, due to a lack of exhaust gas volume to overcome the inertia of the turbo rotor at rest. This results in a rapid and nonlinear rise in torque and reduces the usable power band of the engine. The sudden surge of power can overwhelm the tires and result in loss of grip, which can lead to understeer or oversteer, depending on the drivetrain and suspension setup of the vehicle. Lag can also be a disadvantage in racing: if the throttle is applied in a turn, power may unexpectedly increase when the turbo spools up, which can cause excessive wheel spin.
3. Cost – Turbocharger parts are costly to add to naturally aspirated engines. Heavily modifying OEM turbocharger systems also requires extensive upgrades that in most cases require most (if not all) of the original components to be replaced.
4. Complexity – Beyond cost, turbochargers require numerous additional systems if they are not to damage an engine. Even an engine under only light boost requires a system for properly routing (and sometimes cooling) the lubricating oil, a turbo-specific exhaust manifold, an application-specific downpipe and boost regulation. In addition, intercooled turbo engines require additional plumbing, while highly tuned turbocharged engines require extensive upgrades to their lubrication, cooling and breathing systems, as well as reinforced internal engine and transmission parts.
TURBO LAG AND BOOST
The time required to bring the turbo up to a speed where it can function effectively is called turbo lag.
This is noticed as a hesitation in throttle response when coming off idle. This is symptomatic of the
time taken for the exhaust system driving the turbine to come to high pressure and for the turbine rotor
to overcome its rotational inertia and reach the speed necessary to supply boost pressure. The directly-
driven compressor in a supercharger does not suffer from this problem. Conversely on light loads or at
low RPM a turbocharger supplies less boost and the engine acts like a naturally aspirated engine.
Turbochargers start producing boost only above a certain exhaust mass flow rate (depending on the size
of the turbo). Without an appropriate exhaust gas flow, they logically cannot force air into the engine.
The point at full throttle in which the mass flow in the exhaust is strong enough to force air into the
engine is known as the boost threshold rpm. Engineers have, in some cases, been able to reduce the
boost threshold rpm to idle speed to allow for instant response. Both the lag and the boost threshold characteristics can be estimated through the use of a compressor map and a mathematical model.
APPLICATIONS
Gasoline-powered cars
Today, turbocharging is commonly used by many manufacturers of both diesel and gasoline-powered
cars. Turbo charging can be used to increase power output for a given capacity or to increase fuel
efficiency by allowing a smaller displacement engine to be used. Low pressure turbocharging is the
optimum when driving in the city, whereas high pressure turbocharging is more for racing and driving
on highways/motorways/freeways.

Diesel-powered cars
Today, many automotive diesels are turbocharged, since turbocharging improves the efficiency, driveability and performance of diesel engines, greatly increasing their popularity.
Motorcycles
The first example of a turbocharged bike is the 1978 Kawasaki Z1R TC. Several Japanese companies
produced turbocharged high performance motorcycles in the early 1980s. Since then, few turbocharged
motorcycles have been produced.

Trucks
The first turbocharged diesel truck was produced by Schweizer Maschinenfabrik Saurer(Swiss Machine
Works Saurer) in 1938.
Aircraft
A natural use of the turbocharger is with aircraft engines. As an aircraft climbs to higher altitudes the
pressure of the surrounding air quickly falls off. At 5,486 m (18,000 ft), the air is at half the pressure of
sea level and the airframe experiences only half the aerodynamic drag. However, since the charge in the
cylinders is being pushed in by this air pressure, it means that the engine will normally produce only
half-power at full throttle at this altitude. Pilots would like to take advantage of the low drag at high
altitudes in order to go faster, but a naturally aspirated engine will not produce enough power at the
same altitude to do so.
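The altitude figures above can be illustrated with the standard-atmosphere pressure law: ambient pressure falls roughly exponentially with height, and a naturally aspirated engine's full-throttle power falls roughly in proportion. The sketch below uses that simple proportional model; treating power as proportional to ambient pressure alone (ignoring temperature and mixture effects) is an approximation made purely for illustration.

# Naturally aspirated power vs altitude, assuming power ~ ambient pressure.
import math

SCALE_HEIGHT_M = 8435.0   # approximate pressure scale height of the atmosphere
SEA_LEVEL_KPA  = 101.325

def ambient_pressure_kpa(altitude_m):
    """Isothermal (exponential) approximation of the standard atmosphere."""
    return SEA_LEVEL_KPA * math.exp(-altitude_m / SCALE_HEIGHT_M)

def na_power_fraction(altitude_m):
    """Fraction of sea-level full-throttle power available, power ~ pressure."""
    return ambient_pressure_kpa(altitude_m) / SEA_LEVEL_KPA

alt = 5486.0   # 18,000 ft
print(f"{ambient_pressure_kpa(alt):.1f} kPa ambient,"
      f" ~{na_power_fraction(alt) * 100:.0f}% of sea-level power")
# A turbocharger restores manifold pressure, and with it most of the lost power.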

Here the main aim is to use non-renewable fuels such as petrol and diesel effectively. More complete combustion of the fuel can be achieved and the power output can be increased; wind energy can also be used for air compression. We conclude that both power and efficiency increase by roughly 10 to 15%, and that pollution can also decrease. From observation, when the throttle valve is fully open at an engine speed of 4000 rpm, the turbocharger generates air pressurized to about 1.60 bar. A naturally aspirated engine generally supplies the carburettor with air at atmospheric pressure for the air-fuel mixture; if higher-density air is supplied for combustion instead, more power is produced and more complete combustion takes place, so efficiency increases.
WINDSHIELD WASHER
A windshield washer system for an automotive vehicle includes a fluid reservoir, a pump mounted
within the fluid reservoir and a heater mounted in proximity to the pump so as to provide heat to fluid
contained within the reservoir. The system further includes a nozzle operatively associated with the
pump for applying fluid from the reservoir to an outside surface of an automotive vehicle. The heater
may comprise an electric resistance element, such as a positive temperature coefficient element
mounted about a pumping chamber of the pump. In any event, the heater provides sufficient heat to the
fluid contained within the reservoir to prevent water in the fluid from freezing at ambient temperatures
normally encountered by automobiles.
According to another aspect of the present invention, a nozzle incorporated in the present system may
be of the telescoping variety such that it has a first position for spraying and a second, or retracted,
position when it is not spraying. In this fashion, a neat, uncluttered appearance may be achieved, while
protecting the nozzle from damage. In any event, the nozzle is close-coupled to the pump, so as to
minimize the fluid volume between the pump and nozzle. This promotes drain back of fluid from the
nozzle to the pump, while allowing heat to flow from the pump to the nozzle, thereby further inhibiting
freezing of water within the nozzle.

WINDSHIELD WASHER FLUID


Windshield washer fluid is sold in many formulations, and some may require dilution before being
applied, although most solutions available in North America come premixed with no diluting required.
The most common washer fluid solutions are given labels such as "All-Season", "Bug Remover", or
"De-icer", and usually are a combination of solvents with a detergent. Dilution factors will vary
depending on season, for example in winter the dilution factor may be 1:1, whereas during summer the
dilution factor may be 1:10. It is sometimes sold as a sachet of crystals, which is likewise diluted with water.
Distilled water is the preferred diluent, since it will not leave trace mineral deposits on the glass. Anti-
freeze, or methylated spirits, may be added to a mixture to give the product a lower freezing
temperature. But methanol vapour is harmful when breathed in, so more popular now is an ethanol
winter mix, e.g. PAV, water, ethanol (or isopropanol), and ethylene glycol.
Concerns have been raised about the overall environmental aspects of washer fluid. Widespread,
ground-level use of wiper fluid (amounting to billions of litres each year) can lead to cumulative air
pollution and water pollution. Consumer advocacy groups and auto enthusiasts believe that the alcohols
and solvents present in some, but not all, windshield washer fluid can damage the vehicle. These critics
point to the corrosive effects of ethanol, methanol, and other components on paint, rubber, car wax, and
plastics, and groups propose various alternatives and homemade recipes so as to protect the finish and
mechanics of the motor vehicle.
WINDSHIELD WASHER NOZZLE(S)
This model is equipped with two hood mounted washer nozzles. Each nozzle emits two streams into the
wiper pattern. If the nozzle performance is unsatisfactory they can be adjusted. To adjust insert a pin
into the nozzle ball and move to proper pattern. The right and left nozzles are identical. It is an
advantage of the present system that separated fluid lines and nozzles are eliminated, with the entire
system being contained in a single assembly, so as to allow the protection of the fluid and the entire
system from freezing with a single heat source.

WINDSHIELD WASHER SYSTEM


All models are equipped with electrically operated windshield washer pumps. The wash function can
be accessed in the OFF position of the wiper control switch. Holding the wash button depressed when
the switch is in the OFF position will operate the wipers and washer motor pump continuously until the
washer button is released. Releasing the button will stop the washer pump, but the wipers will complete the current wipe cycle, followed by an average of two more wipe cycles, before the wipers park and the module turns off.
The electric pump assembly is mounted directly to the reservoir. A permanently lubricated motor is
coupled to a rotor type pump. Fluid, gravity fed from the reservoir, is forced by the pump through
rubber hoses to the hood mounted nozzles which direct the fluid streams to the windshield. The pump
and reservoir are serviced as separate assemblies.
ADVANTAGES
- It is an advantage of the present system that the use of hydrocarbon-based freezing point depressants
may be eliminated with the present system.
- It is another advantage of the present system that the nozzle included with the system is self-draining
so as to allow the nozzle to purge itself of fluid when the system is not being energized and therefore to
further protect the system against freezing.
DISADVANTAGES
- Windshield washer fluid can damage the vehicle; critics point to the corrosive effects of ethanol, methanol and other components on paint, rubber and plastics.

In conclusion, it can be said that the windshield washer is one of the most important parts of a vehicle's equipment and that, although most people do not pay much attention to it, it is really helpful in keeping the windshield clean and ensuring that visibility is as high as possible.

Blink Code in Antilock Braking System(ABS)


Blink Code
Blink code is a method of visual indication of the components fault to the service technician, by means
of flashing Blink Code Lamp. The number and sequence of flashes indicate the status of the system or
the nature of failure. This is useful to the service technician both during periodic checkup as well as
during troubleshooting the system whenever a failure is observed through the Warning Lamp.
How to use Blink Code?
The blink code can be read by pressing the blink code switch. The blink code switch should be pressed until the first flash appears; this typically takes about 5 seconds. The exact number of flashes, which are separated by pauses, should be noted. Using the blink code table, the corresponding failure can then be easily identified.
If the stored fault is not erased, it remains in memory even after the fault has been physically repaired. If there is more than one error, the user can read the errors one after another by repairing and erasing the error displayed and then pressing the blink code switch again.
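The blink code table can be thought of as a simple mapping from a counted number of flashes to a stored fault. The sketch below shows how such a lookup could be expressed in software; the specific flash counts and fault descriptions here are invented for illustration and will differ between ABS suppliers, so the manufacturer's blink code table must always be used for a real vehicle.

# Illustrative blink-code lookup (fault numbers/descriptions are made up;
# always use the manufacturer's blink code table for a real vehicle).
BLINK_CODE_TABLE = {
    1: "System OK - no stored fault",
    2: "Front left wheel speed sensor - open circuit",
    3: "Front right wheel speed sensor - open circuit",
    4: "Rear axle modulator valve fault",
    5: "ECU internal fault",
}

def decode_blink_count(flashes):
    """Translate the counted flashes (separated by pauses) into a fault text."""
    return BLINK_CODE_TABLE.get(flashes, "Unknown code - consult service manual")

# The technician presses the blink code switch, waits ~5 s for the first flash,
# counts the flashes, then looks the count up:
print(decode_blink_count(3))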

How to erase Blink Code?


The fault which is stored in the system memory can be erased by once again invoking the blink code switch and keeping the switch pressed for the first three flashes.
Working of a Car
When a driver turns a key in the ignition:
● The car battery powers up sending
● Power to the starter motor, which
● Turns the crankshaft, which
● Gets the pistons moving
● With the pistons moving the engine fires up and ticks over
● A fan draws air into the engine via an air filter
● The air filter removes dirt and grit from the air
● The cleaned air is drawn into a chamber where fuel (petrol or
diesel) is added
● This fuel-air mix (a vaporised gas) is stored in the chamber
● The driver presses the accelerator pedal
● The throttle valve is opened
● The gas-air mix passes through an intake manifold and is
distributed, through intake valves, into the cylinders. The camshaft
controls the opening and closing of the valves.
● The distributor makes the spark plugs spark, which ignites the
fuel-air mix. The resulting explosion forces a piston to move down
which in turn causes the crankshaft to rotate.
Layout of a Car
In automotive design, the automobile layout describes where on the vehicle
the engine and drive wheels are found. Many different combinations of engine
location and driven wheels are found in practice, and the location of each is
dependent on the application the vehicle will be used for. Factors influencing
the design choice include cost, complexity, reliability, packaging (location
and size of the passenger compartment and boot), weight distribution and the
vehicle's intended handling characteristics.
Layouts can roughly be divided into two categories: front- or rear-wheel
drive. Four-wheel-drive vehicles may take on the characteristics of either,
depending on how power is distributed to the wheels.
Battery Introduction
An electrical battery is one or more electrochemical cells that convert stored
chemical energy into electrical energy. Batteries come in many sizes, from
miniature cells used to power hearing aids and wristwatches to battery banks
the size of rooms that provide standby power for telephone exchanges and
computer data centres.
A battery has two terminals. One terminal is marked (+), or positive, while the
other is marked (-), or negative. In normal flashlight batteries, like AA, C or
D cell, the terminals are located on the ends. On a 9-volt or car battery,
however, the terminals are situated next to each other on the top of the unit. If
you connect a wire between the two terminals, the electrons will flow from
the negative end to the positive end as fast as they can. This will quickly wear
out the battery and can also be dangerous, particularly on larger batteries. To
properly harness the electric charge produced by a battery, you must connect
it to a load. The load might be something like a light bulb, a motor or an
electronic circuit like a radio.
Advantages of Using Rechargeable Batteries
1. Performance – Since rechargeable batteries can be recharged many times
over, the cumulative total service life exceeds that of primary batteries by a
wide margin.
2. Savings – Recharging rechargeable batteries many hundreds of times gives the consumer tremendous savings in the long run.
3. Environmentally friendly – Since the cumulative service life is so much longer than that of primary batteries, only a fraction of the solid waste is generated, and a solid waste reduction of 90% or more is possible. If the batteries contain no toxins, as is the case for rechargeable alkalines, they can even be disposed of in regular landfills. Other rechargeables which do contain toxins, such as NiMH, should be recycled. Most stores nowadays take old rechargeables back.
Are Primary and Rechargeable Batteries
interchangeable amongst each other?
Not all battery types are interchangeable. However, in the consumer,
household small format battery category, the following types of the same
format can in most cases be interchanged: Heavy Duty, Alkaline,
Rechargeable Alkaline and NiMH batteries. Although primary and
rechargeable alkaline batteries are rated at a nominal voltage of 1.5 volts, as
they begin discharging, their voltage continuously drops. Over the course of
discharge, the average voltage of alkaline batteries is in fact about 1.2 volts,
very close to NiMH batteries. The main difference is that alkaline batteries
start at 1.5 volts and gradually drop to less than 1.0 volt, while NiMH
batteries stay at about 1.2 volts for most of the service time. However, NiMH batteries only make practical sense in very high drain devices such as digital cameras, as their self-discharge rate is too high for applications that require power over long periods of time. For such slow discharges, a battery type with a very low self-discharge rate is required; rechargeable alkaline fits the bill there. Remember, whatever battery type you use, NEVER mix battery
bill there. Remember, whatever battery type you use, NEVER mix battery
types for use at the same time and never mix old and new batteries. Keep
batteries in sets for best performance.
How should batteries be stored?
Remember, batteries are like any other chemical system. Heat will accelerate
the chemical reaction and shorten cell life. Therefore, the greatest threat to a
battery's useful life and shelf life is heat. So, avoid storing batteries or
battery-operated devices in extremely warm places; store them in a cool, dry
place.
The Batteries Work Better In Different Devices
1. HEAVY DUTY BATTERIES are still very popular and have been
around for many years because they are so cheap to purchase. Heavy Duty
batteries work best in low drain devices such as AM/FM radios, flashlights, smoke alarms and remote controls. Over the lifetime of the device, rechargeable alkaline batteries will provide better value and actually result in cost savings, although the initial cost is higher.
2. ALKALINE BATTERIES are the most popular battery used today.
Alkaline will last 5 to 10 times longer than heavy duty batteries on higher
current drains, making them more economical. They get their long life from
unique construction and the purity of the materials used. Alkaline batteries
are best suited for moderate to high drain devices such as portable CD
players, electronic games, motorized toys, tape recorders and cassette players.
Again, over the lifetime of the device, rechargeable alkaline batteries will provide better value and actually result in cost savings, although the initial cost is slightly higher.
3. RECHARGEABLE ALKALINE BATTERIES are specially designed
for use 25 times or more when charged properly in a dedicated charger for
rechargeable alkaline batteries. Rechargeable alkaline batteries come fully
charged, have no memory problems, up to a seven-year shelf life and will last
up to three times longer than a fully charged nickel cadmium rechargeable
battery. They do not need to be fully drained before recharging and will
actually last longer if frequently recharged. They will work in all applications
where Heavy Duty Primary Batteries are being used and in all applications
for Alkaline Primary Batteries with not too high drain rates.
4. RECHARGEABLE NiMH BATTERIES are an extension of the old-fashioned NiCd batteries. These batteries offer capacities at least 30% higher per charge than NiCd batteries of the same size. NiMH batteries can be recharged without having to be fully drained and can be charged several hundred times. NiMH batteries work best in high drain devices that chew through alkaline batteries quickly, such as digital cameras, handheld TVs and remote controlled racing toy cars.
5. RECHARGEABLE Li-Ion BATTERIES are mainly used in Laptop
computers and cell phones. They have a 3 times higher voltage on a per cell
basis than NiMH batteries and are usually only sold as a ‘system’ (device w/
built-in charger), as they require a special type of charger. More recently,
single Li-Ion cells with dedicated chargers are being offered for cameras that
take Lithium cells.
6. RECHARGEABLE NiCd BATTERIES should not be used due to the
toxic cadmium, but are still in high demand for power tools due to their
rugged design and performance. However, NiCd batteries have to be recycled
to prevent toxic, carcinogenic cadmium entering the waste stream.
7. PRIMARY LITHIUM BATTERIES offer an outstanding shelf-life of
above 10-years and they will work at very low temperatures. They are mainly
used in imaging applications, i.e. cameras.
Types of Batteries
Unfortunately there is no single battery technology available on the market
today that can be considered as “The Solution” for all classes of portable
battery operated devices. There are a variety of batteries in use, each with its
own advantages and disadvantages.
There are two main categories of batteries:
(1) PRIMARY BATTERIES, sometimes also called single-use, or “throw-
away” batteries because they have to be discarded after they run empty as
they cannot be recharged for reuse. Primary batteries can produce current
immediately on assembly. Disposable batteries are intended to be used once
and discarded. These are most commonly used in portable devices that have
low current drain, are only used intermittently, or are used well away from an
alternative power source, such as in alarm and communication circuits where
other electric power is only intermittently available. Disposable primary cells
cannot be reliably recharged, since the chemical reactions are not easily
reversible and active materials may not return to their original forms. Battery
manufacturers recommend against attempting to recharge primary cells.
Primary Batteries include-
A) Carbon Zinc (aka. ‘Heavy Duty’) -- The lowest cost primary cell
(household) is the zinc-acidic manganese dioxide battery. They provide only
very low power, but have a good shelf life and are well suited for clocks and
remote controls.
B) Alkaline -- The most commonly used primary cell (household) is the zinc-
alkaline manganese dioxide battery. They provide more power-per-use than
Carbon-zinc and secondary batteries and have an excellent shelf life.
C) Lithium Cells -- Lithium batteries offer performance advantages well
beyond the capabilities of conventional aqueous electrolyte battery systems.
Their shelf-life can be well above 10-years and they will work at very low
temperatures. Lithium batteries are mainly used in small formats (coin cells
up to about AA size) because bigger sizes of lithium batteries are a safety
concern in consumer applications. Bigger (i.e. ‘D’) sizes are only used in
military applications.
D) Silver Oxide Cells – These batteries have a very high energy density, but
are very expensive due to the high cost of silver. Therefore, silver oxide cells
are mainly used in button cell format for watches and calculators.

E) Zinc Air Cells – These batteries have become the standard for hearing aid
batteries. They have a very long run time, because they store only the anode
material inside the cell and use the oxygen from the ambient air as cathode.

(2) SECONDARY BATTERIES, mostly called rechargeable batteries


because they can be recharged for reuse. They are usually assembled with
active materials in the discharged state. Rechargeable batteries or secondary
cells can be recharged by applying electric current, which reverses the
chemical reactions that occur during its use. Devices to supply the
appropriate current are called chargers or rechargers.

Secondary batteries include-


a) Rechargeable Alkaline - Secondary alkaline batteries, the lowest cost
rechargeable cells, have a long shelf life and are useful for moderate-power
applications. Their cycle life is less than most other secondary batteries, but
they are a great consumer’s choice as they combine the benefits of the
popular alkaline cells with the added benefit of re-use after recharging. They
have no toxic ingredients and can be disposed in regular landfills (local
regulations permitting).
b) Nickel-Cadmium - Secondary Ni-Cd batteries are rugged and reliable.
They exhibit a high power capability, a wide operating temperature range,
and a long cycle life, but have a low run time per charge. They have a self-
discharge rate of approximately 30% per month. They contain about 15%
toxic, carcinogenic cadmium and have to be recycled.
c) Nickel-Metal Hydride - Secondary NiMH batteries are an extension of
the old-fashioned NiCd batteries. NiMH batteries provide the same voltage as
NiCd batteries, but offer at least 30% more capacity. They exhibit good high
current capability, and have a long cycle life. The self-discharge rate is higher
than NiCd at approximately 40% per month. NiMH cells contain no toxic
cadmium, but they still contain a large amount of nickel oxides and also some
cobalt, which are known human carcinogens and should be recycled.
d) Lithium Ion - Secondary Li-Ion batteries are the latest breakthrough in
rechargeable batteries. They are at least 30% lighter in weight than NiMH
batteries and provide at least 30% more capacity. They exhibit good high
current capability, and have a long cycle life. The self-discharge rate is better
than NiMH at approximately 20% per month. Overheating will damage the
batteries and could cause a fire. Li-Ion cells contain no toxic cadmium, but
they still contain either cobalt oxides or nickel oxides, which are known
human carcinogens and should be recycled.
e) Lead-Acid -- Secondary lead-acid batteries are the most popular
rechargeable batteries worldwide. Both the battery product and the
manufacturing process are proven, economical, and reliable. However,
because they are heavy, Lead-Acid batteries are not being used in portable,
consumer applications. Lead is a toxic, carcinogenic compound and should
not enter the regular waste stream. Recycling of Lead-Acid batteries is the
environmental success story of our time: approximately 93% of all battery lead is recycled today and reused in the production of new Lead-Acid batteries.

Primary Alkaline Batteries are long lasting, single-use batteries. They will
give good performance in all battery devices. Most standard alkaline batteries
give you similar performance, regardless of brand. Rechargeable Alkaline
Batteries use a revolutionary type of battery technology that provides the long
life of alkaline cells, but can be reused 25 times or more. Rechargeable
batteries are ideal for many of your frequently used electronic devices. And
because Rechargeable Alkaline Batteries give longer life per charge, hold
their power in storage and are recharged when you buy them, they work far
better than the old fashioned NiCd rechargeable batteries.
Using rechargeable alkaline batteries instead of single use, primary batteries
will result in cost savings that can add up to hundreds of dollars. Nickel
Metal Hydride (NiMH) batteries meet the demanding power needs for
today’s high-tech devices, such as digital cameras, handheld TVs, two-way
radios, and personal organizers. NiMH batteries can last three times longer
than any alkaline in digital cameras. NiMH can be charged many hundred
times resulting in cost savings that can add up to hundreds of dollars. Heavy
Duty batteries can be used in non-motor driven devices with low drain, such
as radios, remote controls, smoke alarms and clocks.
In devices like these, Heavy Duty batteries will give good performance at a
minimal initial cost. However, over the lifetime of the application a
rechargeable alkaline cell would provide a much better value and actually
save you some money. Most batteries can be stored for long periods of time.
Heavy Duty batteries will retain more than 80 % of their power, even when
stored at normal household temperatures for up to four years. Single use
alkaline and rechargeable alkaline batteries can be stored for up to seven years while retaining 80% of their power.
NiMH batteries on the other hand have a fairly rapid self-discharge losing
about 40% of their rated capacity per month; hence, one pretty much has to
recharge a NiMH battery before each use after prolonged periods of storage.
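The self-discharge figures quoted above translate into a quick rule of thumb for how much charge is left after storage. The sketch below simply compounds the stated monthly loss; treating the loss as a fixed fraction each month is a simplification, and the 40% (NiMH) and 30% (NiCd) figures are the ones quoted in the text rather than properties of any particular cell.

# Remaining charge after storage, compounding a fixed monthly self-discharge.
def remaining_fraction(months, monthly_loss=0.40):
    """Fraction of rated capacity left after the given number of months."""
    return (1.0 - monthly_loss) ** months

for chemistry, loss in (("NiMH", 0.40), ("NiCd", 0.30)):
    left_after_3 = remaining_fraction(3, loss) * 100
    print(f"{chemistry}: ~{left_after_3:.0f}% of capacity left after 3 months")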
Battery Principle of Operation
A battery is a device that converts chemical energy directly to electrical
energy. It consists of a number of voltaic cells; each voltaic cell consists of
two half cells connected in series by a conductive electrolyte containing
anions and cations. One half-cell includes electrolyte and the electrode to
which anions (negatively charged ions) migrate, i.e., the anode or negative
electrode; the other half-cell includes electrolyte and the electrode to which
cations (positively charged ions) migrate, i.e., the cathode or positive
electrode. In the redox reaction that powers the battery, cations are reduced
(electrons are added) at the cathode, while anions are oxidized (electrons are
removed) at the anode. The electrodes do not touch each other but are
electrically connected by the electrolyte. Some cells use two half-cells with
different electrolytes. A separator between half cells allows ions to flow, but
prevents mixing of the electrolytes.
Each half cell has an electromotive force (or emf), determined by its ability to
drive electric current from the interior to the exterior of the cell. The net emf
of the cell is the difference between the emfs of its half-cells, as first
recognized by Volta. The net emf is the difference between the reduction
potentials of the half-reactions.
The electrical driving force across the terminals of a cell is known as the
terminal voltage (difference) and is measured in volts. The terminal voltage
of a cell that is neither charging nor discharging is called the open-circuit
voltage and equals the emf of the cell. Because of internal resistance, the
terminal voltage of a cell that is discharging is smaller in magnitude than the
open-circuit voltage and the terminal voltage of a cell that is charging
exceeds the open-circuit voltage. An ideal cell has negligible internal
resistance, so it would maintain a constant terminal voltage until exhausted,
then dropping to zero. If such a cell maintained 1.5 volts and stored a charge
of one coulomb then on complete discharge it would perform 1.5 joule of
work. In actual cells, the internal resistance increases under discharge, and
the open circuit voltage also decreases under discharge. If the voltage and
resistance are plotted against time, the resulting graphs typically are a curve;
the shape of the curve varies according to the chemistry and internal
arrangement employed.
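The relationship between emf, internal resistance and terminal voltage described above, together with the 1.5 volt / 1 coulomb / 1.5 joule example, can be written out as a short calculation. The sketch below does exactly that for an idealised cell; the load currents and the internal resistance value are illustrative assumptions.

# Terminal voltage and delivered energy for a simple cell model:
# V_terminal = emf - I * r_internal (discharging); energy = charge * voltage.
def terminal_voltage(emf, current_a, internal_resistance_ohm):
    """Terminal voltage of a cell delivering current_a (discharge positive)."""
    return emf - current_a * internal_resistance_ohm

# Ideal cell from the text: 1.5 V held constant while 1 coulomb flows.
ideal_work_j = 1.5 * 1.0
print("ideal cell work:", ideal_work_j, "J")

# Real cell (illustrative numbers): 1.5 V emf, 0.3 ohm internal resistance.
for load_current in (0.1, 0.5, 1.0):      # amperes
    v = terminal_voltage(1.5, load_current, 0.3)
    print(f"I = {load_current} A -> terminal voltage {v:.2f} V")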
Brake Introduction
A brake is a mechanical device which inhibits motion. Its opposite
component is a clutch. Brake pedal slows a car to a stop. When you depress
your brake pedal, your car transmits the force from your foot to its brakes
through a fluid. Since the actual brakes require a much greater force than you
could apply with your leg, your car must also multiply the force of your foot.
The brakes transmit the force to the tires using friction, and the tires transmit
that force to the road using friction also.

Almost all wheeled vehicles have a brake of some sort. Even baggage carts
and shopping carts may have them for use on a moving ramp. Most fixed-
wing aircraft are fitted with wheel brakes on the undercarriage. Some aircraft
also feature air brakes designed to reduce their speed in flight. Friction brakes
on automobiles store braking heat in the drum brake or disc brake while
braking then conduct it to the air gradually. When traveling downhill some
vehicles can use their engines to brake.
Types of Brakes
Brakes may be broadly described as using friction, pumping, or
electromagnetics. One brake may use several principles: for example, a pump
may pass fluid through an orifice to create friction:
1. Frictional Brake
2. Pumping Brake
3. Electromagnetic Brake
4. Hydraulic Brake
5. Air Brake
6. Anti-Braking System(ABS)
Frictional, Pumping, Electromagnetic brakes
1. Frictional brakes are most common and can be divided broadly into
"shoe" or "pad" brakes, using an explicit wear surface, and hydrodynamic
brakes, such as parachutes, which use friction in a working fluid and do not
explicitly wear. Typically the term "friction brake" is used to mean pad/shoe
brakes and excludes hydrodynamic brakes, even though hydrodynamic
brakes use friction.
Friction (pad/shoe) brakes are often rotating devices with a stationary pad and
a rotating wear surface. Common configurations include shoes that contract
to rub on the outside of a rotating drum, such as a band brake; a rotating
drum with shoes that expand to rub the inside of a drum, commonly called a
"drum brake", although other drum configurations are possible; and pads
that pinch a rotating disc, commonly called a "disc brake". Other brake
configurations are used, but less often. For example, PCC trolley brakes
include a flat shoe which is clamped to the rail with an electromagnet; the
Murphy brake pinches a rotating drum, and the Ausco Lambert disc brake
uses a hollow disc (two parallel discs with a structural bridge) with shoes that
sit between the disc surfaces and expand laterally.

2. Pumping brakes are often used where a pump is already part of the
machinery. For example, an internal-combustion piston motor can have the
fuel supply stopped, and then internal pumping losses of the engine create
some braking. Some engines use a valve override called a Jake brake to
greatly increase pumping losses. Pumping brakes can dump energy as heat, or
can be regenerative brakes that recharge a pressure reservoir called a
hydraulic accumulator.
3. Electromagnetic brakes are likewise often used where an electric
motor is already part of the machinery. For example, many hybrid
gasoline/electric vehicles use the electric motor as a generator to charge
electric batteries and also as a regenerative brake. Some diesel/electric
railroad locomotives use the electric motors to generate electricity which is
then sent to a resistor bank and dumped as heat. Some vehicles, such as some
transit buses, do not already have an electric motor but use a secondary
"retarder" brake that is effectively a generator with an internal short-circuit.
Related types of such a brake are eddy current brakes, and electro-mechanical
brakes (which actually are magnetically driven friction brakes, but nowadays
are often just called “electromagnetic brakes” as well).
Hydraulic Brake
A hydraulic brake is an arrangement of braking mechanism which uses brake
fluid, typically containing glycol ethers or diethylene glycol, to transfer pressure from the
controlling unit, which is usually near the operator of the vehicle, to the
actual brake mechanism, which is usually at or near the wheel of the vehicle.
The most common arrangement of hydraulic brakes for passenger vehicles,
motorcycles, scooters, and mopeds, consists of the following:
a) Brake pedal or lever
b) A push rod (also called an actuating rod)
c) A master cylinder assembly containing a piston assembly (made up of
either one or two pistons, a return spring, a series of gaskets/ O-rings and a
fluid reservoir)
d) Reinforced hydraulic lines
e) Brake calliper assembly usually consisting of one or two hollow
aluminium or chrome-plated steel pistons (called calliper pistons), a set of
thermally conductive brake pads and a rotor (also called a brake disc) or drum
attached to an axle.
At one time, passenger vehicles commonly employed drum brakes on all four
wheels. Later, disc brakes were used for the front and drum brakes for the
rear. However, because disc brakes have shown a better stopping
performance and are therefore generally safer and more effective than drum
brakes, four-wheel disc brakes have become increasingly popular, replacing
drums on all but the most basic vehicles. Many two-wheeled vehicle designs,
however, continue to employ a drum brake for the rear wheel.
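The force multiplication mentioned in the introduction follows from Pascal's law: the pressure is the same throughout the brake fluid, so a modest pedal force on a small master-cylinder piston becomes a much larger force on the larger calliper pistons. The sketch below is only an illustration; the pedal force, pedal leverage and piston diameters are assumed values, not figures for any real vehicle.

```python
import math

# Illustrative sketch of hydraulic force multiplication (Pascal's law).
# All input values are assumed examples, not data for a real brake system.

def piston_area(diameter_m):
    return math.pi * (diameter_m / 2) ** 2

pedal_force = 100.0     # newtons applied by the driver's foot (assumed)
pedal_leverage = 4.0    # mechanical advantage of the pedal linkage (assumed)
master_dia = 0.020      # 20 mm master cylinder piston (assumed)
calliper_dia = 0.048    # 48 mm calliper piston (assumed)

force_on_master = pedal_force * pedal_leverage
pressure = force_on_master / piston_area(master_dia)        # Pa, equal throughout the fluid
force_at_calliper = pressure * piston_area(calliper_dia)    # N on each calliper piston

print(f"line pressure:  {pressure / 1e5:.1f} bar")
print(f"calliper force: {force_at_calliper:.0f} N "
      f"({force_at_calliper / pedal_force:.1f}x the pedal force)")
```

With these assumed figures the calliper piston sees roughly 23 times the force applied at the pedal, which is why a light foot can clamp the pads hard against the disc.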
Air Brake System
Air Brake System is the brake system used in automobiles such as buses,
trailers, trucks, and semi-trailers. George Westinghouse first created air
brakes for use on railway trains. A safer air brake was patented by him on 5
March 1872. At first the air brake was produced for use on trains; it is now
common in heavy road vehicles. Westinghouse made various modifications to
enhance his invention, leading to several forms of the automatic brake, which
was extended to include road vehicles.
Compressed Air Brake System- The Compressed Air Brake System is a
different air brake used in trucks which contains a standard disc or drum
brake using compressed air instead of hydraulic fluid. Most types of truck
air brakes are drum units, though there is a growing trend towards the use of disc
brakes. The compressed air brake system works by drawing clean air from
the environment, compressing it, and holding it in high-pressure tanks at
around 120 psi.
Whenever the air is needed for braking, this air is directed to the functioning
cylinders on brakes to activate the braking hardware and slow the vehicle. Air
brakes use compressed air to increase braking forces. Large vehicles also
have an emergency brake system, in which the compressed air holds back a
mechanical spring force which would otherwise engage the brakes. If air
pressure is lost for any reason, the springs apply the brakes and the vehicle stops.
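To give a feel for why roughly 120 psi of stored air is enough to apply the brakes of a heavy vehicle, the hedged sketch below computes the push-rod force produced by a brake chamber diaphragm of an assumed size at that pressure. The diaphragm diameter is an illustrative assumption, not a specification.

```python
import math

# Illustrative sketch: force delivered by an air brake chamber diaphragm.
# The effective diaphragm diameter is an assumed example value.

PSI_TO_PA = 6894.76

supply_pressure_psi = 120.0    # typical storage pressure quoted above
diaphragm_diameter_m = 0.16    # ~16 cm effective diaphragm (assumed)

area = math.pi * (diaphragm_diameter_m / 2) ** 2      # m^2
force_n = supply_pressure_psi * PSI_TO_PA * area      # F = P * A

print(f"push-rod force at {supply_pressure_psi:.0f} psi: {force_n / 1000:.1f} kN")
```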
Design and Function- The compressed air brake system is divided into a
control system and a supply system. The supply system compresses, stores and
provides high-pressure air to the control system and also to other air-operated
secondary truck systems such as the gearbox shift control, clutch pedal air
assistance servo, etc.
Control system- The control system is divided into two service brake
circuits, a parking brake circuit and a trailer brake circuit. The service brake
circuits are split into front and rear wheel circuits, each of which gets
compressed air from its own reservoir for added protection in case of an air
leak. The service brakes are applied by a brake pedal air valve which controls
both circuits. The parking brake is an air-controlled spring brake which is
applied by spring force in the spring brake cylinder and released by
compressed air through the hand control valve.
The trailer brake consists of a direct two-line system: the supply line, which is
marked red, and the separate control or service line, which is marked blue. The
supply line receives air from the prime mover park brake air tank through a
park brake relay valve, and the control line is regulated through the trailer
brake relay valve. The operating signals for the relay valves are provided by
the prime mover brake pedal air valve, the trailer service brake hand control
and the prime mover park brake hand control.
Supply system- The air compressor is driven by the automobile engine, either
from the crankshaft pulley via a belt or directly from the engine timing gears.
It is lubricated and cooled by the engine lubrication and cooling systems. The
compressed air is initially directed through a cooling coil and into an air
dryer which removes moisture and oil impurities and also contains a
pressure regulator, safety valve and a small purge reservoir. As an alternative
to the air dryer, the supply system may be outfitted with an anti-freeze device
and oil separator. The compressed air is then stored in a reservoir and
distributed through a 4-way protection valve into the front and rear
brake circuit air reservoirs, a parking brake reservoir and an auxiliary air
supply distribution point. The supply system also contains various check,
pressure-limiting, drain and safety valves.
Clutch Introduction
A Clutch is a machine member used to connect the driving shaft to a driven
shaft, so that the driven shaft may be started or stopped at will, without
stopping the driving shaft. A clutch thus provides an interruptible connection
between two rotating shafts. Clutches allow a high inertia load to be started
with a small power.
Clutches are used whenever the ability to limit the transmission of power or
motion needs to be controlled either in amount or over time (e.g. electric
screwdrivers limit how much torque is transmitted through use of a clutch;
clutches control whether automobiles transmit engine power to the wheels).
In the simplest application clutches are employed in devices which have two
rotating shafts. In these devices one shaft is typically attached to a motor or
other power unit (the driving member) while the other shaft (the driven
member) provides output power for work to be done. In a drill for instance,
one shaft is driven by a motor and the other drives a drill chuck. The clutch
connects the two shafts so that they may be locked together and spin at the
same speed (engaged), locked together but spinning at different speeds
(slipping), or unlocked and spinning at different speeds (disengaged).
A well-known application of the clutch is in automotive vehicles, where it is
used to connect the engine and the gearbox. Here the clutch enables the driver
to crank and start the engine with the transmission disengaged, and to
disengage the transmission while changing gear to alter the torque on the
wheels. Clutches are also used extensively in production machinery of all types.
When your foot is off the pedal, the springs push the pressure plate against
the clutch disc, which in turn presses against the flywheel. This locks the
engine to the transmission input shaft, causing them to spin at the same
speed.
Clutch for a drive shaft: The clutch disc (centre) spins with the flywheel
(left). To disengage, the lever is pulled (black arrow), causing a white
pressure plate (right) to disengage the green clutch disc from turning the
drive shaft, which turns within the thrust-bearing ring of the lever. Never will
all 3 rings connect, with no gaps.
In a car's clutch, a flywheel connects to the engine, and a clutch plate
connects to the transmission.
The amount of force the clutch can hold depends on the friction between the
clutch plate and the flywheel, and how much force the spring puts on the
pressure plate. When the clutch pedal is pressed, a cable or hydraulic piston
pushes on the release fork, which presses the throw-out bearing against the
middle of the diaphragm spring. As the middle of the diaphragm spring is
pushed in, a series of pins near the outside of the spring causes the spring to
pull the pressure plate away from the clutch disc (see below). This releases
the clutch from the spinning engine.
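The holding ability described above can be put into numbers with the usual friction-clutch relation T = u x F x n x r_mean, where u is the friction coefficient, F the clamp force from the diaphragm spring, n the number of friction faces (two for a single-plate clutch) and r_mean the mean radius of the friction facing. The sketch below is only a rough illustration under the uniform-wear assumption, and every numerical value in it is an assumed example.

```python
# Illustrative sketch: torque capacity of a single-plate friction clutch.
# Friction coefficient, clamp force and disc radii are assumed example values.

def clutch_torque(mu, clamp_force_n, friction_faces, r_outer_m, r_inner_m):
    """Uniform-wear approximation: T = mu * F * n * r_mean."""
    r_mean = (r_outer_m + r_inner_m) / 2
    return mu * clamp_force_n * friction_faces * r_mean

torque = clutch_torque(
    mu=0.3,              # typical organic friction facing (assumed)
    clamp_force_n=4000,  # diaphragm-spring clamp load (assumed)
    friction_faces=2,    # flywheel side plus pressure-plate side
    r_outer_m=0.115,     # 230 mm disc (assumed)
    r_inner_m=0.075,
)
print(f"approximate torque capacity: {torque:.0f} N*m")
```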
Requirements of a Good Clutch
1. Torque Transmission
2. Gradual Engagement
3. Good Heat Dissipation
4. Compact Size
5. Sufficient Clutch Pedal Free Play
6. Ease of Operation
Different Types of Clutch
Friction Clutch
Friction clutches are the most commonly used clutch mechanisms. They are
used to transmit torque by using the surface friction between two faces of the
clutch.
Dog Clutch
A dog clutch couples two rotating shafts or other rotating components not by
friction, but by interference. Both the parts of the clutch are designed so that
one pushes into the other, causing both to rotate at the same speed, so that
they never slip.
Cone Clutch
Cone clutches are simply friction clutches with conical surfaces. The
area of contact differs from normal frictional surfaces. The conical surface
provides a taper which acts as a wedge, so that a given amount of actuating
force produces a much higher pressure on the mating surfaces than a flat
clutch plate would.
Overrunning Clutch
Also known as the freewheel mechanism, this type of clutch disengages the
drive shaft from the driven shaft when the driven shaft rotates faster than the
drive shaft. An example of such a situation is when a cyclist stops
pedalling and cruises. Automobiles, however, generally have no freewheel
system, so when going downhill the engine cannot simply be decoupled by
lifting off the accelerator; instead the wheels keep driving the engine, which
produces engine braking.
Safety Clutch
Also known as the torque limiter, this device allows a rotating shaft to "slip"
or disengage when higher than normal resistance is encountered on a
machine. An example of a safety clutch is the one mounted on the driving
shaft of a large grass mower. If the blades strike a stone or other obstruction,
the clutch slips immediately, so the obstruction does not damage the blades or
the drive.
Centrifugal clutch
Centrifugal and semi-centrifugal clutches are employed where they need to
engage only at some specific speeds. There is a rotating member on the
driving shaft, which rises up as the speed of the shaft increases and engages
the clutch, which then drives the driven shaft.
Hydraulic Clutch
In a hydraulic clutch system, the coupling is hydrodynamic and the shafts are
not actually in contact. They work as an alternative to mechanical clutches.
They are known to have common problems associated with hydraulic
couplings, and are a bit unsteady in transmitting torque.
Electromagnetic Clutch
These clutches apply the principle of electromagnetism to the clutch mechanism.
The ends of the driven and driving pieces are kept separate and they act as the
pole pieces of a magnet. When a DC current is passed through the clutch
coil, the electromagnet is energized and the clutch is engaged.
Fluid Coupling
It is a device for transmitting rotation between shafts by means of the
acceleration and deceleration of a hydraulic fluid (such as oil). Also known as
hydraulic coupling. Structurally, a fluid coupling consists of an impeller on
the input or driving shaft and a runner on the output or driven shaft. The two
contain the fluid. Impeller and runner are bladed rotors, the impeller acting as
a pump and the runner reacting as a turbine. Basically, the impeller
accelerates the fluid from near its axis, at which the tangential component of
absolute velocity is low, to near its periphery, at which the tangential
component of absolute velocity is high. This increase in velocity represents
an increase in kinetic energy. The fluid mass emerges at high velocity from
the impeller, impinges on the runner blades, gives up its energy, and leaves
the runner at low velocity.
Hydraulic fluid couplings transfer rotational force from a transmitting axis to
a receiving axis. The coupling consists of two toroids (doughnut-shaped
objects) in a sealed container of hydraulic fluid. One toroid is attached to
the driving shaft and spins with the rotational force. The spinning toroid
moves the hydraulic fluid around the receiving toroid. The movement of the
fluid turns the receiving toroid and thus turns the connected shaft.
Although fluid couplings use hydraulic fluid within their construction, the
mechanism loses a portion of its force to friction and results in the creation of
heat. No fluid coupling can run at 100 percent efficiency. Excessive heat
production from poorly maintained couplings can result in damage to the
coupling and surrounding systems.
A fluid coupling is a hydrodynamic device used to transmit rotating
mechanical power. It has been used in automobile transmissions as an
alternative to a mechanical clutch. It also has widespread application in
marine and industrial machine drives, where variable speed operation and/or
controlled start-up without shock loading of the power transmission system is
essential.
Differential Introduction
A differential is a device, usually, but not necessarily, employing gears,
capable of transmitting torque and rotation through three shafts, almost
always used in one of two ways: in one way, it receives one input and
provides two outputs—this is found in most automobiles - and in the other
way, it combines two inputs to create an output that is the sum, difference, or
average, of the inputs.
In automobiles and other wheeled vehicles, the differential allows each of the
driving road wheels to rotate at different speeds.
The differential has three jobs:
1. To aim the engine power at the wheels
2. To act as the final gear reduction in the vehicle, slowing the rotational
speed of the transmission one final time before it hits the wheels
3. To transmit the power to the wheels while allowing them to rotate at
different speeds (This is the one that earned the differential its name.)
Advantages & Disadvantages of Front Wheel Drive
Advantages of Front Wheel Drive-
1. Interior space: Since the powertrain is a single unit contained in the
engine compartment of the vehicle, there is no need to devote interior
space for a driveshaft tunnel or rear differential, increasing the volume
available for passengers and cargo.
2. Cost: Fewer components overall
3. Weight: Fewer components mean lower weight
4. Fuel economy: Lower weight means better gasoline mileage
5. Improved drivetrain efficiency: the direct connection between
engine and transaxle reduces the mass and mechanical inertia of the
drivetrain compared to a rear-wheel drive vehicle with a similar engine
and transmission, allowing greater fuel economy.
6. Assembly efficiency: the powertrain can often be assembled and
installed as a unit, which allows more efficient production.
7. Slippery-surface traction: placing the mass of the drivetrain over the
driven wheels improves traction on wet, snowy, or icy surfaces,
although heavy cargo over the rear axle can similarly aid traction on
rear-wheel drive pickup trucks.
8. Predictable handling characteristics: front-wheel drive cars, with a
front weight bias, tend to understeer at the limit, which is commonly
believed to be easier for average drivers to correct than terminal
oversteer, and less prone to result in fishtailing or a spin.
9. Better crosswind stability.
10. Tactile feedback via the steering wheel informing driver if a wheel
is slipping.
11. Front wheel drive allows the use of left-foot braking as a driving
technique.
Disadvantages of Front Wheel Drive-
1. The centre of gravity of the vehicle is typically farther forward than
a comparable rear-wheel drive layout. In front wheel drive cars, the
front axle typically supports around 2/3rd of the weight of the car (quite
far off the "ideal" 50/50 weight distribution). This is a contributing
factor in the tendency of front wheel drive cars to understeer.
2. Torque steer can be a problem on front wheel drive cars with higher
torque engines ( > 210 N·m ) and transverse layout. This is the name
given to the tendency for some front wheel drive cars to pull to the left
or right under hard acceleration. It is a result of the offset between the
point about which the wheel steers (which falls at a point which is
aligned with the points at which the wheel is connected to the steering
mechanisms) and the centroid of its contact patch. The tractive force
acts through the centroid of the contact patch, and the offset of the
steering point means that a turning moment about the axis of steering is
generated. In an ideal situation, the left and right wheels would generate
equal and opposite moments, cancelling each other out; in reality this is less
likely to happen. Torque steer is often incorrectly attributed to differing rates
of twist along the lengths of unequal front drive shafts. Centre-point steering
geometry can be incorporated in the design to avoid torque steer; this is how
the powerful Citroen SM front-wheel drive car avoided the problem.
3. Lack of weight shifting will limit the acceleration of a front wheel
drive vehicle. In a rear wheel drive car the weight shifts back during
acceleration giving more traction to the driving wheels. This is the main
reason why nearly all racing cars are rear wheel drive. However, since
front wheel cars have the weight of the engine over the driving wheels
the problem only applies in extreme conditions.
4. In some towing situations front wheel drive cars can be at a traction
disadvantage since there will be less weight on the driving wheels.
Because of this, the weight that the vehicle is rated to safely tow is
likely to be less than that of a rear wheel drive or four wheel drive
vehicle of the same size and power.
5. Due to geometry and packaging constraints, the CV joints (constant-
velocity joints) attached to the wheel hubs have a tendency to wear out
much earlier than their rear wheel drive counterparts. The significantly
shorter drive axles on a front wheel drive car cause the joints to flex
through a much wider range of motion, compounded by additional
stress and angles of steering, while the CV joints of a rear wheel drive
car regularly see angles and wear of less than half those of front wheel
drive vehicles.
6. The driveshafts may limit the amount by which the front wheels can
turn, so the turning circle of a front wheel drive car may be larger than
that of a rear wheel drive car with the same wheelbase.
7. In low traction conditions (e.g. ice or gravel) the front (drive)
wheels lose traction first, making steering ineffective.
Advantages & Disadvantages of Rear Wheel Drive
Advantages of Rear Wheel Drive-
1. Better handling in dry conditions - accelerating force is applied to
the rear wheels, on which the down force increases, due to load transfer
in acceleration, making the rear tires better able to take simultaneous
acceleration and curving than the front tires.
2. More predictable steering in low traction conditions (i.e.: ice or
gravel) because the steering wheels maintain traction and the ability to
affect the motion of the vehicle even if the drive wheels are slipping.
3. Less costly and easier maintenance - Rear wheel drive is
mechanically simpler and typically does not involve packing as many
parts into as small a space as does front wheel drive, thus requiring less
disassembly or specialized tools in order to replace parts.
4. No torque steer.
5. Even weight distribution - The division of weight between the front
and rear wheels has a significant impact on a car's handling, and it is
much easier to get a 50/50 weight distribution in a rear wheel drive car
than in a front wheel drive car, as more of the engine can lie between
the front and rear wheels (in the case of a mid-engine layout, the entire
engine), and the transmission is moved much farther back.
6. Steering radius - As no complicated drive shaft joints are required at
the front wheels, it is possible to turn them further than would be
possible using front wheel drive, resulting in a smaller steering radius.
7. Towing - Rear wheel drive puts the wheels which are pulling the
load closer to the point where a trailer articulates, helping steering,
especially for large loads.
8. Weight transfer during acceleration. (During heavy acceleration, the
front end rises, and more weight is placed on the rear, or driving
wheels).
9. Drifting - Drifting is a controlled skid, where the rear wheels break
free from the pavement as they spin, allowing the rear end of the car to
move freely left and right. This is of course easier to do on slippery
surfaces. Severe damage and wear to tires and mechanical components
can result from drifting on dry asphalt. Drifting can be used to help in
cornering quickly, or in turning the car around in a very small space.
Many enthusiasts make a sport of drifting, and will drift just for the
sake of drifting. Drifting requires a great deal of skill, and is not
recommended for most drivers. It should be mentioned that front wheel
drive and four wheel drive cars may also drift, but only with much more
difficulty. When front wheel drive cars drift, the driver usually pulls on
the emergency brake in order for the back wheels to stop and thus skid.
This technique is also used for 'long' drifts, where the turn is
accomplished by pulling the e-brake while turning the steering wheel to
the direction the driver desires. With drifting, there is also the
importance of 'counter-steering' - where while temporarily out of
control, the driver regains it by turning the wheel in the opposite
direction and thus preparing for the next turn or straight-away.
Disadvantages of Rear Wheel Drive-
1. More difficult to master - While the handling characteristics of rear-
wheel drive may be more fun for some drivers, for others having rear
wheel drive is less intuitive. The unique driving dynamics of rear wheel
drive typically do not create a problem when used on vehicles that also
offer electronic stability control and traction control.
2. Decreased interior space - This isn't an issue in a vehicle with a
ladder frame like a pickup truck, where the space used by the drive line
is unusable for passengers or cargo. But in a passenger car, rear wheel
drive means: Less front leg room (the transmission tunnel takes up a lot
of space between the driver and front passenger), less leg room for
centre rear passengers (due to the tunnel needed for the drive shaft), and
sometimes less trunk space (since there is also more hardware that must
be placed underneath the trunk).
3. Increased weight - The drive shaft, which connects the engine at the
front to the drive axle in the back, adds weight. There is extra sheet
metal to form the transmission tunnel. A rear wheel drive car will weigh
slightly more than a comparable front wheel drive car, but less than four
wheel drive.
4. Higher purchase price - Due to the added cost of materials, rear
wheel drive is typically slightly more expensive to purchase than a
comparable front wheel drive vehicle. This might also be explained by
production volumes, however. Rear drive is typically the platform for
luxury performance vehicles, which makes rear drive appear to be more
expensive. In reality, even luxury performance front drive vehicles are
more expensive than average.
5. More difficult handling on low grip surfaces (wet road, ice, snow,
gravel...) as the car is pushed rather than pulled. In modern rear drive
cars, this disadvantage is offset by electronic stability control and
traction control.
Advantages & Disadvantages of All Or 4- Wheel
Drive
The differential is found on all modern cars and trucks, and also in many all-
wheel-drive (full-time four-wheel-drive) vehicles. These all-wheel-drive
vehicles need a differential between each set of drive wheels, and they need
one between the front and the back wheels as well, because the front wheels
travel a different distance through a turn than the rear wheels.
Part-time four-wheel-drive systems don't have a differential between the front
and rear wheels; instead, they are locked together so that the front and rear
wheels have to turn at the same average speed. This is why these vehicles are
hard to turn on concrete when the four-wheel-drive system is engaged.
The Advantages & Disadvantages of All Wheel Drive
All Wheel Drive (or AWD) is a system in which all four wheels of a car
operate simultaneously to improve traction and handling. While it is possible
for a car to have continuous AWD capabilities, it is far more common for one
pair of wheels to engage only when sensors detect that the other pair has
begun to slip. There are both advantages and disadvantages to AWD systems.
1. Traction
In intermittent AWD systems, the rear wheels engage when sensors detect
slippage from the front wheels. Under these circumstances, the vehicle
effectively detects and compensates for dangerous driving conditions such as
standing water, snow, ice or gravel that could otherwise compromise control
of the vehicle. By engaging the second set of wheels, the vehicle experiences
two additional points of contact on the surface of the road, allowing greater
likelihood that its tires will grip the surface and allow the driver to retain
control. The additional weight of AWD systems also encourages more grip
on the road and the added points of contact distribute the vehicle's weight
more evenly over points of propulsion.
2. Cost and Fuel Efficiency
The primary disadvantage of an AWD vehicle is its cost. The drive train and
related equipment necessary to provide both continuous and intermittent
AWD is complex and expensive, often requiring sensors and computers that
are not necessary on two- or four-wheel-drive vehicles. This cost increases
the initial market value of the vehicle and can also affect the cost of repairs.
In addition to these costs, AWD systems require more fuel to power the
additional wheels and are less fuel efficient than comparable two-wheel-drive
vehicles.
3. Braking Distance and Collision Avoidance
While the weight of AWD vehicles improves their handling, it also increases
the distance they require to stop. In a scenario where the vehicle must make a
sudden stop and cannot swerve or turn, a collision becomes more likely than
with a lighter car. Under similar circumstances, but ones in which an accident
can be avoided by turning, AWD vehicles offer better collision avoidance
than similar vehicles with less effective handling and turning capabilities.
Need of a Differential
Car wheels spin at different speeds, especially when turning. Each wheel
travels a different distance through the turn, and the inside wheels travel
a shorter distance than the outside wheels. Since speed is equal to the
distance travelled divided by the time it takes to go that distance, the wheels
that travel a shorter distance travel at a lower speed. Also note that the front
wheels travel a different distance than the rear wheels.
For the non-driven wheels on your car -- the front wheels on a rear-wheel
drive car, the back wheels on a front-wheel drive car -- this is not an issue.
There is no connection between them, so they spin independently. But the
driven wheels are linked together so that a single engine and transmission can
turn both wheels. If your car did not have a differential, the wheels would
have to be locked together, forced to spin at the same speed. This would
make turning difficult and hard on your car: For the car to be able to turn, one
tire would have to slip. With modern tires and concrete roads, a great deal of
force is required to make a tire slip. That force would have to be transmitted
through the axle from one wheel to another, putting a heavy strain on the axle
components.
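The speed difference that makes a differential necessary is easy to quantify: in a turn of radius R, the inner and outer wheels of the driven axle follow arcs of radius R - t/2 and R + t/2, where t is the track width, so their speeds differ in the same proportion. The sketch below uses an assumed track width, turn radius and vehicle speed purely for illustration.

```python
# Illustrative sketch: wheel speeds of the driven axle in a turn.
# Track width, turn radius and vehicle speed are assumed example values.

track_width = 1.5      # metres between left and right wheels (assumed)
turn_radius = 10.0     # metres, measured to the centre of the axle (assumed)
vehicle_speed = 5.0    # m/s at the axle centre (assumed)

yaw_rate = vehicle_speed / turn_radius   # rad/s of the whole car
inner_speed = yaw_rate * (turn_radius - track_width / 2)
outer_speed = yaw_rate * (turn_radius + track_width / 2)

print(f"inner wheel: {inner_speed:.2f} m/s")
print(f"outer wheel: {outer_speed:.2f} m/s")
print(f"difference:  {outer_speed - inner_speed:.2f} m/s "
      f"({100 * (outer_speed - inner_speed) / vehicle_speed:.0f}% of vehicle speed)")
```

With these assumed figures the outer wheel has to turn about fifteen percent faster than the inner one, which is the difference the differential absorbs.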
Construction and Working of Differential Assembly
Torque is supplied from the engine, via the transmission, to a drive shaft
(British term: 'propeller shaft', commonly and informally abbreviated to
'prop-shaft'), which runs to the final drive unit that contains the differential. A
spiral bevel pinion gear takes its drive from the end of the propeller shaft, and
is encased within the housing of the final drive unit. This meshes with the
large spiral bevel ring gear, known as the crown wheel. The crown wheel and
pinion may mesh in hypoid orientation, not shown. The crown wheel gear is
attached to the differential carrier or cage, which contains the 'sun' and
'planet' wheels or gears. These are a cluster of four opposed bevel gears in
perpendicular planes, so each bevel gear meshes with two neighbours and
rotates counter to the third, which it faces but does not mesh with. The two sun
wheel gears are aligned on the same axis as the crown wheel gear, and drive
the axle half shafts connected to the vehicle's driven wheels. The other two
planet gears are aligned on a perpendicular axis which changes orientation
with the ring gear's rotation.
Input torque is applied to the ring gear (blue), which turns the entire carrier
(blue). The carrier is connected to both the side gears (red and yellow) only
through the planet gear (green) (visual appearances in the diagram
notwithstanding). Torque is transmitted to the side gears through the planet
gear. The planet gear revolves around the axis of the carrier, driving the side
gears. If the resistance at both wheels is equal, the planet gear revolves
without spinning about its own axis, and both wheels turn at the same rate.
If the left side gear (red) encounters resistance, the planet gear (green) spins
as well as revolving, allowing the left side gear to slow down, with an equal
speeding up of the right side gear (yellow).
Thus, for example, if the vehicle is making a turn to the right, the main
crown wheel may make 10 full rotations. During that time, the left wheel will
make more rotations because it has further to travel, and the right wheel will
make fewer rotations as it has less distance to travel. The sun gears (which
drive the axle half-shafts) will rotate in opposite directions relative to the ring
gear by, say, 2 full turns each (4 full turns relative to each other), resulting in
the left wheel making 12 rotations, and the right wheel making 8 rotations.
The rotation of the crown wheel gear is always the average of the rotations of
the side sun gears. This is why, if the driven road wheels are lifted clear of
the ground with the engine off, and the drive shaft is held (say leaving the
transmission 'in gear', preventing the ring gear from turning inside the
differential), manually rotating one driven road wheel causes the opposite
road wheel to rotate in the opposite direction by the same amount.
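The relationship in the worked example above can be stated simply: the crown wheel (carrier) rotation is always the average of the two side-gear rotations. The short check below, a minimal sketch assuming nothing beyond that averaging rule, reproduces the 10 / 12 / 8 figures and the lifted-wheel case.

```python
# Illustrative check of the open-differential relation:
# carrier turns = (left wheel turns + right wheel turns) / 2

def carrier_turns(left, right):
    return (left + right) / 2

# Worked example from the text: 12 and 8 wheel rotations average to 10.
print(carrier_turns(12, 8))        # -> 10.0

# Wheels lifted and carrier held (transmission 'in gear'): turning one wheel
# forward by x forces the other backward by x, so the average stays zero.
left = +1.0
right = -left
print(carrier_turns(left, right))  # -> 0.0
```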
When the vehicle is traveling in a straight line, there will be no differential
movement of the planetary system of gears other than the minute movements
necessary to compensate for slight differences in wheel diameter, undulations
in the road (which make for a longer or shorter wheel path), etc.
· The arrangement described above is an open differential, which is commonly
found in most vehicles. Open differentials are quite trouble-free, but do have
one disadvantage. On a dry road with good traction, the power is evenly
applied to both wheels.
When one of the tires hits ice or a slippery surface, it begins to spin and the
majority of torque is directed to the spinning wheel, leaving very little for the
wheel with the good traction. This is how vehicles can get stuck in snow or
mud.
· Another type of differential is the limited slip differential, which is an
option on most new cars. It has a distinct advantage by having a set of
clutches and springs within the differential. Their function is to apply
pressure to the side gears should one of the tires begin to slip. By applying
pressure to the opposite wheel from the one spinning, it allows for more
torque to be applied to the wheel with traction. It is far superior to the open
differential when it comes to traction in bad weather.
Components of Automobile Engine
1) Camshaft:
A camshaft is a rotating shaft used in piston engines for operating the poppet
valves. It comprises a series of cams that regulate the opening and closing of
the valves. The camshaft is driven from the crankshaft by a belt, chain or gears.
2) Crankshaft:
The crankshaft is the device which converts the up and down movement of the
pistons into rotary motion. The shaft lies at the bottom of the engine and is
driven round by the pistons through the connecting rods. The crankshaft is
further connected to the flywheel, clutch, main shaft of the transmission,
torque converter and belt pulley.
To convert Reciprocating motion of the Piston into Rotary motion, the
Crankshaft and Connecting Rod combination is used. The Crankshaft, which
is made by Steel Forging or Casting, is held on the Axis around which it
rotates by the Main Bearings, which fit round the main Journals provided.
There are always at least two such bearings, one at the rear end and another at
front end. The increase in number of Main Bearings for a given size of the
Crankshaft means less possibility of Vibration and Distortion.
But it will also increase the difficulty of correct alignment in addition to
increased production cost. The Main Bearings are mounted on the Crankcase
of the Engine. The Balance weight or Counterweight keep the system in
perfect balance.
The Crank Webs are extended and enlarged on the side of the Journal opposite
the Crank Throw so as to form balance weights. The Crankshaft may be
made from Carbon Steel, Nickel Chrome or other Alloy Steel.
3) Connecting Rod:
Connecting rods are metal links used for joining a rotating wheel to a
reciprocating shaft. More precisely, the connecting rod, also referred to as
the con rod, is used for conjoining the piston to the crankshaft.
The load on the piston due to combustion of fuel in the combustion chamber
is transmitted to the crankshaft through the connecting rod. One end of the
connecting rod, known as the small end, is connected to the piston through the
gudgeon pin, while the other end, known as the big end, is connected to the
crankshaft through the crank pin.
Connecting rods are usually made up of drop forged I section. In large size
internal combustion engine, the connecting rods of rectangular section have
been employed. In such cases, the larger dimensions are kept in the plane of
rotation.
In petrol engine, the connecting rod's big end is generally split to enable its
clamping around the crankshaft. Suitable diameter holes are provided to
accommodate connecting rod bolts for clamping. The big end of connecting
rod is clamped to the crankshaft with the help of connecting rod bolts, nuts
and split pins or cotter pins.
Generally, plain carbon steel is used as material to manufacture connecting
rod but where low weight is most important factor, aluminium alloys are
most suitable. Nickel alloy steels are also used for heavy-duty engine
connecting rods.
Connecting rods can be made of steel, aluminum, titanium, iron and
other types of metals.
4) Crankcase:
A crankcase is a metallic cover that holds together the crankshaft and its
attachments. It is the largest cavity within an engine that protects the
crankshaft, connecting rods and other components from foreign objects.
Automotive crankcases are filled with air and oil, while Magnesium, Cast
Iron, Aluminium and alloys are some common materials used to make
crankcases.
5) Cylinder Heads:
Cylinder heads refers to a detachable plate, which is used for covering the
closed end of a cylinder assembled in an automotive engine. It houses the
combustion chambers, valve train and spark plugs. Different types of
automobiles have different engine configurations: a straight (inline) engine has
only one cylinder head, while a V engine has two cylinder heads.
6) Engine Belts:
Engine belts are the bands made of flexible material used for connecting or
joining two rotating shafts or pulleys together. These belts work in
coordination with wheels and axles for transferring energy. When the wheels
or shafts are positioned at extremely different angles, then the engine belts
have the ability to change the direction of a force. An engine pulley is a type of
machine or wheel having either a broad rim or a grooved rim, attached to a
belt, rope or chain for transmitting power or lifting heavy objects.
7) Engine Oil System:
Oil is one of the necessities of an automobile engine. Oil is distributed
under strong pressure to all other moving parts of an engine with the help
of an oil pump. This oil pump is placed at the bottom of an engine in the oil
pan and is joined by a gear to either the crankshaft or the camshaft. Near
the oil pump, there is an oil pressure sensor, which sends information about
the status of oil to a warning light or meter gauge.
The different parts of engine oil systems include:
- Engine Oil
- Engine Oil Cooler
- Engine Oil Filter
- Engine Oil Gaskets
- Engine Oil Pan
- Engine Oil Pipe
8) Engine Valve:
Automobile engine valves are devices that regulate the flow of air and fuel
mixture into the cylinder and assist in expelling exhaust gases after fuel
combustion. They are indispensable to the system of coordinated opening and
closing of valves, known as valve train. Engine valves are made from varied
materials such as Structural Ceramics, Steels, Super alloys and Titanium
alloys. Valve materials are selected based on the temperatures and pressures
the valves are to endure.
The primary components of engine valve are:
- Inlet Valve
- Exhaust Valve
- Combination Valve
- Check Valve
- EGR Valve
- Thermostat Valve
- Overhead Valve
- Valve Guide
- Schrader Valve
- Vacuum Delay Parts
Inlet Valve & Exhaust Valve-
Function- The inlet valve allows the fresh charge of air-fuel mixture to enter
the cylinder bore. The exhaust valve permits the burnt gases to escape from
the cylinder bore at the proper timing.
9) Engine Block:
An engine block is a metal casting that serves as a basic structure on
which other engine parts are installed. A typical block contains bores for
pistons, pumps or other devices to be attached to it. Even engines are
sometimes classified as small-block or big-block based on the distance
between cylinder bores of engine blocks. Engine blocks are made from
different materials including Aluminium alloys, grey cast iron, ferrous
alloys, white iron, grey iron, ductile iron, malleable iron, etc.
10) Engine Pulley:
An engine pulley is a wheel with a groove around its circumference, upon
which engine belts run and transmit mechanical power, torque and speed
across different shafts of an engine. An engine houses pulley units of
different sizes for cam shaft drive, accessory drive and timing belts. Moulded
plastics, iron and steel are normally used to make engine pulleys.
11) Engine Brackets:
An engine bracket is a metallic part used to join an engine mount to the
power unit or the body of a vehicle. These auto parts are installed between a
vehicle's body and power unit to dampen the vibrations generated by the
engine, thus preventing a vehicle's body from shaking due to the vibrations.
Engine brackets are made from Ductile Iron Cast, Aluminium,
Polypropylene, Fiberglass and alloys.
12) Engine Mounting Bolts:
Automotive mounting bolts secure different automobile components viz. air
bags, brake fittings, etc. on to a supporting structure. Likewise, engine
mounting bolts help secure an automobile's engine in place. Based on usage,
a number of materials such as alloys, silicon bronze, bronze, ceramic, carbon,
aluminium, nylon, phosphor bronze, nickel silver, plastic, titanium,
zirconium and stainless steel are utilized to produce these bolts.
13) Piston:
Piston is a cylindrical plug which is used for moving up and down the
cylinder according to the position of the crankshaft in its rotation. Piston has
multiple uses and functions. In a four-stroke engine the piston is pulled or
pushed by the crankshaft during the intake, compression and exhaust strokes,
while during the power stroke the piston is driven down by the explosion of
the air-fuel mixture and in turn drives the crankshaft.
The piston comprises several components, namely:
a) Piston Pins
b) Piston Floor Mat
c) Piston Rings
d) Piston Valve
14) Piston rings:
Piston rings provide a sliding seal between the outer edge of the piston and
the inner edge of the cylinder. The rings serve two purposes:
· They prevent the fuel/air mixture and exhaust in the combustion chamber
from leaking into the sump during compression and combustion.
· They keep oil in the sump from leaking into the combustion area, where it
would be burned and lost.
15) Push Rods:
Push rods are thin metallic tubes with rounded ends that move through the
holes within a cylinder block and head, to actuate the rocker arms. Pushrods
are found in valve-in-head type engines and are essential for the motion of
engine valves. Some commonly used materials for manufacturing pushrods
are Titanium, Aluminium, Chrome Moly and Tempered Chrome Moly.
16) Valve train:
The valve train consists of various components and parts which enable the
valves to operate and function smoothly. It comprises three main components:
the camshafts, the components which convert the camshaft's rotating movement
into reciprocating movement, and lastly the valves and their various parts.
The primary components of valve train are:
a) Tappet
b) Rocker Arms
c) Valve Timing System
17) Governor
The governor controls the speed of the engine at different loads by regulating
the fuel supply in a diesel engine. In a petrol engine it regulates the supply of
the air-petrol mixture to control the speed at various load conditions.
18) Carburettor
It converts petrol into a fine spray and mixes it with air in the proper ratio as
per the requirement of the engine.
19) Fuel Pump
This device supplies petrol to the carburettor, sucking it from the fuel tank.
20) Spark Plug
This device is used in petrol engines only and ignites the charge of fuel for
combustion.
21) Fuel Injector
This device is used in diesel engines only and delivers fuel in a fine spray
under pressure.
22) Gudgeon Pin
This pin connects the piston with the small end of the connecting rod, and is
also known as the piston pin. It is made of case-hardened steel and accurately
ground to the required diameter. Gudgeon pins are made hollow to reduce their
weight, resulting in a lower inertia effect of the reciprocating parts.
This pin is also known as "Fully Floating" as this is free to turn or oscillate
both in the piston bosses as well as the small end of the connecting rod. There
are very few chances of seizure in this case, but the end movement of the pin
must be restricted to prevent it from scoring the cylinder walls. This can be
achieved by using any one of the following three methods,
A) One spring circlip at each end is fitted into the groove in the piston bosses.
B) One spring circlip is provided in the middle.
C) Bronze or Aluminium pads are fitted at both ends of the pin, which
prevents the cylinder walls from being damaged.
The gudgeon pin may also be of the semi-floating type, in which either the pin
is free to turn or oscillate in the small end bearing but is secured in the piston
bosses, or it is secured in the small end bearing and allowed a free
oscillating movement in the piston bosses. The latter method provides more
bearing area at the bosses, and hence needs no bushes there, so it is
preferred.
23) Crank Pin
The crank pin hands over the power and motion coming from the piston
through the connecting rod to the crankshaft.
24) Sump
The sump surrounds the crankshaft. It contains some amount of oil,
which collects in the bottom of the sump (the oil pan).
25) Distributor
It operates the ignition coil making it spark at exactly the right moment. It
also distributes the spark to the right cylinder and at the right time. If the
timing is off by a fraction then the engine won't run properly.
Engine Problems
Three fundamental things can happen: a bad fuel mix, lack of compression or
lack of spark. Beyond that, thousands of minor things can create problems,
but these are the "big three." Based on the simple engine we have been
discussing, here is a quick rundown on how these problems affect your
engine:
Bad fuel mix - A bad fuel mix can occur in several ways:
· You are out of gas, so the engine is getting air but no fuel.
· The air intake might be clogged, so there is fuel but not enough air.
· The fuel system might be supplying too much or too little fuel to the
mix, meaning that combustion does not occur properly.
· There might be an impurity in the fuel (like water in your gas tank)
that makes the fuel not burn.
Lack of compression - If the charge of air and fuel cannot be compressed
properly, the combustion process will not work like it should. Lack of
compression might occur for these reasons:
· Your piston rings are worn (allowing air/fuel to leak past the piston
during compression).
· The intake or exhaust valves are not sealing properly, again allowing a
leak during compression.
· There is a hole in the cylinder.
The most common "hole" in a cylinder occurs where the top of the cylinder
(holding the valves and spark plug and also known as the cylinder head)
attaches to the cylinder itself. Generally, the cylinder and the cylinder head
bolt together with a thin gasket pressed between them to ensure a good seal.
If the gasket breaks down, small holes develop between the cylinder and the
cylinder head, and these holes cause leaks.
Lack of spark - The spark might be non-existent or weak for a number of
reasons:
· If your spark plug or the wire leading to it is worn out, the spark will
be weak.
· If the wire is cut or missing, or if the system that sends a spark down
the wire is not working properly, there will be no spark.
· If the spark occurs either too early or too late in the cycle (i.e. if the
ignition timing is off), the fuel will not ignite at the right time, and
this can cause all sorts of problems.
Many other things can go wrong. For example:
·If the battery is dead, you cannot turn over the engine to start it.
·If the bearings that allow the crankshaft to turn freely are worn out,
the crankshaft cannot turn so the engine cannot run.
·If the valves do not open and close at the right time or at all, air
cannot get in and exhaust cannot get out, so the engine cannot run.
·If someone sticks a potato up your tailpipe, exhaust cannot exit the
cylinder so the engine will not run.
·If you run out of oil, the piston cannot move up and down freely in
the cylinder, and the engine will seize.
In a properly running engine, all of these factors are within tolerance.
How to Produce More Engine Power
Car manufacturers are constantly playing with all of the following variables
to make an engine more powerful and/or more fuel efficient.
1. Increase displacement - More displacement means more power because
you can burn more gas during each revolution of the engine. You can
increase displacement by making the cylinders bigger or by adding more
cylinders. Twelve cylinders seems to be the practical limit.
2. Increase the compression ratio - Higher compression ratios produce
more power, up to a point. The more you compress the air/fuel mixture,
however, the more likely it is to spontaneously burst into flame (before the
spark plug ignites it). Higher-octane gasolines prevent this sort of early
combustion. That is why high-performance cars generally need high-octane
gasoline -- their engines are using higher compression ratios to get more
power.
3.Stuff more into each cylinder - If you can cram more air (and therefore
fuel) into a cylinder of a given size, you can get more power from the
cylinder (in the same way that you would by increasing the size of the
cylinder). Turbochargers and superchargers pressurize the incoming air to
effectively cram more air into a cylinder. See How Turbochargers Work for
details.
4. Cool the incoming air - Compressing air raises its temperature. However,
you would like to have the coolest air possible in the cylinder because the
hotter the air is, the less it will expand when combustion takes place.
Therefore, many turbocharged and supercharged cars have an intercooler.
An intercooler is a special radiator through which the compressed air passes
to cool it off before it enters the cylinder. See How Car Cooling Systems
Work for details.
5. Let air come in more easily - As a piston moves down in the intake
stroke, air resistance can rob power from the engine. Air resistance can be
lessened dramatically by putting two intake valves in each cylinder. Some
newer cars are also using polished intake manifolds to eliminate air resistance
there. Bigger air filters can also improve air flow.
6. Let exhaust exit more easily - If air resistance makes it hard for exhaust
to exit a cylinder, it robs the engine of power. Air resistance can be lessened
by adding a second exhaust valve to each cylinder (a car with two intake and
two exhaust valves has four valves per cylinder, which improves performance
-- when you hear a car ad tell you the car has four cylinders and 16 valves,
what the ad is saying is that the engine has four valves per cylinder). If the
exhaust pipe is too small or the muffler has a lot of air resistance, this can
cause back-pressure, which has the same effect. High-performance exhaust
systems use headers, big tail pipes and free-flowing mufflers to eliminate
back-pressure in the exhaust system. When you hear that a car has "dual
exhaust," the goal is to improve the flow of exhaust by having two exhaust
pipes instead of one.
7. Make everything lighter - Lightweight parts help the engine perform
better. Each time a piston changes direction, it uses up energy to stop the
travel in one direction and start it in another. The lighter the piston, the less
energy it takes.
8. Inject the fuel - Fuel injection allows very precise metering of fuel to each
cylinder. This improves performance and fuel economy.
Efficiency of the engine
The efficiency of the engine depends to a large extent upon the following
criteria:
· Compression
· Combustion Process
· Air/Fuel Mixture
· Mechanical Design
· Lubrication
1) Compression
The higher the Compression Ratio or the pre-compression pressure, then the
higher is the thermal efficiency of the internal combustion engine. This
results in a better fuel usage and more power is developed while less fuel is
consumed. The maximum compression is however limited by the Octane
Rating of the Gasoline that will be used. The higher the Octane Rating the
higher the compression can be.
Unfortunately, higher Octane Gasoline costs more to produce than low
Octane Gasoline. Therefore the increase in fuel efficiency can be offset by
increase in fuel costs.
The Compression Ratio is based on the mechanical design of the engine and
is expressed as:
e = (Vh + Vc) / Vc
Where:
e = Compression Ratio
Vh = Cylinder swept Volume
Vc = Combustion space Volume of the Cylinder
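A minimal sketch of how these quantities combine is given below; the bore, stroke and combustion-space volume are assumed example figures, not data for any specific engine.

```python
import math

# Illustrative sketch: compression ratio from cylinder geometry.
# Bore, stroke and combustion-space (clearance) volume are assumed values.

bore = 0.081     # metres (assumed)
stroke = 0.0772  # metres (assumed)
vc = 50e-6       # combustion space volume Vc in cubic metres (assumed)

vh = math.pi / 4 * bore ** 2 * stroke   # swept volume Vh of one cylinder
e = (vh + vc) / vc                      # e = (Vh + Vc) / Vc

print(f"swept volume Vh:   {vh * 1e6:.0f} cc")
print(f"compression ratio: {e:.1f}:1")
```

With these assumed dimensions the sketch gives a swept volume of roughly 400 cc per cylinder and a compression ratio of about 9:1.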
Even more important than Compression Ratio is the actual pre-compression
pressure also called Final Compression Pressure. Although its value can be
also described and figured out mathematically, it is always substantially less
than the mathematical result. The actual Final Compression Pressure can be
reliably obtained only by a measurement with a special tool, the Compression
Tester.
It is however important to know what the Final Compression Pressure should
be for the particular engine. This specification can be usually found in a
"Shop Manual" for the particular engine. The difference between the
measured and specified values for the Final Compression Pressure determines
the "Sealing Quality" of the combustion chamber.
The quality of the combustion chamber sealing by means of the Piston Rings
and the Valves is a measure of the condition of the engine. Lubricant can also
affect the quality of the sealing between the Rings and the Cylinder bore.
When the Final Compression Pressure is too high on a used engine, it usually means that
the combustion chamber and the piston crown have excessive amounts of carbon deposits that have
been formed due to any of the following:
1. Incomplete combustion
2. Use of poor quality fuel
3. Use of poor quality lubricant
If the Final Compression Pressure is too low on a used engine, it usually
means that the engine has any of the following problems:
Has excessive amounts of cylinder wear (due to poor lubrication)
Has sticking piston rings (poor lubricant)
Has burned exhaust valves (poor fuel or incorrect ignition timing)
Has a damaged cylinder head gasket
Has sticking intake or exhaust valves (poor lubricant)
2) Combustion Process
For the quality of the combustion process it is of prime importance that the
fuel mixes intimately with the air, so that it can be burnt as completely as
possible. It is important that the flame front progresses spatially and in
regular form during the power stroke, until the whole mixture has been burnt.
The combustion process is considerably influenced by the point in the
combustion chamber at which the mixture is ignited, and by the mixture ratio
as well as the manner in which it is fed into the combustion chamber.
Combustion is optimal and the efficiency of the engine is at its best when the
residual gases contain no unburned fuel and as little oxygen as possible.
The Hydrocarbons are broken up during the combustion into their constituent
parts, Hydrogen and Carbon. On complete combustion the Carbon
and Hydrogen burn to form Carbon Dioxide and Water vapour. When the
combustion is incomplete the exhaust gases also contain other undesirable
constituents.
3) Air/Fuel Mixture
The Specific Fuel Consumption of an engine is defined as the amount of fuel
consumed per given amount of energy produced in the combustion process.
The amount of fuel is quoted in grams or kilograms and the amount of energy
produced in kilowatt-hours or horsepower-hours.
Internal combustion engines can consume as little as 300 grams per kWh or
as much as 1,200 grams per kWh.
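Specific fuel consumption and thermal efficiency are two views of the same thing: since one kilowatt-hour is 3.6 megajoules, an engine consuming g grams of fuel per kWh with a fuel energy content of H MJ/kg has an efficiency of 3.6 / (g/1000 x H). The sketch below uses the 300 and 1,200 g/kWh figures quoted above together with an assumed gasoline energy content of about 43 MJ/kg.

```python
# Illustrative sketch: converting specific fuel consumption to efficiency.
# A fuel energy content of ~43 MJ/kg is an assumed typical gasoline value.

FUEL_ENERGY_MJ_PER_KG = 43.0
KWH_IN_MJ = 3.6

def thermal_efficiency(sfc_g_per_kwh):
    fuel_energy_per_kwh = (sfc_g_per_kwh / 1000.0) * FUEL_ENERGY_MJ_PER_KG
    return KWH_IN_MJ / fuel_energy_per_kwh

for sfc in (300, 1200):   # best and worst figures quoted above
    print(f"{sfc} g/kWh -> about {thermal_efficiency(sfc) * 100:.0f}% efficient")
```

Under these assumptions the 300 g/kWh engine converts roughly 28 percent of the fuel energy into work, while the 1,200 g/kWh case converts only about 7 percent.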
In general the Specific Fuel Consumption is at its greatest (least efficient)
when the engine is subjected to low loads, such as idle. This is because the
ratio between the idling losses (due to friction, leaks, and poor fuel
distribution) and the brake horsepower is the most unfavourable.
Most engines have the lowest Specific Fuel Consumption at three-quarter
load, which is at 75% of the maximum power output and at about 2,000
RPM.
The Specific Fuel Consumption of engine is for the most part dependent on
the mixture ratio of the Air/Fuel mixture. Consumption is at its lowest with
an Air/Fuel Ratio of approximately 15 pounds of Air to one pound of Fuel.
This means that 10,000 gallons of Air are needed to burn one gallon of
Gasoline.
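The figure of roughly 10,000 gallons of air per gallon of gasoline follows directly from the 15:1 mass ratio and the very different densities of the two fluids. The densities used in the sketch below are assumed round values for illustration.

```python
# Illustrative sketch: volume of air needed to burn one gallon of gasoline
# at an air/fuel mass ratio of about 15:1. Densities are assumed round values.

GASOLINE_DENSITY_KG_PER_L = 0.74   # roughly, at room temperature (assumed)
AIR_DENSITY_KG_PER_M3 = 1.2        # near sea level (assumed)
LITRES_PER_US_GALLON = 3.785
AIR_FUEL_MASS_RATIO = 15.0

fuel_mass = GASOLINE_DENSITY_KG_PER_L * LITRES_PER_US_GALLON   # kg in one gallon
air_mass = AIR_FUEL_MASS_RATIO * fuel_mass                     # kg of air required
air_volume_m3 = air_mass / AIR_DENSITY_KG_PER_M3
air_volume_gal = air_volume_m3 * 1000 / LITRES_PER_US_GALLON

print(f"air required: {air_volume_m3:.0f} cubic metres "
      f"(about {air_volume_gal:,.0f} US gallons)")
```

The result, on the order of 9,000 to 10,000 gallons of air, agrees with the figure quoted above.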
4) Mechanical Design
The mechanical design of the internal combustion engine has not changed
since its conception in 1876, mainly because it works. The problem is, that it
has been invented long before there was thorough understanding of
thermodynamics or of the chemical reactions during combustion process.
Furthermore, cheap and plentiful fuel -- gasoline -- was easily available, and
until a few years ago there was little concern with conservation or pollution.
As a result the internal combustion engine is an energy efficiency dinosaur
that refuses to die.
To give you some idea why that is so, let’s consider this:
Gasoline contains about 42 to 43.5 Mega-Joules of energy in one Kilogram
that is equal to about 18,060 to 18,705 Btu per pound.
The chart below shows where all of the energy available in Gasoline goes:
[Chart: Overall Power Loss In Engine]
Gear Introduction
A gear also known as "gear wheel" is a rotating machine part having cut
teeth, or cogs, which mesh with another toothed part in order to transmit
torque. Two or more gears working in tandem are called a transmission and
can produce a mechanical advantage through a gear ratio and thus may be
considered a simple machine. Geared devices can change the speed,
magnitude, and direction of a power source. The most common situation is
for a gear to mesh with another gear; however, a gear can also mesh with a
non-rotating toothed part, called a rack, thereby producing translation instead
of rotation.
The gears in a transmission are analogous to the wheels in a pulley. An
advantage of gears is that the teeth of a gear prevent slipping.
When two gears of unequal number of teeth are combined a mechanical
advantage is produced, with both the rotational speeds and the torques of the
two gears differing in a simple relationship.
There are tiny gears for devices like wrist watches and there are large gears
that some of you might have noticed in the movie Titanic. Gears form vital
elements of mechanisms in many machines such as vehicles, metal tooling
machine tools, rolling mills, hoisting and transmitting machinery, marine
engines, and the like. Toothed gears are used to change the speed, power, and
direction between an input and output shaft.
1) Gears are the most common means used for power transmission.
2) They can be applied to two shafts which are-
· Parallel
· Collinear
· Perpendicular & Intersecting
· Perpendicular and Non-intersecting
· Inclined at an arbitrary angle
Gear Parameters
1) Number of teeth
2) Form of teeth
3) Size of teeth
4) Face width of teeth
5) Style and dimension of gear blank
6) Design of the hub of the gear
7) Degree of precision required
8) Means of attaching the gear to the shaft
9) Means of locating the gear axially to the shaft
Types of Gears
1. Spur Gear
2. Helical Gear
3. Herringbone Gear
4. Bevel Gear
5. Worm Gear
6. Rack and Pinion
7. Internal and External Gear
8. Face Gear
9. Sprockets
1) Spur Gear-Parallel and coplanar shafts connected by gears are called spur
gears. The arrangement is called spur gearing.
Spur gears have straight teeth that are parallel to the axis of the wheel. Spur
gears are the most common type of gears. The advantages of spur gears are
their simplicity in design, economy of manufacture and maintenance, and
absence of end thrust. They impose only radial loads on the bearings.
Spur gears are known as slow speed gears. If noise is not a serious design
problem, spur gears can be used at almost any speed.

2) Helical Gear-Helical gears have their teeth inclined to the axis of the
shafts in the form of a helix, hence the name helical gears.
These gears are usually thought of as high speed gears. Helical gears can take
higher loads than similarly sized spur gears. The motion of helical gears is
smoother and quieter than the motion of spur gears.
Single helical gears impose both radial loads and thrust loads on their
bearings and so require the use of thrust bearings. The angle of the helix on both the gear and the pinion must be the same in magnitude but opposite in direction, i.e., a right-hand pinion meshes with a left-hand gear.

3) Herringbone Gear - Herringbone gears resemble two helical gears that have been placed side by side. They are often referred to as "double helical".
In the double helical gears arrangement, the thrusts are counter-balanced. In
such double helical gears there is no thrust loading on the bearings.

4) Bevel/Miter Gear - Intersecting and coplanar shafts connected by gears are called bevel gears. This arrangement is known as bevel gearing. Straight
bevel gears can be used on shafts at any angle, but right angle is the most
common. Bevel Gears have conical blanks. The teeth of straight bevel gears
are tapered in both thickness and tooth height.
Spiral Bevel gears: In these Spiral Bevel gears, the teeth are oblique. Spiral
Bevel gears are quieter and can take up more load as compared to straight
bevel gears.
Zero Bevel gear: Zero Bevel gears are similar to straight bevel gears, but
their teeth are curved lengthwise. These curved teeth of zero bevel gears are
arranged in a manner that the effective spiral angle is zero.

5) Worm Gear- Worm gears are used to transmit power at 90° and where
high reductions are required. The axes of worm gear shafts cross in space; the shafts lie in parallel planes and may be skewed at any angle between zero and a right angle. In worm gears, one gear has screw
threads. Due to this, worm gears are quiet, vibration free and give a smooth
output. Worm gears and worm gear shafts are almost invariably at right
angles.

6) Rack and Pinion- A rack is a toothed bar or rod that can be thought of as
a sector gear with an infinitely large radius of curvature. Torque can be
converted to linear force by meshing a rack with a pinion: the pinion turns;
the rack moves in a straight line. Such a mechanism is used in automobiles to
convert the rotation of the steering wheel into the left-to-right motion of the
tie rod(s). Racks also feature in the theory of gear geometry, where, for
instance, the tooth shape of an interchangeable set of gears may be specified
for the rack (infinite radius), and the tooth shapes for gears of particular
actual radii then derived from that. The rack and pinion gear type is employed
in a rack railway.

7) Internal & External Gear- An external gear is one with the teeth formed
on the outer surface of a cylinder or cone. Conversely, an internal gear is one
with the teeth formed on the inner surface of a cylinder or cone. For bevel
gears, an internal gear is one with the pitch angle exceeding 90 degrees.
Internal gears do not cause direction reversal.
8) Face Gears- Face gears transmit power at (usually) right angles in a
circular motion. Face gears are not very common in industrial application.

9) Sprockets - Sprockets are used to run chains or belts. They are typically used in conveyor systems.
Gears may also be classified according to the position of axis of shaft:
a. Parallel
1. Spur Gear
2. Helical Gear
3. Rack and Pinion
b. Intersecting
Bevel Gear
c. Non-intersecting and Non-parallel
Worm and worm gears
Terminology of Spur Gear

● Pitch surface: The surface of the imaginary rolling cylinder (cone, etc.) that the toothed gear may be considered to replace.
● Pitch circle: A right section of the pitch surface.
● Addendum circle: A circle bounding the ends of the teeth, in a
right section of the gear.
● Root (or dedendum) circle: The circle bounding the spaces
between the teeth, in a right section of the gear.
● Addendum: The radial distance between the pitch circle and the
addendum circle.
● Dedendum: The radial distance between the pitch circle and the
root circle.
● Clearance: The difference between the dedendum of one gear and
the addendum of the mating gear.
● Face of a tooth: That part of the tooth surface lying outside the
pitch surface.
● Flank of a tooth: The part of the tooth surface lying inside the
pitch surface.
● Circular thickness (also called the tooth thickness): The
thickness of the tooth measured on the pitch circle. It is the length of
an arc and not the length of a straight line.
● Tooth space: The distance between adjacent teeth measured on
the pitch circle.
● Backlash: The difference between the circular thickness of one gear and the tooth space of the mating gear.
Backlash = Space width – Tooth thickness
● Circular pitch p: The width of a tooth and a space, measured on
the pitch circle.
● Diametral pitch P: The number of teeth of a gear per inch of its
pitch diameter. A toothed gear must have an integral number of teeth.
The circular pitch, therefore, equals the pitch circumference divided
by the number of teeth. The diametral pitch is, by definition, the
number of teeth divided by the pitch diameter.
● Module m: Pitch diameter divided by number of teeth. The pitch diameter is usually specified in inches or millimetres; in the former case the module is the inverse of the diametral pitch. (These pitch relationships are illustrated in the short sketch after this list.)
● Fillet: The small radius that connects the profile of a tooth to the
root circle.
● Pinion: The smallest of any pair of mating gears. The largest of
the pair is called simply the gear.
● Velocity ratio: The ratio of the number of revolutions of the
driving (or input) gear to the number of revolutions of the driven (or
output) gear, in a unit of time.
● Pitch point: The point of tangency of the pitch circles of a pair of
mating gears.
● Common tangent: The line tangent to the pitch circle at the pitch
point.
● Base circle: An imaginary circle used in involute gearing to
generate the involutes that form the tooth profiles.
· Line of Action or Pressure Line: The line along which the driving tooth exerts force at the point of contact of the two teeth. This line is the common normal at the point of contact of the mating gears and is known as the line of action or the pressure line. The component of the force along the common tangent at the pitch point is responsible for the power transmission.
The component of the force perpendicular to the common tangent through the pitch point produces the required thrust.
· Pressure Angle or Angle of Obliquity (φ): The angle between pressure
line and the common tangent to the pitch circles is known as the pressure
angle or the angle of obliquity.
For more power transmission and lesser pressure on the bearings, the pressure angle must be kept small. Standard pressure angles are 20° and 25°. Gears with 14.5° pressure angles have become almost obsolete.
· Path of Contact or Contact Length: Locus of the point of contact between
two mating teeth from the beginning of engagement to the end is known as
the path of contact or the contact length. It is CD in the figure. Pitch point P
is always one point on the path of contact. It can be subdivided as follows:
Path of Approach: Portion of the path of contact from the beginning of
engagement to the pitch point, i.e. the length CP.
Path of Recess: Portion of the path of contact from the pitch point to the end
of engagement i.e. length PD.
· Arc of Contact: Locus of a point on the pitch circle from the beginning to the end of engagement of two mating gears is known as the arc of contact. In Fig. 3.22, APB or EPF is the arc of contact. It has also been divided into sub-portions.
Arc of Approach: It is the portion of the arc of contact from the beginning of
engagement to the pitch point, i.e. length AP or EP.
Arc of Recess: Portion of the arc of contact from the pitch point to the end of
engagement is the arc of recess i.e. length PB or PF.
· Angle of Action (δ): It is the angle turned through by a gear from the beginning of engagement to the end of engagement of a pair of teeth, i.e. the angle subtended by the arc of contact on the respective gear wheel. Similarly, the angle of approach (α) and the angle of recess (β) can be defined.
δ = α + β
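The circular pitch, diametral pitch and module defined above are related to one another in a fixed way; the short Python sketch below checks those relationships (the 30-tooth, 3-inch pitch diameter gear is an arbitrary illustrative example, not taken from the text).

import math

# Hedged sketch: the spur gear pitch relationships defined in the list above.
# Illustrative values: a 30-tooth gear with a 3-inch pitch diameter.
teeth = 30
pitch_diameter_in = 3.0

circular_pitch = math.pi * pitch_diameter_in / teeth    # width of tooth plus space, on the pitch circle
diametral_pitch = teeth / pitch_diameter_in             # teeth per inch of pitch diameter
module_in = pitch_diameter_in / teeth                   # module (here in inches)

print(f"Circular pitch  : {circular_pitch:.4f} in")           # ~0.3142 in
print(f"Diametral pitch : {diametral_pitch:.1f} teeth/in")    # 10.0
print(f"Module          : {module_in:.3f} in (the inverse of the diametral pitch)")
print(f"p x P = {circular_pitch * diametral_pitch:.4f} (always equals pi)")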
Uses of Gears & Advantages of Teeth on Gears
Use of Gears-
● To reverse the direction of rotation
● To increase or decrease the speed of rotation
● To move rotational motion to a different axis
● To keep the rotation of two axis synchronized

Advantages of Teeth-
● They prevent slippage between the gears - therefore axles
connected by gears are always synchronized exactly with one
another.
● They make it possible to determine exact gear ratios - you
just count the number of teeth in the two gears and divide. So if
one gear has 60 teeth and another has 20, the gear ratio when
these two gears are connected together is 3:1.
● They make it so that slight imperfections in the actual
diameter and circumference of two gears don't matter. The gear
ratio is controlled by the number of teeth even if the diameters
are a bit off.
Gear Ratio
The gear ratio of a gear train is the ratio of the angular velocity of the input
gear to the angular velocity of the output gear, also known as the speed ratio
of the gear train. The gear ratio can be computed directly from the numbers
of teeth of the various gears that engage to form the gear train.
In simple words, gear ratio defines the relationship between multiple gears.
Gear Ratio = Output gear # teeth / Input gear # teeth
For example, if our motor is attached to a gear with 20 teeth and this gear then drives a gear with 60 teeth attached to a wheel, our gear ratio is 60:20, or more accurately 3:1.
If you do not want to count a gear's teeth (or if they do not exist), gear ratios can also be determined by measuring the distance from the centre of each gear to the point of contact.
For example, if our motor is attached to a gear with a 1" diameter and this
gear is connected to a gear with a 2" diameter attached to a wheel,
From the centre to edge of our input gear is 0.5"
From the centre to edge of our output gear is 1"
Our ratio is 1/0.5 or more accurately 2:1
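The two ways of arriving at the ratio, counting teeth or measuring from the centre to the point of contact, can be written as a small Python sketch; the numbers below are the illustrative figures from the examples above.

def ratio_from_teeth(output_teeth, input_teeth):
    return output_teeth / input_teeth

def ratio_from_radii(output_radius, input_radius):
    return output_radius / input_radius

# Teeth example: a 20-tooth input gear driving a 60-tooth output gear.
print(ratio_from_teeth(60, 20))     # 3.0 -> a 3:1 ratio
# Radii example: 0.5" from centre to contact on the input, 1" on the output.
print(ratio_from_radii(1.0, 0.5))   # 2.0 -> a 2:1 ratio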
How Does a Gear Ratio Affect Speed?
The gear ratio tells us how fast one gear is rotating when compared to
another.
If our input gear (10 teeth) is rotating at 5 rpm, and it is connected to our output gear (50 teeth), our output gear will rotate at 1 rpm.
Why?
Our gear ratio is 50:10... Or 5:1
If our small gear rotates once, our large gear only rotates 1/5 of a turn. It takes 5 rotations of our small gear to equal 1 rotation of our large gear. Thus our large gear is rotating at 1/5 the speed = 1 rpm.
What if our gear ratio were 1:3?
In this case our input gear is 3x as large as our output gear.
If our input gear were rotating at 20 rpm, each rotation would result in 3 rotations of our output gear. Our output would be 60 rpm.
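The same arithmetic, written as a sketch in Python (tooth counts and speeds are those used in the two examples above):

def output_rpm(input_rpm, input_teeth, output_teeth):
    # The speed ratio is the inverse of the tooth-count ratio.
    return input_rpm * input_teeth / output_teeth

# 10-tooth input gear at 5 rpm driving a 50-tooth output gear (5:1 ratio).
print(output_rpm(input_rpm=5, input_teeth=10, output_teeth=50))    # 1.0 rpm
# 1:3 ratio: a 60-tooth input gear at 20 rpm driving a 20-tooth output gear.
print(output_rpm(input_rpm=20, input_teeth=60, output_teeth=20))   # 60.0 rpm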
How Does Gear Ratio Affect Torque?
First....What is torque?
Torque is a twisting force (it doesn't do any 'work' by itself; it is simply an applied force acting at a distance).
Work (or 'stuff') happens when torque is applied and movement occurs.
"Torque is a force that tends to rotate or turn things. You generate a torque
any time you apply a force using a wrench. Tightening the lug nuts on your
wheels is a good example. When you use a wrench, you apply a force to the
handle. This force creates a torque on the lug nut, which tends to turn the lug
nut.
English units of torque are pound-inches or pound-feet; the SI unit is the
Newton-meter. Notice that the torque units contain a distance and a force. To
calculate the torque, you just multiply the force by the distance from the
centre. In the case of lug nuts, if the wrench is a foot long, and you put 200
pounds of force on it, you are generating 200 pound-feet of torque. If you use
a two-foot wrench, you only need to put 100 pounds of force on it to generate
the same torque."
In summary:
Torque equals Force multiplied by Distance

How does gear ratio affect Torque?


Simply put, torque at work (such as at a wheel) is your motor's torque times
your gear ratio.
Motor Torque x gear ratio = torque at the wheel
Let's say we have a 10 rpm motor that is capable of 5 oz. of torque (we know this from our motor spec).
Let's say we have 2 gears. Our input gear (attached to our motor) has 10 teeth and our output gear has 50 teeth.
Our Gear ratio is 5:1
Motor Torque x gear ratio = torque at the wheel
5oz x 5:1 = 25 oz.
What if our gear ratio were 1:3?
5 oz x 1/3 ≈ 1.67 oz
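In code form, the same relationship, using the illustrative motor figures from above:

def wheel_torque(motor_torque, gear_ratio):
    # Ideal torque at the wheel: motor torque multiplied by the gear ratio.
    return motor_torque * gear_ratio

print(wheel_torque(motor_torque=5.0, gear_ratio=5.0))            # 25.0 oz with a 5:1 reduction
print(round(wheel_torque(motor_torque=5.0, gear_ratio=1/3), 2))  # 1.67 oz with a 1:3 overdrive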
Gear Train
A gear train is formed by mounting gears on a frame so that the teeth of the
gears engage. Gear teeth are designed to ensure the pitch circles of engaging
gears roll on each other without slipping; this provides a smooth transmission
of rotation from one gear to the next.
• A gear train is two or more gears working together by meshing their teeth and turning each other in a system to transmit power and motion
• It reduces speed and increases torque
• Electric motors are used with the gear systems to reduce the speed and
increase the torque

Types of gear train


- Simple gear train
- Compound gear train
- Epicyclic gear train
- Reverted gear train
Simple & Compound Gear Train
Simple Gear Train-
The simple gear train is used where there is a large distance to be covered
between the input shaft and the output shaft. Each gear in a simple gear train
is mounted on its own shaft.
When examining simple gear trains, it is necessary to decide whether the
output gear will turn faster, slower, or the same speed as the input gear. The
circumference (distance around the outside edge) of these two gears will
determine their relative speeds.
Suppose the input gear's circumference is larger than the output gear's
circumference. The output gear will turn faster than the input gear. On the
other hand, the input gear's circumference could be smaller than the output
gear's circumference. In this case the output gear would turn more slowly
than the input gear. If the input and output gears are exactly the same size,
they will turn at the same speed.
In many simple gear trains there are several gears between the input gear and
the output gear.
These middle gears are called idler gears. Idler gears do not affect the speed
of the output gear.

Compound Gear Train-


In a compound gear train at least one of the shafts in the train must hold two
gears.
Compound gear trains are used when large changes in speed or power output
are needed and there is only a small space between the input and output
shafts.
The number of shafts and direction of rotation of the input gear determine the
direction of rotation of the output gear in a compound gear train. The train in
Figure has two gears in between the input and output gears. These two gears
are on one shaft. They rotate in the same direction and act like one gear.
There are an odd number of gear shafts in this example. As a result, the input
gear and output gear rotate in the same direction.
Since two pairs of gears are involved, their ratios are “compounded”, or
multiplied together.

Example- The input gear, with 12 teeth, drives its mating gear on the counter-
shaft, which has 24 teeth. This is a ratio of 2 to 1.
This ratio of DRIVEN over DRIVER at the Input - 2 to 1 - is then multiplied
by the Output ratio, which has a DRIVEN to DRIVER ratio of 3 to 1.
This gives a gear ratio of 6 to 1 between the input and the output, resulting in
a speed reduction and a corresponding increase in torque.
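Because the stage ratios multiply, the overall ratio of a compound train can be computed by folding together the driven/driver ratio of each meshing pair. The sketch below uses the 12/24-tooth input stage from the example; the 3:1 output stage is represented by an assumed 15/45-tooth pair (any pair with that ratio would do).

from functools import reduce

def compound_ratio(stages):
    # stages: list of (driver_teeth, driven_teeth) pairs; the ratios multiply.
    return reduce(lambda acc, pair: acc * (pair[1] / pair[0]), stages, 1.0)

# Input stage 12:24 (2:1) compounded with an output stage of 3:1 (e.g. 15:45).
print(compound_ratio([(12, 24), (15, 45)]))   # 6.0 -> a 6:1 overall reduction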
Planetary Or Epicyclic Gear Train
Like a compound gear train, planetary trains are used when a large change in
speed or power is needed across a small distance. There are four different
ways that a planetary train can be hooked up.
A planetary gear train is a little more complex than other types of gear trains.
In a planetary train at least one of the gears must revolve around another gear
in the gear train. A planetary gear train is very much like our own solar
system, and that's how it gets its name. In the solar system the planets revolve
around the sun. Gravity holds them all together. In a planetary gear train the
sun gear is at the centre. A planet gear revolves around the sun gear. The
system is held together by the planet carrier. In some planetary trains, more
than one planet gear rotates around the sun gear. The system is then held
together by an arm connecting the planet gears in combination with a ring
gear.

The planetary gear set is the device that produces different gear ratios through
the same set of gears. Any planetary gear set has three main components:
· The sun gear
· The planet gears and the planet gears' carrier
· The ring gear
Each of these three components can be the input, the output or can be held
stationary. Choosing which piece plays which role determines the gear ratio
for the gearset.
These four combinations and the resulting speed and power outputs are listed
in Table
• They have higher gear ratios.
• They are popular for automatic transmissions in automobiles.
• They are also used in bicycles for controlling power of pedalling
automatically or manually.
• They are also used for power train between internal combustion engine and
an electric motor.
Reverted Gear Train
A reverted gear train is very similar to a compound gear train. They are both
used when there is only a small space between the input and output shafts and
large changes in speed or power are needed.

There are two major differences between compound and reverted gear trains.
First, the input and output shafts of a reverted train must be on the same axis
(in a straight line with one another). Second, the distance between the centres
of the two gears in each pair must be the same.
Mechanical Advantage
Gear teeth are designed so that the number of teeth on a gear is proportional
to the radius of its pitch circle, and so that the pitch circles of meshing gears
roll on each other without slipping. The speed ratio for a pair of meshing
gears can be computed from the ratio of the radii of the pitch circles and the ratio of the number of teeth on each gear.
Two meshing gears transmit rotational motion.
The velocity v of the point of contact on the pitch circles is the same on both gears, and is given by
v = rA ωA = rB ωB
where input gear A has radius rA and turns with angular velocity ωA, and meshes with output gear B of radius rB turning with angular velocity ωB. Therefore,
ωA / ωB = rB / rA = NB / NA
where NA is the number of teeth on the input gear and NB is the number of teeth on the output gear.
The mechanical advantage of a pair of meshing gears for which the input gear has NA teeth and the output gear has NB teeth is given by
MA = TB / TA = NB / NA
where TA is the input torque and TB is the output torque.
This shows that if the output gear GB has more teeth than the input gear GA,
then the gear train amplifies the input torque. And, if the output gear has
fewer teeth than the input gear, then the gear train reduces the input torque.
If the output gear of a gear train rotates more slowly than the input gear, then
the gear train is called a speed reducer. In this case, because the output gear
must have more teeth than the input gear, the speed reducer will amplify the
input torque.
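A brief sketch of these relationships (the tooth counts and the input torque are arbitrary illustrative values): the speed ratio and the ideal torque multiplication of a meshing pair both follow from NB/NA.

def speed_ratio(na, nb):
    # Input revolutions per output revolution for a meshing pair.
    return nb / na

def output_torque(input_torque, na, nb):
    # Ideal output torque, ignoring friction losses.
    return input_torque * nb / na

na, nb = 20, 60                        # driver and driven tooth counts
print(speed_ratio(na, nb))             # 3.0 -> the output turns three times slower
print(output_torque(10.0, na, nb))     # 30.0 -> the input torque is tripled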
Suspension System Introduction
Suspension is the term given to the system of springs, shock absorbers and
linkages that connects a vehicle to its wheels. Suspension systems serve a
dual purpose — contributing to the car's road holding/handling and braking
for good active safety and driving pleasure, and keeping vehicle occupants
comfortable and reasonably well isolated from road noise, bumps, and
vibrations, etc. These goals are generally at odds, so the tuning of
suspensions involves finding the right compromise. It is important for the
suspension to keep the road wheel in contact with the road surface as much as
possible, because all the forces acting on the vehicle do so through the
contact patches of the tires. The suspension also protects the vehicle itself and
any cargo or luggage from damage and wear.

Principle of Suspension System


1. To restrict road vibrations from being transmitted to the various
components of the vehicle
2. To protect the passengers from road shocks
3. To maintain the stability of the vehicle in pitching and rolling
Components of Suspension System
1. Control Arm: A movable lever that fastens the steering knuckle
to the frame of the vehicle.
2. Control Arm Bushing: This is a sleeve which allows the control arm to move up and down on the frame.
3. Strut Rod: Prevents the control arm from swinging forward and
backwards.
4. Ball Joints: A joint that allows the control arm and steering
knuckle to move up and down and sideways as well
5. Shock absorbers or Struts: prevent the suspension from bouncing after spring compression and extension
6. Stabilizer Bar: Limits body roll of the vehicle during cornering
7. Spring: Supports the weight of the vehicle
Common Problems of the Suspension System
Shocks and Struts: Shocks and struts are mounted at each wheel of a vehicle. They are subject to wear and tear just like other vehicle parts. The signs of worn shocks are excessive bouncing, hard leaning in corners and jerking under braking; if these appear, the shocks and struts are definitely calling for a change.
Ball joints: The wearing out of ball joints can get dangerous because if they
separate they cause you to lose control over the vehicle which could also be a
life risk.
Preventive Measures for Suspension System
The shocks and struts should be checked frequently for leakages.
Ball joints should be checked immediately if the motion of the car does not feel right.
Make sure to lubricate the ball joints of your car frequently.
Comparison between MacPherson Strut and Double Wishbone Suspension Systems
Two of the most popular suspensions systems for passenger cars today are
the double wishbone suspension system and the MacPherson strut suspension
system. While it is more usual to see the double wishbone system at the rear
end of the car, MacPherson’s solution normally finds its place at the front end
of the car. Both types of suspensions have their own sets of benefits and
limitations, thus let us look at both the advantages and disadvantages of both
systems, starting with the simpler of the two, the MacPherson struts.
MacPherson Struts - The struts are designed with more simplicity, and thus take up less space horizontally. As a result, passengers get more compartment space in the car. They also have low unsprung weight, an advantage that reduces the overall weight of the vehicle as well as increases the car's acceleration. Lower unsprung weight also makes the ride more comfortable. Another major advantage of this system is its ease of manufacturing as well as its low cost of manufacture compared to other independent suspension systems. Without an upper arm, the suspension system designers can more directly block vibration from reaching the passenger compartment.
Nevertheless, the MacPherson struts come with their own drawbacks. Being a long, vertical assembly, they cause difficulties if you lower your car, as they may interfere with the structure of the car. Thus they do not work well with racing cars, which are normally lowered. The MacPherson struts also have problems working with wider wheels, which increase the scrub radius, so extra effort is needed to steer the car. There is also the problem of small camber change with vertical movement of the suspension, which can mean the tires have less contact with the road during cornering. This can reduce the handling abilities of the vehicle.
Double Wishbone Suspension System - One of its primary benefits is the increase of negative camber as a result of the vertical suspension movement of the upper and lower arms. This translates to better stability properties for the car, as the tires on the outside maintain more contact with the road surface.
Handling performance also increases. The double suspension system is much
more rigid and stable than other suspension systems, thus you would realize
that your steering and wheel alignments are constant even when undergoing
high amounts of stress.

Moving on to the drawbacks of the double wishbone suspension system, it is normally dogged by cost issues as it is a more complicated design to produce. There are many parts in the system, and every time any of them malfunctions or fails, the whole system fails. Repair, modification and maintenance costs and complexities for double wishbone suspension systems are normally higher for these reasons. On the other hand, this suspension system proves to be flexible for design engineers: as the arms of the system can be fixed at different angles to the surface, parameters such as camber gain, roll centre height and swing arm length can be determined and designed flexibly to suit the vehicle and road conditions.
As we have seen, both suspension systems have their own plus points and
limitations. To conclude, double wishbones may perform better, but the
MacPherson struts would prove to be more affordable in the long run.
Transmission system Introduction
Transmission system in a car helps to transmit mechanical power from the car
engine to give kinetic energy to the wheels. It is an interconnected system of
gears, shafts, and other electrical gadgets that form a bridge to transfer power
and energy from the engine to the wheels. The complete setup of the system
helps to maintain the cruising speed of the car without any disturbance to the
car’s performance. The oldest variant of the transmission system in India is
the manual transmission that has undergone various modifications and
alterations to form the present day automatic transmission.
A transmission or gearbox provides speed and torque conversions from a
rotating power source to another device using gear ratios. The transmission
reduces the higher engine speed to the slower wheel speed, increasing torque
in the process. A transmission will have multiple gear ratios (or simply
"gears"), with the ability to switch between them as speed varies. This
switching may be done manually (by the operator), or automatically.
Directional (forward and reverse) control may also be provided.
In motor vehicle applications, the transmission will generally be connected to
the crankshaft of the engine. The output of the transmission is transmitted via
drive shaft to one or more differentials, which in turn drive the wheels.
Most modern gearboxes are used to increase torque while reducing the speed
of a prime mover output shaft (e.g. a motor crankshaft). This means that the
output shaft of a gearbox will rotate at slower rate than the input shaft, and
this reduction in speed will produce a mechanical advantage, causing an
increase in torque.
Need For a Transmission
The need for a transmission in an automobile is a consequence of the
characteristics of the internal combustion engine. Engines typically operate
over a range of 600 to about 7000 revolutions per minute (though this varies,
and is typically less for diesel engines), while the car's wheels rotate between
0 rpm and around 1800 rpm.
Furthermore, the engine provides its highest torque outputs approximately in
the middle of its range, while often the greatest torque is required when the
vehicle is moving from rest or traveling slowly. Therefore, a system that
transforms the engine's output so that it can supply high torque at low speeds,
but also operate at highway speeds with the motor still operating within its
limits, is required. Transmissions perform this transformation. The
transmission allows the gear ratio between the engine and the drive wheels to
change as the car speeds up and slows down.
Types of Transmission System
1. Manual Transmission
2. Automatic Transmission
3. Semi-automatic Transmission:-
a) Dual-clutch Transmission
b) Sequential Transmission
4. Continuously Variable Transmission
Manual Transmission
The first transmission invented was the manual transmission system. A
manual transmission, also known as a manual gearbox or standard
transmission (informally, a "manual", "stick shift", "straight shift", or
"straight drive") is a type of transmission used in motor vehicle applications.
It generally uses a driver-operated clutch, typically operated by a pedal or
lever, for regulating torque transfer from the internal combustion engine to
the transmission, and a gear-shift, either operated by hand (as in a car) or by
foot (as on a motorcycle).
In manual transmission the driver needs to disengage the clutch to disconnect
the power from the engine first, select the target gear, and engage the clutch
again to perform the gear change.
Components of Manual Transmission
The diagram below shows a very simple two-speed transmission in neutral:
· The green shaft comes from the engine through the clutch. The green
shaft and green gear are connected as a single unit. (The clutch is a
device that lets you connect and disconnect the engine and the
transmission. When you push in the clutch pedal, the engine and the
transmission are disconnected so the engine can run even if the car is
standing still. When you release the clutch pedal, the engine and the
green shaft are directly connected to one another. The green shaft and
gear turn at the same rpm as the engine.)
· The red shaft and gears are called the lay shaft. These are also
connected as a single piece, so all of the gears on the lay shaft and the lay
shaft itself spin as one unit. The green shaft and the red shaft are directly
connected through their meshed gears so that if the green shaft is
spinning, so is the red shaft. In this way, the lay shaft receives its power
directly from the engine whenever the clutch is engaged.
● The yellow shaft is a splined shaft that connects directly to the drive shaft through the differential to the drive wheels of the car. If the wheels are spinning, the yellow shaft is spinning.

· The blue gears ride on bearings, so they spin on the yellow shaft. If
the engine is off but the car is coasting, the yellow shaft can turn
inside the blue gears while the blue gears and the lay shaft are
motionless.
· The purpose of the collar is to connect one of the two blue gears to
the yellow drive shaft. The collar is connected, through the splines,
directly to the yellow shaft and spins with the yellow shaft. However,
the collar can slide left or right along the yellow shaft to engage either
of the blue gears. Teeth on the collar, called dog teeth, fit into holes
on the sides of the blue gears to engage them.
Working of Manual Transmission
When the gear selector fork is shifted into first gear, the collar engages the
blue gear on the right:

In this picture, the green shaft from the engine turns the lay shaft,
which turns the blue gear on the right. This gear transmits its energy
through the collar to drive the yellow drive shaft. Meanwhile, the blue
gear on the left is turning, but it is freewheeling on its bearing so it has
no effect on the yellow shaft.
Five-Speed Manual Transmission
The five-speed manual transmission is fairly standard on cars today.
Internally, it looks something like this:

There are three forks controlled by three rods that are engaged by the shift
lever. The shift lever has a rotation point in the middle. When you push the
knob forward to engage first gear, you are actually pulling the rod and fork
for first gear back.
When you move the shifter left and right you are engaging different forks
(and therefore different collars). Moving the knob forward and backward
moves the collar to engage one of the gears.
Idler Gear or Reverse Gear-
Idler gear is a small gear (purple) and is slid between red and blue gear. At all
times, the blue reverse gear in this diagram is turning in a direction opposite
to all of the other blue gears. The idler has teeth which mesh with both gears,
and thus it couples these gears together and reverses the direction of rotation
without changing the gear ratio.
Double Clutching
Double-clutching was common in older cars and is still common in some
modern Race Cars. In double-clutching, you first push the clutch pedal in
once to disengage the engine from the transmission. This takes the pressure
off the dog teeth so you can move the collar into neutral. Then you release
the clutch pedal and rev the engine to the "right speed." The right speed is the
rpm value at which the engine should be running in the next gear. The idea is
to get the blue gear of the next gear and the collar rotating at the same speed
so that the dog teeth can engage. Then you push the clutch pedal in again and
lock the collar into the new gear. At every gear change you have to press and
release the clutch twice, hence the name "double-clutching."
Synchronized Transmission
Manual transmissions in modern passenger cars use synchronizers to
eliminate the need for double-clutching. A synchro's purpose is to allow the
collar and the gear to make frictional contact before the dog teeth make
contact. This lets the collar and the gear synchronize their speeds before the
teeth need to engage as shown in figures.

A synchronized gearbox consists of a cone-shaped brass clutch engaged to the gear. The cone on the blue gear fits into the cone-shaped area in the collar, and friction between the cone and the collar synchronizes the collar and the gear. The outer portion of the collar then slides so that the dog teeth can engage the gear.
Automatic Transmission
The concept of an automatic transmission is new in India. An automatic transmission is a motor vehicle transmission that can automatically change gear ratios as the vehicle moves, freeing the driver from having to shift gears. In this transmission system the gears are never physically moved and are always engaged to the same gears. Automatic transmissions contain mechanical systems, hydraulic systems, electrical systems and computer controls, all working together in perfect harmony.
Main Components of an Automatic Transmission-
1. Planetary Gear Sets
2. Clutches and Bands
3. Torque Converter
4. Valve Body
Planetary Gear Sets
The planetary gear set is the device that produces different gear ratios through
the same set of gears. Any planetary gear set has three main components:
· The sun gear
· The planet gears and the planet gears' carrier
· The ring gear
Each of these three components can be the input, the output or can be held
stationary. Choosing which piece plays which role determines the gear
ratio for the gear set.
Compound Planetary gear set - The automatic transmission uses a set of
gears, called a compound planetary gear set that looks like a single planetary
gear set but actually behaves like two planetary gear sets combined. It has
one ring gear that is always the output of the transmission, but it has two sun
gears and two sets of planets.
The figure below shows a compound planetary gear set:

First Gear - In first gear, the smaller sun gear is driven clockwise by the turbine in the torque converter. The planet carrier tries to spin counter
clockwise, but is held still by the one-way clutch (which only allows rotation
in the clockwise direction) and the ring gear turns the output. The small gear
has 30 teeth and the ring gear has 72, so the gear ratio is:
Ratio = -R/S = - 72/30 = -2.4:1
So the rotation is negative 2.4:1, which means that the output direction would
be opposite the input direction. But the output direction is really the same as
the input direction -- this is where the trick with the two sets of planets comes
in. The first set of planets engages the second set, and the second set turns the
ring gear; this combination reverses the direction. You can see that this would
also cause the bigger sun gear to spin but because that clutch is released, the
bigger sun gear is free to spin in the opposite direction of the turbine (counter
clockwise).

Second Gear - The two planetary gear sets connected to each other with a
common planet carrier.
The first stage of the planet carrier actually uses the larger sun gear as the
ring gear. So the first stage consists of the sun (the smaller sun gear), the
planet carrier, and the ring (the larger sun gear).
The input is the small sun gear; the ring gear (large sun gear) is held
stationary by the band, and the output is the planet carrier. For this stage, with
the sun as input, planet carrier as output, and the ring gear fixed, the formula
is:
1 + R/S = 1 + 36/30 = 2.2:1
The planet carrier turns 2.2 times for each rotation of the small sun gear. At
the second stage, the planet carrier acts as the input for the second planetary
gear set, the larger sun gear (which is held stationary) acts as the sun, and the
ring gear acts as the output, so the gear ratio is:
1 / (1 + S/R) = 1 / (1 + 36/72) = 0.67:1
To get the overall reduction for second gear, we multiply the first stage by the
second, 2.2 x 0.67, to get a 1.47:1 reduction.

Third Gear- Most automatic transmissions have a 1:1 ratio in third gear. To
achieve a ratio of 1:1 engage the clutches that lock each of the sun gears to
the turbine. If both sun gears turn in the same direction, the planet gears
lockup because they can only spin in opposite directions. This locks the ring
gear to the planets and causes everything to spin as a unit, producing a 1:1
ratio.

Overdrive Gear- An overdrive has a faster output speed than input speed.
When overdrive is engaged, a shaft that is attached to the housing of the
torque converter (which is bolted to the flywheel of the engine) is connected
by clutch to the planet carrier. The small sun gear freewheels, and the larger
sun gear is held by the overdrive band. Nothing is connected to the turbine;
the only input comes from the converter housing. This time with the planet
carrier for input, the sun gear fixed and the ring gear for output.
Ratio = 1 / (1 + S/R) = 1 / (1 + 36/72) = 0.67:1
So the output spins once for every two-thirds of a rotation of the engine. If
the engine is turning at 2000 rotations per minute (RPM), the output speed is
3000 RPM. This allows cars to drive at freeway speed while the engine speed
stays nice and slow.

Reverse Gear- Reverse is very similar to first gear, except that instead of the
small sun gear being driven by the torque converter turbine, the bigger sun
gear is driven, and the small one freewheels in the opposite direction. The
planet carrier is held by the reverse band to the housing. So, according to our
equations from the last page, we have:
Ratio = -R/S = -72/36 = -2.0:1
So the ratio in reverse is a little less than first gear in this transmission.
Summary of Gear Ratios- Transmission having four forward gears and one
reverse gear.
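The ratios worked out above can be reproduced with a few lines of Python; the tooth counts (30- and 36-tooth sun gears, 72-tooth ring gear) are those used in the worked examples, and the formulas are the ones quoted for each gear.

# Hedged sketch: reproduces the gear ratios worked out above for the compound
# planetary set (30- and 36-tooth sun gears, 72-tooth ring gear).
S_SMALL, S_LARGE, RING = 30, 36, 72

first     = RING / S_SMALL                                        # 2.4:1 (direction restored by the second planet set)
second    = (1 + S_LARGE / S_SMALL) * (1 / (1 + S_LARGE / RING))  # two stages compounded
third     = 1.0                                                   # everything locked together
overdrive = 1 / (1 + S_LARGE / RING)                              # carrier in, large sun held
reverse   = RING / S_LARGE                                        # 2.0:1, direction reversed

for name, ratio in [("First", first), ("Second", second), ("Third", third),
                    ("Overdrive", overdrive), ("Reverse", reverse)]:
    print(f"{name:9s}: {ratio:.2f}:1")
# Prints roughly 2.40, 1.47, 1.00, 0.67 and 2.00, matching the worked examples.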
Clutches & Bands, Torque Converter, Valve Body
1. Clutches & Bands - These are friction devices that drive or lock planetary
gear sets members. They are used to cause the gear set to transfer power. In
other words, they are used to hold a particular member of the planetary gear
set motionless, while allowing another member to rotate, thereby transmitting
torque and producing gear reductions or overdrive ratios. These clutches are
actuated by the valve body, with their sequence controlled by the transmission's internal programming.
2. Torque Converter - It is a hydraulic device that connects the engine and
the transmission. It takes the place of a mechanical clutch, allowing the
transmission to stay 'in gear' and the engine to remain running whilst the
vehicle is stationary, without stalling. A torque converter is a fluid coupling
that also provides a variable amount of torque multiplication at low engine
speeds, increasing "breakaway" acceleration.

3. Valve Body - The valve body is the control centre of the automatic
transmission. It contains a maze of channels and passages that direct
hydraulic fluid to the numerous valves. Depending on which gear is selected,
the manual valve feeds hydraulic circuits that inhibit certain gears. For
instance, if the shift lever is in third gear, it feeds a circuit that prevents
overdrive from engaging. The valve body of the transmission contains
several shift valves. Shift valves supply hydraulic pressure to the clutches and
bands to engage each gear. The shift valve determines when to shift from
one gear to the next
Comparison between Manual & Automatic
Transmission-
Both the automatic transmission and a manual transmission accomplish
exactly the same thing, but they do it in totally different way.
Advantages of manual transmission over automatic transmission-
1. It is easier to build a strong manual transmission than an automatic one.
This is because a manual system has one clutch to operate, whereas an
automatic system has a number of clutch packs that function in harmony with
each other.
2. Manual transmissions normally do not require active cooling, because not
much power is dissipated as heat through the transmission.
3. Manual gearshifts are more fuel efficient as compared to their automatic
counterpart. The torque converter used to engage and disengage automatic gears may lose power and reduce acceleration as well as fuel economy.
4. Manual transmissions generally require less maintenance than automatic
transmissions. An automatic transmission is made up of several components
and a breakdown of even a single component can stall the car completely.
Advantages of automatic transmission over manual transmission-
1. The manual transmission locks and unlocks different sets of gears to the
output shaft to achieve the various gear ratios, while in an automatic
transmission; the same set of gears produces all of the different gear ratios.
The planetary gear set is the device that makes this possible in an automatic
transmission.
2. Automatic cars are easier to use, especially for the inexperienced car
driver. Manual system requires better driving skills, whereas with an
automatic, the clever system does it all on its own. This holds a greater
advantage for new and inexperienced drivers and also helps during congested
traffic situations where it becomes difficult to change gears every second.
3. Automatic transmission requires less attention and concentration from the
driver because the automatic gears start functioning as soon as the system
feels the need of a gear change. For car with manual gear shifts, the driver
has to be more alert while driving and better coordinated.
4. There is no clutch pedal and gear shift in an automatic transmission car.
Once you put the transmission into drive, everything else is automatic.
5. Automatic cars have better ability to control traction when approaching
steep hills or engine braking during descents. Manual gears are difficult to
operate on steep climbs.
Semi-Automatic Transmission
A semi-automatic transmission (also known as clutch less manual
transmission, automated manual transmission, flappy-paddle gearbox, or
paddle shift gearbox) is a system which uses electronic sensors, processors
and actuators to execute gearshifts on the command of the driver. This
removes the need for a clutch pedal which the driver otherwise needs to
depress before making a gear change, since the clutch itself is actuated by
electronic equipment which can synchronise the timing and torque required
to make gear shifts quick and smooth.

The two most common semi-automatic transmissions are-


1. Dual-clutch Transmission
2. Sequential Transmission
Dual Clutch Transmission
A dual clutch transmission, commonly abbreviated to DCT uses two clutches,
but has no clutch pedal. Sophisticated electronics and hydraulics control the
clutches, just as they do in a standard automatic transmission. In a DCT,
however, the clutches operate independently. One clutch controls the odd
gears (first, third, fifth and reverse), while the other controls the even gears
(second and fourth) as shown in figure. Using this arrangement, gears can be
changed without interrupting the power flow from the engine to the
transmission.

A two-part transmission shaft is at the heart of a DCT. Unlike a conventional manual gearbox, which houses all of its gears on a single input shaft, the
DCT splits up odd and even gears on two input shafts. The outer shaft is
hollowed out, making room for an inner shaft, which is nested inside. The
outer hollow shaft feeds second and fourth gears, while the inner shaft feeds
first, third and fifth.
Sequential Transmission
A sequential transmission is a type of transmission used on motorcycles and
high-performance cars for auto racing, where gears are selected in order, and
direct access to specific gears is not possible. Cars with SMTs have a manual
transmission with no clutch pedal; the clutch is automatically engaged.
In a race car, the motion of the shift lever is either "push forward" to upshift
or "pull backward" to downshift. If you are in a gear and you want to go to a
higher gear (e.g. from 2nd to 3rd), you push the shift lever forward. To go
from 3rd to 4th, you push the lever forward again. To go from 4th to 5th, you
press it forward again. It is the same motion every time.

To drop back down a gear, say from 5th to 4th, you pull the lever backward.
In European mass-produced automobiles, the shift lever moves forward and backward to shift into higher and lower gears, respectively.

In Formula One cars, there are actually two paddles on the sides of the
steering wheel, instead of a shift lever. The left paddle upshifts, while the
right paddle downshifts. On a motorcycle, you do the same thing, but instead
of moving a lever back and forth with your hand, you move a lever up and
down with your foot.
Advantages of using Sequential Transmission-
1. It provides a direct connection between engine and transmission, allowing
100 percent of the engine's power to be transmitted to the wheels.
2. The SMT provides more immediate response and ensures that the engine
RPMs do not drop when the driver lifts off the accelerator (as happens with
an automatic), giving her more precise control over power output.
3. It uses a solid coupling, as opposed to a fluid coupling (torque converter)
4. The sequential shift lever takes up less space in the race car cockpit.
5. The sequential shift is quicker.
6. The sequential shift is consistent. You do not have to think before gear
change.
7. The hand location is consistent; the shift lever is always in the same place
for the next shift.
8. The gearbox reduces the risk of blowing up engine due to mis-shift.
Continuously Variable Transmissions
CVT is an “infinite speed” transmission which can change steplessly through
an infinite number of effective gear ratios between maximum and minimum
values. Unlike traditional automatic transmissions, continuously variable
transmissions don't have a gearbox with a set number of gears, which means
they don't have interlocking toothed wheels. The word gear in CVT refers to
a ratio of engine shaft speed to driveshaft speed. Moreover, CVTs change this
ratio without using a set of planetary gears.
Different types of CVTs –
1. Pulley-based CVTs
2. Toroidal CVTs
3. Hydrostatic CVTs
The most common CVT design uses a segmented metal V-belt running
between two pulleys. Each pulley consists of a pair of cones that can be
moved close together or further apart to adjust the diameter at which the belt
operates. The pulley ratios are electronically controlled to select the best
overall drive ratio based on throttle position, vehicle speed and engine speed.
Future Developments in Automotive Transmission
Systems
1. The CVT will gradually replace the conventional automatic transmission
due to its high fuel efficiency and smooth gear shift.
2. The technology of semi-automatic transmission systems will also be
improved to perform smooth gear shift and extend the cars' lifetime, without
losing fast acceleration and fuel efficiency.
3. The torque converter with fluid coupling may be improved, or may no
longer be used for cars in the future due to its low-efficiency power transfer.
4. Auto Shift Manual Transmission – This transmission system combines
the advantages of an automatic transmission with the flexibility and low fuel
consumption of a manual transmission. This is an advanced Shift-By-Wire
electronic control system technology. Shift-by-wire totally eliminates
mechanical lever shifting, keeping both of the driver's hands on the wheel.
The clutch is used only for starting and stopping. Once the vehicle is in
motion, Auto Shift operates like an automatic transmission, with the
efficiency of a manual transmission.
5. Adaptive transmission control - ATC has also been invented by using a
computer to recognize and memorize different drivers' styles, and
determining the best shifting timing for different drivers.
6. A transmission system is needed for a vehicle because of the internal combustion engine's property of producing high power at high speed but low power at low speed. If someday an engine with different properties is invented, the transmission system may no longer be necessary, and the vehicle could still reach its maximum speed in a couple of seconds.
TERMS CONNECTED WITH I. C.
ENGINES:
Bore: The inside diameter of the cylinder is called bore.
Stroke: When the piston reciprocates in the cylinder it has limiting upper and lower positions beyond which it cannot move. The linear distance between these two limiting positions of the piston is called the stroke.
Top Dead Centre (TDC): The top most position of the piston towards top
end side is called top dead center. But, in case of Horizontal Engines it is
known as inner dead center.
Bottom Dead Center (BDC): The lowest position of the piston towards
crank end side is called Bottom dead center. But, in case of Horizontal
Engines it is known as outer dead center.
Clearance Volume: The volume contained in the cylinder above the top of
the piston when piston is at the top is called Clearance Volume.
When L = D - Called a Square Engine
When L > D - Called an Under Square Engine
Swept Volume: The volume swept by the piston between top and bottom dead center is called swept volume / piston displacement.
Compression Ratio: It is the ratio of total cylinder volume to clearance volume.
Compression ratio (r) = (Vs + Vc) / Vc
Piston Speed: The average (mean) speed of the piston is called piston speed, given by 2LN, where L = stroke of the piston and N = RPM of the engine; with L in metres this gives the speed in metres per minute (divide by 60 for m/s).
The average piston speed of engines is 5 to 15 m/sec. (This speed is kept in this range because of material strength and noise considerations.)
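A brief sketch tying these definitions together; the bore, stroke, clearance volume and speed below are illustrative values only, not those of any particular engine.

import math

# Swept volume, compression ratio and mean piston speed from the definitions above.
bore_mm, stroke_mm = 80.0, 90.0     # illustrative dimensions
clearance_volume_cc = 50.0
engine_rpm = 3000

swept_volume_cc = math.pi / 4 * (bore_mm / 10) ** 2 * (stroke_mm / 10)            # Vs in cc
compression_ratio = (swept_volume_cc + clearance_volume_cc) / clearance_volume_cc
mean_piston_speed = 2 * (stroke_mm / 1000) * engine_rpm / 60                      # 2LN/60, in m/s

print(f"Swept volume      : {swept_volume_cc:.0f} cc")        # ~452 cc
print(f"Compression ratio : {compression_ratio:.1f} : 1")     # ~10.0 : 1
print(f"Mean piston speed : {mean_piston_speed:.1f} m/s")     # 9.0 m/s, within the 5-15 m/s range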
Direct Injection: Fuel injected directly into the main combustion chamber of an engine.
Indirect Injection: Fuel injected into a secondary combustion chamber of an engine.
Smart Engine: The Engines made with computer controls that regulate
operating characteristics such as air-fuel ratio, ignition timings, valve timings,
intake tuning and exhaust control.
Air fuel Ratio: It is ratio of the mass of Air to mass of Fuel.

FUEL SUPPLY SYSTEM IN SPARK IGNITION ENGINE
The fuel supply system of spark ignition engine consists of
1. Fuel tank
2. Sediment bowl
3. Fuel lift pump
4. Carburetor
5. Fuel pipes
In some spark ignition engines the fuel tank is placed above the level of the
carburetor. The fuel flows from fuel tank to the carburetor under the action of
gravity. There are one or two filters between fuel tank and carburetor. A
transparent sediment bowl is also provided to hold the dust and dirt of the
fuel. If the tank is below the level of carburetor, a lift pump is provided in
between the tank and the carburetor for forcing fuel from tank to the
carburetor of the engine. The fuel comes from fuel tank to sediment bowl and
then to the lift pump. From there the fuel goes to the carburetor through
suitable pipes. From carburetor the fuel goes to the engine cylinder through
inlet manifold of the engine.
FUEL SUPPLY SYSTEM IN DIESEL
ENGINE
Fuel supply system of diesel engine consists of the following components
1. Fuel tank
2. Fuel lift pump or fuel feed pump
3. Fuel filter
4. Fuel injection pump
5. High pressure pipe
6. Over flow valve
7. Fuel injector
Fuel is drawn from fuel tank by fuel feed pump and forced to injection pump
through fuel filter. The injection pump supplies high pressure fuel to injection
nozzles through delivery valves and high pressure pipes. Fuel is injected into
the combustion chamber through injection nozzles. The fuel that leaks out
from the injection nozzles passes out through leakage pipe and returns to the
fuel tank through the over flow pipe. Over flow valve installed at the top of
the filter keeps the feed pressure under the specified limit. If the feed pressure exceeds the specified limit, the over flow valve opens and the excess fuel returns to the fuel tank through the over flow pipe.
Fuel tank
It is a storage tank for diesel. A wire gauze strainer is provided under the cap to prevent foreign particles from entering the tank.
Fuel lift pump
It transfers fuel from fuel tank to inlet gallery of fuel injection pump
Preliminary filter (sediment bowl assembly)
This filter is mostly fitted on the fuel lift pump. It prevents foreign materials from reaching inside the fuel line. It consists of a glass cap with a gasket.
Fuel filter
Mostly two stage filters are used in diesel engines
1. Primary filter
2. Secondary filter
Primary filter removes coarse materials, water and dust. Secondary filter
removes fine dust particles.
Fuel Injection Pump
It is a high pressure pump which supplies fuel to the injectors according to
the firing order of the engine. It is used to create pressure varying from 120
kg/cm2 to 300 kg/cm2. It supplies the required quantity of fuel to each
cylinder at the appropriate time.
Air Venting of Fuel System
When air has entered the fuel lines or the suction chamber of the injection pump, venting should be done properly. Air is removed by the priming pump through the bleeding holes of the injection pump.
Fuel Injector
It is the component which delivers finely atomized fuel under high pressure
to combustion chamber of the engine. Modern tractor engines use fuel
injectors which have multiple holes. Main parts of injectors are nozzle body,
and needle valve. The needle valve is pressed against a conical seat in the
nozzle body by a spring. The injection pressure is adjusted by adjusting a
screw. In operation, fuel from injection pump enters the nozzle body through
high pressure pipe.
When fuel pressure becomes so high that it exceeds the set spring pressure,
the needle valve lifts off its seat. The fuel is forced out of the nozzle spray holes into the combustion chamber.
Main types of modern fuel injection systems:
1. Common-rail injection system.
2. Individual pump injection system.
3. Distributor system.
Carburetor
The earliest form of fuel supply mechanism for the modern automobile is the carburetor. The primary function of the carburetor is to provide the air-fuel
mixture to the engine in the required proportion. The goal of a carburetor is to
mix just the right amount of gasoline with air so that the engine runs
properly. If there is not enough fuel mixed with the air, the engine "runs lean"
and either will not run or can potentially be damaged. If there is too much
fuel mixed with the air, the engine "runs rich" and either will not run (it floods),
runs very smoky, runs poorly (bogs down, stalls easily), or at the very least
wastes fuel. The carburetor is in charge of getting the mixture just right.
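The lean/rich idea above can be made concrete with the commonly quoted
stoichiometric air-fuel ratio for gasoline of about 14.7:1 by mass. The Python
sketch below classifies a mixture from measured air and fuel masses; the 5%
thresholds and the example masses are illustrative assumptions, not taken from
this text.

STOICH_AFR = 14.7  # approximate stoichiometric air-fuel ratio for gasoline

def mixture_state(air_mass_g, fuel_mass_g):
    """Classify a mixture as lean, rich, or near stoichiometric."""
    afr = air_mass_g / fuel_mass_g
    if afr > STOICH_AFR * 1.05:
        return afr, "lean"
    if afr < STOICH_AFR * 0.95:
        return afr, "rich"
    return afr, "near stoichiometric"

for air, fuel in ((150, 10), (147, 10), (120, 10)):
    afr, state = mixture_state(air, fuel)
    print(f"AFR = {afr:.1f}:1 -> {state}")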
Carburetor Basics
A carburetor basically consists of an open pipe, a "barrel" through which the
air passes into the inlet manifold of the engine. The pipe is in the form of a
venturi: it narrows in section and then widens again, causing the airflow to
increase in speed in the narrowest part.
Below the venturi is a butterfly valve called the throttle valve — a rotating
disc that can be turned end-on to the airflow, so as to hardly restrict the flow
at all, or can be rotated so that it (almost) completely blocks the flow of air.
This valve controls the flow of air through the carburetor throat and thus the
quantity of air/fuel mixture the system will deliver, thereby regulating engine
power and speed. The throttle is connected, usually through a cable or a
mechanical linkage of rods and joints or rarely by pneumatic link, to the
accelerator pedal on a car or the equivalent control on other vehicles or
equipment. Fuel is introduced into the air stream through small holes at the
narrowest part of the venturi and at other places where pressure will be
lowered when not running on full throttle. Fuel flow is adjusted by means of
precisely-calibrated orifices, referred to as jets, in the fuel path.
Parts of carburetor
· A carburetor is essentially a tube.
· There is an adjustable plate across the tube called the throttle plate
that controls how much air can flow through the tube.
· At some point in the tube there is a narrowing, called the venturi, and in
this narrowing a vacuum is created.
· In this narrowing there is a hole, called a jet, that lets the vacuum
draw in fuel.
How carburetors work?
All carburetors work on the Bernoulli principle, which states that as the
velocity of an ideal gas increases, its pressure drops. Within a certain range
of velocity and pressure, the change in pressure varies closely with the change
in velocity; however, this simple relationship only holds within a certain
range. Carburetors work because as air is pulled into the carburetor throat
(the venturi), it has to accelerate from rest to some speed.
How fast depends upon the air flow demanded by the engine speed and the
throttle butterfly setting. According to Bernoulli, this air flowing through the
throat of the carb will be at a pressure less than atmospheric pressure, and
related to the velocity (and hence to how much air is being fed into the
engine).
If a small port is drilled into the carb throat in this low pressure region, there
will be a pressure difference between the throat side of the port, and the side
that is exposed to the atmosphere. If a reservoir of gasoline, the float bowl, is
between the inside of the port, and the atmosphere, the pressure difference
will pull gasoline through the port, into the air stream. At this point, the port
gets the name of a jet in the context of a carburetor. The more air that the
engine pulls through the carburetor throat, the greater the pressure drop
across the jet, and the more fuel that gets pulled in.
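A minimal Python sketch of the venturi relationship described above, using
Bernoulli's equation for incompressible flow together with continuity
(v2 = v1·A1/A2); it shows that more air flow produces a larger pressure drop at
the throat. The bore sizes and air flows are illustrative assumptions, not
values from this text.

import math

RHO_AIR = 1.2  # kg/m^3, approximate air density at ambient conditions

def venturi_pressure_drop(volume_flow_m3s, d_inlet_m, d_throat_m):
    """Return the static pressure drop (Pa) between inlet and throat."""
    a_inlet = math.pi * d_inlet_m**2 / 4
    a_throat = math.pi * d_throat_m**2 / 4
    v_inlet = volume_flow_m3s / a_inlet
    v_throat = volume_flow_m3s / a_throat
    return 0.5 * RHO_AIR * (v_throat**2 - v_inlet**2)

# Example: a 32 mm throat with a 40 mm inlet, at two different air flows.
for flow in (0.005, 0.02):  # m^3/s
    dp = venturi_pressure_drop(flow, 0.040, 0.032)
    print(f"flow {flow*1000:.0f} L/s -> pressure drop of about {dp:.0f} Pa")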
As noted above, within a range of airflow in the throat, and fuel flow in the
jet, the ratio of fuel to air that flows will stay constant. And if the jet is the
right size, that ratio will be what the engine wants for best performance. A
venturi/jet arrangement can only meter fuel accurately over a certain range of
flow rates and pressures. As flow rates increase, either the venturi or the jet,
or both, will begin to choke, that is they reach a point where the flow rate will
not increase, no matter how hard the engine tries to pull air through. At the
other extreme, when the velocity of the air in the venturi is very low-like at
idle or during startup, the pressure drop across the jet becomes vanishingly
small. It is this extreme that concerns us with respect to starting, idle and
low-speed throttle response.
At idle, the pressure drop in a 32 mm venturi is so small that essentially no
fuel will be pulled through the main jets. But the pressure difference across
the throttle butterfly (which is almost completely closed) can be as high as
25+ mm Hg. Carb designers take advantage of this situation by placing an
extra jet, the "idle jet", just downstream of the throttle butterfly.
Because of the very high pressure difference at idle, and the very small
amount of fuel required, this jet is tiny. When the throttle is open any
significant amount, the amount of fuel that flows through this jet is small, and
for all intents and purposes, constant. So its effect on the midrange and up
mixture is easily compensated for.
During startup, the amount of air flowing through the carburetor is smaller
still. At least till the engine begins to run on its own. But when it is being
turned by the starter or the kicker, rpm is sometimes in the sub-100 range. So
the pressure difference across the jets is again in the insignificant range. If the
engine is cold, it wants the mixture extra-rich to compensate for the fact that
a lot of the fuel that does get mixed with air in the carb precipitates out on the
cold walls of the intake port. Bing carburetors, and most bike carburetors, use
enrich circuits. All this really is another port or jet from the float bowl to just
downstream of the throttle butterfly.
Except that the fuel flow to this jet is regulated by a valve that is built into the
carb body. At startup, when the lever is in the full on position, the valve is
wide open, and the fuel supply to the cold start jet is more or less unlimited.
In this condition, the amount of fuel that flows through the cold start jet is
regulated just like the idle jet is. When the throttle is closed, the pressure drop
across the jet is high, and lots of fuel flows, resulting in a very rich mixture,
just perfect for ignition of a cold motor. If the throttle butterfly is opened, the
pressure difference is less, and less fuel flows. This is why R bikes like no
throttle at all until the engine catches. However, the mixture quickly gets too
rich, and opening the throttle will make things better. Just like the idle jet,
this cold start jet is small enough that even when the circuit is wide open, the
amount of fuel that can flow is small enough that at large throttle openings, it
has little impact on the mixture. This is why you can ride off with the starting
circuit on full, and the bike will run pretty well-until you close the throttle for
the first time, and the mixture gets so rich the engine stalls.
The valve that controls fuel supply to the cold start jet allows the rider to cut
the fuel available through that jet down from full during startup, to none or
almost none once the engine is warm.
In most cases, at the intermediate setting, fuel to the cold start jet is cut to the
point where the engine will still idle when warm, although very poorly since
it is way too rich.
True "chokes" are different. But very aptly named. A choke is simply a plate
that can be maneuvered so that it completely (or very nearly) blocks off the
carburetor throat at its entrance ("choking" the carb, just like a killer to a
victim in a bad movie). That means that the main, idle, intermediate, etc., jets
are all downstream of the choke plate. Then, when the engine tries to pull air
through the carb, it can't. The only place that anything at all can come in to
the carb venturi is through the various jets. Since there is little or no air
coming in, this results in an extremely rich mixture. The effect is maximized
if the throttle butterfly (which is downstream of the big main jets and the
choke plate) is wide open, not impeding things in any way. If the throttle
butterfly is completely closed, the engine does not really know that the choke
is there-all the engine "sees" is a closed throttle, so there is little enrichening
effect. The engine will pull as much fuel as possible through the idle jet, but
that is so small it won't have much effect. So a carb with a choke behaves in
exactly the opposite manner as one with an enrichener. During the cranking
phase, it is best to have the throttle pegged at WFO so that the most fuel gets
pulled in, resulting in a nice rich mixture. But as soon as the motor starts, you
want to close the throttle to cut down the effect of the choke. Even that is not
enough, and most chokes are designed so that as soon as there is any
significant airflow, they automatically open part way. Otherwise the engine
would flood. Even "manual" chokes have this feature most of the time.
MPFI
Multi-point fuel injection injects fuel into the intake port just upstream of the
cylinder's intake valve, rather than at a central point within an intake
manifold. MPFI (or just MPI) systems can be sequential, in which injection is
timed to coincide with each cylinder's intake stroke, batched, in which fuel is
injected to the cylinders in groups, without precise synchronization to any
particular cylinder's intake stroke, or Simultaneous, in which fuel is injected
at the same time to all the cylinders. Many modern EFI systems utilize
sequential
MPFI; however, it is beginning to be replaced by direct injection systems in
newer gasoline engines. The multi-point injector is an electromechanical
device which is fed by a 12 volt supply from either the fuel injection relay or
from the Electronic Control Module (ECM). The voltage in both cases will
only be present when the engine is cranking or running, due to both voltage
supplies being controlled by a tachometric relay. The injector is supplied with
fuel from a common fuel rail.
The length of time that the injector is held open for will depend on the input
signals seen by the engine management ECM from its various engine sensors.
These input signals will include:
• The resistance of the coolant temperature sensor.
• The output voltage from the airflow meter (when fitted).
• The resistance of the air temperature sensor.
• The signal from the Manifold Absolute Pressure (MAP) sensor (when
fitted).
• The position of the throttle switch / potentiometer.
The held-open time, or injector duration, will vary to compensate for cold
engine starting and warm-up periods, i.e. a long duration that decreases
as the engine warms to operating temperature. Duration will also increase
under acceleration and decrease under light load conditions.
Depending on the system encountered the injectors can fire either once or
twice per cycle. The injectors are wired in parallel with simultaneous
injection and will all fire together at the same time. Sequential injection, as
with simultaneous, has a common supply to each injector but unlike
simultaneous has a separate earth path for each injector. This individual firing
allows the system, when used in conjunction with a phase sensor, to deliver
the fuel when the inlet valve is open and the incoming air helps to atomize
the fuel. It is also common for injectors to be fired in 'banks' on 'V'
configured engines, with fuel delivered to each bank alternately.
Because of the frequency of the firing of the injectors, it is expected that a
sequential injector will have twice the duration, or opening time, of a
simultaneous pulse. This will however be determined by the injector flow
rate.
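The duration logic described above can be sketched as a base pulse width scaled
by corrections for engine temperature and load. The Python sketch below is an
illustration only; the correction factors, base pulse width and example sensor
values are assumptions, not an actual manufacturer calibration.

def injector_pulse_width_ms(base_ms, coolant_temp_c, load_fraction):
    """Return an illustrative injector opening time in milliseconds."""
    # Longer duration when cold, tapering off as the engine warms up.
    cold_enrichment = 1.0 + max(0.0, 80.0 - coolant_temp_c) / 100.0
    # Longer duration under heavier load (wider throttle / higher MAP).
    load_correction = 0.6 + 0.8 * load_fraction
    return base_ms * cold_enrichment * load_correction

for temp, load in ((20, 0.3), (80, 0.3), (80, 0.9)):
    pw = injector_pulse_width_ms(2.5, temp, load)
    print(f"coolant {temp:>2} C, load {load:.1f} -> pulse width {pw:.2f} ms")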
COOLING SYSTEM
Fuel is burnt inside the cylinder of an internal combustion engine to produce
power. The temperature produced on the power stroke of an engine can be as
high as 1600 ºC, which is greater than the melting point of engine parts.
The best operating temperature of IC engines lies between 140 ºF and 200 ºF,
and hence cooling of an IC engine is highly essential. It is estimated that about
40% of total heat produced is passed to atmosphere via exhaust, 30% is
removed by cooling and about 30% is used to produce power.
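A minimal Python sketch of the approximate heat balance quoted above (about
40% to exhaust, 30% to cooling, 30% to useful power). The total heat release
used in the example is an assumed figure for illustration.

def heat_balance(total_heat_kw):
    """Split the heat released in the cylinder per the approximate shares above."""
    return {
        "exhaust": 0.40 * total_heat_kw,
        "cooling system": 0.30 * total_heat_kw,
        "useful power": 0.30 * total_heat_kw,
    }

for path, kw in heat_balance(100.0).items():
    print(f"{path:>14}: {kw:.0f} kW")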
Purpose of Cooling
1. To maintain optimum temperature of engine for efficient operation under
all conditions.
2. To dissipate surplus heat for protection of engine components like
cylinder, cylinder head, piston, piston rings, and valves
3. To maintain the lubricating property of oil inside engine.
Methods of Cooling
1. Air cooled system
2. Water cooled system
AIR COOLING SYSTEM
Air cooled engines are those engines in which heat is conducted from the
working components of the engine to the atmosphere directly.
Principle of air cooling - The cylinder of an air cooled engine has fins to
increase the area of contact of air for speedy cooling. The cylinder is
normally enclosed in a sheet metal casing called cowling. The flywheel has
blades projecting from its face, so that it acts like a fan, drawing air through a
hole in the cowling and directing it around the finned cylinder. For
maintenance of an air cooled system, the passage of air is kept clean by removing
grass etc. with a stiff brush or compressed air.
Advantages of Air Cooled Engine:
1. It is simple in design and construction
2. Water jackets, radiators, water pump, thermostat, pipes, hoses are not
required.
3. It is more compact
4. Lighter in weight
Disadvantages:
1. There is uneven cooling of engine parts
2. Engine temperature is generally high during the working period.
(Figure: Air cooled engine)
WATER COOLING SYSTEM
Engines using water as cooling medium are called water cooled engines.
Water is circulated round the cylinders to absorb heat from the cylinder walls.
The heated water is conducted through a radiator to remove the heat and cool
the water.
Methods of Water Cooling
1. Open jacket or hopper method
2. Thermo siphon method
3. Forced circulation method
1. Open Jacket Method
There is a hopper or jacket containing water which surrounds the engine
cylinder. So long as the hopper contains water the engine continues to operate
satisfactorily. As soon as the water starts boiling it is replaced by cold water.
The hopper is large enough to run for several hours without refilling. A drain
plug is provided in a low accessible position for draining water as and when
required.
2. Thermo Siphon Method
It consists of a radiator, water jacket, fan, temperature gauge and hose
connections. The system is based on the principle that heated water which
surrounds the cylinder becomes lighter and it rises upwards in liquid column.
Hot water goes to the radiator where it passes through tubes surrounded by
air. Circulation of water takes place due to the reason that water jacket and
radiator are connected at both sides i.e. at top and bottom. A fan is driven
with the help of a V belt to suck air through tubes of the radiator unit, cooling
radiator water. The disadvantage of the system is that circulation of water is
greatly reduced by accumulation of scale or foreign matter in the passage and
consequently causing overheating of the engine.
3. Forced Circulation System
In this method, a water pump is used to force water from radiator to the water
jacket of the engine. After circulating the entire run of water jacket, water
comes back to the radiator where it loses its heat by the process of radiation.
To maintain the correct engine temperature, a thermostat valve is placed at
the outer end of the cylinder head. Cooling liquid is by-passed through the water
jacket of the engine until the engine attains the desired temperature. Then the
thermostat valve opens and the by-pass is closed, allowing the water to go to
the radiator.
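The thermostat behaviour described above is essentially a temperature-based
switch between the by-pass and the radiator. The Python sketch below
illustrates the idea; the 85 ºC opening temperature and the example engine
temperatures are illustrative assumptions, not values from this text.

def coolant_route(engine_temp_c, thermostat_opens_c=85.0):
    """Return where the coolant is sent for a given engine temperature."""
    if engine_temp_c < thermostat_opens_c:
        return "by-pass (thermostat closed, engine warming up)"
    return "radiator (thermostat open)"

for t in (40, 70, 90):
    print(f"engine at {t} C -> coolant to {coolant_route(t)}")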
The system consists of the following components:
1. Water pump
2. Radiator
3. Fan
4. Fan-belt
5. Water jacket
6. Thermostat valve
7. Temperature gauge
8. Hose pipe
Water Pump:
It is a centrifugal pump. It draws the cooled water from bottom of the radiator
and delivers it to the water jackets surrounding the engine.
Thermostat Valve:
It is a control valve used in cooling system to control the flow of water when
activated by a temperature signal.
Fan
The fan is mounted on the water pump shaft. It is driven by the same belt that
drives the pump and dynamo. The purpose of the fan is to provide a strong
draft of air through the radiator to improve engine cooling.
Water jacket
Water jackets are passages cored out around the engine cylinder as well as
around the valve openings.
(Figure: Forced circulation cooling system - water cooled engine)
LUBRICATION
An IC engine is made of many moving parts. Due to the continuous movement of two
metallic surfaces over each other, there is wearing of moving parts,
generation of heat and loss of power in engine. Lubrication of moving parts is
essential to prevent all these harmful effects.
Purpose of lubrication –
1. Reducing frictional effect
2. Cooling effect
3. Sealing effect
4. Cleaning effect
Types of Lubricants:
Lubricants are obtained from animal fats, vegetable sources and minerals. Vegetable
lubricants are obtained from seeds, fruits and plants. Cotton seed oil, olive
oil, linseed oil and castor oil are used as lubricants. Mineral lubricants are most
popular for engines and machines. They are obtained from crude petroleum found
in nature. Petroleum lubricants are less expensive and suitable for internal
combustion engines.
Engine Lubrication System
The lubricating system of an engine is an arrangement of mechanisms which
maintains the supply of lubricating oil to the rubbing surfaces of an engine at
correct pressure and temperature.
The parts which require lubrication are
1. Cylinder walls and piston
2. Piston pin
3. Crankshaft and connecting rod bearings
4. Camshaft bearings
5. Valve operating mechanism
6. Cooling fan
7. Water pump and
8. Ignition mechanism
Types of Lubricating Systems
1. Splash system
2. Forced feed system
Splash Lubrication
Splash lubrication is a method of applying lubricant, a compound that
reduces friction, to parts of a machine. In the splash lubrication of an engine,
dippers on the connecting-rod bearing caps are submerged in oil with every
rotation. When the dippers emerge from the oil trough, the oil is splashed
onto the cylinders and pistons, lubricating them.
Experts agree that splash lubrication is suitable for small engines such as
those used in lawnmowers and outboard boat motors, but not for automobile
engines. This is because the amount of oil in the trough has a dramatic impact
on how well the engine parts can be lubricated. If there is not enough oil, the
amount splashed onto the machinery will be insufficient. Too much oil will
cause excessive lubrication, which can also cause problems.
Engines are often lubricated through a combination of splash lubrication and
force feeding. In some cases, an oil pump keeps the trough full so that the
engine bearings can always splash enough oil onto the other parts of the
engine. As the engine speeds up, so does the oil pump, producing a stream of
lubricant powerful enough to coat the dippers directly and ensure a sufficient
splash. In other cases, the oil pump directs oil to the bearings. Holes drilled in
the bearings allow it to flow to the crankshaft and connecting rod bearings,
lubricating them in the process.
Combination Splash and Force Feed
In a combination splash and force feed, oil is delivered to some parts by
means of splashing and other parts through oil passages under pressure from
the oil pump. The oil from the pump enters the oil galleries. From the oil
galleries, it flows to the main bearings and camshaft bearings. The main
bearings have oil-feed holes or grooves that feed oil into drilled passages in
the crankshaft.
The oil flows through these passages to the connecting rod bearings. From
there, on some engines, it flows through holes drilled in the connecting rods
to the piston-pin bearings. Cylinder walls are lubricated by splashing oil
thrown off from the connecting-rod bearings. Some engines use small troughs
under each connecting rod that are kept full by small nozzles which deliver
oil under pressure from the oil pump. These oil nozzles deliver an
increasingly heavy stream as speed increases. At very high speeds these oil
streams are powerful enough to strike the dippers directly. This causes a
much heavier splash so that adequate lubrication of the pistons and the
connecting-rod bearings is provided at higher speeds. If a combination
system is used on an overhead valve engine, the upper valve train is
lubricated by pressure from the pump.
Force-Feed
A somewhat more complete pressurization of lubrication is achieved in the
forcefeed lubrication system. Oil is forced by the oil pump from the
crankcase to the main bearings and the camshaft bearings. Unlike the
combination system the connecting-rod bearings are also fed oil under
pressure from the pump. Oil passages are drilled in the crankshaft to lead oil
to the connecting-rod bearings. The passages deliver oil from the main
bearing journals to the rod bearing journals.
In some engines, these openings are holes that line up once for every
crankshaft revolution. In other engines, there are annular grooves in the main
bearings through which oil can feed constantly into the hole in the crankshaft.
The pressurized oil that lubricates the connecting-rod bearings goes on to
lubricate the pistons and walls by squirting out through strategically drilled
holes. This lubrication system is used in virtually all engines that are
equipped with semi floating piston pins.
Full Force Feed
In a full force-feed lubrication system, the main bearings, rod bearings,
camshaft bearings, and the complete valve mechanism are lubricated by oil
under pressure. In addition, the full force-feed lubrication system provides
lubrication under pressure to the pistons and the piston pins.
This is accomplished by holes drilled along the length of the connecting rod,
creating an oil passage from the connecting rod bearing to the piston pin
bearing. This passage not only feeds the piston pin bearings but also provides
lubrication for the pistons and cylinder walls. This system is used in virtually
all engines that are equipped with full-floating piston pins.
Need of Lubrication System
Lubrication is the admittance of oil between two surfaces having relative
motion. The objects of lubrication may be one or more of the following:
1. To reduce friction between the parts having relative motion.
2. To reduce wear of the moving part.
3. To cool the surfaces by carrying away heat generated due to friction.
4. To seal a space adjoining the surfaces.
5. To absorb shocks between bearings and other parts and consequently
reduce noise.
6. To remove dirt and grit that might have crept between the rubbing parts.
Additives in lubricating oil
In addition to the viscosity index improvers, motor oil manufacturers often
include other additives such as detergents and dispersants to help keep the
engine clean by minimizing sludge buildup, corrosion inhibitors, and alkaline
additives to neutralize acidic oxidation products of the oil. Most commercial
oils have a minimal amount of zinc dialkyldithiophosphate as an anti-wear
additive to protect contacting metal surfaces with zinc and other compounds
in case of metal to metal contact. The quantity of zinc dialkyldithiophosphate
is limited to minimize adverse effects on catalytic converters. Another concern
for aftertreatment devices is the deposition of oil ash, which increases the
exhaust back pressure and reduces the fuel economy over time. The so-called
"chemical box" today limits the concentrations of sulfur, ash and phosphorus
(SAP).
There are other additives available commercially which can be added to the
oil by the user for purported additional benefit.
Gasoline and Diesel Additives
Legislation is now restricting the use of organo-metallic compounds for
improving the octane rating of gasoline. Consequently, they are not covered
here, but a discussion of their use, the other additives that must be used in
association with them, and the consequences of their withdrawal are
discussed in Stone (1999). The most significant additives are detergents and
antioxidants, but corrosion inhibitors, metal deactivators, biocides, anti-static
additives demulsifiers, dyes and markers, and anti-icing additives also are
used. These are discussed in detail by Owen and Coley (1995).
Antioxidants are needed in gasoline to inhibit the formation of gum, which
usually is associated with the unsaturated hydrocarbons in fuel. Formation of
gum can interfere with the operation of fuel injectors.
Detergents are added to reduce the deposits in fuel injectors, the inlet
manifold, and the combustion chamber. Surfactants inhibit the formation of
deposits in the injectors and the inlet manifold, but a different mechanism is
needed to combat valve and port deposits because these deposits are
associated with higher temperatures. High-boiling point, thermally stable,
oily materials such as polybutene are used, and these appear to dissolve the
deposits. Diesel additives to improve the cetane number will be discussed
first, followed by additives to lower the cold filter plugging point
temperature, then additives that are used with low sulfur fuels, and finally
other additives.
The most widely used ignition-improving additive currently is 2-ethyl hexyl
nitrate (2EHN), because of its good response in a wide range of fuels and
comparatively low cost (Thompson et al., 1997). Adding 1000 ppm of 2EHN
will increase the cetane rating by approximately 5 units. In some parts of the
world, legislation limits the nitrogen content of diesel fuels, because although
the mass of nitrogen is negligible to that available from the air, fuel-bound
nitrogen contributes disproportionately to nitric oxide formation. Under these
circumstances, peroxides can be used, such as ditertiary butyl peroxide
(Nandi and Jacobs, 1995).
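Using the figure quoted above (about 5 cetane units per 1000 ppm of 2EHN), the
Python sketch below gives a rough estimate of the treated cetane number.
Treating the response as linear is a simplification for illustration only, and
the 48.0 base cetane number is an assumed example; the text notes the response
varies with the base fuel.

def estimated_cetane(base_cetane, additive_ppm, units_per_1000ppm=5.0):
    """Rough estimate of cetane number after adding 2EHN."""
    return base_cetane + units_per_1000ppm * (additive_ppm / 1000.0)

for ppm in (0, 500, 1000):
    print(f"{ppm:>4} ppm 2EHN -> cetane of about {estimated_cetane(48.0, ppm):.1f}")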
Diesel fuel contains molecules with approximately 12 to 22 carbon atoms,
and many of the higher molar mass components (e.g., cetane, C16H34)
would be solid at room temperature if they were not mixed with other
hydrocarbons. Thus, when diesel fuel is cooled, a point will be reached at
which the higher molar mass components will start to solidify and form a
waxy precipitate. As little as 2% wax out of the solution can be enough to gel
the remaining 98%. This will affect the pouring properties and (more
seriously at a slightly higher temperature) block the filter in the fuel-injection
system. These and other related low-temperature issues are discussed
comprehensively by Owen and Coley (1995), who point out that as much as
20% of the diesel fuel can consist of higher molar mass alkanes. It would be
undesirable to remove these alkanes because they have higher cetane ratings
than many of the other components. Instead, use is made of anti-waxing
additives that modify the shape of the wax crystals.
Wax crystals tend to form as thin "plates" that can overlap and interlock.
Anti-waxing additives do not prevent wax formation. They work by
modifying the wax crystal shape to a dendritic (needle-like) form, and this
reduces the tendency for the wax crystals to interlock. The crystals are still
collected on the outside of the filter, but they do not block the passage of the
liquid fuel. The anti-waxing additives in commercial use are copolymers of
ethylene and vinyl acetate, or other alkene-ester copolymers. The
performance of these additives varies with different fuels, and the
improvement decreases as the dosage rate is increased. It is possible for 200
ppm of additive to reduce the cold filter plugging point (CFPP) temperature
by approximately 10 K.
Additives can be used with low-sulfur diesel fuels to compensate for their
lower lubricity, lower electrical conductivity, and reduced stability. To
restore the lubricity of a low-sulfur fuel to that of a fuel with 0.2% sulfur by
mass, a dosage on the order of 100 mg/L is needed. Care is required in
the selection of the additive, if it is not to interact unfavorably with other
additives (Batt et al., 1996).
Electrical conductivity usually is not subject to legislation, but if fuels have a
very low conductivity, then there is the risk of a static electrical charge being
built up. If a road tanker, previously filled with gasoline, is being filled with
diesel, then there is the possibility of a flammable mixture being formed. The
conductivity of untreated low-sulfur diesel fuels can be less than 5 pS/m
(Merchant et al., 1997). Conductivities greater than 100 pS/m can be obtained
by adding a few parts per million of a chromium-based static dispersant
additive. Low-sulfur fuels and fuels that have been hydro-treated to reduce
the aromatic content also are prone to the formation of hydroperoxides. These
are known to degrade neoprene and nitrile rubbers, but this can be prevented
by using antioxidants such as phenylenediamines (suitable only in low-sulfur
fuels) or hindered phenols (Owen and Coley, 1995).
Other additives used in diesel fuels are detergents, anti-ices, biocides, and
anti-foamants.
Detergents (e.g., amines and amides) are used to inhibit the formation of
combustion deposits. Most significant are deposits around the injector
nozzles, which interfere with the spray formation. Deposits then can lead to
poor air-fuel mixing and particulate emissions. A typical dosage level is 100-
200 ppm.
Anti-ices (e.g., alcohols or glycols) have a high affinity for water and are
soluble in diesel fuel. Water is present through contamination and as a
consequence of humid air above the fuel in vented tanks being cooled below
its dewpoint temperature. If ice formed, it could block both fuel pipes and
filters. Biocides act against anaerobic bacteria that can form growths at the
water/diesel interface in storage tanks. These are capable of blocking fuel
filters.
Anti-foamants (10-20 ppm silicone-based compounds) facilitate the rapid
and complete filling of vehicle fuel tanks.
Valve Operating Systems
In engines with overhead poppet valves (OHV - overhead valves), the camshaft
is either mounted in the cylinder block, or in the cylinder head (OHC-
overhead camshaft). Figure 2.21a shows an overhead valve engine in which
the valves are operated from the camshaft, via cam followers, pushrods, and
rocker arms. This is a cheap solution because the drive to the camshaft is
simple (either gear or chain), and the machining is in the cylinder block. In a
"V" engine, this arrangement is particularly suitable because a single
camshaft can be mounted in the valley between the two cylinder banks.
In overhead camshaft (OHC) engines (Fig. 2.21b), the camshaft can be
mounted either directly over the valve stems, or it can be offset. When the
camshaft is offset, the valves are operated by rockers, and the valve
clearances can be adjusted by altering the pivot height or, as in the case of the
exhaust valves in Fig. 2.21b, different thickness shims can be used. For the
inlet valves in Fig. 2.21b, the cam operates on a follower or "bucket." The
clearance between the follower and the valve end is adjusted by a shim.
Although this adjustment is more difficult than in systems using rockers, it is
much less prone to change. The spring retainer is connected to the valve
spindle by a tapered split collet. The valve guide is a press-fit into the
cylinder head, so that it can be replaced when worn. Valve seat inserts are
used, especially in engines with aluminum alloy cylinder heads, to ensure
minimal wear. Normally, poppet valves rotate to even out any wear and to
maintain good seating. This rotation can be promoted if the center of the cam
is offset from the valve axis. Invariably, oil seals are placed at the top of the
valve guide to restrict the flow of oil into the cylinder. This is most
significant with overhead cast-iron camshafts, which require a copious supply
of lubricant. When the valves are not in line (b), it is more usual to use two
camshafts because this gives more flexibility on valve timing and greater
control if a variable valve timing system is to be used.
The use of four valves per combustion chamber is quite common in high-
performance spark ignition engines and is used increasingly in compression
ignition engines. The advantages of four valves per combustion chamber are
larger valve throat areas for gas flow, smaller valve forces, and a larger valve
seat area. Smaller valve forces occur because a lighter valve with a less stiff
spring can be used. This also will reduce the hammering effect on the valve
seat when the valve closes. The larger valve seat area is important because
this is how heat is transferred (intermittently) from the valve head to the
cylinder head. In the case of diesel engines, four valves per cylinder allow the
injector to be placed in the center of the combustion chamber, which
facilitates the development of low-emission combustion systems.
To reduce maintenance requirements, it is now common to use some form of
hydraulic lash adjuster (also known as a hydraulic lifter or tappet), an
example of which is shown in the figure. This consists of a piston/cylinder
arrangement that is pressurized by engine lubricant. However, when the cam
starts to displace its follower, a sudden rise in pressure occurs in the lower oil
chamber. This causes a check valve (a ball loaded by a weak spring) to close,
so that the cam motion then is transmitted to the valve. There is always a
small leakage flow from the lash adjuster so that the valve will always seat
properly, even when there is a reduction
in the clearances within the valvetrain. The lash adjusters can be incorporated
into the follower of the overhead valve arrangement (Fig.a) or the bucket
tappet of Fig. b, or the pivot post of a cam-over-rocker system. A
disadvantage of this simple substitution is an increase in frictional losses
because the cam follower will always be loaded when sliding on the cam base
circle. Friction can be reduced by using a roller follower on the rocker of the
system in Fig. 2.22, and this cam-over-rocker system also minimizes the
mass of the moving valvetrain components. A hydraulic lash adjuster reduces
the stiffness of the valvetrain, which will reduce the maximum speed limit for
the valve gear.
The drive to the camshaft usually is by chain or toothed belt. Gear drives also
are possible but tend to be expensive, noisy, and cumbersome with overhead
camshafts. The advantage of a toothed belt drive is that it can be mounted
externally to the engine, and the rubber damps out torsional vibrations that
otherwise might be troublesome.
Anti-Friction Bearings
Anti-friction bearings, also called rolling contact bearings, are not widely
used in the engine proper. They are used more prevalently in the transmission
and drivetrain. Nonetheless, they are found in several engine components
such as alternators and water pumps. Their primary function is to support a
rotating shaft. Figure 5.1 shows a selection of types of anti-friction bearings.
Anti-friction bearings offer very low coefficients of friction. Furthermore,
some types are able to support axial (thrust) loads in addition to the radial, or
shaft, loads. Deep groove ball bearings can support small thrust loads.
Tapered roller bearings are specifically designed to support significant thrust
loads; hence, they are used as wheel bearings. The rollers or balls are made of
hardened, high carbon chromium alloy steels, and their construction is
somewhat complicated. Furthermore, these bearings do not handle shock
loads very well, and their performance is greatly reduced by the presence of
dirt. Thus, anti-friction bearings must be well sealed to keep in the lubricant
and keep out dirt and other contaminants.
Straight-Tooth Spur & Helical Spur Gears
Figure shows an example of this type of gear. Straight-tooth spur gears have
straight teeth parallel to the axis of rotation. When the teeth engage, they do
so instantaneously along the tooth face. This sudden meshing results in high
impact stresses and noise. Thus, these gears have been replaced with helical
gears in most transmissions. However, these gears do not generate axial (or
thrust) loads along the shaft axis. Furthermore, they are easier to manufacture
and can transmit high torque loads. For these reasons, many transmissions
use spur gears for first and reverse gears. This accounts for the distinctive
"whine" when a car is reversed rapidly.
Helical Spur Gears
Figure shows an example of a helical gear. Helical gears have teeth that are
cut in the form of a helix on a cylindrical surface. As the teeth begin to mesh,
contact begins at the leading edge of the tooth and progresses across the tooth
face. Although this greatly reduces the impact load and noise, it generates a
thrust load that must be absorbed at the end of the shaft by a suitable bearing.
Straight-Tooth Bevel, Spiral Bevel &
Hypoid Gears
Straight-Tooth Bevel Gears
These gears, shown in Fig. 6.11, have straight
teeth cut on a conical surface. They are used to transmit power between shafts
that intersect but are not parallel. They are used in differentials. Similar to
straight-tooth spur gears, they will be noisy. However, in the differential, they
rotate only when the axles are rotating at different speeds.
Spiral Bevel Gears
These gears have teeth cut in the shape of a helix on a conical surface. They
can be used for final drives to connect intersecting shafts.
Hypoid Gears
These gears have helical teeth cut on a hyperbolic surface. They are used in
final drives to connect shafts that are neither parallel nor intersecting. These
gears have high tooth loads and must be lubricated with special heavy-duty
hypoid gear oil because greater sliding occurs between the teeth. The sliding
increases with the amount of offset between the shaft axes. With zero offset,
a spiral bevel gear results, whereas the maximum offset corresponds to a
worm/wheel configuration. Despite having a lower efficiency than spiral
bevel gears, hypoid gears allow the driveshaft to be lowered, thereby
requiring a smaller "transmission tunnel" in the body.
Four-wheel Drive (4WD) and All-Wheel
Drive (AWD)
A vehicle that provides power to all four wheels has some key advantages in
slippery or rough terrain. First and foremost, four-wheel drive (4WD) enables
the vehicle to move under conditions of reduced traction. What is lost on
many drivers is that four-wheel drive does not enable the vehicle to stop more
rapidly, evidence of this being commonly seen in the mountainous states of
the western United States. Nevertheless, the U.S. market has seen an
explosion in the sale of four-wheel-drive sport utility vehicles (SUVs).
Several automakers also have successfully marketed all-wheel-drive (AWD)
vehicles, most notably Subaru and Audi. Furthermore, there is a vast array of
adjectives used to define the systems, including part-time four-wheel drive,
full-time four-wheel drive, all-wheel drive, and so forth. The differences
among these systems often owe more to marketing than engineering. Adding
to the confusion is the fact that the automakers themselves use various terms
for their systems, often meaning something quite different to a competitor. In
short, any attempt to classify four- or all-wheel drive systems invariably will
meet with exceptions to the classification. This work will classify these
systems into three broad categories: part-time four-wheel drive, full-time
four-wheel drive, and all-wheel drive.
Part-Time Four-wheel Drive (4WD)
The key feature of a part-time four-wheel-drive system is the inclusion of a
separate transfer case aft of the transmission. This is the lowest cost option
and can be considered the first-generation option. It is called part-time
because it can be used only in conditions that will allow for wheel slip, such
as dirt roads, full snow coverage, and so forth. The reason for this is that
there is no mechanism to eliminate driveline wind-up. Recall that a
differential is used at the rear axle to allow differences in wheel rotation
while the vehicle is cornering. With four-wheel drive, the same thing is
happening with the front axle and the rear axle. One is traveling faster than
the other; therefore, something must allow for the speed difference. In the
absence of a center differential, the only mechanism allowing wheel speed
variation is for the wheel to break free at the contact patch. Because this
requires large forces on dry pavement, the part-time system cannot be used
on dry pavement without serious drivetrain damage.
The transfer case also incorporates two selectable gear ratios: low and high. In
four-wheel drive low, the vehicle has a limited top speed. However, because
of the large gear ratio in low, the vehicle has a large amount of torque
available at the drive wheels to enable the driver to extricate the vehicle from
difficult situations. The other feature of this system is that the front hubs
usually are locked manually. In two-wheel drive, the front wheels spin freely
around the spindles. When the driver desires four-wheel drive, the hubs must be
locked manually onto the drive spindles for torque to be applied through the
front wheels. The Isuzu Rodeo, Ford Bronco, and Dodge Ram all have part-time
four-wheel drive.
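A rough illustration of why the low range provides so much torque at the
wheels: axle torque is engine torque multiplied by the product of the gear
ratios, neglecting driveline losses. All the torque and ratio values in the
Python sketch below are illustrative assumptions, not figures from this text.

def wheel_torque_nm(engine_torque_nm, gearbox_ratio, transfer_ratio, final_drive_ratio):
    """Torque delivered to the axle, neglecting losses."""
    return engine_torque_nm * gearbox_ratio * transfer_ratio * final_drive_ratio

engine_torque = 300.0            # N·m, assumed
first_gear, final_drive = 3.5, 3.7
for label, transfer in (("high range", 1.0), ("low range", 2.7)):
    t = wheel_torque_nm(engine_torque, first_gear, transfer, final_drive)
    print(f"{label}: about {t:.0f} N·m at the axle")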
On-Demand Four-Wheel Drive (4WD)
This is the next option in terms of increasing convenience and cost. An open
differential is incorporated between the front and rear axles. The open
differential absorbs shaft speed variations between the front and rear output
shafts. However, being an open differential, it sends torque to the axle with
least resistance. This system allows driving in four-wheel drive on dry
pavement, but this will decrease the fuel economy of the vehicle. For this
reason, it is referred to as on-demand. The driver may use two-wheel drive
when there is no need for four-wheel drive. This system also will have
automatic locking hubs that are either vacuum operated or electrically
operated, saving the driver from a trip out of the cab in inclement weather to
lock the hubs. Often "on-demand" (a configuration) is confused with
"shift-on-the-fly" (an engagement method). The Chevrolet Blazer is an example of
a vehicle with on-demand four-wheel drive.
Full-Time Four-Wheel Drive (4WD)
This is the highest cost option. This system has differentials everywhere, at
both front and rear axles and in the transfer case. This allows the vehicle to be
in four-wheel drive on dry pavement. The system allows for slip, but
something had to be done about situations of very low traction; that is, the
open differentials would send torque to the wheel with the least traction.
Some vehicles, most notably the AM General Hummer, can lock all of the
differentials. Other vehicles, such as the 1995 Jeep Grand Cherokee, have a
viscous coupling that transmits power from the wheels that slip to the wheels
that grip. In this category, the distinction between four-wheel drive and all-
wheel drive begins to blur. For the purposes of this work, full-time four-
wheel drive is applied to vehicles that still require the driver to select the
four-wheel-drive option.
All-Wheel Drive (AWD)
For the purposes of this work, an all-wheel drive vehicle does not have a
selectable transfer case. Generally, these vehicles are not intended for off-
road use, but use four-wheel drive for inherent stability. Usually, they use
viscous couplings to send power from the spinning wheels to the gripping
wheels. The system operates automatically and requires no driver
intervention. This system also is used on high-performance cars to eliminate
wheel spin caused by the enormous torque generated at the rear wheels.
Steering Mechanisms
The fundamental problem in steering is to enable the vehicle to traverse an
arc such that all four wheels travel about the identical center point. In the
days of horse-drawn carriages, this was accomplished with the fifth-wheel
system depicted in Fig.
Although this system worked well for carriages, it soon proved unsuitable for
automobiles. In addition to the high forces required of the driver to rotate the
entire front axle, the system proved unstable, especially as vehicle speeds
increased. The solution to this problem was developed by a German engineer
named Lankensperger in 1817. Lankensperger had an inherent distrust of the
German government, so he hired an agent in England to patent his idea. His
chosen agent was a lawyer named Rudolph Ackerman. The lawyer secured
the patent, but the system became known as the Ackerman system.
Figure depicts the key features of this system. The end of each axle has a
spindle that pivots around a kingpin. The linkages connecting the spindles
form a trapezoid, with the base of the trapezoid formed by the rack and tie
rods. The distance between the tie rod ends is less than the distance between
the kingpins. The wheels are parallel to each other when they are in the
straight-ahead position. However, when the wheels are turned, the inner
wheel turns through a greater angle than the outer wheel. Figure 7.2 also
shows that the layout is governed by the ratio of track (distance between the
wheels) to wheelbase (distance between front and rear wheels). The
Ackerman layout is accurate only in three positions: straight ahead, and at
one position in each direction. The slight errors present in other positions are
compensated for by the deflection of the pneumatic tires.
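The Ackerman geometry described above can be illustrated with a small Python
sketch: for a turn about a common center, the inner wheel steers through a
greater angle than the outer wheel, with the difference governed by track and
wheelbase. The wheelbase, track and turn radius below are assumed example
values, not figures from this text.

import math

def ackerman_angles_deg(wheelbase_m, track_m, turn_radius_m):
    """Return (inner, outer) steer angles in degrees for an ideal Ackerman layout."""
    inner = math.degrees(math.atan(wheelbase_m / (turn_radius_m - track_m / 2)))
    outer = math.degrees(math.atan(wheelbase_m / (turn_radius_m + track_m / 2)))
    return inner, outer

inner, outer = ackerman_angles_deg(wheelbase_m=2.6, track_m=1.5, turn_radius_m=6.0)
print(f"inner wheel about {inner:.1f} deg, outer wheel about {outer:.1f} deg")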
For the purposes of this book, "steering mechanism" refers to those
components required to realize the Ackerman system. Of course, all vehicles
today use a steering wheel as the interface between the system and driver.
(This has not always been the case. Early automobiles used a tiller.) The
steering wheel rotates a column, and this column is the input to the steering
mechanism. These mechanisms can be broadly grouped into two categories:
(I) worm-type mechanisms, and (2) rack and pinion mechanisms.
Worm Systems
The figure shows the steering linkages required by worm gear steering systems. The Pitman
arm converts the rotational motion of the steering box output into side-to-side
motion of the center link. The center link is tied to the steering arms by the tie
rods, and the side-to-side motion causes the spindles to pivot around their
respective steering axes (kingpins). To achieve Ackerman steering, the four-
bar linkages must form a trapezoid instead of a parallelogram. Although all
worm-type steering systems use linkages similar to these, the specifics of the
steering boxes differ and are explained next.
Worm and Sector
The shaft to the Pitman arm is connected to a gear that meshes with a worm
gear on the steering column. Because the Pitman shaft gear needs to rotate
through only approximately 70°, only a sector of the gear is actually used.
The worm gear is assembled on tapered roller bearings to absorb some thrust
load, and an adjusting nut is provided to regulate the amount of end-play in
the worm.
Worm and Roller
The worm and roller system is very similar to the worm and sector system. In
this case, a roller is supported by ball bearings within the sector on the
Pitman shaft. The bearings reduce sliding friction between the worm and
sector. The worm also can be shaped similarly to an hourglass, that is,
tapered from each end to the center. This provides better contact between the
worm and the roller, as well as a variable steering ratio. When the wheels are
at the center (straight-ahead) position, the steering reduction ratio is high to
provide better control. As the wheels are turned farther off-center, the ratio
lowers. This gives better maneuverability during low-speed maneuvers such
as parking.
Recirculating Ball
The recirculating ball system is another form of worm and nut system. In this
system, a nut is meshed onto the worm gear by means of a continuous row of
ball bearings. As the worm turns, the nut moves up and down the worm
threads. The ball bearings not only reduce the friction between the worm and
nut, but they greatly reduce the wear because the balls continually recirculate
through the system, thereby preventing any one area from bearing the brunt
of the wear. The primary advantage of all worm-type steering systems is
reduced steering effort on the part of the driver. However, due to the worm
gear, the driver receives no feedback from the wheels. For these reasons,
worm-type steering systems are found primarily on large vehicles such as
luxury cars, sport utility vehicles, pickup trucks, and commercial vehicles.
Rack and Pinion Steering
The rack and pinion steering system is simpler, lighter, and generally cheaper
than worm-type systems (Fig. 7.7). The steering column rotates a pinion gear
that is meshed to a rack. The rack converts the rotary motion directly to side-
to-side motion and is connected to the tie rods. The tie rods cause the wheels
to pivot about the kingpins, thus turning the front wheels.
Rack and pinion systems have the advantage of providing feedback to the
driver. Furthermore, rack and pinion systems tend to be more responsive to
driver input, and for this reason, rack and pinion steering is found on most
small and sports cars.
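The rack and pinion relationship lends itself to a simple calculation: rack
travel equals the pinion pitch radius times the steering-wheel rotation in
radians. The pinion radius and steering angles in the Python sketch below are
illustrative assumptions, not figures from this text.

import math

def rack_travel_mm(steering_wheel_deg, pinion_pitch_radius_mm):
    """Linear rack movement produced by a given steering-wheel rotation."""
    return math.radians(steering_wheel_deg) * pinion_pitch_radius_mm

for wheel_deg in (90, 360):
    travel = rack_travel_mm(wheel_deg, pinion_pitch_radius_mm=8.0)
    print(f"{wheel_deg:>3} deg at the steering wheel -> {travel:.1f} mm of rack travel")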
Wheel Alignment
In addition to allowing the vehicle to be turned, the steering system must be
set up to allow the vehicle to track straight ahead without steering input from
the driver. Thus, an important design factor for the vehicle is the wheel
alignment. Four parameters are set by the designer, and these must be
checked regularly to ensure they are within the original vehicle
specifications. The four parameters discussed here are as follows:
1. Camber
2. Steering axis inclination (SAI)
3. Toe
4. Caster
Camber
Camber is the angle of the tire/wheel with respect to the vertical as viewed
from the front of the vehicle. Camber angles usually are very small, on the
order of 1°; the camber angles shown in the figure are exaggerated. Positive
camber is defined as the top of the wheel being tilted away from the vehicle,
whereas negative camber tilts the top of the wheel toward the vehicle. Most
vehicles use a small amount of positive camber, for reasons that will be
discussed in the next section. However, some off-road vehicles and race cars
have zero or slightly negative camber.
Steering Axis Inclination (SAI)
Steering axis inclination (SAI) is the angle from the vertical defined by the
centerline passing through the upper and lower ball joints. Usually, the upper
ball joint is closer to the vehicle centerline than the lower.
The figure shows the advantage of combining positive camber with an inclined
steering axis. If a vertical steering axis is combined with zero camber (left
side of the figure), any
steering input requires the wheel to scrub in an arc around the steering axis.
In addition to increasing driver effort, it causes increased tire wear. The
combination of SAI and positive camber reduces the scrub radius (right side
of Fig.). This reduces driver effort under low-speed turning conditions and
minimizes tire wear. An additional benefit of this system is that the wheel arc
is no longer parallel to the ground. Any turning of the wheel away from
straight ahead causes it to arc toward the ground. Because the ground is not
movable, this causes the front of the vehicle to be raised. This is not the
minimum potential energy position for the vehicle; thus, the weight of the
vehicle tends to turn the wheel back to the straight ahead position. This
phenomenon is very evident on most vehicles: merely turning the steering
wheel to full lock while the vehicle is standing still will make the front end of
the vehicle rise visibly. Although when stationary the weight of the vehicle may
not be sufficient to rotate the wheels back to the straight-ahead position, as
soon as the vehicle begins to move, the wheels will return to the straight-
ahead position without driver input. Caster angle also contributes to this self-
aligning torque. Note that the diagrams in the preceding figures have been
simplified to facilitate discussion. In practice, the wheel is dished so that the
scrub radius is further reduced.
Toe & Caster
Toe
Toe is defined as the difference of the distance between the leading edge of
the wheels and the distance between the trailing edge of the wheels when
viewed from above. Toe-in means the front of the wheels are closer than the
rear; toe-out implies the opposite.
For a rear-wheel-drive vehicle, the front wheels normally have a slight
amount of toe-in. When the vehicle begins to roll, rolling resistance produces
a force through the tire contact patch perpendicular to the rolling axis. Due to
the existence of the scrub radius, this force produces a torque around the
steering axis that tends to cause the wheels to toe-out. The slight toe-in
allows for this, and when rolling, the wheels align along the axis of the
vehicle. Conversely, front-wheel-drive vehicles require slight toe-out. In this
case, the tractive force of the front wheels produces a moment about the
steering axis that tends to toe the wheels inward. Proper toe-out absorbs this
motion and allows the wheels to run parallel to the direction of motion of the
vehicle.
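As a rough numerical illustration of how a toe measurement translates into a toe angle, the short sketch below uses simple trigonometry on the rim diameter at which the measurement is taken; the function name, the 380 mm measurement diameter, and the 2 mm example are assumptions for illustration, not values from the text.

import math

def toe_angles_deg(leading_mm, trailing_mm, rim_diameter_mm=380.0):
    """Convert a toe measurement taken across the axle into toe angles.

    leading_mm / trailing_mm: distances between the wheel rims at the
    leading and trailing edges; toe-in gives trailing_mm > leading_mm.
    Returns (per-wheel toe angle, total toe angle) in degrees.
    """
    toe_distance = trailing_mm - leading_mm            # > 0 means toe-in
    per_wheel = math.degrees(math.asin(toe_distance / (2.0 * rim_diameter_mm)))
    return per_wheel, 2.0 * per_wheel

per_wheel, total = toe_angles_deg(1500.0, 1502.0)      # 2 mm of toe-in
print(f"{per_wheel:.2f} deg per wheel, {total:.2f} deg total toe-in")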
Caster
Caster is the angle of the steering axis from the vertical as viewed from the
side. Positive caster is defined as the steering axis inclined toward the rear of
the vehicle.
With positive caster, the tire contact patch is aft of the intersection of the
steering axis and the ground. This is a desirable feature for stability.
When the wheel is turned, the cornering force acts perpendicular to the wheel
axis and through the contact patch. This creates a torque about the steering
axis that acts to center the wheel. Obviously, negative caster results in the
opposite effect, and the wheel would tend to continue turning about the
steering axis. The most common example of positive caster is a shopping
cart. The wheels are free to turn around the steering axis, and when the cart is
pushed straight ahead, the wheels self-align to the straight-ahead position.
Effect Of Improper Alignment On Vehicle
Vehicle Rollover
One aspect of cornering behavior that can be terrifying for a driver is vehicle
rollover. Rollover is defined as the vehicle rotating 90° or more about its
longitudinal axis (Gillespie, 1994) and can be caused by many factors. It can
occur on a level surface if the tires can generate sufficient cornering force
that the vehicle rolls before it slips. Any cross slope of the road also will
excite (or inhibit) rollover. The most frequent cause of rollover is a skidding
vehicle coming into contact with a surface irregularity such as a curb, dirt
shoulder, or similar situation. The process is influenced by a large number of
complex phenomena, and a detailed analysis goes beyond the scope of an
introductory text. Nevertheless, some simple models exist that can aid one's
understanding of vehicle rollover.
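One such simple model is the quasi-static analysis of a rigid vehicle, which balances the overturning moment of the lateral force against the restoring moment of the weight. A minimal sketch of that calculation follows; the track-width and centre-of-gravity figures are illustrative assumptions, not data from the text.

def rollover_threshold_g(track_width_m, cg_height_m):
    """Quasi-static rollover threshold for a rigid vehicle on level ground.

    Lateral acceleration, expressed in g, at which the inside wheels unload:
        a_y / g = (t / 2) / h
    where t is the track width and h is the centre-of-gravity height.
    Suspension and tyre compliance lower the real threshold somewhat.
    """
    return (track_width_m / 2.0) / cg_height_m

# Typical passenger car versus a taller SUV (illustrative dimensions)
print(round(rollover_threshold_g(1.5, 0.55), 2), "g for the car")   # about 1.36 g
print(round(rollover_threshold_g(1.6, 0.75), 2), "g for the SUV")   # about 1.07 g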
Hotchkiss Suspensions
The Hotchkiss drive was used extensively on passenger cars through the
1960s and is shown in Fig. 8.26. The system consists of a longitudinal
driveshaft connected to a center differential by U-joints. The solid axle is
mounted to the frame by longitudinally mounted leaf springs. Although the
Hotchkiss suspension is simple, reliable, and rugged, it has been superseded
by other designs for several reasons. First, as designers sought better ride
qualities, the spring rates on the leaf springs dropped. This led to lateral
stability difficulties because softening leaf springs requires that they be
longer. Second, the longer leaf springs were susceptible to wind-up,
especially as braking power and engine power began to rise. Finally, as
front-wheel-drive cars became more prevalent, rear-wheel-drive cars were
forced to adopt independent rear suspensions to attain similar ride and
handling qualities. Nevertheless, the Hotchkiss drive is still used on many
four-wheel-drive trucks and SUVs at both ends of the vehicle. One
disadvantage of this suspension is that the stocky axles and differential
contribute to a relatively large unsprung mass.
Disc Brakes
Although drum brakes have the advantages of self-energization and ease of
parking brake incorporation, they suffer from several disadvantages. Their
heat dissipation is problematic, and drum brakes are prone to brake fade as
the drum becomes hot due to extended or frequent heavy braking. Also, drum
brakes are very sensitive to moisture or contamination inside the drum. Any
water in the drum rapidly vaporizes under braking, causing the coefficient of
friction of the shoe to become nearly zero.
On the other hand, disc brakes do not suffer these handicaps. The rotors can
be vented to aid heat dissipation, and any water or contamination of the rotor
is quickly removed by the scraping action of the pads. Figure 9.14 shows a
typical disc brake system.
Disc Brake Components
Brake Disc
The brake disc, also called the rotor, is connected to the wheel hub. The rotor
provides the friction surface for the pads, thus generating the braking torque.
Rotors usually are vented to aid in the dissipation of heat. Some rotors also
are cross drilled to save weight. High-performance brakes now are using
carbon fiber as a rotor material. Carbon fiber provides good, fade-free
performance when the material has been heated. Many Formula 1 teams use
carbon fiber brakes, and the driver must ride the brake during the warm-up
laps to bring them up to operating temperature. The performance of these
brakes under such demanding conditions is attested to by the fact that the
rotors often glow red hot after the brakes have been applied during a race.
The wheels are connected to the rotor by the lugs.
Brake Pads
The brake pads consist of a stamped steel backing plate to which the friction
material is attached. The material, also called the lining, may be bonded to
the plate with adhesive, or it may be riveted. Most disc brakes also contain a
wear indicator. This indicator is a small tab of spring steel, the edge of which
is set to a predetermined height below the surface of the new pad. When the
pad wears to the point where it should be replaced, the spring steel begins to
rub on the rotor when the brakes are applied. This produces an irritating
squeal that is intended to motivate the driver to have the brake pads replaced.
Should the driver ignore the warning, the brakes will continue to function to
the point where no lining material remains. The author has had the experience
of a fairly new disc brake pad disintegrating during a stop. During the
subsequent trip home, the rivet heads remaining on the backing plate
provided more than adequate stopping power, although at great damage to the
rotor.
Caliper
The brake caliper houses the pistons, and these pistons apply the activation
force to the brake pads. The caliper may house as few as one or as many as
six pistons, depending on the specific vehicle in question. Calipers fall into
two categories: (1) fixed, or (2) floating.
Lead-Acid Batteries
Lead-acid batteries are currently used in commercially available electric
vehicles (EVs). Despite continuous development since 1859, the possibility
of further development still exists to increase the specific power and energy.
Lead-acid batteries are selected for their low cost, high reliability, and an
established recycling infrastructure. However, problems including low energy
density, poor cold-temperature performance, and low cycle life limit their
desirability. The lead-acid cell consists of a metallic lead anode and a lead
oxide (PbO2) cathode held in a sulfuric acid (H2SO4) and water electrolyte.
The discharge of the battery is through the chemical reaction
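In its standard textbook form, the overall discharge reaction is Pb + PbO2 + 2H2SO4 → 2PbSO4 + 2H2O, with the reverse reaction occurring on charge.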

The electron transfer between the lead and the sulfuric acid is passed through
an external electrical connection, thus creating a current. In recharging the
cell, the reaction is reversed. Lead-acid batteries have been used as car
batteries for many years and can be regarded as a mature technology. The
lead-acid battery is suited to traction application because it is capable of a
high power output. However, due to the relatively low energy density, lead-
acid batteries become large and heavy to meet the energy storage
requirements
Nickel-Cadmium (NiCd) Batteries
Nickel-cadmium (NiCd) batteries are used routinely in communication and
medical equipment and offer reasonable energy and power capabilities. They
have a longer cycle life than lead-acid batteries and can be recharged quickly.
The battery has been used successfully in developmental EVs. The main
problems with nickel-cadmium batteries are high raw-material costs,
recyclability, the toxicity of cadmium, and temperature limitations on
recharging. Their performance does not appear to be significantly better than
that of lead-acid batteries, and the energy storage can be compromised by
partial discharges, referred to as the memory effect.
Nickel-Metal Hydride (NiMH) Batteries
Nickel-metal hydride (NiMH) batteries currently are used in computers,
medical equipment, and other applications. They have greater specific energy
and specific power capabilities than lead-acid or nickel-cadmium batteries,
but they are more expensive. Although the components are recyclable, the main
challenges with nickel-metal hydride batteries are their high cost, the high
temperature they create during charging, the need to control hydrogen loss,
their poor charge retention, and their low cell efficiency.
Metal hydrides have been developed for high hydrogen storage densities and
can be incorporated directly as a negative electrode, with a nickel
hydroxyoxide (NiOOH) positive electrode and a potassium/lithium hydroxide
electrolyte. The electrolyte and positive electrode had been extensively
developed for use in nickel-cadmium cells.
The electrochemical reaction is
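In its standard textbook form, the overall cell reaction on discharge is MH + NiOOH → M + Ni(OH)2, where M denotes the hydrogen-storage alloy; the reaction reverses on charge.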

During discharge, hydroxyl (OH-) ions are generated at the nickel
hydroxyoxide positive electrode and consumed at the metal hydride negative
electrode. The converse is true for water molecules, which means that the
overall concentration of the electrolyte does not vary during
charging/discharging. There are local variations, and care must be taken to
ensure that the flow of ions across the separator is high enough to prevent the
electrolyte "drying out" locally.
The conductivity of the electrolyte remains constant through the
charge/discharge cycle because the concentration remains constant. In
addition, there is no loss of structural material from the electrodes; thus, they
do not change their electrical characteristics. These two details give the cell
very stable voltage operating characteristics over almost the full range of
charge and discharge.
Lithium Ion (Li-Ion)/Lithium Polymer Batteries
The best prospects for future electric and hybrid electric vehicle battery
technology probably come from lithium battery chemistries. Lithium is the
lightest and most reactive of the metals, and its electronic structure means that it
freely gives up its single outer electron to produce an electric current.
Several types of lithium chemistry batteries are being developed. The two
most promising of these appear to be the lithium ion (Li-ion) type and a
further enhancement of this, the lithium polymer type.
The Li-ion battery construction is similar to that of other batteries except for
the lack of any rare earth metals that are a major environmental problem
when disposal or recycling of the batteries becomes necessary. The battery
discharges by the passage of electrons from the lithiated metal oxide to the
carbonaceous anode by current flowing via the external electrical circuit. Li-
ion represents a general principle, not a particular system. For example,
lithium/aluminium/iron sulphide has been used for vehicle batteries. Li-ion
batteries have a very linear discharge characteristic, and this facilitates
monitoring the state of charge. The charge/discharge efficiency of Li-ion
batteries is approximately 80%, which compares favorably with nickel-
cadmium batteries (approximately 65%) but unfavourably with nickel-metal
hydride batteries (approximately 90%). Although the materials used are non-
toxic, a concern with the use of lithium is, of course, its flammability.
Lithium polymer batteries use a solid polymer electrolyte, and the battery can
be constructed much like a capacitor, by rolling up the anode, polymer
electrolyte, composite cathode, cathode current collector, and insulator. This
results in a large surface area for the electrodes (to give a high current
density) and a low ohmic loss.
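To illustrate why a near-linear discharge characteristic simplifies state-of-charge monitoring, the sketch below estimates state of charge by straight-line interpolation between an assumed empty-cell and full-cell voltage; the 3.0 V and 4.2 V end points are illustrative assumptions, not values from the text.

def estimate_soc(cell_voltage, v_empty=3.0, v_full=4.2):
    """Rough state-of-charge estimate for a cell whose voltage falls
    approximately linearly with depth of discharge.

    Real battery management systems also correct for load current,
    temperature, and ageing.
    """
    soc = (cell_voltage - v_empty) / (v_full - v_empty)
    return max(0.0, min(1.0, soc))   # clamp to the physical range 0..1

print(f"{estimate_soc(3.7):.0%} state of charge at 3.7 V")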
Dual Hybrid Systems
If the parallel system is modified by the addition of a second electrical
machine (which is equivalent to adding a mechanical power transmission
route to the series system), the result is a system that allows transmission of
the prime mover power through two parallel routes: (1) electrically, and (2)
mechanically. This is equivalent to the use of a mechanical shunt
transmission with a continuously variable transmission (CVT) to give an
infinitely variable transmission (IVT) (Ironside and Stubbs, 1981). The result
is a transmission that enables the engine to operate at a high efficiency for a
wider range of vehicle operating points. A well-documented example of this
configuration is the "dual" hybrid system developed by Equos Research
(Yamaguchi et al., 1996) and used in the Toyota Prius. Figure below shows
this system.

The planetary gears act as a "torque divider," sending a proportion of the
engine power mechanically to the wheels and driving an electric machine
(M1) with the remainder. Consequently, the configuration acts simultaneously as
a parallel and a series hybrid. Engine speed is controlled using Machine 1,
removing the need for a transmission, a clutch, or a starter motor. Machine 2
acts in the same way as the motor in a parallel system, supplementing or
absorbing torque as required. The diagrams below show the possible modes
of operation, and each mode is explained next.
Electric Mode. (a) The engine is switched off, and Machine 1 acts as a
"virtual clutch," keeping the engine speed at zero. Torque and regenerative
braking are provided by Machine 2.
Parallel Mode. (b) Machine 1 is stationary (perhaps with a brake applied),
and the configuration is a simple parallel one, with a fixed engine-to-road
gear ratio.
Charging Mode. (c) The vehicle is stationary, and all of the engine power is
used to drive Machine 1 and charge the batteries. Torque is still transferred to
the wheels, allowing the car to "creep."
Dual Mode. (d) Some power is used to drive the wheels directly, while the
remainder powers Machine 1. The speed of Machine 1 determines the engine
operating speed.
The charging and parallel modes are effectively subsets of the dual mode, and
this continuity in control is the real strength of the configuration. The dual
hybrid configuration combines the advantages of both series and parallel, as
follows:
Optimal engine operating point at all times.
Much of the power (especially at cruising speeds) is delivered mechanically
to the wheels, thereby increasing efficiency.
Charging is possible, even when the vehicle is stationary.
The combined torque of the engine and Machine 2 is available, improving
performance.
Compared to a series hybrid (where the electrical machines must be rated for
the prime mover and the vehicle power requirement), only a fraction of the
prime mover power is transmitted electrically in the dual hybrid system. The
main difficulty with the dual hybrid is in the design of a control system,
which must resolve the two degrees of freedom (engine speed and engine
torque) and the associated transients into an optimal and robust control
strategy. System modeling is essential for optimizing this.
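The way Machine 1 controls engine speed follows from the kinematic constraint of a simple planetary gear set. The sketch below assumes the usual power-split arrangement (engine on the carrier, Machine 1 on the sun gear, ring gear coupled to the wheels); the tooth counts and speeds are illustrative assumptions, not figures from the text.

def mg1_speed_rpm(engine_rpm, ring_rpm, z_sun=30, z_ring=78):
    """Speed of Machine 1 (on the sun gear) for a simple planetary gear set
    with the engine on the carrier and the ring coupled to the wheels.

    Kinematic constraint:
        w_sun * Z_sun + w_ring * Z_ring = w_carrier * (Z_sun + Z_ring)
    """
    return ((z_sun + z_ring) * engine_rpm - z_ring * ring_rpm) / z_sun

# Electric mode: the engine is held at zero speed while the wheels turn
print(round(mg1_speed_rpm(0, 2000)))      # Machine 1 spins backwards (about -5200 rpm)
# Dual mode: the engine runs at an efficient speed while cruising
print(round(mg1_speed_rpm(2200, 2500)))   # about 1420 rpm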
Tire

The tire’s INNERLINER -- keeps air inside the tire.


The CASING (or CARCASS) – the internal substructure of the tire.
The tire’s BEAD -- assures an airtight fit with the wheel and efficient transfer
of forces from the wheel to the carcass of the tire.
BEAD FILLER – reduces flex and aids in deflection.
A tire’s BODY PLIES – withstand the forces of the tire’s inflation
pressure, provide the mechanical link from the wheel movement to the tread
area, and provide flexibility to supplement the vehicle’s suspension system.
The SIDEWALL -- protects the side of the tire from road and curb attack
and from atmospheric degradation.
A tire’s BELTS -- stabilize and strengthen the tread, allowing forces to be
efficiently transferred to the tread area.
Its BELT EDGE INSULATION – helps to reduce friction.
The TREAD -- provides the frictional coupling to the road surface to
generate traction and steering forces.
Ribs are a pattern that includes grooves around the tire in the direction of
rotation.
Lugs are the sections of rubber that make contact with the terrain.
Tread blocks are raised rubber compound segments on the outside visible
part of a tire.
Sipes are small lateral cuts made in the surface of the tread to improve
traction.
Kerfs are shallow slits molded into the tire tread for added traction – this
term is often used interchangeably with sipes.
Grooves are circumferential or lateral channels between adjacent tread ribs or
tread blocks.
Shoulder blocks are the tread elements or segments on the tire tread nearest
to the sidewall.
Voids are the spaces that are located between the lugs.
Lean-Burn NOx-Reducing Catalysts,
"DENOx”
It has already been reported how stoichiometric operation compromises the
efficiency of engines, but that for control of NOx, it is necessary to operate
either at stoichiometric or sufficiently weak (say, an equivalence ratio of
0.6), such that there is no need for NOx reduction in the catalyst.
If a system can be devised for NOx to be reduced in an oxidizing
environment, this gives scope to operate the engine at a higher efficiency. A
number of technologies are being developed for "DENOx," some of which
are more suitable for diesel engines than spark ignition engines. The different
systems are designated active or passive (passive being when nothing must be
added to the exhaust gases). The systems are as follows:
Selective Catalytic Reduction (SCR). In this technique, ammonia
(NH3) or urea (CO(NH2)2) is added to the exhaust stream (representative
reactions are given after this list). This is likely to be
more suited to stationary engine applications. Conversion efficiencies of up
to 80% are quoted, but the NO level must be known, because if too much
reductant is added, ammonia would be emitted.
Passive DENOx. These use the hydrocarbons present in the exhaust to
chemically reduce the NO. There is a narrow temperature window (in the
range 160-220°C [320-428°F] for platinum catalysts) within which the
competition for HC between oxygen and nitric oxide leads to a reduction in
the NOx (Joccheim et al., 1996). The temperature range is a limitation and is
more suited to diesel engine operation. More recent work with copper-
exchanged zeolite catalysts has shown them to be effective at higher
temperatures. By modifying the zeolite chemistry, a peak NOx conversion
efficiency of 40% has been achieved at 400°C (752°F) (Brogan et al., 1998).
Active DENOx Catalysts. These use the injection of fuel to reduce the
NOx, and a reduction in NOx of approximately 20% is achievable with
diesel-engined vehicles on typical drive cycles, but with a 1.5% increase in
fuel consumption (Pouille et al., 1998). Current systems inject fuel into the
exhaust system, but there is the possibility of late in-cylinder injection with
future diesel engines.
NOx Trap Catalysts. In this technology (first developed by Toyota), a
three-way catalyst is combined with a NOx-absorbing material to store the
NOx when the engine is operating in lean-burn mode. When the engine
operates under rich conditions, the NOx is released from the storage media
and reduced in the three-way catalyst.
NOx trap catalysts have barium carbonate deposits between the platinum and
the alumina base. During lean operation, the nitric oxide and oxygen convert
the barium carbonate to barium nitrate. A rich transient (approximately 5 s at
an equivalence ratio of 1.4) is needed every five minutes or so, such that the
carbon monoxide, unburned hydrocarbons, and hydrogen regenerate the
barium nitrate to barium carbonate. The NOx that is released is then reduced
by the partial products of combustion over the rhodium in the catalyst. Sulfur
in the fuel causes the NOx trap to lose its effectiveness because of the
formation of barium sulfate. However, operating the engine at high load to
give an inlet temperature of 600°C (1112°F), with an equivalence ratio of
1.05, for 600 s can be used to remove the sulfate deposits (Brogan et al.,
1998).
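For reference, the chemistry described above can be written, in standard textbook form, as reactions of the following type. For ammonia SCR: 4NH3 + 4NO + O2 → 4N2 + 6H2O, which is why overdosing the reductant relative to the measured NO level lets ammonia slip through. For the NOx trap, storage under lean conditions is of the form BaCO3 + 2NO + 3/2 O2 → Ba(NO3)2 + CO2, and regeneration under rich conditions is of the form Ba(NO3)2 + 3CO → BaCO3 + 2NO + 2CO2, with the released NOx then reduced over the rhodium; the exact stoichiometry depends on the reductant (CO, HC, or H2).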
Automobile History - Top 10 Interesting
Facts
1. Adolf Hitler ordered Ferdinand Porsche to manufacture a Volkswagen,
which literally means 'People's Car' in German. This car went on to become
the Volkswagen Beetle.

2. In 1971, the cabinet of Prime Minister Indira Gandhi proposed the
production of a 'People's Car' for India - the contract of which was given to
Sanjay Gandhi. Before contacting Suzuki, Sanjay Gandhi held talks with
Volkswagen AG for a possible joint venture, encompassing transfer of
technology and joint production of the Indian version of the 'People's car',
that would also mirror Volkswagen's global success with the Beetle.
However, it was Suzuki that won the final contract since it was quicker in
providing a feasible design. The resulting car was based on Suzuki's Model
796 and went on to rewrite automotive history in India as the Maruti 800.
3. Rolls-Royce Ltd. was essentially a car and airplane engine making
company, established in 1906 by Charles Stewart Rolls and Frederick Henry
Royce.
The same year, Rolls-Royce rolled out its first car, the Silver Ghost. In 1907,
the car set a record for traversing 24,000 kilometers during the Scottish
reliability trials.
4. The most expensive car ever sold at a public auction was a 1954 Mercedes-
Benz W196R Formula 1 race car, which went for a staggering $30 million at
Bonhams in July 2013. The record was previously held by a 1957 Ferrari
Testa Rossa Prototype, sold in California at an auction for $16.4 million.
5. As a young man, Henry Ford used to repair watches for his friends and
family using tools he made himself. He used a corset stay as tweezers and a
filed shingle nail as a screwdriver.

6. British luxury car marque Aston Martin's name came from one of the
founders Lionel Martin who used to race at Aston Hill near Aston Clinton.
Difference between turbocharging and
supercharging
A supercharger is an air compressor used for forced induction of an internal
combustion engine. The greater mass flow-rate provides more oxygen to
support combustion than would be available in a naturally aspirated engine.
A supercharger allows more fuel to be burned and more work to be done per
cycle, increasing the power output of the engine. Power for the unit can come
mechanically by a belt, gear, shaft, or chain connected to the engine's
crankshaft.
A turbocharger or turbo is a centrifugal compressor powered by a turbine that
is driven by an engine's exhaust gases. Its benefit lies with the compressor
increasing the mass of air entering the engine (forced induction), thereby
resulting in greater performance (for either, or both, power and efficiency).
They are popularly used with internal combustion engines.
Supercharging and turbocharging are similar in purpose but differ in
operation: both are used to increase engine power, torque, and efficiency by
compressing the intake air, thereby increasing the quantity of air, its pressure,
and its temperature. The difference lies in how the compressor is driven.
In turbocharging, the exhaust gases from the engine cylinder are used to drive
a turbine. The turbine and compressor are mounted on the same shaft. As the
exhaust gases pass through the turbine, they impart energy to it, and the
turbine produces the mechanical work (rotation of the shaft) that drives the
compressor. The compressor then compresses the inlet air before it reaches
the engine cylinder.
In supercharging, the rotation of the crankshaft is used to drive the compressor
through gears, chains, or belt-and-pulley arrangements. When the crankshaft
rotates, the compressor shaft also rotates, since the two are connected
mechanically. The compressor therefore raises the pressure of the inlet air
before it enters the engine cylinder, drawing its drive power directly from the
engine.
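As a rough illustration of why compressing the intake air increases power, the sketch below estimates the gain in intake-charge density from a given boost pressure and charge temperature using the ideal-gas relation; the 1.0 bar boost and 50 °C charge temperature are illustrative assumptions, not values from the text.

def intake_density_ratio(boost_bar, charge_temp_c, ambient_bar=1.013, ambient_temp_c=25.0):
    """Density of the boosted intake charge relative to ambient air,
    using the ideal-gas relation rho proportional to p / T (T in kelvin).

    A charge cooler that keeps charge_temp_c low preserves most of the
    density gain obtained from the pressure increase.
    """
    p_ratio = (ambient_bar + boost_bar) / ambient_bar
    t_ratio = (ambient_temp_c + 273.15) / (charge_temp_c + 273.15)
    return p_ratio * t_ratio

# 1.0 bar of boost with the charge cooled back to 50 degrees C
print(round(intake_density_ratio(1.0, 50.0), 2), "times the naturally aspirated air mass per cycle")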
Why diesel cannot be used in petrol engine?
Petrol is ignited by a spark, whereas diesel is ignited by compression, and the
volatility of petrol is greater than that of diesel. The spark plug produces a
spark only at one location, not throughout the air-fuel mixture. If the air-fuel
mixture from the carburetor is completely vaporised, the flame produced by
the spark can propagate throughout the mixture and burn all of it. Hence, to
produce a completely vaporised air-fuel mixture, the volatility of the fuel must
be high.
Since the volatility of petrol is greater than that of diesel, if diesel is used in a
petrol engine the carburetor cannot produce a finely vaporised air-diesel
mixture because of the low volatility of diesel, and this poorly prepared
mixture reaches the combustion chamber. At the end of the compression
stroke the spark is produced, but it burns the diesel only where it is generated;
the rest of the mixture does not receive enough heat from the flame to burn,
so it remains unburnt and causes efficiency losses. This poor flame
penetration results from improper vaporisation of the air-diesel mixture,
which in turn results from the low volatility of diesel. This is why diesel
cannot be used in a petrol engine.
What is Scavenging?
Scavenging is the process used in IC engines in which the burnt gases are
forced or pushed out of the engine cylinder to the atmosphere by using the
inlet pressure of fresh air.
Importance and Causes of Scavenging:
If the burnt gases inside the engine cylinder are not completely exhausted, the
following will happen:
Any burnt gases left inside the cylinder will be compressed again during the
compression stroke.
Because these residual gases are still hot from combustion, they raise the
temperature of the fresh air-fuel mixture above its normal maximum.
At this elevated temperature, the fuel can ignite before the power stroke,
which leads to abnormal combustion.
Abnormal combustion, in turn, causes the knocking phenomenon.
Why petrol cannot be used in diesel engine?
The fire point of diesel is greater than that of petrol, and the compression
ratio of a diesel engine is greater than that of a petrol engine. In a diesel
engine, the high compression ratio heats the air to a temperature sufficient to
ignite diesel as it is injected. If petrol is used instead, the petrol injected at the
end of compression ignites immediately, before the power stroke begins,
rather than burning progressively through the power stroke as diesel does,
because the air temperature is excessive for petrol while being the normal
ignition temperature for diesel. The premature combustion can push the
piston back toward BDC before it reaches TDC on the compression stroke,
which tends to reverse the engine and may cause vibration and noise. This is
why petrol cannot be used in a diesel engine.
Flywheel
A flywheel is an inertial energy-storage device. It absorbs mechanical energy
and serves as a reservoir, storing energy during the period when the supply of
energy is more than the requirement, and releasing it during the period when
the requirement of energy is more than the supply.
Flywheels - Function, Need and Operation
The main function of a flywheel is to smooth out variations in the speed of a
shaft caused by torque fluctuations. If the source of the driving torque or the
load torque is fluctuating in nature, then a flywheel is usually called for.
Many machines have load patterns that cause the torque-time function to vary
over the cycle; internal combustion engines with one or two cylinders are a
typical example. Piston compressors, punch presses, rock crushers, etc. are
other systems that use a flywheel. A flywheel absorbs mechanical energy by
increasing its angular velocity and delivers the stored energy by decreasing its
velocity.

Design Approach
There are two stages to the design of a flywheel. First, the amount of energy
required for the desired degree of smoothening must be found and the (mass)
moment of inertia needed to absorb that energy determined. Then a flywheel
geometry must be defined that provides the required moment of inertia in a
reasonably sized package and is safe against failure at the design speeds of
operation.
Design Parameters
The flywheel inertia (size) needed depends directly upon the acceptable
change in speed, as the sketch below illustrates.
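A common first-pass sizing relation (a sketch under textbook assumptions, not a formula quoted from this book) links the energy fluctuation per cycle, the mean shaft speed, and the acceptable coefficient of speed fluctuation to the required moment of inertia; the 800 J, 1500 rpm, and Cs = 0.02 figures are illustrative assumptions.

import math

def required_flywheel_inertia(energy_fluctuation_j, mean_rpm, speed_fluctuation_coeff):
    """Moment of inertia (kg*m^2) needed to limit the speed variation.

    Uses the textbook relation I = dE / (Cs * w_mean^2), where
    dE : energy fluctuation per cycle (J)
    Cs : coefficient of speed fluctuation, (w_max - w_min) / w_mean
    """
    w_mean = mean_rpm * 2.0 * math.pi / 60.0          # rpm -> rad/s
    return energy_fluctuation_j / (speed_fluctuation_coeff * w_mean ** 2)

# Single-cylinder engine: 800 J of fluctuation at 1500 rpm, Cs limited to 0.02
print(round(required_flywheel_inertia(800.0, 1500.0, 0.02), 2), "kg*m^2")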

Unmanned Aerial Vehicles

Unmanned Aerial Vehicles (UAVs) are expected to serve as aerial robotic
vehicles that perform tasks on their own. Computer vision is applied in UAVs
to improve their autonomy, both in flight control and in perception of the
environment around them. A survey of research in this field is presented.
Based on images and videos captured by on-board camera(s), vision methods
such as stereo vision and optical flow fields extract useful features, which can
be integrated with the flight control system to form a visual servoing loop.
Aiming at the use of hand gestures for human-computer interaction, this
work presents a novel approach for hand gesture-based control of UAVs.
The research focused mainly on solving some of the most important
problems that current HRI (Human-Robot Interaction) systems contend with.
The problem of hand gesture recognition is addressed with a simple approach
based on web cameras, image processing, motion detection, and
histogram-based algorithms, which makes it effective in unconstrained
environments, easy to implement, and fast. Highly flexible manufacturing
(HFM) is a methodology that integrates vision and flexible robotic grasping.
The proposed set of hand grasping shapes presented here is based on the
capabilities and mechanical constraints of the robotic hand. Pre-grasp shapes
for a Barrett Hand are studied and defined using finger spread and flexion. In
addition, a simple and efficient vision algorithm is used to servo the robot
and to select the pre-grasp shape in the pick-and-place task of 9 different
vehicle handle parts. Finally, experimental results evaluate the ability of the
robotic hand to grasp both pliable and rigid parts and successfully control the
UAV.
Visual Navigation:
GPS and inertial sensors are typically combined to estimate the UAV's state
and form a navigation solution. However, in some circumstances, such as
urban or low-altitude areas, the GPS signal may be very weak or even lost. In
these situations, visual data can be used as an alternative or substitute for
GPS measurements in the formulation of a navigation solution. This section
describes vision-based UAV navigation.
A. Autonomous Landing
Autonomous landing is a crucial capability and requirement for UAV
autonomous navigation, and it provides the basic ideas and methods for UAV
autonomy. Generally, UAVs are classified into Vertical Take-Off and Landing
(VTOL) UAVs and fixed-wing UAVs. For VTOL UAV vision-based landing,
Sharp et al.
designed a landing target with a simple pattern on which corner points can be
easily detected and tracked. By tracking these corners, the UAV can determine
its position relative to the landing target using computer vision methods.
Details are described as follows. Given the corner points, estimating the UAV
state is an optimization problem. The equation relating a point in the
landing-pad coordinate frame to the image of that point in the camera frame is
the standard perspective projection defined by the geometry of the coordinate
frames and the Euclidean motions involved in the vision-based state
estimation problem; a sketch of this relation is given below.
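A minimal sketch of that projection (the standard pinhole-camera model, with illustrative intrinsics and pose rather than parameters from the cited work) is:

import numpy as np

def project_point(p_pad, R, t, K):
    """Project a 3-D point given in the landing-pad frame into pixel coordinates.

    R, t : rotation and translation taking landing-pad coordinates into the
           camera frame (the pose that the state estimator must solve for).
    K    : 3x3 camera intrinsic matrix.
    """
    p_cam = R @ p_pad + t          # rigid-body (Euclidean) motion
    uvw = K @ p_cam                # perspective projection
    return uvw[:2] / uvw[2]        # normalise by depth

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                       # camera axes aligned with the pad frame
t = np.array([0.0, 0.0, 5.0])       # hovering 5 m above the pad
print(project_point(np.array([0.2, 0.1, 0.0]), R, t, K))   # [352. 256.]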
Fixed-wing UAV autonomous landing is similar to that of VTOL aircraft in
theory but more complicated in practice. The vision subsystem should
recognize the runway and keep tracking it during landing. A Kalman filter is
introduced to maintain tracking stability. The horizon also needs to be
detected. From the runway and the horizon, the vision subsystem can estimate
the UAV's state, i.e., location (x, y, z) and attitude (pitch, roll, yaw), etc.
Autonomous Refuelling
The deployment of UAVs has been tested in overseas conflicts, where one of
the biggest limitations found was their limited range. To extend their range,
UAVs are expected to be capable of Autonomous Aerial Refuelling (AAR).
There are two methods of aerial refuelling, i.e. the refuelling boom and
"probe and drogue". Very much as in automatic landing, vision-based
methods for keeping the UAV's pose and position relative to the flying tanker
during docking and refuelling receive great attention. AAR also requires
consideration of several reference frames, such as the UAV, tanker, and
camera frames. However, AAR is much more sensitive and faces more subtle
air disturbances; it requires 0.5 to 1.0 cm accuracy in the relative position.
A fixed number of visible optical markers are assumed to be available to help
the vision subsystem with detection. However, temporary loss of visibility
may occur because of hardware failures and/or physical interference.
Fravolini et al. proposed a specific docking control scheme featuring a fusion
of GPS and machine-vision distance measurements to tackle this problem.
Such studies are still at the simulation stage.
Autonomous Flight
Autonomous flight is the extension of autonomous landing. Ideally, it means
that the UAV is capable of high-level environment understanding and
decision making, e.g. determining its position, estimating its attitude,
detecting and avoiding obstacles, and planning its path, without outside
instruction, guidance, or intervention. Some studies of UAV autonomous
manoeuvring have considered different conditions, including GPS signal
failure and unstructured or unknown flying zones. Madison et al. discussed
visual navigation of a miniature UAV in GPS-challenged environments, e.g.
indoors.
The vision subsystem geo-locates some landmarks while GPS still provides
accurate navigation. Once GPS becomes unavailable, the vision subsystem
geo-locates new landmarks relative to the predefined ones. Using these
landmarks, the vision subsystem provides information for navigation. Tests
show that drift under vision-aided navigation is significantly lower than under
inertial-only navigation. NASA Ames Research Centre runs a Precision
Autonomous Landing Adaptive Control Experiment.
Laser Ignition System
Economic as well as environmental constraints demand a further reduction in
the fuel consumption and the exhaust emissions of motor vehicles. At the
moment, direct-injection fuel engines show the highest potential for reducing
fuel consumption and exhaust emissions. Unfortunately, conventional spark
plug ignition shows a major disadvantage with modern spray-guided
combustion processes, since the ignition location cannot be chosen optimally.
It is important that the spark plug electrodes are not hit by the injected fuel
because otherwise severe damage will occur. Additionally, the spark plug
electrodes can influence the gas flow inside the combustion chamber. It is
well known that short and intensive laser pulses are able to produce an
"optical breakdown" in air. The necessary intensities are in the range of 10^10
to 10^11 W/cm^2. At such intensities, gas molecules are dissociated and
ionized within the vicinity of the focal spot of the laser beam, and a hot
plasma is generated. This plasma is heated by the incoming laser beam and a
strong shock wave occurs. The expanding hot plasma can be used for the
ignition of fuel-gas mixtures.
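To see how readily a Q-switched pulse reaches the quoted breakdown intensities of 10^10 to 10^11 W/cm^2, the sketch below estimates the intensity at the focal spot; the 10 mJ pulse energy, 10 ns duration, and 50 micrometre spot diameter are illustrative assumptions, not values from the text.

import math

def focal_intensity_w_per_cm2(pulse_energy_j, pulse_duration_s, spot_diameter_um):
    """Order-of-magnitude intensity at the focus of a pulsed laser beam.

    Intensity = pulse power / focal-spot area, assuming a uniform spot.
    """
    power_w = pulse_energy_j / pulse_duration_s
    radius_cm = (spot_diameter_um * 1e-4) / 2.0       # micrometres -> cm
    area_cm2 = math.pi * radius_cm ** 2
    return power_w / area_cm2

# 10 mJ delivered in 10 ns, focused to a 50 micrometre spot
print(f"{focal_intensity_w_per_cm2(10e-3, 10e-9, 50.0):.1e} W/cm^2")   # about 5e10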
Drawbacks Of Conventional Spark Ignition
· Location of the spark plug is not flexible as it requires shielding of the plug
from immense heat and fuel spray.
· It is not possible to ignite inside the fuel spray.
· It requires frequent maintenance to remove carbon deposits.
· Leaner mixtures cannot be burned.
· Degradation of electrodes at high pressure and temperature.
· Flame propagation is slow.
· Multi point fuel ignition is not feasible.
· Higher turbulence levels are required.
What Is Laser?
Lasers provide an intense and unidirectional beam of light. Laser light is
monochromatic (one specific wavelength); the wavelength is determined by
the amount of energy released when an electron drops to a lower orbit. The
light is coherent, meaning all the photons share the same wavefront and are
emitted in unison. Laser light therefore forms a tight, strong, and concentrated
beam.
Producing these three properties requires a process called "stimulated
emission," in which photon emission is organized. The main parts of a laser
are the power supply, the lasing medium, and a pair of precisely aligned
mirrors; one has a totally reflective surface and the other is partially reflective
(96%). The most important part of the laser apparatus is the laser crystal. The
most commonly used laser crystal is man-made ruby, consisting of
aluminium oxide and 0.05% chromium. The crystal rods are round, and their
end surfaces are made reflective.
A laser rod for 3 J is approximately 6 mm in diameter and 70 mm in length.
The laser rod is excited by a xenon-filled flash lamp that surrounds it; both
are enclosed in a highly reflective cylinder, which directs light from the flash
lamp into the rod. The chromium atoms are excited to higher energy levels
and emit photons when they return to their normal state, so very high energy
is obtained in short pulses. The ruby rod becomes less efficient at higher
temperatures, so it is continuously cooled with water, air, or liquid nitrogen.
The ruby rod is the lasing medium, and the flashtube pumps it.

Laser Induced Spark Ignition


The process begins with multi-photon ionization of few gas molecules which
releases electrons that readily absorb more photons via the inverse
bremsstrahlung process to increase their kinetic energy. Electrons liberated
by this means collide with other molecules and ionize them, leading to an
electron avalanche, and breakdown of the gas. Multiphoton absorption
processes are usually essential for the initial stage of breakdown because the
available photon energy at visible and near IR wavelengths is much smaller
than the ionization energy.
For very short pulse duration (few picoseconds) the multiphoton processes
alone must provide breakdown, since there is insufficient time for electron-
molecule collision to occur. Thus this avalanche of electrons and resultant
ions collide with each other producing immense heat hence creating plasma
which is sufficiently strong to ignite the fuel. The wavelength of the laser
depends upon the absorption properties of the medium, and the minimum
energy required depends upon the number of photons required to produce the
electron avalanche.

The minimum ignition energy (MIE) required for laser ignition is more than
that for electric spark ignition for the following reasons. An initial
comparison is useful for establishing the model requirements and for
identifying the causes of the higher laser MIE. First, the volume of a typical
electrical ignition spark is 10^-3 cm3, whereas the focal volume for a typical
laser spark is 10^-5 cm3. Since atmospheric air contains roughly 1000
charged particles/cm3, the probability of finding a charged particle in the
discharge volume is very low for a laser spark. Second, an electrical discharge
is part of an external circuit that controls the power input, which may last
milliseconds, although high power input to ignition sparks is usually designed
to last <100 ns.
Breakdown and heating of laser sparks depend only on the gas, optical, and
laser parameters, while the energy balance of spark discharges depends on
the circuit, gas, and electrode characteristics. The efficiency of energy
transfer to near-threshold laser sparks is substantially lower than to electrical
sparks, so more power is required to heat laser sparks. Another reason is that
energy in the form of photons is wasted before the beam reaches the focal
point, heating and ionizing the charge present in the path of the laser beam.
This can also be seen from the flame, which propagates longitudinally along
the laser beam. This loss of photons is another reason why the minimum
energy required for laser ignition is higher than that for an electric spark.
Advantages
· Location of the spark is flexible as it does not require shielding from
immense heat and fuel spray, and the focal point can be placed anywhere in
the combustion chamber.
· It is possible to ignite inside the fuel spray as there is no physical component
at the ignition location.
· It does not require maintenance to remove carbon deposits because of its
self-cleaning property.
· Leaner mixtures can be burned, as ignition can take place anywhere inside
the combustion chamber where the certainty of fuel presence is very high.
· High pressure and temperature do not affect the performance, allowing the
use of high compression ratios.
· Flame propagation is fast, as multipoint fuel ignition is also possible.
· Higher turbulence levels are not required, due to the above advantages.
Technology of Hydrogen Fuelled Rotary
Engine
This hydrogen engine takes advantage of the characteristics of Mazda’s
unique rotary engine and maintains a natural driving feeling unique to
internal combustion engines. It also achieves excellent environmental
performance with zero CO2 emissions.
Further, the hydrogen engine ensures performance and reliability equal to that
of a gasoline engine. Since the gasoline version requires only a few design
changes to allow it to operate on hydrogen, hydrogen-fueled rotary engine
vehicles can be realized at low cost. In addition, because the dual-fuel system
allows the engine to run on both hydrogen and gasoline, it is highly
convenient for long-distance journeys and trips to areas with no hydrogen
fuel supply.

Technology of the RENESIS Hydrogen Rotary Engine:


The RENESIS hydrogen rotary engine employs direct injection, with
electronically-controlled hydrogen gas injectors. This system draws in air
from a side port and injects hydrogen directly into the intake chamber with an
electronically-controlled hydrogen gas injector installed on the top of the
rotor housing. The technology illustrated below takes full advantage of the
benefits of the rotary engine in achieving hydrogen combustion.

RE Features suited to Hydrogen Combustion


In the practical application of hydrogen internal combustion engines,
avoidance of so-called backfiring (premature ignition) is a major issue.
Backfiring is ignition caused by the fuel coming in contact with hot engine
parts during the intake process. In reciprocating engines, the intake,
compression, combustion and exhaust processes take place in the same
location—within the cylinders. As a result, the ignition plugs and exhaust
valves reach a high temperature due to the heat of combustion and the intake
process becomes prone to backfiring.
In contrast, the RE structure has no intake and exhaust valves, and the low-
temperature intake chamber and high-temperature combustion chamber are
separated. This allows good combustion and helps avoid backfiring.
Further, the RE encourages thorough mixing of hydrogen and air since the
flow of the air-fuel mixture is stronger and the duration of the intake process
is longer than in reciprocating engines.
Combined use of Direct Injection and Premixing
Aiming to achieve a high output in hydrogen fuel mode, a direct injection
system is applied by installing an electronically-controlled hydrogen gas
injector on the top of the rotor housing. Structurally, the RE has considerable
freedom of injector layout, so it is well suited to direct injection.
Further, a gas injector for premixing is installed on the intake pipe enabling
the combined use of direct injection and premixing, depending on driving
conditions. This produces optimal hydrogen combustion.
When in the gasoline fuel mode, fuel is supplied from the same gasoline
injector as in the standard gasoline engine.

Adoption of Lean Burn and EGR


Lean burn and exhaust gas recirculation (EGR) are adopted to reduce
nitrogen oxide (NOx) emissions. NOx is primarily reduced by lean burn at
low engine speeds, and by EGR and a three-way catalyst at high engine
speeds. The three-way catalyst is the same as the system used with the
standard gasoline engine.
Optimal and appropriate use of lean burn and EGR satisfies both goals of
high output and low emissions. The volume of NOx emissions is about 90
percent reduced from the 2005 reference level.
Dual Fuel System
When the system runs out of hydrogen fuel, it automatically switches to
gasoline fuel. For increased convenience, the driver can also manually shift
the fuel from hydrogen to gasoline at the touch of a button.
Common Rail Type Fuel Injection
System
An electronically controlled common rail fuel injection system drives an
integrated fuel pump at ultra-high pressure to distribute fuel through a
common rail to the injector at each cylinder.
This enables optimum combustion to generate high power output and to
reduce PM* (particulate matter) and fuel consumption.
Bosch will supply the complete common-rail injection system for the high-
performance 12-cylinder engine introduced by Peugeot Sport for its latest
racing car. The system comprises high-pressure pumps, a fuel rail shared by
all cylinders (i.e. a common rail), piezo in-line injectors, and the central
control unit which compiles and processes all relevant sensor data.
Disi Turbo | Direct Injection Spark
Ignition Technology
DISI includes a whole new set of innovations for gasoline engines. To
mention a few, direct injection (including cooling the air-gasoline mixture), a
new combustion chamber geometry, variable timing technology, and
nanotechnology for the catalyst. This all makes the engines consume 20
percent less while getting 15 to 20 percent better performance.

Further developments for its diesels include new direct injection technology
(most European automakers are switching to piezoelectric injectors), a lighter
engine, a DPF (diesel particulate filter), and urea technology to reduce NOx
emissions.
Mazda’s DISI* engines balance sporty driving with outstanding environmental
performance. With the next-generation engine in the series, we are aiming for
a 15% ~ 20% improvement in dynamic performance and a 20% increase in
fuel economy (compared with a Mazda 2.0L gasoline engine). Based on the
direct injection system, we aim to reduce all energy losses (see figure on the
right) and improve thermal efficiency through innovative engineering in a
variety of technological areas. Among these technologies we are paying
particular attention to direct injection, combustion control, variable valve
system technology and catalyst technology. Also, among the various fuels on
the market, we are studying the use of flex-fuel.

Biotech Materials | Bio-Plastics | Bio-Fabrics
Today, various automobile parts are made from plastics, which are reliant on
the supply of petroleum. There is a need to find new materials for these parts
so we can promote a post-petroleum era and reduce CO2 emissions.

The automobile industry’s first plant-derived bio-plastic, which can be
injection-molded to ensure thermal and shock resistance and a beautiful
finish.
High-Strength, Heat-Resistant Materials

To be suitable for use as automobile parts, plant-derived plastics
(bio-plastics) must have the required strength (shock impact resistance) and heat
resistance.
It resulted in the creation of a bio-plastic with the high strength, heat
resistance and high quality finish necessary for injection-molded automobile
interior parts. It is the first bio-plastic in the automobile industry that
maintains a high plant-derived content (over 80 percent). We altered the
molecular structure of poly-lactic acid extracted from plants to raise its
melting point and developed it as a nucleating agent. A compatibilizer
compound*2 was also developed to highly disperse the shock-absorbing
flexible ingredients. These two breakthroughs improved the material’s ability
to uniformly absorb and release energy generated by impacts.
This bio-plastic has three times the shock impact resistance and 25
percent higher heat resistance compared to contemporary bio-plastics
used for items such as electrical appliances.
And unlike conventional bio-plastics whose properties are suitable for press-
forming only, Mazda’s bio-plastic can be extrusion-molded. Consequently,
this bio-plastic can be used for various car parts.
The Premacy Hydrogen RE Hybrid featured this bio-plastic in the vehicle’s
instrument panel and other interior fittings.
Less CO2 Emitted, Less Energy Consumed and less Material Used
Bio-plastic is a plant-derived and carbon-neutral material. It reduces reliance
on fossil fuels and therefore also cuts CO2 emissions. In addition, its
manufacture involves fermentation of natural materials such as starches and
sugars. As a result, it requires 30 percent less energy to produce than
petroleum-based polypropylene plastics. The new bio-plastic is also stronger
than other plastics, which means parts can be thinner so less material is
required for production.
Bio-Fabrics:

The world’s first bio-fabric made with completely plant-derived fibers,
suitable for use in vehicle interiors. This bio-fabric does not contain any oil-
based materials, yet it possesses the qualities and durability required for use
in vehicle seat covers. Resistant to abrasion and damage from sunlight, in
addition to being flame retardant, the new bio-fabric meets the highest quality
standards.

A new poly-lactic acid was developed as a crystallization agent to control the
entire molecular architecture of raw resins to form a "stereocomplex
structure*2."
The technique was used to improve fiber strength until the fabric attained
sufficient resistance to abrasion and light damage for practical use in vehicle
seat covers.

The technology enables the production of fibers made from 100 percent
plant-derived poly-lactic acid that are well suited to automobile applications,
together with other crucial qualities necessary for the highest-performing
fabrics, such as fire-retardant properties.
Hybrid Synergy Drive (Hsd)
Technology
What Is a Hybrid System?
A hybrid system combines different power sources to maximize each one’s
strengths, while compensating for the others’ shortcomings. A gasoline-
electric hybrid system, for example, combines an internal combustion
engine’s high-speed power with the clean efficiency and low-speed torque of
an electric motor that never needs to be plugged in.

Are All Hybrids Created Equal?


There are several ways in which electric motors and a gas/petrol engine can
be combined.
Toyota perfected the series/parallel or "full" hybrid to deliver the energy-
saving benefit of a series hybrid together with the acceleration benefit of a
parallel hybrid. Two key technologies — the power split device and
sophisticated energy management — make this possible. They constantly
optimize the flows of mechanical power and electric power for safe and
comfortable vehicle operation at the highest possible efficiency.
The Full Hybrid
Toyota’s unique hybrid system combines an electric motor and a gasoline
engine in the most efficient manner. It saves fuel and reduces emissions while
giving ample power.
Taking advantage of the electric motors’ low-speed torque at start-off
When the car starts off, Toyota’s hybrid vehicles use only the electric motors,
powered by the battery, while the gas/petrol engine remains shut off. A
gas/petrol engine cannot produce high torque in the low rpm range, whereas
electric motors can – delivering a very responsive and smooth start.
Ultimate Eco Car Challenge -
Development

Continuous improvement in conventional engines, including lean-burn
gasoline engines, direct injection gasoline engines and common rail direct-
injection diesel engines, as well as engines modified to use alternative fuels,
such as compressed natural gas (CNG) or electricity (for Electric Vehicle).
Engineers may disagree about which fuel or car propulsion system is best, but
they do agree that hybrid technology is the core for eco-car development.
“Plug-in hybrid” technology brings further potential for substantial CO2
emissions reductions from vehicles. It has a higher battery capacity and is
thus more fuel-efficient than the current hybrid, which is assisted by the
power of the engine. For a short-distance drive, it could be run on electricity
charged
during the night. Depending on how electricity is generated, the vehicle could
run with much lower CO2 emissions. In order to commercialize the plug-in
hybrid, there is again a need for a breakthrough in battery technology. It is
necessary to develop a smaller-sized battery with higher capacity. Plug-in
hybrids could contribute to reducing substantial amounts of CO2 emissions
from vehicles, as well as fossil fuel use, by charging from cleaner electricity
sources in the future.
Challenges of increasing power performance
In order to improve the driving performance, its power train was completely
redesigned. To increase motor output, a high-voltage power-control was
adopted. Although this technology was used in industrial machines and
trains, the idea of incorporating it into an automobile did not easily occur at
first. First of all, the system itself would take up a substantial amount of
space and secondly, there was no prior example of applying this method to a
motor that switches between output and power generation at such a dizzying
pace.
Once the development of the high-voltage power circuit began, there was a
mountain of problems, such as what to do about the heat generated by
increasing voltage and the noise generated. To reevaluate the power train, the
project team had to produce prototypes and repeat numerous tests. The
prototyping stage went to seven prototypes instead of the usual three, and the
total distance driven by these prototypes during testing exceeded one million
kilometers.
Fuel Cell Technology
The fuel cell vehicle (FCV) is the nearest thing yet to an "ultimate eco-car" that offers solutions to
energy and emissions issues.

FCVs are powered by fuel cells, which generate electricity from hydrogen,
which is not only environmentally friendly and highly energy-efficient, but
can also be produced using a variety of readily available raw materials.
Thanks to these characteristics, fuel cell vehicles are ideal for achieving
sustainable mobility. Therefore, Toyota is striving to make this vehicle
technology widely available as soon as possible.
● Successful startup: -30° Celsius
● Extended cruising range: 830 km (JC08 mode) without refueling
At a steady cruising speed, the motor is powered by energy from the fuel cell.
When more power is needed, for example during sudden acceleration, the
battery supplements the fuel cell’s output. Conversely, at low speeds when
less power is required, the vehicle runs on battery power alone. During
deceleration the motor functions as an electric generator to capture braking
energy, which is stored in the battery.
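A highly simplified, rule-based sketch of that power-split behaviour is given below; the function name, the 90 kW fuel-cell rating, and the thresholds are illustrative assumptions and not Toyota's actual control strategy.

def split_power(demand_kw, fuel_cell_max_kw=90.0, battery_soc=0.6):
    """Decide how a power demand is shared between fuel cell and battery.

    Positive battery power means the battery is discharging; negative means
    it is absorbing regenerated braking energy.
    """
    if demand_kw < 0:                                  # deceleration: regenerate
        return 0.0, demand_kw
    if demand_kw <= 10.0 and battery_soc > 0.4:
        return 0.0, demand_kw                          # low speed: battery alone
    fuel_cell_kw = min(demand_kw, fuel_cell_max_kw)
    return fuel_cell_kw, demand_kw - fuel_cell_kw      # battery tops up the peaks

print(split_power(5.0))      # creeping in traffic  -> (0.0, 5.0)
print(split_power(60.0))     # steady cruising      -> (60.0, 0.0)
print(split_power(110.0))    # hard acceleration    -> (90.0, 20.0)
print(split_power(-15.0))    # braking              -> (0.0, -15.0)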
World’s First Air-Powered Car | Zero Emissions
India’s largest automaker is set to start producing the world’s first
commercial air-powered vehicle. The Air Car, developed by ex-Formula One
engineer Guy Nègre for Luxembourg-based MDI, uses compressed air, as
opposed to the gas-and-oxygen explosions of internal-combustion models, to
push its engine’s pistons. Some 6000 zero-emissions Air Cars are scheduled
to hit Indian streets in August of 2008.

Barring any last-minute design changes on the way to production, the Air Car
should be surprisingly practical. The $12,700 City CAT, one of a handful of
planned Air Car models, can hit 68 mph and has a range of 125 miles. It will
take only a few minutes for the City CAT to refuel at gas stations equipped
with custom air compressor units; MDI says it should cost around $2 to fill
the car’s carbon-fiber tanks with 340 liters of air at 4350 psi. Drivers also will
be able to plug into the electrical grid and use the car’s built-in compressor to
refill the tanks in about 4 hours.
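For a sense of how much usable energy such a tank holds, the sketch below applies an idealised isothermal-expansion estimate to the 340 litre, 4350 psi figures quoted above; real machines recover considerably less.

import math

def isothermal_air_energy_kwh(volume_l, pressure_psi, ambient_pa=101325.0):
    """Upper-bound energy stored in a compressed-air tank.

    Uses the ideal isothermal expansion work W = p1 * V1 * ln(p1 / p0);
    adiabatic expansion and machine losses make the recoverable work lower.
    """
    p1 = pressure_psi * 6894.76          # psi -> Pa
    v1 = volume_l / 1000.0               # litres -> m^3
    work_j = p1 * v1 * math.log(p1 / ambient_pa)
    return work_j / 3.6e6                # J -> kWh

# The 340 litre, 4350 psi tank quoted for the City CAT
print(round(isothermal_air_energy_kwh(340.0, 4350.0), 1), "kWh (ideal upper bound)")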
Of course, the Air Car will likely never hit American shores, especially
considering its all-glue construction. But that doesn’t mean the major
automakers can write it off as a bizarre Indian experiment — MDI has signed
deals to bring its design to 12 more countries, including Germany, Israel and
South Africa.

Air Car Is Heading For Mass Production


The Air Car is the brainchild of Guy Negre, a French inventor and former
Formula One engineer. In February, Negre’s company, Motor Development
International (MDI), announced a deal to manufacture the technology with
Tata Motors, India’s largest commercial automaker and a major player
worldwide. “It’s an innovative technology, it’s an environment-friendly
technology, and a scalable technology,” says Tata spokesperson Debasis
Ray. “It can be used in cars, in commercial vehicles, and in power generation.”

Though Negre first unveiled the technology in the early 1990s, interest has
only recently grown. In addition to the Tata deal, which could put thousands
of the cars on the road in India by the end of the decade, Negre has signed
deals to bring the design to twelve other countries, including South Africa,
Israel, and Germany. But experts say the car may never make it to US streets.
The Air Car works similarly to electric cars, but rather than storing electrical
energy in a huge, heavy battery, the vehicle converts energy into air pressure
and stores it in a tank. According to MDI’s Miguel Celades, Negre’s engine
uses compressed air stored at a pressure of 300 bars to pump the pistons,
providing a range of around 60 miles per tank at highway speeds. An onboard
air compressor can be plugged into a regular outlet at home to recharge the
tank in about four hours, or an industrial compressor capable of 3,500 psi
(like those found in scuba shops) can fill it up in a few minutes for around
two dollars. Celades says optional gasoline or biofuel hybrid models will heat
the pressurized air, increasing the volume available for the pistons and
allowing the car to drive for nearly 500 miles between air refills and about
160 miles per gallon of fuel burned.
Early media reports speculated that Tata could have an Air Car on the market
by the end of 2008, but Ray says it’s likely to be a couple of years before the
technology is available. Until the Indian models hit the streets, the best way
to see an Air Car in action is to cross the pond and check out Negre’s
prototypes in France, a trip entrepreneur J. P. Maeder says is worthwhile.
“It’s not a fantasy,” he says of the car. “It can make a real impact in how
personal transportation will develop from here.”
In 2003, Maeder formed ZevCat, a California company that aims to bring the
Air Car to America. So far, however, he says his plans have stalled for
financial reasons: Without enough money to build and crash test prototypes,
he can’t demonstrate the technology for investors who might be willing to
fund more prototypes.
The car might garner more attention in the US if it makes it to market in India
or elsewhere before other burgeoning technologies like plug-in hybrids or
fuel-cell electric cars. If that were to happen, compressed air could become
the “next big thing” for green-minded drivers, says Larry Rinek, an auto
analyst with the international market-research firm Frost & Sullivan. But
Rinek questions whether the car will have mass appeal. Another unknown is
whether the vehicle could pass crash tests.
“This is an R and D novelty; it’s a curiosity that is nowhere near ready for
primetime, ” says Rinek. “It’s unknown and untrusted, particularly here in
North America” where, he says, adoption of new technology moves “very
slowly. ”
Air-Powered Car Coming To Hit 1000-Mile Range
The Air Car caused a huge stir when we reported last year that Tata Motors
would begin producing it in India. Now the little gas-free ride that could is
headed Stateside in a big-time way.

Zero Pollution Motors (ZPM) confirmed on Thursday that it expects to
produce the world’s first air-powered car for the United States by late 2009 or
early 2010. As the U.S. licensee for Luxembourg-based MDI, which
developed the Air Car as a compression-based alternative to the internal
combustion engine, ZPM has attained rights to build the first of several
modular plants, which are likely to begin manufacturing in the Northeast and
grow for regional production around the country, at a clip of up to 10,000 Air
Cars per year.
And while ZPM is also licensed to build MDI’s two-seater One CAT
economy model (the one headed for India) and three-seat Mini CAT (like a
Smart For Two without the gas), the New Paltz, N.Y., startup is aiming
bigger: Company officials want to make the first air-powered car to hit U.S.
roads a $17,800, 75-hp equivalent, six-seat modified version of MDI’s City
CAT (pictured above) that, thanks to an even more radical engine, is said to
travel as far as 1000 miles at up to 96 mph with each tiny fill-up.
We’ll believe that when we drive it, but MDI’s new dual-energy engine—
currently being installed in models at MDI facilities overseas—is still pretty
damn cool in concept. After using compressed air fed from the same Airbus-
built tanks in earlier models to run its pistons, the next-gen Air Car has a
supplemental energy source to kick in north of 35 mph, ZPM says. A custom
heating chamber heats the air in a process officials refused to elaborate upon,
though they insisted it would increase volume and thus the car’s range and
speed.
"I want to stress that these are estimates, and that we’ll know soon more
precisely from our engineers," ZPM spokesman Kevin Haydon told PM, "but
a vehicle with one tank of air and, say, 8 gal. of either conventional petrol,
ethanol or biofuel could hit between 800 and 1000 miles."
Those figures would make the Air Car, along with Aptera’s Typ-1 and
Tesla’s Roadster, a favorite among early entrants for the Automotive X Prize,
for which MDI and ZPM have already signed up. But with the family-size,
four-door City CAT undergoing standard safety tests in Europe, then side-
impact tests once it arrives in the States, could it be the first 100-mpg,
nonelectric car you can actually buy?
Future of Car Infotainment Systems
Voice-enabled GPS devices add another level of sophistication to the advanced
GPS navigation market.
GPS technology was originally developed for military use. The global navigation
of naval warships, missile guidance and the movement of troops on the ground
created the need for precise positioning technology, which has since found a
strong market among private users and consumers as well. In fact, you can find
GPS devices everywhere.

Navigation devices have seen recent improvements in screen size, accuracy and
live traffic reporting. However, just as in the mobile phone market, improving
safety and convenience became a priority as more people began installing GPS
devices in their vehicles. Demand for voice-activated GPS models has grown year
after year, and hands-free operation was the next logical functionality to
include.
Here is an example: the next-generation Ford Fiesta car infotainment system.
The web-enabled browser fetches a Google map for the GPS device when the driver
speaks the street information to the voice-activated Bluetooth device. The GPS
then displays an accurate map and guides the driver along the route, making
driving considerably easier. This is the future of driving.
Fuel Cell Car | How Fuel Cell Works | Detail
Explanation

Fuel Cell Stacks


This is the heart of the hydrogen fuel cell car—the fuel cell stacks. Their
maximum output is 86 kilowatts, or about 115 HP. Because hydrogen fuel
cell stacks produce power without combustion, they can be up to twice as
efficient as internal combustion engines. They also produce zero carbon
dioxide and other pollutants. (For more information, see How Fuel Cells Work below.)
Fuel Cell Cooling System

This has several parts. Perched at an angle at the front of the vehicle is a large
radiator for the fuel cell system, while two radiators for the motor and
transmission lie ahead of the front wheels below the headlights. The car also
has a cooling pump located near the fuel cell stacks to stabilize temperature
within the stacks.
Ultra capacitor
This unit serves as a supplementary power source to the fuel cell stack. Like a
large battery, the ultra capacitor recovers and stores energy generated during
deceleration and braking. It uses this energy to provide a "power assist"
during startup and acceleration.
Hydrogen Tanks
Space in a car is limited, yet hydrogen is the most dispersive element in the
universe and normally requires lots of room. A challenge for manufacturers is
how to compress the gas into tanks small enough to fit in a compact car and
yet still provide enough fuel for hundreds of miles of driving between
refueling. The two high-pressure hydrogen tanks in this vehicle can hold up
to 3.75 kilograms of hydrogen compressed to roughly 5,000 PSI—enough to
enable an EPA-rated 190 miles of driving before refueling, the manufacturer
says.
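Some quick arithmetic on those quoted figures (the heating value of hydrogen is a standard approximate value; the rest comes from the text above):

HYDROGEN_KG = 3.75          # usable hydrogen in the two tanks
EPA_RANGE_MILES = 190       # EPA-rated range quoted above
LHV_H2_MJ_PER_KG = 120.0    # approximate lower heating value of hydrogen

miles_per_kg = EPA_RANGE_MILES / HYDROGEN_KG
energy_onboard_mj = HYDROGEN_KG * LHV_H2_MJ_PER_KG

print(round(miles_per_kg), "miles per kg of hydrogen")      # about 51
print(round(energy_onboard_mj), "MJ of chemical energy")    # about 450

For comparison, a US gallon of gasoline holds roughly 120 MJ, so the hydrogen on board carries chemical energy equivalent to a little under four gallons of gasoline while delivering the EPA-rated 190 miles.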
Electric Motor
(General area only—motor not visible) The electric motor offers a maximum
output of 80 kilowatts, enabling a top speed of about 93 miles per hour. The
manufacturer says this vehicle can also start in subfreezing temperatures
(down to about -4°F), a perennial problem in fuel cell prototypes. Being
electric, the engine and the car as a whole are quiet, with none of the
vibration or exhaust noise of a gas-powered automobile.
Air Pump
(General area only—air pump not visible) Run by a high-voltage electric
motor, this pump supplies air at the appropriate pressure and flow rate to the
fuel cell stacks. The air, in turn, mixes with the stored hydrogen to create
electricity.
Humidifier
The humidifier monitors and maintains the level of humidity that the fuel cell
stack needs to achieve peak operating efficiency. It does this by recovering
some of the water from the electrochemical reaction that occurs within the
fuel cell stack and recycling it for use in humidification.
Power Control Unit
(General area only—power control unit not visible) This controls the
vehicle’s electrical systems, including the air and cooling pumps as well as
output from the fuel cell stacks, electric motor, and ultra capacitor.
Cabin
With the fuel cell stacks hidden beneath the floor and the hydrogen tanks and
the ultra capacitor beneath and behind the rear seats, respectively, the four-
passenger cabin is isolated from all hydrogen and high-voltage lines.
Hydrogen gas is colorless and odorless, and it burns almost invisibly. In case
of a leak, therefore, the manufacturer has placed hydrogen sensors throughout
the vehicle to provide warning and automatic gas shut-off. Also, in the event
of a collision, the electrical source power line shuts down.
Hydrogen Filler Mouth
(Not visible—located on other side of vehicle) Drivers would fill the car with
hydrogen just as they do with gasoline, through an opening on the side of the
vehicle. The main difference is that a fuel cell car must be grounded before
fueling to rid the car of hazardous static electricity. For this reason, this
model has two side-by-side openings, with the latch to open the hydrogen
filler mouth located inside the opening for the grounding wire. The
manufacturer says filling up this model’s two tanks at a hydrogen filling
station would take about three minutes.
Note
The limited-production vehicle seen in this feature is a Honda 2005 FCX,
which is typical of the kinds of hydrogen fuel cell automobiles that some
major automakers are now researching and developing. With such vehicles at
present costing about $1 million apiece, none is currently for sale, though
hundreds of fuel cell cars are now undergoing tests on the world’s roads.

How Fuel Cells Work


An electrochemical reaction occurs between hydrogen and oxygen that
converts chemical energy into electrical energy.
Think of them as big batteries, but ones that only operate when fuel—in this
case, pure hydrogen—is supplied to them. When it is, an electrochemical
reaction takes place between the hydrogen and oxygen that directly converts
chemical energy into electrical energy. Various types of fuel cells exist, but
the one automakers are primarily focusing on for fuel cell cars is one that
relies on a proton-exchange membrane, or PEM. In the generic PEM fuel cell
pictured here, the membrane lies sandwiched between a positively charged
electrode (the cathode) and a negatively charged electrode (the anode). In the
simple reaction that occurs here rests the hope of engineers, policymakers,
and ordinary citizens that someday we’ll drive entirely pollution-free cars.
Here’s what happens in the fuel cell: When hydrogen gas pumped from the
fuel tanks arrives at the anode, which is made of platinum, the platinum
catalyzes a reaction that ionizes the gas. Ionization breaks each hydrogen
molecule down into positively charged hydrogen ions (protons) and negatively
charged electrons. Both the protons and the electrons are naturally drawn to
the cathode situated on the other side of the membrane, but only the protons
can pass through the membrane (hence the name "proton-exchange"). The
electrons are forced to go around
the PEM, and along the way they are shunted through a circuit, generating the
electricity that runs the car’s systems.
Using the two different routes, the hydrogen protons and the electrons
quickly reach the cathode. While hydrogen is fed to the anode, oxygen is fed
to the cathode, where a catalyst creates oxygen ions. The arriving hydrogen
protons and electrons bond with these oxygen ions, creating the two "waste
products" of the reaction—water vapor and heat. Some of the water vapor
gets recycled for use in humidification, and the rest drips out of the tailpipe as
"exhaust." This cycle proceeds continuously as long as the car is powered up
and in motion; when it’s idling, output from the fuel cell is shut off to
conserve fuel, and the ultra capacitor takes over to power air conditioning and
other components.
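Written as chemical equations, the electrode reactions described above are the standard PEM fuel cell chemistry:

Anode (platinum catalyst):   2 H2 -> 4 H+ + 4 e-
Cathode:                     O2 + 4 H+ + 4 e- -> 2 H2O
Overall:                     2 H2 + O2 -> 2 H2O + electrical energy + heat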
A single hydrogen fuel cell delivers a low voltage, so manufacturers "stack"
fuel cells together in a series, as in a dry-cell battery. The more layers, the
higher the voltage. Electrical current, meanwhile, has to do with surface area.
The greater the surface area of the electrodes, the greater the current. One of
the great challenges automakers face is how to increase electrical output
(voltage times current) to the point where consumers get the power and
distance they’re accustomed to while also economizing space in the tight
confines of an automobile.
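To make the voltage-current relationship concrete, here is a small Python sizing sketch. The cell voltage, current density, electrode area and cell count are assumed, typical-order values chosen so the result lands near the 86 kW stack output mentioned earlier; they are not the specifications of any production stack.

CELL_VOLTAGE = 0.65        # volts per cell under load (assumed)
CURRENT_DENSITY = 1.0      # amps per square centimeter (assumed)
CELL_AREA_CM2 = 300.0      # active electrode area per cell (assumed)
N_CELLS = 440              # cells connected in series (assumed)

stack_voltage = N_CELLS * CELL_VOLTAGE           # series cells add voltage
stack_current = CURRENT_DENSITY * CELL_AREA_CM2  # bigger electrodes give more current
stack_power_kw = stack_voltage * stack_current / 1000.0

print(round(stack_voltage), "V")    # about 286 V
print(round(stack_current), "A")    # about 300 A
print(round(stack_power_kw), "kW")  # about 86 kW with these assumptions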
Kinetic Energy Recovery System | Kers | Formula One
(F1)

The introduction of Kinetic Energy Recovery Systems (KERS) is one of the most
significant technical introductions in Formula One racing. Formula One has long
lived with an environmentally unfriendly image and had been losing its
relevance to road vehicle technology. This eventually led to the introduction
of KERS.
KERS is an energy saving device fitted to the engine to convert some of the
waste energy produced during braking into a more useful form of energy. The
system stores the energy produced under braking in a reservoir and then
releases the stored energy under acceleration. The key purpose of the
introduction was to significantly improve lap time and help overtaking.
KERS is not introduced to improve fuel efficiency or reduce weight of the
engine. It is mainly introduced to improve racing performance.
KERS is the brainchild of FIA president Max Mosley. It is a concrete
initiative taken by F1 to display eco-friendliness and road relevance of the
modern F1 cars. It is a hybrid device that is set to revolutionize Formula
One with environmentally friendly, road-relevant, cutting edge technology.
Components of KERS
The three main components of the KERS are as follows:
● An electric motor positioned between the fuel tank and the engine is connected directly to the engine crankshaft to produce additional power.
● High voltage lithium-ion batteries used to store and deliver quick energy.
● A KERS control box monitors the working of the electric motor when charging and releasing energy.
A – Electric motor
B – Electronic Control Unit
C – Battery Pack
Working Principle of KERS
Kinetic Energy Recovery Systems or KERS works on the basic principle of
physics that states, “Energy cannot be created or destroyed, but it can be
endlessly converted.”
When a car is being driven it has kinetic energy, and on braking that energy is
normally converted into heat and wasted as the rotating wheels are brought to a
stop. With a KERS system fitted, part of this otherwise wasted energy is stored
in the car, and when the driver presses the accelerator the stored energy is
converted back into kinetic energy. According to the F1 regulations, the KERS
system gives the car an extra boost of about 85 bhp for a little less than
seven seconds per lap.
This system takes waste energy from the car’s braking process, stores it and
then reuses it to temporarily boost engine power. This and the following
diagram show the typical placement of the main components at the base of
the fuel tank, and illustrate the system’s basic functionality – a charging
phase and a boost phase. In the charging phase,
kinetic energy from the rear brakes (1)
is captured by an electric alternator/motor (2),
controlled by a central processing unit (CPU) (3),
which then charges the batteries (4).
In the boost phase, the electric alternator/motor gives the stored energy back
to the engine in a continuous stream when the driver presses a boost button
on the steering wheel. This energy equates to around 80 horsepower and may
be used for up to 6.6 seconds per lap. The location of the main KERS
components at the base of the fuel tank reduces fuel capacity (typically
90-100 kg in 2008) by around 15 kg, enough to influence race strategy,
particularly at circuits where it was previously possible to run just one stop.
The system also requires additional radiators to cool the batteries. Mechanical
KERS, as opposed to the electrical KERS illustrated here, work on the same
principle, but use a flywheel to store and re-use the waste energy.
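The regulation figures quoted above are easy to cross-check. The short Python calculation below is simple arithmetic on the published limits (60 kW maximum power, roughly 6.6 seconds of boost per lap), not team data:

MAX_POWER_KW = 60.0     # maximum KERS power permitted by the rules
BOOST_TIME_S = 6.67     # usable boost time per lap

energy_per_lap_kj = MAX_POWER_KW * BOOST_TIME_S   # energy = power x time
boost_hp = MAX_POWER_KW * 1000.0 / 745.7          # convert kW to mechanical hp

print(round(energy_per_lap_kj), "kJ per lap")   # about 400 kJ, the regulation limit
print(round(boost_hp), "hp of boost")           # about 80 hp, as quoted above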
Types of KERS
There are basically two types of KERS system:
Electronic KERS
Electronic KERS supplied by Italian firm Magneti Marelli is a common
system used in F1 by Red Bull, Toro Rosso, Ferrari, Renault, and Toyota.
The key challenge faced by this type of KERS system is that the lithium-ion
battery gets hot and therefore additional ducting is required in the car.
BMW has used super-capacitors instead of batteries to keep the system cool.
With this system when brake is applied to the car a small portion of the
rotational force or the kinetic energy is captured by the electric motor
mounted at one end of the engine crankshaft. The key function of the electric
motor is to charge the batteries under braking and release the same energy
on acceleration. This electric motor then converts the kinetic energy into
electrical energy that is further stored in the high voltage batteries. When the
driver presses the accelerator electric energy stored in the batteries is used to
drive the car.
Electro-Mechanical KERS
The Electro-Mechanical KERS was developed by Ian Foley. The system is
completely based on a carbon flywheel in a vacuum that is linked through a
CVT transmission to the differential. With this a huge storage reservoir is
able to store the mechanical energy and the system holds the advantage of
being independent of the gearbox. The braking energy is used to turn the
flywheel and when more energy is required the wheels of the car are coupled
up to the spinning flywheel. This gives a boost in power and improves racing
performance.
Limitations of KERS
Though KERS is one of the most significant introductions for Formula One it
has some limitations when it comes to performance and efficiency. Following
are some of the primary limitations of the KERS:
● Only one KERS can be fitted to the existing engine of a car.
● 60 kW is the maximum input and output power of the KERS system.
● The maximum energy released from the KERS in one lap should not exceed 400 kJ.
● The energy recovery system is functional only when the car is moving.
● Energy released from the KERS must remain under the complete control of the driver.
● The recovery system must be controlled by the same electronic control unit that is used for controlling the engine, transmission, clutch, and differential.
● Continuously variable transmission systems are not permitted for use with the KERS.
● The energy recovery system must connect at one point in the rear wheel drive train.
● If the KERS is connected between the differential and the wheel, the torque applied to each wheel must be the same.
● KERS can only work in cars that are equipped with only one braking system.

New Battery Technology | Fast Recharge 3D Film Technology
Of all the criticisms of electric vehicles, probably the most commonly-heard
is that their batteries take too long to recharge – after all, limited range
wouldn’t be such a big deal if the cars could be juiced up while out and
about, in just a few minutes. Well, while no one is promising anything, new
batteries developed at the University of Illinois, Urbana-Champaign do
indeed look like they might be a step very much in the right direction. They
are said to offer all the advantages of capacitors and batteries, in one unit.

"This system that we have gives you capacitor-like power with battery-like
energy," said U Illinois’ Paul Braun, a professor of materials science
and engineering. "Most capacitors store very little energy. They can release it
very fast, but they can’t hold much. Most batteries store a reasonably large
amount of energy, but they can’t provide or receive energy rapidly. This does
both."
The speed at which conventional batteries are able to charge or discharge can
be dramatically increased by changing the form of their active material into a
thin film, but such films have typically lacked the volume to be able to store a
significant amount of energy. In the case of Braun’s batteries, however, that
thin film has been formed into a three-dimensional structure, thus increasing
its storage capacity.

Batteries equipped with the 3D film have been demonstrated to work normally in
electrical devices, while being able to charge and discharge 10 to 100 times
faster than their conventional counterparts.
To make the three-dimensional thin film, the researchers coated a surface
with nanoscale spheres, which self-assembled into a lattice-like arrangement.
The spaces between and around the spheres were then coated with metal,
after which the spheres were melted or dissolved away, leaving the metal as a
framework of empty pores. Electropolishing was then used to enlarge the
pores and open up the framework, after which it was coated with a layer of
the active material – both lithium-ion and nickel metal hydride batteries were
created.
The system utilizes processes already used on a large scale, so it would
reportedly be easy to scale up. It could also be used with any type of battery,
not just Li-ion and NiMH.
The implications for electric vehicles are particularly exciting. "If you had the
ability to charge rapidly, instead of taking hours to charge the vehicle you
could potentially have vehicles that would charge in similar times as needed
to refuel a car with gasoline," Braun said. "If you had five-minute charge
capability, you would think of this the same way you do an internal
combustion engine. You would just pull up to a charging station and fill up."
Braun and his team believe that the technology could be used not only for
making electric cars more viable, but also for allowing phones or laptops to
be able to recharge in seconds or minutes. It could also result in high-power
lasers or defibrillators that don’t need to warm up before or between pulses.
Advanced Battery Storage Technology

For decades, battery storage technology has been a heavy weight on the back
of scientific innovation. From cell phones to electric vehicles, our
technological capabilities always seem to be several steps ahead of our ability
to power them. Several promising new technologies are currently under
development to help power the 21st century, but one small start-up looks
especially well positioned to transform the way we think about energy
storage.

Texas-based EEStor, Inc. is not exactly proposing a new battery, since no
chemicals are used in its design. The technology is based on the idea of a
solid state ultra capacitor, but cannot be accurately described in these terms
either. Ultra capacitors have an advantage over electrochemical batteries (i.e.
lithium-ion technology) in that they can absorb and release a charge virtually
instantaneously while undergoing virtually no deterioration. Batteries trump
ultra capacitors in their ability to store much larger amounts of energy at a
given time.
EEStor’s take on the ultra capacitor — called the Electrical Energy Storage
Unit, or EESU — combines the best of both worlds. The advance is based on
a barium-titanate insulator claimed to increase the specific energy of the unit
far beyond that achievable with today’s ultra capacitor technology. It is
claimed that this new advance allows for a specific energy of about 280
watt-hours per kilogram — more than double that of the most advanced
lithium-ion technology and a whopping ten times that of lead-acid batteries.
This could
translate into an electric vehicle capable of traveling up to 500 miles on a five
minute charge, compared with current battery technology which offers an
average 50-100 mile range on an overnight charge. As if that weren’t enough,
the company claims they will be able to mass-produce the units at a fraction
the cost of traditional batteries.
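To see how a claimed 280 Wh/kg translates into the ranges quoted above, here is a rough Python estimate. The pack mass and the vehicle's energy consumption are my own assumptions, not EEStor or ZENN figures:

SPECIFIC_ENERGY_WH_PER_KG = 280.0   # claimed specific energy of the EESU
PACK_MASS_KG = 300.0                # assumed mass of the storage unit
CONSUMPTION_WH_PER_MILE = 170.0     # assumed consumption of an efficient electric car

pack_energy_kwh = SPECIFIC_ENERGY_WH_PER_KG * PACK_MASS_KG / 1000.0
range_miles = pack_energy_kwh * 1000.0 / CONSUMPTION_WH_PER_MILE

print(round(pack_energy_kwh), "kWh pack")     # 84 kWh
print(round(range_miles), "miles of range")   # roughly 490 miles with these assumptions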
"It’s a paradigm shift," said Ian Clifford of ZENN Motor Co., an early
investor and exclusive rights-holder for use of the technology in electric cars.
"The Achilles’ heel to the electric car industry has been energy storage. By
all rights, this would make internal combustion engines unnecessary."
But this small electric car company isn’t the only organization banking on the
new technology. Lockheed-Martin, the world’s largest defense contractor,
has also signed on with EEStor for use of the technology in military
applications. Kleiner Perkins Caufield & Byers, a venture capital investment
firm who counts Google and Amazon among their early-stage successes, has
also invested heavily in the company.
New Powerful Capacitors Nano Composite Processing
Technique

A new technique for creating films of barium titanate (BaTiO3) nanoparticles
in a polymer matrix could allow fabrication of improved capacitors
able to store twice as much energy as conventional devices. The improved
capacitors could be used in consumer devices such as cellular telephones –
and in defense applications requiring both high energy storage and rapid
current discharge.
A capacitor array device has been made with the barium titanate nanocomposite.
Because of its high dielectric properties, barium titanate has long been of
interest for use in capacitors, but until recently materials scientists had been
unable to produce good dispersion of the material within a polymer matrix.
By using tailored organic phosphonic acids to encapsulate and modify the
surface of the nano particles, researchers at the Georgia Institute of
Technology’s Center for Organic Photonics and Electronics were able to
overcome the particle dispersion problem to create uniform nano composites.
For capacitors and related applications, the amount of energy you can store in
a material is related to these two factors.
1. High dielectric constant
2. High dielectric breakdown strength
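For a linear dielectric the maximum stored energy density is u = 1/2 * eps0 * eps_r * E^2, evaluated at the breakdown field E, which is why both factors above matter. The Python comparison below uses assumed, typical-order permittivities and breakdown fields purely for illustration; they are not measured values from this work:

EPS0 = 8.854e-12   # vacuum permittivity, farads per meter

def energy_density_j_per_cm3(eps_r, breakdown_v_per_m):
    """Maximum energy density of a linear dielectric, in joules per cubic centimeter."""
    u = 0.5 * EPS0 * eps_r * breakdown_v_per_m ** 2   # joules per cubic meter
    return u / 1e6

# A plain polymer film versus a higher-permittivity nanocomposite film:
print(round(energy_density_j_per_cm3(3, 500e6), 1))    # about 3.3 J/cm^3
print(round(energy_density_j_per_cm3(35, 250e6), 1))   # about 9.7 J/cm^3

Raising the permittivity helps, but because the field enters as a square, keeping a high breakdown strength (good particle dispersion, no aggregates) matters at least as much.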
The new nanocomposite materials have been tested at frequencies of up to
one megahertz, and the researchers say operation at even higher frequencies
may be possible. Though the new materials could have commercial
application without further improvement, their most important contribution
may be in demonstrating the new encapsulation technique – which could
have broad applications in other nanocomposite materials.

This work opens the door to effectively exploiting this type of particle in
nanocomposites using the coating technology.
Because of their ability to store and rapidly discharge electrical energy,
capacitors are used in a variety of consumer products such as computers and
cellular telephones. And because of the increasing demands for electrical
energy to power vehicles and new equipment, they also have important
military applications.

Key to developing thin-film capacitor materials with higher energy storage
capacity is the ability to uniformly disperse nanoparticles in as high a density
as possible throughout the polymer matrix. However, nano particles such as
barium titanate tend to form aggregates that reduce the ability of the
nanocomposite to resist electrical breakdown. Other research groups have
tried to address the dispersal issue with a variety of surface coatings, but
those coatings tended to come off during processing – or to create materials
compatibility issues.
The researchers designed a robust coating for the particles, which range in
size from 30 to 120 nanometers in diameter.
“Phosphonic acids bind very well to barium titanate and to other related metal
oxides”. “The choice of that material and ligands were very effective in
allowing us to take the tailored phosphonic acids, put them onto the barium
titanate, and then with the correct solution processing, to incorporate them
into polymer systems. This allowed us to provide good compatibility with the
polymer hosts – and thus very good dispersion as evidenced by a three- to
four-fold decrease in the average aggregate size.”
Though large crystals of barium titanate could also provide a high dielectric
constant, they generally do not provide adequate resistance to breakdown –
and their formation and growth can be complex and require high
temperatures. Composites provide the necessary electrical properties, along
with the advantages of solution-based processing techniques.
“One of the big benefits of using a polymer nanocomposite approach is that
you combine particles of a material that provide desired properties in a matrix
that has the benefits of easy processing.”
Scanning electron micrographs of barium titanate (BaTiO3) nanocomposites with
polycarbonate and Viton polymer matrices show a dramatic improvement in film
uniformity when phosphonic acid coated BaTiO3 nanoparticles are used in place
of uncoated nanoparticles. The higher uniformity results in greatly improved
dielectric properties.
The new materials may already offer enough of an advantage to justify
commercialization. The research team also wants to scale up production
to make larger samples – now produced in two-inch by three-inch films –
available to other researchers who may wish to develop additional
applications.
“Beyond capacitors, there are many areas where high dielectric materials are
important, such as field-effect transistors, displays and other electronic
devices,” Perry added. “With our material, we can provide a high dielectric
layer that can be incorporated into those types of applications.”
Rtv Molding | Urethane Casting | Room Temperature
Vulcanized
What Is a Composite Part?

A part that combines a resin and reinforcing strands can properly be referred
to as a composite. This includes what is commonly referred to as fiberglass
(technically, it should be called fiberglass reinforced plastic), but there are
many other materials that are used in composite parts. To achieve desired
properties in composites, the chemistry of the resin used, the type of
reinforcing strands, and the ratio of resin to reinforcing strands can be varied.
In general, the more strands in the mix, the stronger the final part becomes.
Why Mold?
There are many desirable features of molded composite parts. Molded parts
are almost always more durable, repairable, heat-resistant, and lighter than
comparable strength carved wood parts, and they don’t soak up oil or
moisture. In the case of a molded composite cowl, its thin wall property
(about 0.05"…and you can’t carve a wood cowl to match!) gives much better
air flow around the motor as a big side benefit. Molded composite wheel
pants can handle bigger wheels and mount more conveniently. Other
possibilities are wing tips, gear legs, dummy exhausts, spinners, etc., and all
are made in a similar way.
Once a proper mold is made, composite parts can be replicated quickly and
easily, each one almost identical to its predecessors. So, you can make as
many parts as you and your friends need, you can sell them, or, in the event
of a crash, make a new one exactly like the original in just a fraction of the
time it would take to carve and hollow another.
Molds and Plugs
Actually, anything you can lay resin in or on and pull away a part with the
opposite shape can be considered a mold. Male molds can be used, but a part
made from a male mold has a smooth interior and a rough exterior, which
requires quite a bit of work to make ready for a finish. A female mold, on the
other hand, produces parts with smooth exteriors, so every part molded in one
will come out with a smooth exterior that is exactly the shape desired. To
create a female mold with a smooth, perfectly-shaped interior cavity, a male
plug is carved and finished, and the female mold is cast around the male
plug. This requires just a bit of extra work, but keep in mind that there is no
need to hollow the plug, so making one goes quite quickly.
Plaster and even composite materials can be used to make a female mold, but
in my experience, the absolute best overall material to use is RTV (room
temperature vulcanized)
molding rubber. It’s easy to handle, and a female mold made from it is
flexible enough to allow a flex and peel technique to free the molded part
without doing any damage to the mold, so it can be used over and over again.

Polyester or Epoxy?
Polyester is cheaper and weaker than epoxy, but it’s certainly adequate for
first attempts. Polyester is cured with a few drops of hardener; adding more
accelerates the cure. A downside is that finished parts often have pin holes.
Polyester has a strong odor, and you need to wear eye protection because the
catalyst can be highly dangerous if it gets in your eyes.
Reinforcing Fiber
The material used for reinforcing fiber has great influence on the overall
strength and ultimate weight of the finished part. Molding is best done with
woven cloth, typically two to four ounces per square yard, in as many layers
as needed to get the desired strength. Materials most used in composites for
model applications are E-glass (fiberglass, white color), Kevlar (yellow), or
carbon fiber (black). E-glass is the least expensive and is fully adequate for
most molding. Kevlar is much more expensive, difficult to bend and cut, and
would be best to avoid if you’re just starting out. Carbon fiber is the
strongest, but by far the most costly, hard to buy economically in small
quantities, and is best left for after you’re a seasoned molder. Carbon veil is
easy to handle, but it’s not well-suited to molding composite parts typically
found in models.
In full-scale aircraft composite parts, the weight ratio of the fibers and the
resin is carefully engineered and calculated for each part.
Fillers

All epoxy suppliers sell fillers, such as Cab-O-Sil, milled fibers, and
Q-Cells. These are useful on some shapes to make a paste to get good edges
and corners. Painted in corners, they help define small details, too. A paste
made with filler is useful to fix flaws that may appear in the final part, also.
Tdi Blue Motion Technologies
Blue Motion is a technology which helps you save fuel and money without taking
the fun out of driving.

Blue Motion Technologies represent the cleanest, most energy-efficient cars in
our range. So when you see this logo you’ll know about ways to cut your
emissions and driving costs.
Efficiency starts at the very core of our cars with their engines and gear
boxes. Innovative technologies such as TSI (Turbocharged Stratified
Injection), TDI (Turbocharged Direct Injection) and DSG (Direct Shift
Gearbox) are major contributors to driving efficiency and set a solid
foundation to develop from.

Advanced engines such as the powerful TSI and the high-torque diesel TDI,
ensure outstanding fuel consumption and fewer pollutants together with pure
driving pleasure. The impressive DSG Dual-Clutch gearbox offers even
greater fuel economy. It has the convenience of an automatic gearbox and the
agility, dynamic performance and superior consumption figures of a manual
gearbox. So being efficient doesn’t have to take the fun out of driving.
Modified engine management software makes the engines run even more
efficiently. It has also been possible to reduce the engine idle speed, so you
emit less CO2 and use less fuel. Blue Motion gearboxes have been reconfigured
with longer ratios in higher gears for the most economical fuel consumption.
Intelligent technology tells you when to change gear for optimum driving
efficiency.

The innovative start/stop system turns the engine off when you’re idling in
traffic and restarts it when you’re ready to move while the recuperation
mechanism uses some energy usually lost during braking and deceleration to
pass additional charge to the battery. The outside of this range is as
carefully crafted as the inside: from the low rolling-resistance tires which
reduce road friction to the specifically designed aerodynamic styling, Blue
Motion means less fuel consumption, lower emissions and a better driving
experience.

Turbocharged Stratified Injection (Tsi) Engines | Tsi Technology
TSI technology offers you great performance with fuel economy and low
emissions.

TSI engines offer an enjoyable and involving drive, while cutting fuel
consumption and CO2 emissions. Because TSI engines are cleaner, you’ll
also save on car tax. So they have a smaller impact on the environment, are
kinder to your pocket – and, best of all, they’re great fun to drive.
What is TSI?

TSI is our pioneering technology for petrol engines. TSI engines are compact,
high-powered and use less fuel. TSI technology blends the best of our TDI
diesel and FSI (Fuel Stratified Injection) engines.
The successful TSI formula combines a number of different elements:
Smaller engines
At the heart of TSI is a smaller engine. It’s more efficient, as there is less
power loss resulting from friction. It’s also lighter, so the engine has less
weight to shift in the car.
Direct petrol injection with charging
Direct petrol injection is combined with a turbocharger or charge
compression with a turbo and a supercharger working in tandem. This
enhances the engine’s combustion efficiency so the TSI engine power output
is much higher than that of conventional, naturally aspirated engines.
Torque when you want it
On the TSI 1.4 160PS the engine-driven supercharger operates at lower
revs, with the turbocharger – powered by the exhaust gases – joining in as
engine speed rises. The supercharger is powered via a belt drive directly from
the crankshaft. This provides maximum pulling power on demand, even at
very low engine speeds. TSI engines are designed to deliver maximum torque
from engine speeds as low as 1500 or 1750 rpm. And that has the twin
benefit of not only increasing your driving pleasure but also cutting fuel
consumption.
Turbocharged Direct Injection (Tdi) Diesel Engines
Turbocharged Direct Injection (TDI) diesel engines are responsive and fun to
drive, as well as being very efficient. They offer more power, with great fuel
economy, which all helps to lower emissions.

Why drive a TDI?


● You’ll enjoy the savings. Economical fuel consumption, long service and maintenance intervals, plus low emissions, all combine to keep costs low.
● You’ll love the drive. Our turbo diesel engines offer exceptional torque even at low revs. This results in tremendous fun at the wheel, thanks to their effortless acceleration and sparkling performance.
● You’ll feel the power. High levels of pulling power over a wide rev range offer real driving pleasure.
TDI identifies all our advanced diesel engines using direct injection and a
turbocharger. TDI engines are economical and smooth with high levels of
torque (pulling power) and good energy efficiency.
How does it work?
Fuel needs oxygen to burn and the engine has to be supplied with huge
quantities of air to get enough. You can solve this problem with a bigger
engine – or you can solve it with a turbocharger – as in the TDI. Driven by
the exhaust gases, it squeezes air more tightly into the cylinders.
After being drawn through the turbocharger the air is then cooled by passing
it through an air to air intercooler (cool air takes up less space than hot air),
before entering the combustion chamber where diesel is injected directly into
the cylinders at very high pressure through a nozzle. It’s this intensive mixing
of highly atomized fuel with the cooled compressed air that leads to better,
more efficient combustion.
Your driving experience is quiet and refined because effective sound
insulation keeps noise to a minimum, while hydraulic engine mounts ensure
smooth, low-vibration running.
Fuel Cell-Powered Electric Vehicles | Mercedes Benz
F-Cell World Drive

Mercedes-Benz F-CELL World Drive – 125 days, four continents and 14
countries: Starting on January 30, 2011 in Stuttgart (Germany), three
Mercedes-Benz B-Class F-CELL with fuel cell drive will be driving around
the globe with zero emissions. The objective is to demonstrate that electric
vehicles equipped with fuel cell are technically mature and suitable for
everyday use.
Driver Assistance Technologies
Due to the cramped space in cities resulting from increasing population and
the rising number of vehicles, driving is becoming tougher with each passing
day. Hence the need for driver assist technology. Driver assist technologies
are meant not only for safer driving but also to help you reach your
destination on time. Here are five driver assist technologies that will make
driving not only fun but safe at the same time.
1. GO 950 LIVE Satellite Navigation System

Description:
Finding the shortest route while traveling is going to become easy with the
TomTom GO 950 LIVE. This device will offer you precise and up-to-date travel
information through a new user interface.
2. Wake-Up Angle Vibrating Anti-Sleep Device
Description:
Feeling drained after a long day at work and now having to drive back home? A
sleepy state can prove to be dangerous, especially while driving. To make sure
that you don’t catch up on your forty winks while driving, a device named
Wake-Up Angle was designed by The Design Town in 2007. Resembling a hearing
aid, the device is positioned behind the ear. Whenever the driver’s head bends
forward, below the safe angle, the Wake-Up Angle starts vibrating, which in
turn stirs up the person behind the wheel.
3. BMW develops automatic parking system

Description:
So, you love your Beemer but every time you have to park it in your tight
garage your heart stops beating. How can you see your beauty on wheels get
wounded, that too while parking? Well, to kill all such woes BMW has
developed a robotic parking system. All you have to do is press a remote-
control button and see the parking-assist technology place your car wherever
you want. Just relax and see your car maneuver through the parking lot or
your garage. A demo of this technology was shown at Munich headquarters
recently.
4. Satellite Navigation System
Description:
Garmin Nuvi 265W Satellite Navigation System is here to become your best
buddy while driving. It will not only guide you which route to take but will
also guide you at every step and turn. The Nuvi 265W will make sure you stay
safe on the road as it comes equipped with an advanced safety camera warning
system. This simple, unobtrusive device will guide you through heavy traffic
as it comes loaded with maps for Europe and millions of points of
interest (POIs). So grab this handy and reasonably priced piece of equipment
and always take the right route.
5. Volvo’s Pedestrian Detection with automatic brakes

Description:
The reaction time goes for a toss while driving when someone suddenly
emerges in front of the car. This can prove to be quite a disaster, throwing
you behind bars and the pedestrian in a hospital. Just to avoid any such
situation a “Pedestrian Detection” technology along with automatic brakes
has been embedded in Volvo’s S60 sedan. It will intelligently sense anyone who
gets in the way and, if the driver fails to respond, will apply the brakes and
bring the vehicle to a halt. If you are driving above 22 mph, the speed of the
vehicle will instead be reduced to a level that avoids a serious crash.
Compressed Air Cars | Air Motion Racing Car
“Compressed air cars” are cars with engines that use compressed air, instead
of regular gas used in conventional fuel cars. The idea of such cars is greatly
welcomed by people of the 21st century, when pollution caused by petrol and
diesel is an extremely worrying factor.

Engine and Technology


The engine that is installed in a “compressed air car” uses compressed air
which is stored in the car’s tank at a pressure as high as 4500 psi. The
technology used by air car engines is totally different from the technology
that is used in conventional fuel cars. They use the pressure generated by the
expansion of compressed air to run their pistons. This results in ‘no
pollution’, as air is the only product that is used by the engine to produce
power, and the waste material is the air itself.
Air Storage Tank/Fueling
As envisioned by engineers and designers, the storage tank would be made of
carbon fiber to reduce the car’s weight and prevent an explosion in case of a
direct collision. Carbon-fiber tanks are capable of containing air at pressures
up to 4500 psi, something steel tanks are not capable of. For fueling the car
tank with air, a compressor needs to be plugged into the car, which would use
the surrounding air to fill the compressed air tank.
This could be a slow process of fueling; at least until air cars are commonly
used by people, after which high-end compressors would be available at gas
stations that would fuel the car in no time at all.
Emission
The air-powered car would normally emit only air, as that is all it would use.
But this would depend entirely on the purity of the air that is put into the
air tank. If impure air is filled into the tank, the emission would be equally
impure. The emission level would therefore depend largely on the location and
time of filling air into the tank.
Idling Devices Of Automobile | Anti-Dieseling Device
Idling devices in carburetor
During idling, the throttle plate is almost in the closed position. Then the
mass rate of air flow through the venturi is small. Hence, the vacuum or
depression produced at the venturi is also small. With this small depression,
no fuel can issue from the fuel orifice. Hence, an idling device is
incorporated in the carburetor unit.

The engine idling device can be seen in the picture. This device utilizes the
large vacuum that prevails at the edge of the throttle plate for effecting fuel
supply, when the throttle plate is almost in the closed position.
The required idling mixture strength can be obtained by adjusting the idling
air screw. The quantity of the mixture supplied to the engine is controlled by
the throttle plate setting. This setting decides the extent of closure of the inlet
passage by the throttle plate.
The idling device operates at maximum capacity when the throttle plate is
almost in the closed position. The effectiveness of the idling device gradually
diminishes, as the throttle plate is being opened.
When the throttle plate is wide open, the depression felt at the idling jet is
extremely small. This small depression is not capable of raising the fuel
through the large height in the idling jet up to the discharge point. Now the
maximum depression shifts to the venturi throat. As such, the main orifice
starts supplying the fuel.
Anti-dieseling device
A spark ignition engine sometimes continues to run for a very short period,
even after the ignition is switched off. This phenomenon is called dieseling or
after-running. It causes wastage of fuel and pollution.

Some modern cars have the anti-dieseling system as shown in the picture.
This system has a solenoid valve operated idling circuit. When the ignition
switch is turned on, current flows in the coil of the solenoid valve and thereby
generates a force. This force pulls a needle valve and opens the passage for
slow speed mixture. When the ignition switch is turned off, the magnetic
force disappears. Then the needle valve goes to the original position
immediately by the action of the spring in the solenoid valve. In this way,
the slow speed mixture passage is cut off. Hence, the engine stops and fuel
wastage is also eliminated.
Hot idling compensator: Some modern cars have this system in the carburetor
unit. Under certain extremely hot operating conditions there is a tendency for
the idling mixture to become too rich. This causes idling instability. The hot
idling compensator system incorporates a bimetallic valve which admits air
directly into the manifold in the correct quantity when needed. Thus the
mixture richness is adjusted and stable idling is ensured.
Choking Device | Functions Of Choking Device

Choking device
Starting an engine in cold weather is somewhat difficult. A choking device
makes engine starting easier. The choke valve is a butterfly valve, similar to
the throttle valve. This valve is situated between the air intake and the
venturi. At the time of starting, the choke valve is turned to almost close the
inlet passage. This is called choking. Then, during the suction stroke, a
greater depression is created in the inlet passage and is felt at the fuel jet
which is situated at the throat of the venturi. This causes more fuel to be
ejected by the fuel jet. Choking restricts airflow and provides an oversupply
of fuel. It
should be remembered that only the lighter fractions of the supplied fuel
evaporate at lower temperatures and form a combustible mixture in the
cylinder.
During initial cranking of the engine, the choke valve is almost completely
closed. Once the engine fires consistently, the choke valve is opened slightly
to keep the engine running. As the engine warms up to its normal
temperature, the choke valve is opened gradually to its full extent. At all
other times of engine operation, the choke valve is kept wide open. Thus the
inlet passage is unrestricted.

The choke valve may be operated either manually or automatically. Manual
choke operation is usually effected by a flexible cable connected to a knob in
the dash board/instrument panel. Automatic choking is accomplished by a
thermostatic element. The tension of the thermostat spring keeps the choke
valve in a nearly closed position. As soon as the engine begins to operate, the
exhaust gases heat the thermostat casing. This decreases the tension of the
spring and causes the choke valve to move to the wide open position.
During choking, only the lighter fractions of the supplied fuel evaporate and
form the combustible mixture in the cylinder. The unvapourized heavier
fractions of the fuel mix with the lubricating oil film on the cylinder wall. The
contaminated lubricating oil may run down into the crankcase as the piston
rings scrape the oil. Thus, the crankcase oil gets diluted. This lowers the
lubricating characteristics of the oil and may cause greater wear of the engine
parts. It is evident, that during choking, fuel is wasted. Hence, choking should
be done only when the engine fails to start and should be limited to the
minimum period necessary to realize starting.
Unloader or dechoker: If, for any reason during the starting period, the engine
is flooded, it becomes necessary to clear the excess gasoline out of the intake
manifold. This is accomplished by an arrangement of the throttle lever and
choke linkage. In one arrangement, depressing the accelerator pedal to the
floorboard forces the choke to open sufficiently to allow the engine to clean
out the intake manifold. This device is called an unloader or dechoker.
Antilock or Antiskid Device | Anti-Lock Braking
System In Detail

Antilock or antiskid device


The vehicle will stop more quickly if the brakes are applied just hard enough
to get maximum static friction between the tyres and road. If the brakes are
applied harder than this then the wheels will lock, the tyres will skid or slide
on the road and a lesser kinetic friction will result. Then braking the vehicle
is much less effective.
To prevent skidding and thus provide maximum effective braking, several devices
have been proposed. Mostly, skid control of the rear wheels only is provided;
some others provide control at all four wheels. What is meant by “control” is
this: as long as the wheels are rotating, the antiskid device permits normal
application of the brakes. But if the brakes are applied so hard that the
wheels tend to stop turning and a skid starts to develop, the device comes into
operation and partly releases the brakes so that the wheels continue to rotate.
Intermittent braking still continues, but it is held to just below the point
where a skid would start. The result is maximum braking effect.
Antilock brake system: The hydraulic unit is the central component of an ABS
system. Each of the four wheels has a speed sensor, which measures the
rotational speed of the wheel. This information is monitored by an Electronic
Control Unit, which opens and closes the solenoid (magnetic) valves at the
right time. If a wheel is about to lock under heavy braking, the system reduces
the hydraulic pressure on that wheel alone, till the threat of locking is past.
Once the wheel is turning freely again, the hydraulic pressure is increased.
This release and increase of pressure continues until the driver reduces the
force on the brake pedal or until the tendency to lock is overcome. ABS is
incorporated in some cars to prevent skidding and to improve braking.
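The control cycle just described can be sketched as a simple loop. The Python fragment below uses assumed slip thresholds and pressure steps purely to illustrate the idea; a production ABS controller is far more sophisticated:

LOCK_SLIP = 0.25      # slip ratio above which a wheel is treated as locking (assumed)
RELEASE_STEP = 0.15   # fraction of brake pressure released per cycle (assumed)
APPLY_STEP = 0.05     # fraction of brake pressure restored per cycle (assumed)

def abs_cycle(vehicle_speed, wheel_speeds, pressures):
    """One control cycle: return updated brake pressure (0 to 1) for each wheel."""
    updated = []
    for wheel_speed, pressure in zip(wheel_speeds, pressures):
        slip = (vehicle_speed - wheel_speed) / max(vehicle_speed, 0.1)
        if slip > LOCK_SLIP:
            # Wheel about to lock: reduce hydraulic pressure on that wheel alone.
            pressure = max(0.0, pressure - RELEASE_STEP)
        else:
            # Wheel turning freely again: build the pressure back up.
            pressure = min(1.0, pressure + APPLY_STEP)
        updated.append(pressure)
    return updated

# Example: the first wheel is locking during heavy braking from 90 km/h.
print(abs_cycle(90.0, [55.0, 88.0, 87.0, 89.0], [1.0, 1.0, 1.0, 1.0]))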
Sensotronic Brake Control (SBC): Sensotronic Brake Control is basically a
brake-by-wire system which eliminates the need for a mechanical linkage
between the brake pedal and the brake master cylinder. SBC also works in
conjunction with ABS to enhance braking.
It was developed by Mercedes in association with Robert Bosch GmbH.
Among its most important performance features are the dynamic building up
of brake pressure and the precise monitoring of driver and vehicle behavior
using sensors. In an emergency situation, SBC increases brake line pressure
and readies the brakes, so that they can grip instantly with full force when the
brakes are applied.
Additionally, variable brake proportioning offers enhanced safety when braking
on bends. SBC controls each wheel individually: when cornering, or on wet
roads, it applies varying degrees of pressure to the inside and outside wheels
of the car. SBC is found in Mercedes-Benz E-Class cars.
Power Steering | Electronic Power Steering
Power steering
In heavy duty (dump) trucks and power tractors the effort applied by the
driver is inadequate to turn the wheels. In this case a booster arrangement is
incorporated in the steering system. The booster is set into operation when
the steering wheel is turned. The booster then takes over and does most of the
work for steering. This system, called power steering, may use compressed air,
electrical mechanisms, or hydraulic pressure. Hydraulic pressure is used in the
vast majority of power steering mechanisms today.

When the steering wheel is turned, the worm turns the sector of the worm
wheel and the arm. The arm turns the road wheel by means of the drag link.
If the resistance offered to turn the wheels is too high and the effort applied
by the driver to the steering wheel is too weak, then the worm, like a screw in
a nut will be displaced axially together with the distributor slide valve. The
axial movement of the distributor slide valve in the cylinder will admit oil
into the booster cylinder through the pipe line. The piston in the booster
cylinder will turn the road wheels via the gear rack, the toothed worm sector,
arm and drag link. At the same time, the worm sector will act upon the worm
and will shift it together with the distributor slide valve to its initial
position and stop the piston travel in the booster cylinder. When the steering
wheel is
turned in the other direction, the wheels will be turned appropriately in the
same sequence.

The more the steering mechanism and wheels resist turning, the more the
control valve is displaced. Hence, power assistance is always supplied in
proportion to the effort needed to turn the wheels.
Electronic power steering
Electrically assisted power steering is used in some cars. The assistance can
be applied directly by an electric stepper motor integrated with the steering
column, or the steering mechanism, or it can be applied indirectly with
hydraulic assistance pressurized by an electric pump. EPS attached to the
rack-and-pinion steering is found in Honda City vehicles.
Electronic power steering improves steering feel and power-saving
effectiveness and increases steering performance. It does so with control
mechanisms that reduce steering effort. Nissan’s Blue Bird passenger car
series use an electronically controlled three way power steering. This power
steering is responsive to vehicle speed, providing maximum assistance as the
speed rises. The driver can also select his or her own performance from three
levels of assistance that make the steering effort heavy, normal or light.
Components of Automatic Transmission System
Automatic transmission
There are two types of automatic transmission (gear box), namely, semi-
automatic transmission and fully automatic transmission.
These are distinguished according to their effect on vehicle handling
dynamics.
Semi-automatic transmissions are the gear boxes on which all operations
normally performed by the driver when changing gear are carried out by
electronically controlled actuator systems. This implies that a gear change
always involves disengagement of the clutch and hence of the drive to the
driving wheels. Semi-automatic transmissions are found on long distance
haulage trucks, passenger coaches and more recently on small cars and sports
cars.
Fully automatic transmissions (normally called automatic transmission)
change gear under load. This means, the power continues to flow to the
driving wheels even during a gear shift operation. In this system, the drive
engagement and gear ratio selection operations are performed with no
additional driver input.
The main components of the automatic transmission are as follows
(1) Hydrodynamic torque converter.
(2) A number of planetary gear sets located downstream of the hydrodynamic
torque converter.
(3) Hydraulically actuated multiplate clutches, plate or band brakes, assigned
to the individual elements within the planetary gear sets.
(4) One way clutches with shift elements.
(5) Transmission control system.
(6) An engine driven hydraulic fluid pump.
Fully automatic transmissions are used in situations where disengagement of
power transmission may cause a significant reduction in comfort (particularly
in cars with powerful acceleration) or where power flow interruption cannot
be accepted for reasons of vehicle handling dynamics (i.e. on/off-road
vehicles).
Mercedes Benz has introduced a seven speed automatic transmission unit,
which would make future Mercedes models more economical apart from
performance and driveability. This unit employs seven gear ratios. These
allow the automatic transmission to retain the small increases in engine speed
which are important in ensuring optimum gear ratios, while at the same time
offering a larger ratio spread between the lowest and highest gear.
An outstanding feature of the new seven speed transmission is the lockup
clutch in the hydrodynamic torque converter, which largely eliminates slip
between the pump and turbine rotor. Unlike conventional automatic
transmissions where the torque converter lock-up is only possible in higher
gears, the lock-up clutch in the new seven speed automatic transmission is
active from the first gear up.
Electronic transmission control
Automatic gear boxes are controlled by electronically operated hydraulic
systems. The hydraulic system actuates the clutches. Electronic units perform
gear selection and adapt the hydraulic pressure in accordance with the torque
flow.
Sensors detect the transmission output shaft speed, engine load and speed,
gear selector lever position and positions of the program selector and kick
down switch. The control unit processes this information according to a
predefined program and uses the results to determine the control variables
which are to be transmitted to the gear box.
Electrohydraulic converter elements form the link between the electronic and
hydraulic circuits. Solenoid valves engage and disengage the clutches;
pressure regulators control the pressure levels at the friction surfaces,
which influences shift quality.
Intelligent shift programs supplement the standard transmission control data
with additional parameters such as forward and lateral acceleration, and the
speed with which the accelerator and brake pedals are pressed. This improves
driveability.
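As a rough illustration of how an electronic control unit might turn sensor data into a gear choice, the sketch below looks up a target gear from vehicle speed and a throttle band. The shift map values and the select_gear function are invented for the example; real transmissions use far richer maps and adaptive logic.

# Toy shift-point lookup for an electronically controlled automatic gearbox.
# Upshift speeds (km/h) per throttle band are invented example values.

SHIFT_MAP = {
    "light":    (15, 30, 50, 70),     # upshift speeds into gears 2, 3, 4, 5
    "normal":   (20, 40, 65, 90),
    "kickdown": (35, 65, 100, 140),
}

def select_gear(speed_kmh, throttle_band="normal"):
    """Return the target gear (1-5) for the given speed and throttle band."""
    gear = 1
    for upshift_speed in SHIFT_MAP[throttle_band]:
        if speed_kmh >= upshift_speed:
            gear += 1
    return gear

print(select_gear(55, "normal"))     # -> 3
print(select_gear(55, "kickdown"))   # -> 2 (held in a lower gear under load)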
Automatic transmission is found in the vehicles namely, Honda Accord (five
speed), Mercedes-Benz C200K (5 speed) and Mitsubishi Pajero (5 speed).
Safety Systems in Vehicles | Seat Belts | Air Bags
Seat belts
The function of the seat belts is to restrain the occupants of a vehicle in their
seats when the vehicle hits an obstacle.
Three point seat belt: In motoring history, this is the single most significant
advance. All credit goes to Volvo’s Nils Bohlin for devising it and to Volvo
for introducing it in 1959. The three pointer afforded unrivalled restraint. Its
use was a quick, easy one handed operation. Many improvements have been
made to the webbing, mountings, latches, and inertia reels and belts have got
smarter in relating to accident severity and occupant weight. In the present
day vehicles seat belts have become mandatory for the driver and front
passenger. Some vehicles have seat belts for rear passengers also.
Seat belt pretensioner: A spring-loaded or explosive device that reacts to a
severe frontal impact by automatically snugging the seat belt tight for fully
effective restraint.
Seat belt tighteners improve the restraining characteristics of a three-point
inertia reel belt and increase the protection against injury. In the event of a
frontal impact they pull the seat belts tighter against the body and thus hold
the upper body as closely as possible against the seat backrest. This avoids
excessive forward displacement of the occupants caused by mass inertia. The
maximum forward displacement with tightened seat belts is approximately
1 cm and the duration of mechanical tightening is 5 to 10 ms.
On activation, a pyrotechnical propellant charge is electrically fired. The
explosive pressure acts on a piston, which turns the belt reel via a steel
cable in such a way that the belt rests tightly against the body.
Air bags
The function of front air bags is to protect the driver and the front passenger
against head and chest injuries in a vehicle impact with a solid obstacle at
speeds of up to 60 km/h. In a frontal impact between two vehicles, the front
air bags afford protection at relative speeds of up to 100 km/h.
To protect the driver and front passenger, pyrotechnical gas inflators expand
the front air bags in a highly dynamic way after a vehicle impact is detected
by sensors. For the affected occupant to enjoy maximum protection, the air bag
must be fully inflated before the occupant comes into contact with it. The air
bag needs approximately 40 ms to inflate completely.
To maximize the effect of both protective devices (seat belt tightener and
front air bag), they are activated with optimized time response by a common
ECU (triggering unit) installed in the passenger cell. The ECU’s deceleration
calculations are based on data from one or two electronic acceleration sensors
used to monitor the decelerative forces that accompany an impact. Depending
on the impact type, the first trigger threshold is reached within 5 to 60 ms.
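The sketch below shows, in a very simplified form, how such a triggering unit might compare sampled deceleration against staged thresholds. The threshold values and the trigger_restraints function are assumptions chosen for illustration, not real calibration data.

# Simplified restraint-trigger decision on a sampled deceleration history.
# Threshold values (in g) are illustrative assumptions only.

PRETENSIONER_G = 4.0     # fire the belt tighteners above this level (assumed)
AIRBAG_G = 20.0          # fire the front airbags above this level (assumed)

def trigger_restraints(decel_samples_g):
    """Return the list of devices to fire, in the order their thresholds are met."""
    fired = []
    for g in decel_samples_g:
        if g > PRETENSIONER_G and "seat belt tightener" not in fired:
            fired.append("seat belt tightener")
        if g > AIRBAG_G and "front air bag" not in fired:
            fired.append("front air bag")
    return fired

# A crude crash pulse sampled every millisecond: deceleration ramps up to ~30 g.
pulse = [0.5 * i for i in range(60)]
print(trigger_restraints(pulse))     # ['seat belt tightener', 'front air bag']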
Seat belt load limiter : A means by which the seat belt relaxes its hold during
heavy deceleration, continuing to restrain the occupant while reducing the
risk of belt inflicted injury.
Engine Speed Governors | Speed Control Governor | Speed Limiters
Speed Governor
The governor is a device which is used to control the speed of an engine
based on the load requirements. Basic governors sense speed, and sometimes
load, of a prime mover and adjust the energy source to maintain the desired
level. In short, a governor is a device giving automatic control (of either
pressure or temperature) or limitation of speed.
The governors are control mechanisms and they work on the principle of
feedback control. Their basic function is to control the speed within limits
when load on the prime mover changes. They have no control over the
change in speed (flywheel determines change in speed i.e. speed control)
within the cycle.
Take an example:
Assume a driver is driving a car up a hill. The engine load increases and the
vehicle speed automatically decreases, so the actual speed is now less than the
desired speed. The driver therefore increases the fuel supply to regain the
desired speed. Here, the driver acts as the governor for this system.
So a governor is a system to minimise fluctuations in the mean speed which
can occur as a result of load variation. The governor has no influence over
cyclic speed fluctuations, but it controls the mean speed over an extended
period during which the load on the engine may vary. When there is a change
in load, a variation in speed also takes place; the governor then applies a
regulatory control and adjusts the fuel supply to keep the mean speed nearly
constant. Thus the governor automatically regulates, through linkages, the
energy supplied to the engine as demanded by the variation of load, so that
the engine speed is maintained nearly constant.
Types of Governor:
The governor can be classified into the following types. These are given
below,
1. Centrifugal governor
a) Pendulum type watt governor
b) Loaded type governor
i) Gravity controlled type
● Porter governor
● Proell governor
● Watt governor
ii) Spring controlled type
● Hartnell governor
● Hartung governor
2. Inertia and fly-wheel governor
3. Pickering Governor
Purpose of governor:
1. To automatically maintain the uniform speed of the engine within the
specified limits, whenever there is a variation of the load.
2. To regulate the fuel supply to the engine as per load requirements.
3. To regulate the mean speed of the engines.
4. It works intermittently, i.e., only when there is a change in the load.
5. Mathematically, it can be expressed as ΔN.
Terminology used in the governor:
1. Height of the governor (h):
Height of the governor is defined as the vertical distance between the centre
of the governor ball and the point of intersection of the upper arm with the
axis of the spindle. The height of the governor is denoted by ‘h’.
2. Radius of rotation (r):
Radius of rotation is defined as the horizontal distance between the centre of
the governor balls and the axis of rotation of the spindle. The radius of
rotation is denoted by ‘r’.
3. Sleeve lift (X):
The sleeve lift of the governor is defined as the vertical distance travelled
by the sleeve on the spindle due to a change in the equilibrium speed. The
sleeve lift of the governor is denoted by ‘X’.
4. Equilibrium speed:
The equilibrium speed is the speed at which the governor balls, arms, sleeve,
etc., are in complete equilibrium and there is no upward or downward movement
of the sleeve on the spindle.
5. Mean Equilibrium speed:
The mean equilibrium speed is defined as the speed at the mean position of
the balls or the sleeve.
6. Maximum speed:
The maximum speed is the speed at the maximum radius of rotation of the
balls, without the sleeve tending to move either way.
7. Minimum speed:
The minimum speed is the speed at the minimum radius of rotation of the
balls, without the sleeve tending to move either way.
8. Governor effort:
The mean force working on the sleeve for a given change of speed is termed
as the governor effort.
9. Power of the governor:
The power of the governor is defined as the product of the mean effort and
the lift of the sleeve.
10. Controlling force:
The controlling force is the force equal and opposite to the centrifugal
force, acting radially inward (i.e., a centripetal force). In other words, the
force acting radially upon the rotating balls to counteract their centrifugal
force is called the controlling force.
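Two of the terms above can be tied together with the standard relations for an idealised Watt-type governor: height h = g/ω² and controlling force Fc = m·ω²·r. The short Python sketch below evaluates them for assumed example figures.

# Worked example for an idealised Watt-type governor:
#   height            h  = g / omega^2
#   controlling force Fc = m * omega^2 * r
import math

def governor_height_m(speed_rpm, g=9.81):
    """Height of a simple Watt governor at a given spindle speed."""
    omega = 2 * math.pi * speed_rpm / 60.0       # rad/s
    return g / omega ** 2

def controlling_force_n(ball_mass_kg, speed_rpm, radius_m):
    """Radially inward force balancing the centrifugal force on one ball."""
    omega = 2 * math.pi * speed_rpm / 60.0
    return ball_mass_kg * omega ** 2 * radius_m

print(f"h at 60 rpm  = {governor_height_m(60):.3f} m")    # ~0.249 m
print(f"h at 120 rpm = {governor_height_m(120):.3f} m")   # ~0.062 m
print(f"Fc, 2 kg ball, 120 rpm, r = 0.15 m: {controlling_force_n(2, 120, 0.15):.1f} N")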
Steering Systems
STEERING
The steering system in a vehicle is used to move the vehicle in a particular
direction. This is a very important sub-system in a car without which it would
be impossible for a vehicle to follow its desired path. The steering system can
be used to steer all kinds of vehicles like cars, trucks, buses, trains, tanks etc.
The conventional steering system consisted of turning the front wheels in the
desired direction. But now we have four wheel steering system mostly used
in heavy vehicles, to reduce the turning radius, rear wheel steering system,
differential steering system etc.
The basic components of any steering system are:-


1. Steering column
2. Steering box
3. Tie rods
4. Steering arms
The main geometry followed in steering is ACKERMANN STEERING
GEOMETRY. It shows that while negotiating a curve, the inner wheel needs
to follow a smaller path as compared to the outer wheel. This results in
different steering angles for the respective tires.
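A small sketch of the ideal Ackermann relation, cot(outer angle) − cot(inner angle) = track / wheelbase, using assumed vehicle dimensions; the ackermann_outer_angle helper is purely illustrative.

# Ideal Ackermann condition: cot(outer) - cot(inner) = track / wheelbase.
# Track and wheelbase below are arbitrary example dimensions.
import math

def ackermann_outer_angle(inner_deg, track_m, wheelbase_m):
    """Outer-wheel steer angle satisfying the ideal Ackermann condition."""
    cot_outer = 1.0 / math.tan(math.radians(inner_deg)) + track_m / wheelbase_m
    return math.degrees(math.atan(1.0 / cot_outer))

# Example: 1.5 m track, 2.6 m wheelbase, inner wheel steered to 30 degrees.
print(f"outer wheel angle = {ackermann_outer_angle(30, 1.5, 2.6):.1f} deg")  # ~23.4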
STEERING RATIO is defined as the ratio of the turn of the steering wheel
to the corresponding turn of the wheels, both which are measured in degrees.
It plays an important role in determining the ease of steering. A higher ratio
would mean that a large number of turns of the steering wheel is required to
negotiate a small turn. A lower ratio would enable better handling. Sports
cars usually have lower ratio while heavier vehicles have a higher steering
ratio.
The Different types of steering systems are:-
1. Rack and pinion steering system
2. Recirculating Ball steering system
3. Power Steering
The Rack and Pinion steering system is the most common system found
mostly in modern vehicles. It employs a simple mechanism. The parts of this
system are steering column, pinion gear, rack gear, tie rods, kingpin. The
circular motion of the steering wheel is transmitted to the pinion gear through
the steering column and universal joint. The pinion is meshed with a rack
which translates the circular motion into linear motion thus providing the
necessary change in direction. It also provides a gear reduction, thus making
it easier to turn the wheels. This system is preferred because of its
compactness, efficiency, ease of operation. But at the same time it gets easily
damaged on impact.
The recirculating ball steering system is employed in SUVs and trucks. It
uses a slightly different principle than the rack and pinion system. Here the
motion is translated with the help of a recirculating ball gearbox, pitman arm
and a track rod. It can transfer higher forces. But it is heavier and costlier
than the rack and pinion system.
The Power steering system employs either one of the above systems and in
addition has a hydraulic or electrical system connected to make it easier to
steer. This helps in better control of the vehicle.
Other systems like steer-by-wire systems, drive-by-wire systems also exist,
but they are not commercially used as of now but are most likely to replace
the modern day steering systems in the future.
Gorilla Glass Manufacturing Process
Touch screen technology has grown drastically in various applications over
the past few years. To overcome the difficulties faced by touch screens, a new
frontier technology was needed to revitalize their use, and Gorilla Glass has
taken an apt place in touchscreen technology. This scratch-resistant glass is
used to form the touchscreen panel of portable gadgets such as ATM machines,
Android mobile phones, tablets, personal computers and MP3 players. It is
designed to protect display screens from scratches, sticky oils, fractures,
etc.
Process 1: Melting the glass
● Silicon dioxide is mixed with other chemicals and then put into a furnace to
be melted.
● Oxygen and hydrogen are injected into the furnace to increase the heat
transfer, making the material melt faster.
● The resulting glass (alumino-silicate) contains aluminium, silicon, oxygen
and sodium ions.
Process 2: Mold the glass
● The molten glass is poured into the desired die and the required shape and
thickness obtained.
Process 3: Ion Exchange
Manufacturing Process:
Gorilla Glass starts as a mix of pure sand (silicon dioxide) and naturally
occurring chemicals (the resulting glass is termed alumino-silicate); the
impurities are separated out and the sand is melted. The molten glass fills a
trough and overflows on each side. During this “fusion draw” method, the
molten glass is drawn downward into long, 0.59-millimetre-thick sheets of
alumino-silicate glass.
At this point, you have some very large sheets of clear, clean, pure glass;
however, it is not much stronger than regular glass. Gorilla Glass gets its
strength through a noteworthy process. Each glass sheet is dipped into a
molten salt bath where a chemical exchange happens. Potassium ions are
infused into the glass while, at the same time, sodium ions exit from the
glass compound. Because the potassium (K) ions are larger than the sodium (Na)
ions, a compressive stress develops. That stress is beneficial and stops the
glass from breaking at flaws.
Chemical Strengthening Process:
Chemical tempering strengthens the glass by putting the surface of the glass
into compression by “stuffing” larger sized ions into the glass surface. During
chemical tempering method, the glass is submersed in a bath of molten salt at
prescribed temperatures. The heat causes the smaller ions to depart the
surface of the glass and bigger ions present within the molten salts to enter it.
Once the glass is removed from the bath and cooled, it shrinks. The larger
ions that are now present in the surface of the glass are crowded together.
This creates a compressed surface, which results in stronger glass that’s more
resistant to breakage.
Achieving the specified compressive stress characteristics is time/temperature
dependent. Gorilla Glass, in contrast to most soda-lime glasses, is not
self-limiting in depth of layer, so good time/temperature control is important
for a stable process. Gorilla Glass may be chemically tempered at temperatures
up to 460°C (typically between 390°C and 420°C), with the target salt
temperature maintained to within +/- 2°C. Tempering time should be controlled
to within +/- 5 min.
Fluorosilane coating:
As people’s lives become busier and workplaces transcend the boundaries of
office walls, the demand for mobile technologies continues to grow. With this
transformation comes the need for a cover glass that promotes clarity while,
also protecting and promoting the lifespan of display devices. The primary
objective in developing this coating has been to enable a coated glass
surface to retain superior optical clarity, mechanical reliability and service
life and, most importantly, to keep the glass surface functioning for its
purpose.
Applying a fluorosilane coating over the glass sheets prevents fingerprints
from appearing and makes it easy to remove everyday residues such as dirt,
oil, soap, lotion, butter, ketchup, etc. The coating also gives better clarity
and optical performance than soda-lime glass.
The latest generation of Gorilla Glass is claimed to be 20% thinner and more
responsive to touchscreen commands than its predecessor. This means that
screens can be brighter and devices slimmer in style.
Gorilla Glass’s development coincided somewhat fortunately with the rise of
the touch-screen smartphone; the best example is arguably the iPhone. However,
handsets are only the beginning. The manufacturer is experimenting with
thinner, more flexible sheets and with printing on glass, for use in
custom-built laptops, liquid crystal display televisions and other products.
Gorilla Glass History | Gorilla Glass Scratch | Gorilla Glass Touch Screen
Characteristics of Gorilla Glass:


● Scratch resistance
● Slimness / Thinner

● Stronger

● Improved Touch Sensitivity

These characteristics make it a good fit for today’s abundance of touch-screen handsets.

Difference between ‘scratch-proof’ and ‘scratch-resistant’ glass:


A scratch-proof screen would be impervious to scratches. This kind of
technology is not on the market yet – any glass may break if it is placed
under enough stress. Gorilla Glass is NOT scratch-proof.
A scratch-resistant screen is much stronger than most screens. It is less
likely to smash or crack if dropped, and less likely to scratch if scraped.
Extreme force, sharp objects, and continual exposure to abrasive oils may
still leave scratches.
History of Gorilla Glass:
Gorilla Glass is the trade name of “Corning”, a United States glass maker.
The company forms the toughened glass (an alkali alumino-silicate sheet) for
portable electronic gadgets. The idea originated in the 1960s under the
project name “Project Muscle”. The glass invented was called “Chemcor” glass,
which was ultra strong and lightweight. The product was developed as
windshield glass for cars, but it was very costly and did not succeed at that
time.
In 2005, the company started researching again under the project name
“Gorilla Glass”. By this time touch screen cell phones were becoming popular,
and the market needed a resilient, scratch-resistant cell phone cover glass.
The company took the idea from the Chemcor glass and started building Gorilla
Glass.
After completing the research, first production started in 2007 and the first
order came in 2008. At that time nearly 200 million users (about 20% of
devices) used Gorilla Glass on their cell phones.
In 2012, the second generation of Gorilla Glass was built and launched,
achieving a goal of one billion devices.
In 2013, the third-generation Gorilla Glass was launched; it is three times
more scratch resistant and stronger, and 40% of the scratches which do occur
are not visible to the naked eye.
Currently Used Gorilla Glass products:

Phones

● iPhone

● HTC

● LG

● Motorola

● Nokia

● Samsung
Tablets

● Samsung

● Blackberry

● Lenovo

Laptops

● Dell

● Sony
● Lenovo

TVs

● Sony

Cameras
● Leica

BENEFITS:
• Glass designed for a high degree of chemical strengthening
– High compressive stress
– Deep compression layer
• High retained strength after use
• High resistance to scratch damage
• Superior surface quality
APPLICATIONS:
• Ideal protective covering for displays in
– Smart phones
– Laptop / Portable Computers and tablet computer screens
– Mobile devices
• Touchscreen devices
• Optical components
• High strength glass articles
Magnetic Bearing Technology
Magnetic bearings have been utilized by a variety of industries for over a
decade, with benefits that include non-contact rotor support, no lubrication
and no friction.
Conventional mechanical bearings, the kind that physically interface with the
shaft and require some form of lubrication, can be replaced by a technology
that suspends a rotor in a magnetic field, which eliminates friction losses.
There are two types of magnetic bearing technologies in use today – passive
and active. Passive bearings are similar to mechanical bearings in that no
active control is necessary for operation. In active systems, non-contact
position sensors continually monitor shaft position and feed this information
to a control system. This in turn, based on the response commanded by the
system, flows to the actuator via current amplifiers. These currents are
converted to magnetic forces by the actuator and act on the rotor to adjust
position and provide damping.
Additional benefits of magnetic bearings include:
● No friction

● No lubrication

● No oil contamination

● Low energy consumption

● Capacity to operate within a wide temperature range

● No need for pumps, seals, filters, piping, coolers or tanks


● Environmentally friendly workplace

● Impressive cost savings

In practice, these attractions are balanced in order to maintain a gap between
the shaft (rotor) and static parts (stator). The function of the magnetic
bearing is to locate the shaft’s rotation axis in the center, reacting to any
load variation (external disturbance forces).
Floating rotors could boost compressor efficiencies


Traditional centrifugal compressors are based on low-speed drives,
mechanical gears and oil-film bearings, resulting in high running costs
because of their high losses, wear, and need for maintenance.
This new compressor drive uses a permanent magnet motor,
operating at an efficiency of around 97%, to drive a rotor "floating" on
magnetic bearings, which spins the compressor impeller at speeds of around
60,000 rpm. These drives experience almost no friction or wear, and need
little maintenance. They also minimize the risk of oil contamination, and
result in compressors that are about half the size of traditional designs.
How they work
Magnetic bearings are basically a system of bearings which provide non-
contact operation, virtually eliminating friction from rotating mechanical
systems. Magnetic bearing systems have several components. The
mechanical components consist of the electromagnets, position sensors and
the rotor. The electronics consist of a set of power amplifiers that supply
current to electromagnets. A controller works with the position sensors which
provide feedback to control the position of the rotor within the gap.
The position sensor registers a change in position of the shaft (rotor). This
change in position is communicated back to the processor where the signal is
processed and the controller decides what the necessary response should be,
then initiates a response to the amplifier. This response should then increase
the magnetic force in the corresponding electromagnet in order to bring the
shaft back to center. In a typical system, the radial clearance can range from
0.5 to 1 mm.
This process repeats itself over and over again. For most applications, the
sample rate is 10,000 times per second, or 10 kHz. The sample rate is high
because the loop is inherently unstable. As the rotor gets closer to the magnet,
the force increases. The system needs to continuously adjust the magnetic
strength coming from the electromagnets in order to hold the rotor in the
desired position.
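The sketch below imitates that loop on a single axis: at each sample the position error is fed to a PD controller whose output force nudges the rotor back towards the gap centre. The rotor mass, gains and time step are made-up values for illustration; a real controller also has to handle gravity, the bearing's negative stiffness and amplifier limits.

# Toy one-axis magnetic-bearing position loop with a PD controller.
# All numbers are invented for illustration and are not tuned for any bearing.

DT = 1.0 / 10_000        # 10 kHz sample rate, as mentioned above
KP, KD = 4000.0, 300.0   # assumed proportional / derivative gains
MASS = 5.0               # kg of rotor carried by this axis (assumed)

def simulate(steps=2000, x0=0.0004):
    """Start 0.4 mm off-centre and return the remaining offset after `steps`."""
    x, v, prev_err = x0, 0.0, x0
    for _ in range(steps):
        err = x                                            # target is the gap centre (0)
        force = -(KP * err + KD * (err - prev_err) / DT)   # corrective force, N
        prev_err = err
        v += (force / MASS) * DT                           # gravity etc. ignored here
        x += v * DT
    return x

print(f"offset after 0.2 s: {simulate():.2e} m")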
Artificial Photosynthesis
Artificial photosynthesis is one of the newer ways researchers are exploring
to capture the energy of sunlight reaching earth.
Photosynthesis:
Photosynthesis is the conversion of sunlight, carbon dioxide, and water into
usable fuel and it is typically discussed in relation to plants where the fuel is
carbohydrates, proteins, and fats. Using only 3 percent of the sunlight that
reaches the planet, plants collectively perform massive energy conversions,
converting just over 1,100 billion tons of CO2 into food sources for animals
every year.
Photovoltaic Technology:
This harnessing of the sun represents a virtually untapped potential for
generating energy for human use at a time when efforts to commercialize
photovoltaic–cell technology are underway. Using a semiconductor–based
system, photovoltaic technology converts sunlight to electricity, but in an
expensive and somewhat inefficient manner with notable shortcomings
related to energy storage and the dynamics of weather and available sunlight.
Artificial Photosynthesis:
Two things occur as plants convert sunlight into energy:
● Sunlight is harvested using chlorophyll and a collection of
proteins and enzymes, and

● Water molecules are split into hydrogen, electrons, and oxygen.

These electrons and oxygen then turn the CO2 into carbohydrates, after
which oxygen is expelled.
Rather than release only oxygen at the end of this reaction, an artificial
process designed to produce energy for human use will need to release liquid
hydrogen or methanol, which will in turn be used as liquid fuel or channeled
into a fuel cell. The processes of producing hydrogen and capturing sunlight
are not a problem. The challenge lies in developing a catalyst to split the
water molecules and get the electrons that start the chemical process to
produce the hydrogen.
There are a number of promising catalysts available, that, once perfected,
could have a profound impact on how we address the energy supply
challenge:
● Manganese directly mimics the biology found in plants.

● Titanium dioxide is used in dye-sensitized cells.

● Cobalt Oxide is very abundant, stable and efficient as a catalyst


Artificial Photosynthesis Operation:

Under the fuel-through-artificial-photosynthesis scenario, nanotubes embedded
within a membrane would act like green leaves, using incident solar radiation
(hν) to split water molecules (H2O), freeing up electrons and oxygen (O2) that
then react with carbon dioxide (CO2) to produce a fuel, in this case methanol
(CH3OH). The result is a renewable green energy
source that also helps scrub the atmosphere of excessive carbon dioxide from
the burning of fossil fuels.
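Written out as chemistry, the overall stoichiometry implied by that description balances as follows (a standard textbook balance added here for reference; the source text does not give these equations explicitly):

\begin{align}
  2\,\mathrm{H_2O} &\xrightarrow{\;h\nu\;} \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
  \mathrm{CO_2} + 6\,\mathrm{H^+} + 6\,e^- &\longrightarrow \mathrm{CH_3OH} + \mathrm{H_2O} \\
  \text{net:}\qquad \mathrm{CO_2} + 2\,\mathrm{H_2O} &\longrightarrow \mathrm{CH_3OH} + \tfrac{3}{2}\,\mathrm{O_2}
\end{align}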
History:
Plants use organic compounds that need to be continuously renewed.
Researchers are looking for inorganic compounds that catalyze the needed
reactions and are both efficient and widely available.
The research has been significantly boosted by the application of nano
technology. It’s a good example of the step wise progress in the scientific
world.
Studies earlier in the decade showed that crystals of iridium efficiently drove the
reduction of CO2, but iridium is extremely rare so technology that required
its use would be expensive and could never be used on a large scale.
Cobalt crystals were tried. They worked, and cobalt is widely available, but
the original formulations weren’t at all efficient.
Things changed with the introduction of nano technology.
The main point is that this unique approach increasingly appears to be feasible.
It has the advantage of harnessing solar energy in a form that can be stored
and used with greater efficiency than batteries and it is at least carbon neutral.
The Future of Bicycling Hydration
Is it possible to drink enough water during a ride without stopping the
bicycle?
Adequate hydration is as important as calorie replacement to an athlete’s
performance, yet dehydration continues to be a condition many experience.
This is especially true in cycling where evaporative losses are significant and
can go unnoticed. Sweat production and losses through breathing can exceed
2 quarts per hour. To maximize your performance pre-hydration is important,
and it is essential that fluid replacement begin early and continue throughout
a ride.
Approximately 75% of the energy your body produces is converted to heat
rather than being delivered to your muscles to power your pedal stroke.
Keeping your body cool and re-hydrated during exertion will result in greater
efficiency, higher power output, extended endurance, and a quicker, more
thorough recovery. Say good-bye to the Wet Spot!
Individual fluid and electrolyte needs are widely variable during physical
exercise due to differences in metabolic rate, body mass and size,
environmental conditions (e.g. temperature, humidity, wind, solar load,
clothing worn), heat acclimatization status, physical fitness, activity
duration, and genetic variability. Sweat rates can vary from 0.5 L/hr to more
than 3 L/hr. Similarly, sodium concentration may vary from less than 460 mg/L
to more than 1840 mg/L.
Technology:
Why use a perfectly good water bottle on your bike when you could use a
complex, expensive and awkward to use “hydration system” instead? That’s
the promise of the VelEau Bicycle Mounted Hydration System.
The VelEau comes in several parts. First, there’s a saddlebag which holds 42
ounces (1.4 liters) of water. Then there’s a tube through which you drink,
much like those found on CamelBak water bags. This runs from under the
seat, along the top-tube to the handlebars, where it is secured to a retracting
cord on the stem. This cord pulls the mouthpiece back into place when you’re
done drinking, where it is secured by magnets.
If that seems like it’s complex, unnecessarily heavy and annoying to use,
that’s because it probably is. However, there is at least a compartment to
carry a multi tool in the same bag, which adds some utility.
Wireless Battery Charger
In the future all electronic devices will be wirelessly powered. Small, battery-
powered gadgets make powerful computing portable.
The battery charger should be capable of charging the most common battery
types found in portable devices today. In addition, the charging should be
controlled from the base station and a bidirectional communication system
between the pickups and base station should be developed.
Inductive Power Systems:
Inductive Power Transfer (IPT) refers to the concept of transferring electrical
power between two isolated circuits across an air gap. While based on the
work and concepts developed by pioneers such as Faraday and Ampere, it
is only recently that IPT has been developed into working systems.
Essentially, an IPT system can be divided into two parts;
● Primary and

● Secondary.

The primary side of the system is made up of a resonant power supply and a
coil. This power supply produces a high frequency sinusoidal current in the
coil. The secondary side (or ‘pickup’) has a smaller coil, and a converter to
produce a DC voltage.
Working of Inductive Power Transfer:
In this system communications signals are encoded onto the waveform that
provides power to the air gap. Communication from the primary side to the
secondary is implemented by switching the power signal at the output of the
resonant converter between its normal level and a lower level which is
detectable by the pickup but still provides enough power to control the
pickup microcontroller. This process is called Amplitude Shift Keying
(ASK). This is achieved by varying the output voltage of the buck converter
which provides an input DC voltage to the resonant converter.
Communication from the secondary to the primary is achieved by a process
called Load Shift Keying (LSK). This involves varying the loading on the
pickup. Any load on the pickup will reflect a voltage on the primary circuit
proportional to the load. Therefore a variation in the load on the pickup can
be detected by the charging station.
The communications system must provide two discrete levels of voltage
reflected onto the primary side, to represent the on and off states for digital
communications. The difference must be easily detected on the primary side
to provide a robust communications channel. Signals are decoded by simple
filters and comparators which feed a digital signal to the microcontrollers.
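A toy Python sketch of the ASK idea described above: bits are encoded by switching the carrier between a normal and a reduced amplitude, and recovered by comparing the per-bit peak against a threshold. Levels, samples-per-bit and the helper names are invented for the example; real IPT links also handle noise, framing and error checking.

# Toy Amplitude Shift Keying (ASK) over the power carrier: switch between a
# normal and a reduced level to send bits. All values are example assumptions.
import math

NORMAL, REDUCED = 1.0, 0.7        # relative carrier amplitudes (assumed)
SAMPLES_PER_BIT = 40

def ask_modulate(bits):
    """Return carrier samples whose amplitude encodes the bit stream."""
    samples = []
    for bit in bits:
        level = NORMAL if bit else REDUCED
        samples.extend(level * math.sin(2 * math.pi * n / 8) for n in range(SAMPLES_PER_BIT))
    return samples

def ask_demodulate(samples):
    """Recover bits by comparing each bit period's peak amplitude to a threshold."""
    threshold = (NORMAL + REDUCED) / 2
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        peak = max(abs(s) for s in samples[i:i + SAMPLES_PER_BIT])
        bits.append(1 if peak > threshold else 0)
    return bits

message = [1, 0, 1, 1, 0]
print(ask_demodulate(ask_modulate(message)))   # [1, 0, 1, 1, 0]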
Advantages:
IPT has a number of advantages over other power transfer methods – it is
unaffected by dirt, dust, water, or chemicals. In situations such as coal
mining IPT prevents sparks and other hazards. As the coupling is magnetic,
there is no risk of electrocution even when used in high power systems. This
makes IPT very suitable for transport systems where vehicles follow a fixed
track, such as in factory materials handling.
Hydraulic Hybrid Vehicles
Introduction To Hydraulic Hybrid Vehicles:


Hybrid vehicles use two sources of power to drive the wheels. In a hydraulic
hybrid vehicle (HHV) a regular internal combustion engine and a hydraulic
motor are used to power the wheels.
Hydraulic hybrid systems consist of two key components:
● High pressure hydraulic fluid vessels called accumulators, and

● Hydraulic drive pump/motors.

Working of Hydraulic Hybrid Systems:


The accumulators are used to store pressurized fluid. Acting as a motor, the
hydraulic drive uses the pressurized fluid (Above 3000 psi) to rotate the
wheels. Acting as a pump, the hydraulic drive is used to re-pressurize
hydraulic fluid by using the vehicle’s momentum, thereby converting kinetic
energy into potential energy. This process of converting kinetic energy from
momentum and storing it is called regenerative braking.
The hydraulic system offers great advantages for vehicles operating in stop
and go conditions because the system can capture large amounts of energy
when the brakes are applied.
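Some rough arithmetic shows why stop-and-go duty suits this system; the vehicle mass, speed and capture efficiency below are assumed example figures, and recoverable_energy_kj is just an illustrative helper.

# Rough estimate of the braking energy available to a hydraulic hybrid in one
# stop. Mass, speed and capture efficiency are assumed example figures.

def recoverable_energy_kj(mass_kg, speed_kmh, capture_efficiency=0.7):
    """Vehicle kinetic energy times the fraction assumed to reach the accumulator."""
    v = speed_kmh / 3.6                      # m/s
    return capture_efficiency * 0.5 * mass_kg * v ** 2 / 1000.0

# Example: an 8-tonne delivery truck braking to rest from 50 km/h.
print(f"about {recoverable_energy_kj(8000, 50):.0f} kJ stored per stop")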
The hydraulic components work in conjunction with the primary power source
(the engine). Making up
the main hydraulic components are two hydraulic accumulator vessels which
store hydraulic fluid compressing inert nitrogen gas and one or more
hydraulic pump/motor units.
The hydraulic hybrid system is made up of four components.
● The working fluid

● The reservoir
● The pump or motor

● The accumulator

The pump or motor installed in the system extracts kinetic energy during
braking. This in turn pumps the working fluid from the reservoir to the
accumulator, which eventually gets pressurized. The pressurized working
fluid then provides energy to the pump or motor to power the vehicle when it
accelerates. There are two types of hydraulic hybrid systems – the parallel
hydraulic hybrid system and the series hydraulic hybrid system. In the
parallel hydraulic hybrid, the pump is connected to the drive-shafts through a
transmission box, while in series hydraulic hybrid, the pump is directly
connected to the drive-shaft.
There are two types of HHVs:
● Parallel and

● Series.

Parallel Hydraulic Hybrid Vehicles:

In parallel HHVs both the engine and the hydraulic drive system are
mechanically coupled to the wheels. The hydraulic pump-motor is then
integrated into the driveshaft or differential.
Series Hydraulic Hybrid vehicles:
Series HHVs rely entirely on hydraulic pressure to drive the wheels, which
means the engine does not directly provide mechanical power to the wheels.
In a series HHV configuration, an engine is attached to a hydraulic engine
pump to provide additional fluid pressure to the drive pump/motor when
needed.
Advantages:
● Higher fuel efficiency. (25-45 percent improvement in fuel
economy)
● Lower emissions. (20 to 30 percent)

● Reduced operating costs.

● Better acceleration performance.


NVH | Noise, Vibration and Harshness
NVH stands for noise, vibration, and harshness.


Noise is unwanted sound; vibration is the oscillation that is typically felt
rather than heard. Harshness is generally used to describe the severity and
discomfort associated with unwanted sound and/or vibration, especially from
short duration events.
The automotive industry is currently spending millions of dollars on NVH
work to develop new materials and damping techniques. The new design
methods are starting to consider NVH issues throughout the whole design
process. This involves integrating extensive modeling, simulation, evaluation,
and optimization techniques into the design process to ensure both noise and
vibration comfort. New materials and techniques are also being developed so
that the damping treatments are lighter, cheaper, and more effective.
Some of the methods used to control noise, vibration, and harshness include
the use of different carpeting treatments, the addition of rubber or asphalt
material to car panels, gap sealant, and the injection of expandable foam into
body panels. The carpeting treatments include varying types of foam
padding combined with different weights of rubber-backed carpet.
The overall result of this technique is a mass-spring system that acts as a
vibration absorber. The rubber or asphalt materials are attached to various
car panels to add damping and mass loading to reduce vibration levels and
the rattling sounds from the panels. Sealant is applied to close gaps in order
to increase the transmission loss from the engine, wind, and road noise
sources to the vehicle interior. Expandable foam injected between panels,
such as the dashboard and firewall, helps to add stiffness and vibration
absorption.
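The "mass-spring system" behaviour of such a treatment can be summarised by its natural frequency, f = (1/2π)·√(k/m). The stiffness and mass in the sketch below are assumed example values, not measured data for any product.

# Natural frequency of a mass-spring damping treatment: f = sqrt(k/m) / (2*pi).
# Stiffness and mass per square metre below are illustrative assumptions.
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# Example: 2 kg/m^2 of rubber-backed carpet on foam of stiffness 8e4 N/m per m^2.
print(f"{natural_frequency_hz(8e4, 2.0):.0f} Hz")   # ~32 Hz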
All of these current methods are effective at reducing sound and vibration
levels in a vehicle at higher frequencies. However, some of the treatments
become almost ineffective at lower frequencies below 200 Hz. The
treatments also add a substantial amount of weight to the vehicle, thus
affecting its fuel economy, as well as adding cost.
NVH Test Equipment:
● Analyzers,

● Shakers and controllers,

● Accelerometers,

● Noise dosimeters,

● Octave band filters,

● Transducers for vibration and acoustics,

● Dynamometers,

● Sound level meters,


● Microphones, and

● Analysis software (Ricardo, Altair NVH Director, LMS Virtual


Lab etc.)

Recent NVH Test Equipment:


● PC based analyzers,

● Multichannel NVH data acquisition systems,

● Acoustic holography devices,

● Laser vibrometers, and

● Anechoic test cells.

NVH Applications:
● Engine noise vibration testing

● Acoustic performance testing

● Sound power testing

● Pass by noise testing

● Telephone testing

● Environmental noise measurements and noise field mapping

● Structural dynamics and vibration testing

● Occupational health and safety

Advantages of NVH Test Equipment:
● Real-time multi-analysis is possible in one test run
● Results obtained are accurate and precise
● Report generation is made easy
● Shorter lead times, and hence improved productivity
Durability Analysis
Automotive

● Design more reliable transmissions, drivelines and axles

● View the whole gearbox as an interacting and flexible system

● Predict gear, bearing and shaft life-times in the design concept


phase

● Accurately and efficiently compare complex gearbox


arrangements or concepts such as AMT, DCT, Hybrid and CVT
● Reduce gearbox weight by using component strength

● Minimize noise and vibration by influencing the transmission


error

● Identify the weak points in the whole system under realistic


load conditions
● Consider the impact of manufacturing tolerances in the concept
design phase

● Improve the bearing choice by unique accurate prediction of


bearing behavior
● Interact with dynamic solutions for your full vehicle design

● Predict the effects of generators/e-engines on the gears and their


components in your hybrid system
Wind turbine

● Understand and benchmark operating load and extreme load


scenarios
● Design gearboxes to meet life-time targets

● View the gearbox as one complete system, without the need for
sectioning and sectional boundary conditions

● Analyze the behavior of complex planetary systems within the


whole system
● Accurately predict loads, deflections and interactions of all
components

● Calculate detailed bearing behavior to identify excessive loads

● Direct loads or reduce misalignments to improve the system


quality

● Predict load sharing in the fully flexible system instead of


assuming load sharing factors
● Reduce weight and cost without reducing component lifetime

● Minimize noise pollution caused by transmission error

Aerospace

● Improve reliability for critical parts

● Reduce gearbox weight

● Predict bearing behavior under extreme load and climate


conditions

● Optimize gearbox size

Off-highway
● Design heavy duty transmissions

● Accurately represent multi-gear mesh situations

● Optimize gearbox weight without compromising durability

● Predict system behavior under misuse conditions

● Compare different lubrication situations

● Precisely define micro-geometries to avoid edge-loading of


teeth under extreme load conditions

● Consider split-torque system load

Industrial equipment
● Design for improved reliability in process machinery, material
handling, power take offs, speed reducers and production line
equipment
● Improve accuracy of high precision machinery by
understanding and predicting system and component deflections

● Reduce failures in gears and bearings due to precise prediction


of misalignments
Consumer and office appliance

● Optimize weight and size of power tools, food processors,


washing machines, printers and photocopiers
● Improve product quality by reducing unwanted deflections

● Predict changes of working accuracy over a product’s life

● Design casings that fulfill the requirements for look and function
simultaneously without wasting material
● Consider new materials for new or existing product concepts

● Create technical documentation for certification


What Is NVH
Noise:
Noise is defined as any unpleasant or unexpected sound created by a
vibrating object.
Vibration:
Vibration is defined as any objectionable repetitive motion of an object, back-
and-forth or up-and-down.
Harshness:
Harshness is defined as an aggressive suspension feel or lack of “give” in
response to a single input.
Noise and Vibration Theory:


A vibrating object normally produces sound, and that sound may be an
annoying noise. In the case where a vibrating body is the direct source of
noise (such as combustion causing the engine to vibrate), the vibrating body
or source is easy to find. In other cases, the vibrating body may generate a
small vibration only. This small vibration may cause a larger vibration or
noise due to the vibrating body’s contact with other parts. When this happens,
attention focuses on where the large vibration or noise occurs while the real
source often escapes notice. An understanding of noise and vibration
generation assists with the troubleshooting process. The development of a
small noise into a larger noise begins when a vibration source (compelling
force) generates a vibration. Resonance amplifies the vibration with other
vehicle parts. The vibrating body (sound generating body) then receives
transmission of the amplified vibration.
A sound wave’s cycle, period, frequency, and amplitude determine the
physical qualities of the sound wave.
The physical qualities of sound are:
● Audible range of sound

● Pitch

● Intensity

For sound to be heard, the resulting acoustic wave must have a range of 20 to
20,000 Hz, which is the audible range of sound for humans. While many
vehicle noises are capable of being heard, some NVH noises are not in the
audible range.
NVH Terms | NVH Terminology - 1
Audible Range of Sound – Sounds that are in the range of 20 to 20,000
Hertz (Hz).

Amplitude – The vertical measurement between the top and bottom of a wave.
Also see magnitude.
Beat – An NVH concern produced by two sounds that is most noticeable
when the frequency difference is 1 to 6 Hz.
Bead Seating – The process of seating the tire to the rim. If properly
lubricated the bead seating occurs when the tire and wheel are assembled.
Compelling Force – A vibrating object acting upon another object that
causes the other object to vibrate.

Cycle – The path a wave travels before the wave begins to repeat the path
again.
Dampen – To reduce the magnitude of a noise or vibration.
Dampers – A component used to dampen a noise or vibration. Foam and
rubber are commonly used to dampen vibrations.
Dynamic Balance – A procedure that balances a tire and wheel assembly in
two planes. Dynamic balance removes radial and lateral vibrations.
Droning, High-Speed – A long duration, non-directional humming noise that
is uncomfortable to the ears and has a range of 50 mph (80.5 kph) and up.
Droning, Low-Speed – A long duration, low-pitched noise that is non-
directional and has a range of up to 30 mph (48 kph).
Droning, Middle-Speed – A long duration, low-pitched noise that is non-
directional and has a range of 30 to 50 mph (48 to 80.5 kph).
Electronic Vibration Analyzer – An electronic NVH diagnostic tool that
measures frequency and amplitude.
Frequency – The number of complete cycles that occurs in a given period of
time.
Harshness – An aggressive suspension feel or lack of give in response to a
single input.
Hertz – The unit of frequency measurement in seconds (a vibration occurring
8 times per second would be an 8 Hz vibration).
Intensity – The physical quality of sound that relates to the amount and
direction of the flow of acoustic energy at a given speed.

Lateral Run out – A condition where a rotating component does not rotate
in a true plane. The component moves side-to-side (wobbles) on its rotational
axis.

Magnitude (Amplitude) – The amount of force or the intensity of the
vibration. The magnitude or strength of a vibration is always greatest at the
point of resonance.

Medium – Provides a path for sound waves to travel through.


Natural Frequency – The frequency that a component will vibrate the
easiest. Normally, the larger the mass, the lower its natural frequency.
● Engine block (2-4 Hz)

● Tire and wheel assemblies (1-15 Hz) – proportional to vehicle speed
● Suspension (10-15 Hz)

● Driveline (20-60 Hz)

● Differential components (120-300 Hz)

Noise – The unpleasant or unexpected sound created by a vibrating object.


Order – The number of disturbances created in one revolution of a
component.
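For a rotating component, the first-order frequency is simply its revolutions per second; higher orders are integer multiples. The sketch below works this out for a tire, with an assumed rolling circumference.

# First-order frequency of a rotating part = revolutions per second.
# The 2.0 m rolling circumference is an assumed example value.

def rotational_order_hz(speed_kmh, rolling_circumference_m=2.0, order=1):
    return order * (speed_kmh / 3.6) / rolling_circumference_m

print(f"{rotational_order_hz(100):.1f} Hz")            # tire first order at 100 km/h (~13.9 Hz)
print(f"{rotational_order_hz(100, order=2):.1f} Hz")   # second order of the same wheel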
Phase – The position of a vibration cycle relative to another vibration cycle
in the same hertz rate (time frame).
NVH Terms | NVH Terminology - 2
Pitch – The physical quality of sound that relates to the frequency of the
wave.

Radial Force Variation (RFV) – A measurement of the tire’s uniformity, under
load, in regards to the variation of the load acting towards the center of the
tire; commonly referred to as the tire’s sidewall variation.

Radial Run out – A condition where a rotating component does not rotate in
a true plane. The component moves up and down on its rotational axis.
Resonance – The tendency of a system to respond increasingly to a
compelling force oscillating at, or near, the natural frequency of the system.
This causes a sudden and large vibration.
Road Noise – A noise that occurs while driving on gravel or roughly paved
roads at all vehicle speeds, or when a vehicle is coasting.

Shake, Lateral – A side-to-side vibration of the body, seats, and steering
wheel.
Shake, Vertical – An up and down vibration of the body, seats, and steering
wheel.
Shimmy, High-Speed – A vibration that causes the steering wheel to
oscillate when driving on smooth roads at high speeds.
Shimmy, Low-Speed – A vibration that causes the steering wheel to oscillate
when driving across a bump at low speeds.

Source Component – The component that is diagnosed as being the root cause
of a vibration or noise concern.
Sound – The result of a vibrating disturbance of an object, which produces
waves that transmit out from the source.
Static Balance – The method of balancing a tire and wheel assembly in a
single plane. Static balancing removes only the radial (up and down) imbalance,
and the tire and wheel assembly could still have a lateral (side to side,
wobble) vibration.
Torque Sensitive Vibration or Noise – A vibration or noise that is sensitive
to different loads and torque, applied to the drive train of a vehicle. The
vibration or noise changes when the throttle position or transmission gearing
is used, during a road test, to change the torque applied to the drive train.
Vibration – The repetitive motion of an object, back and forth or up and
down, which may be felt or heard.
Wheel Diameter – The dimension of a wheel measured on the inside of the
wheel at the bead seat area.
Aerogel | World’s Lightest Material
Aerogel is a very special type of foam which is 99.8% air. Aerogel is a low-
density solid-state material derived from gel in which the liquid component
of the gel has been replaced with gas. The result is an extremely low density
solid with several remarkable properties, most notably its effectiveness as a
thermal insulator. Aerogels are solid, but can be less dense than air. Despite
their sparse molecular structure aerogels are strong.
It was first invented in the 1930s by Samuel Stephens Kistler, but was very
brittle and could not be shaped. Aerogels are traditionally expensive and
difficult to manufacture, and they are difficult to handle. Now a team of
scientists have discovered how to make it flexible so that it does not break so
easily. This means there are a lot of ways in which it can be used to solve
problems.
It is nicknamed frozen smoke, solid smoke or blue smoke due to its
translucent nature and the way light scatters in the material; however, it feels
like exploded polystyrene (Styrofoam) to the touch.
Aerogels possess the lowest density and highest internal surface area of any
known solid material, which makes them extremely high performance
material for collision, damping, acoustic and thermal insulation, structural
support and surface chemistry.
Properties:
1. Extremely low density

2. Very good thermal insulator


3. High specific surface area

4. Lowest dielectric constant

Metal aerogel Properties:


1. High specific surface area (100-500m2/g)

2. Electrically conductive!

3. Enhanced catalytic activity

4. Surprisingly capable thermal insulator

Interesting Facts:
1. A paperclip has a mass of approximately one gram. A one
gram sample of aerogel has an internal surface area of between 250
and 3000m2 per gram (when produced in a weightless
environment).

2. Lowest solid density: The lightest man-made material is an aerogel with a
density of only three times the density of air. However, industrial aerogels
can be made denser, up to 0.6 g/cc or more.
3. Highest porosity: Perhaps the only material that can have over
95% porosity, and a very wide pore size distribution, ranging from
Angstroms (10-10 meter) to microns (10-6 meter).

4. Very high surface area: For some Aerogels, one ounce can
have a surface area equal to a football field (over 3000 square
meters per one gram).

5. Versatile compositions: Aerogels can be made with a wide


range of chemical compositions.
6. Functional properties by design: Combinations of the above
features can lead to Aerogel materials with useful properties such
as:

○ adsorbents,

○ catalysts,

○ insulators,

○ semiconductors,

○ piezoelectric,

○ dielectric,

○ ferroelectric,

○ diffusion controllers,

○ electric conductors,

○ electric insulators,

○ and optical features.

7. Can hold (theoretically) 500 to 4,000 times its weight in applied force.

Types of Aerogels:
1. Silica:
•Silica aerogel is the most common type of aerogel and the most extensively
studied and used. It is a silica-based substance, derived from silica gel.

•The world’s lowest-density solid is a silica nano-foam at 1 mg/cm3, which is
the evacuated version of the record aerogel of 1.9 mg/cm3. The density of air
is 1.2 mg/cm3.
•Silica aerogel strongly absorbs infrared radiation. It allows the construction
of materials that let light into buildings but trap heat for solar heating.
•It has remarkable thermal insulative properties, having an extremely low
thermal conductivity: from 0.03 W/m·K down to 0.004 W/m·K, which
correspond to R-values of 14 to 105 for 3.5 inch thickness. For comparison,
typical wall insulation is 13 for 3.5 inch thickness. Its melting point is 1,473
K (1,200 °C or 2,192 °F).
•Silica aerogel holds 15 entries in Guinness World Records for material
properties, including best insulator and lowest-density solid.
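Those conductivity figures translate into thermal resistance via R = thickness / conductivity; multiplying the SI result (m²·K/W) by roughly 5.68 gives the US R-value. The sketch below runs the numbers for a 3.5 inch layer; it lands in the same range as the figures quoted above (the exact values depend on the rounding and conversion convention used).

# Thermal resistance of an insulation layer: R = thickness / conductivity.
# The 5.678 factor converts the SI value (m^2*K/W) to the US R-value.

def r_value_us(thickness_m, conductivity_w_per_mk):
    return (thickness_m / conductivity_w_per_mk) * 5.678

thickness = 3.5 * 0.0254                     # 3.5 inches in metres
for k in (0.03, 0.004):                      # W/m*K, the range quoted above
    print(f"k = {k} W/m*K -> R about {r_value_us(thickness, k):.0f}")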
2. Carbon:
•Carbon aerogels are composed of particles with sizes in the nanometre
range, covalently bonded together. They have very high porosity (over 50%,
with pore diameter under 100 nm) and surface areas ranging between 400–
1000 m²/g. They are often manufactured as composite paper: non-woven
paper made of carbon fibres, impregnated with resorcinol-formaldehyde
aerogel, and pyrolyzed.

• Depending on the density, carbon aerogels may be electrically conductive,
making composite aerogel paper useful for electrodes in capacitors or
deionization electrodes. Due to their extremely high surface area, carbon
aerogels are used to create super capacitors, with values ranging up to
thousands of farads based on a capacitance of 104 F/g and 77 F/cm³.
•Carbon aerogels are also extremely "black" in the infrared spectrum,
reflecting only 0.3% of radiation between 250 nm and 14.3 µm, making them
efficient for solar energy collectors.
Manufacturing:
Aerogels are formed by a process known as supercritical drying, in which the
liquid from the gel base is removed and replaced by a gas, leaving a solid
structure.

It is prepared like gelatine, by mixing a liquid silicon compound and a fast-evaporating liquid solvent, forming a gel that is then dried in an instrument similar to a pressure cooker.
The mixture thickens, and then careful heating and depressurizing produce a glassy sponge of silica.
Recent Development:
NASA’s Glenn Research Centre developed a polymer Aerogel which is
strong, flexible, and robust against folding, creasing, crushing, and being
stepped upon. These aerogels are among the least dense solids, possess
compressive specific strength similar to aerospace grade graphite composite,
and provide the smallest thermal conductivity for any solid.
The new aerogels are up to 500 times stronger than their silica counterparts.
A thick piece actually can support the weight of a car.

Silica aerogels would crush to powder if placed under a car tire. The same is not
true of the new polymer aerogels, even if the car is only a Smart car. Overall, the
mechanical properties are rather like those of a synthetic rubber, save that the
aerogel has the same properties (and far smaller thermal conductivity) with only
about 10 per cent of the weight. The new class of polymer aerogels also has
superior mechanical properties: for example, silica aerogels of a similar density
have a compressive strength and tensile limit more than 100 times lower than
those of the new polymer aerogels. They can also be produced in a thin form, a
film so flexible that a wide variety of commercial and industrial uses are possible.
Applications:
Example 1:
Military aeroplane and helicopter engines produce a lot of heat. This means
they can be attacked by heat-seeking missiles. If the engine is surrounded by
a layer of Aerogel, then less heat escapes for the missiles to detect.
Example 2:
Aerogel can also be used to stop heat from escaping from hot water pipes.
When heat escapes, energy is wasted, which means more of the earth’s energy
supplies are used up. Many other materials can be used to stop heat escaping,
but aerogel insulates far better for a given thickness.

Example 3:
Scientists look at the dust from comets to find out what the Solar system was
like when it was first formed. They want to know what the dust is made of
and what shape it is. But it is hard to catch the fast moving dust. If the dust
rubs against anything, friction makes the dust hot which can change it. If the
dust hits anything hard, that can also change its shape. So scientists use
Aerogel in a dust collector on the Stardust spacecraft. As the very small dust
particles go through the Aerogel they leave little paths. These paths are used
to find the dust particles when the probe comes back to Earth.

More Applications:
1. Fire retardant
1. Oven (regular, pizza, etc.)

2. Grill

3. Furnace

4. Blacksmith forge

2. Insulation (hot or cold):

a. Auto
1. Air intake

2. Engine

3. Exhaust

4. Manifolds

b. Clothing – Only for cold, not warm, since it’ll trap body heat!

c. Home

1. Furnace

2. Grill
3. Kitchen

1. Oven

2. Pot holders

3. Pots and pans

4. Coolers and refrigerators

4. Pipes & air ducts

5. Walls & Roof

6. Windows

3. Blacksmith forge
4. Pulling water out of materials
5. Shock absorption
6. Sound insulation
LED Light Bulbs | Bonded Fin Heat Sink

LEDs won’t burn your hand like some light sources, but they do produce
heat. All light sources convert electric power into radiant energy and heat in
various proportions. LEDs generate little or no IR or UV, but convert only
20%-30% of the power into visible light; the remainder (70% or more) is converted to
heat that must be conducted from the LED die to the underlying circuit board
and heat sinks, housings, or luminaire frame elements.
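
Using the proportions quoted above, the heat load a heat sink must handle follows directly from the electrical input power. The short sketch below illustrates the arithmetic; the 10 W input figure is purely illustrative.

# Heat that must be dissipated by an LED heat sink, from the proportions quoted above
# (20-30% of input power leaves as visible light, the rest as heat). 10 W is an example value.

def led_heat_load(input_power_w, optical_efficiency):
    """Return the power (W) converted to heat inside the LED package."""
    return input_power_w * (1.0 - optical_efficiency)

p_in = 10.0                                   # W of electrical input (illustrative)
for eff in (0.20, 0.30):
    print(f"{eff:.0%} optical efficiency -> {led_heat_load(p_in, eff):.1f} W of heat to dissipate")
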
Term: Heat Sink:
Thermally conductive material attached to the printed circuit board on which
the LED is mounted. Myriad heat sink designs are possible; often a “finned”
design is used to increase the surface area available for heat transfer. For
general illumination applications, heat sinks are often incorporated into the
functional and aesthetic design of the luminaire, effectively using the
luminaire chassis as a heat management device.
Why does thermal management matter?
Excess heat directly affects both short-term and long-term LED performance.
The short term (reversible) effects are color shift and reduced light output
while the long-term effect
is accelerated lumen depreciation and thus shortened useful life. If heat is
allowed to build, it can damage parts causing them to dim and lose efficiency.
Manufacturing Methods:
In liquid metal forging, sometimes called the squeeze casting process, molten
metal is poured directly into the bottom die. Then the top die is forced down
to forge the part as in a conventional forging operation.
The metal solidifies rapidly under considerable pressure in the range of 27.5
to 82.6 MPa depending on the work metal. With optimized process
parameters, liquid metal forging parts have no internal porosity and a fine
cast structure.
Previous Heat Sink Manufacturing Methods:
1. Extrusion
2. Machining
3. Stampings
4. Castings
Types of LED Heat Sink produced:
Plate Fin

Pin Fin
Radial Fin

Speciality Heat Sink


Benefits of Liquid forged Heat sinks:
1. Improved thermal performance
Rapid heat transfer delivers more lumens/ watt and enhances the LED
lifespan.
• Aluminium wrought alloys conduct heat faster than cast alloys used in die
casting. Also by incorporating a copper base, the heat sink achieves 4 times
better thermal conductivity.
• Intricate fins and pins deliver a higher aspect ratio, increasing the surface
area for ambient heat transfer. With no centre core, heat removal by
convection is also improved.
• Pore-free microstructure eliminates air pockets for rapid, continuous heat
transfer through the heat sink to the surroundings.
2. Flexible Design
The key to an effective LED heat sink design is to be able to balance both
maximisation of heat sink surface area and form factor constraint of light
fixtures. Each custom LED lighting design involves the concept of efficiently
transferring as much heat as possible away from the LED chip. With a high
aspect ratio and the ability to create 3D designs as a single piece, liquid
forging is a highly scalable manufacturing process, allowing the creation of
intricate heat sinks made of composite materials such as copper and
aluminium in a single step. The fins of the heat sink can be combined with a
copper base to create a radial heat sink with improved design and better
thermal conductivity. The process allows heat sinks and light fixtures to be
formed as a single piece, minimising assembly costs, and improving thermal
efficiency.
3. Enhanced finishing
The heat sink can be anodised for a better finish, which further improves
thermal performance by an additional 10 – 15%.
Features of LED heat sink by Liquid metal Forging:
1. High aspect ratio
2. Enhanced Heat Dissipation
3. Flexible Design
4. One step manufacturing with light fixture
5. Minimum Porosity
6. Anodised Finishing
7. Enhanced Aluminium alloy conductivity
Advantages:
(i) Elimination of micro porosity (shrinkage and porosity), due to the effect of
solidification under pressure.
(ii) Improvement in product surface finishing due to high direct pressure into
mould surface.
(iii) Production using Aluminium series materials
(iv) Multi cavity possible
(v) Near net shape process
(vi) High aspect ratio features
Limitations:
(i) Variation in z‐axis thickness control
(ii) Mechanical structure (elongation)
Nano-Nuclear Batteries | Beta-Voltaic Power

Long-lived power supplies for remote and even hostile environmental


conditions are needed for space and sea missions. Nuclear batteries can
uniquely serve this role. In spite of relatively low power, a nuclear battery
with packaging can have an energy density near a thousand watt-hours per
kilogram, which is much greater than that of the best chemical battery. It stands to
reason that small devices would need small batteries to power them.

The world of tomorrow that science fiction dreams of and technology


manifests might be a very small one. Tritium is a radioactive isotope of
hydrogen that typically is produced in nuclear reactors or high energy
accelerators. It decays at a rate of about five percent per year (half of it
decays in about 12 years). It gives off radiation in the form of a beta particle.
Tritium will bind anywhere hydrogen does, including in water, and in plant,
animal and human tissue. It cannot be removed from the environment once it
is released. Tritium can be inhaled, ingested, or absorbed through skin.
Moreover, radioactive isotopes are available on the market for reasonable
prices ($1000) and low-power electronics are becoming increasingly
versatile. Therefore, nuclear batteries are commercially relevant today.
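
The decay figures quoted above can be checked with the exponential decay law. The sketch below assumes a tritium half-life of about 12.3 years, consistent with the "about 12 years" given in the text.

# Check of the tritium decay figures: a ~12.3-year half-life implies roughly 5-6% decay per year.
T_HALF_YEARS = 12.3                       # tritium half-life (approximate)

def fraction_remaining(years):
    return 0.5 ** (years / T_HALF_YEARS)  # exponential decay law N = N0 * 0.5**(t / T_half)

print(f"Decayed after 1 year : {1 - fraction_remaining(1):.1%}")    # ~5.5%
print(f"Remaining after 12 y : {fraction_remaining(12):.1%}")       # ~50%
print(f"Remaining after 25 y : {fraction_remaining(25):.1%}")       # ~24%
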
Symbol: H (H-3)
Atomic number: 1 (protons in the nucleus)
Atomic weight: 1 for naturally occurring hydrogen (tritium itself has a mass number of 3)
What is Tritium?
Tritium is the only radioactive isotope of hydrogen. The nucleus of a tritium
atom consists of a proton and two neutrons. This contrasts with the nucleus of
an ordinary hydrogen atom (which consists solely of a proton) and a
deuterium atom (which consists of one proton and one neutron). Ordinary
hydrogen comprises over 99.9% of all naturally occurring hydrogen.
Deuterium comprises about 0.02%, and tritium comprises about a billionth of
a billionth (10⁻¹⁶ percent) of natural hydrogen.
What is Isotope?
An isotope is a different form of an element that has the same number of
protons in the nucleus but a different number of neutrons.

Alpha radiation:
Alpha particles are helium nuclei (2 protons and 2 neutrons). These
particles are relatively heavy and have poor penetrating power, being over
90% blocked by a sheet of paper.
Beta radiation:
Beta radiation (high-speed electrons or positrons) can penetrate paper.
Gamma radiation:
Gamma radiation is high-energy electromagnetic radiation and can even penetrate aluminium.
How is tritium produced?
Tritium can be made in production nuclear reactors, i.e., reactors designed to
optimize the generation of tritium and special nuclear materials such as
plutonium-239. Tritium is produced by neutron absorption of a lithium-6
atom. The lithium-6 atom, with three protons and three neutrons, and the
absorbed neutron combine to form a lithium-7 atom with three protons and
four neutrons, which instantaneously splits to form an atom of tritium (one
proton and two neutrons) and an atom of helium-4 (two protons and two
neutrons).
Direct Radioisotope Converters:
Radioisotope power conversion, in which the energy from the decay of
radioisotopes is used as a power source, allows powering of applications
which are unsuited to power sources such as photovoltaics or generators or to
batteries. These applications are typically remote, not accessible to any
external energy source (including sunlight), and often must last between 5 to
50 years. They include not only space, but also small power sources for
biomedical uses. Radioisotope thermal generators (RTGs) are often used to
convert the energy from the radioisotope by converting it to heat, and then
converting the heat to electricity via either a thermoelectric device or
thermophotovoltaics (TPV). Alternately, the radioisotope may be directly
converted into electricity via betavoltaics, in which the energy from a beta
particle creates electron-hole pairs which are collected and used to generate
power, similar to a solar cell.
Self-Driving Car Technology

An autonomous vehicle is fundamentally defined as a passenger vehicle that


drives by itself. An autonomous vehicle is also referred to as an autopilot,
driverless car, auto-drive car, or automated guided vehicle.
In the future, automated systems will help to avoid accidents and reduce
congestion. Future vehicles will be capable of determining the best route
and warning each other about the conditions ahead.

Google has been working on its self-driving car technology, where the user
is required to enter an address in Google Maps, after which the system
gathers information from Google Street View and combines it with artificial
intelligence software. The software includes information from video cameras
in the car, a LIDAR sensor on top of the vehicle, radar sensors at the front and a position
sensor attached to one of the rear wheels that helps locate the car’s position
on the map. These sensors help the car maintain its distance from surrounding
vehicles and objects.

The control mechanism of an autonomous car consists of three main blocks


as shown below:
1. Sensors
-laser sensors
-cameras
-radars
-ultrasonic sensors
-GPS, etc.
2. Logic Processing units
-Software
-Decision making
-Checking functionality
-User interface
3. Mechanical control systems
-Consists of servo motors and relays
-Driving wheel control
-Brake control
-Throttle control, etc.
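
The three blocks listed above can be illustrated with a minimal software skeleton of one pass through the sense, decide and act loop. All class and function names below are hypothetical and the decision rules are invented for illustration; a real autonomous-vehicle stack is vastly more complex.

# Purely illustrative skeleton of the three blocks: sensors -> logic processing -> mechanical control.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    lidar_range_m: float       # nearest obstacle reported by the LIDAR
    gps_position: tuple        # (latitude, longitude) from the position sensor
    camera_lane_offset: float  # lateral offset from lane centre, metres

def read_sensors() -> SensorFrame:
    # Placeholder: real code would poll LIDAR, cameras, radar, ultrasonic sensors and GPS.
    return SensorFrame(lidar_range_m=35.0, gps_position=(12.97, 77.59), camera_lane_offset=0.2)

def decide(frame: SensorFrame) -> dict:
    """Logic processing unit: turn sensor data into actuator set-points."""
    clear_ahead = frame.lidar_range_m > 20.0
    throttle = 0.3 if clear_ahead else 0.0          # back off when an obstacle is near
    brake = 0.0 if clear_ahead else 0.6
    steering = -0.5 * frame.camera_lane_offset      # simple lane-centring correction
    return {"throttle": throttle, "brake": brake, "steering": steering}

def actuate(commands: dict) -> None:
    # Placeholder for the servo motors and relays of the mechanical control block.
    print(f"steering {commands['steering']:+.2f}, throttle {commands['throttle']:.1f}, brake {commands['brake']:.1f}")

actuate(decide(read_sensors()))   # one pass of the sense -> decide -> act loop
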

Artificial Intelligence Software:


Artificial intelligence is the making of intelligent machines, especially
intelligent computer programs. It is related to the similar task of using
computers to understand human intelligence. Systems that exhibit human-like
intelligence and behaviour include robots, expert systems, voice recognition,
natural language processing, face recognition, handwriting recognition, game
intelligence, artificial creativity and more. Through this technology, Google
Maps and Google Street View are linked together.
Google Maps:
Google Maps is a Google service offering powerful, user-friendly mapping
technology and local business information, including business locations,
contact information, and driving directions.

Google Street View:


Google Street View (GSV) has rapidly expanded to provide street-level
images of entire cities all around the world. The number and density of geo-
positioned images available make this service truly unprecedented. A Street
View user can wander through city streets, enabling a wide range of uses
such as scouting a neighbourhood, or finding specific items such as bike
racks or mail boxes.
LIDAR Sensor:
Light Detection And Ranging (LIDAR) is an optical remote sensing technology that
can measure the distance to, or other properties of, a target by illuminating the
target with light, often using pulses from a laser. LIDAR uses ultraviolet,
visible, or near-infrared light to image objects and can be used with a wide
range of targets, including non-metallic objects, rocks, rain, chemical
compounds, aerosols, clouds and even single molecules. A narrow laser beam
can be used to map physical features with very high resolution.
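
At its core, a pulsed LIDAR measures distance by time of flight: the range is half the round-trip travel time multiplied by the speed of light. A tiny sketch of the calculation:

# Core LIDAR ranging idea: distance = (speed of light x round-trip time) / 2.
C = 299_792_458.0            # speed of light, m/s

def lidar_range(round_trip_time_s):
    return C * round_trip_time_s / 2.0

# A pulse returning after 200 nanoseconds corresponds to a target about 30 m away.
print(f"{lidar_range(200e-9):.1f} m")   # ~30.0 m
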

Position Sensor:
This device provides the latitude, longitude and altitude together with the
corresponding standard deviation and the standard NMEA messages with a
frequency of 5 Hz. When geostationary satellites providing the GPS drift
correction are visible from the car, the unit enters the differential GPS mode
(high precision GPS). When no correction signal is available, the device
outputs standard precision GPS.
Radar Sensor:
Radar (Radio Detection And Ranging) is an object-detection system which
uses radio waves to determine the range, altitude, direction, or speed of
objects. It can be used to detect aircraft, ships, spacecraft, guided missiles,
motor vehicles, weather formations, and terrain. The radar dish or antenna
transmits pulses of radio waves or microwaves which bounce off any object
in their path. The object returns a tiny part of the wave’s energy to a dish or
antenna which is usually located at the same site as the transmitter.
Electrochromic Auto-Dimming Mirror
Electrochromic mirror system:

Electrochromic (auto-dimming) mirrors use a combination of opto-electronic
sensors and complex electronics (sensors, circuit boards, micro-controllers,
etc.) that constantly monitor ambient light and the intensity of light shining
directly on the mirror. As soon as the sensors detect glare, the electrochromic
surface of the mirror becomes darker to protect the driver’s eyes and
concentration.

The electrochromic technology is usually applied to the inside rearview
mirror or side view mirrors, where it basically saves you the trouble of flipping
the mirror manually when blinded by light, which increases driving safety.
Working operation of electrochromic mirrors:
This auto-dimming rearview mirror is installed on higher-end vehicles. It
consists of two lenses that sandwich an electrochromic (electronically colour-
changing) gel. The inside faces of the lenses are coated with a transparent
conductive layer, and the deepest lens has a reflective coating.

This gel darkens when charged with electricity, then clears once the glare is
no longer detected. The mirror uses a forward sensor which measures the
outside ambient light, and a rearward sensor to look for glare. When it is dark
enough, the system sends current to the electrochromic gel, darkening it at a rate which is
related to the level of ambient darkness and rearward glare. When the outside
ambient light increases, the current decreases, until the gel is clear again at
daylight light levels.
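
The dimming logic described above can be sketched as a simple function of the two sensor readings: the gel is darkened in proportion to how much the rearward glare exceeds the ambient light level. The scales and thresholds below are invented for illustration only and do not describe any particular manufacturer's calibration.

# Illustrative auto-dimming logic: darkening increases with rear glare relative to ambient light.
def dimming_level(ambient, rear_glare, threshold=1.5, max_ratio=6.0):
    """Return 0.0 (clear) .. 1.0 (fully darkened) from two light-sensor readings."""
    if ambient <= 0:
        ambient = 1e-6                      # avoid division by zero at night
    ratio = rear_glare / ambient
    if ratio <= threshold:
        return 0.0                          # no significant glare: keep the gel clear
    level = (ratio - threshold) / (max_ratio - threshold)
    return min(1.0, level)                  # clamp at full darkening

print(dimming_level(ambient=100, rear_glare=120))   # daylight, mild glare -> 0.0
print(dimming_level(ambient=2,   rear_glare=9))     # night, strong glare  -> ~0.67
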
https://www.youtube.com/watch?v=PXFvnfi5t94

Rain Sensors
Rain sensor systems:
Opto electronic sensors are used in a reflective mode in rain sensor systems
to detect the presence of water on the windshield so that the windshield
wipers can be controlled automatically.
An LED emits light in such a way that when the windshield is dry almost the
entire amount of light is reflected onto a light sensor. When the windshield is
wet, the reflective behavior changes: the more water there is on the surface,
the less light is reflected. In the new rain sensor, infrared light is used instead
of conventional visible light. This means that the sensor can be mounted in
the black area at the edge of the windshield and cannot be seen from outside.
Working Operation:
An infrared beam is reflected off the outer windshield surface back to the
infrared sensor array. When moisture strikes the windshield, the system
detects a change in the reflection of its infrared beam. Advanced analogue and digital signal
processing determines the intensity of rain. The sensor communicates to the
wiper control module, which switches on the wiper motor and controls the
wipers automatically, according to the moisture intensity detected.

Depending on the quantity of rain detected, the sensor controls the speed of
the wiper system. In conjunction with electronically controlled wiper drive
units, the wiping speed can be continuously adjusted in intermittent
operation. In the event of splash water – as when overtaking a truck – the
system switches immediately to the highest speed.
The new rain sensor offers further options. For example, it can be used to
close windows and sunroofs automatically if the vehicle is parked and it
starts to rain. It can even be fitted with an additional light sensor to control
the headlights – at night or at the entrance to a tunnel, the lights can be
switched on without any intervention by the driver.
Light Sensors:
Automatic lighting of the headlights is controlled by a passive light sensor. It
measures available light using a set of photo-electric cells.
The light sensor comprises three lenses that focus the light onto three photo-
electric cells. This divides “the luminous space” surrounding the vehicle into
several zones through the directivity of each lens and cell pair.
● Lens 1: Measures total ambient light

● Lens 2: Detects light sources directly ahead of the vehicle

● Lens 3: Distinguishes road conditions (such as bright sunny
weather or a dark tunnel)

By comparing the information gathered by these three devices, the system
computer determines the situation with which the vehicle is confronted and
commands the headlights accordingly.
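
A hedged sketch of such a decision rule is shown below: the three readings are compared against thresholds to decide whether the headlights should be switched on. The lux thresholds and the simple tunnel test are assumptions for illustration, not the calibration of any real system.

# Illustrative headlight decision from the three-lens light sensor described above.
def headlights_on(ambient_lux, forward_lux, road_lux, dusk_threshold=1000, tunnel_threshold=400):
    if ambient_lux < dusk_threshold:
        return True                      # night or dusk: lights on
    if road_lux < tunnel_threshold and forward_lux < tunnel_threshold:
        return True                      # bright sky but dark road ahead: tunnel entrance
    return False

print(headlights_on(ambient_lux=20000, forward_lux=15000, road_lux=12000))  # sunny day -> False
print(headlights_on(ambient_lux=5000,  forward_lux=150,   road_lux=100))    # tunnel    -> True
print(headlights_on(ambient_lux=300,   forward_lux=200,   road_lux=150))    # dusk      -> True
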
Tandem Wipers | Windshield Wiper Blades
Windshield Wipers:

Conventional wiper drives have only one direction of rotation, and the
direction of wiping is changed mechanically. The new electronically
controlled motor reverses its direction of rotation at the turning point of the
wiper – therefore, the mechanical components require less space.

The electronic controller ensures a maximum visibility area at all times,


irrespective of wiping speed, coefficient of friction or wind force. Through
exact adherence to the wiping angle, it is possible to reduce the tolerance
distance to the edge of the windscreen to a minimum and thus to enlarge the
swept area. The controller can detect obstacles such as packed snow at the
reversing points and automatically reduces the swept area to prevent the
system from blocking. The speed of the motor is reduced before reversal to
ensure quiet running.

The electronic speed controller is also very practical in conjunction with a


rain sensor: depending on the quantity of water on the windshield, the drive
unit can be operated at a continuously variable wiping speed.
The electronically controlled wiper motor also allows the use of two-motor
wiper systems. Each wiper arm is moved by its own drive unit, and the
electronic controller is responsible for the coordination of the movements.
Advantages:
● This system requires no connecting rods

● Reduced space requirements and

● Lower weight.

Features & Benefits:


An extended parking position is available as an additional function: this
means that when the wipers are switched off, the wiper arms are parked under
the trailing edge of the bonnet. This improves the aerodynamics of the
vehicle. Driving noise is reduced and the driver’s field of vision is enlarged.
At the same time, the risk of injury is reduced in the event of accidents with
pedestrians and two-wheeled traffic.
Ambient Light Sensor
Ambient light sensor:
Ambient light sensors automatically adjust the backlighting of the instrument
panel and various other displays in the vehicle according to the varying
available or ambient-light conditions. They are also used in electronic equipment
such as laptops, cell phones and LCD TVs to provide natural lighting solutions.

Cabin backlight control:


For cabin backlight control, LED lighting is used. LED lighting
technology significantly improves in-cabin lighting systems. LEDs are
indeed useful for spot-beam applications, where light can be focused directly
where it is needed, in a range of colours and without disturbing other
passengers; in a word, control. But this is only the beginning, and LEDs are
quite capable of providing much, much more.
LEDs can provide a colour wash to surfaces, creating sophisticated ambient
and indirect lighting effects. They can do this by employing a range of colour
LEDs singly or in combination, or by employing tunable colour LEDs. This
can be done with great precision, providing sophisticated control over colour,
tone and brightness to generate an infinite variety of mood, brand or
corporate effects.

Features of LED:
● Soft warm light – very comfortable on the eyes

● Intensity – brighter than most halogens used

● Very Low power consumption – Uses only 20% of the power


of standard cabin light bulbs

● Low glare – great for reading

● No harmful UV emissions – easy on the eyes

● Built-in dimmer logic works with most standard dimmers or an optional mini-dimmer
Ambient light sensor in LCD TV’s, Laptops and Cell Phones:
It is a sensor that detects ambient light: if the room is dark, the screen brightness
dims down so that you are not looking at a single source of extremely bright light;
when the room is bright, the screen brightness goes up to compensate for the extra
light around you, so you can still see the monitor comfortably.
(Illustration: screen brightness in a bright room, a dark room and a dimly lit room.)

In addition to providing a pleasant-looking display, ambient light sensors also


help reduce power dissipation and maximize the life of the backlighting
system.
Ambient light sensors are included in many laptops and cell phones to sense
the environment lighting conditions, allowing for adjustment of the screen’s
backlight to comfortable levels for the viewer. The range of "comfortable
levels" is dependent on the room’slight.

Understandably, a screen’s brightness needs to increase as the ambient light


increases. What is less obvious is the need to decrease the brightness in lower-
light conditions, for comfortable viewing and to save battery life. In a cell
phone, the ambient light sensors are located under a protective cover glass.
Because of this protection, most of the ambient light is obstructed. The
obstruction reduces the amount of light to be measured, requiring a solution
with low light accuracy. For the accuracy needed in low-light conditions, the
best sensor choice is the integrated photodiode with an ADC.
Optoelectronic Materials
Components of Opto-Electronics:
Semiconductor materials:
Elemental semiconductors - silicon, germanium
Binary semiconductors - compounds of aluminium, gallium, indium, phosphorus,
arsenic and antimony (for example GaAs, InP)
Ternary semiconductors - aluminium, gallium, arsenic (for example AlGaAs)
Quaternary semiconductors - indium, gallium, phosphorus, arsenic (for example InGaAsP)
Why opto-electronic sensor systems are used:
● The sensors used in an automotive environment must be
reliable

● Produce exactingly accurate results

● Require little maintenance

● Help reduce overall system cost and extend the life of the
system or sub system unit
● Integrated optoelectronic sensors are non-contact sensors, that
is, they are able to perform their sensing or measurement functions
without the need for physical contact with other parts of the system
or sub-system
Advantages:
● Reducing the number of required components

● Improving the reliability of the system or sub-system

● Improving the styling, aesthetics and ergonomics of the overall


vehicle through better utilization of the available space

● Improving the manufacturability of the vehicle by reducing the


number of sensors that need to be positioned during production
Disadvantages:
● They do not give any direct feedback
Opto Electronics | Fiber Optics Technology
Optical techniques have always been used for a large number of automotive,
metrological and sensing applications. Fiber- and integrated-optics
technologies were primarily developed for telecommunication applications.
However, advances in the development of high-quality, competitively priced
opto-electronic components and fibers have largely contributed to the
expansion of guided-wave technology for sensing as well.
What is Opto-Electronics?

It is the branch of electronics dealing with devices that generate, transform,
transmit, or sense optical, infrared or ultraviolet radiation, such as solar cells and
lasers.
Optoelectronic devices are used to activate or deactivate an electronic circuit
based on the intensity of light. Besides this general purpose, optoelectronic
devices are used in telecommunication, surveillance systems, capturing solar
energy, etc. Solid-state optoelectronic components are used as sensors for
detecting visible light, laser light, infrared rays, ultraviolet rays, etc.
What is Opto-Electronic Sensor?
An opto-electronic sensor is a device that is capable of producing an
electrical signal that is proportional to the amount of light incident on the
active area of the device.
Integrated optoelectronic sensors are designed to recognize fundamental
stimuli such as patterns, images, motion, intensity and colour.
Types of Opto-electronic Sensors:

Light-to-voltage converters - Produce a linear output voltage proportional to light intensity

Light-to-frequency converters - Convert light intensity to a digital format for direct connection to a microcontroller or digital signal processor

Ambient light sensors - Measure what the human eye sees

Linear sensor arrays - Measure spatial relationships and light intensity

Colour sensors - RGB-filtered sensors for colour discrimination, determination and measurement

Reflective light sensors - Convert reflected light intensity to a voltage output
Hybrid Drive Trains | Hybrid Vehicles

Hybrid vehicles

The term hybrid drive denotes a vehicle drive with more than one drive
source. Hybrid drives can incorporate several similar or dissimilar types of
energy stores and/or power converters.

The goal of the hybrid drive developments is to combine different drive


components, such that the advantages of each are utilized under varying
operating conditions in such a manner that the overall advantages outweigh
the higher technical and cost outlay associated with hybrid drives.
Hybrid drive trains

Hybrid drive trains are broadly classified into series and parallel depending
on the configuration of the power source.
Series

In a series drive train, only the electric power is coupled to the wheels. The
second power source converts fuel energy into electric power. This electric
power is then passed in a series fashion through the electric drive and motor
to the wheels. Typically an IC engine is coupled to an alternator to provide
the fuel based electric power. The engine alternator combination is often
referred to as an auxiliary power unit.

Series hybrid engines have the following characteristics

The engine and the energy storage device are closely coupled. The series
arrangement is generally more efficient at satisfying light-load power demands.
Parallel

In the parallel hybrid drive trains, two power sources operate in parallel to
propel the vehicle. Power from the electric motor and internal combustion
engine are combined via the vehicle transmission to satisfy the road power
demand.

Parallel hybrid engines have the following characteristics

Drive train losses between the engine and the road are minimal. The parallel
arrangement is generally more efficient at satisfying high power demands.

Parallel hybrids have speed coupling between the road and the engine. Series
hybrids do not have speed coupling. Both series and parallel hybrids can be
operated with engine to road power decoupling. Both parallel and series
hybrids can be implemented with large engines and small energy storage or
vice versa.
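
The basic idea of the parallel layout, road power demand met by the engine and electric motor together through the transmission, can be sketched as a toy power balance. The power limits and the "motor assists at high demand" rule below are illustrative assumptions, not any manufacturer's control strategy.

# Toy power balance for a parallel hybrid: engine and motor jointly meet the road demand.
ENGINE_MAX_KW = 60.0
MOTOR_MAX_KW = 30.0

def split_power(road_demand_kw):
    """Return (engine_kw, motor_kw) for a simple 'motor assists at high demand' rule."""
    if road_demand_kw <= ENGINE_MAX_KW:
        return road_demand_kw, 0.0                     # engine alone covers the demand
    assist = min(road_demand_kw - ENGINE_MAX_KW, MOTOR_MAX_KW)
    return ENGINE_MAX_KW, assist                       # motor tops up through the transmission

for demand in (20.0, 75.0, 95.0):
    e, m = split_power(demand)
    print(f"demand {demand:5.1f} kW -> engine {e:5.1f} kW + motor {m:5.1f} kW")
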

Series and parallel combined system

This combined system, called the dual system, has both a generator and a motor;
it features characteristics of both the series and parallel systems, and the
following configurations are possible.
Switching system

This implies that the application and release of the clutch switches the drive between
the series and parallel systems. For driving with the series system, the clutch is
released, separating the engine and the generator from the driving wheels. For
driving with the parallel system, the clutch is engaged, connecting the engine
and the driving wheels.

Since city driving requires low loads for driving and low emission, the series
system is selected with the clutch released. For high speed driving where the
series system would not work efficiently due to higher drive loads and
consequently higher engine output is required, the parallel system is selected
with the clutch applied.
Split system

This system acts as the series and parallel systems at all times. The engine
output energy is split by the planetary gear into the series path and the
parallel path. It can control the engine speed under variable control of the
series path by the generator while maintaining the mechanical connection of
the engine and the driving wheels through the parallel path.
Variable Turbocharger Geometry (VTG)
Variable geometry turbochargers (VGTs) are a family of turbochargers,
usually designed to allow the effective aspect ratio (sometimes called A/R
Ratio) of the turbo to be altered as conditions change. This is done because
optimum aspect ratio at low engine speeds is very different from that at high
engine speeds. If the aspect ratio is too large, the turbo will fail to create
boost at low speeds; if the aspect ratio is too small, the turbo will choke the
engine at high speeds, leading to high exhaust manifold pressures, high
pumping losses, and ultimately lower power output. By altering the geometry
of the turbine housing as the engine accelerates, the turbo’s aspect ratio can
be maintained at its optimum. Because of this, VGTs have a minimal amount
of lag, have a low boost threshold, and are very efficient at higher engine
speeds. VGTs do not require a waste gate.
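
The reasoning above, a small effective A/R at low engine speed for fast boost and a larger A/R at high speed to avoid choking the engine, can be pictured as a simple schedule of target aspect ratio against engine speed. The numbers below are invented for illustration; real calibrations depend on boost pressure, load and many other inputs.

# Illustrative vane schedule: effective A/R interpolated between a low-speed and a high-speed value.
def effective_aspect_ratio(rpm, ar_min=0.35, ar_max=0.95, rpm_lo=1200, rpm_hi=4500):
    """Linearly interpolate the target effective A/R between two engine speeds."""
    if rpm <= rpm_lo:
        return ar_min
    if rpm >= rpm_hi:
        return ar_max
    frac = (rpm - rpm_lo) / (rpm_hi - rpm_lo)
    return ar_min + frac * (ar_max - ar_min)

for rpm in (1000, 2000, 3000, 4500):
    print(f"{rpm:4d} rpm -> target A/R ~ {effective_aspect_ratio(rpm):.2f}")
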

Most common designs


The two most common implementations include a ring of aerodynamically-
shaped vanes in the turbine housing at the turbine inlet. Generally for light
duty engines (passenger cars, race cars, and light commercial vehicles) the
vanes rotate in unison to vary the gas swirl angle and the cross sectional area.
Generally for heavy duty engines the vanes do not rotate, but instead the axial
width of the inlet is selectively blocked by an axially sliding wall (either the
vanes are selectively covered by a moving slotted shroud, or the vanes
selectively move vs a stationary slotted shroud). Either way the area between
the tips of the vanes changes, leading to a variable aspect ratio.

Actuation
Often the vanes are controlled by a membrane actuator identical to that of a
waste gate; however, electric servo actuation is increasingly being used. Hydraulic
actuators have also been used in some applications.
Main suppliers
Several companies supply the rotating vane type of variable geometry
turbocharger, including Garrett (Honeywell), Borg Warner and MHI
(Mitsubishi Heavy Industries). The rotating vane design is mostly limited to
small engines and/or to light duty applications (passenger cars, race cars and
light commercial vehicles). The only supplier of the sliding vane type of
variable geometry turbocharger is Cummins Turbo Technologies (Holset),
who are effectively the sole supplier of variable geometry turbochargers for
applications involving large engines and heavy duty use (i.e. trucks and off
highway applications).
Other common uses
In trucks, VG turbochargers are also used to control the ratio of exhaust re-
circulated back to the engine inlet (they can be controlled so that the
exhaust manifold pressure exceeds the inlet manifold pressure,
which promotes exhaust gas recirculation (EGR)). Although excessive engine
back pressure is detrimental to overall fuel economy, ensuring a sufficient
EGR rate even during transient events (e.g. gear changes) can be sufficient to
reduce nitrogen oxide emissions down to that required by emissions
legislation (e.g. Euro 5 for Europe and EPA 10 for the USA).
Another use for the sliding vane type of turbocharger is as downstream
engine exhaust brake (non-decompression type), so that an extra exhaust
throttle valve isn’t needed. Also the mechanism can be deliberately modified
to reduce the turbine efficiency in a predefined position. This mode can be
selected to sustain a raised exhaust temperature to promote "light-off" and
"regeneration" of a diesel particulate filter (this involves heating the carbon
particles stuck in the filter until they oxidize away in a semi-self sustaining
reaction – rather like the self-cleaning process some ovens offer). Actuation
of a VG turbocharger for EGR flow control or to implement braking or
regeneration modes generally requires hydraulic or electric servo actuation.
Trends In Common Rail Fuel Injection System

Components of a common rail system

In the common rail system, a pressure sensor measures the fuel pressure in
the rail, and its signal value is compared with the desired value stored in the
engine computer. If the measured value and the desired value differ, an
overflow orifice in the pressure regulator on the high-pressure side is opened
or closed. The overflow returns to the fuel tank.

The fuel injectors are opened and closed by the engine computer at defined
times. The duration of injection, the fuel pressure in the rail, and the flow
area of the injector determine the injected fuel quantity. The injector solenoid
valves are controlled according to the accelerator position and the engine
information.
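
The statement above, that injection duration, rail pressure and injector flow area set the injected quantity, can be illustrated with the simple orifice-flow relation. The sketch below ignores needle opening and closing transients and uses invented numbers; it is only meant to show how the three quantities combine.

# Rough sketch: injected quantity from duration, rail pressure and nozzle flow area,
# using the orifice equation  mass flow = Cd * A * sqrt(2 * rho * dP).
import math

def injected_mass_mg(duration_s, rail_pressure_pa, nozzle_area_m2,
                     cyl_pressure_pa=5e6, fuel_density=830.0, cd=0.7):
    dp = rail_pressure_pa - cyl_pressure_pa                                  # pressure drop across the nozzle
    mass_flow = cd * nozzle_area_m2 * math.sqrt(2.0 * fuel_density * dp)     # kg/s
    return mass_flow * duration_s * 1e6                                      # milligrams per injection

# Example: 0.5 ms injection, 1600 bar rail, 0.2 mm^2 total nozzle hole area (illustrative values).
print(f"{injected_mass_mg(0.5e-3, 1600e5, 0.2e-6):.1f} mg")   # ~35 mg
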

The electronic management in the newer fuel injection systems is a time-based
control system; injection timing can therefore be very flexible and highly
precise.

Current advanced fuel injection systems such as common rail can account
for 30 to 40 per cent of the total engine cost.

Third generation common rail technology is currently available on the


Mercedes E 280 CDI vehicles sold in our country.

Trends in common rail injection system

In the first and second generations of Bosch’s common rail system, the injection
process is controlled by a magnetic solenoid on the injectors. With an
electronic solenoid on the injector nozzle, and electronic controls, pilot
injection becomes possible.

In the pilot injection technique, a small quantity of fuel is injected before the main
injection. A typical injection period is of the order of 300 microseconds. Too small or too
early a pilot injection raises the noise; too large a pilot injection increases the particulate
emission. In short, both the pilot quantity and its interval before the main injection
decrease with increasing engine speed.

The third generation common rail injection units utilize piezo-electric


injector, which use piezo crystals for even more precise metering and
accurately timed delivery. Piezo crystals deform when a current is applied
across them and return to their original form as soon as the current supply is
switched off. The injector actuators consists of several hundred thin piezo
crystal wafers. In a piezo inline injectors, the actuator is built into the injector
body very close to the jet needle. The movement of the piezo packet is
transferred friction-free, without using mechanical parts, to the rapidly
switching jet needles.

The piezo injectors effect a more precise metering of the amount of fuel
injected and an improved atomization of the fuel in the cylinder.

The third generation common rail fuel injection technology enabled fuel
injectors to run with pressure as high as 2000 bar, while microsecond fuel
delivery timing is possible.

The rapid speed upon which the injectors can switch makes it possible to
reduce the intervals between injections and split the quantity of fuel delivered
into a large number of separate injections for each combustion stroke.

System changes

In a modern common rail system, injection is split into several individual
injections, such as pre-injection, main injection and post-injection. This change
also helps in reducing emissions.
Diesel engines with common rail split injection systems have become even
quieter, more fuel-efficient, cleaner and more powerful.
Chassis Frame | Frame Rails | Auto Chassis
Chassis is a French term and was initially used to denote the frame parts or
basic structure of the vehicle. It is the backbone of the vehicle. A vehicle
without its body is called a chassis. The components of the vehicle, like the power
plant, transmission system, axles, wheels and tyres, suspension,
controlling systems like braking and steering, and also the electrical system
parts, are mounted on the chassis frame. It is the main mounting for all the
components including the body. So it is also called the carrying unit.

The following are the main components of the chassis:

● Frame: it is made up of two long members, called side members,
joined together with the help of a number of cross members.

● Engine or Power plant: It provides the source of power


● Clutch: It connects and disconnects the power from the engine
fly wheel to the transmission system.

● Gear Box

● U Joint

● Propeller Shaft

● Differential

FUNCTIONS OF THE CHASSIS FRAME:


1. To carry load of the passengers or goods carried in the body.
2. To support the load of the body, engine, gear box etc.,
3. To withstand the forces caused by sudden braking or acceleration
4. To withstand the stresses caused by bad road conditions
5. To withstand centrifugal force while cornering

VARIOUS LOADS ACTING ON THE FRAME:


1. Short duration Load - While crossing a broken patch.
2. Momentary duration Load - While taking a curve.
3. Impact Loads - Due to the collision of the vehicle.
4. Inertia Load - While applying brakes.
5. Static Loads - Loads due to chassis parts.
6. Over Loads - Beyond Design capacity.
Types of Chassis Frame | Auto Chassis
TYPES OF CHASSIS FRAMES:
There are three types of frames
1. Conventional frame
2. Integral frame
3. Semi-integral frame

1. Conventional frame:
It has two long side members and 5 to 6 cross members joined together with
the help of rivets and bolts. The following frame sections are generally used:
a. Channel section - good resistance to bending
b. Tubular section - good resistance to torsion
c. Box section - good resistance to both bending and torsion
2. Integral Frame:
This construction is nowadays used in most cars. There is no separate frame, and all
the assembly units are attached to the body. All the functions of the frame are
carried out by the body itself. Due to the elimination of the long frame it is cheaper,
and due to the lower weight it is also more economical. The only disadvantage is that
repair is difficult.
3. Semi-Integral Frame:
In some vehicles a half frame is fixed at the front end, on which the engine, gear
box and front suspension are mounted. It has the advantage that, when the vehicle
meets with an accident, the front half frame can be removed easily to replace the
damaged portion. This type of frame is
used in some European and American cars.
Piston-engine cycles of operation
The internal-combustion engine
The piston engine is known as an internal-combustion heat-engine. The
concept of the piston engine is that a supply of air-and-fuel mixture is fed to
the inside of the cylinder where it is compressed and then burnt. This internal
combustion releases heat energy which is then converted into useful
mechanical work as the high gas pressures generated force the piston to move
along its stroke in the cylinder. It can be said, therefore, that a heat-engine is
merely an energy transformer. To enable the piston movement to be
harnessed, the driving thrust on the piston is transmitted by means of a
connecting-rod to a crankshaft whose function is to convert the linear piston
motion in the cylinder to a rotary crankshaft movement . The piston can thus
be made to repeat its movement to and fro, due to the constraints of the
crankshaft crankpin’s circular path and the guiding cylinder. The backward-
and-forward displacement of the piston is generally referred to as the
reciprocating motion of the piston, so these power units are also known as
reciprocating engines.
Engine components and terms
The main problem in understanding the construction of the reciprocating
piston engine is being able to identify and name the various parts making up
the power unit. To this end, the following briefly describes the major
components and the names given to them.

Cylinder block
This is a cast structure with cylindrical holes bored to guide and support the
pistons and to harness the working gases. It also provides a jacket to contain a
liquid coolant.
Cylinder head
This casting encloses the combustion end of the cylinder block and houses both
the inlet and exhaust poppet-valves and their ports to admit air-fuel mixture and
to exhaust the combustion products.
Crankcase
This is a cast rigid structure which supports and houses the crankshaft and
bearings. It is usually cast as a mono-construction with the cylinder block.
Sump
This is a pressed-steel or cast-aluminium alloy container which encloses the
bottom of the crankcase and provides a reservoir for the engine’s lubricant.
Piston
This is a pressure-tight cylindrical plunger which is subjected to the
expanding gas pressure. Its function is to convert the gas pressure from
combustion into a concentrated driving thrust along the connecting rod. It
must therefore also act as a guide for the small end of the connecting-rod.
Piston rings
These are circular rings which seal the gaps made between the piston and the
cylinder, their object being to prevent gas escaping and to control the amount
of lubricant which is allowed to reach the top of the cylinder.
Gudgeon-pin
This pin transfers the thrust from the piston to the connecting-rod small-end
while permitting the rod to rock to and fro as the crankshaft rotates.
Connecting-rod
This acts as both a strut and a tie link-rod. It transmits the linear pressure
impulses acting on the piston to the crankshaft big-end journal, where they
are converted into turning-effort.
Crankshaft
A simple crankshaft consists of a circular-sectioned shaft which is bent or
cranked to form two perpendicular crank-arms and an offset big-end journal.
The unbent part of the shaft provides the main journals. The crankshaft is
indirectly linked by the connecting-rod to the piston – this enables the
straight-line motion of the piston to be transformed into a rotary motion at the
crankshaft about the main-journal axis.
Crankshaft journals
These are highly finished cylindrical pins machined parallel on both the
centre axes and the offset axes of the crankshaft. When assembled, these
journals rotate in plain bush-type bearings mounted in the crankcase (the
main journals) and in one end of the connecting-rod (the big-end journal).
Small-end
This refers to the hinged joint made by the gudgeon-pin between the piston
and the connecting-rod so that the connecting-rod is free to oscillate relative
to the cylinder axis as it moves to and fro in the cylinder.
Main-ends
This refers to the rubbing pairs formed between the crankshaft main journals
and their respective plain bearings mounted in the crankcase.
Line of stroke
The centre path the piston is forced to follow due to the constraints of the
cylinder is known as the line of stroke.
Inner and outer dead centres
When the crankarm and the connecting-rod are aligned along the line of
stroke, the piston will be in either one of its two extreme positions. If the
piston is at its closest position to the cylinder head, the crank and piston are
said to be at inner dead centre (IDC) or top dead centre (TDC). With the
piston at its furthest position from the cylinder head, the crank and piston are
said to be at outer dead centre (ODC) or bottom dead centre (BDC). These
reference points are of considerable importance for valve-to-crankshaft
timing and for either ignition or injection settings.
Clearance volume
The space between the cylinder head and the piston crown at TDC is known
as the clearance volume or the combustion-chamber space.
Crank-throw
The distance from the centre of the crankshaft main journal to the centre of
the big-end journal is known as the crank-throw. This radial length influences
the leverage the gas pressure acting on the piston can apply in rotating the
crankshaft.
Piston stroke
The piston movement from IDC to ODC is known as the piston stroke and
corresponds to the crankshaft rotating half a revolution or 180°. It is also equal
to twice the crank-throw.
i.e. L = 2R
where
L = piston stroke and
R = crank-throw
Thus a long or short stroke will enable a large or small turning-effort to be
applied to the crankshaft respectively.
Cylinder bore
The cylinder block is initially cast with sand cores occupying the cylinder
spaces. After the sand cores have been removed, the rough holes are
machined with a single-point cutting tool attached radially at the end of a
rotating bar. The removal of the unwanted metal in the hole is commonly
known as boring the cylinder to size. Thus the finished cylindrical hole is
known as the cylinder bore, and its internal diameter simply as the bore or
bore size.

Crankcase disc-valve and reed-valve inlet charge control
An alternative to the piston-operated crankcase inlet port is to use a disc-
valve attached to and driven by the crankshaft (Fig. (a)). This disc-valve is

timed to open and close so that the fresh charge is induced to enter the
crankcase as early as possible, and only at the point when the charge is about
to be transferred into the cylinder is it closed. This method of controlling
crankcase induction does not depend upon the piston displacement to uncover

the port – it can therefore be so phased as to extend the filling period (Fig.).
A further method of improving crankcase filling is the use of reed-valves
(Fig. (b)). These valves are not timed to open and close, but operate
automatically when the pressure difference between the crankcase and the air
intake is sufficient to deflect the reed-spring. In other words, these valves
sense the requirements of the crankcase and so adjust their opening and
closing frequencies to match the demands of the engine.
Engine torque
This is the turning-effort about the crankshaft’s axis of rotation and is equal
to the product of the force acting along the connecting-rod and the
perpendicular distance between this force and the centre of rotation of the
crankshaft. It is expressed in newton metres (N m);
i.e. T = Fr
where T = engine torque (N m)
F = force applied to crank (N) and
r = effective crank-arm radius (m)
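
A short worked example of the T = Fr relation above, using illustrative values of 9 kN acting along the connecting-rod at an effective crank radius of 40 mm:

# Worked example of engine torque T = F x r (illustrative values).
force_n = 9000.0          # N, force applied to the crank
radius_m = 0.040          # m, effective crank-arm radius
torque = force_n * radius_m
print(f"T = {torque:.0f} N m")   # 360 N m
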
During the 180° crankshaft movement on the power stroke from TDC to BDC,
the effective radius of the crank-arm will increase from zero at the top of its
stroke to a maximum in the region of mid-stroke and then decrease to zero
again at the end of its downward movement. This implies that the torque on
the power stroke is continually varying. Also, there will be no useful torque
during the idling strokes. In fact some of the torque on the power stroke will
be cancelled out in overcoming compression resistance and pumping losses,
and the torque quoted by engine manufacturers is always the average value
throughout the engine cycle.
The average torque developed will vary over the engine’s speed range. It
reaches a maximum at about mid-speed and decreases on either side.
Engine power
Power is the rate of doing work. When applied to engines, power ratings may
be calculated either on the basis of indicated power (i.p.), that is the power
actually developed in the cylinder, or on the basis of brake power (b.p.),
which is the output power measured at the crankshaft. The b.p. is always less
than the i.p., due to frictional and pumping losses in the cylinders and the
reciprocating mechanism of the engine.
Since the rate of doing work increases with piston speed, the engine’s power
will tend to rise with crankshaft speed of rotation, and only after about two-
thirds of the engine’s speed range will the rate of power rise drop off.
The slowing down and even decline in power at the upper speed range is
mainly due to the very short time available for exhausting and for inducing
fresh charge into the cylinders at very high speeds, with a resulting reduction
in the cylinders’ mean effective pressures.
Different countries have adopted their own standardised test procedures for
measuring engine performance, so slight differences in quoted output figures
will exist. Quoted performance figures should therefore always state the
standard used. The three most important standards are those of the American
Society of Automotive Engineers (SAE), the German Deutsche Industrie
Norm (DIN), and the Italian Commissione Tecnica di Unificazione
nell’Automobile (CUNA).

The two methods of calculating power, indicated and brake, can be expressed in the units described below.

The imperial power is quoted in horsepower (hp) and is defined in terms of
foot pounds per minute. In imperial units one horsepower is equivalent to
33,000 ft lb per minute or 550 ft lb per second. A metric horsepower is defined
in terms of newton-metres per second and is equal to 0.986 imperial
horsepower. In Germany the abbreviation for horsepower is PS, derived from
the word ‘Pferdestärke’, meaning horse strength.
The international unit for power is the watt, W, or more usually the kilowatt,
kW, where 1 kW = 1000 W.
Conversion from watt to horsepower and vice versa is:
1 kW = 1.35 hp and 1 hp = 0.746 kW
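
The conversion factors quoted above can be applied directly, as in this short example:

# Power unit conversions using the factors quoted above.
def kw_to_hp(kw):
    return kw * 1.35

def hp_to_kw(hp):
    return hp * 0.746

print(f"100 kW ~ {kw_to_hp(100):.0f} hp")    # ~135 hp
print(f"100 hp ~ {hp_to_kw(100):.1f} kW")    # ~74.6 kW
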
Engine cylinder capacity
Engine sizes are compared on the basis of total cylinder swept volume, which
is known as engine cylinder capacity. Thus the engine cylinder capacity is
equal to the piston displacement of each cylinder times the number of
cylinders,
i.e. VE = nV/1000
where VE = engine cylinder capacity (litre)
V = piston displacement (cm3) and
n = number of cylinders
Piston displacement is derived from the combination of both the cross-
sectional area of the piston and its stroke. The relative importance of each of
these dimensions can be demonstrated by considering how they affect
performance individually.
The cross-sectional area of the piston crown influences the force acting on the
connecting-rod, since the product of the piston area and the mean effective
cylinder pressure is equal to the total piston thrust.

The length of the piston stroke influences both the turning-effort and the
angular speed of the crankshaft. This is because the crank-throw length
determines the leverage on the crankshaft, and the piston speed divided by
twice the stroke is equal to the crankshaft speed.
This means that making the stroke twice as long doubles the crankshaft
turning-effort and halves the crankshaft angular speed for a given linear
piston speed.
The above shows that the engine performance is decided by the ratio of bore
to stroke chosen for a given cylinder capacity.
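
The definitions above can be put together in a short worked example: piston displacement from bore and stroke, engine capacity from the number of cylinders, and crankshaft speed from a given mean piston speed. The 84 mm x 90 mm four-cylinder figures are illustrative only.

# Engine cylinder capacity and crank speed from the definitions above.
import math

def cylinder_capacity_litres(bore_cm, stroke_cm, n_cylinders):
    v_cm3 = (math.pi / 4.0) * bore_cm**2 * stroke_cm     # piston displacement per cylinder, cm^3
    return n_cylinders * v_cm3 / 1000.0                  # engine capacity, litres

print(f"{cylinder_capacity_litres(8.4, 9.0, 4):.2f} litres")   # ~2.00 litres

# The stroke also fixes the crankshaft speed for a given mean piston speed: N = v / (2L).
mean_piston_speed = 12.0          # m/s (illustrative)
stroke_m = 0.090
print(f"crank speed ~ {mean_piston_speed / (2 * stroke_m) * 60:.0f} rev/min")   # ~4000 rev/min
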
Compression-ratio
In an engine cylinder, the gas molecules are moving about at considerable
speed in the space occupied by the gas, colliding with other molecules and
the boundary surfaces of the cylinder head, the cylinder walls, and the piston
crown. The rapid succession of impacts of many millions of molecules on the
boundary walls produces a steady continuous force per unit surface which is
known as pressure.
When the gas is compressed into a much smaller space, the molecules are
brought closer to one another. This raises the temperature and greatly
increases the speed of the molecules and hence their kinetic energy, so more
violent impulses will impinge on the piston crown. This increased activity of
the molecules is experienced as increased opposition to movement of the
piston towards the cylinder head.
The process of compressing a constant mass of gas into a much smaller space
enables many more molecules to impinge per unit area on to the piston.
When burning of the gas occurs, the chemical energy of combustion is
rapidly transformed into heat energy which considerably increases the kinetic
energy of the closely packed gas molecules. Therefore the extremely large
number of molecules squeezed together will thus bombard the piston crown
at much higher speeds. This then means that a very large number of repeated
blows of considerable magnitude will strike the piston and so push it towards
ODC.
This description of compression, burning, and expansion of the gas charge
shows the importance of utilising a high degree of compression before
burning takes place, to improve the efficiency of combustion. The amount of
compression employed in the cylinder is measured by the reduction in
volume when the piston moves from BDC to TDC, the actual proportional
change in volume being expressed as the compression-ratio.
The compression-ratio may be defined as the ratio of the maximum cylinder
volume when the piston is at its outermost position (BDC) to the minimum
cylinder volume (the clearance volume) with the piston at its innermost
position (TDC) – that is, the sum of the swept and clearance volumes divided
by the clearance volume,
i.e. compression-ratio = (swept volume + clearance volume) / clearance volume
Petrol engines have compression-ratios of the order of 7:1 to 10:1; but, to
produce self-ignition of the charge, diesel engines usually double these
figures and may have values of between 14:1 and 24:1 for naturally aspirated
(depression-induced filling) types, depending on the design.
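
A quick numerical illustration of the definition above, using illustrative swept and clearance volumes chosen to land in the petrol and diesel ranges just quoted:

# Compression ratio = (swept volume + clearance volume) / clearance volume.
def compression_ratio(swept_cm3, clearance_cm3):
    return (swept_cm3 + clearance_cm3) / clearance_cm3

print(f"Petrol-like: {compression_ratio(500, 60):.1f}:1")   # ~9.3:1
print(f"Diesel-like: {compression_ratio(500, 28):.1f}:1")   # ~18.9:1
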
Digital engine control systems
Traditionally, the term powertrain has been thought to include the engine,
transmission, differential, and drive axle/wheel assemblies. With the advent
of electronic controls, the powertrain also includes the electronic control
system (in whatever configuration it has). In addition to engine control
functions for emissions regulation, fuel economy, and performance,
electronic controls are also used in the automatic transmission to select
shifting as a function of operating conditions. Moreover, certain vehicles
employ electronically controlled clutches in the differential (transaxle T/A)
for traction control.
These electronic controls for these major powertrain components can either
be separate (i.e., one for each component) or an integrated system regulating
the powertrain as a unit.
This latter integrated control system has the benefit of obtaining optimal
vehicle performance within the constraints of exhaust emission and fuel
economy regulations. Each of the control systems is discussed separately
beginning with electronic engine control. Then a brief discussion of
integrated powertrain follows. This chapter concludes with a discussion of
hybrid vehicle (HV) control systems in which propulsive power comes from
an internal combustion engine (ICE) or an electric motor (EM) or a
combination of both. The proper balance of power between these two sources
is a very complex function of operating conditions and governmental
regulations.
Digital engine control
A typical engine control system incorporates a microprocessor and is
essentially a special-purpose computer (or microcontroller).
Electronic engine control has evolved from a relatively rudimentary fuel
control system employing discrete analog components to the highly precise
fuel and ignition control through 32-bit (sometimes more) microprocessor
based integrated digital electronic powertrain control. The motivation for
development of the more sophisticated digital control systems has been the
increasingly stringent exhaust emission and fuel economy regulations. It has
proven to be cost effective to implement the powertrain controller as a
multimode computer-based system to satisfy these requirements.
A multimode controller operates in one of many possible modes, and, among
other tasks, changes the various calibration parameters as operating
conditions change in order to optimize performance. To implement
multimode control in analog electronics it would be necessary to change
hardware parameters (for example, via switching systems) to accommodate
various operating conditions. In a computer-based controller, however, the
control law and system parameters are changed via program (i.e., software)
control. The hardware remains fixed but the software is reconfigured in
accordance with operating conditions as determined by sensor measurements
and switch inputs to the controller.
Digital engine control features
The primary purpose of the electronic engine control system is to regulate the
mixture (i.e., air–fuel), the ignition timing, and exhaust gas recirculation
(EGR). Virtually all major
manufacturers of cars sold in the United States (both foreign and domestic)
use the three-way catalyst for meeting exhaust emission constraints. For such
cars, the air/fuel ratio is held as closely as possible to the stoichiometric value
of about 14.7 for as much of the time as possible. Ignition timing and EGR
are controlled separately to optimize performance and fuel economy.
Fig. below illustrates the primary components of an electronic engine control
system. In this figure, the engine control system is a microcontroller,
typically implemented with a specially designed microprocessor and
operating under program control. Typically, the controller incorporates
hardware multiply and ROM. The hardware multiply greatly speeds up the
multiplication operation required at several stages of engine control relative
to software multiplication routines, which are generally cumbersome and
slow. The associated ROM contains the program for each mode as well as
calibration parameters and lookup tables. The earliest such systems
incorporated 8-bit microprocessors, although the trend is toward
implementation with 32-bit microprocessors. The microcontroller under
program control generates output electrical signals to operate the fuel
injectors so as to maintain the desired mixture and ignition to optimize
performance. The correct mixture is obtained by regulating the quantity of
fuel delivered into each cylinder during the intake stroke in accordance with
the air mass.
In determining the correct fuel flow, the controller obtains a measurement or
estimate of the mass air flow (MAF) rate into the cylinder. The measurement
is obtained using an MAF sensor. Alternatively, the MAF rate is estimated
(calculated) using the speed–density method. This estimate can be found
from measurement of the intake manifold absolute pressure (MAP), the
revolutions per minute (RPM) and the inlet air temperature.
Using this measurement or estimate, the quantity of fuel to be delivered is
determined by the controller in accordance with the instantaneous control
mode. The quantity of fuel delivered by the fuel injector is determined by the
operation of the fuel injector. A fuel injector is essentially a solenoid-
operated valve. Fuel that is supplied to each injector from the fuel pump is
supplied to each fuel injector at a regulated fuel pressure. When the injector
valve is opened, fuel flows at a rate Rf (in gal/sec) that is determined by the
(constant) regulated pressure and by the geometry of the fuel injector valve.
The quantity of fuel F delivered to any cylinder is proportional to the time T
that this valve is opened:
F = Rf T
Components of an electronically controlled engine
The engine control system, then, determines the correct quantity of fuel to be
delivered to each cylinder (for a given operating condition) via measurement
of MAF rate. The controller then generates an electrical signal that opens the
fuel injector valve for the appropriate time interval T to deliver this desired
fuel quantity to the cylinder such that a stoichiometric air/fuel ratio is
maintained.
The controller also determines the correct time for fuel delivery to correspond
to the intake stroke for the relevant cylinder.
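A minimal sketch of this relationship, F = Rf T, is given below in Python; the flow rate, fuel mass and function name are illustrative assumptions rather than values or code from any production controller.

def injector_pulse_width(fuel_mass_g, flow_rate_g_per_s):
    # Valve-open time T (seconds) needed to deliver fuel_mass_g, from F = Rf * T.
    return fuel_mass_g / flow_rate_g_per_s

# Example with assumed figures: 0.025 g of fuel through an injector flowing 3.0 g/s.
T = injector_pulse_width(0.025, 3.0)
print(f"pulse width T = {T * 1000:.2f} ms")   # about 8.33 ms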
Control modes for fuel control
The engine control system is responsible for controlling fuel and ignition for
all possible engine operating conditions. However, there are a number of
distinct categories of engine operation, each of which corresponds to a
separate and distinct operating mode for the engine control system. The
differences between these operating modes are sufficiently great that different
software is used for each. The control system must determine the operating
mode from the existing sensor data and call the particular corresponding
software routine.
For a typical engine, there are seven different engine operating modes that
affect fuel control: engine crank, engine warm-up, open-loop control, closed-
loop control, hard acceleration, deceleration, and idle. The program for mode
control logic determines the engine operating mode from sensor data and
timers.
In the earliest versions of electronic fuel control systems, the fuel metering
actuator typically consisted of one or two fuel injectors mounted near the
throttle plate so as to deliver fuel into the throttle body. These throttle body
fuel injectors (TBFIs) were in effect an electromechanical replacement for the
carburetor. Requirements for the TBFIs were such that they only had to
deliver fuel at the correct average flow rate for any given MAF. Mixing of
the fuel and air, as well as distribution to the individual cylinders, took place
in the intake manifold system.
The more stringent exhaust emissions regulations of the late 1980s and the
1990s have demanded more precise fuel delivery than can normally be
achieved by TBFI. These regulations and the need for improved performance
have led to timed sequential port fuel injection (TSPFI). In such a system
there is a fuel injector for each cylinder that is mounted so as to spray fuel
directly into the intake of the associated cylinder. Fuel delivery is timed to
occur during the intake stroke for that cylinder.
The digital engine control system requires sensors for measuring the engine
variables and parameters. Referring to Fig. below, the set of sensors may
include, for example, MAF, exhaust gas oxygen (EGO) concentration, and
crankshaft angular position (CPS), as well as RPM, camshaft position
(possibly a single reference point for each engine cycle), coolant temperature
(CT), throttle plate angular position (TPS), intake air temperature, and
exhaust pressure ratio (EPR) for EGR control.

In the example configuration of Fig. below, fuel delivery is assumed to be
TSPFI (i.e., via individual fuel injectors located so as to spray fuel directly
into the intake port and timed to coincide with the intake stroke). Air flow
measurement is via an MAF sensor. In addition to MAF, sensors are
available for the measurement of EGO concentration, RPM, inlet air and CTs,
throttle position, crankshaft (and possibly camshaft) position, and exhaust
differential pressure (DP) (for EGR calculation). Some engine controllers
involve vehicle speed sensors and various switches to identify brake on/off
and the transmission gear, depending on the particular control strategy
employed.
When the ignition key is switched on initially, the mode control logic
automatically selects an engine start control scheme that provides the low
air/fuel ratio required for starting the engine. Once the engine RPM rises
above the cranking value, the controller identifies the ‘‘engine started’’ mode
and passes control to the program for the engine warm-up mode. This
operating mode keeps the air/fuel ratio low to prevent engine stall during cool
weather until the engine CT rises above some minimum value. The
instantaneous air/fuel ratio is a function of CT. The particular value for the
minimum CT is specific to any given engine and, in particular, to the fuel
metering system. (Alternatively, the low air/fuel ratio may be maintained for
a fixed time interval following start, depending on start-up engine
temperature.)
When the CT rises sufficiently, the mode control logic directs the system to
operate in the open-loop control mode until the EGO sensor warms up
enough to provide accurate readings. This condition is detected by
monitoring the EGO sensor’s output for voltage readings above a certain
minimum rich air/fuel mixture voltage set point. When the sensor has
indicated rich at least once and after the engine has been in open loop for a
specific time, the control mode selection logic selects the closed-loop mode
for the system. (Note: other criteria may also be used.) The engine remains in
the closed-loop mode until either the EGO sensor cools and fails to read a
rich mixture for a certain length of time or a hard acceleration or deceleration
occurs. If the sensor cools, the control mode logic selects the open-loop mode
again.
During hard acceleration or heavy engine load, the control mode selection
logic chooses a scheme that provides a rich air/fuel mixture for the duration
of the acceleration or heavy load. This scheme provides maximum torque but
relatively poor emissions control and poor fuel economy regulation as
compared with a stoichiometric air/fuel ratio. After the need for enrichment
has passed, control is returned to either open-loop or closed-loop mode,
depending on the control mode logic selection conditions that exist at that
time.
During periods of deceleration, the air/fuel ratio is increased to reduce
emissions of HC and CO due to unburned excess fuel. When idle conditions
are present, control mode logic passes system control to the idle speed control
mode. In this mode, the engine speed is controlled to reduce engine
roughness and stalling that might occur because the idle load has changed
due to air conditioner compressor operation, alternator operation, or gearshift
positioning from park/neutral to drive, although stoichiometric mixture is
used if the engine is warm.
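The mode-selection logic just described can be pictured with a short sketch, given here in Python purely for illustration; the thresholds, mode names and structure are assumptions and do not reproduce any manufacturer's control program.

def select_mode(rpm, coolant_temp, ego_ready, throttle_angle, decelerating,
                vehicle_stationary, crank_rpm=400, warm_temp=70.0, heavy_load_angle=80.0):
    # Thresholds are illustrative assumptions.
    if rpm < crank_rpm:
        return "ENGINE_CRANK"        # low air/fuel ratio for starting
    if coolant_temp < warm_temp:
        return "ENGINE_WARM_UP"      # enriched mixture, open loop
    if throttle_angle > heavy_load_angle:
        return "HARD_ACCELERATION"   # rich mixture for maximum torque
    if decelerating:
        return "DECELERATION"        # lean mixture (or fuel cut) to limit HC and CO
    if throttle_angle == 0 and vehicle_stationary:
        return "IDLE"                # idle speed control
    return "CLOSED_LOOP" if ego_ready else "OPEN_LOOP"

print(select_mode(2200, 90.0, True, 15.0, False, False))   # CLOSED_LOOP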
In modern engine control systems, the controller is a special-purpose digital
computer built around a microprocessor. A block diagram of a typical
modern digital engine control system is depicted in Fig. below. The controller
also includes ROM containing the main program (of several thousand lines of
code) as well as RAM for temporary storage of data during computation.
The sensor signals are connected to the controller via an input/output (I/O)
subsystem. Similarly, the I/O subsystem provides the output signals to drive
the fuel injectors (shown as the fuel metering block of Fig. below) as well as
to trigger pulses to the ignition system (described later in this chapter). In
addition, this solid-state control system includes hardware for sampling and
analog-to-digital conversion such that all sensor measurements are in a
format suitable for reading by the microprocessor.

Digital engine control system diagram.

The sensors that measure various engine variables for control are as follows:
MAF – Mass air flow sensor
CT – Engine temperature as represented by coolant temperature
HEGO – (One or two) heated EGO sensor(s)
POS/RPM – Crankshaft angular position and RPM sensor; camshaft position
sensor for determining the start of each engine cycle
TPS – Throttle position sensor
DPS – Differential pressure sensor (exhaust to intake) for EGR control
The control system selects an operating mode based on the instantaneous
operating condition as determined from the sensor measurements. Within any
given operating mode the desired air/fuel ratio (A/F )d is selected. The
controller then determines the quantity of fuel to be injected into each
cylinder during each engine cycle. This quantity of fuel depends on the
particular engine operating condition as well as the controller mode of
operation, as will presently be explained.
Engine crank
While the engine is being cranked, the fuel control system must provide an
intake air/fuel ratio of anywhere from 2:1 to 12:1, depending on engine
temperature. The correct [A/F]d is selected from an ROM lookup table as a
function of CT. Low temperatures affect the ability of the fuel metering
system to atomize or mix the incoming air and fuel. At low temperatures, the
fuel tends to form into large droplets in the air, which do not burn as
efficiently as tiny droplets. The larger fuel droplets tend to increase the
apparent air/fuel ratio, because the amount of usable fuel (on the surface of
the droplets) in the air is reduced; therefore, the fuel metering system must
provide a decreased air/fuel ratio to provide the engine with a more
combustible air/fuel mixture. During engine crank the primary issue is to
achieve engine start as rapidly as possible. Once the engine is started the
controller switches to an engine warm-up mode.
Engine warm-up
While the engine is warming up, an enriched air/fuel ratio is still needed to
keep it running smoothly, but the required air/fuel ratio changes as the
temperature increases. Therefore, the fuel control system stays in the open-
loop mode, but the air/fuel ratio commands continue to be altered due to the
temperature changes. The emphasis in this control mode is on rapid and
smooth engine warm-up. Fuel economy and emission control are still a
secondary concern.
A diagram illustrating the lookup table selection of desired air/fuel ratios is
shown in Fig. below. Essentially, the measured CT is converted to an address
for the lookup table. This address is supplied to the ROM table via the system
address bus (A/B). The data stored at this address in the ROM are the desired
(A/F )d for that temperature. These data are sent to the controller via the
system data bus (D/B).
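The table lookup might be sketched as follows; the temperature break-points, ratios and the use of interpolation are assumptions made only to illustrate the idea of reading (A/F)d as a function of CT.

# Illustrative warm-up table: coolant temperature (deg C) -> desired A/F.
# The values are assumed, not calibration data for any real engine.
WARMUP_TABLE = [(-20, 8.0), (0, 10.0), (20, 12.0), (40, 13.5), (60, 14.0), (80, 14.7)]

def desired_af(coolant_temp_c):
    # Linear interpolation between table points, clamped at the ends.
    if coolant_temp_c <= WARMUP_TABLE[0][0]:
        return WARMUP_TABLE[0][1]
    for (t0, af0), (t1, af1) in zip(WARMUP_TABLE, WARMUP_TABLE[1:]):
        if coolant_temp_c <= t1:
            return af0 + (af1 - af0) * (coolant_temp_c - t0) / (t1 - t0)
    return WARMUP_TABLE[-1][1]

print(desired_af(30))   # 12.75 with these assumed break-points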
There is always the possibility of a CT failure. Such a failure could result in
excessively rich or lean mixtures, which can seriously degrade the
performance of both the engine and the three-way catalytic converter (3wcc).
One scheme that can circumvent a temperature sensor failure involves having
a time function to limit the duration of the engine warm-up mode. The
nominal time to warm the engine from cold soak at various temperatures is
known. The controller is configured to switch from engine warm-up mode to
an open-loop (warmed-up engine) mode after a sufficient time by means of
an internal timer.
It is worthwhile at this point to explain how the quantity of fuel to be injected
is determined. This method is implemented in essentially all operating modes
and is described here as a generic method, even though each engine control
scheme may vary somewhat from the following. The quantity of fuel to be
injected during the intake stroke of any given cylinder (which we call F ) is
determined by the mass of air (A) drawn into that cylinder (i.e., the air
charge) during that intake stroke. That quantity of fuel is given by the air
charge divided by the desired air/fuel ratio:
F = A / (A/F)d
Illustration of lookup table for desired air/fuel ratio
The quantity of air drawn into the cylinder, A, is computed from the MAF
rate and the RPM. The MAF rate will be given in kg/sec. If the engine speed
is RPM, then the number of revolutions/second (which we call r) is:
r = RPM / 60
Then, the MAF is distributed approximately uniformly to half the cylinders
during each revolution. If the number of cylinders is N then the air charge
(mass) in each cylinder during one revolution is:
A = 2·MAF / (r·N)
In this case, the mass of fuel delivered to each cylinder is:
F = A / (A/F)d = 2·MAF / (r·N·(A/F)d)
This computation is carried out by the controller continuously so that the fuel
quantity can be varied quickly to accommodate rapid changes in engine
operating condition. The fuel injector pulse duration T corresponding to this
fuel quantity is computed using the known fuel injector delivery rate Rf:
T = F / Rf
This pulse width is known as the base pulse width. The actual pulse width
used is modified from this according to the mode of operation at any time.
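Putting these steps together, a minimal sketch of the base-pulse-width computation follows; the relations are those derived above, while the numbers and names are illustrative assumptions.

def base_pulse_width(maf_kg_s, rpm, n_cyl, af_desired, rf_kg_s):
    # Base injector pulse width Tb from MAF rate, engine speed and cylinder count.
    r = rpm / 60.0                                # revolutions per second
    air_charge = 2.0 * maf_kg_s / (r * n_cyl)     # kg of air per cylinder per intake
    fuel = air_charge / af_desired                # kg of fuel for the desired A/F
    return fuel / rf_kg_s                         # seconds the injector stays open

# Example with assumed figures: 0.010 kg/s MAF, 2400 RPM, 4 cylinders,
# stoichiometric A/F of 14.7, injector delivery rate of 0.003 kg/s.
Tb = base_pulse_width(0.010, 2400, 4, 14.7, 0.003)
print(f"Tb = {Tb * 1000:.2f} ms")   # about 2.83 ms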
Open-loop control
For a warmed-up engine, the controller will operate in an open loop if the
closed-loop mode is not available for any reason. For example, the engine
may be warmed sufficiently but the EGO sensor may not provide a usable
signal. In any event, it is important to have a stoichiometric mixture to
minimize exhaust emissions as soon as possible. The base pulse width Tb is
computed as described above, except that the desired air/fuel ratio (A/F )d is
14.7 (stoichiometry):
Tb = 2·MAF / (14.7·r·N·Rf)
Corrections of the base pulse width occur whenever anything affects the
accuracy of the fuel delivery. For example, low battery voltage might affect
the pressure in the fuel rail that delivers fuel to the fuel injectors. Corrections
to the base pulse width are then made using the actual battery voltage.
An alternate method of computing MAF rate is the speed–density method.
Although this method has essentially been replaced by direct MAF
measurements, there will continue to be a number of cars employing this
method for years to come, so it is arguably worthwhile to include a brief
discussion in this chapter. This method, which is illustrated in Fig. below.
Engine control system using the speed–density method
is based on measurements of MAP, RPM, and intake air temperature Ti. The
air density da is computed from MAP and Ti, and the volume flow rate Rv of
combined air and EGR is computed from RPM and volumetric efficiency, the
latter being a function of MAP and RPM. The volume rate for air is found by
subtracting the EGR volume flow rate from the combined air and EGR.
Finally, the MAF rate is computed as the product of the volume flow rate for
air and the intake air density. Given the complexity of the speed–density
method it is easy to see why automobile manufacturers would choose the
direct MAF measurement once a cost-effective MAF sensor became
available.
The speed–density method can be implemented either by computation in the
engine control computer or via lookup tables. Fig. below is an illustration of
the lookup table implementation. In this figure, three variables need to be
determined: volumetric efficiency (nv), intake density (da), and EGR volume
flow rate (RE). The volumetric efficiency is read from ROM with an address
determined from RPM and MAP measurements. The intake air density is
read from another section of ROM with an address determined from MAP
and Ti measurements. The EGR volume flow rate is read from still another
section of ROM with an address determined from DP and EGR valve
position. These variables are combined to yield the MAF rate:
MAF = da·(nv·D·r/2 - RE)
where D is the engine displacement.
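A compact sketch of the speed–density estimate follows; the ideal-gas relation for intake density and the numerical figures are assumptions standing in for the ROM tables described in the text.

R_AIR = 287.0   # J/(kg K), specific gas constant for air (ideal-gas assumption)

def maf_speed_density(map_pa, t_intake_k, rpm, displacement_m3, vol_eff, egr_m3_s):
    # Estimate the mass air flow rate (kg/s) by the speed-density method.
    da = map_pa / (R_AIR * t_intake_k)            # intake air density, kg/m^3
    r = rpm / 60.0                                # revolutions per second
    rv = vol_eff * displacement_m3 * r / 2.0      # combined air + EGR volume rate
    return da * (rv - egr_m3_s)                   # subtract EGR, multiply by density

# Example with assumed figures: 60 kPa MAP, 310 K, 2000 RPM, 2.0 litre engine,
# volumetric efficiency 0.8, EGR volume flow 0.001 m^3/s.
print(maf_speed_density(60e3, 310.0, 2000, 0.002, 0.8, 0.001))   # about 0.017 kg/s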

Lookup table determination of da, RE, and nv


Closed-loop control
Perhaps the most important adjustment to the fuel injector pulse duration
comes when the control is in the closed-loop mode. In the open-loop mode
the accuracy of the fuel delivery is dependent on the accuracy of the
measurements of the important variables. However, any physical system is
susceptible to changes with either operating conditions (e.g., temperature) or
with time (aging or wear of components).
In any closed-loop control system a measurement of the output variables is
compared with the desired value for those variables. In the case of fuel
control, the variables being regulated are exhaust gas concentrations of HC,
CO, and NOx. Although direct measurement of these exhaust gases is not
feasible in production automobiles, it is sufficient for fuel control purposes to
measure the EGO concentration. These regulated gases can be optimally
controlled with a stoichiometric mixture. The EGO sensor is, in essence, a
switching sensor that changes output voltage abruptly as the input mixture
crosses the stoichiometric mixture of 14.7.
The closed-loop mode can only be activated when the EGO (or HEGO)
sensor is sufficiently warmed. That is, the output voltage of the sensor is high
(approximately 1 volt) when the exhaust oxygen concentration is low (i.e.,
for a rich mixture relative to stoichiometry). The EGO sensor voltage is low
(approximately 0.1 volt) whenever the exhaust oxygen concentration is high
(i.e., for a mixture that is lean relative to stoichiometry).
The time-average EGO sensor output voltage provides the feedback signal
for fuel control in the closed-loop mode. The instantaneous EGO sensor
voltage fluctuates rapidly from high to low values, but the average value is a
good indication of the mixture.
As explained earlier, fuel delivery is regulated by the engine control system
by controlling the pulse duration (T ) for each fuel injector. The engine
controller continuously adjusts the pulse duration for varying operating
conditions and for operating parameters. A representative algorithm for fuel
injector pulse duration for a given injector during the nth computation cycle,
T(n), is given by:
T(n) = Tb(n)·[1 + CL(n)]
where
Tb(n) is the base pulse width as determined from measurements of MAF rate
and the desired air/fuel ratio, and
CL(n) is the closed-loop correction factor.
For open-loop operation, CL(n) equals 0; for closed-loop operation, CL is
given by:
CL(n) = a·I(n) + b·P(n)
where I(n) is the integral part of the closed-loop correction, P(n) is the
proportional part of the closed-loop correction, and a and b are constants.
These latter variables are determined from the output of the EGO sensor.
Whenever the EGO sensor indicates a rich mixture (i.e., EGO sensor voltage
is high), then the integral term is reduced by the controller for the next cycle,
I(n + 1) = I(n) - 1
for a rich mixture.
Whenever the EGO sensor indicates a lean mixture (i.e., low output voltage),
the controller increments I(n) for the next cycle,
I(n + 1) = I(n) + 1
for a lean mixture. The integral part of CL continues to increase or decrease
in a limit-cycle operation.
The computation of the closed-loop correction factor continues at a rate
determined within the controller. This rate is normally high enough to permit
rapid adjustment of the fuel injector pulse width during rapid throttle changes
at high engine speed. The period between successive computations is the
computation cycle described above.
In addition to the integral component of the closed-loop correction to pulse
duration, there is the proportional term. This term, P(n), is proportional to
the deviation of the average EGO sensor signal from its mid-range value
(corresponding to stoichiometry). The combined terms change with
computation cycle as depicted in Fig. below.
In this figure the regions of lean and rich (relative to stoichiometry) are
depicted. During relatively lean periods the closed-loop correction term
increases for each computation cycle, whereas during relatively rich intervals
this term decreases.
Once the computation of the closed-loop correction factor is completed, the
value is stored in a specific memory location (RAM) in the controller. At the
appropriate time for fuel injector activation (during the intake stroke), the
instantaneous closed-loop correction factor is read from its location in RAM
and an actual pulse of the corrected duration is generated by the engine
control.
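A minimal sketch of one computation cycle of this limit-cycle correction is given below; the constants a and b, the step size and the voltage threshold are assumptions, since in practice they come from calibration of the particular engine.

def closed_loop_step(I, ego_voltage, avg_ego_voltage,
                     a=0.005, b=0.02, mid_voltage=0.45):
    # One computation cycle of CL(n) = a*I(n) + b*P(n) as reconstructed above.
    I = I - 1 if ego_voltage > mid_voltage else I + 1   # integral: down if rich, up if lean
    P = mid_voltage - avg_ego_voltage                   # proportional: deviation from mid-range
    CL = a * I + b * P
    return I, CL

# One illustrative cycle starting from I = 0 with a momentarily rich reading.
I, CL = closed_loop_step(0, ego_voltage=0.8, avg_ego_voltage=0.55)
print(I, CL)   # I becomes -1 and CL is slightly negative, trimming the pulse width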
Acceleration enrichment
During periods of heavy engine load such as during hard acceleration, fuel
control is adjusted to provide an enriched air/fuel ratio to maximize engine
torque and neglect fuel economy and emissions. This condition of enrichment
is permitted within the regulations of the EPA as it is only a temporary
condition. It is well recognized that hard acceleration is occasionally required
for maneuvering in certain situations and is, in fact, related at times to safety.
The computer detects this condition by reading the throttle angle sensor
voltage. High throttle angle corresponds to heavy engine load and is an
indication that heavy acceleration is called for by the driver. In some vehicles
a switch is provided to detect wide open throttle. The fuel system controller
responds by increasing the pulse duration of the fuel injector signal for the
duration of the heavy load. This enrichment enables the engine to operate
with a torque greater than that
allowed when emissions and fuel economy are controlled. Enrichment of the
air/fuel ratio to about 12:1 is sometimes used.
Deceleration leaning
During periods of light engine load and high RPM such as during coasting or
hard deceleration, the engine operates with a very lean air/fuel ratio to reduce
excess emissions of HC and CO. Deceleration is indicated by a sudden
decrease in throttle angle or by closure of a switch when the throttle is closed
(depending on the particular vehicle configuration). When these conditions
are detected by the control computer, it computes a decrease in the pulse
duration of the fuel injector signal. The fuel may even be turned off
completely for very heavy deceleration.
Idle speed control
Idle speed control is used by some manufacturers to prevent engine stall
during idle. The goal is to allow the engine to idle at as low an RPM as
possible, yet keep the engine from running rough and stalling when power
consuming accessories, such as air conditioning compressors and alternators,
turn on.
The control mode selection logic switches to idle speed control when the
throttle angle reaches its zero (completely closed) position and engine RPM
falls below a minimum value, and when the vehicle is stationary. Idle speed
is controlled by using an electronically controlled throttle bypass valve (Fig.
a) that allows air to flow around the throttle plate and produces the same
effect as if the throttle had been slightly opened.
There are various schemes for operating a valve to introduce bypass air for
idle control. One relatively common method for controlling the idle speed
bypass air uses a special type of motor called a stepper motor. A stepper
motor moves in fixed angular increments when activated by pulses on its two
sets of windings (i.e., open or close). Such a motor can be operated in either
direction by supplying pulses in the proper phase to the windings. This is
advantageous for idle speed control, since the controller can very precisely
position the idle bypass valve by sending the proper number of pulses of the
correct phasing.
The engine control computer can establish the position of the valve precisely
in a number of ways. In one approach, the computer sends sufficient pulses to
completely close the valve when the ignition is first switched on, and then
sends opening pulses (phased to open the valve) to drive it to a specified
(known) position.
Idle air control.
A block diagram of a simplified idle speed control system is shown in Fig. b.
Idle speed is detected by the RPM sensor, and the speed is adjusted to
maintain a constant idle RPM. The computer receives digital on/off status
inputs from several power-consuming devices attached to the engine, such as
the air conditioner clutch switch, park-neutral switch, and the battery charge
indicator. These inputs indicate the load that is applied to the engine during
idle.
When the engine is not idling, the idle speed control valve may be completely
closed so that the throttle plate has total control of intake air. During periods
of deceleration leaning, the idle speed valve may be opened to provide extra
air to increase the air/fuel ratio in order to reduce HC emissions.
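A rough sketch of stepper-based idle bypass control is given below; the step counts, gain and target RPM are assumptions chosen only to show the structure of the correction.

def idle_bypass_steps(target_rpm, measured_rpm, current_steps,
                      gain=0.05, min_steps=0, max_steps=200):
    # Positive steps open the bypass valve (more air, higher idle RPM).
    error = target_rpm - measured_rpm
    new_steps = current_steps + int(gain * error)
    return max(min_steps, min(max_steps, new_steps))

# Example: idle sagging to 650 RPM against an 800 RPM target with the valve at 40 steps.
print(idle_bypass_steps(800, 650, current_steps=40))   # 47, i.e. the valve opens by 7 steps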
EGR control
A second electronic engine control subsystem is the control of exhaust gas
that is recirculated back to the intake manifold. Under normal operating
conditions, engine cylinder temperatures can reach more than 3,000 °F. The
higher the temperature, the greater the chance that the exhaust will contain
NOx emissions. A small amount of exhaust gas is introduced into the cylinder to
replace normal intake air. This results in lower combustion temperatures,
which reduces NOx emissions.
The control mode selection logic determines when EGR is turned off or on.
EGR is turned off during cranking, cold engine temperature (engine warm-
up), idling, acceleration, or other conditions demanding high torque.
Since EGR was first introduced as a concept for reducing NOx exhaust
emissions, its implementation has gone through considerable change. There
are in fact many schemes and configurations for EGR realization. We discuss
here one method of EGR implementation that incorporates enough features to
be representative of all schemes in use today and in the near future.
Fundamental to all EGR schemes is a passageway or port connecting the
exhaust and intake manifolds. A valve is positioned along this passageway
whose position regulates EGR from zero to some maximum value. Typically,
the valve is operated by a diaphragm connected to a variable vacuum source.
The controller operates a solenoid in a periodic variable-duty-cycle mode.
The average level of vacuum on the diaphragm varies with the duty cycle. By
varying this duty cycle, the control system has proportional control over the
EGR valve opening and thereby over the amount of EGR.
In many EGR control systems the controller monitors the DP between the
exhaust and intake manifold via a differential pressure sensor (DPS). With
the signal from this sensor the controller can calculate the valve opening for
the desired EGR level. The amount of EGR required is a predetermined
function of the load on the engine (i.e., power produced).
A simplified block diagram for an EGR control system is depicted in Fig. a.
In this figure the EGR valve is operated by a solenoid-regulated vacuum
actuator (coming from the intake). The engine controller determines the
required amount of EGR based on the engine operating condition and the
signal from the DPS between intake and exhaust manifolds. The controller
EGR control
then commands the correct EGR valve position to achieve the desired amount
of EGR.
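The duty-cycle actuation might be sketched as follows; the load-to-EGR mapping, gain and pressure figures are illustrative assumptions rather than values from any production system.

def egr_duty_cycle(desired_egr_fraction, measured_dp_kpa, target_dp_kpa, gain=2.0):
    # Solenoid duty cycle (0-100 %) for the vacuum-operated EGR valve.
    # desired_egr_fraction is the predetermined function of engine load;
    # the DP feedback trims the command toward the target differential pressure.
    base = 100.0 * desired_egr_fraction
    correction = gain * (target_dp_kpa - measured_dp_kpa)
    return max(0.0, min(100.0, base + correction))

print(egr_duty_cycle(0.10, measured_dp_kpa=4.0, target_dp_kpa=5.0))   # 12.0 % duty cycle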
Electronic ignition control
An engine must be provided with fuel and air in correct proportions and the
means to ignite this mixture in the form of an electric spark. Before the
development of electronic ignition the traditional ignition system included
spark plugs, a distributor, and a high-voltage ignition coil. The distributor
would sequentially connect the coil output high voltage to the correct spark
plug. In addition, it would cause the coil to generate the spark by interrupting
the primary current (ignition points) in the desired coil, thereby generating
the required spark. The time of occurrence of this spark (i.e., the ignition
timing) in relation of the piston to top dead center (TDC) influences the
torque generated.
In most present-day electronically controlled engines the distributor has been
replaced by multiple coils. Each coil supplies the spark to either one or two
cylinders. In such a system the controller selects the appropriate coil and
delivers a trigger pulse to ignition control circuitry at the correct time for
each cylinder. (Note: In some cases the coil is on the spark plug as an integral
unit.)
Fig. a illustrates such a system as an example of a 4-cylinder engine. In this
example a pair of coils provides the spark for firing two cylinders for each
coil. Cylinder pairs are selected such that one cylinder is on its compression
stroke while the other is on exhaust. The cylinder on compression is the
cylinder to be fired (at a time somewhat before it reaches TDC). The other
cylinder is on exhaust.
The coil fires the spark plugs for these two cylinders simultaneously. For the
former cylinder, the mixture is ignited and combustion begins for the power
stroke that follows. For the other cylinder (on exhaust stroke), the combustion
has already taken place and the spark has no effect.
Although the mixture for modern emission-regulated engines is constrained
by emissions regulations, the spark timing can be varied in order to achieve
optimum performance within the mixture constraint. For example, the
ignition timing can be chosen to produce the best possible engine torque for
any given operating condition. This optimum ignition timing is known for
any given
Distributorless ignition system
engine configuration from studies of engine performance as measured on an
engine dynamometer.
Fig. above is a schematic of a representative electronic ignition system. In
this example configuration the spark advance (SA) value is computed in the
main engine control (i.e., the controller that regulates fuel). This system
receives data from the various sensors (as described above with respect to
fuel control) and determines the correct SA for the instantaneous operating
condition.
The variables that influence the optimum spark timing at any operating
condition include RPM, manifold pressure (or MAF), barometric pressure,
and CT. The correct ignition timing for each value of these variables is stored
in an ROM lookup table. For example, the variation of SA with RPM for a
representative engine is shown in Fig. below The engine control system
obtains readings from the various sensors and generates an address to the
lookup table (ROM). After reading the data from the lookup tables, the
control system computes the correct SA. An output signal is generated at the
appropriate time to activate the spark.
In the configuration depicted in the first Fig., the electronic ignition is
implemented in a stand-alone ignition module. This solid-state module
receives the correct SA data and generates electrical signals that operate the
coil driver circuitry. These signals are produced in response to timing inputs
coming from crankshaft and camshaft signals (POS/RPM).
The coil driver circuits generate the primary current in windings P1 and P2 of
the coil packs depicted in the first Fig. These primary currents build up during the
so-called dwell period before the spark is to occur. At the correct time the
driver circuits interrupt the primary currents via a solid-state switch. This
interruption of the primary current causes the magnetic field in the coil pack
to drop rapidly, inducing a very high voltage (20,000–40,000 volts) that
causes a spark. In the example depicted in the first Fig., a pair of coil packs, each
firing two spark plugs, is shown. Such a configuration would be appropriate
for a 4-cylinder engine. Normally there would be one coil pack for each pair
of cylinders.
The ignition system described above is known as a distributorless ignition
system (DIS) since it uses no distributor. There are a number of older car
models on the road that utilize a distributor. However, the electronic ignition
system is the same as that shown in the first Fig., up to the coil packs. In
distributor-equipped engines there is only one coil, and its secondary is
connected to the rotary switch (or distributor).
In a typical electronic ignition control system, the total spark advance, SA (in
degrees before TDC), is made up of several components that are added
together:
SA = SAS + SAP + SAT
The first component, SAS, is the basic SA, which is a tabulated function of
RPM and MAP. The control system reads RPM and MAP, and calculates the
address in ROM of the SAS that corresponds to these values. Typically, the
advance of SA from idle to about 1200 RPM is relatively slow. Then, from
about 1200 to about 2300 RPM the increase in SA is relatively quick.
Beyond 2300 RPM, the increase in SA is again relatively slow. Each
engine configuration has its own SA characteristic, which is normally a
compromise between a number of conflicting factors.
The second component, SAP, is the contribution to SA due to manifold
pressure. This value is obtained from ROM lookup tables. Generally
speaking, the SA is reduced as pressure increases.
The final component, SAT, is the contribution to SA due to temperature.
Temperature effects on SA are relatively complex, including such effects as
cold cranking, cold start, warm-up, and fully warmed-up conditions.
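A short sketch of this composition, SA = SAS + SAP + SAT, follows; the three helper functions stand in for the ROM lookup tables and their numbers are purely illustrative.

def basic_advance(rpm, map_kpa):          # stand-in for the tabulated SAS
    return min(10.0 + 0.01 * rpm, 35.0)

def pressure_correction(map_kpa):         # SAP: generally reduced as pressure rises
    return -0.05 * max(0.0, map_kpa - 40.0)

def temperature_correction(coolant_c):    # SAT: simplified temperature effect
    return 5.0 if coolant_c < 40.0 else 0.0

def total_spark_advance(rpm, map_kpa, coolant_c):
    # Total spark advance (degrees before TDC) as the sum of three contributions.
    return basic_advance(rpm, map_kpa) + pressure_correction(map_kpa) + temperature_correction(coolant_c)

print(total_spark_advance(2400, 60.0, 90.0))   # 33.0 degrees with these stand-in tables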
Closed-loop ignition timing
The ignition system described in the foregoing section is an open-loop
system. The major disadvantage of open loop control is that it cannot
automatically compensate for mechanical changes in the system. Closed-loop
control of ignition timing is desirable from the standpoint of improving
engine performance and maintaining that performance in spite of system
changes.
One scheme for closed-loop ignition timing is based on the improvement in
performance that is achieved by advancing the ignition timing relative to
TDC. For a given RPM and manifold pressure, the variation in torque with
SA is as depicted in Fig. below. One can see that advancing the spark relative
to TDC increases the torque until a point is reached at which best torque is
produced. This SA is known as mean best torque, or MBT.

When the spark is advanced too far, an abnormal combustion phenomenon
occurs that is known as knocking. Although the details of what causes
knocking are beyond the scope of this book, it is generally a result of a
portion of the air–fuel mixture autoigniting, as opposed to being normally
ignited by the advancing flame front that occurs in normal combustion
following spark ignition. Roughly speaking, the amplitude of knock is
proportional to the fraction of the total air and fuel mixture that autoignites. It
is characterized by an abnormally rapid rise in cylinder pressure during
combustion, followed by very rapid oscillations in cylinder pressure. The
frequency of these oscillations is specific to a given engine configuration and
is typically in the range of a few kilohertz. Fig. below is a graph of a typical
cylinder pressure versus time under knocking conditions. A relatively low
level of knock is arguably beneficial to performance, although excessive
knock is unquestionably damaging to the engine and must be avoided.

One control strategy for SA under closed-loop control is to advance the spark
timing until the knock level becomes unacceptable. At this point, the control
system reduces the SA (retarded spark) until acceptable levels of knock are
achieved. Of course, an SA control scheme based on limiting the levels of
knocking requires a knock sensor. This sensor responds to the acoustical
energy in the spectrum of the rapid cylinder pressure oscillations, as shown in
Fig.
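This advance-until-knock strategy can be sketched as below; the knock threshold, step sizes and clamping limits are assumptions introduced only to show the closed-loop structure.

def adjust_spark(sa_deg, knock_intensity, knock_limit=1.0,
                 advance_step=0.5, retard_step=2.0, sa_min=5.0, sa_max=40.0):
    # Creep toward MBT while knock is acceptable; retard quickly when it is not.
    if knock_intensity > knock_limit:
        sa_deg -= retard_step
    else:
        sa_deg += advance_step
    return max(sa_min, min(sa_max, sa_deg))

sa = 20.0
for knock in (0.2, 0.3, 1.4, 0.5):   # assumed knock-sensor readings, one per cycle
    sa = adjust_spark(sa, knock)
print(sa)   # 19.5: advanced twice, retarded once, then advanced again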
Integrated engine control system
Each control subsystem for fuel control, spark control, and EGR has been
discussed separately. However, a fully integrated electronic engine control
system can include these subsystems and provide additional functions.
(Usually the flexibility of the digital control system allows such expansion
quite easily because the computer program can be changed to accomplish the
expanded functions.) Several of these additional functions are discussed in
the following.

Secondary air management
Secondary air management is used to improve the performance of the
catalytic converter by providing extra (oxygen-rich) air to either the converter
itself or to the exhaust manifold. The catalyst temperature must be above
about 200 °C to efficiently oxidize HC and CO and reduce NOx. During
engine warm-up when the catalytic converter is cold, HC and CO are
oxidized in the exhaust manifold by routing secondary air to the manifold.
This creates extra heat to speed the warm-up of the converter and EGO
sensor, enabling the fuel controller to go to the closed-loop mode more
quickly.
The converter can be damaged if too much heat is applied to it. This can
occur if large amounts of HC and CO are oxidized in the manifold during
periods of heavy loads, which call for fuel enrichment, or during severe
deceleration. In such cases, the secondary air is directed to the air cleaner,
where it has no effect on exhaust temperatures.
After warm-up, the main use of secondary air is to provide an oxygen-rich
atmosphere in the second chamber of the three-way catalyst, dual-chamber
converter system. In a dual-chamber converter, the first chamber contains
rhodium, palladium, and platinum to reduce NOx and to oxidize HC and CO.
The second chamber contains only platinum and palladium. The extra oxygen
from the secondary air improves the converter’s ability to oxidize HC and
CO in the second converter chamber.
The computer program for the control mode selection logic can be modified
to include the conditions for controlling secondary air. The computer controls
secondary air by using two solenoid valves similar to the EGR valve. One
valve switches air flow to the air cleaner or to the exhaust system. The other
valve switches air flow to the exhaust manifold or to the converter. The air
routing is based on engine CT and air/fuel ratio. The control system diagram
for secondary air is shown in Fig.
Evaporative emissions canister purge
During engine-off conditions, the fuel stored in the fuel system tends to
evaporate into the atmosphere. To reduce these HC emissions, the fuel tank is
sealed and evaporative gases are collected by a charcoal filter in a canister.
The collected fuel is released into the intake through a solenoid valve
controlled by the computer. This is done during closed-loop operation to
reduce fuel calculation complications in the open-loop mode.
Automatic system adjustment
Another important feature of microcomputer engine control systems is their
ability to be programmed to adapt to parameter changes. Many control
systems use this feature to enable the computer to modify lookup table values
for computing open-loop air/fuel ratios. While the computer is in the closed-
loop mode, the computer checks its open-loop calculated air/fuel ratios and
compares them with the closed-loop average limit-cycle values. If they match
closely, the open-loop lookup tables are unchanged. If the difference is large,
the system controller corrects the lookup tables so that the open-loop values
more closely match the closed-loop values. This updated open-loop lookup
table is stored in separate memory (RAM), which is always powered directly
by a car battery so that the new values are not lost while the ignition key is
turned off. The next time the engine is started, the new lookup table values
will be used in the open-loop mode and will provide more accurate control of
the air/fuel ratio. This feature is very important because it allows the system
controller to adjust to long-term changes in engine and fuel system
conditions. This feature can be applied in individual subsystem control
systems or in the fully integrated control system. If not available initially, it
may be added to the system by modifying its control program.
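The adaptation just described might look like the following sketch; the cell indexing, threshold and correction rule are assumptions illustrating the idea of nudging the battery-backed table toward the closed-loop result.

def update_open_loop_cell(table, cell, open_loop_af, closed_loop_avg_af, threshold=0.2):
    # Correct the stored open-loop value only if it drifts noticeably from the
    # closed-loop average; otherwise leave the table untouched.
    error = closed_loop_avg_af - open_loop_af
    if abs(error) > threshold:
        table[cell] = table[cell] + error
    return table

table = {("2000rpm", "60kPa"): 14.7}
table = update_open_loop_cell(table, ("2000rpm", "60kPa"),
                              open_loop_af=14.7, closed_loop_avg_af=14.3)
print(table)   # the cell moves to 14.3 because the 0.4 discrepancy exceeds the threshold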
System diagnosis
Another important feature of microcomputer engine control systems is their
ability to diagnose failures in their control systems and alert the operator.
Sensor and actuator failures or mis-adjustments can be easily detected by the
computer. For instance, the computer will detect a malfunctioning MAF
sensor if the sensor’s output goes above or below certain specified limits, or
fails to change for long periods of time. A prime example is the automatic
adjustment system just discussed. If the open-loop calculations consistently
come up wrong, the engine control computer may determine that one of the
many sensors used in the open-loop calculations has failed.
If the computer detects the loss of a primary control sensor or actuator, it may
choose to operate in a different mode until the problem is repaired. The
operator is notified of a failure by an indicator on the instrument panel (e.g.,
check engine). Because of the flexibility of the microcomputer engine control
system, additional diagnostic programs might be added to accommodate
different engine models that contain more or fewer sensors. Keeping the
system totally integrated gives the microcomputer controller access to more
sensor inputs so they can be checked.
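A minimal illustration of the range and stuck-signal checks mentioned above follows; the limits and sample counts are assumptions, not diagnostic thresholds from any real controller.

def diagnose_sensor(samples, low_limit, high_limit, stuck_count=50):
    # Flag a sensor whose reading leaves its valid range or stops changing.
    for v in samples:
        if v < low_limit or v > high_limit:
            return "OUT_OF_RANGE"
    if len(samples) >= stuck_count and len(set(samples[-stuck_count:])) == 1:
        return "STUCK"
    return "OK"

print(diagnose_sensor([1.2, 1.3, 6.1], low_limit=0.0, high_limit=5.0))   # OUT_OF_RANGE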
Summary of control modes
Engine crank (start)
The following list is a summary of the engine operations in the engine crank
(starting) mode. Here, the primary control concern is reliable engine start.
1. Engine RPM at cranking speed.
2. Engine coolant at low temperature.
3. Air/fuel ratio low.
4. Spark retarded.
5. EGR off.
6. Secondary air to exhaust manifold.
7. Fuel economy not closely controlled.
8. Emissions not closely controlled.

Engine warm-up
While the engine is warming up, the engine temperature is rising to its
normal operating value. Here, the primary control concern is rapid and
smooth engine warm-up. A summary of the engine operations during this
period follows:
1. Engine RPM above cranking speed at command of driver.
2. Engine CT rises to minimum threshold.
3. Air/fuel ratio low.
4. Spark timing set by controller.
5. EGR off.
6. Secondary air to exhaust manifold.
7. Fuel economy not closely controlled.
8. Emissions not closely controlled.

Open-loop control
The following list summarizes the engine operations when the engine is being
controlled with an open-loop system. This is before the EGO sensor has
reached the correct temperature for closed-loop operation. Fuel economy and
emissions are closely controlled.
1. Engine RPM at command of driver.
2. Engine temperature above warm-up threshold.
3. Air/fuel ratio controlled by an open-loop system to 14.7.
4. EGO sensor temperature less than minimum threshold.
5. Spark timing set by controller.
6. EGR controlled.
7. Secondary air to catalytic converter.
8. Fuel economy controlled.
9. Emissions controlled.
Closed-loop control
For the closest control of emissions and fuel economy under various driving
conditions, the electronic engine control system is in a closed loop. Fuel
economy and emissions are controlled very tightly. The following is a
summary of the engine operations during this period:
1. Engine RPM at command of driver.
2. Engine temperature in normal range (above warm-up threshold).
3. Average air/fuel ratio controlled to 14.7 ± 0.05.
4. EGO sensor’s temperature above minimum threshold detected by a sensor
output voltage indicating a rich mixture of air and fuel for a minimum
amount of time.
5. System returns to open loop if EGO sensor cools below minimum
threshold or fails to indicate rich mixture for given length of time.
6. EGR controlled.
7. Secondary air to catalytic converter.
8. Fuel economy tightly controlled.
9. Emissions tightly controlled.
Hard acceleration
When the engine must be accelerated quickly or if the engine is under heavy
load, it is in a special mode. Now, the engine controller is primarily
concerned with providing maximum performance. Here is a summary of the
operations under these conditions:
1. Driver asking for sharp increase in RPM or in engine power, demanding
maximum torque.
2. Engine temperature in normal range.
3. Air/fuel ratio rich mixture.
4. EGO not in loop.
5. EGR off.
6. Secondary air to intake.
7. Relatively poor fuel economy.
8. Relatively poor emissions control.

Deceleration and idle
Slowing down, stopping, and idling are combined in another special mode.
The engine controller is primarily concerned with reducing excess emissions
during deceleration, and keeping idle fuel consumption at a minimum. This
engine operation is summarized in the following list.
1. RPM decreasing rapidly due to driver command or else held constant at
idle.
2. Engine temperature in normal range.
3. Air/fuel ratio lean mixture.
4. Special mode in deceleration to reduce emissions.
5. Special mode in idle to keep RPM constant at idle as load varies due to air
conditioner, automatic transmission engagement, etc.
6. EGR on.
7. Secondary air to intake.
8. Good fuel economy during deceleration.
9. Poor fuel economy during idle, but fuel consumption kept to minimum
possible
Improvements in electronic engine control
The digital engine control system in this chapter has been made possible by a
rapid evolution of the state of technology. Some of this technology has been
briefly mentioned in this chapter. It is worthwhile to review some of the
technological improvements that have occurred in digital engine control in
greater detail to fully appreciate the capabilities of modern digital engine
control.

Integrated engine control system
One of the developments that has occurred since the introduction of digital
engine control technology is the integration of the various functions into a
single control unit. Whereas the earlier systems in many cases had separate
control systems for fuel and ignition control, the trend is toward integrated
control. This trend has been made possible, in part, by improvements in
digital hardware and in computational algorithms and software. For example,
one of the hardware improvements that has been achieved is the operation of
the microprocessor unit (MPU) at higher clock frequencies. This higher
frequency results in a reduction of the time for any given MPU computation,
thereby permitting greater computational capability. This increased
computational capability has made it possible, in turn, to have more precise
control of fuel delivery during rapid transient engine operation.
Except for long steady cruise while driving on certain rural roads or
freeways, the automobile engine is operated under changing load and RPM
conditions. The limitations in the computational capability of early engine
control systems restricted the ability of the controller to continuously
maintain the air/fuel ratio at stoichiometry under such changing operating
conditions. The newer, more capable digital engine control systems are more
precise than the earlier versions at maintaining stoichiometry and therefore
operate more of the time within the optimum window for the three-way
catalytic converter.
Moreover, since the control of fuel and ignition requires, in some cases, data
from the same sensor set, it is advantageous to have a single integrated
system for fuel and ignition timing control. The newer engine controllers
have the capability to maintain stoichiometry and simultaneously optimize
ignition timing.
Flywheel energy storage
Flywheel energy storage systems for use in vehicle propulsion have reached
application in the light tram vehicle. They have also featured in pilot-
production vehicles such as the Chrysler Patriot hybrid-drive racing car
concept. Here, flywheel energy storage is used in conjunction with a gas
turbine prime-mover engine, Fig. The drive was developed by Satcon
Technologies in the USA to deliver 370 kW via an electric motor drive to the
road wheels. A turbine alternator unit is also incorporated which provides
high frequency current generation from an electrical machine on a common
shaft with the gas turbine. The flywheel is integral with a motor/generator
and contained in a protective housing affording an internal vacuum
environment. The 57 kg unit rotates at 60 000 rpm and provides 4.3 kW of
electrical energy. The flywheel is a gimbal-mounted carbon-fibre composite
unit sitting in a carbon-fibre protective housing. In conjunction with its
motor/generator it acts as a load leveller, taking in power in periods of low
demand on the vehicle and contributing power for hill climbing or high
acceleration performance demands.
European research work into flywheel storage systems includes that reported
by Van der Graaf at the Technical University of Eindhoven. Rather than
using continuously variable transmission ratio between flywheel and
driveline, a two-mode system is involved in this work. A slip coupling is used
up to vehicle speeds of 13 km/h, when CVT comes in and upshifts when
engine and flywheel speed fall simultaneously. At 55 km/h the drive is
transferred from the first to the second sheave of the CVT variator, the engine
simultaneously being linked to the first sheave. Thus a series hybrid drive
exists at lower speeds and a parallel hybrid one at higher speeds. The 19 kg,
390 mm diameter composite-fibre flywheel has an energy content of 180 kW
and rotates at up to 19 000 rpm.
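For orientation only (this is general physics, not a figure from either vehicle described): the energy stored in a flywheel is E = ½Iω², where I is the moment of inertia and ω the angular speed, so the recoverable energy rises with the square of rotational speed; this is why both units run at tens of thousands of rpm and why the motor/generator trades flywheel speed for electrical power during load-levelling.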
General characteristics of wheel
suspensions
The suspension of modern vehicles need to satisfy a number of requirements
whose aims partly conflict because of different operating conditions (loaded/
unloaded, acceleration/braking, level/uneven road, straight
running/cornering). The forces and moments that operate in the wheel contact
area must be directed into the body.
The kingpin offset and disturbing force lever arm in the case of the
longitudinal forces, the castor offset in the case of the lateral forces, and the
radial load moment arm in the case of the vertical forces are important
elements whose effects interact as a result of, for example, the angle of the
steering axis.
Sufficient vertical spring travel, possibly combined with the horizontal
movement of the wheel away from an uneven area of the road (kinematic
wheel) is required for reasons of ride comfort. The recession suspension
should also be compliant for the purpose of reducing the rolling stiffness of
the tyres and short-stroke movements in a longitudinal direction resulting
from the road surface (longitudinal compliance, Fig.), but without affecting
the development of lateral wheel forces and hence steering precision, for
which the most rigid wheel suspension is required. This requirement is
undermined as a result of the necessary flexibility that results from disturbing
wheel movements generated by longitudinal forces arising from driving and
braking operations.
For the purpose of ensuring the optimum handling characteristics of the
vehicle in a steady state as well as in a transient state, the wheels must be in a
defined position with respect to the road surface for the purpose of generating
the necessary lateral forces. The build-up and size of the lateral wheel forces
are determined by specific toe-in and camber changes of the wheels
depending on the jounce and movement of the body as a result of the axle
kinematics (roll steer) and operative forces (compliance steer). This makes it
possible for specific operating conditions such as load and traction to be
taken into consideration. By establishing the relevant geometry and
kinematics of the axle, it is also possible to prevent the undesirable diving or
lifting of the body during braking or accelerating and to ensure that the
vehicle does not exhibit any tendency to oversteer and displays predictable
transition behaviour for the driver.
Other requirements are:
· independent movement of each of the wheels on an axle (not
guaranteed in the case of rigid axles);
· small, unsprung masses of the suspension in order to keep wheel
load fluctuation as low as possible (important for driving safety);
· the introduction of wheel forces into the body in a manner
favourable to the flow of forces;
· the necessary room and expenditure for construction purposes,
bearing in mind the necessary tolerances with regard to geometry and
stability; ease of use;
· behaviour with regard to the passive safety of passengers and other
road users;
· costs.
A multi-link rear axle is a type of suspension system which is progressively
replacing the semi-trailing arm axle; it consists of at least one trailing arm
on each side. This arm is guided by two (or even three) transverse control
arms. The trailing arm simultaneously serves as a wheel hub carrier and (on
four-wheel steering) allows the minor angle movements required to steer the
rear wheels. The main advantages are, however, its good kinematic and
elastokinematic characteristics. BMW calls the design shown in the
illustration and fitted in the 3-series (1997) a ‘central arm axle’. The trailing
arms 1 are made from GGG40 cast iron; they absorb all longitudinal forces
and braking moments as well as transfering them via the points 2 – the
centres of which also form the radius arm axes – on the body. The lateral
forces generated at the centre of tyre contact are absorbed at the subframe 5,
which is fastened to the body with four rubber bushes (items 6 and 7) via the
transverse control arms 3 and 4. The upper arms 3 carry the minibloc springs
11 and the joints of the anti-roll bar 8. Consequently, this is the place where
the majority of the vertical forces are transferred between the axle and the
body. The shock absorbers, which carry the additional polyurethane springs 9
at the top, are fastened in a good position behind the axle centre at the ends of
the trailing arms. For reasons of noise, the differential 10 is attached
elastically to the subframe 5 at three points (with two rubber bearings at the
front and one hydro bearing at the back). When viewed from the top and the
back, the transverse control arms are positioned at an angle so that, together
with the differing rubber hardness of the bearings at points 2, they achieve
the desired elastokinematic characteristics. These are:
· toe-in under braking forces;
· lateral force compliance understeer during cornering;
· prevention of torque steer effects;
· lane change and straight running stability.
For reasons of space, the front eyes 2 are pressed into parts 1 and bolted to
the attachment bracket. Elongated holes are also provided in this part so toe-
in can be set. In the case of the E46 model series (from 1998 onwards), the
upper transverse arm is made of aluminium for reasons of weight (reduction
of unsprung masses).
Independent wheel suspensions – general
The chassis of a passenger car must be able to handle the engine power
installed. Ever-improving acceleration, higher peak and cornering speeds, and
deceleration lead to significantly increased requirements for safer chassis.
Independent wheel suspensions follow this trend. Their main advantages are:
Ø little space requirement;
Ø a kinematic and/or elastokinematic toe-in change, tending towards
understeering is possible;
Ø easier steerability with existing drive;
Ø low weight;
Ø no mutual wheel influence.

Driven, rigid steering axle with dual joint made by the company GKN –
Birfield AG for four-wheel-drive special-purpose vehicles, tractors and
construction machinery. The dual joint is centred over the bearings 1 and 2
in the region of the fork carriers; these are protected against fouling by the
radial sealing rings 3. Bearing 1 serves as a fixed bearing and bearing 2 as a
movable bearing. The drive shaft 4 is also a sun gear for the planetary gear
with the internal-geared wheel 5. Vertical, lateral and longitudinal forces are
transmitted by both tapered-roller bearings 6 and 7. Steering takes place
about the steering axis EG.
The last two characteristics are important for good roadholding, especially on
bends with an uneven road surface. Transverse arms and trailing arms ensure
the desired kinematic behaviour of the rebounding and jouncing wheels and
also transfer the wheel loadings to the body. Lateral forces also generate a
moment which, with unfavourable link arrangement, has the disadvantage of
reinforcing the roll of the body during cornering. The suspension control
arms require bushes that yield under load and can also influence the
springing. This effect is either reinforced by twisting the rubber parts in the
bearing elements, or the friction increases due to the parts rubbing together,
and the driving comfort decreases. The wheels incline with the body. The
wheel on the outside of the bend, which has to absorb most of the lateral
force, goes into a positive camber and the inner wheel into a negative camber,
which reduces the lateral grip of the tyres. To avoid this, the kinematic
change of camber needs to be adjusted to take account of this behaviour and
the body roll in the bend should be kept as small as possible. This can be
achieved with harder springs, additional anti-roll bars or a body roll centre
located high up in the vehicle.
Steering system
On passenger cars, the driver must select the steering wheel angle to keep
deviation from the desired course low. However, there is no definite
functional relationship between the turning angle of the steering wheel made
by the driver and the change in driving direction, because the correlation of
the following is not linear.
· turns of the steering wheel;
· alteration of steer angle at the front wheels;
· development of lateral tyre forces;
· alteration of driving direction.
This results from elastic compliance in the components of the chassis. To
move a vehicle, the driver must continually adjust the relationship between
turning the steering wheel and the alteration in the direction of travel. To do
so, the driver will monitor a wealth of information, going far beyond the
visual perceptive faculty (visible deviation from desired direction). These
factors would include for example, the roll inclination of the body, the feeling
of being held steady in the seat (transverse acceleration) and the self-centring
torque the driver will feel through the steering wheel. The most important
information the driver receives comes via the steering moment or torque
which provides him with feedback on the forces acting on the wheels.
It is therefore the job of the steering system to convert the steering wheel
angle into as clear a relationship as possible to the steering angle of the
wheels and to convey feedback about the vehicle’s state of movement back to
the steering wheel. This passes on the actuating moment applied by the
driver, via the steering column to the steering gear 1 which converts it into
pulling forces on one side and pushing forces on the other, these being
transferred to the steering arms 3 via the tie rods 2. These are fixed on both
sides to the steering knuckles and cause a turning movement until the
required steering angle has been reached. Rotation is around the steering axis
EG, also called kingpin inclination, pivot or steering rotation axis.
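As a minimal illustration of the purely geometric part of this relationship (ignoring the elastic compliance discussed above), the low-speed Ackermann sketch below converts a steering wheel angle into inner and outer road wheel angles. The wheelbase, track and overall steering ratio are illustrative assumptions, not values taken from this text.

import math

# Minimal low-speed (Ackermann) steering sketch; wheelbase, track and overall
# steering ratio are assumed illustrative values.
WHEELBASE_M = 2.6        # distance between front and rear axles
TRACK_M = 1.5            # lateral distance between the steered wheels
STEERING_RATIO = 16.0    # steering wheel angle / mean road wheel angle

def road_wheel_angles(steering_wheel_deg: float):
    """Return (inner, outer) road wheel angles in degrees for an ideal Ackermann layout."""
    mean = math.radians(steering_wheel_deg / STEERING_RATIO)
    if abs(mean) < 1e-9:
        return 0.0, 0.0
    radius = WHEELBASE_M / math.tan(mean)   # turn radius of the vehicle centreline
    inner = math.atan(WHEELBASE_M / (radius - TRACK_M / 2))
    outer = math.atan(WHEELBASE_M / (radius + TRACK_M / 2))
    return math.degrees(inner), math.degrees(outer)

for sw in (90, 180, 360):
    i, o = road_wheel_angles(sw)
    print(f"steering wheel {sw:3d} deg -> inner {i:5.2f} deg, outer {o:5.2f} deg")

In a real vehicle the compliance and elastokinematic effects described above are superimposed on this purely geometric relationship.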
Damper strut front axle of a VW Polo (up to 1994) with ‘steering gear’, long
tie rods and a ‘sliding clutch’ on the steering tube; the end of the tube is
stuck onto the pinion gear and fixed with a clamp. The steering arms, which
consist of two half shells and point backwards, are welded to the damper
strut outer tube. An ‘additional weight’ (harmonic damper) sits on the longer
right drive shaft to damp vibrations. The anti-roll bar carries the lower
control arm. To give acceptable ground clearance, the back of it was
designed to be higher than the fixing points on the control arms. The virtual
pitch axis is therefore in front of the axle and the vehicle’s front end is drawn
downwards when the brakes are applied.

ADVANCED CONCEPTS
Motor Technology – The ‘Centre’ of an
Electric Vehicle’s Efficiency

In the energy-starved world, the demand for energy is rising continuously, and of course its supply is not keeping pace. This issue becomes more significant as traditional sources of energy are steadily depleted and people turn to renewable energy sources. Electricity generated from renewable sources is one of the chief alternatives. In the meantime, rising fuel prices have created an opportunity for electric vehicles to come into the picture.

The world has witnessed a steady increase in the demand for electric vehicles over the last few decades, so the demand for new ideas and innovation in this field is also intensifying. The motor is fundamental to any vehicle: the basic functioning of an electric vehicle is largely governed by the power and efficiency of its motor and the supporting technology inside.

In an electric vehicle, direct current (DC) electricity is usually fed from a battery bank to a DC/AC inverter. This inverter converts the DC into alternating current (AC), which is then fed to a three-phase AC induction motor. For short-range, low-speed applications, a brushless DC motor is also used. The motor converts the electrical energy into mechanical energy, which is delivered to the wheels through the transmission and differential. It is often said that the inverter is the brain and the motor the heart of the electric vehicle.
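The battery-to-wheels path described above can be sketched as a simple chain of conversion stages. The component efficiencies in the sketch below are assumed, typical-order-of-magnitude values, not figures from this text.

# Illustrative energy path from battery to wheels in a battery-electric drivetrain.
# The component efficiencies are assumed typical values, not data from the text.
EFFICIENCY = {
    "inverter (DC -> 3-phase AC)": 0.95,
    "induction motor":             0.90,
    "transmission + differential": 0.95,
}

def wheel_power(battery_kw: float) -> float:
    """Power remaining at the wheels after each conversion stage."""
    p = battery_kw
    for stage, eta in EFFICIENCY.items():
        p *= eta
        print(f"after {stage:<30} {p:6.2f} kW")
    return p

wheel_power(50.0)   # e.g. 50 kW drawn from the battery pack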
In order to understand the potential of the electric motor, the following range of motors can be taken as examples:

· 15 kW traction motor: provides an almost unlimited maximum speed.
· 8 kW traction motor: designed for small e-vehicles with a speed limit of 45 km/h. It can be used without a fan in normal operation and is suitable for tight spaces.
· 4 kW traction motor: the ideal motor for lightweight e-vehicles for recreational use and for off-road vehicles such as golf carts or ATVs (all-terrain vehicles), and alternatively for transport vehicles that have limited space.
· 1.5 kW traction motor: typical uses for this electric motor are scooters and go-karts.

The rotor in an induction motor is specially designed with an internal magnet that creates an almost sinusoidal distribution of the magnetic flux. Because no external magnetization is required, there are no losses in the rotor through magnetization currents. The efficiency of an electric motor is judged as a function of its torque and speed. However, the heating of motors at high speed is an area that requires further innovation and development.
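Judging efficiency from torque and speed amounts to comparing mechanical output power (torque times angular speed) with the electrical input power. The operating point used below is an assumed example, not a measurement from this text.

import math

def motor_efficiency(torque_nm: float, speed_rpm: float, dc_voltage: float, dc_current: float) -> float:
    """Efficiency = mechanical power out / electrical power in (both in watts)."""
    omega = speed_rpm * 2 * math.pi / 60          # convert rpm to rad/s
    p_mech = torque_nm * omega
    p_elec = dc_voltage * dc_current
    return p_mech / p_elec

# Assumed operating point for illustration only: 30 N*m at 3000 rpm from a 96 V, 110 A supply.
eta = motor_efficiency(torque_nm=30, speed_rpm=3000, dc_voltage=96, dc_current=110)
print(f"efficiency at this operating point: {eta:.1%}")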

Manufacturers have started using permanent magnet electric motors instead of AC induction motors. The advantage is that, unlike an AC induction motor, which uses electricity to generate the magnetic field inside the motor that causes the rotor to spin, a permanent magnet motor does not require that additional current, since its magnets, made from rare-earth materials, are always 'on'. But these motors are suitable only for smaller and lighter cars; high-performance cars require greater power, which can only be produced by induction motors.

Efforts are also under way to reduce the cost of electric motors by using different materials that can reduce losses while improving performance. Another effort manufacturers have made to reduce the size of the motor is to use square copper wires instead of round ones in the stator, the stationary part of an electric motor that generates the alternating magnetic field to spin the rotor. Square wires nest more compactly and densely.

Motor designers have also used three smaller magnets in place of two larger ones, which helps to improve torque. Some innovators, such as Daido Steel, have even developed magnet materials that avoid the scarcer rare-earth elements in order to reduce cost; one example is a neodymium magnet that contains no heavy rare-earth elements but is still powerful enough for vehicle use. The aim is to develop motors with speeds as high as 30,000 rpm; at present, speeds range from about 12,000 to 18,000 rpm.
It is no exaggeration to say that much remains to be done in developing motors with higher efficiency and torque if gasoline-driven vehicles are to be replaced by all-electric vehicles.
Electromagnetic Stir Casting: An Approach to
Produce Hybrid Metal Matrix Composite
(MMC)

(Figure: Experimental Setup)

A metal matrix composite (MMC) is a composite material consisting of one metal combined with a different metal or with another material such as a ceramic or an organic compound. When at least three constituent materials are present, it is a hybrid MMC. The use of composites is increasing day by day in the defense, automobile, aviation and aerospace industries, and hybrid MMC development and characterization now play an important role in industry. Hybrid aluminium MMCs offer good mechanical properties such as hardness, tensile strength and impact strength, favourable microstructural morphology, and good tribological properties. The matrix materials for hybrid MMCs are aluminium, copper, magnesium and titanium alloys, while widely used reinforcements include Al2O3, SiC, BN, B4C, B, AlN, TiB2, graphite, etc. Electromagnetic stir casting is an effective way to produce hybrid metal matrix composites. Electromagnetic fields are often used industrially to control the flow of liquid metal: an alternating field induces eddy currents in the liquid metal which interact with the field to give a Lorentz body force that is generally rotational and therefore drives fluid motion.

In the high-frequency limit, the field is confined to a narrow layer on the surface of the conductor. Electromagnetic stirring uses the principle of a three-phase induction motor and differs from a conventional mechanical stirrer in that it is a non-contact stirrer: no part of the stirring system touches the molten metal, which is mixed with the reinforcement material by electromagnetic action.

Electromagnetic stirring therefore has the advantage of having no contact between the melt and the stirring system. To achieve the optimum properties of the hybrid MMC, the distribution of the reinforcement material in the matrix alloy should be uniform, and the wettability between these substances should be optimum. This method has an edge over other MMC fabrication methods because it gives a more uniform distribution of reinforced particles. As hybrid composite applications continue to expand, the spectrum of materials and processes employed will remain relatively wide.
Challenges and Opportunities in lithium-ion
battery technologies for electric vehicles

The automotive industry’s quest to limit its impact on the environment and
transform automotive mobility into a sustainable mode of transportation
continues at high intensity, despite the current economic crisis. The major
issue which is still haunting the electric vehicles is the technology and cost of
lithium-ion batteries. Normally, the value chain of electric vehicle batteries
consists of raw material for production, cell production, module production,
assembly of modules into the battery pack, integration of the battery pack
into the vehicle, use during the life of the vehicle, and reuse and recycling.

Lithium-ion batteries comprise a family of battery chemistries that employ various combinations of anode and cathode materials. The most prominent technologies for automotive applications are lithium-nickel-cobalt-aluminium (NCA), lithium-nickel-manganese-cobalt (NMC), lithium-manganese spinel (LMO), lithium titanate (LTO) and lithium-iron phosphate (LFP). The technology that is currently most prevalent in consumer applications is lithium-cobalt oxide (LCO), which is generally considered unsuitable for automotive applications because of its inherent safety risks. The key performance parameters for electric vehicle batteries are discussed in the succeeding paragraphs.

Safety: This is the most important criterion. The main concern is thermal runaway, in which chemical reactions triggered inside the cell exacerbate heat release and can result in a fire.
Life Span: Two ways of measuring battery life span are cycle stability and
overall age. Cycle stability is the number of times a battery can be fully
charged and discharged before being degraded to 80 percent of its original
capacity at full charge. Overall age is the number of years a battery can be
expected to remain useful.

Specific Energy and Specific Power: The specific energy of batteries is their capacity for storing energy per kilogram of weight. Specific power is the amount of power that batteries can deliver per kilogram of mass. There is a need to optimize the trade-off between specific energy and specific power.

Charging Time: It takes almost ten hours to charge a 15 kWh battery by plugging it into a standard 120-volt outlet. A 240-volt outlet with increased power (40 amps) can do it in about two hours, and charging at a commercial three-phase charging station can take as little as 20 minutes. However, these faster charging systems come at an additional cost and weight, as they require enhanced cooling systems in the vehicle.
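These charging times follow from dividing pack energy by charger power, as the rough sketch below shows. The continuous current assumed for the 120-volt outlet and the power assumed for the three-phase station are illustrative assumptions, not figures from this text; charger and battery losses are ignored.

# Rough charging-time estimate for a 15 kWh pack at the three outlet types
# mentioned above.  The 120 V outlet is assumed to supply roughly 12 A
# continuously and the three-phase station roughly 45 kW (assumptions).
BATTERY_KWH = 15.0

chargers_kw = {
    "120 V household outlet (~12 A)": 120 * 12 / 1000,    # ~1.4 kW
    "240 V outlet at 40 A":           240 * 40 / 1000,    # 9.6 kW
    "three-phase charging station":   45.0,                # assumed
}

for name, kw in chargers_kw.items():
    hours = BATTERY_KWH / kw
    print(f"{name:<35} {hours*60:6.0f} min  ({hours:.1f} h)")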

The challenge, and a tremendous opportunity, still remains with cell manufacturers. As of now, chemical companies and component producers tend to see the electric vehicle business as representing only a small percentage of their overall revenues, yet they will ultimately supply the active materials, separators and other key parts for cell manufacturing. The continued growth of the market for electric vehicles will finally depend on new battery technologies and the will of governments, as well as on driving patterns and, certainly, on the price of gasoline.
90 Degree Steering Mechanism
Let’s know more and get inspired –

Its Working Principle

This project works on the principle that two diagonally opposite wheels, i.e. the front left and the rear right, steer together to give the turning motion to the vehicle, and similarly for the other two wheels, through chain drive arrangements. In this vehicle all four wheels work independently, and the battery connected to the control unit provides the necessary power supply.

Objectives of the study –

· To obtain a large steering angle.


· To enable small sharp turns to be made smoothly without wheel
lock-up occurring even at large steering angles.
· To make it possible to change the driving rotation direction and the
amount of rotation of front and rear wheels and inner and outer wheels
in dependence upon the steering angle.
What benefits can be derived from this?

· With the 360 mode, the vehicle can quickly turn around at the press
of a button and a blip of the throttle.
· Crab mode helps simplify the lane changing procedure.
· Due to the better handling and easier 90 degree steering capability,
driver fatigue can be reduced even over long drives.
· Military reconnaissance and combat vehicles can benefit to a great extent from the 360 mode, since the 90 degree steering can be purpose built for their application and is of immense help in navigating difficult terrain.
Where to Apply it Practically-

It has a wide array of real-life applications such as –

· High speed lane changing
· High speed straight line operation
· Turning on curve
· For parking
· Junction intersection
· Parallel Parking
· Gentle curves
Thermoelectric Cooler: A new horizon in
Mechanical and Electronics Engineering

The thermoelectric cooling (TEC) effect was first discovered by the French physicist Jean Peltier, which is why such a device is also known as a Peltier cooler. The principle behind the Peltier effect is that when current passes through a circuit of dissimilar conductors, heat is either absorbed or released at the junctions of the conductors, depending on the direction (polarity) of the current. The amount of heat absorbed or released is proportional to the current flowing through the junction between the dissimilar conductors.

The Peltier cooler unit is built from a set of thermocouples, each consisting of p- and n-type semiconductor elements, or pellets, sandwiched between two ceramic plates. Pairs are combined into a module. The pellets are connected electrically in series, so that the same current flows through each, and thermally in parallel, so that heat is conducted across all of them between the plates. A heat sink is attached to the hot side of the thermoelectric module to facilitate the transfer of heat from the hot side of the module to the environment, and a cold sink is attached to the cold side of the module to facilitate heat transfer into it.

In general, when a DC voltage is applied across the device, one face cools while the other face heats up. The module itself is made up of p-type and n-type semiconductors connected electrically in series and thermally in parallel. The p-type element has a deficiency of electrons compared with its n-type counterpart, which has a surplus of electrons. Driven by the applied voltage, electrons flow from the n-type to the p-type material (and holes from the p-type to the n-type) through the electrical connectors, dropping to a lower energy state and releasing heat to the heat sink on the hot side.
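A standard textbook model of a single-stage Peltier module (not taken from this text) expresses the net heat pumped at the cold face as the Peltier term minus half the Joule heating minus the heat conducted back from the hot side. The effective Seebeck coefficient, electrical resistance and thermal conductance used below are assumed illustrative module parameters.

# Textbook single-stage Peltier module model; parameters are illustrative assumptions.
SEEBECK_V_PER_K = 0.05      # effective module Seebeck coefficient (V/K)
RESISTANCE_OHM = 2.0        # module electrical resistance
CONDUCTANCE_W_PER_K = 0.5   # module thermal conductance

def cold_side_heat(current_a: float, t_cold_k: float, t_hot_k: float) -> float:
    """Net heat absorbed at the cold face in watts (negative means no net cooling)."""
    peltier = SEEBECK_V_PER_K * current_a * t_cold_k
    joule_backflow = 0.5 * current_a**2 * RESISTANCE_OHM
    conduction_backflow = CONDUCTANCE_W_PER_K * (t_hot_k - t_cold_k)
    return peltier - joule_backflow - conduction_backflow

print(cold_side_heat(current_a=3.0, t_cold_k=285.0, t_hot_k=310.0))   # ~21 W of cooling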

Thermoelectric coolers can be of two types:

1. Single stage Thermoelectric cooler

2. Multi-stage Thermoelectric cooler

Single-stage thermoelectric coolers work over a permissible temperature difference of roughly 74–76 kelvin. To create a greater temperature difference, multi-stage thermoelectric coolers are used, and much research is going on in this area. For example, the maximum temperature difference of one serially produced multi-stage thermoelectric cooler is about 140 kelvin.

Advantage:

1. No release of harmful substances which can affect the environment.

2. The thermoelectric cooler doesn’t require any fluids or gases for cooling.

3. It doesn’t emit any noise.

4. A thermoelectric cooler is smaller and lighter than conventional alternatives.

5. A thermoelectric cooler is reliable.

6. Its cooling and heating capacity can be adjusted.

Application:

1. Military/Aerospace: Inertial guidance systems, night vision equipment, electronic equipment, cooled personal garments, portable refrigerators.

2. Consumer products: Recreational vehicle refrigerators, mobile home refrigerators, portable picnic coolers, wine and beer keg coolers, residential water coolers/purifiers.

3. Laboratory and scientific equipment: Infrared detectors, integrated circuit coolers, laboratory cold plates, cold chambers, ice point reference baths, dew point hygrometers.

4. Industrial equipment: Computer microprocessors, microprocessors in numerical control and robotics, medical instruments, hypothermia blankets, pharmaceutical refrigerators (portable and stationary), blood analysers, tissue preparation and storage, restaurant equipment, cream and butter dispensers.

5. Miscellaneous: Hotel room refrigerators, automobile mini-refrigerators, automobile seat coolers, aircraft drinking water coolers.

Performance And Cost Of Other Types Of Light-Duty Vehicles
Most of the results of OTA’s analyses of mid-size autos apply similarly, on a
percentage basis, to other auto size classes—such as subcompacts—and to
light trucks. There are, however, some interesting differences. For example,
the aerodynamics of different vehicle classes are subject to different
constraints. Subcompacts are unlikely to attain as low a drag coefficient as
mid-size vehicles because their short lengths inhibit optimum shapes for
minimum drag. Pickup trucks, with their open rectangular beds and higher ride height, have relatively poor drag coefficients, and four-wheel-drive pickups are even worse because of their large tires and higher ground clearance. And
compact vans and utility vehicles have short noses, relatively high ground
clearance, and box-type designs that restrict drag coefficients to relatively
high values. Although each vehicle type can be made more aerodynamic, it is
unlikely that light-truck drag values will decline quite so much as automobile
drag values can.

Another important difference is market-based—historically, the introduction of new technologies on light-duty trucks has typically lagged five to seven years behind their introduction in cars. Although this lag time might change, some lag is likely to persist.

Differences in the functions of the different vehicle classes will affect fuel
economy potential, as well. For example, the load-carrying function of many
light trucks demands high torque at low speed, and may demand trailer-
towing capability. The latter requirement, in particular, will constrain the type
of performance tradeoffs that might be very attractive for passenger cars
using electric or hybrid-electric powertrains.

Whereas OTA expects the business-as-usual fleet of automobiles to improve in fuel economy by about 24 percent between 1995 and 2015, the fuel economy of the light truck fleet is expected to increase a bit less than 20 percent. Prices will scale with size: for example, for hybrids, subcompact prices will increase by about 80 percent of the mid-size car’s price increment, compact vans by about 110 percent, and standard pickups by about 140 percent, reflecting the different power requirements of the various vehicle classes.

Lifecycle Costs—Will They Offset Higher Purchase Prices?


Although vehicle purchasers may tend to focus on initial purchase price more
than on operating and maintenance (O&M) costs and expected vehicle
longevity in their purchase decisions, large reductions in O&M costs and
longer lifespans may offset purchase price advantages in vehicle purchase
decisions. For example, diesel-powered vehicles typically cost more than the
same model with a gasoline engine, and often are less powerful, but are
purchased by shoppers who respect their reputation for longevity, low
maintenance, and better fuel economy, or who are swayed by diesel fuel’s
price advantage (in most European nations), or both. Proponents of advanced
vehicle technologies, especially EVs and fuel cell EVs, often cite their
claimed sharp advantages in fuel costs, powertrain longevity, and
maintenance costs as sufficient economic reasons to purchase them—aside
from their societal advantages.5

A few simple calculations show how a substantially higher vehicle purchase price may indeed be offset by lower O&M costs or longer vehicle lifetime. Assuming a 10 percent interest rate and 10-year vehicle lifetime, for example, a $1,000 increase in purchase price would be offset by a $169 per year reduction in O&M costs. Since average annual maintenance costs for gasoline vehicles are $100 for scheduled maintenance and $400 for unscheduled maintenance over the first 10 years of vehicle life, there is potentially a substantial purchase price offset if advanced vehicles can achieve very low maintenance costs. Similarly, an increase in vehicle price of about 25 percent—for example, from $20,000 to $25,000—would be offset by an increase in longevity of 5 years, assuming the less expensive vehicle would last 10 years.
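The arithmetic behind this comparison is the standard capital-recovery (annuity) formula, sketched below. With simple end-of-year payments it gives roughly $163 per year for the $1,000, 10 percent, 10-year case, slightly below the $169 quoted above, which presumably reflects OTA's own payment-timing or rounding assumptions.

# Capital-recovery sketch of the trade-off described above: what annual O&M
# saving offsets a higher purchase price over the vehicle life?
def capital_recovery_factor(rate: float, years: int) -> float:
    """Annual payment per dollar of present value, end-of-year payments."""
    return rate / (1 - (1 + rate) ** -years)

def annual_saving_needed(extra_price: float, rate: float = 0.10, years: int = 10) -> float:
    return extra_price * capital_recovery_factor(rate, years)

print(f"$1,000 extra price  -> ${annual_saving_needed(1000):.0f} per year")
print(f"$10,000 extra price -> ${annual_saving_needed(10000):.0f} per year")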

OTA’s evaluation of lifecycle costs leads to the conclusion that their influence will offset sharply higher purchase prices only under limited conditions. For example, unless gasoline prices increase substantially over time, any energy savings associated with lower fuel use or a shift to electricity will provide only a moderate offset against a high purchase price, primarily because annual fuel costs are not high in efficient conventional vehicles. In the mid-size vehicles OTA examined for 2015, at $1.50 a gallon for gasoline, the minimum savings (NiMH EV versus baseline vehicle, savings of about $400 per year—see table 1-3) would offset about $2,300 in higher purchase price for the NiMH EV. In contrast, the EV may cost as much as
$10,000 more than the baseline vehicle. Moreover, percent of the fuel cost
savings could be obtained by purchasing the mpg advanced conventional
vehicle, which costs only $1,500 more than the baseline vehicle.
Experts contacted by OTA generally agree that electric drivetrains should
experience lower maintenance costs and last longer than ICE drivetrains. The
amount of savings is difficult to gauge, however, and may not be large
because of continuing improvements in ICE drivetrains (for example, the
introduction of engines that do not require a tune-up for 100,000 miles) and
the likelihood that future electric drivetrains will undergo profound changes
from today’s, with unknown consequences for their longevity and
maintenance requirements. Moreover, battery replacement costs for EVs (and
hybrids and fuel cell EVs to a lesser extent) could offset other savings, 59
although this, too, is uncertain because it is not yet clear whether battery
development will succeed in extending battery lifetime to the life of the
vehicle. Vehicles with hybrid drivetrains may experience no O&M savings
because of their complexity. Finally, although analysts have claimed that fuel
cell vehicles will be low maintenance and long-lived, 60 the very early
development state of PEM cells demands caution in such assessments, and
we see little basis for them. In particular, fuel cells have a complex balance of
plant,61 a methanol reformer with required gas clean-up to avoid poisoning
the fuel cell’s catalysts, and a number of still-unresolved O&M-related issues
such as cathode oxidation and deterioration of membranes.
Emissions Performance
Reductions in vehicular emissions are a key goal of programs to develop
advanced technology vehicles. In California, it is the only explicit goal,
although other considerations, such as economic development, are important.
Furthermore, PNGV’s original name was the Clean Car Initiative.

The drive to ratchet down the emissions of new vehicles is highly controversial. One reason is that most vehicular emissions come from older vehicles, or relatively new vehicles whose emission controls are malfunctioning. Automakers have long argued that new control requirements that raise the price of new vehicles have the effect of slowing new vehicle sales and, thus, reducing fleet turnover—the primary source of improved fleet emissions (and fuel economy) performance. Further, there is substantial disagreement about how much new controls will cost, and thus similar disagreement about their balance of costs and benefits.

Each of the advanced vehicles examined by OTA has emission characteristics that are different from current vehicles as well as from the baseline (business-as-usual) vehicles expected to enter the fleet if there are no new incentives for significant changes in vehicle technology. A number of
changes that will yield improvements to new vehicles’ emission performance,
however, already are programmed into vehicle development programs. Both
the federal Clean Air Act and California’s Low Emission Vehicle Program
require significant improvements in the certified emission levels allowable
for new light-duty vehicles, as well as an extension of the certified “lifetime”
of required control levels from 50 thousand to 100 thousand miles. New
requirements for onboard diagnostics to alert drivers and mechanics to
problems with control systems, more stringent and comprehensive inspection
and maintenance testing (including testing for evaporative emissions), and
expansion of certification testing procedures to include driving conditions
that today cause high emission levels should ensure that actual on-road
emissions of average vehicles more closely match the new vehicle
certification emissions levels.

The Advanced Conventional vehicles will most closely resemble the baseline
vehicles’ emissions performance. By 2015, however, these vehicles will have
direct injection engines—either diesel or gasoline. These engines should have
lower cold start and acceleration enrichment-related emissions than
conventional gasoline engines. This should have a positive impact on
emissions, although new regulations should force down such emissions even
in the baseline case. A key uncertainty about emissions performance for these
vehicles is the performance of the NOX catalysts, which currently remain
under development. Another area of concern, for the diesels, is particulate
emissions performance; although new diesel designs have substantially
reduced particulate emissions, these emissions levels are still considerably
higher than those of gasoline vehicles.

The key emissions advantage of EVs is that they have virtually no vehicular
emissions regardless of vehicle condition or age—they will never create the
problems of older or malfunctioning “super-emitters,” now a significant
concern of the current fleet. Because EVs are recharged with powerplant-
generated electricity, however, EV emissions performance should be viewed
from the standpoint of the entire fuel cycle, not just the vehicle. From this
standpoint, EVs have a strong advantage over conventional vehicles in
emissions of HC and CO, because power generation produces little of these
pollutants. Where power generation is largely coal-based—as it is in most areas of the country—some net increases in sulphur dioxide might occur. However, Clean Air Act rules “cap” national powerplant emissions of sulphur oxides (SOX) at about 9 million tons per year, which limits the potential adverse effects of any large-scale increase in power generation associated with EVs. Any net advantage (or disadvantage) in NOX and particulate emissions of EVs over conventional vehicles is ambiguous, however. All fossil- and biomass-fuelled power generation facilities are significant emitters of NOX, and most are significant emitters of particulates, although there are wide variations depending on fuel, generation technology and emission controls. Analyses of the impact of EVs on NOX and particulate emissions are extremely sensitive to different assumptions about which powerplants will be used to recharge the vehicles, as well as assumptions about the energy efficiency of the EVs and competing gasoline vehicles and the likely on-road emissions of the gasoline vehicles. OTA estimates that year-2005 lead acid EVs will most likely increase net NOX on a nationwide basis, with the NiMH battery-powered vehicle about breaking even, but that the combined effect of increased NOX controls on powerplants, a continuing shift to cleaner generating sources, and increases in EV efficiency will allow the more efficient EVs in 2015 to gain a small net reduction in NOX emissions.

Hybrid vehicles have been generally considered as likely to have significantly lower emissions than conventional vehicles because of their smaller engines and the supposition that these engines would be run at constant speed and load (for series hybrids). There have been various reports of hybrids attaining very low emissions—below ultralow emission vehicle standards—in certification-type testing.

One key advantage for some hybrids will be their ability to run in an EV-
mode in cities, although their performance or range may be limited in this
mode. Other advantages are less certain, however. Hybrids will likely not run
at constant speed, although their speed and load excursions will be less than
with a conventional vehicle; they must cope with cold start and evaporative
emissions essentially similar to a conventional vehicle; and their engines may
be stopped and restarted several times during longer trips, raising concerns
about increased emissions from hot restarts. In OTA’s view, hybrid vehicles
with substantial EV range have clear emission advantages in this mode, but
advantages in normal driving are unclear.

Fuel cell vehicles will have zero emissions unless they use an onboard
reformer to process methanol or another fuel into hydrogen. Emissions from
the reformer should be extremely low in normal steady-state operation, but
there may be some concern about emissions during increased loads, or the
potential for malfunctions. In particular, the noble metal catalyst needed for
the reformer can be poisoned in the same manner as the catalyst on a gasoline
vehicle.
Safety Of Lightweight Vehicles
Several of the advanced vehicles examined by OTA will be extremely light.
For example, one of the 2015 advanced conventional vehicles weighs less
than 2,000 pounds. An examination of the basic physics of vehicle accidents
and the large U.S. database on fatal and injury-causing accidents indicates
that a substantial “down weighting” of the light-duty fleet will create some
significant safety concerns, especially during the transition period when new,
lighter vehicles mix with older, heavier ones. Any adverse safety impacts,
however, are unlikely to be nearly so severe as those that occurred as a result of changes in the size and weight composition of the new car fleet in 1970 to 1982. The National Highway Traffic Safety Administration concluded that those changes “resulted in (net) increases of nearly 2,000 fatalities and 20,000 serious injuries per year.” Many of those adverse impacts occurred
because vehicles changed in size as well as weight, however, yielding
reduced crush space, reduced track width and wheelbase (which increased the
incidence of vehicle rollovers), and so forth. Reducing weight while
maintaining vehicle size and structural integrity should have lower impacts.
The major areas of concern about vehicle “light weighting” are the following:

Passengers in lighter vehicles tend to fare much worse than the passengers in
heavier ones in collisions between vehicles of unequal weight, because heavy
vehicles transfer more momentum to lighter cars than vice-versa. During the
long transition period when older, heavier vehicles would remain in the fleet,
lightweight vehicles might fare poorly. Moreover, if the large numbers of
light trucks in the fleet do not reduce their weight proportionately, the weight
distribution of the fleet could become wider, which would cause adverse
impacts on safety.
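A simple momentum-conservation sketch shows why the lighter car fares worse: in a head-on collision, the velocity change each vehicle experiences is inversely proportional to its mass. The masses (roughly a 2,000 lb car meeting one twice as heavy) and the closing speed below are illustrative assumptions.

# Momentum-conservation sketch (perfectly plastic head-on collision): the
# velocity change of each car is inversely proportional to its mass, so the
# lighter car absorbs the larger delta-v.  Masses and closing speed are assumed.
def delta_v(m1_kg: float, m2_kg: float, closing_speed_ms: float):
    """Return (delta_v of car 1, delta_v of car 2) for a plastic head-on impact."""
    dv1 = m2_kg / (m1_kg + m2_kg) * closing_speed_ms
    dv2 = m1_kg / (m1_kg + m2_kg) * closing_speed_ms
    return dv1, dv2

light, heavy = 900.0, 1800.0          # kg
dv_light, dv_heavy = delta_v(light, heavy, closing_speed_ms=25.0)  # ~90 km/h closing speed
print(f"light car delta-v: {dv_light:4.1f} m/s, heavy car delta-v: {dv_heavy:4.1f} m/s")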

Vehicle designers must balance the need to protect passengers from deceleration forces (requiring crush zones of lower stiffness) and the need to prevent passenger compartment intrusion (requiring a high-strength, high-stiffness structure surrounding the passengers). Lighter vehicles will have
lower crash energy in barrier crashes or crashes into vehicles of similar
weight, so they will require a softer front structure than a heavier vehicle to
obtain the same degree of crush (and same protection against deceleration
forces) in otherwise similar crashes (e.g., barrier crashes at the same
velocity). Designing large, lightweight vehicles with soft structures that have
acceptable ride and handling characteristics (structural stiffness is desirable
for obtaining good ride and handling characteristics) and are protective
against passenger compartment intrusion may be a challenge to vehicle
designers. Additionally, the differential needs for stiffness among lighter and
heavier vehicles may cause compatibility problems in multi-vehicle crashes.

In collisions with roadside obstacles, lighter vehicles have less chance than a
heavier vehicle of deforming the obstacle or even running through it, both of
which would decrease deceleration forces on the occupants. Also, a
substantial decrease in average vehicle weight might cause compatibility
problems with current designs of safety barriers and breakaway roadside
devices (e.g., light poles), which are designed for a heavier fleet.

If weight reductions are achieved by shifting to new materials, vehicle designers may need considerable time to regain the level of modelling expertise currently available in designing steel vehicles for maximum safety.

There exist several safety design improvements that could mitigate any
adverse effects caused by large fleetwide weight reductions—though, of
course, such measures could improve fleet safety at any weight. Examples
include external air bags deployed by radar sensing of impending accidents;
accident avoidance technology such as automatic braking; and improvements
in vehicle restraint systems (including faster acting sensors and “smart”
airbags that can adjust to accident conditions and occupant characteristics).
The latter would greatly benefit from further biomechanical research to
improve our understanding of accident injury mechanisms.

Large fleet weight reductions also will intensify the need for the National
Highway Traffic Safety Administration to examine carefully its array of crash
tests for vehicles, to ensure that these tests provide incentives to maximize
vehicle-to-vehicle compatibility in crashes.

A Note About Costs And Prices


The price of advanced technologies is a controversial aspect of the continuing
debate over the merits of several government actions promoting such
technologies. These actions range from the alternative fuel vehicle
requirements of the federal Energy Policy Act of 1992 to California’s ZEV
requirements to federal funding (in concert with industry) of PNGV. OTA’s
estimates of retail price differentials for advanced conventional vehicles are
somewhat below industry estimates, while estimates for hybrid, fuel cell, and
electric vehicles seem to be above some others prepared by advocacy groups.
Part of the difference between OTA’s estimates and others undoubtedly
reflects the substantial uncertainty that underlies any efforts to predict future
prices of new technologies. Other differences arise from the following
sources:

· OTA’s relatively low incremental prices for advanced conventional vehicles rest partly on our assumption that the advanced technologies are competing with baseline technologies that are new models with newly designed assembly lines; the baseline vehicles are not simply continued production of an existing technology whose investment costs may have been fully amortized.

· OTA’s relatively high prices for hybrid, fuel cell, and electric
vehicles reflect in part OTA’s assumption that these vehicles are
competitive in performance with the baseline, conventional vehicles;
other estimates often reflect lesser performing vehicles, which our
analysis concludes would be considerably less expensive.

· Another source of price differences is OTA’s assumption that vehicle prices must include costs and manufacturer/dealer profits beyond the manufacturing costs; some other vehicle price estimates do not reflect these additional costs.
Spark Ignition and Diesel Engines
Spark Ignition Engines
Although spark ignition (SI) engines have been the dominant passenger car
and light truck powerplant in the United States for many decades, there are
several ways to achieve additional improvements in efficiency---either
through wider use of some existing technologies or by introduction of
advanced technologies and engine concepts. Some key examples of improved
technology, most having some current application, are:

· Advanced electronic controls; improved understanding of combustion processes. Improved thermodynamic efficiency through improved spark timing, increased compression ratios, and faster combustion.

· Use of lightweight materials in valves, valve springs, and pistons; advanced coatings on pistons and ring surfaces; improved lubricants. Reduced mechanical friction.

· Increased number of valves per cylinder (up to five), variable timing for valve opening, deactivating cylinders at light loads, variable tuning of intakes to increase intake pressure. Reduced “pumping losses” caused by throttling the flow of intake air to reduce power output.

Combining the full range of improvements in a conventional engine can yield fuel economy improvements of up to 15 percent from a baseline four-valve engine.

Besides improvements in engine components, new engine concepts promise additional benefits. The highest level of technology refinement for SI engines
is the direct injection stratified charge (DISC) engine. DISC engines inject
fuel directly into the cylinder rather than premixing fuel and air, as
conventional engines do; the term “stratified charge” comes from the need to
aim the injected fuel at the spark plug, so the fuel-air mixture in the cylinder
is highly nonuniform. DISC engines are almost unthrottled; power is reduced
by reducing the amount of fuel injected, not the amount of air. As a result,
these engines have virtually no throttling loss and can operate at high
compression ratios (because not premixing the fuel and air avoids premature
ignition). DISC engines have been researched for decades without successful
commercialization, but substantial improvements in fuel injection technology
and in the understanding and control of combustion, and a more optimistic
outlook for nitrogen oxide (NOX) catalysts that can operate in an oxygen-rich
environment make the outlook for such engines promising. The estimated
fuel economy benefit of a DISC engine coupled with available friction-
reduction technology and variable valve timing ranges from 20 to 33 percent,
compared to a baseline four-valve engine.

Diesel Engines
Automakers can achieve a substantial improvement in fuel economy by
shifting to compression ignition (diesel) engines. Diesels are more efficient
than gasoline engines for two reasons. First, they use compression ratios of
16:1 to 24:1 versus the gasoline engine’s 10:1 or so, which allows a higher
thermodynamic efficiency. Second, diesels do not experience the pumping
loss characteristic of gasoline engines because they do not throttle their intake
air; instead, power is controlled by regulating fuel flow alone. Diesels have
much higher internal friction than gasoline engines, however, and they are
heavier for the same output.

Diesels are not popular in the U.S. market because they generally have been
noisier, more prone to vibration, more polluting, and costlier than comparable
gasoline engines. Although they have low hydrocarbon (HC) and carbon
monoxide (CO) emissions, they have relatively high NOX and particulate
emissions.

The latest designs of diesel engines recently unveiled in Europe are far
superior to previous designs. Oxidation catalysts and better fuel control have
substantially improved particulate emission performance. Four-valve per
cylinder design and direct injection have separately led to better fuel
economy, higher output per unit weight, and lower emissions—though NOX
emissions are still too high. Compared with a current gasoline engine, the four-valve indirect injection design will yield about a 25 percent mpg gain (about a 12 percent gain on a fuel energy basis), while the direct injection (DI) design may yield as much as a 40 percent gain (a 30 percent fuel energy gain).
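The gap between the mpg gain and the fuel-energy gain arises because a gallon of diesel contains roughly 11 to 12 percent more energy than a gallon of gasoline, so part of the mpg improvement simply reflects the denser fuel. The heating values in the sketch below are typical published figures, assumed here for illustration rather than taken from this text.

# Why a 25 percent mpg gain is only ~12 percent on a fuel-energy basis.
GASOLINE_BTU_PER_GAL = 115_000   # typical assumed value
DIESEL_BTU_PER_GAL = 128_700     # typical assumed value

def energy_basis_gain(mpg_gain: float) -> float:
    """Convert a fractional mpg gain for a diesel into a fuel-energy-basis gain."""
    energy_per_mile_ratio = (DIESEL_BTU_PER_GAL / GASOLINE_BTU_PER_GAL) / (1 + mpg_gain)
    return 1 / energy_per_mile_ratio - 1

print(f"25% mpg gain -> {energy_basis_gain(0.25):.0%} energy-basis gain")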

The new diesels are likely to meet California’s LEV standards for HC, CO,
and particulate, but will continue to require a NOX waiver to comply with
emission requirements. Although the four-valve design and other innovations
(e.g., improved exhaust gas recirculation and improved fuel injection) will
improve emissions performance and may allow compliance with federal Tier
1 standards, LEV standards cannot be met without a NOX reduction catalyst.
Although manufacturers are optimistic about such catalysts for gasoline
engines, they consider a diesel catalyst to be a much more difficult challenge.
Battery Technologies
The battery is the critical technology for electric vehicles, providing both
energy and power storage. Unfortunately, the weak link of batteries has been their low energy storage capacity—on a weight basis, lower than gasoline by a factor of 100 to 400. Power capacity may also be a problem, especially for
some of the higher temperature and higher energy batteries. In fact, power
capacity is the more crucial factor for hybrid vehicles, where the battery’s
major function is to be a load leveler for the engine, not to store energy.
Aside from increasing energy and power storage, other key goals of battery
R&D are increasing longevity and efficiency and reducing costs.

Numerous battery types are in various stages of development. Although there are multiple claims for the efficacy of each type, there is a large difference
between the performance of small modules or even full battery packs under
nondemanding laboratory tests, and performance in the challenging
environment of actual vehicle service or tests designed to duplicate this
situation. Although the U.S. Advanced Battery Consortium is sponsoring
such tests, the key results are confidential, and much of the publicly available
information comes from the battery manufacturers themselves, and may be
unreliable. Nevertheless, it is quite clear that a number of the batteries in
development will prove superior to the dominant conventional lead acid battery, though at a higher purchase price. Promising candidates include advanced lead acid (e.g., woven-grid semi-bipolar and bipolar) with specific energy of 35 to 50 Wh/kg, specific power of 200 to 900 W/kg, and claimed
lifetimes of five years and longer; nickel metal hydride with 80 Wh/kg and
200 W/kg specific energy and power, and claimed very long lifetimes;
lithium polymer, considered potentially to be an especially “EV friendly”
battery (they are spillage proof and maintenance free), that claims specific
energy and power of 200 or more Wh/kg and 100 or more W/kg; lithium-ion,
which has demonstrated specific energy of 100 to 110 Wh/kg; and many
others. The claimed values of battery lifetime in vehicle applications should
be considered extremely uncertain. With the possible exception of some of
the very near-term advanced lead acid batteries, each of the battery types has
significant remaining challenges to commercialization—high costs, corrosion
and thermal management problems, gas build-up during charging, and so
forth. Further, the history of battery commercialization demonstrates that
bringing a battery to market demands an extensive probationary period: once
a battery has moved beyond the single cell stage, it will require a testing time
of nearly a decade or more before it can be considered a proven production
model.
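The specific energy and specific power figures quoted above can be turned into a rough pack-sizing sketch: the pack must be heavy enough to satisfy both the energy (range) and the power (acceleration) requirement, and the larger of the two governs. The range, consumption and peak-power figures below, and the specific power assumed for lithium-ion, are illustrative assumptions rather than values from this text.

# How specific energy and specific power translate into pack mass.
def pack_mass_kg(range_km: float, wh_per_km: float, peak_kw: float,
                 specific_energy_wh_kg: float, specific_power_w_kg: float) -> float:
    """Pack must satisfy both the energy (range) and the power (acceleration) need."""
    mass_for_energy = range_km * wh_per_km / specific_energy_wh_kg
    mass_for_power = peak_kw * 1000 / specific_power_w_kg
    return max(mass_for_energy, mass_for_power)

for name, (wh_kg, w_kg) in {
    "advanced lead acid":   (45, 400),
    "nickel metal hydride": (80, 200),
    "lithium-ion":          (105, 300),   # specific power here is an assumption
}.items():
    m = pack_mass_kg(range_km=150, wh_per_km=150, peak_kw=75,
                     specific_energy_wh_kg=wh_kg, specific_power_w_kg=w_kg)
    print(f"{name:<22} ~{m:5.0f} kg pack")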

Non-battery Energy Storage: Ultracapacitors and Flywheels


Ultracapacitors

Ultracapacitors are devices that can directly store electrical charges—unlike batteries, which store electricity as chemical energy. A variety of ultracapacitor materials and designs are being investigated, but all share some basic characteristics—very high specific power, greater than 1 kW/kg, coupled
with low specific energy. The U.S. Department of Energy mid-term goal is
only 10 Wh/kg (compared to the U.S. Advanced Battery Consortium midterm
battery goal of 100 Wh/kg). Other likely ultracapacitor characteristics are
high storage efficiency and long life.

Ultracapacitors’ energy and power characteristics define their role. In electric vehicles, their high specific power can be used to absorb the strong power
surges of regenerative braking, to provide high power for brief spurts of
acceleration, and to smooth out any rapid changes in power demand from the
battery in order to prolong its life. In hybrids, they theoretically could be used
as the energy storage mechanism; however, their low specific energy limits
their ability to provide a prolonged or repeatable power boost. Increasing
ultracapacitors’ specific energy is a critical research goal.
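The limitation becomes clear from a rough calculation of how long a power-sized ultracapacitor pack can sustain full output, using the 10 Wh/kg goal and roughly 1 kW/kg quoted above; the pack mass is an assumed illustrative value.

# How long can an ultracapacitor pack sustain a power boost?
PACK_MASS_KG = 20.0
SPECIFIC_ENERGY_WH_KG = 10.0     # DOE mid-term goal quoted above
SPECIFIC_POWER_KW_KG = 1.0       # "greater than 1 kW/kg"

energy_wh = PACK_MASS_KG * SPECIFIC_ENERGY_WH_KG       # 200 Wh stored
power_kw = PACK_MASS_KG * SPECIFIC_POWER_KW_KG         # 20 kW available
boost_seconds = energy_wh / (power_kw * 1000) * 3600   # time at full power
print(f"{energy_wh:.0f} Wh stored, {power_kw:.0f} kW available -> {boost_seconds:.0f} s at full power")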

Flywheels

A flywheel stores energy as the mechanical energy of a rapidly spinning mass, which rotates on virtually frictionless bearings in a near-vacuum
environment to minimize losses. The flywheel itself can serve as the rotor of
an electrical motor/generator, so it can turn its mechanical energy into
electricity or vice versa, as needed. Like ultracapacitors, flywheels have very
high specific power ratings and relatively low specific energy, though their
energy storage capacity is likely to be higher than ultracapacitors.
Consequently, they may be more practical than ultracapacitors for service as
the energy storage mechanism in a hybrid. In fact, the manufacturer of the
flywheel designed for Chrysler’s Patriot race car, admittedly a very expensive
design, claims a specific energy of 73 Wh/kg, which would make the
flywheel a very attractive hybrid storage device. Mass-market applications
for flywheels depend on solving critical rotor manufacturing issues, and, even
if these issues were successfully addressed, it is unclear whether mass-
produced flywheels could approach the Patriot flywheel’s specific energy
level.
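For a thin-rim flywheel the stored energy per kilogram is roughly half the square of the rim speed, which gives a feel for the rotor speeds involved. The rim speeds below are assumed values chosen only to show the order of magnitude needed to approach the roughly 73 Wh/kg quoted for the Patriot flywheel.

import math  # not strictly needed here, kept for extending the sketch to I*omega^2 forms

# Thin-rim approximation: specific energy = 0.5 * v^2, converted from J/kg to Wh/kg.
def specific_energy_wh_per_kg(rim_speed_m_s: float) -> float:
    return 0.5 * rim_speed_m_s ** 2 / 3600.0

for v in (400, 550, 725):
    print(f"rim speed {v:4d} m/s -> {specific_energy_wh_per_kg(v):5.1f} Wh/kg")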
Technologies for Advanced Vehicles
Performance and Cost Expectations
This chapter discusses the technical potential and probable costs of a range of
advanced vehicle technologies that may be available for commercialization
by 2005 and 2015 (or earlier). As noted, projections of performance and cost
can be highly uncertain, especially for technologies that are substantially
different from current vehicle technologies and for those that are in a fairly
early stage of development. In addition, although substantial testing of some technologies has occurred—for example, the Advanced Battery Consortium has undertaken extensive testing of new battery technologies through the Department of Energy’s national laboratories—the results are often confidential, and were unavailable to the Office of Technology Assessment
(OTA). Nevertheless, there is sufficient available data to draw some
preliminary conclusions, to identify problem areas, and to obtain a rough idea
of what might be in store for the future automobile purchaser, if improving
fuel economy were to become a key national goal.

The chapter discusses two groupings of technologies:

1. Technologies that reduce the tractive forces that a vehicle must overcome (a rough sketch of these forces follows this list), from inertial forces associated with the mass of the vehicle and its occupants, the resistance of the air flowing by the vehicle, and rolling losses from the tires (and related components); and

2. Technologies that improve the efficiency with which the vehicle transforms fuel (or electricity) into motive power, such as by improving engine efficiency, shifting to electric drivetrains, reducing losses in transmissions, and so forth.
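The tractive forces in the first grouping can be summarized in the standard road-load relation: inertia plus aerodynamic drag plus tyre rolling resistance. All vehicle parameters in the sketch below are assumed illustrative values, not data from this text.

# Standard road-load sketch for the tractive forces listed in item 1 above.
AIR_DENSITY = 1.2          # kg/m^3
MASS_KG = 1400.0           # vehicle plus occupants (assumed)
DRAG_COEFF = 0.30
FRONTAL_AREA_M2 = 2.0
ROLLING_RESIST = 0.009     # tyre rolling-resistance coefficient
G = 9.81

def tractive_force_n(speed_m_s: float, accel_m_s2: float = 0.0) -> float:
    inertial = MASS_KG * accel_m_s2
    aero = 0.5 * AIR_DENSITY * DRAG_COEFF * FRONTAL_AREA_M2 * speed_m_s ** 2
    rolling = ROLLING_RESIST * MASS_KG * G
    return inertial + aero + rolling

for kmh in (50, 100, 130):
    v = kmh / 3.6
    f = tractive_force_n(v)
    print(f"{kmh:3d} km/h steady state: {f:6.0f} N, power {f * v / 1000:5.1f} kW")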

Technologies that reduce energy needs for accessories, such as for heating and cooling, can also play a role in overall fuel economy—especially for electric vehicles—but are not examined in depth here. Some important
technologies include improved window glass to reduce or control solar heat
input and heat rejection; technologies for spot heating and cooling; and
improved heat pump air conditioning and heating.
Weight Reduction With Advanced Materials And Better Design
Weight reduction has been a primary component of efforts to improve
automobile fuel economy during the past two decades. Between 1976 and
1982, in response to federal Corporate Average Fuel Economy (CAFE)
regulations, automakers managed to reduce the weight of the steel portions of
the average auto from 2,279 to 1,753 pounds by downsizing the fleet and
shifting from body-on-frame to unibody designs. Future efforts to reduce vehicle weights will focus both on material substitution—the use of aluminum, magnesium, plastics, and possibly composites in place of steel—and on optimization of vehicle structures using more efficient designs.

Although there is widespread agreement that improved designs will play a significant role in weight reduction, there are several views about the role of
new materials. On the one hand, a recent Delphi study based on interviews
with auto manufacturers and their suppliers projects that the vehicle of 2010
will be composed of materials remarkably similar to today’s vehicles.2 At the
other extreme, some advocates claim that the use of strong, lightweight
polymer composites such as those currently used in fighter aircraft, sporting
goods, and race cars, coupled with other reductions in tractive loads and
downsized powertrains, will soon allow total weight reductions of 65 percent
to 75 percent.3 The factors that influence the choices of vehicle materials and
design are discussed below.

Vehicle Design Constraints

The most important element in engineering design of a vehicle is past experience. Vehicle designs almost always start with a consideration of past
designs that have similar requirements. Designers rarely start from “blank
paper,” because it is inefficient for several reasons:

· Time pressure. Automakers have found that, as with so many other industries, time to market is central to market competitiveness. While
tooling acquisition and facilities planning are major obstacles to
shortening the development cycle, they tend to be outside the direct
control of the automaker. Design time, however, is directly under the
control of the automaker, and reduction of design time has, therefore,
been a major goal of vehicle development.

· Cost pressures. The reuse of past designs also saves money. In addition to the obvious time savings above, the use of a proven design
means that the automaker has already developed the necessary
manufacturing capability (either in-house or through purchasing
channels). Furthermore, because the established component has a
known performance history, the product liability risk and the warranty
service risk is also much reduced.

· Knowledge limitations. Automakers use various analytical methods (e.g., finite element codes) to calculate the stresses in a
structure under specified loading. They have only a rough idea,
however, of what the loads are that the structure will experience in
service. Thus, they cannot use their analytical tools to design the
structure to handle a calculated limiting load. Given this limitation, it is
far more efficient to start with a past design that has proven to be
successful, and to modify it to meet the geometric limitations of the new
vehicle. The modified design can then be supported with prototyping
and road testing.

This normative design process has been central to automobile design for
decades. Although it has generally served the automakers well, it also has
some limitations. In particular, this strategy is unfriendly to innovations such
as the introduction of new materials in a vehicle design. The advantages of a
new material stem directly from the fact that it offers a different combination
of performance characteristics than does a conventional material. If the
design characteristics are specified in terms of a past material, however, that
material will naturally emerge as the “best” future material for that design. In
other words, if a designer says, “Find me a material that is at least as strong
as steel, at least as stiff as steel, with the formability of steel, and costing no
more than steel for this design that I derived from a past steel design,” the
obvious materials choice is steel.
Materials Selection Criteria
Five key factors affect the auto designer’s selection of materials:
manufacturability and cost, performance, weight, safety, and recyclability.

Manufacturability and Cost


A typical mid-size family car costs about $5 per pound on the dealer’s lot,
and about $2.25 per pound to manufacture. Of the manufacturing cost, about
$1.35 goes to labor and overhead, and $0.90 for materials, including scrap.4
The reason cars are so affordable is that steel sheet and cast iron, the
dominant materials, cost only $0.35 to 0.55 per pound. Advocates of
alternative materials such as aluminum and composites are quick to point out,
however, that the per-pound cost of materials is not the proper basis for
comparison, but rather the per-part cost for finished parts. Although they may
have a higher initial cost, alternative materials may offer opportunities to
reduce manufacturing and finishing costs through reduced tooling, net shape
forming, and parts consolidation. In addition, a pound of steel will be
replaced by less than a pound of lightweight material. Nevertheless, the cost
breakdown given above suggests that, if finished parts made with alternative
materials cost much more than $1.00 per pound, overall vehicle
manufacturing costs will rise significantly.5 This severe constraint will be
discussed later.

For comparison, the per-pound and per-part costs of alternative materials


considered in this study are given in table 3-1, along with the expected
weight savings achieved by making the substitution. On a per-pound basis,
glass fiber-reinforced polymers (FRP), aluminum, and graphite FRP cost
roughly 3 times, 4 times, and 20 times as much as carbon steel, respectively.6
Because these materials are less dense than steel, however, fewer pounds are
required to make an equivalent part, so that, on a part-for-part basis, the
difference in raw materials cost relative to steel is 1.5 times, 2 times, and 5
times, respectively. High-strength steel costs 10 percent more per pound than
ordinary carbon steel, but 10 percent less is required to make a part, so, on a
part basis, the two have roughly equivalent cost.
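
The per-pound versus per-part comparison above can be reproduced with simple
arithmetic. The short Python sketch below uses the per-pound cost multipliers
quoted in the text; the weight-replacement factors are illustrative assumptions
chosen only so the per-part ratios land near the cited 1.5x, 2x, and 5x figures.

    # Rough raw-material cost comparison against carbon steel.
    steel_cost_per_lb = 0.45  # mid-range of the $0.35 to $0.55/lb quoted for steel

    materials = {
        # name: (cost per lb relative to steel, lb needed per lb of steel replaced)
        # The second value in each pair is an assumed weight-replacement factor.
        "high-strength steel": (1.10, 0.90),
        "glass FRP":           (3.0,  0.50),
        "aluminum":            (4.0,  0.50),
        "graphite FRP":        (20.0, 0.25),
    }

    for name, (cost_mult, weight_factor) in materials.items():
        per_lb = steel_cost_per_lb * cost_mult
        per_part_ratio = cost_mult * weight_factor  # finished-part material cost vs. steel
        print(f"{name:20s} ~${per_lb:5.2f}/lb  ~{per_part_ratio:.1f}x steel per part")

On this basis, high-strength steel comes out roughly cost-neutral on a per-part
basis, consistent with the statement above.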
Manufacturing costs
As with any mass production industry, cost containment/reduction (while
maintaining equivalent performance) is a dominant feature of the materials
selection process for automotive components. Customarily, this objective has
focused the automobile designer upon a search for one-to-one substitutes for
a particular part, where a material alternative can provide the same
performance for lower cost. More recently, the focus has broadened to
include subassembly costs, rather than component costs, which has enabled
consideration of materials that are initially more expensive, but may yield
cost savings during joining and assembly. Manufacturers can also reduce
costs by shifting production of complex subassemblies (such as dashboards,
bumpers, or door mechanisms) to suppliers who can use less expensive labor
(i.e., non-United Auto Worker labor) to fabricate components that are then
shipped to assembly plants.

Thus, the manufacturer’s calculation of the cost of making a materials change


also depends on such factors as tooling costs, manufacturing rates, production
volumes, potential for consolidation of parts, scrap rates, and so forth. For
example, the competition between steel and plastics is discussed not only in
terms of the number of units processed, but also the time period over which
these parts will be made. Because the tooling and equipment costs for plastic
parts are less than those for steel parts, low vehicle production volumes
(50,000 per year or less) and short product lifetimes lead to part costs that
favor plastics, while large production runs and long product lifetimes favor
steel. As automakers seek to increase product diversity, rapid product
development cycles and frequent styling changes have become associated
with plastic materials, although the steel industry has fought this
generalization. Nevertheless, styling elements like fascias and spoilers are
predominantly plastic, and these elements are among the first ones redesigned
during product facelifts and updates.

Life Cycle Costs


The total cost of a material over its entire life cycle (i.e., manufacturing costs,
costs incurred by customers after the vehicle leaves the assembly plant, and
recycling costs) may also be a factor in materials choices. For example, a
material that has a higher first cost may be acceptable, if it results in savings
over the life of the vehicle through increased fuel economy, lower repair
expense, and so forth. However, this opportunity is rather limited. For
instance, at gasoline prices of $1.20 per gallon, fuel cost savings owing to
extensive substitution of a lightweight material such as aluminum might be
$580 over 100,000 miles of driving--about $1 per pound of weight saved.
These savings are insufficient to justify the added first cost of the aluminum-
intensive vehicle (perhaps as much as $1,500, see below). Moreover,
manufacturers are generally skeptical about the extent to which customers
take life cycle costs into consideration in making purchasing decisions.
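
As a rough consistency check on the figures above, the fuel and weight savings
can be worked backward in a few lines of Python. The baseline fuel economy used
here is an assumption introduced only to make the change in consumption visible;
it is not a number from the text.

    gas_price = 1.20          # $/gallon, from the text
    fuel_savings = 580.0      # $ over 100,000 miles, from the text
    miles = 100_000.0

    gallons_saved = fuel_savings / gas_price        # about 483 gallons
    weight_saved_lb = fuel_savings / 1.0            # at roughly $1 saved per pound removed

    baseline_mpg = 27.0                             # assumed baseline, not from the text
    baseline_gal = miles / baseline_mpg
    new_mpg = miles / (baseline_gal - gallons_saved)

    print(f"gallons saved: {gallons_saved:.0f}")
    print(f"implied weight removed: {weight_saved_lb:.0f} lb")
    print(f"fuel economy: {baseline_mpg:.0f} -> {new_mpg:.1f} mpg (assumed baseline)")

Even under these favorable assumptions, the saving is small next to an added
first cost on the order of $1,500, which is the point made above.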

Materials choices also influence the cost of recycling or disposing of the


vehicle, though these costs are not currently borne by either the manufacturer
or consumer. This situation could change in the near future, however, with
increasing policy emphasis on auto recycling around the world (see recycling
section below).

Manufacturability
Steel vehicles are constructed by welding together body parts that have been
stamped from inexpensive steel sheet materials. Over the years, this process
has been extensively refined and optimized for high speed and low cost. Steel
tooling is expensive: an individual die can cost over $100,000, and with
scores of dies for each model, total tooling costs may be several tens of
millions of dollars per vehicle. A stamped part can be produced every 17
seconds, however, and with production volumes of 100,000 units or more,
per-part costs are kept low.
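
A quick amortization, sketched below in Python, shows why high production
volumes keep per-part costs low despite expensive dies. The total tooling bill
and the model life are assumptions chosen within the ranges mentioned above.

    tooling_cost = 30_000_000      # $, assumed within "several tens of millions" per model
    units_per_year = 100_000
    model_life_years = 5           # assumed production life

    tooling_per_vehicle = tooling_cost / (units_per_year * model_life_years)
    parts_per_hour = 3600 / 17     # one stamped part every 17 seconds, from the text

    print(f"tooling cost per vehicle: ~${tooling_per_vehicle:.0f}")   # about $60
    print(f"stamping throughput: ~{parts_per_hour:.0f} parts per hour")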

Aluminium-intensive vehicles have been produced by two methods: by


stamping and welding of aluminum sheet to form a unibody structure (a
process parallel to existing steel processes); and by constructing a “space
frame” in which extruded aluminum tubes are inserted like tinker toys into
cast aluminum nodes, upon which a sheet aluminum outer skin can be placed.

An advantage of the stamped aluminum unibody approach is that existing


steel presses can be used with modified tooling, which keeps new capital
investment costs low for automakers and permits large production volumes.
Ford used this method to produce a test fleet of 40 aluminum-bodied Sables;
as did Chrysler in the production of a small test fleet of aluminum Neons.8
The Honda NSX production vehicle was also fabricated by this method.

The aluminum space frame approach was pioneered by Audi in the A8, the
result of a 10-year development program with Alcoa. Tooling costs are
reportedly much less than sheet-stamping tools, but production volumes are
inherently limited; for example, the A8 is produced in volumes of about
15,000 units per year. Thus, per-part tooling costs for space frames may not
be much different from stamped unibodies.

Manufacturability is a critical issue for using composites in vehicle bodies,


particularly in load-bearing structures.9 Although composite manufacturing
methods exist that are appropriate for aircraft or aerospace applications
produced in volumes of hundreds or even thousands of units per year, no
manufacturing method for load-bearing structures has been developed that is
suitable to the automotive production environment of tens or hundreds of
thousands of units per year.

The most promising techniques available thus far appear to be liquid molding
processes, in which a fiber reinforcement “preform” is placed in a closed,
part-shaped mold and liquid resin is injected.10 The resin must remain fluid
long enough to flow throughout the mold, thoroughly wetting the fibers and
filling in voids between the fibers. It must then “cure” rapidly into a solid
structure that can be removed from the mold so that the process can be
repeated. A vehicle constructed from polymer composites might be built with
a continuous glass FRP or carbon FRP structure made by liquid molding
techniques, with chopped fiber composite skin and closure panels made by
stamping methods.

Liquid molding can be used to make entire body structures in large,


integrated sections: as few as five moldings could be used to construct the
body compared with the conventional steel construction involving several
hundred pieces. However, a number of manufacturing issues must be
resolved, especially demonstrating that liquid molding can be accomplished
with fast cycle times (ideally 1 per minute) and showing that highly reliable
integrated parts can be produced that meet performance specifications.
Suitable processes have yet to be invented, which is the principal reason that
the composite vehicle is used in the 2015 “optimistic” scenario. At present,
manufacturing rates for liquid molding processes are much slower than steel
stamping rates (roughly 15 minutes per part for liquid molding, 17 seconds
for steel), so that order of magnitude improvements in the speed of liquid
molding will be necessary for it to be competitive.
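
The gap between the two processes can be made concrete with a small throughput
sketch; the annual operating hours are an assumption used only for scale.

    molding_cycle_s = 15 * 60    # seconds per liquid-molded part, from the text
    stamping_cycle_s = 17        # seconds per stamped steel part, from the text
    operating_hours = 4_000      # assumed two-shift operation per year

    slowdown = molding_cycle_s / stamping_cycle_s
    molded_per_year = operating_hours * 3600 / molding_cycle_s
    target_per_year = operating_hours * 60          # at the ~1 part/minute target

    print(f"liquid molding is ~{slowdown:.0f}x slower per part than stamping")
    print(f"one mold today: ~{molded_per_year:,.0f} parts/yr; "
          f"at the target cycle: ~{target_per_year:,.0f} parts/yr")

At today's cycle times a single mold supports only a low-volume program, which
is why an order-of-magnitude improvement is needed before liquid molding can
serve mainstream production volumes.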

While advocates of automotive composites point to the General Motors (GM)


Ultralite as an example of what can be achieved with composites, in some
ways this example is misleading. First, the Ultralite was manufactured using
the painstaking composite lay-up methods borrowed from the aerospace
industry, which are far too slow to be acceptable in the automotive industry.
Second, the Ultralite body cost $30 per pound in direct materials alone
(excluding manufacturing costs). This is at least an order of magnitude too
high for an automotive structural material.
Aerodynamic Drag Reduction
The aerodynamic drag force is the resistive force of the air as the vehicle tries
to push its way through it. The power required to overcome the aerodynamic
drag force increases with the cube of vehicle speed,32 and the energy/mile
required varies with the square of speed. Thus, aerodynamic drag principally
affects highway fuel economy. Aside from speed, aerodynamic drag depends
primarily on the vehicle’s frontal area, its shape, and the smoothness of its
body surfaces. The effect of the vehicle’s shape and smoothness on drag is
characterized by the vehicle drag coefficient CD, which is the
nondimensional ratio of the drag force to the dynamic pressure of the wind on
an equivalent area. Typically, a 10 percent CD reduction will result in a 2 to
2.5 percent improvement in fuel economy, if the top gear ratio is adjusted for
constant highway performance. The same ratio holds for a reduction in
frontal area, although the potential for such reductions is limited by interior
space requirements.
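
These relationships can be written down directly. The Python sketch below
evaluates the drag power P = 1/2 * rho * CD * A * v^3; the frontal area and air
density are assumed values for a generic mid-size sedan, not figures from the
text.

    def drag_power_kw(cd, frontal_area_m2, speed_kmh, air_density=1.2):
        """Power (kW) needed to overcome aerodynamic drag."""
        v = speed_kmh / 3.6                                        # m/s
        force = 0.5 * air_density * cd * frontal_area_m2 * v ** 2  # newtons
        return force * v / 1000.0

    area = 2.0  # m^2, assumed frontal area
    for cd in (0.45, 0.33, 0.25):
        print(f"CD={cd:.2f}: {drag_power_kw(cd, area, 100):.1f} kW at 100 km/h, "
              f"{drag_power_kw(cd, area, 120):.1f} kW at 120 km/h")

Because drag power scales with the cube of speed, the rule of thumb above (a 10
percent CD reduction buying roughly 2 to 2.5 percent fuel economy) applies
mainly to highway driving.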

The CD of most cars sold in the United States in 1994 and 1995 is between
0.30 and 0.35, and the best models are at 0.29. In contrast, CD for most cars
in 1979 to 1980 was between 0.45 and 0.50. The pace of drag reduction has
slowed considerably during the mid-1990s, and automakers claim that the
slowdown reflects the difficulty of reducing CD values much below 0.30 for
a typical mid-size sedan. Meanwhile, however, highly aerodynamic
prototypes have been displayed at motor shows around the world. Interesting
historical examples include the Chevrolet Citation IV with a CD of 0.18, and
the Ford Probe IV with a CD of 0.15, which is the lowest obtained by a
functional automobile.34 (See figure 3-1.)

In interviews, manufacturers pointed out that these prototypes are design


exercises that have features that may make them unsuitable for mass
production or unacceptable to consumers. Such features include very low,
sloping hoods that restrict engine space and suspension strut heights.
Windshields typically slope at 65 degrees or more from vertical, resulting in a
large glass area that increases weight and cooling loads and causes potential
vision distortion. Ground clearance typically is lower than would be required
for vehicles to traverse sudden changes in slope (e.g., driveway entrances)
without bottoming. The rear of these cars is always tapered, restricting rear
seat space and cargo volume. Wheel skirts and underbody covers add weight
and restrict access to parts needed for wheel change or maintenance, and
make engine and catalyst heat rejection more difficult. Frontal wheel skirts
may also restrict the vehicle’s turning circle. In addition, radiator airflow and
engine cooling airflow systems in highly aerodynamic vehicles must be
sophisticated and probably complex. For example, the Ford Probe IV uses
rear mounted radiators and air intake ducts in the rear quarter panels to keep
the airflow “attached” to the body for minimum drag. Liquids are piped to
and from the front of the car via special finned aluminum tubes that run the
length of the car. An attitude control system raises and lowers the chassis to
minimize ground clearance at high speeds when aerodynamic forces are high
and avoid clearance problems at lower speeds. While such designs may have
minimum drag, the weight and complexity penalty will overcome some of the
fuel economy benefits associated with low drag.

The trade-offs made in these vehicles may not be permanent, of course.


Engineering solutions to many of the perceived problems will be devised:
advanced design of the suspension to overcome the reduced space; thermal
barriers in the glass and lighter weight formulations to overcome the added
cooling loads and weight gain associated with steeply raked windshields; and
so forth. Presumably, the more conservative estimates of drag reduction
potential do not account for such solutions. Of course, there is no guarantee
that they will occur.

Drag Reduction Potential


Manufacturers were conservative in their forecast of future potential drag
coefficient. The consensus was remarkably uniform that for average family
sedans, a CD of 0.25 was the best that would be possible without major
sacrifices in ride, interior space, and cargo space. Some manufacturers,
however, suggested that niche market models (sport cars, luxury coupes)
could have CD values of 0.22. Other manufacturers stated that even 0.25 was
optimistic, as maximizing interior volume for a given vehicle length, to
minimize weight, would require drag compromises.

In contrast to these moderate expectations of drag reduction potential, some


prototype cars not as extreme as the Probe, with shapes that do not appear to
have radical compromises, have demonstrated drag coefficients of 0.19 to
0.20. For example, the Toyota AXV5, with a CD of 0.20, appears to offer
reasonable backseat space and cargo room. The car does, however, have
wheel skirts and an underbody cover; it is also a relatively long car as shown
in figure 3-2. Removing the wheel skirts typically increases CD by 0.015 to
0.02, and the AXV5 could have a CD of 0.22 and be relatively accessible for
maintenance by the customer. This suggests that attaining a CD of 0.22 could
be a goal for 2015 for most cars except subcompacts (owing to their short
body), and sports cars might aim for CD levels of 0.19. For these cars,
underbody and wheel covers could add about 40 to 45 lbs to vehicle weight,
assuming they were manufactured from lightweight plastic or aluminium
materials. This increased weight will decrease fuel economy by about 1
percent, although the reduced drag will offset this increase.

Light trucks have much different potential for CD reduction. Pickup trucks,
with their open rectangular bed and higher ride height, have relatively poor
CDs; the best of today’s pickups are at 0.44. Four-wheel-drive pickups are
even worse, with large tires, exposed axles and driveshafts, and higher
ground clearance. Compact vans and utilities can be more aerodynamic, but
their short nose and box-type design restrict drag coefficients to high values.
Manufacturers argue that tapering the body and lowering their ground
clearance would make them more like passenger cars, hence unacceptable to
consumers as trucks. GM’s highly aerodynamic Lumina Van has not been
popular with customers, partly because the sharp nose made it difficult to
park; the Lumina Van was recently redesigned and its CD was increased
from the previous value of 0.32.

Manufacturer’s projections of potential improvements in future truck CD are


given in table 3-4.

Effect of Advanced Aerodynamics on Vehicle Prices


The costs of aerodynamic improvements are associated primarily with the
expense of developing a low drag body shape that is attractive and then
developing the trim and aerodynamic detailing to lower CD. The essential
inseparability of drag reduction and styling costs makes it difficult to allocate
the fixed costs to aerodynamics alone. Manufacturers confirmed that current
body assembly procedures and existing tolerances were adequate to
manufacture vehicles with CD levels of 0.25 or less.

Previously, aerodynamic styling to CD levels of 0.30 required investments in


the range of $15 million in development costs.36 Requiring levels of CD to
be less than 0.25 would likely double development costs owing to the need to
stabilize underbody airflow and control engine and internal air flow. Unit
variable costs to an automobile manufacturer (from supplier data) are:

· Flush glass windows: $8 to $10 (for four),

· Underbody cover (plastic): $25 to $30,

· Wheel skirts: $5 to $6 each.

Hence the retail price effect (RPE) is calculated as follows:

· Unit investment cost: ~$30,

· Variable costs: ~$48 to $64,

· RPE: ~$125 to $150.

These RPEs would be associated with CD levels of 0.20 to 0.22, while
achieving CD levels of 0.24 to 0.25 would not require wheel skirts,
reducing the RPE to $90 to $115.
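
The figures above imply a simple relationship between component costs and the
retail price effect, sketched below; the markup is computed from the numbers
given rather than being a stated industry rule.

    unit_investment = 30          # $, from the text
    variable_costs = (48, 64)     # $ range, from the text
    rpe_values = (125, 150)       # $ range, from the text

    for variable, rpe in zip(variable_costs, rpe_values):
        implied_markup = rpe / (variable + unit_investment)
        print(f"variable ${variable} + investment ${unit_investment} -> "
              f"RPE ${rpe} (implied markup ~{implied_markup:.1f}x)")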

Price effects for trucks are expected to be similar to autos, for a similar
percentage reduction in drag. Of course, the absolute values of CD will be
higher.
Rolling Resistance Reduction
Background
The rolling resistance of a tire is the force required to move the tire forward,
and represents nearly a third of the tractive forces on a vehicle. The force is
directly proportional to the weight load supported by the tire, and the ratio of
the force to the weight load supported by the tire is called the rolling
resistance coefficient (RRC). The higher the RRC, the more fuel needed to
move the vehicle.
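
A minimal illustration of the definition above: the rolling resistance force is
simply the RRC multiplied by the weight carried by the tires. The vehicle mass
is an assumed value used only for scale.

    def rolling_force_newtons(rrc, vehicle_mass_kg, g=9.81):
        """Rolling resistance force F = RRC * m * g."""
        return rrc * vehicle_mass_kg * g

    mass_kg = 1400  # assumed mid-size car
    for rrc in (0.012, 0.009, 0.0065):
        print(f"RRC={rrc:.4f}: {rolling_force_newtons(rrc, mass_kg):.0f} N")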

Tires are of two construction types: bias-ply and radial-ply. Bias-ply tires
have been largely phased out of the light-duty truck and car markets except in
certain rough-duty applications, but still retain some market share in the
medium-duty and heavy-duty commercial truck and bus markets. In general,
bias-ply tires have significantly higher RRCs than radial tires. The RRC of
radial tires has also decreased over time owing to improvements in materials
and design.

The primary source of tire rolling resistance is internal friction in the rubber
compounds as the tire deflects on contact with the road. Reducing this
“hysteresis loss” has typically involved a trade-off with other desirable tire
attributes such as traction and tread wear, but advances in tire design and
rubber technology have brought significant reduction in rolling resistance
without compromising other attributes.

This evolution of passenger car and light truck tires may be divided into three
phases:

· The first radials (generation one), which used a type of synthetic


rubber, 37 had 20 percent to 25 percent lower rolling resistance than
bias-ply tires, and became available during the late 1970s.

· The second phase (generation two), using new formulations of


synthetic rubber,38 achieved an additional 20 percent to 25 percent
reduction in rolling resistance over generation one radials, and became
available during the mid-1980s.

· The third phase (generation three), which adds silica to the tread
compounds, achieves an additional 20 percent reduction, and has
recently become available in limited quantities.

In addition to changing the tread materials, RRC reductions can be realized


by changing the shape of the tread and the design of the shoulder and
sidewall, as well as the bead. The type of material used in the belts and cords
also affects the RRC. For example, DuPont has suggested the use of aramid
fibres to replace steel cords and monofilament replacement of current
polyester multifilament to modify stiffness. Aramid yarns have been available
for over a decade, and their use can cut rolling resistance by 5 percent.40
Polyamide monofilaments have recently been introduced that improve the tire
sidewall stiffness and reduce rolling resistance by about 5 percent. These new
materials also contribute to reducing tire weight (by as much as 4 kg/tire),
which provides secondary fuel economy benefits and improved ride.

The rolling resistance values of current OEM tires are not well documented.
Anecdotal evidence from experts states that most normal (i.e. not
performance-oriented) tires have RRCs of 0.008 to 0.010 as measured by the
Society of Automotive Engineers (SAE) method.41 Performance tires used in
luxury and sports cars, and increasingly in high performance versions of
family sedans, use H- or V-rated tires that have RRC values (SAE) of 0.012
to 0.013. Tires for compact vans have RRC values of 0.008 to 0.009 while
four-wheel-drive trucks and sport utilities feature tires with RRC values
(SAE) of 0.012 to 0.014.

Potential for Rolling Resistance Improvement


Most manufacturers interviewed by OTA had similar expectations for tire rolling
resistance reduction over the next decade. The expectation was that an overall
reduction of 30 percent was feasible by 2005, resulting in normal tires with
an RRC of 0.0065 (if the current average is 0.009). Most also believed the H-
rated or V-rated tires would have similar percentage reductions in rolling
resistance so that they would have RRCs of 0.009 to 0.01 by 2005. Very
similar percentage reductions in RRC for light truck tires were also expected.
A 30 percent reduction in rolling resistance can translate to a 5 percent
improvement in fuel economy, if the design is optimized for the tire.
Manufacturers were unwilling (or unable) to estimate additional RRC
reductions in the post-2005 time frame, possibly owing to their unfamiliarity
with tire technologies in the research stage at this time.

These 30 percent reductions are expected to be achieved with virtually no


loss in handling properties or in traction and braking. Manufacturers
suggested that some loss in ride quality may occur because of the higher tire
pressure, but this could be offset by suspension improvements or the use of
semi-active suspension systems. However, manufacturers expected noise and
tire life to be somewhat worse than those for current tires. Both of these
factors are highly important--noise may represent a special problem because
the improved aerodynamics and, possibly, electric drivetrains of advanced
vehicles will reduce other sources of noise.

An optimistic view for the 2015 time frame suggests that RRC values as low
as 0.005 may be achievable. Such low rolling resistance tires have already
been built for electric cars. Auto manufacturers believe that such tires are not
yet commercially acceptable because prototypes have suffered from losses in
handling, traction, and durability. Tire manufacturers have expressed the
view that technological improvements during the next 20 years could
minimize these losses, and an RRC of 0.005 could be a realistic goal for a
“normal” tire in 2015, as an average, which implies that some tires would
have even lower RRC values.

Only two auto manufacturers discussed other components of rolling


resistance, including brake drag and wheel/drivetrain oil seals and bearing
loss. Brake drag accounts for 6 percent of total rolling resistance, while
bearing and seal drag account for about 12 percent of rolling resistance, with
the tires accounting for the remaining 82 percent. The use of highly rigid
callipers, pads, and shoes to avoid brake pad contact with the rotor when the
wheels are spinning can reduce brake drag by as much as 60 percent. Bearing
and oil seal relative friction can be reduced by:

· Downsizing bearings and reducing preload


· Using low-tension oil seals

· Using low-viscosity lubricants

Manufacturers anticipate that these frictional losses can be reduced by 20 to


25 percent by 2005. A composite analysis of total rolling resistance suggests
that a 25 percent reduction is possible by 2005, and up to 40 percent by 2015,
if new tire technologies are successful. There is some disagreement among
engineers about the effect such reductions will have on vehicle fuel economy,
with some asserting that the 25 percent reduction in resistance would
translate into no more than a 3 percent fuel economy increase, and the 40
percent reduction into a 5 percent fuel economy increase. OTA is more
optimistic than this; we conclude that the projected reductions in rolling
resistance may yield as much as a 5 percent improvement in fuel economy by
2005 and an 8 percent improvement by 2015 for an optimized vehicle design.
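
A weighted-sum view of how such a composite figure might be assembled is
sketched below, using the component shares quoted above. The individual
reduction values are taken toward the lower end of the ranges discussed, and the
result is only indicative.

    shares = {"tires": 0.82, "bearings/seals": 0.12, "brake drag": 0.06}
    assumed_reductions = {"tires": 0.25, "bearings/seals": 0.20, "brake drag": 0.50}

    composite = sum(shares[k] * assumed_reductions[k] for k in shares)
    print(f"composite rolling-resistance reduction: ~{composite:.0%}")   # roughly 26%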
Improvements To Spark Ignition Engines
Overview
The spark ignition (SI) engine is the dominant passenger car and light truck
powerplant in the United States. The theoretical efficiency of the SI engine is:

Efficiency = 1 - 1/r^(n-1)

where r is the compression ratio and “n” the polytropic expansion coefficient,
which is a measure of the way the mixture of air and fuel in the engine
expands when heated. For a compression ratio of 10:1, and an n value of 1.26
(which is correct for today’s engines, which require the air-fuel ratio to be
stoichiometnri, that is, with precisely enough air to allow complete burning of
the fuel), the theoretical efficiency of the engine is 45 percent. This value is
not attained in practice, but represents a ceiling against which developments
can be compared.
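
The theoretical figure quoted above follows directly from the formula; the
sketch below simply evaluates it.

    def si_theoretical_efficiency(r, n):
        """Ideal SI-engine efficiency = 1 - 1/r**(n - 1)."""
        return 1.0 - 1.0 / r ** (n - 1.0)

    print(f"r = 10:1, n = 1.26 -> {si_theoretical_efficiency(10.0, 1.26):.0%}")  # about 45%
    print(f"r = 12:1, n = 1.26 -> {si_theoretical_efficiency(12.0, 1.26):.0%}")  # about 48%

The second line also illustrates why raising the compression ratio beyond 10:1
offers only modest theoretical gains, a point returned to later in this section.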

Four major factors limit the efficiency of SI engines. First, the ideal cycle
cannot be replicated because combustion is not instantaneous, allowing some
fuel to be burned at less than the highest possible pressure, and allowing heat
to be lost through the cylinder walls before it can do work. Second,
mechanical friction associated with the motion of the piston, crankshaft, and
valves consumes a significant fraction of total power. Friction is a stronger
function of engine speed than of torque; therefore, efficiency is degraded
considerably at light load and high rpm conditions. Third, aerodynamic
frictional and pressure losses associated with air flow through the air cleaner,
intake manifold and valves, exhaust manifold, silencer, and catalyst are
significant, especially at high air flow rates through the engine. Fourth, SI
engines reduce their power output by throttling the air flow, which causes
additional aerodynamic losses called “pumping losses” that are very high at
light loads.

Because of these losses, production spark ignition engines do not attain the
theoretical values of efficiency, even at their most efficient operating point.
In general, the maximum efficiency point occurs at an engine speed
intermediate to idle and maximum rpm, and at a torque level that is 60 to 75
percent of maximum. “On-road” average efficiencies of engines used in cars
and light trucks are much lower than peak efficiency, since the engines
generally operate at very light loads--when pumping losses are highest--
during city driving and steady state cruise on the highway. The high power
that these engines are capable of is utilized only during strong accelerations,
at very high speeds or when climbing steep grades. And during stop-and-go
driving conditions typical of city driving, a substantial amount of time is
spent at idle, where efficiency is zero. Typical modern spark ignition engines
have an efficiency of about 18 to 20 percent on the city part of the
Environmental Protection Agency driving cycle, and about 26 to 28 percent
on the highway part of the cycle.

During the 1980s, most automotive engine manufacturers improved engine


technology to increase thermodynamic efficiency, reduce pumping loss and
decrease mechanical friction and accessory drive losses. These improvements
have resulted in fuel economy benefits of as much as 10 percent in most
vehicles.

Increasing Thermodynamic Efficiency


Increasing the thermodynamic efficiency of SI engines can be attained by
optimum control of spark timing, by reducing the time it takes for the fuel-air
mixture to be fully combusted (burn time), and by increasing the compression
ratio.

Spark timing

For a particular combustion chamber, compression ratio and air fuel mixture,
there is an optimum level of spark advance for maximizing combustion
chamber pressure and, hence, fuel efficiency. This level of spark advance is
called MBT, for “maximum brake torque.” Owing to production variability
and inherent timing errors in a mechanical ignition timing system, the
average value of timing in mechanically controlled engines had to be retarded
significantly from the MBT timing so that the fraction of engines with higher
than average advance owing to production variability would be protected
from knock. The use of electronic controls coupled with magnetic or optical
sensors of crankshaft position has reduced the variability of timing between
production engines, and also allowed better control during transient engine
operation. More recently, engines have been equipped with knock sensors,
which are essentially vibration sensors tuned to the frequency of knock.
These sensors allow for advancing ignition timing to the point where trace
knock occurs, so that timing is optimal for each engine produced regardless
of production variability. Manufacturers expect that advanced controls of this
sort can provide small benefits to future peak efficiency.

Faster combustion

High-swirl, fast-burn combustion chambers were developed during the 1980s


to reduce the time taken for the air fuel mixture to be fully combusted. The
shorter the burn time, the more closely the cycle approximates the theoretical
Otto cycle with constant volume combustion, and the greater the
thermodynamic efficiency. Recent improvements in flow visualization and
computational fluid dynamics have allowed the optimization of intake valve,
inlet port, and combustion chamber geometry to achieve desired flow
characteristics. Typically, these designs have resulted in a 2 to 3 percent
improvement in thermodynamic efficiency and fuel economy. The high swirl
chambers also allow higher compression ratios and reduced “spark advance”
at the same fuel octane number. More important, manufacturers stated that
advances in this area are particularly useful in perfecting lean-burn engines.

Increased compression ratios

Compression ratio is limited by fuel octane, and increases in compression


ratio depend on how the characteristics of the combustion chamber and the
timing of the spark can be tailored to prevent knock, or early detonation of
the fuel-air mixture, while maximizing efficiency. Improved electronic
control of spark timing and improvements in combustion chamber design are
likely to increase compression ratios in the future. In newer engines of the 4-
valve dual overhead cam (DOHC) type, the spark plug is placed at the center
of the combustion chamber, and the chamber can be made very compact by
having a nearly hemispherical shape. Engines incorporating these designs
have compression ratios up to 10:1, while still allowing the use of regular
octane gasoline. High compression ratios also can increase hydrocarbon
emissions from the engines, although this is becoming less of a concern with
newer combustion chamber designs. Manufacturers indicated that increases
beyond 10:1 are expected to have diminishing benefits in efficiency and fuel
economy and compression ratios beyond 12:1 are probably not beneficial,
unless fuel octane is raised simultaneously. The use of oxygenates in
reformulated gasoline could, however, allow the octane number of regular
gasoline to increase in the future.
Reducing Mechanical Friction
Mechanical friction losses can be reduced by converting sliding metal
contacts to rolling contacts, reducing the weight of moving parts, reducing
production tolerances to improve the fit between pistons and bore, and
improving the lubrication between sliding or rolling parts. Friction reduction
has focused on the valvetrain, pistons, rings, crankshaft, crankpin bearings,
and the oil pump. This is an area where OTA found considerable
disagreement among manufacturers interviewed.

Rolling contacts and lighter valvetrain

Roller cam followers to reduce valvetrain friction are already widely used in
most U.S. engines. In OTA interviews, some manufacturers claimed that
once roller cams are adopted, there is very little friction left in the valvetrain.
Other manufacturers are pursuing the use of lightweight valves made of
ceramics or titanium. The lightweight valves reduce valvetrain inertia and
also permit the use of lighter springs with lower tension. Titanium alloys are
also being considered for valve springs. A secondary benefit associated with
lighter valves and springs is that the erratic valve motion at high rpm is
reduced, allowing increased engine rpm range and power output.

Fewer rings

Pistons and rings contribute to approximately half of total friction. The


primary function of the rings is to minimize leakage of the air-fuel mixture
from the combustion chamber to the crankcase, and oil leakage from the
crankcase to the combustion chamber. The ring pack for most current engines
is composed of two compression rings and an oil ring. The rings have been
shown to operate hydrodynamically over the cycle, but metal-to-metal
contact occurs often at the top and bottom of the stroke. The outward radial
force of the rings is a result of installed ring tension, and contributes to
effective sealing as well as friction. Various low-tension ring designs were
introduced during the 1980s, especially since the need to conform to axial
diameter variations or bore distortions has been reduced by improved
cylinder manufacturing techniques. Elimination of one of the two
compression rings has also been tried on some engines, and two-ring pistons
may be the low friction concept for the future. Here again, we found
considerable disagreement, with some manufacturers stating that two-ring
pistons provided no friction benefits, while others suggested friction reduction
of 5 to 10 percent.

Lighter pistons

Reducing piston mass is the key to reducing piston friction, and engine
designers have continuously reduced mass since the 1980s. Analytical results
indicate that a 25 percent mass reduction reduces friction mean effective
pressure by 0.7 kilopascals at 1500 rpm.43 Secondary benefits include
reduced engine weight and reduced vibration. Use of advanced materials also
results in piston weight reduction. Current lightweight pistons use
hypereutectic aluminium alloys, while future pistons could use composite
materials such as fibre-reinforced plastics. Advanced materials can also
reduce the weight of the connecting rod, which also contributes to the side
force on a piston. Manufacturers agreed that a 25 to 30 percent reduction in
piston and connecting rod weight could occur by 2015.

Coatings

Coating the piston and ring surfaces with materials to reduce wear also
contributes to friction reduction. The top ring, for example, is normally coated
with molybdenum, and new proprietary coating materials with lower friction
are being introduced. Piston coatings of advanced high temperature plastics
or resin have recently entered the market, and are claimed to reduce friction by
5 percent and fuel consumption by 1 percent.44 Some manufacturers claimed
that coatings wear off quickly, but others suggested that advanced coatings
were durable for the life of the engine. These differences may be owing to
proprietary advantages in coating technology with some manufacturers.

Improved oil pump

Friction in the oil pump can be reduced by optimizing oil flow rates and
reducing tolerances for rotor clearance. Some manufacturers suggested
friction can be reduced by 2 to 3 percent with improved oil pump designs, for
a 0.3 to 0.4 percent fuel economy benefit.

Lubricants

Improvements to lubricants used in the engine also contribute to reduced


friction and improved fuel economy. Friction modifiers containing
molybdenum compounds have reduced friction without affecting wear or oil
consumption. Some manufacturers stated that future synthetic oils combining
reduced viscosity and friction modifiers could offer good wear protection, low
oil consumption, and extended drain capability, as well as small
improvements to fuel economy in the range of 1 percent over current 5W-30
oils.
Reducing Pumping Loss
Reductions in flow pressure loss can be achieved by reducing the pressure
drop that occurs in the flow of air (air fuel mixture) into the cylinder, and the
combusted mixture through the exhaust system. The largest part of pumping
loss during normal driving results from throttling, however, and strategies to
reduce throttling loss have included variable valve timing, “lean-burn”
systems, and “variable displacement” systems that shut off some engine
cylinders at low load.

Intake manifold design


There are various strategies to reduce the pressure losses associated with the
intake system and exhaust system. Efficiency can be improved by making the
intake air flow path as free as possible of flow restrictions through the air
filters, intake manifolds, and valve ports.45 Intake and exhaust manifolds can
be designed to exploit resonance effects associated with pressure waves
similar to those in organ pipes. By properly tuning the manifolds, high
pressure waves can be generated at the intake valve as it is about to close,
which increases intake pressure, and at the exhaust valve as it is about to
open, which purges exhaust gases from the cylinder. Formerly, “tuned”
intake and exhaust manifolds could help performance only in certain narrow
rpm ranges. Recently, the introduction of new designs, including variable
resonance systems (where the intake tube lengths and resonance volumes are
changed at different rpm by opening and closing switching valves) have
allowed smooth and high torque to be realized across virtually the entire
engine speed range. Manufacturers expect variable intake systems to be in
widespread use over the next 10 years.

Multiple valves

Another method to increase efficiency is by increasing valve area, especially


by increasing the number of valves. A four-valve system that increases flow
area by 25 to 30 percent over two-valve layouts has gained broad acceptance.
The valves can be arranged around the cylinder bore and the spark plug
placed in the center of the bore to improve combustion. While the peak
efficiency or brake-specific fuel consumption (bsfc) of a four-valve engine
may not be significantly different from a two-valve engine, there is a broader
range of operating conditions where low bsfc values are realized. Analysis of
additional valve layout designs suggests that five valve designs (three intake,
two exhaust) can provide an additional 20 percent increase in flow area, at the
expense of increased valvetrain complexity. Current expectations are that
most engines will be of the four-valve types by 2005.

Under most normal driving conditions, throttling loss is the single largest
contributor to engine efficiency losses. In SI engines, the air is throttled
ahead of the intake manifold by means of a butterfly valve that is connected
to the accelerator pedal. The vehicle’s driver demands a power level by
depressing or releasing the accelerator pedal, which, in turn, opens or closes
the butterfly valve. The presence of the butterfly valve in the intake air stream
creates a vacuum in the intake manifold at part throttle conditions, and the
intake stroke draws in air at reduced pressure, which results in pumping
losses. These losses are proportional to the intake vacuum, and disappear at
wide open throttle.

Lean-burn

Lean-burn is one method to reduce pumping loss. Instead of throttling the air,
engine power can be reduced by reducing the fuel flow so that the air-fuel
ratio increases, or becomes leaner. (In this context, the diesel engine is a
lean-burn engine.) Most SI engines, however, do not run well at air-fuel ratios
leaner than 18:1, as the combustion quality deteriorates under lean
conditions. Manufacturers provided data on engines constructed to create
high swirl and turbulence when the intake air and fuel are injected into the
cylinder that can run well at air-fuel ratios up to 22:1. Lean-burn engines
actually run at high air-fuel ratios only at light loads; they run at
stoichiometric or rich air-fuel ratios at high loads to maximize power.
excess air combustion at light loads has the added advantage of having a
favourable effect on the polytropic coefficient, n, in the efficiency equation.
Modern lean-burn engines commercialized recently in Japan do not
completely eliminate throttling loss, but the reduction is sufficient to improve
vehicle fuel economy by 8 to 10 percent. A disadvantage of lean-burn
engines, however, is that they cannot use conventional three-way catalysts to
reduce emissions of nitrogen oxides (NOx), and the in-cylinder NOx
emission control from running lean is sometimes insufficient to meet
stringent NOx emissions standards. There are developments in “lean NOx
catalysts,” however, that could allow lean-burn engines to meet the most
stringent NOx standards proposed in the future, which will be discussed
later.

Variable valve timing

Variable valve timing (VVT) is another method to reduce pumping loss.


Instead of using the butterfly valve to throttle the intake air, the intake valves
can be closed early, reducing the time (and volume) of air intake. The system
has some problems at very light load (the short duration of the intake valve
opening leads to weaker in-cylinder gas motion and reduced combustion
stability). Moreover, at high rpm, some throttling losses occur at the valve
itself.47 Hence, throttling losses can be decreased by 80 percent at light load,
low rpm conditions, but by only 40 to 50 percent at high rpm, even with fully
VVT.

Aside from improved fuel economy, VVT also increases power output over
the entire range of engine rpm. Fully variable valve timing can result in
engine output levels of up to 100 brake horsepower (BHP)/liter at high rpm
without the decline in low-speed torque that is characteristic of four-valve
engines with fixed valve timing. In comparison to an engine with fixed valve
timing that offers equal performance, fuel efficiency improvements of 7 to 10
percent are possible. The principal drawback has historically been the lack of
a durable and low cost mechanism to implement valve timing changes.
Honda has commercialized a two stage system in its four valve/cylinder
engines where, depending on engine speed and load, one of two valve timing
and lift schedules are realized for the intake valves. (This type of engine has
been combined with lean burn to achieve remarkable efficiency in a small car.)

Another version of VVT also shuts off individual cylinders by deactivating
the valves. For example, an eight-cylinder engine can operate at light load as
a four-cylinder engine (by deactivating the valves for four of the cylinders)
and as a six-cylinder engine at moderate
load. Such systems have also been tried on four-cylinder engines in Japan
with as many as two cylinders deactivated at light load. At idle, such systems
have shown a 40 to 45 percent decrease in fuel consumption, while composite
fuel economy has improved by 16 percent on the Japanese 10-15 mode test
since both pumping and frictional losses are reduced by cylinder
deactivation.50 Earlier systems had problems associated with noise, vibration, and
emissions that resulted in reduced acceptance in the market place, but more
recent systems introduced in Japan have solved most of the problems. OTA
had the opportunity to drive Mitsubishi’s MIVEC V-6 which features VVT
and cylinder shutoff, and noise and vibration effects on this vehicle from
cylinder shutoff were barely noticeable.

Total effect

All of the aforementioned technologies can reduce pumping loss, increase


volumetric efficiency, increase specific output, and reduce fuel consumption
at part load, but the benefits are not additive. Most manufacturers provided
estimates of benefits for several combinations; for example, a recent paper by
engineers from Porsche forecast a 13 percent reduction in fuel consumption
with no loss in performance for a system featuring variable valve timing and
lift, variable resonance intake, and cylinder cut off (from a baseline vehicle
featuring a four-valve engine with a two-stage resonance intake and cam
phase adjustment). This estimate is more optimistic than what many
manufacturers believed to be possible.
DISC and Two-Stroke Engines
Direct Injection Stratified Charge (DISC) engines are considered the
highest level of technology refinement for SI engines. These engines are
almost completely unthrottled, and will require variable valve timing to reach
their maximum potential fuel efficiency. Their high efficiency is associated
with high compression ratio (up to 13), absence of throttling loss, and
favorable characteristics of the products of combustion. Although DISC
engines have been researched for decades (with some versions such as Ford’s
PROCO almost entering production) there is renewed excitement about DISC
owing to:

· Advancements in fuel injection technology (e.g., the air atomized


injection system developed by Orbital, and new fast-response piezo-
electric injectors developed by Toyota).

· Improved understanding and control of vortex flow in the


combustion chamber (e.g., Mitsubishi’s vertical vortex system
maintains charge stratification through the compression stroke over a
wide speed/load range. Increased turbulence in the chamber can also be
used to support combustion to very lean A/F ratios, as lean as 40:1).

· Developments in lean NOX catalysts.

DISC engines still have problems associated with meeting future


hydrocarbon (HC) and NOX standards. Manufacturers indicated that the HC
problem was easier to solve than the NOX problem, and meeting a standard
of 0.4 g/mi NOX or lower would require a “lean-NOX” catalyst capable of
conversion efficiency over 60 percent. The development of the lean-NOx
catalyst is discussed below, but several manufacturers appeared to be
optimistic about the future prospects for the DISC.

Two-stroke engines

The two-stroke engine is a variant of the four-stroke DISC engine, with the
potential to produce substantially higher specific power. The reduced engine
weight provides fuel economy benefits in addition to those provided by the
DISC design. The two-stroke design is thermodynamically less efficient than
the four-stroke, however, because part of the gas expansion stroke cannot be
used to generate power.

Two-stroke engine designs have been developed by various research groups


and manufacturers, with Orbital, Toyota, and Chrysler publicly displaying
alternative designs. The Orbital engine uses crankcase scavenging (like a
traditional motorcycle two-stroke engine), with a specially developed direct
injection system with air assisted atomizers. An Orbital engine installed in a
European Ford Fiesta has achieved 44 mpg city, 61.3 mpg highway, for a
composite fuel economy of 50.4 mpg on the EPA test cycle.52 Orbital claims
a 22 percent benefit in fuel economy for this engine,53 although it is difficult
to verify this claim with available tests because the baseline vehicles have
different performance.

The Orbital engine uses a very low-friction design, with roller bearings for its
crankshaft, but manufacturers doubt the durability of this system. Chrysler
uses an externally scavenged design with an air compressor, so that crankcase
induction and lubrication problems are avoided. Toyota uses an external
induction system with exhaust valves in the cylinder head. These designs are
likely to be more durable, but lose the friction advantage, so that their fuel
economy benefits are lower than the Orbital design. However, a four-stroke
DISC will be more thermodynamically efficient than a two-stroke DISC, and
the current opinion is that the four-stroke’s effect on fuel economy will be
greater than the two-stroke’s despite the latter’s weight advantage.

Summary of Engine Technology Benefits


Estimates of engine technology benefits are given in table 3-5, assuming that
a lean-NOx catalyst is available for lean-burn and DISC engines. The mean
for all manufacturers over the long term suggests that use of a DISC engine
coupled with available friction reduction technologies can yield a 17 to 18
percent fuel consumption reduction, while an optimistic view suggests that as
much as 25 percent may be available. These reductions can be achieved with
no trade-off in performance although cost and complexity will increase.
Electric Drivetrain Technologies
Introduction
The appeal of using electricity to power automobiles is that it would
eliminate vehicular air pollution (although there would still be pollution at the
power source), and that electricity can be reversibly translated to shaft power
with precise control and high efficiency. The main problem with this use is
that electricity cannot be easily stored on a vehicle. California’s mandate for
the introduction of zero emission vehicles in 1998 has resulted in a major
research effort to overcome this storage problem. The only commercially
available systems for storage today, however, are the lead acid and nickel-
cadmium battery, and both have limited capabilities. The lead acid battery’s
limited storage capacity and substantial weight are ill-suited to a vehicle’s
needs, although advanced versions of this battery reduce some of these
limitations; the nickel-cadmium battery is very expensive and requires careful
maintenance.
Electricity can also be produced onboard a vehicle by using an engine and
generator. Simply feeding the generated electricity directly into a drive motor
to power the wheels, however, would probably be less efficient than a
mechanical transmission, because the combined generator and motor losses
may outweigh transmission losses. The total system can be made more
efficient, however, if the engine is operated at near constant output close to
its most efficient point, and any excess electricity is stored in a buffer, which
is used to satisfy the variable electrical demands of the motor and other
vehicle power demands. Vehicles with powertrains combining a device to
store electrical energy and another to produce it are called hybrids. The
storage or buffer device can be an ultracapacitor, flywheel, or battery,
depending on system design; the electricity producer can be an internal
combustion engine or, perhaps, a fuel cell, which would be both highly
efficient and almost non-polluting.
The sections that follow discuss new technology under development for
batteries for electrical energy storage, fuel cells for energy production,
capacitors/flywheels for peak power storage, and motors for conversion of
electrical power to shaft power. The discussions focus on a selected set of
technologies likely to be competitive in the future marketplace (at least
according to current wisdom), and their efficiency and cost characteristics.
The data and descriptions presented in this section can become out-of-date
very quickly, especially if there are breakthroughs in the design or
manufacturability of the technologies. Hence, the projections in this section
represent an extrapolation of technology performance into the future based on
information available as of mid-1994. New technology competitors may
emerge very quickly and new findings may render existing “competitive”
technologies poor prospects for the future.
Battery Technology
Requirements
A battery is a device that stores electricity in a chemical form that is released
when an external circuit is completed between the battery’s opposing
terminals. The battery, which provides both energy and power storage, is the
critical technology for electric vehicles. Unfortunately, the weak link of
batteries has been their low energy storage capacity--on a weight basis, lower
than gasoline by a factor of 100 to 400. Power capacity may also be a
problem, especially for some of the higher temperature and higher energy
batteries. In fact, power capacity is the more crucial factor for hybrid
vehicles, where the battery’s major function is to be a load leveller for the
engine, not to store energy. Aside from increasing energy and power storage,
other key goals of battery R&D are increasing longevity and efficiency and
reducing costs.
Traditionally, the storage characteristics of conventional lead-acid batteries
have been so poor that electric vehicles (EVs) have been extremely heavy,
with poor acceleration performance and limited range. Battery technology
research sponsored by the U.S. Advanced Battery Consortium (USABC) has
sought to develop new batteries with improved storage and other
characteristics. The performance characteristics of a battery relevant to use in
vehicles can be defined by the following parameters, for which USABC has set
goals.
The specific energy is a measure of the total quantity of energy stored per
unit of battery weight. USABC has set a goal of 80 watt-hours/kilogram (with
100 Wh/kg desired) as a mid-term goal and 200 Wh/kg as a long term goal
for this parameter. In contrast, conventional lead acid batteries have specific
energy levels of 25 to 28 Wh/kg.
Specific power is a measure of how much power per unit weight the battery
can deliver per second to handle peak requirements for acceleration and grade
climbing. USABC’s mid- and long-term goals are 150 W/kg (200 W/kg desired)
and 400 W/kg respectively for a 30-second pulse of power. Conventional
lead acid batteries can provide as much as 100 W/kg when fully charged, but
their peak power capability declines rapidly as they are discharged, and is
about 60 W/kg at 80 percent depth-of-discharge (DoD). To some degree,
specific power is a function of battery design, and especially trades off with
specific energy. Hence, batteries designed for high power may differ from
those designed for high energy.
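
To put these specific-energy targets in perspective, the sketch below computes
the battery mass needed to store a fixed amount of energy. The 30 kWh pack size
is an assumption chosen only to make the comparison concrete.

    pack_energy_wh = 30_000   # assumed storage target

    specific_energy_wh_per_kg = {
        "conventional lead acid": 27,    # 25 to 28 Wh/kg quoted above
        "USABC mid-term goal":    80,
        "USABC long-term goal":  200,
    }

    for name, wh_per_kg in specific_energy_wh_per_kg.items():
        mass_kg = pack_energy_wh / wh_per_kg
        print(f"{name:24s}: ~{mass_kg:,.0f} kg for a 30 kWh pack")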
The sustainability of peak power levels is an important issue for hybrid
vehicles. The peak power values quoted in this section are based on a 30-
second pulse. Batteries may not be able to sustain even half this peak level, if
the duration is in the order of two to four minutes. However, the capability of
the battery to deliver high power is a function of its design as well as the
battery cooling system installed to prevent thermal degradation. At this point,
it is unclear whether all of the battery types described below can provide half
the rated peak power for several minutes, as is required for a hill climb.
Life can be based on both calendar years and charge/discharge cycles.
USABC has set mid- and long-term goals of 5 and 10 years and 600 and
1,000 cycles respectively. Conventional lead acid batteries in electric car use
have a life of only about two to three years and 300 to 400 cycles. For some
batteries, calendar life and cycle life may present different limiting
constraints, and the life itself is affected by how deeply a battery is
discharged.
There are several other parameters that are of major concern, such as the
power density and energy density, which are measures of battery power and
energy storage capabilities on a volumetric basis (to avoid very large
batteries), power and energy degradation over the useful life, fast recharge
time, range of ambient operating conditions, maintenance requirements, and
durability. USABC goals for some of these parameters are shown in table 3-
10. In addition, there are special concerns with each battery type that include
behaviour at low charge, special charging characteristics, and recyclability.
This review of batteries is not meant to be comprehensive nor intended to
cover all of the above factors. Rather, the intention of the review is to
describe auto manufacturer concerns and battery manufacturer inputs on the
current status of battery development, while the conclusions reflect only
OTA’s opinion on battery prospects.
Credible specification of battery parameters is critical to judging EV
capabilities, but in fact such specification is difficult to come by. Measuring
battery parameters raises many issues, as the results are sensitive to the test
procedure and ambient conditions employed. For example, most batteries
display reduced energy densities at higher power levels, as well as during
cyclically varying power draws (as will be the case in an electric vehicle).
Yet, specific energy values generally are quoted at a constant discharge rate
that would drain the battery in three hours (c/3). As noted, many batteries
also display significant reductions in power density at low state-of-charge, and
at reduced ambient temperatures, while available data may be for fully
charged batteries at 20°C. Finally, battery characteristics are often different
among single cells, modules, and collections of modules required for a high-
voltage battery. In many battery types, the failure of a single cell, or
variations (owing to production tolerances) between cells often has
significant impact on battery performance.
Auto manufacturers interviewed by OTA universally agreed that many
battery manufacturer claims about battery performance and longevity are
unlikely to be reproduced in a vehicle environment. European manufacturers
have devised new testing procedures through their joint consortium, EUCAR,
that appear to be more stringent and comprehensive than those performed
previously by USABC or by DOE affiliated laboratories;68 similarly,
USABC in 1994 also revised its testing procedures, which are now reported
to be very stringent. Auto manufacturers stressed the need to test an entire
high-voltage battery system with the thermal and electrical management
systems included as part of the overall system to obtain a good picture of
real-world performance.
Battery Characteristics
For this discussion, batteries have been divided into four thematic groups:
lead acid, alkaline, high temperature, and solid electrolyte. Various battery
designs have been examined that would fall under the latter three types, and
obtaining comprehensive data on their current development status and
characteristics is challenging; a listing of the various types under
development and their developers is given in table 3-11. The discussion
focuses on batteries that are potential winners according to the current
consensus, but it should be noted that the list of “winners” has changed
considerably during the last five years. For example, in 1991, the nickel-iron
and sodium-sulphur batteries were considered the most promising, but are no
longer the leading contenders.
Lead acid
Lead acid batteries have been in existence for decades, and more advanced
traction batteries with improved specific power and energy, as well as
durability, are under development. Delco Remy’s VRLA battery is perhaps
the most advanced battery commercially available (though in limited
quantity), and the following characteristics are claimed per battery module:
a specific energy of 35 Wh/kg, specific power of 210 W/kg (fully charged)
and 150 W/kg at 20 percent charge, and over 800 cycles of life at 50 percent
DoD. Delco also offers a “battery package” including full thermal and
electrical management. An entire 312V system with 26 modules and battery
management has a net specific energy of 30.5 Wh/kg.
Other recent developments include the woven grid pseudo-bipolar lead acid
battery from Horizon, which has a demonstrated specific energy of 42 Wh/kg
and peak power of 500 W/kg at full charge and 300 W/kg at 80 percent DoD
at the cell level. Horizon claims life in excess of 900 cycles at C/2 and has
begun delivery of complete batteries from a pilot production plant.70 Horizon
anticipates additional improvements to specific energy levels over 48 Wh/kg
at the module level, and expects other benefits, such as fast charging, owing
to the batteries’ low internal resistance.
Bipolar lead acid batteries under development offer even higher power
densities and energy densities than the Horizon battery, with specific power
of 900 W/kg and specific energy of 47 Wh/kg demonstrated by ARIAS
Research at the module level. The traditional problem with bipolar batteries
has been with corrosion at the electrode interfaces, and it is not yet clear
whether this problem has been solved over the life of the batteries.
Nevertheless, the new designs show promise in providing significant
improvements in power and energy density, but providing reasonable life
may still be a serious problem.
Alkaline Systems
The three most successful candidates in this category are nickel-cadmium,
nickel-iron and nickel-metal hydride. Nickel-cadmium (Ni-Cd) batteries are
available commercially, but the major problem has been their relatively
modest improvement in specific energy over advanced lead acid batteries in
comparison to their high cost. Modern Ni-Cd batteries have specific energy
ratings up to 55 Wh/kg, which is about 25 percent better than the Horizon
lead acid battery. They cost at least four times as much,72 but these higher
costs will be offset to an extent by Ni-Cd batteries’ longer cycle lives. High-
energy versions of these batteries require maintenance and their capacity
changes with charge/discharge cycles. Sealed Ni-Cd batteries that are
maintenance free have significantly lower specific energy (35 to 40 Wh/kg),
although there is ongoing research to avoid this penalty. In addition, concerns
about the toxicity of battery materials and the recyclability of the battery has
resulted in reduced expectations for this battery.
Nickel-iron batteries received considerable attention a few years ago, but
interest has faded recently. Their specific energy is about 50 Wh/kg, and their
costs are similar to, or slightly lower than, those for Ni-Cd batteries.
Although they have demonstrated good durability, they require a
sophisticated maintenance system that adds water to the batteries and
prevents overheating during charge. In addition, they cannot be sealed, as
they produce hydrogen and oxygen during charging, which must be vented
and pose some safety problems. The formation of hydrogen and oxygen also
results in reduced battery charging efficiency, and these features account for
the lack of current interest in this battery.
Nickel-metal hydride batteries have received much attention lately,
and Ovonic and SAFT are the leading developers of such batteries. The
maintenance-free Ovonic batteries have demonstrated specific energy values
in excess of 80 Wh/kg at the module level and specific power densities of
over 200 W/kg.74 However, auto manufacturers have stated that these
batteries have high internal self-discharge rates, especially at high ambient
temperatures, with losses of 32 percent over 5 days at 40°C.75 Auto
manufacturers have also noted that Ovonic batteries have capacity limitations
at low temperatures when discharged quickly, and they are worried about
hydrogen build-up during charging. Nevertheless, the Ovonic batteries’
demonstrated capabilities and the potential to overcome these problems has
led to optimism about their prospects for commercialization. GM and
Ovonics have entered into a joint venture to produce the battery, and pilot
production may occur in late-1996. It should be noted that a complete battery
to power an EV has only recently become available, and prototype testing
will demonstrate the battery’s durability in an EV environment.
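As a rough illustration of what the quoted self-discharge figure implies, the sketch below converts the 32 percent loss over 5 days at 40°C into an equivalent per-day rate, assuming the loss compounds geometrically; this is only an estimate, not a manufacturer specification.

# Illustrative conversion of the quoted loss (32 percent over 5 days at 40 C)
# into an equivalent per-day self-discharge rate, assuming geometric compounding.

LOSS_OVER_PERIOD = 0.32
DAYS = 5

daily_rate = 1.0 - (1.0 - LOSS_OVER_PERIOD) ** (1.0 / DAYS)
print(f"Equivalent self-discharge: {daily_rate * 100:.1f} % per day")  # ~7.4 %/day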
Auto manufacturers do not believe that the Ovonic battery can be
manufactured at low cost, especially as other battery manufacturers
developing nickel metal hydride batteries do not support Ovonic’s cost
claims. Ovonic has suggested that the batteries can be manufactured at
$235/kWh and perhaps below, whereas others expect costs to be twice as
high (~$500/kWh) in volume production.76 It should also be noted that the
batteries are not yet easily recyclable, as the complex metal hydride used by
Ovonic can only be regenerated today by an expensive process.
High-temperature batteries
This category includes sodium sulfur, sodium-nickel chloride and lithium-
metal disulfide batteries. All high-temperature batteries suffer from the fact
that temperature must be maintained at about 300°C, which requires a
sophisticated thermal management system and battery insulation and imposes
a lack of packaging flexibility. Moreover, thermal losses must be
compensated by electrical heating when the vehicle is not in use, so that these
electrical losses are similar to self discharge. Hence, these losses may
significantly increase total electrical consumption for lightly used vehicles.
Meanwhile, these batteries offer much higher levels of energy storage
performance than lead acid or alkaline systems and are insensitive to
ambient temperature effects.
Sodium sulfur batteries have been in operation for more than a decade in
Europe and offer high specific energy (100 Wh/kg) with relatively low-cost
battery materials. They have the favorable characteristic that their specific
power does not decline significantly with the state-of-charge, although the
specific power value is a relatively low 130 W/kg. 77 More recently, Silent
Power has unveiled a new design, the MK6, with a specific energy of 120
Wh/kg and specific power of about 230 W/kg.78 However, the corrosivity of
the battery materials at high temperature has led to limited calendar life (to
date), and reliability is affected if the battery “freezes.” Even now, a leading
manufacturer, ABB, claims a battery life of less than three years for its
sodium sulfur battery. Silent Power has estimated a selling price of $250/kWh
in volume production of 1050 units/month for its MK6 battery.
Sodium-nickel chloride batteries have many of the sodium sulfur batteries’
favorable characteristics along with reduced material corrosivity, so that they
may have longer calendar life. These batteries are being extensively tested in
Europe, and the latest versions (dubbed ZEBRA in Europe) have shown
energy densities over 80 Wh/kg and specific power of over 110 W/kg at full
charge. 79 Other advancements are expected to increase both specific energy
and specific power. However, specific power drops to nearly half the fully
charged value at 80 percent DoD, and possibly is also reduced with age or
cycles used. Despite this problem, this battery type has emerged as a leading
contender in Europe owing to its potential to meet a life goal of five years.
Lithium-metal sulfide bipolar batteries hold the promise of improvements
in specific energy and power relative to the other “hot” batteries, but they are
in a very early stage of their development. Work by Argonne National
Laboratories has shown very good prospects for this type of battery. It is
lithium’s low equivalent weight that gives lithium batteries their high-energy
content of three to five times that of a lead acid battery. Research efforts on
lithium-metal sulfide batteries of the bipolar type are being funded by the
USABC, and battery developers hope to achieve specific energy levels of
over 125 Wh/kg and power levels of 190 W/kg.80 Initial tests on cells have
indicated approximately constant power output with battery DoD, and the
system also holds the potential for long life and maintenance free operation,
but substantial research is still required to meet these goals. Problem areas
include corrosion and thermal management, as well as durability. At this
point, an EV-type battery or module has not yet been fabricated.
Lithium-Ion
This battery type has many supporters who consider it a leading long-term
candidate for EV power. The battery has been studied at the cell level and has
demonstrated the following advantages:
· high specific energy of about 100 to 110 Wh/kg,
· good cycle performance with a life of over 1,000 cycles at 100
percent DoD,
· maintenance free system,
· potential for low cost.
The battery developer, SAFT, has used a lithium-nickel oxide (LiNiO2)
cathode and a carbon anode, with an electrolyte of confidential
composition, to demonstrate a prototype cell with the above properties. SAFT
has publicly stated that it can attain a specific power of about 200 W/kg, and
costs near the $150/kWh goal, similar to the statements of other battery
developers. Nevertheless, there is much development work to be done, as the
current system is seriously degraded by overcharge or overdischarge, and a
mass production process for the anode material is not well developed.82 The
battery holds promise for commercialization in the post-2005 time frame.
Solid electrolyte batteries
These are potentially extremely “EV friendly” batteries in that they
are spillage proof and maintenance free. A schematic of the lithium polymer
battery is shown in figure 3-4, and the battery can be manufactured as
“sheets” using manufacturing technology developed for magnetic tape
production. Many problems still remain to be resolved for lithium-polymer
rechargeable batteries including the need for reversible positive electrode
materials and stable high conductivity polymers as well as scale-up problems
associated with high voltages and current. Researchers at Oak Ridge National
Laboratory (ORNL) have projected specific energy and power of 350 Wh/kg
and 190 W/kg, respectively, but these figures are based on laboratory cell
performance data.83 Actual data from Westinghouse and 3M suggest that the
specific energy and power from an entire battery may be at half the levels
projected by ORNL for a single cell. 84 Other researchers have suggested
that sodium-polymer batteries may be superior to lithium-polymer versions,
and could have lower costs. However, even a prototype EV size battery is
possibly several years away.
As noted, the previous discussion covers only those battery types that are
highly regarded today, but there are numerous other electrochemical couples
in various stages of development with the potential to meet USABC goals.
These include nickel-zinc, zinc-bromine, and sodium-polydisulfide systems;
these are being actively researched but need considerable development before
they can become serious contenders. Nickel-zinc and zinc-bromine batteries
have energy densities comparable to Ni-MH batteries but significantly lower
power densities of about 100 W/kg, so that they can compete only if costs are
low and they have long life. 86 Sodium-polydisulfide batteries are in a very
early stage of development and little is publicly known about their
performance parameters.
Table 3-12 provides a summary of the state-of-the-art for batteries of
different types. It is important to note that the actual usable specific energy
and power can differ significantly from the values listed for some batteries.
Lead acid batteries, for example, should not be discharged beyond 80 percent
DoD, so that the usable specific energy of the advanced lead acid battery is
only (40 × 0.8), or 32 Wh/kg.
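The usable-energy arithmetic above can be extended to a very rough pack-level estimate. In the sketch below, the 40 Wh/kg rating and the 80 percent DoD limit come from the text, while the pack mass and the vehicle's energy consumption per mile are purely illustrative assumptions.

# The 40 Wh/kg rating and the 80 percent DoD limit are from the text above;
# the pack mass and per-mile consumption are illustrative assumptions only.

RATED_WH_PER_KG = 40.0    # advanced lead acid, rated specific energy
MAX_DOD = 0.80            # recommended depth-of-discharge limit
PACK_MASS_KG = 350.0      # assumed pack mass
WH_PER_MILE = 250.0       # assumed vehicle consumption, battery to wheels

usable_wh_per_kg = RATED_WH_PER_KG * MAX_DOD        # 32 Wh/kg, as in the text
usable_pack_wh = usable_wh_per_kg * PACK_MASS_KG    # 11,200 Wh for this pack
range_miles = usable_pack_wh / WH_PER_MILE          # roughly 45 miles

print(f"Usable specific energy: {usable_wh_per_kg:.0f} Wh/kg")
print(f"Usable pack energy:     {usable_pack_wh / 1000:.1f} kWh")
print(f"Indicative range:       {range_miles:.0f} miles")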
Bringing an Advanced Battery to Market
Initial testing of a simple cell at the laboratory is basically a proof of concept,
and is utilized to test the stability and output under carefully controlled
conditions. A group of cells aggregated into a module is the first step toward
a functional battery, and scale-up, cell packaging, interconnections between
cells, and multiple cell charge and discharge control are demonstrated in this
phase. The development of a prototype EV battery with an overall energy
storage capability of 20 to 40 kWh at a voltage of 200 to 300 V involves
collections of modules in an enclosure with appropriate electrical and thermal
management. These batteries typically must be tested extensively in the real
world EV environment to understand the effects of severe ambient conditions,
vibration, cell failures, and cyclically varying discharge rates--all of which can have
significant effects on the usable power, energy, and life of a battery that is not
properly designed. A preproduction battery is one that has been redesigned to
account for the real world experience, and is also suitable for mass
production. Typically, preproduction batteries are built at modest volumes of
a few hundred per year to ascertain whether the production process is suitable
for high-volume output with low-production variability.
Many new entrants in the advanced battery arena have made bold claims
about the availability of their particular battery designs for commercial use in
time to meet the California “ZEV” requirements for 1998. More established
battery manufacturers contest their claims, and have stated that several years
of in-vehicle durability testing is required before a preproduction design can
be completed, as batteries often fail in the severe EV environment. The case
of ABB’s sodium-sulphur battery is illustrative. Early prototype batteries
were available during the late 1980s and tested by Mercedes and BMW.
These prototypes had a calendar life of about six months and were plagued by
excessive failures. Second generation prototypes were supplied to BMW and
Ford, and these doubled calendar life to about one year. More recently, two
of the Ford Ecostar vehicles have reported fires during charging. ABB is
currently providing third generation prototypes to Ford, but even these are
not considered production ready. ABB is willing to guarantee a calendar life
of only one year in EV service for its latest sodium-sulfur prototypes,
although actual life may be two to three years.
Although the sodium-sulfur battery may pose especially difficult
development problems, such experiences are reported even for advanced lead
acid batteries whose basic principles have been utilized in production batteries
for many decades. INEL reports that the Sonnenschein advanced lead acid
battery has demonstrated very good cycle life in the laboratory, but that its in-
use reliability is very poor.88 Once a battery has moved beyond the single-
cell stage, manufacturers estimate that a minimum of three years per stage is
required to move to the module, prototype battery, and preproduction battery
stages, and a total testing time of nearly a decade will be necessary for a
proven production model.
This estimate of time assumes that problems are successfully tackled in each
stage and that manufacturing processes can replicate cells with very little
variability in mass production--an assumption that remains unproven for
almost all advanced battery types demonstrated to date. Based on this, it is
reasonable to conclude that batteries whose status is listed as “3” in table 3-12
will not be mass produced until 2000 at the earliest.
Vehicle lifetime costs depend on the battery durability, an issue about which
little is known except for the fact that usable lifetimes are quite different for
different batteries. It should be noted that battery life depends on the design of
the battery system and its usage pattern. Also, there are tradeoffs between
battery life and cost, specific energy, specific power, and user specification of
end-of-life criteria. For example, a battery may have very different “life,” if
the end-of-life criterion is set at 90 percent of initial energy density, or is set
at 80 percent. Nevertheless, for almost any set of reasonable criteria for end-
of-life that are acceptable to auto manufacturers, there are currently no
advanced batteries that have demonstrated an average five-year life in the
field, nor have any battery manufacturers been willing to warranty a battery
for this period. Hence, even the prospect of five-year life in customer service
is unproven and is an input assumption for most analyses of battery costs.
Cost per kilowatt-hour of storage capacity in table 3-9 is based on production
rates of at least 10,000 modules per month and is estimated from the
educated guesses of battery manufacturers (except for the nickel-metal
hydride battery, where the cost controversy was noted earlier). The cost
estimates in the table are based on both battery and auto manufacturer inputs.
Although OTA has attempted to include only estimates that appear realistic
given current knowledge, these estimates may still be unreliable as most
battery types are not yet production ready.
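To put the per-kilowatt-hour estimates in perspective, the sketch below simply multiplies them out to pack level, using the two nickel-metal hydride cost figures quoted earlier ($235/kWh claimed versus roughly $500/kWh expected by others) and the 20 to 40 kWh pack sizes cited above for prototype EV batteries.

# Pack-level cost implied by the per-kilowatt-hour estimates: the two unit
# costs are those quoted for nickel-metal hydride above, and the pack sizes
# are the 20 to 40 kWh range cited for prototype EV batteries.

def pack_cost(usd_per_kwh, pack_kwh):
    return usd_per_kwh * pack_kwh

for usd_per_kwh in (235.0, 500.0):      # developer claim vs. other estimates
    for pack_kwh in (20.0, 40.0):
        print(f"${usd_per_kwh:.0f}/kWh x {pack_kwh:.0f} kWh = "
              f"${pack_cost(usd_per_kwh, pack_kwh):,.0f} per pack")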
Other Engine And Fuel Technologies
Overview
Numerous engine and fuel technologies have been suggested as powerplants
and power sources for the future. In general, most of the alternative fuels, with
one exception, are hydrocarbon fuels ranging from natural gas to biomass-
derived alcohol fuels, and most of these are being used commercially on a
limited scale in the United States. Although these fuels can offer significant
advantages in emissions and small advantages in fuel economy over
gasoline/diesel, their properties and benefits have received significant
attention over the last decade, and there is a large body of literature on their
costs and benefits. The one exception to this is hydrogen, which often is
portrayed as the zero emission fuel of the future. Hydrogen’s ability to fuel
current and future automobiles is considered in this section.
Alternative engine technologies considered for the future include gas turbine
and Stirling engines. (In this context, the two-stroke engine is considered as a
“conventional” engine type, as it is similar in operating principles to four-
stroke engines). The gas turbine engine, in particular, has received increased
attention recently as a power source for hybrid vehicles. As a result, the
potential for the gas turbine and Stirling engine in non-traditional
applications is also discussed here.
Hydrogen
Hydrogen is viewed by many as the most environmentally benign fuel,
because its combustion will produce only water and NOX as exhaust
components, and its use in a fuel cell produces only water as a “waste”
product. Because hydrogen, like methanol, must be derived from other
naturally occurring compounds at substantial expenditure of energy, fuel
economy evaluations of hydrogen vehicles should consider the overall energy
efficiency of the hydrogen fuel cycle. Even if hydrogen is produced using
electricity from photovoltaic cells, it may be more efficient to use the
electricity directly for transportation rather than through the production of
hydrogen, depending on the location of the hydrogen production.
Because hydrogen is a gas at normal temperatures and pressures and has very
low energy density, it has serious storage problems on-board a vehicle. There
are essentially four different ways to store hydrogen on board, namely:
· as compressed hydrogen gas,
· as a cryogenic liquid,
· reacted with metals to form a hydride, and
· adsorbed on carbon sieves.
Compressed hydrogen gas can be stored in high-pressure tanks (of advanced
composite material) at pressures of 3,000 to 6,000 pounds per square inch
(psi). To store the equivalent of 10 gallons of gasoline, a tank at 3,000 psi
must have a volume of 150 gallons, and the tank weight is approximately 200
lbs.137 Doubling the pressure to 6,000 psi does not halve the tank volume
because of increasing tank wall thickness and the nonideal gas behavior of
hydrogen; at 6,000 psi, the tank volume is 107 gallons, and its weight is 225
lbs. Increasing tank pressure leads to greater safety problems and increased
energy loss for compressing the hydrogen; at 6,000 psi, the energy cost of
compression is approximately 10 to 15 percent of the fuel energy.
Realistically, pressures over 6,000 psi are not considered safe, 138 and tank
capacity over 30 or 40 gallons would seriously compromise the room
available in a car. Hence, compressed hydrogen gas storage in a car would
have the energy equivalent of only about 3.0 gallons of gasoline for a 6,000
psi tank of a size that could be accommodated without seriously impairing
trunk room.
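A quick check of the compressed-gas arithmetic above is given in the sketch below, which assumes the stored energy scales linearly with tank volume at a fixed pressure (a simplification) and uses the 107 gallon tank per 10 gasoline-gallon equivalents at 6,000 psi quoted in the text.

# Assumes the stored energy scales linearly with tank volume at a fixed
# pressure; the 107 gallon tank per 10 gasoline-gallon equivalents at
# 6,000 psi is the figure quoted in the text.

GASOLINE_EQUIV_GAL = 10.0
TANK_VOL_AT_6000_PSI = 107.0

equiv_per_tank_gallon = GASOLINE_EQUIV_GAL / TANK_VOL_AT_6000_PSI

for tank_gallons in (30.0, 40.0):   # largest tanks that might fit in a car
    equiv = equiv_per_tank_gallon * tank_gallons
    print(f"{tank_gallons:.0f} gal tank at 6,000 psi ~ "
          f"{equiv:.1f} gal gasoline equivalent")
# A tank of roughly 32 gallons corresponds to the ~3.0 gallon
# equivalent quoted above.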
Liquid storage is possible because hydrogen liquefies at -253°C, but a highly
insulated--and, thus, heavy and expensive--cryogenic storage tank is required.
A state-of-the-art tank designed by BMW accommodates 25 gallons of liquid
hydrogen.139 It is insulated by 70 layers of aluminum foil with interlayered
fiberglass matting. The weight of the tank when full is about 130 lbs, and
hydrogen is held at an overpressure of up to 75 psi. The total system volume
is about five times that of an energy equivalent gasoline tank (gasoline has
3.8 times the energy content of liquid hydrogen per unit volume), and the
weight is twice that of the gasoline tank. Heat leakage results in an
evaporation loss of 1 to 2 percent of the tank volume per day. Although the
container size for a 120-liter tank would fit into the trunk of most cars, there
are safety concerns regarding the venting of hydrogen lost to evaporation,
and crash-safety-related concerns. There is also an important sacrifice in
overall energy efficiency, because the energy required to liquefy hydrogen is
equal to about one-third the energy content of hydrogen.
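The evaporation and liquefaction figures above can be turned into a rough fuel-loss estimate, as sketched below; the one-week parking period is an arbitrary illustration, and the daily boil-off is assumed to compound.

# The 1 to 2 percent per day boil-off and the one-third liquefaction energy
# penalty are from the text; the one-week parking period is an arbitrary
# illustration, and daily losses are assumed to compound.

DAYS_PARKED = 7

for daily_loss in (0.01, 0.02):
    remaining = (1.0 - daily_loss) ** DAYS_PARKED
    print(f"{daily_loss * 100:.0f} %/day boil-off: "
          f"{remaining * 100:.0f} % of the hydrogen left after {DAYS_PARKED} days")

liquefaction_overhead = 1.0 / 3.0
print(f"Energy spent on liquefaction: ~{liquefaction_overhead * 100:.0f} % "
      "of the hydrogen's energy content")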
Metal hydride storage utilizes a process by which metals such as titanium and
vanadium react exothermally (that is, the reaction generates heat) with
hydrogen to form a hydride. During refueling, heat must be removed when
hydrogen is reacting with the metals in the tank; when the vehicle powerplant
requires fuel, heat must be supplied to release the hydrogen from the tank. For
these reasons, the entire tank must be designed as a heat exchanger, with
cooling and heating water flow ducts. The hydrogen used must also be very
pure, as gaseous impurities impair the chemical reactions in the metal hydride
tank. Moreover, the weight of metal required to store hydrogen is very high:
to store the energy equivalent of 10 gallons of gasoline, the tank would weigh
more than 500 lbs.141 The main advantages of the system are safety and low
hydrogen pressure. The overall process is so cumbersome, however, that it
seems an unlikely prospect for light duty vehicles, although such systems can
be used in buses and trucks.
Adsorption in carbon sieves was thought to be a promising idea to increase
the capacity of compressed gas cylinders, although there is a weight penalty.
However, most recent work on carbon sieves has concluded that the
capacity increase is significant only at pressures in the 1,000 to 1,500 psi
range; at 3,000 psi or higher pressure, carbon sieves appear to offer no benefit
over compressed gas cylinders. 142 Because a pressure of 5,000 psi or more
is desirable, it does not appear that this technology is of use for on-board
storage.
Hydrogen can be used directly in engines or in fuel cells. When used in
conventional IC engines, the combustion properties of hydrogen tend to cause
irregular combustion and backfires. 143 To prevent this, BMW has used very
lean mixtures successfully, with the added benefit of no measurable
emissions of NOX and an improvement in peak energy efficiency of 12 to 14
percent. Because of hydrogen’s low density, however, operating lean results
in a power reduction of about 50 percent from the engine’s normal capacity.
BMW uses superchargers to restore some of the power loss, 144 but a larger
engine is still required, and the added weight and increased friction losses
could offset much of the energy efficiency gain. Mercedes Benz has solved
the low power problem by operating at stoichiometry or rich air fuel ratio at
high loads, coupled with water injection to reduce backfire and knocking
potential. The Mercedes approach results in significant NOX emissions,
however, and the engine requires a three-way catalyst to meet ULEV NOX
standards. Overall engine efficiency is not much different from gasoline
engine efficiency owing to compromises in spark timing and compression
ratio.
The use of hydrogen in a compression-ignition (diesel) engine has also been
attempted by directly injecting liquid hydrogen into the combustion chamber.
Cryogenic injectors operating on low lubricity liquid hydrogen pose difficult
engineering problems, however, and auto manufacturers doubt whether a
commercially viable system can ever be developed.
Improvements To Automatic Transmissions
The transmission in a vehicle matches the power requirements of the
automobile to the power output available from an engine or motor; the
automatic transmission’s selection of different gears keeps the engine
operating in speed ranges that allow high levels of efficiency to be achieved.
Most modern transmissions operate at efficiencies of over 85 percent on the
city cycle and 92 to 94 percent on the highway cycle. The efficiency losses
that do occur are caused primarily by:
· Hydraulic losses in the torque converter (current automatic
transmissions use a hydraulic system to transmit the engine power to the
drivetrain).
· Designs that avoid the operating point that would maximize
fuel economy. If fuel economy were the only concern, the optimum
point would maximize torque and minimize engine speed (rpm), which
reduces throttling and friction losses. Designing the transmission for
maximum efficiency leaves little or no reserve power, however, so that
even modest changes in road load horsepower may require a downshift--and
frequent downshifts are considered undesirable for customer
satisfaction. In addition, operating at too low an rpm causes excessive
driveline harshness and poor accelerator response.
Improvements to current transmissions can occur in the following areas:
· reduction in flow losses in the torque converter for automatic
transmissions;
· increase in the ratio spread between top and first gear;
· increase in the number of gear steps between the available limits
(that is, moving to five or more speeds in an automatic transmission),
with continuous variable transmissions (CVTs) being the extreme limit;
and
· electronic control of transmission shift points and torque converter
lockup.
All of these improvements have been adopted, in some form, by automakers,
but their penetration of the fleet is incomplete and, in some cases, further
technical improvements are possible. For example, Mercedes-Benz and
Nissan have recently (1993) introduced a five-speed automatic transmission,
while GM introduced a six-speed manual transmission. Product plans reveal
that such transmissions are likely to be more widely adopted by 2005. CVTs
have been introduced in Europe and Japan, and in the United States in one car
model that has since been discontinued.
Torque converter improvements
Redesign of the torque converter to reduce flow losses will yield improved
fuel economy. Toyota has introduced a new “Super Flow” converter in its
Lexus LS400 vehicle.157 The new converter was computer designed to
optimize impeller blade angle and blade shape to reduce loss of oil flow. In
addition, new manufacturing techniques were developed for the impeller to
increase rigidity. As a result, Toyota claims the converter efficiency is the
world’s best, and is 3 percent to 5 percent higher than other torque
converters.158 Such an improvement is expected to provide a 0.5 percent
benefit in composite fuel economy.
Greater number of gears
Increasing the number of transmission gears can be used to provide a wider
ratio spread between first and top gears, or else to increase the number of
steps with a constant ratio spread for improved drivability and reduced shift
shock. In addition, the wider ratio spread can be utilized to provide higher
performance in the first few gears while keeping the ratio of engine speed to
car speed in top gear constant, or else to maintain the same performance in
the first few gears and to reduce engine speed in top gear. Because the
manufacturer is able to select among these tradeoffs, different manufacturers
have chosen different strategies in selecting gear ratios; therefore, any fuel
economy gain from increasing the number of gears is dependent upon these
strategies.
Five-speed automatic transmissions have only recently been commercialized
in Japan and Europe. Nissan has provided a comprehensive analysis of the
effect of numbers of gears and choice of first gear and top gear ratios on fuel
economy.159 They found declining benefits with increasing numbers of
gears, with little or no benefit above six gears. With a first gear ratio of 3.0
(similar to that of current automatics) they found no benefits in fuel economy
in using overdrive ratios lower than about 0.7. Increasing the first gear ratio
to about 4.0, however, provided better standing start performance. The
Nissan production five-speed transmission uses a 3.85 first gear ratio and a
0.69 overdrive ratio for a 5.56 ratio spread. At constant performance, Nissan
showed fuel economy gains in the 3 percent range. 160 Mercedes, the only
other manufacturer to have introduced a five-speed automatic, confirmed that
the fuel economy benefit over a four-speed automatic was in the 2 to 3
percent range. Ford estimates that their planned five-speed automatic would
provide a 2.5 percent fuel economy benefit at current performance levels, but
could have much smaller benefits at other levels.
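The ratio-spread arithmetic for the Nissan five-speed can be reproduced with the short sketch below. The first gear and overdrive ratios are those quoted above; the final-drive ratio and tyre size used to estimate cruise rpm are illustrative assumptions, and the computed spread of about 5.6 is close to the 5.56 quoted (the small difference presumably reflects rounding of the published ratios).

# First gear and overdrive ratios are those quoted above; the final-drive
# ratio and wheel speed at 60 mph (about a 24 inch tyre) are assumptions
# used only to show why a low overdrive ratio cuts engine rpm at cruise.

FIRST_GEAR = 3.85
OVERDRIVE = 0.69
FINAL_DRIVE = 4.1            # assumed final-drive ratio
WHEEL_RPM_AT_60_MPH = 840    # assumed wheel speed at 60 mph

ratio_spread = FIRST_GEAR / OVERDRIVE
engine_rpm = WHEEL_RPM_AT_60_MPH * FINAL_DRIVE * OVERDRIVE

print(f"Ratio spread: {ratio_spread:.2f}")     # ~5.6, vs. the 5.56 quoted
print(f"Engine speed in overdrive at 60 mph: {engine_rpm:.0f} rpm")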
A 2.5 percent fuel economy benefit appears representative of a five-speed
automatic over a four-speed automatic. With either a six-speed or seven-
speed transmission, complexity and weight increases appear to offset fuel
efficiency benefits.
A continuously variable transmission (CVT) offers an infinite choice of gear
ratios between fixed limits, allowing optimization of engine operating
conditions to maximize fuel economy. Currently, Subaru is the only
manufacturer that has offered a CVT in a small car in the United States.
Although there are several designs being tested, the CVT that is in production
features two conical pulleys driven by metal belts. The position of the belts
on the conical pulleys determines the gear ratio between input and output
shafts. Under steady-state conditions, the metal belt system can be less
efficient than a conventional system, but the fuel used over a complete
driving cycle is decreased because of the optimized speed/load conditions for
the engine. Nissan and Ford have developed CVTs using rollers under radial
loads that may be more efficient than metal belt designs.
Shift performance of the CVT should be equal to, or somewhat better than,
conventional automatic transmissions, with its main benefit the absence of
shift shock associated with discrete gear changes. However, a CVT can
produce unexpected changes in engine speed--that is, engine speed dropping
while the vehicle speed is increasing--which may deter consumer acceptance.
Moreover, attaining acceptable start-up vehicle performance could require the
use of a lockup torque converter or a conventional planetary gear set, or both,
which would add to cost and complexity. Nevertheless, developments in the
metal belt system coupled with weight reduction of future cars are expected
to enhance the availability of the CVT for use in all classes of cars and trucks
in the 2005 time frame.
During the early 1980s, CVTs were expected to provide substantial fuel
economy benefits over three-speed automatic transmissions. Researchers
from Ford161 showed that an Escort with a CVT of 82 percent efficiency
would have a fuel economy 14 percent higher than the fuel economy with a
three-speed automatic; at a CVT efficiency of 91 percent, the fuel economy
benefit was computed to be 27 percent (91 percent was considered to be an
upper limit of potential efficiency). Similarly, Gates Corporation installed a
CVT in a Plymouth Horizon and found a fuel economy improvement of 15.5
percent over a conventional three-speed automatic with lockup, at almost
identical performance levels. 162 Design compromises for drivability,
however, as well as improvements to the base (three speed) automatic since
the time these papers were published (1982), have resulted in lowered
expectations of benefits. A more recent test conducted by the Netherlands
Testing Organization on a Plymouth Voyager van with a 3.3 L V6 and a four-
speed automatic replaced by a Van Doorne CVT showed fuel economy
benefits of 13 percent on the city cycle and 5 percent on the highway cycle
for a 9.5 percent composite improvement (over a four-speed automatic). These figures,
however, may be unrepresentative of more average applications as supplier
companies usually provide the best possible benefit estimates. The current
consensus among auto manufacturers is that the CVT will be 4 to 8 percent
more efficient than current four-speed automatics with lockup. A 6 percent
improvement, including the benefit of the electronic control required to
maximize CVT benefits, would be consistent with the measured results from
the Subaru Justy CVT sold in the United States.
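The way separate city and highway gains combine into the roughly 9.5 percent composite figure quoted above can be checked with the sketch below, assuming the 55/45 harmonic (fuel-consumption-weighted) combination used for US composite fuel economy and an arbitrary 18/24 mpg baseline; the exact result depends on the baseline split.

# Assumes the 55/45 harmonic (fuel-consumption-weighted) combination used
# for US composite fuel economy and an arbitrary 18/24 mpg baseline.

def composite_mpg(city_mpg, hwy_mpg):
    return 1.0 / (0.55 / city_mpg + 0.45 / hwy_mpg)

base = composite_mpg(18.0, 24.0)
cvt = composite_mpg(18.0 * 1.13, 24.0 * 1.05)   # +13 % city, +5 % highway

print(f"Composite improvement: {(cvt / base - 1) * 100:.1f} %")
# Roughly 10 percent here, close to the ~9.5 percent quoted above; the exact
# figure depends on the baseline city/highway split.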
The benefits for the CVT, however, are associated with current engine
technology. Reduction of fuel consumption is associated with two effects:
reduced friction losses owing to lower engine rpm, and reduced pumping
losses owing to operation at higher load. In the future, engines equipped with
variable valve timing and direct-injection stratified-charge combustion will have
much lower pumping losses than current engines, thus eroding part of the
CVT’s fuel-saving potential. Typically, this would reduce the
benefits of CVTs to about half the value estimated for current engines, or to
approximately 3 percent.
Electronic transmission control (ETC)
ETC systems to control shift schedules and torque converter lockup can
replace the hydraulic controls used in most transmissions. Such systems were
first introduced in Toyota’s A43DE transmission in 1982. The benefit of the
ETC system lies in the potential to maximize fuel economy by tailoring shifts
and torque converter lockup to the driving schedule. Domestic auto
manufacturers, however, claim that the measured benefits are small, because
most modern nonelectronic transmissions have been optimized for the FTP
test cycle. In 1994, more than half of all vehicles had ETC. Although several
electronically controlled transmissions are available, “paired sample”
comparisons are impossible as no example is available of the same car/engine
combination with nonelectronic and electronic transmissions. Regression
studies across different models of similar weight and performance show a 0.9
percent advantage164 for the electronic transmission. However, it appears
there is potential for greater improvement with some loss of smoothness or
“feel.”
Estimates by Ross and DeCicco165 have claimed very large benefits for ETC
by following an aggressive shift profile, and they estimate fuel economy
benefits as great as 9 percent. These benefits have been estimated from
simulation models, although detailed documentation of the input assumptions
and shift schedule followed is unavailable. Clearly, shifting very early into a
high gear (such as by shifting from second gear to fourth gear directly) and
operating the engine at very low rpm and high torque can produce significant
gains in fuel economy--but at a great cost to drivability and vibration.
Operating the engine at very low rpm leads to conditions known as “lugging”
that causes a very jerky ride. Current industry trends, however, are to
maximize smoothness, so that it is difficult to envision a strategy similar to
the one advocated by Ross and DeCicco being introduced without incentives
strong enough to override performance and comfort considerations.
AUTOMOTIVE ENGINES
Engine & Working Principles
A heat engine is a machine that converts heat energy into mechanical
energy. The combustion of fuels such as coal, petrol or diesel generates heat.
This heat is supplied to a working substance at high temperature. By the
expansion of this substance in suitable machines, heat energy is converted
into useful work. Heat engines can be further divided into two types:
(i) External combustion and
(ii) Internal combustion.
In a steam engine the combustion of fuel takes place outside the engine and
the steam thus formed is used to run the engine. Thus, it is known as external
combustion engine. In the case of internal combustion engine, the
combustion of fuel takes place inside the engine cylinder itself.
The IC engine can be further classified as:
(i) stationary or mobile,
(ii) horizontal or vertical and
(iii) low, medium or high speed.
The two distinct types of IC engines used for either mobile or stationary
operations are:
(i) diesel and
(ii) carburettor.
Chart 1. Types of Heat Engines
Spark Ignition (Carburettor Type) IC Engine
In this engine liquid fuel is atomised, vaporized and mixed with air in correct
proportion before being taken to the engine cylinder through the intake
manifolds. The ignition of the mixture is caused by an electric spark and is
known as spark ignition.
Compression Ignition (Diesel Type) IC Engine
In this engine only the liquid fuel is injected into the cylinder under high
pressure, and it is ignited by the heat of the air compressed in the cylinder.
Constructional Features of IC Engine
The cross-section of an IC engine is shown in Fig. 1. A brief description of its
parts is given below.
Cylinder:
The cylinder of an IC engine constitutes the basic and supporting portion of
the engine power unit. Its major function is to provide space in which the
piston can operate to draw in the fuel mixture or air (depending upon spark
ignition or compression ignition), compress it, allow it to expand and thus
generate power. The cylinder is usually made of high-grade cast iron. In
some cases, to give greater strength and wear resistance with less weight,
chromium, nickel and molybdenum are added to the cast iron.
Piston:
The piston of an engine is the first part to begin movement and to transmit
power to the crankshaft as a result of the pressure and energy generated by
the combustion of the fuel. The piston is closed at one end and open on the
other end to permit direct attachment of the connecting rod and its free action.
The materials used for pistons are grey cast iron, cast steel and aluminium
alloy. However, the modern trend is to use only aluminium alloy pistons in
the tractor engine.
Cross-section of a diesel engine
Piston Rings:
These are made of cast iron on account of their ability to retain bearing
qualities and elasticity indefinitely. The primary function of the piston rings
is to retain compression and at the same time reduce the cylinder wall and
piston wall contact area to a minimum, thus reducing friction losses and
excessive wear. The other important functions of piston rings are the control
of the lubricating oil, cylinder lubrication, and transmission of heat away
from the piston and from the cylinder walls. Piston rings are classed as
compression rings and oil rings depending on their function and location on
the piston. Compression rings are usually plain one-piece rings and are
always placed in the grooves nearest the piston head. Oil rings are grooved or
slotted and are located either in the lowest groove above the piston pin or in a
groove near the piston skirt. Their function is to control the distribution of the
lubricating oil to the cylinder and piston surface in order to prevent
unnecessary or excessive oil consumption.
Piston Pin:
The connecting rod is connected to the piston through the piston pin. It is
made of case hardened alloy steel with precision finish. There are three
different methods to connect the piston to the connecting rod.
Connecting Rod: This is the connection between the piston and
crankshaft. The end connecting the piston is known as small end and the
other end is known as big end. The big end has two halves of a bearing bolted
together. The connecting rod is made of drop forged steel and the section is
of the I-beam type.
Crankshaft: This is connected to the piston through the connecting rod
and converts the linear motion of the piston into the rotational motion of the
flywheel. The journals of the crankshaft are supported on main bearings,
housed in the crankcase. Counter-weights and the flywheel bolted to the
crankshaft help in the smooth running of the engine.
Engine Bearings: The crankshaft and camshaft are supported on anti-
friction bearings. These bearings must be capable of withstanding high
speed, heavy load and high temperatures. Normally, cadmium, silver or
copper lead is coated on a steel back to give the above characteristics. For
single cylinder vertical/horizontal engines, the present trend is to use ball
bearings in place of main bearings of the thin shell type.
Valves: To allow air to enter the cylinder and exhaust gases to escape
from it, valves are provided, known as inlet and exhaust
valves respectively. The valves are mounted either on the cylinder head or on
the cylinder block.
Camshaft: The valves are operated by the action of the camshaft, which
has separate cams for the inlet and exhaust valves. The cam lifts the valve
against the pressure of the spring, and as soon as the cam changes position the
spring closes the valve. The camshaft is driven from the crankshaft through either
a gear train or a sprocket-and-chain system, and it rotates at half the speed of the crankshaft.
Flywheel: This is usually made of cast iron and its primary function is to
maintain uniform engine speed by carrying the crankshaft through the
intervals when it is not receiving power from a piston. The size of the
flywheel varies with the number of cylinders and the type and size of the
engine. It also helps in balancing rotating masses.
Principles of Operation Of IC Engines:
Four-Stroke Cycle Diesel Engine
In four-stroke cycle engines there are four strokes completing two revolutions
of the crankshaft. These are respectively, the suction, compression, power
and exhaust strokes. In Fig. 3, the piston is shown descending on its suction
stroke. Only pure air is drawn into the cylinder during this stroke through the
inlet valve, whereas, the exhaust valve is closed. These valves can be
operated by the cam, push rod and rocker arm. The next stroke is the
compression stroke in which the piston moves up with both the valves
remaining closed. The air, which has been drawn into the cylinder during the
suction stroke, is progressively compressed as the piston ascends. The
compression ratio usually varies from 14:1 to 22:1. The pressure at the end of
the compression stroke ranges from 30 to 45 kg/cm². As the air is
progressively compressed in the cylinder, its temperature increases until,
near the end of the compression stroke, it becomes sufficiently high
(650-800°C) to instantly ignite any fuel that is injected into the cylinder.
When the piston is near the top of its compression stroke, a liquid
hydrocarbon fuel, such as diesel oil, is sprayed into the combustion chamber
under high pressure (140-160 kg/cm²), higher than that existing in the
cylinder itself. This fuel then ignites, being burnt with the oxygen of the
highly compressed air.
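The compression temperatures quoted above can be checked against the ideal-gas relation T2 = T1 × r^(γ−1) for adiabatic compression, as in the sketch below; the intake temperature and the use of γ = 1.4 are simplifying assumptions, and real engines run somewhat cooler because of heat loss to the cylinder walls.

# Ideal adiabatic compression, T2 = T1 * r**(gamma - 1), with gamma = 1.4 and
# an assumed intake temperature of about 47 C (320 K). Real engines run
# somewhat cooler because of heat loss to the cylinder walls.

T_INTAKE_K = 320.0
GAMMA = 1.4

for r in (14, 18, 22):   # compression ratios quoted in the text
    t2_c = T_INTAKE_K * r ** (GAMMA - 1.0) - 273.0
    print(f"r = {r}:1  ->  end-of-compression temperature ~ {t2_c:.0f} C")
# Roughly 650 to 830 C, consistent with the 650-800 C ignition range above.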
During the fuel injection period, the piston reaches the end of its compression
stroke and commences to return on its third consecutive stroke, viz., power
stroke. During this stroke the hot products of combustion consisting chiefly
of carbon dioxide, together with the nitrogen left from the compressed air
expand, thus forcing the piston downward. This is the only working stroke of
the cycle.
During the power stroke the pressure falls from its maximum combustion
value (47-55 kg/cm²), which is usually higher than the maximum
compression pressure (45 kg/cm²), to about 3.5-5 kg/cm² near the end of the
stroke. The exhaust valve then opens, usually a little earlier than when the
piston reaches its lowest point of travel. The exhaust gases are swept out on
the following upward stroke of the piston. The exhaust valve remains open
throughout the whole stroke and closes at the top of the stroke.
The reciprocating motion of the piston is converted into the rotary motion of
the crankshaft by means of a connecting rod and crankshaft. The crankshaft
rotates in the main bearings, which are set in the crankcase. The flywheel is
fitted on the crankshaft in order to smoothen out the uneven torque that is
generated in the reciprocating engine.
Principle of four-stroke engine
Two-Stroke Cycle Diesel Engine:
The cycle of four strokes of the piston (the suction, compression, power
and exhaust strokes) is completed in only two strokes in the case of a two-
stroke engine. The air is drawn into the crankcase due to the suction created
by the upward stroke of the piston. On the down stroke of the piston it is
compressed in the crankcase. The compression pressure is usually very low,
being just sufficient to enable the air to flow into the cylinder through the
transfer port when the piston reaches near the bottom of its down stroke.
The air thus flows into the cylinder, where the piston compresses it as it
ascends, till the piston is nearly at the top of its stroke. The compression
pressure is increased sufficiently high to raise the temperature of the air
above the self-ignition point of the fuel used. The fuel is injected into the
cylinder head just before the completion of the compression stroke and only
for a short period. The burnt gases expand during the next downward stroke
of the piston. These gases escape to the atmosphere through the exhaust port,
which is uncovered by the piston near the end of its downward stroke.
Modern Two-Stroke Cycle Diesel Engine
The crankcase method of air compression is unsatisfactory, as the exhaust
gases do not completely escape from the cylinder during the port opening. Also,
there is a loss of air through the exhaust ports during the cylinder charging
process. To overcome these disadvantages, blowers are used to pre-compress
the air. This pre-compressed air enters the cylinder through the port. An
exhaust valve is also provided, which opens mechanically just before the
opening of the inlet ports.
Four-Stroke Spark Ignition Engine
In this engine gasoline is mixed with air, broken up into a mist and partially
vaporized in a carburettor. The mixture is then sucked into the cylinder.
There it is compressed by the upward movement of the piston and is ignited
by an electric spark. When the mixture is burned, the resulting heat causes the
gases to expand. The expanding gases exert a pressure on the piston (power
stroke). The exhaust gases escape in the next upward movement of the piston.
The strokes are similar to those discussed under four-stroke diesel engines.
The various temperatures and pressures are shown in Fig. 6. The compression
ratio varies from 4:1 to 8:1 and the air-fuel mixture from 10:1 to 20:1.
Two-Stroke Cycle Petrol Engine
The two-cycle carburettor type engine makes use of an airtight crankcase for
partially compressing the air-fuel mixture. As the piston travels down, the
mixture previously drawn into the crankcase is partially compressed. As the
piston nears the bottom of the stroke, it uncovers the exhaust and intake ports.
The exhaust flows out, reducing the pressure in the cylinder. When the
pressure in the combustion chamber falls below the pressure in the crankcase,
the fresh mixture flows from the crankcase through the port openings into the
combustion chamber, and the incoming mixture is deflected upward by a
baffle on the piston. As the piston
moves up, it compresses the mixture above and draws into the crankcase
below a new air-fuel mixture.
The two-stroke cycle engine can be easily identified by the air-fuel mixture
valve attached to the crankcase and the exhaust port located at the bottom of
the cylinder.
Comparison Of CI And SI Engines
The CI engine has the following advantages over the SI engine.
1. Reliability of the CI engine is much higher than that of the SI engine. This
is because in case of the failure of the battery, ignition or carburettor system,
the SI engine cannot operate, whereas the CI engine, with a separate fuel
injector for each cylinder, has less risk of failure.
2. The distribution of fuel to each cylinder is uniform as each of them has a
separate injector, whereas in the SI engine the distribution of fuel mixture is
not uniform, owing to the design of the single carburettor and the intake
manifold.
3. Since the servicing period of the fuel injection system of the CI engine is
longer, its maintenance cost is less than that of the SI engine.
4. The expansion ratio of the CI engine is higher than that of the SI engine;
therefore, the heat loss to the cylinder walls is less in the CI engine than that
of the SI engine. Consequently, the cooling system of the CI engine can be of
smaller dimensions.
5. The torque characteristics of the CI engine are more uniform which results
in better top gear performance.
6. The CI engine can be switched over from part load to full load soon after
starting from cold, whereas the SI engine requires warming up.
7. The fuel (diesel) for the CI engine is cheaper than the fuel (petrol) for SI
engine.
8. The fire risk in the CI engine is minimised due to the absence of the
ignition system.
9. On part load, the specific fuel consumption of the CI engine is low.
Advantages and Disadvantages Of Two-
Stroke Cycle Over Four-Stroke Cycle
Engines
Advantages:
1) The two-stroke cycle engine gives one working stroke for each revolution
of the crankshaft. Hence theoretically the power developed for the same
engine speed and cylinder volume is twice that of the four-stroke cycle
engine, which gives only one working stroke for every two revolutions of the
crankshaft. However, in practice, because of poor scavenging, only 50-60%
extra power is developed.
2) Due to one working stroke for each revolution of the crankshaft, the
turning moment on the crankshaft is more uniform. Therefore, a two-stroke
engine requires a lighter flywheel.
3) The two-stroke engine is simpler in construction. The design of its ports is
much simpler and their maintenance easier than that of the valve mechanism.
4) The power required to overcome frictional resistance of the suction and
exhaust strokes is saved, resulting in some economy of fuel.
5) Owing to the absence of the cam, camshaft, rockers, etc. of the valve
mechanism, the mechanical efficiency is higher.
6) The two-stroke engine gives fewer oscillations.
7) For the same power, a two-stroke engine is more compact and requires less
space than a four-stroke cycle engine. This makes it more suitable for use in
small machines and motorcycles.
8) A two-stroke engine is lighter in weight for the same power and speed
especially when the crankcase compression is used.
9) Due to its simpler design, it requires fewer spare parts.
10) A two-stroke cycle engine can be easily reversed if it is of the valveless
type.
Disadvantages:
1. Since the scavenging is not very efficient in a two-stroke engine, dilution
of the charge takes place, which results in poor thermal efficiency.
2. The two-stroke spark ignition engines do not have a separate lubrication
system and normally, lubricating oil is mixed with the fuel. This is not as
effective as the lubrication of a four-stroke engine. Therefore, the parts of the
two-stroke engine are subjected to greater wear and tear.
3. In a spark ignition two-stroke engine, some of the fuel passes directly to
the exhaust. Hence, the fuel consumption per horsepower is comparatively
higher.
4. With heavy loads a two-stroke engine gets heated up due to the excessive
heat produced. At the same time, the running of the engine is not very smooth
at light loads.
5. It consumes more lubricating oil because of the greater amount of heat
generated.
6. Since the ports remain open during the upward stroke, the actual
compression starts only after both the inlet and exhaust ports have been
closed. Hence, the compression ratio of this engine is lower than that of a
four-stroke engine of the same dimensions. As the efficiency of an engine
increases with its compression ratio, the efficiency of a two-stroke
cycle engine is lower than that of a four-stroke cycle engine of the same size
(see the sketch below).
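As noted in point 6, the efficiency penalty of the lower effective compression ratio can be illustrated with the ideal Otto-cycle relation η = 1 − 1/r^(γ−1); in the sketch below, the two compression ratios compared are illustrative values, not figures from this text.

# Ideal Otto-cycle efficiency, eta = 1 - 1 / r**(gamma - 1). The two
# compression ratios compared are illustrative values, not figures from
# this text.

GAMMA = 1.4

def otto_efficiency(r):
    return 1.0 - 1.0 / r ** (GAMMA - 1.0)

for label, r in (("four-stroke, full stroke used", 8.0),
                 ("two-stroke, ports shorten the effective stroke", 6.5)):
    print(f"{label}: r = {r}  ->  ideal efficiency {otto_efficiency(r) * 100:.0f} %")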
Internal Combustion Engines
Internal combustion engines are devices that generate work using the
products of combustion as the working fluid rather than as a heat transfer
medium. To produce work, the combustion is carried out in a manner that
produces high-pressure combustion products that can be expanded through a
turbine or piston. The engineering of these high-pressure systems introduces a
number of features that profoundly influence the formation of pollutants.
There are three major types of internal combustion engines in use today:
(1) the spark ignition engine, which is used primarily in automobiles;
(2) the diesel engine, which is used in large vehicles and industrial systems
where the improvements in cycle efficiency make it advantageous over the
more compact and lighter-weight spark ignition engine; and
(3) the gas turbine, which is used in aircraft due to its high power/weight ratio
and also is used for stationary power generation.
Each of these engines is an important source of atmospheric pollutants.
Automobiles are major sources of carbon monoxide, unburned hydrocarbons,
and nitrogen oxides. Probably more than any other combustion system, the
design of automobile engines has been guided by the requirements to reduce
emissions of these pollutants. While substantial progress has been made in
emission reduction, automobiles remain important sources of air pollutants.
Diesel engines are notorious for the black smoke they emit. Gas turbines emit
soot as well. These systems also release unburned hydrocarbons, carbon
monoxide, and nitrogen oxides in large quantities. In this chapter we examine
the air pollutant emissions from engines. To understand the emissions and the
special problems in emission control, it is first necessary that we understand
the operating principles of each engine type. We begin our discussion with a
system that has been the subject of intense study and controversy--the spark
ignition engine.
HYBRID ELECTRIC VEHICLES
Introduction to Trends and Hybridization
Factor for Heavy-Duty Working Vehicles
Over the past decades, the efficiency of vehicles has become a highly
discussed topic due to pollution regulation requirements. Modern internal
combustion engines (ICEs) have already reached remarkable performances
compared with the engines of the early 1990s. However, they are still unable
to consistently reverse the growth trend in pollutant emissions because the
number of vehicles is also constantly increasing [1, 2]. The European Union
first introduced mandatory CO2 standards for new passenger cars in 2009 [3]
and set a 2020-onward target average emission of 95 g CO2/km for new car
fleets. The automotive industry devotes considerable research efforts toward
reducing emissions and fossil fuel dependency without sacrificing vehicle
performance. Recently, manufacturers developed technologies to reduce the
NOx and particulate emissions of diesel engines, such as selective catalytic
reduction and diesel oxidation catalyst [4, 5]. Moreover, common rail fuel
injection has led to higher-efficiency diesel engines [6, 7]. Partial substitution
of fossil diesel fuel with biodiesel is an appealing option to reduce CO2
emissions [8, 9]. In the Brazilian transportation sector, the addition of
biodiesel to fossil diesel fuel has been increasing since 2012 [10].
Heavy-duty construction and agricultural vehicles also have an
environmental impact. In Agricultural Industry Advanced Vehicle
Technology: Benchmark Study for Reduction in Petroleum Use [11], the
current trends in increasing diesel efficiency in the farm sector are explored.
Figure 1 shows the diesel demand in the United States, highlighting that in
the agricultural and construction machinery field, the demand has remained
relatively constant since 1985, representing a significant portion of the total
fuel consumption. Similarly to the automotive sector, considerable efforts
have been dedicated in recent years toward reducing the energy consumption
of construction and agricultural machines without compromising their
functionality and performance, taking into account the restrictions imposed
by the recent emission regulations [12, 13]. Engine calibrations have been
optimized to reduce exhaust pollutants in accordance with the U.S.
Environmental Protection Agency emissions tiers. This was accomplished
through several means, including in-cylinder combustion optimization and
exhaust gas recirculation, without exhaust after-treatment systems for Tiers
1–3. With the addition of exhaust after-treatment systems for the Tier 4
interim stage, some engines require diesel exhaust fluid to catalyze pollutants
in the system (e.g., urea). Some manufacturers claim as much as 5% greater
fuel efficiency for their Tier 4 interim engines compared with Tier 3 models
[14]; however, these entail increasing complexity, dimensions, and
maintenance costs. Although driving is the primary function of most construction and agricultural vehicles, most modern models also provide power to implements through a power takeoff (PTO) shaft and/or fluid power hydraulics. Moreover, working machine engines can stay idle for a
notable amount of time [15]. Advanced engine controls are being introduced
to reduce fuel consumption by lowering engine idle speeds and even shutting
off the engine during extended idle periods. Examples of these strategies are
found in existing patent applications, which indicate intentions of further
development of these strategies [16]. Hybrid electric propulsion systems
allow the combustion engine to operate at maximum efficiency and ensure
both a considerable reduction of pollutant emissions and an appreciable
decrease in energy consumption. Over the last few years, many
configurations of hybrid propulsion systems have been proposed, some of
which are also very complex. The fuel efficiency in this operating mode is
greater than in a conventional machine for the following reasons:

FIGURE 1.
Historical diesel consumption in the United States. “Farm” includes
agricultural diesel use; “off-highway” includes forestry, construction, and
industrial use [11].
· the fuel and energy consumption is limited only to the vehicle work time;
· the electronic control selects the engine speed to minimize fuel consumption depending on the state of charge of the batteries and the vehicle power demand;
· the power transmission from the electric motor to the gearbox ensures greater energy savings compared with hydraulic power transmission;
· the electric motor acts as a generator to charge the batteries while the vehicle is slowing down or stopping.
The automotive field has the largest number of studies, published patents, and
proposals for hybrid and electric vehicles. Recently, intensive research has
been carried out to find solutions that will enable the gradual replacement of
the conventional engine with a highly integrated hybrid system. In the
construction and agricultural working machines field, the number of concepts
is limited and sporadic, and only recently has the market shown great
attention to these studies. Thus, hybrid architectures allow the development
of work machines characterized by high versatility and new features. Such
machines can be used both indoors and outdoors because they can operate in
both full electric and hybrid modes. The advantages to end users are
reduction of running cost due to greater fuel efficiency and use of electric
energy, and better work conditions due to low noise emissions.
From a system engineering point of view, the different solutions are
described by introducing a specific hybridization factor suitable for work
vehicles that include two main functionalities: driving and loading. The high-
voltage electrification of work vehicles is also currently under development
[17, 18]. According to Ponomarev et al. [19], in order to be competitive,
manufacturers should offer energy-efficient and reliable hybrid vehicles to
their customers. Compared with automobiles, the introduction of electric
drives in work vehicles would allow expanded functionalities because these
machines have a large variety of functional drives [20]. The first part of this
report gives an overview of the components of the electrification solution and
hybrid/electric architectures, discussing the advantages related to the different
solutions. The machines are then schematically described and compared,
showing the hybrid architectures of the proposed solutions. Finally, the
introduction of a specific hybridization factor is proposed as a first
classification of the main hybrid work vehicles [21, 22].
HEV power train configurations
The SAE defines a hybrid vehicle as a system with two or more energy
storage devices, which must provide propulsion power either together or
independently [23]. Moreover, an HEV is defined as a road vehicle that can
draw propulsion energy from the following sources of stored energy: a
conventional fuel system and a rechargeable energy storage system (RESS)
that can be recharged by an electric machine (which can work as a generator),
an external electric energy source, or both. The expression “conventional
fuel” in the SAE definition constrains the term HEV to vehicles with a spark-
ignition or a compression-ignition engine as the primary energy source.
However, the United Nations definition of HEV [24] mentions consumable
instead of conventional fuel. On this basis, the primary energy source in an HEV is not necessarily a hydrocarbon fuel or a biofuel burned in an engine but can also be a hydrogen fuel cell. The term electric-drive vehicle (EDV) is used in
Ref. [25] to define any vehicle in which wheels are driven by an electric
motor powered either by a RESS alone or by a RESS in combination with an
engine or a fuel cell. Some types of EDV belong to the subset of plug-in
electric vehicles (PEVs) [25, 26].
Compared with conventional internal combustion engine vehicles, HEVs
include more electrical components, such as electric machines, power
electronics, electronic continuously variable transmissions, and advanced
energy storage devices [27]. The number of possible hybrid topologies is
very large, considering the combinations of electric machines, gearboxes, and
clutches, among others. The two main solutions, series and parallel hybrid,
can be combined to obtain more complex and optimized architectures. There
is no standard solution for the optimal size ratio of the internal combustion
engine and the electric system, and the best choice includes complex trade-
offs between the power as well as between cost and performance [28]. The
power train configuration of an HEV can be divided into three types: series,
parallel, and a combination of the two [29].
SERIES HYBRID ELECTRIC VEHICLES
Series hybrid electric vehicles (SHEVs) involve an internal combustion
engine (ICE), generator, battery packs, capacitors and electric motors as
shown in Figure 2 [30–32]. SHEVs have no mechanical connections between
the ICE and the wheels. The ICE is turned off when the battery packs feed the
system in urban driving. A significant amount of energy is supplied from the
regenerative braking. Therefore, the engine operates at its maximum
efficiency point, leading to improved fuel efficiency and lower carbon emissions compared with other vehicle configurations [33]. The series hybrid
configuration is mostly used in heavy vehicles, military vehicles, and buses
[34]. An advantage of this topology is that the ICE can be turned off when
the vehicle is driving in a zero-emission zone. Moreover, the ICE and the
electric machine are not mechanically coupled; thus, they can be mounted in
different positions on the vehicle layout drive system [35].

FIGURE 2.
Schematic of series hybrid electric vehicles (SHEV).
PARALLEL HYBRID ELECTRIC VEHICLES (PHEV)
In a PHEV, mechanical and electrical powers are both connected to the
driveline, as shown in Figure 3. In the case of parallel architectures, good
performance during acceleration is possible because of the combined power
from both engines [35]. Different control strategies are used in a preferred
approach. If the power required by the transmission is higher than the output
power of the ICE, the electric motor is turned on so that both engines can
supply power to the transmission. If the power required by the transmission is
less than the output power of the ICE, the remaining power is used to charge
the battery packs [36]. Moreover, mechanical and electric power could be
decoupled, and the system has a high operating flexibility enabling three
modes of operation: purely combustion, purely electric, and hybrid. Usually, PHEVs are managed in purely electric mode at low speeds until the battery state of charge reaches a predetermined low threshold, typically 30%.
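The load-following rule just described can be written as a very simple power-split policy. The sketch below (Python) is a minimal illustration under assumed, hypothetical power figures; it ignores component efficiencies, battery power limits and drivability constraints and is not the control strategy of any particular vehicle.

def parallel_hybrid_split(p_demand_kw, p_ice_kw, soc, soc_min=0.30):
    """Minimal rule-based power split for a parallel hybrid (illustrative only).

    p_demand_kw : power requested by the transmission (negative when braking)
    p_ice_kw    : power the engine can currently deliver
    soc         : battery state of charge, 0..1
    """
    if p_demand_kw <= 0:
        # Braking or coasting: the motor acts as a generator (regeneration).
        return {"ice_kw": 0.0, "motor_kw": p_demand_kw, "battery": "charging"}
    if soc > soc_min and p_demand_kw < 0.2 * p_ice_kw:
        # Low demand with a healthy battery: drive in purely electric mode.
        return {"ice_kw": 0.0, "motor_kw": p_demand_kw, "battery": "discharging"}
    if p_demand_kw > p_ice_kw:
        # Demand above engine output: the motor supplies the difference.
        return {"ice_kw": p_ice_kw, "motor_kw": p_demand_kw - p_ice_kw,
                "battery": "discharging"}
    # Demand below engine output: surplus engine power recharges the battery.
    return {"ice_kw": p_ice_kw, "motor_kw": -(p_ice_kw - p_demand_kw),
            "battery": "charging"}

# Hypothetical example: 120 kW requested, 90 kW available from the ICE, SOC 60%.
print(parallel_hybrid_split(120, 90, 0.60))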

FIGURE 3.
Schematic of parallel hybrid electric vehicles (PHEV).
COMBINATION OF PARALLEL AND SERIES HEVS
Two main power paths can be identified in the series-parallel hybrid configuration. In the mechanical path, the energy generated by the combustion engine is transmitted directly to the wheels, while in the electric path the energy generated by the thermal engine is first converted into electrical energy by means of the generator and then converted back to mechanical energy delivered at the wheels. It is therefore possible to have mixed architectures, denominated “power splits,” in which the installed power is divided by means of mechanical couplers. The combination of parallel and series hybrid configurations is further divided into sub-categories based on how the power is distributed [37]. Plug-in hybrid vehicles are even more suitable than HEVs for reducing fuel consumption because, unlike HEVs, they may be charged from external electric power sources [38]. In all the configurations, regenerative braking can be used to charge the battery [36]. Moreover, to make recharging of the batteries easier, some configurations are equipped with an on-board charger and are defined as plug-in electric vehicles (PEVs) [39].

Sub-system components of hybrid vehicles


ELECTRIC MOTORS
The energy efficiency of a vehicle power train depends on, among other
features, the size of its components. The optimization problem of sizing the
electric motor, engine, and battery pack must consider both performance and
cost specifications [40, 41]. Among electric motors, although the permanent
magnet synchronous motor is considered as the benchmark, other types of
motors are being explored for use in HEVs. Currently, there is some concern
on the supply and cost of rare-earth permanent magnets.
Considerable research efforts have been made to find alternative electric
motor solutions with the lowest possible use of these materials [42, 43]. For
instance, some automotive applications use induction motors or switched
reluctance motors [34]. Figure 4 shows the most likely electric motor
scenario in forthcoming years. Compared with hydraulics, electric drives
provide better controllability and dynamic response and require less
maintenance. Similarly to electric power, hydraulic power can be distributed
quite easily on the implement; however, hydraulics suffers from poor
efficiency in part-load operating conditions [44]. The specific electric drives
for agricultural tractors are listed in Refs. [45, 46].

FIGURE 4.
Types of electric motors for HEV applications.
CONTINUOUSLY VARIABLE TRANSMISSION (CVT)
Working vehicles drive at low speed, and the energy consumed in
accelerating and climbing slopes should be partially recovered at decelerating
and descending slopes. Compared with urban and on-road vehicles,
construction and agricultural are used in a lower range of velocity. Rolling
requirements in construction and agricultural machines are related to the
resistance due to tire deformation combined with resistance due to soil
deformation [47, 48]. In the case of work vehicles, continuous variable
transmission CVT could be used to determine the energy flow that reaches
the transmission from each energy source (engine, generator, and motor
battery) [49].
ENERGY STORAGE DEVICES
The energy efficiency of construction machinery is generally relatively low,
and kinetic or potential energy is lost during operation [50]. Currently,
batteries [51], super-capacitors, hydraulic accumulators, and flywheels are
mainly used as energy storage devices in hybrid construction and agricultural
machinery (HCAM), as schematically described in Figure 5.

BATTERIES
Batteries are the most studied energy storage devices and are divided into three types: Li-ion [52], nickel-metal hydride [53, 54], and lead-acid [55]. Li-ion batteries are considered a highly promising technology for vehicle applications [56, 57] because of their larger storage capacity, wide operating temperature range, better material availability, lower environmental impact, and safety [58–60]. However, despite having the highest energy density, Li-ion batteries have a shorter lifetime, higher vulnerability to environmental temperature, and higher cost compared with other energy storage devices. A comprehensive
review examined the electrochemical basis for the deterioration of batteries
used in HEV applications and carried out tests on xEVs, automotive cells,
and battery packs [61, 62] regarding their specific energy, efficiency, self-
discharge, charge-discharge cycles, and cost. The results indicated that Li-ion
is currently the best battery solution, surpassing the other technologies in all
parameters except charge speed, in which Pb-acid batteries showed a better
performance. Over the last years, graphene and its applications have become
an important factor in improving the performance of batteries [63].
SUPERCAPACITORS
An alternative energy storage device for hybrid power trains could be super-
capacitors, which are designed to achieve fast-charging devices of
intermediate specific energy [64]. A super-capacitor [65, 66] has the advantage of a fast charge-discharge capability, allowing more regenerative braking energy to be captured and supplying power for stronger acceleration [67]; it can be classified as a double-layer capacitor or a pseudo-capacitor according to the charge storage mode. However, the main drawback of a super-capacitor is its low energy density, which leads to a limited energy capacity.
HYDRAULIC ACCUMULATOR
The hydraulic storage approach converts the recoverable energy into
hydraulic form inside an accumulator and then releases it by using secondary
components or auxiliary cylinders [68–70]. Compared with an electric hybrid
system composed of a battery or super-capacitor, a hydraulic accumulator
device has an advantage in power density over an electric system. Moreover,
hydraulic accumulator energy recovery systems are ideal for cases of frequent
and short start-stop cycles [71, 72]. However, the application of such systems in work vehicles still presents several drawbacks: the limited energy density imposes a design trade-off between energy storage capacity and volume or weight [73].
FLYWHEEL ENERGY STORAGE SYSTEM
The flywheel energy storage system (FESS) has improved considerably in
recent years because of the development of lightweight carbon fiber
materials. This system has become one of the most common mechanical
energy storage systems for hybrid vehicles [74, 75]. When in charge mode,
the electric motor drives the flywheel to rotate and store a large amount of
kinetic energy (mechanical energy); when in discharge mode, the flywheel
drives the generator, converting kinetic energy into electric energy [76]. The
FESS has the advantages of high energy density and high power density [77]
and works best at low speeds and in frequent stop-start work conditions.
Producing this system could be cheaper than producing batteries; however,
the system has limited storage time, and a significant percentage of the stored
capacity is wasted through self-discharge [78].
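As a rough numerical illustration of the kinetic energy a flywheel can hold, the sketch below evaluates E = 1/2 I w^2 for a solid-disc rotor. The mass, radius and rotational speed are hypothetical values chosen only for illustration and do not come from this text.

import math

def flywheel_energy_kwh(mass_kg, radius_m, rpm):
    """Kinetic energy of a solid-disc flywheel, E = 0.5 * I * w**2, in kWh."""
    inertia = 0.5 * mass_kg * radius_m ** 2        # I = (1/2) m r^2 for a disc
    omega = rpm * 2.0 * math.pi / 60.0             # rad/s
    return 0.5 * inertia * omega ** 2 / 3.6e6      # J -> kWh

# Hypothetical carbon-fibre rotor: 50 kg, 0.25 m radius, spinning at 30,000 rpm.
print(f"{flywheel_energy_kwh(50, 0.25, 30000):.2f} kWh")

For these assumed figures the stored energy is on the order of 2 kWh, which is why flywheels are attractive for frequent stop-start duty rather than as a long-range energy store.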
Hybridization factor
In HEV engineering, the integration of engines, mechanical components, and
electric power trains leads to increased energy efficiency, that is, a reduction
in fuel consumption and a subsequent decrease in CO2 emissions. In the
automotive industry, the basic logic of a hybrid vehicle is to provide a new
source of power that intervenes in place of the primary source (ICE) to
improve the overall performance of the system. Moreover, there are possible
modes of operation that are not provided in a conventional vehicle, such as
regenerative braking and electric mode (EV). Below are some of the main
advantages of a hybrid configuration over a vehicle equipped with a
combustion engine alone.
· Electric motor can act both as an engine and as a generator, allowing
a reversible flow of power from the battery to the wheels and vice versa.

· During braking, some of the kinetic energy is recovered


(regenerative braking).
· The vehicle can be used only in the electric mode (zero emission
vehicle—ZEV).
· When the vehicle has to stop temporarily, the combustion engine can
be switched off, therefore ensuring considerable energy saving.
It should first be mentioned that there is actually no established classification for hybrid vehicles, although a first orientation can be obtained by defining a hybridization factor (HF) as the ratio between the power of the installed electric motor and the total power delivered by the combustion engine and the electric motor on the vehicle:

HF = P_EM / (P_ICE + P_EM)

where P_EM is the rated power of the electric motor and P_ICE that of the internal combustion engine.
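A minimal numerical sketch of this ratio is given below; the power figures are hypothetical and are meant only to show how the factor moves from 0 for a conventional machine to 1 for a purely electric one.

def hybridization_factor(p_em_kw, p_ice_kw):
    """HF = electric motor power / (engine power + electric motor power)."""
    return p_em_kw / (p_ice_kw + p_em_kw)

# Hypothetical examples (illustrative figures, not manufacturer data):
print(hybridization_factor(0, 150))    # conventional machine -> 0.0
print(hybridization_factor(30, 120))   # mild hybrid          -> 0.2
print(hybridization_factor(150, 0))    # purely electric      -> 1.0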
Compared with cars, the introduction of electric drives in tractors would
allow expanded functionalities, considering that agricultural machines have a
large variety of functional loading and working drives [20, 84]. The working
cycle of a vehicle is strongly correlated with the application. In the case of a
car, the comparison can be carried out by evaluating the extra-urban cycle
and the urban cycle. For example, in the case of the urban cycle, the vehicle
recovers energy due to frequent accelerations and stops. Working machines with repetitive movements, such as excavators, are able to recover the kinetic energy of the arm. For agricultural tractors and machinery, two tasks have been identified [85] as working conditions with phases in which energy recovery is possible: transport and front loading. Telescopic handlers
also have a similar duty cycle. Unlike in hybrid cars, the hybrid propulsion
system in heavy-duty machinery can supply power to the driveline and
loading hydraulic circuit [86]. The mechanical power supplied by the ICE
flows to recharge the battery pack, actuate the hydraulic pump, and move the
driveline (Table 2).
Architecture review of hybrid construction and agricultural
machinery
Manufacturers, governments, and researchers have been paying increasing
attention to hybrid power train technology toward decreasing the high fuel
consumption rate of construction machinery [17]. Hybrid wheel loaders,
excavators, and telehandlers have particularly shown significant progress in
this regard [88, 89]. With hybrid work vehicles attracting more attention,
power train configurations, energy management strategies, and energy
storage devices have also been increasingly reported in the literature [73,
90–92]. Both researchers and manufacturers have approached studies of the
hybrid power system applications, energy regeneration systems, and
architectural challenges of construction machinery qualitatively but not
systematically and quantitatively. A first review of an electric hybrid HCM
was presented in 2010 [107]. More recently, a specific review of a wheel
loader and an excavator [108] was carried out, and another work in the field
of high-voltage hybrid electric tractors [109] was published. Hitachi
successfully launched the first hybrid loader in 2003 [90], and Komatsu
developed the first commercial hybrid excavator in 2008 [93]. Komatsu
developed the HB205-1 and HB215LC-1 hybrid electric excavators, which
are capable of recovering energy during the excavator slewing motion and of
storing this energy in ultra-capacitors. Earth-moving machinery
manufacturers have developed some diesel-electric or even hybrid-electric
models. Johnson et al. [96] compared the emissions of a Caterpillar D7E
diesel-electric bulldozer with its conventional counterpart [95]. Over the last
years, there has been increasing interest in tractor and agricultural machinery
electrification [96–99]. A number of tractor and agricultural machinery
manufacturers have developed some diesel-electric or even hybrid-electric
prototypes [20, 49, 100–102]. Recently, the Agricultural Industry Electronics
Foundation started working on a standard for compatible electric power
interfacing between agricultural tractors and implements [103], including,
among others, the John Deere 7430/7530 E-Premium and 6210RE electric
tractors [104] and the Belarus 3023 diesel-electric tractor [105]. Among telehandler vehicles, an example is the TF 40.7 Hybrid telescopic handler proposed by Merlo [106]. Thus, it is necessary to study the various types of power train
configurations of hybrid wheel loaders and excavators to better understand
their construction features. The power requirement has different working
cycles depending on the applications. Many construction machinery
manufacturers and researchers have studied hybrid wheel loaders to
effectively use the braking energy and operate the engine within its high-
efficiency range [110–113]. According to the classification of hybrid vehicles
in the automotive field, there are three main design options for hybrid wheel
loader power trains: series, parallel, and series-parallel. In the literature
review, the proposed architecture is mainly described, but no attempt at
classification and comparison is made. It is not easy to find data sheets on the
different vehicles because most of them are still at the prototype level. The
comparison first outlines the architectures of the hybrid work vehicle
solutions developed by the main manufacturers, as shown in Table 3.
Figure 6 shows the series hybrid configuration of a wheel loader. As in a classic series hybrid vehicle, the ICE directly drives the electric generator, and the electricity so generated is used to drive the electric motor connected to the driveline. The advantage of a series hybrid wheel loader is its greater simplicity. In addition, because the ICE is decoupled from the wheels, it can be operated at a fixed point under the conditions of greatest efficiency. In a series hybrid wheel loader, the conversion of mechanical power into electrical power and the supply of the electric motor can also be achieved with a reduced battery pack, but the generator and the electric motor must then be sized for the maximum power demand. The presence of a battery pack allows the power demand peaks to be managed better without the need to over-dimension the ICE [114, 115]. In the literature, the series hybrid drivetrain has been applied mainly to large-tonnage hybrid wheel loaders.
In 2009, Caterpillar came out with the first electric hybrid bulldozers. The Caterpillar D7E model is
within the range of medium dozers and replaced the traditional model D7R [94].
The company claimed an increase of productivity and a reduction in fuel consumption up to 24% over
the conventional model [94]. The driveline architecture is of the series electric hybrid type, as described
in Figure 6, with the electric motors powered directly from the inverter but having the peculiarity to be
directly charged from the ICE without any accumulation system. The hydraulic system has a
conventional architecture. Table 4 shows the main parameters of this work vehicle. A parallel hybrid
power train configuration has two separate power sources that can directly power the loader. The
disadvantage of a parallel configuration is that the engine cannot always be controlled in its high-efficiency operating region because it is still mechanically coupled to the wheels; nevertheless, an increased efficiency compared with the conventional model and a fuel consumption reduction of 10% have been reported for this layout [116].
Figure 7 shows a schematic of the Volvo L220F parallel hybrid electric wheel loader (HEWL). The
vehicle has a parallel hybrid electric architecture for both the driving and the loading system. The basic
idea of this parallel hybrid layout is to supply additional electric power when necessary, regenerating
the machine during normal operations and minimizing the consumption in idle conditions. The power required by the implement can be flexibly provided by a work pump, which is driven by the pump motor; the main parameters of the Volvo L220F are reported in a separate table. Mecalac proposed a similar architecture for the 12 MTX hybrid model and claimed fuel consumption savings of up to 20% [117].
At the CONEXPO International Trade Fair for Construction Machinery
(2011), John Deere presented the first prototype of its hybrid wheel loader,
the 944K hybrid. In February 2013, the entry of the first hybrid wheel loader,
the 644K hybrid, in the market was announced with a reduction in fuel
consumption up to 25% [119]. In this smaller model, a single electric
machine provides all the power needed to drive the vehicle. The vehicle
driveline has a series electric hybrid architecture, with the electric motor
directly powered by the inverter without an energy storage system. Figure 9
shows a schematic view of John Deere 644K hybrid wheel loader [120]. The
installed electrical machines are liquid-cooled brushless permanent magnet
motors.

The innovative architecture proposed by Merlo, as shown in Figures 10 and 11, is considered as a fully series architecture for vehicle traction and as a
parallel architecture for the operation of hydraulic systems. This kind of
innovative, patented series-parallel architecture, with a split input for
hydraulic lifting, allows both the electrical and the mechanical components to
be arranged in a way that is compatible with the current layout and
performance of Merlo machines. The main objectives of this hybrid
telehandler are an overall improvement in performance, a decrease in daily
fuel consumption in ordinary work activities, and a reduction in noise
emissions. Moreover, the proposed configuration is capable of working in
full electric, zero-emission mode for indoor use, such as in cattle sheds,
stables, industrial and food processing warehouses, and buildings. In Ref. [87], a fuel consumption reduction of 30% was demonstrated with the same level of dynamic performance compared with the conventional telehandler.

Claas proposed a parallel mild hybrid solution for the Scorpion telehandler. The simulation results
reported in Refs. [121, 122] show a reduction of about 20% in fuel consumption and emissions for this parallel hybrid solution compared with the traditional model. The solution proposes the use of the
electric motor as a power boost to maintain the performance while using a smaller diesel motor.
The excavator is a type of construction machinery with a larger weight and higher energy consumption
[107]. A hybrid excavator can typically recycle two energy types, including the braking kinetic energy
of the swing and the gravitational potential energy of the booms. In the recent literature, excavators
present a wide combination of series, parallel, or series-parallel hybrid architectures. The change in
configuration and the additional costs of electrical components make the commercialization of hybrid
configurations difficult. Figure 12 shows the schematic of the Kobelco series hybrid excavator; the first prototype of this 6-t configuration was developed in 2007, with a claim in [123] of cutting fuel consumption by 40% or more, and results of verification tests on the efficiency of the hybrid excavator in different working cycle operations were reported in [124, 125].

As shown in Figure 12, in the hybrid solution proposed by Kobelco each hydraulic pump is driven by its own electric motor. This solution increases efficiency, but the production cost is higher.
In the case of parallel hybrid excavator, the internal combustion engine operates the hydraulic pump
and generator. The hydraulic pump drives the hydraulic circuit of the device, in a manner similar to
conventional excavators, while the generator transforms the mechanical energy into electrical power
and can operate the electric motor of swing rotation. The hybrid solution in parallel is simpler;
however, the fuel consumption is higher, and the cost recovery time for these working machines is longer [126]. Hitachi, as shown in Figure 13, proposed a parallel hybrid excavator with gravitational potential energy recovery of the boom [113].

In the series-parallel hybrid power train configuration of an excavator, the engine drives the generator directly. The hydraulic pumps are driven by the
generator in series, and the swing electric motor is powered by the generator
and the battery or super-capacitor in parallel. Although series-parallel hybrid
excavators have higher production costs compared with parallel and series
structures, they offer the shortest cost recovery time and the best efficiency, with a fuel consumption reduction of up to 25% [126]. Series-parallel hybrid excavators are regarded as the most
promising solutions, and both Komatsu (Figures 14 and 15) and Doosan use
similar configurations [128, 129].

The attempt at classification in the present work is based on the specific HF defined in Section 4, taking into account the data sheets of the vehicles.
Table 4 shows the hybridization factors for work machines, calculated by
using Eq. (4) [22] and considering the effect of a hybrid electric driveline and
hybrid electric loading/working functions.
Trends and conclusions
This study focused on the electrification of work vehicles, such as agricultural machinery, which is still in the research and development stage. Similarly to HEVs, the main design issue in HCAMs is controlling the energy
transfer from the sources to the loads with minimum loss of energy, which is
dependent on the driving and working cycles. Compared with automobiles,
the introduction of electric drives in tractors would allow expanded
functionalities because agricultural machines have a large variety of
functional drives.
The main differences in requirements, working cycles, and proposed hybrid architectures between HEVs and HCAMs were identified in the present study, focusing on a specific hybridization factor for working vehicles that considers both driving and loading electrification.
The hybridization factor for working vehicles is introduced in order to classify and compare the different hybrid solutions proposed by the main manufacturers, taking into account their different architectural choices. Moreover, the claimed increase in efficiency due to power train electrification is reported and listed in terms of fuel consumption reduction. Across a large variety of hybrid architectural solutions, a good correlation between the hybridization factor and fuel efficiency has been observed, indicating a general trend in favour of hybrid electrification of working machines.
Because charging a battery pack from the grid is more efficient than charging
it from a tractor engine, it seems logical to hybridize the tractor with high-
voltage batteries and propulsion motors. In this manner, the internal
combustion engine could be downsized, and the traction battery pack could
be charged from the grid. Fuel consumption costs would thus decrease.
However, compared with traditional construction machinery, an additional
energy storage device is needed, which increases the initial costs. Moreover,
the cost added by high-voltage equipment needs to be considered in the
whole turnover of the hybrid vehicle conversion. As indicated by several
reports and prototypes, hybrid systems have promising applications in both
agricultural and construction machinery, but major drawbacks are related to
the increased cost due to electrification. Hybrid technologies, particularly
energy storage devices, are still in the early stages of development, and the
trends in cost reduction could push researchers and manufacturers toward the
optimization of hybrid solutions for HCAM.
Introduction to Development of Bus Drive
Technology towards Zero Emissions: A
Review
Over the past 100 years, the bus industry has come to be dominated by diesel
powered buses due to their increasingly low cost and greater maturity of the
technology. However, this comes at an environmental cost; for example, over 600 kt of CO2 was emitted by London’s bus fleet in 2015 [1]. It is these carbon emissions and their link to climate change that have provided one of
the major drivers in recent years to develop and deploy alternative
technologies for bus propulsion [2]. Other emissions associated with diesel
vehicles such as NOx and particulates have provided a local driver to change
due to their detrimental impacts on human health [3–5]. In 2008, it was
estimated over 4000 deaths were brought forward as a result of long-term
exposure to particulates in London [6]. In order to combat these concerns, many cities have introduced measures such as the ‘low emission zone’ in
London and emission control targets [7]. London is to introduce the first
ultra-low emissions zone (ULEZ) in 2020, which, amongst other targets will
aim to replace conventional diesel powered buses with low emissions
alternatives [8, 9]. Despite this drive for change, it is evident that finding a
replacement for diesel buses is not simple. In addition to the low cost,
simplicity, reliability and maturity of the technology, diesels also offer
excellent characteristics to meet the required power demands and operational
needs of city buses. It can be seen from Figure 1 that the diesel engine, a type of internal combustion engine (ICE), provides high output power and uses an energy-dense fuel, making it ideal for both the range and operating times expected of city buses and also for meeting the high transient power requirements during acceleration.

In order to address the environmental concerns posed by diesel buses, a number of technologies are being investigated and implemented. The most
widespread of these are diesel-hybrid buses, which make use of an on-board
energy storage system to effectively recycle captured kinetic energy obtained
through regenerative braking. Although hybrid buses are capable of
significantly reducing fuel consumption, they are still reliant on diesel as the
primary fuel source and hence do not address the fundamental problems
associated with emission that come from using diesel as a fuel. As such, there
has in recent years been an increased focus on the development of zero
emissions buses, with two main competing technologies. These are battery
electric buses and hydrogen fuel cell (FC) buses, both of which exhibit zero operating emissions, hence eliminating the environmental and health issues associated with diesel buses [11]. Such technology solutions are less mature
and result in significant changes to the propulsion system. Although these
technologies have been deployed in operational bus fleets, there remain a
number of barriers to widespread deployment.
London has one of the most comprehensive and busiest public transport
networks in the world, operated by Transport for London (TfL). There are
over 9000 buses in operation [12], which are estimated to account for 21% of
the CO2 emissions in London [7], 63% of NOx and 52% of PM10 particulate
emissions [13]. It is reported that the TfL bus fleet carries 6 million
passengers each working day, and the number of bus passenger journeys grew by 64% between 2000 and 2013 and is continuing to increase [14]. The
Greater London Authority (GLA) has introduced a number of strategies in an
attempt to reduce emissions from buses, part of which is the London hybrid
bus project which aims to replace the conventional bus fleet with diesel
hybrid buses [7, 15]. This is to be furthered with the introduction of the ultra-
low emissions zone (ULEZ) in 2020, which, amongst other targets will
require all 3000 double-decker buses operating in the ULEZ to be diesel
hybrid and all 300 single decker buses to be zero emissions [8, 9, 16]. Since
2004, a number of technologies have been deployed as part of the operational
bus fleet, as shown in Figure 2, as a means of reducing emissions. London
has been used as a case study throughout this chapter due to both the
comprehensive bus network and the operational deployment of new
technologies.

Within this chapter, the development of low emission bus propulsion technologies will be discussed, through the evolution of diesel to diesel
hybrid buses and onto the development and deployment of battery electric
and FC buses. The aim is to outline the benefits of such technologies and the
barriers that exist to their widespread implementation from both a technical
and economic perspective. Part 2 discusses the implementation of diesel
electric hybrid buses and their evolution from diesel buses. Parts 3 and 4
consider battery electric buses and fuel cell buses, respectively, whilst part 5
provides a comparison of these emerging technologies.

Diesel hybrid bus


BASIC PRINCIPLES OF DIESEL ELECTRIC HYBRID
BUSES
The principle difference between diesel hybrid buses and diesel buses is the
inclusion of an electrical energy storage system in conjunction with an
electrical motor/generator. The primary source of energy is still the diesel
engine; however, the inclusion of the electrical system provides a number of
advantages such as facilitating regenerative braking and allowing reduced
idling time [17]. The utilisation of a hybrid system results in improvements in fuel efficiency and emissions, although these come at the price of additional cost and complexity [17].
The integration of the electrical energy system can be utilised through a
number of configurations, with the common options being the series, parallel
and series-parallel hybrid configurations, as shown in Figure 3. In a series
hybrid drivetrain, the mechanical output from the diesel engine is converted
into electrical power via a generator when operating at its most efficient
loading. This is supplemented with a battery to provide for the electric drive
motor requirements. Since the propulsion needs are met by an electric motor,
this results in the complete decoupling between the diesel engine and the
wheels, meaning that engine control is not dependent on vehicle speed so
offering additional flexibility [18]. This is a major advantage of series hybrid
drivetrains, where the engine can operate at any point on its speed-torque
map, which is impossible for conventional vehicles. Therefore, the engine is
capable of constantly operating at near optimum load, which minimises fuel
consumption and emission [19].
The parallel hybrid configuration maintains the direct mechanical link
between the diesel engine and the wheels, using the battery for regenerative
braking and supplementing the peak power demands. The main advantages
over the series hybrid are that the additional generator is no longer needed, giving higher efficiency, and that the required drive motor can be smaller. The parallel configuration, however, does not decouple the diesel engine from the wheels, so operation is directly linked to the vehicle speed; for low speed city operation the ICE will therefore often operate at a low efficiency [20]. As a result, the parallel configuration is more appropriate for
longer distance and higher speed routes. The series-parallel hybrid can
operate in either the series or parallel configurations and so can utilise the
advantages of both systems; however, the additional complexity and capital
cost of the system mean that they are currently not a viable option for
transportation applications [19]. The most popular option for city buses is the
series configuration due to the simplicity of a single drive system as well as
higher efficiency during city driving where buses have a start-stop traffic
pattern with generally low speed operation [19].
The benefits offered by the hybridisation of the drive system relate to the
increase in fuel economy and reduction in emissions compared to a diesel bus
and can be attributed to the following points.
· On average, buses idle for around 30–44% of urban driving time [21]. By using a hybrid system, the vehicle can turn off the engine to avoid idling and low-load operation because it can use the electrical energy storage and motor for initial acceleration. This can save 5–8% of fuel consumption [17].
· A significant amount of energy is lost and dissipated as heat due to friction during conventional braking. When a hybrid vehicle is braking, the drive motor can work as a generator to charge the electrical energy storage system and thus recycle some of the energy used to propel the bus. Typically, 10–20% of the kinetic energy is recovered (a rough estimate is sketched after this list).
· In a conventional bus, the diesel engine needs to be large enough to
provide for all of the peak transient power demands. A hybrid vehicle is
able to use the electrical system to provide for a portion of these peak
demands, and therefore, the engine can be downsized [17, 19].
· A diesel engine operates at its lowest efficiency during low load and
low speed operation. The electrical system can drive the electric motor to
power the bus during low load and start-up to avoid this. It is expected that diesel hybrid technology can achieve reductions of 24–37% in CO2 emissions [22], 21% in NOx emissions and 10% in fuel consumption compared with conventional diesel buses [7, 15].
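As a rough check on the braking-energy figure quoted above, the sketch below estimates the kinetic energy of a loaded city bus at a typical urban speed and applies recovery fractions in the quoted 10–20% range. The bus mass and speed are hypothetical values used only for illustration.

def recoverable_braking_energy_kj(mass_kg, speed_kmh, recovery_fraction):
    """Kinetic energy (kJ) recoverable in one stop: 0.5*m*v**2 * fraction."""
    v = speed_kmh / 3.6                          # km/h -> m/s
    return recovery_fraction * 0.5 * mass_kg * v ** 2 / 1000.0

# Hypothetical loaded double-decker: 18 t, braking from 40 km/h to rest.
for frac in (0.10, 0.20):
    print(f"recovery {frac:.0%}: about "
          f"{recoverable_braking_energy_kj(18_000, 40, frac):.0f} kJ per stop")

Repeated over the hundreds of stops in a daily duty cycle, this recovered energy is what underlies the fuel savings claimed for hybrid buses.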
In contrast to these benefits, the hybridisation of the drive system has a
number of drawbacks. These predominantly amount to the additional capital cost: a diesel hybrid typically costs £300,000, which is £110,000 more than a conventional diesel bus and constitutes an increase of about 50% [23].
The additional complexity of both the drive system and its control results in
additional maintenance time and cost, where a diesel hybrid typically requires
50% more maintenance time than a conventional diesel bus [22].
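To put the price premium into perspective, a simple payback estimate can be made from the figures quoted in this section (a £110,000 premium and roughly 10% lower fuel consumption) together with assumed values for annual mileage, baseline fuel economy and fuel price. The annual mileage and fuel price below are hypothetical assumptions, not figures from this text, and maintenance costs are ignored.

def simple_payback_years(premium_gbp, annual_km, base_l_per_100km,
                         fuel_saving_fraction, fuel_price_gbp_per_l):
    """Years needed to recover the hybrid price premium from fuel savings alone."""
    litres_saved = annual_km / 100.0 * base_l_per_100km * fuel_saving_fraction
    return premium_gbp / (litres_saved * fuel_price_gbp_per_l)

# £110,000 premium; 32.9 l/100 km baseline (the Euro V figure quoted below);
# 10% fuel saving; assumed 60,000 km/year and £1.30 per litre (hypothetical).
print(f"{simple_payback_years(110_000, 60_000, 32.9, 0.10, 1.30):.0f} years")

Under these assumptions the fuel savings alone do not repay the premium within the service life of a bus, which is consistent with the observation that cost remains one of the main barriers to hybridisation.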
CASE STUDY 1: TFL
Initially a trial consisting of eight diesel hybrid buses was carried out in 2007
and was found to have very high (96%) customer support [24]. After
analysing the trial, the official deployment of diesel hybrid buses began in
central London. The number of diesel hybrid buses has steadily increased,
where in 2015, more than 1200 diesel hybrid buses were in operation in
London, as can be seen in Figure 4, and exceed the target of 1700 in 2016
[12]. This consists of old buses redesigned for hybrid operation and new
designs such as the new Routemaster.
The impact of the deployment of the low emission bus fleets has already
begun to have an impact on emissions in London, as shown in Figure 5. In
the last few years, emissions of NOx and CO2 have begun to drop due to the
introduction of diesel hybrid buses into the TfL fleet and the retrofitting of
selective catalytic reduction measures into the existing fleet. The level of
PM10 emissions dropped considerably due to the introduction of PM filters
in the early 2000s. It is expected that these will continue to drop as further
deployment of diesel hybrid and zero emissions vehicles continues.

The performance of the diesel hybrid bus fleet in London is very variable, as
might be expected due to differing models and routes. It was claimed that the
average Euro V bus achieved a fuel economy of 32.9 l/100 km in London [9].
The reported fuel economy of diesel hybrid buses operating in London is
presented in Table 1. As may be expected, the type of bus and bus route
significantly affects the fuel economy, where a single decker bus generally
exhibits a higher fuel economy than a double-decker bus. It was found that
the introduction of diesel hybrid technology improved the fuel economy on
nearly all routes; however, there were a couple of discrepancies to this, such
as on the E8 bus route where the fuel economy actually decreased. The
introduction of the new Routemaster bus appears to provide a slight
improvement over previous diesel hybrid buses; however, there appears to be
significant discrepancies between the recorded and expected performance.
Results released by TfL in 2014 suggest a fuel economy in the range of 38.2–
45.6 l/100 km, whereas it is claimed by the manufacturer that a fuel economy
of 24.4 l/100 km was recorded on the 159 bus route. Unfortunately, the
details for these results are not available and so it is difficult to determine the
validity of the results. This discrepancy could be the result of a number of
factors such as the route topology, traffic conditions, driving style and
passenger conditions.
In summary, TfL has successfully introduced a large number of diesel hybrid
buses into their bus fleet. This has resulted in a decrease in the emissions
associated with the bus fleet, with considerable further reductions expected. It
provides an example of the successful deployment of diesel hybrid buses into
a large operational bus fleet to achieve reductions in emissions and fuel
consumption. However, the increased cost and system complexity remain
problematic.

Battery electric bus


OVERVIEW OF ELECTRIC BUSES TECHNOLOGY
The battery electric bus, often described as a pure electric bus, uses an
electric motor for propulsion and a battery for energy storage [29]. In most
cases the battery is the primary energy source, although for trolley buses
power is delivered from overhead cables during operation.
The configuration for electric buses is typically fairly straightforward since it
is basically a battery driving an electric motor to propel the vehicle [30], as
shown in Figure 6. It is also possible to make use of regenerative braking to recharge the battery when the bus slows down. The main battery
technologies that have been used in transportation are Ni-MH, Zebra (Na-NiCl2) and lithium batteries [31]. The most promising of these are the lithium batteries, of which three main categories exist: Li-ion, lithium polymer (LiPo) and lithium iron phosphate (LiFePO4) batteries [32].
Most current buses use lithium-based batteries [33] due to their high power
and energy densities and fast charging capabilities, although their high cost is
still problematic [32]. A problem faced by all battery technologies is their
cycle life; typically, these are short and hence require relatively regular
replacement [34]. In addition to a battery pack, some buses utilise
supercapacitors in conjunction with a battery as supercapacitors are much
more effective in shielding batteries from high current load and thus increase
battery life [35]; however, their low energy density means they are unsuitable
to be used as the primary energy source, as shown in Figure 1. They do,
however, have several key advantages over existing battery technologies,
such as very high power densities and discharge rates as well as very long
cycle life [34]. There is no simple answer to which battery technology is best,
as it will depend on the application. Mahmoud et al. [36] carried out a
detailed comparison study of different electric powertrains and concluded
that a single technological choice would not satisfy the varied operational
demands of transit services because electric buses are highly sensitive to the
energy profile and operational demands. Electric buses are zero emission at
the point of use and therefore offer great emission savings particularly in
terms of local air pollution when compared to ICE or hybrid buses, as well as
very high efficiency. However, there are a number of barriers to widespread
deployment, the main ones are recharging time, vehicle range, infrastructure
and cost [34].
Battery electric buses normally operate in one of two different forms: opportunity and overnight [32].
Opportunity e-buses have a smaller energy storage capacity that offers limited range but can be charged
much quicker (5–10 minutes); while overnight e-buses have a much larger energy storage but at the
cost of longer charging time (2–4 hour) [36]. These represent two different approaches for electric
buses in the urban environment. The opportunity approach aims to minimise the weight of the battery
pack by utilising frequent and fast recharging at points along the bus route, such as bus stops or the end
of route [32]. This holds the promise of high efficiency and lower projected bus costs but requires a
comprehensive recharging network [37]. Route flexibility of the bus is, however, limited, as it is
required to follow the assigned bus route to recharge the battery. The overnight method utilises a large
energy storage system to extend the range so that the bus can drive the entire route/day without
recharging [37]. This holds the promise of greater route flexibility and convenience as well as utilising
a centralised recharging infrastructure, but suffers from passenger loss due to increased battery weight
as well as battery lifetime issues [38] and battery cost [34]. An alternative approach is offered by the
Trolleybus, which has a small battery but receives power from overhead cables along the assigned
route. This overcomes problems associated with range and recharging times but is very limited in terms
of route flexibility.
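The trade-off between the two operating forms can be made concrete with a simple battery-sizing estimate. In the sketch below, the energy consumption per kilometre, the distances between charges and the usable depth of discharge are hypothetical assumptions, not figures from this text; a real sizing exercise would also account for heating, auxiliary loads, battery ageing and available charger power.

def required_battery_kwh(km_between_charges, consumption_kwh_per_km,
                         usable_fraction=0.8):
    """Nominal battery capacity needed to cover the distance between charges."""
    return km_between_charges * consumption_kwh_per_km / usable_fraction

# Hypothetical single-decker consuming 1.3 kWh/km:
# - opportunity bus recharged every 15 km along the route
# - overnight bus covering 220 km on one charge
print(f"opportunity: {required_battery_kwh(15, 1.3):.0f} kWh")
print(f"overnight:   {required_battery_kwh(220, 1.3):.0f} kWh")

The roughly order-of-magnitude difference in pack size under these assumptions is consistent with the contrast seen in Table 2 between the small batteries of opportunity and trolleybus designs and the >300 kWh packs used for overnight charging.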
The process of recharging a battery electric bus can be completed through plug-in (conductive),
wireless (inductive) or catenary (overhead power lines) charging. Plug-in charging requires a direct
connection through a power cord [39] and is well-suited to overnight bus charging, but can be used in
some instances for opportunity charging. This is popular due to its simplicity and high efficiency [39].
Wireless charging relies on induction between two coils, this is better suited to opportunity buses where
recharging can take place along the route without the need for a physical connection [39], such as the
PRIMOVE bus where charging is carried out at each end of the route and at five intermediate stops
[40]. This form of charging, however, suffers from increased charging times and relatively low
efficiency [39]. The trolleybus uses overhead catenary to provide power to the bus [41]. This type of
charging exhibits high efficiency but requires an extensive network of overhead cables.
Table 2 shows a selection of pure electric buses operating in different locations, which utilise a number of
battery technologies and operating approaches. In 2015, there were an estimated 150,000 battery
electric buses, mostly located in China, with a sixfold increase between 2014 and 2015 [42]. The
electric bus market is growing quickly where it had a 6% share of global bus purchases in 2012 but is
forecasted to grow to 15% by 2020 [43]. Battery electric bus development has been carried out all over
the world with the largest shares in China, Europe and North America [44]. It is clear that some of the
buses listed in Table 2 utilise more than one mode of operation to provide for the operational power
requirements, such as the Complete Coach Works bus, which uses both overnight and opportunity
charging. The differences in operating regimes are reflected in the sizing of the batteries and as a result
the range of the buses, where they vary from 5.9 kWh for the trolleybus design to >300 kWh for
overnight charging. This will have a significant impact in terms of the bus’s battery costs; however, the
charging infrastructure for overnight charging does not need to be as comprehensive as for the
alternative methods.

CASE STUDY: LONDON ELECTRIC BUSES


London has been working on overnight e-bus demonstrations since 2012 and
is also investigating the potential of opportunity e-bus technologies. From the
overnight e-bus perspective, TfL has collaborated with BYD, which is one of
the largest electric bus manufacturers in China, to test the potential of battery
electric buses in London, starting from 2012 [45]. The first two battery
electric buses were handed over to TfL in 2013 and then entered daily service
on two central London routes, numbers 507 and 521, which were the first
battery electric buses in London. These single-decker 12-metre BYD buses
utilise Lithium-Iron-phosphate batteries and have demonstrated a range in
excess of 250 km on a single charge in real world urban driving conditions
[46]. The 507 and 521 bus routes are relatively short commuter service routes
and were chosen so that the bus can start operating in the morning peak
alongside the diesel bus fleet and return to the depot to recharge during the
day before resuming service for the evening peak [34, 47]. The battery takes
4–5 hours to recharge when fully discharged and has been designed for a
cycle life of more than 4000 cycles, meaning a 10-year battery lifetime under
normal operating conditions [48]. The trial fleet was extended to six buses in the summer of 2014. The trial buses in London not only provide a zero emission environmental benefit but also have shown promising results in terms of both technical and economic performance, and hence TfL has taken
further steps towards adopting this new clean technology in the capital. The
development timeline and future plans for London electric buses are plotted
in Figure 7.

The latest data in 2016 showed that there are currently 22 battery electric
buses operating in London including 17 single-decker battery electric buses
and five double-decker battery electric buses. The double-decker battery electric buses, shown in Figure 8, are a world first and entered service in May 2016. These are 10.2 m buses with a capacity of 81 passengers and a
claimed range of 303 km. The battery is a Lithium-Iron-Phosphate battery
with a capacity of 320 kWh [49]. They utilise a combination of both
overnight and opportunity e-bus technology and will operate on route 69 in
Central London. They will use a high powered wireless inductive charging
system to top up their battery system at the beginning and end of this route to
keep the bus operating throughout the entire day [50]. The recent double-
decker electric buses have used wireless charging technology as part of
innovative charging technology development. However, this is still far from a
mature technology and requires a massive recharging infrastructure network
[51]. The electric buses in London have shown promising performance on
short commuter routes; however, pure e-buses are still best suited for shorter
routes with operational flexibility and scope to recharge them in inter-peak
periods due to the limit of present battery capacity and recharging technology
[52].

In 2015, BYD and Alexander Dennis (ADL) announced a partnership to provide 51 further single-decker buses to route operator Go-Ahead with an
expected delivery in late 2016 [53]. BYD will provide the batteries and
electric chassis technology, and ADL will provide the bus body-building
technology [54]. The cost of each bus is expected to be £350,000 [55].
In summary, the recent development and deployment of battery electric buses
in London have shown that electric buses are technically feasible. It can be
seen that electric buses will also have an important role to play in the coming
ULEZ implementation in 2020. However, more time is needed to evaluate the
actual performance and address the key challenges facing electric buses such
as the limitations of battery technology that restrict range.

Hydrogen fuel cell hybrid bus


BASIC THEORY
Hydrogen fuel cells (FCs) are considered a clean energy source with the main
benefits over ICEs of zero harmful emissions during operation and high
efficiency [56]. Although many types of FCs exist, this chapter will only consider the application of FCs in transportation; considering the operating temperature, start-up time and technology maturity, the Proton Exchange Membrane Fuel Cell (PEMFC) offers the most promising solution [57].
Significant research into solid oxide fuel cells (SOFCs) in transportation has
been carried out [58–60], although these have yet to be applied in real
world bus applications. A PEM FC uses hydrogen as the fuel, which, through
an electrochemical reaction with oxygen (usually from air) generates
electricity with water as the only by-product from the chemical process [61].
By replacing the internal combustion engine in conventional buses, FCs can
be used as the primary energy source to power a bus with electrical energy,
therefore, achieving zero operating emissions. An additional advantage over
ICEs comes from the higher efficiencies exhibited by FCs [62, 63].
However, there are a number of barriers that need to be overcome before
widespread deployment can be achieved. These are primarily cost and
infrastructure [64, 65]. FC powered buses cost approximately five times more
than a conventional diesel bus with a similar power output [66], where they
typically cost in excess of £1,000,000 [67], due primarily to the expensive FC
stack and the small scale of production [68]. In addition, the widespread
deployment of FC buses would require a significant investment in hydrogen
refuelling infrastructure [64]. The implementation of FC buses has shown
that the technology is a promising solution for zero emissions buses if these
barriers can be overcome.
Figure 9 shows the configuration usually used in FC vehicles. The basic
drive train utilises an FC to power the propulsion motor; however, FCs are not well suited to providing for the transient power demands associated with city bus driving [69–73]. As such, most FC buses utilise a form of energy
storage in a series configuration to both address this and also to utilise
regenerative braking [74]. An additional benefit of such an approach is that
the size of the expensive fuel cell stack can be reduced [75]. The energy
storage implemented is usually either electrochemical battery technology
such as Li-ion or NiCd batteries or electrostatic supercapacitors (sometimes
referred to as ultracapacitors). The choice between these depends on the
particular design and requirements of the system, with batteries offering
reasonable power and energy densities although they have a relatively short
cycle life and supercapacitors offering poor energy densities but excellent
power densities, as shown in Figure 1. Additionally, supercapacitors have
very long lifetimes of up to 40 years [31].
In a series configuration, there are three main modes of operation that can be utilised to provide for the bus's power demands, as shown in Figure 10. Although these are the main modes of operation, the way they are used will depend on the control strategy implemented [76]; a minimal rule-based sketch is given after the list below.

· Mode 1: The SC discharges to supplement the FC to provide for high transient power
demands. This type of operation is expected to occur under heavy loads such as during
acceleration or going uphill.

· Mode 2: The FC will both power the load and use excess power to charge the SC. This is
expected to occur under low loads, when the FC power output is higher than the required load.

· Mode 3: The power from the FC and the power generated from regenerative braking will both be used to charge the SC. This is only expected to occur when the drive motor is operating as a generator during regenerative braking.
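To make the interplay of these modes concrete, the following is a minimal Python sketch of a rule-based mode selector. The power ratings, thresholds and idle value are illustrative assumptions only and are not taken from any particular bus described in this chapter.

# Minimal rule-based mode selection for an FC/supercapacitor series hybrid.
# All ratings and thresholds below are illustrative assumptions (values in kW).

FC_RATED_KW = 75.0    # assumed fuel cell rating (same order as the CHIC buses)
SC_MAX_KW = 120.0     # assumed supercapacitor power limit
FC_IDLE_KW = 10.0     # assumed low FC output held during regenerative braking

def select_mode(p_demand_kw, braking):
    """Return (mode, fc_power_kw, sc_power_kw); negative SC power means the SC is charging."""
    if braking:
        # Mode 3: regenerated power (negative demand) plus the FC output charge the SC.
        sc_power = p_demand_kw - FC_IDLE_KW
        return "mode 3 (regen charging)", FC_IDLE_KW, sc_power
    if p_demand_kw > FC_RATED_KW:
        # Mode 1: the SC discharges to cover the transient above the FC rating.
        sc_power = min(p_demand_kw - FC_RATED_KW, SC_MAX_KW)
        return "mode 1 (SC assist)", FC_RATED_KW, sc_power
    # Mode 2: the FC covers the load and uses the surplus to recharge the SC.
    return "mode 2 (FC charges SC)", FC_RATED_KW, p_demand_kw - FC_RATED_KW

if __name__ == "__main__":
    for demand, braking in [(150.0, False), (40.0, False), (-60.0, True)]:
        print(demand, select_mode(demand, braking))

A real control strategy would also track the SC state of charge and limit Mode 2 charging accordingly; the sketch only captures the three-mode logic itself.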

There have been a number of projects aimed at utilising FC technology for bus propulsion applications.
Table 3 lists many of the projects currently in operation along with the FC size and energy storage
used. The projects are split into two main categories depending on the relative size of the FC and
energy storage systems. The majority of the current projects are FC dominant, whereby the FC is
expected to provide for the majority of the propulsion needs. Alternatively, there are a few examples of
battery dominant hybrids, where the battery is the main source of power with the FC used as a
supplementary power source. It was announced in 2017 that the JIVE project is to implement 142 buses
across nine European cities with 56 new FC buses in the UK, which will be the first large scale
validation project of FC bus fleets [78].
CASE STUDY: TFL FC BUS ON THE RV1 BUS ROUTE
London has been involved with the testing and deployment of FC buses; Figure 11 shows the evolution of FC bus implementation in London. Initially, this was through the EU funded Clean Urban Transport for Europe (CUTE) project, which aimed at introducing hydrogen FC buses into European cities. A test run of three buses was operated on the RV1 bus route between 2004 and 2006, and this was increased to five buses from 2007 to 2009 [83]. London is now part of the Clean Hydrogen in European Cities (CHIC) project, with the first deployment in full service of the next
generation of FC bus in 2011 and is expected to continue until 2019. There
are currently eight Hydrogen buses operating in Central London as part of the
CHIC project, fully covering the RV1 bus route, which is 9.7 km in length
[83]. It is expected that by 2017 a further two buses developed as part of the
3Emotion project will be deployed through Van Hool [84]. The buses operate
for 16–18 hours/day before returning to the central depot for refuelling, which takes <10 minutes [85]. The workshop, which is responsible for
routine maintenance and hydrogen management, was specifically designed and built for hydrogen FC buses [86]. The hydrogen is transported in liquid form to the depot and converted into gaseous form to refuel the buses [83]; it is stored on site in gaseous form at 500 bar [86].

The buses themselves have evolved throughout this project, where the first
generation was powered only by a FC. These utilised a 250 kW fuel cell [82]
and achieved a hydrogen economy of 18.4–29.1 kg H2/100 km [87]. The
buses deployed as part of the CHIC project utilised a series hybrid
configuration, with a 75 kW PEM FC from Ballard and a 0.5 kWh Bluways
supercapacitor energy storage system [88]. This introduction of the hybrid
system significantly reduced the hydrogen consumption to <10 kg H2/100 km
[87] and is one of the most significant results of the CHIC project in London.
Figure 12 shows that the fuel economy of the buses operated as part of the
CHIC project showed considerable improvements over those in the CUTE
project. It can also be seen that the London buses performed better than the
CHIC target, exceeding it by nearly 50%. For all of the London FC buses, the
hydrogen is stored as a compressed gas at 350 bar, with the gas cylinders
stored on the roof of the bus [82].
Between 2011 and 2016, the FC buses in operation in London have covered
over 1.1 million kilometres [89], and a number of the FC buses have achieved
the milestone of 20,000 hours of operation [90]. This reflects the
improvement in availability seen over the course of the deployment of
CHIC’s London fleet. Figure 13 shows the availability from January 2012
until May 2015. The monthly availability of London FC buses has also
significantly increased after the availability upgrade program carried out in
2014. The availability is expected to improve to over 85% by the end of the
CHIC project as operators gain more operational and problem-solving
experience.

Apart from the technical and economic improvements, the London trial buses have also shown that the technology has become more viable because of the full working schedule, direct diesel replacement, centralised infrastructure and high public acceptance [86]. The FC bus trial projects have demonstrated promising performance as a long-term solution for zero emission transportation.
Comparison study
This part aims to provide a comparison of the current state of low emission
and zero emission bus systems. Diesel hybrid buses have been developed and
deployed as a means of achieving emissions reductions, where a number of
advantages in terms of efficiency, emissions and fuel consumption can be
seen over diesel buses. There are, however, a number of problems associated
with their widespread deployment. The first of these is the cost and is due to
the additional components necessary for the electrical system. Second, the
inclusion of the electrical system necessitates a significantly more
complicated configuration [19]. Third, although diesel hybrid buses can offer
significant improvements in terms of CO2 and NOx emissions, the primary
energy source is still the ICE. As such, they fail to address the underlying
source of emissions and are therefore fundamentally limited in the
improvements that can be achieved. As such, they can only really be
considered as a transitional technology to reduce emissions but are not a
viable option for meeting zero emissions targets. In order to meet the
requirements for zero emissions buses, which is the ultimate objective for a
clean transportation network, technologies such as electric and FC buses have
been developed as a long term solution for city bus transportation needs.
Therefore, this section will mainly compare the battery electric bus
(opportunity, overnight and trolley) and FC bus technologies as the two most
promising zero emission solutions in terms of operational requirements; the comparison is summarised in Table 4. The rankings are based on the authors’
opinions with reasoning given in the paragraphs below.
Range: Opportunity e-buses have a smaller energy storage that requires frequent recharging, which
equates to poor performance in terms of daily range. Overnight e-buses utilise a much larger battery,
which increases the range with reported values of over 300 km per charge. Trolley e-buses are
continuously powered with electricity by overhead lines along the route which effectively gives
unlimited range. FC buses use hydrogen cylinders as the fuel tanks, which allow the range to be greatly
extended (up to 450 km), limited mainly by the permissible weight and size of the hydrogen cylinders [91].
Route flexibility: Opportunity and trolley e-buses require recharging infrastructures along the route
which greatly limits their route flexibility. This is somewhat dependent on the size of the on-board
battery and will likely be more acute for trolley e-buses. The overnight e-buses and FC buses are
expected to be able to operate for an entire day’s service without recharging or refuelling. As such this
allows for much greater route flexibility. This appears to be easily achieved for FC buses, however for
overnight e-buses this is not always the case and will again be dependent on the size of the battery.
Refuelling time: Opportunity e-buses require frequent recharging throughout the entire route. Although
each recharge for the opportunity e-bus only takes up to 15 minutes, it is still considered a
drawback due to the requirement for regular recharging. Overnight e-buses require a longer recharging
time (average >4 hours) after each operation due to the increased battery capacity. The recharging time
is heavily dependent on the charging power. Trolley e-buses are charged through overhead wires so that
they require no refuelling time. FC buses are refuelled with gaseous hydrogen, which can be completed
quickly (<10 minutes) [91].
Infrastructure: Opportunity e-buses and trolley buses require corresponding infrastructure along the
route and each end of the routes. Therefore, opportunity e-buses and trolley buses require a
comprehensive infrastructure network. Overnight e-buses and FC buses both require infrastructure to
recharge/refuel at the end of daily operation. This can, however, be centralised at the service depot and
hence does not need to be as comprehensive. It appears, however, that the current recharging times for
overnight e-buses presents a problem since it is likely that a significant number of recharging points
and a massive recharging power would be needed to recharge the batteries of a large fleet in time for
the next day’s service. This could potentially be an issue for the electrical grid infrastructure if the
number of buses grows significantly, while this would not be a problem for FC buses because of their
short refuelling time.
Fuel availability: All three battery electric bus technologies use electricity to recharge their batteries.
This electricity could be centrally managed and distributed locally through the local electricity grids;
however, widespread electric bus deployment could significantly stress this infrastructure. FC buses
will likely require the development of a comprehensive distribution network for hydrogen, although on-
site hydrogen production has been demonstrated. Additionally, hydrogen fuel storage would create additional cost.
Clean source: Real zero emissions bus technology needs to be clean throughout the manufacturing
process, fuel production and bus operation. Currently, battery electric and FC bus technologies can
achieve zero operating emission but the lifetime emissions are much harder to quantify. It is hard to
forecast how the emissions from new technology manufacturing will change, but the fuel production
method can be roughly estimated. In the UK, the GHG emission factor for electricity was 0.44932
kgCO2/kWh in 2014 [92]. This is likely to change as the UK’s energy mix changes, where in 2015,
24.6% of electricity was generated from renewable energy sources [93]. Similarly, for FC buses, the
source of hydrogen is critical in determining the overall emissions. Currently, about 96% of hydrogen
is derived from fossil fuels [94], which results in 13.7 kgCO2/kgH2 [95]. Despite this, investigations into the use of renewable energy for hydrogen production through electrolysis have been carried out, offering the potential for a low carbon source of hydrogen. Currently, electricity for battery electric buses is a cleaner fuel than hydrogen for FC buses; a rough per-kilometre comparison is sketched after the cost item below.
Cost: Both electric and FC buses have higher capital costs than a conventional diesel bus; however, FC
buses are currently far more expensive than electric buses. The capital cost of electric buses is
somewhat dependent on the type of operation expected, where overnight buses will have higher costs
than opportunity and trolley buses due to the increased battery capacity. This does, however, need to be
weighed up against the cost of infrastructure, where opportunity and trolley buses require a
comprehensive and expensive charging network. Overnight electric and FC buses on the other hand can
make use of a centralised recharging/refuelling infrastructure.
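To illustrate the clean source point, the short Python sketch below combines the emission factors quoted above with assumed per-kilometre energy consumption figures for each bus type; the consumption values are rough assumptions for illustration only, not measured data from the London fleets.

# Rough well-to-wheel CO2 comparison per bus-km, using the emission factors cited
# above; the per-km energy consumption figures are illustrative assumptions.

GRID_CO2_PER_KWH = 0.44932   # kgCO2/kWh, UK grid electricity (2014 figure cited above)
H2_CO2_PER_KG = 13.7         # kgCO2/kgH2 for fossil-derived hydrogen (cited above)

ebus_kwh_per_km = 1.2        # assumed battery e-bus consumption, kWh/km
fc_bus_kg_h2_per_km = 0.09   # assumed FC hybrid bus consumption, ~9 kg H2/100 km

ebus_co2_per_km = ebus_kwh_per_km * GRID_CO2_PER_KWH
fc_co2_per_km = fc_bus_kg_h2_per_km * H2_CO2_PER_KG

print(f"Battery e-bus : {ebus_co2_per_km:.2f} kgCO2/km")
print(f"FC hybrid bus : {fc_co2_per_km:.2f} kgCO2/km")

Under these assumptions the battery electric bus emits roughly half the fuel-cycle CO2 of the FC bus per kilometre, which is consistent with the qualitative conclusion drawn above; both figures fall as the electricity mix decarbonises or as low-carbon hydrogen becomes available.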
Throughout this chapter, the main technologies being implemented to meet the low emissions
requirements have been presented. The most promising for these in terms of zero emissions are electric
and FC buses; however, it is clear that there are still significant barriers to their widespread
implementation. Following on from the comparison above, a number of challenges for future development have been identified.

For electric buses, it is clear that further improvements to battery technology are required in terms of
their energy densities and lifetime as well as the development of an effective charging infrastructure.
The challenges are somewhat dependent on whether the bus is intended to use the overnight or
opportunity charging schemes. For overnight charging, the charging infrastructure can be centralised;
however, this necessitates very large power requirements for the charging infrastructure, additionally
the range of the buses needs to be addressed through battery developments. The opportunity charging
schemes require a comprehensive and distributed charging network. In most cases, this requires the
development of high efficiency and power wireless charging technologies.

The future development of FC buses requires development in a broader range of areas. This includes
further work on individual components such as the FC stack and hydrogen storage. The FC stack is still
the most expensive component of the FC bus. The further development of the control strategies for
hybridised buses holds significant promise in reducing the size of the required FC stack and improving
the fuel economy. Hydrogen storage is a key area for future research for bus applications, where
technologies such as solid state storage offer potential to improve the storage density of hydrogen. For
widespread implementation, the development of the hydrogen infrastructure is vital. This includes the
production of hydrogen, particularly from clean sources, the distribution of hydrogen or on-site
production and purification.
Introduction to Advanced Charging System
for Plug-in Hybrid Electric Vehicles and
Battery Electric Vehicles
Electric vehicles (EVs) have received intensive attention during the last decade due to their characteristics as vehicles as well as additional benefits that cannot be offered by conventional vehicles. A massive deployment of electric vehicles can reduce the total consumption of fossil fuel and therefore cut down greenhouse gas emissions [1]. In addition, as they have higher energy efficiency, lower running costs can be achieved than with conventional internal combustion-engine vehicles. Recently, value-added
utilization of electric vehicles also has been proposed and developed
including the ancillary services for the electrical grid and electricity support
to certain energy management system [2–5]. Therefore, the economic
performance of the electric vehicles can be significantly improved.
Several studies have proposed and described the grid integration of electric vehicles, especially in connection with the introduction of renewable energy [6]. Fluctuating renewable energy sources, such as wind and solar, require a fast-response energy buffer to cover their intermittency as well as to store the surplus electricity when supply exceeds demand. Electric vehicles are considered an appropriate resource to balance and store these
kinds of renewable energy sources [7]. The battery owned by the electric
vehicles can absorb and release the electricity from and to the electrical grid,
respectively, to balance the electrical grid promptly.
In general, there are four types of electric vehicles currently in operation or under development: (i) the conventional hybrid electric vehicle (HEV), (ii) the plug-in hybrid electric vehicle (PHEV), (iii) the battery electric vehicle (BEV) and (iv) the fuel-cell electric vehicle (FCEV). An HEV combines an electric motor and an internal combustion engine; hence, it is also fitted with a battery to power the motor as well as store electricity. The energy to power the motor comes from the engine and regenerative braking. Recently, however, many HEVs have been redeveloped and shifted to PHEVs due to the excellent characteristics and higher flexibility of the PHEV compared with the HEV. Like an HEV, a PHEV also has an electric motor and an internal combustion engine.
According to IEEE standards, a PHEV is an HEV with the following additional specifications: battery storage larger than 4 kWh, a charging system fed from an external energy source and the capability to run longer than 16 km [8]. Furthermore, a BEV is generally defined as a vehicle driven solely by electric motors, with the electricity stored as chemical energy in the battery. Therefore, a BEV relies on external charging and its driving range depends strongly on its battery capacity. As the battery capacity of a BEV is significantly larger than that of an HEV or PHEV, the battery makes up a substantial part of the cost of a BEV. Further development of batteries and a decrease in their price are highly expected in the near future; hence, a more massive deployment of PHEVs and BEVs can be realized.
On the other hand, an FCEV uses only an electric motor, like a BEV. However, it utilizes hydrogen as the main fuel, which is stored in a tank. The oxidation of hydrogen produces electricity to power the electric motor and any surplus is stored in the battery. In practice, as hydrogen refuelling can be performed in a very short time, comparable to gasoline refuelling, an FCEV generally does not require charging from an external charger. Although it varies, the battery capacity of a PHEV is generally larger than that of an HEV. According to a survey conducted by the Union of Concerned Scientists (UCS), about 50% of drivers in the US drive less than 60 km on weekdays [9]. Therefore, many available PHEVs can cover a weekday commute without additional charging away from home. In addition, although its battery capacity is lower than that of a BEV, a PHEV has higher flexibility in driving range as power can be supplied by the engine once the battery capacity drops to a certain low value. Both PHEVs and BEVs are believed likely to dominate the share of vehicles in the future. In addition, according to the Electric Power Research Institute (EPRI), around 62% of vehicles will be PHEVs [10].
A high share of PHEVs and BEVs results in a high demand for electricity due to charging; hence, it strongly correlates with the supply and balancing of the electrical grid.
Unmanaged charging of PHEVs and BEVs potentially results in several grid problems, including over- and under-voltage and frequency deviations in distribution networks, especially when individual charging of PHEVs and BEVs takes place in large numbers and at large capacity [11]. Several methods to minimize the impact of unmanaged charging of PHEVs and BEVs have been proposed and developed by researchers. They include coordinated charging [12], demand response [13], battery-assisted charging [14] and appropriate charger distribution [15]. In addition, integrated vehicle-to-grid (V2G) operation also has the potential to avoid concentrated charging, as well as to facilitate other services [16].
In coordinated charging, the charging behaviour of PHEVs and BEVs is controlled by certain entities; therefore, the electrical grid can be kept stable and balanced. Further, this charging behaviour control correlates strongly with V2G services, especially for load-shifting or valley-filling strategies [17]. However, the algorithm for valley filling under large-scale vehicle deployment is very sophisticated; hence, computational complexity becomes a crucial factor [18].
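As an illustration of the valley-filling idea, the Python sketch below greedily allocates a required amount of EV charging energy to the hours of a daily load profile with the lowest combined load, subject to a per-hour charger limit. The base-load profile, energy requirement and charger limit are illustrative assumptions, and the greedy rule is only one simple heuristic among the many algorithms referred to above.

# Greedy valley-filling sketch: required EV charging energy is allocated to the hours
# with the lowest combined load, within a per-hour charger limit. One-hour slots are
# assumed, so kWh per slot equals average kW. All numbers below are illustrative.

def valley_fill(base_load_kw, ev_energy_kwh, charger_limit_kw, step_kwh=1.0):
    """Return the per-hour EV charging allocation (kWh per one-hour slot)."""
    schedule = [0.0] * len(base_load_kw)
    remaining = ev_energy_kwh
    while remaining > 1e-9:
        open_slots = [h for h in range(len(base_load_kw))
                      if schedule[h] < charger_limit_kw - 1e-9]
        if not open_slots:
            break  # every slot is saturated; the rest cannot be shifted
        h = min(open_slots, key=lambda s: base_load_kw[s] + schedule[s])
        allocation = min(step_kwh, charger_limit_kw - schedule[h], remaining)
        schedule[h] += allocation
        remaining -= allocation
    return schedule

if __name__ == "__main__":
    base = [420, 380, 350, 340, 360, 450, 560, 640, 700, 720, 710, 690,
            680, 670, 660, 680, 720, 760, 780, 740, 680, 600, 520, 460]  # assumed kW
    plan = valley_fill(base, ev_energy_kwh=300, charger_limit_kw=100)
    for hour, ev in enumerate(plan):
        if ev > 0:
            print(f"{hour:02d}:00  base {base[hour]:3d} kW  ev {ev:5.1f} kW")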
Demand response encourages the users or drivers of PHEVs and BEVs to
manage their charging demand during peak-load hours or when the electrical
grid system is at risk [19]. Therefore, it is usually divided into two types:
time-based and incentive-based. The former deals strongly with the real-time
pricing and critical peak pricing. On the other hand, the latter is related to the
incentive due to utilization of PHEVs and BEVs for frequency regulation and
spinning reserve [20]. A pricing system in the electrical grid requires accurate prediction on both the supply and demand sides. Therefore, clarifying the uncertainties and minimizing their impacts become the major concerns in demand response.
Although they are promising methods, both coordinated charging and demand response require further theoretical development and demonstrations to establish the system and standards for a relatively large-scale control system. On the other hand, battery-assisted charging is considered simple and readily applicable, owing to its simplicity and convenience in structure and control.
This chapter discusses the charging system for both PHEVs and BEVs, including the recently developed battery-assisted charger. First, the available charging levels and systems for PHEVs and BEVs are explained in terms of charging rate and standards. In addition, the charging behaviour of PHEVs and BEVs at different ambient temperatures (seasons) is described, clarifying the effect of ambient temperature on the charging rate. Finally, an advanced charging system with battery assistance is explained, including its quick-charging performance during simultaneous charging of electric vehicles.
Charging system for PHEV and BEV
Charging of PHEVs and BEVs correlates strongly with several parameters, including charging devices, cost, charging rate, location, time and grid condition. Therefore, appropriate selection and distribution of chargers are crucial to accommodate these parameters. PHEVs and BEVs basically share the same charging standards; therefore, there are no charger features or requirements peculiar to either vehicle type. The charger is designed to communicate with the vehicle to ensure safety and an appropriate electricity flow. In addition, the charger also monitors earth leakage to the surrounding ground.

On the other hand, a battery management system (BMS) is installed in the vehicle as a vital component, performing thermal management, cell balancing and monitoring of over-charge and over-discharge of the battery pack. The battery pack consists of many individual cells, each with a certain safe working voltage. Therefore, it is crucial to ensure that they operate within the permitted range to avoid shortened battery life and battery failures, including fire.
Chargers can be installed on-board or off-board. The on-board charger limits its electricity flow because of constraints such as weight and space. Charging can be performed in conductive (direct contact through a charging connector and cable) or inductive ways (using an electromagnetic field). On the other hand, the off-board charger is installed externally; therefore, there is no limitation related to size and weight. The electricity flow from an off-board charger to the vehicle is DC; hence, a high charging rate can be achieved.
The direction of electricity between charger and vehicle can be classified into
unidirectional and bidirectional flows. The former only facilitates charging in a single direction, from the external charger to the vehicle (battery). The latter provides the possibility of both charging and discharging electricity to and from the vehicle. Through bidirectional charging, the utilization of PHEVs and BEVs is greatly widened.
In relation to the charging rate, chargers or electric vehicle supply equipment (EVSE) can be classified by the maximum power that can be delivered to the battery of a PHEV or BEV, as follows:
a. Level-1 charging

Level-1 charging utilizes the on-board charger and is compatible with the
household electrical socket and power, which generally has voltage of
100 or 200 V (AC) depending on the region. This level of charging can facilitate a charging rate of up to about 4 kW and is suitable for overnight charging at an ordinary household without the need for additional device installation.
b. Level-2 charging

This level of charging improves the charging rate by using a dedicated mounted box. Level-2 charging can supply power of 4−20 kW, with a maximum voltage of 400 V (three-phase AC), depending on the available capacity of the local supply. Generally, this kind of charger is installed at dedicated charging facilities, including residential areas or public spaces. The charging connectors for both level-1 and level-2 chargers vary across countries and manufacturers.
c. Level-3 charging

Unlike the above levels of charging, level-3 charging is performed as a DC system. DC electricity is supplied by the charger, bypassing the on-board charger. Therefore, a very high charging rate, higher than 50 kW, can be achieved. Currently, there is no single standard for this kind of fast charging that is accepted by all vehicle manufacturers. The charging plug (including the EV socket) and the communication protocol between the charger and the vehicle differ between the standards, although the basic principles are similar; a back-of-the-envelope comparison of charging times across the three levels is sketched below.
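The Python sketch below gives that comparison using an idealised constant-power model that ignores tapering and losses. The per-level powers are typical values assumed for illustration, and the 24 kWh pack matches the Nissan Leaf used later in this chapter.

# Back-of-the-envelope charging-time estimate for the three charging levels.
# Per-level powers are assumed typical values; tapering and losses are ignored.

LEVEL_POWER_KW = {
    "level-1 (household AC)": 3.0,
    "level-2 (dedicated AC)": 20.0,
    "level-3 (DC fast)": 50.0,
}

def hours_to_charge(capacity_kwh, soc_start, soc_end, power_kw):
    """Idealised constant-power charging time in hours."""
    return capacity_kwh * (soc_end - soc_start) / power_kw

for level, power in LEVEL_POWER_KW.items():
    t = hours_to_charge(24.0, 0.30, 0.80, power)
    print(f"{level:<24s}: {t * 60:5.1f} min for 30% -> 80% SOC")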
Currently, there are three major standards of charger, especially for quick
charging: CHAdeMO, combined charging system (CCS) and Tesla
Supercharger. The detailed specifications of each charging standard are
shown in Table 1.
CHAdeMO was the first DC fast charging standard, originally developed by Japanese companies including the Tokyo Electric Power Company (TEPCO), Fuji Heavy Industries, Nissan, Mitsubishi Motors and Toyota, which are organized under the CHAdeMO Association. The CHAdeMO standard also complies with the international standard IEC 62196-3 and is designed only for DC fast charging. According to the development roadmap [21], a high-power CHAdeMO version has also been developed, which is able to charge at 100 kW continuous power and 150–200 kW peak power (350 A, 500 V). In addition, an even higher-power CHAdeMO version is planned for the future (2020), which can charge at a rate of 350–400 kW (350–400 A, 1 kV).
Currently, CHAdeMO has the largest global coverage, including Japan (about
7000 chargers), Europe (about 4000 chargers) and USA (about 2000
chargers).
On the other hand, the CCS standards, including Combo 1 and 2, are capable of facilitating both AC charging, covering level-1 and level-2 charging, and DC charging. CCS was developed by several European and US car manufacturers around 2012. The Society of Automotive Engineers (SAE) and the European Automobile Manufacturers' Association (ACEA) strongly supported this initiative, with the main purpose of facilitating both AC and DC charging through a single charging inlet in the vehicle. CCS is able to facilitate AC charging at a maximum rate of 43 kW and DC charging at a maximum rate of 200 kW, with a future perspective of up to 350 kW [22]. CCS
chargers are currently installed mainly in Europe and the USA with
approximate numbers of 2500 and 1000, respectively.
Tesla Superchargers use their own charging standard. Currently, a Tesla Supercharger comprises multiple charger modules working in parallel and is able to deliver up to 120 kW of DC charging [23]. Tesla Superchargers are
currently installed in about 800 stations, having about 5000 superchargers in
total.
Another charging method for PHEVs and BEVs is inductive charging, which is conducted wirelessly. The electromagnetic field is created by an induction coil driven with high-frequency AC. The generated magnetic field induces a current in the vehicle-side inductive power receiver; thus, electricity can be transferred to the vehicle. Inductive charging uses the IEC/TS 61980 family of standards. The application of inductive charging has the potential to eliminate range anxiety, as well as to reduce the size of the battery pack. However, there are some technical barriers to its application, especially related to its lower efficiency, slower charging rate, interoperability and safety.

General charging behaviour of electric vehicles


In general, PHEVs and BEVs adopt lithium-ion batteries for energy storage because of their high energy density, long charge/discharge cycle life, lower environmental impact and more stable electrochemical properties [24]. Charging and discharging of lithium-ion batteries are greatly influenced by temperature. According to the literature [25, 26], lower rates of charging and discharging occur at relatively lower temperatures. This is due to the change in the interface properties of the electrolyte and electrode, such as viscosity, density, dielectric strength and ion diffusion [27]. Furthermore, as the temperature decreases, the charge-transfer resistance also increases and can become higher than the bulk and solid-state interface resistances [28].
Aziz et al. [14] performed a study to clarify the influence of ambient temperature, or season, on the charging rate of PHEVs and BEVs. The study was performed during both winter and summer, using a CHAdeMO DC quick charger with a rated power output of 50 kW. A Nissan Leaf with a battery capacity of 24 kWh was used as the vehicle. The results of their study are explained below.
Figure 1 shows the obtained charging rate and battery state of charge (SOC) in different seasons. Although the rated output of the quick charger is 50 kW, the realized charging rate to the vehicle is lower, especially during winter. Charging during summer (higher ambient temperature) leads to a higher charging rate; therefore, a shorter charging time can be achieved. To charge the battery from an SOC of about 30% to 80%, the required charging durations in winter and summer are 35 and 20 min, respectively. During summer, a relatively high charging rate (about 40 kW) can be achieved up to an SOC of about 50%. However, the charging rate decreases moderately as the battery SOC increases, and the charging rate at a battery SOC of 80% is about 16 kW. During winter, on the other hand, the charging rate reaches about 35 kW only momentarily and then decreases as the battery SOC increases; the charging rate at a battery SOC of 80% is about 10 kW.
Figure 2 shows the current and voltage changes during charging in different seasons. The curves of charging current closely resemble the charging rate curves in Figure 1. Lithium-ion batteries are generally charged with a constant current (CC)–constant voltage (CV) method [22]. Charging at lower temperature leads to a gradual decrease in the charging current with charging time, or with the increase in battery SOC. In contrast, charging under relatively warmer conditions results in a higher charging current, especially at low battery SOC. A higher constant current of about 105 A is obtained during the initial 5–10 min of charging (up to a battery SOC of about 50%). With regard to charging voltage, although there is no significant difference between the two conditions, charging at a relatively higher temperature (summer) results in a higher initial charging voltage before it settles down to a constant value. Therefore, the CV condition can be reached faster.
It is clear that the ambient temperature significantly affects the charging
behaviour of PHEVs and BEVs. Charging under relatively high ambient
temperature (such as summer) facilitates a higher charging rate, especially
because of higher charging current and faster increase in the charging
voltage. Hence, a shorter charging time can be achieved.
When the battery is near empty, electricity can flow at a high rate; the rate starts to slow down when the battery SOC exceeds 50% and becomes considerably slower when the SOC is above 80%. This phenomenon is generally called tapering.
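The Python sketch below reproduces this qualitative CC–CV behaviour with a toy model: constant power until an assumed SOC threshold, then an exponentially decaying rate that mimics tapering. All parameter values are illustrative assumptions chosen only to give the right shape, not the measured Leaf data discussed above.

# Toy CC-CV charging simulation illustrating tapering. Parameters are illustrative.

import math

CAPACITY_KWH = 24.0   # pack capacity, matching the test vehicle above
CC_POWER_KW = 40.0    # assumed charging rate during the constant-current phase (summer-like)
CV_SOC = 0.5          # assumed SOC at which the pack reaches its voltage limit
TAU_MIN = 12.0        # assumed time constant of the current decay in the CV phase

def charge_profile(soc0=0.3, soc_target=0.8, dt_min=1.0, t_max_min=180.0):
    """Return (time_min, soc, power_kw) samples for one idealised charging session."""
    soc, t, t_cv_start, samples = soc0, 0.0, None, []
    while soc < soc_target and t < t_max_min:
        if soc < CV_SOC:
            power = CC_POWER_KW                     # constant-current (constant-power) phase
        else:
            if t_cv_start is None:
                t_cv_start = t
            power = CC_POWER_KW * math.exp(-(t - t_cv_start) / TAU_MIN)  # tapering
        soc += power * (dt_min / 60.0) / CAPACITY_KWH
        t += dt_min
        samples.append((t, soc, power))
    return samples

if __name__ == "__main__":
    for t, soc, p in charge_profile()[::5]:
        print(f"t = {t:5.1f} min   SOC = {soc * 100:5.1f} %   P = {p:5.1f} kW")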

Advanced charging system


The widespread deployment of PHEV and BEV charging, especially fast charging, has some critical impacts on the electrical grid, including deterioration of grid power quality and grid overload. Therefore, it is crucial to schedule and control the charging of PHEVs and BEVs. One strategic method to charge the vehicles with minimum impact on the electrical grid is to adopt a battery to assist the charging. Aziz et al. [14] have proposed and studied a battery-assisted charger (BAC) for PHEVs and BEVs. The battery is embedded inside the charger with the aims of improving the quick-charging performance and minimizing the concentrated load on the grid.
The developed BAC is able to limit the power received from the electrical grid, as well as control the charging rate to the vehicles. It is important to manage the power received from the grid in order to avoid the electricity demand exceeding the contracted capacity and also to optimize the electricity demand to follow grid conditions. In the future, as the share of renewable energy increases, the electrical grid will also face problems including intermittency. This leads to requirements for energy storage and demand control.
The BAC manages the electricity distribution inside the system, namely the electricity received from the grid, the battery and the chargers, to realize the optimum performance. Therefore, the BAC is able to satisfy both the supply side (minimizing the grid load through load shifting and reducing the electricity cost) and the demand side (satisfying vehicle owners through quick charging, even during peak hours).
The purposes of the BAC cover: (1) reducing the contracted power capacity from the electrical grid, (2) avoiding high electricity demand during peak hours due to PHEV and BEV charging, (3) shortening the charging time as well as the waiting/queueing time, (4) facilitating possible participation in grid-ancillary programs such as spinning reserve and frequency regulation, (5) serving as storage for surplus electricity in the electrical grid arising from high renewable generation and excess power and (6) providing an emergency back-up to the surrounding community in which it is installed.
Figure 3 shows the schematic diagram of the proposed BAC (solid and dashed lines represent the electricity and information streams, respectively). A community energy management system (CEMS) is responsible for the overall management of energy throughout the community, covering supply, demand and storage. It monitors and controls the energy inside the community to ensure the comfort and security of community members as well as to minimize the environmental impact and social cost. Concretely, the CEMS communicates with other EMSs under its authority, exchanging information such as electricity prices and supply and demand forecasts. In addition, it also negotiates with other CEMSs or utilities outside the community to achieve the largest benefits for the whole community.
In the electricity stream, there are three main components that are connected by high-capacity DC lines:
1) AC/DC inverter, 2) stationary battery for storage and buffer and 3) quick charger for vehicles. The
AC/DC inverter receives the electricity from the electrical grid and converts it to a relatively high DC voltage of about 400 V. In addition, the server/controller monitors, calculates and controls the
amount of electricity received from the electrical grid based on some data, including electricity price
and grid condition. Furthermore, the server manages the electricity to and from the battery and the
charging rate from a quick charger to the connected vehicles. In the battery unit, a bidirectional DC/DC
converter and battery management unit (BMU) are introduced to facilitate controllable charging and
discharging behaviours according to the control values from the server. In the quick charger, a DC/DC
converter and a charging control unit (CCU) are introduced to facilitate active management during
vehicle charging. The number of quick chargers can be more than one, depending on the conditions.

The battery is adopted to store electricity when there is remaining contracted power capacity and the electricity price is lower (during off-peak hours). Conversely, the battery discharges its stored electricity when the electricity price is high due to high charging demand or peak hours. A stationary battery with a relatively large capacity is generally employed to sufficiently facilitate simultaneous charging of multiple vehicles. Therefore, the charging service can be maintained at high quality.

According to the charging and discharging behaviours of the employed stationary battery and the
source of electricity for charging, quick-charging modes of the BAC are classified as follows:

a. Battery discharging mode

The stationary battery releases its electricity to assist the charging. Therefore, vehicle charging is conducted using both the electricity received from the electrical grid and the electricity discharged from the stationary battery. This mode is introduced when simultaneous quick charging of multiple vehicles occurs, especially in the case of a high electricity price. The electricity flow in the battery discharging mode can therefore be expressed as the total vehicle charging power being the sum of the power drawn from the grid (up to the contracted capacity) and the power discharged from the stationary battery.

b. Battery idling mode

The stationary battery may be in the idling (stand-by) mode under several conditions: (a) the contracted power capacity can sufficiently cover the electricity demand for simultaneous charging of vehicles (low charging demand), or (b) the stationary battery is empty or below a certain threshold value due to high and continuous charging of vehicles (the stationary battery is not able to supply electricity unless it is recharged). In the latter case, the BAC manages the charging rate of each charger to the corresponding vehicle; hence, the contracted power capacity can be maintained and any penalty avoided. The electricity flow in the battery idling mode can therefore be expressed as the total vehicle charging power being equal to the power drawn from the grid, kept within the contracted capacity. A dispatch sketch covering both modes follows below.
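The following is a minimal Python sketch of this dispatch logic, covering both the battery discharging and the battery idling/charging situations. The contracted capacity, battery size and power limits are illustrative assumptions, not the specification of the demonstrated BAC.

# Minimal single-step BAC dispatch sketch. Vehicle demand is served from the grid up
# to the contracted capacity; the stationary battery discharges to cover any shortfall,
# and spare grid capacity recharges it. All parameter values are illustrative.

CONTRACTED_KW = 50.0          # contracted power capacity from the grid
BATT_MAX_KW = 60.0            # assumed battery discharge/charge power limit

def dispatch(vehicle_demand_kw, batt_soc):
    """Return grid, battery and delivered charger power for one control step."""
    grid_kw = min(vehicle_demand_kw, CONTRACTED_KW)
    shortfall = vehicle_demand_kw - grid_kw
    if shortfall > 0 and batt_soc > 0.1:
        batt_kw = min(shortfall, BATT_MAX_KW)                   # battery discharging mode
        mode = "battery discharging"
    else:
        batt_kw = -min(CONTRACTED_KW - grid_kw, BATT_MAX_KW)    # spare grid power recharges battery
        mode = "battery idling/charging"
    return {"mode": mode, "grid_kw": grid_kw, "battery_kw": batt_kw,
            "delivered_kw": grid_kw + max(batt_kw, 0.0)}

if __name__ == "__main__":
    print(dispatch(vehicle_demand_kw=80.0, batt_soc=0.7))   # two vehicles charging at once
    print(dispatch(vehicle_demand_kw=30.0, batt_soc=0.7))   # single vehicle, battery recharges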
Table 2 shows the specifications of the developed BAC system and the vehicles used during the experiments. A Nissan Leaf with a battery capacity of 24 kWh is used as the vehicle. Figure 4 shows the results of simultaneous quick charging of two vehicles during winter, conducted using the conventional quick charger and the BAC under a contracted power capacity of 50 kW. The electricity received from the grid is kept at 50 kW or below. In the case of charging using the conventional charging system, the first connected vehicle is charged at a higher charging rate than the vehicle connected later. This is due to the limit on the contracted power capacity and hence on the available power for charging. The charging rate of the second connected vehicle increases gradually as the charging rate of the first connected vehicle starts to decrease; therefore, the total electricity can be maintained at or below the contracted power capacity. In addition, when the charging rate of both connected vehicles decreases due to the increase of battery SOC, the total electricity purchased from the electrical grid decreases. The first and second vehicles are charged to an SOC of 80% after charging for 40 and 50 min, respectively.
In contrast, in the case of charging using the BAC, the first and second vehicles enjoy almost the same charging rate, and both vehicles reach a battery SOC of 80% in almost the same time (about 35 min). Furthermore, the electricity from the electrical grid can be kept below the contracted power capacity, even though the total charging rate for both vehicles is larger than the contracted power capacity. This is because the stationary battery assisting the system discharges to supply the additional electricity. Hence, compared to a conventional charging system, the BAC is able to achieve high-quality charging with a higher charging rate during simultaneous charging.
Figure 5 shows the results of simultaneous charging of two vehicles during summer, performed using the conventional charging system and the BAC. Showing the same tendency as charging during winter, in the conventional charging system the first connected vehicle enjoys a higher charging rate, while the second vehicle must contend with a significantly lower charging rate because of the limited contracted power capacity. The first and second vehicles reach a battery SOC of 80% after charging for about 20 and 30 min, respectively.

When charging with the BAC, similar to the case in winter, both vehicles can be charged at almost the same charging rate while respecting the contracted power capacity. Both vehicles can be charged in a relatively short time of about 20 min. The stationary battery discharges its electricity as long as the total charging rate of the two vehicles exceeds the contracted power capacity.
It is clear that the BAC improves the charging quality, especially during simultaneous charging of multiple vehicles. In addition, from the point of view of the electricity grid, the application and deployment of the BAC can reduce the stress on the grid caused by the high demand for vehicle charging.

Simultaneous charging with developed BAC system


Figure 6 shows the demonstration test results during winter and summer under a contracted power capacity of 30 kW. Simultaneous charging of eight vehicles during summer can be conducted more quickly than during winter because of the higher charging rate. However, the SOC of the stationary battery decreases considerably. This is because of the high discharging rate of the stationary battery, which assists the quick chargers and covers the electricity demand beyond the limit of the contracted power capacity. In addition, the stationary battery cannot be recharged because no marginal electricity is available from the electrical grid. On the other hand, the discharging rate of the stationary battery is significantly lower during winter due to the slower charging rate to the vehicles. Hence, the total charging rate of the two quick chargers can be maintained below the contracted power capacity. This leaves marginal electricity that can be utilized to charge the stationary battery. Therefore, the SOC of the stationary battery in winter does not decrease as much as it does during summer.
Figure 7 shows the simultaneous charging of eight vehicles during summer under a contracted power capacity of 15 kW. Compared to Figure 6, there is almost no significant change in the charging rate of the vehicles, except for that of the last connected vehicle. However, the discharging rate of the stationary battery is very high, resulting in a significant decrease in its SOC. The SOC of the stationary battery drops rapidly and reaches 10% during the charging of the last two vehicles. As a result, the last connected vehicle is charged using only the electricity received from the electrical grid, with no assistance from the stationary battery. As the contracted power capacity is very low, this last vehicle cannot start charging until the vehicle before it has charged completely. The stationary battery cannot be recharged during simultaneous charging because of the lack of marginal electricity and the high charging rate of the vehicles.

Based on the results of the demonstration test, the application of the BAC has the potential to significantly improve the charging performance of quick chargers, especially during the simultaneous charging of multiple vehicles. The balance among the vehicle charging rate, the contracted power capacity and the stationary battery SOC is very important. Therefore, the charging demand of PHEVs and BEVs must be forecast in advance.

Conclusion
As the number of PHEVs and BEVs increases massively, their charging becomes a very important issue due to the fluctuating and high demand for electricity. Therefore, it is very important to manage their charging through coordinated charging, battery-assisted charging and demand response. Among these three methods, coordinated charging and demand response require advanced theoretical development, large-scale demonstration and coordination with the electrical grid; therefore, they will need several more years before realization. On the other hand, battery-assisted charging is considered very applicable in terms of both economy and technology.

The charging behaviour of PHEVs and BEVs is strongly influenced by ambient temperature. Charging under a relatively high ambient temperature (summer) leads to a higher charging rate; therefore, a shorter charging time can be realized. In addition, a battery-assisted charger (BAC) has been developed especially to facilitate simultaneous charging of multiple vehicles under a limited contracted power capacity. The demonstration test of the BAC proves that it can facilitate high-quality charging while minimizing the electrical grid stress caused by massive and concentrated charging of PHEVs and BEVs.
Introduction to Hybrid Energy Storage
System for a Coaxial Power-Split Hybrid
Powertrain
Energy conservation and emission reduction are two important tasks for the sustainable development of our industrial society. In the automotive industry, applications of the hybrid electric vehicle (HEV), electric vehicle (EV), and fuel cell vehicle (FCV) are effective technical approaches for energy conservation and emission reduction [1, 2]. Currently, the EV and FCV cannot comprehensively fulfill the requirements of transit buses due to their poor durability and high cost. However, the HEV is a feasible solution with high reliability and relatively low cost.
There are two kinds of power source for an HEV: one is the internal combustion engine, and the other is the electric energy storage system (ESS). An ESS can discharge electric power to propel the vehicle or absorb electric power during the regenerative braking process. Generally, the architecture of an HEV powertrain can be classified as series hybrid, parallel hybrid, or power-split hybrid [3]. For heavy-duty hybrid powertrains for transit buses, the series hybrid and parallel hybrid are widely adopted at present. For instance, the Orion VII and MAN Lion transit buses use series hybrid powertrains, while Volvo and Eaton have designed parallel hybrid powertrains for heavy-duty applications.
With the progress of technology, the battery used for the ESS of an HEV has gradually shifted from lead-acid, NiCad, and Ni-MH to lithium-ion batteries. Lithium-ion batteries will be used widely as the ESS for various vehicles because of their high energy density, good safety, and long durability [4]. Currently, the materials of lithium-ion batteries are mainly lithium iron phosphate and nickel-cobalt-manganese ternary composites [5].
An HEV transit bus undergoes frequent acceleration and deceleration during
its working time and requires large working currents of the ESS for these
processes. Because the discharge and charge rates of lithium-ion batteries are
limited, if the ESS consists of only lithium-ion batteries, a large capacity of
lithium-ion batteries is required, which will increase the cost and weight of an
HEV greatly. To overcome this problem, hybrid energy storage system
(HESS) composed of lithium-ion batteries and supercapacitors is employed.
In contrast to a lithium-ion battery, a supercapacitor can charge or discharge
with very large instantaneous currents. This characteristic can provide
sufficient electric power during the acceleration process and store electric
energy during the regenerative braking process. Because supercapacitors use
a porous carbon-based electrode material, a very high effective surface area
can be obtained by this porous structure compared to a conventional plate
structure. Supercapacitors also have a minimal distance between the
electrodes, which results in a very high capacitance compared to a
conventional electrolytic capacitor [6, 7]. Apart from the fast
charge/discharge rates and the high-power density, supercapacitors have
much longer lifetimes (>100,000 cycles) compared to lithium-ion batteries
[8–10]. However, supercapacitors normally have a much smaller energy
capacity compared with lithium-ion batteries. Therefore, using an HESS can
fully utilize the advantages of these two kinds of energy storage devices and
avoid their disadvantages.
Figure 1 shows four primary topologies of an HESS, which encompass
passive hybrid topology, supercapacitor semi-active hybrid topology, battery
semi-active hybrid topology, and parallel active hybrid topology [11, 12].
The passive hybrid topology is the simplest way to combine the battery and supercapacitor. The advantage of this topology is that no power
electronic converters are needed. Because the voltage of the DC bus is
stabilized by the battery, the stored energy of the supercapacitor cannot be
utilized sufficiently. In the supercapacitor semi-active topology, the battery is
connected to the DC bus directly, while the supercapacitor uses a
bidirectional DC/DC converter to interface the DC bus. As a result, the
voltage of the DC bus equals the output voltage of the battery, so it cannot vary too much, but the voltage of the supercapacitor can be varied over a wide range. The disadvantage of this topology is that a large DC/DC converter is required. In the battery semi-active topology, the
supercapacitor is connected to the DC bus directly, while the battery uses a
bidirectional DC/DC converter to interface the DC bus. The advantage is that the battery current can be maintained at a near constant value; thus the lifetime
and energy efficiency of the battery can be improved significantly. The main
shortcoming of this topology is that the voltage of the DC bus will vary
during the charging/discharging process of the supercapacitor. The parallel active hybrid topology is an optimal choice that can solve all the problems of the above topologies. Its disadvantage is that two DC/DC converters are needed, which increases the complexity, cost, and losses of the system.
Three different topologies of HESS using supercapacitors as the main energy
storage device for EV application were analyzed by Song et al. [13].
Rothgang et al. studied the performance of an active hybrid topology [14].
Among these various architectures, the supercapacitor semi-active hybrid
topology using lithium-ion batteries as the main storage device is considered a good solution for HEV applications due to its high power density and low
cost.

Most investigations of the HESS focus on EV applications. Hung and Wu designed a rule-based control strategy for the HESS of an EV and estimated
the system performance [15]. Vulturescu et al. tested the performance of an
HESS consisting of NiCad batteries and supercapacitors [16]. Song et al.
compared three different control strategies for an HESS, which included a
rule-based control, a model predictive control, and a fuzzy logic control [17].
An HESS can also be used with a fuel cell and successfully satisfy the
requirements of an FCV [18–20]. However, few investigations have concentrated on the application of an HESS to an HEV at present because such systems are more complicated than those for an EV. Masih-Tehrani et al. studied the energy management strategy of an HESS for a series hybrid powertrain [21] and Nguyen et al. investigated the performance of a belt-driven starter generator (BSG)-type parallel hybrid system with an HESS [22]. All these studies show the HESS to have an advantage over an ESS with only batteries.
The control strategy of an HESS for an EV is easier than that for an HEV or
an FCV because it only needs to consider the power distribution between the
batteries and supercapacitors. By contrast, the control strategy of an HEV
with an HESS must distribute the power demand of the vehicle among the ICE, the batteries, and the supercapacitors; thus it is more complicated. Currently, few studies of an HESS for a power-split HEV have been reported. In this study,
a coaxial power-split HEV with an HESS for heavy-duty transit bus
application is evaluated. The HESS uses lithium-ion batteries as the main
energy storage device. The performance of the hybrid transit bus is analyzed
using the Chinese Transit Bus City Driving Cycle (CTBCDC).

System description
Figure 2 shows the architecture of the designed coaxial power-split hybrid
powertrain for a transit bus with a supercapacitor semi-active hybrid energy
storage system. The auxiliary power unit (APU) consists of a diesel engine
and a permanent magnet synchronous generator (PMSG), whose shafts are
connected directly. This shaft is also associated with the input axle of the
clutch. The output axle of the clutch is linked to a permanent magnet synchronous motor (PMSM), whose shaft is also connected to the final drive.
Besides the diesel engine, an HESS composed of lithium-ion batteries and
supercapacitors as well as a bidirectional DC/DC converter is used to provide
electric power to the PMSM. A high-voltage power line is connected to the
PMSG, the PMSM, the battery pack, and the DC/DC converter. The
supercapacitor pack exchanges electric power with the high-voltage power
line via the controllable bidirectional DC/DC converter.
Mathematical model
A mathematical model of the coaxial power-split hybrid powertrain with an
HESS is established to analyze the system performance. According to the
working principle of the entire system, a lumped-parameter model is used.

VEHICLE DYNAMICS
Since the rear axle is used to drive the hybrid transit bus, the tractive force of
the rear axle is determined according to the longitudinal vehicle dynamics
[23].
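A standard longitudinal dynamics calculation of the kind referenced here is sketched below in Python: the tractive force balances rolling resistance, aerodynamic drag, grade resistance and inertia. The bus parameters are assumed values chosen for illustration, not those of the studied vehicle.

# Longitudinal vehicle dynamics sketch: required tractive force and wheel power.
# All vehicle parameters below are illustrative assumptions for a heavy transit bus.

import math

M_KG = 16000.0     # assumed gross mass of the transit bus, kg
G = 9.81           # gravitational acceleration, m/s^2
F_ROLL = 0.0095    # assumed rolling resistance coefficient
RHO_AIR = 1.2      # air density, kg/m^3
CD = 0.65          # assumed drag coefficient
AREA_M2 = 7.5      # assumed frontal area, m^2

def tractive_force(v_mps, accel_mps2, grade_rad=0.0):
    """Required tractive force at the wheels, in newtons."""
    f_roll = M_KG * G * F_ROLL * math.cos(grade_rad)
    f_aero = 0.5 * RHO_AIR * CD * AREA_M2 * v_mps ** 2
    f_grade = M_KG * G * math.sin(grade_rad)
    f_inertia = M_KG * accel_mps2
    return f_roll + f_aero + f_grade + f_inertia

def wheel_power_kw(v_mps, accel_mps2, grade_rad=0.0):
    return tractive_force(v_mps, accel_mps2, grade_rad) * v_mps / 1000.0

if __name__ == "__main__":
    # e.g. accelerating at 1 m/s^2 while travelling at 40 km/h on level ground
    print(f"{wheel_power_kw(40 / 3.6, 1.0):.1f} kW at the wheels")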
The PMSM model accounts for the requested torque Tmr and the input power
Pm calculated by a two-dimensional (2D) lookup table measured from a
motor test bench [24].
In the coaxial power-split hybrid powertrain, a clutch is used to control the
operation mode that is either the series or the parallel mode. The
mathematical model of the clutch is a simple friction model [25], which is
used to determine the clutch state among the engaged, the slipping, and the
disengaged states. The torque and speed transmitted through the clutch are
also determined.
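A minimal sketch of such a friction-based state decision is given below; the torque-capacity logic and thresholds are assumptions for illustration only, not the exact model of [25].

def clutch_state(t_input, w_in, w_out, t_capacity, engaged_cmd,
                 slip_tol=1.0):
    """Classify the clutch as 'disengaged', 'slipping' or 'engaged'.
    t_input: torque at the input side [Nm], w_in/w_out: shaft speeds [rad/s],
    t_capacity: maximum friction torque the clutch can transmit [Nm]."""
    if not engaged_cmd:
        return "disengaged", 0.0                      # no torque transferred
    slip = w_in - w_out
    if abs(slip) > slip_tol or abs(t_input) > t_capacity:
        # clutch slips: transmitted torque limited by the friction capacity,
        # acting against the relative speed
        sign = 1.0 if slip >= 0 else -1.0
        return "slipping", sign * t_capacity
    return "engaged", t_input                         # locked, torque passes through

print(clutch_state(t_input=300.0, w_in=150.0, w_out=148.0,
                   t_capacity=250.0, engaged_cmd=True))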
AUXILIARY POWER UNIT
Normally, an auxiliary power unit is a combination of the engine and the
generator. The diesel engine model accounts for the requested torque Ter
according to the engine output torque Te and the engine speed ωe.
Energy management strategy design
The control strategy of a coaxial power-split hybrid powertrain only using
supercapacitors as energy storage system was designed by Ouyang et al.
previously, which involves series hybrid mode, parallel hybrid mode, and the
mode transition logic [28]. For the coaxial power-split hybrid powertrain with
an HESS, one more task must be designed—the energy management strategy
of the HESS. The thermostatic control, the power follower control, and the
optimal control [29] are the three main control strategies for the series hybrid
mode while the parallel electric assist control, the adaptive control, and the
fuzzy logic control are normally used for the parallel hybrid mode. Currently,
the rule-based control, the filter control, the model predictive control, and the
fuzzy logic control are the four main control strategies of an HESS to
distribute the power demand between the batteries and the supercapacitors.
Investigations indicate that the performance of a rule-based control strategy can approach that of the optimal control once its parameters are optimized [30]. Therefore, in this study, a rule-based control
strategy for the coaxial power-split hybrid powertrain is designed and is
shown in Figure 3. The entire rule-based control strategy is composed of a
series mode control and a parallel mode control as well as an HESS control.
The series mode control strategy uses a power follower control method
shown in Figure 3. If the driving power demand of the vehicle is lower than
a certain value, the system enters the series mode control. First, the required
power of the motor Pd is computed according to the system mathematical
model. Then, the output power of the engine is determined based on the
generator efficiency map. The modified discharging/charging power is
calculated according to the difference between the present SOC and the target
one. Subsequently, the requested power of the diesel engine is determined
according to the defined optimal operation line (OOL) of the series mode
control [31]. Meanwhile, the engine on/off state is decided based on the required power Pd and the current SOC. If the engine state is on, the final
required engine torque and speed are computed according to the requested
power of the engine and the defined OOL.
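The series-mode logic described above can be condensed into a small rule-based sketch; the power thresholds, SOC limits and the linear SOC-correction gain below are illustrative assumptions, not the calibrated values used in the study.

def series_mode_control(p_d, soc, soc_target=0.6, k_soc=60.0,
                        p_on=20.0, p_off=5.0, engine_on=False):
    """Power-follower control for the series mode (all powers in kW).
    Returns the engine on/off state and the requested engine power."""
    # SOC correction: charge more when SOC is below target, less when above
    p_correction = k_soc * (soc_target - soc)
    p_engine_req = max(p_d + p_correction, 0.0)

    # Engine on/off decision with hysteresis on the required power
    if engine_on:
        engine_on = p_engine_req > p_off or soc < soc_target - 0.1
    else:
        engine_on = p_engine_req > p_on or soc < soc_target - 0.1

    if not engine_on:
        p_engine_req = 0.0
    # The final engine torque/speed would then be read from the OOL at p_engine_req
    return engine_on, p_engine_req

print(series_mode_control(p_d=45.0, soc=0.55))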

If the vehicle velocity is greater than the set value, the system changes to the
parallel mode control. A parallel electric assist control strategy is used in this
research and is shown in Figure 3. First, the demanded speed and torque of
the auxiliary power unit are determined by the clutch model according to the
power demand of the motor Pd. In the clutch model, the decision logic of the
clutch state sets the working state of the clutch. When the clutch is engaged at
the parallel mode control, the requested torque of the APU, which equals the
output torque of the engine, is the summation of the driving torque directly
transferred to the final drive and the charge torque that is transmitted to the
generator. The motor provides the remaining driving torque simultaneously.
Subsequently, the control parameters of the parallel mode are optimized
across the engine's overall working region. As in the series mode control, the modified discharging/charging power is calculated. Then, the engine on/off state is decided by the clutch state, the SOC, and the requested vehicle velocity. Finally, the requested torque and speed of the engine are
determined based on the engine state and the power demand.
The requested power of the diesel engine is obtained by the series or parallel
control strategy. Then, the power demand of the HESS Phess can be calculated
as the difference between the power demand of the motor and the requested power of the diesel engine. The HESS energy management strategy is in
charge of the distribution of the power demand between the lithium-ion
batteries and the supercapacitors. The detailed strategy shown in Figure 3 is
a combination of the rule-based control and the filter control. If Phess is
positive, the HESS should output electric power to the power line. First, an
algorithm estimates the mean discharge power during the driving cycle and a
low-pass filter outputs a filtered discharge power of the batteries. Then, the
discharging power decision block calculates the output powers of the
batteries and the supercapacitors based on the designed threshold values of
SOC and the mean discharge power. If Phess is negative, the motor outputs electric power to charge the HESS, and the control strategy is similar to that of the discharging case. In addition, an SOC correction algorithm for the supercapacitors, based on the vehicle velocity, is applied during the discharging process.
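A simplified version of this rule-plus-filter split is sketched below; the filter time constant, battery power limit and supercapacitor SOC thresholds are hypothetical values chosen only to illustrate the structure of the strategy.

class HessPowerSplit:
    """Split the HESS power demand between batteries and supercapacitors:
    the battery follows a low-pass-filtered demand while the supercapacitor
    absorbs the fast transients (a sketch, not the exact strategy)."""

    def __init__(self, tau=20.0, dt=1.0, p_bat_max=50.0,
                 soc_sc_min=0.4, soc_sc_max=0.99):
        self.alpha = dt / (tau + dt)       # first-order low-pass coefficient
        self.p_bat_filt = 0.0
        self.p_bat_max = p_bat_max         # battery power limit [kW]
        self.soc_sc_min = soc_sc_min
        self.soc_sc_max = soc_sc_max

    def step(self, p_hess, soc_sc):
        # Low-pass filter the total HESS demand to get the battery share
        self.p_bat_filt += self.alpha * (p_hess - self.p_bat_filt)
        p_bat = max(-self.p_bat_max, min(self.p_bat_max, self.p_bat_filt))
        p_sc = p_hess - p_bat              # supercapacitor takes the rest

        # Simple SOC protection for the supercapacitor pack
        if p_sc > 0 and soc_sc <= self.soc_sc_min:
            p_sc, p_bat = 0.0, min(p_hess, self.p_bat_max)
        elif p_sc < 0 and soc_sc >= self.soc_sc_max:
            p_sc, p_bat = 0.0, max(p_hess, -self.p_bat_max)
        return p_bat, p_sc

split = HessPowerSplit()
print(split.step(p_hess=90.0, soc_sc=0.7))   # (battery kW, supercapacitor kW)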
Results and discussion
A program was developed using Matlab and Advisor according to the
mathematical model of the coaxial power-split hybrid powertrain with an
HESS. The designed simulation model combines the backward- and forward-
facing methods and can evaluate the drivability and economy of the coaxial
power-split hybrid powertrain system. In this research, a hybrid transit bus with a total mass of 15 t, manufactured by Higer Bus Company Limited, is used. The detailed parameters of the hybrid transit bus are listed in Table
2.
The system operation state is shown in Figure 4c, where 0 represents the EV
mode, 1 means the series control mode, and 2 denotes the parallel control
mode. If the vehicle driving power is greater than 110 kW, the operation
mode switches from the series mode to the parallel mode. Meanwhile, the EV
mode can be changed from both the series control mode and the parallel
control mode. The output speed and torque of the diesel engine are shown in
Figure 4d and e, respectively. Because the power follower control is used for
the series control mode and the engine speed is proportional to the vehicle
velocity in the parallel mode, the engine speed varies from 1494 to 1842 r/min during the working process. Furthermore, the engine output torque remains at a high level of 550.4–761.6 Nm when the engine is running.
The performance of the HESS for the coaxial power-split hybrid powertrain
is displayed in Figure 5. The output power of the HESS is given in Figure
5a. The output powers of each of the energy storage devices are given in
Figure 5b, where the dashed line is used for the lithium-ion batteries, and the
dash-dotted line is used for the supercapacitors. In Figure 5, positive values represent discharging and negative values charging. The output power of the HESS varies with the power demand of the motor. The maximum and minimum output powers reach 104.8 and −112.9 kW,
respectively. The output power of the lithium-ion batteries changes from −50
to 50 kW during the driving cycle, which means the output current of the
batteries can be significantly decreased. The supercapacitors discharge or
charge with large powers for the high-power working conditions to
compensate the power difference between Phess and Pbat. The SOC profiles for the lithium-ion batteries and the supercapacitors are shown in Figure 5c. In the charge-sustaining mode, the SOC of the batteries shows only a very small variation over the cycle. By contrast, the SOC of the supercapacitors varies within a wide range, from 0.395 to 0.99. The reason is
that the capacity of the lithium-ion batteries is much greater than that of the
supercapacitors and the output power of the lithium-ion batteries is
constrained to a small range. The output voltages of the batteries and the
supercapacitors are shown in Figure 5d. The output voltage of the lithium-
ion batteries, which equals the power line voltage, changes in a very small
range from 559.1 to 584.9 V. The stable voltage characteristic will be
beneficial for the operations of the motor and the generator. Although the
output voltage of the supercapacitors varies in a large scope, it is still within
the allowable voltage ratio of the DC/DC module. The currents of the
batteries and the supercapacitors are displayed in Figure 5e. The current of
the supercapacitors increases during high-power conditions, while the current of the batteries is much smaller; the discharging/charging rate of the batteries remains below 4C, which is very helpful for extending the life of the batteries.
Figure 6 gives the performances of the motor and the generator of the coaxial
power-split hybrid powertrain with an HESS. The input power of the motor
generally follows the vehicle driving power during the cycle, which means
only a small part of the mechanical power of the diesel engine is used to
propel the vehicle directly. Although the current of the motor varies in a wide
range from −193.6 to 215.9 A, it does not exceed the maximum operation
current of the motor. The generator operates for only a small part of the time, and its output power mainly ranges from 24.38 to 104.8 kW. The current of the
generator varies with the output power because the power line voltage is
stable.
To evaluate the potential of fuel saving for the coaxial power-split hybrid
powertrain with an HESS, 12 prototype hybrid transit buses were built and operated on practical city routes in Ningbo, Zhejiang Province. A total
mileage of over 40,000 km for each hybrid transit bus was achieved, and the
average fuel consumption is approximately 24.53 L/100 km, which is listed
in Table 3. Using the established mathematical model and the analysis
program, the estimated fuel consumption of the coaxial power-split hybrid
powertrain with an HESS is 24.43 L/100 km, which is also listed in Table 3.
The SOC difference of the lithium-ion batteries between the start and end
points is 0.0034. The practical driving routes in Ningbo differ from the CTBCDC driving cycle; in particular, the start/stop frequency is lower. Therefore, the improvement in fuel efficiency is somewhat smaller. On the other hand, the ambient temperature and the total vehicle weight, which vary under practical driving conditions, also have a great influence on the fuel consumption. The experimental result therefore provides only a coarse, averaged evaluation of the fuel efficiency. Moreover, the
fuel consumption of a conventional powertrain using a YC6G270 diesel
engine and a five-gear manual transmission is computed based on the same
vehicle parameters as the hybrid transit bus. The total mass of the bus with
the conventional powertrain is 15,000 kg, which is the same as that of the
coaxial power-split hybrid powertrain with an HESS. The result is 36.33
L/100 km. Compared with the results of the conventional powertrain, the fuel
consumption of the coaxial power-split hybrid powertrain with an HESS can
be decreased significantly by about 32.5%.
The reason for such a large fuel reduction can be explained from the viewpoint of energy efficiency. Figure 7 shows the effective thermal efficiency
map of the YC6J220 diesel engine obtained on an engine test bench. The
engine’s working points estimated by the analysis program are also
displayed. The OOL line of the series control mode is represented by the
thick solid line in Figure 7. By contrast, the effective thermal efficiency
map of the YC6G270 diesel engine for the conventional powertrain and the
corresponding working points are given in Figure 8. It can be seen that the
engine working points of the coaxial power-split hybrid powertrain with an
HESS are very close to the region having the peak efficiency and their
thermal efficiencies are greater than 40%. However, the working points of the conventional powertrain shown in Figure 8 change with the vehicle velocity, resulting in a very wide distribution from idle speed to full load. Therefore, many working points of the conventional powertrain lie in the low-efficiency regions, leading to a low efficiency of the entire
powertrain system.
The energy efficiency map of the PMSM obtained on a motor test bench and
the relative working points for the CTBCDC driving cycle are given in
Figure 9. A large proportion of the working points lies close to the peak-efficiency region. The efficiencies of most working points are higher than 93.4%, except in the low-speed, small-load regions. The energy-
weighted average efficiency of the motor during the CTBCDC driving cycle
is 91.92%. Because the motor is connected to the final drive without a
transmission, a particular design of the PMSM can ensure that the motor
efficiency is high enough for low-speed working conditions. The energy
efficiency map of the PMSG measured on a motor test bench and the
corresponding working points for the CTBCDC driving cycle are displayed
in Figure 10. The efficiencies of the PMSG are found to be between 92 and
93% during the CTBCDC driving cycle, and the energy-weighted average
efficiency is 92.55%, which approaches the peak efficiency of the PMSG.
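The energy-weighted average efficiency quoted above can be computed from the cycle data as shown in this short sketch; the sample arrays are invented and merely illustrate the weighting.

import numpy as np

def energy_weighted_efficiency(p_out_kw, eff, dt=1.0):
    """Average efficiency weighted by the energy processed in each step:
    eta_avg = sum(|P_i| * dt * eta_i) / sum(|P_i| * dt)."""
    p = np.abs(np.asarray(p_out_kw))
    e = np.asarray(eff)
    return float(np.sum(p * dt * e) / np.sum(p * dt))

# Hypothetical per-second samples of motor output power and efficiency
p_samples = [12.0, 45.0, 80.0, 5.0, 60.0]
eff_samples = [0.90, 0.93, 0.94, 0.85, 0.935]
print(f"{100 * energy_weighted_efficiency(p_samples, eff_samples):.2f} %")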
Conclusions
In this study, the system performance of a coaxial power-split hybrid
powertrain with an HESS for transit bus application was investigated. First, a
system topology was designed. Then, a mathematical model was established
and an energy management strategy was developed. Finally, the energy
efficiency of the hybrid powertrain system was evaluated by Matlab and
Advisor. The main conclusions are summarized as follows:
1. Compared with the results of a coaxial power-split hybrid powertrain
with supercapacitors as energy storage device, the equivalent fuel
consumption of the designed coaxial power-split hybrid powertrain with
an HESS is a bit higher (24.43 vs. 20.46 L/100 km). The reason is that the
energy capacity of the supercapacitors in this study is much smaller than
that of the hybrid powertrain with only supercapacitors. Thus, the amount
of the recovered regenerative energy is smaller than that of the hybrid
powertrain with only supercapacitors. On the other hand, the DC/DC
converter and the battery pack also have some losses during the working
processes. As a result, the energy efficiency of the coaxial power-split
hybrid powertrain with an HESS is lower than that of the hybrid
powertrain with supercapacitors. Because we have no data about the
coaxial power-split hybrid powertrain with only batteries, a quantitative
comparison cannot be given here. Generally, if only batteries are used for the energy storage system, a large pack is required and the cost increases considerably; the energy efficiency of such a system then becomes a trade-off against the cost and lifetime of the batteries. Nevertheless, the results of this
study show that the coaxial power-split hybrid powertrain with an HESS
has a very good energy efficiency compared with a conventional
powertrain of the same level.
2. Because the designed HESS has a smaller size and weight and a lower price than the system with only supercapacitors, and the lithium-ion batteries of the HESS operate at a smoothed (averaged) current and thus have a longer cycle life, the HESS is more suitable for hybrid transit bus application.
3. The power line voltage of the HESS is more stable than that with
only supercapacitors. This will be beneficial for the operation of the
accessories such as the air conditioner or the in-vehicle infotainment
system during the driving process.

Introduction to Performance Analysis of an Integrated Starter-Alternator-Booster for Hybrid Electric Vehicles
As the electrification of vehicle propulsion continues to expand, from low power (e-bikes) to high power (buses), current research efforts focus especially on increasing the autonomy of vehicles through on-board electricity storage. Owing to the scarcity of charging stations and the limited autonomy achievable while keeping battery weight low, the electric vehicle is for the moment confined mainly to urban routes. In this context, hybrid electric vehicles (HEVs) were initially considered a transition between conventional vehicles (with an internal combustion engine (ICE)) and electric ones; they still remain a viable solution that is gaining ground by combining the advantages of both types of vehicles [1–4].
The trend in all types of vehicles (conventional, electric, or hybrid) for the coming years is to add more and more electrical subsystems. These can be related to safety (steering, braking, lights, distance sensors, mirrors, etc.) or to comfort (seats, HVAC, audio, navigation display, etc.). At the same time, many traditionally mechanically driven loads are being replaced with electrically driven ones (water pumps, power steering, ventilation fans, etc.). This electrical power demand, of around 10 kW [5], requires increased generator power at a certain level of efficiency (normally 40–55%) [6]. A common automotive alternator is relatively cheap but has low efficiency, and the expected increase in power exceeds the capability of the Lundell generator (claw-pole synchronous machine). In this context, the replacement of the classical alternator with a high-efficiency machine is mandatory. Moreover, the conventional starter (which operates for around 1 s per start) is used only to crank the ICE and afterwards becomes dead weight in the vehicle. The simplest solution to this problem, in terms of cost and implementation, is to replace both machines (starter and alternator) with a single electrical machine.
The initial concept of the integrated starter-alternator (ISA) system was
developed in order to gain more space for the powertrain system and to
reduce the weight of the vehicle by combining the starter with the alternator.
This system ensures the start/stop of the internal combustion engine and supplies electricity to all the auxiliary subsystems (safety or comfort). In the parallel HEV configuration in particular, the ISA is used to start the internal combustion engine and to supply the electrical loads, while a second electrical machine is necessary for the electric propulsion. This structure can be simplified by using a single electric machine with three operating modes: starter, alternator and booster. In this case,
the integrated starter-alternator-booster (ISAB) system will be able initially to
start the ICE; then, when the engine is running, it switches to generator mode and supplies electricity to the consumers and the storage system. By adopting adequate control strategies, the electrical machine can move quickly from generator to motor (booster) operation and back in order to assist the internal combustion engine for a short period of time (a maximum of 2–3 min) when more power is necessary (overruns, ramps, curbs, etc.) [7]. This operating mode of the machine is generically called integrated starter-alternator-booster (ISAB). A parallel HEV using an ISAB is generically called a Mild-HEV. In this configuration, full electric propulsion of the vehicle is not possible, but the production cost of implementing this hybridization in conventional vehicles is the lowest of all HEV variants.
According to Ref. [8], which investigates the influence of an ISAB on the fuel consumption of a small car under the European standard (1999/100 EC), the total fuel consumption is reduced by about 12%.
The increase in the number of electric components within the vehicles boosts
the market for electrical motors for hybrid and electric vehicles. A Frost & Sullivan market study finds that the market earned revenues of about 55 million Euros in 2010 and is expected to reach $1.6 billion by the end of 2017 [9].
The required characteristics of the ISAB in the starter mode and alternator
(generator) mode are very restrictive for a conventional electrical machine
[10]:
· High electromagnetic torque for starting the ICE (120–300 Nm).
· High efficiency over a wide speed range (600–6000 rpm).
· Reliability under high vibration and over 250,000 start/stop cycles (during 10 years).
· Operation at temperatures between −30 and 130°C.
· Easy maintenance, low cost and so on.

Usually, the ISAB can be connected to a gasoline or diesel engine either directly through the crankshaft (see Figure 1) or indirectly through a belt system (see Figure 2); based on that, the systems are called conventional ISAB and belt-driven starter-alternator-booster (BSAB), respectively.

The size of the electrical machine is very important in the BSAB application in order to keep the overall dimensions low (similar to those of a conventional alternator); for a given maximum torque, the systems usually have a recommended gear ratio of 3:1 with respect to the ICE crankshaft, according to Ref. [8]. The BSAB therefore runs at a speed three times that of the ICE. For the ISAB, the speed range is synchronized with the combustion engine.
Electrical drive used for ISAB applications
ELECTRICAL MACHINES
In the last decade, the development of power electronics (inverters/converters) has made alternating current (AC) machines the best solution for ISAB applications, especially due to their high power density. These are the synchronous reluctance machine (SynRM), the induction machine and the permanent magnet synchronous machine (PMSM), the latter in both supply variants: sinusoidal and trapezoidal current.
A detailed investigation of the SRM and the induction machine is presented in Refs. [11, 12]. These studies highlight the complicated electronics needed for the SRM and the difficult control of the induction machine (influence of slip on machine performance). In this context, the SynRM and the PMSM are the best candidates for ISAB applications.
The electrical machines used for conventional ISAB applications are exposed to high temperatures generated by the ICE. This makes it impossible to use the PMSM under high-efficiency and low-cost constraints (unless a special cooling method or expensive SmCo magnets are used). Therefore, the SynRM without permanent magnets is the best solution for direct connection to the ICE crankshaft (ISAB), and the PMSM is the best solution for the BSAB.

PMSM MACHINE FOR BSAB APPLICATIONS


The main advantage of the PMSM compared with other types of electrical machines is its high efficiency, due to the absence of field-coil losses. The stator is constructed from three-phase windings and steel sheets (the same as in the induction machine), but because the rotor experiences negligible iron losses, it can be built from solid steel and permanent magnets. The position of the permanent magnets can be categorized as surface-mounted or interior. This position can have a significant effect on the mechanical and electrical characteristics, especially on the synchronous inductance [13].
Because the permeability of rare-earth magnets (such as NdFeB) is very close to that of air, the effective air gap of the machine with surface-mounted PMs becomes larger. This makes the d-axis inductance very low, which significantly affects the overload capability and the flux-weakening operation. Because the maximum torque is inversely proportional to the d-axis inductance, the achievable torque becomes very large; however, the low d-axis inductance reduces the possibility of operating in flux weakening. This is because a large demagnetizing component of the stator current is needed to decrease the flux in the air gap, so the remaining q-axis current is insufficient to produce torque.
In the case of interior magnets, it is possible to obtain a sinusoidal distribution of the air-gap flux by using simple rectangular magnets. A sinusoidal flux distribution considerably reduces the cogging torque, in particular in machines with a large number of pole pairs and a small number of slots per pole and phase [14]. For these structures, it is also possible to increase the air-gap flux density beyond the remanent flux density of the magnets by using flux concentrators. Because the d-axis inductance in this case is usually higher than that of the surface-magnet topologies, the overload capacity of the machine is reduced, while the performance under flux-weakening conditions is better.
The PMSM with outer rotor (PMSMOR) (see Figure 3) is one of the special
topologies of PMSM, with some advantages for BSAB applications:
· The belt is mounted directly on the outer rotor, without using a pulley.
· Easy mounting of the permanent magnets; the centrifugal forces do not affect their mechanical stability.
· High torque capability.
· Convenient cooling, etc.

The development cycle of PMSM (inner or outer rotor) topologies includes an analytical procedure, magnetic field analysis and an optimization procedure connected to the previous design steps. The analytical procedure is presented in
detail in Refs. [13, 14] and includes the following topics: analysis of the
specifications, selection of the topology, the active and passive materials,
sizing the machine, choice of the manufacturing technologies and
information about preliminary cost evaluation. In the dimensioning
procedure, classical formulas or dedicated software platforms like SPEED
software can be used.
The electromagnetic flux analysis is realized with dedicated programs (like
Flux 2D/3D, Jmag 2D/3D, Maxwell 2D/3D, ANSYS, Opera, open-source
programs, etc.) based on the finite element method (FEM). The FEM is a
widely used method for obtaining a numerical approximate solution for a
given mathematical model of the machine. The obtained results are related to
the voltage/current waveform, map of flux density, electromagnetic torque,
losses (iron and Joule), and the efficiency value or map of it.
The optimization of an electric machine is a multivariable, nonlinear problem with constraints. In order to treat problems with constraints, it is often necessary to transform them into unconstrained ones; this can be done, for instance, by embedding the constraints in the objective function as penalty terms. The most widely used optimization algorithms in the design of all types of electrical machines are genetic algorithms (GA), the differential evolution algorithm (DEA), estimation of distribution algorithms (EDAs), particle swarm optimization (PSO) and multi-objective genetic algorithms (MOGA, Pareto, etc.) [15].
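One common way of embedding constraints, as mentioned above, is a quadratic penalty added to the objective; the sketch below assumes a generic machine-design cost function and two illustrative constraints (maximum current density and minimum efficiency), with invented limits and weights.

def penalized_objective(design, w_penalty=1e3):
    """Turn a constrained machine-design problem into an unconstrained one
    by adding quadratic penalties for violated constraints (a generic sketch).
    'design' is a dict of candidate quantities produced by the analysis step."""
    # Primary objective: for example, minimize active-material mass [kg]
    cost = design["mass"]

    # Constraint g1 <= 0: current density must not exceed 6 A/mm^2 (assumed limit)
    g1 = design["current_density"] - 6.0
    # Constraint g2 <= 0: efficiency must be at least 0.90 (assumed limit)
    g2 = 0.90 - design["efficiency"]

    penalty = sum(max(0.0, g) ** 2 for g in (g1, g2))
    return cost + w_penalty * penalty

candidate = {"mass": 42.0, "current_density": 6.4, "efficiency": 0.915}
print(penalized_objective(candidate))  # feasible designs return just the cost

Any of the algorithms listed above (GA, DEA, PSO, etc.) can then minimize this scalar value directly.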
A comprehensive evaluation of optimization algorithms was performed in
Refs. [16–18]. The authors of these studies state that any such classification
of different optimization algorithms is not truly appropriate since the
performance is an objective closely related to the specifics of each
application. Nevertheless, in the optimization of the electrical machine, the
authors mostly agree that DEA achieves the best fitness values, i.e. the
minimum objective function value, usually with a smaller number of
evaluation steps.
Considering the important step in the development of cycle of PMSM
presented above, a general design procedure of PMSMOR for BSAB
applications is proposed and presented in Figure 4.
SYNRM MACHINE FOR ISAB APPLICATIONS
Variable reluctance synchronous machines have received little attention in
various comparative studies approaching the selection of the most appropriate
electric-propulsion system for either HEV or EV. Malan [19, 20] showed that
the SynRM drive has major advantages in electrical propulsion. SynRM’s
performance strongly depends on the saliency ratio, but increasing the
saliency complicates the rotor construction and drastically increases the
motor cost. Interesting results concerning the influence of the saliency ratio
on the SynRM steady-state performances, mainly on power factor and
efficiency, are given in Ref. [21], while the effect of rotor dimensions on d-
and q-axis inductances in the case of a SynRM with flux barrier rotor is
discussed in Ref. [22]. Thus, the number of rotor flux barriers recommended in the literature for the SynRM is four: above this value the rotor technology becomes too complicated, while below it the torque ripple is too high. Regarding the rotor construction, there are three
main types, given in Ref. [23] and presented in Figure 5:
· With salient rotor poles (see Figure 5a): these require low technological effort and are obtained by removing iron material from each rotor pole in the transversal region.
· With axially laminated rotor (ALA) (see Figure 5b): the rotor core is made of axial steel sheets that are separated from each other by electrically and magnetically insulating (passive) material.
· Transversally laminated anisotropic rotor (TLA) (see Figure 5c): the
so-called ribs are obtained by punching and then the various rotor
segments are connected to each other by these ribs.
The SynRM has a larger torque density than the IM. This comes from the absence of the rotor cage and its related losses. A different dynamic behaviour is expected from the SynRM due to the specific relationships between currents and fluxes. Because the SynRM does not have a traditional cage (used especially for starting), modern inverter technology is required. Therefore, most of the literature on SynRM drives has concentrated mainly on the design and control of the machine, with the goal of improving control, efficiency and torque production, drive flexibility and cost [24].
The main drawback of SynRM is related to structural behaviour at high
speeds (over 10,000 rpm) because the specific geometry of the rotor involves
thin layers of steel and large cut-off areas.
Based on the advantages and disadvantages of the SynRM and the specific requirements of the ISAB application (rated speed up to 10,000 rpm), the SynRM is one of the most suitable candidates for direct connection to the crankshaft. Its major advantages are high torque, good thermal behaviour (no permanent magnets and a low average value of iron losses), high efficiency over the entire drive cycle, good vibro-acoustic behaviour (low noise), etc.
In the development cycle of SynRM presented below, the most important
step is related to the rotor geometry and the structural behaviour (see Figure
6).

POWER ELECTRONICS
The electrical equipment installed on vehicles operates at a nominal voltage of 14 V. In the 1990s, a new standard (PowerNET) for automotive electrical systems was proposed by a consortium of automotive manufacturers (including Daimler-Benz and General Motors). Under the PowerNET proposal, the voltage level of the electrical installation increases to 42 V [25]. The goal was to reduce the cross-section of the conductors and to allow the total installed power of the new generation of vehicles to increase. The standard did not become very popular because of its high implementation costs, which would require the redesign of all electrical and electronic subsystems [26]. Instead, most producers have oriented themselves toward systems with two voltage levels: high voltage for propulsion and low voltage for auxiliary and electronic subsystems.
A starter-alternator system involves the use of a static frequency converter to drive the electrical machine. The converter operates in both the inverter and rectifier regimes. In the rectifier operating mode, it is advisable to adopt a converter control strategy aimed at reducing the losses and the harmonic content of the machine's output currents. The control techniques for the two modes are the same; only the current reverses its direction depending on the operating mode.
The input voltage of the static frequency converter is a DC voltage whose value must be kept constant for optimal operation. The input voltage of the converter can be regulated by using a bidirectional DC/DC converter with closed-loop control. An alternative to using a DC/DC converter stage together with a DC/AC converter is the Z-source converter [27]. Thanks to its input impedance network, which gives it particular operating properties, the Z-source converter can operate as both a boost and a buck converter, unlike the classical converter.
POWER ELECTRONICS OF SYNRM AND PMSM
For the control of the PMSM, the q-axis current is kept at its maximum value in order to produce a high torque, while the d-axis current is kept at zero. For the SynRM, by contrast, the control strategy keeps the q-axis current equal to the d-axis current. In the case of the PMSM with interior magnets, the zero d-axis current strategy does not provide the maximum torque, because of the additional reluctance torque component [28] that appears in the torque expression

Te = (3/2) p [ψPM iq + (Ld − Lq) id iq],

where p is the number of pole pairs, ψPM the permanent-magnet flux linkage, Ld and Lq the d- and q-axis inductances, and id and iq the d- and q-axis currents.
Usually, the control method implemented for the PMSM and the SynRM in automotive applications is an indirect method, based on measuring the stator currents and calculating the magnitude and position of the rotor flux phasor from these currents and the rotor position. Thus, the flux transducers or flux estimators normally used in vector control with direct flux measurement are eliminated. This method has the disadvantage that an accurate determination of the rotor flux phasor position requires a precise measurement of the rotor position; a practical implementation that obtains the rotor angle by integrating a measured speed is therefore not recommended. Hence, an incremental encoder or a resolver is used, which has a higher cost but provides the precision required by a vector control with good dynamic response. In addition to this vector control method, which uses position sensors to determine the rotor angle, other (sensorless) methods in which these transducers are eliminated also exist. In these cases, the rotor position is estimated by complex algorithms that use the measured voltages and currents as inputs [29].
The general control diagram presented in Figure 7, usually used for the PMSM and the SynRM, can be divided into power and control components. The power circuit consists of the electrical machine (PMSM/SynRM), the DC/DC converter and the inverter, while the control loop consists of the speed transducer, the current transducers, the PWM signal generation block, the coordinate-transformation blocks and the block computing the reference currents.

The control strategies considered for the SynRM are:
· Maximum torque per ampere control (MPTAC)
The control model is based on imposing equal currents on the d and q axes of the machine as the current references for the vector control. These currents are calculated from the torque equation: with id = iq = i, the torque becomes Te = (3/2) p (Ld − Lq) i², so the reference current follows as i = sqrt(2 Te* / (3 p (Ld − Lq))), where Te* is the torque command.
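A minimal sketch of this current-reference calculation is given below, assuming the standard SynRM torque expression quoted above and placeholder machine parameters.

import math

def synrm_current_refs(t_ref, p=2, ld=0.045, lq=0.012):
    """Equal d/q current references for a SynRM from the torque command:
    Te = 1.5 * p * (Ld - Lq) * id * iq, with id = iq (placeholder parameters)."""
    i = math.sqrt(abs(t_ref) / (1.5 * p * (ld - lq)))
    iq = math.copysign(i, t_ref)   # the sign of iq sets the torque direction
    return i, iq                   # (id_ref, iq_ref) in amperes

print(synrm_current_refs(t_ref=50.0))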
Simulation of a hybrid electric vehicle with the ISAB system
In order to study the electrical machines in ISAB applications, the electric drive model can be introduced and simulated in the Advanced Modelling Environment for performing Simulation (AMESim). AMESim is multi-domain simulation software for the modelling and analysis of one-dimensional (1D) systems. In this program, each component or physical phenomenon is described by ordinary differential equations, a formulation in which the main independent variable is time [31]. This approach differs from the partial differential equation formulation, which formalizes the distribution of system properties in space. The representation of a dynamic system based on the notion of a "multiport" consists of highlighting the energy exchanges between a component and its environment through connecting ports. Connecting two or more components through such ports allows them to exchange power (electrical, mechanical, etc.) according to the adopted sign convention.
For automotive applications, the program comprises discrete components of
the ICE, gearbox, control system, electric loads, electrical machine and power
inverter, connected together to form a global model of a hybrid electric
vehicle.
The geometrical and electrical parameters of electrical machine considered
for ISAB (SynRM) and BSAB (PMSMOR) application are presented in
Table 1. The configuration of PMSMOR is a three-phase machine with 36
slots and 15 poles, and the SynRM topology is a three-phase machine with 27
slots and 4 poles. The dimension of PMSMOR has been imposed according
to Ref. [32] (data chosen for belt brushless alternator).
The simulation of the BSAB and ISAB is carried out on a New European
Driving Cycle (NEDC). A driving cycle is a series of points defining a speed
profile that the studied vehicle must follow [33]. The defined speed profile
simulates most common operating modes of an automobile (frequent
acceleration and deceleration, load variations and speed variations) and
corresponds to both urban and extra-urban environments. The parameters and
the profile of NEDC are presented in Table 2 and Figure 8, respectively.
The model takes into account the complex thermodynamic phenomena occurring in a heat engine. In the initial implementation, the starter and the alternator were a DC permanent magnet machine and a Lundell generator, respectively. These have been replaced by the machines studied earlier (PMSMOR/SynRM). The motors are powered from a battery through the DC/DC converter, which in this case operates as a boost converter.
The evaluation of the performance of PMSMOR and SynRM was started
from a demonstration model in AMESim of a compact car category (see
Figures 9 and 10) with a compression-ignition combustion engine. The
imposed weight of the vehicle was 1200 kg (usually between 1134 and 1360
kg, according to Ref. [34]) without any extra weight or passengers.
In order to have comparative results regarding the fuel consumption, in the
first simulation, the conventional vehicle functioning during the NEDC cycle
was tested. In the next simulations, the ISAB regime with the considered electrical machines was established. The behaviour of the starter and alternator in the vehicle model was assumed to be the same as in a conventional car. For the booster regime, a set-up allowing the machine to assist the ICE for up to 15 min/h was added; boosting is enabled only when the battery is almost fully charged (at least 95%). The duration of each boosting event was limited to 2 min in order to avoid deeply discharging the battery (the state of charge is not allowed to drop below 20%). The parameters of the considered ICE are presented in Table 3.

The control of the electrical machines in the starter operating mode involves a
maximum torque value (150 N m) until the ICE reaches 350 rpm. Torque
command is provided by a bi-positional regulator with hysteresis. It is active
when the command of ICE is active and its speed is less than 200 rpm, and it
is off when the speed exceeds 400 rpm. When the ICE speed exceeds 300
rpm, fuel injection into the cylinders starts and the ICE accelerates to idle speed. By applying the necessary starting torque, the ICE is accelerated rapidly to 400 rpm in about 0.35 s.
When the ICE exceeds 400 rpm, the bi-positional controller becomes inactive and the combustion engine continues to spin due to its inertia. If the pistons do not reach the maximum compression point, fuel cannot be injected to start the combustion process; consequently, the speed drops below 200 rpm and the controller output becomes active again. The starter is therefore energized again and the combustion engine is brought back to a speed of 400 rpm. Once the combustion process starts, the ICE accelerates to the idle speed, where it is maintained by the electronic
control unit. The entire process of starting the engine (in normal condition),
from the beginning until stable operation at idle speed, lasts 0.8 s. In the
winter, this process may take 1.5 s. The speed profile of starting the ICE is
presented in Figure 11.
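The bi-positional starter logic described above amounts to a relay with hysteresis on the ICE speed; the following Python sketch reproduces that behaviour with the thresholds quoted in the text (200/400 rpm) and the assumed 150 Nm starting torque.

class StarterHysteresisController:
    """Bang-bang (bi-positional) starter torque command with hysteresis:
    torque on below 200 rpm, off above 400 rpm (a sketch of the described logic)."""

    def __init__(self, t_start=150.0, n_on=200.0, n_off=400.0):
        self.t_start = t_start   # starting torque command [Nm]
        self.n_on = n_on         # re-engage threshold [rpm]
        self.n_off = n_off       # release threshold [rpm]
        self.active = False

    def update(self, start_cmd, ice_speed_rpm):
        if not start_cmd:
            self.active = False
        elif ice_speed_rpm < self.n_on:
            self.active = True
        elif ice_speed_rpm > self.n_off:
            self.active = False
        return self.t_start if self.active else 0.0

ctrl = StarterHysteresisController()
for n in (0.0, 150.0, 380.0, 420.0, 180.0):      # sample cranking speeds
    print(n, ctrl.update(start_cmd=True, ice_speed_rpm=n))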

For the alternator mode, the nominal value of the electrical loads is
considered in the model. Some electrical loads are intermittently connected
(fan, electrical window, heating systems, etc.). Other loads are dependent on
ICE speed (fuel pump and injectors) or the speed of the vehicle.
When the entire driving cycle is considered, the fuel consumption in the
vehicle is reduced to 878.63 ml for the BSAB system and 941 ml for the
ISAB system. These values represent a fuel economy of around 16% for the BSAB and 17.3% for the ISAB, relative to the total consumption of a classical vehicle with a dedicated alternator and starter (without the booster option). The difference in fuel consumption is due to the nominal power of the electrical machines (see Table 1). However, the performance of the SynRM is limited by the capacity of the battery (75 Ah); with a larger battery, the total fuel economy could be increased by a further 2–4% (mainly thanks to the booster mode).
In the mechanical evaluation of electrical machines for automotive
applications, the variation of electromagnetic torque is one of the most
important parameters, because this variation (torque ripple) can become a
source of noise and vibration in vehicles. Thus, for a better visualization of the torque profiles of the PMSMOR and the SynRM, a new scenario covering all three regimes was considered. To obtain comparable results (shaft torque variation) between the ISAB and the BSAB, the BSAB system is considered directly coupled to the ICE (with a 1:1 ratio between the ICE and BSAB speeds). The starter and generator regimes were set to 1.5 and 20 s,
respectively. The variation of the axis torque in the generator mode has been
obtained by intermittent connection of the electrical loads (lights, HVAC,
media, etc.). For the booster mode, the speed of the vehicle is increased from
70 to 120 km/h in 17 s, necessary to overtake other vehicles. In this case, the
battery is considered fully charged.
Figures 12 and 13 show the variation of the shaft torque versus time for all operating modes. Thanks to the proper winding-slot combination, the torque ripple values are below 10%: 7.1% for the PMSMOR and 6.2% for the SynRM. In the booster mode, the rated torque of the 10 kW SynRM used for the ISAB is, as expected, larger than that of the 6.5 kW PMSMOR.
Conclusions
The chapter presents the main steps to be followed in the development of a specific electric drive system dedicated to the automotive domain, such as integrated starter-alternator-booster applications. Replacing the starter and the alternator of a conventional vehicle with a single machine that can also work in booster mode represents the first step of vehicle hybridization, called Mild-HEV. Two mounting variants of the ISAB have been identified in the literature: one directly driven, generically called ISAB, and another belt-driven, called BSAB. The approach in this chapter covers the major elements to be discussed for two types of electrical machines (PMSMOR and SynRM) suitable for the BSAB and the ISAB, respectively. The general design procedure is presented for these two electrical machines, taking into account the typical constraints of the applications and the behaviour of the machines (thermal, structural, and noise, vibration and harshness particularities). The control aspects of both electrical machines are also presented.
In order to demonstrate the capability of this vehicle hybridization method, two electrical machines have been designed, and their equation models were developed and implemented in the general 1D model of a conventional vehicle built in the AMESim software. The influence on fuel consumption over the entire driving cycle (NEDC) was investigated. Based on the obtained results, the ISAB system gives a greater reduction in fuel consumption, but coupling the electrical machine directly to the crankshaft involves more complicated manufacturing techniques (higher cost) than the BSAB approach.
Introduction to Design, Optimization and
Modelling of High Power Density Direct-
Drive Wheel Motor for Light Hybrid
Electric Vehicles
Recent environmental concerns due to global warming and air pollution
motivated many countries around the world to legislate fuel economy and
emission regulations for ground vehicles [1]. Furthermore, the necessity of
developing alternative methods to generate energy for vehicles owing to
depletion of conventional resources was greater than ever [2]. These features
encouraged the introduction of fuel cell vehicles (FCVs), electric vehicles
(EVs) and hybrid electric vehicles (HEVs) as suitable candidates for the
replacement of the conventional internal combustion engine counterparts.
Since the performance of EVs and FCVs is still far behind the requirements,
HEVs are considered as the most reliable and preferred choice among similar
technologies by manufacturers, governments and consumers [3, 4].
Compared to those technologies, HEVs are advantageous, as they exhibit
high fuel economy, lower emissions, lower operating cost and noise, higher
resale price, smaller engine size, longer operating life and longer driving
range [5]. The world HEVs market has been rapidly growing and existing
hybrid powertrains include passenger cars, small, medium and heavy trucks,
buses, vehicles used in construction domain (e.g. forklifts, excavators), etc.
Despite HEV’s high performance, their design and control strategies are not
trivial. Multiple hybrid electric architectures have been developed and
incorporated so far into commercially available vehicles in order to find
acceptable design solutions with respect to various objectives and constraints
[6]. Each configuration presents particular characteristics and the architecture
selection depends on the application requirements and vehicle’s type. For
instance, series configuration is mainly used in heavy vehicles, whereas
parallel-series one is preferable in small and medium automobiles, such as
passenger cars and smaller buses, notwithstanding its more complex structure
[7]. The specific topology combines the advantages of both series and
parallel HEVs and has a greater potential in improving fuel economy and
efficiency of the overall powertrain [8]. The HEV performance is even more
enhanced when new design methodologies are implemented in order to find
optimal configurations for power split devices, whereas at the same time, a
single planetary gear is used [1].
However, the performance of an HEV is strictly dependent on the individual
characteristics of its components (i.e. the internal combustion engine, the
electrical motor and generator, the electronic equipment, the batteries, etc.).
There is a strong “connection” among them, and their interaction determines the performance of the vehicle [9]. Several techniques, presented in [10],
can be applied in order to achieve the optimal design and energy management
of an HEV. According to [11], multi-objective optimization procedure and
decision-making approach are necessary since there is a great amount of
variables and goals to be taken into account. Moreover, among the most
crucial decisions in the design of a HEV is the selection of the electric
motor’s type and its topology. A large amount of requirements such as (a)
high power and torque density, (b) flux-weakening capability, (c) high
efficiency over a wide range of speed, (d) high fault tolerance and overload
capability, (e) high reliability and robustness, (f) low acoustic noise during
operation and (g) low cost have to be met if a motor is to be considered as a
suitable one for such an application [4].
Nowadays, various structures have been tested by HEV manufacturers and
even more have been investigated in recent studies, e.g. [12]. Some of them,
such as switched-reluctance motors (SRMs), despite their important
advantages (high fault tolerance, simple construction, outstanding torque-
speed characteristics and low cost) are currently not widely used for HEV
applications. This is associated with the fact that they exhibit high acoustic
noise, high torque ripple and low power factor [13]. Among the most popular
topologies for this kind of traction system are induction and permanent
magnet synchronous motors [14]. These two types are thoroughly examined
and compared to each other [15] and their specific features are quite well
known so far [16].
In order to meet the continuously increasing power density and efficiency
requirements, PMSMs have become the dominant topology for light duty
HEVs [14]. PMSMs with one or multiple layers of interior magnets fulfil the
aforementioned characteristics and are commonly used in several commercial
HEVs. Their typical output power varies from 30 to 70 kW for full hybrid
passengers cars and can exceed 120 kW in the case of sport utility vehicles
(SUVs). Recently, it has been found that surface-mounted permanent magnet
synchronous motors (SPMSMs), especially when they are combined with
concentrated windings instead of distributed ones, are also promising
candidates for HEV propulsion [16]. They present high efficiency,
satisfactory flux-weakening capability, low cogging torque and facile
manufacturing procedure [17]. Honda Insight was one of the first commercial
HEVs that incorporated this specific motor configuration. Since then, there
has been increasing research interest for this topology.
That research effort though was carried out mainly for inner rotor topologies,
in which the propulsion is provided by a single traction motor coupled with a
gearbox and a differential. Thus, the perspective of mounting a motor with
outer rotor to the wheel of a vehicle is very interesting and may present
plenty of advantages. In this case, much lower flux density and respectively
less magnet mass is required for the achievement of the same maximum
torque. Copper as well as mechanical losses can be significantly lower than
the corresponding ones of inner rotor topology. The manufacturing cost is
lower, whereas at the same time, the total structure is lighter and can be
constructed more easily [18]. Numerous in-wheel concepts for HEVs have
been developed in the last years, mainly by Protean Electric and Mitsubishi.
The design procedure of direct-drive SPMSMs for an HEV presents
increased complexity. There is a large number of variables and geometrical
parameters that have to be estimated, while simultaneously numerous
problem constraints have to be satisfied. The applied constraints refer to the
maximum acceptable value of current density, the maximum value of dc-link
voltage, the motor’s volume and weight due to the limited available space,
etc. Additionally, SPMSMs have to exhibit low-current harmonic content,
non-saturable operation, low torque ripple and cogging torque. The
determination of motor’s thermal behaviour during different operating
conditions and the implementation of the suitable cooling system are also of
great importance. The adequate temperature alleviation can ensure the high
driving performance, the motor’s durability and the elimination of magnets
demagnetization risk [19].
Based on the above, this chapter aims to investigate, optimize, compare and
propose suitable high-power density in-wheel SPMSMs for a light HEV
application. For this purpose, a design, optimization and modelling
methodology for in-wheel motors is analytically presented in Section 2.
According to this approach, the specifications of the derived topology are
incorporated to an analytical HEV’s model, which has been developed in
Matlab/Simulink. In this way, the better approximation of the dynamic
behaviour of the entire system is allowed. The performance estimation of
each single subsystem and the calculation of parameters, such as the fuel
consumption during different driving cycles, are also far more accurate. This
methodology is compared to so far commonly used techniques, which are
reviewed here too. Next, the proposed approach is applied to the case of two
15.3 kW in-wheel motors, which are going to be part of the driving system of
a hybrid passenger car with series-parallel configuration. The derived results
are given in Section 3 and relevant discussion is made regarding the motor
and overall HEV system performance. Moreover, motors thermal behaviour
is studied and a simple and effective cooling system for this kind of traction
system is proposed. Finally, Section 4 summarizes and concludes the work.

High-power density direct-drive in-wheel motors


REQUIREMENTS OVERVIEW
The development of a direct-drive SPMSM, which will exhibit desirable
performance, requires a large amount of problem variables, constants and
constraints to be taken into account according to [20]. Moreover, meta-
heuristic optimization techniques can be applied along with the classical
design theory and the analytical equations. In this case, the multi-objective
SPMSM optimization has to be modelled and performed carefully, especially
when certain quantities are of primary concern [21]. The problem complexity
is increased if an in-wheel PMSM is supposed to be incorporated into the
powertrain of an HEV, whereas its operating point varies almost ceaselessly.
Thus, the study of motor performance in the rated operating point or in the
point of maximum provided torque, using finite element method (FEM) or
fixed permeability method (FPM) has been proven inefficient enough [22].
Consequently, various design approaches and optimization methodologies
have been revealed so far and each of them has its own advantages and
disadvantages.
In the classical HEV design process, the motor’s efficiency map or torque-speed curve is a convenient way to represent the drive system’s performance. The
determination of efficiency, torque and speed for different operating points
permits the preliminary estimation of motor’s characteristics in agreement
with vehicle’s attribute. Also, different topologies that are investigated as
possible candidates for the same application can be easily compared to each
other [23]. However, by using efficiency maps the motor is considered as a
black box, which responds to certain inputs (voltage and current). These two
variables are assumed to be optimal in order to achieve the highest efficiency
at a specific torque and speed output. Furthermore, a map scaling factor
model (MSFM), based again on the knowledge of an efficiency map, is
generally used for the selection of motor’s output power rating and
specifications. The efficiency and torque are scaled using a linear dependency
on the rated power. At the same time, few HEV’s subsystems, such as the
internal combustion engine, wheels, batteries and control scheme, can also be
modelled constructing the appropriate equations and then a joint optimization
of all the subsystems using dynamic programming can be performed [24].
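The map-based representation and the linear scaling used by the MSFM can be sketched as below; the small example map and the scaling rule are illustrative assumptions, not data from the cited works.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical base efficiency map of a 15.3 kW reference motor,
# indexed by speed [rpm] and torque [Nm] (placeholder values).
speed = np.array([500.0, 1500.0, 3000.0])
torque = np.array([20.0, 60.0, 120.0])
eff_map = np.array([[0.82, 0.88, 0.86],
                    [0.88, 0.93, 0.92],
                    [0.86, 0.94, 0.93]])
base_eff = RegularGridInterpolator((speed, torque), eff_map)

def scaled_operating_point(n_rpm, t_nm, p_rated_new, p_rated_base=15.3):
    """Map-scaling-factor idea: torque (and hence power) is assumed to scale
    linearly with the rated power, while the efficiency map shape is reused."""
    k = p_rated_new / p_rated_base
    eff = float(base_eff([[n_rpm, t_nm / k]])[0])   # look up on the base map
    return eff

print(scaled_operating_point(n_rpm=2000.0, t_nm=90.0, p_rated_new=20.0))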
Although the aforementioned procedure permits a better interaction between
the electric motor/s and the other vehicle’s subsystems, the approximation of
the dynamic behaviour of the entire system is not satisfactory enough. It lacks
accuracy concerning energy management estimation and fuel consumption
calculation. Additionally, there is no association between motor’s
performance and its geometrical parameters. A compromise between FEA
and MSFM method is introduced in [9], in which the detailed magnetic
circuit model is incorporated in the optimization process. Starting from a
preliminary topology, the final configuration can be derived when the user’s
requirements are met. The drawback of this approach is that only a restricted
number of variables can be treated simultaneously. Thus, some geometrical
parameters, such as motor’s diameter and length, should be specified by the
designer and this method should be applied only for the optimization of
magnets and windings modulation. A fast magnetostatic FEA is proposed in
[25] in order to address the specific problem. The derived results are now
more precise and the computational time and complexity are significantly
reduced. The final proposed PMSM configuration is developed studying
motor’s torque behaviour and minimizing stator flux linkage for the
efficiency enhancement.
Another important requirement for the optimal HEV’s operation is the
minimization of motor’s losses during different driving cycles or the overall
profile of the HEV [26]. It is evident that design parameters that are
optimized for one average assumed drive cycle are not necessarily optimal
when an alternative use of the vehicle is carried out [27]. At least twelve
characteristics points of representative driving cycles should be used for the
analysis of motor’s performance according to [28]. These points have to
include acceleration, cruising and regenerative modes for more accurate fuel
consumption calculation. A methodology based upon the overall driving
cycle efficiency of the traction drive, which also takes into account the
inverter losses, cooling system specifications and energy consumption of
other subsystems, is presented in [29]. The implementation of the appropriate
cooling system and the determination of its specifications are also of great
importance as stated in [30].
DESCRIPTION OF PROPOSED METHODOLOGY
The complex problem of the development of high-power density direct-drive
SPMSMs for a light HEV can be solved by using a knowledge-based system
(KBS), similar to that analytically described in [31]. The proposed
architecture scheme (depicted in Figure 1) involves a number of knowledge
sources (KS) and several layers that interact with each other, in order to
ensure that the final solution is acceptable from a technical, economic and
manufacturing point of view. The first two layers (layer 0 and 1) incorporate
the provided information about the properties of high quality steels, soft
magnetic conductors and insulation materials, while at the same time user’s
demands, machine’s specifications, design variables and problem constraints
are also determined. At the next level, the appropriate objective functions,
taking into account the aforementioned, are constructed and an optimization
method (e.g. genetic algorithm) is applied. At layer 3, an analytical
evaluation of all the alternative derived solutions is conducted through FEA
and post-processing analysis. Finally, the optimal motor configuration is
selected (layer 4) and its application in the HEV industry is thoroughly
investigated.
The above approach was enhanced, and an overall PMSM design and
HEV performance assessment procedure is finally introduced as a useful
tool in the industrial HEV design process. This methodology is based upon
the efficient design of the in-wheel motors and the determination of their
average driving cycle efficiency. Furthermore, an analytical HEV’s model,
which has been developed in Matlab/Simulink, incorporates all the necessary
subsystems of the vehicle. The internal combustion engine, the two identical
SPMSMs coupled to the front wheels, the battery pack, the dc-dc converter,
the three-phase inverter, the power-split device and the control strategy are
implemented in this model in order to permit a more realistic study of the HEV's
behaviour. For instance, the battery model makes it possible to define
the maximum provided voltage dynamically, while the state of charge,
the effect of the internal resistance and the effect of the prevailing temperature
and working conditions can also be studied. Thus, a more appropriate
selection of each single subsystem can be made, resulting in optimal
energy management and performance.
The first step of the proposed methodology, which is presented in flowchart
form in Figure 2, is the determination of motor’s rated parameters, such as
output power, speed and torque. These features are defined based on
vehicle’s speed and grade-ability along with the collaboration of in-wheel
motors with the internal combustion engine. The outer motor diameter is
fixed by the size of the wheel and the maximum dc-link voltage is also
estimated by the battery pack and converter specifications. For the design of
the SPMSMs a combination of classical design theory and meta-heuristic
optimization techniques can be applied. The designer can choose among
popular techniques based on swarm intelligence, such as genetic algorithm
(GA), particle swarm optimization (PSO), ant colony optimization (ACO),
etc. In [32], it is outlined that another new method called “Grey Wolf
Optimizer” (GWO) exhibits acceptable and satisfactory performance when
implemented in similar machine design problems. Based on the results of
the authors' previous works (i.e. [20, 21]), where different optimization methods
were applied and compared, it was found that all the adopted algorithms
succeeded in converging to a (sub-)optimum design solution. Despite the fact
that GA presents a higher computational cost and complexity than PSO,
fmincon and pattern search, its solutions proved the most attractive.
The same conclusion was validated for all the examined case
studies, in which different performance quantities were also of primary
concern. Additionally, the main advantages of GA are its capacity to detect
parallelism between different agents and its elitist selection. The
first characteristic is crucial for the computation of Pareto solutions, whereas
the latter ensures that the best solutions are passed to the next iterative
step without major changes. For these reasons, GA has finally been chosen for
the specific optimization problem.
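For illustration, a minimal, self-contained sketch of such an elitist GA loop is given below. The design vector, its bounds and the fitness function are placeholders assumed here for demonstration; in the actual methodology the fitness evaluation would call the analytical machine model and FEA rather than the toy function used in this sketch.

    import random

    BOUNDS = [(20, 60), (2.0, 6.0), (10, 40)]   # assumed design variables, e.g. turns/phase, magnet thickness, slot width

    def fitness(x):
        """Toy stand-in for the scalarized cost CF; lower is better."""
        return sum((v - (lo + hi) / 2.0) ** 2 for v, (lo, hi) in zip(x, BOUNDS))

    def random_individual():
        return [random.uniform(lo, hi) for lo, hi in BOUNDS]

    def crossover(a, b):
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(x, rate=0.1):
        return [random.uniform(lo, hi) if random.random() < rate else v
                for v, (lo, hi) in zip(x, BOUNDS)]

    def run_ga(pop_size=30, generations=50, n_elite=2):
        population = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness)
            elite = population[:n_elite]            # elitism: best designs survive unchanged
            parents = population[:pop_size // 2]
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - n_elite)]
            population = elite + children
        return min(population, key=fitness)

    print(run_ga())

The elitist step in this sketch is what the text above refers to: the best candidate designs are copied to the next generation unchanged, so a good solution found early cannot be lost.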
Afterwards, the motor's length and number of poles are initialized. The first
parameter is specified by the size of the tire, while the latter
should be chosen carefully as it is of great importance for the overall motor
performance [33]. The proper pole and slot combination can eliminate the
presence of higher order harmonics in the air gap flux distribution, which
results in lower iron losses and torque ripple. The motor must be capable of
providing high torque when the vehicle accelerates while its losses should be
as low as possible. Moreover, a concentrated winding configuration is
preferable for this traction system, since it presents shorter end windings and
a higher slot fill factor than the corresponding distributed
windings, contributing to a smaller motor volume and lower copper losses,
respectively. High power and torque density are essential characteristics
for such an application since there is restricted space inside the wheel.
Furthermore, a high efficiency should be achieved over a wide speed range.
Thus, the minimization of motor’s volume simultaneously with the
enhancement of efficiency will be of great concern during the construction of
the objective functions. A weighted linear scalarization function is proposed
as a cost function in order to “translate” the original multi-objective
problem into a single-objective one, which can be solved more easily. This
function is simple, and thus the overall optimization complexity is
reduced. Let the general form be CF_j = β ⋅ Q, where β is a 1 × n row vector
containing the weight coefficients of the cost function and Q is an n × 1
column vector containing the values of the motor quantities under
optimization. Numerous cost functions can be produced in this way by
altering the weights and/or quantities according to the problem specifications
and the user's requirements. Normally, a semi-exhaustive search has to be done
first in order to explore the weight search space for linear scalarization and,
consequently, to identify efficient weight combinations. In the case examined
here, we consider the motor's weight (Mtot) and efficiency (η) as equally
important quantities for optimization, thus the above cost function is
formulated as CF = 0.5 Mtot + 0.5 (1 − η).
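A minimal sketch of this scalarization is given below. Normalising the mass term by a reference mass is an assumption added here so that both terms are dimensionless and comparable; the chapter itself only fixes the general form CF = β ⋅ Q and the equal weights.

    def cost_function(mass_kg, efficiency, mass_ref_kg=15.0, weights=(0.5, 0.5)):
        """Scalarize total motor mass and efficiency into a single cost value."""
        w_mass, w_eff = weights
        q_mass = mass_kg / mass_ref_kg   # normalised mass term (assumed reference mass)
        q_loss = 1.0 - efficiency        # (1 - eta) term
        return w_mass * q_mass + w_eff * q_loss

    # Example: a lighter, more efficient candidate yields the lower cost
    print(cost_function(mass_kg=12.5, efficiency=0.955))
    print(cost_function(mass_kg=14.8, efficiency=0.940))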
Next, the optimization procedure is applied for the determination of
numerous variables, such as stator slot configurations, the number of turns
per phase, the thickness and the width of permanent magnets, etc. At each
step of the proposed approach, a large number of constraints has to be met.
Some of them are imposed in order to ensure an acceptable electromagnetic
behaviour of the motor. For example, the motor's rotor yoke should be
sufficiently thick to ensure that no saturation will occur in this part
of the machine. Likewise, the maximum acceptable flux density at other parts
of the motor will also be set as problem constraints. The estimation of various
electromechanical quantities using FEM analysis is indispensable in order to
find out if any of these constraints is violated. If this happens, the adopted
variables and geometrical parameters of the investigated topology have to be
modified and the procedure returns to its initial step.
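A schematic sketch of this constraint-handling step is shown below. The flux density limits are assumed values used only for illustration, the current density limit of 10 A/mm2 for a totally enclosed in-wheel motor is discussed in the next paragraph, and run_fea stands in for the external FEM evaluation of a candidate geometry.

    def violates_constraints(fea):
        return (fea["B_rotor_yoke_T"] > 1.6          # assumed rotor yoke saturation limit
                or fea["B_stator_tooth_T"] > 1.8     # assumed stator tooth saturation limit
                or fea["J_rms_A_mm2"] > 10.0)        # totally enclosed in-wheel motor limit

    def evaluate_candidate(design, run_fea):
        """Return FEA results for a feasible design, or None so the optimizer rejects it."""
        fea = run_fea(design)          # magnetostatic and thermal FEM (external tool)
        if violates_constraints(fea):
            return None                # geometry must be modified; procedure restarts
        return fea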
Another significant constraint is the maximum allowable value of the current
density. For a totally enclosed in-wheel motor this value cannot exceed 10
A/mm2 because there is no physical air circulation and temperature
alleviation. Thus, the determination of this parameter and motor’s thermal
behaviour is essential in order to ensure high driving performance even under
overload conditions, reduce the risk of magnets demagnetization and enhance
the durability of insulation materials. Also, the implementation of a liquid
cooling system for the motor is required. The research in recent literature
revealed that the commonly used cooling system configurations are not
suitable enough for this application. The oil-spray cooling method, which
uses a radiator, is very energy consuming and increases the manufacturing
complexity and the installation cost [34]. The implementation of a ducting
system and slot water jackets is difficult due to the limited space [35]. For the
same reason, circumferential and axial water jackets are difficult to
apply, since the length of the motor is very short. Consequently, a more
appropriate cooling system topology, which is effective enough despite the
small cooling system surface, is developed and described thoroughly in the
next Section. For each derived motor configuration, its thermal model and the
thermal model of the proposed cooling system are constructed, the heat
sources and the material properties are specified, and the boundary
conditions and the temperature coefficients are determined. Finally, the
temperature distribution and the overall performance of the cooling system
are estimated. Its parameters are calculated by taking into account the optimal
energy management of the HEV and the fact that the system’s energy
consumption must be kept as low as possible. The aim of incorporating the
motor's thermal analysis in the proposed methodology is to guarantee that
motor designs which combine efficient performance with the specific
problem requirements will not be excluded at this step of the design
procedure due to overtemperature or a high current density value.
After applying all the eliminatory criteria, only optimal topologies are
selected. Their geometrical parameters and specifications, such as stator
phase resistance, inductance in d- and q-axis, flux linkage established by
magnets, number of pole pairs, efficiency at rated power, source frequency,
shaft inertia and damping coefficient, are then imported into the Matlab/Simulink
HEV model. This model, as mentioned before, involves all the necessary
HEV subsystems and it will be used in order to assess the overall system
performance. The final HEV configuration and motor topology will be
chosen according to the optimal energy management and efficient
collaboration of the subsystems. For this purpose, HEV performance can be
estimated during one single or several different driving cycles. The designer
should carefully choose the appropriate driving cycle, which fulfils the
requirements and the intended use of the vehicle. The urban driving cycle (ECE 15)
and the New European Driving Cycle (NEDC) have been extensively
employed by manufacturers for vehicle energy consumption and emission
testing, as they represent the typical use of light duty vehicles in Europe.
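As a concrete illustration of the parameter hand-off described in this paragraph, the set of quantities exported from the motor design stage to the Matlab/Simulink HEV model can be pictured as below. The field names mirror the quantities listed above, while the numerical values are placeholders assumed for demonstration rather than results of this work.

    motor_parameters = {
        "stator_phase_resistance_ohm": 0.05,   # placeholder value
        "Ld_H": 1.2e-3,                        # d-axis inductance (placeholder)
        "Lq_H": 1.2e-3,                        # q-axis inductance (surface PM: Ld ~ Lq)
        "pm_flux_linkage_Wb": 0.12,            # flux linkage established by magnets
        "pole_pairs": 10,                      # assumed, within the optimization bounds
        "rated_efficiency": 0.95,
        "rated_frequency_Hz": 141.7,           # pole_pairs * 850 rpm / 60
        "shaft_inertia_kg_m2": 0.05,
        "damping_coefficient_N_m_s": 0.001,
    }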
Summarizing, the methodology proposed here seems very promising
compared to other common practices, since it permits the detailed
implementation of the motor's characteristics in the HEV model and the interaction
between its geometrical parameters and the vehicle's performance. Additionally,
the user can thoroughly compare several candidate topologies with each other
before making the final choice, by examining aspects such as the fuel
consumption, the state of charge of the batteries, the compatibility of the
inverter's specifications with the motor's requirements, etc. The large number of
constraints and the determination of the motor's temperature distribution and
electromechanical performance can ensure that the in-wheel motor will
exhibit the desired operation even under adverse working conditions. The
relatively high simulation time required for running the Matlab/Simulink
model could be considered the main disadvantage of the design procedure
proposed here.
Case Studies, Results and Discussion
In this Section the problem of the design and optimization of a light duty
HEV’s traction system is examined. The HEV under consideration
incorporates the series-parallel configuration, using an internal combustion
engine (ICE) and two SPMSMs for propulsion. The electric motors are
implemented around each of the driving wheels to directly deliver power to
them. Series-parallel architecture enables the engine and electric motors to
provide power independently or in conjunction with one another. At lower
vehicle speeds the system operates more as a series vehicle, whereas at high
speeds, where the series drive train is less efficient, the engine takes over and
energy loss is minimized. The engine will be able to produce 115 Nm of
torque at 4200 rpm, whereas its output power and maximum speed will be
57 kW and 5000 rpm, respectively. The output power of each in-wheel motor
will be equal to 15.3 kW and a torque of 170 Nm at 850 rpm will be
provided. Moreover, the engine is going to drive a salient pole synchronous
permanent magnet generator, which will either charge the batteries or provide
power directly to the electric motors depending on the vehicle's mode. A
planetary gear is used in order to split power among the engine, the generator
and the differential. The nominal voltage of the battery pack is 201.6 V
comprising 168 nickel-metal hydride (NiMH) cells, and its nominal capacity
is 6.5 Ah. The battery pack voltage is raised by a boost converter leading to a
400 V dc-link voltage. Finally, each in-wheel motor is fed through a three-
phase inverter (Figure 3) and is individually controlled using vector control
method. For the assessment of the overall system performance, an HEV
model, which is available in Matlab/Simulink version R2016a (Figure 4), has
been modified appropriately in order to meet the specific problem requirements.
This model permits the study of the vehicle's dynamic behaviour, as
aerodynamic and frictional phenomena are included. The vehicle and
component specifications of the case study are presented in Table 1.
The design of a high-power density in-wheel motor is a complex
optimization problem which concludes with the most suitable candidates
according to certain criteria. There are several requirements that have to be
met. Some of them are related to the motor's placement and physical
constraints, such as its outer rotor radius and active length, whereas others are
imposed by the motor's desired operation. The efficiency, for example, is of
great importance considering the energy consumption; an efficiency higher than
90% is considered an appropriate choice. In addition, since the motor is mounted
inside the wheel, as depicted in Figure 5a, its weight must be as low as
possible in order to reduce the unsprung mass and eliminate vibrations. Recently,
in-wheel motors with a power-to-mass ratio of approximately 1 kW/kg have been
implemented in commercially available HEVs. In this study, it will be
investigated whether this value can be exceeded. Thus, the objective function
chosen for the case study is a compromise of motor’s weight and power
losses minimization. The desired SPMSMs characteristics are given in Table
2.
Furthermore, there are more than 15 design variables that have to be
optimized simultaneously (under certain constraints) by the applied
algorithm. Apart from the geometrical parameters that are presented in
Figure 5b, variables such as the number of poles (2p), the number of slots
per pole per phase (q) and the number of conductors per slot (nc) are also
involved. Table 3 summarizes the upper and lower bounds of all these
quantities that will be considered as problem constraints. At this point, it
must be mentioned that, for the sake of space, the analytical equations that
describe the electromechanical and magnetic behaviour of the specific
machine are not given here. The reader can refer to [18, 36] for more details.
Concerning the materials used for different motor’s parts, a high quality
silicon steel (M19-24G) has been selected both for stator and rotor, according
to NEMA’s instructions for super premium efficiency motors. Moreover,
high energy NdFeB magnets have been chosen, as they have been proven
efficient and reliable enough for this kind of application [37]. The values of
materials properties will be regarded as constants during the optimization
problem and they are presented in Table 4.
Following the methodology proposed here, a set of optimization results are
presented in Table 5, in which the design variables of four final solution
topologies are given. Let the derived in-wheel configurations be denoted
“Motor A” to “Motor D”. Their electromechanical performance has been
validated through 2D and 3D FEM and considered acceptable and
satisfactory enough. The obtained results are summarized in Table 6.
Moreover, the overall HEV’s system behaviour assessment has been
conducted by examining vehicle’s subsystems collaboration and calculating
fuel consumption during four different driving cycles, which are depicted in
Figure 6. Among the applied driving cycles are (a) European Driving Cycle
ECE 15, (b) Extra Urban Driving Cycle (EUDC), (c) Supplemental Federal
Test Procedure (SFTP-75) and (d) Japanese 10-15 mode driving cycle (JP 10-
15). During the ECE 15 cycle the vehicle covers a distance of 0.9941 km in 195
s. The average driving speed and the maximum speed are equal to 25.93 km/h
and 50 km/h, respectively, and its maximum acceleration is 1.042 m/s2.
EUDC represents a more aggressive and high speed driving mode. The
maximum developed and average speeds are equal to 120 km/h and 69.36
km/h, respectively. SFTP-75 is commonly used for emission certification and
fuel economy testing of light duty vehicles in the United States. This cycle
involves both urban and high-speed road driving. In this case study,
only the second part of this cycle is incorporated. The duration of JP 10-15 is
660 s and its average speed is 22.7 km/h. A fuel consumption lower than
5.0 l/100 km has been considered acceptable and each topology that did not
meet the specific requirement has been excluded from the next step of the
proposed methodology. The optimization procedure was terminated when for
an investigated configuration the target was achieved for all the examined
driving cycles. The relevant results are presented in Table 7. From Tables
5–7, it is initially clear that the proposed approach succeeded in finding
optimum and feasible design solutions satisfying all the existing constraints.
Analytically the following can be observed:
I. The optimization procedure provided solutions over the examined
range of pole numbers, and the final topologies are investigated and
compared with each other from several aspects. The designer has the
opportunity to evaluate the derived results from many points of view
(i.e. technical, economical, etc.) and finally select the appropriate in-
wheel SPMSM topology.
II. The motors' efficiency has been found high enough, as it varies
from 94 to 95.5%. This feature, especially when it is combined with the
lowest possible current, is of great importance for the HEV's energy
management. Concerning this, Motor A seems to be a more suitable
choice for the case study.
III. All topologies exhibit a high power-to-mass ratio of over 1 kW/kg,
since their mass ranges from 12.5 to 14.8 kg. In the case of Motor
C, the ratio is increased by 22% (see the short numerical check after
this list). Thus, if the motor's total mass is the primary objective, this
motor prevails. Despite their relatively low weight, all configurations
present durability and do not suffer from mechanical stresses.
IV. The volume of NdFeB magnets is small, which will lead to a
reasonable motor cost.
V. The current density constraint has been fulfilled. However,
considering the short axial length of the machine (30 mm) and its
placement in a totally enclosed environment, the implementation of
a cooling system, which has also been proposed and optimized here,
is essential. More details about the cooling system's
characteristics and its performance are provided later in
this chapter.
VI. During the adopted design approach, a large number of motor
features were also determined, as they significantly affect its
operation. Some estimated quantities of great importance are the airgap
flux density, the torque and phase back-EMF waveforms, as well as
their corresponding harmonics, the cogging torque, the torque angle and
the magnetic field distribution. For completeness, these
quantities are depicted in Figures 7–11, indicatively for Motor C and
Motor D. As can be seen from Figure 7, the values of flux density
developed over the different parts of both configurations are found
within acceptable limits. Despite the low volume and especially the
short active length of the motor, non-saturated operation has been detected
for all the finally proposed topologies. Moreover, the airgap flux
density and the phase back-EMF, as depicted in Figures 8 and 9,
respectively, present low harmonic content. The proper selection of
winding configurations along with the specification of the permanent
magnet parameters through the proposed approach contributes to this
feature. The airgap flux is of great importance for the torque pulsation.
The small amplitude of its third, fifth and seventh harmonics in both
cases results in a low motor torque ripple. The torque
ripple for Motor C was found equal to 3.3%, while the same
parameter for Motor D was equal to 2.4%. The above can also be
validated by observing Figure 10, in which the torque and its
harmonic content are presented. A very low cogging torque and a
relatively low torque angle are also achieved, as can be seen from Figure
11. These parameters are essential for this kind of traction
application, as their low value can ensure a high quality and safe
driving performance.
VII. The calculation of crucial HEV parameters, such as fuel
consumption, permits a better approximation of the optimal
configuration. For example, Motor A seems to have a significant
advantage over the other motors across all examined driving
cycles when the aspect of fuel consumption is examined. This would
not be so clear if only the motor's rated performance characteristics
were taken into account.
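The power-to-mass figures in observation III can be checked with a short calculation, assuming (as the 22% figure implies) that Motor C sits at the lower end of the reported 12.5-14.8 kg mass range; the rated power of 15.3 kW per in-wheel motor is taken from the case study above.

    rated_power_kW = 15.3              # per in-wheel motor, from the case study
    mass_motor_C_kg = 12.5             # assumed mass of Motor C (lower end of range)
    mass_heaviest_kg = 14.8

    ratio_C = rated_power_kW / mass_motor_C_kg          # ~1.22 kW/kg, ~22% above 1 kW/kg
    ratio_heaviest = rated_power_kW / mass_heaviest_kg  # ~1.03 kW/kg, still above 1 kW/kg
    print(round(ratio_C, 2), round(ratio_heaviest, 2))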
At this point, Motor C, which exhibits the highest nominal current among the
topologies, is going to be used as a case study for the description of the
applied cooling system. It must be outlined that a suitable cooling topology
for this kind of motor should combine the following features: (a) be easily
implemented on the restricted surface of the motor and (b) be as close
as possible to the part of the machine that is the main heat source (i.e.
the stator copper windings). Taking these into account, the attachment of a
cooling channel into the inner yoke circumference is proposed. This
configuration, which looks like a ring, permits the circulation of a coolant
through a pipe with rectangular cross section and the removal of the heat
from the inner stator surface. A pump combined with a heat
exchanger/compressor for lowering the coolant's temperature completes
the overall cooling system. The exact position of the cooling channel is
shown in Figure 12, in which the shell, the rim and the tire are also
presented. All these will be parts of the developed thermal model in order to
have a more accurate temperature determination. Moreover, during thermal
analysis the temperature and the pressure inside the rim will be taken into
account. Compared to other cooling system schemes, the topology proposed here
enables a larger contact area between the stator and the coolant,
a simpler manufacturing and installation procedure and a lower cost.
The design procedure of the cooling system proposed here requires the
incorporation of an optimization method along with the motor's thermal
analysis through FEM under different operating conditions.
The specifications of its parameters, such as the coolant's flow rate and
inlet temperature, are of great importance for the system's efficiency,
and they will be calculated through the applied optimization algorithm. Since
these parameters have an essential effect on the HEV's energy management, their
values have to be carefully selected. For example, a high coolant flow
rate will lead to increased energy consumption by the pump in
order to circulate the liquid. Likewise, the heat exchanger should be capable
of restoring the coolant's temperature while its capacity remains as low as
possible. During this procedure, the cooling channel's dimensions will be
considered as variables. An aluminium alloy (6060-T6) with good
mechanical properties has been selected for the channel, while ethylene-
glycol mixed with water (50-50 volumetric proportion) has been chosen as
coolant. The applied methodology involves the following steps: (a) the
determination of the thermal properties of all involved materials (including
the shell, the tire and the insulation materials), (b) the specification of motor’s
heat resources and the ambient temperature, (c) the calculation of the
boundary conditions in the air gap and other motor’s parts and (d) the
modification of the 2D thermal modelling according to the prevailing
conditions. The reader can refer to [38–40] in order to find more details about the
classical theory governing the thermal analysis and the development of the
motor’s and cooling system’s thermal model.
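The selection of the cooling operating point described above can be sketched as a simple grid search over inlet temperature and flow rate. The candidate values, the winding temperature limit and the two helper functions (the thermal FEA call and the pump energy model) are assumptions introduced here for illustration only.

    def select_cooling_point(run_thermal_fea, pump_energy, t_winding_limit_C=155.0):
        """Pick the (inlet temperature, flow rate) pair that keeps the windings below
        an assumed insulation limit while consuming the least pumping energy."""
        candidates = []
        for t_inlet_C in (20, 25, 30, 35):        # candidate inlet temperatures (assumed)
            for flow_l_min in (2, 4, 6, 8):       # candidate flow rates (assumed)
                t_winding_C = run_thermal_fea(t_inlet_C, flow_l_min)  # external thermal model
                if t_winding_C <= t_winding_limit_C:
                    candidates.append((pump_energy(flow_l_min), t_inlet_C, flow_l_min))
        return min(candidates)[1:] if candidates else None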
In Figure 13, the influence of the coolant's inlet temperature and flow rate on the
temperature distribution over different in-wheel motor parts is presented.
Based on these results and the aforementioned considerations, an inlet
temperature of 30 °C along with a flow rate of 4 l/min has been chosen as the
optimal combination. Moreover, the channel's length, width and breadth have
been specified as 30 mm, 10 mm and 1.5 mm, respectively. The derived
requirements for heat exchanger, pump and pipe can be easily fulfilled by
commercially available models. Figure 14 shows the maximum observed
temperatures of motor’s parts for the same operating conditions without and
with the application of the proposed cooling system. It can be easily observed
that a significant temperature drop is achieved with the implementation of the
cooling system at all the different loading conditions, and the cooling system
is considered efficient enough. The maximum heat removal occurs in the
stator yoke and the copper windings. The maximum observed temperature in
the stator slots is far from the insulation materials' limits. The temperature
developed in the air gap, and especially near the magnets, is relatively low,
and there is no risk of magnet demagnetization.
Conclusions
In this chapter, the perspective of direct-drive traction systems for HEVs,
which has lately attracted increasing interest among researchers and
manufacturers but is not adequately investigated in the literature, is
examined. A design and optimization methodology for the development of
high-power density in-wheel motors and the corresponding assessment
of the overall HEV system performance is derived and
discussed thoroughly. This approach is enhanced with the incorporation of a
simple yet efficient cooling system and the interaction of the motor's
geometrical parameters and performance with the vehicle's subsystems by
using a dynamic HEV model. Through a case study, the particular problem
requirements and constraints, the eliminatory criteria and the motor
topology selection strategy are illustrated and commented upon. Based on the
overall results, the introduced methodology seems very promising and could
be of great aid to designers in reaching the optimal motor configuration.
Automotive Clutches
An automotive clutch is used to connect and disconnect the engine and
manual transmission or transaxle. The clutch is located between the back of
the engine and the front of the transmission. With a few exceptions, the
clutches common to the Naval Construction Force (NCF) equipment are the
single, double, and multiple-disc types. The clutch that you will encounter the
most is the single-disc type (Figure 10-1). The double-disc clutch is
substantially the same as the single-disc, except that another driven disc and
an intermediate driving plate are added. This clutch is used in heavy-duty
vehicles and construction equipment. The multiple-disc clutch is used in the
automatic transmission and for the steering clutch used in tracked equipment.
The operating principles, component functions, and maintenance
requirements are essentially the same for each of the three clutches
mentioned. This being the case, the single-disc clutch will be used to acquaint
you with the fundamentals of the clutch.
Clutch Construction
The clutch is the first drive train component powered by the engine
crankshaft. The clutch lets the driver control power flow between the engine
and the transmission or transaxle. Before understanding the operation of a
clutch, you must first become familiar with the parts and their functions. This
information is very useful when learning to diagnose and repair the clutch
assembly.
Clutch Release Mechanism
A clutch release mechanism allows the operator to operate the clutch.
Generally, it consists of the clutch pedal assembly, a mechanical linkage,
cable, or hydraulic circuit, and the clutch fork. Some manufacturers include
the release bearing as part of the clutch release mechanism.
Linkage
A clutch linkage mechanism uses levers and rods to transfer motion from the
clutch pedal to the clutch fork. One configuration is shown in Figure 10-2.
When the pedal is pressed, a pushrod shoves on the bell crank and the bell
crank reverses the forward movement of the clutch pedal. The other end of
the bell crank is connected to the release rod. The release rod transfers bell
crank movement to the clutch fork. It also provides a method of adjustment
for the clutch.
Cable
The clutch cable mechanism uses a steel cable inside a flexible housing to
transfer pedal movement to the clutch fork. As shown in Figure 10-3, the
cable is usually fastened to the upper end of the clutch pedal, with the other
end of the cable connecting to the clutch fork. The cable housing is mounted
in a stationary position. This allows the cable to slide inside the housing
whenever the clutch pedal is moved. One end of the clutch cable housing has
a threaded sleeve for clutch adjustment.
Hydraulic
A hydraulic clutch release mechanism uses a simple hydraulic circuit to
transfer clutch pedal action to the clutch fork (Figure 10-4). It has three basic
parts—master cylinder, hydraulic lines, and a slave cylinder. Movement of
the clutch pedal creates hydraulic pressure in the master cylinder, which
actuates the slave cylinder. The slave cylinder then moves the clutch fork.
Slave Cylinder with Clutch Master Cylinder
The master cylinder is the controlling cylinder that develops the hydraulic
pressure. The slave cylinder is the operating cylinder that is actuated by the
pressure created by the master cylinder.
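The force multiplication in this master/slave cylinder pair follows Pascal's law: the line pressure is the same in both cylinders, so the force at the slave piston scales with the ratio of the piston areas. The sketch below illustrates this; the pedal force, pedal lever ratio and piston diameters are made-up values for demonstration, not specifications from this manual.

    import math

    def slave_force_N(pedal_force_N, pedal_ratio, d_master_mm, d_slave_mm):
        """Force delivered to the clutch fork by the slave cylinder."""
        rod_force = pedal_force_N * pedal_ratio                      # pedal lever multiplies leg force
        pressure = rod_force / (math.pi * (d_master_mm / 2) ** 2)    # same pressure throughout the line
        return pressure * math.pi * (d_slave_mm / 2) ** 2            # acting on the slave piston area

    # Example: 150 N at the pedal, 6:1 pedal ratio, 19 mm master and 25 mm slave pistons
    print(round(slave_force_N(150, 6, 19, 25)))   # ~1558 N at the clutch fork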
Clutch Fork
The clutch fork, also called a clutch arm or release arm, transfers motion
from the release mechanism to the release bearing and pressure plate. The
clutch fork sticks through a square hole in the bell housing and mounts on a
pivot. When the clutch fork is moved by the release mechanism, it pries on
the release bearing to disengage the clutch. A rubber boot fits over the clutch
fork. This boot is designed to keep road dirt, rocks, oil, water, and other
debris from entering the clutch housing.
Clutch Housing
The clutch housing is also called the bell housing. It bolts to the rear of the
engine, enclosing the clutch assembly, with the manual transmission bolted to
the back of the housing. The lower front of the housing has a metal cover that
can be removed for flywheel ring gear inspection or when the engine must be
separated from the clutch assembly. A hole is provided in the side of the
housing for the clutch fork. It can be made of aluminum, magnesium, or cast
iron.
Release Bearing
The release bearing, also called the throw-out bearing, is a ball bearing and
collar assembly. It reduces friction between the pressure plate levers and the
release fork. The release bearing is a sealed unit packed with lubricant. It
slides on a hub sleeve extending out from the front of the manual
transmission or transaxle and is moved by either hydraulic or manual
pressure.
Hydraulic Type
The hydraulic release bearing eliminates the stock mechanical release bearing
linkage and slave cylinder. The release bearing mounts on the transmission
face or slips over the input shaft of the transmission. When the clutch pedal is
pressed, the bearing face presses against the pressure plate to disengage the
clutch.
Manual Type
The release bearing snaps over the end of the clutch fork. Small spring clips
hold the bearing on the fork. Then fork movement in either direction slides
the release bearing along the transmission hub sleeve.
Pressure Plate
The pressure plate is a spring-loaded device that can either engage or
disengage the clutch disc and the flywheel. It bolts to the flywheel. The
clutch disc fits between the flywheel and the pressure plate. There are two
types of pressure plates—the coil spring type and the diaphragm type.
Coil Spring Pressure Plate
The coil spring pressure plate uses small coil springs similar to valve springs
(Figure 10-5). The face of the pressure plate is a large, flat ring that contacts
the clutch disc during clutch engagement. The back side of the pressure plate
has pockets for the coil springs and brackets for hinging the release levers.
During clutch action, the pressure plate moves back and forth inside the
clutch cover. The release levers are hinged inside the pressure plate to pry on
and move the pressure plate face away from the clutch disc and flywheel.
Small clip-type springs fit around the release levers to keep them from rattling
when fully released. The pressure plate cover fits over the springs, the release
levers, and the pressure plate face. Its main purpose is to hold the assembly
together. Holes around the outer edge of the cover are for bolting the pressure
plate to the flywheel.
Diaphragm Pressure Plate
The diaphragm pressure plate (Figure 10-6) uses a single diaphragm spring
instead of coil springs. The diaphragm spring is a large, round disc of spring
steel. The spring is bent or dished and has pie-shaped segments running from
the outer edge to the center. The diaphragm spring is mounted in the pressure
plate with the outer edge touching the back of the pressure plate face. The
outer rim of the diaphragm is secured to the pressure plate and is pivoted on
rings approximately 1 inch from the outer edge. Application of pressure at the
inner section of the diaphragm will cause the outer rim to move away from
the flywheel and draw the pressure plate away from the clutch disc,
disengaging the clutch.
Clutch Disc
Wet Type
A “wet” clutch is immersed in a cooling lubricating fluid, which
also keeps the surfaces clean and gives smoother performance and longer life.
Wet clutches, however, tend to lose some energy to the liquid. Since the
surfaces of a wet clutch can be slippery, stacking multiple clutch discs can
compensate for the lower coefficient of friction and so eliminate slippage
under power when fully engaged. Wet clutches are designed to provide a
long, service-free life. They often last the entire life of the machine they are
installed on. If you must provide service to a wet clutch, refer to the
manufacturer's service manual for specific details.
Dry Type
The clutch disc of a “dry” clutch, also called the friction disc, consists of a
splined hub and a round metal plate covered with friction material (lining).
The splines in the center of the clutch disc mesh with the splines on the input
shaft of the manual transmission. This makes the input shaft and disc turn
together. However, the disc is free to slide back and forth on the shaft. Clutch
disc torsion springs, also termed damping springs, absorb some of the
vibration and shock produced by clutch engagement. They are small coil
springs located between the clutch disc splined hub and the friction disc
assembly. When the clutch is engaged, the pressure plate jams the stationary
disc against the spinning flywheel. The torsion springs compress and soften
as the disc first begins to turn with the flywheel. Clutch disc facing springs,
also called the cushioning springs, are flat metal springs located under the
friction lining of the disc. These springs have a slight wave or curve, allowing
the lining to flex inward slightly during initial engagement. This also allows
for smooth engagement. The clutch disc friction material, also called disc
lining or facing, is made of heat-resistant asbestos, cotton fibers, and copper
wires woven or molded together. Grooves are cut into the friction material to
aid cooling and release of the clutch disc. Rivets are used to bond the friction
material to both sides of the metal body of the disc.
Flywheel
The flywheel is the mounting surface for the clutch (Figure 10-7). The
pressure plate bolts to the flywheel face. The clutch disc is clamped and held
against the flywheel by the spring action of the pressure plate. The face of the
flywheel is precision machined to a smooth surface. The face of the flywheel
that touches the clutch disc is made of iron. Even if the flywheel were
aluminum, the face is iron because it wears well and dissipates heat better.
Pilot Bearing
The pilot bearing or bushing is pressed into the end of the crankshaft to
support the end of the transmission input shaft (Figure 10-7). The pilot
bearing is typically a solid bronze bushing, but it may also be a roller or ball bearing.
The end of the transmission input shaft has a small journal machined on its
end. This journal slides inside the pilot bearing. The pilot bearing prevents
the transmission shaft and clutch disc from wobbling up and down when the
clutch is released. It also helps the input shaft center the clutch disc on the
flywheel.
Clutch Operation
When the operator presses the clutch pedal, the clutch release mechanism
pulls or pushes on the clutch release lever or fork (Figure 10-8). The fork
moves the release bearing into the center of the pressure plate, causing the
pressure plate to pull away from the clutch disc, releasing the disc from the
flywheel. The engine crankshaft can then turn without turning the clutch disc
and transmission input shaft. When the operator releases the clutch pedal,
spring pressure inside the pressure plate pushes forward on the clutch disc.
This action locks the flywheel, the clutch disc, the pressure plate, and the
transmission input shaft together. The engine again rotates the transmission
input shaft, the transmission gears, the drive train, and the wheels of the
vehicle.
Clutch Start Switch
Many of the newer vehicles incorporate a clutch start switch into the starting
system. The clutch start switch is mounted on the clutch pedal assembly. The
clutch start switch prevents the engine from cranking unless the clutch pedal
is depressed fully. This serves as a safety device that keeps the engine from
possibly starting while in gear. Unless the switch is closed (clutch pedal
depressed), the switch prevents current from reaching the starter solenoid.
With the transmission in neutral, the clutch start switch is bypassed so the
engine will crank and start.
Clutch Adjustment
Clutch adjustments are made to compensate for wear of the clutch disc lining
and linkage between the clutch pedal and the clutch release lever. This
involves setting the correct amount of free play in the release mechanism.
Too much free play causes the clutch to drag during clutch disengagement.
Too little free play causes clutch slippage. It is important for you to know
how to adjust the three types of clutch release mechanisms.
Clutch Linkage Adjustment
Mechanical clutch linkage is adjusted at the release rod going to the release
fork (Figure 10-2). One end of the release rod is threaded. The effective
length of the rod can be increased to raise the clutch pedal (decrease free
travel). It can also be shortened to lower the clutch pedal (increase free
travel). To change the clutch adjustment, loosen the release rod nuts. Turn the
release rod nuts on the threaded rod until you have reached the desired free
pedal travel.
Pressure Plate Adjustment
When a new pressure plate is installed, do not forget to check the plate for
proper adjustments. These adjustments will ensure proper operation of the
pressure plate. The first adjustment ensures proper movement of the pressure
plate in relation to the cover. With the use of a straightedge and a scale as
shown in Figure 10-9, begin turning the adjusting screws until you obtain the
proper clearance between the straightedge and the plate as shown. For exact
measurements, refer to the manufacturer’s service manual. The second
adjustment positions the release levers and allows the release bearing to
contact the levers simultaneously while maintaining adequate clearance of the
levers and disc or pressure plate cover. This adjustment is known as finger
height. To adjust the pressure plate, place the assembly on a flat surface and
measure the height of the levers, as shown in Figure 10-10. Adjust it by
loosening the locknut and turning the adjuster. After the proper height has been set, make
sure the locknuts are locked and staked with a punch to keep them from
coming loose during operations. Exact release lever height can be found in
the manufacturer’s service manual.
Clutch Cable Adjustment
Like the mechanical linkage, a clutch cable adjustment may be required to
maintain the correct pedal height and free travel. Typically the clutch cable
will have an adjusting nut. When the nut is turned, the length of the cable
housing increases or decreases. To increase clutch pedal free travel, turn the
clutch cable housing nut to shorten the housing, and, to decrease clutch pedal
free travel, turn the nut to lengthen the housing.
Hydraulic Clutch
The hydraulically operated clutch is adjusted by changing the length of the
slave cylinder pushrod. To adjust a hydraulic clutch, simply turn the nut or
nuts on the pushrod as needed.
Note
When a clutch adjustment is made, refer to the manufacturer's service manual
for the correct method of adjustment and clearance. If no manuals are
available, an adjustment that allows 1 1/2 inches of clutch pedal free travel
will allow adequate clutch operation until the vehicle reaches the shop and
manuals are available.
Clutch Troubleshooting
An automotive clutch normally provides dependable service for thousands of
miles. However, stop and go traffic will wear out a clutch quicker than
highway driving. Every time a clutch is engaged, the clutch disc and other
components are subjected to considerable heat, friction, and wear.
Operator abuse commonly causes premature clutch troubles. For instance,
"riding the clutch," resting your foot on the clutch pedal while driving, and
other driving errors can cause early clutch failure.
When a vehicle enters the shop for clutch troubles, you should test drive the
vehicle. While the vehicle is being test driven, you should check the action of
the clutch pedal, listen for unusual noises, and feel for clutch pedal
vibrations.
There are five types of clutch problems—slipping, grabbing, dragging,
abnormal noises, and vibration. It is important to know the symptoms
produced by these problems and the parts that might be causing them.
Slipping
Slipping occurs when the driven disc fails to rotate at the same speed as the
driving members when the clutch is fully engaged. This condition results
whenever the clutch pressure plate fails to hold the disc tight against the face
of the flywheel. If clutch slippage is severe, the engine speed will rise rapidly
on acceleration while the vehicle gradually increases in speed. Slight but
continuous slippage may go unnoticed until the clutch facings are ruined by
excessive temperature caused by friction.
Normal wear of the clutch lining causes the free travel of the clutch linkage to
decrease, creating the need for adjustment. Improper clutch adjustment can
cause slippage by keeping the release bearing in contact with the pressure
plate in the released position. Even with your foot off the pedal, the release
mechanism will act on the clutch fork and release bearing.
Some clutch linkages are designed to allow only enough adjustment to
compensate for the lining to wear close to the rivet heads. This prevents
damage to the flywheel and pressure plate by the rivets wearing grooves in
their smooth surfaces.
Other linkages will allow for adjustment after the disc is worn out. When in
doubt whether the disc is worn excessively, remove the inspection cover on
the clutch housing and visually inspect the disc. Binding linkage prevents the
pressure plate from exerting its full pressure against the disc, allowing it to
slip. Inspect the release mechanism for rusted, bent, misaligned, sticking, or
damaged components. Wiggle the release fork to check for free play. These
problems result in slippage.
A broken motor mount (engine mount) can cause clutch slippage by allowing
the engine to move, binding the clutch linkage. Under load, the engine can
lift up in the engine compartment, shifting the clutch linkage and pushing on
the release fork.
Grease and oil on the disc will also cause slippage. When this occurs, locate
and stop any leakage, thoroughly clean the clutch components, and replace
the clutch disc. This is the only remedy.
If clutch slippage is NOT caused by a problem with the clutch release
mechanism, then the trouble is normally inside the clutch. You have to
remove the transmission and clutch components for further inspection.
Internal clutch problems, such as weak springs and bent or improperly
adjusted release levers, will prevent the pressure plate from applying even
pressure. This condition allows the disc to slip.
To test the clutch for slippage, set the emergency brake and start the engine.
Place the transmission or transaxle in high gear. Then try to drive the vehicle
forward by slowly releasing the clutch pedal. A clutch in good condition
should lock up and immediately kill the engine. A badly slipping clutch may
allow the engine to run, even with the clutch pedal fully released. Partial
clutch slippage could let the engine run momentarily before stalling.
NOTE: Never let a clutch slip for more than a second or two. The extreme
heat generated by slippage will damage the flywheel and pressure plate faces.
Grabbing
A grabbing or chattering clutch will produce a very severe vibration or
jerking motion when the vehicle is accelerated from a standstill. Even when
the operator slowly releases the clutch pedal, it will seem like the clutch pedal
is being pumped rapidly up and down. A loud bang or chattering may be
heard as the vehicle body vibrates.
Clutch grabbing and chatter is caused by problems with components inside
the clutch housing (friction disc, flywheel, or pressure plate). Other reasons
for a grabbing clutch could be oil or grease on the disc facings, glazing, or
loose disc facings. Broken parts in the clutch, such as broken disc facings,
broken facing springs, or a broken pressure plate, will also cause grabbing.
There are several things outside of the clutch that will cause a clutch to grab
or chatter when it is being engaged. Loose spring shackles or U-bolts, loose
transmission mounts, and worn engine mounts are among the items to be
checked. If the clutch linkage binds, it may release suddenly to throw the
clutch into quick engagement, resulting in a heavy jerk. However, if all these
items are checked and found to be in good condition, the trouble is inside the
clutch itself, and the clutch will have to be removed for repair.
Dragging
A dragging clutch will make the transmission or transaxle grind when trying
to engage or shift gears. This condition results when the clutch disc does not
completely disengage from the flywheel or pressure plate when the clutch
pedal is depressed. As a result, the clutch disc tends to continue turning with
the engine and attempts to drive the transmission.
The most common cause of a dragging clutch is too much clutch pedal free
travel. With excessive free travel, the pressure plate will not fully release
when the clutch pedal is pushed to the floor. Always check the clutch
adjustments first. If adjustment of the linkage does not correct the trouble, the
problem is in the clutch, which must be removed for repair.
On the inside of the clutch housing, you will generally find a warped disc or
pressure plate, oil or grease on the friction surface, rusted or damaged
transmission input shaft, or improper adjustment of the pressure plate release
levers causing the problem.
Abnormal Noises
Faulty clutch parts can make various noises. When an operator reports that a
clutch is making noise, find out when the noise is heard. Does the sound
occur when the pedal is moved, when in neutral, when in gear, or when the
pedal is held to the floor? This will assist you in determining which parts are
producing these noises.
An operator reports hearing a scraping, clunking, or squeaking sound when
the clutch pedal is moved up or down. This is a good sign of a worn or
unlubricated clutch release mechanism. With the engine off, pump the pedal
and listen for the sound. Once you locate the source of the sound, you should
clean, lubricate, or replace the parts as required.
Sounds produced from the clutch when the clutch is initially engaged are
generally due to friction disc problems, such as a worn clutch disc facing,
which causes a metal-to-metal grinding sound. A rattling or a knocking sound
may be produced by weak or broken clutch disc torsion springs. These
sounds indicate problems that require the removal of the transmission and
clutch assembly for repair.
If clutch noises are noticeable when the clutch is disengaged, the trouble is
most likely the clutch release bearing. The bearing is probably either worn or
binding, or, in some cases, is losing its lubricant. Most clutch release bearings
are factory lubricated; however, on some larger trucks and construction
equipment, the bearing requires periodic lubrication. A worn pilot bearing
may also produce noises when the clutch is disengaged. The worn pilot
bearing can let the transmission input shaft and clutch disc vibrate up and
down, causing an unusual noise.
Sounds heard in neutral, which disappear when the clutch pedal is pushed,
are caused by problems inside the transmission. Many of these sounds are
due to worn bearings. However, always refer to the troubleshooting chart in
the manufacturer's manual.
Pedal Pulsation
A pulsating clutch pedal is caused by the runout (wobble or vibration) of one
of the rotating members of the clutch assembly. A series of slight movements
can be felt on the clutch pedal. These pulsations are noticeable when light
foot pressure is applied. This is an indication of trouble that could result in
serious damage if not corrected immediately. There are several conditions
that can cause these pulsations. One possible cause is misalignment of the
transmission and engine.
If the transmission and engine are not in line, detach the transmission and
remove the clutch assembly. Check the clutch housing alignment with the
engine and crankshaft. At the same time, check the flywheel for runout, since
a bent flywheel or crankshaft flange will produce clutch pedal pulsation. If
the flywheel does not seat on the crankshaft flange, remove the flywheel.
After cleaning the crankshaft flange and flywheel, replace the flywheel,
making sure a positive seat is obtained between the flywheel and the flange.
If the flange is bent, the crankshaft must be replaced.
Other causes of clutch pedal pulsation include bent or maladjusted pressure
plate release levers, a warped pressure plate, or a warped clutch disc. If either
the clutch disc or pressure plate is warped, they must be replaced.
Clutch Overhaul
When adjustment or repair of the linkage fails to remedy problems with the
clutch, you must remove the clutch for inspection. Discard any faulty parts
and replace them with new or rebuilt components. If replacement parts are
not readily available, a decision to use the old components should be based
on the manufacturer’s and the maintenance supervisor’s recommendations.
Transmission or transaxle removal is required to service the clutch. Always
follow the detailed directions in the service manual. To remove the clutch in a
rear-wheel drive vehicle, remove the drive shaft, the clutch fork, the clutch
release mechanism, and the transmission. With a front-wheel drive vehicle,
the axle shafts (drive axles), the transaxle, and, in some cases, the engine
must be removed for clutch repairs.
Warnings While Servicing
· When the transmission or transaxle is removed, support the weight
of the engine. Never let the engine, the transmission, or the transaxle be
unsupported. The transmission input shaft, clutch fork, engine mounts,
and other associated parts could be damaged.
· After removal of the transmission or transaxle bolts, remove the
clutch housing from the rear of the engine. Support the housing as you
remove the last bolt. Be careful not to drop the clutch housing as you
pull it away from the dowel pins.
· Using a hammer and a center punch, mark the pressure plate and
flywheel. You will need these marks when reinstalling the same
pressure plate to assure correct balancing of the clutch.
· With the clutch removed, clean and inspect all components for
wear and damage. After cleaning, inspect the flywheel and pressure
plate for signs of unusual wear, such as scoring or cracks. Use a
straightedge to check for warpage of the pressure plate. Using a dial
indicator, measure the runout of the flywheel. The pressure plate release
levers should show very limited or no signs of wear from contact with
the release bearing. If you note excessive wear, cracks, or warping on
the flywheel and/or pressure plate, you should replace the assembly.
This is also a good time to inspect the ring gear teeth on the flywheel. If
they are worn or chipped, install a new ring gear.
· A clutch disc contains asbestos—a cancer-causing substance. Be
careful how you clean the parts of the clutch. Avoid using compressed
air to blow clutch dust from the parts.
· While inspecting the flywheel, you should check the pilot bearing
in the end of the crankshaft. A worn pilot bearing will allow the
transmission input shaft and clutch disc to wobble up and down. Using
a telescoping gauge and a micrometer, measure the amount of wear in
the bushing. For wear measurements of the pilot bearing, refer to the
service manual. If a roller bearing is used, rotate the rollers. They should turn
freely and show no signs of rough movement. If replacement of the
pilot bearing is required, use a slide hammer puller to remove the
bearing from the crankshaft end. Before installing a new pilot bearing,
check the fit by sliding it over the input shaft of the transmission. Then
drive the new bearing into the end of the crankshaft.
· Inspect the disc for wear; inspect the depth of the rivet holes, and
check for loose rivets and worn or broken torsion springs. Check the
splines in the clutch disc hub for a "like new" condition. Inspect the
clutch shaft splines by placing the disc on the clutch shaft and sliding it
over the splines. The disc should move relatively free back and forth
without any unusual tightness or binding. Normally, the clutch disc is
replaced anytime the clutch is torn down for repairs.
· Another area to inspect is the release bearing. The release bearing
and sleeve are usually sealed and factory packed (lubricated). A bad
release bearing will produce a grinding noise whenever the clutch pedal
is pushed down. To check the action of the release bearing, insert your
fingers into the bearing; then turn the bearing while pushing on it. Try
to detect any roughness; it should rotate smoothly. Also, inspect the
spring clip on the release bearing or fork. If bent, worn, or fatigued, the
bearing or fork must be replaced.
· The last area to check before reassembly is the clutch fork. If it is
bent or worn, the fork can prevent the clutch from releasing properly.
Inspect both ends of the fork closely. Also, inspect the clutch fork pivot
point in the clutch housing; the pivot ball or bracket should be
undamaged and tight.
· When you install a new pressure plate, do not forget to check the
plate for proper adjustments.
· Reassemble the clutch in the reverse order of disassembly. Mount
the clutch disc and pressure plate on the flywheel. Make sure the disc is
facing in the right direction. Usually, the disc's offset center (hub and
torsion springs) fits into the pressure plate.
· If reinstalling, line up the old pressure plate using the alignment
marks made before disassembly. Start all of the pressure plate bolts by
hand. Never replace a clutch pressure plate bolt with a weaker bolt.
Always install the special case-hardened bolt recommended by the
manufacturer.
· Use a clutch alignment tool to center the clutch disc on the
flywheel. If an alignment tool is unavailable, you can use an old clutch
shaft from the same type of vehicle. Tighten each pressure plate bolt a
little at a time in a crisscross pattern. This will apply equal pressure on
each bolt as the pressure plate spring(s) are compressed. When the bolts
are snugly in place, torque them to the manufacturer’s specifications
found in the service manual. Once the pressure plate bolts are torqued
to specification, slide out the alignment tool. Without the clutch disc
being centered, it is almost impossible to install the transmission or
transaxle.
· Next, install the clutch fork and release bearing in the clutch
housing. Fit the clutch housing over the rear of the engine. Dowels are
provided to align the housing on the engine. Install and tighten the bolts
in a crisscross manner.
· Install the transmission and drive shaft or the transaxle and axle
shafts. Reconnect the linkages, the cables, any wiring, the battery, and
any other parts required for disassembly. After all parts have been
installed, adjust the clutch pedal free travel as prescribed by the
manufacturer, and test drive the vehicle for proper operation.
Manual Transmissions
A manual transmission (Figure 10-11) is designed with two purposes in
mind. One purpose of the transmission is providing the operator with the
option of maneuvering the vehicle in either the forward or reverse direction.
This is a basic requirement of all automotive vehicles. Almost all vehicles
have multiple forward gear ratios, but in most cases, only one ratio is
provided for reverse.
Another purpose of the transmission is to provide the operator with a
selection of gear ratios between engine and wheel so that the vehicle can
operate at the best efficiency under a variety of operating conditions and
loads. If in proper operating condition, a manual transmission should do the
following:
• Be able to increase torque going to the drive wheel for quick acceleration.
• Supply different gear ratios to match different engine load conditions.
• Have a reverse gear for moving the vehicle backwards.
• Provide the operator with an easy means of shifting transmission gears.
• Operate quietly with minimum power loss.
Transmission Construction
Before understanding the operation and power flow through a manual
transmission, you first must understand the construction of the transmission
so you will be able to diagnose and repair damaged transmissions properly.
Transmission Case
The transmission case provides support for the bearings and shafts, as well as
an enclosure for lubricating oil. A manual transmission case is cast from
either iron or aluminum. Because they are lighter in weight, aluminum cases
are preferred. A drain plug and fill plug are provided for servicing. The drain
plug is located on the bottom of the case, whereas the fill plug is located on
the side.
Extension Housing
Also known as the tail shaft, the extension housing bolts to the rear of the
transmission case. It encloses and holds the transmission output shaft and rear
oil seal. A gasket is used to seal the mating surfaces between the transmission
case and the extension housing. On the bottom of the extension housing is a
flange that provides a base for the transmission mount.
Front Bearing Hub
Sometimes called the front bearing cap, the bearing hub covers the front
transmission bearing and acts as a sleeve for the clutch release bearing. It
bolts to the transmission case, and a gasket fits between the front hub and the
case to prevent oil leakage.
Transmission Shafts
A manual transmission has four steel shafts mounted inside the transmission
case. These shafts are the input shaft, the countershaft, the reverse idler shaft,
and the main shaft.
Input Shaft
The input shaft, also known as the clutch shaft, transfers rotation from the
clutch disc to the countershaft gears (Figure 10-11). The outer end of the
shaft is splined to accept the hub of the clutch disc. The inner end has a
machined gear that meshes with the countershaft. A bearing in the
transmission case supports the input shaft in the case. Anytime the clutch disc
turns, the input shaft gear and gears on the countershaft turn.
Countershaft
The countershaft, also known as the cluster gear shaft, holds the countershaft
gear into mesh with the input shaft gear and other gears in the transmission
(Figure 10-11). It is located slightly below and to one side of the clutch shaft.
The countershaft does not turn in the case. It is locked in place by a steel pin,
force fit, or locknuts.
Reverse Idler Shaft
The reverse idler shaft is a short shaft that supports the reverse idler gear
(Figure 10-11). It mounts stationary in the transmission case about halfway
between the countershaft and output shaft, allowing the reverse idler gear to
mesh with gears on both shafts.
Main Shaft
The main shaft, also called the output shaft, holds the output gears and
synchronizers (Figure 10-11). The rear of the shaft extends to the rear of the
extension housing where it connects to the drive shaft to turn the wheels of the
vehicle. Gears on the shaft are free to rotate, but the synchronizers are locked
on the shaft by splines. The synchronizers will only turn when the shaft itself
turns.
Transmission Gears
Transmission gears can be classified into four groups—input gear,
countershaft gears, main shaft gears, and the reverse idler gear. The input
gear turns the countershaft gears; the countershaft gears turn the main shaft
gears and, when engaged, the reverse idler gear.
In low gear, a small gear on the countershaft drives a larger gear on the main
shaft, providing for a high gear ratio for accelerating. Then, in high gear, a
larger countershaft gear turns a small main shaft gear or a gear of equal size,
resulting in a low gear ratio, allowing the vehicle to move faster. When
reverse is engaged, power flows from the countershaft gear, to the reverse
idler gear, and to the engaged main shaft gear. This action reverses main shaft
rotation.
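As a rough illustration of these ratios, the short Python sketch below computes the overall reduction of a two-stage countershaft gearbox from tooth counts. The tooth counts are hypothetical and chosen only for illustration; they are not taken from any particular transmission.

def overall_ratio(input_teeth, counter_driven_teeth, counter_drive_teeth, mainshaft_teeth):
    # Stage 1: the input gear drives the countershaft driven gear.
    # Stage 2: a countershaft gear drives the selected main shaft gear.
    # A ratio greater than 1 means the output turns slower and torque is multiplied.
    stage1 = counter_driven_teeth / input_teeth
    stage2 = mainshaft_teeth / counter_drive_teeth
    return stage1 * stage2

# Low gear: small countershaft gear driving a large main shaft gear -> high ratio.
print(overall_ratio(17, 29, 14, 31))   # roughly 3.8:1
# High gear: larger countershaft gear driving a smaller main shaft gear -> low ratio.
print(overall_ratio(17, 29, 27, 21))   # roughly 1.3:1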
Synchronizers
The synchronizer is a drum or sleeve that slides back and forth on the splined
main shaft by means of the shifting fork. Generally, it has a bronze cone on
each side that engages with a tapered mating cone on the second and high-
speed gears. A transmission synchronizer (Figure 10-12) has two functions:
1. Lock the main shaft gear to the main shaft.
2. Prevent the gear from clashing or grinding during shifting.
When the synchronizer is moved along the main shaft, the cones act as a
clutch. Upon touching the gear that is to be engaged, the main shaft is
accelerated or slowed down until the speeds of the main shaft and gear are
synchronized. This action occurs during partial movement of the shift lever.
Completion of lever movement then slides the sleeve and gear into complete
engagement. This action can be readily understood by remembering that the
hub of the sleeve slides on the splines of the main shaft to engage the cones;
then the sleeve slides on the hub to engage the gears. As the synchronizer is
slid against a gear, the gear is locked to the synchronizer and to the main
shaft. Power can then be sent out of the transmission to the wheels.
Shift Forks, Shift Linkage and Levers
Shift Forks
Shift forks fit around the synchronizer sleeves to transfer movement to the
sleeves from the shift linkage. The shift fork sits in a groove cut into the
synchronizer sleeve. The linkage rod or shifting rail connects the shift fork to
the operator’s shift lever. As the lever moves, the linkage or rail moves the
shift fork and synchronizer sleeve to engage the correct transmission gear.
Shift Linkage and Levers
There are two types of shift linkages used on manual transmissions. They are
the external rod and the internal shift rail. They both perform the same
function. They connect the shift lever with the shift fork mechanism. The
transmission shift lever assembly can be moved to cause movement of the
shift linkage, shift forks, and synchronizers. The shift lever may be either
floor mounted or column mounted, depending upon the manufacturer. Floor-
mounted shift levers are generally used with an internal shift rail linkage,
whereas column-mounted shift levers are generally used with an external rod
linkage.
Transmission Types
Modern manual transmissions are divided into two major categories:
• Constant mesh
• Synchromesh
Constant Mesh Transmission
The constant mesh transmission has two parallel shafts where all forward
gears of the countershaft are in constant mesh with the mainshaft gears,
which are free to rotate. Reverse can either be a sliding collar or a constant
mesh gear. On some earlier versions, first and reverse gears were sliding
gears.
To eliminate the noise developed by the spur-tooth gears used in the sliding
gear transmission, automotive manufacturers developed the constant mesh
transmission. The constant mesh transmission has parallel shafts with gears in
constant mesh. In neutral, the gears are free running but, when shifted, they
are locked to their shafts by sliding collars.
When the shift lever is moved to third, the third and fourth shifter fork moves
the sliding collar toward the third speed gear. This engages the external teeth
of the sliding collar with the internal teeth of the third speed gear. Since the
third speed gear is meshed and rotating with the countershaft, the sliding
collar must also rotate. The sliding collar is splined to the main shaft, and
therefore, the main shaft rotates with the sliding collar. This principle is
carried out when the shift lever moves from one speed to the next.
Synchromesh Transmission
The synchromesh transmission also has gears in constant mesh (Figure 10-
11). However, gears can be selected without clashing or grinding by
synchronizing the speeds of the mating parts before they engage.
The construction of the synchromesh transmission is the same as that of the
constant mesh transmission with the exception that synchronizers have been
added. The synchronizers allow the gears to remain in constant mesh while the
synchronizing clutch mechanisms lock the selected gear to its shaft.
The synchronizer accelerates or slows down the rotation of the shaft and gear
until both are rotating at the same speed and can be locked together without a
gear clash. Since the vehicle is normally standing still when it is shifted into
reverse gear, a synchronizer is not ordinarily used on the reverse gear.
Auxiliary Transmissions
The auxiliary transmission is used to provide additional gear ratios in the
power train (Figure 10-13). This transmission is installed behind the main
transmission, and power flows to it from the main transmission either
directly, when of the integral type, or through a short propeller shaft
(jack shaft) and universal joints.
Support and alignment are provided by a frame cross member. Rubber-
mounting brackets are used to isolate vibration and noise from the chassis. A
lever that extends into the operator's compartment accomplishes shifting.
Like the main transmission, the auxiliary transmission may have either
constant mesh gears or synchronizers to allow for easier shifting.
This transmission, when of the two-speed design, has a low range and direct
drive. Three- and four-speed auxiliary transmissions commonly have at least
one overdrive gear ratio. The overdrive position causes increased speed of the
output shaft in relation to the input shaft. Overdrive is common on heavy-
duty trucks used to carry heavy loads and travel at highway speeds.
The auxiliary transmission can provide two-speed ratios. When it is in the
direct drive position, power flows directly through the transmission and is
controlled only by the main transmission. When the auxiliary transmission is
shifted into low range, vehicle speed is reduced and torque is increased.
When the low range is used with the lowest speed of the main transmission,
the engine drives the wheels very slowly and with greatly multiplied torque.
In this constant mesh auxiliary transmission, the main gear is part of the input
shaft, and it is in constant mesh with the countershaft drive gear. A pilot
bearing aligns the main shaft (output shaft) with the input shaft. The low-speed
main shaft gear runs free on the main shaft when direct drive is being used
and is in constant mesh with the countershaft low-speed gear. A gear type
dog clutch, splined to the main shaft, slides forward or backward when you
shift the auxiliary transmission into high or low gear position.
In high gear, when direct drive from the main transmission is being used, the
dog clutch is forward and makes a direct connection between the input shaft
and the main shaft. When in low gear, the dog clutch is meshed with the low-
speed, main shaft gear and is disengaged from the main drive gear.
Transmission Troubleshooting
Transmissions are designed to last for the life of the vehicle when lubricated
and operated properly. The most common cause of failure results from
shifting when the vehicle is not completely stopped or without waiting long
enough to allow the gears to stop spinning after depressing the clutch pedal.
This slight clashing of gears may not seem significant at the time, but each
time this occurs, small particles of the gears will be ground off and carried
with the lubricant through the transmission. These small metal particles may
become embedded in the soft metal used in synchronizers, reducing the
frictional quality of the clutch. At the same time, these particles damage the
bearings and their races by causing pitting, rough movement, and noise. Soon
transmission failure will result. When this happens, you will have to remove
the transmission and replace either damaged parts or the transmission unit.
As a mechanic, your first step toward repairing a transmission is the
diagnosis of the problem. To begin diagnosis, gather as much information as
possible. Determine in which gears the transmission acts up—first, second,
third, fourth, or in all forward gears when shifting. Does it happen at specific
speeds? This information will assist you in determining which parts are
faulty. Refer to a diagnosis chart in the manufacturer’s service manual when
a problem is difficult to locate. It will be written for the exact type of
transmission.
Many problems that seem to be caused by the transmission are caused by
clutch, linkage, or drive line problems. Keep this in mind before removing
and disassembling a transmission.
Transmission Overhaul
Because of the variations in construction of transmissions, always refer to the
manufacturer’s service manual for proper procedures in the removal,
disassembly, repair, assembly, and installation. The time to carry out these
operations varies from 6 to 8 hours, depending on transmission type and
vehicle manufacturer.
The basic removal procedures are as follows:
1. Unscrew the transmission drain plug and drain the oil.
2. Remove the drive shaft and install a plastic cap over the end of the
transmission shaft.
3. Disconnect the transmission linkage at the transmission.
4. Unbolt and remove the speedometer cable from the extension housing.
5. Remove all electrical wires leading to switches on the transmission.
6. Remove any cross members or supports.
7. Support the transmission and engine with jacks. Operate the jack on the
engine to take the weight off the transmission.
Be careful not to crush the oil pan.
Never let the engine hang suspended by only the front motor mounts.
8. Depending upon what is recommended by the service manual, either
remove the transmission-to-clutch cover bolts or the bolts going into the
engine from the clutch cover.
9. Slide the transmission straight back, holding it in alignment with the
engine. You may have to wiggle the transmission slightly to free it from the
engine.
Once the transmission has been removed from the engine, clean the outside
and place it on your workbench. Teardown procedures will vary from one
transmission to another. Always consult the service manual for the type of
transmission you are working on. If improper disassembly methods are used,
major part damage could possibly result.
Before disassembly, remove the inspection cover. This will allow you to
observe transmission action. Shift the transmission into each gear, and at the
same time rotate the input shaft while inspecting the conditions of the gears
and synchronizers.
The basic disassembly procedures are as follows:
1. Unbolt and remove the rear extension housing. It may be necessary to tap
the housing off with a soft face mallet or bronze hammer.
2. Unbolt and remove the front bearing hub and any snap rings.
3. Carefully pry the input shaft and gear forward far enough to free the main
shaft.
4. Using a brass drift pin, push the reverse idler shaft and countershaft out of
the transmission case.
5. Remove the input shaft and output shaft assemblies. Slide the output shaft
and gears out of the back of the transmission as a unit. Be careful not to
damage any of the gears.
After the transmission is disassembled, clean all the parts thoroughly and
individually. Clean all the parts of hardened oil, lacquer deposits, and dirt.
Pay particular attention to the small holes in the gears and to the shifter ball
bores in the shifter shaft housing. Remove all gasket material using a putty
knife or other suitable tool. Ensure that the metal surfaces are not gouged or
scratched. Also, clean the transmission bearings and blow-dry them using
low-pressure compressed air.
Power Flow
Now that you understand the basic parts and construction of a manual
transmission, we will cover the flow of power through a five-speed
synchromesh transmission (Figure 10-14). In this example, neither first gear
nor reverse gear is synchronized.
Reverse Gear
In passing from neutral to reverse, the reverse idler gear has been moved
rearward, and power from the countershaft gear flows into the reverse idler
gear. The reverse idler gear directs power to the gear on the outside of the
first and second synchronizer. Since the outer sleeve of the first and second
gear synchronizer has been moved to the center position, power will not flow
through first or second gear. The output shaft and synchronizer remain locked
together; rotation is reversed by the countershaft gear and is reversed again on
its way through the reverse idler gear. Since the power flow has changed
three times, an odd number, direction of transmission spin is opposite of that
of the engine (Figure 10-14). The sole function of this gear is to make the
main shaft rotate in the opposite direction to the input shaft; it does not affect
gear ratio.
First Gear
To get the vehicle moving from a standstill, the operator moves the gearshift
lever into first. The input shaft’s main drive gear turns the countershaft gear
in a reverse direction. The countershaft gear turns the low gear in the same
direction as the input shaft. Since the outer sleeve on the first-second gear
synchronizer has been moved rearward, the low gear is locked to the output
shaft (Figure 10-14). The difference in size between the countershaft gear and
first gear results in a gear ratio of approximately 3.5:1.
Second Gear
In second gear, the input shaft’s main drive gear turns the countershaft gear
in a reverse direction. The countershaft gear turns the second gear on the
output shaft to reverse the direction again. This action will result in the
rotation of the output shaft to turn in the same direction as the input shaft.
Since the outer sleeve on the first-second gear synchronizer has been moved
forward, the second gear is locked to the output shaft (Figure 10-14). Gear
ratio is approximately 2.5:1.
Third Gear
In third gear, the input shaft’s main drive gear turns the countershaft gear in a
reverse direction. The countershaft gear turns the third gear on the output
shaft to reverse the direction again. This action will result in the rotation of
the output shaft to turn in the same direction as the input shaft. Since the
outer sleeve on the third-fourth gear synchronizer has been moved rearward,
the third gear is locked to the output shaft (Figure 10-14). Gear ratio is
approximately 1.5:1.
Fourth Gear
In fourth gear, the synchronizer outer sleeve moves forward to engage the
main drive gear. This will lock the input and output shafts together (Figure
10-14). This is direct drive and gives you a 1:1 gear ratio.
Fifth Gear
In fifth gear, the input shaft’s main drive gear turns the countershaft gear in a
reverse direction. The fifth gear synchronizer outer sleeve moves forward.
This engages the fifth gear with the counter gear. Since fifth gear is already in
mesh with a gear on the output shaft, the synchronizer has locked the counter
gear to the output shaft (Figure 10-14). Gear ratio is approximately 0.7:1.
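Taken together, the ratios quoted above determine output shaft speed and torque for a given engine operating point. The Python sketch below is a minimal illustration; the 3000 rpm and 200 N·m figures are hypothetical and gear losses are ignored.

# Hypothetical ratios matching the approximate figures quoted above.
gear_ratios = {1: 3.5, 2: 2.5, 3: 1.5, 4: 1.0, 5: 0.7}

def output_speed_and_torque(engine_rpm, engine_torque_nm, gear):
    # Ideal (lossless) output shaft speed and torque for the selected gear.
    ratio = gear_ratios[gear]
    return engine_rpm / ratio, engine_torque_nm * ratio

for g in gear_ratios:
    rpm, torque = output_speed_and_torque(3000, 200, g)
    print(f"Gear {g}: {rpm:.0f} rpm at the output shaft, {torque:.0f} N·m")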
Automatic Transmissions
The automatic transmission, like the manual transmission, is designed to
match the load requirements of the vehicle to the power and speed range of
the engine (Figure 10-15). The automatic transmission, however, does this
automatically depending on throttle position, vehicle speed, and the position
of the control lever. Automatic transmissions are built in models that have
two, three, or four-forward speeds and in some that are equipped with
overdrive. Operator control is limited to the selection of the gear range by
moving a control lever.
The automatic transmission is coupled to the engine through a torque
converter. The torque converter is used with an automatic transmission
because it does not have to be manually disengaged by the operator each time
the vehicle is stopped. Because the automatic transmission shifts without any
interruption of engine torque application, the cushioning effect of the fluid
coupling within the torque converter is desirable. Because the automatic
transmission shifts gear ratios independent of the operator, it must do so
without the operator releasing the throttle. The automatic transmission does
this by using planetary gearsets whose elements are locked and released in
various combinations that produce the required forward and reverse gear
ratios. The locking of the planetary gearset elements is done through the use
of hydraulically actuated multiple-disc clutches and brake bands. The valve
body controls the hydraulic pressure that actuates these locking devices. The
valve body can be thought of as a hydraulic computer that receives signals
that indicate vehicle speed, throttle position, and gearset lever position. Based
on this information, the valve body sends hydraulic pressure to the correct
locking devices.
The parts of the automatic transmission are as follows:
• Torque converter—fluid coupling that connects and disconnects the engine
and transmission.
• Input shaft—transfers power from the torque converter to internal drive
members and gearsets.
• Oil pump—produces pressure to operate hydraulic components in the
transmission.
• Valve body—operated by shift lever and sensors; controls oil flow to
pistons and servos.
• Pistons and servos—actuate the bands and clutches.
• Bands and clutches—apply clamping or driving pressure on different parts
of gearsets to operate them.
• Planetary gears—provide different gear ratios and reverse gear.
• Output shaft—transfers engine torque from the gearsets to the drive shaft
and rear wheels.
Torque Converters
The torque converter is a fluid clutch that performs the same basic function as
a manual transmission dry friction clutch (Figure 10-16). It provides a means
of uncoupling the engine for stopping the vehicle in gear. It also provides a
means of coupling the engine for acceleration.
A torque converter has four basic parts:
1. Outer housing—normally made of two pieces of steel welded together in
a doughnut shape, housing the impeller, stator, and turbine. The housing is
filled with transmission fluid.
2. Impeller—driving member that produces oil movement inside the
converter whenever the engine is running. The impeller is also called the
converter pump.
3. Turbine—a driven fan splined to the input shaft of the automatic
transmission. Placed in front of the stator and impeller in the housing. The
turbine is not fastened to the impeller but is free to turn independently. Oil is
the only connection between the two.
4. Stator—designed to improve oil circulation inside the torque converter.
Increases efficiency and torque by causing the oil to swirl around the inside
of the housing.
The primary action of the torque converter results from the action of the
impeller passing oil at an angle into the blades of the turbine. The oil pushes
against the faces of the turbine vanes, causing the turbine to rotate in the
same direction as the impeller (Figure 10-17). With the engine idling, the
impeller spins slowly. Only a small amount of oil is thrown into the stator
and turbine. Not enough force is developed inside the torque converter to spin
the turbine. The vehicle remains stationary with the transmission in gear.
During acceleration, the engine crankshaft, the converter housing, and the
impeller begin to move faster. More oil is thrown out by centrifugal force,
turning the turbine. As a result, the transmission input shaft and vehicle start
to move, but with some slippage. At cruising speeds, the impeller and turbine
spin at almost the same speed with very little slippage. When the impeller is
spun fast enough, centrifugal force throws oil out hard enough to almost lock
the impeller and turbine. After the oil has imparted its force to the turbine, the
oil follows the contour of the turbine shell and blades so that it leaves the
center section of the turbine spinning counter clockwise.
Because the turbine has absorbed the force required to reverse the direction of
the clockwise spinning of the oil, it now has greater force than is being
delivered by the engine. The process of multiplying engine torque has begun.
Torque multiplication refers to the ability of a torque converter to increase the
amount of engine torque applied to the transmission input shaft. Torque
multiplication occurs when the impeller is spinning faster than the turbine.
For example, if the engine is accelerated quickly, the engine and impeller rpm
might increase rapidly while the turbine is almost stationary. This is known
as stall speed. Stall speed of a torque converter occurs when the impeller is at
maximum speed without rotation of the turbine. This condition causes the
transmission fluid to be thrown off the stator vanes at tremendous speeds.
The greatest torque multiplication occurs at stall speed. When the turbine
speed nears impeller speed, torque multiplication drops off. Torque is
increased in the converter by sacrificing motion. The turbine spins slower
than the impeller during torque multiplication.
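A rough numerical picture of this behavior can be sketched by assuming the multiplication factor falls from its stall value to 1.0 as turbine speed approaches impeller speed. The Python sketch below uses a hypothetical 2.0:1 stall ratio and a simple linear fall-off; real converters are characterized by measured test curves rather than this formula.

def turbine_torque(engine_torque_nm, turbine_rpm, impeller_rpm, stall_ratio=2.0):
    # Multiplication is assumed to fall linearly from stall_ratio (turbine stopped)
    # to 1.0 at the coupling point, where turbine and impeller speeds are equal.
    speed_ratio = min(turbine_rpm / impeller_rpm, 1.0)
    multiplication = stall_ratio - (stall_ratio - 1.0) * speed_ratio
    return engine_torque_nm * multiplication

print(turbine_torque(250, 0, 2200))     # at stall: 250 x 2.0 = 500 N·m
print(turbine_torque(250, 1980, 2200))  # near coupling: multiplication is about 1.1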
If the counter clockwise oil were allowed to continue to the center section of
the impeller, the oil would strike the blades of the pump in a direction that
would hinder its rotation and cancel any gains in torque. To prevent this, a
stator assembly is added.
The stator is located between the pump and the turbine and is mounted on a
one-way clutch that allows it to rotate clockwise but not counter clockwise
(Figure 10-16). The purpose of the stator is to redirect the oil returning from
the turbine and change its rotation back to that of the impeller. Stator action is
only needed when the impeller and turbine are turning at different speeds.
The one-way clutch locks the stator when the impeller is turning faster than
the turbine. This causes the stator to route oil flow over the impeller vanes
properly. Then, when turbine speed almost equals impeller speed, the stator
can freewheel on its shaft so as not to obstruct flow.
Even at normal highway speeds, there is a certain amount of slippage in the
torque converter. Another type of torque converter that is common on
modern vehicles is the lockup torque converter. The lockup torque converter
provides increased fuel economy and increased transmission life through the
elimination of heat caused by torque converter slippage. A typical lockup
mechanism consists of a hydraulic piston, torsion springs, and clutch friction
material. In lower gears, the converter clutch is released. The torque
converter operates normally, allowing slippage and torque multiplication.
However, when shifted into high or direct drive, transmission fluid is
channeled to the converter piston. The converter piston pushes the friction
discs together, locking the turbine and impeller. The crankshaft is able to
drive the transmission input shaft directly, without slippage. The torsion
springs help dampen engine power pulses entering the drive train.
Planetary Gearsets
A planetary gearset consists of three members: the sun gear, the ring gear, and
the planetary carrier, which holds the planetary gears in proper relation with
the sun and ring gears (Figure 10-18). The planetary gears are free to rotate on
their own axis while they "walk" around the sun gear or inside the ring gear.
By holding or releasing the components of a planetary gearset, it is possible
to do the following:
• Reduce output speed and increase torque (gear reduction).
• Increase output speed while reducing torque (overdrive).
• Reverse output direction (reverse gear).
• Serve as a solid unit to transfer power (one to one ratio).
• Freewheel to stop power flow (park or neutral).
Figure 10-19 shows the simplest application of planetary gears in a
transmission. With the application shown, two forward speeds and neutral are
possible. High gear or direct drive is shown. The clutch is holding the planet
carrier to the input shaft, causing the carrier and sun gear to rotate as a single
unit. With the clutch released, all gears are free to rotate and no power is
transmitted to the output shaft. In neutral, the planetary carrier remains
stationary while the pinion gears rotate on their axis and turn the ring gear.
Should the brake be engaged on the ring gear, the sun gear causes the
planetary gears to walk around the inside of the ring gear and forces the
planet carrier to rotate in the same direction as the sun gear, but at a slower
speed (low gear). To provide additional speed ranges or a reverse, you must
add other planetary gearsets to this transmission.
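For the low-gear case just described (sun gear driving, ring gear held by the brake, planet carrier as the output), the reduction follows directly from the tooth counts. A minimal Python sketch, with hypothetical tooth counts:

def planetary_low_ratio(sun_teeth, ring_teeth):
    # With the ring gear held, carrier output ratio = 1 + ring teeth / sun teeth.
    return 1 + ring_teeth / sun_teeth

print(planetary_low_ratio(30, 72))   # 1 + 72/30 = 3.4:1 reduction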
A compound planetary gearset combines two planetary units into one housing
or ring gear. It may have two sun gears or a long sun gear to operate two sets
of planetary gears. A compound planetary gearset is used to provide more
forward gear ratios than a simple planetary gearset.
Clutches and Bands
Automatic transmission clutches and bands are friction devices that drive or
lock planetary gearset members. They are used to cause the gearset to
transfer power.
Multiple-Disc Clutch
The multiple-disc clutch is used to transmit torque by locking elements of the
planetary gearsets to rotating members within the transmission. In some
cases, the multiple-disc clutch is also used to lock a planetary gearset element
to the transmission case so it can act as a reactionary member. The multiple-
disc clutch is made up of the following components (Figure 10-20):
• Discs and plates—The active components of the multiple-disc clutch are
the discs and the plates. The discs are made of steel and are faced with a
friction material. They have teeth cut into their inner circumference to key
them positively to the clutch hub. The plates are made of steel with no lining.
They have teeth cut into their outer circumference to key them positively with
the inside of a clutch drum or to the inside of the transmission case. Because
the discs and plates are alternately stacked, they are locked together or
released by simply squeezing them.
• Clutch drum and hub—The clutch drum holds the stack of discs and
plates and is attached to the planetary gearset element that is being driven.
The clutch hub attaches to the driving member and fits inside the clutch discs
and plates.
• Pressure plate—The pressure plates are thick clutch plates that are placed
on either end of the stack. Their purpose is to distribute the application
pressure equally on the surfaces of the clutch discs and plates.
• Clutch piston—The clutch piston uses hydraulic pressure to apply the
clutch. Hydraulic pressure is supplied to the clutch piston through the center
of the rotating member.
• Clutch piston seals—The clutch piston seals serve to prevent the leakage
of hydraulic pressure around the inner and outer circumferences of the clutch
piston.
• Clutch springs—The clutch springs ensure rapid release of the clutch when
hydraulic pressure to the clutch piston is released. The clutch springs may be
in the form of several coil springs equally spaced around the piston or one
large coil spring that fits in the center of the clutch drum. Some models use a
diaphragm-type clutch spring.
The operation of the multiple-disc clutch is as follows (Figure 10-21):
Released—When the clutch is released, there is no hydraulic pressure on the
clutch piston, and the clutch discs and plates are free to rotate within each
other. The result is that the clutch hub rotates freely and does not drive the
clutch drum.
Figure 10-21 — Multiple-disc clutch operation.
Applied—When the clutch is applied, hydraulic pressure is applied to the
clutch piston that in turn applies pressure to the clutch discs and plates,
causing them to lock together. The result is that the clutch hub drives the
clutch drum through the clutch.
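The torque such a clutch pack can transmit is often estimated as clamping force times the friction coefficient, the number of friction surfaces, and the mean radius of the facings. The Python sketch below shows that calculation; the pressure, dimensions, and friction coefficient are hypothetical values chosen only for illustration.

import math

def clutch_torque_capacity(apply_pressure_pa, piston_area_m2, friction_coeff,
                           friction_surfaces, mean_radius_m):
    # Torque = clamping force x friction coefficient x friction surfaces x mean radius.
    clamping_force = apply_pressure_pa * piston_area_m2
    return clamping_force * friction_coeff * friction_surfaces * mean_radius_m

piston_area = math.pi * (0.065**2 - 0.040**2)   # annular piston, 130/80 mm diameters
print(clutch_torque_capacity(800e3, piston_area, 0.12, 8, 0.055))   # roughly 350 N·m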
Overrunning Clutch
An overrunning clutch is used in automatic transmissions to lock a planetary
gearset to the transmission case so that it can act as a reactionary member.
The overrunning clutch for the planetary gears is similar to the one in a
torque converter stator or an electric starting motor drive gear. A planetary
gearset overrunning clutch consists of an inner race, a set of springs, rollers,
and an outer race. Operation of the overrunning clutch is very simple to
understand. When driven in one direction, the rollers wedge between ramps on
the inner and outer races, locking the two races together. This action can be used to
stop movement of the planetary member, for example. When turned in the
other direction, rollers walk off the ramps, and the races are free to turn
independently.
Brake Band
The brake band is used to lock a planetary gearset element to the transmission case
so that the element can act as a reactionary member. The brake band is made
up of the following elements:
• Band—The brake band is a circular piece of spring steel that is rectangular
in cross section. Its inside circumference is lined with a friction material. The
brake band has bosses on each end so that it can be held and compressed.
• Drum—The drum fits inside the band and attaches to the planetary gearset
element that is to be locked by the band. Its outer surface is machined
smoothly to interact with the friction surface of the brake band. When the
open ends of the band are pulled together, the rotation of the drum stops.
• Anchor—The anchor firmly attaches one end of the brake band to the
transmission case. A provision for adjusting the clearance between the band
and the drum is usually provided on the anchor.
• Servo—The servo uses hydraulic pressure to squeeze the band around the
drum. The servo piston is acted on by hydraulic pressure from the valve body
that is fed through an internal passage through the case. The servo piston has
a seal around it to prevent leakage of hydraulic pressure, and is spring loaded
to allow quick release of the band. Some servos use hydraulic pressure on
both sides of their pistons so that they use hydraulic pressure for both the
release and the application of the band.
The operation of the brake band is as follows (Figure 10-23):
Released—When the brake band is released, there is no hydraulic pressure
applied to the servo, and the drum is free to rotate within the band.
Applied—When the brake band is applied, hydraulic pressure is applied to
the servo that in turn tightens the band around the drum. The result is that the
drum is locked in a stationary position, causing an output change from the
planetary gearset.
In the applied circuit of a clutch or band, an accumulator is used to cushion
initial application. It temporarily absorbs some of the hydraulic pressure to
cause slower movement of the applied piston.
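The apply force a servo can develop is simply line pressure times piston area, less the return-spring preload that must first be overcome. A short Python sketch with hypothetical values:

def servo_apply_force(line_pressure_kpa, piston_area_cm2, return_spring_n):
    # Hydraulic force on the piston, converted to newtons, minus the return spring.
    hydraulic_force = line_pressure_kpa * 1000 * piston_area_cm2 * 1e-4
    return max(hydraulic_force - return_spring_n, 0.0)

print(servo_apply_force(700, 20, 150))   # about 1250 N pulling the band onto the drum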
Hydraulic System of an Automatic
Transmission
The hydraulic system of an automatic transmission serves four basic
purposes:
1. Actuates clutches and brake bands by hydraulic pressure from the
hydraulic slave circuits.
2. Controls the shifting pattern of the transmission. This is done by switching
hydraulic pressure to programmed combinations of clutches and brake bands
based on vehicle speed and engine load.
3. Circulates the transmission fluid through a remote cooler to remove excess
heat that is generated in the transmission and torque converter.
4. Provides a constant fresh supply of oil to all critical wearing surfaces of the
transmission.
The hydraulic system for an automatic transmission typically consists of a
pump, pressure regulator, manual valve, vacuum modulator valve, governor
valve, shift valves, kickdown valve, and a valve body.
Pump
The typical hydraulic pump is an internal-external rotor or gear-type pump.
Located in the front of the transmission case, it is keyed to the torque
converter hub so that it is driven by the engine. As the torque converter spins
the oil pump, transmission fluid is drawn into the pump from the transmission
pan. The pump compresses the oil and forces it to the pressure regulator. The
pump has several basic functions:
• Produces pressure to operate the clutches, the bands, and the gearsets.
• Lubricates the moving parts in the transmission.
• Keeps the torque converter filled with transmission fluid for proper
operation.
• Circulates transmission fluid through the transmission and cooling tank
(radiator) to transfer heat.
• Operates hydraulic valves in the transmission.
Pressure Regulator
The pressure regulator limits the maximum amount of oil pressure developed
by the oil pump. It is a spring-loaded valve that routes excess pump pressure
out of the hydraulic system, assuring proper transmission operation.
Manual Valve
A manual valve located in the valve body is operated by the driver through
the shift linkage (Figure 10-24). This valve allows the operator to select park,
neutral, reverse, or different drive ranges. When the shift lever is moved, the
shift linkage moves the manual valve. As a result, the valve routes hydraulic
fluid throughout the transmission to the correct places. When the operator
selects overdrive, drive, or second, the transmission takes over, shifting
automatically to meet driving conditions. When the selector is placed in low
and reverse, the transmission is locked into the selected gear.
Vacuum Modulator Valve
The vacuum modulator valve is a diaphragm device that uses engine
manifold vacuum to indicate engine load to the shift valve (Figure 10-25). As
engine vacuum (load) rises and falls, it moves the diaphragm inside the
modulator. This in turn moves the rod and hydraulic valve to change throttle
control pressure in the transmission. In this way, the vacuum modulator can
match transmission shift points to engine loads.
Governor Valve
The governor valve senses vehicle speed (transmission output shaft speed) to
help control gear shifting (Figure 10-26). The vacuum modulator and
governor work together to determine the shift points. The governor gear is
meshed with a gear on the transmission output shaft. Whenever the vehicle
and output shaft are moving, the centrifugal weights rotate. When the output
shaft and weights are spinning slowly, the weights are held in by the
governor springs, causing low-pressure output, and the transmission remains
in low gear. As vehicle speed increases, the weights are thrown out
further and governor pressure increases, moving the shift valve and causing
the transmission to shift into higher gear.
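The force on each governor weight grows with the square of output shaft speed, which is why governor pressure rises sharply with road speed. A minimal Python sketch with a hypothetical weight mass and radius:

import math

def governor_weight_force(output_shaft_rpm, weight_mass_kg, radius_m):
    # Centrifugal force F = m * w^2 * r, so the force (and the governor pressure
    # it produces) rises with the square of output shaft speed.
    omega = output_shaft_rpm * 2 * math.pi / 60.0
    return weight_mass_kg * omega**2 * radius_m

print(governor_weight_force(500, 0.05, 0.03))    # low road speed
print(governor_weight_force(2500, 0.05, 0.03))   # 5x the speed gives 25x the force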
Shift Valves
The shift valves are simple balance type spool valves that select between low
and high gear when the manual valve is in drive. Using control pressure (oil
pressure from the regulator, governor, vacuum modulator, and manual
valves), they operate the bands, servos, and gearsets. Oil pressure from the
other transmission valves acts on each end of the shift valve. In this way, the
shift valve is sensitive to engine load (vacuum modulator valve oil pressure),
engine speed (governor valve oil pressure), and gearshift position (manual
valve oil pressure). These valves move according to the forces and keep the
transmission shifted into the correct gear ratio for the driving conditions.
Kickdown Valve
The kickdown valve causes the transmission to shift into a lower gear during
fast acceleration. A rod or cable links the carburetor or fuel injection throttle
body to a lever on the transmission. When the operator depresses the throttle,
the lever moves the kickdown valve. This action causes hydraulic pressure to
override normal shift control pressure and the transmission downshifts.
Valve Body
The valve body is a very complicated hydraulic system (Figure 10-27). It
contains hydraulic valves used in an automatic transmission, such as the
pressure regulator, shift valves, and manual valves. The valve body bolts to
the bottom of the transmission case and is housed in the transmission pan. A
filter or screen is attached to the bottom of the valve body. Passages in the
valve body route fluid from the pump to the valves and then into the
transmission case. Passages in the transmission case carry fluid to other
hydraulic components.
Automatic Transmission Service
Automatic transmission service can be easily divided into the following
areas: preventive maintenance, troubleshooting, and major overhaul. Before
you perform maintenance or repair on an automatic transmission, consult the
maintenance manual for instructions and proper specifications. As a floor
mechanic, however, your area of greatest concern is preventive maintenance.
Preventive maintenance includes the following:
• Checking the transmission fluid daily.
• Adjusting the shifting and kickdown linkages.
• Adjusting lockup bands.
• Changing the transmission fluid and filter at recommended service intervals.
Checking the Fluid
The operator is responsible for first echelon (operator’s) maintenance. The
operator should be trained not only to check for the proper fluid level
but also to look for discoloration of the fluid and debris on the
dipstick. Fluid levels in automatic transmissions are almost always checked at
operating temperature. This is important to know since the level of the fluid
may vary as much as three quarters of an inch between hot and cold. The
fluid should be either reddish or clear. The color varies due to the type of
fluid. (For example, construction equipment using OE-10 will be clear). A
burnt smell or brown coloration of the fluid is a sign of overheated oil from
extra heavy use or slipping bands or clutch packs. The vehicle should be sent
to the shop for further inspection.
CAUTION
Not all transmission fluids are the same. Before you add fluid, check the
manufacturer’s recommendations first. The use of the wrong fluid will lead to
early internal parts failure and costly overhaul. Overfilling the transmission
can result in the fluid foaming and the fluid being driven out through the vent
tube. The air that is trapped in the fluid is drawn into the hydraulic system by
the pump and distributed to all parts of the transmission. This situation will
cause air to be in the transmission in place of fluid and in turn cause slow
application and burning of clutch plates and facings. Slippage occurs, heat
results, and failure of the transmission follows.
Another possible, but remote, problem is water, indicated by the fluid having
a "milky" appearance. A damaged fluid cooling tube in the radiator
(automotive) or a damaged oil cooler (construction) could be the problem.
The remedy is simple. Pressure-test the suspected components and perform
any required repairs. After repairs have been performed, flush and refill the
transmission with clean, fresh fluid.
Linkage and Band Adjustment
The types of linkages found on an automatic transmission are gearshift
selection and throttle kickdown. The system can be a cable or a series of rods
and levers. These systems do not normally present a problem, and preventive
maintenance usually involves only a visual inspection and lubrication of the
pivot points of linkages or the cable. When adjusting these linkages, you
should strictly adhere to the manufacturer’s specifications.
If an automatic transmission is being used in severe service, the manufacturer
may suggest periodic band adjustment. Always adjust lockup bands to the
manufacturer’s specifications. Adjust the bands by loosening the locknut and
tightening down the adjusting screw to a specified value. Back off the band
adjusting screw by a specified number of turns and tighten down the
locking nut. Not all bands are adjustable. Always check the manufacturer’s
service manual before any servicing of the transmission.
Fluid Replacement
Perform fluid replacement according to the manufacturer’s recommendations.
These recommendations vary considerably for different makes and models.
Before you change automatic transmission fluid, always read the service
manual first. Service intervals depend on the type of use the vehicle receives.
In the NCF, because of the operating environment, more than a few of the
vehicles are subjected to severe service. Severe service includes the
following: hot and dusty conditions, constant stop and-go driving (taxi
service), trailer towing, constant heavy hauling, and around-the-clock
operations (contingency). Any CESE operating in these conditions should
have its automatic transmission fluid and filter changed on a regular
schedule, based on the manufacturer's specifications for severe service.
Ensure the vehicle is on level ground or a lift and let the oil drain into a
proper catchment device. The draining of the transmission can be
accomplished in one of the following three ways:
1. Removing the drain plug.
2. Loosening the dipstick tube.
3. Removing the oil pan.
CAUTION
Oil drained from automatic transmissions contains heavy metals and is
considered hazardous waste and should be disposed of according to local
instructions. Once the oil is drained, remove the pan completely for cleaning,
paying close attention to any debris in the bottom of the pan. The presence
of a high amount of metal particles may indicate serious internal problems.
Clean the pan, and set it aside. All automatic transmissions have a filter or
screen attached to the valve body. The screen is cleanable, whereas the filter
is a disposable type and should always be replaced when removed. These are
retained in different ways: retaining screws, metal retaining clamps, or O-
rings made of neoprene. Clean the screen with solvent and use low-pressure
air to blow-dry it. Do not use rags to wipe the screen dry, as they tend to
leave lint behind that will be ingested into the hydraulic system of the
transmission. If the screen is damaged or is abnormally hard to clean,
replace it. Draining the oil from the pan of the transmission does not remove
all of the oil—draining the oil from the torque converter completes the
process. To do this, remove the torque converter cover and remove the drain
plug, if so equipped. For a torque converter without a drain plug, special
draining instructions may be found in the manufacturer’s service manual.
Before performing this operation, clear it with your shop supervisor.
Refilling the Transmission
Reinstall the transmission oil pan, the oil plug,
and the fill tube. Fill the transmission with the fluid prescribed by the
manufacturer to the proper level. With the brakes applied, start the engine
and let it idle for a couple of minutes. Move the gear selector through all
gear ranges several times, allowing the fluid to flow through the entire
hydraulic system to release any trapped air. Return the selector lever to park
or neutral and recheck the fluid level. Bring the fluid to the proper level. Run
the vehicle until operating temperature is reached, checking for leaks. Also,
recheck the fluid and adjust the level as necessary.
Electronic Systems of an Automatic
Transmission
Speed Sensors
The vehicle speed sensor is a device that is mounted on the output shaft of the
transmission or transaxle. This device tells the electronic control module
(ECM) how fast the vehicle is moving. It consists of a wheel with teeth
around it and a magnetic pickup. The wheel can either be attached to the
output shaft or be gear driven off the output shaft. As the wheel is turned, it
induces an alternating current (AC) in the magnetic pickup. The ECM uses
this information to calculate how fast the vehicle is moving.
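As an illustration of how pulse frequency translates to road speed, the Python sketch below divides pulses per second by the reluctor tooth count to get output shaft revolutions per second, converts that to wheel speed through the final drive ratio, and multiplies by the tire's rolling circumference. Every parameter value shown is hypothetical.

def vehicle_speed_kmh(pulse_hz, reluctor_teeth, final_drive_ratio, tire_circumference_m):
    # The toothed wheel turns with the output shaft, so pulses per second divided by
    # the tooth count gives output shaft rev/s; the final drive ratio then gives
    # wheel rev/s, and the tire circumference gives road speed.
    output_shaft_rps = pulse_hz / reluctor_teeth
    wheel_rps = output_shaft_rps / final_drive_ratio
    return wheel_rps * tire_circumference_m * 3.6

print(vehicle_speed_kmh(pulse_hz=1600, reluctor_teeth=40, final_drive_ratio=3.73,
                        tire_circumference_m=1.95))   # roughly 75 km/h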
Shift Solenoids
Electronic transmissions utilize shift solenoids to control when the
transmission will shift from one gear to the next. The solenoid affects
hydraulic pressure on one side of a shift valve, causing it to move. In some
transmissions this solenoid is connected directly to a check ball that acts as a
shift valve. Energizing the shift solenoid causes the check ball to move and
either open or close pressure passages leading to the holding members. The
ECM receives sensor inputs and commands each shift solenoid to operate or
hold. Even if vehicle speed is appropriate for an upshift, the throttle position
sensor may tell the ECM that the throttle is wide open under a heavy load; in
that case, the ECM may hold the shift solenoid from operating until the
throttle position changes.
Transaxles
A transaxle is a transmission and differential combination in a single
assembly. Transaxles are used in front-wheel drive vehicles. A transaxle
allows the wheels next to the engine to propel the vehicle. Short drive axles
are used to connect the transaxle output to the hubs and drive wheels. Vehicle
manufacturers claim that a transaxle and front-wheel drive vehicle has several
advantages over a vehicle with rear-wheel drive. A few of these advantages
are the following:
• Improved efficiency and reduced drive train weight
• Improved traction on slippery surfaces because of increased
weight on the drive wheels
• Increased passenger compartment space (no hump in
floorboard for rear drive shaft)
• Less unsprung weight (weight that must move with
suspension action), thereby providing a smoother ride
• Quieter operation since engine and drive train noise is
centrally located in the engine compartment
• Improved safety because of the increased mass in front of the
passengers
Most transaxles are designed so that the engine can be transverse (sideways)
mounted in the engine compartment. The transaxle bolts to the rear of the
engine. This produces a very compact unit. Engine torque enters the transaxle
transmission. The transmission transfers power to the differential. Then the
differential turns the drive axles that rotate the front wheels. Both manual and
automatic transaxles are available. A manual transaxle uses a friction clutch and
a standard transmission-type gearbox. An automatic transaxle uses a torque
converter and a hydraulic system to control gear engagement.
Manual Transaxles
A manual transaxle uses a standard clutch and transmission (Figure 10-28). A
foot-operated clutch engages and disengages the engine and transaxle. A
hand-operated shift lever allows the operator to change gear ratios. The basic
parts relating to a manual transaxle are the following:
• Transaxle input shaft—main shaft splined to the clutch disc, turns the gear
in the transaxle.
• Transaxle input gears—either freewheeling or fixed gears on the input shaft,
mesh with the output gears.
• Transaxle output gears—either fixed or freewheeling gears driven by the
input gears.
• Transaxle output shaft—transfers torque to the ring gear, pinion gears, and
differential.
• Transaxle synchronizers—splined hub assemblies that can lock
freewheeling gears to their shafts for engagement.
• Transaxle differential—transfers gearbox torque to the driving axle and
allows the axles to turn at different speeds.
• Transaxle case—aluminum housing that encloses and supports parts of the
transaxle.
The manual transaxle can be broken up into two separate units—a manual
transaxle transmission and a transaxle differential. A manual transaxle
transmission provides several (usually four or five) forward gears and
reverse. You will find that the names of shafts, gears, and other parts in the
transaxle vary, depending on the location and function of the components.
For example, the input shaft may also be called the main shaft, and the output
shaft is called the pinion shaft because it drives the ring and pinion gear in the
differential. The output, or pinion, shaft has a gear or sprocket for driving the
differential ring gear.
The clutch used on the manual transaxle transmission is almost identical to
the manual transmission clutch for rear-wheel drive vehicles. It uses a friction
disc and spring-loaded pressure plate bolted to the flywheel. Some transaxles
use a conventional clutch release mechanism (release bearing and fork);
others use a long pushrod passing through the input shaft. The transaxle
differential, like a rear axle differential, transfers power to the axles and
wheels while allowing one wheel to turn at a different speed than the other. A
small pinion gear on the gearbox output shaft or countershaft turns the
differential ring gear. The ring gear is fastened to the differential case. The
case holds the spider gears (pinion gears and axle side gears) and a pinion
shaft. The axle shafts are splined to the differential side gears.
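The differential's job of letting the two drive wheels turn at different speeds while the case turns at their average can be shown in a couple of lines of Python; the rpm figures are hypothetical.

def wheel_speeds_in_turn(carrier_rpm, speed_difference_rpm):
    # The inner wheel slows by half the difference and the outer wheel speeds up
    # by the same amount, so the differential case still turns at their average.
    inner = carrier_rpm - speed_difference_rpm / 2
    outer = carrier_rpm + speed_difference_rpm / 2
    return inner, outer

print(wheel_speeds_in_turn(800, 60))   # (770.0, 830.0): the average is still 800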
VEHICLE DYNAMICS
Suspensions, Functions, and Main
Components
Suspension Systems
To many focusing on ride comfort, it may seem like the suspension system is
merely a set of springs and shock absorbers which connect the wheels to the
vehicle body. However, this is a very simplistic viewpoint of the suspension
system. A vehicle suspension system provides a smooth ride over rough
roads while ensuring that the wheels remain in contact with the ground and
vehicle roll is minimized. The suspension system contains three major parts:
a structure that supports the vehicle’s weight and determines suspension
geometry, a spring that converts kinetic energy to potential energy or vice
versa, and a shock absorber that is a mechanical device designed to dissipate
kinetic energy.
An automotive suspension connects a vehicle’s wheels to its body while
supporting the vehicle’s weight. It allows for the relative motion between
wheel and vehicle body; theoretically, a suspension system should reduce a
wheel’s degree of freedom (DOF) from 6 to 2 on the rear axle and to 3 on the
front axle even though the suspension system must support propulsion,
steering, brakes, and their associated forces. The relative motions of the
wheels are their vertical movement, rotational movement about the lateral axis,
and rotational movement about the vertical axis due to steer angle.
Main Components of Suspension Systems
A vehicle suspension system is made up of four main components (mechanism,
spring, shock absorber, and bushings), as shown in Figure 1.1.
• Mechanism: The suspension mechanism might contain one or several
arms that connect a wheel to the vehicle body. They transfer all forces and
moments in different directions between the vehicle body and the ground.
The mechanism determines some of the most important characteristics of a
suspension system. It determines the suspension geometry and wheel angles
and their relative motions. Variation in wheel angles during suspension travel
causes a change in tire forces, which affect the vehicle’s road holding and
handling. The main weight of a suspension system arises from its mechanism.
Using heavy materials in its construction decreases the ride quality, whereas
light materials, although they improve ride quality, are more expensive.
• Spring: The spring is usually a coiled wire or a set of metal strips with
elastic properties. It supports the vehicle’s weight and makes the suspension
tolerable for passengers. The spring is the most important component to study
in order to understand suspension behavior, but a conflict arises in choosing
it: high-stiffness springs give the vehicle good road holding and handling
but noticeably decrease ride comfort. This limits the range of appropriate
spring stiffness, and the spring’s weight and size may make the compromise
even harder to achieve.
• Shock absorber: The shock absorber is a mechanical or hydraulic
device to dampen impulses. A high damping shock absorber compromises the
vehicle’s ride quality in order to immediately dampen impulses to improve
handling and road holding.
• Bushings: The bushings prevent direct contact between two metal parts in
order to isolate noise and minimize vibration. Soft materials such as rubber
are used in bushings for isolation. In effect, they are a type of vibration
isolator used to connect the various moving components to the vehicle body or
suspension frame. Many types of bushing exist, and they are classified by the
number of DOF they allow between the two connected parts. Revolute joints
are the most common type of bushing; they are annular cylinders that allow
rotational relative motion about one axis, whereas ball joints allow
rotational relative motion in all directions. Bushings are among the most
expensive parts in a suspension system.
Desired Features of Suspension Systems
A suspension system should satisfy certain requirements for use in vehicles.
The main desired features are as follows.
• Independency: It is desirable to have the movement of a wheel on one
side of the axle to be independent from the movement of the wheel on the
other side of the axle. Figure 1.2 (top) shows a vehicle with its left wheel
going over a bump. At higher speeds, the wheel can negotiate the bump
without disturbing the other wheel. This is only possible when each wheel
has an independent suspension. Independency of wheel movement improves
a vehicle’s ride comfort, road holding, and handling.
• Good camber control: The camber angle is the tilt of the wheel about its
longitudinal axis (this will be discussed comprehensively in Section 4.1.2).
A negative camber is desired since it results in improved handling; however,
the convex shape of roads tends toward a positive camber to reduce tire wear.
Due to road bumps and body roll, the camber will ultimately change. Using a
well-designed suspension geometry, we can control the camber angle
(Figures 1.3 and 1.4).
• Good body roll control: Each suspension system has a roll center. The
hypothetical line that connects the front and rear suspensions’ roll centers is
called the roll axis. The vehicle’s body rolls about this line during cornering
maneuvers. It is necessary to analyze the roll axis because of its effect on the
body roll and lateral vehicle behavior. The design of the suspension geometry
should account for the best location of the roll axis in order to optimize
vehicle body roll motion.
• Good space efficiency: It comes as no surprise that the space utilized
by a suspension system may create difficulties for the installation of other
components of the vehicle. Under the hood, the suspension system should
leave enough room for an engine and other components. Also, the suspension
of the rear axle should not interfere with the vehicle trunk and, instead,
occupy only its internal space (Figure 1.5).
• Good structural efficiency: The suspension system should be able to
handle the vehicle’s weight and all applied forces and moments in the contact
area between the wheel and the road. The suspension mechanism must feed
loads into the body in a well distributed manner and prevent the transfer of
concentrated forces onto the vehicle’s body (Figure 1.6).

• Good isolation: Improving ride quality and isolating road roughness is
one of the most important tasks of a suspension system.
• Low weight: Due to road irregularities, the kinetic energy of a
suspension system is proportional to its mass. Higher kinetic energy
results in stronger shocks transmitted to the vehicle body, which clearly
decreases ride quality. To minimize this negative effect, we should
minimize the suspension mass by using optimized designs and/or lightweight
materials. Lightweight materials may increase the cost, and therefore a
balanced design is needed for any suspension system.
• Long life: No one enjoys having to repair their car frequently, so the
suspension must be as durable as any other part of a car. A durable system is
able to resist wear, pressure, or damage, all of which play important roles in
the success of a product.
• Low cost: While defining a low enough cost is a subjective matter, the
suspension as a vehicle sub-system should be affordable. High performance
suspension systems are more expensive, and mainly used in premium
vehicles. Using a high number of bushings and lightweight materials
certainly improves the ride quality, noise isolation, and performance of the
system, but they also increase the cost of the product.
• Others: Other suspension features may include anti-dive and anti-squat
behavior. When a vehicle is braking, a dive occurs: the front of the vehicle
dips and the tail rises. A similar but opposite action, a squat, happens
during acceleration. This rotational movement is slight, but since the human
body is very sensitive to pitch motion, mitigating this movement in the
passenger cabin allows for greater ride quality.
Functions and Basic Principles of Suspension Systems
Suspension systems have evolved significantly since the earliest adaptations
from horse-drawn buggies to self powered automobiles, but the basic
requirements remain the same. Just as in the horse and buggy days, today’s
suspension systems must provide for safe handling and maximum traction
while being able to sustain passenger comfort. To accomplish these goals,
modern suspension systems rely on various types of springs, shock absorbers,
control arms, and other components. As a comparison, the front suspension
from a Ford Model T and from a modern vehicle are shown in Figure 6-1 and
Figure 6-2.
All of the components of the suspension system must work together to
provide the proper ride quality and handling characteristics expected by the
driver and passengers. Each component is engineered to work as a part of the
overall system. If one part of the system fails, it can lead to faster wear or
damage to other components. Therefore, a complete understanding of each
component and how it functions as part of the whole suspension system is
critical.
Functions of Suspension Systems
As previously mentioned, it is mostly assumed that the only function of a
suspension system is the absorption of road roughness; however, the
suspension of a vehicle needs to satisfy a number of requirements with
partially conflicting aims as a result of different operating conditions. The
suspension connects the vehicle’s body to the ground, so all forces and
moments between the two go through the suspension system. Thus, the
suspension system directly influences a vehicle’s dynamic behavior.
Automotive engineers usually study the functions of a suspension system
through three important principles:
• Ride Comfort: Ride comfort is defined based on how a passenger feels
within a moving vehicle. The most common duty of the suspension system is
road isolation—isolating a vehicle body from road disturbances. Generally,
ride quality can be quantified by the passenger compartment’s level of
vibration. There are a lot of inner and outer vibration sources in a vehicle.
Inner vibration sources include the vehicle’s engine and transmission,
whereas road surface irregularities and aerodynamic forces are the outer
vibration sources. The spectrum of vibration may be divided up according to
ranges in frequency and classified as comfortable (0–25 Hz) or noisy and
harsh (25–20,000 Hz).
• Road Holding: The forces on the contact point between a wheel and the
road act on the vehicle body through the suspension system. The amount and
direction of the forces determine the vehicle’s behavior and performances,
therefore one of the important tasks of the suspension system is road holding.
The lateral and longitudinal forces generated by a tire depend directly on the
normal tire force, which supports cornering, traction, and braking abilities.
These terms are improved if the variation in the normal tire load is
minimized. The other function of the suspension is supporting the vehicle’s
static weight. This task is performed well if the rattle space requirements in
the vehicle are kept minimal.
• Handling: A good suspension system should ensure that the vehicle will
be stable in every maneuver. However, perfect handling is more than
stability. The vehicle should respond to the driver’s inputs proportionally
while smoothly following his/her steering/braking/accelerating commands.
The vehicle behavior must be predictable, and behavioral information should
accordingly be communicated to the driver. Suspension systems can affect
vehicle handling in many ways: they can minimize the vehicle’s roll and
pitch motion, control the wheels’ angles, and decrease the lateral load transfer
during cornering.
Classification of Suspensions
Independent Suspensions.
To provide the best possible ride quality, many vehicles use fully
independent front and rear suspension systems. This allows the vehicle to
respond to varying road conditions much more effectively. Nearly all front
suspensions found on modern cars and light trucks are independent. Even
four-wheel drive (4WD) vehicles often have independent front suspensions to
improve their ride and handling qualities. The lower image in Figure 6-6
shows how each wheel is able to move in an independent suspension while
the upper image illustrates the movements of a dependent or rigid axle. In an
independent suspension, each wheel can move independently, so a bump on
one side of the vehicle does not affect the tires on the other side. This
improves ride quality and maintains tire contact with the road for the
remaining tires. Many rear suspensions on rear-wheel drive (RWD) vehicles
are independent systems.
The differential is mounted solidly to the body or rear frame and short axles,
similar to those found on the front of front-wheel drive (FWD) vehicles, are
used to drive the rear wheels. This provides improved ride quality and
handling. These suspension types are discussed later in this chapter. Many
FWD cars have independent rear suspension systems as well. This improves
ride quality and handling.
Dependent Suspensions.
Still found on the rear of many vehicles and on the front of most heavy duty
vehicles, dependent suspensions sacrifice ride quality for strength. Since the
movement of one wheel affects the opposite wheel, ride quality and handling
suffer on these systems. A large, straight I-beam is often used on the front of
heavy-duty vehicles, such as buses and semi-trucks. An example of this is
shown in Figure 6-7. This design is used for its strength and durability but
does not provide the best ride quality. The rear axle on many RWD cars, light
trucks, and SUVs is a dependent live axle; an example is shown in Figure 6-
8. Live axle means the rear axle is driving the rear wheels. Since a live rear
axle is one large assembly housing the differential gears and axles, it is a
dependent system. Live rear axles are mounted on leaf springs, coil springs,
or air springs.
A vehicle with a solid rear axle that does not drive the rear wheels has what is
called a dead axle or a rigid or straight axle. An example of this type of rear
suspension is shown in Figure 6-9. A dead axle supports the weight of the
rear of the vehicle and can be fitted with coil, leaf, or air springs. A dead axle
is a dependent form of suspension.
Semi-Independent Suspensions.
Found on the rear of many FWD vehicles, this type of system uses a fixed
rear axle that twists slightly under loads. This allows for semi-independent
movement of the rear wheels. This system typically uses coil springs or
struts. The semi-independent system provides better ride and handling than a
straight axle while not being as costly as a fully independent system.
Front Suspensions.
The main purpose of the front suspension is to provide safe, comfortable
handling while allowing wheel movement for the steering and enabling the
driver to react to various road conditions. To accomplish this, several
different front suspension styles are used in modern vehicles. The front
suspensions on FWD vehicles also have to be able to handle the additional
torque of driving the front wheels. Additionally, during braking, as much as
70 percent of the vehicle weight is transferred to the front, adding additional
loads to the front suspension. Vehicle type and intended use of the vehicle are
the main considerations when engineers begin to design the suspension
systems. Many cars have suspensions that look very similar, but actually have
many differences. The exact size and placement of components have a large
effect on individual vehicle driving characteristics.
Rear Suspensions.
The rear suspension must be able to carry any additional loads placed in the
rear of the vehicle while still maintaining the correct ride height. The rear
suspensions on many FWD and RWD vehicles are similar in that a solid type
of axle is used. Though strong, a solid axle does not provide the level of
handling and ride quality that an independent rear suspension does. The rear
suspension on RWD vehicles must be able to handle the torque of the
driveline. This can be difficult since torque tries to twist the vehicle and rear
suspension.
Basic Principles of Suspension System
The components of the suspension system, while important separately, must
operate as a whole for the system to provide all of the requirements during
normal driving conditions. As stated before, the suspension is responsible for
carrying vehicle weight, absorbing road shocks, providing a smooth ride, and
allowing good handling qualities.

While many drivers do not know the specifics of how these goals are met,
they do feel how their vehicle rides and handles and can tell quickly when
something is not quite right. For the technician, it is important to
understand the underlying principles of suspension operation so that he or
she can accurately diagnose a concern when one is present.
Oversteer.
Oversteer is a term used to describe a driving condition where the rear tires
reach their cornering limit before the front tires. This can allow the rear
tires to break loose and cause the vehicle to spin. Figure 6-10 illustrates
the effects of oversteer. Oversteer can be used as an advantage in certain
racing situations, but if you have ever experienced the back end of a car
sliding on wet or slippery pavement, you know that oversteer can also be a
very undesirable event!
To correct for oversteer, you should steer into the slide and reduce power
until control returns. Applying the brakes can actually make oversteer worse
since the weight transfer from the rear wheels can reduce rear tire traction.
Understeer.
The opposite of oversteer is understeer. This condition occurs when the front
of the vehicle cannot make a turn through the desired turn radius because
the front tires have lost traction. Figure 6-11 shows how a vehicle will
continue in a somewhat straight line instead of making the intended turn.
This causes the vehicle to overshoot the turn. If you have ever tried to make
a turn in slippery or snowy conditions and the vehicle continued in a
straight line instead of turning the corner, you have experienced understeer.
Understeer is
measured by the difference between the angle the tires are pointing and the
angle needed to make the turn. Most cars are designed to have understeer.
This is because understeer can be reduced by reducing vehicle speed, which
is safer for the average driver.
Neutral Steering.
If a vehicle turns at the same rate that the steering wheel is turned, it is said to
have neutral steering. This means that the vehicle does not exhibit a tendency
to either over- or understeer.
Lateral Acceleration.
Lateral acceleration is the measurement of the vehicle’s ability to corner.
What we feel during a corner is that a force pushes the vehicle and its
occupants to the outside of a turn. In reality, as both the car and occupants
turn, the people inside are still subject to Newton’s First Law of Motion and
continue to move in a straight line. The effect is that we feel pushed toward
the outside of the corner. Centripetal force, meaning “toward the center,” is
the force that pulls an object toward the center of a circle as the object rotates.
Imagine swinging a ball over your head and that the ball is attached to a
string. The ball travels in a circle because the centripetal force is pulling the
ball toward the center. Obviously, cars do not have strings pulling them in
toward the center of a circle while turning but they do have tires. The tires are
exerting the force toward the center. The lateral (sideways) force is
perpendicular to the direction the car is traveling. This is where the term
lateral acceleration comes from for vehicle test scores. The test is performed
by driving the car on a large test-track circle at ever-increasing speeds. The
faster the car can go around the circle, the greater the lateral acceleration.
This means the better the vehicle will handle when cornering. Figure 6-12
shows an illustration of how this test is performed. Low riding, wide
wheelbase sports cars can achieve a much higher lateral acceleration than a
vehicle that is higher off the ground, such as a minivan or SUV.
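As a rough illustration of how such a skidpad score works, the minimal sketch below converts a test speed and circle radius into steady-state lateral acceleration expressed in g; the speed and radius values are assumed for the example and do not describe any particular test.

import math

G = 9.81  # standard gravity, m/s^2

def lateral_acceleration_g(speed_kmh: float, radius_m: float) -> float:
    """Steady-state cornering: a = v^2 / r, expressed as a multiple of g."""
    v = speed_kmh / 3.6          # convert km/h to m/s
    return (v ** 2) / radius_m / G

# Hypothetical run: 90 km/h around a 60 m radius circle.
print(round(lateral_acceleration_g(90.0, 60.0), 2), "g")  # about 1.06 g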
Springs
The springs in the suspension have two important functions. Springs support
the vehicle weight and absorb the bumps and movements that occur when
driving. There are four types of springs used in suspension systems.
Coil springs—are a length of steel wound into a coil shape. Used on most
front and many rear suspensions, coil springs, such as those shown in Figure
6-13, are large pieces of round steel formed into a coil. The spring absorbs
energy as the coils are forced closer together. This is called compression. The
stored energy is released when the coil extends back out. The energy
continues to dissipate as the spring bounces. Eventually, the energy is
exhausted and the spring stops bouncing. Coil springs are found in front and
rear suspension systems, have a compact design, and do not need
maintenance. When the spring becomes fatigued or weak, ride height will
drop, and the spring will need to be replaced.
Coil springs are often sandwiched between the lower control arm and the
vehicle frame. In this position, the weight of the vehicle is pushing down
against the spring, which is supported by the lower control arm. This
configuration allows movement of the suspension while the spring carries the
weight and dampens out road shock. Coil springs often use rubber insulators
between the spring and the frame to reduce noise.
The coil springs used in strut suspensions appear similar to those used in
other applications, but are not interchangeable. Most strut coil springs are
made of smaller diameter steel but are larger in total outside diameter than
those in other applications. Strut coil springs are usually painted or coated
with rust-resistant coverings.
Coil springs are categorized as either standard or variable-rate springs. A
standard-rate spring has evenly spaced coils and requires a specific amount of
force to compress the spring a given amount. Further compression requires an
additional force, equal to the original force. A variable-rate spring has
unequally spaced coils and requires an increasing amount of force to achieve
further compression. For example, a standard-rate spring may require 300 lbs.
of force to compress one inch and an additional 300 lbs. to compress the
second inch (600 lbs. equals two inches). A variable-rate spring requires the
same 300 lbs. of force to compress one inch but requires 500 lbs. to compress
the next inch (800 lbs. equals two inches). Coil springs used in passenger car
rear suspensions are usually lighter duty than those found at the front. This is
because the majority of the vehicle’s weight is often toward the front. Coil
springs on the rear of larger passenger cars, trucks, and SUVs are often
variable-rate springs.
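A minimal sketch of the arithmetic behind those numbers is shown below; the 300 lb/in and 500 lb/in figures are simply the illustrative values from the text, not data for any specific spring.

def linear_spring_force(rate_lb_per_in: float, travel_in: float) -> float:
    """Standard-rate spring: force grows by the same amount for every inch."""
    return rate_lb_per_in * travel_in

def variable_spring_force(rates_lb_per_in: list[float], travel_in: int) -> float:
    """Variable-rate spring: each successive inch uses the next (higher) rate."""
    return sum(rates_lb_per_in[:travel_in])

print(linear_spring_force(300, 2))           # 600 lb to compress 2 inches
print(variable_spring_force([300, 500], 2))  # 800 lb to compress 2 inches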
Leaf springs—are long semi-elliptical pieces of flattened steel and are
used on the rear of many vehicles. Leaf springs are typically mounted as
shown in Figure 6-14. Leaf springs have been in use since the horse-and-
buggy days. A leaf spring is a long, flat piece of spring steel, shaped into a
semi-elliptical arc. The spring is attached to the frame through a shackle or bracket
assembly that permits changes in the effective length of the spring as it is
compressed. To carry heavier loads, additional leaves can be stacked below
the master leaf. Increasing the number of leaves increases load carrying
capacity but makes the ride stiffer. Some suspensions use transverse leaf
springs that are mounted perpendicular to the frame. In a transverse
arrangement, one leaf spring supports both sides of the suspension. This style
was used for many years on the Corvette and on some FWD vehicles with
independent rear suspensions.
Service Warning: Vehicles with air springs often require special lifting and
jacking procedures. Do not attempt to raise a vehicle with air springs until
you have read and followed all applicable warnings and procedures.
Air springs—are thick, tough bags filled with air that act as springs. Air
springs are used on some larger sedans and most large commercial semi
trucks and trailers. Air springs are typically located in the rear, though some
manufacturers use air springs at both the front and the rear, as shown in
Figure 6-15. Air springs, like torsion bars, are adjustable. On many vehicles,
the on-board computer system uses a ride height sensor to determine
suspension load. As additional weight is added to the trunk, the suspension
will drop. When the computer senses this drop, it can turn on an on-board air
compressor to supply more air to the air springs. The increased pressure in
the springs will restore the ride height to the desired position. Some systems
may use the adjustment of air pressure to the air springs to control ride height
based on the vehicle’s speed or driver input.
Torsion bars—are coil springs that are not coiled. Torsion bars are lengths of
round steel bar fastened to a control arm on one end and the frame on the
other end. Movement of the control arm causes the torsion bar to twist. The
absorption of the twist is similar to compression of a coil spring. As the
torsion bar untwists, the control arm returns to its normal position. Torsion
bars are used in many 4WD vehicles where a front drive axle occupies the
space where the coil spring normally sits. The torsion bar shown in Figure 6-
16 is mounted to the lower control arm and the transmission crossmember.
Torsion bars can be mounted in either the upper or lower control arms. The
control arm acts as a lever against the torsion bar, twisting the bar. The bar
twists since it is rigidly mounted in a crossmember. As it releases energy and
untwists, the torsion bar returns to its original shape, forcing the control arm
back into position.

An advantage of torsion bars is that they are adjustable. At the rear torsion
bar mount is an adjustment mechanism. If a torsion bar-equipped vehicle is
sagging, the torsion bar may be able to be adjusted to bring the vehicle back
into specification. When a torsion bar is replaced, it must be tightened to
provide the necessary lift to support the vehicle.
Spring Ratings
Automotive springs are rated for their frequency and their load rate. Springs,
when either compressed or extended, are under stress. When that stress is
released, the springs will attempt to return to their original condition. When
the springs are compressed or twisted, they store energy. Upward movement
of the wheel that compresses the spring is called jounce. The stored energy is
released when the spring rebounds. This downward movement of the tire, as
the spring extends out, is called rebound. As you probably know, a
compressed spring will rebound many times before all of the energy is
dissipated. The number of times a spring oscillates or bounces before
returning to its rest point is called the spring frequency. An example
illustrating this is shown in Figure 6-17. The size of the spring and the spring
material contribute to the spring frequency. Ideally, a spring should dampen
out its oscillations quickly enough to provide a smooth ride but not so fast
that it causes a harsh, jarring ride. If left to bounce or oscillate on its own, the
spring will cause the vehicle to bounce excessively, probably to the
discomfort of the passengers.
The amount of force it takes to compress or twist a spring a certain amount is
called spring rate. Springs can have either linear or variable rates. Figure 6-18
illustrates the differences in spring rate. When a vehicle is designed, the
engineers will factor spring size, rate, and frequency based on the intended
vehicle use, tire size and type, suspension style, and many other factors. The
goal is to have the best compromise between component weight, vehicle cost,
and the ride and handling qualities desired for the vehicle.
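To connect spring rate and spring frequency, here is a small illustrative calculation using the undamped natural frequency of a simple one-corner mass-spring model, f = (1/2π)√(k/m); the spring rate and sprung mass values are assumed for the example and are not taken from the text.

import math

def ride_frequency_hz(spring_rate_n_per_m: float, sprung_mass_kg: float) -> float:
    """Undamped natural frequency of a simple one-corner mass-spring model."""
    return math.sqrt(spring_rate_n_per_m / sprung_mass_kg) / (2 * math.pi)

# Assumed example: 25 kN/m corner spring rate carrying 350 kg of sprung mass.
print(round(ride_frequency_hz(25_000, 350), 2), "Hz")  # roughly 1.3 Hz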
Sprung and Unsprung Weight

Weight carried by the springs is called sprung weight. Weight not carried by
the springs is called unsprung weight. The less unsprung weight a vehicle
has, the better the handling and ride will be. Some examples of unsprung
weight are the wheels and tires, brake components, control arms, steering
knuckles, and rear axles. Figure 6-19 shows examples of unsprung weight.
Sprung weight includes the vehicle body, engine and transmission, the
passengers, and in general, items above the axles. Examples are shown in
Figure 6-20. The amount of sprung weight should be high and unsprung
weight should be low.
Shock Absorbers
Shock absorbers are actually dampers, meaning that they reduce or make
something less intense. The springs do the shock absorbing while the shocks
dampen the spring oscillations. Without the shocks, our vehicles would
continue to bounce for a long time after every bump, dip, and change in body
movement. The most common type of shock is the direct double-acting
hydraulic shock absorber. This means that the shocks are used to directly act
on motion; double acting means that they work in both compression and
extension modes, and hydraulic means that a fluid is used to perform work.
Compression is upward wheel travel, also called jounce. Extension is
downward wheel motion and is also called rebound.

Shocks are typically mounted near the springs, with the lower end of the
shock mounted on a lower control arm or axle, as shown in Figure 6-21. The
top of the shock, which is connected to the shock piston, is mounted to the
vehicle body. Inside the shock are two chambers, each partially filled with
oil, as shown in Figure 6-22.

The shock piston moves up and down in the main chamber. This movement
displaces the oil into a second chamber. A set of one-way valves control the
flow of oil from the chambers. Moving the oil is difficult. This is where the
shock’s resistance to movement comes from. Figure 6-23 shows the
movement of oil through the valves and chambers. By allowing more oil to
flow, a shock will dampen less and provide a smoother ride. By restricting oil
flow, the shock will be more resistant to movement and provide a stiffer ride.
A shock may have an equal amount of resistance during both compression
and extension, or it may have more resistance during extension. This is
because the spring naturally resists compression, and the shock does not need
to add much resistance to that of the spring. But since the spring will easily
extend out, the shock’s greater resistance on extension can help better control
spring action.
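As a rough illustration of that asymmetry, the sketch below models a damper with a simple linear force-velocity law and separate compression and rebound coefficients; both coefficient values are assumed for the example and do not describe any particular shock absorber.

def damper_force_n(velocity_m_s: float,
                   c_compression: float = 1500.0,
                   c_rebound: float = 3000.0) -> float:
    """Resisting force of a double-acting damper, opposing piston velocity.

    Positive velocity = compression (jounce), negative = extension (rebound).
    The assumed rebound coefficient is higher, mirroring the behavior above.
    """
    c = c_compression if velocity_m_s >= 0 else c_rebound
    return -c * velocity_m_s

print(damper_force_n(0.5))    # -750.0 N resisting compression
print(damper_force_n(-0.5))   # 1500.0 N resisting extension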
Control Arms
Control arms are used to control wheel movement. Used on both front and
rear suspensions, they are commonly referred to by their position, such as the
upper and lower control arms. Common control arm configurations are
shown in Figure 6-24 through Figure 6-26. Control arms are also called A-
arms or wishbones due to their similarity to being A- or wishbone shaped.
A-arms typically have two connections to the frame and a ball joint for
connecting to the steering knuckle. The control arm mounts to the frame with
bushings. These bushings allow for up and down movement of the arm while
controlling back and forth motion.
The bushings are generally rubber and steel and are pressed into the control
arms. In addition to acting as pivots for the control arms, the bushings act as
dampers, twisting and untwisting to return the control arm to its original
position. Also connected to lower control arms are the stabilizer bar links.
The stabilizer bar links join the lower control arms to the stabilizer bar. These
links can be a set of bushings and washers or a solid link with ball-and-socket
joints.
Ball Joints
Ball joints allow the steering knuckle to pivot for steering while providing a
tight connection to the control arms and preventing any unwanted up and
down or sideways movement. Ball joints use a ball-and socket joint to allow
a wide range of motion, similar to a shoulder or hip joint. An illustration of a
ball joint is shown in Figure 6-27.

Ball joints can be one of two types, load carrying or non-load carrying. Load-
carrying ball joints support the weight carried by the springs. Because of this,
these joints tend to wear faster and need replacement more often than non-
load-carrying joints. Non-load-carrying joints provide a steering pivot and
component connection with a wide range of movement just like load carrying
joints, but without the sprung weight applied to them. Figure 6-28 illustrates
how weight is carried by a ball joint.
Ball joints are mounted to the control arms in a variety of ways. The most
common ways are a press fit, bolt in, and rivets. Some older vehicles had
threads on the ball joint itself, which was then threaded into the control arm.
Joints that are riveted at the factory are replaced with joints that bolt into the
control arm. Some heavy-duty and older vehicles use kingpins instead of ball
joints. A king pin connects the steering knuckle to the front axle. King pins
and king pin bushings do not use a ball-and-socket joint; instead, the king pin
is pressed into the bushings. The king pin rotates in the bushing to allow for
steering movement.
Steering Knuckles
Steering knuckles support the wheel and tire, brakes, and sprung weight of
the vehicle. A steering knuckle can be mounted in a variety of ways for both
front and rear suspensions. Figure 6-29 shows an example of a common
steering knuckle configuration. The steering knuckle also has an attachment
point for the outer tie rod end. A wheel bearing or set of bearings mount to
the steering knuckle to provide the mounting of the wheel hub. Steering
knuckles are also sometimes called spindles. The spindle portion of the
steering knuckle is where the wheel bearings and brake components are
mounted. The spindle supports those components and allows the wheel to
rotate on the wheel bearings.

Stabilizer Bars. Stabilizer bars, also called sway bars or anti-roll bars, reduce
body roll. These steel bars attach to the lower control arms or axle assembly
and the body or frame. When the vehicle body starts to lift while cornering,
the bar tries to move with the body. Since the outer ends of the stabilizer bar
are connected to the control arms or axle, and the control arms cannot move
upward, it forces the stabilizer bar to pull the body back down, limiting body
roll. An illustration of a stabilizer bar is shown in Figure 6-30.

Figure 6-31 shows how the stabilizer bar is connected to the control arm.
Some vehicles have adjustable stabilizer bar links, while some modern sports
cars use electronic anti-roll systems to reduce body movement. Regardless of
the type, broken stabilizer bar links will cause excessive body roll while
cornering.
Types of Suspension Systems
Even though there are many different suspension setups, most types can be
categorized into one of these types: MacPherson strut, modified strut,
multilink, short/long arm, I-beam, and solid axles. Regardless of the type, all
suspensions try to accomplish the same goals of good ride quality and
handling.
Macpherson Struts
The popularity of small FWD vehicles has brought with it the dominant type
of suspension system used today, the MacPherson strut suspension. These
systems combine a coil spring, shock absorber, and bearing plate into a single
unit. A typical strut is shown in Figure 6-32.

This arrangement allows for greater engine compartment space and reduced
weight compared to short/long arm suspensions. This is because the
MacPherson strut suspension eliminates the upper control arm and upper ball
joint. This reduces weight and moves the top of the suspension higher and
toward the outside of the vehicle. Because the upper control arms are
removed, there is space for the engine and transmission to be mounted
transversely (sideways) in the engine compartment.
The strut connects to the car body through the upper strut mount or bearing
plate, which also acts as a pivot and damper. The upper mount provides
flexibility, so the strut can change angle to follow the path of the lower ball
joint. The mount also dampens or reduces vibration and serves as the upper
pivot for the steering axis. The components of a strut mount are illustrated in
Figure 6-33.

The shock absorber’s piston rod in a strut is larger than the standard shock
piston rod to withstand sideways bending from loads placed on the tire while
it is turning. Figure 6-34 shows a comparison of a strut piston and a shock
piston. The strut piston rod, on the left, is much larger in diameter than that of
the shock, shown on the right.
Modified Struts
Some vehicles use a strut-style shock absorber but relocate the spring. These
are not true MacPherson struts. Called a modified strut, this system has the
spring mounted separate from the strut. The strut performs the function of the
shock absorber and is connected to an upper bearing plate at the top and to
the steering knuckle at the lower end. The coil spring is located between the
frame and the lower control arm. This design has the weight and space saving
advantages of the MacPherson strut suspension but can contain larger
springs. Relocating the spring also can allow for a wider distance between the
wheel wells, increasing engine compartment room.
Multilink
Many vehicles use a multilink system. With a multilink suspension, the
steering knuckle is taller than on a traditional strut or short/long arm
suspension, often reaching the height of the top of the tire. The strut does not
turn with the steering axis; rather it is mounted rigidly to the body at the
upper strut mount. This is because the steering knuckle pivots on the upper
and lower ball joints for steering action. Multilink systems are designed to
produce neutral steering on FWD vehicles, which tend to exhibit understeer
with traditional MacPherson strut suspensions. This suspension is also
commonly used on RWD cars, light trucks, and SUVs.

Figure 6-36 shows a common multilink arrangement. Multilink suspensions
are also found on the rear of many vehicles, both FWD and RWD. Several
control arms are used to reduce rear axle movements and provide better
handling and ride qualities than a traditional rear strut system.

Short/Long Arm
Short/long arm suspensions, also called SLA suspensions, are typically used
on RWD vehicles. This suspension consists of two unequal length control
arms connected with a steering knuckle. The control arms are generally
triangular and are often called wishbones or A-arms. A steering knuckle,
control arm bushings and ball joints comprise the rest of the suspension.

Figure 6-37 shows an illustration of a typical system. Control arm design is
matched with the spring for tire control and ride characteristics. The control
arms are mounted to the frame with control arm bushings. Some suspensions
use a lower control arm with a single frame mounting point. In this case, a
strut rod will also be used as an additional mount and stabilizer for the
control arm as shown in Figure 6-38.
SLA systems use two ball joints, one of which carries the sprung weight of
the vehicle. The other ball joint provides a friction and pivot point and does
not carry weight. The load-carrying joint is located in the control arm in
which the spring sits. The other ball joint is called the friction or following
ball joint.

Figure 6-39 shows how the weight is carried by the load-carrying ball joint in
an SLA suspension. SLA suspensions are not as common as they once were
due to the popularity of FWD vehicles. These suspensions tend to intrude into
the engine compartment, causing space problems with FWD drivetrains.
I-BEAM
This suspension system was used on Ford trucks and vans for many years.
Twin I-beams are strong and simple like solid axles but provide independent
movement of the front suspension. An illustration of this system is shown in
Figure 6-40. I-beams are mounted to the crossmember with a bushing and
house the ball joints at the outside of the beam. I-beams also use a radius arm
to control I-beam movement, as shown in Figure 6-41. I-beams are similar to
very long control arms. They move on a pivot and allow for vertical wheel
movement while the radius arm stops forward and backward movement of
the suspension.
4wd Suspensions
For many years, the front suspensions on 4WD vehicles were nearly identical
to the rear suspensions. A large live axle supported with either leaf or coil
springs for support was standard for most 4WD trucks, an example of which
is shown in Figure 6-42. While strong, these systems did not have
outstanding ride quality. To improve the ride and handling of 4WD trucks,
manufacturers began to redesign the front suspensions to allow for
independent wheel movement. One novel approach to this was Ford’s Twin-
Traction Beam or TTB. This system uses a live front axle that contains U-
jointed axle shafts that allow for independent wheel movement for improved
ride and handling while still retaining the durability and strength of
traditional 4WD. Manufacturers of 4WD vehicles today often mount the front
differential directly to the chassis. Short FWD drive shafts then connect the
differential to the wheels.

An example of this arrangement is shown in Figure 6-43. This allows fully
independent suspension movement. Full live front axles can still be found on
heavy-duty light trucks, but the majority of trucks now have independent
front suspensions whether they are 2WD or 4WD.
IC ENGINES
Detonation or Knocking in IC Engines

Detonation or knocking in I.C. Engines - The loud pulsating noise heard
within the engine cylinder of an I.C. engine is known as detonation (also
called knocking or pinking). It is caused by the propagation of a high speed
pressure wave created by the auto-ignition of the end portion of unburnt fuel. The
blow of this pressure wave may be of sufficient strength to break the piston.
Thus, detonation is harmful to the engine and must be avoided. The
following are certain factors which cause detonation:
1. The shape of the combustion chamber,
2. The relative position of the sparking plugs in case of petrol engines,
3. The chemical nature of the fuel,
4. The initial temperature and pressure of the fuel, and
5. The rate of combustion of that portion of the fuel which is the first
to ignite. This portion of the fuel in heating up, compresses the
remaining unburnt fuel, thus producing the conditions for auto-
ignition to occur.
The detonation in petrol engines can be suppressed or reduced by the addition
of a small amount of tetraethyl lead (ethyl fluid) to the fuel. This is called
doping.
The following are the chief effects due to detonation:
1. A loud pulsating noise which may be accompanied by a vibration of
the engine.
2. An increase in the heat lost to the surface of the combustion
chamber.
3. An increase in carbon deposits.
Cetane Number - Rating of CI Engine Fuels
What is cetane number, and how are fuels used in C.I. engines (compression
ignition engines) rated using the cetane number and other parameters?
The knocking tendency is also found in compression ignition (C. I.) engines
with an effect similar to that of S. I. engines, but it is due to a different
phenomenon. The knock in C. I. engines is due to sudden ignition and
abnormally rapid combustion of accumulated fuel in the combustion
chamber. Such a situation occurs because of an ignition lag in the combustion
of the fuel between the time of injection and the actual burning.
The property of ignition lag is generally measured in terms of cetane number.
It is defined as the percentage, by volume, of cetane in a mixture of cetane
and alpha-methyl-naphthalene that produces the same ignition lag as the fuel
being tested, in the same engine and under the same operating conditions. For
example, a fuel of cetane number 50 has the same ignition quality as a
mixture of 50 percent cetane and 50 percent alpha-methyl-naphthalene.
Cetane, which is a straight-chain paraffin with good ignition quality, is
assigned a cetane number of 100, and alpha-methyl-naphthalene, which is a
hydrocarbon with poor ignition quality, is assigned a cetane number of zero.
Notes:
1. The knocking in C. I. engines may be controlled by decreasing
ignition lag. The shorter the ignition lag, the lesser is the tendency to
knock.
2. The cetane number of diesel oil, generally available, is 40 to 55.
Ignition System of Petrol Engines
The ignition in a petrol engine takes place by means of a spark plug at the
end of the compression stroke. The voltage required to produce a spark across
the gap between the sparking points of a plug is 6000 to 10000 volts. The
following two ignition systems of petrol engines are important:
1. Coil ignition system (also known as battery ignition system); and
2. Magneto ignition system.

The coil ignition system has an induction coil, which consists of two coils
known as primary and secondary coils wound on a soft iron core, as shown in
figure above. One end of the primary coil is connected to the ignition switch,
ammeter and battery generally of 6 volts. The other end of the primary coil is
connected to a condenser and a contact breaker. A condenser is connected
across the contact breaker for the following two reasons :
(a) It prevents sparking across the gap between the points,
(b) It causes a more rapid break of the primary current, giving a higher
voltage in the secondary circuit.
The secondary coil is connected to a distributor (in a multi-cylinder engine)
with the central terminal of the sparking plugs. The outer terminals of the
sparking plugs are earthed together and connected to the body of the engine.
The coil ignition system is employed in medium and heavy spark ignition
engines such as in cars.
The magneto ignition system has the same principle of working as that of
coil ignition system, except that no battery is required, as the magneto acts as
its own generator. This type of ignition system is generally employed in small
spark ignition engines such as scooters, motor cycles and small motor boat
engines.
Definition and Classification of I.C. Engines
I.C. Engines: Internal combustion engine more popularly known as I.C.
engine, is a heat engine which converts the heat energy released by the
combustion of the fuel inside the engine cylinder into mechanical work. Its
versatile advantages, such as high efficiency, light weight, compactness, easy
starting, adaptability and comparatively lower cost, have made its use as a
prime mover universal.

Classification of I.C. Engines: I.C. engines are classified according to:

1. Nature of thermodynamic cycle as:
   1. Otto cycle engine
   2. Diesel cycle engine
   3. Dual combustion cycle engine
2. Type of fuel used as:
   1. Petrol engine
   2. Diesel engine
   3. Gas engine
   4. Bi-fuel engine
3. Number of strokes as:
   1. Four stroke engine
   2. Two stroke engine
4. Method of ignition as:
   1. Spark ignition engine, known as S.I. engine
   2. Compression ignition engine, known as C.I. engine
5. Number of cylinders as:
   1. Single cylinder engine
   2. Multi cylinder engine
6. Position of the cylinders as:
   1. Horizontal engine
   2. Vertical engine
   3. Vee engine
   4. In-line engine
   5. Opposed cylinder engine
7. Method of cooling as:
   1. Air cooled engine
   2. Water cooled engine

Difference between four stroke and two stroke engines
The following are the main differences between four stroke and two stroke
engines:

1. In a four stroke engine, all four operations (suction, compression, ignition and exhaust) are completed in two revolutions of the crankshaft. In a two stroke engine, all four operations are completed in one revolution of the crankshaft.
2. In a four stroke engine, power is developed in every alternate revolution of the crankshaft. In a two stroke engine, power is developed in every revolution of the crankshaft.
3. In a four stroke engine the torque is less uniform, hence a heavier flywheel is required. In a two stroke engine the torque is more uniform, hence a lighter flywheel is sufficient.
4. In a four stroke engine the suction and exhaust are opened and closed by mechanical valves. In a two stroke engine the piston itself opens and closes the ports.
5. In a four stroke engine the charge directly enters the cylinder. In a two stroke engine the charge first enters the crankcase and then flows into the cylinder.
6. The crankcase of a four stroke engine, even though closed, is not a pressure-tight chamber. The crankcase of a two stroke engine is a closed, pressure-tight chamber.
7. In a four stroke engine the piston drives out the burnt gases during the exhaust stroke. In a two stroke engine the high pressure fresh charge scavenges out the burnt gases.
8. The lubricating oil consumption in a four stroke engine is less; in a two stroke engine it is more.
9. A four stroke engine produces less noise; a two stroke engine produces more noise.
10. Since the fuel burns in every alternate revolution of the crankshaft in a four stroke engine, the rate of cooling required is less; since the fuel burns in every revolution in a two stroke engine, the rate of cooling required is more.
11. A four stroke engine cannot run in either direction, whereas a valveless two stroke engine can run in either direction.
Efficiency of an IC Engine

The efficiency of an I.C. engine (internal combustion engine) is defined as the
ratio of the work done to the energy supplied to the engine. The following
efficiencies of an I.C. engine are important:
(a) Mechanical efficiency. It is the ratio of brake power (B.P.) to the
indicated power (I.P.). Mathematically,
Mechanical efficiency = B.P. / I.P.
Since B.P. is always less than I.P., the mechanical efficiency is always
less than unity (i.e. 100%).
(b) Overall efficiency. It is the ratio of the work obtained at the crankshaft in
a given time to the energy supplied by the fuel during the same time.
Mathematically,
Overall efficiency = (B.P. x 3600) / (mf x C)
where
B.P. = Brake power in kW,
mf = Mass of fuel consumed in kg per hour, and
C = Calorific value of fuel in kJ/kg of fuel.
(c) Indicated thermal efficiency. It is the ratio of the heat equivalent of one
kW hour to the heat in the fuel per I.P. hour. Mathematically,
Indicated thermal efficiency = (I.P. x 3600) / (mf x C)
Note: The following ratio is known as specific fuel consumption per I.P.
hour:
Specific fuel consumption = mf / I.P. (kg per indicated kWh)
(d) Brake thermal efficiency. It is the ratio of the heat equivalent of one kW
hour to the heat in the fuel per B.P. hour. Mathematically,
Brake thermal efficiency = (B.P. x 3600) / (mf x C)
Note: The following ratio is known as specific fuel consumption per B.P.
hour:
Specific fuel consumption = mf / B.P. (kg per brake kWh)
(e) Air standard efficiency. The general expression for the air standard
efficiency is given as
Air standard efficiency = 1 - 1/(r)^(γ - 1)   (for petrol engines, Otto cycle)
Air standard efficiency = 1 - [1/(r)^(γ - 1)] x [(ρ^γ - 1)/(γ(ρ - 1))]   (for diesel engines)
where r = compression ratio, γ = ratio of specific heats of air, and ρ = cut-off ratio.
(f) Relative efficiency. It is also known as efficiency ratio. The relative


efficiency of an I. C. engine is the ratio of the indicated thermal efficiency to
the air standard efficiency.
(g) Volumetric efficiency. It is the ratio of the actual volume of charge
admitted during the suction stroke at N.T.P to the swept volume of the piston.
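As a small illustration of how these definitions fit together, the sketch below computes the mechanical, brake thermal, indicated thermal, air standard and relative efficiencies for one set of assumed test readings; the numerical values are invented for the example and do not refer to any particular engine.

# Assumed test readings (illustrative values only).
ip_kw = 25.0             # indicated power, kW
bp_kw = 20.0             # brake power, kW
fuel_kg_per_h = 6.0      # mass of fuel consumed per hour, kg/h
cv_kj_per_kg = 42_000.0  # calorific value of the fuel, kJ/kg
r, gamma = 8.0, 1.4      # compression ratio and ratio of specific heats

mechanical_eff = bp_kw / ip_kw
brake_thermal_eff = (bp_kw * 3600) / (fuel_kg_per_h * cv_kj_per_kg)
indicated_thermal_eff = (ip_kw * 3600) / (fuel_kg_per_h * cv_kj_per_kg)
air_standard_eff = 1 - 1 / r ** (gamma - 1)          # Otto cycle expression
relative_eff = indicated_thermal_eff / air_standard_eff

for name, value in [("mechanical", mechanical_eff),
                    ("brake thermal", brake_thermal_eff),
                    ("indicated thermal", indicated_thermal_eff),
                    ("air standard (Otto)", air_standard_eff),
                    ("relative", relative_eff)]:
    print(f"{name} efficiency = {value:.1%}")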
Brake power of IC Engine
The brake power (briefly written as B.P.) of an IC Engine is the power
available at the crankshaft. The brake power of an I.C. engine is, usually,
measured by means of a brake mechanism (prony brake or rope brake).
In case of a prony brake, brake power of the engine,
B.P. = (2 π N x W x l) / 60,000 kW
where
W = Brake load in newtons,
l = Length of the brake arm in metres, and
N = Speed of the engine in r.p.m.
In case of a rope brake, brake power of the engine,
B.P. = [π N (W - S)(D + d)] / 60,000 kW
where
W = Dead load in newtons,
S = Spring balance reading in newtons,
D = Diameter of the brake drum in metres,
d = Diameter of the rope in metres, and
N = Speed of the engine in r.p.m.
Note : The brake power (B.P.) of an engine is always less than the indicated
power (I.P.) of an engine, because some power is lost in overcoming the
engine friction (known as frictional power). Mathematically,
Frictional power, F.P. = I.P. - B.P.
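A minimal sketch of both measurements is shown below; all readings are assumed example values, and the formulas are the prony brake and rope brake expressions given above.

import math

def prony_brake_power_kw(load_n: float, arm_m: float, rpm: float) -> float:
    """Brake power from a prony brake: B.P. = 2*pi*N*W*l / 60,000 (kW)."""
    return 2 * math.pi * rpm * load_n * arm_m / 60_000

def rope_brake_power_kw(dead_load_n: float, spring_n: float,
                        drum_dia_m: float, rope_dia_m: float, rpm: float) -> float:
    """Brake power from a rope brake: B.P. = pi*N*(W - S)*(D + d) / 60,000 (kW)."""
    return math.pi * rpm * (dead_load_n - spring_n) * (drum_dia_m + rope_dia_m) / 60_000

# Assumed readings for illustration.
print(round(prony_brake_power_kw(load_n=200, arm_m=0.9, rpm=1500), 2))  # ~28.27 kW
print(round(rope_brake_power_kw(400, 50, 1.0, 0.02, 1500), 2))          # ~28.04 kW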
Comparison of Petrol and Diesel Engines
Petrol vs Diesel engines: A comparison - Basic difference between a petrol
engine and a diesel engine based on working, pressures, combustion,
compression ratios, speed, efficiency, maintenance, and running costs. The
following points are important for the comparison of petrol and diesel
engines:

1. A petrol engine draws a mixture of petrol and air during the suction stroke. A diesel engine draws only air during the suction stroke.
2. In a petrol engine, the carburettor is employed to mix air and petrol in the required proportion and to supply it to the engine during the suction stroke. In a diesel engine, the injector or atomiser is employed to inject the fuel at the end of the compression stroke.
3. In a petrol engine, the pressure at the end of compression is about 10 bar. In a diesel engine, it is about 35 bar.
4. In a petrol engine, the charge (i.e. the petrol and air mixture) is ignited with the help of a spark plug. In a diesel engine, the fuel is injected in the form of a fine spray; the temperature of the compressed air (about 600°C at a pressure of about 35 bar) is sufficiently high to ignite the fuel.
5. In a petrol engine, the combustion of fuel takes place approximately at constant volume; in other words, it works on the Otto cycle. In a diesel engine, combustion takes place approximately at constant pressure; in other words, it works on the Diesel cycle.
6. A petrol engine has a compression ratio of approximately 6 to 10. A diesel engine has a compression ratio of approximately 15 to 25.
7. In a petrol engine, starting is easy due to the low compression ratio. In a diesel engine, starting is a little difficult due to the high compression ratio.
8. As the compression ratio is low, petrol engines are lighter and cheaper. As the compression ratio is high, diesel engines are heavier and costlier.
9. The running cost of petrol engines is high because of the higher cost of petrol. The running cost of diesel engines is low because of the lower cost of diesel.
10. The maintenance cost of a petrol engine is less; that of a diesel engine is more.
11. The thermal efficiency of a petrol engine is up to about 26%; that of a diesel engine is up to about 40%.
12. Overheating trouble is more in a petrol engine due to its low thermal efficiency, and less in a diesel engine due to its high thermal efficiency.
13. Petrol engines are high speed engines; diesel engines are relatively low speed engines.
14. Petrol engines are generally employed in light duty vehicles such as scooters, motorcycles and cars, and are also used in aeroplanes. Diesel engines are generally employed in heavy duty vehicles such as buses, trucks and earth moving machines.
Difference between SI and CI engines
Spark Ignition Engines (S.I. Engine)

It works on the Otto cycle. In the Otto cycle, the energy supply and rejection
occur at constant volume, and the compression and expansion occur
isentropically. Engines working on the Otto cycle use petrol as the fuel and
incorporate a carburettor for the preparation of a mixture of air and fuel
vapour in the correct proportions for rapid combustion, and a spark plug for
the ignition of the mixture at the end of compression. The compression ratio
is kept between 5 and 10.5. The engine generally has a higher speed compared
to a C.I. engine, with low maintenance cost but high running cost. These
engines are also called spark ignition engines or simply S.I. engines.

SI engine P-V and T-S Diagram

Compression Ignition Engines (C.I. Engine)


It works on the Diesel cycle. In diesel engines, the energy addition occurs at
constant pressure but the energy rejection at constant volume. Here the spark
plug is replaced by a fuel injector. The compression ratio is from 12 to 25.
The engine generally has a lower speed compared to an S.I. engine, with high
maintenance cost but low running cost. These are known as compression
ignition (C.I.) engines, as ignition is accomplished by the heat of compression.

CI Engine P-V and T-S Diagram

The upper limit of compression ratio in an S.I. engine is fixed by the
anti-knock quality of the fuel, while in a C.I. engine the upper limit of
compression ratio is set by the thermal and mechanical stresses the cylinder
material can withstand. That is why the compression ratio of a C.I. engine is
higher than that of an S.I. engine.

The dual cycle is a combination of the above two cycles, where part of the
energy is added at constant volume and the rest at constant pressure.

The following are the main differences between SI and CI Engines:

1. An S.I. engine requires a spark plug; a C.I. engine requires no spark plug.
2. In an S.I. engine, a mixture of air and fuel is introduced into the cylinder from the carburettor. In a C.I. engine, only air is introduced into the cylinder.
3. An S.I. engine compresses air and fuel together in the cylinder. In a C.I. engine, only air is compressed in the cylinder.
4. In an S.I. engine no fuel injection pump is used, whereas in a C.I. engine a fuel pump is used to inject the fuel.
5. In an S.I. engine, fuel is mixed with air before compression starts. In a C.I. engine, fuel is mixed with air only once compression is complete.
6. The compression ratio of an S.I. engine is low; that of a C.I. engine is high.
7. An S.I. engine uses a highly volatile liquid fuel; a C.I. engine uses a less volatile liquid fuel.
8. An S.I. engine is less efficient; a C.I. engine is more efficient.
9. The fuel used in an S.I. engine is more expensive; cheaper fuels are used in C.I. engines.
10. For the same power, an S.I. engine has higher fuel consumption, whereas a C.I. engine has lower fuel consumption.
11. S.I. engines are more compact and light; C.I. engines are heavier and stronger due to the higher pressures involved.
12. The initial cost of an S.I. engine is less; that of a C.I. engine is high.
13. S.I. engines run smoothly, whereas some roughness is encountered in C.I. engine operation, especially when the engine runs at high speed and low loads.
I.C. Engines: Important Definitions and Formulas

Top Dead Centre (T.D.C):-


When the piston is at its topmost position, i.e. the piston is closest to the
cylinder head, it is called top dead centre.

Bottom Dead Centre (B.D.C):-


When the piston is at its lowest position, i.e. the piston is farthest from the
cylinder head, it is called bottom dead centre.

Bore:-
The inner diameter of the engine cylinder is known as the bore. It can be
measured precisely with a vernier calliper or bore gauge. As the engine
cylinder wears out with the passage of time, the bore diameter increases, the
piston becomes loose in the cylinder, and power loss occurs. To correct this
problem, re-boring to the next standard size is done and a new piston is
fitted. Bore is denoted by the letter ‘D’. It is usually measured in mm
(S.I. units) or inches (imperial units). It is used to calculate the engine
capacity (cylinder volume).

Stroke Length or Stroke:-


The distance travelled by the piston from its topmost position (top dead
centre, TDC) to its bottommost position (bottom dead centre, BDC) is called
the stroke; it is equal to two times the crank radius. It is denoted by the
letter ‘L’ and is usually measured in mm or inches. The swept volume can then
be calculated as follows (with L = 2r):
VS = (πD²/4) x L
If D is in cm and L is also in cm, then the units of VS will be cm³, usually
written as cubic centimetres or c.c.
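For instance, here is a small sketch of that calculation for one cylinder and for the whole engine; the bore, stroke and cylinder count are assumed example values.

import math

def swept_volume_cc(bore_cm: float, stroke_cm: float) -> float:
    """Swept volume of one cylinder, VS = (pi * D^2 / 4) * L, in cm^3 (c.c.)."""
    return math.pi * bore_cm ** 2 / 4 * stroke_cm

# Assumed example: 7.5 cm bore, 8.0 cm stroke, 4 cylinders.
vs = swept_volume_cc(7.5, 8.0)
print(round(vs, 1), "c.c. per cylinder")         # ~353.4 c.c.
print(round(vs * 4, 1), "c.c. engine capacity")  # ~1413.7 c.c. (about 1.4 litres)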

Clearance Volume:-
The volume above the piston when it is at T.D.C is called the clearance
volume. It is provided to accommodate the engine valves, etc., and is
referred to as VC.

Swept Volume or Piston Displacement:-


The volume swept by the piston while moving from T.D.C. to B.D.C. is called the
swept volume. It is referred to as VS.

Therefore, the total volume of the engine cylinder


V =VS + VC

Compression Ratio:-
It is the ratio of the volume above the piston at B.D.C. to the volume above the
piston at T.D.C., i.e. the ratio of the total cylinder volume (VS + VC) to the
clearance volume (VC).
It is calculated as follows
rk = Total volume/Clearance volume
rk = (VS + VC)/VC
For petrol engine, it ranges from 8 to 12.
For diesel engine, it ranges from 15 to 24.
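A quick numerical sketch of these geometric relations in Python; the bore, stroke and clearance figures used are illustrative assumptions, not values from the text.

```python
import math

def swept_volume_cc(bore_mm: float, stroke_mm: float) -> float:
    """Swept volume VS = (pi * D^2 / 4) * L, returned in cubic centimetres."""
    bore_cm = bore_mm / 10.0
    stroke_cm = stroke_mm / 10.0
    return (math.pi * bore_cm ** 2 / 4.0) * stroke_cm

def compression_ratio(vs_cc: float, vc_cc: float) -> float:
    """rk = (VS + VC) / VC."""
    return (vs_cc + vc_cc) / vc_cc

# Assumed example figures: 74 mm bore, 58 mm stroke, 31 cc clearance volume.
vs = swept_volume_cc(74.0, 58.0)      # about 249 cc per cylinder
rk = compression_ratio(vs, 31.0)      # about 9.0, typical of a petrol engine
print(f"VS = {vs:.1f} cc, rk = {rk:.2f}")
```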

Power:-
It is the work-done in a given period of time. More power is required to do
the same amount of work in a lesser time.

Indicated power (I.P):-


The power developed inside the engine cylinder is called the indicated power.
It is expressed in kW and is given by the area under the engine indicator diagram.

Indicated power of an engine is given by, I.P = Pim L A N K/60,000
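A minimal sketch of this relation in Python, using the symbols defined at the end of this section; the operating values (Pim = 6 bar, 74 mm bore, 58 mm stroke, 3000 rpm, four cylinders, four-stroke) are assumed purely for illustration.

```python
import math

def indicated_power_kw(pim_pa, stroke_m, bore_m, rpm, cylinders, four_stroke=True):
    """I.P = Pim * L * A * N * K / 60,000 (kW); N is power strokes per minute."""
    area = math.pi * bore_m ** 2 / 4.0        # A = pi * D^2 / 4
    n = rpm / 2.0 if four_stroke else rpm     # N = rpm/2 for a 4-stroke engine
    return pim_pa * stroke_m * area * n * cylinders / 60_000.0

# Assumed values: Pim = 6 bar = 6e5 N/m^2.
print(indicated_power_kw(6e5, 0.058, 0.074, 3000, 4))   # about 15 kW
```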

Indicator diagram:-
An indicator diagram is a graph of pressure against volume, the former taken on
the vertical axis and the latter on the horizontal axis. It is obtained by an
instrument known as an indicator. Indicator diagrams are of two types:
(a) Theoretical or hypothetical
(b) Actual.
The theoretical or hypothetical indicator diagram is always larger than the
actual one, since losses are neglected in the former.
The area of the indicator diagram represents the magnitude of the net work-
done by the system in one engine cycle.
The area of the diagram = ad
The length of the diagram = ld
Therefore, the mean effective pressure (m.e.p) is defined as
Pm = (Area of Indicator diagram/Length of diagram) x constant
= (ad / ld) x k

Work-done in one engine cycle = Pm A L


For 2-stroke engine, work-done in one min. = Pm. A. L. N
For 4-stroke engine, work-done in one min. = Pm. A. L. N/2

Diagram Factor:-
The ratio of the area of the actual indicator diagram to the theoretical one is
called diagram factor.

Brake Power (B.P):-


This is the actual power available at the crankshaft. It equals the indicated
power minus the various power losses in the engine, such as friction and pumping
losses. It is measured with a dynamometer and expressed in kW.

Brake power of an engine is given by, B.P = T. ω


Or, B.P = 2 π NT/60,000
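A short sketch of the brake-power relation, assuming the torque T has been read off a dynamometer; the figures are illustrative only.

```python
import math

def brake_power_kw(torque_nm: float, rpm: float) -> float:
    """B.P = 2 * pi * N * T / 60,000 (kW), with T in N·m and N in rev/min."""
    return 2.0 * math.pi * rpm * torque_nm / 60_000.0

# Assumed dynamometer reading: 40 N·m at 3000 rpm.
print(f"B.P = {brake_power_kw(40.0, 3000.0):.1f} kW")   # about 12.6 kW
```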
Mean Effective Pressure (Pm or Pmef ):-
Mean effective pressure is that hypothetical constant pressure which is
assumed to be acting on the piston during its expansion stroke producing the
same work output as that from the actual cycle.

Or,
As the piston performs the power stroke, the cylinder pressure decreases. Thus it
is convenient to refer to an average effective pressure acting throughout the
whole power stroke. It is expressed in bars.

Mathematically,
Pm = Work Output/ Swept volume = Wnet /(V₁ - V₂)
It can also be shown as
Pm = (Area of Indicator diagram/Length of diagram) x
constant
= (ad / ld) x k
The constant depends on the mechanism used to get the indicator diagram
and has the unit, bar/m.

Indicated Mean Effective Pressure (Pim)


Indicated power of an engine is given by
I.P = Pim L A N K/60,000

Therefore, Pim = (60,000 x I.P)/L A N K


Brake Mean Effective Pressure (Pbm)
Similarly, the brake mean effective pressure is given by
Pbm = (60,000 x B.P)/L A N K
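The same relation can be inverted to recover the brake mean effective pressure from a measured brake power; a sketch with assumed values (the same illustrative engine as above) follows.

```python
import math

def brake_mep_pa(bp_kw, stroke_m, bore_m, rpm, cylinders, four_stroke=True):
    """Pbm = 60,000 * B.P / (L * A * N * K), returned in N/m^2."""
    area = math.pi * bore_m ** 2 / 4.0
    n = rpm / 2.0 if four_stroke else rpm
    return 60_000.0 * bp_kw / (stroke_m * area * n * cylinders)

# Assumed: 12.6 kW brake power, 74 mm bore, 58 mm stroke, 3000 rpm, 4 cylinders.
print(brake_mep_pa(12.6, 0.058, 0.074, 3000, 4) / 1e5, "bar")   # about 5 bar
```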

Engine Torque:-
It is the force of rotating action about the crank axis at any given instant of
time.
It is given by, T = F. r

Where;
I.P = Indicated power (kW)
B.P = Brake power (kW)
Pim = Indicated mean effective pressure (N/m²)
Pbm = Brake mean effective pressure (N/m²)
L = Length of the stroke (m)
A = (πD²/4) = Area of the piston (m²)
N = Number of power strokes per minute
= rpm for 2-stroke engines; = rpm/2 for 4-stroke engines
K = Number of cylinders
T = Engine torque (Nm)
F = Force applied to the crank (N)
r = Effective crank radius (m)
ω = Angular velocity of the crankshaft (rad/sec)
Mechanical Efficiency of Engine: ηmech = B.P/I.P

Otto cycle Efficiency: ηotto = 1 - 1/rk^(ɣ-1)

Diesel cycle Efficiency: ηdiesel = 1 - [1/rk^(ɣ-1)] x [(rc^ɣ - 1)/(ɣ(rc - 1))]

Where, rk = v₁/v₂ = Compression ratio
re = v₄/v₃ = Expansion ratio
rc = v₃/v₂ = Cut-off ratio
Also, rk = re x rc.
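Both air-standard efficiency formulas translate directly into code; the compression and cut-off ratios below are typical assumed values, not data from the text.

```python
def otto_efficiency(rk: float, gamma: float = 1.4) -> float:
    """Air-standard Otto efficiency: 1 - 1/rk^(gamma - 1)."""
    return 1.0 - 1.0 / rk ** (gamma - 1.0)

def diesel_efficiency(rk: float, rc: float, gamma: float = 1.4) -> float:
    """Air-standard Diesel efficiency:
    1 - (1/rk^(gamma-1)) * (rc^gamma - 1) / (gamma * (rc - 1))."""
    return 1.0 - (1.0 / rk ** (gamma - 1.0)) * (rc ** gamma - 1.0) / (gamma * (rc - 1.0))

# Assumed ratios: petrol engine rk = 9; diesel engine rk = 18 with cut-off ratio rc = 2.
print(f"Otto   (rk=9):        {otto_efficiency(9.0):.3f}")          # about 0.585
print(f"Diesel (rk=18, rc=2): {diesel_efficiency(18.0, 2.0):.3f}")  # about 0.632
```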

Air Standard Otto Cycle


Otto Cycle (1876) (Used in S.I. Engines):

Fig: Engine and Indicator diagram

Processes:
0-1 = Suction: The inlet valve is open, the piston moves to the right,
admitting fuel-air mixture into the cylinder at constant pressure.
1-2 = Isentropic compression: Both the valves are closed; the piston
compresses the combustible mixture to the minimum volume.
2-3 = Heat addition at constant volume (combustion): The mixture is then
ignited by means of a spark, combustion takes place and there is an increase
of temperature and pressure.
3-4 = Isentropic expansion: The products of combustion do work on the
piston which moves to the right and pressure and temperature of the gases
decreases.
4–1 = Constant volume heat rejection (Blow down): The exhaust valve opens,
and the pressure drops to the initial pressure.
1-0 = Exhaust: With the exhaust valve open, the piston moves inwards to
expel the combustion products from the cylinder at constant pressure.

The series of processes as described above constitute a mechanical cycle and


not a thermodynamic cycle. The cycle is completed in four strokes of the
piston.

Fig: Air standard otto cycle P-V and T-S diagram


This cycle consists of two reversible adiabatic processes and two constant
volume processes as shown in figure on P-V and T-S diagrams.
The process 1-2 is reversible adiabatic compression, the process 2-3 is heat
addition at constant volume, the process 3-4 is reversible adiabatic expansion
and the process 4-1 is heat rejection at constant volume.

The cylinder is assumed to contain air as the working substance and heat is
supplied at the end of compression, and heat is rejected at the end of
expansion to the sink and the cycle is repeated.

Air is compressed in process 1-2 reversibly and adiabatically. Heat is then


added to air reversibly at constant volume in process 2-3. Work is done by air
in expanding reversibly and adiabatically in process 3-4. Heat is then rejected
by air reversibly at constant volume in process 4-1, and the system (air)
comes back to its initial state. Heat transfer processes have been substituted
for the combustion and blow down processes of the engine. The intake and
exhaust of the engine cancel each other.

Let ‘m’ be the fixed mass of air undergoing the cycle of operation. Therefore,
Heat supplied, Q₁ = Q₂₋₃ = m cv (T₃ - T₂)
Heat rejected, Q₂ = Q₄₋₁ = m cv (T₄ - T₁)
Therefore, Efficiency, ɳ = 1 - (Q₂/Q₁) = 1 - [m cv (T₄ - T₁)/m cv (T₃ - T₂)]
= 1 - [(T₄ - T₁)/(T₃ - T₂)]    ……eq. (i)

Process 1-2: T₂/T₁ = (V₁/V₂)^(ɣ-1)

Process 3-4: T₃/T₄ = (V₄/V₃)^(ɣ-1) = (V₁/V₂)^(ɣ-1)

Therefore, T₂/T₁ = T₃/T₄ or, T₃/T₂ = T₄/T₁
or, (T₃/T₂) - 1 = (T₄/T₁) - 1

Therefore, (T₄ - T₁)/(T₃ - T₂) = T₁/T₂ = (V₂/V₁)^(ɣ-1) … substituting this value in eq. (i)

From eq. (i), we have, ɳ = 1 - (V₂/V₁)^(ɣ-1)

or, ɳotto = 1 - 1/(rk)^(ɣ-1)

Where, rk is called the compression ratio and is given by,


rk = volume at the beginning of compression/volume at the end of
compression
= V₁/V₂ = v₁/v₂
The efficiency of air standard otto cycle is thus a function of the compression
ratio only. The higher the compression ratio, the higher is the efficiency.
It is independent of the temperature levels at which the cycle operates. The
compression ratio can’t, however be increased beyond a certain limit, because
of a noisy and destructive phenomenon, known as detonation.

This limit also depends upon the fuel, the engine design and the operating conditions.
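A minimal sketch of the air-standard Otto cycle calculation described above, assuming cold-air properties (cv = 0.718 kJ/kg·K, ɣ = 1.4) and illustrative values for T₁, rk and the heat added per kg of air; it confirms that the efficiency found from the heat quantities equals 1 - 1/rk^(ɣ-1).

```python
def otto_cycle_states(t1_k, rk, q_in_kj_per_kg, cv=0.718, gamma=1.4):
    """Air-standard Otto cycle temperatures and efficiency for 1 kg of air."""
    t2 = t1_k * rk ** (gamma - 1.0)       # 1-2: isentropic compression
    t3 = t2 + q_in_kj_per_kg / cv         # 2-3: constant-volume heat addition
    t4 = t3 / rk ** (gamma - 1.0)         # 3-4: isentropic expansion
    q_out = cv * (t4 - t1_k)              # 4-1: constant-volume heat rejection
    eta = 1.0 - q_out / q_in_kj_per_kg
    return t2, t3, t4, eta

# Assumed inputs: T1 = 300 K, rk = 8, 1000 kJ/kg heat added.
t2, t3, t4, eta = otto_cycle_states(300.0, 8.0, 1000.0)
print(f"T2 = {t2:.0f} K, T3 = {t3:.0f} K, T4 = {t4:.0f} K, efficiency = {eta:.3f}")
# efficiency ≈ 0.565, the same as 1 - 1/8^0.4
```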
Governing of IC Engines
The process of providing any arrangement, which will keep the engine speed
constant (according to the changing load conditions) is known as governing
of I.C. engines. Though there are many methods for the governing of I.C.
engines, yet the following are important :

1. Hit and miss governing. In this system of governing, whenever the engine
starts running at higher speed (due to decreased load), some explosions are
omitted or missed. This is done with the help of a centrifugal governor. This
method of governing is widely used for I. C. engines of smaller capacity or
gas engines.
2. Qualitative governing. In this system of governing, a control valve is
fitted in the fuel delivery pipe, which controls the quantity of fuel to be mixed
in the charge. The movement of control valve is regulated by the centrifugal
governor through rack and pinion arrangement.
3. Quantitative governing. In this system of governing, the quality of charge
(i.e. air-fuel ratio of the mixture) is kept constant. But the quantity of mixture
supplied to the engine cylinder is varied by means of a throttle valve which
is regulated by the centrifugal governor through rack and pinion arrangement.
4. Combination system of governing. In this system of governing, the
qualitative and quantitative methods of governing are combined together.

Carburetor of an IC Engine

The carburetor is a device for atomising and vaporising the fuel and mixing it
with the air in the varying proportions to suit the changing operating
conditions of the engine. The process of breaking up and mixing the fuel with
the air is called carburetion.
* Atomisation is the mechanical breaking up of the liquid fuel into small
particles so that every minute particle of the fuel is surrounded by air.
** Vaporization is a change of state of fuel from a liquid to vapour.
Air Standard Cycles
Internal combustion engines, in which the combustion of fuel occurs in the
engine cylinder itself, are non-cyclic heat engines. The temperature reached due
to the heat evolved by the combustion of fuel inside the cylinder is so high that
the cylinder must be cooled by water circulation around it to avoid rapid
deterioration. The working fluid, the fuel-air mixture, undergoes a permanent
chemical change due to combustion and the product of combustion after
doing work are thrown out of the engine, and the fresh charge is taken. So the
working fluid does not undergo a complete thermodynamic cycle.

Most of the power plant operates in a thermodynamic cycle i.e. the working
fluid undergoes a series of processes and finally returns to its original state.
Hence, in order to compare the efficiencies of various cycles, a hypothetical
efficiency called air standard efficiency is calculated.

If air is used as the working fluid in a thermodynamic cycle, then the cycle is
known as “Air Standard Cycle”.

To simplify the analysis of I.C. engines, air standard cycles are conceived.
Assumptions:
1. The working medium is assumed to be a perfect gas and follows the
relation pV = mRT or p = ρRT.
2. A certain fixed mass of air operates in a complete thermodynamic cycle,
i.e. there is no change in the mass of the working medium.
3. Heat is added and rejected through external heat reservoirs.
4. All the processes that constitute the cycle are reversible.
5. The working medium has constant specific heats.
Spark Plug in IC Engines

A spark plug in IC engines is a device used to produce the spark for igniting the
charge in petrol engines. It is always screwed into the cylinder head. It is
usually designed to withstand pressures up to 35 bar and to operate at voltages
of 10,000 to 30,000 volts. The spark plug gap is kept between 0.3 mm and 0.7 mm.
Supercharging of IC Engines
Supercharging of IC Engines - It is the process of increasing the mass (or in
other words density) of the air fuel mixture (in spark ignition engines) or air
(in compression ignition engines) induced into the engine cylinder. This is
usually done with the help of a compressor or blower known as supercharger.
It has been experimentally found that the supercharging increases the power
developed by the engine. It is widely used in aircraft engines, as the mass of
air sucked in the engine cylinder decreases at very high altitudes. This
happens, because atmospheric pressure decreases with the increase in
altitude.

Following are the objects of supercharging the engines :


1. To reduce mass of the engine per brake power (as required in
aircraft engines).
2. To maintain power of air craft engines at high altitudes where less
oxygen is available for combustion.
3. To reduce space occupied by the engine (as required in marine
engines).
4. To reduce consumption of lubricating oil (as required in all types of
engines).
5. To increase the power output of an engine when greater power is
required (as required in racing cars and other engines).

Two Stroke vs Four Stroke Engines


Two Stroke vs Four Stroke Engines - Advantages and Disadvantages of Two
Stroke over Four Stroke Cycle Engines.
In a two stroke engine, the working cycle is completed in two strokes of the
piston or one revolution of the crankshaft. In a four stroke engine, the
working cycle is completed in four strokes of the piston or two revolutions of
the crankshaft.
The following are the advantages and disadvantages of two stroke over four
stroke cycle engines :
Advantages
1. A two stroke cycle engine gives twice the number of power strokes
than the four stroke cycle engine at the same engine speed.
Theoretically, a two stroke cycle engine should develop twice the
power as that of a four stroke cycle engine.
2. For the same power developed, a two stroke cycle engine is lighter,
less bulky and occupies less floor area.
3. A two stroke cycle engine has a lighter flywheel and gives higher
mechanical efficiency than a four stroke cycle engine.
Disadvantages
1. The thermal efficiency of a two stroke cycle engine is less than that
of a four stroke cycle engine, because a two stroke engine has less
compression ratio than that of a four stroke cycle engine.
2. The overall efficiency of a two stroke cycle engine is also less than
that of a four stroke cycle engine.
3. The consumption of lubricating oil is large in a two stroke cycle
engine because of high operating temperature.

Octane Number - Rating of S.I. Engine


Fuels
What is Octane Number? How rating of SI Engines (Spark ignition engines)
is done using an octane number?
The hydrocarbon fuels used in spark ignition (S.I.) engine have a tendency to
cause engine knock when the engine operating conditions become severe.
The knocking tendency of a fuel in S. I. engines is generally expressed by its
octane number. The percentage, by volume, of iso-octane in a mixture of iso-
octane and normal heptane, which exactly matches the knocking intensity of
a given fuel, in a standard engine, under given standard operating conditions,
is termed as the octane number rating of that fuel. Thus, if a mixture of 50
percent iso-octane and 50 percent normal heptane matches the fuel under test,
then this fuel is assigned an octane number rating of 50. If a fuel matches in
knocking intensity a mixture of 75 percent iso-octane and 25 percent normal
heptane, then this fuel would be assigned an octane number rating of 75. This
octane number rating is an expression which indicates the ability of a fuel to
resist knock in a spark ignition engine.

Since iso-octane is a very good anti-knock fuel, therefore it is assigned a


rating of 100 octane number. On the other hand, normal heptane has very
poor anti-knock qualities; therefore, it is given a rating of zero octane
number. These two fuels, i.e., iso-octane and normal heptane are known as
primary reference fuels. It may be noted that the higher the octane number rating
of a fuel, the greater will be its resistance to knock and the higher the
permissible compression ratio. Since the power output and specific fuel consumption are
functions of compression ratio, therefore we may say that these are also
functions of octane number rating. This fact indicates the extreme importance
of the octane number rating in fuels for S. I. engines.

Note: The octane number of petrol, generally available, is 80 to 100.
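In the standard test the sample is in practice bracketed between two reference blends and its rating interpolated from the knockmeter readings. The Python sketch below shows only that interpolation step; the readings and blend strengths are hypothetical and the procedure is greatly simplified compared with the full test method.

```python
def octane_by_bracketing(ki_sample, on_low, ki_low, on_high, ki_high):
    """Interpolate a fuel's octane number from knockmeter readings of two
    bracketing reference blends (the lower-octane blend knocks harder and
    therefore gives the higher reading)."""
    if not (ki_high <= ki_sample <= ki_low):
        raise ValueError("sample reading must lie between the reference readings")
    fraction = (ki_low - ki_sample) / (ki_low - ki_high)
    return on_low + fraction * (on_high - on_low)

# Hypothetical readings: 88-octane and 92-octane reference blends give
# knockmeter readings of 58 and 42; the test fuel reads 50.
print(octane_by_bracketing(50, on_low=88, ki_low=58, on_high=92, ki_high=42))  # 90.0
```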

Valve Timing Diagram of Diesel Engine


Valve Timing Diagram for a Four Stroke Cycle Diesel Engine - The diesel
engines are also known as compression ignition engines. The valve timing
diagram for a four stroke cycle diesel engine is shown in Figure below:

The following particulars are important for a four stroke cycle diesel engine
regarding valve timing diagram:
(a) The inlet valve opens at 10° — 20° before TDC and closes at 25° — 40°
after BDC.
(b) The fuel valve opens at 10° — 15° before TDC and closes at 15°— 20°
after TDC.
(c) The compression starts at 25° — 40° after BDC and ends at 10°— 15°
before TDC.
(d) The expansion starts at 10° — 15° after TDC and ends at 30° — 50°
before BDC.
(e) The exhaust valve opens at 30° — 50° before BDC and closes at 10° —
15° after TDC.
Note: In diesel engines, the fuel is injected in the form of very fine spray
into the engine cylinder, which gets ignited due to high temperature of the
compressed air.

Thermodynamic Tests for I.C. Engines
Why thermodynamic tests are done on IC Engines? An internal combustion
engine (IC Engine) is put to thermodynamic tests, so as to determine the
following quantities:

Indicated mean effective pressure


Indicated power of an IC Engine
Brake power of IC Engine
Efficiency of an IC Engine
Lubrication of IC Engines

The lubrication of IC engines is required to reduce wear and tear, damp
vibrations, prevent overheating, and keep the moving parts clean. The lubrication
of IC engines has the following advantages:
1. It reduces wear and tear of the moving parts.
2. It damps down the vibrations of the engine.
3. It dissipates the heat generated from the moving parts due to
friction.
4. It cleans the moving parts.
5. It makes the piston gas-tight.
Testing of IC Engines

Testing of IC Engines - why is it done? The purposes of testing an internal
combustion engine (IC Engine) are:
(a) To determine information which cannot be obtained by calculation.
(b) To confirm the data used in design, the validity of which is doubtful.
(c) To satisfy the customer regarding the performance of the engine.
An internal combustion engine (IC Engine) is put to thermodynamic tests, so
as to determine efficiency and performance indicators.

Indicated power of an IC Engine


The indicated power of an IC engine (briefly written as I.P.) is the power
actually developed within the engine cylinder. Mathematically,
I.P. = (100 × pm × L × A × n × K) / 60 kW
where
K = Number of cylinders,
pm = Actual mean effective pressure in bar (1 bar = 100 kN/m2),
L = Length of stroke in meters,
A = Area of the piston in m2,
n = Number of working strokes per minute
= Speed of the engine for two stroke cycle engine
= Half the speed of the engine for four stroke cycle engine.
Note: The I.P. of a multi-cylinder spark ignition engine is determined by the
Morse test.
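A sketch of how the Morse test estimate works, assuming brake power is measured first with all cylinders firing and then with each cylinder cut out in turn while the speed is held constant; the readings below are hypothetical.

```python
def morse_test_ip(bp_all_kw, bp_with_cylinder_cut_kw):
    """Morse test: I.P. of cylinder i ≈ B.P.(all firing) - B.P.(cylinder i cut out);
    the total I.P. is the sum over all cylinders."""
    ip_per_cylinder = [bp_all_kw - bp_cut for bp_cut in bp_with_cylinder_cut_kw]
    return sum(ip_per_cylinder), ip_per_cylinder

# Hypothetical readings for a 4-cylinder engine (kW): 40.0 with all cylinders
# firing; 27.5, 27.0, 27.8 and 27.2 with cylinders 1-4 cut out in turn.
total_ip, per_cylinder = morse_test_ip(40.0, [27.5, 27.0, 27.8, 27.2])
print(total_ip, per_cylinder)   # about 50.5 kW total indicated power
print(40.0 / total_ip)          # mechanical efficiency B.P/I.P ≈ 0.79
```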

Scavenging of IC Engines
The scavenging, in an internal combustion engine (IC Engine), is the process
of removing the burnt gases from the combustion chamber of the engine
cylinder. Though there are many types of scavenging, yet the following are
important from the subject point of view:
1. Crossflow scavenging. In this method, the transfer port (or inlet port for
the engine cylinder) and exhaust port are situated on the opposite sides of the
engine cylinder (as in the case of two stroke cycle engines).
2. Back flow or loop scavenging. In this method, the inlet and outlet ports are
situated on the same side of the engine cylinder.
3. Uniflow scavenging. In this method, the fresh charge, while entering from
one side (or sometimes two sides) of the engine cylinder pushes out the gases
through the exit valve situated on the top of the cylinder.
Note: The scavenging efficiency of a four stroke cycle diesel engine is
between 95 and 100 percent.

Valve Timing Diagram of Petrol Engine


Valve Timing Diagram for a Four Stroke Cycle Petrol Engine - The petrol
engines are also known as spark ignition engines. The valve timing diagram
for a four stroke cycle petrol engine is shown in Figure below:
The following particulars are important for a four stroke cycle petrol engine
regarding valve timing diagram :
(a) The inlet valve opens (IVO) at 10° — 20° before top dead center (TDC)
and closes 30° — 40° after bottom dead center (BDC).
(b) The compression of charge starts at 30° — 40° after BDC and ends at 20°
— 30° before TDC.
(c) The ignition (IGN) of charge takes place at 20°— 30° before TDC.
(d) The expansion starts at 20° — 30° before TDC and ends at 30° — 50°
before BDC.
(e) The exhaust valve opens (EVO) at 30° — 50° before BDC and closes at
10° —15° after TDC.
Notes:
(i) The inlet valve of a four stroke I. C. engine remains open for 230°.
(ii) The charge is compressed when both the valves (i.e. inlet valve and
exhaust valve) are closed.
(iii) The charge is ignited with the help of a spark plug.
(iv) The pressure inside the engine cylinder is above the atmospheric pressure
during the exhaust stroke.
Sequence of Operations in IC Engine
Each stroke in IC engines forms a sequence of operations in one cycle of IC
Engines i.e suction stroke, compression stroke, expansion stroke, and exhaust
stroke.

Strictly speaking, when an engine is working continuously, we may consider


a cycle starting from any stroke. We know that when the engine returns back
to the stroke where it started, we say that one cycle has completed. The
following sequence of operation in a cycle is widely used.
1. Suction stroke. In this stroke, the fuel vapor in correct proportion, is
supplied to the engine cylinder.
2. Compression stroke. In this stroke, the fuel vapor is compressed in
the engine cylinder.
3. Expansion or working stroke. In this stroke, the fuel vapor is fired
just before the compression is complete. It results in the sudden rise
of pressure, due to expansion of the combustion products in the
engine cylinder. This sudden rise of pressure pushes the piston with
a great force and rotates the crankshaft. The crankshaft, in turn,
drives the machine connected to it.
4. Exhaust stroke. In this stroke, the burnt gases (or combustion products)
are exhausted from the engine cylinder, so as to make space available for
the fresh fuel vapor.

AUTOMOBILE FUEL AND LUBRICANTS
Petroleum
Petroleum is a naturally occurring, yellowish-black liquid found in
geological formations beneath the Earth's surface. It is commonly refined into
various types of fuels. Components of petroleum are separated using a
technique called fractional distillation, i.e. separation of a liquid mixture into
fractions differing in boiling point by means of distillation, typically using a
fractionating column.
It consists of naturally occurring hydrocarbons of various molecular weights
and may contain miscellaneous organic compounds. The name petroleum
covers both naturally occurring unprocessed crude oil and petroleum products
that are made up of refined crude oil. A fossil fuel, petroleum is formed when
large quantities of dead organisms, mostly zooplankton and algae, are buried
underneath sedimentary rock and subjected to both intense heat and pressure.
Petroleum has mostly been recovered by oil drilling (natural petroleum
springs are rare). Drilling is carried out after studies of structural geology (at
the reservoir scale), sedimentary basin analysis, and reservoir characterisation
(mainly in terms of the porosity and permeability of geologic reservoir
structures) have been completed. It is refined and separated, most easily by
distillation, into numerous consumer products, from gasoline (petrol) and
kerosene to asphalt and chemical reagents used to make plastics, pesticides
and pharmaceuticals. Petroleum is used in manufacturing a wide variety of
materials, and it is estimated that the world consumes about 95 million
barrels each day.
The use of petroleum as fuel causes global warming and ocean acidification.
According to the UN's Intergovernmental Panel on Climate Change, without
fossil fuel phase-out, including petroleum, there will be "severe, pervasive,
and irreversible impacts for people and ecosystems".
Petroleum Refining and Formation Process
Petroleum refining or oil refining is an industrial process in which crude oil
extracted from the ground is transformed and refined into useful products such as
liquefied petroleum gas (LPG), kerosene, asphalt base, jet fuel, gasoline,
heating oil and fuel oils. Crude oil consists of hydrocarbon molecules. There are
three steps in the petroleum refining process: separation, conversion, and
treatment. Travel by road vehicle would not be possible without petrol or diesel
oil; the following paragraphs describe how petrol and other products are obtained
from crude oil in a refinery.

What is Petroleum?

Petroleum is a substance that occurs naturally. It is a dark liquid. It occurs


beneath the earth’s surface. A large number of products like petrol, diesel,
lubricating oil, etc derive from petroleum. Its compounds can be separated
with the help of fractional distillation. In 2006, fossil fuels dominated world
primary energy production, with petroleum accounting for 36.8%, coal for 26.6%
and natural gas for 22.9%.

Formation of Petroleum
It takes around millions of years for petroleum to form. It is formed due to
the presence of decomposed carcasses of dead animals beneath the surface of
the earth. These carcasses of dead animals are subjected to extreme pressure
and heat. Over centuries, millions of animals lived, died and became fossilized,
just as in the case of plant-based matter. Similarly, oceanic creatures sank to
the bottom of the ocean and were buried under the sand and rocks.

Decayed by bacteria, the decomposed organic matter was buried deeper and deeper
over the years. Over millions of years, high temperature, high pressure and the
absence of air converted this dead organic matter into petroleum and coal.
or crude oil. Crude oil is extracted from oil wells, these wells can be very
deep. The oil extracted is later refined to form petrol, diesel, aviation fuel,
paraffin wax, lubrication oil, etc.

Petroleum Refining Process


Deposition of petroleum occurs with natural gas in the rocks called oil wells
from where it is taken out by drilling. Refining is a process where the
separation of various compounds of crude oil occurs. Fractional distillation is
a process used to separate its compounds.

Crude oil procured from an oil well is a mixture of many liquids. Each liquid
evaporates at a different temperature, its boiling point. The crude oil is heated
to a temperature of about 400 degrees Celsius, fed in at the bottom of the
fractionating column and heated further. The liquid with the lowest boiling point
changes into vapour first and condenses. At a higher temperature, the next most
volatile liquid changes into the vapour state and rises.

Petroleum Refining Process Flow Chart


Different vapors rise at different temperature and as they condense, they are
collected in different trays. Different liquids are collected in different trays. A
few gases reach the top of the column without condensing.

Products produced in Petroleum Refining Process


· Petroleum Gas: Generally, liquefied petroleum gas is useful for
domestic fuel.
· Gasoline: Petrol is obtained from this fraction.
· Kerosene: It is used as domestic fuel and also as fuel in jet engines.
· Diesel oil or light oil: It is useful in the automobile industry.
· Heavy Oil or Lubricating Oil: This type of oil is used in making
lubricating oils.
· Fuel Oil: It is essential for ships, central heating, and factories.
· Residue: We can procure products like paraffin wax, bitumen from
this residue. It is useful for making roads and roofing.
Visbreaking, thermal cracking, and coking
Since World War II the demand for light products (e.g., gasoline, jet, and
diesel fuels) has grown, while the requirement for heavy industrial fuel oils
has declined. Furthermore, many of the new sources of crude petroleum
(California, Alaska, Venezuela, and Mexico) have yielded heavier crude oils
with higher natural yields of residual fuels. As a result, refiners have become
even more dependent on the conversion of residue components into lighter
oils that can serve as feedstock for catalytic cracking units.
As early as 1920, large volumes of residue were being processed in
visbreakers or thermal cracking units. These simple process units basically
consist of a large furnace that heats the feedstock to the range of 450 to 500
°C (840 to 930 °F) at an operating pressure of about 10 bars (1 MPa), or
about 150 psi. The residence time in the furnace is carefully limited to
prevent much of the reaction from taking place and clogging the furnace
tubes. The heated feed is then charged to a reaction chamber, which is kept at
a pressure high enough to permit cracking of the large molecules but restrict
coke formation. From the reaction chamber the process fluid is cooled to
inhibit further cracking and then charged to a distillation column for
separation into components.
Visbreaking units typically convert about 15 percent of the feedstock to
naphtha and diesel oils and produce a lower-viscosity residual fuel.
Thermal cracking units provide more severe processing and often convert
as much as 50 to 60 percent of the incoming feed to naphtha and light
diesel oils.
Coking is severe thermal cracking. The residue feed is heated to about
475 to 520 °C (890 to 970 °F) in a furnace with very low residence time
and is discharged into the bottom of a large vessel called a coke drum for
extensive and controlled cracking. The cracked lighter product rises to the
top of the drum and is drawn off. It is then charged to the product
fractionator for separation into naphtha, diesel oils, and heavy gas oils for
further processing in the catalytic cracking unit. The heavier product
remains and, because of the retained heat, cracks ultimately to coke, a
solid carbonaceous substance akin to coal. Once the coke drum is filled
with solid coke, it is removed from service and replaced by another coke
drum.
Decoking is a routine daily occurrence accomplished by a high-pressure
water jet. First the top and bottom heads of the coke drum are removed.
Next a hole is drilled in the coke from the top to the bottom of the vessel.
Then a rotating stem is lowered through the hole, spraying a water jet
sideways. The high-pressure jet cuts the coke into lumps, which fall out
the bottom of the drum for subsequent loading into trucks or railcars for
shipment to customers. Typically, coke drums operate on 24-hour cycles,
filling with coke over one 24-hour period followed by cooling, decoking,
and reheating over the next 24 hours. The drilling derricks on top of the
coke drums are a notable feature of the refinery skyline.
Cokers produce no liquid residue but yield up to 30 percent coke by weight.
Much of the low-sulfur product is employed to produce electrodes for the
electrolytic smelting of aluminum. Most lower-quality coke is burned as fuel
in admixture with coal. Coker economics usually favour the conversion of
residue into light products even if there is no market for the coke.

Purification
Before petroleum products can be marketed, certain impurities must be
removed or made less obnoxious. The most common impurities are sulfur
compounds such as hydrogen sulfide (H2S) or the mercaptans (“R”SH)—
the latter being a series of complex organic compounds having as many as
six carbon atoms in the hydrocarbon radical (“R”). Apart from their foul
odour, sulfur compounds are technically undesirable. In motor and
aviation gasoline they reduce the effectiveness of antiknock additives and
interfere with the operation of exhaust-treatment systems. In diesel fuel
they cause engine corrosion and complicate exhaust-treatment systems.
Also, many major residual and industrial fuel consumers are located in
developed areas and are subject to restrictions on sulfurous emissions.
Most crude oils contain small amounts of hydrogen sulfide, but these
levels may be increased by the decomposition of heavier sulfur
compounds (such as the mercaptans) during refinery processing. The bulk
of the hydrogen sulfide is contained in process-unit overhead gases,
which are ultimately consumed in the refinery fuel system. In order to
minimize noxious emissions, most refinery fuel gases are desulfurized.
Other undesirable components include nitrogen compounds, which poison
catalyst systems, and oxygenated compounds, which can lead to colour
formation and product instability. The principal treatment processes are
outlined below.

Sweetening
Sweetening processes oxidize mercaptans into more innocuous disulfides,
which remain in the product fuels. Catalysts assist in the oxidation. The
doctor process employs sodium plumbite, a solution of lead oxide in caustic
soda, as a catalyst. At one time this inexpensive process was widely
practiced, but the necessity of adding elemental sulfur to make the reactions
proceed caused an increase in total sulfur content in the product. It has largely
been replaced by the copper chloride process, in which the catalyst is a slurry
of copper chloride and fuller’s earth. It is applicable to both kerosene and
gasoline. The oil is heated and brought into contact with the slurry while
being agitated in a stream of air that oxidizes the mercaptans to disulfides.
The slurry is then allowed to settle and is separated for reuse. A heater raises
the temperature to a point that keeps the water formed in the reaction
dissolved in the oil, so that the catalyst remains properly hydrated. After
sweetening, the oil is water washed to remove any traces of catalyst and is
later dried by passing through a salt filter.

Mercaptan extraction
Simple sweetening is adequate for many purposes, but other methods must be
used if the total sulfur content of the fuel is to be reduced. When solutizers,
such as potassium isobutyrate and sodium cresylate, are added to caustic
soda, the solubility of the higher mercaptans is increased and they can be
extracted from the oil. In order to remove traces of hydrogen sulfide and
alkyl phenols, the oil is first pretreated with caustic soda in a packed column
or other mixing device. The mixture is allowed to settle and the product water
washed before storage.
Cracking methodologies
Thermal cracking
Modern high-pressure thermal cracking operates at absolute pressures of
about 7,000 kPa. An overall process of disproportionation can be observed,
where "light", hydrogen-rich products are formed at the expense of heavier
molecules which condense and are depleted of hydrogen. The actual reaction
is known as homolytic fission and produces alkenes, which are the basis for
the economically important production of polymers.
Thermal cracking is currently used to "upgrade" very heavy fractions or to
produce light fractions or distillates, burner fuel and/or petroleum coke. Two
extremes of the thermal cracking in terms of product range are represented by
the high-temperature process called "steam cracking" or pyrolysis (ca. 750 °C
to 900 °C or higher) which produces valuable ethylene and other feedstocks
for the petrochemical industry, and the milder-temperature delayed coking
(ca. 500 °C) which can produce, under the right conditions, valuable needle
coke, a highly crystalline petroleum coke used in the production of electrodes
for the steel and aluminium industries.
William Merriam Burton developed one of the earliest thermal cracking
processes in 1912 which operated at 700–750 °F (370–400 °C) and an
absolute pressure of 90 psi (620 kPa) and was known as the Burton process.
Shortly thereafter, in 1921, C.P. Dubbs, an employee of the Universal Oil
Products Company, developed a somewhat more advanced thermal cracking
process which operated at 750–860 °F (400–460 °C) and was known as the
Dubbs process.[6] The Dubbs process was used extensively by many
refineries until the early 1940s when catalytic cracking came into use.
Steam cracking
Steam cracking is a petrochemical process in which saturated hydrocarbons
are broken down into smaller, often unsaturated, hydrocarbons. It is the
principal industrial method for producing the lighter alkenes (or commonly
olefins), including ethene (or ethylene) and propene (or propylene). Steam
cracker units are facilities in which a feedstock such as naphtha, liquefied
petroleum gas (LPG), ethane, propane or butane is thermally cracked through
the use of steam in a bank of pyrolysis furnaces to produce lighter
hydrocarbons.
In steam cracking, a gaseous or liquid hydrocarbon feed like naphtha, LPG or
ethane is diluted with steam and briefly heated in a furnace without the
presence of oxygen. Typically, the reaction temperature is very high, at
around 850 °C, but the reaction is only allowed to take place very briefly. In
modern cracking furnaces, the residence time is reduced to milliseconds to
improve yield, resulting in gas velocities up to the speed of sound. After the
cracking temperature has been reached, the gas is quickly quenched to stop
the reaction in a transfer line heat exchanger or inside a quenching header
using quench oil.
The products produced in the reaction depend on the composition of the feed,
the hydrocarbon-to-steam ratio, and on the cracking temperature and furnace
residence time. Light hydrocarbon feeds such as ethane, LPGs or light
naphtha give product streams rich in the lighter alkenes, including ethylene,
propylene, and butadiene. Heavier hydrocarbon (full range and heavy
naphthas as well as other refinery products) feeds give some of these, but also
give products rich in aromatic hydrocarbons and hydrocarbons suitable for
inclusion in gasoline or fuel oil.
A higher cracking temperature (also referred to as severity) favors the
production of ethene and benzene, whereas lower severity produces higher
amounts of propene, C4-hydrocarbons and liquid products. The process also
results in the slow deposition of coke, a form of carbon, on the reactor walls.
This degrades the efficiency of the reactor, so reaction conditions are
designed to minimize this. Nonetheless, a steam cracking furnace can usually
only run for a few months at a time between de-cokings. Decokes require the
furnace to be isolated from the process and then a flow of steam or a
steam/air mixture is passed through the furnace coils. This converts the hard
solid carbon layer to carbon monoxide and carbon dioxide. Once this reaction
is complete, the furnace can be returned to service.
Fluid Catalytic cracking
Fig: Schematic flow diagram of a fluid catalytic cracker
The catalytic cracking process involves the presence of solid acid catalysts,
usually silica-alumina and zeolites. The catalysts promote the formation of
carbocations, which undergo processes of rearrangement and scission of C-C
bonds. Relative to thermal cracking, cat cracking proceeds at milder
temperatures, which saves energy. Furthermore, by operating at lower
temperatures, the yield of alkenes is diminished. Alkenes cause instability of
hydrocarbon fuels.
Fluid catalytic cracking is a commonly used process, and a modern oil
refinery will typically include a cat cracker, particularly at refineries in the
US, due to the high demand for gasoline. The process was first used around
1942 and employs a powdered catalyst. During WWII, the Allied Forces had
plentiful supplies of the materials in contrast to the Axis Forces, which
suffered severe shortages of gasoline and artificial rubber. Initial process
implementations were based on low activity alumina catalyst and a reactor
where the catalyst particles were suspended in a rising flow of feed
hydrocarbons in a fluidized bed.
In newer designs, cracking takes place using a very active zeolite-based
catalyst in a short-contact time vertical or upward-sloped pipe called the
"riser". Pre-heated feed is sprayed into the base of the riser via feed nozzles
where it contacts extremely hot fluidized catalyst at 1,230 to 1,400 °F (666 to
760 °C). The hot catalyst vaporizes the feed and catalyzes the cracking
reactions that break down the high-molecular weight oil into lighter
components including LPG, gasoline, and diesel. The catalyst-hydrocarbon
mixture flows upward through the riser for a few seconds, and then the
mixture is separated via cyclones. The catalyst-free hydrocarbons are routed
to a main fractionator for separation into fuel gas, LPG, gasoline, naphtha,
light cycle oils used in diesel and jet fuel, and heavy fuel oil.
During the trip up the riser, the cracking catalyst is "spent" by reactions
which deposit coke on the catalyst and greatly reduce activity and selectivity.
The "spent" catalyst is disengaged from the cracked hydrocarbon vapors and
sent to a stripper where it contacts steam to remove hydrocarbons remaining
in the catalyst pores. The "spent" catalyst then flows into a fluidized-bed
regenerator where air (or in some cases air plus oxygen) is used to burn off
the coke to restore catalyst activity and also provide the necessary heat for the
next reaction cycle, cracking being an endothermic reaction. The
"regenerated" catalyst then flows to the base of the riser, repeating the cycle.
The gasoline produced in the FCC unit has an elevated octane rating but is
less chemically stable compared to other gasoline components due to its
olefinic profile. Olefins in gasoline are responsible for the formation of
polymeric deposits in storage tanks, fuel ducts and injectors. The FCC LPG is
an important source of C3-C4 olefins and isobutane that are essential feeds
for the alkylation process and the production of polymers such as
polypropylene.
Hydrocracking
Hydrocracking is a catalytic cracking process assisted by the presence of
added hydrogen gas. Unlike a hydrotreater, hydrocracking uses hydrogen to
break C-C bonds (hydrotreatment is conducted prior to hydrocracking to
protect the catalysts in a hydrocracking process). In 2010, some 265 × 10⁶
tons of petroleum was processed with this technology. The main feedstock is
vacuum gas oil, a heavy fraction of petroleum.
The products of this process are saturated hydrocarbons; depending on the
reaction conditions (temperature, pressure, catalyst activity) these products
range from ethane, LPG to heavier hydrocarbons consisting mostly of
isoparaffins. Hydrocracking is normally facilitated by a bifunctional catalyst
that is capable of rearranging and breaking hydrocarbon chains as well as
adding hydrogen to aromatics and olefins to produce naphthenes and alkanes.
The major products from hydrocracking are jet fuel and diesel, but low
sulphur naphtha fractions and LPG are also produced. All these products
have a very low content of sulfur and other contaminants. It is very common
in Europe and Asia because those regions have high demand for diesel and
kerosene. In the US, fluid catalytic cracking is more common because the
demand for gasoline is higher.
The hydrocracking process depends on the nature of the feedstock and the
relative rates of the two competing reactions, hydrogenation and cracking.
Heavy aromatic feedstock is converted into lighter products under a wide
range of very high pressures (1,000-2,000 psi) and fairly high temperatures
(750°-1,500 °F, 400-800 °C), in the presence of hydrogen and special
catalysts.
The primary functions of hydrogen are, thus:
1. preventing the formation of polycyclic aromatic compounds if
feedstock has a high paraffinic content,
2. reducing tar formation,
3. reducing impurities,
4. preventing buildup of coke on the catalyst,
5. converting sulfur and nitrogen compounds present in the feedstock
to hydrogen sulfide and ammonia, and
6. achieving high cetane number fuel.
Polymerization in Petroleum Refinery
The light gaseous hydrocarbons produced by catalytic cracking are highly
unsaturated and are usually converted into high-octane gasoline components in
polymerization or alkylation processes. In polymerization, the light olefins
propylene and butylene are induced to combine, or polymerize, into molecules of
a few times their original molecular weight. The catalysts used consist of
phosphoric acid on pellets of kieselguhr, a porous sedimentary rock. High
pressures, on the order of 30 to 75 bars (3 to 7.5 MPa), or 400 to 1,100 psi, are
required at temperatures ranging from 175 to 230 °C (350 to 450 °F). Polymer
gasolines derived from propylene and butylene have octane numbers above 90. The
alkylation reaction also achieves a longer-chain molecule by the combination of
two smaller molecules, one being an olefin and the other an isoparaffin (usually
isobutane). During World War II, alkylation became the main process for the
manufacture of isooctane, an essential component in the blending of aviation
gasoline. The two alkylation processes used in the industry are based on
different acid systems as catalysts. In sulfuric acid alkylation, concentrated
sulfuric acid of 98 percent purity serves as the catalyst for a reaction carried
out at 2 to 7 °C (35 to 45 °F). Refrigeration is necessary because of the heat
generated by the reaction. The octane numbers of the alkylates produced range
from 85 to 95.
Alkylation
Alkylation, in petroleum refining, chemical process in which light, gaseous
hydrocarbons are combined to produce high-octane components of gasoline.
The light hydrocarbons consist of olefins such as propylene and butylene and
isoparaffins such as isobutane. These compounds are fed into a reactor,
where, under the influence of a sulfuric-acid or hydrofluoric-acid catalyst,
they combine to form a mixture of heavier hydrocarbons. The liquid fraction
of this mixture, known as alkylate, consists mainly of isooctane, a compound
that lends excellent antiknock characteristics to gasolines.
Alkylation units were installed in petroleum refineries in the 1930s, but the
process became especially important during World War II, when there was a
great demand for aviation gasoline. It is now used in combination with
fractional distillation, catalytic cracking, and isomerization to increase a
refinery’s yield of automotive gasoline.
Isomerization Process
The isomerization process is gaining importance in the present refining
context due to limitations on gasoline benzene, aromatics, and olefin
contents. The isomerization process upgrades the octane number of light
naphtha fractions and also simultaneously reduces benzene content by
saturation of the benzene fraction. Isomerization complements catalytic
reforming process in upgrading the octane number of refinery naphtha
streams. Isomerization is a simple and cost-effective process for octane
enhancement compared with other octane-improving processes. The isomerate
product contains very little sulfur and benzene, making it an ideal blending
component in the refinery gasoline pool. Due to the significance of isomerization
to the modern refining industry, it becomes essential to review the process
with respect to catalysts, catalyst poisons, reactions, thermodynamics, and
process developments. The present research thrust in this field along with
future scope of work is also discussed briefly. The isomerization process is
compared with another well-known refinery process called the catalytic
reforming process.
Gasoline blending
One of the most critical economic issues for a petroleum refiner is selecting
the optimal combination of components to produce final gasoline products.
Gasoline blending is much more complicated than a simple mixing of
components. First, a typical refinery may have as many as 8 to 15 different
hydrocarbon streams to consider as blend stocks. These may range from
butane, the most volatile component, to a heavy naphtha and include several
gasoline naphthas from crude distillation, catalytic cracking, and thermal
processing units in addition to alkylate, polymer, and reformate. Modern
gasoline may be blended to meet simultaneously 10 to 15 different quality
specifications, such as vapour pressure; initial, intermediate, and final boiling
points; sulfur content; colour; stability; aromatics content; olefin content;
octane measurements for several different portions of the blend; and other
local governmental or market restrictions. Since each of the individual
components contributes uniquely in each of these quality areas and each
bears a different cost of manufacture, the proper allocation of each
component into its optimal disposition is of major economic importance. In
order to address this problem, most refiners employ linear programming, a
mathematical technique that permits the rapid selection of an optimal solution
from a multiplicity of feasible alternative solutions. Each component is
characterized by its specific properties and cost of manufacture, and each
gasoline grade requirement is similarly defined by quality requirements and
relative market value. The linear programming solution specifies the unique
disposition of each component to achieve maximum operating profit. The
next step is to measure carefully the rate of addition of each component to the
blend and collect it in storage tanks for final inspection before delivering it
for sale. Still, the problem is not fully resolved until the product is actually
delivered into customers’ tanks. Frequently, last-minute changes in shipping
schedules or production qualities require the reblending of finished gasolines
or the substitution of a high-quality (and therefore costlier) grade for one of
more immediate demand even though it may generate less income for the
refinery.
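A toy version of such a blend model is sketched below with scipy.optimize.linprog. The component names, costs and qualities are hypothetical, and octane and vapour pressure are assumed to blend linearly by volume, which is a common simplification rather than an exact rule.

```python
# Choose the cheapest 1-litre blend meeting octane and vapour-pressure limits.
from scipy.optimize import linprog

# Components: reformate, FCC naphtha, alkylate, butane (all figures hypothetical).
cost   = [0.62, 0.55, 0.70, 0.40]   # cost per litre
octane = [96.0, 91.0, 95.0, 93.0]   # blending octane numbers
rvp    = [0.25, 0.45, 0.30, 3.50]   # vapour pressure, bar

A_ub = [
    [-o for o in octane],           # blend octane >= 95
    rvp,                            # blend vapour pressure <= 0.60 bar
    [0.0, 0.0, 0.0, 1.0],           # butane fraction <= 0.05
]
b_ub = [-95.0, 0.60, 0.05]
A_eq = [[1.0, 1.0, 1.0, 1.0]]       # volume fractions sum to 1
b_eq = [1.0]

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=[(0.0, 1.0)] * 4, method="highs")
print(result.x)     # optimal volume fraction of each component
print(result.fun)   # minimum blend cost per litre
```

A real refinery model handles many more components and quality specifications, but the structure, a linear cost objective with linear quality constraints, is the same.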

Kerosene
Though its use as an illuminant has greatly diminished, kerosene is still used
extensively throughout the world in cooking and space heating and is the
primary fuel for modern jet engines. When burned as a domestic fuel,
kerosene must produce a flame free of smoke and odour. Standard laboratory
procedures test these properties by burning the oil in special lamps. All
kerosene fuels must satisfy minimum flash-point specifications (49 °C, or
120 °F) to limit fire hazards in storage and handling.
Jet fuels must burn cleanly and remain fluid and free from wax particles at
the low temperatures experienced in high-altitude flight. The conventional
freeze-point specification for commercial jet fuel is −50 °C (−58 °F). The
fuel must also be free of any suspended water particles that might cause
blockage of the fuel system with ice particles. Special-purpose military jet
fuels have even more stringent specifications.

Diesel oils
The principal end use of gas oil is as diesel fuel for powering automobile,
truck, bus, and railway engines. In a diesel engine, combustion is induced by
the heat of compression of the air in the cylinder.
Detonation, which leads to harmful knocking in a gasoline engine, is a
necessity for the diesel engine. A good diesel fuel starts to burn at several
locations within the cylinder after the fuel is injected. Once the flame has
initiated, any more fuel entering the cylinder ignites at once.
Straight-chain hydrocarbons make the best diesel fuels. In order to have a
standard reference scale, the oil is matched against blends of cetane (normal
hexadecane) and alpha methylnaphthalene, the latter of which gives very
poor engine performance. High-quality diesel fuels have cetane ratings of
about 50, giving the same combustion characteristics as a 50-50 mixture of
the standard fuels. The large, slower engines in ships and stationary power
plants can tolerate even heavier diesel oils. The more viscous marine diesel
oils are heated to permit easy pumping and to give the correct viscosity at the
fuel injectors for good combustion.
Until the early 1990s, standards for diesel fuel quality were not particularly
stringent. A minimum cetane number was critical for transportation uses, but
sulfur levels of 5,000 parts per million (ppm) were common in most markets.
With the advent of more stringent exhaust emission controls, however, diesel
fuel qualities came under increased scrutiny. In the European Union and the
United States, diesel fuel is now generally restricted to maximum sulfur
levels of 10 to 15 ppm, and regulations have restricted aromatic content as
well. The limitation of aromatic compounds requires a much more
demanding scheme of processing individual gas oil components than was
necessary for earlier highway diesel fuels.

Fuel oils
Furnace oil consists largely of residues from crude oil refining. These are
blended with other suitable gas oil fractions in order to achieve the viscosity
required for convenient handling. As a residue product, fuel oil is the only
refined product of significant quantity that commands a market price lower
than the cost of crude oil.
Because the sulfur contained in the crude oil is concentrated in the residue
material, fuel oil sulfur levels are naturally high. The sulfur level is not
critical to the combustion process as long as the flue gases do not impinge on
cool surfaces (which could lead to corrosion by the condensation of acidic
sulfur trioxide). However, in order to reduce air pollution, most industrialized
countries now restrict the sulfur content of fuel oils. Such regulation has led
to the construction of residual desulfurization units or cokers in refineries that
produce these fuels.
Residual fuels may contain large quantities of heavy metals such as nickel
and vanadium; these produce ash upon burning and can foul burner systems.
Such contaminants are not easily removed and usually lead to lower market
prices for fuel oils with high metal contents.

Lubricating oils
At one time the suitability of petroleum fractions for use as lubricants
depended entirely on the crude oils from which they were derived. Those
from Pennsylvania crude, which were largely paraffinic in nature, were
recognized as having superior properties. But, with the advent of solvent
extraction and hydrocracking, the choice of raw materials has been
considerably extended.
Viscosity is the basic property by which lubricating oils are classified. The
requirements vary from a very thin oil needed for the high-speed spindles of
textile machinery to the viscous, tacky materials applied to open gears or wire
ropes. Between these extremes is a wide range of products with special
characteristics. Automotive oils represent the largest product segment in the
market. In the United States, specifications for these products are defined by
the Society of Automotive Engineers (SAE), which issues viscosity ratings
with numbers that range from 5 to 50. In the United Kingdom, standards are
set by the Institute of Petroleum, which conducts tests that are virtually
identical to those of the SAE.
When ordinary mineral oils having satisfactory lubricity at low temperatures
are used over an extended temperature range, excessive thinning occurs, and
the lubricating properties are found to be inadequate at higher temperatures.
To correct this, multigrade oils have been developed using long-chain
polymers. Thus, an oil designated SAE 10W40 has the viscosity of an SAE
10W oil at −18 °C (0 °F) and of an SAE 40 oil at 99 °C (210 °F). Such an oil
performs well under cold starting conditions in winter (hence the W
designation) yet will lubricate under high-temperature running conditions in
the summer as well. Other additives that improve the performance of
lubricating oils are antioxidants and detergents, which maintain engine
cleanliness and keep fine carbon particles suspended in the circulating oil.
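The way an oil's viscosity falls with temperature between two rating points can be estimated with the Walther-type relation used in ASTM D341 viscosity-temperature charts. The short Python sketch below fits that relation to two measured points and interpolates between them; the 40 °C and 100 °C viscosities used are purely illustrative values, not data for any particular SAE grade.

    import math

    def walther_constants(t1_c, nu1_cst, t2_c, nu2_cst):
        """Fit the two Walther-type constants A and B from two kinematic-viscosity
        points, using log10(log10(nu + 0.7)) = A - B*log10(T[K])."""
        x1, x2 = math.log10(t1_c + 273.15), math.log10(t2_c + 273.15)
        y1 = math.log10(math.log10(nu1_cst + 0.7))
        y2 = math.log10(math.log10(nu2_cst + 0.7))
        b = (y1 - y2) / (x2 - x1)
        a = y1 + b * x1
        return a, b

    def viscosity_at(t_c, a, b):
        """Kinematic viscosity (cSt) predicted at temperature t_c (deg C)."""
        y = a - b * math.log10(t_c + 273.15)
        return 10 ** (10 ** y) - 0.7

    # Hypothetical data for a multigrade engine oil (illustrative only):
    # 95 cSt at 40 C and 14 cSt at 100 C.
    a, b = walther_constants(40.0, 95.0, 100.0, 14.0)
    print(f"Estimated viscosity at 70 C: {viscosity_at(70.0, a, b):.1f} cSt")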

Gear oils and greases


In gear lubrication the oil separates metal surfaces, reducing friction and
wear. Extreme pressures develop in some gears, and special additives
must be employed to prevent the seizing of the metal surfaces. These oils
contain sulfur compounds that form a resistant film on the surfaces,
preventing actual metal-to-metal contact.
Greases are lubricating oils to which thickening agents are added. Soaps of
aluminum, calcium, lithium, and sodium are commonly used, while nonsoap
thickeners such as carbon, silica, and polyethylene also are employed for
special purposes.
Bearing Lubrication
Lubricants take the form of either oil or grease. Oil lubricants are most
common in high speed, high-temperature applications that need heat transfer
away from working bearing surfaces. Bearing oils are either a natural mineral
oil with additives to prevent rust and oxidation or a synthetic oil. In synthetic
oils the base is usually polyalphaolefins (PAO), polyalkylene glycols (PAG)
and esters. Although similar, synthetic and mineral oils offer different
properties and are not interchangeable. Mineral oils are the more common of
the two.

The most important characteristic when specifying oil for a bearing is


viscosity. Viscosity is a measure of a fluid’s internal friction or resistance to
flow. High-viscosity fluids are thicker like honey; low-viscosity fluids are
thinner like water. Engineers express fluid resistance to flow in Saybolt
Universal Seconds (SUS) and centistokes (mm²/s, cSt). The difference in
viscosity at different temperatures is the viscosity index (VI). An oil’s
viscosity is correlative to the film thickness it can create. This thickness is
crucial to the separation of the rolling and sliding elements in a bearing.
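Because bearing-oil viscosities are quoted both in Saybolt Universal Seconds and in centistokes, a rough conversion is often useful. The Python sketch below uses the commonly quoted piecewise approximation; the exact conversion defined in the standards is tabulated, so these figures should be treated as estimates only.

    def sus_to_cst(sus):
        """Approximate conversion of Saybolt Universal Seconds to centistokes at
        the same temperature (commonly quoted approximation, not the full
        standard conversion tables)."""
        if sus < 32:
            raise ValueError("Saybolt viscometers are not used below about 32 SUS")
        if sus <= 100:
            return 0.226 * sus - 195.0 / sus
        return 0.220 * sus - 135.0 / sus

    for sus in (50, 100, 300, 1000):
        print(f"{sus:5d} SUS  ~  {sus_to_cst(sus):7.1f} cSt")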
Bearings in some applications use oil, but grease is the lubricant of choice for
80 to 90% of bearings.

Grease consists of about 85% mineral or synthetic oil with thickeners


rounding out the rest of the grease volume.

The thickeners are usually lithium, calcium or sodium-based metallic soaps.


Formulations for higher-temperature applications often include polyurea. The
higher viscosity of grease helps contain it within the bearing envelope. The
most important considerations when choosing a grease are the base oil
viscosity, rust-inhibiting capabilities, operating temperature range and load-
carrying capabilities.
Lubricant base stocks
The base stock largely determines a lubricant's behaviour. The topics usually
covered under this heading are: why base stocks are important, which lubricant
properties they affect, what base stocks are, the relevant chemical bonds and
terminology, and the two broad classes of base stock molecules, hydrocarbons
and polars.
Engine Friction and Lubrication
In engine-friction terminology, the total friction loss is conventionally
divided into pumping loss and rubbing friction loss, and the rubbing loss is
further broken down by friction component. How these components are lubricated
depends on the lubrication regime, described below.
Hydrodynamic Lubrication (HL)
Hydrodynamic lubrication is a way that is used to reduce friction and/or
wear of rubbing solids with the aid of liquid (or semi-solid) lubricant. For a
vast majority of the surfaces encountered in nature and used in industry, the
source of friction is the imperfections of the surfaces. Even mirror shining
surfaces are composed of hills and valleys – surface roughness. The goal of
hydrodynamic lubrication is to add a proper lubricant, so that it penetrates
into the contact zone between rubbing solids and creates a thin liquid film, as
shown in the figure below. This film separates the surfaces from direct
contact and it in general reduces friction and consequently wear (but not
always), since friction within the lubricant is less than between the directly
contacting solids. Hydrodynamic theory is developed within a field known as
tribology.
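A common way to judge whether the lubricant film is thick enough to keep the rough surfaces apart is the specific film thickness (lambda ratio): the minimum film thickness divided by the combined RMS roughness of the two surfaces. The Python sketch below applies the usual rule-of-thumb regime boundaries; the film thickness and roughness values are hypothetical.

    import math

    def lambda_ratio(h_min_um, rq1_um, rq2_um):
        """Specific film thickness: minimum film thickness divided by the
        combined RMS roughness of the two surfaces."""
        return h_min_um / math.sqrt(rq1_um**2 + rq2_um**2)

    def regime(lam):
        """Rule-of-thumb lubrication regime from the lambda ratio."""
        if lam >= 3.0:
            return "full-film lubrication - surfaces fully separated"
        if lam >= 1.0:
            return "mixed lubrication - occasional asperity contact"
        return "boundary lubrication - significant asperity contact"

    # Illustrative numbers only: 0.4 um film, 0.10 um and 0.15 um RMS roughness.
    lam = lambda_ratio(0.4, 0.10, 0.15)
    print(f"lambda = {lam:.2f} -> {regime(lam)}")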
Lubricant is a substance which is used to control (more often to reduce)
friction and wear of the surfaces in a contact of the bodies in relative
motion. Depending on its nature, lubricants are also used to eliminate
heat and wear debris, supply additives into the contact, transmit power,
protect, seal. A lubricant can be in liquid (oil, water, etc.), solid
(graphite, graphene, molybdenum disulfide), gaseous (air) or even
semisolid (grease) forms. Most of the lubricants contain additives (5-
30%) to improve their performance.

[Figure: Lubricant film (from Lubrication for Industry, by Ken Bannister,
Industrial Press)]
The history of lubrication theory goes back more than a century, to 1886, when
O. Reynolds published his famous equation for thin fluid film flow in the
narrow gap between two solids (Reynolds 1886). This equation carries his name
and forms the foundation of lubrication theory.
Hydrodynamic lubrication differs from elastohydrodynamic lubrication (EHL)
theory in that the elastic deformation of the surfaces is negligible. The
absence of elastic deformation simplifies the problem compared with EHL theory
and allows some important analytical solutions to be constructed, as shown
below.
Solutions of Reynolds Equation
The first solutions of lubrication theory were obtained by Reynolds himself
and can be found in his original work. One of the most important analytical
solutions of hydrodynamic theory, and of most interest in tribology, was
obtained by Martin in 1916. It considers the relative motion of a cylinder on
a flat plane.

The following system of equations is considered:

dp/dx = 6ηU (h − h*) / h³,   h(x) = h₀ + x²/(2R),   W′ = ∫ p dx,

where h is the hydrodynamic film thickness, p the pressure, η the viscosity, U
the average sliding speed, R the cylinder radius, h* the film thickness at the
point of maximum pressure and W′ the normal load per unit length. There are two
unknowns, the pressure distribution and the approach (the central film
thickness h₀), with two equations to determine them. Martin's solution can be
written approximately as:

h₀ ≈ 4.9 ηUR / W′

This solution immediately shows the major relations within hydrodynamic
theory (which also remain qualitatively true in elastohydrodynamic theory).
A higher sliding speed generates a thicker film, and the same is true for
viscosity: higher viscosity leads to a thicker lubricant film. It is typically
desirable to have a sufficiently thick lubricant film so that the surfaces are
completely separated, reducing wear. At the same time, hydrodynamic friction
increases with both sliding speed and viscosity, which in turn increases the
energy losses. There is therefore a trade-off between wear performance and the
minimization of energy losses.
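As a rough numerical illustration of these relations, the Python sketch below evaluates the commonly quoted form of Martin's result, h₀ ≈ 4.9ηUR/W′, for a set of hypothetical values of viscosity, speed, radius and load per unit length; it is an order-of-magnitude estimate for the rigid, isoviscous case only.

    def martin_film_thickness(eta_pa_s, u_m_s, radius_m, load_per_length_n_m):
        """Central film thickness for a rigid cylinder on a plane with an
        isoviscous lubricant, using the commonly quoted form of Martin's
        1916 result: h0 ~= 4.9 * eta * U * R / W', with W' the normal load
        per unit length."""
        return 4.9 * eta_pa_s * u_m_s * radius_m / load_per_length_n_m

    # Hypothetical values only: 0.05 Pa.s oil, 2 m/s average speed,
    # 10 mm cylinder radius, 10 kN/m load per unit length.
    h0 = martin_film_thickness(0.05, 2.0, 0.010, 10_000.0)
    print(f"Estimated film thickness: {h0 * 1e6:.2f} micrometres")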
Continuous efforts are currently being made to reduce these energy losses, to
move towards a sustainable society and, at the same time, to increase the
reliability of tribological devices. From the discussion above it is clear
that there is a conflict between wear performance and energetic performance;
classical lubrication theory has reached its fundamental limit in reducing
energy losses, and new approaches have to be developed. From that standpoint,
solid lubricants such as graphene or diamond-like carbon are promising
materials to reduce friction further and reach the so-called superlubricity
regime.
Pressure-Viscosity Coefficient and
Characteristics of Lubricants
The primary purpose of any industrial lubricant is to maintain a satisfactory
lubricating film between contacting surfaces, and the importance of viscosity
in achieving this is universally recognized in industrial maintenance
practice. What is often overlooked, however, is the gap between a lubricant's
effective viscosity in service and its quoted (laboratory-measured) viscosity.
A lubricant's viscosity may be physically and/or chemically affected by a
number of factors, most prominently temperature, pressure and shear stress,
which in turn depend on friction, load, speed and the machine's operating
environment. In addition, the lubricant's chemical make-up, and any alteration
to it during operation, also influences its effective viscosity. It follows
that the laboratory-measured viscosity can differ dramatically from the
viscosity that is actually effective inside the system during operation. This
difference is unavoidable, since any mechanical system is exposed to
fluctuations in environmental and mechanical operating factors (temperature,
load, speed, friction, etc.).
An industrial lubricant's viscosity is usually quoted in the manufacturer's
catalogue in standardized units. These quoted viscosities, however, are
typically measured in laboratories at atmospheric pressure, often with
bench-test instruments such as falling-weight viscometers, whose major
limitation is that they cover only a limited range of low shear stresses. The
effect of shear stress on viscosity can now be captured to a good extent with
advanced rheometers, which are commercially available and being continuously
improved. The temperature-viscosity behaviour of lubricants is also well
understood, and standard characteristics such as the viscosity index are well
established. Nevertheless, the pressure-viscosity behaviour of lubricants
remains largely overlooked in broader industrial practice.
Pressure-induced variation in viscosity can be a challenging issue for a
lubricant's performance in a system. An increase in pressure on a confined
lubricant generally increases its viscosity (Hersey and Shore [1928]); this
change of viscosity under mechanical pressure is known as the piezo-viscous
effect. This situation is likely to prevail in certain regimes, such as mixed
and elastohydrodynamic lubrication. In many instances, especially in
non-conformal, highly concentrated and heavily loaded contacts such as
ball/roller bearings, gears and cams, the contact pressure becomes several
orders of magnitude higher than atmospheric pressure. It is therefore of
practical importance to assess and quantify the pressure-viscosity behaviour
of a lubricant in order to judge its effectiveness inside the system under
varying operating pressures.
Several investigations on the pressure-viscosity behavior of fluids were
reported over the preceding decades; and some form of empirical
correlations were also fitted to these experimental data (So and Klaus,
1980). The most primitive, but also the most popular and simple, is the Barus
equation proposed in 1893:

η(p) = η₀ exp(αp),

where η₀ is the viscosity at atmospheric pressure, p is the pressure and α is
the pressure-viscosity coefficient. Mathematically, α is the tangent to the
Barus equation curve at zero pressure.


This coefficient has been frequently used in analytical formulations of elasto-
hydrodynamic lubrication. In some cases, the slope of a secant of the Barus
equation is used to estimate the coefficient. Such secant is drawn between 0
and any other point up to 0.1 GPa. It is important to note that α is ideally a
pressure-independent but temperature-sensitive coefficient. However, it has
been repeatedly reported in the literature that the Barus equation yields
significant error at elevated pressures above about 0.5 GPa. A case study is
reproduced in Fig. 1 (Sargent Jr, 1983). It appears that as pressure rises the
coefficient α loses its independence from pressure; alternatively, use of the
slope of the secant (α*) gives closer predictions, though still not free from
error. The challenge of accurately capturing the piezo-viscous characteristics
of lubricants therefore persists.
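The Python sketch below evaluates the Barus relation for an assumed base viscosity of 0.05 Pa·s and an assumed pressure-viscosity coefficient of 20 per GPa (a typical order of magnitude for a mineral oil; both numbers are illustrative). It shows how steeply the predicted viscosity climbs with pressure, and why, as noted above, the equation becomes unreliable above roughly 0.5 GPa.

    import math

    def barus_viscosity(eta0_pa_s, alpha_per_gpa, p_gpa):
        """Barus (1893) pressure-viscosity relation: eta(p) = eta0 * exp(alpha*p).
        As discussed in the text, it overestimates badly above about 0.5 GPa."""
        return eta0_pa_s * math.exp(alpha_per_gpa * p_gpa)

    # Illustrative values only: eta0 = 0.05 Pa.s, alpha = 20 / GPa.
    for p in (0.0, 0.1, 0.3, 0.5):
        print(f"p = {p:.1f} GPa -> eta ~ {barus_viscosity(0.05, 20.0, p):.3g} Pa.s")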
In follow-ups of the Barus equation, a couple of empirical correlations
emerged and have been used by subsequent researchers. Some of these
correlations are furnished in Table 1. It is important to note here that
these equations are not exhaustive and are purely empirical. Their common
purpose is to estimate the coefficients (sometimes called pressure-viscosity
coefficients, such as α in the case of the Barus equation) from a handful of
experimental data and then to use those coefficients in the correlation to
predict viscosity at a given pressure. More often than not, however, such
empirical approaches are applicable only over a limited range of the variables
(pressure, temperature, shear stress, etc.), and at other times they are too
obscure to comprehend and implement in engineering calculations.
Fig. 1: Pressure-viscosity dependence curve under isothermal conditions (at
37.8 °C) for a lubricant; α* is the slope of the secant between 0 and 0.1 GPa
(data collected from Sargent Jr. [1983]).
Table 1: Empirical models for lubricants' pressure-viscosity characteristics
(Sargent Jr. [1983]).
Despite all the anomalies, almost all of these correlations estimate the
viscosity as an exponential function of pressure. This manifests the
possibility of dramatic increase in the viscosity of lubricants with respect to
rise in pressure. Thereby, under piezo-viscous situation, increase in contact
pressure may result in thickening of the lubricating fluid. Overall, this means,
an incremental pressure rise may at some point be detrimental to a lubricant’s
performance. Intuitively, thickening of lubricant may seize the lubricant’s
flow and leading to a failure of lubrication, in other words, by lubricant
fracture!
Further, it is interesting and important to examine the influence of pressure
on a lubricant's effective viscosity as a function of the lubricant's chemical
composition. Figure 2 shows some previously examined cases (data reported in
Sargent Jr. [1983]) with different lubricating oils under similar pressure and
temperature. The ordinate of the graph represents the ratio of a particular
lubricant's viscosities measured at 98 °C and 38 °C, respectively.
Fig. 2: Effects of chemical make-up on pressure-viscosity characteristics of
lubricants (data from Sargent Jr. [1983]).
These data show that pressure also influences a lubricant's
temperature-viscosity characteristics. To interpret the graph, note that the
higher the ratio η₁₀₀/η₄₀, the better the lubricant's viscosity-temperature
characteristics. Interestingly, a synthetic composition such as silicone oil
proved to be the most susceptible to the influence of pressure. Silicone oils
generally show better resistance against temperature-induced viscosity drop at
atmospheric pressure, but this resistance apparently weakens under pressure.
It is also worth noting that while an increase in pressure raises viscosity,
an increase in temperature does the opposite; a lubricant's declining
resistance to temperature under pressure may therefore, in some instances, be
advantageous to the system.
However, in order to arrive at such trade-offs, especially with the advent of
new types of lubricant, more data must be analysed to interpret the combined
effects of temperature, pressure, shear stress and the chemical make-up of
modern lubricants; this is left for a future discussion.
Function of Lubrication system
1. Reduced friction
Lubricant forms an oil film on the surface of metals, converting solid
friction into liquid friction to reduce friction, which is the most common
and essential function of lubricants. Reduced friction prevents heating
and abrasion on the friction surface.

2. Cooling
Friction certainly causes heating on the area and more heat is produced
if metals rub against each other. Therefore the heat needs to be absorbed
or released; otherwise the system is destroyed or deformed. To prevent
it, lubricants are applied. Especially cooling is critical to rolling oils,
cutting oils, and lubricating oils used in an internal combustion engine.

3. Load balancing
Components such as gears and bearings make contact only along a line or over
a small surface, so the local load can rise sharply for an instant, putting
the system at risk of damage or of the parts seizing to each other. The
lubricant protects the system against such load peaks by forming an oil film
that spreads the load across the film.

4. Cleaning
Long-term operation may lead to corrosion or ageing, producing foreign
substances. With hydraulic and gear oils, sediments such as sludge from
deterioration accumulate, and an internal combustion engine in particular
generates a great deal of soot, which shortens system life and impairs proper
operation. The lubricant therefore also acts like a soap, cleaning out these
foreign substances.

5. Sealing
Sealing is to close the macro-gap between systems. Sealing the space
between pistons and cylinders in the internal combustion engines or air
compressors blocks the leakage of combustion gas and the inflow of
external foreign substances to maintain the defined internal pressure and
protect the system. In hydraulic systems in particular, the lubricant itself
serves to prevent leakage by creating a hydraulic film.

6. Rust prevention
Metals rust when in contact with water and oxygen. Rust formation can be
controlled, and the system lifetime extended, if the metal surface is coated
with a lubricating film.
Requirements and characteristics of
lubricants
The main requirements for lubricants are that they are able to:
Keep surfaces separate under all loads, temperatures and speeds, thus
minimising friction and wear.
Act as a cooling fluid removing the heat produced by friction or from
external sources
Remain adequately stable in order to guarantee constant behaviour over the
forecasted useful life
Protect surfaces from the attack of aggressive products formed during
operation
Show cleaning capability and dirt holding capacity in order to remove
residue and debris that may form during operation
The properties of lubricants:
The main properties of lubricants, which are usually indicated in the technical
characteristics of the product, are:
· Viscosity
· Viscosity index
· Pour point
· Flash point

Viscosity
Viscosity describes the flow behaviour of a fluid. The viscosity of lubricating
oils diminishes as temperature rises and consequently is measured at a given
temperature (e.g. 40 °C).
The viscosity of a lubricant determines the thickness of the layer of oil
between metallic surfaces in reciprocal movement.
The most widely used unit of measurement of viscosity is the centistoke
(cSt).

Viscosity index
The viscosity index is a characteristic used to indicate variations in the
viscosity of lubricating oils with changes in temperature.
The higher the level of the viscosity index, the lower the variation in
viscosity at temperature changes.
Consequently, if two lubricants with the same viscosity are considered at a
temperature of 40 °C, the one with the higher viscosity index will
guarantee:
· better engine start up at low temperatures (lower internal friction)
· a higher stability of the lubricating film at high temperatures
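The point can be illustrated numerically. The Python sketch below is not the formal viscosity-index calculation of ASTM D2270; it simply compares the percentage viscosity loss between 40 °C and 100 °C for two hypothetical oils with the same 40 °C viscosity, showing that the higher-VI oil retains more viscosity at the higher temperature.

    def percent_drop(nu_40_cst, nu_100_cst):
        """Percentage drop in kinematic viscosity between 40 C and 100 C."""
        return 100.0 * (nu_40_cst - nu_100_cst) / nu_40_cst

    # Two hypothetical oils with the same 40 C viscosity (illustrative values):
    oils = {
        "lower-VI oil":  (46.0, 6.0),   # 46 cSt at 40 C, 6.0 cSt at 100 C
        "higher-VI oil": (46.0, 8.5),   # 46 cSt at 40 C, 8.5 cSt at 100 C
    }
    for name, (nu40, nu100) in oils.items():
        print(f"{name}: {percent_drop(nu40, nu100):.0f}% viscosity loss, 40 C to 100 C")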

Viscosimetric classifications
There are a number of viscosimetric classification systems that indicate,
usually with a number, a more or less limited viscosity range.
The aim is to provide, along with the viscosity index, a rapid indication of the
most appropriate choice of lubricant for a specific application.
ISO VG degrees are widely used to classify industrial oils. Each degree
identifies a kinematic viscosity range measured at 40 °C.
SAE degrees are used in the field of engine oils and gear oils.
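Each ISO VG grade number is the nominal mid-point kinematic viscosity at 40 °C, and the grade covers roughly plus or minus 10 percent of that mid-point (ISO 3448). The Python sketch below prints the resulting bands for a few common grades.

    def iso_vg_band(grade):
        """ISO 3448 viscosity-grade band: the grade number is the nominal
        mid-point kinematic viscosity at 40 C, and the grade spans about
        +/-10% of that mid-point."""
        return 0.90 * grade, 1.10 * grade

    for vg in (32, 46, 68, 100):
        lo, hi = iso_vg_band(vg)
        print(f"ISO VG {vg:3d}: {lo:.1f} - {hi:.1f} cSt at 40 C")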
Pour Point
The pour point refers to the minimum temperature at which a lubricant
continues to flow. Below the pour point, the oil tends to thicken and to cease
to flow freely.
Flash point
The flash point is the minimum temperature at which an oil-vapour-air-
mixture becomes inflammable. It is determined by progressively heating the
oil-vapour-air-mixture in a standard laboratory receptacle until the mixture
ignites.
Determining the Cause of Oil Degradation
There are many causes that can result in the degrading of your lube oil. The
most common are oxidation, thermal breakdown of the lube oil, micro-
dieseling, additive depletion and contamination.
Oxidation
Oxidation is the reaction of oil molecules with oxygen molecules. It can lead
to an increase in viscosity and the formation of varnish, sludge and sediment.
Additive depletion and a breakdown in the base oil can also result. Once an
oil starts to oxidize, you may see an increase in the acid number. In addition,
rust and corrosion can form on the equipment due to oxidation.
Thermal Breakdown
The temperature of the lubricant should be a primary concern. Besides
separating the moving parts within a piece of machinery, a lubricant must
also dissipate heat. This means the lubricant can and will be heated above its
recommended stable temperature. The Arrhenius rate rule for temperature
states that for every 18 degrees F (10 degrees C), the chemical reaction
doubles. In other words, for every increase of 18 degrees F for your oil, the
life of the oil is cut in half. Keeping the oil as cool as possible when in use
will extend its life and reduce the reaction of thermal breakdown.
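The rule of thumb above is easy to apply. The Python sketch below expresses it as a relative-life factor against an assumed reference bulk temperature of 70 °C; both the reference temperature and the strict halving per 10 °C are simplifications, since real degradation also depends on the oil chemistry and on contamination.

    def relative_oil_life(actual_temp_c, reference_temp_c):
        """Rule of thumb from the Arrhenius rate rule quoted in the text: oil
        life halves for every 10 C (18 F) rise above the reference temperature
        and roughly doubles for every 10 C below it."""
        return 0.5 ** ((actual_temp_c - reference_temp_c) / 10.0)

    # Hypothetical reference of 70 C for a mineral oil (illustrative only):
    for t in (60, 70, 80, 90, 100):
        print(f"{t:3d} C -> relative oil life x {relative_oil_life(t, 70):.2f}")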
Micro-dieseling
Also known as pressure-induced thermal breakdown (degradation), micro-
dieseling is a process in which an air bubble transitions from a low-pressure
region in a system to a high-pressure zone. This is very common in hydraulic
systems. Micro-dieseling results in adiabatic compression of the air bubble
within the oil, which then cooks the surrounding oil molecules, causing
instant oxidation of those molecules.
Additive Depletion
Most additive packages in oil are designed to be sacrificial and used up
during the life of the oil. Utilizing oil analysis to monitor additive levels is
important not only to assess the health of the lubricant but also to provide
clues as to what is causing the depletion of the additives.
Contamination
Contamination such as dirt, water, air, etc., can greatly influence the rate of
lubricant degradation. Dirt containing fine metal particles can be a catalyst
that sparks and speeds up the degradation process of your lubricant. Air and
water can provide a source of oxygen that reacts with the oil and leads to
oxidation of the lubricant. Here again, oil analysis can be helpful in
monitoring your lubricant’s contamination levels.
Additives in lubricating oils
Additives are substances formulated for improvement of the anti-friction,
chemical and physical properties of base oils (mineral, synthetic, vegetable or
animal), which results in enhancing the lubricant performance and extending
the equipment life.

Combination of different additives and their quantities are determined by the


lubricant type (Engine oils, Gear oils, Hydraulic oils, cutting fluids, Way
lubricants, compressor oils etc.) and the specific operating conditions
(temperature, loads, machine parts materials, environment).
Amount of additives may reach 30%.
§ Friction modifiers
§ Anti-wear additives
§ Extreme pressure (EP) additives
§ Rust and corrosion inhibitors
§ Anti-oxidants
§ Detergents
§ Dispersants
§ Pour point depressants
§ Viscosity index improvers
§ Anti-foaming agents

Friction modifiers
Friction modifiers reduce coefficient of friction, resulting in less fuel
consumption.
The crystal structure of most friction modifiers consists of molecular
platelets (layers), which can easily slide over each other.

The following Solid lubricants are used as friction modifiers:


§ Graphite;
§ Molybdenum disulfide;
§ Boron nitride (BN);
§ Tungsten disulfide (WS2);
§ Polytetrafluoroethylene (PTFE).
Anti-wear additives
Anti-wear additives prevent direct metal-to-metal contact between the
machine parts when the oil film is broken down.
Use of anti-wear additives results in longer machine life due to higher wear
and score resistance of the components.
The mechanism of anti-wear additives: the additive reacts with the metal on
the part surface and forms a film, which may slide over the friction surface.

The following materials are used as anti-wear additives:


§ Zinc dithiophosphate (ZDP);
§ Zinc dialkyldithiophosphate (ZDDP);
§ Tricresylphosphate (TCP).

Extreme pressure (EP) additives


Extreme pressure (EP) additives prevent seizure conditions caused by direct
metal-to-metal contact between the parts under high loads.
The mechanism of EP additives is similar to that of anti-wear additives: the
additive forms a coating on the part surface. This coating protects the
surface from direct contact with the mating part, decreasing wear and
scoring.
The following materials are used as extreme pressure (EP) additives:
§ Chlorinated paraffins;
§ Sulphurized fats;
§ Esters;
§ Zinc dialkyldithiophosphate (ZDDP);
§ Molybdenum disulfide;

Rust and corrosion inhibitors


Rust and corrosion inhibitors form a barrier film on the substrate surface,
reducing the corrosion rate. The inhibitors also adsorb on the metal surface,
forming a film that protects the part from attack by oxygen, water and other
chemically active substances.

The following materials are used as rust and corrosion inhibitors:


§ Alkaline compounds;
§ Organic acids;
§ Esters;
§ Amino-acid derivatives.

Anti-oxidants
Mineral oils react with oxygen in the air, forming organic acids. The
oxidation products cause an increase in oil viscosity, formation of sludge and
varnish, corrosion of metallic parts, and foaming.
Anti-oxidants inhibit the oxidation process of oils; most lubricants contain
them.

The following materials are used as anti-oxidants:


§ Zinc dithiophosphate (ZDP);
§ Alkyl sulfides;
§ Aromatic sulfides;
§ Aromatic amines;
§ Hindered phenols.

Detergents
Detergents neutralize strong acids present in the lubricant (for example
sulfuric and nitric acid produced in internal combustion engines as a result of
combustion process) and remove the neutralization products from the metal
surface. Detergents also form a film on the part surface preventing high
temperature deposition of sludge and varnish.
Detergents are commonly added to Engine oils.
Phenolates, sulphonates and phosphonates of alkali and alkaline-earth metals,
such as calcium (Ca), magnesium (Mg), sodium (Na) or barium (Ba), are used as
detergents in lubricants.

Dispersants
Dispersants keep the foreign particles present in a lubricant in a dispersed
form (finely divided and uniformly dispersed throughout the oil).
The foreign particles are sludge and varnish, dirt, products of oxidation,
water etc.
Long-chain hydrocarbon succinimides, such as polyisobutylene succinimides, are
used as dispersants in lubricants.
Pour point depressants
Pour point is the lowest temperature, at which the oil may flow.
Wax crystals formed in mineral oils at low temperatures reduce their fluidity.
Pour point depressants inhibit the formation and agglomeration of wax
particles, keeping the lubricant fluid at low temperatures.
Co-polymers of polyalkyl methacrylates are used as pour point depressant in
lubricants.

Viscosity index improvers


The viscosity of oils decreases sharply at high temperatures, and low
viscosity reduces the oil's lubricating ability.
Viscosity index improvers keep the viscosity at acceptable levels, which
provide stable oil film even at increased temperatures.
Viscosity improvers are widely used in multigrade oils, viscosity of which is
specified at both high and low temperature.
Acrylate polymers are used as viscosity index improvers in lubricants.

Anti-foaming agents
Agitation and aeration of a lubricating oil in certain applications (engine
oils, gear oils, compressor oils) may result in the formation of air bubbles
in the oil, i.e. foaming. Foaming not only enhances oil oxidation but also
decreases the lubrication effect, causing oil starvation.
Dimethylsilicones (dimethylsiloxanes) are commonly used as anti-foaming
agents in lubricants.
Lubricant additives, explained
Additives are a chemical component or blend used at a specific treat rate,
generally from < 1 to 35 percent, to provide one or more functions in the
fluid. Ideally, additive components are multifunctional. They are soluble in
mineral oil, water or sometimes both.
Additives offer or help with a wide variety of functions, such as:
● boundary lubricity
● extreme pressure (EP)
● inhibiting corrosion
● boosting reserve alkalinity
● emulsification
● antimisting
● antimicrobial pesticide
● antifoam additives and defoamers

With such a variety of effects, chemists often look for additives that can be
multifunctional as well as compatible with different chemicals in a
formulation, both with other additives as well as the base fluid.
Performance-related lubricant additives

The properties of the oil are augmented by lubricant additives.


● Boundary Lubricity Additives enhance fluid lubricity by adsorbing on
the metal surface to form a film, limiting metal-to-metal contact.
Examples include lard and canola oil. Solid lubricants can also be used
for boundary lubrication.
● Extreme Pressure Additives are a special type of boundary lubricity
additive that form a metal salt layer between mating surfaces that limit
friction, wear and damage. Examples include:
○ Zinc dialkyl dithiophosphate (ZDDP)
○ Chlorinated paraffins
○ Sulfurized lard oils
○ Phosphate esters
○ Overbased calcium sulfonates
● Corrosion Inhibitors prevent the fluid from corroding machine
surfaces, metal work piece, cutting tool and machine tool. Examples
include:
○ Overbased sulfonates
○ Alkanolamides
○ Aminoborates
○ Aminocarboxylates
● Reserve Alkalinity Additives essentially serve as a buffer, neutralizing
acidic contaminants to preserve the fluid’s corrosion protection and
maintain the pH in a suitable range. Examples include alkanolamines
like:
○ Monoethanolamine (MEA)
○ Triethanolamine (TEA)
○ Aminomethylpropanol (AMP)
○ 2-(2-aminoethoxy) ethanol
● Metal Deactivators prevent the MWF from staining nonferrous alloys
(such as copper and brass) and reduce corrosion when dissimilar metals
contact each other. They act by forming a protective coating on the
metal surface. Examples include:
○ Mercaptobenzothiazole
○ Tolyltriazole
○ Benzotriazole
● Detergents stabilize dirt and wear debris in oil formulations.
● Emulsifiers reduce interfacial tension between incompatible
components by forming micelles, thereby stabilizing oil-soluble
additives in water-dilutable MWFs. These micelles (droplets in a
colloidal system) can then remain suspended in the fluid. Milk is an
emulsion. In MWFs, examples include sodium petroleum sulfonate and
alkanolamine salts of fatty acids.
● Couplers help stabilize water-dilutable MWFs in the concentrate to
prevent component separation. Couplers facilitate formation of soluble
oil emulsions. Examples include:
○ Propylene glycol
○ Glycol ethers
○ Nonionic alkoxylates
● Chelating Agents (also known as water softeners or conditioners)
reduce the destabilizing effect of hard water (calcium and magnesium
ions) on MWF emulsions. An example might be
ethylenediaminetetraacetic acid (EDTA).
● Antimist Additives minimize the amount of lubricant that disperses
into the air during machining. They are typically polymers and/or
wetting agents. For oil-based systems, ethylene, propylene copolymers
and polyisobutenes are used. For water-based systems, polyethylene
oxides are common.
● Dyes change the color of the lubricant or MWF, usually as requested by
the customer. In water-diluted fluids, their main value is to indicate that
product is present, since some of these can be clear and water-like in
appearance. However, dyes carry some negatives:
○ They can stain skin and paint
○ Some water-soluble dyes are unstable and can change color
○ Some dyes can pass through waste treatment systems, resulting in
pollution downstream.
Synthetic Lubricants
ynthetics aren’t new; they’ve been around for 70 years. Esters were used in
WWII by both Germany and the U.S. to keep equipment running under harsh
conditions. Only in the last 20 years, however, have end users throughout
industry really recognized the cost benefits of synthetics. Now, more and
more uses are being found for these formulations and they are experiencing
significantly high growth rates.

Synthetics are formulated by combining low molecular-weight materials in a


chemical reaction to produce higher molecular-weight materials. These
reactions are controlled to produce products with uniform consistency and
targeted performance properties. Mineral-based lubricants don’t exhibit the
consistency and uniformity that synthetics exhibit, nor do they have the
performance properties.

Synthetic classifications
There are many types of synthetic fluids. Within the scope of this particular
article, we will deal with the most common.

Synthetics are classified into the following major groups:

· Synthetic Hydrocarbons
o Polyalphaolefins (PAO)
o Alkylated Aromatics
o Polybutenes
· Esters
o Diesters
o Polyol Esters
o Phosphate Esters
· Others
o Polyglycols
o Silicones
PAOs are the largest synthetic group, followed by esters and PAGs. Most of
the discussion will focus on these three synthetic types.
Advantages/disadvantages
Fig. 1 details the overall advantages of synthetics as a class. Not all synthetics
have all these advantages—and some have more than others. Fig. 2 describes
some of the disadvantages. Note that this is a composite of all the major
synthetic types. The only disadvantage common to all synthetics is cost. For
the most common synthetics (PAO, Esters and PAGs) the cost is 3-5 times
the cost of a high-quality mineral oil.
The advantages offered by synthetics allow these formulations to be real
lubrication problem solvers. The three major categories for synthetic use are:

Temperature extremes—Synthetics are wax-free, so they can be used at


very low temperatures. High temperatures (over 200 F) call for the use of
synthetics. They should be considered when mineral bulk temperatures reach
180 F. The high- and low-temperature capability of the most common
synthetic types is illustrated in Table I. These numbers indicate the highest
and lowest levels of operation and are not common operating conditions.
Wear reduction—Synthetics are custom manufactured and the molecular
sizes are more uniform than mineral-based products, thereby providing greater
film strength and lubricity than mineral oils. Diesters and polyol esters,
because of their polarity, have excellent lubricity and film strength, followed
by polyalkylene glycols. PAOs, which are non-polar, have the lowest level of
lubricity and film strength of this group. The film strength, coupled with
additives, helps minimize wear under boundary lubrication conditions.
Energy savings—Many times, the use of synthetics has been justified on
energy savings alone. This is particularly true in the case of gearboxes
(something that will be discussed in an upcoming article in this series).
Efficiency is related to lubricity and film strength. Traction coefficient is an
important factor with synthetics during elastohydrodynamic lubrication
(EHL), which occurs with rolling motion, such as that in bearings and pitch
point in gears. EHL lubrication results in a thin film under high pressure that
increases the viscosity of the film. The internal resistance of the fluid film to
sliding, known as traction coefficient, can affect energy savings in rolling
element bearings. PAOs and PAGs have low traction coefficients compared
to mineral oils and, as such, can lead to energy savings.

Synthetic types
Polyalphaolefins…
PAOs are produced by the following reaction:

Decene is reacted with itself to produce higher molecular weight hydrocarbons
that are linked in groups of 10 carbon atoms; thus any molecular weight can be
produced in steps of 10. The initial reaction oligomerizes the linear alpha
olefin, and the final reaction saturates the remaining double bond to produce
a PAO.

It is also possible to react dodecene (which has 12 carbon atoms) to produce


increasing molecular weights in groups of 12 carbon atoms. For the purposes
of this article, however, our discussion will focus on carbon atoms in groups
of 10.

PAOs are classified in terms of viscosity at 100 C. Table II illustrates some of


their viscosity grades.

By blending different viscosities, there is a great deal of flexibility in creating


different viscosity grades with PAOs.
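One commonly quoted way to estimate the viscosity of such a blend is a Refutas-type viscosity blending number: convert each component's viscosity to a blending number, average the numbers by mass fraction, and convert back. The Python sketch below does this for a hypothetical blend of two PAO base stocks; the constants are the usual published ones, and the component viscosities are illustrative rather than manufacturer data.

    import math

    def vbn(nu_cst):
        """Viscosity blending number (Refutas-type correlation, commonly
        quoted form)."""
        return 14.534 * math.log(math.log(nu_cst + 0.8)) + 10.975

    def blend_viscosity(components):
        """Estimate blend kinematic viscosity from (mass_fraction, cSt) pairs
        by mass-averaging the blending numbers and inverting the correlation."""
        vbn_blend = sum(w * vbn(nu) for w, nu in components)
        return math.exp(math.exp((vbn_blend - 10.975) / 14.534)) - 0.8

    # Hypothetical blend (illustrative values only): 70% of a 5.9 cSt (100 C)
    # base stock with 30% of a 39 cSt (100 C) base stock.
    print(f"Estimated blend viscosity: {blend_viscosity([(0.70, 5.9), (0.30, 39.0)]):.1f} cSt")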

Key properties:
· Excellent low-temperature fluidity
· Good high-temperature properties
· High viscosity index
· Low volatility
· Hydrolytic stability
· Highly compatible with mineral oils
· Low biodegradability
· Slight elastomeric seal shrinkage
· Low additive solvency
· Low lubricity
PAOs are formulated with 5-20% ester (typically a diester) to overcome the
seal shrinkage and non-polarity, resulting in good additive solubility and
increased lubricity. PAOs have the widest application of any of the
synthetics; this will be discussed in the next article.

Esters… Two major groups of esters to be discussed are diesters and polyol
esters. Diesters are produced by the following reaction:

This reaction is reversible so, in the presence of heat and water, a diester can
decompose back to an acid and an alcohol. The conditions need to be severe
to cause this reaction to reverse, but it will occur under high-temperature and
high-moisture conditions.

Depending on the alcohol and acid selected, a large number of diester types
can be produced and tailored to a particular application.
Key properties:
· Low pour point
· Low volatility
· Good thermal and oxidative stability
· Excellent solvency and cleanliness
· Good metal-wetting properties, resulting in good lubricity
· Good biodegradability
· Poor compatibility with some elastomers, plastics and paints
· Hydrolyze under high-temperature, high-moisture conditions
Polyol esters…
These synthetics are produced by reacting a highly branched di-functional
alcohol with a mono-basic acid as follows:

This ester is highly branched, which results in the following key properties:
· Low pour point
· Low volatility
· Good viscosity index
· Excellent thermal and oxidative stability
· Excellent solvency and cleanliness
· Very good lubricity
· Highly biodegradable
· Slight tendency to hydrolyze under severe conditions
· Nearly 50% more expensive than diesters
Polyalkylene glycol…
PAGs are quite versatile. Many different types can be created, which allows
for a wide variance in properties. PAGs are produced as follows:

Either 100% ethylene oxide, 100% propylene oxide or a combination of the


two are used to create many different types of PAGs with unique properties
— and with many different molecular weights.
PAGs can be made either water soluble or insoluble. Increasing the ethylene
oxide (EO) ratio increases the water solubility and decreases the oil
solubility. Water soluble PAGs are inversely soluble, meaning that the
solubility decreases with increasing temperature.

Table III illustrates the various ratios of ethylene and propylene oxide and
their properties.

Key properties:
· Versatile with both water-soluble and water-insoluble grades
· High viscosity indexes
· Hydrolytic stability
· Excellent lubricity
· Low volatility
· High oxidative and thermal stability
· Can be formulated to have limited gas solubility
· Resistant to sludge formation
· Compatible with most elastomeric seals but may cause slight
shrinkage
· Incompatible with many paints and polycarbonate and polyurethane
· Incompatible with mineral oil and other non-ester synthetics
Conclusion
Table IV summarizes the strengths and weaknesses associated with each of
the major synthetics that have been discussed in this article.
Synthetic fluids are real problem solvers—and very important in improving
equipment reliability. Their usage is growing as equipment conditions require
higher performing lubricants. In the next installment of this series, selecting
the optimal synthetic based on the equipment and conditions will be
discussed.
Lubricants : Classification and properties
A substance capable of reducing friction between two surfaces sliding over
each other is called a lubricant. A lubricant reduces the coefficient of
friction between two rubbing surfaces, and in this way the loss of energy due
to friction is considerably reduced. Petroleum is the main source of
lubricants, but synthetic lubricants have also been developed for specific
purposes. A lubricant acts in a number of ways. It acts as a coolant by
removing the heat of friction generated by the rubbing of surfaces. It
prevents direct contact between the rubbing surfaces: under proper conditions
of lubrication a thin film of lubricant lies between them. In internal
combustion engines it acts as a seal at the compression rings, sealing the
piston against the cylinder wall so that there is no leakage of high-pressure
gases from the combustion chamber. Lubrication also avoids power loss in the
IC engine.
The main features of lubrication are:
1. It reduces wear and tear and surface deformation, by avoiding
direct contact between the rubbing surfaces.
2. It reduces the loss of energy in the form of heat by acting as a
coolant.
3. It increases the efficiency of the machine by reducing the waste of
energy.
4. It reduces expansion of metal by local frictional heat.
5. It minimizes the liberation of frictional heat and hence avoids
seizure of moving surfaces.
6. It prevents unsmooth relative motion of the moving or sliding
parts.
7. It reduces the maintenance as well as the running cost of the machine
to a large extent.
8. As seen above it also acts as a seal in IC engines.

CLASSIFICATION OF LUBRICANTS
Lubricants can be broadly classified, on the basis of their physical state, as
follows: (1) Liquid lubricants or lubricating oils; (2) Semi-solid lubricants or
greases, and (3) Solid lubricants.
Lubricating oils
Lubricating oils reduce friction and wear between two moving/sliding
metallic surfaces by
providing a continuous fluid film in-between them. They also act as: (a)
cooling medium; (b) sealing agent, and (c) corrosion preventer. Good
lubricating oil must possess: (a) low vapour pressure (i.e., a high boiling
point), (b) adequate viscosity for the particular service conditions, (c) a
low freezing point, (d) high oxidation resistance, (e) heat stability, (f)
non-corrosive properties, and (g) stability against decomposition at the
operating temperatures. Lubricating oils are
further classified as:
(1) Animal and vegetable oils: Before the advent of the petroleum industry,
oils of the vegetable and animal origins were the most commonly used
lubricants. They possess good oiliness (the property by virtue of which an oil
sticks to the surface of machine parts even under high temperatures and heavy
loads). However, they (i) are costly, (ii) undergo oxidation easily, forming
gummy and acidic products, and thicken on coming into contact with air, and
(iii) have some tendency to hydrolyse when allowed to remain in contact with
moist air or an aqueous medium. At present, therefore, they are rarely used on
their own; rather, they are used as blending agents with other lubricating
oils (such as mineral oils) to produce desired effects in the latter.
(2) Mineral or petroleum oils are obtained by distillation of petroleum. The
length of the hydrocarbon chain in petroleum oils varies between about 12 and
50 carbon atoms; the shorter-chain oils have lower viscosity than the
longer-chain hydrocarbons. These are the most widely used lubricants because
they are (i) cheap, (ii) available in abundance, and (iii) quite stable under
service conditions. However, they possess poor oiliness compared with animal
and vegetable oils. The oiliness of petroleum oils can be increased by
the addition of high molecular weight compounds like oleic acid, stearic acid,
etc.
Purification: Crude liquid petroleum oils contain a lot of impurities (such as
wax, asphalt, etc.) and consequently have to be thoroughly purified before
being put to use. (i) The wax, if not removed, raises the pour point and
renders the lubricating oil unfit for use at low temperatures. (ii) Certain
constituents get easily oxidized under working conditions and cause sludge
formation. (iii) Some constituents, mainly asphalt, undergo decomposition at
higher temperatures, causing carbon deposition and sludge formation. A number
of processes are used for removing these unwanted impurities, such as
dewaxing, acid refining and solvent refining.
(3) Blended oils: No single oil serves as the most satisfactory lubricant for
much modern machinery. Typical properties of petroleum oils are improved by
incorporating specific additives. These so-called 'blended oils' give the
desired lubricating properties required for particular machinery. The
following additives are employed:
(i) Oiliness-carriers: Oiliness of a lubricant can be increased by addition of an
oiliness-carrier like vegetable oils (e.g., coconut oil, castor oil) and fatty acids
(like palmitic acid, stearic acid, oleic acid. etc.).
(ii) Extreme-pressure additives: Under extreme pressure, a thick film of oil
is difficult to maintain and the oil needs high oiliness. Besides improving
oiliness directly, extreme-pressure additives are used. These additives
contain materials which are adsorbed on the metal surface or react chemically
with the metal, producing a surface layer of low shear strength and thereby
preventing the tearing up of the metal. Another property of extreme-pressure
additives is that they react with metal surfaces at high temperature, forming
surface alloys that prevent the welding together of the rubbing parts under
severe operating conditions.
The main substances added for extreme-pressure lubrication are: (a) fatty
esters, acids, etc., which form an oxide film on the metal surface; (b)
organic materials containing sulphur; (c) organic chlorine compounds; and (d)
organic phosphorus compounds. Extreme-pressure lubricants may also contain
some lead in order to produce a thin film of lead sulphide and other lead
compounds on surfaces such as gear teeth.
(iii) Pour-point depressing additives used are phenol and certain condensation
products of chlorinated wax with naphthalene. These prevent the separation
of wax from the oil.
(iv) Viscosity-index improvers are certain high molecular weight polymers,
such as polymethacrylates.
(v) Thickeners such as polystyrene are materials usually of molecular weight
between 300 and 3,000. They are added in order to give the lubricating oil a
higher viscosity.
(vi) Antioxidants or inhibitors, when added to oil, retard oxidation of oil by
getting them- selves preferentially oxidized. They are particularly added in
lubricants used in internal combustion engines, turbines, etc., where oxidation
of oil is a serious problem. The antioxidants are aromatic, phenolic or amino
compounds.
(vii) Corrosion preventers are organic compounds of phosphorus or
antimony. They protect the metal from corrosion by preventing contact
between the metal surfaces and the corrosive substances.
(viii) Abrasion inhibitors like tricresyl phosphate.
(ix) Antifoaming agents (like glycols and glycerol) help in decreasing foam
formation.
(x) Emulsifiers such as sodium salts of sulphonic acid.
(xi) Deposit inhibitors are detergents such as the salts of phenol and
carboxylic acids. Deposits are formed in internal combustion engine, due to
imperfect combustion. Such additive disperses and cleans the deposits.
GREASES OR SEMI-SOLID LUBRICANTS
Lubricating grease is a semi - solid, consisting of a soap dispersed throughout
liquid lubricating oil. The liquid lubricant may be petroleum oil or even
synthetic oil and it may contain any of the additives for specific requirements.
Greases are prepared by saponification of fat (such as tallow or fatty acid)
with alkali (like lime, caustic soda, etc.), followed by adding hot lubricating
oil while under agitation. The total amount of
mineral oil added determines the consistency of the finished grease. The
structure of lubricating greases is that of a gel. Soaps are gelling agents,
which give an interconnected structure (held together by intermolecular
forces) containing the added oil. At high temperatures, the soap dissolves in
the oil, whereupon the interconnected structures cease to exist and the grease
liquefies. Consistency of greases may vary from a heavy viscous liquid to the
of a stiff solid mass. To improve the heat-resistance of grease, inorganic solid
thickening agents (like finely divided clay, bentonite, colloidal silica, carbon
black, etc.) are added.
Greases have higher shear or frictional resistance than oils and, therefore, can
support much heavier loads at lower speeds. They also do not require as much
attention as lubricating oils do. But greases have a tendency to separate into
oils and soaps. Greases are used: (i) in situations where oil
cannot remain in place, due to high load, low speed, intermittent operation,
sudden jerks, etc. e.g. rail axle boxes, (ii) in bearing and gears that work at
high temperatures ; (iii) in situations where bearing needs to be sealed against
entry of dust, dirt, grit or moisture, because greases are less liable to
contamination by these ; (iv) in situations where dripping or spurting of oil is
undesirable, because unlike oils, greases if used do not splash or drip over
articles being prepared by the machine. For example, in machines preparing
paper, textiles, edible articles, etc.
The main function of soap is thickening agent so that grease sticks firmly to
the metal surfaces. However, the nature of the soap decides: (a) the
temperature up to which the grease can be used; (b) its consistency; (c) Its
water and oxidation resistance. So, greases are classified after the soap used
in their manufacture. Important greases are: (i) Calcium-based greases or
cup-greases are emulsions of petroleum oils with calcium soaps. They are,
generally, prepared by adding requisite amount of calcium hydroxide to hot
oil (like tallow) while under agitation. These greases are the cheapest and
most commonly used. They are insoluble in water, so water resistant.
However, they are satisfactory only for use at low temperatures, because above
80 °C the oil and soap begin to separate out.
(ii) Soda-base greases are petroleum oils, thickened by mixing sodium soaps.
They are not water resistant, because the sodium soap content is soluble in
water. However, they can be used up to 175 °C. They are suitable for use in
ball bearings, where the lubricant gets heated due to friction.
(iii) Lithium-based greases are petroleum oils, thickened by mixing lithium
soaps. They are water-resistant and suitable for use at low temperatures (up
to 15 °C) only.
(iv) Axle greases are very cheap resin greases, prepared by adding lime (or
another heavy-metal hydroxide) to resin and fatty oils. The mixture is
thoroughly mixed and allowed to stand, whereupon the grease floats as a stiff
mass. Fillers (such as talc and mica) are also added. They are water-resistant
and suitable for less delicate equipment working under high loads and at low
speeds. Besides the above, there are greases prepared by dispersing solids
(such as graphite or soapstone) in mineral oil. These are mostly used in rail
axle boxes, machine bearings, tractor rollers, wire ropes, etc.
SOLID LUBRICANTS
Solid lubricants are used where: (i) operating conditions are such that a
lubricating film cannot be secured by use of lubricating oils or greases; (ii)
contamination (by the entry of dust or grit particles) of lubricating oil or
grease is unacceptable, (iii) the operating temperatures or load is too high
even for a semi-solid lubricant to remain in position; and (iv) combustible
lubricants must be avoided.
The two most usual solid lubricants employed are graphite and molybdenum
disulphide. Graphite consists of a multitude of flat plates, one atom thick,
which are held together by only weak bonds, so that the force to shear the
crystals parallel to the layers is low. Consequently, the parallel layers slide
over one another easily. Usually, some organic substances are mixed with solid
lubricants so that they stick firmly to the metal surface.
On the other hand, molybdenum disulphide has a sandwich like structure in
which a layer of a Mo atoms lies between two layers of S atoms. Poor
interlaminar attraction is responsible for low shear strength in a direction
parallel to the layers. Solids lubricants are used either in the dry powder or
mixed with water or oil. The solids fill up the low spots in the surfaces of
moving parts and form solid films, which have low frictional resistance. The
usual coefficient of friction between solid lubricants is between 0.005 and
0.01.
(a) Graphite is the most widely used of all solid lubricants. It is very soapy to
touch, non-inflammable and not oxidized in air below 375 °C. In the absence
of air, it can be used upto very much higher temperatures. Graphite is used
either in powdered form or as suspension. Suspension of graphite in oil or
water is brought about with the help of an emulsifying agent like tannin.
When graphite is dispersed in oil it is called 'oildag', and when it is
dispersed in water it is called 'aquadag'. Oildag is found particularly useful
in internal combustion engines, because it forms a film between the piston
rings and the cylinder and gives a tight-fitting contact, thereby increasing
compression. Aquadag, on the other hand, is useful where a lubricant free from
oil is needed, e.g. in the foodstuffs industry. Graphite is also mixed with
greases to form graphite-
greases, which are used at still higher temperatures.
Uses: As lubricant in air-compressors, lathes, general machine-shop works,
foodstuffs industry, railway track-joints, open gears, chains, cast iron
bearings, internal combustion engine, etc. (b) Molybdenum disulphide
possesses a very low coefficient of friction and is stable in air up to 400 °C. Its
fine powder may be sprinkled on surfaces sliding at high velocities, when it
fills low spots in metal surfaces, forming its film. It is also used along with
solvents and in greases. Besides the more important graphite and
molybdenum disulphide, other substances like soapstone, talc, mica, etc., are
also used as solid lubricants.
SYNTHETIC LUBRICANTS
Petroleum-based lubricants can be used under abnormal conditions, such as
extremely high temperatures or a chemically reactive atmosphere, by employing
certain specific additives. However, synthetic lubricants have been developed
which alone can meet the most drastic and severe conditions, such as those
existing in aircraft engines, in which the same lubricant may have to be used
over a temperature range of -50 °C to 250 °C. Such a lubricant should possess
a low freezing point and a high viscosity index, and should be
non-inflammable.
Modern synthetic lubricants possess, in general, the following distinguishing
characteristics: (i) non-inflammability, (ii) high flash points, (iii) high
thermal stability at high operating temperatures, (iv) a high viscosity index,
and (v) chemical stability. Important synthetic lubricants are given below.
(1) Polymerized hydrocarbons like polyethylene, polypropylene,
polybutylene in the molecular weight range of 500 to 50,000 are residue-free,
light in color, free from non-hydrocarbon impurities, chemically non-reactive
and high temperature lubricants.
(2) Polyglycols and related compounds like polyethylene glycol,
polypropylene glycol, polyglycidyl ethers, and higher polyalkylene oxides
can be used as water-soluble as well as water-insoluble lubricants in rubber
bearings and joints. Polyglycidyl ethers and higher polyalkylene oxides are
water-insoluble, but they can absorb a considerable amount of water. Their
viscosity-index is high and these are used in roller bearings of sheet glass
manufacturing machines. It may be pointed out that polyethylene oxides undergo
thermal decomposition at high temperature, evolving volatile oxidisable
products, so they are not useful as lubricants at high temperatures.
(3) Organic amines, imines and amides are good synthetic lubricants, since
they possess low pour-points and high viscosity-index. They can be used
under temperature conditions of -50 °C to 250 °C.
(4) Silicones are very good synthetic lubricants because they are not oxidized
below 200 °C and possess a high viscosity index. They are frequently used for
low-temperature lubrication purposes. It may be pointed out that silicones
oxidize quickly above 200 °C and undergo cracking at about 230 °C, so they are
not employed for high-temperature applications.
(5) Fluorocarbons are not decomposed by heat, are not easily oxidized, and are
chemically inert and resistant to chemicals, except molten sodium.
LUBRICATING EMULSIONS
An emulsion is a two-phase system, consisting of a fairly coarse dispersion of
two immiscible liquids, one being dispersed as fine droplets in the other.
The disperse (or the internal) phase is the liquid that is broken into droplets.
The surrounding liquid is known as the continuous or external or dispersing
phase. Usually, the size of dispersed droplets varies from 1 to 6 micron. A
dispersion system consisting of two immiscible liquids is inherently unstable,
and to increase its stability, a third agent, called emulsifier or emulsifying
agent, is added. Emulsifiers are compounds exhibiting both polar and non-
polar character. The emulsifier molecule contains a hydrophobic-end and a
hydrophilic end. Hydrophobic-end of the molecule is preferably wetted by
oil; whereas the hydrophilic-end is wetted by water. Thus, emulsifier
molecule is adsorbed at the interface of the two phases (oil and water),
resulting in the formation of a protective film around the dispersed droplets.
A sodium soap molecule illustrates well the functioning of an emulsifier. Sodium soap is the sodium salt of a long-chain fatty acid, e.g., C15H31COONa, possessing a hydrophilic group (-COONa) and a hydrophobic end (C15H31-).
Geometric calculations have shown that the maximum amount of dispersed phase in the other liquid can be 74.02% of the total volume. Thus, a water-oil mixture with less than 26% oil would tend to form an O/W emulsion, whereas a mixture with more than 74% oil would result in a W/O emulsion. Compositions between 26 and 74% oil can result in either type of emulsion.
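The rule of thumb above can be expressed as a tiny calculation. The following sketch (Python, with the 26% and 74% thresholds taken from the geometric limit just described) merely classifies the likely emulsion type from the oil volume fraction:

def likely_emulsion_type(oil_percent):
    """Classify the likely emulsion type from the oil volume fraction (%)."""
    if oil_percent < 26:
        return "oil-in-water (O/W) emulsion"
    if oil_percent > 74:
        return "water-in-oil (W/O) emulsion"
    return "either O/W or W/O possible"

print(likely_emulsion_type(20))   # oil-in-water (O/W) emulsion
print(likely_emulsion_type(80))   # water-in-oil (W/O) emulsion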
(a) Oil-in-water emulsions are obtained by adding oil containing about 3-20% of a water-soluble emulsifying agent to a suitable quantity of water. The most usual emulsifying agents are sodium soaps and the sodium and potassium salts of sulphonic acids. The main use of such an emulsion is as a cooling and lubricating liquid for cutting tools. Another use is the lubrication of certain rather heavy sliding components, such as pistons in marine diesel engines and large internal combustion engines. Such emulsions also give rust protection.
(b) Water-in-oil emulsions are prepared by mixing water containing about 1 to 10% of a water-soluble emulsifier (like an alkaline-earth soap, e.g., calcium stearate) with oil. These emulsions possess a much higher viscosity than that of the oil from which they are prepared. An emulsion containing about 40% water by volume is widely used to lubricate compressors and pneumatic tools. Such emulsions provide a cooling effect (due to the evaporation of water), besides their lubricating action.
Requirements and properties of lubricants
Lubricants must have the following main characteristics
· Keep surfaces separate under all loads, temperatures and speeds,
thus minimizing friction and wear.
· Act as a cooling fluid removing the heat produced by friction or
from external sources
· Remain adequately stable in order to guarantee constant behavior over the forecast useful life
· Protect surfaces from attack by aggressive products formed during operation
· Show cleaning capability and dirt-holding capacity in order to remove residue and debris that may form during operation

The properties of lubricants


The main properties of lubricants, which are usually indicated in the technical
characteristics of the product, are:
· Viscosity
· Viscosity index
· Pour point
· Flash point

Viscosity
Viscosity describes the flow behavior of a fluid. The viscosity of lubricating oils diminishes as temperature rises, and it is consequently measured at a given temperature (e.g., 40°C). The viscosity of a lubricant determines the thickness of the layer of oil between metallic surfaces in relative motion. The most widely used unit of measurement of viscosity is the centistoke (cSt).

Viscosity index
The viscosity index is a characteristic used to indicate the variation in the viscosity of a lubricating oil with changes in temperature. The higher the viscosity index, the lower the variation in viscosity with temperature changes.
Consequently, if two lubricants with the same viscosity are considered at a
temperature of 40 °C, the one with the higher viscosity index will guarantee:
· better engine start up at low temperatures (lower internal friction)
· a higher stability of the lubricating film at high temperatures

Viscosimetric classifications
There are a number of viscosimetric classification systems that indicate,
usually with a number, a more or less limited viscosity range. The aim is to
provide, along with the viscosity index, a rapid indication of the most
appropriate choice of lubricant for a specific application.
ISO VG grades are widely used to classify industrial oils; each grade identifies a kinematic viscosity range measured at 40°C. SAE grades are used in the field of engine oils and gear oils.
Pour Point
The pour point refers to the minimum temperature at which a lubricant
continues to flow. Below the pour point, the oil tends to thicken and to cease
to flow freely.
Flash point
The flash point is the minimum temperature at which an oil-vapor-air-mixture
becomes inflammable. It is determined by progressively heating the oil-
vapor-air-mixture in a standard laboratory receptacle until the mixture ignites.
Lubricants Testing
Lubricant quality testing from our Total Quality Assurance experts. Lubricant testing and oil condition monitoring provides quality and condition assessment of lubricants and oils used in engines and other expensive machinery and systems.

Lubricant quality control testing includes lubricant analysis programs for large, high-value engines and drive-trains, turbines, ships, trains, generators, offshore platforms, and other highly valuable machinery.

Intertek lubricant quality testing helps clients minimize costly downtime and repairs by alerting the customer to early, developing problems before they become big and expensive failures.
Intertek helps lubricant manufacturers within the areas of quality control,
formulation, R&D, and qualification testing.
Lubricants testing and consulting:
● Lubricants Oil Condition Monitoring Program
● Gear Lubricants Testing
● Lubricant Quality Scanning Services (Marine lubricants testing)
● Automotive Industry Engine Lubricant Tests
● Lubricant Qualification Testing Services
● Ferrography Testing
● Oils and Fluids Testing
● Wear Metals Testing
● Oil Analysis
● R&D and product development support
● Raw material chemical evaluation and development
● Tribology film and residue chemical/mechanical analysis
● and more

What is Grease?
A solid or semi-solid product of dispersion of a thickening agent in
lubricating oil. Other ingredients imparting special properties may be
included.

Classification of Lubricants
On the basis of its physical state, grease is classified as a semi-solid lubricant.
Grease vs. Oil Lubrication - Advantages and Disadvantages

Feeding device
  Grease lubrication: A grease-sealed bearing does not require relubrication for an extended period.
  Oil lubrication: Requires a device that continuously feeds oil (drip-feed, splash feed or recirculation system).

Consumption
  Grease lubrication: Can be kept to the minimum necessary.
  Oil lubrication: A significant amount is required.

Lubrication system
  Grease lubrication: Simple.
  Oil lubrication: Complex.

Leakage
  Grease lubrication: Unlikely, because of its seal-forming characteristics.
  Oil lubrication: Possible if the sealing system is not adequate.

Use for high-speed applications
  Grease lubrication: Limited.
  Oil lubrication: Yes.

Contaminant removal
  Grease lubrication: No.
  Oil lubrication: Continual removal by filtration or centrifuge.

Cooling efficiency
  Grease lubrication: No cooling capacity.
  Oil lubrication: High cooling capacity.

Friction loss
  Grease lubrication: Generally high, but torque reduction can be achieved by channeling in roller bearings.
  Oil lubrication: Generally low.
Fuel Thermochemistry
Thermochemistry and One Ignition Cycle:
Thermochemistry deals with energy, heat and work. Much of what we study
deals with static systems in which no change is taking place. But we are not
content with an automobile or truck that just sits in the driveway, so this
discussion deals with situations undergoing change.
The change in energy of a system that is going through any process is the
sum of the heat added TO the system and the work done ON the system.
delta E = delta q + delta w
If heat flows from the system and work is done by the system, as in the case of fuel combustion, then both delta q and delta w are negative and delta E is likewise negative. This situation expresses the conversion of the potential or stored energy of the fuel into heat and kinetic energy, which are transferred from the system to its surroundings.
Consider one cylinder of that Dodge RAM truck outside. In one ignition event, a small amount of fuel is injected into the cylinder, compressed to a smaller volume and ignited with a spark in the presence of air, and we see both the heat and the work that result. The engine gets hot, the piston moves back down, and the resulting work drives the truck. Heat flows from the system and work is done by the system; the energy of the system, the engine, changes.
Where did the energy come from? What was the origin of both the heat and the work? The ignition of the gasoline in the cylinder caused a rapid chemical reaction: the conversion of the hydrocarbon fuel, in the presence of oxygen, to carbon dioxide and water with the release of energy. The standard for the measurement of this energy, as distinct from the methods of using it, is called the standard enthalpy. The standard enthalpy is the heat given off or required by a chemical reaction carried out on one mole of the substance under standard conditions, that is, 25 degrees Celsius and one atmosphere of pressure.
Thus for octane, one component of the mixture of hydrocarbons called gasoline, the combustion of one mole (about 114 g) follows the equation below, and the standard enthalpy of combustion is -1307 kcal/mol.
C8H18 + 12.5 O2 --> 8 CO2 + 9 H2O        delta Hc = -1307 kcal/mol

Please understand this standard enthalpy is a measurement of the energy


available, no more and no less. But this is confusing, isn't it? We said energy was the sum of the heat given off and the work done. How, then, can we measure energy by determining heat effects and ignoring the work? The reason is that we define work as a force acting over a distance.
· Gas expanding in a cylinder, against a constant resisting pressure,
is work.
· A gas expanding into the atmosphere with no restraint is not work.
In the absence of work, all the energy appears as heat. We can measure heat
rather well, so standard enthalpies are collections of heat measurements done
under the conditions of no work. But they are measurements of energy none
the less.
So thermochemistry is limiting: there is only so much energy available from each mole of combustible material. You might look up the standard enthalpy for hexane and find delta Hc = -997 kcal/mol. Octane might look more energy-producing than hexane, but in practice we use a fuel by weight or volume, not by mole. So what is important is the energy per gram:
· Octane: 1307 kcal/mol ÷ 114 g/mol = 11.5 kcal/g

· Hexane: 997 kcal/mol ÷ 86 g/mol = 11.6 kcal/g
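The per-gram figures above follow directly from dividing the magnitude of the molar enthalpy of combustion by the molar mass. A short Python check, using only the numbers quoted in this section:

fuels = {
    # fuel: (|delta Hc| in kcal/mol, molar mass in g/mol), values quoted above
    "octane": (1307, 114),
    "hexane": (997, 86),
}
for name, (delta_hc, molar_mass) in fuels.items():
    print(name, round(delta_hc / molar_mass, 1), "kcal/g")
# octane 11.5 kcal/g
# hexane 11.6 kcal/g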

Energy from fuels is limiting. We can obtain no more than about 11-12 kcal/g
from fuel combustion. If we are heating a home with this fuel, we want all of
that energy as heat, and as we have seen, burning in the atmosphere without
capturing the exhaust gases does just that.
But in an auto we want to maximize the work, not the heat. We know that
energy is released as heat AND work. We will define how we maximize
work and why the early auto engineers wanted to make high compression
engines in the section on Fuel Thermodynamics.
1. Thermochemistry is the study of the relationship between chemical reactions and energy changes.
2. Thermochemical and other data on elements and compounds are catalogued and reported by the National Institute of Standards and Technology (NIST).

Relative Density
Relative density is the ratio of the density (mass per unit volume) of a substance to the density of a given reference material (usually water). It is normally measured at room temperature (20°C) and standard atmospheric pressure (101.325 kPa), and it is unitless. You can often find it in Section 9 of a safety data sheet (SDS).

Regulatory Implications of Relative Density


Relative density is often used to calculate the volume or weight of sample needed for preparing a solution with a specified concentration. It also helps us understand the environmental distribution of insoluble substances (e.g., an oil spill) in an aquatic ecosystem (on the water surface or in the bottom sediment) if the substance is released to water.
Relative density test is not required for every chemical. Under REACH, the
study does not need to be conducted if:
· the substance is only stable in solution in a particular solvent and the
solution density is similar to that of the solvent. In such cases, an
indication of whether the solution density is higher or lower than the
solvent density is sufficient, or
· the substance is a gas. In this case, an estimation based on calculation
shall be made from its molecular weight and the Ideal Gas Laws.
Fuel Calorific Values
The calorific value of a fuel is the quantity of heat produced by its combustion, at constant pressure and under "normal" (standard) conditions (i.e., at 0°C and a pressure of 1,013 mbar).
The combustion process generates water vapor and certain techniques may be
used to recover the quantity of heat contained in this water vapor by
condensing it.
● Higher Calorific Value (or Gross Calorific Value, GCV, or Higher Heating Value, HHV): the water produced by combustion is entirely condensed, so the heat contained in the water vapor is recovered;
● Lower Calorific Value (or Net Calorific Value, NCV, or Lower Heating Value, LHV): the products of combustion contain the water vapor, and the heat in the water vapor is not recovered.
Fuel Calorific Values

Natural gas       12,500 kcal/kg
Propane-butane    11,950 kcal/kg
Diesel            10,000 kcal/kg
Fuel oil           9,520 kcal/kg
Brown coal         3,500 kcal/kg
Wood               2,500 kcal/kg
Electricity          860 kcal/kWh
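For readers who prefer SI units, the table values can be converted with the exact relation 1 kcal = 4.184 kJ. A minimal Python conversion of the figures listed above:

KCAL_TO_MJ = 4.184 / 1000  # 1 kcal = 4.184 kJ = 0.004184 MJ

calorific_values_kcal_per_kg = {
    "Natural gas": 12500, "Propane-butane": 11950, "Diesel": 10000,
    "Fuel oil": 9520, "Brown coal": 3500, "Wood": 2500,
}
for fuel, cv in calorific_values_kcal_per_kg.items():
    print(f"{fuel}: {cv * KCAL_TO_MJ:.1f} MJ/kg")
# e.g. Diesel: 41.8 MJ/kg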


Flash point and fire point
FLASH POINT :-
A certain concentration of vapor in the air is necessary to sustain combustion, and that concentration is different for each flammable liquid. The flash point of a flammable liquid is the lowest temperature at which there is enough flammable vapor to ignite when an ignition source is applied.
FIRE POINT :-
The fire point of a fuel is the lowest temperature at which the vapour of that
fuel will continue to burn for at least 5 seconds after ignition by an open
flame. At the flash point, a lower temperature, a substance will ignite briefly,
but vapor might not be produced at a rate to sustain the fire.
Vapor pressure
Vapor pressure is an important quality specification for gasoline.
Specifically, vapor pressure is a measure of the volatility of a fuel, or the
degree to which it vaporizes at a given temperature.
For gasoline, vapor pressure is important for both performance and
environmental reasons. First, because gasoline engines require the fuel to be
vaporized in order to burn, gasoline must meet a minimum vapor pressure to
ensure that it is volatile enough to vaporize under cold start conditions.
Engines also have a maximum limit for vapor pressure set by concerns over
vaporization in the fuel line that can result in vapor lock, or a blocking of the
fuel line. However, the most critical limit to vapor pressure in most markets
now is environmental concern about evaporative emissions outside of the
vehicle, which contribute to pollution. Typically, it is this concern that sets
the critical maximum vapor pressure specification for most grades of
gasoline.
Measurement of vapor pressure
The most common measure of vapor pressure for gasoline is the Reid vapor pressure (RVP). This is the vapor pressure exerted by the fuel at 100 F (37.8 C), measured in a standard apparatus and expressed in psi (pounds per square inch) or kPa (kilopascals).
Vapor pressure limits vary seasonally, with higher limits in the cold months
and lower limits in the warm months. Typical ranges are 7-15 psi or 48-103
kPa.
Vapor pressure does not blend linearly with volume. To estimate the vapor
pressure of a blend, a vapor pressure index is used. This is simply:
Vapor pressure index = (Vapor pressure)^(1.25)
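In practice the index is typically blended linearly by volume and then converted back to a vapor pressure. The Python sketch below assumes simple volume-weighted blending of the index (actual refinery correlations may differ), and the butane RVP used in the example is only an approximate, illustrative figure:

def vp_index(rvp):
    """Vapor pressure index as defined above: (vapor pressure)**1.25."""
    return rvp ** 1.25

def blend_rvp(components):
    """Estimate blend vapor pressure from (volume_fraction, rvp) pairs."""
    blended_index = sum(frac * vp_index(rvp) for frac, rvp in components)
    return blended_index ** (1 / 1.25)

# 90 % of an 8 psi blendstock plus 10 % butane (RVP roughly 52 psi)
print(round(blend_rvp([(0.90, 8.0), (0.10, 52.0)]), 1))   # about 13.6 psi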
High and low vapor pressure blendstocks
Most gasoline blendstocks have vapor pressures below the maximum
specification for finished gasoline, so achieving the vapor pressure
specification is not difficult. However, one of the highest vapor pressure
blendstocks, butane, also happens to be very high in octane and very cheap
relative to other blendstocks. Consequently, there is typically a strong
incentive for refiners to blend as much butane into gasoline as possible, up to
the point where it meets the maximum vapor pressure limit.
Fuel viscosity control
Fuel viscosity control is a technique to control viscosity and temperature of
fuel oil (FO) for efficient combustion in diesel engines of motor vessels and
generators of oil-fired power plants.
Fuel oil viscosity depends strongly on temperature: the higher the temperature, the lower the viscosity. For optimal combustion the viscosity of the fuel should be in the range of 10-20 cSt. To maintain this value, a combination of a viscometer, a PID controller and a heater is used. The viscometer measures the actual viscosity of the fuel; this value is compared with the set point in the controller, and a command is sent to the heater to adjust the temperature of the fuel.
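A minimal, purely illustrative sketch of one step of such a control loop is given below (Python; the gains are placeholder values, not tuned constants, and the 15 cSt set point simply sits in the 10-20 cSt range mentioned above):

def pid_step(measured_cst, setpoint_cst, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0):
    """One PID update for the fuel-heater loop described above.

    A positive error (fuel too viscous) increases heater output.
    Returns (heater_output_percent, new_state).
    """
    integral, prev_error = state
    error = measured_cst - setpoint_cst
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return max(0.0, min(100.0, output)), (integral, error)

# Example: viscometer reads 25 cSt, set point is 15 cSt
output, state = pid_step(25.0, 15.0, state=(0.0, 0.0))
print(output)   # 26.0 -> heater output demand in percent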
What is API Gravity?
Industry indicator
API gravity is short for American Petroleum Institute gravity, an inverse measure used to compare the density of petroleum liquids with that of water. If a liquid has an API gravity greater than 10 it is lighter than water and floats; if its API gravity is less than 10 it is heavier than water and sinks. While API gravity essentially measures the relative density of a petroleum liquid with respect to water, it is primarily used to evaluate and contrast the relative densities of petroleum liquids.
In mathematical terms API gravity has no dimensions; however, the measure is graduated in degrees using a purpose-built hydrometer. The API scale was designed so that most petroleum liquids fall between 10 and 70 degrees API.
The origins of API gravity
The original technique used to measure the gravity of liquids was called the
Baumé scale. It was developed in France in 1768 and officially accepted by
the U.S. National Bureau of Standards in 1916. After encountering a series of
errors and variations the American Petroleum Institute refined the scale and
created API gravity. This is now widely used across the globe.
Clear-cut formulas
The official formula used to derive the API gravity of a petroleum liquid from its specific gravity (SG) at 60°F is as follows:
API gravity = 141.5/SG – 131.5
The relative density of petroleum liquids can also be uncovered by using API
gravity value:
RD at 60oF = 141.5 / (API gravity + 131.5)
A key formula for establishing barrels of crude oil per metric ton
Using the following formula, API gravity can also be used to calculate how
many barrels of crude oil can be produced per metric ton. Given that the
weight of an oil plays an integral role in establishing its market value this
formula is incredibly important!
Barrels of crude oil per metric ton = 1 / [141.5 / (API gravity + 131.5) x
0.159]
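The three relations above translate directly into code. The short Python sketch below simply restates them (specific gravity is taken at 60°F, and 0.159 m³ per barrel is the conversion factor used in the formula):

def api_gravity(specific_gravity):
    """API gravity from specific gravity at 60 degF."""
    return 141.5 / specific_gravity - 131.5

def relative_density(api):
    """Relative density at 60 degF from API gravity (inverse relation)."""
    return 141.5 / (api + 131.5)

def barrels_per_metric_ton(api):
    """Barrels of crude oil per metric ton, using 0.159 m3 per barrel."""
    return 1.0 / (relative_density(api) * 0.159)

api = api_gravity(0.85)                       # a fairly light crude, SG = 0.85
print(round(api, 1))                          # about 35.0 degrees API
print(round(barrels_per_metric_ton(api), 2))  # about 7.4 barrels per tonne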
API gravity classifications and grades
In general, oils with an API gravity of 40 to 45 command the highest market prices. Oils with an API gravity of 45 or above have shorter molecular chains, which are less desirable to refineries. Below is an overview of the four major crude oil classifications:
· Light crude oil
Any crude oil with API gravity of over 31.1 degrees falls into the light crude
oil category.
· Medium crude oil
Oils with API gravity falling between 22.3 and 31.1 degrees are classed as
medium crude oils.
· Heavy crude oil
Heavy crude oils have API gravity of under 22.3.
· Extra heavy oil
Also referred to as bitumen, extra heavy crude oils have API gravity of below
10.0 degrees.
While these are accurate classifications it’s important to note that exact
differentiations between light, medium, heavy and extra heavy will vary
depending on the region of origin. At the end of the day, fluctuations are
largely based on current oil commodity trading.
Aniline point
The aniline point of an oil is defined as the minimum temperature at which equal volumes of aniline (C6H5NH2) and the oil are miscible, i.e., form a single phase upon mixing.
The value gives an approximation of the content of aromatic compounds in the oil, since the miscibility of aniline, itself an aromatic compound, suggests the presence of similar (i.e., aromatic) compounds in the oil. The lower the aniline point, the greater the content of aromatic compounds in the oil.
The aniline point serves as a reasonable proxy for aromaticity of oils
consisting mostly of saturated hydrocarbons (i.e. alkanes, paraffins) or
unsaturated compounds (mostly aromatics). Significant chemical
functionalization of the oil (chlorination, sulfonation, etc.) can interfere with
the measurement, due to changes to the solvency of the functionalized oil.
Determination of aniline point
Equal volumes of aniline and oil are stirred continuously in a test tube and
heated until the two merge into a homogeneous solution. Heating is stopped
and the tube is allowed to cool. The temperature at which the two phases
separate out is recorded as aniline point.
Copper Strip Corrosion
This is a qualitative method that is used to determine the level of corrosion of
petroleum products. In this test, a polished copper strip is suspended in the
product and its effect observed.
The method is well suited for specification settings, internal quality control
tools and development and research on aromatic industrial hydrocarbons. It
also detects the presence of harmful corrosive substances, like acidic or sulfur
compounds, which may corrode the equipment. The result of the test is reported as a corrosion classification number rather than in SI units.
Copper strip corrosion is also known as the copper strip test.

This test can be used for testing gasoline, solvents, natural gasoline, kerosene,
diesel fuel, distilled fuel oil and lubricating oil, among other products, using
test baths. At elevated temperatures, a copper strip that has been polished is
immersed in a sample, usually 30 ml.
The strip is then removed and tested for corrosion and a classification number
is given. The number ranges from 1 to 4 after a comparison with the ASTM
copper strip corrosion standard is done.
There are several methods and tests available. One is the test bomb bath,
7151K59. In this test a thermostatically controlled water bath is used to
immerse copper strip corrosion test bombs. This must be done at the right
depth as per the ASTM requirements. This test has several specifications that
are identified with it:
● Testing up to four copper strips at a time.
● Maximum temperature of 221°F (±1°F)/105°C (±0.5°C).
● Using a five-gallon bath.
● Conforming to the ASTM D 130; IP 154; FSPT DT-28-65; ISO 2160;
FTM 791-5325 and the DIN 51759.
Another method is using test tube baths, 7151K89 and K92. The features of
this test are:
● Testing up to 16 samples at a time.
● Microprocessor control.
● Maximum temperature of 374°F (±2°F)/190°C (±1°C).
● Using a five-gallon bath and the use of water or heater transfer fluid.
This can be used to test samples which do not require a test bomb. These
include diesel fuel, automotive gasoline, fuel oil, Stoddard solvent, kerosene,
and lubricating oil.
SI ENGINE
Spark-ignition engines normally use volatile liquid fuels. Preparation of fuel-
air mixture is done outside the engine cylinder and formation of a
homogeneous mixture is normally not completed in the inlet manifold. Fuel
droplets, which remain in suspension, continue to evaporate and mix with air
even during suction and compression processes. The process of mixture
preparation is extremely important for spark-ignition engines. The purpose of
carburetion is to provide a combustible mixture of fuel and air in the required
quantity and quality for efficient operation of the engine under all conditions.

Definition of Carburetion

The process of formation of a combustible fuel-air mixture, by mixing the proper amount of fuel with air before admission to the engine cylinder, is called carburetion, and the device which does this job is called a carburetor.

Factors Affecting Carburetion


Of the various factors, the process of carburetion is influenced by
i. The engine speed
ii. The vaporization characteristics of the fuel
iii. The temperature of the incoming air and
iv. The design of the carburetor

Principle of Carburetion
Both air and gasoline are drawn through the carburetor and into the engine
cylinders by the suction created by the downward movement of the piston.
This suction is due to an increase in the volume of the cylinder and a
consequent decrease in the gas pressure in this chamber. It is the difference in
pressure between the atmosphere and cylinder that causes the air to flow into
the chamber. In the carburetor, air passing into the combustion chamber picks
up discharged from a tube. This tube has a fine orifice called carburetor jet
that is exposed to the air path. The rate at which fuel is discharged into the air
depends on the pressure difference or pressure head between the float
chamber and the throat of the venturi and on the area of the outlet of the tube.
In order that the fuel drawn from the nozzle may be thoroughly atomized, the
suction effect must be strong and the nozzle outlet comparatively small. In
order to produce a strong suction, the pipe in the carburetor carrying air to the
engine is made to have a restriction. At this restriction called throat due to
increase in velocity of flow, a suction effect is created. The restriction is
made in the form of a venturi to minimize throttling losses. The end of the
fuel jet is located at the venturi or throat of the carburetor. The geometry of
venturi tube is as shown in Fig.16.6. It has a narrower path at the center so
that the flow area through which the air must pass is considerably reduced.
As the same amount of air must pass through every point in the tube, its
velocity will be greatest at the narrowest point. The smaller the area, the
greater will be the velocity of the air, and thereby the suction is
proportionately increased

As mentioned earlier, the opening of the fuel discharge jet is usually located where the suction is maximum. Normally, this is just below the narrowest section of the venturi tube. The spray of gasoline from the nozzle and the air entering through the venturi tube are mixed together in this region, and a combustible mixture is formed which passes through the intake manifold into the cylinders. Most of the fuel is atomized and, simultaneously, a small part is vaporized. The increased air velocity at the throat of the venturi helps the rate of evaporation of the fuel. The difficulty of obtaining a mixture of sufficiently high fuel vapour-air ratio for efficient starting of the engine, and of obtaining a uniform fuel-air ratio in different cylinders (in the case of a multi-cylinder engine), cannot be fully met by the increased air velocity alone at the venturi throat.
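The velocity rise and pressure drop at the throat follow from the continuity equation and Bernoulli's equation for approximately incompressible flow. The numbers in the Python sketch below are purely illustrative and are not taken from any particular carburetor:

import math

RHO_AIR = 1.2  # kg/m^3, approximate density of ambient air

def venturi_throat(inlet_velocity, inlet_area, throat_area, rho=RHO_AIR):
    """Idealised venturi: continuity gives the throat velocity,
    Bernoulli (losses neglected) gives the depression below inlet pressure."""
    throat_velocity = inlet_velocity * inlet_area / throat_area
    depression = 0.5 * rho * (throat_velocity**2 - inlet_velocity**2)
    return throat_velocity, depression

# Illustrative: 30 mm inlet bore, 20 mm throat, 15 m/s inlet air velocity
a_inlet = math.pi * 0.015**2
a_throat = math.pi * 0.010**2
v_throat, dp = venturi_throat(15.0, a_inlet, a_throat)
print(f"throat velocity ~ {v_throat:.0f} m/s, depression ~ {dp:.0f} Pa")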

The Simple Carburetor


Carburetors are highly complex. Let us first understand the working principle of a simple or elementary carburetor that provides an air-fuel mixture for cruising or normal operation at a single speed. Later, other mechanisms to provide for the various special requirements, such as starting, idling, variable load and speed operation, and acceleration, will be included. Figure 3 shows the details of a simple carburetor.
The simple carburetor mainly consists of a float chamber, fuel discharge
nozzle and a metering orifice, a venturi, a throttle valve and a choke. The
float and a needle valve system maintain a constant level of gasoline in the
float chamber. If the amount of fuel in the float chamber falls below the
designed level, the float goes down, thereby opening the fuel supply valve
and admitting fuel. When the designed level has been reached, the float
closes the fuel supply valve thus stopping additional fuel flow from the
supply system. The float chamber is vented either to the atmosphere or to the upstream side of the venturi. During the suction stroke, air is drawn through the venturi.

As already described, the venturi is a tube of decreasing cross-section with a minimum area at the throat. The venturi tube is also known as the choke tube and is so shaped that it offers minimum resistance to the air flow. As the air passes through the venturi the velocity increases, reaching a maximum at the venturi throat; correspondingly, the pressure decreases, reaching a minimum. From the float chamber, the fuel is fed to a discharge jet, the tip of which is located in the throat of the venturi. Because of the pressure difference between the float chamber and the throat of the venturi, known as the carburetor depression, fuel is discharged into the air stream. The fuel discharge is affected by the size of the discharge jet, which is chosen to give the required air-fuel ratio. The pressure at the throat at the fully open throttle condition lies between 4 and 5 cm of Hg below atmospheric and seldom exceeds 8 cm of Hg below atmospheric. To avoid overflow of fuel through the jet, the level of the liquid in the float chamber is maintained slightly below the tip of the discharge jet (this difference in level is known as the nozzle lip). The difference in height between the top of the nozzle and the float chamber level is marked h in Fig. 3.

The gasoline engine is quantity governed, which means that when power
output is to be varied at a particular speed, the amount of charge delivered to
the cylinder is varied. This is achieved by means of a throttle valve usually of
the butterfly type that is situated after the venturi tube. As the throttle is
closed less air flows through the venturi tube and less is the quantity of air-
fuel mixture delivered to the cylinder and hence power output is reduced. As
the throttle is opened, more air flows through the choke tube, resulting in an increased quantity of mixture being delivered to the engine. This increases
the engine power output. A simple carburetor of the type described above
suffers from a fundamental drawback in that it provides the required A/F ratio
only at one throttle position. At the other throttle positions the mixture is
either leaner or richer depending on whether the throttle is opened less or
more. As the throttle opening is varied, the air flow varies and creates a
certain pressure differential between the float chamber and the venturi throat.
The same pressure differential regulates the flow of fuel through the nozzle.
Therefore, the flow velocities of air and fuel vary in a similar manner. At the same time, the density of the air decreases as the pressure at the venturi throat decreases with increasing air flow, whereas that of the fuel remains unchanged. This results in a simple carburetor producing a progressively richer mixture with increasing throttle opening.

The Choke and The Throttle

When the vehicle is kept stationary for a long period during cool winter
seasons, maybe overnight, starting becomes more difficult. As already explained, at low cranking speeds and low intake temperatures a very rich mixture is required to initiate combustion; sometimes an air-fuel ratio as rich as 9:1 is required. The main reason is that a very large fraction of the fuel may remain as liquid suspended in the air, even in the cylinder. For initiating combustion, fuel vapour and air in the form of a mixture at a ratio that can sustain combustion is required. It may be noted that at very low temperatures the vapour fraction of the fuel is very small, and it is this vapour that must form a combustible mixture to initiate combustion. Hence, a very rich mixture must be supplied. The most popular method of providing such a mixture is by the use of a choke valve. This is a simple butterfly valve located between the entrance to the carburetor and the venturi throat, as shown in Fig. 3.

When the choke is partly closed, a pressure drop occurs at the venturi throat that is much larger than would normally result from the quantity of air passing through it. This very large depression at the throat inducts a large amount of fuel from the main nozzle and provides a very rich mixture, so that the ratio of the evaporated fuel to air in the cylinder is within the combustible limits.
Sometimes the choke valves are spring-loaded to ensure that a large carburetor depression and excessive choking do not persist after the engine has started and reached the desired speed. The choke can be made to operate automatically by means of a thermostat, so that it is closed when the engine is cold and goes out of operation when the engine warms up after starting.
The speed and the output of an engine are controlled by the use of the throttle valve, which is located on the downstream side of the venturi.

The more the throttle is closed, the greater is the obstruction to the flow of the mixture through the passage and the less is the quantity of mixture delivered to the cylinders. The decreased quantity of mixture gives a less powerful impulse to the pistons and the output of the engine is reduced accordingly. As
the throttle is opened, the output of the engine increases. Opening the throttle
usually increases the speed of the engine. But this is not always the case as
the load on the engine is also a factor. For example, opening the throttle when
the motor vehicle is starting to climb a hill may or may not increase the
vehicle speed, depending upon the steepness of the hill and the extent of
throttle opening. In short, the throttle is simply a means to regulate the output
of the engine by varying the quantity of charge going into the cylinder.

Compensating Devices

An automobile on the road has to run at different loads and speeds, and road conditions play a vital role. On city roads especially, one may be able to operate the vehicle at only 25 to 60% of the throttle. During such conditions the carburetor must be able to supply a mixture of nearly constant, economical air-fuel ratio (about 16:1). However, the tendency of a simple carburetor is to progressively enrich the mixture as the throttle starts opening. The main metering system alone will not be sufficient to take care of the needs of the engine. Therefore, certain compensating devices are usually added

in the carburetor along with the main metering system so as to supply a


mixture with the required air-fuel ratio. A number of compensating devices
are in use. The important ones are
i. Air-bleed jet
ii. Compensating jet
iii. Emulsion tube
iv. Back suction control mechanism
v. Auxiliary air valve
vi. Auxiliary air port

As already mentioned, in modern carburetors automatic compensating


devices are provided to maintain the desired mixture proportions at the higher
speeds. The type of compensation mechanism used determines the metering
system of the carburetor. The principles of operation of the various compensating devices are discussed briefly in the following sections.
What is octane rating?

Octane rating is the measure of a fuel's ability to resist "knocking" or


"pinging" during combustion, caused by the air/fuel mixture detonating
prematurely in the engine.
In the U.S., unleaded gasoline typically has octane ratings of 87 (regular),
88–90 (midgrade), and 91–94 (premium). Gasoline with an octane rating of
85 is available in some high-elevation areas of the U.S. (more about that
below).
The octane rating is prominently displayed in large black numbers on a
yellow background on gasoline pumps.

What octane fuel should I use in my vehicle?


You should use the octane rating required for your vehicle by the
manufacturer. So, check your owner's manual. Most gasoline vehicles are
designed to run on 87 octane, but others are designed to use higher octane
fuel.

Why do some manufacturers require or recommend the use of higher


octane gasoline?
Higher octane fuels are often required or recommended for engines that use a
higher compression ratio and/or use supercharging or turbocharging to force
more air into the engine. Increasing pressure in the cylinder allows an engine
to extract more mechanical energy from a given air/fuel mixture but requires
higher octane fuel to keep the mixture from pre-detonating. In these engines,
high octane fuel will improve performance and fuel economy.

What if I use a lower octane fuel than required for my vehicle?


Using a lower octane fuel than required can cause the engine to run poorly
and can damage the engine and emissions control system over time. It may
also void your warranty. In older vehicles, the engine can make an audible
"knocking" or "pinging" sound. Many newer vehicles can adjust the spark
timing to reduce knock, but engine power and fuel economy will still suffer.

Will using a higher octane fuel than required improve fuel economy or
performance?
It depends. For most vehicles, higher octane fuel may improve performance
and gas mileage and reduce carbon dioxide (CO2) emissions by a few percent
during severe duty operation, such as towing a trailer or carrying heavy loads,
especially in hot weather. However, under normal driving conditions, you
may get little to no benefit.

Why does higher octane fuel cost more?


The fuel components that boost octane are generally more expensive to
produce.

Is higher octane fuel worth the extra cost?


If your vehicle requires midgrade or premium fuel, absolutely. If your
owner's manual says your vehicle doesn't require premium but says that your
vehicle will run better on higher octane fuel, it's really up to you. The cost
increase is typically higher than the fuel savings. However, lowering CO2
emissions and decreasing petroleum usage by even a small amount may be
more important than cost to some consumers.

What is 85 octane, and is it safe to use in my vehicle?


The sale of 85 octane fuel was originally allowed in high-elevation regions—
where the barometric pressure is lower—because it was cheaper and because
most carbureted engines tolerated it fairly well. This is not true for modern
gasoline engines. So, unless you have an older vehicle with a carbureted
engine, you should use the manufacturer-recommended fuel for your vehicle,
even where 85 octane fuel is available.
Can ethanol boost gasoline's octane rating?
Yes. Ethanol has a much higher octane rating (about 109) than gasoline.
Refiners usually blend ethanol with gasoline to help boost its octane rating—
most gasoline in the U.S. contains up to 10% ethanol. Blends of up to 15%
ethanol are available in some areas, and several manufacturers approve using
this blend in recent-model vehicles.
Rating of CI Engine Fuels:
In compression-ignition engines, the knock resistance depends on the chemical characteristics of the fuel as well as on the operating and design conditions of the CI engine. Therefore, the knock rating of a diesel fuel is found by comparing the fuel, under prescribed conditions of operation in a special engine, with primary reference fuels. The reference fuels are normal cetane (C16H34), which is arbitrarily assigned a cetane number of 100, and α-methyl naphthalene (C11H10), with an assigned cetane number of 0. The cetane number of a fuel is defined as the percentage by volume of normal cetane in a mixture of normal cetane and α-methyl naphthalene which has the same ignition characteristics (ignition delay) as the test fuel when combustion is carried out in a standard engine
under specified operating conditions. Since ignition delay is the primary
factor in controlling the initial auto ignition in CI engine, it is reasonable to
conclude that knock should be directly related to the ignition delay of the
fuel. Knock resistance property of diesel oil can be improved by adding small
quantities of compounds like amyl nitrate, ethyl nitrate or ether.
Laboratory Method: The test is carried out in a standard single-cylinder engine, such as the CFR diesel engine or the Ricardo single-cylinder variable-compression engine, under the operating conditions shown in the table below. The test fuel is first used in the engine operating at the specified conditions. The fuel pump delivery is adjusted to give a particular fuel-air ratio, and the injection timing is adjusted to give an injection advance of 13 degrees. By varying the compression ratio, the ignition delay can be reduced or increased until a position is found where combustion begins at Top Dead Center (TDC). When this position is found, the test fuel is undergoing a 13-degree ignition delay.
Conditions for ignition quality test on diesel fuels
Engine speed: 900 rpm
Jacket water temperature: 100°C
Inlet air temperature: 65.5°C
Injection advance: constant at 13° before TDC
Ignition delay: 13° (so that combustion begins at TDC)

The cetane number of the unknown fuel can be estimated by noting the compression ratio that gives the 13-degree ignition delay and then referring to a prepared chart showing the relationship between cetane number and compression ratio. However, for accuracy, two reference fuel blends differing by not more than 5 cetane numbers are selected to bracket the unknown sample. The compression ratio is varied for each reference blend to reach the standard ignition delay (13 degrees), and by interpolation of the compression ratios the cetane rating of the unknown fuel is determined.
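The final interpolation step can be illustrated with a few lines of Python. The bracketing reference-blend data below are hypothetical values chosen only to show the arithmetic; they are not measured figures:

def interpolate_cetane(cr_sample, ref_low, ref_high):
    """Linearly interpolate cetane number between two bracketing reference blends.

    Each reference is a (compression_ratio, cetane_number) pair obtained at the
    standard 13-degree ignition delay.
    """
    cr_lo, cn_lo = ref_low
    cr_hi, cn_hi = ref_high
    return cn_lo + (cn_hi - cn_lo) * (cr_sample - cr_lo) / (cr_hi - cr_lo)

# Hypothetical: a 48-cetane blend needs CR 17.0, a 52-cetane blend needs CR 16.0,
# and the unknown fuel reaches the 13-degree delay at CR 16.4
print(round(interpolate_cetane(16.4, (17.0, 48), (16.0, 52)), 1))   # 50.4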
Diesel Fuel Cetane
Cetane is a colorless, liquid hydrocarbon (a molecule from the alkane series)
that ignites easily under compression. For this reason, it was given a base
rating of 100 and is used as the standard measure of the performance of
compression-ignition fuels, such as diesel and biodiesel. The various hydrocarbon constituents of diesel fuel are measured and indexed against cetane's base-100 rating.
What Is Cetane Number?
Similar to the octane number rating that's applied to gasoline to rate its
ignition stability, the cetane number is the rating assigned to diesel fuel to
rate its combustion quality. While gasoline's octane number signifies its
ability to resist auto-ignition (also referred to as pre-ignition, knocking,
pinging, or detonation), diesel's cetane number is a measure of the fuel's
delay of ignition time (the amount of time between the injection of fuel into
the combustion chamber and the actual start of combustion of the fuel
charge).
Because diesel engines rely on compression ignition (no spark), the fuel must
be able to auto-ignite—and generally, the quicker the better. A higher cetane
number means a shorter ignition delay time and more complete combustion
of the fuel charge in the combustion chamber. This, of course, translates into
a smoother running, better performing engine with more power and fewer
harmful emissions.
How Does the Cetane Number Test Work?
The process for determining the true cetane rating of fuel requires either the
use of precisely controlled test engines and procedures or fuel analysis that
relies on exacting instruments and conditions. Since using dedicated engines
and processes or instruments for real fuel tests is painstaking, expensive, and
time-consuming, many diesel fuel formulators use a "calculated" method to
determine cetane numbers. Two common methods are ASTM D976 and ASTM D4737, which use fuel density and boiling/evaporation points to arrive at cetane ratings.
How Does Cetane Number Affect Engine Performance?
Just as there are no advantages to using gasoline with a higher octane rating
than recommended for a specific engine by its manufacturer, using diesel fuel
with a higher cetane rating than required for a particular diesel engine design
yields no bonuses either. Cetane number requirements depend mainly on
engine design, size, speed of operation, and load variations—and to a slightly
lesser extent, atmospheric conditions. Conversely, running a diesel engine on
fuel with a lower-than-recommended cetane number can result in rough
operation (noise and vibration), low power output, excessive deposits and
wear, and hard starting.
Cetane Numbers of Various Diesel Fuels
Normal modern highway diesel engines run best with a fuel rated between 45
and 55. The following list of cetane numbers denotes varying grades and
types of compression ignition diesel fuels:
● Regular diesel: 48
● Premium diesel: 55
● Biodiesel (B100): 55
● Biodiesel blend (B20): 50
● Synthetic diesel: 55
It's crucial to find a service station that markets fuel of the cetane number recommended by your vehicle manufacturer. Always look for an appropriate label.
AUTOMOTIVE ELECTRICAL AND
ELECTRONICS SYSTEMS
What is Battery and why it is used?
A battery is the primary power source for any wireless electronic gadget, be it a smartphone, laptop, watch or remote. Can you imagine the situation without these energy sources? We would not be able to build any wireless electronic device and would have to rely on wired power sources only; even electric cars and space missions would not be possible without batteries. In this tutorial we briefly discuss the various types of batteries, their classification, terminology and specifications.

Let's see the basic difference between a battery and a cell, and also find out why exactly we need a battery and why we can't use alternating power (i.e., AC power from the wall sockets) instead of DC power.
Cell: A cell is an energy source which can deliver only DC voltage and current, and only in small quantities. For example, the cells we use in watches or remote controls can give a maximum of about 1.5-3 V.
Battery: The functionality of a battery is exactly the same as that of a cell, but a battery is a pack of cells arranged in a series/parallel fashion so that the voltage can be raised to the desired level. The best-known example of a battery pack is a power bank, which is used to charge smartphones. If we look inside a power bank we find a set of cells arranged in series or parallel based on the requirement. Cells are arranged in series to increase the voltage and in parallel to increase the current.
Now, why is DC preferred over AC? In most portable electronics, AC cannot be stored, whereas DC can be stored without any difficulty. The losses with AC power are also greater when compared to DC power. That is why DC is preferred for portable electronic devices.

Technical terms used while dealing with batteries


Voltage and current alone are not enough to describe a battery's functionality; there are some additional terms that define its characteristics. Capacity (mAh/Wh), C-rating, nominal voltage, charging voltage, charging current, discharging current, cut-off voltage, shelf life and cycle life are the main terms used to define a battery's performance.
Let's discuss each of these terms briefly.

Power capacity:
It is the energy stored in a battery and is measured in watt-hours:
Watt-hours = V * I * hours (since the voltage is nominally constant, capacity is usually quoted in Ah or mAh).
We generally see battery ratings such as 2500 mAh or 4000 mAh while reading the specifications of a smartphone. What does that mean? Let's see.
Example: 2500 mAh means that the battery can deliver 2500 mA (2.5 A) of current to the load for 1 hour. The time the battery lasts depends on the current the load draws, so if the load consumes only 25 mA, the battery can last for 100 hours (25 mA for 100 hours); similarly, 250 mA for 10 hours, and so on.
Though this theoretical calculation is ideal, the battery's actual duration changes with temperature, discharge current and so on.
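The ideal runtime estimate above is a simple ratio of capacity to load current; a minimal Python version (ignoring the temperature and discharge-rate effects just mentioned):

def ideal_runtime_hours(capacity_mah, load_ma):
    """Ideal battery runtime in hours = capacity (mAh) / load current (mA)."""
    return capacity_mah / load_ma

print(ideal_runtime_hours(2500, 2500))  # 1.0 hour at a 2500 mA load
print(ideal_runtime_hours(2500, 25))    # 100.0 hours at a 25 mA load
print(ideal_runtime_hours(2500, 250))   # 10.0 hours at a 250 mA load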

Power capability:
It means the amount of current that the battery can deliver; it is also known as the C-rating. Here it is calculated as the capacity in ampere-hours divided by 1 hour.
Example: Consider a battery with a capacity of 10000 mAh. Dividing 10000 mAh by 1 hour gives 10000 mA = 10 A, i.e., 10 C.
So, a battery with a 10000 mAh capacity is described here as having a C-rating of 10 C, which means the battery is capable of delivering 10 A of current at its rated (fixed) voltage. If a battery has a 1 C rating, it is capable of delivering 1 A of current.
Note: The higher the C-rating, the more current can be drawn from the battery.
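Expressed as code, the C-rating calculation as defined in this text is simply the capacity divided by one hour:

def c_rating(capacity_mah):
    """C-rating as defined here: capacity (mAh) divided by one hour, in amperes."""
    return capacity_mah / 1000.0   # mAh over 1 hour -> A

print(c_rating(10000))   # 10.0, i.e. a 10 C battery as described above
print(c_rating(1000))    # 1.0, i.e. a 1 C battery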

Nominal voltage:
While defining power capacity we used the unit Wh, which expands to V * I * hours, so where did the voltage go? Since the voltage of a battery is essentially constant, it is treated as the nominal (fixed) voltage, and with the voltage fixed only amperes and hours are used in the capacity unit (Ah/mAh).

Charging current:
It is the maximum current that can be applied to charge the battery. In practice, a maximum of 1-2 A can be applied if a battery-protection circuit is built in, but around 500 mA is generally the safest range for charging the battery.

Charging voltage:
It is the maximum voltage that should be applied to the battery to charge it efficiently. For a typical lithium-ion cell, 4.2 V is the standard charging voltage; even if 5 V is supplied to the charger, the cell itself is charged only up to 4.2 V.

Discharging current:
It is the current that can be drawn from the battery and delivered to the load. If the current drawn by the load is greater than the rated discharging current, the battery drains very fast, which causes it to heat up quickly and may even cause it to explode. So care must be taken to determine both the amount of current drawn by the load and the maximum discharging current the battery can withstand.

Shelf life:
There may be situations where batteries are kept idle or sealed, especially in stores or shops, for a long period of time. Shelf life defines how long a battery can remain stored and still deliver its rated performance when used. Shelf life is mainly considered for non-rechargeable batteries, because they are single-use; for rechargeable batteries, even if the shelf life has passed, we can still recharge them.

Cut-off voltage:
It is the voltage at which the battery can be considered fully discharged; if we try to discharge it further, the battery gets damaged. So beyond the cut-off voltage the battery should be disconnected from the circuit and charged appropriately.

Cycle life:
Consider a battery that is fully charged and then discharged to 80% of its actual capacity; the battery is then said to have completed one cycle. The number of such charge-discharge cycles a battery can undergo defines its cycle life. The longer the cycle life, the better the battery's quality. But if a fully charged battery is discharged to, say, only 40% of its actual capacity, that is not counted as a full cycle.

Power density:
It defines the energy capacity of a battery for a given mass.
For example, 100 Wh/kg (the standard power density of an alkaline battery) implies that 1 kg of the chemical composition provides 100 Wh of capacity.
Now, the mass of a AAA alkaline battery is about 11.5 grams. If 1 kg can give 100 Wh of capacity, how much can an 11.5-gram AAA battery give? Let's calculate.
Wh (for 11.5 g) = 100 * 11.5/1000 = 1.15 Wh
We know the nominal voltage of an alkaline battery is 1.5 V, so 1.15 Wh ÷ 1.5 V ≈ 0.76 Ah = 760 mAh of capacity, which is almost equal to the capacity of a standard AAA alkaline battery.
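The AAA example above can be reproduced with a short calculation (the 100 Wh/kg density, 11.5 g mass and 1.5 V nominal voltage are the figures quoted in this section):

def capacity_mah(power_density_wh_per_kg, mass_g, nominal_voltage):
    """Estimate cell capacity in mAh from gravimetric energy density and mass."""
    energy_wh = power_density_wh_per_kg * mass_g / 1000.0
    return energy_wh / nominal_voltage * 1000.0

print(round(capacity_mah(100, 11.5, 1.5)))   # about 767 mAh, close to a real AAA cell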
Types of Batteries
Batteries are basically classified into 2 types:
· Non-rechargeable batteries (primary batteries)
· Rechargeable batteries (secondary batteries)

Non-rechargeable Batteries
These are basically considered as primary batteries because they can be
used only once. These batteries cannot be recharged and used again. Let’s see
about the regular, daily life primary batteries that we see.
· Alkaline batteries: These are constructed with a chemical composition of zinc (Zn) and manganese dioxide (MnO2); as the electrolyte used is potassium hydroxide, a purely alkaline substance, the battery is named an alkaline battery. It has a power density of about 100 Wh/kg.

Advantages:

1. Cycle life is more


2. More compatible and efficient for powering up portable devices.
3. Shelf life is more.
4. Small in size.
5. Highly efficient.
6. Low internal resistance, so self-discharge when idle is low.
7. Leakage is low.

Disadvantages:

1. Cost is a bit high; otherwise, almost everything is an advantage.

Applications:
They can be used in torches, remotes, wall clocks, small portable gadgets, etc.

· Coin cell batteries: The chemical composition of coin cell batteries is also alkaline in nature. Apart from the alkaline composition, lithium and silver oxide chemistries are used to manufacture these batteries, which are more efficient at providing a steady, stable voltage in such small sizes. They have a power density of about 270 Wh/kg.

Advantages:

1. Light in weight
2. Small in size
3. High density
4. Low cost
5. High nominal voltage (up to 3V)
6. Easy to get high voltages by arranging serially
7. Long shelf life

Disadvantages:
1. Needs a holder
2. Low current draw capability

Applications:
Used in watches, wall clocks, miniature electronic products etc.

Rechargeable Batteries
These are generally called as secondary batteries which can be recharged
and can be reused. Though the cost is high, but they can be recharged and
reused and can have a huge life span when properly used and safely charged.

Lead-acid batteries
Lead-acid batteries are very cheap and are seen mostly in cars and other vehicles, where they power the lighting and other electrical systems. They are preferable in products where size, space and weight do not matter. They come with nominal voltages from 2 V to 24 V, most commonly as 2 V, 6 V, 12 V and 24 V batteries. The power density is about 7 Wh/kg.

Advantages:
1. Cheap in cost
2. Easily rechargeable
3. High power output capability

Disadvantages:

1. Very heavy
2. Occupies much space
3. Power density is very low

Applications:
Used in cars, UPS (uninterrupted Power Supply), robotics, heavy machinery
etc..

Ni-Cd batteries
These batteries are made with a nickel and cadmium chemical composition. Though they are now rarely used, they are very cheap and their self-discharge rate is low when compared to NiMH batteries. They are available in all standard sizes such as AA, AAA, C and rectangular shapes. The nominal voltage is 1.2 V, and cells are often connected together in sets of three to give 3.6 V. The power density is about 60 Wh/kg.

Advantages:
1. Cheap in cost
2. Easy to recharge
3. Can be used in all environments
4. Comes in all standard sizes

Disadvantages:

1. Lower power density


2. Contains toxic metal
3. Needs to be charged very frequently in order to avoid growth of
crystals on the battery plate.

Applications:
Used in RC toys, cordless phones, solar lights and mostly in the applications
where price is important.

Ni-MH batteries
Nickel-metal hydride (Ni-MH) batteries are much preferable to Ni-Cd batteries because of their lower environmental impact. Their nominal voltage is 1.25 V, which is slightly greater than that of Ni-Cd cells. It is lower than the nominal voltage of alkaline batteries, but Ni-MH cells are a good replacement for them due to their availability and lower environmental impact. The power density of Ni-MH batteries is about 100 Wh/kg.
Advantages:

1. Available in all standard sizes.


2. High power density.
3. Easy to recharge.
4. A good alternative to alkaline batteries, with similar characteristics and the added advantage of being rechargeable.

Disadvantages:

1. Self-discharge is very high.


2. More expensive than Ni-Cd batteries.

Applications:
Used in all applications similar to the alkaline and Ni-Cad batteries.

Li-ion batteries
These are based on lithium chemistry and are the latest in rechargeable battery technology. As they are compact, they can be used in most portable applications that need high power. These are among the best rechargeable batteries available. They have a nominal voltage of 3.7 V (packs of 3.6 V and 7.2 V are most common) and come in a wide range of capacities (from hundreds of mAh to thousands of mAh). The C-rating ranges from about 1 C to 10 C, and the power density of Li-ion batteries is about 126 Wh/kg.

Advantages:
1. Very light in weight.
2. High C-rating.
3. Power density is very high.
4. Cell voltage is high.

Disadvantages:

1. These are a bit expensive.


2. If the terminals are short circuited the battery might explode.
3. Battery protection circuit is needed.

Li-Po batteries
These are also called lithium-ion polymer rechargeable batteries because they use a high-conductivity polymer gel electrolyte instead of a liquid electrolyte. They come under Li-ion technology and are a bit more costly, but the battery is much better protected when compared to plain Li-ion batteries. The power density is about 185 Wh/kg.

Advantages:

1. Better protected than Li-ion batteries.


2. Very light in weight
3. Thin in structure when compared to Li-ion batteries.
4. Power density, nominal voltages are comparatively very high
compared to Ni-Cad and Ni-MH batteries.

Disadvantages:

1. Expensive.
2. May explode if wrongly connected.
3. Should not be bent or exposed to high temperature, either of which may
cause an explosion.
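The energy-per-weight figures quoted for the chemistries above (about 7 Wh/kg for lead-acid up to roughly 185 Wh/kg for Li-Po) allow a rough first estimate of how heavy a pack must be for a given amount of stored energy. The short Python sketch below only illustrates that arithmetic; the dictionary of figures simply repeats the values quoted above, and the 600 Wh example target is an illustrative assumption rather than data for any real vehicle or product.

```python
# Rough pack-mass estimate from the energy-per-weight (Wh/kg) figures quoted above.
# The 600 Wh example target is an illustrative assumption only.

SPECIFIC_ENERGY_WH_PER_KG = {
    "lead-acid": 7,    # values as quoted in the text above
    "Ni-Cd": 60,
    "Ni-MH": 100,
    "Li-ion": 126,
    "Li-Po": 185,
}

def estimated_pack_mass_kg(target_energy_wh: float, chemistry: str) -> float:
    """Approximate pack mass (kg) needed to store target_energy_wh."""
    return target_energy_wh / SPECIFIC_ENERGY_WH_PER_KG[chemistry]

if __name__ == "__main__":
    target_wh = 600  # hypothetical energy requirement
    for chemistry in SPECIFIC_ENERGY_WH_PER_KG:
        mass = estimated_pack_mass_kg(target_wh, chemistry)
        print(f"{chemistry:10s}: about {mass:6.1f} kg")
```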
Battery Working Principle
A battery works through the oxidation and reduction reactions of an electrolyte
with metals. When two dissimilar metallic substances, called electrodes, are placed
in a dilute electrolyte, an oxidation reaction takes place at one electrode and a
reduction reaction at the other, depending on the electron affinity of the metal of
each electrode. The electrode at which oxidation occurs becomes negatively charged
and is called the anode; the electrode at which reduction occurs becomes positively
charged and is called the cathode.

The anode therefore forms the negative terminal of the battery and the cathode the
positive terminal. To understand the basic principle of a battery properly, we first
need some basic idea of electrolytes and electron affinity. When two dissimilar
metals are immersed in an electrolyte, a potential difference is produced between
them.
When certain compounds are added to water, they dissolve and produce negative and
positive ions. Such a compound is called an electrolyte; common examples include
most salts, acids and bases. The energy released when a neutral atom accepts an
electron is known as its electron affinity. Because the atomic structures of
different materials differ, their electron affinities also differ.
If two different kinds of metal are immersed in the same electrolyte solution, one
of them will gain electrons and the other will release electrons. Which metal (or
metallic compound) gains electrons and which loses them depends upon the electron
affinities of these metals. The metal with the lower electron affinity will gain
electrons from the negative ions of the electrolyte solution.

On the other hand, the metal with high electron affinity will release electrons
and these electrons come out into the electrolyte solution and are added to the
positive ions of the solution. In this way, one of these metals gains electrons
and another one loses electrons. As a result, there will be a difference in
electron concentration between these two metals.
This difference in electron concentration causes an electrical potential difference
to develop between the metals. This electrical potential difference, or emf, can be
utilized as a source of voltage in any electronic or electrical circuit. This is the
general, basic principle of a battery, and this is how a battery works.
All battery cells are based on this basic principle. Let us discuss them one by
one. As we said earlier, Alessandro Volta developed the first battery cell, and
this cell is popularly known as the simple voltaic cell. This type of simple cell
can be created very easily: take a container and fill it with dilute sulfuric acid
as the electrolyte, immerse one zinc rod and one copper rod in the solution, and
connect them externally through an electrical load. The simple voltaic cell is now
complete, and current will start flowing through the external load.
Zinc in dilute sulfuric acid gives up electrons as below:

Zn → Zn2+ + 2e-
These Zn2+ ions pass into the electrolyte, and each of them leaves two electrons
behind in the rod. As a result of this oxidation reaction, the zinc electrode is
left negatively charged and hence acts as the anode. Consequently, the
concentration of Zn2+ ions in the electrolyte near the zinc electrode increases.
As is characteristic of an electrolyte, the dilute sulfuric acid has already
dissociated in the water into positive hydronium ions and negative sulfate ions,
as given below:

H2SO4 + 2H2O → 2H3O+ + SO4^2-
Due to the high concentration of Zn2+ ions near the zinc electrode, the H3O+ ions
are repelled towards the copper electrode and are discharged there by absorbing
electrons from atoms of the copper rod. The following reaction takes place at the
copper electrode:

2H3O+ + 2e- → 2H2O + H2↑
As a result of the reduction reaction taking place at the copper electrode, the
copper rod becomes positively charged and hence acts as the cathode.

Daniell Cell
The Daniell cell consists of a copper vessel containing copper sulfate
solution. The copper vessel itself acts as the positive electrode. A porous pot
containing diluted sulfuric acid is placed in the copper vessel. An
amalgamated zinc rod, dipped inside the sulfuric acid, acts as the negative
electrode.

The dilute sulfuric acid in the porous pot reacts with the zinc, liberating
hydrogen. The reaction takes place as below:

Zn + H2SO4 → ZnSO4 + H2↑

The formation of ZnSO4 in the porous pot does not affect the working of the
cell until crystals of ZnSO4 are deposited. The hydrogen gas passes through the
porous pot and reacts with the CuSO4 solution as below:

H2 + CuSO4 → H2SO4 + Cu

The copper so formed is deposited on the copper vessel.


History of the Battery
In the summer of 1936, an ancient tomb was discovered during construction of a new
railway line near Baghdad in Iraq. The relics found in that tomb were about 2,000
years old. Among these relics were some clay jars sealed at the top with pitch. An
iron rod, surrounded by a cylindrical tube made of a wrapped copper sheet,
projected out from this sealed top.

When researchers filled these pots with an acidic liquid, they found a potential
difference of around 2 volts between the iron and the copper. These clay jars are
suspected to be 2,000-year-old battery cells, and the device has been named the
Parthian (or Baghdad) battery.
In 1786, Luigi Galvani, an Italian anatomist and physiologist, was surprised to
see that when he touched the legs of a dead frog with two different metals, the
muscles of the legs contracted. He could not identify the actual cause, otherwise
he might have become known as the first inventor of the battery cell; he thought
that the reaction was due to a property of the tissues themselves.

After that, Alessandro Volta reproduced the same phenomenon using cardboard soaked
in salt water instead of frog legs. He sandwiched a piece of cardboard soaked in
salt water between a copper disc and a zinc disc and found a potential difference
between the copper and the zinc.
After that in 1800, he developed the first voltaic cell (battery) constructed of
alternating copper and zinc discs with pieces of cardboard soaked in brine
between them. This system could produce a measurable current. We consider
Alessandro Volta’s Voltaic pile as the first “wet battery cell”. Thus, the
history of battery began. From that time until today, the battery remains a
preferable source of electricity in our many daily life applications.

The main problem with the Voltaic pile was that it could not deliver current
for a long time. A British inventor John F. Daniell solved this problem in
1836. He invented a more developed version of the battery cell which is
known as the Daniell cell. John F. Daniell immersed one zinc rod in zinc
sulfate in one container, and one copper rod in copper (II) sulfate in another
container.
A U shaped salt bridge bridges the solutions of these two containers. A
Daniell cell could produce 1.1 volts, and this type of battery lasted much
longer than the Voltaic pile. In 1839, Sir William Robert Grove, a scientist and
inventor, designed the fuel cell. He combined hydrogen and oxygen in an electrolyte
solution and produced electricity and water. The fuel cell did not deliver enough
power to be practical, but the concept proved useful. Grove (1839) and Bunsen
(1842) went on to build improved batteries that used liquid electrodes to supply
electricity.
In 1859, Gaston Planté developed the first lead-acid battery cell. The lead-acid
battery was the first form of rechargeable secondary battery. It is still in use
for many industrial purposes, and it remains the most popular choice for a car
battery. In 1866, a French engineer, Georges Leclanché, developed a new kind of
battery: a carbon-zinc wet cell known as the Leclanché cell. Crushed manganese
dioxide mixed with a little carbon forms the positive electrode, and a zinc rod
forms the negative electrode, with an ammonium chloride solution used as the
liquid electrolyte. Some years later the design was improved by replacing the
liquid ammonium chloride solution with an ammonium chloride paste.

This led to the first dry cell. In 1901, Thomas Alva Edison introduced the
alkaline accumulator, which used iron as the negative electrode (anode) material
and nickel oxide as the positive electrode (cathode) material. The above is only
one small portion of the long history of the battery.

Step by Step Development in History of Batteries

Developer/Inventor          Country         Year    Invention
Luigi Galvani               Italy           1786    Animal electricity
Alessandro Volta            Italy           1800    Voltaic pile
John F. Daniell             Britain         1836    Daniell cell
Sir William Robert Grove    Britain         1839    Fuel cell
Robert Bunsen               Germany         1842    Battery using liquid electrodes
Gaston Planté               France          1859    Lead-acid battery
Georges Leclanché           France          1866    Leclanché cell
Thomas Alva Edison          United States   1901    Alkaline accumulator
Testing, charging and replacing a battery
In recent years, the electronic content in vehicles has multiplied several times
over, and more electronics means more demand on the battery and charging system.
A weak battery or low system voltage due to a charging problem can cause all
kinds of havoc with the on-board electronics.

For example, low voltage may cause the airbag or ABS warning
lights to come on. The turn signals may not blink normally when
the switch is flipped to either side. Electronic gauges may give
strange or erratic readings. The engine may lack power, misfire or
stall. Any of these things may occur if the battery is low or the
alternator is not producing its normal charging output.
Many so-called battery problems are not the battery, but a charging
fault. The alternator’s job is two-fold: to supply current for the
vehicle’s electrical system and to maintain the battery at full
charge. Normally, the battery is only used to crank the engine, to
provide power for lights and accessories when the engine is not
running and provide supplemental power when the demands of the
vehicle’s electrical system exceed the output of the alternator.
The alternator’s output is lowest at idle, and increases with engine
speed. The powertrain control module in most late-model vehicles
controls charging output, so that the PCM can boost the charging
curve a bit when demands are high at low engine speed. Even so,
most alternators can’t achieve maximum output until engine speed
reaches about 3,000 RPM or higher. Consequently, if the engine is
left idling for a long period of time with the headlights, A/C,
defrosters, radio or other accessories on, it can overtax the charging
system and drain the battery.

Police cars are murder on alternators and batteries because they spend so much
time idling with high electrical loads on the charging system (lights, radios,
heater or A/C, etc.).
If the battery is low when a vehicle is first started, it takes some
time for the charging system to bring the battery back up to full
charge. It might take 20 to 30 minutes or more of normal driving to
fully recharge the battery.
Lead-acid battery technology is actually quite old, but it is simple,
cost-effective and generally provides adequate power for most
automotive applications. However, automotive lead-acid batteries must
be maintained at or near full charge for the cells to last. If the
battery is allowed to run down or discharge excessively and is not
fully recharged within a few days, the lead plates inside the battery
can become permanently sulfated. This will reduce the battery’s
ability to accept and hold a charge, and drastically shorten the
battery’s life.
The average service life of a conventional lead-acid car battery is
only about four to five years, and typically a year or so less in
extremely hot climates. Gel-cell batteries that do not contain liquid
acid electrolyte are better in this respect because they are less
affected by evaporation. Even so, their average service life is
typically five to six years depending on use.
Battery Power Drains
Allowing a vehicle to sit for a long period of time without being
driven (say a week or more) can allow the battery to run down. The
electronic modules in today’s vehicles draw a small amount of
power from the battery to keep their memories alive when the
vehicle isn’t running. Many go into sleep mode and shut down after
a certain period of time to reduce the power draw, but others (such
as the antitheft system, keyless entry system and PCM keep-alive
memory) are always on. Because of this, the key-off power drain
can be fairly high in many late model vehicles (80 milliamps to
several hundred milliamps). This can run the battery down fairly
quickly if the vehicle sits for long periods of time, is driven only
infrequently or for short trips, or has a weak battery or low
charging system output.
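To get a feel for how quickly a key-off drain in that 80 mA to several-hundred-mA range can run a battery down, a simple estimate divides the battery's usable charge by the drain current. The sketch below is only illustrative: the 50 Ah capacity and the assumption that roughly half of it can be used before starting becomes doubtful are hypothetical round numbers, not specifications for any particular battery.

```python
# Rough estimate of how long a key-off (parasitic) drain takes to run a battery
# down while the vehicle sits. All numbers are illustrative assumptions.

def days_until_weak(capacity_ah: float, usable_fraction: float, drain_ma: float) -> float:
    """Days for a constant key-off drain to use up the 'usable' part of the battery."""
    usable_ah = capacity_ah * usable_fraction
    hours = usable_ah / (drain_ma / 1000.0)   # amp-hours divided by amps gives hours
    return hours / 24.0

if __name__ == "__main__":
    for drain_ma in (80, 200, 400):           # key-off drains in milliamps
        days = days_until_weak(capacity_ah=50, usable_fraction=0.5, drain_ma=drain_ma)
        print(f"{drain_ma:3d} mA drain: roughly {days:4.1f} days to a weak battery")
```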
Abnormal key-off power drains can also run down a battery.
Leaving the lights on can drain a battery fairly quickly. Interior
lights, or a trunk or underhood light that fails to go out can also sap
power from the battery when a vehicle sits overnight. Sometimes a
power relay may stick on, or a module may fail to go to sleep after
the engine has been turned off, causing a higher than normal key-
off power drain. Any of these can run the battery down and
increase the load on the charging system when the engine is first
started. The result can be a chronic undercharging condition if the
vehicle isn’t driven long enough to fully recharge the battery, and
shortened battery life.
Any problems in the charging system itself can also allow the
battery to run down and/or shorten battery life. A bad alternator,
voltage regulator, faults in wiring harness or PCM voltage control
circuit, or even a slipping alternator drive belt can all cause low or
no charging output.
Charging Checks
The output of the charging system can be easily checked with a
voltmeter while the engine is idling. The actual output voltage
produced by the charging system will vary depending on
temperature and load, but will typically be about 1-1/2 to 2 volts
higher than battery voltage. At idle, most charging systems will
produce 13.8 to 14.8 volts with no lights or accessories on.
If the current produced by the charging system is not sufficient to
recharge a low battery, the battery may never achieve full charge.
This can lead to a permanent loss of voltage capacity inside the
battery as the plates become sulfated.
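The rules of thumb above (charging voltage roughly 1.5 to 2 volts higher than battery voltage, and about 13.8 to 14.8 volts at idle with no loads) can be written as a simple pass/fail check. The snippet below is only a sketch of that reasoning; the threshold values come from the text, while the function name and structure are illustrative.

```python
# Simple interpretation of a charging-system voltage check at idle, using the
# rule-of-thumb thresholds quoted in the text (13.8-14.8 V, ~1.5-2 V above battery).

def charging_output_ok(idle_voltage: float, resting_battery_voltage: float) -> bool:
    """Return True if the measured idle charging voltage looks healthy."""
    in_normal_window = 13.8 <= idle_voltage <= 14.8
    above_battery = (idle_voltage - resting_battery_voltage) >= 1.5
    return in_normal_window and above_battery

# Example: 14.2 V at idle against a 12.6 V resting battery reads as healthy.
print(charging_output_ok(14.2, 12.6))   # True
print(charging_output_ok(12.9, 12.6))   # False - charging output too low
```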
The current (amperage) produced by the charging system is also
important to maintain a fully charged battery. Not long ago, an 80-
amp alternator was considered a high-output unit. Now, alternators
that produce up to 120 to 155 amps are used in many vehicles. The
current output can be measured with a charging system tester, or on
a test bench if the alternator has been removed from the vehicle.
Alternator power ratings can also be given in watts (which is volts
times amps). Many alternators in foreign vehicles are rated in watts
rather than amps. The important point here is to make sure a
replacement alternator has the same power rating (in amps or
watts) as the original so the charging system can maintain the same
power output as before, should the alternator need to be replaced.
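Because watts are simply volts multiplied by amps, a rating given in watts can be converted to an approximate amp rating (or the reverse) when checking that a replacement alternator matches the original. The helper below assumes a nominal charging-system voltage of about 14 V for the conversion; that figure and the example ratings are illustrative assumptions, since actual output voltage varies with load and temperature.

```python
# Converting between alternator ratings in watts and amps (watts = volts x amps).
# A nominal charging-system voltage of ~14 V is assumed for the conversion.

NOMINAL_SYSTEM_VOLTAGE = 14.0   # assumption; real output varies with load and temperature

def watts_to_amps(watts: float, volts: float = NOMINAL_SYSTEM_VOLTAGE) -> float:
    return watts / volts

def amps_to_watts(amps: float, volts: float = NOMINAL_SYSTEM_VOLTAGE) -> float:
    return amps * volts

# Example: a unit rated about 1,680 W corresponds to roughly 120 A at ~14 V,
# so it would be a reasonable match for an original 120 A alternator.
print(round(watts_to_amps(1680)))   # -> 120
print(round(amps_to_watts(155)))    # -> 2170
```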
If your store has a bench tester or a portable charging system tester,
you should always recommend testing a customer’s alternator if
their battery keeps running down, is dead or has failed prematurely.
This can prevent unnecessary battery warranty claims if they buy a
new battery only to have it run down or fail due to a charging fault.
Battery Tests
Batteries need to be tested for two things: state of charge (a base
voltage measurement that shows if the battery is low or fully
charged), and capacity (a load or conductance test that checks the
condition of the plates inside the battery).
Connecting a voltmeter to the battery’s positive and negative
terminals (key off and all lights and accessories off) will reveal the
charge level of the battery. A reading of 12.66 volts indicates a
fully charged battery. If the reading is 12.45 volts or less, the
battery is low and needs to be recharged.
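Using the two figures given here (about 12.66 volts for a fully charged battery, and 12.45 volts or less for one that needs recharging), the key-off voltmeter reading can be turned into a simple state-of-charge verdict. The function below only sketches that interpretation; the category labels are illustrative wording, not standard terminology.

```python
# Interpreting an open-circuit (key off, all loads off) battery voltage reading,
# using the thresholds quoted in the text: ~12.66 V full, 12.45 V or less is low.

def state_of_charge(open_circuit_volts: float) -> str:
    if open_circuit_volts >= 12.6:
        return "fully charged"
    if open_circuit_volts > 12.45:
        return "partially charged"
    return "low - recharge before load testing or returning to service"

print(state_of_charge(12.66))   # fully charged
print(state_of_charge(12.30))   # low - recharge before load testing...
```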
Some batteries have a built-in “charge indicator.” A green dot tells
you the battery is 75 percent or more charged. A dark indicator (no
dot visible), means the cell is low and the battery needs to be
recharged. A yellow or clear indicator tells you the electrolyte level
inside the cell is low and the battery needs water. If the battery has
a sealed top and water cannot be added to the cells, do not attempt
to recharge the battery. The battery must be replaced.
If a battery is low, use a charger to bring it back up to full charge.
Alternators are designed to maintain the battery charge, not to
recharge dead batteries. A heavier than normal charging load on an
alternator may overheat and damage the diode trio (rectifier) in the
alternator, causing it to fail.
When charging a battery, do not turn the charger on until after the
charger has been connected to the battery. Sparks can be very
dangerous around a car battery because lead-acid batteries give off
hydrogen gas, which is highly flammable. Also, if a battery is
frozen, do not attempt to jump it or recharge it. Remove the battery
from the vehicle, bring it indoors and allow it to thaw before
recharging it.
Slow-charging is usually better than fast charging. Fast-charging
saves time, but risks overheating the battery. Slow-charging at 6
amps or less develops less heat inside the battery and breaks up the
sulfate on the battery plates more efficiently to bring the battery
back up to full charge. “Smart Chargers” automatically adjust the
charging rate. Most start out with a charging rate of 15 amps or
higher, then taper off the charging rate as the battery comes up.
The time it takes to recharge a battery will depend on the battery’s
reserve capacity (RC) rating, its state of discharge, and the output
of the battery charger. The charging rate (in amps) multiplied by
the number of hours of charging time should equal the reserve
capacity of the battery. For example, a dead battery with a RC
rating of 72 will take about 12 hours to fully recharge with a 6 amp
charger.
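The rule of thumb above (charging amps multiplied by hours of charging should roughly equal the reserve-capacity rating) gives a quick estimate of recharge time. The sketch below simply encodes that arithmetic, including the worked example of an RC-72 battery on a 6-amp charger; it ignores charger taper and charging losses, so real charge times will differ.

```python
# Rough recharge-time estimate from the rule of thumb in the text:
# charging amps x hours of charging = reserve capacity (RC) rating.
# Ignores charger taper and charging losses, so treat the result as approximate.

def estimated_charge_hours(reserve_capacity: float, charger_amps: float) -> float:
    return reserve_capacity / charger_amps

# Worked example from the text: a dead battery with an RC rating of 72
# on a 6-amp charger needs roughly 12 hours.
print(estimated_charge_hours(72, 6))   # -> 12.0
```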
Testing Battery Condition
A load test will tell you if a battery is good or bad. The test is done
by applying a calibrated load to the battery and noting how much
battery voltage drops. The test requires a carbon pile load tester, a
volt/amp meter (if not part of the load tester), and a battery that is
75 percent or more charged. If the battery is low it must be
recharged prior to load testing.
The test requires loading the battery to 1/2 of its CCA rating for
exactly 15 seconds. This is done by adjusting the carbon pile
setting on the tester. The battery must maintain a minimum post
voltage of 9.6 Volts at 70 degrees F during the test to pass. If the
voltage drops below 9.6 volts, the battery is “bad” and needs to be
replaced.
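The load-test procedure just described comes down to two numbers: the applied load (half the CCA rating, held for 15 seconds) and the minimum post voltage (9.6 volts at 70 degrees F). The sketch below captures only that decision logic; in practice the carbon-pile tester applies the load and the technician reads the meter, so the function and its names are illustrative.

```python
# Pass/fail logic for the battery load test described above: load the battery to
# half its CCA rating for 15 seconds; it passes if the post voltage stays at or
# above 9.6 V (with the battery at roughly 70 degrees F and at least 75% charged).

def load_test_passes(cca_rating: float, measured_volts_under_load: float) -> bool:
    applied_load_amps = cca_rating / 2          # carbon pile set to half the CCA rating
    print(f"Apply {applied_load_amps:.0f} A for 15 seconds, then read the voltage.")
    return measured_volts_under_load >= 9.6

# Example: a 600 CCA battery loaded to 300 A that holds 9.8 V passes.
print(load_test_passes(600, 9.8))   # True
print(load_test_passes(600, 9.2))   # False - battery should be replaced
```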
A faster and easier method to check the condition of a battery is to
use an electronic battery conductance tester. Conductance is how
much current the battery can conduct internally. Conductance is
determined by sending an alternating frequency signal through the
battery. The main advantage with this method is that the battery
does NOT have to be fully charged for accurate test results.
Battery Replacement
If a battery tests bad, or it will not accept or hold a charge, it will
have to be replaced. There is no way to rejuvenate an old sulfated
battery or a battery with internal shorts, opens or cell damage.
A replacement battery must be the same group size (dimensions
and post configuration) as the original, and should have the same or
higher Cold Cranking Amp (CCA) rating as the original battery.
Most V6 and V8 engines require 600 CCA for reliable cold
weather starting. Many diesel pickup trucks have a dual battery
setup for added cranking power, so if one battery has failed it is
usually a good idea to replace both batteries at the same time.
Replacing a battery in some vehicles can be difficult because of the
battery’s location. It may be sealed up inside a fender panel (many
Chrysler cars) or in the trunk or under the back seat. If the vehicle
is a hybrid, it may require a special gel cell 12-volt battery rather
than a wet cell lead-acid battery. Also, use extreme caution around
high-voltage hybrid batteries. Follow the vehicle manufacturer’s
safety precautions. The high voltage hybrid battery is usually
covered by a 10-year warranty and is a dealer-only replacement
item.
Here’s another precaution that is often overlooked: Disconnecting a
battery that still has voltage can wipe the memory in some modules
in many late model vehicles. The resulting memory loss in the
affected modules may prevent certain systems from functioning
until a special relearn procedure has been performed (some of
which may require using a scan tool to reset the module).
To prevent unwanted memory loss in modules, connect a “memory
saver” to the electrical system before the battery is disconnected.
These devices typically plug into the cigarette lighter or power
outlet, or attach to the battery cables, and use a 9-volt battery to
supply power to the modules. Another option is to connect a low
amperage (3 amps) battery charger to the battery cables while the
battery is being replaced.
Be extra careful when reconnecting battery cables to not reverse
polarity. Reversing the connections can damage the battery,
charging system, and on-board electronics (including the PCM).
Except for some antique vehicles, all modern vehicles have a
negative ground electrical system. The negative battery post is
marked with a minus (-) sign, while the positive battery post is
marked with a plus (+) sign. The battery cables may be color coded
red for positive and black for negative (but not always, so watch
out!).
Finally, batteries should be fully charged before they are installed
(to reduce the initial load on the charging system). Batteries are
“dry charged” at the factory, but can discharge over time as they sit
on the shelf. Your battery inventory should be arranged so your
oldest batteries are the first on the shelf, with the newest batteries
in the back. Use a voltmeter to check the charge level on your
batteries, and use a charger to bring any low batteries up to full
charge before they go out the door.
Battery customers should also be reminded to check the condition
of the battery cables on their vehicle. A new battery can’t crank the
engine normally or maintain its charge if the battery cables are
loose, badly corroded or undersized. Watch out for cheap
replacement battery cables that have undersized wire inside. It
takes heavy gauge wire to handle all the amps that many starting
systems require.
Starter
A starter (also self-starter, cranking motor, or starter motor) is a device used
to rotate (crank) an internal-combustion engine so as to initiate the engine's
operation under its own power. Starters can be electric, pneumatic, or
hydraulic. In the case of very large engines, the starter can even be another
internal-combustion engine.
Internal combustion engines are feedback systems, which, once started, rely
on the inertia from each cycle to initiate the next cycle. In a four-stroke
engine, the third stroke releases energy from the fuel, powering the fourth
(exhaust) stroke and also the first two (intake, compression) strokes of the
next cycle, as well as powering the engine's external load. To start the first
cycle at the beginning of any particular session, the first two strokes must be
powered in some other way than from the engine itself. The starter motor is
used for this purpose and it is not required once the engine starts running and
its feedback loop becomes self-sustaining.
The Starting System

The “starting system”, the heart of the electrical system in your car, begins
with the Battery. The key is inserted into the Ignition Switch and then turned
to the start position. A small amount of current then passes through the
Neutral Safety Switch to a Starter Relay or Starter Solenoid which allows
high current to flow through the Battery Cables to the Starter Motor. The
starter motor then cranks the engine so that the piston, moving downward,
can create a suction that will draw a Fuel/Air mixture into the cylinder, where
a spark created by the Ignition System will ignite this mixture. If the
Compression in the engine is high enough and all this happens at the right
Time, the engine will start.
Battery
The automotive battery, also known as a lead-acid storage battery, is an
electrochemical device that produces voltage and delivers current. In an
automotive battery we can reverse the electrochemical action, thereby
recharging the battery, which will then give us many years of service. The
purpose of the battery is to supply current to the starter motor, provide
current to the ignition system while cranking, to supply additional current
when the demand is higher than the alternator can supply and to act as an
electrical reservoir.
The automotive battery requires special handling. The electrolyte inside the
battery is a mixture of sulfuric acid and water. Sulfuric acid is very
corrosive; if it gets on your skin it should be flushed with water immediately;
if it gets in your eyes they should be flushed with clean water immediately
and you should see a doctor as soon as possible.
Sulfuric acid will eat through clothing, so it is advisable to wear old clothing
when handling batteries. It is also advisable to wear goggles and gloves while
servicing the battery. When charging, the battery will emit hydrogen gas; it is
therefore extremely important to keep flames and sparks away from the
battery.
Because batteries emit hydrogen gas while charging, the battery case cannot
be completely sealed. Years ago there was a vent cap for each cell and we
had to replenish the cells when the electrolyte evaporated. Today’s batteries
(maintenance free) have small vents on the side of the battery; the gases
emitted have to go through baffles to escape. During this process the liquid
condenses and drops back to the bottom of the battery. There is no need to
replenish or add water to the battery.
Today’s batteries are rated in cold cranking amps (CCA). This represents the
current that the battery can produce for 30 seconds at 0 degrees F before the
battery voltage drops below 7.2 volts. An average battery today will have a
CCA rating of about 500. With the many different makes and models of
cars available today, batteries will come in many different sizes, but all sizes
come in many CCAs. Make sure you get a battery strong enough to operate
properly in your car. The length of the warranty is not indicative of the
strength of the battery.
Battery cables are large diameter, multistranded wire which carry the high
current (250+ amps) necessary to operate the starter motor. Some battery
cables will have a smaller wire, soldered to the terminal, which is used to
either operate a smaller device or to provide an additional ground. When the
smaller cable burns it indicates a high resistance in the heavy cable.
Even maintenance-free batteries need periodic inspection and cleaning to
ensure they stay in good working order. Inspect the battery to see that it is
clean and that it is held securely in its carrier. Some corrosion naturally
collects around the battery. Electrolyte condensation contains corrosive
sulfuric acid, which eats away the metal of battery terminals, cable ends and
battery holddown parts. To clean away the corrosion, use a mixture of baking
soda and water, and wash all the metal parts around the battery, being careful
not to allow any of the mixture to get into the battery (batteries with top cell
caps and vents). Rinse with water. Remove the battery cables from the battery
(negative cable first), wire brush the inside of the cable end and the battery
post. Reinstall the cables (negative end last). Coat all exposed metal parts
(paint or grease can be used) so that the sulfuric acid cannot get on the metal.
Ignition Switch
The ignition switch allows the driver to distribute electrical current to where
it is needed. There are generally 5 key switch positions that are used:
● Lock – All circuits are open (no current is supplied) and the steering
wheel is in the locked position. In some cars, the transmission lever
cannot be moved in this position. If the steering wheel is applying
pressure to the locking mechanism, the key might be hard to turn. If you
do experience this type of condition, try moving the steering wheel to
remove the pressure as you turn the key.
● Off – All circuits are open, but the steering wheel can be turned and the
key cannot be extracted.
● Run – All circuits except the starter circuit are closed (current is
allowed to pass through), so power is supplied to everything but the
starter circuit.
● Start – Power is supplied to the ignition circuit and the starter motor
only. That is why the radio stops playing in the start position. This
position of the ignition switch is spring loaded so that the starter is not
engaged while the engine is running. This position is used momentarily,
just to activate the starter.
● Accessory – Power is supplied to all but the ignition and starter circuit.
This allows you to play the radio, work the power windows, etc. while
the engine is not running.
Most ignition switches are mounted on the steering column. Some switches
are actually two separate parts;
● The lock into which you insert the key. This component also contains
the mechanism to lock the steering wheel and shifter.
● The switch which contains the actual electrical circuits. It is usually
mounted on top of the steering column just behind the dash and is
connected to the lock by a linkage or rod.

Neutral Safety Switch


This switch opens (denies current to) the starter circuit when the transmission
is in any gear but Neutral or Park on automatic transmissions. This switch is
normally connected to the transmission linkage or directly on the
transmission. Most cars utilize this same switch to apply current to the back
up lights when the transmission is put in reverse. Standard transmission cars
will connect this switch to the clutch pedal so that the starter will not engage
unless the clutch pedal is depressed. If you find that you have to move the
shifter away from park or neutral to get the car to start, it usually means that
this switch needs adjustment. If your car has an automatic parking brake
release, the neutral safety switch will control that function also.

Starter Relay
A relay is a device that allows a small amount of electrical current to control
a large amount of current. An automobile starter uses a large amount of
current (250+ amps) to start an engine. If we were to allow that much current
to go through the ignition switch, we would not only need a very large
switch, but all the wires would have to be the size of battery cables (not very
practical). A starter relay is installed in series between the battery and the
starter. Some cars use a starter solenoid to accomplish the same purpose of
allowing a small amount of current from the ignition switch to control a high
current flow from the battery to the starter. The starter solenoid in some cases
also mechanically engages the starter gear with the engine.

Battery Cables
Battery cables are large diameter, multistranded wire which carry the high
current (250+ amps) necessary to operate the starter motor. Some have a
smaller wire soldered to the terminal which is used to either operate a smaller
device or to provide an additional ground. When the smaller cable burns, this
indicates a high resistance in the heavy cable. Care must be taken to keep the
battery cable ends (terminals) clean and tight. Battery cables can be replaced
with ones that are slightly larger but never smaller.

Starter Motor
The starter motor is a powerful electric motor, with a small gear (pinion)
attached to the end. When activated, the gear is meshed with a larger gear
(ring), which is attached to the engine. The starter motor then spins the
engine over so that the piston can draw in a fuel/air mixture, which is then
ignited to start the engine. When the engine starts to spin faster than the
starter, a device called an overrunning clutch (bendix drive) automatically
disengages the starter gear from the engine gear.
Difference Between Alternator & Generator
The major difference between the alternator and the generator is that in an
alternator the armature is stationary and the field system rotates, whereas in
a generator the armature rotates and the field is stationary. The armature of
the alternator is mounted on the stationary element, called the stator, and the
field winding on a rotating element, while the connections of a generator are
just the reverse. The other differences between them are shown below in the
comparison chart.
The alternator and the generator both work on the principle of Faraday's law of
electromagnetic induction. The generator can produce either alternating or direct
current, whereas the alternator produces only alternating current. The rotor of
the generator is placed inside a stationary magnetic field produced by the
magnetic poles. As the rotor moves inside the magnetic field, it cuts the magnetic
lines of force, which induces a current in the wire.
Every half rotation of the rotor reverses the direction of the current, which
produces an alternating current. To obtain alternating current, the ends of the
winding are connected directly to the load; to produce direct current, the ends
of the winding are connected to a commutator, which converts the alternating
current into direct current.

Comparison Chart

Basis for Comparison       Alternator                              Generator
Definition                 A machine that converts mechanical      A machine that converts mechanical
                           energy into AC electrical power.        energy into electrical energy (AC or DC).
Current                    Produces alternating current only.      Can generate both AC and DC.
Magnetic field             Rotating                                Stationary
Input supply               Taken from the stator.                  Taken from the rotor.
Armature                   Stationary                              Rotating
Output EMF                 Alternating                             Constant
RPM (rotation per minute)  Wide range                              Narrow range
Dead battery               Will not charge a dead battery.         Will charge a dead battery.
Output                     Higher                                  Lower

Definition of an Alternator
The synchronous generator, or alternator, is a machine that converts mechanical
power from a prime mover into AC electrical power at a specific voltage and
frequency. Three-phase alternators are used because they have several advantages
in generation, transmission and distribution. For bulk power generation, large
alternators are used in thermal, hydro and nuclear power stations.
The magnetic poles of the rotor are excited by a direct field current. When the
rotor rotates, the magnetic flux cuts the stator conductors and an EMF is induced
in them. As alternate N and S poles of the rotor pass the armature conductors, the
induced EMF and current reverse direction with each pole, first in one direction
and then in the other, thus generating alternating current.
What is the charging system?
Made up of the alternator, battery, wiring and electronic control unit (ECU),
your vehicle’s charging system keeps your battery charged. It also delivers
the energy necessary to run the lights, radio and other electrical components
while the engine is running.

What’s happening when the battery/check charging system light comes on?

Whenever this light goes on, it means that the vehicle is running solely on
battery power. If the problem continues and your charging system fails, the
battery won’t be able to recharge and it will soon run down, leaving you with
a dead battery. Nothing can ruin a day like a dead battery, so if this light
comes on, it’s time to take your vehicle to your trusted mechanic to have
them find the source of the problem.

Please note that depending upon your vehicle, you might have a battery light
and/or check charging system light. Check your owner’s manual to learn
what warning lights your car has.

What can cause my battery/check charging system light to come on?

Unfortunately, there isn’t one answer as to why the battery/check charging
system light comes on. The good news is that your mechanic has the know-how
to get to the bottom of the issue. The following are some parts that can
cause the battery/check charging system warning light to come on.

Alternator issues - Many times, the alternator is the root of the problem
when your check charging system/battery light comes on. Have your
mechanic test the voltage coming from your alternator. If the voltage is low,
your mechanic will likely replace your weak alternator with a new one.

Battery problems - Your battery/check charging system light could be coming on
because your battery is low and needs replacing. Take your vehicle to your
mechanic and have them test your battery strength.

Drive belt troubles – A failed drive belt prevents the alternator from doing
its job and can cause the warning light to come on. Have your mechanic
check the condition of your vehicle’s drive belt. It could be faulty and need
replacing.

Corroded wires and connections - Have your mechanic clean all of the
connections and make sure the battery clamps are clean and tight.
Additionally, have them inspect all internal alternator wirings and
connections and also have them check all of the fusible links and look for any
burned links. If they are burned, get them repaired.

Faulty computer system - If your vehicle isn’t having alternator or battery
issues, it could be a computer issue. Have your mechanic test your vehicle’s
computer system after all other issues have been tested and cleared.
CHARGING SYSTEM COMPONENTS
AGM battery with battery sensor
The abbreviation AGM stands for Absorbent Glass Mat and means that the
electrolyte in these batteries is bound in a glass fiber fleece. AGM batteries
ensure higher starting and supply reliability, are leak-proof and have been
specially developed for vehicles with start-stop systems.

An intelligent battery sensor that is additionally mounted on the battery's
negative terminal monitors the battery condition. The measured values are
transmitted to a higher-level control unit via the serial LIN communication
interface.

Secondary battery
A second, so-called secondary battery may additionally be installed in some
vehicles. Secondary batteries, also known as buffer or back-up batteries,
support the vehicle electrical system when the engine has been switched off.
The additional supply is managed by a battery control unit, which also
prevents the two batteries from discharging each other.
Alternator with overrunning alternator pulley
The alternator supplies power consumers with power while the engine is
running and maintains the battery charge. The alternator output depends on
the engine speed. The maximum alternator output is only generated above
2000 rpm. A charge regulator is installed in the alternator, which is also
referred to as the alternator control unit. The charge controller has been
connected to the engine control unit via a LIN interface. By mounting an
overrunning alternator pulley only the driving force of one direction of
rotation is transmitted to the alternator, thus reducing friction and wear.

Engine control unit


In this system, the charging system is intelligently controlled via the engine
control unit (ECM). The central electronics module (CEM) sends a request
about the desired charging voltage for the main battery to the engine control
unit from where this request is forwarded to the alternator regulator. The
charge signal lamp in the instrument cluster is controlled via the CAN
network. At the same time, the engine control unit switches on the secondary
battery via a relay for the charging process. The charging time of the
secondary battery is calculated by the central electronics module and
forwarded to the engine control unit.
Starter
The starter, also known as the starter motor, is required to start the internal
combustion engine. Starters are electric motors that are briefly coupled to the
engine via a ring gear during the starting process and bring it up to the required
starting speed. Depending on the design, the current consumption of the starter
can be well over one hundred amperes. By reinforcing individual components, the
starter can be designed for an increased number of start cycles over its entire
service life.
How it Works - the Cut-out
The charging system of the Austin Seven primarily consists of a dynamo to
produce an electric current and a battery in which to store the charge. In
between these two is an automatic switch, known as a Cut-out. Without the
cut-out, the battery would be charged whenever the dynamo is rotated
sufficiently fast by the engine, but would be rapidly discharged by trying to
operate the dynamo as a motor when the engine revs are not high enough –
particularly when the engine is stationary. Note that the ignition switch does
NOT disconnect the dynamo from the battery; it only disconnects the ignition
circuit itself, (plus a few auxiliary devices such as stop lights). The ignition
switch is therefore not relevant to the cut out operation or charging circuit.
There have been a number of variants during the period over which the cars
were manufactured. However, they all have the same function and all have
very similar operation. Each Cut-out has a set of spring loaded contacts
mounted on a mechanical rocker, known as an armature, plus two
electromagnets which are arranged so as to pull and push on this armature.
One electromagnet is connected between the dynamo output terminal “D”
and earth (the “shunt” winding), and the other (the “series” winding) is
connected in series with the battery terminal, usually marked “A”, and the cut-out
contacts. Note that, depending on cut-out type, these windings may be on
separate bobbins or combined on one bobbin. Irrespective of type, the shunt
winding is composed of many turns of thin wire, whilst the series winding is
very few turns of substantially thicker wire.
Example cut-out circuit
With the engine stationary, it is necessary for the battery to be disconnected
from the dynamo, to prevent the “motor” action described above. Hence, the
contacts must be open. This is the resting state of the cut-out.
As the engine is started and the revs increased, the dynamo output voltage
will rise. This output voltage appears across the shunt winding, making it act
as an electromagnet. As the series winding has one end connected to an open
contact, it plays no part at this stage. Eventually, (at fast tickover), the
dynamo voltage will reach a value that would be high enough to prevent
current flowing from the battery to the dynamo, but would allow current to
flow from the dynamo to the battery – the normal charging method. The
shunt winding is arranged so that when the dynamo output voltage reaches
this value, it produces a sufficiently strong magnetic field to pull on the
armature overcoming the spring force and close the contacts. The battery is
now being charged. For 6V systems, this should occur at approximately
6.5V; for 12V, this occurs at 13.5V. As soon as the contacts are closed, the
shunt winding is now also connected across the battery. This gives a latching
action as the battery now also supplies current for the shunt winding. In this
state, the contacts would still be pulled shut until the battery voltage dropped
to a low level irrespective of the dynamo output, as the electromagnet is now
being operated by the battery.
Consequently, it is necessary to counteract this force so that the spring can
pull the contacts apart to disconnect the battery. This is the function of the
second or “series” winding.
The Series winding produces a force on the armature which is proportional to
the current flowing either to or from the battery, and which changes direction
with the direction of current flow. When the battery is being charged, the
force acts to add to the pulling force from the shunt winding, causing the
contacts to be held firmly shut. However, when the dynamo output falls so
that there is a current from the battery to the dynamo, the force is now in the
opposite direction and opposes that of the shunt winding. When the force
from the series winding equals that of the shunt, then the armature is released
for the spring to open the contacts.
If the engine is stopped at this point, the battery is safely disconnected from
the dynamo. If the revs rise, the force from the shunt winding will close the
contacts again.
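The behaviour described above is essentially a small state machine: the contacts close when the dynamo voltage reaches the cut-in value (roughly 6.5 volts on a 6-volt system, 13.5 volts on a 12-volt system) and open again once enough current flows back from the battery towards the dynamo. The Python sketch below models only that logic; the small reverse-current threshold used to trip the opening is an illustrative assumption standing in for the real balance of forces between the series winding, the shunt winding and the spring.

```python
# Simplified model of the cut-out's switching behaviour as described above.
# Contacts close when the dynamo voltage reaches the cut-in value; they open again
# when the battery starts discharging into the dynamo beyond a small threshold
# (an assumption standing in for the series winding overcoming the shunt winding).

class CutOut:
    def __init__(self, cut_in_volts: float = 13.5, reverse_amps_to_open: float = 2.0):
        self.cut_in_volts = cut_in_volts                  # ~6.5 V for a 6 V system
        self.reverse_amps_to_open = reverse_amps_to_open  # illustrative assumption
        self.contacts_closed = False

    def update(self, dynamo_volts: float, battery_current_amps: float) -> bool:
        """battery_current_amps > 0 means charging; < 0 means discharging into the dynamo."""
        if not self.contacts_closed and dynamo_volts >= self.cut_in_volts:
            self.contacts_closed = True    # shunt winding pulls the armature in
        elif self.contacts_closed and battery_current_amps <= -self.reverse_amps_to_open:
            self.contacts_closed = False   # series winding cancels the pull; the spring opens
        return self.contacts_closed

cut_out = CutOut()
print(cut_out.update(dynamo_volts=12.0, battery_current_amps=0.0))   # False - still open
print(cut_out.update(dynamo_volts=13.8, battery_current_amps=5.0))   # True  - charging
print(cut_out.update(dynamo_volts=11.5, battery_current_amps=-3.0))  # False - opened again
```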
This can be seen in operation from the behaviour of the ammeter. With all
other electrical loads off, (note that the ammeter measures all current flow
from the battery, not just that to the dynamo), and the engine running at a fast
tickover, the ammeter will settle in the charging side of the scale. When the
revs are allowed to drop, the ammeter will move to the discharge side and
then suddenly fall back nearer to zero. This sudden fall back occurs when the
series winding has sufficient current flowing in the discharge direction to
counteract the closing force from the shunt winding and the contacts are
opened. The charging warning light will operate at the same time. No current
flows to the dynamo; the remaining discharge shown is actually the current
required by the ignition circuit.
The Voltage Regulator
The voltage regulator can be mounted inside or outside of the alternator
housing. If the regulator is mounted outside (common on some Ford
products) there will be a wiring harness connecting it to the alternator.
The voltage regulator controls the field current applied to the spinning rotor
inside the alternator. When there is no current applied to the field, there is no
voltage produced from the alternator. When voltage drops below 13.5 volts,
the regulator will apply current to the field and the alternator will start
charging. When the voltage exceeds 14.5 volts, the regulator will stop
supplying voltage to the field and the alternator will stop charging. This is
how voltage output from the alternator is regulated. Amperage or current is
regulated by the state of charge of the battery. When the battery is weak, the
electromotive force (voltage) is not strong enough to hold back the current
from the alternator trying to recharge the battery. As the battery reaches a
state of full charge, the electromotive force becomes strong enough to oppose
the current flow from the alternator, the amperage output from the alternator
will drop to close to zero, while the voltage will remain at 13.5 to 14.5. When
more electrical power is used, the electromotive force will reduce and
alternator amperage will increase. It is extremely important that when
alternator efficiency is checked, both voltage and amperage outputs are
checked. Each alternator has a rated amperage output depending on the
electrical requirements of the vehicle.
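The regulator behaviour just described is, in its simplest form, hysteresis (on/off) control of the field current: the field is switched on when system voltage falls below about 13.5 volts and switched off again above about 14.5 volts. The snippet below sketches only that on/off logic; real regulators modulate the field current continuously, and in late-model vehicles the PCM is involved, so this is a teaching model rather than an implementation.

```python
# Hysteresis sketch of the voltage-regulator behaviour described above:
# field current on below ~13.5 V, off above ~14.5 V.
# Real regulators vary the field current smoothly; this is only a teaching model.

class SimpleRegulator:
    FIELD_ON_BELOW = 13.5
    FIELD_OFF_ABOVE = 14.5

    def __init__(self):
        self.field_energised = False

    def update(self, system_volts: float) -> bool:
        if system_volts < self.FIELD_ON_BELOW:
            self.field_energised = True     # alternator starts charging
        elif system_volts > self.FIELD_OFF_ABOVE:
            self.field_energised = False    # alternator stops charging
        # between the two thresholds the previous state is kept (hysteresis)
        return self.field_energised

regulator = SimpleRegulator()
for volts in (12.8, 14.0, 14.7, 14.2):
    print(volts, "V ->", "field on" if regulator.update(volts) else "field off")
```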
Charging system gauge or warning lamp
The charging system gauge or warning lamp monitors the health of the
charging system so that you have a warning of a problem before you get
stuck.
When a charging problem is indicated, you can still drive a short distance to
find help unlike an oil pressure or coolant temperature problem which can
cause serious engine damage if you continue to drive. The worst that can
happen with a charging system problem is that you get stuck in a bad
location.
A charging system warning lamp is a poor indicator of problems in that there
are many charging problems that it will not recognize. If it does light while
you are driving, it usually means the charging system is not working at all.
The most common cause of this is a broken alternator belt.
There are two types of gauges used to monitor charging systems on some
vehicles: a voltmeter which measures system voltage and an ammeter which
measures amperage. Most modern cars that have gauges use a voltmeter
because it is a much better indicator of charging system health. A mechanic’s
voltmeter is usually the first tool a technician uses when checking out a
charging system.

A modern automobile has a 12-volt electrical system. A fully charged battery
will read about 12.5 volts when the engine is not running. When the engine is
running, the charging system takes over so that the voltmeter will read 14 to
14.5 volts and should stay there unless there is a heavy load on the electrical
system such as wipers, lights, heater and rear defogger all operating together
while the engine is idling at which time the voltage may drop. If the voltage
drops below 12.5, it means that the battery is providing some of the current.
You may notice that your dash lights dim at this point. If this happens for an
extended period, the battery will run down and may not have enough of a
charge to start the car after shutting it off. This should never happen with a
healthy charging system because as soon as you step on the gas, the charging
system will recharge the battery. If the voltage is constantly below 14 volts,
you should have the system checked. If the voltage ever goes above 15 volts,
there is a problem with the voltage regulator. Have the system checked as
soon as possible as this “overcharging” condition can cause damage to your
electrical system.
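The voltmeter guidance in this passage can be summarised as a few bands: below about 12.5 volts with the engine running, the battery is supplying some of the current; roughly 14 to 14.5 volts is normal charging; above about 15 volts points to a regulator fault. The helper below only encodes those bands from the text; the wording of the returned messages is illustrative.

```python
# Interpreting the dash voltmeter reading with the engine running,
# using the bands described in the text.

def interpret_running_voltage(volts: float) -> str:
    if volts > 15.0:
        return "overcharging - suspect the voltage regulator; have it checked soon"
    if 14.0 <= volts <= 14.5:
        return "normal charging"
    if volts < 12.5:
        return "battery is supplying current - charging output too low for the load"
    return "below normal charging voltage - have the system checked"

for reading in (14.3, 12.2, 15.4, 13.6):
    print(reading, "->", interpret_running_voltage(reading))
```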

If you think of electricity as water, voltage is like water pressure, whereas
amperage is like the volume of water. If you increase pressure, then more
water will flow through a given size pipe, but if you increase the size of the
pipe, more water will flow at a lower pressure. An ammeter will read from a
negative amperage when the battery is providing most of the current thereby
depleting itself, to a positive amperage if most of the current is coming from
the charging system. If the battery is fully charged and there is minimal
electrical demand, then the ammeter should read close to zero, but should
always be on the positive side of zero. It is normal for the ammeter to read a
high positive amperage in order to recharge the battery after starting, but it
should taper off in a few minutes. If it continues to read more than 10 or 20
amps even though the lights, wipers and other electrical devices are turned
off, you may have a weak battery and should have it checked.

What can go wrong?


There are a number of things that can go wrong with a charging system:
● Insufficient Charging Output
If one of the three stator windings failed, the alternator would still charge, but
only at two thirds of its normal output. Since an alternator is designed to
handle all the power that is needed under heavy load conditions, you may
never know that there is a problem with the unit. It might only become
apparent on a dark, cold, rainy night when the lights, heater, windshield
wipers and possibly the seat heaters and rear defroster are all on at once, when
you may notice the lights start to dim as you slow down. If two sets of
windings failed, you would probably notice it a lot sooner.
It is more common for one or more of the six diodes in the rectifier to fail. If
a diode burns out and opens one of the circuits, you would see the same
problem as if one of the windings had failed. The alternator will run at a
reduced output. However, if one of the diodes were to short out and allow
current to pass in either direction, other problems will occur. A shorted diode
will allow AC current to pass through to the automobile’s electrical system
which can cause problems with the computerized sensors and processors.
This condition can cause the car to act unpredictably and cause all kinds of
problems.
● Too much voltage
A voltage regulator is designed to limit the voltage output of an alternator to
14.5 volts or less to protect the vehicle’s electrical system. If the regulator
malfunctions and allows uncontrolled voltage to be released, you will see
bulbs and other electrical components begin to fail. This is a dangerous and
potentially costly problem. Fortunately, this type of failure is very rare. Most
failures cause a reduction of voltage or amperage.
● Noise
Since the rotor is always spinning while the engine is running, there needs to
be bearings to support the shaft and allow it to spin freely. If one of those
bearings were to fail, you will hear a grinding noise coming from the
alternator. A mechanic’s stethoscope can be used to confirm which of the
spinning components driven by the serpentine belt is making the noise.
Repairing Charging System Problems
The most common repair is the replacement of the alternator with a new or
rebuilt one. A properly rebuilt alternator is as good as a new alternator and
can cost hundreds less than purchasing a brand new one.
Labor time to replace an alternator is typically under an hour unless your
alternator is in a hard to access location. Most alternators are easily accessible
and visible on the top of the engine.
Replacing an alternator is usually an easy task for a backyard mechanic and
rebuilt alternators are readily available for most vehicles at the local auto
parts store. The most important task for the do-it-yourselfer is to be careful
not to short anything out. ALWAYS DISCONNECT THE BATTERY
BEFORE REPLACING AN ALTERNATOR.
Alternators can be repaired by a knowledgeable technician, but in most cases,
it is not economical to do this. Also, since the rest of the alternator is not
touched, a repair job is usually not guaranteed.
In some cases, if the problem is diagnosed as a bad voltage regulator, the
regulator can be replaced without springing for a complete rebuild. The
problem with this is that there will be an extra labor charge for disassembling
the alternator in order to get to the internal regulator. That extra cost, along
with the cost of the replacement regulator, will bring the total cost close to
the cost of a complete (and guaranteed) rebuilt unit.
This is not the case when the regulator is not inside the alternator. In those
cases, the usual practice is to just replace the part that is bad.
Interior Lighting:
Interior lighting is a complex problem that depends on various factors such as:
• purpose and intended service,
• class of interior,
• the luminaire best suited,
• color effect, and
• reflection from ceiling, walls and floors.

Good lighting means the intensity should be ample to see clearly and distinctly.
The light distribution should be nearly uniform, at least over part of the room,
and the light should be well diffused, that is, soft. Color depends on purpose and
taste, but the source should approach daylight. The source should be located well
above the range of vision. To avoid glare, intrinsic brightness is reduced by using
diffusing glassware and by removing objects of specular reflection from the range
of vision. Shadows are a must for accentuating depth, but they should not be too
abrupt or dense; they should not be harsh and should be toned down.
Standard practice is to provide general lighting in all areas at a level
comfortable to the eye, eliminating dark shadows and avoiding sharp contrast. To
emphasize the parts that should be shown, light sources are located with the visual
importance of each object in mind. A lamp may be concealed or counter-lighted so
that it has a very low attention value of its own, and glare is minimized by
diffusing.
The American Institute of Architects recommends the following for good illumination:
1. General lighting – effectively illuminate all objects and areas with due regard
to their relative importance in the interior composition. Lighting should be
adequate for eye comfort throughout the room, eliminate dark shadows and sharp
contrasts, preserve soft shadows for roundness and relief, and place emphasis on
those parts that need first attention.

2. Light sources should be subordinated in visual importance to the things they
are intended to illuminate, except in the rare case where the source itself is a
dominant decorative element. Unless they are concealed or counter-lighted so that
they are not apparent, they have an extremely high attention value and will
dominate the scheme. If visible, they should be disposed so as to attract the eye
to the major features of the room rather than to themselves.

3. Glare must be eliminated. Glare is the result of intense brightness in
concentrated areas within the line of vision. It is produced by excessive
brightness of visible light sources, by reflection of bright lights from polished,
poorly diffusing surfaces, and by extreme contrast of light and shade. Employ means
of diffusing the light at the source, or finish the room with diffusing or
absorbing materials rather than reflecting ones.
4. The level of illumination should be adequate for the type of eye work. Local lighting should supplement general lighting to give adequate illumination when working at machines, desks and reading tables. High-level local lighting is always to be accompanied by general lighting to avoid eye strain and minimize contrast. If glare is avoided there is no over-illumination; natural light reaches about 107,600 lux outdoors and 1,076 lux indoors. The level should be adequate for the eye task expected.
5. General lighting should be related and controlled to suit the mood. Worship, meditation and introspection need low levels, while gaiety, mental activity and physical or intense activity need high levels. Theaters, homes and restaurants may need levels varied according to mood. In shops the level should be appropriate to attract customers through psychological reaction. Offices, factories and schools need adequate illumination to work without eye strain.
6. Light source must suit interior in style, shape and finish in all
architectural aspects.
Exterior Lighting
Automotive exterior lighting systems including turn signals, headlights and
taillights are a standard fitting in any vehicle. To enhance safety and comfort
for drivers and passengers and enable more sophisticated body design,
vehicles are increasingly equipped with high-intensity discharge lamps (HID
or xenon) and light-emitting diodes (LED) in adaptive front lighting (AFL)
systems.
Smart headlamp systems require the use of high-efficiency LED drive solutions with advanced diagnostics. Advanced systems implementing dynamic lighting also require headlamp leveling and bend lighting control.
ST’s automotive portfolio includes a wide selection of high- and low-side drivers for exterior lighting systems, from higher power “front beam” lamps to daytime running lights (DRL) and low-power loads used by lateral and rear lights, as well as advanced regulators and cost-optimized microcontrollers to enable building high-efficiency and reliable lighting modules.
ESD and battery protection devices complement the offer, to cover all design requirements.
Design of lighting system:
Direct lighting:
Lighting provided from a source without reflection from other surfaces. In
day lighting, this means that the light has travelled on a straight path from the
sky (or the sun) to the point of interest. In electrical lighting it usually
describes an installation of ceiling mounted or suspended luminaires with
mostly downward light distribution characteristics.
Indirect lighting:
Lighting provided by reflection, usually from wall or ceiling surfaces. In daylighting, this means that the light coming from the sky or the sun is reflected off a surface of high reflectivity like a wall, a window sill or a special redirecting device. In electrical lighting the luminaires are suspended from the ceiling or wall mounted and distribute light mainly upwards so it gets reflected off the ceiling or the walls.
Dashboard Gauges
The minimum number of gauges on a passenger car dashboard is the
speedometer and the fuel gauge. The most common additional gauge is the
temperature gauge followed by the tachometer, voltmeter and oil pressure
gauge. If your car does not have a temperature gauge, oil pressure gauge or
charging system gauge, then you will have a warning light for these
functions.
The most common configuration in today’s cars is speedometer, tachometer, fuel gauge, and temperature gauge.
To get more information about the gauges on your car, check your owner’s
manual.
Speedometer
In the past, this was the most used of the gauges. The speedometer was usually driven
by a cable that spins inside a flexible tube. The cable is connected on one side
to the speedometer, and on the other side to the speedometer gear inside the
transmission. Today, just about all vehicles have eliminated the cable and use
an electronic sensor to measure wheel speed and send the signal to an
electronically driven speedometer.
The accuracy of the speedometer can be affected by the size of the tires. If the
tires are larger in diameter than original equipment, the speedometer will read
that you are going slower than you actually are. On older vehicles, another
cause for inaccurate speed readings was an improper speedometer gear inside
the transmission. This can sometimes happen after a replacement
transmission has been installed. Most good transmission shops are aware of
this and will make sure that the correct speedometer gear is in the new
transmission.
On vehicles with electronic speedometers, the computer has settings for speedometer calibration when necessary, to allow a technician to adjust for different sized tires. These calibrations usually require specialized equipment like diagnostic scanners to do these types of adjustments.
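As a rough illustration of the tire-size effect described above, the short sketch below uses a simple ratio; the function name and the example diameters are assumptions for illustration, not values from the text.

def indicated_speed_kmh(true_speed_kmh, original_diameter_m, actual_diameter_m):
    """Speed shown on the dash for a given true speed.
    A larger-than-original tire covers more ground per revolution, so the
    wheels turn more slowly and the speedometer under-reads."""
    return true_speed_kmh * original_diameter_m / actual_diameter_m

print(indicated_speed_kmh(100.0, 0.65, 0.70))  # about 92.9 km/h shown at a true 100 km/h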
Fuel Gauge
Deliberately designed to be inaccurate! After you fill up the tank, the gauge will stay on full for a long time, then slowly drop until it reads 3/4 full. After that, it moves progressively faster until the last quarter of a tank seems to go very quickly. This is a bit of psychological sleight-of-hand to give the impression that the car gets better gas mileage than it does, and it seems to reduce the number of complaints from new car buyers during the first few weeks after they buy the car.
The fuel gauge shown here is probably more accurate than most. Notice the
difference between 3/4 to full and empty to 1/4.
When the needle drops below E, there are usually 1 or 2 gallons left in
reserve. To find out for sure, pull out your owner’s manual and find out how
many gallons of gas your tank holds, then the next time you fill up an empty
tank, check how many gallons it took to fill it. The difference is your reserve.
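The reserve estimate described above is simple subtraction; a tiny sketch with illustrative numbers only:

def reserve_gallons(tank_capacity_gal, gallons_to_fill_from_empty):
    """Reserve below 'E' = rated tank capacity minus what it took to fill an 'empty' tank."""
    return tank_capacity_gal - gallons_to_fill_from_empty

print(reserve_gallons(15.0, 13.2))  # about 1.8 gallons left in reserve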
Note: It is not a good idea to let your tank drop below 1/4. This is because
your fuel pump is submerged in fuel at the bottom of the tank. The liquid
fuel helps to keep the fuel pump cool. If the fuel level goes too low and
uncovers the pump, the pump will run hotter than normal. If you do this often
enough, it can shorten the life of the fuel pump and eventually cause it to fail.
Temperature Gauge or Warning Lamp
This gauge measures the temperature of the engine coolant in degrees. When
you first start the car, the gauge will read cold. If you turn the heater on when
the engine is cold, it will blow cold air. When the gauge starts moving away
from cold, you can then turn the heater on and get warm air.
Most temperature gauges do not show degrees like the one pictured here.
Instead, they will read cold, hot, and have a normal range as pictured in the
dash panel at the top of this page.
It is very important to monitor the temperature gauge to be sure that your
engine is not overheating. If you notice that the gauge is reading much hotter
than it usually is and the outside temperature is not unusually hot, have the
cooling system checked as soon as possible.
Note: If the temperature gauge moves all the way to hot, or if the temperature
warning light comes on, the engine is overheating! Safely pull off the road
and turn the engine off and let it cool. An overheating engine can quickly
cause serious engine damage!
Tachometer
The tachometer measures how fast the engine is turning in RPM (Revolutions
Per Minute). This information is useful if your car has a standard shift
transmission and you want to shift at the optimum RPM for best fuel
economy or best acceleration. It is one of the least used gauges on a car with an
automatic transmission. You should never race your engine so fast that the
tach moves into the red zone as this can cause engine damage. Some engines
are protected by the engine computer from going into the red zone. Usually,
the tachometer shows single-digit markings like 1, 2, 3, etc. Somewhere, you
will also see an indicator that says RPM x 1000. This means that you
multiply the reading by 1000 to get the actual RPM, so if the needle is
pointing to 2, the engine is running at 2000 RPM.
Oil Pressure Gauge or Warning Lamp
Measures engine oil pressure in pounds per square inch. Oil pressure is just
as important to an engine as blood pressure is to a person. If you run an
engine with no oil pressure even for less than a minute, you can easily
destroy it. Most cars have an oil lamp that lights when oil pressure is
dangerously low. If it comes on while you’re driving, stop the vehicle as soon
as it is safely possible and shut off the engine. Then, check the oil level and
add oil as necessary.
Charging System Gauge or Warning Lamp
The charging system is what provides the electrical current for your vehicle.
Without a charging system, your battery will soon be depleted and your
vehicle will shut down. The charging system gauge or warning lamp
monitors the health of this system so that you have a warning of a problem
before you get stuck.
When a charging problem is indicated, you can still drive a short distance to
find help unlike an oil pressure or coolant temperature problem which can
cause serious engine damage if you continue to drive. The worst that can
happen is that you get stuck in a bad location.
A charging system warning lamp is a poor indicator of problems in that there
are many charging problems that it will not recognize. If it does light while
you are driving, it usually means the charging system is not working at all.
The most common cause is a broken alternator belt.
There are two types of gauges used to monitor charging systems: a voltmeter
which measures system voltage and an ammeter which measures amperage
going out of, or coming into the battery. Most modern cars that have gauges
use a voltmeter because it is a much better indicator of charging system
health. A voltmeter is usually the first tool a technician uses when checking
out a charging system.
A modern automobile has a 12-volt electrical system. A fully charged battery
will read about 12.5 volts when the engine is not running. When the engine is
running, the charging system takes over so that the voltmeter will read 14 to
14.5 volts and should stay there unless there is a heavy load on the electrical
system such as wipers, lights, heater, and rear defogger all operating together
while the engine is idling at which time the voltage may drop. If the voltage
drops below 12.5, it means that the battery is providing some of the current.
You may notice that your dash lights dim at this point. If this happens for an
extended period, the battery will run down and may not have enough of a
charge to start the car after shutting it off. This should never happen with a
healthy charging system because as soon as you step on the gas, the charging
system will recharge the battery. If the voltage is constantly below 14 volts,
you should have the system checked. If the voltage ever goes above 15 volts,
there is a problem with the voltage regulator. Have the system checked as
soon as possible as this “overcharging” condition can cause damage to your
electrical system.
If you think of electricity as water, voltage is like water pressure, whereas amperage is like the volume of water. If you increase pressure, then more water will flow through a given size pipe, but if you increase the size of the pipe, more water will flow at a lower pressure. An ammeter will read from a
negative amperage when the battery is providing most of the current thereby
depleting itself, to a positive amperage if most of the current is coming from
the charging system. If the battery is fully charged and there is minimal
electrical demand, then the ammeter should read close to zero, but should
always be on the positive side of zero. It is normal for the ammeter to read a
high positive amperage in order to recharge the battery after starting, but it
should taper off in a few minutes. If it continues to read more than 10 or 20
amps even though the lights, wipers, and other electrical devices are turned
off, you may have a weak battery and should have it checked.
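A minimal sketch of the voltmeter rules of thumb from the paragraphs above; the thresholds are the ones quoted in the text, and real specifications vary by vehicle.

def charging_system_status(volts, engine_running):
    """Interpret a dash voltmeter reading using the rough thresholds described above."""
    if not engine_running:
        return "battery OK (about 12.5 V expected)" if volts >= 12.5 else "battery low"
    if volts > 15.0:
        return "overcharging - suspect the voltage regulator"
    if volts < 12.5:
        return "battery is supplying current - have the charging system checked"
    if volts < 14.0:
        return "below normal - have the system checked"
    return "normal charging (14 to 14.5 V typical)"

print(charging_system_status(14.2, engine_running=True))   # normal charging
print(charging_system_status(12.1, engine_running=True))   # battery supplying current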
How Electronic Ignition System Works?
Introduction
“From a little spark may burst a flame,” wrote Dante Alighieri. Rightly said: a spark is required to start a flame, and in an automobile, since chemical energy (the air-fuel mixture) is converted into mechanical energy (crankshaft rotation), a spark is essential to initiate the combustion. But where does this spark come from? How are the timing of the spark and the prepared air-fuel mixture managed? Let's dig in.
In an internal combustion engine, combustion is a continuous cycle that occurs thousands of times a minute, so an effective and accurate source of ignition is required. The idea of spark ignition came from a toy electric pistol that used an electric spark to ignite a mixture of hydrogen and air to shoot a cork.
The electronic ignition system is a type of ignition system that uses an electronic circuit, usually transistors controlled by sensors, to generate electric pulses which in turn generate a better spark that can even burn a lean mixture and provide better economy and lower emissions.
Why Electronic Ignition System?
Several types of ignition systems have been used over the years:
1. Glow plug ignition system,
2. Magneto ignition system,
3. Electric coil or battery ignition system.
But all of these systems have their own limitations.
The glow plug ignition system is the oldest of all and is obsolete because of its many limitations. It has a problem of causing uncontrolled combustion due to the use of an electrode as the ignition source, which was solved later with the introduction of the magneto ignition system, in which the electrode is replaced by a spark plug. Unlike magneto ignition, the glow plug also produces high exhaust emissions due to incomplete combustion.
Magneto ignition system: This is the system introduced to overcome the limitations of the older ignition system, but it has its own limitations:
§ It depends on the engine speed, so it shows starting problems due to the low speed at engine start; this was later solved with the introduction of the battery coil ignition system, in which the battery becomes the energy source for the system.
§ It is more expensive than the electric coil ignition system.
§ Wear and tear is greater than in battery coil ignition because of the greater number of mechanical moving parts.
§ It can cause misfire due to leakage.
Electric coil or battery ignition system: This is the latest of the above and has been used for a long time due to its better efficiency and accuracy, but it also has some limitations:
§ Less efficient with high-speed engines.
§ High maintenance is required due to mechanical and electrical wear of the contact breaker points.
In modern automobiles, new technologies are being introduced, and it has been found that sensors and electronic components give more effective and accurate outputs than mechanical components. The use of sensors with an electronic control unit has therefore become essential to meet the needs of modern high-power, high-speed automobiles. The need for high performance, high mileage and greater reliability has led to the development of the electronic ignition system.
Electronic Ignition System Main Components
1. Battery
It is the powerhouse of the ignition system, as it supplies the necessary energy to the ignition system, the same as in the battery coil ignition system.
2. Ignition Switch
It is the switch in the ignition system that turns the system ON and OFF, the same as in the battery coil ignition system.
3. Ignition Control Module or Control Unit of the Ignition System
It is the brain, or programmed instruction, of the ignition system, which monitors and controls the timing and intensity of the spark automatically. It is the device that receives voltage signals from the armature and switches the primary coil ON and OFF. It can be placed separately outside the distributor or housed in the electronic control unit box of the vehicle.
4. Armature
The contact breaker points of the battery ignition system are replaced by an armature, which consists of a reluctor with teeth (the rotating part), a vacuum advance and a pickup coil (to catch the voltage signals). The electronic module receives the voltage signals from the armature in order to make and break the circuit, which in turn sets the timing of the distributor to accurately distribute current to the spark plugs.
5. Ignition Coil
As in the battery ignition system, an ignition coil is used in the electronic ignition system to produce the high voltage for the spark plug.
6. Ignition Distributor
As the name indicates, it is the device used to distribute the current to the spark plugs of a multi-cylinder engine.
7. Spark Plug
Spark plug is used to generate spark inside the cylinder.
Working of Electronic Ignition System
§ To understand the working of the electronic ignition system, consider the figure above, in which all the components mentioned above are connected in their working order.
§ When the driver switches ON the ignition switch in order to start the vehicle, current starts flowing from the battery through the ignition switch to the coil primary winding, and the armature pickup coil begins to receive and send voltage signals from the armature to the ignition module.
§ When a tooth of the rotating reluctor comes in front of the pickup coil, as shown in the figure, the voltage signal from the pickup coil is sent to the electronic module, which senses the signal and stops the current flow from the primary coil.
§ When the tooth of the rotating reluctor moves away from the pickup coil, the change in the voltage signal is sent by the pickup coil to the ignition module, and a timing circuit inside the ignition module turns the current flow back ON.
§ A magnetic field is generated in the ignition coil due to this continuous make and break of the circuit, which induces an EMF in the secondary winding, raising the voltage up to about 50,000 volts.
§ This high voltage is then sent to the distributor, which has a rotating rotor and distributor points set according to the ignition timing.
§ When the rotor comes in front of any of those distributor points, the voltage jumps through the air gap from the rotor to the distributor point and is sent to the adjacent spark plug terminal through the high-tension cable. A voltage difference is generated between the central electrode and the ground electrode, which produces a spark at the tip of the spark plug, and finally combustion takes place.
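The make-and-break sequence above can be summarized schematically. The sketch below is illustrative only; the function, the tooth count and the 1-3-4-2 firing order are example assumptions, not any manufacturer's logic.

def electronic_ignition_cycle(reluctor_teeth, firing_order):
    """Yield (cylinder, event) pairs: each tooth passing the pickup coil makes the
    module break the primary current, and the collapsing field fires the next
    cylinder in the firing order via the distributor."""
    for tooth in range(reluctor_teeth):
        cylinder = firing_order[tooth % len(firing_order)]
        yield cylinder, "primary current ON (magnetic field building)"
        yield cylinder, "tooth aligned with pickup: primary current OFF -> spark"

for cyl, event in electronic_ignition_cycle(4, [1, 3, 4, 2]):
    print("cylinder", cyl, "-", event)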
Three Types of Vehicle Ignition
Systems and How They Work
Vehicle ignition systems have evolved significantly over the years to deliver
improved, more reliable and more powerful performance. Today, there are
three primary ignition systems, and despite their differences in technology
and components, they all work on the same basic principles.
HOW IGNITION SYSTEMS WORK
All automotive ignition systems (except diesels) have to generate a spark strong enough to jump across the spark plug's gap. This is accomplished using an ignition coil consisting of two coils of wire wrapped around an iron core. The goal is to create an electromagnet by routing the battery's 12 volts through the primary coil. When the car ignition system turns off the power flow, the magnetic field collapses, and as it does, a secondary coil captures this collapsing magnetic field and converts it into 15,000 to 25,000 volts.
In order to generate maximum power from the air/fuel mixture, the spark
must fire at just the right moment during the compression stroke. Engineers
have used several methods to control spark timing. The early systems used
fully mechanical distributors. Next came hybrid distributors equipped with
solid-state switches and ignition control modules—essentially low-end
computers. Then, engineers designed fully electronic automotive ignition
systems, the first of which was a distributor-less style (DIS). Modern
automotive ignition systems are referred to as coil-on-plug (COP), which in
addition to improving spark timing, uses redesigned ignition coils that pack a
much bigger wallop and generate a hotter spark.
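A back-of-the-envelope way to picture the step-up is to multiply the voltage spike induced on the primary by the secondary-to-primary turns ratio. The spike value and ratio below are illustrative assumptions, not measured figures for any particular coil.

def secondary_voltage(primary_spike_v, turns_ratio):
    """Approximate secondary output as the primary spike times the turns ratio (assumption)."""
    return primary_spike_v * turns_ratio

# e.g. roughly a 250 V spike on the collapsing primary and a 100:1 turns ratio
print(secondary_voltage(250.0, 100.0))  # 25,000 V, in the 15,000 to 25,000 V range cited above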
THE DISTRIBUTOR AUTOMOTIVE IGNITION SYSTEM
A distributor-based automotive ignition system connects to the camshaft with
gears. In the fully mechanical distributor, the gears spin the main distributor
shaft. Inside, a set of “ignition points” rubs against a multi-sided cam on the
distributor shaft. The cam opens and closes the points; they act like a
mechanical switch that interrupts current flow. That’s what starts and stops
the flow of power to the ignition coil. Once the coil generates firing voltage,
it travels to the top of the coil and into the top of the distributor cap. There, a
rotating disc attached to the distributor shaft “distributes” the power to each
of the spark plug wires.
These early, fully mechanical distributor systems had their shortcomings. The
ignition points would break down and change spark timing, messing up
engine efficiency and requiring replacement as often as every 12,000 miles.
They also had to be set very precisely using a set of feeler gauges;
improperly-gapped points wouldn't work very efficiently.
The solution was to move from a fully mechanical distributor by
incorporating solid-state switches that didn’t wear out. Doing so increased
reliability, but the solid-state switches still took their marching orders from
the distributor shaft, which was still mechanically driven from the camshaft.
Distributor shafts would tend to develop a certain amount of "lash" or slop
after 120,000 or so miles. Since gear wear would always be an impediment to
proper spark timing, mechanical ignition systems had to evolve, and
beginning in the early ’80s, vehicle manufacturers began moving from the
mechanical distributor to a distributor-less automotive ignition system (DIS).
THE DISTRIBUTOR-LESS AUTOMOTIVE IGNITION
SYSTEM (DIS)
This system determines spark timing based on two shaft position sensors and
a computer. The Crankshaft Position Sensor (CKP) is mounted at the front of
the crankshaft, or near the flywheel on some vehicles, and the Camshaft
Position Sensor (CMP) is mounted near the end of the camshaft. These
sensors continually monitor both shafts’ positions and feed that information
into a computer.
The DIS also employs a different coil setup as compared to its predecessor.
Instead of asking a single coil to power all the cylinders, the DIS uses
multiple ignition coils called “coil packs," each generating spark for just two
cylinders. As a result, each coil can be “on” longer and develop a stronger
magnetic field—as much as 30,000 volts—and a stronger, hotter spark that was needed in order to ignite newer vehicles' leaner fuel mixtures.
COIL-ON-PLUG IGNITION SYSTEM
The coil-on-plug (COP) vehicle ignition system incorporates all the
electronic controls found in a DIS car ignition system, but instead of two
cylinders sharing a single coil, each COP coil services just one cylinder, and
has twice as much time to develop maximum magnetic field. As a result,
some COP car ignition systems generate as much as 40,000 to 50,000 volts
and much hotter, stronger sparks.
COP ignition systems have another big advantage over DIS ignition systems.
Since the coil mounts directly on top of the spark plug, spark plug cables are
eliminated because the firing voltage is delivered directly to the plug. Plug
cables mean greater resistance loss of amperage and voltage, as well as the
possibility of contamination and cross-firing between cables if they become
greasy or worn.
The coils in a COP ignition system can be damaged by degreasers and water
during engine cleaning so be sure each is wrapped in plastic and protected
before any under-hood cleaning begins.
Ignition systems will continue to improve with features that today are
unimaginable as technology advancements lead to continued improvements.
Even as they do, all three of these ignition system types are still well-suited to
the vehicle era they were originally intended for and easy to maintain and
repair.
The Evolution of Fuel Injection
Electronic fuel injection (EFI) is interesting tech, critical to the performance
of a modern engine. Originally hated as computer-controlled carburetors, EFI
gained reliability in the 1980s and picked up performance in the 1990s.
Today's impressively powerful vehicles are possible through direct fuel
injection. Here's a detailed look at how we got here and the evolution of EFI
over the years.
SOME BACKGROUND...

Fuel injection isn't new technology—it's been around almost as long as the
internal combustion engine and was used in aircraft engines as far back as
World War I. Chevrolet rolled out a mechanical-injection V8 in the late '50s,
but electronic fuel injection is an entirely different technology.
You probably already know the differences between carburetors and fuel
injection, and that they accomplish the same goal of adding fuel into an
engine. Carburetors, and 1960s mechanical injection, use precisely calibrated
mechanical parts to essentially dribble the fuel into the intake manifold,
which delivers the air/fuel mix to the combustion chamber. Electronic fuel
injection delivers much more accurate fuel delivery today, but it wasn't
always so efficient.
WHY CARBURETORS DIED OUT

Carburetors were the go-to fuel-delivery system for many decades. They
were cheap to manufacture and easily adapted to new engines and higher
power requirements. Many enthusiasts still prefer carburetors, due to
simplicity of tuning and troubleshooting. But back in the 1960s, the Los
Angeles skyline turned orange with smog created by vehicle exhaust (among
other sources). Concerns over air pollution led to emissions laws that
mandated manufacturers clean up their tailpipe emissions. Carburetors are
good for performance, but due to their imprecise nature, they can't make great
horsepower, get solid gas mileage, and pass an emissions test, all with the
same tune. Between the choke, the float bowl, accelerator pump, adjustment
screws, and other parts, carburetors also had many mechanical parts that
could become gummy over time. This means they were more maintenance-
intensive, with a carburetor rebuild often being part of a routine maintenance
schedule.
Manufacturers tried for a bit in the mid-1970s, but carbs became too complex
and expensive trying to meet all these demands with very few input sources.
A good example of this is the 1975 Corvette, with a carbureted 5.7L V8
making a sad 165 horsepower and only hitting 15 MPG highway. The
mid/late 1970s were dismal times for gearheads.
EARLY ELECTRONIC FUEL INJECTION

Late-'70s high tech means more than 8-track disco tapes. State-of-the-art
computers had assisted the Apollo moon missions and now were much
smaller. Small enough to carry in a car. Manufacturers looked to EFI to solve
their complex emissions problems. The first electronic fuel injection was
basically just computer-controlled carburetors attached to a few sensors, like
an oxygen sensor and throttle position sensor, all wired to an electronic
control unit. This transitional technology worked reasonably well, but was
complex and difficult to adjust and maintain as an engine aged and tolerances
loosened. Still, it was enough to meet government emissions standards for a
while before production switched over to electronic fuel injection.
Early EFI met the goals for emissions, but not for power or gas mileage, and
it was viewed as unreliable. An example of the day is a 1980 Corvette with
the California-spec LG4 5.0L V8. It made 180 horsepower and 16 MPG with
a computer-controlled Rochester Quadrajet carb. Not great, but a slight
improvement is still improvement.
SINGLE PORT INJECTION

Carbs were used for so long, manufacturers at first didn't seem to know how
to build anything else. Single port fuel injection debuted in the early 1980s
and looked like the familiar carburetor with a circular air cleaner on top of the
engine. Look closer, and you'll find one or two fuel injectors in the throttle
body, adding fuel to the air mix just before the intake manifold. Single port
injection took the place of the carb but delivered precise fuel delivery, thanks
to the computer-controlled injectors.
Gas mileage improved, but horsepower and torque left enthusiasts still
purchasing carb-equipped cars or illegally replacing the EFI with a carb that
they understood. Ironically, many enthusiasts today upgrade their classic
carbureted ride with a single-port EFI system. A common example of single-
port injection was Cadillac's digital injection, debuting on its high-end cars in
1980, or the 1985 Mustang 5.0L with central fuel injection. The 1982 and the
1984 Corvette featured cross-fire injection, and with it the 'vette finally made
respectable numbers again, with the '84 clocking in at 205 hp and 20 MPG.
Not huge, but certainly better than the emissions-choked carbureted cars of a
decade earlier.
MULTI-PORT INJECTION

As gas mileage improved in the 1980s, power and reliability started to come
back, too. This is where you started to see EFI components lasting over
100,000 miles. Part of that is thanks to multi-port EFI. This system uses
multiple injectors to add fuel to the air mix just before it goes into the
combustion chamber. Injectors usually spray the fuel before the intake valve,
so it has a better air/fuel mixture than single-port EFI. This type results in
better efficiency and performance. The one downside is complexity when
diagnosing EFI related problems, and the need to replace more fuel injectors
when they eventually wear out. To prevent deposit buildup on the valves,
look into a fuel system cleaner to keep your multi-port injection running
smoothly. MPFI swap kits are out there for the carb people who want to
convert but are rarer and more costly than single port kits. Tuned port
injection is a good example of this system, with the base 1990 Corvette 5.7L
V8 making 250 hp and 22 MPG.
DIRECT FUEL INJECTION

Computers got smaller and faster after the '90s, and the number of car sensors
increased as manufacturers all adopted OBD-II as an industry standard for
diagnostics, emissions, and performance. This allowed EFI to switch to a
diesel-like system for even more accurate fuel metering.
Rather than adding fuel to the air in the intake manifold, direct fuel injection
adds fuel to the combustion chamber, creating a couple of benefits. First, DFI
provides the most accurate and precise fuel metering of any EFI system yet
developed. Second, it allows higher compression ratios, for more power and
cleaner emissions. The fuel is injected at an extremely high pressure, in the
range of 500 to 3,000 psi. There is a second high-pressure fuel pump, usually
located on the engine right next to the fuel rail to deliver these high pressures
at exactly the right time. This is why we have vehicles like the 2018 Corvette.
Its 6.2L V8 makes 460 horsepower, and the car can hit 29 MPG. Owners say
30+ is easily achievable if you keep your foot out of it, but why would you?
This is a bigger engine, making three times the 1975 carb Corvette's power,
but it still gets twice the gas mileage, all with reduced emissions. That's an
impressive résumé, and EFI is the reason we can enjoy the amazing
performance cars of today.
Multi Point Fuel Injection (MPFI)
What is multi point fuel injection (MPFI) system?
The MPFI is a system or method of injecting fuel into an internal combustion engine through multiple ports situated at the intake valve of each cylinder. It delivers an exact quantity of fuel to each cylinder at the right time. There are three types of MPFI systems – batched, simultaneous and sequential.
In the batched MPFI system, fuel is injected to groups or batches of cylinders without timing the injection to each cylinder's intake stroke. In the simultaneous system, fuel is injected to all cylinders at the same time, while in the sequential system the injection is timed to coincide with the intake stroke of each cylinder, as sketched below.
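The three timing strategies can be compared schematically for a 4-cylinder engine with a 1-3-4-2 firing order. This is purely illustrative pseudologic under those assumptions, not a real ECU algorithm.

FIRING_ORDER = [1, 3, 4, 2]

def injection_events(mode):
    """Return (engine event, cylinders injected) pairs for one pass through the firing order."""
    if mode == "simultaneous":
        # all injectors fire together at every injection event
        return [("intake of cyl %d" % c, FIRING_ORDER) for c in FIRING_ORDER]
    if mode == "batched":
        # injectors fire in groups, not timed to each cylinder's own intake stroke
        return [("batch A", [1, 4]), ("batch B", [3, 2])]
    if mode == "sequential":
        # each injector fires individually, timed to its own intake stroke
        return [("intake of cyl %d" % c, [c]) for c in FIRING_ORDER]
    raise ValueError(mode)

for event, cylinders in injection_events("sequential"):
    print(event, "-> inject", cylinders)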
How does the fuel injection system work?


The MPFI system includes a fuel pressure regulator, fuel injectors, cylinders, a pressure spring and a control diaphragm. It uses multiple individual injectors to inject fuel into each cylinder through an intake port situated upstream of the cylinder's intake valve. The fuel pressure regulator, connected to the fuel rail by means of an inlet and outlet, directs the flow of the fuel, while the control diaphragm and pressure spring control the outlet valve opening and the amount of fuel that can return. The pressure in the intake manifold changes significantly with engine speed and load.
Advantages of multi point fuel injection system?
● The multi-point fuel injection technology improves fuel efficiency of
the vehicles. MPFI uses individual fuel injector for each cylinder, thus
there is no gas wastage over time. It reduces the fuel consumption and
makes the vehicle more efficient and economical.
● The vehicles with MPFI automobile technology have lower carbon
emissions than a few decades old vehicles. It reduces the emission of
the hazardous chemicals or smoke, released when fuel is burned. The
more precise fuel delivery cleans the exhaust and produces less toxic
byproducts. Therefore, the engine and the air remain cleaner.
● The MPFI system improves engine performance. It atomizes the fuel in a small tube at each intake port rather than relying on a single intake point, and it enhances cylinder-to-cylinder fuel distribution, which aids engine performance.
● It encourages distribution of more uniform air-fuel mixture to each
cylinder that reduces the power difference developed in individual
cylinder.
● The MPFI automobile technology improves the engine response during
sudden acceleration and deceleration.
● The MPFI engines vibrate less and don’t need to be cranked two or three times in cold weather.
● It improves functionality and durability of the engine components.
● The MPFI system encourages effective fuel utilization and distribution.
Other benefits
● Smooth operations and drivability
● Reliability
● Competent to accommodate alternative fuels
● Easy engine tuning
● Diagnostic capability
● Initial and maintenance cost
Different Types of Sensors used in Automobiles
At present, modern automobiles are designed using many different types of sensors. These are arranged in and around the car engine to recognize possible problems and help address them through repairs, servicing, etc. The sensors used in automobiles check the functioning of the vehicle. Most vehicle owners do not know how many sensors are used in their vehicles. There are several large sensor organizations worldwide that offer innovative solutions to customers. In recent automobiles, sensors are used for detecting as well as responding to changing conditions inside and outside the car, so that occupants can travel efficiently and safely. Using the data from these sensors, we can increase comfort, efficiency and safety.
Types of Sensors used in Automobiles
Automobile sensors are intelligent sensors that can be used to monitor and control things such as oil pressure, temperature, emission levels and coolant levels. There are different types of sensors used in automobiles, and knowing how they work is essential. To explain the function of these sensors, here are some popular sensors used in automobiles, which include the following.
· Mass airflow sensor
· Engine Speed Sensor
· Oxygen Sensor
· Spark Knock Sensor
· Coolant Sensor
· Manifold Absolute Pressure (MAP) Sensor
· Fuel Temperature Sensor
· Voltage sensor
· Camshaft Position Sensor
· Throttle Position Sensor
· Vehicle Speed Sensor
Mass Air Flow Sensor
The MAF or mass airflow sensor is one of the essential sensors used in automobiles. This sensor is fitted to the engine of the car. It is read by the engine computer, which uses it to calculate the density of the air entering the engine. If this sensor stops working, the vehicle can stall, and fuel consumption will rise. These sensors are classified into two types, namely vane meter and hot wire.
Engine Speed Sensor
The engine speed sensor in an automobile is connected to the crankshaft. The main purpose of this sensor is to monitor the crankshaft's rotating speed so that fuel injection and engine timing can be controlled. Without it, the engine could stop unexpectedly, so this sensor helps protect drivers against that.
Oxygen Sensor
Located in the exhaust stream, usually near the exhaust manifold and after the
catalytic converter, the oxygen sensor (or O2 sensor) monitors the content of
exhaust gases for the proportion of oxygen. The information is compared to
the oxygen content of ambient air and is used to detect whether the engine is
running a rich fuel ratio or a lean one. The engine computer uses this
information to determine fuel metering strategy and emission controls.
Spark Knock Sensor
The spark knock sensor is used to ensure that the fuel is burning smoothly; otherwise, unexpected ignition (knock) can occur. Such ignition is very dangerous and can cause damage in the engine of the car, such as damage to the rings, head gasket and rod bearings. Repairing these parts can be costly, so this sensor is used to avoid all that trouble.
Coolant Sensor
The coolant sensor is one of the most significant sensors used in automobiles, because the computer depends on its input to control many functions: for instance, turning the EFE (Early Fuel Evaporation) system on and off, spark retard and advance, EGR flow, and canister purge.
Generally, this sensor is mounted on the engine. If the sensor fails, there will be indications such as stalling, poor fuel mileage, etc. So the status of the sensor should be checked to see whether it is defective; if it is damaged, it will cause problems.
Manifold Absolute Pressure Sensor
The short form of manifold absolute pressure is MAP. The main function of this sensor in an automobile is to monitor engine load. It measures the difference between the manifold pressure and the outside (barometric) pressure, so that the engine computer can make sure the engine receives fuel appropriate to the changes in pressure.
Fuel Temperature Sensor
The fuel temperature sensor continually checks the temperature of the fuel to see whether fuel utilization is optimum. If the fuel is cold, it takes longer to burn due to its higher density; if the fuel is warm, it burns more quickly. The problem is the varying inflow, which can harm other parts of the automobile. This sensor helps ensure that fuel is injected at the right rate and temperature so that the engine works properly.
Voltage Sensor
The voltage sensor is another type of sensor used in automobiles. Its main function is to manage the car's speed and to make sure the speed is increased or decreased as required. So it is essential to have in your car.
Throttle Position Sensor
The throttle position sensor in an automobile is mainly used with feedback carburetion and electronic fuel injection (EFI). It informs the computer about the rate of throttle opening as well as the relative throttle position. This sensor is a variable resistor, whose resistance changes as the throttle opens.
It is not difficult to identify the symptoms of a faulty throttle position sensor: a stumble or hesitation while speeding up is the major sign. Note that when you replace this sensor, you cannot adjust it; it simply needs to be fitted correctly.
Vehicle Speed Sensor
As the name suggests, the VSS sensor verifies the speed of the car's wheels. It is a type of tachometer. This sensor is part of the anti-lock braking system, known as ABS. Additionally, the output of this sensor is also used by the odometer to read the vehicle speed and to control gear changes depending on the vehicle speed.
Thus, this is all about the different types of sensors used in automobiles. These sensors are smart systems used for monitoring different parameters such as coolant level, temperature, oil pressure and emission levels. These automobile sensors accept a wide variety of inputs and are designed to decide on and process them into an accurate result.
Oxygen Sensor Working and Applications
Nowadays, automobile engines are controlled using different types of sensors. These sensors control the performance and emissions of an engine. When a sensor doesn't provide accurate data, many problems occur, such as drivability issues, increased fuel usage and emission test failure. One of the essential sensors used in automobiles is the oxygen sensor; the chemical formula of oxygen is O2. The first automotive oxygen sensor was introduced in 1976 on the Volvo 240. By 1980, automobiles in California used these sensors for lower emissions.
What is an Oxygen Sensor?
An oxygen sensor is a sensor located in the exhaust system of an automobile. The size and shape of this sensor look like a spark plug. Based on its arrangement with regard to the catalytic converter, the sensor can be placed before (upstream of) or after (downstream of) the converter. Most automobiles designed after 1990 include both upstream and downstream O2 sensors.
Typically, one sensor is arranged in front of the catalytic converter and one is arranged in every exhaust manifold of the automobile. The maximum number of these sensors in a car mainly depends on the engine, model and year, but most vehicles have four sensors.
Working Principle
The working principle of the O2 sensor is to check the amount of oxygen within the exhaust. This oxygen was originally added with the fuel for good combustion. The sensor communicates by means of a voltage signal, so the oxygen status in the exhaust is determined by the computer of the car.
The computer regulates the mixture of fuel and oxygen delivered to the car engine. Arranging sensors before and after the catalytic converter makes it possible to maintain clean exhaust and to check the converter's efficiency.
Types of Oxygen Sensors


Oxygen sensors are classified into two types, namely binary exhaust gas and universal exhaust gas.
1). Binary Exhaust Gas Oxygen Sensor
The binary sensor gives a transition in electric voltage at about 350 °C based on the oxygen level in the exhaust. It compares the residual oxygen content in the exhaust with the oxygen level of the ambient air and recognizes the change from a lack of air to an excess of air and vice versa.
2). Universal Exhaust Gas Oxygen Sensor
This sensor is very precise when measuring ratios of lack or excess of air or fuel. It has a wider measuring range and is also suitable for use in gasoline and diesel engines.
Signs of a Faulty Sensor
A faulty sensor can be identified by the following signs:
· Failure to pass the emissions test
· Decreased fuel mileage
· The check engine light comes on
· Poor performance, stalling and rough idling
· A code reader reporting a sensor failure
Hot Wire Anemometer
Definition: The hot wire anemometer is a device used for measuring the velocity and direction of a fluid. This is done by measuring the heat loss of a wire placed in the fluid stream; the wire is heated by an electrical current.
When the hot wire is placed in the stream of the fluid, heat is transferred from the wire to the fluid, and hence the temperature of the wire falls. The change in the resistance of the wire is a measure of the flow rate of the fluid.
The hot wire anemometer is used as a research tool in fluid mechanics. It works on the principle of transfer of heat from high temperature to low temperature.
Construction of Hot Wire Anemometer
The hot wire anemometer consists of two main parts:
1. Conducting wire
2. Wheatstone bridge
The conducting wire is housed inside a ceramic body. The wires are brought out from the ceramic body and connected to the Wheatstone bridge. The Wheatstone bridge measures the variation of resistance.
Constant Current Method
In the constant current method, the anemometer is placed in the stream of the fluid whose flow rate needs to be measured. A current of constant magnitude is passed through the wire, and the Wheatstone bridge is kept at a constant voltage.
When the wire is placed in the stream of liquid, heat is transferred from the wire to the fluid, and the temperature, and hence the resistance, of the wire changes. The Wheatstone bridge measures this variation in resistance, which corresponds to the flow rate of the liquid.
Constant Temperature Method
In this arrangement, the wire is heated by an electric current. When the hot wire is placed in the fluid stream, heat is transferred from the wire to the fluid, so the temperature of the wire changes, which also changes its resistance. This method works on the principle that the temperature of the wire is held constant: the total current required to bring the wire back to its initial condition is a measure of the flow rate of the gas.
Measurement of the flow rate of a fluid using a Hot Wire Instrument
In the hot wire anemometer, heat is transferred electrically to the wire placed in the fluid stream. The Wheatstone bridge is used for measuring the temperature of the wire in terms of its resistance. The temperature of the wire is kept constant while measuring the heating current, so the bridge remains balanced.
A standard resistor is connected in series with the heating wire. The current through the wire is determined by measuring the voltage drop across the resistor, and the value of that voltage drop is determined by a potentiometer.
The heat loss from the heated wire is given by
H = a + b√(ρv)
where v is the velocity of the fluid flow, ρ is the density of the fluid, and a and b are constants whose values depend on the dimensions and physical properties of the fluid and the wire.
Suppose I is the current through the wire and R is its resistance. In the equilibrium condition,
Heat generated = Heat lost, that is, I²R = a + b√(ρv)
The resistance and temperature of the instrument are kept constant, and the flow rate of the fluid is found by measuring the current I.
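Solving the equilibrium relation for the velocity gives the basis of the constant-temperature measurement. The calibration constants and the numbers below are illustrative assumptions only.

def fluid_velocity(current_a, resistance_ohm, a, b, rho):
    """Invert I^2*R = a + b*sqrt(rho*v) to get v = ((I^2*R - a) / b)^2 / rho."""
    heat = current_a ** 2 * resistance_ohm  # heat generated = heat lost at equilibrium
    return ((heat - a) / b) ** 2 / rho

# hypothetical calibration values, with air density of about 1.2 kg/m^3
print(fluid_velocity(current_a=0.15, resistance_ohm=10.0, a=0.05, b=0.02, rho=1.2))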
VEHICLE SPEED SENSOR (VSS)
General description
VSS gives the onboard computer information about the vehicle speed.
The sensor operates on the principle of the Hall Effect and is usually mounted
on the tachometer or in the gearbox.
Appearance
Fig. 1 shows typical speed sensors.

Fig. 1
Used types of sensors
· Speed sensors based on the Hall effect
· Speed sensors with mechanical tenon
· Inductive speed sensors
Working principle of different types of VSS
- With Hall Effect
VSS is supplied with +12V from the ignition key. When the
tachometer speed cable rotates, the Hall switch is turned on and off
consecutively, sending a rectangular signal to the onboard computer. The
frequency of this signal indicates the speed of the car.
- Speed sensor with mechanical tenon
The signal from the rotating drive wheel has a rectangular form. The signal voltage varies from 0 V to +5 V, or from 0 V to a value close to the nominal voltage of the car battery. The pulse duty cycle is between 40% and 60%.
- Inductive speed sensor
The signal from the rotating drive wheel has a sinusoidal form (alternating current). The signal changes depending on the speed of the wheels, as with every inductive sensor, for example the ABS sensor.
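Whatever the sensor type, the onboard computer turns pulse frequency into road speed using the wheel geometry. A minimal sketch with assumed pulses-per-revolution and tire circumference (real values are vehicle specific):

def road_speed_kmh(pulse_frequency_hz, pulses_per_wheel_rev, tire_circumference_m):
    """Speed = wheel revolutions per second x rolling circumference, converted to km/h."""
    wheel_rev_per_s = pulse_frequency_hz / pulses_per_wheel_rev
    return wheel_rev_per_s * tire_circumference_m * 3.6

print(road_speed_kmh(40.0, 4, 1.9))  # about 68 km/h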
Procedure for verification of functionality of the VSS sensor
NOTE: This algorithm describes how to check the most common VSS
sensor, the Hall Effect type.
· VSS is usually mounted in the gearbox.
· Check the VSS connector for corrosion or mechanical damages.
· Make sure connector pins are firmly fit in their places and whether
they make good contact to the VSS sensor.
· Pull off the protective rubber muff from the VSS sensor connector.
· Find the power supply, the ground and the signal terminals.
· Connect the oscilloscope GND probe to the chassis ground.
· Connect the active end of the oscilloscope probe to the signal
terminal of the VSS.
· Signal is generated when the drive wheels of the car are spinning.
This can be achieved in the following ways:
o Push the car forward.
o Lift the car at stand or jack so that the drive wheels can rotate freely.
o Rotate the wheels by hand to get impulses.
o On the oscilloscope screen you must observe the following signal (fig. 2).
Fig. 2
Possible damages:
Interruptions or lack of signal - voltage/duty cycle
· Disconnect the VSS sensor connector and turn on the ignition key.
· Attach the oscilloscope probe to the signal terminal and measure the
voltage. Its value should be from 8.8 to 10V.
· Also check the voltage of the power supply terminal. Its value
should be lower than the nominal of the car battery.
· Check the GND connection of VSS sensor.
· If everything is normal, probably the fault is in the VSS sensor or the
speed cable does not rotate because it is broken or the gearbox is
damaged.
Lack of signal voltage
Check the voltage at the onboard computer connector terminals:
· If voltage of the onboard computer is normal, check the signal circuit
conductivity and the diode in the circuit between the onboard
computer and the VSS sensor.
· If voltage of the onboard computer is missing, check all power
supply voltages and all connections to GND of the onboard computer.
If there is no problem, doubt about a damage of the onboard computer
remains.
Accelerometers:
Before you can understand accelerometers, you really need to understand
acceleration—so let's have a quick recap. If you have a car that accelerates
from a standstill to a speed (or, strictly speaking, velocity) of 100km/h in 5
seconds, the acceleration is the change in velocity or speed divided by the
time—so 100/5 or 20 km/h per second. In other words, each second the car is
driving, it adds another 20km/h to its speed. If you're sitting inside this car,
you could measure the acceleration using a stopwatch and the car's
speedometer. Simply read the speedometer after 5 seconds, divide the reading
by 5, and you get the acceleration.
But what if you want to know the acceleration moment by moment, without
waiting for a certain time to elapse? If you know about the laws of motion,
you'll know that the brilliant English scientist Isaac Newton defined
acceleration in a different way by relating it to mass and force. If you have a
certain force (say, the power in your leg as you kick it outward) and you
apply it to a mass (a soccer ball), you'll make the mass accelerate—the ball
will shoot off into the air.
Newton's second law of motion relates force, mass, and acceleration through
this very simple equation:
Force = mass x acceleration
or...
F=ma
or...
a=F/m
In other words, acceleration is the amount of force we need to move each unit
of mass. Looking at this equation, you can see why soccer balls work the way
they do: the harder you kick (the more the force), or the lighter the ball (the
less the mass), the more acceleration you'll produce—and the faster the ball
will fly through the sky.
You can also see we now have a second way of calculating acceleration that
doesn't involve distance, speed, or time. If we can measure the force that's
acting on something and also its mass, we can figure out its acceleration
simply by dividing the force by the mass. No need to measure speed or time
at all!
How do accelerometers work?
This equation is the theory behind accelerometers: they measure acceleration
not by calculating how speed changes over time but by measuring force. How
do they do that? Generally speaking, by sensing how much a mass presses on
something when a force acts on it.
This is something we're all very familiar with when we're in cars. Imagine
you're sitting in the back seat of a car, happily minding your own business,
and the driver accelerates suddenly to pass a slow-moving truck. You feel
yourself thumping back into the seat. Why? Because the car's acceleration
makes it move forward suddenly. You might think you move backward when
a car accelerates forward, but that's an illusion: really what you experience is
the car trying to move off without you and your seat catching you up from
behind!
The laws of motion tell us that your body tries to keep going at a steady
speed, but the seat is constantly pushing into you with a force and making
you accelerate instead. The more the car accelerates, the more force you feel
from your seat—and you really can feel it! Your brain and body work
together to make a reasonably effective accelerometer: the more force your
body experiences, the more acceleration your brain registers from the
difference between your body's movements and those of the car. (And it picks
up useful clues from other sensations, including the rate at which moving
objects pass by the window, the change in sound of the car's engine, the noise
of the air rushing past, and so on.) Moment by moment, you sense changes in
acceleration from changes in sensations on your body, not by calculating how
far you've traveled and how long it took.
And accelerometers work in broadly the same way.
Types of accelerometers
There are many different types of accelerometers. The mechanical ones are a
bit like scaled-down versions of passengers sitting in cars shifting back and
forth as forces act on them. They have something like a mass attached to a
spring suspended inside an outer casing. When they accelerate, the casing
moves off immediately but the mass lags behind and the spring stretches with
a force that corresponds to the acceleration. The distance the spring stretches
(which is proportional to the stretching force) can be used to measure the
force and the acceleration in a variety of different ways. Seismometers (used
to measure earthquakes) work in broadly this way, using pens on heavy
masses attached to springs to register earthquake forces. When an earthquake
strikes, it shakes the seismometer cabinet but the pen (attached to a mass)
takes longer to move, so it leaves a jerky trace on a paper chart.
Artwork: The basic concept of a mechanical accelerometer: as the gray accelerometer box moves from side to side, the mass (red blob) is briefly left behind. But the spring connecting it to the box (red zig-zag) soon pulls it back into position and, as it moves, it draws a trace (blue line) on the paper.
Alternative designs of accelerometers measure force not by making a pen
trace on paper but by generating electrical or magnetic signals. In
piezoresistive accelerometers, the mass is attached to a potentiometer
(variable resistor), a bit like a volume control, which turns an electric current
up or down according to the size of the force acting on it. Capacitors can also
be used in accelerometers to measure force in a similar way: if a moving
mass alters the distance between two metal plates, measuring the change in
their capacitance gives a measurement of the force that's acting.
Artwork: The broad concept of a capacitive accelerometer: as the gray accelerometer box moves to the right, the red mass is left behind and pushes the blue metal plates closer together, changing their capacitance in a measurable way.
In some accelerometers, piezoelectric crystals such as quartz do the clever
work. You have a crystal attached to a mass, so when the accelerometer
moves, the mass squeezes the crystal and generates a tiny electric voltage.
Artwork: The basic concept of a piezoelectric accelerometer: as the gray
accelerometer box moves right, the mass squeezes the blue piezoelectric
crystal (very exaggerated in this picture), which generates a voltage. The
bigger the acceleration, the bigger the force, and the greater the current that
flows (blue arrows).
In Hall-effect accelerometers, force and acceleration are measured by sensing
tiny changes in a magnetic field.
How does an accelerometer chip work?
The accelerometers you can find inside cellphones clearly don't have gigantic
masses bouncing up and down on springs—you'd never fit something so big
and clumsy inside a phone! Instead, cellphone accelerometers are based on
tiny microchips with all their components chemically etched onto the surface
of a piece of silicon.
Here's a very simplified illustration of what you'd find in one of these
semiconductor accelerometers, as they're called:
1. Right in the middle, we have a red electrode (electrical terminal)
that has enough mass to move up and down very slightly when you
move or tilt the accelerometer.
2. The electrode is supported by a tiny beam (cantilever) that's rigid
enough to hold it but flexible enough to allow it to move.
3. There's an electrical connection from the cantilever and electrode
to the outside of the chip so it can be wired into a circuit.
4. Below the red electrode, and separated from it by an air gap,
there's a second electrode (purple). The air gap between the two
electrodes means the red and purple electrodes work together as a
capacitor. As you move the accelerometer and the red electrode
moves up and down, the distance between the red and purple
electrodes changes, and so does the capacitance between them.
We're talking about amazingly tiny distances here of a few
millionths of a meter (µm, microns). Small bits of insulation (shown
as black lines) prevent the red electrode from making direct
electrical contact with the purple one if the accelerometer
experiences a really big force (a sudden jolt).
5. In exactly the same way, there's a blue electrode above the red
electrode and another air gap making a second capacitor. As before,
the distance between the blue and red electrodes (and the
capacitance between them) changes as you move the accelerometer.
6. The electrodes are connected to more electrical terminals at the
edges of the chip, again, so it can be wired to a bigger circuit.
Now the amazing thing about capacitors like this is that they're fabricated on
single, microscopic chips, with the various different electrodes and
electrically conducting layers made from different types of silicon (n-type
and p-type, if you're familiar with how silicon is "doped" to make
components such as transistors and diodes).
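The two air-gap capacitors described above are typically read out differentially: when the central electrode moves, one capacitance rises while the other falls, and the normalized difference is proportional to the displacement. The sketch below illustrates that idea with assumed geometry; it is not the readout circuit of any particular chip.

```python
# Sketch of a differential capacitive readout, as used conceptually in
# MEMS accelerometer chips. Geometry values are assumed for illustration.

EPSILON_0 = 8.854e-12   # F/m
AREA = 0.5e-6           # m^2, assumed overlap area of each electrode pair
GAP = 1.5e-6            # m, assumed nominal gap above and below the movable electrode

def differential_output(displacement_m: float) -> float:
    """Return the normalized differential capacitance
    (C_lower - C_upper) / (C_lower + C_upper).

    For a parallel-plate geometry this ratio reduces to displacement / GAP,
    so it is a convenient linear measure of how far the electrode has shifted."""
    c_upper = EPSILON_0 * AREA / (GAP + displacement_m)  # electrode moves down: upper gap grows
    c_lower = EPSILON_0 * AREA / (GAP - displacement_m)  # lower gap shrinks
    return (c_lower - c_upper) / (c_lower + c_upper)

if __name__ == "__main__":
    for x_nm in (0, 10, 50, 100):
        print(x_nm, "nm ->", round(differential_output(x_nm * 1e-9), 4))
```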
Crankshaft position sensor
The crankshaft position sensor measures the rotation speed (RPMs) and the
precise position of the engine crankshaft. Without a crankshaft position
sensor the engine wouldn't start.
In some cars, the sensor is installed close to the main pulley (harmonic
balancer) like in this Ford in the photo. In other cars, the sensor could be
installed at the transmission bell housing, or in the engine cylinder block, as
in the photo below. In the technical literature, the crankshaft position sensor
is abbreviated to CKP.
How the crankshaft position sensor works

In this GM engine, the crankshaft position sensor is installed at the cylinder block.
The crankshaft position sensor is positioned so that teeth on the reluctor ring
attached to the crankshaft pass close to the sensor tip. The reluctor ring has
one or more teeth missing to provide the engine computer (PCM) with the
reference point to the crankshaft position.

As the crankshaft rotates, the sensor produces a pulsed voltage signal, where each pulse corresponds to a tooth on the reluctor ring. The photo below shows the actual signal from the crankshaft position sensor with the engine idling. In this vehicle, the reluctor ring is made with two missing teeth, as can be seen on the graph.

The PCM uses the signal from the crankshaft position sensor to determine at
what time to produce the spark and in which cylinder. The signal from the
crankshaft position is also used to monitor if any of the cylinders misfires.
Crankshaft position sensor signal on the oscilloscope screen.

If the signal from the sensor is missing, there will be no spark and fuel
injectors won't operate.
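As an illustration of what the PCM does with this pulse train, the hedged sketch below takes a list of pulse timestamps from a hypothetical 60-2 reluctor wheel (60 tooth positions, two of them missing), locates the reference gap, and estimates engine RPM from the tooth-to-tooth period. Real PCM firmware is far more involved; this shows only the basic arithmetic.

```python
# Sketch: estimating engine RPM and locating the missing-tooth reference gap
# from crankshaft-sensor pulse timestamps. Assumes a hypothetical 60-2 wheel.

TOOTH_POSITIONS = 60          # assumed: 58 physical teeth plus 2 missing

def analyze_pulses(timestamps_s):
    """timestamps_s: rising-edge times of successive tooth pulses, in seconds."""
    periods = [t2 - t1 for t1, t2 in zip(timestamps_s, timestamps_s[1:])]
    normal = min(periods)                     # shortest interval ~ one tooth pitch
    # The reference gap shows up as an interval much longer than one pitch.
    gap_index = max(range(len(periods)), key=lambda i: periods[i])
    # One crank revolution spans TOOTH_POSITIONS tooth pitches.
    rev_time = normal * TOOTH_POSITIONS
    rpm = 60.0 / rev_time
    return rpm, gap_index

if __name__ == "__main__":
    # Fake data: 1 ms per tooth (=> 1000 RPM on a 60-2 wheel), with the
    # missing-tooth gap appearing as a single 3 ms interval.
    times, t = [], 0.0
    for i in range(20):
        times.append(t)
        t += 0.003 if i == 9 else 0.001
    print(analyze_pulses(times))   # ~1000 RPM, gap at index 9
```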

The two most common types are the magnetic sensors with a pick-up coil that produce an AC voltage and the Hall-effect sensors that produce a digital square-wave signal as in the photo above. Modern cars use Hall-effect sensors. A pick-up coil type sensor has a two-pin connector. The Hall-effect sensor has a three-pin connector (reference voltage, ground and signal).
Symptoms of a failing crankshaft position sensor
A failing sensor can cause intermittent problems: a car may cut out or stall randomly, but then restart with no problems. The engine might have trouble starting in wet weather, but start OK afterward. Sometimes you might see the RPM gauge behaving erratically. In some cases, a failing sensor can cause a long crank time before the engine starts. If the sensor is bad, the engine will crank but won't start.
Crankshaft position sensor problems
The most common OBDII code related to the crankshaft position sensor is
P0335 - Crankshaft Position Sensor "A" Circuit. In some cars (e.g.
Mercedes-Benz, Nissan, Chevy, Hyundai, Kia) this code is often caused by a
failed sensor itself, although there could be other reasons, such as wiring or
connector issues, damaged reluctor ring, etc.

In some cars, intermittent stalling can also be caused by a problem with the crankshaft position sensor wiring. For example, if the sensor wires are not secured properly, they could rub against a metal part and short out, which can cause intermittent stalling.

The Chrysler bulletin 09-004-07 describes a problem with some 2005-2007 Jeep and Chrysler models where a failed crankshaft position sensor can cause a no-start problem. The sensor will need to be replaced with an updated part to correct the problem.

Another Chrysler bulletin, 18-024-10, for some 2008-2010 Chrysler, Dodge and Jeep vehicles mentions a problem where the code P0339 - Crankshaft Position Sensor Intermittent can be caused by an improper gap or a bad flexplate.

Failures of the crankshaft position sensor were common in some 1990s GM cars. One of the symptoms was stalling when the engine was hot. Replacing the crankshaft position sensor usually solved the problem.
How the crankshaft position sensor is tested

The resistance of this crankshaft position sensor from the 2008 Ford Escape measures 285.6 ohms, which is within specifications.

Whenever there is a suspicion that the problem might be caused by a crankshaft position sensor, or if there is a related trouble code, the sensor must be visually inspected for cracks, loose or corroded connector pins, or other obvious damage. The proper gap between the tip of the sensor and the reluctor ring is also very important.

The correct testing procedure can be found in the service manual.

For the pick-up coil type sensors, the testing procedure includes checking the
resistance. For example, for the 2008 Ford Escape, the resistance of the
crankshaft position sensor (CKP) should be between 250-1,000 ohms,
according to Autozone. We measured 285.6 ohms (in the photo) which is
within specifications. If the resistance is lower or higher than specified, the
sensor must be replaced.
For the Hall-type sensors, the reference voltage (typically +5V) and the ground signal must be tested. The most accurate way to test a crankshaft position sensor is to check the sensor signal with an oscilloscope. Sometimes, the sensor may have an intermittent fault that is not present during testing. In this case, checking for Technical Service Bulletins (TSBs) and researching common problems may help. The crankshaft position sensor can also be checked with a scan tool, which will show the sensor signal as "Engine RPM" or "Engine speed." When could this be helpful? If a car stalls intermittently, monitoring the sensor signal can provide the answer: if the sensor signal suddenly drops to zero and then comes back, it means there is either a problem inside the sensor or with the sensor wiring or connector.
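The two simple checks described above, comparing the pick-up coil resistance against the published range and watching the scan-tool RPM reading for sudden drops to zero, can be expressed as a short sketch. The 250 to 1,000 ohm window below is the example spec cited above for the 2008 Ford Escape; for any other vehicle the service manual value applies.

```python
# Sketch of the two simple crankshaft-sensor checks discussed above.

def resistance_ok(measured_ohms: float,
                  low: float = 250.0, high: float = 1000.0) -> bool:
    """Pick-up coil test: pass if the measured resistance falls inside the
    specified window (250-1,000 ohms is the example spec cited above)."""
    return low <= measured_ohms <= high

def find_signal_dropouts(rpm_samples):
    """Scan-tool test: flag samples where engine RPM suddenly drops to zero
    and then recovers, which points at the sensor, its wiring, or connector."""
    dropouts = []
    for i in range(1, len(rpm_samples) - 1):
        if rpm_samples[i] == 0 and rpm_samples[i - 1] > 0 and rpm_samples[i + 1] > 0:
            dropouts.append(i)
    return dropouts

if __name__ == "__main__":
    print(resistance_ok(285.6))                      # True: within spec
    print(find_signal_dropouts([750, 760, 0, 745]))  # [2]: intermittent dropout
```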
Microcontroller vs Microprocessor
Selecting the right device on which to base your new design can be daunting. The need to strike the right balance of price, performance, and power consumption has many implications. First, there are the immediate technology considerations for the design you are about to embark on. However, if the microcontroller (MCU) or microprocessor (MPU) becomes the basis of a platform approach, the decision can have long-lasting consequences. The difference between a microprocessor and a microcontroller therefore becomes an important consideration at this point.

Microcontroller vs Microprocessor: Primary Differences


Typically, an MCU uses on-chip embedded Flash memory in which to store and execute its program. Storing the program this way means the MCU has a shorter start-up period and can execute code quickly. The only practical limitation to using embedded memory is that the total available memory space is finite. Most Flash MCU devices available on the market have a maximum of 2 Mbytes of program memory, which may prove to be a limiting factor, depending on the application.

MPUs do not have memory constraints in the same way. They use external
memory to provide program and data storage. The program is typically stored
in non-volatile memory, such as NAND or serial Flash. At start-up, this is
loaded into an external DRAM and execution commences. This means the
MPU will not be up and running as quickly as an MCU but the amount of
DRAM and NVM you can connect to the processor is in the range of
hundreds of Mbytes and even Gbytes for NAND.

Another difference is power. By embedding its own power supply, an MCU needs just a single voltage rail. By comparison, an MPU requires several different voltage rails for the core, DDR, and so on, and the developer needs to cater for this with additional power ICs/converters on the board.

Difference Between Microprocessor and Microcontroller: Application Perspective
From the application perspective, some aspects of the design specification might drive device selection in particular ways. For example, is the number of peripheral interface channels required more than an MCU can cater for? Or does the marketing specification stipulate a user interface capability that will not be possible with an MCU, either because it does not contain enough on-chip memory or because it lacks the required performance?

When embarking on the first design, it is highly likely that there will be many product variations. In that case, a platform-based design approach will very possibly be preferred. This would stipulate more “headroom” in terms of processing power and interface capabilities in order to accommodate future feature upgrades.

Some measurement parameters
An attribute that is difficult to determine is the processing performance a given design will require. Processing power, measured in terms of Dhrystone MIPS (DMIPS), helps quantify this requirement. The comparison below illustrates the difference between a microprocessor and a microcontroller in these terms.
Difference between Microprocessor and Microcontroller
For example, an ARM Cortex-M4-based microcontroller such as Atmel’s
SAM4 MCU is rated at 150 DMIPS. Whereas an ARM Cortex-A5
application processor (MPU) such as Atmel’s SAMA5D3 can deliver up to
850 DMIPS. One way of estimating the DMIPS required is by looking at the performance-hungry parts of the application.

Running a full operating system (OS), such as Linux, Android or Windows CE, for your application would demand at least 300-400 DMIPS. For many applications, a straightforward RTOS might suffice, and an allowance of 50 DMIPS would be more than adequate. Using an RTOS also has the benefit that it requires little memory space; a kernel of just a few kB is typical. Unfortunately, a full OS demands a memory management unit (MMU) in order to run; this in turn specifies the type of processor core to be used and requires more processor capability.
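One hedged way to turn the DMIPS figures above into a selection aid is to add up rough DMIPS allowances for the performance-hungry parts of the application and compare the total, plus headroom, against candidate devices. The allowances and device ratings below simply reuse the ballpark numbers quoted in the text; they are planning estimates, not benchmarks, and the task breakdown is an assumption.

```python
# Sketch: rough DMIPS budgeting for MCU vs MPU selection, reusing the
# ballpark figures quoted above. These are planning estimates only.

def required_dmips(tasks, headroom=1.5):
    """Sum per-task DMIPS allowances and apply a headroom factor to leave
    room for future feature upgrades (the platform approach)."""
    return sum(tasks.values()) * headroom

if __name__ == "__main__":
    tasks = {
        "Full OS (Linux) baseline":   350,   # mid-point of the 300-400 DMIPS quoted above
        "User interface rendering":   150,   # assumed allowance
        "Application/control logic":   50,   # assumed allowance
    }
    need = required_dmips(tasks)
    candidates = {"Cortex-M4 class MCU (~150 DMIPS)": 150,
                  "Cortex-A5 class MPU (~850 DMIPS)": 850}
    print(f"Required with headroom: {need:.0f} DMIPS")
    for name, rating in candidates.items():
        verdict = "sufficient" if rating >= need else "insufficient"
        print(f"  {name}: {verdict}")
```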
What is Keyless Entry and How Does it
Work?
Keyless entry systems allow you to unlock and lock the doors to your vehicle
without using a key. Most modern US vehicles are equipped with a basic
keyless entry system that includes a short-range remote transmitter.
How does keyless entry work?
Keyless entry to a vehicle is most commonly gained by sending a radio
frequency signal from a remote transmitter to a control module/receiver in the
vehicle. This radio frequency signal, or RF for short, is sent as an encrypted
data stream directly to the car.
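Actual keyless-entry protocols are proprietary, but the general idea of an encrypted, non-replayable command can be illustrated with a rolling-code style sketch: the fob authenticates each command together with an incrementing counter, and the vehicle accepts it only if the code verifies and the counter has moved forward within a small window. The key, window size, and message layout below are invented purely for illustration and do not describe any manufacturer's actual system.

```python
# Illustrative rolling-code idea (NOT any manufacturer's actual protocol):
# each transmission carries a counter and a MAC; the receiver accepts it only
# if the MAC verifies and the counter moves forward within a small window.
import hmac, hashlib

SHARED_KEY = b"demo-key-paired-at-factory"   # invented value for illustration
WINDOW = 16                                  # how far ahead the counter may jump

def fob_transmit(command: str, counter: int) -> tuple:
    msg = f"{command}:{counter}".encode()
    mac = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return command, counter, mac

def vehicle_receive(packet, last_counter: int):
    command, counter, mac = packet
    expected = hmac.new(SHARED_KEY, f"{command}:{counter}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False, last_counter            # forged or corrupted packet
    if not (last_counter < counter <= last_counter + WINDOW):
        return False, last_counter            # replayed or out-of-sync packet
    return True, counter                      # accept and remember the counter

if __name__ == "__main__":
    ok, state = vehicle_receive(fob_transmit("UNLOCK", 101), last_counter=100)
    print(ok)                                 # True: fresh, authentic command
    ok, state = vehicle_receive(fob_transmit("UNLOCK", 101), last_counter=state)
    print(ok)                                 # False: a replayed packet is rejected
```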
There is also another type of keyless entry that allows you access to your vehicle without even having to press a button: you just walk within about five feet of your vehicle and the doors will unlock. Wouldn’t that be nice? One sleek product that helps you accomplish this is called the EZ GO. You simply put it in your pocket or place it on your key ring, and whenever you walk within range of your vehicle your doors will unlock.
OEM vs Aftermarket Keyless Entry
Many OEM/factory keyless entry systems are 1-way systems with limited
range as mentioned above. These 1-way key fobs allow you to send a
command to your vehicle, but you will not receive a confirmation that your
command was actually successful. With a 2-way aftermarket solution such as
the T11, you can be sitting in your home or office and send a lock command
to your car and you will receive a confirmation that your vehicle was actually
locked.
One of the cool features with some 2-way remotes, like the T11, is the ability
to check your vehicle’s status. Imagine getting into your work or office and
trying to remember if you locked your car. Well, with a 2-way remote, you
can press a button to verify that your vehicle was actually locked.
I briefly mentioned a solution that has virtually no limit regarding range:
DroneMobile. DroneMobile allows you to connect your smart phone to your
vehicle. This has many advantages including using a cellular network to send
and receive commands to and from your vehicle with virtually no range limit.
Other features include, but are not limited to: vehicle tracking, geo tracking,
security alerts, remote start and keyless entry. One bonus feature with
DroneMobile: if you ever lock your keys in your vehicle you can unlock it
with your smartphone.
What components do you need to implement keyless entry on your
vehicle?
If you are looking to implement a keyless entry system on your vehicle,
regardless of whether you already have an OEM solution, you will need to
get a system installed into your vehicle.
Compustar systems are sold with the added benefit of remote start and/or
security to keep you and your vehicle comfortable and safe. At this time,
Compustar does not offer systems that only add keyless entry, but make sure
to contact your Local Authorized Dealer to confirm solutions available for
your specific vehicle.
The advantage of a Compustar keyless entry system is that they are 100%
upgradeable. If you ever decide that you want more range out of your keyless
entry system, or you want to add 2-way confirmation, simply revisit your
authorized Compustar dealer and they will swap out and reprogram your
remotes. No replacement of your internal system is required!
As you’re shopping for a keyless entry system, don’t overlook the importance
of hiring a professional installer. Installers can not only assist you in finding
the perfect system for your needs, but they can guarantee a safe and clean
install. Contact an Authorized Dealer today to schedule your installation
appointment. Don’t forget to mention the year/make/model of your vehicle!
It is extremely important to have a remote start system professionally installed, because a mistake during installation could potentially cost you more money and headaches.
URBAN TRANSPORTATION
SYSTEM
What Is Urbanization?
In 2008, the United Nations announced that 50 percent of the world’s
population now lives in urban areas, a milestone in demographic history.
News reports on the subject frequently rephrased this development slightly to
say that half of the global population now lives in “cities” and illustrated
articles with photos of Mumbai, Shanghai, or New York. These cities are
what the UN terms “mega-cities,” urban areas of 10 million people or more.
The distinct impression was created that a majority of people lived in very
large cities. However, only about 5 percent of world population lives in the
largest cities or, more properly, metropolitan areas. The fact that over half of
the world’s population live in places termed urban is a notable development,
to be sure. But, at the same time, it is useful and important to know just how
the term “urban” is defined.
In most countries, a large part of the urban population actually lives in relatively
small towns and villages. The urban population may also be thought of more
as nonagricultural than urban in the way those in industrialized countries
would naturally tend to perceive it. In its most recent urbanization estimates
and projections, the UN Population Division recognized that when
urbanization is discussed, “the focus is often on large cities, cities whose
populations are larger than many countries.” The table below gives examples
of how countries themselves define urban. The great variation in the urban
definition and the size of places deemed urban is readily apparent (see table).
Selected Urban Definitions With Population Size and Other Criteria

Country: Urban definition
Argentina: Populated centers with 2,000 or more
Canada: Places of 1,000 or more*
China: Cities designated by the State Council and other places with density of 1,500 or more per sq. km.*
India: Specified towns with governments and places with 5,000 or more and at least three-fourths of the male labor force not in agriculture*
Japan: Cities (shi) with 50,000 or more*
Maldives: Malé, the capital
Mexico: Localities of 2,500 or more
New Zealand: Cities, towns, etc. with 1,000 or more
Niger: Capital city and department and district capitals
Norway: Localities of 200 or more
Peru: Populated centers with 100 or more dwellings
Senegal: Agglomerations of 10,000 or more
United States: Places of 2,500 or more, urbanized areas of 50,000 or more*
7 Transportation Challenges in Urban
Areas
1. Traffic Congestion
Overloading is the primary cause of congestion. Patterns of land use and
transport infrastructure influence traffic flow. While both commuter and
freight traffic contribute to congestion, passenger movements are the main
source of gridlock in urban areas.
Motorists in the 21st century spend three times longer in traffic than drivers
did a few decades ago. Large numbers of single-occupancy vehicles add to
traffic volume. The resulting congestion contributes to air pollution,
inefficient use of fuel, and slower commutes, which makes urban life
frustrating. Drivers contend with obstacles like buses, delivery trucks, and service vehicles, in addition to searching for parking spots near their destination.

2. Long Commutes
Growing populations, roadwork, and the distance between homes and
workplaces all contribute to increased congestion and longer commute times.
Expanding road capacity is not always effective for shortening commute
times, as it cannot keep up with the growing volume of traffic. New highways
can actually result in longer commutes, as they encourage more vehicles to
use road networks, increasing overall vehicle-miles-traveled (VMT).
Residential affordability also affects commuting patterns. While most
employment opportunities remain in city centers, suburban housing is more
affordable. Thus, cheaper housing comes at the expense of longer commuting
time.

3. Sprawling Cities
Decentralization makes urban transport systems more complex. As cities
expand outward, and distances increase between residences and places of
work, congestion becomes a bigger problem for communities and commuting
times a major burden for individuals.
Urban sprawl makes public transportation systems more expensive to build
and operate and restricts pedestrian movement. Large-scale superstores, and
other facilities serving large catchment areas, are not easily accessible by
foot, and this encourages the use of motor vehicles.

4. Secondary Infrastructure
Demand for bike and pedestrian infrastructure is increasing, as more people
choose to walk or cycle to work. However, many cities were built for cars
and are not bike or pedestrian friendly. Bicycle lanes and wider footpaths
make riding and walking safer and can help control traffic, but such
infrastructure comes at the expense of roadway capacity and parking space.
Access to public transport often requires parking infrastructure. Suburban
stations can provide parking spaces for riders to promote public transit usage.
Commuters can use these suburban stations to avoid the inconvenience of
parking in the city.

5. Large Fleets, Large Costs


Urban transport agencies face challenges when managing large fleets of
vehicles and a growing workforce, including maintenance costs, recruiting
and retaining skilled employees, and meeting task requirements. Agencies
must train their workforce to increase safety and reduce accidents.
Fluctuating demand for public transport poses a dilemma for public transport
operators, who must determine the size of their fleets. A fleet large enough to
meet peak-hour demand is not economically viable when operated off-peak.
However, if operators don’t provide enough vehicles, they cannot support the
volume of passengers during peak hours.

6. Parking Difficulties
Drivers stuck in traffic while looking for a parking spot contribute to urban
congestion. Cities struggle to provide sufficient parking space to serve central
business districts (CBDs). Large car parks consume expensive real-estate,
while street parking takes up lanes that could be used for moving traffic.
7. Negative Environmental Impacts
Automobile dependency affects the quality of life of residents, including
public health. Cars and related infrastructure have a visual impact on cities.
Air pollution, including greenhouse gas emissions, increases alongside
vehicle-miles traveled (VMT). Road networks consume between 30 and 60%
of metropolitan land, and their territorial imprint grows as more people use
private cars. Traffic generates noise and fumes that make walking in urban
areas unpleasant. Prolonged exposure to these fumes, especially if the engine
is inadequately maintained, is hazardous to health. Fumes emitted from cars
contain carbon monoxide, aldehydes, unburnt hydrocarbons and other gases
and deposits like tetra-ethyl lead, nitrogen oxides, and carbon particles.
Changing Urban Transportation Systems
for Improved Quality of Life
Cities can measure the quality of their transportation systems and apply the insights to their transport policies. The quality of a transportation system can be assessed in terms of its availability, affordability, efficiency, convenience, and sustainability. Below, you’ll find a review of the key factors that influence urban commutes, and the solutions that could address transport issues.

Before the Trip


Availability directly influences how people choose to travel. A number of
indicators define availability for four categories of transport options:
· Rail infrastructure: The proximity of train stations to workplaces
and residences, as well as pedestrian access to and between public
transport lines.
· Road infrastructure: Road quality and the presence of bicycle lanes,
as well as connectivity between car parks and pedestrian zones.
· Shared transport: The number of vehicles in car-sharing services or
the number of rental bikes per capita.
· External connectivity: Access to destinations outside the city,
including the number of flights departing from local airports.
Cities can leverage affordability to encourage or discourage certain modes of
transportation:
· Public transport: Ticket prices in relation to average income
· Barriers to private transport: Cost of parking tickets, taxes, fees,
and road tolls, as well as policies restricting the use of private vehicles

During the Trip


Efficiency involves speed and reliability and is especially important for
public transit systems:
· Public transport: Travel time at rush hour, waiting time for street-
level transport options (buses, trams), and the proportion of dedicated
bus lanes throughout the road network
· Private transport: The predictability of commuting time, differences
between peak-hour and off-peak travel times, and average speed during
peak times
Convenience affects the quality of public transport services. The following
four parameters measure convenience:
· Travel comfort: The quality and age of buses and train carriages, transport operating hours, the frequency of services, and the extent of access for disabled passengers
· Ticketing system: Availability of a travel card that is valid for
multiple modes of public transport, mobile ticketing, and remote top-
up
· Electronic services: The usability of public transport apps, access to
WiFi in buses and metro carriages and stations, and the availability of
real-time information on the progress of transit services, parking
information and online payment options for parking
· Transfers: The distance between metro stations and bus stops, the
time it takes to transfer from one mode of public transport to another,
and the availability of a navigation system to help passengers plan
journeys

After the Trip


Transportation systems need to be safe and environmentally friendly to be
sustainable:
· Safety—the number of road and public transport-related casualties
relative to population, as well as safety enforcement measures
· Environmental impact—encompasses fuel standards, the age, and
quality of vehicles, the proportion of electric vehicles sold, and the
time private motor vehicles operate
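The indicators above lend themselves to a simple composite score: normalize each indicator, weight it, and sum across the availability, affordability, efficiency, convenience, and sustainability dimensions. The weights and sample values below are purely illustrative, chosen to show the mechanics rather than any real city ranking.

```python
# Sketch: a composite transport-quality score built from the indicator
# dimensions discussed above. Weights and sample scores are illustrative.

def composite_score(indicator_scores, weights):
    """indicator_scores and weights are dicts keyed by dimension name;
    scores are assumed to be pre-normalized to a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(indicator_scores[k] * weights[k] for k in weights) / total_weight

if __name__ == "__main__":
    weights = {"availability": 0.25, "affordability": 0.20,
               "efficiency": 0.25, "convenience": 0.15, "sustainability": 0.15}
    sample_city = {"availability": 72, "affordability": 64,
                   "efficiency": 58, "convenience": 70, "sustainability": 61}
    print(round(composite_score(sample_city, weights), 1))
```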
Urban Transport Challenges
1. Urban Transportation at the Crossroads
Cities are locations having a high level of accumulation and concentration
of economic activities. They are complex spatial structures supported by
infrastructures, including transport systems. The larger a city, the greater its
complexity and the potential for disruptions, particularly when this
complexity is not effectively managed. Urban productivity is highly
dependent on the efficiency of its transport system to move labor, consumers,
and freight between multiple origins and destinations. Additionally, transport terminals such as ports, airports, and railyards, which are located within urban areas, help anchor a city within a regional and global mobility system. Still, they are
also contributing to a specific array of challenges. Some challenges are
ancient, like congestion (which plagued cities such as Rome), while others
are new like urban freight distribution or environmental impacts.
a. Traffic congestion and parking difficulties
Congestion is one of the most prevalent transport challenges in large
urban agglomerations. Although congestion can occur in all cities, it is
particularly prevalent above a threshold of about 1 million inhabitants.
Congestion is particularly linked with motorization and the diffusion of the
automobile, which has increased the demand for transport infrastructures.
However, the supply of infrastructures has often not been able to keep up
with mobility growth. Since vehicles spend the majority of the time parked,
motorization has expanded the demand for parking space, which has created
footprint problems, particularly in central areas where the footprint of parked
vehicles is significant. By the 21st century, drivers are three times more likely
to be affected by congestion than in the latter part of the 20th century.
Congestion and parking are also interrelated since street parking consumes
transport capacity, removing one or two lanes for circulation along urban
roads. Further, looking for a parking space (called “cruising”) creates
additional delays and impairs local circulation. In central areas of large cities,
cruising may account for more than 10% of the local circulation, as drivers
can spend up to 20 minutes looking for a parking spot. This practice is often
judged more economically effective than using a paying off-street parking
facility. The time spent looking for a free (or low cost) parking space is
compensated by the monetary savings. Parking also impairs deliveries as
many delivery vehicles will double-park at the closest possible spot to unload
their cargo.
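The trade-off described here, cruising for free street parking versus paying for an off-street spot, can be made concrete with a little arithmetic. The figures below (time spent cruising, the driver's value of time, the fuel burned while circling, the garage fee) are assumed purely for illustration.

```python
# Sketch: when does cruising for street parking "pay off"? Figures assumed.

def cruising_cost(minutes_cruising: float, value_of_time_per_hour: float,
                  fuel_cost_per_minute: float = 0.05) -> float:
    """Implicit cost of searching for a free spot: time valued at the
    driver's hourly rate plus fuel burned while circling."""
    return minutes_cruising * (value_of_time_per_hour / 60.0 + fuel_cost_per_minute)

if __name__ == "__main__":
    off_street_fee = 6.00        # assumed hourly garage fee
    for minutes in (5, 10, 20):
        cost = cruising_cost(minutes, value_of_time_per_hour=15.0)
        cheaper = "cruising" if cost < off_street_fee else "paying the garage"
        print(f"{minutes:>2} min cruising costs ~${cost:.2f} -> {cheaper} is cheaper")
```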
Identifying the true cause of congestion is a strategic issue for urban planning
since congestion is commonly the outcome of specific circumstances such as
the lack of parking or poorly synchronized traffic signals.
b. Longer commuting
On par with congestion, people are spending an increasing amount of time
commuting between their residence and workplace. An important factor
behind this trend is related to residential affordability as housing located
further away from central areas (where most of the employment remains) is
more affordable. Therefore, commuters are exchanging commuting time for
housing affordability. However, long commuting is linked with several social
problems, such as isolation (less time spent with family or friends), as well as
poorer health (obesity). Time spent during commuting is at the expense of
other economic and social activities. However, information technologies have
allowed commuters to perform a variety of tasks while traveling.
c. Public transport inadequacy
Many public transit systems, or parts of them, are either over or underused
since the demand for public transit is subject to periods of peaks and troughs.
During peak hours, crowdedness creates discomfort for users as the system
copes with a temporary surge in demand. This creates the challenge of the
provision of an adequate level of transit infrastructures and service levels.
Planning for peak capacity leaves the system highly under-used during off-
peak hours, while planning for an average capacity will lead to congestion
during peak hours.
Low ridership makes many services financially unsustainable, particularly in
suburban areas. Despite significant subsidies and cross-financing (e.g. tolls),
almost every public transit system cannot generate sufficient income to
cover its operating and capital costs. While in the past, deficits were deemed
acceptable because of the essential service public transit was providing for
urban mobility, its financial burden is increasingly controversial.
d. Difficulties for non-motorized transport
These difficulties are partly the outcome of intense traffic, where the mobility of pedestrians, bicycles, and other non-motorized vehicles is impaired, and partly the result of a blatant lack of consideration for pedestrians and bicycles in the physical design of infrastructures and facilities. On the opposite side, the
setting of bicycle paths takes capacity away from roadways as well as
parking space. A negative outcome would be to allocate more space for non-
motorized transport than the actual mobility demand, which would exacerbate
congestion.

e. Loss of public space


Most roads are publicly owned and free of access. Increased traffic has
adverse impacts on public activities that once crowded the streets, such as markets, agoras, parades and processions, games, and community interactions. These have gradually disappeared, to be replaced by
automobiles. In many cases, these activities have shifted to shopping malls,
while in other cases, they have been abandoned altogether. Traffic flows
influence the life and interactions of residents and their usage of street space.
More traffic impedes social interactions and street activities. People tend to
walk and cycle less when traffic is high.
f. High infrastructure maintenance costs
Cities facing the aging of their transport infrastructure have to assume
growing maintenance costs as well as pressures to upgrade to more modern
infrastructure. In addition to the involved costs, maintenance and repair
activities create circulation disruptions. Delayed maintenance is rather
common since it conveys the benefit of keeping current costs low, but at the
expense of higher future costs and, on some occasions, the risk of
infrastructure failure. The more extensive the road and highway network, the
higher the maintenance cost and its financial burden. The same applies to
public transit infrastructure that requires a system-wide maintenance strategy.
g. Environmental impacts and energy consumption
Pollution, including noise generated by circulation, has become an
impediment to the quality of life and even the health of urban populations.
Further, energy consumption by urban transportation has dramatically increased, and so has the dependency on petroleum. These considerations are
increasingly linked with peak mobility expectations where high energy prices
incite a shift towards more efficient and sustainable forms of urban
transportation, namely public transit. There are pressures to “decarbonize”
urban transport systems, particularly with the diffusion of alternative energy
sources such as electric vehicles.
h. Accidents and safety
The growth in the intensity of circulation in urban areas is linked with a
growing number of accidents and fatalities, especially in developing
economies. Accidents account for a significant share of recurring delays from
congestion. As traffic increases, people feel less safe to use the streets. The
diffusion of information technologies leads to paradoxical outcomes. While
users have access to reliable location and navigation information, portable
devices create distractions linked with a rise in accidents for drivers and pedestrians alike.
i. Land footprint
The footprint of transportation is significant, particularly for the automobile.
Between 30 and 60% of a metropolitan area may be devoted to
transportation, an outcome of the over-reliance on infrastructures supporting
road transportation. Yet, this footprint also underlines the strategic
importance of transportation in the economic and social welfare of cities as
mobility is a sign of efficiency and prosperity.

j. Freight distribution
Globalization and the materialization of the economy have resulted in
growing quantities of freight moving within cities. As freight traffic
commonly shares infrastructures supporting the circulation of passengers, the
mobility of freight in urban areas has become increasingly controversial. The
growth of e-commerce and home deliveries has created additional pressures
in the urban mobility of freight. City logistics strategies can be established to
mitigate the variety of challenges faced by urban freight distribution.
Many dimensions to the urban transport challenge are linked with the
dominance of the automobile.
Automobile Dependency
Automobile use is related to a variety of advantages, such as on-demand
mobility, comfort, status, speed, and convenience. These advantages jointly
illustrate why automobile ownership continues to grow worldwide, especially
in urban areas and developing economies. When given a choice and the
opportunity, most individuals will prefer using an automobile. Several factors
influence the growth of the total vehicle fleet, such as sustained economic
growth (increase in income and quality of life), complex individual urban
movement patterns (many households have more than one automobile), more
leisure time, and suburbanization (areas where mobility options are limited).
Therefore, rising automobile mobility can be perceived as a positive
consequence of economic development. The automotive sector, particularly
car manufacturing, is a factor of economic growth and job creation, with
several economies actively promoting it.
The growth in the total number of vehicles also gives rise to congestion at
peak traffic hours on major thoroughfares, in business districts, and often
throughout the metropolitan area. Cities are important generators and
attractors of mobility, which is associated with a set of geographical
paradoxes that are self-reinforcing. For instance, economic specialization
leads to additional transport demands, while agglomeration leads to
congestion. Over time, a state of automobile dependency has emerged, which
results in a declining role of other modes, thereby limiting alternatives to
urban mobility through path dependency. Future development options are
locked-in because of past choices. A city can become locked-into planning
decisions that reinforce the use of the automobile. In addition to the factors
contributing to the growth of driving, two major factors contributing to
automobile dependency are:
· Underpricing and consumer choices. Most roads and highways are
subsidized as they are considered a public good. Consequently, drivers
do not bear the full cost of automobile use, such as parking. Like the
“Tragedy of the Commons”, when a resource is free of access (road), it
tends to be overused and abused (congestion). This is also reflected in
consumer choice, where automobile ownership is a symbol of status,
freedom, and prestige, especially in developing economies. Single
homeownership also reinforces automobile dependency if this
ownership is favored through various policies and subsidies.
· Planning and investment practices. Planning and the ensuing
allocation of public funds aim towards improving road and parking
facilities in an ongoing attempt to avoid congestion. Other
transportation alternatives tend to be disregarded. In many cases,
zoning regulations impose minimum standards of road and parking
services, such as the number of parking spaces per square meter of
built surface, and de facto impose a regulated automobile dependency.
There are several levels of automobile dependency, ranging from low to
acute, with their corresponding land use patterns and alternatives to mobility.
Among the most relevant automobile dependency indicators is the level of
vehicle ownership, per capita motor vehicle mileage, and the proportion of
total commuting trips made using an automobile. A situation of high
automobile dependency is reached when more than three-quarters of
commuting trips are done using the automobile. For the United States, this
proportion has remained around 88% over recent decades.
Automobile dependency is also served by a cultural and commercial system
promoting the automobile as a symbol of status and personal freedom,
namely through intense advertising and enticements to purchase new
automobiles. Not surprisingly, many developing economies perceive
motorization as a condition for development. Even if automobile dependency is often perceived negatively and is favored by market distortions such as the provision of roads, its outcome reflects the choice of individuals who see the automobile more as an advantage than an inconvenience. This
can lead to a paradoxical situation where planners try to counterbalance the
preference of automobile ownership supported by the bulk of the population.
Congestion
Congestion can be perceived as an unavoidable consequence of the usage of
scarce transport resources, particularly if they are not priced. The last decades
have seen the extension of roads in urban areas, most of them free of access.
Those infrastructures were designed for speed and high capacity, but the
growth of urban circulation occurred at a rate higher than often expected.
Investments came from diverse levels of government intending to provide
accessibility to cities and regions. There were strong incentives for the
expansion of road transportation by providing high levels of transport supply.
This has created a vicious circle of congestion, which supports the
construction of additional road capacity and automobile dependency. Urban
congestion mainly concerns two domains of circulation, often sharing the
same infrastructures:
· Passengers. In many world regions, incomes have significantly
increased; one automobile per household or more is becoming
common. Access to an automobile conveys flexibility in terms of the
choice of origin, destination, and travel time. The automobile is
favored for most trips, including commuting. For instance, automobiles
account for the bulk of commuting trips in the United States. The
majority of automobile-related congestion is the outcome of time
preferences in the usage of vehicles (during commuting hours) as well
as a substantial amount of space required to park vehicles. About 95%
of the time, an automobile is idle, and each new automobile requires an
additional footprint.
· Freight. Several industries have shifted their transport needs to
trucking, thereby increasing the usage of road infrastructure. Since
cities are the leading destinations for freight flows (either for
consumption or transfer to other locations), trucking adds to urban
congestion. The “last mile” problem remains particularly prevalent for
freight distribution in urban areas. Congestion is commonly linked with a drop in the frequency of deliveries, tying up additional capacity to ensure a similar level of service. The growth of home deliveries due to
e-commerce has placed additional pressures, particularly in higher
density areas, on congestion in part because of more frequent parking.
Congestion in urban areas is dominantly caused by commuting patterns and
little by truck movements. On average, infrastructure provision could not
keep up with the growth in the number of vehicles, even more with the total
number of vehicles-km. During infrastructure improvement and construction,
capacity impairment (fewer available lanes, closed sections, etc.) favors
congestion. Significant travel delays occur when the capacity limit is reached
or exceeded, which is the case of almost all metropolitan areas. In the largest
cities such as London, road traffic is slower than it was 100 years ago.
Marginal delays are thus increasing, and driving speed becomes problematic
as the level of population density increases. Once a population threshold of
about 1 million is reached, cities start to experience recurring congestion
problems. This observation must be nuanced by numerous factors related to
the urban setting, modal preferences (share of public transit), and the quality
of existing urban transport infrastructures.
Large cities have become congested most of the day, and congestion was
getting more acute in the 1990s and 2000s and then leveled off in many
cases. For instance, average car travel speeds have substantially declined in
China, with many cities experiencing an average driving speed of less than 20
km/hr with car density exceeding 200 cars per km of road, a figure
comparable to many developed economies. Another important consideration
concerns parking, which consumes large amounts of space and provides a
limited economic benefit if not monetized. In automobile-dependent cities,
this can be very constraining as each land use has to provide an amount of
parking space proportional to their level of activity. Parking has become a
land use that significantly inflates the demand for urban land.
Urban mobility also reveals congestion patterns. Daily trips can be either
mandatory (workplace-home) or voluntary (shopping, leisure, visits). The
former is often performed within fixed schedules while the latter complies
with variable and discretionary schedules. Correspondingly, congestion
comes in two major forms:
· Recurrent congestion. The consequence of factors that cause regular
demand surges on the transportation system, such as commuting,
shopping, or weekend trips. However, even recurrent congestion can
have unforeseen impacts in terms of its duration and severity.
Mandatory trips are mainly responsible for the peaks in circulation
flows, implying that about half the congestion in urban areas is
recurring at specific times of the day and on specific segments of the
transport system.
· Non-recurrent congestion. The other half of congestion is caused by
random events such as accidents and unusual weather conditions
(rain, snowstorms, etc.), which can be represented as a risk factor that
can be expected to take place. Non-recurrent congestion is linked to the
presence and effectiveness of incident response strategies. As far as
accidents are concerned, their randomness is influenced by the level of
traffic as the higher the traffic on specific road segments, the higher the
probability of accidents.
Behavioral and response time effects are also important in a system running close to capacity. For instance, braking suddenly while driving may trigger what is known as a backward traveling wave: as vehicles are forced to stop, the bottleneck moves backward from the location where it initially took place, often leaving drivers puzzled about its cause. The spatial
convergence of traffic causes a surcharge on transport infrastructures up to
the point where congestion can lead to the total immobilization of traffic. Not
only does the use of the automobile have an impact on traffic circulation and
congestion, but it also leads to the decline in public transit efficiency when
both are sharing the same road infrastructures.
Mitigating Congestion
In some areas, the automobile is the only mode for which adequate
transportation infrastructures are provided. This implies less capacity for
using alternative modes such as transit, walking, and cycling. At some levels
of density, no public infrastructure investment can be justified in terms of
economic returns. Longer commuting trips in terms of average travel time,
the result of fragmented land uses, and congestion levels are a significant
trend. A convergence of traffic is taking place at major highways serving
low-density areas with high levels of automobile ownership and low levels of
automobile occupancy. The result is energy (fuel) wasted during congestion
(additional time) and supplementary commuting distances. In automobile-
dependent cities, a few measures can help alleviate congestion to some
extent:
· Ramp metering. Controlling access to a congested highway by
letting automobiles in one at a time instead of in random groups. The
outcome is a lower disruption on highway traffic flows through better
merging.
· Traffic signal synchronization. Tuning the traffic signals to the time
and direction of traffic flows. This is particularly effective if the
signals can be adjusted hourly to reflect changes in circulation patterns.
Trucks can be allowed to pass traffic lights through delayed signals,
which reduces the risk of accidents through sudden collision with a car braking at a yellow light. Therefore, trucks are less likely to be the
first vehicle at a red light, which increases capacity because trucks
have lower acceleration.
· Incident management. Making sure that vehicles involved in
accidents or mechanical failures are removed as quickly as possible
from the road. Since accidents account for 20 to 30% of all the causes
of congestion, this strategy is particularly important.
· Car ownership restrictions. Several cities and countries (e.g.
Singapore) have quotas on the number of license plates that can be issued or require high licensing fees. To purchase a vehicle, an
individual thus must first secure a license through an auction. Such
strategies, however, go against market principles.
· Sharing vehicles. Concerns two issues. The first is providing
ridership to people (often co-workers) having a similar origin,
destination, and commuting time. Two or more vehicle trips can thus
be combined into one, which is commonly referred to as carpooling.
The second involves a pool of vehicles (mostly cars, but also bicycles)
that can be leased or shared for a short duration when mobility is
required. Adequate measures must be taken so that supply and demand
are effectively matched with information technologies providing
effective support.
· HOV lanes. High Occupancy Vehicle (HOV) lanes ensure that
vehicles with two or more passengers (buses, taxis, vans, carpool, etc.)
have exclusive access to a less congested lane, particularly during peak
hours.
· Congestion pricing. A variety of measures aimed at imposing
charges on specific segments or regions of the transport system, mainly
as a toll. The charges can also vary during the day to reflect congestion
levels so that drivers are incited to consider other time periods or other
modes.
· Parking management. Removing parking or free parking spaces can
be an effective dissuasion tool since it reduces cruising and enables
those willing to pay to access an area (e.g. for a short shopping stop).
Parking spaces should be treated as a scarce asset subject to a price
structure, reflecting the willingness to pay. Further, planning
regulations provide an indirect subsidy to parking by enforcing
minimum parking space requirements based upon the type and the
density of the land use.
· Public transit. Offering alternatives to driving can significantly
improve efficiency, notably if it circulates on its own infrastructure
(subway, light rail, buses on reserved lanes, etc.) and is well integrated
within a city’s development plans. However, public transit has its own
set of issues (see the next section about urban transit challenges).
· Non-motorized transportation. Since most urban trips are over short
distances, non-motorized modes, particularly walking and cycling,
have an important role in supporting urban mobility. The provision of
adequate infrastructure, such as sidewalks, is often a low priority as
non-motorized transportation is often perceived as not modern despite
the important role it needs to assume in urban areas.
All these measures only partially address the issue of congestion, as they
alleviate, but do not solve the problem. Fundamentally, congestion remains
the sign of economic success, but a failure at reconciling rising mobility
demands and acute supply constraints.
The Urban Transit Challenge
As cities continue to become more dispersed, the cost of building and
operating public transportation systems increases. For instance, as of
2015, about 201 urban agglomerations had a subway system, the vast
majority of them being in developed economies. Furthermore, dispersed
residential patterns characteristic of automobile-dependent cities make public
transportation systems less convenient to support urban mobility. Additional
investments in public transit often do not result in significant additional
ridership. Unplanned and uncoordinated land development has led to the
rapid expansion of the urban periphery. By selecting housing in outlying
areas, residents restrict their potential access to public transportation. Over-
investment (when investments do not appear to imply significant benefits)
and under-investment (when there is a substantial unmet demand) in public
transit are both complex challenges.
Urban transit is often perceived as the most efficient transportation mode for
urban areas, notably large cities. However, surveys reveal a stagnation of
public transit systems, especially in North America, where ridership levels
have barely changed in the last 30 years. The economic relevance of public
transit is being questioned. Most urban transit developments had little impact
on alleviating congestion despite mounting costs and heavy subsidies. This
paradox is partially explained by the spatial structure of contemporary cities,
which are oriented along servicing individual mobility needs. Thus, the
automobile remains the preferred mode of urban transportation.

Besides, public transit is publicly owned, implying a politically motivated


service that provides limited economic returns. Even in transit-oriented
cities, transit systems depend massively on government subsidies. Little or no
competition within the public transit system is permitted as wages and fares
are regulated, undermining any price adjustments to ridership changes. Thus,
public transit often serves the purpose of a social function (public service) as
it provides accessibility and social equity, but with limited relationships with
economic activities. Among the most difficult challenges facing urban transit
are:
· Decentralization. Public transit systems are not designed to service
low density and scattered urban areas dominating the urban landscape.
The greater the decentralization of urban activities, the more difficult
and expensive it becomes to serve urban areas with public transit.
Additionally, decentralization promotes long-distance trips on transit
systems causing higher operating costs and revenue issues for flat fare
transit systems.
· Fixity. The infrastructures of several public transit systems, notably
rail and subway systems, are fixed, while cities are dynamical entities,
even if the pace of change can take decades. This implies that travel
patterns tend to change with a transit system built for servicing a
specific pattern that may eventually face “spatial obsolescence”; the
pattern it was designed to serve no longer exists.
· Connectivity. Public transit systems are often independent of other
modes and terminals. It is consequently difficult to transfer passengers
from one system to the other. This leads to a paradox between the
preference of riders to have direct connections and the need to provide
a cost-efficient service network that involves transfers.
· Automobile competition. Given cheap and ubiquitous road transport
systems, public transit faced strong competition and lost ridership in
relative terms and, in some cases, in absolute terms. The higher the
level of automobile dependency, the more inappropriate the public
transit level of service. The convenience of the automobile outpaces
the public service being offered.
· Construction and maintenance costs. Public transit systems,
particularly heavy rail, are capital intensive to build, operate, and
maintain. Cost varies depending on local conditions such as density
and regulations, but average construction costs are around $300 million
per km. However, there are exceptions where cost overruns can be
substantial because of capture by special interest groups such as labor
unions, construction companies, and consulting firms. When there is
inefficient regulatory oversight, these actors will converge to extract as
much rent as possible from public transit capital improvements. The
world’s highest subway construction costs are in New York. For
instance, the Second Avenue subway extension in Manhattan, completed in 2017, was done at the cost of $1.7 billion per km, five to
seven times the average in comparable cities such as Paris or London.
This project employed four times more labor, with construction costs
50% higher.
· Fare structures. Historically, most public transit systems have
abandoned a distance-based fare structure for a simpler flat fare
system. This had the unintended consequence of discouraging short
trips, for which most transit systems are well suited for, and
encouraging longer trips that tend to be costlier per user than the fares
they generate. Information systems offer the possibility for transit systems to move back to a more equitable distance-based fare structure, particularly with smartcards that make it possible to charge according to the point of entry and exit within the public transit system (a simple sketch of such a fare rule follows this list).
· Legacy costs. Most public transit systems employ unionized labor
that has consistently used strikes (or the threat of labor disruptions) and
the acute disruptions they create as leverage to negotiate favorable
contracts, including health and retirement benefits. Since public transit
is subsidized, these costs were not well reflected in the fare systems. In
many transit systems, additional subsidies went into compensation or
covered past debt, not necessarily into performance improvements or
additional infrastructure. As most governments face stringent
budgetary constraints because of social welfare commitments, public
transit agencies are being forced to reassess their budgets through an
unpopular mix of higher fares, deferred maintenance, and the breaking
of labor contracts.
· Self-driving vehicles. Developments in information technologies allow us to anticipate the availability of self-driving vehicles in the coming years. Such a development would entail point-to-point services by on-demand vehicles and a much better utilization level of such assets. This system could compete directly with transit systems due to its convenience, comfort, and likely affordability.
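As mentioned in the fare-structure point above, smartcards that record the point of entry and exit make a distance-based fare practical again. A minimal sketch of such a fare rule (a base charge plus a per-kilometre rate, with a cap), using invented figures, is given below.

```python
# Sketch: a distance-based fare computed from smartcard entry/exit points.
# Base fare, per-km rate, and cap are invented figures for illustration.

def distance_fare(km_travelled: float,
                  base: float = 1.00, per_km: float = 0.20,
                  cap: float = 4.50) -> float:
    """Charge a base fare plus a per-kilometre component, capped so that
    long trips are not discouraged entirely."""
    return round(min(base + per_km * km_travelled, cap), 2)

if __name__ == "__main__":
    for km in (2, 8, 25):
        print(f"{km:>2} km trip -> fare {distance_fare(km):.2f}")
```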
Therefore, public transit systems are challenged to remain relevant to urban mobility as well as to increase their market share. The increasing volatility in
petroleum prices since 2006 provides uncertainties in the costs of transit fleet
ownership and operations and how effective it is to convert transit fleets to
alternative energy sources. A younger generation with a preference for living in higher-density areas perceives the automobile as a less attractive proposition than prior generations did. Electronic fare systems are also
making the utilization of public transit more convenient. A recent trend
concerns the usage of incentives, such as point systems (e.g. air miles with
purchase of a monthly pass), to promote public transit and influence
consumer behavior. Yet, evidence underlines that the inflation-adjusted cost
of using public transit is increasing, implying that the cost advantage of
public transit over the automobile is not changing in a significant manner. If
self-driving vehicles become a possibility, many highly subsidized transit
systems may have limited competitive advantage. Under such circumstances,
the fate of many surface public transit systems will be in question.
Global Urbanization
Urbanization has been one of the dominant economic and social changes of
the 20th century, especially in the developing world. Although cities played a
significant role throughout human history, it is not until the industrial
revolution that a network of large cities started to emerge in the most
economically advanced parts of the world. Since 1950, the world’s urban
population has more than doubled, to reach nearly 4.2 billion in 2018, about
55.2% of the global population. This transition is expected to go on well into
the second half of the 21st century, a trend reflected in the growing size of
cities and the increasing proportion of the urbanized population. By 2050,
70% of the global population could be urbanized, representing 6.4 billion
urban residents. Cities also dominate the national economic output as they
account for the bulk of the production, distribution, and consumption.
Global urbanization is the outcome of three main demographic trends:
· Natural increase. The outcome of more births than deaths in urban
areas, a direct function of the fertility rate as well as the quality of
healthcare systems (lower mortality rates, particularly for infants).
Phases in the demographic transition are commonly linked with
urbanization rates, with peak growth years corresponding to large
differences between birth and death rates. Although natural increase
played an essential role in the past, it is of much lesser importance
today as fertility rates in many developed economies have dropped
significantly. In some cases like Western Europe, Japan, and South
Korea, fertility is below the replacement rate.
· Rural to urban migrations. This has been a strong urbanization
factor, particularly in the developing world, where migration accounted
for between 40 and 60% of urban growth. Such migration has endured since the
beginning of the industrial revolution in the 19th century. It first took
place massively in the developed world in the first half of the 20th
century and then in the developing world since the second half of the
20th century. The factors behind rural to urban migrations may involve
the expectation of finding employment, improved agricultural
productivity, which frees rural labor, or even political and
environmental problems that force populations to leave the
countryside. The industrialization of coastal China and its integration
into the global trade system since the 1980s has led to the largest rural
to urban migration in history. According to the United Nations
Population Fund, about 18 million people migrate from rural areas to
cities each year in China alone.
· International migration. The growth in international migration has
been an important factor in the urbanization of major gateway cities,
such as Los Angeles, Miami, New York, London, and Paris. This
process tends to occur in the largest cities, but there is a trickle-down
to cities of smaller size.
Through urbanization, fundamental changes in the socio-economic
environment of human activities have been observed. What drives
urbanization is a complex mix of economic, demographic, and technological
factors. The growth in GDP per capita is a dominant driver of urbanization,
but this is supported by corresponding developments in transportation
systems and even the diffusion of air conditioning, allowing for settlements
in high-temperature areas such as the Middle East (e.g. Dubai). Urbanization
involves new forms of employment, economic activity, and lifestyle.
Urban mobility problems have increased proportionally, and in some cases,
exponentially, with urbanization. This is associated with two outcomes. First
is the emergence of a network of megacities that account for the most salient
urban mobility challenges. Second, mobility demands tend to be concentrated
over specific urban areas, such as central business districts.
Current global trends indicate a growth of about 50 million urbanites each
year, roughly a million a week. More than 90% of that growth occurs in
developing economies, which places intense pressures on urban
infrastructures to cope, particularly transportation. What is considered urban
includes a whole continuum of urban spatial structures, ranging from small
towns to large urban agglomerations. This also raises the question of
optimal city size, since technical limitations (roads, utilities) are not much of
an impediment to building very large cities. Many of the world’s largest cities
can be labeled as dysfunctional mainly because, as city size increases, the
rising operational and infrastructure complexities are not effectively matched
by managerial expertise.
Evolution of Transportation and Urban
Form
Urbanization is occurring in accordance with the development of urban
transport systems, particularly in terms of their capacity and efficiency.
Historically, movements within cities tended to be restricted to walking,
which made urban mobility rather inefficient and time-consuming. Thus,
activity nodes tended to be agglomerated, and urban forms compact with
mixed uses. Many modern cities have inherited an urban form created under
such circumstances, even though those conditions no longer prevail. The dense
urban cores of many European and East Asian cities, for example, enable
residents to make between one third and two-thirds of all trips by walking
and cycling. At the other end of the spectrum, the dispersed urban forms of
most Australian, Canadian, and American cities, which were built more
recently, encourage automobile dependency and are linked with high levels
of mobility. Still, Chinese cities have experienced a high level of
motorization, implying the potential of convergence towards more uniform
urban forms. Many cities are also port cities with trade playing an enduring
role not only for the economic vitality but also in the urban spatial structure,
with the port district being an important node. Airport terminals have also
been playing a growing role in the urban spatial structure as they can be
considered as cities within cities.
The evolution of transportation has generally led to changes in urban form.
The more radical the changes in transport technology, the more the alterations
on the urban form. Among the most fundamental changes in the urban form
is the emergence of new clusters in peripheral areas expressing new urban
activities and new relationships between elements of the urban system. Many
cities are assuming a polycentric form, a change that is associated with new
mobility patterns. The central business district (CBD), once the primary
destination of commuters and serviced by public transportation, has been
transformed by new manufacturing, retailing, and management practices.
Whereas traditional manufacturing depended on centralized workplaces and
transportation, technological and transportation developments rendered
modern industry more flexible. In many cases, manufacturing relocated to a
suburban setting, if not altogether to entirely new low-cost locations
offshore. Retail and office activities are also suburbanizing, producing
changes in the urban form. Concomitantly, many important transport
terminals, namely port facilities, and railyards, have emerged in suburban
areas following new requirements in modern freight distribution brought in
part by containerization. The urban spatial structure shifted from a nodal to a
multi-nodal character, implying new forms of urban development and new
connections to regional and global economic processes.
Initially, suburban growth mainly took place adjacent to major road corridors,
leaving plots of vacant land or farmland in between. Later, intermediate spaces
were gradually filled up, more or less coherently. Highways and ring roads,
which circled and radiated from cities, favored the development of suburbs
and the emergence of important sub-centers that compete with the central
business district for the attraction of economic activities. As a result, many
new job opportunities have shifted to the suburbs, and the activity system of
cities has been considerably modified. Depending on the economic sector
they specialize in, cities and even different parts of a metropolitan area can be
experiencing development at entirely different rates (or even decline), leading
to a highly heterogeneous urban landscape. These changes have occurred
according to a variety of geographical and economic contexts, notably in
North America and Europe, as each subsequent phase of urban transportation
developments led to different spatial structures. Sometimes, particularly when
new modern urban road infrastructures are built, the subsequent changes in
the urban form can be significant. Two processes had a substantial impact on
contemporary urban forms:
· Urban sprawl has been dominant in North America since the end of
World War II, where land is abundant, transportation costs were low,
and where the economy became dominated by tertiary and quaternary
activities. Under such circumstances, a strong negative relationship
between urban density and automobile use emerged. In the context of
cities with high automobile dependency, their built-up areas have
grown at a faster rate than their populations, resulting in declining
densities. In addition, commuting became relatively inexpensive
compared with land costs, so households had an incentive to buy
lower-priced housing at the urban periphery. Wherever there is
motorization, a pattern of sprawl takes shape.
· The decentralization of activities resulted in two opposite effects.
First, commuting time has remained relatively stable in duration.
Second, commuting increasingly tends to be longer in terms of distance
and made by using the automobile rather than by public transit. Most
transit and road systems were developed to facilitate suburb-to-city,
rather than suburb-to-suburb commuting. As a result, suburban
highways are often as congested as urban highways.
The Spatial Constraints of Urban
Transportation
The amount of urban land allocated to transportation is often correlated with
the level of mobility. In the pre-automobile era, about 10% of the urban land
was devoted to transportation, which was simply roads for dominantly
pedestrian traffic. As the mobility of people and freight increased, a growing
share of urban areas was allocated to transport and the infrastructures
supporting it. Large variations in the spatial imprint of urban transportation
are observed between different cities as well as between different parts of a
city, such as between central and peripheral areas. The major components of
the spatial imprint of urban transportation are:
· Pedestrian areas. Refer to the amount of space devoted to walking.
This space is often shared with roads as sidewalks may use between
10% and 20% of a road’s right of way. In central areas, pedestrian
areas tend to use a greater share of the right of way, and in some
instances, whole areas are reserved for pedestrians. However, in a
motorized context, most pedestrian areas are for servicing people’s
access to transport modes such as parked automobiles.
· Roads and parking areas. Refer to the amount of space devoted to
road transportation, which has two states of activity: moving or parked.
In a motorized city, on average 30% of the surface is devoted to roads
while another 20% is required for off-street parking. This implies that,
for each car, about two off-street and two on-street parking spaces are
available. In North American cities, roads and parking lots account for
between 30 and 60% of the total surface.
· Cycling areas. In a disorganized form, cycling simply shares access
to pedestrian and road space. However, many attempts have been made
to create spaces specifically for bicycles in urban areas, with reserved
lanes and parking facilities. The Netherlands has been particularly
proactive over this issue, with biking paths and parking areas forming an
active component of the urban transport system; 27% of all
commuting is accounted for by cycling.
· Transit systems. Many transit systems, such as buses and tramways,
share road space with automobiles, which often impairs their
respective efficiency. Attempts to mitigate congestion have resulted in
the creation of road lanes reserved for buses either on a permanent or
temporary (during rush hour) basis. Other transport systems such as
subways and rail have their own infrastructures and, consequently,
their own rights of way.
· Transport terminals. Refer to the amount of space devoted to
terminal facilities such as ports, airports, transit stations, railyards, and
distribution centers. Globalization has increased the mobility of people
and freight, both in relative and absolute terms, and consequently the
amount of urban space required to support those activities. Many major
terminals are located in the peripheral areas of cities, which are the
only locations where sufficient amounts of land are available.
The spatial importance of each transport mode varies according to a number
of factors, density being the most important. Further, each transport mode has
unique performance and space consumption characteristics. The most
relevant example is the automobile. It requires space to move around (roads),
but it also spends 98% of its existence stationary in a parking space.
Consequently, a significant amount of urban space must be allocated to
accommodate the automobile, especially when it does not move and is thus
economically and socially useless. In large urban agglomerations, close to all
the available street parking space in areas of average density and above is
occupied throughout the day. At an aggregate level, measures reveal a
significant spatial imprint of road transportation among developed countries.
In the United States, more land is thus used by the automobile than for
housing. In Western Europe, roads account for between 15% and 20% of the
urban surface, while for developing economies, this figure is about 10% but
rising fast due to motorization.
Transportation and the Urban Structure
Urbanization involves an increased number of trips in urban areas. Cities
have traditionally responded to the growth in mobility by expanding the
transportation supply, building new highways and transit lines. This has
mainly meant building more roads to accommodate an ever-growing number
of vehicles. Several urban spatial structures have accordingly emerged,
with the reliance on the automobile being the most important discriminatory
factor. Four major types can be identified at the metropolitan scale:
· Type I – Completely Motorized Network. Representing an
automobile-dependent city with limited centrality and dispersed
activities.
· Type II – Weak Center. Representing the spatial structure where
many activities are located in the periphery.
· Type III – Strong Center. Representing high-density urban centers
with well developed public transit systems.
· Type IV – Traffic Limitation. Representing urban areas that have
implemented traffic control and modal preference in their spatial
structure. Commonly, the central area is dominated by public transit.
Transportation systems influence the structure of communities,
districts, and the whole metropolitan area at different scales. For
instance, one of the most significant impacts of transportation on the
urban structure has been the clustering of activities near areas of high
accessibility.
The impact of transport on the spatial structure is particularly
evident in the emergence of suburbia. Although many other factors
are important in the development of suburbia, including low land
costs, available land (large lots), environmental considerations
(clean and quiet), safety, and car-oriented services (shopping malls),
the spatial imprint of the automobile is dominant. Suburban
developments have occurred in many cities worldwide, although few
other places have reached as low a density and as high an automobile
dependency as North America. The automobile is also linked
with changes in street layouts. While older parts of cities tend to
have a conventional grid layout, from the 1930s, new suburbs
started to be designed in a curvilinear fashion, which included some
cul-de-sacs (dead ends). By the 1950s, the prevailing design for new
suburbs favored cul-de-sacs. Although the aim was to create
a more private and safe environment, particularly in cul-de-sac
sections, the outcome was also a growing sense of isolation and car
use.
With the expansion of urban areas, congestion problems, and the
increasing importance of inter-urban movements, the existing
structure of urban roads was judged to be inadequate. Several ring
roads have been built around major cities and became an important
attribute of the spatial structure of cities. Highway interchanges in
suburban areas are notable examples of clusters of urban
development that have shaped the multicentric character of many
cities. The extension (and the over-extension) of urban areas has
created what may be called peri-urban areas. They are located well
outside the urban core and the suburbs, but are within reasonable
commuting distances; the term “edge cities” has been used to label a
cluster of urban development taking place in suburban settings.
Process and Top 5 Stages of Transportation
Planning
Transportation planning is a complex problem. Increased facilities will
change the environment and land use patterns and result in increased trips,
invalidating the original criteria and projections used. Increased use of
operations research and systems approach to transportation planning in recent
times seeks to optimise the system performance for deriving maximum
benefit from the facilities.
The use of mathematical models to simulate the problem leads one to
understand the variables involved to a reasonable degree; this helps in
arriving at the best possible solution under a given set of circumstances.
Transport planning for urban areas and big cities is much more complex.
Urban transport planning process involves the following stages:
(i) Inventorying of existing conditions.
(ii) Forecasting future conditions including land use.
(iii) Evaluation of alternative plans based on cost-benefit analysis.
(iv) Adoption and implementation of a programme.
(v) Continuing study to assess the impact to help in future planning.
The following sequential stages are relevant to transportation planning:
Stage # 1. Trip Generation:
A trip is a one-way movement of a person by a mechanised mode of
transport, having an ‘origin’ (start of the trip) and a ‘destination’ (end of the
trip). Trips may be home-based or non-home-based; in the former, one end
of the trip – either the origin or the destination is at the home of the person
while in the latter, neither end of the trip is the home of the person making
the trip. The trip ends are classified as generations and attractions. In the case
of home-based trips, the home end of any trip is a ‘generation’; in the case of
non-home-based trips, the origin of the trip is a generation.
The non-home end of a home-based trip is an ‘attraction’; the destination of a
non-home-based trip is also an attraction.
The following are the factors governing trip generation and attraction:
1. Family income – generally, the more the income, the higher will be the trip
generation rate.
2. Car ownership – the more the households owning cars, the more the trip
generation.
3. Family size and composition – the more the number of members and those
who go out for work, the more the trips generated. Age structure is a
significant factor. Young school-going children generate trips while elderly
people do not.
4. Land-use characteristics have a bearing on trip generation.
5. Accessibility to a public transport system can generate more trips.
6. Employment opportunities, existence of shopping malls and offices
influence the trip attraction rate.
In general, two approaches – multiple linear regression analysis and
category analysis – are used for estimating trip generation.
Multiple Linear Regression Analysis:
This is a well-known statistical technique for connecting independent
variables with a dependent variable using a mathematical relationship. The
total number of trips is the dependent variable and the various measurable
factors that influence trip generation are the independent variables.
The general form of the relationship is:
T = a0 + a1x1 + a2x2 + … + anxn
where T is the number of trips (the dependent variable), x1, x2, …, xn are the
independent variables influencing trip generation, and a0, a1, …, an are
coefficients estimated from survey data.
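As an illustration (not part of the original text), the following is a minimal sketch of fitting such a trip-generation relationship by ordinary least squares, using hypothetical household data:

```python
import numpy as np

# Hypothetical zonal data (assumptions for illustration only):
# columns = household income (in 1,000s), cars per household, household size
X = np.array([
    [20, 0.2, 3.5],
    [35, 0.6, 4.0],
    [50, 1.1, 4.2],
    [75, 1.8, 3.8],
    [90, 2.0, 3.0],
], dtype=float)
# Observed trips per household per day in each zone (hypothetical)
y = np.array([3.1, 4.0, 5.2, 6.8, 7.1])

# Add a column of ones so the intercept a0 is estimated as well
A = np.column_stack([np.ones(len(X)), X])

# Ordinary least squares: minimizes ||A·a - y||^2 over the coefficients a
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
a0, a1, a2, a3 = coeffs
print(f"T = {a0:.2f} + {a1:.3f}·income + {a2:.2f}·cars + {a3:.2f}·size")

# Predict trips for a new zone (hypothetical values)
new_zone = np.array([1, 60, 1.4, 3.6])
print("Predicted trips per household:", round(float(new_zone @ coeffs), 2))
```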
Category Analysis:
Also called cross-classification technique, this considers trip-making by
individual households rather than by the different zones. Households in the
study area are divided into several categories, and their base year trip rates
are determined by surveys.
Income-class is another variable; similarly car-ownership levels and
household structures are other variables.
This method has a number of advantages over regression analysis. A multi-
dimensional matrix is used to define the categories, each dimension
representing one independent variable, which are again classified into a
definite number of discrete class intervals. Unlike in regression analysis, no
mathematical equation is derived, and the computations are relatively simple.
Data from census can be used directly, saving considerable effort.
However, new variables cannot be easily introduced at a later stage, and large
samples are needed to assign trip rates to any category.
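A minimal sketch of the cross-classification idea, with hypothetical trip rates and household counts (all values are assumptions for illustration):

```python
# Base-year trip rates (trips per household per day) observed by survey,
# keyed by (income class, car-ownership class)
trip_rates = {
    ("low", 0): 2.5, ("low", 1): 3.6,
    ("mid", 0): 3.0, ("mid", 1): 4.8, ("mid", 2): 5.9,
    ("high", 1): 5.5, ("high", 2): 7.2,
}

# Forecast number of households in each category for the design year
households = {
    ("low", 0): 1200, ("low", 1): 800,
    ("mid", 0): 600, ("mid", 1): 2500, ("mid", 2): 900,
    ("high", 1): 700, ("high", 2): 1100,
}

# Total trips generated = sum over categories of (rate x households);
# the category trip rates themselves are assumed stable over time
total_trips = sum(trip_rates[c] * n for c, n in households.items())
print(f"Forecast trips generated: {total_trips:,.0f} per day")
```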
Stage # 2. Trip Purpose:
Trips are made for different purposes, and a classification by purpose is
helpful.
The following are some of the important classes based on the purpose of
a trip:
i. Work
ii. Education
iii. Business
iv. Shopping
v. Health and medical
vi. Social, recreational, and sports
vii. Miscellaneous purposes.
A typical distribution of trips by purpose has been obtained by the Central
Road Research Institute (CRRI). The classification covers only home-based
trips, since these constitute 80 to 90% of the total number.
Stage # 3. Trip Distribution:
After estimating the trips generated from and attracted to the various zones, it
is necessary to apportion the trips generated in every zone to the zones to
which these trips are attracted. In other words, the trip distribution stage
determines the number of trips, tij, which would originate at zone i and
terminate at zone j.
Methods of Trip Distribution:
There are two broad types of trip distribution methods:
(i) Growth Factor Methods:
Growth factor methods have been used earlier; but these have now been
replaced by the more rational synthetic models. However, growth factor
methods are still used for the study of small areas in view of their simplicity.
These methods are based on the assumption that the existing travel patterns
can be projected to a future design year by using certain growth factors.
The general formula is–
Tij = tij × E
Where, Tij = Trips from zone i to zone j in the design year,
tij = Observed trips in the base year from zone i to zone j
and E = growth factor
E may be taken to be uniform or constant, or an average value may be used.
Iterative methods have also been devised in this category.
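A minimal sketch of applying a uniform growth factor to a hypothetical base-year trip matrix (the zones, trip values, and factor are assumptions for illustration):

```python
# Uniform growth factor method: Tij = tij x E
base_year_trips = {          # tij: observed base-year trips between zones
    ("A", "B"): 1500, ("A", "C"): 600,
    ("B", "A"): 1400, ("B", "C"): 900,
    ("C", "A"): 550,  ("C", "B"): 950,
}
E = 1.35                     # uniform growth factor for the design year

design_year_trips = {od: t * E for od, t in base_year_trips.items()}
for (i, j), t in sorted(design_year_trips.items()):
    print(f"T_{i}{j} = {t:,.0f}")
```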
(ii) Synthetic Methods:
Synthetic models are used to develop a relationship between trips, the
resistance to travel between the zones, and the relative attractiveness of the
zones for travel. Existing data are used for this purpose. Once the model is
established, it can be used to predict future patterns of inter-zonal travel.
One of the well-known synthetic models is the gravity model. As proposed
by Voorhees (1955), this model assumes that the interchange of trips between
different zones of an area is dependent upon the relative attraction between
the zones and their spatial separation, as measured by an appropriate function
of the distance.
The relative attraction of each zone induces the trip maker to overcome the
spatial separation according to his or her ability, desire, or necessity to travel. While the trip
interchange is directly proportional to the relative attraction between the
zones, it is inversely proportional to the measure of spatial separation. (This
is somewhat similar to Newton’s Universal Law of Gravitation, and hence
the name Gravity Model.)
The following simple equation represents this relationship:
Tij = K × Pi × Aj / (dij)^n
where Tij is the number of trips between zones i and j, Pi is the number of
trips generated in zone i, Aj is the relative attraction of zone j, dij is the
spatial separation between the two zones, and K and n are model parameters.
Existing data are used to calibrate the model parameters K and n, using a
computer program. Assuming these parameters to be the same at a future
date, the trip interchanges are computed by substituting the future trip
generated values in the model.
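A minimal sketch of the gravity model in the form given above, with hypothetical zone data and assumed values for K and n:

```python
# Hypothetical inputs for a three-zone example (illustration only)
productions = {"A": 5000, "B": 3000, "C": 2000}   # trips generated per zone
attractions = {"A": 2500, "B": 4500, "C": 3000}   # relative attraction per zone
distance = {                                      # spatial separation (km)
    ("A", "B"): 8, ("A", "C"): 15,
    ("B", "A"): 8, ("B", "C"): 6,
    ("C", "A"): 15, ("C", "B"): 6,
}
K, n = 0.002, 2.0                                 # calibrated model parameters

# Tij = K * Pi * Aj / dij^n for every ordered pair of distinct zones
trips = {
    (i, j): K * productions[i] * attractions[j] / distance[(i, j)] ** n
    for (i, j) in distance
}
for (i, j), t in sorted(trips.items()):
    print(f"T_{i}{j} = {t:,.0f}")
```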
Stage # 4. Traffic Assignment:
This is the stage in which the trip interchanges are allocated to different parts
of the network. The route between any two O and D-pairs to be used is
determined and the inter-zonal flows are assigned to the selected routes.
All traffic assignment techniques are based on route selection, which is made
on the basis of a number of criteria like the journey time, distance, cost,
comfort, convenience and safety. Distance or journey time may often be
considered as the sole criterion, but the problem of driver’s preferences is not
always as simple as this. While for small jobs route selection may be made
manually; for large jobs the use of the digital computer is a must.
A concept which is commonly used in traffic assignment is the Moore
Algorithm, originally developed for routing telephone calls along the shortest
path between two points in a network. The algorithm is used in computer
programmes developed for traffic assignment and helps in building the minimum-
path tree between any two zone centroids in a street network.
The following are the traffic assignment methods normally used:
(i) ‘All-or-nothing’ method (Free or Desire assignment)
(ii) Multiple route method
(iii) Capacity restraint method
(iv) Diversion curves approach
‘All-or-Nothing’ Method:
In this method, all the trips between any O and D-pair are assigned to the
shortest path between the trip ends. This is, in a way, oversimplifying the
problem as it is based on the premise that the route followed by the traffic is
the one with the least travel resistance, which can be measured in terms of the
distance, travel time, cost, or a suitable combination of these factors. The
traffic volumes of the trips are assigned based on the principle of minimum-
path tree. Once the assignment is completed, it is checked to ensure that no
link in the network is loaded beyond its capacity. In case of an overload, the
journey time at the overloaded link increases, which calls for the repetition of
the traffic assignment.
If a superior facility like an expressway is available, drivers tend to prefer to
use this even though the route is longer. The ‘all-or-nothing’ method does not
reflect small differences in journey time and may result in unrealistic route
selection.
The other methods mentioned above are considered to be more accurate, but
will need the use of a digital computer.
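A minimal sketch of the ‘all-or-nothing’ approach, using Dijkstra’s shortest-path algorithm as a stand-in for the minimum-path tree described above; the network, travel times, and O-D flows are hypothetical:

```python
import heapq
from collections import defaultdict

# Hypothetical street network: (from, to) -> travel time in minutes
edges = {
    ("A", "B"): 6, ("B", "A"): 6,
    ("B", "C"): 4, ("C", "B"): 4,
    ("A", "C"): 12, ("C", "A"): 12,
    ("B", "D"): 7, ("D", "B"): 7,
    ("C", "D"): 3, ("D", "C"): 3,
}
graph = defaultdict(list)
for (u, v), t in edges.items():
    graph[u].append((v, t))

def shortest_path(origin, destination):
    """Dijkstra's algorithm; returns the minimum-time path as a list of nodes."""
    queue = [(0, origin, [origin])]
    seen = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == destination:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (time + t, nxt, path + [nxt]))
    return None

od_flows = {("A", "D"): 1200, ("A", "C"): 800}    # trips per hour (hypothetical)
link_volumes = defaultdict(int)
for (o, d), flow in od_flows.items():
    path = shortest_path(o, d)
    for u, v in zip(path, path[1:]):
        link_volumes[(u, v)] += flow              # all trips use the shortest path

for link, vol in sorted(link_volumes.items()):
    print(link, vol)
```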
Stage # 5. Mode Split:
Mode split or Modal split is the process of separating trips by the mode of
travel. In general, modal split refers to the trips made by private cars and the
public transportation system – buses or trains.
The factors affecting the choice among alternative modes are not restricted to
cost and time, but are heterogeneous.
Some broad categories are given below:
1. Characteristics of the trip – trip purpose, trip distance, etc.
2. Household characteristics – income, car ownership, family size and
composition.
3. Zonal characteristics – residential density, concentration of workers,
distance from the central business district.
4. Network characteristics – accessibility and travel time comparison by the
different travel modes.
An understanding of the modal split is very relevant to transportation
planning. Future transportation patterns can be forecast accurately only if the motivations
that guide the travellers in their choice of the modes of transportation can be
analysed and understood. The problem being a complex one, better
techniques are being evolved to aid the planning process.
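The text lists the factors that influence mode choice but does not prescribe a model. One common formulation (an assumption here, not taken from the text) is a binary logit model; a minimal sketch with hypothetical coefficients:

```python
import math

def mode_share_transit(time_car, time_transit, cost_car, cost_transit,
                       beta_time=-0.08, beta_cost=-0.04, asc_transit=-0.5):
    """Return the probability that a traveler chooses transit.

    The logit form and all coefficient values are illustrative assumptions.
    """
    u_car = beta_time * time_car + beta_cost * cost_car
    u_transit = asc_transit + beta_time * time_transit + beta_cost * cost_transit
    return math.exp(u_transit) / (math.exp(u_transit) + math.exp(u_car))

# Example: a congested downtown commute where transit is time-competitive
share = mode_share_transit(time_car=45, time_transit=40, cost_car=12, cost_transit=3)
print(f"Predicted transit share: {share:.0%}")
```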
The Benefits Of Urban Mass Transit
Advantages to individuals and communities
Where the automobile is a major competitor to mass transportation, the use of
transit has declined, reducing revenues available to pay the costs of these
systems and services, and—in a setting where government subsidies are
essential for sustaining mass transit—political support has eroded as well. As
more people rely on the automobile, their interest in directing public
resources to improving the highway system dominates their concern for
subsidizing transit.
For those who can use the automobile for quick and reliable transportation,
this trend simply represents the evolution of urban transport from collective
riding to individual riding, from the economies of sharing a relatively high-
speed service in a corridor where travel patterns are similar or the same, to
the privacy of one’s own “steel cocoon,” which can go anywhere, anytime,
without the need to coordinate travel plans with the schedule and routes of a
transit operator attempting to serve large groups of people. The automobile
has captured a large share (more than 95 percent by 1983) of urban trips in
the United States, and only in some cities of more than two million people
does the mass transportation share reach or exceed 10 percent of the trips.
If the automobile provides superior service for the majority of riders, why not
let the market operate without government intervention, perhaps leading to
the demise of transit? While this has happened in some medium-size and
small American cities, mass transportation can be important for a number of
reasons.
First, some portion of the urban travel market is made up of people who
cannot use the automobile to travel because they are handicapped, elderly, or
too young to drive. Some people cannot afford to own and operate a car, and
the young, the old, and the handicapped often fall into this category. If these
people are to have the mobility essential for subsistence and satisfaction in
their lives, some form of public transportation is necessary.
Second, transit provides a community with a way to move potentially large
numbers of people while consuming fewer resources. A single bus, if it is full
(50 to 80 passengers), can carry as many people as 50 or 60 cars, which
normally operate with fewer than 2 occupants. The bus requires less street
space, equivalent to 2 or 3 automobiles, and, when it is full, it requires much
less energy to move each person. Because emissions from internal-
combustion engines are proportional to fuel consumption, a full bus will
produce less pollution per person-trip than an automobile. Finally, because
they are operated by professional drivers, buses have a lower accident rate
than automobiles. Electric rail rapid transit trains produce even less air
pollution and are far safer per person-trip than either automobiles or buses.
Typical capacities of urban mass transportation modes

vehicle type    guideway type   persons per vehicle   "train" length   "trains" per hour
                                (seated / crowded)                     (low / high)
auto            streets         1.2 / 3               1                450 / 900
auto            freeways        1.2 / 3               1                900 / 1,800
bus             streets         50 / 80               1                1 / 60
bus             separate        50 / 80               1                1 / 30
light rail      streets         80 / 100              3                6 / 30
light rail      separate        90 / 120              4                4 / 30
heavy rail      separate        100 / 120             8                4 / 40
commuter rail   separate        100 / 150             10               1 / 12
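The figures in the table combine multiplicatively into a corridor capacity in persons per hour. A minimal sketch (the helper function is illustrative, not from the original text), using values from two rows of the table:

```python
def corridor_capacity(persons_per_vehicle, vehicles_per_train, trains_per_hour):
    """Persons per hour past a point = persons/vehicle x vehicles/train x trains/hour."""
    return persons_per_vehicle * vehicles_per_train * trains_per_hour

# Heavy rail at crowded loading, 8-car trains, 40 trains per hour
print(corridor_capacity(120, 8, 40))   # 38,400 persons per hour
# Bus on a separate guideway at crowded loading, 30 buses per hour
print(corridor_capacity(80, 1, 30))    # 2,400 persons per hour
```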
These benefits accrue not only to transit travelers but also to other residents
and to the owners of land and businesses. Indeed, a major benefit of mass
transportation services goes to automobile travelers, who experience less
congestion and shorter travel times. There is no monetary market for these
broadly distributed public goods produced by mass transportation. There is
no practical way to sell clean air or lower accident rates to city dwellers to
raise funds to subsidize and expand mass transit or to restrict access to these
benefits only to those who pay for them. Some communities do raise
revenues for transit and other purposes by levying special fees on properties
particularly well-served by fixed-guideway transit (for example, in downtown
areas or near rail stations) to capture some of the increased value produced by
raising their accessibility with public transportation.
These public transportation benefits provide the justification for government
subsidies. Their generation is strongly dependent on the utilization of mass
transportation. The heavier the use of public transit, the larger will be the
benefits produced. Yet even if only a small portion (5–10 percent) of the
travel market uses transit in the rush hours, a major reduction in congestion
can result. On the other hand, buses and trains running nearly empty in the
middle of the day, during the evening, or on weekends do not produce
sufficient benefits to the community to justify the high costs to provide these
services.
Effects of public policy
The benefits of mass transportation result from the utilization of these
services: more utilization produces more benefits. Crowded buses and trains
signify a smaller market share for the automobile, with its attendant air
pollution, congestion, accidents, and excessive land consumption. Heavy
utilization of mass transportation can produce a larger revenue stream from
passenger fares, which can help support these systems, either by reducing
subsidy requirements or, in a few very high-density travel corridors, actually
covering all the costs of providing mass transportation.
There are a number of ways to increase and maintain mass transit ridership.
These differ by context and government policy, and none offers guaranteed
results. Keeping transit utilization high is much easier where competition
from the automobile is limited. In Third World cities, where the automobile
has never taken hold, transit, bicycles, and walking remain dominant modes.
Cities are more densely settled, and work, shopping, and residential activities
are closely intermingled so that trip distances are short. This encourages
walking and the use of bicycles, with their low energy requirements. Even if
mass transportation is slow and crowded, it may be the dominant mechanized
travel option in such settings.
Cities in many developed countries in Europe and Asia have long-standing
government policies that simultaneously controlled the growth of automobile
ownership through high taxes on vehicles and their fuel; restricted land
development to encourage high-density activity centres, including suburban
new towns, as well as mixed land uses to keep trips short; and funneled a
steady stream of public resources to subsidize mass transit operations and
make capital investments to extend systems into new areas. These public
investments in transit were generally not matched with similar investments in
facilities for the automobile. Indeed, a number of cities around the world have
restricted automobile travel to their downtown areas by defining auto-free
zones (e.g., Gothenburg, Sweden), prohibiting the growth of parking, or
charging high entry tolls for vehicles carrying only one or two people
(Singapore).
In the United States the approach has been to allow the free market, for both
travel and land development, to determine the role of competing modes. Mass
transportation does attract high market shares where the automobile is
inherently less competitive, as, for example, travel to dense downtown areas
during the rush hours. In the central areas of larger cities such as New York,
Boston, Washington, Chicago, and even Los Angeles, street congestion can
be intense and parking fees high. Where high-quality mass transportation is
available (particularly rail service, which is as fast as or faster than the
automobile), with frequent departures and high reliability, it can capture 50 to
80 percent of all travel to downtown in the rush hour. At other hours of the
day, the mass transportation share of downtown travel may drop to 20
percent, and across the regions in which such cities are centred, the all-day
transit share may be as little as 5 to 10 percent of trips.
Mass transit is critically important to the economic and social health of these
cities, and it is also important in other communities where its market share is
lower but its contributions to peak-period congestion reduction and mobility
assurance are significant. These effects provide the argument for public
involvement in transit, through ownership, development, operation, and
service subsidies. The key policy choices about mass transit in the United
States concern how to spend public funds to produce these benefits, including
decisions about capital investments for new and replacement technologies,
the quantity and quality of services to offer, and how to pay for all of this.
Mass Transit Finance
Costs
The costs of providing mass transportation services are of two types, capital
and operating. Capital costs include the costs of land, guideways, structures,
stations, and rolling stock (vehicles); operating costs include labour to
operate the vehicles, maintain the system, and manage the enterprise; energy;
replacement parts; and liability costs (or insurance). The principal factors
affecting the cost of providing mass transportation service are the type of
technology used, particularly the nature of the guideway; the extent or size of
the system, measured in terms of the length of the routes; and the peak
passenger demand.
The choice of technology affects both capital and operating costs. Bus
systems are less costly to buy than fixed-guideway technologies using steel-
wheeled cars on steel rails or rubber-tired cars on concrete beams. Buses
require more operators (one driver per bus), and they do not benefit from
automation, whereas only one or two operators can run a 10-car train carrying
1,000 passengers, and some rail systems are nearly fully automated.
Mass transportation systems that operate on guideways separated from other
traffic are more expensive because of guideway costs but are also faster,
safer, and more reliable. Guideways can cost $10 million per mile at ground
level in low-density areas with occasional street crossings or as much as $200
million per mile in bored tunnels under densely developed cities. Light rail
transit, designed to operate singly or in trains up to four units long, can be
used on guideways separated from other traffic for high-speed sections and
intermingled with street traffic in downtowns or near stations. This flexibility
can make light rail less expensive, and service can be brought closer to the
origins and destinations of travelers. Light rail stations can simply be
stopping points marked with signs or separate stations with protected waiting
areas. They may be a few blocks or as much as a mile apart.
Rail rapid transit systems use heavier cars designed to operate in trains of up
to 10 or 12 cars. They are used on exclusive guideways, often in tunnels or on
elevated structures, and their average speeds (including station stops) may
approach 30 mile/h. Rapid transit stations themselves can be costly
structures, either off-street or underground, typically spaced at one-half- to
one-mile intervals. Some communities have commuter rail systems,
descendants of older intercity rail lines, which connect distant suburbs with
downtown areas. The technology is identical or similar to intercity passenger
trains, with diesel-electric locomotives pulling unpowered coaches. Speeds
are high (35–40 mile/h average), stations are 2–5 miles apart, and guideways
are separated from street traffic, with occasional street crossings at grade
level.
The size of the mass transportation operation during the peak period is also a
major determinant of costs. For example, 4,800 people can be carried in one
corridor during the rush hour with buses operating one minute apart (60 buses
per hour), each carrying a standing load of 80 persons. To provide each
traveler with a seat (offering better-quality service), each bus would carry
only 50 persons, and 96 buses per hour would be needed.
The actual number of buses to be purchased (and the number of drivers
required) would depend on how long it would take a bus to make a round-
trip. This depends on the length of the route (longer routes take more time
and would require more buses), as well as the average speed (faster routes
would allow buses to get back to the starting point sooner, requiring fewer
buses). In the above example, if a route were 5 miles long (round-trip) and
the bus made an average speed of 10 mile/h, it would take one-half hour to
make a round-trip. If one bus were needed every minute, then 30 buses would
be required, because the first bus would get back to the starting point one
minute after the 30th bus left. To give each passenger a seat, one bus would
be needed every 37.5 seconds (96 buses per hour), so 48 vehicles would be
required.
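A minimal sketch reproducing the fleet-size arithmetic of this example; the helper function is illustrative, not from the original text:

```python
import math

def buses_required(passengers_per_hour, passengers_per_bus,
                   route_length_miles, avg_speed_mph):
    """Return (buses needed per hour, vehicles in the fleet) for one route."""
    buses_per_hour = math.ceil(passengers_per_hour / passengers_per_bus)
    round_trip_hours = route_length_miles / avg_speed_mph
    fleet = math.ceil(buses_per_hour * round_trip_hours)
    return buses_per_hour, fleet

# Standing loads of 80: 60 buses per hour, 30-vehicle fleet on a 5-mile round trip
print(buses_required(4800, 80, 5, 10))   # (60, 30)
# A seat for every rider (50 per bus): 96 buses per hour, 48-vehicle fleet
print(buses_required(4800, 50, 5, 10))   # (96, 48)
```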
This illustrates the way both capital and operating costs are affected by the
number of passengers to be carried in a given time period, the route length,
the average operating speed, and policies on crowding (whether or not each
passenger gets a seat). If the transit operator buys 48 buses to serve this route,
many will be idle during the midday and evening, because travel volumes
will be much lower in those periods. Yet the capital cost of the buses cannot
be reduced if the rush hour demand is to be met. At least 48 drivers will be
required, many of whom will not be occupied outside the peak travel periods.
If the morning and evening rush periods are widely separated in time, it may
be necessary to hire two sets of drivers or to ask drivers to work split shifts—
for example, four hours during the morning rush and four more hours in the
late afternoon. Workers may demand wage premiums if the spread between
the start and finish of the workday is excessively long. This illustrates the
inherent inefficiencies in transit services, because they must be designed to
meet peak-period travel needs. Mass transportation services that have higher
capacity (passengers per hour) and offer faster, more reliable service (e.g.,
rail rapid transit) are more costly, in terms of both capital and operating costs,
than lower-capacity, slower services (e.g., buses). To make decisions about
investing in new mass transportation services, it is useful to examine the cost
per passenger carried as well as the total cost to implement and operate a
system. Analyses show that fixed-guideway transit requires much higher
corridor travel demands (perhaps 10,000 to 20,000 passengers per hour or
more) to reduce the unit cost below that of light rail or bus systems. Such
demand densities are found only in larger cities, and, as the trend toward
suburban growth and the spreading of travel demand over regions continues,
there are fewer locations where large investments in rail transit can be
justified.
Revenues
Transit costs are paid from passenger fares and, in most developed countries,
public subsidies. The most common way to collect passenger fares is by cash
payment on the vehicle (for bus and light rail systems without closed
stations) or upon entry to the station (for systems requiring entry through
closed stations). Normally, the driver collects fares, although some
intensively used bus and light rail systems carry conductors on the vehicles to
collect fares and make change. Because making change slows the boarding
process, most American systems require prepaid tokens or exact fares. It is
more common in European cities to use an honour fare system, in which the
passenger purchases a ticket before entering the vehicle, cancels that ticket
using an on-board machine, and presents the ticket to fare inspectors on
request. While only 2 to 10 percent of the passengers may be checked for
valid tickets, fare evasion is low because fines for improper tickets are high
and are collected immediately.
Monthly, semimonthly, and even daily passes are sold on many systems,
which keeps fare purchase off the vehicle and makes the process of checking
for prepayment fast. The monthly pass is particularly convenient for frequent
riders, for it does not require having the correct change, and unlimited rides
are allowed, so transit riding is encouraged. Many communities in the United
States and Europe offer substantial discounts for monthly passes, because
passes reduce fare collection costs and encourage transit use. Reduced-price
fares are commonly offered to students, the elderly, and handicapped persons.
To make prices more equitable, some transit operators vary charges for
different trips. Distance-based fares, proportional to the length of the trip, are
a better reflection of the cost of service, and travelers tend to accept the idea
that they should pay more for longer trips. The disadvantage of distance-
based fares is that the operator must distinguish travelers by their trip lengths,
which is done by checking fares when passengers enter and leave the system.
This makes fare collection more time-consuming and costly, particularly if
the validation is done by conductors or fare clerks. Modern systems use
magnetically encoded fare cards read by computer-controlled turnstiles when
passengers enter and leave the transit system. When travelers buy fare cards,
the amount of their purchase is encoded on the card, and this balance is
decreased by an appropriate amount for each trip.
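A minimal sketch of a stored-value fare card with distance-based deduction, as described above; the class, fare formula, and amounts are hypothetical:

```python
class FareCard:
    def __init__(self, balance):
        self.balance = balance
        self.entry_station = None

    def tap_in(self, station):
        # The entry point is recorded so the trip length can be determined at exit
        self.entry_station = station

    def tap_out(self, station, distance_km, base_fare=1.00, per_km=0.15):
        # Distance is passed in directly here for brevity; a real system would
        # look it up from the entry and exit stations
        fare = base_fare + per_km * distance_km   # fare grows with trip length
        if fare > self.balance:
            raise ValueError("Insufficient balance")
        self.balance -= fare
        self.entry_station = None
        return fare

card = FareCard(balance=20.00)
card.tap_in("Downtown")
paid = card.tap_out("Airport", distance_km=18)
print(f"Fare charged: ${paid:.2f}, remaining balance: ${card.balance:.2f}")
```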
Some transit operators charge differently by time of day, based on two
concepts: first, the cost to provide transit service is higher during the rush-
hours because equipment and personnel requirements are greater then;
second, most rush-hour trips are for the purpose of going to and from work,
and travelers are willing to pay more for these because of the monetary
reward they get for the trip. Automated fare collection facilitates time-of-day
pricing as well as distance-based fares.
Subsidies
Mass transportation fares typically are set below the level necessary to cover
full costs, and the difference is made up by government subsidy intended to
create the social benefits produced when people use transit. In the United
States, revenues from passenger fares typically pay from 10 to 70 percent of
operating costs, the lower number representing lightly used suburban services
and the higher number reflecting intensely used downtown corridor services.
The other 30 to 90 percent comes from state, regional, and local subsidies.
Limited federal operating subsidies were made available beginning in 1974,
allocated in proportion to the scale of each city’s transit operations. The
federal role in providing operating subsidies has been declining, and state and
regional governments, along with transit riders (through increased fares),
have made up much of the difference.
Commonly, capital costs for new transit investments in the United States are
paid entirely through government subsidies. The federal government has
offered capital grants for mass transportation since 1964. Decisions about
investments in new fixed-guideway transit services have been made
cautiously, and only a few such systems have been supported. Federal capital
subsidies can cover up to three-quarters of the cost of the investment.
Marketing Mass Transit
The mass transportation market—its riders and potential riders—comprises
two broad groups, captive riders and choice riders. Captive transit riders must
rely on mass transit; they do not have an alternative way to travel for some or
all of their trips because an automobile is required but none is available or
because they cannot drive or cannot afford an automobile. Choice riders use
transit if it provides service superior to that of their principal alternative,
usually the automobile. Captive and choice riders have different needs and
preferences, and different services can be designed to accommodate them.
Captives may become choice riders over time if their circumstances change,
particularly if poor mass transit gives them strong incentives to find other
ways to travel.
To attract and retain mass transportation riders in automobile-dominated
cities, it is important to understand what factors influence travelers’ choice of
mode. Travel behaviour and market research studies have shown that mode
choice is affected by three classes of factors: the quantity and relative quality
of competing transportation services; characteristics of the trips people take;
and attributes of the people themselves and their households.
Service quality and quantity
The amount of service offered, especially the geographic and temporal extent
of mass transportation, will determine which trips are served. To meet the
needs of captive riders, broad coverage of the region, the day, and the week is
desired. Choice riders are more likely to consider transit for work trips to
dense employment centres during peak periods.
The most important service quality attribute is travel time from origin to
destination. Several factors contribute to travel time. The first is the average
speed of the vehicles, determined in part by their rate of acceleration and
maximum speed but strongly influenced by the distance between stops and
the dwell time at stations. Electric-rail vehicles can accelerate rapidly and
may have top speeds of 70 mile/h, but if stations are only one-half mile apart,
the average speed may be less than 30 mile/h. While longer distances
between stops mean higher speeds and shorter travel times, the time it takes
for travelers to get to and from stations will increase. Thus, to the traveler,
increasing station spacing may not decrease door-to-door travel time.
Travel time also is affected by the frequency of service, the time interval
between vehicles. If transit vehicles depart every five minutes, the travel time
experienced by riders will generally be less than if vehicles are dispatched at
15-minute intervals. If the transit service operates reliably on a published
schedule, travelers can reduce this waiting time by planning their arrival at
the station to coincide with vehicle departures. Services that are slow or
unreliable relative to the automobile will primarily attract captive riders,
while those offering competitive travel times, usually those operating on
exclusive guideways, are appealing to both markets but have the strongest
prospects for attracting choice riders.
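A minimal sketch of how headway feeds into door-to-door travel time, assuming passengers arrive at stops at random so the average wait is about half the headway; all values are hypothetical minutes:

```python
def door_to_door_minutes(walk_to_stop, headway, in_vehicle, walk_from_stop):
    # Random arrivals imply an average wait of roughly half the headway
    average_wait = headway / 2
    return walk_to_stop + average_wait + in_vehicle + walk_from_stop

# Service every 5 minutes versus every 15 minutes on an otherwise identical trip
print(door_to_door_minutes(6, 5, 22, 4))    # 34.5 minutes
print(door_to_door_minutes(6, 15, 22, 4))   # 39.5 minutes
```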
The price of transit is less important than service quality to choice travelers,
because under most circumstances mass transportation fares are lower than
auto costs. Because captive riders tend to have lower incomes than choice
riders, increasing the price of transit can be a special burden to them; yet their
dependence on mass transportation makes them less likely to switch modes in
the face of a fare increase than choice riders. Even captive riders find price to
be less important in mode-choice decisions than service quality factors such
as travel time and reliability. Field experiments show that improving other
service factors, such as comfort, safety from crime, and cleanliness of
vehicles and stations, contributes less to ridership increases than
improvements to the basic service attributes of travel time, frequency, and
reliability.
Trip characteristics
Travelers making regular trips each day, particularly for work or school, are
more likely to take transit. Repetitive trips can be planned in advance to
coordinate with transit schedules; some transit services offer discounts for
regular riding; transit service is usually better during the rush hours, when
these trips tend to occur; and there is more competition for the use of family
cars when work trips are made. Mass transportation is less likely to be used
for shopping and recreational trips because of the difficulty of carrying
packages, the requirement to pay separate fares for each person in the group,
and long waiting times and walking distances. Thus, transit use is much
lower at midday, on evenings, and on weekends than it is during peak
weekday periods.
Traveler and household characteristics
Among the most influential factors determining travel-mode choice are the
characteristics of the travelers themselves and their households. These factors
cannot be directly affected by public transportation policy, while service
characteristics and even land-use patterns are subject to some control.
The availability of automobiles has a powerful influence on the use of transit,
because the quality of automobile service is commonly superior to that of
transit. Auto availability is a household characteristic, reflecting the
interaction between the number of cars in the household, the number of
drivers, and the travel needs of those drivers. The use of mass transportation
is quite low in households having a car for every driver, except where one or
more travelers make regular trips to congested areas where good quality
transit is available. It is much higher in households with fewer cars than
drivers. These are often lower-income households, and so transit usage is
often correlated with low income. To compete with the automobile, transit
service must be very good, and, where it is, the relationship between income
and transit use may be reversed—i.e., higher-income travelers may use transit
more.
Gender is an important determinant of transit use, with women traveling by
transit more than men. Men may get priority use of the household car for
work trips because they may be the primary wage earner and because women
have traditionally been more involved in child care and household
management. These gender roles are changing rapidly. Men may be the
dominant users of some high-quality downtown-oriented transit services if
their spouses work in suburbs where transit services are limited.
The Future Of Mass Transportation
Mass transportation performs important economic, social, and environmental
functions in cities, ranging from providing basic mobility services in
developing countries, to securing the viability of dense business districts, to
meeting all the transportation requirements for those unable to use
automobiles, to reducing the negative impacts of automobile congestion.
Decisions about what services to provide, and how to provide and pay for
them, should be based on an understanding of the mission of mass
transportation in a particular community. The diversity of missions,
geographies, and market characteristics leads to a variety of transit service
concepts.
The main challenges facing mass transportation policymakers are the
dispersal of development through suburban growth and increases in capital
and operating costs, which require either higher fares, greater subsidies, or
both. Responses to these challenges include alternative service concepts, new
technology and automation, more efficient service delivery, and alternative
sources of funding.
Alternative service concepts
In low-density settings, traditional fixed-route, fixed-schedule bus or train
operations cannot meet market needs. If the priority is to discourage travelers
from driving alone in their automobiles, mass transportation services can
include a variety of forms of individualized ride sharing that put 2, 4, or even
10 people in a single vehicle. Some agencies provide rider matching services
and better parking arrangements to encourage carpooling, the sharing of auto
rides by people who make similar or identical work trips. Car-pool vehicles
are privately owned, the guideways (roads) are in place, drivers do not have
to be compensated, and vehicle operating costs can be shared. On the other
hand, carpoolers must coordinate their travel times, which can be a major
inconvenience.
Some agencies and employers have subsidized vanpooling, ride sharing in 8-
to 15-passenger vans provided by the sponsor. One worker is recruited to
drive the van to and from work in return for free transportation and limited
personal use of the van. Passengers pay a monthly fee to the sponsor. Van
pools are most successful for extremely long work trips (e.g., 30–50 miles
each way).
The uncertainty associated with putting a new transit service into the
marketplace, particularly in low-density suburban settings, has been avoided
by selling subscription services. Workers with common origins and
destinations buy monthly bus tickets in advance, for which they receive
guaranteed seating and a commitment to be delivered to work on time,
usually without intermediate stops. The subscription operator normally
requires a minimum ridership level to assure financial viability of the route.
These unconventional transit services operate about as fast as a private
automobile, but they allow many riders to share the cost, so the price to an
individual is usually low. Their main disadvantage is that they do not give
riders schedule flexibility. If there is a family need to go to work late or come
home early, or a work need to stay after hours, the traveler may be stranded.
Those who often require schedule flexibility avoid ride-sharing services.
Some employers and transit operators reduce this obstacle by using backup
vehicles to provide guaranteed rides home.
Low-density trip needs, and particularly the needs of the handicapped and
elderly, have been met with demand-responsive services, in which vehicles
are dispatched to pick up travelers in response to a telephone call. This
provides door-to-door service, but if a vehicle serves several travelers at
once, trip times can be very long; if it serves only one person (or group) at a
time, the operating costs can be as high as taxi fares or higher.

New technology
Automatic train operation has been suggested as a way to increase capacity
(by allowing closer vehicle spacing, since computers can react faster than
humans to avoid collision), reduce travel time (by operating vehicles at
higher speeds), and reduce costs. Some heavy rail transit systems operating
on separate guideways are now partially or fully automated—e.g., the Bay
Area Rapid Transit (BART) system in San Francisco and the Metro system in
Washington, D.C. The capital cost of automated systems is high, and
promised reductions in operating costs have not always been achieved
because of maintenance requirements.
There have been many proposals, and some field implementations, of small
(3–5-passenger), automated vehicles operating on separate, usually elevated,
guideways. These personal rapid transit (PRT) systems function like
“horizontal elevators,” coming to a station in response to a traveler’s demand
and moving directly from origin to destination. Because of this service
pattern and the small size of the vehicles, PRT systems indeed offer
personalized service much like an automobile, including the ability to control
who rides in one’s party, which provides privacy and security. PRT systems
have low capacity in passengers carried per hour, and guideway and vehicle
costs are high. They are best suited for short distribution trips around and
within activity centres such as office complexes, airports, and shopping
centres.
When the PRT concept is extended to larger (15–25-passenger) vehicles, the
term automated guideway transit (AGT) is sometimes applied. AGT systems
have been built to provide circulation in downtown areas (e.g., Detroit,
Michigan, and Miami, Florida, both in the United States) and on a dispersed
American college campus (West Virginia University, at Morgantown). The
vehicles commonly have rubber tires and operate on twin concrete beams,
elevated or at grade level. AGT is a scaled-down, modernized application of
rail rapid transit—slower, with smaller, lighter cars, more easily fit into
established communities. Monorail systems are an AGT concept using a
single guide and support beam, usually elevated, with a vehicle riding on top
of, or suspended beneath, the beam. Monorail systems can be found at some
activity centres in the United States (e.g., the downtown area of Seattle,
Washington; Disneyland in Anaheim, California; and Pearl City Shopping
Center in Honolulu) and a system completed in 1901 continues to serve
Wuppertal, Germany. There is no inherent advantage in monorail other than
its novelty. Switching trains from track to track can be complex, and the lack
of standardization makes acquisition and maintenance costly.

Cost reduction through management and contracting


Transit systems are shrinking because fare revenues cannot cover costs, and
there are many other demands on government monies. Some of this shrinkage
is to be expected, because as the market becomes smaller (because auto use
expands, people move to the suburbs, and so forth), the service should get
smaller. Mass transportation systems, particularly those in older cities, need
to be rationalized by eliminating underutilized components and improving
service on remaining lines.
Costs can be controlled through administrative reorganization to increase
efficiency. The trend toward public ownership of systems, nearly complete
by the 1970s in the United States, has been redirected by contracting out
many services to private operators through competitive bidding. This has
been a successful cost-cutting strategy for services that can be broken into
manageable work pieces, such as demand-responsive services for the
handicapped. Competition in the bidding process, as well as incentive
contracts that reward providers for efficiency, can keep costs down. In some
cases, complete reprivatization of transportation services may provide cost
reductions and service improvements as long as regulatory protections assure
service for all markets.

Financing options
There also is a need for a regular source of subsidies, so that operators do not
have to return annually to legislative bodies to fight for survival. Local sales
taxes and special assessments within districts where the benefits of transit are
focused are logical sources. It is also important to create incentives and
restrictions to encourage service providers to be efficient and limit subsidy
costs. Some communities require that some minimum share of operating
costs be paid with passenger fares, which ensures that the primary
beneficiaries (the riders) pay a reasonable share of costs. If there is one key to
the survival and success of mass transportation, it is enlightened public policy
that defines the evolving mission of transit in the community, implements
economical ways to deliver quality service, and provides for stable financial
support.
7 Problems of Urban Transport
While urban transport has had a tremendous liberating impact, it has also
posed a very serious problem for the urban environment in which it operates.
Buchanan gave a warning in 1963, when he wrote Traffic in Towns, that “the
motor vehicle has been responsible for much that adversely affects our
physical surroundings.
There is its direct competition for space with environmental requirements,
and it is greatest where space is limited… the record is one of steady
encroachment, often in small instalments, but cumulative in effect. There are
the visual consequences of this intrusion; the crowding out of every available
square yard of space with vehicles, either moving or stationary, so that
buildings seem to rise from a plinth of cars; the destruction of architectural
scenes; the visual effects from the clutter of signs, signals, bollards, railings, etc.,
associated with the use of motor vehicles”.
1. Traffic Movement and Congestion:
Traffic congestion occurs when urban transport networks are no longer
capable of accommodating the volume of movements that use them. The
location of congested areas is determined by the physical transport
framework and by the patterns of urban land use and their associated trip-
generating activities. Levels of traffic overloading vary in time, with a very
well-marked peak during the daily journey-to-work periods.
Although most congestion can be attributed to overloading, there are other
aspects of this basic problem that also require solutions. In the industrialised
countries increasing volumes of private car, public transport and commercial
vehicle traffic have exposed the inadequacies of urban roads, especially in
older city centres where street patterns have survived largely unaltered from
the nineteenth century and earlier.
The intricate nature of these centres makes motorised movements difficult
and long-term car parking almost impossible. In developing countries the
problem is particularly acute: Indian and South-East Asian cities often have
cores composed of a mesh of narrow streets often accessible only to non-
motorised traffic.
The rapid growth in private car ownership and use in western cities in the
period since 1950 has rarely been accompanied by a corresponding upgrading
of the road network, and these increases will probably continue into the
twenty-first century, further exacerbating the problem. In less-developed
countries car ownership in urban areas is at a much lower level, but there is
evidence of an increased rate in recent decades, especially in South America
and South-East Asia (Rimmer, 1977).
Satisfactory definitions of the saturation level of car ownership vary but if a
ratio of 50 cars to 100 persons is taken then in several US cities the figure is
now over 80 per 100, whereas in South-East Asian cities the level rarely
exceeds 10 per 100. One factor contributing to congestion in developing
world cities is the uncontrolled intermixing of motorised and animal- or
human-drawn vehicles. The proliferation of pedal cycles and motorcycles causes
particular difficulties (Simon 1996).
2. Public Transport Crowding:
The ‘person congestion’ occurring inside public transport vehicles at such
peak times adds insult to injury, sometimes literally. A very high proportion
of the day’s journeys are made under conditions of peak-hour loading, during
which there will be lengthy queues at stops, crowding at terminals, stairways
and ticket offices, and excessively long periods of hot and claustrophobic
travel jammed in overcrowded vehicles.
In Japan, ‘packers’ are employed on station platforms to ensure that
passengers are forced inside the metro trains so that the automatic doors can
close properly. Throughout the world, conditions are difficult on good days,
intolerable on bad ones and in some cities in developing countries almost
unbelievable every day. Images of passengers hanging on to the outside of
trains in India are familiar enough. Quite what conditions are like inside can
only be guessed at.
3. Off-Peak Inadequacy of Public Transport:
If public transport operators provide sufficient vehicles to meet peak-hour
demand there will be insufficient patronage off-peak to keep them
economically employed. If on the other hand they tailor fleet size to the off-
peak demand, the vehicles would be so overwhelmed during the peak that the
service would most likely break down.
This disparity of vehicle use is the hub of the urban transport problem for
public transport operators. Many now have to maintain sufficient vehicles,
plant and labour merely to provide a peak-hour service, which is a hopelessly
uneconomic use of resources. Often the only way of cutting costs is by
reducing off-peak services, but this in turn drives away remaining patronage
and encourages further car use. This ‘off-peak problem’ does not, however,
afflict operators in developing countries. There, rapidly growing urban
populations with low car ownership levels provide sufficient off-peak
demand to keep vehicle occupancy rates high throughout the day.
4. Difficulties for Pedestrians:
Pedestrians form the largest category of traffic accident victims. Attempts to
increase their safety have usually failed to deal with the source of the
problem (i.e., traffic speed and volume) and instead have concentrated on
restricting movement on foot. Needless to say this worsens the pedestrian’s
environment, making large areas ‘off-limits’ and forcing walkers to use
footbridges and underpasses, which are inadequately cleaned or policed.
Additionally, there is obstruction by parked cars and the increasing pollution
of the urban environment, with traffic noise and exhaust fumes affecting most
directly those on foot.
At a larger scale, there is the problem of access to facilities and activities in
the city. The replacement of small-scale and localised facilities such as shops
and clinics by large-scale superstores and hospitals serving larger catchment
areas has put many urban activities beyond the reach of the pedestrian. These
greater distances between residences and needed facilities can only be
covered by those with motorised transport. Whereas the lack of safe facilities
may be the biggest problem for the walker in developing countries, in
advanced countries it is the growing inability to reach ‘anything’ on foot,
irrespective of the quality of the walking environment.
5. Parking Difficulties:
Many car drivers stuck in city traffic jams are not actually trying to go
anywhere: they are just looking for a place to park. For them the parking
problem is the urban transport problem: earning enough to buy a car is one
thing but being smart enough to find somewhere to park it is quite another.
However, it is not just the motorist that suffers. Cities are disfigured by ugly
multi-storey parking garages and cityscapes are turned into seas of metal, as
vehicles are crammed on to every square metre of ground.
Public transport is slowed by clogged streets and movement on foot in
anything like a straight line becomes impossible. The provision of adequate
car parking space within or on the margins of central business districts
(CBDs) for city workers and shoppers is a problem that has serious
implications for land use planning.
A proliferation of costly and visually intrusive multi-storey car-parks can
only provide a partial solution, and supplementary on-street parking often
compounds road congestion. The extension of pedestrian precincts and retail
malls in city centres is intended to provide more acceptable environments for
shoppers and other users of city centres. However, such traffic-free zones in
turn produce problems as they create new patterns of access to commercial
centres for car-borne travellers and users of public transport, while the latter
often lose their former advantage of being conveyed directly to the central
shopping area.
6. Environmental Impact:
The operation of motor vehicles is a polluting activity. While there are
innumerable other activities which cause environmental pollution, as a result
of the tremendous increase in vehicle ownership society is only now
beginning to appreciate the devastating and dangerous consequences of motor
vehicle usage. Pollution is not the only issue.
Traffic noise is a serious problem in the central area of our towns and cities
and there are other environmental drawbacks brought about through trying to
accommodate increasing traffic volumes. The vast divergence between
private and social costs is one which has so far been allowed to continue
without any real check. Perhaps more disturbing is that society is largely
unaware of the longer-term effects of such action, and while the motorcar is
by no means the only culprit, it is a persistently obvious offender.
Traffic Noise:
It is generally recognised that traffic noise is the major environment problem
caused by traffic in urban areas. Noise became a pressing problem late in the
1950s and in 1960 the Government set up a committee to look into the whole
issue. This committee, headed by Sir Alan Wilson, pointed out with reference
to London that traffic noise “is the predominant source of annoyance and no
other single noise is of comparable importance”.
Traffic noise is both annoying and disturbing. Walking and other activities in
urban areas can be harassing and, perhaps more important, traffic noise
penetrates through to the interior of buildings. Working is therefore more
difficult since noise disturbs concentration and conversation. High noise
levels can also disturb domestic life as sleeping and relaxation become
affected.
Traffic noise tends to be a continuous sound, which is unwanted by the
hearer. It is caused as a result of fluctuations in air pressure, which are then
picked up by the human ear. Whilst other noise phenomena such as aircraft
noise and vibrations from a road drill produce a more intense sound, traffic
noise is a much more continuous and an almost round-the-clock discomfort.
Noise is usually measured on a weighted scale in decibel units, an increase of
10 dB corresponding to a doubling of loudness.
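This rule of thumb can be made concrete with a little arithmetic. The following minimal Python sketch assumes the common approximations that perceived loudness roughly doubles for every 10 dB increase and that sound intensities (not decibel values) add when independent sources are combined; the function names are illustrative only:

import math

def loudness_ratio(delta_db):
    # Approximate perceived-loudness ratio for a change of delta_db decibels,
    # using the rule of thumb that +10 dB is heard as roughly twice as loud.
    return 2 ** (delta_db / 10.0)

def combined_level(levels_db):
    # Sound level (dB) of several independent sources heard together.
    # Intensities add, so convert each level to a relative intensity,
    # sum them, and convert back to decibels.
    total_intensity = sum(10 ** (level / 10.0) for level in levels_db)
    return 10 * math.log10(total_intensity)

# A lorry at 89 dB beside a car at 84 dB combine to only about 90.2 dB,
# but the lorry alone sounds roughly 1.4 times as loud as the car.
print(round(combined_level([89, 84]), 1))   # 90.2
print(round(loudness_ratio(89 - 84), 2))    # 1.41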
The Wilson Committee published studies which showed that a noise level of
84 dB was as much as people found acceptable, and they proposed
legislation which would make any engine noise of more than 85 dB illegal.
They proposed that there should be a progressive reduction in acceptable
limits, but this has not been achieved. In fact, heavy lorries produce a noise
level still well in excess of the above acceptable level.
The noise from motor vehicles comes from various sources. The engine,
exhaust and tyres are the most important ones but with goods vehicles,
additional noise can be given off by the body, brakes, loose fittings and
aerodynamic noise. The level of noise is also influenced by the speed of the
vehicle, the density of the traffic flow and the nature of the road surface on
which the vehicle is operating.
Vehicles, which are accelerating or travelling on an uphill surface, produce
more noise than those moving in a regular flow on an even road. The
regulations now in force lay down limits of 84 dB for cars and 89 dB for
lorries. Buses (particularly when stopping and starting), motorcycles and
sports cars, as well as goods vehicles, produce higher noise levels than the
average private car.
7. Atmospheric Pollution:
Fumes from motor vehicles present one of the most unpleasant costs of living
with the motor vehicle. The car is just one of many sources of atmospheric
pollution and although prolonged exposure may constitute a health hazard, it
is important to view this particular problem in perspective. As the Royal
Commission on Environmental Pollution has stated, “there is no firm
evidence that in Britain the present level of these pollutants is a hazard to
health”.
Traffic fumes, especially from poorly maintained diesel engines, can be very
offensive and, added to noise, contribute to the unpleasantness of walking in
urban areas. No urban street is free from the effects of engine fumes, and
these almost certainly contribute towards the formation of smog. As traffic
volumes increase, however, atmospheric pollution will also increase. In the
United States, with its much higher levels of vehicle ownership, there is
mounting concern over the effects of vehicle fumes. In large cities such as
Mexico City, Los Angeles, New York and Tokyo, fumes are responsible for
the creation of very unpleasant smog.
Ecologists believe that the rapid increase in the number of vehicles on our
roads which has taken place without (as yet) any real restriction is fast
developing into an environmental crisis. Exhaust fumes are the major source
of atmospheric pollution by the motor vehicle.
The fumes, which are emitted, contain four main types of pollutant:
(i) Carbon monoxide:
This is a poisonous gas caused as a result of incomplete combustion;
(ii) Unburnt hydrocarbons:
These are caused by the evaporation of petrol and the discharge of only
partially burnt hydrocarbons;
(iii) Other gases and deposits:
Nitrogen oxides, tetra-ethyl lead and carbon dust particles;
(iv) Aldehydes:
Organic compounds containing the group CHO in their structures.
Hydrocarbon fumes are also emitted from the carburettor and petrol tanks, as
well as from the exhaust system.
The Royal Commission provides some interesting statistics on the extent of
air pollution. In 1970 an estimated 6 million tonnes of carbon monoxide were
emitted into the atmosphere. If estimates of vehicle ownership are correct,
then by the year 2010, this volume would increase to 14 million tonnes. This
figure, however, assumes the current state of engine and fuel technology. A
further and more detailed estimate of emissions is given by Sharp in Table
5.3.
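As a rough check on these figures, growth from 6 million tonnes in 1970 to a projected 14 million tonnes in 2010 implies a compound rate of roughly 2 per cent per year, as the following back-of-envelope Python sketch (illustrative only) shows:

# Implied compound annual growth of carbon monoxide emissions,
# from 6 million tonnes (1970) to 14 million tonnes (2010).
start, end, years = 6.0, 14.0, 2010 - 1970
annual_growth = (end / start) ** (1 / years) - 1
print(f"Implied annual growth: {annual_growth:.1%}")  # about 2.1%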
Fears of urban pollution by motor vehicles are greater in the United States
and Japan. In daytime Manhattan, for example, readings of pollutants of 25-
30 parts per million have been recorded – exposure has the same effect as
smoking two packets of cigarettes per day. The USA has imposed certain
restrictions on vehicle manufacturers and more stringent levels are proposed,
but as in the earlier case of traffic noise, increasing vehicle ownership levels
are liable to offset some of the benefits which accrue.
Role of Transport in Urban Growth
Transport is the underlying force in the location, growth, rank-size and
functional differentiation of cities. Adequate, cheap and efficient passenger
transport facilities are an essential requirement of urban life. Cities develop at
foci or break points of transportation. They are the nodes of route systems
and their importance closely reflects the degree to which they possess the
property which is called nodality.
Well-organised, inexpensive and efficient transport facilities are of the first
importance in the economic and social life of our cities and towns. Thus,
transportation, both intercity and intra-city, is of prime concern to both urban
and transport geographers. With growing urbanisation and the rapid growth
of transportation it is necessary not only to examine the present patterns of
transport but also their problems, and suggestions should be given to
the policy makers for the better planning of the urban transportation system.
Transport and Urban Growth:
During the last century, there has been a rapid growth of urbanisation,
resulting in the emergence of million-plus cities. The number of such cities is
constantly increasing not only in North America and Europe, but in other
parts of the world also. Transport developments were one of the major factors
in this growth. The urban growth was accompanied by three important
changes in the structure of the cities.
These are:
(i) The separation of work and residence,
(ii) The drain of resident population from the Central Business District, and
(iii) Areal expansion.
These three trends have been made possible by developments in transport.
But since these took place in the following well-marked stages, it’s possible
to trace their consequences in the structure of the existing urban sub-region.
The changes, well traced in developed countries, are:
(1) The walk to work
(2) The steam railway
(3) The electric train
(4) The motor-bus/electric railway
(5) The post-war suburb
(6) The central business district (CBD)
(7) Industrial areas
(8) The inner suburbs
(9) Medium-density outer suburbs
(10) Low-density outer suburbs
The above-mentioned changes in urban areas have resulted in rapidly
diversifying and intensifying circulation patterns created by journeys to work,
to school, for shopping and for recreation. Thus, transport planning and
traffic management become a prime concern for both town and transport
planners.
Urban Travel/Movement of People:
Travel is necessary to engage in spatially dispersed activities such as work,
shopping, visits to friends, etc. In economic terms, travel is an intermediate
good, because demand for travel is derived from the demand for other
spatially separated goods and services. Thus, one travels in order to engage in
work or to do shopping or see a film. Apart from sightseeing and some types
of holiday, rarely do people travel simply for the sheer pleasure of the trip.
Like other goods and services, travel has a cost. When an individual makes a
trip, he or she values the destination activity sufficiently to incur the trip cost.
The cost of travel usually has two components, time and money. Time spent
travelling is time not spent doing other things, hence those who value their
time highly will be willing to spend more money in order to save time by
using a faster mode. For example, business travellers may use air travel or
high-speed trains to economise on time spent travelling from one engagement
to another, while retirees and university students – for different reasons – are
among those who are quite willing to use cheaper and slower buses and local
trains.
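This trade-off between time and money is often summarised as a "generalised cost" of travel: the fare plus the journey time weighted by the traveller's value of time. The following Python sketch compares two hypothetical modes; the fares, journey times and values of time are illustrative assumptions rather than figures from the text:

def generalised_cost(fare, travel_time_hours, value_of_time_per_hour):
    # Money cost plus the money value of the time spent travelling.
    return fare + travel_time_hours * value_of_time_per_hour

# A slow, cheap bus versus a fast, dear express train (hypothetical figures).
bus = {"fare": 2.0, "time": 1.2}     # time in hours
train = {"fare": 8.0, "time": 0.5}

for value_of_time in (5.0, 30.0):    # e.g. a student versus a business traveller
    costs = {
        mode: generalised_cost(data["fare"], data["time"], value_of_time)
        for mode, data in (("bus", bus), ("train", train))
    }
    cheapest = min(costs, key=costs.get)
    print(value_of_time, cheapest, costs)
# The low value of time favours the bus; the high value of time favours the train.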
The urban personal movement is controlled by the principle of distance
decay, whereby we attempt to minimise the cost or inconvenience of travel
for a given purpose. Another principle is the conditioning of individual travel
behaviour by personal circumstances which dictate the need and ability to
engage in particular activities. Daniels and Warnes (1980) have developed a
hypothetical scheme of movements.
Urban travel pattern is mostly controlled by ‘travel demand’. Travel demand
can be examined at the level of individuals or households or at the level of
population segments. For this, models have been developed by Chapin (1975)
and Hagerstrand (1970). Chapin (1974) conceptualises activity patterns as an
outcome of demand and supply, demand being the motivation to take an
action, and supply being the opportunity to do so, as illustrated in Figure 5.1.
Motivation, or the desire to act, depends on the person’s household role and
individual characteristics. Opportunity depends on the availability of
resources required to act, and on the perceived value of the act.

Hagerstrand’s (1970) work focuses on the interplay of space and time: since
activity locations are distributed in space and time, time resources are
required both to access locations and to participate in the
activity itself.
Hagerstrand identified three categories of time and space constraints
that affect activity opportunities:
1. Capability constraints describe the limits of the physical system, the
transportation technology available and the fact that one can only be in one
place at a given time.
2. Coupling constraints describe the schedule dependences of activities, such
as the hours of operation of stores, or an individual’s work schedule.
3. Authority constraints describe the legal, social or political limitations
placed on access, such as the age requirement for a driver’s license.
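These three constraint categories can be expressed as a simple feasibility test for a single out-of-home activity. The following Python sketch is an illustration only; the activity attributes, travel speed and time windows are hypothetical:

from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    opens: float        # coupling constraint: hour the facility opens
    closes: float       # coupling constraint: hour the facility closes
    distance_km: float
    min_age: int = 0    # authority constraint, e.g. a driving-age rule

def feasible(activity, free_from, free_until, speed_kmh, traveller_age):
    # Capability: can the site be reached and left again within the free window?
    # Coupling: does the visit overlap the facility's opening hours?
    # Authority: is the traveller permitted to take part at all?
    travel_time = activity.distance_km / speed_kmh      # one-way, in hours
    arrive = free_from + travel_time
    must_leave = free_until - travel_time
    capability = must_leave > arrive
    coupling = arrive < activity.closes and must_leave > activity.opens
    authority = traveller_age >= activity.min_age
    return capability and coupling and authority

shop = Activity("supermarket", opens=9, closes=21, distance_km=6)
print(feasible(shop, free_from=18, free_until=20, speed_kmh=30, traveller_age=34))  # True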

8 Helpful Steps for Solving the Problems of Urban Transport
There is no readymade universally acceptable solution to the urban transport
problem. Planners, engineers, economists and transport technologists each
have their own views which, when combined, invariably produce a
workable strategy. Whatever policies are evolved should be considered, firstly, in
the light of the time it takes to implement them; secondly, all policies need to
be appraised in terms of their cost.
The following common steps may be helpful in solving the problems of
urban transport:
1. Development of Additional Road Capacity:
One of the most commonly adopted methods of combatting road congestion
in medium and small towns or in districts of larger centres is the construction
of bypasses to divert through-traffic. This practice has been followed
throughout the world including India. Mid-twentieth century planners saw the
construction of additional road capacity in the form of new or improved
highways as the acceptable solution to congestion within major towns and
cities.
Since the pioneer transportation studies of the 1950s and 1960s were carried
out in the US metropolitan areas, where the needs of an auto-dominated
society were seen to be paramount, the provision of additional road capacity
was accepted for several decades as the most effective solution to congestion,
and urban freeways were built in large cities such as Chicago, San Francisco
and Los Angeles.
Western European transport planners incorporated many of their American
counterparts’ concepts into their own programmes and the urban motorway
featured in many of the larger schemes (Muller, 1995). However, it soon
became evident that the generated traffic on these new roads rapidly reduced
the initial advantages.
The construction of an urban motorway network with its access junctions
requires large areas of land and the inevitable demolition of tracts of housing
and commercial properties. By the 1970s planners and policymakers came to
accept that investment in new highways dedicated to the rapid movement of
motor traffic was not necessarily the most effective solution to urban
transport problems.
2. Traffic Management Measures:
Temporary and partial relief from road traffic congestion may be gained from
the introduction of traffic management schemes, involving the reorganisation
of traffic flows and directions without any major structural alterations to the
existing street pattern. Among the most widely used devices are the extension
of one-way systems, the phasing of traffic-light controls to take account of
traffic variation, and restrictions on parking and vehicle loading on major
roads.
On multi-lane highways that carry heavy volumes of commuter traffic,
certain lanes can be allocated to incoming vehicles in the morning and to
outgoing traffic in the afternoon, producing a tidal-flow effect. Recent
experiments using information technology have been based upon intelligent
vehicle highway systems (IVHS), with the computerised control of traffic
lights and entrances to freeways, advice to drivers of alternative routes to
avoid congestion, and information on weather and general road conditions.
The IVHS can be linked up with advanced vehicle control systems, making
use of in-car computers to eliminate driver error and control automatic braking
and steering when accidents are imminent.
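A tidal-flow scheme of the kind described above is, in essence, a time-of-day rule for allocating lanes by direction. The following Python sketch illustrates the idea; the lane counts and peak-hour windows are hypothetical:

def tidal_flow_lanes(hour, total_lanes=6):
    # Allocate lanes on a radial road by time of day (illustrative figures).
    # The morning peak favours inbound traffic, the evening peak outbound,
    # and the allocation is balanced for the rest of the day.
    if 7 <= hour < 10:       # morning commute towards the centre
        inbound = total_lanes - 2
    elif 16 <= hour < 19:    # evening commute back out
        inbound = 2
    else:
        inbound = total_lanes // 2
    return {"inbound": inbound, "outbound": total_lanes - inbound}

for h in (8, 12, 17):
    print(h, tidal_flow_lanes(h))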
Traffic management has been extensively applied within urban residential
areas, where excessive numbers of vehicles produce noise, vibration,
pollution and, above all, accident risks, especially to the young. ‘Traffic
calming’ has been introduced to many European cities and aims at the
creation of an environment in which cars are permitted but where the
pedestrian has priority of movement. Carefully planned street-width
variations, parking restrictions and speed-control devices such as ramps are
combined to secure a safe and acceptable balance between car and pedestrian.
3. Effective Use of Bus Service:
Many transportation planning proposals are aimed specifically at increasing
the speed and schedule reliability of bus services, and many European cities
have introduced bus priority plans in an attempt to increase the attractions of
public transport. Bus-only lanes, with or against the direction of traffic flow,
are designated in heavily congested roads to achieve time savings, although
such savings may later be dissipated when buses enter congested inner-city
areas. Buses may also be given priority at intersections, and certain streets
may be restricted to buses only, particularly in pedestrianised shopping zones.
Where entirely new towns are planned, there is an opportunity to incorporate
separate bus networks within the urban road system, enabling buses to
operate free from congestion. In the UK, Runcorn New Town, built as an
overspill centre for the Merseyside conurbation, was provided with a double-
looped busway linking shopping centre, industrial estates and housing areas.
About 90 per cent of the town’s population was within five minutes’ walk of
the busway and operating costs were 33 per cent less than those of buses on
the conventional roads. Although the system is not used to the extent
originally envisaged, it successfully illustrates how public transport can be
integrated with urban development. Bus-only roads can also be adapted to
vehicle guidance systems, whereby the bus is not steered but controlled by
lateral wheels, with the resumption of conventional control when the public
road network is re-entered.
Such systems have been adopted in Adelaide and experiments have been
made in many other cities (Adelaide Transit Authority, 1988). The bus can
also be given further advantages in city centres where major retailing and
transport complexes are being redeveloped. The construction of covered
shopping malls and precincts can incorporate bus facilities for shoppers, and
reconstruction of rail stations can also allow bus services to be integrated
more closely with rail facilities.
The ‘park-and-ride’ system, now adopted by many European cities, enables
the number of cars entering city centres to be reduced, particularly at
weekend shopping peak periods. Large car-parks, either temporary or
permanent according to need, on the urban fringe are connected by bus with
city centres, with charges generally lower than central area parking costs.
The advantages of the bus over the car as an efficient carrier are secured, and
the costs of providing the fringe car-parks are much less than in inner-city
zones. Rail commuters can also be catered for in a similar manner with the
provision of large-capacity car-parks adjacent to suburban stations.
Many towns and cities have attempted to attract passengers back to bus
transport by increasing its flexibility and level of response to market demand.
In suburban areas the dial-a-ride system has met with partial success, with
prospective passengers booking seats by telephone within a defined area of
operation.
Such vehicles typically serve the housing areas around a district shopping
centre and capacity is limited, so they are best suited to operations in
conditions of low demand or in off-peak periods. Fares are higher than on
conventional buses since the vehicle control and booking facilities require
financing.
Experiments have also been made with small-capacity buses that can be
stopped and boarded in the same way as a taxi and which can negotiate the
complex street patterns of housing estates more easily than larger buses.
However, with the widespread introduction of scheduled minibus services,
the problem of overloading has been reduced.
4. Parking Restrictions:
As we have seen, it is not possible to provide sufficient space for all who
might like to drive and park in the central areas of large towns. Parking thus
must be restricted, and this is usually done by banning all-day parking by
commuters or making it prohibitively expensive. Restrictions are less severe
off-peak, so that shoppers and other short-term visitors who benefit the
economy of the centre are not deterred. Separate arrangements must be made
for local residents, perhaps through permits or reserved parking.
City authorities can thus control public car-parking places, but many other
spaces are privately owned by businesses and reserved for particular
employees. The effect of this is to perpetuate commuting to work by car. The
future provision of such space can be limited through planning permission for
new developments, as is done in London, but controlling the use of existing
private spaces raises problematical issues of rights and freedoms that many
countries are reluctant to confront.
Overall, parking restrictions have the advantage of being simple to
administer, flexible in application and easily understood by the public. Their
Achilles’ heel is enforcement, for motorists are adept at parking where and
when they should not and evading fines once caught.
Fines in many cities are so low that being caught once or twice a week works
out cheaper than paying the parking charge. Indeed, in London in 1982, a
survey showed that illegal parkers outnumbered legal ones and only 60 per
cent of the fines were ever paid. Parking controls have to be stringent and be
enforced if they are to make any significant contribution to reducing
congestion in the city.
5. Promoting the Bicycle:
The benefits of cycling have long been recognised. The bicycle is cheap to
buy and run and is in urban areas often the quickest door-to-door mode
(Figure 5.3). It is a benign form of transport, being noiseless, non-polluting,
energy- and space-efficient and non-threatening to most other road users. A
pro-cycling city would promote fitness among cyclists and health among
non-cyclists. Cycling is thus a way of providing mobility, which is cheap for
the individual and for society.
Advocates of Environmental Traffic Management (ETM) frequently cast
envious glances at the Netherlands, where cycle planning is set in the context
of national planning for sustainability. The Master Plan Bicycle, which aims
to increase bicycle-kilometers by at least 30 per cent between 1986 and 2010,
not only tackles the traditional concerns of cycle infrastructure and road
safety, but also addresses issues of mobility and modal choice; how to
encourage businesses to improve the role of the bicycle in commuting;
reducing bicycle theft and increasing parking quantity and quality; improving
the combination of cycling and public transport; and promoting consideration
of the bicycle amongst influential decision makers. These ‘pull’ measures are
part of a national transport strategy of discouraging car use, which ‘pushes’
motorists towards use of the bicycle.
6. Encouraging Walking:
Walking is the most important mode of transport in cities, yet frequently data
on it are not collected and many planners do not think of it as a form of
transport. As a result of this neglect, facilities provided specifically for
walking are often either absent or badly maintained and pedestrians form the
largest single category of road user deaths. There are social, medical,
environmental and economic reasons for promoting walking, for it is an
equitable, healthy, non-polluting and inexpensive form of transport.
Moreover, ‘foot cities’ tend to be pleasurable places in which to live, with
access to facilities within walking distance frequently cited as a key indicator
of neighbourhood quality of life.
7. Promoting Public Transport:
If ETM aims to shift trips away from cars, then attractive alternatives are
required. Cycling and walking may be appropriate for the shorter distances,
but transferring longer trips requires that a good quality public transport
system is in place to ensure that the city can function efficiently.
This means that:
1. Fares need to be low enough for poor people to be able to afford them;
2. There must be sufficient vehicles for a frequent service to be run
throughout the day;
3. Routes must reflect the dominant desire lines of the travelling public and
there should be extensive spatial coverage of the city so that no one is very
far from a public transport stop;
4. Speeds of buses need to be raised relative to cars by freeing them from
congestion;
5. It is not enough to provide public transport: it also has to be coordinated.
Multi-modal tickets may be one essential ingredient of a functional urban
transport system, but the key item is the integration of services by the
provision of connections between modes.
8. Other Measures:
Some of the other measures useful for urban transport planning are:
1. Restrictions on road capacity and traffic speeds,
2. Regulating traffic access to a link or area,
3. Charging for the use of roads on a link, or area basis,
4. Vehicle restraint schemes,
5. Rail rapid transit,
6. Transport coordination, and
7. Public transport improvement, etc.
Urban transport planning is a continuous process and, as Figure 5.4 shows,
it comprises the pre-analysis, technical analysis and post-analysis phases.
Once the goals are established, data need to be collected in order to prepare
land use, transport and travel inventories of the study area. The availability of
good quality, extensive and up-to-date data is an essential precondition for
the preparation of an urban transport plan. Accordingly, there will need to be
an inventory of the existing transport system and the present distribution of
land uses; a description of current travel patterns; and data on such matters as
population growth, economic activity, employment, income levels, car
ownership, housing and preferred travel modes.
In brief, the urban transport planning process has four principal characteristics –
quantification, comprehensiveness, systems thinking and a scientific
approach. Environmental traffic management should be adopted
in both developed and developing countries in order to check the increasing
problems of urban transport.
Vehicle To Vehicle Communication
Vehicle to Vehicle Communication (V2V) is an upcoming technology under
development by automotive giants such as Toyota and Tesla, as well as
numerous startups. It promises to make human driving safer and become an
enabler for autonomous driving by connecting vehicles and road
infrastructure via ad hoc networks.
V2V systems, once fully deployed, could reduce accidents caused by human
error by up to 70-80%, and can have a massive impact on congestion and
carbon emissions. However, there are technical, security and regulatory
concerns standing in the way of this important innovation.

What Is Vehicle To Vehicle Communication?


Vehicle to vehicle communication helps vehicles form spontaneous wireless
networks on the go and transfer data over an ad-hoc mesh network. Each
vehicle sends reports about traffic and road conditions, vehicle position and
speed, route direction, and loss of stability and brakes if these occur.
The information is added to the network and serves as a safety warning for
the other vehicles, very much as traffic radio stations provide information
for people. Individual cars use the information from the network to create a
dynamic view of their surroundings. A complete overview enables the car
to send danger alerts and encourage actions that prevent accidents and
reduce traffic congestion.
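To make the data exchange concrete, the following Python sketch shows the kind of status report a vehicle might broadcast, using the fields mentioned above (position, speed, heading, braking and stability warnings). The message layout, field names and JSON encoding are illustrative assumptions, not a real DSRC or SAE message format:

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class V2VStatusReport:
    # Illustrative per-vehicle report carrying the fields described above.
    vehicle_id: str
    timestamp: float
    latitude: float
    longitude: float
    speed_kmh: float
    heading_deg: float
    brake_warning: bool = False
    stability_loss: bool = False

def encode(report):
    # Serialise a report for broadcast over the ad hoc network.
    return json.dumps(asdict(report)).encode("utf-8")

def danger_alert(neighbour):
    # Flag a warning when a nearby vehicle reports hard braking or instability.
    return neighbour.brake_warning or neighbour.stability_loss

msg = V2VStatusReport("veh-42", time.time(), 52.52, 13.40, 63.0, 181.0, brake_warning=True)
packet = encode(msg)                 # bytes ready for transmission
print(danger_alert(msg))             # True: following vehicles should be alerted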

How V2V Communication Works


Vehicle to vehicle communication combines two types of vehicular
communication systems, called vehicle to vehicle (V2V) and Vehicle To
Infrastructure (V2I). Together, the systems form an interactive routing map.
Vehicle to vehicle communication is made possible due to the Internet of
Things (IoT) devices like GPS receivers, which let vehicles communicate
their location through the V2V system, and road sensors, which send data
about road conditions through the V2I system. DSRC (Dedicated Short
Range Communications) connects the two systems and ensures each vehicle
receives all the information it needs for safe navigation.
Main components of V2V systems:
· Dedicated short range communications (DSRC): Wireless
communication channels that work in the 5.9 GHz band with a
bandwidth of 75 MHz, designed for short-range use of about 1,000 m.
DSRC enables real-time communication between the vehicle to
vehicle system and the vehicle to infrastructure system. When the V2V
system communicates, it may say, “Firefighter approaching,” while
the V2I system warns, “Car on fire, 10 feet, road blocked.”
· GPS receiver: Provides the vehicle with real-time location
information, which helps vehicles navigate around objects and vehicles
on the road.
· Inertial navigation system: Acts as the vehicle’s orientation system,
helping each car navigate safely around cars and objects. The inertial
navigation system monitors and estimates positioning, speed, and
direction of vehicles with onboard sensors.

· Laser Illuminated Detection And Ranging (LiDAR): A laser
detection system that creates 3D maps and heat images of the vehicle’s
surroundings. LiDAR calculates the exact distance to objects by
measuring the time taken for a laser pulse to travel from the vehicle to
an object and back. It helps the vehicle orient itself with respect to the
objects around it and interact with infrastructure sensors.
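The range measurement just described is a simple time-of-flight calculation: distance = (speed of light × round-trip time) / 2. A minimal Python sketch with an illustrative pulse timing:

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_range(round_trip_seconds):
    # The pulse covers the vehicle-to-object distance twice,
    # so the one-way range is c * t / 2.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return received 200 nanoseconds after emission corresponds to about 30 m.
print(round(lidar_range(200e-9), 1))  # 30.0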

Benefits Of V2V
· Prevents crashes: Car accidents kill around 33,000 people in the United
States annually, and the numbers keep rising every year. Safety has become
a major concern, and despite efforts to raise awareness and educate on
safe driving, the main cause of car accidents remains human error.
Vehicle to vehicle communication technology can help mitigate
up to 70-80% of vehicle crashes involving human error.
· Improves traffic management and reduces congestion: Law
enforcement officials can use vehicle to vehicle communication to
monitor and manage traffic by using real-time data streaming from
vehicles to reduce congestion. V2V communication can help officials
re-route traffic, track vehicle locations, adapt traffic light schedules,
and address speed limits. Drivers using V2V communication can avoid
traffic jams and maintain a safe distance from other cars.
· Improves fuel efficiency via truck platooning: Vehicle to vehicle
communication enables fleets of trucks to drive in close formation. The
truck in the front acts as the leader of the pack, after which all trucks
follow. The trucks in the platoon remain in constant formation and
adjust their speed and location based on a constant stream of
communication. Tests have found that truck platooning can cut fuel
consumption by up to 5 percent for the lead truck and up to 10 percent
for the following trucks (a simple fleet-average calculation is sketched
after this list).
· Optimizes routes: Once vehicle to vehicle communication
technology is fully adopted commercially, every vehicle on the road
will benefit from better navigation. Open channel communication
between all vehicles will provide precise location, speed, and
positioning information that will help each vehicle optimize routes in
real time.
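Using the platooning figures quoted above, the fleet-average saving can be estimated with a short Python sketch; the platoon size is a hypothetical example and the per-truck percentages are the rough values from the text:

def platoon_fuel_saving(n_trucks, lead_saving=0.05, follower_saving=0.10):
    # Average fuel saving across a platoon: one lead truck saves about 5%
    # and each following truck about 10%, so the fleet average is the mean
    # of the per-vehicle savings.
    if n_trucks < 2:
        return 0.0
    savings = [lead_saving] + [follower_saving] * (n_trucks - 1)
    return sum(savings) / n_trucks

print(f"{platoon_fuel_saving(3):.1%}")  # a three-truck platoon averages about 8.3%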

Limitations Of V2V
Multiple factors hinder the adoption of vehicle to vehicle communication.
Commercial integration of the technology presents challenges in global,
public, and private sectors, from security issues to protocol standards, to the
concern that the frequency band allocated for the system can’t support a large
number of vehicles.
· Security risks: Can you imagine enjoying a smooth ride and then,
suddenly, you lose control to someone else? The doors lock, the wheel
sends you on a spin, the car engine revs up and passes the speed limit.
Vehicles with DSRC may be vulnerable to cyber attacks. The
consequences of a security breach in V2V-enabled cars could be
catastrophic, with multiple cars exposed to terrorist attacks. V2V
communication systems will require comprehensive security measures
in order to be fully integrated.
· Privacy issues: The V2V network collects and stores private
information about the drivers. Since there are no regulations at the
moment, the government and private companies have the ability to
track vehicles and monitor driving habits. Anyone with access to
Automated License Plate Readers (ALPR) will be able to track and
collect data about cars with vehicle to vehicle communication. If the
data is hacked, it can lead to identity theft and other security concerns.
· Liability concerns: Since V2V technologies are still new and there
aren’t clear laws and regulations, incidents involving V2V vehicles
may result in liability concerns. What if the instructions given by the
V2V communication system lead to an accident? You were only
following the system’s instructions when you crashed into the back of
a car. Whose fault is it—yours or the system vendor’s?
· Potentially distracting drivers: At the moment, vehicle to vehicle
communication systems need human intervention to work. The driver
needs to perform tasks similar to texting or talking on the phone to
operate the V2V communication system. The communication process
is still in the works, as it will need to be less distracting to the driver or
it may end up being a new cause of traffic accidents.
· Expensive: The cost of installing V2V communication systems in the
vehicles depends on the system complexity and vehicle model and can
range from $2,000 to $20,000.

The Future Of V2V Systems


Currently, vehicle to vehicle communication systems are only able to send
warnings to drivers. While the technology is still at an early stage of
development, the next generation of vehicle to vehicle systems are being
designed with autonomous driving in mind—with capabilities that will give
the system the power to take control of a vehicle in danger and take action to
prevent disaster.
V2V systems have the potential to save lives and improve driving efficiency,
generating massive improvements in global productivity. They can have an
impact on communities and cities by reducing congestion, and contribute
directly to the reduction of carbon emissions in urban centers.
Components of an Urban Transit System

The above figure represents a hypothetical urban transit system where each
component is designed to provide a specific array of services conferring
mobility. Among the defining factors of urban transit services are capacity,
frequency, flexibility, costs, and distance between stops:
· Metro (subway) system. A heavy rail system, often underground in
central areas (parts above ground at more peripheral locations), with
fixed routes, services, and stations. Transfers between lines or to other
components of the transit systems (mainly buses and light rail) are
made at connected stations. The frequency of services tends to be
uniform throughout the day, but increases during peak hours. Fares are
commonly access driven and constant, implying that once a user has
entered the system the distance traveled has no impact on the fare.
However, with the application of information technologies in many
transit fare systems, zonal/distance driven fares are becoming more
common.
· Bus system. Characterized by scheduled fixed routes and stops
serviced by motorized multiple passenger vehicles (45 – 80
passengers). Services are often synchronized with other heavy systems,
mainly metro and transit rail, where they act as feeders. Express
services (or bus rapid transit), using their own rights of way and only a
limited number of stops, can also be available, notably during peak
hours. Since metro and bus systems are often managed by the same
transit authority, the user’s fare is often valid for both systems.
· Transit rail system. Fixed rail comes in two major types. The first
is the tram system, which is mainly composed of streetcars
(tramways) operating mostly in central areas; trains can be
composed of up to 4 cars. The second is the commuter rail system,
composed of passenger trains mainly developed to service
peripheral/suburban areas through heavy rail (faster, with longer distances
between stations) or light rail (slower, with shorter distances
between stations). The frequency of services is strongly linked with
peak hours and traffic tends to be imbalanced because of the influence
of commuting. Fares tend to be separate from the transit system and
proportional to distance or service zones.
· Shuttle system. Composed of a number of privately (dominantly)
owned services using small buses or vans. Shuttle routes and
frequencies tend to be fixed, but can be adapted to fit new situations.
They service functions such as expanding mobility along a corridor
during peak hours, linking a specific activity center (airport, shopping
mall, university campus, industrial zone, hotel, etc.) or aimed at
servicing the elderly or people with disabilities.
· Paratransit system. A flexible and privately owned collective
demand-response system composed of minibuses, vans, or shared taxis
commonly servicing peripheral and low-density zones. Their key
advantage is the possibility of a door-to-door service, less loading and
unloading time, fewer stops, and more maneuverability in traffic. In
cities in developing economies, this system is informal, dominant, and
often services central areas because of the inadequacies or high costs of
the formal transit system.
· Taxi system. Comprises privately owned cars or small vans offering
an on-call, individual demand-response system. Fares are commonly a
function of a metered distance/time, but sometimes can be negotiated.
A taxi system has no fixed routes but is rather servicing an area where
a taxi company has the right (permit) to pick up customers. Commonly,
rights are issued by a municipality and several companies may be
allowed to compete on the same territory. When competition is not
permitted, fares are set up by regulations. Information technologies
have enabled new forms of on-demand taxi services with reservation
systems allowing the use of mobile devices.
What is a BRT Corridor?
A BRT corridor is a section of road or contiguous roads served by a bus route
or multiple bus routes with a minimum length of 3 kilometers (1.9 miles) that
has dedicated bus lanes. The BRT Standard is to be applied to specific BRT
corridors rather than to a BRT system as a whole, because the quality of BRT
in cities with multiple corridors can vary significantly*.

To be considered BRT, a corridor must:


· be at least 3 km in length with dedicated lanes,
· score 4 or more points on the dedicated right-of-way element,
· score 4 or more points on the busway alignment element; and
· score 20 or more points across all five BRT Basics elements (a simple
check encoding these thresholds is sketched below).
*See The Scorecard 2016 for more details.
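The qualification thresholds listed above translate directly into a simple check. The following Python sketch is illustrative; the function name and arguments are hypothetical, and in practice the scores would come from a full BRT Standard scorecard assessment:

def qualifies_as_brt(length_km, right_of_way_score, alignment_score, basics_total):
    # Minimum thresholds from the list above: at least 3 km of dedicated lanes,
    # 4+ points on dedicated right-of-way, 4+ points on busway alignment,
    # and 20+ points across all five BRT Basics elements.
    return (
        length_km >= 3.0
        and right_of_way_score >= 4
        and alignment_score >= 4
        and basics_total >= 20
    )

print(qualifies_as_brt(5.2, 6, 5, 24))   # True
print(qualifies_as_brt(2.5, 8, 8, 30))   # False: corridor too short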
The BRT Basics
There are five essential features that define BRT. These features most
significantly result in a faster trip for passengers and make traveling on
transit more reliable and more convenient.
Dedicated Right-of-Way
Bus-only lanes make for faster travel and ensure that buses are never delayed
due to mixed traffic congestion.
Busway Alignment
A center-of-roadway or bus-only corridor keeps buses away from the busy
curbside where cars are parking, standing, and turning.

Off-board Fare Collection


Fare payment at the station, instead of on the bus, eliminates the delay caused
by passengers waiting to pay on board.

Intersection Treatments
Prohibiting turns for traffic across the bus lane reduces delays caused to
buses by turning traffic. Prohibiting such turns is the most important measure
for moving buses through intersections – more important even than signal
priority.
Platform-level Boarding
The station platform should be level with the bus floor for quick and easy
boarding. This also makes it fully accessible for wheelchairs, disabled
passengers, strollers and carts, with minimal delays.
The Land Use – Transport System
Urban areas are characterized by social, cultural, and economic activities
taking place at separate locations forming an activity system. Some are
routine activities, because they occur regularly and are thus predictable, such
as commuting and shopping. There are also activities that tend to be irregular
and shaped by lifestyle (e.g. sports and leisure) or by specific needs (e.g.
healthcare). Such activities are usually related to the mobility of passengers.
In addition, there are production activities that are related to manufacturing
and distribution, whose linkages may be local, regional, or global. Such
activities are usually associated with the mobility of freight. Since activities
have a different location, their separation is a generator of movements of
passengers and freight, which are supported by transportation. Therefore,
transportation and land use are interrelated because of the locational and
interactional nature of urban activities.
Most economic, social, or cultural activities imply a multitude of functions,
such as production, consumption, and distribution. The urban land use is a
highly heterogeneous space, and this heterogeneity is in part shaped by the
transport system. There is a hierarchy in the distribution of urban activities
where central areas have emerged because of economic (management and
retail), political (seats of government), institutional (universities), or cultural
factors (religious institutions). Central areas have a high level of spatial
accumulation, and the corresponding land uses, such as retail. In contrast,
peripheral areas have lower levels of accumulation corresponding to
residential and warehousing areas.
The preferences of individuals, institutions, and firms have an imprint on land
use in terms of their locational choice. The representation of this imprint
requires a typology of land use, which can be formal or functional:
Formal land use representations are concerned with qualitative attributes of
space such as its form, pattern, and aspect and are descriptive in nature.
Functional land use representations are concerned with the economic nature
of activities such as production, consumption, residence, and transport, and
are mainly a socioeconomic description of space.
At the global level, cities consume about 3% of the total landmass. Although
the land use composition can vary considerably depending on the function of
a city, residential land use is the most common, occupying between 65 and
75% of the footprint of a city. Commercial and industrial land uses occupy 5-
15% and 15-25% of the footprint, respectively. There are also variations in
the built-up areas that are commonly a function of density, level of
automobile use, and planning practices. In automobile-dependent cities, 35 to
50% of the land-use footprint is accounted for by roads and parking lots. Within
a parking lot, about 40% of the surface is devoted to parking vehicles, while
the remaining 60% is for circulation and access to individual parking spaces.
These variations are the outcome of a combination of factors that reflect the
unique geography, history, economy, and planning of each city.
Land use, in both its formal and functional representations, implies a set of
relationships with other land uses. For instance, commercial land use
involves relationships with its suppliers and customers. While relationships
with suppliers will dominantly be related to the mobility of freight,
relationships with customers would also include the mobility of people. Thus,
a level of accessibility to both systems of circulation must be present for a
functional transportation/land use system. Since each type of land use has its
own specific mobility requirements, transportation is a factor of activity
location.
Within an urban system, each activity occupies a suitable, but not necessarily
optimal location, from which it derives rent. Transportation and land use
interactions mostly consider the retroactive relationships between activities,
which are land use related, and accessibility, which is transportation-related.
These relationships have often been described as a classic “chicken-and-egg”
problem since it is difficult to identify the cause of change; do transportation
changes precede land-use changes or vice-versa? There is a scale effect at
play in this relationship as large infrastructure projects tend to precede and
trigger land-use changes. In contrast, small scale transportation projects tend
to complement the existing land use pattern. Further, the expansion of urban
land use takes place over various circumstances such as infilling (near the
city center) or sprawl (far from the city center) and where transportation plays
a different role in each case. For infilling, the value of land becomes high
enough to justify developments despite potential congestion, while for
sprawl, accessibility has improved enough to justify developments.
Urban transportation aims at supporting transport demands generated by the
diversity of urban activities in a diversity of urban contexts. A key for
understanding urban entities thus lies in the analysis of patterns and processes
of the transport – land use system since the same processes may result in a
different outcome. This system is highly complex and involves several
relationships between the transport system, spatial interactions, and land use:
· Transport system. The transport infrastructures and modes that
support the mobility of passengers and freight. It generally expresses
the level of accessibility.
· Spatial interactions. The nature, extent, origins, and destinations of
the urban mobility of passengers and freight. They take into
consideration the attributes of the transport system as well as the land
use factors that are generating and attracting movements.
· Land use. The level of spatial accumulation of activities and their
associated levels of mobility requirements. Land use is commonly
linked with demographic and economic attributes.
A conundrum concerns the difficulties of linking a specific transportation
mode with specific land use patterns. While public transit systems tend to
be associated with higher densities of residential and commercial activities
and highways with lower densities, the multiplicity of modes available in
urban areas, including freight distribution, conveys an unclear and complex
relationship. Further, land use is commonly subject to zoning restrictions in
terms of the type of activities that can be built as well as their density.
Therefore, land use dynamics are influenced by planning restrictions and the
urban governance structure.
Urban Land Use Models
The relationships between transportation and land use are rich in theoretical
representations that have significantly contributed to regional sciences. They
can be investigated empirically through the observation and analysis of real-
world changes in the urban spatial structure. However, empirical
investigations cannot readily be used for simulation and forecasting purposes.
For that purpose, the relationships between transportation and land use can
also be investigated through models trying to synthesize the spatial structure
through a series of assumptions about urban dynamics.
Since transportation is a distance-decay altering technology, the spatial
organization is assumed to be strongly influenced by the concepts of location
and distance. Several descriptive and analytical urban land use models have
been developed over time, with increased levels of complexity. All involve
some consideration of transport in the explanations of urban land use
structures. Changes are commonly the outcome of location decisions such as
building a facility (residential building, warehouse, store, office tower, etc.)
or a transportation infrastructure (road, transit line, port, airport, etc.).
a. Early models
Von Thunen’s regional land use model is the oldest representation based on a
central place, the market town, and its concentric impacts on surrounding
agricultural land use. The model was initially developed in the early 19th
century (1826) for the analysis of agricultural land use patterns observed in
Germany. The concept of economic rent is used to explain a spatial
organization where different agricultural activities are competing for the
usage of the available land. The closer a location is to the market, the lower the transportation cost and the higher the rent for the available land. The underlying principles of
this model have been the foundation of many others, where economic
considerations, namely land rent and distance-decay, are incorporated. The
core assumption of the model is that agricultural land use is patterned in the
form of concentric circles around a market that consumes all the surplus
production, which must be transported. It is this transportation cost that bears
the most influence for the purpose the land will be used. The closer the
market, the higher the intensity and productivity of agricultural land use, such
as dairy products and vegetables, while the further away, less intensive uses
such as grain and livestock dominate. Many empirical concordances of this
model have been found, notably in North America.
Another range of early models, such as Weber’s industrial location model
developed in 1909, dealt with industrial location, in an attempt to minimize
the total transportation costs of accessing raw materials and moving the
output to the market, which indicated an optimal location for the activity to
take place. The main principle explored by early models is that locational
choice and the resulting land uses are primarily influenced by transportation
costs. This assumption is not surprising since, in the late 19th and early 20th centuries, land transportation options were limited and relatively costly.
b. Concentric urban land uses
The Burgess concentric model was among the first attempts to investigate
spatial patterns at the urban level in the first quarter of the 20th century.
Although the purpose of the model was to analyze social classes, it
recognized that transportation and mobility were important factors shaping
the spatial organization of urban areas and the distribution of residential
choices. The formal land use representation of this model is derived from
commuting distance from the central business district, creating concentric
circles. Each circle represents a specific socioeconomic urban landscape. This
model is conceptually a direct adaptation of Von Thunen’s model to
urban land use since it deals with a concentric representation, which
considers a transportation trade-off between the cost of commuting and the
cost of renting housing. Therefore, if the cost of commuting declines due to
improvements (e.g. new transit lines), the outcome is that more people can
afford to live further away, which results in urban sprawl. Even close to one
century after the concentric urban model was developed, spatial changes in
cities such as Chicago are still reflective of such a process.
c. Polycentric and zonal land uses
Sector and multiple nuclei land use models were developed to take into
account numerous factors overlooked by concentric models, namely the
influence of transport corridors (Hoyt, 1939) and multiple nuclei (Harris and
Ullman, 1945) on land use and growth. Both representations consider the
emerging impacts of motorization on the urban spatial structure, particularly
through the beginning of suburbanization and the setting of polycentric cities.
Such representations also consider that transportation infrastructures,
particularly terminals such as rail stations or ports, occupy specific locations
and are also land uses. In the second half of the 20th century, the construction
of airport and container port complexes created new nodes around which
urban land uses developed.
d. Hybrid land uses
Hybrid models are an attempt to include the concentric, sector, and nuclei
behavior of different processes in explaining urban land use. They try to
integrate the strengths of each approach since none of these appear to provide
a completely satisfactory explanation. Thus, hybrid models, such as that
developed by Isard (1956), consider the concentric effect of central locations
(CBDs and sub-centers) and the radial effect of transport corridors, all overlaid
to form a land use pattern. Hybrid representations are also suitable to explain
the evolution of the urban spatial structure as they combine different spatial
and temporal impacts of transportation on urban land use, such as concentric
and radial impacts.
e. Land use market
Land rent theory was also developed to explain land use as an outcome of a
market where different urban activities are competing to secure a footprint at
a location. The theory is strongly based on the market principle of spatial
competition where actors are bidding to secure and maintain their presence at
a specific location. The more desirable a location is, the higher its rent value
and the intensity of activities. Transportation, through accessibility and
distance-decay, is a strong explanatory factor on the land rent and its impacts
on land use. Conventional representations of land rent leaning on the
concentric paradigm are being challenged by structural modifications of
contemporary cities that were identified by hybrid models.
The applicability and dynamics of land use models are related to issues such
as the history, size, and the locational setting of a city. For instance,
concentric cities are generally older and of smaller size, while polycentric
cities are larger and relate to urban developments that took place more
recently. This also includes the impacts of public transit systems that can vary
according to the level of automobile dependence. While most of the
conceptual approaches related to the relationships between transportation and
land use have been developed using empirical evidence related to North
America and Western Europe, this perspective does not necessarily apply to
other parts of the world.
Dualism has been observed in cities in developing economies where
processes such as economic development and motorization are creating an
urban landscape that is common in advanced economies. However, an
informal landscape of shantytowns represents a land use structure that is not
effectively captured by conventional land use models. It remains to be seen to
what extent globalization will favor a convergence of land use patterns across
the world’s cities. Irrespective of the urban context, standard technologies
such as the automobile, construction techniques, information technologies,
and managerial practices (e.g. urban planning or supply chain management)
are likely to homogenize the land use structure of global cities.
Transportation and Urban Dynamics
Both land use and transportation are part of a dynamic system that is subject
to external influences and internal changes. Each component of the system
is continuously evolving due to changes in technology, policy, economics,
demographics, and even culture or values. Since transportation infrastructure
and real estate development require significant capital investments,
understanding their dynamics is of high relevance for investors, developers,
planners, and policymakers. As a result, the interactions between land use
and transportation are played out as the outcome of the many decisions made
by residents, businesses, and governments.
The field of urban dynamics has expanded the scope of conventional land use
models, which tended to be descriptive, by trying to consider the
relationships behind the evolution of the urban spatial structure. This focus
has led to a complex modeling framework, including a wide variety of
components such as the transportation network, housing locations, and
workplaces. Among the concepts supporting urban dynamics are retroactions,
whereby changes in one component will influence other associated
components. As these related components change, there is a feedback effect
on the initial component, which is either positive or negative. The most
significant components of urban dynamics are:
· Land use. The most stable component of urban dynamics, as changes
are likely to modify the land use structure over a rather long period of
time. This is to be expected since most real estate is built to last at least
several decades, and there are vested interests to amortize its usage
over that period with minimal changes outside repairs and
maintenance. The main impact of land use on urban dynamics is its function as a generator and attractor of movements.
· Transport networks. Networks are a rather stable component of
urban dynamics, as transport infrastructures are built for the long term.
This is particularly the case for large transport terminals and subway
systems that can operate for decades. For instance, many railway
stations and subway systems are more than one hundred years old and
continue to influence the urban spatial structure. The main contribution
of transport networks to urban dynamics is the provision of
accessibility where changes will impact mobility.
· Movements (flows). The most dynamic component of the system
since the mobility of passengers and freight reflects almost
immediately changes in the supply or demand. Mobility thus tends
more to be an outcome of urban dynamics than a factor shaping it.
· Employment and workplaces. They account for significant
inducement effects over urban dynamics since many models often
consider employment as an exogenous factor from which other aspects
of the urban dynamics are derived. This is specifically the case for
employment that is categorized as basic, or export-oriented, and which
is linked with specific economic sectors such as manufacturing.
Commuting is a direct outcome of the number of jobs and the location
of workplaces.
· Population and housing. They act as the generators of movements
because residential areas are generators of commuting flows. Since
there is a wide array of incomes, standards of living, and preferences,
this socioeconomic diversity is reflected in the urban spatial structure.
For representing complex urban dynamics, a number of transportation – land use models have been developed, with the Lowry model among the first (1964). Its core assumption is that
regional and urban growth (or decline) is a function of the
expansion (or contraction) of the basic sector, which is represented
as export-based employment that meets non-local demand. An
urban area produces goods and services, which are exported. This
employment is, in turn, having impacts on the employment of two
other sectors: retail and residential. Its premises were expanded by several other models, known as “Lowry-type” models, that were
applied to various cities. The core of these models relies on a
regional economic forecast that predicts and assigns the location of
the basic employment sector. As such, they are dependent on the
reliability and accuracy of macro-economic and micro-economic
indicators and forecasting. Such forecasting tends not to be very
accurate as it does not capture well the impacts of economic, social,
and technological changes, which also change the relevance of
indicators.
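The economic-base logic behind Lowry-type models can be illustrated with a minimal sketch in Python; the employment share and population ratio below are invented for illustration only and are not taken from any calibrated model:

# Minimal sketch of the economic-base logic used in Lowry-type models.
# All numerical values are assumed for illustration.
basic_employment = 50_000      # export-oriented (basic) jobs, from an exogenous forecast
service_share = 0.6            # assumed share of total employment in retail/service activities
population_per_job = 2.2       # assumed residents supported per job (workers plus dependents)

# Service employment depends on total employment, so the economic-base
# multiplier is 1 / (1 - service_share).
total_employment = basic_employment / (1 - service_share)
service_employment = total_employment - basic_employment
population = total_employment * population_per_job

print(f"Total employment: {total_employment:,.0f}")
print(f"Retail/service employment: {service_employment:,.0f}")
print(f"Population supported: {population:,.0f}")

A change in the basic sector, for instance a plant closure, propagates through the multiplier to retail employment and population, which is the kind of retroaction the Lowry framework attempts to capture.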
Another line of models emerged in the 1990s with the rise of computing power. Cellular automata are dynamic land use models developed on the principle that space can be represented as a grid
where each cell is a discrete land use unit. Cell states thus symbolize
land uses, and transition rules express the likelihood of a change
from one land use state to another. Because cells are symbolically
connected and interrelated (e.g. adjacency), models can be used to
investigate the dynamics, evolution, and self-organization of cellular automata land use systems. The cellular approach achieves a high level of spatial detail (resolution) and realism, and allows the simulation to be linked directly to visible outcomes on the regional
spatial structure. They are also readily implementable since
Geographic Information Systems are designed to work effectively
with grid-based (raster) spatial representations. Cellular automata
improve upon most transportation – land use models that are
essentially static as they explain land use patterns. Still, they do not
explicitly consider the processes that are creating or changing them.
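A minimal illustration of the grid-and-transition-rule principle, in Python; the two land use states and the single transition rule below are invented for the example and are far simpler than those of operational cellular automata models:

import random

# Minimal cellular automata land use sketch (illustrative only).
# States: 0 = vacant, 1 = developed. Rule (assumed): a vacant cell becomes
# developed if at least 3 of its 8 neighbours are already developed.
SIZE, STEPS = 20, 10
grid = [[1 if random.random() < 0.1 else 0 for _ in range(SIZE)] for _ in range(SIZE)]

def developed_neighbours(g, r, c):
    count = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
                count += g[r + dr][c + dc]
    return count

for _ in range(STEPS):
    new_grid = [row[:] for row in grid]
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] == 0 and developed_neighbours(grid, r, c) >= 3:
                new_grid[r][c] = 1      # transition rule: vacant -> developed
    grid = new_grid

print("Developed cells after", STEPS, "steps:", sum(map(sum, grid)))

Running the loop shows development spreading outward from the initially developed cells, a simple form of the self-organization mentioned above.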
The issue of how to articulate transportation and land use interactions remains, particularly in the current context of interdependence between local, regional, and global processes.
There is also the risk of unintended consequences (unaccounted
feedback) where a change may not result in an expected outcome.
For instance, improving road transportation infrastructure can have
the potential to create even more congestion as new users are
attracted by the additional capacity. Globalization has substantially
blurred the relationships between transportation and land use, as
well as their dynamics. The primary shift is that some factors once endogenous to a regional setting have become exogenous.
Many economic activities that provide employment and multiplier effects, such as manufacturing, are driven by forces that
are global in scope and may have little to do with regional
dynamics. For instance, capital investment in infrastructures and
facilities could come from external sources, and the bulk of the
output could be bound to international markets. In such a context, it
would be challenging to explain urban development processes
taking place in coastal Chinese cities, such as the Pearl River Delta,
since export-oriented strategies are among the most significant
driving forces. Looking at the urban dynamics of such a system
from an endogenous perspective would fail to capture driving forces
that are dominantly exogenous.
The relationships between transportation and land use that have been the focus of a long line of geographical representations,
including models, are mainly driven by economic and technological
changes. It is expected that ongoing changes related to
digitalization, such as e-commerce, and automation in
manufacturing and distribution, will continue to shape the urban
spatial structure in the 21st Century.
Transportation-Land Use Interactions
Transportation and land use are part of a retroactive feedback system.
Accessibility is shaped by the structure, capacity and connectivity of
transportation infrastructure, which is not uniform. Since accessibility differs,
this attribute has an impact on land use, such as the location of new activities,
their expansion or densification. These changes will influence activity
patterns in terms of their distribution and level of transport demand. This
change in the demand will shape the planning, maintenance and upgrade of
transportation infrastructure and services such as roads and public transit.
Again, these changes will further impact accessibility into a new cycle of
interactions.
The interactions between transportation and land use are also part of a
complex framework that includes economic, political, demographic and
technological changes. Several characteristics and processes have an
influence on the dynamics between transportation and land use. Changes in
transportation technology, investment and service characteristics can alter
overall accessibility levels as well as the relative accessibility of different
locations. The recent trend towards digitalization is providing a new impetus
to urban mobility such as on-demand services and the availability of large
amounts of information about the characteristics of urban travel. E-commerce
by itself is generating an entirely new set of patterns in urban freight
distribution, particularly with home deliveries.
Land use characteristics also affect activity patterns, such as zoning patterns
and regulations, the availability of land, public utilities and
telecommunication infrastructure. Of special importance are the changes in
trip generation, both for passenger and freight, which are influenced by
economic and demographic changes. Obviously, population growth is a vector for additional transportation demand, but so are rising incomes. Trip patterns may change in a number of ways, such as in the number of
trips, the timing of trips, their origin or destination, the mode, and trip
chaining. These changes in travel demand exert considerable influence on the
development of new transportation infrastructure or services. As such, the
interactions between transportation and land use are often referred to as a
“chicken-and-egg” conundrum since it is empirically difficult to demonstrate
if transportation changes precede land use changes, or vice-versa.
Land Requirement and Consumption
Historically, several environmental aspects impacted the organization and
regulation of the footprint taken by transportation activities. Although various
forms of pollution have been noted since Antiquity, it was in the 19th century, at the onset of the Industrial Revolution, that environmental considerations started to be turned into regulations. Zoning restrictions in central business districts
forbidding polluting industrial uses were among the first to be implemented.
Then, in the 20th-century, land uses judged to be incompatible were
separated. The most prevalent were heavy industries and residential areas,
which led to a series of zoning definitions of urban areas. Transportation
infrastructures, particularly roads, began to have a growing footprint on urban
land uses. However, this development is paradoxical since the construction of
highways was initially seen as a local benefit, providing mobility and
accessibility. It is only later, from the 1970s, that the perspective changed. As
providers of mobility, highways were also seen as generators of
environmental externalities such as land take, noise, and air pollution.
From the 1950s, urbanization has rapidly seen the expansion of urban land
uses, which means that a large city of 5 million inhabitants may stretch over
100 km (including suburbs and satellite cities) and may use an amount of
land exceeding 5,000 square km. Such large cities obviously cannot be
supported without a vast and complex transport system. Also, modal
choice has an important impact on land consumption. The preference for road
transportation has led to massive consumption of space with 1.5 to 2.0% of
the world’s total land surface devoted to road transportation, mainly for roads
rights of way and parking lots. The footprint of transportation has reached a
point where 30 to 60% of urban areas are taken by road transportation
infrastructure alone. In more extreme cases of road transportation
dependency, such as Los Angeles, this figure can reach 70%. Yet, for many
developing countries such as China and India, motorization is still in its early
stages. For China to have a level of motorization similar to that of Western Europe would imply a fleet of vehicles larger than the current global fleet.
From a land requirement perspective, full motorization would generate a
massive footprint.
Cities consume large quantities of land, and their growth leads to the notion
of metropolitan areas and, further, urban regions oriented along corridors.
With urbanization, expansion has allowed the reclamation of vast amounts of
land from rural activities towards other uses. Economic globalization and the
associated rise in the mobility of passengers and freight have required the
expansion of terminal facilities such as ports and airports that have a large
footprint. Also, the duplication of infrastructure, public and private alike,
have resulted in additional land requirements. This is notably the case for
large transport terminals such as ports and airports that were built because
they belonged to different administrative jurisdictions. The general aim was
to convey a high level of accessibility to answer mobility demands. While in
several regions, road transportation infrastructures are overused, a situation of
under-capacity exists in others. The formation of compact and accessible cities must contend with the already existing built environment as well as with several limits to development and urban renewal, such as temporal constraints and common limitations in capital availability.
The geographical growth of cities has not been proportional to the growth of
their population, resulting in lower densities and higher space consumption.
This also concerns manufacturing and freight distribution that have the
propensity to expand horizontally with the expansion of the transportation
and storage functions, particularly for distribution centers. An increase in the
quantity of energy consumed and waste generated has been the outcome.
Consequently, changes in urban land use and its transport system have
expanded the environmental footprint of cities.
Spatial Form, Pattern and Interaction
The structure of urban land use has an important impact on transport demand and on the capacity of transportation systems to answer such
mobility needs. This involves three dimensions influencing the environmental
impacts of transportation and land use:
· Spatial form. Relates to the spatial arrangement of a city, particularly
in terms of the setting and orientation of its axis of circulation. This
form thus conveys a general structure to urban transportation ranging
from centralized to distributed. The dominant influence has been
expansion and motorization. The resulting polycentric cities are
economically and functionally flexible but consume more energy.
· Spatial pattern. Relates to the organization of land use in terms of
location of major socio-economic functions such as residential,
commercial, and industrial uses. The prevailing trend has been a
growing specialization, disconnection, and fragmentation between land
uses. Also, different types of land use can be incompatible when located in proximity, since one becomes the source of externalities imposed on the other. For instance,
residential land use is incompatible with the majority of industrial,
manufacturing, warehousing, and transport terminal activities. They
generate noise and congestion externalities to which residents are
highly susceptible. In such a context, buffers, which apply barrier effects to promote physical separation, can help mitigate incompatible land uses.
· Spatial interaction. Relates to the nature and the structure of
movements generated by urban land uses. The prevailing trend has
been a growth in urban interactions in terms of their volume,
complexity, and average distance.
The location of activities such as residence, work, retail, production, and
distribution is indicative of the required travel demand and the average
distance between activities. With specialized land use functions and spatial
segregation between economic activities, interactions are proportionally
increasing. It is over the matter of density that the relationships between
transportation, land use, and the environment can be the most succinctly
expressed. The higher the density level, the lower the level of energy
consumption per capita, and the relative environmental impacts. A
remarkable diversity of urban densities is found around the world, which is
reflective of different geographical settings, planning frameworks, and levels
of economic development. This complexity is compounded by how density
changes in relation to the city center.
Paradoxically, the outward expansion of cities and suburbanization has
favored a relatively uniform distribution of land use densities, notably in
cities with prior low-density levels. In recent decades, the average density of
several large metropolitan areas has declined by at least 25%, implying
additional transport requirements to support mobility demands. Further,
residence/work separation is becoming more acute as well as the average
commuting time and distance. It is consequently increasingly challenging to
provide urban transit services at an efficient cost. This underlines that the
future of sustainable mobility will require accommodating personal mobility
requirements, even if this mobility is considered less sustainable than
collective mobility.
An important effect of land use pattern and density on the local environment
concerns the heat island effect. It is an outcome of differences in albedo
between an urban surface composed of buildings and paved surfaces (roads,
parking lots) and the natural landscape. The urban landscape absorbs more
heat during the day, which is released during the night and can result in
ambient temperatures up to 5 degrees Celsius higher than normal. The land
use pattern plays a role in the heat island effect with grid patterns (or other
ordered patterns) retaining more heat than other disordered patterns, mostly
because buildings and other structures reabsorb the heat emitted by others.
A higher level of integration between transportation and land use, particularly
density, often results in increased accessibility levels without necessarily
increasing the need for automobile travel. The slow transformation of urban
land uses, with annual rates lower than 2%, makes it difficult to establish
sound transportation/land use strategies that could have effective impacts
over a short time period. As it is generally market forces that shape such
changes, it is uncertain which drivers of change would significantly impact
the transformation of urban land use.
Environmental Externalities of Land Use
As a spatial structure, land use is linked to a number of externalities that
impose significant economic, social, and environmental costs that
communities are less willing to assume. This has led to various land use
regulations, mostly under the umbrella of “smart growth” initiatives, to increase density and promote modes other than the automobile. Strategic
indicators that are recurrent in evaluating the environmental externalities of
land use involve vehicle-mile (km) traveled, transit ridership, and average
commuting time to the workplace, which are all spatial interaction variables.
The last half a century has been associated with a declining role of public
transit, a more disorganized spatial structure, and the prevalence of
suburbanization. This trend could be reversed with two possible and
interdependent paths of land use changes unfolding, depending upon the
concerned urban setting:
· Densification. It involves a more rational and intensive use of the
existing land uses to minimize the environmental footprint and the
level of energy consumption. Initiatives such as smart growth are
trying to change the urban planning framework towards forms and
densities that are more suitable for walking, non-motorized modes, and
public transit. If this occurs in proximity to a transit station, the term
transit-oriented development is used to characterize the densification
process. Yet this implies higher levels of capital investment and the
provision of an adequate public transit service since, in a car-dependent
context, densification easily leads to congestion and other externalities.
· Devolution. Due to economic and demographic trends, several cities
could lose a share of their population, imposing a rationalization of
urban land uses. In industrial regions of Europe and North America,
several cities have lost a share of their economic base and,
correspondingly, their population. This involves dismantling urban
infrastructure and closing sections or whole neighborhoods, leading to
the emergence of urban forests and even forms of urban agriculture.
Detroit is a salient example since the population of the city dropped by more than half, from 1.8 million in 1950 to 713,000 in 2010. Yet, the
population of Detroit’s metropolitan area has remained relatively stable
since the 1970s, hovering around 4.2 million. This implies that the
process of devolution is very location-specific.
What could shape land use towards a more environmentally beneficial
structure in the future is uncertain since many policies appear to be not
particularly useful. Since it took 30 to 50 years for North American,
Australian, and to some extent, European cities to reach their current patterns
of automobile dependency, it may take the same amount of time to reach a
new equilibrium if specific conditions apply. This transition could even be
more complex in developing economies where the forces of motorization are
gaining momentum with economic development. Since the price of energy is
an important component in the cost of personal mobility, energy costs are
likely to be a significant force shaping urban development. If the energy
component does not change significantly, congestion and infrastructure capacity limitations will likely play a more important role. Consequently,
the environmental impacts of transportation and land use are likely to stay
prevalent for several decades.
Vehicle Design Data Characteristics
Vehicle Chassis And Frame Design
The automotive industry is one of the largest and most innovative of all industries. Almost all cars and vehicles are now made by mass production, but in the very beginning cars were produced with the same hand-craftsmanship techniques that had been used for centuries for the construction of horse-drawn carriages. Because a vehicle contains a large number of components whose assembly relies on joining operations, the procedure had to change. The change was started by Henry Ford, who developed the techniques of mass production, building on techniques first used for the production of rifles during the American Civil War. Line production was based on special tracks, so that the car chassis moved through successive assembly stations supplied with components from overhead stores. Thus the motor industry changed from small workshops producing hand-built vehicles into huge corporations with mass production techniques and component supply chains. A second main factor changing production processes and techniques was the development of vehicle construction: from the first constructions based on horse-drawn carriages with wooden chassis and framework to the modern constructions of steel, lightweight steel (or even ULSAB – Ultra Lightweight Steel Auto Body) or fiber composites. In addition to the direct engineering issues, the vehicle designer needs to consider policy issues such as pollution and recycling. Research on engine and vehicle body materials in terms of environmental impact and safety is therefore constantly conducted. Novel materials cause changes in construction due to their different physical and mechanical properties [1-7].
The first commercial vehicles (lorries and buses) were based on steam-powered carriages; a typical example was the steam-engined road vehicle derived from railway technology. By the time of the First and Second World Wars the commercial road vehicle industry had developed considerably. One of the most specific groups of commercial vehicles is special heavy vehicles, which often operate in off-road conditions, in different environments and on irregular ground surfaces. The absorption of vibration, vehicle dynamics and stability in terrain therefore become major factors in the development of projects and new constructions. These types of vehicles carry a large range of loads, from general cargo to concrete, and truck bodies and trailers can be designed, developed, produced and installed for any kind of use according to special and individual requirements [8-10].
2. Vehicle design
The very first stage of vehicle production has to be design. Design can be considered as an activity to find the best (optimal) solution to an engineering problem within certain constraints. The whole process takes a solution from conception to evaluation, covering safety, comfort, aesthetics, ergonomics, manufacture and cost. Designing is an integrated, multi-stage operation, which must be flexible enough to allow modifications for specific problems and for requirements that arise during the process. One of the management techniques used for design is IPPD (Integrated Product and Process Development). IPPD facilitates meeting cost and performance objectives from product concept through production, including field support. There are 10 key tenets of IPPD:
- customer focus,
- concurrent development of products and processes,
- early and continuous life cycle planning,
- maximize flexibility for optimization and use of contractor unique
approaches,
- encourage robust design and improved process capability,
- event-driven scheduling,
- multidisciplinary teamwork,
- empowerment,
- seamless management tools,
- proactive identification and management of risk.
The requirements for modern cars and heavy vehicles generate many tasks in vehicle design. Besides fundamental tasks such as the proper selection of the engine, transmission system, steering, suspension and brakes in terms of safety, utility and comfort, the material properties and structural geometry become more and more important. Noise, vibration and harshness also become important requirements for the customer, and the role of endurance and durability in the design and manufacture of a reliable vehicle has to be emphasized.
The requirements for utility vehicles have increased as well: the range and scope of potential uses and applications have become very wide. Thus customers consider these vehicles not only in terms of utility but, increasingly, in terms of comfort and safety, as with passenger cars. These are the reasons for innovative solutions in modern vehicles, one of which is the intermediate frame. The requirements for the intermediate frame are focused on the loads (goods) or the vehicle use, and on the stability of the body (superstructure). While the vehicle application or load requirements determine the shape and volume of the construction, stability is realized through the stiffness and spring/damping properties of the connections. Torsionally stiff bodies must not restrict the torsional flexibility of the chassis frame: they must be connected to the chassis so that the connection is torsionally flexible, in accordance with the specifications of the body/equipment mounting directives. Fixed bearings and pivot bearings are used for this purpose. Given the variety of special bodies, the mounting of implements and bodies becomes very important [13].
The process of chassis design consists of:
- load case,
- chassis type,
- structural analysis.
A very important issue in vehicle design is the selection of materials according to the required experimental and analytical data and maintenance properties (i.e. corrosion resistance). Nowadays a wide range of alloys is available with different properties, heat treatments and manufacturing possibilities, and these materials have now replaced steel and copper alloys in many vehicle components. New materials such as aluminum alloys, polymers and composite materials are increasingly used even for vehicle bodywork (body panels). The first stage is therefore to determine which group of metals or other materials can be used according to experimental and analytical data. Depending on the application, the design engineer has to consider the material and mechanical properties with respect to the forces expected during the operation of the vehicle; a sufficiently strong force will produce a definite amount of deformation. The designers and engineers therefore have to understand and compare many parameters of the materials. For example:
- Strength is the ability of a material to withstand a force without permanent deformation.
- Compressive strength is the ability to withstand a pushing force.
- Torsional strength is the ability to withstand a twisting force.
Other important properties are: tensile strength, elasticity, plasticity, hardness, toughness, dimensional stability and durability.
3. Vehicle chassis and frame design
One of the fundamental and most important stages of the design process is the proper development of the chassis and frame of the vehicle, especially for special heavy vehicles. The design of the vehicle chassis has to start from an analysis of load cases. There are five basic load cases to consider:
- bending case: loading in vertical plane, the x-z plane due to the weight of
components distributed along the vehicle frame which cause bending about
the y-axis;
- torsion case: vehicle body is subjected to a moment applied at the axle
centerlines by applying upward and downward loads at each axle. These
loads result in twisting action or torsion moment about the longitudinal x-
axis;
- combined bending and torsion loads;
- lateral loading: generated at the tire to ground contact patch. These loads are
balanced by centrifugal forces;
- fore and aft loading: inertia forces generated when the vehicle accelerates and decelerates [14,15].
The axes of the vehicle and directions of basic movements have been
depicted in Fig. 1.
Fig. 1. The axes of the vehicle and directions of basic movements [16]
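As a rough numerical illustration of the bending and torsion load cases (all figures below are assumed and are not taken from any particular vehicle), a short Python sketch:

# Illustrative static load-case estimates (all input values are assumed).
g = 9.81                      # gravitational acceleration, m/s^2

# Bending case: a concentrated load (e.g. rear-mounted equipment) of 800 kg
# placed 1.2 m behind the rear axle produces a bending moment over that axle of
m_overhang = 800              # kg
overhang = 1.2                # m
bending_moment = m_overhang * g * overhang          # N*m
print(f"Bending moment over rear axle: {bending_moment / 1000:.1f} kNm")

# Torsion case: opposite vertical loads of 5 kN applied at the wheels of a 1.8 m
# track produce a torsion moment about the longitudinal x-axis of
wheel_load = 5_000            # N
track = 1.8                   # m
torsion_moment = wheel_load * track                 # N*m
print(f"Torsion moment about x-axis: {torsion_moment / 1000:.1f} kNm")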
The worst-case loading conditions, as well as overloading, must be considered for the static load case. Load factors are usually applied to the static load case, especially for vehicles with a long overhang containing concentrated loads (e.g., rear-engine buses), since such loads result in high bending moments over the rear axle. The various dynamic conditions considered for the determination of the axle loads are discussed in [17]. The dynamic loads caused by vehicle-pavement interaction are either moving loads or random loads; a number of publications reporting field measurements and theoretical investigations have shown that vehicle vibration-induced pavement loads are moving stochastic loads [18, 19].
Torsional stiffness is also an important characteristic in chassis design because of its impact on ride safety and comfort [20]. Thus a goal of the design is to increase the torsional stiffness without significantly increasing the weight of the chassis.
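Torsional stiffness is usually quoted as the applied torque divided by the resulting twist angle. As a simple worked example with assumed figures: if a torque of 4,000 N·m applied between the axles twists the frame by 2°, the torsional stiffness is 4,000 / 2 = 2,000 N·m per degree; the design goal stated above is to raise this value without a proportional increase in chassis weight.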
One of the most interesting methods for chassis and frame design is the SSS (Simple Structural Surfaces) method. It is a simple analytical approach for initial analyses of a preliminary design concept. The SSS method is used to analyze simple structures using thin plates as structural members, each of which can be considered rigid only in its own plane. The representation of the vehicle structure by SSS was described in the literature years ago. Some examples have been depicted in Fig. 2, where η is the number of plane structural elements (subassemblies).
Fig. 2. The representations of the vehicle structure by SSS
There are two key assumptions made when analyzing a structure. The first is that the structure is statically determinate. This assumption limits the accuracy, especially in vehicle design where a number of redundant structures are used. The second assumption is that a sheet is unable to react out-of-plane loads; it has zero stiffness to loads applied perpendicular to its surface. When analyzing a vehicle, a systematic approach is used where the sheets are analyzed one at a time, starting with the sheets containing the input loads, which were calculated separately. The end result will be the edge loads of each sheet, as labeled in Fig. 3. The same method can be applied to model structures of commercial or special vehicles. An example of a simple SSS van structure is depicted in Fig. 4.
Fig. 3. Edge load diagram – half of the vehicle
Some disadvantages of the SSS method in vehicle design are:
- problems in the design concept,
- flexibility in the rear door frame of a simple box results in the torsion moment being carried entirely in the floor or chassis frame,
- if the surrounding frame has low stiffness, the glass may be loaded excessively.
For commercial and special vehicles the utility of the vehicle during operation is very important; it becomes a decisive factor especially for special heavy vehicles, which are dedicated to specific operating conditions and a defined purpose. Thus there are many types of chassis, starting with the historical ladder frames used by early motor cars. These frames carry all the loads and can accommodate a large variety of body shapes; they have good bending strength and stiffness but very low torsional stiffness, and are still used in light commercial vehicles such as pick-ups. Another type is the cruciform frame, which can carry torsional loads because no element of the frame is subjected to a torsional moment; it is made of two straight beams that carry only bending loads. The torque tube backbone (tube-frame) chassis is made of a closed box section acting as the main backbone, with transverse beams resisting lateral loads while the backbone resists bending and torsion. The advantage of using tubes rather than open channel sections is that they resist torsional forces better. A typical chassis for a race car is the space frame, which is a lightweight rigid structure constructed from interlocking struts in a geometric pattern; the beam elements carry either tension or compression loads owing to the inherent rigidity of the triangulated frame. In both a space frame and a tube-frame chassis, the suspension, engine, and body panels are attached to a skeletal frame of tubes, and the body panels have little or no structural function. Other modern structure types are the monocoque (single-shell), the punt structure, the perimeter space frame, the integral body structure, and the modern integral body-in-white.
Some examples of chassis and frames for special vehicles have been depicted
in Figs. 5-8.
Fig. 4. SSS Van structure, where SSS 1-6: carry bending load, SSS 5-10:
carry torsion load
Fig. 5. SSS ladder frame [1]
Fig. 6. Cruciform frame
Fig. 7. Back bone structure (Lotus) [24]
An interesting solution to extend the possibilities of regular frames or chassis for a specific or defined purpose and utility is the intermediate frame. As an example, the intermediate frame for special vehicles operated in terrain presented in Fig. 9 has been developed by the PS Szcześniak company.
Fig. 8. Space-frame – Formula 1 [25]
Fig. 9. Intermediate frame – PS Szcześniak solution
4. Conclusions
The process of designing chassis and frames, especially for special heavy vehicles, is a fundamental stage of the overall production process. Many vehicle properties are strictly connected with the chassis or frame: the dynamic properties and the static or geometric parameters of the vehicle depend on it. Vibration phenomena in heavy vehicles are also an important issue; while the isolation of dynamic responses in cabs is well understood, many investigations are still needed for the isolation of loads. These issues are very important for vehicle designers and engineers and have to be given focused consideration in all production processes, especially when defining the assumptions and construction of chassis or frames.
The review of solutions in the construction of chassis and frames allows some assumptions to be made for the PS Szcześniak project within the research programme DEMONSTRATOR+ (Supporting scientific research and development works in demonstration scale); the title of the project is Develop High Mobility Wheeled Platform for special applications.
Gross vehicle weight rating
The gross vehicle weight rating (GVWR), or gross vehicle mass (GVM) is
the maximum operating weight/mass of a vehicle as specified by the
manufacturer[1] including the vehicle's chassis, body, engine, engine fluids,
fuel, accessories, driver, passengers and cargo but excluding that of any
trailers.[2] The term is used for motor vehicles and trains.
The weight of a vehicle is influenced by passengers, cargo, even fuel level, so
a number of terms are used to express the weight of a vehicle in a designated
state. Gross combined weight rating (GCWR) refers to the total mass of a
vehicle, including all trailers. GVWR and GCWR both describe a vehicle that
is in operation and are used to specify weight limitations and restrictions.
Curb weight describes a vehicle which is "parked at the curb" and excludes
the weight of any occupants or cargo. Dry weight further excludes the weight
of all consumables, such as fuel and oils. Gross trailer weight rating specifies
the maximum weight of a trailer and the gross axle weight rating specifies the
maximum weight on any particular axle.
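As a simple worked illustration of how these ratings relate (the figures are assumed): a light truck with a GVWR of 3,500 kg and a curb weight of 2,200 kg can carry at most 3,500 − 2,200 = 1,300 kg of occupants and cargo; if it also tows a trailer, the combined weight of truck, load and trailer must stay within the GCWR, and the load on each axle must stay within its gross axle weight rating.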
Speed limits
Speed limits in India vary by state and vehicle type. In April 2018, the Union
Ministry of Road Transport and Highways fixed the maximum speed limit on
expressways at 120 km/h, for national highways at 100 km/h, and for urban
roads at 70 km/h for M1 category of vehicles. The M1 category includes
most passenger vehicles that have less than 8 seats. State and local
governments in India may fix lower speed limits than those prescribed by the
Union Ministry.
Default speed limits by state, in km/h (for motorcycles, light motor vehicles (cars), medium passenger vehicles and medium goods vehicles):
- Andhra Pradesh / Telangana State[2]: motorcycle 50; light motor vehicle no default limit (65 for transport vehicles); medium passenger vehicle 65; medium goods vehicle 65.
- Maharashtra[3]: motorcycle 50; light motor vehicle no default limit (65 for transport vehicles); medium vehicles 65.
- Delhi[4]: motorcycle 30-70; light motor vehicle 25-50; medium passenger vehicle 20-40; medium goods vehicle 20-40.
- Uttar Pradesh[5]: motorcycle 40; light motor vehicle 40; medium passenger vehicle 40; medium goods vehicle 40.
- Haryana[6]: motorcycle 30/50; light motor vehicle 50; medium passenger vehicle 40/65; medium goods vehicle 40/65.
- Karnataka: motorcycle 50; light motor vehicle no limit (60 for cars in Bangalore except on Airport Road where it is 80, and 100 for cars only on NH 66 between Mangalore and Udupi)[7] (65 for transport vehicles); medium passenger vehicle 60 (KSRTC); medium goods vehicle 60.
- Punjab[8]: motorcycle 35/50; light motor vehicle 50/70/80; medium vehicles 45/50/65.
- Tamil Nadu: motorcycle 50; light motor vehicle 60.
- Kerala[9]: motorcycle 30 (near schools) / 45 (on ghat roads) / 50 (city, state highways and all other places) / 60 (national highways) / 70 (4-lane highways); light motor vehicle 30 (near schools) / 45 (on ghat roads) / 50 (city) / 70 (all other places) / 80 (state highways) / 85 (national highways) / 90 (4-lane highways); medium passenger vehicle 30-40 (near schools, on ghat roads, in the city) / 50-65 (all other places, state and national highways) / 70 (4-lane highways); medium goods vehicle 30-40 (near schools, on ghat roads, in the city) / 50-65 (all other places, state and national highways) / 70 (4-lane highways).
What is the maximum acceleration?
The maximum acceleration is obtained at the instant immediately before the driven tyre starts to slip, since the static friction between tyre and road is greater than the sliding friction.
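As a hedged rule of thumb: if the driven wheels carry the whole vehicle weight and the tyre-road friction coefficient μ is about 0.8, the friction-limited acceleration is roughly a_max ≈ μ × g ≈ 0.8 × 9.81 ≈ 7.8 m/s²; when only part of the weight rests on the driven axle, the attainable acceleration scales down in roughly the same proportion.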
TYPES OF GEARS
What is a gear ?
A gear is a kind of machine element in which teeth are cut around cylindrical
or cone shaped surfaces with equal spacing. By meshing a pair of these
elements, they are used to transmit rotations and forces from the driving shaft
to the driven shaft. Gears can be classified by shape as involute, cycloidal and
trochoidal gears. Also, they can be classified by shaft positions as parallel
shaft gears, intersecting shaft gears, and non-parallel and non-intersecting
shaft gears. The history of gears is long, and their use already appears in the writings of Archimedes in ancient Greece, centuries B.C.
A sample box of various types of gears
Types of Gears
Various types of gears
There are many types of gears such as spur gears, helical gears, bevel gears,
worm gears, gear rack, etc. These can be broadly classified by looking at the
positions of axes such as parallel shafts, intersecting shafts and non-
intersecting shafts.
It is necessary to accurately understand the differences among gear types to
accomplish necessary force transmission in mechanical designs. Even after
choosing the general type, it is important to consider factors such as:
dimensions (module, number of teeth, helix angle, face width, etc.), standard
of precision grade (ISO, AGMA, DIN), need for teeth grinding and/or heat
treating, allowable torque and efficiency, etc.
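A brief worked example of these basic dimensions (values assumed purely for illustration): for a spur gear of module m = 2 mm with z = 40 teeth, the reference (pitch) diameter is d = m × z = 80 mm; meshing it with a 20-tooth pinion of the same module gives a speed ratio of 40 / 20 = 2 and a centre distance of (80 + 40) / 2 = 60 mm.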
It is best to start with the general knowledge of the types of gears as shown
below. But in addition to these, there are other types such as face gear,
herringbone gear (double helical gear), crown gear, hypoid gear, etc.
· Spur Gear
Gears having cylindrical pitch surfaces are called cylindrical gears.
Spur gears belong to the parallel shaft gear group and are cylindrical
gears with a tooth line which is straight and parallel to the shaft. Spur
gears are the most widely used gears that can achieve high accuracy
with relatively easy production processes. They have the characteristic
of having no load in the axial direction (thrust load). The larger of the
meshing pair is called the gear and the smaller is called the pinion.
A sketch of spur gears
· Helical Gear
Helical gears are used with parallel shafts similar to spur gears and are
cylindrical gears with winding tooth lines. They have better teeth
meshing than spur gears and have superior quietness and can transmit
higher loads, making them suitable for high speed applications. When
using helical gears, they create thrust force in the axial direction,
necessitating the use of thrust bearings. Helical gears come with right
hand and left hand twist requiring opposite hand gears for a meshing
pair.
A sketch of helical gears
· Gear Rack
Same sized and shaped teeth cut at equal distances along a flat surface
or a straight rod is called a gear rack. A gear rack is a cylindrical gear
with the radius of the pitch cylinder being infinite. By meshing with a
cylindrical gear pinion, it converts rotational motion into linear motion.
Gear racks can be broadly divided into straight tooth racks and helical
tooth racks, but both have straight tooth lines. By machining the ends of
gear racks, it is possible to connect gear racks end to end.
A sketch of gear rack
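As a small worked example (values assumed): a pinion of module 2 mm with 20 teeth has a pitch diameter of 40 mm, so one full revolution of the pinion advances a meshing straight rack by π × 40 ≈ 125.7 mm.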
· Bevel Gear
Bevel gears have a cone shaped appearance and are used to transmit
force between two shafts which intersect at one point (intersecting
shafts). A bevel gear has a cone as its pitch surface and its teeth are cut
along the cone. Kinds of bevel gears include straight bevel gears, helical
bevel gears, spiral bevel gears, miter gears, angular bevel gears, crown
gears, zerol bevel gears and hypoid gears.
A sketch of bevel gears
· Spiral Bevel Gear
Spiral bevel gears are bevel gears with curved tooth lines. Due to higher
tooth contact ratio, they are superior to straight bevel gears in
efficiency, strength, vibration and noise. On the other hand, they are
more difficult to produce. Also, because the teeth are curved, they cause
thrust forces in the axial direction. Within the spiral bevel gears, the one
with the zero twisting angle is called zerol bevel gear.
A sketch of spiral bevel gears
· Screw Gear
Screw gears are a pair of same hand helical gears with the twist angle of
45° on non-parallel, non-intersecting shafts. Because the tooth contact is
a point, their load carrying capacity is low and they are not suitable for
large power transmission. Since power is transmitted by the sliding of
the tooth surfaces, it is necessary to pay attention to lubrication when
using screw gears. There are no restrictions as far as the combinations
of number of teeth.
A sketch of screw gears
· Miter Gear
Miter gears are bevel gears with a speed ratio of 1. They are used to
change the direction of power transmission without changing speed.
There are straight miter and spiral miter gears. When using the spiral
miter gears it becomes necessary to consider using thrust bearings since
they produce thrust force in the axial direction. Besides the usual miter
gears with 90° shaft angles, miter gears with any other shaft angles are
called angular miter gears.
A sketch of miter gears
· Worm Gear
A screw shape cut on a shaft is the worm, the mating gear is the worm wheel, and the pair, mounted on non-intersecting shafts, is called a worm gear.
Worms and worm wheels are not limited to cylindrical shapes. There is
the hour-glass type which can increase the contact ratio, but production
becomes more difficult. Due to the sliding contact of the gear surfaces,
it is necessary to reduce friction. For this reason, generally a hard
material is used for the worm, and a soft material is used for worm
wheel. Even though the efficiency is low due to the sliding contact, the
rotation is smooth and quiet. When the lead angle of the worm is small,
it creates a self-locking feature.
A sketch of worm gears
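The self-locking behaviour mentioned above can be expressed as a simple rule of thumb (neglecting secondary effects such as vibration): the worm wheel cannot back-drive the worm when the lead angle γ of the worm is smaller than the friction angle, i.e. when tan γ < μ. With a coefficient of friction of about 0.05 between a hardened steel worm and a bronze wheel, this corresponds to lead angles below roughly 3°.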
· Internal gear
Internal gears have teeth cut on the inside of cylinders or cones and are
paired with external gears. The main uses of internal gears are for
planetary gear drives and gear type shaft couplings. There are
limitations in the number of teeth differences between internal and
external gears due to involute interference, trochoid interference and
trimming problems. The rotational directions of the internal and
external gears in mesh are the same while they are opposite when two
external gears are in mesh.
A sketch of internal gear
An overview of gears
(Important Gear Terminology and Gear Nomenclature in this picture)
● Worm
● Worm wheel
● Internal gear
● Gear coupling
● Screw gear
● Involute spline shafts and bushings
● Miter gear
● Spur gear
● Helical gear
● Ratchet
● Pawl
● Rack
● Pinion
● Straight bevel gear
● Spiral bevel gear
Gears can be grouped into three major categories according to the orientation
of their axes, plus a fourth group of related devices:

1. Parallel Axes / Spur Gear, Helical Gear, Gear Rack, Internal Gear
2. Intersecting Axes / Miter Gear, Straight Bevel Gear, Spiral Bevel
Gear
3. Nonparallel, Nonintersecting Axes / Screw Gear, Worm, Worm
Gear (Worm Wheel)
4. Others / Involute Spline Shaft and Bushing, Gear Coupling, Pawl
and Ratchet
Resistance to motion
This is the resistance a vehicle faces while attempting to move from a stall
condition or while accelerating. This resistance must be overcome by the
powerplant of the vehicle in order to sustain motion. When the power
produced is smaller than the resistance to motion, the vehicle gradually
slows down. Most of us have experienced a bicycle slowing down when we
stop pedalling. The bicycle also slows down if we go uphill or if the wind
blows from the front. A poorly inflated tire likewise makes the vehicle work
harder and slow down. These are the resistances that force the vehicle to
slow down.

Broadly the resistances can be categorized into the following categories:


● Aerodynamic drag
● Gradient resistance
● Rolling resistance
● Inertia
All the above produce a restraining force working against the tractive force.
The tractive force must be greater than or equal to the resistive forces in order
to maintain a sustainable motion. We can balance them as
F = Freq = FA + FG + FR + FI
where
FA = force due to air resistance
FG = force due to the gradient of a slope
FR = force due to rolling resistance
FI = force due to moving or static inertia
The last one FI comes into the picture only when the vehicle accelerates or
decelerates, while the first three always offer a resistance even when the
vehicle is moving at a constant speed.
Air resistance/ Aerodynamic drag:
When a body travels within a dense medium, the molecules of the medium
collide with the moving object and thereby absorb some of the energy. This is
felt as a resistance to the moving object. If the medium is denser, then the
resistance is more. Also, when the object moves at a faster speed, the
resistance increases roughly with the square of the speed. Mathematically it
can be expressed as:

FA = ½ × Cd × ρ × A × V²
where
Cd = coefficient of drag
ρ = density of the air
A = frontal area of the vehicle
V = velocity of the vehicle relative to the air
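
As a quick numerical illustration (a sketch added here, not part of the original text), the drag force on a typical passenger car can be evaluated directly from this formula; the drag coefficient, frontal area and speed below are assumed values.

# Aerodynamic drag force FA = 1/2 * Cd * rho * A * V^2
rho = 1.2            # air density [kg/m^3], typical value near sea level
cd = 0.30            # assumed drag coefficient [-]
area = 2.2           # assumed frontal area [m^2]
v_kph = 100.0        # assumed vehicle speed [km/h]

v = v_kph / 3.6                          # convert to [m/s]
f_a = 0.5 * cd * rho * area * v ** 2     # drag force [N]
print(f"Aerodynamic drag at {v_kph:.0f} km/h: {f_a:.0f} N")   # roughly 300 N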
Gradient resistance:

A truck moving uphill


When the vehicle travels uphill, a component of its weight works in a
direction opposite to its motion. If some energy is not supplied to overcome
this backward force, then the vehicle would slow down, stall and roll
backwards. If the vehicle is travelling uphill at a slope of θ, then the weight of
the vehicle, W has two components: one perpendicular to the road surface
(with a value W·Cos θ) and the other along the road surface (with a value
W·Sin θ). The component along the road surface is the one that tries to
restrict the motion.

The gradient resistance is given by: FG = W·Sin θ


Rolling resistance
When a vehicle rolls, it rolls with its tires in contact with the road surface.
The relative motion of the two surfaces produces friction. Further, neither
the road nor the tire is perfectly rigid. Hence, both flex slightly under the
load. As there is a gradual deformation at the contact between the road and
the tire, greatest at the bottom most point and least at the entry and exit
points, the slip of the tire w.r.t. the road produces another type of loss of
energy which results in a resistance.
Rolling resistance is composed of the following components:
● Tire Rolling resistance: FR,T
● Road rolling resistance: FR,Tr

● Resistance due to tire slip angle: FR,α


● Resistance due to bearing friction and residual braking: FR,fr
Hence the rolling resistance offered may be written as:
FR = FR,T + FR,Tr + FR,α + FR,fr
The tire rolling resistance FR,T is a result of the resistance due to flexure of the
tire, air resistance on the tire and friction of tire with the road. These three
can be summed up and written as:
FR,T = FR,T,flex + FR,T,A + FR,T,fr
In a simplified manner, the total rolling resistance can be related to the
vertical load on the wheels and can be written as:

Coefficient of rolling friction, kR = FR / FZ,W

where FZ,W is the vertical load on the wheels.


Vehicle Power Requirements
How much power does it take to go 60 mph? 80 mph? The weight of the car
and its speed are both factors that affect the amount of power required.
force balance of a vehicle is shown in Figure 4.

Figure 4. Force Balance


The forces acting on the car are caused by internal, tire, and air resistance.
The resultant of these forces, the total drag force, FD, can be estimated by the
following equation:

FD = cR·m·g + ½·ρ·cD·A·V²

Where:
cR = coefficient of rolling resistance
cD = drag coefficient
m = mass of vehicle [kg]
A = frontal surface area [m²]
V = vehicle velocity [m/s]
g = 9.8 m/s²
ρ = density of air, 1.2 kg/m³ at STP
The coefficients of rolling resistance and drag are determined from
experiment. A typical value for the coefficient of rolling resistance is 0.015.
The drag coefficient for cars varies; a value of 0.3 is commonly used.
The power output requirement can be determined from the drag force given
above and the vehicle velocity.

P = FDV

Given the mass of a vehicle and its frontal surface area, a plot can be drawn
showing the power requirements for a range of speeds. The Power
Requirement Applet plots this relationship.
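
The same relationship can be tabulated numerically. The short sketch below (with assumed vehicle parameters; it is not the applet itself) evaluates the drag force and the power P = FD·V over a few speeds.

# Steady-speed power requirement: FD = cR*m*g + 0.5*rho*cD*A*V^2,  P = FD * V
c_r, c_d = 0.015, 0.30      # rolling resistance and drag coefficients [-]
mass = 1400.0               # assumed vehicle mass [kg]
area = 2.2                  # assumed frontal area [m^2]
g, rho = 9.8, 1.2           # gravity [m/s^2] and air density [kg/m^3]

for v_mph in (30, 60, 80):
    v = v_mph * 0.44704                                    # mph -> m/s
    f_d = c_r * mass * g + 0.5 * rho * c_d * area * v ** 2
    p = f_d * v                                            # required power [W]
    print(f"{v_mph:3d} mph: drag force {f_d:5.0f} N, power {p / 1000:5.1f} kW")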

The power required to accelerate to a given speed is also of interest. More
power is required for more acceleration. The Acceleration Applet compares
the power required to accelerate from 0 to 60 mph for a range of times.
The basic definition for the acceleration force (neglecting drag !) is given by:
F = ma

Assuming that the force required to accelerate a vehicle from 0 to 60 mph can
be determined from the above equation, then the power necessary to
accelerate to a given velocity is:
P = maV
Where:
m = mass of the vehicle
a = acceleration = ΔV/Δt
ΔV = 60 − 0 = 60 mph = 26.82 m/s
V = final velocity, 60 mph
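
Using this simplified relation (drag neglected), the 0 to 60 mph calculation can be sketched as follows; the vehicle mass and the acceleration times are assumed values for illustration.

# Power needed (drag neglected) to accelerate to 60 mph in a given time:
# a = dV/dt, F = m*a, P = F*V evaluated at the final velocity.
mass = 1400.0               # assumed vehicle mass [kg]
v_final = 26.82             # 60 mph expressed in [m/s]

for t in (6.0, 8.0, 10.0):  # assumed 0-60 mph times [s]
    a = v_final / t         # average acceleration [m/s^2]
    f = mass * a            # accelerating force [N]
    p = f * v_final         # power at the final velocity [W]
    print(f"0-60 mph in {t:4.1f} s: a = {a:4.2f} m/s^2, P = {p / 1000:6.1f} kW")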
Force Required to Accelerate a Load
The load on a hydraulic cylinder (or motor) consists of these three components:
(1) Normal load resistance, where fluid power is converted into mechanical work exerted against the load.
(2) Friction resistance, where some of the fluid power is expended in overcoming friction.
(3) Inertia, where fluid power is needed to get a massive load into motion, sometimes very quickly.
As far as Items (1) and (2) are concerned, acceleration to final velocity would be instantaneous as soon as the fluid power is applied to the cylinder (or motor). If the load has high inertia due to high mass, as in Item (3), then an additional amount of pressure must be supplied to accelerate the load from standstill to final velocity in a desired interval of time.
This power due to extra pressure is carried as kinetic energy while the load is moving at a constant velocity, and may come back into the system as shock and heat when the load is stopped, unless it can be absorbed by the load in the form of work.
The purpose of this data sheet is to show how to calculate the extra pressure or torque needed in a hydraulic system to accelerate an inertia load, Item (3), from standstill to its final velocity in a given time, assuming the pressure needed for Items (1) and (2), the work load and the friction resistance, has already been calculated or assumed.

Calculating for Inertia Load ...

Before calculating the extra PSI needed to rapidly accelerate this


vertically moving cylinder, the normal PSI needed to move the load at a
constant speed must be calculated by the usual method: Load weight ÷
Piston Area. Allowance should also be made for friction in the ways or
guides if significant.
Use the following formula to calculate the extra PSI for acceleration to
a final velocity in a specified time:
(a). F = (V x W) ÷ (g x t) Lbs, in which:
F is the accelerating force, in lbs, that will be needed.
V is the final velocity, in feet per second, starting from standstill.
W is the load weight, in lbs.
g is the acceleration of gravity, used to convert weight into mass, always 32.16 ft/sec²
t is the time, in seconds, during which acceleration will take place.
If the cylinder bore is known, the accelerating force for its piston can be
found directly from the formula:
(b). PSI = V x W ÷ (A x g x t), in which:
A is piston area in square inches. Other symbols are same as above.

Problem Data - Vertical Cylinder with Inertia Load

Steady Load = 35,000 lbs.
Cylinder Bore = 4" (Piston Area = 12.57 sq. ins.)
Initial Velocity = 0; Final Velocity = 12 ft/sec.
Time Required to Accelerate = 2 seconds

Example of Inertia Calculation... Use the problem data in the box to solve
for the total PSI needed on the vertically moving cylinder not only to lift the
given load, but to accelerate it to its final velocity in the specified time. Or to
accelerate it from a lower to a higher velocity.
PSI for Steady Movement... 35,000 lbs. (load weight) ÷ 12.57 (piston area)
= 2784 PSI needed to raise the load.
PSI for Acceleration... PSI = (12 x 35,000) ÷ (12.57 x 32.16 x 2) = 520 PSI.
Total PSI... The cylinder must be provided with 2784 + 520 = 3304 PSI to
meet all conditions of the problem.
Non-Inertia Loads... No significant extra PSI is needed to accelerate work
loads which consist almost entirely of frictional resistance and negligible
mass.
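
The worked example above can be reproduced with a few lines of code; this is simply the same arithmetic, using the problem data given in the box.

# Total PSI on the vertical cylinder: steady lifting pressure plus the extra
# pressure required to accelerate the inertia load.
load_weight = 35000.0    # [lbs]
piston_area = 12.57      # [sq. in], 4-inch bore
v_final = 12.0           # final velocity [ft/sec]
t_accel = 2.0            # time allowed for acceleration [sec]
g = 32.16                # acceleration of gravity [ft/sec^2]

psi_steady = load_weight / piston_area                             # about 2784 PSI
psi_accel = (v_final * load_weight) / (piston_area * g * t_accel)  # about 520 PSI
print(f"steady {psi_steady:.0f} PSI + acceleration {psi_accel:.0f} PSI "
      f"= total {psi_steady + psi_accel:.0f} PSI")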

Moment of Inertia of Rotating Load...


The moment of inertia, indicated with symbol J in the formula in the
opposite column, must be calculated before accelerating torque can be
figured. Examples for three common shapes are shown here. Many other
shapes are shown in machinery handbooks.

HOLLOW CYLINDER (Pipe)...


J (moment of inertia) of a pipe about an axis running lengthwise is:
J = W × (R² + r²) ÷ 2g  Inch-Lbs-Sec², in which:
W is total weight of pipe in pounds
R is outside radius of pipe in inches
r is inside radius of pipe in inches
g is acceleration of gravity, always 32.16

SOLID CYLINDER...
J (moment of inertia) of a solid cylinder about an axis running lengthwise
is:
J = W × R² ÷ 2g  Inch-Lbs-Sec², in which:
W is total weight of the cylinder in pounds
R is outside radius of the cylinder in inches
g is acceleration of gravity, always 32.16

PRISM...
J (moment of inertia) of a prism of uniform cross section about the axis
shown is:
J = W × (A² + B²) ÷ 12g  Inch-Lbs-Sec², in which:
W is total weight of prism in pounds
A and B are cross section dimensions in inches
g is acceleration of gravity, always 32.16
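
The three shapes translate directly into short helper functions; the sketch below simply applies the formulas above as given (the weights and dimensions used are assumed example values).

# Moment of inertia J about the lengthwise axis, per the formulas above.
g = 32.16                                    # acceleration of gravity, as used in the data sheet

def j_pipe(w, r_outer, r_inner):             # hollow cylinder (pipe)
    return w * (r_outer ** 2 + r_inner ** 2) / (2 * g)

def j_solid_cylinder(w, r):                  # solid cylinder
    return w * r ** 2 / (2 * g)

def j_prism(w, a, b):                        # prism of uniform cross section
    return w * (a ** 2 + b ** 2) / (12 * g)

print(j_pipe(100.0, 6.0, 5.0))               # assumed 100 lb pipe, 6 in and 5 in radii
print(j_solid_cylinder(100.0, 6.0))          # assumed 100 lb cylinder, 6 in radius
print(j_prism(100.0, 4.0, 6.0))              # assumed 100 lb prism, 4 in x 6 in section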
Vehicle acceleration and maximum speed
modeling and simulation
In engineering, simulations play a critical part in the design phase of any
system. Through simulation we can understand how a system works, how it
behaves under predefined conditions and how the performance is affected by
different parameters.
In this article we are going to use a simplified mathematical model of the
longitudinal dynamics of a vehicle, in order to evaluate the acceleration
performance of the vehicle (0-100 kph time) and determine the maximum
speed.
To validate the accuracy of our mathematical model, we are going to
compare the simulation result with the advertised parameters of the vehicle.
Input data
The vehicle parameters are taken from a rear-wheel drive (RWD) 16MY
Jaguar F-Type:

Engine: 3-litre V6 DOHC, aluminium-alloy cylinder block and heads
Maximum torque [Nm]: 450
Engine speed @ maximum torque [rpm]: 3500
Maximum power [HP]: 340
Engine speed @ maximum power [rpm]: 6500
Transmission type: automatic, ZF8HP, RWD
Gear ratios: 1st 4.71, 2nd 3.14, 3rd 2.11, 4th 1.67, 5th 1.29, 6th 1.00, 7th 0.84, 8th 0.67
Final drive ratio (i0): 3.31
Tire symbol: 295/30ZR-20
Vehicle mass (curb) [kg]: 1741
Aerodynamic drag coefficient, Cd [-]: 0.36
Frontal area, A [m²]: 2.42
Maximum speed [kph]: 260
Acceleration time 0-100 kph [s]: 5.3


From data available on the internet, we can also extract the static engine
torque values at full load, function of engine speed:

Engine speed points (full load) [rpm]:        1000   2020   2990   3500   5000   6500
Engine static torque points (full load) [Nm]:  306    385    439    450    450    367
Vehicle layout
The powertrain and drivetrain of a RWD vehicle consist of:
§ engine
§ torque converter (clutch)
§ automatic (manual) transmission
§ propeller shaft
§ differential
§ drive shafts
§ wheels

Image: Vehicle rear-wheel drive (RWD) powertrain diagram


For simplicity, for our simulation example we are going to make the
following assumptions:
§ the engine is only a source of torque, without any thermodynamics or
inertia modeling
§ the engine is running at full load all the time
§ the effect of the torque converter is not considered
§ the gear shifting is performed instantaneously, disregarding any related
slip or dynamics
§ the effects of the propeller shaft and drive shafts are not considered
§ the tires have constant radius and the effect of slip is not considered
The mathematical model is going to be implemented as a block diagram in
Xcos (Scilab), based on the following equations.

Mathematical equations
The vehicle movement is described by the longitudinal force balance:

Ft = Fi + Fs + Fr + Fa

where:
Ft [N] – traction force
Fi [N] – inertial force
Fs [N] – road slope force
Fr [N] – road load force
Fa [N] – aerodynamic drag force
The traction force can be regarded as a “positive” force, trying to move the
vehicle forward. All the other forces are resistant, “negative” forces which
oppose motion, trying to slow down the vehicle.
As long as the traction force is higher than the sum of the resistances, the
vehicle accelerates. When the traction force is smaller than the sum of the
resistant forces, the vehicle decelerates (slows down). When the traction
force is equal to the sum of the resistant forces, the vehicle maintains a
constant speed.

The traction force [N] depends on the engine torque, the engaged transmission
gear ratio, the final drive ratio (differential), the driveline efficiency and
the wheel radius:

Ft = (Te · ix · i0 · ηd) / rwd

where:
Te [Nm] – engine torque
ix [-] – transmission gear ratio
i0 [-] – final drive ratio
ηd [-] – driveline efficiency
rwd [m] – dynamic wheel radius
The dynamic wheel radius [m] is the radius of the wheel when the vehicle is
in motion. It is smaller than the static wheel radius rws because the tire is
slightly compressed during vehicle motion.

The static wheel radius [m] is calculated based on the tire symbol
(295/30ZR-20). For a better understanding of the calculation method read the
article How to calculate the wheel radius.
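
As an illustration, the static radius of the 295/30ZR-20 tire of the example vehicle can be estimated from the tire symbol as rim radius plus sidewall height; the 2% reduction used below for the dynamic radius is an assumed value.

# Wheel radius estimated from the tire symbol 295/30ZR-20:
# section width 295 mm, aspect ratio 30 %, rim diameter 20 in.
width_mm = 295.0
aspect = 0.30
rim_in = 20.0

sidewall_mm = width_mm * aspect                            # sidewall height [mm]
r_static = (rim_in * 25.4 / 2 + sidewall_mm) / 1000.0      # static radius [m]
r_dynamic = 0.98 * r_static                                # assumed slightly smaller when rolling
print(f"static radius {r_static:.3f} m, dynamic radius {r_dynamic:.3f} m")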
The inertial (resistant) force [N] is given by the equation:

Fi = mv · av

where:
mv [kg] – total vehicle mass
av [m/s2] – vehicle acceleration
The total vehicle mass [kg] consists of the curb vehicle mass, the driver’s
mass and an additional mass factor. The mass factor takes into account the
effect of the rotating components (crankshaft, gearbox shafts, propeller shaft,
drive shafts, etc.) on the overall vehicle inertia.

mv = fm · mcv + md

where:
fm [-] – mass factor
mcv [kg] – curb vehicle mass
md [kg] – driver mass
The road slope (resistant) force [N] is given by the equation:

Fs = mv · g · sin(αs)

where:
g [m/s2] – gravitational acceleration
αs [rad] – road slope angle
The road load (resistant) force [N] is given by the equation:

Fr = mv · g · cr · cos(αs)

where:
cr [-] – road load coefficient

The aerodynamic drag (resistant) force [N] is given by the equation:

Fa = ½ · ρ · cd · A · v²

where:
ρ [kg/m3] – air density at 20 °C
cd [-] – air drag coefficient
A [m2] – vehicle frontal area
v [m/s] – vehicle speed
The traction force is limited by the wheel friction coefficient in the contact
patch. The maximum friction force [N] that allows traction is approximately the
product of the friction coefficient and the vertical load on the driven wheels:

Ff,max = μ · Fz,driven

where μ [-] is the tire-road friction coefficient and Fz,driven [N] is the
vertical load on the driven axle.
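
Putting the equations above together, the acceleration performance can be sketched with a simple script. This is not the Xcos (Scilab) block diagram described in the text, only a rough forward-Euler integration in Python of the same longitudinal equations using the F-Type data; the mass factor, driver mass, driveline efficiency, road load coefficient and shift logic are assumed values, and traction limits are ignored, as per the listed assumptions.

# Simplified full-load acceleration simulation on a flat road (instant gear shifts).
import numpy as np

rpm_pts = [1000, 2020, 2990, 3500, 5000, 6500]          # full-load engine speed points [rpm]
tq_pts = [306, 385, 439, 450, 450, 367]                 # full-load engine torque points [Nm]
gears = [4.71, 3.14, 2.11, 1.67, 1.29, 1.00, 0.84, 0.67]
i0, eta_d, r_wd = 3.31, 0.90, 0.336                     # final drive, assumed efficiency, dynamic radius [m]
m_v = 1.05 * 1741 + 75                                  # assumed mass factor 1.05 and 75 kg driver
c_r, c_d, area, rho, g = 0.011, 0.36, 2.42, 1.2, 9.81   # assumed road load coefficient

def engine_torque(rpm):
    return np.interp(np.clip(rpm, rpm_pts[0], rpm_pts[-1]), rpm_pts, tq_pts)

v, t, dt, gear = 0.5, 0.0, 0.01, 0                      # small initial speed avoids zero rpm
while v < 100 / 3.6:                                    # integrate until 100 kph
    rpm = v / r_wd * gears[gear] * i0 * 60 / (2 * np.pi)
    if rpm > 6500 and gear < len(gears) - 1:            # crude up-shift at the rev limit
        gear += 1
        continue
    f_t = engine_torque(rpm) * gears[gear] * i0 * eta_d / r_wd     # traction force
    f_res = c_r * m_v * g + 0.5 * rho * c_d * area * v ** 2        # road load + aero drag
    v += (f_t - f_res) / m_v * dt
    t += dt
print(f"0-100 kph in about {t:.1f} s")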
Curve Interpolation Methods and Options
The RMC supports several interpolation methods and options to satisfy a
wide range of curve applications.
Interpolation Methods
The interpolation method is specified in the Properties pane in the Curve
tool, or in the Curve Add (82) command. Choose from one of the methods
below. The Cubic (2) method is the most common method and creates the
smoothest motion.
· Cubic (2)
The curve will smoothly go through all points. This is the most
common interpolation method. This method will create smooth
motion.
On position axes, the Velocity Feed Forward and Acceleration Feed
Forward will apply to cubic interpolated curves, but higher order
gains should not be used, such as the Jerk Feed Forward, Double
Differential Gain, and Triple Differential Gain.
On pressure or force axes, the Pressure/Force Rate Feed Forward will
apply.

· Linear (1)
The curve will consist of straight-line segments between each point.
Because the velocity is not continuous, a position axis will tend to
overshoot at each point. This type of curve is typically more suitable
for pressure or force axes.
On position axes, the Target Acceleration will always be zero.
Therefore, the Acceleration Feed Forward will have no effect for
linear interpolated curves.
· Constant (0)
The curve will consist of step jumps to each point. The curve will not
be continuous. This method is seldom used, but may be useful in
applications where step jumps are desired, such as some blow-
molding systems. This method requires that the axis not be tuned very
tightly, or the axis may oscillate and Output Saturated errors may
occur. The Position I-PD control algorithm is recommended for
following constant interpolated curves.
On position axes, the Target Velocity and Target Acceleration will
always be zero. Therefore, the Velocity Feed Forward and
Acceleration Feed Forward will have no effect for constant
interpolated curves.
On pressure or force axes, the Target Rate will always be zero.
Therefore, the Pressure/Force Rate Feed Forward will have no effect
for constant interpolated curves.

Interpolation Options
The interpolation options are specified in the curve data. The available
options depend on the interpolation method, as shown in the table below.
Add the numbers for each desired option. For example, to choose Cyclic
Curve and Overshoot Protection, the Interpolation Option value would be
2+8 = 10.
Interpolation Type / Interpolation Options:
Constant: None
Linear: None
Cubic: the following options apply.
  Endpoint Behavior (choose only one; see the Endpoint Behavior section below for details):
    +0 Zero-Velocity Endpoints
    +1 Natural-Velocity Endpoints
    +2 Cyclic Curve
  Overshoot Protection (choose only one; see the Overshoot Protection section below for details):
    +0 Disabled
    +8 Enabled
  Auto Constant Velocity (choose only one; see the Auto Constant Velocity section below for details):
    +0 Disabled
    +16 Enabled
Endpoint Behavior
This option defines the behavior at the endpoints. Each option below
shows a plot for the following cubic curve data with 9 points at 0.25
second intervals:
time [s]:  0.00  0.25  0.50  0.75  1.00  1.25  1.50  1.75  2.00
Y-axis:    1.0   1.5   1.6   1.7   1.6   1.2   1.3   1.4   1.0

· +0 Zero-Velocity Endpoints
The endpoints will have zero velocity and acceleration. This is the
most commonly-used option. Curves with zero-velocity endpoints can
be repeated cyclically.

· +1 Natural-Velocity Endpoints
The endpoints will have their velocity automatically selected to match
the natural slope of the curve at the endpoints. The acceleration at the
endpoints will be zero. Curves with natural-velocity endpoints cannot
be repeated cyclically because the endpoint velocities are typically not
equal.

· +2 Cyclic Curve
The endpoints are assumed to wrap. Therefore the acceleration,
velocity, and position will be continuous between cycles of this curve,
although the velocity and acceleration at each endpoint will not
necessarily be zero.
If the y-values of the first and last points are not equal, and the curve
is run for more than 1 consecutive cycle, the curve will be
automatically offset so that the first point of the next cycle matches
the last point of the previous cycle.
For Advanced format curves, if an endpoint is defined as a Fixed
Velocity type, it will use that velocity even if the curve is cyclic.
Setting each endpoint to a different fixed velocity will cause a
discontinuity in the velocity between cycles of the curve, but the
curve can still be used cyclically.
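
The three endpoint behaviours correspond closely to the standard boundary conditions of cubic splines. As a rough illustration only (this is not the RMC's internal algorithm), the sketch below interpolates the 9-point example data with SciPy's CubicSpline, where the 'clamped', 'natural' and 'periodic' boundary conditions play roughly the same roles as the Zero-Velocity, Natural-Velocity and Cyclic options.

# Cubic interpolation of the example curve data with three boundary conditions.
import numpy as np
from scipy.interpolate import CubicSpline

t = np.arange(0.0, 2.25, 0.25)                          # 0.00 ... 2.00 s, 9 points
y = np.array([1.0, 1.5, 1.6, 1.7, 1.6, 1.2, 1.3, 1.4, 1.0])

zero_vel = CubicSpline(t, y, bc_type='clamped')         # endpoint velocity forced to zero
natural = CubicSpline(t, y, bc_type='natural')          # endpoint acceleration forced to zero
cyclic = CubicSpline(t, y, bc_type='periodic')          # wraps; first and last y are equal here

ts = np.linspace(0.0, 2.0, 9)
print(zero_vel(ts, 1))                                  # first derivative (velocity) along the curve
print(natural(ts, 1))
print(cyclic(ts, 1))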

Overshoot Protection
The Overshoot Protection option prevents the curve from exceeding a
local maximum or local minimum point. A local maximum occurs where a
point is greater than or equal to both the preceding point and the following
point. A local minimum occurs where a point is less than or equal to both
the preceding point and the following point. Overshoot protection will not
prevent overshooting in other locations.
When overshoot protection is enabled, the velocity is set to zero at each
local minimum/maximum point, which eliminates the chance of the curve
overshooting that point for the curve segments on either side of the point.
For Advanced format curves, Overshoot Protection will not apply to
Fixed-Velocity points, or points at the beginning or end of a Constant-
Velocity segment.
Example 1
Consider the cubic curve data in the Endpoint Behavior section
above. Without Overshoot Protection enabled, the curve looks like
this:

Notice that the curve overshoots the points after times 0.75, 1.25, and
before 1.75.
With overshoot protection enabled, the curve will look like this:

Notice that the curve no longer overshoots the points after times 0.75,
1.25, and before 1.75. Notice, however, that the curve still overshoots
between points 0.25 and 0.5 because neither point is a local minimum
or maximum.
Auto-Constant Velocity
The Auto-Constant Velocity option will automatically insert a linear
segment in the curve if three or more data points are in a straight line.
If you have also selected Overshoot Protection, be aware that the points
identified as local minimums or maximums do not count as consecutive
points for the Auto-Constant Velocity (see the example below).
For Advanced format curves, be aware that the Fixed Velocity type points
do not count as consecutive points for the Auto-Constant Velocity (see the
example below).
Example 2
Consider the cubic curve data in the Endpoint Behavior section
above. The points at times 0.25, 0.50, and 0.75 are in a straight line,
as are the points at times 1.25, 1.50, and 1.75. With the Auto-Constant
Velocity option, the curve will look like this:

Example 3
Consider this same curve with both Overshoot Protection and Auto-
Constant Velocity enabled. This particular curve ends up looking the
same as it does with only Overshoot Protection: both constant-velocity
segments are lost because at least one point in each set of three
consecutive points was identified by Overshoot Protection as a local
minimum or maximum.
What is the Mean Effective Pressure (MEP)
of an engine ?
The Mean Effective Pressure (MEP) is a theoretical parameter used to
measure the performance of an internal combustion engine (ICE). Even though it
contains the word “pressure”, it is not an actual pressure measurement within
the engine cylinder.
The cylinder pressure in an ICE is continuously changing during the
combustion cycle. For a better understanding of the pressure variation within
the cylinder, read the article The pressure-volume (pV) diagram and how
work is produced in an ICE.
The mean effective pressure can be regarded as an average pressure in the
cylinder for a complete engine cycle. By definition, the mean effective pressure
is the ratio between the work produced during one engine cycle and the engine
displacement:

MEP = Wcycle / Vd

where Wcycle [J] is the work per cycle and Vd [m³] is the engine displacement.
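
A small numerical sketch of this definition (the work and displacement values below are assumed, purely for illustration):

# Mean effective pressure = work per cycle / displacement.
work_per_cycle = 450.0        # assumed work produced in one cycle [J]
displacement = 0.0005         # assumed displacement, 500 cc = 0.0005 m^3
mep_pa = work_per_cycle / displacement
print(f"MEP = {mep_pa:.0f} Pa = {mep_pa / 1e5:.1f} bar")   # 900000 Pa = 9.0 bar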
What is Engine Capacity (cc):
The term “cc” stands for Cubic Centimeters or simply cm³ which is a metric
unit to measure the Engine's Capacity or its volume. It is the unit of
measuring the volume of a cube having size 1cm X 1cm X 1cm. CC is also
known as ‘Engine Displacement’. It means the displacement of the piston
inside the cylinder from Top Dead Centre (TDC) to the Bottom Dead Centre
(BDC) in the engine’s one complete cycle. The engine volume is also
expressed in litres (1 litre = 1000 cubic centimetres).

Figure 1 showing Engine Capacity (cc)


If an engine has a capacity of say 1000cc or 1000 Cubic Centimeters, then the
capacity of that engine is 1 Liter.
For e.g.
1000cc = 1000cm³ = 1 Liter = 1.0L
Similarly,
800cc = 800cm³ = 0.8 Liter = 0.8L
How to measure Engine capacity or Engine Volume:
To calculate the volume of an engine you can use the formula-
V = π/4 x (D)² x H x N
Where, V = Volume, D = Bore Diameter, H = Stroke Length, N = No. of
Cylinders
It is the combined capacity of all the cylinders of the engine added
together. For example, if a four-cylinder engine has a
capacity of 1000cc or 1.0L, that means all the four cylinders can together
accommodate a maximum of 1000 cubic centimeters or 1.0L of the volume
of air (or the air-fuel mixture) in them. If the engine has only one cylinder,
then that lone cylinder will accommodate all of the 1000cc or 1.0L of air
inside it. Incidentally, the world's first automobile, the Benz Patent-Motorwagen,
was powered by a single-cylinder 1.0-litre engine (954 cc, to be precise).
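
The displacement formula above is easy to check numerically; the sketch below uses assumed bore and stroke values for a four-cylinder engine of roughly one litre.

# Engine capacity V = (pi/4) x D^2 x H x N
import math

def displacement_cc(bore_mm, stroke_mm, cylinders):
    bore_cm, stroke_cm = bore_mm / 10, stroke_mm / 10
    return math.pi / 4 * bore_cm ** 2 * stroke_cm * cylinders

cc = displacement_cc(69.0, 67.0, 4)        # assumed bore 69 mm, stroke 67 mm, 4 cylinders
print(f"{cc:.0f} cc = {cc / 1000:.2f} L")  # roughly 1000 cc, i.e. a 1.0-litre engine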
How Engine Capacity affects its
performance:
The engine's capacity plays an important role in determining various engine
outputs such as engine power, torque, and mileage. It is the volume, or in
other words, the space available inside the cylinder to accommodate air-fuel
mixture for burning. Think of it as a drum filled with water: the bigger the
drum, the more water it can hold.
Similarly, an engine with higher capacity sucks more air into the cylinder. As
the volume of the air grows, the fuel system also proportionately increases
the corresponding quantity of fuel to the engine. As the amount of the fuel for
burning increases, it also increases the power output. Hence, in simple words,
the power output of an engine is directly proportional to its capacity in a
conventional engine design. Incidentally, Chevrolet 9.3L V8 crate engine is
one of the largest capacity engines in the world.
Supplying more fuel to the engine increases its power and also its fuel
consumption. As the volume of the cylinders increases, the power output
also increases. But eventually, this reduces the mileage. Hence, in that
context, the mileage of the car is inversely proportional to the engine
capacity in a conventional design. The manufacturers keep upgrading the
petrol engines and strike a balance between power and mileage to achieve
both performance and efficiency.
How Engine Capacity affects mileage:
Typically, the petrol cars with the best fuel mileage have engines of up to
1000cc. Those with capacities from 1000cc to 1500cc come next with good
mileage figures, engines with displacement from 1500cc to 1800cc give a
moderate fuel average, those from 1800cc to 2500cc give a lower fuel
average, and engines above 2500cc have the least mileage of all.
An almost identical set of rules applies to the smaller carburetted engines
used in bikes. Typically, the bike engines with the best fuel average are
those of up to 110cc. Engines from 110cc to 150cc come next, engines from
150cc to 200cc give a moderate fuel average, engines from 200cc to 500cc
give lower mileage, and engines above 500cc have the least mileage of all.
Hence, engine displacement is a crucial factor while buying an automobile.
You should decide on the displacement thoughtfully, by analyzing the
intended purpose or end usage of the vehicle, so that the performance of the
vehicle you select does not disappoint you.
What is Bore-Stroke Ratio?
Bore-Stroke Ratio is the ratio between the dimensions of the engine cylinder
bore diameter to its piston stroke-length. The cylinder bore diameter divided
by the stroke-length gives the bore-to-stroke ratio. The ‘Bore-stroke ratio’ is
an important factor which determines the engine’s power and torque
characteristics.
Bore-Stroke ratio = Cylinder Bore diameter / Piston Stroke-Length
The engineers also classify and categorize the automotive engines according
to their shapes. It’s the shape of cylinders as seen from the side of the engine.
Principally, there are three types of geometrical shapes around which the
engineers design conventional engines with. They are –

1. Square Engine
2. Under Square Engine
3. Over Square Engine

What is a ‘Square Engine’ and how Bore-Stroke Ratio affects its design?
An engine is called a ‘Square Engine’ when its cylinder bore diameter &
stroke-length are almost equal which forms a geometrical figure of a perfect
‘square’. The bore-stroke ratio is almost 1:1 in Square engine design.

Square Engine Design


For e.g. an engine with a bore diameter of 83mm and stroke-length of 83mm
which forms a perfect square. It provides a perfect balance between speed
and pulling ability.

Therefore, the ‘Bore -stroke ratio’ = 1:1


What is an ‘Over Square’ Engine and how Bore-Stroke Ratio affects its
design?
An engine is called an ‘Over-square’ engine when its cylinder bore is wider
than its stroke. In this design, the stroke-length is shorter than the cylinder
bore. Manufacturers also refer to it as a ‘short-stroke’ engine. Generally, ‘Over-
square’ design tends to produce higher engine speeds. Therefore, engineers
refer to it as a ‘high-speed’ engine. As the stroke-length is short, the piston
has to travel a shorter distance. Hence, this design tends to produce higher
engine speed and is typically used in high-speed cars & bikes.
‘Over-Square’ engine has the bore-stroke ratio greater than 1:1.
Over- Square Engine
For e.g. an engine with the bore diameter of 83mm and stroke-length of
67mm which forms an ‘over square design’.
Therefore, the ‘Bore -stroke ratio’ = 1.23 : 1
Advantages of ‘Over-Square’ Design:

1. Less frictional losses & load on bearings.


2. Higher engine speeds.
3. Reduces engine height thereby lowering the bonnet line.

What is an ‘Under Square’ Engine and how Bore-Stroke Ratio affects it?
An engine is called an ‘Under-square’ engine when it has a longer stroke. In
this engine, the stroke-length is longer than the cylinder bore. Generally,
‘Under-Square’ design tends to produce comparatively higher torque. Hence,
engineers also refer to it as ‘high-torque’ engine. As the stroke-length is long,
the piston has to travel a longer distance which tends to increase engine’s
torque. Hence, the manufacturer typically uses it in commercial vehicles such
as trucks, buses and earth-moving equipment.
‘Under-Square’ engine has the bore-stroke ratio lesser than 1:1.
Under Square Engine
For e.g. an engine with a bore diameter of 70mm and stroke-length of 83mm
which forms an ‘under square design’.
Therefore, the ‘Bore-stroke ratio’ = 0.84 : 1
Advantages of ‘Under-Square’ Design:

1. Higher engine torque.


2. Can pull heavier loads.

The engine manufacturers try to attain the near-perfect ratio from these
designs depending upon the application for which they intend to develop the
engine.
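
The three designs can be told apart with a simple ratio check; the sketch below uses the example bore and stroke dimensions quoted above (the tolerance for calling an engine "square" is an assumed value).

# Classify an engine from its bore-stroke ratio.
def classify(bore_mm, stroke_mm):
    ratio = bore_mm / stroke_mm
    if abs(ratio - 1.0) < 0.02:            # assumed tolerance for "almost equal"
        kind = "square"
    elif ratio > 1.0:
        kind = "over-square (short-stroke)"
    else:
        kind = "under-square (long-stroke)"
    return ratio, kind

for bore, stroke in ((83, 83), (83, 67), (70, 83)):
    ratio, kind = classify(bore, stroke)
    print(f"bore {bore} mm / stroke {stroke} mm -> {ratio:.2f} : 1, {kind}")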
How Rod Lengths and Ratios Affect
Performance
Changing the length of the rods with respect to the stroke of the crankshaft
offers some advantages in certain situations.

The relationship between bore and stroke impacts the RPM range where an
engine develops peak torque and horsepower.
Performance engine builders are always looking at changes they
can make that will give their engine an edge over the competition.
Rod ratio is one of those factors that may make a difference.
Changing the length of the rods with respect to the stroke of the
crankshaft offers some advantages in certain situations, and may
allow the same number of cubic inches to deliver a little more
power or a little longer ring life (take your pick). But experts
disagree as to whether or not changing rod ratios really makes that
much difference.
Rod ratio is the mathematical relationship between the overall
length of the connecting rods and the stroke of the crankshaft.
Divide rod length by the crank stroke and you get the rod ratio. For
example, say you’re building a stock small block 350 Chevy with
5.7-inch rods and a 3.48 inch stroke. The rod ratio in this engine
would be 5.7 (rod length) divided by 3.48 (stroke), which equals
1.64.
If you build the same 350 engine with longer 6-inch rods, the rod
ratio becomes 1.72. And if you are building a 383 stroker with 6-
inch rods, the rod ratio becomes 1.6 due to the longer stroke (3.750
inches).
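
The rod ratio arithmetic from these examples is a one-liner; here it is as a small sketch.

# Rod ratio = connecting rod length / crankshaft stroke.
def rod_ratio(rod_length_in, stroke_in):
    return rod_length_in / stroke_in

print(round(rod_ratio(5.7, 3.48), 2))   # stock small block 350 Chevy: 1.64
print(round(rod_ratio(6.0, 3.48), 2))   # 350 with 6-inch rods: 1.72
print(round(rod_ratio(6.0, 3.75), 2))   # 383 stroker with 6-inch rods: 1.6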
What do these numbers mean? They express a geometric
relationship between the rods, crankshaft and pistons. The lower
the rod ratio, the greater the side forces exerted by the pistons
against the cylinder walls. This increases wear on the piston skirts
and cylinder walls, and creates a higher level of vibration inside the
engine. The increase in friction can also elevate coolant and oil
temperatures.
LONG RODS VS SHORT RODS
On the other hand, lower rod ratios do have some advantages.
Shorter rods mean the overall height of the block can be shorter,
which means the overall weight of the block can be lighter. The
engine will typically pull more vacuum at low RPM, which means
better throttle response and low end torque (good for street
performance and everyday driving). Spark timing can be advanced
a few degrees for some additional low speed torque, and the engine
is less prone to detonation, which can be a plus in turbocharged,
supercharged or nitrous applications.
Connecting rods come in various styles and lengths. Choosing the one that’s
“right” for a given application depends more on strength, loading and RPM
than rod ratio.
What about longer rod ratios? Using longer connecting rods with
the same stroke reduces the side loading on the pistons, which
reduces friction. It also increases the piston dwell time at Top Dead
Center. Holding compression for maybe half a degree of crankshaft
rotation longer at TDC improves combustion efficiency and
squeezes a little more power out of the air / fuel mixture. Typically,
an engine with a higher rod ratio will produce a little more power
from mid-range to peak RPM.
Longer rods require the wrist pin to be located higher in the piston,
or the engine has to have a taller deck height to accommodate
longer rods. Longer rods also mean shorter and lighter pistons can
be used, so the additional weight of the rods is more or less offset
by the reduced weight of the pistons.
One of the disadvantages of longer rods and a higher rod ratio is
that low RPM intake vacuum is reduced somewhat. Reduced air
velocity into the engine hurts low speed throttle response and
torque, which is not good for everyday driving or street
performance, but works well on a high-revving race engine.
Some engine builders say a “good” rod ratio is anything 1.55 or
higher. Production engines may have rod ratios that range from 1.4
to over 2.0, with many falling in the 1.6 to 1.8 range. Four
cylinders tend to have lower rod ratios (1.5 to 1.7 range) while
many V6s have somewhat higher rod ratios of 1.7 to 1.8. As for
V8s, they typically range from 1.7 to 1.9. Often, the rod ratio is
dictated by the design and deck height of the block, and the pistons,
rods and crank that are available to fit the block.
BEST ROD RATIO?
Essentially, there is no “best” rod ratio for any given engine. Some say to use
the longest rods that will fit the engine to make the most mid-range and peak
RPM power while others say it doesn’t really matter. Smokey Yunick was
one of the early proponents of long rods, and they worked well for him in
NASCAR. Even so, some engines that have lower rod ratios will out-perform
engines of the same displacement that have higher rod ratios. How can this
be? Because of differences in the design and porting of the cylinder heads,
different valve sizes and valve angles, different camshaft lift and duration,
different intake systems and different tuning.
For example, a BMW M3 has a rod ratio of 1.48, which doesn’t sound very
good based on the number alone. But the M3 engine also makes 2.4
horsepower per cubic inch (with the help of a turbo), which is nearly twice
the power ratio of a typical street performance Chevy 350 or small block
Ford. The point here is not that turbos make lots of power (they do), but that
rod ratios don’t really affect performance one way or the other very much.
Some people put way too much emphasis on rod ratios and worry excessively
about how their engine’s rod ratio will affect performance. Our take on the
issue is that rod ratio is just a number that may or may not make much
difference depending on the situation. In some cases, it can make a slight
difference and in others it seems to make no significant difference
whatsoever. Peak horsepower and torque depend on too many other
variables.
The maximum achievable rod ratio is always going to be limited by the
physical dimensions of the block (deck height, tall or short), the longest rods
that are available to fit the engine (off-the-shelf mass produced rods or
custom made), and the shortest pistons that will work with the rod, block and
stroke combination. The combined weight of the rod and piston has more
effect on momentum and throttle response than the rod ratio. Also, moving
the wrist pin higher up in the piston and using a shorter piston may create
some piston wobble and instability issues if you go too far. Because of this,
excessive rod ratio may actually be detrimental to engine performance.
These relatively long and skinny Ford rods (4.6L left and 5.2L GT 350 right)
have a rod ratio of 1.68, which is a little less than a 350 Chevy with 5.7-inch
rods, but a little more than a Chevy 350 built with longer 6-inch rods.
OVERSQUARE VS UNDERSQUARE
A closely related topic to rod ratio is that of bore and stroke. If the bore and
stroke dimensions in an engine are the same (say a 4.00 inch bore with a 4.00
inch stroke), the engine is said to be “square.” If the bores are larger than the
stroke, the engine is “oversquare,” and if the stroke is longer than the bore
diameter it is said to be “undersquare.”
If you divide stroke by bore, you get a numerical value for the stroke/bore
ratio. Many production passenger car engines have a stroke/bore ratio
between 0.8 to 1.1. Truck stroke/bore ratios are typically higher (1.0 to 1.4)
to improve efficiency and low speed torque. The higher the stroke/bore ratio,
the less RPM the engine can safely handle, but the more low end torque it
will produce.
The 2017 Ford GT 350 has a 5.2L engine with a flat-plane crank that redlines
at 8,250 RPM. It has a 3.7-inch (94 mm) bore and 3.66-inch (93 mm) stroke,
making it slightly oversquare. By comparison, a C7 Corvette with a 6.2L LT1
engine has a bore and stroke of 4.06 x 3.62, which is quite a bit oversquare,
yet it redlines at
6,600 RPM (due to hydraulic lifters). Both are excellent engines with lots of
performance potential, but the Ford revs higher because of its overhead cam
heads, and makes more horsepower (526 vs 460).
As with rod ratios, the geometric relationship between bore and stroke can
also affect an engine’s power and RPM potential. Even so, such generalities
often don’t hold true across the spectrum of production engines or engines
that are purpose-built for racing.
As a general rule, large bore, short stroke engines are high revving, high
power engines good for road racing and circle track applications. Pro Stock
racers also like this combination for drag racing as do NASCAR engine
builders. Small bore, large stroke engines, on the other hand, are better for
low RPM torque, street performance, towing and pulling, but have limited
RPM potential.
Formula 1 engines have an extremely short stroke, only 1.566 inches. The
bore size is limited to a maximum of 3.858 inches. This is a very oversquare
design, but one that allows these engines to rev to an incredible 20,000 RPM
and squeeze 800 horsepower out of 2.4 liters of displacement! One of the
reasons they are able to rev so high is the extremely short stroke. The pistons
are not moving up and down very far in their bores. The stroke/bore ratio is
only 0.4, which is less than half that of a typical passenger car engine. At
20,000 RPM, the relative piston speed in a Formula 1 engine is 5,248 feet per
minute. Formula 1 engines also use a pneumatic valve system that is far
faster than any mechanical valvetrain.
By comparison, a 358 cubic inch NASCAR engine with a 4.185 bore and
3.58-inch stroke (still oversquare, but not as oversquare as a Formula 1
engine) redlines at 10,000 RPM with a piston speed of 5,416 feet per minute.
A 500 cubic inch Pro Stock drag motor may be running a bore size of 4.750
inches with a crank stroke of 3.52 inches. At 10,000 RPM, the piston speed in
one of these motors is about the same as a NASCAR engine. If they are
running a smaller bore with a longer crank (say 3.75 inches), piston speeds
may be as high as 6,250 feet per minute.
High piston speeds not only increase friction and ring wear inside the engine,
they also dramatically increase the loads on the connecting rods.
with shorter, lighter pistons can help reduce the stress on the rods in these
applications.
Determining the best rod ratio and bore/stroke combination for a Pro Stock
motor depends a lot on the breathing characteristics of the cylinder heads,
intake runners and plenum. Some say shorter rods work best with heads and
intake systems that can flow big CFM numbers. Longer rods are better for
heads and intake systems that don’t flow as well. The rod ratios that seem to
work best in Pro Stock drag racing years ago was around 1.8, but today it’s
more in the 1.70 to 1.65 range according to some sources.
There is no magic formula for building a race-winning engine. Rod ratios and
stroke/bore ratios can vary quite a bit. Rules that limit maximum engine
displacement in certain classes may also restrict maximum bore diameter and
stroke length, but within those rules is often some leeway to experiment with
different combinations – and that’s the real secret to finding the right
combination of parts that will create a truly competitive engine.
Piston motion equations
The motion of a non-offset piston connected to a crank through a connecting
rod (as would be found in internal combustion engines), can be expressed
through several mathematical equations. This article shows how these motion
equations are derived, and shows an example graph.
Crankshaft geometry

Diagram showing geometric layout of piston pin, crank pin and crank center
Definitions
Gear Ratios

Gear ratios are a core science behind almost every machine in the modern
era. They can maximize power and efficiency and are based on simple
mathematics. So, how do they work?
If you work with gear ratios every single day, this post probably isn’t for you.
But, if you want to improve your understanding of this essential element of
machine design, keep reading.
Gear ratios are simple as long as you understand some of the math behind
circles. I’ll spare you the grade school math, but it is important to know that
the circumference of a circle is related to a circle’s diameter. This math is
important in gear ratio design.
The basics of gear ratios and gear ratio design
To begin to understand gear ratios, it’s easiest if we start by removing the
teeth from the gears. Imagine two circles rolling against one another, and
assuming no slippage, just like college Physics 1. Give circle one a diameter
of 2.54 inches. Multiplying this by pi leaves us with a circumference of 8
inches or, in other words, one full rotation of the circle one will result in 8
inches of displacement.
Give circle two a diameter of .3175 inches, giving us a circumference of 1
inch. If these two circles roll together, they will have a gear ratio of 8:1,
since circle one has a circumference 8 times as big as circle two. A gear ratio
of 8:1 means that circle two rotates 8 times for every time circle one rotates
once. Don’t fall asleep on me yet; we are going to get more and more
complex.
Gears aren’t circles because, as you know, they have teeth. Gears have to
have teeth because, in the real world, there isn’t infinite friction between two
rolling circles. Teeth also make exact gear ratios very easy to achieve.
Rather than having to deal with the diameters of gears, you can use the
number of teeth on a gear to achieve highly precise ratios. Gear ratios are
never just arbitrary values, they are highly dependent on the needed torque
and power output, as well as gear and material strength. For example, if you
need a gear ratio of 3.57:1, it would be possible to design two compatible
gears, one with 75 teeth and another with 21.
Classical Design of Gear Train
In order to find the required overall gear ratio, the compound gear train
contains two pairs of gears, d-a and b-f (Figure 1). The desired gear ratio,
itot, between the input and output gears is expressed as

itot = ωo / ωi = (nd · nb) / (na · nf)

where ωo and ωi are the angular velocities of the output and input gears,
respectively, and n expresses the number of teeth on each gear wheel.

Figure 1: Compound gear train.


Classical Gear Ratio Optimization
In the augmented Lagrangian multiplier method, the objective is to compute
the number of teeth for gears d, a, b, and f in order to obtain a gear ratio,
itot, as close as possible to a target. The number of teeth for each gear is
assumed to lie between 12 and 60. The target ratio, itrg, is assumed to be
1/6.93, and the objective function is expressed as follows:

f(nd, na, nb, nf) = (itrg − itot)² = (1/6.93 − (nd · nb) / (na · nf))²

In other words, the process computes the optimum values of the four
variables that minimize the squared difference between the target ratio,
itrg, and the available gear ratio, itot. The objective function is the
squared error between the actual and the desired gear ratios.
An alternative solution for the gear train problem is found by the
differential evolution strategy. These solutions have been shown to be
globally optimal by applying explicit enumeration, as mentioned in Tables
1 and 2. In this paper, this problem has been solved with a genetic algorithm
for the 2009 Hummer H3 4dr SUV.
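
For comparison, the same problem can also be attacked by brute force, since the search space is only a few million combinations. The sketch below is purely illustrative (it is neither the augmented Lagrangian method, the genetic algorithm nor the differential evolution strategy mentioned above) and assumes the train ratio has the form (nd · nb) / (na · nf).

# Brute-force enumeration of the gear train problem: tooth counts between 12 and 60,
# minimising the squared error against the target ratio 1/6.93.
from itertools import product

i_trg = 1.0 / 6.93
best = None
for nd, nb, na, nf in product(range(12, 61), repeat=4):
    i_tot = (nd * nb) / (na * nf)           # assumed form of the train ratio
    err = (i_trg - i_tot) ** 2
    if best is None or err < best[0]:
        best = (err, nd, nb, na, nf, i_tot)

err, nd, nb, na, nf, i_tot = best
print(f"teeth d={nd}, b={nb}, a={na}, f={nf}, ratio={i_tot:.6f}, squared error={err:.3e}")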
3. 4WD Vehicle Dynamic Gear Ratio
The dynamic gear ratio is obtained with applying dynamic equations as
follows
How to Determine Gear Ratio
In mechanical engineering, a gear ratio is a direct measure of the ratio of the
rotational speeds of two or more interlocking gears. As a general rule, when
dealing with two gears, if the drive gear (the one directly receiving rotational
force from the engine, motor, etc.) is bigger than the driven gear, the latter
will turn more quickly, and vice versa. We can express this basic concept
with the formula Gear ratio = T2/T1, where T1 is the number of teeth on the
first gear and T2 is the number of teeth on the second.
Finding the Gear Ratio of a Gear Train

1. Start with a two-gear train. To be able to determine a gear ratio,


you must have at least two gears engaged with each other — this is
called a "gear train." Usually, the first gear is a "drive gear" attached
to the motor shaft and the second is a "driven gear" attached to the
load shaft. There may also be any number of gears between these
two to transmit power from the drive gear to the driven gear: these
are called "idler gears."
· For now, let's look at a gear train with only two gears in it. To be
able to find a gear ratio, these gears have to be interacting with each
other — in other words, their teeth need to be meshed and one
should be turning the other. For example purposes, let's say that
you have one small drive gear (gear 1) turning a larger driven gear
(gear 2).
2. Count the number of teeth on the drive gear. One simple way to
find the gear ratio between two interlocking gears is to compare the
number of teeth (the little peg-like protrusions at the edge of the
wheel) that they both have. Start by determining how many teeth are
on the drive gear. You can do this by counting manually or,
sometimes, by checking for this information labeled on the gear
itself.
For example purposes, let's say that the smaller drive gear in our system has
20 teeth.

3. Count the number of teeth on the driven gear. Next, determine


how many teeth are on the driven gear exactly as you did before for
the drive gear.
· Let's say that, in our example, the driven gear has 30 teeth.
4. Divide one teeth count by the other. Now that you know how
many teeth are on each gear, you can find the gear ratio relatively
simply. Divide the driven gear teeth by the drive gear teeth.
Depending on your assignment, you may write your answer as a
decimal, a fraction, or in ratio form (i.e., x : y).[4]
· In our example, dividing the 30 teeth of the driven gear by the 20
teeth of the drive gear gets us 30/20 = 1.5. We can also write this as
3/2 or 1.5 : 1, etc.
· What this gear ratio means is that the smaller driver gear must turn
one and a half times to get the larger driven gear to make one
complete turn. This makes sense — since the driven gear is bigger,
it will turn more slowly.
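
The whole procedure reduces to one division; below is a minimal sketch of the example above, with an assumed input speed added to show the effect on the output shaft.

# Gear ratio of a simple two-gear train from tooth counts.
def gear_ratio(drive_teeth, driven_teeth):
    # turns of the drive gear needed for one turn of the driven gear
    return driven_teeth / drive_teeth

ratio = gear_ratio(20, 30)      # the 20-tooth drive / 30-tooth driven example
print(ratio)                    # 1.5, i.e. 3/2 or 1.5 : 1

drive_rpm = 300.0               # assumed input speed, for illustration
print(drive_rpm / ratio)        # the driven gear turns at 200 rpm (slower, as expected)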
Vehicle Performance
Vehicle performance is the study of the motion of a vehicle. The motion of
any vehicle depends upon all the forces and moments that act upon it. These
forces and moments, for the most part are caused by interaction of the vehicle
with the surrounding medium(s) such as air or water (e.g. fluid static and
dynamic forces), gravitational attraction (gravity forces), Earth’s surface
(support, ground, or landing gear forces), and on-board energy consuming
devices such as rocket, turbojet, piston engine and propellers (propulsion
forces). Consequently, in order to fully understand the performance problem,
it is necessary to study and in some way characterize these interacting forces.
Although these four categories of forces are the dominant ones acting on
the vehicles of our interest, it should be pointed out that other forces can
enter into the performance considerations with varying degrees of
participation (e.g. magnetic, electrostatic). These types of forces will be
neglected in the present studies.
For performance studies vehicles are usually assumed to behave as rigid
bodies, that is structural deflections are generally ignored. In most cases this
is a good assumption and it simplifies analysis considerably. Under this
assumption, a theorem from the dynamics of rigid bodies states that the
motion of a rigid body can be separated into the motion of the center-of-mass
(translational motion) and the motion about the center-of-mass (rotational or
attitude motion). Further, it can be shown that the center-of-mass motion, or
translational motion, is caused only by the forces that act on the vehicle, while
the rotational motion is caused only by the moments about the center-of-mass
that act on the vehicle.
For detailed studies of vehicle motion, both rotational and translational
motions must be considered simultaneously since these motions cause both
forces and moments to act on the vehicle (e.g. aerodynamic forces and
moments). Hence there is a coupling between the two motions through the
forces and moments that they generate (translational motion can cause
moments as well as forces and rotational motion can cause forces as well as
moments). However, there is a large class of problems that may be
considered under the assumption that these two types of motion can be
separated. That is we can look at the force - center-of-mass motion, and the
moment - rotational motion independent of each other. Studies concerning
center-of-mass motion are called trajectory and/or performance analysis
while those concerning rotational or attitude motions are called static stability
and control analysis. Assumptions allowing such a separation are related to
the time scales of the respective motions, and will not be discussed here. The
remainder of this study is concerned with trajectory analysis or performance.
Governing Equations
In performance analysis it is assumed that the moments about the center-of-
mass (cm) are identically equal to zero and that any desired attitude can be
achieved instantaneously. Alternatively, we can say that the vehicle has no
moment of inertia and consequently can be treated as a point mass, with all
the mass located at the cm. As a result, the equations governing the motion of
the vehicle are given by Newton’s second law:

ΣF = m · (dV/dt)

where ΣF is the resultant of all external forces acting on the vehicle, m is the
vehicle mass and V is the velocity of its center-of-mass.
Coordinate Systems
----------------<<<<<<>>>>>>----------------
