1 - Introduction To Control Chapter One

Control systems consist of subsystems and processes assembled to achieve a desired output. Control theory uses mathematical concepts from areas like feedback to analyze and design control systems. The history of control systems began with naturally occurring feedback loops and early human inventions like water clocks in ancient Greece. A key development was James Watt's steam engine centrifugal governor in the late 18th century. This led to further studies in control system stability and design methods in the 19th-20th centuries, along with applications in areas like aircraft, industrial processes, and modern digital controllers. Control systems integrate concepts from various engineering fields to regulate physical processes.


Introduction

Control System Definition

A control system consists of subsystems and processes (or plants) assembled for the purpose of obtaining
a desired output with desired performance, given a specified input. Control theory is an interdisciplinary
area of research in which many mathematical concepts and methods work together to produce an impressive
body of important applied mathematics. A general conclusion is that the main advances in the control of
systems come both from mathematical progress and from technological development.

The word control has two main meanings. First, it is understood as the activity of testing or checking that
a physical or mathematical device behaves satisfactorily. Second, to control is to act: to implement
decisions that guarantee that the device behaves as desired.

The idea of control traces back to the time of Aristotle (384-322 BC), who wrote:

“… if every instrument could accomplish its own work, obeying or anticipating the will of others … if
the shuttle weaved and the pick touched the lyre without a hand to guide them, chief workmen would not
need servants, nor masters slaves.”

We see that Aristotle described in a very transparent manner the purpose of control theory: to
automate processes so that they achieve the ends for which they were constructed, and to set human
beings free.

In ancient times, the description of physical or artificial systems was largely linguistic, describing not
so much how systems are as how they ought to be. Even the mathematics of the period, as synthesised by
Euclid (325-265 BC) and Diophantus of Alexandria (200-284 AD), was expressed in terms of syllogisms.

Brief History of Control System

Feedback control systems are older than humanity. We are not the only creators of automatically
controlled systems; these systems also exist in nature. Within our own bodies are numerous control
systems, such as the pancreas, which regulates our blood sugar. In times of "fight or flight," our adrenaline
increases along with our heart rate, causing more oxygen to be delivered to our cells. Our eyes follow a
moving object to keep it in view; our hands grasp the object and place it precisely at a predetermined
location.

Man-made control systems have existed in some form since the time of the ancient Greeks, who began
engineering feedback systems around 300 B.C. A water clock invented by Ktesibios operated by having
water trickle into a measuring container at a constant rate. The level of water in the measuring container
could be used to tell time. For water to trickle at a constant rate, the supply tank had to be kept at a
constant level. This was accomplished using a float valve similar to the water-level control in today's
flush toilets. Soon after Ktesibios, the idea of liquid-level control was applied to an oil lamp by Philon
of Byzantium.

Regulation of steam pressure began around 1681 with Denis Papin's invention of the safety valve. The
concept was later elaborated by weighting the valve top, which set the internal pressure; depending on the
difference between the internal pressure and the weight, the valve opens or closes. Also in the
seventeenth century, Cornelis Drebbel in Holland invented a purely mechanical temperature control
system for hatching eggs.

In 1745, speed control was applied to a windmill by Edmund Lee. Increasing winds pitched the blades
farther back, so that less area was available; as the wind decreased, more blade area was available.
William Cubitt improved on the idea in 1809 by dividing the windmill sail into movable louvers. Also in
the eighteenth century, around 1788, James Watt invented the flyball speed governor to control the speed
of steam engines. In this device, two spinning flyballs rise as rotational speed increases. A steam valve
connected to the flyball mechanism closes with the ascending flyballs and opens with the descending
flyballs, thus regulating the speed.

Fig. 1.1 The Watt centrifugal speed governor

Electrical control systems are a product of the twentieth century. Electromechanical relays were
developed and used for remote control of motors and devices. Relays and switches were also used as
simple logic gates to implement some intelligence. Using vacuum-tube technology, significant
development in control systems was made during World War II. Dynamic position control systems
(servomechanisms) were developed for aircraft applications, gun turrets, and torpedoes. Today, position
control systems are used in machine tools, industrial processes, robots, cars, and office machines, to name
a few.

Beginnings of control theory

Because Watt was a practical man, he did not engage in theoretical analysis of the governor.

Stability analysis

The first systematic study of the stability of feedback control was apparently given in the paper “On
Governors” by Maxwell (1868). In this paper, Maxwell developed the differential equations of the
governor, linearized them about equilibrium, and stated that stability depends on the roots of a certain
(characteristic) equation having negative real parts. He was successful only for second- and third-order
cases. Determining criteria for stability was the problem for the Adams Prize of 1877, which was won by
E. J. Routh. Shortly after publication of Routh’s work, the Russian mathematician Lyapunov (1892) began
studying the question of stability of motion for nonlinear differential equations of motion.
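Maxwell's condition is easy to check numerically today. The sketch below (assuming NumPy; the polynomial is an illustrative example, not Maxwell's governor equations) tests whether every root of a characteristic equation has a negative real part:

```python
import numpy as np

# Characteristic equation s^3 + 3s^2 + 3s + 1 = 0 (an illustrative
# polynomial with all three roots at s = -1).
roots = np.roots([1, 3, 3, 1])

# Maxwell's condition: stable iff every root has a negative real part.
stable = all(r.real < 0 for r in roots)
print(stable)  # True
```

Routh's later contribution was precisely a criterion for deciding this sign condition from the polynomial coefficients alone, without computing the roots.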

Frequency response

In 1932, H. Nyquist published a paper describing how to determine stability from a graphical plot of the
loop frequency response. From this theory developed an extensive methodology of feedback-amplifier
design described by Bode (1945) and still extensively used in the design of feedback controls.
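A minimal numerical sketch of the frequency-response viewpoint, assuming NumPy and a hypothetical loop transfer function L(s) = 1/(s(s+1)(s+2)): it samples L(jw) and estimates the phase margin at gain crossover, a Bode-style reading of the loop response rather than Nyquist's full encirclement test.

```python
import numpy as np

# Hypothetical loop transfer function L(s) = 1 / (s (s + 1)(s + 2)).
def L(s):
    return 1.0 / (s * (s + 1.0) * (s + 2.0))

# Sample the loop frequency response L(jw), as plotted in Nyquist or
# Bode analysis.
w = np.logspace(-2, 2, 400)
resp = L(1j * w)

# Gain crossover: the frequency where |L(jw)| = 1. The phase there,
# measured from -180 degrees, is the phase margin.
i = np.argmin(np.abs(np.abs(resp) - 1.0))
phase_margin = 180.0 + np.degrees(np.angle(resp[i]))
print(round(phase_margin, 1))  # roughly 53 degrees for this loop
```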

PID control

The PID controller was first described by Callender et al. (1936). This technology was based on extensive
experimental work and simple linearized approximations to the system dynamics.
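The idea can be illustrated with a minimal discrete-time PID sketch (textbook form with illustrative gains, not the specific controller of Callender et al.):

```python
# A minimal discrete-time PID controller: control = kp*e + ki*∫e + kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order plant x' = -x + u toward a set point of 1
# using Euler integration with step dt = 0.01 s.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):  # 20 seconds
    u = pid.update(1.0, x)
    x += (-x + u) * 0.01
print(round(x, 3))  # settles near the set point of 1.0
```

The integral term is what removes the steady-state error here: at equilibrium it supplies the constant control effort the plant needs, which the proportional term alone cannot provide.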

Root locus

Another approach to control systems design was introduced in 1948 by W. R. Evans. Evans developed
techniques and rules allowing one to follow graphically the paths of the roots of the characteristic
equation as a parameter was changed. His method, the root locus, is suitable for design as well as for
stability analysis and remains an important technique today.
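A sketch of the idea, assuming NumPy: sweep the gain K and follow the closed-loop roots of 1 + K G(s) = 0. For the hypothetical loop G(s) = 1/(s(s+1)(s+2)), the characteristic polynomial is s^3 + 3s^2 + 2s + K, and the Routh criterion predicts stability for 0 < K < 6.

```python
import numpy as np

# Closed-loop characteristic polynomial s^3 + 3s^2 + 2s + K for the
# hypothetical loop G(s) = 1 / (s (s + 1)(s + 2)) with gain K.
for K in (0.5, 2.0, 5.0, 10.0):
    roots = np.roots([1, 3, 2, K])
    stable = all(r.real < 0 for r in roots)
    # The roots trace the branches of the root locus as K grows;
    # the loop goes unstable between K = 5 and K = 10 (boundary K = 6).
    print(f"K = {K:4.1f}  stable = {stable}")
```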

State-variable design

During the 1950s several authors, including R. Bellman and R. E. Kalman in the United States and L. S.
Pontryagin in the U.S.S.R., began again to consider the ordinary differential equation (ODE) as a model
for control systems, by representing a model as a set of first-order ordinary differential equations. Even
though the foundations of the study of ODEs were laid in the late 19th century, this approach is now often
called modern control to distinguish it from classical control, which uses the Laplace transforms and
complex variable methods of Bode and others.
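The state-variable idea can be sketched for a hypothetical mass-spring-damper m x'' + b x' + k x = u: choosing the state vector [x, x'] turns the second-order ODE into two first-order equations x' = Ax + Bu (NumPy assumed, illustrative parameter values).

```python
import numpy as np

# Mass-spring-damper m x'' + b x' + k x = u with state vector [x, x'].
# Illustrative values m = 1, b = 0.5, k = 2.
m, b, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
B = np.array([0.0, 1.0 / m])

# Unforced response (u = 0) by simple Euler integration: the damped
# oscillation decays toward the origin.
x = np.array([1.0, 0.0])   # initial displacement 1, velocity 0
dt = 0.001
for _ in range(10_000):    # 10 seconds
    x = x + (A @ x + B * 0.0) * dt
print(np.round(x, 3))
```

The same first-order form extends to any number of states, which is why it became the common language of optimal control and Kalman filtering.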

Meanwhile, other developments in electronics were having an impact on control system design. Solid-
state devices started to replace the power relays in motor control circuits. Transistors and integrated
circuit operational amplifiers (IC op-amps) became available for analog controllers. Digital integrated
circuits replaced bulky relay logic.
Finally, and perhaps most significantly, the microprocessor allowed for the creation of digital controllers
that are inexpensive, reliable, able to control complex processes, and adaptable (if the job changes, the
controller can be reprogrammed).
The subject of control systems is really many subjects: electronics (both analog and digital), power-
control devices, sensors, motors, mechanics, and control system theory, which ties together all these
concepts. Many students find the subject of control systems to be interesting because it deals with
applications of much of the theory to which they have already been exposed.

Definition of terms

A system is a combination of components that act together to perform a function not possible with any of
the individual parts. Thus, a control system is essentially a study of an important aspect of systems
engineering and its application. It is basically composed of the plant, the controller, sensors, and
actuators, and has inputs and outputs.

The process or plant is the physical process to be controlled; it is acted on by the actuator.

The controller is the intelligence of the system and is usually electronic. It is also called a compensator.

The actuator is an electromechanical device that takes the signal from the controller and converts it into
some kind of physical action. The controller drives a process or a plant using the actuator.

An output transducer, or sensor, measures the output response and converts it into the form used by the
controller.

Controlled variable or output: The measurable process variable that is controlled. The desired value of
a controlled variable is called its set point or reference input.

Error input: The difference between the reference input and the plant output.

Manipulated variables or Control input: The process input variable that can be adjusted to keep the
controlled variable at or near the set points.

Disturbance input: An external input signal that has an unwanted effect on the system output.
Disturbances are shown added to the controller and process outputs via summing junctions, which
yield the algebraic sum of their input signals using associated signs.

The identification of controlled variables, manipulated variables, and disturbance variables is a critical
step in developing a control system.

To achieve good control there are typical goals:

• Stability: The system must be stable at all times. This is an absolute requirement.
• Tracking: The system output must track the command reference signal as closely as possible.
• Disturbance rejection: The system output must be as insensitive as possible to disturbance
inputs.
• Robustness: The aforementioned goals must be met even if the model used in the design is not
completely accurate or if the dynamics of the physical system change over time.

Finally, to design a controller for a dynamic system, it is necessary to have a mathematical model of the
dynamic response of the system being controlled in all but the simplest cases. Unfortunately, almost all
physical systems are very complex and often nonlinear. As a result, the design will usually be based on a
simplified model and must be robust enough that the system meets its performance requirements when
applied to the real device.

System Configurations
Major configurations of control systems

Control systems can be broadly divided into two major configurations: open-loop and closed-loop
systems.

Open-Loop Control Systems

Open-loop control systems are appropriate in applications where the actions of the actuator on the process
are very repeatable and reliable. Systems in which the output quantity has no effect upon the input
quantity are called open-loop control systems. With this approach, however, the controller never actually
knows if the actuator did what it was supposed to because there is no feedback. This system absolutely
depends on the controller knowing the operating characteristics of the actuator. Relays and stepper motors
are devices with reliable characteristics and are usually open-loop operations. Actuators such as motors or
flow valves are sometimes used in open-loop operations, but they must be calibrated and adjusted at
regular intervals to ensure proper system operation.

It starts with a subsystem called an input transducer, which converts the form of the input to that used by
the controller. The controller drives a process or a plant. The input is sometimes called the reference,
while the output can be called the controlled variable. Other signals, such as disturbances, are shown
added to the controller and process outputs via summing junctions, which yield the algebraic sum of their
input signals using associated signs.

The distinguishing characteristic of an open-loop system is that it cannot compensate for any disturbances
that add to the controller's driving signal. Open-loop systems, then, do not correct for disturbances and
are simply commanded by the input.

Closed-Loop Control Systems

In a closed-loop control system, the output of the process (controlled variable) is constantly monitored by
a sensor. The sensor samples the system output and converts this measurement into an electric signal that
it passes back to the controller. Because the controller knows what the system is actually doing, it can
make any adjustments necessary to keep the output where it belongs. The signal from the controller to the
actuator is the forward path, and the signal from the sensor to the controller is the feedback (which
“closes” the loop). In Figure 1.2(b), the feedback signal is subtracted from the set point at the comparator
(just ahead of the controller).

Fig.1.2. Block diagrams of control systems: a. open-loop system; b. closed-loop systems

The self-correcting feature of closed-loop control makes it preferable over open-loop control in many
applications, despite the additional hardware required. This is because closed-loop systems provide
reliable, repeatable performance even when the system components themselves (in the forward path) are
not absolutely repeatable or precisely known.

The closed-loop system compensates for disturbances by measuring the output response, feeding that
measurement back through a feedback path, and comparing that response to the input at the summing
junction. If there is any difference between the two responses, the system drives the plant, via the
actuating signal, to make a correction. If there is no difference, the system does not drive the plant, since
the plant's response is already the desired response.
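The contrast can be sketched with a first-order plant x' = -x + u + d carrying a constant disturbance d: the open-loop command is computed from the nominal plant (which knows nothing of d), while the closed-loop controller works from the measured error (all values illustrative).

```python
# Compare open- and closed-loop control of the plant x' = -x + u + d,
# where d is a constant disturbance unknown to the controller.
def simulate(closed_loop, d=0.5, setpoint=1.0, dt=0.01, steps=2000):
    x, integral = 0.0, 0.0
    for _ in range(steps):
        if closed_loop:
            # Feedback: proportional-plus-integral control on the
            # measured error.
            error = setpoint - x
            integral += error * dt
            u = 2.0 * error + 1.0 * integral
        else:
            # Open loop: command computed from the nominal plant only
            # (u = setpoint would hold x at the set point if d were 0).
            u = setpoint
        x += (-x + u + d) * dt
    return x

print(round(simulate(False), 2))  # open loop settles offset by the disturbance
print(round(simulate(True), 2))   # closed loop drives the error to zero
```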

Closed-loop systems, then, have the obvious advantage of greater accuracy than open-loop systems. They
are less sensitive to noise, disturbances, and changes in the environment. Transient response and steady-
state error can be controlled more conveniently and with greater flexibility in closed-loop systems, often
by a simple adjustment of gain (amplification) in the loop and sometimes by redesigning the controller.

The input transducer converts the form of the input to the form used by the controller. An output
transducer, or sensor, measures the output response and converts it into the form used by the controller.
The first summing junction algebraically adds the signal from the input to the signal from the output,
which arrives via the feedback path, the return path from the output to the summing junction. The output
signal is subtracted from the input signal; the result is generally called the actuating signal. However, in
systems where both the input and output transducers have unity gain (that is, the transducer amplifies its
input by 1), the actuating signal's value is equal to the actual difference between the input and the output.
Under this condition, the actuating signal is called the error.

We refer to the redesign as compensating the system and to the resulting hardware as a compensator. On
the other hand, closed-loop systems are more complex and expensive than open-loop systems. A standard,
open-loop toaster serves as an example: it is simple and inexpensive. A closed-loop toaster oven is more
complex and more expensive, since it has to measure both color (through light reflectivity) and humidity
inside the toaster oven. Thus, the control systems engineer must consider the trade-off between the
simplicity and low cost of an open-loop system and the accuracy and higher cost of a closed-loop system.

CONTROL SYSTEM DESIGN

The design of control systems is a specific example of engineering design. Again, the goal of control
engineering design is to obtain the configuration, specifications, and identification of the key parameters
of a proposed system to meet an actual need.

The first step in the design process consists of establishing the system goals. The second step is to
identify the variables that we desire to control (for example, the velocity of the motor). The third step is
to write the specifications in terms of the accuracy we must attain. This required accuracy of control will
then lead to the identification of a sensor to measure the controlled variable.

As designers, we proceed to the first attempt to configure a system that will result in the desired control
performance. This system configuration will normally consist of a sensor, the process under control, an
actuator, and a controller. The next step consists of identifying a candidate for the actuator. This will, of
course, depend on the process, but the actuation chosen must be capable of effectively adjusting the
performance of the process. For example, if we wish to control the speed of a rotating flywheel, we will
select a motor as the actuator. The sensor, in this case, will need to be capable of accurately measuring the
speed. We then obtain a model for each of these elements.

The next step is the selection of a controller, which often consists of a summing amplifier that will
compare the desired response and the actual response and then forward this error-measurement signal to
an amplifier.

The final step in the design process is the adjustment of the parameters of the system in order to achieve
the desired performance. If we can achieve the desired performance by adjusting the parameters, we will
finalize the design and proceed to document the results. If not, we will need to establish an improved
system configuration and perhaps select an enhanced actuator and sensor. Then we will repeat the design
steps until we are able to meet the specifications, or until we decide the specifications are too demanding
and should be relaxed.

The performance specifications will describe how the closed-loop system should perform and will include
(1) good regulation against disturbances, (2) desirable responses to commands, (3) realistic actuator
signals, (4) low sensitivities, and (5) robustness.

The design process has been dramatically affected by the advent of powerful and inexpensive computers
and effective control design and analysis software like MATLAB and LabVIEW.

Fig.1.3. The control system design process

In summary, the controller design problem is as follows: Given a model of the system to be controlled
(including its sensors and actuators) and a set of design goals, find a suitable controller, or determine that
none exists. As with most of engineering design, the design of a feedback control system is an iterative
and nonlinear process. A successful designer must consider the underlying physics of the plant under
control, the control design strategy, the controller design architecture (that is, what type of controller will
be employed), and effective controller tuning strategies.

Control examples

Control systems are an integral part of modern society. Numerous applications are all around us: The
rockets fire, and the space shuttle lifts off to earth orbit; in splashing cooling water, a metallic part is
automatically machined; a self-guided vehicle delivering material to workstations in an aerospace
assembly plant glides along the floor seeking its destination. These are just a few examples of the
automatically controlled systems that we can create.

Position Control Systems

The satellite dish in the backyard or on the rooftop of a house has become common in recent years. It is
an antenna aimed at a satellite that is stationary with respect to the earth and is used to transmit television
or other signals. To increase the number of channels for a television, the dish may be designed to aim at
different satellites.

A possible arrangement of such a system is shown in Figure 1.4(a). These types of systems are called
position control systems.

Fig.1.4 Position Control

Fig.1.5 Radar Antenna

Velocity Control Systems


Driving tapes in video or audio recorders at a constant speed is important in producing quality output. The
problem is complicated because the load of the driver varies from a full reel to an empty reel.

Fig.1.6 Velocity Control System

Temperature Control Systems
Consider the temperature control of the enclosed chamber shown in Figure 1.8(a). This problem, which
arises in the temperature control of an oven, a refrigerator, an automobile compartment, a house, or the
living quarters of a space shuttle, can certainly be approached by using the empirical method. If the
analytical method is to be used, we must develop a model as shown in Figure 1.8(b). We can then use the
model to carry out analysis and design.

Temperature control is also important in chemical processes. The rate of chemical reactions often depends
on the temperature. If the temperature is not properly controlled, the entire product may become useless.
In industrial processes, temperature, pressure and flow controls are widely used.

Fig.1.7 Nuclear Power Plant

Fig.1.8. Temperature control system

Trajectory Control and Autopilot

The landing of a space shuttle on a runway is a complicated control problem. The task is to bring the
space shuttle to follow the desired trajectory as closely as possible. The structure of a shuttle is less
stable than that of an aircraft, and its landing speed cannot be controlled. Thus, the landing of a space
shuttle is considerably more complicated than that of an aircraft. The landing has been successfully
accomplished with the aid of on-board computers and altimeters and of the sensing devices on the
ground. In fact, it can even be achieved automatically, without involving astronauts. This is made
possible by the use of an autopilot. The autopilot is now widely used on aircraft and ships to maintain
desired altitude and/or heading.

Fig 1.9 Desired landing trajectory of space shuttle

