Control Engineering Completion

RHEYSHA LYN R. VILLA, BSME 4B

REVIEW QUESTIONS
PAGE 95

1. Mathematical model for easy interconnection of physical systems: The transfer
function model, or block diagram representation, allows for easy interconnection of
physical systems. By modeling individual components with transfer functions and
connecting them in series, parallel, or feedback arrangements, complex systems can
be constructed from simpler subsystems.
2. Classification of systems best suited for the transfer function: Transfer functions
are best applied to linear time-invariant (LTI) systems. They are especially useful for
analyzing single-input, single-output (SISO) systems where the relationships
between input and output can be represented with linear differential equations.
3. Transformation converting differential equations into algebraic manipulations:
The Laplace transform is used to turn differential equations into algebraic equations.
By transforming functions of time into functions of the complex frequency variable s,
solving the system equations becomes more manageable.
4. Definition of the transfer function: The transfer function is defined as the ratio of
the Laplace transform of the output to the Laplace transform of the input, assuming
all initial conditions are zero. Mathematically, for a system with input X(s) and
output Y(s), the transfer function G(s) is:
G(s) = Y(s) / X(s)
5. Assumption concerning initial conditions with transfer functions: When working
with transfer functions, it is assumed that all initial conditions are zero. This simplifies
the Laplace transformation process and focuses on the system’s response due solely
to inputs.
6. Mechanical equations for evaluating the transfer function: The differential
equations of motion or dynamic equations describe the behavior of mechanical
systems. These equations, derived from Newton’s laws or Lagrangian methods, help
in constructing the transfer function for a mechanical system.
7. Avoiding steps with known forms of mechanical equations: Understanding the
standard forms of mechanical equations (for example, those describing
mass-spring-damper systems) allows you to skip re-deriving the equations. Instead,
you can directly use established forms to write the transfer function.
8. Why mechanical and electrical network transfer functions look identical:
Transfer functions for mechanical and electrical networks often look identical due to
the analogous systems concept. Both mechanical and electrical systems can be
represented by similar differential equations where quantities like force and voltage,
or current and velocity, are analogous.
9. Function of gears: Gears transmit torque and modify speed or direction of rotation
between connected components. They are commonly used in mechanical systems to
adjust the output speed and torque in proportion to gear ratios.
10. Component parts of the mechanical constants of a motor's transfer function: The
mechanical constants in a motor's transfer function come from inertia (moment of
inertia of the rotating components and load), damping (frictional forces), and any
stiffness or elasticity in the shaft or load.
11. Determining the transfer function relating load displacement and armature
voltage: To find the transfer function relating load displacement to armature voltage,
cascade the motor’s internal transfer function (from armature voltage to armature
displacement) with the load dynamics. This combined transfer function reflects how
the input voltage influences the load position.
12. Steps to linearize a nonlinear system:
○ Step 1: Identify the nonlinear equation(s) governing the system.
○ Step 2: Find the equilibrium point(s) around which to linearize the system.
○ Step 3: Use a first-order Taylor series expansion around these points to
approximate the nonlinear terms.
○ Step 4: Ignore higher-order terms (terms beyond the first derivative) to obtain
a linearized model.
○ Step 5: Express the result in state-space or transfer function form for further
analysis.
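The steps above can be sketched numerically. In this illustrative Python snippet (the pendulum nonlinearity sin θ and the operating point θ₀ = 0 are my example, not from the text), the first-order Taylor term replaces the nonlinear function near equilibrium:

```python
import math

def linearize(f, x0, h=1e-6):
    """First-order Taylor approximation of f about x0 (Steps 3-4):
    f(x) ~= f(x0) + f'(x0) * (x - x0), derivative by central difference."""
    fprime = (f(x0 + h) - f(x0 - h)) / (2 * h)
    return lambda x: f(x0) + fprime * (x - x0)

# Pendulum nonlinearity sin(theta) linearized about theta0 = 0 gives the
# familiar small-angle model sin(theta) ~= theta.
lin_sin = linearize(math.sin, 0.0)
```

Near θ = 0.1 rad the linear model differs from sin(0.1) by well under one percent, which is why the higher-order terms in Step 4 can be ignored close to the equilibrium point.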
PAGE 278

1. Four components of a block diagram for a linear, time-invariant system:


○ Blocks (represent system functions or transfer functions)
○ Summing points (combine inputs algebraically, e.g., adding or subtracting)
○ Branches (indicate the direction of the signal flow)
○ Takeoff points (duplicate the signal flow for use in different parts of the
diagram)
2. Three basic forms for interconnecting subsystems:
○ Series connection
○ Parallel connection
○ Feedback connection
3. Equivalent transfer function for each form:
○ Series connection: Ge(s) = G1(s)G2(s)
○ Parallel connection: Ge(s) = G1(s) ± G2(s)
○ Feedback connection: Ge(s) = G1(s) / (1 + G1(s)H(s)) for negative feedback
(1 − G1(s)H(s) for positive feedback)
4. Other equivalents needed for block diagram reduction: Besides knowing the
basic interconnection forms, you should also be familiar with moving summing points
and takeoff points and simplifying multiple cascaded or parallel blocks to reduce
complex diagrams.
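These reductions can be carried out directly on transfer functions stored as (numerator, denominator) coefficient lists. A minimal sketch (the function names and the unity-feedback example are my own, not from the text):

```python
def conv(p, q):
    """Polynomial product; coefficient lists, highest power first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def add(p, q):
    """Polynomial sum, aligning lengths."""
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return [a + b for a, b in zip(p, q)]

def series(g1, g2):                      # G1 * G2
    (n1, d1), (n2, d2) = g1, g2
    return conv(n1, n2), conv(d1, d2)

def parallel(g1, g2):                    # G1 + G2
    (n1, d1), (n2, d2) = g1, g2
    return add(conv(n1, d2), conv(n2, d1)), conv(d1, d2)

def feedback(g, h):                      # G / (1 + G*H), negative feedback
    (n, d), (nh, dh) = g, h
    return conv(n, dh), add(conv(d, dh), conv(n, nh))
```

For example, 1/(s + 1) in series with 1/(s + 2) reduces to 1/(s² + 3s + 2), and unity negative feedback around 1/(s + 1) reduces to 1/(s + 2).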
5. Effect of forward-path gain K on the transient response in a second-order
feedback system: As K increases, the system’s transient response generally
becomes faster, reducing the rise time. However, higher values of K may also
increase overshoot and settling time if the system becomes more oscillatory.
6. Changes in damping ratio as K increases in the underdamped region: As
K increases, the damping ratio generally decreases, leading to a more oscillatory
response. A higher K in the underdamped region reduces relative stability and can
result in increased overshoot and oscillations.
7. Two components of a signal-flow graph:
○ Nodes (represent variables or states)
○ Branches (directed edges that represent system gains or transfer functions)
8. Representation of summing junctions on a signal-flow graph: Summing
junctions are shown implicitly by the convergence of branches at a node, where
incoming signals are summed or subtracted as indicated by branch gains.
9. Value of Δk if a forward path touches all closed loops: If a forward
path touches all closed loops, Δk = 1 in Mason’s gain formula. This
condition means there are no loops non-touching with respect to that path.
10. Five representations of systems in state space:
○ Controllable canonical form
○ Observable canonical form
○ Diagonal canonical form
○ Jordan canonical form
○ Phase variable form
11. Two state-space forms found using the same method: The controllable canonical
form and observable canonical form are derived using similar techniques, but with
different emphases on control and observation of states, respectively.
12. State-space form that leads to a diagonal matrix: The diagonal canonical form
results in a diagonal system matrix when the system is decoupled into independent
modes.
13. Quantities along the diagonal of a diagonal system matrix: The diagonal entries
of a diagonal system matrix are the system’s eigenvalues, representing each mode's
natural response characteristics.
14. Terms along the diagonal in Jordan canonical form: The diagonal elements are
the system’s eigenvalues, while elements just above the diagonal may represent
coupling if eigenvalues are repeated. The Jordan form simplifies the matrix structure
for systems with repeated eigenvalues.
15. Advantages of a diagonal system matrix: A diagonal matrix simplifies system
analysis and allows each mode to be analyzed independently, especially beneficial
for stability and modal control analysis.
16. Reasons for alternative system representations:
○ Simplification of complex systems by highlighting specific properties.
○ Easier design and analysis of control systems, particularly for controllability
and observability studies.
17. System suited for observer canonical form: The observer canonical form is used
for systems where estimation of unmeasured states is needed, often in design
contexts where observation is essential.
18. State-vector transformations and different bases: State-vector transformations
change the basis, or coordinate system, in which the system is represented, allowing
for alternative perspectives or simplifications in analysis and control design.
19. Definition of an eigenvector: An eigenvector is a nonzero vector v whose
direction is unchanged by the transformation A; that is, Av = λv for some scalar λ.
20. Definition of an eigenvalue: An eigenvalue λ is the scalar that satisfies Av = λv
for a corresponding eigenvector v of matrix A.
21. Significance of using eigenvectors as basis vectors: Eigenvectors form a natural
basis for system transformations, where each eigenvector aligns with a mode of the
system. This basis simplifies the system dynamics, often leading to diagonalization or
near-diagonalization of matrices, which aids in analyzing the system's behavior.
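A quick numerical check of the Av = λv relationship (the 2×2 matrix and vector below are illustrative values, not from the text):

```python
# Example matrix with eigenvalues 2 and 3 (chosen for illustration).
A = [[2.0, 0.0],
     [1.0, 3.0]]
v = [1.0, -1.0]    # candidate eigenvector
lam = 2.0          # candidate eigenvalue

# Compute A*v row by row.
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]

# Av equals lam*v, so the direction of v is unchanged by A: v is an
# eigenvector and lam is its eigenvalue.
ok = all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))
```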

PAGE 146

1. Two reasons for modeling systems in state space:


○ Multi-input, multi-output (MIMO) capability: State-space models can handle
multiple inputs and outputs, which is a limitation in traditional transfer function
models.
○ Time-domain analysis and control design flexibility: State-space models make
it easier to analyze system stability, controllability, observability, and to design
state-feedback controllers.
2. Advantage of the transfer function approach over the state-space approach:
The transfer function approach simplifies the analysis of single-input, single-output
(SISO) systems by representing system dynamics through a single algebraic
expression, which can be easier to interpret for frequency response and steady-state
analysis.
3. Definition of state variables: State variables are variables that define the state of a
dynamic system at any given time. They represent the smallest set of variables
necessary to describe the system's behavior completely.
4. Definition of state: The state of a system is the set of values of the state variables
at a specific time, capturing all information needed to determine future behavior in the
absence of external inputs.
5. Definition of state vector: The state vector is a vector composed of the state
variables, typically represented as x(t), encapsulating the system’s
state in a compact form.
6. Definition of state space: The state space is the multidimensional space defined by
all possible values of the state variables, where each point represents a unique state
of the system.
7. Requirements to represent a system in state space:
○ Identification of a set of state variables that describe the system.
○ State equations that describe the dynamics of the state variables (typically as
a system of first-order differential equations).
○ An output equation that defines the relationship between state variables and
system output(s).
8. State equations for an eighth-order system: An eighth-order system would require
eight state equations, as each state equation represents a first-order differential
equation corresponding to a state variable.
9. Function of the output equation in state-space modeling: The output equation
defines how the state variables relate to the system’s output. It essentially maps the
internal state of the system to observable outputs.
10. Definition of linear independence: Linear independence means that no state
variable (or vector) in a set can be written as a linear combination of the others. This
concept ensures that each state variable provides unique information about the
system.
11. Factors influencing the choice of state variables:
○ The physical meaning or interpretation of variables related to energy storage
elements (like capacitors and inductors in electrical systems or masses and
springs in mechanical systems).
○ The need for simplicity in forming first-order differential equations.
○ Ensuring linear independence among state variables.
12. Convenient choice of state variables for electrical networks: For electrical
networks, common state variables are the currents through inductors and voltages
across capacitors, as they represent energy storage in the network.
13. State variables for an electrical network with three energy-storage elements:
Generally, the number of state variables corresponds to the number of independent
energy storage elements, so three state variables are sufficient if all three are
independent. However, additional state variables may be used if a different
formulation or redundancy is introduced, though this is typically unnecessary.
14. Phase-variable form of the state equation: The phase-variable form represents the
state equations such that each state variable represents a derivative of the output.
This form is common when state variables are defined as successive derivatives of
the output, resulting in a companion matrix that simplifies control design.
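The companion matrix mentioned above can be built mechanically from the denominator coefficients. A minimal sketch, assuming coefficient lists ordered highest power first (the function name and example are my own):

```python
def phase_variable_A(den):
    """Companion-form A matrix for the monic denominator
    s^n + a_{n-1} s^{n-1} + ... + a_0, coefficients highest power first."""
    a = [c / den[0] for c in den[1:]]     # normalize to monic: a_{n-1}..a_0
    n = len(a)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0                 # superdiagonal of ones
    A[-1] = [-c for c in reversed(a)]     # bottom row: -a_0 ... -a_{n-1}
    return A

# Denominator s^2 + 3s + 2 gives A = [[0, 1], [-2, -3]].
```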

PAGE 323

1. Part of the output response responsible for determining stability: The natural
response (or homogeneous response) of the system is responsible for determining
stability. It reflects the system’s response to initial conditions without any external
inputs.
2. Condition causing instability in the natural response: Instability occurs when the
natural response grows unbounded over time, usually due to poles with positive real
parts or purely imaginary poles that lead to sustained oscillations without decay.
3. Effect on a physical system that becomes unstable: An unstable physical system
exhibits unbounded behavior, which can lead to damage, oscillations, or runaway
responses that prevent the system from reaching a steady state or desired output.
4. Reason marginally stable systems are considered unstable under BIBO
stability: Marginally stable systems may have poles on the imaginary axis, causing
oscillations that neither decay nor grow over time. Under the Bounded Input,
Bounded Output (BIBO) stability definition, these oscillations mean the output could
remain unbounded for certain inputs, classifying the system as unstable.
5. Pole locations to ensure stability: For a system to be stable, all poles must lie in
the left-half of the complex plane (i.e., they must have negative real parts). This
ensures that all natural responses decay to zero over time.
6. Purpose of the Routh-Hurwitz criterion: The Routh-Hurwitz criterion is a stability
test that determines the number of poles in the right-half plane (RHP) of a
polynomial. It provides a method to check if a system is stable by examining the
signs of coefficients without directly calculating the poles.
7. Conditions for Routh-Hurwitz criterion to reveal pole locations: The
Routh-Hurwitz criterion alone does not reveal exact pole locations, but if it shows no
sign changes in the first column of the Routh table, it indicates all poles are in the
left-half plane (LHP), suggesting stability. Actual pole locations require solving the
characteristic equation.
8. Cause of a zero only in the first column of the Routh table: A single zero in the
first column occurs when the computation for that entry happens to equal zero while
other entries in the row are nonzero. It does not by itself indicate a jω pole; the usual
remedy is to replace the zero with a small ε and continue building the table.
9. Cause of an entire row of zeros in the Routh table: An entire row of zeros typically
indicates the presence of symmetrical roots (e.g., complex conjugate roots on the
imaginary axis), suggesting possible oscillatory behavior in the system.
10. Reason for multiplying a row by a positive constant: Multiplying by a positive
constant can make calculations simpler without altering the sign pattern in the Routh
table, which is essential for determining stability accurately.
11. Reason for not multiplying a row by a negative constant: Multiplying by a
negative constant would invert the signs in that row, which could lead to incorrect
conclusions about stability since the Routh-Hurwitz criterion relies on the sign
pattern.
12. Right-half-plane poles from sign changes in a Routh table: The number of RHP
poles is equal to the number of sign changes in the first column. With two sign
changes above the even polynomial and five below, the system has a total of seven
right-half-plane poles.
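Counting first-column sign changes can be automated. A minimal Routh-table sketch in Python (the function names and example polynomials are mine; the code omits the special cases of a zero first-column entry or a row of zeros, which need the ε or even-polynomial procedures described above):

```python
def routh_first_column(coeffs):
    """Build the Routh array for a polynomial (coefficients highest power
    first) and return its first column. Assumes no zero pivots arise."""
    top = list(coeffs[0::2])
    bot = list(coeffs[1::2])
    bot += [0.0] * (len(top) - len(bot))
    first_col = [top[0], bot[0]]
    for _ in range(len(coeffs) - 2):
        new = [(bot[0] * top[j + 1] - top[0] * bot[j + 1]) / bot[0]
               for j in range(len(top) - 1)]
        new += [0.0]
        top, bot = bot, new
        first_col.append(bot[0])
    return first_col

def rhp_pole_count(coeffs):
    """Number of sign changes in the first column = number of RHP roots."""
    col = routh_first_column(coeffs)
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)
```

For example, s³ + s² + 2s + 24 factors as (s + 3)(s² − 2s + 8), so it has two right-half-plane roots, and its first column shows exactly two sign changes.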
13. Does a row of zeros always mean jω poles? Not always. A row of zeros may
indicate complex conjugate roots on the imaginary axis, but it can also occur in
systems with symmetrical roots. Further analysis is required to confirm jω poles.
14. Number of jω poles for a seventh-order system with a row of zeros at the s³
row and two sign changes below the s⁴ row: The row of zeros at s³ means the s⁴
row defines an even polynomial of degree four. Two sign changes below that row
indicate two right-half-plane roots of the even polynomial and, by symmetry, two
left-half-plane roots, so there are no jω poles.
15. Are eigenvalues of the system matrix the same as closed-loop poles? Yes, the
eigenvalues of the system matrix are the same as the closed-loop poles, as they
represent the system's natural response frequencies.
16. Method for finding eigenvalues: To find the eigenvalues of a system matrix A,
solve the characteristic equation det(λI − A) = 0; its roots are the eigenvalues.
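For a 2×2 system matrix the characteristic equation det(λI − A) = λ² − (trace A)λ + (det A) = 0 can be solved in closed form (the helper name and example matrix are illustrative, not from the text):

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from det(lambda*I - A) = 0,
    i.e. lambda^2 - (a + d)*lambda + (a*d - b*c) = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# A = [[0, 1], [-2, -3]] has characteristic equation s^2 + 3s + 2 = 0,
# so its eigenvalues (the closed-loop poles) are -1 and -2.
```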

PAGE 423

1. Root locus definition: The root locus is a graphical representation that shows the
possible locations of the closed-loop poles of a control system as the system gain
K varies. It helps analyze system stability and transient response characteristics.
2. Two ways to obtain the root locus:
○ Manual plotting using root locus rules: Plot by applying standard rules (such
as identifying asymptotes, breakaway points, and angles of departure).
○ Computer-aided plotting: Use software tools like MATLAB to generate the root
locus diagram based on the system’s transfer function.
3. Effect of gain change on system zeros: The zeros of a system are independent of
the gain K; they remain constant as gain changes. Only the locations of the poles
are affected by changes in K.
4. Location of closed-loop transfer function zeros: For a forward path G(s) with
feedback H(s), the closed-loop zeros are the zeros of G(s) together with the poles
of H(s); they do not move as the feedback gain varies.
5. Two ways to find where the root locus crosses the imaginary axis:
○ Routh-Hurwitz criterion: Apply the criterion to determine the values of gain at
which closed-loop poles lie on the imaginary axis.
○ Solving for imaginary roots: Substitute s = jω into the
characteristic equation and solve for the values of K and ω at which
imaginary-axis roots occur.
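As a check of the substitution method, take the illustrative open loop G(s) = K/(s(s + 1)(s + 2)) (my example, not from the text). Unity feedback gives the characteristic equation s³ + 3s² + 2s + K = 0; Routh-Hurwitz places the stability boundary at K = 6, and substituting s = jω then forces ω = √2:

```python
import math

K = 6.0                      # gain at the stability boundary (from Routh)
omega = math.sqrt(2.0)       # from substituting s = j*omega
s = 1j * omega

# The characteristic polynomial evaluates to zero on the imaginary axis,
# confirming a pair of jw poles at this gain.
residual = s**3 + 3 * s**2 + 2 * s + K
```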
6. Indicating instability from the root locus: A system is unstable if the root locus
crosses into the right-half plane (RHP), indicating that some closed-loop poles have
positive real parts, leading to an unbounded response.
7. Determining if the settling time does not change over a gain region: The settling
time remains constant if the root locus remains at the same horizontal distance from
the imaginary axis over a range of gains, indicating consistent real parts for the
poles.
8. Indicating constant natural frequency from the root locus: If the radial distance
from the origin to the root locus remains unchanged over a range of gains, then the
natural frequency is constant in that region.
9. Checking for real-axis crossings in the root locus: Points where the locus
leaves or re-enters the real axis (breakaway and break-in points) are found by
solving dK/ds = 0 along the real-axis segments of the locus, or equivalently by
locating the maxima and minima of K = −1/(G(s)H(s)) on the real axis.
10. Conditions for a second-order approximation: For a second-order approximation
to be valid, there must be a pair of dominant poles near the imaginary axis with other
poles and zeros located much farther to the left in the complex plane (for stability) or
with negligible influence on the transient response.
11. Root locus plotting rules common to positive- and negative-feedback systems:
○ Rules for identifying asymptotes, angles of departure and arrival, and
breakaway and break-in points apply to both positive- and negative-feedback
systems.
○ The fundamental rule for the root locus lying on sections of the real axis
where there is an odd number of poles and zeros to the right also applies to
both cases.
12. Effect of open-loop zeros on the root locus and transient response: The zeros
of the open-loop system influence the shape of the root locus, as poles are attracted
toward zeros as K increases. The closer an open-loop zero is to the imaginary
axis, the more it influences the transient response by increasing overshoot or
oscillations.

PAGE 505
1. Difference between design techniques in Chapter 8 and Chapter 9:
○ Chapter 8 designs by gain adjustment alone: the closed-loop poles are
restricted to locations on the existing root locus, so only responses reachable
on that locus can be obtained.
○ Chapter 9 designs by compensation: adding poles and zeros (lead, lag, and
PID networks) reshapes the root locus so that the closed-loop poles can be
placed where the specifications require.
2. Advantages of Chapter 9 design techniques over Chapter 8:
○ Transient-response specifications can be met at pole locations that do not lie
on the original root locus.
○ Steady-state error can be improved independently of the transient response,
rather than trading one against the other through gain adjustment alone.
3. Type of compensation improving steady-state error:
○ Integral compensation (or a PI controller) is used to improve the steady-state
error by adding a pole at the origin, which increases system type and reduces
error in response to certain input types.
4. Type of compensation improving transient response:
○ Derivative compensation (or a PD controller) improves transient response by
increasing the damping ratio, thereby reducing overshoot and settling time.
5. Type of compensation improving both steady-state error and transient
response:
○ PID compensation or a lead-lag compensator can improve both steady-state
error and transient response, as it combines the effects of integral (for
steady-state error) and derivative (for transient response) actions.
6. Cascade compensation to improve steady-state error (pole-zero placement):
○ Typically, a lag compensator with a zero closer to the imaginary axis and a
pole further left on the real axis is used. This placement increases
low-frequency gain, which helps reduce steady-state error while minimizing
impact on transient response.
7. Cascade compensation to improve transient response (pole-zero placement):
○ A lead compensator is used, placing the zero closer to the imaginary axis and
the pole further left on the real axis. This placement increases system
bandwidth and damping, leading to faster and more stable transient
responses.
8. Difference between using a PD controller and a lead network for transient
response on the s-plane:
○ A PD controller adds a zero to the open-loop transfer function, increasing
system damping and moving the root locus to the left, but it affects the entire
s-plane. A lead network also adds a zero and a pole but focuses the phase
lead effect within a specific frequency range, giving more control over
transient response without affecting other regions as much.
9. Location of compensated system poles to speed up the system without
changing overshoot:
○ To speed up the system without increasing overshoot, the compensated poles
should be placed farther left along the same radial line (constant damping
ratio line) as the uncompensated poles. This maintains the same percent
overshoot while reducing settling time.
10. Greater improvement in steady-state error with a PI controller over a lag
network:
○ A PI controller has a pole at the origin, providing infinite DC gain, which fully
eliminates steady-state error for step inputs, whereas a lag network only
increases low-frequency gain to a limited extent, offering less error reduction.
11. Effect on transient response when compensating for steady-state error:
○ When compensating for steady-state error (e.g., using a lag compensator),
the transient response may degrade, often resulting in a slower system with
increased settling time due to the additional pole's effect on system dynamics.
12. Improvement in steady-state error with a lag compensator zero 25 times farther
from the imaginary axis than the pole:
○ This setup typically provides a 25-fold improvement in steady-state error, as
the low-frequency gain increase is approximately the ratio of the zero to pole
distance.
13. Pole-zero cancellation with a zero at 3 and a closed-loop pole at 3.001:
○ There would be near pole-zero cancellation, but it may not be exact due to
the slight difference in locations. Exact cancellation requires the zero and pole
to be at precisely the same location; near cancellation can still affect system
dynamics and lead to unpredicted behavior if poles and zeros are close but
not identical.
14. Two advantages of feedback compensation:
○ Improved disturbance rejection: Feedback compensation allows the system to
better reject disturbances, enhancing stability.
○ Increased robustness: Feedback compensation improves system stability
margins, making the system less sensitive to parameter variations and
modeling uncertainties.

PAGE 209

1. Performance specification for first-order systems:


○ The primary performance specification for first-order systems is the time
constant, τ.
2. Interpretation of the time constant in first-order systems:
○ The time constant τ indicates how quickly the system responds to a step
input. Specifically, it is the time required for the step response to reach
approximately 63.2% of its final steady-state value.
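The 63.2% figure is simply 1 − e⁻¹, as a quick check shows (τ = 0.5 s is an arbitrary example value):

```python
import math

tau = 0.5                        # example time constant, seconds
t = tau                          # evaluate one time constant after the step
value = 1 - math.exp(-t / tau)   # unit step response of 1/(tau*s + 1)

# value = 1 - e**-1, i.e. about 63.2% of the final value, regardless of tau.
```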
3. Poles generating steady-state response:
○ Poles at or near the origin (real part close to zero) primarily influence the
steady-state response.
4. Poles generating transient response:
○ Poles with non-zero real parts, especially those farther from the origin,
determine the transient response, as they decay over time.
5. Imaginary part of a pole:
○ The imaginary part of a pole generates the oscillatory part of the response.
6. Real part of a pole:
○ The real part of a pole generates the exponential decay or growth, affecting
the speed of the response.
7. Natural frequency vs. damped frequency:
○ The natural frequency is the frequency at which the system would oscillate
with no damping.
○ The damped frequency is the frequency at which the system actually
oscillates when damping is present.
○ Constant imaginary part movement of a pole: if a pole is moved with a
constant imaginary part, the responses will have the same damped oscillation
frequency.
8. Constant real part movement of a pole:
○ If a pole is moved with a constant real part, the responses will have the same
rate of decay or growth.
9. Movement of a pole along a radial line from the origin:
○ Moving a pole along a radial line from the origin maintains a constant
damping ratio, so responses will have the same percentage overshoot.
10. Specifications for a second-order underdamped system:
○ Five common specifications are:
■ Natural frequency
■ Damping ratio
■ Peak time
■ Settling time
■ Steady-state error
11. Number of specifications to determine the second-order response completely:
○ Two specifications, typically the natural frequency ωn and the damping ratio
ζ, are sufficient to completely determine the response.
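Given ζ and ωn, the remaining underdamped specifications follow from the standard second-order formulas (the function name and sample values below are illustrative):

```python
import math

def second_order_specs(zeta, wn):
    """Percent overshoot, peak time, and 2% settling time for an
    underdamped second-order system (0 < zeta < 1)."""
    wd = wn * math.sqrt(1 - zeta**2)                  # damped frequency
    pos = 100 * math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))
    tp = math.pi / wd                                 # peak time
    ts = 4 / (zeta * wn)                              # 2% settling time
    return pos, tp, ts

# zeta = 0.5, wn = 10 rad/s: about 16.3% overshoot, Tp ~ 0.363 s, Ts = 0.8 s.
```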
12. Pole locations characterizing different damping conditions:
○ Underdamped system: Poles are complex conjugates with real parts less
than zero.
○ Overdamped system: Poles are real and negative, with distinct values.
○ Critically damped system: Poles are real, negative, and repeated.
13. Conditions to neglect the response generated by a pole:
○ If the pole is far from the imaginary axis (fast-decaying response) or if the
pole’s contribution is relatively small compared to the dominant poles, it can
be neglected.
14. Justifying pole-zero cancellation:
○ Pole-zero cancellation can be justified if a pole and zero are located at the
same or nearly the same location in the s-plane, meaning their effects on the
response will cancel each other out.
15. State equation solution and output response:
○ Solving the state equation yields the state vector response. The output
response is then obtained by combining the state response with the output
equation.

PAGE 597
1. Advantages of frequency response techniques over root locus:
○ Frequency response methods directly provide phase and gain margin, assess
stability and robustness, handle noise and disturbances, and help analyze
systems without solving differential equations.
2. Frequency response of a physical system:
○ Frequency response is the steady-state response of a system to sinusoidal
inputs of varying frequencies, showing how the output amplitude and phase
vary with frequency.
3. Ways to plot the frequency response:
○ Bode plots and Nyquist diagrams are the two main methods.
4. Obtaining frequency response analytically:
○ The frequency response can be obtained by substituting s = jω into the
transfer function and evaluating the magnitude and phase for different values of ω.
5. Definition of Bode plots:
○ Bode plots are graphical representations of a system’s frequency response,
showing the magnitude (in dB) and phase (in degrees) as a function of
frequency (logarithmic scale).
6. Contribution of each pole to Bode magnitude plot slope:
○ Each pole contributes a slope of -20 dB/decade in the magnitude plot.
7. Slope for a system with four poles and no zeros at high frequencies:
○ The slope would be -80 dB/decade at high frequencies.
8. Slope for a system with four poles and two zeros at high frequencies:
○ The slope would be -40 dB/decade at high frequencies.
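The high-frequency slope can be verified numerically. For an assumed G(s) = 1/(s + 1)⁴ (four poles, no zeros, my example), the magnitude falls about 80 dB over the decade from ω = 100 to ω = 1000:

```python
import math

def mag_db(w):
    """|G(jw)| in dB for G(s) = 1/(s + 1)^4: four poles, no zeros."""
    return 20 * math.log10(abs(1 / (1j * w + 1) ** 4))

# Well above the corner frequency the asymptotic slope is
# 4 poles * (-20 dB/decade) = -80 dB/decade.
slope_per_decade = mag_db(1000.0) - mag_db(100.0)
```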
9. Asymptotic phase response of a single pole at 2:
○ The phase shift asymptotically approaches -90° as frequency increases.
10. Difference between Bode magnitude plots for first-order and second-order
systems:
○ First-order systems have a single slope change at the corner frequency, while
second-order systems exhibit resonance peaks and a more rapid change
near the natural frequency.
11. Maximum difference between asymptotic and actual magnitude response for
three poles at 4:
○ The maximum difference occurs at the break frequency and is approximately
9 dB, since each of the three coincident poles contributes about 3 dB of error there.
12. Nyquist criterion:
○ The Nyquist criterion relates encirclements of the point −1 + j0 by the Nyquist
plot to closed-loop stability: Z = P − N, where N is the number of counterclockwise
encirclements of −1 + j0, P is the number of open-loop right-half-plane poles, and
Z is the number of closed-loop right-half-plane poles.
13. Purpose of the Nyquist criterion:
○ It tells us whether the closed-loop system is stable by examining the
open-loop frequency response.
14. Nyquist diagram:
○ A plot of the open-loop transfer function on the complex plane as frequency
varies.
15. Reason for calling the Nyquist criterion a frequency response method:
○ Because it analyzes system stability based on the open-loop frequency
response.
16. Open-loop poles on the imaginary axis when sketching Nyquist diagram:
○ For poles on the imaginary axis, a small semicircular path is added around
the pole in the Nyquist plot to avoid undefined points.
17. Simplification of Nyquist criterion for open-loop stable systems:
○ For open-loop stable systems, the Nyquist plot must not encircle −1 + j0
for closed-loop stability.
18. Simplification of Nyquist criterion for open-loop unstable systems:
○ For open-loop unstable systems, we adjust the Nyquist plot encirclement
count to account for the number of right-half-plane poles.
19. Gain margin:
○ Gain margin is the factor by which the system gain can be increased before
the system becomes unstable, measured in dB.
20. Phase margin:
○ Phase margin is the additional phase lag required to bring the system to the
verge of instability.
21. Frequency response characteristics for transient response:
○ Gain margin and phase margin indicate transient response characteristics.
22. Methods to find closed-loop frequency response from open-loop transfer
function:
○ Nyquist plot, Bode plot, and Nichols chart.
23. Finding the static error constant from the Bode magnitude plot:
○ The low-frequency asymptote of the Bode magnitude plot indicates the static
error constant: its initial slope identifies the system type, and its value (or
intercept) gives Kp, Kv, or Ka.
24. Effect of time delay on open-loop frequency response magnitude plot:
○ Time delay introduces a phase lag that increases with frequency, often
reducing stability margins.
25. Shape of pure time delay on linear phase vs. linear frequency plot:
○ The curve appears as a straight line with a negative slope, indicating
increasing phase lag with frequency.
26. Completing extraction of component transfer functions from frequency
response data:
○ You know the extraction is complete when the remaining response can be
attributed to noise or insignificant unmodeled dynamics.

PAGE 754

1. Functions of digital computers in feedback control systems:


○ Digital computers can implement control algorithms and process sensor data
to adjust the control response dynamically.
2. Advantages of using digital computers in the loop:
○ Flexibility in implementing complex control laws, high accuracy, and the ability
to handle multiple inputs and outputs simultaneously.
3. Considerations in analog-to-digital conversion that yield errors:
○ Quantization error and sampling error (aliasing).
4. Block diagram model for a computer:
○ The block diagram typically includes analog-to-digital (A/D) conversion, the
digital controller or processor, and digital-to-analog (D/A) conversion.
5. Definition of the z-transform:
○ The z-transform is a mathematical tool that converts a discrete-time signal
into a complex frequency domain, similar to how the Laplace transform
applies to continuous systems.
6. Inverse z-transform of a time waveform:
○ It yields the original discrete-time signal in the time domain.
7. Methods for finding the inverse z-transform:
○ Partial fraction expansion and power series expansion.
8. Method yielding a closed-form expression for the time function:
○ Partial fraction expansion.
9. Method that immediately yields values of the time waveform at sampling
instants:
○ The power series expansion method.
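The power-series method is just polynomial long division, which can be sketched directly on coefficient lists (the function name and the F(z) = z/(z − 0.5) example are assumptions for illustration):

```python
def power_series_samples(num, den, n):
    """Long-divide num(z)/den(z) (coefficient lists, highest power first)
    to recover the first n samples f(0), f(T), f(2T), ..."""
    work = list(num) + [0.0] * n          # extend the dividend with zeros
    samples = []
    for k in range(n):
        q = work[k] / den[0]              # next quotient coefficient
        samples.append(q)
        for j in range(len(den)):         # subtract q * den, shifted by k
            work[k + j] -= q * den[j]
    return samples

# F(z) = z/(z - 0.5) corresponds to the sampled decay f(kT) = (0.5)^k,
# and the division reproduces those values sample by sample.
```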
10. Requirement for finding the z-transform of G(s):
○ The system must be linear, time-invariant, and sampled periodically.
11. Nature of c(t) if input R(z) to G(z) yields output C(z):
○ c(t) is a discrete-time signal whose values are defined at the sampling
instants of the output.
