
Methods and Algorithms for Advanced Process Control

Seesaw Case Study

Anand Murali Ramankandath - Ioannis Koukoulis


April 2021
1 Model and Open Loop Analysis
We begin the case study by analyzing the state-space model of our system. The state comprises four components, the cart position, the seesaw angle and their rates of change, x = [x, θ, ẋ, θ̇]^T. The system matrix is
\[
A = \begin{bmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
-1.6143 & -9.1618 & -16.9249 & 0 \\
-12.9144 & 5.1856 & -2.729 & 0
\end{bmatrix}.
\]

The input and output matrices are
\[
B = \begin{bmatrix} 0 \\ 0 \\ 3.3827 \\ 0.54543 \end{bmatrix},
\qquad
C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.
\]

As is typical for physical continuous-time systems, there is no direct feedthrough from input to output, so the direct transmission matrix is D = [0 0]^T.
We can now perform an open-loop analysis of the model. The system has four poles, at s = 2.96, −1.531 ± 0.5115i and −16.7635 (fig. 1). As intuition suggests, the system is unstable, since it has a pole with positive real part. The controllability matrix C = [B, AB, A²B, A³B] has rank 4, equal to the order of the system, so the system is controllable and therefore stabilizable. Correspondingly, the observability matrix O = [C; CA; CA²; CA³] also has rank 4, so the system is observable and therefore detectable. Since the system is both controllable and observable, the realization is minimal. Finally, we note that the system has no transmission zeros.
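For reference, these checks can be reproduced with a few lines of MATLAB; the script below is a minimal sketch of our own (it was not part of the original Simulink files), using the matrices defined above.

A = [  0        0         1       0;
       0        0         0       1;
      -1.6143  -9.1618  -16.9249  0;
     -12.9144   5.1856   -2.729   0];
B = [0; 0; 3.3827; 0.54543];
C = [1 0 0 0; 0 1 0 0];
D = [0; 0];

sys = ss(A, B, C, D);    % continuous-time state-space model
pole(sys)                % open-loop poles: one lies in the right half-plane
tzero(sys)               % transmission zeros: none
rank(ctrb(A, B))         % 4 = system order, so controllable (and stabilizable)
rank(obsv(A, C))         % 4 = system order, so observable (and detectable)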

Figure 1: Pole-zero map of our system.

Our goal will be to control the cart’s position so that the seesaw balances at a given (small) angle, an equilibrium which, as we saw, is unstable. In other words, we are looking to stabilize the system at the given reference angle.

2 LQR Design - First Closed-Loop Diagram
For our first approach, we build a simplified closed-loop diagram, shown in fig. 2. In this approach the controller has direct access to the full state x.

Figure 2: First Closed-Loop Simulink Diagram.

The diagram is explained below:


• The blocks on the left define the reference signal. The user sets a target angle θ. This is sent
through the x cart function block, which calculates the corresponding x-position of the cart
at equilibrium according to the solved Lagrange equations. Finally, zero values for velocity
and angular velocity are appended to the reference signal.
• The signal is then split in two parts, as is standard in (Nx, Nu, K) full-state feedback controllers: one part is passed through the Nx gain to produce the steady-state state xss, the other through Nu to produce the steady-state input uss. The difference between the current state and the computed steady state is passed through the LQR gain, yielding K∆x, and this term is summed with uss to produce the input to the state-space model (a sketch of this law is given right after this list).
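As a rough sketch (the function and variable names are ours, and the exact sign convention depends on how the difference block in the diagram is wired), each control update amounts to:

function u = control_law(x, r, K, Nx, Nu)
% One step of the full-state feedback law with reference scaling.
% r : reference vector [x_ref; theta_ref; 0; 0]
x_ss = Nx * r;                 % steady-state state for this reference
u_ss = Nu * r;                 % steady-state (feedforward) input
u    = u_ss - K * (x - x_ss);  % equivalently u_ss + K*(x_ss - x)
end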
The LQR gain was computed using the lqr command from the A, B, Q and R matrices. We chose the gain through a trial-and-error approach, beginning with the recommended values for Q and R. Overall, we tried to balance a short settling time against keeping the control action within the actuator limit of 5 Volts. We checked this by observing the step response of the system, starting from [0, 0, 0, 0]^T and settling to an angle of 3.5 degrees. We also checked the impulse response of the system balancing at x = 0, θ = 0.
In particular, we began with the recommended values Q = diag([1000, 4000, 0, 0]) and R = 10. The system reached the desired state, although the control inputs remained quite low and the settling time was quite high. In our next attempt we increased the penalty on the θ term in order to decrease the settling time, setting Q22 = 40000. This lowered the settling time by 1 second, while the control input’s magnitude increased to ≈ 4.5 V.
For our third attempt, we lowered the penalty on the x term, setting Q11 = 1; the reasoning was that allowing the cart to move more freely would enable more effective control. This did not make much difference in settling time or control input magnitude. Penalizing the terms relating to velocity and angular velocity did not improve the system’s response either. Finally, we tried reducing the control action penalty by setting R = 1, but this led to input actions beyond the saturation limit. Thus, our final choices for Q and R are:

 
\[
Q = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 40000 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix},
\qquad R = 10.
\]
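With these weights, the gain and the resulting closed-loop poles can be reproduced with a short MATLAB sketch of our own (A and B as defined in Section 1):

Q = diag([1 40000 0 0]);   % heavy penalty on the angle error
R = 10;                    % keeps the control action within the 5 V limit
K = lqr(A, B, Q, R);       % LQR state-feedback gain

eig(A - B*K)               % closed-loop poles, all in the left half-plane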

Figure 3: Step response & control inputs for initial Q and R matrices

Figure 4: Step response & control inputs for chosen Q and R matrices

Figure 5: Impulse response for chosen Q and R matrices

Our closed-loop system has poles at s = −16.69, −3.517 ± 4.317i, −5.2358. As expected, all poles lie in the left half of the complex plane, so the closed-loop system is stable (fig. 6).

Figure 6: Poles for the closed-loop system.

3 Second Closed-Loop Diagram


We now make our model more detailed in order to better reflect the actual conditions of our experimental setup. The Simulink diagram we used is shown in figure 7 below, followed by a list explaining the modifications and additions relative to the first diagram.

Figure 7: Second Closed-Loop Simulink Diagram.

i. As in the original diagram, we have an input reference signal, a calculated steady state and steady-state input. We have included the effect of a saturator on the input voltage (the magnitude of all inputs is cut off at 5 V). Additionally, the state-space model’s output matrix is no longer the identity as before; the only outputs are now the position and the angle.

ii. Here we simulate the measurement process of the two outputs. Since we do not have access to the laboratory equipment, we simply assume that each output signal is affected by a noise source of power 10⁻⁷, which amounts to disturbances of roughly 0.01 meters and 0.01 radians for the position and the angle respectively. This noisy signal is then translated by the potentiometer into a voltage via a linear relationship (as long as it stays within the limits of ±0.456 m and ±15°), and digitized by an A/D converter. We translated the digitization process into Simulink in the following way: first we discretized the interval [−10, 10] V into 65536 possible values, which is the resolution of the 16-bit converter. Then we pinpointed where the meaningful values of our variables lie in this discretized interval (they are ±4.41 V and ±3.166 V for position and angle respectively). Dividing the range of the meaningful values (in engineering units) by the length of the corresponding vector of acceptable values minus one, we obtain the quantization interval for each component of the signal, approximately 3.156 · 10⁻⁵ m for the position and 2.5235 · 10⁻⁵ rad for the angle.
iii. After the position and angle have been measured, the velocity and angular velocity must be estimated; this is done by combining a memory block with the low-pass filter gains. The filtered signal at time k is a linear combination of the filtered signal at time k − 1 and the raw signal at time k:
\[
y_k^f = \frac{\omega_c T_s}{1+\omega_c T_s}\, y_k + \frac{1}{1+\omega_c T_s}\, y_{k-1}^f ,
\]
where ωc is the cutoff frequency of the filter in radians per second and Ts is the sampling time. The rates of change of the signals are then computed from the filtered values with a backward difference formula:
\[
\dot{y} = \frac{y_k^f - y_{k-1}^f}{T_s}.
\]
The estimated state is then fed back into the controller as before (a sketch of this pipeline follows the list).
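The sketch below reproduces this measurement-and-estimation pipeline for the position channel in plain MATLAB. The sample time, the "true" trajectory and the exact noise amplitude are illustrative assumptions of our own, not the values of the actual setup.

% Measurement, quantization, filtering and velocity estimation (position channel).
Ts = 0.01;                                   % sample time [s] (assumed)
wc = 2*pi*2;                                 % 2 Hz cutoff frequency [rad/s]
a  = wc*Ts/(1 + wc*Ts);                      % low-pass filter coefficient

adc_step = 20/(65536 - 1);                   % 16-bit A/D step over [-10, 10] V
quantize = @(v) round(v/adc_step)*adc_step;  % ideal quantizer
x_per_V  = 0.456/4.41;                       % potentiometer gain [m/V]

t      = (0:Ts:2)';
x_true = 0.05*sin(2*pi*0.5*t);               % illustrative true position [m]
x_meas = quantize((x_true + 0.01*randn(size(t)))/x_per_V) * x_per_V;

xf = zeros(size(t));  xdot = zeros(size(t));
for k = 2:numel(t)
    xf(k)   = a*x_meas(k) + (1 - a)*xf(k-1); % first-order low-pass filter
    xdot(k) = (xf(k) - xf(k-1))/Ts;          % backward-difference velocity
end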

We tested the system’s response to a reference input of 3.5 degrees, as before. The results can be seen in fig. 8. Evidently, the addition of noise is not handled very well by the system: it does settle around the desired equilibrium, but the controller is very nervous. Adjusting the Q and R matrices did not improve the system’s behaviour either. Finally, we also attempted to feed the filtered position and velocity signals back to the controller, instead of the unfiltered ones. This led to only a slightly less nervous control action.

Figure 8: Step response & control inputs for a low-pass filter cutoff frequency of 2Hz.

Figure 9: Step response & control inputs for a low-pass filter cutoff frequency of 1Hz.

Figure 10: Step response for a low-pass filter cutoff frequency of 50Hz.

The diagrams above also show the system’s behaviour for different values of the cut-off frequency. We can see that the system’s dynamics are heavily influenced by the cut-off frequency, and the system may even become unstable, so a balance has to be struck. If we set the cut-off frequency too high, we let a lot of noise into the signal, increasing the error in the estimate of the derivative. On the other hand, setting ωc too low means that a significant part of the actual system dynamics is filtered away, again increasing the error.
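Writing a = ωcTs/(1 + ωcTs) for the filter coefficient derived above makes this trade-off explicit:
\[
y_k^f = a\, y_k + (1-a)\, y_{k-1}^f .
\]
As ωc → ∞ we get a → 1, so the raw, noisy measurement passes through essentially unfiltered, while as ωc → 0 we get a → 0, so the filter barely updates and the true dynamics are smoothed away along with the noise.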

4 Experimental Results
Having completed the closed loop diagram we can pair it to our experimental setup’s controller.
This is shown in the third and final Simulink diagram of fig. 11. Admittedly, it is a bit tangled.
The basic difference is that here we don’t have to simulate the measurement process of x and θ.
Those are given by the actual potentiometer and A/D converter. Given the raw signal the model
proceeds to filter it, estimate the velocities and calculate the corresponding gain as before. There
is also an additional saturation at 3.5V right before the input is set to the controller (because of
this perhaps we should reconsider the control action penalty in our LQR controller).

Figure 11: Third Closed-Loop Simulink Diagram.

Unfortunately, this year’s circumstances did not allow for ”eating the pudding”, i.e. testing
our model in the lab by performing a real-time physical experiment.

5 Conclusion
In this practicum we studied the implementation of a controller for a relatively simple physical mechanism. The project allowed us to see how even this simple setup has many subtle details that have to be addressed in order to achieve robust and effective control. We began by coding a state-space model of a linearized version of our system and studying its dynamics. We then added state feedback, investigating the options offered by the LQR methodology, which led to a stable closed-loop system. We proceeded to expand this model, taking into account the measurement and digitization of the output variables and the filtering that is necessary for signal processing and estimation of the full state. We saw how strongly this filtering process can affect the performance of our controller. Overall, the project served as an enlightening example of how a digital controller based on a continuous model is built.
