
Studies in Systems, Decision and Control 46

Anish Deb
Srimanti Roychoudhury
Gautam Sarkar

Analysis and Identification


of Time-Invariant Systems,
Time-Varying Systems,
and Multi-Delay Systems
using Orthogonal Hybrid
Functions
Theory and Algorithms with MATLAB®
Studies in Systems, Decision and Control

Volume 46

Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: [email protected]
About this Series

The series “Studies in Systems, Decision and Control” (SSDC) covers both new
developments and advances, as well as the state of the art, in the various areas of
broadly perceived systems, decision making and control, quickly, up to date and
with a high quality. The intent is to cover the theory, applications, and perspectives
on the state of the art and future developments relevant to systems, decision
making, control, complex processes and related areas, as embedded in the fields of
engineering, computer science, physics, economics, social and life sciences, as well
as the paradigms and methodologies behind them. The series contains monographs,
textbooks, lecture notes and edited volumes in systems, decision making and
control spanning the areas of Cyber-Physical Systems, Autonomous Systems,
Sensor Networks, Control Systems, Energy Systems, Automotive Systems,
Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace
Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power
Systems, Robotics, Social Systems, Economic Systems and others. Of particular
value to both the contributors and the readership are the short publication timeframe
and the world-wide distribution and exposure which enable both a wide and rapid
dissemination of research output.

More information about this series at http://www.springer.com/series/13304


Anish Deb
Srimanti Roychoudhury
Gautam Sarkar

Analysis and Identification


of Time-Invariant Systems,
Time-Varying Systems,
and Multi-Delay Systems
using Orthogonal Hybrid
Functions
Theory and Algorithms with MATLAB®

Anish Deb
Department of Applied Physics
University of Calcutta
Kolkata, India

Gautam Sarkar
Department of Applied Physics
University of Calcutta
Kolkata, India

Srimanti Roychoudhury
Department of Electrical Engineering
Budge Budge Institute of Technology
Kolkata, India

ISSN 2198-4182 ISSN 2198-4190 (electronic)


Studies in Systems, Decision and Control
ISBN 978-3-319-26682-4 ISBN 978-3-319-26684-8 (eBook)
DOI 10.1007/978-3-319-26684-8

Library of Congress Control Number: 2015957098

© Springer International Publishing Switzerland 2016


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by SpringerNature


The registered company is Springer International Publishing AG Switzerland
To my grandson Aryarup Basak who reads
only in class four, but wants to be a writer

—Anish Deb

To my mother Dipshikha Roychoudhury, my


sister Sayanti Roychoudhury and to my father
Sujit Roychoudhury whose affection and
loving presence I always feel in my heart

—Srimanti Roychoudhury

To my wife Sati Sarkar and my beloved


daughter Paimi Sarkar

—Gautam Sarkar
Preface

The book deals with a new set of orthogonal functions, termed ‘hybrid functions’
(HFs). This new set is a combination of ‘sample-and-hold functions’ (SHFs) and
‘triangular functions’ (TFs), both of which are themselves orthogonal. The set of hybrid
functions is apt to approximate functions in a piecewise linear manner. From this
starting point, the presented analysis takes off and explores many aspects of control
system analysis and identification.
Application of non-sinusoidal piecewise constant orthogonal functions was
initiated by Walsh functions, which were introduced by J.L. Walsh in 1922 at a
conference and published a year later in the American Journal of Mathematics.
The very look of the Walsh function set was very different from the set of sine–
cosine functions in the basic sense that it did not contain any curved lines at all!
But the Walsh function set was not the first of its kind. Its forerunner was the set of
Haar functions, proposed in 1910, which belonged to the same class of piecewise
constant orthogonal functions. However, the Haar function set could not make a
very significant stir for many decades. But with the advent of wavelet analysis in
the 1980s, a wider cross section of researchers came to take notice of the Haar
function set, now known to be the first ever wavelet function.
For more than four decades, the Walsh function set remained dormant in terms of
applications. It became attractive to a few researchers only during the mid-1960s.
But in the next 10–15 years, the Walsh function set found application in many
areas of electrical engineering such as communication, solution of differential as
well as integral equations, control system analysis, control system identification,
and various other fields. But from the beginning of the 1980s, the spotlight
shifted to block pulse functions (BPFs). The BPF set was also orthogonal and
piecewise constant. Further, it was related to Walsh functions and Haar functions by
similarity transformation. This function set was the most fundamental and simplest
of all piecewise constant basis functions (PCBFs). So it is no wonder that the BPF
set has been enjoying moderate popularity till date.
In the last decade of the twentieth century and in the first decade of the
twenty-first century, a few other function sets were introduced in the literature by
Anish Deb and his co-researchers. These are the sample-and-hold function set
(1998) and the triangular function set (2003).
In 2010, Anish Deb and his co-workers invented and introduced yet another new
set of piecewise linear orthogonal hybrid functions (HFs). This new set could
approximate square integrable time functions of Lebesgue measure in a piecewise
linear manner, and it used the samples of the function as expansion coefficients,
without using the traditional integration formula employed for orthogonal
function-based expansions. Compared to Walsh, block pulse function, and other
PCBF-based approximations, this was the main advantage of the HF set because it
reduced the computational burden appreciably. Moreover, HF-based approximation
incurred much less mean integral square error (MISE) as compared to BPF and
other PCBF-based approximations.
In the preliminary chapters, the following topics have been discussed in detail
with suitable numerical examples:
(i) properties of hybrid function (HF) and its operational rules,
(ii) function approximation and error estimates,
(iii) integration and differentiation using HF domain operational matrices,
(iv) one-shot operational matrices for integration,
(v) solution of linear differential equations, and
(vi) convolution of time functions.
In later parts of the book, in general, analysis and synthesis of many linear
continuous time control systems, which include time-invariant systems,
time-varying systems, and multi-delay systems, of homogeneous as well as
non-homogeneous types, are discussed. And what attractive results the HF domain
technique yielded!
In later chapters, the discussed topics are as follows:
(i) time-invariant and time-varying system analysis via state-space approach,
(ii) multi-delay system analysis via state-space approach,
(iii) time-invariant system analysis using the method of convolution,
(iv) time-invariant and time-varying system identification in state-space
environment,
(v) time-invariant system identification using ‘deconvolution,’ and
(vi) parameter estimation of transfer function from impulse response data.
All the topics are supported with relevant numerical examples. And to make the
book user-friendly, many MATLAB programs are appended at the end of the book.
Now, about the hybrid functions and the three authors. The first author started
working on this function set in 2005, and the third author was associated with him
constantly. The second author got interested in the hybrid function set and joined
the other two authors in 2010. Then, from 2012, after publication of a few works on
hybrid functions, the first author dreamt about a whole book on hybrid functions
and the other two authors strongly supported the dream and joined the mission.
Then, all of them toiled and toiled to make the dream come true. It was a great
feeling to work with HF, and though toil was the major component during the past
few years, it never was able to overtake the academic enjoyment of the authors.
Finally, the authors acknowledge the support of the Department of Applied
Physics, University of Calcutta, and the second author acknowledges the support of
her institute Budge Budge Institute of Technology, Kolkata, India, during
preparation of this book. Also, the support of Dr. Amitava Biswas, Associate Professor,
Department of Electrical Engineering, Academy of Technology, Hooghly, India, is
gratefully acknowledged.

Kolkata                                Anish Deb
September 2015                         Srimanti Roychoudhury
                                       Gautam Sarkar
Contents

1 Non-sinusoidal Orthogonal Functions in Systems and Control . . . . 1


1.1 Orthogonal Functions and Their Properties . . . . . . . . . . . . . . . 2
1.2 Different Types of Non-sinusoidal Orthogonal Functions. . . . . . 3
1.2.1 Haar Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 Rademacher Functions . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.3 Walsh Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.4 Block Pulse Functions (BPF) . . . . . . . . . . . . . . . . . . . 7
1.2.5 Slant Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.6 Delayed Unit Step Functions (DUSF) . . . . . . . . . . . . . 9
1.2.7 General Hybrid Orthogonal Functions (GHOF) . . . . . . 10
1.2.8 Variants of Block Pulse Functions . . . . . . . . . . . . . . . 11
1.2.9 Sample-and-Hold Functions (SHF) . . . . . . . . . . . . . . . 11
1.2.10 Triangular Functions (TF) . . . . . . . . . . . . . . . . . . . . . 12
1.2.11 Non-optimal Block Pulse Functions (NOBPF) . . . . . . . 13
1.3 Walsh Functions in Systems and Control . . . . . . . . . . . . . . . . 14
1.4 Block Pulse Functions in Systems and Control . . . . . . . . . . . . 17
1.5 Triangular Functions (TF) in Systems and Control . . . . . . . . . . 18
1.6 A New Set of Orthogonal Hybrid Functions (HF):
A Combination of Sample-and-Hold Functions (SHF)
and Triangular Functions (TF) . . . . . . . . . . . . . . . . . . . . .... 19
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .... 19
2 The Hybrid Function (HF) and Its Properties . . . . . . . . . . . . . . . . 25
2.1 Brief Review of Block Pulse Functions (BPF) . . . . . . . . . . . . . 25
2.2 Brief Review of Sample-and-Hold Functions (SHF) . . . . . . . . . 26
2.3 Brief Review of Triangular Functions (TF) . . . . . . . . . . . . . . . 27
2.4 Hybrid Function (HF): A Combination of SHF and TF . . . . . . . 28
2.5 Elementary Properties of Hybrid Functions . . . . . . . . . . . . . . . 30
2.5.1 Disjointedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.5.2 Orthogonality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.3 Completeness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32


2.6 Elementary Operational Rules . . . . . . . . . . . . . . . . . . . . . . . . 33


2.6.1 Addition of Two Functions . . . . . . . . . . . . . . . . . . . . 33
2.6.2 Subtraction of Two Functions . . . . . . . . . . . . . . . . . . 37
2.6.3 Multiplication of Two Functions . . . . . . . . . . . . . . . . 39
2.6.4 Division of Two Functions . . . . . . . . . . . . . . . . . . . . 44
2.7 Qualitative Comparison of BPF, SHF, TF and HF . . . . . . . . . . 47
2.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3 Function Approximation via Hybrid Functions . . . . . . . . . . . . . . . 49
3.1 Function Approximation via Block Pulse Functions (BPF) . . . . 49
3.1.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.2 Function Approximation via Hybrid Functions (HF) . . . . . . . . . 51
3.3 Algorithm of Function Approximation via HF . . . . . . . . . . . . . 52
3.3.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.4 Comparison Between BPF and HF Domain Approximations . . . 54
3.5 Approximation of Discontinuous Functions . . . . . . . . . . . . . . . 56
3.5.1 Modified HF Domain Approach for Approximating
Functions with Jump Discontinuities. . . . . . . . . . . ... 58
3.5.2 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . ... 62
3.6 Function Approximation: HF Versus Other Methods. . . . . . ... 67
3.7 Mean Integral Square Error (MISE) for HF Domain
Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 74
3.7.1 Error Estimate for Sample-and-Hold Function
Domain Approximation. . . . . . . . . . . . . . . . . . . . ... 75
3.7.2 Error Estimate for Triangular Function Domain
Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . ... 76
3.8 Comparison of Mean Integral Square Error (MISE)
for Function Approximation via HFc and HFm Approaches . ... 79
3.9 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 84
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 86
4 Integration and Differentiation Using HF Domain Operational
Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.1 Operational Matrices for Integration . . . . . . . . . . . . . . . . . . . . 87
4.1.1 Integration of Sample-and-Hold Functions . . . . . . . . . . 88
4.1.2 Integration of Triangular Functions . . . . . . . . . . . . . . . 92
4.2 Integration of Functions Using Operational Matrices. . . . . . . . . 96
4.2.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.3 Operational Matrices for Differentiation. . . . . . . . . . . . . . . . . . 100
4.3.1 Differentiation of Time Functions Using
Operational Matrices. . . . . . . . . . . . . . . . . . . . . . . . . 100
4.3.2 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.4 Accumulation of Error for Subsequent
Integration-Differentiation (I-D) Operation in HF Domain . . . . . 106

4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110


References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5 One-Shot Operational Matrices for Integration . . . . . . . . . . . . . . . 115
5.1 Integration Using First Order HF Domain Integration
Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5.2 Repeated Integration Using First Order HF Domain
Integration Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
5.3 One-Shot Integration Operational Matrices for Repeated
Integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.3.1 One-Shot Operational Matrices for Sample-and-Hold
Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.3.2 One-Shot Operational Matrices for Triangular
Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5.3.3 One-Shot Integration Operational Matrices in HF
Domain: A Combination of SHF Domain and TF
Domain One-Shot Operational Matrices . . . . . . . . . . . 126
5.4 Two Theorems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5.5 Numerical Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.5.1 Repeated Integration Using First Order Integration
Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.5.2 Higher Order Integration Using One-Shot Operational
Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.5.3 Comparison of Two Integration Methods Involving
First, Second and Third Order Integrations . . . . . . . . . 136
5.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
6 Linear Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
6.1 Solution of Linear Differential Equations Using HF
Domain Differentiation Operational Matrices . . . . . . . . . . . . . . 142
6.1.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 143
6.2 Solution of Linear Differential Equations Using HF
Domain Integration Operational Matrices . . . . . . . . . . . . . . . . 145
6.2.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.3 Solution of Second Order Linear Differential Equations . . . . . . 152
6.3.1 Using HF Domain First Order Integration
Operational Matrices. . . . . . . . . . . . . . . . . . . . . . . . . 152
6.3.2 Using HF Domain One-Shot Integration
Operational Matrices. . . . . . . . . . . . . . . . . . . . . . . . . 154
6.3.3 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 155
6.4 Solution of Third Order Linear Differential Equations . . . . . . . . 159
6.4.1 Using HF Domain First Order Integration
Operational Matrices. . . . . . . . . . . . . . . . . . . . . . . . . 160

6.4.2 Using HF Domain One-Shot Integration
Operational Matrices . . . . . . . . . . . . . . . . . . . . . . . 162
6.4.3 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 164
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
7 Convolution of Time Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
7.1 The Convolution Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
7.2 Convolution of Basic Components of Hybrid Functions . . . . . . 169
7.2.1 Convolution of Two Elementary Sample-and-Hold
Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
7.2.2 Convolution of Two Sample-and-Hold Function
Trains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
7.2.3 Convolution of an Elementary Sample-and-Hold
Function and an Elementary Triangular Function . . . . . 172
7.2.4 Convolution of a Triangular Function Train
and a Sample-and-Hold Function Train . . . . . . . . . . . . 173
7.2.5 Convolution of Two Elementary Triangular
Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
7.2.6 Convolution of Two Triangular Function Trains. . . . . . 174
7.3 Convolution of Two Time Functions in HF Domain . . . . . . . . . 176
7.4 Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
7.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
8 Time Invariant System Analysis: State Space Approach. . . . . . . . . 185
8.1 Analysis of Non-homogeneous State Equations . . . . . . . . . . . . 186
8.1.1 Solution from Sample-and-Hold Function Vectors . . . . 188
8.1.2 Solution from Triangular Function Vectors . . . . . . . . . 195
8.1.3 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 197
8.2 Determination of Output of a Non-homogeneous System. . . . . . 197
8.2.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 201
8.3 Analysis of Homogeneous State Equation . . . . . . . . . . . . . . . . 202
8.3.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 202
8.4 Determination of Output of a Homogeneous System. . . . . . . . . 208
8.4.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 212
8.5 Analysis of a Non-homogeneous System with Jump
Discontinuity at Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
8.5.1 Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . 215
8.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
9 Time Varying System Analysis: State Space Approach. . . . . . . . . . 221
9.1 Analysis of Non-homogeneous Time Varying State
Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
9.1.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 230

9.2 Determination of Output of a Non-homogeneous Time


Varying System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
9.3 Analysis of Homogeneous Time Varying
State Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
9.3.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 234
9.4 Determination of Output of a Homogeneous Time
Varying System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
9.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
10 Multi-delay System Analysis: State Space Approach . . . . . . . . . . . 241
10.1 HF Domain Approximation of Function with Time Delay . . . . . 241
10.1.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 245
10.2 Integration of Functions with Time Delay . . . . . . . . . . . . . . . . 246
10.2.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 247
10.3 Analysis of Non-homogeneous State Equations with Delay . . . . 248
10.3.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 260
10.4 Analysis of Homogeneous State Equations with Delay . . . . . . . 266
10.4.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 267
10.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
11 Time Invariant System Analysis: Method of Convolution . . . . . . . . 271
11.1 Analysis of an Open Loop System . . . . . . . . . . . . . . . . . . . . . 271
11.1.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 272
11.2 Analysis of a Closed Loop System . . . . . . . . . . . . . . . . . . . . . 276
11.2.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 283
11.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
12 System Identification Using State Space Approach: Time
Invariant Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
12.1 Identification of a Non-homogeneous System. . . . . . . . . . . . . . 289
12.1.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 291
12.2 Identification of Output Matrix of a Non-homogeneous
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
12.2.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 295
12.3 Identification of a Homogeneous System . . . . . . . . . . . . . . . . . 297
12.4 Identification of Output Matrix of a Homogeneous
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
12.5 Identification of a Non-homogeneous System with Jump
Discontinuity at Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
12.5.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 299
12.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305

13 System Identification Using State Space Approach:


Time Varying Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
13.1 Identification of a Non-homogeneous System. . . . . . . . . . . . . . 307
13.1.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 309
13.2 Identification of a Homogeneous System . . . . . . . . . . . . . . . . . 311
13.2.1 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 311
13.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
14 Time Invariant System Identification: Via ‘Deconvolution’. . . . . . . 319
14.1 Control System Identification Via ‘Deconvolution’ . . . . . . . . . . 319
14.1.1 Open Loop Control System Identification . . . . . . . . . . 320
14.1.2 Closed Loop Control System Identification . . . . . . . . . 323
14.2 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
15 System Identification: Parameter Estimation of Transfer
Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
15.1 Transfer Function Identifications . . . . . . . . . . . . . . . . . . . . . . 331
15.2 Pade Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
15.3 Parameter Estimation of the Transfer Function of a Linear
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
15.3.1 Using Block Pulse Functions . . . . . . . . . . . . . . . . . . . 336
15.3.2 Using Non-optimal Block Pulse Functions
(NOBPF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
15.3.3 Using Triangular Functions (TF) . . . . . . . . . . . . . . . . 342
15.3.4 Using Hybrid Functions (HF) . . . . . . . . . . . . . . . . . . 345
15.3.5 Solution in SHF Domain from the HF Domain
Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
15.4 Comparative Study of the Parameters of the Transfer
Function Identified via Different Approaches . . . . . . . . . . . . . . 350
15.5 Comparison of Errors for BPF, NOBPF, TF, HF and SHF
Domain Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
15.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355

Appendix A: Introduction to Linear Algebra . . . . . . . . . . . . . . . . . . . . 357

Appendix B: Selected MATLAB Programs. . . . . . . . . . . . . . . . . . . . . . 367

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
About the Authors

Anish Deb (b.1951) obtained his B.Tech. (1974), M.Tech. (1976), and Ph.D.
(Tech.) (1990) degrees from the Department of Applied Physics, University of
Calcutta. Presently, he is a professor (1998) in the Department of Applied Physics,
University of Calcutta. His research interest includes automatic control in general
and application of ‘alternative’ orthogonal functions in systems and control. He has
published more than 70 research papers in different national and international
journals and conference proceedings. He is the principal author of the books
‘Triangular Orthogonal Functions for the Analysis of Continuous Time Systems’
published by Elsevier (India) in 2007 and Anthem Press (UK) in 2011, and ‘Power
Electronic Systems: Walsh Analysis with MATLAB’ published by CRC Press
(USA) in 2014.
Srimanti Roychoudhury (b.1984) did her B.Tech. (2006) from Jalpaiguri
Government Engineering College, under West Bengal University of Technology
and M.Tech. (2010) from the Department of Applied Physics, University of
Calcutta. Presently, she is an assistant professor (from 2010) in the Department of
Electrical Engineering, Budge Budge Institute of Technology under West Bengal
University of Technology and pursuing her Ph.D. Her research area includes
application of ‘alternative’ orthogonal functions in different areas of systems and
control. She has published four research papers in different national and interna-
tional journals.
Gautam Sarkar (b.1953) received his B.Tech. (1975), M.Tech. (1977), and Ph.D.
(Tech) (1991) degrees from the Department of Applied Physics, University of
Calcutta. He presently holds the chair of Labanyamoyee Das Professor. His area of
research includes automatic control, fuzzy systems, smart grids, and application of
piecewise constant basis functions in systems and control. He has published more
than 50 research papers in different national and international journals and con-
ference proceedings. He is the co-author of the book ‘Triangular Orthogonal
Functions for the Analysis of Continuous Time Systems’ published by Elsevier
(India) in 2007 and Anthem Press (UK) in 2011.

Principal Symbols

δ_pq      Kronecker delta
φ_n       (n + 1)th component function of a Walsh function set
Φ         Walsh function vector
ψ_n       (n + 1)th component function of a block pulse function set
Ψ_(m)     Block pulse function vector of dimension m, having m component
          functions
μ_i       A point in the ith interval
μ_max     Maximum value of μ_i
AMP error Average of Mod of Percentage error
A System matrix in state-space model
A(t) Time-varying system matrix in state-space model
B Input vector in state-space model
B(t) Time-varying input vector in state-space model
C Output matrix in state-space model
C(t) Time-varying output matrix in state-space model
D Direct transmission matrix in state-space model
D(t) Time-varying direct transmission matrix in state-space model
DG1 A diagonal matrix of order m whose entries are the elements of the
vector G1
DG2 A diagonal matrix of order m whose entries are the elements of the
vector G2
DR1 A diagonal matrix of order m whose entries are the elements of the
vector R1
DR2 A diagonal matrix of order m whose entries are the elements of the
vector R2
DS Differentiation matrix for the sample-and-hold function component
DT Differentiation matrix for the triangular function component
f(t)      Time function
f(t − τ)  Function f(t) delayed by τ seconds


ḟ_max     Maximum value of first-order derivative of f(t)
ḟ(μ_i)    First order derivative of f(t) in the (i + 1)th interval at the point μ_i
f(t)      Function approximated in hybrid function domain
f̂(t)      Reconstructed function
h         Sampling period
H_i(t)    (i + 1)th component function of a hybrid function set
H_(m)(t)  Hybrid function set of dimension m, having m component functions
LTI Linear time invariant
LTV Linear time varying
m Number of subintervals considered in a time period T
mi Slope of the reconstructed function in the (i + 1)th interval
MISE Mean integral square error
P_i(t)    (i + 1)th Legendre polynomial
P1ss Sample-and-hold part of the first-order integration operational
matrices for integration of the SHF component
P1st Triangular part of the first-order integration operational matrices for
integration of the SHF component
P1ts Sample-and-hold part of the first-order integration operational
matrices for integration of the TF component
P1tt Triangular part of the first-order integration operational matrices for
integration of the TF component
P(m) Operational matrix for integration in block pulse function domain of
dimension m
Pnss Sample-and-hold part of nth-order integration operational matrices
for integration of the SHF component
Pnst Triangular part of nth-order integration operational matrices for
integration of the SHF component
Pnts Sample-and-hold part of nth-order integration operational matrices
for integration of the TF component
Pntt Triangular part of nth-order integration operational matrices for
integration of the TF component
S_i(t)    (i + 1)th component of a sample-and-hold function set
S_(m)(t)  An orthogonal function set of dimension m, sample-and-hold
          function set of order m, having m component functions
t Time in seconds
T Time period
T1_(m)(t) Left-handed triangular function vector of dimension m, having
          m component functions
T1_i(t)   (i + 1)th component function of a left-handed triangular function
          vector
T2_(m)(t) Right-handed triangular function vector of dimension m, having
          m component functions

T_(m)(t)  Same as T2_(m)(t), but renamed as triangular function vector of
          dimension m, having m component functions
T2_i(t)   (i + 1)th component function of a right-handed triangular function
          vector
T_i(t)    Same as T2_i(t), but renamed as (i + 1)th component function of a
          triangular function vector
Chapter 1
Non-sinusoidal Orthogonal Functions
in Systems and Control

Abstract This chapter discusses different types of non-sinusoidal orthogonal
functions such as Haar functions, Walsh functions, block pulse functions,
sample-and-hold functions, triangular functions, non-optimal block pulse functions
and a few others. It also discusses briefly the application of Walsh, block pulse and
triangular functions, three major members of the non-sinusoidal orthogonal
function family, in the area of systems and control. Finally, this chapter proposes a new
set of orthogonal functions named ‘Hybrid Function’ (HF). At the end of the
chapter, more than a hundred useful references are given.

Orthogonal properties [1] of familiar sine-cosine functions have been known for
more than two centuries; but the use of such functions to solve complex analytical
problems was initiated by the work of the famous mathematician Baron
Jean-Baptiste-Joseph Fourier [2]. Fourier introduced the idea that an arbitrary
function, even the one defined by different equations in adjacent segments of its
range, could nevertheless be represented by a single analytic expression. Although
this idea encountered resistance at the time, it proved to be central to many later
developments in mathematics, science, and engineering.
In many areas of electrical engineering the basis for any analysis is a system of
sine-cosine functions. This is mainly due to the desirable properties of frequency
domain representation of a large class of functions encountered in engineering design
and also the immense popularity of sinusoidal voltage in most engineering applications.
In the fields of circuit analysis, control theory, communication, and the analysis
of stochastic problems, examples are found extensively where the completeness and
orthogonal properties of such a system of sine-cosine functions lead to attractive
solutions. But with the application of digital techniques in these areas, awareness
of other, more general complete systems of orthogonal functions has developed.
This “new” class of functions, though not possessing some of the desirable
properties of sine-cosine functions, has other advantages that make it useful in many
applications in the context of digital technology. Many members of this class of
orthogonal functions are piecewise constant and binary valued, which indicates
their possible suitability and applicability in the analysis and synthesis of systems
leading to piecewise constant solutions.


1.1 Orthogonal Functions and Their Properties

Any continuous time function can be synthesized completely to a tolerable degree
of accuracy by using a set of orthogonal functions. For such accurate representation
of a time function, the orthogonal set should be complete [1].
Let a time function f(t), defined over a time interval [0, T), be represented by an
orthogonal function set S_(n)(t). Then

    f(t) = \sum_{j=0}^{\infty} c_j s_j(t)                                   (1.1)

where, c_j is the coefficient or weight connected to the (j + 1)th member of the
orthogonal set.
The members of the function set S_(n)(t) are said to be orthogonal in the interval
0 ≤ t ≤ T if for any positive integral values of p and q, we have

    \int_0^T s_p(t)\, s_q(t)\, dt = \delta_{pq}  (a constant)               (1.2)

where, δ_pq is the Kronecker delta and

    \delta_{pq} = \begin{cases} 0 & \text{for } p \ne q \\ \text{constant} & \text{for } p = q \end{cases}

When δ_pq = 1, the set is said to be orthonormal. An orthogonal set is said to be
complete or closed if no function can be found which is normal to each member of
the defined set, satisfying Eq. (1.2).
Since, only a finite number of terms of the series S_(n)(t) can be considered for
practical realization of any time function f(t), the right-hand side (RHS) of Eq. (1.1)
has to be truncated and we write

    f(t) \approx \sum_{j=0}^{N} c_j s_j(t)                                  (1.3)

where N is an integer. A point to remember is, N has to be large enough to come up
with a solution of the problem with the desired accuracy.
When N is appreciably large, the accuracy of representation is good enough for
all practical purposes. Also, it is necessary to choose the coefficients c_j's in such a
manner that the mean integral square error (MISE) [3] is minimized. The MISE is
defined as

    MISE \triangleq \frac{1}{T} \int_0^T \Big[ f(t) - \sum_{j=0}^{N} c_j s_j(t) \Big]^2 dt            (1.4)

and its minimization is achieved by making

    c_j = \frac{1}{T} \int_0^T f(t)\, s_j(t)\, dt                           (1.5)

For a complete orthogonal function set, the MISE in Eq. (1.4) decreases
monotonically to zero as N tends to infinity.
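As a quick illustration of Eqs. (1.3)–(1.5), the following MATLAB sketch (our own illustration with assumed names, not a program from Appendix B) expands an example function in a block pulse basis and evaluates the MISE numerically; the general least-squares weight used below reduces to Eq. (1.5) whenever the basis members satisfy ∫₀ᵀ s_j²(t) dt = T, as Walsh functions do.

% Illustrative sketch (assumed names, not a program from Appendix B): series
% coefficients and MISE for a piecewise constant orthogonal set, cf. Eqs. (1.3)-(1.5).
T = 1; m = 8; h = T/m;                        % time period and number of members
t = linspace(0, T, 1601); t(end) = [];        % dense grid on [0, T)
f = exp(-2*t) + t;                            % example square integrable function
S = zeros(m, numel(t));                       % each row holds one basis member
for i = 0:m-1
    S(i+1, :) = (t >= i*h) & (t < (i+1)*h);   % block pulse members (see Sect. 1.2.4)
end
c = zeros(m, 1);
for j = 1:m
    % least-squares weight; identical to Eq. (1.5) when trapz(t, S(j,:).^2) = T
    c(j) = trapz(t, f .* S(j, :)) / trapz(t, S(j, :).^2);
end
fN   = c.' * S;                               % truncated expansion, Eq. (1.3)
MISE = trapz(t, (f - fN).^2) / T;             % mean integral square error, Eq. (1.4)

Raising m (and hence N) in this sketch shows the monotonic fall of the MISE noted above.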

1.2 Different Types of Non-sinusoidal Orthogonal Functions

For more than four decades different piecewise constant basis functions (PCBF)
have been employed to solve problems in different fields of engineering including
the area of control theory.

1.2.1 Haar Functions

In 1910, Hungarian mathematician Haar [4] proposed a complete set of piecewise
constant binary-valued orthogonal functions that are shown in Fig. 1.1. In fact, Haar
functions have three possible states 0 and ±A where A is a function of √2. Thus, the
amplitude of the component functions varies with their place in the series.
The component functions of the Haar function set have both scaling and shifting
properties. These properties are a necessity for any wavelet [5]. That is why it is
now recognized as the first known wavelet basis and at the same time, it is the
simplest possible wavelet.
An m-set of Haar functions may be defined mathematically in the semi-open
interval t ∈ [0, 1) as given below.
The first member of the set is

    har(0, 0, t) = 1,    t ∈ [0, 1)

while the general term for other members is given by



Fig. 1.1 A set of Haar functions

    \mathrm{har}(j, n, t) = \begin{cases} 2^{j/2}, & (n-1)/2^{j} \le t < (n-\tfrac{1}{2})/2^{j} \\ -2^{j/2}, & (n-\tfrac{1}{2})/2^{j} \le t < n/2^{j} \\ 0, & \text{elsewhere} \end{cases}            (1.6)

where, j, n and m are integers governed by the relations 0 ≤ j < log₂(m), 1 ≤ n ≤ 2^j.
The number of members in the set is of the form m = 2^k, k being a positive integer.
Following Eq. (1.6), the members of the set of Haar functions can be obtained in a
sequential manner. In Fig. 1.1, k is taken to be 3, thus giving m = 8.
Haar’s set is such that the formal expansion of a given continuous function in
terms of these new functions converges uniformly to the given function.
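A small MATLAB sketch of Eq. (1.6) may make the construction concrete; the names below are our own, and the set is only sampled at the midpoints of the m subintervals of [0, 1), which is sufficient for m = 2^k members.

% Illustrative sketch: the m = 8 Haar set of Fig. 1.1 built from Eq. (1.6),
% sampled on the m subintervals of [0, 1). Names are assumed, not the book's.
m = 8; k = log2(m);
t = ((0:m-1) + 0.5) / m;                  % midpoint of each subinterval
Haar = zeros(m, m);
Haar(1, :) = 1;                           % har(0, 0, t) = 1 on [0, 1)
row = 2;
for j = 0:k-1
    for n = 1:2^j
        pos = (t >= (n-1)/2^j) & (t < (n-0.5)/2^j);   % +2^(j/2) segment
        neg = (t >= (n-0.5)/2^j) & (t < n/2^j);       % -2^(j/2) segment
        Haar(row, :) = 2^(j/2) * (pos - neg);
        row = row + 1;
    end
end
% each row of Haar now holds one member; e.g. Haar(2, :) is the mother wavelet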

1.2.2 Rademacher Functions

In 1922, inspired by Haar, German mathematician H. Rademacher presented
another set of two-valued orthonormal functions [6] that are shown in Fig. 1.2. The
set of Rademacher functions is orthonormal but incomplete. As seen from Fig. 1.2,
the function rad(n, t) of the set is given by a square wave of unit amplitude and 2^(n−1)
cycles in the semi-open interval [0, 1). The first member of the set rad(0, t) has a
constant value of unity throughout the interval.

Fig. 1.2 A set of Rademacher functions
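A one-line MATLAB sketch (our own, with an assumed grid) reproduces rad(n, t) for n ≥ 1 as a unit amplitude square wave with 2^(n−1) cycles; rad(0, t) is simply 1 throughout [0, 1).

% Illustrative sketch: rad(n, t) for n >= 1 on [0, 1); rad(0, t) = 1 everywhere.
n = 3;                                     % member index (2^(n-1) = 4 cycles)
t = linspace(0, 1, 1001); t(end) = [];     % grid on [0, 1)
r = sign(sin(2^n * pi * t));               % +/-1 square wave, zero at the switchings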

1.2.3 Walsh Functions

After the Rademacher functions were introduced in 1922, around the same time,
American mathematician J.L. Walsh independently proposed yet another
binary-valued complete set of normal orthogonal functions Φ, later named Walsh
functions [3, 7], that is shown in Fig. 1.3.
As indicated by Walsh, there are many possible orthogonal function sets of this
kind and several researchers, in later years, have suggested orthogonal sets [8–10]
formed with the help of combinations of the well-known piecewise constant
orthogonal functions.
Fig. 1.3 A set of Walsh functions arranged in dyadic order

In his original paper Walsh pointed out that, “… Haar’s set is, however, merely
one of an infinity of sets which can be constructed of functions of this same
character.” While proposing his new set of orthonormal functions Φ, Walsh wrote
“… each function φ takes only the values +1 and −1, except at a finite number of
points of discontinuity, where it takes the value zero.”
However, the Rademacher functions were found to be a true subset of the Walsh
function set. The Walsh function set possesses the following properties, not all of which
are shared by other orthogonal functions belonging to the same class. These are:
(i) Its members are all two-valued functions,
(ii) It is a complete orthonormal set,
(iii) It has striking similarity with the sine-cosine functions, primarily with respect
to their zero-crossing patterns.
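One convenient way to generate such a set numerically, sketched below in MATLAB with assumed names, is to take the rows of a Sylvester-type Hadamard matrix and re-order them by sequency (the number of sign changes); entry k of a row is then the value of that Walsh function on the subinterval [(k−1)/m, k/m).

% Illustrative sketch: an m = 8 Walsh set from a Hadamard matrix, re-ordered
% by sequency (number of sign changes per row). Names are assumed.
m = 8;
H = hadamard(m);                          % rows are +/-1 functions in natural order
sc = sum(abs(diff(H, 1, 2)) > 0, 2);      % sign changes of each row
[~, idx] = sort(sc);
W = H(idx, :);                            % Walsh functions in sequency order
% value of wal(n, t): pick the column whose subinterval contains t
n = 5; t0 = 0.3;
val = W(n+1, floor(t0*m) + 1);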

1.2.4 Block Pulse Functions (BPF)

During the 19th century, voltage and current pulses, such as Morse code signals,
were generated by mechanical switches, amplified by relays and finally detected by
different magneto-mechanical devices. These pulses are nothing but block pulses—
the most important function set used for communication.
However, until the 80s of the last century, the set of block pulses received less
attention from the mathematicians as well as application engineers possibly due to
their apparent incompleteness. But disjoint and orthogonal properties of such a
function set were well known.
A set of block pulse functions [11–13] in the semi-open interval t ∈ [0, T) is
shown in Fig. 1.4.
An m-set block pulse function is defined as

    \psi_i(t) = \begin{cases} 1 & \text{for } ih \le t < (i+1)h \\ 0 & \text{elsewhere} \end{cases}

where, i = 0, 1, 2, …, (m − 1) and h = T/m.
The block pulse function set is a complete [14] orthogonal function set and can
easily be normalized by defining the component functions in the interval [0, T) as

    \psi_i(t) = \begin{cases} 1/\sqrt{h} & \text{for } ih \le t < (i+1)h \\ 0 & \text{elsewhere} \end{cases}
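The optimal BPF coefficient of a function over a subinterval, obtained by restricting the least-squares fit of Eq. (1.5) to that subinterval, is simply the average value of the function there; the MATLAB sketch below (our own illustration with assumed names) computes these coefficients and a piecewise constant reconstruction.

% Illustrative sketch: optimal BPF coefficients of f(t) on [0, T) with m
% subintervals; each coefficient is the average of f over its subinterval.
T = 1; m = 8; h = T/m;
f = @(t) exp(-t);                          % example function
c = zeros(1, m);
for i = 0:m-1
    c(i+1) = integral(f, i*h, (i+1)*h) / h;    % (1/h) * integral over [ih, (i+1)h)
end
% piecewise constant reconstruction at an instant t0 in [0, T)
t0 = 0.37;
f_bpf = c(floor(t0/h) + 1);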

1.2.5 Slant Functions

A special orthogonal function set, known as the slant function set, was introduced
by Enomoto and Shibata [15] for image transmission analysis. These functions are
also applied successfully to image processing problems [16, 17].

Fig. 1.4 A set of block pulse functions

Slant functions have a finite but a large number of possible states as can be seen
from Fig. 1.5. The superiority of the slant function set lies in its transform char-
acteristics, which permit a compaction of the image energy to only a few trans-
formed samples. Thus, the efficiency of image data transmission in this form is
improved.

Fig. 1.5 A set of slant functions

1.2.6 Delayed Unit Step Functions (DUSF)

Delayed unit step functions, shown in Fig. 1.6, were suggested by Hwang [18] in
1983. Though not of much use due to its dependency on BPFs, shown by Deb et al.
[13], it deserves to be included in the record of piecewise constant basis functions
as a new variant. The (i + 1)th member of this function set is defined as

Fig. 1.6 A set of DUSF for m-component functions


    u_i(t) = \begin{cases} 1 & t \ge ih \\ 0 & t < ih \end{cases}

where, i = 0, 1, 2, …, (m − 1).
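The dependency on BPFs mentioned above can be illustrated directly: the (i + 1)th block pulse is the difference of two consecutive delayed unit steps, as in the short MATLAB sketch below (our own illustration with assumed names).

% Illustrative sketch: DUSF members on [0, T) and their relation to the BPF set,
% psi_i(t) = u_i(t) - u_(i+1)(t). Names are assumed.
T = 1; m = 8; h = T/m;
t = linspace(0, T, 801); t(end) = [];
u = @(i) double(t >= i*h);                 % (i + 1)th delayed unit step u_i(t)
i = 2;
psi_i = u(i) - u(i+1);                     % equals 1 only on [ih, (i+1)h)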

1.2.7 General Hybrid Orthogonal Functions (GHOF)

So far the discussion centered on different types of orthogonal functions having a


piecewise constant nature. The major departure from this class was the formulation
of general hybrid orthogonal functions (GHOF) introduced by Patra and Rao [19,
20]. While sine-cosine functions or orthogonal polynomials can represent a con-
tinuous function quite nicely, these functions/polynomials become unsatisfactory
for approximating functions with discontinuities, jumps or dead time. For repre-
sentation of such functions, undoubtedly piecewise constant orthogonal functions
such as Walsh or block pulse functions, can be used more advantageously. But with
functions having both continuous nature as well as a number of discontinuities in


the time interval of interest, it is quite clear that none of the orthogonal
functions/polynomials of continuous nature is suitable for approximating the
function with a reasonable degree of accuracy. Also, piecewise constant orthogonal
functions are not suitable for the job either.
Hence, to meet the combined features of continuity and discontinuity in such
situations, the framework of GHOF was proposed and applied by Patra and Rao and
it seemed to be more appropriate. The system of GHOF formed a hybrid basis
which was both flexible and general.
However, the main disadvantage of GHOF seems to be that it requires a priori
knowledge about the nature as well as the discontinuities of the function, which are to
be matched with the segment boundaries of the system of GHOF comprised of
different types of orthogonal function sets chosen for the analysis. This also requires
a complex algorithm for better results.

1.2.8 Variants of Block Pulse Functions

In 1995, a pulse-width modulated version of the block pulse function set was
presented by Deb et al. [21, 22] where, the pulse-width of the component functions
of the BPF set was gradually increased (or, decreased) depending upon the nature of
the square integrable function to be handled.
In 1998, a further variant of the BPF set was proposed by Deb et al. [23] where,
the set was called sample-and-hold function (SHF) set and the same was utilized for
the analysis of sampled data systems with zero-order hold.

1.2.9 Sample-and-Hold Functions (SHF)

Any square integrable function f(t) may be represented by a sample-and-hold
function set [23] in the semi-open interval [0, T) by considering the (i + 1)th
member of the set to be

    f_i(t) = f(ih),    i = 0, 1, 2, …, (m − 1)

where, h is the sampling period (=T/m), f_i(t) is the amplitude of the function f(t) at
time ih and f(ih) is the first term of the Taylor series expansion of the function f(t)
around the point t = ih, because, for a zero order hold (ZOH) the amplitude of the
function f(t) at t = ih is held constant for the duration h.
A set of SHF, comprised of m component functions, is defined as

Fig. 1.7 Dissection of the first member of a BPF set into two triangular functions


    s_i(t) = \begin{cases} 1 & \text{for } ih \le t < (i+1)h \\ 0 & \text{elsewhere} \end{cases}            (1.7)

where, i = 0, 1, 2, …, (m − 1).
The basis functions of the SHF set are look-likes of the members of the BPF set
shown in Fig. 1.4. Only the method of computation of the coefficients differs in
respective cases. That is, the expansion coefficients in SHF domain do not depend
upon the traditional integration Formula (1.5).
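For example, the following MATLAB sketch (our own, with assumed names) forms the SHF coefficients of a function directly from its samples and reconstructs the zero-order-hold (staircase) approximation on a dense grid.

% Illustrative sketch: SHF-domain coefficients are simply the samples f(ih);
% no integration as in Eq. (1.5) is needed. Names are assumed.
T = 1; m = 8; h = T/m;
f = @(t) exp(-t);                          % example function
c_shf = f((0:m-1) * h);                    % expansion coefficients = samples
t = linspace(0, T, 801); t(end) = [];      % dense grid on [0, T)
f_shf = c_shf(floor(t/h) + 1);             % zero-order-hold reconstruction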

1.2.10 Triangular Functions (TF)

A rectangular shaped block pulse function can be dissected along its two diagonals
to generate two triangular functions [24–26]. That is, when we add two component
triangular functions, we get back the original block pulse function. This dissection
process is shown in Fig. 1.7, where the first member ψ_0(t) is resolved into two
component triangular functions T1_0(t) and T2_0(t).
From a set of block pulse functions, Ψ_(m)(t), we can generate two sets of
orthogonal triangular functions (TF), namely T1_(m)(t) and T2_(m)(t), such that

    \Psi_{(m)}(t) = T1_{(m)}(t) + T2_{(m)}(t)

These two TF sets are complementary to each other. For convenience, we call
T1_(m)(t) the left handed triangular function (LHTF) vector and T2_(m)(t) the right
handed triangular function (RHTF) vector. Figure 1.8a, b show the orthogonal
triangular function sets, T1_(m)(t) and T2_(m)(t), where m has been chosen arbitrarily
as 8. For triangular function domain expansion of a time function, the coefficients
are computed from function samples only [26], and they do not need any assistance
from Eq. (1.5).
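A brief MATLAB sketch (our own illustration, assumed names) illustrates this: taking the LHTF coefficients as f(ih) and the RHTF coefficients as f((i + 1)h) makes the TF-domain reconstruction a piecewise linear interpolation of the function samples.

% Illustrative sketch: TF-domain expansion from samples only. The LHTF member
% falls from 1 to 0 and the RHTF member rises from 0 to 1 over each subinterval,
% so the reconstruction linearly interpolates consecutive samples. Names assumed.
T = 1; m = 8; h = T/m;
f = @(t) sin(2*pi*t) + 1;                  % example function
c1 = f((0:m-1) * h);                       % coefficients of T1 (left handed) members
c2 = f((1:m) * h);                         % coefficients of T2 (right handed) members
t = linspace(0, T, 801); t(end) = [];
k = floor(t/h);                            % subinterval index of each grid point
tau = (t - k*h) / h;                       % normalized position inside the subinterval
f_tf = c1(k+1) .* (1 - tau) + c2(k+1) .* tau;   % piecewise linear reconstruction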

Fig. 1.8 a A set of eight LHTF T1_(8)(t). b A set of eight RHTF T2_(8)(t)

1.2.11 Non-optimal Block Pulse Functions (NOBPF)

The ‘non-optimal’ method of block pulse function coefficient computation has been
suggested by Deb et al. [27] which employs trapezoidal [28] integration instead of
exact integration. The approach is ‘new’ in the sense that the BPF expansion
coefficients of a locally square integrable function have been determined in a more
‘convenient’ manner.
This ‘non-optimal’ expansion procedure for computation of coefficients uses the
trapezoidal rule for integration where only the samples of the function to be
approximated are needed in any particular time intervals to represent the function in
NOBPF [27] domain and thus reduces the computational burden.

Fig. 1.9 Function approximation in non-optimal block pulse function (NOBPF) domain where
area preserving transformation is employed

Let us employ the well-known trapezoidal rule for integration to compute the
non-optimal block pulse function coefficients of a time function f(t). Calling these
coefficients f_i′'s, we get

    f_i' \approx \frac{\frac{1}{2}\,[\,f(ih) + f\{(i+1)h\}\,]\, h}{h} = \frac{f(ih) + f\{(i+1)h\}}{2}            (1.8)

f_i′'s are ‘non-optimal’ coefficients computed approximately from Eq. (1.8). It is
observed that f_i′'s are, in effect, the average values of two consecutive samples of the
function f(t), and this is again a significant deviation from the traditional Formula
(1.5).
The process of function approximation in non-optimal block pulse function
(NOBPF) domain is shown in Fig. 1.9.
A time function f(t) can be approximated in NOBPF domain as

    f(t) \approx [\, f_0' \;\; f_1' \;\; f_2' \;\; \ldots \;\; f_i' \;\; \ldots \;\; f_{(m-1)}' \,]\, \Psi_{(m)}(t) = \mathbf{F}'^{T} \Psi_{(m)}(t)            (1.9)

It is evident that f(t) in (1.9) will not be approximated with a guaranteed minimum
mean integral square error (MISE).
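In MATLAB terms (our own sketch with assumed names), the NOBPF coefficients of Eq. (1.8) need only the m + 1 samples of the function, whereas the optimal BPF coefficients require one integration per subinterval; the sketch also measures how far the two sets of coefficients differ for an example function.

% Illustrative sketch: NOBPF coefficients per Eq. (1.8), i.e. averages of two
% consecutive samples, compared with the optimal BPF coefficients. Names assumed.
T = 1; m = 8; h = T/m;
f = @(t) exp(-2*t);                        % example function
s = f((0:m) * h);                          % m + 1 samples of f
c_nobpf = (s(1:m) + s(2:m+1)) / 2;         % Eq. (1.8): trapezoidal estimate
c_bpf = arrayfun(@(i) integral(f, i*h, (i+1)*h) / h, 0:m-1);   % exact averages
max_dev = max(abs(c_nobpf - c_bpf));       % deviation from the optimal coefficients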
Figure 1.10 shows a time scale history of all the functions discussed above.

1.3 Walsh Functions in Systems and Control

Among all the orthogonal functions outlined earlier, Walsh function based analysis
first became attractive to the researchers from 1962 onwards [7, 29–31]. The reason
for such success was mainly due to its binary nature. One immediate advantage is
the task of analog multiplication. To multiply any signal by a Walsh function, the
problem reduces to an appropriate sequence of sign changes, which makes this
usually difficult operation both simple and potentially accurate [29]. However, in

Fig. 1.10 Time scale history of piecewise constant and related basis function family

system analysis, Walsh functions were employed during the early 1970s. As a
consequence, the advantages of Walsh analysis were unraveled to the workers in
the field compared to the use of conventional sine-cosine functions. Ultimately, the
mathematical groundwork of the Walsh analysis became strong enough to lure interested
researchers to try every new application based upon this function set.
In 1973, it was Corrington [32] who proposed a new technique for solving linear
as well as nonlinear differential and integral equations with the help of Walsh
functions. In 1975, important technical papers relating Walsh functions to the field
of systems and control were published. New ideas were proposed by Rao [33–41]
and Chen [42–47]. Other notable workers were Le Van et al. [48], Tzafestas [49],
Chen [50–53], Mahapatra [54], Paraskevopoulos [55], Moulden [56], Deb and
Datta [57–59], Lewis [60], Marszalek [61], Dai and Sinha [62], Deb et al. [63–68],
and others.
The first positive step for the development of the Walsh domain analysis was the
formulation of the operational matrix for integration. This was done independently
by Rao [33], Chen [42], and Le Van et al. [48]. Le Van sensed that since the
integral operator matrix had an inverse, the inverse must be the differential operator
in the Walsh domain. However, he could not represent the general form of the
operator matrix that was done by Chen [42, 43]. Interestingly, the operational
matrix for integration was first presented by Corrington [32] in the form of a table.
But he failed to recognize the potentiality of the table as a matrix.
This was first pointed out by Chen and he presented Walsh domain analysis with
the operational matrices for integration as well as differentiation:
(i) to solve the problems of linear systems by state space model [42];
(ii) to design piecewise constant gains for optimal control [43];
(iii) to solve optimal control problem [44];
(iv) in variational problems [45];
(v) for time domain synthesis [46];
(vi) for fractional calculus as applied to distributed systems [47].
Rao used Walsh functions for:
(i) system identification [33];
(ii) optimal control of time-delay systems [35];
(iii) identification of time-lag systems [36];
(iv) transfer function matrix identification [38] and piecewise linear system iden-
tification [39];
(v) parameter estimation [40];
(vi) solving functional differential equations and related problems [41].
Rao first formulated the operational matrices for stretch and delay [41]. He
proposed a new technique for extension of computation beyond the limit of initial
normal interval with the help of “single term Walsh series” approach [37], and
estimated the error due to the use of different operational matrices [40]. Rao and
Tzafestas indicated the potentiality of Walsh and related functions in the area of
systems and control in a review paper [69]. Tzafestas [70] assessed the role of

Walsh functions in signal and system analysis and design, in a rich collection of
papers.
W.L. Chen defined a “shift Walsh matrix” for solving delay-differential equa-
tions [51], and used Walsh functions for parameter estimation of bilinear systems
[50] as well as for the analysis of multi-delay systems [53]. Paraskevopoulos
determined the transfer functions of a single input single output (SISO) system from
its impulse response data with the help of Walsh functions and a fast Walsh
algorithm [55]. Tzafestas applied Walsh series approach for lumped and distributed
system identification [49]. Mahapatra used Walsh functions for solving matrix
Riccati equation arising in optimal control studies of linear diffusion equations [54].
Moulden’s work was concerned with the application of Walsh spectral analysis of
ordinary differential equations in a very formal manner [56]. Deb applied Walsh
functions to analyze power-electronic systems [57, 58]. Deb and Datta were the first
to define the Walsh Operational Transfer Function (WOTF) for the analysis of
linear SISO systems [57, 58, 63–65]. Deb was also the first to notice the oscillatory
behavior in the Walsh domain analysis of first-order systems [68].

1.4 Block Pulse Functions in Systems and Control

The earliest work concerning completeness and suitability of BPF for use in place
of Walsh functions is a small technical note of Rao and Srinivasan [71]. Later
Kwong and Chen [14], Chen and Lee [72], and Sloss and Blyth [73] discussed
convergence properties of BPF series and the BPF solution of a linear time invariant
system.
Sannuti’s paper [74] on the analysis and synthesis of dynamical systems in state
space was a significant step toward BPF applications. Shieh et al. [75] dealt with the
same problems. The doctoral dissertation of Srinivasan [76] contained several
applications of BPF to a variety of problems. Rao and Srinivasan proposed methods
of analysis and synthesis for delay systems [77] where an operational matrix for
delay via BPF was proposed.
Chen and Jeng [78] considered systems with piecewise constant delays. BPFs
have also been used to invert Laplace transforms numerically [79–82]. Differential
equations related to the dynamics of the current collection mechanism of electric
locomotives contain terms with a stretched argument. Such equations have been treated
in Ref. [83] using BPF. Chen [84] also dealt with scaled systems. BPFs have been
used in obtaining discrete-time approximations of continuous-time systems. Shieh
et al. [75] and recently Sinha et al. [85] gave some interesting results in this
connection. The BPF method of discretization has been compared with other
techniques employing bilinear transformation, state transition matrix, etc.
The higher powers of the operational matrix for integration accomplished the
task of repeated integration. However, the use of higher powers led to accumulation
of error at each stage of integration. This was recognized by Rao and Palanisamy,
who gave one-shot operational matrices for repeated integration via BPF and Walsh
functions. Wang [86] dealt with the same aspect, suggesting improvements in
operational matrices for fractional and operational calculus. Palanisamy [87]
revealed certain interesting aspects of the operational matrix for integration.
Optimal controls for time-varying systems have also been worked out
[88]. Kawaji [89] gave an analysis of linear systems with observers.
Replacement of Walsh function by block pulse took place in system identifi-
cation algorithms for computational advantage. Shih and Chia [90] used BPF in
identifying delay systems. Jan and Wong [91] and Cheng and Hsu [92] identified
bilinear models. Multidimensional BPFs have been proposed by Rao and
Srinivasan [93]. These were used in solving partial differential equations. Nath and
Lee [94] discussed multidimensional extensions of block pulse with applications.
Identification of nonlinear distributed systems and linear feedback systems via
block pulse functions were done by Hsu and Cheng [95] and Kwong and Chen [96].
Palanisamy and Bhattacharya also used block pulse functions in system identifi-
cation [97] and in analyzing stiff systems [98]. Solution of multipoint boundary
value problems and integral equations were obtained using a set of BPF [99, 100].
In parameter estimation of bilinear systems Cheng and Hsu [101] applied block
pulse functions. Still many more applications of block pulse functions remain to be
mentioned.
Thus, the block pulse function continued to reign over other piecewise constant
orthogonal functions with its simple but powerful attributes. However, numerical
instability is observed when the deconvolution operation [102] in the BPF domain is
executed for system identification. Also, oscillations were observed [68] for system
analysis in the BPF domain.

1.5 Triangular Functions (TF) in Systems and Control

It was Corrington [32] who initiated application of Walsh functions in the area of
systems and control by solving differential and integral equations with this new set.
Only four years later, Chen et al. [47] took up the trail and came up with formal
representation of block pulse functions and partially used them in conjunction with
Walsh functions for solving problems related to distributed systems. Two years
earlier, operational matrices for integration and differentiation in Walsh domain
were proposed by Chen et al. [46]. It was also independently presented by Rao and
Sivakumar [33] and Le Van et al. [48]. Such operational matrices played a vital role
in the analysis and synthesis of control systems.
With the onset of the 80s, the block pulse function set proved to be more elegant,
simple and computationally attractive compared to Walsh functions. Thus, it
enjoyed immense popularity for more than a decade. Later, variants of block pulse
functions were employed successfully by Deb et al. [22, 23]. In the last decade, a
new set of functions, namely the triangular function set, was introduced by Deb et al.
[24–26]. Gradually, this new set of functions attracted many researchers
[103–106].

1.6 A New Set of Orthogonal Hybrid Functions (HF): A Combination of Sample-and-Hold Functions (SHF) and Triangular Functions (TF)

In the present work, a new set of piecewise orthogonal hybrid function (HF),
derived from sample-and-hold [23] functions and triangular functions [24–26], is
proposed. This new set of functions is used for the analysis and synthesis of various
types of linear continuous time non-homogeneous as well as homogeneous control
systems, namely, time-invariant systems, time varying systems and multi-delay
systems. The main advantage of the hybrid function set is that it works with function
samples. Also, it provides the time solutions in a piecewise linear manner in two
parts: a sample-and-hold function component and a triangular function component.
If we leave out the TF component of the solution, we are left with the
sample-and-hold function component, which is sometimes needed for analyzing
digital control systems and related research.
Function approximation in the traditional BPF domain requires many integration
operations. That is, computation of each coefficient means performing one
numerical integration, thus requiring more time as well as memory space and increasing
the computational burden. In contrast, this 'new' HF domain approach is better suited to
handling complicated functions because it works with function samples as coefficients.
Also, for identification of control systems, the block pulse domain technique
gives rise to numerical instability [102], while the HF domain approach does not.
Thus, the HF domain approach seems to be more efficient in many ways than
traditional approaches, the main reason being that this set works with samples only.
With this rich background, it seems worthwhile to explore this field.

References

1. Sansone, G.: Orthogonal Functions. Interscience Publishers Inc., New York (1959)
2. Fourier, J.B.: Thėorie analytique de la Chaleur, 1828. English edition: The analytic theory of
heat, 1878, Reprinted by Dover Pub. Co, New York (1955)
3. Beauchamp, K.G.: Walsh functions and their applications. Academic Press, London (1975)
4. Haar, Alfred: Zur theorie der orthogonalen funktionen systeme. Math. Annalen 69, 331–371
(1910)
5. Mix Dwight, F., Olejniczak, K.J.: Elements of Wavelets for Engineers and Scientists. Wiley
Interscience (2003)
6. Rademacher, H.: Einige sätze von allegemeinen orthogonal funktionen. Math. Annalen 87,
122–138 (1922)
7. Walsh, J.L.: A closed set of normal orthogonal functions. Am. J. Math. 45, 5–24 (1923)
8. Harmuth, H.F.: Transmission of information by orthogonal functions, 2nd edn. Springer,
Berlin (1972)
9. Fino, B.J., Algazi, V.R.: Slant-Haar transform. Proc. IEEE 62, 653–654 (1974)
10. Huang, D.M.: Walsh-Hadamard-Haar hybrid transform. In: IEEE Proceedings of 5th
International Conference on Pattern Recognition, pp. 180–182 (1980)
11. Wu, T.T., Chen, C.F., Tsay, Y.T.: Walsh operational matrices for fractional calculus and their
application to distributed systems. In: IEEE Symposium on Circuits and Systems, Munich,
Germany, April, 1976
12. Jiang, J.H., Schaufelberger, W.: Block Pulse Functions and Their Application in Control
System. LNCIS, vol. 179. Springer, Berlin (1992)
13. Deb, Anish, Sarkar, Gautam, Sen, S.K.: Block pulse functions, the most fundamental of all
piecewise constant basis functions. Int. J. Syst. Sci. 25(2), 351–363 (1994)
14. Kwong, C.P., Chen, C.F.: The convergence properties of block pulse series. Int. J. Syst. Sci.
12(6), 745–751 (1981)
15. Enomoto, H., Shibata, K.: Orthogonal transform coding system for television signals. In:
Proceedings of the 1971 Symposium on Application of Walsh Functions, Washington DC,
USA, pp. 11–17 (1971)
16. Pratt, W.K., Welch, L.R., Chen, W.: Slant transform for image coding. In: Proceedings of the
1972 Symposium on Application of Walsh Functions, Washington DC, USA, pp. 229–234,
March 1972
17. Pratt, W.K.: Digital Image Processing. Wiley, New York (1978)
18. Hwang, Chyi: Solution of functional differential equation via delayed unit step functions. Int.
J. Syst. Sci. 14(9), 1065–1073 (1983)
19. Patra, A., Rao, G.P.: General hybrid orthogonal functions—a new tool for the analysis of
power-electronic systems. IEEE Trans. Ind. Electron. 36(3), 413–424 (1989)
20. Patra, A., Rao, G.P.: General Hybrid Orthogonal Functions and Their Applications in
Systems and Control. Springer, Berlin (1996)
21. Deb, Anish, Sarkar, Gautam, Sen, S.K.: Linearly pulse-width modulated blockpulse
functions and their application to linear SISO feedback control system identification. Proc.
IEE, Part D, Control Theory Appl. 142(1), 44–50 (1995)
22. Deb, Anish, Sarkar, Gautam, Sen, S.K.: A new set of pulse width modulated generalised
block pulse functions (PWM-GBPF) and their application to cross/auto-correlation of time
varying functions. Int. J. Syst. Sci. 26(1), 65–89 (1995)
23. Deb, Anish, Sarkar, Gautam, Bhattacharjee, Manabrata, Sen, S.K.: A new set of piecewise
constant orthogonal functions for the analysis of linear SISO systems with sample-and-hold.
J. Franklin Inst. 335B(2), 333–358 (1998)
24. Deb, Anish, Sarkar, Gautam, Dasgupta, Anindita: A complementary pair of orthogonal
triangular function sets and its application to the analysis of SISO control systems. J. Inst.
Eng. (India) 84(December), 120–129 (2003)
25. Deb, Anish, Dasgupta, Anindita, Sarkar, Gautam: A complementary pair of orthogonal
triangular function sets and its application to the analysis of dynamic systems. J. Franklin
Inst. 343(1), 1–26 (2006)
26. Deb, Anish, Sarkar, Gautam, Sengupta, Anindita: Triangular Orthogonal Functions for the
Analysis of Continuous Time Systems. Anthem Press, London (2011)
27. Deb, A., Sarkar, G., Mandal, P., Biswas, A., Sengupta, A.: Optimal block pulse function
(OBPF) vs. Non-optimal block Pulse function (NOBPF). In: Proceedings of International
Conference of IEE (PEITSICON) 2005, Kolkata, 28–29 Jan 2005, pp. 195–199
28. Riley, K.F., Hobson, M.P., Bence, S.J.: Mathematical Methods for Physics and Engineering,
2nd edn. Cambridge University Press, UK (2004)
29. Beauchamp, K.G.: Applications of Walsh and Related Functions with An Introduction to
Sequency Theory. Academic Press, London (1984)
30. Maqusi, M.: Applied Walsh Analysis. Heyden, London (1981)
31. Hammond, J.L., Johnson, R.S.: A review of orthogonal square wave functions and their
application to linear networks. J. Franklin Inst. 273, 211–225 (1962)
32. Corrington, M.S.: Solution of differential and integral equations with Walsh functions. IEEE
Trans. Circuit Theory CT-20(5), 470–476 (1973)
33. Rao, G.P., Sivakumar, L.: System identification via Walsh functions. Proc. IEE 122(10),
1160–1161 (1975)
34. Rao, G.P., Palanisamy, K.R.: A new operational matrix for delay via Walsh functions and
some aspects of its algebra and applications. In: 5th National Systems Conference, NSC-78,
PAU Ludhiana (India), Sept 1978, pp. 60–61
35. Rao, G.P., Palanisamy, K.R.: Optimal control of time-delay systems via Walsh functions. In:
9th IFIP Conference on Optimisation Techniques, Polish Academy of Sciences, System
Research Institute, Poland, Sept 1979
36. Rao, G.P., Sivakumar, L.: Identification of time-lag systems via Walsh functions. IEEE
Trans. Autom. Control AC-24(5), 806–808 (1979)
37. Rao, G.P., Palanisamy, K.R., Srinivasan, T.: Extension of computation beyond the limit of
initial normal interval in Walsh series analysis of dynamical systems. IEEE Trans. Autom.
Control AC-25(2), 317–319 (1980)
38. Rao, G.P., Sivakumar, L.: Transfer function matrix identification in MIMO systems via
Walsh functions. Proc. IEEE 69(4), 465–466 (1981)
39. Rao, G.P., Sivakumar, L.: Piecewise linear system identification via Walsh functions. Int.
J. Syst. Sci. 13(5), 525–530 (1982)
40. Rao, G.P.: Piecewise Constant Orthogonal Functions and Their Application in Systems and
Control. LNC1S, vol. 55. Springer, Berlin (1983)
41. Rao, G.P., Palanisamy, K.R.: Walsh stretch matrices and functional differential equations.
IEEE Trans. Autom. Control AC-21(1), 272–276 (1982)
42. Chen, C.F., Hsiao, C.H.: A state space approach to Walsh series solution of linear systems.
Int. J. Syst. Sci. 6(9), 833–858 (1975)
43. Chen, C.F., Hsiao, C.H.: Design of piecewise constant gains for optimal control via Walsh
functions. IEEE Trans. Autom. Control AC-20(5), 596–603 (1975)
44. Chen, C.F., Hsiao, C.H.: Walsh series analysis in optimal control. Int. J. Control 21(6), 881–
897 (1975)
45. Chen, C.F., Hsiao, C.H.: A Walsh series direct method for solving variational problems.
J. Franklin Inst. 300(4), 265–280 (1975)
46. Chen, C.F., Hsiao, C.H.: Time domain synthesis via Walsh functions. IEE Proc. 122(5), 565–
570 (1975)
47. Chen, C.F., Tsay, Y.T., Wu, T.T.: Walsh operational matrices for fractional calculus and their
application to distributed systems. J. Franklin Inst. 303(3), 267–284 (1977)
48. Le Van, T., Tam, L.D.C., Van Houtte, N.: On direct algebraic solutions of linear differential
equations using Walsh transforms. IEEE Trans. Circuits Syst. CAS-22(5), 419–422 (1975)
49. Tzafestas, S.G.: Walsh series approach to lumped and distributed system identification.
J. Franklin Inst. 305(4), 199–220 (1978)
50. Chen, W.L., Shih, Y.P.: Parameter estimation of bilinear systems via Walsh functions.
J. Franklin Inst. 305(5), 249–257 (1978)
51. Chen, W.L., Shih, Y.P.: Shift Walsh matrix and delay differential equations. IEEE Trans.
Autom. Control AC-23(6), 1023–1028 (1978)
52. Chen, W.L., Lee, C.L.: Walsh series expansions of composite functions and its applications
to linear systems. Int. J. Syst. Sci. 13(2), 219–226 (1982)
53. Chen, W.L.: Walsh series analysis of multi-delay systems. J. Franklin Inst. 303(4), 207–217
(1982)
54. Mahapatra, G.B.: Solution of optimal control problem of linear diffusion equation via Walsh
functions. IEEE Trans. Autom. Control AC-25(2), 319–321 (1980)
55. Paraskevopoulos, P.N., Varoufakis, S.J.: Transfer function determination from impulse
response via Walsh functions. Int. J. Circuit Theory Appl. 8(1), 85–89 (1980)
56. Moulden, T.H., Scott, M.A.: Walsh spectral analysis for ordinary differential equations: Part
1—Initial value problem. IEEE Trans. Circuits Syst. CAS-35(6), 742–745 (1988)
57. Deb, Anish, Datta, A.K.: Time response of pulse-fed SISO systems using Walsh operational
matrices. Adv. Model. Simul. 8(2), 30–37 (1987)
58. Deb, A.: On Walsh domain analysis of power-electronic systems. Ph.D. (Tech.) dissertation,
University of Calcutta (1989)
59. Deb, Anish, Sen, S.K., Datta, A.K.: Walsh functions and their applications: a review. IETE
Tech. Rev. 9(3), 238–252 (1992)
60. Lewis, F.L., Mertzios, B.G., Vachtsevanos, G., Christodoulou, M.A.: Analysis of bilinear
systems using Walsh functions. IEEE Trans. Autom. Control 35(1), 119–123 (1990)
61. Marszalek, W.: Orthogonal functions analysis of singular systems with impulsive responses.
Proc. IEE, Part D, Control Theory Appl. 137(2), 84–86 (1990)
62. Dai, H., Sinha, N.K.: Robust coefficient estimation of Walsh functions. Proc. IEE, Part D,
Control Theory Appl. 137(6), 357–363 (1990)
63. Deb, Anish, Datta, A.K.: Analysis of continuously variable pulse-width modulated system
via Walsh functions. Int. J. Syst. Sci. 23(2), 151–166 (1992)
64. Deb, Anish, Datta, A.K.: Analysis of pulse-fed power electronic circuits using Walsh
functions. Int. J. Electron. 62(3), 449–459 (1987)
65. Deb, A., Sarkar, G., Sen, S.K., Datta, A.K.: A new method of analysis of chopper-fed DC
series motor using Walsh function. In: Proceedings of 4th European Conference on Power
Electronics and Applications (EPE ’91), Florence, Italy, 1991
66. Deb, A., Datta, A.K.: On Walsh/block pulse domain analysis of power electronic circuits Part
1. Continuously phase-controlled rectifier. Int. J. Electron. 79(6), 861–883 (1995)
67. Deb, A., Datta, A.K.: On Walsh/block pulse domain analysis of power electronic circuits Part
2. Continuously pulse-width modulated inverter. Int. J. Electron. 79(6), 885–895 (1995)
68. Deb, A., Fountain, D.W.: A note on oscillations in Walsh domain analysis of first order
systems. IEEE Trans. Circuits Syst. CAS-38(8), 945–948 (1991)
69. Rao, G.P., Tzafestas, S.G.: A decade of piecewise constant orthogonal functions in systems
and control. Math. Comput. Simul. 27(5 and 6), 389–407 (1985)
70. Tzafestas, S.G. (ed.): Walsh functions in signal and systems analysis and design. Van
Nostrand Reinhold Co., New York (1985)
71. Rao, G.P., Srinivasan, T.: Remarks on “Author’s reply” to “Comments on design of
piecewise constant gains for optimal control via Walsh functions”. IEEE Trans. Autom.
Control AC-23(4), 762–763 (1978)
72. Chen, W.L., Lee, C.L.: On the convergence of the block-pulse series solution of a linear
time-invariant system. Int. J. Syst. Sci. 13(5), 491–498 (1982)
73. Sloss, G., Blyth, W.F.: A priori error estimates for Corrington’s Walsh function method.
J. Franklin Inst. 331B(3), 273–283 (1994)
74. Sannuti, P.: Analysis and synthesis of dynamic systems via block pulse functions. Proc. IEE
124(6), 569–571 (1977)
75. Shieh, L.A., Yates, R.E., Navarro, J.M.: Representation of continuous time state equations by
discrete-time state equations. IEEE Trans. Syst. Man Cybern. SMC-8(6), 485–492 (1978)
76. Srinivasan, T.: Analysis of dynamical systems via block-pulse functions. Ph. D. dissertation,
Department of Electrical Engineering, I.I.T., Kharagpur, India (1979)
77. Rao, G.P., Srinivasan, T.: Analysis and synthesis of dynamic systems containing time delays
via block-pulse functions. Proc. IEE 125(9), 1064–1068 (1978)
78. Chen, W.L., Jeng, B.S.: Analysis of piecewise constant delay systems via block-pulse
functions. Int. J. Syst. Sci. 12(5), 625–633 (1981)
79. Hwang, C., Guo, T.Y., Shih, Y.P.: Numerical inversion of multidimensional Laplace
transforms via block-pulse function. Proc. IEE, Part. D, Control Theory Appl. 130(5), 250–
254 (1983)
80. Marszalek, W.: Two-dimensional inverse Laplace transform via block-pulse functions
method. Int. J. Syst. Sci. 14, 1311–1317 (1983)
81. Jiang, Z.H.: New approximation method for inverse Laplace transforms using block-pulse
functions. Int. J. Syst. Sci. 18(10), 1873–1888 (1987)
82. Shieh, L.A., Yates, R.E.: Solving inverse Laplace transform, linear and nonlinear state
equations using block-pulse functions. Comput. Electr. Eng. 6, 3–17 (1979)
83. Rao, G.P., Srinivasan, T.: An optimal method of solving differential equations characterizing
the dynamic of a current collection system for an electric locomotive. J. Inst. Math. Appl. 25
(4), 329–342 (1980)
84. Chen, W.L.: Block-pulse series analysis of scaled systems. Int. J. Syst. Sci. 12(7), 885–891
(1981)
85. Sinha, N.K., Zhou, Q.: Discrete-time approximation of multivariable continuous-time
systems. Proc. IEE, Part D, Control Theory Appl. 130(3), 103–110 (1983)
86. Wang, C.-H.: On the generalisation of block pulse operational matrices for fractional and
operational calculus. J. Franklin Inst. 315(2), 91–102 (1983)
87. Palanisamy, K.R.: A note on block-pulse function operational matrix for integration. Int.
J. Syst. Sci. 14(11), 1287–1290 (1983)
88. Hsu, N.-S., Cheng, B.: Analysis and optimal control of time-varying linear systems via
block-pulse functions. Int. J. Control 33(6), 1107–1122 (1981)
89. Kawaji, S.: Block-pulse series analysis of linear systems incorporating observers. Int.
J. Control 37(5), 1113–1120 (1983)
90. Shih, Y.P., Chia, W.K.: Parameter estimation of delay systems via block-pulse functions.
J. Dyn. Syst. Measur. Control 102(3), 159–162 (1980)
91. Jan, Y.G., Wong, K.M.: Bilinear system identification by block pulse functions. J. Franklin
Inst. 512(5), 349–359 (1981)
92. Cheng, B., Hsu, N.-S.: Analysis and parameter estimation of bilinear systems via block-pulse
functions. Int. J. Control 36(1), 53–65 (1982)
93. Rao, G.P., Srinivasan, T.: Multidimensional block pulse functions and their use in the study
of distributed parameter systems. Int. J. Syst. Sci. 11(6), 689–708 (1980)
94. Nath, A.K., Lee, T.T.: On the multidimensional extension of block-pulse functions and their
applications. Int. J. Syst. Sci. 14(2), 201–208 (1983)
95. Hsu, N.-S., Cheng, B.: Identification of nonlinear distributed systems via block pulse
functions. Int. J. Control 36(2), 281–291 (1982)
96. Kwong, C.P., Chen, C.F.: Linear feedback systems identification via block pulse functions.
Int. J. Syst. Sci. 12(5), 635–642 (1981)
97. Palanisamy, K.R., Bhattacharya, D.K.: System identification via block-pulse functions. Int.
J. Syst. Sci. 12(5), 643–647 (1981)
98. Palanisamy, K.R., Bhattacharya, D.K.: Analysis of stiff systems via single step method of
block-pulse functions. Int. J. Syst. Sci. 13(9), 961–968 (1982)
99. Kalat, J., Paraskevopoulos, P.N.: Solution of multipoint boundary value problems via block
pulse functions. J. Franklin Inst. 324(1), 73–81 (1987)
100. Kung, F.C., Chen, S.Y.: Solution of integral equations using a set of block pulse functions.
J. Franklin Inst. 306(4), 283–291 (1978)
101. Cheng, B., Hsu, N.S.: Analysis and parameter estimation of bilinear systems via block pulse
functions. Int. J. Control 36(1), 53–65 (1982)
102. Deb, A., Sarkar, G., Biswas, A., Mandal, P.: Numerical instability of deconvolution
operation via block pulse functions. J. Franklin Inst. 345, 319–327 (2008)
103. Hoseini, S.M., Soleimani, K.: Analysis of time delay systems via new triangular functions.
J. Appl. Math. 5(19) (2010)
104. Babolian, Z.M., Saeed, H.-V.: Numerical solution of nonlinear Volterra-Fredholm
integro-differential equations via direct method using triangular functions. Comput. Math.
Appl. 58(2), 239–247 (2009)
105. Han, Z., Li, S., Cao, Q.: Triangular orthogonal functions for nonlinear constrained optimal
control problems. Res. J. Appl. Sci. Eng. Technol. 4(12), 1822–1827 (2012)
106. Almasieh, H., Roodaki, M.: Triangular functions method for the solution of Fredholm
integral equations system. Ain Shams Eng. J. 3(4), 411–416 (2012)
Chapter 2
The Hybrid Function (HF) and Its
Properties

Abstract Starting with a brief review of block pulse functions (BPF), sample-and-hold functions (SHF) and triangular functions (TF), this chapter presents the genesis of hybrid functions (HF) mathematically. Then different elementary properties and operational rules of HF are discussed. The chapter ends with a qualitative comparison of BPF, SHF, TF and HF.

In this chapter, we propose a new set of orthogonal functions [1]. The function set is
named ‘hybrid function (HF)’. This set is a combination of the sample-and-hold
function (SHF) set [2] and a right handed triangular function (RHTF) set [3, 4]. This
new function set is different from piecewise constant orthogonal functions [5] and
approximates square integrable time functions of Lebesgue measure in a piecewise
linear manner.
In the following, we discuss different properties of the proposed hybrid function
(HF) set. That is, its elementary properties and the operational rules, like addition,
subtraction, multiplication and division in the HF domain, are discussed.

2.1 Brief Review of Block Pulse Functions (BPF) [6]

Referring to Fig. 1.4 and definition of block pulse functions given in Sect. 1.2.4, a square integrable time function f(t) of Lebesgue measure may be expanded into an m-term BPF series in $t \in [0, T)$ as

$$f(t) \approx \sum_{i=0}^{m-1} f_i \, \psi_i(t) = \left[\, f_0 \;\; f_1 \;\; f_2 \;\; \cdots \;\; f_i \;\; \cdots \;\; f_{(m-1)} \,\right] \Psi_{(m)}(t), \quad i = 0, 1, 2, \ldots, (m-1)$$
$$\triangleq \mathbf{F}_{(m)}^{\mathrm{T}} \, \Psi_{(m)}(t) \tag{2.1}$$


where $[\;\cdots\;]^{\mathrm{T}}$ denotes transpose and the (i + 1)th BPF coefficient $f_i$ is given by

$$f_i = \frac{1}{h} \int_{ih}^{(i+1)h} f(t)\, \psi_i(t)\, dt \tag{2.2}$$

where $h = \frac{T}{m}$ s.
Coefficients evaluated via Eq. (2.2) always ensure minimum mean integral square error (MISE) [6] with respect to function approximation. Thus, the coefficients $f_i$'s may be termed 'optimal'.
If Eq. (2.2) is computed via the trapezoidal rule, the coefficients will slightly deviate from the $f_i$'s of (2.2) due to inexact integration. However, such 'approximate' computation, leading to 'approximate' coefficients $f_i'$'s, has the advantage of working with function samples only. Thus

$$f_i = \frac{1}{h} \int_{ih}^{(i+1)h} f(t)\, \psi_i(t)\, dt \approx \frac{f(ih) + f\{(i+1)h\}}{2} = f_i' \tag{2.3}$$

We call such a BPF expansion a 'non-optimal' one, and any analysis based upon this technique may be called 'non-optimal' BPF (NOBPF) analysis.
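The difference between the 'optimal' coefficients of Eq. (2.2) and the 'non-optimal' sample-based coefficients of Eq. (2.3) is easily checked numerically. The following MATLAB sketch is purely illustrative: the test function, m and T are assumed here, and it is not one of the Appendix B programs of this book.

```matlab
% Illustrative sketch: optimal (Eq. 2.2) vs non-optimal (Eq. 2.3) BPF coefficients.
% Assumed data: f(t) = 1 - exp(-t), T = 1 s, m = 8.
m = 8; T = 1; h = T/m;
f = @(t) 1 - exp(-t);
fi_opt = zeros(1, m);            % 'optimal' coefficients via exact integration
fi_non = zeros(1, m);            % 'non-optimal' coefficients via the trapezoidal rule
for i = 0:m-1
    fi_opt(i+1) = integral(f, i*h, (i+1)*h)/h;   % Eq. (2.2), psi_i = 1 on [ih,(i+1)h)
    fi_non(i+1) = (f(i*h) + f((i+1)*h))/2;       % Eq. (2.3), uses only two samples
end
disp([fi_opt; fi_non])           % the two rows differ slightly for a curved f(t)
```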

2.2 Brief Review of Sample-and-Hold Functions (SHF) [2]

Any square integrable function f(t) may be represented by a sample-and-hold function set in the semi-open interval [0, T) by considering

$$f_i(t) = f(ih), \quad i = 0, 1, 2, \ldots, (m-1)$$

where h is the sampling period (= T/m), $f_i(t)$ is the amplitude of the function f(t) at time ih, and f(ih) is the first term of the Taylor series expansion of the function f(t) around the point t = ih, because, for a zero order hold (ZOH), the amplitude of the function f(t) at t = ih is held constant for the duration h.
SHFs are similar to BPFs in many aspects. The (i + 1)th member of an SHF set, comprised of m component functions, is defined as

$$S_i(t) = \begin{cases} 1 & \text{for } ih \le t < (i+1)h \\ 0 & \text{elsewhere} \end{cases} \tag{2.4}$$

where i = 0, 1, 2, …, (m − 1).
Fig. 2.1 A sample-and-hold device

A square integrable time function f(t) of Lebesgue measure may be expanded into an m-term SHF series in $t \in [0, T)$ as

$$f(t) \approx \sum_{i=0}^{(m-1)} g_i \, S_i(t) = \left[\, g_0 \;\; g_1 \;\; g_2 \;\; \cdots \;\; g_i \;\; \cdots \;\; g_{(m-1)} \,\right] \mathbf{S}_{(m)}(t), \quad i = 0, 1, 2, \ldots, (m-1)$$
$$\triangleq \mathbf{G}_{(m)}^{\mathrm{T}} \, \mathbf{S}_{(m)}(t) \tag{2.5}$$

where $[\;\cdots\;]^{\mathrm{T}}$ denotes transpose and $g_i = f(ih)$ is the (i + 1)th sample of the function f(t). In fact, the $g_i$'s are the samples of the function f(t) with the sampling period h.
Considering the nature of the SHF set, which is a look-alike of the BPF set, it is easy to conclude that this set is orthogonal as well as complete in $t \in [0, T)$. However, the special property of the SHF is revealed by using the sample-and-hold concept in deriving the required operational matrices. If a time signal f(t) is fed to a sample-and-hold device as shown in Fig. 2.1, the output of the device approximates f(t) as per Eq. (2.5).
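As an illustration of Eq. (2.5) (assumed data, not an Appendix B program), the staircase output of such a sample-and-hold device can be sketched in MATLAB as follows.

```matlab
% Illustrative sketch: SHF (zero-order hold) approximation per Eq. (2.5).
% Assumed data: f(t) = 1 - exp(-t), T = 1 s, m = 8.
m = 8; T = 1; h = T/m;
f = @(t) 1 - exp(-t);
g = f((0:m-1)*h);                  % SHF coefficients are simply the samples f(ih)
tt = linspace(0, T - 1e-9, 400);   % evaluation grid inside [0, T)
idx = floor(tt/h) + 1;             % which subinterval each t falls in (1-based)
f_shf = g(idx);                    % staircase (piecewise constant) approximation
plot(tt, f(tt), tt, f_shf);        % compare the exact curve with its SHF staircase
legend('f(t)', 'SHF approximation');
```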

2.3 Brief Review of Triangular Functions (TF) [3, 4]

From a set of block pulse functions, $\Psi_{(m)}(t)$, we can generate two sets of orthogonal triangular functions (TF) [3, 4], namely $\mathbf{T1}_{(m)}(t)$ and $\mathbf{T2}_{(m)}(t)$, such that

$$\Psi_{(m)}(t) = \mathbf{T1}_{(m)}(t) + \mathbf{T2}_{(m)}(t) \tag{2.6}$$

Figure 1.8a, b show the orthogonal triangular function sets $\mathbf{T1}_{(m)}(t)$ and $\mathbf{T2}_{(m)}(t)$, where m has been chosen arbitrarily as 8. These two TF sets are complementary to each other. For convenience, we call $\mathbf{T1}_{(m)}(t)$ the left-handed triangular function (LHTF) vector and $\mathbf{T2}_{(m)}(t)$ the right handed triangular function (RHTF) vector. Using the component functions, we could express the m-set triangular function vectors as

$$\mathbf{T1}_{(m)}(t) \triangleq [\, T1_0(t) \;\; T1_1(t) \;\; T1_2(t) \;\; \cdots \;\; T1_i(t) \;\; \cdots \;\; T1_{m-1}(t) \,]^{\mathrm{T}}$$
$$\mathbf{T2}_{(m)}(t) \triangleq [\, T2_0(t) \;\; T2_1(t) \;\; T2_2(t) \;\; \cdots \;\; T2_i(t) \;\; \cdots \;\; T2_{m-1}(t) \,]^{\mathrm{T}}$$

where $[\;\cdots\;]^{\mathrm{T}}$ denotes transpose.


The (i + 1)th component of the LHTF vector $\mathbf{T1}_{(m)}(t)$ is defined as

$$T1_i(t) = \begin{cases} 1 - (t - ih)/h, & \text{for } ih \le t < (i+1)h \\ 0, & \text{elsewhere} \end{cases} \tag{2.7}$$

and the (i + 1)th component of the RHTF vector $\mathbf{T2}_{(m)}(t)$ is defined as

$$T2_i(t) = \begin{cases} (t - ih)/h, & \text{for } ih \le t < (i+1)h \\ 0, & \text{elsewhere} \end{cases} \tag{2.8}$$

where i = 0, 1, 2, …, (m − 1).
A square integrable time function f(t) of Lebesgue measure may be expanded into an m-term TF series in $t \in [0, T)$ as

$$f(t) \approx [\, c_0 \;\; c_1 \;\; c_2 \;\; \cdots \;\; c_{m-1} \,]\, \mathbf{T1}_{(m)}(t) + [\, d_0 \;\; d_1 \;\; d_2 \;\; \cdots \;\; d_{m-1} \,]\, \mathbf{T2}_{(m)}(t)$$
$$\triangleq \mathbf{C}^{\mathrm{T}} \mathbf{T1}_{(m)}(t) + \mathbf{D}^{\mathrm{T}} \mathbf{T2}_{(m)}(t) \tag{2.9}$$

The constant coefficients $c_i$'s and $d_i$'s in Eq. (2.9) are given by

$$c_i \triangleq f(ih) \quad \text{and} \quad d_i \triangleq f[(i+1)h] \tag{2.10}$$

and the relation $c_{i+1} = d_i$ holds between the $c_i$'s and $d_i$'s.

2.4 Hybrid Function (HF): A Combination of SHF and TF

We can use a set of sample-and-hold functions and the right handed triangular function set to form a new function set, which we name a 'Hybrid Function set'. To define a hybrid function (HF) set, we express the (i + 1)th member $H_i(t)$ of the m-set hybrid function $\mathbf{H}_{(m)}(t)$ as

$$H_i(t) = a_i S_i(t) + b_i T2_i(t)$$

where i = 0, 1, 2, …, (m − 1), $a_i$ and $b_i$ are scaling constants, 0 ≤ t < T, and $S_i$ and $T2_i$ are the (i + 1)th component sample-and-hold function and right handed triangular function.
For convenience, in the following, we write T instead of T2. The above equation can now be expressed as

$$H_i(t) = a_i S_i(t) + b_i T_i(t) \tag{2.11}$$

Let us now illustrate how a function f(t) is represented via a set of hybrid functions.
Fig. 2.2 Function approximation via hybrid functions (HF) domain

In Fig. 2.2, the function f(t) is sampled at three equidistant points (sampling interval h) A, C and E respectively, with corresponding sample values $c_0$, $c_1$ and $c_2$. Now, f(t) can be approximated in a piecewise linear manner by the two straight lines AC and CE, which are the sides of two adjacent trapeziums. The trapezium ACFO may be considered to be a combination of the SHF block ABFO and the triangular block ACB. Similar is the case for the second trapezium CEGF.
Hence, for the first trapezium, the hybrid function representation may be written as a combination of SHF and TF as

$$H_0(t) = c_0 S_0(t) + (c_1 - c_0)\, T_0(t)$$

Then the function f(t) may be represented in the interval $t \in [0, 2h)$ as

$$f(t) = H_0(t) + H_1(t)$$
$$= \{c_0 S_0(t) + (c_1 - c_0) T_0(t)\} + \{c_1 S_1(t) + (c_2 - c_1) T_1(t)\}$$
$$= \{c_0 S_0(t) + c_1 S_1(t)\} + \{(c_1 - c_0) T_0(t) + (c_2 - c_1) T_1(t)\}$$
$$\triangleq \mathbf{C}^{\mathrm{T}} \mathbf{S}_{(2)}(t) + \mathbf{D}^{\mathrm{T}} \mathbf{T}_{(2)}(t)$$

where $[\, c_0 \;\; c_1 \,] = \mathbf{C}^{\mathrm{T}}$ and $[\, (c_1 - c_0) \;\; (c_2 - c_1) \,] = \mathbf{D}^{\mathrm{T}}$.
Generalizing this, we can extend the concept for an m-component function set as

$$f(t) \triangleq \mathbf{C}^{\mathrm{T}} \mathbf{S}_{(m)}(t) + \mathbf{D}^{\mathrm{T}} \mathbf{T}_{(m)}(t) \tag{2.12}$$

where $\mathbf{C}^{\mathrm{T}} = [\, c_0 \;\; c_1 \;\; \cdots \;\; c_{m-1} \,]$ and $\mathbf{D}^{\mathrm{T}} = [\, (c_1 - c_0) \;\; (c_2 - c_1) \;\; \cdots \;\; (c_m - c_{m-1}) \,]$.
The radical difference between the block pulse domain representation and the hybrid function domain representation of a function is that the BPF representation and subsequent analysis always provide us with a staircase solution, while the HF domain technique provides us with piecewise linear results.

It is noted that the SHF coefficients are simply the samples at the sampling instants, while the TF coefficients are the differences between two consecutive samples, e.g., $(c_i - c_{i-1})$, i being a positive integer. An added advantage of the HF domain representation is that, by dropping the TF domain components, we are left with only the SHF domain representation. This is sometimes convenient in function analysis, especially for digital control systems or sample-and-hold systems.
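A minimal MATLAB sketch of this sample-based construction, Eq. (2.12), is given below for illustration; the function f(t), m and T are assumed, and the listing is not one of the book's Appendix B programs.

```matlab
% Illustrative sketch: HF expansion of f(t) per Eq. (2.12).
% Assumed data: f(t) = 1 - exp(-t), T = 1 s, m = 8.
m = 8; T = 1; h = T/m;
f = @(t) 1 - exp(-t);
c = f((0:m)*h);                    % (m+1) samples c0 ... cm
C = c(1:m);                        % SHF coefficients: c0 ... c_{m-1}
D = diff(c);                       % TF coefficients: (c_{i+1} - c_i)
% Piecewise linear reconstruction on a fine grid inside [0, T)
tt = linspace(0, T - 1e-9, 400);
idx = floor(tt/h) + 1;             % subinterval index (1-based)
f_hf = C(idx) + D(idx).*(tt - (idx-1)*h)/h;
max(abs(f_hf - f(tt)))             % small maximum error for this smooth f(t)
```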

2.5 Elementary Properties of Hybrid Functions [7]

In solving certain problems of control engineering, the advantages of using the hybrid function technique are its easy operations and satisfactory approximations. These advantages are due to the distinct properties of hybrid functions. The elementary properties are as follows.

2.5.1 Disjointedness

The hybrid function (HF) set is a combination of the sample-and-hold function (SHF) set and the right handed triangular function (RHTF) set.
The sample-and-hold functions are disjoint with each other in the interval $t \in [0, T)$. This property can be formulated as

$$S_i S_j = \begin{cases} 0 & \text{where } i \neq j \\ S_i & \text{where } i = j \end{cases} \tag{2.13}$$

where i, j = 0, 1, 2, …, (m − 1) and the argument (t) is dropped for simplicity. Henceforth, for both sample-and-hold and triangular functions, the argument indicating time dependency will be discarded.
The property of Eq. (2.13) can directly be obtained from the definition of sample-and-hold functions.
Triangular functions (TF) are derived from the block pulse function set. A block pulse function can be dissected along its two diagonals to generate two triangular functions. That is, when we add the two component triangular functions, we get back the original block pulse function. This dissection process has been shown in Fig. 1.7. Since the component block pulse functions of a set are mutually disjoint, the triangular functions of the LHTF set are disjoint, and so are the functions of the RHTF set.
Fig. 2.3 The product $(T_i T_j)$ and its triangular function representation

Hence, the product of two right handed triangular functions $T_i$ and $T_j$ in the semi-open interval $t \in [0, T)$ is

$$T_i T_j = 0 \quad \text{where } i \neq j \text{ and } i, j = 0, 1, 2, \ldots, (m-1)$$

When $i = j$, the products at the sample points, namely ih and (i + 1)h, are 0 and 1, respectively. Since we are concerned only with the triangular function representation of the product, shown in Fig. 2.3, the result is $T_i$ only. Thus [3]

$$T_i T_j \approx T_i \quad \text{where } i = j$$

This property can be formulated as

$$T_i T_j = \begin{cases} 0 & \text{where } i \neq j \\ T_i & \text{where } i = j \end{cases} \tag{2.14}$$

This property can directly be obtained from the definition of triangular functions.
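The approximation $T_i T_j \approx T_i$ for $i = j$ can be visualized with a short MATLAB sketch (illustrative only; the subinterval width h is an assumed value).

```matlab
% Illustrative sketch: product of two identical RHTFs, cf. Fig. 2.3 and Eq. (2.14).
% Assumed data: one subinterval [0, h) with h = 0.125.
h = 0.125; t = linspace(0, h, 100);
Ti = t/h;                          % right handed triangular function on [0, h)
product = Ti.*Ti;                  % actual product, (t/h)^2
% The TF representation keeps only the end samples 0 and 1, so in the
% TF/HF domain the product is represented by Ti itself.
plot(t, product, t, Ti);
legend('T_i x T_i (actual)', 'T_i (TF representation)');
```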

2.5.2 Orthogonality [1]

The sample-and-hold functions of an SHF set are orthogonal with each other in the interval $t \in [0, T)$:

$$\int_0^T S_i S_j \, dt = \begin{cases} 0 & \text{where } i \neq j \\ \delta_{ij} & \text{where } i = j \end{cases} \tag{2.15}$$

where i, j = 0, 1, 2, …, (m − 1). This property can directly be obtained from the disjointedness of sample-and-hold functions.
Similarly, the triangular functions of a TF set are orthogonal with each other in the interval $t \in [0, T)$:

$$\int_0^T T_i T_j \, dt = \begin{cases} 0 & \text{where } i \neq j \\ \delta_{ij} & \text{where } i = j \end{cases} \tag{2.16}$$

where i, j = 0, 1, 2, …, (m − 1).
Now, the hybrid function set is orthogonal because each of the SHF and TF sets is orthogonal.

2.5.3 Completeness

Like the block pulse functions, the sample-and-hold function set is also complete when i approaches infinity. This means that we have

$$\int_0^T f^2(t)\, dt = \sum_{i=0}^{\infty} f_i^2 \, \| S_i \|^2 \tag{2.17}$$

for any real bounded function f(t) which is square integrable in the interval $t \in [0, T)$.
Here, the expression

$$\| S_i(t) \| = \left[ \int_0^T S_i^2 \, dt \right]^{1/2} \tag{2.18}$$

is the norm of $S_i$.
From a set of block pulse functions, we can generate two sets of orthogonal triangular functions (TF) [3], namely $\mathbf{T1}_{(m)}$ and $\mathbf{T2}_{(m)}$. The triangular function sets are complete when i approaches infinity. This means that we have

$$\int_0^T f^2(t)\, dt = \sum_{i=0}^{\infty} \| c_i\, T1_i + d_i\, T2_i \|^2 \tag{2.19}$$

for any real bounded function f(t) which is square integrable in the interval $t \in [0, T)$.
Here, the expression

$$\| c_i\, T1_i + d_i\, T2_i \| = \left[ \int_0^T \left( c_i\, T1_i + d_i\, T2_i \right)^2 dt \right]^{1/2} \tag{2.20}$$

is the norm of $[c_i\, T1_i + d_i\, T2_i]$.
The hybrid function set is also complete when i approaches infinity. This means that we have

$$\int_0^T f^2(t)\, dt = \sum_{i=0}^{\infty} \| f_i\, S_i + d_i\, T2_i \|^2
\quad \text{or} \quad
\int_0^T f^2(t)\, dt = \sum_{i=0}^{\infty} \| f_i\, S_i + d_i\, T_i \|^2 \tag{2.21}$$

for any real bounded function f(t) which is square integrable in the interval $t \in [0, T)$ and approximated in a piecewise linear manner.
The completeness of hybrid functions guarantees that an arbitrarily small mean square error can be obtained for a real bounded function, which has only a finite number of discontinuous points in the interval $t \in [0, T)$, by increasing the number of terms in the sample-and-hold function series and the triangular function series.

2.6 Elementary Operational Rules

2.6.1 Addition of Two Functions

For addition of two time functions f(t) and g(t), the following cases are considered.
Let a(t) = f(t) + g(t), where a(t) is the resulting function.
We call the samples of the functions f(t) and g(t) in $t \in [0, T)$ $f_0, f_1, f_2, \ldots, f_i, \ldots, f_m$ and $g_0, g_1, g_2, \ldots, g_i, \ldots, g_m$ respectively, as shown in Fig. 2.4.
(a) The continuous functions are expanded in HF domain separately and then added
The functions f(t) and g(t) can be expanded directly into hybrid function series and then added to find the resultant function in HF domain. The HF domain expanded form of a(t) may be called $\bar{a}(t)$. It should be noted that $\bar{a}(t)$ is piecewise linear in nature. Thus
Fig. 2.4 Two time functions f(t) and g(t) expressed in HF domain and their sum $\bar{a}(t)$ in HF domain

$$a(t) = f(t) + g(t) \approx [\, f_0 \;\; f_1 \;\; f_2 \;\; \cdots \;\; f_i \;\; \cdots \;\; f_{m-1} \,] \mathbf{S}_{(m)} + [\, f_0' \;\; f_1' \;\; f_2' \;\; \cdots \;\; f_i' \;\; \cdots \;\; f_{m-1}' \,] \mathbf{T}_{(m)}$$
$$+ [\, g_0 \;\; g_1 \;\; g_2 \;\; \cdots \;\; g_i \;\; \cdots \;\; g_{m-1} \,] \mathbf{S}_{(m)} + [\, g_0' \;\; g_1' \;\; g_2' \;\; \cdots \;\; g_i' \;\; \cdots \;\; g_{m-1}' \,] \mathbf{T}_{(m)}$$
$$= \left[ \sum_{i=0}^{m-1} f_i S_i + \sum_{i=0}^{m-1} f_i' T_i \right] + \left[ \sum_{i=0}^{m-1} g_i S_i + \sum_{i=0}^{m-1} g_i' T_i \right]$$
$$\triangleq \left[ \mathbf{F}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{F}_T^{\mathrm{T}} \mathbf{T}_{(m)} \right] + \left[ \mathbf{G}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{G}_T^{\mathrm{T}} \mathbf{T}_{(m)} \right] \tag{2.22}$$

where

$$\mathbf{F}_S^{\mathrm{T}} \triangleq [\, f_0 \;\; f_1 \;\; f_2 \;\; \cdots \;\; f_i \;\; \cdots \;\; f_{m-1} \,]$$
$$\mathbf{F}_T^{\mathrm{T}} \triangleq [\, f_0' \;\; f_1' \;\; f_2' \;\; \cdots \;\; f_i' \;\; \cdots \;\; f_{m-1}' \,], \quad f_i' = (f_{i+1} - f_i)$$
$$\mathbf{G}_S^{\mathrm{T}} \triangleq [\, g_0 \;\; g_1 \;\; g_2 \;\; \cdots \;\; g_i \;\; \cdots \;\; g_{m-1} \,]$$
$$\mathbf{G}_T^{\mathrm{T}} \triangleq [\, g_0' \;\; g_1' \;\; g_2' \;\; \cdots \;\; g_i' \;\; \cdots \;\; g_{m-1}' \,], \quad g_i' = (g_{i+1} - g_i)$$

Now,

$$\bar{a}(t) \triangleq \left[ \mathbf{F}_S^{\mathrm{T}} + \mathbf{G}_S^{\mathrm{T}} \right] \mathbf{S}_{(m)} + \left[ \mathbf{F}_T^{\mathrm{T}} + \mathbf{G}_T^{\mathrm{T}} \right] \mathbf{T}_{(m)}$$
$$= [\, (f_0 + g_0) \;\; (f_1 + g_1) \;\; \cdots \;\; (f_i + g_i) \;\; \cdots \;\; (f_{m-1} + g_{m-1}) \,] \mathbf{S}_{(m)}$$
$$+ [\, (f_0' + g_0') \;\; (f_1' + g_1') \;\; \cdots \;\; (f_i' + g_i') \;\; \cdots \;\; (f_{m-1}' + g_{m-1}') \,] \mathbf{T}_{(m)}$$
$$\triangleq \mathbf{A}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{A}_T^{\mathrm{T}} \mathbf{T}_{(m)} \tag{2.23}$$

where $\mathbf{A}_S^{\mathrm{T}} = \left[ \mathbf{F}_S^{\mathrm{T}} + \mathbf{G}_S^{\mathrm{T}} \right]$ and $\mathbf{A}_T^{\mathrm{T}} = \left[ \mathbf{F}_T^{\mathrm{T}} + \mathbf{G}_T^{\mathrm{T}} \right]$,

$$\mathbf{A}_S^{\mathrm{T}} \triangleq [\, a_0 \;\; a_1 \;\; a_2 \;\; \cdots \;\; a_i \;\; \cdots \;\; a_{m-1} \,]$$
$$\text{and} \quad \mathbf{A}_T^{\mathrm{T}} \triangleq [\, (a_1 - a_0) \;\; (a_2 - a_1) \;\; (a_3 - a_2) \;\; \cdots \;\; (a_i - a_{i-1}) \;\; \cdots \;\; (a_m - a_{m-1}) \,]$$

Equation (2.23) shows that the hybrid function coefficients of the sum a(t) are the sums of the hybrid function coefficients of the individual functions f(t) and g(t) in each subinterval. This is shown in Fig. 2.4.
(b) The continuous functions f(t) and g(t) are first added and then the resulting function a(t) = f(t) + g(t) is expressed in HF domain
In this case, the resulting continuous function a(t) = f(t) + g(t) is expanded in HF domain as
$$a(t) \approx \bar{a}(t) = [\, a_0 \;\; a_1 \;\; a_2 \;\; \cdots \;\; a_i \;\; \cdots \;\; a_{m-1} \,] \mathbf{S}_{(m)} + [\, (a_1 - a_0) \;\; (a_2 - a_1) \;\; \cdots \;\; (a_i - a_{i-1}) \;\; \cdots \;\; (a_m - a_{m-1}) \,] \mathbf{T}_{(m)} \tag{2.24}$$

where $a_0, a_1, a_2, \ldots, a_i, \ldots, a_m$ are the samples of a(t) or $\bar{a}(t)$ at time instants $0, h, 2h, \ldots, ih, \ldots, mh$.
It is evident from Fig. 2.4 that at the sampling instants the function values, i.e., the samples, will be added as before. Hence,

$$\bar{a}(t) = f(t) + g(t)$$
$$= [\, (f_0 + g_0) \;\; (f_1 + g_1) \;\; \cdots \;\; (f_i + g_i) \;\; \cdots \;\; (f_{m-1} + g_{m-1}) \,] \mathbf{S}_{(m)}$$
$$+ [\, (f_0' + g_0') \;\; (f_1' + g_1') \;\; \cdots \;\; (f_i' + g_i') \;\; \cdots \;\; (f_{m-1}' + g_{m-1}') \,] \mathbf{T}_{(m)}$$

That is

$$\bar{a}(t) \triangleq \mathbf{A}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{A}_T^{\mathrm{T}} \mathbf{T}_{(m)} \tag{2.25}$$

From Eqs. (2.23) and (2.25), it is seen that both results of the addition are identical.
Considering two functions f(t) = 1 − exp(−t) and g(t) = exp(−t), the result of
addition of these two functions, using their individual coefficients, is shown in
Fig. 2.5.

Fig. 2.5 Hybrid function expansion of f(t) = 1 − exp(−t) and g(t) = exp(−t) and the result of their addition f(t) + g(t) = $\bar{a}(t)$ in HF domain with m = 8 and T = 1 s. Due to high degree of accuracy, the piecewise linear curves look like continuous curves (vide Appendix B, Program no. 1)
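A short MATLAB sketch of this coefficient-wise addition is given below; it mimics the example of Fig. 2.5 but is only an illustrative listing with assumed data, not the Appendix B program cited in the caption.

```matlab
% Illustrative sketch: HF domain addition via coefficient-wise addition (Eq. 2.23).
% Assumed data: f(t) = 1 - exp(-t), g(t) = exp(-t), T = 1 s, m = 8.
m = 8; T = 1; h = T/m; t = (0:m)*h;
f = 1 - exp(-t);  g = exp(-t);            % (m+1) samples of each function
FS = f(1:m);  FT = diff(f);               % SHF and TF coefficients of f(t)
GS = g(1:m);  GT = diff(g);               % SHF and TF coefficients of g(t)
AS = FS + GS; AT = FT + GT;               % HF coefficients of a(t) = f(t) + g(t)
% Check against direct expansion of a(t) = f(t) + g(t) = 1 (its samples are all 1)
a = f + g;
disp(max(abs(AS - a(1:m))));              % 0: SHF coefficients agree
disp(max(abs(AT - diff(a))));             % 0: TF coefficients agree
```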

2.6.2 Subtraction of Two Functions

For subtraction of two time functions f(t) and g(t), the following cases are considered.
Let s(t) = f(t) − g(t), where s(t) is the resulting function of subtraction.
As before, the samples of the individual functions f(t) and g(t) in the time domain are $f_0, f_1, f_2, \ldots, f_i, \ldots, f_m$ and $g_0, g_1, g_2, \ldots, g_i, \ldots, g_m$ respectively.
The functions f(t) and g(t) are expressed in HF domain. Then we subtract the HF domain expanded functions to obtain s(t) in HF domain. The HF domain expanded form of s(t) may be called $\bar{s}(t)$.
(a) The continuous functions are expanded in HF domain separately and then subtracted
The difference of the two time functions f(t) and g(t) can be expressed in HF domain as

$$s(t) = f(t) - g(t) \approx [\, f_0 \;\; f_1 \;\; f_2 \;\; \cdots \;\; f_i \;\; \cdots \;\; f_{m-1} \,] \mathbf{S}_{(m)} + [\, f_0' \;\; f_1' \;\; f_2' \;\; \cdots \;\; f_i' \;\; \cdots \;\; f_{m-1}' \,] \mathbf{T}_{(m)}$$
$$- [\, g_0 \;\; g_1 \;\; g_2 \;\; \cdots \;\; g_i \;\; \cdots \;\; g_{m-1} \,] \mathbf{S}_{(m)} - [\, g_0' \;\; g_1' \;\; g_2' \;\; \cdots \;\; g_i' \;\; \cdots \;\; g_{m-1}' \,] \mathbf{T}_{(m)}$$
$$\triangleq \left[ \mathbf{F}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{F}_T^{\mathrm{T}} \mathbf{T}_{(m)} \right] - \left[ \mathbf{G}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{G}_T^{\mathrm{T}} \mathbf{T}_{(m)} \right] \tag{2.26}$$

Hence,

$$\bar{s}(t) \triangleq \left[ \mathbf{F}_S^{\mathrm{T}} - \mathbf{G}_S^{\mathrm{T}} \right] \mathbf{S}_{(m)} + \left[ \mathbf{F}_T^{\mathrm{T}} - \mathbf{G}_T^{\mathrm{T}} \right] \mathbf{T}_{(m)}$$
$$= [\, (f_0 - g_0) \;\; (f_1 - g_1) \;\; \cdots \;\; (f_i - g_i) \;\; \cdots \;\; (f_{m-1} - g_{m-1}) \,] \mathbf{S}_{(m)}$$
$$+ [\, (f_0' - g_0') \;\; (f_1' - g_1') \;\; \cdots \;\; (f_i' - g_i') \;\; \cdots \;\; (f_{m-1}' - g_{m-1}') \,] \mathbf{T}_{(m)}$$
$$\triangleq \mathbf{S}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{S}_T^{\mathrm{T}} \mathbf{T}_{(m)} \tag{2.27}$$

where

$$\mathbf{S}_S^{\mathrm{T}} \triangleq \left[ \mathbf{F}_S^{\mathrm{T}} - \mathbf{G}_S^{\mathrm{T}} \right] \triangleq [\, s_0 \;\; s_1 \;\; s_2 \;\; \cdots \;\; s_i \;\; \cdots \;\; s_{m-1} \,]$$

and

$$\mathbf{S}_T^{\mathrm{T}} = \left[ \mathbf{F}_T^{\mathrm{T}} - \mathbf{G}_T^{\mathrm{T}} \right] \triangleq [\, (s_1 - s_0) \;\; (s_2 - s_1) \;\; (s_3 - s_2) \;\; \cdots \;\; (s_i - s_{i-1}) \;\; \cdots \;\; (s_m - s_{m-1}) \,]$$

Equation (2.27) shows that the hybrid function coefficients of the difference s(t) of the two functions are the differences of the hybrid function coefficients of the individual functions at each sampling point. This is shown in Fig. 2.6.
Fig. 2.6 Two time functions f(t) and g(t) expressed via HF domain and their difference $\bar{s}(t)$ in HF domain

(b) The continuous functions f(t) and g(t) are first subtracted and then the resulting function s(t) = f(t) − g(t) is expanded via HF domain
In this case, the resulting continuous function s(t) = f(t) − g(t) is expanded in HF domain as

$$\bar{s}(t) = [\, s_0 \;\; s_1 \;\; s_2 \;\; \cdots \;\; s_i \;\; \cdots \;\; s_{m-1} \,] \mathbf{S}_{(m)} + [\, (s_1 - s_0) \;\; (s_2 - s_1) \;\; \cdots \;\; (s_i - s_{i-1}) \;\; \cdots \;\; (s_m - s_{m-1}) \,] \mathbf{T}_{(m)}$$

Coefficients of the resulting function s(t) are found directly by subtracting the corresponding coefficients of the two time functions at the same sampling instants. From Fig. 2.6, we can write the resultant function s(t) as follows
Fig. 2.7 Hybrid function expansion of f(t) = 1 − exp(−t) and g(t) = exp(−t) and the result of their subtraction f(t) − g(t) = $\bar{s}(t)$ in HF domain, with m = 8 and T = 1 s. Due to high degree of accuracy, the piecewise linear curves look like continuous curves

$$\bar{s}(t) = f(t) - g(t)$$
$$= [\, (f_0 - g_0) \;\; (f_1 - g_1) \;\; \cdots \;\; (f_i - g_i) \;\; \cdots \;\; (f_{m-1} - g_{m-1}) \,] \mathbf{S}_{(m)}$$
$$+ [\, (f_0' - g_0') \;\; (f_1' - g_1') \;\; \cdots \;\; (f_i' - g_i') \;\; \cdots \;\; (f_{m-1}' - g_{m-1}') \,] \mathbf{T}_{(m)}$$

$$\bar{s}(t) \triangleq \mathbf{S}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{S}_T^{\mathrm{T}} \mathbf{T}_{(m)} \tag{2.28}$$

Thus, the result of Eq. (2.28) is the same as expression (2.27).
Considering two functions f(t) = 1 − exp(−t) and g(t) = exp(−t), the result of subtraction of these two functions in HF domain is shown in Fig. 2.7.

2.6.3 Multiplication of Two Functions

We consider multiplication of two functions r(t) and g(t) in HF domain.
Let r(t) × g(t) = m(t), where m(t) is the result of multiplication.
Let the samples of the individual time functions r(t) and g(t) be $r_0, r_1, r_2, \ldots, r_i, \ldots, r_m$ and $g_0, g_1, g_2, \ldots, g_i, \ldots, g_m$ respectively at the sampling instants $0, h, 2h, \ldots, ih, \ldots, mh$. Now the time functions are expanded in HF domain, as shown in Fig. 2.8.
(a) The functions r(t) and g(t) are expanded separately in HF domain and then multiplied
For the functions r(t) and g(t), we expand them via hybrid function series and then multiply. The resulting output function m(t) can be expressed as
Fig. 2.8 Two time functions r(t) and g(t) are expanded in HF domain

Let m(t) = r(t) × g(t), and

$$r(t) \approx \bar{r}(t) = [\, r_0 \;\; r_1 \;\; r_2 \;\; \cdots \;\; r_i \;\; \cdots \;\; r_{m-1} \,] \mathbf{S}_{(m)} + [\, r_0' \;\; r_1' \;\; r_2' \;\; \cdots \;\; r_i' \;\; \cdots \;\; r_{m-1}' \,] \mathbf{T}_{(m)}$$

Also,

$$g(t) \approx \bar{g}(t) = [\, g_0 \;\; g_1 \;\; g_2 \;\; \cdots \;\; g_i \;\; \cdots \;\; g_{m-1} \,] \mathbf{S}_{(m)} + [\, g_0' \;\; g_1' \;\; g_2' \;\; \cdots \;\; g_i' \;\; \cdots \;\; g_{m-1}' \,] \mathbf{T}_{(m)}$$

Then

$$\bar{r}(t) \times \bar{g}(t) = \left[ \sum_{i=0}^{m-1} r_i S_i + \sum_{i=0}^{m-1} r_i' T_i \right] \times \left[ \sum_{i=0}^{m-1} g_i S_i + \sum_{i=0}^{m-1} g_i' T_i \right]$$
$$\triangleq \left[ \mathbf{R}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{R}_T^{\mathrm{T}} \mathbf{T}_{(m)} \right] \times \left[ \mathbf{G}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{G}_T^{\mathrm{T}} \mathbf{T}_{(m)} \right] \tag{2.29}$$

where

$$\mathbf{R}_S^{\mathrm{T}} \triangleq [\, r_0 \;\; r_1 \;\; r_2 \;\; \cdots \;\; r_i \;\; \cdots \;\; r_{m-1} \,]$$
$$\mathbf{R}_T^{\mathrm{T}} \triangleq [\, r_0' \;\; r_1' \;\; r_2' \;\; \cdots \;\; r_i' \;\; \cdots \;\; r_{m-1}' \,]$$
$$\mathbf{G}_S^{\mathrm{T}} \triangleq [\, g_0 \;\; g_1 \;\; g_2 \;\; \cdots \;\; g_i \;\; \cdots \;\; g_{m-1} \,]$$
$$\text{and} \quad \mathbf{G}_T^{\mathrm{T}} \triangleq [\, g_0' \;\; g_1' \;\; g_2' \;\; \cdots \;\; g_i' \;\; \cdots \;\; g_{m-1}' \,]$$
In Eq. (2.29), there are three types of products involved, related to


sample-and-hold functions and triangular functions. Results of these three
2.6 Elementary Operational Rules 41

Fig. 2.9 Multiplication of the first members of two SHF sets

types of products of the basis component functions are studied in the


following:
(i) (Sample-and-hold function) × (Sample-and-hold function)
Two sample-and-hold functions of an SHF set are mutually disjoint.
Thus, the product rule is

0 where i 6¼ j
Si Sj ¼
Si where i ¼ j

This also holds for multiplication of two sample-and-hold functions


belonging to two different but equivalent SHF sets. That is, both the sets
having the same h and T.
For example, the product of the first members of two such SHF sets in
HF domain is shown in Fig. 2.9.
(ii) (Triangular function) × (Sample-and-hold function)
If a triangular function of a TF set is multiplied with a sample-and-hold
function of an SHF set, both the sets having the same h and T, and
matched along the time scale, then the product rule is

0 where i 6¼ j
Ti Sj ¼
Ti where i ¼ j

For example, the product of the first members of two such TF and SHF
sets, in HF domain, is shown in Fig. 2.10.

Fig. 2.10 Multiplication of the first member of TF set and the first member of SHF set
(iii) (Triangular function) × (Triangular function)
Two triangular functions of a TF set are mutually disjoint. Thus, the product rule is

$$T_i T_j = \begin{cases} 0 & \text{where } i \neq j \\ T_i & \text{where } i = j \end{cases}$$

This also holds for multiplication of two triangular functions belonging to two different TF sets, both having the same h and T, and matched along the time scale.
The product of the first members of two such TF sets, in HF domain, is shown in Fig. 2.11. The result of multiplication of the first components of two equivalent triangular function sets is converted to HF domain. The sample-and-hold function component of the product being zero, as seen from Fig. 2.11, the multiplication result is represented by only a triangular function component.
Following the above rules, the result of multiplication of the two functions r(t) and g(t) is expressed in HF domain for m = 4 and T = 1 s as follows

$$\bar{m}(t) \triangleq [\, r_0 g_0 \;\; r_1 g_1 \;\; r_2 g_2 \;\; r_3 g_3 \,] \mathbf{S}_{(m)} + [\, (r_1 g_1 - r_0 g_0) \;\; (r_2 g_2 - r_1 g_1) \;\; (r_3 g_3 - r_2 g_2) \;\; (r_4 g_4 - r_3 g_3) \,] \mathbf{T}_{(m)}$$

The generalized result of multiplication of two time functions r(t) and g(t) is expressed in HF domain as follows:

$$\bar{m}(t) \triangleq [\, r_0 g_0 \;\; r_1 g_1 \;\; \cdots \;\; r_i g_i \;\; \cdots \;\; r_{m-1} g_{m-1} \,] \mathbf{S}_{(m)}$$
$$+ [\, (r_1 g_1 - r_0 g_0) \;\; (r_2 g_2 - r_1 g_1) \;\; \cdots \;\; (r_i g_i - r_{i-1} g_{i-1}) \;\; \cdots \;\; (r_m g_m - r_{m-1} g_{m-1}) \,] \mathbf{T}_{(m)} \tag{2.30}$$

Now,

$$\bar{m}(t) \triangleq \mathbf{R}_S^{\mathrm{T}} \mathbf{D}_{G1} \, \mathbf{S}_{(m)} + \left[ \mathbf{R'}_S^{\mathrm{T}} \mathbf{D}_{G2} - \mathbf{R}_S^{\mathrm{T}} \mathbf{D}_{G1} \right] \mathbf{T}_{(m)} \triangleq \mathbf{M}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{M}_T^{\mathrm{T}} \mathbf{T}_{(m)} \tag{2.31}$$

Fig. 2.11 Multiplication of the first members of two TF sets. The actual product and its HF domain equivalent are also shown
where $\mathbf{M}_S^{\mathrm{T}} \triangleq \mathbf{R}_S^{\mathrm{T}} \mathbf{D}_{G1}$ and $\mathbf{M}_T^{\mathrm{T}} \triangleq \left[ \mathbf{R'}_S^{\mathrm{T}} \mathbf{D}_{G2} - \mathbf{R}_S^{\mathrm{T}} \mathbf{D}_{G1} \right]$,

$$\mathbf{R}_S^{\mathrm{T}} \triangleq [\, r_0 \;\; r_1 \;\; r_2 \;\; \cdots \;\; r_i \;\; \cdots \;\; r_{m-1} \,]$$
$$\mathbf{R'}_S^{\mathrm{T}} \triangleq [\, r_1 \;\; r_2 \;\; r_3 \;\; \cdots \;\; r_{i+1} \;\; \cdots \;\; r_m \,]$$
$$\mathbf{M}_S^{\mathrm{T}} \triangleq [\, m_0 \;\; m_1 \;\; m_2 \;\; \cdots \;\; m_i \;\; \cdots \;\; m_{m-1} \,]$$
$$\mathbf{M}_T^{\mathrm{T}} \triangleq [\, (m_1 - m_0) \;\; (m_2 - m_1) \;\; (m_3 - m_2) \;\; \cdots \;\; (m_i - m_{i-1}) \;\; \cdots \;\; (m_m - m_{m-1}) \,]$$

and $\mathbf{D}_{G1}$ denotes a diagonal matrix whose entries are the m elements of the vector G1, i.e. $g_0, g_1, g_2, \ldots, g_{m-1}$, and $\mathbf{D}_{G2}$ denotes a diagonal matrix whose entries are the m elements of the vector G2, i.e. $g_1, g_2, g_3, \ldots, g_m$. That is,

$$\mathbf{D}_{G1} = \mathrm{diag}(g_0, g_1, g_2, \ldots, g_{m-1}) \quad \text{and} \quad \mathbf{D}_{G2} = \mathrm{diag}(g_1, g_2, g_3, \ldots, g_m)$$

(b) The functions r(t) and g(t) are first multiplied and then the resulting function m(t) is expanded via HF domain
In this case, the resulting continuous function m(t) = r(t) × g(t) is expressed in HF domain as

$$m(t) = r(t) \times g(t) \approx \bar{r}(t) \times \bar{g}(t)$$
$$= [\, m_0 \;\; m_1 \;\; m_2 \;\; \cdots \;\; m_i \;\; \cdots \;\; m_{m-1} \,] \mathbf{S}_{(m)} + [\, (m_1 - m_0) \;\; (m_2 - m_1) \;\; \cdots \;\; (m_i - m_{i-1}) \;\; \cdots \;\; (m_m - m_{m-1}) \,] \mathbf{T}_{(m)}$$

where $m_0, m_1, m_2, \ldots, m_i, \ldots, m_m$ are the samples of $\bar{m}(t)$ at time instants $0, h, 2h, \ldots, ih, \ldots, mh$.
At the sampling instants, the function values, i.e., the samples, will be multiplied as before. Hence,

$$\bar{m}(t) = [\, r_0 g_0 \;\; r_1 g_1 \;\; r_2 g_2 \;\; \cdots \;\; r_i g_i \;\; \cdots \;\; r_{m-1} g_{m-1} \,] \mathbf{S}_{(m)}$$
$$+ [\, (r_1 g_1 - r_0 g_0) \;\; (r_2 g_2 - r_1 g_1) \;\; \cdots \;\; (r_i g_i - r_{i-1} g_{i-1}) \;\; \cdots \;\; (r_m g_m - r_{m-1} g_{m-1}) \,] \mathbf{T}_{(m)}$$

That is

$$\bar{m}(t) \triangleq \mathbf{M}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{M}_T^{\mathrm{T}} \mathbf{T}_{(m)} \tag{2.32}$$

From Eqs. (2.31) and (2.32), it is seen that both results of the multiplication are identical.
Considering two functions r(t) = 1 − exp(−t) and g(t) = exp(−t), the result of multiplication of these two functions, using their individual coefficients, is shown in Fig. 2.12.
Fig. 2.12 Hybrid function expansion of r(t) = 1 − exp(−t) and g(t) = exp(−t) and the result of their multiplication $\bar{m}(t)$ in HF domain with m = 8 and T = 1 s. Due to high degree of accuracy, the piecewise linear curves look like continuous curves
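The sample-wise product rule of Eqs. (2.30) and (2.31) can be checked with the following illustrative MATLAB sketch (assumed data; not an Appendix B program).

```matlab
% Illustrative sketch: HF domain multiplication via sample-wise products (Eq. 2.30).
% Assumed data: r(t) = 1 - exp(-t), g(t) = exp(-t), T = 1 s, m = 8.
m = 8; T = 1; h = T/m; t = (0:m)*h;
r = 1 - exp(-t);  g = exp(-t);
mm = r.*g;                        % samples of m(t) = r(t)*g(t): m0 ... mm
MS = mm(1:m);                     % SHF coefficients of m(t)
MT = diff(mm);                    % TF coefficients of m(t)
% The same coefficients via Eq. (2.31), using the diagonal matrices DG1 and DG2
DG1 = diag(g(1:m));  DG2 = diag(g(2:m+1));
MS_eq = r(1:m)*DG1;               % R_S^T * DG1
MT_eq = r(2:m+1)*DG2 - r(1:m)*DG1;
disp(max(abs(MS - MS_eq)));       % 0: both routes agree
disp(max(abs(MT - MT_eq)));       % 0: both routes agree
```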

2.6.4 Division of Two Functions

We consider division of two nonzero time functions r(t) and y(t) in HF domain.
Let $d(t) = \frac{y(t)}{r(t)}$, where d(t) is the resulting continuous function after division. The HF domain expanded form of d(t) may be called $\bar{d}(t)$. Thus, we have

$$y(t) = r(t) \times d(t) \tag{2.33}$$

The samples of the functions r(t) and y(t) with sampling period h are $r_0, r_1, r_2, \ldots, r_i, \ldots, r_m$ and $y_0, y_1, y_2, \ldots, y_i, \ldots, y_m$ respectively. After division, let the resulting function d(t) have the samples $d_0, d_1, d_2, \ldots, d_i, \ldots, d_m$ with the same sampling period h.
(a) The functions r(t) and y(t) are expanded in HF domain and then the division operation is executed
Using Eq. (2.30), Eq. (2.33) can be written as follows

$$y(t) = r(t) \times d(t)$$
$$\approx [\, r_0 d_0 \;\; r_1 d_1 \;\; r_2 d_2 \;\; \cdots \;\; r_i d_i \;\; \cdots \;\; r_{m-1} d_{m-1} \,] \mathbf{S}_{(m)}$$
$$+ [\, (r_1 d_1 - r_0 d_0) \;\; (r_2 d_2 - r_1 d_1) \;\; \cdots \;\; (r_i d_i - r_{i-1} d_{i-1}) \;\; \cdots \;\; (r_m d_m - r_{m-1} d_{m-1}) \,] \mathbf{T}_{(m)} \tag{2.34}$$

Again, the time function y(t) is expressed in HF domain as


$$y(t) \approx [\, y_0 \;\; y_1 \;\; y_2 \;\; \cdots \;\; y_i \;\; \cdots \;\; y_{m-1} \,] \mathbf{S}_{(m)} + [\, (y_1 - y_0) \;\; (y_2 - y_1) \;\; \cdots \;\; (y_i - y_{i-1}) \;\; \cdots \;\; (y_m - y_{m-1}) \,] \mathbf{T}_{(m)} \tag{2.35}$$

Comparing the coefficients of Eqs. (2.34) and (2.35), we have $y_0 = r_0 d_0$, $y_1 = r_1 d_1$, …, $y_i = r_i d_i$, … and so on.
Thus, $d_0 = \frac{y_0}{r_0}$, $d_1 = \frac{y_1}{r_1}$, …, $d_i = \frac{y_i}{r_i}$, … and so on.
Now, the above results can be expressed as follows

$$[\, d_0 \;\; d_1 \;\; d_2 \;\; \cdots \;\; d_i \;\; \cdots \;\; d_{m-1} \,] \mathbf{S}_{(m)} + [\, (d_1 - d_0) \;\; (d_2 - d_1) \;\; \cdots \;\; (d_i - d_{i-1}) \;\; \cdots \;\; (d_m - d_{m-1}) \,] \mathbf{T}_{(m)}$$
$$= \left[\, \frac{y_0}{r_0} \;\; \frac{y_1}{r_1} \;\; \cdots \;\; \frac{y_i}{r_i} \;\; \cdots \;\; \frac{y_{m-1}}{r_{m-1}} \,\right] \mathbf{S}_{(m)} + \left[\, \left(\frac{y_1}{r_1} - \frac{y_0}{r_0}\right) \;\; \left(\frac{y_2}{r_2} - \frac{y_1}{r_1}\right) \;\; \cdots \;\; \left(\frac{y_i}{r_i} - \frac{y_{i-1}}{r_{i-1}}\right) \;\; \cdots \;\; \left(\frac{y_m}{r_m} - \frac{y_{m-1}}{r_{m-1}}\right) \,\right] \mathbf{T}_{(m)}$$

Now, the resulting function $\bar{d}(t)$ of the division is expressed as

$$\bar{d}(t) = \sum_{i=0}^{m-1} \left( \frac{y_i}{r_i} \right) S_i + \sum_{i=0}^{m-1} \left( \frac{y_{i+1}}{r_{i+1}} - \frac{y_i}{r_i} \right) T_i$$
$$\text{or} \quad \bar{d}(t) \triangleq \mathbf{Y}_S^{\mathrm{T}} \mathbf{D}_{R1}^{-1} \, \mathbf{S}_{(m)} + \left[ \mathbf{Y'}_S^{\mathrm{T}} \mathbf{D}_{R2}^{-1} - \mathbf{Y}_S^{\mathrm{T}} \mathbf{D}_{R1}^{-1} \right] \mathbf{T}_{(m)} \triangleq \mathbf{D}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{D}_T^{\mathrm{T}} \mathbf{T}_{(m)} \tag{2.36}$$

where

$$\mathbf{Y}_S^{\mathrm{T}} \triangleq [\, y_0 \;\; y_1 \;\; y_2 \;\; \cdots \;\; y_i \;\; \cdots \;\; y_{m-1} \,]$$
$$\mathbf{Y'}_S^{\mathrm{T}} \triangleq [\, y_1 \;\; y_2 \;\; y_3 \;\; \cdots \;\; y_{i+1} \;\; \cdots \;\; y_m \,]$$
$$\mathbf{D}_S^{\mathrm{T}} = \mathbf{Y}_S^{\mathrm{T}} \mathbf{D}_{R1}^{-1} \triangleq [\, d_0 \;\; d_1 \;\; d_2 \;\; \cdots \;\; d_i \;\; \cdots \;\; d_{m-1} \,]$$
$$\mathbf{D}_T^{\mathrm{T}} = \left[ \mathbf{Y'}_S^{\mathrm{T}} \mathbf{D}_{R2}^{-1} - \mathbf{Y}_S^{\mathrm{T}} \mathbf{D}_{R1}^{-1} \right] \triangleq [\, (d_1 - d_0) \;\; (d_2 - d_1) \;\; \cdots \;\; (d_i - d_{i-1}) \;\; \cdots \;\; (d_m - d_{m-1}) \,]$$

and $\mathbf{D}_{R1}$ denotes a diagonal matrix whose entries are the m elements of the vector R1, i.e. $r_0, r_1, r_2, \ldots, r_{m-1}$, and $\mathbf{D}_{R2}$ denotes another diagonal matrix whose entries are the m elements of the vector R2, i.e. $r_1, r_2, r_3, \ldots, r_m$. That is

$$\mathbf{D}_{R1} = \mathrm{diag}(r_0, r_1, r_2, \ldots, r_{m-1}) \quad \text{and} \quad \mathbf{D}_{R2} = \mathrm{diag}(r_1, r_2, r_3, \ldots, r_m)$$

$$\mathbf{D}_{R1}^{-1} = \mathrm{diag}\!\left( \frac{1}{r_0}, \frac{1}{r_1}, \frac{1}{r_2}, \ldots, \frac{1}{r_{m-1}} \right) \quad \text{and} \quad \mathbf{D}_{R2}^{-1} = \mathrm{diag}\!\left( \frac{1}{r_1}, \frac{1}{r_2}, \frac{1}{r_3}, \ldots, \frac{1}{r_m} \right)$$

(b) The function y(t) is first divided by r(t) and then the resulting function d(t) is expanded in HF domain
In this case, the samples of the resulting continuous function $d(t) = \frac{y(t)}{r(t)}$ will be the results of division of the corresponding samples of the functions y(t) and r(t). That is,

$$d_i = \frac{y_i}{r_i}$$

Hence,

$$d(t) = \frac{y(t)}{r(t)} \approx \bar{d}(t) = \frac{\bar{y}(t)}{\bar{r}(t)}$$
$$= [\, d_0 \;\; d_1 \;\; d_2 \;\; \cdots \;\; d_i \;\; \cdots \;\; d_{m-1} \,] \mathbf{S}_{(m)} + [\, (d_1 - d_0) \;\; (d_2 - d_1) \;\; \cdots \;\; (d_i - d_{i-1}) \;\; \cdots \;\; (d_m - d_{m-1}) \,] \mathbf{T}_{(m)}$$

where $\bar{d}(t)$ is the HF domain representation of d(t), and $d_0, d_1, d_2, \ldots, d_i, \ldots, d_m$ are the samples of $\bar{d}(t)$ at time instants $0, h, 2h, \ldots, ih, \ldots, mh$.
So, at the sampling instants, the sample values of d(t) will be obtained by division as discussed before. Hence,

$$\bar{d}(t) = \left[\, \frac{y_0}{r_0} \;\; \frac{y_1}{r_1} \;\; \frac{y_2}{r_2} \;\; \cdots \;\; \frac{y_i}{r_i} \;\; \cdots \;\; \frac{y_{m-1}}{r_{m-1}} \,\right] \mathbf{S}_{(m)} + \left[\, \left(\frac{y_1}{r_1} - \frac{y_0}{r_0}\right) \;\; \left(\frac{y_2}{r_2} - \frac{y_1}{r_1}\right) \;\; \cdots \;\; \left(\frac{y_i}{r_i} - \frac{y_{i-1}}{r_{i-1}}\right) \;\; \cdots \;\; \left(\frac{y_m}{r_m} - \frac{y_{m-1}}{r_{m-1}}\right) \,\right] \mathbf{T}_{(m)}$$

That is

$$\bar{d}(t) \triangleq \mathbf{D}_S^{\mathrm{T}} \mathbf{S}_{(m)} + \mathbf{D}_T^{\mathrm{T}} \mathbf{T}_{(m)} \tag{2.37}$$

From Eqs. (2.36) and (2.37), it is seen that both results of the division are identical.
Considering two functions y(t) = 1 − exp(−t) and r(t) = exp(−t), the result of their division $\bar{d}(t) = \frac{y(t)}{r(t)} = \exp(t) - 1$, using their individual coefficients (i.e., samples), is shown in Fig. 2.13.
Fig. 2.13 Hybrid function expansion of r(t) = exp(−t) and y(t) = 1 − exp(−t) and the result of their division $\bar{d}(t)$ in HF domain with m = 8 and T = 1 s. Due to high degree of accuracy, the piecewise linear curves look like continuous curves (vide Appendix B, Program no. 2)
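Similarly, the sample-wise division of Eqs. (2.36) and (2.37) may be sketched as follows (illustrative only, with assumed data; not the Appendix B program cited in the caption).

```matlab
% Illustrative sketch: HF domain division via sample-wise quotients (Eq. 2.36).
% Assumed data: y(t) = 1 - exp(-t), r(t) = exp(-t), T = 1 s, m = 8.
m = 8; T = 1; h = T/m; t = (0:m)*h;
y = 1 - exp(-t);  r = exp(-t);     % r(t) must be nonzero over the interval
d = y./r;                          % samples of d(t) = y(t)/r(t) = exp(t) - 1
DS = d(1:m);                       % SHF coefficients of d(t)
DT = diff(d);                      % TF coefficients of d(t)
% Compare with the exact samples of exp(t) - 1
disp(max(abs(d - (exp(t) - 1))));  % 0 up to round-off
```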

2.7 Qualitative Comparison of BPF, SHF, TF and HF

The basic properties of BPF, SHF, TF and HF are tabulated in Table 2.1 to provide a qualitative appraisal.

2.8 Conclusion

A new set of orthogonal hybrid functions (HF) has been proposed for function approximation and its subsequent application to control system analysis and identification. The set of hybrid functions is formed using the set of sample-and-hold functions and the set of triangular functions.
The hybrid function set works with function samples, and this makes it more convenient to use. That is, the expansion coefficients of the SHF and TF components of the HF set are simply the samples of the function to be approximated, indicating a non-optimal approach. Thus, unlike traditional orthogonal function sets, the HF set does not use the well known integration formula for coefficient computation. This presents a faster algorithm, makes the mathematics less involved, and also reduces the computation time.
The comparison of the basic qualitative properties of the hybrid function set with different related orthogonal functions has been presented in Table 2.1.
In the following chapters, it will be shown that the hybrid function set is not only suitable for function approximation, but it can efficiently integrate time functions as well. Furthermore, it is a strong tool for various applications in the area of control theory.

Table 2.1 Qualitative comparison of BPF, SHF, TF and HF

| Property | BPF | SHF | TF | HF |
|---|---|---|---|---|
| Piecewise constant | Yes | Yes | No (piecewise linear) | No (piecewise linear) |
| Orthogonal | Yes | Yes | Yes | Yes |
| Finite | Yes | Yes | Yes | Yes |
| Disjoint | Yes | Yes | Yes | Yes |
| Orthonormal | Can easily be normalized | Can easily be normalized | Can easily be normalized | Can easily be normalized |
| Implementation | Easily implementable | Easily implementable | Implementation is relatively complex | Easily implementable |
| Coefficient determination of f(t) | Involves integration of f(t) and scaling | Needs only samples of f(t) | Needs only samples of f(t) | Needs only samples of f(t) |
| Accuracy of analysis | Staircase solution having more error than TF | Staircase solution having less error than BPF for sample-and-hold systems | Piecewise linear solution having less error than BPF | Provides a two-part (SHF and TF) piecewise linear solution having less error than BPF as well as SHF. A staircase solution in SHF mode is available as a 'by-product' by setting the TF part of the solution equal to zero. This gives an edge to HF analysis over TF analysis |

References

1. Sansone, G.: Orthogonal functions. Interscience, New York (1959)


2. Deb, A., Sarkar, G., Bhattacharjee, M., Sen, S.K.: A new set of piecewise constant orthogonal
functions for the analysis of linear SISO systems with sample-and-hold. J. Franklin Inst. 335B
(2), 333–358 (1998)
3. Deb, A., Sarkar, G., Sengupta, A.: Triangular orthogonal functions for the analysis of continuous
time systems. Anthem Press, London (2011)
4. Deb, A., Dasgupta, A., Sarkar, G.: A complementary pair of orthogonal triangular function sets
and its application to the analysis of dynamic systems. J. Franklin Inst. 343(1), 1–26 (2006)
5. Rao, G.P.: Piecewise constant orthogonal functions and their applications in systems and
control, LNC1S, vol. 55. Springer, Berlin (1983)
6. Jiang, J.H., Schaufelberger, W.: Block pulse functions and their application in control system,
LNCIS, vol. 179. Springer, Berlin (1992)
7. Biswas, A.: Analysis and synthesis of continuous control systems using a set of orthogonal
hybrid functions. Doctoral dissertation, University of Calcutta (2015)
Chapter 3
Function Approximation via Hybrid
Functions

Abstract In this chapter, square integrable time functions of Lebesgue measure are approximated via hybrid functions and such approximations are compared with similar approximations using BPF and Legendre polynomials. For handling discontinuous functions, a modified method of approximation is suggested in hybrid function domain. This modified approach, named HFm approach, seems to be more accurate than the conventional HF domain technique, termed as HFc approach. The mean integral square errors (MISE) for both the approximations are computed and compared. Finally, error estimates for the SHF domain approximation and TF domain approximation are derived. The chapter contains many tables and graphs along with six illustrative examples.

In this chapter, similar to block pulse function [1, 2] domain approximation, we use
the complementary hybrid function (HF) set, a combination of the sample-and-hold
function (SHF) set [3] and the triangular function (TF) set [4–6], for function
approximation. Earlier, we presented the principle for the proposed hybrid function
domain expansion where the expansion coefficients were the sample values of the
function to be approximated. The hybrid function set may now be utilized for
approximating square integrable functions in a piecewise linear manner.
The HF set obeys the conditions of orthogonality because it approximates
functions using the linear combination of two orthogonal function sets, namely SHF
set and TF set. For each of the orthogonal function sets, the members of the set
satisfy the criteria of completeness, and hence, this complementary orthogonal set is
a complete orthogonal function set.

3.1 Function Approximation via Block Pulse Functions (BPF)

A square integrable time function f(t) of Lebesgue measure [7] may be expanded
into an m-term BPF series in t ∈ [0, T) as


$$f(t) \approx \sum_{i=0}^{m-1} f_i\,\psi_i(t) = \left[\, f_0 \;\; f_1 \;\; f_2 \;\cdots\; f_i \;\cdots\; f_{m-1} \,\right] \Psi_{(m)}(t) \;\triangleq\; \mathbf{F}_{(m)}^{T}\,\Psi_{(m)}(t) \qquad (3.1)$$

where [ ⋯ ]^T denotes transpose and f_0, f_1, f_2, …, f_i, …, f_{(m−1)} are the coefficients of
the block pulse function expansion. The (i + 1)th BPF coefficient f_i is given by

$$f_i = \frac{1}{h}\int_{ih}^{(i+1)h} f(t)\,\psi_i(t)\,dt \qquad (3.2)$$

where h = T/m s.
The coefficients f_i's are determined in such a way that the integral square error
(ISE) [7] is minimized.
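To make Eq. (3.2) concrete, a minimal MATLAB sketch is given below. It is not one of the Appendix B programs; it simply evaluates the defining integral numerically with the built-in function integral, and the choice of f(t), m and T is illustrative.

```matlab
% Sketch: BPF expansion coefficients of f(t) over [0, T) as per Eq. (3.2).
f = @(t) sin(pi*t);          % function to be approximated (illustrative choice)
m = 8;  T = 1;  h = T/m;     % number of sub-intervals and sub-interval width
fi = zeros(1, m);
for i = 0:m-1
    % (i+1)th coefficient: average of f(t) over the (i+1)th sub-interval
    fi(i+1) = (1/h) * integral(f, i*h, (i+1)*h);
end
disp(fi)   % for f = sin(pi*t), m = 8, T = 1 s this reproduces Eq. (3.4)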

3.1.1 Numerical Examples

At first we determine the coefficients of a time function f(t) in BPF domain using
Eq. (3.2). We consider the following two examples:
Example 3.1 Let us expand the function f1(t) = t in block pulse function domain
taking m = 8 and T = 1 s. Following the method mentioned above, the result is

$$f_1(t) \approx \left[\, 0.06250000 \;\; 0.18750000 \;\; 0.31250000 \;\; 0.43750000 \;\; 0.56250000 \;\; 0.68750000 \;\; 0.81250000 \;\; 0.93750000 \,\right] \Psi_{(8)}(t) \qquad (3.3)$$
Figure 3.1 shows the original function along with its BPF approximation.
Example 3.2 Now we take up the function f2(t) = sin(πt) and express it in block
pulse function domain for m = 8 and T = 1 s. The result is

$$f_2(t) \approx \left[\, 0.19383918 \;\; 0.55200729 \;\; 0.82613728 \;\; 0.97449537 \;\; 0.97449537 \;\; 0.82613728 \;\; 0.55200729 \;\; 0.19383918 \,\right] \Psi_{(8)}(t) \qquad (3.4)$$

Figure 3.2 shows the original function along with its BPF domain
approximation.

Fig. 3.1 Exact curve for f1(t) = t and its block pulse function approximation for m = 8 and T = 1 s

Fig. 3.2 Exact curve for f2(t) = sin(πt) and its block pulse function approximation for m = 8 and T = 1 s (vide Appendix B, Program no. 3)

3.2 Function Approximation via Hybrid Functions (HF) [8, 9]

Consider a function f(t) in an interval t ∈ [0, T). If we consider (m + 1) equidistant
samples f_0, f_1, f_2, …, f_i, …, f_m of the function with a sampling period h (i.e.,
T = mh), f(t) can be expressed as per Eq. (2.12) as discussed in Sect. 2.4.

 
$$\begin{aligned}
f(t) \approx{}& \left[\, f_0 \;\; f_1 \;\; f_2 \;\cdots\; f_i \;\cdots\; f_{(m-1)} \,\right] \mathbf{S}_{(m)} \\
&+ \left[\, (f_1 - f_0) \;\; (f_2 - f_1) \;\; (f_3 - f_2) \;\cdots\; (f_i - f_{i-1}) \;\cdots\; (f_m - f_{(m-1)}) \,\right] \mathbf{T}_{(m)} \\
={}& \sum_{i=0}^{m-1} f_i\, S_i + \sum_{i=0}^{m-1} (f_{i+1} - f_i)\, T_i
\;\triangleq\; \mathbf{F}_S^{T}\, \mathbf{S}_{(m)} + \mathbf{F}_T^{T}\, \mathbf{T}_{(m)}
\end{aligned} \qquad (3.5)$$

3.3 Algorithm of Function Approximation via HF

The algorithm of function approximation via hybrid function domain is explained


below in Fig. 3.3.
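A minimal MATLAB sketch of these steps (take samples, form the SHF coefficients, form the TF coefficients as successive sample differences, reconstruct) is given below. It is only an illustration of the idea of Fig. 3.3 and Eq. (3.5), not the Appendix B program, and the test function is an assumption.

```matlab
% Sketch of the algorithm of Fig. 3.3: HF domain approximation from samples.
f = @(t) sin(pi*t);              % function to be approximated (illustrative)
m = 8;  T = 1;  h = T/m;         % number of sub-intervals and sampling period
tk = (0:m)*h;                    % (m+1) equidistant sample points
fk = f(tk);                      % samples f0, f1, ..., fm
cS = fk(1:m);                    % SHF coefficients: fi
cT = diff(fk);                   % TF coefficients: gi = f(i+1) - fi
% piecewise linear reconstruction on a fine grid
t  = linspace(0, T - 1e-9, 1000);
i  = floor(t/h);                 % sub-interval index of each t
fhat = cS(i+1) + cT(i+1).*(t - i*h)/h;
plot(t, f(t), t, fhat, '--'), legend('exact', 'HF approximation')
```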

3.3.1 Numerical Examples

A few functions are now approximated in hybrid function domain using Eq. (3.5).
We consider the following two examples:
Example 3.3 Let us expand the function f1(t) = t in hybrid function domain taking
m = 8 and T = 1 s. Following the method presented above, the result is

$$\begin{aligned}
f_1(t) \approx{}& \left[\, 0.00000000 \;\; 0.12500000 \;\; 0.25000000 \;\; 0.37500000 \;\; 0.50000000 \;\; 0.62500000 \;\; 0.75000000 \;\; 0.87500000 \,\right] \mathbf{S}_{(8)} \\
&+ \left[\, 0.12500000 \;\; 0.12500000 \;\; 0.12500000 \;\; 0.12500000 \;\; 0.12500000 \;\; 0.12500000 \;\; 0.12500000 \;\; 0.12500000 \,\right] \mathbf{T}_{(8)}
\end{aligned} \qquad (3.6)$$

Figure 3.4 shows the plot of the function f1(t) = t and its hybrid function
approximation, as per Eqs. (3.5) and (3.6). It is observed that the exact curve
entirely matches its HF domain approximation, as is expected for a ramp
function.
Hence, it is apparent that hybrid function domain representation can, in general,
give an exact representation of a piecewise linear function.
Example 3.4 Now we take up the function f2(t) = sin(πt) and express it via hybrid
functions for m = 8 and T = 1 s. The result is presented below:

$$\begin{aligned}
f_2(t) \approx{}& \left[\, 0.00000000 \;\; 0.38268343 \;\; 0.70710678 \;\; 0.92387953 \;\; 1.00000000 \;\; 0.92387953 \;\; 0.70710678 \;\; 0.38268343 \,\right] \mathbf{S}_{(8)} \\
&+ \left[\, 0.38268343 \;\; 0.32442335 \;\; 0.21677275 \;\; 0.07612047 \;\; -0.07612047 \;\; -0.21677275 \;\; -0.32442335 \;\; -0.38268343 \,\right] \mathbf{T}_{(8)}
\end{aligned} \qquad (3.7)$$

Fig. 3.3 The algorithm of function approximation via hybrid function domain. The flowchart reads: Start; determine the coefficients fi's of the sample-and-hold functions using the samples of the function f(t); determine the coefficients gi's of the triangular functions using the relation gi = (fi+1 − fi); reconstruct the function f(t) using the SHF coefficients fi's; improve the reconstruction further using the triangular function coefficients gi's; Stop.

Fig. 3.4 Exact curve for f1(t) = t and its hybrid function approximation for m = 8 and T = 1 s. It is seen that the curves overlap

Fig. 3.5 Exact curve for f2(t) = sin(πt) and its hybrid function approximation for m = 8 and T = 1 s (vide Appendix B, Program no. 4)

Figure 3.5 shows the plot of the function f2(t) = sin(πt) and its hybrid function
approximation. It is observed that the curve approximated by hybrid functions is
much closer to the exact curve compared to the BPF approximation of Fig. 3.2. This
result may further be improved by increasing the number of samples. The closeness
of the results to the exact curves, and the pictorial presentation of the original
functions along with their HF domain equivalents, show the usefulness of the HF
domain description.

3.4 Comparison Between BPF and HF Domain Approximations

Figures 3.6 and 3.7 show the comparison of block pulse function and hybrid function
based approximations for the functions f1(t) and f2(t). It is obvious from the figures
that HF domain approximations are much better than BPF based approximations.
Quantitative estimates of the MISE's of these two approximations are presented in
Table 3.1. If we define an index Δ as the ratio of the MISE of the BPF domain
approximation to that of the HF domain approximation, we see that, in case of the
ramp function, the BPF domain approximation is no match for the HF domain
approximation. And for the time function f2(t), the MISE of the BPF domain
representation is about 25 times that of the HF domain. That is, for the sine wave, we
have

$$\Delta = \frac{\mathrm{MISE}_{BPF}}{\mathrm{MISE}_{HF}} = 25.43420823 \qquad (3.8)$$
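The figures entering Δ can be estimated numerically. The sketch below does so on a dense grid, assuming the MISE is the integral square error averaged over [0, T); it is a rough illustration and will only approximate the tabulated values of Table 3.1 (which were computed for m = 10 and T = 2 s).

```matlab
% Sketch: numerical estimate of MISE for BPF and HF approximations of f(t).
f = @(t) sin(pi*t);  m = 10;  T = 2;  h = T/m;
t = linspace(0, T - 1e-9, 20000);  i = floor(t/h);
fk = f((0:m)*h);                               % samples for the HF expansion
% BPF reconstruction (coefficients = interval averages, Eq. (3.2))
cB = arrayfun(@(k) (1/h)*integral(f, k*h, (k+1)*h), 0:m-1);
fB = cB(i+1);
% HF reconstruction (piecewise linear through the samples, Eq. (3.5))
fH = fk(i+1) + (fk(i+2) - fk(i+1)).*(t - i*h)/h;
MISE_BPF = trapz(t, (f(t) - fB).^2)/T;         % normalisation by T is an assumption
MISE_HF  = trapz(t, (f(t) - fH).^2)/T;
Delta = MISE_BPF/MISE_HF
```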

Fig. 3.6 Approximated curves for f1(t) = t in BPF domain and hybrid function domain for m = 8 and T = 1 s along with the exact curve. The HF domain approximation overlaps with the exact curve (vide Appendix B, Program no. 5)

Fig. 3.7 Approximated curves for f2(t) = sin(πt) in BPF domain and hybrid function domain for m = 8 and T = 1 s along with the exact curve

Table 3.1 Comparison of MISE's of function approximation via BPF and HF for two time functions, for m = 10 and T = 2 s

Function | MISE (BPF) | MISE (HF) | Δ = MISE_BPF/MISE_HF
f1(t) = t | 0.00333333 | 0.00000000 | -
f2(t) = sin(πt) | 0.01623440 | 6.382897e−04 | 25.43420823

3.5 Approximation of Discontinuous Functions

In some cases, the value of a function changes rapidly, i.e., almost instantly, from
one value to some other higher or lower value. This is termed as a ‘jump discon-
tinuity’. For such a discontinuity, both the left and right limits exist at the jump
point, but obviously they are not equal. An attempt to analyse systems with such
input discontinuities (say) in any orthogonal function domain framework, produces
a large error at the function approximation stage. This error is propagated
throughout the rest of the analysis.
For approximating a discontinuous function g(t) having jumps at a finite number
of points, i.e., a0, a1, …, ai, …, am−1 over a semi-open interval [0, T), we can select
the sampling interval h randomly and employ the hybrid function approximation
method to come up with a piecewise linear reconstruction of the function g(t) (let
us call it ḡ(t)). Obviously, the resulting function ḡ(t) will be a piecewise linear
function with its equidistant break points (i.e., sample points) evenly distributed
over [0, T). Due to random or non-judicious selection of h, it may so happen that the
sampling instants and jump points may or may not coincide on the time scale.
Rather, coincidence of the sampling instants and the jump points, if any, will
entirely depend on chance. However, judicious selection of the sampling period
h may lead to the best approximation of a time function as far as jumps are
concerned.
Figure 3.8 shows a piecewise continuous function g(t) in [0, T) with three jump
points (say) at t1, t2 and t3 respectively. So, the four time intervals are:
(i) from the origin to the first jump point = t1 = a (say)
(ii) from the first jump point to the second jump point = t2 − t1 = b
(iii) from the second jump point to the third jump point = t3 − t2 = c
(iv) from the third jump point to the end of the interval = T − t3 = d

Fig. 3.8 A piecewise continuous function g(t) having three jumps at points t1, t2 and t3 over a period [0, T) seconds

The reconstruction of such a function in HF domain may be achieved in the


following ways:
Case I For reconstructing this function via hybrid functions, we can take samples
of g(t) very closely with a small sampling period in the regions a, b, c and d,
excluding the jump points. If these samples are now joined by straight lines to form
a piecewise linear reconstruction, the function will be almost truthfully represented
in HF domain in all the four regions. However, such reconstruction will involve a
huge volume of data to represent the jump situations with reasonable accuracy.
Case II We can choose the sampling period h in such a fashion that the sample
points always coincide with all the jump points. To make this happen, h should be
the GCD of the four time intervals a, b, c and d, where a + b + c + d = T. Then, each
of the intervals a, b, c and d will be divisible by h and the sampling points over [0,
T) will coincide with the three jump points. This is shown in Fig. 3.9. Under such
circumstances, the HF reconstruction will not be able to truthfully indicate the jump
points of the function g(t). That is, the hybrid function domain approximation will
represent the piecewise continuous function g(t) as shown in Fig. 3.9 in magenta.
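As a sketch of this GCD based choice of h (under the assumption that the jump instants are known and rational), the following MATLAB fragment rationalises the interval lengths and takes their common divisor. The jump instants used are purely illustrative, not taken from the book.

```matlab
% Sketch: pick h as the GCD of the interval lengths so that every jump point
% falls exactly on a sample point (Case II).
tj  = [0.3 0.9 1.5];  T = 2.1;        % assumed jump instants t1, t2, t3 and total interval
seg = diff([0 tj T]);                 % interval lengths a, b, c, d
[num, den] = rat(seg, 1e-9);          % each length as a rational num/den
L = den(1);
for k = 2:numel(den), L = lcm(L, den(k)); end
ints = num .* (L ./ den);             % lengths scaled to integers
g = ints(1);
for k = 2:numel(ints), g = gcd(g, ints(k)); end
h = g / L                             % here h = 0.3, giving m = T/h = 7 sub-intervals
```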
Case III If we make the sampling period even smaller, say k, where k = h/n, n being an integer,
this will again make the sampling points coincide with the jump points and
the HF domain reconstruction will show all the jump situations with reasonable
accuracy, as shown in Fig. 3.10.
Case IV However, if h is not chosen judiciously, then the samples will not
coincide with the jump points and HF domain reconstruction will be de-shaped
compared to the original function and in the reconstructed function the jumps would
be unrecognizable. This is shown in Fig. 3.11, where, the lines CD, EF and FG
represent the function in the regions of discontinuities.
In the following, we discuss a modified HF domain approach for approximating
such functions with jump discontinuities.

Fig. 3.9 The function g(t) is sampled with a moderate sampling period h. It is noticed that the jumps are represented fairly accurately (shown in magenta) in the hybrid function reconstruction of the function

Fig. 3.10 The function g(t) is sampled very closely with a small sampling period in the regions a, b, c and d, excluding the jump points. These samples are joined by straight lines to form a piecewise linear, almost truthful reconstruction

Fig. 3.11 The function g(t) is sampled with a sampling period h chosen at random. It is noticed
that the jumps are represented erroneously (shown in magenta) in the hybrid function
reconstruction of the function. In the reconstructed function, no jumps are noticeable

3.5.1 Modified HF Domain Approach for Approximating Functions with Jump Discontinuities

We know that with conventional HF domain (HFc, say) approximations, we end up


with attractive piecewise linear reconstructions. But when the functions involve
jump discontinuities, the approximations are not so attractive anymore, because a
large amount of approximation error is introduced in the interval containing the
jump, as is obvious from Fig. 3.9. This mars the quality of approximation, and this
error is transmitted through the whole of the remaining analysis to infect the final
results with unacceptable error.
To overcome such situations involving ‘jump functions’ (that is, functions with
jump discontinuities), a modified approach in HF domain (HFm domain, say) may
be proposed. This approach is superior to the conventional HFc approach.
For example, to approximate a function as shown in Fig. 3.12, having a jump
discontinuity at t = td, the HFm approach produces much better result than the HFc

Fig. 3.12 A function f3(t) with jump discontinuity at t = td

approach. For most of the functions, this is so and will be illustrated later in
Sect. 3.5.2. A ‘better’ approximation is always judged by the quantitative factor
mean integral square error (MISE) [2], mentioned in Sect. 1.1.
To illustrate the superiority of the HFm approach, we consider a simple delayed
unit step function having a delay of kh seconds, shown in Fig. 3.13a. Using the HFc
approach, the HF domain approximation of u(t − kh) is shown in Fig. 3.13b,
whereas, Fig. 3.13c illustrates the approximation of u(t − kh) using the modified
HFm approach. In this approach, we have dropped the triangular function compo-
nent in the kth sub-interval (that is from (k − 1)h to kh) by making the TF coefficient
zero. This makes the approximation one hundred per cent accurate.
Comparing the figures, we see that while the conventional HF domain
approximation produces a large error (the shaded triangular zone), the modified
HFm approach produces zero error.
The two approximations can be represented mathematically, as under.
Using HFc approach, the approximation is given by

Fig. 3.13 a Delayed unit step function with a step change at t = kh with b its approximation in HF domain using the conventional approach and c a modified HF domain approach to tackle the step change, leading to a better approximation

$$u(t - kh) \approx \Big[\, \underbrace{0 \,\cdots\, 0}_{k\ \text{zeros}} \;\; \underbrace{1 \,\cdots\, 1}_{\text{all ones}} \,\Big] \mathbf{S}_{(m)} + \Big[\, \underbrace{0 \,\cdots\, 0}_{(k-1)\ \text{zeros}} \;\; \underbrace{1}_{k\text{th term}} \;\; \underbrace{0 \,\cdots\, 0}_{\text{all zeros}} \,\Big] \mathbf{T}_{(m)} \qquad (3.9)$$
This is shown in Fig. 3.13b.
If we approximate the same function using the HFm approach, the result is

$$u(t - kh) = \Big[\, \underbrace{0 \,\cdots\, 0}_{k\ \text{zeros}} \;\; \underbrace{1 \,\cdots\, 1}_{\text{all ones}} \,\Big] \mathbf{S}_{(m)} + \Big[\, \underbrace{0 \;\; 0 \,\cdots\, 0}_{\text{all zeros}} \,\Big] \mathbf{T}_{(m)} \qquad (3.10)$$

as illustrated in Fig. 3.13c.
Now consider the following function

$$f_4(t) = (c_0 + kt)\,u(t) + u(t - 3h)$$

where, c0 is the DC bias and k is the slope of the ramp function. The function is
depicted in Fig. 3.14a. It is noted that the function has a jump of one unit at
t = 3 h (say).
This function may be approximated via HFc approach for m ¼ 6 and T ¼ 1 s as
shown in Fig. 3.14b.
Mathematically, we can write

$$\begin{aligned}
f_4(t) \approx{}& \left[\, c_0 \;\; c_1 \;\; c_2 \;\; c_3 \;\; c_4 \;\; c_5 \,\right] \mathbf{S}_{(6)} \\
&+ \left[\, (c_1 - c_0) \;\; (c_2 - c_1) \;\; (c_3 - c_2) \;\; (c_4 - c_3) \;\; (c_5 - c_4) \;\; (c_6 - c_5) \,\right] \mathbf{T}_{(6)} \\
\triangleq{}& \mathbf{C}_S^{T}\, \mathbf{S}_{(6)} + \mathbf{C}_T^{T}\, \mathbf{T}_{(6)}
\end{aligned} \qquad (3.11)$$

Fig. 3.14 a A typical time function f4(t) having a jump at t = 3h, b its approximation using the HFc based approach and c the HFm approach to approximate the jump discontinuity, for m = 6 and T = 1 s

Now, using the new approach HFm, we have

$$\begin{aligned}
f_4(t) \approx{}& \left[\, c_0 \;\; c_1 \;\; c_2 \;\; c_3 \;\; c_4 \;\; c_5 \,\right] \mathbf{S}_{(6)} \\
&+ \left[\, (c_1 - c_0) \;\; (c_2 - c_1) \;\; 0 \;\; (c_4 - c_3) \;\; (c_5 - c_4) \;\; (c_6 - c_5) \,\right] \mathbf{T}_{(6)} \\
\triangleq{}& \mathbf{C}_S^{T}\, \mathbf{S}_{(6)} + \mathbf{C}_T^{T}\, \mathbf{J3}_{(6)}\, \mathbf{T}_{(6)}
\end{aligned} \qquad (3.12)$$

where J3_(6) is a diagonal matrix, very much like the unit matrix I, but having a zero as the
third element of the diagonal. This special matrix is introduced to handle the
jump discontinuity mathematically and it is defined as

$$\mathbf{J3}_{(6)} \triangleq \mathrm{diag}\left[\, 1 \;\; 1 \;\; 0 \;\; 1 \;\; 1 \;\; 1 \,\right]_{(6\times 6)}$$

the zero occupying the third position of the diagonal.

According to Eq. (3.12), the approximated function f4(t) is shown in Fig. 3.14c.
For any time function f(t), if the jump occurs at t = kh, then the J matrix becomes

$$\mathbf{J}k_{(m)} = \mathrm{diag}\Big[\, 1 \;\cdots\; 1 \;\; \underbrace{0}_{k\text{th term}} \;\; 1 \;\cdots\; 1 \,\Big]_{(m\times m)} \qquad (3.13)$$

If the function involved has more than one jump, obviously, the matrix J will
have more than one zero in its diagonal, at the locations discussed above. If the
function has no jump at all, the J matrix is simply replaced by the I matrix.
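For illustration, a small MATLAB sketch of the jump-handling matrix of Eq. (3.13) is given below. The sample values are an assumed instance of the f4(t) pattern above (DC bias 0.2, slope such that kh = 0.2, unit jump at t = 3h); the matrix simply zeroes the TF coefficient of the sub-interval that ends at the jump.

```matlab
% Sketch: the jump-handling matrix Jk(m) of Eq. (3.13) and the HFm
% modification of the TF coefficient vector.
m = 6;  k = 3;                          % jump at the end of the 3rd sub-interval
Jk = eye(m);  Jk(k, k) = 0;             % identity with a zero at the kth diagonal place
c  = [0.2 0.4 0.6 1.8 2.0 2.2 2.4];     % assumed samples c0 ... c6 (unit jump at t = 3h)
cS = c(1:m);                            % SHF coefficients
cT_HFc = diff(c);                       % conventional TF coefficients, as in Eq. (3.11)
cT_HFm = cT_HFc * Jk;                   % modified TF coefficients, as in Eq. (3.12)
```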
Consider a function with a downward jump at t = kh, shown in Fig. 3.15a.
Figure 3.15b illustrates the approximation of the function using the HFc approach.
Here, the area of the shaded portion is some sort of indicator of the approximation
error, and it is apparent that with respect to MISE, this approximation will incur
more error compared to the approximation based upon the HFm approach, shown in
Fig. 3.15c.

Fig. 3.15 a A typical time function having a downward jump at kh, with its approximation using
b the HFc approach and c the HFm approach, for a typical value of k = 3

3.5.2 Numerical Examples

Example 3.5 Consider the function f4(t) = (0.2 + t)u(t) + u(t − 1) having a jump
discontinuity at t = 1 s.
It may seem that the HFm approach always provides a better approximation than
the HFc approach. But it is not always so.
Using Eqs. (3.11) and (3.12), approximations of this function via the HFc and the
HFm approaches are compared graphically in Fig. 3.16a for m = 10 and in Fig. 3.16b
for m = 20.
However, for linear or piecewise linear functions with jumps, we can improve the
reconstruction further. To implement such improvement we need to use both the
HFc and HFm approaches combined together. It should be kept in mind that a
piecewise linear function with jumps is necessarily a combination of ramp and step
functions. And a jump in such a function is always contributed by a delayed step
function. The function f4(t) of Fig. 3.16 is one such example. For a flawless
reconstruction, the jump points or break points (where ramp functions of different
slopes meet) have to coincide with the partition line of two adjacent
sub-intervals, each of duration h.
Now let us again consider the function f4(t) of Fig. 3.16, which is

$$f_4(t) = (0.2 + t)\,u(t) + u(t - 1) \qquad (3.14)$$

It is apparent that the function is a combination of step and ramp functions. We
now express the component functions without delay by the HFc approach and the
component function with delay via the HFm approach. It is obvious that the
reconstruction in such a fashion will be exact in nature. That is, for m = 10 and T = 2 s,
we can express the component functions of f4(t) in HF domain as

Fig. 3.16 Graphical comparison of HF domain approximation with the exact function, of function f4(t) of Example 3.5 using the HFc and the HFm approach, for a m = 10, T = 2 s and b m = 20, T = 2 s (vide Appendix B, Program no. 6)

$$0.2\,u(t) = \left[\, 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \,\right] \mathbf{S}_{(10)} + \left[\, 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \,\right] \mathbf{T}_{(10)} \qquad (3.15)$$

$$t\,u(t) = \left[\, 0 \;\; 0.2 \;\; 0.4 \;\; 0.6 \;\; 0.8 \;\; 1.0 \;\; 1.2 \;\; 1.4 \;\; 1.6 \;\; 1.8 \,\right] \mathbf{S}_{(10)} + \left[\, 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \,\right] \mathbf{T}_{(10)} \qquad (3.16)$$

$$u(t - 1) \approx \left[\, 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 1.0 \;\; 1.0 \;\; 1.0 \;\; 1.0 \;\; 1.0 \,\right] \mathbf{S}_{(10)} + \left[\, 0 \;\; 0 \;\; 0 \;\; 0 \;\; 1.0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \,\right] \mathbf{J5}_{(10)}\, \mathbf{T}_{(10)} \qquad (3.17)$$

For Eqs. (3.15) and (3.16), we have used the HFc approach, while for Eq. (3.17),
we have employed the HFm approach.
Combining the above three equations, we can write

$$\begin{aligned}
f_4(t) ={}& \left[\, 0.2 \;\; 0.4 \;\; 0.6 \;\; 0.8 \;\; 1.0 \;\; 2.2 \;\; 2.4 \;\; 2.6 \;\; 2.8 \;\; 3.0 \,\right] \mathbf{S}_{(10)} \\
&+ \left[\, 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \;\; 0.2 \,\right] \mathbf{T}_{(10)} \\
&+ \left[\, 0 \;\; 0 \;\; 0 \;\; 0 \;\; 1.0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \,\right] \mathbf{J5}_{(10)}\, \mathbf{T}_{(10)}
\end{aligned} \qquad (3.18)$$

Thus, the function f4(t) has been represented through HF domain in an exact
manner. This representation has been shown in Fig. 3.17 along with the exact
function.
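As a check, the combined reconstruction of Eq. (3.18) can be evaluated directly. The short MATLAB sketch below forms the coefficient vectors, applies the J5(10) correction, and plots the result against the exact f4(t); the plotting grid and the use of t ≥ 1 for the delayed step are implementation choices, not taken from the book.

```matlab
% Sketch: exact HF domain reconstruction of f4(t) = (0.2 + t)u(t) + u(t-1)
% via the combined HFc/HFm coefficients of Eq. (3.18), with m = 10, T = 2 s.
m = 10;  T = 2;  h = T/m;
cS = [0.2 0.4 0.6 0.8 1.0 2.2 2.4 2.6 2.8 3.0];   % SHF part
cT = 0.2*ones(1, m);                              % TF part contributed by the ramp
cJ = [0 0 0 0 1.0 0 0 0 0 0];                     % TF part contributed by u(t-1)
J5 = eye(m);  J5(5, 5) = 0;                       % zero the 5th TF entry (the jump)
g  = cT + cJ*J5;                                  % net TF coefficients
t  = linspace(0, T - 1e-9, 2000);  i = floor(t/h);
f4hat = cS(i+1) + g(i+1).*(t - i*h)/h;            % piecewise linear reconstruction
f4 = (0.2 + t) + (t >= 1);                        % exact function
plot(t, f4, t, f4hat, '--')                       % the two curves coincide
```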
Table 3.2 compares the MISE’s of the function via the HFc, the HFm and the
combined approaches for different values of m. It is noted that the MISE is much
less for the HFm approach for all values of m, but for the combined approach, the
MISE is zero. Figure 3.18 depicts the comparison of MISE’s for different values of
m using all the three approaches.
Example 3.6 Consider the function of Fig. 3.12, having a jump discontinuity at
td = 1 s (say). Calling this function f3(t), we write

$$f_3(t) = \begin{cases} t^2 & \text{for } t \le t_d \\ 2 - (t - t_d)^2 & \text{for } t > t_d \end{cases} \qquad (3.19)$$

Consider Fig. 3.19a. If we try to compare the amounts of error in approximating
the ramp function, it is noted that the areas of the triangles ΔABD and ΔABC are
rather head to head contenders. If CD = H and CA = xH, then the areas become
equal only when x = ½, that is, when CA = AD. At this point the errors are exactly equal

Fig. 3.17 Graphical comparison of HF domain approximation with the exact function, of function f4(t) of Example 3.5 using the combined HFc and HFm approach for m = 10 and T = 2 s

Table 3.2 Comparison of MISE’s for HF domain approximation of the function f4(t) of Example
3.5 using the HFc, the HFm and the combined approaches for different values m for T = 2 s (vide
Appendix B, Program no. 7)
Number of MISE using MISE using MISE using combined HFc
sub-intervals used HFc approach HFm approach and HFm approaches
(m)
4 0.16666667 0.04166667 0.00000000
6 0.11111111 0.01234568 0.00000000
8 0.08333333 0.00520833 0.00000000
10 0.06666667 0.00266667 0.00000000
12 0.05555556 0.00154321 0.00000000
14 0.04761905 0.00097182 0.00000000
16 0.04166667 0.00065104 0.00000000
18 0.03703704 0.00045725 0.00000000
20 0.03333333 0.00033333 0.00000000
24 0.02777778 0.00019290 0.00000000
30 0.02222222 0.00009877 0.00000000

Fig. 3.18 Comparison of MISE's for different values of m for approximation of the function of Example 3.5, using the HFc approach, the HFm approach and the combined approach (vide Table 3.2 and Appendix B, Program no. 7)

and it is obvious that if CA > AD (x > 0.5) then the error of the HFm approach based
approximation is larger than that of the HFc based approximation.
But since, in practical applications, we rarely have ramp type functions
which are very steep, for most of the cases HFm wins the competition.
Table 3.3 presents the MISE's of the function f3(t) of Example 3.6 via the HFc,
HFm and the combined approaches, for two different values of m (m = 12 and 24). It
is noted that the MISE for the combined approach is significantly less than
that of the other two approaches.

Fig. 3.19 Comparison of the exact function f3(t) of Example 3.6 with its HF domain
approximations using the HFc and the HFm approaches, for a m = 12, b m = 24, and
c m = 240, for T = 2.4 s. It is to be noted that for m = 240, both the approximations become
indistinguishable from the exact function

Figure 3.19a, b show the HF domain approximation of f3(t) with different


number of segments m and T = 2.4 s using the HFc approach (along the dashed line
BD) and HFm approach (along the line BCAD).
In Fig. 3.19, with increase in m, both the areas of the triangles ΔABC and ΔBDA
reduce and approach zero in the limit. When m is increased to 240, the ΔABC
reduces almost to a point and the MISE is reduced almost to zero. This is shown in
Fig. 3.19c for m = 240 and T = 2.4 s.

Table 3.3 Comparison of MISE’s for HF domain approximation of the function f3(t) of Example
3.6 using the HFc, HFm and the combined approaches for different values m (m = 12 and 24) and
T = 2.4 s
Number of MISE using HFc MISE using HFm MISE using combined
sub-intervals approach approach HFc and HFm
used (m) approaches
12 2838.66666667e−05 345.33333333e−05 5.33333333e−05
24 13961.66666667e−06 491.52777778e−06 3.33333333e−06

3.6 Function Approximation: HF Versus Other Methods

The essence of function approximation by other functions, e.g., polynomial functions,
orthogonal polynomials, orthogonal functions etc., is to satisfy the need of
efficient computing while keeping the mean integral square error (MISE) within tolerable
limits. The general form to represent a square integrable function f(t) via a set of
orthogonal functions is given in Eq. (1.3). Such approximation via the HF method may
now be compared with a few other methods.
We consider block pulse functions [1] and Legendre polynomials [10, 11] to
approximate a function of the form f(t) = exp(t − 1) in t ∈ [0, 2) and compare it
with its HF equivalent.
To approximate f(t) via Legendre polynomials [10, 11], we write

$$f(t) \approx \sum_{i=0}^{n-1} c_i\, P_i$$

where P_i is the (i + 1)th Legendre polynomial and c_0, c_1, …, c_i are the respective
coefficients given by

$$c_i = \frac{2i + 1}{2} \int_{-1}^{1} f(t)\, P_i(t)\, dt, \qquad i = 0, 1, 2, 3, \ldots$$

For the example treated in the following, we consider up to the sixth degree
Legendre polynomial. The polynomials are

$$P_0(t) = 1, \quad P_1(t) = t, \quad P_2(t) = \frac{3t^2 - 1}{2}, \quad P_3(t) = \frac{5t^3 - 3t}{2}, \quad P_4(t) = \frac{35t^4 - 30t^2 + 3}{8},$$
$$P_5(t) = \frac{63t^5 - 70t^3 + 15t}{8}, \quad P_6(t) = \frac{231t^6 - 315t^4 + 105t^2 - 5}{16},$$

obtained from the well known recurrence formula for Legendre polynomials.
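For reference, a minimal MATLAB sketch of this computation is given below. It builds P0(t) to P6(t) from the recurrence and evaluates the coefficient integrals numerically with trapz. The shift x = t − 1 (so that g(x) = exp(x) corresponds to f(t) = exp(t − 1) on [0, 2)) is an assumption about how the interval is mapped onto [−1, 1]; the book's Appendix B program may proceed differently.

```matlab
% Sketch: Legendre series coefficients via the recurrence
% (i)P_i(x) = [(2i-1)x P_{i-1}(x) - (i-1)P_{i-2}(x)] / i, and
% c_i = (2i+1)/2 * integral_{-1}^{1} g(x) P_i(x) dx.
x = linspace(-1, 1, 4001);
g = exp(x);                     % f(t) = exp(t-1) with the assumed shift x = t - 1
n = 7;                          % use P0 ... P6
P = zeros(n, numel(x));
P(1,:) = 1;  P(2,:) = x;
for i = 2:n-1                   % build P_i of degree i from the recurrence
    P(i+1,:) = ((2*i - 1)*x.*P(i,:) - (i - 1)*P(i-1,:)) / i;
end
c = zeros(1, n);
for i = 0:n-1
    c(i+1) = (2*i + 1)/2 * trapz(x, g.*P(i+1,:));
end
ghat = c*P;                     % sixth-degree Legendre approximation of g(x)
```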
Figure 3.20 depicts the comparison of the function f(t) with its approximation
obtained via sixth degree Legendre polynomial. Figure 3.21 compares f(t) with its

Fig. 3.20 Comparison of the function f(t) = exp(t − 1) with its Legendre polynomial approximated version using up to sixth degree polynomial with T = 2 s (vide Appendix B, Program no. 8)

Fig. 3.21 Comparison of the function f(t) = exp(t − 1) with its HF approximated version using m = 10 and T = 2 s

HF domain approximation for m = 10. Finally, Fig. 3.22 shows the comparison of
f(t) with its BPF domain approximation for m = 10.
However, for function approximation via BPF or HF, the sampling theorem has
to be satisfied while selecting the width of sub-interval, h. This takes care of the
accuracy of approximation. And also, to approximate oscillatory functions—that is,
high frequency signals—we need a smaller h to reach useful result, which in effect,
is in compliance with the sampling theorem.
For function approximation via Legendre polynomials, the oscillatory property
is inherently present in the polynomial set which helps in successful approximation
of high frequency signals. Also, the Legendre approach and least squares approach
usually come up with the same result.
In case of HF based approximation, we work with m samples, and the number of
operations for function approximation is only m subtractions.
In case of Legendre polynomial based approximation, we need to evaluate as many
as n integrals (for n Legendre polynomials) numerically and also we

Fig. 3.22 Comparison of the function f(t) = exp(t − 1) with its BPF approximated version using m = 10 and T = 2 s

need many addition and multiplication operations. Thus, it is expected that its
computational burden is much more compared to the HF method.
In case of block pulse based approximation, to evaluate each expansion coefficient
we need to perform one numerical integration applying Simpson's 3/8th rule
or the like. Such integrations obviously add to the computational burden in terms of
memory as well as execution time.
Also, hardware implementation of Legendre polynomials is hardly possible, and
for generating BPF’s, we need to take care of the ‘orthogonality error’ via hardware
which is complex. Compared to these hassles, HF is much simpler to generate
because we do not need to construct the sample-and-hold functions or the triangular
functions. Instead, we take samples of the concerned function and produce its
piecewise linear version.
Tables 3.4 and 3.5, along with Figs. 3.23 and 3.24, illustrate the essence of
efficiency of approximation via three different methods.
From Table 3.4, Legendre polynomial based approximation with sixth degree
polynomial fit incurs an error 3.88664563e−12, while for about the same MISE, the
HF domain approximation needs m = 500. In Table 3.5, for BPF approximation
method with m = 128, the MISE is 3.68934312e−005. This MISE is obtained for
m = 9 in HF domain.
Data for Tables 3.4 and 3.5 are calculated via MATLAB 7.9 [12] and we have
presented the result up to the eighth place of decimal. It is observed that with
respect to MISE, fifth order polynomial fit of Legendre is somewhat equivalent to
HF approximation with m = 134.
For BPF and HF approximation, such equivalence is obtained with m = 158
(BPF) and m = 10 (HF) for T = 2 s.
In Table 3.4, elapsed times are computed for evaluation of the expression
$f(t) \approx \sum_{i=0}^{n-1} c_i P_i$ with Legendre polynomials for different values of n from 1 to 7, along with the
respective MISE's. It is observed that for computations with the index n = 6 and 7,
elapsed times are much less compared to HF with m = 134 and 500 respectively.

Table 3.4 Comparison of MISE’s for Legendre polynomial based approximation and HF based approximation of the function f(t) = exp(t − 1) with different
degrees of Legendre polynomials and different number of HF component functions over a 2 s interval, along with respective elapsed times for computation
(vide Appendix B, Program nos. 9 and 10)
S. Legendre polynomial based function approximation Hybrid function based function approximation MISELegendre
D¼ MISEHF
no Highest degree of MISELegendre Elapsed Number of SHF and TF MISEHF Elapsed
polynomial used time (s) components used (m) time (s)
1 P0(t) 0.43233236 2.141362 1 0.16336887 2.178037 2.64635704
2 P1(t) 0.02632651 2.810796 2 0.01353700 2.435132 1.94478171
3 P2(t) 7.20286766e−04 3.737012 5 3.79858533e−04 3.915994 1.89619741
4 P3(t) 1.11444352e−05 4.569169 13 8.44276211e−06 7.959310 1.31999872
5 P4(t) 1.10681987e−07 6.050759 39 1.04483986e−07 21.19464 1.05932010
6 P5(t) 7.64745535e−10 7.553432 134 7.49910190e−10 70.21305 1.01978283
7 P6(t) 3.88664563e−12 8.861730 500 3.86864398e−12 274.6432 1.00465322
Table 3.5 Comparison of MISE’s for BPF based approximation and HF based approximation of the function f(t) = exp(t − 1) for different values of m with
T = 2 s along with respective elapsed times for computation
S. BPF based function approximation Hybrid function based function approximation MISE
Dm ¼ MISEBPFðm1 Þ
no Number of BPF MISEBPF(m1) Elapsed Number of SHF and TF MISEHF(m2) Elapsed HFðm2 Þ

component functions time (s) component functions used time (s)


used (m = m1, say) (m = m2, say)
1 39 3.97316113e−04 30.28443 5 3.79858533e−04 4.700064 1.04595811
2 57 1.86027184e−04 39.01264 6 1.84208601e−04 4.787076 1.00987241
3 77 1.01945683e−04 54.25152 7 9.97660910e−05 5.379379 1.02184702
3.6 Function Approximation: HF Versus Other Methods

4 101 5.92542920e−05 69.52123 8 5.86090963e−05 5.772918 1.01100845


5 128 3.68934312e−05 88.73480 9 3.66443586e−05 6.405623 1.00679702
6 158 2.42135494e−05 110.8396 10 2.40682301e−05 6.770899 1.00603780
71

Fig. 3.23 Ratio of Legendre polynomial approximation MISE and HF approximation MISE for T = 2 s (refer to Table 3.4) for different values of m, where Pn(t) (n = 0, …, 5) denotes the highest degree of Legendre polynomial used. Due to inconvenience in choosing the scale, the last entry of Table 3.4 is excluded from the figure

Fig. 3.24 Ratio of BPF approximation MISE and HF approximation MISE (refer to Table 3.5) for different values of m for T = 2 s

This is because the evaluation of MISE for HF based approximation takes up much
more time, owing to m squaring operations and subsequent numerical
integrations.
In fact, for any value of n, i.e., n = 1 to 5, this is true and the elapsed times are
less compared to that of the HF based procedure. Had we been able to represent the HF
domain piecewise linear curve by a single analytical expression suitable for
MATLAB, it is possible that the HF based computations would have taken much
less computational time.
Figure 3.25a, b show elapsed times for MISE computation for Legendre poly-
nomial and HF based approximations. Also, from these two figures, for the same

Fig. 3.25 MISE and elapsed time (vide Table 3.4) with respect to a number of Legendre
polynomials and b number of sub-intervals m for HF approximation

range of MISE, number of Legendre polynomials required for the approximation


may be compared with the number of sub-intervals (m) required for the HF based
approach.
Also, if we use only one sub-interval for HF domain approximation and one
polynomial for Legendre, the MISE for the HF based approach is noted to be about one-third of
that of the Legendre approximation. This proves the effectiveness of HF based
approximation even with the smallest number of sub-intervals.

Fig. 3.26 Comparison of the function f3(t) of Example 3.6 with its Legendre polynomial approximated version using up to sixth degree polynomial with T = 2 s (−1 s ≤ t ≤ 1 s)

However, for approximation of functions with jump discontinuities, the
Legendre polynomial approach is not the best suited orthogonal polynomial set
compared to hybrid function based approximation. This is because the Legendre
polynomial set contains only two linear members, namely the constant u(t) and the unit
ramp function. All the other remaining polynomials being of curvy
nature, it is a contention that approximation of 'jump functions' by Legendre
polynomials will be met with difficulty.
That this is so is proved by Fig. 3.26, where the approximation of the function
of Example 3.6 is attempted with seven Legendre polynomials, that is P0(t) to P6(t).
It is seen from Fig. 3.26 that such an attempt turns out to be a fiasco, whereas with HFc
and HFm, we have been able to approximate this function reasonably well,
vide Fig. 3.17.
In Fig. 3.27a, b, comparison has been made between BPF approximation and HF
approximation. It is noted that approximately the same range of MISE can be obtained
in HF based approximation using a much smaller number of sub-intervals. Also, the
elapsed times for MISE computation in the HF based approach are negligible
compared with those obtained in BPF based approximation.

3.7 Mean Integral Square Error (MISE) for HF Domain Approximations [8]

The representational error for equal width block pulse function expansion of any
square integrable function of Lebesgue measure has been investigated by Rao and
Srinivasan [13], while the error analysis for pulse-width modulated generalized
block pulse function (PWM-GBPF) expansion has been carried out by Deb et al.
[14].

Hybrid function approximation has two components: sample-and-hold function


component and triangular function component. To present an error analysis in HF
domain, it is obvious that the upper bound of representational error will be com-
prised of two parts: error in the component of sample-and-hold function based
approximation and in the component of triangular function based approximation.
Any time function can be represented by a Taylor series as an infinite sum of terms
that are calculated from the values of the function's derivatives at a single point. In
practice, a time function is usually approximated using a finite number of
terms of its Taylor series, which gives quantitative estimates of the error. Here the
time function f(t) is first expanded into Taylor series polynomials. The maximum error
that can be incurred in approximating a function within a time interval is termed the upper
bound of error.
As the sample-and-hold function gives a staircase approximation, two terms of the
Taylor series, i.e., two-degree Taylor polynomials, are sufficient to approximate a
function. Similarly, since the triangular function gives a linear approximation, three terms
of the Taylor series, or minimum three-degree Taylor polynomials, are used to
approximate a function.

3.7.1 Error Estimate for Sample-and-Hold Function Domain Approximation [2]

Let us consider m cells of equal width h spanning an interval [0, T) such that
T = mh. In the (i + 1)th interval, the representational error is

$$E_i(t) \triangleq \left| f(t) - f(ih) \right| \qquad (3.20)$$

The mean integral square error (MISE) is given by

$$[E_i]^2 \triangleq \int_{ih}^{(i+1)h} \left[ f(t) - f(ih) \right]^2 dt \qquad (3.21)$$

Now expanding f(t) in the ith interval around any point µi, by Taylor series, we
can write,

$$f(t) \approx f(\mu_i) + \dot{f}(\mu_i)(t - \mu_i) + \ddot{f}(\mu_i)(t - \mu_i)^2/2! + \cdots \qquad (3.22)$$

where $\mu_i \in [ih, (i+1)h]$.


Neglecting second and higher order derivatives of f(t) and substituting (3.22) in
(3.21), we have

$$[E_i]^2 \triangleq \int_{ih}^{(i+1)h} \left[ f(\mu_i) + \dot{f}(\mu_i)(t - \mu_i) - f(ih) \right]^2 dt$$

Now considering

$$\dot{f}_{max} = \max\left| \dot{f}(\mu_i) \right|, \qquad \delta_{max} = f(\mu_i) - f(ih) = \max\{\delta_i\}, \qquad \mu_{max} = \max\{\mu_i\}$$

Then the upper bound of MISE over the interval [0, T) is [2]

$$\sum_{i=0}^{m-1} E_i^2 = E^2 = mh\,\delta_{max}\left( \delta_{max} + mh\dot{f}_{max} - 2\mu_{max}\dot{f}_{max} \right) + mh\dot{f}_{max}^2\left[ \frac{m^2 h^2}{3} + \mu_{max}\left( \mu_{max} - mh \right) \right] \qquad (3.23)$$

3.7.2 Error Estimate for Triangular Function Domain Approximation [3–5]

Let us consider (m + 1) sample points of the function f(t), having a sampling period
h, denoted by f(ih), i = 0, 1, 2, …, m. Then the piecewise linear representation of the
function f(t) by triangular functions is obtained simply by joining these sample
points. The equation of one such straight line $\hat{f}(t)$, approximating f(t), in the (i + 1)th
interval is

$$\hat{f}(t) = m_i t + f(ih) - i m_i h \qquad (3.24)$$

where $m_i = \dfrac{f[(i+1)h] - f(ih)}{h}$.
Then integral square error (ISE) in the (i + 1)th interval is

$$[E_i]^2 \triangleq \int_{ih}^{(i+1)h} \left[ f(t) - \hat{f}(t) \right]^2 dt \qquad (3.25)$$

Let the function f(t) be expanded by Taylor series in the (i + 1)th interval around
the point li considering second order approximation. Then

$$f(t) \approx f(\mu_i) + \dot{f}(\mu_i)(t - \mu_i) + \ddot{f}(\mu_i)(t - \mu_i)^2/2! \qquad (3.26)$$

Using Eq. (3.26), Eq. (3.25) may be written as

$$\begin{aligned}
[E_i]^2 &= \int_{ih}^{(i+1)h} \Big[ \big\{ f(ih) - i m_i h - f(\mu_i) + \mu_i \dot{f}(\mu_i) \big\} + \big\{ m_i - \dot{f}(\mu_i) \big\}\, t - \ddot{f}(\mu_i)(t - \mu_i)^2/2! \Big]^2 dt \\
&= \int_{ih}^{(i+1)h} \left[ A + Bt + C(t - \mu_i)^2 \right]^2 dt
\end{aligned} \qquad (3.27)$$

where

$$A \triangleq f(ih) - i m_i h - f(\mu_i) + \mu_i \dot{f}(\mu_i), \qquad B \triangleq m_i - \dot{f}(\mu_i), \qquad C \triangleq -\ddot{f}(\mu_i)/2! \qquad (3.28)$$

Hence, Eq. (3.27) may be simplified to

$$\begin{aligned}
[E_i]^2 ={}& A^2 h + \frac{B^2 h^3}{3}\left(3i^2 + 3i + 1\right) + \frac{C^2 h^5}{5}\left(5i^4 + 10i^3 + 10i^2 + 5i + 1\right) - h^4 \mu_i C^2 \left(4i^3 + 6i^2 + 4i + 1\right) \\
&+ 2h^3 \mu_i^2 C^2 \left(3i^2 + 3i + 1\right) - 2h^2 \mu_i^3 C^2 \left(2i + 1\right) + h \mu_i^4 C^2 + ABh^2 \left(2i + 1\right) \\
&+ \frac{h^4 BC}{2}\left(4i^3 + 6i^2 + 4i + 1\right) - \frac{4h^3 BC \mu_i}{3}\left(3i^2 + 3i + 1\right) + h^2 BC \mu_i^2 \left(2i + 1\right) \\
&+ \frac{2h^3 CA}{3}\left(3i^2 + 3i + 1\right) - 2h^2 CA \mu_i \left(2i + 1\right) + 2hCA\mu_i^2
\end{aligned} \qquad (3.29)$$
Then the upper bound of ISE over m subintervals is given by

$$\begin{aligned}
\sum_{i=1}^{m-1} E_{i\,max}^2 = E^2 ={}& \ddot{f}_{max}^2\left[ \frac{m^5 h^5}{20} - \frac{m^4 h^4}{4}\mu_{max} + \frac{m^3 h^3}{2}\mu_{max}^2 - \frac{m^2 h^2}{2}\mu_{max}^3 + \frac{mh}{4}\mu_{max}^4 \right] \\
&+ \dot{f}_{max}^2\left[ \frac{(2m^3 - 3m^2 + m)h^3}{6} - (m^2 - m)h^2 \mu_{max} + mh\mu_{max}^2 \right] \\
&+ \ddot{f}_{max}\dot{f}_{max}\left[ \frac{(3m^4 - 2m^3 - m^2)h^4}{12} - \frac{(6m^3 - 3m^2 - m)h^3}{6}\mu_{max} + \frac{(3m^2 - m)h^2}{2}\mu_{max}^2 - mh\mu_{max}^3 \right]
\end{aligned} \qquad (3.30)$$

where

$$f_{max} \triangleq \max\{ f(ih),\, f[(i+1)h],\, f(\mu_i) \}, \qquad \dot{f}_{max} \triangleq \max\{ |\dot{f}(\mu_i)|,\, m_i \},$$
$$\ddot{f}_{max} \triangleq \max\left| \ddot{f}(\mu_i) \right| \quad \text{and} \quad \mu_{max} \triangleq \max\{ \mu_i \}$$

These maximum values are considered to be the largest over the entire period.
So, Eq. (3.30) gives the upper bound of ISE for the triangular function component.
For simplicity, assume that the function is approximated by a first-order Taylor
series expansion. In that case, $\ddot{f}(\mu_i) = 0$. Then, from Eq. (3.28), C = 0.
Equation (3.29) can now be simplified to

$$[E_i]^2 = A^2 h + \frac{B^2 h^3}{3}\left(3i^2 + 3i + 1\right) + ABh^2\left(2i + 1\right) \qquad (3.31)$$

Case I Let us assume that the function f(t) is a step function. Then

$$f(ih) = f(\mu_i), \quad m_i = 0 \quad \text{and} \quad \dot{f}(\mu_i) = 0.$$

This implies, from Eq. (3.29), A = 0 and B = 0.
Hence, from Eq. (3.31), the ISE in the ith interval is

$$[E_i]^2 = 0$$

Case II If f(t) is a ramp function, then $m_i = m_r$ (constant). Since $m_r = \dot{f}(\mu_i)$, we have
B = 0.
Consider $\mu_i = ih + x$, $0 \le x \le h$. Then

$$f(\mu_i) = f(ih + x) = f(ih) + m_r x$$

Hence, from Eq. (3.22), we get

$$A = f(ih) - m_r ih - f(ih) - m_r x + (ih + x)\,m_r = 0$$

Then, from Eq. (3.31), the ISE in the ith interval is zero.


Case III Let f(t) be a piecewise ramp function having different slope in different
sampling period. Then, considering the slope to be mi in the ith interval, we have
3.7 Mean Integral Square Error (MISE) for HF Domain Approximations 79

mi ¼ f_ ðli Þ and li ¼ ih þ x; 0  x  h

If follows from Eq. (3.28) that A = B = 0, implying zero ISE.


As the result of case II and case III are independent of x, we can conclude that
for a ramp function, whether continuous or piecewise, ISE is zero irrespective of the
magnitude of li , the focal point of Taylor series expansion.

3.8 Comparison of Mean Integral Square Error (MISE) for Function Approximation via HFc and HFm Approaches

For the function of Example 3.6, the time interval under consideration is
T = (1 + √2) s. We have considered an approximate interval of T = 2.4 s and have
approximated the function for different values of m (m = 12 to 240). But if we
intend to improve the approximation even more, let us consider T = 2.41 s and
m = 241, so that each sub-interval matches 0.01 s. It is found that the MISE
remains the same as before, i.e., as for m = 240. This indicates that the infinitesimal
change in MISE is not reflected in the approximation.
For different values of m, as mentioned above, the MISE’s for both the
approaches within a specified time zone, are computed and the modified approach
always comes up with less MISE. This has been studied deeply for no less than ten
different values of m and two curves are drawn for approximation of f3(t) using the
HFc approach and HFm approach, and as expected, the MISE for HFm approach is
always less than the HFc approach. Figure 3.28 represents this fact visually.
In Fig. 3.28, MISEc and MISEm are computed for twelve different values of
m. If we determine the ratio of the errors R_HFcm ≜ MISEc/MISEm for each value of m, we
can plot a curve of R_HFcm against m. Figure 3.29a shows the variation where, with
increasing m, the ratio R_HFcm increases parabolically. Study of Figs. 3.28 and 3.29a
apparently presents a paradox: while both the MISEs converge to zero with
increasing m, their ratio R_HFcm increases. This is because MISEm converges at a
much faster rate compared to its counterpart MISEc.
To explain this 'phenomenon' in more detail, we present Table 3.6 with different
MISEs for approximations via different orthogonal function sets.
Remembering the fact that approximation of a function by conventional hybrid
functions and by orthogonal triangular functions yields identical results, in Table 3.6
the MISEs for approximation via the modified HF based method, the conventional HF based
method (equivalent to triangular function based approximation) and block pulse
function domain approximation are tabulated for ten different values of m from 12
to 240.
Also, the ratios MISE_TF/MISE_m ≜ R_TFm = R_HFcm and MISE_BPF/MISE_m ≜ R_BPFm are defined.

Fig. 3.27 Elapsed time and MISE (vide Table 3.5) with respect to number of sub-intervals used in
a BPF based and b HF based approximation

Using the data of Table 3.6, we draw another curve, shown in Fig. 3.29b, to
study the variation of the ratio of MISEBPF and MISEm with different values of m. It
is observed that this ratio increases linearly with m.
Studying the two ratio curves, it is noted that for Fig. 3.29a, when m becomes
greater than 100, the curve becomes acutely steep. At the end, when m reaches 240,

Fig. 3.28 Comparison of MISEs for different values of m, using the HFc approach and the HFm
approach for approximating the function of Example 3.6, with T = 2.4 s

Fig. 3.29 Ratio of MISEs of a the HFc approach and the HFm approach, and b the BPF approach
and the HFm approach, for approximating the function of Example 3.6 for different values of m and
T = 2.4 s

the magnitude of RHFcm becomes 2525.3788, which is very high, tilting the case
significantly in favour of HF based modified approach.
In contrast to the curve of Fig. 3.29a, the curve of Fig. 3.29b is linear, as
indicated above. And at a value of m = 240, the magnitude of the ratio is only
13.1288. This apparently indicates, with increasing m, the rates of convergence to
zero MISE for both the approaches are comparable. But this does not weaken the
case for HFm approach.
This small ratio 13.1288 keeps one wondering about the established superiority
of HF based approximation over BPF based approximation in general. Though we

Table 3.6 Comparison of MISE’s for HF domain approximation of function f3(t) of Example 3.6 using the HFc and the HFm approaches, for different number
of segments m and T = 2.4 s
Number of MISE using HFm MISE using HFc or MISETF/MISEm = RTFm = RHFcm MISE using BPF MISEBPF/MISEm = RBPFm
sub-intervals approach TF approach approach
used, m (MISEm) (MISEc) (MISEBPF)
12 0.00828800 0.06812800 8.22007722 0.00689778 0.83226110
24 0.00117967 0.03350800 28.40455382 0.00173111 1.46745276
36 0.00036438 0.02227319 61.12626928 0.00076993 2.11298644
48 0.00015691 0.01668800 106.35396087 0.00043319 2.76075457
60 0.00008132 0.01334420 164.09493360 0.00027728 3.40973930
96 0.00002022 0.00833597 412.46759030 0.00010832 5.36021771
120 0.00001041 0.00666801 640.53890490 0.00006933 6.65994236
156 0.00000477 0.00512882 1075.22431866 0.00004102 8.59958071
204 0.00000214 0.00392184 1832.63551402 0.00002399 11.21028037
240 0.00000132 0.00333350 2525.37878788 0.00001733 13.12878788
Table 3.7 Cell wise comparison of MISEs for HFm, HFc and BPF based approximations of the function f3(t) of Example 3.6 for m = 12, h = 0.2 s and T = 2.4 s

Sub-interval no. [0 to (m − 1)] | Segment wise MISE using HFm approach | Segment wise MISE using HFc or TF approach | Segment wise MISE using BPF approach
0 | 0.00005333 | 0.00005333 | 0.00014222
1 | 0.00005333 | 0.00005333 | 0.00120889
2 | 0.00005333 | 0.00005333 | 0.00334222
3 | 0.00005333 | 0.00005333 | 0.00654222
4* | 0.04085333 | 0.34005333 | 0.01080889
5 | 0.00005333 | 0.00005333 | 0.00014222
6 | 0.00005333 | 0.00005333 | 0.00120889
7 | 0.00005333 | 0.00005333 | 0.00334222
8 | 0.00005333 | 0.00005333 | 0.00654222
9 | 0.00005333 | 0.00005333 | 0.01080889
10 | 0.00005333 | 0.00005333 | 0.01614222
11 | 0.00005333 | 0.00005333 | 0.02254222
MISE over the whole period T | 0.00345333 | 0.02838666 | 0.00689778
* sub-interval containing the jump discontinuity

have used HF based modified approach for this special kind of function f3(t) with
one jump, the ratio is still more uncomfortable. For this reason, we investigated cell
wise MISE for three approximations, namely, BPF based approximation and HF
based approximation, both conventional and modified. The results are tabulated in
Table 3.7.
It is noted from the table that for the cell immediately before the jump, MISE is
maximum (=0.34005333) for HFc and minimum (=0.01080889) for BPF. And for
the same cell, for HFm based approximation, MISE is 0.04085333 which is mod-
erate. But for all other cells, HFm and HFc methods have the same MISE and its
magnitude is much less than that of BPF method. In fact, the sum of MISEs of all
cells for HFc or HFm methods, excluding the cell just before the jump, is
0.00058666 (for HFc and HFm) and 0.06862221 for BPF. This proves the efficiency
of HF based approximation and indicates its superiority over BPF technique.
Further, when m is increased from 12 to 24 or even higher values, it is noted
from Table 3.6 that HFm method is always more accurate than BPF based
approximation, while HFc method is not. This proves, HFm method is the most
competent for handling functions with jumps.

3.9 Conclusion

The orthogonal hybrid function (HF) set has been employed for piecewise linear
approximation of time functions of Lebesgue measure. For HF domain approxi-
mation, the expansion coefficients are simply the samples of the function to be
approximated, whereas for BPF [1] domain approximation, each expansion coefficient
is determined via integration of the function, making the computation more
complex. So is the case for approximation of a function using a Legendre polynomial
or any other orthogonal polynomial for that matter. Also, the block pulse function
based approximation, being staircase in nature, incurs higher mean integral square
error (MISE) compared to that via hybrid functions because HF domain approxi-
mation reconstructs a function in a piecewise linear manner.
For linear functions like f1(t) = t, HF domain approximation comes up with zero
mean integral square error (MISE) as expected. This fact is shown in Fig. 3.4. Also,
HF domain approximation proved to be much more accurate compared to equiv-
alent BPF domain approximation. For example, for the function f2(t) = sin(πt),
MISE for HF domain approximation is much less than BPF domain approximation.
These facts are evident from Figs. 3.5, 3.6 and 3.7, and from Table 3.1. In Table 3.1, the ratio
Δ = MISE_BPF/MISE_HF is found to be 25.43420823.

For approximation of discontinuous function, the outcome of HF domain


approximation is illustrated via Fig. 3.8 through Fig. 3.11 qualitatively. It is noted
that if the sampling period h is small enough, hybrid functions could turn out
reasonably good approximation of discontinuous functions. However, the accuracy
of approximation is much improved for most of the functions if we employ a

modified HF domain approach. Calling this technique the HFm approach, it has
been established through numerical Examples 3.5 and 3.6 that HFm approach is
much more accurate than the conventional HF domain approach (named, HFc
approach). Table 3.2 is evidence enough to establish the superiority of the HFm
approach for eleven different values of m. Figure 3.18 provides the pictorial
translation of Table 3.2 where the fineness of approximation using the HFm
approach is quite apparent.
For the numerical Example 3.6; Fig. 3.19 compares HF domain approximations
with the exact function for three different values of m, namely, m = 12, 24 and 240
and thus compares effectiveness of approximation via HFc and HFm qualitatively.
Function approximation via Legendre polynomials are also compared with HF
approximation. These are illustrated in Table 3.4 and Figs. 3.20 and 3.23. From
Table 3.4, it is observed that with up to sixth degree Legendre polynomial
approximation, the same order of MISE is achieved in HF domain approximation
with m = 500. But with up to fifth degree polynomial approximation, the same order
of MISE is achieved in HF domain approximation with m = 134. However, for BPF
domain approximation with m = 128 and T = 2 s the MISE is 3.68934312e-05 and
for the same order of MISE, HF domain approximation requires only m = 9, vide
Table 3.5.
From Fig. 3.25a, b we find that the HF based approach proves to be more
effective than Legendre polynomial based approximation, in the sense that HF
based approximation provides a better deal even with the smallest number of sub-intervals.
For approximation of functions with jump discontinuities, again the HF domain
approximation turns out to be the winner compared to Legendre polynomial
approach.
In Fig. 3.27a, b, a comparison of BPF approximation and HF approximation
shows that HF based approximation achieves the same range of MISE using a
much smaller number of sub-intervals than its BPF equivalent.
From the above discussion, it may be concluded that function approximation
using hybrid functions is simpler as well as more advantageous compared to the equivalent
block pulse function based approximation. Though Legendre polynomial
based approximation produces good results, computation of its coefficients is
much more tedious and complicated. Further, the HF based approximation works
with function samples, which is a great advantage in view of the present digital age.
This advantage is offered neither by BPF approximation nor by Legendre polynomial
based approximation.
For approximating functions with jump discontinuities, the modified HF domain
approach seems to be more efficient than the conventional HF domain approach and
examples have been provided to prove the point. This broadens application suit-
ability of HF domain approach still more.

References

1. Jiang, J.H., Schaufelberger, W.: Block Pulse Functions and their Application in Control
System, LNCIS, vol. 179. Springer, Berlin (1992)
2. Deb, A., Sarkar, G., Sen, S.K.: Block pulse functions, the most fundamental of all piecewise
constant basis functions. Int. J. Syst. Sci. 25(2), 351–363 (1994)
3. Deb, A., Sarkar, G., Bhattacharjee, M., Sen, S.K.: A new set of piecewise constant orthogonal
functions for the analysis of linear SISO systems with sample-and-hold. J. Franklin Inst. 335B
(2), 333–358 (1998)
4. Deb, A., Sarkar, G., Sengupta, A.: Triangular Orthogonal Functions for the Analysis of
Continuous Time Systems. Anthem Press, London (2011)
5. Deb, A., Sarkar, G., Dasgupta, A.: A complementary pair of orthogonal triangular function
sets and its application to the analysis of SISO control systems. J. Inst. Eng. (India) 84, 120–
129 (2003)
6. Deb, A., Dasgupta, A., Sarkar, G.: A complementary pair of orthogonal triangular function
sets and its application to the analysis of dynamic systems. J. Franklin Inst. 343(1), 1–26
(2006)
7. Rao, G.P.: Piecewise Constant Orthogonal Functions and their Application in Systems and
Control, LNCIS, vol. 55. Springer, Berlin (1983)
8. Deb, A., Sarkar, G., Mandal, P., Biswas, A., Ganguly, A., Biswas, D.: Transfer function
identification from impulse response via a new set of orthogonal hybrid function (HF). Appl.
Math. Comput. 218(9), 4760–4787 (2012)
9. Deb, A., Sarkar, G., Ganguly, A., Biswas, A.: Approximation, integration and differentiation
of time functions using a set of orthogonal hybrid functions (HF) and their application to
solution of first order differential equations. Appl. Math. Comput. 218(9), 4731–4759 (2012)
10. Baranowski, J.: Legendre polynomial approximations of time delay systems. XII international
Ph.D workshop OWD 2010, 23–26 Oct 2010
11. Tohidi, E., Samadi, O.R.N., Farahi, M.H.: Legendre approximation for solving a class of
nonlinear optimal control problems. J. Math. Finance 1, 8–13 (2011)
12. Mathews, J.H., Kurtis, D.F.: Numerical Methods using MATLAB, 4th edn. Prentice Hall of
India Pvt. Ltd., New Delhi (2005)
13. Rao, G.P., Srinivasan, T.: Analysis and synthesis of dynamic systems containing time delays
via block pulse functions. Proc. IEE 125(9), 1064–1068 (1978)
14. Deb, A., Sarkar, G., Sen, SK.: Linearly pulse-width modulated block pulse functions and their
application to linear SISO feedback control system identification. Proc. IEE, Part D, Control
Theory Appl. 142(1), 44–50 (1995)
Chapter 4
Integration and Differentiation Using HF
Domain Operational Matrices

Abstract This chapter introduces the operational matrices for integration as well as
differentiation. In such hybrid function domain integration or differentiation, the
function to be integrated or differentiated is first expanded in hybrid function
domain and then operated upon by some special matrices to achieve the result.
These special matrices are the operational matrices for integration and differentia-
tion and these are derived in this chapter. Also, the nature of accumulation of error
at each stage of integration-differentiation dual operation is investigated. Four
examples are treated to illustrate the operational methods. Three tables and fifteen
figures are presented for user friendly clarity.

The proposed hybrid function (HF) set has been utilized for approximating square
integrable functions in a piecewise linear manner. The spirit of such approximation
was explained in the previous chapter. As was done with block pulse functions [1,
2], in this chapter, we use the complementary hybrid function sets in a similar
fashion, to develop the operational matrices for integration [3, 4] and these new
operational matrices are employed to integrate time functions in HF domain. These
matrices are finally used for the analysis and synthesis of control systems, for
solving the identification problem from state space description of systems, and
parameter estimation of transfer functions from impulse response data.

4.1 Operational Matrices for Integration

A hybrid function set is a combination of a sample-and-hold function (SHF) [5] set and a triangular function set [6–8]. Thus, when we express a time function in HF domain, the result is comprised of two parts, namely, SHF part and TF part. So, to integrate such a function in HF domain, we need to integrate both the parts. Integration of each part would again produce SHF and TF parts, the combination of which is the result of integration in HF domain. This is achieved by means of operational matrices.


In order to derive the operational matrices for integration in HF domain, we proceed in a manner adopted for Walsh and block pulse functions. Here, we consider both the component function sets separately and develop the integration operational matrix for each of them, to eventually derive the operational matrices for integration in HF domain [9].

4.1.1 Integration of Sample-and-Hold Functions [5]

First we take up the m-member sample-and-hold function set and integrate each of
its components and consequently express the result in terms of hybrid function.
Figure 4.1a shows the first member S0 of the SHF set and Fig. 4.1b shows its
decomposition into two step functions.
Mathematically, S0 can be expressed as

$$S_0 = u(t) - u(t-h)$$

Subsequent integration of $S_0$ produces two ramp functions as shown in Fig. 4.1c, while Fig. 4.1d depicts the resulting function $\int S_0\,\mathrm{d}t$. It is noted that the result is comprised of one triangular function and several sample-and-hold function blocks [9].

Fig. 4.1 Decomposition of the first member of the SHF set S0 and its subsequent integration

Fig. 4.2 Integration of the (i + 1)th member of the SHF set

Figure 4.2 shows the (i + 1)th member of the SHF set and its integration. Mathematically, the function $\int S_i\,\mathrm{d}t$ is given by

$$\int S_i(t)\,\mathrm{d}t = (t-ih)\,u(t-ih) - [t-(i+1)h]\,u[t-(i+1)h] \qquad (4.1)$$

It is apparent that integration of the (i + 1)th member of the SHF set produces the
same result of Fig. 4.1d, but with a shift of ih to the right as shown in Fig. 4.2.
Putting different values of i in (4.1), e.g., 0, 1, 2, …, we can obtain expressions for
integrations of different component SHF’s.
Starting with the first member S0 of the SHF set, we integrate each member and
express the result [9] in hybrid function domain. Let us consider four members of
the sample-and-hold function set, i.e. m = 4 and T = 1 s and h = T/m. Figure 4.3
shows the result of integration which is comprised of one triangular function and
three sample-and-hold functions. The figure also contains the integration results of the other three SHF's, namely, $S_1$, $S_2$ and $S_3$.

Fig. 4.3 Integration of first four members of the SHF set

Integration of the first member of the SHF set can be expressed mathematically as

$$\int S_0\,\mathrm{d}t = h\,[\,0\;\;1\;\;1\;\;1\,]\begin{bmatrix} S_0\\ S_1\\ S_2\\ S_3 \end{bmatrix} + h\,[\,1\;\;0\;\;0\;\;0\,]\begin{bmatrix} T_0\\ T_1\\ T_2\\ T_3 \end{bmatrix}$$

where, for convenience, we can write

$$\int S_0\,\mathrm{d}t = h\,[\,0\;\;1\;\;1\;\;1\,]\,S_{(4)} + h\,[\,1\;\;0\;\;0\;\;0\,]\,T_{(4)} \qquad (4.2)$$

It is noted that, when the first member of the sample-and-hold function set is integrated and the result is expressed in HF domain, its approximation is comprised of two parts: sample-and-hold functions as well as triangular functions.

Following Eq. (4.2), the results of integration of the second, third and fourth members of the SHF set, as shown in Fig. 4.3, may be expressed as

$$\int S_1\,\mathrm{d}t = h\,[\,0\;\;0\;\;1\;\;1\,]\,S_{(4)} + h\,[\,0\;\;1\;\;0\;\;0\,]\,T_{(4)} \qquad (4.3)$$

$$\int S_2\,\mathrm{d}t = h\,[\,0\;\;0\;\;0\;\;1\,]\,S_{(4)} + h\,[\,0\;\;0\;\;1\;\;0\,]\,T_{(4)} \qquad (4.4)$$

$$\int S_3\,\mathrm{d}t = h\,[\,0\;\;0\;\;0\;\;0\,]\,S_{(4)} + h\,[\,0\;\;0\;\;0\;\;1\,]\,T_{(4)} \qquad (4.5)$$

Expressing Eqs. (4.2)–(4.5) in matrix form, we have

$$\begin{bmatrix} \int S_0\,\mathrm{d}t\\ \int S_1\,\mathrm{d}t\\ \int S_2\,\mathrm{d}t\\ \int S_3\,\mathrm{d}t \end{bmatrix} = h\begin{bmatrix} 0&1&1&1\\ 0&0&1&1\\ 0&0&0&1\\ 0&0&0&0 \end{bmatrix} S_{(4)} + h\begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{bmatrix} T_{(4)}$$

or,

$$\int S_{(4)}\,\mathrm{d}t = h\sum_{i=1}^{3} Q^{i}_{(4)}\,S_{(4)} + h\,I_{(4)}\,T_{(4)} \qquad (4.6)$$

where $I_{(4)}$ is the (4 × 4) identity matrix and $Q_{(4)}$ is the delay matrix [3] of order 4, having the general structure

$$Q^{i}_{(m)} = \begin{bmatrix} 0_{(m-i)\times i} & I_{(m-i)}\\ 0_{i\times i} & 0_{i\times(m-i)} \end{bmatrix}_{(m\times m)} \qquad (4.7)$$

where $i = 1, 2, 3, \ldots, (m-1)$ and $Q_{(m)}$ has the property

$$Q^{m}_{(m)} = 0_{(m)}$$

Equation (4.6) may be expressed as

$$\int S_{(4)}\,\mathrm{d}t = P1ss_{(4)}\,S_{(4)} + P1st_{(4)}\,T_{(4)} \qquad (4.8)$$

where,

$$P1ss_{(4)} \triangleq h\begin{bmatrix} 0&1&1&1\\ 0&0&1&1\\ 0&0&0&1\\ 0&0&0&0 \end{bmatrix} \quad\text{and}\quad P1st_{(4)} \triangleq h\begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{bmatrix}$$

The square matrices P1ss(4) and P1st(4) may be expressed in the following compact form

$$P1ss_{(4)} = h\,[[\,0\;\;1\;\;1\;\;1\,]] \quad\text{and}\quad P1st_{(4)} = h\,[[\,1\;\;0\;\;0\;\;0\,]]$$

in which $[[\,a\;\;b\;\;c\,]] \triangleq \begin{bmatrix} a&b&c\\ 0&a&b\\ 0&0&a \end{bmatrix}$

Following (4.7), in general, for m component functions in each of the SHF set and TF set, we can write

$$\int S_{(m)}\,\mathrm{d}t = h\sum_{i=1}^{m-1} Q^{i}_{(m)}\,S_{(m)} + h\,I_{(m)}\,T_{(m)} = P1ss_{(m)}\,S_{(m)} + P1st_{(m)}\,T_{(m)} \qquad (4.9)$$

where,

$$P1ss_{(m)} \triangleq h\,[[\,\underbrace{0\;\;1\;\;\ldots\;\;1\;\;1}_{m\ \text{terms}}\,]] \quad\text{and}\quad P1st_{(m)} \triangleq h\,[[\,\underbrace{1\;\;0\;\;\ldots\;\;0\;\;0}_{m\ \text{terms}}\,]]$$

Thus, P1ss(4) and P1st(4) are the first order operational matrices for integration
for the sample-and-hold function part.
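As a minimal illustration (our own sketch, not the listing of Appendix B), the first order SHF operational matrices of Eq. (4.9) can be generated for any m as follows; the variable names are ours.

```matlab
% Hedged sketch of Eq. (4.9): first order integration matrices for the SHF part.
m = 4; T = 1; h = T/m;              % number of subintervals and step size
P1ss = h*(triu(ones(m)) - eye(m));  % h*[[0 1 1 ... 1]], upper triangular Toeplitz
P1st = h*eye(m);                    % h*[[1 0 0 ... 0]], a scaled identity matrix
```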

4.1.2 Integration of Triangular Functions [6]

Figure 4.4a shows the first member T0 of the TF set and Fig. 4.4b shows its
decomposition into two ramp functions and one delayed negative step function.

Fig. 4.4 Decomposition of the first member of the TF set and its subsequent integration. a the
triangular function, b its decomposition into three functions, c integration of the component
functions of (b), and (d) the result of integration after combining all the three component integrated
functions of (c)

Mathematically, the function T0 can be expressed as

$$T_0 = \frac{1}{h}\,t - \frac{1}{h}\,(t-h)\,u(t-h) - u(t-h)$$

Subsequent integration of the first member $T_0$ produces two parabolic functions and one ramp function as shown in Fig. 4.4c. The integrated function may be expressed as

$$\int T_0\,\mathrm{d}t = \frac{1}{h}\frac{t^{2}}{2} - \frac{1}{h}\frac{(t-h)^{2}}{2}\,u(t-h) - (t-h)\,u(t-h)$$

Figure 4.4d depicts the resulting function $\int T_0\,\mathrm{d}t$.
Similar to the case for sample-and-hold functions, the mathematical expression for the (i + 1)th member of the TF set is

$$\int T_i\,\mathrm{d}t = \frac{1}{h}\frac{(t-ih)^{2}}{2}\,u(t-ih) - \frac{1}{h}\frac{[t-(i+1)h]^{2}}{2}\,u[t-(i+1)h] - [t-(i+1)h]\,u[t-(i+1)h] \qquad (4.10)$$

Figure 4.5 shows pictorially the result of integration of the (i + 1)th member of the TF set.
Taking different values of i in (4.10), e.g., 0, 1, 2, …, we can obtain expressions
for integrations of different component TF’s.
Following a similar procedure [1, 6] for other components of the TF set and
using Eq. (4.10), we integrate each member and express the result of integration in
hybrid function domain.
The result of integration of T0 is now to be expressed in HF domain. Since the
HF technique works with function samples, it is apparent that the expansion will be
comprised of one triangular function and three sample-and-hold functions, where,
the triangular function represents the parabolic part of the exact integration.
This is shown in Fig. 4.6. The figure also depicts similar results for the next three
members of the TF set.
This can be represented mathematically as

Fig. 4.5 Integration of the (i + 1)th member of the TF set



Fig. 4.6 Integration of first four members of the TF set

$$\int T_0\,\mathrm{d}t \approx \frac{h}{2}\,[\,0\;\;1\;\;1\;\;1\,]\begin{bmatrix} S_0\\ S_1\\ S_2\\ S_3 \end{bmatrix} + \frac{h}{2}\,[\,1\;\;0\;\;0\;\;0\,]\begin{bmatrix} T_0\\ T_1\\ T_2\\ T_3 \end{bmatrix} = \frac{h}{2}\,[\,0\;\;1\;\;1\;\;1\,]\,S_{(4)} + \frac{h}{2}\,[\,1\;\;0\;\;0\;\;0\,]\,T_{(4)} \qquad (4.11)$$

Following a similar procedure as in Sect. 4.1.1, we integrate the other three members of the triangular function set and express the results in HF domain. Following Eq. (4.11), integration of the second, third and fourth members of the TF set are given by

$$\int T_1\,\mathrm{d}t = \frac{h}{2}\,[\,0\;\;0\;\;1\;\;1\,]\,S_{(4)} + \frac{h}{2}\,[\,0\;\;1\;\;0\;\;0\,]\,T_{(4)} \qquad (4.12)$$

$$\int T_2\,\mathrm{d}t = \frac{h}{2}\,[\,0\;\;0\;\;0\;\;1\,]\,S_{(4)} + \frac{h}{2}\,[\,0\;\;0\;\;1\;\;0\,]\,T_{(4)} \qquad (4.13)$$

$$\int T_3\,\mathrm{d}t = \frac{h}{2}\,[\,0\;\;0\;\;0\;\;0\,]\,S_{(4)} + \frac{h}{2}\,[\,0\;\;0\;\;0\;\;1\,]\,T_{(4)} \qquad (4.14)$$

Expressing Eqs. (4.11)–(4.14) in matrix form, we have

$$\begin{bmatrix} \int T_0\,\mathrm{d}t\\ \int T_1\,\mathrm{d}t\\ \int T_2\,\mathrm{d}t\\ \int T_3\,\mathrm{d}t \end{bmatrix} = \frac{h}{2}\begin{bmatrix} 0&1&1&1\\ 0&0&1&1\\ 0&0&0&1\\ 0&0&0&0 \end{bmatrix} S_{(4)} + \frac{h}{2}\begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{bmatrix} T_{(4)}$$

or

$$\int T_{(4)}\,\mathrm{d}t = \frac{h}{2}\sum_{i=1}^{3} Q^{i}_{(4)}\,S_{(4)} + \frac{h}{2}\,I_{(4)}\,T_{(4)} \qquad (4.15)$$

Equation (4.15) can be expressed as

$$\int T_{(4)}\,\mathrm{d}t = P1ts_{(4)}\,S_{(4)} + P1tt_{(4)}\,T_{(4)} \qquad (4.16)$$

where,

$$P1ts_{(4)} \triangleq \frac{h}{2}\begin{bmatrix} 0&1&1&1\\ 0&0&1&1\\ 0&0&0&1\\ 0&0&0&0 \end{bmatrix} \quad\text{and}\quad P1tt_{(4)} \triangleq \frac{h}{2}\begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{bmatrix}$$

The square matrices P1ts(4) and P1tt(4) may be expressed in the following compact forms

$$P1ts_{(4)} = \frac{h}{2}\,[[\,0\;\;1\;\;1\;\;1\,]] \quad\text{and}\quad P1tt_{(4)} = \frac{h}{2}\,[[\,1\;\;0\;\;0\;\;0\,]]$$

Hence, as in Eq. (4.8), P1ts(4) and P1tt(4) are the first order integration operational matrices for integration of the triangular function components of an HF domain expanded time function.

The following relations are noted amongst the operational matrices:

$$P1ts = \tfrac{1}{2}\,P1ss, \qquad P1tt = \tfrac{1}{2}\,P1st \qquad (4.17)$$

In general, for an m-set function, similar to (4.9), we can write

$$\int T_{(m)}\,\mathrm{d}t = \frac{h}{2}\sum_{i=1}^{m-1} Q^{i}_{(m)}\,S_{(m)} + \frac{h}{2}\,I_{(m)}\,T_{(m)} = P1ts_{(m)}\,S_{(m)} + P1tt_{(m)}\,T_{(m)} \qquad (4.18)$$

where,
$$P1ts_{(m)} \triangleq \frac{h}{2}\,[[\,\underbrace{0\;\;1\;\;\ldots\;\;1\;\;1}_{m\ \text{terms}}\,]] \quad\text{and}\quad P1tt_{(m)} \triangleq \frac{h}{2}\,[[\,\underbrace{1\;\;0\;\;\ldots\;\;0\;\;0}_{m\ \text{terms}}\,]]$$
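Continuing the small sketch given at the end of Sect. 4.1.1 (ours, not from Appendix B), the TF part matrices follow at once from the relations (4.17):

```matlab
% Hedged sketch of Eq. (4.18): first order integration matrices for the TF part,
% obtained from the SHF part matrices via the relations (4.17).
P1ts = P1ss/2;      % (h/2)*[[0 1 1 ... 1]]
P1tt = P1st/2;      % (h/2)*[[1 0 0 ... 0]]
```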

4.2 Integration of Functions Using Operational Matrices

The integration operational matrices developed for SHF and TF in Sects. 4.1.1 and 4.1.2 will now be used to integrate any time function in HF domain. Let f(t) be a square integrable function which can be expanded in hybrid function domain as

$$f(t) \approx [\,c_0\;\;c_1\;\;c_2\;\;\ldots\;\;c_i\;\;\ldots\;\;c_{m-1}\,]\,S_{(m)} + [\,(c_1-c_0)\;\;(c_2-c_1)\;\;(c_3-c_2)\;\;\ldots\;\;(c_i-c_{i-1})\;\;\ldots\;\;(c_m-c_{m-1})\,]\,T_{(m)} \triangleq C_S^{\mathrm{T}}\,S_{(m)} + C_T^{\mathrm{T}}\,T_{(m)} \qquad (4.19)$$

where $c_0, c_1, c_2, \ldots, c_m$ are (m + 1) equidistant samples of f(t) with a sampling period h, and $[\ldots]^{\mathrm{T}}$ denotes transpose.
Integrating Eq. (4.19) with respect to t, we get

$$\begin{aligned} \int f(t)\,\mathrm{d}t &\approx \int C_S^{\mathrm{T}}\,S_{(m)}\,\mathrm{d}t + \int C_T^{\mathrm{T}}\,T_{(m)}\,\mathrm{d}t = C_S^{\mathrm{T}}\int S_{(m)}\,\mathrm{d}t + C_T^{\mathrm{T}}\int T_{(m)}\,\mathrm{d}t\\ &= C_S^{\mathrm{T}}\left(P1ss_{(m)}\,S_{(m)} + P1st_{(m)}\,T_{(m)}\right) + C_T^{\mathrm{T}}\left(P1ts_{(m)}\,S_{(m)} + P1tt_{(m)}\,T_{(m)}\right)\\ &= \left(C_S^{\mathrm{T}} + \tfrac{1}{2}\,C_T^{\mathrm{T}}\right)\left(P1ss_{(m)}\,S_{(m)} + P1st_{(m)}\,T_{(m)}\right) \end{aligned} \qquad (4.20)$$

where use has been made of the relations (4.17).


Now we use (4.20) to perform integration of a few simple square integrable
functions.
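The following sketch (ours, with illustrative variable names; it is not the listing of Appendix B) shows how Eq. (4.20) is applied to the (m + 1) samples of a function to obtain the HF coefficients of its integral.

```matlab
% Hedged sketch of Eq. (4.20): HF domain integration from function samples.
m = 8; T = 1; h = T/m;
t  = (0:m)*h;                          % (m+1) equidistant sample instants
c  = t;                                % samples of f1(t) = t (Example 4.1)
CS = c(1:m);                           % SHF coefficients of f(t)
CT = diff(c);                          % TF coefficients of f(t)
P1ss = h*(triu(ones(m)) - eye(m));     % first order matrices of Eq. (4.9)
P1st = h*eye(m);
row  = CS + CT/2;                      % (CS' + CT'/2) as a row vector
IS = row*P1ss;                         % SHF coefficients of the integral
IT = row*P1st;                         % TF coefficients of the integral
```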

4.2.1 Numerical Examples

To show the validity of Eq. (4.20), we first compute the exact integration of time
function f(t) and then expand it directly in HF domain. Then we perform the same
integration via operational technique using Eq. (4.20) and compare the results with
the previous one.
Example 4.1 We integrate the function f1(t) = t via the hybrid function method taking T = 1 s, m = 8 and h = T/m s.

Exact integration of the given function is $\tfrac{t^{2}}{2}$ and direct expansion of $\tfrac{t^{2}}{2}$ in hybrid function domain, using Eq. (3.5), is

$$\int_0^t f_1(\tau)\,\mathrm{d}\tau = \frac{t^{2}}{2} \approx [\,0.00000000\;\;0.00781250\;\;0.03125000\;\;0.07031250\;\;0.12500000\;\;0.19531250\;\;0.28125000\;\;0.38281250\,]\,S_{(8)} + [\,0.00781250\;\;0.02343750\;\;0.03906250\;\;0.05468750\;\;0.07031250\;\;0.08593750\;\;0.10156250\;\;0.11718750\,]\,T_{(8)} \qquad (4.21)$$

In hybrid function domain, the function f1(t) = t is expanded directly as

$$f_1(t) = t \approx [\,0.00000000\;\;0.12500000\;\;0.25000000\;\;0.37500000\;\;0.50000000\;\;0.62500000\;\;0.75000000\;\;0.87500000\,]\,S_{(8)} + [\,0.12500000\;\;0.12500000\;\;0.12500000\;\;0.12500000\;\;0.12500000\;\;0.12500000\;\;0.12500000\;\;0.12500000\,]\,T_{(8)} \qquad (4.22)$$

Putting the values of $C_S^{\mathrm{T}}$ and $C_T^{\mathrm{T}}$ from Eq. (4.22) in Eq. (4.20), we perform operational integration of the function f1(t) in hybrid function domain to obtain

$$\int_0^t f_1(\tau)\,\mathrm{d}\tau = \frac{t^{2}}{2} \approx [\,0.00000000\;\;0.00781250\;\;0.03125000\;\;0.07031250\;\;0.12500000\;\;0.19531250\;\;0.28125000\;\;0.38281250\,]\,S_{(8)} + [\,0.00781250\;\;0.02343750\;\;0.03906250\;\;0.05468750\;\;0.07031250\;\;0.08593750\;\;0.10156250\;\;0.11718750\,]\,T_{(8)} \qquad (4.23)$$

In Fig. 4.7, we compare the results obtained via direct expansion using
Eq. (4.21) and using the HF domain operational technique via Eq. (4.23). It is
observed that for the function f1(t) = t, the result of direct integration and subsequent
HF domain expansion, and integration by the operational method are identical. That
is, in this case, percentage error is zero as shown in Table 4.1.

Fig. 4.7 Comparison of integration of the function f1(t) = t via (i) direct expansion of the integrated function in HF domain [Eq. (4.21)] and (ii) using HF domain integration operational matrices [Eq. (4.23)]. It is noted that the two curves overlap
Example 4.2 Now we integrate the function f2(t) = sin (πt) in hybrid function domain taking T = 1 s, m = 8 and h = T/m s. The exact integration of the given function is $[1-\cos(\pi t)]/\pi$ and direct expansion of $[1-\cos(\pi t)]/\pi$ in HF domain is

$$\int_0^t f_2(\tau)\,\mathrm{d}\tau = \frac{1-\cos(\pi t)}{\pi} \approx [\,0.00000000\;\;0.02422989\;\;0.09323080\;\;0.19649796\;\;0.31830988\;\;0.44012180\;\;0.54338896\;\;0.61238987\,]\,S_{(8)} + [\,0.02422989\;\;0.06900090\;\;0.10326715\;\;0.12181191\;\;0.12181191\;\;0.10326715\;\;0.06900090\;\;0.02422989\,]\,T_{(8)} \qquad (4.24)$$
The function f2(t) = sin(πt), when expanded directly in hybrid function domain, is expressed as

$$f_2(t) = \sin(\pi t) \approx [\,0.00000000\;\;0.38268343\;\;0.70710678\;\;0.92387953\;\;1.00000000\;\;0.92387953\;\;0.70710678\;\;0.38268343\,]\,S_{(8)} + [\,0.38268343\;\;0.32442334\;\;0.21677275\;\;0.07612046\;\;{-0.07612046}\;\;{-0.21677275}\;\;{-0.32442334}\;\;{-0.38268343}\,]\,T_{(8)} \qquad (4.25)$$

Putting the values of CTS and CTT from Eq. (4.25) in Eq. (4.20), we perform
operational integration of the function f2(t) = sin (πt) in hybrid function domain to
obtain

Table 4.1 Comparison of samples obtained via two methods and percentage error for (a) SHF coefficients and (b) TF coefficients for the function f1(t) = t

(a) Sample-and-hold function domain coefficients

t (s)   Direct expansion   Via operational method   % Error
0       0.00000000         0.00000000               –
1/8     0.00781250         0.00781250               0.00000000
2/8     0.03125000         0.03125000               0.00000000
3/8     0.07031250         0.07031250               0.00000000
4/8     0.12500000         0.12500000               0.00000000
5/8     0.19531250         0.19531250               0.00000000
6/8     0.28125000         0.28125000               0.00000000
7/8     0.38281250         0.38281250               0.00000000

(b) Triangular function domain coefficients

t (s)   Direct expansion   Via operational method   % Error
0       0.00781250         0.00781250               0.00000000
1/8     0.02343750         0.02343750               0.00000000
2/8     0.03906250         0.03906250               0.00000000
3/8     0.05468750         0.05468750               0.00000000
4/8     0.07031250         0.07031250               0.00000000
5/8     0.08593750         0.08593750               0.00000000
6/8     0.10156250         0.10156250               0.00000000
7/8     0.11718750         0.11718750               0.00000000

Fig. 4.8 Comparison of integration of the function f2(t) = sin (πt) via (a) direct integration and
subsequent expansion of the integrated function in HF domain [Eq. (4.24)] and (b) using HF
domain integration operational matrices [Eq. (4.26)]. It is noted that the two curves almost overlap
(vide Appendix B, Program no. 11)

$$\int_0^t f_2(\tau)\,\mathrm{d}\tau = \frac{1-\cos(\pi t)}{\pi} \approx [\,0.00000000\;\;0.02391771\;\;0.09202960\;\;0.19396624\;\;0.31420871\;\;0.43445118\;\;0.53638783\;\;0.60449972\,]\,S_{(8)} + [\,0.02391771\;\;0.06811188\;\;0.10193664\;\;0.12024247\;\;0.12024247\;\;0.10193664\;\;0.06811188\;\;0.02391771\,]\,T_{(8)} \qquad (4.26)$$

In Fig. 4.8, the result of above integration via direct expansion and by the
operational method are plotted. It is noted that the two results are very close. This is
also apparent from Table 4.2, which shows the results from Eqs. (4.24) and (4.26).
It is seen from the table that percentage error is very small and reasonably constant
over the time interval of interest.
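As a quick cross-check of the tabulated error (a sketch of ours, using values copied from Eqs. (4.24) and (4.26)), the percentage error of the first few SHF coefficients can be computed as follows.

```matlab
% Hedged sketch: percentage error between direct expansion (4.24) and the
% operational result (4.26) for Example 4.2.
direct = [0.02422989 0.09323080 0.19649796 0.31830988];
operal = [0.02391771 0.09202960 0.19396624 0.31420871];
pct_err = 100*(direct - operal)./direct   % approximately 1.2884 % throughout
```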

4.3 Operational Matrices for Differentiation [10]

4.3.1 Differentiation of Time Functions Using Operational Matrices

Let a square integrable function f(t) of Lebesgue measure be expressed in HF domain, for m = 4, as

Table 4.2 Comparison of samples via two methods and percentage error for (a) SHF coefficients and (b) TF coefficients for the function f2(t) = sin (πt) (vide Appendix B, Program no. 11)

(a) Sample-and-hold function domain coefficients

t (s)   Direct expansion   Via operational method   % Error
0       0.00000000         0.00000000               –
1/8     0.02422989         0.02391771               1.288408656
2/8     0.09323080         0.09202960               1.288415416
3/8     0.19649796         0.19396624               1.288420501
4/8     0.31830988         0.31420871               1.288420579
5/8     0.44012180         0.43445118               1.288420614
6/8     0.54338896         0.53638783               1.288419625
7/8     0.61238987         0.60449972               1.288419418

(b) Triangular function domain coefficients

t (s)   Direct expansion   Via operational method   % Error
0       0.02422989         0.02391771               1.288408656
1/8     0.06900090         0.06811188               1.288417977
2/8     0.10326715         0.10193664               1.288415532
3/8     0.12181191         0.12024247               1.288412603
4/8     0.12181191         0.12024247               1.288412603
5/8     0.10326715         0.10193664               1.288415532
6/8     0.06900090         0.06811188               1.288417977
7/8     0.02422989         0.02391771               1.288408656

$$f(t) \approx [\,c_0\;\;c_1\;\;c_2\;\;c_3\,]\,S_{(4)} + [\,(c_1-c_0)\;\;(c_2-c_1)\;\;(c_3-c_2)\;\;(c_4-c_3)\,]\,T_{(4)} \triangleq C_S^{\mathrm{T}}\,S_{(4)} + C_T^{\mathrm{T}}\,T_{(4)} \qquad (4.27)$$

When a function f(t) is expressed in HF domain, it is converted to a piecewise linear function in [0, T). If this converted function is differentiated, the result will be a staircase function. For such a function, any attempt to compute the higher derivatives will give rise to delta functions as well as double delta functions. To avoid this difficulty, we compute the first derivative from the samples of the function f(t) by taking appropriate first order differences. Thus, from Eq. (4.27), we can write

$$f'(t) \approx \frac{1}{h}\,[\,(c_1-c_0)\;\;(c_2-c_1)\;\;(c_3-c_2)\;\;(c_4-c_3)\,]\,S_{(4)} + \frac{1}{h}\,[\,\{(c_2-c_1)-(c_1-c_0)\}\;\;\{(c_3-c_2)-(c_2-c_1)\}\;\;\{(c_4-c_3)-(c_3-c_2)\}\;\;\{(c_5-c_4)-(c_4-c_3)\}\,]\,T_{(4)} \qquad (4.28)$$

Let there be two square matrices DS(4) and DT(4) such that, when operated upon the $S_{(4)}$ vector and the $T_{(4)}$ vector of Eq. (4.27) respectively, they yield Eq. (4.28). That is, DS(4) acts as the differentiation matrix in sample-and-hold function domain and DT(4) acts as the differentiation matrix in triangular function domain. Thus, Eq. (4.28) may now be written as

$$\begin{aligned} f'(t) \approx\; & [\,c_0\;\;c_1\;\;c_2\;\;c_3\,]\,DS_{(4)}\,S_{(4)} + [\,(c_1-c_0)\;\;(c_2-c_1)\;\;(c_3-c_2)\;\;(c_4-c_3)\,]\,DT_{(4)}\,T_{(4)}\\ =\; & \frac{1}{h}\,[\,(c_1-c_0)\;\;(c_2-c_1)\;\;(c_3-c_2)\;\;(c_4-c_3)\,]\,S_{(4)}\\ & + \frac{1}{h}\,[\,\{(c_2-c_1)-(c_1-c_0)\}\;\;\{(c_3-c_2)-(c_2-c_1)\}\;\;\{(c_4-c_3)-(c_3-c_2)\}\;\;\{(c_5-c_4)-(c_4-c_3)\}\,]\,T_{(4)} \end{aligned}$$

Thus, for DS(4), we can write

$$[\,c_0\;\;c_1\;\;c_2\;\;c_3\,]\,DS_{(4)} = \frac{1}{h}\,[\,(c_1-c_0)\;\;(c_2-c_1)\;\;(c_3-c_2)\;\;(c_4-c_3)\,]$$

Solving for DS(4) algebraically, we have

$$DS_{(4)} = \frac{1}{h}\begin{bmatrix} -1&0&0&0\\ 1&-1&0&0\\ 0&1&-1&0\\ 0&0&1&\frac{(c_4-c_3)}{c_3} \end{bmatrix} \qquad (4.29)$$

Similarly, for the differentiation matrix DT(4) in triangular function domain, we can write

$$[\,(c_1-c_0)\;\;(c_2-c_1)\;\;(c_3-c_2)\;\;(c_4-c_3)\,]\,DT_{(4)} = \frac{1}{h}\,[\,\{(c_2-c_1)-(c_1-c_0)\}\;\;\{(c_3-c_2)-(c_2-c_1)\}\;\;\{(c_4-c_3)-(c_3-c_2)\}\;\;\{(c_5-c_4)-(c_4-c_3)\}\,]$$

Solving for DT(4), we have

$$DT_{(4)} = \frac{1}{h}\begin{bmatrix} -1&0&0&0\\ 1&-1&0&0\\ 0&1&-1&0\\ 0&0&1&\frac{(c_5-c_4)-(c_4-c_3)}{(c_4-c_3)} \end{bmatrix} \qquad (4.30)$$

Following Eqs. (4.29) and (4.30), the generalized matrices of order m may be formed from the following equation, where m component functions have been used. That is,

$$f'(t) \approx \frac{1}{h}\,[\,(c_1-c_0)\;\;(c_2-c_1)\;\;(c_3-c_2)\;\;\ldots\;\;(c_m-c_{m-1})\,]\,S_{(m)} + \frac{1}{h}\,[\,\{(c_2-c_1)-(c_1-c_0)\}\;\;\{(c_3-c_2)-(c_2-c_1)\}\;\;\ldots\;\;\{(c_{m+1}-c_m)-(c_m-c_{m-1})\}\,]\,T_{(m)} \qquad (4.31)$$

Thus, the general forms of the differentiation matrices, DS(m) and DT(m), are given by

$$DS_{(m)} = \frac{1}{h}\begin{bmatrix} -1&0&\cdots&0&0\\ 1&-1&\cdots&0&0\\ 0&1&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&1&\frac{(c_m-c_{m-1})}{c_{m-1}} \end{bmatrix}_{(m\times m)} \qquad (4.32)$$

$$DT_{(m)} = \frac{1}{h}\begin{bmatrix} -1&0&\cdots&0&0\\ 1&-1&\cdots&0&0\\ 0&1&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&1&\frac{(c_{m+1}-c_m)-(c_m-c_{m-1})}{(c_m-c_{m-1})} \end{bmatrix}_{(m\times m)} \qquad (4.33)$$
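The sketch below (ours; not the Appendix B program) differentiates a sampled function according to Eq. (4.31), which is exactly what the matrices DS(m) and DT(m) accomplish when applied as above; the extra sample c(m+2) supplies $c_{m+1}$.

```matlab
% Hedged sketch of Eq. (4.31): HF domain differentiation from function samples.
m = 10; T = 1; h = T/m;
t = (0:m+1)*h;                   % m+2 samples: c0, c1, ..., c(m+1)
c = 1 - exp(-t);                 % samples of f3(t) of Example 4.3
dCS = diff(c(1:m+1))/h;          % SHF coefficients of f3'(t): first differences/h
dCT = diff(c,2)/h;               % TF coefficients: second differences/h
```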

4.3.2 Numerical Examples

Example 4.3 Let us consider the function f3(t) = 1 − exp(−t). Expanding it in HF domain, for m = 10 and T = 1 s, we have

$$f_3(t) \approx [\,0\;\;0.09516258\;\;0.18126924\;\;0.25918177\;\;0.32967995\;\;0.39346934\;\;0.45118836\;\;0.50341469\;\;0.55067103\;\;0.59343034\,]\,S_{(10)} + [\,0.09516258\;\;0.08610666\;\;0.07791253\;\;0.07049818\;\;0.06378939\;\;0.05771902\;\;0.05222633\;\;0.04725634\;\;0.04275931\;\;0.03869021\,]\,T_{(10)} \qquad (4.34)$$

Now we differentiate the function given in (4.34) using the matrices of Eqs. (4.32) and (4.33) for m = 10. The result of differentiation in HF domain is obtained as

$$f_3'(t) \approx [\,0.95162581\;\;0.86106664\;\;0.77912532\;\;0.70498174\;\;0.63789386\;\;0.57719023\;\;0.52226332\;\;0.47256339\;\;0.42759304\;\;0.38690218\,]\,S_{(10)} + [\,{-0.09055917}\;\;{-0.08194132}\;\;{-0.07414358}\;\;{-0.06708788}\;\;{-0.06070363}\;\;{-0.05492691}\;\;{-0.04969993}\;\;{-0.04497035}\;\;{-0.04069086}\;\;{-0.03681861}\,]\,T_{(10)} \qquad (4.35)$$

Direct expansion of the function exp(−t), that is $f_3'(t)$, in HF domain is

$$f_3'(t) \approx [\,1.00000000\;\;0.90483741\;\;0.81873075\;\;0.74081822\;\;0.67032004\;\;0.60653065\;\;0.54881163\;\;0.49658530\;\;0.44932896\;\;0.40656965\,]\,S_{(10)} + [\,{-0.09516259}\;\;{-0.08610666}\;\;{-0.07791253}\;\;{-0.07049818}\;\;{-0.06378939}\;\;{-0.05771902}\;\;{-0.05222633}\;\;{-0.04725633}\;\;{-0.04275930}\;\;{-0.03869021}\,]\,T_{(10)} \qquad (4.36)$$

Figure 4.9 shows the direct expansion of the original function f3(t) and its derivative $f_3'(t)$ in HF domain using Eqs. (4.34) and (4.36). The figure also includes the HF domain representation of $f_3'(t)$ obtained via Eq. (4.35), using the differentiation matrices.

From the curves, it is seen that at t = 0, the curve $f_3'(t)$ deviates from its exact value 1. This deviation may be reduced by increasing m. That is, an increased value of m will make the differentiated curve start from a value closer to 1 on the y axis at t = 0.
Example 4.4 Let us consider the function f4(t) = sin(πt)/π. Expanding it in HF domain, for m = 10 and T = 1 s, we have

$$f_4(t) \approx [\,0.00000000\;\;0.09836316\;\;0.18709785\;\;0.25751810\;\;0.30273069\;\;0.31830988\;\;0.30273069\;\;0.25751810\;\;0.18709785\;\;0.09836316\,]\,S_{(10)} + [\,0.09836316\;\;0.08873469\;\;0.07042025\;\;0.04521258\;\;0.01557919\;\;{-0.01557919}\;\;{-0.04521258}\;\;{-0.07042025}\;\;{-0.08873469}\;\;{-0.09836316}\,]\,T_{(10)} \qquad (4.37)$$

Fig. 4.9 HF domain direct expansion of the function f3(t) = 1 − exp(−t), its exact derivative f3′(t), along with f3′(t) obtained using HF domain differentiation matrices for m = 10 and T = 1 s (vide Appendix B, Program no. 12)

Now we differentiate the function given in (4.37) using the matrices of Eqs. (4.32) and (4.33) for m = 10. The result of differentiation in HF domain is obtained as

$$f_4'(t) \approx [\,0.98363164\;\;0.88734692\;\;0.70420250\;\;0.45212584\;\;0.15579194\;\;{-0.15579194}\;\;{-0.45212584}\;\;{-0.70420250}\;\;{-0.88734692}\;\;{-0.98363164}\,]\,S_{(10)} + [\,{-0.09628471}\;\;{-0.18314441}\;\;{-0.25207666}\;\;{-0.29633389}\;\;{-0.31158389}\;\;{-0.29633389}\;\;{-0.25207666}\;\;{-0.18314441}\;\;{-0.09628471}\;\;{-1.55431223\mathrm{e}{-015}}\,]\,T_{(10)} \qquad (4.38)$$

Direct expansion of the function cos(πt), that is $f_4'(t)$, in HF domain is

$$f_4'(t) \approx [\,1.00000000\;\;0.95105651\;\;0.80901699\;\;0.58778525\;\;0.30901699\;\;0.00000000\;\;{-0.30901699}\;\;{-0.58778525}\;\;{-0.80901699}\;\;{-0.95105651}\,]\,S_{(10)} + [\,{-0.04894348}\;\;{-0.14203952}\;\;{-0.22123174}\;\;{-0.27876825}\;\;{-0.30901699}\;\;{-0.30901699}\;\;{-0.27876825}\;\;{-0.22123174}\;\;{-0.14203952}\;\;{-0.04894348}\,]\,T_{(10)} \qquad (4.39)$$

Figure 4.10 shows both the original function f4(t) and the differentiated function $f_4'(t)$ expressed as HF domain direct expansions [using Eqs. (4.37) and (4.39)]. For comparison, $f_4'(t)$ obtained via HF domain differential matrices [Eq. (4.38)] is also plotted. On increasing the sample density, that is m, the HF domain differentiated curve moves closer to the exact solution.

Fig. 4.10 HF domain direct expansion of the function f4(t) = sin(πt)/π, its derivative f4′(t), along with f4′(t) obtained using HF domain differential matrices for m = 10 and T = 1 s (vide Appendix B, Program no. 13)

4.4 Accumulation of Error for Subsequent Integration-Differentiation (I-D) Operation in HF Domain

It is obvious that integration using the integration operational matrices introduces error, since the operation is approximate. So do the operational matrices for differentiation, when any time function is differentiated in HF domain. Hence, it is apparent that a subsequent integration-differentiation (I-D) operation on a function in HF domain would fail to come up with the original function, unlike the exact I-D operation.

In the following, accumulation of error for subsequent I-D operations is studied in detail with a table and characteristic curves.

For m = 4, we expand a function f(t) in HF domain, as shown in Eq. (4.27), to write

$$f(t) \approx [\,c_0\;\;c_1\;\;c_2\;\;c_3\,]\,S_{(4)} + [\,(c_1-c_0)\;\;(c_2-c_1)\;\;(c_3-c_2)\;\;(c_4-c_3)\,]\,T_{(4)} \triangleq C_S^{\mathrm{T}}\,S_{(4)} + C_T^{\mathrm{T}}\,T_{(4)}$$

Now integrating f(t) using the operational matrices gives

$$\int f(t)\,\mathrm{d}t \triangleq F(t) \approx h\left(C_S^{\mathrm{T}} + \tfrac{1}{2}\,C_T^{\mathrm{T}}\right)\left(P11\,S_{(4)} + I\,T_{(4)}\right) \triangleq \bar{F}(t)\ \text{(say)}$$

where $P11 = [[\,0\;\;1\;\;1\;\;1\,]]$ and I is an identity matrix of order 4. It may be noted that $P1ss = h\,P11$.

Thus

$$\begin{aligned} \bar{F}(t) &= \frac{h}{2}\,[\,(c_0+c_1)\;\;(c_1+c_2)\;\;(c_2+c_3)\;\;(c_3+c_4)\,]\left(P11\,S_{(4)} + I\,T_{(4)}\right)\\ &= \frac{h}{2}\,[\,(c_0+c_1)\;\;(c_1+c_2)\;\;(c_2+c_3)\;\;(c_3+c_4)\,]\,P11\,S_{(4)} + \frac{h}{2}\,[\,(c_0+c_1)\;\;(c_1+c_2)\;\;(c_2+c_3)\;\;(c_3+c_4)\,]\,T_{(4)}\\ &= \frac{h}{2}\,[\,0\;\;(c_0+c_1)\;\;\{(c_0+c_1)+(c_1+c_2)\}\;\;\{(c_0+c_1)+(c_1+c_2)+(c_2+c_3)\}\,]\,S_{(4)}\\ &\quad + \frac{h}{2}\,[\,(c_0+c_1)\;\;(c_1+c_2)\;\;(c_2+c_3)\;\;(c_3+c_4)\,]\,T_{(4)} \end{aligned} \qquad (4.40)$$

Now, it is of interest to estimate the accumulation of error for a subsequent integration-differentiation operation (I-D operation) on a function f(t).

To achieve this end, we differentiate $\bar{F}(t)$ of Eq. (4.40) using DS(4) and DT(4). Usually, exact integration-differentiation always yields f(t) itself. But since HF domain operational calculus is somewhat approximate, the resulting function is expected to deviate from the HF domain representation of f(t).

Differentiating $\bar{F}(t)$ using DS(4) and DT(4), we have

$$\begin{aligned} \tilde{f}(t) = \frac{\mathrm{d}}{\mathrm{d}t}\left[\bar{F}(t)\right] \triangleq f(t)_{ID,1} =\; & \frac{h}{2}\,[\,0\;\;(c_0+c_1)\;\;\{(c_0+c_1)+(c_1+c_2)\}\;\;\{(c_0+c_1)+(c_1+c_2)+(c_2+c_3)\}\,]\,DS_{(4)}\,S_{(4)}\\ & + \frac{h}{2}\,[\,(c_0+c_1)\;\;(c_1+c_2)\;\;(c_2+c_3)\;\;(c_3+c_4)\,]\,DT_{(4)}\,T_{(4)} \end{aligned}$$

Substituting DS(4) and DT(4) from Eqs. (4.29) and (4.30), we have

$$\begin{aligned} f(t)_{ID,1} =\; & \frac{1}{2}\,[\,(c_0+c_1)\;\;(c_1+c_2)\;\;(c_2+c_3)\;\;(c_3+c_4)\,]\,S_{(4)}\\ & + \frac{1}{2}\,[\,\{(c_1+c_2)-(c_0+c_1)\}\;\;\{(c_2+c_3)-(c_1+c_2)\}\;\;\{(c_3+c_4)-(c_2+c_3)\}\;\;\{(c_4+c_5)-(c_3+c_4)\}\,]\,T_{(4)}\\ \triangleq\; & [\,c_0'\;\;c_1'\;\;c_2'\;\;c_3'\,]\,S_{(4)} + [\,(c_1'-c_0')\;\;(c_2'-c_1')\;\;(c_3'-c_2')\;\;(c_4'-c_3')\,]\,T_{(4)} \end{aligned} \qquad (4.41)$$

where,

$$c_0' = \tfrac{1}{2}(c_0+c_1),\quad c_1' = \tfrac{1}{2}(c_1+c_2),\quad c_2' = \tfrac{1}{2}(c_2+c_3),\quad c_3' = \tfrac{1}{2}(c_3+c_4)\quad\text{and}\quad c_4' = \tfrac{1}{2}(c_4+c_5)$$

The result obtained in Eq. (4.41) is somewhat deviated from the original HF domain expansion of f(t). Similarly, a subsequent I-D operation upon the function $f(t)_{ID,1}$ produces a function still more deviated from it.

For two subsequent I-D operations on the function f(t), namely $f(t)_{ID,2}$, the result is

$$f(t)_{ID,2} \triangleq [\,c_0''\;\;c_1''\;\;c_2''\;\;c_3''\,]\,S_{(4)} + [\,(c_1''-c_0'')\;\;(c_2''-c_1'')\;\;(c_3''-c_2'')\;\;(c_4''-c_3'')\,]\,T_{(4)} \qquad (4.42)$$

where,

$$\begin{aligned} c_0'' &= \tfrac{1}{4}\{(c_0+c_1)+(c_1+c_2)\} = \tfrac{1}{4}(c_0+2c_1+c_2)\\ c_1'' &= \tfrac{1}{4}\{(c_1+c_2)+(c_2+c_3)\} = \tfrac{1}{4}(c_1+2c_2+c_3)\\ c_2'' &= \tfrac{1}{4}\{(c_2+c_3)+(c_3+c_4)\} = \tfrac{1}{4}(c_2+2c_3+c_4)\\ c_3'' &= \tfrac{1}{4}\{(c_3+c_4)+(c_4+c_5)\} = \tfrac{1}{4}(c_3+2c_4+c_5)\\ \text{and}\quad c_4'' &= \tfrac{1}{4}\{(c_4+c_5)+(c_5+c_6)\} = \tfrac{1}{4}(c_4+2c_5+c_6) \end{aligned}$$

After three such operations, we have

$$f(t)_{ID,3} \triangleq [\,c_0'''\;\;c_1'''\;\;c_2'''\;\;c_3'''\,]\,S_{(4)} + [\,(c_1'''-c_0''')\;\;(c_2'''-c_1''')\;\;(c_3'''-c_2''')\;\;(c_4'''-c_3''')\,]\,T_{(4)} \qquad (4.43)$$

where,

$$\begin{aligned} c_0''' &= \tfrac{1}{2}\left[\tfrac{1}{4}\{(c_0+c_1)+(c_1+c_2)\} + \tfrac{1}{4}\{(c_1+c_2)+(c_2+c_3)\}\right] = \tfrac{1}{8}(c_0+3c_1+3c_2+c_3)\\ c_1''' &= \tfrac{1}{2}\left[\tfrac{1}{4}\{(c_1+c_2)+(c_2+c_3)\} + \tfrac{1}{4}\{(c_2+c_3)+(c_3+c_4)\}\right] = \tfrac{1}{8}(c_1+3c_2+3c_3+c_4)\\ c_2''' &= \tfrac{1}{2}\left[\tfrac{1}{4}\{(c_2+c_3)+(c_3+c_4)\} + \tfrac{1}{4}\{(c_3+c_4)+(c_4+c_5)\}\right] = \tfrac{1}{8}(c_2+3c_3+3c_4+c_5)\\ c_3''' &= \tfrac{1}{2}\left[\tfrac{1}{4}\{(c_3+c_4)+(c_4+c_5)\} + \tfrac{1}{4}\{(c_4+c_5)+(c_5+c_6)\}\right] = \tfrac{1}{8}(c_3+3c_4+3c_5+c_6)\\ c_4''' &= \tfrac{1}{2}\left[\tfrac{1}{4}\{(c_4+c_5)+(c_5+c_6)\} + \tfrac{1}{4}\{(c_5+c_6)+(c_6+c_7)\}\right] = \tfrac{1}{8}(c_4+3c_5+3c_6+c_7) \end{aligned}$$

From inspection of Eqs. (4.41), (4.42) and (4.43), we can write down the expression for the HF coefficients obtained after n-times repeated I-D operations in terms of the HF coefficients of the original function. Thus, the kth coefficients of the SHF components are governed by

$$c^{n}_{j}(k) = {}^{n}C_{j} = \frac{n!}{j!\,(n-j)!} \qquad \text{for } 0 \le j \le n \text{ and } 1 \le k \le (n+1) \qquad (4.44)$$

where n is the number of I-D operations executed, and $c_j$ is the coefficient of the jth element of the SHF coefficient matrix after n repeated I-D operations.
The coefficients for the TF components can easily be derived from Eq. (4.44).
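A small sketch of ours (not from Appendix B) that uses the binomial weights of Eq. (4.44): n repeated I-D operations replace each SHF coefficient by a binomial moving average of the original samples, which, for n = 5 and f(t) = t, reproduces the shifted values listed later in Table 4.3.

```matlab
% Hedged sketch: SHF coefficients after n repeated I-D operations, cf. Eq. (4.44).
n = 5; m = 10; T = 1; h = T/m;
c = (0:(m+n))*h;                       % samples of f(t) = t (extra ones for the shifts)
w = zeros(1,n+1);
for j = 0:n, w(j+1) = nchoosek(n,j); end
cn = conv(c, w, 'valid')/2^n;          % binomial moving average of the samples
cn = cn(1:m)                           % 0.25, 0.35, ..., 1.15 as in Table 4.3
```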
Now, let us define an index called 'Average of Mod of Percentage' (AMP) error, which is given by

$$\text{AMP error,}\quad \varepsilon_{av(r)} \triangleq \frac{\sum_{j=1}^{r} |\varepsilon_j|}{r} \qquad (4.45)$$

where r is the number of sample points, or number of items, or elements considered, and $\varepsilon_j$ is the percentage error at each sample point (or, the percentage error for each item or element, as the case may be).
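Equation (4.45) translates directly into a few lines of MATLAB; the sketch below is ours, with sample values taken from Table 4.3 purely for illustration (the t = 0 point is excluded since its percentage error is undefined).

```matlab
% Hedged sketch of the AMP error index of Eq. (4.45).
exact  = [0.1 0.2 0.3 0.4];                % reference samples (t = 0 excluded)
approx = [0.35 0.45 0.55 0.65];            % samples after repeated I-D operations
eps_j  = 100*(exact - approx)./exact;      % percentage error at each sample point
AMP    = mean(abs(eps_j))                  % average of the moduli
```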
Figure 4.11 shows increasing trend of the Average of Mod of Percentage
(AMP) error with subsequent I-D operations for a particular function f(t) = t, using
only ten equidistant samples of the function. Though this pattern is somewhat
function dependent, it is interesting to note that the pattern traces a ramp function.
When the starting function is considered to be f ðtÞ ¼ sinðptÞ, once again the
variation of the AMP error with number of I-D operations resembles the pattern of a
sine wave. This is shown in Fig. 4.12.
Table 4.3 compares the SHF coefficients of the original function f(t) = t with its
SHF coefficients obtained after five subsequent I-D operations for m = 10 and
T = 1 s. Also, the AMP error is computed.
From Table 4.3, the fourth sample was chosen arbitrarily and its shifting to more
erroneous zone due to subsequent I-D operations in HF domain is depicted in
Fig. 4.13.
Figure 4.14 shows the decaying nature of the AMP error with increasing number
of sub-intervals, for four subsequent I-D operations of a typical ramp function over
a time interval of T = 1 s.
Therefore, for a particular need of subsequent I-D operations, the error can be
reduced by considering a larger number of samples within the particular time span.

Fig. 4.11 Variation of AMP error for the function f(t) = t (for m = 10, T = 1 s) with number of successive I-D operations

Fig. 4.12 Variation of AMP error for the function f(t) = sin(πt) (m = 20, T = 1 s) with number of successive I-D operations (vide Appendix B, Program no. 14)

Figure 4.15 shows the deviation of the function f(t) = sin(πt) from its original
form with successive I-D operations.

4.5 Conclusion

In this chapter, the integration operational matrices for sample-and-hold functions and triangular functions are derived independently. Integration of the SHF part produces both SHF and TF components and the result is expressed in HF domain using two operational matrices, as shown in Eq. (4.8). Integration of the TF components, like that of the SHF part, also gives rise to both SHF and TF components, vide Eq. (4.16). In fact, a total of four operational matrices are used conjunctively to perform integration in the HF domain and the result of integration is again comprised of SHF and TF.

Table 4.3 Comparison of the SHF coefficients of the function f(t) = t before and after five successive I-D operations for m = 10 and T = 1 s

t (s)   Original SHF coefficients   SHF coefficients after five   % Error
        of f(t) = t                 subsequent I-D operations
0       0.0                         0.25                          –
0.1     0.1                         0.35                          −250.00
0.2     0.2                         0.45                          −125.00
0.3     0.3                         0.55                          −83.33
0.4     0.4                         0.65                          −62.50
0.5     0.5                         0.75                          −50.00
0.6     0.6                         0.85                          −41.67
0.7     0.7                         0.95                          −35.71
0.8     0.8                         1.05                          −31.25
0.9     0.9                         1.15                          −27.78

AMP error = 78.58

Fig. 4.13 Variation of a typical SHF coefficient 0.3 (vide Table 4.3) for the function f(t) = t for m = 10, T = 1 s with number of successive I-D operations

Fig. 4.14 Variation of AMP error for the function f(t) = t for four subsequent I-D operations with increasing m and T = 1 s

Fig. 4.15 Shifting of the original function f(t) = sin(πt) due to successive I-D operations for m = 20, T = 1 s (vide Appendix B, Program no. 15)

The integration operation is illustrated via a few examples. That is, the function
f1(t) = t has been integrated in an exact manner and the result is expanded in HF
domain.
Further, the function f1(t) = t is represented in HF domain and is then integrated
using Eq. (4.20). These two results are compared in Table 4.1. Figure 4.7 also
depicts this comparison for better clarity.
In Table 4.1, it is noted that the percentage error is zero. This implies that for linear functions, HF domain integration results are identical with the exact solutions.
However, for the second example, that is, the function f2(t) = sin(πt), shown in
Fig. 4.8, the results of integration via the two methods are not identical but very
close. Both the results are tabulated and compared in Table 4.2.
The hybrid function (HF) set has been used to derive the operational matrices for
differentiation as well. These matrices can be used for differentiation of functions in

hybrid function domain. The operational matrix DS(m) acts as the differentiation
matrix in sample-and-hold function (SHF) domain while DT(m) acts as the differ-
entiation matrix in triangular function (TF) domain. These matrices are presented in
Eqs. (4.32) and (4.33).
Figures 4.9 and 4.10 show graphically the application of differential operational
matrices for two typical time functions and also compare the same with respective
direct expansion in HF domain.
It is apparent that successive integration-differentiation (I-D) operation upon any
time function in HF domain accumulates error in the result. That is, we do not get
back the original time function as we do for exact I-D operation. The effect of HF
domain I-D operation is thus of interest and has been studied. Figures 4.11 and 4.12
show typical curves for accumulation of errors for such repeated I-D operations of
two different time functions, t and sin(πt).
The time function f(t) = t has been subjected to five successive I-D operations
considering ten sub-intervals over a period T = 1 s. For such successive operations
Table 4.3 shows the error accumulated at different sample points, i.e., the SHF
coefficients. As a typical case study, deviation of a sample 0.3 of the function after
each I-D operation, has been tracked. Figure 4.13 shows the locus of the sample
moving more and more into the erroneous zone. However, as is obvious, with
increasing number of sub-intervals within a fixed time period, the error reduces.
This is shown in Fig. 4.14 where for four successive I-D operations upon the
function f(t) = t, the AMP error goes down exponentially with increasing m.
Figure 4.15 shows deviation of a function f(t) = sin(πt) with successive I-D
operation for m = 20 and T = 1 s. It is noted that the original function shifts with
each I-D operation, but reasonably maintains the shape of the original. However,
this is merely function specific.

References

1. Jiang, J.H., Schaufelberger, W.: Block Pulse Functions and Their Application in Control System, LNCIS, vol. 179. Springer, Berlin (1992)
2. Deb, A., Sarkar, G., Sen, S.K.: Linearly pulse-width modulated block pulse functions and their application to linear SISO feedback control system identification. IEE Proc. Control Theory Appl. 142(1), 44–50 (1995)
3. Chen, C.F., Tsay, Y.T., Wu, T.T.: Walsh operational matrices for fractional calculus and their application to distributed systems. J. Franklin Inst. 303(3), 267–284 (1977)
4. Rao, G.P., Srinivasan, T.: Analysis and synthesis of dynamic systems containing time delays via block pulse functions. Proc. IEE 125(9), 1064–1068 (1978)
5. Deb, A., Sarkar, G., Bhattacharjee, M., Sen, S.K.: A new set of piecewise constant orthogonal functions for the analysis of linear SISO systems with sample-and-hold. J. Franklin Inst. 335B(2), 333–358 (1998)
6. Deb, A., Sarkar, G., Sengupta, A.: Triangular Orthogonal Functions for the Analysis of Continuous Time Systems. Anthem Press, London (2011)
7. Deb, A., Sarkar, G., Dasgupta, A.: A complementary pair of orthogonal triangular function sets and its application to the analysis of SISO control systems. J. Inst. Eng. (India) 84, 120–129 (2003)
8. Deb, A., Dasgupta, A., Sarkar, G.: A complementary pair of orthogonal triangular function sets and its application to the analysis of dynamic systems. J. Franklin Inst. 343(1), 1–26 (2006)
9. Deb, A., Sarkar, G., Mandal, P., Biswas, A., Ganguly, A., Biswas, D.: Transfer function identification from impulse response via a new set of orthogonal hybrid function (HF). Appl. Math. Comput. 218(9), 4760–4787 (2012)
10. Deb, A., Sarkar, G., Ganguly, A., Biswas, A.: Approximation, integration and differentiation of time functions using a set of orthogonal hybrid functions (HF) and their application to solution of first order differential equations. Appl. Math. Comput. 218(9), 4731–4759 (2012)
Chapter 5
One-Shot Operational Matrices
for Integration

Abstract This chapter is devoted to developing the theory of one-shot operational matrices. These matrices are useful for multiple integration and, in general, are superior to repeated integration using the first order integration matrices. The theory of one-shot operational matrices is presented and the one-shot operational matrices of n-th order integration are derived. Three examples with nine figures and four tables elucidate the technique.

In this chapter, the hybrid function set has been utilized to derive one-shot operational matrices [1, 2] for integration of different orders in HF domain. These matrices are employed for more accurate multiple integrations. That is, in case of repeated integrations, one may use the first order integration matrices, derived in Chap. 4, repeatedly. But this will lead to accumulation of errors at each stage, and finally such error may disqualify the result from any further use.

In case of Walsh [1] and block pulse functions [2, 3], such one-shot matrices were derived, and accumulation of errors was avoided. For hybrid functions, things are a bit different because the hybrid function domain theory always deals with function samples. But in this case also, the accumulation of errors is avoided and much better results are obtained.

After derivation of the one-shot matrices of different orders in HF domain, they are used in many numerical examples to bring out the difference in computations of multiple integrals by using (a) the first order integration matrices repeatedly and (b) the one-shot matrices only once.

First of all, second and third order one-shot operational matrices are derived and then the general form of the (m × m) integration matrices for n times multiple integration is derived. Superiority of these matrices over the repeated use of first order integration matrices is strongly established from the examples treated herein.


5.1 Integration Using First Order HF Domain Integration Matrices

For first-order integration of the sample-and-hold [4] function component, referring to Eq. (4.9), we have

$$\int S_{(m)}\,\mathrm{d}t = P1ss_{(m)}\,S_{(m)} + P1st_{(m)}\,T_{(m)}$$

where,

$$P1ss_{(m)} \triangleq h\,[[\,0\;\;1\;\;\cdots\;\;1\;\;1\,]]_{(m\times m)}, \qquad P1st_{(m)} \triangleq h\,[[\,1\;\;0\;\;\cdots\;\;0\;\;0\,]]_{(m\times m)} \qquad (5.1)$$

Similarly, for first-order integration of the triangular function component [5, 6], referring to Eq. (4.18), we have

$$\int T_{(m)}\,\mathrm{d}t = P1ts_{(m)}\,S_{(m)} + P1tt_{(m)}\,T_{(m)}$$

where,

$$P1ts_{(m)} \triangleq \frac{h}{2}\,[[\,0\;\;1\;\;\cdots\;\;1\;\;1\,]]_{(m\times m)}, \qquad P1tt_{(m)} \triangleq \frac{h}{2}\,[[\,1\;\;0\;\;\cdots\;\;0\;\;0\,]]_{(m\times m)} \qquad (5.2)$$

Using Eqs. (4.9), (4.17) and (4.18), we get

$$\int T_{(m)}\,\mathrm{d}t = \frac{1}{2}\int S_{(m)}\,\mathrm{d}t \qquad (5.3)$$

We know that the operational matrix for integration in block pulse function domain is given by [2]

$$P_{(m)} \triangleq h\,[[\,\underbrace{\tfrac{1}{2}\;\;1\;\;1\;\;\cdots\;\;1}_{m\ \text{terms}}\,]] \qquad (5.4)$$

Using Eqs. (5.1) and (5.2), we can write the following relations:

$$P1ss_{(m)} + \frac{P1st_{(m)}}{2} = 2\,P1ts_{(m)} + P1tt_{(m)} = P_{(m)} \qquad (5.5)$$

If a square integrable function f(t) is expanded in hybrid function [7] domain as per Eq. (4.19), we can write

$$f(t) \approx [\,c_0\;\;c_1\;\;c_2\;\;\cdots\;\;c_{m-1}\,]\,S_{(m)}(t) + [\,(c_1-c_0)\;\;(c_2-c_1)\;\;(c_3-c_2)\;\;\cdots\;\;(c_m-c_{m-1})\,]\,T_{(m)}(t) \triangleq C_S^{\mathrm{T}}\,S_{(m)}(t) + C_T^{\mathrm{T}}\,T_{(m)}(t) \qquad (5.6)$$

where T denotes transpose.

Then, integrating the time function f(t) with respect to t and referring to Eq. (4.20), we have

$$\int f(t)\,\mathrm{d}t \approx \left(C_S^{\mathrm{T}} + \tfrac{1}{2}\,C_T^{\mathrm{T}}\right)\left(P1ss_{(m)}\,S_{(m)} + P1st_{(m)}\,T_{(m)}\right) \qquad (5.7)$$

5.2 Repeated Integration Using First Order HF Domain Integration Matrices

Already we know that

$$\int S_{(m)}\,\mathrm{d}t = P1ss_{(m)}\,S_{(m)} + P1st_{(m)}\,T_{(m)}$$

and

$$\int T_{(m)}\,\mathrm{d}t = P1ts_{(m)}\,S_{(m)} + P1tt_{(m)}\,T_{(m)}$$

So we can write

$$\begin{aligned} \iint S_{(m)}\,\mathrm{d}t &= P1ss_{(m)}\int S_{(m)}\,\mathrm{d}t + P1st_{(m)}\int T_{(m)}\,\mathrm{d}t\\ &= P1ss^{2}_{(m)}\,S_{(m)} + P1ss_{(m)}P1st_{(m)}\,T_{(m)} + P1st_{(m)}P1ts_{(m)}\,S_{(m)} + P1st_{(m)}P1tt_{(m)}\,T_{(m)} \end{aligned} \qquad (5.8)$$

Using the relations (5.3) and (5.5) in (5.8), we get

$$\iint S_{(m)}\,\mathrm{d}t = \left(P1ss_{(m)} + \frac{P1st_{(m)}}{2}\right)\left(P1ss_{(m)}\,S_{(m)} + P1st_{(m)}\,T_{(m)}\right) = P_{(m)}\int S_{(m)}\,\mathrm{d}t \qquad (5.9)$$

With n times repeated integration of the $S_{(m)}$ vector, we get

$$\underbrace{\iiint\cdots\int}_{n} S_{(m)}\,\mathrm{d}t = P^{(n-1)}_{(m)}\int S_{(m)}\,\mathrm{d}t \qquad \text{where } n = 2, 3, 4, \ldots \qquad (5.10)$$

Similarly, repeated integration of the $T_{(m)}$ vector gives

$$\underbrace{\iiint\cdots\int}_{n} T_{(m)}\,\mathrm{d}t = P^{(n-1)}_{(m)}\int T_{(m)}\,\mathrm{d}t = \frac{P^{(n-1)}_{(m)}}{2}\int S_{(m)}\,\mathrm{d}t \qquad \text{where } n = 2, 3, 4, \ldots \qquad (5.11)$$

That is, for n times repeated integration, Eq. (5.11) takes the following form

$$\underbrace{\iiint\cdots\int}_{n} T_{(m)}\,\mathrm{d}t = \frac{1}{2}\underbrace{\iiint\cdots\int}_{n} S_{(m)}\,\mathrm{d}t \qquad \text{for } n = 1, 2, 3, \ldots \qquad (5.12)$$
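The identity in Eq. (5.9) can also be verified numerically; the short sketch below (ours) builds the first order matrices for m = 8 and confirms that repeated use of the first order matrices is equivalent to multiplying by P(m).

```matlab
% Hedged numerical check of Eq. (5.9) using the first order HF matrices.
m = 8; T = 1; h = T/m;
P1ss = h*(triu(ones(m)) - eye(m));   P1st = h*eye(m);
P1ts = P1ss/2;                       P1tt = P1st/2;
P    = h*(triu(ones(m)) - eye(m)/2);             % BPF matrix of Eq. (5.4)
errS = norm(P1ss^2 + P1st*P1ts - P*P1ss)         % ~0: SHF part of (5.8) vs (5.9)
errT = norm(P1ss*P1st + P1st*P1tt - P*P1st)      % ~0: TF part
```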

5.3 One-Shot Integration Operational Matrices for Repeated Integration [8]

It is noted from Fig. 4.8 and Table 4.2 that first order integration using the operational matrices P1ss, P1st, P1ts and P1tt is somewhat approximate. If we carry on repeated integration using these matrices, error will be introduced at each integration stage and such accumulated error may corrupt the final result. Thus, higher order integrations in HF domain may become so corrupted that the effort may lead to a fiasco.

For this reason, we present in the following one-shot operational matrices of different orders of integration, suitable for computation of function integration with improved accuracy.

The basic principle of determination of the one-shot operational matrices for integration is elaborated by the following steps:
(i) Integrate the sample-and-hold basis function set repeatedly n times. Find out
the samples of the n times integrated curves.
(ii) From these samples, form corresponding sample-and-hold function coefficient
row matrices as well as the triangular function coefficient row matrices. That
is, the n times integrated sample-and-hold function is expressed in HF domain.
(iii) Integrate the triangular basis function set repeatedly n times. Find out the
samples of the n times integrated curves.
(iv) From these samples, form corresponding sample-and-hold function coefficient row matrices and the triangular function coefficient row matrices. That is, the n times integrated triangular function is thus expressed in HF domain.
(v) From the above steps, form one-shot operational matrices of n-th order integration.

5.3.1 One-Shot Operational Matrices for Sample-and-Hold Functions

To improve accuracy of higher order integrations in hybrid function domain, we develop one-shot integration matrices both for the sample-and-hold function set and the triangular function set, since the hybrid function set is comprised of these sets. As discussed in the earlier section, the integration matrices P1ss and P1st are essentially the 'one-shot operational matrices for single integration' from the SHF set. For multiple integrations, instead of using these matrices repeatedly, one-shot matrices of different orders of integration are derived to obtain improved accuracy. These one-shot matrices from the SHF set are presented in the following.

5.3.1.1 Second Order One-Shot Matrices

Referring to Sect. 4.1.1 and Fig. 4.1c, decomposition of the first integration $\int S_0\,\mathrm{d}t$ into two ramp functions is shown in Fig. 5.1a. Their subsequent integration produces two parabolic functions as shown in Fig. 5.1b. Finally, Fig. 5.2 depicts the resulting function $\iint S_0\,\mathrm{d}t$.

Mathematically it can be represented as

$$\iint S_0\,\mathrm{d}t = \frac{t^{2}}{2} - \frac{(t-h)^{2}}{2}\,u(t-h)$$

Fig. 5.1 Decomposition of a the first integration and b the double integration of the first member S0

Fig. 5.2 Double integration of the first member S0 of the SHF set

The samples of the above double integrated function at sampling instants 0, h, 2h, 3h and 4h are

$$0,\quad \frac{h^{2}}{2} - \frac{(h-h)^{2}}{2},\quad \frac{(2h)^{2}}{2} - \frac{(2h-h)^{2}}{2},\quad \frac{(3h)^{2}}{2} - \frac{(3h-h)^{2}}{2},\quad \frac{(4h)^{2}}{2} - \frac{(4h-h)^{2}}{2}$$

respectively.
As in Eq. (3.6), the first four samples of the function are the coefficients of the SHF components, while differences of the consecutive samples provide the coefficients of the TF components.
From these samples we develop the one-shot operational matrices P2ss and P2st for double integration, considering m = 4, as

$$\iint S_{(4)}\,\mathrm{d}t = P2ss_{(4)}\,S_{(4)} + P2st_{(4)}\,T_{(4)} \qquad (5.13)$$

where

$$P2ss_{(4)} \triangleq \frac{h^{2}}{2!}\,[[\,0\;\;(1^{2}-0^{2})\;\;(2^{2}-1^{2})\;\;(3^{2}-2^{2})\,]]$$

and

$$P2st_{(4)} \triangleq \frac{h^{2}}{2!}\,[[\,1\;\;\{(2^{2}-1^{2})-(1^{2}-0^{2})\}\;\;\{(3^{2}-2^{2})-(2^{2}-1^{2})\}\;\;\{(4^{2}-3^{2})-(3^{2}-2^{2})\}\,]]$$

Following the above pattern, the generalized one-shot operational matrices for m terms for double integration are

Fig. 5.3 Triple integration of the first member S0 of the SHF set

$$\left.\begin{aligned} P2ss_{(m)} &\triangleq \frac{h^{2}}{2!}\,[[\,0\;\;(1^{2}-0^{2})\;\;(2^{2}-1^{2})\;\;(3^{2}-2^{2})\;\;\cdots\;\;\{(m-1)^{2}-(m-2)^{2}\}\,]]_{(m\times m)}\\ \text{and}\quad P2st_{(m)} &\triangleq \frac{h^{2}}{2!}\,[[\,1\;\;\{(2^{2}-1^{2})-(1^{2}-0^{2})\}\;\;\{(3^{2}-2^{2})-(2^{2}-1^{2})\}\;\;\cdots\;\;\{(m^{2}-(m-1)^{2})-((m-1)^{2}-(m-2)^{2})\}\,]]_{(m\times m)} \end{aligned}\right\} \qquad (5.14)$$

5.3.1.2 Third Order One-Shot Matrices

The first member S0 of the SHF set is integrated thrice and Fig. 5.3 shows the integrated function $\iiint S_0\,\mathrm{d}t$. Mathematically it is represented as

$$\iiint S_0\,\mathrm{d}t = \frac{t^{3}}{6} - \frac{(t-h)^{3}}{6}\,u(t-h)$$

The samples of the resulting function at sampling instants 0, h, 2h, 3h and 4h are given as

$$0,\quad \frac{h^{3}}{6} - \frac{(h-h)^{3}}{6},\quad \frac{(2h)^{3}}{6} - \frac{(2h-h)^{3}}{6},\quad \frac{(3h)^{3}}{6} - \frac{(3h-h)^{3}}{6} \quad\text{and}\quad \frac{(4h)^{3}}{6} - \frac{(4h-h)^{3}}{6}$$

respectively.

From these samples, the one-shot operational matrices P3ss and P3st for three consecutive integrations, considering m = 4, can be developed as follows:

$$\iiint S_{(4)}\,\mathrm{d}t = P3ss_{(4)}\,S_{(4)} + P3st_{(4)}\,T_{(4)} \qquad (5.15)$$

where,

$$P3ss_{(4)} \triangleq \frac{h^{3}}{3!}\,[[\,0\;\;(1^{3}-0^{3})\;\;(2^{3}-1^{3})\;\;(3^{3}-2^{3})\,]]$$

and

$$P3st_{(4)} \triangleq \frac{h^{3}}{3!}\,[[\,1\;\;\{(2^{3}-1^{3})-(1^{3}-0^{3})\}\;\;\{(3^{3}-2^{3})-(2^{3}-1^{3})\}\;\;\{(4^{3}-3^{3})-(3^{3}-2^{3})\}\,]]$$

For m terms, the generalized one-shot operational matrices for triple integration are

$$\left.\begin{aligned} P3ss_{(m)} &\triangleq \frac{h^{3}}{3!}\,[[\,0\;\;(1^{3}-0^{3})\;\;(2^{3}-1^{3})\;\;(3^{3}-2^{3})\;\;\cdots\;\;\{(m-1)^{3}-(m-2)^{3}\}\,]]_{(m\times m)}\\ \text{and}\quad P3st_{(m)} &\triangleq \frac{h^{3}}{3!}\,[[\,1\;\;\{(2^{3}-1^{3})-(1^{3}-0^{3})\}\;\;\{(3^{3}-2^{3})-(2^{3}-1^{3})\}\;\;\cdots\;\;\{(m^{3}-(m-1)^{3})-((m-1)^{3}-(m-2)^{3})\}\,]]_{(m\times m)} \end{aligned}\right\} \qquad (5.16)$$

5.3.1.3 n-th Order One-Shot Matrices

Now considering n times repeated integration, and proceeding via a similar track, we can write the one-shot operational matrices for n times repeated integration for sample-and-hold functions as

$$\left.\begin{aligned} Pnss_{(m)} &\triangleq \frac{h^{n}}{n!}\,[[\,0\;\;(1^{n}-0^{n})\;\;(2^{n}-1^{n})\;\;(3^{n}-2^{n})\;\;\cdots\;\;\{(m-1)^{n}-(m-2)^{n}\}\,]]_{(m\times m)}\\ \text{and}\quad Pnst_{(m)} &\triangleq \frac{h^{n}}{n!}\,[[\,1\;\;\{(2^{n}-1^{n})-(1^{n}-0^{n})\}\;\;\{(3^{n}-2^{n})-(2^{n}-1^{n})\}\;\;\cdots\;\;\{(m^{n}-(m-1)^{n})-((m-1)^{n}-(m-2)^{n})\}\,]]_{(m\times m)} \end{aligned}\right\} \qquad (5.17)$$

where $n, m \ge 2$.
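For programming purposes, the general matrices of Eq. (5.17) are conveniently generated row-wise; the sketch below is ours (using MATLAB's toeplitz to build the regular upper triangular structure) and is not taken from Appendix B.

```matlab
% Hedged sketch of Eq. (5.17): n-th order one-shot matrices for the SHF part.
n = 2; m = 10; T = 1; h = T/m;
k  = 0:m;
g  = diff(k.^n);                       % (1^n-0^n), (2^n-1^n), ..., (m^n-(m-1)^n)
r1 = [0, g(1:m-1)];                    % first row of Pnss
r2 = [1, diff(g)];                     % first row of Pnst
Pnss = (h^n/factorial(n))*toeplitz([r1(1); zeros(m-1,1)], r1);
Pnst = (h^n/factorial(n))*toeplitz([r2(1); zeros(m-1,1)], r2);
```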

5.3.2 One-Shot Operational Matrices for Triangular Functions

To develop the one-shot integration matrices for the TF set, we proceed as in Sect. 5.3.1.

Fig. 5.4 Double integration of the first member T0 of the triangular function set

The integration matrices P1ts and P1tt are essentially the ‘one-shot operational
matrices for single integration’ for the TF set. For multiple integrations, instead of
using these matrices repeatedly, one-shot matrices of different orders of integration
are derived to obtain improved accuracy. These one-shot matrices from TF set are
presented in the following.

5.3.2.1 Second Order One-Shot Matrices

The first member T0 of the triangular function set is integrated twice and Fig. 5.4 shows the integrated function $\iint T_0\,\mathrm{d}t$. Mathematically, it can be represented as

$$\iint T_0\,\mathrm{d}t = \frac{1}{h}\frac{t^{3}}{6} - \frac{1}{h}\frac{(t-h)^{3}}{6}\,u(t-h) - \frac{(t-h)^{2}}{2}\,u(t-h)$$

The samples of the resulting function at sampling instants 0, h, 2h, 3h and 4h are

$$0,\quad \frac{1}{h}\frac{h^{3}}{6} - \frac{1}{h}\frac{(h-h)^{3}}{6} - \frac{(h-h)^{2}}{2},\quad \frac{1}{h}\frac{(2h)^{3}}{6} - \frac{1}{h}\frac{(2h-h)^{3}}{6} - \frac{(2h-h)^{2}}{2},$$
$$\frac{1}{h}\frac{(3h)^{3}}{6} - \frac{1}{h}\frac{(3h-h)^{3}}{6} - \frac{(3h-h)^{2}}{2},\quad \frac{1}{h}\frac{(4h)^{3}}{6} - \frac{1}{h}\frac{(4h-h)^{3}}{6} - \frac{(4h-h)^{2}}{2}$$

respectively.

From these samples we develop the one-shot operational matrices P2ts and P2tt for double integration with m = 4, as follows.

$$\iint T_{(4)}\,\mathrm{d}t = P2ts_{(4)}\,S_{(4)} + P2tt_{(4)}\,T_{(4)} \qquad (5.18)$$

where,

$$P2ts_{(4)} \triangleq \frac{h^{2}}{(2+1)!}\,[[\,0\;\;1\;\;(2^{3}-1^{3}-3\cdot 1^{2})\;\;(3^{3}-2^{3}-3\cdot 2^{2})\,]]$$

and

$$P2tt_{(4)} \triangleq \frac{h^{2}}{(2+1)!}\,[[\,1\;\;\{(2^{3}-1^{3})-(1^{3}-0^{3})-3(1^{2}-0^{2})\}\;\;\{(3^{3}-2^{3})-(2^{3}-1^{3})-3(2^{2}-1^{2})\}\;\;\{(4^{3}-3^{3})-(3^{3}-2^{3})-3(3^{2}-2^{2})\}\,]]$$

For m terms, the generalized one-shot operational matrices for double integration are:

$$\left.\begin{aligned} P2ts_{(m)} &\triangleq \frac{h^{2}}{(2+1)!}\,[[\,0\;\;1\;\;(2^{3}-1^{3}-3\cdot 1^{2})\;\;(3^{3}-2^{3}-3\cdot 2^{2})\;\;\cdots\;\;\{(m-1)^{3}-(m-2)^{3}-3(m-2)^{2}\}\,]]_{(m\times m)}\\ \text{and}\quad P2tt_{(m)} &\triangleq \frac{h^{2}}{(2+1)!}\,[[\,1\;\;\{(2^{3}-1^{3})-(1^{3}-0^{3})-3(1^{2}-0^{2})\}\;\;\{(3^{3}-2^{3})-(2^{3}-1^{3})-3(2^{2}-1^{2})\}\;\;\cdots\\ &\qquad\qquad \{(m^{3}-(m-1)^{3})-((m-1)^{3}-(m-2)^{3})-3((m-1)^{2}-(m-2)^{2})\}\,]]_{(m\times m)} \end{aligned}\right\} \qquad (5.19)$$

5.3.2.2 Third Order One-Shot Matrices

The first member T0 of the triangular function set is repeatedly integrated thrice and Fig. 5.5 shows the integrated function $\iiint T_0\,\mathrm{d}t$, while its magnified view is shown in Fig. 5.6.

Mathematically, $\iiint T_0\,\mathrm{d}t$ can be represented as

$$\iiint T_0\,\mathrm{d}t = \frac{1}{h}\frac{t^{4}}{24} - \frac{1}{h}\frac{(t-h)^{4}}{24}\,u(t-h) - \frac{(t-h)^{3}}{6}\,u(t-h)$$

The samples of the resulting function at sampling instants 0, h, 2h, 3h and 4h are

$$0,\quad \frac{1}{h}\frac{h^{4}}{24} - \frac{1}{h}\frac{(h-h)^{4}}{24} - \frac{(h-h)^{3}}{6},\quad \frac{1}{h}\frac{(2h)^{4}}{24} - \frac{1}{h}\frac{(2h-h)^{4}}{24} - \frac{(2h-h)^{3}}{6},$$
$$\frac{1}{h}\frac{(3h)^{4}}{24} - \frac{1}{h}\frac{(3h-h)^{4}}{24} - \frac{(3h-h)^{3}}{6},\quad \frac{1}{h}\frac{(4h)^{4}}{24} - \frac{1}{h}\frac{(4h-h)^{4}}{24} - \frac{(4h-h)^{3}}{6}$$

respectively.

The one-shot operational matrices P3ts and P3tt for three times repeated integrations with m = 4 can be developed from these samples as follows.

$$\iiint T_{(4)}\,\mathrm{d}t = P3ts_{(4)}\,S_{(4)} + P3tt_{(4)}\,T_{(4)} \qquad (5.20)$$

Fig. 5.5 Triple integration of the first member T0 of the triangular function set

Fig. 5.6 Magnified view of the triple integration of the first member T0 of the triangular function set

where

$$P3ts_{(4)} \triangleq \frac{h^{3}}{(3+1)!}\,[[\,0\;\;1\;\;(2^{4}-1^{4}-4\cdot 1^{3})\;\;(3^{4}-2^{4}-4\cdot 2^{3})\,]]$$

and

$$P3tt_{(4)} \triangleq \frac{h^{3}}{(3+1)!}\,[[\,1\;\;\{(2^{4}-1^{4})-(1^{4}-0^{4})-4(1^{3}-0^{3})\}\;\;\{(3^{4}-2^{4})-(2^{4}-1^{4})-4(2^{3}-1^{3})\}\;\;\{(4^{4}-3^{4})-(3^{4}-2^{4})-4(3^{3}-2^{3})\}\,]]$$

For m terms, the generalized one-shot operational matrices for triple integration are

$$\left.\begin{aligned} P3ts_{(m)} &\triangleq \frac{h^{3}}{(3+1)!}\,[[\,0\;\;1\;\;(2^{4}-1^{4}-4\cdot 1^{3})\;\;(3^{4}-2^{4}-4\cdot 2^{3})\;\;\cdots\;\;\{(m-1)^{4}-(m-2)^{4}-4(m-2)^{3}\}\,]]_{(m\times m)}\\ \text{and}\quad P3tt_{(m)} &\triangleq \frac{h^{3}}{(3+1)!}\,[[\,1\;\;\{(2^{4}-1^{4})-(1^{4}-0^{4})-4(1^{3}-0^{3})\}\;\;\{(3^{4}-2^{4})-(2^{4}-1^{4})-4(2^{3}-1^{3})\}\;\;\cdots\\ &\qquad\qquad \{(m^{4}-(m-1)^{4})-((m-1)^{4}-(m-2)^{4})-4((m-1)^{3}-(m-2)^{3})\}\,]]_{(m\times m)} \end{aligned}\right\} \qquad (5.21)$$

5.3.2.3 n-th Order One-Shot Matrices

Now considering n times repeated integration, and following a similar track, we can write the one-shot operational matrices for n times repeated integration for triangular functions as

$$\left.\begin{aligned} Pnts_{(m)} &\triangleq \frac{h^{n}}{(n+1)!}\,[[\,0\;\;1\;\;\{2^{(n+1)}-1^{(n+1)}-(n+1)\cdot 1^{n}\}\;\;\{3^{(n+1)}-2^{(n+1)}-(n+1)\cdot 2^{n}\}\;\;\cdots\\ &\qquad\qquad \{(m-1)^{(n+1)}-(m-2)^{(n+1)}-(n+1)(m-2)^{n}\}\,]]_{(m\times m)}\\ Pntt_{(m)} &\triangleq \frac{h^{n}}{(n+1)!}\,[[\,1\;\;\{(2^{(n+1)}-1^{(n+1)})-(1^{(n+1)}-0^{(n+1)})-(n+1)(1^{n}-0^{n})\}\;\;\cdots\\ &\qquad\qquad \{(m^{(n+1)}-(m-1)^{(n+1)})-((m-1)^{(n+1)}-(m-2)^{(n+1)})-(n+1)((m-1)^{n}-(m-2)^{n})\}\,]]_{(m\times m)} \end{aligned}\right\} \qquad (5.22)$$

where $n, m \ge 2$.
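A companion sketch (ours, with illustrative variable names) for the TF part matrices of Eq. (5.22), built in the same row-wise manner as the SHF part sketch above:

```matlab
% Hedged sketch of Eq. (5.22): n-th order one-shot matrices for the TF part.
n = 2; m = 10; T = 1; h = T/m;
k  = 0:m;
g  = diff(k.^(n+1));                         % first differences of k^(n+1)
p  = (n+1)*((0:m-2).^n);                     % (n+1)*0^n, (n+1)*1^n, ..., (n+1)*(m-2)^n
r1 = [0, 1, g(2:m-1) - p(2:m-1)];            % first row of Pnts
r2 = [1, diff(g) - (n+1)*diff((0:m-1).^n)];  % first row of Pntt
col = @(r) [r(1); zeros(m-1,1)];
Pnts = (h^n/factorial(n+1))*toeplitz(col(r1), r1);
Pntt = (h^n/factorial(n+1))*toeplitz(col(r2), r2);
```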

5.3.3 One-Shot Integration Operational Matrices in HF Domain: A Combination of SHF Domain and TF Domain One-Shot Operational Matrices

In Sects. 5.3.1 and 5.3.2, we have constructed the one-shot integration operational matrices both for the sample-and-hold functions and the triangular functions. With the help of these one-shot operational matrices, we can perform repeated integration in HF domain with much better accuracy. First of all, the function to be integrated is described in HF domain and then the one-shot matrices are applied to obtain the desired degree of integration with higher accuracy.

Let us consider a function f(t), to be integrated, defined in HF domain as

$$f(t) \approx C_S^{\mathrm{T}}\,S_{(m)} + C_T^{\mathrm{T}}\,T_{(m)}$$

Referring to Sects. 5.3.1 and 5.3.2, and using the higher-order one-shot operational matrices for improved accuracy, the repeated integrations of the function f(t) can be expressed as follows:

For double integration,

$$\begin{aligned} \iint f(t)\,\mathrm{d}t &\approx \iint \left(C_S^{\mathrm{T}}\,S_{(m)} + C_T^{\mathrm{T}}\,T_{(m)}\right)\mathrm{d}t = C_S^{\mathrm{T}}\iint S_{(m)}\,\mathrm{d}t + C_T^{\mathrm{T}}\iint T_{(m)}\,\mathrm{d}t\\ &= C_S^{\mathrm{T}}\left(P2ss_{(m)}\,S_{(m)} + P2st_{(m)}\,T_{(m)}\right) + C_T^{\mathrm{T}}\left(P2ts_{(m)}\,S_{(m)} + P2tt_{(m)}\,T_{(m)}\right)\\ &= \left(C_S^{\mathrm{T}}\,P2ss_{(m)} + C_T^{\mathrm{T}}\,P2ts_{(m)}\right)S_{(m)} + \left(C_S^{\mathrm{T}}\,P2st_{(m)} + C_T^{\mathrm{T}}\,P2tt_{(m)}\right)T_{(m)} \end{aligned} \qquad (5.23)$$

Similarly, triple integration of the function f(t) can be expressed as

$$\iiint f(t)\,\mathrm{d}t \approx \left(C_S^{\mathrm{T}}\,P3ss_{(m)} + C_T^{\mathrm{T}}\,P3ts_{(m)}\right)S_{(m)} + \left(C_S^{\mathrm{T}}\,P3st_{(m)} + C_T^{\mathrm{T}}\,P3tt_{(m)}\right)T_{(m)} \qquad (5.24)$$

In a similar track, using the higher-order one-shot matrices, the n-times repeated integration can be mathematically expressed as

$$\underbrace{\iiint\cdots\int}_{n} f(t)\,\mathrm{d}t \approx \left(C_S^{\mathrm{T}}\,Pnss_{(m)} + C_T^{\mathrm{T}}\,Pnts_{(m)}\right)S_{(m)} + \left(C_S^{\mathrm{T}}\,Pnst_{(m)} + C_T^{\mathrm{T}}\,Pntt_{(m)}\right)T_{(m)} \qquad (5.25)$$

The process is illustrated in detail through the following numerical examples.
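As a preview, a short sketch of Eq. (5.23) in MATLAB is given below. The sketch is ours; it assumes the second order one-shot matrices P2ss, P2st, P2ts and P2tt of size (m × m) have already been generated, for instance with the earlier sketches run for n = 2.

```matlab
% Hedged sketch of Eq. (5.23): double integration of f(t) = t in HF domain.
% Assumes P2ss, P2st, P2ts, P2tt of size (m x m) are already available.
m = 10; T = 1; h = T/m; t = (0:m)*h;
c  = t;                               % samples of f(t) = t
CS = c(1:m);  CT = diff(c);           % HF coefficients of f(t)
II_S = CS*P2ss + CT*P2ts;             % SHF coefficients of the double integral
II_T = CS*P2st + CT*P2tt;             % TF coefficients of the double integral
exactS = (t(1:m).^3)/6;               % samples of t^3/6, for comparison
```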

5.4 Two Theorems [8]

It should be noted that all the operational matrices P, P1ss, P1st, P1ts, P1tt, P2ss, P2st, P2ts, P2tt, P3ss, P3st, P3ts, P3tt, …, Pnss, Pnst, Pnts, Pntt are of regular upper triangular nature and may be represented by S having the following general form:

$$\mathbf{S} = \sum_{n=0}^{j} a_n\,Q^{n}$$

where the delay matrix Q [12] is given by

$$Q_{(m)} \triangleq [[\,\underbrace{0\;\;1\;\;0\;\;0\;\;\cdots\;\;0}_{m\ \text{terms}}\,]]$$

We present the following two theorems regarding the commutative property of matrices of class S and its polynomials.

Theorem 1 If a regular upper triangular matrix S of order m can be expressed as

$$\mathbf{S}_{(m)} = \sum_{n=0}^{j} a_n\,Q^{n}_{(m)}$$

where the coefficients $a_n$ are constants and $j \le (m-1)$, then the product of two matrices S1 and S2, similar to S, raised to different integral powers p and q, is always commutative and of the form

$$\mathbf{S1}^{p}_{(m)}\,\mathbf{S2}^{q}_{(m)} = \sum_{n=0}^{k} c_n\,Q^{n}_{(m)}$$

where the coefficients $c_n$ are constants and p, q, k are positive integers with $k \le (m-1)$.
Proof Let

$$\mathbf{S1}_{(m)} = \sum_{n=0}^{l} a_n\,Q^{n}_{(m)} \quad\text{and}\quad \mathbf{S2}_{(m)} = \sum_{n=0}^{s} b_n\,Q^{n}_{(m)}$$

where $l, s \le (m-1)$ and $a_n$ and $b_n$ are constant coefficients. Then the product $\mathbf{S1}^{p}_{(m)}\,\mathbf{S2}^{q}_{(m)}$ is given by

$$\mathbf{S1}^{p}_{(m)}\,\mathbf{S2}^{q}_{(m)} = \left[\sum_{n=0}^{l} a_n\,Q^{n}_{(m)}\right]^{p}\left[\sum_{n=0}^{s} b_n\,Q^{n}_{(m)}\right]^{q} \qquad (5.26)$$

The resulting polynomial would contain different coefficients with different powers of $Q_{(m)}$ from 0 to u (say), where $u \le (m-1)$, as $Q_{(m)}$ has the property [12]

$$Q^{n}_{(m)} = 0_{(m)} \quad\text{for } n > (m-1)$$

Then Eq. (5.26) reduces to

$$\mathbf{S1}^{p}_{(m)}\,\mathbf{S2}^{q}_{(m)} = \sum_{n=0}^{k} c_n\,Q^{n}_{(m)} \quad\text{for } k \le (m-1) \qquad \square$$
n¼0

Theorem 2 If a regular upper triangular matrix S_(m) of order m can be expressed as

    S_(m) = Σ_{n=0}^{v} a_n Q^n_(m)

where the coefficients a_n are constants and v ≤ (m − 1), then any polynomial of S_(m) can be expressed as

    Σ_{n=0}^{j} c_n S^n_(m) = Σ_{n=0}^{k} d_n Q^n_(m)

where the c_n and d_n are constants and j, k ≤ (m − 1).


Pj
Proof The ðr þ 1Þth term of the polynomial n¼0 cn SnðmÞ is
" #r
X
v X
w X
w
cr S ¼ cr
r
an Q n
¼ cr fn Qn ¼ gn Qn ð5:27Þ
n¼0 n¼0 n¼0

Since Q has the property

QnðmÞ ¼ 0ðmÞ for n [ ðm  1Þ

Hence, putting r = n, Eq. (5.27) can be written as

X
j j X
X w X
k
cn Sn ¼ gn Q n ¼ dn Qn h
0 0 0 0
Since all the HF domain integration operational matrices are of upper triangular nature, having a form similar to S1_(m) or S2_(m) above, by virtue of Theorem 1 their
products will always be commutative. Also, if higher power of any of the opera-
tional matrices is multiplied with any other operational matrix, or its higher power,
the product is commutative as well.
These properties are frequently used in the derivations presented later in this
chapter.
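As a quick numerical illustration of Theorem 1 (not part of the original derivation), the following MATLAB fragment builds two arbitrary matrices of class S as polynomials in the delay matrix Q and verifies that their powers commute; all names are illustrative.

    % Sketch: numerical check of Theorem 1 (illustrative only)
    m  = 8;
    Q  = diag(ones(m-1,1), 1);                 % delay matrix Q(m): ones on the first superdiagonal
    a  = randn(1,4);  b = randn(1,4);          % arbitrary constant coefficients
    S1 = a(1)*eye(m) + a(2)*Q + a(3)*Q^2 + a(4)*Q^3;   % a matrix of class S
    S2 = b(1)*eye(m) + b(2)*Q + b(3)*Q^2 + b(4)*Q^3;   % another matrix of class S
    p  = 2;  q = 3;
    disp(norm(S1^p*S2^q - S2^q*S1^p))          % zero (to round-off), as Theorem 1 asserts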

5.5 Numerical Examples

Let us consider a few examples to compare the efficiencies of higher order one-shot
integration matrices over the repeated use of first order integration matrices.
Example 5.1 will illustrate the process of finding the second order integration of the
function f(t) = t. Similarly, Example 5.2 will compare the effectiveness of higher
order one-shot operational matrices in case of third order integration of function
f ðtÞ ¼ t. Finally, Example 5.3 will show the cumulative effect of two higher order
one-shot operational matrices for second and third order integrations and will
compare the deviations of the samples obtained, with respect to exact values, using
two different methods, as explained in previous sections.

5.5.1 Repeated Integration Using First Order Integration Matrices

Example 5.1 (vide Appendix B, Program no. 16) Consider the function f(t) = t.
Integrating twice, we have ∫∫ f(t) dt = t^3/6.
We expand this function directly in HF domain, for m = 10 and T = 1 s, to obtain
ZZ
f ðtÞ  ½0:00000000 0:00016667 0:00133333 0:00450000 0:01066667

0:02083333 0:03600000 0:05716667 0:08533333 0:12150000Sð10Þ


þ ½0:00016667 0:00116667 0:00316667 0:00616667 0:01016667
0:01516667 0:02116667 0:02816667 0:03616667 0:04516667Tð10Þ
ð5:28Þ

Now, the expansion of the function f(t) in HF domain, for m = 10 and T = 1 s,


results in

f ðtÞ  ½ 0 0:1 0:2 0:3 0:4 0:5 0:6 0:7 0:8 0:9 Sð10Þ
þ ½ 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 Tð10Þ

Using second order one-shot integration operational matrices from Eq. (5.23),
we obtain the results of double integration of f(t) in HF domain as
ZZ
f ðtÞ  ½0:00000000 0:00016667 0:00133333 0:00450000 0:01066667

0:02083333 0:03600000 0:05716667 0:08533333 0:12150000Sð10Þ


þ ½0:00016667 0:00116667 0:00316667 0:00616667 0:01016667
0:01516667 0:02116667 0:02816667 0:03616667 0:04516667Tð10Þ
ð5:29Þ
It is noted that the results obtained in Eqs. (5.28) and (5.29) match exactly for this
particular case. This is because, the function f(t) = t is a linear function and hybrid
functions represent any linear function in an exact manner. But had the function
been non-linear, the results would have been very close, though not exact,
indicating much less error for repeated integration by the use of one-shot matrices.
This is also illustrated by examples to follow.
Let the deviation of the (i + 1)-th sample of an HF domain integrated function from its corresponding exact sample, obtained via conventional integration, be Δi (i = 0, 1, 2, …, m). Then we can define the following two terms as indicators of the efficiency of multiple integration; calling each of them a 'deviation index', we can write

    δR ≜ [ Σ |ΔRi| ] / (m + 1)    and    δO ≜ [ Σ |ΔOi| ] / (m + 1)

where the sums run over all (m + 1) samples, ΔRi is the deviation of the (i + 1)-th sample from its exact value for repeated integration and δR is the related deviation index, while ΔOi is the deviation of the (i + 1)-th sample from its exact value for one-shot integration and δO is the related deviation index.
In the following, computational efficiency of the second order one-shot inte-
gration operational matrices for different types of standard functions like t, exp(−t),
sin(πt) and cos(πt) are studied rather closely. As expected, the higher order one-shot
operational matrices provide better results compared to integration with repeated
use of first order operational matrices. Table 5.1 tabulates the deviation indices for
different types of standard functions, obtained using these two methods and has
proved the effectiveness of using one-shot matrices.
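In computational terms the deviation index is simply the mean absolute deviation over the (m + 1) sample points, so a small MATLAB fragment of the kind sketched below (illustrative names only, not the programs of Appendix B) is all that is needed to reproduce such comparisons.

    % Sketch: deviation index of Sect. 5.5 (illustrative only)
    % exactSamples and hfSamples are (m+1)-element vectors holding the exact and the
    % HF domain values of the integrated function at t = 0, h, 2h, ..., mh.
    deviationIndex = @(exactSamples, hfSamples) ...
        sum(abs(exactSamples(:) - hfSamples(:))) / numel(exactSamples);

    % usage (with vectors obtained, e.g., from Eq. (5.23) and from the exact integral):
    % deltaR = deviationIndex(exactSamples, repeatedIntegrationSamples);
    % deltaO = deviationIndex(exactSamples, oneShotIntegrationSamples);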
Figure 5.7 translates Table 5.1 into visual form. It shows the deviation indices δR
and δO for double integration of four functions t, exp(−t), sin(πt) and cos(πt) for
m = 10 and T = 1 s. It is observed that there is difference in deviation indices (δR
and δO) for repeated integration and one-shot integration for each of the four
functions as expected. While the difference of the deviation indices is a maximum
for the function cos(πt), δO being smaller, the same for the function sin(πt) is a
minimum where δO is larger. This is an oddity which has been removed for triple
integration illustrated later.
It is also seen from Fig. 5.7 that for the linear ramp function the deviation index
for one-shot integration is zero. This is a specific case for linear functions.

Table 5.1 Deviation indices for double integration of four different functions for m = 10 and T = 1 s (vide Appendix B, Program no. 17)

Method of integration                    Deviation indices δR and δO for different functions
                                         t                exp(−t)          sin(πt)          cos(πt)
Repeated integration (δR)                4.166667e−004    1.880368e−004    7.485343e−004    16.625382e−004
One-shot integration matrices (δO)       0.000000e−004    1.142047e−004    8.352104e−004    8.292048e−004

Fig. 5.7 Deviation indices


(δR and δO) for double
integration of four different
functions, t, exp(−t), sin(πt)
and cos(πt) for m = 10 and
T=1s

5.5.2 Higher Order Integration Using One-Shot Operational Matrices

Example 5.2 Let us take up an example to compare the efficiencies of repeated use of first order integration matrices and third order one-shot integration matrices.
Consider the function

    f(t) = ∫∫∫ t dt        (5.30)
Let

    f(t) ≈ D_S^T S_(m) + D_T^T T_(m)        (5.31)

where D_S and D_T are the HF domain coefficient vectors of f(t), known from the exact samples of f(t), i.e., from its direct expansion.
Also, let

    t ≈ C_S^T S_(m) + C_T^T T_(m)        (5.32)

where C_S and C_T are the HF domain coefficient vectors known from the actual samples of the function t.

Now we perform triple integration on the RHS of Eq. (5.32) via HF domain and
obtain HF domain solution of f ðtÞ for Eq. (5.30).
Considering the discussion in earlier sections, we can determine the result by
performing the integration in HF domain in the following two ways:
(i) Using the first order HF domain integration operational matrices P1ss(m),
P1st(m), P1ts(m) and P1tt(m) of Eqs. (5.1) and (5.2).
(ii) Using HF domain one-shot integration operational matrices of third order from
Eqs. (5.16) and (5.21).
Finally, the results obtained via above two integration methods are compared
with the exact samples of the function f ðtÞ of Eq. (5.30).

5.5.2.1 By Repeated Use of HF Domain 1st Order Integration


Matrices P1ss(m), P1st(m), P1ts(m) and P1tt(m)
RR RR RR   R
We know that t dt ¼ CTS SðmÞ dt þ CTT TðmÞ dt ¼ CTS þ 12 CTT P2 SðmÞ dt
Putting these results in Eq. (4.8), we obtain
   
1 T 2 1 T 2
f ðtÞ  CS þ CT P P1ssðmÞ SðmÞ þ CS þ CT P P1stðmÞ TðmÞ
T T
2 2 ð5:33Þ
, DT1S SðmÞ þ DT1T TðmÞ

From the two vectors DT1S and DT1T , the samples of f ðtÞ can be computed easily.

5.5.2.2 By Use of HF Domain One-Shot Integration Operational


Matrices

Knowing relations (4.8) and (4.15) and the one-shot operational matrices from Eqs.
(5.16) and (5.21), we can express RHS of Eq. (5.30) as
   
f ðtÞ  CTS P3ssðmÞ þ CTT P3tsðmÞ SðmÞ þ CTS P3stðmÞ þ CTT P3ttðmÞ TðmÞ
ð5:34Þ
, DT2S SðmÞ þ DT2T TðmÞ

From Eq. (5.34), we have


 
DT2S ¼ CTS P3ssðmÞ þ CTT P3tsðmÞ

and

 
DT2T ¼ CTS P3stðmÞ þ CTT P3ttðmÞ

From the two vectors DT2S and DT2T the samples of f ðtÞ can be computed.
After computation of f(t) by the above three methods [using Eqs. (5.31), (5.33) and (5.34)], we obtain the coefficient vectors D_S^T, D_T^T, D_1S^T, D_1T^T, D_2S^T and D_2T^T, from which the different sets of samples can easily be found. These are compared in Fig. 5.8 and in Table 5.2, which show that the application of one-shot operational matrices provides a much better approximation than the repeated use of first order integration matrices alone.
Like in the case of second order one shot matrices, the computational efficiency
of the third order one-shot integration operational matrices are studied for the same
standard functions t, exp(−t), sin(πt) and cos(πt). As expected, the higher order
one-shot operational matrices provide better results compared to integration with
repeated use of first order operational matrices. Table 5.3 shows the deviation
indices for different functions, obtained using these two methods.
Figure 5.9 translates Table 5.3 into visual form. It shows the deviation indices δR
and δO for triple integration of four functions t, exp(−t), sin(πt) and cos(πt) for m = 10
and T = 1 s. It is observed that there is difference in deviation indices (δR and δO) for
repeated integration and one-shot integration for each of the four functions as
expected. While the difference of the deviation indices is a maximum for the function
t, δR being larger, the same for the function sin(πt) is a minimum. It is observed that
for all the cases, δR is larger than δO, proving the case for one-shot integration.
It is also seen from Fig. 5.9 that for the linear ramp function the deviation index
for one-shot integration is zero. As mentioned earlier, this is a specific case for
linear functions.

Fig. 5.8 Comparisons of three sets of solutions of the function f(t) (Example 5.2) obtained (i) via
direct expansion, (ii) via repeated application of integration operational matrices of first order only
and (iii) via one-shot operational matrices of third order
Table 5.2 Comparison of the samples (Example 5.2) obtained via threefold repeated integration and via third order one-shot integration, along with the exact samples, for m = 10 and T = 1 s

t (s)   Exact samples (E)   Via repeated        Via one-shot     Deviation       Deviation    Deviation      Deviation
                            integration (R)     matrices (O)     ΔRi = E − R     index δR     ΔOi = E − O    index δO
0       0.000000            0.000000            0.000000          0.000000       2.917e−004    0.000000      0.000e−004
1/10    0.000004            0.000013            0.000004         −0.000008                     0.000000
2/10    0.000067            0.000100            0.000067         −0.000033                     0.000000
3/10    0.000338            0.000413            0.000338         −0.000075                     0.000000
4/10    0.001067            0.001200            0.001067         −0.000133                     0.000000
5/10    0.002604            0.002813            0.002604         −0.000208                     0.000000
6/10    0.005400            0.005700            0.005400         −0.000300                     0.000000
7/10    0.010004            0.010413            0.010004         −0.000408                     0.000000
8/10    0.017067            0.017600            0.017067         −0.000533                     0.000000
9/10    0.027338            0.028013            0.027338         −0.000675                     0.000000
10/10   0.041667            0.042500            0.041667         −0.000833                     0.000000

Table 5.3 Deviation indices for triple integration of four different functions for m = 10 and T = 1 s

Method of integration                    Deviation indices δR and δO for different functions
                                         t                exp(−t)          sin(πt)          cos(πt)
Repeated integration (δR)                2.916667e−004    2.195964e−004    3.316339e−004    4.571223e−004
One-shot integration matrices (δO)       0.000000e−004    0.312764e−004    1.945120e−004    2.628472e−004

Fig. 5.9 Deviation indices (δR and δO) for triple integration of four different functions, t, exp(−t), sin(πt) and cos(πt), for m = 10 and T = 1 s

5.5.3 Comparison of Two Integration Methods Involving


First, Second and Third Order Integrations

Example 5.3 Now let us consider an example involving single integration, double
integration and triple integration to study the overall effect and make comparisons
of the results obtained via two integration methods explained earlier.
Z ZZ ZZZ
t2 t3 t4
f ðt Þ ¼ t dt þ t dt þ t dt ¼ þ þ ð5:35Þ
2 6 24

Let

f ðtÞ  DTS SðmÞ þ DTT TðmÞ ð5:36Þ



where, DS and DT are the HF domain coefficient vectors of f ðtÞ known from its
direct expansion.
Also, let

t  CTS SðmÞ þ CTT TðmÞ ð5:37Þ

where, CS and CT are HF domain coefficient vectors known from actual samples of
the function t.
Now we perform single, double and triple integration on the RHS of Eq. (5.37)
via HF domain one-shot operational matrices and substitute the results in Eq. (5.35)
to obtain HF domain representation of f ðtÞ.
Finally, the results obtained via two integration methods (as discussed earlier)
are compared with the exact samples of the function f ðtÞ of Eq. (5.36).

5.5.3.1 By Repeated Use of HF Domain 1st Order Integration


Matrices P1ss(m), P1st(m), P1ts(m) and P1tt(m)

We know that
Z Z Z  Z
1 T
t dt ¼ CTS SðmÞ dt þ CTT TðmÞ dt ¼ þ CT
CTS SðmÞ dt
2
ZZ ZZ ZZ   Z
1
t dt ¼ CTS SðmÞ dt þ CTT TðmÞ dt ¼ CTS þ CTT P SðmÞ dt
2
ZZZ ZZZ ZZZ   Z
1 T 2
t dt ¼ CS
T
SðmÞ dt þ CTT
TðmÞ dt ¼ CS þ CT P
T
SðmÞ dt
2

Putting these results in RHS of Eq. (5.35) and using Eqs. (5.10) and (5.11), we
obtain
   
1   1  
f ðtÞ  CTS þ CTT P2 þ P þ I P1ssðmÞ SðmÞ þ CTS þ CTT P2 þ P þ I P1stðmÞ TðmÞ
2 2
, DT1S SðmÞ þ DT1T TðmÞ
ð5:38Þ

From the two vectors DT1S and DT1T the samples of f ðtÞ can be computed easily.

5.5.3.2 By Use of HF Domain One-Shot Integration Operational


Matrices

Knowing the one-shot operational matrices from Eqs. (5.23) and (5.24), we can
express RHS of Eq. (5.35) as
   
f ðtÞ  CTS P1ssðmÞ þ CTT P1tsðmÞ SðmÞ þ CTS P1stðmÞ þ CTT P1ttðmÞ TðmÞ
   
þ CTS P2ssðmÞ þ CTT P2tsðmÞ SðmÞ þ CTS P2stðmÞ þ CTT P2ttðmÞ TðmÞ
    ð5:39Þ
þ CTS P3ssðmÞ þ CTT P3tsðmÞ SðmÞ þ CTS P3stðmÞ þ CTT P3ttðmÞ TðmÞ
, DT2S SðmÞ þ DT2T TðmÞ

From Eq. (5.39), rearranging coefficients of SðmÞ , we have


     
DT2S ¼ CTS P1ssðmÞ þ CTT P1tsðmÞ þ CTS P2ssðmÞ þ CTT P2tsðmÞ þ CTS P3ssðmÞ þ CTT P3tsðmÞ

Rearranging coefficients of TðmÞ , we get


     
DT2T ¼ CTS P1stðmÞ þ CTT P1ttðmÞ þ CTS P2stðmÞ þ CTT P2ttðmÞ þ CTS P3stðmÞ þ CTT P3ttðmÞ

From the two vectors DT2S and DT2T the samples of f ðtÞ can be computed.
After computation of f (t) by the above three methods [using Eqs. (5.36), (5.38)
and (5.39)], we get the solution for the coefficients DTS ; DTT , DT1S , DT1T , DT2S ; and DT2T
and can easily find out the different sets of samples which are compared in
Table 5.4.

5.6 Conclusion

In this chapter we have derived one-shot operational matrices of different orders in


HF domain and the same have been employed for multiple integration. Finally, the
generalized form of such matrices for n times repeated integration having the
dimension (m × m) have been derived.
For evaluating multiple integrals, the one-shot operational matrices have been
proved to be more efficient and they produced much more accurate results com-
pared to the method of repeated use of the first order integration matrices.
Few examples are treated to compare the results obtained via repeated use of the
first order operational matrices and using higher order one-shot operational matri-
ces. The results are presented in Fig. 5.8 and Tables 5.2 and 5.4 to compare them
closely. The maximum deviation with respect to exact solution for the samples
obtained via one-shot integration matrices for second and third order integrations
are found to be −0.138778e−016 and −0.111022e−015, vide Tables 5.2 and 5.4.
Table 5.4 Comparison of the samples (Example 5.3) obtained via repeated integration and via one-shot integration, along with the exact samples, for m = 10 and T = 1 s

t (s)   Exact samples (E)   Via repeated        Via one-shot     Deviation       Deviation    Deviation      Deviation
                            integration (R)     matrices (O)     ΔRi = E − R     index δR     ΔOi = E − O    index δO
0       0.000000            0.000000            0.000000          0.000000       7.083e−004    0.000000      0.000e−004
1/10    0.005171            0.005263            0.005171         −0.000092                     0.000000
2/10    0.021400            0.021600            0.021400         −0.000200                     0.000000
3/10    0.049838            0.050163            0.049838         −0.000325                     0.000000
4/10    0.091733            0.092200            0.091733         −0.000467                     0.000000
5/10    0.148438            0.149063            0.148438         −0.000625                     0.000000
6/10    0.221400            0.222200            0.221400         −0.000800                     0.000000
7/10    0.312171            0.313163            0.312171         −0.000992                     0.000000
8/10    0.422400            0.423600            0.422400         −0.001200                     0.000000
9/10    0.553838            0.555263            0.553838         −0.001425                     0.000000
10/10   0.708333            0.710000            0.708333         −0.001667                     0.000000

However, for the samples obtained via repeated use of first order integration
operational matrices, maximum deviations, in terms of magnitudes, for second and
third order integration turns out to be −0.833333e−003 and −1.666667e−003, vide
Tables 5.2 and 5.4.
From Figs. 5.7 and 5.9, we observe that for most of the cases, while computing
via first order integration operational matrices, the deviation indices for four dif-
ferent standard functions are much larger than that of one-shot matrices. Hence, it
implies that for multiple integrations of any linear or non-linear function, the use of
one-shot operational matrices provide highly accurate results.

References

1. Rao, G.P.: Piecewise constant orthogonal functions and their applications in systems and
control, LNC1S, vol. 55. Springer, Berlin (1983)
2. Jiang, J.H., Schaufelberger, W.: Block pulse functions and their application in control system,
LNCIS, vol. 179. Springer, Berlin (1992)
3. Deb, A., Sarkar, G., Bhattacharjee, M., Sen, S.: All integrator approach to linear SISO control
system analysis using block pulse function (BPF). J. Franklin Instt. 334B(2), 319–335 (1997)
4. Deb, A., Sarkar, G., Bhattacharjee, M., Sen, S.: A new set of piecewise constant orthogonal
functions for the analysis of linear SISO systems with sample-and-hold. J. Franklin Instt. 335B
(2), 333–358 (1998)
5. Deb, A., Dasgupta, A., Sarkar, G.: A new set of orthogonal functions and its application to the
analysis of dynamic systems. J. Franklin Instt. 343(1), 1–26 (2006)
6. Deb, A., Sarkar, G., Sengupta, A.: Triangular orthogonal functions for the analysis of
continuous time systems. Anthem Press, London (2011)
7. Deb, A., Sarkar, G., Ganguly, A., Biswas, A.: Approximation, integration and differentiation of
time functions using a set of orthogonal hybrid functions (HF) and their application to solution
of first order differential equations. Appl. Math. Comput. 218(9), 4731–4759 (2012)
8. Deb, A., Ganguly, A., Sarkar, G., Biswas, A.: Numerical solution of third order linear
differential equations using generalized one-shot operational matrices in orthogonal hybrid
function (HF) domain. Appl. Math. Comput. 219(4), 1485–1514 (2012)
Chapter 6
Linear Differential Equations

Abstract This chapter is devoted to linear differential equations. That is, it presents
the solution of first order differential equations using both HF domain differentiation
operational matrices and integration operational matrices. Higher order differential
equations are also solved via the same first order operational matrices, and again
employing one-shot integration matrices. The results are compared by way of
treating five examples. Eleven figures are presented as illustration of the HF domain
techniques.

The main tool for tackling differential equations in the modern age is the numerical
analysis, and to be explicit, numerical integration. Differential equations, in general,
have a wide range of varieties [1–3] along with different degrees of difficulties. For
handling differential equations arising out of modern complex systems, numerical
analysis is the forerunner of all solution techniques and modern day algorithms and
number crunching capability of computers help in solving varieties of such equa-
tions to obtain practical solutions avoiding numerical instability. Work by Butcher
[4] gives an exhaustive overview of numerical methods for solving ordinary dif-
ferential equations. The 4th order Runge-Kutta method has undergone many
improvements and modifications discussed by Butcher [2].
Differential equations having oscillatory solutions need special techniques for
obtaining reasonable solution within tolerable error limits. Simos’s [5] work on
modified Runge-Kutta methods for the numerical solution of ODEs with oscillating
solutions tackles simultaneous first order ODE’s to obtain the required solution.
In control theory, essentially we handle differential equations of different forms
and different orders. Any method based upon numerical techniques for solving such
equations is of interest in modern control theory and applications.
For more than three decades, solution of differential equations as well as integral
equations was also attempted by employing piecewise constant basis functions
(PCBF) [6] like Walsh functions, block pulse functions [7] etc. In such attempts
function approximation plays a pivotal role because the initial error in function
approximation is propagated in a cumulative manner in different stages of com-
putations. Apart from orthogonal functions, orthogonal polynomials have also
played their important role [8] in this area.


Now solution of differential equations is attempted using both differentiation and


integration operational matrices.

6.1 Solution of Linear Differential Equations Using HF


Domain Differentiation Operational Matrices

If we try to solve any differential equation with the operational matrices DS and DT,
(vide Eqs. 4.32 and 4.33), the attempt is met with a permanent difficulty: The
samples of the unknown function, say x(t), are required as elements of both the
differentiation matrices. Obviously, any such attempt is certain to fail, because these
samples of x(t) are yet to be derived as the solution of the differential equation.
However, the use of integration operational matrices to solve the problem does not suffer from
this difficulty.
Now, we employ the concept of numerical differentiation to solve a first order
differential equation and derive the necessary theory.
Let us consider the following first order non-homogeneous differential equation.

_ þ agðtÞ ¼ b
gðtÞ ð6:1Þ

where a and b are constants and g(0) = 0.


With m component functions in HF domain, we can express g(t) in the following
form as in Eq. (2.12). That is

gðtÞ  ½ c0 c1 c2  cm1 SðmÞ


þ ½ ðc1  c0 Þ ð c2  c1 Þ ð c3  c2 Þ    ðcm  cm1 Þ TðmÞ
, CTS SðmÞ þ CTT TðmÞ ð6:2Þ

_
Also, following Eq. (4.31), gðtÞ may be expressed as

1
g_ ðtÞ  ½ ðc1  c0 Þ ðc2  c1 Þ ðc3  c2 Þ    ðcm  cm1 Þ SðmÞ
h
1
þ ½ fðc2  c1 Þ  ðc1  c0 Þg fðc3  c2 Þ  ðc2  c1 Þg
h
   fðcm þ 1  cm Þ  ðcm  cm1 Þg TðmÞ
1 1
, CTT SðmÞ þ CTD TðmÞ ð6:3Þ
h h

where, CTD , ½ fðc2  c1 Þ  ðc1  c0 Þg fðc3  c2 Þ  ðc2  c1 Þg    fðcm þ 1 


cm Þ  ðcm  cm1 Þg

Substituting (6.2) and (6.3) in (6.1), we get


   
1 T 1
CT þ aCTS SðmÞ þ CTD þ aCTT TðmÞ ¼ ½ b  b SðmÞ
h h
þ½0 0    0 TðmÞ ð6:4Þ

Equating the like coefficients of the vectors in (6.4), we have


 
1 T
CT þ a CTS ¼ ½ b b  b ð6:5Þ
h

and
 
1 T
CD þ a CTT ¼ ½ 0 0  0 ð6:6Þ
h

Proceeding further with (6.5), we get


 T 
CT þ a hCTS ¼ ½ bh bh    bh 
 
or, ðc1  c0 Þ ðc2  c1 Þ    ðcm  c m1 Þ þ a h½ c0 c1    cm1  ¼
½ bh bh    bh  or, ðcm  cm1 Þ þ ahcm1 ¼ bh
Thus we obtain the following recursive equation as the solution for the HF
domain coefficients of the unknown function g(t) as

    c_m = b h + (1 − a h) c_{m−1}        (6.7)

6.1.1 Numerical Examples

Example 6.1 (vide Appendix B, Program no. 18) Consider the non-homogeneous first order differential equation

    ġ1(t) + 0.5 g1(t) = 1.25,  where g1(0) = 0        (6.8)

The exact solution of (6.8) is

    g1(t) = 2.5 [1 − exp(−0.5 t)]        (6.9)

The direct expansion of g1(t) in HF domain, for T = 1 s and m = 4, can be


expressed as

g1 ðtÞ  ½ 0:00000000 0:29375774 0:55299804 0:78177680 SðmÞ


þ ½ 0:29375774 0:25924030 0:22877876 0:20189655 TðmÞ

Whereas using the recursive relation of (6.7), for T = 1 s and m = 4, the HF


domain expansion of g1(t) may be written as

g1 ðtÞ  ½ 0:00000000 0:31250000 0:58593750 0:82519531 SðmÞ


þ ½ 0:31250000 0:27343750 0:23925781 0:20935059 TðmÞ

The exact solution of g1(t) expressed in HF domain is compared with the HF


domain solution of (6.8) obtained using the recursive relation (6.7) in the above two
expressions and in Fig. 6.1a. Figure 6.1b shows the same result with better accuracy
due to increased m.

Fig. 6.1 Solution of Example


6.1 in hybrid function
(HF) domain, using recursive
relation (6.7) for a m = 4,
T = 1 s and b m = 12, T = 1 s,
along with the exact solution
g1(t) of Eq. (6.8) (vide
Appendix B, Program no. 18)
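The recursion (6.7) is straightforward to program. A minimal MATLAB sketch for Example 6.1 is given below (an illustration with assumed names, not the program of Appendix B); with a = 0.5, b = 1.25, T = 1 s and m = 4 it reproduces the SHF coefficients quoted above.

    % Sketch: solution of Example 6.1 by the recursion (6.7) (illustrative only)
    a = 0.5;  b = 1.25;  T = 1;  m = 4;  h = T/m;
    c = zeros(1, m+1);                    % c(1) = g1(0) = 0
    for k = 2:m+1
        c(k) = b*h + (1 - a*h)*c(k-1);    % Eq. (6.7)
    end
    exact = 2.5*(1 - exp(-0.5*(0:m)*h));  % Eq. (6.9) at the sample points
    disp([c; exact])                      % compare with the expansion quoted above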

6.2 Solution of Linear Differential Equations Using HF


Domain Integration Operational Matrices

Solving any differential equation in HF domain, provides the extra advantage that
the differential equation is converted into a simple algebraic equation. This obvi-
ously reduces computational burden. Moreover, the HF domain analysis technique
works with time samples of functions, meaning the whole analysis is carried out in
time domain. So the final solution of the differential equation is also obtained
directly in time domain.
We start with Eq. (6.1) and integrate it to get
Z Z
gðtÞ  gð0ÞuðtÞ þ a gðtÞ dt ¼ b uðtÞ dt ð6:10Þ

Expanding each of the functions g(t), g(0)u(t) and u(t) in hybrid function domain
with m terms, we have

gðtÞ  CTS SðmÞ þ CTT TðmÞ ð6:11Þ


2 3 2 3
gð0ÞuðtÞ ¼ gð0Þ4 1 1    1 1 5SðmÞ þ gð0Þ4 0 0    0 0 5TðmÞ
|fflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflffl} |fflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflffl}
m terms m terms
 T 
, gð0Þ US SðmÞ þ ZTT TðmÞ
ð6:12Þ

where, UTS , ½ 1 1  1 1 ð1mÞ , ZTT , ½ 0 0  0 0 ð1mÞ and


 
uðtÞ ¼ UTS SðmÞ þ ZTT TðmÞ ð6:13Þ

In the following, we drop the subscript (m) for simplicity.


Substituting Eqs. (6.11) to (6.13) in Eq. (6.10), we have
Z Z
     
CTS S þ CTT T  gð0Þ UTS S þ ZTT T þ a CTS S þ CTT T dt ¼ b UTS S þ ZTT T dt

ð6:14Þ

or
Z Z Z
CTS S þ CTT T  gð0ÞUTS S þ a CTS S dt þ a CTT T dt ¼ bUTS S dt ð6:15Þ

Using the operational matrices for integration in SHF and TF domain from
Eqs. (4.9) and (4.18), we can write

Z
SðmÞ dt ¼ P1ssðmÞ SðmÞ þ P1stðmÞ TðmÞ
Z
TðmÞ dt ¼ P1tsðmÞ SðmÞ þ P1ttðmÞ TðmÞ

Employing (4.9) and (4.18) in (6.15) we get

CTS S þ CTT T  gð0ÞUTS S þ a CTS ½P1ss S þ P1st T þ a CTT ½P1ts S þ P1tt T


¼ bUTS ½P1ss S þ P1st T

Equating the like coefficients of two vectors S and T, we have

1
CTS ½I þ a P1ss  gð0ÞUTS þ a CTT P1ss ¼ bUTS P1ss ð6:16Þ
2

where, I is the identity matrix of order m, and,


 
1
1þ ah CTT þ ah CTS ¼ bh UTS ð6:17Þ
2

Using these two Eqs. (6.16) and (6.17), we will solve for the two row matrices
CTS and CTT .
From Eq. (6.17), putting f = 2/(2 + a h), we have

1 T
C þ ah CTS ¼ bh UTS
f T

or
CTT ¼ bfh UTS  afh CTS ð6:18Þ

Substituting the expression for CTT from Eq. (6.18) into (6.16), we have,
a 
CTS ½½ 1 ah ah    ah   gð0ÞUTS þ bfhUTS  afhCTS P1ss ¼ bUTS P1ss
2
a2 fh T abfh T
or; CTS ½½ 1 ah ah    ah   C P1ss ¼ bUTS P1ss  U P1ss þ gð0ÞUTS
2 hh S 2 ii S
CTS ½½ 1 ah ah    ah   CTS a2 fh2 a2 fh2 a2 fh
or, 0 2 2  2
2
abfh
¼ bh½ 0 1 2    ðm  1Þ   ½0 1 2  ðm  1Þ  þ gð0ÞUTS
hh 2 ii
or, CTS 1 ah 1  afh2 ah 1  afh
2    ah 1  afh
2
 
afh
¼ bh 1  ½ 0 1 2    ðm  1Þ  þ gð0ÞUTS
2

Now
 
afh
1 ¼f
2

Therefore, we can write,

CTS ½½ 1 afh afh    afh  ¼ bfh½ 0 1 2    ðm  1Þ  þ gð0ÞUTS


CTS ¼ bfh½ 0 1 2    ðm  1Þ ½½ 1 afh afh    afh 1
þ gð 0Þ ½ 1 1    1 ½½ 1 afh afh    afh 1 ð6:19Þ

In (6.19), the inverse is given by


2 3
1 afh afhð1  afhÞ afhð1  afhÞ2    afhð1  afhÞm2
6 1 afh afhð1  afhÞ    afhð1  afhÞm3 7
6 7
6 1 afh    afhð1  afhÞm4 7
1 6 7
½½ 1 afh afh    afh  ¼ 6 .. .. 7
6 .  . 7
6 7
4 0 1 afh 5
1

Therefore, we can write,


h i
CTS ¼ bfh 0 1 ð1 þ ð1  afhÞÞ 1 þ ð1  afhÞ þ ð1  afhÞ2  1 þ    þ ð1  afhÞm3 þ ð1  afhÞm2
 
þ gð0Þ 1 ð1  afhÞ ð1  afhÞ2    ð1  afhÞm1

ð6:20Þ

In (6.20), the r-th element of CTS can be expressed as

X
r2
CTS ð1; rÞ ¼ bfh ð1  afhÞn þ gð0Þð1  afhÞr1 where; r ¼ 1; 2; . . .; m:
n¼0

Now, we substitute the expression of CTS from (6.20) in (6.18) to obtain CTT .
Hence,
 
CTT ¼ fh ðb  agð0ÞÞ ðb  agð0ÞÞð1  afhÞ ðb  agð0ÞÞð1  afhÞ2    ðb  agð0ÞÞð1  afhÞm1
 
¼ fhðb  agð0ÞÞ 1 ð1  afhÞ ð1  afhÞ2    ð1  afhÞm1

Now, the r-th element of CTT is CTT ð1; rÞ ¼ fhðb  agð0ÞÞð1  afhÞr where, r = 0,
1, 2, …, m.

Therefore, finally we can write


h
CTS ¼ gð0Þ b
a þ gð0Þ  ba ð1  afhÞ b
a þ gð0Þ  ba ð1  afhÞ2
i
 b
a þ gð0Þ  ba ð1  afhÞm1 ð6:21Þ

and
 
CTT ¼ fhðb  agð0ÞÞ 1 ð1  afhÞ ð1  afhÞ2  ð1  afhÞm1 ð6:22Þ

Equations (6.21) and (6.22) provide the required solution for the samples of the
unknown function g(t) of Eq. (6.1). It is to be noted that for the solution to exist,
h should be selected such that ah ≠ 2.
From these two equations, we can derive a recursive formula for determining the
samples of the solution. If we call the (m + 1)-th sample of g(t) to be cS;m , and since
the m-th sample of g(t) is cS;m1 , then according to Eqs. (6.20) we can write

    c_{S,m} = b f h + (1 − a f h) c_{S,m−1}        (6.23)
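The recursion (6.23) is equally easy to program. The hedged MATLAB sketch below (illustrative names only, not the Appendix B program) applies it to Example 6.1 with a = 0.5, b = 1.25, T = 1 s and m = 12, and generates the Eq. (6.23) values that appear in Table 6.1 of the next subsection.

    % Sketch: the recursion (6.23) for the first order equation (6.1) (illustrative only)
    a = 0.5;  b = 1.25;  g0 = 0;  T = 1;  m = 12;  h = T/m;
    f = 2/(2 + a*h);                          % the constant f introduced after Eq. (6.17)
    cS = zeros(1, m+1);  cS(1) = g0;
    for k = 2:m+1
        cS(k) = b*f*h + (1 - a*f*h)*cS(k-1);  % Eq. (6.23)
    end
    exact = 2.5*(1 - exp(-0.5*(0:m)*h));      % exact solution (6.9) at the sample points
    disp(max(abs(cS - exact)))                % deviation of the HF domain samples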

6.2.1 Numerical Examples

Example 6.2 (vide Appendix B, Program no. 18) Consider the non-homogeneous
first order differential equation of Example 6.1 having the solution

g1 ðtÞ ¼ 2:5½1  expð0:5tÞ

The samples of exact solution of g1(t) and the samples obtained using Eq. (6.23)
in HF domain, for T = 1 s and m = 12, are tabulated and compared in Table 6.1.
Accuracy of the recursive relation (6.23) is apparent from the curves of the
Fig. 6.2. While Fig. 6.3 compares sample values of the solution of Eq. (6.8) with
those obtained with two HF domain solutions via relations (6.7) and (6.23), derived
from application of the HF domain differential matrices and integration operational
matrices respectively. In the latter case, since all the sample points almost overlap, it
shows that the relation (6.23) is a shade better than (6.7). However, the choice of
use of either of (6.7) or (6.23) depends entirely on the degree of accuracy desired
for any first order differential equation.
In Fig. 6.3, since the solution points almost overlap, comparison of the samples
of the function g1(t) is presented in Table 6.1 for better clarity.
Now, it seems fit that the solution obtained via the present method is compared
with a standard proven method to assess its credibility. The method most proven
and popular is the 4th order Runge-Kutta (RK4) method [5].

Table 6.1 Comparison of samples obtained from the exact solution and the results obtained by using Eqs. (6.7) and (6.23), with respective percentage errors for the SHF coefficients, for Example 6.2 (vide Appendix B, Program no. 18)

t (s)   Direct expansion   SHF coefficients     % Error        SHF coefficients      % Error
                           using Eq. (6.7)                     using Eq. (6.23)
0       0.00000000         0.00000000           –              0.00000000            –
1/12    0.10202636         0.10416667           −2.09780051    0.10204082            −0.01417193
2/12    0.19988896         0.20399306           −2.05318596    0.19991670            −0.01387668
3/12    0.29375774         0.29966001           −2.00922977    0.29379765            −0.01358562
4/12    0.38379569         0.39134084           −1.96593054    0.38384673            −0.01329874
5/12    0.47015913         0.47920164           −1.92328667    0.47022033            −0.01301604
6/12    0.55299804         0.56340157           −1.88129632    0.55306848            −0.01273750
7/12    0.63245625         0.64409318           −1.83995747    0.63253507            −0.01246314
8/12    0.70867172         0.72142263           −1.79926788    0.70875813            −0.01219292
9/12    0.78177680         0.79553002           −1.75922511    0.78187004            −0.01192685
10/12   0.85189842         0.86654960           −1.71982651    0.85199780            −0.01166491
11/12   0.91915834         0.93461003           −1.68106925    0.91926319            −0.01140708
12/12   0.98367335         0.99983461                          0.98378306

Its importance as well as span is apparent from the extensive insightful dis-
cussions presented in. Hence, RK4 is taken as the benchmark for comparison which
is presented in Table 6.2.
It is observed that the order of accuracy of RK4 is slightly better than that of the proposed recursive Eq. (6.23). A point to be noted is that each iteration of the RK4 method requires the computation of four stage equations [2], whereas Eq. (6.23) alone is competent to perform the iteration and produce the updated values of the solution.
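For reference, a compact MATLAB sketch of the four-stage RK4 iteration applied to Eq. (6.8) is given below (again a hedged illustration with assumed names, not the Appendix B program); its output should agree with the last column of Table 6.2 essentially to the tabulated precision.

    % Sketch: 4th order Runge-Kutta solution of Eq. (6.8) (illustrative only)
    fdot = @(t, g) 1.25 - 0.5*g;              % g'(t) = 1.25 - 0.5 g(t)
    T = 1;  m = 12;  h = T/m;
    g = zeros(1, m+1);                        % g(1) = g1(0) = 0
    for k = 1:m
        t  = (k-1)*h;
        k1 = fdot(t,       g(k));
        k2 = fdot(t + h/2, g(k) + h*k1/2);
        k3 = fdot(t + h/2, g(k) + h*k2/2);
        k4 = fdot(t + h,   g(k) + h*k3);
        g(k+1) = g(k) + h*(k1 + 2*k2 + 2*k3 + k4)/6;   % one RK4 step
    end
    disp(g.')                                 % compare with the last column of Table 6.2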

Fig. 6.2 Solution of Eq. (6.8)


in hybrid function
(HF) domain, using recursive
relation (6.23) for m = 12,
along with direct expansion of
g1(t) via HF (vide
Appendix B, Program no. 18)

Fig. 6.3 Solution of Eq. (6.8)


in hybrid function
(HF) domain, using recursive
relations (6.7) and (6.23) for
m = 12, along with direct
expansion of g1(t) via HF
(vide Appendix B, Program
no. 18)

For a homogeneous first order differential equation of the form

    ġ2(t) + a g2(t) = 0,  with g2(0) = 1/a        (6.24)

the general solution is of the form (1/a) exp(−a t).


Thus, for this case when b = 0, the recursive relations (6.7) and (6.23) are
respectively modified as

    c_m = (1 − a h) c_{m−1}        (6.25)

    c_{S,m} = (1 − a f h) c_{S,m−1}        (6.26)

Either of the relations (6.25) or (6.26) may be used for solution of Eq. (6.24).

Table 6.2 Exact samples of the solution g1(t) of Eq. (6.8) compared with its solutions obtained recursively from relations (6.7) and (6.23) in HF domain with m = 12 and T = 1 s. For further comparison, Eq. (6.8) has been solved via the standard 4th order Runge-Kutta method, with the results tabulated in the last column (vide Appendix B, Program no. 19)

t (s)   Exact samples   g1(t) via Eq. (6.7)   g1(t) via Eq. (6.23)   g1(t) via 4th order Runge-Kutta
0       0.00000000      0.00000000            0.00000000             0.00000000
1/12    0.10202636      0.10416667            0.10204082             0.10202635
2/12    0.19988896      0.20399306            0.19991670             0.19988896
3/12    0.29375774      0.29966001            0.29379765             0.29375774
4/12    0.38379569      0.39134084            0.38384673             0.38379568
5/12    0.47015913      0.47920164            0.47022033             0.47015912
6/12    0.55299804      0.56340157            0.55306848             0.55299803
7/12    0.63245625      0.64409318            0.63253507             0.63245624
8/12    0.70867172      0.72142263            0.70875813             0.70867171
9/12    0.78177680      0.79553002            0.78187004             0.78177678
10/12   0.85189842      0.86654960            0.85199780             0.85189841
11/12   0.91915834      0.93461003            0.91926319             0.91915833
12/12   0.98367335      0.99983461            0.98378306             0.98367333

Fig. 6.4 Solution of


Eq. (6.24) (homogeneous
form of Example 6.2) in
hybrid function (HF) domain
for a = 1, using recursive
relations (6.25) and (6.26) for
m = 12 and T = 1 s, along with
direct expansion of g2(t) via
HF

Figure 6.4 shows the solution of Eq. (6.24) in hybrid function (HF) domain for
a = 1, using both the recursive relations (6.25) or (6.26) for m = 12 and T = 1 s. The
figure also plots the samples of the solution obtained via direct expansion in HF
domain.

6.3 Solution of Second Order Linear Differential


Equations

We present the two methods in the following based upon


(i) The repeated use of first order integration matrices.
(ii) The use of second order one-shot integration matrices.

6.3.1 Using HF Domain First Order Integration


Operational Matrices

Consider the linear differential equation

€xðtÞ þ a x_ ðtÞ þ b xðtÞ ¼ d ð6:27Þ

where, a, b and d are positive constants.


Let the initial conditions be xð0Þ ¼ k1 and x_ ð0Þ ¼ k2 :
Integrating Eq. (6.27) twice we get,
Z ZZ ZZ Z
xð t Þ þ a xðtÞ dt þ b xðtÞ dt ¼ d uðtÞ dt þ ðak1 þ k2 Þ uðtÞ dt þ k1 uðtÞ

ð6:28Þ

Let ðak1 þ k2 Þ , r2 and k1 , r3 .


So, Eq. (6.28) takes the form
Z ZZ ZZ Z
xðtÞ þ a xðtÞ dt þ b xðtÞ dt ¼ d uðtÞ dt þ r2 uðtÞ dt þ r3 uðtÞ ð6:29Þ

Expanding all the time functions in m-term HF domain, we have


 Z Z   ZZ ZZ 
CTS S þ CTT T þ a CTS S dt þ CTT T dt þ b CTS S dt þ CTT T dt
ZZ Z ð6:30Þ
¼ d UTS S dt þ r2 UTS S dt þ r3 UTS S

2 3
where, UTS ¼ 4 1 1    1 1 5
|fflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflffl}
m terms

Using (5.10), (5.11) and (5.12), we can write


 Z  ZZ
1 T 1 T
CS S þ CT T þ 2a CS þ CT
T T T
T dt þ 2b CTS þ C T dt
2 2 T
ZZ Z
¼ 2d UTS T dt þ 2r2 UTS T dt þ r3 UTS S
 Z   Z ð6:31Þ
1 1 T
or; CTS S þ CTT T þ 2a CTS þ CTT T dt þ 2b CTS þ CT P T dt
2 2
Z Z
¼ 2d UTS P T dt þ 2r2 UTS T dt þ r3 UTS S

Rearranging the terms and using the first order integration matrices we can write
Z   Z
  1
CTS  r3 UTS S þ CTT T ¼ 2d UTS P þ 2r2 UTS T dt  CTS þ CTT ½2aI þ 2bP T dt
2
 
or; CTS  r3 UTS S þ CTT T ¼ 2d UTS P þ 2r2 UTS ½P1ts S þ P1tt T
 
1
 CTS þ CTT ½2aI þ 2bP½P1ts S þ P1tt T
2
ð6:32Þ

Equating the like coefficients of S from both the sides


 
  1 T
CS  r3 US ¼ 2d US P þ 2r2 US P1ts  CS þ CT ½2aI þ 2bPP1ts
T T T T T
2

Let 2d P þ 2r2 I , L and ðaI þ bPÞ , Q


Then
 
CTS  r3 UTS ¼ UTS L þ 2 CTS Q þ CTT Q P1ts ð6:33Þ

Now, rearranging the coefficients of T of Eq. (6.32), we get


 
  1
CTT ¼ 2d UTS P þ 2r2 UTS P1tt  CTS þ CTT ½2aI þ 2bPP1tt
2 ð6:34Þ
 T 
or; CT ¼ US L þ 2CS Q þ CT Q P1tt
T T T

From Eqs. (6.33) and (6.34), we can write

CTS  r3 UTS ¼ CTT P1tt1 P1ts ð6:35Þ

Using the Eq. (5.2) in (6.35), we get


2
CTS  r3 UTS ¼ CTT P1ts ð6:36Þ
h

Solving the simultaneous Eqs. (6.33) and (6.36) for CTS and CTT , we have
  1
2 4
CTT ¼ UTS ½L þ 2r3 Q I  Q  P1ts Q ð6:37Þ
h h
  1
2 T 2 4
CTS ¼ US ½L þ 2r3 Q I  Q  P1ts Q P1ts þ r3 UTS ð6:38Þ
h h h

6.3.2 Using HF Domain One-Shot Integration Operational


Matrices

We consider Eq. (6.27) and use one-shot operational matrices for integration of
second order differential equation and to determine its solution.
After integrating the Eq. (6.27) twice, now we start from Eq. (6.29). We expand
all the time functions in m-term HF domain and employ the one-shot integration
matrices.
From Eq. (6.29), we can write
   
CTS S þ CTT T þ a CTS P1ss þ CTT P1ts S þ a CTS P1st þ CTT P1tt T
   
þ b CTS P2ss þ CTT P2ts S þ b CTS P2st þ CTT P2tt T
¼ d UTS ½P2ss S þ P2st T þ r2 UTS ½P1ss S þ P1st T þ r3 UTS S ð6:39Þ

Rearranging the coefficients of S, we have

P1ss P2ss
CTS þ a CTS P1ss þ a CTT þ b CTS P2ss þ b CTT ¼ UTS ½d P2ss þ r2 P1ss þ r3 I
2   2
P1ss P2ss
or; CTS ½I þ a P1ss þ b P2ss þ CTT a þb
2 2
¼ UTS ½d P2ss þ r2 P1ss þ r3 I
ð6:40Þ

Rearranging the coefficients of T, we get

P1st P2st
CTT þ a CTS P1st þ a CTT þ b CTS P2st þ b CTT ¼ UTS ½d P2st þ r2 P1st
2  2
P1st P2st
or; CTS ½a P1st þ b P2st þ CTT I þ a þb ¼ UTS ½d P2st þ r2 P1st
2 2
ð6:41Þ

In Eq. (6.40), let us define

P1ss P2ss
I þ a P1ss þ b P2ss , X and a þb , Y:
2 2

Then Eq. (6.40) may be written as

CTS X þ CTT Y ¼ UTS ½d P2ss þ r2 P1ss þ r3 I ð6:42Þ

In Eq. (6.41), let us define

P1st P2st
a P1st þ b P2st , W and I þ a þb , Z:
2 2

Then Eq. (6.41) may be expressed as

or; CTS W þ CTT Z ¼ UTS ½d P2st þ r2 P1st ð6:43Þ

Solving the matrix Eqs. (6.42) and (6.43) for CTS and CTT , we get

UTS ½d P2ss þ r2 P1ss þ r3 IX1  CTT Y X1 ¼ UTS ½d P2st þ r2 P1stW1  CTT Z W1
 
or; CTT Y X1  Z W1 ¼ UTS ½d P2ss þ r2 P1ss þ r3 IX1  UTS ½d P2st þ r2 P1stW1
ð6:44Þ

Let Y X1  Z W1 , M1


and UTS ½d P2ss þ r2 P1ss þ r3 IX1  UTS ½d P2st þ r2 P1stW1 , M2
So Eq. (6.44) becomes

or; CTT ¼ M2 M1


1 ð6:45Þ

Now substituting the expression of CTT in Eq. (6.43), we get

or; CTS ¼ UTS ½d P2st þ r2 P1stW1  M2 M1


1 ZW
1
ð6:46Þ

Let M2 M11 ZW
1
, M3 and UTS ½d P2st þ r2 P1stW1 , M4
Therefore Eq. (6.46) may be expressed as

CTS ¼ M4  M3 ð6:47Þ

6.3.3 Numerical Examples

Example 6.3 (vide Appendix B, Program no. 20) Consider the non-homogeneous second order differential equation

    g̈3(t) + 3 ġ3(t) + 2 g3(t) = 2,  with ġ3(0) = −1 and g3(0) = 1        (6.48)

The exact solution of (6.48) is

    g3(t) = exp(−2t) − exp(−t) + 1        (6.49)

The samples of exact solution of g3(t) and the samples obtained using Eqs. (6.38)
and (6.47) in HF domain, for T = 1 s and m = 8, are compared in Fig. 6.5.
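As an independent cross-check of the exact samples used in that comparison (and not of the HF method itself), the closed form (6.49) can be evaluated at the sample points and Eq. (6.48) can be handed to a general-purpose solver such as MATLAB's ode45; a hedged sketch with assumed names follows.

    % Sketch: cross-check of Example 6.3 with ode45 (illustrative only, not the HF method)
    T = 1;  m = 8;  tSamples = (0:m)*T/m;
    exact = exp(-2*tSamples) - exp(-tSamples) + 1;        % closed form (6.49)

    % Eq. (6.48) written as a first order system: x1 = g3, x2 = g3'
    odefun = @(t, x) [x(2); 2 - 3*x(2) - 2*x(1)];
    [~, x] = ode45(odefun, tSamples, [1; -1]);            % initial conditions per (6.49)
    disp([exact.' x(:,1)])                                % compare with the exact-sample column of Table 6.3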
Example 6.4 (vide Appendix B, Program no. 20) Consider the homogeneous second order differential equation

    g̈4(t) + 100 g4(t) = 0,  with ġ4(0) = 0 and g4(0) = 2        (6.50)

Fig. 6.5 Solution of Example


6.3 in hybrid function
(HF) domain for, using a first
order integration matrices
(vide Eq. 6.38) and b using
one-shot integration
operational matrices (vide
Eq. 6.47), for m = 8 and
T = 1 s, along with direct
expansion of g3(t) via HF
(vide Appendix B, Program
no. 20)

Table 6.3 Comparison of sample values of the function g3(t) of Example 6.3 and its solutions obtained via recursive relations (6.38) and (6.47) in HF domain (vide Appendix B, Program no. 20)

t (s)   Exact samples of g3(t)   g3(t) via Eq. (6.38)   g3(t) via Eq. (6.47)
0       1.00000000               1.00000000             1.00000000
1/8     0.89630388               0.89542484             0.89542484
2/8     0.82772988               0.82639156             0.82639156
3/8     0.78507727               0.78355456             0.78355456
4/8     0.76134878               0.75981533             0.75981533
5/8     0.75124337               0.74980303             0.74980303
6/8     0.75076361               0.74947295             0.74947295
7/8     0.75691192               0.75579615             0.75579615
8/8     0.76745584               0.76652001             0.76652001

Table 6.4 Comparison of sample values of the function g4(t) of Example 6.4 and its solutions obtained via recursive relations (6.38) and (6.47) in HF domain (vide Appendix B, Program no. 20)

t (s)   Exact samples of g4(t)   g4(t) via Eq. (6.38)   g4(t) via Eq. (6.47)
0        2.00000000               2.00000000             2.00000000
1/8      1.75516512               1.76470588             1.76470588
2/8      1.08060461               1.11418685             1.11418685
3/8      0.14147440               0.20150621             0.20150621
4/8     −0.83229367              −0.75858766            −0.75858766
5/8     −1.60228723              −1.54019031            −1.54019031
6/8     −1.97998499              −1.95939525            −1.95939525
7/8     −1.87291337              −1.91756601            −1.91756601
8/8     −1.30728724              −1.42454476            −1.42454476

The exact solution of (6.50) is

    g4(t) = 2 cos(10 t)        (6.51)

Hybrid function domain solutions of Example 6.3 and Example 6.4, obtained via Eqs. (6.38) and (6.47), are presented in Tables 6.3 and 6.4 and are shown in parallel in Figs. 6.5 and 6.6. The results are contrary to the expectation that the use of second order one-shot integration operational matrices would yield better results: in fact, the results obtained via the repeated integration method (vide Eq. 6.38) and the one-shot integration method (vide Eq. 6.47) are the same.

Fig. 6.6 Solution of Example


6.4 in hybrid function
(HF) domain for, using a first
order integration matrices
(vide Eq. 6.38) and b using
one-shot integration
operational matrices (vide
Eq. 6.47), for m = 8 and
T = 0.4 s, along with direct
expansion of g4(t) via HF
(vide Appendix B, Program
no. 20)

This paradox may be explained as follows:


(i) For second order repeated integration the result of exact integration at each
stage is transformed to hybrid function domain. This should incur error at each
stage. However, for the sample-and-hold component, the first stage integration
does not incur any error, while the integration of the triangular function
component does. That is, integration of SHF components incur error at one
stage (i.e., the second stage) only, and integration of TF components incur
error at both the stages. Thus, with respect to SHF component integration, the
incurred error is the same for repeated integration method and one-shot inte-
gration method. This has been illustrated in Fig. 6.7.
Fig. 6.7 Repeated integration of the first member of the hybrid function set: a first member S0 of the HF set, b first integration of S0 and c subsequent integration of the function of figure b

(ii) For second order repeated integration, the expression for the error in the SHF component is h^3/12 and that in the TF component is (h^2/12 + h^3/24), where h is the width of the sub-interval. For one-shot integration the error is h^3/12 for the SHF component and h^3/24 for the TF component. It is noted that, h being small, the difference in error for second order integration using the two methods is really very small. Thus, the results obtained in Table 6.3 are found to be the same, indicating non-superiority of the second order integration matrices in HF domain.
(iii) In view of the above, it is expected that for even higher order integrations, like
third order integration, the results obtained via the above two methods will
differ appreciably. This is because, for third order repeated integration (say),
the integration of the SHF component will incur error at the second and third
stages, while such integration of the TF components will incur error at all the
three stages. However, for one-shot integration, the error is incurred at one go.
So, now we proceed to the task of comparing the repeated approach and the
one-shot approach for third order integration in hybrid function domain.

6.4 Solution of Third Order Linear Differential Equations

We present the two methods in the following based upon


(i) The repeated use of first order integration matrices.
(ii) The use of second order one-shot integration matrices.

6.4.1 Using HF Domain First Order Integration


Operational Matrices

Consider the third order linear differential equation


...
x ðtÞ þ a €xðtÞ þ b x_ ðtÞ þ c xðtÞ ¼ d ð6:52Þ

where, a, b, c and d are positive constants.


Let the initial conditions be xð0Þ ¼ k1 ; x_ ð0Þ ¼ k2 and €xð0Þ ¼ k3 :
Integrating Eq. (6.52) thrice we get,
Z ZZ ZZZ
xðtÞ þ a xðtÞ dt þ b xðtÞ dt þ c xðtÞ dt
ZZZ ZZ Z
¼d uðtÞ dt þ ðbk1 þ ak2 þ k3 Þ uðtÞ dt þ ðak1 þ k2 Þ uðtÞ dt þ k1 uðtÞ

ð6:53Þ

Let ðbk1 þ ak2 þ k3 Þ , r1 , ðak1 þ k2 Þ , r2 and k1 , r3 :


So, Eq. (6.53) takes the form
Z ZZ ZZZ
xðtÞ þ a xðtÞ dt þ b xðtÞ dt þ c xðtÞ dt
ZZZ ZZ Z ð6:54Þ
¼d uðtÞ dt þ r1 uðtÞ dt þ r2 uðtÞ dt þ r3 uðtÞ

Expanding all the time functions in m-term HF domain, we have


 Z Z   ZZ ZZ 
CTS SðmÞ þ CTT TðmÞ þ a CTS SðmÞ dt þ CTT TðmÞ dt þ b CTS SðmÞ dt þ CTT TðmÞ dt
 ZZZ ZZZ 
þ c CTS SðmÞ dt þ CTT TðmÞ dt
ZZZ ZZ Z
¼ d UTS SðmÞ dt þ r1 UTS SðmÞ dt þ r2 UTS SðmÞ dt þ r3 UTS SðmÞ

ð6:55Þ
2 3
where, UTS ¼ 4 1 1    1 1 5
|fflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflffl}
m terms

Using (5.10), (5.11) and (5.12), we can write


 Z  ZZ
1 1
CTS SðmÞ þ CTT TðmÞ þ 2a CTS þ CTT TðmÞ dt þ 2b CTS þ CTT TðmÞ dt
2 2
 ZZZ
1
þ 2c CTS þ CTT TðmÞ dt
2
ZZZ ZZ Z
¼ 2d UTS TðmÞ dt þ 2r1 UTS TðmÞ dt þ 2r2 UTS TðmÞ dt þ r3 UTS SðmÞ
 Z   Z
1 T 1 T
or; CS SðmÞ þ CT TðmÞ þ 2a CS þ CT
T T T
TðmÞ dt þ 2b CS þ CT P TðmÞ dt
T
2 2
  Z
1
þ 2c CTS þ CTT P2 TðmÞ dt
2
Z Z Z
¼ 2d US PT 2
TðmÞ dt þ 2r1 US P TðmÞ dt þ 2r2 US TðmÞ dt þ r3 UTS SðmÞ
T T

ð6:56Þ

Rearranging the terms and using the first order integration matrices we can write
Z
 
CTS  r3 UTS SðmÞ þ CTT TðmÞ ¼ 2d UTS P2 þ 2r1 UTS P þ 2r2 UTS TðmÞ dt
  Z
1  
 CTS þ CTT 2aI þ 2bP þ 2cP2 TðmÞ dt
2
   
or; CTS  r3 UTS SðmÞ þ CTT TðmÞ ¼ 2d UTS P2 þ 2r1 UTS P þ 2r2 UTS P1ts SðmÞ þ P1tt TðmÞ
 
1   
 CTS þ CTT 2aI þ 2bP þ 2cP2 P1ts SðmÞ þ P1tt TðmÞ
2
ð6:57Þ

Equating the like coefficients of SðmÞ from both the sides


 
CTS  r3 UTS ¼ 2d UTS P2 þ 2r1 UTS P þ 2r2 UTS P1ts
 
1 T  
 CS þ CT 2aI þ 2bP þ 2cP2 P1ts
T
2

Let 2d P2 þ 2r1 P þ 2r2 I , L and  aI þ bP þ cP2 , Q


Then
 
CTS  r3 UTS ¼ UTS L þ 2 CTS Q þ CTT Q P1ts ð6:58Þ

Now, rearranging the coefficients of TðmÞ of Eq. (6.57), we get


 
  1 T  
CTT ¼ 2d US P þ 2r1 US P þ 2r2 US P1tt  CS þ CT 2aI þ 2bP þ 2cP2 P1tt
T 2 T T T
2
 
or; CTT ¼ UTS L þ 2CTS Q þ CTT Q P1tt
ð6:59Þ

From Eqs. (6.58) and (6.59), we can write

CTS  r3 UTS ¼ CTT P1tt1 P1ts ð6:60Þ

Using the Eq. (5.2) in (6.60), we get


2
CTS  r3 UTS ¼ CTT P1ts ð6:61Þ
h

Solving the simultaneous Eqs. (6.58) and (6.61) for CTS and CTT , we have
  1
2 4
CT ¼ US ½L þ 2r3 Q I  Q  P1ts Q
T T
ð6:62Þ
h h
  1
2 2 4
CTS ¼ UTS ½L þ 2r3 Q I  Q  P1ts Q P1ts þ r3 UTS ð6:63Þ
h h h

6.4.2 Using HF Domain One-Shot Integration Operational


Matrices

We consider Eq. (6.52) and use one-shot operational matrices for integration of
second order differential equation and to determine its solution.
After integrating the Eq. (6.52) twice, now we start from Eq. (6.54). We expand
all the time functions in m-term HF domain and employ the one-shot integration
matrices.
From Eq. (6.54), we can write
   
CTS SðmÞ þ CTT TðmÞ þ a CTS P1ss þ CTT P1ts SðmÞ þ a CTS P1st þ CTT P1tt TðmÞ
   
þ b CTS P2ss þ CTT P2ts SðmÞ þ b CTS P2st þ CTT P2tt TðmÞ
   
þ c CTS P3ss þ CTT P3ts SðmÞ þ c CTS P3st þ CTT P3tt TðmÞ
   
¼ d UTS P3ss SðmÞ þ P3st TðmÞ þ r1 UTS P2ss SðmÞ þ P2st TðmÞ
 
þ r2 UTS P1ss SðmÞ þ P1st TðmÞ þ r3 UTS SðmÞ
ð6:64Þ

Rearranging the coefficients of SðmÞ , we have


P1ss P2ss P3ss
CTS þ a CTS P1ss þ a CTT þ b CTS P2ss þ b CTT þ c CTS P3ss þ c CTT
2 2 2
¼ UTS ½d P3ss þ r1 P2ss þ r2 P1ss þ r3 I
 
P1ss P2ss P3ss
or; CTS ½I þ a P1ss þ b P2ss þ c P3ss þ CTT a þb þc
2 2 2
¼ UTS ½d P3ss þ r1 P2ss þ r2 P1ss þ r3 I
ð6:65Þ

Rearranging the coefficients of TðmÞ , we get


P1st P2st P3st
CTT þ a CTS P1st þ a CTT þ b CTS P2st þ b CTT þ c CTS P3st þ c CTT
2 2 2
¼ UTS ½d P3st þ r1 P2st þ r2 P1st
 
P1st P2st P3st
or; CS ½a P1st þ b P2st þ c P3st þ CT I þ a
T T
þb þc
2 2 2
¼ UTS ½d P3st þ r1 P2st þ r2 P1st
ð6:66Þ

In Eq. (6.65), let us define


I þ a P1ss þ b P2ss þ c P3ss , X and a P1ss
2 þb P2ss
2 þc P3ss
2 , Y:
Then Eq. (6.65) may be written as

CTS X þ CTT Y ¼ UTS ½d P3ss þ r1 P2ss þ r2 P1ss þ r3 I ð6:67Þ

In Eq. (6.66), let us define


a P1st þ b P2st þ c P3st , W and I þ a P1st
2 þb P2st
2 þc P3st
2 , Z:
Then Eq. (6.66) may be expressed as

or; CTS W þ CTT Z ¼ UTS ½d P3st þ r1 P2st þ r2 P1st ð6:68Þ

Solving the matrix Eqs. (6.67) and (6.68) for CTS and CTT , we get

UTS ½d P3ss þ r1 P2ss þ r2 P1ss þ r3 IX1  CTT Y X1


¼ UTS ½d P3st þ r1 P2st þ r2 P1stW1  CTT Z W1
  ð6:69Þ
or; CTT Y X1  Z W1 ¼ UTS ½d P3ss þ r1 P2ss þ r2 P1ss þ r3 IX1
 UTS ½d P3st þ r1 P2st þ r2 P1stW1

Let Y X1  Z W1 , M1


and UTS ½d P3ss þ r1 P2ss þ r2 P1ss þ r3 IX1  UTS ½d P3st þ r1 P2st þ r2
1
P1stW , M2
So Eq. (6.69) becomes

or; CTT ¼ M2 M1


1 ð6:70Þ

Now substituting the expression of CTT in Eq. (6.68), we get

or; CTS ¼ UTS ½d P3st þ r1 P2st þ r2 P1stW1  M2 M1


1 ZW
1
ð6:71Þ

Let M2 M11 ZW
1
, M3 and UTS ½d P3st þ r1 P2st þ r2 P1stW1 , M4
Therefore Eq. (6.71) may be expressed as

CTS ¼ M4  M3 ð6:72Þ

It is known that inversion of upper or lower triangular matrices can be computed


by simple decomposition and multiplication.

6.4.3 Numerical Examples

Example 6.5 Consider the homogeneous third order differential equation

    g⃛5(t) + 3 g̈5(t) − ġ5(t) − 3 g5(t) = 0,  with g̈5(0) = 22, ġ5(0) = −6 and g5(0) = 6        (6.73)

Fig. 6.8 Exact solution of


Example 6.5 and comparison
of deviation using first order
integration matrices (vide
Eq. 6.63) and one-shot
integration operational
matrices (vide Eq. 6.72), for
m = 30 and T = 3 s

The exact solution of (6.73) is

    g5(t) = 2 exp(t) + 2 exp(−t) + 2 exp(−3t)        (6.74)

The samples of exact solution of g5(t) and deviations of the samples obtained
using Eqs. (6.63) and (6.72) in HF domain, for T = 3 s and m = 30, are compared in
Fig. 6.8.
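Before passing on, the closed form (6.74) can be checked numerically against Eq. (6.73); the short MATLAB fragment below (illustrative only, not an Appendix B program) evaluates the residual of the differential equation and the three initial conditions.

    % Sketch: numerical check that (6.74) satisfies (6.73) (illustrative only)
    g    = @(t)  2*exp(t) + 2*exp(-t) + 2*exp(-3*t);
    gd   = @(t)  2*exp(t) - 2*exp(-t) - 6*exp(-3*t);      % first derivative
    gdd  = @(t)  2*exp(t) + 2*exp(-t) + 18*exp(-3*t);     % second derivative
    gddd = @(t)  2*exp(t) - 2*exp(-t) - 54*exp(-3*t);     % third derivative
    t = linspace(0, 3, 31);                               % the m = 30 sample points
    residual = gddd(t) + 3*gdd(t) - gd(t) - 3*g(t);       % LHS of (6.73)
    disp([max(abs(residual)), g(0), gd(0), gdd(0)])       % ~0, 6, -6, 22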

6.5 Conclusion

One shot integration operational matrices like P2ss, P2st, P2ts, P2tt, P3ss, P3st,
P3ts, P3tt for 2nd and 3rd order repeated integration and consequently the gen-
eralized one-shot matrices for n times repeated integration, have been used for
solution of higher order differential equations. Some examples, separately for
second order and third order differential equations, are treated to compare the results
obtained via repeated use of 1st order operational matrices and using higher order
one-shot operational matrices. The results are presented in Figs. 6.5, 6.6 and 6.8 to
compare them graphically. It is observed that (vide Fig. 6.8) the method based upon one-shot operational matrices produces a much more accurate result compared to the method using only 1st order integration operational matrices.
One first order differential equation has been solved via the well established 4th order Runge-Kutta method, and its results are compared in Table 6.2 with those obtained via the HF domain recursive relations. It is noted that the 4th order Runge-Kutta method maintains its supremacy compared to HF domain analysis, as far as the solution of that differential equation is concerned.
in mind that while the 4th order Runge-Kutta method provides smart solution to
differential equations only, the HF domain technique can (i) approximate square
integrable time functions (ii) integrate time functions and (iii) can solve higher
order differential equations with considerable accuracy. However, HF domain
analysis with a higher value of m can produce more improved result to become a
significant contender to 4th order Runge-Kutta method.
It is known that inversion of upper or lower triangular matrices can be computed
by simple decomposition and multiplication. Hence the inversions in Eqs. (6.37),
(6.38), (6.46), (6.47), (6.62), (6.63), (6.70) and (6.72) will not pose any compu-
tational burden while solving for the HF domain solution matrices CTs and CTT .
Finally, an advantage of HF based analysis is, the sample-and-hold function
based results may easily be obtained by simply dropping the triangular part of the
hybrid function domain solution.

References

1. Tenenbaum, M., Pollard, H.: Ordinary differential equations. Dover publications, USA (1985)
2. Butcher, J.C.: Numerical methods for ordinary differential equations (2nd edn). Wiley,
Hoboken (2008)
3. Coddington, E.A.: An introduction to differential equations. Dover publications, USA (1989)
4. Butcher, J.C.: Numerical methods for ordinary differential equations in the 20th century.
J. Comput. Appl. Math. 125, 1–29 (2000)
5. Simos, T.E.: Modified Runge-Kutta methods for the numerical solution of ODEs with
oscillating solutions. Appl. Math. Comput. 84, 131–143 (1997)
6. Rao, G.P.: Piecewise constant orthogonal functions and their applications in systems and
control, LNC1S, vol. 55. Springer, Berlin (1983)
7. Jiang, J.H., Schaufelberger, W.: Block pulse functions and their application in control system,
LNCIS, vol. 179. Springer, Berlin (1992)
8. Feng, Y.Y., Qi, D.X.: A sequence of piecewise orthogonal polynomials, SIAM J
Chapter 7
Convolution of Time Functions

Abstract In this chapter, theory of hybrid function domain convolution technique


is presented. First, the rules for convolution for sample-and-hold functions and
triangular functions are derived. Then, these two component results are combined to
get the rules for convolution in HF domain. This idea is used to determine the result
of convolution of two time functions in HF domain. One example and eleven
figures are presented to illustrate the idea.

Having established the theoretical principles of the orthogonal hybrid function set,
it is worthwhile to investigate the convolution operation of two real-valued func-
tions, in hybrid function domain. This will later be useful for analysing control
systems.
In control system analysis, the well-known relation [1] involving the input and
the output of a linear time invariant system is given by

CðsÞ ¼ GðsÞ RðsÞ ð7:1Þ

where, C(s) is the Laplace transform of output, G(s) is the transfer function of the
plant, R(s) is the Laplace transform of the input, and s is the Laplace operator.
In time domain, Eq. (7.1) takes the form

    c(t) = g(t) * r(t) = ∫_0^∞ g(τ) r(t − τ) dτ        (7.2)

That is, the output c(t) in the time domain involves the convolution of the plant
impulse response and the input function. The output c(t) is determined by evalu-
ating the convolution integral of the RHS of Eq. (7.2), where, it has been assumed
that the integral exists.
Evaluation of this integral is frequently needed in the analysis of control sys-
tems. In what follows, such an integral is evaluated in its general form in hybrid
function domain and the results are used to determine the output of a single input
single output (SISO) linear control system.


7.1 The Convolution Integral

Convolution [2] of two functions is a significant physical concept in many diverse


scientific fields. However, as in the case of many important mathematical rela-
tionships, the convolution integral does not readily unveil itself as to its implica-
tions. The convolution integral of two convolving time functions x(t) and h(t) over
the entire time scale is given by

y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) \, d\tau                (7.3)

where * indicates convolution.


Let x(t) and h(t) be two time functions represented by Figs. 7.1a, b, respectively.
To evaluate Eq. (7.3), the functions x(τ) and h(t − τ) are required. The functions x(τ) and h(τ) are obtained by simply replacing the variable t with the variable τ. h(−τ) is the mirror image of h(τ) about the ordinate, and h(t − τ) is the function h(−τ) shifted by the quantity t. Functions x(τ), h(−τ), and h(t − τ) are shown in Fig. 7.2. The resultant
of convolution of x(t) and h(t), as per Eq. (7.3), is the triangular function shown in
Fig. 7.3.
We can now summarize the steps for convolution as:
(i) Folding: Take the mirror image of h(τ) about the ordinate as shown in
Fig. 7.2b.
(ii) Shifting: Shift h(−τ) by the amount t as shown in Fig. 7.2c.

Fig. 7.1 Two typical waveforms for convolution

Fig. 7.2 Graphical illustration of folding and shifting operations



Fig. 7.3 Graphical example of convolution [2]

(iii) Multiplication: Multiply the shifted function h(t − τ) and x(τ).


(iv) Integration: The area under the product of h(t − τ) and x(τ) is the value of the
convolution at time instant t.
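These four steps can also be carried out numerically in a few lines. The MATLAB sketch below is a minimal illustration of Eq. (7.3) by direct fold-shift-multiply-integrate; the particular signals x(t) and h(t), the time span and the step size are assumptions made purely for illustration, not taken from the text.

% Minimal numerical illustration of steps (i)-(iv) for Eq. (7.3).
dt = 0.01;  t = 0:dt:5;                  % time grid (assumed)
x  = double(t <= 1);                     % x(t): unit pulse of width 1 s (assumed)
y  = zeros(size(t));
for k = 1:numel(t)
    tau   = t(1:k);                      % integration variable tau in [0, t_k]
    hfold = exp(-(t(k) - tau));          % folded and shifted h(t_k - tau), with h(t) = exp(-t)
    y(k)  = sum(x(1:k) .* hfold) * dt;   % multiply and integrate (rectangular rule)
end
plot(t, y), xlabel('t'), ylabel('y(t)')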

7.2 Convolution of Basic Components of Hybrid Functions

The convolution process and ‘deconvolution’ in the block pulse domain [3, 4] were
introduced by Kwong and Chen [5] for system identification. We introduce the
convolution as well as ‘deconvolution’ in hybrid function domain and subsequently
use the results for control system analysis and synthesis.
Hybrid function expansion involves two kinds of basis functions:
sample-and-hold function [6] and triangular function [7]. To derive the expression
for convolution of two time functions in hybrid function domain, we consider
convolution of different interactive components of hybrid functions [8]. That is, we
need to compute the equidistant samples, having the same sampling period as the
convolving functions, of the resulting function. These samples may be used for
hybrid function expansion of the resulting function as per Eq. (2.13).
That is, the principle of HF domain convolution is:
(i) Expand the convolving functions in HF domain using their samples.
(ii) Convolve the component sample-and-hold functions [9] and triangular func-
tions [7].
(iii) Express the result of convolution in hybrid function domain using the samples
of the resulting function.
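To make step (i) concrete, the HF coefficients of a sampled function follow directly from its equidistant samples: the SHF coefficients are the first m samples and the TF coefficients are the successive sample differences. The following minimal MATLAB sketch is ours (the function name hf_expand and its interface are assumptions, not from Appendix B).

% hf_expand: HF coefficients from the (m+1) equidistant samples of f over [0, T]
% (to be saved as hf_expand.m)
function [cS, cT] = hf_expand(f, T, m)
    h  = T/m;
    s  = f((0:m)*h);      % samples f(0), f(h), ..., f(mh)
    cS = s(1:m);          % sample-and-hold coefficients: the first m samples
    cT = diff(s);         % triangular coefficients: successive sample differences
end

For instance, [cS, cT] = hf_expand(@(t) exp(-0.5*t).*sin(2*t), 5, 25) gives the coefficients used later for the direct expansion in Example 7.1.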
Let us consider two functions r(t) and g(t) and expand these functions into
hybrid function domain as

   
r(t) \approx R_S^T S_{(m)} + R_T^T T_{(m)}   and   g(t) \approx G_S^T S_{(m)} + G_T^T T_{(m)}

Then the result of convolution y(t) is given by


   
y(t) = r(t) * g(t) \approx [R_S^T S_{(m)} + R_T^T T_{(m)}] * [G_S^T S_{(m)} + G_T^T T_{(m)}]

or, y(t) \approx [R_S^T S_{(m)}] * [G_S^T S_{(m)}] + [R_S^T S_{(m)}] * [G_T^T T_{(m)}]
            + [R_T^T T_{(m)}] * [G_S^T S_{(m)}] + [R_T^T T_{(m)}] * [G_T^T T_{(m)}]                (7.4)

where

R_S^T = [r_0  r_1  r_2  ...  r_i  ...  r_{m-1}],
R_T^T = [(r_1 - r_0)  (r_2 - r_1)  (r_3 - r_2)  ...  (r_i - r_{i-1})  ...  (r_m - r_{m-1})],
G_S^T = [g_0  g_1  g_2  ...  g_i  ...  g_{m-1}]   and
G_T^T = [(g_1 - g_0)  (g_2 - g_1)  (g_3 - g_2)  ...  (g_i - g_{i-1})  ...  (g_m - g_{m-1})]

Inspection of Eq. (7.4) reveals that to determine y(t), we need to compute the
results of three types of convolution operations, namely
(i) Convolution between two sample-and-hold function trains.
(ii) Convolution between a sample-and-hold function train and a triangular
function train (or vice versa).
(iii) Convolution between two triangular function trains.
To achieve this end, we present below the convolution of all possible combi-
nations of elementary sample-and-hold functions, triangular functions and subse-
quently their trains.

7.2.1 Convolution of Two Elementary Sample-and-Hold Functions

In Figs. 7.4a, b, a1(t) and b1(t) are two sample-and-hold functions of different
amplitudes, both occurring at t = 0. The result of convolution of these two functions
a1(t) and b1(t) is the triangular function c1(t) shown in Fig. 7.4c.
The function c1(t) may be expressed in hybrid function domain using its three samples, namely, 0, h a_1 b_1, and 0.

Fig. 7.4 Convolution of two elementary sample-and-hold functions (SHF)

Fig. 7.5 Two trains of sample-and-hold functions

7.2.2 Convolution of Two Sample-and-Hold Function Trains

Now, we extend our idea to the convolution of two sample-and-hold function trains
r1(t) and g1(t), comprised only of four component functions (m = 4), with different
amplitudes. These function trains are shown in Fig. 7.5 along with their sample
values. Finally, we represent the result in hybrid function domain.
Five samples of the resulting function y1(t), with the sampling period h, are
0, h g_{10} r_{10}, h(g_{10} r_{11} + g_{11} r_{10}), h(g_{10} r_{12} + g_{11} r_{11} + g_{12} r_{10}), and h(g_{10} r_{13} + g_{11} r_{12} + g_{12} r_{11} + g_{13} r_{10}).
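These samples are simply h times the discrete convolution of the two amplitude sequences, with a leading zero. A brief MATLAB check (the numerical amplitudes below are assumed purely for illustration):

h  = 0.25;  r1 = [1 2 3 4];  g1 = [2 1 0.5 0.25];   % m = 4 assumed amplitudes
c  = conv(g1, r1);                                   % discrete convolution of the sequences
y1samples = [0, h*c(1:4)]                            % samples of y1(t) at t = 0, h, 2h, 3h, 4h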
The resulting function y1(t) may be described in the HF domain as

y1(t) = r1(t) * g1(t) ≜ Y1_S^T S_{(4)} + Y1_T^T T_{(4)}                (7.5)

where

Y1_S^T = [0   h g_{10} r_{10}   h(g_{10} r_{11} + g_{11} r_{10})   h(g_{10} r_{12} + g_{11} r_{11} + g_{12} r_{10})]
       ≜ [y1_{S0}  y1_{S1}  y1_{S2}  y1_{S3}]

and

Y1_T^T = [{h g_{10} r_{10} - 0}   {h(g_{10} r_{11} + g_{11} r_{10}) - h g_{10} r_{10}}
          {h(g_{10} r_{12} + g_{11} r_{11} + g_{12} r_{10}) - h(g_{10} r_{11} + g_{11} r_{10})}
          {h(g_{10} r_{13} + g_{11} r_{12} + g_{12} r_{11} + g_{13} r_{10}) - h(g_{10} r_{12} + g_{11} r_{11} + g_{12} r_{10})}]
       ≜ [y1_{T0}  y1_{T1}  y1_{T2}  y1_{T3}]

Writing Eq. (7.5) in matrix form, we have


y1(t) = h [g_{10}  g_{11}  g_{12}  g_{13}] \begin{bmatrix} 0 & r_{10} & r_{11} & r_{12} \\ 0 & 0 & r_{10} & r_{11} \\ 0 & 0 & 0 & r_{10} \\ 0 & 0 & 0 & 0 \end{bmatrix} S_{(4)}
      + h [g_{10}  g_{11}  g_{12}  g_{13}] \begin{bmatrix} r_{10} & (r_{11}-r_{10}) & (r_{12}-r_{11}) & (r_{13}-r_{12}) \\ 0 & r_{10} & (r_{11}-r_{10}) & (r_{12}-r_{11}) \\ 0 & 0 & r_{10} & (r_{11}-r_{10}) \\ 0 & 0 & 0 & r_{10} \end{bmatrix} T_{(4)}                (7.6)

Writing (7.6) in a compact form, we get

y1(t) = h [g_{10}  g_{11}  g_{12}  g_{13}] [[0  r_{10}  r_{11}  r_{12}]] S_{(4)}
      + h [g_{10}  g_{11}  g_{12}  g_{13}] [[r_{10}  (r_{11}-r_{10})  (r_{12}-r_{11})  (r_{13}-r_{12})]] T_{(4)}                (7.7)

where [[a  b  c]] ≜ \begin{bmatrix} a & b & c \\ 0 & a & b \\ 0 & 0 & a \end{bmatrix}
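Incidentally, the operator [[·]] is just an upper triangular Toeplitz matrix whose first row is the bracketed sequence, so it can be generated in one line of MATLAB; the numerical values below merely stand in for a, b and c.

row = [1 2 3];                                           % stands for [a b c]; values assumed
TT  = toeplitz([row(1), zeros(1, numel(row)-1)], row)
% TT = [1 2 3; 0 1 2; 0 0 1], i.e. [[a b c]] with a = 1, b = 2, c = 3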

7.2.3 Convolution of an Elementary Sample-and-Hold Function and an Elementary Triangular Function

The result of convolution of a sample-and-hold function and a triangular function is


shown in Fig. 7.6.
The function c2(t) may now be expressed in hybrid function domain using its three equidistant samples 0, (h/2) a_1 b_2, and 0.

Fig. 7.6 Convolution of a sample-and-hold function and a right handed triangular function

Fig. 7.7 Trains of triangular function and sample-and-hold functions

7.2.4 Convolution of a Triangular Function Train and a Sample-and-Hold Function Train

A triangular function train and a sample-and-hold function train of four component


functions (m = 4) each, having different amplitudes, are shown in Fig. 7.7.
After convolution of these two trains, five samples of the resulting function y2(t),
with the sampling period h, are 0, (h/2) g_{10} r_{20}, (h/2)(g_{10} r_{21} + g_{11} r_{20}), (h/2)(g_{10} r_{22} + g_{11} r_{21} + g_{12} r_{20}), and (h/2)(g_{10} r_{23} + g_{11} r_{22} + g_{12} r_{21} + g_{13} r_{20}).

The function y2(t) may be described in HF domain as

y2(t) = r2(t) * g1(t) ≜ Y2_S^T S_{(4)} + Y2_T^T T_{(4)}                (7.8)

where

Y2_S^T = [0   (h/2) g_{10} r_{20}   (h/2)(g_{10} r_{21} + g_{11} r_{20})   (h/2)(g_{10} r_{22} + g_{11} r_{21} + g_{12} r_{20})]
       ≜ [y2_{S0}  y2_{S1}  y2_{S2}  y2_{S3}]

and

Y2_T^T = [(h/2){g_{10} r_{20} - 0}   (h/2){(g_{10} r_{21} + g_{11} r_{20}) - g_{10} r_{20}}
          (h/2){(g_{10} r_{22} + g_{11} r_{21} + g_{12} r_{20}) - (g_{10} r_{21} + g_{11} r_{20})}
          (h/2){(g_{10} r_{23} + g_{11} r_{22} + g_{12} r_{21} + g_{13} r_{20}) - (g_{10} r_{22} + g_{11} r_{21} + g_{12} r_{20})}]
       ≜ [y2_{T0}  y2_{T1}  y2_{T2}  y2_{T3}]

Writing Eq. (7.8) in matrix form, we get


y2(t) = (h/2) [g_{10}  g_{11}  g_{12}  g_{13}] \begin{bmatrix} 0 & r_{20} & r_{21} & r_{22} \\ 0 & 0 & r_{20} & r_{21} \\ 0 & 0 & 0 & r_{20} \\ 0 & 0 & 0 & 0 \end{bmatrix} S_{(4)}
      + (h/2) [g_{10}  g_{11}  g_{12}  g_{13}] \begin{bmatrix} r_{20} & (r_{21}-r_{20}) & (r_{22}-r_{21}) & (r_{23}-r_{22}) \\ 0 & r_{20} & (r_{21}-r_{20}) & (r_{22}-r_{21}) \\ 0 & 0 & r_{20} & (r_{21}-r_{20}) \\ 0 & 0 & 0 & r_{20} \end{bmatrix} T_{(4)}                (7.9)

Writing (7.9) in a compact form, we have

y2(t) = (h/2) [g_{10}  g_{11}  g_{12}  g_{13}] [[0  r_{20}  r_{21}  r_{22}]] S_{(4)}
      + (h/2) [g_{10}  g_{11}  g_{12}  g_{13}] [[r_{20}  (r_{21}-r_{20})  (r_{22}-r_{21})  (r_{23}-r_{22})]] T_{(4)}                (7.10)

7.2.5 Convolution of Two Elementary Triangular Functions

Let, a2(t) and b2(t) be two elementary triangular functions, as represented in


Fig. 7.8a, b. Figure 7.8c shows the convolution result of these two functions. The
function c3(t) may now be expressed in hybrid function domain using its three
samples, namely, 0, (h/6) a_2 b_2, and 0.

7.2.6 Convolution of Two Triangular Function Trains

Now we compute the result of convolution of two triangular function trains com-
prised of four component functions (m = 4) each, having different amplitudes. These
trains are shown in Fig. 7.9.
Five samples of the resulting convolution function y3(t), with the sampling
period h, are 0, (h/6) g_{20} r_{20}, (h/6)(g_{20} r_{21} + g_{21} r_{20}), (h/6)(g_{20} r_{22} + g_{21} r_{21} + g_{22} r_{20}), and (h/6)(g_{20} r_{23} + g_{21} r_{22} + g_{22} r_{21} + g_{23} r_{20}).

Hence, y3(t), expressed in HF domain, is



Fig. 7.8 Convolution of two elementary triangular functions

Fig. 7.9 Two trains of triangular functions

y3(t) = r2(t) * g2(t) ≜ Y3_S^T S_{(4)} + Y3_T^T T_{(4)}                (7.11)

where
 
Y3_S^T = [0   (h/6) g_{20} r_{20}   (h/6)(g_{20} r_{21} + g_{21} r_{20})   (h/6)(g_{20} r_{22} + g_{21} r_{21} + g_{22} r_{20})]
       ≜ [y3_{S0}  y3_{S1}  y3_{S2}  y3_{S3}]

and

Y3_T^T = [(h/6){g_{20} r_{20} - 0}   (h/6){(g_{20} r_{21} + g_{21} r_{20}) - g_{20} r_{20}}
          (h/6){(g_{20} r_{22} + g_{21} r_{21} + g_{22} r_{20}) - (g_{20} r_{21} + g_{21} r_{20})}
          (h/6){(g_{20} r_{23} + g_{21} r_{22} + g_{22} r_{21} + g_{23} r_{20}) - (g_{20} r_{22} + g_{21} r_{21} + g_{22} r_{20})}]
       ≜ [y3_{T0}  y3_{T1}  y3_{T2}  y3_{T3}]

Writing Eq. (7.11) in matrix form, we get



y3(t) = (h/6) [g_{20}  g_{21}  g_{22}  g_{23}] \begin{bmatrix} 0 & r_{20} & r_{21} & r_{22} \\ 0 & 0 & r_{20} & r_{21} \\ 0 & 0 & 0 & r_{20} \\ 0 & 0 & 0 & 0 \end{bmatrix} S_{(4)}
      + (h/6) [g_{20}  g_{21}  g_{22}  g_{23}] \begin{bmatrix} r_{20} & (r_{21}-r_{20}) & (r_{22}-r_{21}) & (r_{23}-r_{22}) \\ 0 & r_{20} & (r_{21}-r_{20}) & (r_{22}-r_{21}) \\ 0 & 0 & r_{20} & (r_{21}-r_{20}) \\ 0 & 0 & 0 & r_{20} \end{bmatrix} T_{(4)}                (7.12)

Writing in a compact form, we have

y3(t) = (h/6) [g_{20}  g_{21}  g_{22}  g_{23}] [[0  r_{20}  r_{21}  r_{22}]] S_{(4)}
      + (h/6) [g_{20}  g_{21}  g_{22}  g_{23}] [[r_{20}  (r_{21}-r_{20})  (r_{22}-r_{21})  (r_{23}-r_{22})]] T_{(4)}                (7.13)

7.3 Convolution of Two Time Functions in HF Domain [8]

Consider two square integrable time functions r(t) and g(t). These two time func-
tions are expressed in HF domain using their equidistant samples. Figure 7.10
shows these functions with their five time samples each. If we express these
functions in hybrid function domain for m = 4, we have

r(t) \approx [r_0  r_1  r_2  r_3] S_{(4)}(t) + [(r_1-r_0)  (r_2-r_1)  (r_3-r_2)  (r_4-r_3)] T_{(4)}(t)
g(t) \approx [g_0  g_1  g_2  g_3] S_{(4)}(t) + [(g_1-g_0)  (g_2-g_1)  (g_3-g_2)  (g_4-g_3)] T_{(4)}(t)

Hence, the result of convolution in HF domain may be derived using the


sub-results of convolutions between different sample-and-hold and triangular
function trains of both r(t) and g(t), deduced in Sect. 7.2.

Fig. 7.10 Two time functions r(t) and g(t) and their equidistant samples

Using the results of Eqs. (7.7), (7.10) and (7.13), we can write

yðtÞ ¼ rðtÞ  gðtÞ


 h½ g0 g1 g2 g3  ½½ 0 r0 r1 r2  Sð4Þ
þ h½ g0 g1 g2 g3  ½½ r0 ðr1  r0 Þ ðr2  r1 Þ ðr3  r2 Þ  Tð4Þ
h
þ ½ ðg1  g0 Þ ðg2  g1 Þ ðg3  g2 Þ ðg4  g3 Þ  ½½ 0 r0 r1 r2  Sð4Þ
2
h
þ ½ ðg1  g0 Þ ðg2  g1 Þ ðg3  g2 Þ ðg4  g3 Þ  ½½ r0 ðr1  r0 Þ ðr2  r1 Þ ðr3  r2 Þ  Tð4Þ
2
h
þ ½ g0 g1 g2 g3  ½½ 0 ðr1  r0 Þ ðr2  r1 Þ ðr3  r2 Þ  Sð4Þ
2
h
þ ½ g0 g1 g2 g3  ½½ ðr1  r0 Þ ðr2  2r1 þ r0 Þ ðr3  2r2 þ r1 Þ ðr4  2r3 þ r2 Þ  Tð4Þ
2
h
þ ½ ðg1  g0 Þ ðg2  g1 Þ ðg3  g2 Þ ðg4  g3 Þ  ½½ 0 ðr1  r0 Þ ðr2  r1 Þ ðr3  r2 Þ  Sð4Þ
6
h
þ ½ ðg1  g0 Þ ðg2  g1 Þ ðg3  g2 Þ ðg4  g3 Þ 
6
 ½½ ðr1  r0 Þ ðr2  2r1 þ r0 Þ ðr3  2r2 þ r1 Þ ðr4  2r3 þ r2 Þ  Tð4Þ
¼ f h½ g0 g1 g2 g3  ½½ 0 r0 r1 r2 
h
þ ½ ðg1  g0 Þ ðg2  g1 Þ ðg3  g2 Þ ðg4  g3 Þ  ½½ 0 r0 r1 r2 
2
h
þ ½ g0 g1 g2 g3  ½½ 0 ðr1  r0 Þ ðr2  r1 Þ ðr3  r2 Þ 
2 
h
þ ½ ðg1  g0 Þ ðg2  g1 Þ ðg3  g2 Þ ðg4  g3 Þ  ½½ 0 ðr1  r0 Þ ðr2  r1 Þ ðr3  r2 Þ  Sð4Þ
6
þ f h½ g0 g1 g2 g3  ½½ r0 ðr1  r0 Þ ðr2  r1 Þ ðr3  r2 Þ 
h
þ ½ ðg1  g0 Þ ðg2  g1 Þ ðg3  g2 Þ ðg4  g3 Þ  ½½ r0 ðr1  r0 Þ ðr2  r1 Þ ðr3  r2 Þ 
2
h
þ ½ g0 g1 g2 g3  ½½ ðr1  r0 Þ ðr2  2r1 þ r0 Þ ðr3  2r2 þ r1 Þ ðr4  2r3 þ r2 Þ 
2
h
þ ½ ðg1  g0 Þ ðg2  g1 Þ ðg3  g2 Þ ðg4  g3 Þ 
6
½½ ðr1  r0 Þ ðr2  2r1 þ r0 Þ ðr3  2r2 þ r1 Þ ðr4  2r3 þ r2 Þ g Tð4Þ
ð7:14Þ

Equation (7.14) can be simplified to be arranged in the following form:


8 2 3
> 0 ð2r1 þ r0 Þ ð2r2 þ r1 Þ ð2r3 þ r2 Þ
>
>
< 60 ð2r1 þ r0 Þ ð2r2 þ r1 Þ 7
h 6 0 7
yðtÞ ¼ ½ g0 g1 g2 g3  6 7
6>> 40 0 0 ð2r1 þ r0 Þ 5
>
:
0 0 0 0
2 39
0 ðr1 þ 2r0 Þ ðr2 þ 2r1 Þ
ðr3 þ 2r2 Þ >
>
>
60 ðr1 þ 2r0 Þ ðr2 þ 2r1 Þ 7 =
6 0 7
þ ½ g1 g2 g3 g4  6 7 Sð4Þ
40 0 0 ðr1 þ 2r0 Þ 5> >
>
;
0 0 0 0
8 2 3
> ð2r1 þ r0 Þ ð2r2  r1  r0 Þ ð2r3  r2  r1 Þ ð2r4  r3  r2 Þ
>
>
< 6 ð2r1 þ r0 Þ ð2r2  r1  r0 Þ ð2r3  r2  r1 Þ 7
h 6 0 7
þ ½ g0 g1 g2 g3  6 7
6>> 4 0 0 ð2r1 þ r0 Þ ð2r2  r1  r0 Þ 5
>
:
0 0 0 ð2r1 þ r0 Þ
2 39
ðr1 þ 2r0 Þ ðr2 þ r1  2r0 Þ ðr3 þ r2  2r1 Þ ðr4 þ r3  2r2 Þ >
>
>
6 ðr1 þ 2r0 Þ ðr2 þ r1  2r0 Þ ðr3 þ r2  2r1 Þ 7 =
6 0 7
þ ½ g1 g2 g3 g4  6 7 Tð4Þ
4 0 0 ðr1 þ 2r0 Þ ðr2 þ r1  2r0 Þ >
5 >
>
;
0 0 0 ðr1 þ 2r0 Þ
2 3
0 ð2r1 þ r0 Þ ð2r2 þ r1 Þ ð2r3 þ r2 Þ
6 0 ðr þ 2r Þ ðr þ 4r þ r Þ ðr þ 4r þ r Þ 7
h 6 1 0 2 1 0 3 2 1 7
¼ ½ g0 g1 g2 g3  6 7 Sð4Þ
6 40 0 ðr1 þ 2r0 Þ ðr2 þ 4r1 þ r0 Þ 5
0 0 0 ðr1 þ 2r0 Þ
8 2 3
> ð2r1 þ r0 Þ ð2r2  r1  r0 Þ ð2r3  r2  r1 Þ ð2r4  r3  r2 Þ
>
>
< 6 ð2r1 þ r0 Þ ð2r2  r1  r0 Þ ð2r3  r2  r1 Þ 7
h 6 0 7
þ ½ g0 g1 g2 g3  6 7
6>> 4 0 0 ð2r1 þ r0 Þ ð2r2  r1  r0 Þ 5
>
:
0 0 0 ð2r1 þ r0 Þ
2 39
ðr1 þ 2r0 Þ ðr2 þ r1  2r0 Þ ðr3 þ r2  2r1 Þ ðr4 þ r3  2r2 Þ >
>
>
6 ðr1 þ 2r0 Þ ðr2 þ r1  2r0 Þ ðr3 þ r2  2r1 Þ 7 =
6 0 7
þ ½ g1 g2 g3 g4  6 7 Tð4Þ
4 0 0 ðr1 þ 2r0 Þ ðr2 þ r1  2r0 Þ 5 >>
>
;
0 0 0 ðr1 þ 2r0 Þ
ð7:15Þ

Now let,

R_0 ≜ 2r_1 + r_0
R_1 ≜ 2r_2 + r_1
R_2 ≜ 2r_3 + r_2
R_3 ≜ 2r_4 + r_3
R_4 ≜ r_1 + 2r_0
R_5 ≜ r_2 + 4r_1 + r_0                (7.16)
R_6 ≜ r_3 + 4r_2 + r_1
R_7 ≜ r_4 + 4r_3 + r_2
R_8 ≜ r_2 + r_1 - 2r_0
R_9 ≜ r_3 + r_2 - 2r_1
R_{10} ≜ r_4 + r_3 - 2r_2

Now, Eq. (7.15) can be written as follows


y(t) = (h/6) [g_0  g_1  g_2  g_3] \begin{bmatrix} 0 & R_0 & R_1 & R_2 \\ 0 & R_4 & R_5 & R_6 \\ 0 & 0 & R_4 & R_5 \\ 0 & 0 & 0 & R_4 \end{bmatrix} S_{(4)}
 + \Big\{ (h/6) [g_0  g_1  g_2  g_3] \begin{bmatrix} R_0 & (R_1-R_0) & (R_2-R_1) & (R_3-R_2) \\ 0 & R_0 & (R_1-R_0) & (R_2-R_1) \\ 0 & 0 & R_0 & (R_1-R_0) \\ 0 & 0 & 0 & R_0 \end{bmatrix}
 + (h/6) [g_1  g_2  g_3  g_4] \begin{bmatrix} R_4 & R_8 & R_9 & R_{10} \\ 0 & R_4 & R_8 & R_9 \\ 0 & 0 & R_4 & R_8 \\ 0 & 0 & 0 & R_4 \end{bmatrix} \Big\} T_{(4)}                (7.17)

y(t) = (h/6) [0   (g_0 R_0 + g_1 R_4)   (g_0 R_1 + g_1 R_5 + g_2 R_4)   (g_0 R_2 + g_1 R_6 + g_2 R_5 + g_3 R_4)] S_{(4)}
     + (h/6) [{g_0 R_0 + g_1 R_4}   {g_0 (R_1 - R_0) + g_1 (R_0 + R_8) + g_2 R_4}
              {g_0 (R_2 - R_1) + g_1 (R_1 - R_0 + R_9) + g_2 (R_0 + R_8) + g_3 R_4}
              {g_0 (R_3 - R_2) + g_1 (R_2 - R_1 + R_{10}) + g_2 (R_1 - R_0 + R_9) + g_3 (R_0 + R_8) + g_4 R_4}] T_{(4)}                (7.18)

Equation (7.18) can be modified to

y(t) = (h/6) [0   (g_0 R_0 + g_1 R_4)   (g_0 R_1 + g_1 R_5 + g_2 R_4)   (g_0 R_2 + g_1 R_6 + g_2 R_5 + g_3 R_4)] S_{(4)}
     + (h/6) [{g_0 R_0 + g_1 R_4}   {g_0 (R_1 - R_0) + g_1 (R_5 - R_4) + g_2 R_4}
              {g_0 (R_2 - R_1) + g_1 (R_6 - R_5) + g_2 (R_5 - R_4) + g_3 R_4}
              {g_0 (R_3 - R_2) + g_1 (R_7 - R_6) + g_2 (R_6 - R_5) + g_3 (R_5 - R_4) + g_4 R_4}] T_{(4)}                (7.19)

Equation (7.19) represents the final output/result of the two convolving time
functions, for m = 4, in hybrid function domain. In doing so, we have utilized the
results of convolution of different possible combinations of SHF and TF trains to
yield the final result expressed in HF domain.
Direct expansion of the output y(t) in HF domain is

y(t) ≜ [y_0  y_1  y_2  y_3] S_{(4)} + [(y_1 - y_0)  (y_2 - y_1)  (y_3 - y_2)  (y_4 - y_3)] T_{(4)}                (7.20)

Comparing Eqs. (7.19) and (7.20), we get

y_0 = 0
y_1 = (h/6) [g_0 R_0 + g_1 R_4]
y_2 = (h/6) [g_0 R_1 + g_1 R_5 + g_2 R_4]
y_3 = (h/6) [g_0 R_2 + g_1 R_6 + g_2 R_5 + g_3 R_4]

By following the pattern, we can write down the expression for y_4 as

y_4 = (h/6) [g_0 R_3 + g_1 R_7 + g_2 R_6 + g_3 R_5 + g_4 R_4]

If we determine the term y_4 by adding the fourth element of the row matrix multiplying the S_{(4)} vector to the fourth element of the row matrix multiplying the T_{(4)} vector, the result turns out to be the same as above.
Hence, the generalized form of the i-th output coefficient is

y_i = (h/6) \Big[ g_0 R_{(i-1)} + \sum_{p=1}^{i} g_p R_{(m+i-p)} \Big]   for i = 1, 2, 3, ..., m                (7.21)
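A minimal MATLAB sketch of Eq. (7.21) is given below. The function name hf_conv and its sample-vector interface are ours (the authors' own implementation is Appendix B, Program no. 21), and the R coefficients are generated following the pattern of Eq. (7.16).

% hf_conv: HF domain convolution of two time functions via Eq. (7.21)
% rs, gs : row vectors of the (m+1) equidistant samples r_0..r_m and g_0..g_m
% h      : sampling period;  y : the (m+1) samples y_0..y_m of the result
function y = hf_conv(rs, gs, h)
    m = numel(rs) - 1;
    R = zeros(1, 2*m);                           % R(k+1) stores R_k of Eq. (7.16)
    for i = 0:m-1
        R(i+1) = 2*rs(i+2) + rs(i+1);            % R_i = 2 r_(i+1) + r_i
    end
    R(m+1) = rs(2) + 2*rs(1);                    % R_m = r_1 + 2 r_0
    for j = 1:m-1
        R(m+j+1) = rs(j+2) + 4*rs(j+1) + rs(j);  % R_(m+j) = r_(j+1) + 4 r_j + r_(j-1)
    end
    y = zeros(1, m+1);                           % y_0 = 0
    for i = 1:m
        acc = gs(1)*R(i);                        % g_0 R_(i-1)
        for p = 1:i
            acc = acc + gs(p+1)*R(m+i-p+1);      % g_p R_(m+i-p)
        end
        y(i+1) = (h/6)*acc;
    end
end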

7.4 Numerical Example

To determine the convolution result using Eq. (7.21), we first compute the samples
of both the functions and express the functions in HF domain. Then we use
Eq. (7.21) to arrive at the results.
Example 7.1 (vide Appendix B, Program no. 21) Consider two time functions r(t) = u(t) and g(t) = exp(−0.5t)(2 cos 2t − 0.5 sin 2t); the exact convolution result of these two functions is y(t) = exp(−0.5t) sin(2t) for t ≥ 0.
To note the variation of the result for an appreciable time, we consider T = 5 s
and for HF domain analysis, take m = 25, i.e., h = T/m = 0.2 s.
Then, in HF domain, r(t) is

r(t) = [1  1  ...  1  ...  1  1] S_{(25)} + [0  0  ...  0  ...  0  0] T_{(25)}     (25 terms in each bracket)

and g(t) is given by

g(t) ≈ [ 2.00000000   1.49064076   0.84716967   0.19164669  −0.37416316
        −0.78057001  −0.99473153  −1.01896263  −0.88401092  −0.63923184
        −0.34171806  −0.04622404   0.20272788   0.37575612   0.46033860
         0.45965891   0.38927571   0.27251601   0.13552338   0.00277597
        −0.10633011  −0.17950603  −0.21214867  −0.20664673  −0.17075045 ] S_{(25)}
     + [−0.50935924  −0.64347109  −0.65552299  −0.56580985  −0.40640685
        −0.21416151  −0.02423110   0.13495171   0.24477908   0.29751377
         0.29549402   0.24895193   0.17302824   0.08458248  −0.00067969
        −0.07038320  −0.11675970  −0.13699263  −0.13274741  −0.10910609
        −0.07317592  −0.03264264   0.00550194   0.03589628   0.05532806 ] T_{(25)}

Using Eqs. (7.19) or (7.21), convolution of r(t) and g(t) in HF domain yields

y_c(t) ≈ [ 0.00000000   0.34906408   0.58284512   0.68672675   0.66847511
           0.55300179   0.37547163   0.17410222  −0.01619514  −0.16851941
          −0.26661440  −0.30540861  −0.28975823  −0.23190983  −0.14830036
          −0.05630060   0.02859286   0.09477203   0.13557597   0.14940591
           0.13905049   0.11046688   0.07130141   0.02942187  −0.00831785 ] S_{(25)}
       + [ 0.34906408   0.23378104   0.10388164  −0.01825165  −0.11547332
          −0.17753015  −0.20136942  −0.19029735  −0.15232428  −0.09809499
          −0.03879421   0.01565038   0.05784839   0.08360947   0.09199975
           0.08489346   0.06617917   0.04080394   0.01382994  −0.01035541
          −0.02858361  −0.03916547  −0.04187954  −0.03773972  −0.02884140 ] T_{(25)}

Direct expansion of y(t), in HF domain, for m = 25 and T = 5 s, is given by

y_d(t) ≈ [ 0.00000000   0.35236029   0.58732149   0.69047154   0.67003422
           0.55151677   0.37070205   0.16635019  −0.02622919  −0.17991539
          −0.27841208  −0.31676081  −0.30003901  −0.24076948  −0.15566844
          −0.06234602   0.02353088   0.09026637   0.13119242   0.14477041
           0.13389508   0.10465113   0.06481067   0.02234669  −0.01581457 ] S_{(25)}
       + [ 0.35236029   0.23496121   0.10315004  −0.02043731  −0.11851746
          −0.18081471  −0.20435186  −0.19257939  −0.15368619  −0.09849669
          −0.03834873   0.01672180   0.05926953   0.08510105   0.09332241
           0.08587690   0.06673549   0.04092605   0.01357799  −0.01087533
          −0.02924395  −0.03984046  −0.04246399  −0.03816125  −0.02884140 ] T_{(25)}
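The sample values listed above can be reproduced with the hf_conv sketch given after Eq. (7.21); the short MATLAB script below is a minimal illustration, and the variable names are ours.

T  = 5;  m = 25;  h = T/m;  t = (0:m)*h;
rs = ones(1, m+1);                                   % r(t) = u(t), sampled
gs = exp(-0.5*t).*(2*cos(2*t) - 0.5*sin(2*t));       % g(t), sampled
yc = hf_conv(rs, gs, h);                             % samples via Eq. (7.21)
yd = exp(-0.5*t).*sin(2*t);                          % exact y(t), sampled directly
disp(max(abs(yd - yc)))                              % largest deviation over the samples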

Figure 7.11 presents graphically the samples obtained through HF domain


convolution of the functions r(t) and g(t) along with HF domain direct expansion of
the result y(t). These two results are compared for eleven typical samples in Table 7.1 and the respective percentage errors are computed.
From Table 7.1, it is observed that the error is quite large for the 17th sample (t = 80/25 s) and the 24th sample (t = 115/25 s). The reason for such a sudden increase in error may be that the sample values for these two cases are quite small, e.g., 0.02353088 and 0.02234669 (in fact the two lowest of all the sample values), and computation of the error requires the deviations (y_d − y_c) to be divided by these small sample values.

Fig. 7.11 Convolution of two functions r(t) = u(t) and g(t) = exp(−0.5t) (2 cos2t − 0.5 sin2t),
computed using Eq. (7.21) in HF domain and through direct expansion (yd) for T = 5 s and m = 25.
It is observed that the two curves fairly overlap, validating the HF domain convolution
technique (vide Appendix B, Program no. 21)

Table 7.1 Convolution results via (a) HF domain direct expansion (yd), and (b) HF domain
convolution (yc) along with percentage errors for eleven typical samples chosen randomly for
Example 7.1 for T = 5 s and m = 25 (vide Appendix B, Program no. 21)
t (s)      Via direct expansion in HF domain (y_d)   Via convolution in HF domain (y_c)   % error  e = (y_d − y_c)/y_d × 100
5/25       0.35236029                                 0.34906408                           0.93546580
20/25      0.67003422                                 0.66847511                           0.23269110
25/25      0.55151677                                 0.55300179                           −0.26926108
35/25      0.16635019                                 0.17410222                           −4.66006681
50/25      −0.27841208                                −0.26661440                          4.23748855
60/25      −0.30003901                                −0.28975823                          3.42648111
70/25      −0.15566844                                −0.14830036                          4.73318805
80/25      0.02353088                                 0.02859285                           −21.51203015
95/25      0.14477041                                 0.14940591                           −3.20196648
105/25     0.10465112                                 0.11046687                           −5.55727449
115/25     0.02234669                                 0.02942187                           −31.66097529

7.5 Conclusion

In this chapter we have introduced the idea of convolution in hybrid function


domain. This idea has been built up in a step-by-step manner. First, the convolution of two elementary functions of the sample-and-hold function set and of two elementary functions of the triangular function set is derived. The convolution of an elementary function of the SHF set with an elementary function of the TF set is also treated. Then the convolution of sample-and-hold function trains with triangular function trains is discussed with mathematical support for m = 4. All these results are transformed to hybrid function domain.
These sub-results of convolution, presented through Eqs. (7.7), (7.10) and
(7.13), are the basic results involving different combinations of two function trains
—SHF and TF. These three equations have been utilized to arrive at the general
Eq. (7.21), giving the result of convolution of two time functions, expressed in HF
domain.
Using the developed theory of HF domain convolution, an example has been
treated to prove the viability of the method. Table 7.1 presents eleven typical
sample values obtained via HF domain convolution technique and compares the
same with the sample values obtained through direct HF domain expansion of the
exact convolution result. It is noted that for two samples the error is quite large, namely −21.51203015 and −31.66097529 %. This may be due to the low sample values, as mentioned above. Since the HF domain analysis uses only function samples, the numerical computation is simple, straightforward and computationally attractive.

References

1. Ogata, K.: Modern Control Engineering, 5th edn. Prentice-Hall of India Ltd., New Delhi (1997)
2. Brigham, E.O.: The Fast Fourier Transform and Its Applications. Prentice-Hall International
Inc., New Jersey (1988)
3. Jiang, J.H., Schaufelberger, W.: Block Pulse Functions and their Application in Control System.
LNCIS, vol. 179. Springer, Berlin (1992)
4. Deb, A., Sarkar, G., Sen, S.K.: Linearly pulse-width modulated block pulse functions and their
application to linear SISO feedback control system identification. Proc. IEE, Part D Control
Theory Appl. 142(1), 44–50 (1995)
5. Kwong, C.P., Chen, C.F.: Linear feedback system identification via block pulse functions. Int.
J. Syst. Sci. 12(5), 635–642 (1981)
6. Deb, A., Sarkar, G., Bhattacharjee, M., Sen, S.K.: A new set of piecewise constant orthogonal
functions for the analysis of linear SISO systems with sample-and-hold. J. Franklin Instt. 335B
(2), 333–358 (1998)
7. Deb, A., Sarkar, G., Sengupta, A.: Triangular Orthogonal Functions for the Analysis of
Continuous Time Systems. Anthem Press, London (2011)
8. Biswas, A.: Analysis and synthesis of continuous control systems using a set of orthogonal
hybrid functions, Ph. D. dissertation, University of Calcutta (2015)
9. Deb, A., Sarkar, G., Dasgupta, A.: A complementary pair of orthogonal triangular function
sets and its application to the analysis of SISO control systems. J. Instt. Engrs. (India) 84,
120–129 (2003)
Chapter 8
Time Invariant System Analysis: State
Space Approach

Abstract This chapter is devoted to time invariant system analysis using state
space approach in hybrid function platform. Both homogeneous and
non-homogeneous systems are treated along with numerical examples. States and
outputs of the systems are solved. Also, a non-homogeneous system with a jump
discontinuity at input is analyzed. Exhaustive illustration has been provided with
the support of nine examples, twenty two figures and twenty one tables.

In this chapter, we deal with linear time invariant (LTI) control systems [1]. A linear
control system abides by the superposition law and being time invariant, its
parameters do not vary with time. We intend to analyse two types of LTI control
systems in hybrid function platform, namely non-homogeneous system and
homogeneous system [1, 2].
Analysing a control system means, knowing the system parameters and the
nature of input signal or forcing function, we determine the behavior of different
system states over a common time frame. The output of the system may be any one
of the states or a combination of two or many states. Therefore, after knowing all
the states within the system, we can easily assess the performance of the system.
In practice, application of linear time-invariant systems may be found in circuits,
control theory, NMR spectroscopy, signal processing, seismology and in many
other areas.
Any LTI system can be classified into two broad categories: one is
‘non-homogeneous’ system and the other being ‘homogeneous’ system. In a
homogeneous system, no external signal is applied and we look for behavior of the
states due to the presence of initial condition only. So, in short, the analysis of a
homogeneous system helps us to know about the internal behavior of the system.
In a non-homogeneous system, we deal with the presence of both the initial
conditions and external input signals simultaneously.
In this chapter, the hybrid function set is employed for the analysis of
non-homogeneous as well as homogeneous systems described as state space
models [2].


First we take up the problem of analysis of a non-homogeneous system in HF


domain, because, after putting the specific condition of zero forcing function, we
can arrive at the result of analysis of a homogeneous system.

8.1 Analysis of Non-homogeneous State Equations [3]

Consider the non-homogeneous state equation,

\dot{x}(t) = A x(t) + B u(t)                (8.1)

where, A is an (n × n) system matrix given by


A ≜ \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix}

B is the (n × 1) input vector given by B = [b_1  b_2  ...  b_n]^T,
x(t) is the state vector given by x(t) = [x_1  x_2  ...  x_n]^T,
with the initial conditions

x(0) = [x_1(0)  x_2(0)  ...  x_n(0)]^T

where [ · ]^T denotes transpose, and u is the forcing function.


Integrating Eq. (8.1) we have
x(t) - x(0) = A \int x(t) \, dt + B \int u(t) \, dt                (8.2)

Expanding x(t), x(0) and u(t) via an m-set hybrid function [4], we get

x(t) ≜ C_{Sx} S_{(m)} + C_{Tx} T_{(m)},
x(0) ≜ C_{Sx0} S_{(m)} + C_{Tx0} T_{(m)}   and
u(t) ≜ C_{Su}^T S_{(m)} + C_{Tu}^T T_{(m)}

where C_{Sx}, C_{Tx}, C_{Sx0} and C_{Tx0} are the matrices whose successive rows are C_{Sxi}^T, C_{Txi}^T, C_{Sx0i}^T and C_{Tx0i}^T respectively (i = 1, 2, ..., n), with

C_{Sxi} = [c_{Sxi1}  c_{Sxi2}  c_{Sxi3}  ...  c_{Sxim}]^T
C_{Txi} = [c_{Txi1}  c_{Txi2}  c_{Txi3}  ...  c_{Txim}]^T
C_{Sx0i} = [c_{Sx0i1}  c_{Sx0i2}  c_{Sx0i3}  ...  c_{Sx0im}]^T
C_{Tx0i} = [c_{Tx0i1}  c_{Tx0i2}  c_{Tx0i3}  ...  c_{Tx0im}]^T
C_{Su} = [c_{Su,1}  c_{Su,2}  c_{Su,3}  ...  c_{Su,m}]^T   and
C_{Tu} = [c_{Tu,1}  c_{Tu,2}  c_{Tu,3}  ...  c_{Tu,m}]^T

Substituting in (8.2) and rearranging, we have

(C_{Sx} - C_{Sx0}) S_{(m)} + (C_{Tx} - C_{Tx0}) T_{(m)} = A \int x \, dt + B \int u \, dt                (8.3)

We take up the first term on the RHS of (8.3) to write

A \int x \, dt = A \int [C_{Sx} S_{(m)} + C_{Tx} T_{(m)}] \, dt = A C_{Sx} \int S_{(m)} \, dt + A C_{Tx} \int T_{(m)} \, dt

Using relations (4.9) and (4.18), we have

A \int x \, dt = A C_{Sx} [P1ss_{(m)} S_{(m)} + P1st_{(m)} T_{(m)}] + A C_{Tx} [P1ts_{(m)} S_{(m)} + P1tt_{(m)} T_{(m)}]
             = A [C_{Sx} + (1/2) C_{Tx}] P1ss_{(m)} S_{(m)} + h A [C_{Sx} + (1/2) C_{Tx}] T_{(m)}                (8.4)

and similarly for the second term on the RHS of (8.3) we have
B \int u \, dt = \int B [C_{Su}^T S_{(m)} + C_{Tu}^T T_{(m)}] \, dt
             = B [C_{Su}^T + (1/2) C_{Tu}^T] P1ss_{(m)} S_{(m)} + h B [C_{Su}^T + (1/2) C_{Tu}^T] T_{(m)}                (8.5)

Hence we can rewrite Eq. (8.3) as (dropping the dimension argument m)


   
1 1
ðCSx  CSx0 ÞS þ ðCTx  CTx0 ÞT ¼ A CSx þ CTx P1ss S þ hA CSx þ CTx T
2 2
   
1 1
þ B CTSu þ CTTu P1ss S þ hB CTSu þ CTTu T
2 2
 
1 1
¼ ACSx þ ACTx þ BCTSu þ BCTTu P1ss S
2 2
 
1 1
þ h ACSx þ ACTx þ BCTSu þ BCTTu T
2 2
ð8:6Þ

Now equating like coefficients of (8.6), we get


 
C_{Sx} - C_{Sx0} = [A C_{Sx} + (1/2) A C_{Tx} + B C_{Su}^T + (1/2) B C_{Tu}^T] P1ss                (8.7)

C_{Tx} - C_{Tx0} = h [A C_{Sx} + (1/2) A C_{Tx} + B C_{Su}^T + (1/2) B C_{Tu}^T]                (8.8)

These two equations, (8.7) and (8.8), are to be solved for C_{Sx} and C_{Tx} respectively. We can solve for C_{Sx} and C_{Tx} either from the sample-and-hold function vector or from the triangular function vector. Both approaches are described below.

8.1.1 Solution from Sample-and-Hold Function Vectors

The initial values of all the states being constants, they always essentially represent
step functions. Hence, HF domain expansions of the initial values will always yield
null coefficient matrices for the T vectors.
So, from Eq. (8.8)

 
C_{Tx} = h [A C_{Sx} + (1/2) A C_{Tx} + B C_{Su}^T + (1/2) B C_{Tu}^T]

or, A C_{Sx} = (1/h) C_{Tx} - (1/2) A C_{Tx} - B [C_{Su}^T + (1/2) C_{Tu}^T]                (8.9)

Substituting relation (8.8) into (8.7) and simplifying, we have

C_{Tx} P1ss = h (C_{Sx} - C_{Sx0})                (8.10)

From Eq. (8.7),


 
C_{Sx} - C_{Sx0} = [A C_{Sx} + (1/2) A C_{Tx} + B C_{Su}^T + (1/2) B C_{Tu}^T] P1ss
                = A C_{Sx} P1ss + (1/2) A C_{Tx} P1ss + B [C_{Su}^T + (1/2) C_{Tu}^T] P1ss                (8.11)

Using (8.10) on the RHS of (8.11), we have


 
C_{Sx} - C_{Sx0} = A C_{Sx} P1ss + (1/2) h A (C_{Sx} - C_{Sx0}) + B [C_{Su}^T + (1/2) C_{Tu}^T] P1ss

Calling the operational matrix for integration in the block pulse function (BPF) domain P [5], we have the following relation:

P1ss = P - (h/2) I                (8.12)

Replacing P1ss following (8.12), we get


   
C_{Sx} - C_{Sx0} = A C_{Sx} [P - (h/2) I] + (1/2) h A (C_{Sx} - C_{Sx0}) + B [C_{Su}^T + (1/2) C_{Tu}^T] P1ss
                = A C_{Sx} P - (h/2) A C_{Sx0} + B [C_{Su}^T + (1/2) C_{Tu}^T] P1ss

Therefore,

C_{Sx} - A C_{Sx} P = [I - (h/2) A] C_{Sx0} + B [C_{Su}^T + (1/2) C_{Tu}^T] P1ss                (8.13)

Now subtracting the ith column from the (i + 1)th column, we get

[C_{Sx}]_{i+1} - [C_{Sx}]_i - [A C_{Sx} P]_{i+1} + [A C_{Sx} P]_i
 = [(I - (h/2) A) C_{Sx0}]_{i+1} - [(I - (h/2) A) C_{Sx0}]_i
   + [B (C_{Su}^T + (1/2) C_{Tu}^T) P1ss]_{i+1} - [B (C_{Su}^T + (1/2) C_{Tu}^T) P1ss]_i                (8.14)
 = [B (C_{Su}^T + (1/2) C_{Tu}^T) P1ss]_{i+1} - [B (C_{Su}^T + (1/2) C_{Tu}^T) P1ss]_i

Now, the (i + 1)th column of ACSx P is

ði þ 1Þth column
2#3
1
617
2 32 3 6 7
a11 a12  a1n cSx11 cSx12  cSx1m 6.7
6.7
6a  a2n 7 6  cSx2m 7 6.7
6 21 a22 76 cSx21 cSx22 7 617
½ACSx Pi þ 1 ¼6
6 .. .. .. 7 6
76 . .. .. 77h
6 7
627 ði þ 1Þth element
4 . . . 54 .. . . 5 6 7
ði þ 2Þth element
607
  6.7
an1 an2 ann cSxn1 cSxn2 cSxnm 6.7
4.5
0
2 32 3
a11 a12    a1n cSx11 þ cSx12 þ    þ 12 cSx1ði þ 1Þ
6a 76 cSx21 þ cSx22 þ    þ 1 cSx2ði þ 1Þ 7
6 21 a22    a2n 76 7
76 7
2
¼ h6
6 .. . . 76 .. 7
4 . .
. .
. 54 6 . 7
5
an1 an2    ann cSxn1 þ cSxn2 þ    þ 2 cSxnði þ 1Þ
1
2     3
1 1
6 a 11 cSx11 þ cSx12 þ    þ c Sx1 ði þ 1 Þ þ a12 cSx21 þ cSx22 þ    þ cSx2 ð i þ 1 Þ þ    7
6 2 2 7
6   7
6 1 7
6 þ a c þ c þ    þ c 7
6 1n Sxn1 Sxn2 Sxnði þ 1Þ 7
6 
2
   7
6 7
6 1 1 7
6 a21 cSx11 þ cSx12 þ    þ cSx1ði þ 1Þ þ a22 cSx21 þ cSx22 þ    þ cSx2ði þ 1Þ þ    7
6 2 2 7
6 7
6   7
¼ h6
6 1 7
7
6 þ a2n cSxn1 þ cSxn2 þ    þ cSxnði þ 1Þ 7
6 2 7
6. 7
6. 7
6. 7
6     7
6 1 1 7
6 an1 cSx11 þ cSx12 þ    þ cSx1ði þ 1Þ þ an2 cSx21 þ cSx22 þ    þ cSx2ði þ 1Þ þ    7
6 2 2 7
6 7
6   7
4 1 5
þ ann cSxn1 þ cSxn2 þ    þ cSxnði þ 1Þ
2
ð8:15Þ

Similarly, the ith column of ACSx P is


2     3
1 1
6 a11 cSx11 þ cSx12 þ    þ 2 cSx1i þ a12 cSx21 þ cSx22 þ    þ 2 cSx2i þ    7
6   7
6 7
6 1 7
6 þ a1n cSxn1 þ cSxn2 þ    þ cSxni 7
6  2   7
6 7
6 1 1 7
6 a21 cSx11 þ cSx12 þ    þ cSx1i þ a22 cSx21 þ cSx22 þ    þ cSx2i þ    7
6 2 2 7
6   7
6 7
½ACSx P i ¼ h6
6
1
þ a2n cSxn1 þ cSxn2 þ    þ cSxni
7
7
ð8:16Þ
6 2 7
6. 7
6. 7
6.     7
6 7
6a c 1 1 7
6 n1 Sx11 þ cSx12 þ    þ cSx1i þ an2 cSx21 þ cSx22 þ    þ cSx2i þ    7
6 2 2 7
6   7
4 1 5
þ ann cSxn1 þ cSxn2 þ    þ cSxni
2

Subtracting (8.16) from (8.15), we can write


2 a11 3
2 cSx1i þ a11
2 cSx1ði þ 1Þ þ  þ a1n
2 cSxni þ a1n
2 cSxnði þ 1Þ
6 a21 7
6 2 cSx1i þ a21
2 cSx1ði þ 1Þ þ  þ a2n
2 cSxni þ a2n
2 cSxnði þ 1Þ 7
6 7
½ACSx Pi þ 1 ½ACSx Pi ¼ h6 .. 7
6 . 7
4 5
an1
2 cSx1i þ an1
þ    þ a2nn cSxni þ a2nn cSxnði þ 1Þ
2 cSx1ði þ 1Þ
2 32 3 2 32 c 3
a11 a12    a1n cSx1i a11 a12    a1n Sx1ði þ 1Þ
6a    a2n 76 7 6 7 cSx2ði þ 1Þ 7
6
h6 21 a22 76 cSx2i 7 h 6 a21 a22    a2n 76 6
7
7
6
¼ 6 . 76 7
.. 76 .. 7 þ 2 6 .. 6 .. 7
2 4 .. .. .. 76
6 .. 7
7
. . 54 . 5 4 . . . 54 . 5
an1 an2    ann cSxni an1 an2    ann cSxnði þ 1Þ
2 3 2 3
cSx1i cSx1ði þ 1Þ
6c 7 6 7
h 6 6
Sx2i 7
7 h 6 c
6 Sx2ði þ 1Þ 7
7
or, ½ACSx Pi þ 1 ½ACSx Pi ¼ A6 . 7 þ A6 . 7
2 4 .. 5 2 6 .. 7
4 5
cSxni cSxnði þ 1Þ
ð8:17Þ

Substituting relation (8.17) in the LHS of Eq. (8.14), we get,



½CSx i þ 1 ½CSx i ½ACSx Pi þ 1 þ ½ACSx Pi


2 3 2 3 2 3 2 3
cSx1ði þ 1Þ cSx1i cSx1ði þ 1Þ cSx1i
6 7 6 7
6 cSx2ði þ 1Þ 7 6 cSx2i 7
7 h 6 c 7 6c 7
6 7 6 6 7 6 Sx2ði þ 1Þ 7 h 6 Sx2i 7
¼6 .. 7  6 . 7  A6 . 7  A6
6 .. 77
6 . 7 4 .
. 5 2 6 .
. 7 2 4 . 5
4 5 4 5
cSxnði þ 1Þ cSxni cSxnði þ 1Þ cSxni ð8:18Þ
2 3 2 3
cSx1ði þ 1Þ cSx1i
 66
7
cSx2ði þ 1Þ 7  6 c 7
h 6 7 h 6 Sx2i 7
¼ I A 6 . 7  Iþ A 6 .. 7
2 6 .. 7 2 6 4 . 5
7
4 5
cSxnði þ 1Þ cSxni

Now from RHS of Eq. (8.14)


     
1 T 1 T
B CSu þ CTu P1ss
T
¼ B CSu þ CTu ½P1ssi þ 1
T
2 iþ1 2

and,
     
1 T 1 T
B CSu þ CTu P1ss ¼ B CSu þ CTu ½P1ssi
T T
2 i 2
2 3
0 h h  h
6 0 0 h  h7
We know from (8.12) that P1ss ¼ P  h2 I ¼ 6
4 ... .. .. .. 7
. . .5
0 0 0  0
Therefore,

ði þ 1Þth column
2#3
h
6h7
6 7
6.7
6.7
    6.7
1 T 1 T 6 7
B CSu þ CTu ½P1ssi þ 1 ¼ B CSu þ CTu
T T 6h7 ith element
2 2 6 7
6 7
607 ði þ 1Þth element
6.7
6.7
4.5
0
2 3
h
6h7
6 7
6.7
6 7
  6 .. 7 i 
X 
1 6 7 1 T
¼ ½Bn1 CTSu þ CTTu 6h7 ¼ h½Bn1 CTSu þ C
2 6 7 2 Tu
1m 6 7 j
607 j¼1
6.7
6.7
4.5
0 m1

Similarly,

ith element
2#3
h
6h7
6 7
6.7
6.7
    6.7
1 1 6 7
B CTSu þ CTTu ½P1ssi ¼ B CTSu þ CTTu 6h7 ði  1Þth element
2 2 6 7
6 7
607 ith element
6.7
6.7
4.5
0
i1 
X 
1 T
¼ h½Bn1 CSu þ CTu
T

j¼1
2 j

Therefore,

       
1 T 1 T
B CSu þ CTu P1ss
T
 B CSu þ CTu P1ss
T
2 iþ1 2 i
X i   i1 
X 
1 T 1 T
¼ h½Bn1 CSu þ CTu  h½Bn1
T
CSu þ CTu
T
ð8:19Þ
j¼1
2 j j¼1
2 j
 
1
¼ h½Bn1 CTSu þ CTTu
2 i

After substituting the expressions from Eqs. (8.14), (8.18) and (8.19), we can
write the following recursive structure of system states
[I - (h/2) A] [c_{Sx1(i+1)}  c_{Sx2(i+1)}  ...  c_{Sxn(i+1)}]^T - [I + (h/2) A] [c_{Sx1i}  c_{Sx2i}  ...  c_{Sxni}]^T = h B [C_{Su}^T + (1/2) C_{Tu}^T]_i

or, [(2/h) I - A] [c_{Sx1(i+1)}  c_{Sx2(i+1)}  ...  c_{Sxn(i+1)}]^T - [(2/h) I + A] [c_{Sx1i}  c_{Sx2i}  ...  c_{Sxni}]^T = 2 B [C_{Su}^T + (1/2) C_{Tu}^T]_i                (8.20)

From Eq. (8.20), using matrix inversion, we have


[c_{Sx1(i+1)}  c_{Sx2(i+1)}  ...  c_{Sxn(i+1)}]^T = [(2/h) I - A]^{-1} [(2/h) I + A] [c_{Sx1i}  c_{Sx2i}  ...  c_{Sxni}]^T
 + 2 [(2/h) I - A]^{-1} B [C_{Su}^T + (1/2) C_{Tu}^T]_i                (8.21)

The inverse in (8.21) can always be made to exist by judicious choice of h.


Equation (8.21) provides a simple recursive solution of the states of a
non-homogeneous system, or, in other words, time samples of the states, with a
sampling period of h knowing the system matrix A, the input matrix B, the input
signal u, and the initial values of the states.
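A minimal MATLAB sketch of the recursion (8.21) is given below; it uses the data of Example 8.1 (Sect. 8.1.3) purely as an illustration, and the variable names are ours rather than those of Appendix B.

% Recursive HF domain solution of x' = A x + B u via Eq. (8.21)
A  = [0 1; -2 -3];   B = [0; 1];          % system and input matrices (Example 8.1)
x0 = [0; 0.5];       T = 1;  m = 8;  h = T/m;
u  = ones(1, m+1);                        % unit step input, sampled at 0, h, ..., mh
X  = zeros(2, m+1);  X(:,1) = x0;         % X(:,i+1) holds the i-th state sample
M1 = inv((2/h)*eye(2) - A);               % exists for a judicious choice of h
M2 = (2/h)*eye(2) + A;
for i = 1:m
    ui = u(i) + 0.5*(u(i+1) - u(i));      % i-th entry of (C_Su^T + (1/2) C_Tu^T)
    X(:,i+1) = M1*(M2*X(:,i) + 2*B*ui);   % Eq. (8.21)
end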

8.1.2 Solution from Triangular Function Vectors

Now from Eq. (8.8), we have


 
C_{Tx} = h [A C_{Sx} + (1/2) A C_{Tx} + B C_{Su}^T + (1/2) B C_{Tu}^T]

And from Eq. (8.7), using (8.12), we have


    
1 h 1
CSx  CSx0 ¼ ACSx þ ACTx P  I þ B CTSu þ CTTu P1ss
2 2 2
 
h 1 h 1 T
¼ ACSx P  ACSx þ ACTx P  ACTx þ B CSu þ CTu P1ss
T
2 2 4 2
ð8:22Þ
h
or; ðCSx  ACSx PÞ  CSx0 þ ACSx
 2 
1 h 1
¼ ACTx P  ACTx þ B CSu þ CTTu P1ss
T
2 4 2

From Eqs. (8.13) and (8.22), we have


   
h 1 h
I  A CSx0 þ B CTSu þ CTTu P1ss  CSx0 þ ACSx
2 2 2
 
1 h 1
¼ ACTx P  ACTx þ B CTSu þ CTTu P1ss
2 4 2
1 1
or, ACSx ¼ ACSx0 þ ACTx P  ACTx ð8:23Þ
h 2

The initial values of all the states being constants, they always essentially rep-
resent step functions. Hence, HF domain expansions of the initial values will
always yield null coefficient matrices for the T vectors. That means CTx0 ¼ 0.
Using (8.23) in (8.8) we have
  
1 1 1 1
CTx ¼ h ACSx0 þ ACTx P  ACTx þ ACTx þ B CTSu þ CTTu
h 2 2 2
  ð8:24Þ
1
or, CTx  ACTx P ¼ hACSx0 þ hB CTSu þ CTTu
2

Subtracting the ith column from the (i + 1)th column, we have



½CTx i þ 1 ½CTx i ½ACTx Pi þ 1 þ ½ACTx Pi


     
1 T 1 T ð8:25Þ
¼ hB CSu þ CTu
T
 hB CSu þ CTu
T
2 iþ1 2 i

Similar to Eqs. (8.18) and (8.25) can be written as

½CTx i þ 1 ½CTx i ½ACTx Pi þ 1 þ ½ACTx Pi


2 3 2 3
cTx1ði þ 1Þ cTx1i
 6
 cTx2ði þ 1Þ 7 7 6 c 7
h 6 6 7 h 6 Tx2i 7 ð8:26Þ
¼ I A 6 ..
6
7  Iþ A 6 . 7
2 6 . 7 2 4 .. 7 5
4 5
cTxnði þ 1Þ cTxni

From Eqs. (8.25) and (8.26), we have


2 3 2 3
CTx1ði þ 1Þ cTx1i
 6 7
 CTx2ði þ 1Þ 7  6 c 7
h 6 6 7 h 6 Tx2i 7
I A 6 ..
6
7  Iþ A 6 . 7
2 6 . 7 2 4 .. 7 5
4 5 ð8:27Þ
CTxnði þ 1Þ cTxni
     
1 T 1 T
¼ h B CSu þ CTu
T
h B CSu þ CTu
T
2 iþ1 2 i
2 3 23
cTx1ði þ 1Þ cTx1i
6 7 6 c 7
6 cTx2ði þ 1Þ 7 2 1 
6 Tx2i 7
6 7 2 6 . 7
or, 6 .. 7 ¼ I  A I þ A 6 . 7
6 . 7 h h 4 . 5
4 5 ð8:28Þ
cTxnði þ 1Þ cTxni
 1      
2 1 T 1 T
þ2 I  A B CSu þ CTu
T
 B CSu þ CTu
T
h 2 iþ1 2 i

Equation (8.28) provides an alternative recursive solution of the states of a


non-homogeneous system, knowing the system matrix A, the input matrix B, the
input signal u, and the initial values of the states. The solution as obtained via
Eq. (8.21) can be verified by Eq. (8.28) as well. But the only thing we have to
remember that, in case of Eq. (8.28), to know the initial value of T matrix, we
should have first two samples of the states. Whereas the second sample of the state
can be determined with the help of Eq. (8.21) only. So when only first sample of the
states are given, the system states can be solved only by Eq. (8.21). And if fortu-
nately first two samples of states are available, then only with the help of T matrix
i.e. equation (8.28), the system states can be solved.

Fig. 8.1 Comparison of HF


based recursive solution of
Example 8.1, for m = 12 with
the exact solutions of state x1
and state x2 (vide
Appendix B, Program no. 22)

8.1.3 Numerical Examples

Example 8.1 [1] (vide Appendix B, Program no. 22) Consider a non-homogeneous
system given by \dot{x}(t) = A x(t) + B u(t), where

A = [0  1; -2  -3],   B = [0; 1],   x_0 = [0; 0.5]   and   u(t) = 1,

having the solution x_1(t) = 0.5(1 - exp(-t)) and x_2(t) = 0.5 exp(-t).
The graphical comparison of the system states of Example 8.1, obtained via HF domain analysis (for m = 12), with their direct expansions is presented in Fig. 8.1, whereas in Table 8.1 the results obtained in HF domain for m = 8 are compared with the direct expansion.
Figure 8.2 is proof enough that the percentage error of the HF based recursive solution decreases drastically as the number of segments m increases. With the increase in m, it is observed that the number of zero-error points has increased.

8.2 Determination of Output of a Non-homogeneous System [3]

Consider the output of a non-homogeneous system described by

y(t) = C x(t) + D u(t)                (8.29)

where,
x is the state vector given by x = [x_1  x_2  ...  x_n]^T,

Table 8.1 Solution of states x1 and x2 of the non-homogeneous system of Example 8.1 with
comparison of exact samples and corresponding samples obtained via HF domain with percentage
error at different sample points for m = 8 and T = 1 s (vide Appendix B, Program no. 22)
(a) System state x1
t (s)   Exact samples x_{1,d}   HF domain samples, Eq. (8.21), x_{1,h}   % error = (x_{1,d} − x_{1,h})/x_{1,d} × 100
0       0.00000000              0.00000000                               –
1/8     0.05875155              0.05882353                               −0.12251592
2/8     0.11059961              0.11072664                               −0.11485574
3/8     0.15635536              0.15652351                               −0.10754348
4/8     0.19673467              0.19693251                               −0.10056184
5/8     0.23236929              0.23258751                               −0.09391086
6/8     0.26381672              0.26404780                               −0.08759111
7/8     0.29156899              0.29180688                               −0.08158961
8/8     0.31606028              0.31630019                               −0.07590641

(b) System state x2
t (s)   Exact samples x_{2,d}   HF domain samples, Eq. (8.21), x_{2,h}   % error = (x_{2,d} − x_{2,h})/x_{2,d} × 100
0       0.50000000              0.50000000                               0.00000000
1/8     0.44124845              0.44117647                               0.01631281
2/8     0.38940039              0.38927336                               0.03262195
3/8     0.34364464              0.34347649                               0.04893136
4/8     0.30326533              0.30306749                               0.06523660
5/8     0.26763071              0.26741249                               0.08153773
6/8     0.23618328              0.23595220                               0.09783927
7/8     0.20843101              0.20819312                               0.11413369
8/8     0.18393972              0.18369981                               0.13042860

y(t) is the output vector, expressed as y(t) ≜ [y_1  y_2  ...  y_v]^T,
u(t) is the input vector, expressed as u(t) ≜ [u_1  u_2  ...  u_r]^T, and
C is the output matrix given by

C ≜ \begin{bmatrix} c_{11} & c_{12} & c_{13} & \cdots & c_{1n} \\ c_{21} & c_{22} & c_{23} & \cdots & c_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ c_{v1} & c_{v2} & c_{v3} & \cdots & c_{vn} \end{bmatrix}

Fig. 8.2 Percentage error for


three different values of m
(m = 4, 8 and 20) and T = 1 s
for a state x1 and b state x2 of
Example 8.1

D is the direct transmission matrix given by

D ≜ \begin{bmatrix} d_{11} & d_{12} & d_{13} & \cdots & d_{1r} \\ d_{21} & d_{22} & d_{23} & \cdots & d_{2r} \\ \vdots & \vdots & \vdots & & \vdots \\ d_{v1} & d_{v2} & d_{v3} & \cdots & d_{vr} \end{bmatrix}

As before, expanding the state vector x, the output vector y(t) and the forcing function u(t) via an m-set hybrid function set, we get

x(t) ≜ C_{Sx} S_{(m)} + C_{Tx} T_{(m)},   y(t) ≜ y_S S_{(m)} + y_T T_{(m)},   u(t) ≜ C_{Su} S_{(m)} + C_{Tu} T_{(m)}

where C_{Sx} and C_{Tx} (n × m), C_{Su} and C_{Tu} (r × m), and y_S and y_T (v × m) are the matrices whose successive rows are C_{Sxi}^T, C_{Txi}^T, C_{Sui}^T, C_{Tui}^T, y_{Si}^T and y_{Ti}^T respectively, with

C_{Sxi}^T = [c_{Sxi1}  c_{Sxi2}  c_{Sxi3}  ...  c_{Sxim}],   C_{Txi}^T = [c_{Txi1}  c_{Txi2}  c_{Txi3}  ...  c_{Txim}],
C_{Sui}^T = [c_{Sui1}  c_{Sui2}  c_{Sui3}  ...  c_{Suim}],   C_{Tui}^T = [c_{Tui1}  c_{Tui2}  c_{Tui3}  ...  c_{Tuim}],
y_{Si}^T = [c_{Si1}  c_{Si2}  c_{Si3}  ...  c_{Sim}]   and   y_{Ti}^T = [c_{Ti1}  c_{Ti2}  c_{Ti3}  ...  c_{Tim}].

Substituting in (8.29) we have

y_S S_{(m)} + y_T T_{(m)} = C [C_{Sx} S_{(m)} + C_{Tx} T_{(m)}] + D [C_{Su} S_{(m)} + C_{Tu} T_{(m)}]
                         = (C C_{Sx} + D C_{Su}) S_{(m)} + (C C_{Tx} + D C_{Tu}) T_{(m)}                (8.30)

From Eq. (8.30) we can write

y_S = C C_{Sx} + D C_{Su}                (8.31)

y_T = C C_{Tx} + D C_{Tu}                (8.32)
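As a brief sketch, Eqs. (8.31) and (8.32) amount to two matrix products once the state coefficients are known. The MATLAB lines below continue the Eq. (8.21) sketch of Sect. 8.1 (reusing its X and u), with the output matrix of Example 8.2 below; the coefficient names are ours.

% Output coefficients via Eqs. (8.31)-(8.32); X and u are assumed to come from
% the Eq. (8.21) recursion sketched in Sect. 8.1
C   = [1 0];   D = 0;   m = 8;
CSx = X(:, 1:m);    CTx = diff(X, 1, 2);      % state SHF / TF coefficient matrices
CSu = u(1:m);       CTu = diff(u);            % input SHF / TF coefficient matrices
yS  = C*CSx + D*CSu;                          % Eq. (8.31)
yT  = C*CTx + D*CTu;                          % Eq. (8.32)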



Fig. 8.3 Hybrid function


based analysis of system
output with m = 10 and its
comparison with the exact
output of Example 8.2 (vide
Appendix B, Program no. 23)

Table 8.2 Solution of output of the non-homogeneous system of Example 8.2 with comparison of
exact samples and corresponding samples obtained via HF domain with percentage error at
different sample points for m = 8 and T = 1 s (vide Appendix B, Program no. 23)
t(s) System output y(t)
Direct expansion HF coefficients using Eq. (8.31) % error
0 0.00000000 0.00000000 –
1
8 0.05875155 0.05882353 −0.12251592
2
8 0.11059961 0.11072664 −0.11485574
3
8 0.15635536 0.15652351 −0.10754348
4
8 0.19673467 0.19693251 −0.10056184
5
8 0.23236929 0.23258751 −0.09391086
6
8 0.26381672 0.26404780 −0.08759111
7
8 0.29156899 0.29180688 −0.08158961
8
8 0.31606028 0.31630019 −0.07590641

8.2.1 Numerical Examples

Example 8.2 (vide Appendix B, Program no. 23) Consider a non-homogeneous


system \dot{x}(t) = A x(t) + B u(t), y(t) = C x(t), with unit step forcing function, where

A = [0  1; -2  -3],   B = [0; 1],   C = [1  0],   D = 0   and   x_0 = [0; 0.5].
The time variation of the output y(t) is shown in Fig. 8.3 and the respective sample
values are compared in Table 8.2.

8.3 Analysis of Homogeneous State Equation [4]

For a homogeneous system, B is zero and Eq. (8.21) will be reduced to


[c_{Sx1(i+1)}  c_{Sx2(i+1)}  ...  c_{Sxn(i+1)}]^T = [(2/h) I - A]^{-1} [(2/h) I + A] [c_{Sx1i}  c_{Sx2i}  ...  c_{Sxni}]^T                (8.33)

Equation (8.33) provides a simple recursive solution of the states of a homogeneous system or, in other words, the time samples of the states with a sampling period h, knowing the system matrix A and the initial values of the states.
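A minimal MATLAB sketch of Eq. (8.33), i.e. the B = 0 case of the Eq. (8.21) recursion, is given below using the data of Example 8.3 that follows; the variable names are ours.

% Recursive HF domain solution of a homogeneous system via Eq. (8.33)
A  = [0 1; -1 -2];   x0 = [0; 1];               % Example 8.3 data
T  = 1;  m = 4;  h = T/m;
X  = zeros(2, m+1);  X(:,1) = x0;
M  = ((2/h)*eye(2) - A) \ ((2/h)*eye(2) + A);   % [(2/h)I - A]^(-1) [(2/h)I + A]
for i = 1:m
    X(:,i+1) = M*X(:,i);                        % Eq. (8.33)
end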

8.3.1 Numerical Examples

Example 8.3 (vide Appendix B, Program no. 24) Consider a homogeneous system \dot{x}(t) = A x(t), where

A = [0  1; -1  -2]   and   x(0) = [0; 1],

having the solution x_1(t) = t exp(-t) and x_2(t) = (1 - t) exp(-t).
The results of analysis of the given system are presented in Table 8.3 and Fig. 8.4
compares the result obtained in HF domain for m = 4 with its direct expansion.
 
Example 8.4 [6] Consider a homogeneous system \dot{x}(t) = [0  ω; -ω  0] x(t) ≜ A x(t), with initial condition x(0) = [1  0]^T; the exact solution is given by

x(t) = [cos(ωt); -sin(ωt)].
It is observed that the use of only Eq. (8.33) provides the complete solution of
the states x1(t) and x2(t) in hybrid function domain as the method is recursive and
we can solve for any sample point using the previous sample.
From Eq. (8.33), we solve for the vector x(t) and study four cases with h = 0.1,
0.01, 0.001 and 0.0001 s, with ω = 10 and m = 10. Tables 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 8.10 and 8.11 compare the HF domain results with the exact solution. The last column of each table contains the percentage errors for different samples of x1(t) and x2(t). The respective plots are shown in Figs. 8.5, 8.6, 8.7, 8.8, 8.9, 8.10, 8.11 and 8.12.
Since we have considered values of h from 0.1 to 0.0001, it may be expected that
for smaller h the HF domain solutions will match the exact sample values of the states x1(t) and x2(t) almost exactly. For this reason, to bring out the difference between these two solutions, however small it may be, we have used MATLAB long

Table 8.3 Solution of states x1 and x2 of the homogeneous system of Example 8.3 with
comparison of exact samples and corresponding samples obtained via HF domain with percentage
error at different sample points for m = 4 and T = 1 s (vide Appendix B, Program no. 24)
(a)
t(s) System state x1
Exact samples of Samples from HF domain, using % error
x x
the state Eq. (8.33), x1;h 1 ¼ 1;dx1;d 1;h  100
x1;d
0 0.00000000 0.00000000 –
1
4 0.19470020 0.19753086 −1.45385572
2
4 0.30326533 0.30727023 −1.32059276
3
4 0.35427491 0.35848194 −1.18750436
4
4 0.36787944 0.37175905 −1.05458734
(b)
t(s) System state x2
Exact samples of the Samples from HF domain, using % error
x x
state x2;d Eq. (8.33), x2;h 2 ¼ 2;dx2;d 2;h  100
0 1.00000000 1.00000000 0.00000000
1
4 0.58410059 0.58024691 0.65976307
2
4 0.30326533 0.29766804 1.84567421
3
4 0.11809164 0.11202561 5.13671417
4
4 0.00000000 −0.00580874 –

Fig. 8.4 Comparison of HF


domain recursive solution of
Example 8.3, for m = 4 and
T = 1 s with the exact
solutions of states x1 and x2
(vide Appendix B, Program
no. 24)

format computations for both the HF domain values and the exact values of the state samples tabulated in Tables 8.8, 8.9, 8.10 and 8.11.
Table 8.12 [6] compares the maximum absolute errors of different methods, including the HF domain approach.

Table 8.4 Solution of states x1 of the homogeneous system of Example 8.4 with comparison of
exact samples and corresponding samples obtained via HF domain with percentage error at
different sample points for m = 10 and T = 1 s
t(s) System state x1
Exact samples of the Samples from HF domain, using % error
x x
state x1;d Eq. (8.33), x1;h 1 ¼ 1;dx1;d 1;h  100
0 1.00000000 1.00000000 0.00000000
1
10 0.54030231 0.60000000 −11.04894221
2
10 −0.41614684 −0.28000000 32.71605763
3
10 −0.98999250 −0.93600000 5.45382920
4
10 −0.65364362 −0.84320000 −28.99995872
5
10 0.28366219 −0.07584000 126.73602710
6
10 0.96017029 0.75219200 21.66056294
7
10 0.75390225 0.97847040 −29.78743597
8
10 −0.14550003 0.42197248 390.01539037
9
10 −0.91113026 −0.47210342 48.18485998
10
10 −0.83907153 −0.98849659 −17.80838160

Table 8.5 Solution of state x2 of the homogeneous system of Example 8.4 with comparison of
exact samples and corresponding samples obtained via HF domain with percentage error at
different sample points for m = 10 and T = 1 s
t(s) System state x2
Exact samples of the Samples from HF domain, using % error
x x
state x2;d Eq. (8.33), x2;h 2 ¼ 2;dx2;d 2;h  100
0 0.00000000 0.00000000 –
1
10 −0.84147098 −0.80000000 4.92839099
2
10 −0.90929743 −0.96000000 −5.57601598
3
10 −0.14112001 −0.35200000 −149.43308890
4
10 0.75680250 0.53760000 28.96429385
5
10 0.95892427 0.99712000 −3.98318524
6
10 0.27941550 0.65894400 −135.82943681
7
10 −0.65698660 −0.20638720 68.58578242
8
10 −0.98935825 −0.90660864 8.36396826
9
10 −0.41211849 −0.88154317 −113.90527030
10
10 0.54402111 −0.15124316 127.80097265

Example 8.5 Consider another homogeneous system \dot{x}(t) = A x(t), where

A = [0  1  0; 0  0  1; -6  -11  -6]   and   x(0) = [1; 0; 0],

having the

Table 8.6 Solution of state x1 of the homogeneous system of Example 8.4 with comparison of
exact samples and corresponding samples obtained via HF domain with percentage error at
different sample points for m = 10 and T = 0.1 s
t(s) System state x1
Exact samples of the Samples from HF domain, using % error
x x
state x1;d Eq. (8.33), x1;h 1 ¼ 1;dx1;d 1;h  100
0 1.00000000 1.00000000 0.00000000
1
100 0.99500417 0.99501247 −0.00083417
2
100 0.98006658 0.98009963 −0.00337222
3
100 0.95533649 0.95541023 −0.00771875
4
100 0.92106099 0.92119055 −0.01406639
5
100 0.87758256 0.87778195 −0.02272037
6
100 0.82533561 0.82561741 −0.03414369
7
100 0.76484219 0.76521729 −0.04904280
8
100 0.69670671 0.69718408 −0.06851807
9
100 0.62160997 0.62219641 −0.09434212
10
100 0.54030231 0.54100229 −0.12955340

Table 8.7 Solution of state x2 of the homogeneous system of Example 8.4 with comparison of
exact samples and corresponding samples obtained via HF domain with percentage error at
different sample points for m = 10 and T = 0.1 s
t(s) System state x2
Exact samples of the Samples from HF domain, using % error
x x
state x2;d Eq. (8.33), x2;h 2 ¼ 2;dx2;d 2;h  100
0 0.00000000 0.00000000 –
1
100 −0.09983342 −0.09975062 0.08293816
2
100 −0.19866933 −0.19850623 0.08209621
3
100 −0.29552021 −0.29528172 0.08070176
4
100 −0.38941834 −0.38911176 0.07872767
5
100 −0.47942554 −0.47906039 0.07616407
6
100 −0.56464247 −0.56423035 0.07298778
7
100 −0.64421769 −0.64377209 0.06916917
8
100 −0.71735609 −0.71689216 0.06467220
9
100 −0.78332691 −0.78286118 0.05945538
10
100 −0.84147098 −0.84102112 0.05346114

solution

x_1(t) = 3 exp(-t) - 3 exp(-2t) + exp(-3t),
x_2(t) = -3 exp(-t) + 6 exp(-2t) - 3 exp(-3t)   and
x_3(t) = 3 exp(-t) - 12 exp(-2t) + 9 exp(-3t).
Figure 8.13 shows the solution of the system states x1(t), x2(t) and x3(t) and the
results obtained for m = 8 in HF domain are compared with the direct expansion.

Table 8.8 Solution of state x1 of the homogeneous system of Example 8.4 with comparison of
exact samples and corresponding samples obtained via HF domain with percentage error at
different sample points for m = 10 and T = 0.01 s
t(s) System state x1
Exact samples of the Samples from HF domain, using % error
x x
state x1;d Eq. (8.33), x1;h 1 ¼ 1;dx1;d 1;h  100
0 1.000000000000000 1.000000000000000 0.000000000000000
1
1000 0.999950000416665 0.999950001249969 −8.333450931694e−8
2
1000 0.999800006666578 0.999800009999625 −3.333713781104e−7
3
1000 0.999550033748988 0.999550041247719 −7.502106664312e−7
4
1000 0.999200106660978 0.999200119990500 −1.334019299526e−6
5
1000 0.998750260394966 0.998750281219221 −2.085031178134e−6
6
1000 0.998200539935204 0.998200569916632 −3.003547586832e−6
7
1000 0.997551000253280 0.997551041052492 −4.089937456679e−6
8
1000 0.996801706302619 0.996801759578061 −5.344637849625e−6
9
1000 0.995952733011994 0.995952800419614 −6.768154521393e−6
10
1000 0.995004165278026 0.995004248470945 −8.361062463701e−6

Table 8.9 Solution of state x2 of the homogeneous system of Example 8.4 with comparison of
exact samples and corresponding samples obtained via HF domain with percentage error at
different sample points for m = 10 and T = 0.01 s
t(s) System state x2
Exact samples of the Samples from HF domain, using % error
x x
state x2;d Eq. (8.33), x2;h 2 ¼ 2;dx2;d 2;h  100
0 0.000000000000000 0.000000000000000 –
1
1000 −0.009999833334167 −0.009999750006250 8.332930563493e−4
2
1000 −0.019998666693333 −0.019998500062498 8.332097225479e−4
3
1000 −0.029995500202496 −0.029995250318735 8.330708254490e−4
4
1000 −0.039989334186634 −0.039989001124926 8.328763539592e−4
5
1000 −0.049979169270678 −0.049978753130974 8.326262924834e−4
6
1000 −0.059964006479445 −0.059963507386653 8.323206210049e−4
7
1000 −0.069942847337533 −0.069942265441499 8.319593150895e−4
8
1000 −0.079914693969173 −0.079914029444652 8.315423457460e−4
9
1000 −0.089878549198011 −0.089877802244640 8.310696795552e−4
10
1000 −0.099833416646828 −0.099832587489093 8.305412786219e−4

The results of analysis of the system are presented in Table 8.13 and we have
compared the samples obtained via HF domain analysis with its direct expansion
for m = 8.

Table 8.10 Solution of state x1 of the homogeneous system of Example 8.4 with comparison of
exact samples and corresponding samples obtained via HF domain with percentage error at
different sample points for m = 10 and T = 0.001 s
t(s) System state x1
Exact samples of the Samples from HF domain, using % error
x x
state x1;d Eq. (8.33), x1;h 1 ¼ 1;dx1;d 1;h  100
0 1.000000000000000 1.000000000000000 0.000000000000000
1
10000 0.999999500000042 0.999999500000125 −8.337779083824e−12
2
10000 0.999998000000667 0.999998000001000 −3.335116636205e−11
3
10000 0.999995500003375 0.999995500004125 −7.501810735515e−11
4
10000 0.999992000010667 0.999992000012000 −1.333499542859e−10
5
10000 0.999987500026042 0.999987500028125 −2.083581595030e−10
6
10000 0.999982000054000 0.999982000057000 −3.000320707358e−10
7
10000 0.999975500100042 0.999975500104125 −4.083722379966e−10
8
10000 0.999968000170666 0.999968000176000 −5.333904138926e−10
9
10000 0.999959500273374 0.999959500280125 −6.750762460663e−10
10
10000 0.999950000416665 0.999950000424999 −8.334194818315e−10

Table 8.11 Solution of state x2 of the homogeneous system of Example 8.4 with comparison of
exact samples and corresponding samples obtained via HF domain with percentage error at
different sample points for m = 10 and T = 0.001 s
t(s) System state x2
Exact samples of the Samples from HF domain, using % error
x x
state x2;d Eq. (8.33), x2;h 2 ¼ 2;dx2;d 2;h  100
0 0.000000000000000 0.000000000000000 –
1
10000 −0.000999999833333 −0.000999999750000 8.333329318508e−6
2
10000 −0.001999998666667 −0.001999998500001 8.333320995157e−6
3
10000 −0.002999995500002 −0.002999995250003 8.333307115657e−6
4
10000 −0.003999989333342 −0.003999989000011 8.333287679974e−6
5
10000 −0.004999979166693 −0.004999978750031 8.333262666380e−6
6
10000 −0.005999964000065 −0.005999963500074 8.333232118187e−6
7
10000 −0.006999942833473 −0.006999942250154 8.333196004346e−6
8
10000 −0.007999914666940 −0.007999914000294 8.333154330969e−6
9
10000 −0.008999878500492 −0.008999877750523 8.333107091766e−6
10
10000 −0.009999833334167 −0.009999832500875 8.333054300258e−6

Fig. 8.5 Solution of state x1 in HF domain (a) using step size h = 0.1 s, m = 10, T = 1 s and (b) using step size h = 0.01 s, m = 100, T = 1 s, along with the exact solution of Example 8.4

8.4 Determination of Output of a Homogeneous System [3]

For an n × n homogeneous system, D will be zero and Eqs. (8.31) and (8.32) will be
reduced to,

y_S = C C_Sx                                                        (8.34)

y_T = C C_Tx                                                        (8.35)

Equations (8.34) and (8.35) provide a simple solution for the output of a homogeneous system.
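A minimal MATLAB sketch of Eqs. (8.34)–(8.35) follows. It is not the book's Appendix B code: for illustration the HF coefficient matrices of the states are generated here from the exact solution of Example 8.6, whereas in practice they come from the recursive HF analysis of the state equation.

% Sketch only: output coefficients from state coefficients via Eqs. (8.34)-(8.35)
m = 10; T = 1; h = T/m; t = 0:h:T;            % m + 1 sample points
x1 = t .* exp(-t);                            % exact state x1 of Example 8.6 (assumed test data)
x2 = (1 - t) .* exp(-t);                      % exact state x2 of Example 8.6
CSx = [x1(1:m); x2(1:m)];                     % SHF coefficients of the states (left samples)
CTx = [diff(x1); diff(x2)];                   % TF coefficients of the states (increments)
C = [1 0];                                    % output matrix of Example 8.6
yS = C * CSx;                                 % Eq. (8.34): SHF coefficients of y(t)
yT = C * CTx;                                 % Eq. (8.35): TF coefficients of y(t)
ySamples = [yS, yS(end) + yT(end)];           % output samples at t = 0, h, ..., T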

Fig. 8.6 Solution of state x2 in HF domain (a) using step size h = 0.1 s, m = 10, T = 1 s and (b) using step size h = 0.01 s, m = 100, T = 1 s, along with the exact solution of Example 8.4

Fig. 8.7 Solution of state x1 in HF domain, using step size h = 0.01 s, for m = 10 and T = 0.1 s, along with the exact solution of Example 8.4

Fig. 8.8 Solution of state x2 in HF domain, using step size h = 0.01 s, for m = 10 and T = 0.1 s, along with the exact solution of Example 8.4

Fig. 8.9 Solution of state x1 in HF domain, using step size h = 0.001 s, for m = 10 and T = 0.01 s, along with the exact solution of Example 8.4

Fig. 8.10 Solution of state x2 in HF domain, using step size h = 0.001 s, for m = 10 and T = 0.01 s, along with the exact solution of Example 8.4

Fig. 8.11 Solution of state x1 in HF domain, using step size h = 0.0001 s, for m = 10 and T = 0.001 s, along with the exact solution of Example 8.4

Fig. 8.12 Solution of state x2 in HF domain, using step size h = 0.0001 s, for m = 10 and T = 0.001 s, along with the exact solution of Example 8.4

Table 8.12 Comparison of maximum absolute error obtained via MERKDP510, MERKDP512,
MERKDP514 [6, 7] and HF based approaches for step sizes h of 0.1, 0.01, 0.001 and 0.0001 s
respectively, for Example 8.4
h (s)      Maximum absolute error
           MERKDP510     MERKDP512     MERKDP514     HF domain
0.1        1.6 × 10^−1   4.4 × 10^−2   1.8 × 10^−3   390.01539037
0.01       3.1 × 10^−3   1.1 × 10^−3   7.4 × 10^−4   0.12955340
0.001      2.9 × 10^−4   2.8 × 10^−5   1.9 × 10^−6   8.332930563493e−4
0.0001     2.9 × 10^−5   1.0 × 10^−6   2.9 × 10^−7   8.333329318508e−6

Fig. 8.13 Comparison of HF domain recursive solution for m = 8 and T = 1 s with the exact solutions of state x1, state x2 and state x3 of Example 8.5

8.4.1 Numerical Examples

Example 8.6 Consider a homogeneous system ẋ(t) = A x(t), y(t) = C x(t), where

A = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix},   C = [\,1 \;\; 0\,],   x_0 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}

The samples of the system output y(t) are presented in Table 8.14 and its time variation is shown in Fig. 8.14 for m = 10 and T = 1 s.

Example 8.7 Consider a homogeneous system ẋ(t) = A x(t), y(t) = C x(t), where

A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix},   C = [\,4 \;\; 5 \;\; 1\,],   x_0 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}

The system output y(t) is presented in Table 8.15 and its time variation is shown in Fig. 8.15 for m = 10 and T = 1 s.

8.5 Analysis of a Non-homogeneous System with Jump Discontinuity at Input

We can modify Eq. (8.21) to make it suitable for the HFm based approach (as described in Chap. 3), so that it comes up with good results in spite of the jump discontinuities. The modification is quite simple: in the RHS of Eq. (8.21), all the triangular function coefficient matrices associated with the matrix B have to be modified as discussed in Chap. 3, Sect. 3.5.1. That is, all the C^T_Tu's in (8.21) are to be replaced by C'^T_Tu, where C'^T_Tu ≜ C^T_Tu J_k(m). This modification yields good results for system analysis, as shown in the numerical section. The modified form for analysing the system with jump discontinuities is given by

Table 8.13 Solution of states x1, x2 and x3 of the homogeneous system of Example 8.5 with
comparison of exact samples and corresponding samples obtained via HF domain with percentage
error at different sample points for m = 8 and T = 1 s
(a) System state x1
t (s)    Exact samples of the state x1,d     Samples from HF domain, using Eq. (8.33), x1,h     % error ε1 = [(x1,d − x1,h)/x1,d] × 100
0        1.00000000     1.00000000     0.00000000
1/8      0.99837764     0.99793602     0.04423376
2/8      0.98917692     0.98896937     0.02098209
3/8      0.96942065     0.96964539     −0.02318292
4/8      0.93908382     0.93971286     −0.06698444
5/8      0.89962486     0.90054168     −0.10191137
6/8      0.85310840     0.85417906     −0.12550105
7/8      0.80170398     0.80281012     −0.13797362
8/8      0.74741954     0.74847056     −0.14061982

(b) System state x2
t (s)    Exact samples of the state x2,d     Samples from HF domain, using Eq. (8.33), x2,h     % error ε2 = [(x2,d − x2,h)/x2,d] × 100
0        0.00000000     0.00000000     –
1/8      −0.03655385    −0.03302374    9.65728644
2/8      −0.11431805    −0.11044264    3.39002458
3/8      −0.20162592    −0.19874093    1.43086265
4/8      −0.28170581    −0.28017962    0.54176731
5/8      −0.34682040    −0.34655920    0.07531276
6/8      −0.39451637    −0.39524283    −0.18413938
7/8      −0.42526167    −0.42666011    −0.32884224
8/8      −0.44098783    −0.44277287    −0.40478215

(c) System state x3
t (s)    Exact samples of the state x3,d     Samples from HF domain, using Eq. (8.33), x3,h     % error ε3 = [(x3,d − x3,h)/x3,d] × 100
0        0.00000000     0.00000000     –
1/8      −0.51251518    −0.52837977    −3.09543807
2/8      −0.69066659    −0.71032272    −2.84596508
3/8      −0.68465859    −0.70244984    −2.59855792
4/8      −0.58678987    −0.60056918    −2.34825288
5/8      −0.45207858    −0.46150419    −2.08494948
6/8      −0.31186924    −0.31743382    −1.78426702
7/8      −0.18274345    −0.18524277    −1.36766598
8/8      −0.07230146    −0.07256132    −0.35941183

Table 8.14 Solution of output of the homogeneous system of Example 8.6 with comparison of
exact samples and corresponding samples obtained via HF domain with percentage error at
different sample points for m = 10 and T = 1 s
t (s)    Direct expansion     HF coefficients using Eq. (8.34)     % error
0        0.00000000     0.00000000     –
1/10     0.09048374     0.09070295     −0.24226452
2/10     0.16374615     0.16412914     −0.23389252
3/10     0.22224547     0.22274670     −0.22552991
4/10     0.26812802     0.26871030     −0.21716492
5/10     0.30326533     0.30389855     −0.20880066
6/10     0.32928698     0.32994700     −0.20043914
7/10     0.34760971     0.34827739     −0.19207749
8/10     0.35946317     0.36012356     −0.18371562
9/10     0.36591269     0.36655434     −0.17535604
10/10    0.36787944     0.36849378     −0.16699493

Fig. 8.14 Hybrid function based solution of system output with m = 10, T = 1 s, and its comparison with the exact output of Example 8.6

\begin{bmatrix} c_{Sx1(i+1)} \\ c_{Sx2(i+1)} \\ \vdots \\ c_{Sxn(i+1)} \end{bmatrix}
= \left(\frac{2}{h}\,I - A\right)^{-1}\left(\frac{2}{h}\,I + A\right)
\begin{bmatrix} c_{Sx1i} \\ c_{Sx2i} \\ \vdots \\ c_{Sxni} \end{bmatrix}
+ 2\left(\frac{2}{h}\,I - A\right)^{-1} B \left[\,C_{Su} + \tfrac{1}{2}\,C'^{\,T}_{Tu}\,\right]_i     (8.36)

The inverse in (8.36) can always be made to exist by judicious choice of h.
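A short MATLAB sketch of how the recursion (8.36) is evaluated step by step for the system of Example 8.8 is given below. It is only a sketch: the jump-modified coefficients C'_Tu of Sect. 3.5.1 are not reproduced here, so the ordinary TF coefficients C_Tu are used (i.e. the HFc variant); the HFm variant follows by substituting C'_Tu = C_Tu J_k(m).

% Sketch of the recursion (8.36) for Example 8.8 (HFc variant)
A = [0 1; -2 -3];  B = [0; 1];  x0 = [0; 0.5];
m = 10; T = 1; h = T/m; a = 0.2; t = 0:h:T;
u   = 1 + (t >= a);                      % u(t) + u(t - a), sampled at the nodes
CSu = u(1:m);  CTu = diff(u);            % SHF and TF coefficients of the input
P = (2/h)*eye(2) - A;  R = (2/h)*eye(2) + A;
X = zeros(2, m + 1);  X(:, 1) = x0;      % columns hold the SHF state coefficients
for i = 1:m
    X(:, i+1) = P \ (R*X(:, i) + 2*B*(CSu(i) + 0.5*CTu(i)));   % Eq. (8.36)
end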



Table 8.15 Solution of output of the homogeneous system of Example 8.7 with comparison of exact samples and corresponding samples obtained via HF domain with percentage error at different sample points for m = 10 and T = 1 s

t (s)    Direct expansion     HF coefficients using Eq. (8.34)     % error
0        4.00000000     4.00000000     0.00000000
1/10     3.43074808     3.43083004     −0.00238898
2/10     2.92429700     2.92390133     0.01353043
3/10     2.47973050     2.47865663     0.04330592
4/10     2.09358536     2.09183323     0.08369040
5/10     1.76101633     1.75868707     0.13226794
6/10     1.47656750     1.47380325     0.18720783
7/10     1.23466893     1.23161802     0.24710348
8/10     1.02994320     1.02674151     0.31086083
9/10     0.85738230     0.85414466     0.37761917
10/10    0.71243756     0.70925511     0.44669880

Fig. 8.15 Hybrid function based solution of system output with m = 10, T = 1 s, and its comparison with the exact output of Example 8.7

8.5.1 Numerical Example

Example 8.8 (vide Appendix B, Program no. 25)


Consider the non-homogeneous system ẋ(t) = A x(t) + B u(t) + B u(t − a), where

A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix},   B = \begin{bmatrix} 0 \\ 1 \end{bmatrix},   x(0) = \begin{bmatrix} 0 \\ 0.5 \end{bmatrix},

u(t) is the unit step function and u(t − a) is the delayed unit step function, having the solution

Fig. 8.16 Comparison of exact samples of the states x1 and x2 of the non-homogeneous system of Example 8.8 with the samples obtained using (a) the HFc approach and (b) the HFm approach, for m = 10 and T = 1 s (vide Appendix B, Program no. 25)

 
x_1(t) = \tfrac{1}{2}\,[1 - \exp(-t)] + \left[\tfrac{1}{2} - \exp(-(t-a)) + \tfrac{1}{2}\exp(-2(t-a))\right] u(t-a)

x_2(t) = \tfrac{1}{2}\exp(-t) + \left[\exp(-(t-a)) - \exp(-2(t-a))\right] u(t-a)
Considering a = 0.2 s, the exact solution of the states x1(t) and x2(t) are shown in
Fig. 8.16a along with results computed using the HFc approach, with the effect of
combined presence of non-delayed and delayed inputs to the system for m = 10 and
T = 1 s. Figure 8.16b shows similar results using the HFm approach and presents a
visual comparison of the results with the exact solution and the results obtained via
the HFc approach. It is evident that the HFm approach produces much better results than the HFc based analysis.

To get an idea of how the error varies with increasing m for both the HFc and HFm approaches, we compute the percentage errors at the (m + 1) sample points of each individual state and calculate the AMP error, vide Eq. (4.45). Only in this case, the number of elements in the denominator will be r = m + 1.
When the AMP error is computed using the conventional HF based approach we call it εavic, and when it is computed using the modified HF based approach we call it εavim.
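Assuming, as the description of Eq. (4.45) above suggests, that the AMP error is the mean of the absolute percentage errors over the r = m + 1 sample points (the exact form of Eq. (4.45) is not repeated here), it can be evaluated with a few MATLAB lines such as the following; the sample values are only placeholders taken from Table 8.16.

% Hedged sketch of the AMP error computation
xExact = [0 0.04758129 0.09063462];              % a few exact samples of x1 (placeholder data)
xHF    = [0 0.04761905 0.09286745];              % corresponding HFc samples (placeholder data)
pctErr = 100 * (xExact - xHF) ./ xExact;         % percentage error at each sample point
pctErr(xExact == 0) = 0;                         % entries marked "-" in the tables are excluded
ampError = sum(abs(pctErr)) / numel(pctErr);     % AMP error with r = m + 1 terms in the denominator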
Table 8.16 presents the percentage error at different sample points in a tabular
form. It is observed that the error is much less for the HFm approach. Also, the ratio
of the AMP errors of the states x1 and x2, via HFc approach and the HFm approach,
are given by
ε_av1c / ε_av1m = 46.23544541 (for x1)   and   ε_av2c / ε_av2m = 24.26089295 (for x2)

This indicates superiority of the HFm approach beyond any doubt.


Example 8.9 Consider the non-homogeneous system ẋ(t) = A x(t) + B u_1(t) + B u_2(t − a), where

A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix},   B = \begin{bmatrix} 0 \\ 1 \end{bmatrix},   x(0) = \begin{bmatrix} 0 \\ 0.5 \end{bmatrix},

u_1(t) is a unit ramp function and u_2(t − a) is a delayed unit step function.
The system has the following solution

x_1(t) = -\tfrac{3}{4} + \tfrac{1}{2}t + \tfrac{3}{2}\exp(-t) - \tfrac{3}{4}\exp(-2t) + \left[\tfrac{1}{2} - \exp(-(t-a)) + \tfrac{1}{2}\exp(-2(t-a))\right] u(t-a)

x_2(t) = \tfrac{1}{2} - \tfrac{3}{2}\exp(-t) + \tfrac{3}{2}\exp(-2t) + \left[\exp(-(t-a)) - \exp(-2(t-a))\right] u(t-a)

Considering a = 0.2 s, Fig. 8.17a shows the solution of the system states
x1(t) and x2(t), along with the effect of the combined presence of non-delayed and
delayed inputs to the system, using the HFc approach.
Figure 8.17b represents the effectiveness of the HFm approach compared to the HFc approach. It is noted that for the state x2, which is affected by the jump input, the samples derived using the HFm approach are very close to the exact samples of the states, while those obtained via the HFc approach are not as close to the exact samples. This indicates a considerably smaller MISE for the HFm based analysis compared to the HFc based analysis.
Table 8.16 Percentage errors at different sample points of the (a) state x1 and (b) state x2 of Example 8.8 computed using the HFc based approach and the HFm based approach, for m = 10, T = 1 s. The results are compared with the exact samples (vide Appendix B, Program no. 25)

(a) State x1 (ε_av1c = 3.23226287, ε_av1m = 0.06990876)
t (s)    Exact samples of the state x1,d    Samples from HFc approach, x1,hc    Samples from HFm approach, x1,hm    % error ε_x1,c = [(x1,d − x1,hc)/x1,d] × 100    % error ε_x1,m = [(x1,d − x1,hm)/x1,d] × 100
0        0.00000000    0.00000000    0.00000000    –              –
1/10     0.04758129    0.04761905    0.04761905    −0.07935892    −0.07935892
2/10     0.09063462    0.09286745    0.09070295    −2.46355090    −0.07539062
3/10     0.13411885    0.13990644    0.13401262    −4.31526963    0.07920587
4/10     0.18126925    0.18962091    0.18106849    −4.60732308    0.11075238
5/10     0.23032227    0.24045506    0.23008268    −4.39939655    0.10402381
6/10     0.27993862    0.29123784    0.27969781    −4.03632053    0.08602243
7/10     0.32911641    0.34110322    0.32889867    −3.64211860    0.06615896
8/10     0.37712099    0.38942602    0.37694088    −3.26288653    0.04775921
9/10     0.42342835    0.43577015    0.42329350    −2.91473162    0.03184718
10/10    0.46767957    0.47984706    0.46759273    −2.60167234    0.01856827

(b) State x2 (ε_av2c = 2.94188446, ε_av2m = 0.12126035)
t (s)    Exact samples of the state x2,d    Samples from HFc approach, x2,hc    Samples from HFm approach, x2,hm    % error ε_x2,c = [(x2,d − x2,hc)/x2,d] × 100    % error ε_x2,m = [(x2,d − x2,hm)/x2,d] × 100
0        0.50000000    0.50000000    0.50000000    0.00000000      0.00000000
1/10     0.45241871    0.45238095    0.45238095    0.00834625      0.00834625
2/10     0.40936538    0.45258710    0.40929705    −10.55822551    0.01669169
3/10     0.45651578    0.48819273    0.45689647    −6.93885105     −0.08339033
4/10     0.48357073    0.50609660    0.48422077    −4.65823686     −0.13442501
5/10     0.49527191    0.51058653    0.49606308    −3.09216406     −0.15974457
6/10     0.49539690    0.50506892    0.49623962    −1.95237798     −0.17011007
7/10     0.48694387    0.49223868    0.48777742    −1.08735530     −0.17117989
8/10     0.47228191    0.47421735    0.47306683    −0.40980608     −0.16619735
9/10     0.45327317    0.45266533    0.45398554    0.13410014      −0.15716130
10/10    0.43137217    0.42887288    0.43199920    0.57938137      −0.14535708

Fig. 8.17 Comparison of exact samples of the states x1 and x2 of the non-homogeneous system of Example 8.9 with the samples obtained using (a) the HFc approach and (b) the HFm approach, for m = 10 and T = 1 s

8.6 Conclusion

In this chapter, we have analysed the state space model of a non-homogeneous


system and solved for the states in hybrid function platform. The method of analysis
is attractive in the sense that it offers a simple recursive solution in a generalized
form. This recursive equation has been used for the analysis of homogeneous
systems by putting the condition B = 0. Different types of numerical examples have
been treated using the derived recursive matrix equations for homogeneous as well
as non-homogeneous systems, and the results are compared with the exact solutions
of the system states with error estimates. It is found that the HF method is a strong
tool for such analysis. This fact is reflected through various tables and curves.
As an interesting example [6], we have taken up a set of simultaneous differ-
ential equations, which are no different from the well known homogeneous state
equation, having oscillatory solution. This example has been treated for various step

sizes, i.e., h = 0.1, 0.01, 0.001 and 0.0001 s and the maximum absolute errors
incurred in HF domain analysis have been compared with other improved fifth
order Runge-Kutta methods, such as MERKDP510, MERKDP512 and
MERKDP514 suggested by Dormand and Prince [7].
Apart from deriving recursive equations for solving the system states, yet another recursive matrix equation has been derived for solving the output equations of any non-homogeneous as well as homogeneous system. As before,
HF domain solutions are compared with the exact outputs and found to be reliably
close.

References

1. Ogata, K.: Modern Control Engineering 5th ed. Prentice Hall of India, Upper Saddle River
(2011)
2. Ogata, K.: System Dynamics, 4th ed. Pearson Education, New York City (2004)
3. Roychoudhury, S., Deb, A., Sarkar, G.: Analysis and synthesis of homogeneous/non-homogeneous
control systems via orthogonal hybrid functions (HF) under states space environment. J. Inf. Optim.
Sci. 35(5 & 6), 431–482 (2014)
4. Deb, A., Sarkar, G., Ganguly, A., Biswas, A.: Approximation, integration and differentiation of
time functions using a set of orthogonal hybrid functions (HF) and their application to solution
of first order differential equations. Appl. Math. Comput. 218(9), 4731–4759 (2012)
5. Jiang, J.H., Schaufelberger, W.: Block Pulse Functions and their Application in Control System,
LNCIS, vol. 179. Springer, Berlin (1992)
6. Simos, T.E.: Modified Runge-Kutta methods for the numerical solution of ODEs with
oscillating solutions. Appl. Math. Comput. 84, 131–143 (1997)
7. Dormand, J.R., Prince, P.J.: New Runge-Kutta algorithms for numerical simulations in
dynamical astronomy. Celest. Mech. 18, 223–232 (1978)
Chapter 9
Time Varying System Analysis: State Space Approach

Abstract In this chapter, time varying system analysis is presented using state
space approach in hybrid function domain. Both homogeneous and
non-homogeneous systems are treated along with numerical examples. States and
outputs of the systems are solved. Illustration has been provided with the support of
five examples, six figures and nine tables.

As the heading implies, this chapter is dedicated to linear time varying (LTV) control system analysis.
A time varying system is one whose parameters vary with time. In the following, we intend to analyse two types of LTV control system in the hybrid function platform, namely the non-homogeneous system and the homogeneous system [1, 2].
As discussed in Chap. 8, analysing a control system means, we determine the
behavior of each system state of a system over a common time frame, knowing the
system parameters and the input signal or forcing function. Like linear time
invariant (LTI) systems, the output of a linear time varying system may be any one
of the states or a linear combination of two or many states. Therefore, knowing all
the states of the system, we can assess its performance.
In this chapter, the hybrid function set is employed for the analysis of time
varying non-homogeneous as well as homogeneous systems described by their state
space models [2].
First we take up the problem of analysis of a time varying non-homogeneous
system in HF domain. After putting the specific condition of zero forcing function,
we can easily arrive at the result of analysis of a time varying homogeneous system.
In practice, applications of time varying systems may be found in aircraft con-
trol, like during its takeoff, cruise and landing. Also, the aircraft has to adapt itself
to the continuous decrease of its fuel leading to loss of weight.
The human vocal tract is another example. It is a time variant system due to the time dependent shape of the vocal organs.
The last example is the discrete wavelet transform, used in modern signal pro-
cessing. It is often used in its time variant form. And of course there are many more
such examples.


9.1 Analysis of Non-homogeneous Time Varying State Equation [3]

Consider the following n-state non-homogeneous equation of a time varying control system

\dot{x}(t) = A(t)\,x(t) + B(t)\,u(t)                                   (9.1)

where,
A(t) is the time varying system matrix of order n,
B(t) is the time varying input vector of order (n × 1),
x(t) is the state vector with the initial conditions x(0), of order (n × 1),
and u(t) is the forcing function.
Integrating Eq. (9.1) we have

x(t) - x(0) = \int_0^t A(\tau)\,x(\tau)\,d\tau + \int_0^t B(\tau)\,u(\tau)\,d\tau     (9.2)

Expanding x(t), x(0) and u(t) via an m-set hybrid function, we get
x(t) \approx \begin{bmatrix} x_{10} & x_{11} & \cdots & x_{1(m-1)} \\ x_{20} & x_{21} & \cdots & x_{2(m-1)} \\ \vdots & & & \vdots \\ x_{n0} & x_{n1} & \cdots & x_{n(m-1)} \end{bmatrix} S_{(m)}
+ \begin{bmatrix} (x_{11}-x_{10}) & (x_{12}-x_{11}) & \cdots & (x_{1m}-x_{1(m-1)}) \\ (x_{21}-x_{20}) & (x_{22}-x_{21}) & \cdots & (x_{2m}-x_{2(m-1)}) \\ \vdots & & & \vdots \\ (x_{n1}-x_{n0}) & (x_{n2}-x_{n1}) & \cdots & (x_{nm}-x_{n(m-1)}) \end{bmatrix} T_{(m)}     (9.3)

x(0) \approx \begin{bmatrix} x_{10} & x_{10} & \cdots & x_{10} \\ x_{20} & x_{20} & \cdots & x_{20} \\ \vdots & & & \vdots \\ x_{n0} & x_{n0} & \cdots & x_{n0} \end{bmatrix} S_{(m)}     (9.4)

u(t) \approx [\,u_0 \;\; u_1 \;\; u_2 \;\; \cdots \;\; u_{m-1}\,] S_{(m)} + [\,(u_1-u_0) \;\; (u_2-u_1) \;\; (u_3-u_2) \;\; \cdots \;\; (u_m-u_{m-1})\,] T_{(m)}     (9.5)

We now follow the rule of multiplication of two time functions, as given in Chap. 2, Sect. 2.6.3 and Eq. (2.30), to expand A(t)x(t) and B(t)u(t) via an m-set hybrid function.
We have

A(t)\,x(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & & & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}
= \begin{bmatrix} \sum_{j=1}^{n} a_{1j}(t)x_j(t) \\ \vdots \\ \sum_{j=1}^{n} a_{kj}(t)x_j(t) \\ \vdots \\ \sum_{j=1}^{n} a_{nj}(t)x_j(t) \end{bmatrix}     (9.6)

where a_{kj}(t) is the kj-th element of the square matrix A(t).
Following (2.30), the k-th term of the first element of the RHS column matrix of (9.6) can be represented in HF domain as

a_{1k}(t)\,x_k(t) \approx \bar{a}_{1k}(t)\,\bar{x}_k(t)
= [\,a_{1k0}x_{k0} \;\; a_{1k1}x_{k1} \;\; a_{1k2}x_{k2} \;\; \cdots \;\; a_{1k(m-1)}x_{k(m-1)}\,] S_{(m)}
+ [\,(a_{1k1}x_{k1}-a_{1k0}x_{k0}) \;\; (a_{1k2}x_{k2}-a_{1k1}x_{k1}) \;\; \cdots \;\; (a_{1km}x_{km}-a_{1k(m-1)}x_{k(m-1)})\,] T_{(m)}

We can express the first element of the resulting matrix of (9.6) as

\sum_{j=1}^{n} a_{1j}(t)x_j(t) \approx \Big[\,\sum_{j} a_{1j0}x_{j0} \;\; \sum_{j} a_{1j1}x_{j1} \;\; \cdots \;\; \sum_{j} a_{1j(m-1)}x_{j(m-1)}\,\Big] S_{(m)}
+ \Big[\,\big(\sum_{j} a_{1j1}x_{j1} - \sum_{j} a_{1j0}x_{j0}\big) \;\; \cdots \;\; \big(\sum_{j} a_{1jm}x_{jm} - \sum_{j} a_{1j(m-1)}x_{j(m-1)}\big)\,\Big] T_{(m)}

Proceeding in the same way for all the n elements,

A(t)x(t) \approx \bar{A}(t)\,\bar{x}(t)
= \Big[\,\textstyle\sum_{j=1}^{n} a_{kji}\,x_{ji}\,\Big]_{k=1,\dots,n;\; i=0,\dots,m-1} S_{(m)}
+ \Big[\,\textstyle\sum_{j=1}^{n} a_{kj(i+1)}\,x_{j(i+1)} - \sum_{j=1}^{n} a_{kji}\,x_{ji}\,\Big]_{k=1,\dots,n;\; i=0,\dots,m-1} T_{(m)}
\;\triangleq\; A_{TXS}\, S_{(m)} + A_{TXT}\, T_{(m)}     (9.7)

Similarly, the product term B(t)u(t) in (9.2) can also be expressed in HF domain. Writing B(t)u(t) = [\,b_1(t)u(t) \;\; b_2(t)u(t) \;\; \cdots \;\; b_n(t)u(t)\,]^T, where b_i(t) is the i-th element of the matrix B(t), we have

B(t)u(t) \approx \Big[\, b_{ki}\,u_i \,\Big]_{k=1,\dots,n;\; i=0,\dots,m-1} S_{(m)}
+ \Big[\, b_{k(i+1)}u_{i+1} - b_{ki}u_i \,\Big]_{k=1,\dots,n;\; i=0,\dots,m-1} T_{(m)}
\;\triangleq\; B_{TUS}\, S_{(m)} + B_{TUT}\, T_{(m)}

Therefore, considering the first term of the RHS of Eq. (9.2), we have

\int_0^t A(\tau)x(\tau)\,d\tau \approx \Big[A_{TXS} + \tfrac{1}{2}A_{TXT}\Big] P1ss_{(m)}\, S_{(m)} + h\Big[A_{TXS} + \tfrac{1}{2}A_{TXT}\Big] T_{(m)}
= \tfrac{1}{2}\Big[\,\textstyle\sum_{j=1}^{n}\big(a_{kji}x_{ji} + a_{kj(i+1)}x_{j(i+1)}\big)\,\Big]_{k,i} P1ss_{(m)}\, S_{(m)}
+ \tfrac{h}{2}\Big[\,\textstyle\sum_{j=1}^{n}\big(a_{kji}x_{ji} + a_{kj(i+1)}x_{j(i+1)}\big)\,\Big]_{k,i} T_{(m)}     (9.8)

Similarly, the second term of the RHS of Eq. (9.2) can be expressed as

\int_0^t B(\tau)u(\tau)\,d\tau \approx \Big[B_{TUS} + \tfrac{1}{2}B_{TUT}\Big] P1ss_{(m)}\, S_{(m)} + h\Big[B_{TUS} + \tfrac{1}{2}B_{TUT}\Big] T_{(m)}
= \tfrac{1}{2}\Big[\,\big(b_{ki}u_i + b_{k(i+1)}u_{i+1}\big)\,\Big]_{k,i} P1ss_{(m)}\, S_{(m)} + \tfrac{h}{2}\Big[\,\big(b_{ki}u_i + b_{k(i+1)}u_{i+1}\big)\,\Big]_{k,i} T_{(m)}     (9.9)

Therefore, substituting P1ss from (4.9) in Eqs. (9.8) and (9.9), the RHS of Eq. (9.2) can be written as

\int_0^t A(\tau)x(\tau)\,d\tau + \int_0^t B(\tau)u(\tau)\,d\tau \approx \tfrac{h}{2}\,\Phi\, S_{(m)} + \tfrac{h}{2}\,\Psi\, T_{(m)}     (9.10)

where, for the k-th row (k = 1, 2, …, n),

\Psi_{k,i} = \sum_{j=1}^{n}\big(a_{kji}x_{ji} + a_{kj(i+1)}x_{j(i+1)}\big) + \big(b_{ki}u_i + b_{k(i+1)}u_{i+1}\big),   i = 0, 1, …, m − 1,

\Phi_{k,0} = 0   and   \Phi_{k,i} = \sum_{l=0}^{i-1} \Psi_{k,l},   i = 1, 2, …, m − 1.

Using Eqs. (9.3) and (9.4), the LHS of Eq. (9.2) can be written as

x(t) - x(0) \approx \begin{bmatrix} 0 & (x_{11}-x_{10}) & (x_{12}-x_{10}) & \cdots & (x_{1(m-1)}-x_{10}) \\ \vdots & & & & \vdots \\ 0 & (x_{n1}-x_{n0}) & (x_{n2}-x_{n0}) & \cdots & (x_{n(m-1)}-x_{n0}) \end{bmatrix} S_{(m)}
+ \begin{bmatrix} (x_{11}-x_{10}) & (x_{12}-x_{11}) & \cdots & (x_{1m}-x_{1(m-1)}) \\ \vdots & & & \vdots \\ (x_{n1}-x_{n0}) & (x_{n2}-x_{n1}) & \cdots & (x_{nm}-x_{n(m-1)}) \end{bmatrix} T_{(m)}     (9.11)

Equating the SHF components of the second column of Eqs. (9.10) and (9.11), we get

(x_{11} - x_{10}) = \tfrac{h}{2}\Big[\sum_{j=1}^{n}\big(a_{1j0}x_{j0} + a_{1j1}x_{j1}\big) + (b_{10}u_0 + b_{11}u_1)\Big]
(x_{21} - x_{20}) = \tfrac{h}{2}\Big[\sum_{j=1}^{n}\big(a_{2j0}x_{j0} + a_{2j1}x_{j1}\big) + (b_{20}u_0 + b_{21}u_1)\Big]
   ⋮
(x_{n1} - x_{n0}) = \tfrac{h}{2}\Big[\sum_{j=1}^{n}\big(a_{nj0}x_{j0} + a_{nj1}x_{j1}\big) + (b_{n0}u_0 + b_{n1}u_1)\Big]     (9.12)

From these n equations, we can write

\big(1 - \tfrac{h}{2}a_{111}\big)x_{11} - \tfrac{h}{2}a_{121}x_{21} - \cdots - \tfrac{h}{2}a_{1n1}x_{n1}
= \big(1 + \tfrac{h}{2}a_{110}\big)x_{10} + \tfrac{h}{2}a_{120}x_{20} + \cdots + \tfrac{h}{2}a_{1n0}x_{n0} + \tfrac{h}{2}(b_{10}u_0 + b_{11}u_1)

-\tfrac{h}{2}a_{211}x_{11} + \big(1 - \tfrac{h}{2}a_{221}\big)x_{21} - \cdots - \tfrac{h}{2}a_{2n1}x_{n1}
= \tfrac{h}{2}a_{210}x_{10} + \big(1 + \tfrac{h}{2}a_{220}\big)x_{20} + \cdots + \tfrac{h}{2}a_{2n0}x_{n0} + \tfrac{h}{2}(b_{20}u_0 + b_{21}u_1)
   ⋮
-\tfrac{h}{2}a_{n11}x_{11} - \tfrac{h}{2}a_{n21}x_{21} - \cdots + \big(1 - \tfrac{h}{2}a_{nn1}\big)x_{n1}
= \tfrac{h}{2}a_{n10}x_{10} + \tfrac{h}{2}a_{n20}x_{20} + \cdots + \big(1 + \tfrac{h}{2}a_{nn0}\big)x_{n0} + \tfrac{h}{2}(b_{n0}u_0 + b_{n1}u_1)     (9.13)

Writing in matrix form, we have

\begin{bmatrix} \big(1-\frac{h}{2}a_{111}\big) & -\frac{h}{2}a_{121} & \cdots & -\frac{h}{2}a_{1n1} \\ -\frac{h}{2}a_{211} & \big(1-\frac{h}{2}a_{221}\big) & \cdots & -\frac{h}{2}a_{2n1} \\ \vdots & & & \vdots \\ -\frac{h}{2}a_{n11} & -\frac{h}{2}a_{n21} & \cdots & \big(1-\frac{h}{2}a_{nn1}\big) \end{bmatrix}
\begin{bmatrix} x_{11} \\ x_{21} \\ \vdots \\ x_{n1} \end{bmatrix}
= \begin{bmatrix} \big(1+\frac{h}{2}a_{110}\big) & \frac{h}{2}a_{120} & \cdots & \frac{h}{2}a_{1n0} \\ \frac{h}{2}a_{210} & \big(1+\frac{h}{2}a_{220}\big) & \cdots & \frac{h}{2}a_{2n0} \\ \vdots & & & \vdots \\ \frac{h}{2}a_{n10} & \frac{h}{2}a_{n20} & \cdots & \big(1+\frac{h}{2}a_{nn0}\big) \end{bmatrix}
\begin{bmatrix} x_{10} \\ x_{20} \\ \vdots \\ x_{n0} \end{bmatrix}
+ \frac{h}{2}\begin{bmatrix} b_{10} \\ b_{20} \\ \vdots \\ b_{n0} \end{bmatrix} u_0
+ \frac{h}{2}\begin{bmatrix} b_{11} \\ b_{21} \\ \vdots \\ b_{n1} \end{bmatrix} u_1     (9.14)

Similarly, comparing the SHF coefficients of the third column of Eqs. (9.10) and (9.11), we have

(x_{12} - x_{10}) = \tfrac{h}{2}\Big[\sum_{j} a_{1j0}x_{j0} + 2\sum_{j} a_{1j1}x_{j1} + \sum_{j} a_{1j2}x_{j2} + (b_{10}u_0 + b_{11}u_1) + (b_{11}u_1 + b_{12}u_2)\Big]
(x_{22} - x_{20}) = \tfrac{h}{2}\Big[\sum_{j} a_{2j0}x_{j0} + 2\sum_{j} a_{2j1}x_{j1} + \sum_{j} a_{2j2}x_{j2} + (b_{20}u_0 + b_{21}u_1) + (b_{21}u_1 + b_{22}u_2)\Big]
   ⋮
(x_{n2} - x_{n0}) = \tfrac{h}{2}\Big[\sum_{j} a_{nj0}x_{j0} + 2\sum_{j} a_{nj1}x_{j1} + \sum_{j} a_{nj2}x_{j2} + (b_{n0}u_0 + b_{n1}u_1) + (b_{n1}u_1 + b_{n2}u_2)\Big]     (9.15)

From the set of Eqs. (9.15), we can write

\big(1 - \tfrac{h}{2}a_{112}\big)x_{12} - \tfrac{h}{2}a_{122}x_{22} - \cdots - \tfrac{h}{2}a_{1n2}x_{n2}
= \big(1 + \tfrac{h}{2}a_{111}\big)x_{11} + \tfrac{h}{2}a_{121}x_{21} + \cdots + \tfrac{h}{2}a_{1n1}x_{n1} + \tfrac{h}{2}(b_{11}u_1 + b_{12}u_2)

-\tfrac{h}{2}a_{212}x_{12} + \big(1 - \tfrac{h}{2}a_{222}\big)x_{22} - \cdots - \tfrac{h}{2}a_{2n2}x_{n2}
= \tfrac{h}{2}a_{211}x_{11} + \big(1 + \tfrac{h}{2}a_{221}\big)x_{21} + \cdots + \tfrac{h}{2}a_{2n1}x_{n1} + \tfrac{h}{2}(b_{21}u_1 + b_{22}u_2)
   ⋮
-\tfrac{h}{2}a_{n12}x_{12} - \tfrac{h}{2}a_{n22}x_{22} - \cdots + \big(1 - \tfrac{h}{2}a_{nn2}\big)x_{n2}
= \tfrac{h}{2}a_{n11}x_{11} + \tfrac{h}{2}a_{n21}x_{21} + \cdots + \big(1 + \tfrac{h}{2}a_{nn1}\big)x_{n1} + \tfrac{h}{2}(b_{n1}u_1 + b_{n2}u_2)     (9.16)

Writing these equations in matrix form, we get

\begin{bmatrix} \big(1-\frac{h}{2}a_{112}\big) & -\frac{h}{2}a_{122} & \cdots & -\frac{h}{2}a_{1n2} \\ -\frac{h}{2}a_{212} & \big(1-\frac{h}{2}a_{222}\big) & \cdots & -\frac{h}{2}a_{2n2} \\ \vdots & & & \vdots \\ -\frac{h}{2}a_{n12} & -\frac{h}{2}a_{n22} & \cdots & \big(1-\frac{h}{2}a_{nn2}\big) \end{bmatrix}
\begin{bmatrix} x_{12} \\ x_{22} \\ \vdots \\ x_{n2} \end{bmatrix}
= \begin{bmatrix} \big(1+\frac{h}{2}a_{111}\big) & \frac{h}{2}a_{121} & \cdots & \frac{h}{2}a_{1n1} \\ \frac{h}{2}a_{211} & \big(1+\frac{h}{2}a_{221}\big) & \cdots & \frac{h}{2}a_{2n1} \\ \vdots & & & \vdots \\ \frac{h}{2}a_{n11} & \frac{h}{2}a_{n21} & \cdots & \big(1+\frac{h}{2}a_{nn1}\big) \end{bmatrix}
\begin{bmatrix} x_{11} \\ x_{21} \\ \vdots \\ x_{n1} \end{bmatrix}
+ \frac{h}{2}\begin{bmatrix} b_{11} \\ b_{21} \\ \vdots \\ b_{n1} \end{bmatrix} u_1
+ \frac{h}{2}\begin{bmatrix} b_{12} \\ b_{22} \\ \vdots \\ b_{n2} \end{bmatrix} u_2     (9.17)

Equations (9.14) and (9.17) give a recursive solution for the states over the m subintervals. That is,

\begin{bmatrix} x_{1(k+1)} \\ x_{2(k+1)} \\ \vdots \\ x_{n(k+1)} \end{bmatrix}
= \begin{bmatrix} \big(\frac{2}{h}-a_{11(k+1)}\big) & -a_{12(k+1)} & \cdots & -a_{1n(k+1)} \\ -a_{21(k+1)} & \big(\frac{2}{h}-a_{22(k+1)}\big) & \cdots & -a_{2n(k+1)} \\ \vdots & & & \vdots \\ -a_{n1(k+1)} & -a_{n2(k+1)} & \cdots & \big(\frac{2}{h}-a_{nn(k+1)}\big) \end{bmatrix}^{-1}
\left( \begin{bmatrix} \big(\frac{2}{h}+a_{11k}\big) & a_{12k} & \cdots & a_{1nk} \\ a_{21k} & \big(\frac{2}{h}+a_{22k}\big) & \cdots & a_{2nk} \\ \vdots & & & \vdots \\ a_{n1k} & a_{n2k} & \cdots & \big(\frac{2}{h}+a_{nnk}\big) \end{bmatrix}
\begin{bmatrix} x_{1k} \\ x_{2k} \\ \vdots \\ x_{nk} \end{bmatrix}
+ \begin{bmatrix} b_{1k} \\ b_{2k} \\ \vdots \\ b_{nk} \end{bmatrix} u_k
+ \begin{bmatrix} b_{1(k+1)} \\ b_{2(k+1)} \\ \vdots \\ b_{n(k+1)} \end{bmatrix} u_{(k+1)} \right)     (9.18)

for k = 0, 1, 2, …, (m − 1).
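A minimal MATLAB sketch of the recursion (9.18), applied to the system of Example 9.1 of the next section, is given below. It is a sketch only; the book's own implementation for that example appears in Appendix B, Program no. 26.

% Sketch of the recursion (9.18) for Example 9.1: A(t) = [0 0; t 0], B = [1; 0], x(0) = [1; 1], u(t) a unit step
m = 4; T = 1; h = T/m; t = 0:h:T;
Afun = @(tk) [0 0; tk 0];                % time varying system matrix A(t)
B    = [1; 0];  u = ones(1, m + 1);      % constant input vector and step samples
X = zeros(2, m + 1);  X(:, 1) = [1; 1];  % columns hold the state samples
for k = 0:m-1
    Ak  = Afun(t(k+1));                  % A evaluated at t = kh
    Ak1 = Afun(t(k+2));                  % A evaluated at t = (k+1)h
    lhs = (2/h)*eye(2) - Ak1;
    rhs = ((2/h)*eye(2) + Ak)*X(:, k+1) + B*u(k+1) + B*u(k+2);
    X(:, k+2) = lhs \ rhs;               % Eq. (9.18)
end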

9.1.1 Numerical Examples

Example 9.1 (vide Appendix B, Program no. 26) Consider the non-homogeneous system ẋ = A x + B u, where

A = \begin{bmatrix} 0 & 0 \\ t & 0 \end{bmatrix},   B = \begin{bmatrix} 1 \\ 0 \end{bmatrix},   x(0) = \begin{bmatrix} 1 \\ 1 \end{bmatrix},   and u = u(t), a unit step function.

The solution of the equation is x_1(t) = 1 + t and x_2(t) = 1 + t²/2 + t³/3.
Figure 9.1 graphically shows the comparison of the results obtained in HF
domain using Eq. (9.18) with the exact curve. It is noted that the HF domain
solutions (black dots) are right upon the exact curves. Table 9.1a, b show the
comparison of the exact samples with HF domain solutions for the states of the
system of Example 9.1 for m = 4 and T = 1 s.

Example 9.2 Consider the non-homogeneous system ẋ = A x + B u, where

A = \begin{bmatrix} 0 & 1 \\ 0 & t \end{bmatrix},   B = \begin{bmatrix} 0 \\ 1 \end{bmatrix},   x(0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix},   and u = u(t), a unit step function,

having the solution

x_1(t) = t + \tfrac{1}{2}t^2 + \tfrac{1}{6}t^3 + \tfrac{1}{12}t^4 + \tfrac{1}{40}t^5 + \tfrac{1}{90}t^6 + \cdots

x_2(t) = 1 + t + \tfrac{1}{2}t^2 + \tfrac{1}{3}t^3 + \tfrac{1}{8}t^4 + \tfrac{1}{15}t^5 + \tfrac{1}{48}t^6 + \tfrac{1}{105}t^7 + \cdots

Fig. 9.1 Comparison of the HF domain solution of the states x1 and x2 of Example 9.1 with the exact solutions for m = 4 and T = 1 s (vide Appendix B, Program no. 26)

Table 9.1 Solution of the non-homogeneous system of Example 9.1 in HF domain compared
with the exact samples along with percentage error at different sample points for (a) state x1 and
(b) state x2, with m = 4, T = 1 s (vide Appendix B, Program no. 26)
t (s)    Samples of the exact solution     Samples from HF analysis using Eq. (9.18)     % error ε = [(s_d − s_h)/s_d] × 100
(a) System state x1 (m = 4)
0        1.00000000     1.00000000     0.00000000
1/4      1.25000000     1.25000000     0.00000000
2/4      1.50000000     1.50000000     0.00000000
3/4      1.75000000     1.75000000     0.00000000
4/4      2.00000000     2.00000000     0.00000000
(b) System state x2 (m = 4)
0        1.00000000     1.00000000     0.00000000
1/4      1.03645833     1.03906250     −0.25125660
2/4      1.16666667     1.17187500     −0.44642828
3/4      1.42187500     1.42968750     −0.54945055
4/4      1.83333333     1.84375000     −0.56818200

Table 9.2a and b show the comparison of the exact samples with HF domain
solutions for the states of the system of Example 9.2 for m = 4 and T = 1 s.
Results obtained via direct expansion and using Eq. (9.18) are plotted in Fig. 9.2.
It is noted that the HF domain solutions (black dots) are right upon the exact curves.

Table 9.2 Solution of the non-homogeneous system of Example 9.2 in HF domain compared
with the exact samples along with percentage error at different sample points for (a) state x1 and
(b) state x2 with m = 4, T = 1 s
t (s)    Samples of the exact solution     Samples from HF analysis using Eq. (9.18)     % error ε = [(s_d − s_h)/s_d] × 100
(a) System state x1 (m = 4)
0        0.00000000     0.00000000     –
1/4      0.28421139     0.28629032     −0.73147315
2/4      0.65228950     0.65833333     −0.92655638
3/4      1.13917694     1.15065814     −1.00785046
4/4      1.80486111     1.81990969     −0.83378050
(b) System state x2 (m = 4)
0        1.00000000     1.00000000     0.00000000
1/4      1.28701739     1.29032258     −0.25681005
2/4      1.67696243     1.68602151     −0.54020768
3/4      2.23222525     2.25257694     −0.91172205
4/4      3.05535714     3.10143546     −1.50811568

Fig. 9.2 Comparison of the HF domain solution of the states of Example 9.2 with the exact solutions for m = 4 and T = 1 s

9.2 Determination of Output of a Non-homogeneous Time Varying System

Consider the output of a time varying non-homogeneous system described by

y(t) = C(t)\,x(t) + D(t)\,u(t)

where,
x(t) is the state vector given by x(t) = [x_1  x_2  ⋯  x_n]^T,
y(t) is the output vector, expressed as y(t) ≜ [y_1  y_2  ⋯  y_v]^T,
u(t) is the input vector, expressed as u(t) ≜ [u_1  u_2  ⋯  u_r]^T,
C(t) is the time varying output matrix and
D(t) is the time varying direct transmission matrix.
As shown in Sect. 9.1, we can easily expand the products C(t)x(t) and D(t)u(t) in HF domain, vide Eqs. (9.6) and (9.7). Since C(t), D(t), x(t) and u(t) are already known, y(t) can easily be determined.

9.3 Analysis of Homogeneous Time Varying State Equation [3]

If we consider the time varying input vector B(t) as zero, the resulting expression
from Eq. (9.18) provides the recursive solution for an n-state homogeneous time
varying system.
That is,
\begin{bmatrix} x_{1(k+1)} \\ x_{2(k+1)} \\ \vdots \\ x_{n(k+1)} \end{bmatrix}
= \begin{bmatrix} \big(\frac{2}{h}-a_{11(k+1)}\big) & -a_{12(k+1)} & \cdots & -a_{1n(k+1)} \\ -a_{21(k+1)} & \big(\frac{2}{h}-a_{22(k+1)}\big) & \cdots & -a_{2n(k+1)} \\ \vdots & & & \vdots \\ -a_{n1(k+1)} & -a_{n2(k+1)} & \cdots & \big(\frac{2}{h}-a_{nn(k+1)}\big) \end{bmatrix}^{-1}
\begin{bmatrix} \big(\frac{2}{h}+a_{11k}\big) & a_{12k} & \cdots & a_{1nk} \\ a_{21k} & \big(\frac{2}{h}+a_{22k}\big) & \cdots & a_{2nk} \\ \vdots & & & \vdots \\ a_{n1k} & a_{n2k} & \cdots & \big(\frac{2}{h}+a_{nnk}\big) \end{bmatrix}
\begin{bmatrix} x_{1k} \\ x_{2k} \\ \vdots \\ x_{nk} \end{bmatrix}     (9.19)

for k = 0, 1, 2, …, (m − 1).
For a time-invariant system, A(t) = A and B(t) = B. For such a system Eq. (9.18) can be modified as

\begin{bmatrix} x_{1(k+1)} \\ x_{2(k+1)} \\ \vdots \\ x_{n(k+1)} \end{bmatrix}
= \begin{bmatrix} \big(\frac{2}{h}-a_{11}\big) & -a_{12} & \cdots & -a_{1n} \\ -a_{21} & \big(\frac{2}{h}-a_{22}\big) & \cdots & -a_{2n} \\ \vdots & & & \vdots \\ -a_{n1} & -a_{n2} & \cdots & \big(\frac{2}{h}-a_{nn}\big) \end{bmatrix}^{-1}
\left( \begin{bmatrix} \big(\frac{2}{h}+a_{11}\big) & a_{12} & \cdots & a_{1n} \\ a_{21} & \big(\frac{2}{h}+a_{22}\big) & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & \big(\frac{2}{h}+a_{nn}\big) \end{bmatrix}
\begin{bmatrix} x_{1k} \\ x_{2k} \\ \vdots \\ x_{nk} \end{bmatrix}
+ \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} u_k
+ \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} u_{(k+1)} \right)     (9.20)

This (9.20) can be written as

x_{k+1} = \left(\frac{2}{h}I - A\right)^{-1}\left[\left(\frac{2}{h}I + A\right)x_k + B\,(u_k + u_{k+1})\right],   for k = 0, 1, 2, …, (m − 2)     (9.21)

where x_k = [x_{1k}  x_{2k}  ⋯  x_{nk}]^T and x_{k+1} = [x_{1(k+1)}  x_{2(k+1)}  ⋯  x_{n(k+1)}]^T.
In (9.21), if we set the vector B as zero, we get the recursive solution for the
states of a homogeneous linear time-invariant (LTI) system.
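For the time-invariant case the coefficient matrices in (9.21) are constant, so they can be formed once outside the loop. A minimal MATLAB sketch follows, using the homogeneous system of Example 8.6 (with B = 0) as an assumed test case:

% Sketch of the recursion (9.21) for a homogeneous LTI system
A = [0 1; -1 -2];  B = [0; 0];  x0 = [0; 1];
m = 10; T = 1; h = T/m;
u = zeros(1, m + 1);                           % no forcing function
M1 = (2/h)*eye(2) - A;  M2 = (2/h)*eye(2) + A; % constant matrices, formed once
X = zeros(2, m + 1);  X(:, 1) = x0;
for k = 0:m-1
    X(:, k+2) = M1 \ (M2*X(:, k+1) + B*(u(k+1) + u(k+2)));   % Eq. (9.21)
end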

9.3.1 Numerical Examples


Example 9.3 Consider the first order homogeneous system ẋ = A x, where A = 2t and x(0) = 1, having the solution x(t) = exp(t²).
It is noted from Table 9.3a, b that increasing m from 4 to 10 improves the accuracy to such a degree that the HF domain solution almost coincides with the exact samples of the states. Figures 9.3 and 9.4 prove this point graphically.

Example 9.4 [4] Consider the homogeneous system ẋ = A x, where

A = \begin{bmatrix} 0 & 0 \\ t & 0 \end{bmatrix}   and   x(0) = \begin{bmatrix} 1 \\ 1 \end{bmatrix},

having the solution x_1(t) = 1 and x_2(t) = \tfrac{1}{2}t^2 + 1.

Table 9.3 Solution of the homogeneous system of Example 9.3 in HF domain compared with the
exact samples along with percentage error at different sample points for (a) m = 4, T = 1 s and
(b) m = 10, T = 1 s
t (s)    Samples of the exact solution     Samples from HF analysis using Eq. (9.19)     % error ε = [(s_d − s_h)/s_d] × 100
(a) System state x (m = 4)
0        1.00000000     1.00000000     0.00000000
1/4      1.06449446     1.06666667     −0.20406024
2/4      1.28402542     1.29523810     −0.87324439
3/4      1.75505466     1.79340659     −2.18522709
4/4      2.71828183     2.83956044     −4.46159072
(b) System state x (m = 10)
0        1.00000000     1.00000000     0.00000000
1/10     1.01005017     1.01010101     −0.00503341
2/10     1.04081077     1.04102247     −0.02033991
3/10     1.09417428     1.09468342     −0.04653189
4/10     1.17351087     1.17450409     −0.08463663
5/10     1.28402542     1.28577290     −0.13609388
6/10     1.43332941     1.43623568     −0.20276358
7/10     1.63231622     1.63699981     −0.28692909
8/10     1.89648088     1.90390195     −0.39130740
9/10     2.24790799     2.25957594     −0.51905817
10/10    2.71828183     2.73659753     −0.67379695

Fig. 9.3 Comparison of the HF domain solution of state x of Example 9.3 with the exact solution for m = 4 and T = 1 s

We see that the system state x2, obtained via direct expansion and the proposed
HF domain technique, are the same leading to zero error, vide Table 9.4.
Figure 9.5 shows the results obtained via HF analysis with the exact solutions,
for m = 4 and T = 1 s.

Fig. 9.4 Comparison of the HF domain solution of state x of Example 9.3 with the exact solution for m = 10 and T = 1 s

Table 9.4 HF domain solution of the state x2 of the system of Example 9.4 compared with the
exact samples along with percentage error at different sample points, for m = 4 and T = 1 s
System state x2 (m = 4)
t (s)    Samples of the exact solution     Samples from HF analysis using Eq. (9.19)     % error ε = [(s_d − s_h)/s_d] × 100
0        1.00000000     1.00000000     0.00000000
1/4      1.03125000     1.03125000     0.00000000
2/4      1.12500000     1.12500000     0.00000000
3/4      1.28125000     1.28125000     0.00000000
4/4      1.50000000     1.50000000     0.00000000

Example 9.5 [5] (vide Appendix B, Program no. 27) Consider the homogeneous system ẋ = A x, where

A = \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix}   and   x(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix},

having the solution

x_1(t) = [\cos(1 - \cos t) + 2\sin(1 - \cos t)]\,e^{\sin t}

x_2(t) = [-\sin(1 - \cos t) + 2\cos(1 - \cos t)]\,e^{\sin t}

Analysis of the given system produces the following results:


Tables 9.5a, b show the comparison of the exact samples with HF domain
solutions for the states of the system of Example 9.5 for m = 8 and T = 1 s. Results
obtained via direct expansion and using Eq. (9.19) are plotted in Fig. 9.6. It is noted
that the HF domain solutions (black dots) are right upon the exact curves.

Fig. 9.5 Comparison of the HF domain solution of the states x1 and x2 of Example 9.4 with the exact solutions for m = 4 and T = 1 s

Table 9.5 HF domain solution of the (a) state x1 and (b) state x2 of the homogeneous system of
Example 9.5 compared with the exact samples along with percentage error at different sample
points, with m = 8, T = 1 s (vide Appendix B, Program no. 27)
t (s)    Samples of the exact solution     Samples from HF analysis using Eq. (9.19)     % error ε = [(s_d − s_h)/s_d] × 100
(a) System state x1 (m = 8)
0        1.00000000     1.00000000     0.00000000
1/8      1.15042193     1.15148482     −0.09239132
2/8      1.35969222     1.36188071     −0.16095481
3/8      1.63917010     1.64229144     −0.19042197
4/8      1.99751628     2.00098980     −0.17389195
5/8      2.43785591     2.44061935     −0.11335535
6/8      2.95465368     2.95517291     −0.01757329
7/8      3.53102052     3.52746220     0.10077313
8/8      4.13741725     4.12800452     0.22750256
(b) System state x2 (m = 8)
0        2.00000000     2.00000000     0.00000000
1/8      2.25665267     2.25592250     0.03235633
2/8      2.52034777     2.51828652     0.08178435
3/8      2.77758241     2.77342604     0.14963984
4/8      3.00888958     3.00181231     0.23521202
5/8      3.18903687     3.17834568     0.33524824
6/8      3.28860794     3.27401580     0.44371784
7/8      3.27727645     3.25919411     0.55174900
8/8      3.12867401     3.10841539     0.64751457

Fig. 9.6 Comparison of the HF domain solution of the states of Example 9.5 with the exact solutions for m = 8 and T = 1 s (vide Appendix B, Program no. 27)

9.4 Determination of Output of a Homogeneous Time Varying System

Consider the output of a time varying homogeneous system described by

y(t) = C(t)\,x(t)

where,
x(t) is the state vector given by x(t) = [x_1  x_2  ⋯  x_n]^T,
y(t) is the output vector, expressed as y(t) ≜ [y_1  y_2  ⋯  y_v]^T, and
C(t) is the time varying output matrix.
We can easily solve for y(t) in a fashion similar to that of Sect. 9.2.

9.5 Conclusion

In this chapter, we have analysed the state space model of a time varying
non-homogeneous system and solved for the states in hybrid function platform. The
proficiency of the method has been illustrated by suitable examples. Also, by
putting B(t) = 0, the same method has been applied successfully to analyze
homogeneous time varying systems as well. With slight modification, we arrive
from Eqs. (9.19) to (9.21) which is suitable for the analysis of time-invariant
non-homogeneous systems. Further, by setting B = 0, Eq. (9.21) becomes suitable
for the analysis of linear time-invariant homogeneous system.
Using the above mentioned recursive matrix equations, different types of
numerical examples have been treated for homogeneous as well as

non-homogeneous systems, and the results are compared with the exact solutions
along with error estimates for each sample point. It is found that the HF method is a
strong as well as convenient tool for such analysis. This fact is reflected through
various tables and curves.
Since the analysis of output equations has already been treated in Chap. 8, we avoid repeating it here. However, it has already been shown that the HF domain solutions are reliably close.

References

1. Ogata, K.: Modern Control Engineering, 5th edn. Prentice Hall of India (2011)
2. Fogiel, M. (ed.): The Automatic Control Systems/ Robotics Problem Solver. Research &
Education Association, New Jersey (2000)
3. Roychoudhury, S., Deb, A., Sarkar, G.: Analysis and synthesis of time-varying systems via
orthogonal hybrid functions (HF) in states space environment. Int. J. Dyn. Control. 3(4), 389–
402 (2015)
4. Rao, G.P.: Piecewise Constant Orthogonal Functions and Their Application in Systems and
Control, LNCIS, vol. 55. Springer, Berlin (1983)
5. Jiang, J.H., Schaufelberger, W.: Block Pulse Functions and Their Application in Control
System, LNCIS, vol. 179. Springer, Berlin (1992)
Chapter 10
Multi-delay System Analysis: State Space Approach

Abstract This chapter is devoted to multi-delay system analysis using state space
approach in hybrid function domain. First, the theory of HF domain approximation
of functions with time delay is presented. This is followed by integration of func-
tions with time delay. Then analysis of both homogeneous and non-homogeneous
systems are given along with numerical examples. States of the systems are solved.
Illustration of the theory has been provided with the support of eight examples,
nineteen figures and four tables.

In this chapter, we deal with multi-delay control systems. In practice we can find
time delays in different electrical systems, industrial processes, mechanical systems,
population growth, economic growth and chemical processes. We intend to analyse
two types of control systems in hybrid function platform, namely non-homogeneous
system and homogeneous system.
Analysing a time delay control system means that, knowing the system parameters, the nature of the input signal or forcing function and the presence of delay in the system, we determine the behaviour of each system state of the time delay system over a common time frame, in order to assess the performance of the system.
In this chapter, the hybrid function set is employed for the analysis of time delay
non-homogeneous as well as homogeneous systems described in state space plat-
form. Here we have converted the multi-delay differential equation to an algebraic
form using the orthogonal hybrid function (HF) set.
First we take up the problem of analysis of a delayed non-homogeneous system
in HF domain, because after putting the specific condition of zero forcing function,
we can arrive at the result of analysis of a delayed homogeneous system.

10.1 HF Domain Approximation of Function with Time Delay

Let us consider a time function f(t) which involves a single delay or multiple delays along the time scale. In the HF approximation of the delayed time function, the following situations may arise:

• The function f(t) consists of multiple delays with one delay equal to τ (say) and the rest integral multiples of it. Then we can select the sampling interval h judiciously as equal to the delay τ, or τ may be an integral multiple of h when h is smaller than τ. Then we can approximate the delayed time function in HF domain comfortably.
• The function f(t) consists of multiple delays with one delay equal to τ (say) while the other delays are not integral multiples of τ. This situation may be handled in a different way. In this case the sampling interval h has to be chosen in such a way that all the delays are integral multiples of h. So we find the highest common factor (HCF) of the delays to determine a suitable value of the sampling interval h. That is, if h_n = highest common factor (τ_1, τ_2, …, τ_n), the sampling interval h = h_n/k, where k is an integer greater than zero (see the sketch after this list).
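A small MATLAB helper sketch of this HCF-based choice of h (an assumption for illustration, not code from the book): the delays are converted to rational numbers and their HCF is obtained from integer gcd/lcm operations.

% Choosing h from the HCF of two delays (sketch for tau_1 = 0.4 s, tau_2 = 0.3 s)
delays = [0.4 0.3];                                  % example delays in seconds
[num, den] = rat(delays, 1e-9);                      % rational approximations
D   = lcm(den(1), den(2));                           % common denominator
hcf = gcd(num(1)*D/den(1), num(2)*D/den(2)) / D;     % HCF of the delays = 0.1 s here
k = 2;  h = hcf / k;                                 % sampling interval h = hcf/k, k > 0 integer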
Figure 10.1a shows equidistant samples of a time function f(t) while Fig. 10.1b illustrates its delayed version f(t − τ) with τ = 2h.

Fig. 10.1 Equidistant samples of (a) a function f(t) and (b) its delayed version f(t − τ) with τ = 2h

Let us now recall the delay matrix Q, defined as

Q_{(m)} ≜ [[\,0 \;\; 1 \;\; 0 \;\; 0 \;\; \cdots \;\; 0 \;\; 0\,]]_{(m \times m)}

Q_{(m)} has the property

Q^k_{(m)} = [[\,\underbrace{0 \;\; 0 \;\; \cdots \;\; 0}_{k\ \text{terms}} \;\; 1 \;\; \underbrace{0 \;\; \cdots \;\; 0 \;\; 0}_{(m-k-1)\ \text{terms}}\,]]_{(m \times m)}

and Q^m_{(m)} = 0_{(m)}.

If we post-multiply any row matrix A by a Q matrix of proper dimension, the A matrix will be shifted towards the right by one element, and a zero will be introduced as the first element of AQ.
However, if we post-multiply AQ by Q again, the result will be shifted further towards the right by another element, and the vacant place at the start of AQ² will again be taken up by another zero. If we continue such post-multiplication with Q, at each step of multiplication the result will be shifted towards the right by one element.
That means, if A = [\,a_{11} \;\; a_{12} \;\; \cdots \;\; a_{1m}\,]_{(1 \times m)}, then

A\,Q^k = [\,\underbrace{0 \;\; 0 \;\; \cdots \;\; 0 \;\; 0}_{k\ \text{terms}} \;\; a_{11} \;\; a_{12} \;\; \cdots \;\; a_{1(m-k)}\,]_{(1 \times m)}     (10.1)

We find that the transpose of Q, namely Q^T, behaves in a manner contrary to that of Q. That is, if we post-multiply a row matrix A by the Q^T matrix of proper dimension, the result will be the A matrix shifted towards the left by one element, and a zero will be introduced as the last element of AQ^T. Thus, we can call Q^T the advance matrix.
Like repeated multiplication by Q, if we proceed similarly with Q^T, at each stage of multiplication the resulting row matrix will be shifted towards the left by one element.
That means

A\,(Q^T)^k = [\,\underbrace{a_{1(k+1)} \;\; a_{1(k+2)} \;\; \cdots \;\; a_{1(m-1)} \;\; a_{1m}}_{(m-k)\ \text{terms}} \;\; \underbrace{0 \;\; \cdots \;\; 0 \;\; 0}_{k\ \text{terms}}\,]_{(1 \times m)}     (10.2)
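A short MATLAB sketch of these shift properties (assuming, consistently with the above, that Q carries unit entries on its first super-diagonal):

% Delay and advance matrices acting on a row matrix
m  = 6;  k = 2;
Q  = diag(ones(1, m - 1), 1);              % delay matrix Q_(m)
Qt = Q';                                   % advance matrix
A  = [1 2 3 4 5 6];                        % an arbitrary row matrix
A * Q^k                                    % right shift by k: [0 0 1 2 3 4], cf. Eq. (10.1)
A * Qt^k                                   % left shift by k:  [3 4 5 6 0 0], cf. Eq. (10.2)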

In approximating time functions with delays, Q as well as Q^T have vital roles to play.
For convenience in representation, let us define Q^T ≜ Q_t.
Referring to Fig. 10.1a, we can represent the function f(t) in HF domain with a sampling period of h in the interval t ∈ [0, mh). That is

f(t) \approx \bar{f}(t) = [\,c_0 \;\; c_1 \;\; c_2 \;\; \cdots \;\; c_{m-1}\,] S_{(m)}(t)
+ [\,(c_1-c_0) \;\; (c_2-c_1) \;\; (c_3-c_2) \;\; \cdots \;\; (c_m-c_{m-1})\,] T_{(m)}(t)
\;\triangleq\; C_S^T\, S_{(m)} + C_T^T\, T_{(m)}

where \bar{f}(t) is the piecewise linear approximation of f(t) in HF domain.


Consider that we know m number of samples of the function f(t) for time t < 0. That is, we want to work with a row matrix of dimension (1 × m). Thus, for t < 0, the function f(t) may be represented via sample-and-hold functions as

f(t)|_{t<0} \approx [\,c'_m \;\; c'_{m-1} \;\; \cdots \;\; c'_3 \;\; c'_2 \;\; c'_1\,] S_{(m)}(t)     (10.3)

Now, consider the samples of the function for t ≥ 0. Again, if we consider m number of samples and represent the function f(t) via sample-and-hold functions, we have

f(t)|_{t \ge 0} \approx [\,c_0 \;\; c_1 \;\; c_2 \;\; \cdots \;\; c_{m-1}\,] S_{(m)}(t)     (10.4)

If we work with m samples in total within the time interval t ∈ [−kh, (m − k)h], then we can express this particular section of the function in SHF domain as

f(t)|_{-kh \le t \le (m-k)h} \approx [\,c'_k \;\; c'_{k-1} \;\; c'_{k-2} \;\; \cdots \;\; c'_1 \;\; c_0 \;\; c_1 \;\; \cdots \;\; c_{m-k-1}\,] S_{(m)}(t)     (10.5)

Using Eq. (10.5), we can handle a function delayed by kh (k being an integer). Also, by employing the Q and Q_t matrices, we can arrive at (10.5) from the generalized Eqs. (10.3) and (10.4). That is

f(t)|_{-kh \le t \le (m-k)h} \approx [\,c_0 \;\; c_1 \;\; \cdots \;\; c_{m-1}\,]\, Q^k\, S_{(m)}(t) + [\,c'_m \;\; c'_{m-1} \;\; \cdots \;\; c'_2 \;\; c'_1\,]\, Q_t^k\, S_{(m)}(t)
= [\,c'_k \;\; c'_{k-1} \;\; \cdots \;\; c'_1 \;\; c_0 \;\; c_1 \;\; \cdots \;\; c_{m-k-1}\,] S_{(m)}(t)

Now, the representation of the delayed function f(t − kh) in HF domain over an interval t ∈ [−kh, (m − k)h] is given by

f(t - kh) \approx \bar{f}(t - kh) = [\,c'_k \;\; c'_{k-1} \;\; c'_{k-2} \;\; \cdots \;\; c'_1 \;\; c_0 \;\; c_1 \;\; \cdots \;\; c_{m-k-1}\,] S_{(m)}(t)
+ [\,(c'_{k-1}-c'_k) \;\; (c'_{k-2}-c'_{k-1}) \;\; \cdots \;\; (c_0-c'_1) \;\; (c_1-c_0) \;\; \cdots \;\; (c_{m-k}-c_{m-k-1})\,] T_{(m)}(t)
= [\,c_0 \;\; c_1 \;\; c_2 \;\; \cdots \;\; c_{m-1}\,]\, Q^k\, S_{(m)}(t) + [\,c'_m \;\; c'_{m-1} \;\; \cdots \;\; c'_2 \;\; c'_1\,]\, Q_t^k\, S_{(m)}(t)
+ [\,(c_1-c_0) \;\; (c_2-c_1) \;\; (c_3-c_2) \;\; \cdots \;\; (c_m-c_{m-1})\,]\, Q^k\, T_{(m)}(t) + [\,(c'_{m-1}-c'_m) \;\; (c'_{m-2}-c'_{m-1}) \;\; \cdots \;\; (c'_1-c'_2) \;\; (c_0-c'_1)\,]\, Q_t^k\, T_{(m)}(t)
\;\triangleq\; \big(C_S^T Q^k + C_{Ss}^T Q_t^k\big) S_{(m)} + \big(C_T^T Q^k + C_{Ts}^T Q_t^k\big) T_{(m)}     (10.6)

where use is made of Eq. (10.5); Q^k is termed the delay matrix of degree k and Q_t^k is termed the advance matrix of degree k.
Here C_{Ss}^T is the sample-and-hold coefficient vector for the initial values of the state, comprising the coefficients belonging to the time scale −kh ≤ t < 0, whereas C_{Ts}^T is the triangular function coefficient vector for the initial values of the state, comprising the coefficients belonging to the time scale −kh ≤ t < 0.
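Before the worked examples, a minimal MATLAB sketch (under the above notation, and an assumption rather than the book's code) of assembling the coefficient vectors of a delayed function from its history samples and its ordinary samples, in the spirit of Eqs. (10.5)–(10.6), using the ramp of Example 10.1:

% HF coefficients of f1(t - kh) = t - 0.4 from history and ordinary samples
m = 10; T = 1; h = T/m; k = 4;              % delay tau = k*h = 0.4 s
t     = 0:h:T;
c     = t;                                  % samples c_0 ... c_m of f1(t) = t for t >= 0
chist = -(m:-1:1)*h;                        % history samples f1(-mh) ... f1(-h), i.e. c'_m ... c'_1
CSd   = [chist(end-k+1:end), c(1:m-k)];     % [c'_k ... c'_1  c_0 ... c_(m-k-1)], cf. Eq. (10.5)
CTd   = diff([CSd, c(m-k+1)]);              % TF coefficients of f1(t - kh); both match Eq. (10.7)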

10.1.1 Numerical Examples

Example 10.1 Consider a function f_1(t) = t having a delay at t = 0.4 s. Using Eq. (10.6), the approximation of this function is shown graphically in Fig. 10.2, for m = 10 and T = 1 s.
Here the delay time τ = kh = 0.4 s and the time period T = 1 s; that means k = 4. The approximation of the function f_1(t − kh) = (t − 0.4) can then be represented by

Fig. 10.2 Graphical representation of the HF domain approximation of the function f1(t) of Example 10.1 along with the exact function, for m = 10 and T = 1 s

Fig. 10.3 Pictorial representation of the HF domain approximation of f2(t) = sin[π(t − 0.4)] of Example 10.2 along with the exact function, for m = 20 and T = 2 s

(t - 0.4) \approx [\,-0.4 \;\; -0.3 \;\; -0.2 \;\; -0.1 \;\; 0 \;\; 0.1 \;\; 0.2 \;\; 0.3 \;\; 0.4 \;\; 0.5\,]\, S_{(10)}(t)
+ [\,0.1 \;\; 0.1 \;\; 0.1 \;\; 0.1 \;\; 0.1 \;\; 0.1 \;\; 0.1 \;\; 0.1 \;\; 0.1 \;\; 0.1\,]\, T_{(10)}(t)     (10.7)

In Fig. 10.2, we see that the samples of the delayed time function, obtained via HF domain approximation, coincide with the exact samples of the function f_1(t − kh) because of its linear nature.
Example 10.2 Consider a function f_2(t) = sin(πt) over a time period of T = 2 s, having a delay of τ = 0.4 s. For m = 20 and T = 2 s, h = 0.1 s and k = τ/h = 4. Using Eq. (10.6), the approximation of this function is graphically shown in Fig. 10.3, as in Example 10.1.

10.2 Integration of Functions with Time Delay

Let us consider a time delayed square integrable function f(t − kh) which can be expanded in HF domain using Eq. (10.6). Now, if we consider all the initial values of the function for −kh ≤ t < 0 to be zero, then integrating Eq. (10.6) with respect to t, we have

\int f(t - kh)\,dt \approx C_S^T\, Q^k \int S_{(m)}\,dt + C_T^T\, Q^k \int T_{(m)}\,dt     (10.8)

Using Eqs. (10.8), (4.9) and (4.18), the integration of a time delayed function
with all zero initial values, can be expressed as

\int f(t - kh)\,dt \approx \Big[C_S^T + \tfrac{1}{2}C_T^T\Big] Q^k\, P1ss_{(m)}\, S_{(m)} + h\Big[C_S^T + \tfrac{1}{2}C_T^T\Big] Q^k\, T_{(m)}     (10.9)

But in reality, most of the practical systems have some non-zero initial values of
their states. In that case, the integration of a delayed time function can be repre-
sented by
\int f(t - kh)\,dt \approx \int \big(C_S^T Q^k + C_{Ss}^T Q_t^k\big) S_{(m)}\,dt + \int \big(C_T^T Q^k + C_{Ts}^T Q_t^k\big) T_{(m)}\,dt     (10.10)

Using Eqs. (10.10), (4.9) and (4.18), integration of a time delayed function with
initial values, may be expressed as
\int f(t - kh)\,dt \approx \Big[\big(C_S^T Q^k + C_{Ss}^T Q_t^k\big) + \tfrac{1}{2}\big(C_T^T Q^k + C_{Ts}^T Q_t^k\big)\Big] P1ss_{(m)}\, S_{(m)}
+ h\Big[\big(C_S^T Q^k + C_{Ss}^T Q_t^k\big) + \tfrac{1}{2}\big(C_T^T Q^k + C_{Ts}^T Q_t^k\big)\Big] T_{(m)}     (10.11)

Using Eq. (10.11), the integration of some basic time delayed functions is illustrated below.

10.2.1 Numerical Examples

Example 10.3 Consider the function f_1(t) = t having a delay at t = 0.4 s, from Example 10.1. Using Eq. (10.11), the integration of the function in HF domain is plotted with the exact integration in Fig. 10.4, for m = 10 and T = 1 s.
Here we note that the samples obtained via the HF domain integration approach coincide with the exact integration curve.
Example 10.4 Consider the function f_2(t) = sin(πt) having a delay of τ = 0.4 s. Using Eq. (10.11), the integration of the function in HF domain is plotted along with the exact curve in Fig. 10.5, for m = 20 and T = 2 s.
As in Example 10.2, here k = 4, the delay time τ = kh = 0.4 s, and the time period T = 2 s.

Fig. 10.4 Graphical representation of the HF domain integration of the function f1(t) = (t − 0.4) of Example 10.1 along with its exact integration, for m = 10 and T = 1 s

Fig. 10.5 Graphical representation of the HF domain integration of the function f2(t) of Example 10.2 along with its exact integration, for m = 10 and T = 1 s

10.3 Analysis of Non-homogeneous State Equations with Delay

Consider the non-homogeneous delayed state equation

\dot{x}(t) = A\,x(t) + B\,u(t) + \sum_{k=1}^{M} A_k\, x(t - kh) + \sum_{\lambda=1}^{N} B_\lambda\, u(t - \lambda h)     (10.12)

where A is the (n × n) system matrix given by


A ≜ \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & & & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix}

A_k is an (n × n) system matrix associated with the delayed state vector,
B is the (n × 1) input vector given by B = [\,b_1 \;\; b_2 \;\; \cdots \;\; b_n\,]^T,
B_\lambda is the (n × 1) input vector associated with the delayed input,
x(t) is the state vector given by x(t) = [\,x_1 \;\; x_2 \;\; \cdots \;\; x_n\,]^T with the initial conditions x(0) = [\,x_1(0) \;\; x_2(0) \;\; \cdots \;\; x_n(0)\,]^T, where [⋯]^T denotes transpose,
x(t − kh) is the state vector with a delay of t = kh, k being an integer,
u is the forcing function,
u(t − λh) is the forcing function with a delay of t = λh, λ being an integer,
and M and N are the upper limits of k and λ respectively.
For simplicity, let us consider the following multi-delay system

\dot{x}(t) = A\,x(t) + B\,u(t) + A_1\,x(t - k_1 h) + A_2\,x(t - k_2 h) + B_1\,u(t - \beta_1 h) + B_2\,u(t - \beta_2 h)     (10.13)

Integrating Eq. (10.13), we get

x(t) - x(0) = A\!\int x(t)\,dt + B\!\int u(t)\,dt + A_1\!\int x(t - k_1 h)\,dt + A_2\!\int x(t - k_2 h)\,dt + B_1\!\int u(t - \beta_1 h)\,dt + B_2\!\int u(t - \beta_2 h)\,dt     (10.14)

Expanding x(t), x(0), x(t − k_1 h), x(t − k_2 h), u(t), u(t − β_1 h) and u(t − β_2 h) via an m-set hybrid function, we get

x(t) ≜ C_Sx S_{(m)} + C_Tx T_{(m)},   x(0) ≜ C_Sx0 S_{(m)} + C_Tx0 T_{(m)}
x(t - k_1 h) ≜ (C_Sx Q^{k_1} + C_Sxs1 Q_t^{k_1}) S_{(m)} + (C_Tx Q^{k_1} + C_Txs1 Q_t^{k_1}) T_{(m)}
x(t - k_2 h) ≜ (C_Sx Q^{k_2} + C_Sxs2 Q_t^{k_2}) S_{(m)} + (C_Tx Q^{k_2} + C_Txs2 Q_t^{k_2}) T_{(m)}
u(t) ≜ C_Su^T S_{(m)} + C_Tu^T T_{(m)}
u(t - β_1 h) ≜ (C_Su^T Q^{β_1} + C_Sus1^T Q_t^{β_1}) S_{(m)} + (C_Tu^T Q^{β_1} + C_Tus1^T Q_t^{β_1}) T_{(m)}
u(t - β_2 h) ≜ (C_Su^T Q^{β_2} + C_Sus2^T Q_t^{β_2}) S_{(m)} + (C_Tu^T Q^{β_2} + C_Tus2^T Q_t^{β_2}) T_{(m)}

where C_Sx, C_Tx, C_Sx0 and C_Tx0 are the (n × m) matrices obtained by stacking the row coefficient vectors of the individual states, i.e.

C_Sx = [\,C_{Sx1} \;\; C_{Sx2} \;\; \cdots \;\; C_{Sxn}\,]^T,   C_Tx = [\,C_{Tx1} \;\; \cdots \;\; C_{Txn}\,]^T,   C_Sx0 = [\,C_{Sx01} \;\; \cdots \;\; C_{Sx0n}\,]^T,   C_Tx0 = [\,C_{Tx01} \;\; \cdots \;\; C_{Tx0n}\,]^T

with, for the i-th state,

C_{Sxi} = [\,c_{Sxi1} \;\; c_{Sxi2} \;\; c_{Sxi3} \;\; \cdots \;\; c_{Sxim}\,]^T,   C_{Txi} = [\,c_{Txi1} \;\; \cdots \;\; c_{Txim}\,]^T,
C_{Sx0i} = [\,c_{Sx0i1} \;\; \cdots \;\; c_{Sx0im}\,]^T,   C_{Tx0i} = [\,c_{Tx0i1} \;\; \cdots \;\; c_{Tx0im}\,]^T,

and, for the input,

C_{Su} = [\,c_{su,1} \;\; c_{su,2} \;\; c_{su,3} \;\; \cdots \;\; c_{su,m}\,]^T   and   C_{Tu} = [\,c_{Tu,1} \;\; c_{Tu,2} \;\; c_{Tu,3} \;\; \cdots \;\; c_{Tu,m}\,]^T.

Therefore LHS of Eq. (10.14), can be expressed as

xðtÞ  xð0Þ  ðCSx  CSx0 ÞSðmÞ þ ðCTx  CTx0 ÞTðmÞ ð10:15Þ


10.3 Analysis of Non-homogeneous State Equations … 251

The initial values of all the states being constants, they always essentially rep-
resent step functions. Hence, HF domain expansions of the initial values will
always yield null coefficient matrices for the T vectors.
Therefore in HF domain, integration of the above terms can be represented by
Z h i h i
1 1
xðtÞdt  CSx þ CTx P1ss SðmÞ þ h CSx þ CTx TðmÞ
2 2
Z h i
 1 
xðt  k1 hÞdt 
CSx Q k1
þ CSxs1 Qk1
t þ CTx Qk1 þ CTxs1 Qk1
P1ss SðmÞ
t
2
h  1 i
þ h CSx Qk1 þ CSxs1 Qk1 t þ CTx Qk1 þ CTxs1 Qk1 t TðmÞ
2
Z h i
 1 
xðt  k2 hÞdt  CSx Qk2 þ CSxs2 Qk2t þ CTx Qk2 þ CTxs2 Qk2 t P1ss SðmÞ
2
h  1 i
þ h CSx Qk2 þ CSxs2 Qk2 t þ CTx Qk2 þ CTxs2 Qk2 t TðmÞ
2
Z h i h i
1 1
uðtÞdt  CTSu þ CTTu P1ss SðmÞ þ h CTSu þ CTTu TðmÞ
2 2
Z h i
uðt  b1 hÞdt  CSu Q þ CSus1 Qt þ CTu Q þ CTTus1 Qb1
b1
T b1 T 1 T b1
t P1ss SðmÞ
2
h i
þ h CTSu Qb1 þ CTSus1 Qb1 þ CTTu Qb1 þ CTTus1 Qb1
1
t t TðmÞ
2
Z h i
uðt  b2 hÞdt  CTSu Qb2 þ CTSus2 Qb2 þ CTTu Qb2 þ CTTus2 Qb2
1
t t P1ss SðmÞ
2
h i
þ h CTSu Qb2 þ CTSus2 Qb2 þ CTTu Qb2 þ CTTus2 Qb2
1
t t TðmÞ
2

Substituting the above terms in RHS of Eq. (10.14), we get


Z Z Z Z
A xðtÞdt þ B uðtÞdt þ A1 xðt  k1 hÞdt þ A2 xðt  k2 hÞdt
Z Z
þ B1 uðt  b1 hÞ dt þ B2 uðt  b2 hÞ dt
h i
1 1
 A CSx þ CTx P1ss SðmÞ þ h CSx þ CTx TðmÞ
2 2
h i
1 1
þ B CTSu þ CTTu P1ss SðmÞ þ h CTSu þ CTTu TðmÞ
2 2
h  1 i
þ A1 CSx Qk1 þ CSxs1 Qk1 t þ C Tx Q k1
þ CTxs1 Qk1
t P1ss SðmÞ
2
h    i
1
þ A1 h CSx Qk1 þ CSxs1 Qk1 þ CTx Qk1 þ CTxs1 Qk1 TðmÞ
h
t 2 t
i ð10:16Þ
 1  
þ A2 CSx Qk2 þ CSxs2 Qk2 t þ CTx Qk2 þ CTxs2 Qk2 t P1ss SðmÞ
2
h    i
1
þ A2 h CSx Qk2 þ CSxs2 Qk2 t þ CTx Qk2 þ CTxs2 Qk2 t TðmÞ
2
h i
þ B1 CTSu Qb1 þ CTSus1 Qb1 þ CTTu Qb1 þ CTTus1 Qb1
1
t t P1ss SðmÞ
2
h i
b1 b1 1 b1 b1
þ B1 h CSu Q þ CSus1 Qt þ CTu Q þ CTus1 Qt
T T T T
TðmÞ
2
h i
þ B2 CTSu Qb2 þ CTSus2 Qb2 þ CTTu Qb2 þ CTTus2 Qb2
1
t t P1ss SðmÞ
2
h i
þ B2 h CTSu Qb2 þ CTSus2 Qb2 þ CTTu Qb2 þ CTTus2 Qb2
1
t t TðmÞ
2
252 10 Multi-delay System Analysis …

Now equating the SHF part of Eqs. (10.15) and (10.16), we get

1 1
CSx  CSx0 ¼ A CSx þ CTx P1ss þ B CTSu þ CTTu P1ss
2 2
h  1 i
þ A1 CSx Q þ CSxs1 Qt þ CTx Qk1 þ CTxs1 Qk1
k1 k1
t P1ss
2
h  1 i
þ A2 CSx Qk2 þ CSxs2 Qk2t þ CTx Qk2 þ CTxs2 Qk2t P1ss
2
h i
þ B1 CTSu Qb1 þ CTSus1 Qb1 b1
1 b1
t þ C T
Tu Q þ CT
Q
Tus1 t P1ss
2
h i
þ B2 CTSu Qb2 þ CTSus2 Qb2 þ CTTu Qb2 þ CTTus2 Qb2
1
t t P1ss
2
ð10:17Þ

Similarly, equating the TF part of Eqs. (10.15) and (10.16), we have

1 1
CTx ¼ Ah CSx þ CTx þ Bh CTSu þ CTTu
2 2
h  1 i
þ A1 h CSx Q þ CSxs1 Qt þ CTx Qk1 þ CTxs1 Qk1
k1 k1
t
2
h  1 i
þ A2 h CSx Q þ CSxs2 Qt þ CTx Q þ CTxs2 Qk2
k2 k2 k2
t ð10:18Þ
2
h i
þ B1 h CSu Q þ CSus1 Qt þ CTTu Qb1 þ CTTus1 Qb1
b1
T b1 T 1
2 t
h i
þ B2 h CSu Q þ CSus2 Qt þ CTTu Qb2 þ CTTus2 Qb2
b2
T b2 T 1
2 t

Therefore, from Eqs. (10.17) and (10.18), we can write


 
CTx P1ss ¼ h CSx  CSx0

Substituting this relation in Eq. (10.17), we have

1 h
CSx  CSx0 ¼ ACSx P1ss þ B CTSu þ CTTu P1ss þ AðCSx  CSx0 Þ
2 2
h 1
þ A1 CSx Qk1 P1ss þ A1 ðCSx  CSx0 ÞQk1 þ A1 CSxs1 þ CTxs1 Qk1
t P1ss
2 2
h 1
þ A2 CSx Q P1ss þ A2 ðCSx
k2
 CSx0 ÞQ þ A2 CSxs2 þ CTxs2
k2
Qk2
t P1ss
2 2
h i
CTSu Qb1 þ CTSus1 Qb1 b1
1 b1
þ B1 t þ C T
Tu Q þ C T
Q
Tus1 t P1ss
2
h i
CTSu Qb2 þ CTSus2 Qb2 þ CTTu Qb2 þ CTTus2 Qb2
1
þ B2 t t P1ss
2
ð10:19Þ
10.3 Analysis of Non-homogeneous State Equations … 253

h h
or, CSx  ACSx P1ss  ACSx  A1 CSx Qk1 P1ss  A1 CSx Qk1
2 2
h
 A2 CSx Q P1ss  A2 CSx Q
k2 k2
2
h h 1
¼ CSx0  ACSx0  A1 CSx0 Qk1 þ A1 CSxs1 þ CTxs1 Qk1 t P1ss
2 2 2
h 1 1 T
 A2 CSx0 Qk2 þ A2 CSxs2 þ CTxs2 Qk2 t P1ss þ B CSu þ 2CTu
T
P1ss
2 2
h i
þ B1 CTSu Qb1 þ CTSus1 Qb1 b1
1 b1
t þ C T
Tu Q þ C T
Q
Tus1 t P1ss
2
h i
þ B2 CTSu Qb2 þ CTSus2 Qb2 þ CTTu Qb2 þ CTTus2 Qb2
1
t t P1ss
2
ð10:20Þ

Replacing P1ss in Eq. (10.20), following (8.12), we get

CSx  ACSx P  A1 CSx Qk1 P  A2 CSx Qk2 P


h i
h h h 1
¼ I  A  A1 Qk1  A2 Qk2 CSx0 þ A1 CSxs1 þ CTxs1 Qk1
t P1ss
2 22 2
1 1
þ A2 CSxs2 þ CTxs2 t P1ss þ B
Qk2 CTSu þ CTTu P1ss ð10:21Þ
2 2
h i
þ CTSus1 Qb1 þ CTTus1 Qb1
1
þ B1 CTSu Qb1 t þ CTTu Qb1 t P1ss
2
h i
CTSu Qb2 þ CTSus2 Qb2 CTTu Qb2 þ CTTus2 Qb2
1
þ B2 t þ t P1ss
2

Now, the (i + 1)th column of ACSx P is

ði þ 1Þ  th column
2#3
1
2 32 3 6 ... 7
a11 a12 ... a1n cSx11 cSx12 ... cSx1m 6 7
6 7
6 a21 ... a2n 76 ... cSx2m 7 617
6 a22 76 cSx21 cSx22 7 617
½ACSx Pi þ 1 ¼ 6 .. .. .. 76 .. .. .. 7h 627 ði þ 1Þth element
4 . . . 54 . . . 5 6 7
607 ði þ 2Þth element
an1 an2 ... ann cSxn1 cSxn2 ... cSxnm 6.7
4 .. 5
0

2 32 3
a11 a12 ... a1n cSx11 þ cSx12 þ    þ cSx1i þ 12cSx1ði þ 1Þ
6 a21 a22 ... a2n 7 6c þ cSx22 þ    þ cSx2i þ 12cSx2ði þ 1Þ 7
6 76 Sx21 7
¼ h6 .. .. .. 76 .. 7
4 . . . 54 . 5
an1 an2 ... ann cSxn1 þ cSxn2 þ    þ cSxni þ 12cSxn ði þ 1Þ
254 10 Multi-delay System Analysis …

2 3
1 1
a11 cSx11 þ cSx12 þ    þ cSx1i þ cSx1ði þ 1Þ þ a12 cSx21 þ cSx22 þ    þ cSx2i þ cSx2ði þ 1Þ
6 2 2 7
6 7
6 þ  þ a1n cSxn1 þ cSxn2 þ   
1
þ cSxni þ cSxnði þ 1Þ 7
6 7
6 2 7
6a c 1 1 7
6 21 Sx11 þ cSx12 þ    þ cSx1i þ 2cSx1ði þ 1Þ þ a22 cSx21 þ cSx22 þ    þ cSx2i þ cSx2ði þ 1Þ 7
6 2 7
6 7
¼ h6 þ  þ a2n cSxn1 þ cSxn2 þ   
1
þ cSxni þ cSxnði þ 1Þ 7
6 2 7
6 .. 7
6 7
6 . 7
6 7
6 an1 cSx11 þ cSx12 þ    þ cSx1i þ 1cSx1ði þ 1Þ 1
þ an2 cSx21 þ cSx22 þ    þ cSx2i þ cSx2ði þ 1Þ 7
6 2 2 7
4 5
1
þ  þ ann cSxn1 þ cSxn2 þ    þ cSxni þ cSxnði þ 1Þ
2

ð10:22Þ

Similarly, the (i + 1)th column of A1 CSx Qk1 P is


 
A1 CSx Qk1 P i þ 1
ði þ 1Þ  thcolumn
2 3#
1
2 3 6 .. 7
6.7
að1Þ11 að1Þ12 ... að1Þ1n 2 cSx11 cSx12 ... cSx1m
3
6 7
6 7 6 7
6 að1Þ21 að1Þ22 ... að1Þ2n 76 ... cSx2m 7 617
6 76 cSx21
6 .
cSx22 7 6 7 ði  k1 þ 1Þ
¼ h6 .
6 .. .. .. 7 6 . .. .. 77
6 12 7
6 7
4 . . 7 54 . . . 5 6 7
607 th element
... ... 6.7
að1Þn1 að1Þn2 að1Þnn cSxn1 cSxn2 cSxnm 6.7
4.5
0

2 3 2 3
að1Þ11 að1Þ12 ... að1Þ1n cSx11 þ cSx12 þ    þ cSx1ðik1Þ þ 12cSx1ðik1 þ 1Þ
6 að1Þ21 að1Þ22 ... að1Þ2n 7 6 cSx21 þ cSx22 þ    þ cSx2ðik1Þ þ 12cSx2ðik1 þ 1Þ 7
6 7 6 7
¼ h6 . .. .. 7 6 .. 7
4 .. . . 5 4 . 5
að1Þn1 að1Þn2 ... að1Þnn cSxn1 þ cSxn2 þ    þ cSxnðik1Þ þ 12cSxnðik1 þ 1Þ
2   3
að1Þ11 cSx11 þ cSx12
 þ    þ cSx1ðik1Þ þ 2cSx1ðik1 þ 1Þ
1

6 þ að1Þ12 cSx21 þ cSx22 þ    þ cSx2ðik1Þ þ 2cSx2ðik1 þ 1Þ þ   þ 7
1
6 7
6 þ að1Þ1n cSxn1 þ cSxn2 þ    þ cSxnðik1Þþ 12cSxnðik1 þ 1Þ 7
6  7
6 að1Þ21 cSx11 þ cSx12 þ    þ cSx1ðik1Þ þ 2cSx1ðik1 þ 1Þ
1 7
6   7
6 þ að1Þ22 cSx21 þ cSx22 þ    þ cSx2ðik1Þ þ 2cSx2ðik1 þ 1Þ þ   þ 7
1
6 7
¼ h6 þ að1Þ2n cSxn1 þ cSxn2 þ    þ cSxnðik1Þ þ 2cSxnðik1 þ 1Þ
1 7 for ði þ 1Þ [ k1
6 7
6. 7
6 .. 7
6   7
6a 7
6 ð1Þn1 cSx11 þ cSx12
 þ    þ cSx1ðik1Þ þ 2cSx1ðik1 þ 1Þ 7
1

4 
þ að1Þn2 cSx21 þ cSx22 þ    þ cSx2ðik1Þ þ 2cSx2ðik1 þ 1Þ þ   þ 5
1

þ að1Þnn cSxn1 þ cSxn2 þ    þ cSxnðik1Þ þ 12cSxnðik1 þ 1Þ


ð10:23Þ
10.3 Analysis of Non-homogeneous State Equations … 255

Similarly, the (i + 1)th column of A2 CSx Qk2 P is given by


 
A2 CSx Qk2 P i þ 1
2 32 3
að2Þ11 að2Þ12 . . . að2Þ1n cSx11 þ cSx12 þ    þ cSx1ðik2Þ þ 12cSx1ðik2 þ 1Þ
6 að2Þ21 . . . að2Þ2n 7 6 7
að2Þ22 76 cSx21 þ cSx22 þ    þ cSx2ðik2Þ þ 2cSx2ðik2 þ 1Þ 7
1
6
¼ h6 . .. .. 76 .. 7
4 .. . . 54 . 5
að2Þn1 að2Þn2 . . . að2Þnn cSxn1 þ cSxn2 þ    þ cSxnðik2Þ þ 2cSxnðik2 þ 1Þ
1

2   3
að2Þ11 cSx11 þ cSx12 þ    þ cSx1ðik2Þ þ 2cSx1ðik2 þ 1Þ
1

6 þ að2Þ12 cSx21 þ cSx22 þ    þ cSx2ðik2Þ þ 12cSx2ðik2 þ 1Þ þ   þ 7
6 7
6 þ að2Þ1n cSxn1 þ cSxn2 þ    þ cSxnðik2Þþ 12cSxnðik2 þ 1Þ 7
6  7
6 að2Þ21 cSx11 þ cSx12 þ    þ cSx1ðik2Þ þ 12cSx1ðik2 þ 1Þ 7
6   7
6 þ a c þ c þ    þ c þ 1
c þ    þ 7
6 ð 2Þ22 Sx21  Sx22 Sx2 ð ik2 Þ 2 Sx2 ðik2 þ 1 Þ  7
¼ h6 þ a c þ c þ    þ c þ 1
c 7 for ði þ 1Þ [ k2
6 ð2 Þ2n Sxn1 Sxn2 Sxn ðik2 Þ 2 Sxnð ik2 þ 1Þ 7
6. 7
6 .. 7
6   7
6a 7
6 ð2Þn1 cSx11 þ cSx12  þ    þ cSx1ðik2Þ þ 2cSx1ðik2 þ 1Þ 7
1

4 
þ að2Þn2 cSx21 þ cSx22 þ    þ cSx2ðik2Þ þ 2cSx2ðik2 þ 1Þ þ   þ 5
1

þ að2Þnn cSxn1 þ cSxn2 þ    þ cSxnðik2Þ þ 12cSxnðik2 þ 1Þ


ð10:24Þ

Now, the ith column of ACSx P is


2 3
1 1
a11 cSx11 þ cSx12 þ    þ cSx1ði1Þ þ cSx1i þ a12 cSx21 þ cSx22 þ    þ cSx2ði1Þ þ cSx2i
6 2 2 7
6 7
6 þ  þ a1n cSxn1 þ cSxn2 þ   
1
þ cSxnði1Þ þ cSxni 7
6 7
6 2 7
6a c 1 1 7
6 21 Sx11 þ cSx12 þ    þ cSx1ði1Þ þ 2cSx1i þ a22 cSx21 þ cSx22 þ    þ cSx2ði1Þ þ cSx2i 7
6 2 7
6 7
¼ h6 þ  þ a2n cSxn1 þ cSxn2 þ   
1
þ cSxnði1Þ þ cSxni 7
6 2 7
6 .. 7
6 7
6 . 7
6 7
6 an1 cSx11 þ cSx12 þ    þ cSx1ði1Þ þ 1cSx1i 1
þ an2 cSx21 þ cSx22 þ    þ cSx2ði1Þ þ cSx2i 7
6 2 2 7
4 5
1
þ  þ ann cSxn1 þ cSxn2 þ    þ cSxnði1Þ þ cSxni
2
ð10:25Þ
256 10 Multi-delay System Analysis …

The ith column of A1 CSx Qk1 P is


2   3
að1Þ11 cSx11 þ cSx12
 þ    þ cSx1ðik11Þ þ 2cSx1ðik1Þ
1

6 þ að1Þ12 cSx21 þ cSx22 þ    þ cSx2ðik11Þ þ 12cSx2ðik1Þ þ   þ 7
6 7
6 þ að1Þ1n cSxn1 þ cSxn2 þ    þ cSxnðik11 7
 Þ þ 2cSxnðik1Þ
1
6  7
6 að1Þ21 cSx11 þ cSx12 þ    þ cSx1ðik11Þ þ 12cSx1ðik1Þ 7
6   7
6 þ að1Þ22 cSx21 þ cSx22 þ    þ cSx2ðik11Þ þ 2cSx2ðik1Þ þ   þ 7
1
6 7
¼ h6 þ að1Þ2n cSxn1 þ cSxn2 þ    þ cSxnðik11Þ þ 12cSxnðik1Þ 7
6 7
6. 7
6 .. 7
6   7
6a þ þ    þ þ 7
6 ð1Þn1 Sx11
c c c c 7
1

4 
Sx12 Sx1ðik11Þ 2 Sx1ðik1Þ 
þ að1Þn2 cSx21 þ cSx22 þ    þ cSx2ðik11Þ þ 12cSx2ðik1Þ þ   þ 5
þ að1Þnn cSxn1 þ cSxn2 þ    þ cSxnðik11Þ þ 12cSxnðik1Þ
for i [ k1
ð10:26Þ

The ith column of A2 CSx Qk2 P is


2   3
að2Þ11 cSx11 þ cSx12
 þ    þ cSx1ðik21Þ þ 2cSx1ðik2Þ
1

6 þ að2Þ12 cSx21 þ cSx22 þ    þ cSx2ðik21Þ þ 12cSx2ðik2Þ þ   þ 7
6 7
6 þ að2Þ1n cSxn1 þ cSxn2 þ    þ cSxnðik21 7
 Þ þ 2cSxnðik2Þ
1
6  7
6 að2Þ21 cSx11 þ cSx12 þ    þ cSx1ðik21Þ þ 12cSx1ðik2Þ 7
6   7
6 þ a c
ð2Þ22 Sx21 þ c þ    þ c þ 1
c þ    þ 7
6 Sx22 Sx2ðik21Þ 2 Sx2ðik2Þ  7
¼ h6 þ a c þ c þ    þ c þ 1
c 7
6 ð2Þ2n Sxn1 Sxn2 Sxnðik21Þ 2 Sxnðik2Þ 7
6. 7
6 .. 7
6   7
6a 7
6 ð2Þn1 cSx11 þ cSx12 þ    þ cSx1ðik21Þ þ 12cSx1ðik2Þ  7
4 þ að2Þn2 cSx21 þ cSx22 þ    þ cSx2ðik21Þ þ 12cSx2ðik2Þ þ   þ 5
þ að2Þnn cSxn1 þ cSxn2 þ    þ cSxnðik21Þ þ 12cSxnðik2Þ
for i [ k2
ð10:27Þ

After subtracting the ith column of LHS of Eq. (10.21) from its (i + 1)th column,
using the relations from (10.22) to (10.27), we get
10.3 Analysis of Non-homogeneous State Equations … 257

   
½CSx i þ 1 ½ACSx Pi þ 1  A1 CSx Qk1 P i þ 1  A2 CSx Qk2 P i þ 1
    
 ½CSx i ½ACSx Pi  A1 CSx Qk1 P i  A2 CSx Qk2 P i
   
¼ ½CSx i þ 1 ½CSx i  ½ACSx Pi þ 1 ½ACSx Pi
       
 A1 CSx Qk1 P i þ 1  A1 CSx Qk1 P i  A2 CSx Qk2 P i þ 1  A2 CSx Qk2 P i
2 3 2 3 2 3 2 3 2 3
cSx1ði þ 1Þ cSx1i cSx1i cSx1ði þ 1Þ cSx1ðik1Þ
6 7 6 7 6 7
6 cSx2ði þ 1Þ 7 6 cSx2i 7 6
7 h 6 cSx2i 7 h 6
7 c 7 6c 7
6 7 6 6 7 6 7 6 Sx2ði þ 1Þ 7 h 6 Sx2ðik1Þ 7
¼6 .. 7  .  A .  A 6 .. 7  A 6 .. 7
7 6 . 7 2 6 7 1
6
4 . 5 4 . 5 4 .. 5 2 6 4 . 7 2 6
5 4 . 7
5
cSxnði þ 1Þ cSxni cSxni cSxnði þ 1Þ cSxnðik1Þ
2 3 2 3 2 3
cSx1ðik1 þ 1Þ cSx1ðik2Þ cSx1ðik2 þ 1Þ
6 7 6 7 6 7
6 cSx2ðik1 þ 1Þ 7 6c 7 6c 7
h 6 7 h 6 Sx2ðik2Þ 7 h 6 Sx2ðik2 þ 1Þ 7
 A1 6 .. 7  A 2 6 .. 7  A 2 6 .. 7
2 6 7 2 6 7 2 6 7
4 . 5 4 . 5 4 . 5
cSxnðik1 þ 1Þ cSxnðik2Þ cSxnðik2 þ 1Þ
ð10:28Þ

Similar to Eq. (10.28), we subtract the ith column of RHS of Eq. (10.21) from its
(i + 1)th column and express the result of subtraction in parts. First, we subtract the
first terms of the two columns in the RHS of Eq. (10.21) and get
h i
h h h
I  A  A1 Qk1  A2 Qk2 CSx0
2 2 2
h i iþ1
h h h
 I  A  A1 Q  A2 Qk2 CSx0 ¼ 0
k1
2 2 2 i

Subtractions of the remaining terms, we have


h i h i
1 1
A1 CSxs1 þ CTxs1 Qk1t P1ss  A1 CSxs1 þ CTxs1 Qk1
t P1ss
2 iþ1 2 i
1  k1 
¼ h A1 CSxs1 þ CTxs1 Qt i
2

Similarly,
h i h i
1 1
A2 CSxs2 þ CTxs2 Qk2t P1ss  A2 CSxs2 þ CTxs2 Qk2
t P1ss
2 iþ1 2 i
1  k2 
¼ h A2 CSxs2 þ CTxs2 Qt i
2
258 10 Multi-delay System Analysis …

Now
h i h i
1 1 1
B CTSu þ CTTu P1ss  B CTSu þ CTTu P1ss ¼ h B CTSu þ CTTu
2 iþ1 2 i 2 i

Similarly
h i
CTSu Qb1 þ CTSus1 Qb1 CTTu Qb1 þ CTTus1 Qb1
1
B1 t þ t P1ss
2 iþ1
h i
 B1 CSu Q þ CSus1 Qt þ CTTu Qb1 þ CTTus1 Qb1
b1
T b1 T 1
t P1ss
2
 b1  h i i

Q i þ h B1 CTSus1 þ CTTus1 Qb1


1 1
¼ h B1 CTSu þ CTTu t
2 2 i

and
h i
CTSu Qb2 þ CTSus2 Qb2 þ CTTu Qb2 þ CTTus2 Qb2
1
B2 t t P1ss
2
h i iþ1
b2 b2 1 b2 b2
 B2 CSu Q þ CSus2 Qt þ CTu Q þ CTus2 Qt
T T T T
P1ss
2
  h i i

¼ h B2 CTSu þ CTTu Qb2 i þ h B2 CTSus2 þ CTTus2 Qb2


1 1
2 t 2 i

Therefore, finally the subtraction of ith columns of Eq. (10.21) from respective
(i + 1)th columns can be expressed as
2 3 2 3 2 3 2 3 2 3
cSx1ði þ 1Þ cSx1i cSx1i cSx1ði þ 1Þ cSx1ðik1Þ
6 7 6 6 7 6 7
6 cSx2ði þ 1Þ 7 6 cSx2i 7 6 7
7 h 6 cSx2i 7 h 6 c 7 6c 7
6 7 6 7 6 7 6 Sx2ði þ 1Þ 7 h 6 Sx2ðik1Þ 7
6 .. 7  6 . 7  2A6 . 7  2A6 .. 7  2 A1 6 .. 7
6 7 . . 6 7 6 7
4 . 5 4 . 5 4 . 5 4 . 5 4 . 5
cSxnði þ 1Þ cSxni cSxni cSxnði þ 1Þ cSxnðik1Þ
2 3 2 3 2 3
cSx1ðik1 þ 1Þ cSx1ðik2Þ cSx1ðik2 þ 1Þ
6 7 6 7 6 7
6 cSx2ðik1 þ 1Þ 7 6c 7 6c 7
h 6 7 h 6 Sx2ðik2Þ 7 h 6 Sx2ðik2 þ 1Þ 7
 A1 6 . 7  A2 6 . 7  A2 6 . 7
2 6 .. 7 2 6 .. 7 2 6 .. 7
4 5 4 5 4 5
cSxnðik1 þ 1Þ cSxnðik2Þ cSxnðik2 þ 1Þ
   k2  ð10:29Þ
1 1 1
¼ hA1 CSxs1 þ CTxs1 Qk1 t i þ hA2 CSxs2 þ 2CTxs2 Qt i þ h B CTSu þ CTTu
2 2
  h i i

þ h B1 CTSu þ CTTu Qb1 i þ h B1 CTSus1 þ CTTus1 Qb1


1 1
2 2 t
h ii
1 T  b2  1 T b2
þ h B2 CSu þ CTu Q i þ h B2 CSus2 þ CTus2 Qt
T T
2 2 i
1 1 1
¼ hA1 CSxs1 þ CTxs1 þ hA2 CSxs2 þ CTxs2 þhB CTSu þ CTTu
2 i þ k1 2 i þ k2 2 i
1 1
þ h B1 CTSu þ CTTu þ h B1 CTSus1 þ CTTus1
2 ib1 2 i þ b1
1 1
þ B2 h CTSu þ CTTu þ B2 h CTSus2 þ CTTus2
2 ib2 2 i þ b2
10.3 Analysis of Non-homogeneous State Equations … 259

Following Eq. (10.12), the generalized form of Eq. (10.29) can be written as
2 3 2 3
cSx1ði þ 1Þ cSx1i
6 7
h i6 cSx2ði þ 1Þ 7 h i6 cSx2i 7
7 hX X
h 6
Nx Nx
h 6 7
¼ Iþ A 6 7þ h
I A 6 . 7 6 . 7 ½Ak ½CSx ik þ ½Ak ½CSx ik þ 1
2 6
4
.. 7
5
2
4 .. 5 2 k¼1 2
k¼1

cSxnði þ 1Þ cSxni
X
Nx h i
1 1
þh ½Ak  CSxs k þ CTxs k þ h B CTSu þ CTTu
2 iþk 2 i
k¼1
X
Nu h i X
Nu h i
1 1
þh ½Bk  CTSu þ CTTu þh ½Bk  CTSus k þ CTTus k
2 ik 2 iþk
k¼1 k¼1
ð10:30Þ

From Eq. (10.30), using matrix inversion, we have


2 3 8 2 3
cSx1ði þ 1Þ > cSx1i
6 7 >
>
6 cSx2ði þ 1Þ 7 h i 1 >
<h i6 c 7 X X
h 6 Sx2i 7
Nx Nx
6 7
Iþ A 6 7þh
h h
6 . 7 ¼ I  2A .
2 6 . 7 2
½ A k ½ C Sx  þ ½Ak ½CSx ik þ 1
6 .. 7 >
> 4 . 5
ik 2
4 5 >
> k¼1 k¼1
:
cSxnði þ 1Þ cSxni
XNx h i
1 1
þh ½Ak  CSxs k þ CTxs k þ h B CTSu þ CTTu
2 iþk 2 i
k¼1
)
X
Nu h i X
Nu h i
1 1
þh ½Bk  CTSu þ CTTu þh ½Bk  CTSus k þ CTTus k
2 ik 2 iþk
k¼1 k¼1
2 3 8 2 3
cSx1ði þ 1Þ > cSx1i
6 7 >
>
6 cSx2ði þ 1Þ 7 h i 1 >
<h i66 c 7 X
Sx2i 7 Nx XNx
6 7
IþA 6 7þ
2 2
or, 6 .. 7 ¼ hI  A 6 . 7 ½Ak ½CSx ik þ ½Ak ½CSx ik þ 1
6
4 . 7
5
> h
>
> 4 .. 5 k¼1
>
:
k¼1

cSxnði þ 1Þ cSxni
XNx h i
1 1
þ2 ½Ak  CSxs k þ CTxs k þ 2 B CTSu þ CTTu
2 iþk 2 i
k¼1
)
X
Nu h i X
Nu h i
1 1
þ2 ½Bk  CTSu þ CTTu þ2 ½Bk  CTSus k þ CTTus k
2 ik 2 iþk
k¼1 k¼1

ð10:31Þ

The inverse in (10.31) can always be made to exist by judicious choice of h.


Equation (10.31) provides a simple recursive solution of the states of a
multi-delay non-homogeneous system, or, in other words, time samples of the
260 10 Multi-delay System Analysis …

states, with a sampling period of h knowing the system matrix A, the input matrix
B, the input signal u, the delay matrices for states and for input, and the initial
values of the states.
For a time-invariant system, all the delay matrices are zero. For such a system
Eq. (10.31) can be modified as
2 3 2 3
cSx1ði þ 1Þ cSx1i
6 7   1  6 c 7
6 cSx2ði þ 1Þ 7 6 Sx2i 7
6 7 2 2
6 . 7¼ IA IþA 6 6 .. 7
7
6 .. 7 h h 4 . 5
4 5
cSxnði þ 1Þ cSxni
  1  
2 1
+ 2 IA B CTSu þ CTTu
h 2 i

like Eq. (8.21).

10.3.1 Numerical Examples

Example 10.5 (vide Appendix B, Program no. 28)


Consider the non-homogeneous time-delay system

x_ ðtÞ ¼ xðt  1Þ þ uðtÞ; 0  t  1


xðtÞ ¼ 1; for  1  t  0
2:1 þ 1:05t; 0  t  1
and uðtÞ ¼
1:05; 1  t  2

having the solution

1  1:1t þ 0:525t2 ; 0  t  1
xð t Þ ¼
0:25 þ 1:575t  1:075t2 þ 0:175t3 ; 1  t  2

The exact solution of the state x(t) along with samples computed using the HF
approach, are shown in Fig. 10.6, for m = 4 and m = 8 with T = 2 s.
For quantitative analysis of this HF based approach, we have compared
sample-wise computational error with the exact samples, in Table 10.1. From
Table 10.1 it is noted that, instead of using a small number of segments (m = 4), the
samples obtained using HF domain analysis are reasonably close to their respective
exact samples.
10.3 Analysis of Non-homogeneous State Equations … 261

Fig. 10.6 Comparison of the


exact samples of the state x
(t) of the non-homogeneous
time-delay system of Example
10.5, with the samples
obtained in HF domain, for a
m = 4 and b m = 8, with
T = 2 s (vide Appendix B,
Program no. 28)

Example 10.6 Consider the non-homogeneous time-delay system

1 1
x_ ðtÞ ¼ xðtÞ  2x t  þ 2u t  ; 0t1
4 4
1
xðtÞ ¼ uðtÞ ¼ 0; for  t0
4
and uðtÞ ¼ 1 for t 0
262 10 Multi-delay System Analysis …

Table 10.1 Solution of state x(t) of the non- homogeneous time delay system of Example 10.5
with comparison of exact samples and corresponding samples obtained via HF domain with
percentage errors at different sample points for (a) m = 4 and (b) m = 8, with T = 2 s (vide
Appendix B, Program no. 28)
t (s) Exact samples of the Samples from HF analysis, using % Error e ¼
xd xh
state xd Eq. (10.31), xh xd  100
(a) System state x(t)
0 1.00000000 1.00000000 0.00000000
2
4 0.58125000 0.58125000 0.00000000
4
4 0.42500000 0.42500000 0.00000000
6
4 0.28437500 0.29531250 −3.84615385
8
4 0.00000000 0.02187500 –
(b) System state x(t)
0 1.00000000 0.00000000 0.00000000
2
8 0.75781250 0.75781250 0.00000000
4
8 0.58125000 0.58125000 0.00000000
6
8 0.47031250 0.47031250 0.00000000
8
8 0.42500000 0.42500000 0.00000000
10
8 0.38085938 0.38222656 −0.35897239
12
8 0.28437500 0.28710938 −0.96154022
14
8 0.15195313 0.15605469 −2.69922706
16
8 0.00000000 0.00546875 –

having the solution


8
> 0;  0 1t  4 1
1
>
<
2
   2 exp
  t  ; 4 
 t 12  1
1
xðtÞ ¼ 4
>
>   2 
 2 exp  t  1
4  þ ð2 þ 4t Þ
 17 exp  t  2 ; 2 
t  34 
:
6  2 exp  t  4 þ ð2 þ 4tÞ exp  t  2  4 þ 2t þ 4t exp  t  34 ;
1 1 2 3
4 t1

The exact solution of the state x(t) along with results computed using the HF
approach, are shown in Fig. 10.7, for m = 10 and T = 1 s.
Example 10.7 Consider a homogeneous time-delay system

x_ ðtÞ ¼ xðt  0:35Þ þ xðt  0:7Þ þ uðtÞ;


xðtÞ ¼ 0; for t  0;
uð t Þ ¼ 1

The exact solution is


8
< t; 0  t  0:35
xð t Þ ¼ t þ 12 ðt  0:35Þ2 ; 0:35  t  0:7
:
t þ 12 ðt  0:35Þ2 þ 12 ðt  0:7Þ2 þ 16 ðt  0:7Þ3 ; 0:7  t  1:05
10.3 Analysis of Non-homogeneous State Equations … 263

Fig. 10.7 Comparison of


exact samples of the state x
(t) of the non-homogeneous
time-delay system of Example
10.6, with the samples
obtained in HF domain, for a
m = 8 and b m = 16, with
T=1s

The exact solution of the state x(t) along with the results obtained via HF
approach, are shown in Fig. 10.8a and b, for m = 3 and 6, for T = 1 s. Figure 10.9
graphically compares the exact solution of the state with the results obtained using
Walsh series (m = 4), Taylor series (4th order) and HF domain (m = 6) analyses. It
seems that the HF based analysis results are more close than the results obtained via
Walsh series or Taylor series. To show the differences in the three results,
Fig. 10.10 is presented where a magnified view of some portion of Fig. 10.9. is
264 10 Multi-delay System Analysis …

Fig. 10.8 Comparison of


exact samples of the state x
(t) of the non-homogeneous
time-delay system of Example
10.7, with the samples
obtained in HF domain, for a
m = 3 and b m = 6, with
T = 1.05 s

shown for better clarity. In Fig. 10.10, it is noted that the samples obtained via HF
domain analysis fall directly on the exact curve, while this is not the case for Taylor
series as well as Walsh series analyses.
To assess this attribute quantitatively, we present Table 10.2 where the MISE’s
[1, 2] of three different analyses are compared.
It is noted from Table 10.2 that the MISE for Taylor series analysis [3] is much
less compared to Walsh series analysis [4]. But the HF analysis produces even less
10.3 Analysis of Non-homogeneous State Equations … 265

Fig. 10.9 Comparison of the


exact samples of the state x
(t) of the non-homogeneous
time-delay system of Example
10.7, for T = 1.05 s, with a the
samples obtained via HF
domain analysis for m = 6, b
Taylor series analysis of 4th
order [3] and c Walsh series
analysis [4] with m = 4

Fig. 10.10 Magnified view


of Fig. 10.9, of Example 10.7

Table 10.2 Computation of MISE for the analysis of state x(t) of Example 10.7 for T = 1.05 s, for
(a) Walsh series analysis [4] with m = 4 (b) Taylor series analysis of 4th order [3] and (c) HF
domain analysis for m = 6
Walsh series analysis with 4th order Taylor series HF domain analysis with
m=4 analysis
  m=6
ðMISEWalsh Þ MISETaylor ðMISEHF Þ
0.01385083 9.04236472e-05 1.67268017e-05
266 10 Multi-delay System Analysis …

Table 10.3 Computation of percentage increase in MISE for Walsh analysis (m = 4) and Taylor
analysis (4th order) with respect to HF domain analysis (m = 6), for the state x(t) of Example 10.7,
with T = 1.05 s

Percentage increase in MISE MISEMISE


Walsh MISEHF
 100 Percentage increase in MISE
HF MISETaylor MISEHF
MISEHF  100
82,706.2069 440.591376

error. Also, a fourth order Taylor series involves much more computation burden
compared to the HF analysis. Therefore, the HF domain analysis proves to be more
efficient and it produces best results compared to the other two methods.
This is further evident from Table 10.3 where the MISE’s of Walsh series and
Taylor series analyses are expressed as percentages of the MISE of HF based
analysis.
The numerical figures are self explanatory, because, in case of Walsh series
analysis, the increase in percentage error is 82,706.2069 % and that for the Taylor
series analysis is 440.591376 %.

10.4 Analysis of Homogeneous State Equations with Delay

For a homogeneous system, B and the input delay matrices are zero and Eq. (10.26)
will be reduced to
2 3 8 2 3
cSx1ði þ 1Þ > cSx1i
6 7 >
>
6 cSx2ði þ 1Þ 7 h i 1 >
<h i6 cSx2i 7 X X
h 6 7 Nx Nx
6 7 h 6 . 7þh h
6 7 ¼ I  A I þ A ½ A  ½ C  þ ½Ak ½CSx ik þ 1
... 2 6 . 7 2 k Sx
6 7 2 >
> 4 . 5
ik 2
4 5 >
> k¼1 k¼1
:
cSxnði þ 1Þ cSxni
)
XNx h i
1
þh ½Ak  CSxs k þ CTxs k
2 iþk
k¼1

ð10:32Þ

The inverse in (10.32) can always be made to exist by judicious choice of h.


Equation (10.32) provides a simple recursive solution of the states of a
multi-delay homogeneous system, or, in other words, time samples of the states,
10.4 Analysis of Homogeneous State Equations with Delay 267

with a sampling period of h knowing the system matrix A, the delay matrices for
states, and the initial values of the states.

10.4.1 Numerical Examples

Example 10.8 Consider a homogeneous time-delay system

1
x_ ðtÞ ¼ 4x t  ; 0t1
4
x ð 0Þ ¼ 1
1
xðtÞ ¼ 0; for   t\0
4

having the solution


8
> 1; 10 t 
1
>
> 4
< 1 þ 4 t  4 ; 4  t  12
1
xð t Þ ¼    2
>
> 1 þ 4 t  14 þ 8 t  12 ; 12  t  34
>
:    2  
3 3
1 þ 4 t  14 þ 8 t  12 þ 32 3 t4 ; 4 t1
3

The exact solution of the state x(t) along with results computed using the HF
approach, are shown in Fig. 10.11, for m = 4, 12, 20 with T = 1 s.
In Example 10.8, the initial values of the state x(t) are zero for time t\0 second,
and the state jump to one at t = 0 s. If we do not approximate the initial values in HF
domain, considering this jump, will analysis the state of the delay system with error.
Though we can reduce this error using increasing number of segments, m.
In Example 10.8, it is noted that a jump is involved in the initial values of the
state at t = 0. To obtain even better results in HF domain, we utilize the concept of
jump discontinuity, as illustrated in Chap. 3. Hence we can refine the results in
system analysis, using minimum number of sub-intervals. This is evident from
Fig. 10.12a and b.
268 10 Multi-delay System Analysis …

Fig. 10.11 Comparison of


exact samples of the state
x(t) of the homogeneous
time-delay system of
Example 10.8, with the
samples obtained in HF
domain, for a m = 4, b m = 12
and c m = 20, with T = 1 s
10.5 Conclusion 269

Fig. 10.12 Comparison of


exact samples of the state x
(t) of the homogeneous
time-delay system of Example
10.8, with the samples
obtained in HF domain,
considering the concept of
jump discontinuity, for a
m = 4, and b m = 8, with
T=1s

10.5 Conclusion

In this chapter, we have analysed the state space models of non-homogeneous


multi-delay systems in hybrid function platform. In the generalized form of solu-
tion, vide Eq. (10.31), it is noted that the structure of the solution is recursive in
manner. Thus, the computation is simple as well as attractive.
The same Eq. (10.31) has been used for the analysis of homogeneous systems by
simply putting B = 0.
270 10 Multi-delay System Analysis …

Different types of numerical examples have been treated using the derived
matrix equations for homogeneous as well as non-homogeneous systems, and the
results are compared with the exact solutions of the system states with respective
error estimates. These facts are reflected in Figs. 10.6, 10.7, 10.8, 10.9, 10.10,
10.11, 10.12 and Table 10.1, 10.2 and 10.3 both qualitatively and quantitatively.

References

1. Rao, G.P.: Piecewise Constant Orthogonal Functions and Their Application in Systems and
Control, LNC1S, vol. 55. Springer, Berlin (1983)
2. Jiang, J.H., Schaufelberger, W.: Block Pulse Functions and Their Application in Control
System, LNCIS, vol. 179. Springer, Berlin (1992)
3. Chung, H.Y., Sun, Y.Y.: Taylor series analysis of multi-delay system. J. Franklin Instt. 324(1),
65–72 (1987)
4. Chen, W.L.: Walsh series analysis of multi-delay systems. J. Franklin Inst. 313(4), 207–217
(1982)
Chapter 11
Time Invariant System Analysis: Method
of Convolution

Abstract This chapter presents analysis of both open loop as well as closed loop
systems based upon the idea of convolution in HF domain. Three numerical
examples have been treated, and for clarity, thirteen figures and six tables have been
presented.

As the heading implies, this chapter is dedicated for linear time invariant
(LTI) control system [1] analysis using convolution method. The convolution
operation in HF domain was discussed in detail in Chap. 7 where the key equation
giving the samples of the result of convolution was Eq. (7.21).
In any control system block diagram, we find many blocks through which a
signal passes. Since the block diagram is usually drawn in Laplace domain, we
frequently have products of two functions described in s-domain. Such a product in
s-domain means convolution in time domain. Thus, analysis of control systems
involves convolution. The principles of convolution in HF domain are now
employed for analysis of time invariant non-homogeneous system, both open loop
as well as closed loop. As mentioned earlier, HF domain analysis is always based
upon function samples and it provides attractive computational advantages.

11.1 Analysis of an Open Loop System

An input r(t) is applied to a causal SISO system, as shown in Fig. 11.1, at t = 0. If


the impulse response of the plant is g(t), we obtain the output y(t) simply by
convolution of r(t) and g(t). Knowing r(t) and g(t), we can employ the generalized
Eq. (7.21) for computing the samples of the output y(t) so that the result is obtained
in HF domain [2–4].

© Springer International Publishing Switzerland 2016 271


A. Deb et al., Analysis and Identification of Time-Invariant Systems, Time-Varying
Systems, and Multi-Delay Systems using Orthogonal Hybrid Functions,
Studies in Systems, Decision and Control 46, DOI 10.1007/978-3-319-26684-8_11
272 11 Time Invariant System Analysis: Method of Convolution

Fig. 11.1 An open loop SISO control system

11.1.1 Numerical Examples

Example 11.1 Consider a linear open loop system, shown in Fig. 11.2, with its
impulse response given by g1 ðtÞ ¼ expðtÞ.
Taking T = 1 s, m = 4 and h = T/m = 0.25 s, we analyze the system in HF domain
for a step input. Then the actual output is y1 ðtÞ ¼ 1  expðtÞ
In HF domain, for m = 4 and T = 1 s, uðtÞ and g1 ðtÞ are given by

uð t Þ ¼ ½ 1 1 1 1 Sð4Þ þ ½ 0 0 0 0 Tð4Þ

and

g1 ðtÞ ¼ ½ 1:00000000 0:77880078 0:60653066 0:47236655 Sð4Þ


þ ½ 0:22119922 0:17227012 0:13416411 0:10448711 Tð4Þ

Using Eq. (7.18) or (7.21), convolution of uðtÞ and g1 ðtÞ in hybrid function
domain yields the output as

y1c ðtÞ ¼ ½ 0:00000000 0:22235010 0:39551653 0:53037868 Sð4Þ


þ ½ 0:22235010 0:17316643 0:13486215 0:10503075 Tð4Þ

Direct expansion of the output y1d ðtÞ, in HF domain, for m = 4 and T = 1 s, is

y1d ðtÞ ¼ ½ 0:00000000 0:22119922 0:39346934 0:52763345 Sð4Þ


þ ½ 0:22119922 0:17227012 0:13416411 0:10448711 Tð4Þ

We compute percentage errors at different sample points of the function y1c ðtÞ
and compare the same with respective reference samples of the function y1d ðtÞ for
m = 4 and m = 10. These are presented in Tables 11.1 and 11.2. Also, percentage
errors at different sample points for different values of m are plotted in Fig. 11.3 to

Fig. 11.2 System having an impulse response expðtÞ with a unit step input
11.1 Analysis of an Open Loop System 273

Table 11.1 Percentage error of different samples of y1c ðtÞ for Example 11.1, with T = 1 s, m = 4
t (s) Via direct expansion in HF Via convolution in HF domain, % Error
domain using (7.21) e ¼ ðy1dyy 1c Þ
 100
ðy1d Þ ðy1c Þ 1d

0 0.00000000 0.00000000 –
1
4 0.22119922 0.22235010 −0.52029160
2
4 0.39346934 0.39551653 −0.52029160
3
4 0.52763345 0.53037868 −0.52029160
4
4 0.63212056 0.63540943 −0.52029160

Table 11.2 Percentage error of different samples of y1c ðtÞ for Example 11.1, with T = 1 s, m = 10
t (s) Via direct expansion in Via convolution in HF Domain, % Error
HF domain using (7.21) e ¼ ðy1dyy 1c Þ
 100
ðy1d Þ ðy1c Þ 1d

0 0.00000000 0.00000000 –
1
10 0.09516258 0.09524187 −0.08331945
2
10 0.18126925 0.18142028 −0.08331945
3
10 0.25918178 0.25939773 −0.08331945
4
10 0.32967995 0.32995464 −0.08331945
5
10 0.39346934 0.39379718 −0.08331945
6
10 0.45118836 0.45156429 −0.08331945
7
10 0.50341470 0.50383414 −0.08331945
8
10 0.55067104 0.55112985 −0.08331945
9
10 0.59343034 0.59392478 −0.08331945
10
10 0.63212056 0.63264724 −0.08331945

Fig. 11.3 Percentage error at


different sample points for
different values of m, with
T = 1 s for Example 11.1
274 11 Time Invariant System Analysis: Method of Convolution

Fig. 11.4 AMP Error for different values of m, with T = 1 s for Example 11.1

Fig. 11.5 Open loop SISO system with the impulse response 2 expð2tÞ½cosð2tÞ  sinð2tÞ and a
unit step input

show that with increasing m, error reduces at a very fast rate. Figure 11.4 shows the
AMP error for different values of m. The curve resembles a rectangular hyperbola.
It is seen that for m = 20, average absolute percentage error is less than 0.05 %.
Example 11.2 (vide Appendix B, Program no. 29) Consider the open loop system,
shown in Fig. 11.5, with an impulse response given by

g2 ðtÞ ¼ 2 expð2tÞ½cosð2tÞ  sinð2tÞ

Taking T = 1 s, m = 4 and we analyze the system in HF domain for a step input.


The exact solution is y2 ðtÞ ¼ expð2tÞ sinð2tÞ.
In HF domain, for m = 4 and T = 1 s, uðtÞ and g2 ðtÞ are given by

uð t Þ ¼ ½ 1 1 1 1 Sð4Þ þ ½ 0 0 0 0 Tð4Þ

and

g2 ðtÞ ¼ ½ 2:00000000 0:48298888 0:22158753 0:41357523 Sð4Þ


þ ½ 1:51701112 0:70457641 0:19198770 0:05481648 Tð4Þ
11.1 Analysis of an Open Loop System 275

Convolution of uðtÞ and g2 ðtÞ in HF domain yields the output y2 ðtÞ as

y2c ðtÞ ¼ ½ 0:00000000 0:31037361 0:34304878 0:26365344 Sð4Þ


þ ½ 0:31037361 0:03267517 0:07939534 0:09654175 Tð4Þ

Direct expansion of the output y2d ðtÞ, in HF domain, for m = 4 and T = 1 s, is

y2d ðtÞ ¼ ½ 0:00000000 0:29078628 0:30955988 0:22257122 Sð4Þ


þ ½ 0:29078629 0:01877359 0:08698866 0:09951119 Tð4Þ

Now we compare the corresponding sample points of y2c ðtÞ and y2d ðtÞ, of
example 2, and compute percentage errors at different sample points with reference
to the samples of y2d ðtÞ for m = 4 and m = 10. The results of comparison are
presented in tabular form in Tables 11.3 and 11.4.

Table 11.3 Percentage error of different samples of y2c ðtÞ for Example 11.2, with T = 1 s, m = 4
(vide Appendix B, Program no. 29)
t (s) Via direct expansion in HF Via convolution in HF domain, % Error
domain using (7.21) e ¼ ðy2dyy 2c Þ
 100
ðy2d Þ ðy2c Þ 2d

0 0.00000000 0.00000000 –
1
4 0.29078628 0.31037361 −6.73598553
2
4 0.30955988 0.34304878 −10.81823151
3
4 0.22257122 0.26365344 −18.45801075
4
4 0.12306002 0.16711169 −35.79689134

Table 11.4 Percentage errors of samples of y2c ðtÞ for Example 11.2, with T = 1 s, m = 10 (vide
Appendix B, Program no. 29)
t (s) Via direct expansion in HF Via convolution in HF domain, % Error
domain using (7.21) e ¼ ðy2dyy 2c Þ
 100
ðy2d Þ ðy2c Þ 2d

0 0.00000000 0.00000000 –
1
10 0.16265669 0.16397540 −0.81072892
2
10 0.26103492 0.26358786 −0.97800816
3
10 0.30988236 0.31353208 −1.17777597
4
10 0.32232887 0.32691139 −1.42168985
5
10 0.30955988 0.31490417 −1.72641797
6
10 0.28072478 0.28666632 −2.11650235
7
10 0.24300891 0.24939830 −2.62928134
8
10 0.20181043 0.20851818 −3.32378529
9
10 0.16097593 0.16789439 −4.29781955
10
10 0.12306002 0.13010323 −5.72338986
276 11 Time Invariant System Analysis: Method of Convolution

Fig. 11.6 Percentage error at


different sample points for
different values of m, with
T = 1 s for Example 11.2
(vide Appendix B, Program
no. 29)

Fig. 11.7 AMP Error for


different values of m, with
T = 1 s for Example 11.2
(vide Appendix B, Program
no. 29)

Also, percentage errors at different sample points for different values of


m (m = 4, 6, 10, 20) are plotted in Fig. 11.6 to show that with increasing m, error
reduces quite rapidly.
Figure 11.7 shows the AMP error for several values of m. It is seen that for
m = 20, AMP error is less than 1.0 %.

11.2 Analysis of a Closed Loop System

Consider a single-input-single-output (SISO) time-invariant system [1].


An input r ðtÞ is applied to the system at t ¼ 0. The block diagram of the system
using time variables is shown in Fig. 11.8. Application of r ðtÞ to the system gðtÞ
with feedback hðtÞ produces the corresponding output yðtÞ for t  0.
11.2 Analysis of a Closed Loop System 277

Fig. 11.8 Block diagram of a closed loop control system

Considering r ðtÞ, gðtÞ, yðtÞ and hðtÞ to be bounded (i.e. the system is BIBO
stable) and absolutely integrable over t 2 ½0; T Þ, all these functions may be
expanded via HF series. For m = 4, we can write and
9
r ðtÞ , RTS Sð4Þ þ RTT Tð4Þ >
>
>
>
gð t Þ , G T S þ G T T =
S ð4Þ T ð4Þ
ð11:1Þ
yðtÞ , YS Sð4Þ þ YT Tð4Þ
T T >
>
>
>
;
and hðtÞ , HTS Sð4Þ þ HTT Tð4Þ

Where

RTS ¼ ½ r0 r1 r2 r3  RTT ¼ ½ ðr1  r0 Þ ðr2  r1 Þ ðr3  r2 Þ ðr4  r3 Þ 


GTS ¼ ½ g0 g1 g2 g3  GTT ¼ ½ ðg1  g0 Þ ðg2  g1 Þ ðg3  g2 Þ ðg4  g3 Þ 
YTS ¼ ½ y0 y1 y2 y3  YTT ¼ ½ ðy1  y0 Þ ð y2  y1 Þ ð y3  y2 Þ ð y4  y3 Þ 
HTS ¼ ½ h0 h1 h2 h3  HTT ¼ ½ ð h1  h0 Þ ð h2  h1 Þ ð h3  h2 Þ ðh4  h3 Þ 

Output of the feedback system is bðtÞ ¼ yðtÞ  hðtÞ.


Following Eq. (7.17), we can write

bðtÞ ¼ yðtÞ  hðtÞ


2 3
0 H0 H1 H2
60 H H H 7
h 6 4 5 67
bðtÞ ¼ ½ y0 y1 y2 y3  6 7S
6 4 0 0 H4 H5 5 ð4Þ
0 0 0 H4
8 2 3
> H 0 ðH1  H0 Þ ðH2  H1 Þ ðH3  H2 Þ
>
>
h< 6 0
6 H0 ðH1  H0 Þ ðH2  H1 Þ 7 7 ð11:2Þ
þ ½ y0 y1 y2 y3  6 7
6> > 4 0 0 H ðH  H Þ 5
>
:
0 1 0
0 0 0 H0
2 39
H4 H8 H9 H10 >
>
>
6 0 H H H9 7 =
6 4 8 7
þ ½ y1 y2 y3 y4  6 7 Tð4Þ
4 0 0 H4 H8 5> >
>
;
0 0 0 H4
278 11 Time Invariant System Analysis: Method of Convolution

where
9
H0 , 2h1 þ h0 H1 , 2h2 þ h1 >
>
>
>
H2 , 2h3 þ h2 H3 , 2h4 þ h3 >
>
=
H4 , h1 þ 2h0 H5 , h2 þ 4h1 þ h0
ð11:3Þ
H6 , h3 þ 4h2 þ h1 H7 , h4 þ 4h3 þ h2 >
>
>
H8 , h2 þ h1  2h0 H9 , h3 þ h2  2h1 >
>
>
;
H10 , h4 þ h3  2h2

After simplification, we have

h
bðtÞ ¼ ½ 0 ðH0 y0 þ H4 y1 Þ ðH1 y0 þ H5 y1 þ H4 y2 Þ ðH2 y0 þ H6 y1 þ H5 y2 þ H4 y3 Þ Sð4Þ
6
h
þ ½fH0 y0 þ H4 y1 g fðH1  H0 Þy0 þ ðH0 þ H8 Þy1 þ H4 y2 g
6
fðH2  H1 Þy0 þ ðH1  H0 þ H9 Þy1 þ ðH0 þ H8 Þy2 þ H4 y3 g
fðH3  H2 Þy0 þ ðH2  H1 þ H10 Þy1 þ ðH1  H0 þ H9 Þy2 þ ðH0 þ H8 Þy3 þ H4 y4 gTð4Þ
ð11:4Þ

Now the error signal of the system is

eðtÞ ¼ r ðtÞ  bðtÞ


h h
¼ ½ r0 r1  ðH0 y0 þ H4 y1 Þ r2  ðH1 y0 þ H5 y1 þ H4 y2 Þ
6 6
i
h
r3  ðH2 y0 þ H6 y1 þ H5 y2 þ H4 y3 Þ Sð4Þ
6
h
h h
þ ðr1  r0 Þ  ðH0 y0 þ H4 y1 Þ ðr2  r1 Þ  fðH1  H0 Þy0 þ ðH0 þ H8 Þy1 þ H4 y2 g
6 6
h
ðr3  r2 Þ  fðH2  H1 Þy0 þ ðH1  H0 þ H9 Þy1 þ ðH0 þ H8 Þy2 þ H4 y3 g
6
h
ðr4  r3 Þ  fðH3  H2 Þy0 þ ðH2  H1 þ H10 Þy1 þ ðH1  H0 þ H9 Þy2
6
þ ðH0 þ H8 Þy3 þ H4 y4 gTð4Þ
ð11:5Þ

Again, direct expansion of the error signal eðtÞ in HF domain is

eð t Þ , ½ e0 e1 e2 e3 Sð4Þ þ ½ ðe1  e0 Þ ðe2  e1 Þ ðe3  e2 Þ ðe4  e3 Þ Tð4Þ


ð11:6Þ
11.2 Analysis of a Closed Loop System 279

Comparing Eqs. (11.5) and (11.6), HF coefficients of error eðtÞ are

e0 ¼ r 0
h
e1 ¼ r1  ðH0 y0 þ H4 y1 Þ
6
h
e2 ¼ r2  ðH1 y0 þ H5 y1 þ H4 y2 Þ
6
h
e3 ¼ r3  ðH2 y0 þ H6 y1 þ H5 y2 þ H4 y3 Þ ð11:7Þ
6
h
e4 ¼ r4  fH3 y0 þ ðH2  H1 þ H10 þ H6 Þy1 þ ðH1  H0 þ H9 þ H5 Þy2
6
þ ðH0 þ H8 þ H4 Þy3 þ H4 y4 g
h
¼ r4  ðH3 y0 þ H7 y1 þ H6 y2 þ H5 y3 þ H4 y4 Þ
6

Hence, the output yðtÞ of the system is


y ð t Þ ¼ e ð t Þ  gð t Þ

Thus, following Eq. (7.17), we can write


2 3
0 G0 G1 G2
60 G6 7
h 6 G4 G5 7
yðtÞ  ½ e0 e1 e2 e3  6 7Sð4Þ
6 40 0 G4 G5 5
0 0 0 G4
8 2 3
> G0 ðG1  G0 Þ ðG2  G1 Þ ðG3  G2 Þ
>
>
< 6 0 ðG1  G0 Þ ðG2  G1 Þ 7
h 6 G0 7
þ ½ e0 e1 e2 e3  6 7
6>> 4 0 0 G0 ðG1  G0 Þ 5
>
:
0 0 0 G0
2 39
G4 G8 G9 G10 >
>
>
6 0 G8 G9 7 =
h 6 G4 7
þ ½ e1 e2 e3 e4  6 7 Tð4Þ
6 4 0 0 G4 G8 5>>
>
;
0 0 0 G4
ð11:8Þ

where
9
G0 , 2g1 þ g0 G1 , 2g2 þ g1 >
>
>
>
G2 , 2g3 þ g2 G3 , 2g4 þ g3 >
>
=
G4 , g1 þ 2g0 G5 , g2 þ 4g1 þ g0
ð11:9Þ
G6 , g3 þ 4g2 þ g1 G7 , g4 þ 4g3 þ g2 >
>
>
G8 , g2 þ g1  2g0 G9 , g3 þ g2  2g1 >
>
>
;
G10 , g4 þ g3  2g2
280 11 Time Invariant System Analysis: Method of Convolution

and
9
E0 , 2e1 þ e0 E1 , 2e2 þ e1 >
>
>
>
E2 , 2e3 þ e2 E3 , 2e4 þ e3 >
>
=
E4 , e1 þ 2e0 E5 , e2 þ 4e1 þ e0
ð11:10Þ
E6 , e3 þ 4e2 þ e1 E7 , e4 þ 4e3 þ e2 >
>
>
E8 , e2 þ e1  2e0 E9 , e3 þ e2  2e1 >
>
>
;
E10 , e4 þ e3  2e2

Hence,

h
yðtÞ  ½ 0 ðG0 e0 þ G4 e1 Þ ðG1 e0 þ G5 e1 þ G4 e2 Þ
6
ðG2 e0 þ G6 e1 þ G5 e2 þ G4 e3 ÞSð4Þ
h
þ ½ fG0 e0 þ G4 e1 g fðG1  G0 Þe0 þ ðG0 þ G8 Þe1 þ G4 e2 g
6
fðG2  G1 Þe0 þ ðG1  G0 þ G9 Þe1 þ ðG0 þ G8 Þe2 þ G4 e3 g
fðG3  G2 Þe0 þ ðG2  G1 þ G10 Þe1 þ ðG1  G0 þ G9 Þe2 þ ðG0 þ G8 Þe3 þ G4 e4 gTð4Þ
h
or, yðtÞ  ½ 0 ðG0 e0 þ G4 e1 Þ ðG1 e0 þ G5 e1 þ G4 e2 Þ
6
ðG2 e0 þ G6 e1 þ G5 e2 þ G4 e3 ÞSð4Þ
h
½ fG0 e0 þ G4 e1 g fðG1  G0 Þe0 þ ðG5  G4 Þe1 þ G4 e2 g
6
fðG2  G1 Þe0 þ ðG6  G5 Þe1 þ ðG5  G4 Þe2 þ G4 e3 g
fðG3  G2 Þe0 þ ðG7  G6 Þe1 þ ðG6  G5 Þe2 þ ðG5  G4 Þe3 þ G4 e4 gTð4Þ
ð11:11Þ

Since convolution operation is commutative, using Eq. (11.10) instead of (11.9),


an alternative expression for the output yðtÞ can be

h
yðtÞ  ½ 0 ðE0 g0 þ E4 g1 Þ ðE1 g0 þ E5 g1 þ E4 g2 Þ
6
 ðE2 g0 þ E6 g1 þ E5 g2 þ E4 g3 ÞSð4Þ
h
þ ½ fE0 g0 þ E4 g1 g fðE1  E0 Þg0 þ ðE5  E4 Þg1 þ E4 g2 g
6
 fðE2  E1 Þg0 þ ðE6  E5 Þg1 þ ðE5  E4 Þg2 þ E4 g3 g
 fðE3  E2 Þg0 þ ðE7  E6 Þg1 þ ðE6  E5 Þg2 þ ðE5  E4 Þg3 þ E4 g4 gTð4Þ
ð11:12Þ

Again, direct expansion of the output yðtÞ of the system in HF domain is

yðtÞ , ½ y0 y1 y2 y3 Sð4Þ
ð11:13Þ
þ ½ ðy1  y0 Þ ðy2  y1 Þ ð y3  y2 Þ ðy4  y3 Þ Tð4Þ
11.2 Analysis of a Closed Loop System 281

We equate respective coefficients of yðtÞ from Eqs. (11.11) and (11.13), and use
Eqs. (11.3), (11.7), (11.9) and (11.10) to determine the five output coefficients

ðiÞ y0 ¼ 0 ð11:14Þ

h
y1  ðG0 e0 þ G4 e1 Þ
6 
ðiiÞ
h h h
¼ G0 r0 þ G4 r1  ðG4 H0 Þy0  ðG4 H4 Þy1
6 6 6
Solving for y1

 
h 1 h
y1 ¼   G r
0 0 þ G r
4 1  ð G H Þy
4 0 0 ð11:15Þ
6 1 þ 36
h2
G4 H4 6

h
y2  ðG1 e0 þ G5 e1 þ G4 e2 Þ
6    
ðiiiÞ h h h
¼ G1 r0 þ r1  ðH0 y0 þ H4 y1 ÞG5 þ r2  ðH1 y0 þ H5 y1 þ H4 y2 ÞG4
6 6 6

Solving for y2


h 1 h
y2 ¼   G1 r0 þ G5 r1 þ G4 r2  ðG5 H0 þ G4 H1 Þy0
6 1 þ 36
h2
G4 H4 6
 ð11:16Þ
h
 ðG5 H4 þ G4 H5 Þy1
6

h
y3  ðG2 e0 þ G6 e1 þ G5 e2 þ G4 e3 Þ
6    
h h h
ðivÞ ¼ G2 r0 þ r1  ðH0 y0 þ H4 y1 ÞG6 þ r2  ðH1 y0 þ H5 y1 þ H4 y2 ÞG5
6 6 6
 
h
þ r3  ðH2 y0 þ H6 y1 þ H5 y2 þ H4 y3 ÞG4
6
282 11 Time Invariant System Analysis: Method of Convolution

Solving for y3


h 1 h
y3 ¼   G2 r0 þ G6 r1 þ G5 r2  ðG6 H0 þ G5 H1 þ G4 H2 Þy0
6 1 þ 36
h2
G4 H4 6

h h
 ðG6 H4 þ G5 H5 þ G4 H6 Þy1  ðG5 H4 þ G4 H5 Þy2
6 6
ð11:17Þ

h
y4  ½ðG3  G2 Þe0 þ ðG7  G6 Þe1 þ ðG6  G5 Þe2 þ ðG5  G4 Þe3 þ G4 e4 
6
h
þ ½G2 e0 þ G6 e1 þ G5 e2 þ G4 e3 
6
h
¼ ½G3 e0 þ G7 e1 þ G6 e2 þ G5 e3 þ G4 e4 
6
ðvÞ h h
¼ G3 r0 þ G7 r1 þ G6 r2 þ G5 r3 þ G4 r4 þ ðG7 H0 þ G6 H1 þ G5 H2 þ G4 H3 Þy0
6 6
h h
 ðG7 H4 þ G6 H5 þ G5 H6 þ G4 H7 Þy1  ðG6 H4 þ G5 H5 þ G4 H6 Þy2
6  6
h h
 ðG5 H4 þ G4 H5 Þy3  G4 H4
6 6

Solving for y4


h 1 h
y4 ¼   G3 r0 þ G7 r1 þ G6 r2 þ G5 r3 þ G4 r4  ðG7 H0 þ G6 H1 þ G5 H2 þ G4 H3 Þy0
6 1 þ 36
h2
G4 H4 6
h h
 ðG7 H4 þ G6 H5 þ G5 H6 þ G4 H7 Þy1  ðG6 H4 þ G5 H5 þ G4 H6 Þy2
6  6
h
 ðG5 H4 þ G4 H5 Þy3
6
ð11:18Þ

To write down the generalized form of the output coefficients, first we express
G0i s and Hi0 s by their following general forms:
8
>
> 2gði þ 1Þ þ gi for i ¼ 0 to ðm  1Þ
< g þ 2g for i¼m
1 0
Gi ¼
>
> gðim þ 1Þ þ 4gðimÞ þ gðim1Þ for i ¼ ðm þ 1Þ to ð2m  1Þ
:
gði2m þ 2Þ þ gði2m þ 1Þ  2gði2mÞ for i ¼ 2m to ð3m  2Þ
ð11:19Þ
11.2 Analysis of a Closed Loop System 283

8
>
> 2hði þ 1Þ þ hi for i ¼ 0 to ðm  1Þ
< h þ 2h for i¼m
1 0
Hi ¼
>
> hðim þ 1Þ þ 4hðimÞ þ hðim1Þ for i ¼ ðm þ 1Þ to ð2m  1Þ
:
hði2m þ 2Þ þ hði2m þ 1Þ  2hði2mÞ for i ¼ 2m to ð3m  2Þ
ð11:20Þ

It is noted that G0i s and Hi0 s have the same general form.
Now, we use Eqs. (11.14)–(11.20) to write down the generalized form of the
output coefficients as
" #
P
i P
i P
i
h
6 Gði1Þ r0 þ Gðm þ ipÞ rp  h
6 Gðm þ ipÞ Hðp1Þ y0  h
6 Kðip þ 2Þ yðp1Þ
p¼1 p¼1 p¼2
yi ¼
1þ h2
36 Gm Hm
ð11:21Þ

P
j
where Kj ¼ Gðm þ jpÞ Hðm þ p1Þ and j ¼ i  p þ 2.
p¼1

11.2.1 Numerical Examples

To analyse a closed loop system, we use Eqs. (11.14)–(11.21) to determine the


output coefficients in HF domain. The results are compared with direct expression
of the output of the system in HF domain.
Example 11.3 (vide Appendix B, Program no. 30) Consider a closed loop system
with input r ðtÞ, plant g3 ðtÞ, feedback h3 ðtÞ and output y3 ðtÞ, shown in Fig. 11.9.
Taking T = 1 s and m = 4, we analyse the system in HF domain for a step input
uðtÞ and feedback h3 ðtÞ ¼ 4uðtÞ with its impulse response given by
g3 ðtÞ ¼ 2 expð4tÞ. The exact solution of the system is y3 ðtÞ ¼ expð2tÞ sinð2tÞ.

Fig. 11.9 A closed loop system with unit step input


284 11 Time Invariant System Analysis: Method of Convolution

Using Eqs. (11.19)–(11.21), the output coefficients of closed loop system y3c ðtÞ
in HF domain, for m = 4 and T = 1 s, is

y3c ðtÞ ¼ ½ 0:00000000 0:31126040 0:33909062 0:24469593 Sð4Þ


þ ½ 0:31126040 0:02783022 0:09439469 0:11398545 Tð4Þ

Direct expansion of the output y3d ðtÞ, in HF domain, for m = 4 and T = 1 s, is

y3d ðtÞ ¼ ½ 0:00000000 0:29078629 0:30955988 0:22257122 Sð4Þ


þ ½ 0:29078629 0:01877358 0:08698865 0:09951120 Tð4Þ

Now we compare the corresponding samples of y3c ðtÞ and y3d ðtÞ and compute
percentage errors at different sample points with reference to the samples of y3 ðtÞ

Table 11.5 Percentage error at different sample points of y3c ðtÞ for Example 11.3, with m = 4,
T = 1 s and (b) m = 10, T = 1 s (vide Appendix B, Program no. 30)
t (s) Via direct expansion in Via convolution in HF domain, % Error
HF domain using (11.19)–(11.21) e ¼ ðy3dyy 3c Þ
 100
ðy3d Þ ðy3c Þ 3d

(a)
0 0.00000000 0.00000000 –
1
4 0.29078629 0.31126040 −7.04094744
2
4 0.30955988 0.33909062 −9.53959161
3
4 0.22257122 0.24469593 −9.94050884
4
4 0.12306002 0.13071048 −6.21685153
(b)
0 0.00000000 0.00000000 –
1
10 0.16265669 0.16411049 −0.89378245
2
10 0.26103492 0.26393607 −1.11140316
3
10 0.30988236 0.31388535 −1.29177880
4
10 0.32232887 0.32694277 −1.43142705
5
10 0.30955988 0.31428062 −1.52498673
6
10 0.28072478 0.28511656 −1.56444413
7
10 0.24300891 0.24674586 −1.53778446
8
10 0.20181043 0.20468938 −1.42656376
9
10 0.16097593 0.16290972 −1.20128964
10
10 0.12306002 0.12405920 −0.81193829
11.2 Analysis of a Closed Loop System 285

for m = 4 and m = 10. The results of computation are presented in tabular form in
Table 11.5a, b. Figure 11.10a, b shows the graphical comparison of improvement in
HF domain convolution with increasing m, for m = 4 and m = 10. Also, percentage
errors at different sample points for different values of m (m = 4, 6, 10 and 20) are
plotted in Fig. 11.11 to show that with increasing m, error reduces at a very fast rate.
Figure 11.12 shows AMP error for several different values of m. It is seen that
for m = 20, AMP error is less than 1.0 %.

Fig. 11.10 Output of the


closed loop system of
Example 11.3, obtained via
convolution in HF domain for
a m = 4, T = 1 s and b m = 10,
T = 1 s (vide Appendix B,
Program no. 30)
286 11 Time Invariant System Analysis: Method of Convolution

Fig. 11.11 Percentage error


at different sample points for
different values of m, with
T = 1 s for Example 11.3
(vide Appendix B, Program
no. 30)

Fig. 11.12 AMP error for


different values of m, with
T = 1 s for Example 11.3
(vide Appendix B, Program
no. 30)

11.3 Conclusion

In this chapter, we have studied the analysis of open loop as well as closed loop
linear systems in hybrid function domain, employing the rules of HF domain
convolution. As a foundation, convolution of basic component functions of the HF
set was discussed in Chap. 7 and relevant sub-results were derived. These
sub-results were subsequently used to determine the convolution of two time
functions.
Using this theory, output of a linear open loop system is determined. For the
open loop system, two examples are treated and related tables are presented for
m = 4 and 10, m being the number of component functions used in HF set. For these
11.3 Conclusion 287

two cases, percentage errors at different sample points are also computed. These are
tabulated in Tables 11.1, 11.2, 11.3 and 11.4. It is observed that with increasing m,
the error is rapidly reduced. This fact is also supported by Figs. 11.3, 11.4, 11.6,
11.7, 11.11 and 11.12.
For analyzing a closed loop control system the same technique has been
employed and Eq. (11.8) has been derived. In this equation, the output is obtained
from algebraic operations of some special matrices determined from the convolu-
tion operation in HF domain. However, apart from Eq. (11.8), an interesting re-
cursive equation has been derived in the form of Eq. (11.21). A few relevant
examples have been treated for m = 4 and 10, and the results are found to be in good
agreement with the exact solutions. Percentage errors at different sample points are
shown in Table 11.5a, b. As observed before, with increasing m, the error is
reduced rapidly and this fact is apparent from Figs. 11.11 and 11.12.

References

1. Ogata, K.: Modern Control Engineering (5th Ed.). Prentice Hall of India (2011)
2. Deb, A., Sarkar, G., Bhattacharjee, M., Sen, S.K.: A new set of piecewise constant orthogonal
functions for the analysis of linear SISO systems with sample-and-hold. J. Franklin Instt. 335B
(2), 333–358 (1998)
3. Deb, A., Sarkar, G., Sengupta, A.: Triangular Orthogonal Functions for the Analysis of
Continuous Time Systems. Anthem Press, London (2011)
4. Biswas, A.: Analysis and synthesis of continuous control systems using a set of orthogonal
hybrid functions. Ph. D. Dissertation, University of Calcutta (2015)
Chapter 12
System Identification Using State Space
Approach: Time Invariant Systems

Abstract In this chapter, HF domain identification of time invariant systems in


state space is presented. Both homogeneous and non-homogeneous systems are
treated. State and output matrices of the systems are identified. A non-homogeneous
system with a jump discontinuity at input is also identified. Illustration has been
provided with the support of six examples, eleven figures and eleven tables.

In this chapter, we deal with linear time invariant (LTI) control systems [1]. We
intend to identify two types of LTI systems in hybrid function platform, namely
non-homogeneous system and homogeneous system [1, 2].
Identifying a control system [3] means, knowing the input signal or forcing
function and the output of the system, we determine the system parameters. For a
time invariant system, these parameters do not vary with time.
In a homogeneous system, though no external signal is applied, the initial
conditions of system states may exist. While in a non-homogeneous system, we
deal with a situation where the initial conditions and external input signals exist
simultaneously.
Here, we solve the problem of system identification [4, 5], both for
non-homogeneous systems and homogeneous systems, described as state space
models, by using the concept of HF domain.
First, we take up the problem of identification [6] of a non-homogeneous system,
because after putting the specific condition of zero forcing function in the solution,
we can arrive at the result of identification of a homogeneous system quite easily.

12.1 Identification of a Non-homogeneous System [3]

For solving the identification problem, we refer to Eq. (8.20). That is

© Springer International Publishing Switzerland 2016 289


A. Deb et al., Analysis and Identification of Time-Invariant Systems, Time-Varying
Systems, and Multi-Delay Systems using Orthogonal Hybrid Functions,
Studies in Systems, Decision and Control 46, DOI 10.1007/978-3-319-26684-8_12
290 12 System Identification Using State Space Approach …

2 3 3 2
cSx1ði þ 1Þ cSx1i
 6 7
6 cSx2ði þ 1Þ 7 2 6 c 7  
2 6 7 6 Sx2i 7 1 T
IA 6 6 7
7  I þ A 6 . 7 ¼ 2B CSu þ CTu
T
..
h 6
4 . 7
5
h 4 .. 5 2 i

cSxnði þ 1Þ cSxni
2 3 2 3 ð12:1Þ
cSx1i þ cSx1ði þ 1Þ cSx1ði þ 1Þ  cSx1i
6 7 6 7  
6 cSx2i þ cSx2ði þ 1Þ 7 2 6 cSx2ði þ 1Þ  cSx2i 7
6 7 6 7 1 T
or,A6 .. 7 ¼ 6 .. 7  2B C T
þ C
6 . 7 h6 . 7 Su
2 Tu i
4 5 4 5
cSxni þ cSxnði þ 1Þ cSxnði þ 1Þ  cSxni

It is noted that, if we try to solve A from Eq. (12.1) we meet with one immediate
difficulty. The coefficient matrix of A is a column matrix, and so no question of
inversion arises.
To tackle this problem, we proceed to construct a coefficient matrix which is
square and can be inverted. To achieve this end, we can write down (n − 1) more
equations similar to Eq. (12.1), by varying the index i from i to (i + n − 1) by
incrementing i by 1. For example, by incrementing i by 1, we can write down the
following equation:
2 3 2 3
cSx1ði þ 1Þ þ cSx1ði þ 2Þ cSx1ði þ 2Þ  cSx1ði þ 1Þ
6 cSx2ði þ 1Þ þ cSx2ði þ 2Þ 7 2 6 cSx2ði þ 2Þ  cSx2ði þ 1Þ 7  
6 7 6 7 1 T
A6 .. 7¼ 6 .. 7  2B CSu þ CTu
T
4 . 5 h4 . 5 2 iþ1
cSxnði þ 1Þ þ cSxnði þ 2Þ cSxnði þ 2Þ  cSxnði þ 1Þ
ð12:2Þ

From these n numbers of equations, we rearrange them in a manner to produce the


following equation:
2 3
cSx1i þ cSx1ði þ 1Þ cSx1ði þ 1Þ þ cSx1ði þ 2Þ    cSx1ði þ n1Þ þ cSx1ði þ nÞ
6 7
6 cSx2i þ cSx2ði þ 1Þ cSx2ði þ 1Þ þ cSx2ði þ 2Þ    cSx2ði þ n1Þ þ cSx2ði þ nÞ 7
6 7
A6 .. .. .. 7
6 . . . 7
4 5
cSxni þ cSxnði þ 1Þ cSxnði þ 1Þ þ cSxnði þ 2Þ    cSxnði þ n1Þ þ cSxn ði þ nÞ nn
2 3
cSx1ði þ 1Þ  cSx1i cSx1ði þ 2Þ  cSx1ði þ 1Þ    cSx1ði þ nÞ  cSx1ði þ n1Þ
6 7 ð12:3Þ
 cSx2i cSx2ði þ 2Þ  cSx2ði þ 1Þ    cSx2ði þ nÞ  cSx2 ði þ n1Þ 7
26
c
6 Sx2ði þ 1Þ 7
¼ 6 .. .. .. 7
h6 . . . 7
4 5
cSxnði þ 1Þ  cSxni cSxnði þ 2Þ  cSxnði þ 1Þ    cSxnði þ nÞ  cSxn ði þ n1Þ nn
   
1 1  
 2B CTSu þ CTTu CTSu þ CTTu    CTSu þ 12 CTTu i þ n1
2 i 2 iþ1 1n

Now, in (12.3), we have the coefficient matrix of A which is square of order n and
invertible.
12.1 Identification of a Non-homogeneous System … 291

It is evident from (12.3) that for solving n elements of the matrix A, we need
(n + 1) samples of the states and subsequently use them to form an (n × n) coeffi-
cient matrix. That is, if A has a dimension (2 × 2), say, we need at least 3 samples of
the states. Also, if we have m samples of the states, we normally use a set of
consecutive samples from the states and the inputs, starting from any ith sample.
Hence, if any system has the order n, the number of sub-intervals to be chosen
should be greater than n to take advantage of Eq. (12.3). That is, m ≥ n.
It is noted that the pattern of the coefficient matrix of A in (12.3) indicates that
each column of the coefficient matrix uses a pair of consecutive samples. However,
if we choose the samples of the states and the inputs erratically, to obtain reliable
results, such erratic choice should not violate the indicated pattern requirement.
Thus, it is a necessity to consider 2n number of samples, instead of (n + 1) number
of samples, to identify a system of order n.
Now, from Eq. (12.3), we have identified the system matrix A as,
8 2 3
>
> cSx1ði þ 1Þ  cSx1i cSx1ði þ 2Þ  cSx1ði þ 1Þ    cSx1ði þ nÞ  cSx1ði þ n1Þ
>
> 6 7
>
<2 6 cSx2ði þ 1Þ  cSx2i cSx2ði þ 2Þ  cSx2ði þ 1Þ    cSx2ði þ nÞ  cSx2ði þ n1Þ 7
6 7
A= 6 .. .. .. 7
>h 6
> . . . 7
>
> 4 5
>
: c
Sxnði þ 1Þ  cSxni cSxnði þ 2Þ  cSxnði þ 1Þ    cSxnði þ nÞ  cSxnði þ n1Þ
   
1 1  
2B CTSu þ CTTu CTSu þ CTTu    CTSu þ 12 CTTu i þ n1 ð12:4Þ
2 i 2 iþ1
2 31
cSx1i þ cSx1ði þ 1Þ cSx1ði þ 1Þ þ cSx1ði þ 2Þ    cSx1ði þ n1Þ þ cSx1ði þ nÞ
6 7
6 cSx2i þ cSx2ði þ 1Þ cSx2ði þ 1Þ þ cSx2ði þ 2Þ    cSx2ði þ n1Þ þ cSx2ði þ nÞ 7
6 7
6 .. .. .. 7
6 . . . 7
4 5
cSxni þ cSxnði þ 1Þ cSxnði þ 1Þ þ cSxnði þ 2Þ    cSxnði þ n1Þ þ cSxnði þ nÞ

12.1.1 Numerical Examples

Example 12.1 [1] (vide Appendix B, Program  no. 31) Consider the system
 
0 1 0 0
x_ ðtÞ ¼ AxðtÞ þ BuðtÞwhere A¼ B¼ and x0 ¼
2 3 1 0:5
with a unit step forcing function.
This system is identified using Eq. (12.4). The identified elements of system for
increasing number of segments m, are tabulated in Table 12.1.
292 12 System Identification Using State Space Approach …

Table 12.1 Identification of the non-homogeneous system matrix A of Example 12.1 for different
values of m = 4, 10, 12 15, 20, 25 tabulated with the actual elements of A (vide Appendix B,
Program no. 31)
Elements of Exact HF domain solution for
system matrix A values m=4 m = 10 m = 12 m = 15 m = 20 m = 25
a11 0 0.0000 −0.0000 −0.0000 −0.0000 −0.0000 −0.0000
a12 1 0.9948 0.9992 0.9994 0.9996 0.9998 0.9999
a21 −2 −2.0000 −2.0000 −2.0000 −2.0000 −2.0000 −2.0000
a22 −3 −2.9948 −2.9992 −2.9994 −2.9996 −2.9998 −2.9999

Table 12.2 Hybrid function based system identification for Example 12.1 with m = 10
Elements of system Exact HF domain solution for % Error
matrix A values (E) (m ¼ 10Þ (H) e ¼ E H
E  100
a11 0 0.0000 –
a12 1 0.9992 0.0800
a21 −2 −2.0000 0.0000
a22 −3 −2.9992 0.0266
The results are compared with the actual elements of A and corresponding percentage errors are
computed

Table 12.2 shows the comparison of the actual elements of the system matrix
with respective computed elements in HF domain along with percentage errors for
m = 10 and T = 1 s.
Results obtained for Example 12.1, using Eq. (12.4), are plotted in Fig. 12.1. It is
noted that, with an increase in m from 4 to 25, the HF domain solution (black dots)
improves rapidly.

Example 12.2 Consider the system x_ ðtÞ ¼ AxðtÞ þ BuðtÞ where A¼


2 3 2 3 2 3
0 1 0 0 1
4 0 0 1 5; B ¼ 4 0 5 and x0 ¼ 4 0 5 with a unit step forcing
6 11 6 1 0
function.
This system is identified using Eq. (12.4). The results are tabulated in
Table 12.3.
Table 12.4 shows the comparison of the actual elements of the system matrix
with respective computed elements in HF domain along with percentage errors for
m = 12 and T = 1 s.
12.1 Identification of a Non-homogeneous System … 293

Fig. 12.1 Hybrid function


domain system identification
for increasing m, for the
elements a a12 and b a22 (vide
Appendix B, Program no. 31)

Table 12.3 Identification of the non-homogeneous system matrix A of Example 12.2 for different
values of m = 4, 10, 12 15, 20, 25 tabulated with the actual elements of A
Elements of Exact HF domain solution for
system values m=4 m = 10 m = 12 m = 15 m = 20 m = 25
matrix A
a11 0 0.0222 0.0041 0.0028 0.0018 0.0010 0.0007
a12 1 1.0518 1.0091 1.0063 1.0041 1.0023 1.0015
a13 0 0.0282 0.0049 0.0034 0.0022 0.0012 0.0008
a21 0 −0.1423 −0.0246 −0.0172 −0.0110 −0.0062 −0.0040
a22 0 −0.2978 −0.0501 −0.0348 −0.0223 −0.0126 −0.0080
a23 1 0.8813 0.9795 0.9857 0.9908 0.9948 0.9967
a31 −6 −5.3961 −5.8971 −5.9283 −5.9540 −5.9740 −5.9834
a32 −11 −9.7937 −10.8000 −10.8610 −10.9110 −10.9500 −10.9680
a33 −6 −5.5705 −5.9260 −5.9484 −5.9669 −5.9813 −5.9880
294 12 System Identification Using State Space Approach …

Table 12.4 Hybrid function based system identification for Example 12.2 with m = 12
Elements of system Exact HF domain solution for % Error
matrix A values (E) (m = 12) (H) e ¼ E H
E  100
a11 0 0.0028 –
a12 1 1.0063 −0.6300
a13 0 0.0034 –
a21 0 −0.0172 –
a22 0 −0.0348 –
a23 1 0.9857 1.4300
a31 −6 −5.9283 1.1950
a32 −11 −10.8610 1.2636
a33 −6 −5.9484 0.8600
The elements obtained via HF domain are compared with the actual elements of A and
corresponding percentage errors are computed

Results obtained for Example 12.2, using Eq. (12.4), are plotted in Fig. 12.2a–e. It is noted that, with an increase in m from 4 to 25, the HF domain solutions (black dots) improve appreciably.

12.2 Identification of Output Matrix of a Non-homogeneous System [3]
Referring to Eq. (8.31), we can write

y_S = C\,C_{Sx} + D\,C_{Su}
or, \quad C\,C_{Sx} = y_S - D\,C_{Su}
or, \quad C = \left[y_S - D\,C_{Su}\right] C_{Sx}^{-1} \qquad (12.5)

Similarly, from Eq. (8.32), we have

y_T = C\,C_{Tx} + D\,C_{Tu}
or, \quad C = \left[y_T - D\,C_{Tu}\right] C_{Tx}^{-1} \qquad (12.6)

For identification of the output matrix C, we can use either Eq. (12.5) or (12.6). If
we use Eq. (12.5), the dimension of the matrix CSx should be n × n, so that it is
invertible. That is, for n states, we need to expand each state in HF domain and
consider n consecutive samples of each state to form the square matrix CSx .
If we use Eq. (12.6), for identification of the output matrix C, we need (n + 1)
samples of the output y and of each state. These provide the n triangular function
coefficients of each variable when it is expanded in HF domain. This is because,
as before, the dimension of the matrix CTx should
Fig. 12.2 Hybrid function based identification for Example 12.2 for the elements (a) a12, (b) a23, (c) a31, (d) a32 and (e) a33 of A
be such that it is invertible. Meeting this condition will lead to the identification of
the output matrix quite easily.
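As a rough illustration, the following MATLAB fragment sketches the computation implied by Eq. (12.5) for a two-state system with D = 0; the variable names and the assumed state/output data are illustrative only and are not taken from the text.

```matlab
% Sketch of output-matrix identification, Eq. (12.5), assuming D = 0.
% Columns of Xs hold n consecutive samples of the n states; ys holds the
% corresponding output samples.  (Illustrative data only.)
n  = 2;  h = 0.1;  t = h*(0:n-1);
x1 = 1 - exp(-t);   x2 = exp(-t);     % assumed state samples
Xs = [x1; x2];                        % plays the role of C_Sx
C_true = [1 0];
ys = C_true*Xs;                       % output samples y_S
C_est = ys / Xs;                      % C = y_S * inv(C_Sx)
disp(C_est)                           % recovers [1 0]
```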

12.2.1 Numerical Examples

Example 12.3 Consider a system of Example 12.1 having two states and let its
output curve be given by Fig. 12.3. Using three samples from the output curve
and Eq. (12.5) or (12.6), we identify the output matrix as
C = [1 \;\; 0].

Example 12.4 (vide Appendix B, Program no. 32) Consider a system of Example
12.2 having three states and the output curve of Fig. 12.4. From the given output
curve, we consider four samples, and then using Eq. (12.5) or (12.6), we may
identify the output matrix as

C = [4 \;\; 5 \;\; 1].

Fig. 12.3 Hybrid function based identification of system output matrix C using three samples from the exact solution of Example 12.3
Fig. 12.4 Hybrid function based identification of system output matrix C using four samples from the exact solution of Example 12.4 (vide Appendix B, Program no. 32)
12.3 Identification of a Homogeneous System [3]

Similarly, if we like to identify the system matrix A for an n × n homogeneous system, Eq. (12.4) will be modified to

A = \frac{2}{h}
\begin{bmatrix}
c_{Sx1(i+1)} - c_{Sx1i} & c_{Sx1(i+2)} - c_{Sx1(i+1)} & \cdots & c_{Sx1(i+n)} - c_{Sx1(i+n-1)} \\
c_{Sx2(i+1)} - c_{Sx2i} & c_{Sx2(i+2)} - c_{Sx2(i+1)} & \cdots & c_{Sx2(i+n)} - c_{Sx2(i+n-1)} \\
\vdots & \vdots & & \vdots \\
c_{Sxn(i+1)} - c_{Sxni} & c_{Sxn(i+2)} - c_{Sxn(i+1)} & \cdots & c_{Sxn(i+n)} - c_{Sxn(i+n-1)}
\end{bmatrix}
\begin{bmatrix}
c_{Sx1i} + c_{Sx1(i+1)} & c_{Sx1(i+1)} + c_{Sx1(i+2)} & \cdots & c_{Sx1(i+n-1)} + c_{Sx1(i+n)} \\
c_{Sx2i} + c_{Sx2(i+1)} & c_{Sx2(i+1)} + c_{Sx2(i+2)} & \cdots & c_{Sx2(i+n-1)} + c_{Sx2(i+n)} \\
\vdots & \vdots & & \vdots \\
c_{Sxni} + c_{Sxn(i+1)} & c_{Sxn(i+1)} + c_{Sxn(i+2)} & \cdots & c_{Sxn(i+n-1)} + c_{Sxn(i+n)}
\end{bmatrix}^{-1}
\qquad (12.7)
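A minimal MATLAB sketch of Eq. (12.7) is given below for a second-order homogeneous system; the test matrix, step size and initial state are assumed for illustration, and the state samples are generated from the exact state transition matrix.

```matlab
% Sketch of Eq. (12.7): A estimated from n+1 consecutive state samples of a
% homogeneous system.  (A_true, h and x0 below are illustrative only.)
A_true = [0 1; -2 -3];  h = 0.01;  n = 2;
x = zeros(n, n+1);  x(:,1) = [1; 0];
for k = 1:n
    x(:,k+1) = expm(A_true*h)*x(:,k);        % exact samples x(t_k)
end
DX = x(:,2:end) - x(:,1:end-1);              % columns c_Sx(i+1) - c_Sx(i)
SX = x(:,2:end) + x(:,1:end-1);              % columns c_Sx(i) + c_Sx(i+1)
A_est = (2/h)*DX/SX;                         % Eq. (12.7)
disp(A_est)                                  % close to A_true for small h
```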

12.4 Identification of Output Matrix of a Homogeneous System [3]
Referring to Eq. (12.5), we can identify the output matrix of a homogeneous system by setting the direct coupling matrix D to zero, and we get

C = y_S\,C_{Sx}^{-1} \qquad (12.8)

Similarly, from Eq. (12.6), we have

C = y_T\,C_{Tx}^{-1} \qquad (12.9)

12.5 Identification of a Non-homogeneous System with Jump Discontinuity at Input
Equation (12.4) is derived in HF domain and is based upon the conventional HF technique. That is, it is essentially the HFc based approach and it has one inconvenience: it comes up with utterly erroneous results for identification, shown later,
if the samples are chosen from the zones where the jumps have occurred. And for
system analysis as well, vide Figs. 8.16 and 8.17, and Table 8.16, we ended up with
unacceptable errors in the results due to jump discontinuities in the inputs.
But we can modify Eq. (12.4) to make it suitable for the HFm based approach, so
that it can come up with good results in spite of the jump discontinuities. The

modification is quite simple in the sense that in the RHS of Eq. (12.4), all the
triangular function coefficient matrices associated with the matrix B have to be
modified as discussed in Chap. 3, Eq. (3.12). That is, all the CTTu ’s in (12.4) are to be
replaced by C0 TTu , where C0 TTu , CTTu JkðmÞ . This modification will come up with good
results for system identification, shown in the following, even if we choose the
samples of the states from the region containing the jump. It will also be shown that
if we take the samples of the inputs from a region excluding the jump portion, and
corresponding samples of the states, the results of identification through any of the
HFc or HFm approach will have no difference at all.
The HFm approach shows its usefulness only when the samples are selected from the jump region. Thus, with the HFm approach we need not be careful about the region from which the samples are selected. But, as delineated, we do have to be careful in selecting the samples while using the HFc based approach, because it is no different from the conventional HF domain analysis.
From (12.4), in HFc approach, we write

A = \Bigg\{ \frac{2}{h}
\begin{bmatrix}
c_{Sx1(i+1)} - c_{Sx1i} & \cdots & c_{Sx1(i+n)} - c_{Sx1(i+n-1)} \\
\vdots & & \vdots \\
c_{Sxn(i+1)} - c_{Sxni} & \cdots & c_{Sxn(i+n)} - c_{Sxn(i+n-1)}
\end{bmatrix}
- 2B\Big[ \big(C_{Su}^{T} + \tfrac{1}{2}C_{Tu}^{T}\big)_{i} \;\; \cdots \;\; \big(C_{Su}^{T} + \tfrac{1}{2}C_{Tu}^{T}\big)_{i+n-1} \Big] \Bigg\}
\begin{bmatrix}
c_{Sx1i} + c_{Sx1(i+1)} & \cdots & c_{Sx1(i+n-1)} + c_{Sx1(i+n)} \\
\vdots & & \vdots \\
c_{Sxni} + c_{Sxn(i+1)} & \cdots & c_{Sxn(i+n-1)} + c_{Sxn(i+n)}
\end{bmatrix}^{-1}
\qquad (12.10)

Whereas from (12.4), in HFm approach, we have

A = \Bigg\{ \frac{2}{h}
\begin{bmatrix}
c_{Sx1(i+1)} - c_{Sx1i} & \cdots & c_{Sx1(i+n)} - c_{Sx1(i+n-1)} \\
\vdots & & \vdots \\
c_{Sxn(i+1)} - c_{Sxni} & \cdots & c_{Sxn(i+n)} - c_{Sxn(i+n-1)}
\end{bmatrix}
- 2B\Big[ \big(C_{Su}^{T} + \tfrac{1}{2}C_{Tu}^{\prime T}\big)_{i} \;\; \cdots \;\; \big(C_{Su}^{T} + \tfrac{1}{2}C_{Tu}^{\prime T}\big)_{i+n-1} \Big] \Bigg\}
\begin{bmatrix}
c_{Sx1i} + c_{Sx1(i+1)} & \cdots & c_{Sx1(i+n-1)} + c_{Sx1(i+n)} \\
\vdots & & \vdots \\
c_{Sxni} + c_{Sxn(i+1)} & \cdots & c_{Sxn(i+n-1)} + c_{Sxn(i+n)}
\end{bmatrix}^{-1}
\qquad (12.11)
Similarly, if we want to identify the system matrix A for an n × n homogeneous system, we simply put B = 0 in Eq. (12.11) and get

A = \frac{2}{h}
\begin{bmatrix}
c_{Sx1(i+1)} - c_{Sx1i} & \cdots & c_{Sx1(i+n)} - c_{Sx1(i+n-1)} \\
\vdots & & \vdots \\
c_{Sxn(i+1)} - c_{Sxni} & \cdots & c_{Sxn(i+n)} - c_{Sxn(i+n-1)}
\end{bmatrix}
\begin{bmatrix}
c_{Sx1i} + c_{Sx1(i+1)} & \cdots & c_{Sx1(i+n-1)} + c_{Sx1(i+n)} \\
\vdots & & \vdots \\
c_{Sxni} + c_{Sxn(i+1)} & \cdots & c_{Sxn(i+n-1)} + c_{Sxn(i+n)}
\end{bmatrix}^{-1}
\qquad (12.12)
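To fix ideas, the sketch below applies the HFc relation of Eq. (12.10) to a second-order system with a single step input. For one input, the term (C_Su^T + ½C_Tu^T)_i reduces to the average (u_i + u_{i+1})/2 of two consecutive input samples, so each sample pair satisfies (2/h)(x_{i+1} − x_i) = A(x_i + x_{i+1}) + B(u_i + u_{i+1}). The system data used below are assumed for illustration only and are not taken from the text.

```matlab
% Sketch of the HFc identification relation, Eq. (12.10), for a step input.
% (A_true, B, x0 and h below are illustrative, not taken from the text.)
A_true = [0 1; -2 -3];  B = [0; 1];  h = 0.1;  n = 2;
x = zeros(n, n+1);  x(:,1) = [1; 0];  u = ones(1, n+1);     % unit step input
Phi = expm(A_true*h);  Gam = A_true\(Phi - eye(n))*B;       % exact sampling
for k = 1:n
    x(:,k+1) = Phi*x(:,k) + Gam*u(k);
end
DX = x(:,2:end) - x(:,1:end-1);              % state sample differences
SX = x(:,2:end) + x(:,1:end-1);              % state sample sums
US = u(1:end-1) + u(2:end);                  % u_i + u_{i+1}
A_est = ((2/h)*DX - B*US)/SX;                % Eq. (12.10)
disp(A_est)                                  % close to A_true
```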

12.5.1 Numerical Examples

Example 12.5 Consider the non-homogeneous system \dot{x}(t) = Ax(t) + Bu(t) + Bu(t-a), where

A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad x(0) = \begin{bmatrix} 0 \\ 0.5 \end{bmatrix},

u(t) is a unit step function and u(t − a) is a delayed unit step function.
The system has the solution

x_1(t) = \tfrac{1}{2}\left[1 - \exp(-t)\right] + \tfrac{1}{2}\left[1 - 2\exp(-(t-a)) + \exp(-2(t-a))\right]u(t-a)
x_2(t) = \tfrac{1}{2}\exp(-t) + \left[\exp(-(t-a)) - \exp(-2(t-a))\right]u(t-a)

Knowing the states and inputs, this system can be identified in HF domain.
It may be noted that the input to the system has a jump discontinuity. Though we
can represent this function using conventional HF set, the approximation is not
exact due to the jump. However if we employ the HFm based approximation
technique, the input function may be represented in an exact manner. So we use
both the approximation techniques and take help of Eqs. (12.10) and (12.11) to
identify the system. Here, we take a = 0.5 s. It is expected that the HFm approach
will bring out much better results than the HFc approach.
The results obtained via the HFc approach and HFm approach, for five different
values of m, are tabulated in Table 12.5a, b respectively, where the samples of the
inputs and corresponding states are purposely selected from the jump region. It is
noted that for the HFc approach the effort ends in a fiasco, producing results which
are unreliable. But the HFm approach identifies the system with much less error.
However, if the samples are chosen from a region excluding the sub-interval containing the jump point, both approaches yield identical identification results. Table 12.6 presents these results.
From the data presented in Table 12.5b, we show a typical comparison of the actual elements of the system matrix with the respective elements computed using the HFm based approach for m = 20, in Table 12.7.
Percentage errors are computed to figure out the efficiency of the method
through quantitative estimates. To have an idea about the behavior of error with
increasing m, for both HFc and HFm approach, we compute percentage errors of all
Table 12.5 Identification of the system of Example 12.5 using the (a) HFc approach and the (b) HFm approach for different values of m = 8, 10, 20, 40 and 50 for T = 1 s, with the samples chosen from the jump region

(a) HFc based approach
Element | Exact value | m = 8 | m = 10 | m = 20 | m = 40 | m = 50
a11 | 0 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
a12 | 1 | 0.9987 | 0.9992 | 0.9998 | 0.9999 | 1.0000
a21 | −2 | −10.5104 | −12.5083 | −22.5042 | −42.5021 | −52.5017
a22 | −3 | 0.1006 | 1.3859 | 7.8492 | 20.8118 | 27.2967

(b) HFm based approach
Element | Exact value | m = 8 | m = 10 | m = 20 | m = 40 | m = 50
a11 | 0 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
a12 | 1 | 0.9987 | 0.9992 | 0.9998 | 0.9999 | 1.0000
a21 | −2 | −2.0000 | −2.0000 | −2.0000 | −2.0000 | −2.0000
a22 | −3 | −2.9987 | −2.9992 | −2.9998 | −2.9999 | −3.0000

The results are compared with the actual elements of A. It is noted that the results obtained via the HFc approach are simply unreliable, while those obtained via the HFm approach are reasonably accurate

Table 12.6 Identification of the system of Example 12.5, considering the samples chosen from a region excluding the sub-interval containing the jump point

Element | Exact value | HFc or HFm based approach, m = 8 | m = 10 | m = 20 | m = 40 | m = 50
a11 | 0 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
a12 | 1 | 0.9987 | 0.9992 | 0.9998 | 0.9999 | 1.0000
a21 | −2 | −2.0000 | −2.0000 | −2.0000 | −2.0000 | −2.0000
a22 | −3 | −2.9987 | −2.9992 | −2.9998 | −2.9999 | −3.0000

It is noted that in such a case both approaches yield the same results. These are compared with the actual elements of the system matrix A for different values of m = 8, 10, 20, 40 and 50 with T = 1 s

Table 12.7 Identification of the system of Example 12.5 with comparison of actual elements of A with corresponding elements computed via HFm approach, for m = 20, T = 1 s

Element | Exact value (E) | Values obtained via HFm approach for m = 20 (H) | % error ε = [(E − H)/E] × 100
a11 | 0 | 0.0000 | –
a12 | 1 | 0.9998 | 0.0200
a21 | −2 | −2.0000 | 0.0000
a22 | −3 | −2.9998 | 0.0067

Fig. 12.5 Error in system identification for increasing m (m = 8, 10, 20, 40 and 50) using the (a) HFc approach and the (b) HFm approach for Example 12.5, with T = 1 s. It is noted that the error is large and increases with increasing m for the HFc approach, while for the HFm approach, the error is comparatively much smaller and decreases steadily and rapidly with increasing m
the elements of a system matrix A of order n for any particular value of m and calculate the AMP error, vide Eq. (4.45). Only in this case, the number of elements in the denominator will be r = n².
Figure 12.5 is drawn to show the variation of εav(r) with m for both the HFc and
HFm approaches. It is noted that for the HFc approach, εav(r) is large and increases
with increasing m, but for the HFm approach, it is comparatively much smaller and
decreases steadily and rapidly with increasing m.
Example 12.6 (vide Appendix B, Program no. 33) Consider the non-homogeneous system \dot{x}(t) = Ax(t) + Bu_1(t) + Bu_2(t-a), where

A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad x(0) = \begin{bmatrix} 0 \\ 0.5 \end{bmatrix},

u_1(t) is a ramp function and u_2(t − a) is a delayed unit step function, having the jump at t = a s.
The system has the solution

x_1(t) = -\tfrac{3}{4} + \tfrac{1}{2}t + \tfrac{3}{2}\exp(-t) - \tfrac{3}{4}\exp(-2t) + \tfrac{1}{2}\left[1 - 2\exp(-(t-a)) + \exp(-2(t-a))\right]u(t-a)
x_2(t) = \tfrac{1}{2} - \tfrac{3}{2}\exp(-t) + \tfrac{3}{2}\exp(-2t) + \left[\exp(-(t-a)) - \exp(-2(t-a))\right]u(t-a)

Knowing the states and inputs, this system can be identified in HF domain.
It may be noted that the input to the system has a jump discontinuity which can
not be represented in an exact manner if we approximate this function using con-
ventional HF set. Here, even if we employ the HFm based approximation technique,
the input function can not be represented exactly, because of its nature. Anyway, we

use both the approximation techniques to identify the system using Eqs. (12.10)
and (12.11). Here, we take a = 0.5 s. Here also, we expect that the HFm approach
will produce better result compared to the HFc approach. However, if we employ
the combined HFc and HFm technique, the input function can be approximated in an
exact fashion as was done for the function of Example 3.5, vide Eq. (3.18). For the
combined HFc and HFm technique, we expect the best results of the three approaches.
The inputs u1 ðtÞ and u2 ðt  aÞ to the system are basically a combination of a
ramp function and a delayed step function. Such a function can be expanded in HF
domain using a combination of HFc and HFm approaches, as was done in Eq. (3.18)
in Sect. 3.5.2. In such a case, to solve the identification problem via Eq. (12.11), the
C0 TTu matrix has to be modified accordingly.
Knowing the states and inputs, this system can be identified in HF domain using
the Eqs. (12.10) and (12.11). That is, we identify the system using both the HFc and
HFm based approaches.
The results obtained via the HFc, HFm and the combined approaches, for five
different values of m, are tabulated in Table 12.8a, b, c respectively, where the
samples of the inputs and corresponding states are purposely selected from the jump
region. It is noted that for the HFc approach the effort ends in a fiasco, producing

Table 12.8 Identification of the system of Example 12.6 using the (a) HFc approach, (b) HFm approach and (c) the combination of HFc and HFm approaches, for different values of m = 8, 10, 20, 40 and 50 for T = 1 s, with the samples chosen from the jump region

(a) HFc based approach
Element | Exact value | m = 8 | m = 10 | m = 20 | m = 40 | m = 50
a11 | 0 | 0.0022 | 0.0019 | 0.0008 | 0.0002 | 0.0002
a12 | 1 | 0.9858 | 0.9905 | 0.9974 | 0.9993 | 0.9995
a21 | −2 | −12.6762 | −15.4476 | −30.1201 | −60.4888 | −75.7946
a22 | −3 | 2.1893 | 4.7296 | 18.4059 | 46.9492 | 61.3603

(b) HFm based approach
Element | Exact value | m = 8 | m = 10 | m = 20 | m = 40 | m = 50
a11 | 0 | 0.0022 | 0.0019 | 0.0008 | 0.0002 | 0.0002
a12 | 1 | 0.9858 | 0.9905 | 0.9974 | 0.9993 | 0.9995
a21 | −2 | −0.6663 | −0.6572 | −0.5952 | −0.5382 | −0.5244
a22 | −3 | −3.6110 | −3.7482 | −4.0637 | −4.2470 | −4.2861

(c) Combination of HFc and HFm based approaches
Element | Exact value | m = 8 | m = 10 | m = 20 | m = 40 | m = 50
a11 | 0 | 0.0022 | 0.0019 | 0.0008 | 0.0002 | 0.0002
a12 | 1 | 0.9858 | 0.9905 | 0.9974 | 0.9993 | 0.9995
a21 | −2 | −2.0007 | −2.0018 | −2.0012 | −2.0004 | −2.0003
a22 | −3 | −2.9665 | −2.9775 | −2.9937 | −2.9983 | −2.9989

The results are compared with the actual elements of A. It is noted that the elements computed via the HFc approach are simply unrecognizable. So is the case for identification via the HFm approach. But use of the combined approach (HFc and HFm) comes up with reasonably accurate results (vide Appendix B, Program no. 33)

results which are unrecognizable. The HFm approach, though it ends up with much less error than the HFc approach, also fails to identify the system. However,
since the combined approach can represent the input function exactly, it identifies
the system with much less error.
Figure 12.6 is drawn to show the variation of AMP error with different values of
m (m = 8, 10, 20, 40 and 50) for the HFc, HFm and the combined approaches.
However, if the samples are chosen from a region excluding the sub-interval containing the jump point, all the approaches yield the same identification results. Table 12.9 presents these results.

Fig. 12.6 Error in system identification for increasing m (m = 8, 10, 20, 40 and 50) using the (a) HFc approach, (b) HFm approach and (c) combined approach, for Example 12.6, with T = 1 s. It is noted that for the HFc approach the error is large, and it increases linearly with increasing m. So is the case for the HFm approach, though the rate of increase is much lower. But for the combined approach (HFc and HFm), the error is much smaller in comparison and decreases steadily and rapidly with increasing m (vide Appendix B, Program no. 33)
Table 12.9 Identification of the system of Example 12.6, considering the samples chosen from a region excluding the sub-interval containing the jump point

Element | Exact value | HFc or HFm based approach, m = 8 | m = 10 | m = 20 | m = 40 | m = 50
a11 | 0 | 0.2829 | 0.0493 | 0.0059 | 0.0012 | 0.0008
a12 | 1 | 0.8195 | 0.9711 | 0.9972 | 0.9995 | 0.9997
a21 | −2 | −2.5797 | −2.1011 | −2.0120 | −2.0025 | −2.0016
a22 | −3 | −2.6303 | −2.9409 | −2.9942 | −2.9989 | −2.9993

It is noted that in such a case all the approaches yield the same results. These are compared with the actual elements of the system matrix A for different values of m = 8, 10, 20, 40 and 50 with T = 1 s (vide Appendix B, Program no. 33)

12.6 Conclusion

In this chapter the system identification problem has been tackled. For an
(n × n) system, the system matrix A is determined considering (n + 1) number of
consecutive samples of the states, the input vector B and the input function. The
output matrix C is determined from the knowledge of (n + 1) number of consec-
utive samples of the output function y and those of the states.
Also in this chapter, we have used the modified approach of hybrid function
domain, as discussed in Chap. 3, for identification of non-homogeneous systems
involving jump discontinuity in the applied input, thus affecting the system states.
Equation (12.10) presents the equation for solving the elements of the system matrix A via the HFc approach. Equation (12.11) is the ultimate equation for solving the elements of A using the HFm approach.
It is noted that both the equations work with the samples of the states and the
inputs. And also, while using these samples for obtaining the elements of A, the
samples of the states are always to be considered in pairs. This fact is evident from
the structure of the Eqs. (12.10) and (12.11).
If the system matrix has a dimension of n, the number of sample pairs to be
considered for determining the solution is n pairs. If consecutive pairs are chosen,
then number of samples involved will be (n + 1). But if we choose the pairs
randomly from the sample sequence, then obviously, the number of samples will
become 2n. However, in both the cases, matching samples are to be chosen from
the input functions.
The major difference between (12.10) and (12.11) is, while (12.11) can work
effectively with samples chosen from anywhere within the time interval under
consideration, Eq. (12.10) has restrictions. That is, if the samples are chosen from
the jump portions of any of the functions, it will simply fail to compute the elements
of A. Should the selection of samples be made from non-jump portions, both the
Eqs. (12.10) and (12.11) produce the same result.
All these points are presented via Tables 12.5 and 12.6.
Figure 12.5a, b present the variation of AMP error with m for Example 12.5. It is
noted that though the samples are chosen intentionally from the jump portion, the
error is large and increases with m for the HFc approach, but for the HFm approach,
the error is comparatively much smaller and decreases steadily and rapidly with m.
Similarly, for Example 12.6, the behavior of AMP error with m is illustrated
using Tables 12.8 and 12.9, and Fig. 12.6, when the samples are chosen from the
jump portion. It is noted that for the HFc approach the error is large and increases
linearly with increasing m. So is the case for error for the HFm approach, though the
rate of increase is much less. But for the combined approach (HFc and HFm), the
error is not only much less compared to the other two methods, but decreases
rapidly with increasing m.
The above results prove beyond any doubt that when handling staircase functions with jump discontinuities, the HF based modified approach is superior to the HF based conventional approach. However, for jumps of a different nature, the combined HFc and HFm approach is much more suitable than either the HFc or the HFm approach alone. This is because good results depend upon proper approximation of the jump functions at the input. These facts are useful for identification of control systems with jump discontinuities in the input functions.

References

1. Ogata, K.: Modern Control Engineering, 5th edn. Prentice Hall of India, New Delhi (2011)
2. Ogata, K.: System Dynamics, 4th edn. Pearson Education, Upper Saddle River (2004)
3. Roychoudhury, S., Deb, A., Sarkar, G.: Analysis and synthesis of
homogeneous/non-homogeneous control systems via orthogonal hybrid functions (HF) under
states space environment. J. Inf. Optim. Sci. 35(5–6), 431–482 (2014)
4. Rao, G.P.: Piecewise Constant Orthogonal Functions and Their Application to Systems and
Control, LNCIS Series, vol. 55. Springer, Berlin (1983)
5. Jiang, J.H., Schaufelberger, W.: Block Pulse Functions and Their Application in Control
System, LNCIS, vol. 179. Springer, Berlin (1992)
6. Unbehauen, H., Rao, G.P.: Identification of Continuous Systems. North-Holland, Amsterdam
(1987)
Chapter 13
System Identification Using State Space
Approach: Time Varying Systems

Abstract This chapter discusses HF domain identification of time varying systems in state space. Both homogeneous and non-homogeneous systems are treated. Illustration has been provided with the support of three examples, ten figures and two tables.

As the name suggests, in this chapter we deal with linear time varying (LTV) control systems [1]. We intend to identify two types of LTV systems in the hybrid function platform, namely non-homogeneous systems and homogeneous systems [1, 2]. For a time-varying system, the system parameters vary with time.
The HF domain identification [3] pivots upon the samples of the involved functions. In today's digital world, this is an important advantage.
Here, we solve the problem of system identification [4, 5], both for
non-homogeneous systems and homogeneous systems, described as state space
models, by using the concept of HF domain. First, we take up the problem of
identification of a non-homogeneous time varying system, because after putting the
specific condition of zero forcing function in the solution, we can arrive at the result
of identification of a homogeneous system quite easily.

13.1 Identification of a Non-homogeneous System [3]

Consider the system given by Eq. (9.1). Its time varying system matrix A(t) is given by

A(t) \triangleq \begin{bmatrix}
a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\
a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\
\vdots & \vdots & & \vdots \\
a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t)
\end{bmatrix}

To identify the elements of A(t), we take help of the first equation of the equation set (9.12) and also the first equation of the equation set (9.15). Thus, we have

(x_{11} - x_{10}) = \frac{h}{2}\left[ \sum_{j=1}^{n}\left( a_{1j0}x_{j0} + a_{1j1}x_{j1} \right) + (b_{10}u_0 + b_{11}u_1) \right] \qquad (13.1)

and

(x_{12} - x_{10}) = \frac{h}{2}\left[ \sum_{j=1}^{n} a_{1j0}x_{j0} + 2\sum_{j=1}^{n} a_{1j1}x_{j1} + \sum_{j=1}^{n} a_{1j2}x_{j2} + (b_{10}u_0 + b_{11}u_1) + (b_{11}u_1 + b_{12}u_2) \right] \qquad (13.2)

For a second order system, we make use of the generalized Eqs. (13.1) and (13.2), and solve for the off-diagonal elements a_{12}(t) and a_{21}(t) as

a_{12(k+1)} = \frac{1}{x_{2(k+1)}}\left[\left(\frac{2}{h} - a_{11(k+1)}\right)x_{1(k+1)} - \left(\frac{2}{h} + a_{11k}\right)x_{1k} - a_{12k}x_{2k} - \left(b_{1k}u_{k} + b_{1(k+1)}u_{(k+1)}\right)\right] \qquad (13.3)

a_{21(k+1)} = \frac{1}{x_{1(k+1)}}\left[\left(\frac{2}{h} - a_{22(k+1)}\right)x_{2(k+1)} - \left(\frac{2}{h} + a_{22k}\right)x_{2k} - a_{21k}x_{1k} - \left(b_{2k}u_{k} + b_{2(k+1)}u_{(k+1)}\right)\right] \qquad (13.4)

It can be shown that, for an nth order system, the off-diagonal elements can be represented by the following general form:

a_{ij(k+1)} = \frac{1}{x_{j(k+1)}}\Bigg[\left(\frac{2}{h} - a_{ii(k+1)}\right)x_{i(k+1)} - \left(\frac{2}{h} + a_{iik}\right)x_{ik} - \sum_{\substack{l \ne j \\ l \ne i}}^{n} a_{il(k+1)}x_{l(k+1)} - \sum_{l \ne i}^{n} a_{ilk}x_{lk} - \left(b_{ik}u_{k} + b_{i(k+1)}u_{(k+1)}\right)\Bigg] \qquad (13.5)

From Eqs. (13.1) and (13.2), the diagonal elements a_{11}(t) and a_{22}(t) can be solved as

a_{11(k+1)} = \frac{2}{h} - \frac{1}{x_{1(k+1)}}\left[\left(\frac{2}{h} + a_{11k}\right)x_{1k} + a_{12k}x_{2k} + a_{12(k+1)}x_{2(k+1)} + \left(b_{1k}u_{k} + b_{1(k+1)}u_{(k+1)}\right)\right] \qquad (13.6)

a_{22(k+1)} = \frac{2}{h} - \frac{1}{x_{2(k+1)}}\left[a_{21k}x_{1k} + \left(\frac{2}{h} + a_{22k}\right)x_{2k} + a_{21(k+1)}x_{1(k+1)} + \left(b_{2k}u_{k} + b_{2(k+1)}u_{(k+1)}\right)\right] \qquad (13.7)

For an nth order system, the diagonal elements can be represented by the following general form:

a_{ii(k+1)} = \frac{2}{h} - \frac{1}{x_{i(k+1)}}\left[\left(\frac{2}{h} + a_{iik}\right)x_{ik} + \sum_{l \ne i}^{n} a_{il(k+1)}x_{l(k+1)} + \sum_{l \ne i}^{n} a_{ilk}x_{lk} + \left(b_{ik}u_{k} + b_{i(k+1)}u_{(k+1)}\right)\right] \qquad (13.8)

Using above relevant equations, all unknown parameters of the system can be
computed.
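As a rough illustration of how these relations are applied sample by sample, the MATLAB fragment below recovers a single unknown off-diagonal element a21(t) of a second-order system in which the remaining row elements and b2 are assumed known (and zero); the state data and the known starting value a21(0) are illustrative only.

```matlab
% Sketch of the sample-wise recursion behind Eq. (13.5) for one unknown
% element a21(t), with a22(t) = 0 and b2 = 0 assumed known.  Row 2 of the
% HF-domain relation then gives
%   a21(k+1) = [ (2/h)(x2(k+1)-x2(k)) - a21(k)*x1(k) ] / x1(k+1)
% Illustrative states: x1(t) = 1 + t, x2(t) = 1 + t^2/2 + t^3/3, so a21(t) = t.
m = 8;  T = 1;  h = T/m;  t = 0:h:T;
x1 = 1 + t;
x2 = 1 + t.^2/2 + t.^3/3;
a21 = zeros(1, m+1);                  % a21 at t = 0 assumed known (= 0)
for k = 1:m
    a21(k+1) = ((2/h)*(x2(k+1) - x2(k)) - a21(k)*x1(k)) / x1(k+1);
end
disp([t; a21])                        % a21(t_k) estimates, close to t_k
```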

13.1.1 Numerical Examples

Example 13.1 (vide Appendix B, Program no. 34) Consider the non-homogeneous system \dot{x} = Ax + Bu, where

A = \begin{bmatrix} 0 & 0 \\ t & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad x(0) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}

and u = u(t), a unit step function.
The solution of the equation is

x_1(t) = 1 + t

and

x_2(t) = 1 + \frac{t^2}{2} + \frac{t^3}{3}

Identification of the given system parameter produces the following results.
Table 13.1 shows the comparison of the exact samples of a21(t) and the samples obtained in HF domain using Eq. (13.5), along with the percentage errors of the respective samples, for m = 8 and T = 1 s.
Table 13.1 Identification of the non-homogeneous system parameter a21(t) of Example 13.1 in HF domain compared with its exact samples along with percentage error at different sample points for m = 8 and T = 1 s (vide Appendix B, Program no. 34)

t (s) | Samples of the exact solution | Samples from HF domain synthesis | % error
0 | 0.00000000 | 0.00000000 | –
1/8 | 0.12500000 | 0.12037037 | 3.70370370
2/8 | 0.25000000 | 0.24999999 | 0.00000000
3/8 | 0.37500000 | 0.37121212 | 1.01010101
4/8 | 0.50000000 | 0.49999999 | 0.00000000
5/8 | 0.62500000 | 0.62179487 | 0.51282051
6/8 | 0.75000000 | 0.75000000 | 0.00000000
7/8 | 0.87500000 | 0.87222222 | 0.31746031
8/8 | 1.00000000 | 1.00000000 | 0.00000000

Fig. 13.1 Identification of a21(t) of the system matrix of Example 13.1 via HF domain for m = 8 and T = 1 s (vide Appendix B, Program no. 34)
Figure 13.1 compares graphically the exact curve for a21(t) with its hybrid
function domain solution. Though the sample points obtained via HF domain
technique seem to be reasonably close with the exact curve, a scrutiny of the error
column of Table 13.1 reveals that there is a slight tendency of oscillation in the HF
domain results. This possibly is due to numerical instability in the computation.
Such oscillation turned out to be predominant for another case study presented in
the following.
13.2 Identification of a Homogeneous System [3]

Similarly, if we like to identify the time-varying system matrix A(t) for an n × n homogeneous system, Eqs. (13.5) and (13.8) will be modified to

a_{ij(k+1)} = \frac{1}{x_{j(k+1)}}\Bigg[\left(\frac{2}{h} - a_{ii(k+1)}\right)x_{i(k+1)} - \left(\frac{2}{h} + a_{iik}\right)x_{ik} - \sum_{\substack{l \ne j \\ l \ne i}}^{n} a_{il(k+1)}x_{l(k+1)} - \sum_{l \ne i}^{n} a_{ilk}x_{lk}\Bigg] \qquad (13.9)

It can be shown that, for an nth order homogeneous time-varying system, the
off-diagonal elements can be identified by the Eq. (13.9).
For an nth order homogeneous time-varying system, the diagonal elements can be represented by the following general form:

a_{ii(k+1)} = \frac{2}{h} - \frac{1}{x_{i(k+1)}}\left[\left(\frac{2}{h} + a_{iik}\right)x_{ik} + \sum_{l \ne i}^{n} a_{il(k+1)}x_{l(k+1)} + \sum_{l \ne i}^{n} a_{ilk}x_{lk}\right] \qquad (13.10)

Using the above relevant equations, all unknown parameters of the homogeneous system can be computed.
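In the same spirit, a diagonal element can be tracked with Eq. (13.10); the short MATLAB sketch below assumes a second-order homogeneous system in which the other element of the row is known (zero), and uses illustrative state data.

```matlab
% Sketch of the recursion of Eq. (13.10) for a diagonal element a22(t) of a
% homogeneous system, with a21(t) = 0 assumed known.  Row 2 gives
%   a22(k+1) = 2/h - (2/h + a22(k))*x2(k)/x2(k+1)
% Illustrative state: x2(t) = exp(t^2/2), for which a22(t) = t.
m = 10;  T = 1;  h = T/m;  t = 0:h:T;
x2 = exp(t.^2/2);
a22 = zeros(1, m+1);                  % a22 at t = 0 assumed known (= 0)
for k = 1:m
    a22(k+1) = 2/h - (2/h + a22(k))*x2(k)/x2(k+1);
end
disp([t; a22])                        % a22(t_k) estimates, close to t_k
```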

13.2.1 Numerical Examples

Example 13.2 (vide Appendix B, Program no. 35) Consider the time-varying homogeneous system \dot{x} = Ax, where

A = \begin{bmatrix} \cos(t) & \sin(t) \\ -\sin(t) & \cos(t) \end{bmatrix} \quad \text{and} \quad x(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix},

having the solution

x_1(t) = \left(\cos(1 - \cos t) + 2\sin(1 - \cos t)\right)e^{\sin t}

and

x_2(t) = \left(-\sin(1 - \cos t) + 2\cos(1 - \cos t)\right)e^{\sin t}

Identification of the given system parameters produces results shown in Figs. 13.2 through 13.6.
Fig. 13.2 Identification of a11(t) of the system matrix of Example 13.2 via HF domain for m = 40 and T = 5 s (vide Appendix B, Program no. 35)

Fig. 13.3 Identification of a21(t) of the system matrix of Example 13.2 via HF domain for m = 40 and T = 6 s (vide Appendix B, Program no. 35)

For the given homogeneous system, Figs. 13.2 and 13.3 illustrate the identified
parameters a11(t) and a21(t) in HF domain for m = 40 and over a time period of 5
and 6 s respectively.
But interestingly, if instead of a11(t) and a21(t), we attempt to identify the elements
a12(t) and a22(t), we are met with numerical instability. That is, knowing a11(t) and
a21(t), the HF domain solution of a12(t) shows somewhat erratic results even for
m = 50 over a 20 s interval. The nature of instability is illustrated in Fig. 13.4.
To investigate this phenomenon, we have computed the same result with another
higher value of m, namely, m = 100, for a time period T = 20 s. Figure 13.5 shows
the results. While numerical instability is encountered, no specific pattern of such
instability emerged from the study.
Similarly, the effort to identify the parameter a22(t) ends up with the same fate.
Figure 13.6 demonstrates this instability.
Fig. 13.4 Identification of a12(t) of the system matrix of Example 13.2 via HF domain is met with numerical instability for m = 50 and T = 20 s

Fig. 13.5 Identification of a12(t) of the system matrix of Example 13.2 again shows numerical instability for m = 100 and T = 20 s, though the pattern is different from the other two cases


Example 13.3 Consider the time-varying homogeneous system \dot{x} = Ax, where

A = \begin{bmatrix} 0 & 1 \\ 0 & t \end{bmatrix} \quad \text{and} \quad x(0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix},

having the solution

x_1(t) = \ln\left\{\left(\frac{2+t}{2-t}\right)^{2}\right\}\exp(t)

and

x_2(t) = \exp\left(\frac{t^2}{2}\right)
Fig. 13.6 Identification of a22(t) of the system matrix of Example 13.2 via HF domain causes numerical instability for m = 100 and T = 20 s

Fig. 13.7 Identification of a12(t) of the system matrix of Example 13.3 via HF domain, for m = 10 and T = 1 s

Identification of the given system parameters produces results shown in Figs. 13.7 through 13.9.
For the given homogeneous system, Figs. 13.7 and 13.8 illustrate the identified
parameters a12(t) and a22(t) in HF domain for m = 10 and over a time period of 1 s.
It seems that the results are readily acceptable.
To study the results more minutely, Table 13.2 is presented where the exact
samples of a12(t) and a22(t) are compared with the samples computed via HF
domain analysis using Eqs. (13.9) and (13.10), for m = 10, T = 1 s.
Fig. 13.8 Identification of a22(t) of the system matrix of Example 13.3 via HF domain, for m = 10 and T = 1 s
Table 13.2 Identification of the system parameters a12(t) and a22(t) of Example 13.3 in HF domain compared with their exact samples at different sample points for m = 10 and T = 1 s

t (s) | a12(t) exact samples | a12(t) from HF domain synthesis | a22(t) exact samples | a22(t) from HF domain synthesis
0 | 1.00000000 | 1.00000000 | 0.00000000 | 0.00000000
1/10 | 1.00000000 | 0.99833417 | 0.10000000 | 0.09975042
2/10 | 1.00000000 | 0.99995041 | 0.20000000 | 0.19949588
3/10 | 1.00000000 | 0.99830032 | 0.30000000 | 0.29923145
4/10 | 1.00000000 | 0.99980359 | 0.40000000 | 0.39895217
5/10 | 1.00000000 | 0.99822872 | 0.50000000 | 0.49865310
6/10 | 1.00000000 | 0.99956392 | 0.60000000 | 0.59832929
7/10 | 1.00000000 | 0.99810797 | 0.70000000 | 0.69797582
8/10 | 1.00000000 | 0.99922356 | 0.80000000 | 0.79758775
9/10 | 1.00000000 | 0.99788543 | 0.90000000 | 0.89716016
10/10 | 1.00000000 | 0.99868608 | 1.00000000 | 0.99668814

Interestingly, we find that though the sample points obtained for a12(t) via the HF domain technique seem to be extremely close to the exact samples, a scrutiny of Table 13.2 reveals that there is a slight tendency of oscillation due to numerical instability in the HF domain results for a12(t). Noting this fact, such oscillation has been studied in more detail for three different values of m, namely, m = 10, 20 and 30. These results are depicted in Fig. 13.9, which seems to be oscillation free. But a magnified view of a portion of Fig. 13.9 tells another story, which is self-evident from Fig. 13.10. From Fig. 13.10, it is noted that the tendency of this oscillation can be reduced with an increasing number of segments.
Fig. 13.9 Identification of a12(t) of the system matrix of Example 13.3 via HF domain, for three different values of m (10, 20 and 30) for T = 1 s

Fig. 13.10 Magnified view of Fig. 13.9 showing the identified parameter a12(t) of Example 13.3 via HF domain, for three different values of m (10, 20 and 30) for T = 1 s

13.3 Conclusion

In this chapter, we have presented a generalized method for identifying non-homogeneous as well as homogeneous time-varying systems. The proficiency
of the method has been illustrated by suitable examples. Also, by putting B(t) = 0,
the same methods have been successfully applied to identify homogeneous
time-varying systems as well.
Here the linear time-varying system identification problem has been solved for an n-state system. However, the limitation of the proposed method is that it can solve for only a maximum of n system parameters of the system matrix A(t) out of its n² parameters. Further, an essential requirement is that, for solving the above mentioned n parameters, the remaining (n² − n) parameters have to be known.
Some examples have been treated to establish the validity of the HF domain
methods. However, for the case of system identification, numerical instability is
encountered in Example 13.2. Such instability is vigorous and the same has been
illustrated via Figs. 13.4, 13.5 and 13.6. Since no regular pattern has emerged
from different unstable results, the reason for such instability needs to be explored
in detail.
A scrutiny of Table 13.2 for Example 13.3 reveals that there is a slight tendency
of oscillation for a12(t) in the HF domain results due to numerical instability. While
Fig. 13.9 can not reveal such oscillation, its magnified view, shown in Fig. 13.10,
can. It is noted that, such oscillation can be reduced with increased number of
sub-intervals m.

References

1. Ogata, K.: Modern Control Engineering (5th Ed.). Prentice Hall of India (2011)
2. Fogiel, M. (Chief Ed.): The Automatic Control Systems/Robotics Problem Solver. Research &
Education Association, New Jersey (2000)
3. Roychoudhury, S., Deb, A., Sarkar, G.: Analysis and synthesis of time-varying systems via
orthogonal hybrid functions (HF) in states space environment. Int. J. Dyn. Control 3(4), 389–
402 (2015)
4. Rao, G.P.: Piecewise Constant Orthogonal Functions and their Application in Systems and
Control, LNCIS, vol. 55. Springer, Berlin (1983)
5. Jiang, J.H., Schaufelberger, W.: Block Pulse Functions and their Application in Control System,
LNCIS, vol. 179. Springer, Berlin (1992)
Chapter 14
Time Invariant System Identification: Via
‘Deconvolution’

Abstract Control system identification using hybrid function domain 'deconvolution' technique is discussed in this chapter. Both open loop as well as closed loop systems have been identified. Two numerical examples have been treated, and for clarity, eight figures and four tables have been presented.

System identification [1–3] is a common problem encountered in the design of control systems. The known components, usually the plant under control, are assumed to be described satisfactorily by their respective models. Then, the problem
assumed to be described satisfactorily by its respective models. Then, the problem
of identification is the characterization of the assumed model based on some
observations or measurements.
It is well known that one may set up more than one model for a dynamic system,
and in control system design the choice of the most suitable one depends heavily on
the design method [3] being used. In classical design, a nonparametric model such as an impulse response function is often more appropriate. Kwong and Chen [4], in
their work, presented a method based upon block pulse function (BPF) to identify
an unknown plant modeled by impulse-response.
In this chapter, as the name suggests, the orthogonal hybrid function (HF) set [1], a combination of sample-and-hold functions (SHF) and triangular functions (TF), is employed to identify an unknown plant using the method of deconvolution.

14.1 Control System Identification Via ‘Deconvolution’

The basic equation which relates the input and output of a control system in the Laplace domain is given by

C(s) = G(s)\,R(s) \quad \text{or} \quad G(s) = \frac{C(s)}{R(s)} \qquad (14.1)

where C(s) is the Laplace transformed output, G(s) is the transfer function of the
system and R(s) is the Laplace transformed input to the system. The problem of
identification is basically to determine G(s) of Eq. (14.1).

14.1.1 Open Loop Control System Identification [5]

Consider two time functions r(t) and g(t) expanded in hybrid function domain. Considering the convolution between the time functions r(t) and g(t), we determine the output y(t) in HF domain using Eq. (7.15). For m = 4, T = 1 s, the convolution result can be written as

y(t) = \frac{h}{6}\,[g_0 \;\; g_1 \;\; g_2 \;\; g_3]
\begin{bmatrix}
0 & (2r_1 + r_0) & (2r_2 + r_1) & (2r_3 + r_2) \\
0 & (r_1 + 2r_0) & (r_2 + 4r_1 + r_0) & (r_3 + 4r_2 + r_1) \\
0 & 0 & (r_1 + 2r_0) & (r_2 + 4r_1 + r_0) \\
0 & 0 & 0 & (r_1 + 2r_0)
\end{bmatrix} \mathbf{S}_{(4)}
+ \frac{h}{6}\,\big[\, g_0(2r_1 + r_0) + g_1(r_1 + 2r_0), \;\;
g_0(2r_2 - r_1 - r_0) + g_1(r_2 + 3r_1 - r_0) + g_2(r_1 + 2r_0), \;\;
g_0(2r_3 - r_2 - r_1) + g_1(r_3 + 3r_2 - 3r_1 - r_0) + g_2(r_2 + 3r_1 - r_0) + g_3(r_1 + 2r_0), \;\;
g_0(2r_4 - r_3 - r_2) + g_1(r_4 + 3r_3 - 3r_2 - r_1) + g_2(r_3 + 3r_2 - 3r_1 - r_0) + g_3(r_2 + 3r_1 - r_0) + g_4(r_1 + 2r_0) \,\big]\, \mathbf{T}_{(4)}
\qquad (14.2)

Comparing the SHF vectors of the output y(t) with Eq. (14.2), we get

[y_0 \;\; y_1 \;\; y_2 \;\; y_3] = \frac{h}{6}\,[g_0 \;\; g_1 \;\; g_2 \;\; g_3]
\begin{bmatrix}
0 & (2r_1 + r_0) & (2r_2 + r_1) & (2r_3 + r_2) \\
0 & (r_1 + 2r_0) & (r_2 + 4r_1 + r_0) & (r_3 + 4r_2 + r_1) \\
0 & 0 & (r_1 + 2r_0) & (r_2 + 4r_1 + r_0) \\
0 & 0 & 0 & (r_1 + 2r_0)
\end{bmatrix}
\qquad (14.3)

where, because the first column contains all zeros, the matrix is singular and thus
poses restriction on its inversion. To avoid this problem, the leading element of the
first column may be replaced by a very small positive number e. But before
introducing e, the scaling factor h6 is multiplied with each element of the square
matrix on the RHS of (14.3). This is done to avoid any adverse effect of the scaling
factor h6 upon e.
Then Eq. (14.3) becomes

[y_0 \;\; y_1 \;\; y_2 \;\; y_3] = [g_0 \;\; g_1 \;\; g_2 \;\; g_3]
\begin{bmatrix}
\varepsilon & \frac{h}{6}(2r_1 + r_0) & \frac{h}{6}(2r_2 + r_1) & \frac{h}{6}(2r_3 + r_2) \\
0 & \frac{h}{6}(r_1 + 2r_0) & \frac{h}{6}(r_2 + 4r_1 + r_0) & \frac{h}{6}(r_3 + 4r_2 + r_1) \\
0 & 0 & \frac{h}{6}(r_1 + 2r_0) & \frac{h}{6}(r_2 + 4r_1 + r_0) \\
0 & 0 & 0 & \frac{h}{6}(r_1 + 2r_0)
\end{bmatrix}
\qquad (14.4)
Now let

R_0 \triangleq 2r_1 + r_0; \quad R_1 \triangleq 2r_2 + r_1; \quad R_2 \triangleq 2r_3 + r_2; \quad R_3 \triangleq 2r_4 + r_3;
R_4 \triangleq r_1 + 2r_0; \quad R_5 \triangleq r_2 + 4r_1 + r_0; \quad R_6 \triangleq r_3 + 4r_2 + r_1; \quad R_7 \triangleq r_4 + 4r_3 + r_2;
R_8 \triangleq r_2 + r_1 - 2r_0; \quad R_9 \triangleq r_3 + r_2 - 2r_1; \quad R_{10} \triangleq r_4 + r_3 - 2r_2.
\qquad (14.5)

Using Eq. (14.5), Eq. (14.4) can be written as

[y_0 \;\; y_1 \;\; y_2 \;\; y_3] = [g_0 \;\; g_1 \;\; g_2 \;\; g_3]
\begin{bmatrix}
\varepsilon & \frac{h}{6}R_0 & \frac{h}{6}R_1 & \frac{h}{6}R_2 \\
0 & \frac{h}{6}R_4 & \frac{h}{6}R_5 & \frac{h}{6}R_6 \\
0 & 0 & \frac{h}{6}R_4 & \frac{h}{6}R_5 \\
0 & 0 & 0 & \frac{h}{6}R_4
\end{bmatrix}
\quad \text{or,} \quad
[y_0 \;\; y_1 \;\; y_2 \;\; y_3] = [g_0 \;\; g_1 \;\; g_2 \;\; g_3]\,\mathbf{R}
\qquad (14.6)

where \mathbf{R} denotes the above upper triangular matrix.

Hence,

[g_0 \;\; g_1 \;\; g_2 \;\; g_3] = [y_0 \;\; y_1 \;\; y_2 \;\; y_3]\,\mathbf{W} \qquad (14.7)

where \mathbf{W} \triangleq \mathbf{R}^{-1}.
The computation of Eq. (14.7) was carried out with different small values of ε to determine the first four samples of the impulse response function g(t). It was noted that the values of the coefficients g0, g1, g2, …, etc. did not alter with different values of ε.
This was investigated theoretically and it was noted that, the matrix R being upper triangular, its inverse W is also upper triangular in nature. For an upper triangular matrix, if the leading element is ε, then in its inverse only the elements of the first row contain expressions involving the factor ε.
Now, in the row matrix [y_0 y_1 y_2 y_3], the first element y_0 is zero. Usually, for realistic causal systems, this is so. If the matrix multiplication in Eq. (14.7) were executed using this value of y_0, the first element g_0 of the plant impulse response would turn out to be zero. To avoid this problem, like the leading element of R, the first element y_0 of the output was also replaced by the same small number ε used in R. Hence, when W was pre-multiplied by the row matrix in (14.7), the result did not contain any term involving ε. That is, the choice of ε does not affect the result.
To determine the fifth sample of g(t), we proceed as follows.
We equate the last elements of the TF parts of Eqs. (11.13) and (14.2) to get

y_4 - y_3 = \frac{h}{6}\left[g_0(2r_4 - r_3 - r_2) + g_1(r_4 + 3r_3 - 3r_2 - r_1) + g_2(r_3 + 3r_2 - 3r_1 - r_0) + g_3(r_2 + 3r_1 - r_0) + g_4(r_1 + 2r_0)\right]

Solving for the coefficient g_4, we have

g_4 = \frac{(y_4 - y_3) - \frac{h}{6}\left[g_0(R_3 - R_2) + g_1(R_7 - R_6) + g_2(R_6 - R_5) + g_3(R_5 - R_4)\right]}{\frac{h}{6}R_4}

For an analysis involving m terms, the generalized expression for the last coefficient of the function g(t) is

g_m = \frac{\left[y_m - y_{(m-1)}\right] - \frac{h}{6}\left[g_0\left(R_{(m-1)} - R_{(m-2)}\right) + \sum_{i=1}^{m-1} g_i\left(R_{(2m-i)} - R_{(2m-i-1)}\right)\right]}{\frac{h}{6}R_m}
\qquad (14.8)

From Eqs. (14.7) and (14.8), we can calculate all the coefficients of the impulse response of the plant.
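A minimal MATLAB sketch of this deconvolution is given below. The index pattern of the operational matrix is generalized here from the m = 4 case of Eq. (14.6), the final coefficient of Eq. (14.8) is omitted, and the input/output samples (a unit step input and y(t) = 1 − e^{−t}) are chosen purely for illustration.

```matlab
% Sketch of open-loop identification by HF-domain 'deconvolution', Eq. (14.7).
% The upper-triangular matrix below generalizes the m = 4 pattern of Eq. (14.6);
% the last coefficient g_m of Eq. (14.8) is not computed here.
m = 10;  T = 1;  h = T/m;  eps0 = 1e-5;
t = 0:h:T;
r = ones(1, m+1);                       % illustrative input samples, r(t) = u(t)
y = 1 - exp(-t);                        % illustrative output samples
R = zeros(m);
for j = 2:m
    R(1,j) = (h/6)*(2*r(j) + r(j-1));   % first row: (2r_{j-1} + r_{j-2})
end
for i = 2:m
    R(i,i) = (h/6)*(r(2) + 2*r(1));     % diagonal: (r_1 + 2r_0)
    for j = i+1:m
        k = j - i;                      % upper part: (r_{k+1} + 4r_k + r_{k-1})
        R(i,j) = (h/6)*(r(k+2) + 4*r(k+1) + r(k));
    end
end
R(1,1) = eps0;                          % replace the zero leading element
ys = y(1:m);  ys(1) = eps0;             % y_0 replaced by the same small number
g = ys / R;                             % Eq. (14.7): g = y * inv(R) = y * W
disp([t(1:m); g])                       % g_k should be close to exp(-t_k)
```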

14.1.1.1 Numerical Examples

Now, we consider an open loop system, whose input u(t) and output y(t) are known
in HF domain. With this information, we identify the plant g(t) in HF domain. For
such identification in HF domain, we use Eqs. (14.7) and (14.8).
Example 14.1 (vide Appendix B, Program no. 36) Consider an open loop system with input r_1(t) = u(t) and output y_1(t) = 1 − exp(−t). The plant g_1(t) = exp(−t) is computed in HF domain via deconvolution and compared with the HF domain direct expansion of the plant impulse response.
Let the time T = 1 s, m = 4 and 10, and ε = 10⁻⁵.
Tables 14.1 and 14.2 present the quantitative results and these are graphically
compared with the exact impulse response of the plant in Figs. 14.1 and 14.2, for
m = 4 and m = 10. Figure 14.3 shows the plots of percentage errors at different
sample points for various values of m when the exact samples of g1(t) are compared
with the samples computed via HF domain technique. Figure 14.4 shows the plot of
AMP error for different values of m. It is noted that the value of average absolute
percentage error decreases drastically with increasing m.
Table 14.1 Percentage error at different sample points of the impulse response of the plant g1(t) of Example 14.1 for T = 1 s, m = 4 and ε = 10⁻⁵ (vide Appendix B, Program no. 36)

t (s) | Samples from direct expansion of g1(t), g1d | Samples via HF domain analysis, Eqs. (14.7)–(14.8), g1c | % error ε = [(g1d − g1c)/g1d] × 100
0 | 1.00000000 | 1.00000000 | 0.00000000
1/4 | 0.77880078 | 0.76959374 | 1.18220832
2/4 | 0.60653066 | 0.60856725 | −0.33577721
3/4 | 0.47236655 | 0.46474560 | 1.61335479
4/4 | 0.36787944 | 0.37115129 | −0.88938024

Table 14.2 Percentage error at different sample points of the impulse response of the plant g1(t) of Example 14.1 for T = 1 s, m = 10 and ε = 10⁻⁵ (vide Appendix B, Program no. 36)

t (s) | Samples from direct expansion of g1(t), g1d | Samples via HF domain analysis, Eqs. (14.7)–(14.8), g1c | % error ε = [(g1d − g1c)/g1d] × 100
0 | 1.00000000 | 1.00000000 | 0.00000000
1/10 | 0.90483742 | 0.90325164 | 0.17525566
2/10 | 0.81873075 | 0.81888166 | −0.01843180
3/10 | 0.74081822 | 0.73936899 | 0.19562594
4/10 | 0.67032005 | 0.67059450 | −0.04094445
5/10 | 0.60653066 | 0.60519322 | 0.22050627
6/10 | 0.54881164 | 0.54918725 | −0.06844146
7/10 | 0.49658530 | 0.49533940 | 0.25089517
8/10 | 0.44932896 | 0.44978740 | −0.10202639
9/10 | 0.40656966 | 0.40539869 | 0.28801225
10/10 | 0.36787944 | 0.36840568 | −0.14304711

From the error columns of Tables 14.1 and 14.2, and from Fig. 14.3, it is noted
that there is a slight oscillation in the result, and such oscillations are reduced with
increasing m.

14.1.2 Closed Loop Control System Identification

Consider the closed loop system shown in Fig. 11.8. The impulse response of the
plant is g(t). Hence, we obtain the output y(t) simply by convolution of e(t) and g(t).
In line with Eq. (7.15), for m = 4, T = 1 s, convolution result can directly be written
as
Fig. 14.1 Samples of the plant g1(t) of Example 14.1 identified using HF domain deconvolution are compared with the exact curve for m = 4 and T = 1 s (vide Appendix B, Program no. 36)

Fig. 14.2 Samples of the plant g1(t) of Example 14.1 identified using HF domain deconvolution are compared with the exact curve for m = 10 and T = 1 s (vide Appendix B, Program no. 36)

y(t) = \frac{h}{6}\,[g_0 \;\; g_1 \;\; g_2 \;\; g_3]
\begin{bmatrix}
0 & (2e_1 + e_0) & (2e_2 + e_1) & (2e_3 + e_2) \\
0 & (e_1 + 2e_0) & (e_2 + 4e_1 + e_0) & (e_3 + 4e_2 + e_1) \\
0 & 0 & (e_1 + 2e_0) & (e_2 + 4e_1 + e_0) \\
0 & 0 & 0 & (e_1 + 2e_0)
\end{bmatrix} \mathbf{S}_{(4)}
+ \frac{h}{6}\,\big[\, \{g_0(2e_1 + e_0) + g_1(e_1 + 2e_0)\}, \;\;
\{g_0(2e_2 - e_1 - e_0) + g_1(e_2 + 3e_1 - e_0) + g_2(e_1 + 2e_0)\}, \;\;
\{g_0(2e_3 - e_2 - e_1) + g_1(e_3 + 3e_2 - 3e_1 - e_0) + g_2(e_2 + 3e_1 - e_0) + g_3(e_1 + 2e_0)\}, \;\;
\{g_0(2e_4 - e_3 - e_2) + g_1(e_4 + 3e_3 - 3e_2 - e_1) + g_2(e_3 + 3e_2 - 3e_1 - e_0) + g_3(e_2 + 3e_1 - e_0) + g_4(e_1 + 2e_0)\} \,\big]\, \mathbf{T}_{(4)}
\qquad (14.9)
Fig. 14.3 Percentage error at different sample points computed via HF domain for different values of m with T = 1 s and ε = 10⁻⁵ for Example 14.1. It is observed that percentage error decreases with increasing m

Fig. 14.4 AMP error of the samples computed via HF domain for different values of m for T = 1 s and ε = 10⁻⁵ for Example 14.1

Comparing the SHF vectors of the output y(t) with Eq. (14.9), we get

[y_0 \;\; y_1 \;\; y_2 \;\; y_3] = \frac{h}{6}\,[g_0 \;\; g_1 \;\; g_2 \;\; g_3]
\begin{bmatrix}
0 & (2e_1 + e_0) & (2e_2 + e_1) & (2e_3 + e_2) \\
0 & (e_1 + 2e_0) & (e_2 + 4e_1 + e_0) & (e_3 + 4e_2 + e_1) \\
0 & 0 & (e_1 + 2e_0) & (e_2 + 4e_1 + e_0) \\
0 & 0 & 0 & (e_1 + 2e_0)
\end{bmatrix}
\qquad (14.10)
Proceeding in the same way as for Eq. (14.6), Eq. (14.10) can be written as

[y_0 \;\; y_1 \;\; y_2 \;\; y_3] = [g_0 \;\; g_1 \;\; g_2 \;\; g_3]
\begin{bmatrix}
\varepsilon & \frac{h}{6}E_0 & \frac{h}{6}E_1 & \frac{h}{6}E_2 \\
0 & \frac{h}{6}E_4 & \frac{h}{6}E_5 & \frac{h}{6}E_6 \\
0 & 0 & \frac{h}{6}E_4 & \frac{h}{6}E_5 \\
0 & 0 & 0 & \frac{h}{6}E_4
\end{bmatrix}

or,

[g_0 \;\; g_1 \;\; g_2 \;\; g_3] = [y_0 \;\; y_1 \;\; y_2 \;\; y_3]
\begin{bmatrix}
\varepsilon & \frac{h}{6}E_0 & \frac{h}{6}E_1 & \frac{h}{6}E_2 \\
0 & \frac{h}{6}E_4 & \frac{h}{6}E_5 & \frac{h}{6}E_6 \\
0 & 0 & \frac{h}{6}E_4 & \frac{h}{6}E_5 \\
0 & 0 & 0 & \frac{h}{6}E_4
\end{bmatrix}^{-1}
\qquad (14.11)

where the E_k's are formed from the samples of the error signal e(t) in the same manner as the R_k's of Eq. (14.5).

As before, the computation is carried out for a typical value of ε, i.e., ε = 10⁻⁵.
We equate the last elements of the TF parts of Eqs. (11.13) and (14.9) to get

y_4 - y_3 = \frac{h}{6}\left[g_0(2e_4 - e_3 - e_2) + g_1(e_4 + 3e_3 - 3e_2 - e_1) + g_2(e_3 + 3e_2 - 3e_1 - e_0) + g_3(e_2 + 3e_1 - e_0) + g_4(e_1 + 2e_0)\right]

Solving for the coefficient g_4, we have

g_4 = \frac{(y_4 - y_3) - \frac{h}{6}\left[g_0(E_3 - E_2) + g_1(E_7 - E_6) + g_2(E_6 - E_5) + g_3(E_5 - E_4)\right]}{\frac{h}{6}E_4}

For an analysis involving m terms, the generalized expression for the last coefficient of the plant is

g_m = \frac{\left[y_m - y_{(m-1)}\right] - \frac{h}{6}\left[g_0\left(E_{(m-1)} - E_{(m-2)}\right) + \sum_{i=1}^{m-1} g_i\left(E_{(2m-i)} - E_{(2m-i-1)}\right)\right]}{\frac{h}{6}E_m}
\qquad (14.12)

From Eqs. (14.11) and (14.12), we can calculate all the coefficients of the plant impulse response

G = [g_0 \;\; g_1 \;\; g_2 \;\; g_3 \;\; g_4] \qquad (14.13)

If we intend to compute further samples of G, we need to know the respective HF coefficients of the error signal and the system response. This can easily be done by using (14.12).
14.1.2.1 Numerical Examples

An input r(t) is applied to a causal SISO system, shown in Fig. 11.8, at t = 0. If the
impulse response of the plant is g(t), we obtain the output y(t) simply by convo-
lution of e(t) and g(t). Knowing r(t), h(t) and y(t), we can employ Eqs. (14.11) and
(14.12) for computing the samples of the plant g(t) so that the result is obtained in
HF domain.
Example 14.2 (vide Appendix B, Program no. 37) Consider the closed loop system of Fig. 11.8 with input r_2(t) = u(t), feedback h(t) = u(t) and output y_2(t) = \frac{2}{\sqrt{3}}\exp\!\left(-\frac{t}{2}\right)\sin\!\left(\frac{\sqrt{3}}{2}t\right). The plant g_2(t) = exp(−t) is identified using HF domain deconvolution and compared with the direct expansion of the plant.

Table 14.3 Samples of the plant of the system via direct expansion and samples obtained from HF domain identification for Example 14.2 for T = 1 s, m = 4 and ε = 10⁻⁵ (vide Appendix B, Program no. 37)

t (s) | Samples from direct expansion of g2(t), g2d | Samples via HF domain analysis, Eqs. (14.11)–(14.12), g2c | % error ε = [(g2d − g2c)/g2d] × 100
0 | 1.00000000 | 1.00000000 | 0.00000000
1/4 | 0.77880078 | 0.77656217 | 0.28744362
2/4 | 0.60653066 | 0.60289480 | 0.59945116
3/4 | 0.47236655 | 0.46770601 | 0.98663752
4/4 | 0.36787944 | 0.36267668 | 1.41425719

Table 14.4 Samples of the plant of the system via direct expansion and samples obtained from HF domain identification for Example 14.2 for T = 1 s, m = 10 and ε = 10⁻⁵ (vide Appendix B, Program no. 37)

t (s) | Samples from direct expansion of g2(t), g2d | Samples via HF domain analysis, Eqs. (14.11)–(14.12), g2c | % error ε = [(g2d − g2c)/g2d] × 100
0 | 1.00000000 | 1.00000000 | 0.00000000
1/10 | 0.90483742 | 0.90468094 | 0.01729318
2/10 | 0.81873075 | 0.81844307 | 0.03513718
3/10 | 0.74081822 | 0.74041517 | 0.05440636
4/10 | 0.67032005 | 0.66982167 | 0.07434830
5/10 | 0.60653066 | 0.60594820 | 0.09603091
6/10 | 0.54881164 | 0.54816110 | 0.11853513
7/10 | 0.49658530 | 0.49587437 | 0.14316533
8/10 | 0.44932896 | 0.44857050 | 0.16879892
9/10 | 0.40656966 | 0.40576860 | 0.19702903
10/10 | 0.36787944 | 0.36704625 | 0.22648450
Fig. 14.5 Samples of the closed loop system of Example 14.2 identified using HF domain deconvolution along with the exact curve for m = 4 and T = 1 s (vide Appendix B, Program no. 37)

Fig. 14.6 Samples of the closed loop system of Example 14.2 identified using HF domain deconvolution for m = 10 and T = 1 s (vide Appendix B, Program no. 37)

Fig. 14.7 Percentage error at different sample points, computed via HF domain, for different values of m with T = 1 s and ε = 10⁻⁵ for Example 14.2. It is observed that percentage error decreases with increasing m
Fig. 14.8 AMP error of the samples for different values of m for T = 1 s and ε = 10⁻⁵ for Example 14.2
Here Tables 14.3 and 14.4 present the quantitative results in HF domain along
with the percentage errors. These facts are graphically compared with the exact
impulse response of the plant in Figs. 14.5 and 14.6, for m = 4 and m = 10.
Figures 14.7 and 14.8 show the characteristics of percentage errors for various
values of m.

14.2 Conclusion

In this chapter, we have studied identification of linear control systems, open loop
as well as closed loop, using the hybrid function platform employing the principle
of ‘deconvolution’.
As a foundation, convolution of basic component functions of the HF set was
computed in Chap. 7. These sub-results were further used to determine the con-
volution of two time functions.
Applying the deconvolution concept, an open loop system has been identified
for m = 4 and 10. Percentage errors at different sample points have been computed
and presented in Tables 14.1 and 14.2. Figures 14.3 and 14.4 show that, with
increasing m, the error is reduced.
Similarly, a closed loop system was identified successfully for m = 4 and m = 10.
The percentage errors in identification, at different sampling instants, are tabulated
in Tables 14.3 and 14.4, and for better clarity, they have been translated graphically
in Figs. 14.7 and 14.8.
In case of system analysis or identification, block pulse domain approach
showed oscillations in many instances indicating the onset of numerical instability.
We have noted such cases of instability in references [1, 4, 6–8]. But HF based
technique achieves the objective without any numerical instability, even with a
small number of sub-intervals m.

References

1. Deb, A., Sarkar, G., Sen, S.K.: Linearly pulse-width modulated block pulse functions and their
application to linear SISO feedback control system identification. Proc. IEE, Part D, Control
Theor. Appl. 142(1), 44–50 (1995)
2. Unbehauen, H., Rao, G.P.: Identification of continuous systems. North-Holland, Amsterdam
(1987)
3. Ljung, L.: System identification: theory for the user. Prentice-Hall Inc., New Jersey (1985)
4. Kwong, C.P., Chen, C.F.: Linear feedback system identification via block pulse functions. Int.
J. Syst. Sci. 12(5), 635–642 (1981)
5. Biswas, A.: Analysis and synthesis of continuous control systems using a set of orthogonal
hybrid functions. Ph. D. dissertation, University of Calcutta (2015)
6. Jiang, J.H., Schaufelberger, W.: Block pulse functions and their application in control system,
LNCIS, vol. 179. Springer, Berlin (1992)
7. Deb, A., Ghosh, S.: Power electronic systems, walsh analysis with MATLAB®. CRC Press,
Boca Raton (2014)
8. Deb, A., Sarkar, G., Biswas, A., Mandal, P.: Numerical instability of deconvolution operation
via block pulse functions. J. Franklin Inst. 345(4), 319–327 (2008)
Chapter 15
System Identification: Parameter
Estimation of Transfer Function

Abstract In this chapter, parameter estimation of the transfer function of a linear system has been done employing many non-sinusoidal orthogonal function sets, e.g., block pulse functions, non-optimal block pulse functions, triangular functions, hybrid functions and sample-and-hold functions. A comparative study of the parameters estimated by the different methods is made with focus on estimation errors. One numerical example has been studied extensively, and ten figures and fourteen tables have been presented as illustration.

A typical problem considered in system identification [1–4] is the design of estimators trying to recover a discrete time linear time invariant (LTI) system based on a noise-corrupted output sequence resulting from a known input sequence. The unknown components, usually the plant under control, are assumed to be described satisfactorily by their respective models.
In this chapter, as the name suggests, the orthogonal hybrid function (HF) set, a combination of sample-and-hold functions (SHF) and triangular functions (TF), is employed to identify an unknown plant using the method of deconvolution.

15.1 Transfer Function Identifications

The estimation method consists of finding a rough estimate of the impulse response
from the sampled input and output data. The impulse response estimate is then
transformed to a two dimensional time-frequency mapping [5]. The mapping pro-
vides a clear graphical method for distinguishing the noise from the system
dynamics. The information believed to correspond to noise is discarded and a
cleaner estimate of the impulse response is obtained from the remaining informa-
tion. The new impulse response estimate is then used to obtain the transfer function
estimate [6].
There are many transfer function estimation techniques available, given data
limitations, but these may yield poor results. One such method is the Empirical
Transfer Function Estimate (ETFE), which estimates the transfer function by taking


the ratios of the Fourier transforms of the output y(t) and the input u(t). The estimate
is given by

^ F fyðtÞg
GðxÞ ¼ ð15:1Þ
F fuðtÞg

If the data set is noisy, the resulting estimate is also noisy. Unfortunately, taking more data points does not help: the variance does not decrease as the number of data points increases, because there is no information compression. There are as many independent estimates as there are data points [4].
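The following short MATLAB fragment sketches the ETFE of Eq. (15.1) on simulated data; the discrete-time system, noise level and record length are assumed purely for illustration.

```matlab
% Sketch of the Empirical Transfer Function Estimate of Eq. (15.1):
% the ratio of the DFTs of output and input.  (Illustrative data only.)
N  = 512;  Ts = 0.01;
u  = randn(1, N);                                      % known input sequence
y  = filter([0 0.5], [1 -0.8], u) + 0.05*randn(1, N);  % noisy measured output
G_hat = fft(y) ./ fft(u);                              % ETFE, Eq. (15.1)
f = (0:N-1)/(N*Ts);                                    % frequency grid (Hz)
semilogx(f(2:N/2), 20*log10(abs(G_hat(2:N/2))));       % noisy, non-smoothed plot
```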
Parametric estimation methods are another class of system identification tech-
niques. The motivation behind these methods is to find an estimate or model of the
system in terms of a small number (compared to the number of measurements) of
numerical values or parameters. A linear system is typically represented by

y(t) = G(s)u(t) + H(s)e(t) \qquad (15.2)

where e(t) is the disturbance, G(s) is the transfer function from input to output, H(s) is the transfer function from disturbance to output, and s is the shift operator ($s = e^{j\omega}$) used when dealing with discrete systems. The most generalized model structure is

$$A(s)y(t) = \frac{B(s)}{F(s)}u(t) + \frac{C(s)}{D(s)}e(t) \tag{15.3}$$

where A(s), B(s), C(s), D(s), F(s) are all parameter polynomials to be estimated.

15.2 Pade Approximation

The Pade approximant [7] of a given power series is a rational function of numerator degree L and denominator degree M whose power series agrees with the given one up to degree L + M inclusively. A collection of Pade approximants formed by using a suitable set of values of L and M often provides a means of obtaining information about the function outside its circle of convergence, and of more rapidly evaluating the function within its circle of convergence. Applications of these ideas in physics, chemistry, electrical engineering, and other areas have led to a large number of generalizations of Pade approximants that are tailor-made for specific applications. Applications to statistical mechanics and critical phenomena are extensively covered in [7], which also devotes sections to circuit design, matrix Pade approximation, computational methods, and integral and algebraic approximants.
Pade approximants are derived by expanding a function as a ratio of two power series and determining both the numerator and denominator coefficients. Pade approximations are usually superior to Taylor series when functions contain poles, because the use of rational functions allows them to be well represented.
The relation between the Taylor series expansion and the function is given classically by the statement that if the series converges absolutely to an infinitely differentiable function, then the series defines the function uniquely and the function uniquely defines the series.
The Pade approximants are a particular type of rational approximation. The [L/M] Pade approximant is denoted by

$$\left[\frac{L}{M}\right] = \frac{P_L(x)}{Q_M(x)} = f(x) \tag{15.4}$$

where $P_L(x)$ is a polynomial of degree less than or equal to L, and $Q_M(x)$ is a polynomial of degree less than or equal to M. Sometimes, when the function f being approximated is not clear from the context, the function name is appended as a subscript, $[L/M]_f$. The formal power series

$$f(x) = \sum_{j=0}^{\infty} f_j x^j \tag{15.5}$$

determines the coefficients by the equation

$$f(x) - \frac{P_L(x)}{Q_M(x)} = O\left(x^{L+M+1}\right). \tag{15.6}$$

This is the classical definition. The Baker [8] definition adds the condition

$$Q_M(0) = 1. \tag{15.7}$$

Let

$$f(x) = c_0 + c_1 x + c_2 x^2 + \cdots \tag{15.8}$$

Let the Pade approximation (that is, the reduced model) be

$$f(x) = \frac{P_L(x)}{Q_M(x)} = \frac{a_0 + a_1 x + a_2 x^2 + \cdots + a_L x^L}{b_0 + b_1 x + b_2 x^2 + \cdots + b_{M-1} x^{M-1} + x^M} \tag{15.9}$$

Hence, in accordance with the definition, Eqs. (15.8) and (15.9) yield the following set of linear equations:

$$\left.\begin{aligned}
a_0 &= b_0 c_0\\
a_1 &= b_0 c_1 + b_1 c_0\\
a_2 &= b_0 c_2 + b_1 c_1 + b_2 c_0\\
&\;\;\vdots\\
a_m &= b_0 c_m + b_1 c_{m-1} + \cdots + b_m c_0\\
0 &= b_0 c_{m+1} + b_1 c_m + \cdots + b_{m+1} c_0\\
&\;\;\vdots\\
0 &= b_0 c_{m+n} + b_1 c_{m+n-1} + \cdots + b_{n-1} c_{m+1} + c_m
\end{aligned}\right\} \tag{15.10}$$

which serve to determine the coefficients of Eq. (15.10) uniquely.
Baker et al. [8] introduced the concept of Pade approximation about more than one point. They suggested that the Pade approximation be required to exactly satisfy conditions at other points (they imposed the value of the function at infinity on the Pade approximant) as well as at the origin. The required modifications in the linear Eqs. (15.10) are very simple. The equation which makes the last power series coefficient of the function and its approximant equal is replaced by one that makes the Pade approximant equal to a given value at infinity.
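The linear structure of Eq. (15.10) is easily exploited numerically. The following MATLAB sketch, written under the monic-denominator convention of Eq. (15.9), first solves the lower block of Eq. (15.10) for the b's and then evaluates the a's. The illustrative coefficients c_j used below are those of 1/(2 + 3x + x^2), for which the sketch returns a_0 = 1, a_1 = 0, b_0 = 2, b_1 = 3; they are assumptions of the sketch only.

% Sketch of an [L/M] Pade fit from truncated power series coefficients c_0 ... c_(L+M),
% following Eq. (15.10) with a monic denominator as in Eq. (15.9).
L = 1; M = 2;
c = [0.5 -0.75 0.875 -0.9375];        % series of 1/(2+3x+x^2) (illustrative)

getc = @(j) (j>=0).*c(max(j,0)+1);    % c_j, with c_j = 0 for j < 0
% "Zero" equations, j = L+1 ... L+M :  sum_{i=0}^{M-1} b_i c_{j-i} = -c_{j-M}
A = zeros(M); r = zeros(M,1);
for row = 1:M
    j = L + row;
    for i = 0:M-1, A(row,i+1) = getc(j-i); end
    r(row) = -getc(j-M);
end
b = (A\r)';                           % b_0 ... b_{M-1}
% Numerator equations, j = 0 ... L :  a_j = sum_{i=0}^{M} b_i c_{j-i}, with b_M = 1
bfull = [b 1];
a = zeros(1,L+1);
for j = 0:L
    for i = 0:M, a(j+1) = a(j+1) + bfull(i+1)*getc(j-i); end
end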

15.3 Parameter Estimation of the Transfer Function of a Linear System

Now we employ the set of hybrid functions to estimate the parameters of a linear system from its impulse response [6] data.
Let h(t) be the $p \times r$ impulse response matrix of a linear time-invariant multivariable system, where h(t) is specified graphically or analytically. Also, let $H(s) = \mathcal{L}[h(t)]$ be the transfer function matrix of the system.
Now consider the rational function matrix G(s) of the form

$$G(s) = \frac{B_p s^p + B_{p-1} s^{p-1} + \cdots + B_1 s + B_0}{s^q + a_{q-1} s^{q-1} + \cdots + a_1 s + a_0}, \quad p < q \tag{15.11}$$

Expansions of H(s) and G(s) in power series are as follows:

$$H(s) = \sum_{k=0}^{\infty} H_k s^k \tag{15.12}$$

$$G(s) = \sum_{k=0}^{\infty} G_k s^k \tag{15.13}$$

The problem now may be stated as follows.
Given h(t), the impulse response, find the parameters $a_0, a_1, \ldots, a_{q-1}$ and the elements of the matrices $B_0, B_1, \ldots, B_p$ of G(s) such that G(s) matches H(s) in the Pade sense [7], i.e., such that the power series coefficient matrices of H(s) and G(s) match up to the power (p + q).
Thus, $H_k = G_k$, where $k = 0, 1, \ldots, p + q$.
To solve the problem we first determine the power series coefficient matrices $H_k$ of $H(s)$. We use the following power series coefficient formula

$$H(s) = H_0 s^0 + H_1 s^1 + H_2 s^2 + H_3 s^3 + \cdots + H_k s^k + \cdots \tag{15.14}$$

Now, Eq. (15.14) is differentiated with respect to s to obtain

$$\frac{\partial H(s)}{\partial s} = H_1 + 2H_2 s^1 + 3H_3 s^2 + \cdots + k H_k s^{(k-1)} + \cdots \tag{15.15}$$

Putting s = 0 in Eq. (15.15), we get

$$\left.\frac{\partial H(s)}{\partial s}\right|_{s=0} = H_1$$

Similarly, we get

$$\left.\frac{\partial^k H(s)}{\partial s^k}\right|_{s=0} = k!\, H_k \tag{15.16}$$

where $k = 0, 1, \ldots, (p+q)$.
Again, the transfer function H(s) can be written as

$$H(s) = \int_0^{\infty} h(t)\exp(-st)\,dt \tag{15.17}$$

Now, differentiating Eq. (15.17) with respect to s, we get

$$\frac{\partial H(s)}{\partial s} = (-1)\int_0^{\infty} t\, h(t)\exp(-st)\,dt, \quad\ldots,\quad \frac{\partial^k H(s)}{\partial s^k} = (-1)^k \int_0^{\infty} t^k\, h(t)\exp(-st)\,dt \tag{15.18}$$

Putting s = 0 in Eq. (15.18), we get

$$\left.\frac{\partial^k H(s)}{\partial s^k}\right|_{s=0} = (-1)^k \int_0^{\infty} t^k\, h(t)\,dt \tag{15.19}$$

Assuming the system to be asymptotically stable, we consider a large positive number λ to be the upper limit of the integral in (15.19) such that it converges to a finite value [9]. Thus, using Eqs. (15.16) and (15.19), we get

$$H_k = \frac{(-1)^k}{k!}\int_0^{\lambda} t^k\, h(t)\,dt \tag{15.20}$$

Let $t = \lambda s$. Then $dt = \lambda\,ds$; for $t = 0$, $s = 0$ and for $t = \lambda$, $s = 1$. Thus Eq. (15.20) becomes

$$H_k = \frac{(-1)^k}{k!}\int_0^{1} \lambda^k s^k\, h(\lambda s)\,\lambda\,ds \quad\text{or,}\quad H_k = (-1)^k \lambda^{(k+1)} \int_0^{1} \frac{s^k}{k!}\, h(\lambda s)\,ds \tag{15.21}$$
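Before turning to the orthogonal-function route, Eq. (15.21) can already be checked by plain numerical quadrature. The short MATLAB sketch below does this with trapz for an illustrative impulse response h(t) = e^(−t) − e^(−2t); the values of λ and m are assumptions of the sketch only.

% Sketch: moments H_k of Eq. (15.21) by trapezoidal quadrature.
lambda = 12; m = 64; kmax = 3;
s  = linspace(0,1,m+1);
hs = exp(-lambda*s) - exp(-2*lambda*s);      % h(lambda*s), illustrative impulse response
H  = zeros(1,kmax+1);
for k = 0:kmax
    H(k+1) = (-1)^k * lambda^(k+1) * trapz(s,(s.^k/factorial(k)).*hs);
end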

15.3.1 Using Block Pulse Functions

In block pulse function (BPF) domain [10], referring to Sect. 2.1, $h(\lambda s)$ can be written as

$$h(\lambda s) = \sum_{i=0}^{(m-1)} D1_{(l-1)i}\,\psi_i(s); \quad 0 \le s < 1,\; i = 0, 1, 2, \ldots, (m-1) \tag{15.22}$$

where $D1_{(l-1)i} = \begin{bmatrix} d1_{0i} & d1_{1i} & d1_{2i} & \cdots & d1_{(l-1)i}\end{bmatrix}^{T}$ and the $d1_{(l-1)i}$'s are the expansion coefficients in block pulse domain, with

$$D1_{(l-1)i} = \frac{1}{h}\int_{ih}^{(i+1)h} h(\lambda s)\,\psi_i(s)\,ds$$

where l is the number of input variables of the system.
Again, let

$$u_0^T \Psi(s) = \sum_{i=0}^{m-1} \psi_i(s) = 1 \tag{15.23}$$

where $u_0^T = [\,1\;\;1\;\cdots\;1\;\;1\,]$ (m columns).
Now, we know

$$\frac{s^k}{k!} = \underbrace{\int_0^s \cdots \int_0^s}_{k\ \text{times}} (ds)^k = \underbrace{\int_0^s \cdots \int_0^s}_{k\ \text{times}} u_0^T \Psi(s)\,(ds)^k = u_0^T P^k \Psi(s) \tag{15.24}$$

where P is the operational matrix for integration in block pulse function domain [10].
Then, from Eqs. (15.21)–(15.24), we can write

$$H_k = (-1)^k \lambda^{(k+1)} \sum_{i=0}^{m-1}\left[u_0^T P^k \int_0^1 \psi_i(s)\Psi(s)\,ds\right] D1_{(l-1)i} \tag{15.25}$$

Due to the orthogonal property,

$$\int_0^1 \psi_i(s)\Psi(s)\,ds = e_i = [\,0\;\cdots\;0\;\;h\;\;0\;\cdots\;0\,]^T \quad (h\ \text{in the}\ (i+1)\text{th position;}\ m\ \text{columns})$$

So, Eq. (15.25) becomes

$$H_k = (-1)^k \lambda^{(k+1)}\, u_0^T P^k \sum_{i=0}^{m-1} e_i\, D1_{(l-1)i} \tag{15.26}$$

For instance, if we take m = 4, then

$$\sum_{i=0}^{3} e_i\, D1_{(l-1)i} = [\,h\;0\;0\;0\,]^T D1_{(l-1)0} + [\,0\;h\;0\;0\,]^T D1_{(l-1)1} + [\,0\;0\;h\;0\,]^T D1_{(l-1)2} + [\,0\;0\;0\;h\,]^T D1_{(l-1)3}$$

Thus, Eq. (15.26) becomes

$$H_k = (-1)^k \lambda^{(k+1)}\, u_0^T P^k \left\{[\,h\;0\;0\;0\,]^T D1_{(l-1)0} + [\,0\;h\;0\;0\,]^T D1_{(l-1)1} + [\,0\;0\;h\;0\,]^T D1_{(l-1)2} + [\,0\;0\;0\;h\,]^T D1_{(l-1)3}\right\}$$

$$\text{or,}\quad H_k = \alpha \sum_{i=0}^{3} e_i\, D1_{(l-1)i} \tag{15.27}$$

where $\alpha \triangleq (-1)^k \lambda^{(k+1)}\, u_0^T P^k$.
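For a single-input system the vectors $D1_{(l-1)i}$ reduce to scalars, and Eq. (15.26) can be coded directly. A MATLAB sketch follows; it assumes the standard BPF integration operational matrix P = h(0.5 I + strictly upper triangular ones) of [10], and uses the impulse response of Example 15.1 below purely as an illustration.

% Sketch of Eq. (15.26): moments H_k from the BPF expansion of h(lambda*s).
lambda = 12; m = 64; h = 1/m; kmax = 3;
s = linspace(0,1,m+1);
g = @(t) exp(-t) - exp(-2*t);                 % illustrative impulse response
d = zeros(m,1);                               % BPF coefficients of h(lambda*s)
for i = 1:m
    d(i) = integral(@(x) g(lambda*x),s(i),s(i+1))/h;
end
P  = h*(0.5*eye(m) + triu(ones(m),1));        % assumed BPF integration matrix
u0 = ones(1,m);
H  = zeros(1,kmax+1);
for k = 0:kmax
    H(k+1) = (-1)^k * lambda^(k+1) * (u0*(P^k)*(h*d));   % sum_i e_i*d_i = h*d
end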


Now, we consider a SISO system having a transfer function of the following form:

$$\frac{a_0 + a_1 s}{b_0 + b_1 s + s^2} = H_0 + H_1 s + H_2 s^2 + H_3 s^3 \tag{15.28}$$

From Eq. (15.28), equating the coefficients of like powers of s, we get

$$a_0 = b_0 H_0 \tag{15.29}$$

$$a_1 = b_0 H_1 + b_1 H_0 \tag{15.30}$$

$$b_0 H_2 + b_1 H_1 = -H_0 \tag{15.31}$$

$$b_0 H_3 + b_1 H_2 = -H_1 \tag{15.32}$$

Solving (15.29)–(15.32), which form a linear system in the unknowns $(a_0, a_1, b_0, b_1)$, the $a_i$'s and $b_i$'s may be obtained by Cramer's rule as

$$a_0 = \frac{1}{\Delta}\begin{vmatrix} 0 & 0 & -H_0 & 0\\ 0 & 1 & -H_1 & -H_0\\ -H_0 & 0 & H_2 & H_1\\ -H_1 & 0 & H_3 & H_2 \end{vmatrix}, \qquad
a_1 = \frac{1}{\Delta}\begin{vmatrix} 1 & 0 & -H_0 & 0\\ 0 & 0 & -H_1 & -H_0\\ 0 & -H_0 & H_2 & H_1\\ 0 & -H_1 & H_3 & H_2 \end{vmatrix},$$

$$b_0 = \frac{1}{\Delta}\begin{vmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & -H_0\\ 0 & 0 & -H_0 & H_1\\ 0 & 0 & -H_1 & H_2 \end{vmatrix}, \qquad
b_1 = \frac{1}{\Delta}\begin{vmatrix} 1 & 0 & -H_0 & 0\\ 0 & 1 & -H_1 & 0\\ 0 & 0 & H_2 & -H_0\\ 0 & 0 & H_3 & -H_1 \end{vmatrix} \tag{15.33}$$

where

$$\Delta = \begin{vmatrix} 1 & 0 & -H_0 & 0\\ 0 & 1 & -H_1 & -H_0\\ 0 & 0 & H_2 & H_1\\ 0 & 0 & H_3 & H_2 \end{vmatrix}$$
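Numerically, there is no need to expand the determinants of Eq. (15.33): the same result follows from solving Eqs. (15.29)–(15.32) as one linear system. A MATLAB sketch is given below; the moment values are illustrative (they happen to be the exact moments of H(s) = 1/(s^2 + 3s + 2), so the solve returns a0 = 1, a1 = 0, b0 = 2, b1 = 3).

% Sketch: parameters of Eq. (15.28) from the moments H0 ... H3 by solving
% the linear system of Eqs. (15.29)-(15.32) (equivalent to Eq. (15.33)).
H0 = 0.5; H1 = -0.75; H2 = 0.875; H3 = -0.9375;   % illustrative moment values
Amat = [1 0 -H0   0 ;
        0 1 -H1 -H0 ;
        0 0  H2  H1 ;
        0 0  H3  H2 ];
x = Amat\[0; 0; -H0; -H1];        % x = [a0; a1; b0; b1]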

15.3.1.1 Numerical Example

Example 15.1 Consider the function

$$g(t) = \exp(-t) - \exp(-2t) \tag{15.34}$$

We express it in the s-domain in the form of (15.28). That is

$$H(s) = \frac{1}{s^2 + 3s + 2} = \frac{a_1 s + a_0}{s^2 + b_1 s + b_0} \tag{15.35}$$

Then, using Eqs. (15.27) and (15.33), the unknown parameters may be solved for m = 64 and λ = 1, λ = 6 and λ = 12, as shown in Table 15.1. Also, percentage errors for such estimation for λ = 12 are presented in Table 15.2.

Table 15.1 Estimated parameters of the transfer function of Example 15.1 for m = 64 and for three different values of λ, in block pulse function domain

Parameters   Actual   λ = 1          λ = 6        λ = 12
a0           1        2.54176555     0.76514514   0.97307956
a1           0        −0.25154866    0.05825146   0.01235613
b0           2        12.72230064    1.55297667   1.94618304
b1           3        6.11098678     2.40403634   2.94373198

Table 15.2 Percentage errors for parameter estimation of the transfer function of Example 15.1 for m = 64 and λ = 12, in BPF domain

Parameters   Actual values ca   BPF domain values ch   % Error e = ((ca − ch)/ca) × 100
a0           1                  0.97307956             2.69204374
a1           0                  0.01235613             –
b0           2                  1.94618304             2.69084797
b1           3                  2.94373198             1.87560054

15.3.2 Using Non-optimal Block Pulse Functions (NOBPF)

Referring to Sect. 1.2.11, if we use a set of non-optimal block pulse functions $\Psi'(s)$ [11] instead of the traditional block pulse functions, the results obtained are slightly different.
In non-optimal block pulse function domain, $h(\lambda s)$ can be written as

$$h(\lambda s) = \sum_{i=0}^{m-1} D2_{(l-1)i}\,\psi'_i(s); \quad 0 \le s < 1 \tag{15.36}$$

where each element, $d2_{0i}$ (say), of $D2_{(l-1)i}$ is the average of two consecutive samples, $c_i$ and $c_{(i+1)}$ (say), of each component of the function $h(\lambda s)$. That is

$$d2_{0i} = \frac{c_i + c_{(i+1)}}{2}$$
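As a small illustration, the NOBPF coefficients are obtained from the samples in one line of MATLAB; the impulse response and the values of λ and m below are assumptions of the sketch.

% Sketch: NOBPF coefficients as averages of consecutive samples of h(lambda*s).
lambda = 12; m = 64;
s  = linspace(0,1,m+1);
c  = exp(-lambda*s) - exp(-2*lambda*s);   % samples c_i of h(lambda*s), illustrative
d2 = (c(1:m) + c(2:m+1))/2;               % d2_i = (c_i + c_(i+1))/2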

As before,

$$u_0^T \Psi'(s) = \sum_{i=0}^{(m-1)} \psi'_i(s) = 1$$

Using the above relation in Eq. (15.24), we have, as per Eq. (15.25),

$$H_k = (-1)^k \lambda^{(k+1)}\, u_0^T P^k \sum_{i=0}^{m-1}\left[\int_0^1 \psi'_i(s)\Psi'(s)\,ds\right] D2_{(l-1)i} \tag{15.37}$$

where P is the operational matrix for integration in non-optimal block pulse function domain.
It is to be noted that P remains unaltered for both the traditional block pulse function based derivation and the non-optimal block pulse function based derivation.
So, Eq. (15.37) becomes

$$H_k = (-1)^k \lambda^{(k+1)}\, u_0^T P^k \sum_{i=0}^{m-1} e_i\, D2_{(l-1)i} \tag{15.38}$$

which is similar to Eq. (15.26).
If we take m = 4 then, following Sect. 15.3.1, we get

$$H_k = (-1)^k \lambda^{(k+1)}\, u_0^T P^k \left\{[\,h\;0\;0\;0\,]^T D2_{(l-1)0} + [\,0\;h\;0\;0\,]^T D2_{(l-1)1} + [\,0\;0\;h\;0\,]^T D2_{(l-1)2} + [\,0\;0\;0\;h\,]^T D2_{(l-1)3}\right\}$$

$$\text{or,}\quad H_k = \beta \sum_{i=0}^{3} e_i\, D2_{(l-1)i} \tag{15.39}$$


We use Eqs. (15.39) to solve for the parameters in (15.35) in NOBPF domain.
For m = 64 and λ = 1, λ = 6 and λ = 12 the results are shown in Table 15.3. Also,
percentage errors for such estimation for λ = 12 is presented in Table 15.4.

Table 15.3 Estimated parameters of the transfer function of Example 15.1 for m = 64 and for
three different values of λ, in non-optimal block pulse function domain
Parameters Actual λ=1 λ=6 λ = 12
a0 1 2.54147603 0.77203411 0.96755155
a1 0 −0.25160253 0.05617966 0.00373086
b0 2 12.72227298 1.55404255 1.94648567
b1 3 6.11095662 2.40494375 2.94403456

Table 15.4 Percentage errors for parameter estimation of the transfer function of Example 15.1 for m = 64 and λ = 12, in non-optimal block pulse function domain

Parameters   Actual values ca   NOBPF domain values ch   % Error e = ((ca − ch)/ca) × 100
a0           1                  0.96755155               3.24484513
a1           0                  0.00373086               –
b0           2                  1.94648567               2.67571651
b1           3                  2.94403456               1.86551468

15.3.3 Using Triangular Functions (TF)

In the triangular function domain [12], referring to Sect. 2.3, $h(\lambda s)$ is given by

$$h(\lambda s) = \sum_{i=0}^{m-1}\left[D31_{(l-1)i}\,T1_i(s) + D32_{(l-1)i}\,T2_i(s)\right]; \quad 0 \le s < 1 \tag{15.40}$$

where each element, $d31_{0i}$ (say), of $D31_{(l-1)i}$ and $d32_{0i}$ (say), of $D32_{(l-1)i}$ are the i-th and (i + 1)th samples, respectively, of each component of the function $h(\lambda s)$.
For triangular functions, the unit step function is represented as

$$u(t) = u_0^T\left[\mathbf{T1}(s) + \mathbf{T2}(s)\right]$$

To determine the formula for repeated integration in triangular function domain, we proceed from the basic relation for first integration, which is

$$\int \mathbf{T1}\,dt = P1\,\mathbf{T1} + P2\,\mathbf{T2} = \int \mathbf{T2}\,dt \tag{15.41}$$

where P1 and P2 are operational matrices for integration related to T1 and T2 [12] respectively. Also, P1 and P2 are related to P, the operational matrix for integration in BPF domain, by the following equation

$$P = P1 + P2 \tag{15.42}$$

Using relations (15.41) and (15.42), second integration of T1 is

$$\iint \mathbf{T1}\,dt = \int(P1\,\mathbf{T1} + P2\,\mathbf{T2})\,dt = P1\int \mathbf{T1}\,dt + P2\int \mathbf{T2}\,dt = (P1 + P2)\int \mathbf{T1}\,dt = P\int \mathbf{T1}\,dt$$

Similarly, third integration of T1 is given by

$$\iiint \mathbf{T1}\,dt = P^2 \int \mathbf{T1}\,dt$$

Thus, keeping in mind relations (15.41) and (15.42) above, k-times integration of T1 and T2 yields

$$\underbrace{\int\cdots\int}_{k\ \text{times}} \mathbf{T1}\,dt = \underbrace{\int\cdots\int}_{k\ \text{times}} \mathbf{T2}\,dt = P^{k-1}\int \mathbf{T1}\,dt = P^{k-1}\int \mathbf{T2}\,dt = P^{k-1}(P1\,\mathbf{T1} + P2\,\mathbf{T2}) \tag{15.43}$$
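A short MATLAB check of relation (15.42) is given below. It assumes the usual explicit forms of the TF-domain operational matrices from [12], namely P1 = (h/2) × (strictly upper triangular ones) and P2 = (h/2) × (upper triangular ones including the diagonal); under this assumption their sum reproduces the BPF matrix P.

% Sketch: TF-domain integration matrices and the check P = P1 + P2 of Eq. (15.42).
m = 8; h = 1/m;
P1 = (h/2)*triu(ones(m),1);              % assumed form, related to T1
P2 = (h/2)*triu(ones(m),0);              % assumed form, related to T2
P  = h*(0.5*eye(m) + triu(ones(m),1));   % BPF integration matrix
max(max(abs(P - (P1 + P2))))             % returns 0, verifying Eq. (15.42)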

Hence, using (15.43) and from Eq. (15.21) we have

$$\begin{aligned}
H_k &= (-1)^k \lambda^{(k+1)}\, 2u_0^T P^{(k-1)} \int_0^1 \left[P1\,\mathbf{T1}(s) + P2\,\mathbf{T2}(s)\right]\left[\sum_{i=0}^{m-1} D31_{(l-1)i}\,T1_i(s) + \sum_{i=0}^{m-1} D32_{(l-1)i}\,T2_i(s)\right] ds\\
&= (-1)^k \lambda^{(k+1)}\, 2u_0^T P^{(k-1)} \sum_{i=0}^{m-1}\int_0^1 \left[P1\,D31_{(l-1)i}\,T1_i(s)\mathbf{T1}(s) + P2\,D31_{(l-1)i}\,T1_i(s)\mathbf{T2}(s)\right.\\
&\qquad\qquad\left. +\; P1\,D32_{(l-1)i}\,T2_i(s)\mathbf{T1}(s) + P2\,D32_{(l-1)i}\,T2_i(s)\mathbf{T2}(s)\right] ds\\
&= (-1)^k \lambda^{(k+1)}\, 2u_0^T P^{(k-1)} \left\{\sum_{i=0}^{m-1} P1\,D31_{(l-1)i}\int_0^1 T1_i(s)\mathbf{T1}(s)\,ds + \sum_{i=0}^{m-1} P2\,D31_{(l-1)i}\int_0^1 T1_i(s)\mathbf{T2}(s)\,ds\right.\\
&\qquad\qquad\left. +\; \sum_{i=0}^{m-1} P1\,D32_{(l-1)i}\int_0^1 T2_i(s)\mathbf{T1}(s)\,ds + \sum_{i=0}^{m-1} P2\,D32_{(l-1)i}\int_0^1 T2_i(s)\mathbf{T2}(s)\,ds\right\}
\end{aligned} \tag{15.44}$$

Due to the orthogonal property,

$$\int_0^1 T1_i(s)\,\mathbf{T1}(s)\,ds = e1_i = \left[\,0\;\cdots\;0\;\;\tfrac{h}{3}\;\;0\;\cdots\;0\,\right]^T = \int_0^1 T2_i(s)\,\mathbf{T2}(s)\,ds \tag{15.45}$$

$$\int_0^1 T1_i(s)\,\mathbf{T2}(s)\,ds = e2_i = \left[\,0\;\cdots\;0\;\;\tfrac{h}{6}\;\;0\;\cdots\;0\,\right]^T = \int_0^1 T2_i(s)\,\mathbf{T1}(s)\,ds \tag{15.46}$$

with the nonzero entry in the (i + 1)th position (m columns).

So Eq. (15.44) becomes

$$H_k = (-1)^k \lambda^{(k+1)}\, 2u_0^T P^{(k-1)} \left\{\sum_{i=0}^{m-1} P1\,e1_i\,D31_{(l-1)i} + \sum_{i=0}^{m-1} P2\,e2_i\,D31_{(l-1)i} + \sum_{i=0}^{m-1} P1\,e2_i\,D32_{(l-1)i} + \sum_{i=0}^{m-1} P2\,e1_i\,D32_{(l-1)i}\right\}$$
For m = 4, following the earlier procedure, we get

$$\begin{aligned}
H_k = (-1)^k \lambda^{(k+1)}\,2u_0^T P^{(k-1)}\Big[
&\;P1\big\{[\,\tfrac{h}{3}\;0\;0\;0\,]^T D31_{(l-1)0} + [\,0\;\tfrac{h}{3}\;0\;0\,]^T D31_{(l-1)1} + [\,0\;0\;\tfrac{h}{3}\;0\,]^T D31_{(l-1)2} + [\,0\;0\;0\;\tfrac{h}{3}\,]^T D31_{(l-1)3}\big\}\\
+&\;P2\big\{[\,\tfrac{h}{6}\;0\;0\;0\,]^T D31_{(l-1)0} + [\,0\;\tfrac{h}{6}\;0\;0\,]^T D31_{(l-1)1} + [\,0\;0\;\tfrac{h}{6}\;0\,]^T D31_{(l-1)2} + [\,0\;0\;0\;\tfrac{h}{6}\,]^T D31_{(l-1)3}\big\}\\
+&\;P1\big\{[\,\tfrac{h}{6}\;0\;0\;0\,]^T D32_{(l-1)0} + [\,0\;\tfrac{h}{6}\;0\;0\,]^T D32_{(l-1)1} + [\,0\;0\;\tfrac{h}{6}\;0\,]^T D32_{(l-1)2} + [\,0\;0\;0\;\tfrac{h}{6}\,]^T D32_{(l-1)3}\big\}\\
+&\;P2\big\{[\,\tfrac{h}{3}\;0\;0\;0\,]^T D32_{(l-1)0} + [\,0\;\tfrac{h}{3}\;0\;0\,]^T D32_{(l-1)1} + [\,0\;0\;\tfrac{h}{3}\;0\,]^T D32_{(l-1)2} + [\,0\;0\;0\;\tfrac{h}{3}\,]^T D32_{(l-1)3}\big\}\Big]
\end{aligned}$$

$$\text{or,}\quad H_k = \gamma\sum_{i=0}^{3}\left[\gamma 1_i\,D31_{(l-1)i} + \gamma 2_i\,D31_{(l-1)i} + \gamma 3_i\,D32_{(l-1)i} + \gamma 4_i\,D32_{(l-1)i}\right] \tag{15.47}$$

where $\gamma \triangleq (-1)^k \lambda^{(k+1)}\,2u_0^T P^{(k-1)}$ and $\gamma 1_i \triangleq P1\,e1_i$, $\gamma 2_i \triangleq P2\,e2_i$, $\gamma 3_i \triangleq P1\,e2_i$, $\gamma 4_i \triangleq P2\,e1_i$.

Table 15.5 Estimated parameters of the transfer function of Example 15.1 for m = 64 and for
three different values of λ, in triangular function domain
Parameters Actual λ=1 λ=6 λ = 12
a0 1 2.54176778 0.77261257 0.97296430
a1 0 −0.25152488 0.05825568 0.01226366
b0 2 12.72231195 1.55292352 1.94613618
b1 3 6.11098640 2.40394316 2.94367405

Table 15.6 Percentage errors for parameter estimation of the transfer function of Example 15.1 for m = 64 and λ = 12, in triangular function domain

Parameters   Actual values ca   TF domain values ch   % Error e = ((ca − ch)/ca) × 100
a0           1                  0.97296430            2.70357003
a1           0                  0.01226366            –
b0           2                  1.94613618            2.69319093
b1           3                  2.94367405            1.87753173

Following the procedure used in Sect. 15.3.1, we determine the values of the $a_i$'s and $b_i$'s in TF domain.
Tables 15.5 and 15.6 present the results of TF domain based estimation.

15.3.4 Using Hybrid Functions (HF)

In hybrid function domain [13], referring to Sect. 2.4, $h(\lambda s)$ can be written as

$$h(\lambda s) = \sum_{i=0}^{m-1} D41_{(l-1)i}\,S_i(s) + \sum_{i=0}^{m-1} D42_{(l-1)i}\,T_i(s); \quad 0 \le s < 1 \tag{15.48}$$

where the $D41_{(l-1)i}$'s are the samples and the $D42_{(l-1)i}$'s are the differences between two consecutive samples of $h(\lambda s)$, e.g. $(D41_{(i+1)} - D41_{i})$.
As before, following (15.23), in hybrid function domain we write

$$u_{01}^T \mathbf{S}(s) + u_{02}^T \mathbf{T}(s) = \sum_{i=0}^{m-1} u_{01}^T S_i(s) + \sum_{i=0}^{m-1} u_{02}^T T_i(s) = 1$$

where $u_{01}^T = [\,1\;\;1\;\cdots\;1\;\;1\,]$ and $u_{02}^T = [\,0\;\;0\;\cdots\;0\;\;0\,]$, each with m columns.

To determine the formula for repeated integration in hybrid function domain, we proceed from the relation for first integration of sample-and-hold functions.
From Eq. (4.9), we have

$$\int \mathbf{S}_{(m)}\,dt = h\sum_{i=1}^{m-1} Q^i_{(m)}\,\mathbf{S}_{(m)} + h\,I_{(m)}\,\mathbf{T}_{(m)} = P1ss_{(m)}\,\mathbf{S}_{(m)} + P1st_{(m)}\,\mathbf{T}_{(m)} \tag{15.49}$$

Comparing Eqs. (4.9) and (4.18), we have

$$\int \mathbf{T}_{(m)}\,dt = \frac{1}{2}\int \mathbf{S}_{(m)}\,dt \tag{15.50}$$

Dropping the subscript m for convenience and using (15.49) and (15.50), the second integration of S is given by

$$\iint \mathbf{S}\,dt = \int(P1ss\,\mathbf{S} + hI\,\mathbf{T})\,dt = P1ss\int \mathbf{S}\,dt + h\int \mathbf{T}\,dt = P1ss\int \mathbf{S}\,dt + \frac{1}{2}hI\int \mathbf{S}\,dt = \left(P1ss + \frac{1}{2}hI\right)\int \mathbf{S}\,dt$$

The third integration of S is

$$\iiint \mathbf{S}\,dt = \left(P1ss + \frac{1}{2}hI\right)\iint \mathbf{S}\,dt = \left(P1ss + \frac{1}{2}hI\right)^2\int \mathbf{S}\,dt$$

Repeating this procedure, we obtain

$$\underbrace{\int\cdots\int}_{k\ \text{times}} \mathbf{S}\,dt = \left(P1ss + \frac{1}{2}hI\right)^{(k-1)}(P1ss\,\mathbf{S} + hI\,\mathbf{T}) = \left(P1ss + \frac{1}{2}hI\right)^{(k-1)}(P1ss\,\mathbf{S} + h\,\mathbf{T})$$
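The repeated-integration operator appearing above is simple to form numerically. The sketch below assumes P1ss = h × (strictly upper triangular ones), which is consistent with Eq. (15.49) when the delay terms Q^i are summed; the values of m, h and k are illustrative.

% Sketch: k-fold integration operator for the SHF part in HF domain.
m = 8; h = 1/m; k = 3;
P1ss = h*triu(ones(m),1);                % assumed SHF integration matrix
Opk  = (P1ss + (h/2)*eye(m))^(k-1);      % operator multiplying (P1ss*S + h*T)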

Then, proceeding as before, we have

$$\begin{aligned}
H_k &= (-1)^k \lambda^{(k+1)}\, u_{01}^T \int_0^1 \Big(P1ss + \tfrac{h}{2}I\Big)^{(k-1)}\big(P1ss\,\mathbf{S} + hI\,\mathbf{T}\big)\Bigg[\sum_{i=0}^{m-1} D41_{(l-1)i}\,S_i(s) + \sum_{i=0}^{m-1} D42_{(l-1)i}\,T_i(s)\Bigg]ds\\
&= (-1)^k \lambda^{(k+1)}\, u_{01}^T \Big(P1ss + \tfrac{h}{2}I\Big)^{(k-1)}\sum_{i=0}^{m-1}\int_0^1\Big[P1ss\,D41_{(l-1)i}\,S_i(s)\mathbf{S}(s) + h\,D41_{(l-1)i}\,S_i(s)\mathbf{T}(s)\\
&\qquad\qquad + P1ss\,D42_{(l-1)i}\,T_i(s)\mathbf{S}(s) + h\,D42_{(l-1)i}\,T_i(s)\mathbf{T}(s)\Big]ds\\
&= (-1)^k \lambda^{(k+1)}\, u_{01}^T \Big(P1ss + \tfrac{h}{2}I\Big)^{(k-1)}\Bigg\{\sum_{i=0}^{m-1}P1ss\,D41_{(l-1)i}\int_0^1 S_i(s)\mathbf{S}(s)\,ds + \sum_{i=0}^{m-1}h\,D41_{(l-1)i}\int_0^1 S_i(s)\mathbf{T}(s)\,ds\\
&\qquad\qquad + \sum_{i=0}^{m-1}P1ss\,D42_{(l-1)i}\int_0^1 T_i(s)\mathbf{S}(s)\,ds + \sum_{i=0}^{m-1}h\,D42_{(l-1)i}\int_0^1 T_i(s)\mathbf{T}(s)\,ds\Bigg\}
\end{aligned} \tag{15.51}$$

Due to the orthogonal property, we have the following three relations

$$\int_0^1 S_i(s)\,\mathbf{S}(s)\,ds = e_i = [\,0\;\cdots\;0\;\;h\;\;0\;\cdots\;0\,]^T$$

$$\int_0^1 S_i(s)\,\mathbf{T}(s)\,ds = \int_0^1 T_i(s)\,\mathbf{S}(s)\,ds = e3_i = \left[\,0\;\cdots\;0\;\;\tfrac{h}{2}\;\;0\;\cdots\;0\,\right]^T$$

$$\int_0^1 T_i(s)\,\mathbf{T}(s)\,ds = e1_i = \left[\,0\;\cdots\;0\;\;\tfrac{h}{3}\;\;0\;\cdots\;0\,\right]^T$$

with the nonzero entry in the (i + 1)th position (m columns).

Using the above relations in Eq. (15.51), we get

$$H_k = (-1)^k \lambda^{(k+1)}\, u_{01}^T \Big(P1ss + \tfrac{h}{2}I\Big)^{(k-1)}\left\{\sum_{i=0}^{m-1} P1ss\,e_i\,D41_{(l-1)i} + \sum_{i=0}^{m-1} h\,e3_i\,D41_{(l-1)i} + \sum_{i=0}^{m-1} P1ss\,e3_i\,D42_{(l-1)i} + \sum_{i=0}^{m-1} h\,e1_i\,D42_{(l-1)i}\right\} \tag{15.52}$$

If we consider m = 4, then

$$\begin{aligned}
H_k = (-1)^k \lambda^{(k+1)}\, u_{01}^T \Big(P1ss + \tfrac{h}{2}I\Big)^{(k-1)}\Big[
&\;P1ss\big\{[\,h\;0\;0\;0\,]^T D41_{(l-1)0} + [\,0\;h\;0\;0\,]^T D41_{(l-1)1} + [\,0\;0\;h\;0\,]^T D41_{(l-1)2} + [\,0\;0\;0\;h\,]^T D41_{(l-1)3}\big\}\\
+\;h&\;\big\{[\,\tfrac{h}{2}\;0\;0\;0\,]^T D41_{(l-1)0} + [\,0\;\tfrac{h}{2}\;0\;0\,]^T D41_{(l-1)1} + [\,0\;0\;\tfrac{h}{2}\;0\,]^T D41_{(l-1)2} + [\,0\;0\;0\;\tfrac{h}{2}\,]^T D41_{(l-1)3}\big\}\\
+&\;P1ss\big\{[\,\tfrac{h}{2}\;0\;0\;0\,]^T D42_{(l-1)0} + [\,0\;\tfrac{h}{2}\;0\;0\,]^T D42_{(l-1)1} + [\,0\;0\;\tfrac{h}{2}\;0\,]^T D42_{(l-1)2} + [\,0\;0\;0\;\tfrac{h}{2}\,]^T D42_{(l-1)3}\big\}\\
+\;h&\;\big\{[\,\tfrac{h}{3}\;0\;0\;0\,]^T D42_{(l-1)0} + [\,0\;\tfrac{h}{3}\;0\;0\,]^T D42_{(l-1)1} + [\,0\;0\;\tfrac{h}{3}\;0\,]^T D42_{(l-1)2} + [\,0\;0\;0\;\tfrac{h}{3}\,]^T D42_{(l-1)3}\big\}\Big]
\end{aligned}$$

$$\text{or,}\quad H_k = \delta\sum_{i=0}^{3}\left[\delta 1_i\,D41_{(l-1)i} + \delta 2_i\,D41_{(l-1)i} + \delta 3_i\,D42_{(l-1)i} + \delta 4_i\,D42_{(l-1)i}\right] \tag{15.53}$$

where $\delta = (-1)^k \lambda^{(k+1)}\, u_{01}^T \left(P1ss + \tfrac{h}{2}I\right)^{(k-1)}$ and $\delta 1_i \triangleq P1ss\,e_i$, $\delta 2_i \triangleq h\,e3_i$, $\delta 3_i \triangleq P1ss\,e3_i$, $\delta 4_i \triangleq h\,e1_i$.
Using Eq. (15.53) and following the earlier procedure, the values of ai ’s and bi ’s
determined via HF domain are tabulated in Tables 15.7 and 15.8.

Table 15.7 Estimated parameters of the transfer function of Example 15.1 for m = 64 and for
three different values of λ, in hybrid function domain
Parameters Actual λ=1 λ=6 λ = 12
a0 1 2.54431980 0.77367044 0.99396497
a1 0 −0.23665257 0.06699767 0.04655428
b0 2 12.72939710 1.54597031 1.94350586
b1 3 6.11083725 2.39578631 2.94075968

Table 15.8 Percentage errors for parameter estimation of the transfer function of Example 15.1 for m = 64 and λ = 12, in hybrid function domain

Parameters   Actual values ca   HF domain values ch   % Error e = ((ca − ch)/ca) × 100
a0           1                  0.99396497            0.60350298
a1           0                  0.04655428            –
b0           2                  1.94350586            2.82470706
b1           3                  2.94075968            1.97467736

15.3.5 Solution in SHF Domain from the HF Domain Solution

To find out the solution in sample-and-hold function domain [14], we simply discard the triangular function component of the hybrid function representation in Eq. (15.48) for $h(\lambda s)$ and write

$$h(\lambda s) = \sum_{i=0}^{m-1} D41_{(l-1)i}\,S_i(s); \quad 0 \le s < 1$$

where the $D41_{(l-1)i}$'s are the samples of $h(\lambda s)$ with the sampling period h.
In this case also,

$$u_{01}^T = [\,1\;\;1\;\cdots\;1\;\;1\,] \quad (m\ \text{columns})$$

Then, from (15.51), we write

$$H_k = (-1)^k \lambda^{(k+1)}\, u_{01}^T \left(P1ss + \tfrac{h}{2}I\right)^{(k-1)} P1ss \sum_{i=0}^{m-1} D41_{(l-1)i}\int_0^1 S_i(s)\,\mathbf{S}(s)\,ds$$

Due to the orthogonal property,

$$\int_0^1 S_i(s)\,\mathbf{S}(s)\,ds = e_i = [\,0\;\cdots\;0\;\;h\;\;0\;\cdots\;0\,]^T \quad (h\ \text{in the}\ (i+1)\text{th position;}\ m\ \text{columns})$$

Equation (15.51) now becomes

$$H_k = (-1)^k \lambda^{(k+1)}\, u_{01}^T \left(P1ss + \tfrac{h}{2}I\right)^{(k-1)} \sum_{i=0}^{m-1} P1ss\,e_i\,D41_{(l-1)i}$$

If we take m = 4, then

Table 15.9 Estimated parameters of the transfer function of Example 15.1 for m = 64 and for
three different values of λ, in sample-and-hold function domain
Parameters Actual λ=1 λ=6 λ = 12
a0 1 2.54442701 0.77194125 0.98292877
a1 0 −0.22952649 0.06307111 0.02956877
b0 2 12.73277718 1.54703132 1.94374640
b1 3 6.11061106 2.39575881 2.94090502

Table 15.10 Percentage errors for parameter estimation of the transfer function of Example 15.1 for m = 64 and λ = 12, in sample-and-hold function domain

Parameters   Actual values ca   SHF domain values ch   % Error e = ((ca − ch)/ca) × 100
a0           1                  0.98292877             1.70712349
a1           0                  0.02956877             –
b0           2                  1.94374640             2.81267981
b1           3                  2.94090502             1.96983257

$$H_k = (-1)^k \lambda^{(k+1)}\, u_{01}^T \left(P1ss + \tfrac{h}{2}I\right)^{(k-1)} P1ss \left\{[\,h\;0\;0\;0\,]^T D41_{(l-1)0} + [\,0\;h\;0\;0\,]^T D41_{(l-1)1} + [\,0\;0\;h\;0\,]^T D41_{(l-1)2} + [\,0\;0\;0\;h\,]^T D41_{(l-1)3}\right\}$$

$$\text{or,}\quad H_k = \rho \sum_{i=0}^{3} e_i\,D41_{(l-1)i} \tag{15.54}$$

where $\rho \triangleq (-1)^k \lambda^{(k+1)}\, u_{01}^T \left(P1ss + \tfrac{h}{2}I\right)^{(k-1)} P1ss$.


Using Eq. (15.54) the values of ai ’s and bi ’s computed via SHF domain are
tabulated in Tables 15.9 and 15.10.

15.4 Comparative Study of the Parameters of the Transfer Function Identified via Different Approaches [15]

The estimated values of the parameters of the transfer function for λ = 1, 6 and 12, derived via five types of orthogonal basis functions, i.e., BPF, NOBPF, TF, HF and SHF, for Example 15.1 are shown in Tables 15.11, 15.12 and 15.13 respectively. The comparative study of the error estimates for the system under study for λ = 12 via block pulse function, non-optimal block pulse function, triangular function, hybrid function and sample-and-hold function domains is shown in Table 15.14.

Table 15.11 Comparative study of the parameters of the transfer function of Example 15.1 under
investigation in different function domains for λ = 1
Parameters Actual BPF NOBPF TF HF SHF
values
a0 1 2.54176555 2.54147603 2.54176778 2.54431980 2.54442701
a1 0 −0.25154866 −0.25160253 −0.25152488 −0.23665257 −0.22952649
b0 2 12.72230064 12.72227298 12.72231195 12.72939710 12.73277718
b1 3 6.11098678 6.11095662 6.11098640 6.11083725 6.11061106

Table 15.12 Comparative study of the parameters of the transfer function of Example 15.1 under
investigation in different function domains for λ = 6
Parameters Actual values BPF NOBPF TF HF SHF
a0 1 0.76514514 0.77203411 0.77261257 0.77367044 0.77194125
a1 0 0.05825146 0.05617966 0.05825568 0.06699767 0.06307111
b0 2 1.55297667 1.55404255 1.55292352 1.54597031 1.54703132
b1 3 2.40403634 2.40494375 2.40394316 2.39578631 2.39575881

Table 15.13 Comparative study of the parameters of the transfer function of Example 15.1 under
investigation in different function domains for λ = 12
Parameters Actual values BPF NOBPF TF HF SHF
a0 1 0.97307956 0.96755155 0.97296430 0.99396497 0.98292877
a1 0 0.01235613 0.00373086 0.01226366 0.04655428 0.02956877
b0 2 1.94618304 1.94648567 1.94613618 1.94350586 1.94374640
b1 3 2.94373198 2.94403456 2.94367405 2.94075968 2.94090502

Table 15.14 Comparative study of error estimates of the parameters for the system of Example
15.1 under study for λ = 12 via different function domains
Parameters % Error in % Error in % Error in % Error in % Error in
BPF NOBPF TF HF SHF
a0 2.69204374 3.24484513 2.70357003 0.60350298 1.70712349
a1 – – – – –
b0 2.69084797 2.67571651 2.69319093 2.82470706 2.81267981
b1 1.87560054 1.86551468 1.87753173 1.97467736 1.96983257

15.5 Comparison of Errors for BPF, NOBPF, TF, HF and SHF Domain Approaches [15]

Figures 15.1a–d show the estimated values of the parameters, for λ = 1, derived via the five kinds of orthogonal function bases. Figure 15.2 shows the estimated values of two parameters, a0 and b1, for λ = 6 to compare the results obtained via the five different basis functions, while Fig. 15.3 presents all the parameters for λ = 12.

Fig. 15.1 Magnified view of the estimated parameters (a0, a1, b0 and b1) as per Tables 15.1, 15.3,
15.5, 15.7 and 15.9 for λ = 1 (vide Appendix B, Program no. 38)

Fig. 15.2 Magnified view of two estimated parameters (a0 and b1) as per Tables 15.1, 15.3, 15.5, 15.7 and 15.9 for λ = 6 (vide Appendix B, Program no. 38)

Fig. 15.3 Magnified view of the estimated parameters (a0, a1, b0 and b1) as per Tables 15.1, 15.3,
15.5, 15.7 and 15.9 for λ = 12 (vide Appendix B, Program no. 38)

15.6 Conclusion

The problem of parameter estimation of the transfer function of a multivariable system has been treated to determine the solution via orthogonal functions using a generalized algorithm. The derived algorithm is employed to solve for the parameters of the transfer function of a system via five different orthogonal sets, e.g., (i) block pulse functions (BPF), (ii) non-optimal block pulse functions (NOBPF), (iii) triangular functions (TF), (iv) hybrid functions (HF) and (v) sample-and-hold functions (SHF).
Many tables are presented to compare the accuracies of the different methods. Different curves are plotted to indicate the values of the four parameters, a0, a1, b0 and b1, of a second order transfer function, computed via the five different methods. From Tables 15.11, 15.12 and 15.13, we conclude that the parameter a0 of the transfer function is closest to the actual value in HF domain.
It is noted that none of the presented methods proves itself absolutely superior to the others, but from Table 15.14, it is observed that the minimum percentage error is obtained for a0 for HF domain based computation.

Finally, an advantage of HF based analysis is that the sample-and-hold function based result may easily be obtained by simply dropping the triangular part of the hybrid function based result. This advantage may prove significant for function approximation as well as for control system analysis.

References

1. Eykhoff, P.: System Identification: Parameter and State Estimation. Wiley, London (1974)
2. Chen, C.T.: Linear System Theory and Design. Holt Rinehart and Winston, Holt-Saunders,
Japan (1984)
3. Unbehauen, H., Rao, G.P.: Identification of Continuous Systems. North-Holland, Amsterdam
(1987)
4. Ljung, L.: System Identification: Theory for the User. Prentice-Hall Inc., New Jersey (1985)
5. Pintelon, R., Guillaume, P., Rolain, Y., Schoukens, J., Van Hamme, H.: Parametric
identification of transfer functions in the frequency domain: a survey. IEEE Trans. Autom.
Control 39(11), 2245–2259 (1994)
6. Paraskevopoulos, P.N., Varoufakis, S.J.: Transfer function determination from impulse
response via Walsh functions. Int. J. Circuit Theor. Appl. 8, 85–89 (1980)
7. Baker Jr, G.A., Graves-Morris, P.: Padé Approximants. Cambridge University Press, New
York (1996)
8. Baker Jr., G.A.: Padé Approximants in Theoretical Physics, pp. 27–38. Academic Press, New
York (1975)
9. Zakian, V.: Simplification of linear time-invariant systems by moment approximants. Int.
J. Control 18, 455–460 (1973)
10. Jiang, J.H., Schaufelberger, W.: Block Pulse Functions and their Applications in Control
System, LNCIS, vol. 179. Springer, Berlin (1992)
11. Deb, A., Sarkar, G., Mandal, P., Biswas, A., Sengupta, A.: Optimal block pulse function
(OBPF) versus non-optimal block pulse function (NOBPF). In: Proceedings of International
Conference of IEE (PEITSICON) 2005, pp. 195–199. Kolkata (2005) (28–29th Jan)
12. Deb, A., Sarkar, G., Sengupta, A.: Triangular Orthogonal Functions for the Analysis of Continuous Time Systems. Anthem Press, London (2011)
13. Deb, A., Sarkar, G., Mandal, P., Biswas, A., Ganguly, A., Biswas, D.: Transfer function identification from impulse response via a new set of orthogonal hybrid functions (HF). Appl. Math. Comput. 218(9), 4760–4787 (2012)
14. Deb, A., Sarkar, G., Bhattacharjee, M., Sen, S.K.: A new set of piecewise constant orthogonal functions for the analysis of linear SISO systems with sample-and-hold. J. Franklin Inst. 335B(2), 333–358 (1998)
15. Biswas, A.: Analysis and synthesis of continuous control systems using a set of orthogonal
hybrid functions. Ph. D. Dissertation, University of Calcutta (2015)
Appendix A
Introduction to Linear Algebra

A matrix is a rectangular array of variables, mathematical expressions, or simply numbers. Commonly a matrix is written as

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}. \tag{A.1}$$

The size of the matrix, with m rows and n columns, is called an m-by-n (or,
m × n) matrix, where, m and n are called its dimensions.
A matrix with one row [a (1 × n) matrix] is called a row vector, and a matrix with
one column [an (m × 1) matrix] is called a column vector. Any isolated row or
column of a matrix is a row or column vector, obtained by removing all other rows
or columns respectively from the matrix.

Square Matrices

A square matrix is a matrix with m = n, i. e., the same number of rows and columns.
An n-by-n matrix is known as a square matrix of order n. Any two square matrices
of the same order can be added, subtracted, or multiplied.
For example, each of the following matrices is a square matrix of order 4, with four rows and four columns:

$$A = \begin{bmatrix} 1 & 2 & 7 & 0\\ 3 & 4 & 3 & 1\\ 8 & 3 & 1 & 1\\ -2 & 4 & 0 & 3 \end{bmatrix}, \qquad B = \begin{bmatrix} 3 & 1 & 0 & 1\\ 2 & -3 & 4 & 2\\ 1 & 0 & 9 & 1\\ 3 & 2 & 1 & 0 \end{bmatrix}$$

Then,

$$A + B = \begin{bmatrix} 4 & 3 & 7 & 1\\ 5 & 1 & 7 & 3\\ 9 & 3 & 10 & 2\\ 1 & 6 & 1 & 3 \end{bmatrix} \qquad\text{and}\qquad A - B = \begin{bmatrix} -2 & 1 & 7 & -1\\ 1 & 7 & -1 & -1\\ 7 & 3 & -8 & 0\\ -5 & 2 & -1 & 3 \end{bmatrix}$$

Determinant

The determinant [written as det(A) or |A|] of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if, and only if, its determinant is nonzero.
The determinant of a 2-by-2 matrix is given by

$$\det\begin{bmatrix} a & b\\ c & d \end{bmatrix} = ad - bc \tag{A.2}$$

Properties
The determinant of a product of square matrices equals the product of their determinants: det(AB) = det(A) · det(B).
Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant.
Interchanging two rows or two columns multiplies the determinant by −1.
Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal.
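These properties are easy to confirm numerically; a brief MATLAB check of the product rule, with arbitrarily chosen matrices, is shown below.

% Quick numerical check of det(AB) = det(A)*det(B).
A = [1 2; 3 4];  B = [0 1; 5 2];
abs(det(A*B) - det(A)*det(B))    % zero up to round-off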

Orthogonal Matrix

An orthogonal matrix is a square matrix with real entries whose columns and rows are orthonormal vectors.
Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse. That is

$$A^T = A^{-1}$$

which implies

$$A^T A = A\,A^T = I,$$

where I is the identity matrix.
An orthogonal matrix A is necessarily invertible with inverse $A^{-1} = A^T$. The determinant of any orthogonal matrix is either +1 or −1.

Trace of a Matrix

In (A.1), the entries ai,i form the main diagonal of the matrix A. The trace, tr(A), of
the square matrix A is the sum of its diagonal entries. The trace of the product of
two matrices is independent of the order of the factors A and B. That is

$$\mathrm{tr}(AB) = \mathrm{tr}(BA).$$

Also, the trace of a matrix is equal to that of its transpose, i.e., tr(A) = tr(AT).

Diagonal, Lower Triangular and Upper Triangular Matrices

If all entries of a matrix except those of the main diagonal are zero, the matrix is called a diagonal matrix. If only all entries above (or, below) the main diagonal are zero, it is called a lower triangular matrix (or, upper triangular matrix).
For example, a diagonal matrix of 3rd order is

$$\begin{bmatrix} d_{11} & 0 & 0\\ 0 & d_{22} & 0\\ 0 & 0 & d_{33} \end{bmatrix}$$

A lower triangular matrix of 3rd order is

$$\begin{bmatrix} l_{11} & 0 & 0\\ l_{21} & l_{22} & 0\\ l_{31} & l_{32} & l_{33} \end{bmatrix}$$

and an upper triangular matrix of similar order is

$$\begin{bmatrix} u_{11} & u_{12} & u_{13}\\ 0 & u_{22} & u_{23}\\ 0 & 0 & u_{33} \end{bmatrix}$$

Symmetric Matrix

A square matrix A that is equal to its transpose, i.e., A = AT, is a symmetric matrix.
If instead, A is equal to the negative of its transpose, i.e., A = −AT, then A is a
skew-symmetric matrix.

Singular Matrix

If the determinant of a square matrix A is equal to zero, it is called a singular matrix and its inverse does not exist. Examples of two singular matrices are

$$\begin{bmatrix} 4 & 2\\ 6 & 3 \end{bmatrix} \qquad\text{and}\qquad \begin{bmatrix} 2 & 1 & 2\\ 7 & 0 & 3\\ 2 & 1 & 2 \end{bmatrix}$$

Identity Matrix or Unit Matrix

If A is a square matrix, then

$$AI = IA = A$$

where I is the identity matrix of the same order.
The identity matrix $I_{(n)}$ of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0. An identity matrix of order 3 is

$$I_{(3)} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}.$$

It is called the identity matrix because multiplication with it leaves a matrix unchanged. If A is an (m × n) matrix then

$$A_{(m\times n)}\, I_{(n)} = I_{(m)}\, A_{(m\times n)} = A_{(m\times n)}$$



Transpose of a Matrix

The transpose $A^T$ of a square matrix A can be obtained by reflecting the elements along its main diagonal. Repeating the process on the transposed matrix returns the elements to their original position.
The transpose of a matrix may be obtained by any one of the following equivalent actions:
(i) reflect A over its main diagonal to obtain $A^T$
(ii) write the rows of A as the columns of $A^T$
(iii) write the columns of A as the rows of $A^T$
Formally, the ith row, jth column element of $A^T$ is the jth row, ith column element of A. That is

$$\left[A^T\right]_{ij} = A_{ji}$$

If A is an m × n matrix then AT is an (n × m) matrix.

Properties

For matrices A, B and scalar c we have the following properties of transpose:
(i) $(A^T)^T = A$
(ii) $(A + B)^T = A^T + B^T$
(iii) $(AB)^T = B^T A^T$
Note that the order of the factors above reverses. From this one can deduce that a square matrix A is invertible if and only if $A^T$ is invertible, and in this case we have $(A^{-1})^T = (A^T)^{-1}$. By induction this result extends to the general case of multiple matrices, where we find that

$$(A_1 A_2 \ldots A_{k-1} A_k)^T = A_k^T A_{k-1}^T \ldots A_2^T A_1^T.$$

(iv) $(cA)^T = cA^T$
The transpose of a scalar is the same scalar.
(v) $\det(A^T) = \det(A)$
(vi) $\det(A^{-1}) = \dfrac{1}{\det(A)}$

Matrix Multiplication

Matrix multiplication is a binary operation that takes a pair of matrices and produces another matrix. This term normally refers to the matrix product.
Multiplication of two matrices is defined only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B. That is

$$[AB]_{i,j} = A_{i,1}B_{1,j} + A_{i,2}B_{2,j} + \cdots + A_{i,n}B_{n,j} = \sum_{r=1}^{n} A_{i,r}B_{r,j}$$

where 1 ≤ i ≤ m and 1 ≤ j ≤ p.
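The element-wise definition above translates directly into three nested loops; the following MATLAB sketch (with arbitrary example sizes) computes the same product that the built-in operator A*B returns.

% Matrix product written out element-wise, as in the definition above.
A = rand(3,4);  B = rand(4,2);
[m,n] = size(A);  p = size(B,2);
C = zeros(m,p);
for i = 1:m
    for j = 1:p
        for r = 1:n
            C(i,j) = C(i,j) + A(i,r)*B(r,j);
        end
    end
end
norm(C - A*B)    % essentially zero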
Matrix multiplication satisfies the rules
(i) (AB)C = A(BC) (associativity),
(ii) (A + B)C = AC + BC
(iii) C(A + B) = CA + CB (left and right distributivity),
whenever the size of the matrices is such that the various products are
defined.
The product AB may be defined without BA being defined, namely, if A and B
are m-by-n and n-by-k matrices, respectively, and m ≠ k.
Even if both products are defined, they need not be equal, i.e., generally one has

$$AB \neq BA,$$

i.e., matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is:

$$\begin{bmatrix} 5 & 2\\ 3 & 3 \end{bmatrix}\begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 5 & 0\\ 3 & 0 \end{bmatrix}$$

whereas

$$\begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix}\begin{bmatrix} 5 & 2\\ 3 & 3 \end{bmatrix} = \begin{bmatrix} 5 & 2\\ 0 & 0 \end{bmatrix}$$

Since det(A) and det(B) are just numbers and so commute, det(AB) = det(A)det
(B) = det(B)det(A) = det(BA), even when AB ≠ BA.

A Few Properties of Matrix Multiplication

(i) Associative

$$A(BC) = (AB)C$$

(ii) Distributive over matrix addition

$$A(B + C) = AB + AC, \qquad (A + B)C = AC + BC$$

(iii) Scalar multiplication is compatible with matrix multiplication

$$\lambda(AB) = (\lambda A)B \quad\text{and}\quad (AB)\lambda = A(B\lambda)$$

where λ is a scalar. If the entries of the matrices are real or complex numbers, then all four quantities are equal.

Inverse of a Matrix
If A is a square matrix, there may be an inverse matrix $A^{-1} = B$ such that

$$AB = BA = I$$

If this property holds, then A is an invertible matrix. If not, A is a singular or degenerate matrix.

Analytic Solution of the Inverse

The inverse of a square non-singular matrix A may be computed from the transpose of a matrix C formed by the cofactors of A. Thus, $C^T$ is known as the adjoint matrix of A. The matrix $C^T$ is divided by the determinant of A to compute $A^{-1}$. That is

$$A^{-1} = \frac{C^T}{\det(A)} = \frac{1}{\det(A)}\begin{bmatrix} c_{11} & c_{21} & \cdots & c_{n1}\\ c_{12} & c_{22} & \cdots & c_{n2}\\ \vdots & \vdots & \ddots & \vdots\\ c_{1n} & c_{2n} & \cdots & c_{nn} \end{bmatrix} \tag{A.3}$$

where C is the matrix of cofactors, and $C^T$ denotes the transpose of C.



Cofactors

Let a (3 × 3) matrix be given by

$$A = \begin{bmatrix} a & b & c\\ d & e & f\\ g & h & k \end{bmatrix}$$

Then a (3 × 3) matrix P formed by the cofactors of A is

$$P = \begin{bmatrix} A & B & C\\ D & E & F\\ G & H & K \end{bmatrix}$$

where

$$A = (ek - fh),\quad B = (fg - dk),\quad C = (dh - eg),$$
$$D = (ch - bk),\quad E = (ak - cg),\quad F = (gb - ah),$$
$$G = (bf - ce),\quad H = (cd - af)\quad\text{and}\quad K = (ae - bd).$$

Inversion of a 2 × 2 Matrix

The cofactor equation listed above yields the following result for a 2 × 2 matrix. Let the matrix to be inverted be

$$A = \begin{bmatrix} a & b\\ c & d \end{bmatrix}.$$

Then

$$A^{-1} = \begin{bmatrix} a & b\\ c & d \end{bmatrix}^{-1} = \frac{1}{\det(A)}\begin{bmatrix} d & -b\\ -c & a \end{bmatrix} = \frac{1}{(ad - bc)}\begin{bmatrix} d & -b\\ -c & a \end{bmatrix},$$

using Eqs. (A.2) and (A.3).
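The cofactor route to the inverse, Eq. (A.3), can be coded for any order; the MATLAB sketch below uses an arbitrarily chosen non-singular 3 × 3 matrix and compares the result with the built-in inverse.

% Sketch: inverse from cofactors, Eq. (A.3).
A = [2 1 0; 1 3 1; 0 1 2];                 % illustrative non-singular matrix
n = size(A,1);
C = zeros(n);
for i = 1:n
    for j = 1:n
        Minor = A; Minor(i,:) = []; Minor(:,j) = [];
        C(i,j) = (-1)^(i+j)*det(Minor);    % cofactor of a_ij
    end
end
Ainv = C.'/det(A);                         % adjugate divided by determinant
norm(Ainv - inv(A))                        % essentially zero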



Inversion of a 3 × 3 Matrix

Let the matrix to be inverted be

$$A = \begin{bmatrix} a & b & c\\ d & e & f\\ g & h & k \end{bmatrix}.$$

Then its inverse is given by

$$A^{-1} = \begin{bmatrix} a & b & c\\ d & e & f\\ g & h & k \end{bmatrix}^{-1} = \frac{1}{\det(A)}\begin{bmatrix} A & B & C\\ D & E & F\\ G & H & K \end{bmatrix}^T = \frac{1}{\det(A)}\begin{bmatrix} A & D & G\\ B & E & H\\ C & F & K \end{bmatrix} \tag{A.4}$$

where A, B, C, D, E, F, G, H, K are the cofactors of the matrix A.
The determinant of A can be computed as follows:

$$\det(A) = a(ek - fh) - b(dk - fg) + c(dh - eg).$$

If the determinant is non-zero, the matrix is invertible and determination of the cofactors subsequently leads to the computation of the inverse of A.

Similarity Transformation

Two n-by-n matrices A and B are called similar if

$$B = P^{-1}AP \tag{A.5}$$

for some invertible n-by-n matrix P.
Similar matrices represent the same linear transformation under two different bases, with P being the change of basis matrix.
The determinant of the similarity transformation of a matrix is equal to the determinant of the original matrix A:

$$\det(B) = \det\left(P^{-1}AP\right) = \det\left(P^{-1}\right)\det(A)\det(P) = \frac{\det(A)}{\det(P)}\det(P) = \det(A) \tag{A.6}$$

Also, the eigenvalues of the matrices A and B are the same. That is

$$\det(B - \lambda I) = \det\left(P^{-1}AP - \lambda I\right) = \det\left(P^{-1}AP - P^{-1}\lambda IP\right) = \det\left(P^{-1}(A - \lambda I)P\right) = \det\left(P^{-1}\right)\det(A - \lambda I)\det(P) = \det(A - \lambda I) \tag{A.7}$$

where λ is a scalar.
The eigenvalues of an n × n matrix A are the roots of the characteristic equation

$$|\lambda I - A| = 0$$

Hence, eigenvalues are also called the characteristic roots. Also, the eigenvalues are invariant under any linear transformation.
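A brief MATLAB check of Eqs. (A.6) and (A.7), with an arbitrarily chosen matrix pair, closes this appendix.

% Numerical check that similar matrices share determinant and eigenvalues.
A = [4 1; 2 3];
P = [1 1; 0 2];                 % any invertible change-of-basis matrix
B = P\A*P;                      % B = inv(P)*A*P
[det(A) det(B)]                 % equal determinants
[sort(eig(A)) sort(eig(B))]     % equal eigenvalues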
Appendix B
Selected MATLAB Programs

1. Program for adding two time functions in hybrid function


domain (Chap. 2, Fig. 2.5, p. 36)

clc
clear all
format long

%%---------- Number of Sub-intervals and Total Time ---------%%

m = input('Enter the number of sub-intervals chosen:\n')


T = input('Enter the total time period:\n')
h = T/m;
t = 0:h:T;

%%--------------- Functions for Addition -----------------%%

syms x
f = 1-exp(-x);
g = exp(-x);

%%------------------- Function Samples -------------------%%

F = subs(f,t) %Samples of first function f(t) upto final time T


G = subs(g,t) %Samples of second function g(t) upto final time T

%%--------- Hybrid Function Based Representation ---------%%

for i=1:m
F_SHF(i)=F(i); % Sample-and-Hold Function Coefficients for f(t)
end

for i=1:m
F_TF(i)=F(i+1)-F(i); % Triangular Function Coefficients for f(t)
end


for i=1:m
G_SHF(i)=G(i); % Sample-and-Hold Function Coefficients for g(t)
end

for i=1:m
G_TF(i)=G(i+1)-G(i); % Triangular Function Coefficients for g(t)
end

%%---------- Addition in Hybrid Function Domain ----------%%

A_SHF = F_SHF + G_SHF; % SHF Part of Addition


A_TF = F_TF + G_TF; % TF Part of Addition

A_m = A_SHF(m) + A_TF(m); % m-th coefficient of Addition


A = [A_SHF A_m]; % Samples for plotting Addition

%%------------------- Function Plotting -------------------%%

plot(t,F,'-^k','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(t,G,'-ok','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(t,A,'-.>k','LineWidth',2)
ylim([0 1.2])

2. Program for dividing two time functions in hybrid


function domain (Chap. 2, Fig. 2.13, p. 47)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T = input('Enter the total time period:\n')
h = T/m;
t = 0:h:T;

%%------------------- Functions for Division ---------------------%%

syms x
y = 1-exp(-x);
r = exp(-x);

%%----------------------- Function Samples -----------------------%%

Y = subs(y,t) % Samples of first function y(t) upto final time T


R = subs(r,t) % Samples of second function r(t) upto final time T

%%------------- Hybrid Function Based Representation -------------%%



for i=1:(m+1)
Y_SHF(i)=Y(i); % Sample-and-Hold Function Coefficients for y(t)
end

for i=1:(m+1)
R_SHF(i)=R(i); % Sample-and-Hold Function Coefficients for r(t)
end

%%-------------- Division in Hybrid Function Domain --------------%%

D_SHF=Y_SHF./R_SHF; % SHF Part of Multiplication

for i=1:m
D_TF(i)=D_SHF(i+1)-D_SHF(i); % TF Part of Multiplication
end

%%----------------------- Function Plotting -----------------------%%

plot(t,Y,'-^k','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(t,R,'-ok','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(t,D_SHF,'-.>k','LineWidth',2)
ylim([0 1.8])

3. Program for approximating a function f(t) = sin(πt) in BPF


domain (Chap. 3, Fig. 3.2, p. 51)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T = input('Enter the total time period:\n')
h = T/m;
t = 0:h:T;

%%------------------ Functions for Approximated ------------------%%

syms x
f=sin(pi*x); % the function to be approximated

j=0:0.01:T;
plot(j,sin(pi*j),'--k','LineWidth',2) % plot of the exact function
hold on

%%------------------- BPF Based Representation -------------------%%

C=zeros(1,m);
for k=1:m
C(k)=(m/T)*int(f,t(k),t(k+1)); % Calculating BPF Coefficients
end

Coeff=[C C(m)]; % For plotting the BPF Coefficients of the function

%%----------------------- Function Plotting -----------------------%%

stairs(t,Coeff,'-k','LineWidth',2)
ylim([0 1])

4. Program for approximating a function f(t) = sin(πt) in


hybrid function domain (Chap. 3, Fig. 3.5, p. 54)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T = input('Enter the total time period:\n')
h = T/m;
t = 0:h:T;

%%----------------- Function to be Approximated -----------------%%

syms x
f=sin(pi*x); % Function to be Approximated
F=subs(f,t) % Samples of first function f(t) upto final time T

%%------------- Hybrid Function Based Representation ------------%%

for i=1:m
F_SHF(i)=F(i); % Sample-and-Hold Function Coefficients for f(t)
end

for i=1:m
F_TF(i)=F(i+1)-F(i); % Triangular Function Coefficients for f(t)
end

%%----------------------- Function Plotting -----------------------%%

j=0:0.01:T;
plot(j,sin(pi*j),'-.k','LineWidth',2) % plot of the exact function
hold on
plot(t,F,'-ok','LineWidth',2,'MarkerFaceColor','k')
ylim([0 1])

5. Program for approximating a function f(t) = t in hybrid


function domain and BPF domain. (Chap. 3, Fig. 3.6, p. 55)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T = input('Enter the total time period:\n')
h = T/m;
t = 0:h:T;

%%----------------- Function to be Approximated -----------------%%

syms x
f=x; % Function to be Approximated
F=subs(f,t) % Samples of function f(t) upto final time T

%%------------------- BPF Based Representation -------------------%%

C=zeros(1,m);
for k=1:m
C(k)=(m/T)*int(f,t(k),t(k+1)); % Calculating BPF Coefficients
end
Coeff=[C C(m)]; % BPF Coefficients of the function

%%------------- Hybrid Function Based Representation -------------%%

for i=1:m
F_SHF(i)=F(i); % Sample-and-Hold Function Coefficients for f(t)
end

for i=1:m
F_TF(i)=F(i+1)-F(i); % Triangular Function Coefficients for f(t)
end

F_m = F_SHF(m)+F_TF(m); % m-th coefficient of Subtraction


F = [F_SHF F_m]; % Final Samples for plotting

%%----------------------- Function Plotting ----------------------%%

stairs(t,Coeff,'-k','LineWidth',2)
hold on
plot(t,t,'--k','LineWidth',2) % plot of the exact function
hold on
plot(t,F,'-ok','LineWidth',2,'MarkerFaceColor','k')
ylim([0 1])

6. Program for comparing HFc and HFm based approach (Chap. 3,


Fig. 3.16, p. 63)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T1 = input('Enter the time at jump instant:\n')
T2 = input('Enter the time after jump instant:\n')
T=T1+T2; % Total time period
h = T/m;
m_jump = T1/h; % Number of sub-intervals up to jump instant
t = 0:h:T;

%%----------------- Function to be Approximated -----------------%%

syms t
f1 = t+0.2; % Function before jump instant
f2 = t+1.2; % Function after jump instant

%%------------------------ Exact Solution -----------------------%%

t1=0:0.001:T1;
t2=(T1+0.001):0.001:T;

f1t=subs(f1,t1);
f2t=subs(f2,t2);

te=[t1 t2];
fe=[f1t f2t];

%%------------ Hybrid Function Based Representation ------------%%

th1=0:h:(T1-h);
th2=T1:h:T;
th=[th1 th2];

F1=subs(f1,th1);
F2=subs(f2,th2);
cfsx=[F1 F2]; % total Sample-and-Hold function coefficients
for i=1:m
Cfsx(i)=cfsx(i); % First m number of SHF coefficients
end

for i=1:m
Cftx(i)=cfsx(i+1)-cfsx(i);
end
Cftx(m_jump)=0; % to be considered only in HFm based approximation
Cftx;

%%----------------------- Function Plotting -----------------------%%

tf=(T1-h):0.001:T1;
cf=[F1(m/T)*ones(1,length(tf)) 1.2];
Tf=[tf T1];

plot(te,fe,'k-','Linewidth',2) % plot of the exact function


hold on
plot(th,cfsx,'ko','MarkerFaceColor','k','MarkerSize',7)
hold on
plot(Tf,cf,'k-')
xlim([-0.001 2.001])

7. Program for comparing MISEs using HFc and HFm based approaches (Chap. 3, Table 3.2, p. 65; Fig. 3.17, p. 64)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T1 = input('Enter the time at jump instant:\n')
T2 = input('Enter the time after jump instant:\n')
T = T1+T2; % Total time period
h = T/m;
m_jump = T1/h; % Number of sub-intervals up to jump instant
t = 0:h:T;

%%------------------ Function to be Approximated -----------------%%

syms t
f1=t+0.2;
f2=t+1.2;

%%-------------- Hybrid Function Based Approximation -------------%%

th1=0:h:(T1-h);
th2=T1:h:T;
th=[th1 th2];

F1=subs(f1,th1);
F2=subs(f2,th2);
cfsx=[F1 F2];
Cfsx=cfsx(1:m)

for i=1:m
Cftx(i)=cfsx(i+1)-cfsx(i); % Triangular function coefficients
end

Cftx(m_jump)=0; % to be considered only in HFm based approximation

%%--------------------- Calculation of MISE ---------------------%%



mise=0;
for i=1:(T1/h)
ft=f1;
Fhf=(Cftx(i)/h)*(t-(i-1)*h)+Cfsx(i);
Fd=(ft-Fhf)^2;
mise1(i)=int(Fd,t,((i-1)*h),(i*h));
mise=mise1(i)+mise;
end
miseone=mise/T1;

mise=0;
for j=1:(T2/h)
ft=f2;
Fhf=(Cftx(j+(T1/h))/h)*((t-1)-(j-1)*h)+Cfsx(j+T1/h);
Fd=(ft-Fhf)^2;
mise2(j)=int(Fd,t,(T1+(j-1)*h),(T1+j*h));
mise=mise2(j)+mise;
end
misetwo=mise/T2;

MISE=double(miseone+misetwo)

8. Program for approximating a function using Legendre


Polynomial (Chap. 3, Fig. 3.20, p. 68)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T = input('Enter the total time interval:\n')
h = T/m;

%%----------------- Function to be Approximated -----------------%%

syms t
f=exp(t-1); % Function

%%------------------------ Exact Solution -----------------------%%

te=0:0.001:T;
ft=subs(f,te);
plot(te,ft,'k-','LineWidth',2)
hold on

%%------------ Legendre Polnomial Based Approximation ----------%%

P0=1; % Legendre polynomial of degree 0

P1=t; % Legendre polynomial of degree 1
P2=((3*t^2)-1)/2; % Legendre polynomial of degree 2
P3=((5*t^3)-(3*t))/2; % Legendre polynomial of degree 3
P4=((35*t^4)-(30*t^2)+3)/8; % Legendre polynomial of degree 4
P5=((63*t^5)-(70*t^3)+(15*t))/8; % Legendre polynomial of degree 5
P6=((231*t^6)-(315*t^4)+(105*t^2)-5)/16; % Legendre polynomial of degree 6

P=[P0 P1 P2 P3 P4 P5 P6];

Func=subs(P,(t-1));

for i=1:7
c=double(int(P(i)*exp(t),-1,1));
Coeff(i)=((2*(i-1)+1)/2)*c; % Legendre coefficient for i-th degree
end

F=sum(Coeff.*Func);

%%---------------------- Function Plotting ----------------------%%

th=0:h:T;
F_Legendre=double(subs(F,th));

plot(th,F_Legendre,'ok','MarkerFaceColor','k')

9. Program for calculating MISE of a function, approximated


using Legendre Polynomial (Chap. 3, Table 3.4, p. 70)

clc
clear all
format long
%%------------------ Total Time Period considered-----------------%%

T = input('Enter the total time interval:\n')


Np = input('Enter the number of polynomials to be used:\n')

%%----------------- Function to be Approximated -----------------%%

syms t
f=exp(t-1); % Function

%%------------ Legendre Polnomial Based Approximation ----------%%

P0=1; % Legendre polynomial of degree 0

P1=t; % Legendre polynomial of degree 1
P2=((3*t^2)-1)/2; % Legendre polynomial of degree 2
P3=((5*t^3)-(3*t))/2; % Legendre polynomial of degree 3
P4=((35*t^4)-(30*t^2)+3)/8; % Legendre polynomial of degree 4
P5=((63*t^5)-(70*t^3)+(15*t))/8; % Legendre polynomial of degree 5
P6=((231*t^6)-(315*t^4)+(105*t^2)-5)/16; % Legendre polynomial of degree 6

P=[P0 P1 P2 P3 P4 P5 P6];

Func=subs(P,(t-1));

for i=1:Np
c=double(int(P(i)*exp(t),-1,1));
Coeff(i)=((2*(i-1)+1)/2)*c; % Legendre coefficient for i-th degree
end
Coeff
F=sum(Coeff.*Func(1:i));
Func(1:i)
F

%%-------------------- Calculation of MISE --------------------%%

for p=1:Np
F_Legendre=sum(Coeff(1:p).*Func(1:p));
Fd=(f-F_Legendre)^2;
MISE=(1/T)*double(int(Fd,t,0,T));
end

MISE

10. Program for calculating MISE of a function, approximated


in hybrid function domain (Chap. 3, Table 3.4, p. 70)

clc
clear all
format long
%%------------ Number of Sub -intervals and Total Time -----------%%

m = input('Enter the number of sub -intervals chosen: \n')


T = input('Enter the total time period: \n')
h = T/m;
th = 0:h:T;

%%----------------- Function to be Approximated -----------------%%

syms t
f=exp(t-1);

%%--------------- Hybrid Function Based Approximation -----------%%

cfsx=subs(f,th);
Cfsx=cfsx(1:m); % Sample -and-Hold function coefficients

for i=1:m
Cftx(i)=cfsx(i+1) -cfsx(i); % Triangular function coefficients
end

%%--------------------- Calculation of MISE -------------------- -%%

for i=1:m
Fhf=(Cftx(i)/h)*(t -(i-1)*h)+Cfsx(i);
Fd=(f-Fhf)^2;
mise(i)=(1/T)*double(int(Fd,t,((i -1)*h),(i*h)));
end

MISE=sum(mise)

11. Program for integrating a function f(t) = sin(πt) in hybrid function domain (Chap. 4, Fig. 4.8, p. 100; Table 4.2, p. 89)

clc
clear all
format long

%%------------ Number of Sub -intervals and Total Time -----------%%

m = input('Enter the number of sub -intervals chosen: \n')


T = input('Enter the total time period: \n')
h = T/m;
th = 0:h:T;

%%------------------ Function to be Integrated ------------------%%

syms x
f=sin(pi*x); % Function to be Integrated
F=subs(f,th); % Samples of the Function f(t)
fi=int(f,0,x); % Function after Integration
Fi=subs(fi,th); % Samples of integrated function fi(t)

%%----- HF Based Representation of function to be Integrated -----%%

for i=1:m
F_SHF(i)=F(i); % Sample-and-Hold Function Coefficients for f(t)
end

for i=1:m
F_TF(i)=F(i+1)-F(i); % Triangular Function Coefficients for f(t)
end

%%----------- HF domain Integration Operational Matrices ----------%%

ps=zeros(m,m);

for i=1:m
for j=1:m
if j-i>0
Ps(i,j)=ps(i,j)+1;
else
Ps(i,j)=ps(i,j);
end
end
end

Pss=h*Ps; % SHF part after integrating SHF components

Pst=h*eye(m); % TF part after integrating SHF components

Pts=0.5*Pss; % SHF part after integrating TF components

Ptt=0.5*Pst; % TF part after integrating TF components



%%------- Integration using Integration Operational Matrices -------%%

Cs = (F_SHF*Pss)+(F_TF*Pts); % Sample-and-Hold Function Coefficients after integration in HF domain

Ct = (F_SHF*Pst)+(F_TF*Ptt); % Triangular Function Coefficients after integration in HF domain

Cs_m = Cs(m)+Ct(m); % m-th coefficient

Cs_plot=[Cs Cs_m]; % Samples for plotting the function after integration in HF domain

%%----------------------- Function Plotting -----------------------%%

plot(th,Fi,'r-','LineWidth',3)
hold on
plot(th,Cs_plot,'ok-','LineWidth',2,'MarkerFaceColor','k')
xlim([0 T])

12. Program for differentiating a function f(t) = 1 − exp(−t) in hybrid function domain (Chap. 4, Fig. 4.9, p. 105)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T = input('Enter the total time period: \n')
h = T/m;
th=0:h:T;

%%----------------- Function to be Differentiated ---------------%%

syms x
f=1-exp(-x); % Function to be Differentiated
fd=diff(f); % Function after Differentiation
F=subs(f,th); % Samples of the Function f(t)
Fd=subs(fd,th); % Samples of differentiated function fd(t)

%%----- HF Based Representation of Differentiated function -----%%

for i=1:m
F_SHF(i)=F(i); % Sample-and-Hold Function Coefficients for f(t)
end

for i=1:m
F_TF(i)=F(i+1) -F(i); % Triangular Function Coefficients for f(t)
end

%%-------------- HF domain Differentiation Matrices ------------%%



ds=(-1)*eye(m);

for i=1:m
for j=1:m
if (i-j)==1
Ds(i,j)=ds(i,j)+1;
else
Ds(i,j)=ds(i,j);
end
end
end
Ds(m,m)=(F(m+1)-F(m))/F(m);
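% with this choice of the last diagonal entry, the m-th SHF coefficient of the
% derivative reduces to the forward difference (F(m+1)-F(m))/h over the final sub-interval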

Ds=(1/h)*Ds; % SHF part of Differentiation matrix

dt=(-1)*eye(m);

for i=1:m
for j=1:m
if (i-j)==1
Dt(i,j)=dt(i,j)+1;
else
Dt(i,j)=dt(i,j);
end
end
end

F_(m+2)=subs(f,(T+h));
Dt(m,m)=((F_(m+2)-F(m+1))-(F(m+1)-F(m)))/(F(m+1)-F(m));

Dt=(1/h)*Dt; % TF part of Differentiation matrix

%%---------- Differentiation using Operational Matrices ----------%%

Cs = (F_SHF*Ds); % Sample-and-Hold Function Coefficients after differentiation in HF domain

Ct = (F_TF*Dt); % Triangular Function Coefficients after differentiation in HF domain

Cs_m=Cs(m)+Ct(m);

Cs_plot=[Cs Cs_m]; % Samples for plotting the function after differentiation in HF domain

%%----------------------- Function Plotting -----------------------%%

plot(th,F,'-^k','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(th,Fd,'-ok','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(th,Cs_plot,':>k','LineWidth',2)

13. Program for differentiating a function f(t) = sin(πt)/π in hybrid function domain (Chap. 4, Fig. 4.10, p. 106)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T = input('Enter the total time period:\n')
h = T/m;
th = 0:h:T;

%%----------------- Function to be Differentiated ---------------%%

syms x
f=sin(pi*x)/pi; % Function to be Differentiated
fd=diff(f); % Function after Differentiation
F=subs(f,th); % Samples of the Function f(t)
Fd=subs(fd,th); % Samples of differentiated function fd(t)

%%------ HF Based Representation of Differentiated function ------%%

for i=1:m
F_SHF(i)=F(i); % Sample-and-Hold Function Coefficients for f(t)
end

for i=1:m
F_TF(i)=F(i+1)-F(i); % Triangular Function Coefficients for f(t)
end

%%-------------- HF domain Differentiation Matrices --------------%%

ds=(-1)*eye(m);

for i=1:m
for j=1:m
if (i-j)==1
Ds(i,j)=ds(i,j)+1;
else
Ds(i,j)=ds(i,j);
end
end
end
Ds(m,m)=(F(m+1)-F(m))/F(m);

Ds=(1/h)*Ds; % SHF part of Differentiation matrix


dt=(-1)*eye(m);

for i=1:m
for j=1:m
if (i-j)==1
Dt(i,j)=dt(i,j)+1;
else
Dt(i,j)=dt(i,j);
end

end
end
F_(m+2)=subs(f,(T+h));
Dt(m,m)=((F_(m+2)-F(m+1))-(F(m+1)-F(m)))/(F(m+1)-F(m));

Dt=(1/h)*Dt; % TF part of Differentiation matrix

%%---------- Differentiation using Operational Matrices ----------%%

Cs=(F_SHF*Ds); % Sample-and-Hold Function Coefficients after differentiation in HF domain

Ct=(F_TF*Dt); % Triangular Function Coefficients after differentiation in HF domain

Cs_m=Cs(m)+Ct(m);

Cs_plot=[Cs Cs_m]; % Samples for plotting the function after differentiation in HF domain

%%----------------------- Function Plotting -----------------------%%

plot(th,F,'-^k','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(th,Fd,'-ok','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(th,Cs_plot,':>k','LineWidth',2)

z=zeros(length(th));
hold on
plot(th,z,'-k')

14. Program for calculating average of mod of percentage (AMP) error for nth repeated I-D operation of function f(t) = sin(πt) in hybrid function domain (Chap. 4, Fig. 4.12, p. 110)

function Per_Err=ID(n)

%%%%%%% Integration-Differentiation (I-D) operation %%%%%%%%


%%%%%%% For SINE Function
%Go to Command Window and type ID(required value of number of ID
%operations n) and ENTER

clc % ('clear all' is not used here, since it would erase the input argument n)

%%---------- Defining the Function for I-D operation -----------%%

syms t
ft=sin(pi*t);

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T = input('Enter the total time period:\n')
h = T/m;

t1 = 0:h:(T+n*h);
Cs = subs(ft,t1); % SHF coefficients

Cid=zeros(1,(n+1)); %Create space for coefficients of ID operation

Cid(1)=1;
a=n;
for j=1:n
c=a/factorial(j);
Cid(j+1)=c;
a=a*(n-j);
end
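% Cid(j+1) now holds the binomial coefficient nCj, so the loop below forms a
% (1/2^n)-weighted binomial average of (n+1) consecutive samples of f(t)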
Cid;

Cs_n=zeros(1,m);
for k=1:m
C=Cs(k:(n+k));
Cs_n(k)=(1/2^n)*sum(C.*Cid);
end

%%-------------------- Calculation of AMP Error ---------------------%%

Percentage_Error=(Cs(1:m)-Cs_n)*100./Cs(1:m);
Per_Err=sum(abs(Percentage_Error(2:m)))/(m-1) % AMP error

15. Program for plotting the effect of repeated I-D operation over a specific function f(t) = sin(πt) in hybrid function domain (Chap. 4, Fig. 4.15, p. 112)

clc
clear all

%%------------ Defining the Function for I-D operation ------------%%


syms t
ft=sin(pi*t);

%%------------- Number of Sub-intervals and Total Time ------------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

n = input('Enter the maximum number of ID operations required:\n');

%%--------------------- Repeated I-D Operation --------------------%%

for id=1:n
t1=0:h:(T+id*h);
Cs=subs(ft,t1); % SHF component including m-th component
Cs_MAT(1,:)=Cs(1:(m+1));

Cid=zeros(1,(id+1)); %Create space for coefficients of ID operation

Cid(1)=1;
a=id;
for j=1:id
c=a/factorial(j);
Cid(j+1)=c;
a=a*(id-j);
end

Cs_n=zeros(1,m);

for k=1:m
C=Cs(k:(id+k));
Cs_n(k)=(1/2^id)*sum(C.*Cid);
end

% For m-th SHF component

t2=T:h:(T+id*h);
Cm=Cs((m+1):(m+id+1));
Cs_n(k+1)=(1/2^id)*sum(Cm.*Cid);

Cs_MAT((id+1),:)=Cs_n(1:(m+1))
end

%%------------- For Plotting the Effect of I-D Operation -------------%%


tn=0:h:T;
z=zeros(length(tn));
plot(tn,Cs_MAT(1,:),'r-',tn,Cs_MAT(2,:),'b',tn,Cs_MAT(3,:),'g',tn,Cs_MAT(4,:),'k-',tn,Cs_MAT(11,:),'m-','LineWidth',2)
hold on
plot(tn,z,'k-')
ylim([-1.2 1.2])

16. Program for finding second order integration of function f(t) = t, using HF domain one-shot integration operational matrices (Chap. 5, Example 5.1, p. 130)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n')


T = input('Enter the total time period:\n')
h = T/m;
th = 0:h:T;

%%------------------ Function to be Integrated ------------------%%

syms x
f=x; % Function to be Integrated
F=subs(f,th); % Samples of the Function f(t)

%%------------------------ Exact Solution -----------------------%%

fi=int(int(f));
Fi=subs(fi,th)

for j=1:m
Fi_SHF(j)=Fi(j); % Sample-and-Hold function part (Equation 5.28)
end

for j=1:m
Fi_TF(j)=Fi(j+1)-Fi(j); % Triangular Function part (Equation 5.28)
end

%%----- HF Based Representation of function to be Integrated -----%%

for i=1:m
F_SHF(i)=F(i); % Sample-and-Hold Function Coefficients for f(t)
end

for i=1:m
F_TF(i)=F(i+1)-F(i); % Triangular Function Coefficients for f(t)
end

%%------- Using One-Shot Integration Operational Matrices --------%%


%%--------------- Formation of P2SS matrices ----------------%%
n=2; %For Second order Integration

%%%%Formation of P2SS matrices %%%%%%


p=zeros(m,m);

for i=1:m
for j=1:m
if j-i>0
P2ss(i,j)=(j-i)^n-(j-i-1)^n;
else

P2ss(i,j)=p(i,j);
end
end
end
P2SS=(h^n/factorial(n))*P2ss;

%%%%Formation of P2ST matrices %%%%%%

p=eye(m);

for i=1:m
for j=1:m
if j-i>0
P2st(i,j)=((j-i+1)^n-(j-i)^n)-((j-i)^n-(j-i-1)^n);
else
P2st(i,j)=p(i,j);
end
end
end
P2ST=(h^n/factorial(n))*P2st;

%%%%Formation of P2TS matrices %%%%%%

p=zeros(m,m);
for i=1:m
for j=1:m
if j-i>0
P2ts(i,j)=(j-i)^(n+1)-(j-i-1)^(n+1)-(n+1)*(j-i-1)^n;
else
P2ts(i,j)=p(i,j);
end
end
end
P2TS=(h^n/factorial(n+1))*P2ts;

%%%%Formation of P2TT matrices %%%%%%

p=eye(m);
for i=1:m
for j=1:m
if j-i>0
P2tt(i,j)=((j-i+1)^(n+1)-(j-i)^(n+1))-((j-i)^(n+1)-(j-i-1)^(n+1))-(n+1)*((j-i)^n-(j-i-1)^n);
else
P2tt(i,j)=p(i,j);
end
end
end
P2TT=(h^n/factorial(n+1))*P2tt;

CS_oneshot=(F_SHF*P2SS+F_TF*P2TS) % SHF Coefficients after double integration in HF domain

CT_oneshot=(F_SHF*P2ST+F_TF*P2TT); % TF Coefficients after double integration in HF domain

17. Program for finding deviation indices δR (using repeated integration) and δO (using one-shot integration operational matrices) for second order integration of a typical function f(t) = exp(−t) (Chap. 5, Table 5.1, p. 131)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;
th = 0:h:T;

%%---------------- Function under Consideration -----------------%%

syms x
f=exp(-x);

F_samples=subs(f,th); % Samples of the Function f(t)

%%----- HF Based Representation of function to be Integrated -----%%

for i=1:m
Cfsx(i)=F_samples(i); % SHF Coefficients for f(t)
end

for i=1:m

Cftx(i)=F_samples(i+1)-F_samples(i); % TF Coefficients for f(t)


end

%%------- Exact Samples after double integration of function ------%%

FT=x+exp(-x)-1;

CFsx=subs(FT,th);

CFsx_Exact=CFsx; % (m+1) number of exact samples

for i=1:m
CFtx(i)=CFsx(i+1)-CFsx(i); % TF coefficients
end
CFsx=CFsx(1:m) % SHF coefficients

%%----------- HF domain Integration Operational Matrices ----------%%

p1s=zeros(m,m);

for i=1:m
for j=1:m
if j-i>0
P1s(i,j)=p1s(i,j)+1;
else
P1s(i,j)=p1s(i,j);

end
end
end

P1SS=h*P1s; % SHF part after integrating SHF components

P1ST=h*eye(m); % TF part after integrating SHF components

P1TS=0.5*P1SS; % SHF part after integrating TF components

P1TT=0.5*P1ST; % TF part after integrating TF components

%%------- Integration of Function through Repeated use of -------%%


%%------- First Order Integration Operational Matrices --------%%

n=2; %For Second order Integration

Cs=Cfsx;
Ct=Cftx;
for j=1:n
Cs1=(Cs*P1SS)+(Ct*P1TS);
Ct1=(Cs*P1ST)+(Ct*P1TT);
Cs=Cs1;
Ct=Ct1;
end

Cs_m1=Cs(m)+Ct(m); % (m+1)-th sample


CS_repeated=[Cs Cs_m1]; % SHF samples after Second order Integration

%%------- Using One-Shot Integration Operational Matrices -------%%


%%--------------- Formation of P2SS matrices ----------------%%

n=2; %For Second order Integration

p=zeros(m,m);
for i=1:m
for j=1:m
if j-i>0
P2ss(i,j)=(j-i)^n-(j-i-1)^n;
else
P2ss(i,j)=p(i,j);
end
end
end
P2SS=(h^n/factorial(n))*P2ss;

%%%%Formation of P2ST matrices %%%%%%

p=eye(m);
for i=1:m
for j=1:m
if j-i>0
P2st(i,j)=((j-i+1)^n-(j-i)^n)-((j-i)^n-(j-i-1)^n);

else
P2st(i,j)=p(i,j);
end
end
end
P2ST=(h^n/factorial(n))*P2st;

%%%%Formation of P2TS matrices %%%%%%

p=zeros(m,m);
for i=1:m
for j=1:m
if j-i>0
P2ts(i,j)=(j-i)^(n+1)-(j-i-1)^(n+1)-(n+1)*(j-i-1)^n;
else
P2ts(i,j)=p(i,j);
end
end
end
P2TS=(h^n/factorial(n+1))*P2ts;

%%%%Formation of P2TT matrices %%%%%%

p=eye(m);
for i=1:m
for j=1:m
if j-i>0
P2tt(i,j)=((j-i+1)^(n+1)-(j-i)^(n+1))-((j-i)^(n+1)-(j-i-1)^(n+1))-(n+1)*((j-i)^n-(j-i-1)^n);
else
P2tt(i,j)=p(i,j);
end
end
end
P2TT=(h^n/factorial(n+1))*P2tt;

CS=(Cfsx*P2SS+Cftx*P2TS); % SHF Coefficients after double integration in HF domain

CT=(Cfsx*P2ST+Cftx*P2TT); % TF Coefficients after double integration in HF domain

CS_m1=CS(m)+CT(m); % (m+1)th sample


CS_oneshot=[CS CS_m1]; % SHF samples after Second order Integration

diff_repeated=CFsx_Exact-CS_repeated;
% differences between exact samples and samples obtained
% via repeated integration

diff_OneShot=CFsx_Exact-CS_oneshot;
% differences between exact samples and samples obtained
% using oneshot integration matrices

Deviation_Index_repeated=sum(abs(diff_repeated))/(m+1)

Deviation_Index_OneShot=sum(abs(diff_OneShot))/(m+1)

18. Program for obtaining the recursive solution of first order differential equation of Examples 6.1 and 6.2 (Chap. 6, Figs. 6.1, 6.2 and 6.3; Table 6.1, p. 143, 144, 148–150)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;
th = 0:h:T;

%%-------------- Solution of Differential Equation -------------%%

sol=dsolve('Dy + 0.5*y =1.25','y(0)=0','t')


t1=0:0.01:T;

F=subs(sol,t1);

plot(t1,F,'k-','Linewidth',2)

%%---------------- Direct Expansion of Solution ----------------%%

C=subs(sol,th);

Cs=C(1:m) % Sample-and-Hold function coefficients

for i=1:m
Ct(i)=C(i+1)-C(i); % Triangular function coefficients
end

%%--------- Solution in HF domain, using equation (6.7) --------%%

c1(1)=0; % given initial value

a=0.5; b=1.25; % constants as per equations (6.1) and (6.8)

for i=1:m
c1(i+1)=(b*h)+(1-a*h)*c1(i); % recursive solution
end

Cs_HF=c1(1:m); % SHF coefficients obtained using equation (6.7)

for i=1:m
Ct_HF(i)=c1(i+1)-c1(i); %TF coefficients obtained using equation(6.7)
end

hold on
plot(th,c1,'ko','MarkerEdgeColor','k','MarkerFaceColor','k','MarkerSize',7)

ylim([-0.2 1.2])

%%--------- Solution in HF domain, using equation (6.23) --------%%

c2(1)=0; % given initial value

f=2/(2+a*h);
for i=1:m
c2(i+1)=(b*f*h)+(1-a*f*h)*c2(i); % recursive solution
end

Cs_HF=c2(1:m) % Sample-and-Hold function coefficients

for i=1:m
Ct_HF(i)=c2(i+1)-c2(i); % Triangular function coefficients
end

hold on
plot(th,c2,'ko','MarkerEdgeColor','k','MarkerFaceColor','k','MarkerSize',7)

ylim([0 1.2])

%%----------- Calculation of Sample wise Percentage Error -----------%%

Percentage_Error=(Cs-Cs_HF)./Cs*100

19. Program for obtaining the solution of first order differential equation of Example 6.2 using Runge-Kutta method (Chap. 6, Table 6.2, p. 151)

Function File

function dy=v(t,y)   % save as v.m; returns dy/dt for the equation dy/dt = 1.25 - 0.5*y
dy=1.25-0.5*y;

M-file for obtaining solution via Runge-Kutta method

clc
clear all
format long

h = 1/12; % step size


t = 0; % initial time
y = 0; % initial condition

for i=1:12
k1=h*v(t,y);
k2=h*v(t+h/2,y+k1/2);
k3=h*v(t+h/2,y+k2/2);
k4=h*v(t+h,y+k3);
y=y+(k1+2*k2+2*k3+k4)/6;
t=t+h;
y
end

20. Program for obtaining the solution of second order differential equation of Examples 6.3 and 6.4, in Hybrid Function domain (Chap. 6, Tables 6.3 and 6.4; Figs. 6.5 and 6.6, pp. 153-158)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;
th = 0:h:T;

%%-------------- Solution of Differential Equation -------------%%

% sol=dsolve('D2y + 3*Dy + 2*y = 2','Dy(0)=-1','y(0)=1','t')


sol=dsolve('D2y + 100*y = 0','Dy(0)=0','y(0)=2','t')

Exact_samples=subs(sol,th)

tt=0:0.01:T;

F=subs(sol,tt);

plot(tt,F,'k-','Linewidth',2)

%%---------------- Direct Expansion of Solution ----------------%%

C=subs(sol,th);

Cs=C(1:m); % SHF components

for i=1:m
Ct(i)=C(i+1)-C(i); % TF components
end

%---- via repeated use of first order integration matrices ----%

% a=3; b=2; d=2;


% k1=1; k2=-1;

a=0; b=100; d=0;


k1=2; k2=0;

r2=(a*k1)+k2;
r3=k1;
U=ones(1,m);

I=eye(m); % Identity matrix

%-------- Operational matrix for Integration in BPF domain, P --------%

P=zeros(m,m);
for i=1:m
P(i,i)=h/2;
for j=1:(i-1)
P(j,i)=h;
end
end
P;

%%----------- HF domain Integration Operational Matrices ----------%%

p1s=zeros(m,m);

for i=1:m
for j=1:m
if j-i>0
P1s(i,j)=p1s(i,j)+1;
else
P1s(i,j)=p1s(i,j);
end
end
end

P1SS=h*P1s; % SHF part after integrating SHF components

P1ST=h*eye(m); % TF part after integrating SHF components

P1TS=0.5*P1SS; % SHF part after integrating TF components

P1TT=0.5*P1ST; % TF part after integrating TF components

L=(2*d*P)+(2*r2*I);
Q=-((a*I)+(b*P));

Cs_repeated=(2/h)*U*(L+(2*r3*Q))*inv((2/h)*I-(4/h)*P1TS*Q-Q)*P1TS+(r3*U)

Ct_repeated=U*(L+(2*r3*Q))*inv((2/h)*I-(4/h)*P1TS*Q-Q);

Cs_R=[Cs_repeated (Cs_repeated(m)+Ct_repeated(m))]

%%------- Using One-Shot Integration Operational Matrices --------%%


%%------------ Formation of P2SS matrix -------------%%

n=2; %For Second order Integration


%%----------------Formation of P2SS matrix ---------------%%
p=zeros(m,m);
for i=1:m
for j=1:m
if j-i>0
P2ss(i,j)=(j-i)^n-(j-i-1)^n;

else
P2ss(i,j)=p(i,j);
end
end
end

P2SS=(h^2/factorial(2))*P2ss;

%%----------------Formation of P2ST matrix ---------------%%

p=eye(m);

for i=1:m
for j=1:m
if j-i>0
P2st(i,j)=((j-i+1)^n-(j-i)^n)-((j-i)^n-(j-i-1)^n);
else
P2st(i,j)=p(i,j);
end
end
end

P2ST=(h^2/factorial(2))*P2st;

%%----------------Formation of P2TS matrix ---------------%%

p=zeros(m,m);

for i=1:m
for j=1:m
if j-i>0
P2ts(i,j)=(j-i)^(n+1)-(j-i-1)^(n+1)-(n+1)*(j-i-1)^n;
else
P2ts(i,j)=p(i,j);
end
end
end

P2TS=(h^2/factorial(3))*P2ts;

%%----------------Formation of P2TT matrix ---------------%%

p=eye(m);
for i=1:m
for j=1:m
if j-i>0
P2tt(i,j)=((j-i+1)^(n+1)-(j-i)^(n+1))-((j-i)^(n+1)-(j-i-1)^(n+1))-(n+1)*((j-i)^n-(j-i-1)^n);
else
P2tt(i,j)=p(i,j);
end
end
end

P2TT=(h^n/factorial(n+1))*P2tt;

X=I+(a*P1SS)+(b*P2SS);
Y=0.5*((a*P1SS)+(b*P2SS));

W=(a*P1ST)+(b*P2ST);
Z=I+0.5*((a*P1ST)+(b*P2ST));

M1=(Y*inv(X))-(Z*inv(W));
M2=U*((d*P2SS)+(r2*P1SS)+(r3*I))*inv(X)-U*((d*P2ST)+(r2*P1ST))*inv(W);

Ct_oneshot=M2*inv(M1);

M3=M2*inv(M1)*Z*inv(W);
M4=U*((d*P2ST)+(r2*P1ST))*inv(W);

Cs_oneshot=M4-M3

Cs_O=[Cs_oneshot (Cs_oneshot(m)+Ct_oneshot(m))] % For plotting the samples

Percentage_Error_repeated=(C-Cs_R)./C*100

Percentage_Error_OneShot=(C-Cs_O)./C*100

%%------ Plotting -----%%

hold on
plot(th,Cs_R,'k<','MarkerEdgeColor','k','MarkerFaceColor','k','MarkerSize',7)

figure
plot(tt,F,'k-','Linewidth',2)
hold on
plot(th,Cs_O,'ko','MarkerEdgeColor','k','MarkerFaceColor','k','MarkerSize',7)

21. Program for obtaining the convolution of two time functions of Example 7.1, in Hybrid Function domain (Chap. 7, Table 7.1; Fig. 7.11, p. 181, 182 and 183)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;
th = 0:h:(T+h);

%%-------------- Exact Solution -------------%%

R=ones(1,(m+2));
R_SH=R(1:(m+1)); % SHF coefficients of r(t)

for i=1:(m+1)
R_T(i)=R(i+1)-R(i); % TF coefficients of r(t)
end

syms t
gt=exp(-0.5*t)*(2*cos(2*t)-0.5*sin(2*t));
G=subs(gt,th);
G_SH=G(1:(m+1)); % Sample-and-Hold function coefficients of g(t)

for i=1:(m+1)
G_T(i)=G(i+1)-G(i); % TF coefficients of g(t)
end

yt=exp(-0.5*t)*sin(2*t);

Y_direct=subs(yt,th(1:(m+1)))
plot(th(1:(m+1)),Y_direct,'k-','LineWidth',2)

%%-------------- Convolution of a time function -------------%%

%%----------- Formation of R1 matrix ----------%%

R1=zeros((m+1),(m+1));
for i=1:(m+1)
for j=1:(m+1)
if j-i>0
R1(i,j)=2*R_SH(j)+R_SH(j-1);
end
end
end

G1=G_SH; % to be pre-multiplied with R1


%%----------- Formation of R2 matrix ----------%%

R2=zeros((m+1),(m+1));
for i=1:(m+1)
for j=1:(m+1)
if j-i>0
R2(i,j)=R_SH(j)+2*R_SH(j-1);
end

end
end

G2=G(2:(m+2)); % to be pre-multiplied with R2

Y_HF=(h/6)*((G1*R1)+(G2*R2))

hold on
plot(th(1:(m+1)),Y_HF,'ko','LineWidth',2)

22. Program for analyzing a non-homogeneous system of Example 8.1, Hybrid Function domain (Chap. 8, Table 8.1; Fig. 8.1, pp. 197-198)

clc
clear all
format long

%%----------------- For step input --------------------%%

C_SH_input=1; % SHF part of input represented in HF domain

C_Triangular_input=0; % TF part of input represented in HF domain

C_input=C_SH_input+0.5*C_Triangular_input % Samples in HF domain

%%---------------------- System -----------------------%%

A=[0 1; -2 -3]; % System matrix


B=[0;1]; % Input matrix
x0=[0;0.5]; % Initial condition

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

%%----------------- Exact Solution of States -------------------%%

syms s
Us=laplace(sym(C_input),s);
I=eye(length(A)); % Identity matrix
x=inv(s*I-A)*(x0+B*Us);
xt=ilaplace(x)
t=0:0.01:T;
x_time=subs(xt,t);
xt1=x_time(1,:); % Exact solution of state x1
xt2=x_time(2,:); % Exact solution of state x2

%%----------- Solution in Hybrid Function domain ------------%%

xi=x0; % Initial values of the states

for i=1:m
Cs(:,i)=inv((2/h)*I-A)*[((2/h)*I+A)*xi+2*B*C_input];
xi=Cs(:,i);
end
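% Cs(:,i) holds the HF domain samples of the states at t = i*h, obtained recursively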

HF_coefficients=[x0 Cs]

HF_State1=HF_coefficients(1,:);
HF_State2=HF_coefficients(2,:);

th=0:h:T;

x_t=subs(xt,th);

x1_t=x_t(1,:) % Direct expansion of state x1 in HF domain


x2_t=x_t(2,:) % Direct expansion of state x2 in HF domain
plot(t,xt1,'k-',th,HF_State1,'ko','MarkerFaceColor','k','Linewidth',2)
hold on
plot(t,xt2,'k-',th,HF_State2,'ko','MarkerFaceColor','k','Linewidth',2)
xlabel('Time (in Sec)')
ylabel('States X_1, X_2')


23. Program for analyzing output of a non-homogeneous system of Example 8.2, Hybrid Function domain (Chap. 8, Table 8.2; Fig. 8.3, p. 201 and 201)

clc
clear all
format long

%%----------------- For step input --------------------%%

C_SH_input=1; % SHF part of input represented in HF domain

C_Triangular_input=0; % TF part of input represented in HF domain

C_input=C_SH_input+0.5*C_Triangular_input % Samples in HF domain

%%---------------------- System -----------------------%%

A=[0 1; -2 -3]; % System matrix


B=[0;1]; % Input matrix
C=[1 0]; % Output matrix
D=0; % Direct transmission matrix

x0=[0;0.5]; % Initial condition

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

%%----------------- Exact Solution of States -------------------%%

syms s
Us=laplace(sym(C_input),s);

I=eye(length(A)); % Identity matrix

x=inv(s*I-A)*(x0+B*Us);
xt=ilaplace(x)
t=0:0.01:T;
x_time=subs(xt,t);
xt1=x_time(1,:); % Exact solution of state x1
xt2=x_time(2,:); % Exact solution of state x2
yt=C*x_time; % Exact solution of output y

%%----------- Solution in Hybrid Function domain ------------%%

xi=x0; % Initial values of the states

for i=1:m
Cs(:,i)=inv((2/h)*I-A)*[((2/h)*I+A)*xi+2*B*C_input];
xi=Cs(:,i);
end

HF_coefficients=[x0 Cs]

HF_State1=HF_coefficients(1,:);
HF_State2=HF_coefficients(2,:);

%%----------- Output in Hybrid Function domain ------------%%

th=0:h:T;

Xt=subs(xt,th);
Yt=C*Xt
y_h=C*[HF_State1; HF_State2]+D*C_SH_input*ones(1,m+1)

plot(t,yt,'k-','Linewidth',2)
hold on
plot(th,y_h,'ko','MarkerFaceColor','k')

24. Program for analyzing a homogeneous system of Example 8.3, Hybrid Function domain (Chap. 8, Table 8.3; Fig. 8.4, pp. 202-203)

clc
clear all
format long

%%---------------------- System -----------------------%%

A=[0 1; -2 -3]; % System matrix


B=[0;1]; % Input matrix
C=[1 0]; % Output matrix
D=0; % Direct transmission matrix

x0=[0;1]; % Initial condition

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

%%----------------- Exact Solution of States ------------------%%

syms s
I=eye(length(A)); % Identity matrix

x=inv(s*I-A)*x0;
xt=ilaplace(x)
t=0:0.01:T;
x_time=subs(xt,t);
xt1=x_time(1,:); % Exact solution of state x1
xt2=x_time(2,:); % Exact solution of state x2

%%----------- Solution in Hybrid Function domain ------------%%

xi=x0; % Initial values of the states

for i=1:m
Cs(:,i)=inv((2/h)*I-A)*((2/h)*I+A)*xi;
xi=Cs(:,i);
end

HF_coefficients=[x0 Cs]
HF_State1=HF_coefficients(1,:);
HF_State2=HF_coefficients(2,:);

th=0:h:T;

x_t=subs(xt,th);

x1_t=x_t(1,:) % Direct expansion of state x1 in HF domain


x2_t=x_t(2,:) % Direct expansion of state x2 in HF domain

plot(t,xt1,'k-',th,HF_State1,'ko','MarkerFaceColor','k','Linewidth',2)
hold on
plot(t,xt2,'k-',th,HF_State2,'ko','MarkerFaceColor','k','Linewidth',2)

25. Program for analyzing a non-homogeneous system with jump discontinuity at input of Example 8.8, both in HFc and HFm based approaches (Chap. 8, Table 8.16; Fig. 8.16, p. 215, 218, 216)

clc
clear all
format long

%%---------------------- System -----------------------%%

A=[0 1; -2 -3]; % System matrix


B=[0;1]; % Input matrix

x0=[0;0.5]; % Initial condition

%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

a=2*h; % Instant of jump in input

%%-------------------- Exact Plot of States ---------------------%%

t1=0:0.001:a; % Interval before occurring jump


t2=(a+0.001):0.001:T; % Interval after occurring jump
tt=[t1 t2]; % Total interval

syms t

x1a=0.5-0.5*exp(-t);
X1a=subs(x1a,t1); % State x1 before jump

x1b=1-(0.5*exp(-t))-exp(-(t-a))+(0.5*exp(-2*(t-a)));
X1b=subs(x1b,t2); % State x1 after jump
X1=[X1a X1b];

x2a=0.5*exp(-t);
X2a=subs(x2a,t1); % State x2 before jump

x2b=(0.5*exp(-t))+exp(-(t-a))-exp(-2*(t-a));
X2b=subs(x2b,t2); % State x2 after jump

X2=[X2a X2b];

plot(tt,X1,'k-','LineWidth',2)
hold on
plot(tt,X2,'k--','LineWidth',2)

%%----------- Solution in Hybrid Function domain ------------%%

th1=0:h:(a-h);
th2=a:h:T;
th=[th1 th2];

% Approximation of input in Hybrid Function domain (conventional)

U_SH=ones(1,length(th))+[zeros(1,length(th1)) ones(1,length(th2))];
% SHF coefficients

U_TF=zeros(1,m);

for k=1:m
U_TF(k)=U_SH(k+1)-U_SH(k);
end

U_TF; % Triangular function coefficients

U=U_SH(1:m)+0.5*U_TF % Input samples

I=eye(length(A));

xi=x0;

for i=1:m
Cs(:,i)=inv((2/h)*I-A)*[((2/h)*I+A)*xi+2*B*U(i)];
xi=Cs(:,i);
end

HF_coefficients_HFc=[x0 Cs];

HF_State1_HFc=HF_coefficients_HFc(1,:);
HF_State2_HFc=HF_coefficients_HFc(2,:);

plot(tt,X1,'k-',th,HF_State1_HFc,'ko','LineWidth',2)
hold on
plot(tt,X2,'k-',th,HF_State2_HFc,'ko','LineWidth',2)
hold on

% Approximation of input in Hybrid Function domain (modified)

U_SH=ones(1,length(th))+[zeros(1,length(th1)) ones(1,length(th2))];
% SHF coefficients

U_TF=zeros(1,m); % Triangular function coefficients

U=U_SH(1:m)+0.5*U_TF % Input samples

I=eye(length(A));

xi=x0;

for i=1:m
Cs(:,i)=inv((2/h)*I-A)*[((2/h)*I+A)*xi+2*B*U(i)];
xi=Cs(:,i);
end

HF_coefficients_HFm=[x0 Cs];

HF_State1_HFm=HF_coefficients_HFm(1,:);
HF_State2_HFm=HF_coefficients_HFm(2,:);

plot(tt,X1,'k-',th,HF_State1_HFm,'ko','MarkerFaceColor','k','LineWidth',2)
hold on
plot(tt,X2,'k-',th,HF_State2_HFm,'ko','MarkerFaceColor','k','LineWidth',2)
xlabel('Time (in Sec)')
ylabel('States X_1, X_2')

26. Program for analyzing a time-varying non-homogeneous system of Example 9.1 (Chap. 9, Table 9.1; Fig. 9.1, p. 230, 231)

clc
clear all
format long

x0=input('Enter the initial values:\n') % Initial values of the states

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

B=input('Enter the input vector:\n') % Input matrix


%%-------------------- Exact Plot of States ---------------------%%

t=0:0.01:T;

syms t1

X1=1+t1;
xt1=subs(X1,t); % Exact values of state x1

X2=1+(t1^2/2)+(t1^3/3);
xt2=subs(X2,t); % Exact values of state x2

%%----------- Solution in Hybrid Function domain ------------%%

tt=0:h:T;

syms t2

A=[0 0;t2 0]; % System matrix

s=size(A,1);
x=zeros(s,(m+1));

x(:,1)=x0; % States x1 and x2 start with the initial values

I=eye(s); % Identity matrix of dimension s

% Solution of States in HF domain


for k=1:m;
for e=1:s;
for f=1:s;
a(e,f)=subs(A(e,f),tt(k+1));
end
end
b=(2/h)*I-a;

for e1=1:s;
for f1=1:s;
a1(e1,f1)=subs(A(e1,f1),tt(k));
end
end
b1=(2/h)*I+a1;

x(:,(k+1))=inv(b)*((b1)*x(:,k)+2*B);
end

x1=x(1,:);
x2=x(2,:);

plot(tt,x1,'ko','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(t,xt1,'k-','LineWidth',2)
hold on
plot(tt,x2,'ko','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(t,xt2,'k-','LineWidth',2)

xlabel('Time t(s)')
ylim([0 2.5])

27. Program for analyzing a time-varying homogeneous system of Example 9.5 (Chap. 9, Table 9.5; Fig. 9.6, pp. 236-238)

clc
clear all
format long

x0=input('Enter the initial values:\n') % Initial values of the states

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

B=input('Enter the input vector:\n') % Input matrix

%%-------------------- Exact Plot of States ---------------------%%

t=0:0.01:T;

syms tt

Xt1=(cos(1-cos(tt))+2*sin(1-cos(tt)))*exp(sin(tt));
xt1=subs(Xt1,t); % Exact values of state x1

Xt2=(-sin(1-cos(tt))+2*cos(1-cos(tt)))*exp(sin(tt));
xt2=subs(Xt2,t); % Exact values of state x2

%%----------- Solution in Hybrid Function domain ------------%%

t1=0:h:T;

syms t2

A=[cos(t2) sin(t2);-sin(t2) cos(t2)]; % System matrix



s=size(A,1);
x=zeros(s,(m+1));

x(:,1)=x0; % States x1 and x2 start with the initial values

I=eye(s); % Identity matrix of dimension s


% Solution of States in HF domain
for k=1:m;
for e=1:s;
for f=1:s;
a(e,f)=subs(A(e,f),t1(k+1));
end
end
b=(2/h)*I-a;

for e1=1:s;
for f1=1:s;
a1(e1,f1)=subs(A(e1,f1),t1(k));
end
end

b1=(2/h)*I+a1;
x(:,(k+1))=inv(b)*((b1)*x(:,k)+2*B);
end

x1=x(1,:);
x2=x(2,:);

plot(t1,x1,'ko','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(t,xt1,'k-','LineWidth',2)
hold on
plot(t1,x2,'ko','LineWidth',2,'MarkerFaceColor','k')
hold on
plot(t,xt2,'k-','LineWidth',2)

xlabel('Time t(s)')
ylim([0 4.5])

28. Program for analyzing a time-delay non-homogeneous system of Example 10.5 (Chap. 10, Table 10.1; Fig. 10.6, pp. 260-262)

clc
clear all
format long

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

d1=m/2; % Number of intervals up to delay time


%%----------- Information regarding the system ------------%%

A=input('The System Matrix without delay:\n');

A1=input('The System Matrix with delay:\n');

B=input('The Input Matrix without delay:\n');

%%-------------------- Exact Plot of States ---------------------%%

syms t

t1=0:.01:(1-0.01);
f1=1-(1.1*t)+(0.525*t^2); % Solution for time 0 s to 1 s
f1t=subs(f1,t1);

t2=1:0.01:T;
f2=-0.25+(1.575*t)-(1.075*t^2)+(0.175*t^3); % Solution for 1 s to 2 s
f2t=subs(f2,t2);

te=[t1 t2];
fe=[f1t f2t];
plot(te,fe,'k-','Linewidth',2)
hold on

%%----------- Solution in Hybrid Function domain ------------%%

I=eye(length(A1)); %Identity matrix

x=[1 zeros(1,m)];
xi=ones(1,m); % Initial value of state in HF domain

t1=0:h:(1-h);
t2=1:h:T;

%%----- Input represented in HF domain ------%%

u=zeros(1,(m+1));
u1=-2.1+(1.05*t); % Solution for time 0 s to 1 s
u1t=subs(u1,t1);

u2=-1.05;
u2t=u2*ones(1,length(t2)); % Solution for time 1 s to 2 s

U=[u1t u2t];

Csu=U(1:m); % SHF coefficients of input

for j=1:m
Ctu(j)=U(j+1)-U(j); % TF coefficients of input
end
Cu=Csu+(0.5*Ctu)
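% Cu(i) is the SHF coefficient plus half the TF coefficient, i.e. the average of
% two consecutive input samples, used as the input term in the recursion below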

for i=1:d1
if (i+d1)<=m
x(i+1)=inv((2/h)*I-A)*(((2/h)+A)*x(i)+(2*A1*xi(i+d1))+2*B*Cu(i));
else
x(i+1)=inv((2/h)*I-A)*(((2/h)+A)*x(i)+2*B*Cu(i));
end
end

for i=(d1+1):m
if (i+d1)<=m
x(i+1)=inv((2/h)*I-A)*(((2/h)+A)*x(i)+(A1*x(i-d1))+(A1*x(i+1-d1))+(2*A1*xi(i+d1))+2*B*Cu(i));
else
x(i+1)=inv((2/h)*I-A)*(((2/h)+A)*x(i)+(A1*x(i-d1))+(A1*x(i+1-d1))+2*B*Cu(i));
end
end

th=0:h:T;
plot(th,x,'ko','MarkerFaceColor','k','MarkerSize',8)
hold on
plot(te,fe,'k-','Linewidth',2)

29. Program for analyzing a time invariant open loop system of Example 11.2 (Chap. 11, Table 11.3 and 11.4; Figs. 11.6 and 11.7, pp. 274-276)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m=input('Enter the number of sub-intervals chosen:\n');


T=input('Enter the total time period:\n');
h=T/m;
th=0:h:(T+h);

%%--------------------- Exact Solution ----------------------%%


R=ones(1,(m+2));
R_SH=R(1:(m+1)); % SHF coefficients of r(t)

for i=1:(m+1)
R_T(i)=R(i+1)-R(i); % TF coefficients of r(t)
end

syms t
gt=2*exp(-2*t)*(cos(2*t)-sin(2*t));
G=subs(gt,th);
G_SH=G(1:(m+1)); % SHF coefficients of g(t)
for i=1:(m+1)
G_T(i)=G(i+1)-G(i); % TF coefficients of g(t)
end

yt=exp(-2*t)*sin(2*t);
Y=subs(yt,th(1:(m+1)))

te=0:0.01:T;
Ye=subs(yt,te);

plot(te,Ye,'k-','LineWidth',2)
hold on

%%-------------- Convolution of a time function -------------%%

%%----------- Formation of R1 matrix ----------%%

R1=zeros((m+1),(m+1));
for i=1:(m+1)
for j=1:(m+1)
if j-i>0
R1(i,j)=2*R_SH(j)+R_SH(j-1);
end
end
end

G1=G_SH; % to be pre-multiplied with R1

%%----------- Formation of R2 matrix ----------%%

R2=zeros((m+1),(m+1));
for i=1:(m+1)
for j=1:(m+1)
if j-i>0
R2(i,j)=R_SH(j)+2*R_SH(j-1);
end
end
end

G2=G(2:(m+2)); % to be pre-multiplied with R2

Y_HF=(h/6)*((G1*R1)+(G2*R2))

plot(th(1:(m+1)),Y_HF,'ko','MarkerFaceColor','k','LineWidth',2)
Percentage_Error=(Y-Y_HF)./Y*100

AMP_Error=sum((abs(Percentage_Error(2:(m+1)))))/m

30. Program for analyzing a time invariant closed loop system of Example 11.3 (Chap. 11, Table 11.5; Figs. 11.10, 11.11, and 11.12, pp. 283-286)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m=input('Enter the number of sub-intervals chosen:\n');


T=input('Enter the total time period:\n');
hh=T/m;
th=0:hh:T;

%%------------------ HF domain representation of ----------------%%


%%------------- input, output, plant and feedback signal ------------%%

r=ones(1,(m+1)); %% Input

syms t
g_t=2*exp(-4*t); % Plant Impulse Response
g=subs(g_t,th);

h=4*ones(1,length(th)); % Feedback Gain

yt=exp(-2*t)*sin(2*t); % Output
Y=subs(yt,th)

te=0:0.01:T;
Ye=subs(yt,te); % Exact values of output

plot(te,Ye,'m-','LineWidth',2)
hold on

%%---------- Convolution of a time function in HF domain ----------%%

for i=1:m
G(i)=2*g(i+1)+g(i);
end

for i=(m+1)
G(i)=g(2)+2*g(1);
end

for i=(m+2):(2*m)
G(i)=g(i-m+1)+(4*g(i-m))+g(i-m-1);
end
G;

for i=1:m
H(i)=2*h(i+1)+h(i);
end

for i=(m+1)
H(i)=h(2)+2*h(1);
end

for i=(m+2):(2*m)
H(i)=h(i-m+1)+(4*h(i-m))+h(i-m-1);
end
H;

Denominator=1+((hh^2)/36)*H(m+1)*G(m+1);
y(1)=0; % First term of output in HF domain after convolution

for i=1:m
y1(i)=r(1)*G(i);

for p=2:(i+1)
y_part2(p-1)=r(p)*G(m+i-p+2);
end

y2(i)=sum(y_part2);

for p=2:(i+1)
y_part3(p-1)=H(p-1)*G(m+i-p+2)*y(1);
end

y3(i)=sum(y_part3);

if i>=2
for p=2:i
j=i-p+2;
k=zeros(1,i);
for p1=1:j
k(p1)=H(m+p1)*G(m+j-p1+1);
end
K=sum(k);
y_part4(p-1)=K*y(p);
end
y4(i)=sum(y_part4);
else
y4(i)=0;
end
y4

y(i+1)=((hh/6)*(y1(i)+y2(i)-((hh/6)*y3(i))-((hh/6)*y4(i))))/Denominator;
end

Y_HF=y

plot(th(1:(m+1)),y,'ko-','MarkerFaceColor','k','LineWidth',2)

Percentage_Error=(Y-Y_HF)./Y*100

AMP_Error=sum((abs(Percentage_Error(2:(m+1)))))/m

31. Program for identifying a non-homogeneous time invariant system of Example 12.1 (Chap. 12, Table 12.1; Fig. 12.1, pp. 291-293)

clc
clear all
format long

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;
th = 0:h:(T+h);

%%---------------------- System -----------------------%%

A=[0 1; -2 -3]; % System matrix


B=[0;1]; % Input matrix

x0=[0;1]; % Initial condition

%%---------- Approximating the step input in HF domain -----------%%

n=length(B);

C_SH_input=ones(1,n);

C_Triangular_input=zeros(1,n);

C_input=C_SH_input+0.5*C_Triangular_input

%%----------- Taking samples from system states ------------%%

syms t
x_expression=[1/2 - 1/(2*exp(t)); 1/(2*exp(t))]

x_time=subs(x_expression,th);

n1=1:n; n2=2:(n+1); % Required Dimensions

%%-------------------- Define Matrices ---------------------%%

Matrix_plus=x_time(:,n2)+x_time(:,n1);

Matrix_minus=x_time(:,n2)-x_time(:,n1);

%%------------- Identification of system matrix -------------%%

A=((2/h)*Matrix_minus-(2*B*C_input))*inv(Matrix_plus)

32. Program for identifying the output matrix of a time invariant system of Example 12.4 (Chap. 12, Fig. 12.4, p. 296, 296)

clc
clear all
format short

%%---------------------- System -----------------------%%

B=[0;0;1]; % Input matrix

%%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;
th = 0:h:(T+h);

%%---------- Approximating the step input in HF domain -----------%%

n=length(B);

C_SH_input=ones(1,n);

C_Triangular_input=zeros(1,n);

C_input=C_SH_input+0.5*C_Triangular_input

%%----------- Taking samples from system states ------------%%


syms t

x_expression=[5/(2*exp(t))-5/(2*exp(2*t))+5/(6*exp(3*t))+1/6;

5/exp(2*t)-5/(2*exp(t))-5/(2*exp(3*t));

5/(2*exp(t))-10/exp(2*t)+15/(2*exp(3*t))];

output =5/exp(2*t)-5/(3*exp(3*t))+2/3;

x_time=subs(x_expression,th);

output_time=subs(output,th); % Samples of the output y(t)

n1=1:n; n2=2:(n+1); %Required Dimensions

%%-------------------- Define Matrices ---------------------%%

Matrix_x=x_time(:,1:n);
Matrix_y=output_time(:,1:n);

%%------------- Identification of output matrix -------------%%

C=Matrix_y*inv(Matrix_x)

33. Program for identifying the system matrix of a non-homogeneous time invariant system involving jump discontinuity at the input, of Example 12.6 (Chap. 12, Tables 12.8 and 12.9; Fig. 12.6, pp. 301-304)

clc
clear all
format long

%%---------------------- System -----------------------%%

A=[0 1; -2 -3]; % System matrix


B=[0;1]; % Input matrix

x0=[0;0.5]; % Initial condition

%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

a=0.5; % Instant of jump in input

%%-------------------- Exact Plot of States ---------------------%%

syms t

t1=0:h:(a-h);
t2=a:h:T;
tt=[t1 t2];

x1a= -(3/4)+(0.5*t)+(3/2)*exp(-t)-(3/4)*exp(-2*t);
X1a=subs(x1a,t1);

x1b=-(1/4)+(0.5*t)+(3/2)*exp(-t)-(3/4)*exp(-2*t)-exp(-(t-a))+0.5*exp(-2*(t-a));
X1b=subs(x1b,t2);

X1=[X1a X1b];

x2a=0.5-(3/2)*exp(-t)+(3/2)*exp(-2*t);
X2a=subs(x2a,t1);

x2b=0.5-(3/2)*exp(-t)+(3/2)*exp(-2*t)+exp(-(t-a))-exp(-2*(t-a));
X2b=subs(x2b,t2);

X2=[X2a X2b];

X=[X1;X2]

% Approximation of input in HF domain

U_SH_Ramp=tt;

U_SH_Delayed_Step=[zeros(1,length(t1)) ones(1,length(t2))];

U_SH = U_SH_Ramp + U_SH_Delayed_Step

U_TF=zeros(1,m);
for k=1:m
U_TF(k)=U_SH(k+1)-U_SH(k);
end

% U_TF; % For HFc method

% U_TF(a/h)=0; % For HFm method

U_TF(a/h)=U_SH(a/h)-U_SH((a/h)-1); % For combined HFc and HFm method

U=U_SH(1:m)+0.5*U_TF % Samples of input

%%-------------------- Required Dimensions -------------------%%

n=length(B);

% n1=1:n; n2=2:(n+1); % for samples before jump point

% n1=(a/h+2):(a/h+1+n); n2=(a/h+1+n):(a/h+2+n);
% for samples after jump

n1=(a/h-1):(a/h); n2=(a/h):(a/h+1); % for samples involving jump

%%--------------------- Define Matrices ---------------------%%

Matrix_plus=X(:,n2)+X(:,n1);

Matrix_minus=X(:,n2)-X(:,n1);

%%------------- Identification of system matrix -------------%%

A=((2/h)*Matrix_minus-(2*B*U(n1)))*inv(Matrix_plus)

34. Program for identifying the time varying element of system matrix of a non-homogeneous system, of Example 13.1 (Chap. 13, Table 13.1; Fig. 13.1, p. 309, 310)

clc
clear all
format long

%%---------------------- System -----------------------%%

B=[1;0]; % Input matrix

x0=[1;1]; % Initial condition

%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

%%----------- Taking samples from system states ------------%%

t=0:h:T;
syms tt

Xt1=1+tt;
xt1=subs(Xt1,t)

Xt2=1+(tt^2/2)+(tt^3/3);
xt2=subs(Xt2,t)

%%------------- Identification of system matrix -------------%%

z=zeros(1,(m+1));

a11=z; a12=z; a21=z; a22=z;

a210=input('Enter the initial value of a21:\n')

a21(1)=a210;

for k=1:m
a21(k+1)=(1/xt1(k+1))*(((2/h)-a22(k+1))*xt2(k+1)-((2/h)+a22(k))*xt2(k)-a21(k)*xt1(k)-(2*B(2)));
end
a21

plot(t,t,'-k','LineWidth',2)
hold on
plot(t,a21,'ko-','MarkerFaceColor','k')

35. Program for identifying the time varying elements of system matrix of a homogeneous system, of Example 13.2 (Chap. 13, Figs. 13.2 and 13.3, p. 311, 312)

clc
clear all
format short

%%----------------- Initial values of the states ----------------%%

x0=[1;1]; % Initial condition

%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

%%----------- Taking samples from system states ------------%%

t=0:h:T;
syms tt

Xt1=(cos(1-cos(tt))+2*sin(1-cos(tt)))*exp(sin(tt));
xt1=subs(Xt1,t)

Xt2=(-sin(1-cos(tt))+2*cos(1-cos(tt)))*exp(sin(tt));
xt2=subs(Xt2,t)

%%------------------- Identification of a11 -------------------%%

a12=sin(t);
a11(1)=1;

for k=1:m
a11(k+1)=(2/h)-(1/xt1(k+1))*(((2/h)+a11(k))*xt1(k)+(a12(k)*xt2(k))+(a12(k+1)*xt2(k+1)));
end

plot(t,a11,'-ko','LineWidth',2,'MarkerFaceColor','k')

%%------------------- Identification of a21 -------------------%%

a22=cos(t);
a21(1)=0;

for k=1:m
a21(k+1)=(1/xt1(k+1))*(((2/h)-a22(k+1))*xt2(k+1)-((2/h)+a22(k))*xt2(k)-(a21(k)*xt1(k)));
end

plot(t,a21,'-ko','LineWidth',2,'MarkerFaceColor','k')

36. Program for identifying the impulse response of the plant, of Example 14.1 via method of deconvolution (Chap. 14, Figs. 14.1 and 14.2; Tables 14.1 and 14.2, pp. 322-324)

clc
clear all
format long

%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

%%--------------- Exact Plot of impulse response ----------------%%

syms t

t_exact=0:0.01:T;

tt=0:h:T;

r=ones(1,length(tt)); % samples of input applied to system

yt=1-exp(-t);
y=subs(yt,tt); % samples of system output

gt=exp(-t);
g_direct=subs(gt,tt); % samples of impulse response
g_exact=subs(gt,t_exact);

%%------------ Formation of R coefficients -------------%%

for i=1:m
R(i)=2*r(i+1)+r(i);
end

R(m+1)=r(2)+2*r(1);

for i=(m+2):(2*m)
R(i)=r(i-m+1)+(4*r(i-m))+r(i-m-1);
end

for i=(2*m+1):(2*m+3)
R(i)=r(i-2*m+2)+r(i-2*m+1)-2*r(i-2*m);
end

for i=1:m
for j=1:m
if i==1
RR(i,:)=[0 R(1:(m-1))];
elseif j-i>=0
RR(i,j)=R(m+(j-i)+1);

end
end
end

RR=(h/6)*RR;
epsilon=0.01;
RR(1,1)=epsilon;
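% RR(1,1) is zero by construction; the small value epsilon is substituted so that
% RR can be inverted in the deconvolution step that follows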

W=inv(RR);
g=[epsilon y(2:m)]*W;

for i=2:m
g_mult_R(i)=g(i)*(R(2*m-i+2)-R(2*m-i+1));
end

g_mult_R;

g(m+1)=((y(m+1)-y(m))-((h/6)*(g(1)*(R(m)-R(m-1))+sum(g_mult_R))))/((h/6)*R(m+1));

plot(t_exact,g_exact,'-m','LineWidth',3)
hold on
plot(tt,g,'-ko','LineWidth',2,'MarkerFaceColor','k','MarkerEdgeColor','k')

37. Program for identifying the impulse response of the plant of closed loop system, of Example 14.2 via method of deconvolution (Chap. 14, Figs. 14.5 and 14.6; Tables 14.3 and 14.4, p. 328)

clc
clear all
format long

%------------ Number of Sub-intervals and Total Time -----------%%

m = input('Enter the number of sub-intervals chosen:\n');


T = input('Enter the total time period:\n');
h = T/m;

%%--------------- Exact Plot of impulse response ----------------%%

syms t

t_exact=0:0.01:T;

tt=0:h:T;

r=ones(1,length(tt)); % samples of input applied to system

h1=ones(1,length(tt)); % samples of feedback signal



yt=(2/sqrt(3))*exp(-t/2)*sin((sqrt(3)*t)/2);
y=subs(yt,tt) % samples of system output

gt=exp(-t);
g_direct=subs(gt,tt); % samples of system output
g_exact=subs(gt,t_exact);

%%--------------- Formation of H coefficients -----------------%%

for i=1:m
H(i)=2*h1(i+1)+h1(i);
end

H(m+1)=h1(2)+2*h1(1);

for i=(m+2):(2*m)
H(i)=h1(i-m+1)+(4*h1(i-m))+h1(i-m-1);
end

for i=(2*m+1):(2*m+3)
H(i)=h1(i-2*m+2)+h1(i-2*m+1)-2*h1(i-2*m);
end

for i=1:m
for j=1:m
if i==1
HH(i,:)=[0 H(1:(m-1))];
elseif j-i>=0
HH(i,j)=H(m+(j-i)+1);
end
end
end
HH
HH=(h/6)*HH
b=y(1:m)*HH

for j=(m+1)
bm_part1=H(m)*y(1);
for k=2:(m+1)
bm_part2(k-1)=H(2*m-k+2)*y(k);
end
bm_2=sum(bm_part2);
bm=bm_part1+bm_2;
end

bm=bm*(h/6);
b=[b bm]

e=r-b

%%--------------- Formation of E coefficients ---------------%%

for i=1:m
E(i)=2*e(i+1)+e(i);
end

E(m+1)=e(2)+2*e(1);
for i=(m+2):(2*m)
E(i)=e(i-m+1)+(4*e(i-m))+e(i-m-1);
end

for i=(2*m+1):(2*m+3)
E(i)=e(i-2*m+2)+e(i-2*m+1)-2*e(i-2*m);
end

for i=1:m
for j=1:m
if i==1
EE(i,:)=[0 E(1:(m-1))];
elseif j-i>=0
EE(i,j)=E(m+(j-i)+1);
end
end
end

EE
EE=(h/6)*EE;

epsilon=0.0001;
EE(1,1)=epsilon;
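% as in the open loop case, EE(1,1) is zero by construction and epsilon keeps
% EE invertible for the deconvolution step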
E_inverse=inv(EE);

g=[epsilon y(2:m)]*E_inverse

plot(t_exact,g_exact,'-m','LineWidth',3)
hold on

for i=2:m
E_mult_G(i)=g(i)*(E(2*m-i+2)-E(2*m-i+1));
end

g(m+1)=((y(m+1)-y(m))-((h/6)*(g(1)*(E(m)-E(m-1))+sum(E_mult_G))))/((h/6)*E(m+1));

plot(tt,g,'-ko','LineWidth',2,'MarkerFaceColor','k','MarkerEdgeColor','k')

38. Program for drawing figures showing identified parameters of the transfer function of a plant from impulse response data, of Example 15.1 (Chap. 15, Figs. 15.1, 15.2 and 15.3, p. 339 and 325-354)

clc
format long

% BPF NOBPF TF HF SHF


x=[1 2 3 4 5];

%%------------ L=1, a0=1 -------------%%


% color [0.267 0.647 0.106]
y=[2.54176555024836 2.54147603465520 2.54176778302942 2.54431980212434 2.54442700797934];
stem(x,y,'ko:')
title('\lambda = 1')
ylabel('a_o')
xlim([0 6])
ylim([2.54 2.545])

%%------------ L=1, a1=0 -------------%%


% y=[-0.25154866105028 -0.25160253204068 -0.25152488460955
% -0.23665257425908 -0.22952648857943];
% stem(x,y,'ko:')
% text(1, 0.05, 'a_1 (exact) = 0')
% Title('\lambda = 1')
% ylabel('a_1')
% xlim([0 6])
% ylim([0 -0.35])

%%------------ L=1, b0=2 -------------%%


% y=[12.72230063919302 12.72227298081633 12.72231194744942
% 12.72939709991139 12.73277717740051];
% stem(x,y,'ko:')
% text(1, 12.7325, 'b_o (exact) = 2')
% Title('\lambda = 1')
% ylabel('b_o')
% xlim([0 6])
% ylim([12.7 12.75])

%%------------ L=1, b1=3 -------------%%


% y=[6.11098678009208 6.11095661720580 6.11098640031485
% 6.11083725422649 6.11061105677642];
% stem(x,y,'ko:')
% text(1, 6.1113, 'b_1 (exact) = 3')
% Title('\lambda = 1')
% ylabel('b_1')
% xlim([0 6])
% ylim([6.11 6.1115])

%%------------ L=6, a0=1 -------------%%


% y=[0.76514513692653 0.77203410891583 0.77261257243453
% 0.77367044331709 0.77194124707928];

% stem(x,y,'ko:')
% Title('\lambda = 6')
% ylabel('a_0')
% xlim([0 6])

%% ylim([6.11 6.1115])
% text(0.5, 0.7735, 'a_0 (exact) = 1')
% ylim([0.755 0.78])

%%------------ L=6, a1=0 -------------%%


% y=[0.05825145602514 0.05617965889910 0.05825567565826
% 0.06699766762652 0.06307111143826];
% stem(x,y,'bo:')
% Title('\lambda = 6')
% ylabel('a_1')
% xlim([0 6])
%% ylim([6.11 6.1115])
% text(1, 0.065, 'a_1 (exact) = 0')

%%------------ L=6, b0=2 -------------%%


% y=[1.55297667130353 1.55404255356324 1.55292351884170
% 1.54597030721091 1.54703131628115];
% stem(x,y,'ko:')
% Title('\lambda = 6')
% ylabel('b_o')
% xlim([0 6])
% ylim([2.37 2.43])
% text(1, 1.56, 'b_o (exact) = 2')

%%------------ L=6, b1=3 -------------%%


% y=[2.40403634491437 2.40494375238060 2.40394316332675
% 2.39578631301634 2.39575881018459];
% stem(x,y,'bo:')
% text(1, 2.407, 'b_1 (exact) = 3')
% Title('\lambda = 6')
% ylabel('b_1')
% xlim([0 6])

%%------------ L=12, a0=1 -------------%%


% y=[0.97307956262148 0.96755154868810 0.97296429969120
% 0.99396497018149 0.98292876505362];
% stem(x,y,'ko:')
% Title('\lambda = 12')
% ylabel('a_o')
% xlim([0 6])
% ylim([0.94 1.02])
% text(1, 1.015, 'a_o (exact) = 1')

%%------------ L=12, a1=0 -------------%%


% y=[0.01235612920984 0.00373085885751 0.01226365668376
% 0.04655427747571 0.02956876833446];
% stem(x,y,'ko:')
% Title('\lambda = 12')
% ylabel('a_1')
% xlim([0 6])
% ylim([-0.02 0.06])
% text(1, 1.94725, 'a_1 (exact) = 0')

%%------------ L=12, b0=2 -------------%%


% y=[1.94618304069512 1.94648566973870 1.94613618137098
% 1.94350585887879 1.94374640375645];
% stem(x,y,'ko:')
% Title('\lambda = 12')
% ylabel('b_o')
% xlim([0 6])
% ylim([1.93 1.955])
% text(1, 1.948, 'b_o (exact) = 2')

%%------------ L=12, b1=3 -------------%%


% y=[2.94373198375025 2.94403455964053 2.94367404814152
% 2.94075967907800 2.94090502275678];
% stem(x,y,'ko:')
% Title('\lambda = 12')
% ylabel('b_1')
% xlim([0 6])
% ylim([2.93 2.955])
% text(1, 2.9483, 'b_1 (exact) = 3')
Index

A F
Average of Mod of Percentage (AMP) error, Function approximation, 26, 29
109, 217, 274, 276, 285, 301, 322, 325 via Block Pulse Functions, 49
via Hybrid Functions, 49, 51–53, 57
B
Block pulse function, 7, 25, 49, 336 G
General hybrid orthogonal functions, 10
C Generalized one-shot operational matrices,
Closed loop system, 276, 283 120, 124
Completeness, 32, 33
Convolution, 167, 271 H
integral, 167, 168 Haar function, 3
of basic components, 169 Homogeneous, 150
of two time functions, 167 Homogeneous system, 185, 221
operation, 167 Hybrid function, 1, 19, 25, 28–30, 32, 33–40,
41–44, 46, 47, 49, 51–58, 70, 74, 75,
D 79, 345
Deconvolution, 319
Delayed unit step functions, 9 I
Delay matrix, 127, 243 Identification, 320
Differentiation matrix, 102 closed loop system, 323
in sample-and-hold function domain, 102 open loop system, 320
in triangular function domain, 102 Identification of output matrix, 294
Differentiation operational matrices, 142 homogeneous system, 297
Discontinuous functions, 49, 56, 84 non-homogeneous system, 294
Disjointedness, 30, 31 Identification of time-invariant, 289
homogeneous system, 297
E non-homogeneous system, 289
Elementary operational rules, 25, 33 Identification of time varying, 307
addition, 25, 33 homogeneous system, 311
division, 25, 44, 45 non-homogeneous system, 307
multiplication, 25, 39, 41, 43 Integral square error, 50, 76
subtraction, 25, 37, 39 Integration, 88
Error estimate, 49, 75, 76 of sample-and-hold functions, 88
sample-and-hold function domain, 75 of triangular functions, 92
triangular function domain, 76 Integration-differentiation (I-D) operation, 106


Integration of a time delayed function, 246 non-homogeneous system, 185, 186,


Integration operational matrices, 145 194–198, 201, 212, 215, 216, 219, 221,
230–232
J
Jump discontinuity, 56, 58, 59–62, 64, 83, 185, P
267, 269, 297, 299 Pade approximation, 332
Parameter estimation, 334
K Piecewise constant basis functions, 3, 9
Kronecker delta, 2
R
L Rademacher functions, 5
Lebesgue measure, 25, 27, 28
Legendre polynomial, 49, 67–69, 70, 72–74 S
Linear differential equations, 141 Sample-and-hold function, 11, 49, 69, 75, 25,
349
M Shift Walsh matrix, 17
Mean integral square error (MISE), 2, 3, 14, SISO, 17, 167, 274, 276, 338
26, 49, 54, 55, 59, 61, 64–67, 69–83 Slant functions, 8
Minimum integral square error, 26 State equations with delay, 248
Multi-delay system, 241 homogeneous, 266
non-homogeneous, 248
N
Non- homogeneous, 143 T
Non-homogeneous system, 185, 221, 297 Theorems, 127
Non-optimal block pulse function, 14, 340 Time delay system, 241, 260–265, 267–269
Non-sinusoidal orthogonal functions, 1 homogeneous, 185, 186, 202–208,
212–214, 221, 234–238
O non-homogeneous, 185, 186, 194–198,
One-shot integration matrices, 152 201, 212, 215, 216, 219, 221, 230–232,
One-shot operational matrices, 118 241, 248
for sample-and-hold functions, 119 Time invariant state equations, 186
for triangular functions, 122 homogeneous, 185, 186, 202–208,
Open loop system, 271, 272, 274 212–214, 221, 234–238
Operational Matrices, 87 non-homogeneous, 185, 186, 194–198,
for differentiation, 100 201, 212, 215, 216, 219, 221, 230–232,
for Integration, 87 241, 248
Orthogonal function, 1, 10, 25 Time invariant system analysis, 185
Orthogonality, 31 Time varying state equation, 222
Orthogonal properties, 1 homogeneous, 233
Orthogonal set, 2 non-homogeneous, 222
Orthonormal, 2 Time varying system analysis, 221
Output, 271, 272, 275 Transfer function Identifications, 331
Output of time invariant, 197 Triangular function, 12, 49, 59, 70, 71, 75, 76,
homogeneous system, 185, 186, 202–208, 79, 82, 83, 25, 342
212–214, 221, 234–238 left handed, 12
non-homogeneous system, 185, 186, right handed, 12
194–198, 201, 212, 215, 216, 219, 221,
230–232 W
Output of time varying, 221, 232, 238 Walsh functions, 6
homogeneous system, 185, 186, 202–208, Walsh Operational Transfer Function, 17
212–214, 221, 234–238 zero order hold, 11, 26
non-homogeneous, 185, 186, 194–198,
201, 212, 215, 216, 219, 221, 230–232, Z
241, 248 Zero order hold, 11, 26
