MATLAB Machine Learning Recipes
A Problem-Solution Approach
Third Edition
Michael Paluszek
Stephanie Thomas
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every
occurrence of a trademarked name, logo, or image we use the names, logos, and images only in an editorial fashion
and to the benefit of the trademark owner, with no intention of infringement of the trademark.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not
identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary
rights.
While the advice and information in this book are believed to be true and accurate at the date of publication, neither
the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may
be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.
Managing Director, Apress Media LLC: Welmoed Spahr
Acquisitions Editor: Celestin Suresh John
Development Editor: Laura Berendson
Coordinating Editor: Mark Powers
Contents
Introduction
2.1.6 Datastore
2.1.7 Tall Arrays
2.1.8 Sparse Matrices
2.1.9 Tables and Categoricals
2.1.10 Large MAT-Files
2.2 Initializing a Data Structure
2.2.1 Problem
2.2.2 Solution
2.2.3 How It Works
2.3 mapreduce on an Image Datastore
2.3.1 Problem
2.3.2 Solution
2.3.3 How It Works
2.4 Processing Table Data
2.4.1 Problem
2.4.2 Solution
2.4.3 How It Works
2.5 String Concatenation
2.5.1 Problem
2.5.2 Solution
2.5.3 How It Works
2.6 Arrays of Strings
2.6.1 Problem
2.6.2 Solution
2.6.3 How It Works
2.7 Substrings
2.7.1 Problem
2.7.2 Solution
2.7.3 How It Works
2.8 Reading an Excel Spreadsheet into a Table
2.8.1 Problem
2.8.2 Solution
2.8.3 How It Works
2.9 Accessing ChatGPT
2.9.1 Problem
2.9.2 Solution
2.9.3 How It Works
2.10 Summary
3 MATLAB Graphics
3.1 2D Line Plots
3.1.1 Problem
3.1.2 Solution
3.1.3 How It Works
3.2 General 2D Graphics
3.2.1 Problem
3.2.2 Solution
3.2.3 How It Works
3.3 Custom Two-Dimensional Diagrams
3.3.1 Problem
3.3.2 Solution
3.3.3 How It Works
3.4 Three-Dimensional Box
3.4.1 Problem
3.4.2 Solution
3.4.3 How It Works
3.5 Draw a 3D Object with a Texture
3.5.1 Problem
3.5.2 Solution
3.5.3 How It Works
3.6 General 3D Graphics
3.6.1 Problem
3.6.2 Solution
3.6.3 How It Works
3.7 Building a GUI
3.7.1 Problem
3.7.2 Solution
3.7.3 How It Works
3.8 Animating a Bar Chart
3.8.1 Problem
3.8.2 Solution
3.8.3 How It Works
3.9 Drawing a Robot
3.9.1 Problem
3.9.2 Solution
3.9.3 How It Works
3.10 Importing a Model
3.10.1 Problem
3.10.2 Solution
3.10.3 How It Works
3.11 Summary
4 Kalman Filters
4.1 Gaussian Distribution
4.2 A State Estimator Using a Linear Kalman Filter
4.2.1 Problem
4.2.2 Solution
4.2.3 How It Works
4.3 Using the Extended Kalman Filter for State Estimation
4.3.1 Problem
4.3.2 Solution
4.3.3 How It Works
4.4 Using the UKF for State Estimation
4.4.1 Problem
4.4.2 Solution
4.4.3 How It Works
4.5 Using the UKF for Parameter Estimation
4.5.1 Problem
4.5.2 Solution
4.5.3 How It Works
4.6 Range to a Car
4.6.1 Problem
4.6.2 Solution
4.6.3 How It Works
4.7 Summary
5.5 Ship Steering: Implement Gain Scheduling for Steering Control of a Ship
5.5.1 Problem
5.5.2 Solution
5.5.3 How It Works
5.6 Spacecraft Pointing
5.6.1 Problem
5.6.2 Solution
5.6.3 How It Works
5.7 Direct Adaptive Control
5.7.1 Problem
5.7.2 Solution
5.7.3 How It Works
5.8 Summary
Bibliography
Index
About the Authors
About the Technical Reviewer
Introduction
1. Autonomous cars: Machine learning is used in almost every aspect of car control systems.
2. Plasma physicists use machine learning to help guide experiments on fusion reactors.
TAE Technologies has used it with great success in guiding fusion experiments. The
Princeton Plasma Physics Laboratory (PPPL) has used it for the National Spherical Torus
Experiment to study a promising candidate for a nuclear fusion power plant.
5. Law enforcement and others use it for facial recognition. Several crimes have been solved
using facial recognition!
1. MATLAB functions
2. MATLAB scripts
3. HTML help
The MATLAB scripts implement all of the examples in this book. The functions encapsulate the
algorithms. Many functions have built-in demos. Just type the function name in the command
window, and it will execute the demo. The demo is usually encapsulated in a subfunction. You
can copy out this code for your demos and paste it into a script. For example, type the function
name PlotSet into the command window, and the plot in Figure 1 will appear.
>> PlotSet
Figure 1: PlotSet demo output, showing cos (with legend entries A and B) and sin plotted as y versus x in two stacked subplots
You can use these demos to start your scripts. Some functions, like right-hand-side functions
for numerical integration, don’t have demos. If you type a function name at the command line
that doesn’t have a built-in demo, you will get an error as in the code snippet below.
>> RHSAutomobileXY
Error using RHSAutomobileXY (line 17)
A built-in demo is not available.
The toolbox is organized according to the chapters in this book. The folder names are Chapter_01, Chapter_02, etc. In addition, there is a General folder with functions that support the rest of the toolbox. You will also need the open source package GLPK (GNU Linear Programming Kit) to run some of the code. Nicolo Giorgetti has written a MATLAB MEX interface to GLPK that is available on SourceForge and included with this toolbox. The interface consists of
1. glpk.m
3. GLPKTest.m
CHAPTER 1
Overview
1.1 Introduction
Machine Learning is a field in computer science where data is used to predict, or respond to,
future data. It is closely related to the fields of pattern recognition, computational statistics, and
artificial intelligence. The data may be historical or updated in real time. Machine learning is important in areas like facial recognition, spam filtering, and content generation, where it is not feasible, or even possible, to write algorithms by hand to perform the task.
For example, early attempts at filtering junk email had the user write rules to determine what was junk or spam. Success depended on the user's ability to correctly identify the attributes of a message that mark it as junk, such as the sender address or words in the subject, and on the time they were willing to spend tweaking the rules. This was only moderately successful, as junk mail generators had little difficulty anticipating such handmade rules.
Modern systems use machine learning techniques with much greater success. Most of us are
now familiar with the concept of simply marking a given message as “junk” or “not junk” and
take for granted that the email system can quickly learn which features of these emails identify
them as junk and prevent them from appearing in our inbox. This could now be any combination
of IP or email addresses and words and phrases in the subject or body of the email, with a variety
of matching criteria. Note how the machine learning in this example is data driven, autonomous,
and continuously updating itself as you receive emails and flag them. However, even today, these
systems are not completely successful since they do not yet understand the “meaning” of the
text that they are processing.
Content generation is an evolving area. By training engines over massive data sets, the
engines can generate content such as music scores, computer code, and news articles. This has
the potential to revolutionize many areas that have been exclusively handled by people.
In a more general sense, what does machine learning mean? Machine learning can mean
using machines (computers and software) to gain meaning from data. It can also mean giving
machines the ability to learn from their environment. Machines have been used to assist humans
for thousands of years. Consider a simple lever, which can be fashioned using a rock and a length
of wood, or an inclined plane. Both of these machines perform useful work and assist people,
but neither can learn. Both are limited by how they are built. Once built, they cannot adapt to
changing needs without human interaction.
Machine learning involves using data to create a model that can be used to solve a problem.
The model can be explicit, in which case the machine learning algorithm adjusts the model’s
parameters, or the data can form the model. The data can be collected once and used to train a
machine learning algorithm, which can then be applied. For example, ChatGPT scrapes textual
data from the Internet to allow it to generate text based on queries. An adaptive control system
measures inputs and command responses to those inputs to update parameters for the control
algorithm.
In the context of the software we will be writing in this book, machine learning refers to
the process by which an algorithm converts the input data into parameters it can use when
interpreting future data. Many of the processes used to mechanize this learning derive from
optimization techniques and, in turn, are related to the classic field of automatic control. In
the remainder of this chapter, we will introduce the nomenclature and taxonomy of machine
learning systems.
1.2.1 Data
All learning methods are data driven. Sets of data are used to train the system. These sets may
be collected and edited by humans or gathered autonomously by other software tools. Control
systems may collect data from sensors as the systems operate and use that data to identify parameters or train the system. Content generation systems scour the Internet for information. The
data sets may be very large, and it is the explosion of data storage infrastructure and available
databases that is largely driving the growth in machine learning software today. It is still true that
a machine learning tool is only as good as the data used to create it, and the selection of training
data is practically a field in itself. Selection of data for many systems is highly automated.
NOTE When collecting data for training, one must be careful to ensure that the time
variation of the system is understood. If the structure of a system changes with time, it may be
necessary to discard old data before training the system. In automatic control, this is sometimes
called a forgetting factor in an estimator.
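To make the forgetting factor concrete, here is a minimal sketch of recursive least squares with exponential forgetting. The two-parameter system, regressor, and noise level are invented for illustration; this is not code from the book's toolbox.

lambda = 0.98;          % forgetting factor, 0 < lambda <= 1
theta  = zeros(2,1);    % parameter estimate
P      = 1e3*eye(2);    % covariance of the estimate
for k = 1:200
  phi   = [sin(0.1*k); 1];              % regressor for this sample
  y     = [2 -1]*phi + 0.05*randn;      % noisy measurement of the "true" system
  K     = P*phi/(lambda + phi'*P*phi);  % estimator gain
  theta = theta + K*(y - phi'*theta);   % correct the estimate
  P     = (P - K*(phi'*P))/lambda;      % dividing by lambda discounts old data
end

With lambda equal to 1, all past data is weighted equally; with lambda less than 1, old data is exponentially forgotten, letting the estimator track a system whose structure changes with time.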
1.2.2 Models
Models are often used in learning systems. A model provides a mathematical framework for learning and is typically human-derived, based on observations and experience. For example, a model of a car, seen from above, might be a rectangle with dimensions that fit within a standard parking spot. However, some forms of machine learning develop their models without a human-derived structure.
1.2.3 Training
A system which maps an input to an output needs training to do this in a useful way. Just as
people need to be trained to perform tasks, machine learning systems need to be trained. Training is accomplished by giving the system an input and the corresponding output and modifying
the structure (models or data) in the learning machine so that mapping is learned. In some ways,
this is like curve fitting or regression. If we have enough training pairs, then the system should
be able to produce correct outputs when new inputs are introduced. For example, if we give
a face recognition system thousands of cat images and tell it that those are cats, we hope that
when it is given new cat images it will also recognize them as cats. Problems can arise when you
don’t give it enough training sets, or the training data is not sufficiently diverse, for instance,
identifying a long-haired cat or hairless cat when the training data is only of short-haired cats.
A diversity of training data is required for a functioning algorithm.
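As a rough illustration of the curve-fitting analogy, the following sketch "trains" a polynomial model on input/output pairs and then produces an output for an input it has never seen. The data and the model order are invented for this example.

x = linspace(0,1,50);                % training inputs
y = 3*x.^2 - x + 0.1*randn(1,50);    % training outputs (noisy "labels")
p = polyfit(x,y,2);                  % "train" by fitting a quadratic
yNew = polyval(p,0.75);              % respond to a new input

Asking the fitted model for the output at x = 10 would extrapolate far outside the training data; that is the curve-fitting version of the short-haired/long-haired cat problem above.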
Supervised Learning
Supervised learning means that specific training sets of data are applied to the system. The
learning is supervised in that the “training sets” are human-derived. It does not necessarily
mean that humans are actively validating the results. The process of classifying the system's
outputs for a given set of inputs is called “labeling.” That is, you explicitly say which results are
correct or which outputs are expected for each set of inputs.
The process of generating training sets can be time-consuming. Great care must be taken
to ensure that the training sets will provide sufficient training so that when real-world data is
collected, the system will produce correct results. They must cover the full range of expected
inputs and desired outputs. The training is followed by test sets to validate the results. If the
results aren’t good, then the test sets are cycled into the training sets, and the process is repeated.
A human example would be a ballet dancer trained exclusively in classical ballet technique.
If they were then asked to dance a modern dance, the results might not be as good as required
because the dancer did not have the appropriate training sets; their training sets were not sufficiently diverse.
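As a minimal sketch of supervised learning, the code below trains a nearest-neighbor classifier on labeled points and then classifies a new input. It assumes the Statistics and Machine Learning Toolbox is available; the data and labels are invented.

X      = [randn(50,2); randn(50,2)+3];          % two groups of points
labels = [repmat("A",50,1); repmat("B",50,1)];  % human-supplied labels
mdl    = fitcknn(X,labels,'NumNeighbors',5);    % train on the labeled set
label  = predict(mdl,[3 3])                     % classify a new input

A held-out test set, classified with the same predict call, would be used to validate the success rate before trusting the classifier on real-world data.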
Unsupervised Learning
Unsupervised learning does not utilize training sets. It is often used to discover patterns in data
for which there is no “right” answer. For example, if you used unsupervised learning to train
a face identification system, the system might cluster the data in sets, some of which might be
faces. Clustering algorithms are generally examples of unsupervised learning. The advantage
of unsupervised learning is that you can learn things about the data that you might not know in
advance. It is a way of finding hidden structures in data.
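A two-line sketch of this idea using the built-in kmeans clustering function (Statistics and Machine Learning Toolbox); the data is invented, and note that no labels are ever given to the algorithm.

X   = [randn(100,2); randn(100,2)+4];  % unlabeled data with hidden structure
idx = kmeans(X,2);                     % discover two clusters on its own

The returned idx assigns each point to a cluster, but it is up to a human to interpret what, if anything, each cluster means.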
Semi-supervised Learning
With this approach, some of the data are in the form of labeled training sets, and other data are
not [12]. Typically, only a small amount of the input data is labeled, while most are not, as the
labeling may be an intensive process requiring a skilled human. The small set of labeled data is
leveraged to interpret the unlabeled data.
Online Learning
The system is continually updated with new data [12]. This is called “online” because many of
the learning systems use data collected while the system is operating. It could also be called
recursive learning. It can be beneficial to periodically “batch” process data used up to a given
time and then return to the online learning mode. Spam filtering systems, for example, collect data from incoming emails and update their filters. Generative deep learning systems like ChatGPT use massive online learning.
Figure 1.1: A learning machine that senses the environment and stores data in memory (diagram blocks: Learning, Machine, Environment; signals: Measurements, Parameters, Actions)
Note that the machine produces output in the form of actions. A copy of the actions may
be passed to the learning system so that it can separate the effects of the machine’s actions
from those of the environment. This is akin to a feedforward control system, which can result
in improved performance.
A few examples will clarify the diagram. We will discuss a medical example, a security
system, and spacecraft maneuvering.
A doctor might want to diagnose diseases more quickly. They would collect data on tests
on patients and then collate the results. Patient data might include age, height, weight, historical
data like blood pressure readings and medications prescribed, and exhibited symptoms. The
machine learning algorithm would detect patterns so that when new tests were performed on a
patient, the machine learning algorithm would be able to suggest diagnoses or additional tests to
narrow down the possibilities. As the machine learning algorithm was used, it would, hopefully,
get better with each success or failure. Of course, the definition of success or failure is fuzzy. In
this case, the environment would be the patients themselves. The machine would use the data
to generate actions, which would be new diagnoses. This system could be built in two ways.
In the supervised learning process, test data and known correct diagnoses are used to train the
machine. In an unsupervised learning process, the data would be used to generate patterns that
might not have been known before, and these could lead to diagnosing conditions that would
normally not be associated with those symptoms.
A security system might be put into place to identify faces. The measurements are camera
images of people. The system would be trained with a wide range of face images taken from
multiple angles. The system would then be tested with these known persons and its success rate
validated. Those that are in the database memory should be readily identified, and those that are
not should be flagged as unknown. If the success rate was not acceptable, more training might
be needed, or the algorithm itself might need to be tuned. This type of face recognition is now
common, used in Mac OS X’s “Faces” feature in Photos, face identification on the new iPhone
X, and Facebook when “tagging” friends in photos.
For precision maneuvering of a spacecraft, the inertia of the spacecraft needs to be known.
If the spacecraft has an inertial measurement unit that can measure angular rates, the inertia
matrix can be identified. This is where machine learning is tricky. The torque applied to the
spacecraft, whether by thrusters or momentum exchange devices, is only known to a certain
degree of accuracy. Thus, the system identification must sort out, if it can, the torque scaling
factor from the inertia. The inertia can only be identified if torques are applied. This leads to
the issue of stimulation. A learning system cannot learn if the system to be studied does not
have known inputs, and those inputs must be sufficiently diverse to stimulate the system so that
the learning can be accomplished. Training a face recognition system with one picture will not
work.
Figure 1.2: Taxonomy of machine learning. The dotted lines show connections between branches (branches include Autonomous Learning, State Estimation, Inductive Learning, Pattern Recognition, Adaptive Control, Expert Systems, Data Mining, System Identification, Fuzzy Logic, Optimal Control, and Optimization)
There are three categories under Autonomous Learning. The first is Control. Feedback control is used to compensate for uncertainty in a system or to make a system behave differently than it would normally behave. If there were no uncertainty, you wouldn't need feedback. For example, if you are a quarterback throwing a football at a running player, assume for a moment that you know everything about the upcoming play. You know exactly where the player should be at a given time, so you can close your eyes, count, and just throw the ball to that spot. Assuming the player has good hands, you would have a 100% reception rate! More realistically,
you watch the player, estimate the player’s speed, and throw the ball. You are applying feedback
to the problem. As stated, this is not a learning system. However, if now you practice the same
play repeatedly, look at your success rate, and modify the mechanics and timing of your throw
using that information, you would have an adaptive control system, the second box from the top
of the control list. Learning in control takes place in adaptive control systems and also in the
general area of system identification.
System identification is learning about a system. By system, we mean the data that represents anything and the relationships between elements of that data. For example, a particle moving in a straight line is a system defined by its mass, the force on that mass, its velocity, and its position. The position is related to the velocity times time, and the velocity is determined by the acceleration, which is the force divided by the mass.
Optimal control may not involve any learning. For example, what is known as full-state
feedback produces an optimal control signal but does not involve learning. In full-state feedback, the combination of model and data tells us everything we need to know about the system. However, in more complex systems, we can't measure all the states and don't know the parameters perfectly, so some form of learning is needed to produce "optimal" or the best possible results. In a learning system, optimal control would need to be redefined as the system learns. For
example, an optimal space trajectory assumes thruster characteristics. As a mission progresses,
the thruster performance may change, requiring recomputation of the “optimal” trajectory.
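As a concrete example of optimal control without learning, full-state feedback gains can be computed with the Control System Toolbox function lqr; the double-integrator model and the weights below are illustrative assumptions.

A = [0 1; 0 0];  B = [0; 1];  % double integrator: position and velocity states
Q = eye(2);  R = 1;           % state and control weighting matrices
K = lqr(A,B,Q,R);             % optimal gain for the control law u = -K*x

No learning occurs here: A, B, Q, and R are assumed known and fixed. If the true A and B drift, as with the thruster characteristics above, K would have to be recomputed from newly identified values.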
System identification is the process of identifying the characteristics of a system. A system can, to a first approximation, be defined by a set of dynamical states and parameters. For example, in a linear time-invariant system, the dynamical equation is
ẋ = Ax + Bu (1.1)
where A and B are matrices of parameters, u is an input vector, and x is the state vector. System
identification would find A and B. In a real system, A and B are not necessarily time invariant,
and most systems are only linear to a first approximation.
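A minimal system identification sketch for the discrete-time analog of Equation 1.1, x(k+1) = Ax(k) + Bu(k): simulate a "true" system under a random input, then recover A and B from the recorded states and inputs by least squares. The system and input are invented for illustration.

Atrue = [0.9 0.1; 0 0.95];  Btrue = [0; 0.1];  % "true" parameters to be recovered
N = 200;  x = zeros(2,N);  u = randn(1,N);     % random input stimulates the system
for k = 1:N-1
  x(:,k+1) = Atrue*x(:,k) + Btrue*u(k);        % simulate the system
end
Theta = x(:,2:N)/[x(:,1:N-1); u(1:N-1)];       % least-squares fit of [A B]
Ahat  = Theta(:,1:2);  Bhat = Theta(:,3);      % identified parameters

If u were zero, the input term would never be stimulated and B could not be identified, which is the stimulation issue raised in the spacecraft example above.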
The second category is what many people consider true Machine Learning. This is making use of data to produce behavior that solves problems. Much of its background comes from statistics and optimization. The learning process may be done once in a batch process or continually in a recursive process. For example, in a stock buying package, a developer might have processed stock data for several years, say before 2008, and used that to decide which stocks to buy. That software might not have worked well during the financial crash. A recursive program would continuously incorporate new data. Pattern recognition and data mining fall into
this category. Pattern recognition is looking for patterns in images. For example, the early AI