Learning Probabilistic Graphical Models in R
David Bellot
BIRMINGHAM - MUMBAI
Learning Probabilistic Graphical Models in R
All rights reserved. No part of this book may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means, without the prior written
permission of the publisher, except in the case of brief quotations embedded in
critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy
of the information presented. However, the information contained in this book is
sold without warranty, either express or implied. Neither the author, nor Packt
Publishing, and its dealers and distributors will be held liable for any damages
caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the
companies and products mentioned in this book by the appropriate use of capitals.
However, Packt Publishing cannot guarantee the accuracy of this information.
ISBN 978-1-78439-205-5
www.packtpub.com
Credits
Reviewers: Mzabalazo Z. Ngwenya, Prabhanjan Tattar
Acquisition Editor: Divya Poojari
Content Development Editor: Trusha Shriyan
Technical Editor: Vivek Arora
Copy Editor: Stephen Copestake
Proofreader: Safis Editing
Indexer: Mariammal Chettiyar
Graphics: Abhinash Sahu
Production Coordinator: Nilesh Mohite
Cover Work: Nilesh Mohite
About the Author
David Bellot is a PhD graduate in computer science from INRIA, France, with a
focus on Bayesian machine learning. He was a postdoctoral fellow at the University
of California, Berkeley, and worked for companies such as Intel, Orange, and
Barclays Bank. He currently works in the financial industry, where he develops
financial market prediction algorithms using machine learning. He is also a
contributor to open source projects such as the Boost C++ library.
At www.PacktPub.com, you can also read a collection of free technical articles, sign
up for a range of free newsletters and receive exclusive discounts and offers on Packt
books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital
book library. Here, you can search, access, and read Packt's entire library of books.
Why subscribe?
• Fully searchable across every book published by Packt
• Copy and paste, print, and bookmark content
• On demand and accessible via a web browser
Table of Contents
Preface
Chapter 1: Probabilistic Reasoning
Machine learning
Representing uncertainty with probabilities
Beliefs and uncertainty as probabilities
Conditional probability
Probability calculus and random variables
Sample space, events, and probability
Random variables and probability calculus
Joint probability distributions
Bayes' rule
Interpreting the Bayes' formula
A first example of Bayes' rule
A first example of Bayes' rule in R
Probabilistic graphical models
Probabilistic models
Graphs and conditional independence
Factorizing a distribution
Directed models
Undirected models
Examples and applications
Summary
Chapter 2: Exact Inference
Building graphical models
Types of random variable
Building graphs
Probabilistic expert system
Basic structures in probabilistic graphical models
Variable elimination
Preface
Probabilistic graphical models are one of the most advanced machine learning techniques for representing real-world data and models with probabilities. In many instances, they use the Bayesian paradigm to describe algorithms that can draw conclusions from noisy and uncertain real-world data.
The book covers topics such as inference (automated reasoning) and learning, which is the automatic building of models from raw data. It explains how all the algorithms work step by step and presents readily usable solutions in R, with many examples. After covering the basic principles of probability and the Bayes formula, it presents Probabilistic Graphical Models (PGMs) and several types of inference and learning algorithms. The reader will go from the design of a model to its automatic fitting.
Then, the book focuses on useful models that have proven track records in solving many data science problems, such as Bayesian classifiers, mixture models, Bayesian linear regression, and also simpler models that are used as basic components to build more complex models.
Chapter 2, Exact Inference, shows you how to build PGMs by combining simple
graphs and perform queries on the model using an exact inference algorithm called
the junction tree algorithm.
Chapter 3, Learning Parameters, includes fitting and learning the PGM models from
data sets with the Maximum Likelihood approach.
Chapter 4, Bayesian Modeling – Basic Models, covers simple and powerful Bayesian
models that can be used as building blocks for more advanced models and shows
you how to fit and query them with adapted algorithms.
Chapter 6, Bayesian Modeling – Linear Models, shows you a more Bayesian view of the
standard linear regression algorithm and a solution to the problem of over-fitting.
Chapter 7, Probabilistic Mixture Models, goes over more advanced probabilistic models
in which the data comes from a mixture of several simple models.
Appendix, References, includes all the books and articles which have been used to
write this book.
Conventions
In this book, you will find a number of text styles that distinguish between different
kinds of information. Here are some examples of these styles and an explanation of
their meaning.
Code words in text, database table names, folder names, filenames, file extensions,
pathnames, dummy URLs, user input, and Twitter handles are shown as follows:
"We can also mention the arm package, which provides Bayesian versions of glm()
and polr() and implements hierarchical models."
Reader feedback
Feedback from our readers is always welcome. Let us know what you think about
this book—what you liked or disliked. Reader feedback is important for us as it helps
us develop titles that you will really get the most out of.
If there is a topic that you have expertise in and you are interested in either writing
or contributing to a book, see our author guide at www.packtpub.com/authors.
Customer support
Now that you are the proud owner of a Packt book, we have a number of things to
help you to get the most from your purchase.
1. Log in or register to our website using your e-mail address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box.
5. Select the book for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this book from.
7. Click on Code Download.
You can also download the code files by clicking on the Code Files button on the
book's webpage at the Packt Publishing website. This page can be accessed by
entering the book's name in the Search box. Please note that you need to be logged in
to your Packt account.
Once the file is downloaded, please make sure that you unzip or extract the folder
using the latest version of:
Errata
Although we have taken every care to ensure the accuracy of our content, mistakes
do happen. If you find a mistake in one of our books—maybe a mistake in the text or
the code—we would be grateful if you could report this to us. By doing so, you can
save other readers from frustration and help us improve subsequent versions of this
book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form
link, and entering the details of your errata. Once your errata are verified, your
submission will be accepted and the errata will be uploaded to our website or added
to any list of existing errata under the Errata section of that title.
Piracy
Piracy of copyrighted material on the Internet is an ongoing problem across all
media. At Packt, we take the protection of our copyright and licenses very seriously.
If you come across any illegal copies of our works in any form on the Internet, please
provide us with the location address or website name immediately so that we can
pursue a remedy.
We appreciate your help in protecting our authors and our ability to bring you
valuable content.
Questions
If you have a problem with any aspect of this book, you can contact us at
[email protected], and we will do our best to address the problem.
Probabilistic Reasoning
Among all the predictions that were made about the 21st century, maybe the most unexpected one was that we would collect such a formidable amount of data about everything, every day, and everywhere in the world. Recent years have seen an
incredible explosion of data collection about our world, our lives, and technology;
this is the main driver of what we can certainly call a revolution. We live in the Age
of Information. But collecting data is nothing if we don't exploit it and try to extract
knowledge out of it.
At the beginning of the 20th century, with the birth of statistics, the world was all about collecting data and compiling statistics. At that time, the only reliable tools were pencil and paper and, of course, the eyes and ears of the observers. Scientific observation was still in its infancy, despite the prodigious developments of the 19th century.
More than a hundred years later, we have computers, we have electronic sensors,
we have massive data storage and we are able to store huge amounts of data
continuously about, not only our physical world, but also our lives, mainly through
the use of social networks, the Internet, and mobile phones. Moreover, the density of
our storage technology has increased so much that we can, nowadays, store months
if not years of data into a very small volume that can fit in the palm of our hand.
But storing data is not acquiring knowledge. Storing data is just keeping it
somewhere for future use. At the same time as our storage capacity dramatically
evolved, the capacity of modern computers increased too, at a pace that is sometimes
hard to believe. When I was a doctoral student, I remember how proud I was when I received, in the laboratory, that brand-new, shiny, all-powerful PC for carrying out my research work. Today, my old smartphone, which fits in my pocket, is more than 20 times faster.
Therefore in this book, you will learn one of the most advanced techniques to
transform data into knowledge: machine learning. This technology is used in every
aspect of modern life now, from search engines, to stock market predictions, from
speech recognition to autonomous vehicles. Moreover, it is used in many fields where one would not suspect it at all, from quality assurance in production chains to optimizing the placement of antennas for mobile phone networks.
Machine learning is the marriage between computer science and probabilities and
statistics. A central theme in machine learning is the problem of inference or how to
produce knowledge or predictions using an algorithm fed with data and examples.
And this brings us to the two fundamental aspects of machine learning: the design of
algorithms that can extract patterns and high-level knowledge from vast amounts of
data and also the design of algorithms that can use this knowledge—or, in scientific
terms: learning and inference.
In his Essai philosophique sur les probabilités (1814), Laplace formulated an original
mathematical system for reasoning about new and old data, in which one's belief about something could be updated and improved as soon as new data were available. Today we call that Bayesian reasoning. Indeed, Thomas Bayes was
the first, toward the end of the 18th century, to discover this principle. Without
any knowledge about Bayes' work, Pierre-Simon Laplace rediscovered the same
principle and formulated the modern form of the Bayes theorem. It is interesting
to note that Laplace eventually learned about Bayes' posthumous publications
and acknowledged Bayes to be the first to describe the principle of this inductive
reasoning system. Today, one could argue that we should speak about Laplacian reasoning instead of Bayesian reasoning, and call it the Bayes-Price-Laplace theorem.
More than a century later, this mathematical technique was reborn thanks to new
discoveries in computing probabilities and gave birth to one of the most important
and used techniques in machine learning: the probabilistic graphical model.
From now on, it is important to note that the term graphical refers to the theory of graphs—that is, mathematical objects with nodes and edges (and not graphics or drawings). You know that, when you want to explain to someone the relationships
between different objects or entities, you take a sheet of paper and draw boxes that
you connect with lines or arrows. It is an easy and neat way to show relationships,
whatever they are, between different elements.
Probabilistic Graphical Models (PGM for short) are exactly that: you want to
describe relationships between variables. However, you don't have any certainty
about your variables, but rather beliefs or uncertain knowledge. And we know now
that probabilities are the way to represent and deal with such uncertainties, in a
mathematical and rigorous way.
Probabilistic graphical models can deal with our imperfect knowledge about the
world because our knowledge is always limited. We can't observe everything, we
can't represent all the universe in a computer. We are intrinsically limited as human
beings, as are our computers. With probabilistic graphical models, we can build
simple learning algorithms or complex expert systems. With new data, we can
improve those models and refine them as much as we can and also we can infer new
information or make predictions about unseen situations and events.
In this first chapter you will learn about the fundamentals needed to understand
probabilistic graphical models; that is, probabilities and the simple rules of calculus on
which they are based. We will have an overview of what we can do with probabilistic
graphical models and the related R packages. These techniques are so successful that
we will have to restrict ourselves to just the most important R packages.
We will see how to develop simple models, piece by piece, like a brick game and
how to connect models together to develop even more advanced expert systems.
We will cover the following concepts and applications and each section will contain
numerical examples that you can directly use with R:
• Machine learning
• Representing uncertainty with probabilities
• Notions of probabilistic expert systems
• Representing knowledge with graphs
• Probabilistic graphical models
• Examples and applications
Machine learning
This book is about a field of science called machine learning, or more generally
artificial intelligence. To perform a task, to reach conclusions from data, a computer
as well as any living being needs to observe and process information of a diverse
nature. For a long time now, we have been designing and inventing algorithms and
systems that can solve a problem, very accurately and at incredible speed, but all
algorithms are limited to the very specific task they were designed for. On the other
hand, living beings in general and human beings (as well as many other animals)
exhibit this incredible capacity to adapt and improve using their experience, their
errors, and what they observe in the world.
Machine learning is the study of algorithms that can learn and adapt from data
and observation, reason, and perform tasks using learned models and algorithms.
As the world we live in is inherently uncertain, in the sense that even the simplest
observation such as the color of the sky is impossible to determine absolutely, we
needed a theory that can encompass this uncertainty. The most natural one is the
theory of probability, which will serve as the mathematical foundation of the
present book.
But when the amount of data grows to very large datasets, even the simplest
probabilistic tasks can become cumbersome and we need a framework that will
allow the easy development of models and algorithms that have the necessary
complexity to deal with real-world problems.
At the beginning of artificial intelligence, building such models and algorithms was a very complex task: every time, a new algorithm had to be invented, implemented, and programmed, with inherent sources of errors and bias. The framework we present in this book, called probabilistic graphical models, aims at separating the task of designing a model from the task of implementing an algorithm. Because it is based on probability theory and graph theory, it has very strong mathematical foundations. It is also a framework where the practitioner doesn't need to write and rewrite algorithms all the time, because the algorithms are designed to solve very generic problems and already exist.
Algorithms in probabilistic graphical models can learn new models from data and
answer all sorts of questions using those data and the models, and of course adapt
and improve the models when new data is available.
In this book, we will also see that probabilistic graphical models are a mathematical
generalization of many standard and classical models that we all know and that we
can reuse, mix, and modify within this framework.
The rest of this chapter will introduce required notions in probabilities and graph
theory to help you understand and use probabilistic graphical models in R.
One last note about the title of the book: Learning Probabilistic Graphical Models in R.
In fact this title has two meanings: you will learn how to make probabilistic graphical
models, and you will learn how the computer can learn probabilistic graphical
models. This is machine learning!
Did I say Bayesian inference was the main topic before? Indeed, probabilistic graphical models are also a state-of-the-art approach to performing Bayesian inference, or in other words, to computing new facts and conclusions from your previous beliefs and newly supplied data.
So let's take a simple example that everyone knows: the game of flipping a coin. What's the probability, or the chance, that the coin will land on a head, or on a tail? Everyone should and will answer, with reason, a 50% chance or a probability of 0.5 (remember, probabilities are numbers between 0 and 1).
This simple notion has two interpretations. One we will call a frequentist
interpretation and the other one a Bayesian interpretation. The first one, the
frequentist, means that, if we flip the coin many times, in the long term it will land
heads-up half of the time and tails-up the other half of the time. Using numbers,
it will have a 50% chance of landing on one side, or a probability of 0.5. However,
this frequentist concept, as the name suggests, is valid only if one can repeat the
experiment a very large number of times. Indeed, it would not make any sense
to talk about frequency if you observe a fact only once or twice. The Bayesian
interpretation, on the other hand, quantifies our uncertainty about a fact or an event
by assigning a number (between 0 and 1, or 0% and 100%) to it. If you flip a coin,
even before playing, I'm sure you will assign a 50% chance to each face. If you watch
a horse race with 10 horses and you know nothing about the horses and their riders,
you will certainly assign a probability of 0.1 (or 10%) to each horse.
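As a quick illustration of the frequentist interpretation, the following short R sketch simulates a large number of fair coin flips and checks that the observed frequency of heads drifts towards 0.5. The seed and the number of flips are arbitrary choices of mine, not values from the text:

```r
# Simulate n flips of a fair coin and track the running frequency of heads
set.seed(1)                                   # arbitrary seed, for reproducibility
n     <- 10000
flips <- sample(c("H", "T"), size = n, replace = TRUE)

running <- cumsum(flips == "H") / seq_len(n)  # frequency of heads after each flip
running[c(10, 100, 1000, n)]                  # drifts towards 0.5 as n grows
```

The longer the simulation runs, the closer the running frequency gets to 0.5, which is exactly what the frequentist reading of the 50% chance means.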
Flipping a coin is an experiment you can do many times, thousands of times or more
if you want. However, a horse race is not an experiment you can repeat numerous
times. And what is the probability your favorite team will win the next football
game? It is certainly not an experiment you can do many times: in fact you will do it
once, because there is only one match. But because you strongly believe your team
is the best this year, you will assign a probability of, say, 0.9 that your team will win
the next game.
The main advantage of the Bayesian interpretation is that it does not use the notion
of long-term frequency or repetition of the same experiment.
In machine learning, probabilities are the basic components of most of the systems
and algorithms. You might want to know the probability that an e-mail you received
is a spam (junk) e-mail. You want to know the probability that the next customer on
your online site will buy the same item as the previous customer (and whether your
website should advertise it right away). You want to know the probability that, next
month, your shop will have as many customers as this month.
As you can see with these examples, the line between purely frequentist and purely
Bayesian is far from being clear. And the good news is that the rules of probability
calculus are rigorously the same, whatever interpretation you choose (or not).
Conditional probability
A central theme in machine learning and especially in probabilistic graphical
models is the notion of a conditional probability. In fact, let's be clear, probabilistic
graphical models are all about conditional probability. Let's get back to our horse
race example. We said that, if you know nothing about the riders and their horses, you would assign, say, a probability of 0.1 to each (assuming there are 10 horses). Now, you just learned that the best rider in the country is participating in this race. Would you give him the same chance as the others? Certainly not! The probability that this rider wins is now, say, 19%, and therefore all the other riders each have a probability of winning of only 9%. This is a conditional probability: that is, a probability of an event based on knowing the outcome of another event. This notion of probability matches perfectly the intuitive idea of changing our minds, or updating our beliefs (in more technical terms), given a new piece of information. At the same time, we also
saw a simple example of Bayesian update where we reconsidered and updated our
beliefs given a new fact. Probabilistic graphical models are all about that but just
with more complex situations.
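This belief update can be written in a couple of lines of R. The sketch below only illustrates renormalizing beliefs after new information; the 19% and 9% figures come from the text, while the variable names and the renormalization step are my own:

```r
# Prior beliefs: ten horses, no information, so equal probabilities
prior <- rep(1 / 10, 10)

# New information: horse 1 has the best rider in the country
posterior     <- prior
posterior[1]  <- 0.19               # updated belief in the favourite
posterior[-1] <- (1 - 0.19) / 9     # remaining probability mass shared equally
posterior                           # 0.19 for horse 1, 0.09 for each of the others
sum(posterior)                      # beliefs still sum to 1
```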
A sample space Ω is the set of all possible outcomes of an experiment. In this set, we
call ω a point of Ω, a realization. And finally we call a subset of Ω an event.
For example, if we toss a coin once, we can have heads (H) or tails (T). We say that the sample space is Ω = {H, T}. An event could be I get a head (H). If we toss the coin twice, the sample space is bigger and we can have all those possibilities: Ω = {HH, HT, TH, TT}. An event could be I get a head first. Therefore my event is E = {HH, HT}.
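For such a small sample space, we can enumerate it explicitly in R. The following sketch (with variable names of my own choosing) lists Ω for two tosses and computes the probability of the event I get a head first under equally likely outcomes:

```r
# Sample space for two tosses of a fair coin, one row per outcome
omega <- expand.grid(first = c("H", "T"), second = c("H", "T"),
                     stringsAsFactors = FALSE)
omega                      # the four equally likely points HH, TH, HT, TT

# Event E: "I get a head first" = {HH, HT}
E <- omega$first == "H"
mean(E)                    # P(E) = 2/4 = 0.5 when all points are equally likely
```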
A random variable is something different: it is a function from a sample space into real
numbers. For example, in some experiments, random variables are implicitly used:
• When throwing two dice, X, the sum of the two numbers, is a random variable
• When tossing a coin N times, X, the number of heads in N tosses, is a random variable
For each possible event, we can associate a probability pi and the set of all those
probabilities is the probability distribution of the random variable.
Let's see an example: we consider an experiment in which we toss a coin three times.
A sample point (from the sample space) is the result of the three tosses. For example,
HHT, two heads and one tail, is a sample point.
Therefore, it is easy to enumerate all the possible outcomes and find that the sample space is:
Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
Let Hi be the event that the ith toss is a head. So, for example:
P(H1 ∩ H2 ∩ H3) = P({HHH}) = 1/8 = (1/2) · (1/2) · (1/2) = P(H1) P(H2) P(H3)
P(H1 ∩ H2) = P({HHH, HHT}) = 2/8 = (1/2) · (1/2) = P(H1) P(H2)
The same applies to the two other pairs. Therefore H1, H2, H3 are mutually independent. In general, we write that the probability of two independent events is the product of their probabilities: P(A ∩ B) = P(A) · P(B). And we write that the probability of the union of two disjoint events is the sum of their probabilities: P(A ∪ B) = P(A) + P(B).
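These equalities are easy to check numerically. The sketch below enumerates the eight equally likely outcomes of three tosses and verifies that P(H1 ∩ H2 ∩ H3) and P(H1 ∩ H2) factorize as products of the individual probabilities (the code is an illustration of mine, not taken from the book):

```r
# All 2^3 equally likely outcomes of three tosses of a fair coin
tosses <- expand.grid(t1 = c("H", "T"), t2 = c("H", "T"), t3 = c("H", "T"),
                      stringsAsFactors = FALSE)

H1 <- tosses$t1 == "H"    # event: the first toss is a head
H2 <- tosses$t2 == "H"
H3 <- tosses$t3 == "H"

mean(H1 & H2 & H3)              # 1/8
mean(H1) * mean(H2) * mean(H3)  # also 1/8: the triple intersection factorizes
mean(H1 & H2)                   # 2/8
mean(H1) * mean(H2)             # also 2/8: the pair factorizes too
```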
But as we consider the number of heads, the random variable X will map the sample space to the following numbers this time:
HHH → 3, HHT → 2, HTH → 2, HTT → 1, THH → 2, THT → 1, TTH → 1, TTT → 0
So the range of the random variable X is now {0, 1, 2, 3}. If we assume the same probability of 1/8 for all sample points as before, then we can deduce the probability function on the range of X:
x        0    1    2    3
P(X=x)   1/8  3/8  3/8  1/8
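As a sanity check of this table, the same distribution can be obtained in R either by enumerating the eight outcomes or directly with the binomial density; this is only a sketch of mine, not code from the book:

```r
# Number of heads in three fair tosses, by direct enumeration (1 codes a head)
outcomes <- expand.grid(t1 = 0:1, t2 = 0:1, t3 = 0:1)
X <- rowSums(outcomes)
table(X) / nrow(outcomes)            # P(X = 0, 1, 2, 3) = 1/8, 3/8, 3/8, 1/8

# The same distribution, directly from the binomial density
dbinom(0:3, size = 3, prob = 0.5)
```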
When we consider the two experiments together (tossing a coin twice and throwing a die), we are interested in the probability of obtaining either 0, 1, or 2 heads and, at the same time, obtaining either 1, 2, 3, 4, 5, or 6 with the die. The probability distribution of these two random variables considered at the same time is written P(N, D) and is called a joint probability distribution.
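Because the two experiments are independent, the joint distribution P(N, D) is just the outer product of the two marginal distributions. The following sketch assumes, purely as an illustration, that N is the number of heads in two fair tosses and D the value of a fair six-sided die:

```r
# N: number of heads in two fair coin tosses; D: value of a fair six-sided die
p_N <- dbinom(0:2, size = 2, prob = 0.5)   # P(N = 0, 1, 2) = 1/4, 1/2, 1/4
p_D <- rep(1 / 6, 6)                       # P(D = 1, ..., 6) = 1/6 each

# Independence of the two experiments means P(N, D) = P(N) * P(D)
joint <- outer(p_N, p_D)
rownames(joint) <- as.character(0:2)       # values of N
colnames(joint) <- as.character(1:6)       # values of D
joint                                      # a 3 x 6 table
sum(joint)                                 # the whole table sums to 1
```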
If we keep adding more and more experiments and therefore more and more variables,
we can write a very big and complex joint probability distribution. For example, I
could be interested in the probability that it will rain tomorrow, that the stock market
will rise and that there will be a traffic jam on the highway that I take to go to work.
It's a complex one but not unrealistic. I'm almost sure that the stock market and the
weather are really not dependent. However, the traffic condition and the weather
are seriously connected. I would like to write the distribution P(W, M, T)—weather,
market, traffic—but it seems to be overly complex. In fact, it is not and this is what we
will see throughout this book.
One last and very important notion regarding joint probability distributions is
marginalization. When you have a probability distribution over several random
variables, that is a joint probability distribution, you may want to eliminate some of
the variables from this distribution to have a distribution on fewer variables. This
operation is very important. The marginal distribution p(X) of a joint distribution p(X, Y) is obtained by the following operation:
p(X) = ∑y p(X, Y = y)
that is, by summing the joint distribution over all the values of Y.
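In R, marginalizing a joint table stored as a matrix is a single rowSums() or colSums() call. The sketch below rebuilds the small P(N, D) table from the previous example (my own construction, not the book's code) and sums out each variable in turn:

```r
# Rebuild the joint distribution P(N, D): N = heads in two tosses, D = fair die
joint <- outer(dbinom(0:2, size = 2, prob = 0.5), rep(1 / 6, 6))
rownames(joint) <- as.character(0:2)   # values of N
colnames(joint) <- as.character(1:6)   # values of D

# Marginalization: sum the joint distribution over the variable we want to remove
rowSums(joint)   # p(N) = 0.25, 0.50, 0.25 (D has been summed out)
colSums(joint)   # p(D) = 1/6 for each face (N has been summed out)
```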
Bayes' rule
Let's continue our exploration of the basic concepts we need to play with
probabilistic graphical models. We saw the notion of marginalization, which
is important because, when you have a complex model, you may want to
extract information about one or a few variables of interest. And this is when
marginalization is used.
But the two most important concepts are conditional probability and Bayes' rule.
This is a conditional probability. In more formal terms, we can write the following
formula:
P(X | Y) = P(X, Y) / P(Y)   and   P(Y | X) = P(X, Y) / P(X)
From these two equations we can easily deduce the Bayes formula:
P(X | Y) = P(Y | X) · P(X) / P(Y)
This formula is the most important and it helps invert probabilistic relationships.
This is the chef d'oeuvre of Laplace's career and one of the most important formulas in
modern science. Yet it is very simple.
The normalization factor needs a bit of explanation and development here. Recall that P(X, Y) = P(Y | X) P(X). And also, we saw that P(Y) = ∑x P(X, Y), an operation we called marginalization, whose goal was to eliminate (or marginalize out) a variable from a joint probability distribution.
Thanks to this magic bit of simple algebra, we can rewrite the Bayes' formula in its
general form and also the most convenient one:
P(X | Y) = P(Y | X) · P(X) / ∑x P(Y | X) P(X)
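Written in R, this general form is a one-line normalization. The sketch below uses made-up prior and likelihood numbers purely as an illustration; it is not the worked example the book develops later:

```r
# Bayes' rule for a discrete X given an observation Y:
# P(X | Y) = P(Y | X) * P(X) / sum_x P(Y | X = x) * P(X = x)
prior      <- c(disease = 0.01, healthy = 0.99)   # P(X), made-up numbers
likelihood <- c(disease = 0.95, healthy = 0.10)   # P(Y = positive test | X)

unnormalized <- likelihood * prior
posterior    <- unnormalized / sum(unnormalized)  # the denominator is P(Y)
posterior                                         # P(X | Y = positive test)
```

The division by the sum is exactly the marginalization in the denominator of the general formula: it guarantees the posterior probabilities add up to 1.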