Foundations
of
Factor Analysis
Second Edition



Chapman & Hall/CRC
Statistics in the Social and Behavioral Sciences Series

Series Editors
A. Colin Cameron, University of California, Davis, USA
J. Scott Long, Indiana University, USA
Andrew Gelman, Columbia University, USA
Sophia Rabe-Hesketh, University of California, Berkeley, USA
Anders Skrondal, Norwegian Institute of Public Health, Norway

Aims and scope

Large and complex datasets are becoming prevalent in the social and behavioral
sciences and statistical methods are crucial for the analysis and interpretation of such
data. This series aims to capture new developments in statistical methodology with par-
ticular relevance to applications in the social and behavioral sciences. It seeks to promote
appropriate use of statistical, econometric and psychometric methods in these applied
sciences by publishing a broad range of reference works, textbooks and handbooks.

The scope of the series is wide, including applications of statistical methodology in
sociology, psychology, economics, education, marketing research, political science,
criminology, public policy, demography, survey methodology and official statistics. The
titles included in the series are designed to appeal to applied statisticians, as well as
students, researchers and practitioners from the above disciplines. The inclusion of real
examples and case studies is therefore essential.

Published Titles

Analysis of Multivariate Social Science Data, Second Edition
David J. Bartholomew, Fiona Steele, Irini Moustaki, and Jane I. Galbraith

Bayesian Methods: A Social and Behavioral Sciences Approach, Second Edition
Jeff Gill

Foundations of Factor Analysis, Second Edition
Stanley A. Mulaik

Linear Causal Modeling with Structural Equations
Stanley A. Mulaik

Multiple Correspondence Analysis and Related Methods
Michael Greenacre and Jörg Blasius

Multivariable Modeling and Multivariate Analysis for the Behavioral Sciences
Brian S. Everitt

Statistical Test Theory for the Behavioral Sciences
Dato N. M. de Gruijter and Leo J. Th. van der Kamp
Chapman & Hall/CRC
Statistics in the Social and Behavioral Sciences Series

Foundations
of
Factor Analysis
Second Edition

Stanley A. Mulaik


CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2010 by Taylor & Francis Group, LLC


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works


Version Date: 20110725

International Standard Book Number-13: 978-1-4200-9981-2 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access
www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center,
Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit
organization that provides licenses and registration for a variety of users. For organizations
that have been granted a photocopy license by the CCC, a separate system of payment has
been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com

and the CRC Press Web site at
http://www.crcpress.com



Contents

Preface to the Second Edition ............................................................................ xiii


Preface to the First Edition................................................................................. xix

1 Introduction .....................................................................................................1
1.1 Factor Analysis and Structural Theories .............................................1
1.2 Brief History of Factor Analysis as a Linear Model ...........................3
1.3 Example of Factor Analysis ................................................................. 12

2 Mathematical Foundations for Factor Analysis ..................................... 17


2.1 Introduction ........................................................................................... 17
2.2 Scalar Algebra ........................................................................................ 17
2.2.1 Fundamental Laws of Scalar Algebra .................................. 18
2.2.1.1 Rules of Signs............................................................ 18
2.2.1.2 Rules for Exponents ................................................. 19
2.2.1.3 Solving Simple Equations ....................................... 19
2.3 Vectors .................................................................................................... 20
2.3.1 n-Tuples as Vectors ..................................................................22
2.3.1.1 Equality of Vectors....................................................22
2.3.2 Scalars and Vectors.................................................................. 23
2.3.3 Multiplying a Vector by a Scalar ........................................... 23
2.3.4 Addition of Vectors ................................................................. 24
2.3.5 Scalar Product of Vectors ....................................................... 24
2.3.6 Distance between Vectors ...................................................... 25
2.3.7 Length of a Vector ................................................................... 26
2.3.8 Another Definition for Scalar Multiplication ...................... 27
2.3.9 Cosine of the Angle between Vectors ................................... 27
2.3.10 Projection of a Vector onto Another Vector ......................... 29
2.3.11 Types of Special Vectors .........................................................30
2.3.12 Linear Combinations .............................................................. 31
2.3.13 Linear Independence .............................................................. 32
2.3.14 Basis Vectors............................................................................. 32
2.4 Matrix Algebra ...................................................................................... 32
2.4.1 Definition of a Matrix ............................................................. 32
2.4.2 Matrix Operations ................................................................... 33
2.4.2.1 Equality......................................................................34
2.4.2.2 Multiplication by a Scalar .......................................34
2.4.2.3 Addition.....................................................................34
2.4.2.4 Subtraction ................................................................ 35
2.4.2.5 Matrix Multiplication .............................................. 35


2.4.3 Identity Matrix......................................................................... 37


2.4.4 Scalar Matrix ............................................................................ 38
2.4.5 Diagonal Matrix ...................................................................... 39
2.4.6 Upper and Lower Triangular Matrices ................................ 39
2.4.7 Null Matrix............................................................................... 40
2.4.8 Transpose Matrix ..................................................................... 40
2.4.9 Symmetric Matrices ................................................................ 41


2.4.10 Matrix Inverse.......................................................................... 41
2.4.11 Orthogonal Matrices ...............................................................42
2.4.12 Trace of a Matrix ......................................................................43
2.4.13 Invariance of Traces under Cyclic Permutations ................43
2.5 Determinants .........................................................................................44
2.5.1 Minors of a Matrix .................................................................. 46
2.5.2 Rank of a Matrix ...................................................................... 47
2.5.3 Cofactors of a Matrix .............................................................. 47
2.5.4 Expanding a Determinant by Cofactors .............................. 48
2.5.5 Adjoint Matrix ......................................................................... 48
2.5.6 Important Properties of Determinants ................................. 49
2.5.7 Simultaneous Linear Equations ............................................ 50
2.6 Treatment of Variables as Vectors ....................................................... 51
2.6.1 Variables in Finite Populations ............................................. 51
2.6.2 Variables in Infinite Populations ........................................... 53
2.6.3 Random Vectors of Random Variables................................. 56
2.7 Maxima and Minima of Functions ..................................................... 58
2.7.1 Slope as the Indicator of a Maximum or Minimum ........... 59
2.7.2 Index for Slope......................................................................... 59
2.7.3 Derivative of a Function......................................................... 60
2.7.4 Derivative of a Constant ........................................................ 62
2.7.5 Derivative of Other Functions ............................................... 62
2.7.6 Partial Differentiation .............................................................64
2.7.7 Maxima and Minima of Functions of Several Variables....65
2.7.8 Constrained Maxima and Minima ....................................... 67

3 Composite Variables and Linear Transformations ................................ 69


3.1 Introduction ........................................................................................... 69
3.1.1 Means and Variances of Variables ........................................ 69
3.1.1.1 Correlation and Causation ...................................... 71
3.2 Composite Variables ............................................................................. 72
3.3 Unweighted Composite Variables ...................................................... 73
3.3.1 Mean of an Unweighted Composite .................................... 73
3.3.2 Variance of an Unweighted Composite ............................... 73
3.3.3 Covariance and Correlation between
Two Composites ......................................................................77
3.3.4 Correlation of an Unweighted Composite
with a Single Variable ............................................................. 78


3.3.5
Correlation between Two Unweighted
Composites.................................................................................80
3.3.6 Summary Concerning Unweighted Composites..................83
3.4 Differentially Weighted Composites ..................................................83
3.4.1 Correlation between a Differentially Weighted
Composite and Another Variable ...........................................83
3.4.2 Correlation between Two Differentially


Weighted Composites ...............................................................84
3.5 Matrix Equations ...................................................................................84
3.5.1 Random Vectors, Mean Vectors, Variance–Covariance
Matrices, and Correlation Matrices ........................................84
3.5.2 Sample Equations...................................................................... 86
3.5.3 Composite Variables in Matrix Equations ............................. 88
3.5.4 Linear Transformations ............................................................ 89
3.5.5 Some Special, Useful Linear Transformations ...................... 91

4 Multiple and Partial Correlations ............................................................. 93


4.1 Multiple Regression and Correlation ................................................. 93
4.1.1 Minimizing the Expected Squared Difference between
a Composite Variable and an External Variable ................... 93
*4.1.2 Deriving the Regression Weight Matrix for
Multivariate Multiple Regression ...........................................95
4.1.3 Matrix Equations for Multivariate Multiple Regression .....97
4.1.4 Squared Multiple Correlations ................................................ 98
4.1.5 Correlations between Actual and Predicted Criteria ........... 99
4.2 Partial Correlations ............................................................................. 100
4.3 Determinantal Formulas .................................................................... 102
4.3.1 Multiple-Correlation Coefficient........................................... 103
4.3.2 Formulas for Partial Correlations ......................................... 104
4.4 Multiple Correlation in Terms of Partial Correlation .................... 104
4.4.1 Matrix of Image Regression Weights.................................... 105
4.4.2 Meaning of Multiple Correlation .......................................... 107
4.4.3 Yule’s Equation for the Error of Estimate ............................ 109
4.4.4 Conclusions .............................................................................. 110

5 Multivariate Normal Distribution .......................................................... 113


5.1 Introduction ......................................................................................... 113
5.2 Univariate Normal Density Function .............................................. 113
5.3 Multivariate Normal Distribution .................................................... 114
5.3.1 Bivariate Normal Distribution .............................................. 115
5.3.2 Properties of the Multivariate Normal Distribution .......... 116
*5.4 Maximum-Likelihood Estimation..................................................... 118
5.4.1 Notion of Likelihood .............................................................. 118
5.4.2 Sample Likelihood .................................................................. 119


5.4.3 Maximum-Likelihood Estimates .......................................... 119


5.4.4 Multivariate Case .................................................................... 124
5.4.4.1 Distribution of ȳ and S ............................ 128

6 Fundamental Equations of Factor Analysis .......................................... 129


6.1 Analysis of a Variable into Components ......................................... 129
6.1.1 Components of Variance ........................................................ 132


6.1.2 Variance of a Variable in Terms of Its Factors ..................... 133
6.1.3 Correlation between Two Variables
in Terms of Their Factors........................................................134
6.2 Use of Matrix Notation in Factor Analysis...................................... 135
6.2.1 Fundamental Equation of Factor Analysis .......................... 135
6.2.2 Fundamental Theorem of Factor Analysis .......................... 136
6.2.3 Factor-Pattern and Factor-Structure Matrices ..................... 137

7 Methods of Factor Extraction ................................................................... 139


7.1 Rationale for Finding Factors and Factor Loadings....................... 139
7.1.1 General Computing Algorithm
for Finding Factors ..................................................................140
7.2 Diagonal Method of Factoring .......................................................... 145
7.3 Centroid Method of Factoring .......................................................... 147
7.4 Principal-Axes Methods..................................................................... 147
7.4.1 Hotelling’s Iterative Method ................................................. 151
7.4.2 Further Properties of Eigenvectors and
Eigenvalues ..............................................................................154
7.4.3 Maximization of Quadratic Forms for Points
on the Unit Sphere .................................................................. 156
7.4.4 Diagonalizing the R Matrix into Its Eigenvalues ............... 158
7.4.5 Jacobi Method .......................................................................... 159
7.4.6 Powers of Square Symmetric Matrices ................................ 164
7.4.7 Factor-Loading Matrix from Eigenvalues
and Eigenvectors ..................................................................... 165

8 Common-Factor Analysis .......................................................................... 167


8.1 Preliminary Considerations............................................................... 167
8.1.1 Designing a Factor Analytic Study ....................................... 168
8.2 First Stages in the Factor Analysis .................................................... 169
8.2.1 Concept of Minimum Rank ................................................... 170
8.2.2 Systematic Lower-Bound Estimates
of Communalities .................................................................... 175
8.2.3 Congruence Transformations ................................................ 176
8.2.4 Sylvester’s Law of Inertia ...................................................... 176
8.2.5 Eigenvector Transformations ................................................ 177
8.2.6 Guttman’s Lower Bounds for Minimum Rank................... 177
8.2.7 Preliminary Theorems for Guttman’s Bounds .................... 178


8.2.8 Proof of the First Lower Bound........................................... 181


8.2.9 Proof of the Third Lower Bound ......................................... 181
8.2.10 Proof of the Second Lower Bound ...................................... 184
8.2.11 Heuristic Rules of Thumb for the Number of Factors ..... 185
8.2.11.1 Kaiser’s Eigenvalues-Greater-
Than-One Rule ...................................................... 186
8.2.11.2 Cattell’s Scree Criterion ....................................... 186


8.2.11.3 Parallel Analysis ................................................... 188
8.3 Fitting the Common-Factor Model to a Correlation Matrix ......... 192
8.3.1 Least-Squares Estimation of the Exploratory
Common-Factor Model ........................................................ 193
8.3.2 Assessing Fit .......................................................................... 197
8.3.3 Example of Least-Squares Common-Factor
Analysis................................................................................... 197
8.3.4 Maximum-Likelihood Estimation of the
Exploratory Common-Factor Model .................................. 199
*8.3.4.1 Maximum-Likelihood Estimation Obtained
Using Calculus ......................................................202
8.3.5 Maximum-Likelihood Estimates ........................................ 206
*8.3.6 Fletcher–Powell Algorithm.................................................. 207
*8.3.7 Applying the Fletcher–Powell Algorithm to
Maximum-Likelihood Exploratory Factor Analysis .........210
8.3.8 Testing the Goodness of Fit of the
Maximum-Likelihood Estimates ........................................212
8.3.9 Optimality of Maximum-Likelihood Estimators .............. 214
8.3.10 Example of Maximum-Likelihood
Factor Analysis ......................................................................215

9 Other Models of Factor Analysis ............................................................. 217


9.1 Introduction ......................................................................................... 217
9.2 Component Analysis .......................................................................... 217
9.2.1 Principal-Components Analysis ......................................... 219
9.2.2 Selecting Fewer Components than Variables .................... 220
9.2.3 Determining the Reliability of Principal Components ....222
9.2.4 Principal Components of True Components..................... 224
9.2.5 Weighted Principal Components ........................................ 226
9.3 Image Analysis .................................................................................... 230
9.3.1 Partial-Image Analysis ......................................................... 231
9.3.2 Image Analysis and Common-Factor Analysis ................ 237
9.3.3 Partial-Image Analysis as Approximation of
Common-Factor Analysis..................................................... 244
9.4 Canonical-Factor Analysis ................................................................. 245
9.4.1 Relation to Image Analysis .................................................. 249
9.4.2 Kaiser’s Rule for the Number of Harris Factors ............... 253
9.4.3 Quickie, Single-Pass Approximation for
Common-Factor Analysis..................................................... 253


9.5 Problem of Doublet Factors ............................................................... 253


9.5.1 Butler’s Descriptive-Factor-Analysis Solution .................254
9.5.2 Model That Includes Doublets Explicitly .......................... 258
9.6 Metric Invariance Properties ............................................................. 262
9.7 Image-Factor Analysis ........................................................................ 263
9.7.1 Testing Image Factors for Significance ............................... 264
9.8 Psychometric Inference in Factor Analysis ..................................... 265


9.8.1 Alpha Factor Analysis .......................................................... 270
9.8.2 Communality in a Universe of Tests .................................. 271
9.8.3 Consequences for Factor Analysis ...................................... 274

10 Factor Rotation ............................................................................................ 275


10.1 Introduction ....................................................................................... 275
10.2 Thurstone’s Concept of a Simple Structure................................... 276
10.2.1 Implementing the Simple-Structure Concept ................. 280
10.2.2 Question of Correlated Factors ......................................... 282
10.3 Oblique Graphical Rotation ............................................................ 286

11 Orthogonal Analytic Rotation ................................................................. 301


11.1 Introduction ....................................................................................... 301
11.2 Quartimax Criterion ......................................................................... 302
11.3 Varimax Criterion.............................................................................. 310
11.4 Transvarimax Methods..................................................................... 312
11.4.1 Parsimax ............................................................................... 313
11.5 Simultaneous Orthogonal Varimax and Parsimax ....................... 315
11.5.1 Gradient Projection Algorithm .......................................... 323

12 Oblique Analytic Rotation ....................................................................... 325


12.1 General ............................................................................................... 325
12.1.1 Distinctness of the Criteria in Oblique Rotation............. 325
12.2 Oblimin Family ................................................................................. 326
12.2.1 Direct Oblimin by Planar Rotations ................................. 328
12.3 Harris–Kaiser Oblique Transformations ....................................... 332
12.4 Weighted Oblique Rotation ............................................................. 336
12.5 Oblique Procrustean Transformations ........................................... 341
12.5.1 Promax Oblique Rotation ..................................................342
12.5.2 Rotation to a Factor-Pattern Matrix Approximating
a Given Target Matrix .........................................................343
12.5.3 Promaj ...................................................................................343
12.5.4 Promin ..................................................................................345
12.6 Gradient-Projection-Algorithm Synthesis .....................................348
12.6.1 Gradient-Projection Algorithm .........................................348
12.6.2 Jennrich’s Use of the GPA .................................................. 351
12.6.2.1 Gradient-Projection Algorithm ........................ 353
12.6.2.2 Quartimin ............................................................ 353


12.6.2.3 Oblimin Rotation ................................................354


12.6.2.4 Least-Squares Rotation to a Target Matrix ..... 357
12.6.2.5 Least-Squares Rotation to a Partially
Specified Target Pattern Matrix........................ 357
12.6.3 Simplimax ............................................................................ 357
12.7 Rotating Using Component Loss Functions ................................. 360
12.8 Conclusions........................................................................................ 366

13 Factor Scores and Factor Indeterminacy ................................................ 369


13.1 Introduction ....................................................................................... 369
13.2 Scores on Component Variables ..................................................... 370
13.2.1 Component Scores in Canonical-Component
Analysis and Image Analysis ............................................ 373
13.2.1.1 Canonical-Component Analysis....................... 373
13.2.1.2 Image Analysis .................................................... 374
13.3 Indeterminacy of Common-Factor Scores ..................................... 375
13.3.1 Geometry of Correlational Indeterminacy ...................... 377
13.4 Further History of Factor Indeterminacy ...................................... 380
13.4.1 Factor Indeterminacy from 1970 to 1980..........................384
13.4.1.1 “Infinite Domain” Position ............................... 392
13.4.2 Researchers with Well-Defined Concepts
of Their Domains ................................................................ 395
13.4.2.1 Factor Indeterminacy from 1980 to 2000 ......... 397
13.5 Other Estimators of Common Factors ........................................... 399
13.5.1 Least Squares .......................................................................400
13.5.2 Bartlett’s Method ................................................................. 401
13.5.3 Evaluation of Estimation Methods ................................... 403

14 Factorial Invariance ....................................................................................405


14.1 Introduction .......................................................................................405
14.2 Invariance under Selection of Variables ........................................405
14.3 Invariance under Selection of Experimental Populations ..........408
14.3.1 Effect of Univariate Selection ............................................408
14.3.2 Multivariate Case ................................................................ 412
14.3.3 Factorial Invariance in Different Experimental
Populations .......................................................................... 414
14.3.4 Effects of Selection on Component Analysis ................... 418
14.4 Comparing Factors across Populations ......................................... 419
14.4.1 Preliminary Requirements for Comparing
Factor Analyses ................................................................... 420
14.4.2 Inappropriate Comparisons of Factors ............................ 421
14.4.3 Comparing Factors from Component Analyses .............422
14.4.4 Contrasting Experimental Populations
across Factors .......................................................................423
14.4.5 Limitations on Factorial Invariance.................................. 424


15 Confirmatory Factor Analysis .................................................................. 427


15.1 Introduction ....................................................................................... 427
15.1.1 Abduction, Deduction, and Induction ........................... 428
15.1.2 Science as the Knowledge of Objects ............................. 429
15.1.3 Objects as Invariants in the Perceptual Field ................ 431
15.1.4 Implications for Factor Analysis .....................................433
15.2 Example of Confirmatory Factor Analysis ....................................434


15.3 Mathematics of Confirmatory Factor Analysis.............................440
15.3.1 Specifying Hypotheses .....................................................440
15.3.2 Identification ...................................................................... 441
15.3.3 Determining Whether Parameters and
Models Are Identified ......................................................444
15.3.4 Identification of Metrics ................................................... 450
15.3.5 Discrepancy Functions ..................................................... 452
15.3.6 Estimation by Minimizing Discrepancy Functions ......454
15.3.7 Derivatives of Elements of Matrices...............................454
15.3.8 Maximum-Likelihood Estimation in
Confirmatory Factor Analysis ......................................... 457
15.3.9 Least-Squares Estimation ................................................. 461
15.3.10 Generalized Least-Squares Estimation ..........................463
15.3.11 Implementing the Quasi-Newton Algorithm ...............463
15.3.12 Avoiding Improper Solutions .......................................... 465
15.3.13 Statistical Tests ................................................................... 466
15.3.14 What to Do When Chi-Square Is Significant ................. 467
15.3.15 Approximate Fit Indices ................................................... 469
15.4 Designing Confirmatory Factor Analysis Models........................ 473
15.4.1 Restricted versus Unrestricted Models .......................... 473
15.4.2 Use for Unrestricted Model ............................................. 475
15.4.3 Measurement Model ......................................................... 476
15.4.4 Four-Step Procedure for Evaluating a Model ............... 477
15.5 Some Other Applications ................................................................. 477
15.5.1 Faceted Classification Designs ........................................ 477
15.5.2 Multirater–Multioccasion Studies .................................. 478
15.5.3 Multitrait–Multimethod Covariance Matrices..............483
15.6 Conclusion ......................................................................................... 489
References ........................................................................................................... 493
Author Index .......................................................................................................505
Subject Index ...................................................................................................... 509



Preface to the Second Edition

This is a book for those who want or need to get to the bottom of things.
It is about the foundations of factor analysis. It is for those who are not content
with accepting on faith the many equations and procedures that constitute
factor analysis but want to know where these equations and procedures came
from. They want to know the assumptions underlying these equations and
procedures so that they can evaluate them for themselves and decide where
and when they would be appropriate. They want to see how it was done, so
they might know how to add modifications or produce new results.
The fact that a major aspect of factor analysis and structural equation
modeling is mathematical means that getting to their foundations is going to
require dealing with some mathematics. Now, compared to the mathematics
needed to fully grasp modern physics, the mathematics of factor analysis and
structural equation modeling is, I am happy to say, relatively easy to learn
and not much beyond a sound course in algebra and certainly not beyond a
course in differential calculus, which is often the first course in mathematics
for science and engineering majors in a university. It is true that factor analy-
sis relies heavily on concepts and techniques of linear algebra and matrix
algebra. But these are topics that can be taught as a part of learning about
factor analysis. Where differential calculus comes into the picture is in those
situations where one seeks to maximize or minimize some algebraic expres-
sion. Taking derivatives of algebraic expressions is an analytic process, but
the derivatives themselves are algebraic in nature. While these are best learned in a course on calculus, one
can still be shown the derivatives needed to solve a particular optimization
problem. Given that the algebra of the derivation of the solution is shown
step by step, a reader may still be able to follow the argument leading to the
result. That, then, is the way this book has been written: I teach the math-
ematics needed as it is needed to understand the derivation of an equation or
procedure in factor analysis and structural equation modeling.
This text may be used at the postgraduate level as a first-semester course
in advanced correlational methods. It will find use in psychology, sociology,
education, marketing, and organizational behavior departments, especially
in their quantitative method programs. Other ancillary sciences may also
find this book useful. It can also be used as a reference for explanations of
various options in commercial computer programs for performing factor
analysis and structural equation modeling.
There is a logical progression to the chapters in this text, reflecting the
hierarchical structure of the mathematical concepts to be covered. First, in
Chapter 2 one needs to learn the basic mathematics, principally linear algebra
and matrix algebra and the elements of differential calculus. Then one needs
to learn about composite variables and their means, variances, covariances,
and correlation in terms of means, variances, and covariances among their
component variables. Then one builds on that to deal with multiple and
partial correlations, which are special forms of composite variables. This is
accomplished in Chapter 3.
Differential calculus will first be encountered in Chapter 4 in demonstrating
where the estimates of the regression weights come from, for this involves
finding the weights of a linear combination of predictor variables that has
either the maximum correlation with the criterion or the minimum expected
squared difference from the criterion.
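As a foretaste of that derivation, here is the standard result it leads to, stated in matrix terms for centered variables (this anticipates Chapter 4 and adds nothing new): the weight vector minimizing the expected squared error of prediction is

    \min_{\mathbf{b}}\; E\!\left[(Y - \mathbf{b}'\mathbf{X})^{2}\right]
    \;\;\Longrightarrow\;\;
    \mathbf{b} = \boldsymbol{\Sigma}_{XX}^{-1}\,\boldsymbol{\sigma}_{XY},

where \boldsymbol{\Sigma}_{XX} is the variance–covariance matrix of the predictors and \boldsymbol{\sigma}_{XY} is the vector of their covariances with the criterion Y.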
In Chapter 5 one uses the concepts of composite variables and multiple
and partial regression to understand the basic properties of the multivariate
normal distribution. Matrix algebra becomes essential to simplify notations
that often involve hundreds of variables and their interrelations.
By this point, in Chapter 6 one is ready to deal with factor analysis, which
is an extension of regression theory wherein the predictor variables are now
unmeasured, hypothetical latent variables, and the dependent variables are
the observed variables. Here we describe the fundamental equation and fun-
damental theorem of factor analysis, and introduce the basic terminology
of factor analysis. In Chapter 7, we consider how common factors may be
extracted. We show that the methods used build on concepts of regression
and partial correlation. We first look at a general algorithm for extracting
factors proposed by Guttman. We then consider the diagonal and the cen-
troid methods of factoring, both of which are of historical interest. Next we
encounter eigenvectors and eigenvalues. Eigenvectors contain the weights
used to combine the observed variables, or their common parts, additively
into new variables that have maximum variances (the eigenvalues); such
variables account for the most information among the original variables in
any one dimension.
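Stated compactly (a standard formulation, not a new assumption), seeking a unit-length weight vector \mathbf{w} that maximizes the variance \mathbf{w}'\mathbf{R}\mathbf{w} of the composite leads, via a Lagrange multiplier, to the classical eigenvalue problem:

    \max_{\mathbf{w}}\; \mathbf{w}'\mathbf{R}\mathbf{w}
    \;\;\text{subject to}\;\; \mathbf{w}'\mathbf{w} = 1
    \;\;\Longrightarrow\;\;
    \mathbf{R}\mathbf{w} = \gamma\,\mathbf{w},

so that the maximized variance \gamma is an eigenvalue of \mathbf{R} and \mathbf{w} its associated eigenvector.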
To perform a common-factor analysis one must first have initial estimates
of the communalities and unique variances. Lower-bound estimates are fre-
quently used. These bounds and their effect on the number of factors to retain
are discussed in Chapter 8. The unique variances are either subtracted from
the diagonal of the correlation matrix, or the diagonal matrix of the recipro-
cals of their square roots is pre- and postmultiplied with the correlation
matrix; the eigenvectors and eigenvalues of the resulting matrix are then
obtained.
formulas for the eigenvectors and eigenvalues of a correlation matrix, say,
are obtained by using differential calculus to solve the maximization prob-
lem of finding the weights of a linear combination of the variables that has
the maximum variance under the constraint that the sum of the squares of
the weights adds to unity. These will give rise to the factors, and the common
factors will be basis vectors of a common-factor space, meaning that the
observed variables are expressible as linear combinations of the factors.
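The following is a minimal computational sketch of the principal-axes step just described; it is not code from this book, and the names R (correlation matrix), u2 (unique-variance estimates), and m (number of factors) are illustrative assumptions:

import numpy as np

def principal_axes_loadings(R, u2, m):
    # Subtract the unique variances from the diagonal, leaving communalities.
    reduced = R - np.diag(u2)
    eigvals, eigvecs = np.linalg.eigh(reduced)   # eigenvalues in ascending order
    gamma = eigvals[::-1][:m]                    # m largest eigenvalues
    A = eigvecs[:, ::-1][:, :m]                  # matching unit-length eigenvectors
    # Loadings: Lambda = A diag(gamma)^(1/2); clip guards small negative roots.
    return A * np.sqrt(np.clip(gamma, 0.0, None))

# A common lower-bound choice for u2 uses squared multiple correlations:
# u2 = 1.0 / np.diag(np.linalg.inv(R))   # i.e., u2_i = 1 - SMC_i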
Maximum-likelihood factor analysis, the equations for which were ulti-
mately solved by Karl Jöreskog (1967), building on the work of precursor

statisticians, also requires differential calculus to solve the maximization
problem involved. Furthermore, the maximum-likelihood estimates of the
model parameters cannot be obtained directly by any algebraic analytic
procedure. The solution has to be obtained numerically and iteratively.
Jöreskog (1967) used a then-new computer algorithm for nonlinear
optimization, the Fletcher–Powell algorithm. We will explain how this
works to obtain the maximum-likelihood solution for the exploratory
factor-analysis model.
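As a rough modern sketch of how such a numerical solution can be set up (an illustration, not Jöreskog's program: SciPy's BFGS quasi-Newton routine stands in for Fletcher–Powell, of which it is a descendant, and the function and argument names are assumptions of this example):

import numpy as np
from scipy.optimize import minimize

def ml_efa(S, m, seed=0):
    # Fit Sigma = L L' + diag(psi) to the sample covariance matrix S by
    # minimizing the ML discrepancy F = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p.
    p = S.shape[0]
    _, logdet_S = np.linalg.slogdet(S)

    def discrepancy(theta):
        L = theta[:p * m].reshape(p, m)     # factor loadings
        psi = np.exp(theta[p * m:])         # exp keeps unique variances positive
        Sigma = L @ L.T + np.diag(psi)
        _, logdet = np.linalg.slogdet(Sigma)
        return logdet + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

    rng = np.random.default_rng(seed)       # random start breaks rotational ties
    theta0 = np.concatenate([0.1 * rng.standard_normal(p * m),
                             np.log(0.5 * np.diag(S))])
    res = minimize(discrepancy, theta0, method="BFGS")  # numerical, iterative
    return res.x[:p * m].reshape(p, m), np.exp(res.x[p * m:])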
Chapter 9 examines several variants of the factor-analysis model: princi-
pal components, weighted principal components, image analysis, canonical
factor analysis, descriptive factor analysis, and alpha factor analysis.
In Chapters 10 (simple structure and graphical rotation), 11 (orthogonal
analytic rotation), and 12 (oblique analytic rotation) we consider factor rota-
tion. Rotation of factors to simple structure concerns transformations of
the common-factor variables into a new set of variables that have a simpler
relationship with the observed variables. But there are numerous math-
ematical criteria for what constitutes a simple-structure solution. All of these
involve solving for a transformation matrix that carries the initial “unrotated
solution” into one that maximizes or minimizes a mathematical expression
constituting the criterion for simple structure. Thus, dif-
ferential calculus is again involved in finding the algorithms for conducting
a rotation of factors, and the solution is obtained numerically and iteratively
using a computer.
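To make “mathematical expression” concrete with a single example: Kaiser's raw varimax criterion (treated in Chapter 11) chooses the rotation maximizing the summed column variances of the squared loadings \lambda_{ij} of the rotated factor-pattern matrix, with p the number of observed variables and m the number of factors:

    V = \sum_{j=1}^{m}\left[\frac{1}{p}\sum_{i=1}^{p}\lambda_{ij}^{4}
        \;-\; \left(\frac{1}{p}\sum_{i=1}^{p}\lambda_{ij}^{2}\right)^{2}\right].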
Chapter 13 addresses whether or not it is possible to obtain scores on the
latent common factors. It turns out that solutions for these scores are not
unique, even though they are optimal. This is the factor-indeterminacy
problem, and it concerns more than getting scores on the factors: there
may be more than one interpretation for a common factor that fits the data
equally well.
Chapter 14 deals with factorial invariance. What solutions for the factors
will reveal the same factors even if we use different sets of observed vari-
ables? What coefficients are invariant in a factor-analytic model under restric-
tion of range? Building on ideas from regression, the solution is effectively
algebraic.
While much of the first 14 chapters is essentially unchanged from the first
edition, the treatment of developments that have taken place since 1972, when
the first edition was published, has been updated and revised.
I have changed the notation to adopt a notation popularized by Karl
Jöreskog for the common-factor model. I now write the model equation as
$\mathbf{Y} = \boldsymbol{\Lambda}\mathbf{X} + \boldsymbol{\Psi}\mathbf{E}$ instead of $\mathbf{Z} = \mathbf{F}\mathbf{X} + \mathbf{U}\mathbf{V}$, and the equation of the fundamental
theorem as $\mathbf{R}_{YY} = \boldsymbol{\Lambda}\boldsymbol{\Phi}_{XX}\boldsymbol{\Lambda}' + \boldsymbol{\Psi}^{2}$ instead of $\mathbf{R}_{ZZ} = \mathbf{F}\mathbf{C}_{XX}\mathbf{F}' + \mathbf{U}^{2}$.
I have added a new Chapter 5 on the multivariate normal distribution
and its general properties along with the concept of maximum-likelihood
estimation based on it. This will increase by one the chapter numbers for
subsequent chapters corresponding to those in the first edition. However,
Chapter 12, on procrustean rotation in the first edition, has been dropped,
and this subject has been briefly described in the new Chapter 12 on oblique
rotation. Chapters 13 and 14 deal with factor scores and indeterminacy, and
factorial invariance under restriction of range, respectively.
Other changes and additions are as follows. I am critical of several of the
methods that are commonly used to determine the number of factors to retain
because they are not based on sound statistical or mathematical theory. I have
now directed some of these criticisms toward some methods in renumbered
Chapter 8. However, since then I have also realized that, in most studies, a
major problem with determining the number of factors concerns the presence
of doublet variance uncorrelated with n − 2 observed variables and correlated
between just the two of them. Doublet correlations contribute to the commu-
nalities, but lead to an overestimate of the number of overdetermined common
factors. But the common-factor model totally ignores the possibility of dou-
blets, while they are everywhere in our empirical studies. Both unique factor
variance and doublet variance should be separated from the overdetermined
common-factor variance. I have since rediscovered that a clever but heuristic
solution for doing this, ignored by most factor-analytic texts, appeared in a
paper I cited but did not understand sufficiently to describe in the first edition.
This paper was by John Butler (1968), and he named his method “descriptive
factor analysis.” I now cover this more completely (along with a new method of
my own I call “doublet factor analysis”) in Chapter 9 as well as other methods
of factor analysis. It provides an objective way to determine the number of
overdetermined common factors to retain; those who use the eigenvalues-
greater-than-1.00 rule of principal components will find the smaller number
of factors it retains more to their liking.
I show in Chapter 9 that Kaiser's (1963) formula for the principal-axes
factor-structure matrix of an image analysis, $\boldsymbol{\Lambda} = \mathbf{S}\mathbf{A}_r\left[(\gamma_i - 1)^2/\gamma_i\right]_r^{1/2}$, is not
the correct one to use, because this represents the “covariances” between the
“image” components of the variables and the underlying “image factors.”
The proper factor-structure matrix that represents the correlations between
the unit-variance “observed” variables and the unit-variance image factors is
none other than the weighted principal-component solution: $\boldsymbol{\Lambda} = \mathbf{S}\mathbf{A}_r\left[\gamma_i\right]_r^{1/2}$.
In the 1990s, several new approaches to oblique rotation to simple struc-
ture were published, and these and earlier methods of rotation were in turn
integrated by Robert Jennrich (2001, 2002, 2004, 2006) around a simple core
computing algorithm, the “gradient projection algorithm,” which seeks
the transformation matrix for simultaneously transforming all the factors.
I have therefore completely rewritten Chapter 12 on analytic oblique rotation
on the basis of this new, simpler algorithm, and I show examples of its use.
In the 1970s, factor score indeterminacy was further developed by several
authors, and in 1993 many of them published additional exchanges on the
subject. A discussion of these developments has now been included in an
expanded Chapter 13 on factor scores.


Factor analysis was also extended to confirmatory factor analysis and
structural equation modeling by Jöreskog (1969, 1973, 1974, 1975). As a con-
sequence, these later methodologies diverted researchers from pursuing
exclusively exploratory studies to pursuing hypothesis-testing studies.
Confirmatory factor analysis is now best treated separately in a text on struc-
tural equation modeling, as a special case of that method, but I have rewrit-
factor analysis still remains useful in many circumstances as
an abductive, exploratory technique, justifying its study today, although its
limitations are now better understood.
I wish to thank Robert Jennrich for his help in understanding the gra-
dient projection algorithm. Jim Steiger was very helpful in clarifying Peter
Schönemann’s papers on factor indeterminacy, and I owe him a debt of
gratitude.
I wish to thank all those who, over the past 30 years, have encouraged me
to revise this text, specifically, Jim Steiger, Michael Browne, Abigail Panter,
Ed Rigdon, Rod McDonald, Bob Cudeck, Ed Loveland, Randy Engle, Larry
James, Susan Embretson, and Andy Smith. Without their encouragement,
I might have abandoned the project. I owe Henry Kaiser, now deceased, an
unrepayable debt for his unselfish help in steering me into a career in factor
analysis and in getting me the postdoctoral fellowship at the University of
North Carolina. This not only made the writing of the first edition of this
book possible, but it also placed me in the intellectually nourishing environ-
ment of leading factor analysts, without which I could never have gained the
required knowledge to write the first edition or its current sequel.
I also acknowledge the loving support my wife Jane has given me through
all these years as I labored on this book into the wee hours of the night.

Stanley A. Mulaik



Preface to the First Edition

When I was nine years old, I dismantled the family alarm clock. With gears,
springs, and screws—all novel mechanisms to me—scattered across the
kitchen table, I realized that I had learned two things: how to take the clock
apart and what was inside it. But this knowledge did not reveal the mysteries
of the all-important third and fourth steps in the process: putting the clock
back together again and understanding the theory of clocks in general. The
disemboweled clock ended up in the trash that night. A common experience,
perhaps, but nonetheless revealing.
There are some psychologists today who report a similar experience in
connection with their use of factor analysis in the study of human abilities or
personality. Very likely, when first introduced to factor analysis they had the
impression that it would allow them to analyze human behavior into its com-
ponents and thereby facilitate their activities of formulating structural theo-
ries about human behavior. But after conducting a half-dozen factor analyses
they discovered that, in spite of the plethora of factors produced and the piles
of computer output stacked up in the corners of their offices, they did not
know very much more about the organization of human behavior than they
knew before factor analyzing. Perhaps after factor analyzing they would
admit to appreciating more fully the complexity of human behavior, but in
terms of achieving a coherent, organized conception of human behavior, they
would claim that factor analysis had failed to live up to their expectations.
With that kind of negative appraisal being commonly given to the tech-
nique of factor analysis, one might ask why the author insisted on writing a
textbook about the subject. Actually, I think the case against factor analysis
is not quite as grim as depicted above. Just as my youthful experience with
the alarm clock yielded me fresh but limited knowledge about the works
inside the clock, factor analysis has provided psychologists with at least fresh
but limited knowledge about the components, although not necessarily the
organization, of human psychological processes.
If some psychologists are disillusioned by factor analysis’ failure to
provide them with satisfactory explanations of human behavior, the fault
probably lies not with the model of factor analysis itself but with the mindless
application of it; many of the early proponents of the method encouraged this
mindless application by their extravagant claims for the method’s efficacy in
discovering, as if by magic, underlying structures. Consequently, rather than
use scientific intuition and already-available knowledge about the properties
of variables under study to construct theories about the nature of relation-
ships among the variables and formulate these theories as factor-analytic
models to be tested against empirical data, many researchers have randomly
picked variables representing a domain to be studied, intercorrelated the
variables, and then factor-analyzed them expecting that the theoretically
important variables of the domain would be revealed by the analysis. Even
Thurstone (1947, pp. 55–56), who considered exploration of a new domain
a legitimate application of factor-analytic methods, cautioned that explora-
tion with factor analysis required carefully chosen variables and that results
from using the method were only provisional in suggesting ideas for further
research. Factor analysis is not a method for discovering full-blown struc-
tural theories about a domain. Experience has justified Thurstone’s cau-
tion for, more often than one would like to admit, the factors obtained from
purely exploratory studies have been difficult to integrate with other theory,
if they have been interpretable at all. The more substantial contributions of
factor analysis have been made when researchers postulated the existence
of certain factors, carefully selected variables from the domain that would
isolate their existence, and then proceeded to factor-analyze in such a way as
to reveal these factors as clearly as possible. In other words, factor analysis
has been more profitably used when the researcher knew what he or she was
looking for.
Several recent developments, discussed in this book, have made it pos-
sible for researchers to use factor-analytic methods in a hypothesis-testing
manner.
For example, in the context of traditional factor-analytic methodology,
using procrustean transformations (discussed in Chapter 12), a researcher
can rotate an arbitrary factor-pattern matrix to approximate a hypothetical
factor-pattern matrix as much as possible. The researcher can then examine
the goodness of fit of the rotated pattern matrix to the hypothetical pattern
matrix to evaluate his or her hypothesis.
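
To fix ideas, here is a small computational sketch of the orthogonal case
of the procrustean problem, written in Python with the NumPy library. The
book itself presents no computer code, so the function name, the use of the
singular value decomposition, and the least-squares fit index below are
illustrative choices rather than the text's own notation.

import numpy as np

def procrustes_rotation(pattern, target):
    # Orthogonal Procrustes: choose a rotation t minimizing the sum of
    # squared differences between pattern @ t and the hypothesized target.
    # One standard solution: t = u @ vt from the SVD of pattern.T @ target.
    u, _, vt = np.linalg.svd(pattern.T @ target)
    t = u @ vt
    rotated = pattern @ t
    fit = ((rotated - target) ** 2).sum()  # residual sum of squares
    return rotated, t, fit

Oblique procrustean transformations, also treated in Chapter 12, relax the
orthogonality restriction on the transformation matrix.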
In another recent methodological development (discussed in Chapter 15)
it is possible for the researcher to formulate a factor-analytic model for a set
of variables to whatever degree of completeness he or she may desire, leav-
ing the unspecified parameters of the model to be estimated in such a way
as to optimize goodness of fit of the hypothetical model to the data, and
then to test the overall model for goodness of fit against the data. The latter
approach to factor analysis, known as confirmatory factor analysis or analy-
sis of covariance structures, represents a radical departure from the tradi-
tional methods of performing factor analysis and may eventually become
the predominant method of using the factor-analytic model.
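
As a rough numerical illustration of this second approach (though not of
the maximum-likelihood machinery developed in Chapter 15), the following
Python sketch fits a model in which the researcher fixes chosen loadings
at zero in advance and leaves the rest free, estimating the free parameters
by unweighted least squares. The factors are taken to be uncorrelated for
brevity, the formal test of fit is omitted, and all names and starting
values are invented for the example.

import numpy as np
from scipy.optimize import minimize

def fit_restricted_model(r, free):
    # r: observed correlation matrix; free: 0/1 matrix marking which
    # loadings are estimated (1) and which are fixed at zero (0).
    p, m = free.shape
    mask = free.astype(bool)
    k = int(mask.sum())

    def implied(theta):
        lam = np.zeros((p, m))
        lam[mask] = theta[:k]
        uniq = theta[k:] ** 2  # squaring keeps unique variances nonnegative
        return lam @ lam.T + np.diag(uniq)

    def discrepancy(theta):  # unweighted least-squares discrepancy
        return ((r - implied(theta)) ** 2).sum()

    start = np.concatenate([np.full(k, 0.6), np.full(p, 0.6)])
    result = minimize(discrepancy, start, method="L-BFGS-B")
    lam = np.zeros((p, m))
    lam[mask] = result.x[:k]
    return lam, result.x[k:] ** 2, result.fun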
The objective of this book, as its title Foundations of Factor Analysis suggests,
is to provide the reader with the mathematical rationale for factor-analytic
procedures. It is thus designed as a text for students of the behavioral or
social sciences at the graduate level. The author assumes that the typical stu-
dent who uses this text will have had an introductory course in calculus, so
that he or she will be familiar with ordinary differentiation, partial differen-
tiation, and the maximization and minimization of functions using calculus.
There will be practically no reference to integral calculus, and many of the
sections will be comprehensible with only a good grounding in matrix
algebra, which is provided in Chapter 2. Many of the mathematical concepts
required to understand a particular factor-analytic procedure are introduced
along with the procedure.
The emphasis of this book is algebraic rather than statistical. The key con-
cept is that (random) variables may be treated as vectors in a unitary vec-
tor space in which the scalar product of vectors is a defined operation. The
empirical relationships between the variables are represented either by the
distances or the cosines of the angles between the corresponding vectors.
The factors of the factor-analytic model are basis vectors of the unitary vector
space of observed variables. The task of a factor-analytic study is to find a set
of basis vectors with optimal properties from which the vectors correspond-
ing to the observed variables can be derived as linear combinations.
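
A short numerical check of the key concept, again as an illustrative
Python sketch and not as anything from the text itself: once two variables
are expressed as deviation vectors, their correlation equals the cosine of
the angle between the corresponding vectors.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.6 * x + 0.8 * rng.normal(size=500)  # y built to correlate with x

xd, yd = x - x.mean(), y - y.mean()  # deviation (centered) vectors
cosine = (xd @ yd) / (np.linalg.norm(xd) * np.linalg.norm(yd))
print(np.isclose(cosine, np.corrcoef(x, y)[0, 1]))  # True: r is the cosine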
Chapter 1 provides an introduction to the role factor-analytic models can
play in the formulation of structural theories, with a brief review of the his-
tory of factor analysis. Chapter 2 provides a mathematical review of concepts
of algebra and calculus and an introduction to vector spaces and matrix
algebra. Chapter 3 introduces the reader to properties of linear composite
variables and the representation of these properties by using matrix algebra.
Chapter 4 considers the problem of finding a particular linear combination
of a set of random variables that is minimally distant from some external
random variable in the vector space containing these variables. This discus-
sion introduces the concepts of multiple correlation and partial correlation;
the chapter ends with a discussion on how image theory clarifies the mean-
ing of multiple correlation. Multiple- and partial-correlational methods are
seen as essential, later on, to understanding methods of extracting factors.
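
In computational terms, the minimally distant linear combination is the
least-squares projection of the external variable onto the space spanned
by the predictors, and the multiple correlation is the cosine of the angle
between the variable and its projection. The sketch below states this in
NumPy; the function name is mine, not the book's.

import numpy as np

def multiple_r(x, y):
    # Project y onto the span of the centered predictor columns of x; the
    # projection is the linear combination of predictors closest to y.
    xd = x - x.mean(axis=0)
    yd = y - y.mean()
    b, *_ = np.linalg.lstsq(xd, yd, rcond=None)
    yhat = xd @ b
    return (yd @ yhat) / (np.linalg.norm(yd) * np.linalg.norm(yhat))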
Chapter 5 plunges into the theory of factor analysis proper with a dis-
cussion on the fundamental equations of common-factor analysis. Chapter 6
discusses methods of extracting factors; it begins with a discussion on a
general algorithm for extracting factors and concludes with a discussion on
methods for obtaining principal-axes factors by finding the eigenvectors and
eigenvalues of a correlation matrix.
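
The computation that Chapter 6 builds toward can be stated in a few lines.
The Python sketch below makes the simplifying assumption that unities,
rather than communality estimates, occupy the diagonal of the correlation
matrix, so strictly it yields a principal-components solution; with
communality estimates in the diagonal, as in the common-factor model, the
same eigenstructure computation applies to the reduced correlation matrix.

import numpy as np

def principal_axes(r, n_factors):
    vals, vecs = np.linalg.eigh(r)  # eigh suits the symmetric matrix r
    order = np.argsort(vals)[::-1]  # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    # Loadings are eigenvectors scaled by the roots of their eigenvalues.
    return vecs[:, :n_factors] * np.sqrt(np.maximum(vals[:n_factors], 0))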
Chapter 7 considers the model of common-factor analysis in greater
detail with a discussion on (1) the importance of overdetermining factors
by the proper selection of variables, (2) the inequalities regarding the lower
bounds to the communalities of variables, and (3) the fitting of the unre-
stricted common-factor model to a correlation matrix by least-squares and
maximum-likelihood estimation.
Chapter 8 discusses factor-analytic models other than the common-factor
model, such as component analysis, image analysis, image factor analysis,
and alpha factor analysis. Chapter 9 introduces the reader to the topic of factor
rotation to simple structure using graphical rotational methods. This discus-
sion is followed by a discussion on analytic methods of orthogonal rotation in
Chapter 10 and of analytic methods of oblique rotation in Chapter 11.
Chapters 12 and 13 consider methods of procrustean transformation of
factors and the meaning of factorial indeterminacy in common-factor
analysis in connection with the problem of estimating the common factors,
respectively. Chapter 14 deals with the topic of factorial invariance of the
common-factor model over sampling of different variables and over selec-
tion of different populations. The conclusion this chapter draws is that the
factor-pattern matrix is the only invariant of a common-factor-analysis model
under conditions of varied selection of a population.
Chapter 15 takes up the new developments of confirmatory factor analy-
sis and analysis of covariance structures, which allow the researcher to test
hypotheses using the model of common-factor analysis. Finally, Chapter 16
shows how various concepts from factor analysis can be applied to methods
of multivariate analysis, with a discussion on multiple correlations, step-
down regression onto a set of several variables, canonical correlations, and
multiple discriminant analysis.
Some readers may miss discussions in this book of recent offshoots of
factor-analytic theory such as 3-mode factor analysis, nonlinear factor analy-
sis, and nonmetric factor analysis. At one point of planning, I felt such top-
ics might be included, but as the book developed, I decided that if all topics
pertinent to factor analysis were to be included, the book might never be
finished. And so this book is confined to the more classic methods of factor
analysis.
I am deeply indebted to the many researchers upon whose published
works I have relied in developing the substance of this book. I have tried to
give credit wherever it was due, but in the event of an oversight I claim no
credit for any development made previously by another author.
I especially wish to acknowledge the influence of my onetime teacher,
Dr. Calvin W. Taylor of the University of Utah, who first introduced me to
the topic of factor analysis while I was studying for my PhD at the University
of Utah, and who also later involved me in his factor-analytic research as a
research associate. I further wish to acknowledge his role as the primary
contributing cause in the chain of events that led to the writing of this book,
for it was while substituting for him, at his request, as the instructor of his
factor-analysis course at the University of Utah in 1965 and 1966 that I first
conceived of writing this book. I also wish to express my gratitude to him for
generously allowing me to use his complete set of issues of Psychometrika and
other factor-analytic literature, which I relied upon extensively in the writing
of the first half of this book.
I am also grateful to Dr. Henry F. Kaiser who, through correspondence with
me during the early phases of my writing, widened my horizons and helped
and encouraged me to gain a better understanding of factor analysis.
To Dr. Lyle V. Jones of the L. L. Thurstone Psychometric Laboratory,
University of North Carolina, I wish to express my heartfelt appreciation for
his continued encouragement and support while I was preparing the manu-
script of this book. I am also grateful to him for reading earlier versions of
the manuscript and for making useful editorial suggestions that I have tried
to incorporate in the text. However, I accept full responsibility for the final
form that this book has taken.
I am indebted to the University of Chicago Press for granting me permis-
sion to reprint Tables 10.10, 15.2, and 15.8 from Harry H. Harman’s Modern
Factor Analysis, first edition, 1960. I am also indebted to Chester W. Harris,
managing editor of Psychometrika, and to the following authors for granting
me permission to reprint tables taken from their articles that appeared in
Psychometrika: Henry F. Kaiser, Karl G. Jöreskog, R. Darrell Bock and Rolf
E. Bargmann, R. I. Jennrich, and P. F. Sampson. I am especially grateful to
Lyle V. Jones and Joseph M. Wepman for granting me permission to reprint
a table from their article, “Dimensions of language performance,” which
appeared in the Journal of Speech and Hearing Research, September, 1961.
I am also pleased to acknowledge the contribution of my colleagues,
Dr. Elliot Cramer, Dr. John Mellinger, Dr. Norman Cliff, and Dr. Mark
Appelbaum, who in conversations with me at one time or another helped me
to gain increased insights into various points of factor-analytic theory, which
were useful when writing this book. In acknowledging this contribution,
however, I take full responsibility for all that appears in this book.
I am also grateful for the helpful criticism of earlier versions of the manu-
script of this book, which were given to me by my students in my factor-
analysis classes at the University of Utah and at the University of North
Carolina.
I also wish to express my gratitude to the following secretaries who at one
time or another struggled with the preparation of portions of this manuscript:
Elaine Stewart, Judy Nelson, Ellen Levine, Margot Wasson, Judy Schenck,
Jane Pierce, Judy Schoenberg, Betsy Schopler, and Bess Autry. Much of their
help would not have been possible, however, without the support given to
me by the L. L. Thurstone Psychometric Laboratory under the auspices of
Public Health Service research grant No. M-10006 from the National Institute
of Mental Health, and National Science Foundation Science Development
grant No. GU 2059, for which I am ever grateful.

Stanley A. Mulaik

1 Introduction

1.1 Factor Analysis and Structural Theories


By a structural theory we shall mean a theory that regards a phenomenon as
an aggregate of elemental components interrelated in a lawful way. An excel-
lent example of a structural theory is the theory of chemical compounds:
Chemical substances are lawful compositions of the atomic elements, with
the laws governing the compositions based on the manner in which the
electron orbits of different atoms interact when the atoms are combined in
molecules.
Structural theories occur in other sciences as well. In linguistics, for exam-
ple, structural descriptions of language analyze speech into phonemes or
morphemes. The aim of structural linguistics is to formulate laws govern-
ing the combination of morphemes in a particular language. Biology has a
structural theory, which takes, as its elemental components, the individual
cells of the organism and organizes them into a hierarchy of tissues, organs,
and systems. In the study of the inheritance of characters, modern geneticists
regard the manifest characteristics of an organism (phenotype) as a function
of the particular combination of genes (genotype) in the chromosomes of the
cells of the organism.
Structural theories occur in psychology as well. At the most fundamental
level a psychologist may regard behaviors as ordered aggregates of cellular
responses of the organism. However, psychologists still have considerable
difficulty in formulating detailed structural theories of behavior because
many of the physical components necessary for such theories have not been
identified and understood. But this does not make structural theories impos-
sible in psychology. The history of other sciences shows that scientists can
understand the abstract features of a structure long before they know the
physical basis for this structure. For example, the history of chemistry indi-
cates that chemists could formulate principles regarding the effects of mixing
compounds in certain amounts long before the atomic and molecular aspects
of matter were understood. Gregor Mendel stated the fundamental laws of
inheritance before biologists had associated the chromosomes of the cell
with inheritance. In psychology, Isaac Newton, in 1704, published a simple
mathematical model of the visual effects of mixing different hues, but nearly
a hundred years elapsed before Thomas Young postulated the existence of
three types of color receptors in the retina to account for the relationships
described in Newton’s model. And only a half-century later did physiologist
Helmholtz actually give a physiological basis to Young’s theory. Other physi-
ological theories subsequently followed. Much of psychological theory today
still operates at the level of stating relationships among stimulus conditions
and gross behavioral responses.
One of the most difficult problems of formulating a structural theory
involves discovering the rules that govern the composition of the aggregates
of components. The task is much easier if the scientist can show that the
physical structure he is concerned with is isomorphic to a known mathe-
matical structure. Then, he can use the many known theorems of the math-
ematical structure to make predictions about the properties of the physical
structure. In this regard, George Miller (1964) suggests that psychologists
have used the structure of euclidean space more than any other mathemati-
cal structure to represent structural relationships of psychological processes.
He cites, for example, how Isaac Newton’s (1704) model for representing the
effects of mixing different hues involved taking the hues of the spectrum in
their natural order and arranging them as points appropriately around the
circumference of a circle. The effects of color mixtures could be determined
by proportionally weighting the points of the hues in the mixture accord-
ing to their contribution to the mixture and finding the center of gravity of
the resulting points. The closer this center of gravity approached the center
of the color circle, the more the resulting color would appear gray. In addi-
tion, Miller cites Schlosberg’s (1954) representation of perceived emotional
similarities among facial expressions by a two-dimensional graph with one
dimension interpreted as pleasantness versus unpleasantness and the other
as rejection versus attention, and Osgood’s (1952) analysis of the compo-
nents of meaning of words into three primary components: (1) evaluation,
(2) power, and (3) activity.
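
Newton's center-of-gravity rule is simple to state numerically. In the
Python sketch below, the hue positions and mixture weights are invented
for the example: hues sit on the circumference of a unit circle, a mixture
is the weighted centroid of its component points, and a centroid near the
center of the circle predicts a grayish result.

import numpy as np

angles = np.radians([0.0, 120.0, 250.0])  # hypothetical hue positions
weights = np.array([0.5, 0.3, 0.2])  # proportions in the mixture

points = np.column_stack([np.cos(angles), np.sin(angles)])
centroid = weights @ points / weights.sum()  # the center of gravity
print(centroid, np.linalg.norm(centroid))  # distance 0 would be pure gray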
Realizing that spatial representations have great power to suggest the exis-
tence of important psychological mechanisms, psychologists have developed
techniques, such as metric and nonmetric factor analysis and metric and non-
metric multidimensional scaling, to create, systematically, spatial representa-
tions from empirical measurements. All four of these techniques represent
objects of interest (e.g., psychological “variables” or stimulus “objects”) as
points in a multidimensional space. The points are so arranged with respect
to one another in the space as to reflect relationships of similarity among the
corresponding objects (variables) as given by empirical data on these objects.
Although a discussion of the full range of techniques using spatial repre-
sentations of relationships found in data would be of considerable interest,
we shall confine ourselves, in this book, to an in-depth examination of the
methods of factor analysis. The reason for this is that the methodology of
factor analysis is historically much more fully developed than, say, that
of multidimensional scaling; as a consequence, prescriptions for the ways
