Amos Book User Guide
1.1. Introduction
SEM is an extension of the general linear model (GLM) that enables a researcher to test a set of regression equations simultaneously. In other words, the purpose of SEM is to examine a set of relationships between one or more exogenous variables (independent variables) and one or more endogenous variables (dependent variables). SEM software can test traditional models, but it also permits examination of more complex relationships and models, such as confirmatory factor analysis and time series analysis. Moreover, through SEM, the structural relations can be modeled graphically to enable a clear understanding of the theory under study.
SEM programs provide overall tests of model fit and individual parameter estimate tests simultaneously.
Regression coefficients, means, and variances may be compared simultaneously, even across different groups.
Longitudinal data, databases with autocorrelated error structures (time series analysis), databases with non-normally distributed variables, and incomplete data can be handled.
Because of these advantages of SEM, it has become a popular methodology in non-experimental research.
The path diagram
A path diagram is a visual representation of the relations among variables that are assumed to hold in the study. Basically, four geometric symbols are used in path diagrams: circles or ellipses represent unobserved latent variables, squares or rectangles represent observed variables, single-headed arrows represent the effect of one variable on another variable, and double-headed arrows represent covariance or correlation between two variables. Figure 1.1 is a simple model used to explain the meanings of the symbols of a path diagram.
Figure 1.1 - A simple path diagram: latent variables LV1 and LV2, observed variables OV1-OV5, measurement errors ME1-ME5, and residual error RE1
In the above model (Figure 1.1), there are two latent variables (LV1 is the exogenous variable and LV2 is the endogenous variable) and five observed variables; three are used to measure LV1 and two are used to measure LV2. In addition, there are five measurement errors (ME1-ME5), one associated with each observed variable, and one residual error associated with the factor being predicted (LV2).
There is an important distinction between measurement error and residual error. Measurement error reflects error in measuring the underlying factor, or latent variable, through an observed variable. Residual error represents error in the prediction of an endogenous factor from an exogenous factor. For example, the residual error shown in Figure 1.1 (RE1) represents error in the prediction of the endogenous factor (LV2) from the exogenous factor (LV1).
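Written as equations, the model in Figure 1.1 can be sketched as follows; the loading coefficients λ and the structural coefficient γ are generic symbols introduced here for illustration and do not appear in the figure itself:

OV1 = λ1·LV1 + ME1,  OV2 = λ2·LV1 + ME2,  OV3 = λ3·LV1 + ME3
OV4 = λ4·LV2 + ME4,  OV5 = λ5·LV2 + ME5
LV2 = γ·LV1 + RE1

The first two lines are the measurement part (each observed variable is an imperfect indicator of its latent variable), and the last line is the structural part, where RE1 captures the portion of LV2 that LV1 does not explain.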
The general SEM can be divided into two sub-models: measurement models and structural models. The measurement model shows the relationship between observed and latent variables. In other words, it represents the CFA model specifying the pattern by which each measure loads on a particular factor. The structural model, in contrast, shows the relationship between latent variables. There are two measurement models and one structural model in Figure 1.1 discussed earlier. Figure 1.2 shows an example of a measurement model and Figure 1.3 shows an example of a structural model; both are sub-models derived from the model given in Figure 1.1.
Figure 1.2 - An example measurement model; Figure 1.3 - An example structural model (LV1 and LV2)
1.4. Chapter Summary
SEM can test a complex set of regression equations simultaneously. Further, in addition to the text outputs, SEM can model the relationships graphically. By using SEM, a researcher can take a confirmatory approach to data analysis. It also estimates error variance parameters. Further, SEM can incorporate both observed and latent variables, whereas earlier methods are based on observed measurements only. In addition to the above advantages, the researcher gets a unifying framework that fits numerous linear models by using SEM. It provides overall tests of model fit and individual parameter estimate tests simultaneously, even across different groups.
The general SEM can be divided into two sub-models: measurement models and
structural models. The measurement model shows the relationship between observed
and latent variables. The structural model shows the relationship between latent
variables.
Chapter Two
Introduction to AMOS program
Once AMOS Graphics is open, you can see the AMOS Graphics screen given in Figure 2.1 below. You can use the tools on the screen to draw the path diagram in the empty space of the screen.
In order to start drawing, you first need to select the data file. The Data Files dialog allows you to specify the database file (or files) to be analyzed. To select the database file, click on "File" shown at the top of the graphics screen and then select "Data Files".
Then click on "File Name", select the relevant database file, and click "OK".
In Figure 2.4 above, SPSS data files are indicated, but data files from other programs can also be accessed. By clicking "View Data" in the AMOS dialog box given in Figure 2.3, it is possible to see the selected data file.
You can use the rectangle tool to draw observed variables in the diagram, or the tool to draw a latent variable or add an indicator to a latent variable.
Figure 2.5- Drawing the path Diagram
Observed variables can also be taken directly from your data set without naming them as discussed above. Use the "List Variables in Dataset" button on the left-hand side to see the variables in your dataset.
Drag and drop the observed variables that you want to include in your model onto the
gray workspace as shown in Figure 2.8 below.
Figure 2.8- Dragging Observed Variables to the Path Diagram
Similarly, the path diagram can be rotated using the rotate button on the left-hand side.
In Figure 2.9 above, there are two latent exogenous variables, product knowledge and process knowledge. In addition, there is a latent endogenous variable, vendor recommendations. Single-headed arrows point towards the endogenous variable (vendor recommendations), which is predicted as a linear combination of the other two variables, product knowledge and process knowledge. Error is an unobserved variable and is therefore drawn with an ellipse. The double-headed arrow shows the correlation between product knowledge and process knowledge.
In other words, effects are represented by single-headed arrows in the path diagram, while correlations and covariances are represented by bidirectional arrows, which represent relationships without an explicitly defined causal direction. In the above diagram there is an association or relationship between product knowledge and process knowledge, but a claim cannot be made that product knowledge affects process knowledge or vice versa.
Since effects are represented by single-headed arrows in path diagrams, in the above path diagram we can claim that product knowledge affects the scores observed on the measured variables p1, p2, etc.
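AMOS specifies this model by drawing, but the same specification can also be written as text-based model syntax in a script-driven SEM tool. The following is a minimal sketch using the Python package semopy (not part of AMOS), shown only to make the meaning of the arrows concrete; apart from p1 and p2, which appear in the text, the indicator names and the file name survey.csv are hypothetical placeholders, not variables from this book's data set.

# Hypothetical sketch of the Figure 2.9 model in semopy (lavaan-style syntax)
import pandas as pd
import semopy

model_desc = """
# Measurement model: each latent variable is measured by its indicators
ProductKnowledge =~ p1 + p2 + p3
ProcessKnowledge =~ k1 + k2 + k3
VendorRecommendations =~ v1 + v2 + v3
# Structural model: single-headed arrows (effects)
VendorRecommendations ~ ProductKnowledge + ProcessKnowledge
# Double-headed arrow: covariance between the exogenous latent variables
ProductKnowledge ~~ ProcessKnowledge
"""

data = pd.read_csv("survey.csv")    # placeholder data file
model = semopy.Model(model_desc)
model.fit(data)                     # estimate the model
print(model.inspect())              # parameter estimates
print(semopy.calc_stats(model))     # fit statistics (chi-square, RMSEA, CFI, ...)

In AMOS itself, the equivalent information is entered by drawing the ellipses, rectangles, and arrows described above.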
Chapter-3
Running the Model and Interpreting the Results
Then, select the Output tab in the Analysis Properties dialog box. The following dialog box will appear to select the outputs you require. For this example, Minimization history, Standardized estimates, and Squared multiple correlations are checked.
Next, examine the Estimation tab. This tab provides a check box that allows you to estimate means and intercepts if your database has any cases with incomplete data. AMOS will require you to estimate means and intercepts if your database has any missing data on observed variables included in your model. Because this model's database does not contain any missing data and we are not interested in means at present, we leave the Estimation tab settings at their default values. Before you run the model, be sure to save it by choosing Save As from the File menu and saving a copy of the model file to an appropriate location on your computer's disk drive. To run the model, close the Analysis Properties window and click on the Calculate Estimates tool icon, which resembles an abacus.
In order to view the output path diagram, click the relevant icon shown in Figure 3.3.
3.4. Correlation Coefficient
The Pearson correlation coefficient provides the basis for point estimation (and a test of significance) and explains the variance accounted for in a dependent variable by an independent variable. In order to predict a dependent variable from an independent variable, a linear regression analysis is required. In addition, partial and part correlations identify specific bivariate relationships between variables, showing the unique variance shared by two variables while controlling for the influence of other variables.
Different types of correlation coefficients are used to check the correlation between variables, depending on the properties of the scales of measurement.
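As a reminder of the quantities involved, for two variables x and y the Pearson coefficient is the covariance scaled by the two standard deviations, and its square is the proportion of variance accounted for:

r = cov(x, y) / (s_x · s_y),  r² = proportion of variance in the dependent variable accounted for by the independent variable.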
discretely as two values
(dichotomous).
Though this chapter has so far discussed the procedure for running the model, there are several steps in an SEM analysis: model specification, model identification, model testing, and model modification.
Models in which there is only one possible solution for each parameter estimate are said to be just-identified. Models for which there are an infinite number of possible parameter estimate values are said to be underidentified. Finally, models that have more than one possible solution (but one best or optimal solution) for each parameter estimate are considered overidentified.
3. Model generating approach - If the data do not fit the model specified by the researcher, model modification is done to arrive at a final best model. AMOS generates alternative models by specifying optional and/or required paths in a model. Hence, in AMOS, the researcher does not need to generate and delete paths manually to find the best model.
3.8. Types of Model Fit Criteria
In order to test the model fit, different criteria can be checked. Some of the recognized and popular criteria are given in Table 3.2 below.
Table 3.2 - Model fit criteria

Criterion | Range | Interpretation
Normed Fit Index (NFI) | 0 (no fit) to 1 (perfect fit) | A value close to .95 reflects a good model fit
Normed Chi-Square | 1.0-5.0 | Less than 1.0 is a poor model fit; more than 5.0 reflects a need for improvement
Parsimonious Fit Index | 0 (no fit) to 1 (perfect fit) | Compare values in alternative models
Akaike Information Criterion (AIC) | 0 (perfect fit) to negative value (poor fit) | Compares values in alternative models

Source: "A Beginner's Guide to Structural Equation Modeling", Randall E. Schumacker, 3rd Edition, 2010
Relating to the chi-square test, the model is considered to fit the data if the χ² value is low relative to the degrees of freedom, with a non-significant p value (p > 0.05).
In the following path diagram in Figure 3.5, the first latent variable, product knowledge, has been measured using five observed variables, and the second latent variable, process knowledge, has been measured using four observed variables. In addition, vendor recommendation has been measured using nine observed variables.
In order to see the graphical output, click the relevant icon, and you can then see the standardized and unstandardized estimates by clicking the relevant estimate.
The notes for the model in the text output of the above diagram (Figure 3.5) are as follows. AMOS displays most errors and warnings in this section of the output. In the output shown above (Figure 3.7), AMOS reports that the minimum was achieved with no errors or warnings. If no errors or warnings are reported in this section of the output, it is safe for you to proceed to the next output section of interest.
Absolute model fit determines the degree to which the sample data fit the structural equation model. The absolute model fit criteria commonly used are the chi-square (χ²), the goodness-of-fit index (GFI), the adjusted goodness-of-fit index (AGFI), the root mean square residual (RMR), and the root mean square error of approximation (RMSEA). Absolute fit indices do not use an alternative model as a base for comparison.
3.9.1. Chi-Square
A significant χ² value relative to the degrees of freedom indicates that the suggested model is not supported by the observed data. Hence, researchers are interested in obtaining a non-significant χ² value, which indicates that the model fits the data collected. However, the researcher cannot depend solely on the chi-square value, since the χ² model fit criterion is sensitive to sample size: as sample size increases (generally above 200), the χ² statistic has a tendency to indicate a significant probability level, whereas as sample size decreases (generally below 100), the χ² statistic tends to indicate non-significant probability levels. In addition, the χ² statistic is also sensitive to departures from multivariate normality of the observed variables. Hence, a researcher should not depend only on the chi-square analysis in testing model fit.
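The sample size sensitivity follows directly from how the statistic is computed. Under maximum likelihood estimation, the test statistic is usually written as the minimized value of the fit function scaled by the sample size,

χ² = (N − 1) · F_min,

so the same discrepancy between the sample and model-implied covariance matrices produces a larger χ² (and a smaller p value) as N grows.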
Another commonly reported absolute fit index is the Root Mean Square Error of Approximation (RMSEA) (Steiger and Lind, 1980). The RMSEA is widely used in structural equation modeling because it provides a mechanism for adjusting for sample size where chi-square statistics are used.
According to Figure 3.9, since the RMSEA value is 0.062, which is higher than 0.05, the model does not fit well.
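For orientation, one common form of the index (programs differ slightly in whether N or N − 1 appears in the denominator) is

RMSEA = √[ max(χ² − df, 0) / (df · (N − 1)) ],

which shows how the index discounts the chi-square by both the degrees of freedom and the sample size.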
Values of the standardized root mean square residual (SRMR) closer to zero indicate better fit; however, values as high as 0.08 are deemed acceptable (Hu and Bentler, 1999). An SRMR of 0 indicates perfect fit, but it must be noted that the SRMR will be lower when there is a high number of parameters in the model and in models based on large sample sizes.
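Roughly speaking, the SRMR is the square root of the average squared difference between the sample correlations and the correlations implied by the model (a standard textbook definition, not taken from the AMOS output shown here):

SRMR = √[ Σ (r_ij − r̂_ij)² / (p(p + 1)/2) ],

where the sum runs over the p(p + 1)/2 non-redundant elements of the correlation matrix for p observed variables.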
In order to get the SRMR in AMOS, select Analyze, Calculate Estimates as usual. Then select Plugins, Standardized RMR; this brings up a blank Standardized RMR dialog. Then re-select Analyze, Calculate Estimates, and the Standardized RMR dialog will display the SRMR.
In Figure 3.10, the RMR is not close to zero, the RMSEA is also greater than 0.05 (Figure 3.8), and the χ² (Figure 3.7) is also significant. Therefore, the given model does not fit the data well.
Relative fit indices assess whether the model of interest is substantially less false than a baseline model, typically the independence model. A model that does not fit the data perfectly, and yet performs well in comparison to other models, may be of substantive interest. For example, the Tucker-Lewis Index (TLI) and the Comparative Fit Index (CFI) compare the absolute fit of your specified model to the absolute fit of the independence model.
The Tucker-Lewis Index (TLI) is also known as the Non-Normed Fit Index (NNFI). This index is called "non-normed" because, on some occasions, the value of the index can be larger than 1 or slightly below 0. In order to compare the default model with the baseline model, the following incremental fit indices can be used. Incremental fit indices, also known as comparative (or relative) fit indices, are a group of indices that do not use the chi-square in its raw form but compare the chi-square value to a baseline model.
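Two commonly used definitions, written in terms of the chi-square values and degrees of freedom of the specified model (M) and the baseline or independence model (B), are the standard formulas below (they are not AMOS-specific):

CFI = 1 − max(χ²_M − df_M, 0) / max(χ²_M − df_M, χ²_B − df_B, 0)
TLI = (χ²_B/df_B − χ²_M/df_M) / (χ²_B/df_B − 1)

The TLI formula makes clear why it is non-normed: nothing forces the ratio to stay between 0 and 1.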
Index | Name | Range | Interpretation
IFI | Incremental Fit Index | | 0 = poor fit; close to 1 = very good fit
TLI (rho_2, also NNFI) | Tucker-Lewis Index / Non-Normed Fit Index | [0;1] | 0 = poor fit; close to 1 = very good fit; TLI > 0.95 indicates a good fit
CFI (also RNI) | Comparative Fit Index / Relative Noncentrality Index | [0;1] | 0 = poor fit; close to 1 = very good fit; CFI > .95 indicates a good fit
The Parsimony Goodness-of-Fit Index (PGFI) is based upon the GFI, adjusting for the loss of degrees of freedom. The PNFI also adjusts for degrees of freedom; however, it is based on the NFI (Mulaik et al., 1989). Both of these indices seriously penalize model complexity, which results in parsimony fit index values that are considerably lower than other goodness-of-fit indices. Although many researchers believe that parsimony adjustments are important, there is some debate about whether or not they are appropriate.
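As an illustration of how the degrees-of-freedom adjustment works, the PNFI is usually written as the NFI multiplied by a parsimony ratio:

PNFI = (df_M / df_B) · NFI,

where df_M and df_B are the degrees of freedom of the specified model and of the baseline (independence) model, so a model that uses up many degrees of freedom is penalized even if its NFI is high.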
Another form of parsimony fit index is known as the "information criteria" indices. The most widely used of these are the Akaike Information Criterion (AIC) and the Consistent Version of AIC (CAIC), which adjusts for sample size (Akaike, 1974). These indices select the most parsimonious model by comparing non-nested or non-hierarchical models estimated on the same data. Smaller values suggest a better-fitting, more parsimonious model; the model that produces the lowest value is preferred. In addition, it is important to keep in mind that the "information criteria" indices need a sample size of 200 to make their use reliable (Diamantopoulos and Siguaw, 2000).
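In the form commonly reported by SEM programs, the AIC adds a penalty of two per freely estimated parameter to the model chi-square (the general definition is AIC = −2 ln L + 2q):

AIC = χ² + 2q,

where q is the number of freely estimated parameters. Adding parameters lowers χ² but raises the penalty, which is how the index balances fit against parsimony.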
SEM programs require an adequate number of known correlations or covariances as inputs in order to generate a sensible set of results. Identification refers to the idea that there is at least one unique solution for each parameter estimate in a SEM model. Models in which there is only one possible solution for each parameter estimate are said to be just-identified. Models for which there are an infinite number of possible parameter estimate values are said to be underidentified. Finally, models that have more than one possible solution (but one best or optimal solution) for each parameter estimate are considered overidentified. A model is considered identified if it is either just-identified or overidentified. Only if a model is identified can the parameter estimates be trusted.
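A quick necessary (but not sufficient) check is the counting rule, or t-rule: with p observed variables there are p(p + 1)/2 distinct variances and covariances available as inputs, so the degrees of freedom are

df = p(p + 1)/2 − t,

where t is the number of freely estimated parameters. df > 0 corresponds to an overidentified model, df = 0 to a just-identified model, and df < 0 to an underidentified model.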
In SEM, a researcher can use three main approaches to test whether the data fit the model: the confirmatory approach, the alternative models approach, and the model generating approach.
If the data do not fit the model specified by the researcher, model modification is done to arrive at a final best model. AMOS generates alternative models by specifying optional and/or required paths in a model. Hence, in AMOS, the researcher does not need to generate and delete paths manually to find the best model. In order to test the model fit, absolute fit indices, relative (incremental) fit indices, and parsimonious fit indices can be used. The absolute model fit criteria commonly used are the chi-square (χ²), the goodness-of-fit index (GFI), the adjusted goodness-of-fit index (AGFI), the root mean square residual (RMR), and the root mean square error of approximation (RMSEA). The Tucker-Lewis Index (TLI) and the Comparative Fit Index (CFI) compare the absolute fit of your specified model to the absolute fit of the independence model and are used as criteria to test relative fit. The Parsimony Goodness-of-Fit Index (PGFI) and the Parsimonious Normed Fit Index (PNFI) are good examples of indices used to test parsimonious fit.
Chapter-4
Traditionally, validity and reliability were checked by examining the validity and reliability scores of the instrument used in a particular context; given an acceptable level of score validity and reliability, the measurement was assumed to be sound. Traditional statistical analysis does not consider the measurement error of variables, but measurement error has been found to have serious consequences. Structural equation modeling software was therefore developed, which can account for the measurement error of variables.
In other words, you have very clear expectations about what you will find in your
own sample. This means that you know the number of factors that you will encounter
and which variables will load onto the factors.
The criteria for variable inclusion are much more stringent in a confirmatory factor analysis than in an exploratory factor analysis. A rule of thumb is that variables with absolute factor loadings below 0.7 are dropped.
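This rule of thumb has a simple variance interpretation: for a standardized loading, the squared loading is the proportion of the indicator's variance explained by the factor, so

λ = 0.7  →  λ² = 0.49,

that is, a loading of about 0.7 means the factor accounts for roughly half of the indicator's variance.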
When you are developing scales, you can use an exploratory factor analysis to test a
new scale, and then move on to confirmatory factor analysis to validate the factor
structure in a new sample.
In exploratory factor model approaches, the researcher tries to find a model that fits the data. Hence, in practice, researchers develop different alternative models, expecting to find a model that fits the data and has theoretical support. This is the primary rationale for exploratory factor analysis (EFA). In confirmatory factor model approaches, researchers statistically test the significance of a hypothesized factor model, that is, whether the sample data confirm the model. Additional samples of data that fit the model further confirm the validity of the hypothesized model. This is the primary reason for conducting confirmatory factor analysis (CFA). The following Figure 4.1 and Figure 4.2 show the factor loadings (standardized regression weights) of the observed variables that define the latent variables.
Figure 4.2 - Text output of factor loading values
In order to obtain a confirmatory factor model, the model fit should be checked as discussed in Chapter Three.
In addition to exploratory factor analysis, AMOS has the ability to estimate confirmatory factor analysis as well. In exploratory factor analysis, you explore the factors, whereas in confirmatory factor analysis, you confirm factors that have already been developed by other researchers.
Chapter Five
AMOS has a facility for analyzing multiple groups at the same time. For this purpose, select the data file as discussed earlier in this book.
Now, in order to subset the data into girls and boys, click on Grouping Variable, then choose "gender" and click OK.
Figure 5.3 - Choosing Grouping Variable
You can carry on if you have already given grouping values in the data set. Otherwise, you can use "Group Value" to assign values to nominal variables.
Now we can test whether the measurement model for boys is different from the measurement model for girls. For that purpose, we should include the two groups as follows.
You will see that the window that pops up says "Group number 1". That is the current name of what will become the "males" group. To change this name, just type "males" over it in the window. Then click New and you'll see it says "Group number 2". Change this name to "female" and then click Close.
Figure 5.5- Multi Group Comparison (Step 1)
Now you can see "male" and "female" listed on the left side of the AMOS window.
Now we need to attach the separate data sets according to the groups. For this purpose, select File, Data Files (you should now see both "male" and "female" listed under Group Name, but you'll notice that there are no files under each category). Therefore, click on "male", then File Name, and find the data file again. The data set then appears next to "male". Then click "Grouping Variable", select "gender", and click OK. Next, under "Group Value", select "1" and click OK. You should do the same for the second group, "female". If you have done it correctly, it will be shown as follows:
Now you can draw your model and name the variables as discussed earlier in this book. Then, in order to do the group comparison, select Analyze, Multiple-Group Analysis, or use the corresponding icon.
A window will pop up listing the different models that will be considered, each one constraining additional parameters to be equal across the two groups (males and females).
Click OK, and the different models should now be listed on the left side of the AMOS window.
Figure 5.12- Amos window with different models
The "measurement weights" model constrains only the factor loadings to be equal across groups; the "structural covariances" model additionally constrains the variances and covariances of the factors to be the same across groups; and the "measurement residuals" model also constrains the variances and covariances of the errors to be the same. You can double-click on each model to see what it constrains.
You will be able to see that AMOS has placed parameter names on the model and
that it uses a different naming root for different kinds of parameters.
Click the abacus icon to run the models. If they all ran correctly, each should say OK rather than XX.
To look at the results, click the text output icon, then in the AMOS output window click on Model Fit and Model Comparison. The values for the different models can be seen there by clicking the relevant group.
Figure 5.15- Amos output for multi groups
Chapter-6
Interpretation of the Results and Modifying the Model
Interpreting the results after an analysis can be difficult for a researcher. Hence, an example of the interpretation of results is given below, using the hypothetical model shown in Figure 6.1.
Figure 6.1 - The hypothetical model relating constructs A, B, and C to D and E, and D and E to F and G (hypotheses H1-H10)
H11. The type of organization moderates the effects of the dimensions of the exogenous variables on the endogenous variables.
H11a. Customer perceptions about "A" have a stronger positive effect on "D" for "X" type organizations' customers than for "Y" type organizations' customers.
H11b. "B" has a stronger positive effect on "D" for "Y" type organizations' customers than for "X" type organizations' customers.
H11c. "B" has a stronger positive effect on "E" for "Y" type organizations' customers than for "X" type organizations' customers.
H11d. Customer perceptions about "C" have a stronger positive effect on "D" for "X" type organizations' customers than for "Y" type organizations' customers.
H11e. Customer perceptions about "C" have a stronger positive effect on "E" for "X" type organizations' customers than for "Y" type organizations' customers.
The fit indices indicated an acceptable fit of the proposed model in the X type of organizations' sample (NFI 0.94; NNFI 0.97; CFI 0.97; IFI 0.97) and in the Y type of organizations' sample (NFI 0.93; NNFI 0.97; CFI 0.98; IFI 0.98). In addition, the root mean square error of approximation (RMSEA) index was below the recommended maximum value of 0.08 (RMSEA 0.03 in both samples).
The reliability of the measurement scales was evaluated by means of Cronbach's alpha (α) and the average variance extracted (AVE) index, and in all cases these indicators were above the recommended values of 0.7 and 0.5, respectively. The convergent validity of the scales was also tested: all the items were significant at a confidence level of 95 per cent and their standardized lambda coefficients (λ) were higher than 0.5 (Table 6-1). For the purpose of testing discriminant validity, the confidence intervals for the correlation between pairs of latent factors were estimated and compared with unity. It was observed that in none of the cases did the estimated intervals contain the value 1 (Table 6-2).
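For reference, the two indicators mentioned above are commonly computed as follows (standard formulations, with k the number of items in a scale, σ²_i the item variances, σ²_T the variance of the total scale score, and λ_i the standardized loadings):

Cronbach's α = [k / (k − 1)] · (1 − Σ σ²_i / σ²_T)
AVE = Σ λ_i² / k

The reported AVE values can therefore be read as the average proportion of item variance captured by each latent factor.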
Table 6.1 - Internal consistency and convergent validity for the two groups

Latent factor | Item | λ (X) | λ (Y) | R² (X) | R² (Y) | α (X / Y) | AVE (X / Y)
A | 1 | 0.73 | 0.71 | 0.53 | 0.51 | 0.89 / 0.88 | 0.56 / 0.56
A | 2 | 0.73 | 0.70 | 0.54 | 0.49 | |
A | 3 | 0.75 | 0.70 | 0.56 | 0.50 | |
A | 4 | 0.79 | 0.84 | 0.62 | 0.71 | |
A | 5 | 0.80 | 0.82 | 0.64 | 0.67 | |
A | 6 | 0.70 | 0.70 | 0.49 | 0.48 | |
B | 1 | 0.74 | 0.70 | 0.55 | 0.49 | 0.85 / 0.84 | 0.53 / 0.51
B | 2 | 0.79 | 0.75 | 0.62 | 0.57 | |
B | 3 | 0.71 | 0.72 | 0.51 | 0.51 | |
B | 4 | 0.66 | 0.73 | 0.43 | 0.53 | |
B | 5 | 0.74 | 0.66 | 0.55 | 0.44 | |
C | 1 | 0.75 | 0.74 | 0.56 | 0.54 | 0.87 / 0.87 | 0.61 / 0.57
C | 2 | 0.82 | 0.81 | 0.66 | 0.66 | |
C | 3 | 0.82 | 0.82 | 0.66 | 0.67 | |
C | 4 | 0.80 | 0.76 | 0.64 | 0.57 | |
C | 5 | 0.71 | 0.63 | 0.50 | 0.40 | |
D | 1 | 0.82 | 0.83 | 0.68 | 0.68 | 0.93 / 0.93 | 0.70 / 0.70
D | 2 | 0.86 | 0.85 | 0.74 | 0.72 | |
D | 3 | 0.85 | 0.84 | 0.72 | 0.70 | |
D | 4 | 0.86 | 0.87 | 0.73 | 0.75 | |
D | 5 | 0.86 | 0.85 | 0.74 | 0.72 | |
D | 6 | 0.78 | 0.79 | 0.61 | 0.63 | |
E | 1 | 0.85 | 0.80 | 0.73 | 0.64 | 0.92 / 0.88 | 0.74 / 0.66
E | 2 | 0.80 | 0.77 | 0.65 | 0.59 | |
E | 3 | 0.89 | 0.85 | 0.79 | 0.71 | |
E | 4 | 0.89 | 0.82 | 0.79 | 0.68 | |
F | 1 | 0.93 | 0.90 | 0.87 | 0.80 | 0.91 / 0.92 | 0.84 / 0.85
F | 2 | 0.90 | 0.94 | 0.81 | 0.88 | |
G | 1 | 0.85 | 0.89 | 0.72 | 0.80 | 0.88 / 0.90 | 0.78 / 0.81
G | 2 | 0.92 | 0.91 | 0.85 | 0.83 | |

Notes: X = X type of organizations; Y = Y type of organizations.
X type of organizations: χ²(380) = 652.59 (p < 0.01), NFI = 0.94, NNFI = 0.97, CFI = 0.97, IFI = 0.97, RMSEA = 0.03.
Y type of organizations: χ²(380) = 562.21 (p < 0.01), NFI = 0.93, NNFI = 0.97, CFI = 0.98, IFI = 0.98, RMSEA = 0.03.
Table 6.2 - Confidence intervals for the correlations between pairs of latent factors (discriminant validity)

  | A | B | C | D | E | F | G
A | - | 0.59-0.75 | 0.54-0.70 | 0.37-0.53 | 0.34-0.50 | 0.37-0.53 | 0.20-0.40
B | 0.40-0.60 | - | 0.45-0.65 | 0.43-0.59 | 0.50-0.66 | 0.44-0.60 | 0.34-0.54
C | 0.54-0.70 | 0.52-0.72 | - | 0.25-0.41 | 0.29-0.45 | 0.27-0.43 | 0.23-0.43
D | 0.23-0.43 | 0.37-0.57 | 0.27-0.47 | - | 0.65-0.77 | 0.65-0.77 | 0.45-0.61
E | 0.26-0.46 | 0.44-0.64 | 0.30-0.50 | 0.68-0.80 | - | 0.74-0.82 | 0.71-0.83
F | 0.27-0.47 | 0.41-0.61 | 0.25-0.45 | 0.67-0.79 | 0.72-0.84 | - | 0.69-0.81
G | 0.13-0.33 | 0.31-0.51 | 0.20-0.40 | 0.35-0.55 | 0.61-0.77 | 0.59-0.75 | -
According to Figure 6-2, "D" was significantly and positively influenced by "A" (β = 0.24, p < 0.05) and "B" (β = 0.38, p < 0.05), but "D" was not significantly influenced by "C" (β = 0.04, p > 0.05). Thus, H1 and H2 are supported, whereas H4 is not.
Figure 6.2 - Structural model estimation in the "X" type of organizations' sample (standardized path coefficients with t-values in parentheses; N.S. = not significant)
“B” also significantly and positively impacted “E” (β=0.28, p < 0.05), as well as “C”
(β = 0.08, p < 0.05). Based on these results, H3 and H5 are supported. "D" also significantly and positively influenced "E" (β = 0.56, p < 0.05) and "F" (β = 0.28, p < 0.05). However, it did not significantly influence "G" (β = 0.05, p > 0.05). Thus, H6 and H7 are supported, whereas H8 is not. Finally, "E" significantly and positively affected "F" (β = 0.59, p < 0.05) and "G" (β = 0.80, p < 0.05). Based on these results, H9 and H10 are supported.
The results of the analysis implemented with the data of the "Y" type of organizations' sample are shown in Figure 6-3.
Figure 6.3 - Structural model estimation in the "Y" type of organizations' sample
First, "D" was significantly and positively influenced by "B" (β = 0.31, p < 0.05) but not by "A" (β = 0.08, p > 0.05) or "C" (β = 0.06, p > 0.05). Thus, H2 is supported, whereas H1 and H3 are not. "B" also significantly and positively impacted "E" (β = 0.25, p < 0.05), but, again, "C" did not significantly affect "E" (β = 0.06, p > 0.05). Based on these results, H4 is supported, whereas H5 is not. "D" also significantly and positively influenced "E" (β = 0.63, p < 0.05), "F" (β = 0.27, p < 0.05) and "G" (β = 0.17, p < 0.05). Thus, H6 to H8 are supported. Finally, "E" significantly and positively affected "F" (β = 0.59, p < 0.05) and "G" (β = 0.82, p < 0.05). Based on these results, H9 and H10 are supported.
However, no differences in the other effects between the samples of the two groups were observed (Diff. χ² < 1.04, p > 0.1). Based on these findings, H11b to H11d are not supported.
Some additional differences were observed in the effects of "D" on "E" (βx = 0.56, p < 0.05; βy = 0.63, p < 0.05; Diff. χ² = 5.29, p < 0.01) and on "G" (βx = 0.05, p > 0.05; βy = 0.17, p < 0.05; Diff. χ² = 76.47, p < 0.01), as well as in the effect of "E" on "G" (βx = 0.80, p < 0.05; βy = 0.82, p < 0.05; Diff. χ² = 26.42, p < 0.01). In all three cases, the effects of "D" and "E" were stronger among the "Y" type of organizations' customers.
The Threshold for Modification Indices setting lets the researcher specify the level of chi-square change required for a path to be included in the modification index output. The default value is 4.00 because it slightly exceeds the tabled critical value of a chi-square distribution with one degree of freedom (3.84): any additional parameter estimated by AMOS should result in an expected reduction in the model chi-square of at least 3.84.
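Stated compactly, a modification index (MI) approximates the drop in the model chi-square that would result from freeing the corresponding parameter, so a suggested path is worth considering only when

MI ≥ χ²_0.05(1) = 3.84 (AMOS default threshold: 4.00).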
Figure 6.5 - Threshold for modification indices
The modification index results appear below in Figure 6.6. However, you should keep in mind that AMOS provides modification index output only when complete data are available. In other words, you cannot obtain modification index information when you use AMOS with missing data.
Figure 6.6- Modification Indices
All possible variances were estimated in the model. Hence, there are no variances that could be considered to improve the existing model. Thus, the Variances section in Figure 6.6 contains no model modification information.
There are, however, possible regression weights and covariances that can be used in
the modified model (Figure 6.7).
Figure 6.7- possible regression weights and covariances of the modified model
If the modification index values are high, you can make the suggested changes and check the model fit. In Figure 6.7 above, however, the modification index values are not large enough to justify modifying the current model. If the values were high, for example, if the covariance of e15 with e17 were expected to be 0.901, you could re-specify the model with that covariance added and then refit the model. It is worth keeping in mind that the researcher should reconsider the conceptual implications before modifying the existing model.
1. AMOS prints the R² values for each dependent or mediating variable above the variable, but sometimes the value cannot be seen clearly. To move a parameter on the output diagram, use the Move Parameter tool. Select the tool and click your mouse pointer over the required variable until it is highlighted in red. Then drag the mouse in the direction where you want the value to appear.
References
Akaike, H. (1974), "A New Look at the Statistical Model Identification", IEEE Transactions on Automatic Control, 19 (6), 716-723.
Byrne, B.M. (1998), Structural Equation Modeling with LISREL, PRELIS and SIMPLIS: Basic Concepts, Applications and Programming, Mahwah, New Jersey: Lawrence Erlbaum Associates.
Hu, L.T. and Bentler, P.M. (1999), "Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives", Structural Equation Modeling, 6 (1), 1-55.
Kline, R.B. (2005), Principles and Practice of Structural Equation Modeling (2nd ed.), New York: The Guilford Press.
Rasch, G. (1980), Probabilistic Models for Some Intelligence and Attainment Tests, Chicago: University of Chicago Press.