Chapter 70 Evaluating Measurement Models Using Confirmatory Factor Analysis

In this chapter, we will learn how to evaluate measurement models using confirmatory factor analysis (CFA), where CFA is part of the structural equation modeling (SEM) family of analyses. Specifically, we will learn how to evaluate the measurement structure and construct validity of a theoretical construct operationalized as a multi-item measure (i.e., scale, inventory, test, questionnaire).

70.1 Conceptual Overview

Link to conceptual video: https://youtu.be/lhyDp2HtDiA

Confirmatory factor analysis (CFA) is a latent variable modeling approach and is part of the structural equation modeling (SEM) family of analyses, which are also referred to as covariance structure analyses. CFA is a useful statistical tool for evaluating the internal structure of a measure designed to assess a theoretical construct (i.e., concept); in other words, we can apply CFA to evaluate the construct validity of a construct. CFA allows us to directly specify and estimate a measurement model, which ultimately can be incorporated into structural regression models.

In CFA models, constructs are represented as latent variables (i.e., latent factors), which by nature are not directly measured. Instead, observed (manifest) variables serve as indicators of the latent construct. I should note that in this chapter, we will focus exclusively on reflective measurement models, which are models in which the latent factor is specified as the direct cause of its indicators. Not covered in this chapter are formative measurement models, which are models in which the observed variables are specified as the direct causes of the latent factor.

70.1.1 Path Diagrams

It is often helpful to visualize a CFA model using a path diagram. A path diagram displays the model parameter specifications and can also include parameter estimates. Conventional path diagram symbols are shown in Figure 1.

Figure 1: Conventional path diagram symbols and their meanings.

For an example of how the path diagram symbols can be used to construct a visual depiction of a CFA model, please reference Figure 2. The path diagram depicts a one-factor CFA model for a multi-item role clarity measure, which means that the model has a single latent factor representing the psychological construct called role clarity. Further, four observed variables (i.e., Items 1-4) serve as indicators of the latent factor, such that the indicators are reflective of the latent factor. Putting it all together, the one-factor CFA model serves as a measurement model and represents the measurement structure of a four-item measure designed to assess the construct of role clarity.

Figure 2: Example of a one-factor confirmatory factor analysis (CFA) model path diagram.

By convention, the latent factor for role clarity is represented by an oval or circle. Please note that the latent factor is not directly measured; rather, we infer information about the latent factor from its four indicators, which in this example correspond to Items 1-4. The latent factor has a variance term associated with it, which represents the latent factor’s variability; in CFA models, though, we often don’t spend much time interpreting latent factors’ variance terms.

Each of the four observed variables (indicators) is represented with a rectangle. The single-headed arrows represent the factor loadings and point from the latent factor to the observed variables (indicators). Each indicator has a (residual) error variance term, which represents the amount of variance left unexplained by the latent factor in relation to each indicator.

To illustrate the covariance path diagram symbol, let’s refer to Figure 3. When standardized, a covariance can be interpreted as a correlation. The covariance symbol is a double-headed arrow that connects two distinct latent or observed variables. In Figure 3, the path diagram depicts a multi-factor CFA model and, more specifically, a two-factor CFA model. The first latent factor is associated with a four-item role clarity measure, and the second latent factor is associated with a four-item task mastery measure. If freely estimated, the covariance term allows the two latent factors to covary with each other.

Figure 3: Example of a multi-factor confirmatory factor analysis (CFA) model path diagram.

70.1.2 Model Identification

Model identification has to do with the number of free (i.e., freely estimated) parameters specified in the model relative to the number of unique (non-redundant) sources of information available, and model identification has important implications for assessing model fit and estimating model parameters.

Just-identified: In a just-identified model (i.e., saturated model), the number of freely estimated parameters (e.g., factor loadings, covariances, variances) is equal to the number of unique (non-redundant) sources of information, which means that the degrees of freedom (df) is equal to zero. In just-identified models, the model parameter standard errors can be estimated, but the model fit cannot be assessed in a meaningful way using traditional model fit indices.

Over-identified: In an over-identified model, the number of freely estimated parameters is less than the number of unique (non-redundant) sources of information, which means that the degrees of freedom (df) is greater than zero. In over-identified models, traditional model fit indices and parameter standard errors can be estimated.

Under-identified: In an under-identified model, the number of freely estimated parameters is greater than the number of unique (non-redundant) sources of information, which means that the degrees of freedom (df) is less than zero. In under-identified models, the model parameter standard errors and model fit cannot be estimated. Some might say under-identified models are overparameterized because they have more parameters to be estimated than unique sources of information.

Most (if not all) statistical software packages that allow structural equation modeling (and by extension, confirmatory factor analysis) automatically compute the degrees of freedom for a model or, if the model is under-identified, provide an error message. As such, we don’t need to count the number of unique (non-redundant) sources of information and free parameters by hand. With that said, to understand model identification and its various forms at a deeper level, it is often helpful to practice calculating the degrees of freedom by hand when first learning.

The formula for calculating the number of unique (non-redundant) sources of information available for a particular model is as follows:

\(i = \frac{p(p+1)}{2}\)

where \(p\) is the number of observed variables to be modeled. This formula calculates the number of possible unique covariances and variances for the variables specified in the model – or in other words, it calculates the lower diagonal of a covariance matrix, including the variances.

In the single-factor CFA model path diagram we specified above, there are four observed variables: Item 1, Item 2, Item 3, and Item 4. Accordingly, in the following formula, \(p\) is equal to 4, and the number of unique (non-redundant) sources of information is 10.

\(i = \frac{4(4+1)}{2} = \frac{20}{2} = 10\)

To count the number of free parameters (\(k\)), simply add up the number of the specified unconstrained factor loadings, variances, covariances, and (residual) error variance terms in the one-factor CFA model. Please note that for latent variable scaling and model identification purposes, we typically constrain one of the factor loadings to 1.0, which means that it is not freely estimated and thus doesn’t count as one of the free parameters. As shown in Figure 4 below, the example one-factor CFA model has 8 free parameters.

\(k = 8\)

To calculate the degrees of freedom (df) for the model, we subtract the number of free parameters from the number of unique (non-redundant) sources of information, which in this example equates to 10 minus 8. Thus, the degrees of freedom for the model is 2, which means the model is over-identified.

\(df = i - k = 10 - 8 = 2\)

Figure 4: Counting the number of free parameters in the CFA model path diagram.
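
To make this arithmetic concrete, here is a minimal sketch in R that reproduces the calculation above; the values of p and k are taken directly from the example one-factor model.

# Calculate degrees of freedom for the example one-factor CFA model
p <- 4                 # number of observed variables (indicators)
i <- p * (p + 1) / 2   # unique (non-redundant) sources of information: 10
k <- 8                 # free parameters counted from the path diagram
i - k                  # degrees of freedom: 10 - 8 = 2 (over-identified)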

70.1.3 Model Fit

When a model is over-identified (df > 0), the extent to which the specified model fits the data can be assessed using a variety of model fit indices, such as the chi-square (\(\chi^{2}\)) test, comparative fit index (CFI), Tucker-Lewis index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). For a commonly cited reference on cutoffs for fit indices, please refer to Hu and Bentler (1999). And for a concise description of common guidelines regarding interpreting model fit indices, including differences between stringent and relaxed interpretations of common fit indices, I recommend checking out Nye (2023). Regardless of which cutoffs we apply when interpreting fit indices, we must remember that such cutoffs are merely guidelines, and it’s possible to estimate an adequate model that meets some but not all of the cutoffs given the limitations of some fit indices. Further, in light of the limitations of conventional model fit index cutoffs, McNeish and Wolf (2023) developed model- and data-specific dynamic fit index cutoffs, which we will cover later in the chapter tutorial.

Chi-square test. The chi-square (\(\chi^{2}\)) test can be used to assess whether the model fits the data adequately, where a statistically significant \(\chi^{2}\) value (e.g., p \(<\) .05) indicates that the model does not fit the data well and a nonsignificant chi-square value (e.g., p \(\ge\) .05) indicates that the model fits the data reasonably well (Bagozzi and Yi 1988). The null hypothesis for the \(\chi^{2}\) test is that the model fits the data perfectly, and thus failing to reject the null model provides some confidence that the model fits the data reasonably close to perfectly. Of note, the \(\chi^{2}\) test is sensitive to sample size and non-normal variable distributions.

Comparative fit index (CFI). As the name implies, the comparative fit index (CFI) is a type of comparative (or incremental) fit index, which means that CFI compares the focal model to a baseline model, which is commonly referred to as the null or independence model. CFI is generally less sensitive to sample size than the chi-square test. A CFI value greater than or equal to .95 generally indicates good model fit to the data, although some might relax that cutoff to .90.

Tucker-Lewis index (TLI). Like CFI, the Tucker-Lewis index (TLI) is another type of comparative (or incremental) fit index. TLI is generally less sensitive to sample size than the chi-square test and tends to work well with smaller sample sizes; however, as Hu and Bentler (1999) noted, TLI may not be the best choice for smaller sample sizes (e.g., N \(<\) 250). A TLI value greater than or equal to .95 generally indicates good model fit to the data, although some might relax that cutoff to .90.

Root mean square error of approximation (RMSEA). The root mean square error of approximation (RMSEA) is an absolute fit index that penalizes model complexity (e.g., models with a larger number of estimated parameters) and thus ends up effectively rewarding more parsimonious models. RMSEA values tend to be upwardly biased when the model degrees of freedom are fewer (i.e., when the model is closer to being just-identified); further, Hu and Bentler (1999) noted that RMSEA may not be the best choice for smaller sample sizes (e.g., N \(<\) 250). In general, an RMSEA value that is less than or equal to .06 indicates good model fit to the data, although some relax that cutoff to .08 or even .10.

Standardized root mean square residual (SRMR). Like the RMSEA, the standardized root mean square residual (SRMR) is an example of an absolute fit index. An SRMR value that is less than or equal to .06 generally indicates good fit to the data, although some relax that cutoff to .08.

Summary of model fit indices. The conventional cutoffs for the aforementioned model fit indices – like any rule of thumb – should be applied with caution and with good judgment and intention. Further, these indices don’t always agree with one another, which means that we often look across multiple fit indices and come up with our best judgment of whether the model adequately fits the data. Generally, it is not advisable to interpret model parameter estimates unless the model fits the data reasonably adequately, as a poorly fitting model may be due to model misspecification, an inappropriate model estimator, or other factors that need to be addressed. With that being said, we should also be careful to not toss out a model entirely if one or more of the model fit indices suggest less than acceptable levels of fit to the data. The table below contains the conventional stringent and more relaxed cutoffs for the model fit indices.

| Fit Index | Stringent Cutoffs for Acceptable Fit | Relaxed Cutoffs for Acceptable Fit |
|-----------|--------------------------------------|------------------------------------|
| \(\chi^{2}\) | \(p \ge .05\) | \(p \ge .01\) |
| CFI | \(\ge .95\) | \(\ge .90\) |
| TLI | \(\ge .95\) | \(\ge .90\) |
| RMSEA | \(\le .06\) | \(\le .08\) |
| SRMR | \(\le .06\) | \(\le .08\) |

70.1.4 Parameter Estimates

In CFA models, there are various types of parameter estimates, which correspond to the path diagram symbols covered earlier (e.g., covariance, variance, factor loading). When a model is just-identified or over-identified, we can estimate the standard errors for freely estimated parameters, which allows us to evaluate statistical significance. With most software applications, we can request standardized parameter estimates, which facilitate interpretation.

Factor loadings. When we standardize factor loadings, we obtain estimates for each directional relation between the latent factor and an indicator, including for the factor loading that we likely constrained to 1.0 for latent factor scaling and model identification purposes (see above). When standardized, factor loadings can be interpreted like correlations, and generally we want to see standardized estimate values between .50 and .95 (Bagozzi and Yi 1988). If a standardized factor loading falls outside of that range, we typically investigate whether there is a theoretical or empirical reason for the out-of-range estimate, and we may consider removing the associated indicator if warranted.

(Residual) error variance terms. The (residual) error variance terms, which are also known as disturbance terms or uniquenesses, indicate how much variance is left unexplained by the latent factor in relation to the indicators. When standardized, error variance terms represent the proportion (percentage) of variance that remains unexplained by the latent factor. Ideally, we want to see standardized error variance terms that are less than or equal to .50.

Variances. The variance estimate of the latent factor is generally not a focus when evaluating parameter estimates in a CFA model, as the variance of a latent factor depends on the factor loadings and scaling.

Covariances. In a CFA model, covariances between latent factors help us understand the extent to which they are related (or unrelated). When standardized, a covariance can be interpreted as a correlation.

Average variance extracted (AVE). Although not a parameter estimate, per se, average variance extracted (AVE) is a useful statistic for understanding the extent to which indicator variations can be attributed to the latent factor (Fornell and Larcker 1981). The formula for AVE takes into account the factor loadings and (residual) error variance terms associated with a latent factor. In general, we consider AVE values that are greater than or equal to .50 to be acceptable.
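
In one common form, using the standardized factor loadings \(\lambda_j\) and standardized (residual) error variances \(\theta_j\) for a latent factor’s \(p\) indicators, AVE can be written as follows:

\(AVE = \frac{\sum_{j=1}^{p} \lambda_j^{2}}{\sum_{j=1}^{p} \lambda_j^{2} + \sum_{j=1}^{p} \theta_j}\)

Because standardized estimates imply \(\theta_j = 1 - \lambda_j^{2}\), the denominator simplifies to \(p\), such that AVE is simply the average squared standardized factor loading.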

Composite reliability (CR). Like AVE, composite reliability (CR) is not a parameter estimate; instead, it is another useful statistic that helps us understand our CFA model. CR is also known as coefficient omega (\(\omega\)), and it provides an estimate of internal consistency reliability. In general, we consider CR values that are greater than or equal to .70 to be acceptable; however, if the estimate falls between .60 and .70, we might refer to the reliability as questionable but not necessarily unacceptable.
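
In one common form, CR (coefficient \(\omega\)) is computed from the same ingredients as AVE, except that the factor loadings are summed before being squared:

\(CR = \frac{\left(\sum_{j=1}^{p} \lambda_j\right)^{2}}{\left(\sum_{j=1}^{p} \lambda_j\right)^{2} + \sum_{j=1}^{p} \theta_j}\)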

70.1.5 Model Comparisons

When evaluating CFA models, especially multi-factor models, we often wish to evaluate whether a focal model performs better (or worse) than an alternative model. Comparing models can help us arrive at a more parsimonious model that still fits the data well, as well as evaluate the potential multidimensionality of a construct.

As an example, imagine we have a focal model with two latent factors, and unique sets of indicators load onto their respective latent factors. Now imagine that we specify an alternative model that has one latent factor, and all the indicators from our focal model load onto that single latent factor. We can compare those two models to determine whether the alternative model fits the data about the same as our focal model or worse.

When two models are nested, we can perform nested model comparisons. As a reminder, a nested model contains the same parameters as the full model but imposes additional constraints on one or more of them. If two models are nested, we can compare them using model fit indices like CFI, TLI, RMSEA, and SRMR. We can also compare nested models using the chi-square difference (\(\Delta \chi^{2}\)) test (likelihood ratio test), which provides a formal statistical test for nested-model comparisons.

When two models are not nested, we can use other model fit indices like Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC). With respect to these indices, the best fitting model will have lower AIC and BIC values.
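
As a preview of the chapter tutorial, here is a minimal sketch of what these comparisons might look like in R using fitted lavaan model objects; fit_1 and fit_2 are placeholder names for two fitted models.

# Chi-square difference test (likelihood ratio test) for nested models
# (fit_1 and fit_2 are placeholder fitted lavaan model objects)
anova(fit_1, fit_2)

# AIC and BIC for comparing models (nested or non-nested);
# lower values indicate comparatively better fit
AIC(fit_1, fit_2)
BIC(fit_1, fit_2)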

70.1.6 Statistical Assumptions

The statistical assumptions that should be met prior to estimating and/or interpreting a CFA model will depend on the type of estimation method. Common estimation methods for CFA models include (but are not limited to) maximum likelihood (ML), maximum likelihood with robust standard errors (MLM or MLR), weighted least squares (WLS), and diagonally weighted least squares (DWLS). WLS and DWLS estimation methods are used when there are observed variables with nominal or ordinal (categorical) measurement scales. In this chapter, we will focus on ML estimation, which is a common method when observed variables have interval or ratio (continuous) measurement scales. As Kline (2011) notes, ML estimation carries with it the following assumptions: “The statistical assumptions of ML estimation include independence of the scores, multivariate normality of the endogenous variables, and independence of the exogenous variables and error terms” (p. 159). When multivariate non-normality is a concern, the MLM or MLR estimator is a better choice than the ML estimator, where the MLR estimator allows for missing data and the MLM estimator does not.
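
For instance, here is a minimal sketch of how we might request the MLR estimator using lavaan’s cfa function, which we introduce in the tutorial below; cfa_mod and df are placeholder names for a specified model object and a data frame object.

# Estimate CFA model using robust maximum likelihood (MLR),
# which is more defensible under multivariate non-normality
cfa_fit_mlr <- cfa(cfa_mod,          # placeholder specified model object
                   data=df,          # placeholder data frame object
                   estimator="MLR",  # robust ML estimator
                   missing="fiml")   # full-information ML for missing data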

70.2 Tutorial

This chapter’s tutorial demonstrates how to estimate measurement models using confirmatory factor analysis (CFA) in R.

70.2.1 Video Tutorial

The video tutorial for this chapter is planned but has not yet been recorded.

70.2.2 Functions & Packages Introduced

| Function | Package |
|------------|----------|
| cfa | lavaan |
| summary | base R |
| semPaths | semPlot |
| AVE | semTools |
| compRelSEM | semTools |
| anova | base R |
| options | base R |
| inspect | lavaan |
| cbind | base R |
| rbind | base R |
| t | base R |
| ampute | mice |
| cfaOne | dynamic |
| cfaHB | dynamic |

70.2.3 Initial Steps

If you haven’t already, save the file called “sem.csv” into a folder that you will subsequently set as your working directory. Your working directory will likely be different than the one shown below (i.e., "H:/RWorkshop"). As a reminder, you can access all of the data files referenced in this book by downloading them as a compressed (zipped) folder from my GitHub site: https://github.com/davidcaughlin/R-Tutorial-Data-Files; once you’ve followed the link to GitHub, just click “Code” (or “Download”) followed by “Download ZIP”, which will download all of the data files referenced in this book. For the sake of parsimony, I recommend downloading all of the data files into the same folder on your computer, which will allow you to set that same folder as your working directory for each of the chapters in this book.

Next, using the setwd function, set your working directory to the folder in which you saved the data file for this chapter. Alternatively, you can manually set your working directory folder in your drop-down menus by going to Session > Set Working Directory > Choose Directory…. Be sure to create a new R script file (.R) or update an existing R script file so that you can save your script and annotations. If you need refreshers on how to set your working directory and how to create and save an R script, please refer to Setting a Working Directory and Creating & Saving an R Script.

# Set your working directory
setwd("H:/RWorkshop")

Next, read in the .csv data file called “sem.csv” using your choice of read function. In this example, I use the read_csv function from the readr package (Wickham, Hester, and Bryan 2024). If you choose to use the read_csv function, be sure that you have installed and accessed the readr package using the install.packages and library functions. Note: You don’t need to install a package every time you wish to access it; in general, I would recommend updating a package installation once every 1-3 months. For refreshers on installing packages and reading data into R, please refer to Packages and Reading Data into R.

# Install readr package if you haven't already
# [Note: You don't need to install a package every 
# time you wish to access it]
install.packages("readr")
# Access readr package
library(readr)

# Read data and name data frame (tibble) object
df <- read_csv("sem.csv")
## Rows: 750 Columns: 18
## ── Column specification ────────────────────────────────────────────────────────────────────────────────────────────────────────
## Delimiter: ","
## chr  (1): id
## dbl (17): ac_1, ac_2, ac_3, ac_4, rc_1, rc_2, rc_3, tm_1, tm_2, tm_3, tm_4, perf_1, perf_2, perf_3, perf_4, age, gender
## 
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
# Print the names of the variables in the data frame (tibble) object
names(df)
##  [1] "id"     "ac_1"   "ac_2"   "ac_3"   "ac_4"   "rc_1"   "rc_2"   "rc_3"   "tm_1"   "tm_2"   "tm_3"   "tm_4"   "perf_1"
## [14] "perf_2" "perf_3" "perf_4" "age"    "gender"
# Print number of rows in data frame (tibble) object
nrow(df)
## [1] 750
# Print top 6 rows of data frame (tibble) object
head(df)
## # A tibble: 6 × 18
##   id     ac_1  ac_2  ac_3  ac_4  rc_1  rc_2  rc_3  tm_1  tm_2  tm_3  tm_4 perf_1 perf_2 perf_3 perf_4   age gender
##   <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>  <dbl>  <dbl>  <dbl>  <dbl> <dbl>  <dbl>
## 1 ee1       4     4     4     5     4     2     6     7     7     6     4      2      5      3      7    28      0
## 2 ee2       5     5     5     4     4     5     6     1     2     2     1      3      5      3      5    32      1
## 3 ee3       6     6     6     6     6     5     6     2     2     2     4      6      7      6      7    43      1
## 4 ee4       4     4     4     4     4     1     1     6     7     5     6      2      3      2      3    33      0
## 5 ee5       5     4     5     5     5     1     5     6     6     7     4      2      4      1      2    36      1
## 6 ee6       7     7     7     7     3     3     6     1     1     1     1      2      6      5      6    43      1

The data frame includes data from two sources that have been joined: a new-employee onboarding survey administered 1 month after employees’ respective start dates and an annual performance evaluation (rated by supervisors) administered six months after employees’ respective start dates. The sample includes 750 employees. As part of the survey, employees responded to three multi-item measures intended to assess their level of adjustment into the organization and provided their age measured in years (age) and their gender identity (gender). Employees responded to items from the three multi-item measures using a 7-point agreement Likert-type response format, ranging from Strongly Disagree (1) to Strongly Agree (7). For all items, higher scores indicate higher levels of the construct.

The first multi-item measure is designed to measure feelings of acceptance, which is conceptually defined as “the extent to which an individual feels welcomed and socially accepted at work.” The measure includes the following four items.

  • ac_1 (“My colleagues make me feel welcome.”)

  • ac_2 (“My colleagues seem to enjoy working with me.”)

  • ac_3 (“My colleagues respect my work-related opinions.”)

  • ac_4 (“My colleagues listen thoughtfully to my ideas.”)

The second multi-item measure is designed to measure role clarity, which is conceptually defined as “the extent to which an individual understands what is expected of them in their job or role.” The measure includes the following three items.

  • rc_1 (“I understand what my job-related responsibilities are.”)

  • rc_2 (“I understand what the organization expects of me in my job.”)

  • rc_3 (“My job responsibilities have been clearly communicated to me.”)

The third multi-item measure is designed to measure task mastery, which is conceptually defined as “the extent to which an individual feels self-efficacious in their role and feels confident in performing their job responsibilities.” The measure includes the following four items.

  • tm_1 (“I am confident I can perform my job responsibilities effectively.”)

  • tm_2 (“I am able to address unforeseen job-related challenges.”)

  • tm_3 (“When I apply effort at work, I perform well.”)

  • tm_4 (“I am proficient in the skills needed to perform my job.”)

The multi-item supervisor-rated annual performance evaluation measure is conceptually defined as “the extent to which an individual performs their key job-related responsibilities effectively.” For this measure, each item represents a distinguishable performance dimension, and supervisors used a 7-point response format, ranging from Does Not Meet Expectations (1) to Exceeds Expectations (7). For all items, higher scores indicate higher levels of the construct. The measure includes the following four supervisor-rated items.

  • perf_1 (“The employee provides effective customer service.”)

  • perf_2 (“The employee performs administrative responsibilities effectively.”)

  • perf_3 (“The employee collaborates effectively with team members.”)

  • perf_4 (“The employee communicates effectively with others.”)

70.2.4 Estimate One-Factor CFA Models

We will begin by estimating what is referred to as a one-factor confirmatory factor analysis (CFA) model. A one-factor model has a single latent factor (i.e., latent variable), which for our purposes will represent a psychological construct targeted by one of the multi-item survey measures. Each of the measure’s items will serve as an indicator of the latent factor.

Because confirmatory factor analysis (CFA) is a specific application of structural equation modeling (SEM), we will use functions from an R package developed for SEM called lavaan (latent variable analysis) to estimate our CFA models. Let’s begin by installing and accessing the lavaan package (if you haven’t already).

# Install package
install.packages("lavaan")
# Access package
library(lavaan)

In the following sections, we will learn how to estimate an over-identified one-factor model, followed by a just-identified model.

70.2.4.1 Estimate Over-Identified One-Factor Model

If you recall from the introduction to this chapter, in an over-identified model, the number of freely estimated parameters (e.g., factor loadings, variances) is less than the number of unique (non-redundant) sources of information, which means that the degrees of freedom (df) is greater than zero. In over-identified models, the model parameters can be estimated, and the model fit can be assessed.

The feelings of acceptance multi-item measure contains four items, which will serve as indicators for the latent factor associated with feelings of acceptance. A conventionally specified CFA model will be over-identified if the latent factor has at least four indicators, so given that our measure has four items, this model will be over-identified.

First, we must specify the one-factor model and assign it to an object that we can subsequently reference. To do so, we will do the following.

  1. Specify a name for the model object (e.g., cfa_mod), followed by the <- assignment operator.
  2. To the right of the <- assignment operator and within quotation marks (" "):
    • Specify a name for the latent factor (e.g., AC), followed by the =~ operator, which is used to indicate how a latent factor is measured. Anything that comes to the right of the =~ operator is an indicator (e.g., item) of the latent factor. Please note that the latent factor is not something that we directly observe, so it will not have a corresponding variable in our data frame object.
    • After the =~ operator, specify each indicator (i.e., item) associated with the latent factor, and to separate the indicators, insert the + operator. In this example, the four indicators of the feelings of acceptance latent factor (AC) are: ac_1 + ac_2 + ac_3 + ac_4. These are our observed variables, which conceptually are influenced by the underlying latent factor.
# Specify one-factor CFA model & assign to object
cfa_mod <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4
"

Second, now that we have specified the model object (cfa_mod), we are ready to estimate the model using the cfa function from the lavaan package. To do so, we will do the following.

  1. Specify a name for the fitted model object (e.g., cfa_fit), followed by the <- assignment operator.
  2. To the right of the <- assignment operator, type the name of the cfa function, and within the function parentheses include the following arguments.
    • As the first argument, insert the name of the model object that we specified above (cfa_mod).
    • As the second argument, insert the name of the data frame object to which the indicator variables in our model belong. That is, after data=, insert the name of the data frame object (df).
    • Note: The cfa function includes model estimation defaults, which explains why we had relatively few model specifications. For example, the function defaults to constraining the first indicator’s unstandardized factor loading to 1.0 for model fitting purposes, and constrains covariances between indicator error terms (i.e., uniquenesses) to zero (or in other words, specifies the error terms as uncorrelated).
# Estimate one-factor CFA model & assign to fitted model object
cfa_fit <- cfa(cfa_mod,      # name of specified model object
               data=df)      # name of data frame object
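
As an optional aside, if we would rather scale the latent factor by fixing its variance to 1.0 and freely estimating all four factor loadings, we can override the marker-variable default by setting the cfa function’s std.lv argument to TRUE. This is a sketch of an alternative scaling approach, not a step we need for this tutorial.

# Optional alternative: fix the latent factor variance to 1.0
# and freely estimate all four factor loadings
cfa_fit_alt <- cfa(cfa_mod,      # name of specified model object
                   data=df,      # name of data frame object
                   std.lv=TRUE)  # fix latent variance to 1.0 for scaling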

Third, we will use the summary function from base R to print the model results. To do so, we will apply the following arguments in the summary function parentheses.

  1. As the first argument, specify the name of the fitted model object that we created above (cfa_fit).
  2. As the second argument, set fit.measures=TRUE to obtain the model fit indices (e.g., CFI, TLI, RMSEA, SRMR).
  3. As the third argument, set standardized=TRUE to request the standardized parameter estimates for the model.
# Print summary of model results
summary(cfa_fit,             # name of fitted model object
        fit.measures=TRUE,   # request model fit indices
        standardized=TRUE)   # request standardized estimates
## lavaan 0.6.15 ended normally after 20 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                         8
## 
##   Number of observations                           750
## 
## Model Test User Model:
##                                                       
##   Test statistic                                 2.499
##   Degrees of freedom                                 2
##   P-value (Chi-square)                           0.287
## 
## Model Test Baseline Model:
## 
##   Test statistic                              2269.112
##   Degrees of freedom                                 6
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    1.000
##   Tucker-Lewis Index (TLI)                       0.999
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -3226.110
##   Loglikelihood unrestricted model (H1)      -3224.860
##                                                       
##   Akaike (AIC)                                6468.220
##   Bayesian (BIC)                              6505.181
##   Sample-size adjusted Bayesian (SABIC)       6479.778
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.018
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.077
##   P-value H_0: RMSEA <= 0.050                    0.746
##   P-value H_0: RMSEA >= 0.080                    0.040
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.006
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC =~                                                                 
##     ac_1              1.000                               0.786    0.806
##     ac_2              1.314    0.040   32.511    0.000    1.032    0.972
##     ac_3              1.061    0.041   26.145    0.000    0.834    0.816
##     ac_4              1.145    0.043   26.833    0.000    0.900    0.831
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              0.334    0.019   17.186    0.000    0.334    0.351
##    .ac_2              0.063    0.014    4.554    0.000    0.063    0.056
##    .ac_3              0.349    0.021   16.968    0.000    0.349    0.334
##    .ac_4              0.363    0.022   16.589    0.000    0.363    0.310
##     AC                0.617    0.047   13.177    0.000    1.000    1.000

Evaluating model fit. Now that we have the summary of our model results, we will begin by evaluating key pieces of the model fit information provided in the output.

  • Estimator. The function defaulted to using the maximum likelihood (ML) model estimator. When there are deviations from multivariate normality or categorical variables, the function may switch to another estimator.
  • Number of parameters. Eight parameters were estimated, which as we will see later correspond to factor loadings and (error) variance components.
  • Number of observations. Our effective sample size is 750. Had there been missing data on the observed variables, this portion of the output would have indicated how many of the observations were retained for the analysis given the missing data. How missing data are handled during estimation will depend on the type of missing data approach we apply, which is covered in more detail in the section called Estimating Models with Missing Data. By default, the cfa function applies listwise deletion in the presence of missing data.
  • Chi-square test. The chi-square (\(\chi^{2}\)) test assesses whether the model fits the data adequately, where a statistically significant \(\chi^{2}\) value (e.g., p \(<\) .05) indicates that the model does not fit the data well and a nonsignificant chi-square value (e.g., p \(\ge\) .05) indicates that the model fits the data reasonably well (Bagozzi and Yi 1988). The null hypothesis for the \(\chi^{2}\) test is that the model fits the data perfectly, and thus failing to reject the null model provides some confidence that the model fits the data reasonably close to perfectly. Of note, the \(\chi^{2}\) test is sensitive to sample size and non-normal variable distributions. For this model, we find the \(\chi^{2}\) test in the output section labeled Model Test User Model. Because the p-value is equal to or greater than .05, we fail to reject the null hypothesis that the model fits the data perfectly and thus conclude that the model fits the data acceptably (\(\chi^{2}\) = 2.499, df = 2, p = .287). Finally, note that because the model’s degrees of freedom (i.e., 2) is greater than zero, we can conclude that the model is over-identified.
  • Comparative fit index (CFI). As the name implies, the comparative fit index (CFI) is a type of comparative (or incremental) fit index, which means that CFI compares our estimated model to a baseline model, which is commonly referred to as the null or independence model. CFI is generally less sensitive to sample size than the chi-square (\(\chi^{2}\)) test. A CFI value greater than or equal to .95 generally indicates good model fit to the data, although some might relax that cutoff to .90. For this model, CFI is equal to 1.000, which indicates that the model fits the data acceptably.
  • Tucker-Lewis index (TLI). Like CFI, Tucker-Lewis index (TLI) is another type of comparative (or incremental) fit index. TLI is generally less sensitive to sample size than the chi-square test and tends to work well with smaller sample sizes; however, as Hu and Bentler (1999) noted, TLI may not be the best choice for smaller sample sizes (e.g., N \(<\) 250). A TLI value greater than or equal to .95 generally indicates good model fit to the data, although like CFI, some might relax that cutoff to .90. For this model, TLI is equal to .999, which indicates that the model fits the data acceptably.
  • Loglikelihood and Information Criteria. The section labeled Loglikelihood and Information Criteria contains model fit indices that are not directly interpretable on their own (e.g., loglikelihood, AIC, BIC). Rather, they become more relevant when we wish to compare the fit of two or more non-nested models. Given that, we will ignore this section in this tutorial.
  • Root mean square error of approximation (RMSEA). The root mean square error of approximation (RMSEA) is an absolute fit index that penalizes model complexity (e.g., models with a larger number of estimated parameters) and thus effectively rewards models that are more parsimonious. RMSEA values tend to be upwardly biased when the model degrees of freedom are fewer (i.e., when the model is closer to being just-identified); further, Hu and Bentler (1999) noted that RMSEA may not be the best choice for smaller sample sizes (e.g., N \(<\) 250). In general, an RMSEA value that is less than or equal to .06 indicates good model fit to the data, although some relax that cutoff to .08 or even .10. For this model, RMSEA is equal to .018, which indicates that the model fits the data acceptably.
  • Standardized root mean square residual. Like the RMSEA, the standardized root mean square residual (SRMR) is an example of an absolute fit index. An SRMR value that is less than or equal to .06 generally indicates good fit to the data, although some relax that cutoff to .08. For this model, SRMR is equal to .006, which indicates that the model fits the data acceptably.

In sum, the chi-square (\(\chi^{2}\)) test, CFI, TLI, RMSEA, and SRMR model fit indices all indicate that our model fit the data acceptably based on conventional rules and thresholds. This level of agreement, however, is not always going to occur. For instance, it is relatively common for the \(\chi^{2}\) test to indicate a lack of acceptable fit while one or more of the relative or absolute fit indices indicates that fit is acceptable given the limitations of the \(\chi^{2}\) test. Further, there may be instances where only two or three out of five of these model fit indices indicate acceptable model fit. In such instances, we should not necessarily toss out the model entirely, but we should consider whether there are model misspecifications. Of course, if all five model indices are well beyond the conventional thresholds (in a bad way), then our model likely has major issues, and we should not proceed with interpreting the parameter estimates. Fortunately, for our model, all five model fit indices signal that the model fit the data acceptably, and thus we should feel confident proceeding forward with interpreting and evaluating the parameter estimates.
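
Incidentally, if we wish to view just the fit indices discussed above without the full summary output, we can extract them with the fitMeasures function from the lavaan package, as in the following minimal sketch.

# Extract selected model fit indices from the fitted model object
fitMeasures(cfa_fit,
            c("chisq", "df", "pvalue",  # chi-square test
              "cfi", "tli",             # incremental fit indices
              "rmsea", "srmr"))         # absolute fit indices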

Evaluating parameter estimates. As noted above, our model showed acceptable fit to the data, so we can feel comfortable interpreting the parameter estimates. By default, the cfa function provides unstandardized parameter estimates, but if you recall, we also requested standardized parameter estimates. In the output, the unstandardized parameter estimates fall under the column titled Estimate, whereas the standardized estimates we’re interested in fall under the column titled Std.all.

  • Factor loadings. The output section labeled Latent Variables contains our factor loadings. For this model, the loadings represent the effect of the latent factor for feelings of acceptance on the four items from the associated measure.
    • Factor loading for ac_1. By default, the cfa function constrains the factor loading associated with the first indicator (which in this example is the observed variable ac_1) to 1.000 for model estimation purposes. Using the * operator, we can override that default in our model specification by preceding another indicator variable with 1* and freeing the first indicator with NA*; for example, we could have specified our model like this: AC =~ NA*ac_1 + 1*ac_2 + ac_3 + ac_4, which would have constrained the ac_2 indicator to 1.000 instead and freely estimated the loading for ac_1. Note, however, that there is a substantive standardized factor loading for ac_1 (\(\lambda\) = .806), but it lacks standard error (SE), z-value, and p-value estimates. We can still evaluate this standardized factor loading, though, and we can conclude that it falls within Bagozzi and Yi’s (1988) recommended range for factor loadings: .50 to .95. Thus, we can conclude that the factor loading for ac_1 looks acceptable.
    • Factor loading for ac_2. The standardized factor loading for ac_2 (\(\lambda\) = .972, p < .001) falls just beyond the upper limit of Bagozzi and Yi’s (1988) recommended range of .50 to .95; however, this is not necessarily an issue. It could mean that this is just a very strong indicator of the construct feelings of acceptance. Let’s retain this item, especially given that the model fit to the data was acceptable in accordance with all common model fit indices.
    • Factor loading for ac_3. The standardized factor loading for ac_3 (\(\lambda\) = .816, p < .001) falls within Bagozzi and Yi’s (1988) recommended range of .50 to .95, so we’ll consider this to be another acceptable indicator of our focal latent factor.
    • Factor loading for ac_4. The standardized factor loading for ac_4 (\(\lambda\) = .831, p < .001) falls within Bagozzi and Yi’s (1988) recommended range of .50 to .95, so we’ll consider this to be another acceptable indicator of our focal latent factor.
  • Variance components. The output section labeled Variances contains the (error) variance estimates for each observed indicator (i.e., item) of the latent factor and for the latent factor itself. As was the case with the factor loadings, we can view the standardized and unstandardized parameter estimates.
    • Error variances for indicators. The estimates associated with the four indicator variables represent the error variances. Sometimes these are referred to as residual variances, disturbance terms, or uniquenesses. The standardized estimates show that the error variances range from .056 to .351, which can be interpreted as proportions of the variance not explained by the latent factor. For example, the latent factor AC left only 5.6% of the variance in the indicator ac_2 unexplained, which is excellent; this suggests that 94.4% (100% - 5.6%) of the variance in indicator ac_2 was explained by the latent factor AC. In general, error variances for indicators that are less than .50 are considered acceptable (a quick way to obtain the complementary explained-variance values is shown after this list).
    • Variance of the latent factor. The variance estimate for the latent factor can provide an indication of the latent factor’s variability; however, its value depends on the scaling of the factor loadings, and generally it is not a point of interest when evaluating CFA models.
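
As referenced in the list above, the proportion of variance in each indicator that is explained by the latent factor is simply 1 minus the standardized error variance; we can request these R-squared values directly using the inspect function from the lavaan package.

# Print proportion of variance explained (R-squared) for each indicator
inspect(cfa_fit,    # name of fitted model object
        "rsquare")  # request R-squared values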

Within the semTools package, there are two additional diagnostic tools that we can apply to our model. Specifically, the AVE and compRelSEM functions allow us to estimate the average variance extracted (AVE) (Fornell and Larcker 1981) and the composite (construct) reliability (CR) (Bentler 1968). If you haven’t already, please install and access the semTools package.

# Install package
install.packages("semTools")
# Access package
library(semTools)

To estimate AVE, we simply specify the name of the AVE function, and within the function parentheses, we insert the name of our fitted CFA model object (cfa_fit).

# Estimate average variance extracted (AVE)
AVE(cfa_fit)
##    AC 
## 0.742

Average variance extracted (AVE). The AVE estimate was .742, which exceeded the conventional threshold (\(\ge\) .50). We can conclude that AVE for the four-item measurement model was acceptable.

# Estimate composite/construct reliability (CR)
compRelSEM(cfa_fit)
##    AC 
## 0.919

Composite reliability (CR). The CR estimate was .919, which exceeded the conventional threshold for acceptable reliability (\(\ge\) .70) as well as the more relaxed “questionable” threshold (\(\ge\) .60). We can conclude that the four-item measurement model showed acceptable reliability.

Write-up of the results. As part of a new-employee onboarding survey administered 1 month after employees’ respective start dates, we assessed new employees on a multi-item measure of feelings of acceptance. Using confirmatory factor analysis (CFA), we evaluated the measurement structure for a four-item measure of feelings of acceptance, where each item served as an indicator for the feelings of acceptance latent factor; we did not allow indicator error variances to covary (i.e., the associations were constrained to zero) and, by default, the first indicator of the latent factor (i.e., ac_1) was constrained to 1 for estimation purposes. The one-factor model was estimated using the maximum likelihood (ML) estimator and a sample size of 750 new employees. Missing data were not a concern. We evaluated the model’s fit to the data using the chi-square (\(\chi^{2}\)) test, CFI, TLI, RMSEA, and SRMR model fit indices. The \(\chi^{2}\) test indicated that the model did not fit the data worse than a perfectly fitting model (\(\chi^{2}\) = 2.499, df = 2, p = .287), which provided an initial indication that the model fit was acceptable. Further, the CFI and TLI estimates were 1.000 and .999, respectively, which exceeded the more stringent threshold of .95, thereby indicating acceptable model fit. Similarly, the RMSEA and SRMR estimates were .018 and .006, respectively, which fell below the more stringent threshold of .06, thereby indicating acceptable model fit. The freely estimated factor loadings associated with the ac_2, ac_3, and ac_4 items were all statistically significantly different from zero (p < .001), and the standardized factor loadings for the ac_1, ac_3, and ac_4 items (.806, .816, and .831, respectively) fell within the target .50-.95 range; the standardized factor loading for the ac_2 item (.972) exceeded the upper limit of the target range. Given the model’s aforementioned acceptable fit to the data, we decided to keep this item in the measurement model. The latent factor variance estimate was statistically significantly greater than zero (p < .001), indicating that the latent factor showed a statistically significant amount of variability. The error variance estimates for the four items ranged from .056 to .351, which all fell below the target threshold of .50, thereby indicating that it is unlikely that an unmodeled construct had an outsized influence on any of the four items. The average variance extracted (AVE) for the four items was .742, which exceeded the conventional threshold of .50 and thus was deemed acceptable. Finally, the composite reliability (CR) was .919, which exceeded the conventional threshold of .70 and thus was deemed acceptable. In sum, with the exception of a slightly elevated standardized factor loading for the ac_2 item, the measurement model for the four-item feelings of acceptance measure showed acceptable fit to the data, acceptable parameter estimates, and acceptable AVE and CR.

Visualize the path diagram. To visualize our CFA measurement model as a path diagram, we can use the semPaths function from the semPlot package. If you haven’t already, please install and access the semPlot package.

# Install package
install.packages("semPlot")
# Access package
library(semPlot)

While there are many arguments that can be used to refine the path diagram visualization, we will focus on just four to illustrate how the semPaths function works.

  1. As the first argument, insert the name of the fitted CFA model object (cfa_fit).
  2. As the second argument, specify what="std" to display just the standardized parameter estimates.
  3. As the third argument, specify weighted=FALSE to request that the visualization not weight the edges (e.g., lines) and other plot features.
  4. As the fourth argument, specify nCharNodes=0 in order to use the full names of latent and observed indicator variables instead of abbreviating them.
# Visualize the measurement model
semPaths(cfa_fit,         # name of fitted model object 
         what="std",      # display standardized parameter estimates
         weighted=FALSE,  # do not weight plot features
         nCharNodes=0)    # do not abbreviate names

The resulting CFA path diagram can be useful for interpreting the model specifications and the parameter estimates.

70.2.4.2 Estimate Just-Identified One-Factor Model

In the previous section, we evaluated the measurement structure for a four-item measure of feelings of acceptance, which resulted in an over-identified model (df > 0). In this section, we will review what happens when we specify a just-identified measurement model (df = 0).

For this example, we will evaluate the measurement model for the three-item measure of role clarity. As you can see below, we specified the three role clarity items as loading onto a latent factor for role clarity: RC =~ rc_1 + rc_2 + rc_3.
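
Before reviewing the output, we can anticipate that this model will be just-identified. With three indicators, there are \(i = \frac{3(3+1)}{2} = 6\) unique (non-redundant) sources of information, and the default specification yields \(k = 6\) free parameters (two freely estimated factor loadings, three error variances, and one latent factor variance), such that \(df = 6 - 6 = 0\).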

# Specify one-factor CFA model & assign to object
cfa_mod <- "
RC =~ rc_1 + rc_2 + rc_3
"

# Estimate one-factor CFA model & assign to fitted model object
cfa_fit <- cfa(cfa_mod,      # name of specified model object
               data=df)      # name of data frame object

# Print summary of model results
summary(cfa_fit,             # name of fitted model object
        fit.measures=TRUE,   # request model fit indices
        standardized=TRUE)   # request standardized estimates
## lavaan 0.6.15 ended normally after 26 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                         6
## 
##   Number of observations                           750
## 
## Model Test User Model:
##                                                       
##   Test statistic                                 0.000
##   Degrees of freedom                                 0
## 
## Model Test Baseline Model:
## 
##   Test statistic                               796.520
##   Degrees of freedom                                 3
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    1.000
##   Tucker-Lewis Index (TLI)                       1.000
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -4231.814
##   Loglikelihood unrestricted model (H1)      -4231.814
##                                                       
##   Akaike (AIC)                                8475.628
##   Bayesian (BIC)                              8503.349
##   Sample-size adjusted Bayesian (SABIC)       8484.296
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.000
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.000
##   P-value H_0: RMSEA <= 0.050                       NA
##   P-value H_0: RMSEA >= 0.080                       NA
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.000
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   RC =~                                                                 
##     rc_1              1.000                               1.457    0.766
##     rc_2              1.262    0.075   16.939    0.000    1.838    0.935
##     rc_3              0.711    0.046   15.333    0.000    1.035    0.569
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .rc_1              1.492    0.131   11.395    0.000    1.492    0.413
##    .rc_2              0.485    0.171    2.846    0.004    0.485    0.126
##    .rc_3              2.236    0.127   17.571    0.000    2.236    0.676
##     RC                2.122    0.200   10.595    0.000    1.000    1.000
# Estimate average variance extracted (AVE)
AVE(cfa_fit)
##    RC 
## 0.609
# Estimate composite/construct reliability (CR)
compRelSEM(cfa_fit)
##    RC 
## 0.817

When reviewing the model output, note that the degrees of freedom (df) is equal to zero, which indicates that the model is just-identified. When a model is just-identified, our go-to model fit indices (chi-square test, CFI, TLI, RMSEA, SRMR) become irrelevant because the model fits the data perfectly from the viewpoint of those indices. The parameters, however, can be estimated as usual. Similarly, the average variance extracted (AVE) and composite reliability (CR) can also be interpreted meaningfully. Please refer to the previous section for guidance on how to interpret the parameter, AVE, and CR estimates.

We can also visualize the just-identified CFA model as a path diagram, just like we did with the over-identified model.

# Visualize the measurement model
semPaths(cfa_fit,         # name of fitted model object 
         what="std",      # display standardized parameter estimates
         weighted=FALSE,  # do not weight plot features
         nCharNodes=0)    # do not abbreviate names

70.2.5 Estimate Multi-Factor CFA Models

In the previous sections, we explored how to specify and estimate one-factor CFA models, or in other words, models with a single latent factor and all indicators loading onto that factor. In this section, we will learn how to specify multi-factor CFA models, which are models with two or more latent factors. Multi-factor models are useful for determining whether theoretically distinguishable constructs are empirically distinguishable.

The new-employee onboarding survey data include responses to three multi-item measures of new-employee adjustment into the organization: feelings of acceptance, role clarity, and task mastery. We modeled feelings of acceptance and role clarity as one-factor models in the previous sections. In this section, we will specify a three-factor model in which three latent factors correspond to feelings of acceptance, role clarity, and task mastery, and each measure’s items load onto their respective latent factor. In doing so, we can determine whether a three-factor model fits the data acceptably.

For more in-depth guidance on how to specify and evaluate a CFA model, please refer back to the earlier section on estimating one-factor CFA models.

When we specify a multi-factor model, we simply repeat the process we used for a one-factor model; in this three-factor example, we will specify three latent factors. By default, the cfa function will freely estimate the covariance parameters between the three latent factors, constrain the factor loading of the first indicator for each latent factor to 1, and constrain the covariances between the indicator error variance components to zero. Because we will learn how to compare nested models in the following section, let’s name the specified model object cfa_mod_3 and the fitted model object cfa_fit_3 to communicate that we are evaluating a three-factor model. Everything else is specified in the same manner as the one-factor models from the previous sections.
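As an aside, those defaults can be written out explicitly in the lavaan model syntax. The sketch below is an equivalent way to express the specification that follows; the covariance lines are redundant with the cfa function’s defaults and appear only for illustration.

# For illustration only: the same three-factor model with the default
# latent-factor covariances written out explicitly
cfa_mod_3_explicit <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4
RC =~ rc_1 + rc_2 + rc_3
TM =~ tm_1 + tm_2 + tm_3 + tm_4
AC ~~ RC
AC ~~ TM
RC ~~ TM
"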

# Specify three-factor CFA model & assign to object
cfa_mod_3 <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4
RC =~ rc_1 + rc_2 + rc_3
TM =~ tm_1 + tm_2 + tm_3 + tm_4
"

# Estimate three-factor CFA model & assign to fitted model object
cfa_fit_3 <- cfa(cfa_mod_3,  # name of specified model object
                 data=df)    # name of data frame object

# Print summary of model results
summary(cfa_fit_3,           # name of fitted model object
        fit.measures=TRUE,   # request model fit indices
        standardized=TRUE)   # request standardized estimates
## lavaan 0.6.15 ended normally after 48 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        25
## 
##   Number of observations                           750
## 
## Model Test User Model:
##                                                       
##   Test statistic                               178.198
##   Degrees of freedom                                41
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              5325.638
##   Degrees of freedom                                55
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.974
##   Tucker-Lewis Index (TLI)                       0.965
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -12416.134
##   Loglikelihood unrestricted model (H1)     -12327.035
##                                                       
##   Akaike (AIC)                               24882.269
##   Bayesian (BIC)                             24997.770
##   Sample-size adjusted Bayesian (SABIC)      24918.386
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.067
##   90 Percent confidence interval - lower         0.057
##   90 Percent confidence interval - upper         0.077
##   P-value H_0: RMSEA <= 0.050                    0.003
##   P-value H_0: RMSEA >= 0.080                    0.016
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.039
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC =~                                                                 
##     ac_1              1.000                               0.787    0.807
##     ac_2              1.309    0.040   32.555    0.000    1.030    0.969
##     ac_3              1.063    0.040   26.262    0.000    0.836    0.818
##     ac_4              1.145    0.043   26.907    0.000    0.901    0.832
##   RC =~                                                                 
##     rc_1              1.000                               1.487    0.782
##     rc_2              1.211    0.067   18.058    0.000    1.801    0.916
##     rc_3              0.704    0.046   15.427    0.000    1.047    0.576
##   TM =~                                                                 
##     tm_1              1.000                               1.348    0.738
##     tm_2              1.094    0.043   25.539    0.000    1.475    0.907
##     tm_3              1.327    0.050   26.321    0.000    1.788    0.949
##     tm_4              0.950    0.049   19.293    0.000    1.280    0.700
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC ~~                                                                 
##     RC                0.257    0.049    5.224    0.000    0.220    0.220
##     TM                0.261    0.043    6.006    0.000    0.246    0.246
##   RC ~~                                                                 
##     TM                0.296    0.083    3.576    0.000    0.148    0.148
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              0.332    0.019   17.154    0.000    0.332    0.349
##    .ac_2              0.069    0.014    5.004    0.000    0.069    0.061
##    .ac_3              0.345    0.020   16.905    0.000    0.345    0.330
##    .ac_4              0.360    0.022   16.549    0.000    0.360    0.307
##    .rc_1              1.401    0.125   11.191    0.000    1.401    0.388
##    .rc_2              0.623    0.153    4.081    0.000    0.623    0.161
##    .rc_3              2.213    0.126   17.612    0.000    2.213    0.669
##    .tm_1              1.521    0.085   17.807    0.000    1.521    0.456
##    .tm_2              0.469    0.041   11.529    0.000    0.469    0.177
##    .tm_3              0.351    0.051    6.920    0.000    0.351    0.099
##    .tm_4              1.705    0.094   18.130    0.000    1.705    0.510
##     AC                0.619    0.047   13.201    0.000    1.000    1.000
##     RC                2.212    0.200   11.057    0.000    1.000    1.000
##     TM                1.816    0.157   11.564    0.000    1.000    1.000
# Estimate average variance extracted (AVE)
AVE(cfa_fit_3)
##    AC    RC    TM 
## 0.743 0.607 0.686
# Estimate composite/construct reliability (CR)
compRelSEM(cfa_fit_3)
##    AC    RC    TM 
## 0.920 0.818 0.884
# Visualize the measurement model
semPaths(cfa_fit_3,       # name of fitted model object 
         what="std",      # display standardized parameter estimates
         weighted=FALSE,  # do not weight plot features
         nCharNodes=0)    # do not abbreviate names

Evaluating model fit. Now that we have the summary of our model results, we will begin by evaluating key pieces of the model fit information provided in the output.

  • Estimator. The cfa function defaulted to the maximum likelihood (ML) estimator. When indicators are declared as ordered categorical variables, the function switches to a different default estimator, and when the data deviate notably from multivariate normality, we may wish to request a robust estimator (e.g., estimator="MLR").
  • Number of parameters. Twenty-five parameters were estimated, which, as we will see later in the output, correspond to the eight freely estimated factor loadings, the eleven indicator (error) variances, the three latent factor variances, and the three latent factor covariances.
  • Number of observations. Our effective sample size is 750. Had there been missing data on the observed variables, this portion of the output would have indicated how many of the observations were retained for the analysis. How missing data are handled during estimation depends on the missing data approach we apply, which is covered in more detail in the section called Estimating Models with Missing Data. By default, the cfa function applies listwise deletion in the presence of missing data (see the sketch following this list for an alternative).
  • Chi-square test. The chi-square (\(\chi^{2}\)) test assesses whether the model fits the data adequately, where a statistically significant \(\chi^{2}\) value (e.g., p \(<\) .05) indicates that the model does not fit the data well and a nonsignificant \(\chi^{2}\) value (e.g., p \(\ge\) .05) indicates that the model fits the data reasonably well (Bagozzi and Yi 1988). The null hypothesis for the \(\chi^{2}\) test is that the model fits the data perfectly, and thus failing to reject the null hypothesis provides some confidence that the model fits the data reasonably close to perfectly. Of note, the \(\chi^{2}\) test is sensitive to sample size and non-normal variable distributions. For this model, we find the \(\chi^{2}\) test in the output section labeled Model Test User Model. Because the p-value is less than .05, we reject the null hypothesis that the model fits the data perfectly and thus conclude that the model does not fit the data acceptably (\(\chi^{2}\) = 178.198, df = 41, p < .001), at least according to this test. Finally, because the model’s degrees of freedom (i.e., 41) is greater than zero, we can conclude that the model is over-identified.
  • Comparative fit index (CFI). As the name implies, the comparative fit index (CFI) is a type of comparative (or incremental) fit index, which means that CFI compares our estimated model to a baseline model, which is commonly referred to as the null or independence model. CFI is generally less sensitive to sample size than the chi-square (\(\chi^{2}\)) test. A CFI value greater than or equal to .95 generally indicates good model fit to the data, although some might relax that cutoff to .90. For this model, CFI is equal to .974, which indicates that the model fits the data acceptably.
  • Tucker-Lewis index (TLI). Like CFI, the Tucker-Lewis index (TLI) is another type of comparative (or incremental) fit index. TLI is generally less sensitive to sample size than the chi-square test; however, as Hu and Bentler (1999) noted, TLI may not be the best choice for smaller sample sizes (e.g., N \(<\) 250). A TLI value greater than or equal to .95 generally indicates good model fit to the data, although like CFI, some might relax that cutoff to .90. For this model, TLI is equal to .965, which indicates that the model fits the data acceptably.
  • Loglikelihood and Information Criteria. The section labeled Loglikelihood and Information Criteria contains model fit indices that are not directly interpretable on their own (e.g., loglikelihood, AIC, BIC). Rather, they become relevant when we wish to compare the fit of two or more non-nested models. Given that, we will ignore this section in this tutorial.
  • Root mean square error of approximation (RMSEA). The root mean square error of approximation (RMSEA) is an absolute fit index that penalizes model complexity (e.g., models with a larger number of estimated parameters) and thus effectively rewards more parsimonious models. RMSEA values tend to be upwardly biased when the model degrees of freedom are few (i.e., when the model is closer to being just-identified); further, Hu and Bentler (1999) noted that RMSEA may not be the best choice for smaller sample sizes (e.g., N \(<\) 250). In general, an RMSEA value that is less than or equal to .06 indicates good model fit to the data, although some relax that cutoff to .08 or even .10. For this model, RMSEA is equal to .067, which indicates that the model fits the data acceptably according to the more relaxed threshold of .08 but not according to the more stringent threshold of .06.
  • Standardized root mean square residual. Like the RMSEA, the standardized root mean square residual (SRMR) is an example of an absolute fit index. An SRMR value that is less than or equal to .06 generally indicates good fit to the data, although some relax that cutoff to .08. For this model, SRMR is equal to .039, which indicates that the model fits the data acceptably.
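As promised in the Number of observations bullet above, here is a minimal sketch of how we could retain incomplete cases via full-information maximum likelihood (FIML) if the data did contain missing values; the object name cfa_fit_3_fiml is just an illustrative label, and FIML’s missing-at-random assumptions would still need to be defensible.

# Sketch: retain incomplete cases via full-information maximum likelihood
cfa_fit_3_fiml <- cfa(cfa_mod_3,       # name of specified model object
                      data=df,         # name of data frame object
                      missing="fiml")  # FIML instead of listwise deletion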

In sum, the chi-square (\(\chi^{2}\)) test indicated that the model did not fit the data acceptably, but as noted above, this test is sensitive to sample size and non-normality. In contrast, the CFI, TLI, and SRMR fit indices indicated that the model fit the data acceptably, and RMSEA fell just above the stringent cutoff but below the relaxed cutoff, indicating marginal fit. Collectively, these five indices suggest that the model fit the data more-or-less acceptably. Hypothetically, as a follow-up, we might reconsider how the model is specified and make changes as needed, though ideally only if those changes in specification were theoretically informed. That being said, three out of five indices suggested acceptable model fit under the more stringent cutoffs, and a fourth indicated acceptable fit under a more relaxed cutoff; thus, we can feel relatively confident proceeding with interpreting and evaluating the parameter estimates.
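If you prefer to pull only the fit indices discussed above rather than printing the full summary, lavaan’s fitMeasures function accepts a vector of index names; a quick sketch:

# Extract only the fit indices discussed above
fitMeasures(cfa_fit_3,   # name of fitted model object
            c("chisq","df","pvalue","cfi","tli","rmsea","srmr"))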

Evaluating parameter estimates. As noted above, our model mostly fit the data acceptably, so we can feel reasonably comfortable interpreting and evaluating the parameter estimates. By default, the cfa function provides unstandardized parameter estimates, but recall that we also requested standardized parameter estimates. In the output, the unstandardized parameter estimates fall under the column titled Estimate, whereas the standardized estimates we’re interested in fall under the column titled Std.all.

  • Factor loadings. The output section labeled Latent Variables contains our factor loadings. For this model, the loadings represent the effects of the three latent factors on their respective items. By default, the cfa function constrains the factor loading associated with the first indicator of each latent factor to 1 for model estimation purposes. Note, however, that the output still reports substantive standardized factor loadings for those first indicators; they simply lack standard error (SE), z-value, and p-value estimates (the sketch following this list shows one way to obtain them). We can still evaluate those standardized factor loadings. With the exception of the ac_2 item’s factor loading on the AC latent factor, all standardized factor loadings (.576-.949) fell within Bagozzi and Yi’s (1988) recommended range of .50-.95; the standardized factor loading for the ac_2 item was .969, which is just above the upper limit of the recommended range. This could simply mean that ac_2 is a very strong indicator of the construct feelings of acceptance, so let’s retain the item, especially given that the model’s fit to the data was mostly acceptable.
  • Covariances. The output section labeled Covariances contains the pairwise covariance estimates among the three latent factors. As was the case with the factor loadings, we can view the standardized and unstandardized parameter estimates, where the standardized covariances can be interpreted as correlations. First, the correlation between AC and RC was .220 and statistically significant (p < .001), which can be considered small-to-medium in terms of practical significance. Second, the correlation between AC and TM was .246 and statistically significant (p < .001), which can also be considered small-to-medium in terms of practical significance. Finally, the correlation between RC and TM was .148 and statistically significant (p < .001), which can be considered small in terms of practical significance.
  • Variances. The output section labeled Variances contains the (error) variance estimates for each observed indicator (i.e., item) and for the latent factors themselves. As was the case with the factor loadings, we can view the standardized and unstandardized parameter estimates.
    • Error variances for indicators. The estimates associated with the eleven indicator variables represent the error variances, which are sometimes referred to as residual variances, disturbance terms, or uniquenesses. The standardized estimates showed that the error variances ranged from .061 to .669, which can be interpreted as the proportions of indicator variance not explained by the latent factors. For example, the latent factor AC left only 6.1% of the variance in the indicator ac_2 unexplained, which is excellent; this suggests that 93.9% (100% - 6.1%) of the variance in ac_2 was explained by AC. With the exceptions of the rc_3 item’s error variance (.669) and the tm_4 item’s error variance (.510), the indicator error variances were less than the recommended .50 threshold, which suggests that unmodeled constructs likely did not have a notable impact on the vast majority of the indicators.
    • Variance of the latent factors. The variance estimate for each latent factor can provide an indication of that latent factor’s variability; however, its value depends on how the latent factor is scaled (here, by fixing the first loading to 1), and it is generally not a point of interest when evaluating CFA models.
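As noted in the factor loadings bullet above, the summary output does not report SE, z-value, or p-value estimates for loadings whose unstandardized values were fixed to 1. One way to obtain standard errors and confidence intervals for all standardized estimates, including those fixed-loading indicators, is lavaan’s standardizedSolution function:

# Request SEs and confidence intervals for all standardized estimates,
# including loadings whose unstandardized values were fixed to 1
standardizedSolution(cfa_fit_3)  # name of fitted model object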

Average variance extracted (AVE). The AVE estimates for feelings of acceptance (AC), role clarity (RC), and task mastery (TM) were .743, .607, and .686, respectively, which exceeded the conventional threshold (\(\ge\) .50). We can conclude that the AVE estimates for the three latent factors are all acceptable.

Composite reliability (CR). The CR estimates for feelings of acceptance (AC), role clarity (RC), and task mastery (TM) were .920, .818, and .884, respectively, which exceeded the conventional threshold for acceptable reliability (\(\ge\) .70) as well as the more relaxed “questionable” threshold (\(\ge\) .60). We can conclude that the three latent factors demonstrated acceptable reliability.
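If you’re curious where the AVE and CR values come from, both can be approximated by hand from the unstandardized output above using the classic Fornell and Larcker (1981) formulas; the sketch below does so for the RC latent factor, and small discrepancies from the semTools estimates can arise from rounding the printed values.

# Hand-check AVE and CR for RC using the unstandardized estimates above
lambda <- c(1.000, 1.211, 0.704)  # RC factor loadings
psi    <- 2.212                   # RC latent factor variance
theta  <- c(1.401, 0.623, 2.213)  # rc_1-rc_3 error variances

# AVE: variance explained by the factor relative to total indicator variance
sum(lambda^2 * psi) / (sum(lambda^2 * psi) + sum(theta))      # approx. .61

# CR: reliability of the unit-weighted composite of the indicators
sum(lambda)^2 * psi / (sum(lambda)^2 * psi + sum(theta))      # approx. .82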

Write-up of the results. As part of a new-employee onboarding survey administered one month after employees’ respective start dates, we assessed new employees on three multi-item measures targeting feelings of acceptance, role clarity, and task mastery. Using confirmatory factor analysis (CFA), we evaluated the measurement structure of the three multi-item measures, where each item served as an indicator of its respective latent factor; we did not allow indicator error variances to covary (i.e., those associations were constrained to zero), and, by default, the factor loading of the first indicator of each latent factor (i.e., ac_1, rc_1, tm_1) was constrained to 1 for estimation purposes while the latent-factor covariances were estimated freely. The three-factor model was estimated using the maximum likelihood (ML) estimator and a sample size of 750 new employees. Missing data were not a concern. We evaluated the model’s fit to the data using the chi-square (\(\chi^{2}\)) test, CFI, TLI, RMSEA, and SRMR model fit indices. The \(\chi^{2}\) test indicated that the model fit the data worse than a perfectly fitting model (\(\chi^{2}\) = 178.198, df = 41, p < .001). The CFI and TLI estimates were .974 and .965, respectively, which exceeded the more stringent threshold of .95, thereby indicating acceptable model fit. Similarly, the SRMR estimate was .039, which fell below the more stringent threshold of .06, thereby indicating acceptable model fit. The RMSEA estimate was .067, which was above the more stringent threshold of .06 but below the more relaxed threshold of .08. Collectively, the model fit information indicated that the model fit the data mostly acceptably. The freely estimated factor loadings were all statistically significantly different from zero (p < .001), and the standardized factor loadings ranged from .576 to .969, with only the standardized factor loading for the ac_2 item (.969) slightly exceeding the upper limit of the target range of .50-.95. Given the model’s aforementioned acceptable fit to the data, we decided to keep this item in the measurement model. Regarding the standardized covariance estimates, the correlation between the feelings of acceptance and role clarity latent factors was .220, statistically significant (p < .001), and small-to-medium in terms of practical significance; the correlation between the feelings of acceptance and task mastery latent factors was .246, statistically significant (p < .001), and small-to-medium in terms of practical significance; and the correlation between the role clarity and task mastery latent factors was .148, statistically significant (p < .001), and small in terms of practical significance. The latent factor variance estimates were all statistically significant (p < .001), indicating that each latent factor showed a statistically significant amount of variability. The standardized error variance estimates ranged from .061 to .669; with the exceptions of the rc_3 item’s error variance (.669) and the tm_4 item’s error variance (.510), the indicator error variances were less than the recommended .50 threshold, suggesting that unmodeled constructs likely did not have a notable impact on the vast majority of the indicators. The average variance extracted (AVE) estimates for feelings of acceptance, role clarity, and task mastery were .743, .607, and .686, respectively, which exceeded the conventional threshold of .50 and thus were deemed acceptable. The composite reliability (CR) estimates for feelings of acceptance, role clarity, and task mastery were .920, .818, and .884, respectively, which exceeded the conventional threshold of .70 and thus were deemed acceptable. In sum, aside from a slightly elevated standardized factor loading for the ac_2 item and two indicator error variance estimates above .50, this three-factor measurement model for feelings of acceptance, role clarity, and task mastery showed mostly acceptable fit to the data, acceptable parameter estimates, and acceptable AVE and CR estimates.

70.2.6 Nested Model Comparisons

When evaluating measurement structures, there are a variety of circumstances in which we might wish to compare nested models. A nested model contains the same parameters as the full model but imposes additional constraints on one or more of them.

In this section, we will evaluate whether the three-factor model we estimated in the previous section fits significantly better than models with additional constraints.
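One quick way to see the nesting logic in action, once the models in the following subsections have been fitted, is to compare degrees of freedom: a nested model estimates fewer free parameters, so it gains exactly as many degrees of freedom as the number of constraints imposed. Here is a sketch using the first two-factor model from the next subsection:

# The df difference equals the number of constraints added by the nested model
fitMeasures(cfa_fit_2a, "df") - fitMeasures(cfa_fit_3, "df")  # 2 in this case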

70.2.6.1 Two-Factor Model (Version a)

We’ll begin by specifying an alternative model in which we load the items associated with the feelings of acceptance and role clarity measures onto a single factor labeled AC_RC, while keeping the task mastery items loaded on a latent factor labeled TM. In this way, we’ve created a two-factor model, and we’ll name this model specification object cfa_mod_2a.

# Specify two-factor CFA model & assign to object (version a)
cfa_mod_2a <- "
AC_RC =~ ac_1 + ac_2 + ac_3 + ac_4 + rc_1 + rc_2 + rc_3
TM =~ tm_1 + tm_2 + tm_3 + tm_4
"

# Estimate two-factor CFA model & assign to fitted model object
cfa_fit_2a <- cfa(cfa_mod_2a, # name of specified model object
                  data=df)    # name of data frame object    

At this point, you may be wondering: “How is the two-factor model (cfa_mod_2a) nested within the original three-factor model (cfa_mod_3)?” That is, it may not look as though we applied any direct constraints to the three-factor model to arrive at the two-factor model. Perhaps the following alternative approach to specifying the two-factor model will clear up any confusion. Specifically, instead of collapsing the feelings of acceptance and role clarity items onto a single factor labeled AC_RC, we will retain the original three-factor model specification but add constraints to the model. First, we will set the covariance between the AC and RC latent factors to 1, which we can achieve by specifying: AC ~~ 1*RC. If you recall, the ~~ operator is used to specify covariances. Second, we will add the std.lv=TRUE argument to our cfa function to set all of the latent factor variances to 1 (i.e., standardized), so that the fixed covariance of 1 implies a correlation of 1. Finally, because the AC and RC latent factors will be set to act as a single factor, we need to constrain their respective covariances with TM to be equal, which we can achieve by specifying: AC ~~ cov*TM and RC ~~ cov*TM; note that cov is an arbitrary constraint label that I’m applying, and you could name the constraint whatever you’d like so long as the same name is used across the two covariances and the label is followed by the * operator.

# Alternative approach: Specify two-factor CFA model & assign to object (version a)
cfa_mod_2a <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4 
RC =~ rc_1 + rc_2 + rc_3
TM =~ tm_1 + tm_2 + tm_3 + tm_4

# Constrain AC & RC covariance to 1
AC ~~ 1*RC

# Constrain covariances to be equal
AC ~~ cov*TM
RC ~~ cov*TM
"

# Estimate two-factor CFA model & assign to fitted model object
cfa_fit_2a <- cfa(cfa_mod_2a,  # name of specified model object
                  data=df,     # name of data frame object 
                  std.lv=TRUE) # constrain factor variances to 1   
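If you’d like to verify that this constrained three-factor specification is indeed equivalent to the collapsed two-factor specification, you can compare their fit statistics after estimation; both should reproduce the same chi-square test statistic and degrees of freedom (shown in the output further below), up to estimation precision.

# Optional check: both specifications should yield the same chi-square and df
fitMeasures(cfa_fit_2a,        # name of fitted model object
            c("chisq","df"))   # request chi-square and df only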

Because the alternative approach to specifying the two-factor model is more time-intensive, let’s revert to the initial specification and estimate the model.

# Specify two-factor CFA model & assign to object (version a)
cfa_mod_2a <- "
AC_RC =~ ac_1 + ac_2 + ac_3 + ac_4 + rc_1 + rc_2 + rc_3
TM =~ tm_1 + tm_2 + tm_3 + tm_4
"

# Estimate two-factor CFA model & assign to fitted model object
cfa_fit_2a <- cfa(cfa_mod_2a, # name of specified model object
                  data=df)    # name of data frame object    

# Print summary of model results
summary(cfa_fit_2a,           # name of fitted model object
        fit.measures=TRUE,    # request model fit indices
        standardized=TRUE)    # request standardized estimates
## lavaan 0.6.15 ended normally after 42 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        23
## 
##   Number of observations                           750
## 
## Model Test User Model:
##                                                       
##   Test statistic                               943.393
##   Degrees of freedom                                43
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              5325.638
##   Degrees of freedom                                55
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.829
##   Tucker-Lewis Index (TLI)                       0.781
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -12798.732
##   Loglikelihood unrestricted model (H1)     -12327.035
##                                                       
##   Akaike (AIC)                               25643.463
##   Bayesian (BIC)                             25749.725
##   Sample-size adjusted Bayesian (SABIC)      25676.691
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.167
##   90 Percent confidence interval - lower         0.158
##   90 Percent confidence interval - upper         0.176
##   P-value H_0: RMSEA <= 0.050                    0.000
##   P-value H_0: RMSEA >= 0.080                    1.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.125
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC_RC =~                                                              
##     ac_1              1.000                               0.787    0.807
##     ac_2              1.305    0.040   32.530    0.000    1.027    0.967
##     ac_3              1.064    0.040   26.339    0.000    0.838    0.820
##     ac_4              1.145    0.043   26.921    0.000    0.902    0.833
##     rc_1              0.444    0.090    4.956    0.000    0.350    0.184
##     rc_2              0.528    0.092    5.710    0.000    0.415    0.211
##     rc_3              0.331    0.086    3.853    0.000    0.261    0.143
##   TM =~                                                                 
##     tm_1              1.000                               1.348    0.738
##     tm_2              1.094    0.043   25.546    0.000    1.475    0.907
##     tm_3              1.326    0.050   26.322    0.000    1.787    0.949
##     tm_4              0.950    0.049   19.303    0.000    1.280    0.700
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC_RC ~~                                                              
##     TM                0.265    0.044    6.078    0.000    0.249    0.249
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              0.331    0.019   17.112    0.000    0.331    0.348
##    .ac_2              0.074    0.014    5.390    0.000    0.074    0.065
##    .ac_3              0.342    0.020   16.836    0.000    0.342    0.327
##    .ac_4              0.359    0.022   16.514    0.000    0.359    0.307
##    .rc_1              3.492    0.181   19.332    0.000    3.492    0.966
##    .rc_2              3.693    0.191   19.321    0.000    3.693    0.955
##    .rc_3              3.241    0.168   19.345    0.000    3.241    0.979
##    .tm_1              1.521    0.085   17.802    0.000    1.521    0.456
##    .tm_2              0.468    0.041   11.511    0.000    0.468    0.177
##    .tm_3              0.352    0.051    6.932    0.000    0.352    0.099
##    .tm_4              1.704    0.094   18.126    0.000    1.704    0.510
##     AC_RC             0.620    0.047   13.214    0.000    1.000    1.000
##     TM                1.817    0.157   11.567    0.000    1.000    1.000
# Visualize the measurement model
semPaths(cfa_fit_2a,      # name of fitted model object 
         what="std",      # display standardized parameter estimates
         weighted=FALSE,  # do not weight plot features
         nCharNodes=0)    # do not abbreviate names

Just as we did with the three-factor model in the previous section, we would first evaluate the model fit, and if the model fit appears acceptable, then we would evaluate the parameter estimates. To save space, however, we will skip directly to comparing this version of the two-factor model to our original three-factor model, which I summarize in the table below.

Model \(\chi^{2}\) df p CFI TLI RMSEA SRMR
3-Factor Model 178.198 41 < .001 .974 .965 .067 .039
2a-Factor Model 943.393 43 < .001 .829 .781 .167 .125

As you can see above, the first version (version a) of the two-factor model fits the data notably worse than the three-factor model, which suggests that the three-factor model is probably a better representation of the measurement structure.

As an additional test, we can perform a nested model comparison using the chi-square (\(\chi^{2}\)) difference test, which is a form of likelihood ratio (LR) test. To perform this test, we’ll apply the anova function from base R. As the first argument, we’ll insert the name of our three-factor fitted model object (cfa_fit_3), and as the second argument, we’ll insert the name of our two-factor fitted model object (cfa_fit_2a).

# Nested model comparison using chi-square difference test
anova(cfa_fit_3, cfa_fit_2a)
## 
## Chi-Squared Difference Test
## 
##            Df   AIC   BIC  Chisq Chisq diff  RMSEA Df diff            Pr(>Chisq)    
## cfa_fit_3  41 24882 24998 178.20                                                    
## cfa_fit_2a 43 25644 25750 943.39     765.19 0.7133       2 < 0.00000000000000022 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The chi-square difference test indicates that the first version of the two-factor model fits the data statistically significantly worse than the original three-factor model (\(\Delta \chi^{2}\) = 765.19, \(\Delta df\) = 2, \(p\) < .001). This corroborates what we saw with the direct comparison of model fit indices above.
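Incidentally, when applied to lavaan objects, the anova function is a convenience wrapper around lavaan’s lavTestLRT function, so the following call produces equivalent results:

# Equivalent nested model comparison using lavaan's own interface
lavTestLRT(cfa_fit_3, cfa_fit_2a)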

Note: If your anova function output defaulted to scientific notation, you can “turn off” scientific notation using the following function. After running the options function below, you can re-run the anova function to get the output in traditional notation.

# Turn off scientific notation
options(scipen=9999)
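Should you later want to restore R’s default behavior, which permits scientific notation, set scipen back to its default value of 0.

# Restore the default penalty, allowing scientific notation again
options(scipen=0)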

70.2.6.2 Two-Factor Model (Version b)

We’ll now evaluate a second version of a two-factor model (version b). In this version, we’ll collapse latent factors RC and TM into a single latent factor and load their respective items on the single factor labeled RC_TM. We’ll specify the AC latent factor such that only the corresponding feelings of acceptance items load on it.

# Specify two-factor CFA model & assign to object (version b)
cfa_mod_2b <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4 
RC_TM =~ rc_1 + rc_2 + rc_3 + tm_1 + tm_2 + tm_3 + tm_4
"

# Estimate two-factor CFA model & assign to fitted model object
cfa_fit_2b <- cfa(cfa_mod_2b, # name of specified model object
                  data=df)    # name of data frame object

# Print summary of model results
summary(cfa_fit_2b,          # name of fitted model object
        fit.measures=TRUE,   # request model fit indices
        standardized=TRUE)   # request standardized estimates
## lavaan 0.6.15 ended normally after 63 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        23
## 
##   Number of observations                           750
## 
## Model Test User Model:
##                                                       
##   Test statistic                               955.944
##   Degrees of freedom                                43
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              5325.638
##   Degrees of freedom                                55
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.827
##   Tucker-Lewis Index (TLI)                       0.778
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -12805.007
##   Loglikelihood unrestricted model (H1)     -12327.035
##                                                       
##   Akaike (AIC)                               25656.014
##   Bayesian (BIC)                             25762.276
##   Sample-size adjusted Bayesian (SABIC)      25689.242
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.168
##   90 Percent confidence interval - lower         0.159
##   90 Percent confidence interval - upper         0.178
##   P-value H_0: RMSEA <= 0.050                    0.000
##   P-value H_0: RMSEA >= 0.080                    1.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.131
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC =~                                                                 
##     ac_1              1.000                               0.787    0.807
##     ac_2              1.309    0.040   32.517    0.000    1.030    0.969
##     ac_3              1.063    0.040   26.244    0.000    0.836    0.818
##     ac_4              1.146    0.043   26.913    0.000    0.901    0.833
##   RC_TM =~                                                              
##     rc_1              1.000                               0.478    0.252
##     rc_2              0.455    0.167    2.718    0.007    0.218    0.111
##     rc_3              0.275    0.148    1.857    0.063    0.132    0.072
##     tm_1              2.820    0.420    6.713    0.000    1.348    0.738
##     tm_2              3.084    0.450    6.848    0.000    1.474    0.907
##     tm_3              3.736    0.544    6.863    0.000    1.786    0.949
##     tm_4              2.675    0.401    6.669    0.000    1.279    0.700
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC ~~                                                                 
##     RC_TM             0.094    0.020    4.623    0.000    0.250    0.250
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              0.332    0.019   17.145    0.000    0.332    0.349
##    .ac_2              0.069    0.014    4.974    0.000    0.069    0.061
##    .ac_3              0.345    0.020   16.897    0.000    0.345    0.331
##    .ac_4              0.360    0.022   16.525    0.000    0.360    0.307
##    .rc_1              3.385    0.176   19.282    0.000    3.385    0.937
##    .rc_2              3.818    0.197   19.350    0.000    3.818    0.988
##    .rc_3              3.291    0.170   19.359    0.000    3.291    0.995
##    .tm_1              1.520    0.085   17.804    0.000    1.520    0.455
##    .tm_2              0.469    0.040   11.616    0.000    0.469    0.177
##    .tm_3              0.355    0.050    7.088    0.000    0.355    0.100
##    .tm_4              1.707    0.094   18.132    0.000    1.707    0.511
##     AC                0.619    0.047   13.196    0.000    1.000    1.000
##     RC_TM             0.229    0.067    3.397    0.001    1.000    1.000
# Visualize the measurement model
semPaths(cfa_fit_2b,      # name of fitted model object 
         what="std",      # display standardized parameter estimates
         weighted=FALSE,  # do not weight plot features
         nCharNodes=0)    # do not abbreviate names

Below, I’ve expanded upon the previous model comparison table by adding in the model fit information for the second version of the two-factor model.

Model \(\chi^{2}\) df p CFI TLI RMSEA SRMR
3-Factor Model 178.198 41 < .001 .974 .965 .067 .039
2a-Factor Model 943.393 43 < .001 .829 .781 .167 .125
2b-Factor Model 955.944 43 < .001 .827 .778 .168 .131

As you can see above, the second version (version b) of the two-factor model also fits the data notably worse than the three-factor model, which suggests that the three-factor model is still probably a better representation of the measurement structure.

As before, we’ll also estimate the chi-square (\(\chi^{2}\)) difference test, except this time we’ll compare the three-factor model (cfa_fit_3) to the second version of the two-factor model (cfa_fit_2b).

# Nested model comparison
anova(cfa_fit_3, cfa_fit_2b)
## 
## Chi-Squared Difference Test
## 
##            Df   AIC   BIC  Chisq Chisq diff   RMSEA Df diff            Pr(>Chisq)    
## cfa_fit_3  41 24882 24998 178.20                                                     
## cfa_fit_2b 43 25656 25762 955.94     777.75 0.71914       2 < 0.00000000000000022 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The chi-square difference test indicates that the second version of the two-factor model fits the data statistically significantly worse than the original three-factor model (\(\Delta \chi^{2}\) = 777.75, \(\Delta df\) = 2, \(p\) < .001). This corroborates what we saw with the direct comparison of model fit indices above.

70.2.6.3 Two-Factor Model (Version c)

We’ll now evaluate a third version of a two-factor model (version c). In this version, we’ll collapse latent factors AC and TM into a single latent factor and load their respective items on the single factor labeled AC_TM. We’ll specify the RC latent factor such that only the corresponding role clarity items load on it.

# Specify two-factor CFA model & assign to object (version c)
cfa_mod_2c <- "
AC_TM =~ ac_1 + ac_2 + ac_3 + ac_4 + tm_1 + tm_2 + tm_3 + tm_4
RC =~ rc_1 + rc_2 + rc_3 
"

# Estimate two-factor CFA model & assign to fitted model object
cfa_fit_2c <- cfa(cfa_mod_2c, # name of specified model object
                  data=df)    # name of data frame object

# Print summary of model results
summary(cfa_fit_2c,          # name of fitted model object
        fit.measures=TRUE,   # request model fit indices
        standardized=TRUE)   # request standardized estimates
## lavaan 0.6.15 ended normally after 40 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        23
## 
##   Number of observations                           750
## 
## Model Test User Model:
##                                                       
##   Test statistic                              2098.796
##   Degrees of freedom                                43
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              5325.638
##   Degrees of freedom                                55
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.610
##   Tucker-Lewis Index (TLI)                       0.501
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -13376.433
##   Loglikelihood unrestricted model (H1)     -12327.035
##                                                       
##   Akaike (AIC)                               26798.866
##   Bayesian (BIC)                             26905.128
##   Sample-size adjusted Bayesian (SABIC)      26832.094
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.252
##   90 Percent confidence interval - lower         0.243
##   90 Percent confidence interval - upper         0.262
##   P-value H_0: RMSEA <= 0.050                    0.000
##   P-value H_0: RMSEA >= 0.080                    1.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.198
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC_TM =~                                                              
##     ac_1              1.000                               0.789    0.809
##     ac_2              1.290    0.040   32.218    0.000    1.018    0.958
##     ac_3              1.069    0.040   26.501    0.000    0.843    0.825
##     ac_4              1.149    0.042   27.057    0.000    0.907    0.837
##     tm_1              0.438    0.086    5.080    0.000    0.345    0.189
##     tm_2              0.558    0.076    7.347    0.000    0.441    0.271
##     tm_3              0.629    0.088    7.140    0.000    0.496    0.264
##     tm_4              0.424    0.086    4.915    0.000    0.335    0.183
##   RC =~                                                                 
##     rc_1              1.000                               1.463    0.770
##     rc_2              1.250    0.071   17.680    0.000    1.830    0.931
##     rc_3              0.710    0.046   15.377    0.000    1.039    0.571
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC_TM ~~                                                              
##     RC                0.257    0.049    5.276    0.000    0.222    0.222
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              0.328    0.019   16.932    0.000    0.328    0.345
##    .ac_2              0.093    0.014    6.679    0.000    0.093    0.083
##    .ac_3              0.333    0.020   16.562    0.000    0.333    0.319
##    .ac_4              0.350    0.022   16.233    0.000    0.350    0.299
##    .tm_1              3.218    0.167   19.324    0.000    3.218    0.964
##    .tm_2              2.449    0.127   19.278    0.000    2.449    0.927
##    .tm_3              3.300    0.171   19.283    0.000    3.300    0.930
##    .tm_4              3.230    0.167   19.327    0.000    3.230    0.966
##    .rc_1              1.472    0.126   11.648    0.000    1.472    0.407
##    .rc_2              0.518    0.160    3.242    0.001    0.518    0.134
##    .rc_3              2.229    0.126   17.642    0.000    2.229    0.674
##     AC_TM             0.623    0.047   13.234    0.000    1.000    1.000
##     RC                2.141    0.198   10.810    0.000    1.000    1.000
# Visualize the measurement model
semPaths(cfa_fit_2c,      # name of fitted model object 
         what="std",      # display standardized parameter estimates
         weighted=FALSE,  # do not weight plot features
         nCharNodes=0)    # do not abbreviate names

Below, I’ve expanded upon the previous model comparison table by adding in the model fit information for the third version of the two-factor model.

Model \(\chi^{2}\) df p CFI TLI RMSEA SRMR
3-Factor Model 178.198 41 < .001 .974 .965 .067 .039
2a-Factor Model 943.393 43 < .001 .829 .781 .167 .125
2b-Factor Model 955.944 43 < .001 .827 .778 .168 .131
2c-Factor Model 2098.796 43 < .001 .610 .501 .252 .198

As you can see above, the third version (version c) of the two-factor model also fits the data notably worse than the three-factor model, which suggests that the three-factor model is still probably a better representation of the measurement structure.

As before, we’ll also estimate the chi-square (\(\chi^{2}\)) difference test, except this time we’ll compare the three-factor model (cfa_fit_3) to the third version of the two-factor model (cfa_fit_2c).

# Nested model comparison
anova(cfa_fit_3, cfa_fit_2c)
## 
## Chi-Squared Difference Test
## 
##            Df   AIC   BIC  Chisq Chisq diff RMSEA Df diff            Pr(>Chisq)    
## cfa_fit_3  41 24882 24998  178.2                                                   
## cfa_fit_2c 43 26799 26905 2098.8     1920.6 1.131       2 < 0.00000000000000022 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The chi-square difference test indicates that the third version of the two-factor model fits the data statistically significantly worse than the original three-factor model (\(\Delta \chi^{2}\) = 1920.60, \(\Delta df\) = 2, \(p\) < .001). This corroborates what we saw with the direct comparison of model fit indices above.

70.2.6.4 One-Factor Model

We’ll now evaluate a one-factor model. In this version, we’ll collapse all three latent factors (AC, RC, and TM) into a single latent factor and load their respective items on the single factor labeled AC_RC_TM.

# Specify one-factor CFA model & assign to object 
cfa_mod_1 <- "
AC_RC_TM =~ ac_1 + ac_2 + ac_3 + ac_4 + 
rc_1 + rc_2 + rc_3 +
tm_1 + tm_2 + tm_3 + tm_4
"

# Estimate one-factor CFA model & assign to fitted model object
cfa_fit_1 <- cfa(cfa_mod_1, # name of specified model object
                 data=df)    # name of data frame object

# Print summary of model results
summary(cfa_fit_1,           # name of fitted model object
        fit.measures=TRUE,   # request model fit indices
        standardized=TRUE)   # request standardized estimates
## lavaan 0.6.15 ended normally after 30 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        22
## 
##   Number of observations                           750
## 
## Model Test User Model:
##                                                       
##   Test statistic                              2856.081
##   Degrees of freedom                                44
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              5325.638
##   Degrees of freedom                                55
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.466
##   Tucker-Lewis Index (TLI)                       0.333
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -13755.076
##   Loglikelihood unrestricted model (H1)     -12327.035
##                                                       
##   Akaike (AIC)                               27554.151
##   Bayesian (BIC)                             27655.793
##   Sample-size adjusted Bayesian (SABIC)      27585.934
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.292
##   90 Percent confidence interval - lower         0.283
##   90 Percent confidence interval - upper         0.301
##   P-value H_0: RMSEA <= 0.050                    0.000
##   P-value H_0: RMSEA >= 0.080                    1.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.228
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC_RC_TM =~                                                           
##     ac_1              1.000                               0.789    0.809
##     ac_2              1.287    0.040   32.111    0.000    1.015    0.956
##     ac_3              1.071    0.040   26.534    0.000    0.845    0.827
##     ac_4              1.149    0.043   27.005    0.000    0.906    0.837
##     rc_1              0.469    0.090    5.230    0.000    0.370    0.195
##     rc_2              0.538    0.093    5.812    0.000    0.425    0.216
##     rc_3              0.337    0.086    3.909    0.000    0.266    0.146
##     tm_1              0.446    0.086    5.177    0.000    0.352    0.193
##     tm_2              0.566    0.076    7.447    0.000    0.447    0.275
##     tm_3              0.639    0.088    7.249    0.000    0.504    0.268
##     tm_4              0.431    0.086    4.988    0.000    0.340    0.186
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              0.328    0.019   16.901    0.000    0.328    0.345
##    .ac_2              0.098    0.014    7.013    0.000    0.098    0.087
##    .ac_3              0.330    0.020   16.494    0.000    0.330    0.316
##    .ac_4              0.351    0.022   16.213    0.000    0.351    0.299
##    .rc_1              3.477    0.180   19.321    0.000    3.477    0.962
##    .rc_2              3.685    0.191   19.310    0.000    3.685    0.953
##    .rc_3              3.238    0.167   19.340    0.000    3.238    0.979
##    .tm_1              3.213    0.166   19.321    0.000    3.213    0.963
##    .tm_2              2.443    0.127   19.273    0.000    2.443    0.924
##    .tm_3              3.292    0.171   19.278    0.000    3.292    0.928
##    .tm_4              3.227    0.167   19.325    0.000    3.227    0.965
##     AC_RC_TM          0.623    0.047   13.230    0.000    1.000    1.000
# Visualize the measurement model
semPaths(cfa_fit_1,       # name of fitted model object 
         what="std",      # display standardized parameter estimates
         weighted=FALSE,  # do not weight plot features
         nCharNodes=0)    # do not abbreviate names

Below, I’ve expanded upon the previous model comparison table by adding in the model fit information for the one-factor model.

Model \(\chi^{2}\) df p CFI TLI RMSEA SRMR
3-Factor Model 178.198 41 < .001 .974 .965 .067 .039
2a-Factor Model 943.393 43 < .001 .829 .781 .167 .125
2b-Factor Model 955.944 43 < .001 .827 .778 .168 .131
2c-Factor Model 2098.796 43 < .001 .610 .501 .252 .198
1-Factor Model 2856.081 44 < .001 .466 .333 .292 .228

As you can see above, the one-factor model also fits the data notably worse than the three-factor model, which suggests that the three-factor model remains the best representation of the measurement structure out of the models tested. This gives us more confidence that the three-factor model in which we distinguish between the constructs of feelings of acceptance, role clarity, and task mastery is a decent measurement structure.

As before, though, we’ll also estimate the chi-square (\(\chi^{2}\)) difference test, except this time we’ll compare the three-factor model (cfa_fit_3) to the one-factor model (cfa_fit_1).

# Nested model comparison
anova(cfa_fit_3, cfa_fit_1)
## 
## Chi-Squared Difference Test
## 
##           Df   AIC   BIC  Chisq Chisq diff  RMSEA Df diff            Pr(>Chisq)    
## cfa_fit_3 41 24882 24998  178.2                                                    
## cfa_fit_1 44 27554 27656 2856.1     2677.9 1.0903       3 < 0.00000000000000022 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The chi-square difference test indicates that the one-factor model fits the data statistically significantly worse than the original three-factor model (\(\Delta \chi^{2}\) = 2677.90, \(\Delta df\) = 3, \(p\) < .001). This corroborates what we saw with the direct comparison of model fit indices above.

70.2.6.5 Create a Matrix Comparing Model Fit Indices

If our goal is to create a matrix containing only the model fit indices covered in this chapter, along with the chi-square difference tests, we can do the following, which incorporates the inspect function from the lavaan package and the cbind, rbind, and t functions from base R.

# Create object containing selected fit indices
select_fit_indices <- c("chisq","df","pvalue","cfi","tli","rmsea","srmr")

# Create matrix comparing model fit indices
compare_mods <- cbind(
  inspect(cfa_fit_3, "fit.indices")[select_fit_indices],
  inspect(cfa_fit_2a, "fit.indices")[select_fit_indices], 
  inspect(cfa_fit_2b, "fit.indices")[select_fit_indices], 
  inspect(cfa_fit_2c, "fit.indices")[select_fit_indices], 
  inspect(cfa_fit_1, "fit.indices")[select_fit_indices]
  )

# Add more informative model names to matrix columns
colnames(compare_mods) <- c("3 Factor Model", 
                            "2a Factor Model",
                            "2b Factor Model",
                            "2c Factor Model",
                            "1 Factor Model")

# Create vector of chi-square difference tests (nested model comparisons)
`chisq diff (p-value)` <- c(NA,
                            anova(cfa_fit_3, cfa_fit_2a)$`Pr(>Chisq)`[2],
                            anova(cfa_fit_3, cfa_fit_2b)$`Pr(>Chisq)`[2],
                            anova(cfa_fit_3, cfa_fit_2c)$`Pr(>Chisq)`[2],
                            anova(cfa_fit_3, cfa_fit_1)$`Pr(>Chisq)`[2])

# Add chi-square difference tests to matrix object
compare_mods <- rbind(compare_mods, `chisq diff (p-value)`)

# Round object values to 3 places after decimal
compare_mods <- round(compare_mods, 3)

# Rotate matrix
compare_mods <- t(compare_mods)

# Print object
print(compare_mods)
##                    chisq df pvalue   cfi   tli rmsea  srmr chisq diff (p-value)
## 3 Factor Model   178.198 41      0 0.974 0.965 0.067 0.039                   NA
## 2a Factor Model  943.393 43      0 0.829 0.781 0.167 0.125                    0
## 2b Factor Model  955.944 43      0 0.827 0.778 0.168 0.131                    0
## 2c Factor Model 2098.796 43      0 0.610 0.501 0.252 0.198                    0
## 1 Factor Model  2856.081 44      0 0.466 0.333 0.292 0.228                    0
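If you’d like to render this matrix as a formatted table for a report, one option (assuming the knitr package is installed) is the kable function:

# Render the comparison matrix as a formatted table (requires knitr)
knitr::kable(compare_mods)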

70.2.7 Estimate Second-Order Model

In some instances, we have theoretical justification to specify and estimate a second-order model. A second-order model is a CFA model in which first-order latent factors serve as indicators of one or more superordinate (second-order) latent factors. Let’s suppose that we have theoretical justification to specify a second-order model in which the feelings of acceptance (AC), role clarity (RC), and task mastery (TM) latent factors serve as indicators of a higher-order adjustment latent factor (ADJ). That is, conceptually, an individual’s level of adjustment is indicated by their feelings of acceptance, role clarity, and task mastery. Such a model might prove advantageous if a later goal is to estimate structural regression paths with other criteria, such that only the associations with the second-order adjustment latent factor (ADJ) are of interest.

Specifying a second-order factor is relatively straightforward. To our original three-factor model, we add a second-order factor called ADJ on which the first-order AC, RC, and TM latent factors load: ADJ =~ AC + RC + TM.

# Specify second-order CFA model & assign to object
cfa_mod_2ord <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4
RC =~ rc_1 + rc_2 + rc_3
TM =~ tm_1 + tm_2 + tm_3 + tm_4
ADJ =~ AC + RC + TM
"

# Estimate second-order CFA model & assign to fitted model object
cfa_fit_2ord <- cfa(cfa_mod_2ord,  # name of specified model object
                    data=df)       # name of data frame object

# Print summary of model results
summary(cfa_fit_2ord,        # name of fitted model object
        fit.measures=TRUE,   # request model fit indices
        standardized=TRUE)   # request standardized estimates
## lavaan 0.6.15 ended normally after 52 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        25
## 
##   Number of observations                           750
## 
## Model Test User Model:
##                                                       
##   Test statistic                               178.198
##   Degrees of freedom                                41
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              5325.638
##   Degrees of freedom                                55
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.974
##   Tucker-Lewis Index (TLI)                       0.965
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -12416.134
##   Loglikelihood unrestricted model (H1)     -12327.035
##                                                       
##   Akaike (AIC)                               24882.269
##   Bayesian (BIC)                             24997.770
##   Sample-size adjusted Bayesian (SABIC)      24918.386
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.067
##   90 Percent confidence interval - lower         0.057
##   90 Percent confidence interval - upper         0.077
##   P-value H_0: RMSEA <= 0.050                    0.003
##   P-value H_0: RMSEA >= 0.080                    0.016
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.039
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC =~                                                                 
##     ac_1              1.000                               0.787    0.807
##     ac_2              1.309    0.040   32.555    0.000    1.030    0.969
##     ac_3              1.063    0.040   26.262    0.000    0.836    0.818
##     ac_4              1.145    0.043   26.907    0.000    0.901    0.832
##   RC =~                                                                 
##     rc_1              1.000                               1.487    0.782
##     rc_2              1.211    0.067   18.058    0.000    1.801    0.916
##     rc_3              0.704    0.046   15.427    0.000    1.047    0.576
##   TM =~                                                                 
##     tm_1              1.000                               1.348    0.738
##     tm_2              1.094    0.043   25.539    0.000    1.475    0.907
##     tm_3              1.327    0.050   26.321    0.000    1.788    0.949
##     tm_4              0.950    0.049   19.293    0.000    1.280    0.700
##   ADJ =~                                                                
##     AC                1.000                               0.605    0.605
##     RC                1.134    0.328    3.454    0.001    0.363    0.363
##     TM                1.150    0.337    3.415    0.001    0.406    0.406
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              0.332    0.019   17.153    0.000    0.332    0.349
##    .ac_2              0.069    0.014    5.004    0.000    0.069    0.061
##    .ac_3              0.345    0.020   16.905    0.000    0.345    0.330
##    .ac_4              0.360    0.022   16.549    0.000    0.360    0.307
##    .rc_1              1.401    0.125   11.191    0.000    1.401    0.388
##    .rc_2              0.623    0.153    4.081    0.000    0.623    0.161
##    .rc_3              2.213    0.126   17.612    0.000    2.213    0.669
##    .tm_1              1.521    0.085   17.807    0.000    1.521    0.456
##    .tm_2              0.469    0.041   11.529    0.000    0.469    0.177
##    .tm_3              0.351    0.051    6.919    0.000    0.351    0.099
##    .tm_4              1.705    0.094   18.130    0.000    1.705    0.510
##    .AC                0.392    0.074    5.337    0.000    0.634    0.634
##    .RC                1.921    0.193    9.954    0.000    0.868    0.868
##    .TM                1.516    0.159    9.548    0.000    0.835    0.835
##     ADJ               0.227    0.073    3.114    0.002    1.000    1.000
# Estimate average variance extracted (AVE)
AVE(cfa_fit_2ord)
##    AC    RC    TM 
## 0.743 0.607 0.686
# Estimate composite/construct reliability (CR)
compRelSEM(cfa_fit_2ord)
##    AC    RC    TM 
## 0.920 0.818 0.884
# Visualize the measurement model
semPaths(cfa_fit_2ord,       # name of fitted model object 
         what="std",      # display standardized parameter estimates
         weighted=FALSE,  # do not weight plot features
         nCharNodes=0)    # do not abbreviate names

The second-order model fits the data exactly the same as our original first-order three-factor model; with only three first-order factors serving as indicators of the second-order factor, the second-order portion of the model is just-identified, so the two models are statistically equivalent in fit. A notable difference in the parameter estimates is that instead of covariances between the first-order latent factors (AC, RC, TM), we now see the three latent factors loading onto the new second-order factor (ADJ). We would proceed with evaluating and interpreting the model as we did earlier in the chapter with multi-factor models.
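To see the equivalence in fit for yourself, you can place the two fitted models’ fit indices side-by-side. The following is a minimal sketch that reuses the select_fit_indices vector we created above.

# Place fit indices for the first-order & second-order models side-by-side
cbind("3 Factor Model"=inspect(cfa_fit_3, "fit.indices")[select_fit_indices],
      "Second-Order Model"=inspect(cfa_fit_2ord, "fit.indices")[select_fit_indices])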

70.2.8 Estimating Models with Missing Data

When missing data are present, we must carefully consider how we handle the missing data before or during the estimation of a model. In the chapter on Missing Data, I provide an overview of relevant concepts, particularly whether the data are missing completely at random (MCAR), missing at random (MAR), or missing not at random (MNAR); I suggest reviewing that chapter prior to handling missing data.

As a potential method for addressing missing data, the lavaan model functions, such as the cfa function, allow for full-information maximum likelihood (FIML) estimation. Further, these functions allow us to specify a particular estimator suited to the type of data we wish to use for model estimation (e.g., ML, MLR).

To demonstrate how missing data are handled using FIML, we first need to introduce some missing data into our data frame. To do so, we will use a multiple imputation package called mice and its ampute function, which “amputates” complete data by creating missing data patterns. For our purposes, we’ll make 10% (.1) of the cases incomplete by replacing some of their values with NA (which signifies a missing value), such that the missing data are missing completely at random (MCAR).

# Install package
install.packages("mice")
# Access package
library(mice)
# Create a new data frame object
df_missing <- df

# Remove non-numeric variable(s) from data frame object
df_missing$id <- NULL

# Remove 10% of cells so missing data are MCAR
df_missing <- ampute(df_missing, prop=.1, mech="MCAR")

# Extract the new missing data frame object and overwrite existing object
df_missing <- df_missing$amp
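Before estimating any models, it can be reassuring to confirm that the amputation worked as intended. As a quick sketch, we can view the missing data patterns with the md.pattern function from mice and compute the proportion of missing values per variable with base R.

# View missing data patterns (1 = observed, 0 = missing)
md.pattern(df_missing, plot=FALSE)

# Proportion of missing values within each variable
colMeans(is.na(df_missing))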

Implementing FIML when missing data are present is relatively straightforward. For example, in the one-factor CFA model below, we can apply FIML in the presence of missing data by adding the missing="fiml" argument to the cfa function.

# Specify one-factor CFA model & assign to object
cfa_mod <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4
"

# Estimate one-factor CFA model & assign to fitted model object
cfa_fit <- cfa(cfa_mod,         # name of specified model object
               data=df_missing, # name of data frame object
               missing="fiml")  # specify FIML
              
# Print summary of model results
summary(cfa_fit,             # name of fitted model object
        fit.measures=TRUE,   # request model fit indices
        standardized=TRUE)   # request standardized estimates
## lavaan 0.6.15 ended normally after 39 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        12
## 
##   Number of observations                           750
##   Number of missing patterns                         5
## 
## Model Test User Model:
##                                                       
##   Test statistic                                 2.746
##   Degrees of freedom                                 2
##   P-value (Chi-square)                           0.253
## 
## Model Test Baseline Model:
## 
##   Test statistic                              2250.249
##   Degrees of freedom                                 6
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    1.000
##   Tucker-Lewis Index (TLI)                       0.999
##                                                       
##   Robust Comparative Fit Index (CFI)             1.000
##   Robust Tucker-Lewis Index (TLI)                0.999
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -3211.822
##   Loglikelihood unrestricted model (H1)      -3210.449
##                                                       
##   Akaike (AIC)                                6447.643
##   Bayesian (BIC)                              6503.084
##   Sample-size adjusted Bayesian (SABIC)       6464.979
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.022
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.079
##   P-value H_0: RMSEA <= 0.050                    0.718
##   P-value H_0: RMSEA >= 0.080                    0.048
##                                                       
##   Robust RMSEA                                   0.022
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.080
##   P-value H_0: Robust RMSEA <= 0.050             0.715
##   P-value H_0: Robust RMSEA >= 0.080             0.049
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.005
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Observed
##   Observed information based on                Hessian
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC =~                                                                 
##     ac_1              1.000                               0.785    0.806
##     ac_2              1.312    0.040   32.513    0.000    1.030    0.972
##     ac_3              1.060    0.041   26.049    0.000    0.832    0.815
##     ac_4              1.147    0.043   26.671    0.000    0.900    0.830
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              5.264    0.036  147.692    0.000    5.264    5.404
##    .ac_2              5.039    0.039  130.237    0.000    5.039    4.757
##    .ac_3              5.144    0.037  137.833    0.000    5.144    5.035
##    .ac_4              4.928    0.040  124.464    0.000    4.928    4.547
##     AC                0.000                               0.000    0.000
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              0.333    0.019   17.213    0.000    0.333    0.351
##    .ac_2              0.062    0.014    4.403    0.000    0.062    0.055
##    .ac_3              0.351    0.021   16.750    0.000    0.351    0.336
##    .ac_4              0.365    0.022   16.634    0.000    0.365    0.311
##     AC                0.616    0.047   13.168    0.000    1.000    1.000

The FIML approach retains observations even when they are missing data on one or more endogenous variables in the model, drawing on all available information from each observation. As you can see in the output, all 750 observations were retained for estimating the model.
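If you would rather extract those sample-size figures directly instead of reading them from the summary output, the lavInspect function from lavaan can report both the number of observations used in estimation and the number of rows in the original data frame; a brief sketch follows.

# Number of observations used in estimation
lavInspect(cfa_fit, "nobs")

# Number of observations in the original data frame
lavInspect(cfa_fit, "norig")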

Now watch what happens when we remove the missing="fiml" argument in the presence of missing data.

# Specify one-factor CFA model & assign to object
cfa_mod <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4
"

# Estimate one-factor CFA model & assign to fitted model object
cfa_fit <- cfa(cfa_mod,         # name of specified model object
               data=df_missing) # name of data frame object
              
# Print summary of model results
summary(cfa_fit,             # name of fitted model object
        fit.measures=TRUE,   # request model fit indices
        standardized=TRUE)   # request standardized estimates
## lavaan 0.6.15 ended normally after 21 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                         8
## 
##                                                   Used       Total
##   Number of observations                           736         750
## 
## Model Test User Model:
##                                                       
##   Test statistic                                 2.466
##   Degrees of freedom                                 2
##   P-value (Chi-square)                           0.291
## 
## Model Test Baseline Model:
## 
##   Test statistic                              2220.985
##   Degrees of freedom                                 6
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    1.000
##   Tucker-Lewis Index (TLI)                       0.999
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -3161.224
##   Loglikelihood unrestricted model (H1)      -3159.991
##                                                       
##   Akaike (AIC)                                6338.448
##   Bayesian (BIC)                              6375.257
##   Sample-size adjusted Bayesian (SABIC)       6349.855
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.018
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.078
##   P-value H_0: RMSEA <= 0.050                    0.744
##   P-value H_0: RMSEA >= 0.080                    0.042
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.006
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC =~                                                                 
##     ac_1              1.000                               0.782    0.805
##     ac_2              1.315    0.041   32.148    0.000    1.028    0.973
##     ac_3              1.067    0.041   25.895    0.000    0.835    0.816
##     ac_4              1.143    0.043   26.388    0.000    0.894    0.827
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              0.333    0.020   17.058    0.000    0.333    0.353
##    .ac_2              0.060    0.014    4.319    0.000    0.060    0.054
##    .ac_3              0.348    0.021   16.807    0.000    0.348    0.333
##    .ac_4              0.369    0.022   16.538    0.000    0.369    0.316
##     AC                0.611    0.047   13.029    0.000    1.000    1.000

As you can see in the output, the cfa function defaults to listwise deletion when we do not specify that FIML be applied. This results in the number of observations dropping from 750 to 736 for model estimation purposes.
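That figure of 736 used observations is simply the number of complete cases in the data frame, which we can verify with the complete.cases function from base R.

# Count complete cases (rows without any missing values)
sum(complete.cases(df_missing))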

Within the cfa function, we can also override the default estimator. For example, if we had good reason to, we could specify the MLR (maximum likelihood with robust standard errors) estimator by adding this argument: estimator="MLR". For a list of other available estimators, check out the lavaan package website.

# Specify one-factor CFA model & assign to object
cfa_mod <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4
"

# Estimate one-factor CFA model & assign to fitted model object
cfa_fit <- cfa(cfa_mod,         # name of specified model object
               data=df_missing, # name of data frame object
               estimator="MLR", # specify type of estimator 
               missing="fiml")  # specify FIML
              
# Print summary of model results
summary(cfa_fit,             # name of fitted model object
        fit.measures=TRUE,   # request model fit indices
        standardized=TRUE)   # request standardized estimates
## lavaan 0.6.15 ended normally after 39 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        12
## 
##   Number of observations                           750
##   Number of missing patterns                         5
## 
## Model Test User Model:
##                                               Standard      Scaled
##   Test Statistic                                 2.746       1.945
##   Degrees of freedom                                 2           2
##   P-value (Chi-square)                           0.253       0.378
##   Scaling correction factor                                  1.412
##     Yuan-Bentler correction (Mplus variant)                       
## 
## Model Test Baseline Model:
## 
##   Test statistic                              2250.249    1233.125
##   Degrees of freedom                                 6           6
##   P-value                                        0.000       0.000
##   Scaling correction factor                                  1.825
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    1.000       1.000
##   Tucker-Lewis Index (TLI)                       0.999       1.000
##                                                                   
##   Robust Comparative Fit Index (CFI)                         1.000
##   Robust Tucker-Lewis Index (TLI)                            1.000
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -3211.822   -3211.822
##   Scaling correction factor                                  1.334
##       for the MLR correction                                      
##   Loglikelihood unrestricted model (H1)      -3210.449   -3210.449
##   Scaling correction factor                                  1.345
##       for the MLR correction                                      
##                                                                   
##   Akaike (AIC)                                6447.643    6447.643
##   Bayesian (BIC)                              6503.084    6503.084
##   Sample-size adjusted Bayesian (SABIC)       6464.979    6464.979
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.022       0.000
##   90 Percent confidence interval - lower         0.000       0.000
##   90 Percent confidence interval - upper         0.079       0.062
##   P-value H_0: RMSEA <= 0.050                    0.718       0.871
##   P-value H_0: RMSEA >= 0.080                    0.048       0.008
##                                                                   
##   Robust RMSEA                                               0.000
##   90 Percent confidence interval - lower                     0.000
##   90 Percent confidence interval - upper                     0.086
##   P-value H_0: Robust RMSEA <= 0.050                         0.724
##   P-value H_0: Robust RMSEA >= 0.080                         0.070
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.005       0.005
## 
## Parameter Estimates:
## 
##   Standard errors                             Sandwich
##   Information bread                           Observed
##   Observed information based on                Hessian
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   AC =~                                                                 
##     ac_1              1.000                               0.785    0.806
##     ac_2              1.312    0.040   32.898    0.000    1.030    0.972
##     ac_3              1.060    0.041   25.940    0.000    0.832    0.815
##     ac_4              1.147    0.044   25.983    0.000    0.900    0.830
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              5.264    0.036  147.676    0.000    5.264    5.404
##    .ac_2              5.039    0.039  130.226    0.000    5.039    4.757
##    .ac_3              5.144    0.037  137.871    0.000    5.144    5.035
##    .ac_4              4.928    0.040  124.423    0.000    4.928    4.547
##     AC                0.000                               0.000    0.000
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .ac_1              0.333    0.026   12.912    0.000    0.333    0.351
##    .ac_2              0.062    0.017    3.529    0.000    0.062    0.055
##    .ac_3              0.351    0.031   11.144    0.000    0.351    0.336
##    .ac_4              0.365    0.031   11.779    0.000    0.365    0.311
##     AC                0.616    0.050   12.398    0.000    1.000    1.000

70.2.9 Simulate Dynamic Fit Index Cutoffs

Dynamic fit index cutoffs represent a more recent advance in evaluating model fit. Fixed cutoffs for common fit indices have long been criticized because the appropriateness of a particular cutoff depends on a number of data- and model-specific factors. For years, the field has referenced cutoffs recommended by influential studies on model fit like Hu and Bentler (1999); however, those recommended cutoffs were based on a single CFA model and thus may not generalize as well as we’d like them to. To address the limitations of fixed, one-size-fits-all cutoffs, McNeish and Wolf (2023) developed dynamic fit index cutoffs, which are based on a simulation methodology, and Wolf and McNeish (2023) developed a package called dynamic to estimate dynamic fit index cutoffs for a specific data set and a specific model.

To explore dynamic fit index cutoffs, we need to install and access the dynamic package (if you haven’t already).

# Install package
install.packages("dynamic")
# Access package
library(dynamic)

As an initial step, we need to specify and estimate a CFA model. Let’s start with a one-factor model, which is the same model we specified for the over-identified one-factor model earlier in this chapter.

# Specify one-factor CFA model & assign to object
cfa_mod <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4
"

# Estimate one-factor CFA model & assign to fitted model object
cfa_fit <- cfa(cfa_mod,      # name of specified model object
               data=df)      # name of data frame object

As the sole argument to the cfaOne function from the dynamic package (which is intended for one-factor models), we enter the name of our fitted model object (cfa_fit). Please note that because this methodology involves Monte Carlo simulations, it will take a minute (or more) to produce the desired output.

# Compute one-factor model dynamic fit index cutoffs
cfaOne(cfa_fit)
## Your DFI cutoffs: 
##                SRMR RMSEA CFI
## Level 1: 95/5  .016  .121 .99
## Level 1: 90/10   --    --  --
## 
## Empirical fit indices: 
##  Chi-Square  df p-value   SRMR   RMSEA    CFI
##       2.499   2   0.287  0.006   0.018      1

The output lists the prescribed dynamic fit index cutoffs under the section labeled Your DFI cutoffs. In this output, only a single “level” of cutoffs is produced, namely Level 1. Had there been additional levels (i.e., Level 2, Level 3), they would have corresponded to progressively more relaxed cutoffs indicating worse-fitting models; that is, Level 1 cutoffs, if met, indicate a better-fitting model than Level 2 or Level 3 cutoffs would. With our model, the dynamic fit index cutoffs for SRMR, RMSEA, and CFI are .016, .121, and .99, respectively, which means that, ideally, our actual SRMR and RMSEA values would fall below the first two cutoffs and our actual CFI value would exceed the last. In the output section labeled Empirical fit indices, we find our actual model fit indices. As you can see, all three of our model fit indices meet the dynamic fit index cutoffs associated with a good-fitting model. Thus, based on these cutoffs, we can conclude that our model fits the data acceptably, which is the same conclusion we arrived at with the traditional cutoffs we applied earlier in the tutorial.
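If you prefer to pull the empirical fit indices directly from the fitted lavaan object, rather than reading them from the cfaOne output, the following minimal sketch uses lavaan’s fitmeasures function to extract just the three indices of interest.

# Extract empirical fit indices to compare against the DFI cutoffs
fitmeasures(cfa_fit, c("srmr", "rmsea", "cfi"))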

Let’s switch gears and simulate dynamic fit index cutoffs for a multi-factor model. Specifically, we will use our three-factor model from a previous section as an example.

# Specify three-factor CFA model & assign to object
cfa_mod_3 <- "
AC =~ ac_1 + ac_2 + ac_3 + ac_4
RC =~ rc_1 + rc_2 + rc_3
TM =~ tm_1 + tm_2 + tm_3 + tm_4
"

# Estimate three-factor CFA model & assign to fitted model object
cfa_fit_3 <- cfa(cfa_mod_3,  # name of specified model object
                 data=df)    # name of data frame object

Because we’re dealing with a multi-factor model, we need to switch over to the cfaHB function from the dynamic package. Again, please note that because this involves a Monte Carlo simulation, it will take a minute or two to generate the output.

# Compute three-factor model dynamic fit indices 
cfaHB(cfa_fit_3)
## Your DFI cutoffs: 
##                SRMR RMSEA  CFI Magnitude
## Level 1: 95/5  .069  .108  .94      .576
## Level 1: 90/10   --    --   --          
## Level 2: 95/5  .118   .21 .813      .534
## Level 2: 90/10   --    --   --          
## 
## Empirical fit indices: 
##  Chi-Square  df p-value   SRMR   RMSEA    CFI
##     178.198  41       0  0.039   0.067  0.974

In this example, the cfaHB function generates two levels (Level 1, Level 2) of dynamic fit index cutoffs, where the Level 1 cutoffs indicate better-fitting models than the Level 2 cutoffs. Again, note that the actual (empirical) model fit indices for our three-factor model meet the Level 1 cutoffs for SRMR, RMSEA, and CFI, which suggests that our three-factor model fits the data well.

Because dynamic fit index cutoffs are relatively new, we have yet to find out whether they will gain broader traction. Still, their use makes conceptual sense, and they may be ushering in a notable shift in how we evaluate model fit.

70.2.10 Summary

In this chapter, we learned how to estimate measurement models using confirmatory factor analysis (CFA). More specifically, we learned how to estimate and interpret one-factor models, multi-factor models, and second-order models, and how to compare the fit of nested models. Further, we learned how to estimate models when missing data are present and how to estimate dynamic fit index cutoffs based on Monte Carlo simulations.

References

Bagozzi, Richard P, and Youjae Yi. 1988. “On the Evaluation of Structural Equation Models.” Journal of the Academy of Marketing Science 16: 74–94.
Bentler, Peter M. 1968. “Alpha-Maximized Factor Analysis (Alphamax): Its Relation to Alpha and Canonical Factor Analysis.” Psychometrika 33 (3): 335–45.
Fornell, Claes, and David F Larcker. 1981. “Evaluating Structural Equation Models with Unobservable Variables and Measurement Errors.” Journal of Marketing Research 18 (1): 39–50.
Hu, Li-tze, and Peter M Bentler. 1999. “Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives.” Structural Equation Modeling 6 (1): 1–55.
Kline, Rex B. 2011. Principles and Practice of Structural Equation Modeling (3rd Ed.). New York, New York: The Guilford Press.
McNeish, Daniel, and Melissa G Wolf. 2023. “Dynamic Fit Index Cutoffs for Confirmatory Factor Analysis Models.” Psychological Methods 28 (1): 61–88.
Nye, Christopher D. 2023. “Reviewer Resources: Confirmatory Factor Analysis.” Organizational Research Methods 26 (4): 608–28.
Wickham, Hadley, Jim Hester, and Jennifer Bryan. 2024. Readr: Read Rectangular Text Data. https://CRAN.R-project.org/package=readr.
Wolf, Melissa G, and Daniel McNeish. 2023. “Dynamic: An R Package for Deriving Dynamic Fit Index Cutoffs for Factor Analysis.” Multivariate Behavioral Research 58 (1): 189–94.