Multiple Imputation in R. How to impute data with MICE for lavaan.

Missing data is unavoidable in most empirical work, and it is a problem for any statistical analysis that requires complete data. Structural equation modeling and confirmatory factor analysis are two such methods. The following post gives an overview of the background of missing data analysis: how missingness can be investigated, how the R package MICE for multiple imputation is applied, and how imputed data can be passed to the lavaan package for confirmatory factor analysis.

If you are in a hurry and already know the background of multiple imputation, jump to: How to use multiple imputation with lavaan

What kinds of missing data are there?
There are two types of missingness: Unit nonresponse concerns cases in the sample that didn't respond to the survey at all, or, more generally, the failure to obtain measurements for a sampled unit. Item nonresponse occurs when a person leaves out particular items in the survey, or, more generally, when particular measurements of a sampled unit are missing. Here, we will focus on item nonresponse.

Why is it important?
The topic of missing data itself is still often missing from the statistics curriculum in the social sciences. In applied research, too, many studies don't report transparently how they handled missing data. But there is plenty of reason to pay more attention to this issue. As an example, Ranjit Lall examined how political science studies dealt with missing data and found that about 50 % had their key results "disappear" after he re-analysed them with a proper way of handling the missingness: How multiple imputation makes a difference. Most of these studies used listwise deletion, because it was once the standard way to deal with missings and still is in many software packages. For example, SPSS still doesn't offer multiple imputation (only single imputation with the EM algorithm, which doesn't incorporate uncertainty and should only be used with a trivial amount of missingness of < 5 %). Regarding the current state of the art, any researcher should take the following into consideration:

DON'T:
In listwise deletion, every observation (every row in the dataset, i.e. every person in the survey) that has at least one missing value is dropped completely from the analysis. Only complete cases are analysed. Another approach is pairwise deletion, which is often used for correlations. Here, all cases without missings in the currently analysed variables are included. The problem is that if you run a correlation of variable a and variable b, and a correlation of variable a and variable c, the results can be based on a different number of cases (N).
Listwise and pairwise deletion are problematic in multiple ways: both reduce your sample size, so your statistical power decreases. Other studies acknowledge this problem and replace missing values with the mean of the remaining data points (mean value replacement). This is problematic as well, because your standard deviation is underestimated (every imputed value sits exactly on the mean) and your results become biased. It is still more accepted than listwise or pairwise deletion and has the convenience of producing a single dataset for analysis.
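Both distortions are easy to demonstrate in base R. The following toy simulation (invented data, not from the post) shows how listwise deletion shrinks the sample and how mean replacement understates the standard deviation:

```r
# Toy demonstration of why deletion and mean replacement are problematic
set.seed(42)
x <- rnorm(100, mean = 50, sd = 10)   # complete variable
x_miss <- x
x_miss[sample(100, 30)] <- NA         # make 30 % of the values missing

# Listwise deletion: the usable sample shrinks from 100 to 70 cases
sum(!is.na(x_miss))

# Mean replacement: every imputed value sits exactly on the mean,
# so the standard deviation is underestimated
x_meanimp <- ifelse(is.na(x_miss), mean(x_miss, na.rm = TRUE), x_miss)
sd(x_meanimp) < sd(x_miss, na.rm = TRUE)   # TRUE
```

The inequality holds for any seed: mean imputation adds values with zero deviation from the mean while inflating the denominator of the variance.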

DO: (state of the art)
The state-of-the-art methods for dealing with missing data (at least in structural equation modeling) are multiple imputation and full information maximum likelihood (FIML). In FIML no data is imputed. Instead, an algorithm is used within your analysis (e.g. regression, structural equation modeling) that estimates your model and the missing values in one step, based on your model and all observed data in your sample. FIML should not be confused with EM imputation.
In multiple imputation each missing value is replaced (imputed) multiple times by a specified algorithm that uses the observed data of every unit to find a plausible value for the missing cell. Every time a missing value is replaced by an estimated value, some uncertainty/randomness is introduced. This way, each of the resulting datasets differs a little bit, which has the advantage of a more adequate estimation of variances.

How to use multiple imputation in practice
It is the researcher's decision how many times the cells with missing data are imputed. There are rules of thumb and simulation studies to guide this decision. Often a minimum of 5 imputed datasets is enough, but some researchers argue that the number should depend on the amount of missingness. At some point a greater number of imputations becomes pointless. Depending on the number of variables, the number of observations and the speed of your computer, it can take some hours to complete the calculations.

Multiple imputation assumes multivariate normality of the data, and the missings should at least be MAR (missing at random). Simulation studies have shown that deviations from multivariate normality are not too problematic, and multiple imputation has proven robust even when the data are not MAR. Especially in comparison to listwise or pairwise deletion, multiple imputation produces more adequate results despite erroneous assumptions of MAR or multivariate normality.

There are a lot of tools for multiple imputation: here is a list of multiple imputation software. The standalone software NORM now also has an R port: NORM for R (package). Another R package worth mentioning is Amelia (R package). Here, we turn to the R package MICE ("multivariate imputation by chained equations"), which offers many functions to generate imputed datasets based on your missing data. By default MICE uses the pmm algorithm, which stands for predictive mean matching and produces good results with non-normal data. To be precise: which algorithm is used for imputation depends on the variable and on the decision of the analyst. We'll come back to this later.

Three types of missingness
Before you start imputing your data, you should check the type of missingness in your data. This is kind of a paradox: how can you say what pattern the missingness has if you don't know which values are missing? If you knew, you wouldn't need to impute them, right? The values could be missing just at random. Or it could be that people with specific values on the variable in question chose to decline the answer, so people with extreme values could be underrepresented. It is our goal to make it plausible that our missing items are at least "missing at random" (MAR).

There are three possible patterns of missingness:
– MCAR (Missing completely at random)
– MAR (Missing at random)
– NMAR (Not missing at random)

What are the reasons for missing data in particular cells of the dataset? It can happen in (manual) data entry, or when people skip a question because they were distracted. But it could also be that a person refuses to answer a question, doesn't have the knowledge, cognitive abilities or motivation to answer it, or that the question itself is unclear. It is especially problematic if missing values are related to the (unobserved) value of the person on this variable. A typical example is that people refuse to answer questions on their income if it exceeds a certain amount. Or, if you ask for the number of sex partners a person has had, people with high numbers may not answer. In these cases your data is not missing at random.
If your data is missing completely at random, there is no correlation between the missingness and the value the person would have if there were a data point. To find out whether your data is MCAR, there is a statistical test called Little's MCAR test, which tests the null hypothesis that the data is missing completely at random. So you want it to be non-significant. The problem is that it's an omnibus test: it doesn't tell you for each variable whether its missingness is MCAR, but only for a set of variables. A part of your data might be MCAR, but another part not; Little's MCAR test evaluates all variables jointly and may reject MCAR overall. Also, it has assumptions like normality, so if your data doesn't meet them, the test might tell you your data is not MCAR even if it is. The MissMech package in R has tests that show whether these assumptions are met. Little's MCAR test is part of the BaylorEdPsych package. Please note that a maximum of 50 variables can be tested at once. I guess that this is an arbitrary limit and that it just doesn't make sense to perform Little's MCAR test on more variables, because it would then be almost certain to become significant.

There is criticism of the naming of the missingness patterns. If missings are random, they are random: there is no point in saying they are missing "completely at random". That's why some people argue that MCAR should just be named MAR. MAR, on the other hand, should be called MCAR, with the letters standing for missing conditionally at random, because that's what MAR (in its original meaning) is about. But this criticism won't change the distinction between MCAR, MAR and NMAR, because the terms are already a scientific convention.

Let's run Little's MCAR test:

```
#--------------------------------------------------
# Little's MCAR test
#--------------------------------------------------
install.packages("BaylorEdPsych", dependencies = TRUE)
library(BaylorEdPsych)

data(EndersTable1_1)

# run MCAR test
test_mcar <- LittleMCAR(EndersTable1_1)

# print p-value of the MCAR test
print(test_mcar$p.value)
```

As a result we get

```
> print(test_mcar$p.value)
[1] 0.01205778
```

which means that the result is significant: the null hypothesis that our data is MCAR is rejected (data is considered MCAR if p > 0.05). It is possible that the test failed because the data are not normal and homoscedastic. We test this:

```
install.packages("MissMech")
library("MissMech")

# test of normality and homoscedasticity
out <- TestMCARNormality(EndersTable1_1)
print(out)
```

The Output:

```Call:
TestMCARNormality(data = EndersTable1_1)

Number of Patterns:  2

Total number of cases used in the analysis:  17

Pattern(s) used:
IQ   JP   WB   Number of cases
group.1    1   NA    1                 8
group.2    1    1    1                 9

Test of normality and Homoscedasticity:
-------------------------------------------

Hawkins Test:

P-value for the Hawkins test of normality and homoscedasticity:  0.4088643

There is not sufficient evidence to reject normality
or MCAR at 0.05 significance level```

So the results of the test of MCAR for homogeneity of covariances show us that MCAR was not rejected because of non-normality or heteroscedasticity. If the Hawkins test is significant, the MissMech package performs a nonparametric test of homoscedasticity. This way, by the method of elimination, it can show whether non-normality or heteroscedasticity is the problem.

OK. Back to our patterns of missingness: our data is not MCAR, but that's not too bad, because we only need our data to be MAR (missing at random). Unlike MCAR, MAR cannot be tested directly. If your data isn't MCAR, you can try to make it plausible that your data is MAR through visualisation of the missingness patterns. Or you can show that the missingness depends on other variables (like gender or something else). If you find that this is the case, you can include them as auxiliary variables in your imputation model. It's best to have side variables like socio-demographics from register data that can be used to show whether they are relevant for the missingness.
You can also create a dummy variable for missingness and use a t-test or chi-square test to look for differences in other variables between the dummy-variable groups.
If your data is NMAR (not missing at random), you cannot ignore the missings and imputation is not an option. You then have to find another way of analysing your data adequately.
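A minimal base-R sketch of this dummy-variable check, with hypothetical toy data (the variable names `age` and `income` are invented for illustration):

```r
# Does missingness on 'income' depend on 'age'? (toy data)
df <- data.frame(age    = c(25, 63, 41, 58, 33, 70, 29, 66, 45, 52),
                 income = c(2100, NA, 2500, NA, 1900, NA, 2300, NA, 2600, 2000))
df$income_missing <- is.na(df$income)   # missingness dummy

# t-test: do respondents with and without missing income differ in age?
t.test(age ~ income_missing, data = df)
```

In this made-up example the cases with missing income are noticeably older, which would speak against MCAR and for including age as an auxiliary variable in the imputation model.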

Visualisation of missing data patterns
First, we inspect the amount of missingness for every variable in our dataset.

```
library("dplyr")
library("ggplot2")

# Proportion of missingness per variable
propmiss <- function(dataframe) {
  m <- sapply(dataframe, function(x) {
    data.frame(
      nmiss = sum(is.na(x)),
      n = length(x),
      propmiss = sum(is.na(x)) / length(x)
    )
  })
  d <- data.frame(t(m))
  d <- sapply(d, unlist)
  d <- as.data.frame(d)
  d$variable <- row.names(d)
  row.names(d) <- NULL
  d <- cbind(d[ncol(d)], d[-ncol(d)])
  return(d[order(d$propmiss), ])
}

miss_vars <- propmiss(EndersTable1_1)
miss_vars_mean <- mean(miss_vars$propmiss)
miss_vars_ges <- miss_vars %>% arrange(desc(propmiss))

plot1 <- ggplot(miss_vars_ges, aes(x = reorder(variable, propmiss), y = propmiss * 100)) +
  geom_point(size = 3) +
  coord_flip() +
  theme_bw() + xlab("") + ylab("Missingness per variable") +
  theme(panel.grid.major.x = element_blank(),
        panel.grid.minor.x = element_blank(),
        panel.grid.major.y = element_line(colour = "grey60", linetype = "dashed")) +
  ggtitle("Percentage of missingness")
plot1
```

There is no general rule on how much missing data is acceptable; it depends on your research context and sample size. Sometimes 20 % shouldn't be exceeded, sometimes more than 40 % missings is not tolerable, and sometimes 5 % is already too much. For the cases with the largest amount of missingness you should check whether the person completed the survey conscientiously and whether their data adds value to the quality of your dataset.
I usually inspect the amount of missingness per variable and per person. Often more than 90 % of participants have less than 10 % missings, but two or three cases have as much as 50 %. Concerning the variables, you should check every variable with more than 5 % missingness: Did you have a neutral category? Was the question problematic? Too personal? Too difficult? Questions like these are normally answered in a pretest.
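The per-person inspection is a one-liner in base R. A sketch with a tiny invented data frame:

```r
# Share of missing items per respondent (toy data)
df <- data.frame(a = c(1, NA, 3, NA),
                 b = c(NA, NA, 2, 4),
                 c = c(5, 6, NA, 8))
miss_per_person <- rowMeans(is.na(df))
miss_per_person                    # 1/3, 2/3, 1/3, 1/3

# flag respondents with more than 50 % missing items
which(miss_per_person > 0.5)       # respondent 2
```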

Now, we´ll use the VIM package to visualize missings and if there are any patterns.

```
install.packages("VIM", dependencies = TRUE)
install.packages("VIMGUI", dependencies = TRUE)
library("VIM")
library("VIMGUI")
VIMGUI()

# If you don't like to use the GUI (e.g. for reproducibility), you can also use the console:
aggr(EndersTable1_1, numbers = TRUE, prop = TRUE, combined = TRUE, sortVars = FALSE, vscale = 1)
```

After we chose our dataframe from the environment, VIM gives us some plots to visualise our data:

Aggregation Plot

or

marginplot

Visualisations like these show you whether there are a lot of different missing data patterns (~ random) or some kind of systematic pattern. The MICE package can show missingness patterns as well:

```
install.packages("mice")
library(mice)
md.pattern(EndersTable1_1)

   IQ WB JP
9   1  1  1  0
8   1  1  0  1
1   1  0  1  1
2   1  0  0  2
    0  3 10 13
```

If you can make it plausible that your data is MCAR (non-significant Little's test) or MAR, you can use multiple imputation: generate multiple imputed datasets (the number depending on the amount of missingness), run the analysis on every dataset and pool the results according to Rubin's rules.
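Pooling by Rubin's rules is simple enough to sketch by hand. The estimates and standard errors below are made-up numbers standing in for the results of m = 5 separate analyses:

```r
# Rubin's rules: pool a coefficient estimated on m imputed datasets
est <- c(2.1, 1.9, 2.3, 2.0, 2.2)       # hypothetical estimates from 5 analyses
se  <- c(0.50, 0.48, 0.52, 0.49, 0.51)  # their standard errors
m   <- length(est)

q_bar <- mean(est)                # pooled point estimate
W     <- mean(se^2)               # within-imputation variance
B     <- var(est)                 # between-imputation variance
T_var <- W + (1 + 1/m) * B        # total variance
sqrt(T_var)                       # pooled standard error
```

The between-imputation term B is what single imputation throws away: it carries the extra uncertainty introduced by the missing data.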

How to use MICE for multiple imputation
With MICE you can build an imputation model that is tailored to your dataset. At first this can be a little overwhelming, so we start easy: just call mice() on your dataframe and use the defaults of the package.

```imp <- mice(EndersTable1_1)
imp
summary(imp)

> imp <- mice(EndersTable1_1)
iter imp variable
1   1  JP  WB
1   2  JP  WB
1   3  JP  WB
1   4  JP  WB
1   5  JP  WB
2   1  JP  WB
2   2  JP  WB
2   3  JP  WB
2   4  JP  WB
2   5  JP  WB
3   1  JP  WB
3   2  JP  WB
3   3  JP  WB
3   4  JP  WB
3   5  JP  WB
4   1  JP  WB
4   2  JP  WB
4   3  JP  WB
4   4  JP  WB
4   5  JP  WB
5   1  JP  WB
5   2  JP  WB
5   3  JP  WB
5   4  JP  WB
5   5  JP  WB
> imp
Multiply imputed data set
Call:
mice(data = EndersTable1_1)
Number of multiple imputations:  5
Missing cells per column:
IQ JP WB
0 10  3
Imputation methods:
IQ    JP    WB
"" "pmm" "pmm"
VisitSequence:
JP WB
2  3
PredictorMatrix:
IQ JP WB
IQ  0  0  0
JP  1  0  1
WB  1  1  0
Random generator seed value:  NA
```

MICE generates 5 imputed datasets using an algorithm called "predictive mean matching" (pmm), because all variables are numeric in this case. Pmm has the advantage of finding plausible values even if the data don't follow a normal distribution.
If there were binary data, like a factor with 2 levels, MICE would have chosen "logistic regression imputation" (logreg). If there were an unordered factor with more than 2 levels, MICE would have used "polytomous regression imputation for unordered categorical data" (polyreg). And if there were missings in a variable with more than 2 ordered levels, MICE would have used the "proportional odds model" (polr).

There are many other algorithms for imputation that can be specified:

```
# Built-in elementary imputation methods:

pmm           Predictive mean matching (any)
norm          Bayesian linear regression (numeric)
norm.nob      Linear regression ignoring model error (numeric)
norm.boot     Linear regression using bootstrap (numeric)
norm.predict  Linear regression, predicted values (numeric)
mean          Unconditional mean imputation (numeric)
2l.norm       Two-level normal imputation (numeric)
2l.pan        Two-level normal imputation using pan (numeric)
2lonly.mean   Imputation at level 2 of the class mean (numeric)
2lonly.norm   Imputation at level 2 by Bayesian linear regression (numeric)
2lonly.pmm    Imputation at level 2 by predictive mean matching (any)
logreg        Logistic regression (factor, 2 levels)
logreg.boot   Logistic regression with bootstrap (factor, 2 levels)
polyreg       Polytomous logistic regression (factor, >= 2 levels)
polr          Proportional odds model (ordered, >= 2 levels)
lda           Linear discriminant analysis (factor, >= 2 categories)
cart          Classification and regression trees (any)
rf            Random forest imputations (any)
ri            Random indicator method for nonignorable data (numeric)
sample        Random sample from the observed values (any)
fastpmm       Experimental: fast predictive mean matching using C++ (any)
```

You can decide for each of your variables which imputation algorithm is used. First you should make sure every variable has the right type:

```
str(EndersTable1_1)

> str(EndersTable1_1)
'data.frame':	20 obs. of  3 variables:
 $ IQ: int  78 84 84 85 87 91 92 94 94 96 ...
 $ JP: int  NA NA NA NA NA NA NA NA NA NA ...
 $ WB: int  13 9 10 10 NA 3 12 3 13 NA ...
```

In this case every variable has the type integer. Just as an example, we assume that variable "WB" is an ordered factor.

```
EndersTable1_1$WB <- as.ordered(EndersTable1_1$WB)   # ordered factor, as needed for polr
str(EndersTable1_1)

'data.frame':	20 obs. of  3 variables:
 $ IQ: int  78 84 84 85 87 91 92 94 94 96 ...
 $ JP: int  NA NA NA NA NA NA NA NA NA NA ...
 $ WB: Ord.factor w/ 8 levels "3"<"6"<"9"<"10"<..: 7 3 4 4 NA 1 6 1 7 NA ...
```

Now we can use the argument method = c('', 'pmm', 'polr') in the mice() call to specify the imputation algorithm for each variable.

By default, MICE also uses every other variable in the dataset to estimate the missing values; this is usually called a "massive imputation". It can be problematic, because a variable that doesn't correlate with the variable to be imputed only adds noise to the estimation. Leaving such a variable out of the imputation model can improve data quality. There is an easy way to build a "predictor matrix" using quickpred():

```
predictormatrix <- quickpred(EndersTable1_1,
                             include = c("IQ"),
                             exclude = NULL,
                             mincor = 0.1)
```

Here, I force MICE to include the variable "IQ" in the predictor matrix. No variable is excluded a priori, but with mincor = 0.1 I decide to use only those variables as predictors in the imputation model that correlate at least r = 0.1 with the target variable. Very weakly correlated variables are left out. Your estimates can also become biased if you include too many auxiliary variables.
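What the mincor filter does can be illustrated in base R with toy data (quickpred itself additionally looks at correlations with the missingness indicator, which this sketch omits):

```r
# Conceptual sketch of the mincor filter (toy data)
d <- data.frame(target = 1:10,
                strong = c(2, 1, 4, 3, 6, 5, 8, 7, 10, 9),  # tracks the target
                noise  = c(1, 2, 3, 4, 5, 5, 4, 3, 2, 1))   # uncorrelated by construction
cors <- abs(cor(d))["target", c("strong", "noise")]
names(cors)[cors >= 0.1]   # only "strong" would be kept as a predictor
```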

Now comes an example of a more tailored imputation model. It is really just a simple demonstration; the imputation model should always be made specifically for your dataset. First we build a predictor matrix, then we make sure every variable is of the right type, and then we let mice() generate 10 imputed datasets based on the algorithms we specified in the method argument.

```
set.seed(121012)
predictormatrix <- quickpred(EndersTable1_1,
                             include = c("IQ"),
                             exclude = NULL,
                             mincor = 0.1)
str(EndersTable1_1)
EndersTable1_1 <- as.data.frame(lapply(EndersTable1_1, as.numeric))
EndersTable1_1$WB <- as.ordered(EndersTable1_1$WB)   # ordered, as required by polr
str(EndersTable1_1)

imp_gen <- mice(data = EndersTable1_1,
                predictorMatrix = predictormatrix,
                method = c('pmm', 'pmm', 'polr'),
                m = 10,
                maxit = 5,
                diagnostics = TRUE,
                MaxNWts = 3000)
```

Now, we inspect the imputed values and save the imputed datasets in one file.

```
# Check plausibility of the results
# Variable JP
imp_gen$imp$JP
nrow(imp_gen$imp$JP)

# Variable WB
imp_gen$imp$WB
nrow(imp_gen$imp$WB)

# bring your imputed data into long format (the first column ".imp" is the number
# of the imputation, the second column ".id" is the id of the case)
imp_data <- mice::complete(imp_gen, "long", include = FALSE)

# save data
write.table(imp_data, file = "/imp_test.csv", sep = ";")
```

How to use Multiple Imputation with lavaan
There are three ways to use multiple imputation in lavaan. The first (i) uses runMI() to do the multiple imputation and the model estimation in one step. The second (ii) does the multiple imputation with mice() first and then gives the multiply imputed data to runMI(), which does the model estimation based on this data. Since both ways use runMI(), they run the analysis once for each imputed dataset and then use Rubin's rules to pool the results. Here is a diagram showing the principle:

The third way (iii) uses the lavaan.survey package. In this example we don't specify any sampling design or survey weights, but if you need to, it is possible. Here, you first use mice() to do the multiple imputation (if you use a survey weight, be sure to include it in the imputation model) and then pass the imputed data to the survey package to generate a svydesign() object. This svydesign() object can itself be passed to lavaan.survey(), together with the lavaan model. The way lavaan.survey() uses multiply imputed data differs from runMI(): instead of pooling the results of each dataset after analysis, the data are pooled first (to be precise: the variance-covariance matrices are calculated first, taking account of the sampling design, and then these matrices, which are the data basis for the model estimation, are pooled) and only one analysis is run. The results can differ somewhat, but tend to be similar. Of course, you only use lavaan.survey() if you need to incorporate weights or a sampling design. It is evident that it will give more adequate results than using runMI() and omitting the weights, even though the pooling does not happen in the typical order.

Example:

```#--------------------------
# Setting up packages
#--------------------------
install.packages(c("semTools", "lavaan"))
install.packages("survey")
install.packages("lavaan.survey")
install.packages("mitools")
install.packages("mice")
library("survey")
library("mice")
library("mitools")
library("semTools")
library("lavaan")
library("lavaan.survey")

#--------------------------
# Setting up example data and model
#--------------------------

# Create data with missings
set.seed(20170110)
HSMiss <- HolzingerSwineford1939[,paste("x", 1:9, sep="")]
randomMiss <- rbinom(prod(dim(HSMiss)), 1, 0.1)
randomMiss <- matrix(as.logical(randomMiss), nrow=nrow(HSMiss))
HSMiss[randomMiss] <- NA

# lavaan model
HS.model <- ' visual  =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed   =~ x7 + x8 + x9 '
```
```
#------------------------------------------------------------------
# Variant 1: Imputation and model estimation with runMI
#-------------------------------------------------------------------

# run lavaan and the imputation in one step
out1 <- runMI(HS.model,
              data = HSMiss,
              m = 5,
              miPackage = "mice",
              fun = "cfa",
              meanstructure = TRUE)
summary(out1)
fitMeasures(out1, "chisq")
```

At the moment you´ll get a warning:

```** WARNING ** lavaan (0.5-22) model has NOT been fitted
** WARNING ** Estimates below are simply the starting values```

You can ignore it: these warnings are side effects from semTools and don't have any meaning here.

```
#------------------------------------------------------------------
# Variant 2: Imputation in step 1 and model estimation in step 2 with runMI
#-------------------------------------------------------------------

# impute data first
HSMiss_imp <- mice(HSMiss, m = 5)
mice.imp <- list()
for(i in 1:5) mice.imp[[i]] <- complete(HSMiss_imp, action = i, include = FALSE)

# run lavaan with the previously imputed data using runMI
out2 <- runMI(HS.model,
              data = mice.imp,
              fun = "cfa",
              meanstructure = TRUE)
summary(out2)
fitMeasures(out2, "chisq")
```

Here, we did the multiple imputation with mice() first and then passed the data to runMI(). In the first model we had a chisq of 73.841 and now the chisq is 78.752. This is most likely because the imputations differ (different imputation runs and random draws).

```
#------------------------------------------------------------------
# Variant 3: Imputation in step 1 and model estimation in step 2 with lavaan.survey (but without weights)
#-------------------------------------------------------------------

# take the previously imputed data from variant 2 and convert it to a svydesign object
mice.imp2 <- lapply(seq(HSMiss_imp$m), function(im) complete(HSMiss_imp, im))
mice.imp2 <- mitools::imputationList(mice.imp2)
svy.df_imp <- survey::svydesign(id = ~1, weights = ~1, data = mice.imp2)   # create the survey object

# fit the model once to obtain a lavaan object (e.g. on the first imputed dataset),
# then re-estimate it with lavaan.survey on the imputation list
lavaan_fit_HS.model <- cfa(HS.model, data = complete(HSMiss_imp, 1), meanstructure = TRUE)
out3 <- lavaan.survey(lavaan_fit_HS.model, svy.df_imp)
summary(out3)
fitMeasures(out3, "chisq")
```

In this last model the chisq is 96.748, which is somewhat higher than in model 1 (chisq = 73.841) or model 2 (chisq = 78.752). That is due to the different pooling strategies. But, as I said before, using lavaan.survey() without weights or a sampling design does not make sense. And if you need weights, runMI() is not an option.

Of course, you can also use the FIML method and just use the dataset with the missings. FIML does not work with lavaan.survey(), only with lavaan().

```
#------------------------------------------------------------------
# Variant 4: Use FIML (full information maximum likelihood) instead of multiple imputation
#-------------------------------------------------------------------

# fit the model with lavaan using FIML
out4 <- cfa(HS.model, data = HSMiss, missing = "FIML", meanstructure = TRUE)
summary(out4)
fitMeasures(out4, "chisq")
```

The chi-square is 76.13, which isn't very different from the first two methods.

FIML is definitely easier to apply than multiple imputation, because you don't have to work out an imputation model. On the other hand, you can't specify an imputation model, which can come in handy if your data is MAR and you want to include certain auxiliary variables. Also, if you decide to use lavaan.survey(), you cannot use FIML, because it only supports complete or multiply imputed data.

Example for a latent class analysis with the poLCA-package in R

When you work with R for some time, you really start to wonder why so many R packages have some kind of pun in their name. Intended or not, the poLCA package is one of them. Today I'll give a glimpse of this package, which doesn't have anything to do with dancing or nice dotted dresses.

The poLCA package has its name from "polytomous latent class analysis". Latent class analysis is an awesome and still underused (at least in the social sciences) statistical method for identifying unobserved groups of cases in your data. Polytomous latent class analysis is applicable to categorical data. The unobserved (latent) variable could be different attitude sets of people which lead to certain response patterns in a survey. In marketing or market research, latent class analysis could be used to identify unobserved target groups with different attitude structures on the basis of their buying decisions. The data would be binary (not bought, bought) and, depending on the products, you could perhaps identify a class which chooses the cheapest, the most durable, the most environmentally friendly, the most status-relevant […] product. The latent classes are assumed to be nominal. The poLCA package does not support sampling weights yet.

By the way: there are also other packages for latent class models, called "lcmm" and "mclust".

What does a latent class analysis try to do?
A latent class model uses the different response patterns in the data to find similar groups. It tries to assign groups that are "conditionally independent". That means that within a group the correlations between the variables become zero, because the group membership explains any relationship between the variables.
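Conditional independence can be made concrete with a small simulation (invented data): two items that are driven only by class membership correlate in the pooled sample, but not within a class.

```r
# Two items correlate overall, but are independent within each latent class
set.seed(3)
class <- rep(1:2, each = 200)
p <- ifelse(class == 1, 0.8, 0.2)   # response probability depends only on class
a <- rbinom(400, 1, p)
b <- rbinom(400, 1, p)

cor(a, b)                            # clearly positive in the pooled sample
cor(a[class == 1], b[class == 1])    # close to zero inside class 1
```

The pooled correlation is entirely an artifact of mixing the two classes, which is exactly the structure a latent class model tries to recover.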

Latent class analysis is different from latent profile analysis: the latter uses continuous data, while the former is used with categorical data.
Another important aspect of latent class analysis is that your elements (persons, observations) are not assigned absolutely, but probabilistically. So for each person you get a probability of being assigned to group 1, group 2, […], group k.

Before you estimate your LCA model you have to choose how many classes you want. You aim for a small number of classes, so that the model is still adequate for the data, but also parsimonious.
If you have a theoretically justified number of groups (k) that you expect in your data, you perhaps only model this one solution. A typical assumption would be one group that is pro, one group contra and one group neutral towards an object. Another, more exploratory approach is to estimate multiple models – perhaps one with 2 classes, one with 3, one with 4 – and compare them against each other. If you choose this second way, you can pick the model that has the most plausible interpretation. Additionally, you can compare the different solutions by the BIC or AIC information criteria. BIC is preferred over AIC in latent class models, but usually both are reported; a smaller BIC is better than a bigger one. Next to AIC and BIC you also get a chi-square goodness-of-fit test.
I once asked Drew Linzer, the developer of poLCA, whether some kind of LMR test (like in Mplus) would be implemented at some point. He said that he wouldn't rely on statistical criteria to decide which model is best; instead he would look at which model has the most meaningful interpretation and best answers the research question.

Latent class models belong to the family of (finite) mixture models. The parameters are estimated by the EM algorithm. It's called EM because it has two steps: an "E"xpectation step and a "M"aximization step. In the first, class-membership probabilities are estimated (the first time with some starting values), and in the second step those estimates are altered to maximize the likelihood function. Both steps are repeated iteratively until the algorithm converges, ideally at the global maximum: the solution with the highest possible likelihood. That's why starting values are important in latent class analysis. I'm a social scientist who applies statistics, not a statistician, but as far as I understand it, depending on the starting values the algorithm can stop at one (!) local maximum that is not the "best" one (the global maximum), so the algorithm perhaps should have run further. If you run the estimation multiple times with different starting values and it always arrives at the same solution, you can be pretty sure that you have found the global maximum.
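To make the E and M steps concrete, here is a minimal EM loop for a two-component Gaussian mixture with known unit variances (simulated data; a latent class model works the same way, just with categorical likelihoods and more parameters):

```r
# Minimal EM sketch: two-component normal mixture, variances fixed at 1
set.seed(7)
y <- c(rnorm(60, 0, 1), rnorm(40, 5, 1))   # two well-separated latent groups
pi1 <- 0.5; mu1 <- 1; mu2 <- 4             # starting values

for (iter in 1:50) {
  # E step: posterior probability of belonging to component 1
  l1 <- pi1 * dnorm(y, mu1, 1)
  l2 <- (1 - pi1) * dnorm(y, mu2, 1)
  post1 <- l1 / (l1 + l2)
  # M step: update parameters to maximize the expected log-likelihood
  pi1 <- mean(post1)
  mu1 <- sum(post1 * y) / sum(post1)
  mu2 <- sum((1 - post1) * y) / sum(1 - post1)
}
round(c(pi1, mu1, mu2), 2)   # close to the true values 0.6, 0, 5
```

With badly chosen starting values the same loop can settle on a worse local maximum, which is why poLCA's nrep argument re-runs the estimation from several random starts.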

Data preparation
Latent class models don't assume the variables to be continuous, but (unordered) categorical. The variables are not allowed to contain zeros, negative values or decimals, as you can read in the poLCA vignette. If your variables are binary 0/1, you should add 1 to every value so they become 1/2. If you have NA values, you have to recode them into a new category; for rating items with values from 1-5, the NAs could be recoded to the value 6.

`mydata[is.na(mydata)] <- 6`
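For a 0/1 item the same preparation takes two steps. A sketch with a hypothetical item vector:

```r
# Recode a binary item for poLCA: shift 0/1 to 1/2, then give NAs their own category
item <- c(0, 1, NA, 1, 0, NA)
item <- item + 1          # 0/1 -> 1/2 (poLCA does not allow zeros)
item[is.na(item)] <- 3    # missings become category 3
item                      # 1 2 3 2 1 3
```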

Running LCA models
First you should install the package and define a formula for the model to be estimated.

```
install.packages("poLCA")
library("poLCA")

# By the way, for all examples in this article you'll need some more packages:
library("reshape2")
library("plyr")
library("dplyr")
library("ggplot2")
library("ggparallel")
library("igraph")
library("tidyr")
library("knitr")

# these are the defaults of the poLCA command
poLCA(formula, data, nclass=2, maxiter=1000, graphs=FALSE, tol=1e-10,
      na.rm=TRUE, probs.start=NULL, nrep=1, verbose=TRUE, calc.se=TRUE)

# estimate a model with k classes (the formula f is defined in the next section)
k <- 3
lc <- poLCA(f, data, nclass=k, nrep=30, na.rm=FALSE, graphs=TRUE)
```

The following code stems from this article. It runs a sequence of models with two to ten classes. With nrep=10, each model is estimated 10 times from different starting values and poLCA keeps the run with the highest likelihood; the loop below then keeps the model with the lowest BIC.

```
# select variables
mydata <- data %>% dplyr::select(F29_a,F29_b,F29_c,F27_a,F27_b,F27_e,F09_a,F09_b,F09_c)

# define the model formula
f <- with(mydata, cbind(F29_a,F29_b,F29_c,F27_a,F27_b,F27_e,F09_a,F09_b,F09_c)~1)

#------ run a sequence of models with 2-10 classes and keep the model with the lowest BIC
min_bic <- 100000
for(i in 2:10){
  lc <- poLCA(f, mydata, nclass=i, maxiter=3000,
              tol=1e-5, na.rm=FALSE,
              nrep=10, verbose=TRUE, calc.se=TRUE)
  if(lc$bic < min_bic){
    min_bic <- lc$bic
    LCA_best_model <- lc
  }
}
LCA_best_model
```

You'll get the poLCA package's standard output for the best model:

```
Conditional item response (column) probabilities,
by outcome variable, for each class (row)

$F29_a
Pr(1)  Pr(2)  Pr(3)  Pr(4)  Pr(5)  Pr(6)  Pr(7)
class 1:  0.0413 0.2978 0.1638 0.2487 0.1979 0.0428 0.0078
class 2:  0.0000 0.0429 0.0674 0.3916 0.4340 0.0522 0.0119
class 3:  0.0887 0.5429 0.2713 0.0666 0.0251 0.0055 0.0000

$F29_b
Pr(1)  Pr(2)  Pr(3)  Pr(4)  Pr(5)  Pr(6)  Pr(7)
class 1:  0.0587 0.2275 0.1410 0.3149 0.1660 0.0697 0.0222
class 2:  0.0000 0.0175 0.0400 0.4100 0.4249 0.0724 0.0351
class 3:  0.0735 0.4951 0.3038 0.0669 0.0265 0.0271 0.0070

$F29_c
Pr(1)  Pr(2)  Pr(3)  Pr(4)  Pr(5)  Pr(6)  Pr(7)
class 1:  0.0371 0.2082 0.1022 0.1824 0.3133 0.1365 0.0202
class 2:  0.0000 0.0086 0.0435 0.3021 0.4335 0.1701 0.0421
class 3:  0.0815 0.4690 0.2520 0.0903 0.0984 0.0088 0.0000

$F27_a
Pr(1)  Pr(2)  Pr(3)  Pr(4)  Pr(5)  Pr(6)  Pr(7)
class 1:  0.7068 0.2373 0.0248 0.0123 0.0000 0.0188 0.0000
class 2:  0.6914 0.2578 0.0128 0.0044 0.0207 0.0085 0.0044
class 3:  0.8139 0.1523 0.0110 0.0000 0.0119 0.0000 0.0108

$F27_b
Pr(1)  Pr(2)  Pr(3)  Pr(4)  Pr(5)  Pr(6)  Pr(7)
class 1:  0.6198 0.1080 0.0426 0.0488 0.1226 0.0582 0.0000
class 2:  0.6336 0.1062 0.0744 0.0313 0.1047 0.0370 0.0128
class 3:  0.7185 0.1248 0.0863 0.0158 0.0325 0.0166 0.0056

$F27_e
Pr(1)  Pr(2)  Pr(3)  Pr(4)  Pr(5)  Pr(6)  Pr(7)
class 1:  0.6595 0.1442 0.0166 0.0614 0.0926 0.0062 0.0195
class 2:  0.6939 0.1474 0.0105 0.0178 0.0725 0.0302 0.0276
class 3:  0.7869 0.1173 0.0375 0.0000 0.0395 0.0000 0.0188

$F09_a
Pr(1)  Pr(2)  Pr(3)  Pr(4)  Pr(5)  Pr(6)
class 1:  0.8325 0.1515 0.0000 0.0000 0.0160 0.0000
class 2:  0.1660 0.3258 0.2448 0.1855 0.0338 0.0442
class 3:  0.1490 0.2667 0.3326 0.1793 0.0575 0.0150

$F09_b
Pr(1)  Pr(2)  Pr(3)  Pr(4)  Pr(5)  Pr(6)
class 1:  0.8116 0.1594 0.0120 0.0069 0.0000 0.0101
class 2:  0.0213 0.3210 0.4000 0.2036 0.0265 0.0276
class 3:  0.0343 0.3688 0.3063 0.2482 0.0264 0.0161

$F09_c
Pr(1)  Pr(2)  Pr(3)  Pr(4)  Pr(5)  Pr(6)
class 1:  0.9627 0.0306 0.0067 0.0000 0.0000 0.0000
class 2:  0.1037 0.4649 0.2713 0.0681 0.0183 0.0737
class 3:  0.1622 0.4199 0.2338 0.1261 0.0258 0.0322

Estimated class population shares
0.2792 0.4013 0.3195

Predicted class memberships (by modal posterior prob.)
0.2738 0.4055 0.3206

=========================================================
Fit for 3 latent classes:
=========================================================
number of observations: 577
number of estimated parameters: 155
residual degrees of freedom: 422
maximum log-likelihood: -6646.732

AIC(3): 13603.46
BIC(3): 14278.93
G^2(3): 6121.357 (Likelihood ratio/deviance statistic)
X^2(3): 8967872059 (Chi-square goodness of fit)
```

Generate a table showing fit values of multiple models
Now I want to build a table comparing various model fit values, like this:

```
Model	 log-likelihood	 resid. df	 BIC	 aBIC	 cAIC	 likelihood-ratio	 Entropy
Model 1	 -7171.940	 526	 14668.13	 14046.03	 14719.13	 7171.774	 -
Model 2	 -6859.076	 474	 14373.01	 14046.03	 14476.01	 6546.045	 0.86
Model 3	 -6646.732	 422	 14278.93	 13786.87	 14433.93	 6121.357	 0.879
Model 4	 -6528.791	 370	 14373.66	 13716.51	 14580.66	 5885.477	 0.866
Model 5	 -6439.588	 318	 14525.86	 13703.64	 14784.86	 5707.070	 0.757
Model 6	 -6366.002	 266	 14709.29	 13721.99	 15020.29	 5559.898	 0.865
```

This table was built with the following code:

```
# select data
mydata <- data %>% dplyr::select(F29_a,F29_b,F29_c,F27_a,F27_b,F27_e,F09_a,F09_b,F09_c)

# define the model formula
f <- with(mydata, cbind(F29_a,F29_b,F29_c,F27_a,F27_b,F27_e,F09_a,F09_b,F09_c)~1)

## models with different numbers of classes, without covariates:
set.seed(01012)
lc1<-poLCA(f, data=mydata, nclass=1, na.rm = FALSE, nrep=30, maxiter=3000) # loglinear independence model
lc2<-poLCA(f, data=mydata, nclass=2, na.rm = FALSE, nrep=30, maxiter=3000)
lc3<-poLCA(f, data=mydata, nclass=3, na.rm = FALSE, nrep=30, maxiter=3000)
lc4<-poLCA(f, data=mydata, nclass=4, na.rm = FALSE, nrep=30, maxiter=3000)
lc5<-poLCA(f, data=mydata, nclass=5, na.rm = FALSE, nrep=30, maxiter=3000)
lc6<-poLCA(f, data=mydata, nclass=6, na.rm = FALSE, nrep=30, maxiter=3000)

# generate dataframe with fit values
# (stringsAsFactors = FALSE so the other model names can be added below)
results <- data.frame(Model=c("Model 1"),
                      log_likelihood=lc1$llik,
                      df = lc1$resid.df,
                      BIC=lc1$bic,
                      ABIC= (-2*lc1$llik) + ((log((lc1$N + 2)/24)) * lc1$npar),
                      CAIC = (-2*lc1$llik) + lc1$npar * (1 + log(lc1$N)),
                      likelihood_ratio=lc1$Gsq,
                      stringsAsFactors = FALSE)
results[2,1]<-c("Model 2")
results[3,1]<-c("Model 3")
results[4,1]<-c("Model 4")
results[5,1]<-c("Model 5")
results[6,1]<-c("Model 6")

results[2,2]<-lc2$llik
results[3,2]<-lc3$llik
results[4,2]<-lc4$llik
results[5,2]<-lc5$llik
results[6,2]<-lc6$llik

results[2,3]<-lc2$resid.df
results[3,3]<-lc3$resid.df
results[4,3]<-lc4$resid.df
results[5,3]<-lc5$resid.df
results[6,3]<-lc6$resid.df

results[2,4]<-lc2$bic
results[3,4]<-lc3$bic
results[4,4]<-lc4$bic
results[5,4]<-lc5$bic
results[6,4]<-lc6$bic

results[2,5]<-(-2*lc2$llik) + ((log((lc2$N + 2)/24)) * lc2$npar) #abic
results[3,5]<-(-2*lc3$llik) + ((log((lc3$N + 2)/24)) * lc3$npar)
results[4,5]<-(-2*lc4$llik) + ((log((lc4$N + 2)/24)) * lc4$npar)
results[5,5]<-(-2*lc5$llik) + ((log((lc5$N + 2)/24)) * lc5$npar)
results[6,5]<-(-2*lc6$llik) + ((log((lc6$N + 2)/24)) * lc6$npar)

results[2,6]<- (-2*lc2$llik) + lc2$npar * (1 + log(lc2$N)) #caic
results[3,6]<- (-2*lc3$llik) + lc3$npar * (1 + log(lc3$N))
results[4,6]<- (-2*lc4$llik) + lc4$npar * (1 + log(lc4$N))
results[5,6]<- (-2*lc5$llik) + lc5$npar * (1 + log(lc5$N))
results[6,6]<- (-2*lc6$llik) + lc6$npar * (1 + log(lc6$N))

results[2,7]<-lc2$Gsq
results[3,7]<-lc3$Gsq
results[4,7]<-lc4$Gsq
results[5,7]<-lc5$Gsq
results[6,7]<-lc6$Gsq
```
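The repetitive assignments above can also be collapsed into a small helper; here is a sketch (`fit_row` is my own name, the aBIC/cAIC formulas are the same ones used above):

```r
# One row of fit statistics per fitted poLCA model.
fit_row <- function(lc) {
  data.frame(log_likelihood   = lc$llik,
             resid_df         = lc$resid.df,
             BIC              = lc$bic,
             aBIC             = -2 * lc$llik + log((lc$N + 2) / 24) * lc$npar,
             cAIC             = -2 * lc$llik + lc$npar * (1 + log(lc$N)),
             likelihood_ratio = lc$Gsq)
}

# quick check with a fake model object (hypothetical numbers):
fake <- list(llik = -100, resid.df = 10, bic = 210, N = 100, npar = 5, Gsq = 50)
fit_row(fake)

# with the real models it becomes a one-liner:
# results <- do.call(rbind, lapply(list(lc1, lc2, lc3, lc4, lc5, lc6), fit_row))
```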

Now I calculate the entropy (a pseudo-R-squared) for each solution. I took the idea from Daniel Oberski's presentation on LCA.

```
entropy<-function (p) sum(-p*log(p))

results$R2_entropy <- NA   # add an (empty) entropy column
results[1,8]<-c("-")       # no entropy for the 1-class model

error_prior<-entropy(lc2$P) # class proportions model 2
error_post<-mean(apply(lc2$posterior,1, entropy),na.rm = TRUE)
results[2,8]<-round(((error_prior-error_post) / error_prior),3)

error_prior<-entropy(lc3$P) # class proportions model 3
error_post<-mean(apply(lc3$posterior,1, entropy),na.rm = TRUE)
results[3,8]<-round(((error_prior-error_post) / error_prior),3)

error_prior<-entropy(lc4$P) # class proportions model 4
error_post<-mean(apply(lc4$posterior,1, entropy),na.rm = TRUE)
results[4,8]<-round(((error_prior-error_post) / error_prior),3)

error_prior<-entropy(lc5$P) # class proportions model 5
error_post<-mean(apply(lc5$posterior,1, entropy),na.rm = TRUE)
results[5,8]<-round(((error_prior-error_post) / error_prior),3)

error_prior<-entropy(lc6$P) # class proportions model 6
error_post<-mean(apply(lc6$posterior,1, entropy),na.rm = TRUE)
results[6,8]<-round(((error_prior-error_post) / error_prior),3)

# rename the columns of the results dataframe
colnames(results)<-c("Model","log-likelihood","resid. df","BIC","aBIC","cAIC","likelihood-ratio","Entropy")
lca_results<-results

# Generate an HTML table and show it in the RStudio viewer (for copy & paste)
view_kable <- function(x, ...){
  tab <- paste(capture.output(kable(x, ...)), collapse = '\n')
  tf <- tempfile(fileext = ".html")
  writeLines(tab, tf)
  rstudioapi::viewer(tf)   # the old rstudio::viewer() call no longer works
}
view_kable(lca_results, format = 'html', table.attr = "class=nofluid")

# Another possibility which is prettier and easier to do:
install.packages("ztable")
ztable::ztable(lca_results)
```
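One caveat with the `entropy()` helper above: it returns NaN as soon as a probability is exactly zero, because `0 * log(0)` is undefined in floating point. A guarded variant (my own sketch, not from the original post):

```r
# Drop zero probabilities before computing Shannon entropy;
# the limit of p*log(p) as p -> 0 is 0, so this is mathematically safe.
entropy_safe <- function(p) {
  p <- p[p > 0]
  sum(-p * log(p))
}
```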

Elbow-Plot
Sometimes an elbow plot (or scree plot) helps to see which solution is parsimonious and still fits well. You can get it with this ggplot2 code I wrote:

```
# plot 1

# order the models in order of appearance
install.packages("forcats")
library("forcats")
results$Model <- as_factor(results$Model)

# convert to long format (columns 4:7 hold BIC, aBIC, cAIC and likelihood-ratio)
results2<-tidyr::gather(results,Kriterium,Guete,4:7)
results2

# plot
fit.plot<-ggplot(results2) +
  geom_point(aes(x=Model,y=Guete),size=3) +
  geom_line(aes(Model, Guete, group = 1)) +
  theme_bw()+
  labs(x = "", y="", title = "") +
  facet_grid(Kriterium ~. ,scales = "free") +
  theme_bw(base_size = 16, base_family = "") +
  theme(panel.grid.major.x = element_blank(),
        panel.grid.major.y = element_line(colour="grey", size=0.5),
        legend.title = element_text(size = 16, face = 'bold'),
        axis.text = element_text(size = 16),
        axis.title = element_text(size = 16),
        legend.text = element_text(size=16),
        axis.line = element_line(colour = "black")) # slightly thicker axis lines

# save as 650 x 800
fit.plot
```

Inspect population shares of classes
If you are interested in the population shares of the classes, you can get them like this:

```
round(colMeans(lc$posterior)*100,2)
[1] 27.92 40.13 31.95
```

or you inspect the estimated class memberships:

```
table(lc$predclass)
  1   2   3
158 234 185

round(prop.table(table(lc$predclass)),4)*100
    1     2     3
27.38 40.55 32.06
```
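"Modal posterior probability" simply means each case is assigned to the class with its highest posterior probability. A small sketch with a hypothetical 3-case posterior matrix:

```r
# Each row is one case, each column one class (hypothetical values).
posterior <- matrix(c(0.7, 0.2, 0.1,
                      0.1, 0.8, 0.1,
                      0.2, 0.3, 0.5), nrow = 3, byrow = TRUE)

# which.max per row reproduces the logic behind poLCA's predclass.
predclass <- apply(posterior, 1, which.max)
```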

Ordering of latent classes
Latent classes are unordered, so which latent class becomes number one, two or three is arbitrary. The classes are nominal, so there is no natural reason for one class to come first. If you still need a specific order, there is a function for manually reordering the latent classes: poLCA.reorder()
First, you run an LCA model, extract its starting values, and run the model again, this time with a manually set order.

```
# extract the starting values of our previous best model (with 3 classes)
probs.start<-lc3$probs.start

# re-run the model, this time with "graphs=TRUE"
lc<-poLCA(f, mydata, nclass=3, probs.start=probs.start, graphs=TRUE, na.rm=TRUE, maxiter=3000)

# If you don't like the order, reorder the classes
# (here: class 1 stays 1, class 3 becomes 2, class 2 becomes 3)
new.probs.start<-poLCA.reorder(probs.start, c(1,3,2))

lc<-poLCA(f, mydata, nclass=3, probs.start=new.probs.start, graphs=TRUE, na.rm=TRUE)
lc
```

Now you have reordered your classes. You can save these starting values if you want to recreate the model at any time.

`saveRDS(lc$probs.start, "lca_starting_values.rds")`

Plotting

This is the standard poLCA plot of the conditional probabilities, which you get if you add "graphs=TRUE" to the poLCA call.

It's in a 3D style, which is not really to my taste. I found some code from dsparks on GitHub and this blog that makes very appealing ggplot2 plots, and made some small adjustments.

```
lcmodel <- reshape2::melt(lc$probs, level=2)
zp1 <- ggplot(lcmodel,aes(x = L1, y = value, fill = Var2))
zp1 <- zp1 + geom_bar(stat = "identity", position = "stack")
zp1 <- zp1 + facet_grid(Var1 ~ .)
zp1 <- zp1 + scale_fill_brewer(type="seq", palette="Greys") + theme_bw()
zp1 <- zp1 + labs(x = "Questionnaire items", y="Share of item\nresponse categories", fill ="Response categories")
zp1 <- zp1 + theme(axis.text.y=element_blank(),
                   axis.ticks.y=element_blank(),
                   panel.grid.major.y=element_blank())
zp1 <- zp1 + guides(fill = guide_legend(reverse=TRUE))
print(zp1)
```

If you want to compare the items directly:

```
zp2 <- ggplot(lcmodel,aes(x = Var1, y = value, fill = Var2))
zp2 <- zp2 + geom_bar(stat = "identity", position = "stack")
zp2 <- zp2 + facet_wrap(~ L1)
zp2 <- zp2 + scale_x_discrete("Questionnaire items", expand = c(0, 0))
zp2 <- zp2 + scale_y_continuous("Probabilities of the\nitem response categories", expand = c(0, 0))
zp2 <- zp2 + scale_fill_brewer(type="seq", palette="Greys") + theme_bw()
zp2 <- zp2 + labs(fill ="Response categories")
zp2 <- zp2 + theme(axis.text.y=element_blank(),
                   axis.ticks.y=element_blank(),
                   panel.grid.major.y=element_blank()#,
                   #legend.justification=c(1,0),
                   #legend.position=c(1,0)
                   )
zp2 <- zp2 + guides(fill = guide_legend(reverse=TRUE))
print(zp2)
```