Example for a latent class analysis with the poLCA-package in R

When you work with R for some time, you really start to wonder why so many R packages have some kind of pun in their name. Intended or not, the poLCA package is one of them. Today I'll give a glimpse of this package, which has nothing to do with dancing or nicely dotted dresses.

This article is a draft of sorts and will be revised from time to time.

The poLCA package takes its name from "Polytomous Latent Class Analysis". Latent class analysis is a powerful and still underused (at least in the social sciences) statistical method for identifying unobserved groups of cases in your data. Polytomous latent class analysis is applicable to categorical data. The unobserved (latent) variable could be different attitude sets of people that lead to certain response patterns in a survey. In marketing or market research, latent class analysis could be used to identify unobserved target groups with different attitude structures on the basis of their buying decisions. The data would be binary (not bought, bought), and depending on the products you could perhaps identify a class that chooses the cheapest, the most durable, the most environmentally friendly, the most status-relevant […] product. The latent classes are assumed to be nominal. The poLCA package does not yet support sampling weights.

By the way: there are also other packages for latent class models, such as "lcmm" and "mclust".

What does a latent class analysis try to do?
A latent class model uses the different response patterns in the data to find similar groups. It tries to form groups whose members are "conditionally independent": within a group, the correlations between the variables drop to zero, because the group membership explains any relationship between the variables.

Latent class analysis is different from latent profile analysis: the latter uses continuous data, while the former is used with categorical data.
Another important aspect of latent class analysis is that your elements (persons, observations) are not assigned absolutely, but probabilistically. So for each person you get a probability of being assigned to group 1, group 2, […], group k.

Before you estimate your LCA model, you have to choose how many groups you want. You aim for a small number of classes, so that the model is still adequate for the data, but also parsimonious.
If you have a theoretically justified number of groups (k) that you expect in your data, you may model only that one solution. A typical assumption would be one group that is pro, one group that is contra, and one group that is neutral towards an object. Another, more exploratory approach is to fit multiple models (perhaps one with 2 groups, one with 3 groups, one with 4 groups) and compare them against each other. If you choose this second way, you can pick the model with the most plausible interpretation. Additionally, you can compare the solutions by the BIC or AIC information criteria. BIC is preferred over AIC in latent class models, but usually both are reported; a smaller BIC is better than a bigger one. Besides AIC and BIC, you also get a chi-square goodness-of-fit statistic.
I once asked Drew Linzer, the developer of poLCA, whether some kind of LMR test (as in Mplus) would be implemented at some point. He said that he wouldn't rely on statistical criteria to decide which model is best, but would rather look at which model has the most meaningful interpretation and best answers the research question.

Latent class models belong to the family of (finite) mixture models. The parameters are estimated by the EM algorithm. It's called EM because it has two steps: an "E"xpectation step and a "M"aximization step. In the first, class-membership probabilities are computed (initially from some starting values), and in the second, those estimates are altered to maximize the likelihood function. The two steps alternate until the likelihood stops improving; ideally this is the solution with the highest possible likelihood, the global maximum. That's why starting values are important in latent class analysis. I'm a social scientist who applies statistics, not a statistician, but as far as I understand it, depending on the starting values the algorithm can stop at one (!) local maximum that is not the "best" one (the global maximum), so the algorithm should perhaps have run further. If you run the estimation multiple times with different starting values and it always arrives at the same solution, you can be fairly sure that you have found the global maximum.

Data preparation
Latent class models don't assume the variables to be continuous, but (unordered) categorical. As you can read in the poLCA vignette, the variables must not contain zeros, negative values or decimals. If your variables are coded binary 0/1, add 1 to every value so they become 1/2. If you have NA values, you have to recode them into a new category; for rating items with values from 1 to 5, for example, the NAs could be recoded into a new category 6.

Running LCA models
First you should install the package and define a formula for the model to be estimated.
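A minimal sketch of this step; the data frame mydata and the variable names var1…var4 are placeholders for your own data:

```r
# install and load poLCA (installation is only needed once)
install.packages("poLCA")
library(poLCA)

# poLCA expects the manifest variables bound together with cbind();
# "~ 1" means the model has no covariates
f <- with(mydata, cbind(var1, var2, var3, var4) ~ 1)
```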

The following code stems from this article. It runs a sequence of models with two to ten groups. With nrep=10, each model is estimated ten times from different starting values and the run with the highest likelihood is kept; the loop then retains the model with the lowest BIC.
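A sketch of such a loop, assuming the formula f and the data frame mydata defined earlier (both placeholders):

```r
# estimate models with 2 to 10 classes and keep the one with the lowest BIC
min_bic <- 100000
for(i in 2:10){
  lc <- poLCA(f, mydata, nclass=i, maxiter=3000,
              tol=1e-5, na.rm=FALSE,
              nrep=10, verbose=TRUE, calc.se=TRUE)
  if(lc$bic < min_bic){
    min_bic <- lc$bic
    LCA_best_model <- lc
  }
}
LCA_best_model
```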

You'll get the standard output of the poLCA package for the best model:

Generate a table showing fit values of multiple models
Now I want to build a table comparing various model fit values, like this:

This table was built with the following code:
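A reconstruction of that code, based on the snippet quoted in Daniel E. Weeks' comment below (with his ABIC correction applied); lc1, lc2, … are the previously fitted models:

```r
# first row of the fit table, computed from the 1-class model lc1
results <- data.frame(Modell=c("Modell 1"),
                      log_likelihood=lc1$llik,
                      df=lc1$resid.df,
                      BIC=lc1$bic,
                      ABIC=(-2*lc1$llik) + ((log((lc1$N + 2)/24)) * lc1$npar),
                      CAIC=(-2*lc1$llik) + lc1$npar * (1 + log(lc1$N)),
                      likelihood_ratio=lc1$Gsq)
# append one row per additional model (lc2, lc3, ...) the same way, e.g. via rbind()
```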

Now I calculate the entropy (a pseudo-R-squared) for each solution. I took the idea from Daniel Oberski's presentation on LCA.
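A sketch of that calculation, following the formulation from Oberski's presentation; lc stands for one fitted poLCA model:

```r
# entropy pseudo-R-squared: values near 1 mean a clear classification,
# values near 0 mean classification is no better than chance
entropy <- function(p) sum(-p * log(p))

error_prior <- entropy(lc$P)  # entropy of the class proportions
error_post <- mean(apply(lc$posterior, 1, entropy), na.rm=TRUE)
R2_entropy <- (error_prior - error_post) / error_prior
R2_entropy
```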

Elbow-Plot
Sometimes an elbow plot (or scree plot) can be used to see which solution is parsimonious and has good fit values. You can get it with this ggplot2 code I wrote:
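A sketch of that plot, assuming the results data frame built above, whose columns 4 to 7 hold BIC, ABIC, CAIC and the likelihood ratio:

```r
library(ggplot2)
library(tidyr)

# convert the fit table to long format: one row per model and fit criterion
results2 <- tidyr::gather(results, Kriterium, Guete, 4:7)

fit.plot <- ggplot(results2) +
  geom_point(aes(x=Modell, y=Guete), size=3) +
  geom_line(aes(x=Modell, y=Guete, group=1)) +
  facet_grid(Kriterium ~ ., scales="free") +  # one panel per criterion
  theme_bw(base_size=16) +
  labs(x="", y="")
fit.plot
```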


Inspect population shares of classes
If you are interested in the population shares of the classes, you can get them like this:
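For instance (lc being a fitted poLCA model):

```r
# estimated population share of each latent class, in percent
round(colMeans(lc$posterior) * 100, 2)
```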

or you inspect the estimated class memberships:
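For example:

```r
# modal class assignment per observation ...
head(lc$predclass)
# ... and the resulting shares, which can differ from colMeans(lc$posterior)
round(prop.table(table(lc$predclass)) * 100, 2)
```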

Ordering of latent classes
Latent classes are unordered, so which latent class becomes number one, two, three… is arbitrary. The latent classes are nominal, so there is no intrinsic reason for any class to come first. If you need a specific order, there is a function for manually reordering the latent classes: poLCA.reorder()
First you run an LCA model, extract the starting values, and run the model again, this time with a manually set order.
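An illustrative sketch for a 3-class model; the new order c(2, 3, 1) is chosen arbitrarily here:

```r
# fit the model once and extract the estimated probabilities as starting values
lc <- poLCA(f, mydata, nclass=3, nrep=10)
probs.start <- lc$probs.start

# make the old class 2 the new class 1, old 3 the new 2, old 1 the new 3
new.probs.start <- poLCA.reorder(probs.start, c(2, 3, 1))

# refit with the reordered starting values
lc_ordered <- poLCA(f, mydata, nclass=3, probs.start=new.probs.start)
```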

Now your classes are reordered. You can save these starting values if you want to recreate the model at any time.

Plotting

This is the poLCA standard plot of conditional probabilities, which you get if you add graphs=TRUE to the poLCA() call.

It's in a 3D style, which is not really to my taste. I found some code by dsparks on GitHub (and on this blog) that produces very appealing ggplot2 plots, and made some small adjustments.
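My adjusted version looked roughly like this (lc is the fitted model; melt() with level=2 keeps the item names, as noted in the comments below):

```r
library(reshape2)
library(ggplot2)

# long format: one row per class, item and response category
lcmodel <- reshape2::melt(lc$probs, level=2)

zp1 <- ggplot(lcmodel, aes(x=L2, y=value, fill=Var2)) +
  geom_bar(stat="identity", position="stack") +
  facet_grid(Var1 ~ .) +  # one panel per latent class
  scale_fill_brewer(type="seq", palette="Greys") +
  theme_bw() +
  labs(x="Items", y="Conditional probability", fill="Response category") +
  theme(axis.text.x=element_text(angle=90, vjust=0.5))
zp1
```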

If you want to compare the items directly:
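A variant of the same plot that facets by item instead of by class, assuming the same melted object (again a sketch, not the exact original code):

```r
library(reshape2)
library(ggplot2)

lcmodel <- reshape2::melt(lc$probs, level=2)

zp2 <- ggplot(lcmodel, aes(x=Var1, y=value, fill=Var2)) +
  geom_bar(stat="identity", position="stack") +
  facet_wrap(~ L2) +  # one panel per item
  scale_fill_brewer(type="seq", palette="Greys") +
  theme_bw() +
  labs(x="Latent class", y="Conditional probability", fill="Response category") +
  theme(axis.text.x=element_text(angle=90, vjust=0.5))
zp2
```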

54 Comments

  1. Niels

    Thank you very much for your comment! Of course you're right, the default plot is fine. It's just that I usually try to avoid 3D visualisations, because they might bias the perception of the bars' heights.

  2. Niels

    Hey Ana,
    sorry for my late reply. Yes, it's possible to fit mixture models with covariates or a mixture regression model in poLCA. In the example above I define the model like this:
    # define function
    f <- with(mydata, cbind(var1,var2,var3...)~1)

    But you can also add predictor-variables:
    f <- with(mydata, cbind(Var1,var2,var3...) ~ predictor1 + predictor2 * predictor3)

    lca_model <- poLCA(f, data=data, nclass = 4, nrep=10)

    You can find more information in the poLCA vignette (https://cran.r-project.org/web/packages/poLCA/poLCA.pdf), or in the book "Latent Variable Modeling with R" by W. Holmes Finch and Brian F. French.

    I hope this gives you a starting point =) Best regards
    Niels

    • Ana

      Thank you so much.

      I thought that I needed covLCA for what I am asking: something that affects the manifest variables but not the latent variable. I am building something across time, and I need to create a kind of group for the years of the survey (because I have samples of different sizes). But covLCA did not work. I am still looking.

      Best!

  3. Simon

    This is super helpful for a beginner. Thank you Niels!

    Btw, for those who wish to play around with this code: because the first column of results2 is named "Model", the code in the elbow plot needs to refer to "Model" (not "Modell").

    • Niels

      Hi Simon,
      thank you for your comment! You're right, I've changed it to "model" now. I'm glad you found it useful. Best, Niels

  4. dian

    Dear Simon,
    I am analyzing data with poLCA right now, and your article is really precious.
    But I have a problem when I tried to imitate the elbow plot: mine has 10 classes, and Model 10 comes right after Model 1, so the models did not appear in order in the plot (1, 10, 2, 3, 4, 5, 6, 7, 8, 9). Do you have any suggestion how to fix this problem?
    thanks a lot..

    Dian

    • Niels

      Hey Dian,
      that problem occurred because "results2$model" was of type character and not a factor. Thanks to Hadley Wickham's forcats package, we can convert results2$model to a factor whose categories are in order of appearance.

      # Order categories in order of appearance
      install.packages("forcats")
      library("forcats")
      results$model <- as_factor(results$model)
      # convert dataframe to long format
      results2 <- tidyr::gather(results, Kriterium, Guete, 4:7)
      results2

      And then you can just plot it.

  5. dian

    Wow, that helps me a lot!
    thanks again.
    By the way, is it normal to get entropy results of NaN? Entropy only appeared for model 2 and model 3 (0.991 and 0.998), but not for the other models.

    best,
    dian

  6. Niels

    I don't think that's normal.
    I tried it with the example data I just posted and get entropy results for models 2 to 10. If your code is exactly the same as mine, I think there's some data-related problem. Perhaps having a look at lc4$P (the class proportions) and lc4$posterior will help figure this out.
    best

  7. dian

    Hi Niels,
    > lc2$P
    [1] 0.8564319 0.1435681
    > lc3$P
    [1] 0.4719811 0.1431164 0.3849026
    > lc4$P
    [1] 0.36169381 0.49220317 0.04778167 0.09832134
    > lc5$P
    [1] 0.04779091 0.09832134 0.46795171 0.35875165 0.02718438
    I don't know where the problem comes from, because classes 2 and 3 got results.

    best,
    dian

  8. dian

    Sorry, one more thing:
    you used the package "rstudio" to make a copy-able LCA results table, but here in R 3.2.4 this package no longer exists. Do you have any alternative?
    thanks a lot!

    • Niels

      I just tested ztable with R and it worked for me.

      install.packages("ztable")
      library("ztable")
      ztable(yourdataframe)

  9. Daniel E. Weeks

    Thank you for sharing this helpful code.

    This line:
    lcmodel <- reshape2::melt(lc$probs)
    should be:
    lcmodel <- reshape2::melt(lc$probs, level=2)
    in order to maintain the class labels.

    And in this part:
    results <- data.frame(Modell=c("Modell 1"),
    log_likelihood=lc1$llik,
    df = lc1$resid.df,
    BIC=lc1$bic,
    ABIC= (-2*lc2$llik) + ((log((lc2$N + 2)/24)) * lc2$npar),
    CAIC = (-2*lc1$llik) + lc1$npar * (1 + log(lc1$N)),
    likelihood_ratio=lc1$Gsq)

    the ABIC value should be computed using lc1 values instead of lc2 values.

  10. Niels

    Wow! Thank you, Daniel, this error completely slipped through! I copy/pasted that line from another part of my code.
    best
    Niels

  11. Fizaine Florian

    Hello Niels,

    Thank you very much for sharing your knowledge on this blog. It helps me to save time.
    Now I am wondering whether it is possible to use both categorical (dichotomous) and continuous variables in a latent class analysis. If I understood your draft correctly, poLCA cannot do this.
    Best,
    Florian

    • Niels

      Hi Florian,
      you're right, poLCA can't use continuous data. It's only suitable for latent class analysis (observed categorical variables, unobserved categorical variable), not latent profile analysis (observed continuous variables, unobserved categorical variable). Latent profile analysis has the same aim as latent class analysis: finding unobserved segments of cases. But unlike latent class analysis, the observed values aren't categorical but continuous.
      Of course, if you want to stay with poLCA, you could cut your continuous data into categories of equal size, but that would mean a loss of information. So I think you would have to switch to mclust or another package that is capable of that.
      This paper might be a good starting point: Oberski: Mixture models: latent profile and latent class analysis

  12. Malte Hückstädt

    Hello Niels!
    Thanks for the fabulous code, very helpful for R novices :)

    One question: would it be possible to add the respective percentage shares to the ggplot plots? I've been trying for hours myself but can't get any further.
    Could you perhaps help me, building on your syntax?

    Best regards and thanks in advance,
    Malte

    • Niels

      Hi Malte!
      Thanks for the praise. Sure, the values can be displayed as well, although it can get quite cluttered. Personally, I use a supplementary table for the exact values. But to your question: I'm only on my phone right now, so I can't be precise, but "+ geom_text(aes(label=value), colour="red")" could point you in the right direction. You'll surely have to play around with vjust or hjust to get the position right. Let me know whether it worked; otherwise we'll have another look at it together. Best regards!

      • Malte Hückstädt

        Hi Niels! Many thanks for the quick reply.
        Yes, it actually worked right away! However, the labels (as you suspected) still get a bit jumbled. But it's already going very much in the right direction. Maybe I can manage the rest on my own now.

        Could you perhaps also tell me, when you get the chance, how you built the "supplementary table"? Do you have a suitable syntax I could use for orientation?

        Maybe we should switch to email from here on. I really don't want to clutter up your comment section.

        Best regards,
        Malte

        • Niels

          OK, if you are on Twitter, you could also message me there. As the table, I use the poLCA standard table of conditional probabilities (i.e. the probability that an arbitrary person of a class chose a certain response category of an item) and simply put it as a data frame into ztable::ztable(Tabelle). Normally I then copy it into Excel for formatting, since I currently work with Word.

          • Malte

            Hello Niels, one more question about one of your syntax lines.
            Through the format conversion and the selection of the "probs" (lcmodel <- reshape2::melt(lc$probs, level=2)), my number of cases shrinks by more than half here.
            Could that have to do with the aggregation into classes?

            Best regards + thanks,
            Malte
            PS: I've contacted you on Twitter. If you prefer, we can simply continue there.

  13. SIVA

    Hi Niels,

    wonderful code!!! thanks 🙂
    I want to identify from the LCA model output which observation/record belongs to which class.
    How do I do that?
    thanks again

    • Niels

      You can access this information with lc$predclass, where lc is the LCA model you estimated via poLCA().

      BUT: be aware that LCA assigns classes probabilistically. Every observation has a probability of belonging to each of the classes, which you can inspect with lc$posterior. lc$predclass simply assigns the class with the highest probability. Bonus info: that's why, if you calculate the shares of the predicted classes (prop.table(table(lc$predclass))), they will differ from the estimated population shares (colMeans(lc$posterior)*100).

      • SIVA

        thanks Niels!!!
        I appreciate your help :).
        Can you tell me how to validate an LCA model?
        I have split the original data 80:20 (train:test) and built the model on the training data.
        Which function in poLCA serves this purpose?
        Also, what would ensure that the classes formed are stable?

        • Niels

          Sorry for my late reply. In my blog post I recommend estimating each model multiple times with different starting values, so that you can be fairly sure the algorithm found the best solution. The precision of the classification can be inspected through the entropy statistic: it is near zero if the classification is no better than chance, with 1 being the opposite. Which model describes your data best depends on model fit criteria and on whether the classes make sense (interpretation). There is also k-fold cross-validation for this purpose.

          Cross-validation could be used for your problem as well. If the model you estimated with your randomly selected training data achieves a comparably good fit on the test data, the model should be appropriate there too. There is also two-fold (or k-fold) cross-validation you could have a look at. Would love to hear feedback on how you solved your problem.

  14. Chao Liu

    Hi Niels,

    Thank you for your wonderful code; it has been very helpful for me! I do have a question: in your post I assume you used observed categorical variables in the analysis, so what about unobserved variables?

    For example, I have multiple categorical outcome variables (y1, y2, y3, …, y27) and I want to group them into 6 factors (f1, f2, …, f6). I know I can create sum or average scores of these outcome variables but that will make me lose the variances. So how can poLCA handle these unobserved factors? Can you provide some codes?

    Thank you so much in advance!

    Chao

    • Niels

      Hi Chao,
      I'm not sure I understood your question completely. A latent class analysis tries to find subtypes of related cases in such a way that the assigned group explains the correlations among the observed variables. While an exploratory factor analysis tries to find variables that belong together (i.e. you have a scale of 10 variables and PCA finds 2 principal components explaining X % of the variance), LCA tries to find cases that belong together. I just remembered this website, which explains how LCA compares to other methods: http://www.john-uebersax.com/stat/faq.htm#otherm.

      I know I can create sum or average scores of these outcome variables but that will make me lose the variances. So how can poLCA handle these unobserved factors?

      Perhaps I'm not getting this right, but to me this sounds more like a factor analysis approach, where you compute average scores of all variables of one factor. In latent class analysis you have the conditional class probabilities, where each observation has a probability of belonging to each group. Perhaps you can help me understand your goal a bit better 🙂

  15. Chao Liu

    Hi Niels,

    Thank you for your response and sorry for any confusion. What I am trying to do is more like a two-step procedure: first a factor analysis, and second a latent class analysis on the factors derived from the first step. So I wonder if there is any way to take the factors derived from the factor analysis and put them into the latent class analysis. I thought about using lavaan to run a CFA first and take the resulting factors from lavaan to run a latent class analysis in poLCA, but couldn't figure out how. I hope this makes sense to you. Maybe I totally misunderstood what LCA is capable of doing, but does it have to run on manifest variables?

    Thanks again for your help!

    Chao

    • Niels

      Hi Chao,
      thanks, that cleared things up for me. First: yes, LCA runs on categorical manifest variables and tries to find the values of a categorical latent variable.
      Considering your goal, I have some ideas, but they are more or less guesses at what you could do, and I'm not sure they are methodologically adequate. OK, so you want to reduce the data through factor analysis and then run an LCA to see the structure of your cases. The problem is that factor scores are continuous and an LCA uses categorical data, so a latent profile analysis would be more adequate: in latent profile analysis the observed data is continuous and the latent variable is categorical. poLCA is only capable of LCA. But you could also use cut() and make categories from the continuous data, losing some information, of course.
      In the first step you would fit the cfa() model, then save the predicted factor scores (perhaps like this: https://groups.google.com/d/msg/lavaan/E4NPoUiKsks/5IYLv5ggAAAJ), make them categorical, and then run the LCA on them. But I'm not sure about the interpretation of the results.

  16. Chao Liu

    Hi Niels,

    Thank you for your suggestion! I will try the approach you provided. Just a quick follow-up question: my observed variables are categorical and I guess factor analysis will make them continuous in the end?

    Chao

    • Niels

      Hi, as you surely know, factor analysis normally needs continuous data. But in the social sciences it is often considered acceptable to use items with equidistant scales and at least 5 categories (Likert-type items). There are also other assumptions, like normality. Latent class analysis has the advantage of being a nonparametric method without such assumptions. TL;DR: in a way, your categorical data will lead to continuous factors, yeah 🙂
      Best,
      Niels

  17. Sofiati Dian

    Hi Niels,
    Recently I tried to do LCA in Latent GOLD, and there are several outputs, such as a p-value (to measure the difference between the model and the data when performing a goodness-of-fit test, such as the likelihood ratio), that are not produced in the poLCA output.
    Also, for example, when I perform a 2-class analysis and look at the lc2$predcell results, I see the observed vs. expected value for each combination of variables (i.e. F09_a (1), F09_b (1), F09_c (1), F27_a (1), F27_e (1), F27_e (1), F29_a (1), F29_b (1), F29_c (1)), but there is no additional information, as Latent GOLD provides, on the probability that a combination belongs to class 1 or 2. Do you know how to produce this probability value?

    Thank you very much, I learned a lot from this article
    Dian–

    • Niels

      Hi Dian, I'm afraid poLCA doesn't do likelihood ratio tests. I know Mplus and Latent GOLD have them to compare models, but poLCA lacks this function. I guess this can be done somehow, but I haven't seen it on the internet yet. Sorry.

  18. Ozzy

    Thank you Niels! Any suggestion on multilevel multinomial logistic regression models? Is there an R package that can be used to build a two-level logistic regression model?

    • Niels

      Hi Ozzy, you should check out the glmer() function from the lme4 package. It's capable of that kind of model (as long as you're not modeling any latent variables). Does this help?

  19. Hanan

    Hey Niels,

    This is really awesome!

    I did a latent class analysis, which gave the best fit for 3 classes. Now I want to use these classes and do a multinomial logistic regression. I read that I have to use the flexmix package, but I am not sure how that works.
    Do you have any tips?

    Or is the poLCA package sufficient to do a regression analysis with the 3 classes by adding the covariates?

    I want to add demographic variables to the classes, so I can see which X variable predicts membership of a certain class.

    Also, I have to compare the classes with each other. How do I do that?

    Best,
    Hanan

    • Niels

      Hi,
      here's what the vignette says about it (https://cran.r-project.org/web/packages/poLCA/poLCA.pdf):
      "The term 'latent class regression' (LCR) can have two meanings. In this package, LCR models refer to latent class models in which the probability of class membership is predicted by one or more covariates. However, in other contexts, LCR is also used to refer to regression models in which the manifest variable is partitioned into some specified number of latent classes as part of estimating the regression model. It is a way to simultaneously fit more than one regression to the data when the latent data partition is unknown. The flexmix function in package flexmix will estimate this other type of LCR model. Because of these terminology issues, the LCR models this package estimates are sometimes termed 'latent class models with covariates' or 'concomitant-variable latent class analysis,' both of which are accurate descriptions of this model."

      • Hanan

        Thank you so much for the link.

        I was thinking that instead of doing a regression, I could simply compute two-way tables summarizing the class membership probabilities per covariate category (e.g., for males and females, for educational levels, for age groups).

        Do you have an example of how to do that?

  20. Ghanendra

    Very detailed explanation.
    Just wanted to know where exactly we assign the classes to the individual observations (response level).

  21. Ozzy

    Hi Niels,
    Any suggestion on how to compute variable specific entropy contribution in order to identify relatively more informative indicators in forming latent classes?
    Thank you!

    • Niels

      Mhm, interesting… do you have a source where you read about this analysis step? You could compare the entropy of two models, where one model omits a variable. But my approach is mostly to have a latent class model that makes sense from a theoretical point of view, and criteria like the maximization of entropy are of less importance to me. This seems to be different in your case, but I'm afraid I'm not of use here.
