Can I have your email address? I am very interested in your code and would like to collaborate with you on a paper. I am a clinician interested in using poLCA to classify my heterogeneous patient population, and I found your code very helpful.

I am looking forward to your positive response.

I think there is little critical reflection on that.

Best

gio

1.) Of course it's a legitimate answer to choose "don't know" in a questionnaire, and ideally you should use this information in your analysis. In my PhD thesis I did a latent class analysis precisely to find out whether there is a class of persons with an information deficit on a certain topic. But in a confirmatory factor analysis you can't use the "don't know" category. So what do you do? Leave the whole case out of your analysis (listwise deletion)? As I write in my blog post, if you consider the smaller sample size caused by deleting cases and the possible bias this step introduces into your data, MI is the better way.
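As a minimal sketch of that comparison (hypothetical data and variable names, and it assumes the mice package is installed):

```r
# Hypothetical example: listwise deletion vs. multiple imputation
# with mice. The data frame and variables are made up.
library(mice)

set.seed(123)
df <- data.frame(
  x1 = sample(1:5, 200, replace = TRUE),
  x2 = sample(1:5, 200, replace = TRUE)
)
df$x1[sample(200, 30)] <- NA  # simulate "don't know" recoded as missing

# Listwise deletion shrinks the sample:
nrow(na.omit(df))  # 170 complete cases instead of 200

# Multiple imputation keeps all 200 cases:
imp <- mice(df, m = 5, printFlag = FALSE)
fit <- with(imp, lm(x2 ~ x1))
pool(fit)  # estimates pooled across the 5 imputed datasets
```

The pooled estimates use the full sample, whereas `na.omit()` silently drops 15% of the cases here.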

2.) Also, we shouldn't forget that we don't impute the data with the goal of estimating and analysing individual answers (which aren't there), but to get more adequate results at the global level, meaning the sample/population as a whole.

3.) I'm not sure I get the part about the transparency of imputation models. With mice() in R you can define an imputation model for every single variable in your dataset, including the algorithm and the auxiliary variables used. Afterwards you can inspect which values were imputed. Or do you mean the part where some variance is introduced across the different imputations so our data doesn't lose variance?
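To illustrate that transparency, a short sketch with the nhanes example data that ships with mice (the choice to drop one predictor is purely illustrative):

```r
# Sketch: mice() exposes its imputation models explicitly.
library(mice)

data(nhanes)  # small example dataset included in the mice package

# One imputation method per variable ("pmm" = predictive mean
# matching; "" = variable has no missings and is not imputed):
meth <- make.method(nhanes)
meth["bmi"] <- "pmm"

# The predictor matrix controls which auxiliary variables feed
# each variable's imputation model (1 = used as predictor):
pred <- make.predictorMatrix(nhanes)
pred["bmi", "hyp"] <- 0  # e.g. drop hyp as a predictor for bmi

imp <- mice(nhanes, method = meth, predictorMatrix = pred,
            m = 5, seed = 1, printFlag = FALSE)

imp$method    # which algorithm was used for each variable
imp$imp$bmi   # the actual imputed values for bmi, one column per imputation
```

So both the model specification (method and predictors) and the resulting imputed values are open to inspection.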

Thank you for your comment! It's always interesting to be a little reflective about methods.

Best

Niels

You are suggesting that multiple imputation is the gold standard. But how can you be so sure about that? Lall's results could also be interpreted the other way around. I would rather say that MI is currently a dominant instrument in political and social science. MI may make sense for specific data (e.g. panel data), but I do not think it is a trustworthy method, for two reasons: first, the implicit assumption that every individual needs to have an opinion, and second, the widespread lack of transparency of imputation models.

Best,

gio

Any suggestions on how to compute variable-specific entropy contributions, in order to identify the relatively more informative indicators in forming the latent classes?

Thank you!

What does range = c(2, 5) in scale_nodes represent? I ran six models with one to six classes and am wondering what sort of changes I need to make in the second part.

Thank you!

Error in lavaanify(model = FLAT, constraints = constraints, varTable = lavdata@ov, :
  lavaan ERROR: wrong number of arguments in modifier () of element ablehn=~fl_sozial

I think the problem is still that it doesn't recognize that it should create a grouped lavaan object.

I think I have now found a way to do measurement invariance testing with lavaan.survey and multiple imputation: I just pass my original dataset with no imputed values to the first model (lavaan_model2) and then use that one in my lavaan.survey() call. I get results for a fully imputed dataset plus the equality constraints I specified.

lavaan_model2 <- cfa(model,
                     meanstructure = TRUE,
                     group = "kontakt_flu",
                     data = allbi_red_mi) # original data

fit_a2 <- lavaan.survey(lavaan_model2, svy.df_imp_db2)

summary(fit_a2, standardized = TRUE,
        fit.measures = TRUE, rsq = TRUE)