I think there is little critical reflection on that.

Best

gio

1.) Of course it's a legitimate answer to choose "don't know" in a questionnaire, and ideally you should use this information in your analysis. In my PhD thesis I ran a latent class analysis so I could find out whether there is a class of persons with an information deficit on a certain topic. But in a confirmatory factor analysis you can't use the "don't know" category. So what do you do? Leave the whole case out of your analysis (listwise deletion)? As I write in my blog post, if you consider the smaller sample size caused by deleting cases and the possible bias this step introduces into your data, MI is the better way.
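To make the comparison concrete, here is a minimal sketch of listwise deletion versus multiple imputation with mice(); `dat` and the variable names `y`, `x1`, `x2` are hypothetical placeholders, not from the post.

```r
# Sketch: listwise deletion vs. multiple imputation (placeholder data 'dat')
library(mice)

# listwise deletion: every case with any missing value is dropped
fit_lw <- lm(y ~ x1 + x2, data = na.omit(dat))

# multiple imputation: m = 5 completed datasets
imp <- mice(dat, m = 5, seed = 1234)

# fit the model in each imputed dataset, then pool with Rubin's rules
fit_mi <- with(imp, lm(y ~ x1 + x2))
summary(pool(fit_mi))
```

The pooled estimates use all cases, so you keep the sample size and (if the missingness mechanism allows) reduce the bias that listwise deletion can introduce.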

2.) Also, we shouldn't forget that we don't impute the data with the goal of estimating and analysing individual answers (which aren't there), but to get more adequate results at the global level, i.e. for the sample/population as a whole.

3.) I'm not sure I get the part about the transparency of imputation models. With mice() in R you can define an imputation model for every single variable in your dataset, including the algorithm and the auxiliary variables used. Afterwards you can inspect which values were imputed. Or do you mean the part where some variance is introduced across the different imputations so our data doesn't lose variance?
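A short sketch of how that per-variable control looks in mice(); the dataset `dat` and the variable names `income` and `id` are hypothetical placeholders.

```r
# Sketch: making the imputation model explicit in mice() (placeholder names)
library(mice)

meth <- make.method(dat)
meth["income"] <- "pmm"        # predictive mean matching for this variable
pred <- make.predictorMatrix(dat)
pred["income", "id"] <- 0      # drop 'id' as a predictor for 'income'

imp <- mice(dat, method = meth, predictorMatrix = pred, m = 5, seed = 1)

imp$method           # which algorithm was used per variable
imp$predictorMatrix  # which auxiliary variables entered each model
imp$imp$income       # the actual imputed values for 'income'
```

Everything that went into each variable's imputation model is stored in the `mids` object, so the procedure can be reported and reproduced.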

Thank you for your comment! It's always interesting to reflect a little on methods.

Best

Niels

you are suggesting that multiple imputation is the gold standard. But how can you be so sure about that? Lall's results could also be interpreted the other way around. I would rather say that in political and social science MI is the dominant instrument at the moment. MI may make sense for specific data (e.g. panel data), but I do not think it is a trustworthy method, for two reasons: first, the implicit assumption that every individual needs to have an opinion, and second, the widely missing transparency of imputation models.

Best,

gio

Any suggestion on how to compute the variable-specific entropy contribution in order to identify the relatively more informative indicators in forming latent classes?

Thank you!

What does range = c(2, 5) in scale_nodes represent? I ran six models with one to six classes and am wondering what sort of changes I need to make in the second part.

Thank you!

Error in lavaanify(model = FLAT, constraints = constraints, varTable = lavdata@ov, :

lavaan ERROR: wrong number of arguments in modifier () of element ablehn=~fl_sozial

I think the problem is still that it doesn't recognize that it should create a grouped lavaan object.

I think I have now found a way to do measurement invariance testing with lavaan.survey and multiple imputation. I just pass my original dataset with no imputed values to the first model (lavaan_model2) and then use that one for my lavaan.survey command. I get results for a fully imputed dataset plus the equality constraints I specified.

lavaan_model2 <- cfa(model,
                     meanstructure = TRUE,
                     group = "kontakt_flu",
                     data = allbi_red_mi) # original data

fit_a2 <- lavaan.survey(lavaan_model2, svy.df_imp_db2)

summary(fit_a2, standardized = TRUE,
        fit.measures = TRUE, rsq = TRUE)

#create model
model <- '# measurement model 1
          ablehn =~ 1*fl_sozial + fl_sicher + fl_zusamm + fl_nach'

#fit model (note: cfa() also needs a data argument with the raw dataset)
lavaan_model <- cfa(model,
                    meanstructure = TRUE,
                    group = "kontakt_flu",
                    group.equal = c("loadings", "intercepts"),
                    data = mydata) # placeholder for your raw data
out <- lavaan.survey(lavaan_model, svy.df_imp)

#inspect model
summary(out, fit.measures = TRUE, standardized = TRUE, rsquare = TRUE)

So you have to do it by hand. To compare means, you need scalar invariance between groups. That's a very strict assumption. You test a sequence of models, and in each step the models are restricted to be more and more invariant.

# 1.) configural invariance model

# 2.) weak invariance model

# 3.) strong invariance model

# 4.) scalar invariance model
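One common way to set up this cumulative sequence is via lavaan's group.equal argument; a sketch, where `model`, `mydata`, and the grouping variable are placeholders, and the final, most restrictive step is written here as residual ("strict") invariance since textbooks label the later steps differently.

```r
# Sketch: cumulative invariance sequence with lavaan (placeholder names)
library(lavaan)

fit_configural <- cfa(model, data = mydata, group = "kontakt_flu")
fit_weak       <- cfa(model, data = mydata, group = "kontakt_flu",
                      group.equal = "loadings")
fit_strong     <- cfa(model, data = mydata, group = "kontakt_flu",
                      group.equal = c("loadings", "intercepts"))
fit_strict     <- cfa(model, data = mydata, group = "kontakt_flu",
                      group.equal = c("loadings", "intercepts", "residuals"))
```

Each model is then passed through lavaan.survey() before comparing the steps, as in the test below.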

It's possible to do a statistical test for each step:

#example: test the configural invariance model (1) against the weak invariance model (2)
fit.configural <- fitMeasures(survey.fit_configural, c("chisq", "df", "pvalue", "cfi", "rmsea"))
fit.weak <- fitMeasures(survey.fit_weak, c("chisq", "df", "pvalue", "cfi", "rmsea"))
lavTestLRT(survey.fit_configural, survey.fit_weak) # if not significant ==> passed

For invariant loadings, you give a loading the same label in both groups, which constrains it to be equal (labels that differ across groups, like c(a21, a22), leave it free):

latent1 =~ c(1,1)*item1 + c(a2,a2)*item2 + c(a3,a3)*item3

lavInspect(object, what = "fit")

but I was still unable to get standard errors and z-values; it just shows me the parameter estimates.
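If lavInspect() only returns point estimates, the standard errors and z-values can be pulled from the fitted object directly; a sketch, where `fit` is a placeholder for your fitted lavaan object:

```r
# Full parameter table including se, z, and p-values (placeholder object 'fit')
pe <- parameterEstimates(fit, standardized = TRUE)
pe[, c("lhs", "op", "rhs", "est", "se", "z", "pvalue")]

# or simply:
summary(fit, standardized = TRUE)
```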
