Here is the template I have been working on to fit EFA models to categorical items. Besides the packages loaded at the beginning, it uses the Num2Fac function.
#loading packages used in the script
library(foreign)    #read.spss
library(polycor)    #hetcor
library(nFactors)   #nScree
library(psych)      #VSS, factor.pa
##import your data: this example is for an SPSS file
##remember to change data.sav to your path/filename
tt <- read.spss("data.sav", use.value.labels=TRUE, to.data.frame=TRUE)
#You can get the Num2Fac function in a previous post:
#save it as an R file and source() it, or run it, to use it
#creating the data frame with items as categorical variables ("factors")
tt.f <- Num2Fac(tt)
#using the hetcor function to get the polychoric correlations
#this will take a while
tt.polyf <- hetcor(tt.f)
tt.poly <- as.data.frame(tt.polyf$correlations)
#scree plot tests (note that parallel analysis needs a proper simulation to run)
tt.scree <- nScree(eig=NULL, x=tt.poly, aparallel=NULL,
                   cor=TRUE, model="factors", criteria=NULL)
#you can also use the Very Simple Structure method to get number of
#factors to retain
VSS(tt.poly, n=6, rotate="oblimin", diagonal=FALSE,
fm="pa", n.obs=426, plot=TRUE, title="VSS")
#fitting a one-factor solution (principal axis, oblimin rotation)
#note: in recent versions of psych, fa(tt.poly, nfactors=1, fm="pa", rotate="oblimin")
#replaces the deprecated factor.pa
tt.1f <- factor.pa(tt.poly, nfactors=1, n.obs=755, rotate="oblimin")
Sometimes it is necessary to declare variables as factors, for example when they are items on a scale that need to be treated as ordinal so that polychoric correlations can be estimated. This function takes a data frame of such variables/items as its argument and declares them as “factors” (categorical variables with levels).
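The function itself lives in that previous post; here is a minimal sketch of what Num2Fac could look like, on the assumption that it simply coerces every column of the data frame to a factor:

```r
#hypothetical sketch of Num2Fac (my assumption of what it does):
#coerce every column of a data frame to a factor
Num2Fac <- function(df) {
  df[] <- lapply(df, as.factor)
  df
}
```

Using `df[] <- lapply(...)` keeps the result a data frame rather than collapsing it to a list.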
“If a test does contain organically related items, it is often possible to arrange these in mutually unrelated groups and to treat each group as a single item for purposes of item analysis” (p. 327).
According to L&N, items on a test should be related to the total score, and they should behave the same regardless of the group being tested. If the sample I am using is varied enough and the items keep eliciting consistent responses, then the items are doing their job.
In terms of classical test theory, L&N state that item difficulty is given by the average score on each item in the sample. Item difficulty, then, is the mean item score, but this concept is only useful in relation to the total score. That is, if an item has a very low mean but the test in general also has a very low mean, the item is still representative of the test and is not an extreme item. In that case the real issue is probably a very difficult test, which may affect the discrimination of the test, and thus of the items.
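The point about judging item means against the test mean can be sketched in R with made-up dichotomous data (the numbers here are illustrative, not from any real scale):

```r
#made-up dichotomous responses: 100 examinees, 5 items
set.seed(123)
items <- as.data.frame(matrix(rbinom(100 * 5, 1, 0.3), ncol = 5))
difficulty <- colMeans(items)   #item difficulty = mean item score
round(difficulty, 2)
mean(rowSums(items))            #test mean: the reference point for judging items
```

An item mean of, say, 0.25 looks extreme in isolation, but not if the whole test averages around 1.5 out of 5.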
L&N, after describing item characteristics, give a very good account (though at points one that today’s psychology readers may find “technical”) of point-biserial, biserial, and tetrachoric correlations, making L&N a good reference for these procedures. Biserial correlations, as well as tetrachoric ones, were developed by Karl Pearson; thus, calling the correlation between two continuous variables the “Pearson correlation” is something of a misnomer. Point-biserial correlations are product-moment correlations between a continuous variable and a dichotomous one. In test development, they are used to assess the relationship between item score and full test score (sometimes excluding the item in question from the total).
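Because the point biserial is just a product-moment correlation, it can be computed with plain `cor()`. A sketch with made-up item data, including the corrected version that excludes the item from the total:

```r
#made-up data: one dichotomous item plus nine others forming the rest of the test
set.seed(42)
item  <- rbinom(200, 1, 0.5)                                  #dichotomous item (0/1)
rest  <- rowSums(matrix(rbinom(200 * 9, 1, 0.5), ncol = 9))   #score on remaining items
total <- rest + item                                          #full test score
cor(item, total)   #point biserial: item vs. total score
cor(item, rest)    #corrected item-total: item excluded from the total
```

The uncorrected value is inflated because the item is part of its own criterion, which is why the corrected version is often preferred.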
The idea behind tetrachoric correlations, as well as behind the dichotomous part of biserial and point-biserial correlations, is that the dichotomy of the variable(s) in question reflects a latent continuous response. For tetrachoric correlations, the assumption is that the patterns of ones and zeros of the two variables are nothing but the reflection of an underlying bivariate normal distribution. The “challenge” is to estimate the product-moment correlation representing that bivariate relationship from the observed frequencies of the two dichotomous indicators. I mostly glossed over the equations and proofs: with today’s computer packages that estimate tetrachoric correlations, there is little point in deciphering the formulas and hunting for the appropriate tables, unless you are a statistician/mathematician rather than a simple “user” of stats.
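One of those packages is polycor, already used above via hetcor; its polychor function accepts a 2x2 contingency table and returns the tetrachoric correlation. A sketch with a made-up table:

```r
library(polycor)
#made-up 2x2 frequency table of two dichotomous indicators
#rows = variable 1 (0/1), columns = variable 2 (0/1)
tab <- matrix(c(40, 10, 15, 35), nrow = 2)
polychor(tab)   #tetrachoric correlation estimated from the four frequencies
```

Since most cases fall on the diagonal cells, the estimated latent correlation comes out positive and fairly strong.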