What is the future of psychometrics?


The present work deals with the evaluation of structure-finding methods, which summarize the items of psychological questionnaires into homogeneous groups of similar items. An essential difference between the methods used for this purpose is whether they assume an underlying measurement model or merely aim for the most useful possible grouping of items. On the one hand, there is model-based factor analysis (FA), which rests on the factor model. Its mathematical approach is similar to principal component analysis (PCA), but FA additionally assumes that responses to the items are causally explained by underlying factors plus an item-specific residual term, and that each item's residual is completely uncorrelated with all other items. One method that makes no model assumptions is cluster analysis (CA): objects are simply grouped together when they are more similar to each other than to others on a given criterion.

Just as methods can be distinguished by whether or not they assume an underlying model, the same distinction can be made when evaluating methods. An evaluation technique that assumes a model is Monte Carlo simulation. A technique that need not be based on a model is resampling, in which samples are drawn from a real data set and the behavior of the method in these samples is examined. In the first study, such a resampling approach, which we call Real World Simulation, was used. It is intended to address the existing problem of the lack of validity of Monte Carlo studies on FA. A Real World Simulation was carried out on two large data sets, and the parameter estimates obtained from the real data were then used as model parameters for a Monte Carlo simulation. In this way, it can be tested how the specific characteristics of a data set, as well as controlled changes to them, influence the performance of the methods.
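The evaluation logic just described, estimating model parameters from a real data set and then using those estimates as the basis for a Monte Carlo simulation, can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis procedure: the "real" data set here is synthetic, and the loadings are estimated with a crude eigendecomposition stand-in rather than a proper FA fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stand-in "real" data set: n persons x p items (hypothetical) ---
# Generated from a factor model: factors + item-specific residuals.
n, p, k = 500, 6, 2
true_load = np.array([[.7, 0], [.6, 0], [.8, 0],
                      [0, .7], [0, .6], [0, .8]])
factors = rng.standard_normal((n, k))
unique = rng.standard_normal((n, p)) * np.sqrt(1 - (true_load ** 2).sum(1))
real_data = factors @ true_load.T + unique

# --- Step 1: resampling: draw bootstrap samples from the real data ---
def bootstrap(data, rng):
    idx = rng.integers(0, len(data), len(data))
    return data[idx]

# --- Step 2: estimate model parameters from the real data ---
# (here: the first k principal axes as a crude loading estimate)
R = np.corrcoef(real_data, rowvar=False)
vals, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
est_load = vecs[:, -k:] * np.sqrt(vals[-k:])

# --- Step 3: Monte Carlo: simulate from the estimated factor model ---
def simulate(load, n, rng):
    f = rng.standard_normal((n, load.shape[1]))
    uniq_var = np.clip(1 - (load ** 2).sum(1), 0.05, None)
    e = rng.standard_normal((n, load.shape[0])) * np.sqrt(uniq_var)
    return f @ load.T + e

boot = bootstrap(real_data, rng)   # nonparametric resample of real data
sim = simulate(est_load, n, rng)   # parametric simulation from estimates
```

A structure-finding method would then be run on both `boot` and `sim`, so that its behavior under the real data characteristics can be compared with its behavior under the fitted model.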
The results suggest that the outcomes of simulation studies always depend strongly on the particular specification of the model and its violations, so that no general statements can be made; analyzing real data is therefore important for understanding how different methods perform. In the second study, a new k-means clustering approach for clustering items was tested with the help of this new evaluation technique. The two proposed methods are k-means with a scaled distance measure (k-means SDM) and k-means cor. The analyses showed that the new methods are better suited than exploratory factor analysis (EFA) for assigning items to constructs; only in determining the number of underlying constructs were the EFA methods equally good. For this reason, a combination of the two approaches is proposed. A great advantage of the new methods is that they avoid the factor score indeterminacy of EFA, since the persons' cluster scores on the clusters are uniquely determined.

At the end of the thesis, the different evaluation and validation techniques for model-based and non-model-based procedures are discussed. For the future, it is proposed to evaluate the new k-means CA procedures for clustering items using Real World Simulations and validation of the cluster scores against external criteria.
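To make the idea of clustering items with k-means concrete, the following sketch represents each item by its row of the item intercorrelation matrix, groups items with a plain k-means (Lloyd's algorithm with random restarts), and then computes cluster scores as each person's mean over the items of a cluster, which is what makes the scores uniquely determined. This is an illustrative sketch, not the k-means SDM or k-means cor implementation from the study; the data set and the correlation-profile representation are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical questionnaire data: two blocks of correlated items
n = 400
f1, f2 = rng.standard_normal((2, n))
noise = lambda: rng.standard_normal(n) * 0.6
items = np.column_stack([
    f1 + noise(), f1 + noise(), f1 + noise(),   # items 0-2: construct A
    f2 + noise(), f2 + noise(), f2 + noise(),   # items 3-5: construct B
])

# Represent each item by its correlation profile with all items
R = np.corrcoef(items, rowvar=False)

def kmeans(X, k, rng, restarts=10, iters=50):
    """Plain Lloyd's algorithm; keeps the restart with lowest inertia."""
    best_labels, best_inertia = None, np.inf
    for _ in range(restarts):
        centers = X[rng.choice(len(X), k, replace=False)].copy()
        for _ in range(iters):
            d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
            labels = d.argmin(1)
            for j in range(k):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(0)
        inertia = d.min(1).sum()
        if inertia < best_inertia:
            best_labels, best_inertia = labels, inertia
    return best_labels

labels = kmeans(R, 2, rng)

# Cluster scores: a person's mean over the items of each cluster.
# Unlike factor scores, these are fully determined by the data.
scores = np.column_stack(
    [items[:, labels == j].mean(1) for j in np.unique(labels)]
)
```

Because the cluster score is just an observable mean of assigned items, two researchers with the same item assignment always obtain identical person scores, whereas factor scores in EFA are only determined up to the indeterminacy of the factor model.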