Factor analysis by the method of principal components

A method for grouping several indicators (i.e., observable variables) into one or several integral (latent) components, or factors. The indicators must be measured at the interval (or pseudo-interval) level or higher.

Required level of data-analysis experience: beginner.

Familiarity with descriptive statistics, bivariate correlation, and linear regression is desirable.

Brief description of the method
Principal component analysis
Each video is shorter than 3 minutes.

Video 1 (Rus. + Eng. subtitles).

Factor analysis (FA) by the principal component analysis (PCA) method makes it possible to identify several latent variables (components, factors) that stand "behind" a large number of observable variables (indicators). The main limitations of the method are the (pseudo-)interval measurement level of the variables, the assumption of linear relationships both among the variables and between the variables and the components, and the need for rotation.

Video 2 (Rus. + Eng. subtitles).

Launching the FA. First, Bartlett's test and the KMO measure show whether the correlation matrix is suitable for searching for bundles of pairwise linear correlations (that is, whether FA is applicable in the case at hand). If the KMO value is greater than 0.5 and the null hypothesis of Bartlett's test of sphericity is rejected (sig. is less than the conventional significance level), FA is applicable. The communalities table shows what percentage of each indicator's variance is explained by the component model. By default, the number of components is determined by the Kaiser criterion: only components whose eigenvalue is greater than one are retained, that is, components that "explain" more than one abstract indicator's worth of variance.
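
The videos demonstrate these checks in SPSS; for readers who prefer code, here is a minimal sketch using Python's factor_analyzer package (the package choice and the DataFrame `df` of indicators are my assumptions, not part of the videos):

```python
# A minimal sketch, assuming the indicators are the columns of a pandas
# DataFrame `df` with no missing values.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

chi2, p = calculate_bartlett_sphericity(df)  # H0: correlation matrix is an identity matrix
_, kmo_total = calculate_kmo(df)
print(f"Bartlett: chi2 = {chi2:.1f}, sig. = {p:.4f}; KMO = {kmo_total:.3f}")

# Unrotated principal-component solution, just to inspect the eigenvalues.
fa = FactorAnalyzer(n_factors=df.shape[1], rotation=None, method='principal')
fa.fit(df)
eigenvalues, _ = fa.get_eigenvalues()
n_components = int((eigenvalues > 1).sum())  # Kaiser criterion
print("eigenvalues:", eigenvalues.round(2), "-> keep", n_components, "components")

# Refit with the retained components and inspect the communalities.
fa = FactorAnalyzer(n_factors=n_components, rotation=None, method='principal')
fa.fit(df)
print(pd.Series(fa.get_communalities(), index=df.columns, name="communality"))
```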

Video 3 (Rus. + Eng. subtitles).

A component loadings matrix shows by how much a component's (factor's) value should be multiplied to reproduce the value of an initial observable variable (indicator). When interpreting a component loadings matrix, one looks through each row of the matrix for the maximum absolute value; this value indicates which component the indicator in question correlates with most strongly. To facilitate the initial interpretation, it is useful to apply a rotation, either orthogonal (rectangular) or oblique. An orthogonal rotation assumes the absence of correlation between any pair of components; an oblique rotation assumes its presence. If a researcher has no theoretical framework or assumptions about how the components correlate, the rotation may be chosen based on the nature of the available data (for example, by analyzing the correlation matrix).
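
A small sketch of that row-wise search, continuing the previous example (`fa` and `df` as above):

```python
# For each indicator (row of the loadings matrix), find the component with
# the maximum absolute loading.
import pandas as pd

loadings = pd.DataFrame(
    fa.loadings_,
    index=df.columns,
    columns=[f"C{i + 1}" for i in range(fa.loadings_.shape[1])],
)
print(loadings.round(2))
print(loadings.abs().idxmax(axis=1))  # dominant component per indicator
```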

Video 4 (Rus. + Eng. subtitles).

To select the rotation based on a correlation matrix, it is necessary to understand whether significant correlations exist only within the groups of observable variables (within each component) or also between the groups (between the components).
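
As an illustration, here is a sketch of that within-group versus between-group comparison; the assignment of columns to `groups` is a hypothetical placeholder, not taken from the videos:

```python
# Compare mean absolute correlations within and between hypothesized
# indicator groups. The column names in `groups` are hypothetical.
import numpy as np

corr = df.corr().abs()
groups = {"C1": ["v1", "v2", "v3"], "C2": ["v4", "v5", "v6"]}  # assumption

within = np.mean([
    corr.loc[cols, cols].values[np.triu_indices(len(cols), k=1)].mean()
    for cols in groups.values()
])
between = corr.loc[groups["C1"], groups["C2"]].values.mean()
print(f"mean |r| within groups: {within:.2f}, between groups: {between:.2f}")
```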

Video 5 (Rus. + Eng. subtitles).

If there is a close correlation between variables belonging to different components, there are grounds to assume that the components correlate. (The threshold for what counts as a close correlation is set by the researcher.) In that case, it is correct to apply an oblique rotation. In SPSS, two types of oblique rotation are implemented: Direct Oblimin and Promax.

Video 6 (Rus. + Eng. subtitles).

The Direct Oblimin rotation is controlled by the Delta parameter, which varies from -9999 to 0.8 (corresponding to the minimum and maximum possible correlation of the components, respectively). The Promax rotation starts from a Varimax solution and sharpens its contrasts; the Varimax rotation maximizes the contrast among loadings within the columns of the loadings matrix (the components), the Quartimax rotation within its rows (the indicators), and the Equamax rotation combines both criteria. The Promax rotation is regulated by the Kappa parameter, which varies from 1 to 9999 (corresponding to the minimum and maximum possible correlation of the components, respectively).
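
A sketch of these rotations in factor_analyzer, continuing the earlier example; note the naming differences from SPSS, to the best of my reading of the package: its `gamma` plays the role of SPSS's Delta (Oblimin), and its `power` plays the role of SPSS's Kappa (Promax):

```python
# Varimax (orthogonal), Direct Oblimin, and Promax rotations.
from factor_analyzer import FactorAnalyzer

fa_varimax = FactorAnalyzer(n_factors=n_components, rotation='varimax',
                            method='principal').fit(df)
fa_oblimin = FactorAnalyzer(n_factors=n_components, rotation='oblimin',
                            method='principal',
                            rotation_kwargs={'gamma': 0}).fit(df)   # gamma ~ SPSS Delta
fa_promax = FactorAnalyzer(n_factors=n_components, rotation='promax',
                           method='principal',
                           rotation_kwargs={'power': 4}).fit(df)    # power ~ SPSS Kappa
```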

Video 7 (Rus. + Eng. subtitles).

In the case at hand, indicators belonging to different components correlate at 0.2 on average; under the Promax rotation, this corresponds to a Kappa of roughly 15-25. Since FA is often an intermediate stage of an analysis, and the components themselves are then used as axes for a scatterplot of the respondents, it is more appropriate to apply an orthogonal rotation (perpendicular axes make visualization easier). For this reason, the Varimax rotation is applied in the case at hand.

Video 8 (Rus. + Eng. subtitles).

In the present case, whether an oblique or an orthogonal rotation is used, the component structures are similar to one another. If many component loadings matrices have to be compared and interpreted before choosing a model, then, to simplify this procedure, I propose applying the contrast estimate. The contrast estimate equals the sum (or the average, if the matrices have different numbers of rows) over the rows of the difference between the sum of the absolute loadings in the row and the maximum absolute loading in the row. The smaller the contrast estimate, the more contrasting the matrix (and the easier the interpretation). The idea behind the contrast estimate is this: if the loadings in a row are contrasting, the maximum absolute loading in the row barely differs from the sum of the absolute loadings in the row (ideally, the entire loading falls on one component).
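
A minimal sketch of the contrast estimate as defined above (`fa_varimax` and `fa_promax` come from the earlier rotation sketch):

```python
# Contrast estimate: per row, the sum of absolute loadings minus the maximum
# absolute loading; then the sum (or, for matrices with different numbers of
# rows, the average) over rows. Smaller values = a more contrasting matrix.
import numpy as np

def contrast_estimate(loadings, average=False):
    abs_l = np.abs(np.asarray(loadings))
    per_row = abs_l.sum(axis=1) - abs_l.max(axis=1)
    return per_row.mean() if average else per_row.sum()

print("varimax:", round(contrast_estimate(fa_varimax.loadings_), 3))
print("promax: ", round(contrast_estimate(fa_promax.loadings_), 3))
```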

Video 9 (Rus. + Eng. subtitles).

In the case at hand, only one variable pertains to the last component. Therefore, including this variable in the component model is fraught with loss of information (the share of "lost information" for this variable equals one minus its communality from the communalities table). If a significant amount of information is lost, it is better to exclude the variable from the model and work with the primary variable alongside the obtained components, standardizing this variable first. Negative standardized values correspond to low primary values, while positive standardized values correspond to high primary values (taking the direction of coding into account).
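
A small sketch of both computations; the name of the excluded column is a hypothetical placeholder:

```python
# Share of "lost information" per indicator: 1 minus its communality
# (continuing the earlier sketch, where `fa` is the fitted model).
import pandas as pd

lost = 1 - pd.Series(fa.get_communalities(), index=df.columns)
print(lost.round(2))

# z-standardize an excluded variable before using it alongside the components.
excluded = "v6"  # hypothetical name of the dropped indicator
z = (df[excluded] - df[excluded].mean()) / df[excluded].std()
```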

Video 10 (Rus. + Eng. subtitles).

When saving components as variables in a database, the Anderson-Rubin method matches an orthogonal rotation, while the regression method matches an oblique rotation. An interpretation of the components is given.
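
In SPSS this is a save option in the FA dialog; in factor_analyzer, as far as I know, transform() yields regression-type scores only, and Anderson-Rubin scores would have to be computed separately. A sketch under that assumption:

```python
# Saving component scores as new variables (regression-type scores).
import pandas as pd

scores = pd.DataFrame(
    fa_varimax.transform(df),
    columns=[f"C{i + 1}_score" for i in range(fa_varimax.loadings_.shape[1])],
    index=df.index,
)
df_scored = df.join(scores)
```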

Video 11 (Rus. + Eng. subtitles).

A continuation of the components' interpretation. Loadings of different signs within one component indicate that the corresponding variables form different "poles" of the component.

Comments can be left on YouTube directly.

Principal component analysis. Selecting the number of components
Each video is shorter than 3 minutes.

Video 1 (Eng.).

If a component model is constructed out of all six indicators, two indicators' communalities fall below the empirical threshold of 0.5, meaning that these indicators are underexplained by the component model. Hence, I try excluding the most underexplained indicator, "The center's image is important," from the component model. After this exclusion, the indicator "A shopping center nearby the center is important" remains underexplained, so I exclude it as well. After the two indicators are excluded, the remaining four indicators comprise a fair two-component model.
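
A minimal sketch of this backward-exclusion loop, assuming the six indicators are the columns of a DataFrame `df` and a two-component model, as in the video:

```python
# Repeatedly refit the two-component model and drop the single most
# underexplained indicator while any communality is below 0.5.
import pandas as pd
from factor_analyzer import FactorAnalyzer

cols = list(df.columns)
while len(cols) > 2:
    fa = FactorAnalyzer(n_factors=2, rotation='varimax', method='principal')
    fa.fit(df[cols])
    communalities = pd.Series(fa.get_communalities(), index=cols)
    if communalities.min() >= 0.5:
        break
    worst = communalities.idxmin()
    print("excluding:", worst, round(communalities[worst], 2))
    cols.remove(worst)
```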

Video 2 (Eng.).

Launching the FA. As in the previous series: Bartlett's test and the KMO measure show whether the correlation matrix is suitable for FA; if the KMO value is greater than 0.5 and the null hypothesis of Bartlett's test of sphericity is rejected, FA is applicable. The communalities table shows what percentage of each indicator's variance is explained by the component model, and the number of components is determined by the Kaiser criterion by default.


Comments can be left on YouTube directly.

Principal component analysis. Genetic correlation
Each video is shorter than 3 minutes.

Video 1 (Eng.).

In this series of videos, I introduce a new view on deciding whether components are correlated. Regarding the two-component model constructed in the previous series of videos, the question arises whether these components are correlated or not. By default, the principal component method relies on uncorrelated components. Applying the Varimax rotation (or another orthogonal one) implies that the components are not correlated; applying the Promax rotation (or another oblique one) loosens this limitation. To what extent may one loosen it? It is useful to introduce a new term: "genetic correlation." If there is a correlation between components in the data (a genetic correlation), it is better to identify and measure it. There are a short way and a long way to identify a genetic correlation. Going the short way, one may apply the Promax rotation, setting the Kappa value to 4; from my experience, this value is suited to revealing a genetic correlation. When launching the Promax rotation, keep the maximum number of iterations in mind.
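
A sketch of the "short way," continuing the previous series' example (`cols` holds the four retained indicators; factor_analyzer calls the Promax exponent `power`, and, for oblique rotations, exposes the component correlation matrix as `phi_`):

```python
# Promax rotation with Kappa = 4, then the component correlation matrix.
from factor_analyzer import FactorAnalyzer

fa_genetic = FactorAnalyzer(n_factors=2, rotation='promax', method='principal',
                            rotation_kwargs={'power': 4})
fa_genetic.fit(df[cols])
print(fa_genetic.phi_.round(2))  # off-diagonal = estimated genetic correlation
```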

Video 2 (Eng.).

If the components of an oblique factor model are strongly correlated, one must not neglect this correlation. In my example, the correlation is not strong; therefore, I may stay with the orthogonal factor model. Going the long way to identify a genetic correlation, one may split the indicators under exploration into one-component factor models according to the components of the initial factor model. I call "initial" the model obtained after the first application of the principal component method to the indicators under exploration. Thus, I first obtained the two-component factor model; then I produced two partial one-component factor models, which I call "pure." In my study, the first and second pure models resembled the first and second components of the initial model, respectively.
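
A sketch of the "long way": fit a separate one-component ("pure") model on each component's indicators and correlate the resulting score series. The splits `group_1` and `group_2` are hypothetical placeholders:

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

group_1, group_2 = ["v1", "v2"], ["v3", "v4"]  # assumption

def pure_scores(columns):
    fa_pure = FactorAnalyzer(n_factors=1, rotation=None, method='principal')
    fa_pure.fit(df[columns])
    return fa_pure.transform(df[columns]).ravel()

r = np.corrcoef(pure_scores(group_1), pure_scores(group_2))[0, 1]
print(f"genetic correlation between pure components: {r:.2f}")
```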

Video 3 (Eng.).

In joint factor models, the components' influence is mixed. In contrast, pure factor models are always one-component, so a pure factor model carries a single component's influence only. In my example, I examined whether the pure models are correlated and found that the genetic correlation was weak. Thus I rejected the hypothesis that the initial model's components are genetically correlated.

Comments can be left on YouTube directly.

Principal component analysis. Component models: joint vs. pure
Each video is shorter than 3 minutes.

Video 1 (Eng.).

Considering the topic of trust types, one may find in the ESS database indicators of interpersonal trust and indicators of trust in political institutions (i.e., political trust). If one factorizes these indicators, one may obtain either a single two-component joint component model or two one-component pure component models. One should take into account that the components are genetically correlated; therefore, the choice is between an oblique joint model and two correlated pure models. If an oblique joint model is chosen, one should select the respective rotation settings; if the set of pure models is chosen, one need not worry about rotation settings at all.

Video 2 (Eng.).

Which model is the better choice mathematically and interpretatively? I think that choosing two correlated pure models is favorable because they are more transparent. Looking at the joint model's total explained variance, the sums of squared loadings cannot simply be added to obtain the total variance, because the components are correlated; to obtain the total variance when components are correlated, one should compute the tangent of the angle between the components. This is necessary because the variances accounted for by the two correlated components overlap. In contrast, to obtain the total variance when pure models are applied, the sums of squared loadings may simply be added, because the variances accounted for by the components do not overlap. On the other hand, comparing the joint model and the set of pure models indicator by indicator, one may see that the indicators' communalities and loadings are approximately equal in both alternatives.
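
A sketch of that comparison of explained variance via sums of squared loadings (`group_1` and `group_2` are the hypothetical indicator splits from the previous sketch):

```python
# Joint oblique model vs. two pure models: SS loadings per component.
from factor_analyzer import FactorAnalyzer

fa_joint = FactorAnalyzer(n_factors=2, rotation='promax', method='principal')
fa_joint.fit(df[group_1 + group_2])
ss, prop, cum = fa_joint.get_factor_variance()
print("joint model SS loadings:", ss.round(2))  # cannot simply be summed

for g in (group_1, group_2):
    fa_pure = FactorAnalyzer(n_factors=1, rotation=None, method='principal')
    fa_pure.fit(df[g])
    print("pure model SS loadings:", fa_pure.get_factor_variance()[0].round(2))
```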

Video 3 (Eng.).

For illustration, I computed the differences between the indicators' communalities and loadings in the joint model and in the pure models; they turned out to be approximately equal. Therefore, the pure models' explanatory power is not significantly lower than the joint model's.

Comments can be left on YouTube directly.

© А. Ротмистров
