Multidimensional Scaling (MDS) can be used as an alternative to Spotfire Statistica® Factor Analysis. The goal is to detect meaningful underlying dimensions that allow the data scientist to explain observed similarities or dissimilarities (distances) between the investigated objects. In factor analysis, the similarities between objects (e.g., variables) are expressed in the correlation matrix. With MDS you can analyze any kind of similarity or dissimilarity matrix, in addition to correlation matrices.
MDS includes a full implementation of nonmetric multidimensional scaling. Matrices of similarities, dissimilarities, or correlations between variables or cases (i.e., the "objects") can be analyzed. The starting configuration can be computed via principal components analysis or specified by the user. An iterative procedure is used to minimize the stress value and the coefficient of alienation. The output includes the raw stress value (raw F), the Kruskal stress coefficient S, and the coefficient of alienation. The goodness of fit can be evaluated via Shepard diagrams with d-hats and d-stars.
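For readers who want to experiment with nonmetric scaling outside Statistica, the sketch below shows the same general idea using scikit-learn's MDS estimator; the dissimilarity matrix is hypothetical, and the estimator's iterative procedure and stress definition differ in detail from the module's own implementation.

```python
# Minimal sketch of nonmetric MDS with scikit-learn (not the Statistica module).
import numpy as np
from sklearn.manifold import MDS

# Hypothetical 4x4 symmetric dissimilarity matrix with a zero diagonal.
diss = np.array([
    [0.0, 2.0, 5.0, 7.0],
    [2.0, 0.0, 4.0, 6.0],
    [5.0, 4.0, 0.0, 3.0],
    [7.0, 6.0, 3.0, 0.0],
])

# metric=False requests nonmetric (ordinal) scaling; dissimilarity="precomputed"
# tells the estimator that `diss` already contains the dissimilarities.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
config = mds.fit_transform(diss)   # 2-D coordinates for the four objects
print(config)
print("stress:", mds.stress_)      # badness-of-fit value minimized iteratively
```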
Comprehensive introductions to the computational approach used in this module can be found in Borg and Lingoes (1987), Borg and Shye (1995), Guttman (1968), Schiffman, Reynolds, and Young (1981), Young and Hamer (1987), and in Kruskal and Wish (1978). D-stars are calculated via a procedure known as the rank-image permutation procedure (see Guttman, 1968; or Schiffman, Reynolds, & Young, 1981, pp. 368-369). D-hats are calculated via a procedure referred to as the monotone regression transformation procedure (see Kruskal, 1964; Schiffman, Reynolds, & Young, 1981, pp. 367-368).
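As an illustration of the monotone regression step that produces d-hats, the following sketch applies scikit-learn's IsotonicRegression to a small, made-up set of observed dissimilarities and configuration distances; it approximates the procedure described by Kruskal (1964) and is not the module's own implementation.

```python
# Sketch of the monotone-regression transformation that yields d-hats
# for a Shepard diagram (illustrative data, assumed values).
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Observed dissimilarities and the distances reproduced by a candidate
# MDS configuration, flattened over the lower triangle of the matrix.
deltas = np.array([2.0, 5.0, 7.0, 4.0, 6.0, 3.0])   # observed dissimilarities
dists  = np.array([1.8, 5.4, 6.9, 4.3, 5.7, 3.5])   # configuration distances

# d-hats: the best monotonically increasing transformation of the observed
# dissimilarities that approximates the configuration distances.
d_hat = IsotonicRegression(increasing=True).fit_transform(deltas, dists)
print(d_hat)
```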
The following simple example demonstrates the logic of a Multidimensional Scaling analysis. Suppose we take a matrix of distances between major US cities from a map. We then analyze this matrix, specifying that we want to reproduce the distances based on two dimensions. As a result of the MDS analysis, we would most likely obtain a two-dimensional representation of the locations of the cities; that is, we would essentially recover a two-dimensional map.
In general, MDS attempts to arrange "objects" (major cities in this example) in a space with a particular number of dimensions (two-dimensional in this example) so as to reproduce the observed distances. As a result, we can "explain" the distances in terms of underlying dimensions. In our example, we could explain the distances in terms of the two geographical dimensions: north/south and east/west.
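A rough, self-contained version of this example might look as follows, using scikit-learn's metric MDS with assumed, approximate inter-city distances in miles; the recovered coordinates are determined only up to rotation and reflection.

```python
# Illustrative sketch of the city-map example (approximate, assumed distances).
import numpy as np
from sklearn.manifold import MDS

cities = ["New York", "Chicago", "Denver", "Los Angeles"]
distances = np.array([
    [   0,  790, 1780, 2790],
    [ 790,    0, 1000, 2010],
    [1780, 1000,    0, 1030],
    [2790, 2010, 1030,    0],
], dtype=float)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(distances)   # 2-D "map" coordinates for each city
for name, (x, y) in zip(cities, coords):
    print(f"{name:12s} {x:8.1f} {y:8.1f}")
```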
As in factor analysis, the actual orientation of axes in the final solution is arbitrary. We could rotate the map in any way we want; the distances between the cities would remain the same. Thus, the final orientation of axes in the plane or space is mostly the result of a subjective decision by the data scientist, who will choose an orientation that can be most easily explained.
Note: Although MDS and factor analysis seem similar, they are fundamentally different. Factor analysis requires that the underlying data are distributed as multivariate normal and that the relationships are linear. MDS imposes no such restrictions.
As long as the rank-ordering of distances (or similarities) in the matrix is meaningful, MDS can be used. In terms of resultant differences, factor analysis tends to extract more factors (dimensions) than MDS. Therefore, MDS often yields more readily interpretable solutions. Most importantly, however, MDS can be applied to any kind of distances or similarities, while factor analysis requires us to first compute a correlation matrix. MDS can be based on subjects' direct assessment of similarities between stimuli, while factor analysis requires subjects to rate those stimuli on some list of attributes (for which the factor analysis is performed).
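For instance, a correlation matrix can itself be fed into MDS once it is converted to dissimilarities; a common convention (assumed here, not specific to Statistica) is to use 1 - r, as in the short sketch below.

```python
# Converting a (hypothetical) correlation matrix into a dissimilarity matrix
# suitable for MDS: highly correlated variables become "close" objects.
import numpy as np

corr = np.array([
    [1.0, 0.8, 0.2],
    [0.8, 1.0, 0.3],
    [0.2, 0.3, 1.0],
])
diss = 1.0 - corr   # one common transformation; sqrt(2 * (1 - r)) is another
print(diss)
```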