The spatial sign covariance matrix and its application for robust correlation estimation

We summarize properties of the spatial sign covariance matrix and especially look at the relationship between its eigenvalues and those of the shape matrix of an elliptical distribution. The explicit relationship known in the bivariate case was used to construct the spatial sign correlation coefficient, which is a non-parametric and robust estimator for the correlation coefficient within the elliptical model. We consider a multivariate generalization, which we call the multivariate spatial sign correlation matrix.


Introduction
Let X_1, …, X_n denote a sample of independent p-dimensional random variables from a distribution F, and let s : R^p → R^p with s(x) = x/|x| for x ≠ 0 and s(0) = 0 denote the spatial sign. Then

S_n(t_n; X_1, …, X_n) = (1/n) Σ_{i=1}^n s(X_i − t_n) s(X_i − t_n)^T

denotes the empirical spatial sign covariance matrix (SSCM) with location t_n. The canonical choice for the location estimator t_n is the spatial median µ_n = argmin_{µ ∈ R^p} Σ_{i=1}^n |X_i − µ|. Besides its nice robustness properties, such as an asymptotic breakdown point of 1/2, it has (under regularity conditions, see [12]) the advantageous feature that it centres the spatial signs, i.e., (1/n) Σ_{i=1}^n s(X_i − µ_n) = 0, so that S_n(µ_n; X_1, …, X_n) is indeed the empirical covariance matrix of the spatial signs of the data. If t_n is (strongly) consistent for a location t ∈ R^p, it was shown in [5] that under mild conditions on F the empirical SSCM is a (strongly) consistent estimator of its population counterpart S(X) = E(s(X − t) s(X − t)^T).
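These definitions are easy to mirror in code. The following is a minimal Python/NumPy sketch (illustrative, not the authors' R implementation); the Weiszfeld iteration is one standard way to compute the spatial median:

```python
import numpy as np

def spatial_sign(x):
    """Map each row x_i to x_i / |x_i|, and 0 to 0."""
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    out = np.zeros_like(x, dtype=float)
    nz = norms[:, 0] > 0
    out[nz] = x[nz] / norms[nz]
    return out

def spatial_median(x, n_iter=200, tol=1e-10):
    """Weiszfeld iteration for argmin_mu sum_i |x_i - mu|."""
    mu = np.mean(x, axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(x - mu, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)        # guard against division by zero
        w = 1.0 / d
        mu_new = (w[:, None] * x).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

def sscm(x, t=None):
    """Empirical spatial sign covariance matrix S_n(t; X_1, ..., X_n)."""
    if t is None:
        t = spatial_median(x)
    s = spatial_sign(x - t)
    return s.T @ s / len(x)
```

With the spatial median as centre, the computed matrix has trace 1 (each sign has unit norm for continuous data), and the spatial signs average to numerically zero, as stated above.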
There are some nice results if F belongs to the class of continuous elliptical distributions, which means that F possesses a density of the form

f(x) = det(V)^(−1/2) g((x − µ)^T V^(−1) (x − µ))

for a location µ ∈ R^p, a symmetric and positive definite shape matrix V ∈ R^{p×p}, and a function g : R → R, which is often called the elliptical generator. Prominent members of the elliptical family are the multivariate normal distribution and the elliptical t-distributions (e.g. [2], p. 208). If second moments exist, then µ is the expectation of X ∼ F, and V is a multiple of the covariance matrix. The shape matrix V is unique only up to a multiplicative constant. In the following, we consider the trace-normalized shape matrix V_0 = V/tr(V), which is convenient since S(X) also has trace 1. If F is elliptical, then S(X) and V share the same eigenvectors, and the respective eigenvalues have the same ordering. For this reason, the SSCM has been proposed for robust principal component analysis (e.g. [13,15]). In the present article, we study the eigenvalues of the SSCM.
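The eigenvector-sharing property is easy to check by simulation. A hedged Python sketch (the shape matrix, sample size, and seed are arbitrary illustrative choices; the true location is known here, so the sample is centred at 0):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))   # random orthonormal eigenbasis
V = Q @ np.diag([4.0, 2.0, 1.0]) @ Q.T             # shape matrix, distinct eigenvalues
V0 = V / np.trace(V)                               # trace-normalized shape

# Sample X ~ N(0, V) and compute the empirical SSCM with known centre t = 0.
X = rng.multivariate_normal(np.zeros(p), V, size=50000)
U = X / np.linalg.norm(X, axis=1, keepdims=True)   # spatial signs
S = U.T @ U / len(X)

eval_S, evec_S = np.linalg.eigh(S)                 # ascending eigenvalues
eval_V, evec_V = np.linalg.eigh(V0)
# S(X) and V0 share eigenvectors (up to sign) with matching eigenvalue order:
align = np.abs(np.sum(evec_S * evec_V, axis=0))    # |<u_i, v_i>| per column
```

The columns of the two eigenvector matrices pair up (alignment close to 1), while the SSCM eigenvalues are pulled closer together than those of V0, as discussed in the next section.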

Eigenvalues of the SSCM
Let λ_1 ≥ … ≥ λ_p ≥ 0 denote the eigenvalues of V_0 and δ_1 ≥ … ≥ δ_p ≥ 0 those of S(X). Explicit formulae relating the δ_i to the λ_i are known only for p = 2 (see [19,3]), namely

δ_1 = √λ_1/(√λ_1 + √λ_2),  δ_2 = √λ_2/(√λ_1 + √λ_2).   (1)

Assuming λ_2 > 0, we have

δ_1/δ_2 = √(λ_1/λ_2) ≤ λ_1/λ_2,   (2)

thus the eigenvalues of the SSCM are closer together than those of the corresponding shape matrix. It is shown in [8] that δ_i/δ_j ≤ λ_i/λ_j also holds true for arbitrary p > 2, as long as λ_j > 0. No explicit map between the eigenvalues is known for p > 2, but Dürre et al. [8] give a representation of δ_i as a one-dimensional integral,

δ_i = (λ_i/2) ∫_0^∞ (1 + λ_i x)^(−3/2) ∏_{j≠i} (1 + λ_j x)^(−1/2) dx,   (3)

which permits fast and accurate numerical evaluation for arbitrary p. We use this formula (implemented in R [17] in the package sscor [9]) to get an impression of how the eigenvalues of S(X) compare to those of V_0. We first look at the case of equidistantly spaced eigenvalues

λ_i = 2i/(p(p + 1)), i = 1, …, p.

[Figure 1: Eigenvalues of S(X) and of V_0 for p = 3, 11, 101.]

The magnitude of the eigenvalues necessarily decreases as p increases, since Σ_{i=1}^p λ_i = Σ_{i=1}^p δ_i = 1 by definition of V_0 and S(X). As one can see in Figure 1, the eigenvalues of S(X) and V_0 approach each other as p increases; in fact, the maximal absolute difference for p = 101 is roughly 2·10^(−4). In the second scenario, we take p − 1 equidistantly spaced eigenvalues and one eigenvalue 5 times larger than the rest. This models the case where the dependence is mainly driven by one principal component. As one can see in Figure 2, the distance between the two largest eigenvalues is smaller for S(X) than for V_0, which is not surprising in light of (2). Thus in general, the eigenvalues of the SSCM are less separated than those of V_0, which is one reason why the use of the SSCM for robust principal component analysis has been questioned (e.g. [1,14]). However, the differences appear to be generally small in higher dimensions.
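The integral representation (3) can be evaluated with elementary quadrature. The following Python sketch uses a midpoint rule after the substitution x = u/(1 − u) (the sscor package uses its own quadrature; this is only an illustration) and reproduces the bivariate closed form (1):

```python
import numpy as np

def sscm_eigenvalues(lam, n_grid=200_000):
    """delta_i = (lam_i/2) * int_0^inf (1 + lam_i x)^(-3/2)
    * prod_{j != i} (1 + lam_j x)^(-1/2) dx, via a midpoint rule on
    u in (0, 1) after substituting x = u / (1 - u)."""
    lam = np.asarray(lam, dtype=float)
    u = (np.arange(n_grid) + 0.5) / n_grid
    x = u / (1.0 - u)
    jac = (1.0 - u) ** -2                    # dx = du / (1 - u)^2
    half = np.prod((1.0 + lam[:, None] * x) ** -0.5, axis=0)
    # 'half' already carries (1 + lam_i x)^(-1/2); one extra factor
    # (1 + lam_i x)^(-1) yields the exponent -3/2 for the i-th term.
    return np.array([0.5 * li * np.sum(half / (1.0 + li * x) * jac) / n_grid
                     for li in lam])

# Bivariate check: closed form (1) gives delta = (2/3, 1/3) for lam = (0.8, 0.2).
d = sscm_eigenvalues([0.8, 0.2])
# Equidistant scenario for p = 3: lambda_i = 2i / (p (p + 1)).
lam3 = 2.0 * np.arange(1, 4) / (3 * 4)
d3 = sscm_eigenvalues(lam3)                  # sums to 1, less spread than lam3
```

Both spectra sum to 1, and the computed δ_i are visibly less spread out than the λ_i, in line with (2).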

Estimation of the correlation matrix
Equation (1) can be used to derive an estimator for the correlation coefficient based on the empirical SSCM: the spatial sign correlation coefficient ρ_n ([6]). Under mild regularity assumptions, this estimator is consistent under elliptical distributions and asymptotically normal,

√n (ρ_n − ρ) → N(0, σ²(a, ρ)),   (4)

where a = √(v_11/v_22) is the ratio of the marginal scales and ρ = v_12/√(v_11 v_22) is the generalized correlation coefficient, which coincides with the usual moment correlation coefficient if second moments exist. The asymptotic variance in (4) is minimal for a = 1, but can get arbitrarily large as a tends to infinity or 0. Therefore a two-step procedure has been proposed, the two-stage spatial sign correlation ρ_{σ,n}, which first standardizes the data by a robust scale estimator, e.g., the median absolute deviation (mad), and then computes the spatial sign correlation of the transformed data. Under mild conditions (see [7]), this two-step procedure yields the asymptotic variance

σ²(ρ) = (1 − ρ²)^(3/2) + (1 − ρ²)²,   (5)

which equals σ²(a, ρ) in the favourable case a = 1. Since (5) depends only on the parameter ρ, the two-stage spatial sign correlation coefficient is very suitable for constructing robust and non-parametric confidence intervals for the correlation coefficient under ellipticity. These intervals turn out to be quite accurate even for rather small sample sizes such as n = 10, and in fact more accurate than those based on the sample moment correlation coefficient [7]. One can construct an estimator of the correlation matrix R by filling the off-diagonal positions with the bivariate spatial sign correlation coefficients of all pairs of variables, as proposed in [6]. Equation (3) allows an alternative approach: first standardize the data by a robust scale estimator and compute the SSCM of the transformed data, then apply a spectral decomposition

S_n(t_n; X_1, …, X_n) = Û Δ̂ Û^T,

where Δ̂ contains the ordered eigenvalues δ̂_1 ≥ … ≥ δ̂_p.
One obtains estimates λ̂_1, …, λ̂_p by inverting (3). Although theoretical results are yet to be established, we found in our simulations that a fixed-point algorithm for this inversion works reliably and converges fast. Let Λ̂ denote the diagonal matrix containing λ̂_1, …, λ̂_p; then V̂ = Û Λ̂ Û^T is a suitable estimator for the shape of the standardized data, and R̂ with r̂_ij = v̂_ij/√(v̂_ii v̂_jj) is an estimator for the correlation matrix, which we call the multivariate spatial sign correlation matrix. Contrary to the pairwise approach, the multivariate spatial sign correlation matrix is positive semi-definite by construction. Theoretical properties of the new estimator are not straightforward to establish; by a small simulation study, we want to get an impression of its efficiency. We compare the variances of the moment correlation and of the pairwise as well as the multivariate spatial sign correlation under several elliptical distributions: normal, Laplace, and t-distributions with 5 and 10 degrees of freedom. The latter three have heavier tails than the normal distribution. The Laplace distribution is obtained by the elliptical generator g(x) = c_p exp(−√|x|/2), where c_p is the appropriate integration constant depending on p (e.g. [2], p. 209).
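The fixed-point step can be sketched as follows in Python. This is our illustrative reconstruction, not necessarily the authors' algorithm: evaluate δ(λ) through the integral representation (3) and rescale the candidate spectrum multiplicatively until its implied SSCM eigenvalues match the observed ones.

```python
import numpy as np

def sscm_eigenvalues(lam, n_grid=100_000):
    """delta_i(lam) via the integral representation (3), midpoint rule
    on u in (0, 1) after substituting x = u / (1 - u)."""
    lam = np.asarray(lam, dtype=float)
    u = (np.arange(n_grid) + 0.5) / n_grid
    x = u / (1.0 - u)
    jac = (1.0 - u) ** -2
    half = np.prod((1.0 + lam[:, None] * x) ** -0.5, axis=0)
    return np.array([0.5 * li * np.sum(half / (1.0 + li * x) * jac) / n_grid
                     for li in lam])

def invert_sscm_eigenvalues(delta, n_iter=500, tol=1e-12):
    """Hypothetical fixed-point inversion of delta(lam) = delta: the
    multiplicative update lam <- lam * delta / delta(lam), renormalized
    to trace 1, exploits the monotone link between the two spectra."""
    delta = np.asarray(delta, dtype=float)
    lam = delta / delta.sum()            # start at the observed spectrum
    for _ in range(n_iter):
        cur = sscm_eigenvalues(lam)
        lam_new = lam * delta / cur
        lam_new /= lam_new.sum()
        if np.max(np.abs(lam_new - lam)) < tol:
            break
        lam = lam_new
    return lam
```

Given the recovered Λ̂, the shape estimate V̂ = Û Λ̂ Û^T and the correlation matrix r̂_ij = v̂_ij/√(v̂_ii v̂_jj) follow as described above. In our toy checks, the iteration converges linearly and recovers a known spectrum to high accuracy.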
We take the identity matrix as shape matrix and compare the variances of an off-diagonal element of the matrix estimates for different dimensions p = 2, 3, 5, 10, 50 and sample sizes n = 100, 1000. We use the R packages mvtnorm [10] and MNM [16] for the data generation. The results, based on 10000 runs, are summarized in Table 1.
Except for the moment correlation at the t_5 distribution, the results for n = 100 and n = 1000 are very similar. Note that the variance of the moment correlation decreases at the Laplace distribution as the dimension p increases, but not for the other distributions considered. The lower-dimensional marginals of the Laplace distribution are, contrary to those of the normal and the t-distributions, not Laplace distributed (see [11]), and the kurtosis of the one-dimensional marginals of the Laplace distribution in fact decreases as p increases.
Equation (5) yields an asymptotic variance of 2 for the elements of the pairwise spatial sign correlation matrix, regardless of the specific elliptical generator, which can also be observed in the simulation results. The moment correlation is twice as efficient under normality, but has a higher variance at heavy-tailed distributions: for uncorrelated t_5-distributed random variables, the spatial sign correlation outperforms the moment correlation. Looking at the multivariate spatial sign correlation, we see a strong increase of efficiency for larger p; for p = 50, the variance is comparable to that of the moment correlation. Since the asymptotic variance of the SSCM does not depend on the elliptical generator, this is expected to also hold for the multivariate spatial sign correlation, and we find this confirmed by the simulations. The multivariate spatial sign correlation is thus more efficient than the moment correlation even under slightly heavier tails for moderately large p.

Table 1: Simulated variances (multiplied by n) of one off-diagonal element of the correlation matrix estimate based on the moment correlation (cor), the pairwise spatial sign correlation (sscor pairwise) and the multivariate spatial sign correlation matrix (sscor multivariate) for spherical normal (N), t_5, t_10, and Laplace (L) distributions, several dimensions p and sample sizes n = 100, 1000.
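For orientation, these variance comparisons can be tabulated directly. In the Python sketch below, the value 2 at ρ = 0 for the spatial sign correlation is taken from the text; the closed form used in asv_sscor is our reconstruction of the two-stage variance and should be checked against [7]; the moment-correlation formula (1 + κ)(1 − ρ²)², with kurtosis parameter κ = 2/(ν − 4) for a t_ν distribution, is standard elliptical asymptotics and not from the present paper:

```python
def asv_sscor(rho: float) -> float:
    # asymptotic variance of the two-stage spatial sign correlation
    # (reconstruction); depends on rho only and equals 2 at rho = 0
    return (1 - rho**2) ** 1.5 + (1 - rho**2) ** 2

def asv_cor(rho: float, kappa: float = 0.0) -> float:
    # asymptotic variance of the moment correlation under an elliptical
    # distribution with kurtosis parameter kappa (0 in the normal case)
    return (1 + kappa) * (1 - rho**2) ** 2

normal = asv_cor(0.0)               # 1.0: moment correlation twice as efficient
t5 = asv_cor(0.0, kappa=2.0)        # 3.0: spatial sign correlation (2.0) wins
```

At ρ = 0 this reproduces the ordering discussed above: variance 1 for the moment correlation under normality versus 2 for the spatial sign correlation, and variance 3 for the moment correlation under t_5, where the spatial sign correlation is more efficient.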
An increase of efficiency for larger p is not uncommon for robust scatter estimators. It can be observed amongst others for M-estimators, the Tyler shape matrix, the MCD, and S-estimators (e.g. [4,18]). All of these are affine equivariant estimators, requiring n > p. This is not necessary for the spatial sign correlation matrix. One may expect that the efficiency gain for large p is at the expense of robustness, in particular a larger maximum bias curve. Further research will be necessary to thoroughly explore the robustness properties and efficiency of the multivariate spatial sign correlation estimator.