| Literature DB >> 23730197 |
Joong-Ho Won, Johan Lim, Seung-Jean Kim, Bala Rajaratnam.
Abstract
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
Keywords: condition number; convex optimization; covariance estimation; cross-validation; eigenvalue; portfolio optimization; regularization; risk comparisons; shrinkage
Year: 2013 PMID: 23730197 PMCID: PMC3667751 DOI: 10.1111/j.1467-9868.2012.01049.x
Source DB: PubMed Journal: J R Stat Soc Series B Stat Methodol ISSN: 1369-7412 Impact factor: 4.488
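The abstract describes regularizing the covariance estimate so that its condition number (ratio of largest to smallest eigenvalue) is bounded. A minimal sketch of the underlying idea, under simplifying assumptions: eigen-decompose the sample covariance and clip its eigenvalues into an interval [tau, kappa*tau]. Note that the floor tau is chosen here by a crude heuristic (lam_max / kappa) rather than by the likelihood-optimal rule derived in the paper, so this is an illustration of the eigenvalue-shrinkage structure, not the authors' estimator.

```python
import numpy as np

def clip_condition_number(S, kappa=10.0):
    """Shrink the eigenvalues of a sample covariance matrix S so the
    result has condition number at most kappa.

    Illustrative only: tau = lam_max / kappa is a heuristic floor,
    not the maximum likelihood choice derived in the paper.
    """
    lam, V = np.linalg.eigh(S)            # S symmetric: real eigenpairs
    tau = lam.max() / kappa               # heuristic lower bound on eigenvalues
    lam_clipped = np.clip(lam, tau, kappa * tau)
    return V @ np.diag(lam_clipped) @ V.T

rng = np.random.default_rng(0)
# "large p small n": p = 50 variables, n = 20 samples, so S is singular
X = rng.standard_normal((20, 50))
S = np.cov(X, rowvar=False)
Sigma_hat = clip_condition_number(S, kappa=10.0)
w = np.linalg.eigvalsh(Sigma_hat)
print(w.max() / w.min())  # bounded by kappa = 10
```

Because the raw sample covariance here has zero eigenvalues, it is not even invertible; after clipping, the estimate is positive definite with condition number at most kappa, which is exactly the well-conditioning property the abstract emphasizes.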