Empirical Bayes minimax estimators of matrix normal means for arbitrary quadratic loss and unknown covariance matrix

Gwowen Shieh*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Let X = (X_1, …, X_K), where the X_i are mutually independent p-variate (K > p + 1) normal vectors with unknown means θ_i and unknown positive definite variance-covariance matrix V. Assume the statistic V̂ is available for estimating V, where V̂ has the Wishart distribution W_p(n, V)/(n + p + 1), n > p + 1, and is independent of X. It is desired to estimate θ = (θ_1, …, θ_K) under the quadratic loss L_Q*(θ̂, θ) = tr{(θ̂ − θ)′Q*(θ̂ − θ)}, where Q* = V^{-1/2} Q V^{-1/2}, V = V^{1/2} V^{1/2}, and Q is a known positive definite matrix chosen by the researcher. The L_Q* loss includes the widely used loss L(θ̂, θ) = tr{(θ̂ − θ)′V^{-1}(θ̂ − θ)} as a special case. It is shown that, under some specifications of τ(V̂, S), a symmetric p × p matrix, the proposed empirical Bayes estimator (I_p − V̂S^{-1}τ(V̂, S))X dominates the maximum likelihood estimator X and is minimax under the L_Q* loss. Unlike previous work on the estimation of vector normal means under quadratic losses with a weight matrix Q, the proposed empirical Bayes minimax estimators are structurally free of Q, and minimaxity holds for the whole class of quadratic loss functions L_Q*. The simulated risks of several competing EB estimators are considered, and the risk improvement of these estimators over the sample mean is calculated.
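
For readers who want to experiment with the quantities defined in the abstract, a minimal Python/NumPy sketch follows. It only illustrates the objects and their shapes: the dimensions, the choice S = XX′, the scalar-multiple form τ(V̂, S) = (K − p − 1)·I_p, and Q = I_p are illustrative placeholders, not the specifications analyzed in the paper, which should be consulted for the conditions under which the estimator is actually minimax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper).
p, K, n = 4, 20, 30   # K > p + 1 and n > p + 1, as required in the abstract

def inv_sqrt(V):
    """Symmetric inverse square root V^{-1/2} via eigendecomposition."""
    w, U = np.linalg.eigh(V)
    return U @ np.diag(w ** -0.5) @ U.T

def loss_Qstar(theta_hat, theta, Q, V):
    """L_Q*(theta_hat, theta) = tr{(theta_hat - theta)' Q* (theta_hat - theta)},
    with Q* = V^{-1/2} Q V^{-1/2}."""
    Qstar = inv_sqrt(V) @ Q @ inv_sqrt(V)
    D = theta_hat - theta            # p x K matrix of estimation errors
    return np.trace(D.T @ Qstar @ D)

def eb_estimator(X, Vhat, S, tau):
    """(I_p - Vhat S^{-1} tau) X, with tau standing in for tau(Vhat, S)."""
    return (np.eye(X.shape[0]) - Vhat @ np.linalg.solve(S, tau)) @ X

# One simulated data set: columns of X are N_p(theta_i, V),
# and (n + p + 1) * Vhat ~ W_p(n, V), independent of X.
V = np.eye(p)
L = np.linalg.cholesky(V)
theta = rng.normal(scale=0.5, size=(p, K))
X = theta + L @ rng.standard_normal((p, K))
Z = rng.standard_normal((p, n))
Vhat = L @ Z @ Z.T @ L.T / (n + p + 1)

# Placeholder choices (NOT the paper's specifications): S taken as XX',
# tau(Vhat, S) taken as (K - p - 1) I_p, and Q = I_p.
S = X @ X.T
tau = (K - p - 1) * np.eye(p)
Q = np.eye(p)

print("loss of MLE X        :", loss_Qstar(X, theta, Q, V))
print("loss of EB estimator :", loss_Qstar(eb_estimator(X, Vhat, S, tau), theta, Q, V))
```

Repeating the simulation over many replications and averaging the two losses gives Monte Carlo risk estimates of the kind compared in the paper, though only for whatever τ, S, and Q one plugs in.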

Original language: English
Pages (from-to): 317-342
Number of pages: 26
Journal: Statistics and Risk Modeling
Volume: 11
Issue number: 4
DOIs
State: Published - 1 Jan 1993

Keywords

  • Wishart identity
  • empirical Bayes
  • frequentist risks
  • matrix normal means
  • minimax
  • quadratic loss
