DOI: 10.17516/1997-1397-2020-13-5-608-621 УДК 517.9
Baranchik-type Estimators of a Multivariate Normal Mean Under the General Quadratic Loss Function
Abdenour Hamdaoui*
Department of Mathematics, University of Sciences and Technology Mohamed Boudiaf, Oran
Laboratory of Statistics and Random Modelisations (LSMA), Tlemcen
Algeria
Abdelkader Benkhaled†
Department of Biology, Mascara University Mustapha Stambouli
Laboratory of Geomatics, Ecology and Environment (LGEO2E)
Mascara, Algeria
Mekki Terbeche‡
Department of Mathematics, University of Sciences and Technology Mohamed Boudiaf, Oran
Laboratory of Analysis and Application of Radiation (LAAR), USTO-MB
Oran, Algeria
Received 08.04.2020, received in revised form 01.06.2020, accepted 16.07.2020

Abstract. The problem of estimating the mean of a multivariate normal distribution by different types of shrinkage estimators is investigated. We establish the minimaxity of Baranchik-type estimators when the covariance matrix is the identity and the matrix associated with the loss function is diagonal. In particular, the class of James-Stein estimators is presented. The general situation for both matrices cited above is also discussed.
Keywords: covariance matrix, James-Stein estimator, loss function, multivariate Gaussian random variable, non-central chi-square distribution, shrinkage estimator.
Citation: A. Hamdaoui, A. Benkhaled, M. Terbeche, Baranchik-type Estimators of a Multivariate Normal Mean Under the General Quadratic Loss Function, J. Sib. Fed. Univ. Math. Phys., 2020, 13(5), 608-621. DOI: 10.17516/1997-1397-2020-13-5-608-621.
1. Introduction and Preliminaries
The field of estimation of a multivariate normal mean using shrinkage estimators was introduced in [10]. The author showed that the maximum likelihood estimator (MLE) of the mean $\theta$ of a multivariate Gaussian distribution $N_p(\theta, \sigma^2 I_p)$ is inadmissible in the mean squared sense when the dimension $p$ of the parameter space satisfies $p \geq 3$. In particular, he proved the existence of an estimator that always achieves smaller total mean squared error regardless of the true $\theta$. Perhaps the best known estimator of this kind is the James-Stein estimator introduced in [7]. This
* [email protected], [email protected] tbenkhaled08<8 yahoo.fr ^ [email protected] © Siberian Federal University. All rights reserved
one is a special case of a larger class of estimators known as shrinkage estimators, which combine a model with low bias and high variance with a model with high bias but low variance. In this context we can cite, for example, Baranchik [2] for his work on the minimaxity of estimators of the form $\delta_r(X, S) = (1 - r(F)/F)X$, where $F = \|X\|^2/S$, the statistic $S \sim \sigma^2\chi_n^2$ is the estimator of the unknown parameter $\sigma^2$ and $r(\cdot)$ is a real measurable function. Strawderman [12] studied the estimation of the mean vector of a scale mixture of multivariate normal distributions under squared error loss and obtained results analogous to those of Baranchik [2]. Xie et al. [13] introduced a class of semiparametric/parametric shrinkage estimators and established their asymptotic optimality properties. Selahattin et al. [9] provided several alternative methods for the derivation of the restricted ridge regression estimator (RRRE). In [8], the optimal extended balanced loss function (EBLF) estimators and predictors are introduced and derived, and their performances discussed. In [6], the authors considered the model $X \sim N_p(\theta, \sigma^2 I_p)$ where $\sigma^2$ is unknown and estimated by $S^2$ ($S^2 \sim \sigma^2\chi_n^2$). They studied the class of shrinkage estimators $\delta_\psi = \delta_{JS} + \ell\left(S^2\psi(S^2, \|X\|^2)/\|X\|^2\right)X$, where $\ell$ is a real parameter. Benkhaled and Hamdaoui [3] considered the model $X \sim N_p(\theta, \sigma^2 I_p)$ where $\sigma^2$ is unknown. They studied the minimaxity of two different forms of shrinkage estimators of $\theta$: estimators of the form $\delta_\psi = \left(1 - \psi(S^2, \|X\|^2)S^2/\|X\|^2\right)X$, and estimators of Lindley type given by $\delta_\varphi = \left(1 - \varphi(S^2, T^2)S^2/T^2\right)(X - \bar{X}\mathbf{1}) + \bar{X}\mathbf{1}$, where $\bar{X}$ denotes the mean of the components of $X$.
In this work, we deal with the model $X \sim N_p(\theta, \Sigma)$ and the loss matrix $Q$, where the covariance matrix $\Sigma$ is known. Our aim is to estimate the unknown parameter $\theta$ by shrinkage estimators deduced from the MLE. The paper is organized as follows. In Section 2, we study the standard case $\Sigma = I_p$ and $Q = D = \mathrm{diag}(d_1, d_2, \ldots, d_p)$: we find the explicit formula of the risk function of the considered estimators and treat their minimaxity. As a special case, the James-Stein estimator and its risk are also obtained. In Section 3, we study the same problem for general matrices $\Sigma$ and $Q$. In Section 4, we graphically illustrate the risk ratios, to the MLE, of the James-Stein estimator and of Baranchik-type estimators for various values of $p$. We end the manuscript with an Appendix containing technical lemmas used in the proofs of our results.
We recall that if $X \sim N_p(\theta, \sigma^2 I_p)$, then $\|X\|^2/\sigma^2 \sim \chi_p^2(\lambda)$, where $\chi_p^2(\lambda)$ denotes the non-central chi-square distribution with $p$ degrees of freedom and non-centrality parameter $\lambda = \|\theta\|^2/2\sigma^2$. We also recall the following results that are useful in our proofs.
Definition 1. For any measurable function $f : \mathbb{R}^+ \to \mathbb{R}$, $\chi_p^2(\lambda)$-integrable, we have
$$E\left[f\left(\chi_p^2(\lambda)\right)\right] = \sum_{k=0}^{+\infty} \frac{e^{-\lambda/2}(\lambda/2)^k}{k!} \int_0^{+\infty} f(u)\, \chi_{p+2k}^2(u)\, du = E\left[f\left(\chi_{p+2K}^2\right)\right],$$
where $K \sim P(\lambda/2)$, the Poisson distribution with parameter $\lambda/2$, and $\chi_{p+2k}^2(u)$ denotes the density of the central chi-square distribution with $p + 2k$ degrees of freedom.
Lemma 1 (Stein [11]). Let $X$ be an $N(\nu, \sigma^2)$ real random variable and let $f : \mathbb{R} \to \mathbb{R}$ be an indefinite integral of a Lebesgue measurable function $f'$, essentially the derivative of $f$. Suppose also that $E|f'(X)| < +\infty$. Then
$$E\left[(X - \nu) f(X)\right] = \sigma^2 E\left[f'(X)\right].$$
In the sequel, if $X \sim N_p(\theta, \Sigma)$, we assume that the loss incurred in estimating $\theta$ by $\delta$ is the quadratic form $L_Q(\delta, \theta) = (\delta - \theta)^t Q (\delta - \theta)$, and the risk function associated with this loss is $R_Q(\delta, \theta) = E_\theta\left(L_Q(\delta, \theta)\right)$.
2. Results for the standard case
Let $X \sim N_p(\theta, I_p)$ be a multivariate Gaussian random variable in $\mathbb{R}^p$ and, for any estimator $\delta$, take the loss function $L_D(\delta, \theta) = (\delta - \theta)^t D (\delta - \theta)$ where $Q = D = \mathrm{diag}(d_1, d_2, \ldots, d_p)$. It is well known that the MLE of the parameter $\theta$ is $X$ and that its risk function associated with the loss function $L_D$ is $\sum_{i=1}^p d_i = \mathrm{Tr}(D)$. Indeed,
$$R_D(X, \theta) = E\left(L_D(X, \theta)\right) = E\left(\sum_{i=1}^p d_i (X_i - \theta_i)^2\right) = \sum_{i=1}^p d_i E(X_i - \theta_i)^2 = \mathrm{Tr}(D),$$
because for any $i$ $(i = 1, \ldots, p)$, $(X_i - \theta_i)^2 \sim \chi_1^2$, the chi-square distribution with one degree of freedom, hence $E_\theta(X_i - \theta_i)^2 = 1$. It is easy to check that the MLE $X$ is minimax; thus any estimator that dominates it is also minimax.
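As a quick sanity check (ours, not from the paper), the identity $R_D(X, \theta) = \mathrm{Tr}(D)$ can be verified by simulation; the dimension, the mean and the diagonal of $D$ below are arbitrary illustrative choices:

```python
# Minimal Monte Carlo sketch (our own check): the risk of the MLE under L_D
# should equal Tr(D) for any true mean theta.
import numpy as np

rng = np.random.default_rng(0)
p = 8
theta = rng.normal(size=p)                     # arbitrary true mean
d = 1.0 / np.arange(1, p + 1)                  # diagonal of D (arbitrary choice)
X = rng.normal(theta, 1.0, size=(200_000, p))  # replicates of X ~ N_p(theta, I_p)

loss = ((X - theta) ** 2 * d).sum(axis=1)      # L_D(X, theta) per replicate
print(loss.mean(), d.sum())                    # both close to Tr(D)
```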
Next, we suppose that $K = (K_1, \ldots, K_p)$ where the $K_i$ $(i = 1, \ldots, p)$ are independent Poisson variables, $K_i \sim P(\theta_i^2/2)$, and set $K = \sum_{i=1}^p K_i$, so that $K \sim P(\|\theta\|^2/2)$. We give the following lemma, which is used in our proofs; its proof is postponed to the Appendix.
Lemma 2. Let $X \sim N_p(\theta, I_p)$ where $X = (X_1, \ldots, X_p)^t$ and $\theta = (\theta_1, \ldots, \theta_p)^t$. If $p \geq 3$, we have
$$\text{i)}\quad E\left[\frac{X_i^2}{\|X\|^2}\right] = E\left[\frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{p + 2K}\right];$$
$$\text{ii)}\quad E\left[\frac{X_i^2}{\|X\|^4}\right] = E\left[\frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{(p - 2 + 2K)(p + 2K)}\right].$$
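Before using the lemma, identity i) can be checked numerically; the following sketch (ours, with an arbitrary $\theta$) compares a Monte Carlo estimate of the left-hand side with the Poisson-mixture form on the right:

```python
# Numerical check (ours) of Lemma 2 i); K is simulated as P(||theta||^2 / 2).
import numpy as np

rng = np.random.default_rng(1)
p, i, n = 6, 0, 500_000
theta = np.array([1.5, -0.5, 0.2, 0.0, 1.0, -2.0])  # arbitrary mean vector

X = rng.normal(theta, 1.0, size=(n, p))
lhs = (X[:, i] ** 2 / (X ** 2).sum(axis=1)).mean()  # E[X_i^2 / ||X||^2]

K = rng.poisson((theta ** 2).sum() / 2, size=n)     # K ~ P(||theta||^2 / 2)
w = 2 * theta[i] ** 2 / (theta ** 2).sum()
rhs = ((1 + w * K) / (p + 2 * K)).mean()            # right-hand side of i)
print(lhs, rhs)  # the two estimates agree up to Monte Carlo error
```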
2.1. Baranchik-type estimators
In this part, we study the minimaxity of the Baranchik-type estimator given by
$$\delta_\phi = \left(1 - \frac{\phi\left(\|X\|^2\right)}{\|X\|^2}\right) X. \tag{1}$$
Proposition 1. The risk function of the estimator defined in (1) under the loss function $L_D$ is
$$R_D(\delta_\phi, \theta) = \mathrm{Tr}(D) + E\left[\sum_{i=1}^p d_i \frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{p + 2K} \left(\frac{\phi^2\left(\chi_{p+2K}^2\right)}{\chi_{p+2K}^2} - 4\phi'\left(\chi_{p+2K}^2\right) + 4\frac{\phi\left(\chi_{p+2K}^2\right)}{\chi_{p+2K}^2}\right)\right] - 2\,\mathrm{Tr}(D)\, E\left[\frac{\phi\left(\chi_{p+2K}^2\right)}{\chi_{p+2K}^2}\right]. \tag{2}$$
Proof. We have
$$R_D(\delta_\phi, \theta) = E\left[L_D(\delta_\phi, \theta)\right] = E\left[\left(X - \theta - \frac{\phi(\|X\|^2)}{\|X\|^2} X\right)^t D \left(X - \theta - \frac{\phi(\|X\|^2)}{\|X\|^2} X\right)\right]$$
$$= E\left[(X - \theta)^t D (X - \theta)\right] + E\left[\frac{\phi^2(\|X\|^2)}{\|X\|^4} X^t D X\right] - 2E\left[(X - \theta)^t D\, \frac{\phi(\|X\|^2)}{\|X\|^2} X\right]$$
$$= \mathrm{Tr}(D) + E\left[\frac{\phi^2(\|X\|^2)}{\|X\|^4} \sum_{i=1}^p d_i X_i^2\right] - 2\sum_{i=1}^p d_i E\left[(X_i - \theta_i)\, \frac{\phi(\|X\|^2)}{\|X\|^2}\, X_i\right].$$
Using Lemma 1, we obtain
$$R_D(\delta_\phi, \theta) = \mathrm{Tr}(D) + E\left[\frac{\phi^2(\|X\|^2)}{\|X\|^4} \sum_{i=1}^p d_i X_i^2\right] - 2\sum_{i=1}^p d_i E\left[\frac{\partial}{\partial X_i}\left(\frac{\phi(\|X\|^2)}{\|X\|^2} X_i\right)\right]$$
$$= \mathrm{Tr}(D) + E\left[\frac{\phi^2(\|X\|^2)}{\|X\|^4} \sum_{i=1}^p d_i X_i^2\right] - 4E\left[\phi'(\|X\|^2)\, \frac{\sum_{i=1}^p d_i X_i^2}{\|X\|^2}\right] - 2\left(\sum_{i=1}^p d_i\right) E\left[\frac{\phi(\|X\|^2)}{\|X\|^2}\right] + 4E\left[\phi(\|X\|^2)\, \frac{\sum_{i=1}^p d_i X_i^2}{\|X\|^4}\right]$$
$$= \mathrm{Tr}(D) + E\left[\frac{\sum_{i=1}^p d_i X_i^2}{\|X\|^2}\left(\frac{\phi^2(\|X\|^2)}{\|X\|^2} - 4\phi'(\|X\|^2) + 4\frac{\phi(\|X\|^2)}{\|X\|^2}\right)\right] - 2\,\mathrm{Tr}(D)\, E\left[\frac{\phi(\|X\|^2)}{\|X\|^2}\right].$$
From the conditional independence, given $K$, of $X_i^2/\|X\|^2$ and $\|X\|^2$ for $i = 1, \ldots, p$, we get
$$R_D(\delta_\phi, \theta) = \mathrm{Tr}(D) + E\left[\sum_{i=1}^p d_i\, E\left(\frac{X_i^2}{\|X\|^2}\,\Big|\, K\right) E\left(\frac{\phi^2(\|X\|^2)}{\|X\|^2} - 4\phi'(\|X\|^2) + 4\frac{\phi(\|X\|^2)}{\|X\|^2}\,\Big|\, K\right)\right] - 2\,\mathrm{Tr}(D)\, E\left[\frac{\phi(\|X\|^2)}{\|X\|^2}\right].$$
Using Lemma 2, we have
$$R_D(\delta_\phi, \theta) = \mathrm{Tr}(D) - 2\,\mathrm{Tr}(D)\, E\left[\frac{\phi(\|X\|^2)}{\|X\|^2}\right] + E\left[\sum_{i=1}^p d_i \frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{p + 2K}\, E\left(\frac{\phi^2(\|X\|^2)}{\|X\|^2} - 4\phi'(\|X\|^2) + 4\frac{\phi(\|X\|^2)}{\|X\|^2}\,\Big|\, K\right)\right].$$
From Definition 1 and the properties of conditional expectation we have, for any two measurable functions $G$ and $H$, $E\left[G(\|X\|^2)\right] = E\left[G\left(\chi_{p+2K}^2\right)\right]$ and $E\left\{E\left[H(K) \mid K\right]\right\} = E\left[H(K)\right]$, where $K \sim P\left(\|\theta\|^2/2\right)$; thus we get the desired result. $\Box$
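To make formula (2) concrete, here is a simulation sketch of ours (the values of $p$, $\theta$, $D$ and the choice $\phi(t) = t/(t+1)$ are illustrative assumptions): it compares a direct Monte Carlo estimate of $R_D(\delta_\phi, \theta)$ with the right-hand side of (2), where $\chi^2_{p+2K}$ is drawn through its Poisson-mixture representation.

```python
# Our own check of formula (2): direct Monte Carlo risk of delta_phi versus
# the right-hand side of (2) with chi^2_{p+2K} simulated given K ~ P(||theta||^2/2).
import numpy as np

rng = np.random.default_rng(2)
p, n = 8, 400_000
theta = np.linspace(-1.0, 1.0, p)      # illustrative mean vector
d = 1.0 / np.arange(1, p + 1)          # D = diag(1, 1/2, ..., 1/p)
phi  = lambda t: t / (t + 1.0)         # phi_(1) from Example 1 below
dphi = lambda t: 1.0 / (t + 1.0) ** 2  # its derivative

# Direct Monte Carlo estimate of R_D(delta_phi, theta).
X = rng.normal(theta, 1.0, size=(n, p))
t = (X ** 2).sum(axis=1)
delta = (1 - phi(t) / t)[:, None] * X
risk_mc = (((delta - theta) ** 2) * d).sum(axis=1).mean()

# Right-hand side of (2) via K ~ P(||theta||^2 / 2) and chi^2_{p+2K}.
K = rng.poisson((theta ** 2).sum() / 2, size=n)
c = rng.chisquare(p + 2 * K)
coef = ((1 + 2 * np.outer(theta ** 2 / (theta ** 2).sum(), K)) * d[:, None]).sum(axis=0) / (p + 2 * K)
rhs = (d.sum() + coef * (phi(c) ** 2 / c - 4 * dphi(c) + 4 * phi(c) / c)
       - 2 * d.sum() * phi(c) / c).mean()
print(risk_mc, rhs)  # both close to each other, and below Tr(D) = d.sum()
```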
Note that the classical minimaxity result for Baranchik-type estimators, obtained for the loss function $L(\delta, \theta) = \sum_{i=1}^p (\delta_i - \theta_i)^2$ (i.e. $d_i = 1$ for any $i = 1, \ldots, p$), remains available; it is established in the following theorem.
Theorem 1. Assume that $\delta_\phi$ is given in (1) with $p \geq 3$. Under the loss function $L_D$ with $\dfrac{\mathrm{Tr}(D)}{\max_{1\leq i\leq p}(d_i)} \geq 2$, if
i) $\phi(\cdot)$ is a monotone non-decreasing function;
ii) $0 \leq \phi(\cdot) \leq 2\left(\dfrac{\mathrm{Tr}(D)}{\max_{1\leq i\leq p}(d_i)} - 2\right)$,
then $\delta_\phi$ is minimax.
Proof. From formula (2), and since $\sum_{i=1}^p \left(1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K\right) = p + 2K$, we have
$$R_D(\delta_\phi, \theta) \leq \mathrm{Tr}(D) + E\left[-4\phi'\left(\chi_{p+2K}^2\right) \sum_{i=1}^p d_i \frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{p + 2K}\right] + E\left[\max_{1\leq i\leq p}(d_i)\, \frac{\phi^2\left(\chi_{p+2K}^2\right) + 4\phi\left(\chi_{p+2K}^2\right)}{\chi_{p+2K}^2}\right] - 2\,\mathrm{Tr}(D)\, E\left[\frac{\phi\left(\chi_{p+2K}^2\right)}{\chi_{p+2K}^2}\right]$$
$$\leq \mathrm{Tr}(D) + E\left[-4\phi'\left(\chi_{p+2K}^2\right) \sum_{i=1}^p d_i \frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{p + 2K}\right] + E\left[\frac{\phi\left(\chi_{p+2K}^2\right)}{\chi_{p+2K}^2}\left(\max_{1\leq i\leq p}(d_i)\left(\phi\left(\chi_{p+2K}^2\right) + 4\right) - 2\,\mathrm{Tr}(D)\right)\right].$$
The term involving $\phi'$ is non-positive whenever $\phi(\cdot)$ is non-decreasing. Then a sufficient condition for $\delta_\phi$ to be minimax is that $\phi(\cdot)$ is a non-negative monotone non-decreasing function and $\max_{1\leq i\leq p}(d_i)\left(\phi\left(\chi_{p+2K}^2\right) + 4\right) - 2\,\mathrm{Tr}(D) \leq 0$, which is equivalent to $\phi(\cdot)$ being monotone non-decreasing with
$$0 \leq \phi(\cdot) \leq 2\left(\frac{\mathrm{Tr}(D)}{\max_{1\leq i\leq p}(d_i)} - 2\right). \qquad \Box$$
Example 1. Let the shrinkage functions $\phi_{(1)}(\|X\|^2) = \|X\|^2/\left(\|X\|^2 + 1\right)$ and $\phi_{(2)}(\|X\|^2) = 1 - \exp\left(-\|X\|^2\right)$, and the matrices $D_{(1)} = \mathrm{diag}(d_1 = 1, d_2 = 1/2, \ldots, d_p = 1/p)$ with $p \geq 7$ and $D_{(2)} = \mathrm{diag}(d_1 = 1/2, d_2 = 2/3, \ldots, d_p = p/(p+1))$ with $p \geq 4$. Both functions are monotone non-decreasing and bounded by 1, so $\phi_{(1)}(\cdot)$ and $\phi_{(2)}(\cdot)$ satisfy the conditions of Theorem 1. Then the estimators $\delta_{\phi_{(1)}}$ and $\delta_{\phi_{(2)}}$ are minimax for $p \geq 7$ under the loss function $L_{D_{(1)}}$ and minimax for $p \geq 4$ under the loss function $L_{D_{(2)}}$.
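The dimension requirements in the example can be checked directly: since $\sup \phi_{(1)} = \sup \phi_{(2)} = 1$, Theorem 1 asks for $2\left(\mathrm{Tr}(D)/\max_i d_i - 2\right) \geq 1$. A small verification sketch of ours:

```python
# Checking the conditions of Theorem 1 for Example 1 (our own verification).
import numpy as np

def ratio(d):
    return d.sum() / d.max()  # Tr(D) / max_i d_i

p = 7
d1 = 1.0 / np.arange(1, p + 1)                   # D_(1) = diag(1, 1/2, ..., 1/p)
print(ratio(d1) >= 2, 2 * (ratio(d1) - 2) >= 1)  # True True: both phi's (<= 1) qualify

p = 4
d2 = np.arange(1, p + 1) / np.arange(2, p + 2)   # D_(2) = diag(1/2, 2/3, ..., p/(p+1))
print(ratio(d2) >= 2, 2 * (ratio(d2) - 2) >= 1)  # True True
```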
Now, we discuss the special case where $\phi(\cdot) = a$ with $a$ a positive constant.
2.2. James-Stein estimator
Consider the estimator $\delta_a = \left(1 - a/\|X\|^2\right) X = X - \left(a/\|X\|^2\right) X$, where $a$ is a real parameter that can depend on $p$. Using Proposition 1, the risk function of the estimator $\delta_a$ is
$$R_D(\delta_a, \theta) = \mathrm{Tr}(D) + a(a + 4)\, E\left[\sum_{i=1}^p d_i \frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{(p - 2 + 2K)(p + 2K)}\right] - 2a\,\mathrm{Tr}(D)\, E\left[\frac{1}{p - 2 + 2K}\right]. \tag{3}$$
Proposition 2. Under the loss function $L_D$ with $p \geq 3$ and $\dfrac{\mathrm{Tr}(D)}{\max_{1\leq i\leq p}(d_i)} \geq 2$, we have
i) a sufficient condition for $\delta_a$ to dominate the MLE $X$ is
$$0 \leq a \leq 2\left(\frac{\mathrm{Tr}(D)}{\max_{1\leq i\leq p}(d_i)} - 2\right);$$
ii) the optimal value of $a$ that minimizes the risk function $R_D(\delta_a, \theta)$ is
$$a^* = \frac{\mathrm{Tr}(D)\, E\left[\dfrac{1}{p - 2 + 2K}\right]}{\alpha} - 2,$$
where $\alpha = E\left[\sum_{i=1}^p d_i \left(1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K\right)\Big/\left((p - 2 + 2K)(p + 2K)\right)\right]$.
Proof. i) From formula (3), a sufficient condition for $\delta_a$ to dominate the MLE $X$ is
$$a(a + 4)\, E\left[\sum_{i=1}^p d_i \frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{(p - 2 + 2K)(p + 2K)}\right] - 2a\,\mathrm{Tr}(D)\, E\left[\frac{1}{p - 2 + 2K}\right] \leq 0.$$
As
$$E\left[\sum_{i=1}^p d_i \frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{(p - 2 + 2K)(p + 2K)}\right] \leq \max_{1\leq i\leq p}(d_i)\, E\left[\frac{1}{p - 2 + 2K}\right],$$
a sufficient condition for $\delta_a$ to dominate the MLE $X$ is
$$a\left((a + 4)\max_{1\leq i\leq p}(d_i) - 2\,\mathrm{Tr}(D)\right) E\left[\frac{1}{p - 2 + 2K}\right] \leq 0,$$
which, for $a \geq 0$, is equivalent to the desired result.
ii) Using the convexity of the risk function $R_D(\delta_a, \theta)$ in $a$, one can easily show that the optimal value of $a$ that minimizes $R_D(\delta_a, \theta)$ is $a^* = \mathrm{Tr}(D)\, E\left(1/(p - 2 + 2K)\right)/\alpha - 2$, where $\alpha$ is as given in the statement. $\Box$
For $a = a^*$ we obtain the James-Stein estimator $\delta_{JS} = \left(1 - a^*/\|X\|^2\right) X$, which minimizes the risk function among the estimators $\delta_a$; from formula (3), the risk function of the James-Stein estimator $\delta_{JS}$ under the loss function $L_D$ is
$$R_D(\delta_{JS}, \theta) = \mathrm{Tr}(D) - \frac{\left(\mathrm{Tr}(D)\, E\left[\dfrac{1}{p - 2 + 2K}\right] - 2\alpha\right)^2}{\alpha}. \tag{4}$$
As the constant $a^*$ is non-negative, formula (4) shows that the James-Stein estimator $\delta_{JS}$ has a risk smaller than $\mathrm{Tr}(D)$; hence $\delta_{JS}$ is minimax.
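The oracle constant $a^*$ and the risk (4) can be evaluated numerically by truncating the Poisson series behind the expectations over $K$; in the sketch below (ours; the names `alpha`, `a_star` and the choice of $\theta$ are illustrative), the resulting risk stays below $\mathrm{Tr}(D)$:

```python
# Evaluating a* and formula (4) by truncated Poisson sums (our own sketch).
import numpy as np
from scipy.stats import poisson

p = 8
theta = np.linspace(-1.0, 1.0, p)              # illustrative mean vector
d = 1.0 / np.arange(1, p + 1)                  # D = diag(1, 1/2, ..., 1/p)
lam = (theta ** 2).sum()

k = np.arange(200)                             # truncation of K ~ P(lam/2)
w = poisson.pmf(k, lam / 2)
e_inv = (w / (p - 2 + 2 * k)).sum()            # E[1/(p - 2 + 2K)]
coef = (d[:, None] * (1 + 2 * np.outer(theta ** 2 / lam, k))).sum(axis=0)
alpha = (w * coef / ((p - 2 + 2 * k) * (p + 2 * k))).sum()

a_star = d.sum() * e_inv / alpha - 2
risk_js = d.sum() - (d.sum() * e_inv - 2 * alpha) ** 2 / alpha  # formula (4)
print(a_star, risk_js, d.sum())                # risk_js < Tr(D)
```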
3. The case of general $\Sigma$ and $Q$
Let $X \sim N_p(\theta, \Sigma)$ and consider the loss function $L_Q(\delta, \theta) = (\delta - \theta)^t Q (\delta - \theta)$, where the covariance matrix $\Sigma$ is known and $\Sigma^{1/2} Q \Sigma^{1/2}$ is a diagonalizable matrix. Take the change of variables $Y = P\Sigma^{-1/2} X$, where $P$ is an orthogonal matrix ($PP^t = I_p$) that diagonalizes $\Sigma^{1/2} Q \Sigma^{1/2}$, so that $P\Sigma^{1/2} Q \Sigma^{1/2} P^t = D^* = \mathrm{diag}(a_1, \ldots, a_p)$. Then $Y \sim N_p(\nu, I_p)$ with $\nu = P\Sigma^{-1/2}\theta$. Thus the risk function of the MLE $X$ associated with the loss function $L_Q$ is $\sum_{i=1}^p a_i = \mathrm{Tr}(D^*)$. Indeed,
$$R_Q(X, \theta) = E\left[(X - \theta)^t Q (X - \theta)\right] = E\left[\left(\Sigma^{1/2} P^t (Y - \nu)\right)^t Q\, \Sigma^{1/2} P^t (Y - \nu)\right] = E\left[(Y - \nu)^t P\Sigma^{1/2} Q \Sigma^{1/2} P^t (Y - \nu)\right]$$
$$= E\left[(Y - \nu)^t D^* (Y - \nu)\right] = \sum_{i=1}^p a_i E(Y_i - \nu_i)^2 = \mathrm{Tr}(D^*),$$
because for any $i$ $(i = 1, \ldots, p)$, $(Y_i - \nu_i)^2 \sim \chi_1^2$, the chi-square distribution with one degree of freedom, hence $E(Y_i - \nu_i)^2 = 1$. As the MLE $X$ is minimax, any estimator that dominates it is also minimax.
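The change of variables is easy to carry out numerically; the following sketch (ours, with randomly generated $\Sigma$ and $Q$) builds the symmetric square root of $\Sigma$ and an orthogonal $P$ with $P\Sigma^{1/2} Q \Sigma^{1/2} P^t = D^*$:

```python
# Constructing P and D* of Section 3 by eigendecomposition (our own sketch).
import numpy as np

rng = np.random.default_rng(3)
p = 5
A = rng.normal(size=(p, p)); Sigma = A @ A.T + p * np.eye(p)  # known covariance
B = rng.normal(size=(p, p)); Q = B @ B.T + np.eye(p)          # loss matrix

w, V = np.linalg.eigh(Sigma)                   # symmetric square root of Sigma
S_half = V @ np.diag(np.sqrt(w)) @ V.T

a, U = np.linalg.eigh(S_half @ Q @ S_half)     # columns of U are eigenvectors
P = U.T                                        # orthogonal: P @ P.T = I_p
D_star = P @ S_half @ Q @ S_half @ P.T
print(np.allclose(D_star, np.diag(a)))         # True: D* = diag(a_1, ..., a_p)
```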
3.1. Baranchik-type estimators
Now, consider the estimator given by
$$\delta_\phi = \left(1 - \frac{\phi\left(X^t \Sigma^{-1} X\right)}{X^t \Sigma^{-1} X}\right) X. \tag{5}$$
Proposition 3. Under the loss function $L_Q$, the risk function of the estimator $\delta_\phi$ is
$$R_Q(\delta_\phi, \theta) = \mathrm{Tr}(D^*) + E\left[\sum_{i=1}^p a_i \frac{1 + \dfrac{2\nu_i^2}{\|\nu\|^2} K}{p + 2K} \left(\frac{\phi^2\left(\chi_{p+2K}^2\right)}{\chi_{p+2K}^2} - 4\phi'\left(\chi_{p+2K}^2\right) + 4\frac{\phi\left(\chi_{p+2K}^2\right)}{\chi_{p+2K}^2}\right)\right] - 2\,\mathrm{Tr}(D^*)\, E\left[\frac{\phi\left(\chi_{p+2K}^2\right)}{\chi_{p+2K}^2}\right],$$
where $K \sim P\left(\|\nu\|^2/2\right)$.
Proof. We have
$$R_Q(\delta_\phi, \theta) = E\left[\left((X - \theta) - \frac{\phi\left(X^t\Sigma^{-1}X\right)}{X^t\Sigma^{-1}X} X\right)^t Q \left((X - \theta) - \frac{\phi\left(X^t\Sigma^{-1}X\right)}{X^t\Sigma^{-1}X} X\right)\right].$$
Using the change of variables $Y = P\Sigma^{-1/2} X$, where $P$ is an orthogonal matrix that diagonalizes $\Sigma^{1/2} Q \Sigma^{1/2}$, we have $Y \sim N_p(\nu, I_p)$ with $\nu = P\Sigma^{-1/2}\theta$ and $X^t\Sigma^{-1}X = \|Y\|^2$, hence
$$R_Q(\delta_\phi, \theta) = E\left[\left(\Sigma^{1/2}P^t\left((Y - \nu) - \frac{\phi(\|Y\|^2)}{\|Y\|^2} Y\right)\right)^t Q\, \Sigma^{1/2}P^t\left((Y - \nu) - \frac{\phi(\|Y\|^2)}{\|Y\|^2} Y\right)\right]$$
$$= E\left[\left((Y - \nu) - \frac{\phi(\|Y\|^2)}{\|Y\|^2} Y\right)^t P\Sigma^{1/2} Q \Sigma^{1/2} P^t \left((Y - \nu) - \frac{\phi(\|Y\|^2)}{\|Y\|^2} Y\right)\right]$$
$$= E\left[\left((Y - \nu) - \frac{\phi(\|Y\|^2)}{\|Y\|^2} Y\right)^t D^* \left((Y - \nu) - \frac{\phi(\|Y\|^2)}{\|Y\|^2} Y\right)\right] = R_{D^*}(\delta_\phi^*, \nu),$$
where $\|\cdot\|$ is the usual Euclidean norm in $\mathbb{R}^p$, $P\Sigma^{1/2} Q \Sigma^{1/2} P^t = D^* = \mathrm{diag}(a_1, \ldots, a_p)$ and $\delta_\phi^* = \left(1 - \phi(\|Y\|^2)/\|Y\|^2\right) Y$. From Proposition 1, we obtain the desired result. $\Box$
Theorem 2. Assume that $\delta_\phi$ is given by (5) with $p \geq 3$. Under the loss function $L_Q$ with $\dfrac{\mathrm{Tr}(D^*)}{\max_{1\leq i\leq p}(a_i)} \geq 2$, if
i) $\phi(\cdot)$ is monotone non-decreasing;
ii) $0 \leq \phi(\cdot) \leq 2\left(\dfrac{\mathrm{Tr}(D^*)}{\max_{1\leq i\leq p}(a_i)} - 2\right)$,
then $\delta_\phi$ is minimax.
The proof is the same as that given for Theorem 1.
3.2. James-Stein estimator
Consider the estimator $\delta_b = \left(1 - b/\left(X^t\Sigma^{-1}X\right)\right) X$. Using Proposition 3, one can easily show that the risk function of the estimator $\delta_b$ under the loss function $L_Q$ is
$$R_Q(\delta_b, \theta) = \mathrm{Tr}(D^*) + b(b + 4)\, E\left[\sum_{i=1}^p a_i \frac{1 + \dfrac{2\nu_i^2}{\|\nu\|^2} K}{(p - 2 + 2K)(p + 2K)}\right] - 2b\,\mathrm{Tr}(D^*)\, E\left[\frac{1}{p - 2 + 2K}\right],$$
where $K \sim P\left(\|\nu\|^2/2\right)$. From the last formula we deduce immediately that a sufficient condition for $\delta_b$ to dominate the MLE $X$ is $0 \leq b \leq 2\left(\mathrm{Tr}(D^*)/\max_{1\leq i\leq p}(a_i) - 2\right)$, and the optimal value of $b$ that minimizes the risk function $R_Q(\delta_b, \theta)$ is
$$b^* = \frac{\mathrm{Tr}(D^*)\, E\left[\dfrac{1}{p - 2 + 2K}\right]}{\beta} - 2,$$
where $\beta = E\left[\sum_{i=1}^p a_i \left(1 + \dfrac{2\nu_i^2}{\|\nu\|^2} K\right)\Big/\left((p - 2 + 2K)(p + 2K)\right)\right]$.
For $b = b^*$ we obtain the James-Stein estimator $\delta_{JS} = \left(1 - b^*/\left(X^t\Sigma^{-1}X\right)\right) X$, which minimizes the risk function of $\delta_b$. Its risk function associated with the loss function $L_Q$ is
$$R_Q(\delta_{JS}, \theta) = \mathrm{Tr}(D^*) - \frac{\left(\mathrm{Tr}(D^*)\, E\left[\dfrac{1}{p - 2 + 2K}\right] - 2\beta\right)^2}{\beta}. \tag{6}$$
From formula (6), we note that $\delta_{JS}$ dominates the MLE $X$; thus $\delta_{JS}$ is minimax.
4. Simulation results
In this section we take the model $X \sim N_p(\theta, I_p)$ where $\theta = (\theta_1, \ldots, \theta_1)^t$, and we recall the Baranchik-type estimators and the matrices $D_{(1)}$ and $D_{(2)}$ given in Example 1, i.e. $\delta_{\phi_{(1)}} = \left(1 - \phi_{(1)}(\|X\|^2)/\|X\|^2\right) X$ and $\delta_{\phi_{(2)}} = \left(1 - \phi_{(2)}(\|X\|^2)/\|X\|^2\right) X$ with $\phi_{(1)}(\|X\|^2) = \|X\|^2/\left(\|X\|^2 + 1\right)$, $\phi_{(2)}(\|X\|^2) = 1 - \exp\left(-\|X\|^2\right)$, $D_{(1)} = \mathrm{diag}(d_1 = 1, d_2 = 1/2, \ldots, d_p = 1/p)$ and $D_{(2)} = \mathrm{diag}(d_1 = 1/2, d_2 = 2/3, \ldots, d_p = p/(p+1))$. We also recall the form of the James-Stein estimator $\delta_{JS} = \delta_{a^*} = \left(1 - a^*/\|X\|^2\right) X$, where $a^* = \mathrm{Tr}(D)\, E\left(1/(p - 2 + 2K)\right)/\alpha - 2$ and $\alpha = E\left[\sum_{i=1}^p d_i\left(1 + \left(2\theta_i^2/\|\theta\|^2\right) K\right)\Big/\left((p - 2 + 2K)(p + 2K)\right)\right]$. We graph the risk ratios of the estimators cited above to the MLE, associated with the loss functions $L_{D_{(1)}}$ and $L_{D_{(2)}}$ and denoted respectively $R(\delta_{JS}, \theta)/R(X, \theta)$, $R(\delta_{\phi_{(1)}}, \theta)/R(X, \theta)$ and $R(\delta_{\phi_{(2)}}, \theta)/R(X, \theta)$, as functions of $\lambda = \theta_1^2$ for various values of $p$.
In Figs. 1-4, we note that the risk ratios $R(\delta_{JS}, \theta)/R(X, \theta)$, $R(\delta_{\phi_{(1)}}, \theta)/R(X, \theta)$ and $R(\delta_{\phi_{(2)}}, \theta)/R(X, \theta)$ are less than 1; thus the estimators $\delta_{JS}$, $\delta_{\phi_{(1)}}$ and $\delta_{\phi_{(2)}}$ are minimax for $p = 8$ and $p = 12$ under the loss function $L_{D_{(1)}}$, and also minimax for $p = 4$ and $p = 6$ under the loss function $L_{D_{(2)}}$.
Fig. 1. Graph of the risk ratios $R(\delta_{JS}, \theta)/R(X, \theta)$, $R(\delta_{\phi_{(1)}}, \theta)/R(X, \theta)$ and $R(\delta_{\phi_{(2)}}, \theta)/R(X, \theta)$ as functions of $\lambda = \theta_1^2$ for $p = 8$ under the loss function $L_{D_{(1)}}$

Fig. 2. The same risk ratios as functions of $\lambda = \theta_1^2$ for $p = 12$ under the loss function $L_{D_{(1)}}$

Fig. 3. The same risk ratios as functions of $\lambda = \theta_1^2$ for $p = 4$ under the loss function $L_{D_{(2)}}$

Fig. 4. The same risk ratios as functions of $\lambda = \theta_1^2$ for $p = 6$ under the loss function $L_{D_{(2)}}$
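A sketch of ours reproducing the experiment behind Figs. 1-4 under $L_{D_{(1)}}$ (the sample size, the truncation level and the grid of $\lambda$ values are our choices; the other cases are analogous):

```python
# Monte Carlo risk ratios to the MLE under L_{D(1)} (our sketch of Section 4).
import numpy as np
from scipy.stats import poisson

def a_star(p, d, theta):
    # Oracle constant of Section 2.2 via truncated Poisson sums.
    lam = (theta ** 2).sum()
    k = np.arange(200); w = poisson.pmf(k, lam / 2)
    e_inv = (w / (p - 2 + 2 * k)).sum()
    coef = (d[:, None] * (1 + 2 * np.outer(theta ** 2 / lam, k))).sum(axis=0)
    alpha = (w * coef / ((p - 2 + 2 * k) * (p + 2 * k))).sum()
    return d.sum() * e_inv / alpha - 2

def risk_ratio(shrink, p, d, theta, n=100_000, seed=0):
    X = np.random.default_rng(seed).normal(theta, 1.0, size=(n, p))
    t = (X ** 2).sum(axis=1)
    delta = (1 - shrink(t) / t)[:, None] * X
    return (((delta - theta) ** 2) * d).sum(axis=1).mean() / d.sum()

p = 8
d = 1.0 / np.arange(1, p + 1)                       # D_(1)
for lam1 in (0.5, 2.0, 5.0, 10.0):                  # lambda = theta_1^2
    theta = np.full(p, np.sqrt(lam1))
    a = a_star(p, d, theta)
    print(lam1,
          risk_ratio(lambda t: np.full_like(t, a), p, d, theta),  # delta_JS
          risk_ratio(lambda t: t / (t + 1), p, d, theta),         # delta_phi(1)
          risk_ratio(lambda t: 1 - np.exp(-t), p, d, theta))      # delta_phi(2)
# All printed ratios stay below 1, consistent with Figs. 1-4.
```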
5. Appendix
Lemma 3 (Bock [5]). Let $X \sim N_p(\theta, I_p)$ where $X = (X_1, \ldots, X_p)^t$ and $\theta = (\theta_1, \ldots, \theta_p)^t$. Then, for any measurable function $h : [0, +\infty) \to \mathbb{R}$, we have
$$E\left[h\left(\|X\|^2\right) X_i^2\right] = E\left[h\left(\chi_{p+2+2K}^2\right)\right] + \theta_i^2\, E\left[h\left(\chi_{p+4+2K}^2\right)\right],$$
where $K \sim P\left(\|\theta\|^2/2\right)$, the Poisson distribution with parameter $\|\theta\|^2/2$.
Lemma 4 (Bock [5]). Let $f$ be a real-valued measurable function defined on the integers and let $K \sim P(\lambda/2)$ be a Poisson random variable with parameter $\lambda/2$. Then
$$\lambda\, E\left[f(K)\right] = E\left[2K f(K - 1)\right],$$
if both sides exist.
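A quick numerical confirmation of this identity (our check, with $f(k) = 1/(p + 2 + 2k)$, the case needed in the proof of Lemma 2 below):

```python
# Numerical check (ours) of Lemma 4 with f(k) = 1/(p + 2 + 2k).
import numpy as np
from scipy.stats import poisson

p, lam = 6, 3.7
k = np.arange(300)                                # truncation of the Poisson support
w = poisson.pmf(k, lam / 2)
lhs = lam * (w / (p + 2 + 2 * k)).sum()           # lam * E[f(K)]
rhs = (w * 2 * k / (p + 2 + 2 * (k - 1))).sum()   # E[2K f(K - 1)]; k = 0 term vanishes
print(lhs, rhs)                                   # the two sides coincide
```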
Proof of Lemma 2. i) Using Lemma 3 with $h(t) = 1/t$ and Definition 1, and since $E\left[1/\chi_m^2\right] = 1/(m - 2)$, we obtain
$$E\left[\frac{X_i^2}{\|X\|^2}\right] = E\left[\frac{1}{\chi_{p+2+2K}^2}\right] + \theta_i^2\, E\left[\frac{1}{\chi_{p+4+2K}^2}\right] = E\left[\frac{1}{p + 2K}\right] + \theta_i^2\, E\left[\frac{1}{p + 2 + 2K}\right],$$
where $K \sim P\left(\|\theta\|^2/2\right)$, the Poisson distribution with parameter $\|\theta\|^2/2$. From Lemma 4, we have
$$E\left[\frac{X_i^2}{\|X\|^2}\right] = E\left[\frac{1}{p + 2K}\right] + \frac{\theta_i^2}{\|\theta\|^2}\, E\left[\frac{2K}{p + 2K}\right] = E\left[\frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{p + 2K}\right].$$
ii) Using Lemma 3 with $h(t) = 1/t^2$ and Definition 1, and since $E\left[1/\left(\chi_m^2\right)^2\right] = 1/\left((m - 2)(m - 4)\right)$, we obtain
$$E\left[\frac{X_i^2}{\|X\|^4}\right] = E\left[\frac{1}{(p - 2 + 2K)(p + 2K)}\right] + \theta_i^2\, E\left[\frac{1}{(p + 2K)(p + 2 + 2K)}\right],$$
where $K \sim P\left(\|\theta\|^2/2\right)$. From Lemma 4, we have
$$E\left[\frac{X_i^2}{\|X\|^4}\right] = E\left[\frac{1}{(p - 2 + 2K)(p + 2K)}\right] + \frac{\theta_i^2}{\|\theta\|^2}\, E\left[\frac{2K}{(p - 2 + 2K)(p + 2K)}\right] = E\left[\frac{1 + \dfrac{2\theta_i^2}{\|\theta\|^2} K}{(p - 2 + 2K)(p + 2K)}\right]. \qquad \Box$$
Conclusion
Stein [10] initiated the study of the estimation of the mean $\theta$ of a multivariate Gaussian random variable $N_p(\theta, \sigma^2 I_p)$ in $\mathbb{R}^p$ by shrinkage estimators deduced from the usual estimator. Many authors have continued to work in this field. Most of them studied the minimaxity of these estimators under the usual quadratic risk function; we cite for example [5, 7]. Other authors investigated the stability of the minimaxity property when the dimension of the parameter space and the sample size are large; we refer to [3, 6]. In this work we studied the minimaxity of Baranchik-type estimators relative to the general quadratic loss function. We obtained results similar to those of the classical case. A natural next step would be to investigate whether similar results on minimaxity and on the asymptotic behavior of risk ratios hold in the general case of spherically symmetric models.
The authors are extremely grateful to the editor and the referees for carefully reading the paper and for their valuable comments and suggestions, which greatly improved its presentation. This research is supported by the Thematic Research Agency in Science and Technology (ATRST-Algeria).
References
[1] A.J.Baranchik, Multiple regression and estimation of the mean of a multivariate normal distribution, Stanford Univ., Technical Report, no. 51, 1964.
[2] A.J.Baranchik, A family of minimax estimators of the mean of a multivariate normal distribution, The Annals of Mathematical Statistics, 41(1970), no. 2, 642-645.
[3] A.Benkhaled, A.Hamdaoui, General classes of shrinkage estimators for the multivariate normal mean with unknown variance: minimaxity and limit of risks ratios, Kragujevac J. Math., 46(2019), no. 2, 193-213.
[4] D.Benmansour, A.Hamdaoui, Limit of the ratio of risks of James-Stein estimators with unknown variance, Far East J. Theo. Stat., 36(2011), no. 1, 31-53.
[5] M.E.Bock, Minimax estimators of the mean of a multivariate normal distribution, Ann. Statist., 3(1975), no. 1, 209-218.
[6] A.Hamdaoui, D.Benmansour, Asymptotic properties of risks ratios of shrinkage estimators, Hacet. J. Math. Stat., 44(2015), no. 5, 1181-1195.
[7] W.James, C.Stein, Estimation with quadratic loss, Proc. 4th Berkeley Symp. Math. Statist. Prob., Univ. of California Press, Berkeley, Vol. 1, 1961, 361-379.
[8] K.Selahattin, D.Issam, The optimal extended balanced loss function estimators, J. Comput. Appl. Math., 345(2019), 86-98.
[9] K.Selahattin, S.Sadullah, M.R.Özkale, H.Güler, More on the restricted ridge regression estimation, J. Stat. Comput. Simul., 81(2011), 1433-1448.
[10] C.Stein, Inadmissibility of the usual estimator for the mean of a multivariate normal distribution, Proc. 3rd Berkeley Symp. Math. Statist. Prob., Univ. of California Press, Berkeley, Vol. 1, 1956, 197-206.
[11] C.Stein, Estimation of the mean of a multivariate normal distribution, Ann. Statist., 9(1981), no. 6, 1135-1151.
[12] W.E.Strawderman, Minimax estimation of location parameters for certain spherically symmetric distributions, J. Multivariate Anal., 4(1974), no. 1, 255-264.
[13] X.Xie, S.C.Kou, L.Brown, Optimal shrinkage estimators of mean parameters in family of distributions with quadratic variance, Ann. Statist., 44(2016), no. 2, 564-597.