
A BAYESIAN ANALYSIS ON WEIBULL MODEL ALLOWING NEARLY INSTANTANEOUS FAILURES

K. Muralidharan^1, Rajiv Parikh^2 and C. D. Lai^3

^1 Department of Statistics, Faculty of Science, The M. S. University of Baroda, Vadodara-390 002, India. Email: muralikustat@gmail.com

^2 Department of Statistics, Bhavnagar University, Bhavnagar-364 002, India. Email: [email protected]

^3 Institute of Fundamental Sciences, Massey University, Palmerston North, New Zealand. Email: [email protected]

Abstract: In this article, we study maximum likelihood as well as Bayes estimation of the parameters of the mixture of a Weibull with 'nearly instantaneous failure' introduced in Lai et al. (2007). For maximum likelihood estimation, the EM algorithm is used. For Bayes estimation of the parameters, we use three different algorithms, namely the Population Monte Carlo method (PMC), a mixture version of the Metropolis-Hastings algorithm and the Gibbs sampler. The methods are compared using a simulation study. A numerical example is also discussed at the end of the paper.

Key Words: Maximum Likelihood, Expectation-Maximization, Metropolis-Hastings, Gibbs, Population Monte Carlo method, Bayes Estimation, Mixture of Non-Degenerate and Weibull Model.

1. Introduction

The mixed failure time distribution arises frequently in many different contexts in the statistical literature. For instance, when units are put on a life testing experiment, some of the units fail instantaneously and thereafter the lifetimes of the remaining units follow a distribution such as the exponential, Weibull or gamma. Such situations may be represented as a mixture of a singular distribution at zero and a positive continuous random variable. Lai et al. (2007) proposed a model that is a mixture of a generalized Dirac delta function and the 2-parameter Weibull, having closed form expressions for the survival function and hazard rate. The density of such a mixture can be written as

f(x) = p δ_d(x − x_0) + q α λ^α x^(α−1) e^(−(λx)^α),   p + q = 1, q = 1 − p, 0 < p < 1,   (1)

where

δ_d(x − x_0) = 1/d for x_0 ≤ x ≤ x_0 + d, and 0 otherwise,

for sufficiently small d. Here p > 0 is the mixing proportion. We note that

δ(x − x_0) = lim_{d→0} δ_d(x − x_0),

where δ(·) is the Dirac delta function well known in mathematical analysis. One can view the Dirac delta function as the limit of a normal density with zero mean and standard deviation tending to 0. For a fixed value of d, the distribution becomes a mixture of a Weibull and a uniform distribution.

As a special case, the model in (1) with x_0 = 0 becomes a mixture of a Weibull and a uniform distribution (see Muralidharan and Lathika, 2006). The pdf of the mixture of a Weibull with 'nearly instantaneous failure' occurring uniformly over [0, d] can be given as

f(x) = p/d + (1 − p) α λ^α x^(α−1) e^(−(λx)^α),   0 < p < 1.   (2)

Taking f_1 ~ U(x_0, x_0 + d) and f_2 ~ Weibull(α, λ), (2) can be written as

f(x) = p f_1(x_i | d) + (1 − p) f_2(x_i | α, λ).   (3)
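To make the model concrete, the following sketch (in Python with NumPy; the function and variable names are ours and purely illustrative) evaluates the density (2) with x_0 = 0 and simulates from the two-component representation (3) by first drawing the component indicator.

```python
import numpy as np

def mixture_pdf(x, p, d, alpha, lam):
    """Density (2): 'nearly instantaneous failure' uniform on [0, d], mixed with Weibull(alpha, lam)."""
    x = np.asarray(x, dtype=float)
    f1 = np.where((x >= 0.0) & (x <= d), 1.0 / d, 0.0)                    # uniform component
    f2 = alpha * lam**alpha * x**(alpha - 1) * np.exp(-(lam * x)**alpha)  # Weibull component
    return p * f1 + (1.0 - p) * f2

def sample_mixture(n, p, d, alpha, lam, rng=None):
    """Draw n observations from (3) by first choosing the component for each unit."""
    rng = np.random.default_rng(rng)
    z = rng.random(n) < p                    # z_i = 1 -> 'instantaneous' (uniform) component
    u = d * rng.random(n)                    # U(0, d)
    w = rng.weibull(alpha, n) / lam          # if W is standard Weibull(alpha), W/lam has pdf f_2
    return np.where(z, u, w), z
```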

If X_1, X_2, ..., X_n is a random sample of size n from (3), then the likelihood function is given by

L(p, α, λ, d | x) = ∏_{i=1}^n f(x_i).

Introducing a latent variable Z_i into the likelihood function, we have

L(p, α, λ, d | x, z) = ∏_{i=1}^n {p f_1(x_i | d)}^(z_i) {(1 − p) f_2(x_i | α, λ)}^(1−z_i),   (4)

where

P(z_i = 1) ∝ p f_1(x_i | d) and P(z_i = 0) ∝ (1 − p) f_2(x_i | α, λ). After some simplification, the log-likelihood can be written as

ln L(p, α, λ, d | x, z) = Σ_{i=1}^n [ z_i ln( (p/(1 − p)) · f_1(x_i | d)/f_2(x_i | α, λ) ) + ln((1 − p) f_2(x_i | α, λ)) ].   (5)

For further development in the study, we make use of the likelihood given in (5). One may refer to Lai et al. (2007) for various characteristics of the model and for parametric estimation of the model.

Here, we compute the maximum likelihood estimator using the EM algorithm (Section 2), while in Section 3 the Bayes estimators of the parameters are obtained. The posterior samples are generated using three different approaches, namely the Population Monte Carlo method, a mixture version of the Metropolis-Hastings algorithm, and the Gibbs sampler. The performance of the algorithms is evaluated using a simulation study. The last section discusses a practical example.

2. Maximum Likelihood Estimators

To find the MLEs of the parameters of the underlying distribution we use the EM algorithm of Dempster, Laird and Rubin (1977). The EM algorithm has two advantages here. The first arises when the data indeed have missing values, due to problems with or limitations of the observation process. The second arises when optimizing the likelihood function is analytically intractable, but the likelihood can be simplified by assuming the existence of, and values for, additional missing (or hidden) variables. There is a large literature devoted to extensions and applications of the EM algorithm, summarized in McLachlan and Krishnan (1997).

The expectation step or E-step computes the expected complete-data log-likelihood. Let θ be the collection of parameters occurring in the mixture, i.e. θ = (p, d, α, λ).

E-Step: Taking the expectation gives the Q-function

Q(θ | θ^(t)) = E[ln L(p, θ | z, x) | x, θ^(t)]
            = Σ_{i=1}^n E(Z_i | x_i, θ^(t)) ln( (p/(1 − p)) · f_1(x_i | d)/f_2(x_i | α, λ) ) + ln((1 − p) f_2(x_i | α, λ)),   (6)

where

E(Z_i | x_i, θ^(t)) = P(z_i = 1 | x_i, θ^(t))
                    = p^(t) f_1(x_i | d^(t)) / [ p^(t) f_1(x_i | d^(t)) + (1 − p^(t)) f_2(x_i | α^(t), λ^(t)) ]
                    = (p^(t)/d^(t)) / [ p^(t)/d^(t) + (1 − p^(t)) α^(t) (λ^(t))^(α^(t)) x_i^(α^(t)−1) e^(−(λ^(t) x_i)^(α^(t))) ].   (7)
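A minimal sketch of the E-step (7), assuming x_0 = 0 and strictly positive observations:

```python
import numpy as np

def e_step(x, p, d, alpha, lam):
    """Responsibilities E(Z_i | x_i, theta^(t)) of the uniform component, equation (7)."""
    x = np.asarray(x, dtype=float)
    f1 = np.where((x >= 0.0) & (x <= d), 1.0 / d, 0.0)
    f2 = alpha * lam**alpha * x**(alpha - 1) * np.exp(-(lam * x)**alpha)
    num = p * f1
    return num / (num + (1.0 - p) * f2)
```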

M-Step: Here we maximize the expectation, i.e. the Q-function computed in the E-step; the two steps are then alternated as required. Maximizing Q(θ | θ^(t)) with respect to θ yields the update equations

∂Q/∂p = 0 ⇒ p^(t+1) = (1/n) Σ_{i=1}^n E(z_i),   (8)

∂Q/∂d = 0 ⇒ −(1/d) Σ_{i=1}^n E(z_i) = 0,   (9)

∂Q/∂λ = 0 ⇒ λ^(t+1) = [ Σ_{i=1}^n (1 − E(z_i)) / Σ_{i=1}^n (1 − E(z_i)) x_i^(α^(t)) ]^(1/α^(t)),   (10)

∂Q/∂α = 0 ⇒ Σ_{i=1}^n (1 − E(z_i)) [ 1/α + ln(λ^(t)) + ln(x_i) − (λ^(t) x_i)^α ln(λ^(t) x_i) ] = 0.   (11)

Since equation (9) cannot be solved analytically, we use d^(t+1) = max(x_i), since x_0 ≤ x_i ≤ x_0 + d. To obtain α^(t+1), we solve equation (11) numerically using the Newton-Raphson method.

The EM algorithm starts by assigning initial values to all the parameters to be estimated. It then iteratively alternates between the E-step and the M-step: the E-step computes the expected log-likelihood and the M-step re-estimates all the parameters by maximizing the Q-function. If the convergence criterion is not met, the parameters p, α, λ and d are updated and the two steps are repeated until the likelihood converges. Every iteration is guaranteed to increase the log-likelihood, and the algorithm converges to a local maximum of the likelihood function.
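Putting the two steps together, the sketch below iterates (7)-(11). It assumes x_0 = 0 and strictly positive data, takes d to be the largest observation attributed to the uniform component (one reading of the rule d^(t+1) = max(x_i) given below (9)), and solves (11) with a bracketing root-finder instead of Newton-Raphson; the starting values, iteration count and bracket are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def weibull_pdf(x, alpha, lam):
    return alpha * lam**alpha * x**(alpha - 1) * np.exp(-(lam * x)**alpha)

def em_fit(x, p0=0.5, alpha0=1.0, lam0=1.0, n_iter=200):
    """EM iterations for (p, d, alpha, lam) based on equations (7)-(11)."""
    x = np.asarray(x, dtype=float)
    p, alpha, lam, d = p0, alpha0, lam0, x.max()
    for _ in range(n_iter):
        # E-step (7): responsibilities of the uniform component
        f1 = np.where(x <= d, 1.0 / d, 0.0)
        w = p * f1 / (p * f1 + (1 - p) * weibull_pdf(x, alpha, lam))
        # M-step (8): mixing proportion
        p = w.mean()
        # d-update: largest observation assigned to the uniform component
        if np.any(w > 0.5):
            d = x[w > 0.5].max()
        # (10): lambda update at the current alpha
        lam = ((1 - w).sum() / ((1 - w) * x**alpha).sum()) ** (1.0 / alpha)
        # (11): alpha from the score equation; the paper uses Newton-Raphson,
        # a bracketing root-finder is used here (bracket limits are assumed)
        score = lambda a: np.sum((1 - w) * (1.0 / a + np.log(lam * x)
                                            - (lam * x)**a * np.log(lam * x)))
        alpha = brentq(score, 1e-2, 50.0)
    return p, d, alpha, lam
```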

3. Bayesian Estimation when all parameters are unknown

Let the prior distributions be

π(p) = (1/B(η, β)) p^(η−1) (1 − p)^(β−1),   η, β > 0,

π(d) = 1/ε,   0 < d < ε,

π(α) = θ e^(−αθ),   θ > 0,

and


π(λ) = (b^a / Γ(a)) λ^(−(a+1)) e^(−b/λ),   a, b > 0.

Then the joint pdf becomes

f(x; p, d, α, λ) = ∏_{i=1}^n f(x_i) · π(p) π(d) π(α) π(λ)
                 = ∏_{i=1}^n [ p/d + (1 − p) α λ^α x_i^(α−1) e^(−(λ x_i)^α) ] · (1/ε) θ e^(−αθ) · (1/B(η, β)) p^(η−1) (1 − p)^(β−1) · (b^a/Γ(a)) λ^(−(a+1)) e^(−b/λ).   (12)

Thus, the posterior density is proportional to

π(p, d, α, λ | x) ∝ ∏_{i=1}^n [ p/d + (1 − p) α λ^α x_i^(α−1) e^(−(λ x_i)^α) ] e^(−αθ) p^(η−1) (1 − p)^(β−1) λ^(−(a+1)) e^(−b/λ).   (13)
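For the sampling algorithms that follow, it is convenient to have the unnormalized log of (13) available as a single function. The sketch below assumes x_0 = 0; the hyperparameter argument names (eta, beta, theta0, a, b, eps) mirror the priors above, and the default values are illustrative only.

```python
import numpy as np

def log_posterior(theta, x, eta=1.0, beta=1.0, theta0=1.0, a=2.0, b=2.0, eps=1.0):
    """Unnormalized log-posterior (13) for theta = (p, d, alpha, lam)."""
    p, d, alpha, lam = theta
    x = np.asarray(x, dtype=float)
    if not (0.0 < p < 1.0 and 0.0 < d < eps and alpha > 0.0 and lam > 0.0):
        return -np.inf                                                     # outside the prior support
    f1 = np.where(x <= d, 1.0 / d, 0.0)                                    # uniform component on [0, d]
    f2 = alpha * lam**alpha * x**(alpha - 1) * np.exp(-(lam * x)**alpha)   # Weibull component
    loglik = np.sum(np.log(p * f1 + (1.0 - p) * f2))
    logprior = ((eta - 1.0) * np.log(p) + (beta - 1.0) * np.log(1.0 - p)   # Beta(eta, beta) on p
                - theta0 * alpha                                           # Exp(theta0) on alpha
                - (a + 1.0) * np.log(lam) - b / lam)                       # inverse-gamma(a, b) on lam
    return loglik + logprior
```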

To generate samples from (13), we use the Population Monte Carlo (PMC) algorithm and a mixture version of the Metropolis-Hastings algorithm. The PMC algorithm is an iterated importance sampling scheme: it is an adaptive algorithm that calibrates the proposal distribution to the target distribution at each iteration by learning from the performance of the previous proposal distributions. Complete details about PMC can be found in Cappé et al. (2004). Letting θ = (p, d, α, λ), the PMC algorithm can be given as follows.

Algorithm-1

Initialize θ_1^(0), ..., θ_M^(0); set t = 1.

Step 1: For i = 1, ..., M:
1.1 Generate θ_i^(t) ~ q_{it}(θ | θ_i^(t−1)).
1.2 Compute the importance weights w_i^(t) ∝ π(θ_i^(t) | x) / q_{it}(θ_i^(t) | θ_i^(t−1)), normalized so that Σ_{i=1}^M w_i^(t) = 1.
1.3 Resample M values with replacement from the θ_i^(t)'s using the weights w_i^(t).

Step 2: Repeat Step 1 for t = 2, ..., N.
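A compact sketch of Algorithm-1 using independent Gaussian random-walk kernels around each particle; the kernel scales, the particle number and the initial ranges are assumptions rather than settings from the paper. With the log_posterior function sketched above, pmc(log_posterior, data) returns the final particle cloud, whose average gives the Bayes estimates.

```python
import numpy as np

def pmc(log_post, x, M=1000, T=20, rng=None):
    """Population Monte Carlo (Algorithm-1): propose around each particle, weight by
    target/proposal, then resample with replacement."""
    rng = np.random.default_rng(rng)
    scale = np.array([0.05, 0.01, 0.1, 0.1])                    # proposal std-devs for (p, d, alpha, lam)
    theta = np.column_stack([rng.uniform(0.05, 0.95, M),        # p
                             rng.uniform(0.01, 0.20, M),        # d
                             rng.uniform(0.5, 3.0, M),          # alpha
                             rng.uniform(0.5, 3.0, M)])         # lam
    for _ in range(T):
        prop = theta + scale * rng.standard_normal((M, 4))             # step 1.1: propose
        logq = -0.5 * np.sum(((prop - theta) / scale) ** 2, axis=1)    # Gaussian kernel (constants cancel)
        logw = np.array([log_post(th, x) for th in prop]) - logq       # step 1.2: importance weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        theta = prop[rng.choice(M, size=M, replace=True, p=w)]         # step 1.3: resample
    return theta                          # particles approximately from the posterior
```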

Below we present the Metropolis-Hastings (M-H) algorithm corresponding to the posterior distribution in (13).

Algorithm-2

Initialize θ^(0); set t = 0.

Step 1: Generate θ~ ~ q(θ | θ^(t)) and u ~ Uniform(0, 1).

Step 2: Compute

r = [ f(x | θ~) π(θ~) q(θ^(t) | θ~) ] / [ f(x | θ^(t)) π(θ^(t)) q(θ~ | θ^(t)) ].

Step 3: If u < r, set θ^(t+1) = θ~; otherwise set θ^(t+1) = θ^(t).

Step 4: Repeat Steps 1 to 3 for t = 1, 2, ..., N and return the values θ^(1), ..., θ^(N).
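A sketch of Algorithm-2 with the priors used as the instrumental distributions, as noted in the next paragraph; the prior terms then cancel in r and the acceptance ratio reduces to a likelihood ratio. Hyperparameter defaults are illustrative, and the data are assumed strictly positive.

```python
import numpy as np

def loglik(theta, x):
    """Log-likelihood of the mixture (2) with x_0 = 0."""
    p, d, alpha, lam = theta
    x = np.asarray(x, dtype=float)
    f1 = np.where(x <= d, 1.0 / d, 0.0)
    f2 = alpha * lam**alpha * x**(alpha - 1) * np.exp(-(lam * x)**alpha)
    return np.sum(np.log(p * f1 + (1.0 - p) * f2))

def mh_prior_proposal(x, N=20000, eta=1.0, beta=1.0, theta0=1.0, a=2.0, b=2.0, eps=1.0, rng=None):
    """Independence M-H (Algorithm-2) proposing each parameter afresh from its prior."""
    rng = np.random.default_rng(rng)
    draw = lambda: np.array([rng.beta(eta, beta),           # p ~ Beta(eta, beta)
                             rng.uniform(0.0, eps),         # d ~ Uniform(0, eps)
                             rng.exponential(1.0 / theta0), # alpha ~ Exp(theta0)
                             b / rng.gamma(a)])             # lam ~ inverse-gamma(a, b)
    theta = draw()
    ll = loglik(theta, x)
    chain = np.empty((N, 4))
    for t in range(N):
        prop = draw()
        ll_prop = loglik(prop, x)
        if np.log(rng.random()) < ll_prop - ll:             # r reduces to a likelihood ratio here
            theta, ll = prop, ll_prop
        chain[t] = theta
    return chain                                            # discard a burn-in before averaging
```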

It is to be noted that in both algorithms the instrumental distributions q for (p, d, α, λ) are taken to be their prior distributions. As seen above, the Population Monte Carlo method is a combination of Markov chain Monte Carlo algorithms (for the construction of the proposal), importance sampling (for the construction of appropriate estimators), sampling importance resampling (for sample equalization) and iterated particle systems (for sample improvement). Thus the Population Monte Carlo (PMC) algorithm is in essence an iterated importance sampling scheme that, at each iteration, simultaneously produces a sample approximately simulated from the target distribution and provides unbiased estimates for that distribution. The sample is constructed using sample-dependent proposals for generation and importance sampling weights for pruning the proposed sample.

Now, we describe another approach to Bayes estimation in the mixture context. Introducing the latent variables into the structure, the posterior density can be written as

π(p, d, α, λ, z | x) ∝ ∏_{i=1}^n (p/d)^(z_i) [ (1 − p) α λ^α x_i^(α−1) e^(−(λ x_i)^α) ]^(1−z_i) · θ e^(−αθ) p^(η−1) (1 − p)^(β−1) λ^(−(a+1)) e^(−b/λ).   (14)

Then the full conditional distributions of (p, d, α, λ) are given by

π(d | z) ∝ d^(−n_1),

π(p | z) ∝ p^(n_1+η−1) (1 − p)^(n_2+β−1),

π(α | z, λ) ∝ α^(n_2) λ^(n_2 α) e^(−αθ) ∏_{i: z_i=0} x_i^(α−1) e^(−(λ x_i)^α),

and

π(λ | z, α) ∝ λ^(n_2 α − a − 1) e^(−b/λ) ∏_{i: z_i=0} e^(−(λ x_i)^α),   (15)

where n_1 = Σ_{i=1}^n I(z_i = 1), n_2 = Σ_{i=1}^n I(z_i = 0) and n = n_1 + n_2.

To generate a sample from these full conditional distributions, we use the following algorithm.

Algorithm-3

Initialize p^(0), d^(0), α^(0) and λ^(0); set t = 1.

Step 1: Generate z_i^(t) (i = 1, ..., n) from
P(z_i^(t) = 1) ∝ p^(t−1) / d^(t−1),
P(z_i^(t) = 0) ∝ (1 − p^(t−1)) α^(t−1) (λ^(t−1))^(α^(t−1)) x_i^(α^(t−1)−1) e^(−(λ^(t−1) x_i)^(α^(t−1))).

Step 2: Compute n_1^(t) = Σ_{i=1}^n I(z_i^(t) = 1) and n_2^(t) = Σ_{i=1}^n I(z_i^(t) = 0).

Step 3: Generate p^(t) ~ Beta(n_1^(t) + η, n_2^(t) + β).

Step 4: Generate d^(t) from its full conditional π(d | z^(t)) ∝ d^(−n_1^(t)).

Step 5: Generate α^(t) ~ π(α | z^(t), λ^(t−1)) using the M-H algorithm.

Step 6: Generate λ^(t) ~ π(λ | z^(t), α^(t)) using the M-H algorithm.

Step 7: Repeat Steps 1 to 6 for t = 2, 3, ..., N.
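Finally, a sketch in the spirit of Algorithm-3, with one Metropolis-Hastings move per iteration for α and λ (log-scale random walks are used here instead of the exponential candidates mentioned below). The treatment of the d-step, which draws from the full conditional d^(−n_1) restricted to [max{x_i : z_i = 1}, ε], the step sizes and the hyperparameter defaults are all assumptions; the data are again assumed strictly positive.

```python
import numpy as np

def gibbs(x, N=20000, eta=1.0, beta=1.0, theta0=1.0, a=2.0, b=2.0, eps=1.0, rng=None):
    """Data-augmented Gibbs sampler (Algorithm-3) with M-H sub-steps for alpha and lam."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    p, d, alpha, lam = 0.5, eps, 1.0, 1.0
    out = np.empty((N, 4))

    def log_cond_alpha(al, xw):   # full conditional of alpha given z and lam, up to a constant
        return (len(xw) * (np.log(al) + al * np.log(lam)) - theta0 * al
                + (al - 1.0) * np.log(xw).sum() - ((lam * xw) ** al).sum())

    def log_cond_lam(lm, xw):     # full conditional of lam given z and alpha, up to a constant
        return (len(xw) * alpha - a - 1.0) * np.log(lm) - b / lm - ((lm * xw) ** alpha).sum()

    for t in range(N):
        # Step 1: latent indicators z_i
        f1 = np.where(x <= d, 1.0 / d, 0.0)
        f2 = alpha * lam**alpha * x**(alpha - 1) * np.exp(-(lam * x)**alpha)
        z = rng.random(len(x)) < p * f1 / (p * f1 + (1.0 - p) * f2)
        xw, n1 = x[~z], int(z.sum())                        # Weibull-component data, count of z_i = 1
        # Step 3: p | z
        p = rng.beta(n1 + eta, len(xw) + beta)
        # Step 4: d | z, by inverting the CDF of the density proportional to d^(-n1) on [lo, eps]
        lo, u = (x[z].max() if n1 > 0 else 0.0), rng.random()
        d = (lo * (eps / lo) ** u if n1 == 1
             else (u * eps ** (1 - n1) + (1 - u) * lo ** (1 - n1)) ** (1.0 / (1 - n1)))
        # Steps 5-6: one log-scale random-walk M-H move each for alpha and lam
        prop = alpha * np.exp(0.1 * rng.standard_normal())
        if np.log(rng.random()) < log_cond_alpha(prop, xw) - log_cond_alpha(alpha, xw) + np.log(prop / alpha):
            alpha = prop
        prop = lam * np.exp(0.1 * rng.standard_normal())
        if np.log(rng.random()) < log_cond_lam(prop, xw) - log_cond_lam(lam, xw) + np.log(prop / lam):
            lam = prop
        out[t] = (p, d, alpha, lam)
    return out                                              # discard a burn-in before averaging
```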

To implement the M-H algorithm, a suitable candidate-generating density must be specified. If the domain explored is too small compared with the range of f, the Markov chain will have difficulty exploring this range and will converge very slowly, so q (here, for both α and λ, the candidate-generating density is taken to be an exponential distribution) should be chosen so that the chain converges to the target distribution. A wide range of choices of q has been given in the literature; see Tierney (1994), Chib and Greenberg (1995) and the references contained therein.

4. Simulation Study

Here we have used all three algorithms to generate posterior samples. A simulation study is carried out to compare the performance of the different algorithms. A sample of size 50 was drawn from the population for different values of the parameters, and on the basis of this sample the Bayes estimates were calculated using the three algorithms. It was observed that the number of iterations necessary to reach convergence for PMC is below 5,000, while for M-H it is below 15,000 and for Gibbs it is above 20,000. A deeper difficulty in implementing Algorithm 3 is the existence of computational trapping states. Table 1 shows the ML estimates and the Bayes estimates obtained with the three algorithms.

Table 1. MLE and Bayes estimates

Parameter   MLE      PMC      M-H      Gibbs
p = 0.2     0.2567   0.2084   0.2156   0.1692
d = 0.05    0.0486   0.0467   0.0532   0.0594
α = 2.0     1.8345   1.8857   1.8678   1.8523
λ = 1.5     1.4976   1.3885   1.3563   1.6974
p = 0.6     0.5678   0.6132   0.5817   0.6679
d = 0.05    0.0497   0.0522   0.0423   0.0526
α = 2.0     1.8754   1.8804   1.7923   1.9069
λ = 1.5     1.6925   1.4760   1.5398   1.6856

5. Application to real life data

Here, we consider the wood dryness data on 40 boards analyzed in Lai et al. (2007), with t_i = 0 for i = 1, 2, ..., 28; the other positive observations are 0.0463741, 0.0894855, 0.4, 0.42517, 0.623441, 0.6491, 0.73346, 1.35851, 1.77112, 1.86047, 2.12125, 2.12. Here we spread the zeros uniformly over an interval taking d = 0.042, so that t_1 = 0, t_2 = 0.0015, t_3 = 0.003, ..., t_28 = 0.042. We obtain the MLEs using the EM algorithm as p_MLE = 0.7625, d_MLE = 0.0418, α_MLE = 1.5689 and λ_MLE = 1.2458. The PMC method is used to find the Bayes estimates. Below we present two sets of Bayes estimates for the same data set but with different prior values.

(i) p_BAYES = 0.6899, d_BAYES = 0.0404, α_BAYES = 0.9363 and λ_BAYES = 1.9704 for θ = 1.6, η = 11, β = 0.9, ε = 2.3, a = 1.5, b = 1.9;

(ii) p_BAYES = 0.7187, d_BAYES = 0.0649, α_BAYES = 0.9015 and λ_BAYES = 1.9704 for θ = 2.5, η = 4.0, β = 0.05, ε = 2.3, a = 2.0, b = 2.0.
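The data set-up described above can be sketched as follows; the 28 spread values are taken as equally spaced over [0, 0.042] (approximating the 0.0015 increments quoted in the text), and the exact zero at t_1 may need to be replaced by a small positive value before fitting the Weibull component.

```python
import numpy as np

# Wood dryness data of 40 boards: 28 'zero' failures spread evenly over [0, d] with
# d = 0.042, followed by the 12 positive drying times quoted above.
d_spread = 0.042
spread_zeros = np.linspace(0.0, d_spread, 28)     # t_1 = 0, t_2 ~ 0.0015, ..., t_28 = 0.042
positives = np.array([0.0463741, 0.0894855, 0.4, 0.42517, 0.623441, 0.6491,
                      0.73346, 1.35851, 1.77112, 1.86047, 2.12125, 2.12])
wood = np.concatenate([spread_zeros, positives])  # n = 40 observations
```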

6. Sensitivity analysis

As can be seen from Tables 2 and 3, the estimates depend on the choice of d. In practice, the value of d can be estimated manually quite accurately from the data set. Even if the value of d is inside the peak of the target distribution (here the Weibull), we are still able to estimate all four parameters of the model. The estimates are consistent under both the parametric and the Bayesian set-ups.

Table 2. Uniform spread of "nearly instantaneous failures" with d = 0.135

                               p         d        α        λ
Maximum likelihood estimates   0.78960   0.135    1.6852   1.2398
Bayes estimates using PMC      0.8729    0.1265   1.2351   1.4388

Table 3. Uniform spread of "nearly instantaneous failures" with d = 0.2

                               p         d        α        λ
Maximum likelihood estimates   0.6125    0.2000   1.2341   1.3563
Bayes estimates using PMC      0.7142    0.2764   1.5434   1.7649

References

Cappé, O., Guillin, A., Marin, J.-M. and Robert, C. P. (2004). Population Monte Carlo. Journal of Computational and Graphical Statistics, 13, 907-930.

Casella, G. and George, E. I. (1992). Explaining the Gibbs sampler. The American Statistician, 46(3), 167-174.

Celeux, G. and Diebolt, J. (1985). The SEM algorithm: a probabilistic teacher algorithm derived from the EM algorithm for the mixture problem. Computational Statistics Quarterly, 2, 73-92.

Celeux, G., Hurn, M. and Robert, C. P. (2000). Computational and Inferential Difficulties with Mixture Posterior distributions. Journal of the American Statistical Association, 95(451), 957-970.


Chan, K. and Geyer, C. (1994). Discussion of "Markov Chains for Exploring Posterior Distributions". The Annals of Statistics, 22, 1747-1758.

Chib, S. and Greenberg, E. (1995). Understanding the Metropolis-Hastings Algorithm. The American Statistician, 49(4), 327-335.

Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society, Series B, 39, 1-38.

Gilks, W. R., Richardson, S. and Spiegelhalter, D. J. (2000). Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC, London.

Lai, C. D., Khoo, B. C., Muralidharan, K. and Xie, M. (2007). Weibull model allowing nearly instantaneous failures. Journal of Applied Mathematics and Decision Sciences, Article ID 90842, 11 pages.

Mengersen, K. L. and Tweedie, R. L. (1996). Rates of convergence of the Hastings and Metropolis algorithms. The Annals of Statistics, 24, 101-121.

Muralidharan, K. (1999). Tests for the mixing proportion in the mixture of a degenerate and exponential distribution. Journal of Indian Statistical Association, Vol. 37, issue 2.


Muralidharan, K. (2000). The UMVUE and Bayes estimate of reliability of mixed failure time distribution. Communications in Statistics - Simulation and Computation, 29(2), 603-619.

Muralidharan, K. and Lathika, P. (2006). Analysis of instantaneous and early failures in Weibull distribution, Metrika, 64(3), 305-316.

Raftery, A. and Lewis, S. (1992). How many Iterations in the Gibbs Sampler? In Bernardo, J. M., Berger, J. O., Dawid, A. P. and Smith, A. F. M. (eds.), Bayesian Statistics, 4, 763-773, Oxford: Oxford University Press.

Ritter, C. and Tanner, M. (1992). Facilitating the Gibbs sampler: The Gibbs Stopper and the Griddy-Gibbs Sampler, Journal of the American Statistical Association, 87(419), 861-868.
