
BAYESIAN APPROACH FOR HEAVY-TAILED MODEL FITTING IN TWO LOMAX POPULATIONS

Vijay Kumar Lingutla1, Nagamani Nadiminti2

1,2Department of Mathematics, School of Advanced Sciences, VIT-AP University, Inavolu, Beside AP Secretariat, Amaravati AP-522237, India [email protected], [email protected]

Abstract

Heavy-tailed data are commonly encountered in various real-world applications, particularly in finance, insurance, and reliability engineering. This study focuses on the Lomax distribution, a powerful tool for modeling heavy-tailed phenomena. We investigate the estimation of parameters in two Lomax populations characterized by a common shape parameter and distinct scale parameters. Our analysis employs both Maximum Likelihood Estimation (MLE) and Bayesian estimation techniques, recognizing the absence of closed-form solutions for the estimators. We utilize the Newton-Raphson method for numerical evaluation of the MLE and implement Lindley's approximation for Bayesian estimators with different priors under a symmetric loss function. Additionally, we estimate posterior densities using Gibbs sampling and bootstrapping methods to manage uncertainty. A Monte Carlo simulation study is conducted to assess the performance of the proposed estimators, providing insights into their behavior under various scenarios. This paper also discusses the application of these methodologies through a real-life example, demonstrating the practical utility of the proposed estimation techniques for analyzing heavy-tailed data.

Keywords: Lomax Distribution, Bayes estimation, Lindley's Approximation, Gibbs Sampling, Bootstrapping.

1. Introduction

In many real-world applications, data often exhibit heavy tails, meaning extreme values occur more frequently than predicted by normal distributions, impacting risk assessment and decision-making in fields such as finance, insurance, and reliability engineering. To accurately model these datasets, the Lomax distribution, also known as the Pareto Type II distribution proposed by Lomax [1], is particularly effective. Its suitability for heavy-tailed data is highlighted by Bryson [2], who emphasized its superiority over traditional distributions like the Exponential, Gamma, or Weibull. The Lomax distribution's flexibility and capacity to model various tail behaviors have led to its widespread use, as demonstrated by Hassan et al. [3] and Aljohani [4] in optimal step stress accelerated life testing and Ijaz [5] in characterizing electronic device lifespans. Moreover, Chakraborty et al. [6] proposed Generalized Lomax Models (GLM) to capture the non-linearities and heavy-tailed nature of complex network degree distributions, further underscoring the Lomax distribution's versatility in addressing real-world challenges.

Building on the importance of the Lomax distribution, researchers have extensively explored the estimation of its parameters using various methodologies. Okasha [7] utilized Bayesian and E-Bayesian methods for estimating the shape parameter, reliability, and hazard functions based on type-II censored data. Fitrilia et al. [8] employed Bayesian and E-Bayesian methods under the balanced squared error loss function for estimating the shape parameter with right-censored data. Ellah [9] applied Maximum Likelihood Estimation (MLE) and Bayesian methods, considering symmetric and asymmetric loss functions, to estimate both parameters, reliability, and hazard functions from record values. Hasanain et al. [10] implemented MLE and Bayesian estimation with three distinct loss functions for parameter estimation. Al-Bossly [11] employed MLE, Bayesian, and E-Bayesian methods for estimating the shape parameter while considering six different loss functions. Additionally, Kumari et al. [12] utilized MLE and Bayesian estimation under entropy and precautionary loss functions. These studies collectively contribute to enhancing our understanding of parameter estimation in the context of the Lomax distribution, showcasing a variety of approaches and methodologies.

Despite significant advancements in parameter estimation for single-population models, a notable gap persists in extending these methodologies to more complex scenarios involving two or more Lomax populations. Estimating a common parameter across two or more populations is a widely employed statistical method with diverse applications, aiding in comparative analyses and supporting risk assessment by identifying similarities or differences in variable distributions. The pioneering investigation into estimating the common mean of two normal populations was conducted by Graybill and Deal [13], who introduced a combined estimator that surpasses individual sample means concerning variance, subject to certain constraints on sample sizes. For further insights into estimating the common mean of two or more normal populations, one can refer to Moore and Krishnamoorthy [14], Tripathy and Kumar [15], and the relevant citations therein, which provide valuable perspectives from both classical and decision-theoretic standpoints.

In addition to normally distributed populations, researchers have extensively explored estimating common parameters for non-normally distributed populations. For instance, Ghosh and Razmpour [16] considered two exponential distributions and examined a common location parameter using UMVUE (Uniformly Minimum Variance Unbiased Estimator), Maximum Likelihood Estimation (MLE), and modified MLE approaches. Similarly, Jin and Pal [17] introduced enhanced estimators that surpassed MLE for estimating common location parameters of exponential distributions, utilizing convex loss functions. Azhad et al. [18] delved into several heterogeneous exponential distributions, estimating common location parameters through UMVUE, MLE, and modified MLE approaches. Additionally, Nagamani and Tripathy [19] investigated the estimation of common scale parameters for two Gamma populations, employing both MLE and Bayesian estimation methods, including simulation studies to assess the performance of the proposed methods. In a different context, Nagamani et al. [20] addressed two inverse Gaussian populations and estimated the common dispersion parameter, conducting simulation studies to evaluate their results. These studies collectively contribute to advancing parameter estimation methodologies across diverse distributional settings, offering valuable insights for analyzing various types of data.

In our study, we focus on two Lomax populations characterized by a common shape parameter but distinct scale parameters. To estimate the parameters, we employ both Maximum Likelihood and Bayesian estimation techniques, as closed-form estimators do not exist in our scenario. The numerical evaluation of these estimators is facilitated by the Newton-Raphson technique for Maximum Likelihood Estimation. For Bayesian estimation, we utilize Lindley's approximation with different priors under a symmetric loss function. Additionally, we estimate the posterior densities of the Bayesian estimators using Gibbs sampling and bootstrapping methods. Gibbs sampling, a technique for generating samples from a joint distribution, is particularly valuable in Bayesian statistics for handling complex posterior distributions. Conversely, bootstrapping, a resampling method, aids in estimating the sampling distribution of a statistic and can be adapted to the Bayesian context for uncertainty estimation. Both Gibbs sampling and bootstrapping play crucial roles in Bayesian data analysis, offering essential tools for managing complex models and estimating uncertainties. To assess the behavior of the various estimates, we conduct a Monte Carlo simulation study employing a well-constructed algorithm.

The paper is organized as follows: In Section 2, we derive the Maximum Likelihood Estimates (MLEs) and asymptotic confidence intervals for the scale parameters δ₁, δ₂ and the common shape parameter λ. Section 3 discusses Bayesian estimators for the parameters under the symmetric loss function, deriving the Bayes estimators using vague priors, Jeffreys priors, and conjugate priors. It is worth noting that none of these estimators has a closed-form expression. To approximate the Bayes estimators, we utilize an approximation for the ratio of integrals suggested by Lindley. Sections 4 and 5 cover the generation of posterior densities using Gibbs sampling and bootstrapping algorithms, respectively. Section 6 presents the numerical results and discusses the rigorous simulation analysis comparing all the proposed estimators. In Section 7, we provide a real-life example to illustrate the estimation methods. Finally, in Section 8, the study concludes with some remarks.

2. Maximum Likelihood Estimation

Maximum Likelihood Estimation (MLE) is a widely employed method for parameter estimation and inference within statistics. The principal aim of MLE is to identify the parameters that maximize the probability or likelihood of the sample data. This section is dedicated to acquiring the Maximum Likelihood Estimates (MLEs) for the model parameters.

Let us assume that two independent random samples X₁ = (x₁₁, x₁₂, ..., x₁ₘ) and X₂ = (x₂₁, x₂₂, ..., x₂ₙ), of sizes m and n respectively, are drawn from two Lomax populations. These samples share a common shape parameter λ but may have different scale parameters, denoted δ₁ and δ₂. The two populations are represented as L(λ, δ₁) and L(λ, δ₂), respectively. The corresponding probability density functions are given as:

$$f(x_{1i};\lambda,\delta_1) = \frac{\lambda}{\delta_1}\left(1+\frac{x_{1i}}{\delta_1}\right)^{-(\lambda+1)},\qquad x_{1i}>0,\ \lambda>0,\ \delta_1>0, \tag{1}$$

$$f(x_{2j};\lambda,\delta_2) = \frac{\lambda}{\delta_2}\left(1+\frac{x_{2j}}{\delta_2}\right)^{-(\lambda+1)},\qquad x_{2j}>0,\ \lambda>0,\ \delta_2>0. \tag{2}$$

From equations (1) and (2), the joint likelihood function is obtained as:

$$l(\lambda,\delta_1,\delta_2\,|\,X_1,X_2) = \frac{\lambda^{m+n}}{\delta_1^{m}\,\delta_2^{n}}\prod_{i=1}^{m}\left(1+\frac{x_{1i}}{\delta_1}\right)^{-(\lambda+1)}\prod_{j=1}^{n}\left(1+\frac{x_{2j}}{\delta_2}\right)^{-(\lambda+1)}.$$

Taking the logarithm of the likelihood function, we obtain the log-likelihood function:

$$L = (m+n)\log\lambda - m\log\delta_1 - n\log\delta_2 - (\lambda+1)\left[\sum_{i=1}^{m}\log\left(1+\frac{x_{1i}}{\delta_1}\right)+\sum_{j=1}^{n}\log\left(1+\frac{x_{2j}}{\delta_2}\right)\right]. \tag{3}$$

To find the Maximum Likelihood (ML) estimates of the parameters λ, δ₁, and δ₂, we differentiate the log-likelihood function with respect to each parameter and set the derivatives to zero. This yields a system of three non-linear equations:

$$\frac{\partial L}{\partial\lambda} = \frac{m+n}{\lambda} - T_1 - T_2 = 0,$$

$$\frac{\partial L}{\partial\delta_1} = -\frac{m}{\delta_1} - (\lambda+1)T_1' = 0,$$

$$\frac{\partial L}{\partial\delta_2} = -\frac{n}{\delta_2} - (\lambda+1)T_2' = 0.$$

Here T₁ and T₂ represent the summation terms involving the samples from populations X₁ and X₂, and T₁′ and T₂′ are their first derivatives with respect to δ₁ and δ₂, respectively. Full expressions for T₁, T₂, T₁′, and T₂′ are provided in Appendix A.

As the system of non-linear equations cannot be solved analytically, we employ the Newton-Raphson method to obtain numerical solutions. The MLE results after solving these equations are presented in Section 6.
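For readers who want to reproduce this step, a minimal Python sketch is given below. It is our illustration, not code from the paper: it maximizes the joint log-likelihood (3) numerically, substituting SciPy's quasi-Newton BFGS routine for a hand-coded Newton-Raphson iteration, and uses a log-reparameterization to keep all three parameters positive.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, x1, x2):
    """Negative of the joint log-likelihood (3); theta holds (log lam, log d1, log d2)."""
    lam, d1, d2 = np.exp(theta)          # log-parameterization keeps all parameters positive
    m, n = len(x1), len(x2)
    T1 = np.sum(np.log1p(x1 / d1))
    T2 = np.sum(np.log1p(x2 / d2))
    return -((m + n) * np.log(lam) - m * np.log(d1) - n * np.log(d2) - (lam + 1) * (T1 + T2))

def fit_two_lomax(x1, x2, start=(1.0, 1.0, 1.0)):
    """Maximize the joint likelihood with a quasi-Newton (BFGS) routine."""
    res = minimize(neg_log_lik, np.log(start), args=(x1, x2), method="BFGS")
    return np.exp(res.x), res

# Illustration on simulated data; NumPy's pareto() draws from a unit-scale Lomax law
rng = np.random.default_rng(1)
x1 = 1.0 * rng.pareto(1.5, size=50)      # delta1 = 1
x2 = 2.0 * rng.pareto(1.5, size=40)      # delta2 = 2
(lam_hat, d1_hat, d2_hat), _ = fit_two_lomax(x1, x2)
```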

Following this, we calculate the Fisher information matrix for the model parameters λ, δ₁, and δ₂:

$$I(\lambda,\delta_1,\delta_2) = -\begin{pmatrix} -\dfrac{m+n}{\lambda^2} & -T_1' & -T_2' \\[4pt] -T_1' & \dfrac{m}{\delta_1^2}-(\lambda+1)T_1'' & 0 \\[4pt] -T_2' & 0 & \dfrac{n}{\delta_2^2}-(\lambda+1)T_2'' \end{pmatrix}.$$

Here T₁″ and T₂″ denote the second derivatives of T₁ with respect to δ₁ and of T₂ with respect to δ₂, respectively. Detailed expressions for T₁″, T₂″, and the determinant d are also provided in Appendix A.

Using the information matrix, we construct 95% asymptotic confidence intervals for the model parameters. Writing $d_1 = \frac{m}{\hat\delta_{1ML}^2}-(\hat\lambda_{ML}+1)T_1''$, $d_2 = \frac{n}{\hat\delta_{2ML}^2}-(\hat\lambda_{ML}+1)T_2''$, and d for the determinant given in Appendix A, the intervals are:

$$\hat\lambda_{ML}\pm 1.96\sqrt{\frac{d_1 d_2}{d}},\qquad \hat\delta_{1ML}\pm 1.96\sqrt{\frac{\frac{m+n}{\hat\lambda_{ML}^2}\,d_2-(T_2')^2}{d}},\qquad \hat\delta_{2ML}\pm 1.96\sqrt{\frac{\frac{m+n}{\hat\lambda_{ML}^2}\,d_1-(T_1')^2}{d}}.$$

Numerical results for these confidence intervals, obtained using fixed sample sizes, are presented in Section 6.
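Continuing the sketch from above, the observed information matrix can also be approximated by finite differences and inverted to produce the 95% asymptotic intervals; this is a numerical stand-in for the closed-form entries of Appendix A, under the same assumptions as the previous block.

```python
def num_hessian(f, x, eps=1e-4):
    """Central-difference Hessian of a scalar function f at x."""
    k = len(x)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = eps
            ej = np.zeros(k); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps ** 2)
    return H

# Observed information = Hessian of the negative log-likelihood at the MLE
mle = np.array([lam_hat, d1_hat, d2_hat])
nll = lambda t: neg_log_lik(np.log(t), x1, x2)   # wrap back to the original scale
se = np.sqrt(np.diag(np.linalg.inv(num_hessian(nll, mle))))
for name, est, s in zip(("lambda", "delta1", "delta2"), mle, se):
    print(f"{name}: [{est - 1.96 * s:.3f}, {est + 1.96 * s:.3f}]")
```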

3. Bayesian Study

In recent decades, the Bayesian perspective has gained significant attention for statistical inference, offering a powerful and valid alternative to classical statistical methods. This section considers the Bayesian estimation of the model's parameters. The Bayes estimator is particularly useful when there is prior knowledge about the distribution of parameters. Let ρ₁(λ) be the prior density function of the shape parameter λ, and let ρ₂(δ₁) and ρ₃(δ₂) be the prior densities for the scale parameters δ₁ and δ₂, respectively.

The likelihood function of (λ, δ₁, δ₂) for the given data (X₁, X₂) is obtained as:

$$l(\lambda,\delta_1,\delta_2\,|\,X_1,X_2) = \frac{\lambda^{m+n}}{\delta_1^{m}\,\delta_2^{n}}\prod_{i=1}^{m}\left(1+\frac{x_{1i}}{\delta_1}\right)^{-(\lambda+1)}\prod_{j=1}^{n}\left(1+\frac{x_{2j}}{\delta_2}\right)^{-(\lambda+1)}.$$

We can obtain the joint density function of (λ, δ₁, δ₂, X₁, X₂) by combining the likelihood and the priors, as follows:

$$f(\lambda,\delta_1,\delta_2,X_1,X_2) = l(\lambda,\delta_1,\delta_2\,|\,X_1,X_2)\,\rho_1(\lambda)\,\rho_2(\delta_1)\,\rho_3(\delta_2).$$

The posterior joint density function of (λ, δ₁, δ₂) given (X₁, X₂) is:

$$f(\lambda,\delta_1,\delta_2\,|\,X_1,X_2) = \frac{f(\lambda,\delta_1,\delta_2,X_1,X_2)}{\int_0^\infty\int_0^\infty\int_0^\infty f(\lambda,\delta_1,\delta_2,X_1,X_2)\,d\lambda\,d\delta_1\,d\delta_2}.$$

The posterior expectation of g(λ, δ₁, δ₂) is given by:

$$E[g(\lambda,\delta_1,\delta_2)\,|\,X_1,X_2] = \frac{\int_0^\infty\int_0^\infty\int_0^\infty g(\lambda,\delta_1,\delta_2)\,f(\lambda,\delta_1,\delta_2,X_1,X_2)\,d\lambda\,d\delta_1\,d\delta_2}{\int_0^\infty\int_0^\infty\int_0^\infty f(\lambda,\delta_1,\delta_2,X_1,X_2)\,d\lambda\,d\delta_1\,d\delta_2}. \tag{4}$$

It is challenging to calculate the ratio of integrals in equation (4) using analytical methods. However, certain approximations can be used to obtain a numerical value. To calculate the ratio, we employ the method proposed by Lindley [21], which is explained in detail below. Moreover, by using different priors and loss functions for the parameters, Bayes estimators can be derived.

3.1. Lindley's Approximation

In Bayesian analysis, we frequently encounter the problem of the ratio of integrals. Lindley [21] proposed an asymptotic solution for the ratio of two integrals. We use this method to evaluate the expression in equation (4). Lindley's method allows us to approximate expressions such as:

$$I = \frac{\int \mu(\theta)\,v(\theta)\,\exp L(\theta)\,d\theta}{\int v(\theta)\,\exp L(\theta)\,d\theta} = E[\mu(\theta)\,|\,x], \tag{5}$$

where L(θ) is the log-likelihood function of the data x = (x₁, x₂, ..., xₙ), θ = (θ₁, θ₂, ..., θₘ), μ(θ) is any function of θ, and v(θ) is the prior density of θ. Lindley's approximation to equation (5) is given by:

$$E[\mu(\theta)\,|\,x] \approx \left[\mu + \frac{1}{2}\sum_i\sum_j\left(\mu_{ij}+2\mu_i\rho_j\right)\sigma_{ij} + \frac{1}{2}\sum_i\sum_j\sum_k\sum_r L_{ijk}\,\sigma_{ij}\,\sigma_{kr}\,\mu_r\right]_{\hat\theta_{ML}}, \tag{6}$$

where μᵢ is the partial derivative of μ with respect to θᵢ, μᵢⱼ is the second partial derivative of μ with respect to θᵢ and θⱼ, Lᵢⱼₖ is the third partial derivative of L with respect to θᵢ, θⱼ, and θₖ, and σᵢⱼ is the (i, j)th element of the matrix [−Lᵢⱼ]⁻¹. All terms are evaluated at the MLE θ̂ of θ. Further, ρ(θ) = log v(θ) and ρⱼ is the partial derivative of ρ with respect to θⱼ.

In the subsequent subsections, Lindley's approximation method is employed to derive Bayes estimators for the parameters λ, δ₁, and δ₂ under the symmetric loss function. The primary role of a loss function is to assess the efficacy of a model by assigning penalties based on the extent of deviation between predictions and true values. Using equation (6), we can obtain Bayes estimators for the parameters (λ, δ₁, δ₂) under the symmetric (squared error) loss function.

3.2. Symmetric loss function

In this section, we obtain the Bayes estimators under the symmetric squared error (SE) loss function. After ignoring terms of order 1/(m+n)² and smaller, the expression in (6) reduces to

$$E[\mu(\theta)\,|\,(x_1,x_2)] = \mu + \mu_1 a_1 + \mu_2 a_2 + \mu_3 a_3 + a_4 + a_5 + \frac{1}{2}\big[A(\mu_1\sigma_{11}+\mu_2\sigma_{12}+\mu_3\sigma_{13}) + B(\mu_1\sigma_{21}+\mu_2\sigma_{22}+\mu_3\sigma_{23}) + D(\mu_1\sigma_{31}+\mu_2\sigma_{32}+\mu_3\sigma_{33})\big]. \tag{7}$$

In our notation, θ = (θ₁, θ₂, θ₃) = (λ, δ₁, δ₂). For more details, refer to Tripathy and Nagamani [22]. We obtain the Bayesian estimators of the parameters using various priors under the symmetric loss function in the subsequent subsections.

3.2.1 Vague Prior

We use a vague prior for each of the parameters λ, δ₁, and δ₂. The prior densities for the shape parameter λ and the scale parameters δ₁ and δ₂ are taken as

$$\rho_1(\lambda)=1,\qquad \rho_2(\delta_1)=\frac{1}{\delta_1^2},\qquad \rho_3(\delta_2)=\frac{1}{\delta_2^2}.$$

We can derive the joint prior density for the parameters λ, δ₁, and δ₂ by combining their individual prior densities:

$$v_V(\lambda,\delta_1,\delta_2)=\frac{1}{\delta_1^2\,\delta_2^2},\qquad \rho(\theta)=\log v_V(\theta) = -2\log\delta_1 - 2\log\delta_2.$$

From ρ(θ) we get ρ₁, ρ₂, ρ₃ and a₁, a₂, a₃; the details of the notation are provided in Appendix B.

Let μ(θ) = λ; then μ₁ = 1, μ₂ = μ₃ = 0, μᵢⱼ = 0 for i, j = 1, 2, 3, and a₄ = a₅ = 0. These values, when substituted into (7), give the Bayes estimator for λ:

$$E(\lambda\,|\,(x_{1i},x_{2j})) = \hat\lambda_{ML} - \left[\frac{2}{\hat\delta_{1ML}}\sigma_{12}+\frac{2}{\hat\delta_{2ML}}\sigma_{13}\right] + \frac{1}{2}\big[A\sigma_{11}+B\sigma_{21}+D\sigma_{31}\big]. \tag{8}$$

Consider μ(θ) = δ₁; then μ₂ = 1, μ₁ = μ₃ = 0, μᵢⱼ = 0 for i, j = 1, 2, 3, and a₄ = a₅ = 0. Substituting these values into (7) gives the Bayes estimator for δ₁:

$$E(\delta_1\,|\,(x_{1i},x_{2j})) = \hat\delta_{1ML} - \left[\frac{2}{\hat\delta_{1ML}}\sigma_{22}+\frac{2}{\hat\delta_{2ML}}\sigma_{23}\right] + \frac{1}{2}\big[A\sigma_{12}+B\sigma_{22}+D\sigma_{32}\big]. \tag{9}$$

Again, consider μ(θ) = δ₂; then μ₃ = 1, μ₁ = μ₂ = 0, μᵢⱼ = 0 for i, j = 1, 2, 3, and a₄ = a₅ = 0. Substituting these values into (7) gives the Bayes estimator for δ₂:

$$E(\delta_2\,|\,(x_{1i},x_{2j})) = \hat\delta_{2ML} - \left[\frac{2}{\hat\delta_{1ML}}\sigma_{32}+\frac{2}{\hat\delta_{2ML}}\sigma_{33}\right] + \frac{1}{2}\big[A\sigma_{13}+B\sigma_{23}+D\sigma_{33}\big]. \tag{10}$$

All the terms A, B, D, and σᵢⱼ are provided in Appendix B, and these notations remain consistent throughout the subsequent derivations.
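As a numerical cross-check on the closed-form terms of Appendices A and B, the whole Lindley correction in (7) can also be assembled by finite differences. The sketch below is our illustration (reusing num_hessian, neg_log_lik, mle, x1, and x2 from the Section 2 sketches): it approximates σᵢⱼ and the third derivatives Lᵢⱼₖ numerically and applies the correction for the coordinate functions μ(θ) = λ, δ₁, δ₂ under the vague prior. Differencing third derivatives is numerically delicate, so the analytic expressions remain preferable in practice.

```python
def lindley_estimate(loglik, logprior_grad, mle, eps=1e-4):
    """Numerical version of the Lindley correction (7) for mu(theta) = theta_r."""
    k = len(mle)
    nll = lambda t: -loglik(t)
    sigma = np.linalg.inv(num_hessian(nll, mle))   # sigma_ij: elements of [-L_ij]^{-1}
    rho = logprior_grad(mle)                       # rho_j: gradient of the log prior

    # Third derivatives L_ijk by central differencing of the Hessian of the log-likelihood
    L3 = np.empty((k, k, k))
    for kk in range(k):
        e = np.zeros(k); e[kk] = eps
        L3[:, :, kk] = (num_hessian(nll, mle - e) - num_hessian(nll, mle + e)) / (2 * eps)

    est = np.array(mle, dtype=float)
    for r in range(k):
        corr = sigma[r] @ rho                      # the a-terms of (7) for a coordinate function
        corr += 0.5 * np.einsum("ij,ijk,k->", sigma, L3, sigma[:, r])
        est[r] += corr
    return est

# Vague prior of Section 3.2.1: grad log(1/(d1^2 d2^2)) = (0, -2/d1, -2/d2)
vague_grad = lambda t: np.array([0.0, -2.0 / t[1], -2.0 / t[2]])
loglik = lambda t: -neg_log_lik(np.log(t), x1, x2)
print(lindley_estimate(loglik, vague_grad, mle))
```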

3.2.2 Jeffreys Prior

Here, we utilize the Jeffreys prior to formulate Bayes estimators for the parameters λ, δ₁, and δ₂. This prior is derived from the Fisher information matrix I(λ, δ₁, δ₂) and is expressed as:

$$v_J(\lambda,\delta_1,\delta_2) \propto \sqrt{\det I(\lambda,\delta_1,\delta_2)} = \sqrt{d_1 d_2\,\frac{m+n}{\lambda^2} - d_1(T_2')^2 - d_2(T_1')^2},$$

$$\rho(\theta) = \frac{1}{2}\log\left[d_1 d_2\,\frac{m+n}{\lambda^2} - d_1(T_2')^2 - d_2(T_1')^2\right].$$

From ρ(θ), we obtain ρ₁, ρ₂, and ρ₃. The notations d₁, d₂, ρ₁, ρ₂, and ρ₃ are given in Appendix C.

Let μ(θ) = λ; then μ₁ = 1, μ₂ = μ₃ = 0, μᵢⱼ = 0 for i, j = 1, 2, 3, and a₄ = a₅ = 0. These values, when substituted into (7), give the Bayes estimator for λ:

$$E(\lambda\,|\,(x_{1i},x_{2j})) = \hat\lambda_{ML} + [\rho_1\sigma_{11}+\rho_2\sigma_{12}+\rho_3\sigma_{13}] + \frac{1}{2}\big[A\sigma_{11}+B\sigma_{21}+D\sigma_{31}\big]. \tag{11}$$

Next, consider μ(θ) = δ₁; then μ₂ = 1, μ₁ = μ₃ = 0, μᵢⱼ = 0 for i, j = 1, 2, 3, and a₄ = a₅ = 0. Substituting these values into (7) gives the Bayes estimator for δ₁:

$$E(\delta_1\,|\,(x_{1i},x_{2j})) = \hat\delta_{1ML} + [\rho_1\sigma_{21}+\rho_2\sigma_{22}+\rho_3\sigma_{23}] + \frac{1}{2}\big[A\sigma_{12}+B\sigma_{22}+D\sigma_{32}\big]. \tag{12}$$

Finally, for μ(θ) = δ₂, with μ₃ = 1, μ₁ = μ₂ = 0, μᵢⱼ = 0 for i, j = 1, 2, 3, and a₄ = a₅ = 0, the Bayes estimator for δ₂ is obtained as follows:

$$E(\delta_2\,|\,(x_{1i},x_{2j})) = \hat\delta_{2ML} + [\rho_1\sigma_{31}+\rho_2\sigma_{32}+\rho_3\sigma_{33}] + \frac{1}{2}\big[A\sigma_{13}+B\sigma_{23}+D\sigma_{33}\big]. \tag{13}$$

3.2.3 Conjugate Prior

In this context, we derive Bayes estimators for the parameters λ, δ₁, and δ₂ using conjugate-type priors: a gamma prior for the shape parameter and inverse gamma priors for the scale parameters, with their respective probability density functions given below:

$$\rho_1(\lambda)=\frac{b_1^{c_1}}{\Gamma(c_1)}\lambda^{c_1-1}e^{-b_1\lambda},\qquad \rho_2(\delta_1)=\frac{b_2^{c_2}}{\Gamma(c_2)}\delta_1^{-(c_2+1)}e^{-b_2/\delta_1},\qquad \rho_3(\delta_2)=\frac{b_3^{c_3}}{\Gamma(c_3)}\delta_2^{-(c_3+1)}e^{-b_3/\delta_2}.$$

We can derive the joint prior density for the parameters λ, δ₁, and δ₂ by combining their individual prior densities:

$$v_C(\lambda,\delta_1,\delta_2)=\frac{b_1^{c_1}b_2^{c_2}b_3^{c_3}}{\Gamma(c_1)\Gamma(c_2)\Gamma(c_3)}\,\lambda^{c_1-1}\,\delta_1^{-(c_2+1)}\,\delta_2^{-(c_3+1)}\,e^{-b_1\lambda-b_2/\delta_1-b_3/\delta_2},$$

$$\rho(\theta)=\log v_C(\theta)=c_1\log b_1+c_2\log b_2+c_3\log b_3-\log\Gamma(c_1)-\log\Gamma(c_2)-\log\Gamma(c_3)+(c_1-1)\log\lambda-(c_2+1)\log\delta_1-(c_3+1)\log\delta_2-b_1\lambda-\frac{b_2}{\delta_1}-\frac{b_3}{\delta_2}.$$

From ρ(θ) we get ρ₁, ρ₂, and ρ₃. The detailed notations for ρ₁, ρ₂, ρ₃ and a₁, a₂, a₃ are given in Appendix D.

Let μ(θ) = λ; then μ₁ = 1, μ₂ = μ₃ = 0, μᵢⱼ = 0 for i, j = 1, 2, 3, and a₄ = a₅ = 0. These values, when substituted into (7), give the Bayes estimator for λ:

$$E(\lambda\,|\,(x_{1i},x_{2j})) = \hat\lambda_{ML} + \left(\frac{c_1-1}{\hat\lambda_{ML}}-b_1\right)\sigma_{11} + \left(\frac{b_2}{\hat\delta_{1ML}^2}-\frac{c_2+1}{\hat\delta_{1ML}}\right)\sigma_{12} + \left(\frac{b_3}{\hat\delta_{2ML}^2}-\frac{c_3+1}{\hat\delta_{2ML}}\right)\sigma_{13} + \frac{1}{2}\big[A\sigma_{11}+B\sigma_{21}+D\sigma_{31}\big]. \tag{14}$$

Consider μ(θ) = δ₁; then μ₂ = 1, μ₁ = μ₃ = 0, μᵢⱼ = 0 for i, j = 1, 2, 3, and a₄ = a₅ = 0. These values, when substituted into (7), give the Bayes estimator for δ₁:

$$E(\delta_1\,|\,(x_{1i},x_{2j})) = \hat\delta_{1ML} + \left(\frac{c_1-1}{\hat\lambda_{ML}}-b_1\right)\sigma_{21} + \left(\frac{b_2}{\hat\delta_{1ML}^2}-\frac{c_2+1}{\hat\delta_{1ML}}\right)\sigma_{22} + \left(\frac{b_3}{\hat\delta_{2ML}^2}-\frac{c_3+1}{\hat\delta_{2ML}}\right)\sigma_{23} + \frac{1}{2}\big[A\sigma_{12}+B\sigma_{22}+D\sigma_{32}\big]. \tag{15}$$

Again, consider μ(θ) = δ₂; then μ₃ = 1, μ₁ = μ₂ = 0, μᵢⱼ = 0 for i, j = 1, 2, 3, and a₄ = a₅ = 0. These values, when substituted into (7), give the Bayes estimator for δ₂:

$$E(\delta_2\,|\,(x_{1i},x_{2j})) = \hat\delta_{2ML} + \left(\frac{c_1-1}{\hat\lambda_{ML}}-b_1\right)\sigma_{31} + \left(\frac{b_2}{\hat\delta_{1ML}^2}-\frac{c_2+1}{\hat\delta_{1ML}}\right)\sigma_{32} + \left(\frac{b_3}{\hat\delta_{2ML}^2}-\frac{c_3+1}{\hat\delta_{2ML}}\right)\sigma_{33} + \frac{1}{2}\big[A\sigma_{13}+B\sigma_{23}+D\sigma_{33}\big]. \tag{16}$$

4. Gibbs Sampling

Gibbs sampling is a method for generating samples from a joint probability distribution by iteratively sampling from the conditional distributions of each variable while keeping others fixed. This technique is particularly useful when direct sampling from the joint distribution is challenging. Gibbs sampling is prevalent in Bayesian statistics, probabilistic modeling, and fields requiring sampling from complex multivariate distributions.

Working rule for Gibbs sampling (a runnable sketch follows the list):

1. Start with initial values for the parameters.

2. Define the prior P(λ, δ₁, δ₂) and the likelihood P(X₁, X₂ | λ, δ₁, δ₂).

3. Form the joint posterior density function from the defined prior and likelihood.

4. Randomly draw parameters from the full conditional densities as follows:

• Draw λ from P(λ | δ₁, δ₂, X₁, X₂) using the current values of δ₁, δ₂, X₁, and X₂.

• Draw δ₁ from P(δ₁ | λ, δ₂, X₁, X₂) using the current values of λ, δ₂, X₁, and X₂.

• Draw δ₂ from P(δ₂ | λ, δ₁, X₁, X₂) using the current values of λ, δ₁, X₁, and X₂.

5. Iterate through the above steps N times to obtain N draws of the parameters.

6. After obtaining N draws, calculate Highest Posterior Density (HPD) intervals for each parameter, representing the credible range of values.

7. To assess convergence and stability, calculate the average for each parameter and compare it with the initial values of (λ, δ₁, δ₂).
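A compact Metropolis-within-Gibbs sketch of this scheme is given below; it is our construction (reusing NumPy from the Section 2 sketch), with the Section 6 hyperparameters c₁ = c₂ = c₃ = 1.5 and b₁ = b₂ = b₃ = 0.5 assumed. Under the gamma prior the full conditional of λ is exactly Gamma(c₁ + m + n, b₁ + T₁ + T₂), while the scale conditionals are non-standard and are updated with a random walk on the log scale; the hpd() helper extracts the shortest interval containing 95% of the draws.

```python
def gibbs_two_lomax(x1, x2, n_iter=10000, c1=1.5, b1=0.5, seed=0):
    """Metropolis-within-Gibbs sketch; scales carry inverse-gamma IG(1.5, 0.5) priors."""
    rng = np.random.default_rng(seed)
    m, n = len(x1), len(x2)
    lam, d1, d2 = 1.0, 1.0, 1.0
    draws = np.empty((n_iter, 3))

    def logcond_scale(d, x, size, lam):
        # log full conditional of one scale parameter, up to an additive constant:
        # likelihood part -size*log(d) - (lam+1)*sum(log(1+x/d)) plus the IG(1.5, 0.5) log prior
        return -size * np.log(d) - (lam + 1) * np.sum(np.log1p(x / d)) - 2.5 * np.log(d) - 0.5 / d

    for it in range(n_iter):
        T1, T2 = np.sum(np.log1p(x1 / d1)), np.sum(np.log1p(x2 / d2))
        lam = rng.gamma(c1 + m + n, 1.0 / (b1 + T1 + T2))     # exact conjugate Gamma draw
        for which in (1, 2):
            d, x, sz = (d1, x1, m) if which == 1 else (d2, x2, n)
            prop = d * np.exp(0.3 * rng.standard_normal())     # random walk on log(d)
            log_acc = (logcond_scale(prop, x, sz, lam) - logcond_scale(d, x, sz, lam)
                       + np.log(prop / d))                     # Jacobian of the log-scale walk
            if np.log(rng.uniform()) < log_acc:
                d = prop
            if which == 1:
                d1 = d
            else:
                d2 = d
        draws[it] = lam, d1, d2
    return draws[n_iter // 5:]                                 # discard 20% burn-in

def hpd(samples, prob=0.95):
    """Shortest interval containing `prob` of the sampled values (HPD for unimodal draws)."""
    s = np.sort(samples)
    k = int(np.floor(prob * len(s)))
    i = np.argmin(s[k:] - s[:len(s) - k])
    return s[i], s[i + k]
```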

5. Bootstrapping

Bootstrapping is a method that involves repeated resampling of a single dataset, allowing the creation of multiple simulated samples used to compute standard errors, confidence intervals, and conduct hypothesis tests. In Bayesian statistics, bootstrapping can extend to resampling from posterior samples obtained through methods like Markov Chain Monte Carlo (MCMC). This approach provides a means to estimate uncertainty in Bayesian inference by generating simulated datasets through resampling with replacement. These datasets are then utilized to compute summary statistics or parameters of interest, such as standard errors or confidence intervals, enhancing the robustness of uncertainty assessment in Bayesian estimates.

Steps in bootstrapping (a runnable sketch follows the list):

1. Set the initial values for the parameters (λ, δ₁, δ₂).

2. Define the data (X₁, X₂) from Lomax distributions with parameters (λ, δ₁, δ₂).

3. Choose the number of bootstrap samples, N = 10000.

4. Draw N bootstrap samples D_k (for k = 1, 2, ..., N) by randomly sampling with replacement from the observed dataset (X₁, X₂).

5. For each bootstrap sample D_k, conduct Bayesian parameter estimation to obtain posterior samples of the parameters using the joint posterior density function with prior P(λ, δ₁, δ₂) and likelihood P(X₁, X₂ | λ, δ₁, δ₂).

6. Calculate Highest Posterior Density (HPD) intervals for each parameter based on the obtained posterior samples, representing the credible range of values for the parameters.

7. To assess convergence and stability, calculate the average for each parameter and compare it with the initial values of (λ, δ₁, δ₂).
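The sketch below illustrates steps 3 to 6 in their simplest nonparametric form (our illustration): each resample is refit with the fit_two_lomax routine from the Section 2 sketch, and hpd() from the Gibbs sketch summarizes the resulting distribution. Refitting the MLE per resample is a pragmatic stand-in for the full per-resample Bayesian estimation described in step 5.

```python
def bootstrap_two_lomax(x1, x2, n_boot=10000, seed=0):
    """Nonparametric bootstrap: refit the joint MLE on each resampled pair of datasets."""
    rng = np.random.default_rng(seed)
    out = np.empty((n_boot, 3))
    for k in range(n_boot):
        xb1 = rng.choice(x1, size=len(x1), replace=True)   # resample with replacement
        xb2 = rng.choice(x2, size=len(x2), replace=True)
        out[k], _ = fit_two_lomax(xb1, xb2)
    return out

boot = bootstrap_two_lomax(x1, x2, n_boot=2000)
print(hpd(boot[:, 0]))   # interval for lambda, reusing hpd() from the Gibbs sketch
```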

6. Simulation Study

This study focuses on estimating the parameters of two Lomax distributions, assuming a common shape parameter (λ) and two potentially distinct scale parameters (δ₁ and δ₂). Maximum Likelihood Estimators (MLEs) for the scale parameters and the shape parameter were computed in Section 2, utilizing computational techniques to compare these estimators numerically through simulations. In Section 3, we delve into the development of Bayes estimators. While these estimators lack a precise analytical form, we derive parameter approximations based on Lindley's method. Various priors, including the vague prior, Jeffreys prior, and conjugate prior, are employed to calculate these estimators and assess their performance under the symmetric loss function (equations (8) to (16)).

In Sections 4 and 5, we further include Bayesian estimators estimated using Gibbs and Bootstrapping algorithms to compare these results with the parameters obtained using MLE and Lindley's approximation. Additionally, 95% Highest Posterior Density (HPD) credible intervals for the parameters are estimated using Gibbs and Bootstrapping, facilitating comparisons with 95% asymptotic confidence intervals. Trace plots and density plots are also generated to evaluate the performance and convergence of the MCMC chains.

The performance of these estimators is evaluated using bias and mean squared error (MSE) metrics. To quantitatively compare the estimators, we generate 10,000 random samples from two Lomax populations across various sample sizes and parameter combinations. Specific hyperparameters (c₁ = c₂ = c₃ = 1.5 and b₁ = b₂ = b₃ = 0.5) are employed to calculate the biases and MSEs of all estimators, with results presented in tabular form in Tables 1 and 2; a data-generation and evaluation skeleton is sketched below.
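A skeleton of the Monte Carlo loop behind these tables might look as follows (our sketch, reusing fit_two_lomax from the Section 2 code; only the MLE column is shown, and the Bayes, Gibbs, and bootstrap estimators plug into the same loop):

```python
def simulate_bias_mse(lam=1.5, d1=1.0, d2=2.0, m=20, n=30, reps=10000, seed=0):
    """Monte Carlo skeleton for Table 1: bias and MSE of the MLE."""
    rng = np.random.default_rng(seed)
    est = np.empty((reps, 3))
    for r in range(reps):
        x1 = d1 * rng.pareto(lam, size=m)   # NumPy's pareto() draws a unit-scale Lomax
        x2 = d2 * rng.pareto(lam, size=n)
        est[r], _ = fit_two_lomax(x1, x2)
    truth = np.array([lam, d1, d2])
    return est.mean(axis=0) - truth, ((est - truth) ** 2).mean(axis=0)   # (bias, MSE)
```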

Table 1 presents the bias and MSE of the MLE and Bayes estimators under a symmetric loss function. The first column denotes the various sample sizes, while the second column represents the parameters λ, δ₁, and δ₂. Columns 3 and 4 display the bias and MSE of the MLE estimates, respectively. The subsequent columns give the bias and MSE of the Bayes estimators using the Jeffreys prior (columns 5 and 6), the conjugate prior (columns 7 and 8), Gibbs sampling (columns 9 and 10), and bootstrapping (columns 11 and 12). Table 2 displays 95% asymptotic and HPD intervals using Gibbs sampling and bootstrapping for the parameters. The first two columns represent the sample sizes and the parameters λ, δ₁, and δ₂. The third column gives the asymptotic confidence intervals estimated using the information matrix. The fourth and fifth columns give the HPD intervals estimated using the Gibbs and bootstrapping algorithms.

The observations derived from our simulation study provide valuable insights into the performance of different estimators under varying conditions:

• As sample sizes increase, both bias and mean square error for each estimator decrease.

• Estimators of the shape parameter obtained using MLE, Lindley's method, Gibbs sampling, and Bootstrapping converge to constant values with increasing sample sizes, indicating consistency. The same trend is observed for scale parameters.

• For small sample sizes, Gibbs estimates outperform MLE and bootstrapping in terms of bias and mean squared error for the parameters λ, δ₁, and δ₂.

• Bayes estimation with informative priors yields the minimum error compared to non-informative priors such as the vague and Jeffreys priors under the symmetric loss function.

• Lindley's approximation, employing a conjugate prior in Bayes estimation, demonstrates superior performance compared to the ML estimators, Gibbs sampling, and bootstrapping under the symmetric loss function.

• Highest Posterior Density (HPD) confidence intervals obtained through Gibbs sampling tend to be more precise than Bootstrapping and traditional asymptotic confidence intervals.

Table 1: For various sample sizes, we compare the biases and mean squared errors of the estimators under squared error loss for θ = (λ, δ₁, δ₂).

(m,n)     θ        MLE              Bayes(J)         Bayes(C)         Gibbs            Bootstrap
                   Bias     MSE     Bias     MSE     Bias     MSE     Bias     MSE     Bias     MSE
(10,10)   λ=1.5   -0.14     0.264   0.022    0.316   0.076    0.244   -0.041   0.148   0.26     0.316
          δ₁=1     0.209    0.26    0.209    0.26    0.041    0.116   -0.139   0.110   0.036    0.232
          δ₂=2     0.469    1.087   -0.196   0.221   -0.173   0.151   0.096    0.552   -0.228   0.501
          λ=2     -0.196    0.45    0.019    0.531   0.056    0.404   -0.287   0.294   0.006    0.672
          δ₁=1     0.215    0.247   0.215    0.247   0.069    0.122   -0.084   0.107   0.173    0.298
          δ₂=2     0.456    1.058   -0.198   0.207   -0.122   0.147   0.001    0.484   -0.44    0.73
          λ=2.5   -0.266    0.719   0        0.833   0.02     0.638   0.077    0.488   0.125    0.862
          δ₁=1     0.232    0.269   0.232    0.269   0.095    0.141   0.023    0.112   0.024    0.15
          δ₂=2     0.503    1.289   -0.194   0.213   -0.091   0.161   0.022    0.426   0.3      0.854
(20,30)   λ=1.5   -0.076    0.099   -0.01    0.104   0.011    0.093   -0.131   0.074   -0.029   0.102
          δ₁=1     0.081    0.068   0.081    0.068   0.024    0.051   -0.173   0.073   0.146    0.147
          δ₂=2     0.182    0.257   -0.022   0.139   0.004    0.132   -0.159   0.203   -0.497   0.418
          λ=2     -0.078    0.18    0.012    0.193   0.023    0.172   -0.441   0.269   -0.043   0.263
          δ₁=1     0.08     0.071   0.08     0.071   0.029    0.054   -0.008   0.059   0.091    0.169
          δ₂=2     0.163    0.257   -0.037   0.146   0.004    0.142   -0.274   0.225   -0.177   0.296
          λ=2.5   -0.123    0.295   -0.011   0.311   -0.008   0.277   -0.362   0.267   0.124    0.329
          δ₁=1     0.083    0.07    0.083    0.07    0.035    0.054   -0.15    0.059   0.197    0.125
          δ₂=2     0.193    0.263   -0.013   0.142   0.038    0.144   -0.259   0.202   0.203    0.313
(50,40)   λ=1.5   -0.03     0.06    0.009    0.063   0.02     0.059   0.11     0.057   0.255    0.098
          δ₁=1     0.039    0.028   0.039    0.028   0.016    0.024   0.055    0.035   0.18     0.103
          δ₂=2     0.078    0.126   -0.017   0.098   0.005    0.097   -0.23    0.151   0.309    0.391
          λ=2     -0.051    0.098   0.001    0.101   0.007    0.095   -0.18    0.093   0.311    0.298
          δ₁=1     0.037    0.025   0.037    0.025   0.017    0.022   0.001    0.029   0.143    0.089
          δ₂=2     0.097    0.136   -0.001   0.101   0.029    0.105   -0.207   0.142   0.133    0.239
          λ=2.5   -0.042    0.165   0.023    0.174   0.024    0.163   0.214    0.194   0.171    0.227
          δ₁=1     0.036    0.026   0.036    0.026   0.018    0.023   0.074    0.037   0        0.036
          δ₂=2     0.091    0.124   -0.006   0.094   0.029    0.098   0.144    0.153   0.071    0.164
(60,60)   λ=1.5   -0.016    0.042   0.014    0.044   0.022    0.042   -0.022   0.031   -0.092   0.05
          δ₁=1     0.03     0.019   0.03     0.019   0.012    0.017   0        0.025   0        0.04
          δ₂=2     0.047    0.08    -0.023   0.067   -0.008   0.067   -0.13    0.104   -0.222   0.156
          λ=2     -0.038    0.083   0.001    0.085   0.005    0.081   0.055    0.061   -0.18    0.121
          δ₁=1     0.034    0.021   0.034    0.021   0.018    0.019   0.001    0.022   -0.099   0.043
          δ₂=2     0.068    0.088   -0.004   0.071   0.017    0.072   0.073    0.099   -0.022   0.16
          λ=2.5   -0.053    0.121   -0.005   0.123   -0.004   0.117   -0.253   0.132   0.106    0.186
          δ₁=1     0.031    0.021   0.031    0.021   0.017    0.019   -0.044   0.021   0.045    0.033
          δ₂=2     0.078    0.096   0.005    0.077   0.03     0.079   -0.116   0.085   0.419    0.43

Table 2: 95% asymptotic confidence intervals and HPD intervals using Gibbs sampling and bootstrapping for the parameters θ = (λ, δ₁, δ₂) at various sample sizes.

(m,n)     θ        Asymptotic        Gibbs             Bootstrap
(10,10)   δ₁=1     [0.372,1.677]     [0.299,1.433]     [0.385,1.631]
          δ₂=2     [1.42,3.521]      [0.804,3.267]     [0.676,2.804]
          λ=1.5    [0.562,2.974]     [0.773,2.218]     [1.204,3.642]
          λ=2      [0.447,2.131]     [0.874,2.615]     [1.227,3.527]
          λ=2.5    [0.837,4.003]     [1.32,3.96]       [1.128,3.484]
(20,10)   δ₁=1     [0.456,1.292]     [0.621,2.021]     [0.831,2.577]
          δ₂=2     [1.745,2.361]     [0.62,2.616]      [0.898,2.938]
          λ=1.5    [0.705,2.535]     [0.802,1.961]     [1.261,3.168]
          λ=2      [1.281,4.627]     [0.984,2.443]     [1.719,4.321]
          λ=2.5    [0.978,3.431]     [1.374,3.36]      [1.426,3.486]
(20,20)   δ₁=1     [0.595,1.559]     [0.622,1.832]     [0.435,1.282]
          δ₂=2     [1.887,3.025]     [0.871,2.619]     [0.791,2.378]
          λ=1.5    [0.736,2.05]      [0.79,1.709]      [1.042,2.35]
          λ=2      [1.225,3.476]     [1.405,3.151]     [1.282,2.796]
          λ=2.5    [1.178,3.253]     [1.374,3.36]      [1.846,4.175]
(20,30)   δ₁=1     [0.604,1.499]     [0.437,1.238]     [0.515,1.443]
          δ₂=2     [1.583,2.309]     [0.987,2.474]     [1.149,2.754]
          λ=1.5    [0.516,1.258]     [0.941,1.871]     [0.908,1.788]
          λ=2      [1.189,2.979]     [1.015,2.081]     [1.415,2.912]
          λ=2.5    [1.46,3.654]      [1.456,2.878]     [1.883,4.013]
(40,40)   δ₁=1     [0.657,1.271]     [0.653,1.419]     [0.717,1.535]
          δ₂=2     [1.489,2.239]     [1.277,2.777]     [1.383,2.827]
          λ=1.5    [1.28,2.614]      [0.959,1.657]     [1.171,2.088]
          λ=2      [0.973,1.918]     [1.411,2.837]     [1.746,3.038]
          λ=2.5    [1.347,2.706]     [1.813,3.181]     [1.57,2.807]
(50,40)   δ₁=1     [0.721,1.309]     [0.75,1.442]      [0.697,1.361]
          δ₂=2     [1.721,2.393]     [1.178,2.379]     [1.307,2.595]
          λ=1.5    [0.828,1.597]     [1.225,2.046]     [1.221,2.098]
          λ=2      [1.634,3.224]     [1.359,2.328]     [1.651,2.856]
          λ=2.5    [1.402,2.699]     [1.989,3.475]     [2.010,3.465]
(60,60)   δ₁=1     [0.738,1.253]     [0.707,1.316]     [0.701,1.291]
          δ₂=2     [1.819,2.334]     [1.357,2.417]     [1.471,2.59]
          λ=1.5    [1.146,2.017]     [1.151,1.819]     [1.075,1.664]
          λ=2      [1.235,2.158]     [1.598,2.539]     [1.298,2.084]
          λ=2.5    [2.028,3.617]     [1.731,2.758]     [1.784,2.867]

Table 3: Maximum likelihood and Bayes estimators of the combined model.

θ         MLE      MSE     Bayes(S)   MSE     Gibbs    MSE     Bootstrap   MSE
λ = 0.5   0.623    0.015   0.595      0.009   0.316    0.034   0.320       0.032
δ₁ = 10   9.044    0.912   9.742      0.066   10.115   2.390   10.390      2.473
δ₂ = 11   10.773   0.051   11.753     0.567   11.460   3.315   12.098      5.891

Table 4: 95% asymptotic confidence intervals and HPD intervals using Gibbs sampling and bootstrapping for the parameters θ = (λ, δ₁, δ₂) of the model.

θ     Asymptotic           Gibbs                Bootstrap
λ     [0.6181, 0.6268]     [0.2784, 0.3561]     [0.2764, 0.3519]
δ₁    [5.2672, 11.8221]    [7.0932, 13.1585]    [7.3985, 13.5131]
δ₂    [6.4066, 15.1402]    [8.212, 15.0033]     [8.3435, 15.1913]

7. Empirical Example

In evaluating our model's accuracy, we've collected data on annual deaths from Meningitis and Nutritional Deficiencies across 158 countries. Our aim is to assess its performance by comparing it with observed patterns in the dataset. To facilitate this comparison, we'll create histograms to visually represent the data and calculate the joint probability to understand simultaneous occurrences of both types of deaths. By employing a joint density function, we can explore various scenarios and better understand the probability associated with different combinations of Meningitis and Nutritional Deficiencies deaths. It's crucial for our model to align well with observed data patterns and predict a range of related health outcomes. The data for annual deaths is as follows:

Deaths due to Meningitis: 1563, 13, 292, 2520, 453, 46, 31, 62, 2323, 49, 45, 1975, 22, 134, 123, 2008, 39, 5258, 1239, 129, 2791, 89, 18, 902, 4623, 113, 6465, 377, 70, 284, 35, 2259, 17, 94, 38, 6147, 31, 110, 217, 118, 764, 52, 218, 55, 693, 73, 11283, 13, 16, 226, 86, 212, 21, 216, 3487, 30, 242, 3260, 267, 997, 109, 50, 34736, 4715, 577, 427, 10, 28, 185, 21, 408, 83, 136, 4396, 11, 12, 38, 162, 34, 162, 432, 43, 16, 2084, 2369, 291, 6260, 217, 522, 20, 24, 400, 2729, 1246, 100, 469, 96, 13, 44, 7772, 44914, 1235, 223, 19, 4493, 14, 17987, 28, 36, 599, 65, 143, 2056, 135, 36, 60, 1143, 995, 180, 25, 1563, 27, 1630, 20, 17, 4672, 2221, 77, 1968, 155, 297, 662, 31, 27, 214, 105, 125, 3765,1037, 31, 625,11, 85, 351,52, 3941, 399,54, 265,1146,14,175, 254, 747,14, 479, 2065,1450.

Deaths due to Nutritional Deficiencies: 1244, 5, 114, 3015, 1330, 164,12, 29, 4402, 45, 371, 820, 7, 894, 185, 8221,11, 4048, 2048, 532, 965, 354,12, 1247, 2454, 576, 16863, 1332, 66, 275, 23, 756, 3, 65,134, 6355, 81, 206, 464, 734, 816, 293,126, 31 1051, 92, 8989, 19, 12, 4734, 68, 116, 10, 782,1973, 10, 1847, 1741, 100, 953, 218, 34, 26868, 20348, 230,104,10, 52, 773, 65,1832,18, 121, 4614, 13,1, 9, 201, 15, 194, 323,18, 4, 5285, 2062, 239,14865, 279, 7558, 2, 15, 215, 3530, 1386, 189, 1300, 210, 15, 145, 2449, 5496, 6445, 443, 101, 26438, 38, 14631, 12, 126, 205, 304, 1300, 3611, 125, 165, 75, 456, 1142, 66, 18, 425, 20, 1180, 10, 26, 7626, 2101, 318, 2180, 467, 163, 795, 127, 95, 92, 126, 25, 6887, 992, 65, 237, 26, 29,1043,11, 3937, 139, 7, 159, 6090,134, 41, 552, 954,10,1010, 1899, 2884.
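For reference, a hedged sketch of fitting each series with SciPy's built-in Lomax (Pareto II) law is given below. The arrays meningitis_deaths and nutrition_deaths are assumed to hold the two series listed above (the names are ours, not from the paper), and SciPy's generic fit() estimates each marginal separately; it is not the joint common-shape model of Section 2.

```python
from scipy.stats import lomax
import numpy as np

# Hypothetical arrays holding the two series of annual death counts listed above
meningitis_deaths = np.array([1563, 13, 292, 2520, 453])   # ... full series as listed
nutrition_deaths = np.array([1244, 5, 114, 3015, 1330])    # ... full series as listed

c_men, _, scale_men = lomax.fit(meningitis_deaths, floc=0)  # fix location at zero
c_nut, _, scale_nut = lomax.fit(nutrition_deaths, floc=0)
print(f"Meningitis: shape={c_men:.3f}, scale={scale_men:.3f}")
print(f"Nutrition:  shape={c_nut:.3f}, scale={scale_nut:.3f}")
```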

Following the provided data, we estimate the parameters using Maximum Likelihood Estimation (MLE) and Bayes estimation techniques, as outlined in the preceding sections. Substituting the values, the results obtained are as follows.

In Table 3, we have estimated the parameters (λ, δ₁, and δ₂) of the combined model for deaths due to Meningitis and Nutritional Deficiencies. The parameter values and mean squared errors (MSE) of the Maximum Likelihood Estimators (MLE), the Bayes estimators under the symmetric loss function, and the Bayes estimators obtained using the Gibbs and bootstrapping methods have been computed. From the results, it is evident that the Bayes estimators under the symmetric loss function yield the minimum error. Additionally, asymptotic and Highest Posterior Density (HPD) intervals have been calculated using Gibbs sampling and bootstrapping, and the results are presented in Table 4.

Figure 1: Illustration of deaths due to Meningitis and Nutritional Deficiencies.

Fig. 1 illustrates the number of deaths attributed to Meningitis and Nutritional Deficiencies. Each bar represents the frequency of deaths, while the curve depicts the density of the Lomax distribution. Based on the graph, we observe that deaths due to Meningitis approximately follow a Lomax distribution with parameters shape=8, scale=0.2. Similarly, deaths due to Nutritional Deficiencies also approximately follow a Lomax distribution with parameters shape=8, scale=0.

In summary, our analysis begins with trace plots and density plots, which serve as valuable tools for understanding the behavior of the Markov chain over iterations. Trace plots, illustrated in Figures 2 and 3, show the Markov chain's values against iteration number. A stable and random pattern across iterations is observed, indicating convergence and the adequate representation of the posterior distribution by the chain. Meanwhile, density plots, depicted in Figures 4 and 5, provide estimates of simulated marginal posterior distributions, resembling smoothed histograms. Importantly, the unimodal density plot indicates that the posterior distribution is well-behaved, lacking multimodality or skewness. Overall, our analysis underscores the robustness and reliability of our Bayesian inference process.

Figure 2: Illustration of the trace of the parameters estimated using Gibbs sampling.

Figure 3: Illustration of the trace of the parameters estimated using bootstrapping.

Figure 4: Illustration of the density of the parameters estimated using Gibbs sampling.

Figure 5: Illustration of the density of the parameters estimated using bootstrapping.

8. Conclusion

The focus of our study was to estimate the common shape parameter λ for two Lomax populations, where the scale parameters δ₁ and δ₂ are unknown and potentially different. It is important to note that this problem has not been explored in the existing literature. Similar to the case of a single population, it is not possible to obtain exact expressions for Maximum Likelihood Estimates (MLE) and Bayes estimates for our model. Hence, we employed a numerical approach to derive approximate MLEs for the associated parameters. Using these MLEs, we also obtained 95% asymptotic confidence intervals for the parameters.

Furthermore, we developed approximate Bayes estimators under various priors (vague, Jeffreys, and conjugate), incorporating a symmetric loss function. A comprehensive assessment of all proposed estimators was conducted, evaluating their performance in terms of biases and risk values. Our numerical investigation highlighted the superiority of the Bayes estimators under a conjugate prior, particularly when utilizing the symmetric loss function. These estimators demonstrated better performance than all other alternatives, specifically with respect to mean squared error.

It is imperative to emphasize that our conclusions regarding the suitability of these estimators are exclusively drawn from the outcomes of our numerical simulations. To elucidate the method of estimation, we presented a real-life example. We hope that our study will inspire researchers to explore alternative estimators for the common shape parameter, potentially offering competitive performance against our proposed estimators.

Acknowledgment

I would like to extend my heartfelt gratitude to the individuals and the esteemed VIT-AP Institution whose unwavering support and invaluable contributions have been instrumental in the successful completion of this research endeavor.

Conflict of Interest

The authors declare no conflicts of interest.

A. MLE

The detailed expressions for the notation used in Section 2 are given below:

$$T_1 = \sum_{i=1}^{m}\log\left(1+\frac{x_{1i}}{\delta_1}\right),\qquad T_2 = \sum_{j=1}^{n}\log\left(1+\frac{x_{2j}}{\delta_2}\right),$$

$$T_1' = -\sum_{i=1}^{m}\frac{x_{1i}}{\delta_1^2+\delta_1 x_{1i}},\qquad T_2' = -\sum_{j=1}^{n}\frac{x_{2j}}{\delta_2^2+\delta_2 x_{2j}},$$

$$T_1'' = \sum_{i=1}^{m}\frac{x_{1i}(2\delta_1+x_{1i})}{(\delta_1^2+\delta_1 x_{1i})^2},\qquad T_2'' = \sum_{j=1}^{n}\frac{x_{2j}(2\delta_2+x_{2j})}{(\delta_2^2+\delta_2 x_{2j})^2},$$

$$T_1''' = -\sum_{i=1}^{m}\frac{2x_{1i}(3\delta_1^2+3\delta_1 x_{1i}+x_{1i}^2)}{(\delta_1^2+\delta_1 x_{1i})^3},\qquad T_2''' = -\sum_{j=1}^{n}\frac{2x_{2j}(3\delta_2^2+3\delta_2 x_{2j}+x_{2j}^2)}{(\delta_2^2+\delta_2 x_{2j})^3},$$

and, with $d_1 = \frac{m}{\delta_1^2}-(\lambda+1)T_1''$ and $d_2 = \frac{n}{\delta_2^2}-(\lambda+1)T_2''$,

$$d = \frac{m+n}{\lambda^2}\,d_1 d_2 - (T_1')^2\,d_2 - (T_2')^2\,d_1.$$

B. Bayes

All the notation used in Section 3.2.1 is given below. For the vague prior,

$$\rho_1 = 0,\qquad \rho_2 = -\frac{2}{\delta_1},\qquad \rho_3 = -\frac{2}{\delta_2},$$

$$a_1 = -\frac{2}{\delta_1}\sigma_{12}-\frac{2}{\delta_2}\sigma_{13},\qquad a_2 = -\frac{2}{\delta_1}\sigma_{22}-\frac{2}{\delta_2}\sigma_{23},\qquad a_3 = -\frac{2}{\delta_1}\sigma_{32}-\frac{2}{\delta_2}\sigma_{33}.$$

With d₁, d₂, and d as in Appendix A, the elements σᵢⱼ of [−Lᵢⱼ]⁻¹ are

$$\sigma_{11} = \frac{d_1 d_2}{d},\qquad \sigma_{12} = \sigma_{21} = \frac{T_1'\,d_2}{d},\qquad \sigma_{13} = \sigma_{31} = \frac{T_2'\,d_1}{d},$$

$$\sigma_{22} = \frac{\frac{m+n}{\lambda^2}\,d_2-(T_2')^2}{d},\qquad \sigma_{23} = \sigma_{32} = \frac{T_1' T_2'}{d},\qquad \sigma_{33} = \frac{\frac{m+n}{\lambda^2}\,d_1-(T_1')^2}{d},$$

and the third-derivative terms are

$$A = \frac{2(m+n)}{\lambda^3}\sigma_{11} - T_1''\sigma_{22} - T_2''\sigma_{33},$$

$$B = -2T_1''\sigma_{12} - \left(\frac{2m}{\delta_1^3}+(\lambda+1)T_1'''\right)\sigma_{22},$$

$$D = -2T_2''\sigma_{13} - \left(\frac{2n}{\delta_2^3}+(\lambda+1)T_2'''\right)\sigma_{33}.$$

All quantities are evaluated at the MLEs (λ̂ML, δ̂₁ML, δ̂₂ML).

C. Jeffreys

All the notation used in Section 3.2.2 is given below. With d, d₁, and d₂ as in Appendix A and

$$d_1' = -\frac{2m}{\delta_1^3}-(\lambda+1)T_1''',\qquad d_2' = -\frac{2n}{\delta_2^3}-(\lambda+1)T_2''',$$

differentiation of ρ(θ) = ½ log d gives

$$\rho_1 = \frac{1}{2d}\left[-\frac{2(m+n)}{\lambda^3}\,d_1 d_2 - \frac{m+n}{\lambda^2}\left(T_1'' d_2 + T_2'' d_1\right) + (T_2')^2 T_1'' + (T_1')^2 T_2''\right],$$

$$\rho_2 = \frac{1}{2d}\left[\frac{m+n}{\lambda^2}\,d_1' d_2 - d_1'(T_2')^2 - 2T_1' T_1'' d_2\right],$$

$$\rho_3 = \frac{1}{2d}\left[\frac{m+n}{\lambda^2}\,d_1 d_2' - d_2'(T_1')^2 - 2T_2' T_2'' d_1\right],$$

and

$$a_1 = \rho_1\sigma_{11}+\rho_2\sigma_{12}+\rho_3\sigma_{13},\qquad a_2 = \rho_1\sigma_{21}+\rho_2\sigma_{22}+\rho_3\sigma_{23},\qquad a_3 = \rho_1\sigma_{31}+\rho_2\sigma_{32}+\rho_3\sigma_{33}.$$

D. Conjugate

All the notation used in Section 3.2.3 is given below:

$$\rho_1 = \frac{c_1-1}{\lambda}-b_1,\qquad \rho_2 = \frac{b_2}{\delta_1^2}-\frac{c_2+1}{\delta_1},\qquad \rho_3 = \frac{b_3}{\delta_2^2}-\frac{c_3+1}{\delta_2},$$

$$a_1 = \rho_1\sigma_{11}+\rho_2\sigma_{12}+\rho_3\sigma_{13},\qquad a_2 = \rho_1\sigma_{21}+\rho_2\sigma_{22}+\rho_3\sigma_{23},\qquad a_3 = \rho_1\sigma_{31}+\rho_2\sigma_{32}+\rho_3\sigma_{33}.$$

References

[1] Lomax, K. S. (1954). Business failures: Another example of the analysis of failure data. Journal of the American Statistical Association, 49(268), 847-852.

[2] Bryson, M. C. (1974). Heavy-tailed distributions: Properties and tests. Technometrics, 16, 61-68.

[3] Hassan, A. M., and Al-Ghamdi, A. S. (2009). Optimum step stress accelerated life testing for Lomax distribution. Journal of Applied Sciences Research, 5, 2153-2164.

[4] Aljohani, H. M. (2024). Estimation for the P(X > Y) of Lomax distribution under accelerated life tests. Heliyon, 10(3).

[5] Ijaz, M. (2021). Bayesian estimation of the shape parameter of Lomax distribution under uniform and Jeffery prior with engineering applications. Gazi University Journal of Science, 34(2), 562-577.

[6] Chakraborty, T., Chattopadhyay, S., Das, S., Kumar, U., and Senthilnath, J. (2022). Searching for heavy-tailed probability distributions for modeling real-world complex networks. IEEE Access, 10, 115092-115107.

[7] Okasha, H. M. (2014). E-Bayesian estimation for the Lomax distribution based on type-II censored data. Journal of the Egyptian Mathematical Society, 22, 489-495.

[8] Fitrilia, A., Fithriani, I., and Nurrohmah, S. (2018). Parameter estimation for the Lomax distribution using the E-Bayesian method. Journal of Physics: Conference Series, 1108.

[9] Ellah, H. (2007). Comparison of estimates using record statistics from Lomax model: Bayesian and non-Bayesian approaches. Journal of Statistical Research of Iran JSRI, 3, 139-158.

[10] Hasanain, W. S., Al-Ghezi, N. A. O., and Soady, A. M. (2022). Bayes estimation of Lomax parameters under different loss functions using Lindley's approximation. Italian Journal of Pure and Applied Mathematics, 48, 630-640.

[11] Al-Bossly, A. (2021). E-Bayesian and Bayesian estimation for the Lomax distribution under weighted composite LINEX loss function. Computational Intelligence and Neuroscience, 2021.

[12] Kumari, P., Kumar, V., and Aditi. (2022). Bayesian analysis for two-parameter Lomax distribution under different loss functions. Communications in Mathematics and Applications, 13, 163-170.

[13] Graybill, F. A., and Deal, R. B. (1959). Combining unbiased estimators. Biometrics, 15(4), 543-550.

[14] Moore, B. C., and Krishnamoorthy, K. (1997). Combining independent normal sample means by weighting with their standard errors. Journal of Statistical Computation and Simulation, 58(2), 145-153.

[15] Tripathy, M. R., and Kumar, S. (2014). Equivariant estimation of common mean of several normal populations. Journal of Statistical Computation and Simulation, 85(18), 3679-3699.

[16] Ghosh, M., and Razmpour, A. (1984). Estimation of the common location parameter of several exponentials. The Indian Journal of Statistics, 46, 383-394.

[17] Jin, C., and Pal, N. (1992). On common location of several exponentials under a class of convex loss functions. Calcutta Statistical Association Bulletin, 42, 167-168.

[18] Azhad, Q. J., Arshad, M. A., and Misra, A. (2020). Estimation of common location parameter of several heterogeneous exponential populations based on generalized order statistics. Journal of Applied Statistics, 48(10), 1798-1815.

[19] Nagamani, N., and Tripathy, M. R. (2017). Estimating common scale parameter of two gamma populations: A simulation study. American Journal of Mathematical and Management Sciences, 36, 346-362.

[20] Nagamani, N., Tripathy, M. R., and Kumar, S. (2020). Estimating common scale parameter of two logistic populations: A Bayesian study. American Journal of Mathematical and Management Sciences, 40, 44-67.

[21] Lindley, D. V. (1980). Approximate Bayes method. Trabajos de Estadística, 31, 223-237.

[22] Tripathy, M. R., and Nagamani, N. (2017). Estimating common shape parameter of two gamma populations: A simulation study. Journal of Statistics and Management Systems, 20(3), 369-398.
