
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

Andrew L. Rukhin

Abstract

A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.


Keywords:  DerSimonian-Laird estimator; Groebner basis; Mandel-Paule algorithm; heteroscedasticity; interlaboratory studies; iteration scheme; meta-analysis; parametrized solutions; polynomial equations; random effects model

Year:  2011        PMID: 26989583      PMCID: PMC4551277          DOI: 10.6028/jres.116.004

Source DB:  PubMed          Journal:  J Res Natl Inst Stand Technol        ISSN: 1044-677X


1. Introduction: Meta-Analysis and Interlaboratory Studies

This article is concerned with mathematical aspects of a ubiquitous problem in applied statistics: how to combine measurement data obtained nominally on the same property by methods, instruments, studies, medical centers, clusters, or laboratories of different precision. One of the first approaches to this problem was suggested by Cochran [1], who investigated maximum likelihood (ML) estimates for the one-way, unbalanced, heteroscedastic random-effects model. Cochran returned to this problem throughout his career [2, 3], and Ref. [4] reviews this work. Reference [5] discusses applications in metrology and gives more references. The problem of combining data from several sources is central to the broad field of meta-analysis. It is most difficult when the unknown measurement precision varies among methods whose summary results may not seem to conform to the same measured property. Reference [6] provides some practical suggestions for dealing with the problem.

In this paper we investigate the ML solutions of the random effects model, which is formulated in Sec. 2. By representing the likelihood equations as simultaneous polynomial equations, the so-called Groebner basis for them is derived when there are two sources. A parametrization of such solutions is suggested in Sec. 2.1, where the maxima of the likelihood function are compared for positive and zero between-laboratory variance. A numerical method for solving the likelihood equations, which reduces them to an optimization problem for a homogeneous objective function, is given in Sec. 2.2. An alternative to the ML method, the restricted maximum likelihood, is considered in Sec. 3; an explicit formula for the restricted likelihood estimator in the case of two methods is derived in Sec. 3.2. Section 4 deals with the situation when the method variances are considered known, and an upper bound on the between-method variance is obtained. Section 5 discusses the relationship between likelihood equations and moment-type equations, and Sec. 6 gives some conclusions. All auxiliary material related to an optimization problem and to elementary symmetric polynomials is collected in the Appendix.

2. ML Method and Polynomial Equations

To model the interlaboratory testing situation, denote by n_i the number of observations made in laboratory i, i = 1, …, p, whose observations x_ij have the form

x_ij = μ + b_i + ε_ij,  j = 1, …, n_i.  (1)

Here μ is the true property value, b_i represents the method (or laboratory) effect, which is assumed to be normal with mean 0 and unknown variance σ², and the ε_ij represent independent normal, zero-mean random errors with unknown variances. For fixed i, the i-th sample mean x̄_i = Σ_j x_ij / n_i is normally distributed with mean μ and variance σ² + σ_i², where σ_i² denotes the variance of x̄_i. If the σ's were known, then the best estimator of μ would be the weighted average of the x̄_i with weights proportional to (σ² + σ_i²)^(-1). Since these variances are unknown, the weights have to be estimated. Traditionally, to evaluate σ_i² one uses the classical unbiased statistic s_i² = Σ_j (x_ij − x̄_i)² / (n_i ν_i), ν_i = n_i − 1, which has the distribution σ_i² χ²(ν_i)/ν_i. Since the statistics x̄_i, s_i², i = 1, …, p, are sufficient, we use the likelihood function based on them. The ML solution minimizes in μ, σ², σ_i², i = 1, …, p, the negative logarithm of this function, which is proportional to

Σ_i [ (x̄_i − μ)² / (σ² + σ_i²) + log(σ² + σ_i²) + ν_i s_i² / σ_i² + ν_i log σ_i² ].  (2)

It follows from (2) that the stationary value of μ is the weighted mean of the x̄_i; however, it is quite possible that the likelihood is maximized on the boundary σ² = 0. In order to find these estimates one can replace μ in (2) by the weighted means statistic

x̃ = Σ_i w_i x̄_i / Σ_i w_i,  w_i = (σ² + σ_i²)^(-1),  (3)

which reduces the number of parameters from p + 2 to p + 1. Our goal is to represent the set of all stationary points of the likelihood equations as solutions to simultaneous polynomial equations. To that end, note that

Σ_i w_i (x̄_i − x̃)² = Σ_{i<j} w_i w_j (x̄_i − x̄_j)² / Σ_i w_i.  (4)

This formula, which easily follows from the Lagrange identity [7, Sec. 1.3], will be used with w_i = (σ² + σ_i²)^(-1). We introduce a polynomial P of degree p in σ²,

P = P(σ²) = Π_i (σ² + σ_i²),  (5)

whose coefficients involve e_ℓ, ℓ = 0, 1, …, p, the ℓ-th elementary symmetric polynomial in σ_1², …, σ_p². Another polynomial of interest is

Q = Q(σ²) = Σ_{i<j} (x̄_i − x̄_j)² Π_{k≠i,j} (σ² + σ_k²),  (6)

having degree p − 2 in σ²; each factor Π_{k≠i,j} (σ² + σ_k²) is a polynomial in σ² of degree p − 2 which does not depend on σ_i², σ_j², and Q is a multilinear form in the squared differences (x̄_i − x̄_j)².
Since Σ_i w_i = P′/P, where P′ denotes the derivative of P with respect to σ², the identity (4) can be written as

Σ_i w_i (x̄_i − x̃)² = Q / P′.

The negative log-likelihood function (2) in this notation is

Q/P′ + log P + Σ_i ν_i [ s_i²/σ_i² + log σ_i² ].  (9)

Let P_i be the partial derivative of P with respect to σ_i²; denote by Q_i the same partial derivative of Q, and by P′_i the derivative of P_i with respect to σ². By differentiating (9), we see that the stationary points of (2) satisfy the polynomial equations (10); each of these polynomials has degree 4 in σ_i², i = 1, …, p. When σ² = 0, the equations (10) simplify to the system (11), whose polynomials have degree 3 in σ_i². If σ² > 0, in addition to (10) one has the equation F = 0 in (12), where F has degree 3p − 3 in σ². In both cases the collection of all stationary points forms an affine variety whose structure can be studied via the Groebner basis of the ideal of polynomials (11), or (10) and (12), which vanish on this variety. The Groebner basis allows for successive elimination of variables, leading to a description of the points in the variety, i.e., to a characterization of all (complex) solutions. There are powerful numerical algorithms for the evaluation of such bases [8]. Many polynomial likelihood equations are reviewed in Ref. [9]. We determine the Groebner basis for equations (11) when p = 2 in the next section.
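The elimination step that a lexicographic Groebner basis enables can be illustrated with a computer algebra system. The sketch below (using sympy) runs the computation on a small hypothetical polynomial system, not on the actual likelihood equations (10)–(12); the variable eliminated last appears alone in one basis element, so the system can be solved by back-substitution:

```python
# Illustrative Groebner-basis computation with sympy (a toy system,
# not the likelihood equations of this paper).
import sympy as sp

u, v = sp.symbols("u v")
# Hypothetical polynomial system standing in for the stationarity equations.
f1 = u**2 + v**2 - 1
f2 = u - v**2
# Lexicographic order with v > u: the basis eliminates v,
# leaving a univariate polynomial in u.
basis = sp.groebner([f1, f2], v, u, order="lex")
print(basis.exprs)
```

Here the basis contains a polynomial in u alone, whose roots can then be substituted back to recover v, mirroring the elimination used in Sec. 2.1.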

2.1 ML Method: p = 2

When p = 2, Q = (x̄_1 − x̄_2)². If σ² = 0, the polynomial equations (11) take the form (13). The Groebner basis is useful for solving these equations, as it allows elimination of one of the variables. With n = n_1 + n_2, f = n + n_2, and under the lexicographic term order, the Groebner basis for the equations (13), written in the form (14), consists of the two polynomials G_1 and G_2 given in (16) and (17). This fact can be derived from the definition of the Groebner basis and confirmed by existing computational algebra software. It follows that at the stationary points u, υ one has G_1(u, υ) = G_2(u, υ) = 0. All these points can be found by expressing u through υ via (17), substituting this expression into (16), and then solving the resulting sextic equation (18) for υ. Thus, there are 3 × 6 = 18 complex root pairs, out of which the positive roots u, υ are to be chosen. Although in practice almost all of these roots are complex or negative and are not meaningful, sometimes the number of positive roots is fairly large. For example, in one case with x̄_1 = −0.391, x̄_2 = 0.860, and n_1 = n_2 = 3, there are three positive solutions: (u = 0.263, υ = 0.053), (u = 0.138, υ = 0.106), and (u = 0.035, υ = 0.312). In this particular case one has to compare the likelihood function evaluated at these solutions with the likelihood evaluated when σ² > 0, in which case the stationary points are (u = υ = 0.065, z = 0.230), (u = 0.246, υ = 0.055, z = 0.003), (u = 0.042, υ = 0.234, z = 0.019), and (u = 0.048, υ = 0.065, z = 0.193). (The likelihood is maximized at the last solution.) If σ² = 0, the estimators û, υ̂ satisfy the equations (19) and (20), which imply corresponding inequalities between them; evaluation of second derivatives determines which of these stationary points are minima. These solutions are to be compared with the solutions when σ² > 0, in which case, with y = σ²/(x̄_1 − x̄_2)² > 0, the equations (21)–(23) hold. Notice that (21)–(23) imply that, for y > 0, the corresponding inequalities are all equivalent; this fact excludes coincidence of the two kinds of solutions, except in one situation (whose probability is zero), with the notation x₊ = max(0, x).
The equation (21) implies that 2y + u + υ ≤ 1/2, so that the bound (24) holds. Unfortunately, the Groebner basis for the equations (21)–(23) has a much more complicated structure: the form of the monomials entering the basis polynomials depends on z_1, z_2, n_1, and n_2. To find solutions to (21)–(23), we start with conditions on u and υ under which (21) has a root y > 0. For fixed u and υ, the behavior of the derivative of (9) with respect to σ² is determined by that of the cubic polynomial (12), which now takes the form (25). This derivative is positive if and only if F(y) > 0. As is easy to check, the derivative of F has two roots, −(u + υ)/2 and y* = 1/6 − (u + υ)/2. The polynomial F has no positive roots if and only if either y* ≤ 0 and F(0) ≥ 0, or y* > 0 and F(y*) ≥ 0. We rewrite these conditions as follows: (21) has a positive root if and only if either (26) or (27) holds. If (26) holds, there is a unique positive stationary point. When condition (27) is met, only the largest root of F(y) = 0 (i.e., the one exceeding y*) can be the ML estimator. Analysis of second derivatives shows that (y, u, υ), y > 0, can provide the minimum only if

(y + u)(y + υ) min(ν_1(y + u)/u, ν_2(y + υ)/υ) ≥ y |u − υ|.

Figure 1 shows the region in the (u, υ) plane where (21) has a positive root. Its boundary is formed by two straight segments where 3(u + υ) ≥ 1, and by a cubic curve when u + υ ≤ 1/3. The largest possible value of u or υ in this region is 8/27.
Fig. 1

The region where (21) has a positive root. The squares mark the points where the boundary changes from linear to cubic. Two points of this region at which u or υ take the largest possible value (8/27) are marked by o.

To study further the form of the solutions to (21)–(23), we use the identity (28), which can be shown by direct calculation or by the argument from Sec. 2.2. Thus, assuming the condition z_1/u + z_2/υ + 1/(2y + u + υ) = n, one can parametrize the solutions of Eqs. (21)–(23) as follows: for s in the unit interval,

u = (z_1 s + z_2(1 − s)) / [s(n − 1/K)],  υ = (z_1 s + z_2(1 − s)) / [(1 − s)(n − 1/K)],
u + y = 2w(1 − w)²,  υ + y = 2w²(1 − w),

with K = 2y + u + υ = 2w(1 − w) > 1/n, where w, 0 < w < 1, solves the cubic equation u − υ = 2(1 − 2w)w(1 − w), written as (29). The conditions (26) and (27) reduce to the inequalities (30) and (31). The discriminant of the cubic equation (29) reveals that when s ≠ 1/2, (29) has three real roots, two of them of the same sign as 1 − 2s. If s = 1/2, then w = 1/2. Indeed, the condition y ≥ 0 leads to the following restriction on the range of s: |1 − 2w| ≤ |1 − 2s|, with strict inequality when y > 0. The end points of this domain can be found as solutions to the equation (32), which is a quartic equation (33) in ξ = 1 − 2s. If the equation (33) has two distinct roots in the interval (−1, 1), then the range of s-values is a closed interval on which the polynomial in (32) is negative. Therefore, if there are two roots of (29) in the interval (0, 1) which have the same sign as 1 − 2s, the one with the smaller |1 − 2w| is to be chosen. Thus, the solutions of the equations (21)–(23) are parametrized by s in this interval. To determine conditions for the equation (33) to have two roots in the interval (−1, 1), observe first that it cannot have more than two roots there. Indeed, the polynomial in (33) assumes negative values at the end points, and it has exactly two roots if and only if there is a point in this region at which the polynomial is positive. If the derivative of this polynomial, 4ξ³ − 4(n − 1)ξ/n − 8Δ/n, does not vanish in (−1, 1), then (33) cannot have two roots there.
This happens when this derivative has just one real root, or equivalently when the discriminant of the cubic polynomial is negative, which means that |Δ| > [(n − 1)/(3n)]^{3/2}, and which implies that |Δ| > 1/2. Then the derivative takes negative values at the end points of the interval (−1, 1). If ξ_0 is the point of maximum of the polynomial (33) in (−1, 1), then the roots exist if and only if the condition (35) holds. According to inequality (34), and by (35), ξ_0 must lie in the sub-interval of (−1, 1) on which the derivative decreases. Thus (36) and (37) follow. If the polynomial takes a positive value at ξ = 0, Eq. (33) must have two roots. Since sign(ξ_0) = −sign(Δ), we obtain the condition (38). The condition (38) can also be shown to be sufficient for the existence of the two roots. This parametrization and the Groebner basis for (13) allow one, for given z_1, z_2, to compare the minimal value of (9) when y > 0 with that when y = 0. Figures 2–4 show the bounded regions in the (z_1, z_2) plane where the former is smaller, for n_1 = n_2 = 2, n_1 = n_2 = 4, and n_1 = 8, n_2 = 3. When n_1 = n_2 ≥ 3, this region is a triangle, z_1 + z_2 < c; for n_1 ≠ n_2 it is more complicated.
Fig. 2

The region where the likelihood is maximized at a positive y, when n_1 = 2, n_2 = 2.

Fig. 3

The region where the likelihood is maximized at a positive y, when n_1 = 4, n_2 = 4.

Fig. 4

The region where the likelihood is maximized at a positive y, when n_1 = 8, n_2 = 3.

We summarize the results obtained. Theorem 2.1. When p = 2, the Groebner basis for the equations (13) consists of the polynomials (16) and (17), and for σ² > 0 the solutions are parametrized by s in the interval whose end points can be found from (33). We conclude by noting the relationship of the problem discussed here to the likelihood solutions of the classical Behrens-Fisher problem [10], which assumes that σ² = 0.

2.2 Solving Likelihood Equations Numerically

In view of the difficulty of evaluating the Groebner basis for p > 2, an iterative method for solving the optimization problem in (2) is of interest. Notice that the sum of the first two terms in (2) is a homogeneous function of σ², σ_1², …, σ_p² of degree −1. Therefore, with n = Σ_i n_i, the problem can be written in the form (39). The minimizing value of λ is given by (40). Thus the objective function in (39) becomes homogeneous of degree 0 in y_0, y_1, …, y_p, which reduces the problem to minimization of such a function. If y_0, y_1, …, y_p is a solution to the problem (39), then the correspondingly rescaled values form a ML solution. To minimize the objective function in (39) one can impose a restriction on y_0, y_1, …, y_p, such as Σ_i y_i = 1 or λ_0 = 1. Note that the objective tends to +∞ when y_i → 0 or y_i → +∞ for any fixed y_0 > 0; its derivative must therefore be positive for large y_i and negative for sufficiently small y_i. If y_0 > 0, by equating this derivative to zero we obtain (42), where Σ_i λ_i = nλ_0. The equation (42) must have a positive root, which yields the inequalities (43) and (44) for λ evaluated at a stationary point. For a fixed value of λ_0, say λ_0 = 1, Eq. (42) leads to an iteration scheme with specified initial values of y_0(0) and y_i(0). We take these to be the estimates arrived at by the method of moments as described later in Sec. 5, but once they are given, one can solve the cubic equation (46) for υ = y_0/y_i. It is easy to see that each of these equations has either one or three positive roots. If there is just one root, then it uniquely defines υ; in the case of three positive roots, the root which minimizes (40) is chosen. Under this convention, at stage k in (46) the current values υ(k), y_0(k), y_i(k) are used, and after solving (46) we update via (47) and (48). As in [11], (46) defines a sequence converging to a stationary point, and at each step the value of (40) is decreased; this is the content of Theorem 2.2 on the successive-substitution iteration defined by equations (46)–(48). Notice that Vangel and Rukhin tacitly assume in [11] that y_0 > 0.
The case of the global minimum attained at the boundary, i.e., when y_0 = 0, can be handled as follows. Equating (41) to zero gives (49), and a simpler version of the iterative scheme, (50), k = 0, 1, …, with y_0 = 0, converges fast. However, the solutions obtained via (46), (47), (48), and (50) are to be compared by evaluating L. To ensure that a global minimum has been found, several starting values should be tried. The maximum likelihood estimator and the restricted maximum likelihood estimator discussed in the next section can be computed via their R-language implementation in the lme function from the nlme library [12]. However, this routine has a potential (false or singular) convergence problem in some practical situations.
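Where a black-box optimizer is acceptable, the negative log-likelihood (2) can also be minimized directly. The sketch below codes the standard objective for the model of Sec. 2 based on the sufficient statistics x̄_i, s_i² (the common mean is profiled out analytically); the data values are made up for illustration, and this is an alternative to, not an implementation of, the successive-substitution scheme above:

```python
# Direct minimization of the profile negative log-likelihood for the
# one-way heteroscedastic random-effects model. Log-variance
# parametrization keeps the search unconstrained. Illustrative sketch.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, xbar, s2, n):
    # theta = (log sigma_b^2, log sigma_1^2, ..., log sigma_p^2),
    # where sigma_i^2 is the variance of the i-th sample mean.
    y = np.exp(theta[0])           # between-method variance
    sig2 = np.exp(theta[1:])       # within-method variances of the means
    w = 1.0 / (y + sig2)           # weights for the sample means
    mu = np.sum(w * xbar) / np.sum(w)   # profile out the common mean
    nu = n - 1
    return 0.5 * np.sum(
        np.log(y + sig2) + w * (xbar - mu) ** 2
        + nu * np.log(sig2) + nu * s2 / sig2
    )

# Made-up data: p = 3 methods.
xbar = np.array([10.1, 9.7, 10.4])
s2 = np.array([0.020, 0.050, 0.010])   # unbiased estimates of Var(xbar_i)
n = np.array([5, 8, 6])
res = minimize(neg_log_lik, x0=np.zeros(4), args=(xbar, s2, n),
               method="Nelder-Mead")
y_hat = np.exp(res.x[0])               # ML between-method variance
```

As the text notes, several starting values should be tried to guard against local minima or a boundary solution y = 0.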

3. Restricted Maximum Likelihood

A possible drawback of the maximum likelihood method is that it may lead to biased estimators. Indeed, the maximum likelihood estimators of the σ's do not take into account the loss in degrees of freedom that results from estimating μ. This undesirable feature is eliminated in the restricted maximum likelihood (REML) method, which maximizes the likelihood corresponding to the distribution of linear combinations of observations whose expectation is zero. This method, discussed in detail by Harville [13], has gained considerable popularity and is now employed as a tool of choice by many statistical software packages. The negative restricted likelihood function has the form (51). By using the notation (5) and (6), we can rewrite it as (52). To estimate σ² and σ_i², i = 1, …, p, one has to find the minimal value of Q/P′ + log P′. The derivative of this function in σ² is proportional to a polynomial H of degree 2p − 3 in this variable, as opposed to 3p − 3, which is the degree of the corresponding polynomial F in (12) under the ML scenario. For fixed σ_i², i = 1, …, p, all p roots of the polynomial P = P(σ²) are real numbers, so that the inequality (54) holds [14, 3.3.29]. An application of (54) shows that, since P(σ²) > 0, if F has a positive root σ², then H also must have a positive root τ². Moreover, τ² ≥ σ², so that the ML estimator of σ² is always smaller than the REML estimator of the same parameter for the same data. The polynomial equations for the restricted likelihood method have the form (56), with each of these polynomials being of degree 3 in σ_i², i = 1, …, p. When σ² > 0, Eqs. (56) have to be augmented by the equation H = 0.
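The REML idea can be sketched numerically in a simplified setting: unlike the joint estimation treated in this section, the within-method variances of the means are plugged in as known, so that only the between-method variance is optimized. The restricted criterion then adds a log-sum-of-weights term to the profile ML criterion, which is the standard penalty for the degree of freedom lost in estimating μ (illustrative data):

```python
# Simplified REML sketch: within-method variances of the means (v_i)
# treated as known; only the between-method variance y is estimated.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_restricted_ll(log_y, xbar, v):
    y = np.exp(log_y)
    w = 1.0 / (y + v)                       # weights for the sample means
    mu = np.sum(w * xbar) / np.sum(w)       # weighted mean
    # Profile ML criterion plus log(sum of weights): the REML penalty.
    return 0.5 * (np.sum(np.log(y + v) + w * (xbar - mu) ** 2)
                  + np.log(np.sum(w)))

xbar = np.array([10.1, 9.7, 10.4])          # made-up sample means
v = np.array([0.020, 0.050, 0.010])         # plug-in variances of the means
res = minimize_scalar(lambda t: neg_restricted_ll(t, xbar, v),
                      bounds=(-20.0, 5.0), method="bounded")
tau2_reml = np.exp(res.x)                   # REML between-method variance
```

The log-variance parametrization keeps the search on y > 0; a boundary solution y = 0 shows up as the optimizer pushing toward the lower bound.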

3.1 Solving REML Equations

To solve the optimization problem in (51) in practice, one can use a method similar to that in Sec. 2.2. Indeed, the problem can be written in the form (57). The minimizing value of λ is analogous to (40), and the objective function in (57) is homogeneous of degree 0 in the y's. If y_0, y_1, …, y_p is a solution to the problem (57), then the correspondingly rescaled values form a REML solution. When y_0 > 0, the function in (57) takes the form (58). Its derivative with respect to y_i is (59). As in Sec. 2.2, for a fixed value λ_0 = 1 we can use an iterative scheme based on (59). However, now one has to specify initial values of y_i(0) and of A(0) = Σ_i (y_0 + y_i)^(-1), after which the cubic equations (60) for υ = y_0/y_i are solved for i = 1, …, p, with updating as in (47) and (48). Each of these equations has either one or three positive roots. If there is just one root, then it defines υ; out of three roots, the largest is taken. If y_0 = 0, then with the vectors e = (1, …, 1), z = (z_1, …, z_p), and the p × p symmetric matrix R defined by positive elements, (58) takes the form (61). The gradient of the function in (61) and its Hessian (62), (63), where one of the vectors has coordinates n_i/z_i, i = 1, …, p, lead to the iteration scheme (64), k = 0, 1, …, which converges fast. The solutions obtained via (60) and (64) are to be compared by evaluating RL in (57).

3.2 REML: p = 2

To find the REML solutions when p = 2, notice that (51) takes the form (65), which shows that under the appropriate condition the REML estimators are given by the explicit formulas (66) and (67). Indeed, this choice simultaneously minimizes each of the three bracketed terms in (65), guaranteeing the global minimum at this point. In the opposite case, as we will see, σ² = 0. To find the estimators of σ_i², i = 1, 2, one can employ the Groebner basis of the polynomial equations (56), which in the notation of Sec. 2.1 take the form (68). This Groebner basis consists of three polynomials whose coefficients are not needed here. All stationary points of the restricted likelihood equations can be found by solving the cubic equation G_5(υ) = 0 for υ > 0, and substituting this solution into (71), which allows one to express u through υ. Thus, in this case there are three nonzero complex root pairs u, υ. Another approach is to use the argument in Sec. 3.1, which for a fixed sum 2y_0 + y_1 + y_2 = 1 leads to the optimization problem (73). Then, according to Lemma 1 in the Appendix, for a fixed sum y_1 + y_2 = y, the minimum in (73) monotonically decreases when 0 < y ≤ 1, so that this minimum is attained at the boundary, y_1 + y_2 = 1, in which case indeed y_0 = 0. Thus, one can solve the cubic equation (74) for w = 1 − 2y_1, where δ = (n_2 − n_1)/2 and, as before, Δ = (z_2 − z_1)/2. An alternative method is to use the iteration scheme (64) to solve (57).

4. Known Variances

4.1 Conditions and Bounds for Strictly Positive Variance Estimates

In view of the rather complicated nature of the likelihood equations, in many situations it is desirable to have a simpler method of estimating the common mean μ. The most straightforward way is to assume that the variances σ_i², i = 1, …, p, are known. In this approach, essentially suggested in Ref. [15] but also pursued in Refs. [16, 17], these variances are taken to be the s_i² (or a multiple thereof). Because of (3), the only parameter to estimate is y = σ². We give here upper bounds on the ML estimator and on the REML estimator. Theorem 4.1. Assume that the variances σ_i², i = 1, …, p, are known. Then the bounds (75) and (76) hold; in particular, if the condition (78) is met, then the estimator vanishes. Proof: We prove first the inequality (79). Indeed, one can write (80), and Lemma 3 in the Appendix shows that, since (p − 1 − ℓ)|k − 1 − ℓ| ≤ (p − 1)(k − 1), (79) holds. One then obtains for the polynomial H in (53) an expression which is positive under condition (78), so the REML estimate is zero. Similarly, (77) and (54) imply the ML counterpart. To prove (76), observe that with y replaced by y + a, a formula like (79) holds for any positive a; the only modification is that the coefficients are replaced by H^(k)(a)/k!. Then a version of Lemma 3 in the Appendix applies, and the argument, as before, shows that H^(k)(a) ≥ 0, k = 0, …, 2p − 3, so that (76) is valid. The proof of (75) is similar. □ When p = 2, the bounds (75) and (76) are sharp, as (24) and (67) show. When p increases, their accuracy decreases. Section 4.2 gives necessary and sufficient conditions for a positive REML estimate when p = 3. It is possible to get better estimates under additional conditions. For example, if the ordering of the two sequences in the notation of Lemma 2 is the same, then the maximum over 1 ≤ i < j ≤ p in (77) and (78) can be replaced by the average.

4.2 Example: Restricted Maximum Likelihood for p = 3

When p = 3, Q(y) = q_0 y + q_1 is a polynomial of degree one, and H(y) is a polynomial of degree three. If H(0) = h_3 < 0, it has a positive root, which means that the REML estimate is positive. Otherwise, the existence of a positive root is related to the sign of the discriminant D. If D < 0 and h_3 ≥ 0, there is just one real root, which must be negative, and then the estimate is zero. If D ≥ 0, then there are three real roots. The condition h_3 ≥ 0 implies that either two of them are positive and one is negative (so that at least one of the coefficients h_1 or h_2 is negative), or all three roots are negative. Separation between these cases depends on the ratio E_1. Then h_0 = 18 and h_1/q_0 = 18(E_1 − 1/6). Actually, as follows from the proof of Theorem 4.1, E_1 ≥ λ, but we do not use this fact here. Algebra shows that when h_1 = 0, one has D = 544(E_1 − 3λ)²(3λ + 1/16 − E_1); on the curve h_2 = 0, D factorizes similarly. Let the first quantity be the smallest positive root of the cubic equation (86), which is defined for λ < 0.5137…, and let the second be the (only) positive root of the equation (87). The discriminant of (87) is negative, and according to Descartes' rule of signs there are always two complex roots and one positive root, which is monotonically increasing in λ. In this notation, when λ ≤ 1/16, the region where h_3 ≥ 0 but the REML estimate is still positive is formed by three curves: (i) the curve h_1 = 0 between the point M_1, where it intersects the boundary curve, and the point M_2; (ii) the curve h_2 = 0 between M_2 and the point M_3, where D = 0 intersects it; and (iii) the cubic (in E_1) curve corresponding to the equation D = 0, which connects the points M_3 and M_1. See Fig. 5. If λ < 2/81, this region also includes part of the curve h_1 = 0, but the probability of having the likelihood solution there is zero.
Fig. 5

The region where h_3 ≥ 0 and the REML estimate is positive, when λ = 1/27. The solid line is h_3 = 0, the dash-dotted line is the second boundary curve, and the dotted line is D = 0. The point M_1 is marked by +, the point M_2 by a square, and the point M_3 by o.

When λ > 1/16, one has h_1 > 0. Also, then E_2 > λ/3, so that h_2 > 0. Thus, in this situation, the REML estimate vanishes if and only if h_3 ≥ 0.

5. Moment-Type Equations

5.1 Weighted Means Statistics

When the within-lab and between-lab variances σ_i² and σ² are known, the best (in terms of mean squared error) unbiased estimator of the treatment effect μ in the model (1) is the weighted means statistic (3) with weights w_i = (σ² + σ_i²)^(-1). Even without the normality assumption, for these optimal weights E x̃ = μ and

Var x̃ = (Σ_i w_i)^(-1).  (88)

If the weights w_i in (3) are arbitrary, then

E Σ_i w_i (x̄_i − x̃)² = Σ_i w_i (σ² + σ_i²) − Σ_i w_i² (σ² + σ_i²) / Σ_i w_i.  (90)

In particular, when the weights are optimal, this expectation equals p − 1. The simplest way to estimate the within-trial variances is by the available s_i², but the problem of estimating the between-trial variance component σ² remains.
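The weighted means statistic (3) with plug-in variances is straightforward to compute; a short sketch with illustrative numbers (the data are made up):

```python
# Weighted means statistic: with known variances, the best unbiased
# estimator of mu weights each sample mean inversely to its total variance.
import numpy as np

def weighted_mean(xbar, within_var_of_mean, between_var):
    w = 1.0 / (between_var + within_var_of_mean)   # optimal weights
    return np.sum(w * xbar) / np.sum(w), w

xbar = np.array([10.1, 9.7, 10.4])        # sample means (made up)
v = np.array([0.020, 0.050, 0.010])       # variances of the means (made up)
mu_hat, w = weighted_mean(xbar, v, between_var=0.05)
var_mu_hat = 1.0 / np.sum(w)              # formula (88)
```

By construction the weighted residuals sum to zero, Σ w_i (x̄_i − μ̂) = 0, which is the stationarity condition behind (3).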

5.2 DerSimonian-Laird Procedure

By employing the idea behind the method of moments, DerSimonian and Laird [18] made use of the identity (90) as an estimating equation for μ and σ², with the σ_i² estimated by the s_i², in the following way. For weights of the form

w_i = (y + s_i²)^(-1),  (91)

determine a non-negative y = y_DL from the formula (92), and put σ̂² = y_DL. Here x̃ = Σ_i x̄_i s_i^(-2) / Σ_i s_i^(-2) is one of the traditional estimators of the common mean (the Graybill-Deal estimator). In other words, the statistic x̃ and the weights corresponding to σ² = 0 are used to evaluate the weighted sum of squared deviations, which is then employed to estimate σ² via (90).
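The DerSimonian-Laird recipe can be sketched in a few lines. The moment formula used below is the standard one from [18] (truncation at zero included); the data are made up for illustration:

```python
# DerSimonian-Laird moment estimator of the between-method variance:
# start from Graybill-Deal weights w_i = 1/v_i, form the heterogeneity
# statistic, and match its expectation (identity (90) with plug-in weights).
import numpy as np

def dersimonian_laird(xbar, v):
    w = 1.0 / v
    x_gd = np.sum(w * xbar) / np.sum(w)        # Graybill-Deal estimator
    q = np.sum(w * (xbar - x_gd) ** 2)         # weighted sum of squares
    p = len(xbar)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    y_dl = max(0.0, (q - (p - 1)) / denom)     # truncate at zero
    # Re-weight with the estimated between-method variance.
    w_dl = 1.0 / (y_dl + v)
    mu_dl = np.sum(w_dl * xbar) / np.sum(w_dl)
    return y_dl, mu_dl

y_dl, mu_dl = dersimonian_laird(np.array([10.1, 9.7, 10.4]),
                                np.array([0.040, 0.0625, 0.0167]))
```

When the heterogeneity statistic does not exceed p − 1, the estimator is set to y = 0, as discussed below for the Mandel-Paule rules as well.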

5.3 Mandel-Paule Method

The Mandel-Paule algorithm uses weights of the form (91) as well, but now y = y_MP, which is designed to approximate σ², is found from the moment-type estimating equation

Σ_i (x̄_i − x̃)² / (y + s_i²) = p − 1;  (93)

see Refs. [15, 19]. The modified Mandel-Paule procedure with y = y_MMP is defined as above, but p − 1 in the right-hand side of (93) is replaced by p. Notice that when p = 2, the DerSimonian-Laird estimator coincides with the Mandel-Paule rule, as in this case both prescriptions determine the same value of y, so that this estimator is similar to the REML estimates (66) and (67). In the general case, both of these rules set y = 0 when Σ_i (x̄_i − x̃)²/s_i² ≤ p − 1. It was shown in Ref. [20] that the modified Mandel-Paule rule is characterized by the following fact: the ML estimator of σ² coincides with y_MMP if, in the reparametrized version of the likelihood equation, the weights w_i admit the representation (91). As a consequence, the corresponding weighted means statistic (3) must be close to the ML estimator, so that the modified Mandel-Paule estimator approximates its ML counterpart. Thus, the modified Mandel-Paule estimator can be interpreted as a procedure which uses weights of the form (91) instead of the solutions of the likelihood equations, which are more difficult to find, and still maintains the same estimate of σ² as ML. A similar interpretation holds for the original Mandel-Paule rule and the REML function [21]. For this reason both Mandel-Paule rules are quite natural. It is also natural to use the weights (91) with y = y_MP determined from the Mandel-Paule equation (93) as a first approximation when solving the likelihood equations via the iterative scheme (47) and (48).
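Since the left-hand side of (93) is decreasing in y, the Mandel-Paule equation can be solved by simple bisection; a sketch with illustrative (made-up) data:

```python
# Mandel-Paule: solve sum_i (xbar_i - xtilde(y))^2 / (y + v_i) = p - 1
# for y >= 0, where xtilde(y) is the weighted mean with weights (y + v_i)^-1.
# The left-hand side decreases in y, so bisection applies.
import numpy as np

def mandel_paule(xbar, v, tol=1e-10):
    p = len(xbar)
    def excess(y):
        w = 1.0 / (y + v)
        xt = np.sum(w * xbar) / np.sum(w)
        return np.sum(w * (xbar - xt) ** 2) - (p - 1)
    if excess(0.0) <= 0:          # no positive root: set y = 0
        return 0.0
    lo, hi = 0.0, 1.0
    while excess(hi) > 0:         # bracket the root
        hi *= 2.0
    while hi - lo > tol:          # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

y_mp = mandel_paule(np.array([10.1, 9.7, 10.4]),
                    np.array([0.040, 0.0625, 0.0167]))
```

Replacing p − 1 by p in the function above gives the modified Mandel-Paule rule.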

5.4 Uncertainty Assessment

It is tempting to use formula (88) to obtain an estimator of the variance of x̃. For example, DerSimonian and Laird [18] suggested an approximate formula for the estimate of the variance of their estimator. Similarly motivated by (88), Paule and Mandel [19] suggested an analogous estimate of the variance of x̃. However, these estimators typically underestimate the true variance (Ref. [5]). They are not GUM consistent [22] in the sense that the variance estimate is not representable as a quadratic form in the deviations x̄_i − x̃. Horn, Horn and Duncan [23], in the more general context of linear models, suggested a GUM-consistent estimate of Var x̃ as a quadratic form in these deviations with coefficients involving ω_i = w_i / Σ_j w_j. When the plug-in weights (91) are used, this yields the estimate (98). Simulations show that (98) gives good confidence intervals; for the Mandel-Paule rule or the DerSimonian-Laird procedure these outperform the intervals based on the naive variance formulas. Still, when all the sample means are close, y = 0, and a nonzero uncertainty estimate may be a more satisfactory answer than δ ≈ 0. When p = 2, δ is an increasing function of y, with the largest value (x̄_1 − x̄_2)²/4 attained as y → ∞. An alternative method of obtaining confidence intervals for μ on the basis of REML estimators was suggested in Ref. [24], via an adjusted variance estimator based on the inverse of the Fisher information matrix. This (not GUM-consistent) estimator is more complicated.
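A sketch of the Horn-Horn-Duncan style variance estimate follows. The quadratic-form expression coded here, Σ ω_i²(x̄_i − x̃)²/(1 − ω_i) with ω_i = w_i/Σw_j, is the commonly cited form of that estimator; it is an assumption in so far as the displayed formula was not preserved in this record:

```python
# GUM-consistent variance estimate for a weighted mean: a quadratic form
# in the deviations (xbar_i - xtilde). Assumed Horn-Horn-Duncan form:
# sum_i omega_i^2 (xbar_i - xtilde)^2 / (1 - omega_i), omega_i = w_i/sum(w).
import numpy as np

def hhd_variance(xbar, w):
    omega = w / np.sum(w)                 # normalized weights
    xt = np.sum(omega * xbar)             # weighted mean
    return np.sum(omega ** 2 * (xbar - xt) ** 2 / (1.0 - omega))

xbar = np.array([10.1, 9.7, 10.4])        # made-up sample means
# Plug-in weights of the form (y + v_i)^-1 with an illustrative y.
w = 1.0 / (0.08 + np.array([0.040, 0.0625, 0.0167]))
var_hat = hhd_variance(xbar, w)
```

Unlike the naive 1/Σw_i formula, this estimate remains positive whenever the sample means differ, even when the between-method variance estimate is zero.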

6. Conclusions

The original motivation for this work was an attempt to employ modern computational algebra techniques by evaluating the Groebner bases of the polynomial likelihood equations for the random effects model. While this attempt leads to an explicit answer when there are two labs with no between-lab effect, it was not successful in more general situations. The classical iterative algorithms appear to be more efficient in this application, although they do not guarantee a global optimum. Simplified procedures based on the method of moments, especially the DerSimonian-Laird method, deserve much wider use in interlaboratory studies.

1.  Maximum likelihood analysis for heteroscedastic one-way random effects ANOVA in interlaboratory studies.

Authors:  M G Vangel; A L Rukhin
Journal:  Biometrics       Date:  1999-03       Impact factor: 2.571

2.  Small sample inference for fixed effects from restricted maximum likelihood.

Authors:  M G Kenward; J H Roger
Journal:  Biometrics       Date:  1997-09       Impact factor: 2.571

3.  Meta-analysis in clinical trials.

Authors:  R DerSimonian; N Laird
Journal:  Control Clin Trials       Date:  1986-09
