Calibration with confidence: a principled method for panel assessment.

R. S. MacKay, R. Kenna, R. J. Low, S. Parker.

Abstract

Frequently, a set of objects has to be evaluated by a panel of assessors, but not every object is assessed by every assessor. A problem facing such panels is how to take into account different standards among panel members and varying levels of confidence in their scores. Here, a mathematically based algorithm is developed to calibrate the scores of such assessors, addressing both of these issues. The algorithm is based on the connectivity of the graph of assessors and objects evaluated, incorporating declared confidences as weights on its edges. If the graph is sufficiently well connected, relative standards can be inferred by comparing how assessors rate objects they assess in common, weighted by the levels of confidence of each assessment. By removing these biases, 'true' values are inferred for all the objects. Reliability estimates for the resulting values are obtained. The algorithm is tested in two case studies: one by computer simulation and another based on realistic evaluation data. The process is compared to the simple averaging procedure in widespread use, and to Fisher's additive incomplete block analysis. It is anticipated that the algorithm will prove useful in a wide variety of situations such as evaluation of the quality of research submitted to national assessment exercises; appraisal of grant proposals submitted to funding panels; ranking of job applicants; and judgement of performances on degree courses wherein candidates can choose from lists of options.

Keywords:  assessment; calibration; confidence; evaluation; model comparison; uncertainty

Year:  2017        PMID: 28386432      PMCID: PMC5367308          DOI: 10.1098/rsos.160760

Source DB:  PubMed          Journal:  R Soc Open Sci        ISSN: 2054-5703            Impact factor:   2.963


Introduction

This paper addresses the widespread problem of how to take into account differences in standards, confidence and bias in assessment panels, such as those evaluating research quality or grant proposals, employment or promotion applications, and classification of university degree courses, in situations where it is not feasible for every assessor to evaluate every object to be assessed. A common approach to assessment of a range of objects by such a panel is to assign to each object the average of the scores awarded by the assessors who evaluate that object. This approach is represented by the cell labelled ‘simple averaging’ (SA) in the top left of a matrix of approaches listed in table 1, but it ignores the likely possibility that different assessors have different levels of stringency, expertise and bias [1]. Some panels shift the scores for each assessor to make the average of each take a normalized value, but this ignores the possibility that the set of objects assigned to one assessor may be of a genuinely different standard from that assigned to another. For an experimental scientist, the issue is obvious: calibration.
Table 1.

Panel Assessment Methods. The matrix of four approaches according to use of calibration and/or confidences. Simple averaging (SA) is the base for comparisons. Fisher's IBA does not deal with varying degrees of confidence and the confidence-weighted averaging does not achieve calibration. The method proposed herein (CWC) accommodates both calibration and confidences.

                    | without confidences             | with confidences
without calibration | simple averaging (SA)           | confidence-weighted averaging (CWA)
with calibration    | incomplete block analysis (IBA) | calibration with confidence (CWC)
One solution is to seek to calibrate the assessors beforehand on a common subset of objects, perhaps disjoint from the set to be evaluated [2]. This means that they each evaluate all the objects in the subset and then some rescaling is agreed to bring the assessors into line as far as possible. This would not work well, however, in a situation where the range of objects is broader than the expertise of a single assessor. Also, regardless of how well the assessors are trained, differences between individuals' assessments of objects remain in such ad hoc approaches [3]. If the expertise of two assessors overlaps on some subject, however, any discrepancy between their evaluations can be used to infer information about their relative standards. Thus, if the graph Γ′ on the set of assessors, formed by linking two whenever they assess a common object, is sufficiently well connected, one can expect to be able to infer a robust calibration of the assessors and hence robust scores for the objects. The construction of this graph is illustrated in figure 1, beginning from the bipartite graph Γ showing which objects are assessed by which assessor.
Figure 1.

Three examples of assessment graphs Γ showing which object o is assessed by which assessor a, and the resulting graphs Γ′ on the set of assessors, where two assessors are linked if they assess an object in common. Case (a) produces a fully connected assessor graph, case (b) a moderately connected graph, whereas case (c) is disconnected.

One approach to achieving such calibration was developed by Fisher [4], in the context of trials of crop treatments. Denoting the score from assessor a for object o by s_ao, Fisher's approach is based on fitting a model of the form s_ao = v_o + b_a + ε_ao, with the ε_ao independent identically distributed random variables of mean zero. Then b_a is the bias inferred for assessor a and v_o is the value inferred for object o. Fisher's approach is known as additive incomplete block analysis (IBA) and a body of associated literature and applications has since been developed (e.g. [5]), though its use in panel assessment seems rare. It is represented as the bottom left entry of table 1. Another ingredient that is important in many panel assessments, however, is the different weights that may be put on different assessments. We refer to these weights as 'confidences'. Fisher's IBA does not take different levels of confidence into account. If the assessors express confidences in their assessments, for example by some pre-determined weights assigned to types of assessment or by the assessors declaring confidences in each of their scores, then it is natural to replace SA by confidence-weighted averaging. This is represented as the top right entry of table 1, but it does not address the calibration issue, so we do not consider it further. In this paper, we present and test a method to calibrate scores taking confidences into account; that is, we complete the matrix of approaches represented in table 1, where our method is termed calibration with confidence (CWC).
We demonstrate that the method can achieve a greater degree of accuracy with fewer assessors than either SA or IBA, and we derive robustness estimates taking the confidences into account. We are aware of two other schemes that incorporate confidences into a calibration process. One is the abstract-review method for the SIGKDD'09 conference (section 4 of [6]; see also [7]). The other is the abstract-review method used for the NIPS2013 conference (building on [8] and described in [9]). Our method has the advantages of simplicity of implementation and a straightforward robustness analysis. We leave detailed comparison with methods such as these for future publication.
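As an illustration of Fisher's additive model, the least-squares fit can be set up directly. The following is a minimal sketch in Python with NumPy, on an invented toy assignment (all names and numbers here are for demonstration only, not from the paper):

```python
import numpy as np

# Invented toy data: 3 assessors, 4 objects; each tuple is
# (assessor index a, object index o, score s_ao).
scores = [(0, 0, 70), (0, 1, 55), (1, 1, 60), (1, 2, 48),
          (2, 2, 52), (2, 3, 75), (0, 3, 68)]
n_assessors, n_objects = 3, 4

# Design matrix for the additive model s_ao = v_o + b_a + eps_ao.
A = np.zeros((len(scores), n_objects + n_assessors))
s = np.zeros(len(scores))
for row, (a, o, sc) in enumerate(scores):
    A[row, o] = 1.0               # coefficient of v_o
    A[row, n_objects + a] = 1.0   # coefficient of b_a
    s[row] = sc

# The model is degenerate (add k to every v_o, subtract k from every b_a);
# append a row enforcing sum_a b_a = 0 to pin down one solution.
A = np.vstack([A, np.concatenate([np.zeros(n_objects), np.ones(n_assessors)])])
s = np.append(s, 0.0)

x, *_ = np.linalg.lstsq(A, s, rcond=None)
v, b = x[:n_objects], x[n_objects:]
print("inferred values v_o:", np.round(v, 2))
print("inferred biases b_a:", np.round(b, 2))
```

Because the residual of the original rows is invariant along the degenerate direction, the appended constraint row is satisfied exactly by the minimizer.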

The model

Let us suppose that each assessor is assigned a subset of the objects to evaluate. Denote the resulting set of (assessor, object) pairs by E. Let us further suppose that the score s_ao that assessor a assigns to object o is a real number related to a 'true' value v_o for the object by

    s_ao = v_o + b_a + ε_ao,    (2.1)

where b_a can be called the bias of assessor a and the ε_ao are independent zero-mean random variables. Such a model forms the basis for additive IBA. This was also proposed in ref. [10] (see eqn (8.2b) therein) but without a method to estimate the true values. Here we will achieve this and make a significant improvement, namely the incorporation of varying confidences in the scores. To take into account the varying expertise of the assessors with respect to the objects, we propose that in addition to the score s_ao, each assessor is asked to specify a level of confidence for that evaluation. This could be in the form of a rating such as 'high', 'medium', 'low', as requested by some funding agencies, but we propose to allow something more general and akin to experimental science. Confidence can be estimated by asking assessors to specify an uncertainty σ_ao > 0 for their score, and then the confidence level (or 'precision') is taken to be

    c_ao = σ_ao^(−2).    (2.2)

The instructions to the assessors can be to choose s_ao and σ_ao so that roughly two-thirds of their probability distribution for the score lies in [s_ao − σ_ao, s_ao + σ_ao], one-sixth above this interval and one-sixth below it. Methods for training assessors to estimate uncertainties are presented in [11]. There are also methods for training assessors on the assessment criteria to improve their accuracy [12], which could also be expected to have the beneficial effect of reducing their uncertainties. So let us suppose that

    ε_ao = σ_ao η_ao,    (2.3)

with the η_ao independent zero-mean random variables of common variance w. For the moment, we set w = 1; extensions to other values of w are considered in appendix A, and in particular are necessary if confidence is expressed only qualitatively.
In the case that confidences are reported as only high, medium or low, they can be converted into quantitative ones by, for example, choosing λ ≈ 2 and setting c = λ^2, 1, λ^(−2), respectively. The interpretation of λ is the ratio of the uncertainty for a low-confidence evaluation to that for a medium one, and for a medium one to a high one. Then w is unspecified but can be fitted from the data, as in appendix A. Thus, our basic model is

    s_ao = v_o + b_a + c_ao^(−1/2) η_ao.    (2.4)
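In code, these two routes to a confidence weight might look as follows; this is a minimal sketch of the conventions just described (the default λ = 2 is the illustrative value from the text):

```python
# Confidence ("precision") from a declared uncertainty sigma_ao > 0:
# c_ao = sigma_ao**(-2), as in equation (2.2).
def confidence_from_uncertainty(sigma):
    if sigma <= 0:
        raise ValueError("uncertainty must be positive")
    return sigma ** -2

# Qualitative levels mapped to c = lam**2, 1, lam**(-2) for high, medium, low;
# lam is the assumed ratio of low-to-medium (and medium-to-high) uncertainty.
def confidence_from_level(level, lam=2.0):
    return {"high": lam ** 2, "medium": 1.0, "low": lam ** -2}[level]

print(confidence_from_uncertainty(10.0))  # precision 1/100
print(confidence_from_level("high"), confidence_from_level("low"))
```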

Solution of the model

Given the data {(s_ao, σ_ao) : (a,o) ∈ E} for all assigned assessor–object pairs, we wish to extract the true values v_o and assessor biases b_a. The simplest procedure is to minimize the sum of squares

    Σ_{(a,o)∈E} c_ao (s_ao − v_o − b_a)^2,    (3.1)

where the confidence level c_ao was defined in equation (2.2). This procedure can be justified if the η_ao are assumed to be normally distributed, because then it gives the maximum-likelihood values for the v_o and b_a. It can also be viewed as orthogonal projection of the vector of scores s_ao to the subspace of the form s_ao = v_o + b_a in the Riemannian metric given by ⟨x, y⟩ = Σ_{(a,o)∈E} c_ao x_ao y_ao. Now expression (3.1) is minimized with respect to v_o iff

    Σ_a c_ao (s_ao − v_o − b_a) = 0 for each o,

and with respect to b_a iff

    Σ_o c_ao (s_ao − v_o − b_a) = 0 for each a.

It is notationally convenient to extend the sums to all assessors (respectively objects) by assigning the value c_ao = 0 to any assessor–object pair that is not in E (i.e. for which a score was not returned). Then these conditions can be written as

    C_o v_o + Σ_a c_ao b_a = V_o    (3.2)

and

    Σ_o c_ao v_o + C_a b_a = B_a.    (3.3)

Here, V_o = Σ_a c_ao s_ao is the confidence-weighted total score for object o and B_a = Σ_o c_ao s_ao is that for assessor a; C_o = Σ_a c_ao is the total confidence in the assessment of object o and C_a = Σ_o c_ao is the total confidence expressed by assessor a. Equations (3.2) and (3.3) form a linear system of equations for the v_o and b_a. It has an obvious degeneracy, in that one could add a constant k to all the v_o and subtract k from all the b_a and obtain another solution. One can remove this degeneracy by, for example, imposing the condition

    Σ_a b_a = 0.    (3.8)

This is the simplest possibility and corresponds to a translation (shift) that brings the average bias over assessors to zero. Alternatives are discussed in appendix B. Define a graph Γ linking assessor a to object o if and only if (a,o) ∈ E, as illustrated in the left column of figure 1. The edges in the graph are weighted by the confidences c_ao. Whether the set of equations (3.2) and (3.3) has a unique solution after breaking the degeneracy depends on the connectivity of Γ. Define a linear operator L by writing equations (3.2) and (3.3) as

    L(v, b) = (V, B),    (3.9)

where v, b, V and B denote the column vectors formed by the v_o, b_a, V_o and B_a, respectively.
The operator L has null space of dimension equal to the number of connected components of Γ (this follows from Perron–Frobenius theory, e.g. [13]). Thus, if Γ is connected, the null space of L has dimension one, and so corresponds precisely to the null vectors v_o = k for all o, b_a = −k for all a, that we already noticed and dealt with. Connectedness of Γ ensures that if (3.9) has a solution then there is a unique one satisfying (3.8). It remains to check that the right-hand side of equation (3.9) lies in the range of L, thus ensuring that a solution exists. This is true if all null forms of the adjoint operator L† send the right-hand side to zero. The null space of L† has the same dimension as that of L, because L is square, and an obvious non-zero null form α is given by

    α(V, B) = Σ_o V_o − Σ_a B_a.

It follows from the definitions of V_o and B_a that α(V, B) = 0. So a solution exists. Thus, under the assumption that the assessor–object graph Γ is connected, equations (3.2) and (3.3) have a unique solution (v, b) satisfying equation (3.8). Note that connectedness of Γ is necessary for uniqueness; otherwise one could follow an analogous procedure, adding and subtracting constants independently in each connected component of Γ, and thereby produce more solutions. Equations (3.2) and (3.3) have a special structure, due to the bipartite nature of Γ, that can be worth exploiting. The first equation (3.2) can be written as

    v_o = (V_o − Σ_a c_ao b_a) / C_o.    (3.11)

This can be substituted into the second equation (3.3) to obtain

    C_a b_a − Σ_{a′} K_{aa′} b_{a′} = B_a − Σ_o (c_ao / C_o) V_o,    (3.12)

where the K_{aa′} = Σ_o c_ao c_{a′o} / C_o can be considered as weights on the edges of the graph Γ′ on assessors illustrated in the right column of figure 1. The dimension of the reduced system (3.12) is the number N_a of assessors (rather than the sum of the numbers of assessors and objects), which gives some computational savings. Replacing one of the equations in (3.12), say that for the 'last' assessor, by equation (3.8) gives a system with a unique solution that can be solved for b by any method of numerical linear algebra, e.g. LUP decomposition [14].
Then v can be obtained from equation (3.11). A slightly more sophisticated approach to incorporating a degeneracy-breaking condition into equations (3.12) is described in appendix B. A key question with any black-box solution like the one presented here is how robust the outcome is. We propose two ways of quantifying the robustness. One is to bound how much the outcome would change if some of the scores were changed (e.g. representing mistakes or anomalous judgements); we treat this in appendix C. The other is to evaluate the posterior uncertainty of the outcomes, assuming normal distribution of the η_ao; this is treated in appendix D.
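The whole procedure above can be condensed into a short routine: assemble equations (3.2) and (3.3), replace the last assessor's equation by the zero-mean-bias condition, and solve. Below is a minimal NumPy sketch; the function name and the toy data are ours, not the paper's:

```python
import numpy as np

def calibrate_with_confidence(assessments, n_objects, n_assessors):
    """Infer true values v_o and biases b_a from confidence-weighted scores.

    assessments: iterable of (a, o, s_ao, c_ao) for each pair in E.
    Returns (v, b) with the convention sum_a b_a = 0.
    """
    n = n_objects + n_assessors
    L = np.zeros((n, n))
    rhs = np.zeros(n)
    for a, o, s, c in assessments:
        # Diagonal entries accumulate C_o and C_a; off-diagonal entries are c_ao.
        L[o, o] += c
        L[n_objects + a, n_objects + a] += c
        L[o, n_objects + a] += c
        L[n_objects + a, o] += c
        # Right-hand side: confidence-weighted totals V_o and B_a.
        rhs[o] += c * s
        rhs[n_objects + a] += c * s
    # Replace the last assessor's equation by the condition sum_a b_a = 0.
    L[-1, :] = 0.0
    L[-1, n_objects:] = 1.0
    rhs[-1] = 0.0
    x = np.linalg.solve(L, rhs)  # requires the assessment graph to be connected
    return x[:n_objects], x[n_objects:]

# Invented example: 3 objects, 3 assessors, confidences c = sigma**(-2)
# with sigma = 5 (c = 0.04) or sigma = 10 (c = 0.01).
data = [(0, 0, 72, 0.04), (0, 1, 60, 0.01), (1, 1, 55, 0.04),
        (1, 2, 80, 0.01), (2, 2, 84, 0.04), (2, 0, 75, 0.01)]
v, b = calibrate_with_confidence(data, 3, 3)
print("values:", np.round(v, 2))
print("biases:", np.round(b, 2))
```

Because the right-hand side lies in the range of L, the discarded equation for the last assessor is satisfied automatically by the solution.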

Case studies

We have tested the approach in three contexts. We report in detail on two case studies here. In the first case study, we use a computer-generated set of data containing true values of assessed items, assessor biases and confidences for the assessments and resulting scores. This has the advantage of allowing us to compare the values obtained by the new approach with the true underlying value of each item. The second case study is an evaluation of grant proposals using realistic data based on a university's internal competition. In this test, of course, there is no possibility to access ‘true’ values, so instead we compare the evidence for the models using a Bayesian approach (appendix E), and we compare their posterior uncertainties (appendix D). The third context in which we tested our method was assessment of students; we report briefly on this at the end of the section.

Case study 1: simulation

In the simulation, N_o = 3000 objects are assessed by a panel of N_a = 15 assessors. This choice was motivated by the number of outputs and reviewers in the applied mathematics unit of assessment at the UK's 2008 research assessment exercise. The simulation was carried out using Matlab, and the system of equations was solved using its built-in procedure, which computed the LU decomposition of L (with the last row replaced by the degeneracy-breaking condition (3.8)). The reduction to (3.12) was not used because N_o = 3000 is easily handled by modern personal computers. True values v_o of the items were assumed to be normally distributed with a mean of 50 and a standard deviation of 15, but with the v_o truncated at 0 and 100. The assessor biases b_a were assumed to be normally distributed with a mean of 0 and a standard deviation of 15. Each assessment was considered to be done with high, medium or low confidence, and these were modelled using scaled uncertainties for the awarded scores of σ_ao = 5, 10 or 15, respectively. The allocated scores follow equation (2.4), but truncated at 0 and 100. With r assessors per item (which we took to be the same for each item in this instance), each simulation generated r N_o object scores s_ao. From these, we generated N_o value estimates and N_a estimates of assessor biases using the calibration processes. We then took the mean and maximum values of the errors dv_o and db_a in the value and bias estimates. Simple averaging also delivered a value estimate for each object, as well as mean and maximal values of the errors dv_o. Finally, we determined the averages of the errors dv_o and db_a over 100 simulations. The results for these averaged mean and maximal errors in the scores are denoted by 〈dv〉 and (dv)_max, respectively, and those for the biases (for the calibrated approaches only) are denoted 〈db〉 and (db)_max. Results for all three methods are presented in figures 2–4. The mean and maximal errors for the SA approach, the IBA method and the CWC approach are given in figures 2a–d and 3a–d.
For demonstration purposes, we use three confidence levels rather than a continuous distribution. This allows us to clearly control differences in confidence levels in figures 2 and 3 and we do so by presenting four panels labelled (a)–(d). These represent different profiles, with the confidence for each assessment randomly allocated using probabilities for high, medium and low confidences in the ratios: (a) 1 : 1 : 1, (b) 1 : 1 : 2, (c) 1 : 2 : 1 and (d) 2 : 1 : 1. We observe that, for each method, the scores become more accurate (errors decrease) as the number of assessors per object r increases.
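A reduced-scale sketch of this simulation can be written in Python (NumPy in place of the paper's Matlab, 300 objects and 10 assessors rather than 3000 and 15, and a single run rather than an average over 100):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obj, n_ass, r = 300, 10, 3   # reduced-scale version of the paper's 3000 x 15

v_true = np.clip(rng.normal(50, 15, n_obj), 0, 100)
b_true = rng.normal(0, 15, n_ass)
b_true -= b_true.mean()        # match the zero-mean-bias convention (3.8)
sigmas = np.array([5.0, 10.0, 15.0])   # high / medium / low confidence

# Random assignment: r distinct assessors per object.
triples = []
for o in range(n_obj):
    for a in rng.choice(n_ass, size=r, replace=False):
        sig = rng.choice(sigmas)
        s = np.clip(v_true[o] + b_true[a] + rng.normal(0, sig), 0, 100)
        triples.append((a, o, s, sig ** -2))

# Simple averaging (SA).
v_sa = np.zeros(n_obj)
cnt = np.zeros(n_obj)
for a, o, s, c in triples:
    v_sa[o] += s
    cnt[o] += 1
v_sa /= cnt

# Calibration with confidence (CWC): normal equations plus sum_a b_a = 0.
n = n_obj + n_ass
L = np.zeros((n, n))
rhs = np.zeros(n)
for a, o, s, c in triples:
    L[o, o] += c
    L[n_obj + a, n_obj + a] += c
    L[o, n_obj + a] += c
    L[n_obj + a, o] += c
    rhs[o] += c * s
    rhs[n_obj + a] += c * s
L[-1, :] = 0.0
L[-1, n_obj:] = 1.0
rhs[-1] = 0.0
x = np.linalg.solve(L, rhs)
v_cwc = x[:n_obj]

err_sa = np.abs(v_sa - v_true).mean()
err_cwc = np.abs(v_cwc - v_true).mean()
print(f"mean error, SA: {err_sa:.2f}; CWC: {err_cwc:.2f}")
```

Even at this reduced scale, the CWC mean error comes out well below the SA mean error, in line with the trends in figures 2–4.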
Figure 2.

Mean errors plotted against the number r of assessors per object for the SA approach (upper curves, orange), the incomplete-block-analysis method (middle curves, green) and the calibration-with-confidence approach (lower curves, blue). The various panels represent different confidence profiles with probabilities for high, medium and low confidences in the ratios: (a) 1 : 1 : 1, (b) 1 : 1 : 2, (c) 1 : 2 : 1 and (d) 2 : 1 : 1.

Figure 3.

Maximum errors plotted against the number r of assessors per object for the SA approach (upper curves, orange), the incomplete-block-analysis method (middle curves, green) and the calibration-with-confidence approach (lower curves, blue). The various panels represent different confidence profiles with probabilities for high, medium and low confidences in the ratios: (a) 1 : 1 : 1, (b) 1 : 1 : 2, (c) 1 : 2 : 1 and (d) 2 : 1 : 1.

Figure 4.

(a) The ratios 〈dv〉_IBA/〈dv〉_avg and 〈dv〉_CWC/〈dv〉_avg measure the mean improved accuracies of IBA (green curves) and CWC (blue), respectively, over SA. Smaller ratios indicate a greater degree of improvement over SA. (b) The analogous quantities for maximal errors are (dv)_max,IBA/(dv)_max,avg and (dv)_max,CWC/(dv)_max,avg, respectively. The four line types correspond to relative probabilities of standard deviations of 5, 10 or 15 in the ratios 1 : 1 : 1 (solid); 1 : 1 : 2 (long-dashed); 1 : 2 : 1 (short-dashed) and 2 : 1 : 1 (dotted lines).

From figure 2a–d, with only two assessors per object, the SA method gives errors averaging about 10 points. More than r = 6 assessors per object are required to bring the mean error down to six points. Fisher's IBA, however, achieves this level of improvement with only two or three assessors. The CWC method delivers a further improvement of about one point. One also notes that, for the calibration approaches, relatively little is gained on average by employing more than four assessors per object.
This result can be compared with [15], which found that five assessors per object was optimal in terms of accuracy over cost for a procedure used by the Canadian Institutes of Health Research. Figure 3 shows that IBA also leads to significant improvements in the maximal error values relative to those obtained through SA. With two assessors per object, maximal errors are reduced from about 45 to 30–35. The CWC approach does not appear to improve significantly upon this. However, with six assessors per object the maximal error value of about 25 delivered by the SA process is reduced to about 20 by IBA, and to as low as 16 by CWC when half the assessments are done with a high degree of confidence in the scores. Figure 4a gives the improvements achieved by the calibration methods as ratios of the mean errors coming from Fisher's IBA approach to the SA approach, 〈dv〉_IBA/〈dv〉_avg, and of the mean errors coming from the CWC approach to the SA approach, 〈dv〉_CWC/〈dv〉_avg. Smaller ratios mean greater accuracy on the part of the calibrated approaches. Figure 4b gives the analogous accuracy ratios for the maximal errors, namely (dv)_max,IBA/(dv)_max,avg and (dv)_max,CWC/(dv)_max,avg. Figure 4a demonstrates that IBA delivers mean errors between about 60% and 80% of those coming from the SA approach, the better improvements being associated with lower assessor numbers. This is also the most desirable configuration for realistic assessments, as it represents employment of a minimal number of assessors per object. The CWC approach reduces errors by about a further 10 percentage points, irrespective of the number of assessors.

Case study 2: grant proposals

To test CWC in a realistic setting, we adapted data from a university's internal competition for research funding, in which 43 proposals were evaluated by a panel of 11 assessors. Each proposal was graded by two assessors, who in addition each specified a confidence level in their grading in the form of high, medium or low. To respect confidentiality of the competition while making the data available, we not only anonymized the proposals and assessors but also made sufficient changes to the data (while preserving the statistical properties) so that attribution would not be possible. The actual panel used SA, but the assessors were also asked to provide confidences so that CWC could be applied for comparison. The panel awarded grants to the top 10 proposals. Our goals were firstly to see what differences would have been made by use of IBA or CWC, secondly to quantify the evidence for the three models from the data to determine which was most appropriate, and thirdly to compare the posterior uncertainties they provide. To apply CWC, we translated the qualitative confidence levels of high, medium and low to values c = λ^2, 1, λ^(−2), respectively, with λ = 1.75. We chose λ = 1.75 as a reasonable guess at how the assessors used the confidence scale. One could include a computation to infer λ from the data, but our preference is for panel chairs to ask assessors to provide uncertainties rather than qualitative confidence levels, as indicated in §2, so we did not implement the inference of λ. Figure 5a–c shows the resulting values inferred by the three methods, projected into the planes of (SA, IBA), (IBA, CWC) and (CWC, SA), respectively. Figure 5d is a Bland–Altman or Tukey mean-difference plot [16]. The correlations are not strong, though, as we would expect, the correlation of IBA with CWC is stronger than those of either with SA. In particular, we note that the set of proposals rated in the top 10 varies substantially with the method used (table 2).
The reason for the differences is that IBA and CWC attribute a significant range of biases to the assessors (table 3).
Figure 5.

Correlations between the results coming from the three methods applied to Case Study 2. The three panels give the correlations between the outputs of (a) IBA and SA; (b) CWC and IBA; and (c) SA and CWC. The coefficients of determination are given, respectively, by R^2 = 0.5701, 0.8807 and 0.3772. Panel (d) is a Bland–Altman or Tukey mean-difference plot of differences between results from pairs of approaches against their averages. The symbols '+' (red) compare CWC to IBA (V_CWC − V_IBA versus their mean); '×' (green) compare IBA to SA (V_IBA − V_avg versus their mean); '°' (blue) compare SA to CWC (V_avg − V_CWC versus their mean).

Table 2.

The 43 grant proposals are identified as OA, OB, OC, … OZ, OA′, OB′, … OP′, OQ′. Here they are ranked according to their V_avg, V_IBA and V_CWC values, representing the outcomes of the SA, IBA and CWC approaches. Proposals identified by CWC as belonging to the top 10 but missed by IBA are highlighted in boldface. Proposals identified by IBA or CWC as belonging to the top 10 but missed by SA are highlighted in italics. Proposals which are not in the CWC top 10 are underlined.

rank | SA (V_avg)    | IBA (V_IBA)   | CWC (V_CWC)
1    | OH (87.0)     | OA (85.3)     | OA (88.8)
2    | OP___ (87.0)  | OC (84.9)     | OB (85.2)
3    | OC (86.0)     | OH (80.6)     | OC (84.9)
4    | OS___ (84.0)  | OP___ (79.7)  | OD (82.8)
5    | OA (80.5)     | OD (79.5)     | OE (82.0)
6    | OM____ (80.5) | OB (79.4)     | OF (78.9)
7    | OZ___ (80.5)  | OF (78.6)     | OG (78.4)
8    | OF (79.5)     | OE (76.9)     | OH (77.3)
9    | OA___ (78.5)  | OS___ (76.7)  | OI (77.1)
10   | OI (78.0)     | OJ (76.4)     | OJ (75.6)
Table 3.

Assessor statistics: assessors are labelled AA, … AK according to increasing CWC-biases (5th column). Here, we give the mean scores they awarded, standard deviations and IBA-biases too. The mean score awarded over all assessments was 66.9.

assessor | mean | s.d. | bias (IBA) | bias (CWC)
AK       | 84.2 | 16.6 | 14.6       | 17.7
AJ       | 61.0 | 19.2 | 8.7        | 12.6
AI       | 64.6 | 10.0 | 0.0        | 9.7
AH       | 76.6 | 9.1  | 10.0       | 9.1
AG       | 71.9 | 6.9  | 8.8        | 8.8
AF       | 65.9 | 5.6  | 5.7        | 2.0
AE       | 72.3 | 15.5 | 2.8        | 1.1
AD       | 61.0 | 21.9 | −5.0       | −3.6
AC       | 62.3 | 9.6  | −12.4      | −15.6
AB       | 58.3 | 6.4  | −12.8      | −16.6
AA       | 49.1 | 12.1 | −20.7      | −25.2
In the absence of 'true' values for the proposals, how can one decide which is the best method to use, and hence which outcome is preferred? A first answer is to compare the 'residuals' that the methods leave after the least-squares fit. In the case of SA, this means the value of equation (3.1) obtained by taking the v_o to be the averaged scores and the b_a = 0. For IBA, the residual is the value of (3.1) at the least-squares fit, taking all the c_ao = 1. For CWC, we take the value of (3.1) at the least-squares fit, divided by the average confidence over all assessments. The residuals are presented in table 4.
From this point of view, we see clear improvement progressively from SA to IBA to CWC, providing an apparently compelling argument for the use of CWC.
Table 4.

Residuals (scaled by mean confidence in the case of CWC).

method   | SA   | IBA  | CWC
residual | 8602 | 4388 | 3156
As IBA and CWC have more free parameters (the biases) than SA, however, one should penalize them appropriately to make a correct comparison. Also, although normalizing the residual for CWC by the average confidence sounds sensible, it is not clear that it is the right way to compare CWC with IBA. A principled answer is provided by Bayesian model comparison. In this procedure, the evidence provided by the data in favour of each model is quantified, and the best model is the one with the highest evidence. The procedure to quantify the evidence for the three models is described in appendix E. It depends on assumptions about the prior probability distribution for the parameters of the models, but we took 'ball' priors on the true values and on the biases (constrained by the degeneracy-breaking condition) and a truncated Jeffreys' prior on the variance of the noise. In the notation of appendix E, the parameters for the prior probability distributions were σ_v = 22.5, σ_b = 15, w_max = 900, w_min = 1. As the evidences come out to be small numbers (around 10^(−168)), we took their (natural) logarithms. The resulting log-evidences are shown in table 5. Simple averaging wins, but these values are so close together that we cannot make a strong conclusion about which method is most justified by the data. Furthermore, adjusting the prior probability distributions and the confidence weights changes which method has the highest evidence. We suspect that differences between the evidences for the models would become apparent if each proposal had been evaluated by more than two assessors.
Table 5.

Bayesian log-evidences.

method       | SA   | IBA  | CWC
log-evidence | −385 | −389 | −387
A third approach is to evaluate the posterior uncertainty in the values assigned to the objects for the three methods, as detailed in appendix D, using equation (D 26) for IBA and CWC, and equation (D 27) for SA. The results are given in table 6. On this basis, the most precise results are given by CWC. None of them is very precise, however. A posterior uncertainty of eight means that we should consider values for the objects to have a one-third chance of differing by more than eight from the outputted values. This means that for IBA and CWC, only the top three proposals of table 2 are reasonably assured of being in the top 10.
Table 6. Confidence-weighted root mean square uncertainties for the values (and for the biases in the cases of IBA and CWC). For SA, the weighting is by the number of assessors for each object.

method       SA    IBA  CWC
uncertainty  14.1  8.4  8.0

As the object of the competition was only to choose the best 10 proposals to fund, rather than to assign values to every proposal, it might have been more appropriate to design a pure classifier (with a tunable parameter to put the right number in the 'fund' class), but our goal was to use the competition as a test of CWC. The fact that three methods with roughly equal evidence lead to drastically different allocations of the grants, with large posterior uncertainties, highlights that better design of the panel assessment was required. Large variability of outcome even when just using SA, but with different assessment graphs, was already noted by Graves et al. [17]. A moral of our analysis is that to achieve a reliable outcome, the assessment procedure needs substantial advance design. We continue the discussion of design in appendices C and F, but a substantial treatment is deferred to a future paper.
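Since calibration hinges on the connectivity of the assessor-object graph, the least a designed allocation should guarantee is that the graph is connected: assessors in disconnected components share no common objects, so their relative standards cannot be inferred at all. A union-find sketch of that check (illustrative; not code from the paper):

```python
def is_connected(assignments):
    """Check connectivity of the bipartite assessor-object graph.

    assignments: iterable of (object, assessor) pairs, one per evaluation.
    Returns True if all objects and assessors form a single component.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    nodes = set()
    for o, a in assignments:
        # tag nodes so an object and an assessor with the same label differ
        ro, ra = find(('obj', o)), find(('ass', a))
        parent[ro] = ra                    # union the two components
        nodes.update([('obj', o), ('ass', a)])
    roots = {find(x) for x in nodes}
    return len(roots) <= 1

# Objects 1-3 chained through assessors A and B: connected.
print(is_connected([(1, 'A'), (2, 'A'), (2, 'B'), (3, 'B')]))
# A and B share no object: relative standards cannot be compared.
print(is_connected([(1, 'A'), (2, 'B')]))
```

Connectivity is only the minimal requirement; as discussed above, how well-connected the graph is (how many independent comparison paths link each pair of assessors) governs the size of the posterior uncertainties.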

Third context: assessment of students

We also tested the method on undergraduate examination results for a degree with a flexible options system [18] and on the assessment of a multi-lecturer postgraduate module.

In the undergraduate case, as surrogates for the confidences in the marks we took the number of Credit Accumulation and Transfer Scheme (CATS) points for each module, which indicates the amount of time a student is expected to devote to it (for readers used to the European Credit Transfer and Accumulation System, 2 CATS points are equivalent to 1 ECTS point). The amount of assessment for a module is proportional to its CATS points. If the assessment can be regarded as consisting of independent subcomponents, e.g. one per CATS point, with roughly equal variances, then the variance of the total score is proportional to the number of CATS points. As the score is then normalized by the CATS points, its variance becomes inversely proportional to the CATS points, making the confidence directly proportional to them. Our analysis indicated significant differences in standards between the assessments of different modules, but as most modules counted for 15 or 18 CATS points, this was not a strong test of the merits of including confidences, so we do not report on it here.

For the postgraduate module, there were four lecturers plus a module coordinator, each of whom assessed oral and written reports for some but not all of the students, according to availability and expertise (except that the coordinator assessed them all). Each assessor provided a score and an uncertainty for each assessment. The results were combined using our method and the resulting value for each student was reported as the final mark. The lecturers agreed that the outcome was fair.
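The way scores, confidences, values and biases fit together in such an application can be sketched as a confidence-weighted least-squares fit of the model s_oa ≈ v_o + b_a. This is a minimal illustration of the calibration idea, with the degeneracy broken by pinning the sum of the biases to zero via a heavily weighted constraint row; it is not the paper's production implementation:

```python
import numpy as np

def calibrate(scores, conf):
    """Confidence-weighted fit of scores[(object, assessor)] = v_o + b_a.

    scores, conf: dicts keyed by (object, assessor) pairs.
    Returns dicts of fitted values and biases.
    """
    objs = sorted({o for o, a in scores})
    asss = sorted({a for o, a in scores})
    n, m = len(objs), len(asss)
    rows, rhs, w = [], [], []
    for (o, a), s in scores.items():
        row = np.zeros(n + m)
        row[objs.index(o)] = 1.0           # value of the object
        row[n + asss.index(a)] = 1.0       # bias of the assessor
        rows.append(row); rhs.append(s); w.append(conf[(o, a)])
    # degeneracy-breaking constraint sum_a b_a = 0, given a large weight
    cons = np.zeros(n + m); cons[n:] = 1.0
    rows.append(cons); rhs.append(0.0); w.append(1e6)
    sw = np.sqrt(np.array(w))
    A = np.array(rows) * sw[:, None]       # weight rows by sqrt(confidence)
    b = np.array(rhs) * sw
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(objs, sol[:n])), dict(zip(asss, sol[n:]))

# Two students, two assessors, exactly additive marks: A is 5 harsher than B.
scores = {('x', 'A'): 55.0, ('x', 'B'): 65.0,
          ('y', 'A'): 65.0, ('y', 'B'): 75.0}
conf = {k: 1.0 for k in scores}
values, biases = calibrate(scores, conf)
print(values, biases)
```

On noise-free additive data like this, the fit recovers the true values (60 and 70) and biases (−5 and +5) exactly; declared confidences matter once the scores are noisy and the graph incomplete.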

Discussion

We have presented and tested a method to calibrate assessors in a panel, taking account of the differences in confidence that they express in their assessments. From a test on simulated data we found that calibration with confidence (CWC) produced closer estimates of the true values than additive incomplete block analysis (IBA) or simple averaging (SA). A test on real data, however, provided little evidence to distinguish between the methods, even though they produced wildly different rankings, suggesting that the assessment procedure for that context needed more robust design. Nevertheless, CWC came out ahead on posterior precision. We note that the default of assuming all assessment confidences to be equal reduces CWC to IBA, which already represents a useful improvement over SA.

One of the principal conclusions from our analysis is that achieving reliable outcomes from the methods we tested requires good design of the assessment graph (showing which objects are evaluated by which assessors and with what confidences).

All three methods we compared are based on least-squares fitting. They may therefore be considered overly sensitive to outliers. An alternative approach that is less sensitive to outliers is based on medians rather than means. For example, Tukey's median polish [19] is a median-based version of Fisher's IBA; it would be good to develop a version of it that takes confidences into account too.

Some other drawbacks of our CWC method are:

— It requires assessors to give reliable uncertainties; if assessors differ in their confidence estimates, the method gives higher weight to those who declare higher confidences. In particular, one needs to guard against an assessor giving unwarrantedly high confidence for a particular assessment. There is a case for calibrating confidences too.

— Bias effects may be more subtle than a simple additive shift; for example, an assessor may be more generous (or tougher) on topics in which they have high confidence, or may use a shorter or longer part of the scale than other assessors.

— Some organizations insist on round-number scores; this goes against the spirit of our approach and is awkward for assessors who may rightly wish to rate an object between two of the allowed grades. The requirement is perhaps based on the laudable idea of not implying higher accuracy than is warranted, yet in our opinion this is better dealt with by reporting an uncertainty for each result on a continuous scale.

— Some organizations may insist that scores cannot go beyond certain limits, which is awkward for an assessor who, after rating several objects highly, finds there are some they wish to rate even higher.

There are a number of refinements one could introduce to the core method to address some of these drawbacks: how to deal with different types of bias, different scales for confidence, different ways to remove the degeneracy in the equations, how to handle the endpoints of a marking scale, and how to choose the assessment graph. Some suggestions are made in the appendices, along with mathematical treatment of the robustness of the method and of the computation of the Bayesian evidence for the models.

An advantage of our type of calibration is that it does not produce the artificial discontinuities across field boundaries that tend to arise if the domain is partitioned into fields and evaluation in each field is carried out separately. In the UK Research Assessment Exercise 2008 (RAE2008), for example, there is evidence that different panels had different standards [20]. Although RAE2008 stated that cross-panel comparisons are not justified, some universities have used such comparisons to help decide how much to resource different departments.
Our approach would take advantage of cross-panel referrals (which were part of RAE2008 for work in the overlaps between panels) to infer relative standards and hence to normalize the outcomes. We suggest that a method such as this, which takes into account declared confidences in each assessment, is well suited to the multitude of situations in which a set of objects is assessed by a panel. We acknowledge, however, that this approach requires an investment in training assessors to estimate their uncertainties and in constructing a sufficiently strongly connected assessment graph. Different panels will deal with the trade-off between investment of effort and accuracy of results in different ways.
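For completeness, the median-based alternative mentioned above can be sketched for a complete objects-by-assessors score table. This is a plain Tukey median polish, with no confidence weighting yet; developing a confidence-weighted version is the open problem noted in the discussion:

```python
import numpy as np

def median_polish(table, n_iter=10):
    """Fit score[i, j] ≈ overall + row[i] + col[j] by alternately
    sweeping out row and column medians (Tukey's median polish).

    Returns (overall, row effects, column effects, residuals)."""
    resid = np.array(table, dtype=float)
    overall = 0.0
    row = np.zeros(resid.shape[0])   # object values (relative)
    col = np.zeros(resid.shape[1])   # assessor biases (relative)
    for _ in range(n_iter):
        rm = np.median(resid, axis=1)
        row += rm; resid -= rm[:, None]
        cm = np.median(resid, axis=0)
        col += cm; resid -= cm[None, :]
    # sweep the medians of the effects into the overall term
    m = np.median(row); overall += m; row -= m
    m = np.median(col); overall += m; col -= m
    return overall, row, col, resid

# Exactly additive table: values 60/70/80, assessor biases -5/0/+5.
table = [[55, 60, 65],
         [65, 70, 75],
         [75, 80, 85]]
overall, row, col, resid = median_polish(table)
print(overall, row, col)   # residuals are zero for additive data
```

Because it uses medians, a single wildly aberrant score perturbs the fitted effects far less than it would in the least-squares methods compared above, which is the robustness property motivating this variant.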
References

1. Giesbrecht FG. 1986 Analysis of data from incomplete block designs. Biometrics.

2. Bland JM, Altman DG. 1986 Statistical methods for assessing agreement between two methods of clinical measurement. Lancet.

3. Graves N, Barnett AG, Clarke P. 2011 Funding grant proposals for scientific research: retrospective analysis of scores by members of grant review panel. BMJ.

4. Snell RR. 2015 Menage a quoi? Optimal number of peer reviewers. PLoS One.

5. Sattler DN, McKnight PE, Naney L, Mathis R. 2015 Grant peer review: improving inter-rater reliability with training. PLoS One.

