Journal of the Optical Society of America A | Optics, Image Science, and Vision
Editor: Stephen A. Burns

Multireader multicase variance analysis for binary data

Brandon D. Gallas, Gene A. Pennello, and Kyle J. Myers


JOSA A, Vol. 24, Issue 12, pp. B70-B80 (2007)
http://dx.doi.org/10.1364/JOSAA.24.000B70


Abstract

Multireader multicase (MRMC) variance analysis has become widely utilized to analyze observer studies for which the summary measure is the area under the receiver operating characteristic (ROC) curve. We extend MRMC variance analysis to binary data and also to generic study designs in which every reader may not interpret every case. A subset of the fundamental moments central to MRMC variance analysis of the area under the ROC curve (AUC) is found to be required. Through multiple simulation configurations, we compare our unbiased variance estimates to naïve estimates across a range of study designs, average percent correct, and numbers of readers and cases.

© 2007 Optical Society of America

1. INTRODUCTION

The study of image quality often involves the use of psychophysical studies to evaluate an imaging system, or perhaps as a validation of model observer predictions for circumstances new to that model observer. Studies involving human readers are also central to the evaluation of new imaging technologies for which there is no alternative to the use of clinical images from actual patients. Just as important as the mean performance of the observer is the uncertainty of the measurement.

In this paper we present an unbiased method for estimating the variance in an experiment with multiple readers and multiple cases for which the outcomes are binary and the summary performance measure is a percent correct (PC). We also extend the analysis beyond the fully crossed study design to allow arbitrary study designs, including the “doctor–patient” study design, where each doctor sees his or her own patients.

Some examples of PCs are sensitivity, specificity, and the PC in an M-alternative forced-choice (MAFC) experiment. Sensitivity is the percent of abnormals correctly identified, and specificity is the percent of normals correctly identified. We shall also refer to the abnormals as the signal-present cases (hypothesis 1, H1), and the normals as the signal-absent cases (hypothesis 0, H0).

In an MAFC experiment, the reader must choose which of M alternatives within a trial contains the signal. So, in the typical two-alternative forced-choice (2AFC) task, a trial is often a pair of images, one signal-absent and one signal-present, displayed side by side or in sequence. The outcome of the choice is binary; the reader is either right or wrong. The rate at which the reader correctly picks the alternative with the signal is the PC.

Regardless of the specific task, readers, and cases, we denote the binary success outcome generically by s(g, γ), where g specifies the case and γ specifies the reader. This success outcome is 0 when reader γ incorrectly identifies case g and 1 when the reader is successful.

In a particular study, there is a set of N_g cases and a set of N_γ readers. Without replicating readings, we could collect N_g × N_γ outcomes if every reader reads every case (the fully crossed design). For the doctor–patient study design, depicted pictorially on the left of Fig. 1, some of these data are not collected. The shaded area in Fig. 1 indicates which cases were read by which readers. Since each case is read by only one reader, a significant amount of data are missing compared to the fully crossed design, which would fill the whole matrix. Additionally, we allow the number of cases, or “case load,” read by each reader to be different.

On the right in Fig. 1 we provide a simple example demonstrating the data from a binary-outcome experiment with multiple readers, each reading their own cases. The PC in the last row weighs each reading equally: 100 correct decisions divided by 130 readings is 77%. Now one might assume that the readings are all independent and identically distributed (iid) and estimate the standard error using the square root of the sample variance divided by the total number of readings; this equals 3.7%. However, since each reader may have a different skill at the task, the readings are not identically distributed and this naïve estimate likely underestimates the true variance.

This simple example highlights two naïve estimates of variance. The first incorrectly treats the readings as identically distributed, and the second incorrectly treats the reader PCs as being measured without error. The variance estimate that we provide appropriately accounts for the readers, cases, and correlations that arise from the actual study design. These variance estimates apply to the average PC when readers are treated equally or when readings are treated equally.
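As a sanity check on the arithmetic above, a few lines of Python reproduce the 77% pooled PC and the 3.7% naive standard error (the counts are those quoted from Fig. 1; treating the 130 readings as iid is exactly the assumption the text warns against):

```python
import math

# Counts quoted from the Fig. 1 example: 130 readings, 100 correct.
n_total, n_correct = 130, 100
pc = n_correct / n_total                      # pooled PC, about 0.77

# Naive iid standard error: sqrt(sample variance / number of readings).
sample_var = pc * (1 - pc) * n_total / (n_total - 1)
se = math.sqrt(sample_var / n_total)          # about 0.037, i.e., 3.7%
```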

In what follows, we make the following assumptions: Readers are iid, cases are also iid, and readers are independent of cases. Additionally, given a reader and a case, an outcome can be deterministic, as when the reader is a mathematical classifier, or an outcome can be a random variable, as might be expected when the reader is a human and unable to reproduce the same decision on subsequent readings (reader jitter). This distinction is unnecessary for the current work; our variance estimate accounts for reader jitter whether it exists or not.

2. THEORY AND METHODS

2A. Setup

We define a design matrix D and a success matrix S. Both matrices are N_g × N_γ; their elements are denoted d_ir and s_ir, where i stands for the ith case and r for the rth reader. The design matrix holds a one in every position where an outcome was collected and a zero everywhere else. The success matrix holds the observed success outcomes s_ir = s(g_i, γ_r). For the rth reader, we denote the number of cases read by $N_{gr} = \sum_{i=1}^{N_g} d_{ir}$ and the PC by
\[ \hat{p}_r = \frac{1}{N_{gr}} \sum_{i=1}^{N_g} d_{ir} s_{ir}. \tag{1} \]

When an outcome is not collected, dir=0 and sir is technically undefined. In practice, we can set sir to any number we want when dir=0, since it will always appear with dir and the product will always be zero. Therefore, to ease the transition to ensemble statistics, we think of sir as the success outcome whether or not it was collected in the study.
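Equation (1) is easy to express directly with the design and success matrices; the sketch below uses an invented 4-case, 2-reader doctor–patient example (the matrices are hypothetical, not from the paper):

```python
import numpy as np

def reader_pc(D, S):
    """Eq. (1): per-reader PC, p_r = (1/N_gr) * sum_i d_ir * s_ir."""
    N_gr = D.sum(axis=0)          # number of cases read by each reader
    return (D * S).sum(axis=0) / N_gr

# Hypothetical doctor-patient design: each case read by exactly one reader.
D = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]])
S = np.array([[1, 0],             # entries with d_ir = 0 never contribute
              [0, 0],
              [0, 1],
              [0, 1]])
p_r = reader_pc(D, S)             # reader 1: 1/2 correct, reader 2: 2/2 correct
```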

We shall assume that the design matrix does not depend on the success matrix and vice versa, as such dependencies would certainly bias the study. In this paper we shall consider fixed study designs and random study designs. For a fixed study design, D is specified before data are collected; for a random study design, there is a protocol, or sampling scheme, that determines a distribution for the possible study designs.

The typical endpoint in a study is a reader-averaged PC:
\[ \hat{P} = \sum_{r=1}^{N_\gamma} w_r \hat{p}_r. \tag{2} \]
While this average appears trivial, there is a choice to be made about how to average, that is, how to weigh each reader. Two common choices exist for the doctor–patient study design, as mentioned in the Introduction: weigh each reader equally ($w_r = 1/N_\gamma$) or weigh each reading equally ($w_r = N_{gr}/N_g$). We denote the resulting PCs as P̂_γ and P̂_g, respectively. Now, when cases are read by more than one reader, the total number of readings is more than N_g. Considering this situation, a more general expression for the second set of weights is $w_r = N_{gr} / \sum_{r'=1}^{N_\gamma} N_{gr'}$. These weights always sum to one. Of course, if each reader reads the same number of cases, P̂_g = P̂_γ, whereas if the case load of each reader is random, the weights of P̂_g (unlike those of P̂_γ) will also be random.

Other choices for weights may be driven by the experience or skill of each reader. In the most general framework the weights are arbitrary, as long as they sum to one.
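The two weighting conventions for Eq. (2) can be sketched as follows; the unequal case loads below are invented so that P̂_γ and P̂_g actually differ:

```python
import numpy as np

def reader_averaged_pc(D, S, weighting="reader"):
    """Eq. (2): w_r = 1/N_gamma ("reader") or w_r = N_gr / sum N_gr' ("reading")."""
    N_gr = D.sum(axis=0)
    p_r = (D * S).sum(axis=0) / N_gr
    if weighting == "reader":
        w = np.full(p_r.size, 1.0 / p_r.size)
    else:
        w = N_gr / N_gr.sum()
    return float(w @ p_r)

# Hypothetical data: reader 1 reads 3 cases (2 correct), reader 2 reads 2 (2 correct).
D = np.array([[1, 0], [1, 0], [1, 1], [0, 1]])
S = np.array([[1, 0], [0, 0], [1, 1], [0, 1]])
P_gamma = reader_averaged_pc(D, S, "reader")    # (2/3 + 1)/2 = 5/6
P_g = reader_averaged_pc(D, S, "reading")       # (3*(2/3) + 2*1)/5 = 4/5
```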

2B. Population Quantities

2B1. Fixed Study Designs

The mean of P̂ for a fixed study design D is straightforward:
\[ \langle \hat{P} \rangle_D = \langle s(g,\gamma) \rangle_D = \langle s(g,\gamma) \rangle. \tag{3} \]
Note that we use brackets ⟨…⟩ to denote expected values over all random variables, and the notation ⟨…⟩_D denotes the conditional expected value where we fix the design matrix D and average over the remaining random quantities: the readers and cases. The expected reader-averaged PC, as is shown above, has no dependence on the study design or the reader weights.

Next, carefully accounting for possible correlations across readers and cases (see Appendix A), the population variance of P̂ for a fixed study design is
\[ V_D = \mathrm{var}(\hat{P} \mid D) = c_1 \langle s(g,\gamma)^2 \rangle + c_4 \langle \langle s(g,\gamma) \rangle_\gamma^2 \rangle + c_5 \langle \langle s(g,\gamma) \rangle_g^2 \rangle + c_8 \langle s(g,\gamma) \rangle^2, \tag{4} \]
where the coefficients depend on the study design and the reader weights [see Eqs. (A7, A8, A9, A10)].

The unique numbering of the coefficients above is driven by how we label the moments. We refer to the moments in Eq. (4) as M_1, M_4, M_5, and M_8 to coincide with notation previously derived for the empirical area under the ROC curve (AUC) [4, 5]. For AUC, there are eight fundamental moments of the success outcomes. The factor of 2 increase in the number of moments comes from partitioning cases into two subsets: signal-absent and signal-present.

[4] B. D. Gallas, “One-shot estimate of MRMC variance: AUC,” Acad. Radiol. 13, 353–362 (2006).
[5] B. D. Gallas and D. G. Brown, “Reader studies for validation of CAD systems,” submitted to Neural Networks.

The variance can be written concisely as a scalar product between the coefficients and the moments arranged in vectors c̱ and M̱; that is, $V_D = \underline{c}^{\,t} \underline{M}$, where coefficients c_2, c_3, c_6, and c_7 are all understood to equal zero. This variance will carry a subscript γ or g when needed to indicate weights treating each reader equally or weights treating each reading equally. The moments themselves are nothing more than second moments (M_1 through M_7) and a mean squared (M_8), as are expected in a variance. Finally, we shall extend this notation to include $M_0 = \langle s(g,\gamma) \rangle$, the success outcome averaged over reader γ and case g.

In Appendix A we derive the variance and covariance appearing in Eq. (5). The single-reader variance is
\[ \mathrm{var}(\hat{p}_r \mid N_{gr}) = \langle \hat{p}_r^2 \rangle_{N_{gr}} - \langle \hat{p}_r \rangle^2 \tag{6} \]
\[ = \left( \frac{1}{N_{gr}} M_1 + \frac{N_{gr}-1}{N_{gr}} M_4 \right) - M_8. \tag{7} \]
Notice that the first two terms of the expression above are second moments weighted by coefficients that sum to one and the last term is a negative mean squared. In this way, the expression fits our mental picture that a variance can be decomposed into a second moment minus a mean squared.

The covariance for the general study design simplifies for the fully crossed and doctor–patient study designs. The covariance for the fully crossed study design is Eq. (A13) minus the mean squared, or
\[ \mathrm{cov}(\hat{p}_r, \hat{p}_{r'}) = \frac{1}{N_g} M_5 - \frac{1}{N_g} M_8, \tag{8} \]
whereas for the doctor–patient study design, the covariance is zero (readers are independent and read different cases).

2B2. Special Cases and Random Study Designs

The vector of coefficients for a fixed study design is made up of complicated sums that simplify for the study designs considered in this paper (see Table 1 ). If we allow the study design to be random (with some distribution), we get the variance of P̂ by averaging the coefficients of the fixed study design over the distribution of study designs. This is possible because we assume the design and success matrices are independent. When averaged over the distribution of study designs, the variance is no longer dependent, or conditional, on a fixed study, and the subscript D should be dropped.

2C. Variance Estimates

2C1. Fixed Study Design

The weights for each pair of observations are analogous to the weights used for the average performance: Each case (or pair of cases) is given equal weight for each reader, and the readers are given the same (relative) weights as before. In theory, the weights could be different from those for P̂; however, there does not seem to be a good reason to make them different. As for P̂ and VD, we add the subscript γ or g to V̂D when necessary to indicate whether the weights equally weigh each reader or each reading.

In situations where two readers r and r′ have nonoverlapping case samples, the denominator in M̂_5 can be zero. But at the same time, the numerator will be zero as well. In these situations the (r, r′) contribution to M̂_5 is taken to be zero. Consequently, for the doctor–patient study design, where readers never read the same cases, M̂_5 is entirely zero.

When replacing the expected values with sums for estimation there are two things to remember: Avoid biases and count the number of samples that are being summed. The elements of the design matrix are an easy way to count the number of samples that are being summed.

Biases creep in when we replace a squared average with a squared sum. To avoid the bias, replace the squared average with two sums and do not include the index of the first sum in the second sum. For example, the estimate of M_5 squares the average over readers for a fixed case. When replacing this squared average with sums over r and r′, we do not let r′ equal r. We also normalize the weights in the sum over r′ so that they sum to one. The result can be shown to be unbiased with standard algebraic and probabilistic manipulations.

Unfortunately, our moment-based MRMC variance estimate is not necessarily positive. It is a linear combination of sums of squares, where one coefficient, c_8, is negative. The possibility of negative estimates is an unfortunate consequence of estimating variances with sums of squares and too few samples. Bayesian and maximum-likelihood estimates could avoid the negative estimates, but that approach is beyond the scope of the nonparametric treatment of this paper.

2C2. Random Study Design

The only change needed to account for random study designs is to replace c̱ with an estimate of c̱. One estimate of c̱ is just the observed c̱ itself, which would not be an actual change of the fixed-study-design variance estimator. Other estimates of c̱ would require priors on the distribution of possible study designs. For this manuscript, we shall investigate the fixed-study-design estimator and consider other estimators at a later date.

2C3. Naïve Estimates

As a basis for comparison, we consider the two naïve estimates described in the Introduction. Neither accounts for the MRMC nature of the data, but both have been used in the literature. The first estimate essentially assumes that all the readings are iid, indirectly assuming that readers all have the same skill and are reading different cases. Given this assumption, the success outcomes are all independent Bernoulli trials with the same probability of success, and the variance of the reader-averaged PC is estimated as
\[ \hat{V}_{\mathrm{naive},g} = \frac{1}{N_{\mathrm{total}}(N_{\mathrm{total}} - 1)} \sum_{r=1}^{N_\gamma} \sum_{i=1}^{N_g} d_{ir} (s_{ir} - \hat{P}_\gamma)^2, \tag{14} \]
where $N_{\mathrm{total}} = \sum_{r,i} d_{ir}$ denotes the total number of readings, summed over all readers and cases.

The second estimate uses the sample variance of the reader-specific PCs:
\[ \hat{V}_{\mathrm{naive},\gamma} = \frac{1}{N_\gamma(N_\gamma - 1)} \sum_{r=1}^{N_\gamma} (\hat{p}_r - \hat{P}_\gamma)^2. \tag{15} \]
This statistic essentially assumes that the PCs are not noisy, that they represent the true reader skill (infinite testers). This statistic also assumes that when two readers read the same case, the outcomes are independent, ignoring the correlation by the case.
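Both naive estimators can be computed from the same design and success matrices; a minimal sketch (the toy matrices below are hypothetical):

```python
import numpy as np

def naive_variances(D, S):
    """Eqs. (14) and (15): iid-readings and noise-free-readers naive estimates."""
    N_gr = D.sum(axis=0)
    p_r = (D * S).sum(axis=0) / N_gr
    P_gamma = p_r.mean()
    N_total = D.sum()
    # Eq. (14): treat all readings as iid Bernoulli trials.
    V_naive_g = (D * (S - P_gamma) ** 2).sum() / (N_total * (N_total - 1))
    # Eq. (15): sample variance of the reader-specific PCs.
    V_naive_gamma = ((p_r - P_gamma) ** 2).sum() / (p_r.size * (p_r.size - 1))
    return V_naive_g, V_naive_gamma

D = np.array([[1, 0], [1, 0], [1, 1], [0, 1]])
S = np.array([[1, 0], [0, 0], [1, 1], [0, 1]])
V_g, V_gamma = naive_variances(D, S)
```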

2D. Simulation

2D1. Model

We shall utilize the Monte Carlo (MC) simulation scheme developed by Roe and Metz [6] to investigate the variance estimates presented above in a 2AFC experiment. This simulation scheme assumes that a reader generates two scores (t_0ir, t_1ir) for each case, where a case represents a signal-absent and signal-present pair of alternatives. If the score of the signal-absent alternative is lower than the score of the signal-present alternative, the success outcome for the case is one; otherwise, it is zero:
\[ s_{ir} = s(t_{1ir} - t_{0ir}) = \begin{cases} 1 & \text{if } t_{1ir} - t_{0ir} > 0 \\ 0 & \text{if } t_{1ir} - t_{0ir} < 0 \end{cases}. \tag{16} \]

[6] C. A. Roe and C. E. Metz, “Dorfman–Berbaum–Metz method for statistical analysis of multireader, multimodality receiver operating characteristic (ROC) data: validation with computer simulation,” Acad. Radiol. 4, 298–303 (1997).

The model for the scores is a sum of Gaussian random variables:
\[ t_{0ir} = 0 + [R]_{0r} + [C]_{0i} + [RC]_{0ir}, \tag{17} \]
\[ t_{1ir} = \mu_t + [R]_{1r} + [C]_{1i} + [RC]_{1ir}. \tag{18} \]
Here, t_0ir and t_1ir are the rth reader’s scores for the signal-absent and signal-present alternatives of the ith case. Except for μ_t, which indicates the separation between the two score distributions, the terms in t_0ir and t_1ir are independent zero-mean Gaussian random variables that we refer to as the reader effect (σ_R²), the case effect (σ_C²), and the reader/case interaction effect (σ_RC²). We shall follow the convenient constraint used by Roe and Metz on the sum of the variances of the random effects such that
\[ \sigma_R^2 + \sigma_C^2 + \sigma_{RC}^2 = 1. \tag{19} \]
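A minimal Monte Carlo sketch of this score model, Eqs. (16)–(18), might look as follows (the function name and argument order are our own):

```python
import numpy as np

def roe_metz_success(mu_t, var_R, var_C, var_RC, n_cases, n_readers, rng):
    """Sample a success matrix under Eqs. (16)-(18); var_R + var_C + var_RC = 1 (Eq. 19)."""
    sd = np.sqrt
    R0 = rng.normal(0.0, sd(var_R), (1, n_readers))      # [R]_0r
    R1 = rng.normal(0.0, sd(var_R), (1, n_readers))      # [R]_1r
    C0 = rng.normal(0.0, sd(var_C), (n_cases, 1))        # [C]_0i
    C1 = rng.normal(0.0, sd(var_C), (n_cases, 1))        # [C]_1i
    RC0 = rng.normal(0.0, sd(var_RC), (n_cases, n_readers))
    RC1 = rng.normal(0.0, sd(var_RC), (n_cases, n_readers))
    t0 = 0.0 + R0 + C0 + RC0                             # Eq. (17)
    t1 = mu_t + R1 + C1 + RC1                            # Eq. (18)
    return (t1 > t0).astype(int)                         # Eq. (16)

rng = np.random.default_rng(0)
S = roe_metz_success(1.5, 0.3, 0.3, 0.4, 2000, 10, rng)
# S.mean() should be near Phi(mu_t / sqrt(2)), up to reader and case sampling.
```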

With such a simple description for the scores, we can characterize the distribution of the PC to second order. First, the rth reader’s skill averaged over all cases is
\[ p_r = \langle \hat{p}_r \rangle_r = \langle s(t_{1ir} - t_{0ir}) \rangle_r, \tag{20} \]
where s(t_1ir − t_0ir) equals one for t_1ir > t_0ir, and zero otherwise. Since we have fixed the reader, the remaining randomness in t_1ir − t_0ir is the sum of two case terms and two reader/case terms. Since all these terms are independent, t_1ir − t_0ir given [R]_1r − [R]_0r is a Gaussian random variable with mean μ_t + [R]_1r − [R]_0r and variance 2σ_C² + 2σ_RC². Therefore,
\[ p_r = \Phi\!\left( \frac{\mu_t + [R]_{1r} - [R]_{0r}}{\sqrt{2\sigma_C^2 + 2\sigma_{RC}^2}} \right), \]
where Φ is the cumulative distribution function (cdf) of the standard normal. Furthermore, since the only randomness in this last expression comes from the currently fixed reader effects [R]_1r − [R]_0r, the cdf of reader skill is given by
\[ \Pr(p_r \le \tau) = \int_{-\infty}^{\tau} dx\, \frac{\exp\!\left(-x^2/4\sigma_R^2\right)}{\sqrt{4\pi\sigma_R^2}}\, \Phi\!\left( \frac{\mu_t + x}{\sqrt{2\sigma_C^2 + 2\sigma_{RC}^2}} \right). \tag{21} \]
Unfortunately, we were unable to find a closed-form solution for this cdf. So, to find the average reader skill, we have two options.

The first option is to (numerically) calculate the average over the two independent reader components [Eq. (21)], letting τ go to infinity. The second option starts over, eliminating the condition on r in Eq. (20). Noticing that t_1ir − t_0ir is simply a Gaussian with variance two centered on μ_t,
\[ M_0 = \langle p_r \rangle = \langle s(t_{1ir} - t_{0ir}) \rangle = \Phi\!\left( \frac{\mu_t}{\sqrt{2}} \right), \tag{22} \]
as intended by Roe and Metz [6]. Please note that for population quantities, M_1 equals M_0 (because s² = s) and M_8 equals M_0².

This leaves M_4 and M_5 as the remaining second-order moments unaccounted for in this problem. Without a familiar probability density function (pdf) for p_r, the only option we found for calculating these moments is through numerical integration. The integral expressions for M_4 and M_5 are
\[ M_4 = \int_{-\infty}^{\infty} dx\, \frac{\exp\!\left(-x^2/4\sigma_R^2\right)}{\sqrt{4\pi\sigma_R^2}} \left[ \Phi\!\left( \frac{\mu_t + x}{\sqrt{2\sigma_C^2 + 2\sigma_{RC}^2}} \right) \right]^2, \tag{23} \]
\[ M_5 = \int_{-\infty}^{\infty} dx\, \frac{\exp\!\left(-x^2/4\sigma_C^2\right)}{\sqrt{4\pi\sigma_C^2}} \left[ \Phi\!\left( \frac{\mu_t + x}{\sqrt{2\sigma_R^2 + 2\sigma_{RC}^2}} \right) \right]^2. \tag{24} \]
Numerical calculation of these one-dimensional integrals is a readily tractable task.
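For instance, Eq. (23) reduces to a one-dimensional quadrature over the reader effect x = [R]_1r − [R]_0r; the grid-based sketch below avoids any special integration library (the grid width and spacing are our choices, not the paper's):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + np.vectorize(erf)(x / sqrt(2.0)))

def moment_M4(mu_t, var_R, var_C, var_RC, half_width=10.0, n_grid=20001):
    """Eq. (23) by brute-force quadrature; Eq. (24) for M5 swaps var_R and var_C."""
    x = np.linspace(-half_width, half_width, n_grid)
    dx = x[1] - x[0]
    # exp(-x^2 / 4 sigma_R^2) / sqrt(4 pi sigma_R^2) is the N(0, 2 sigma_R^2) density.
    density = np.exp(-x**2 / (4.0 * var_R)) / np.sqrt(4.0 * np.pi * var_R)
    integrand = density * norm_cdf((mu_t + x) / np.sqrt(2.0 * var_C + 2.0 * var_RC))**2
    return float(integrand.sum() * dx)
```

Since M_4 is the second moment of reader skill, it must lie between M_8 = M_0² and M_1 = M_0, which makes a handy numerical check.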

2D2. Simulation Configurations

The relevant parameters for the simulation are listed in Table 2. We vary all the simulation parameters in a factorial design, yielding 3×3×3×3×3×3 = 729 total configurations. For each of these, we run 10,000 MC trials. Compared to the simulation parameters of Roe and Metz [6], we consider a broader range of reader variance (σ_R²) for the scores, especially on the high end. The range they considered was 1%–10% of the total; our range is 5%–83%.

The first doctor–patient study design is flat; every reader reads the same number of cases. For the Poisson doctor–patient study design, the number of cases each reader reads is five cases plus a Poisson random variable with mean N̄_gr − 5. For the uniform distributions, the number is selected from the interval [5, 2N̄_gr − 5] for the broad distribution or [0.5 N̄_gr, 1.5 N̄_gr] for the moderate distribution. These distributions force a minimum of five readings per reader.
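The four case-load rules can be sketched as below (the function name and the integer rounding for the uniform intervals are our assumptions):

```python
import numpy as np

def case_loads(design, n_readers, mean_load, rng):
    """Per-reader case loads for the doctor-patient designs of Subsection 2D2."""
    if design == "flat":
        return np.full(n_readers, mean_load)
    if design == "poisson":            # 5 + Poisson(mean_load - 5)
        return 5 + rng.poisson(mean_load - 5, n_readers)
    if design == "uniform_broad":      # integers in [5, 2*mean_load - 5]
        return rng.integers(5, 2 * mean_load - 5, n_readers, endpoint=True)
    if design == "uniform_moderate":   # integers in [0.5*mean_load, 1.5*mean_load]
        return rng.integers(mean_load // 2, (3 * mean_load) // 2,
                            n_readers, endpoint=True)
    raise ValueError(f"unknown design: {design}")
```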

The final study design we consider is motivated by an observer study conducted by investigators at the National Cancer Institute. The observer study used a subset of images from the atypical squamous cells of undetermined significance (ASCUS) low-grade squamous intraepithelial lesion (LSIL) triage study known as ALTS [7, 8]. In that study a small subset of the cases were read by all the study colposcopists. The remaining cases were each read by three readers. Here we have a data set of N_γ(N̄_gr − 3)/3 + 3 cases. Each reader reads the first three cases of this data set; the remaining cases are each read by three randomly selected readers. The curious size of the data set is chosen so that the total number of readings for this study design is N_γ × N̄_gr, the same total expected for the other study designs. We shall refer to this study design as the hybrid study design.

[7] M. Schiffman and M. E. Adrianza, “ASCUS-LSIL triage study: design, methods and characteristics of trial participants,” Acta Cytol. 44, 726–742 (2000).
[8] J. Jeronimo, L. S. Massad, and M. Schiffman, “Visual appearance of the uterine cervix: correlation with human papillomavirus detection and type,” Am. J. Obstet. Gynecol. 97, 47.e1–47.e8 (2007).

Finally, we consider both weighting methods mentioned above: equally weighing readers and equally weighing readings.

3. SIMULATION RESULTS AND DISCUSSION

In what follows, we compare our variance estimators to the truth, the population quantities. Our population quantities are calculated from the integral expressions in Eqs. (22, 23, 24) and the MC averages for the coefficients. So the truth still has an element of uncertainty in it; the expected values of the nonlinear coefficients are intractable.

To verify the integral expressions, we compare each population variance (from integration) to the sample variances of 10,000 independent MC performance estimates. A separate point is given for each of the 729 simulation configurations, 6 types of study designs, and both ways to weigh individual reader PCs in Fig. 2 . Across all these simulation configurations, which cover a broad range of variances, the maximum difference found was 6% and the mean was 0.1%.

Expected Variance. Before we assess our estimates, it is worthwhile to show the variances expected from all the experiments. Figure 3 shows the population variances for all the high-PC (0.96) simulation configurations compared to the expected values (from MC averaging) of the naïve variance estimates. The expected values of our moment estimators are unbiased and thus equal the population variances. At the bottom of each column of plots, the x axis is labeled according to the size of the simulated experiment. The 27 different components of variance configurations are then explored within each experiment size according to the reader component of variance (σR2). This sorting shows that the reader component of variance has a strong impact on the expected variance of the experiment. The size of the experiment also affects the experimental variance, though to a lesser degree. Additionally, the simulation configurations for lower PCs (0.86, 0.70; not shown) are quite similar to those given in Fig. 3 except that they are shifted upward. This behavior mimics the binomial variance, which increases with decreasing performance.

Interestingly, the overall scale across the different study designs is relatively constant for each experiment size. Recall that each study design has the same expected number of readings given the same experiment size. However, we can see that different study designs behave differently across different components-of-variance configurations.

Regarding the impact of reader weights, V_γ = var(P̂_γ) lies on top of V_g = var(P̂_g) in all the plots except for the broad uniform doctor–patient study design (Fig. 3d). In that plot, V_g can differ from V_γ by as much as ±30% (notice some dots peeking out from behind the solid curve). What this means is that the variance of the reader-averaged PC does not depend on the reader weights except when the reader case loads are very different.

Finally, in each plot the naïve estimates bracket the true MRMC variances: V̂_naive,γ is biased high (the dotted curve upper bound) and V̂_naive,g is biased low (the dashed curve lower bound). In the plots, V̂_naive,γ can be nine times the true variance, whereas V̂_naive,g can be as little as 2% of the true variance.

Root-mean-square error. Here we assess the variance estimators with the relative root-mean-square error (RRMSE), or
\[ \mathrm{RRMSE} = \frac{1}{V} \left( \left( \langle \hat{V} \rangle - V \right)^2 + \mathrm{var}(\hat{V}) \right)^{1/2}, \tag{25} \]
where the first term in the parentheses is the squared bias of the variance estimate and the second term is the variance of the variance estimate; the term in front of the parentheses scales the RMSE to the truth. Thus, the scale of RRMSE can be interpreted as the total error given as a fraction of what we are trying to estimate. Of course, since our moment estimator is unbiased, the RRMSE can also be interpreted as just the standard deviation of our estimator relative to what is being estimated.
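Given MC replicates of a variance estimator and the population value, Eq. (25) is a one-liner (the names below are ours):

```python
import numpy as np

def rrmse(v_hat, v_true):
    """Eq. (25): relative RMSE from MC samples of a variance estimator."""
    v_hat = np.asarray(v_hat, dtype=float)
    bias_sq = (v_hat.mean() - v_true) ** 2       # squared bias of the estimator
    return np.sqrt(bias_sq + v_hat.var(ddof=1)) / v_true
```

An unbiased, noise-free estimator gives 0; a pure bias equal to the truth gives 1.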

We first point out that at high PC (Fig. 4a) and with only three readers, the RRMSE of our MRMC estimators runs above 100% (solid curve). Three readers are not enough to do the MRMC variance estimation, and as the reader component of variance increases, the estimator gets even noisier. In this regime, the naïve estimator V̂_naive,g appears to be performing fairly well (dashed curve), that is, until we recall how biased it is (Fig. 3e). The bias of V̂_naive,γ, on the other hand, is driving the RRMSE to extreme values (dotted curve).

As the size of the experiment grows, the RRMSE of our MRMC estimator decreases, while that of V̂_naive,g does not. That is to say, our MRMC estimator improves with more data, while the naïve estimator cannot adapt to the overdispersive nature of the data. Nonetheless, even with ten readers, each reading 102 cases on average, our MRMC estimator has too much error when the PC and reader variability are high.

When the PC is lower (Fig. 4b), the estimation problem can be done with reasonable precision and accuracy. When there are ten readers and 50 cases in the experiment, the RRMSE of our MRMC estimator ranges between 20% and 40%.

For the broad uniform study design (Figs. 5a and 5b), the overall story is similar, but we now see a difference from the reader weights. In experiments with little data and high PC, the error estimating V_g (dashed–dotted curve) is significantly larger than that estimating V_γ (solid curve). However, for the experiments with adequate readers (ten) and moderate PC, where the RRMSE ranges between 30% and 60%, the difference in errors becomes negligible.

Finally, the RRMSE stories for the other study designs are similar to either that of the fully crossed or the broad uniform study designs. The hybrid study design, with its additional case correlations from cases being read by at least three readers, mimics the fully crossed study design. The other doctor–patient study designs mimic the broad uniform doctor–patient study design, although the differences between the RRMSEs for V̂γ and V̂g are not as pronounced.

In summary, the reader weights do not play a significant role in the total variance of the average PC except when the case loads are very different, as in the broad uniform study design. Additionally, it takes about ten readers and a moderate PC to reasonably estimate the MRMC variance. In this regime the error estimating Vγ and Vg is about the same. Finally, our MRMC estimator improves as more data are collected and performance is moderate; it is a consistent estimator. In contrast, the naïve estimators are not consistent; they do not get closer to the truth with more data.

4. CONCLUSIONS AND FUTURE WORK

We have presented a framework for estimating the variance of a binary-outcome experiment that appropriately accounts for readers and cases as random effects. This framework is based on the larger one developed for estimating the MRMC variance of AUC [4, 5] obtained according to a fully crossed study design. The MRMC variance of AUC has eight fundamental second-order moments of the success outcomes, whereas for the binary-outcome experiment there are only four fundamental moments. We have also generalized the framework to accommodate any MRMC random or fixed study design. A fully crossed study design is not required, though we have highlighted it and another special study design, the doctor–patient study design.

In addition to quantifying the uncertainty of the MRMC experiment conducted, the framework provided can be used to consider other study designs. For example, a small pilot study can be used to estimate the moments of the success outcomes. Then a larger pivotal study can be considered by simply changing the study design matrix, which will change the coefficients c̱. This larger pivotal study does not even need to be of the same type as the pilot study, as long as the appropriate moments have been estimated.

The variance estimates presented are useful for the visual perception investigator performing clinical studies or human psychophysics experiments, as well as for the investigator developing models of the human or ideal observer. For the latter, the utility comes to bear when the model observer is estimated from a finite set of training cases. If another set of cases is obtained (same size), another estimate of the observer (same model) could be obtained. These two model-observer estimates can be thought of as samples from a population of readers. In this setting, a MRMC performance experiment can be run where we generate a sample of readers (trained on independent sets of cases) and a sample of testing cases (cases that are independent of the ones used for training any observer). Performing an MRMC variance analysis on this experiment will allow the investigator to account for the variability from training the model with a finite set of training samples and from testing the model with a finite set of testers. Such an accounting is essential to model development and is starting to be appreciated in the field of computer-aided diagnosis and detection of disease [12].

[12] W. A. Yousef, R. F. Wagner, and M. H. Loew, “Assessing classifiers from two independent data sets using ROC analysis: a nonparametric approach,” IEEE Trans. Pattern Anal. Mach. Intell. 28, 1809–1817 (2006).

One direction for future work in this area is to estimate MRMC covariances. The method we presented in this paper generalizes easily to estimating covariances when the readers and cases are paired across two reading conditions or modalities. Simply replace the success matrix with a difference of success matrices and proceed as described for the single-modality MRMC variance analysis. These covariances can be used to quantify the statistical difference between the performance of a set of readers reading the same cases in two modalities, or the difference between two observer models.

Another direction for future work is to take the general study design concepts to AUC [5]. Pooling ROC scores happens just as pooling success outcomes happens. The subsequent variance analysis and hypothesis tests typically do not account for the fact that the scores from several readers reading different cases are not identically distributed. For AUC, however, not only is the variance analysis wrong, but the actual pooled AUC can be quite different from the average reader AUC [13], especially when the readers use the ROC score axis differently.

[13] M. S. Pepe, The Statistical Evaluation of Medical Tests for Classification and Prediction (Oxford U. Press, 2003).

APPENDIX A: SECOND-MOMENT, FIXED STUDY DESIGN

Here we assume that the design matrix and weights are fixed, and we calculate the second moment of P̂. It is
$$\langle \hat{P}^2 \rangle_D = \Big\langle \Big( \sum_{r=1}^{N_\gamma} \frac{w_r}{N_{gr}} \sum_{i=1}^{N_g} d_{ir} s_{ir} \Big)^2 \Big\rangle. \tag{A1}$$

The squared sum over readers and cases is a quadruple sum that we separate into four parts:
$$\begin{aligned} \langle \hat{P}^2 \rangle_D &= \sum_{r=1}^{N_\gamma} \frac{w_r^2}{N_{gr}^2} \sum_{i=1}^{N_g} d_{ir}^2 \langle s_{ir}^2 \rangle + \sum_{r=1}^{N_\gamma} \frac{w_r^2}{N_{gr}^2} \sum_{i=1}^{N_g} \sum_{i' \neq i}^{N_g} d_{ir} d_{i'r} \langle s_{ir} s_{i'r} \rangle \\ &\quad + \sum_{r=1}^{N_\gamma} \sum_{r' \neq r}^{N_\gamma} \frac{w_r w_{r'}}{N_{gr} N_{gr'}} \sum_{i=1}^{N_g} d_{ir} d_{ir'} \langle s_{ir} s_{ir'} \rangle + \sum_{r=1}^{N_\gamma} \sum_{r' \neq r}^{N_\gamma} \frac{w_r w_{r'}}{N_{gr} N_{gr'}} \sum_{i=1}^{N_g} \sum_{i' \neq i}^{N_g} d_{ir} d_{i'r'} \langle s_{ir} s_{i'r'} \rangle. \end{aligned} \tag{A2}$$

Since the readers and cases are iid, the moments in each line of the expression above do not depend on the particular $r$, $r'$ or $i$, $i'$. We define them to coincide with the notation previously derived for the empirical AUC [4, 5]:
$$M_1 = \langle s_{ir}^2 \rangle = \langle s(g,\gamma)^2 \rangle, \tag{A3}$$
$$M_4 = \langle s_{ir} s_{i'r} \rangle = \big\langle \langle s(g,\gamma) \rangle_\gamma^2 \big\rangle, \tag{A4}$$
$$M_5 = \langle s_{ir} s_{ir'} \rangle = \big\langle \langle s(g,\gamma) \rangle_g^2 \big\rangle, \tag{A5}$$
$$M_8 = \langle s_{ir} s_{i'r'} \rangle = \langle s(g,\gamma) \rangle^2. \tag{A6}$$
Here $\langle \cdot \rangle_\gamma$ denotes an average over the case population with the reader $\gamma$ held fixed, $\langle \cdot \rangle_g$ an average over the reader population with the case $g$ held fixed, and an unsubscripted $\langle \cdot \rangle$ an average over both.
It is worth pointing out that, for the binary-outcome problem, $d_{ir}^2 = d_{ir}$ and $s_{ir}^2 = s_{ir}$. Therefore, $M_8 = M_1^2$.

The general expression in Eq. (A2) simplifies for the study designs considered in this paper (see Table 1). For the fully crossed study design, $d_{ir}$ always equals one, so sums over all $i$ contribute a factor $N_g$ and the inner sums over $i' \neq i$ contribute a factor $N_g - 1$. For doctor–patient study designs, readers never read the same cases, so the sum over $i$ of $d_{ir} d_{ir'}$ always equals zero, and the sum over $i$ and $i' \neq i$ of $d_{ir} d_{i'r'}$ equals $N_{gr} N_{gr'}$.
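For the fully crossed design, the four moments can be estimated from a success matrix by averaging only over the distinct index pairs their definitions require (a U-statistic-style construction). The function below is our own illustrative sketch of such estimators, using the paper's moment labels; it is not the authors' code.

```python
import numpy as np

def moments_fc(s):
    """Estimate M1, M4, M5, M8 from a fully crossed success matrix
    s (readers x cases, binary entries), averaging only over
    distinct readers/cases where the definitions require it."""
    s = np.asarray(s, dtype=float)
    nr, ng = s.shape
    row = s.sum(axis=1)            # per-reader totals
    col = s.sum(axis=0)            # per-case totals
    tot = s.sum()
    ssq = (s ** 2).sum()           # equals tot for binary data

    m1 = ssq / (nr * ng)
    # same reader, distinct cases: <s_ir s_i'r>
    m4 = ((row ** 2).sum() - ssq) / (nr * ng * (ng - 1))
    # same case, distinct readers: <s_ir s_ir'>
    m5 = ((col ** 2).sum() - ssq) / (ng * nr * (nr - 1))
    # distinct readers and distinct cases, via inclusion-exclusion
    m8 = (tot ** 2 - (row ** 2).sum() - (col ** 2).sum() + ssq) \
         / (nr * (nr - 1) * ng * (ng - 1))
    return m1, m4, m5, m8

print(moments_fc([[1, 1], [1, 0]]))
```

The inclusion-exclusion step for `m8` (total squared, minus same-reader and same-case pair sums, plus the doubly coincident terms) avoids an explicit quadruple loop over readers and cases.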

It is also handy to derive the expected value of
$$\hat{p}_r \hat{p}_{r'} = \sum_{i=1}^{N_g} \frac{d_{ir} s_{ir}}{N_{gr}} \frac{d_{ir'} s_{ir'}}{N_{gr'}} + \sum_{i=1}^{N_g} \sum_{i' \neq i}^{N_g} \frac{d_{ir} s_{ir}}{N_{gr}} \frac{d_{i'r'} s_{i'r'}}{N_{gr'}}. \tag{A11}$$
When the readers are the same, $r = r'$,
$$\langle \hat{p}_r^2 \rangle = \frac{1}{N_{gr}} M_1 + \frac{N_{gr} - 1}{N_{gr}} M_4, \tag{A12}$$
regardless of the study design. When the readers are different, $r \neq r'$, the expected value depends on the study design. For the fully crossed (FC) and doctor–patient (Dr-Pt) study designs,
$$\langle \hat{p}_r \hat{p}_{r'} \rangle_{\mathrm{FC}} = \frac{1}{N_g} M_5 + \frac{N_g - 1}{N_g} M_8, \tag{A13}$$
$$\langle \hat{p}_r \hat{p}_{r'} \rangle_{\mathrm{Dr\text{-}Pt}} = M_8. \tag{A14}$$

APPENDIX B: COMPONENTS OF VARIANCE

In this section we relate our moment decomposition of the variance given in Eq. (4) to a components-of-variance (CofV) decomposition [1, 2, 14]. We begin by considering the distribution of reader skill; some readers are better than others. The skill of a reader is the success outcome for a given reader $\gamma$ averaged over all cases in the population, or
$$p_\gamma = \langle s(g,\gamma) \rangle_\gamma. \tag{B1}$$
We shall denote the mean and variance of this distribution, respectively, as
$$\mu_\gamma = \langle p_\gamma \rangle = \big\langle \langle s(g,\gamma) \rangle_\gamma \big\rangle = M_0, \tag{B2}$$
$$\sigma_\gamma^2 = \big\langle \langle s(g,\gamma) \rangle_\gamma^2 \big\rangle - \langle s(g,\gamma) \rangle^2 = M_4 - M_8. \tag{B3}$$
These two quantities arise naturally from the variance obtained when estimating the reader-specific performance of a random reader $\gamma$ reading a random set of $N_{g\gamma}$ cases. Specifically,
$$\mathrm{var}(\hat{p}_\gamma \mid N_{g\gamma}) = \mathrm{var}\big( \langle \hat{p}_\gamma \mid \gamma, N_{g\gamma} \rangle \big) + \big\langle \mathrm{var}(\hat{p}_\gamma \mid \gamma, N_{g\gamma}) \big\rangle \tag{B4}$$
$$= \mathrm{var}(p_\gamma) + \Big\langle \frac{p_\gamma (1 - p_\gamma)}{N_{g\gamma}} \Big\rangle \tag{B5}$$
$$= \sigma_\gamma^2 + \frac{\mu_\gamma (1 - \mu_\gamma) - \sigma_\gamma^2}{N_{g\gamma}}. \tag{B6}$$

Likewise, we consider the distribution of case difficulty. The case difficulty is the success outcome for a given case g averaged over all readers in the population, or
$$p_g = \langle s(g,\gamma) \rangle_g. \tag{B7}$$
As with reader skill, we have a mean and variance of the case difficulty,
$$\mu_g = \langle p_g \rangle = \big\langle \langle s(g,\gamma) \rangle_g \big\rangle = M_0, \tag{B8}$$
$$\sigma_g^2 = \big\langle \langle s(g,\gamma) \rangle_g^2 \big\rangle - \langle s(g,\gamma) \rangle^2 = M_5 - M_8. \tag{B9}$$

Instead of the development above, the DBM model starts by decomposing the performance into three random effects:
$$\hat{p}_{G\gamma} = \bar{\beta} + \beta_\gamma + \beta_G + \beta_{G\gamma}, \tag{B10}$$
where $G$ denotes a set of cases; $\bar{\beta}$ denotes the average performance; $\beta_\gamma$ is a random effect accounting for reader skill; $\beta_G$ is a random effect accounting for the difficulty of the case set; and $\beta_{G\gamma}$ quantifies two random effects, a possible reader–case interaction and reader jitter. The interaction and reader-jitter effects are inseparable when there are no repeated readings. All the random effects are assumed to be independent zero-mean Gaussian random variables. The corresponding reader CofV is identical to $\sigma_\gamma^2$, and the case CofV equals $\sigma_g^2$ scaled per case set, or $\sigma_g^2 / N_{gr}$.

At first, the variance of the interaction term is not obvious. The reason is that it depends on the study design: on how the readers and cases are sampled and combined in the summary performance statistic. We can identify the variance of the interaction term by starting with the total variance and organizing it according to reciprocal powers of $N_\gamma$ and $N_g$, much as is done in the work of Barrett et al. [15, 16]. For the fully crossed study design, we have
$$\mathrm{var}(\hat{P}_{\mathrm{FC}}) = \frac{M_4 - M_8}{N_\gamma} + \frac{M_5 - M_8}{N_g} + \frac{M_1 - M_4 - M_5 + M_8}{N_\gamma N_g}. \tag{B11}$$
The first term is $\sigma_\gamma^2 / N_\gamma$, the second term is $\sigma_g^2 / N_g$, and we define the third numerator to be the variance of the interaction term (divided by $N_\gamma N_g$). For the flat doctor–patient study design,
$$\mathrm{var}(\hat{P}_{\mathrm{flat\,Dr\text{-}Pt}}) = \frac{M_4 - M_8}{N_\gamma} + \frac{M_1 - M_4}{N_\gamma N_{gr}}. \tag{B12}$$
Here we find a reader CofV (numerator of first term), but we do not find a case CofV. Instead, we see that the second term is divided by the number of readers and the number of cases read by each reader. Thus we define the numerator of the second term as the interaction term.
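The design-specific variance expressions can be checked by simulation. The sketch below (our own illustration; the additive-probability model and every parameter value are assumptions) computes the success moments analytically for a simple generative model and compares Eq. (B11) against the Monte Carlo variance of the pooled PC for a fully crossed design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed generative model: P(success | reader r, case i) = mu + a_r + b_i,
# with small zero-mean Gaussian reader (a) and case (b) effects.
mu, sig_a, sig_b = 0.7, 0.04, 0.04
n_reader, n_case, n_trials = 5, 50, 20_000

# Analytic success moments for this model (binary outcomes):
m1 = mu                       # <s^2> = <s>
m4 = mu**2 + sig_a**2         # same reader, distinct cases
m5 = mu**2 + sig_b**2         # same case, distinct readers
m8 = mu**2                    # distinct readers and distinct cases

# Eq. (B11): variance of the pooled PC, fully crossed design.
var_fc = ((m4 - m8) / n_reader
          + (m5 - m8) / n_case
          + (m1 - m4 - m5 + m8) / (n_reader * n_case))

# Monte Carlo: many fully crossed experiments, one pooled P-hat each.
a = rng.normal(0.0, sig_a, (n_trials, n_reader, 1))
b = rng.normal(0.0, sig_b, (n_trials, 1, n_case))
s = rng.random((n_trials, n_reader, n_case)) < mu + a + b
p_hat = s.mean(axis=(1, 2))
var_mc = p_hat.var()
print(var_fc, var_mc)  # close to each other, within MC error
```

Rearranging the per-trial layout so each reader reads a disjoint block of cases would let the same comparison be run against Eq. (B12) for the flat doctor–patient design.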

Table 1. List of the Coefficients Needed to Appropriately Weight the Success Moments to Determine the Variance of P̂ for a Fixed Study Design

Table 2. Parameters Investigated in the Simulations According to a Factorial Design, Yielding a Total of 3×3×3×3×2×2 = 324 Simulation Configurations

Table 3. One Example of the Six Distributions of Cases for Five Readers Reading 102 Cases on Average
Fig. 1 Graphic on the left shows the (transpose) layout of data from a binary-outcome experiment with multiple readers. Compared to a fully crossed data set, which would fill the entire matrix, much of the data is missing. The table on the right shows a simple example. The PC in the last row weighs each reading equally. One estimate of the standard error of that average treats all the readings as iid; the result is 3.7. One could instead obtain an average PC by averaging the reader-specific PCs, resulting in 60, and might then estimate the standard error of this average, yielding 20.6. While both averages are valid, both variance estimates are wrong.
Fig. 2 Population variances calculated using the integral expressions for the moments compared to those estimated from MC. A separate point is given for each of the 324 simulation configurations, 6 types of study designs, and both ways to weigh individual reader PCs.
Fig. 3 Population variances for all the high-PC (0.96) simulation configurations compared to the expected values (from MC averaging) of the naïve variance estimates. The expected values of our moment estimators are unbiased and thus equal the population variances. The naïve estimates bracket the true MRMC variances: V̂naive_γ is biased high (dotted curve) and V̂naive_g is biased low (dashed curve). In all but panel D, Vg lies on top of Vγ (notice some dots peeking out from behind the solid curve).
Fig. 4 RRMSE for the fully crossed study design: A, high PC (0.96); B, low PC (0.70).
Fig. 5 RRMSE for the broad uniform study design: A, high PC (0.96); B, low PC (0.70).
References

1. D. D. Dorfman, K. S. Berbaum, and C. E. Metz, "Receiver operating characteristic rating analysis: generalization to the population of readers and patients with the jackknife method," Invest. Radiol. 27, 723–731 (1992).
2. S. V. Beiden, R. F. Wagner, and G. Campbell, "Components-of-variance models and multiple-bootstrap experiments: an alternative method for random-effects, receiver operating characteristic analysis," Acad. Radiol. 7, 341–349 (2000).
3. N. A. Obuchowski, S. V. Beiden, K. S. Berbaum, S. L. Hillis, H. Ishwaran, H. H. Song, and R. F. Wagner, "Multireader, multicase receiver operating characteristic analysis: an empirical comparison of five methods," Acad. Radiol. 11, 980–995 (2004).
4. B. D. Gallas, "One-shot estimate of MRMC variance: AUC," Acad. Radiol. 13, 353–362 (2006).
5. B. D. Gallas and D. G. Brown, "Reader studies for validation of CAD systems," submitted to Neural Networks.
6. C. A. Roe and C. E. Metz, "Dorfman–Berbaum–Metz method for statistical analysis of multireader, multimodality receiver operating characteristic (ROC) data: validation with computer simulation," Acad. Radiol. 4, 298–303 (1997).
7. M. Schiffman and M. E. Adrianza, "ASCUS-LSIL triage study: design, methods and characteristics of trial participants," Acta Cytol. 44, 726–742 (2000).
8. J. Jeronimo, L. S. Massad, and M. Schiffman, "Visual appearance of the uterine cervix: correlation with human papillomavirus detection and type," Am. J. Obstet. Gynecol. 97, 47.e1–47.e8 (2007).
9. S. L. Hillis and K. S. Berbaum, "Monte Carlo validation of the Dorfman–Berbaum–Metz method using normalized pseudovalues and less data-based model simplification," Acad. Radiol. 12, 1534–1541 (2005).
10. S. L. Hillis, N. A. Obuchowski, K. M. Schartz, and K. S. Berbaum, "A comparison of the Dorfman–Berbaum–Metz and Obuchowski–Rockette methods for receiver operating characteristic (ROC) data," Stat. Med. 24, 1579–1607 (2005).
11. X. Song and X.-H. Zhou, "A marginal model approach for analysis of multi-reader multi-test receiver operating characteristic (ROC) data," Biostatistics 6, 303–312 (2005).
12. W. A. Yousef, R. F. Wagner, and M. H. Loew, "Assessing classifiers from two independent data sets using ROC analysis: a nonparametric approach," IEEE Trans. Pattern Anal. Mach. Intell. 28, 1809–1817 (2006).
13. M. S. Pepe, The Statistical Evaluation of Medical Tests for Classification and Prediction (Oxford U. Press, 2003).
14. C. A. Roe and C. E. Metz, "Variance-component modeling in the analysis of receiver operating characteristic (ROC) index estimates," Acad. Radiol. 4, 587–600 (1997).
15. H. H. Barrett, M. A. Kupinski, and E. Clarkson, "Probabilistic foundations of the MRMC method," Proc. SPIE 5749, 21–31 (2005).
16. E. Clarkson, M. A. Kupinski, and H. H. Barrett, "A probabilistic model for the MRMC method. Part 1. Theoretical development," Acad. Radiol. 13, 1410–1421 (2006).

OCIS Codes
(000.5490) General : Probability theory, stochastic processes, and statistics
(110.3000) Imaging systems : Image quality assessment
(330.5510) Vision, color, and visual optics : Psychophysics

History
Original Manuscript: April 13, 2007
Revised Manuscript: July 7, 2007
Manuscript Accepted: August 2, 2007
Published: September 28, 2007

Virtual Issues
Vol. 3, Iss. 1 Virtual Journal for Biomedical Optics

Citation
Brandon D. Gallas, Gene A. Pennello, and Kyle J. Myers, "Multireader multicase variance analysis for binary data," J. Opt. Soc. Am. A 24, B70-B80 (2007)
http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-24-12-B70

