Stanford study finds COVID-19 cases were undercounted 50- to 85-fold. Not so fast, statisticians say.

April 25, 2020, 4:00 p.m.

Swift blowback followed the release of preliminary findings from a Stanford-led study that estimated that the number of COVID-19 cases in Santa Clara County was 50 to 85 times higher than the number of confirmed cases. 

Statisticians reported being baffled by the narrow confidence intervals on the study’s estimates, given the potential for false-positive test results. Had the calculations been performed correctly, the statisticians contend, the confidence interval would widen to include zero, rendering the results less meaningful.

Criticism on Twitter ranged from gentle — Fred Hutch statistician Trevor Bedford politely urged “caution in interpretation” after seven critical tweets complete with graphs — to scathing.

“We’ve been waiting for good serology for like, forever,” wrote Harvard epidemiology associate professor Bill Hanage. “It’s somehow amazingly still not here.”

Columbia University statistics professor Andrew Gelman even called on his blog for the researchers to issue an apology.

The researchers tested blood samples from 3,330 people for antibodies in early April. Of the samples analyzed, 50 came back positive, a crude prevalence rate of 1.5%. The researchers then adjusted the initial results to account for demographics and test accuracy. The study concludes that between 2.49% and 4.16% of the county’s roughly 1.9 million residents have been infected, meaning the true number of cases could range from about 48,000 to 81,000 as of early April. The county’s public health department has reported far fewer confirmed cases: only 2,040 as of Saturday.

The University has asked the researchers not to speak on criticisms of the study at this time, and the team is working on a revision that will address the concerns raised, according to study co-lead Eran Bendavid, an associate professor of medicine.

Stanford Medicine spokesperson Julie Greicius wrote in a statement to The Daily that the study findings are “being evaluated and revised.” Greicius declined to comment on whether and why the University had requested that the researchers not comment on criticisms of the study.

The study has also been criticized for potential selection bias. Catherine Su, the wife of medicine professor and study author Jay Bhattacharya, advertised the study to a listserv for parents at a Los Altos high school, according to BuzzFeed News.

Bhattacharya told BuzzFeed News that Su’s email had not been authorized by the researchers and that, when analyzing the data, they had taken steps to correct for a potentially unrepresentative sample.

“Our tracking of signups very strongly suggests that this email attracted many people from the wealthier and healthier parts of Santa Clara County to seek to volunteer for the study,” Bhattacharya wrote. “In real time, we took immediate steps to slow the recruitment from these areas and open up recruitment from all around Santa Clara County.”

The Santa Clara County Public Health Department (SCCPHD) wrote in a statement to The Daily that while SCCPHD was aware of the study, it would decline to comment until the study was published in a peer-reviewed journal. 

False positives, a ‘fundamental barrier’

A primary concern of statisticians was the treatment of false-positive test results when analyzing the data. Microbiology and immunology professor Robert Siegel ’76 M.A. ’77 M.D. ’90 said that even small false-positive rates for antibody tests could cast doubt on study results if the prevalence of the virus in the community was also in the single digits.

“There’s a statistical possibility that every single positive result is false unless the accuracy is virtually perfect,” Siegel said. “And we know that it isn’t.”

Genetics postdoctoral research fellow Jeffrey Spence characterized the potential for false positives as a “fundamental barrier” to informative seroprevalence studies, which estimate the prevalence of a disease based on antibodies found in blood samples.

“We’re never going to get an accurate estimate of the rate of prevalence if the rate of prevalence is close to the false positive rate,” Spence said. 

The researchers estimated the rate of false positives by testing 30 samples known not to contain COVID-19 antibodies; all 30 correctly came back negative. The researchers then combined this information with data from the test manufacturer, Premier Biotech, which reported that two of 371 confirmed-negative samples came back as false positives.

The researchers used this information to estimate the antibody test’s specificity, its ability to correctly identify samples without the antibodies. Based on the in-house trial, they estimated the specificity at 100%, with a confidence interval of 90.5% to 100%. (A specificity of 100% means a false positive rate of 0%, since the false positive rate is simply 100% minus the specificity.) Based on the manufacturer’s data, they estimated it at 99.5%, with a confidence interval of 98.1% to 99.9%. A confidence interval identifies the range of values within which researchers are fairly confident the true value falls.
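Those ranges are consistent with standard exact binomial confidence bounds. As a rough, purely illustrative check (the preprint’s exact method is not detailed here), a Clopper-Pearson calculation in Python approximately reproduces the reported intervals:

```python
# Illustrative only: exact (Clopper-Pearson) binomial bounds for specificity,
# assuming the validation data described above. Not the preprint's own code.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Two-sided exact confidence interval for a binomial proportion k/n."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Manufacturer's data: 369 of 371 known-negative samples tested negative.
print(clopper_pearson(369, 371))  # roughly (0.981, 0.999), i.e. 98.1% to 99.9%

# In-house data: 30 of 30 known-negative samples tested negative.
# A one-sided 95% exact lower bound for 30 successes out of 30 is
# 0.05 ** (1/30), roughly 0.905, matching the reported 90.5% to 100% range.
print(0.05 ** (1 / 30))
```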

These confidence intervals concerned statisticians like Gelman, given that just 1.5% of study participants tested positive for antibodies. If the antibody tests had a specificity of 98.5%, a value that falls within the confidence intervals from both the in-house and manufacturer’s trials, all 50 positive tests could have been false positives, according to Gelman.

“If the false positive rate was 1.5%, then you’d expect to see 50 positive tests just from chance alone,” Gelman said. “Obviously, the true prevalence rate in the county is not zero. But the data are consistent with zero.”
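The arithmetic behind that scenario is simple; as an illustration, the expected number of false positives at 98.5% specificity across the study’s 3,330 samples works out to the same 50 positives the study observed:

```python
# Illustrative arithmetic for Gelman's scenario: a 1.5% false positive rate
# (98.5% specificity) applied to all 3,330 samples in the study.
samples = 3330
false_positive_rate = 0.015
print(samples * false_positive_rate)  # about 50 expected false positives
```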

If the specificity were any lower than 98.5%, the expected number of false positives would exceed 50. More than 50 false positives is impossible, since only 50 positive results were observed in total, but a lower specificity would make it all the more likely that most, if not all, of the 50 positives were false.

The Stanford researchers wrote in the study’s preprint that new data about test kit performance could result in updated prevalence estimates, saying that if the test specificity were determined to be less than 97.9%, the lower uncertainty bound of the estimate would grow to include zero.

Spence said that when calculating the standard error, which is in turn used to determine the confidence interval, the researchers appeared to have taken an additional, incorrect step: dividing the variance by the total study population. This could have greatly narrowed the confidence interval, according to Spence.

“This artificially drives down the variance and substantially decreases the width of the confidence interval,” Spence said.
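The preprint’s full calculation propagates uncertainty from the sensitivity and specificity estimates and is not reproduced here, but a minimal sketch for a plain binomial proportion illustrates why the extra division Spence describes would shrink the interval:

```python
import math

# Conventional standard error of a sample proportion: the variance of the
# estimate is p_hat * (1 - p_hat) / n, and the standard error is its square root.
n = 3330                       # total study population
p_hat = 50 / n                 # crude positive rate, about 1.5%
variance = p_hat * (1 - p_hat) / n
se_conventional = math.sqrt(variance)             # about 0.0021

# The step the critics describe: dividing the variance by n a second time
# before taking the square root shrinks the standard error (and hence the
# confidence interval) by a factor of sqrt(n), roughly 58 here.
se_with_extra_division = math.sqrt(variance / n)  # about 0.00004

print(se_conventional, se_with_extra_division)
```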

UC Berkeley assistant statistics professor Will Fithian Ph.D. ’15 corroborated that the standard error is typically determined by taking the square root of the variance, without dividing by additional factors.

“It’s quite an odd error to make, frankly,” he said. “Statistical errors are somewhat different from other errors in that they’re completely undeniable.”

Fithian said he would not be surprised if the corrected confidence interval in this case included zero. If zero prevalence were within the range of plausible outcomes based on the data, it would indicate noisy data that is less helpful in drawing conclusions about the actual prevalence rate, according to Fithian.

“If you thought that the prevalence was very low in Santa Clara County, and you saw a confidence interval that ran from zero to 4%, you could go on believing what you already did,” Fithian said. 

Some scientists have pointed to similar prevalence estimates from a Los Angeles County seroprevalence study as support that the math in the Santa Clara County study was sound. The sister study, run by USC researchers in conjunction with the Stanford researchers, estimated that COVID-19 had infected 2.8% to 5.6% of that county’s residents.

Spence cautioned against relying on the results of the Los Angeles study before a technical document or preprint becomes available. The results were initially announced in a press release. A technical document was subsequently posted on RedState.com, a conservative blog, Spence said, but it was later taken down.

“All of this is highly unusual, to put it mildly,” Spence wrote in an email to The Daily.

Los Angeles study co-author Neeraj Sood, University of Southern California vice dean for faculty affairs and research, confirmed that an “unauthorized copy” of the study had been posted at RedState.

Gelman wrote on his blog that if the Santa Clara County results were accurate and could be extrapolated across states, New York City could also expect to have 5.4 million people with antibodies. Extrapolating from a state antibody survey, Gov. Andrew M. Cuomo (D) said on Thursday that he anticipates 1 in 5 New York City residents — and up to 2.7 million residents statewide — have COVID-19 antibodies.

Regardless of whether the Stanford researchers stumbled upon the correct prevalence rate, Gelman said the statistical criticisms were based on “avoidable errors” and repeated that he thought the authors should issue an apology.

“It’s like if somebody shows up to the party and then they forgot to bring the beer,” Gelman said. “You’d appreciate that they came to the party, but you kind of say, ‘Well, they should apologize because bringing the beer was their job, and they didn’t do it.’”

A previous version of this post incorrectly stated that 1 in 5 New York City residents is about 3 million people, when it is actually around 1.7 million people. The post has been updated to reflect that Cuomo said up to 2.7 million New York state residents could have COVID-19 antibodies based on the study. The Daily regrets this error.

Kate Selig served as the Vol. 260 editor in chief. Contact her at kselig 'at' stanforddaily.com.
