Pandemic Data Outlook

Lack of Test Positivity Standards for COVID Demands Recalculation

Due to the evolving pandemic data landscape and persistently incomplete data streams, the Coronavirus Resource Center has redefined and expanded how it calculates COVID-19 test positivity for U.S. states. This effort is necessary because of the lack of federal standards and a perplexing testing data environment.

Authors:
Jennifer Nuzzo, Associate Professor, Environmental Health and Engineering, BSPH
Beth Blauer, Associate Vice Provost, JHU
December 15, 2021

On Dec. 15, 2021, the Coronavirus Resource Center team released sweeping changes to how we calculate and report COVID-19 test positivity. Test positivity (the percentage of tests that return a positive result) is not a measure of disease prevalence in an area, but a metric for understanding testing capacity. Previously, the CRC reported positivity by dividing the number of cases by the total number of test results. Depending on the state, “results” could refer to tests, testing encounters, or specimens. This initial approach was selected because it permitted a “person-centric” positivity ratio to be calculated for all states: a ratio focused on people infected rather than tests conducted. Using “results” as the denominator meant no state was excluded based on the type of testing data it provided.

However, at the CRC we strive to inform the public on the evolution of the COVID-19 pandemic with the greatest accuracy. With that in mind, we designed a new Positivity Calculation Hierarchy (visualized below), allowing us to use multiple methods to analyze the data depending on what information each state provides. There are multiple acceptable methods to calculate positivity with varying utility, and each method tells us different things.1 Our design principles were simple: unit consistency, use of specimens as opposed to people tested, and near equivalency of encounters and specimens.
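The fallback logic such a hierarchy describes can be sketched in a few lines. This is an illustrative sketch, not the CRC's actual implementation; the method ordering and field names are assumptions based on the design principles above (unit-consistent methods first, mixed-unit fallbacks last).

```python
# Hypothetical positivity calculation hierarchy: given the testing fields a
# state reports, use the highest-priority method whose inputs are available.
# Method and field names are illustrative, not the CRC's actual schema.
from typing import Optional

# (numerator field, denominator field), in priority order: unit-consistent
# methods first, mixed-unit fallbacks last
METHODS = [
    ("positive_specimens", "total_specimens"),
    ("positive_people", "people_tested"),
    ("positive_encounters", "total_encounters"),
    ("cases", "total_specimens"),
    ("cases", "total_encounters"),
]

def positivity(state_data: dict) -> Optional[float]:
    """Return percent positivity using the first method the data supports."""
    for num_field, den_field in METHODS:
        num = state_data.get(num_field)
        den = state_data.get(den_field)
        if num is not None and den:  # denominator must be present and nonzero
            return 100.0 * num / den
    return None  # insufficient data for any method

# A state reporting specimen-level counts uses the first (best) method
print(positivity({"positive_specimens": 1200, "total_specimens": 20000}))  # 6.0
```

A state that reports none of these fields falls through every method and returns no positivity at all, mirroring the states and territories discussed below for which no calculation is possible.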

[Figure: Positivity Calculation Hierarchy iconography]

Consistency between the units of the numerator and denominator of a proportion is standard practice across fields because the result is a unitless percentage that, under ideal conditions, can be applied to a wider population. For example, when comparing colleges by graduation rate, you examine the percentage of enrolled students who graduate at each institution. The percentage of all graduating students relative to the number of STEM majors is far less informative for decision-making. In the same vein, we want to compare “specimens” to “specimens” and “people tested” to “people tested” whenever possible; these calculations are used in the first three methods of the new hierarchy.

The CRC’s initial positivity calculations took a people-centered approach, but a specimen-centered approach is more informative at this later stage of the pandemic. Many people have been tested more than once over the past two years, and there are even some cases of reinfection,2 meaning “unique people tested” is a less useful metric. Finally, we believe that “specimens” and “encounters” are similar enough to use encounters as the denominator when specimens are not available. “Specimens” usually refer to individual tests performed whereas “encounters” often involve some level of deduplication to remove people who were tested multiple times in a short window, essentially preventing double-counting of test results for one person. When counting “encounters” instead of “specimens,” we miss some tests and therefore provide an artificially higher positivity. This occurs only in cases where a person has been tested repeatedly within the time period of deduplication utilized by a state. Fortunately, repeated testing within a short time span is uncommon.
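The effect of deduplication on positivity can be shown with a toy example. The records below are invented for illustration, and the one-day window is one possible deduplication convention; a person retested within the window collapses to a single encounter, so a dropped negative retest nudges positivity upward.

```python
from datetime import date

# Raw specimen-level records: (person_id, test_date, is_positive).
# Data invented for illustration.
specimens = [
    ("a", date(2021, 12, 1), False),
    ("a", date(2021, 12, 1), False),  # same-day retest, removed by dedup
    ("b", date(2021, 12, 1), True),
    ("c", date(2021, 12, 2), False),
]

# Specimen-based positivity: every test counts
spec_pos = 100 * sum(pos for _, _, pos in specimens) / len(specimens)

# Encounter-based positivity: keep one record per (person, day), mimicking
# a one-day deduplication window (the dict keeps one value per key)
encounters = {(pid, day): pos for pid, day, pos in specimens}
enc_pos = 100 * sum(encounters.values()) / len(encounters)

print(spec_pos, enc_pos)  # 25.0 vs ~33.3: dedup dropped a negative retest
```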

The CRC’s new calculations have resulted in multiple states displaying significantly different COVID-19 positivity rates on the website. As stewards of good data, we reached out to each state health department that would experience a change in positivity on the CRC site due to these updates. Encouragingly, some states were eager to discuss how we calculated positivity, sharing new data to improve the calculations and prompting internal discussions on how to address positivity moving forward. Unfortunately, even though we now provide five distinct methods of calculating positivity, six states and territories still do not provide sufficient data for any of our calculation methods (shown below).

[Figure: State positivity under the new calculation hierarchy]

These positivity problems exist because federal and international institutions have not mandated a standard for COVID-19 positivity data. Pandemic pressures have yet to motivate transformational improvements in the way we collect, organize, and report data across the United States.

Our work over the past 22 months has revealed pervasive problems with positivity calculations that local, state, and federal authorities need to address. They are:

Lack of a Universal Dataset

The CDC maintains the central repository of national public health data, but its collection is incomplete because states are not required to submit their data. In addition, the CDC does not impose one standard method for calculating positivity. Absent a universal dataset, we have no way of comparing states. Despite our revamping of positivity on the Coronavirus Resource Center, there is not a single calculation method for which all states provide the appropriate data. The lack of complete federal data can also lead to confusion among citizens, localities, states, and the federal government.3

Inconsistent Naming and Changing Definitions

When is a COVID case a COVID case? Does a person have to have symptoms? Do they need to be tested once they are already hospitalized? To determine and report a confirmed positive case, we need to know what a positive case really is. That, too, varies from state to state and has changed throughout the pandemic. The same issue has recurred with breakthrough cases, reinfections, and asymptomatic cases. Identifying the difference between “specimens” and “encounters” in each state has been a source of confusion even for those of us at the CRC who wade into this data every day. In general, we use the definitions originally set by the COVID Tracking Project: “specimens” refers to the total samples collected, and “encounters” refers to the number of people tested in one reporting period, regardless of how often they were tested during that time.4 The conversion from “specimens” to “encounters” happens through deduplication, to which the COVID Tracking Project originally assigned a standard time frame of one day. But the deduplication window is defined differently across states. We need a standard data dictionary from which all states can work.

Poor Data Granularity

Case-level granularity with accurate timestamps and locations is ideal for positivity measurement. The goal of calculating positivity is to understand when, where, and if there is an outbreak so that authorities can address it, people can modify their behavior, and resources can be allocated. If states simply provide a raw number of positive tests and the number of tests administered for the past week or two, trends cannot be identified until it is too late to do anything. Additionally, some counties may be experiencing an outbreak while the rest of the state is not. Without geographic granularity, COVID-19 hotspots can be washed out by the low positivity of the rest of the state. These nuances affect many lives, but can only be detected with complete, granular testing data.
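A toy calculation illustrates how statewide aggregation can wash out a county-level hotspot. The counties and numbers below are invented for illustration.

```python
# Per-county vs statewide positivity over the same 7-day window.
# Records are (county, positive tests, total tests); data invented.
county_data = [
    ("Alpha", 900, 3000),    # 30.0%, a local outbreak
    ("Beta", 100, 10000),    # 1.0%
    ("Gamma", 150, 12000),   # 1.25%
]

total_pos = sum(p for _, p, _ in county_data)
total_tests = sum(t for _, _, t in county_data)
statewide = 100 * total_pos / total_tests

print(f"statewide: {statewide:.1f}%")  # 4.6%: the Alpha hotspot is washed out
for county, pos, tests in county_data:
    print(f"{county}: {100 * pos / tests:.2f}%")
```

A 4.6% statewide figure looks unremarkable even though one county is testing positive at 30%, which is exactly the signal that geographic granularity preserves.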

We are hopeful that our new positivity calculation hierarchy will bring some of these issues into the public dialogue and enable more detailed, robust analysis of COVID-19 trends. We are encouraged by states that have attempted to improve their data reporting in response to this change at the CRC and hope more will follow. Our goal is to continue providing the best data and informative analysis to help individuals and policymakers make ever-important decisions about health and safety as the pandemic continues.

For more detailed information on the actual calculations, please see the explainer page and explore all of the positivity calculation options in the updated Testing Trends Tool.


References
1. Calculating SARS-CoV-2 Laboratory Test Percent Positivity: CDC Methods and Considerations for Comparisons and Interpretation, 24 May 2021. https://www.cdc.gov/coronavirus/2019-ncov/lab/resources/calculating-percent-positivity.html. (Accessed 01 August 2021).
2. L.J. Abu-Raddad and R. Bertollini, Severity of SARS-CoV-2 Reinfections as Compared with Primary Infections, The New England Journal of Medicine (24 Nov 2021).
3. J. Musgrave, Florida accuses CDC of inflating COVID numbers in apparent CDC mistake, 10 August 2021. https://www.palmbeachpost.com/story/news/coronavirus/2021/08/10/florida-accuses-cdc-inflating-covid-numbers-cdc-changes-tally/5558411001/. (Accessed 11 August 2021).
4. The COVID Tracking Project, Data Definitions, The Atlantic, 07 March 2021. https://covidtracking.com/about-data/data-definitions. (Accessed 08 December 2021).


Beth Blauer is the Associate Vice Provost for Public Sector Innovation and Executive Director of the Centers for Civic Impact at Johns Hopkins. Blauer and her team transform raw COVID-19 data into clear and compelling visualizations that help policymakers and the public understand the pandemic and make evidence-based decisions about health and safety.