The COVID-19 pandemic has required medical researchers to reinvent the structure of research to enable rapid data collection amid a near-constant stream of updated information. Medical staff also had to combat anecdotal evidence and misinformation arising from the very lack of reliable data they themselves were working to rectify.
No one was prepared for the COVID-19 pandemic. With the benefit of hindsight, Dr. Jason Farley, a professor of nursing and an infectious disease-trained nurse epidemiologist, said we should have had a structured system — one that all health systems were prepared to deploy — that pushed data to a central repository capable of providing actionable medical data in real time. Without the benefit of a national pandemic preparedness plan, or even functional and connected public health data systems, the COVID-19 pandemic forced medical providers to abandon established research methodologies in order to collect and analyze data as quickly as possible.
Early in the pandemic we were searching for data elements in any place we could find them to guide clinical practice, management strategies on inpatient wards, and treatment approaches. People were publishing data points and data elements that were refuted in subsequent publications. We were getting fast and furious case reports, which gave us anecdotal information but weren't able to provide the kind of conclusive evidence we were used to, like clinical practice guidelines.
Of course, people were grasping at straws. As a result of the uncertainty and how little was known, anecdotes circulated about certain drugs, vitamins, even horse dewormers. All of these pieces of data were refuted over time, but people were looking for anything that offered a glimmer of hope. Looking back almost two years later, we can see how desperate people were to get an answer, and how that desperation led to anecdotal stories becoming national points of debate. We are now seeing the churn of new evidence being put into evidence-based practice guidelines, and that's one of the most important evolutions we've had in terms of patient management.
One strategy that we've used in HIV for a long time is to create clinical cohorts. The clinic itself is designed to enroll patients and follow them longitudinally in clinical cohort studies, and such cohorts are engaged in a variety of ways in the HIV space. Patients with long COVID need to be tracked and followed through similar longitudinal cohort evaluations. We know many of the implications of viral infections and their relationship to cancer and other comorbidities, but with COVID-19 that's yet to be discovered. That's not to say such a relationship exists, but until we follow patients longitudinally we won't know, and finding out is one of the functions of C-FORWARD.
COVID-related research turned our research enterprise completely on its head. We literally reinvented the research infrastructure with things like outdoor clinical spaces: tents went up across the nation to serve as outdoor drive-thru testing and research locations. When we think about COVID-19 data, we first have to think about the massive amount of infrastructure development we needed in order to see patients at risk of or living with COVID-19. Second, we need to think about the elements that come with seeing people outside: wind and rain in the summer, snow and ice in the winter, and how we keep patients comfortable through all of it. We've got scheduled visits, and the elements play a key role in whether or not those visits can take place.
The data collection for C-FORWARD, a comparative effectiveness trial for COVID-19 testing, has been exceptionally challenging from day one: setting up the research sites, processes, and different types of tests we had available, and then keeping people engaged over the long term of the study. This study has now been going on for almost a year, and we've seen multiple peaks and valleys in people's interest in COVID-19 testing. As a result, the number of people who agreed to participate in the study fluctuated.
Normally when you're doing research, you have a standard, validated set of questions, questionnaires, and scales. None of that existed for COVID-19, so as we set up infrastructure for this study, we were also developing and designing questionnaires around rapidly changing data. We are asking patients to self-report on a variety of social, political, and contextual factors associated with COVID-19, including lost wages and jobs, compliance with public health measures, political affiliation, and much more. We're looking at a huge amount of data to see how COVID-19 not only impacted patients when they enrolled, but how it is impacting them as they move forward over the 12 months of the study.
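The longitudinal design described here — repeated self-report measures tied back to a participant's enrollment — can be sketched in a few lines of code. This is a minimal illustration only; the field names and scales below are hypothetical and are not C-FORWARD's actual questionnaire schema.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class SurveyResponse:
    """One self-report record in a longitudinal cohort (hypothetical fields)."""
    participant_id: str
    month: int            # months since enrollment, 0 through 12
    lost_wages: bool      # self-reported income loss this period
    mask_compliance: int  # illustrative 0-4 Likert-style scale

def responses_by_participant(responses):
    """Group records so each participant's trajectory can be analyzed over follow-up."""
    grouped = defaultdict(list)
    for r in responses:
        grouped[r.participant_id].append(r)
    for records in grouped.values():
        records.sort(key=lambda r: r.month)  # order each trajectory by time
    return dict(grouped)

# Example: one participant with enrollment (month 0) and a 6-month follow-up,
# and a second participant with only a baseline record so far.
data = [
    SurveyResponse("p01", 6, lost_wages=False, mask_compliance=3),
    SurveyResponse("p01", 0, lost_wages=True, mask_compliance=4),
    SurveyResponse("p02", 0, lost_wages=False, mask_compliance=2),
]
cohort = responses_by_participant(data)
print(len(cohort["p01"]))  # p01 has two time points
```

Grouping and time-ordering records per participant like this is what lets an analysis compare a person's answers at enrollment against their later follow-ups, rather than treating every survey as an independent data point.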
Our doctoral students are conducting a variety of sub-studies. I have one student who is interested in understanding how other medical comorbidities, like diabetes, are influenced by lockdowns and cancelled appointments. Another is looking at cardiac and inflammatory markers associated with COVID-19 in patients with and without a history of the disease. Others are looking at how to differentiate antibody levels from vaccination versus natural infection. There is a lot of great science in action.
First and foremost, if you look at who produces the data we're using right now for COVID-19, it's Johns Hopkins University, not the public health department or the public health system. That data might be extracted from health departments, but the global network of data points was established by talented people at Johns Hopkins. The model used by agencies like the CDC and local health departments is antiquated, and public health infrastructure across the United States, in state and local health departments alike, has been dwindling.
Garbage in equals garbage out, and one of the main reasons we sometimes get garbage data at the local level is that there's no staffing infrastructure to facilitate appropriate data collection. We need more staffing, and we need to think smarter about the electronic means by which we organize both the workload and the data it produces, so we can generate more actionable and usable information in real time. That requires investments in resources, staffing, infrastructure, and training.