The pandemic disease Covid-19 does much more to the human body than a typical respiratory virus. In addition to neurological problems ranging from a loss of sense of smell to outright seizures, surprising gastrointestinal symptoms and kidney damage, and a potentially fatal haywire immune response, the disease also messes with a person’s blood. The sickest people start forming clots, potentially leading to stroke, heart attack, lung damage … it’s a mess. Physicians started noticing all this early in the pandemic, of course. The question was—and remains—what to do about it all.
“So, someone comes into the hospital and needs a blood-thinning medication to keep them from clotting,” says Tracy Wang, a cardiologist who specializes in that problem—it’s called anticoagulation—at Duke Clinical Research Institute. But which patients would benefit the most? Which drug should they get? How much? When? Figuring out that kind of thing is the foundational behind-the-scenes work of medicine, where clinical trials of protocols and medications connect with on-the-ground clinical work. Except, when it came to anticoagulants and Covid-19, the research hadn’t happened yet. “Each hospital tried to develop their own protocol,” Wang says. “Could we have joined up hospital networks and developed a coordinated anticoagulant regimen? Or, if we can’t agree, develop two or three regimens and compare those?”
Yes. Well, they could have. Researchers across multiple medical centers could have done just what Wang proposes here, recruited thousands of patient-volunteers and then randomly assigned them to get either a treatment or not (that’d be a control group), with neither scientists nor participants knowing who was getting what until they looked at the final data. That’s called a double-blind, randomized, controlled clinical trial—an RCT. Not to be too blunt about it, but instead of dithering about the evidence, they could have just counted how many people recovered and how many people died. That’s using mortality as an endpoint, in the lingo of the field. And then they’d know what works. “But we didn’t do that. Instead, every hospital just launched its own variation on things. And at the end of the day, we couldn’t study this in a multi-center, large-scale fashion,” Wang says. “We couldn’t even agree on how to measure the outcome.”
Things might get better; better studies of anticoagulants are in the works. But for now, eight months into a global pandemic, physicians and scientists still don’t know a hell of a lot about how to fight it. The story of anticoagulants is also the story of hydroxychloroquine, which took months to knock down as a preventative or a treatment, and convalescent plasma taken from the blood of recovered patients, which—while promising—has been the subject of a fragmented, delayed research effort. Doctors know that the steroid dexamethasone gets people out of the hospital quicker. They know that keeping really sick people on their fronts instead of their backs helps. They know that a couple other familiar antiviral drugs don’t work. And despite hundreds of millions of dollars in research money, tens of thousands of volunteer subjects, and the diligent spadework of thousands of researchers, that’s basically it. With a couple of important exceptions, we’re all still kind of clueless.
Which leads to an important question: What the hell? It turns out that while scientists have been working crazy hard to figure all this out—to evaluate old drugs, test protocols, find new approaches—from a statistical and evidentiary perspective, they’ve mostly been spinning their wheels. The large-scale trial that has arguably done the most important work, the United Kingdom-based Randomised Evaluation of Covid-19 Therapy (Recovery) trial, spun up in part via the authority and infrastructure of the UK’s National Health Service. In the US, a lack of central planning, methodological obstacles, and professional pressures meant that since the pandemic began, everyone raced off at top speed, but in different directions, producing incompatible, unusable, or incoherent results—if they got results at all.
An examination published in July in the journal JAMA Internal Medicine bears this out. Of 1,551 Covid-19 studies entered into the US registry ClinicalTrials.gov between March and May, the study found, just over half were randomized clinical trials, and only 10 percent of those were double-blinded and had more than 100 participants. A fifth of the RCTs were on hydroxychloroquine or its cousin chloroquine, a dead end. The vast majority of the trials in the registry—76 percent—involved just a single hospital. Only a third used mortality as an endpoint. And only 13 percent of the observational studies were prospective, meaning they followed patients forward in time rather than merely reviewing what had already happened—a significant weakness.
These results extend an analysis by Stat in July, which showed that of the studies begun or planned in ClinicalTrials.gov since January, one in six was on the chloroquine family. That meant that of the 685,000 patient volunteers anyone planned to enroll in any study of Covid-19, 237,000—more than one in three—were claimed by studies of those drugs, leaving them unavailable for anything else. Overall, less than 40 percent of all those trials would have the statistical power to conclude anything meaningful.
None of this was bad science, exactly. It’s all in bounds, nothing unethical or methodologically unsound. The observational studies, even the retrospective ones, were critical in drawing the outlines of the pandemic and its manifestations. The problem is, taken in aggregate, most of that science didn’t change policy. It didn’t save any lives. “We had this massive activation of clinical research worldwide,” says Mintu Turakhia, a cardiologist at Stanford University and the lead author of the JAMA Internal Medicine paper. “That’s all great. But the problem is, we didn’t really have a strong sense of what kind of evidence to expect and where we will be after the first wave of research.”
The result of the non-results? “There aren’t that many studies that are going to move the needle in terms of generating evidence,” Turakhia says. “You can comment on the public health response, but the scientific response lagged also. Especially in the US, we just haven’t activated the machinery.”
Part of what went wrong seems to have been a strategic misstep. The science funding agencies of the US, like the National Institutes of Health, have announced efforts to set up the kind of large-scale, multi-arm “adaptive” trials that most researchers agree are the way to get big, world-denting results. But so far the US iteration, the Adaptive Covid-19 Treatment trial, has only conclusively shown that the (expensive) drug remdesivir, made by the pharmaceutical company Gilead, can reduce the time Covid-19 patients spend in the hospital. That’s a perfectly fine endpoint—no shenanigans involved—but it’s not mortality. Meanwhile the UK’s Recovery trial has shown that the steroid drug dexamethasone saves lives, and that the AIDS drugs lopinavir and ritonavir and the antimalarial and autoimmune drug hydroxychloroquine do not. From the early days of the pandemic, all of those drugs had looked like they might help; they’re all relatively cheap; and those findings changed the global standard of care.
US agencies and research centers may yet mount more coordinated efforts, but, like, tick-tock, y’all. In the US, political actors like the president talked up hydroxychloroquine before a central effort could take off, bollixing research already in progress. The same thing seems to be happening with convalescent plasma—already another arm of the Recovery trial, not for nothin’. “I think the clinical trial enterprise, certainly in the US, was not designed for speed and for a pandemic,” says Walid Gellad, director of the Center for Pharmaceutical Policy and Prescribing at the University of Pittsburgh. “We’re seeing the results of that, basically, in what we see now.”
Another problem has been the internecine push-and-pull among hospitals and individual researchers. They’re all frenemies, all chasing the goal of helping people, but also chasing publication in big journals in pursuit of tenure and grants. That’s not necessarily bad—if the energy gets directed. “The lowest-lift study you can do as a clinician scientist is to write up the cases that come through your center. It’s not that hard to do, and it’s a low lift. But if you want impact, you’ve got to get over that,” Turakhia says. “We have to get away from academic opportunism, just so you have a paper, and figure out how to get together and work collaboratively.”
That opportunism isn’t just ambition. It actually risks disrespecting (if not outright harming) patients. “When we do clinical research, it isn’t just a researcher saying, ‘Here’s a good idea, let’s go do it.’ Research is a high-stakes endeavor for all of us. Our patients are volunteering, in most cases, to be parts of these studies, contributing data and their bodies to help us advance knowledge. There’s a cost to doing research,” says Wang, who co-wrote a commentary that ran alongside the JAMA Internal Medicine article. “Wouldn’t it seem possible, especially in this age of communication and technology, to be more efficient early on?”
Gellad takes an even harder line. “Every little group was doing its own trial rather than having an organized, central effort to say, ‘These are the most important central efforts. These are the trials we’re going to do,’” he says.
Blame the system, if you want. Big therapeutic trials are expensive, so only pharmaceutical companies and governments tend to have the bank accounts to pull them off. A whole grab bag of potential funders, from the NIH to the Gates Foundation and on and on, pulls researchers in many directions. A lack of central patient data means that even when hospital systems and researchers want to collaborate, it’s hard for them to talk to each other, digitally speaking. The mechanisms for protecting patients’ rights and keeping them safe during research trials are scattered and independent; no one is suggesting eliminating the institutional review boards at individual hospitals and research centers, but a big study protocol might have to deal with dozens of them, each one with veto power. And in the end, as the reporter Susan Dominus shows in a recent article in The New York Times Magazine, hospitalists and clinicians might feel that their duty to patients means they should try anything and everything to save their lives, rather than enroll them in studies that might randomize them to the control group (even though the study might eventually save more lives overall).
These problems have always challenged drug trials and the people who mount them. As with so many system failures, the pandemic has only made the issue worse. “There is no doubt we lack any sort of organized and systematic approach to testing therapeutic ideas,” says Peter Bach, director of the Center for Health Policy and Outcomes and the Drug Pricing Lab at Memorial Sloan Kettering Cancer Center. Bach says that small trials that risk false positive results, studies that use squishy outcomes instead of mortality, and all the other weaknesses that lead to biased results and lack of generalizability are obviously bad, “but I don’t know what to say other than it is always like this, really.”
Exposing these problems might provide the incentive and ideas to fix them. Turakhia thinks a solution—maybe for the next pandemic—would be a whole network of centers ready to mount clinical trials at a moment’s notice. Just fill in the nouns on the paperwork. “We need a bunch of sites that are a priori ready to go. ‘We’ve signed off, the IRBs have a fast-track mechanism,’” he says. “You just need the right infrastructure and the buy-in and commitment to the vision. The operational aspects, the approvals, and all that—you can get all of that ready in advance.”