Alice Dreger: FAQ on the SUPPORT study controversy

Published by Alice Dreger on her blog: http://www.alicedreger.com/support.html

 


UPDATES:

  1. Read our letter to the NEJM backing the OHRP findings here.

  2. Read my expanded testimony with Susan Reverby by clicking here: Dreger Reverby HHS 2013.pdf.

  3. Read my report of the August 28 HHS meeting here.

The SUPPORT study is making the news (as it should be) and causing heated debate within academic medicine and medical ethics, so I’m providing these FAQs to help people just coming to the story understand the lay of the land. If you have a question you’d like answered, please contact me. These FAQs are solely by me and, where they include opinions, the opinions are my own.

What was SUPPORT, and what’s the controversy?

The SUPPORT study was a large randomized controlled trial (RCT) that ran from 2004 to 2009 and was aimed at finding out how to improve treatment of babies born prematurely. Twenty-three institutions participated in this NIH-funded trial of about 1,300 preemies. The New England Journal of Medicine published the science coming out of the study, and the results have likely already changed clinical practice, probably for the better.

Yet the federal Office for Human Research Protections (OHRP) determined in March 2013 “that the informed consent document for this trial failed to adequately inform parents” of what risks their babies might face via study enrollment. The OHRP found that parents weren’t informed how enrollment in the study might increase babies’ risk of death and blindness—risks that, before the study, researchers could have reasonably guessed might increase through enrollment.

So it was a bad trial to do?

Actually, it was a really important trial to do. We badly need studies like SUPPORT. The hard truth is that—even in areas as critical as neonatal intensive care—too many medical interventions are still based on thin science. That’s not because clinicians like it that way; it’s because good clinical science is hard to do. Studies like SUPPORT offer us the hope of finding out—by randomizing subjects and controlling for variables—what really helps or harms.

Before SUPPORT, clinicians had reason to believe that too low or too high a level of oxygen in a baby’s blood might increase the risk of death, neurological impairment, or blindness. Many clinicians believed that somewhere in the 85-95% oxygen saturation range would be ideal, but the data needed to be pinned down.

What are the relevant details of the trial?

In the part of the study at issue, parents of babies about to be born prematurely were asked to enroll their preemies in a study that included being randomly assigned to one of two study arms. Ordinarily a preemie in the NICU will have oxygen levels managed according to a local clinical protocol, which, it appears, typically involves levels somewhere between 85% and 95% saturation (SpO2). In the study, babies were randomly assigned to be held at either the 85-89% level or the 91-95% level.

So, again, the SUPPORT study randomized enrolled babies into two trial arms. In one, neonatal intensive care unit (NICU) staff tried to keep the babies’ oxygen saturation levels at 85-89%. In the other, staff aimed for 91-95%. They did this by watching the babies’ oxygen saturation levels using devices called oximeters. Many NICU preemies wear oximeters, but—in order to improve science by reducing observer bias—SUPPORT babies were assigned modified oximeters.

Half the study babies (the ones to be kept in the 85-89% range) were hooked up to oximeters that read 3 points higher than the real level, and the other half (to be kept in the 91-95% range) had oximeters that read 3 points lower than real. NICU clinicians tried to hold all the study babies in what appeared on the oximeters to be the 88-92% level. Only after the trial’s conclusion would clinicians know which baby in the study had been at which oxygen level.
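To make the masking arithmetic concrete, here is a minimal Python sketch of the offset logic just described. It is illustrative only: the function name and the simplification that one fixed offset applies across the whole range are my assumptions, not details taken from the study protocol.

```python
def displayed_spo2(true_spo2: int, arm: str) -> int:
    """Masked oximeter reading for a study baby (illustrative sketch).

    Low-arm devices read 3 points above the true saturation and
    high-arm devices 3 points below it, so both arms looked alike
    to the clinicians at the bedside.
    """
    if arm == "low":    # true target: 85-89%
        return true_spo2 + 3
    if arm == "high":   # true target: 91-95%
        return true_spo2 - 3
    raise ValueError("arm must be 'low' or 'high'")

# Clinicians held every study baby at an apparent 88-92% on the display.
# Inverting the offset shows what that apparent range meant in reality:
for arm, (true_lo, true_hi) in [("low", (85, 89)), ("high", (91, 95))]:
    print(arm, "arm: true", (true_lo, true_hi), "-> displayed",
          (displayed_spo2(true_lo, arm), displayed_spo2(true_hi, arm)))
# low arm: true (85, 89) -> displayed (88, 92)
# high arm: true (91, 95) -> displayed (88, 92)
```

The design point falls out directly: a single apparent target (88-92%) corresponds to two different true ranges, which is what kept bedside clinicians blinded to arm assignment.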

So the study babies had different oximeters than the other babies in the NICU?

Yes. The oximeters were marked to tell clinicians in the NICU whether the baby was in the study or not. Knowing which babies were in the study, NICU clinicians tried to hold all the study babies in what appeared to be the 88-92% level on the oximeters. They knew this would mean that, in reality, half would be at 85-89% and half at 91-95%, but only after “unblinding” of the study would clinicians know which baby in the study had been in which arm, the lower or upper. Only then would they know with greater certainty which levels lead to which risks and benefits.

The babies not in the study had oximeters that the NICU clinicians knew were reading accurately, and they made clinical decisions as they ordinarily would. So far as we can tell, they did not attempt to hold non-study babies to an apparent 88-92% range. (And even if they did, because those oximeters were not modified for the experiment, the clinical decisions made for these babies would have been based on a true oxygen saturation reading.)

So what’s the problem?

The problem is that the parents were not adequately informed about the purpose, methods, and risks of the trial when they agreed to enroll their babies. The OHRP determination letter, which spells out (only) some of what went wrong with informed consent in SUPPORT, can be read by clicking here.

Some SUPPORT defenders have argued that, “since all the study infants would receive oxygen levels within the prevailing standard of care [85-95%], there was no additional risk to being enrolled in the trial.” In other words, the study represented no special risk. All preemies are liable to risks of death, neurological impairment, and blindness. So why warn specially about these as part of research enrollment?

But this represents a fundamental misunderstanding of how randomized assignment to one of the two experimentally controlled arms could—indeed should, in the best scientific scenario—change a baby’s risks (and benefits).

Preemies are not ordinarily randomly assigned to a controlled 85-89% or 91-95% level, nor are their clinicians ordinarily getting their oxygen levels from oximeters that are altered to read above or below true. The babies in the study were put in this unusual situation specifically to draw good data from their randomized and controlled treatment. (In fact, unblinded results showed that babies in the lower oxygen saturation group experienced relatively higher rates of death, and babies in the upper group experienced relatively higher rates of severe eye damage.)
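A small sketch may help show why the coin flip itself changes a baby’s risk exposure. The names below are mine, and the risk profiles simply encode the relative unblinded results quoted in the parenthetical above; no outcome numbers are invented.

```python
import random

# Relative risk profiles per the unblinded results described above:
# the lower arm saw relatively more deaths, the upper arm relatively
# more severe eye damage.  Labels only; no rates are invented here.
ARM_RISK_PROFILE = {
    "low":  {"death": "relatively higher",
             "severe_eye_damage": "relatively lower"},   # held at 85-89%
    "high": {"death": "relatively lower",
             "severe_eye_damage": "relatively higher"},  # held at 91-95%
}

def enroll() -> str:
    """Enrollment replaces individualized oxygen targeting with a coin
    flip between two fixed, experimentally controlled arms; the flip
    itself determines which risk profile a given baby gets."""
    return random.choice(["low", "high"])

arm = enroll()
print("assigned arm:", arm, "->", ARM_RISK_PROFILE[arm])
```

Under usual care there is no such flip: a clinician adjusts the target for the individual baby anywhere within the local protocol’s range, using an oximeter known to read true.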

The parents should have been told about the purpose of the study, the methods (i.e., the use of false-read oximeters), and the potential for increased risk via study enrollment, including increased risk of death, neurological damage, and severe eye damage including damage that could lead to blindness.

Defenders of the study say that the researchers could not have guessed that there might be an increased risk of death, neurological damage, or severe eye damage caused by enrollment in the study.

Yes, they make this claim by selectively citing the literature pre-dating the study. You can look at what the study documents said, including the study protocol (pdf) and the description given at clinicaltrials.gov, and see that the researchers were tracking death, neurological outcome, and eye damage as potential outcomes. That they were tracking these outcomes is not surprising; the population they were dealing with, very premature babies, is a notoriously fragile group. What is surprising is that they didn’t tell the parents what risks they were tracking.

This kind of inadequate disclosure of studies’ purposes, risks, and methods must happen all the time. Why all the fuss?

We hope that this sort of thing doesn’t happen all the time. Institutional Review Boards (IRBs) are supposed to prevent failures of informed consent. In the case of SUPPORT, IRBs at 23 major medical institutions failed to catch that the consent forms omitted material they should have contained. That’s huge! It suggests that something big went wrong here. It also suggests that the redundancy of having 23 IRBs didn’t fix the problem.

That this involved babies at risk of death — babies who might have died or been impaired by random assignment into one arm of the trial versus another, and whose parents weren’t told that an increased risk of death or impairment might be at stake — well, that makes the situation that much more grave.

Again, what’s really surprising and disturbing is that IRBs at 23 institutions did not catch this problem. This suggests that translation of complex scientific protocols into plain-language consent forms is liable to substantial human error. It also suggests a failure of science education in medicine and medical ethics (see below).

I think perhaps it also suggests a problem within neonatology of paternalism — trying to take on the uncertainty and hard decision-making for parents — that got mapped onto the consent process. I can’t prove that; it’s just a hunch based on what I’ve run into talking to NICU staff about this. (A few have said to me, “but we don’t tell parents all the risks in great detail usually — it is too much to bear at the point of birth.”)

How could the researchers have known that the risk to babies might differ if they were in the study compared to not being in the study?

Researchers were necessarily (and rightly) trying from the very start to push the babies into different risk-benefit categories. That’s how you get good data. So it would have been fair to say on the consent forms that the outcome might be better, but it might also be worse, than if the baby were not in the study. That uncertainty should have been passed on to parents as part of informed consent. The parents should at least have been warned what the SUPPORT Data Safety Monitoring Committee was watching for—including death, neurological impairment, and eye damage.

So being in the study really wasn’t the same as being in “standard of care”?

Right. This is best explained in an excellent essay by Lois Shepherd, JD. I want to add one thing to Shepherd’s essay:

People managing clinical trials worry about what ethicists call “the therapeutic misconception.” This occurs when potential research participants mistake clinical research for therapy. Surprisingly, the consent forms for SUPPORT appear to have actually incorporated the therapeutic misconception—leading parents of preemies to believe that enrollment in this part of SUPPORT would not really differ much from ordinary care.

How might these kinds of mistakes in the consent process be prevented?

So many smart doctors and ethicists missed the boat on this one that I think the problem here may be less about failure to be adequately trained in ethics than in science: at the heart of this matter seems to be a problem of physicians and IRBs mistaking a randomized controlled trial for an observational trial. We are dealing, in this case, with doctors and ethicists of a generation that wasn’t raised on evidence-based medicine. I think they weren’t ready to think about the critical scientific (and thus ethical) differences between an RCT and “standard care.” Hopefully the younger generation won’t be making the same mistakes, but we need to be vigilant about similar problems arising.

I do think that this case shows that, as medicine enters a new age of science, the failure to provide medical students, ethicists, and IRB staff with substantial training in science may well have major ethical ramifications for patients at medical research centers.

What’s the role of the OHRP in preventing mistakes like this?

The OHRP will need to hold people to the regulations if it wants the regulations to be followed. Unfortunately, the OHRP appears to be bowing (pdf) to massive political pressure from researchers and from the bioethicists who choose to defend the importance of research over the rights of human subjects.

I happen to think both can be honored, and abusing the rights of human subjects ultimately harms research.

Anything else?

At this point I’d just like to note that two of the physician-ethicists who signed the NEJM letter complaining about the OHRP’s findings (guys I have generally respected) now admit that the consent process was flawed: here and here. I think as time goes on, more will admit the OHRP findings were correct.

Also, the NIH heads defending the NIH-funded study’s consent process in the NEJM apparently did not declare any possible conflicts of interest. That seems odd.

Further links:

The consent form for the study, sponsored by the NICHD: click here.

Sample consent forms: SUPPORT sample consent forms.pdf

The SUPPORT study protocol, showing what the researchers wanted to know from the study: click here.

The May 8, 2013 report/analysis from Public Citizen, co-authored by Michael Carome, MD, Sidney Wolfe, MD, and Ruth Macklin, PhD (Professor of Bioethics): click here.

Our letter to the NEJM backing the OHRP findings: click here.

My expanded testimony with Susan Reverby: Dreger Reverby HHS 2013.pdf

My report of the August 28 HHS meeting: click here.
