This transcript has been edited for clarity.
Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.
As surgeries go, cesarean sections are pretty painful.
And treatment of postoperative C-section pain is tricky. Your pharmacopeia is limited as the new mother may want to be relatively alert to engage in childcare, and you need to worry about the pharmacokinetics of medications getting into breast milk and the new baby.
Nondrug methods of post–C-section pain control are thus sorely needed, which prompted researchers from the Medical University of Greifswald in Germany to conduct this study, a randomized trial which suggests that acupuncture is effective in treating postoperative pain among women who just had a C-section.
Full disclosure: I’ve been in touch with the lead author of this study, as I have concerns about some of the statistics. I’ll get to that in a moment, but first let me give you the run-down as reported.
Acupuncture, as it is traditionally understood, is the process of inserting needles in specific parts of the body in order to manipulate the flow of energy fields and promote health. But this study eschews the mystical aspects of the practice. This paper has no mention of qi, meridians, or traditional Chinese medicine energy points.
No, this paper hangs its biologic plausibility hat on the idea that stimulation of the vagus nerve can mediate pain relief through central processes.
It’s not crazy. There is some evidence that stimulation of receptors in one part of the body might attenuate pain in other parts of the body. This might be part of the mechanism of how capsaicin, when applied topically, can relieve pain.
So, by framing acupuncture in a way that is more consistent with our mechanistic understanding of the universe, can we give the technique a fair shake? Here’s the setup.
One hundred twenty women — mean age 31, all white — who were about to undergo an elective C-section were randomized to acupuncture or placebo acupuncture.
Okay, this is critical: Randomizing people to acupuncture vs usual care is a real problem, since it is obvious to them that they are getting acupuncture, which can have a strong placebo effect given the sort of mystical cultural associations the practice has.
Most good randomized trials of acupuncture compare “real” acupuncture with “sham” acupuncture. In these designs, needles are placed in the body regardless — but in the real acupuncture arm they go in those traditional energy spots, and in the sham group they go in other spots. Meta-analyses of acupuncture trials that include a sham of this type tend to conclude the same thing: Real acupuncture is better than usual care, but sham acupuncture is just as good. In other words, it’s not where you stick the needles; it’s just sticking needles, or the whole experience surrounding the sticking of needles.
But this trial didn’t take that approach. Rather, in both arms, needles were put in the same places. But in the placebo group, the needles didn’t actually penetrate the skin.
The procedure involved putting four tiny needles in both ears. Women in the placebo group got a simulated prick from a sharp probe and a similar bandage, but nothing was left in the skin.
Women also got needles or placebo needles placed at six points on the body.
The primary outcome, as described in the paper, is “pain intensity on movement” on postoperative day 1. It is a decent outcome, as getting people moving postoperatively is so critical, though there doesn’t appear to be a standard “movement” used to elicit that pain. I will note that in the trial’s ClinicalTrials.gov entry, the primary outcome is described as “[p]ain intensity as measured by numeric rating scale 1-10”, without mention of movement.
And when you look at the pain scores, that outcome — pain with movement — seems quite well-chosen. There was no difference in maximum pain level or minimum pain level between the groups. Nor was there any difference in pain on discharge or satisfaction with pain control.
There was no difference in the percentage of women who noted that pain disturbed their sleep, or their mood, or their movement.
Secondary outcomes looked at drug dosing as a proxy for breakthrough pain, and there was no difference in acetaminophen or diclofenac dosing.
So from a pain standpoint, we have one positive outcome among many, but it happened to be the primary outcome (albeit not clearly prespecified). Is a mean intensity of 4.7 in the real acupuncture group vs 6.0 in the placebo acupuncture group clinically meaningful? Ideally, people with well-controlled pain are going to give you scores below 4.
Of course, pain is highly subjective, and thus highly susceptible to placebo effects. What is somewhat less subjective is mobilization — getting up and out of bed into a chair or standing. Early mobilization after C-section is a goal for many women, and for most surgeons too. And here, there really does seem to be a difference between the real and placebo acupuncture groups. (The third group you see is a noncontemporaneous usual care group.)
So up until this point, I felt like this study showed some modest effects, and the lack of any magical thinking was refreshing. But then I came across something that, frankly, got me worried. And it surprised me that the editors over at JAMA Network Open didn’t catch it.
A critical component of any acupuncture vs placebo acupuncture study is an assessment of how well the placebo worked — an assessment of blinding. If individuals knew which arm of the study they were in, or even suspected it, it could dramatically alter the results — both through direct placebo effects and even a desire to prove that the therapy actually works.
Fortunately, the authors asked the women which group they thought they were in. Twenty-five out of 58 women in the acupuncture group thought they were in the acupuncture group, and just 11 women out of 55 in the placebo group thought they were in the acupuncture group (some women didn’t answer, I guess).
Now, the authors describe this difference — 43% awareness vs 20% — as not statistically significant, with a P value of .08 according to the Fisher exact test.
Something needled me about that P value though, so I checked their math. Actually, this difference is quite statistically significant. The P value is .009 using the Fisher exact test, not .08. You can check it yourself; multiple online calculators will do this for you.
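You don’t have to take my word for it. Here is a minimal recomputation in Python with SciPy, assuming the 2×2 table implied by the counts above (25 of 58 in the acupuncture group vs 11 of 55 in the placebo group guessing they got real acupuncture):

```python
# Recompute the Fisher exact test for the blinding assessment.
# Rows: real vs placebo acupuncture; columns: guessed "real" vs did not.
from scipy.stats import fisher_exact

table = [
    [25, 58 - 25],  # real acupuncture: guessed real / did not
    [11, 55 - 11],  # placebo acupuncture: guessed real / did not
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided P = {p_value:.3f}")
```

The two-sided P value lands around .009 — consistent with my calculation, and nowhere near the .08 reported in the paper.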
So either the P value they report is wrong, or the number of women who knew they got real acupuncture was wrong. But something here doesn’t add up, and when you are conducting a trial in a space that has been so inundated with shady science over the years, you really have to check your work.
As I mentioned, I reached out to the lead author Dr Taras Usichenko to ask about the discrepancy. He said he wasn’t able to reach his statistician before my deadline, but even if there was significantly more awareness in the true acupuncture group, it wouldn’t change his opinion of the results. He wrote:
“You may be right in thinking that the distribution indicated a degree of unblinding; however, this assessment was only performed at the end of the study period when participants had benefited from the intervention.”
He goes on to write that this may reflect that women realized they were getting real acupuncture because acupuncture was effective, as opposed to the idea that it was observed to be effective because women knew they were getting real acupuncture.
Of course, if that’s the case, why assess for adequacy of blinding anyway?
Dr Usichenko also said he will ask JAMA Network Open to publish a correction if his team determines there is an error in that table.
I think the best course here would be for Dr Usichenko’s team to release a deidentified analytic dataset so that independent statisticians could attempt to replicate the results. Transparency is always a good thing.
So what do we do with this paper? We’ve got an important scientific question: how to treat pain post–C-section. We have an intervention that might be modestly effective, especially in terms of mobilization postoperatively. But we also have some red flags, like the weird specification of the primary outcome and what I suspect is a pretty significant failure of blinding.
In other words, this study supports the idea that acupuncture may just be an elaborate placebo.
Dr Usichenko suggested as much in his email to me, writing:
“So even if the mechanism of action is entirely expectation…it is still of potential interest considering the improved outcomes.”
If it is safe and helps, why not use it — even if we’re just exploiting the placebo effect? I mean, if a given patient asks for it, I suppose it’s fine, but remember that one of the ways science benefits humankind is by advancing our understanding. If we close our eyes and pretend acupuncture works in a way that it does not work, future studies will be following the wrong scientific path: testing hypotheses that are doomed to failure. If it’s a placebo, fine — let’s figure out exactly how placebos work and exploit that mechanism for pain control.
I look forward to further updates on this study as the authors investigate the statistical anomalies.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson and hosts a repository of his communication work at www.methodsman.com.