Vital Statistics Consulting

Case Study: Nurse-Led QI Evaluation Project

Insight from VSC President Dr. Bill Gallo on Our Analytic Approach

When I was a doctoral student in economics, one of my econometrics professors offered the following advice: “You don’t shoot a mouse with an elephant gun.” What he meant was that most of the time, a simple quantitative problem should be addressed with a correspondingly simple method. Sometimes, however, seemingly simple problems are not really so.

Recently, we were asked to evaluate a nurse-led quality-improvement (QI) intervention at a community hospital in a southeastern US state. The project involved educating nurses on neonatal abstinence syndrome (NAS), the drug withdrawal experienced by infants born to mothers who took opioids during pregnancy.

NAS typically presents with a cluster of symptoms that may not be attributed to opioid exposure unless clinicians are trained to recognize them. Our job was to evaluate whether the educational intervention increased recognition of NAS and its severity, factors that guide treatment. The task seemed straightforward enough…simple, even. Yet when we looked over the survey and related instruments, we were struck by a wrinkle that we don’t often see. Although the design was a non-experimental pre-post design, meaning the same participants would be assessed on their NAS knowledge before and after the educational intervention, the test’s questions and answer options would differ at the two time points. This made it impossible to compare raw scores on the pre-test with raw scores on the post-test: both sets of scores measured NAS knowledge, but in different ways.

So, what did we do? Well, in a sense, we pulled out the elephant gun, or at least the logical equivalent of it. We noticed that both assessments had correct answers to their questions, even though the sets of responses were on different metrics (i.e., they didn’t have the same number of potential responses). We reasoned that if we first “standardized” the responses, that would place them on the same metric. Next, we compared those standardized responses to the correct ones (also standardized) and calculated each participant’s deviation from the correct score. At this point, we were almost home. All we had to do was compare the average deviation at the two time points, using fairly simple statistical methods, to see whether participants’ scores clustered more closely around the correct scores on the post-test than on the pre-test, which would indicate the intervention “worked.” (We ultimately added some control variables, but that’s not central to the story.)
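To make the idea concrete, here is a minimal sketch in Python using simulated, hypothetical data; the item counts, response scales, sample size, and z-score standardization are all assumptions for illustration, not details of the actual study. Each item's responses and its answer key are z-scored so the two instruments share a metric, each participant's mean squared deviation from the standardized key is computed, and a paired t-test compares average deviations at the two time points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 30 nurses; pre-test has 4 items scored 1-5,
# post-test has 4 items scored 1-7 (different metrics, as in the study).
n = 30
pre_key = np.array([4, 2, 5, 3])    # assumed correct answers, pre-test
post_key = np.array([6, 3, 7, 2])   # assumed correct answers, post-test

pre = rng.integers(1, 6, size=(n, 4))                               # guessing-like responses
post = np.clip(post_key + rng.integers(-1, 2, size=(n, 4)), 1, 7)   # responses near the key

def standardize(responses, key):
    """Z-score each item's responses and place its key on the same scale."""
    mu = responses.mean(axis=0)
    sd = responses.std(axis=0, ddof=1)
    return (responses - mu) / sd, (key - mu) / sd

pre_z, pre_key_z = standardize(pre, pre_key)
post_z, post_key_z = standardize(post, post_key)

# Each participant's mean squared deviation from the correct (standardized) score
pre_dev = ((pre_z - pre_key_z) ** 2).mean(axis=1)
post_dev = ((post_z - post_key_z) ** 2).mean(axis=1)

# Paired t-test on the deviations: do scores cluster closer to correct post-test?
diff = pre_dev - post_dev
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))

print(f"mean pre deviation  = {pre_dev.mean():.2f}")
print(f"mean post deviation = {post_dev.mean():.2f}")
print(f"paired t = {t:.2f}")
```

In this simulation the post-test responses are generated close to the key, so the mean deviation shrinks and the paired t-statistic is positive; with real data, the same comparison (plus control variables in a regression) tells you whether the intervention moved participants toward the correct answers.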

The moral of the story: although the elephant gun was not really necessary, a seemingly simple problem sometimes requires a fairly complex approach.