Risks, Resources, Readiness
3 things to consider when adapting or adopting research evidence to/in a particular practice setting according to Stetler (2001).
Check out the 1-minute video summary by DrH at https://www.instagram.com/martyhrn/
What’s the difference between statistical and clinical significance? Here’s a quick, non-exhaustive intro.
In short, statistical significance means the difference in outcomes between an experimental group and a control group is greater than would be expected by chance alone. For example, in a trial of whether gum chewing promoted return of bowel activity among post-op patients, one post-op group would chew gum and the other group would not. Researchers would then statistically compare the timing of return of bowel activity between the two groups to see whether the difference was greater than would occur by chance (p < .05 or p < .01). If the probability (p) value of the statistical test is less than .05, we have reasonably strong evidence that the difference was not due to chance alone, in other words, that gum chewing made the difference. [See an example of such a gum-chewing trial in the free full text of Ledari, Barat, & Delavar (2012).]
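If you'd like to see what such a two-group comparison looks like in practice, here is a minimal sketch in Python. The recovery times are made up and the use of scipy's independent-samples t-test is my own illustration, not the data or analysis from the Ledari, Barat, & Delavar trial.

```python
# Minimal sketch with made-up numbers (not data from the actual trial):
# compare hours until return of bowel activity between a gum-chewing group
# and a control group using an independent-samples t-test.
from scipy import stats

gum_chewing = [20, 22, 19, 25, 21, 18, 23, 20, 24, 22]  # hypothetical hours
control = [28, 30, 26, 32, 29, 27, 31, 25, 33, 30]      # hypothetical hours

t_stat, p_value = stats.ttest_ind(gum_chewing, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# p < .05 means a difference this large would be unlikely from chance alone,
# the conventional cutoff for calling a result statistically significant.
if p_value < 0.05:
    print("Statistically significant at the .05 level")
else:
    print("Not statistically significant at the .05 level")
```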
All well and good.
However, the effect of an intervention may be statistically significant, but not clinically meaningful to practitioners. Or the intervention’s effects may not be statistically significant, and yet still be clinically important enough to be worth the time, cost, and effort it takes to implement.
What is clinical significance, and how can we tell if something is clinically significant? Two overlapping views:
Let me illustrate. Researchers recently examined the effects of a 1300–1500 (1–3 p.m.) quiet time on a postpartum unit. Outcome measures showed that women's exclusive breastfeeding rates increased by 14%. However, this change was not statistically significant (p = .39), a probability value well above p < .05. Nonetheless, the researchers concluded that the findings were clinically significant because a higher percentage of women exclusively breastfed their infants after quiet time, and arguably for those couplets the difference was "genuine" and "palpable" (Polit & Beck, p. 449). The time, cost, and effort of implementing a low-risk quiet time was reasonably associated with producing valuable outcomes for some.
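To see how that can play out numerically, here is a minimal sketch in Python with hypothetical counts; the quiet-time study's actual group sizes and analysis are not reproduced here, the use of scipy is my own choice, and I treat the reported 14% increase as roughly a 14-percentage-point rise for illustration. With groups of this size, a difference that large can still fall well short of p < .05.

```python
# Minimal sketch with hypothetical counts (not the quiet-time study's data):
# exclusive breastfeeding before vs. after quiet time, about a 14-point rise.
from scipy import stats

#                   [exclusively breastfed, not exclusively breastfed]
before_quiet_time = [10, 30]  # 10 of 40 couplets (25%), assumed for illustration
after_quiet_time = [16, 25]   # 16 of 41 couplets (39%), assumed for illustration

odds_ratio, p_value = stats.fisher_exact([before_quiet_time, after_quiet_time])
print(f"p = {p_value:.2f}")

# With groups this small the p-value lands well above .05, even though the
# rate rose by roughly 14 percentage points. Whether that rise justifies a
# low-risk quiet time is a clinical judgment, not a statistical one.
```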
Always remember: the higher the risk of the intervention, the more cautious you should be in translating findings into a particular practice setting. Don't overestimate, but don't overlook, clinical significance in your search to improve patient care.
Critical thinking: How might issues of statistical versus clinical significance inform the dialogue on mask wearing during the pandemic?
For more info:
I recommend this event. I have no conflict of interest.
New virtual EBP Institute: the Advanced Practice Institute: Promoting Adoption of Evidence-Based Practice is going virtual this October.
This Institute is a unique advanced program designed to build skills in the most challenging steps of the evidence-based practice process and in creating an organizational infrastructure to support evidence-based health care. Participants will learn how to implement, evaluate, and sustain EBP changes in complex health care systems.
Each participant also receives Evidence-Based Practice in Action: Comprehensive Strategies, Tools, and Tips From the University of Iowa Hospitals and Clinics. This book is an application-oriented EBP resource organized based on the latest Iowa Model and can be used with any practice change. The Institute will include tools and strategies directly from the book.
3-Day Virtual Institute
Wednesday, October 7
Wednesday, October 14
Wednesday, October 21
(participation is required for all 3 days)
Special pricing for this virtual institute: 5 participants from the same institution for the price of 4
Learn more and register for the October 2020 Advanced Practice Institute: Promoting Adoption of Evidence-Based Practice.
Kristen Rempel
Administrative Services Specialist | Nursing Research & Evidence-Based Practice
University of Iowa Health Care | Department of Nursing Services and Patient Care
200 Hawkins Dr, T155 GH, Iowa City, IA 52242 | 319-384-6737
uihc.org/nursing-research-and-evidence-based-practice-and quality
Enrolled in an MSN… and wondering what to do for an evidence-based clinical project?
Recently a former student contacted me about that very question. Part of my response to her is below:
“One good place to start, if you are flexible on your topic, is to look through Cochrane Reviews, Joanna Briggs Institute, AHRQ Clinical Practice Guidelines, or similar sources for very strong evidence on a particular topic and then work to move that into practice in some way. (For example, right now I’m involved in a project applying the evidence from a Cochrane review on the benefits of music listening, not music therapy, in improving patient outcomes like pain, mood, and opioid use.)
Once you narrow the topic it will get easier. Also, you can apply only the best evidence you have, so if there isn’t much research or other evidence about the topic, you might have to tackle the problem from a different angle” or pick an area where there IS enough evidence to apply.
Blessings! -Dr.H
Medscape just came out with an article by Eric J. Topol: 15 Studies That Challenged Medical Dogma in 2019. Critically check it out to practice your skills in applying evidence to practice. What are the implications for your practice? Are more or stronger studies needed before this overturning of dogma becomes simply more dogma? Are the resources, and people’s readiness, there for any warranted change? If not, what needs to happen? What are the risks of adopting these findings into practice?
Yes. Change can be painful.
Yes. It is easier to do things the way we’ve always done them (and been seemingly successful).
Yet, most of us want to work more efficiently or improve our own or patients’ health.
So, there you have the problem: a tension between status quo and change. Perhaps taking the easy status quo is why ‘everyday nurses’ don’t read research.
Ralph (2017) writes of encountering three common mindsets that keep nurses stuck in the rut of refusing to examine new research.
But, he argues, you have a choice: you can go with the status quo or challenge it (Ralph). And (admit it) haven’t we all found that the status quo sometimes doesn’t work very well?
How to begin solving the problem of not reading research? Think of a topic that’s super interesting to you and make a quick trip to PubMed. Check out a few relevant abstracts and ask your librarian to get the articles for you. Read them in the nurses’ lounge so others can, too.
Let me know how your challenge to the status quo works out.
Bibliography: Full text available for download through https://www.researchgate.net/. Ralph, N. (2017, April). Editorial: Engaging with research & evidence is a nursing priority so why are ‘everyday’ nurses not reading the literature? ACORN, 30(3), 3-5. doi: 10.26550/303/3.5
The difference between research and evidence-based practice (EBP) can sometimes be confusing, but the contrast between them is sharp. I think most of the confusion arises because those carrying out both processes measure outcomes. Here are the differences:
The creation of evidence precedes its application to practice: something must be made before it can be used. Research comes first; when its findings are then applied to practice, we say the practice is evidence-based.
A good analogy for how research & EBP differ & work together can be seen in autos.
CRITICAL THINKING: 1) Why is the common phrase “evidence-based research” unclear? Should you use it? Why or why not? 2) What is a clinical question you now face (e.g., C. diff spread, nurse morale on your unit, managing neuropathic pain)? Think about how the Stetler EBP model at http://www.nccmt.ca/registry/resource/pdf/83.pdf might help. And given that you will be measuring outcomes, why is this still considered EBP and not research?