Risks, Resources, Readiness
3 things to consider when adopting or adapting research evidence in a particular practice setting, according to Stetler (2001).
Check out the 1-minute video summary by DrH at https://www.instagram.com/martyhrn/
Evidence-based nursing. I have heard and seen the terms evidence-based nursing & evidence-based practice misused at times, even by well-educated RNs. Want to know what it is? Here’s the secret (or at least some things you should think about, says Dr. Ingersoll). https://www.nursingoutlook.org/article/S0029-6554(00)76732-7/pdf
First, she rightly differentiates 2 processes: research as discovery and evidence-based practice as application.
Ingersoll also argues that best evidence may include more than the much-vaunted systematic reviews and randomized controlled trials. Relying only on systematic, scientific research findings, she argues, is not enough to guide evidence-based practice. Her arguments provide a basis for discussion with those who might disagree.
[Note: Ingersoll uses the term “positivist thinking” at one point. For those uncertain about the term, I would define positivists as those who assume that reality and truth are objective, measurable, and discoverable by a detached, impartial researcher. Positivism underlies the empirical scientific process that most readers think of when they hear the word research.]
Do you agree with her that anecdotal and traditional knowledge make valuable contributions to evidence-based practice? Your thoughts about her thoughts?
How do you know when you have found enough research evidence on a topic to be able to use the findings in clinical practice? How many articles are enough? 5? 50? 100? 1000? Good question!
You have probably heard general rules like these for finding enough applicable evidence: Stick close to the key search terms derived from your PICOT statement of the problem; use only research published in the last 5-7 years unless it is a “classic”; and find randomized controlled trials (RCTs), meta-analyses, & systematic reviews of RCTs that document cause-and-effect relationships. Yes, those are good strategies. The only problem is that sometimes they don’t work!
Unfortunately, some clinical issues are “orphan topics.” No one has adequately researched them. And while there may be a few, well-done, valuable published studies on the topic, those studies may simply describe bits of the phenomenon or focus on how to measure the phenomenon (i.e., instrument development). They may give us little to no information on correlation and causation. There may be no RCTs. This situation may tempt us just to discard our clinical issue and to wait for more research (or of course to do research), but either could take years.
In her classic 1998 one-page article, “When is enough, enough?”, Dr. Carol Deets argues that asking how many research reports we need before applying the evidence may be the wrong question! Instead, she proposes, we should ask, “What should be done to evaluate the implementation of research findings in the clinical setting?”
When research evidence is minimal, then careful process and outcome evaluation of its use in clinical practice can: 1) Keep patient safety as the top priority, 2) Document cost-effectiveness and efficacy of new interventions, and 3) Facilitate swift, ethical use of findings that contributes to nursing knowledge. At the same time, Deets recognizes that for many this idea may be revolutionary, requiring us to change the way we think.
So back to the original question…How many articles are enough? Deets’ answer? “One study is enough” if we build in strong evaluation as we translate it into practice.
Reference: Deets, C. (1998). When is enough, enough? Journal of Professional Nursing, 14(4), 196. doi.org/10.1016/S8755-7223(98)80058-6
What’s the difference between statistical and clinical significance? Here’s a quick, non-exhaustive intro.
In short, statistical significance means that the difference in outcomes between an experimental and a control group is greater than would be expected by chance alone. For example, in a trial of whether gum chewing promotes return of bowel activity among post-op patients, one post-op group would chew gum and the other group would not. Researchers would then statistically compare the timing of return of bowel activity between the two groups to see whether the difference was greater than would occur by chance (p < .05 or p < .01). If the probability (p) value of the statistical test is less than .05, the difference is unlikely to be due to chance alone, and we call it statistically significant. [See an example of a gum-chewing trial in the free full text by Ledari, Barat, & Delavar (2012).]
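For readers who like to see the "greater than chance alone" logic in action, here is a minimal sketch using a permutation test. All numbers below are made up for illustration; they are not data from the Ledari trial, and the variable names are my own:

```python
import random
from statistics import mean

# Hypothetical (made-up) hours until return of bowel activity post-op.
gum     = [50, 52, 48, 55, 51, 49, 53, 50]   # gum-chewing group
control = [65, 70, 62, 68, 64, 66, 69, 63]   # usual-care group

observed = mean(control) - mean(gum)  # observed difference in means

# Permutation test: how often does randomly relabeling the patients
# produce a difference at least this large "by chance alone"?
random.seed(42)  # fixed seed so the sketch is reproducible
pooled = gum + control
n_perm = 5000
n_extreme = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = mean(pooled[len(gum):]) - mean(pooled[:len(gum)])
    if abs(diff) >= abs(observed):
        n_extreme += 1

p = (n_extreme + 1) / (n_perm + 1)
print(f"observed difference: {observed:.1f} h, p = {p:.4f}")
# A p below .05 means a gap this large would rarely arise by chance alone.
```

With these (deliberately well-separated) fake groups the test yields a very small p, i.e., a statistically significant difference.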
All well and good.
However, the effect of an intervention may be statistically significant, but not clinically meaningful to practitioners. Or the intervention’s effects may not be statistically significant, and yet still be clinically important enough to be worth the time, cost, and effort it takes to implement.
What is clinical significance, and how can we tell if something is clinically significant?
Let me illustrate. Researchers recently examined the effects of a 1300-1500 quiet time on a post-partum unit. Outcome measures showed that women’s exclusive breastfeeding rates increased 14%. However, this change was not statistically significant (p = .39)—a probability value well above .05. Nonetheless, the researchers concluded that the findings were clinically significant because a higher percentage of women exclusively breastfed their infants after quiet time, and arguably for those couplets the difference was “genuine” and “palpable” (Polit & Beck, p. 449). The time, cost, and effort of implementing a low-risk quiet time were reasonably associated with producing valuable outcomes for some.
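To see how a difference can fail to reach statistical significance yet still look clinically meaningful, here is a sketch using a two-proportion z-test. The sample sizes and baseline rate below are hypothetical (the quiet-time study is not reported in that detail here); only the 14-point increase echoes the example above:

```python
from math import erf, sqrt

# Hypothetical numbers: assume 30 women per group, with exclusive
# breastfeeding at 50% before quiet time vs 64% after (a 14-point rise).
n1, p1 = 30, 0.50   # before quiet time
n2, p2 = 30, 0.64   # after quiet time

diff = p2 - p1                                   # absolute difference
pooled = (n1 * p1 + n2 * p2) / (n1 + n2)         # pooled proportion
se = sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2)) # standard error
z = diff / se

# Two-sided p-value from the normal approximation
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
print(f"absolute increase: {diff:.0%}, p = {p_value:.2f}")
# Not statistically significant (p > .05) at these small sample sizes,
# yet a 14-point rise in breastfeeding may still matter clinically.
```

The point of the sketch: with small samples, even a sizable absolute improvement can carry a p-value well above .05, which is exactly the situation where clinical judgment about the magnitude, cost, and risk of the change must carry the decision.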
Always remember that the higher the risk of the intervention, the more cautious should be your translation of findings into a particular practice setting. Don’t overestimate, but don’t overlook, clinical significance in your search to improve patient care.
Critical thinking: How might issues of statistical versus clinical significance inform the dialogue on mask wearing during the pandemic?
For more info:
I recommend this event. I have no conflict of interest.
New virtual EBP Institute – the Advanced Practice Institute: Promoting Adoption of Evidence-Based Practice is going virtual this October.
This Institute is a unique advanced program designed to build skills in the most challenging steps of the evidence-based practice process and in creating an organizational infrastructure to support evidence-based health care. Participants will learn how to implement, evaluate, and sustain EBP changes in complex health care systems.
Each participant also receives Evidence-Based Practice in Action: Comprehensive Strategies, Tools, and Tips From the University of Iowa Hospitals and Clinics. This book is an application-oriented EBP resource organized based on the latest Iowa Model and can be used with any practice change. The Institute will include tools and strategies directly from the book.
3-Day Virtual Institute
Wednesday, October 7
Wednesday, October 14
Wednesday, October 21
(participation is required for all 3 days)
Special pricing for this virtual institute: 5 participants from the same institution for the price of 4
Learn more and register for the October 2020 Advanced Practice Institute: Promoting Adoption of Evidence-Based Practice.
Kristen Rempel
Administrative Services Specialist | Nursing Research & Evidence-Based Practice
University of Iowa Health Care | Department of Nursing Services and Patient Care
200 Hawkins Dr, T155 GH, Iowa City, IA 52242 | 319-384-6737
uihc.org/nursing-research-and-evidence-based-practice-and-quality
Enrolled in an MSN… and wondering what to do for an evidence-based clinical project?
Recently a former student contacted me about that very question. Part of my response to her is below:
“One good place to start, if you are flexible on your topic, is to look through Cochrane Reviews, the Joanna Briggs Institute, AHRQ Clinical Practice Guidelines, or similar sources for very strong evidence on a particular topic, and then work to move that into practice in some way. (For example, right now I’m involved in a project applying evidence from a Cochrane review on the benefits of music listening–not music therapy–in improving patient outcomes like pain, mood, & opioid use.)
Once you narrow the topic it will get easier. Also, you can apply only the best evidence you have, so if there isn’t much research or other evidence about the topic you might have to tackle the problem from a different angle” or pick an area where there IS enough evidence to apply.
Blessings! -Dr.H
Medscape just came out with Eric J. Topol’s article, 15 Studies That Challenged Medical Dogma in 2019. Critically check it out to practice your skills in applying evidence to practice. What are the implications for your practice? Are more or stronger studies needed before this overturning of dogma becomes simply more dogma? Are the resources and people’s readiness there for any warranted change? If not, what needs to happen? What are the risks of adopting these findings into practice?
TIME TO REPUBLISH THIS ONE:
Below is my adaptation of one of the clearest representations I have ever seen of where the roads diverge among quality improvement, evidence-based practice, & research. Well done, Dr. E. Schenk, PhD, MHI, RN-BC!
For RNs wanting to pursue a doctorate, it is important to pick the degree that best matches your anticipated career path. The shortest, simplest explanation of the difference between these degrees is probably:
An excellent, free full-text, critique can be found at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4547057/
Of course, some DNPs teach in universities, particularly in DNP programs; PhDs may otherwise be better prepared for faculty roles. I encourage you to look carefully at the curriculum of the school where you hope to study and the expectations of any university where you hope to teach. Speak with faculty, & choose wisely.