TIME TO REPUBLISH THIS ONE:
Below is my adaptation of one of the clearest representations that I have ever seen of when the roads diverge into quality improvement, evidence-based practice, & research. Well done, Dr. E. Schenk, PhD, MHI, RN-BC!
A pilot study is to research what a trial balloon is to politics.
In politics, a trial balloon is communicating a law or policy idea via media to see how the intended audience reacts to it. A trial balloon does not answer the question, “Would this policy (or law) work?” Instead a trial balloon answers questions like “Which people hate the idea of the policy/law–even if it would work?” or “What problems might enacting it create?” In other words, a trial balloon answers questions that a politician wants to know BEFORE implementing a policy so that the policy or law can be tweaked to be successfully put in place.
In research, a pilot study is sort of like a trial balloon. It is “a small-scale test of the methods and procedures” of a planned full-scale study (Porta, Dictionary of Epidemiology, 5th edition, 2008). A pilot study answers questions that we want to know BEFORE doing a larger study, so that we can tweak the study plan and have a successful full-scale research project. A pilot study does NOT answer research questions or hypotheses, such as “Does this intervention work?” Instead a pilot study answers the question “Are these research procedures workable?”
A pilot study asks & answers: “Can I recruit my target population? Can the treatments be delivered per protocol? Are study conditions acceptable to participants?” and so on. A pilot study should have specific, measurable benchmarks for feasibility testing. For example, if the pilot is finding out whether subjects will adhere to the study protocol, then adherence might be defined as “70 percent of participants in each [group] will attend at least 8 of 12 scheduled group sessions.” Sample size is based on practical criteria such as budget, participant flow, and the number needed to answer feasibility questions (i.e., questions about whether the study is workable).
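To make that benchmark idea concrete, here is a minimal sketch (the attendance numbers are my own invention, not from the NCCIH source) of how you might check the 8-of-12-sessions adherence benchmark against pilot data:

```python
# Hypothetical pilot data: sessions attended (out of 12) per participant, by group.
attendance = {
    "intervention": [12, 9, 8, 11, 5, 10, 8, 12, 7, 9],
    "control":      [10, 8, 12, 6, 9, 11, 8, 4, 12, 10],
}

BENCHMARK = 0.70    # 70% of participants in each group...
MIN_SESSIONS = 8    # ...must attend at least 8 of 12 scheduled sessions.

for group, sessions in attendance.items():
    adherent = sum(1 for s in sessions if s >= MIN_SESSIONS)
    rate = adherent / len(sessions)
    verdict = "met" if rate >= BENCHMARK else "NOT met"
    print(f"{group}: {adherent}/{len(sessions)} adherent ({rate:.0%}) -> benchmark {verdict}")
```

Notice that this is purely descriptive counting against a pre-set benchmark, not hypothesis testing, which is exactly the boundary drawn next.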
A pilot study does NOT: test hypotheses (even preliminarily); use inferential statistics; estimate effect size; or assess or demonstrate the safety of an intervention.
A pilot study is not just a small study.
Next blog: Why this matters!!
For more info read the source of all quotes in this blog: Pilot Studies: Common Uses and Misuses @ https://nccih.nih.gov/grants/whatnccihfunds/pilot_studies
I’m not a New Year’s resolution person. I used to be and then I realized that I wanted to hit the restart button more often than every 365 days. So…my aim for this blog remains pretty much unchanged: Make research processes and ideas understandable for every RN.
Although “to be simple is difficult,” that’s my goal. Let me know what’s difficult for you in research, because it probably is for others as well. Let’s work on the difficult together so that you can use the BEST Evidence in your practice.
The 2019 journey begins today, and tomorrow, and the tomorrows after that!
FOR MORE: Go to PubMed. Search for a topic of interest. Send me the article & we’ll critique together.
Enjoy this 2+ minute, homegrown YouTube video about our 7-year collaborative EBP/research project, recorded at the request of a presenter at the Association for Nursing Staff Development conference. (I admit it’s intimidating to watch myself.)
Check out the video: https://www.youtube.com/watch?v=T8KUIt_Uq9k.
Key points from our efforts: EBP/research learning should be fun; the content, serious!
The related publication that records some of our fun efforts and the full collaborative picture: Highfield, M. E. F., Collier, A., Collins, M., & Crowley, M. (2016). Partnering to promote evidence-based practice in a community hospital: Implications for nursing professional development specialists. Journal of Nursing Staff Development, 32(3), 130-136. doi: 10.1097/NND.0000000000000227
For RNs wanting to pursue a doctorate, it is important to pick the degree that best matches your anticipated career path. The shortest, simplest explanation of the difference between these degrees is probably this: the research-focused PhD prepares nurse scientists to generate new evidence, while the practice-focused DNP prepares expert clinicians to apply that evidence in care delivery.
An excellent, free full-text, critique can be found at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4547057/
Of course, some DNPs teach in universities, particularly in DNP programs; generally, though, PhDs are better prepared for faculty roles. I encourage you to look carefully at the curriculum at the school where you hope to study and the expectations of a university where you hope to teach. Speak with faculty, & choose wisely.
Yes. Change can be painful.
Yes. It is easier to do things the way we’ve always done them (and been seemingly successful).
Yet, most of us want to work more efficiently or improve our own or patients’ health.
So, there you have the problem: a tension between status quo and change. Perhaps taking the easy status quo is why ‘everyday nurses’ don’t read research.
Ralph (2017) writes of encountering three common mindsets that keep nurses stuck in the rut of refusing to examine new research.
But, he argues, you have a choice: you can go with the status quo or challenge it (Ralph, 2017). And (admit it), haven’t we all found that the status quo sometimes doesn’t work well?
How to begin solving the problem of not reading research? Think of a topic that is super interesting to you and make a quick trip to PubMed (pubmed.ncbi.nlm.nih.gov). Check out a few relevant abstracts and ask your librarian to get the articles for you. Read them in the nurses’ lounge so others can, too.
Let me know how your challenge to the status quo works out.
Bibliography: Ralph, N. (2017, April). Editorial: Engaging with research & evidence is a nursing priority, so why are ‘everyday’ nurses not reading the literature? ACORN, 30(3), 3-5. doi: 10.26550/303/3.5. Full text available for download through https://www.researchgate.net/
Reposting. Enjoy the review. -Dr.H
Discovering Your Inner Scientist
Last week’s blog focused on the strongest types of evidence that you might find when trying to solve a clinical problem. These are: #1 Systematic reviews, Meta-analyses, or Evidence-based clinical practice guidelines based on systematic review of RCTs; & #2 Randomized controlled trials. (For levels of evidence from strongest to weakest, see blog “I like my coffee (and my evidence) strong!”)
So, after the two strongest levels of evidence, what is the next strongest? Level #3 is controlled trials without randomization (sometimes called quasi-experimental studies).
Here’s an example of a controlled trial without randomization: I take two groups of mice and test two types of cheese to find out which one mice like best. I do NOT randomly assign the mice to groups. The experimental group (#1) loved the Swiss cheese, & the control group (#2) refused to eat the cheddar. I confidently conclude that mice LOVE Swiss cheese… but without randomization the two groups may differ in hidden ways (maybe group #1 was simply hungrier), so my confidence is misplaced.
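To see why this matters, here is a small simulation (entirely my own invention, assuming a hidden “hunger” confounder) showing how non-random assignment can manufacture an effect that randomization would have washed out:

```python
import random

random.seed(42)

def cheese_eaten(hungry):
    """Amount of cheese eaten (arbitrary units): driven by hunger, NOT cheese type."""
    return random.gauss(7 if hungry else 3, 1)

def mean(xs):
    return sum(xs) / len(xs)

# Non-randomized: the hungriest mice happened to land in the Swiss group.
swiss = [cheese_eaten(hungry=True) for _ in range(50)]
cheddar = [cheese_eaten(hungry=False) for _ in range(50)]
print(f"Non-randomized: Swiss {mean(swiss):.1f} vs cheddar {mean(cheddar):.1f}")

# Randomized: shuffling spreads hunger evenly across both groups.
hunger = [i < 50 for i in range(100)]   # 50 hungry mice, 50 not
random.shuffle(hunger)
swiss_r = [cheese_eaten(h) for h in hunger[:50]]
cheddar_r = [cheese_eaten(h) for h in hunger[50:]]
print(f"Randomized:     Swiss {mean(swiss_r):.1f} vs cheddar {mean(cheddar_r):.1f}")
```

Because cheese type has no real effect in this toy model, the randomized comparison correctly shows roughly equal means, while the non-randomized one produces a large, spurious “Swiss effect” from hunger alone.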
Practice based in evidence (EBP) means that you must critique/synthesize evidence and then apply it to particular settings and populations using your best judgment. This means that you must discern when (and when NOT) to apply the research. Be sure to use your best professional judgment to particularize your actions to the situation!
Add the Number Needed to Treat (NNT) to your repertoire of EBP tools. This is not mumbo-jumbo. NNT explained here–short & sweet: http://www.thennt.com/thennt-explained/
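As a quick worked example (the event rates below are invented, not from thennt.com), NNT is simply the reciprocal of the absolute risk reduction:

```python
def number_needed_to_treat(control_event_rate, treatment_event_rate):
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - treatment_event_rate
    if arr <= 0:
        raise ValueError("No risk reduction: NNT undefined (treatment ineffective or harmful).")
    return 1 / arr

# Hypothetical trial: 10% of controls have the bad outcome vs 8% of treated patients.
print(f"NNT = {number_needed_to_treat(0.10, 0.08):.0f}")  # ARR = 0.02 -> NNT = 50
```

In other words, with a 2-percentage-point absolute risk reduction, you would treat about 50 patients to prevent one event, a far more clinically intuitive number than a relative risk.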
CRITICAL THINKING: Check out this or other analyses at the site. How does the info on antihypertensives for mild hypertension answer the question of whether more is better? Are there patients in whom you SHOULD treat mild HTN? (“We report, you decide.”) http://www.thennt.com/nnt/anti-hypertensives-for-cardiovascular-prevention-in-mild-hypertension/
MORE INFO: Check out what the data say about other risk/benefit treatments at http://www.thennt.com/
The difference between research and evidence-based practice (EBP) can sometimes be confusing, but the contrast between them is sharp. I think most of the confusion arises because those implementing both processes measure outcomes. Here are the key differences:
The creation of evidence obviously precedes its application to practice: something must be made before it can be used. In the same way, research precedes the application of research findings to practice; when those findings are applied to practice, we say the practice is evidence-based.
A good analogy for how research & EBP differ & work together can be seen in autos: research designs and builds the car, while EBP is the driver deciding when and how to use it on a particular road.
CRITICAL THINKING: 1) Why is the common phrase “evidence-based research” unclear? Should you use it? Why or why not? 2) Pick a clinical question you now face (e.g., C. diff spread, nurse morale on your unit, managing neuropathic pain) and think about how the Stetler EBP model at http://www.nccmt.ca/registry/resource/pdf/83.pdf might help. Given that you will be measuring outcomes, why is this still considered EBP?
“OBJECTIVE: To determine which factors influence whether Santa Claus will visit children in hospital on Christmas Day.
DESIGN: Retrospective observational study.
SETTING: Paediatric wards in England, Northern Ireland, Scotland, and Wales.
PARTICIPANTS: 186 members of staff who worked on the paediatric wards (n=186) during Christmas 2015.
MAIN OUTCOME MEASURES: Presence or absence of Santa Claus on the paediatric ward during Christmas 2015. This was correlated with rates of absenteeism from primary school, conviction rates in young people (aged 10-17 years), distance from hospital to North Pole (closest city or town to the hospital in kilometres, as the reindeer flies), and contextual socioeconomic deprivation (index of multiple deprivation).
RESULTS: Santa Claus visited most of the paediatric wards in all four countries: 89% in England, 100% in Northern Ireland, 93% in Scotland, and 92% in Wales. The odds of him not visiting, however, were significantly higher for paediatric wards in areas of higher socioeconomic deprivation in England (odds ratio 1.31 (95% confidence interval 1.04 to 1.71) in England, 1.23 (1.00 to 1.54) in the UK). In contrast, there was no correlation with school absenteeism, conviction rates, or distance to the North Pole.
CONCLUSION: The results of this study dispel the traditional belief that Santa Claus rewards children based on how nice or naughty they have been in the previous year. Santa Claus is less likely to visit children in hospitals in the most deprived areas. Potential solutions include a review of Santa’s contract or employment of local Santas in poorly represented regions.” Park et al. (2016). BMJ, 355, i6355. doi: 10.1136/bmj.i6355
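If you are wondering where numbers like “odds ratio 1.31 (95% confidence interval 1.04 to 1.71)” come from, here is a minimal sketch of the textbook 2×2 calculation, with invented counts (the paper itself uses regression modelling, and these are NOT the Park et al. data):

```python
import math

# Hypothetical 2x2 table (invented counts, NOT the Park et al. data):
#                       no Santa visit   Santa visit
# more deprived wards        a = 40        b = 160
# less deprived wards        c = 25        d = 175
a, b, c, d = 40, 160, 25, 175

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)            # standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Because the resulting interval excludes 1.0, the higher odds of no visit in the more deprived wards would be called statistically significant, the same logic behind the intervals reported in the abstract.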