Last week’s blog focused on the strongest types of evidence that you might find when trying to solve a clinical problem. These are: #1 Systematic reviews, Meta-analyses, or Evidence-based clinical practice guidelines based on systematic review of RCTs; & #2 Randomized controlled trials. (For levels of evidence from strongest to weakest, see blog “I like my coffee (and my evidence) strong!”)
So, after the two strongest levels of evidence, what is the next strongest? Level #3 is controlled trials without randomization (sometimes called quasi-experimental studies).
Here’s an example of a controlled trial without randomization: I take two groups of mice and test two types of cheese to find out which one mice like best. I do NOT randomly assign the mice to groups. Experimental group #1 loved the Swiss cheese, & control group #2 refused to eat the cheddar. I confidently conclude that mice LOVE Swiss cheese… (See the problem? Without random assignment, the two groups may have differed from the start, so the cheese may not explain the difference.)
Evidence-based practice (EBP) means that you must critique/synthesize evidence and then apply it to particular settings and populations using your best judgment. This means that you must discriminate about when (and when NOT) to apply the research. Be sure to use your best professional judgment to particularize your actions to the situation!
The difference between research and evidence-based practice (EBP) can sometimes be confusing, but the contrast between them is sharp. I think most of the confusion arises because those implementing both processes measure outcomes. Here are the differences:
RESEARCH: The process of research (formulating an answerable question, designing project methods, collecting and analyzing the data, and interpreting the meaning of results) is creating knowledge (AKA creating research evidence). A research project that has been written up IS evidence that can be used in practice. The process of research is guided by the scientific method.
EVIDENCE-BASED PRACTICE: EBP is using existing knowledge (AKA using research evidence) in practice. While researchers create new knowledge, EBP practitioners apply that existing knowledge to their settings and patients.
The creation of evidence obviously precedes its application: something must be made before it can be used. When research findings are applied to practice, then we say the practice is evidence-based.
A good analogy for how research & EBP differ & work together can be seen in autos.
Designers & factory workers create new cars.
Drivers use existing cars that they choose according to preferences and best judgments about safety.
CRITICAL THINKING: 1) Why is the common phrase “evidence-based research” unclear? Should you use it? Why or why not? 2) Identify a clinical question you now face (e.g., C. diff spread; nurse morale on your unit; managing neuropathic pain) and think about how the Stetler EBP model at http://www.nccmt.ca/registry/resource/pdf/83.pdf might help. Because you will be measuring outcomes, why is this still considered EBP rather than research?
“OBJECTIVE: To determine which factors influence whether Santa Claus will visit children in hospital on Christmas Day.
DESIGN: Retrospective observational study.
SETTING: Paediatric wards in England, Northern Ireland, Scotland, and Wales.
PARTICIPANTS: 186 members of staff who worked on the paediatric wards (n=186) during Christmas 2015.
MAIN OUTCOME MEASURES: Presence or absence of Santa Claus on the paediatric ward during Christmas 2015. This was correlated with rates of absenteeism from primary school, conviction rates in young people (aged 10-17 years), distance from hospital to North Pole (closest city or town to the hospital in kilometres, as the reindeer flies), and contextual socioeconomic deprivation (index of multiple deprivation).
RESULTS: Santa Claus visited most of the paediatric wards in all four countries: 89% in England, 100% in Northern Ireland, 93% in Scotland, and 92% in Wales. The odds of him not visiting, however, were significantly higher for paediatric wards in areas of higher socioeconomic deprivation in England (odds ratio 1.31 (95% confidence interval 1.04 to 1.71) in England, 1.23 (1.00 to 1.54) in the UK). In contrast, there was no correlation with school absenteeism, conviction rates, or distance to the North Pole.
CONCLUSION: The results of this study dispel the traditional belief that Santa Claus rewards children based on how nice or naughty they have been in the previous year. Santa Claus is less likely to visit children in hospitals in the most deprived areas. Potential solutions include a review of Santa’s contract or employment of local Santas in poorly represented region.” Park et al. (2016). BMJ. 2016 Dec 14;355:i6355. doi: 10.1136/bmj.i6355.
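If you are unfamiliar with the odds ratios reported above (e.g., “1.31, 95% confidence interval 1.04 to 1.71”), here is a minimal sketch of how such a figure is computed from a simple 2×2 table of counts. The counts below are hypothetical for illustration only, NOT taken from the Park et al. study:

```python
import math

# Hypothetical counts (NOT from Park et al., 2016):
#                       no Santa visit   Santa visit
# more deprived wards        a = 12         b = 88
# less deprived wards        c =  6         d = 94
a, b, c, d = 12, 88, 6, 94

# Odds ratio: odds of "no visit" in deprived wards vs. less deprived wards
odds_ratio = (a * d) / (b * c)

# Approximate 95% confidence interval using the standard error of the log odds
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f} to {hi:.2f})")
# prints: OR = 2.14, 95% CI (0.77 to 5.94)
```

Note that in this made-up example the confidence interval crosses 1.00, so the result would NOT be statistically significant; in the actual study the reported interval (1.04 to 1.71) sits entirely above 1.00, which is why the authors call the finding significant.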
How would you translate this into practice? Questions to help you with this endeavor: Where does this retrospective, observational research fall on the evidence hierarchy? Is it quantitative or qualitative research? Experimental or non-experimental research? How generalizable is this research? What are the risks, resources, and readiness of people in potentially using the findings (Stetler & Marram, 1996; Stetler, 2001)? What might happen if you try to apply the abstract information to practice without reading the full article? Do you think the project done in Europe is readily applicable to America? What would be the next level of research that you might undertake to better confirm these findings?
How strong is the evidence regarding our holiday Santa Claus (SC) practices? And what are the opportunities on this SC topic for new descriptive, correlation, or experimental research? Although existing evidence generally supports SC, in the end we may conclude, “the most real things in the world are those that neither children nor men can see” (Church, as cited in Newseum, n.d.).
Last post I commented on the potentially misleading terms of Filtered & Unfiltered research. My key point? Much so-called “unfiltered research” has been screened (filtered) carefully through peer review before publication, while some “filtered research” may have been ‘filtered’ only by a single expert & may be out of date. If we use the terms filtered and unfiltered, we should not be naive about their meanings. (Pyramid source: Wikimedia Commons)
This week, I address what I see as a 2nd problem with this evidence-based medicine pyramid: descriptive, correlation, & in-depth qualitative research are missing in action from it. Where are they? This undercuts the EBM pyramid as a teaching tool and also (intentionally or not) denigrates the necessary basic research on which stronger levels of evidence are built. That foundation of the pyramid, loosely called “background information,” includes such basic, essential research.
You may have heard of Benner’s Novice to Expert theory. Benner used in-depth, qualitative interview descriptions as data to generate her theory. Yet that type of research evidence is missing from medicine’s pyramid! Without a clear foundation the pyramid will just topple over. Better be clear!
I recommend substituting (or at least adding to your repertoire) an Evidence-Based NURSING (EBN) pyramid. Several versions exist & one is below that includes some of the previously missing research! This one includes EBP & QI projects, too! Notice the explicit addition of detail to the below pyramid as described at https://www.youtube.com/watch?v=MfRbuzzKjcM.
Are we talking cigarettes? Water? Coffee? Other? Yes, other. In this case, about what is sometimes called “filtered” or “unfiltered” literature in the evidence-based medicine pyramid of research evidence. (I have more than one issue with this particular pyramid as a representation of all evidence, but for right now let’s look at filtered information & unfiltered information. Pyramid source: Wikimedia Commons)
Filtered is considered stronger–meaning that we can be more confident that literature from this category better supports cause and effect. I agree.
Unfiltered evidence (usually single studies etc) is considered weaker–meaning that we must be more cautious about its accuracy in representing reality. I agree.
But, “Is unfiltered information really unfiltered?” No filtering at all? My qualified answer is, “No.” Argue with me if you like.
My opinion: If the “unfiltered” article is a primary-source research study that has a strong design and is published in a peer-reviewed journal, then it has been filtered by multiple expert peer reviewers just to make it to publication.
Thus, when discussing filtered vs. unfiltered one should be very clear on what those terms mean and do not mean.
Critical Thinking: When filtered literature (systematic reviews & critically appraised topics & articles) has been filtered by one individual, is that superior to unfiltered literature in terms of introducing bias? What if the “filtered” evidence is 7 years old and a primary, “unfiltered” source(s) from this year has different findings? What is the relationship between “filtered” and “unfiltered”–after all the “unfiltered” is the pyramid base so what does that mean?