Category Archives: research

Is It 2? Or 3?

Credible sources often disagree on technicalities. Sometimes this includes classification of research design. Some argue that there are only 2 categories of research design:

  1. True experiments. True experiments have 3 elements: 1) randomization to groups, 2) a control group, and 3) an intervention; and
  2. Non-experiments. Non-experiments may have 1 or none of those 3 elements.
Within-subject Control Group

Fundamentally, I agree with the above. But what about designs that include an intervention and a control group, but not randomization?

Those may be called quasi-experiments; the most often performed quasi-experiment is pre/post testing of a single group. The control group consists of subjects at baseline, and the experimental group consists of the same subjects after they receive a treatment or intervention. That means the control group is a within-subjects control group (as opposed to a between-groups control). Quasi-experiments can be used to answer cause-and-effect hypotheses when an experiment may not be feasible or ethical.

One might even argue that a strength of pre/post quasi-experiments is that we do not have to assume that the control and experimental groups are equivalent, an assumption we would make about subjects randomized (randomly assigned) to a control or experimental group. Instead, the control and experimental groups are exactly equivalent because they are the same persons (barring maturation of subjects and similar threats to validity that also apply to experiments).
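Because each subject serves as their own control in a pre/post design, the analysis centers on each person's change score rather than a comparison of two separate groups. Here is a minimal sketch of that idea in Python; the scores are made-up illustration data, not from any real study.

```python
# A minimal sketch of summarizing a pre/post, within-subjects design.
# All numbers are hypothetical illustration data.
from statistics import mean, stdev

# The same 5 subjects measured at baseline (pre) and after the intervention (post)
pre = [6, 7, 5, 8, 6]    # e.g., pain scores at baseline (the "control" condition)
post = [4, 5, 4, 6, 5]   # the same subjects after the intervention

# Because each subject is their own control, we analyze the paired
# differences rather than comparing two independent groups.
diffs = [b - a for a, b in zip(pre, post)]
print("Mean change:", mean(diffs))            # average post-minus-pre difference
print("SD of change:", round(stdev(diffs), 2))
```

In a real study, those paired differences would feed into a paired statistical test; the point here is only that the "control" data and "experimental" data come from the same people.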

I think using the term quasi-experiment makes it clear that persons in the study receive an intervention. Adding “pre/post” means that the researcher is using a single group as its own control.

Baseline -> Intervention -> Post

I prefer to use the term non-experimental to mean a) descriptive studies (ones that just describe the situation) and b) correlational studies (ones without an intervention that look for whether one factor is related to another).

What do you think? 2? or 3?

Challenges to “Medical dogma” – Practice your EBP skills

Medscape just came out with an Eric J. Topol article: 15 Studies That Challenged Medical Dogma in 2019. Critically check it out to practice your skills in applying evidence to practice. What are the implications for your practice? Are more or stronger studies needed before this overturning of dogma becomes simply more dogma? Are the resources and people’s readiness there for any warranted change? If not, what needs to happen? What are the risks of adopting these findings into practice?

Your thoughts? https://www.medscape.com/viewarticle/923150?src=soc_fb_share&fbclid=IwAR1SBNNVGW6BBWuKw7zBjhWIoQoMGtXZCy-BwpTTyavHSxmLleJuliKKG4A

On Target All the Time and Every Time!

“Measure twice. Cut once!” goes the old carpenter adage. Why? Because measuring accurately means you’ll get the outcomes you want!

Same in research. A consistent and accurate measurement will give you trustworthy answers about the outcomes you want to know. Whether an instrument measures something consistently is called reliability. Whether it measures accurately is called validity. So, before you use a tool, check its reported reliability and validity.

A good resource for understanding the concepts of reliability (consistency) and validity (accuracy) of research tools is at https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/ Below are quoted Key Takeaways:

  • Psychological researchers do not simply assume that their measures work. Instead, they conduct research to show that they work. If they cannot show that they work, they stop using them.
  • There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (interrater reliability). Validity is the extent to which the scores actually represent the variable they are intended to.
  • Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces are correlated with other variables they are expected to be correlated with and not correlated with variables that are conceptually distinct.
  • The reliability and validity of a measure is not established by any single study but by the pattern of results across multiple studies. The assessment of reliability and validity is an ongoing process.

“Two roads diverged in a yellow wood…” R.Frost

TIME TO REPUBLISH THIS ONE:

Below is my adaptation of one of the clearest representations that I have ever seen of when the roads diverge into quality improvement, evidence-based practice, & research. Well done, Dr. E. Schenk, PhD, MHI, RN-BC!

qi-ebp-research-flow-chart

What’s in a Name?

[this posting back by popular demand]

TITLES!! That’s what you get when you search for research online!

But, whether your search turns up 3 or 32,003 article titles, remember that a title tells you a LOT. In fact, if well written, it is a mini-abstract of the study.

For example, take this research article title: “What patients with abdominal pain expect about pain relief in the Emergency Department” by Yee et al. in 2006 in JEN.

  • Variable (key factor that varies)? Answer = Expectations about pain relief
  • Population studied? Answer = ED patients with abdominal pain
  • Setting? Answer = Maybe the ED (because they could’ve been surveyed after they got home or were admitted)
  • Design? Answer = Not included, but you might guess that it is a descriptive study because it likely describes the patients’ expectations without any intervention.

There you have it! Now you know about TITLES!!

Now you try. Here’s your title: Gum chewing aids bowel function return and analgesic requirements after bowel surgery: a randomized controlled trial by Byrne CM, Zahid A, Young JM, Solomon MJ, Young CJ in May 2018

  • Variables? (this time there are 3 factors that vary–1 independent variable; & 2 dependent ones connected by “and”) Your answer is……
  • Population? (who is being studied; & if you have trouble identifying variables, identify the population first; then try) Your answer is….
  • Setting? (where; maybe not so clear; might have to go to abstract for this one) Your answer is….
  • Design of study? (it’s right there!) Your answer…..

Congratulate yourself!

Trial Balloons & Pilot Studies

A pilot study is to research what a trial balloon is to politics

In politics, a trial balloon is communicating a law or policy idea via media to see how the intended audience reacts to it.  A trial balloon does not answer the question, “Would this policy (or law) work?” Instead a trial balloon answers questions like “Which people hate the idea of the policy/law–even if it would work?” or “What problems might enacting it create?” In other words, a trial balloon answers questions that a politician wants to know BEFORE implementing a policy so that the policy or law can be tweaked to be successfully put in place.


In research, a pilot study is sort of like a trial balloon. It is “a small-scale test of the methods and procedures” of a planned full-scale study (Porta, Dictionary of Epidemiology, 5th edition, 2008). A pilot study answers questions that we want to know BEFORE doing a larger study, so that we can tweak the study plan and have a successful full-scale research project. A pilot study does NOT answer research questions or hypotheses, such as “Does this intervention work?”  Instead a pilot study answers the question “Are these research procedures workable?”

A pilot study asks & answers: “Can I recruit my target population? Can the treatments be delivered per protocol? Are study conditions acceptable to participants?” and so on. A pilot study should have specific, measurable benchmarks for feasibility testing. For example, if the pilot is finding out whether subjects will adhere to the study, then adherence might be defined as “70 percent of participants in each [group] will attend at least 8 of 12 scheduled group sessions.” Sample size is based on practical criteria such as budget, participant flow, and the number needed to answer feasibility questions (i.e., questions about whether the study is workable).
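A feasibility benchmark like the adherence example above is just a pre-stated threshold checked against the pilot data. Here is a hedged sketch of that check; the attendance counts are hypothetical illustration data.

```python
# A sketch of checking a pilot study's feasibility benchmark:
# "70 percent of participants will attend at least 8 of 12 sessions."
# The attendance counts are hypothetical illustration data.
sessions_attended = [12, 9, 8, 5, 11, 8, 7, 10, 8, 12]  # one entry per participant

# Count participants who met the per-person adherence criterion (>= 8 sessions)
adherent = sum(1 for s in sessions_attended if s >= 8)
rate = adherent / len(sessions_attended)

# Compare the observed rate against the pre-stated 70% benchmark
benchmark_met = rate >= 0.70
print(f"Adherence: {rate:.0%} -> benchmark {'met' if benchmark_met else 'not met'}")
```

Note that nothing here estimates an effect or tests a hypothesis; the only question answered is whether the procedures proved workable.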

A pilot study does NOT test hypotheses (even preliminarily), use inferential statistics, estimate effect size, or assess or demonstrate the safety of an intervention.

A pilot study is not just a small study.

Next blog: Why this matters!!

For more info read the source of all quotes in this blog: Pilot Studies: Common Uses and Misuses @ https://nccih.nih.gov/grants/whatnccihfunds/pilot_studies

2019: It is…….

I’m not a New Year’s resolution person.  I used to be and then I realized that I wanted to hit the restart button more often than every 365 days.  So…my aim for this blog remains pretty much unchanged:   Make research processes and ideas understandable for every RN.

Although “to be simple is difficult,” that’s my goal. Let me know what’s difficult for you in research, because it probably is for others as well. Let’s work on the difficult together so that you can use the BEST evidence in your practice.

The 2019 journey begins today, and tomorrow, and the tomorrows after that!

FOR MORE: Go to PubMed. Search for a topic of interest. Send me the article & we’ll critique together.

Fun Methods. Serious Content.

Enjoy this 2+-minute, homegrown, YouTube video about our 7-year collaborative, EBP/research project recorded per request of a presenter at the Association for Nursing Staff Development conference.  (I admit it’s intimidating to watch myself.)

Check out the video: https://www.youtube.com/watch?v=T8KUIt_Uq9k

Key points from our efforts:  EBP/research learning should be fun.  Content, serious!  

The related publication that records some of our fun efforts and the full collaborative picture: Highfield, M.E.F., Collier, A., Collins, M., & Crowley, M. (2016). Partnering to promote evidence-based practice in a community hospital: Implications for nursing professional development specialists, Journal of Nursing Staff Development, 32(3):130-6. doi: 10.1097/NND.0000000000000227.

It’s Up to You: Accept the Status Quo or Challenge It

Yes. Change can be painful.

Yes. It is easier to do things the way we’ve always done them (and been seemingly successful).

Yet, most of us want to work more efficiently or improve our own or patients’ health.

 So, there you have the problem: a tension between status quo and change. Perhaps taking the easy status quo is why ‘everyday nurses’ don’t read research.

Ralph (2017) writes of encountering 3 common mindsets that keep nurses stuck in the rut of refusing to examine new research:

  1. I’m not a researcher.
  2. I don’t value research.
  3. I don’t have time to read research.

But, he argues, you have a choice: you can go with the status quo or challenge it (Ralph).  And (admit it), haven’t we all found that the status quo sometimes doesn’t work well so that we end up

  • choosing a “work around,” or
  • ignoring/avoiding the problem or
  • leaving the problem for someone else or
  • ….[well….,you pick an action.]

How to begin solving the problem of not reading research? Think of a topic that’s super interesting to you and make a quick trip to PubMed. Check out a few relevant abstracts and ask your librarian to get the articles for you. Read them in the nurses’ lounge so others can, too.

Let me know how your challenge to the status quo works out.

Bibliography: Full text available for download through https://www.researchgate.net/ of Ralph, N. (2017, April). Editorial: Engaging with research & evidence is a nursing priority so why are ‘everyday’ nurses not reading the literature. ACORN, 30(3), 3-5. doi: 10.26550/303/3.5

Research Words of the Week: Reliability & Validity

Reliability & validity are terms that refer to the consistency and accuracy of a quantitative measurement tool: a questionnaire, a technical device, a ruler, or any other measuring device. They mean that the outcome measure can be trusted and is relatively error free.

  • Reliability – This means that the instrument measures CONSISTENTLY.
  • Validity – This means that the instrument measures ACCURATELY. In other words, it measures what it is supposed to measure and not something else.

For example: If your bathroom scale measures weight (e.g., it doesn’t measure BP or stress), then it is a valid measure of weight. You might say it has high validity. If your bathroom scale gives the same reading when you step on and off it several times, then it is measuring weight reliably, or consistently, and you might say it has high reliability.
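One common way researchers quantify the "step on and off several times" idea is test-retest reliability: measure the same people twice and correlate the two sets of scores. Below is a minimal sketch using a hand-rolled Pearson correlation; the weights are made-up illustration data.

```python
# A minimal sketch of test-retest reliability: a highly reliable
# instrument yields strongly correlated scores when the same subjects
# are measured twice. All weights are made-up illustration data.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

weigh_1 = [150, 182, 145, 201, 168]  # first weighing (lbs)
weigh_2 = [151, 181, 146, 200, 169]  # same people, stepping back on the scale

print("Test-retest r:", round(pearson_r(weigh_1, weigh_2), 3))
```

Here the two weighings nearly agree, so the correlation is close to 1, which is exactly what "high reliability" looks like in numbers. Note this says nothing about validity: a miscalibrated scale could be perfectly consistent and still wrong.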