Is It 2? Or 3?

Credible sources often disagree on technicalities. Sometimes this includes the classification of research designs. Some argue that there are only 2 categories of research design:

  1. True experiments. True experiments have 3 elements: 1) randomization to groups, 2) a control group, and 3) an intervention; and
  2. Non-experiments. Non-experiments may have 1 to none of those 3 elements.
Within-Subjects Control Group

Fundamentally, I agree with the above. But what about designs that include an intervention and a control group, but Not randomization?

Those may be called quasi-experiments; the most often performed quasi-experiment is pre/post testing of a single group. The control group is the subjects at baseline, and the experimental group is the same subjects after they receive a treatment or intervention. That means the control group is a within-subjects control group (as opposed to a between-groups control). Quasi-experiments can be used to answer cause-and-effect hypotheses when a true experiment may not be feasible or ethical.

One might even argue that a strength of pre/post quasi-experiments is that we do Not have to Assume that control and experimental groups are equivalent, an assumption we would make about subjects randomized (randomly assigned) to a control or experimental group. Instead, the control and experimental groups are exactly equivalent because they are the same persons (barring maturation of subjects and similar threats to validity that also apply to experiments).
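For readers who like to see the numbers, here is a minimal sketch of how data from a one-group pre/post quasi-experiment might be analyzed, with each subject serving as their own baseline control. The scores are invented for illustration, and the paired t-test (via Python's scipy) is just one reasonable choice of analysis:

```python
# Minimal sketch: one-group pre/post (within-subjects) comparison.
# The numbers below are hypothetical pain scores (0-10) for 8 subjects.
import numpy as np
from scipy import stats

baseline = np.array([7, 6, 8, 5, 7, 6, 9, 7])  # scores before the intervention
post     = np.array([5, 5, 6, 4, 6, 5, 7, 6])  # same subjects after the intervention

# A paired test compares each subject with themselves,
# which is exactly what the within-subjects control design provides.
t_stat, p_value = stats.ttest_rel(baseline, post)

print(f"Mean change: {np.mean(baseline - post):.2f} points")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because the comparison is within subjects, the analysis works on each person's own change score rather than on differences between two separate groups.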

I think using the term quasi-experiment makes it clear that persons in the study receive an intervention. Adding “pre/post” means that the researcher is using a single group of subjects as their own controls:

Baseline -> Intervention -> Post

I prefer to use the term non-experimental to mean a) descriptive studies (ones that just describe the situation) and b) correlational studies (ones without an intervention that look for whether one factor is related to another).

What do you think? 2? Or 3?

A practical place to start

Enrolled in an MSN program… and wondering what to do for an evidence-based clinical project?

Recently a former student contacted me about that very question. Part of my response to her is below:

“One good place to start if you are flexible on your topic is to look through Cochrane Reviews, Joanna Briggs Institute, AHRQ Clinical Practice Guidelines, or similar for very strong evidence on a particular topic and then work to move that into practice in some way. (e.g., right now I’m involved in a project on using evidence from a Cochrane review on the benefits of music listening (not music therapy) in improving patient outcomes like pain, mood, & opioid use).

Once you narrow the topic, it will get easier. Also, you can apply only the best evidence you have, so if there isn’t much research or other evidence about the topic you might have to tackle the problem from a different angle or pick an area where there IS enough evidence to apply.”

Blessings! -Dr.H

Challenges to "Medical dogma" – Practice your EBP skills

Medscape just came out with an Eric J. Topol article: 15 Studies that Challenged Medical Dogma in 2019. Check it out critically to practice your skills in applying evidence to practice. What are the implications for your practice? Are more or stronger studies needed before this overturning of dogma becomes simply more dogma? Are the resources and people’s readiness there for any warranted change? If not, what needs to happen? What are the risks of adopting these findings into practice?

Your thoughts? https://www.medscape.com/viewarticle/923150?src=soc_fb_share&fbclid=IwAR1SBNNVGW6BBWuKw7zBjhWIoQoMGtXZCy-BwpTTyavHSxmLleJuliKKG4A

Ho Ho How Do You Punctuate That?

Grammar Party


It’s getting to be that time of year when children close their eyes and fantasize about an old, fat man breaking into their house while they sleep naïvely in false security in their bedrooms.

“Ho! Ho! Ho!” the man says to himself as he places consumer goods under a tree that for some reason has been moved to their living room.

Wait. Perhaps he says “Ho ho ho!” instead. Just how many exclamation points does this slavemaster of reindeer use?

Let’s turn to the authorities. Here’s what Merriam-Webster has to say:

[Screenshot of the Merriam-Webster entry]

There you have it. Three hos and one exclamation point.

Ho ho ho! Merry Christmas (etc.) to you!

Erin Servais is a professional book editor who is really hoping she won’t get coal this Christmas. Learn more about how she can help you reach your publishing goals here: Dot and Dash website.


On Target all the time and every time!

“Measure twice. Cut once!” goes the old carpenter’s adage. Why? Because measuring accurately means you’ll get the outcomes you want!

Same in research. A consistent and accurate measurement will give you the outcomes you want to know about. Whether an instrument measures something consistently is called reliability. Whether it measures accurately is called validity. So, before you use a tool, check its reported reliability and validity.

A good resource for understanding the concepts of reliability (consistency) and validity (accuracy) of research tools is at https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/. Below are its quoted Key Takeaways:

  • Psychological researchers do not simply assume that their measures work. Instead, they conduct research to show that they work. If they cannot show that they work, they stop using them.
  • There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (interrater reliability). Validity is the extent to which the scores actually represent the variable they are intended to.
  • Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces are correlated with other variables they are expected to be correlated with and not correlated with variables that are conceptually distinct.
  • The reliability and validity of a measure is not established by any single study but by the pattern of results across multiple studies. The assessment of reliability and validity is an ongoing process.
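To make the consistency idea concrete, here is a minimal computational sketch of two of the reliability checks named above: test-retest reliability as a simple correlation, and internal consistency as Cronbach’s alpha (the usual statistic for consistency across items). The scale, item scores, and retest scores below are all invented for illustration:

```python
# Minimal sketch (hypothetical data): two common reliability estimates.
import numpy as np

def test_retest_reliability(time1, time2):
    """Test-retest reliability: correlation between two administrations of the same tool."""
    return np.corrcoef(time1, time2)[0, 1]

def cronbach_alpha(items):
    """Internal consistency (Cronbach's alpha) for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 5 respondents answer a 4-item scale, then answer it again later.
scale_time1 = np.array([
    [4, 4, 5, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
totals_time1 = scale_time1.sum(axis=1)
totals_time2 = totals_time1 + np.array([1, -1, 0, 1, 0])  # slightly different scores at retest

print("Cronbach's alpha (internal consistency):", round(cronbach_alpha(scale_time1), 2))
print("Test-retest correlation:", round(test_retest_reliability(totals_time1, totals_time2), 2))
```

High values on both (close to 1) suggest a consistent tool; consistency alone, though, does not prove the tool measures the right thing, which is why validity evidence is judged separately.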

“Two roads diverged in a yellow wood…” –R. Frost

TIME TO REPUBLISH THIS ONE:

Below is my adaptation of one of the clearest representations that I have ever seen of when the roads diverge into quality improvement, evidence-based practice, & research. Well done, Dr. E. Schenk, PhD, MHI, RN-BC!

[Image: QI-EBP-Research flow chart]

Your public persona: Name matters

This applies to you, current and future authors. (Don’t think you won’t be one someday!)

Try to find an author name you can stick with. You want people to easily find all your work. What to consider? What does the future hold? Here’s some help from a new online article in Nurse Author & Editor:

Making research accessible to RNs
