Category Archives: Research design

Pilot Studies: Look before you leap! (a priori vs. post hoc)

Why does it matter if a study is labeled a “pilot”?

SHORT ANSWER: …because a pilot is about testing research methods, not about answering research questions.

If a project has “pilot” in the title, then you as a reader should expect a study that examines whether certain research methods work (methodologic research). Methods include things like timing of data collection, sampling strategies, length of questionnaire, and so on. Pilots suggest what methods will work effectively to answer researchers’ questions. Advance prep in methods makes for a smooth research landing.

Small sample = Pilot? A PILOT is defined by its study goals and design–not its sample size. Of course pilots typically have small samples, but a small sample does not a pilot study make. Sometimes journals may tempt researchers to call their study a pilot because of its small sample. Don’t go there. Doing so means making after-the-fact (post hoc) claims about goals and design that were not the original, a priori goals and design.

Practical problems? If researchers label a study a “pilot” after it is completed (post hoc), they raise practical & ethical issues. At a practical level, they must invent feasibility questions & answers after the fact (see NIH), and they should drop the data analysis that answers their original research questions.

Ethics? Relabeling a study post hoc requires researchers either 1) to say they planned something that they didn’t, or 2) to take additional action. Additional action may mean complete transparency about the change and seeking modification of the original human subjects’ committee approvals. An example of one human subjects issue: you informed your subjects that their data would answer a particular research question, and now you want to use their data to answer something else–methods questions!

Options? You can simply learn from your small study and then go for a bigger one with improved methods. Some journals will consider publishing innovative studies even when they are small.

Look first, then leap: Better to look a priori, before leaping. If you think you might have trouble with your methods, design a pilot. If you make the unpleasant discovery that your methods didn’t work as you hoped, you can 1) disseminate your results anyway or 2) rethink the ethical and practical issues above.

Who’s with me? The National Institutes of Health agrees: https://nccih.nih.gov/grants/whatnccihfunds/pilot_studies . NIH notes that common misuses of pilots include determining safety, intervention efficacy, and effect size.

Who disagrees? McGrath argues that clinical pilots MAY test safety and efficacy, as well as feasibility. (See McGrath, J. M. (2013). Not all studies with small samples are pilot studies, Journal of Perinatal & Neonatal Nursing, 27(4): 281-283. doi: 10.1097/01.JPN.0000437186.01731.bc )

Trial Balloons & Pilot Studies

A pilot study is to research what a trial balloon is to politics

In politics, a trial balloon is floating a law or policy idea via the media to see how the intended audience reacts to it.  A trial balloon does not answer the question, “Would this policy (or law) work?” Instead, a trial balloon answers questions like “Which people hate the idea of the policy/law–even if it would work?” or “What problems might enacting it create?” In other words, a trial balloon answers questions that a politician wants answered BEFORE implementing a policy, so that the policy or law can be tweaked to be successfully put in place.


In research, a pilot study is sort of like a trial balloon. It is “a small-scale test of the methods and procedures” of a planned full-scale study (Porta, Dictionary of Epidemiology, 5th edition, 2008). A pilot study answers questions that we want to know BEFORE doing a larger study, so that we can tweak the study plan and have a successful full-scale research project. A pilot study does NOT answer research questions or hypotheses, such as “Does this intervention work?”  Instead a pilot study answers the question “Are these research procedures workable?”

A pilot study asks & answers: “Can I recruit my target population? Can the treatments be delivered per protocol? Are study conditions acceptable to participants?” and so on.  A pilot study should have specific, measurable benchmarks for feasibility testing. For example, if the pilot is finding out whether subjects will adhere to the study, then adherence might be defined as “70 percent of participants in each [group] will attend at least 8 of 12 scheduled group sessions.”  Sample size is based on practical criteria such as budget, participant flow, and the number needed to answer feasibility questions (i.e., questions about whether the study is workable).
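For readers who like to see the arithmetic, here is a minimal Python sketch of how a pre-specified feasibility benchmark like the one above might be checked against pilot data. The attendance counts and group names are hypothetical placeholders, not data from any real study.

```python
# Minimal sketch: checking a pilot feasibility benchmark against hypothetical attendance data.
# Benchmark (from the example above): at least 70% of participants in each group
# attend at least 8 of 12 scheduled group sessions.

# Hypothetical session counts per participant, by group (illustrative numbers only).
sessions_attended = {
    "intervention": [12, 9, 7, 11, 8, 10, 5, 12],
    "control":      [8, 6, 12, 9, 10, 7, 11, 9],
}

REQUIRED_SESSIONS = 8         # a participant "adheres" if they attend at least this many
BENCHMARK_PROPORTION = 0.70   # the group meets the benchmark if >= 70% adhere

for group, counts in sessions_attended.items():
    adherent = sum(1 for n in counts if n >= REQUIRED_SESSIONS)
    proportion = adherent / len(counts)
    met = proportion >= BENCHMARK_PROPORTION
    print(f"{group}: {proportion:.0%} adherent -> benchmark {'met' if met else 'NOT met'}")
```

The point of the sketch is simply that a feasibility benchmark is a yes/no check against a number stated in advance, not a hypothesis test.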

A pilot study does NOT test hypotheses (even preliminarily); use inferential statistics; assess the safety or efficacy of an intervention; or estimate effect size.

A pilot study is not just a small study.

Next blog: Why this matters!!

For more info read the source of all quotes in this blog: Pilot Studies: Common Uses and Misuses @ https://nccih.nih.gov/grants/whatnccihfunds/pilot_studies

After taste…I mean “after test”

Let’s say you want to find out how well students think they learned theory in your class.

One option is to do a pre/post test: You distribute the same survey before and after the class, asking students to rate on a 1-4 scale how well they think they know the new material. Then you compare their ratings.

Another option is to do a posttest only: You could give them a survey after the class that asks them to rate on a 1-4 scale their knowledge before the class and on a 1-4 scale their knowledge now. Then you compare their ratings.

One research option is stronger than the other.  Which one is it, and why?  (Hint: think retrospective/prospective.)

Of Mice and Cheese: Research with Non-equivalent Groups

Reposting. Enjoy the review. -Dr.H

Discovering Your Inner Scientist

Last week’s blog focused on the strongest types of evidence that you might find when trying to solve a clinical problem. These are: #1 Systematic reviews, Meta-analyses, or Evidence-based clinical practice guidelines based on systematic review of RCTs; & #2 Randomized controlled trials. (For levels of evidence from strongest to weakest, see blog “I like my coffee (and my evidence) strong!”)

So after the two strongest levels of evidence, what is the next strongest? The #3 level is controlled trials without randomization (sometimes called quasi-experimental studies).

Here’s an example of a controlled trial without randomization: I take two groups of mice and test two types of cheese to find out which one mice like best. I do NOT randomly assign the mice to groups. The experimental group #1 loved Swiss cheese, & the control group #2 refused to eat the cheddar. I assume confidently that mice LOVE Swiss cheese…


What IS research!!??

WHAT IS RESEARCH?   Take < three minutes to check out: https://www.youtube.com/watch?v=v50ct9xJVKE .  Listen for what research is and 2 basic ways to approach the answers to a research question: “Why is the sky blue?”

CRITICAL THINKING:  What is a recent problem you’ve experienced in clinical practice?  Write out a positivist research question and an interpretivist research question related to that same clinical problem.

“Should you? Can you?”

Quasi-experiments are a lot of work, yet they don’t have the same scientific power to show cause and effect as randomized controlled trials (RCTs) do.   An RCT would provide better support for any hypothesis that X causes Y.   [As a quick review of what quasi-experimental versus RCT studies are, see “Of Mice & Cheese” and/or “Out of Control (Groups).”]

So why do quasi-experimental studies at all?  Why not always do RCTs when we are testing cause and effect?  Here are 3 reasons:

#1  Sometimes ETHICALLY the researcher canNOT randomly assign subjects to a control and an experimental group.  If the researcher wants to compare health outcomes of smokers with non-smokers, the researcher cannot assign some people to smoke and others not to smoke!  Why?  Because we already know that smoking has significant harmful effects. (Of course, in a dictatorship a researcher could use the police to assign people to smoke or not smoke, but I don’t think we wanna go there.)

#2 Sometimes PHYSICALLY the researcher canNOT randomly assign subjects to control & experimental groups.  If the researcher wants to compare health outcomes of individuals from different countries, it is physically impossible to assign country of origin.

#3 Sometimes FINANCIALLY the researcher canNOT afford to assign subjects randomly to control & experimental groups.  It costs $ & time to get a list of subjects and then assign them to control & experimental groups using a random numbers table or drawing names from a hat (see the sketch below).

Thus, researchers sometimes are left with little alternative but to do a quasi-experiment as the next best thing to an RCT, and then discuss its limitations in their research reports.
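If you are curious what “drawing names from a hat” looks like in practice, here is a minimal Python sketch of simple random assignment. The subject IDs and group sizes are made up for illustration; real trials typically use dedicated randomization tables or software.

```python
import random

# Minimal sketch: randomly assigning recruited subjects to control and experimental
# groups -- the computational equivalent of drawing names from a hat.
# The subject IDs below are hypothetical placeholders.

subjects = [f"subject_{i:02d}" for i in range(1, 21)]  # 20 recruited subjects

random.seed(42)           # fixed seed so the assignment can be documented and reproduced
random.shuffle(subjects)  # "shake the hat"

half = len(subjects) // 2
control_group = subjects[:half]
experimental_group = subjects[half:]

print("Control:     ", control_group)
print("Experimental:", experimental_group)
```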

Critical Thinking: You read a research study in which a researcher recruits the 1st 100 patients on a surgical ward during the January-March quarter as a control group.  Then the researcher recruits the next 100 patients on that same surgical ward during April-June as the experimental group.  With the experimental group, the staff uses a new, standardized pain script for better pain communication.  Then the pain communication outcomes of the two groups are compared statistically.

  • Is this a quasi-experiment or a randomized controlled trial (RCT)?
  • What factors (variables) might be the same among control & experimental groups in this study?
  • What factors (variables) might be different between control & experimental groups that might affect study outcomes?
  • How could you design an ethical & possible RCT that would overcome the problems with this study?
  • Why might you choose to do the study the same way that this researcher did?

For more info: see “Of Mice & Cheese” and/or “Out of Control (Groups).”

OUT OF CONTROL (groups)! The weak link in the cause-&-effect chain

Welcome back after a bit of silence on my end!

In the last “Quasi-wha??” blogpost, I described 1 type of experimental design: Quasi-experimental.  To review… In quasi-experimental designs, the researcher manipulates some variable, but either 1) doesn’t randomly assign subjects to a control and experimental group OR 2) doesn’t have a control group at all.

For example, the researcher may introduce pet therapy on unit #1 and avoid pet therapy on unit #2 and then afterwards compare the anxiety levels of patients on the 2 units.  That study has a control group (unit #2), but because patients weren’t (& probably couldn’t be) randomly assigned to the units, this would be a quasi-experimental study. The control group in this pet therapy case is what researchers call a “non-equivalent control group.”   Non-equivalent means the groups are different in ways that might affect study results! [Note: For review of what constitutes a true experimental study see first part of  “Quasi-wha??” blogpost.]

Herein lies a weak link in the cause-and-effect chain. Quasi-experimental designs are NOT as strong as true experimental designs because something other than our treatment (in this case pet therapy) may have created any difference in outcomes (e.g., anxiety levels).  Why?  Here’s your answer.

In an experimental study, randomly assigning subjects to a control and a separate experimental group means that all the little, variable weirdities of all subjects are equally distributed to each group. Each group is the same mix of different types of people. This means we can assume that both groups are the exact same type of people in regard to things that may influence study outcomes, such as attitudes, values, preferences, beliefs, anxiety level, psychology, physiology, and so on.

[Image: Unit #1 = Apples.  Unit #2 = Oranges]

In contrast, in the quasi-experimental pet therapy example above, there is probably something that caused a certain type of person to be on unit #1 and a different type to be on unit #2.  Maybe it was their diagnosis, their doctor, their type of surgery, or something else.  Thus, we cannot assume that the people in the unit #1 and unit #2 groups were the same before pet therapy, and so any differences between them after pet therapy might have already existed.
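To make that concrete, here is a minimal Python simulation using made-up anxiety scores. It assumes (purely for illustration) that unit #1 happens to hold more anxious patients than unit #2; the numbers are invented, not from any study. The point is that groups formed by unit membership can differ at baseline, while randomly assigning the same pooled patients tends to balance the groups before any pet therapy happens.

```python
import random
import statistics

# Made-up baseline anxiety scores: unit #1 holds a more anxious mix of patients
# (say, because of diagnosis or type of surgery), unit #2 a calmer mix.
random.seed(1)
unit_1 = [random.gauss(70, 10) for _ in range(50)]
unit_2 = [random.gauss(55, 10) for _ in range(50)]

print("Non-equivalent groups (grouped by unit):")
print(f"  unit #1 mean baseline anxiety: {statistics.mean(unit_1):.1f}")
print(f"  unit #2 mean baseline anxiety: {statistics.mean(unit_2):.1f}")

# Pool everyone and randomly assign them instead: the baseline difference tends
# to wash out, so any post-treatment difference is easier to attribute to the
# treatment itself rather than to who was already on which unit.
everyone = unit_1 + unit_2
random.shuffle(everyone)
group_a, group_b = everyone[:50], everyone[50:]

print("Randomly assigned groups (same patients, reshuffled):")
print(f"  group A mean baseline anxiety: {statistics.mean(group_a):.1f}")
print(f"  group B mean baseline anxiety: {statistics.mean(group_b):.1f}")
```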

So why do quasi-experimental studies at all?? There are great reasons!  Stay tuned for next blogpost.

Critical thinking: Check out the free full-text quasi-experiment by Gough et al. (2017), Tweet for Behavior Change: Using Social Media for the Dissemination of Public Health Messages.

  1. What makes this a quasi-experimental design?  [Hint: Does it have a control group? Were subjects randomly assigned to groups?  Are both randomization & control group missing?]
  2. What might have caused the change in behavior, instead of the tweets? 
  3. What contribution do you think the study makes to improving practice?

For more information on studies with non-randomized control groups see “Of Mice & Cheese”  or comment below.  Let’s talk!