A pilot study is to research what a trial balloon is to politics.
In politics, a trial balloon is communicating a law or policy idea via the media to see how the intended audience reacts to it. A trial balloon does not answer the question, “Would this policy (or law) work?” Instead, a trial balloon answers questions like “Which people hate the idea of the policy or law, even if it would work?” or “What problems might enacting it create?” In other words, a trial balloon answers questions that a politician wants answered BEFORE implementing a policy, so that the policy or law can be tweaked and successfully put in place.
In research, a pilot study is sort of like a trial balloon. It is “a small-scale test of the methods and procedures” of a planned full-scale study (Porta, Dictionary of Epidemiology, 5th edition, 2008). A pilot study answers questions that we want to know BEFORE doing a larger study, so that we can tweak the study plan and have a successful full-scale research project. A pilot study does NOT answer research questions or hypotheses, such as “Does this intervention work?” Instead, a pilot study answers the question “Are these research procedures workable?”
A pilot study asks & answers: “Can I recruit my target population? Can the treatments be delivered per protocol? Are study conditions acceptable to participants?” and so on. A pilot study should have specific, measurable benchmarks for feasibility testing. For example, if the pilot is finding out whether subjects will adhere to the study, then adherence might be defined as “70 percent of participants in each [group] will attend at least 8 of 12 scheduled group sessions.” Sample size is based on practical criteria such as budget, participant flow, and the number needed to answer feasibility questions (i.e., questions about whether the study is workable).
A pilot study does NOT: test hypotheses (even preliminarily); use inferential statistics; estimate effect size; or demonstrate the safety of a treatment or intervention.
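As a minimal sketch of how a feasibility benchmark like the adherence example above can be checked, here is a short Python snippet. The attendance numbers, group names, and helper function are all invented for illustration, not taken from any real pilot:

```python
# Hypothetical pilot data: sessions attended (out of 12) per participant, by group.
attendance = {
    "intervention": [12, 9, 8, 11, 5, 10, 8, 12, 7, 9],
    "control":      [10, 8, 12, 6, 9, 11, 8, 10, 12, 9],
}

BENCHMARK = 0.70   # benchmark: 70% of participants in each group...
MIN_SESSIONS = 8   # ...must attend at least 8 of 12 scheduled sessions


def group_adherence(sessions):
    """Proportion of participants meeting the session minimum."""
    return sum(s >= MIN_SESSIONS for s in sessions) / len(sessions)


for group, sessions in attendance.items():
    rate = group_adherence(sessions)
    status = "MET" if rate >= BENCHMARK else "NOT MET"
    print(f"{group}: {rate:.0%} adherence -> benchmark {status}")
```

Notice that the output is simply “benchmark met or not met” for each group; there is no p-value or hypothesis test anywhere, which matches what a pilot study is for.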
Let’s say you want to find out how well students think they learned theory in your class.
One option is to do a pre/post test: You distribute the same survey before and after the class, asking students to rate on a 1-4 scale how well they think they know the new material. Then you compare their ratings.
Another option is to do a posttest only: You give them a single survey after the class that asks them to rate, on the same 1-4 scale, their knowledge before the class and their knowledge now. Then you compare their ratings.
One research option is stronger than the other. Which one is it, and why? (Hint: think retrospective/prospective.)
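One thing worth noticing before you answer: the arithmetic is identical in both options. Here is a rough illustration with invented 1-4 ratings for five hypothetical students; the only thing that differs between the two designs is WHEN the “before” ratings are collected, not how they are compared:

```python
# Hypothetical self-ratings (1-4 scale) for five students.
# Pre/post design: "before" is collected before class (prospective).
# Posttest-only design: "before" is recalled after class (retrospective).
before = [2, 1, 3, 2, 2]
after  = [3, 3, 4, 3, 4]

# Per-student change and the average change across the class.
changes = [a - b for b, a in zip(before, after)]
mean_change = sum(changes) / len(changes)

print(f"Per-student change: {changes}")
print(f"Mean change in self-rated knowledge: {mean_change:.1f}")
```

Since the comparison itself is the same either way, the strength of each design rests entirely on how trustworthy the “before” numbers are.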
I’m not a New Year’s resolution person. I used to be and then I realized that I wanted to hit the restart button more often than every 365 days. So…my aim for this blog remains pretty much unchanged: Make research processes and ideas understandable for every RN.
Although “to be simple is difficult,” that’s my goal. Let me know what’s difficult for you in research, because it probably is for others as well. Let’s work on the difficult together so that you can use the BEST Evidence in your practice.
The 2019 journey begins today, and tomorrow, and the tomorrows after that!
FOR MORE: Go to PubMed. Search for a topic of interest. Send me the article & we’ll critique together.
“OBJECTIVE: To determine which factors influence whether Santa Claus will visit children in hospital on Christmas Day.
DESIGN: Retrospective observational study.
SETTING: Paediatric wards in England, Northern Ireland, Scotland, and Wales.
PARTICIPANTS: 186 members of staff who worked on the paediatric wards (n=186) during Christmas 2015.
MAIN OUTCOME MEASURES: Presence or absence of Santa Claus on the paediatric ward during Christmas 2015. This was correlated with rates of absenteeism from primary school, conviction rates in young people (aged 10-17 years), distance from hospital to North Pole (closest city or town to the hospital in kilometres, as the reindeer flies), and contextual socioeconomic deprivation (index of multiple deprivation).
RESULTS: Santa Claus visited most of the paediatric wards in all four countries: 89% in England, 100% in Northern Ireland, 93% in Scotland, and 92% in Wales. The odds of him not visiting, however, were significantly higher for paediatric wards in areas of higher socioeconomic deprivation in England (odds ratio 1.31 (95% confidence interval 1.04 to 1.71) in England, 1.23 (1.00 to 1.54) in the UK). In contrast, there was no correlation with school absenteeism, conviction rates, or distance to the North Pole.
CONCLUSION: The results of this study dispel the traditional belief that Santa Claus rewards children based on how nice or naughty they have been in the previous year. Santa Claus is less likely to visit children in hospitals in the most deprived areas. Potential solutions include a review of Santa’s contract or employment of local Santas in poorly represented regions.” Park et al. BMJ. 2016 Dec 14;355:i6355. doi: 10.1136/bmj.i6355.
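Before you tackle the questions below, it may help to see where an “odds ratio 1.31 (95% confidence interval 1.04 to 1.71)” comes from. Here is a minimal sketch using invented counts (NOT the study’s data) of how an odds ratio and its 95% confidence interval are computed from a 2x2 table, using the standard log-odds (Woolf) method:

```python
import math

# Hypothetical 2x2 table (invented counts, not from Park et al.):
#                        no visit   visit
# high deprivation wards    a=12     b=88
# low deprivation wards     c=6      d=94
a, b, c, d = 12, 88, 6, 94

# Odds of "no visit" in high- vs. low-deprivation wards.
odds_ratio = (a * d) / (b * c)

# 95% CI built on the log-odds scale, then converted back.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```

An odds ratio above 1 with a confidence interval that excludes 1 (as in the study’s England estimate, 1.04 to 1.71) is what lets the authors call the result statistically significant.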
How would you translate this into practice? Questions to help you with this endeavor: Where does this retrospective, observational research fall on the evidence hierarchy? Is it quantitative or qualitative research? Experimental or non-experimental research? How generalizable is this research? What are the risks, resources, and readiness of people in potentially using the findings (Stetler & Marram, 1996; Stetler, 2001)? What might happen if you try to apply the abstract information to practice without reading the full article? Do you think the project done in Europe is readily applicable to America? What would be the next level of research that you might undertake to better confirm these findings?
Self-report by participants is one of the most common ways that researchers collect data, yet it is fraught with problems. Some worries for researchers are: “Will participants be honest or will they say what they think I want to hear?” “Will they understand the questions correctly?” “Will those who respond (as opposed to those who don’t respond) have unique ways of thinking so that my respondents do not represent everyone well?” and a BIG worry “Will they even fill out and return the questionnaire?”
One way to solve at least the latter two problems is to increase the response rate, and Edwards et al. (2009) reviewed randomized trials to learn how to do just that!
If you want to improve your questionnaire response rates, check it out! Here is Edwards et al.’s plain language summary as published in the Cochrane Database of Systematic Reviews, where you can read the entire report.
Postal and electronic questionnaires are a relatively inexpensive way to collect information from people for research purposes. If people do not reply (so called ‘non-responders’), the research results will tend to be less accurate. This systematic review found several ways to increase response. People can be contacted before they are sent a postal questionnaire. Postal questionnaires can be sent by first class post or recorded delivery, and a stamped-return envelope can be provided. Questionnaires, letters and e-mails can be made more personal, and preferably kept short. Incentives can be offered, for example, a small amount of money with a postal questionnaire. One or more reminders can be sent with a copy of the questionnaire to people who do not reply.
Critical/reflective thinking: Imagine that you were asked to participate in a survey. Which of these strategies do you think would motivate or remind you to respond and why?