Research is not all white lab coats and test tubes. Simply put, research is a systematic way to ask and answer your questions by looking for patterns in new or existing data. Typical steps proceed clockwise in Figure 1 below.
In Figure 1 below, I’ve included the step of IRB review. Remember that an IRB (institutional review board, also known as a human subjects review board) must review all research procedures for compliance with federal ethical and legal rules before you begin any data collection or subject contact.
Research is using the scientific process to ask and answer questions by examining new or existing data for patterns. The data are measurements of variables of interest. The simplest definition of a variable is that it is something that varies, such as height, income, or country of origin. For example, a researcher might be interested in collecting data on triceps skin fold thickness to assess the nutritional status of preschool children. Skin fold thickness will vary.
Research is often categorized in terms of its data, design, broad aims, and logic.
Experimental research tries to answer “Why” questions by examining cause-and-effect connections (e.g., gum chewing after surgery speeds the return of bowel function; gum chewing is the potential cause, or “the why”).
Aims. Studies, too, may be either applied research or basic research. Applied research is when the overall purpose of the research is to uncover knowledge that may be immediately used in practice (e.g., whether a scheduled postpartum quiet time facilitates breastfeeding). In contrast, basic research is when the new knowledge has no immediate application (e.g., identifying receptors on a cell wall).
Logic. Study logic may be inductive or deductive. Inductive reasoning is used in qualitative research; it starts with specific bits of information and moves toward generalizations [e.g., This patient’s pain is reduced after listening to music (specific); that means that music listening reduces all patients’ pain (general)]. Deductive reasoning is typical of quantitative research; it starts with generalizations and moves toward specifics [e.g., If listening to music relaxes people (general), then it may reduce post-operative pain (specific)]. Of course, the logical conclusions in each case should be tested with research!
WHAT RESEARCH IS NOT:
Research as a scientific process is not going to the library or searching online to find information. It is also different from processes of applying research and non-research evidence to practice (called Evidence-Based Practice or EBP). And it is not the same as Quality Improvement (QI). See Two Roads Diverged for a flowchart to help differentiate research, QI and EBP.
Credible sources often disagree on technicalities. Sometimes this includes classification of research design. Some argue that there are only 2 categories of research design:
True experiments. True experiments have 3 elements: 1) randomization to groups, 2) a control group, and 3) an intervention; and
Non-experiments. Non-experiments may have one or none of those 3 elements.
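The first element, randomization to groups, can be sketched in a few lines of Python. This is only an illustration with made-up subject IDs; a real trial would use a vetted randomization procedure, not ad hoc code.

```python
import random

def randomize(subjects, seed=None):
    """Randomly assign subjects to a control group and an experimental group."""
    rng = random.Random(seed)
    shuffled = subjects[:]   # copy so the original roster is untouched
    rng.shuffle(shuffled)    # chance alone decides group membership
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "experimental": shuffled[half:]}

# Hypothetical subject IDs; a fixed seed is used here only so the
# example is reproducible.
groups = randomize(["S01", "S02", "S03", "S04", "S05", "S06"], seed=42)
print(groups)
```

The point of the shuffle is that neither the researcher nor the subjects choose the group, which is what lets us treat the two groups as equivalent at baseline.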
Fundamentally, I agree with the above. But what about designs that include an intervention and a control group, but Not randomization?
Those may be called quasi-experiments; the most often performed quasi-experiment is pre/post testing of a single group. The control group is the subjects at baseline, and the experimental group is the same subjects after they receive a treatment or intervention. That means the control group is a within-subjects control (as opposed to a between-groups control). Quasi-experiments can be used to answer cause-and-effect hypotheses when an experiment may not be feasible or ethical.
One might even argue that a strength of pre/post quasi-experiments is that we do Not have to Assume that the control and experimental groups are equivalent, an assumption we would make about subjects randomized (randomly assigned) to a control or experimental group. Instead, the control and experimental groups are exactly equivalent because they are the same persons (barring maturation of subjects and similar threats to validity, which also apply to experiments).
I think using the term quasi-experiment makes it clear that persons in the study receive an intervention. Adding “pre/post” means that the researcher is using a single group as its own control. I prefer to use the term non-experimental to mean a) descriptive studies (ones that just describe the situation) and b) correlation studies (ones without an intervention that look for whether one factor is related to another).
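The within-subjects logic of a pre/post design can be sketched with a few lines of Python. The pain scores below are made up for illustration; a real analysis would apply an appropriate paired statistical test rather than just a mean difference.

```python
from statistics import mean

# Hypothetical 0-10 pain ratings for the same five patients,
# before and after an intervention (within-subjects design).
pre  = [7, 6, 8, 5, 7]
post = [4, 5, 6, 3, 5]

# Each patient serves as their own control, so we analyze
# per-patient change rather than comparing two separate groups.
changes = [after - before for before, after in zip(pre, post)]
print("per-patient change:", changes)   # → [-3, -1, -2, -2, -2]
print("mean change:", mean(changes))    # → -2.0 (negative = pain reduced)
```

Because the same person appears in both conditions, each change score already subtracts out that person’s baseline, which is exactly the equivalence argument made above.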
Medscape just came out with an Eric J. Topol article: 15 Studies that Challenged Medical Dogma in 2019. Check it out critically to practice your skills in applying evidence to practice. What are the implications for your practice? Are more or stronger studies needed before this overturning of dogma becomes simply more dogma? Are the resources and people’s readiness there for any warranted change? If not, what needs to happen? What are the risks of adopting these findings into practice?
“Measure twice. Cut once!” goes the old carpenter adage. Why? Because measuring accurately means you’ll get the outcomes you want!
Same in research. Consistent and accurate measurement gets you the answers you want to know. Whether an instrument measures something consistently is called reliability. Whether it measures accurately is called validity. So, before you use a tool, check its reported reliability and validity.
Psychological researchers do not simply assume that their measures work. Instead, they conduct research to show that they work. If they cannot show that they work, they stop using them.
There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (interrater reliability). Validity is the extent to which the scores actually represent the variable they are intended to.
Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces are correlated with other variables they are expected to be correlated with and not correlated with variables that are conceptually distinct.
The reliability and validity of a measure are not established by any single study but by the pattern of results across multiple studies. The assessment of reliability and validity is an ongoing process.
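Two of the reliability indices named above can actually be computed by hand. Below is a minimal Python sketch, using made-up rating data: test-retest reliability as the correlation between scores at two time points, and internal consistency as Cronbach’s alpha (the standard formula, computed from item and total-score variances).

```python
from statistics import mean, pvariance

def pearson_r(x, y):
    """Test-retest reliability: correlation of scores at time 1 vs. time 2."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    return cov / (ssx * ssy) ** 0.5

def cronbach_alpha(items):
    """Internal consistency: `items` is a list of per-item score lists."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each person's total score
    item_var = sum(pvariance(i) for i in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 5-point ratings from four respondents.
time1 = [3, 4, 5, 2]
time2 = [3, 5, 4, 2]
print("test-retest r:", pearson_r(time1, time2))        # → 0.8

item_scores = [[3, 4, 5, 2], [3, 5, 4, 2], [4, 4, 5, 3]]  # 3 items x 4 people
print("Cronbach's alpha:", cronbach_alpha(item_scores))   # → 0.9
```

Both indices run from low to 1.0; the closer to 1.0, the more consistent the measure. What counts as "good enough" depends on the field and the stakes of the measurement.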
Below is my adaptation of one of the clearest representations I have ever seen of where the roads diverge into quality improvement, evidence-based practice, and research. Well done, Dr. E. Schenk, PhD, MHI, RN-BC!
TITLES!! That’s what you get when you search for research online!
But whether your search turns up 3 or 32,003 article titles… remember that a title tells you a LOT. In fact, if well written, it is a mini-abstract of the study.
For example, take the research article title “What patients with abdominal pain expect about pain relief in the Emergency Department” by Yee et al. in 2006 in JEN.
• Variable (key factor that varies)? Answer = Expectations about pain relief
• Population studied? Answer = ED patients with abdominal pain
• Setting? Answer = Maybe the ED (because they could’ve been surveyed after they got home or were admitted)
• Design? Answer = Not included, but you might guess that it is a descriptive study because it likely describes the patients’ expectations without any intervention.
A pilot study is to research what a trial balloon is to politics.
In politics, a trial balloon is communicating a law or policy idea via media to see how the intended audience reacts to it. A trial balloon does not answer the question, “Would this policy (or law) work?” Instead, a trial balloon answers questions like “Which people hate the idea of the policy/law–even if it would work?” or “What problems might enacting it create?” In other words, a trial balloon answers questions that a politician wants to know BEFORE implementing a policy, so that the policy or law can be tweaked to be successfully put in place.
In research, a pilot study is sort of like a trial balloon. It is “a small-scale test of the methods and procedures” of a planned full-scale study (Porta, Dictionary of Epidemiology, 5th edition, 2008). A pilot study answers questions that we want to know BEFORE doing a larger study, so that we can tweak the study plan and have a successful full-scale research project. A pilot study does NOT answer research questions or hypotheses, such as “Does this intervention work?” Instead, a pilot study answers the question “Are these research procedures workable?”
A pilot study asks and answers: “Can I recruit my target population? Can the treatments be delivered per protocol? Are study conditions acceptable to participants?” and so on. A pilot study should have specific, measurable benchmarks for feasibility testing. For example, if the pilot is finding out whether subjects will adhere to the study, then adherence might be defined as “70 percent of participants in each [group] will attend at least 8 of 12 scheduled group sessions.” Sample size is based on practical criteria such as budget, participant flow, and the number needed to answer feasibility questions (i.e., questions about whether the study is workable).
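Checking a feasibility benchmark like that is simple arithmetic. Here is a minimal Python sketch with made-up attendance counts, using the 70%-attend-at-least-8-of-12 benchmark quoted above:

```python
# Hypothetical attendance counts, one entry per pilot participant,
# out of 12 scheduled group sessions.
sessions_attended = [12, 9, 7, 10, 8, 11, 5, 8, 12, 6]

# Benchmark: at least 70% of participants attend at least 8 of 12 sessions.
adherent = sum(1 for s in sessions_attended if s >= 8)
rate = adherent / len(sessions_attended)
benchmark_met = rate >= 0.70
print(f"adherence rate: {rate:.0%}, benchmark met: {benchmark_met}")
# → adherence rate: 70%, benchmark met: True
```

Note that this answers only the feasibility question "will subjects adhere?", not whether the intervention works; that distinction is exactly the point of the list that follows.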
A pilot study does NOT: test hypotheses (even preliminarily); use inferential statistics; estimate effect size; or assess or demonstrate the safety of an intervention.