Category Archives: research methods

Essentials for Clinical Researchers

[Note: bonus 20% book discount from the publisher; see the flyer below.]

My 2025 book, Doing Research, is a user-friendly guide, not a comprehensive text. Chapter 1 gives a dozen tips to get started, Chapter 2 defines research, and Chapters 3-9 focus on planning. The remaining chapters, 10-12, guide you through the challenges of conducting a study, getting answers from the data, and sharing with others what you learned. Italicized key terms are defined in the glossary, and a bibliography lists additional resources.

New book: “Doing Research: A Practical Guide”

Author: Martha “Marty” E. Farrar Highfield

NOW AVAILABLE ELECTRONICALLY & SOON IN PRINT.

CHECK OUT: https://link.springer.com/book/10.1007/978-3-031-79044-7

This book provides a step-by-step summary of how to do clinical research. It explains what research is and isn’t, where to begin and end, and the meaning of key terms. A project-planning worksheet is included, which readers can use to develop a research protocol as they work through the book. The purpose of this book is to empower curious clinicians who want data-based answers.

Doing Research is a concise, user-friendly guide to conducting research, rather than a comprehensive research text. The book contains 12 main chapters followed by the protocol worksheet. Chapter 1 offers a dozen tips to get started, Chapter 2 defines research, and Chapters 3-9 focus on planning. Chapters 10-12 then guide readers through challenges of conducting a study, getting answers from the data, and disseminating results. Useful key points, tips, and alerts appear throughout the book to advise and encourage readers.

New Book Strives to Make the Difficult Simple

Doing Research: A practical guide for health professionals, a new book by Martha E. Farrar Highfield, is in press with Springer Nature. Release date: February 1, 2025 (preorder available).

Practical, brief, and affordable, Doing Research is for residents, nurses, chaplains, and other clinicians.

Written in an informal, friendly style, this book makes the difficult simple.

The purpose of Doing Research is to empower curious clinicians to conduct research alongside a mentor, even when they lack prior research experience or formal training.

Doing Research presents practical steps for conducting a study from beginning to end. It begins with “a dozen tips” to get started, then moves to study planning, conduct, and dissemination of results. A worksheet for writing your research plan (protocol) is included. Research terms and processes are explained, including what research is and is not. Tips & Alerts provide a “reassuring voice” while alerting readers to common missteps.

Primer on Design: Part 3 – Mixing UP Methods

QUICK REVIEW: Research design is the overall plan for a study. And…there are 2 main types of design: 1) non-experiments, in which the researcher observes and documents what exists, and 2) experiments, in which the researcher tries out an intervention and measures outcomes.

NEW INFO: Two non-experimental research designs that are often confused with one another are 1) cohort studies and 2) case-control studies. Epidemiologists often use these designs to study large populations.

In a cohort study, a group of participants who were exposed to a presumed cause of disease or injury is followed forward in time (prospectively) to identify emerging health issues. Researchers may also look at the group’s past (retrospectively) to determine the amount of exposure that is related to health outcomes.

In contrast, in a case-control study, researchers identify participants with a disease or condition (cases) and others without it (controls), then look back (retrospectively) to compare the two groups’ exposure to a presumed cause.
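To make the direction-of-inquiry difference concrete, here is a minimal sketch in Python using hypothetical counts (not drawn from any real study). A cohort design starts from exposure and follows forward, so you can compute disease incidence and a relative risk; a case-control design starts from disease status and looks back, so you compare exposure odds instead.

```python
# Hypothetical 2x2 counts; illustration only, not data from any real study.

# Cohort study: start with exposure status, follow forward, compare incidence.
exposed_ill, exposed_well = 30, 70        # 100 exposed participants
unexposed_ill, unexposed_well = 10, 90    # 100 unexposed participants

risk_exposed = exposed_ill / (exposed_ill + exposed_well)          # 0.30
risk_unexposed = unexposed_ill / (unexposed_ill + unexposed_well)  # 0.10
print(f"Cohort relative risk: {risk_exposed / risk_unexposed:.2f}")  # 3.00

# Case-control study: start with disease status, look back, compare exposure odds.
cases_exposed, cases_unexposed = 30, 10        # participants with the condition
controls_exposed, controls_unexposed = 70, 90  # participants without it

odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(f"Case-control odds ratio: {odds_ratio:.2f}")  # 3.86
```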

EXAMPLES?

  1. Martinez-Calderon et al. (2017). Influence of psychological factors on the prognosis of chronic shoulder pain: Protocol for a prospective cohort study. BMJ Open, 7. doi: 10.1136/bmjopen-2016-012822
  2. Smith et al. (2019). An outbreak of hepatitis A in Canada: The use of a control bank to conduct a case-control study. Epidemiology & Infection, 147. doi: 10.1017/S0950268819001870

CRITICAL THINKING: Do you work with a group that has an interesting history of exposure to some potential cause of disease or injury? Which of the above designs do you find more appealing, and why?

The Whole Picture: Mixed Methods Design

Mixed methods (MM) research provides a more complete picture of reality by including both complementary quantitative and qualitative data.

A clinical analogy for MM research is asking patients to rate their pain numerically on a 0–10 scale and then to describe the pain character in words.

MM researchers sometimes include both experimental hypotheses and non-experimental research questions in the same study.


Common MM subtypes are shown in the table below. In concurrent designs, investigators collect all data at the same time; in sequential designs, they collect one type of data before the other. In triangulated MM, all data receive equal weight, but in embedded designs, such as a large RCT in which only a small subset of RCT participants are interviewed, the main study data are weighted more heavily. In sequential MM, researchers give more weight to whichever type of data was collected first: qualitative data in exploratory designs and quantitative data in explanatory designs.

FOR MORE INFO: WHAT IS MIXED METHODS RESEARCH? – Dr. John Creswell

MM DESIGN                      EQUALLY WEIGHTED DATA    PRIORITY-WEIGHTED DATA
Concurrent data collection:
  *Triangulation               All data
  *Embedded                                             Main study data
Sequential data collection:
  *Exploratory                                          Qualitative data
  *Explanatory                                          Quantitative data

TYPES OF MM DESIGN: Concurrent & Sequential

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted to read only the Introduction & Background and then go straight to the Discussion, Implications, and Conclusions at the end, skipping all those pesky procedures, numbers, and p values in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale measured your weight erratically each a.m., you would probably toss it and find a more reliable and valid bathroom scale. The data from that old scale would be useless for learning how much you weighed. Similarly, in research the researcher wants useful outcome data, and to get that quality data the researcher must collect it with a measurement instrument that measures consistently (reliability) and measures what it claims to measure (validity). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: Researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. And whatever instrument the researcher uses needs to yield accurate data.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that measures systolic and diastolic BP without a lot of artifact or interference. That accuracy in measuring BP, and only BP, is called instrument validity. And if I take your BP 3 times in a row, I should get basically the same answer; that consistency is called instrument reliability. I must also use the cuff as intended (correct cuff size and placement) in order to get quality data that reflect the subject’s actual BP.
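To show what that consistency looks like in numbers, here is a minimal sketch (hypothetical BP readings, Python with SciPy). Test-retest reliability, one of the “research-y” terms above, is commonly estimated as the correlation between repeated measurements taken with the same instrument.

```python
from scipy import stats

# Hypothetical systolic BP readings: the same 6 subjects measured twice
# with the same cuff under the same conditions.
first_reading = [118, 132, 125, 140, 110, 128]
second_reading = [120, 130, 127, 138, 112, 126]

# Test-retest reliability: correlation between the repeated measurements.
# Values near 1 indicate a consistent (reliable) instrument.
r, p = stats.pearsonr(first_reading, second_reading)
print(f"Test-retest reliability: r = {r:.2f}")
```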

The same thing is true with questionnaires and other measurement tools. A researcher must use an instrument for its intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not their pain, depression, or anxiety); in other words, it should have instrument validity. It should measure stress without a lot of artifact or interference from other states of mind.

NO instrument is 100% valid; validity is a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures other things besides stress (and it will), it is less valid. The question to ask is not whether an instrument is valid but “How valid is it?”, often answered on a 0-to-1 scale with 1 being unachievable perfection. The same issue and question apply to reliability.

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Or, an instrument can consistently (reliably) measure the wrong thing–that is, it can measure something other than what the researcher intended to measure. Research instruments need both strong reliability AND validity to be most useful; they need to measure the outcome variable of interest consistently.

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures

Fun Practice: In your own words, relate the following article excerpt to the concept of validity: “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring.” From p. 3 of Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4

Is It 2? Or 3?

Credible sources often disagree on technicalities, sometimes including the classification of research design. Some argue that there are only 2 categories of research design:

  1. True experiments. True experiments have 3 elements: 1) randomization to groups, 2) a control group, and 3) an intervention; and
  2. Non-experiments. Non-experiments may have some, but not all, of those 3 elements.
Within-subject Control Group

Fundamentally, I agree with the above. But what about designs that include an intervention and a control group, but Not randomization?

Those may be called quasi-experiments; the most often performed quasi-experiment is pre/post testing of a single group. The control group consists of the subjects at baseline, and the experimental group is the same subjects after they receive a treatment or intervention. That means the control group is a within-subjects control group (as opposed to a between-groups control). Quasi-experiments can be used to answer cause-and-effect hypotheses when a true experiment may not be feasible or ethical.

One might even argue that a strength of pre/post quasi-experiments is that we do NOT have to assume that the control and experimental groups are equivalent, an assumption we would make about subjects randomized (randomly assigned) to a control or experimental group. Instead, the control and experimental groups are exactly equivalent because they are the same persons (barring maturation of subjects and similar threats to validity, which also apply to experiments).

Baseline -> Intervention -> Post

I think using the term quasi-experiment makes it clear that persons in the study receive an intervention, and adding “pre/post” makes it clear that the researcher is using a single group as its own control. I prefer to use the term non-experimental to mean a) descriptive studies (ones that just describe the situation) and b) correlational studies (ones without an intervention that look for whether one factor is related to another).
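As an illustration only (hypothetical pain scores, Python with SciPy), and not a prescription for analysis: a single-group pre/post quasi-experiment is commonly analyzed with a paired, within-subjects test, in which each subject is compared with themselves.

```python
from scipy import stats

# Hypothetical pain scores (0-10) for the same 8 subjects before and after
# an intervention; each subject serves as their own control.
pre = [7, 6, 8, 5, 7, 9, 6, 8]
post = [5, 4, 6, 5, 5, 7, 4, 6]

# Paired (within-subjects) t-test: compares each subject with themselves,
# so no assumption of between-group equivalence is required.
result = stats.ttest_rel(pre, post)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```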

What do you think? 2? or 3?

A practical place to start

Enrolled in an MSN…and wondering what to do for an evidence-based clinical project?

Recently a former student contacted me about that very question. Part of my response to her is below:

“One good place to start, if you are flexible on your topic, is to look through Cochrane Reviews, Joanna Briggs Institute, AHRQ Clinical Practice Guidelines, or similar sources for very strong evidence on a particular topic, and then work to move that evidence into practice in some way. (E.g., right now I’m involved in a project using evidence from a Cochrane review on the benefits of music listening, not music therapy, in improving patient outcomes like pain, mood, & opioid use.)

Once you narrow the topic it will get easier. Also, you can apply only the best evidence you have, so if there isn’t much research or other evidence about the topic, you might have to tackle the problem from a different angle or pick an area where there IS enough evidence to apply.”

Blessings! -Dr.H

Pilot Studies: Look before you leap! (a priori vs. post hoc)

Why does it matter if a study is labeled a “pilot”?

SHORT ANSWER: …because a pilot is about testing research methods, not about answering research questions.

If a project has “pilot” in the title, then you as a reader should expect a study that examines whether certain research methods work (methodologic research). Methods include things like timing of data collection, sampling strategies, length of questionnaires, and so on. Pilots suggest which methods will work effectively to answer researchers’ questions. Advance prep in methods makes for a smooth research landing.

Small sample = pilot? A PILOT is defined by study goals and design, not sample size. Of course pilots typically have small samples, but a small sample does not a pilot study make. Journals may sometimes tempt a researcher to call a study a pilot because of its small sample. Don’t go there. Doing so means making after-the-fact (post hoc) claims that were not part of the original (a priori) goals and design.

Practical problems? If researchers label a study a “pilot” after it is completed (post hoc), they raise practical & ethical issues. At a practical level, researchers must invent feasibility questions & answers after the fact (see NIH), and they should drop the data analyses that answer their original research questions.

Ethics? Relabeling ethically requires researchers either 1) to claim they planned something that they didn’t, or 2) to take additional action: complete transparency about the change and seeking modification of the original human subjects committee approval. One example of a human subjects issue: you informed your subjects that their data would answer a particular research question, and now you want to use their data to answer something else, namely methods questions!

Options? You can just learn from your small study and go for a bigger one, including improving methods. Some journals will consider publication of innovative studies even when small.

Look first, then leap: It is better to look a priori, before leaping. If you think you might have trouble with your methods, design a pilot. If you have already made the unpleasant discovery that your methods didn’t work as you hoped, you can 1) disseminate your results anyway or 2) rethink the ethical and practical issues.

Who’s with me? The National Institutes of Health agree: https://nccih.nih.gov/grants/whatnccihfunds/pilot_studies. NIH notes that common misuses of pilots include determining safety, intervention efficacy, and effect size.

Who disagrees? McGrath argues that clinical pilots MAY test safety and efficacy, as well as feasibility. (See McGrath, J. M. (2013). Not all studies with small samples are pilot studies. Journal of Perinatal & Neonatal Nursing, 27(4), 281-283. doi: 10.1097/01.JPN.0000437186.01731.bc)

Trial Balloons & Pilot Studies

A pilot study is to research what a trial balloon is to politics.

In politics, a trial balloon is communicating a law or policy idea via the media to see how the intended audience reacts to it. A trial balloon does not answer the question, “Would this policy (or law) work?” Instead, a trial balloon answers questions like “Who hates the idea of the policy/law, even if it would work?” or “What problems might enacting it create?” In other words, a trial balloon answers questions that a politician wants answered BEFORE implementing a policy, so that the policy or law can be tweaked and successfully put in place.


In research, a pilot study is sort of like a trial balloon. It is “a small-scale test of the methods and procedures” of a planned full-scale study (Porta, Dictionary of Epidemiology, 5th edition, 2008). A pilot study answers questions that we want to know BEFORE doing a larger study, so that we can tweak the study plan and have a successful full-scale research project. A pilot study does NOT answer research questions or hypotheses, such as “Does this intervention work?”  Instead a pilot study answers the question “Are these research procedures workable?”

A pilot study asks & answers questions like “Can I recruit my target population? Can the treatments be delivered per protocol? Are study conditions acceptable to participants?” and so on. A pilot study should have specific, measurable benchmarks for feasibility testing. For example, if the pilot is finding out whether subjects will adhere to the study, then adherence might be defined as “70 percent of participants in each [group] will attend at least 8 of 12 scheduled group sessions.” Sample size is based on practical criteria such as budget, participant flow, and the number needed to answer feasibility questions (i.e., questions about whether the study is workable).
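As a small illustration (hypothetical attendance counts, Python), checking such a benchmark is simple arithmetic against the pre-set criterion; note that no inferential statistics are involved.

```python
# Hypothetical attendance counts for 10 pilot participants in one group,
# each with 12 scheduled sessions.
sessions_attended = [12, 9, 8, 11, 5, 10, 8, 12, 7, 9]

MIN_SESSIONS = 8         # benchmark: attend at least 8 of 12 sessions
MIN_PROPORTION = 0.70    # benchmark: 70 percent of participants adherent

adherent = sum(1 for s in sessions_attended if s >= MIN_SESSIONS)
proportion = adherent / len(sessions_attended)

print(f"Adherent: {adherent}/{len(sessions_attended)} ({proportion:.0%})")
print("Benchmark met" if proportion >= MIN_PROPORTION else "Benchmark not met")
```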

A pilot study does NOT: test hypotheses (even preliminarily), use inferential statistics, assess or demonstrate the safety of a treatment, or estimate effect size.

A pilot study is not just a small study.

Next blog: Why this matters!!

For more info read the source of all quotes in this blog: Pilot Studies: Common Uses and Misuses @ https://nccih.nih.gov/grants/whatnccihfunds/pilot_studies