Category Archives: quantitative research

Essentials for Clinical Researchers

[note: bonus 20% book discount from publisher. See flyer below]

My 2025 book, Doing Research, is a user-friendly guide, not a comprehensive text. Chapter 1 gives a dozen tips to get started, Chapter 2 defines research, and Chapters 3-9 focus on planning. The remaining Chapters 10-12 guide you through challenges of conducting a study, getting answers from the data, and sharing with others what you learned. Italicized key terms are defined in the glossary, and a bibliography lists additional resources.

Theoretically speaking…is this all “pie-in-the-sky” stuff?

Is using theory and conceptual frameworks in studies just “pie-in-the-sky” stuff? Do they have any practical use? Or are they merely for academics in ivory towers?

This blog post about theory-testing research¹ may affect your answers.

What is it? At its most basic, a theory or framework is a set of statements that describes part of reality. Those related statements (called propositions) outline the relationships between two or more ideas (called concepts). One example of a set of propositions is: “Work stress leads to burnout; burnout leads to poor work outcomes; mindfulness practice leads to lower burnout and thus to better work outcomes.” These statements describe the relationships among the concepts of “work stress,” “burnout,” “poor work outcomes,” and “mindfulness practice.”

Each concept has 1) an abstract, dictionary-type conceptual definition and 2) a concrete, measurable operational definition. For example, Maslach conceptually defined burnout as a combination of emotional exhaustion, depersonalization, and lower personal accomplishment; burnout is then operationally defined as a self-reported score on the Maslach Burnout Inventory (MBI).

Some theories are named for their authors–like Einstein’s theory of relativity, expressed in a single proposition about the relationship among the concepts of energy, mass, and the speed of light. Einstein’s theory, like the propositions of other theories/frameworks, describes our existing knowledge about a topic based on evidence and logical connections.

To connect your study with such existing knowledge, take these steps:

1) Identify a theory/framework that conceptually & operationally defines your concept of interest and states its relationship to other concepts. Start by looking in the library for articles on your topic.

2) Accept most of the theory/framework’s propositions as true without testing them yourself (these untested propositions are called assumptions). All studies assume a great deal to be true already; that is how science works, because you can’t test everything at once.

3) Identify a proposition that you want to test, and write it in testable form as a hypothesis or research question. You will be testing only a tiny piece of the theory/framework, perhaps by examining the concepts in a new setting, with new methods, or in a different or larger sample. For example, you might want to test an intervention to see if it reduces burnout (e.g., Hypothesis: “ICU staff using a mindfulness phone app will report lower burnout than those who do not use the app.”)

4) When your study is complete, discuss how your findings confirm or disconfirm the theory/framework. Your logic and research are now a part of what we know (or think we know).
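The hypothesis-testing step above can be sketched numerically. Here is a minimal illustration in Python, comparing two groups' burnout scores with Welch's t statistic; the groups and all scores are invented for illustration, not from any real study:

```python
import statistics as st

# Hypothetical MBI-style burnout scores (all numbers invented for illustration)
app_group = [38, 42, 35, 40, 33, 37, 41, 36]      # used the mindfulness app
control_group = [48, 51, 45, 53, 47, 50, 44, 49]  # did not use the app

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances allowed)."""
    mean_a, mean_b = st.mean(a), st.mean(b)
    var_a, var_b = st.variance(a), st.variance(b)  # sample variances
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

t = welch_t(app_group, control_group)
# A large negative t supports the hypothesis that app users report lower burnout
print(t)
```

In practice the researcher would also compute a p value (e.g., with a statistics package) and check assumptions before drawing conclusions; the point here is only that the hypothesis translates directly into a comparison of measured scores.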

Conclusion: Of course there’s much more that could be said on this topic. Let me know what to add in the comments. -Dr.H

Questions for thought:

So, do you think theory/conceptual frameworks are just “pie in the sky” without practical value? If so, how would you build a study on existing knowledge? If you think they ARE practical, how would you use them to study your topic of interest? Explain how you have or have not used propositions in a study.

  1. Theory-building research is a different inductive path. Theory-testing is more deductive. ↩︎

New book: “Doing Research: A Practical Guide”

Author: Martha “Marty” E. Farrar Highfield

NOW AVAILABLE ELECTRONICALLY & SOON IN PRINT.

CHECK OUT: https://link.springer.com/book/10.1007/978-3-031-79044-7

This book provides a step-by-step summary of how to do clinical research. It explains what research is and isn’t, where to begin and end, and the meaning of key terms. A project planning worksheet is included and can be used as readers work their way through the book in developing a research protocol. The purpose of this book is to empower curious clinicians who want data-based answers.

Doing Research is a concise, user-friendly guide to conducting research, rather than a comprehensive research text. The book contains 12 main chapters followed by the protocol worksheet. Chapter 1 offers a dozen tips to get started, Chapter 2 defines research, and Chapters 3-9 focus on planning. Chapters 10-12 then guide readers through challenges of conducting a study, getting answers from the data, and disseminating results. Useful key points, tips, and alerts are strewn throughout the book to advise and encourage readers.

Primer on Design: Part 3 – Mixing UP Methods

QUICK REVIEW: Research design is the overall plan for a study. And…there are 2 main types of design: 1) non-experiments, in which the researcher observes and documents what exists, and 2) experiments, in which the researcher tries out an intervention and measures outcomes.

NEW INFO: Two non-experimental research designs that are often confused with one another are 1) cohort studies and 2) case-control studies. Epidemiologists often use these designs to study large populations.

In a cohort study, a group of participants who were exposed to a presumed cause of disease or injury is followed into the future (prospectively) to identify emerging health issues. Researchers may also look at the group’s past (retrospectively) to determine the amount of exposure that is related to health outcomes.

In contrast, in a case-control study, participants with a disease or condition (cases) and others without it (controls) are compared retrospectively on their exposure to a presumed cause.
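Case-control comparisons like the one just described are commonly summarized with an odds ratio. A minimal sketch in Python, using an invented 2x2 table of exposure counts (all numbers are hypothetical, chosen only to show the arithmetic):

```python
# Hypothetical case-control counts (invented for illustration):
# rows = exposure status, columns = disease status
exposed_cases, unexposed_cases = 40, 10        # participants WITH the disease
exposed_controls, unexposed_controls = 20, 30  # participants WITHOUT it

# Odds ratio = odds of exposure among cases / odds of exposure among controls
odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)
print(odds_ratio)  # > 1 suggests exposure is associated with higher odds of disease
```

An odds ratio above 1 (here, roughly 6) suggests the exposure is associated with the disease; a confidence interval would be needed before drawing any real conclusion.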

EXAMPLES?

  1. Martinez-Calderon et al. (2017). Influence of psychological factors on the prognosis of chronic shoulder pain: protocol for a prospective cohort study. BMJ Open, 7. doi: 10.1136/bmjopen-2016-012822
  2. Smith et al. (2019). An outbreak of hepatitis A in Canada: The use of a control bank to conduct a case-control study. Epidemiology & Infection, 147. doi: 10.1017/S0950268819001870

CRITICAL THINKING: Do you work with a group that has an interesting past of exposure to some potential cause of disease or injury? Which of the above designs do you find more appealing and why?

New research: Mindfulness

Check out the newest and add your critique in comments.

“Evidence suggests that mindfulness training using a phone application (app) may support neonatal intensive care unit (NICU) nurses in their high stress work.” https://journals.lww.com/advancesinneonatalcare/Abstract/9900/The_Effect_of_a_Mindfulness_Phone_Application_on.63.aspx

The Effect of a Mindfulness Phone Application on NICU Nurses’ Professional Quality of Life

by Egami, Susan MSN, RNC-NIC, IBCLC; Highfield, Martha E. Farrar PhD, RN

Editor(s): Dowling, Donna PhD, RN; Newberry, Desi M. DNP, NNP-BC; and Parker, Leslie PhD, APRN, FAAN (Section Editors)

Advances in Neonatal Care, April 10, 2023. DOI: 10.1097/ANC.0000000000001064

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted to read only the Introduction & Background, then go straight to the Discussion, Implications, and Conclusions at the end, skipping all those pesky procedures, numbers, and p levels in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale measured your weight erratically each a.m., you probably would toss it and find a more reliable and valid bathroom scale. The data from that old scale would be useless for learning how much you weighed. Similarly, in research the researcher wants useful outcome data, and to get quality data the researcher must collect it with a measurement instrument that consistently (reliably) measures what it claims to measure (validly). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: Researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. And whatever instrument the researcher uses needs to give them correct data answers.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that measures systolic and diastolic BP without a lot of artifact or interference. That accuracy in measuring only BP is called instrument validity. Then, if I take your BP 3 times in a row, I should get basically the same answer; that consistency is called instrument reliability. I must also use the cuff as intended–correct cuff size and placement–in order to get quality data that reflect the subject’s actual BP.

The same thing is true with questionnaires or other measurement tools. A researcher must use an instrument for the intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not their pain, depression, or anxiety)–in other words it should have instrument validity. It should measure stress without a lot of artifacts or interference from other states of mind.

NO instrument is 100% valid–it’s a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures other things besides stress–and it will–it is less valid. The question you should ask is, “How valid is the instrument?” often on a 0 to 1 scale with 1 being unachievable perfection. The same issue and question applies to reliability.

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Or, an instrument can consistently (reliably) measure the wrong thing–that is, it can measure something other than what the researcher intended to measure. Research instruments need both strong reliability AND validity to be most useful; they need to measure the outcome variable of interest consistently.

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures
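One widely used reliability statistic mentioned earlier, Cronbach's alpha, puts a number (on the same 0-to-1 scale discussed above) on how consistently a set of questionnaire items measures the same thing. A minimal Python sketch with invented respondent data (the items and scores are hypothetical, purely for illustration):

```python
import statistics as st

def cronbach_alpha(items):
    """Cronbach's alpha: internal-consistency reliability, on a 0-to-1 scale.
    `items` is a list of columns, one list of respondent scores per item."""
    k = len(items)
    item_vars = sum(st.variance(col) for col in items)          # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]            # each respondent's total
    return (k / (k - 1)) * (1 - item_vars / st.variance(totals))

# Hypothetical data: 5 respondents answering a 3-item scale (numbers invented)
item1 = [3, 4, 5, 2, 4]
item2 = [3, 5, 4, 2, 5]
item3 = [4, 4, 5, 3, 4]
alpha = cronbach_alpha([item1, item2, item3])
print(alpha)  # closer to 1 = items more consistently measure the same construct
```

As the post notes for validity, perfection (1.0) is unachievable; the practical question is how high the estimate is for your instrument in your population.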

Fun Practice: In your own words, relate the following article excerpt to the concept of validity: “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring” (p. 3). Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4

Research: What it is and isn’t

WHAT RESEARCH IS

Research is using the scientific process to ask and answer questions by examining new or existing data for patterns. The data are measurements of variables of interest. The simplest definition of a variable is that it is something that varies, such as height, income, or country of origin. For example, a researcher might be interested in collecting data on triceps skin fold thickness to assess the nutritional status of preschool children. Skin fold thickness will vary.

Research is often categorized in different ways in terms of: data, design, broad aims, and logic.

  • Data. Data may be qualitative (words or images) or quantitative (numbers).
  • Design. Study design is the overall plan for conducting a research study, and there are three basic designs: descriptive, correlational, and experimental.
    1. Descriptive research attempts to answer the question, “What exists?” It tells us what the situation is, but it cannot explain why things are the way they are. e.g., How much money do nurses make?
    2. Correlational research answers the question: “What is the relationship” between variables (e.g., age and attitudes toward work). It cannot explain why those variables are or are not related. e.g., relationship between nurse caring and patient satisfaction
    3. Experimental research tries to answer “Why?” questions by examining cause-and-effect connections. e.g., gum chewing after surgery speeds return of bowel function. Gum chewing is a potential cause or “the why.”
  • Aims. Studies, too, may be either applied research or basic research. Applied research is when the overall purpose of the research is to uncover knowledge that may be immediately used in practice (e.g., whether a scheduled postpartum quiet time facilitates breastfeeding). In contrast, basic research is when the new knowledge has no immediate application (e.g., identifying receptors on a cell wall).
  • Logic. Study logic may be inductive or deductive. Inductive reasoning is used in qualitative research; it starts with specific bits of information and moves toward generalizations [e.g., This patient’s pain is reduced after listening to music (specific); that means that music listening reduces all patients’ pain (general)]. Deductive reasoning is typical of quantitative research; it starts with generalizations and moves toward specifics [e.g., If listening to music relaxes people (general), then it may reduce post-operative pain (specific)]. Of course, the logical conclusions in each case should be tested with research!

WHAT RESEARCH IS NOT:

Research as a scientific process is not going to the library or searching online to find information. It is also different from processes of applying research and non-research evidence to practice (called Evidence-Based Practice or EBP). And it is not the same as Quality Improvement (QI). See Two Roads Diverged for a flowchart to help differentiate research, QI and EBP.

IS IT 2? OR 3?

Credible sources often disagree on technicalities. Sometimes this includes classification of research design. Some argue that there are only 2 categories of research design:

  1. True experiments. True experiments have 3 elements: 1) randomization to groups, 2) a control group, and 3) an intervention; and
  2. Non-experiments. Non-experiments may have 1 to none of those 3 elements.

Fundamentally, I agree with the above. But what about designs that include an intervention and a control group, but Not randomization?

Those may be called quasi-experiments; the most often performed quasi-experiment is pre/post testing of a single group. The control group is the subjects at baseline, and the experimental group is the same subjects after they receive a treatment or intervention. That means the control group is a within-subjects control group (as opposed to a between-groups control). Quasi-experiments can be used to answer cause-and-effect hypotheses when an experiment may not be feasible or ethical.

One might even argue that a strength of pre/post quasi-experiments is that we do NOT have to assume that the control and experimental groups are equivalent–an assumption we would make about subjects randomized (randomly assigned) to a control or experimental group. Instead, the control and experimental groups are exactly equivalent because they are the same persons (barring maturation of subjects and similar threats to validity that also apply to experiments).
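The within-subjects logic just described has a direct numerical form: each subject's post score is compared to their own baseline, and the analysis runs on those differences (a paired t statistic). A minimal Python sketch with hypothetical pre/post scores (all numbers invented for illustration):

```python
import statistics as st

# Hypothetical scores for ONE group measured before and after an intervention;
# each subject at baseline serves as their own control (numbers invented)
pre  = [60, 55, 70, 65, 58, 62]
post = [52, 50, 63, 60, 51, 55]

diffs = [b - a for a, b in zip(pre, post)]  # each subject's within-person change
# Paired t statistic: mean change divided by its standard error
t = st.mean(diffs) / (st.stdev(diffs) / len(diffs) ** 0.5)
print(t)  # large negative t = scores dropped consistently after the intervention
```

Because the comparison is within the same persons, between-group differences are ruled out by design; threats like maturation and testing effects, as noted above, still apply.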

I think using the term quasi-experiment makes it clear that persons in the study receive an intervention. Adding “pre/post” means that the researcher is using a single group as its own control (Baseline -> Intervention -> Post). I prefer to use the term non-experimental to mean a) descriptive studies (ones that just describe the situation) and b) correlational studies (ones without an intervention that look for whether one factor is related to another).

What do you think? 2? or 3?

Goldilocks and the 3 Levels of Data

Actually when it comes to quantitative data, there are 4 levels, but who’s counting? (Besides Goldilocks.)

  1. Nominal (categorical) data are names or categories (gender, religious affiliation, days of the week, yes or no, and so on).
  2. Ordinal data are like the pain scale. Each number is higher (or lower) than the next, but the distances between numbers are not equal. In other words, 4 is not necessarily twice as much as 2, and 5 is not half of 10.
  3. Interval data are like degrees on a thermometer: equal distances between them, but no actual “0”. 0 degrees is just really, really cold.
  4. Ratio data are those with a real 0 and equal intervals (e.g., weight, annual salary, mg).

(Of course if you want to collect QUALitative word data, that’s closest to categorical/nominal, but you don’t count ANYTHING.  More on that another time.)
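The levels matter in practice because they determine which summary statistics make sense: a mode for nominal data, a median for ordinal data, a mean for interval/ratio data. A minimal Python sketch with hypothetical data (the blood types, pain ratings, and weights are invented examples):

```python
import statistics as st

# Hypothetical samples at three of the four levels of measurement (data invented)
nominal = ["A", "B", "A", "O", "A"]      # blood type: categories only -> use the mode
ordinal = [2, 5, 3, 7, 4, 4]             # 0-10 pain scale: order only -> use the median
ratio   = [70.2, 68.5, 81.0, 75.4]       # weight in kg: true zero -> mean is meaningful

print(st.mode(nominal))    # most frequent category
print(st.median(ordinal))  # middle value; does not assume equal spacing
print(st.mean(ratio))      # arithmetic mean (also valid for interval data)
```

Using a mean on ordinal data (a common shortcut with pain scales) quietly assumes equal spacing between the numbers, which, as noted above, ordinal data do not guarantee.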

CRITICAL THINKING:   Where are the levels in Goldilocks and the 3 levels of data at this link:  https://son.rochester.edu/research/research-fables/goldilocks.html ?? Would you measure soup, bed, chairs, bears, or other things differently?  Why was the baby bear screaming in fright?

Words vs. Numbers: What does it all mean?

There are several ways to classify types of research.   One way is qualitative versus quantitative–in other words, WORD  vs. NUMBER data, methods, & analysis.

  1. Qualitative research focuses on words (or sometimes images) and their meanings.
  2. Quantitative research focuses on numbers or counting things and statistical analysis that yields probable meaning.

If you watch this short, easy-to-understand YouTube clip, you’ll have all the basics you need to understand these! Enjoy!

Critical thinking:  Go to PubMed for this QUANTitative study on spiritual issues in care (https://www.ncbi.nlm.nih.gov/pubmed/28403299) and compare it to this PubMed QUALitative study (https://www.ncbi.nlm.nih.gov/pubmed/27853263) in terms of data, methods, & analysis)

For more information: See earlier posts