All posts by Martha "Marty" Farrar Highfield PhD RN

It is difficult to be simple! Research can be understood when explained well. That's my aim.

Historical research in healthcare: Free Zoom forums from UVA

“Hospital City, Health Care Nation: Race, Capital, and the Costs of American Health Care”
Guian McKee, PhD
Tuesday, September 12, 2023
 12 p.m. (ET) on Zoom
Hosted by the UVA Bjoring Center for Nursing Historical Inquiry
Zoom link: https://virginia.zoom.us/j/95756992198?pwd=TEZQNmVjOGxENmdCUlZFUGtWa3Ztdz09
Meeting ID: 957 5699 2198
Passcode: 355573
Dr. Guian McKee will speak about his new book, Hospital City, Health Care Nation, which recasts the story of the U.S. health care system by emphasizing its economic, social, and medical importance in American communities. Focusing on urban hospitals and academic medical centers, the book argues that the country’s high level of health care spending has allowed such institutions to become vital economic anchors for communities. Yet that spending has also constrained possibilities for comprehensive health care reform over many decades. At the same time, the role of hospitals in urban renewal, in community health provision, and as employers of low-wage workers has contributed directly to racial health disparities. McKee points to the increased role of financial capital after the 1960s in shaping not only hospital growth but also the underlying character of these vital institutions. The book shows how hospitals’ quest for capital has interacted with structural racism and inequality to constrain the U.S. health care system.
Dr. McKee is a professor of presidential studies at the UVA Miller Center. He is co-chair of the Presidential Recordings Program and co-directs its Health Care Policy Project. His 2023 book, Hospital City, Health Care Nation, is available through the University of Pennsylvania Press. We hope you can tune in!

Maura Singleton
Center Manager
Eleanor Crowder Bjoring Center for Nursing Historical Inquiry

mks2d@virginia.edu
P 434.924.0083; 434.989.1550 (cell)

UVA School of Nursing
202 Jeanette Lancaster Way
P.O. Box 800782
Charlottesville, VA 22908-0782

www.nursing.virginia.edu/nursing-history/

Primer on Design: Part 3 – Mixing UP Methods

QUICK REVIEW: Research design is the overall plan for a study. And…there are 2 main types of design: 1) Non-experiments, in which the researcher observes and documents what exists, and 2) Experiments, in which the researcher tries out an intervention and measures outcomes.

NEW INFO: Two non-experimental research designs that are often confused with one another are: 1) cohort studies & 2) case-control studies. Epidemiologists often use these designs to study large populations.

In a cohort study, a group of participants who were exposed to a presumed cause of disease or injury is followed forward in time (prospectively) to identify emerging health issues. Researchers may also look at the group’s past (retrospectively) to determine how much exposure is related to health outcomes.

In contrast, in a case-control study, researchers look back (retrospectively) to compare exposure to a presumed cause between participants with a disease or condition (cases) and others without it (controls).

EXAMPLES?

  1. Martinez-Calderon et al. (2017). Influence of psychological factors on the prognosis of chronic shoulder pain: Protocol for a prospective cohort study. BMJ Open, 7. doi: 10.1136/bmjopen-2016-012822
  2. Smith et al. (2019). An outbreak of hepatitis A in Canada: The use of a control bank to conduct a case-control study. Epidemiology & Infection, 147. doi: 10.1017/S0950268819001870

CRITICAL THINKING: Do you work with a group that has an interesting past of exposure to some potential cause of disease or injury? Which of the above designs do you find more appealing and why?

Free, virtual seminar: How to publish in a peer-reviewed journal

This Wiley-sponsored online seminar should provide good information on how to disseminate your project findings!

Click here: How to publish in a peer reviewed journal

The Whole Picture: Mixed Methods Design

Mixed methods (MM) research provides a more complete picture of reality by including complementary quantitative and qualitative data.

A clinical analogy for MM research is asking patients to rate their pain numerically on a 0–10 scale and then to describe the pain character in words.

MM researchers sometimes include both experimental hypotheses and non-experimental research questions in the same study.


Common MM subtypes appear in the table below. In concurrent designs, investigators collect all data at the same time; in sequential designs, they collect one type of data before the other. In triangulated MM, data receive equal weight, but in embedded designs, such as a large RCT in which only a small subset of participants is interviewed, the main study data are weighted more heavily. In sequential MM, researchers give more weight to whichever type of data was collected first: qualitative data in exploratory designs and quantitative data in explanatory designs.

FOR MORE INFO: WHAT IS MIXED METHODS RESEARCH? – Dr. John Creswell

| MM Design | Equally Weighted Data | Priority-Weighted Data |
|---|---|---|
| Concurrent: Triangulation | All data | |
| Concurrent: Embedded | | Main study data |
| Sequential: Exploratory | | Qualitative data |
| Sequential: Explanatory | | Quantitative data |
TYPES OF MM DESIGN: Concurrent & Sequential

New research: Mindfulness

Check out the newest research and add your critique in the comments.

“Evidence suggests that mindfulness training using a phone application (app) may support neonatal intensive care unit (NICU) nurses in their high stress work.” https://journals.lww.com/advancesinneonatalcare/Abstract/9900/The_Effect_of_a_Mindfulness_Phone_Application_on.63.aspx

The Effect of a Mindfulness Phone Application on NICU Nurses’ Professional Quality of Life

by Egami, Susan MSN, RNC-NIC, IBCLC; Highfield, Martha E. Farrar PhD, RN

Editor(s): Dowling, Donna PhD, RN; Newberry, Desi M. DNP, NNP-BC; Parker, Leslie PhD, APRN, FAAN, Section Editors

Advances in Neonatal Care, April 10, 2023. DOI: 10.1097/ANC.0000000000001064

Correlation Studies: Primer on Design Part 2

REMEMBER:

Research design = overall plan for a study.

The 2 major categories of research study design are:

  1. Non-experimental, observation-only studies, &
  2. Experimental studies that test an intervention.

Correlation study designs are in that first category. Correlation studies focus on whether changes in at least one variable are statistically related to changes in another. In other words, do two or more variables change together?

Such studies do not test whether one variable causes change in the other. Instead, they are analogous to the chicken-and-egg dilemma: one can confirm that the numbers of chickens and eggs are related to each other, but no one can say which came first or which caused the other. A correlation study question may take this form: “Is there a relationship between changes in [variable x] and changes in [variable y]?” A correlation hypothesis might predict, “As [variable x] increases, [variable y] decreases.”

An example of a question appropriate to this design is, “Are nurses’ age and educational levels related to their professional quality of life?” Sometimes a yet-unidentified, mediating variable may be creating the changes in one or all correlated variables. For example, older, more educated nurses may be more likely to choose work settings with high professional quality of life; if so, the mediating variable of work setting, not age or education, might be creating a particular professional quality of life.

Alert! Correlation is not causation.
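Correlation studies of this kind typically report Pearson’s r, which runs from -1 (perfect inverse relationship) through 0 (no relationship) to +1 (perfect direct relationship). Here is a minimal pure-Python sketch using invented numbers, not real nursing data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Covariance numerator: how the variables move together
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    # Scale by each variable's spread so r stays between -1 and +1
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Invented example: as variable x rises, variable y falls,
# so r is strongly negative -- a relationship, not a cause.
x = [1, 2, 3, 4, 5]
y = [10, 8, 7, 4, 2]
print(round(pearson_r(x, y), 2))  # -0.99
```

A strongly negative r here says only that the two variables change together in opposite directions; it says nothing about which one (if either) is driving the other.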

Primer on Research Design: Part 1-Description

A research design is the investigator-chosen, overarching study framework that facilitates getting the most accurate answer to a hypothesis or question. Think of research design as similar to the framing of a house during construction. Just as house-framing provides structure and limits to walls, floors, and ceilings, so does a research design provide structure and limits to a host of protocol details.

Tip. The two major categories of research design are: 1) Non-experimental, observation-only and 2) Experimental testing of an intervention.

DESCRIPTIVE STUDIES

Non-experimental studies that examine one variable at a time.

When little is known and no theory exists on a topic, descriptive research begins to build theory by identifying and defining key, related concepts (variables). Although a descriptive study may explore several variables, only one of those is measured at a time; there is no examination of relationships between variables. Descriptive studies create a picture of what exists by analyzing quantitative or qualitative data to answer questions like, “What is [variable x]?” or “How often does it occur?” Examples of such one-variable questions are “What are the experiences of first-time fathers?” or “How many falls occur in the emergency room?” (Variables are in italics.) The former question produces qualitative data, and the latter, quantitative.

Descriptive results raise important questions for further study, but findings are rarely generalizable. You can see this especially in a descriptive case study: an in-depth exploration of a single event or phenomenon that is limited to a particular time and place. Given case study limitations, opinions differ on whether they even qualify as research.

Descriptive research that arises from constructivist or advocacy assumptions merits particular attention. In these designs, researchers collect in-depth qualitative information about only one variable and then critically reflect on those data in order to uncover emerging themes or theories. Often broad data are collected in a natural setting in which researchers exercise little control over other variables. Sample size is not pre-determined, data collection and analysis are concurrent, and the researcher collects and analyzes data until no new ideas emerge (data saturation). The most basic qualitative descriptive method is perhaps content analysis, sometimes called narrative descriptive analysis, in which researchers uncover themes within informant descriptions. Box 1 identifies major qualitative traditions beyond content analysis and case studies.

Alert! All qualitative studies are descriptive, but not all descriptive studies are qualitative.

Box 1. Descriptive Qualitative Designs

| Design | Focus | Discipline of Origin |
|---|---|---|
| Ethnography | Uncovers phenomena within a given culture, such as meanings, communications, and mores. | Anthropology |
| Grounded Theory | Identifies a basic social problem and the process that participants use to confront it. | Sociology |
| Phenomenology | Documents the “lived experience” of informants going through a particular event or situation. | Psychology |
| Community participatory action | Seeks positive social change and empowerment of an oppressed community by engaging them in every step of the research process. | Marxist political theory |
| Feminist | Seeks positive social change and empowerment of women as an oppressed group. | Marxist political theory |

EBP: Think Three R’s

Risks, Resources, Readiness

Three things to consider when adapting or adopting research evidence in a particular practice setting, according to Stetler (2001).

Check out the 1-minute video summary by DrH at https://www.instagram.com/martyhrn/

The biggest enemy was not Russia

Check out this explanation of the famous rose plot about preventable deaths of soldiers! Lessons to be learned today.

How to speak to stakeholders. How to change nursing.

https://www.youtube.com/watch?v=JZh8tUy_bnM

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted to read only the Introduction & Background and then go straight to the Discussion, Implications, and Conclusions at the end, skipping all those pesky procedures, numbers, and p levels in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale erratically measured your weight each a.m., you would probably toss it and find a more reliable and valid bathroom scale. The data from that old scale would be useless in learning how much you weighed. Similarly, researchers want useful outcome data, and to get quality data they must collect it with a measurement instrument that consistently (reliably) measures what it claims to measure (validity). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: Researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. And whatever instrument the researcher uses needs to give them correct data answers.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that measures systolic and diastolic BP without a lot of artifact or interference. That accuracy in measuring BP only is called instrument validity. Then, if I take your BP 3 times in a row, I should get basically the same answer; that consistency is called instrument reliability. I must also use the cuff as intended–correct cuff size and placement–in order to get quality data that reflect the subject’s actual BP.

The same thing is true with questionnaires or other measurement tools. A researcher must use an instrument for the intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not their pain, depression, or anxiety)–in other words it should have instrument validity. It should measure stress without a lot of artifacts or interference from other states of mind.

NO instrument is 100% valid–it’s a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures other things besides stress–and it will–it is less valid. The question you should ask is, “How valid is the instrument?” often on a 0 to 1 scale with 1 being unachievable perfection. The same issue and question applies to reliability.

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Or, an instrument can consistently (reliably) measure the wrong thing–that is, it can measure something other than what the researcher intended to measure. Research instruments need both strong reliability AND validity to be most useful; they need to measure the outcome variable of interest consistently.

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures
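One common reliability statistic you will meet in Methods sections is Cronbach’s alpha, a 0-to-1 index of how consistently a questionnaire’s items measure the same underlying variable. A minimal pure-Python sketch using invented responses (not data from any published scale):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one list per item).

    Formula: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    where k is the number of items.
    """
    def variance(scores):  # sample variance (n - 1 denominator)
        n = len(scores)
        mean = sum(scores) / n
        return sum((s - mean) ** 2 for s in scores) / (n - 1)

    k = len(items)
    # Each respondent's total score across all items
    totals = [sum(respondent) for respondent in zip(*items)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Invented 3-item scale answered by 5 respondents (columns = respondents).
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.89
```

An alpha near 0.89, as here, would usually be read as good internal consistency; values below about 0.70 are conventionally treated as questionable, though the cutoff is a rule of thumb, not a law.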

Fun Practice: In your own words, relate the following article excerpt to the concept of validity: “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring.” From p. 3, Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4