Tag Archives: research

New research: Mindfulness

Check out the newest and add your critique in comments.

“Evidence suggests that mindfulness training using a phone application (app) may support neonatal intensive care unit (NICU) nurses in their high stress work.” https://journals.lww.com/advancesinneonatalcare/Abstract/9900/The_Effect_of_a_Mindfulness_Phone_Application_on.63.aspx

The Effect of a Mindfulness Phone Application on NICU Nurses’ Professional Quality of Life

by Egami, Susan MSN, RNC-NIC, IBCLC; Highfield, Martha E. Farrar PhD, RN

Editor(s): Dowling, Donna PhD, RN; Newberry, Desi M. DNP, NNP-BC; Parker, Leslie PhD, APRN, FAAN (Section Editors)

Advances in Neonatal Care. Published ahead of print, April 10, 2023. DOI: 10.1097/ANC.0000000000001064

Correlation Studies: Primer on Design Part 2

REMEMBER:

Research design = overall plan for a study.

The 2 major categories of research study design are:

  1. Non-experimental, observation-only studies, &
  2. Experimental studies that test an intervention.

Correlation study designs are in that first category. Correlation studies focus on whether changes in at least one variable are statistically related to changes in another. In other words, do two or more variables change at the same time?

Such studies do not test whether one variable causes change in the other. Instead they are analogous to the chicken-and-egg dilemma in which one can confirm that the number of chickens and eggs are related to each other, but no one can say which came first or which caused the other. Correlation study questions may take this form, “Is there a relationship between changes in [variable x] and changes in [variable y]?” while a correlation hypothesis might be a prediction that, “As [variable x] increases, [variable y] decreases.”

An example of a question appropriate to this design is, “Are nurses’ age and educational levels related to their professional quality of life?” Sometimes a yet-unidentified mediating variable may be creating the changes in one or all correlated variables. For example, rising age and education may make nurses more likely to choose certain work settings with high professional quality of life; this means the mediating variable of work setting—not age or education—might be creating a particular professional quality of life.
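For readers who like to see the numbers, here is a minimal sketch (in Python) of how a correlation question gets answered statistically with a Pearson correlation coefficient. The variable names and values below are invented purely for illustration, not taken from any real study.

```python
# Minimal illustration of a correlation analysis using hypothetical data.
# Pearson's r ranges from -1 (perfect inverse relationship) through 0
# (no linear relationship) to +1 (perfect positive relationship).
from scipy import stats

# Hypothetical data: nurses' years of experience and a professional
# quality-of-life score (0-100). These numbers are made up.
years_experience = [2, 5, 8, 12, 15, 20, 25, 30]
pro_qol_score = [55, 60, 58, 66, 70, 72, 75, 78]

r, p_value = stats.pearsonr(years_experience, pro_qol_score)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
# A large positive r says the two variables rise together;
# it does NOT say that experience causes higher quality of life.
```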

Alert! Correlation is not causation.

Primer on Research Design: Part 1-Description

A research design is the investigator-chosen, overarching study framework that facilitates getting the most accurate answer to a hypothesis or question. Think of research design as similar to the framing of a house during construction. Just as house-framing provides structure and limits to walls, floors, and ceilings, so does a research design provide structure and limits to a host of protocol details.

Tip. The two major categories of research design are: 1) Non-experimental, observation-only and 2) Experimental testing of an intervention.

DESCRIPTIVE STUDIES

Non-experimental studies that examine one variable at a time.

When little is known and no theory exists on a topic, descriptive research begins to build theory by identifying and defining key, related concepts (variables). Although a descriptive study may explore several variables, only one of those is measured at a time; there is no examination of relationships between variables. Descriptive studies create a picture of what exists by analyzing quantitative or qualitative data to answer questions like, “What is [variable x]?” or “How often does it occur?” Examples of such one-variable questions are “What are the experiences of first-time fathers?” or “How many falls occur in the emergency room?” (Variables are in italics.) The former question produces qualitative data, and the latter, quantitative.
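To make the quantitative side concrete, here is a minimal Python sketch answering a one-variable descriptive question, “How many falls occur in the emergency room?” The monthly counts are entirely invented.

```python
# Hypothetical monthly counts of ED falls -- a single descriptive variable.
falls_per_month = {"Jan": 3, "Feb": 5, "Mar": 2, "Apr": 4, "May": 6, "Jun": 3}

total = sum(falls_per_month.values())
average = total / len(falls_per_month)
print(f"Total falls: {total}")
print(f"Average falls per month: {average:.1f}")
# Descriptive output only: it tells us what is happening, not why.
```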

Descriptive results raise important questions for further study, and findings are rarely generalizable. You can see this especially in a descriptive case study: an in-depth exploration of a single event or phenomenon that is limited to a particular time and place. Given case study limitations, opinions differ on whether case studies even qualify as research.

Descriptive research that arises from constructivist or advocacy assumptions merits particular attention. In these designs, researchers collect in-depth qualitative information about only one variable and then critically reflect on that data in order to uncover emerging themes or theories. Often broad data are collected in a natural setting in which researchers exercise little control over other variables. Sample size is not pre-determined, data collection and analysis are concurrent, and the researcher collects and analyzes data until no new ideas emerge (data saturation). The most basic qualitative descriptive method is perhaps content analysis, sometimes called narrative descriptive analysis, in which researchers uncover themes within informant descriptions. Box 1 below identifies major qualitative traditions beyond content analysis and case studies.

Alert! All qualitative studies are descriptive, but not all descriptive studies are qualitative.

Box 1. Descriptive Qualitative Designs

Design | Focus | Discipline of Origin
Ethnography | Uncovers phenomena within a given culture, such as meanings, communications, and mores. | Anthropology
Grounded Theory | Identifies a basic social problem and the process that participants use to confront it. | Sociology
Phenomenology | Documents the “lived experience” of informants going through a particular event or situation. | Psychology
Community participatory action | Seeks positive social change and empowerment of an oppressed community by engaging them in every step of the research process. | Marxist political theory
Feminist | Seeks positive social change and empowerment of women as an oppressed group. | Marxist political theory

EBP: Think Three R’s

Risks, Resources, Readiness

Three things to consider when adopting or adapting research evidence in a particular practice setting, according to Stetler (2001).

Check out the 1-minute video summary by DrH at https://www.instagram.com/martyhrn/

The biggest enemy was not Russia

Check out this explanation of the famous rose plot about preventable deaths of soldiers!! Lessons to be learned today.

How to speak to stakeholders. How to change nursing.

https://www.youtube.com/watch?v=JZh8tUy_bnM

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted to read only the Introduction & Background, then go straight to the Discussion, Implications, and Conclusions at the end. You skip all those pesky procedures, numbers, and p levels in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale erratically measured your weight each a.m., you probably would toss it and find a more reliable and valid bathroom scale. The data from that old scale would be useless in learning how much you weighed. Similarly in research, the researcher wants useful outcome data, and to get that quality data the researcher must collect it with a measurement instrument that consistently (reliably) measures what it claims to measure (validity). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: Researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. And whatever instrument the researcher uses needs to give accurate data.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that collects systolic and diastolic BP without a lot of artifacts or interference. That accuracy in measuring BP only is called instrument validity. Then if I take your BP 3 times in a row, I should get basically the same answer and that consistency is called instrument reliability. I must also use the cuff as intended–correct cuff size and placement–in order to get quality data that reflects the subject’s actual BP.

The same thing is true with questionnaires or other measurement tools. A researcher must use an instrument for the intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not their pain, depression, or anxiety)–in other words it should have instrument validity. It should measure stress without a lot of artifacts or interference from other states of mind.

NO instrument is 100% valid–it’s a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures other things besides stress–and it will–it is less valid. The question you should ask is, “How valid is the instrument?” Validity is often expressed on a 0 to 1 scale, with 1 being unachievable perfection. The same issue and question apply to reliability.

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Or, an instrument can consistently (reliably) measure the wrong thing–that is, it can measure something other than what the researcher intended to measure. Research instruments need both strong reliability AND validity to be most useful; they need to measure the outcome variable of interest accurately and consistently.
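If you would like to see what those “research-y” reliability numbers actually look like, here is a minimal Python sketch using invented questionnaire data. The respondents, items, and scores are hypothetical; the point is only to show how Cronbach’s alpha (internal consistency) and test-retest reliability are computed on any instrument’s data.

```python
# Illustrative only: two common reliability statistics computed on
# invented questionnaire data (5 respondents x 4 items, scored 1-5).
import numpy as np

scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])

# Cronbach's alpha: internal consistency -- do the items "hang together"?
k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)
total_variance = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values near 1 = more consistent

# Test-retest reliability: give the same tool twice and correlate the totals.
time1 = scores.sum(axis=1)
time2 = time1 + np.array([1, 0, -1, 1, 0])  # invented retest totals
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest r = {r:.2f}")
```

Both numbers land on that same 0-to-1 scale discussed above: closer to 1 means more consistent, never perfect.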

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures

Fun Practice: In your own words, relate the following article excerpt to the concept of validity: “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring” (p. 3). Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4

Bates Center Seminar Series – Diabetes: A History of Race and Disease

Speaker: Arleen Tuchman, PhD, Vanderbilt University

Date and Time: Wednesday, April 6, 2022, 4:00pm EDT, virtual BlueJeans event

Abstract: Who is considered most at risk for diabetes, and why? In this talk, Tuchman discusses how, at different times over the past one hundred years, Jews, Native Americans, and African Americans have been labeled most at risk for developing diabetes, and that such claims have reflected and perpetuated troubling assumptions about race, ethnicity, and class. As Tuchman shows, diabetes also underwent a mid-century transformation in the public’s eye from being a disease of wealth and “civilization” to one of poverty and “primitive” populations. In tracing this cultural history, Tuchman argues that shifting understandings of diabetes reveal just as much about scientific and medical beliefs as they do about the cultural, racial, and economic milieus of their time.

Bio: Arleen Tuchman is a specialist in the history of medicine in the United States and Europe, with research interests in the cultural history of health, disease, and addiction; the rise of scientific medicine; and scientific and medical constructions of gender and sexuality. She is the author of three books, the most recent being Diabetes: A History of Race and Disease (Yale University Press, 2020). She is currently working on a history of addiction and the family in the United States.

Tuchman has held many fellowships, including ones from the American Council of Learned Societies, the National Institutes of Health, and the National Endowment for the Humanities.
Tuchman is a past director of Vanderbilt University’s Center for Medicine, Health, and Society (2006-2009) and has, since 2019, been the co-creator of a historic medicinal garden on Vanderbilt University’s campus.

Register here.

Unexpected Evidence in The Science of Lockdowns

Headlines are blaring: “New study shows that lockdowns had minimal effect on COVID-19 mortality.”

The January 2022 systematic review and meta-analysis that underlies that news is Herby, Jonung, & Hanke’s “A Literature Review and Meta-Analysis of the Effects of Lockdowns on COVID-19 Mortality” in Applied Economics.

Scientists label systematic reviews and meta-analyses as the strongest type of scientific evidence (pyramid of evidence). Of course the strength of the systematic review/meta-analysis depends on whether it is well or poorly done, so never put your research-critique brain in neutral. This one seems well done.

In systematic reviews, researchers follow a methodical, focused process that describes their selection and analysis of all studies on a topic. Meta-analyses treat all the data from those selected studies as a single study. Researchers will specify their process and parameters for selecting studies, and they typically publish a table of evidence that summarizes key information about each study. Herby et al. did so. (Note: systematic reviews should not be confused with integrative reviews in which authors are less systematic and are giving background info.)
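If “treating all the data from those studies as a single study” feels abstract, here is a toy Python sketch of inverse-variance pooling, one common meta-analytic approach. The effect sizes and standard errors are invented and are not taken from Herby et al.; the sketch only shows the pooling mechanics.

```python
# Toy fixed-effect meta-analysis using inverse-variance weighting.
# All numbers are hypothetical, for illustration only.
import numpy as np

effects = np.array([-0.5, 0.2, -0.1])  # each study's estimated effect
std_errs = np.array([0.4, 0.3, 0.5])   # each study's standard error

weights = 1 / std_errs**2              # more precise studies get more weight
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
print(f"Pooled effect = {pooled:.2f} (SE {pooled_se:.2f})")
```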

For example, from Herby et al.’s study cited above: “This study employed a systematic search and screening procedure in which 18,590 studies are identified… After three levels of screening, 34 studies ultimately qualified. Of those 34 eligible studies, 24 qualified for inclusion in the meta-analysis. They were separated into three groups: lockdown stringency index studies, shelter-in-place-order (SIPO) studies, and specific [non-pharmaceutical intervention] NPI studies. An analysis of each of these three groups support the conclusion that lockdowns have had little to no effect on COVID-19 mortality.”

See the full publication below. And rather than reading it beginning to end, first 1) read the abstract; 2) identify the parameters used to select the 34 eligible studies and the 24 meta-analysis studies; 3) scan the table of evidence; and 4) read the discussion beginning on page 40. Then read the complete article, and cut yourself some slack: just try to understand what you can, depending on your research expertise.

What do you think? Are the studies that support their conclusions strong? What are the SCIENTIFIC objections to their conclusions? What do they identify as policy implications, and do you agree or disagree?

[NOTE THAT THIS ARTICLE LINK MAY BE GOOD FOR ONLY 30 DAYS, but a librarian can help you get it after that.] Happy evidence hunting.

Research: What it is and isn’t

WHAT RESEARCH IS

Research is using the scientific process to ask and answer questions by examining new or existing data for patterns. The data are measurements of variables of interest. The simplest definition of a variable is that it is something that varies, such as height, income, or country of origin. For example, a researcher might be interested in collecting data on triceps skin fold thickness to assess the nutritional status of preschool children. Skin fold thickness will vary.

Research is often categorized in different ways in terms of: data, design, broad aims, and logic.

  • Data. Studies may gather quantitative data (numbers) or qualitative data (words and narratives).
  • Design. Study design is the overall plan for conducting a research study, and there are three basic designs: descriptive, correlational, and experimental.
    1. Descriptive research attempts to answer the question, “What exists?” It tells us what the situation is, but it cannot explain why things are the way they are. e.g., How much money do nurses make?
    2. Correlational research answers the question, “What is the relationship?” between variables (e.g., age and attitudes toward work). It cannot explain why those variables are or are not related. e.g., the relationship between nurse caring and patient satisfaction.
    3. Experimental research tries to answer “Why?” questions by examining cause-and-effect connections. e.g., gum chewing after surgery speeds return of bowel function; gum chewing is the potential cause, or “the why.”
  • Aims. Studies, too, may be either applied research or basic research. Applied research is when the overall purpose of the research is to uncover knowledge that may be immediately used in practice (e.g., whether a scheduled postpartum quiet time facilitates breastfeeding). In contrast, basic research is when the new knowledge has no immediate application (e.g., identifying receptors on a cell wall).
  • Logic. Study logic may be inductive or deductive. Inductive reasoning is used in qualitative research; it starts with specific bits of information and moves toward generalizations [e.g., This patient’s pain is reduced after listening to music (specific); that means that music listening reduces all patients’ pain (general)]. Deductive reasoning is typical of quantitative research; it starts with generalizations and moves toward specifics [e.g., If listening to music relaxes people (general), then it may reduce post-operative pain (specific)]. Of course the logical conclusions in each case should be tested with research!

WHAT RESEARCH IS NOT:

Research as a scientific process is not going to the library or searching online to find information. It is also different from processes of applying research and non-research evidence to practice (called Evidence-Based Practice or EBP). And it is not the same as Quality Improvement (QI). See Two Roads Diverged for a flowchart to help differentiate research, QI and EBP.

“Two roads diverged in a yellow wood…” R.Frost

TIME TO REPUBLISH THIS ONE:

Below is my adaptation of one of the clearest representations that I have ever seen of when the roads diverge into quality improvement, evidence-based practice, & research. Well done, Dr. E. Schenk PhD, MHI, RN-BC! [QI-EBP-Research flow chart]