New research: Mindfulness

Check out the newest and add your critique in comments.

“Evidence suggests that mindfulness training using a phone application (app) may support neonatal intensive care unit (NICU) nurses in their high stress work.” https://journals.lww.com/advancesinneonatalcare/Abstract/9900/The_Effect_of_a_Mindfulness_Phone_Application_on.63.aspx

The Effect of a Mindfulness Phone Application on NICU Nurses’ Professional Quality of Life

by Egami, Susan MSN, RNC-NIC, IBCLC; Highfield, Martha E. Farrar PhD, RN

Editor(s): Dowling, Donna PhD, RN; Newberry, Desi M. DNP, NNP-BC; Parker, Leslie PhD, APRN, FAAN (Section Editors)

Advances in Neonatal Care, published ahead of print, April 10, 2023. DOI: 10.1097/ANC.0000000000001064

Correlation Studies: Primer on Design Part 2

REMEMBER:

Research design = overall plan for a study.

The 2 major categories of research study design are:

  1. Non-experimental (observation-only) studies, &
  2. Experimental studies that test an intervention.

Correlation study designs are in that first category. Correlation studies focus on whether changes in at least one variable are statistically related to changes in another. In other words, do two or more variables change at the same time?

Such studies do not test whether one variable causes change in the other. Instead, they are analogous to the chicken-and-egg dilemma: one can confirm that the numbers of chickens and eggs are related to each other, but no one can say which came first or which caused the other. Correlation study questions may take this form, “Is there a relationship between changes in [variable x] and changes in [variable y]?” while a correlation hypothesis might be a prediction that, “As [variable x] increases, [variable y] decreases.”

An example of a question appropriate to this design is, “Are nurses’ age and educational levels related to their professional quality of life?” Sometimes a yet-unidentified, mediating variable may be creating the changes in one or all correlated variables. For example, older, more educated nurses may be more likely to choose certain work settings with a high professional quality of life; in that case the mediating variable of work setting, not age or education, might be creating a particular professional quality of life.
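
For readers who want to see what “statistically related” looks like in practice, here is a minimal sketch, in Python, of computing a correlation coefficient. The variable names and numbers are invented for illustration only; they do not come from any study cited here.

```python
# Minimal sketch: a Pearson correlation between two hypothetical variables.
from scipy import stats

years_experience = [2, 5, 8, 12, 15, 20, 25]          # hypothetical "variable x"
prof_quality_of_life = [55, 60, 62, 70, 68, 75, 80]   # hypothetical "variable y"

r, p_value = stats.pearsonr(years_experience, prof_quality_of_life)
print(f"r = {r:.2f}, p = {p_value:.3f}")
# A positive r near 1 means the two variables rise together;
# it says nothing about which one (if either) causes the other.
```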

Alert! Correlation is not causation.

The biggest enemy was not Russia

Check out this explanation of the famous rose plot about preventable deaths of soldiers!! Lessons to be learned today.

How to speak to stakeholders. How to change nursing.

https://www.youtube.com/watch?v=JZh8tUy_bnM

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted to read only the Introduction & Background, then go straight to the Discussion, Implications, and Conclusions at the end. You skip all those pesky procedures, numbers, and p levels in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale erratically measured your weight each a.m., you probably would toss it and find a more reliable and valid bathroom scale. The data from that old bathroom scale would be useless in learning how much you weighed. Similarly in research, the researcher wants useful outcome data. And to get that quality data, the researcher must collect it with a measurement instrument that consistently (reliably) measures what it claims to measure (validity). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: Researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. And whatever instrument the researcher uses needs to give accurate data.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that collects systolic and diastolic BP without a lot of artifacts or interference. That accuracy in measuring only BP is called instrument validity. Then if I take your BP 3 times in a row, I should get basically the same answer, and that consistency is called instrument reliability. I must also use the cuff as intended–correct cuff size and placement–in order to get quality data that reflects the subject’s actual BP.

The same thing is true with questionnaires or other measurement tools. A researcher must use an instrument for the intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not their pain, depression, or anxiety)–in other words it should have instrument validity. It should measure stress without a lot of artifacts or interference from other states of mind.

NO instrument is 100% valid–it’s a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures other things besides stress–and it will–it is less valid. The question you should ask is, “How valid is the instrument?” The answer is often expressed on a 0 to 1 scale, with 1 being unachievable perfection. The same issue and question apply to reliability.

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Or, an instrument can consistently (reliably) measure the wrong thing–that is, it can measure something other than what the researcher intended to measure. Research instruments need both strong reliability AND validity to be most useful; they need to measure the outcome variable of interest accurately and consistently.

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures
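
To make those two summary terms concrete, here is a minimal sketch, in Python, of two reliability statistics you will meet in Methods sections: test-retest reliability and Cronbach’s alpha. All of the scores are invented for illustration; no real scale or dataset is implied.

```python
# Minimal sketch: two common instrument-reliability statistics (invented data).
import numpy as np

# Test-retest reliability: give the same instrument twice, then correlate the scores.
time1 = np.array([10, 14, 18, 22, 26, 30])
time2 = np.array([11, 13, 19, 21, 27, 29])
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# Cronbach's alpha: internal consistency of a multi-item scale
# (rows = respondents, columns = items on the scale).
items = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
    [5, 4, 5, 5],
])
k = items.shape[1]
sum_of_item_variances = items.var(axis=0, ddof=1).sum()
variance_of_total_scores = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - sum_of_item_variances / variance_of_total_scores)

print(f"test-retest r = {test_retest_r:.2f}")
print(f"Cronbach's alpha = {alpha:.2f}")
# Values closer to 1 indicate more consistent (reliable) measurement.
```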

Fun Practice: In your own words, relate the following article excerpt to the concept of validity: “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring” (p. 3). From Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4

Bates Center Seminar Series – Diabetes: A History of Race and Disease

Speaker: Arleen Tuchman, PhD, Vanderbilt University

Date and Time: Wednesday, April 6, 2022, 4:00pm EDT, virtual BlueJeans event

Abstract: Who is considered most at risk for diabetes, and why? In this talk, Tuchman discusses how, at different times over the past one hundred years, Jews, Native Americans, and African Americans have been labeled most at risk for developing diabetes, and that such claims have reflected and perpetuated troubling assumptions about race, ethnicity, and class. As Tuchman shows, diabetes also underwent a mid-century transformation in the public’s eye from being a disease of wealth and “civilization” to one of poverty and “primitive” populations. In tracing this cultural history, Tuchman argues that shifting understandings of diabetes reveal just as much about scientific and medical beliefs as they do about the cultural, racial, and economic milieus of their time.

Bio: Arleen Tuchman is a specialist in the history of medicine in the United States and Europe, with research interests in the cultural history of health, disease, and addiction; the rise of scientific medicine; and scientific and medical constructions of gender and sexuality. She is the author of three books, the most recent being Diabetes: A History of Race and Disease (Yale University Press, 2020). She is currently working on a history of addiction and the family in the United States.

Tuchman has held many fellowships, including ones from the American Council of Learned Societies, the National Institutes of Health, and the National Endowment for the Humanities.
Tuchman is a past director of Vanderbilt University’s Center for Medicine, Health, and Society (2006-2009) and has, since 2019, been the co-creator of a historic medicinal garden on Vanderbilt University’s campus.

Register here.

Unexpected Evidence in the Science of Lockdowns

Headlines are blaring: “New study shows that lockdowns had minimal effect on COVID-19 mortality.”

The January 2022 systematic review and meta-analysis that underlies that news is Herby, Jonung, & Hanke’s “A Literature Review and Meta-Analysis of the Effects of Lockdowns on COVID-19 Mortality” in Applied Economics.

Scientists label systematic reviews and meta-analyses as the strongest type of scientific evidence (pyramid of evidence). Of course the strength of the systematic review/meta-analysis depends on whether it is well or poorly done, so never put your research-critique brain in neutral. This one seems well done.

In systematic reviews, researchers follow a methodical, focused process that describes their selection and analysis of all studies on a topic. Meta-analyses then statistically combine the data from those selected studies as if they were a single large study. Researchers will specify their process and parameters for selecting studies, and they typically publish a table of evidence that summarizes key information about each study. Herby et al. did so. (Note: systematic reviews should not be confused with integrative reviews, in which authors are less systematic and give background info.)

For example, from Herby et al.’s study cited above: “This study employed a systematic search and screening procedure in which 18,590 studies are identified… After three levels of screening, 34 studies ultimately qualified. Of those 34 eligible studies, 24 qualified for inclusion in the meta-analysis. They were separated into three groups: lockdown stringency index studies, shelter-in-place-order (SIPO) studies, and specific [non-pharmaceutical intervention] NPI studies. An analysis of each of these three groups support the conclusion that lockdowns have had little to no effect on COVID-19 mortality.”
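
For readers curious about how a meta-analysis actually combines studies, here is a minimal sketch of inverse-variance (fixed-effect) pooling, one common way per-study estimates are averaged into a single result. The effect estimates and standard errors below are hypothetical; they are not taken from Herby et al.

```python
# Minimal sketch: fixed-effect, inverse-variance pooling (hypothetical data).
import numpy as np

effects = np.array([-0.02, 0.01, -0.05, 0.00])   # per-study effect estimates
std_errors = np.array([0.03, 0.02, 0.04, 0.05])  # per-study standard errors

weights = 1 / std_errors**2                           # more precise studies weigh more
pooled = np.sum(weights * effects) / np.sum(weights)  # weighted average effect
pooled_se = np.sqrt(1 / np.sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled effect = {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
# A confidence interval that straddles 0 is consistent with "little to no effect."
```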

See the full publication below. And rather than reading it beginning to end, first 1) read the abstract, 2) identify the parameters used to select the 34 eligible studies and the 24 meta-analysis studies, 3) scan the table of evidence, and 4) read the discussion beginning on page 40. Then read the complete article, and cut yourself some slack–just try to understand what you can, depending on your research expertise.

What do you think? Are the studies that support their conclusions strong? What are the SCIENTIFIC objections to their conclusions? What do they identify as policy implications, and do you agree or disagree?

[NOTE THAT THIS ARTICLE LINK MAY BE GOOD FOR ONLY 30 DAYS, but a librarian can help you get it after that.] Happy evidence hunting.

Revisiting Field Medicine – My post from 3/20/2020

Field medicine = healthcare delivered in the non-hospital context when higher-level technical care is not available.

EBP: OpEd – What it is and what it isn’t

Evidence-based nursing. I have heard and seen the terms evidence-based nursing & evidence-based practice sometimes misused by well-educated RNs. Want to know what it is? Here’s the secret (or at least some things you should think about, says Dr. Ingersoll). https://www.nursingoutlook.org/article/S0029-6554(00)76732-7/pdf

First, she rightly differentiates 2 processes: research as discovery and evidence-based practice as application.

Ingersoll also argues that best evidence may include more than the much-vaunted systematic reviews or randomized controlled trials. Relying only on systematic, scientific research findings, she argues, is not enough to guide evidence-based practice. Her arguments provide a basis for discussion with those who might disagree.

Positivist pyramid

[Note: Ingersoll uses the term “positivist thinking” at one point. For those uncertain about the term, I would define positivists as those who assume that reality and truth are objective, measurable, and discoverable by a detached, impartial researcher. Positivism underlies the empirical scientific process that most readers think of when they hear the word research.]

Do you agree with her that anecdotal and traditional knowledge make valuable contributions to evidence-based practice? Your thoughts about her thoughts?

“How many articles are enough?” Is that even the right question?

How do you know when you have found enough research evidence on a topic to be able to use the findings in clinical practice? How many articles are enough? 5? 50? 100? 1000? Good question!

You have probably heard general rules like these for finding enough applicable evidence: Stick close to key search terms derived from your PICOT problem statement; Use only research published in the last 5-7 years unless it is a “classic”; & Find randomized controlled trials (RCTs), meta-analyses, & systematic reviews of RCTs that document cause-and-effect relationships. Yes, those are good strategies. The only problem is that sometimes they don’t work!

Unfortunately, some clinical issues are “orphan topics.” No one has adequately researched them. And while there may be a few well-done, valuable published studies on the topic, those studies may simply describe bits of the phenomenon or focus on how to measure the phenomenon (i.e., instrument development). They may give us little to no information on correlation or causation. There may be no RCTs. This situation may tempt us just to discard our clinical issue and wait for more research (or, of course, to do the research ourselves), but either could take years.

In her classic 1998 one-page article, “When is enough, enough?” Dr. Carol Deets argues that asking how many research reports we need before applying the evidence may be the wrong question! Instead, she proposes, we should ask, “What should be done to evaluate the implementation of research findings in the clinical setting?”

When research evidence is minimal, careful process and outcome evaluation of its use in clinical practice can: 1) Keep patient safety as the top priority, 2) Document cost-effectiveness and efficacy of new interventions, and 3) Facilitate swift, ethical use of findings that contributes to nursing knowledge. At the same time, Deets recognizes that for many this idea may be revolutionary, requiring us to change the way we think.

So back to the original question…How many articles are enough? Deets’ answer? “One study is enough” if we build in strong evaluation as we translate it into practice.

Reference: Deets, C. (1998). When is enough, enough? Journal of Professional Nursing, 14(4), 196. https://doi.org/10.1016/S8755-7223(98)80058-6

“Remember our sons & Daughters:” an analysis of Igbo women’s petitionary letters to US missionary nurses

This is the link to a Panopto video that was to be presented at the International Nurse Christian Fellowship Conference July 10. https://csun.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=c1ed56b0-4c4c-4c85-9a94-ad0e016f7323

Unfortunately, technical difficulties interfered. Let me know if you have difficulty accessing it. My email is martha.highfield@csun.edu.

Making research accessible to RNs
