
New research: Mindfulness

Check out the newest research and add your critique in the comments.

“Evidence suggests that mindfulness training using a phone application (app) may support neonatal intensive care unit (NICU) nurses in their high stress work.” https://journals.lww.com/advancesinneonatalcare/Abstract/9900/The_Effect_of_a_Mindfulness_Phone_Application_on.63.aspx

The Effect of a Mindfulness Phone Application on NICU Nurses’ Professional Quality of Life

by Egami, Susan MSN, RNC-NIC, IBCLC; Highfield, Martha E. Farrar PhD, RN

Editor(s): Dowling, Donna PhD, RN; Newberry, Desi M. DNP, NNP-BC; Parker, Leslie PhD, APRN, FAAN (Section Editors)

Advances in Neonatal Care, published ahead of print, April 10, 2023. DOI: 10.1097/ANC.0000000000001064

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted to read only the Introduction & Background, then go straight to the Discussion, Implications, and Conclusions at the end, skipping all those pesky procedures, numbers, and p values in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale erratically measured your weight each a.m., you would probably toss it and find a more reliable and valid bathroom scale. The data from that old scale would be useless in learning how much you weighed. Similarly, in research the researcher wants useful outcome data, and to get that quality data the researcher must collect it with a measurement instrument that consistently (reliably) measures what it claims to measure (validity). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: Researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. And whatever instrument the researcher uses needs to yield accurate data.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that measures systolic and diastolic BP without a lot of artifact or interference. That accuracy in measuring only BP (and nothing else) is called instrument validity. Then, if I take your BP 3 times in a row, I should get basically the same answer; that consistency is called instrument reliability. I must also use the cuff as intended–correct cuff size and placement–in order to get quality data that reflect the subject’s actual BP.
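If you like to see the idea in numbers, here is a minimal sketch in Python (my illustration, with invented readings, not data from any study) of how test-retest reliability is often estimated: measure the same subjects twice and correlate the two sets of readings.

    # Test-retest reliability sketch: the same cuff, the same subjects,
    # two readings each. All numbers are invented for illustration.
    from statistics import correlation  # requires Python 3.10+

    first_reading = [118, 132, 125, 140, 110, 128]   # hypothetical systolic BPs
    second_reading = [120, 130, 126, 138, 112, 127]  # same subjects, re-measured

    # A Pearson r near 1.0 suggests the instrument measures consistently
    r = correlation(first_reading, second_reading)
    print(f"Test-retest reliability (Pearson r): {r:.2f}")

An erratic instrument–like that old bathroom scale–would produce a much lower r.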

The same thing is true with questionnaires or other measurement tools. A researcher must use an instrument for the intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not their pain, depression, or anxiety)–in other words it should have instrument validity. It should measure stress without a lot of artifacts or interference from other states of mind.

NO instrument is 100% valid–it’s a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures other things besides stress–and it will–it is less valid. The question you should ask is, “How valid is the instrument?” Validity is often expressed on a 0 to 1 scale, with 1 being unachievable perfection. The same issue and question apply to reliability.
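To make that 0-to-1 idea concrete, here is a small Python sketch of Cronbach’s alpha, the internal-consistency (reliability) coefficient mentioned earlier. The item scores below are invented for illustration; real psychometric work uses far larger samples.

    # Cronbach's alpha sketch: rows = respondents, columns = items on a
    # hypothetical 4-item, 1-to-5 stress scale. All scores are invented.
    from statistics import pvariance

    responses = [
        [4, 5, 4, 4],
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [3, 3, 2, 3],
        [1, 2, 1, 2],
    ]

    k = len(responses[0])                  # number of items
    items = list(zip(*responses))          # transpose: one tuple per item
    sum_item_vars = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in responses])

    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals);
    # values near 1 mean the items hang together (internal consistency)
    alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)
    print(f"Cronbach's alpha: {alpha:.2f}")   # about 0.96 for these made-up data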

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Or, an instrument can consistently (reliably) measure the wrong thing–that is, it can measure something other than what the researcher intended to measure. Research instruments need both strong reliability AND validity to be most useful; they need to measure the outcome variable of interest consistently.

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”
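Predictive validity can also be put on that 0-to-1 scale. Below is a hypothetical Python sketch correlating an invented admission test score with NCLEX success (1 = pass, 0 = fail); none of these numbers come from real SAT, ACT, or NCLEX data.

    # Predictive validity sketch: does an admission score predict a later
    # pass/fail outcome? All numbers are invented for illustration.
    from statistics import correlation  # requires Python 3.10+

    admission_score = [82, 75, 91, 68, 88, 79, 95, 71]
    nclex_pass =      [ 1,  0,  1,  0,  1,  1,  1,  0]

    # Pearson r with a binary outcome (a point-biserial correlation);
    # the closer to 1, the stronger the predictive validity
    r = correlation(admission_score, nclex_pass)
    print(f"Predictive validity (point-biserial r): {r:.2f}")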

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures

Fun Practice: In your own words, relate the following article excerpt to the concept of validity: “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring” (p. 3). From Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4

EBP: OpEd – What it is and what it isn’t

Evidence-based nursing. I have heard and seen the terms evidence-based nursing & evidence-based practice sometimes misused by well-educated RNs. Want to know what it is? Here’s the secret (or at least some things Dr. Ingersoll says you should think about): https://www.nursingoutlook.org/article/S0029-6554(00)76732-7/pdf

First, she rightly differentiates 2 processes: research as discovery and evidence-based practice as application.

Ingersoll also argues that best evidence may include more than the much-vaunted systematic reviews or randomized controlled trials. Relying only on systematic, scientific research findings, she argues, is not enough to guide evidence-based practice. Her arguments provide a basis for discussion with those who might disagree.

Positivist pyramid

[Note: Ingersoll uses the term “positivist thinking” at one point. For those uncertain about the term, I would define positivists as those who assume that reality and truth are objective, measurable, and discoverable by a detached, impartial researcher. Positivism underlies the empirical scientific process that most readers think of when they hear the word research.]

Do you agree with her that anecdotal and traditional knowledge make valuable contributions to evidence-based practice? Your thoughts about her thoughts?

“How many articles are enough?” Is that even the right question?

How do you know when you have found enough research evidence on a topic to be able to use the findings in clinical practice? How many articles are enough? 5? 50? 100? 1000? Good question!

You have probably heard general rules like these for finding enough applicable evidence:

  1. Stick close to key search terms derived from your PICOT problem statement;
  2. Use only research published in the last 5-7 years, unless it is a “classic”; and
  3. Find randomized controlled trials (RCTs), meta-analyses, & systematic reviews of RCTs that document cause-and-effect relationships.

Yes, those are good strategies. The only problem is that sometimes they don’t work!

Unfortunately, some clinical issues are “orphan topics.” No one has adequately researched them. And while there may be a few, well-done, valuable published studies on the topic, those studies may simply describe bits of the phenomenon or focus on how to measure the phenomenon (i.e., instrument development). They may give us little to no information on correlation and causation. There may be no RCTs. This situation may tempt us just to discard our clinical issue and to wait for more research (or of course to do research), but either could take years.

In her classic 1998 one-page article, “When is enough, enough?” Dr. Carol Deets argues that asking how many research reports we need before applying the evidence may be the wrong question! Instead, she proposes, we should ask, “What should be done to evaluate the implementation of research findings in the clinical setting?”

When research evidence is minimal, then careful process and outcome evaluation of its use in clinical practice can: 1) Keep patient safety as the top priority, 2) Document cost-effectiveness and efficacy of new interventions, and 3) Facilitate swift, ethical use of findings that contributes to nursing knowledge. At the same time, Deets recognizes that for many this idea may be revolutionary, requiring us to change the way we think.

So back to the original question…How many articles are enough? Deets’ answer? “One study is enough” if we build in strong evaluation as we translate it into practice.

Reference: Deets, C. (1998). When is enough, enough? Journal of Professional Nursing, 14(4), 196. https://doi.org/10.1016/S8755-7223(98)80058-6

simply put: the step-by-step research process

Research is not all white lab coats and test tubes. Simply put, research is a systematic way to ask and answer your questions by looking for patterns in new or existing data. Typical steps are shown clockwise in Figure 1 below.

Begin by identifying your problem clearly and concisely. A great way to do that is using the acronym PICO. (Learn how to use PICO by clicking here.)

In Figure 1 below, I’ve included the step of IRB review. Remember that an IRB (institutional review board, AKA human subjects review board) must review all research procedures for compliance with federal ethical and legal rules before you begin any data collection or subject contact.

Search discoveringyourinnerscientist.com for what I’ve already written on some of these steps, and watch for more in upcoming posts.

Figure 1. Research Process Summary