
New Book Strives to Make the Difficult Simple

Doing Research: A practical guide for health professionals, a new book by Martha E. Farrar Highfield, is in press with Springer Nature. Release date: February 1, 2025 (preorder available).

Practical, brief, and affordable, Doing Research is for residents, nurses, chaplains, and other clinicians.

Written in an informal, friendly style, this book makes the difficult simple.

The purpose of Doing Research is to empower curious clinicians to conduct research alongside a mentor, even when they lack prior research experience or formal training.

Doing Research presents practical steps for conducting a study from beginning to end. It begins with “a dozen tips” to get started, then moves to study planning, conduct, and dissemination of results. A worksheet for writing your research plan (protocol) is included. Research terms and processes are explained, including what research is and is not. Tips & Alerts provide a “reassuring voice” and alert readers to common missteps.

Primer on Design: Part 3 – Mixing UP Methods

QUICK REVIEW: Research design is the overall plan for a study. And…there are 2 main types of design: 1) Non-experiments, in which the researcher observes and documents what exists, and 2) Experiments, in which the researcher tries out an intervention and measures outcomes.

NEW INFO: Two non-experimental research designs that are often confused with one another are: 1) cohort studies & 2) case-control studies. Epidemiologists often use these designs to study large populations.

In a cohort study, a group of participants who were exposed to a presumed cause of disease or injury is followed into the future (prospectively) to identify emerging health issues. Researchers may also look at the group’s past (retrospectively) to determine how much exposure is related to health outcomes.

In contrast, in a case-control study, participants with a disease or condition (cases) and others without it (controls) are compared retrospectively to determine their past exposure to a presumed cause.
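The two designs also yield different effect measures. Here is a minimal Python sketch, using made-up counts (not from any real study), of the relative risk a cohort study can estimate directly and the odds ratio a case-control study reports instead:

```python
# Hypothetical 2x2 counts (illustration only, not from any real study):
# rows = exposed / unexposed; columns = disease / no disease
exposed_disease, exposed_healthy = 30, 70
unexposed_disease, unexposed_healthy = 10, 90

# Cohort studies follow whole groups forward, so they can estimate RISK directly:
risk_exposed = exposed_disease / (exposed_disease + exposed_healthy)
risk_unexposed = unexposed_disease / (unexposed_disease + unexposed_healthy)
relative_risk = risk_exposed / risk_unexposed

# Case-control studies sample on disease status (cases vs. controls), so risk
# itself cannot be computed; they estimate the ODDS RATIO instead:
odds_ratio = (exposed_disease * unexposed_healthy) / (exposed_healthy * unexposed_disease)

print(f"Relative risk (cohort): {relative_risk:.1f}")
print(f"Odds ratio (case-control): {odds_ratio:.2f}")
```

Because a case-control researcher decides how many cases and controls to recruit, the proportion with disease in the sample is artificial; that is why only the odds ratio is recoverable from that design.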

EXAMPLES?

  1. Martinez-Calderon et al. (2017). Influence of psychological factors on the prognosis of chronic shoulder pain: Protocol for a prospective cohort study. BMJ Open, 7. doi: 10.1136/bmjopen-2016-012822
  2. Smith et al. (2019). An outbreak of hepatitis A in Canada: The use of a control bank to conduct a case-control study. Epidemiology & Infection, 147. doi: 10.1017/S0950268819001870

CRITICAL THINKING: Do you work with a group that has an interesting past of exposure to some potential cause of disease or injury? Which of the above designs do you find more appealing and why?

The Whole Picture: Mixed Methods Design

Mixed methods (MM) research provides a more complete picture of reality by including both complementary quantitative and qualitative data.

A clinical analogy for MM research is asking patients to rate their pain numerically on a 0–10 scale and then to describe the pain character in words.

MM researchers sometimes include both experimental hypotheses and non-experimental research questions in the same study.


Common MM subtypes appear in the table below. In concurrent designs, investigators collect all data at the same time; in sequential designs, they collect one type of data before the other. In triangulated MM, both types of data receive equal weight, but in embedded designs, such as a large RCT in which only a small subset of participants is interviewed, the main study data are weighted more heavily. In sequential MM, researchers give more weight to whichever type of data is collected first: qualitative data in exploratory designs and quantitative data in explanatory designs.

FOR MORE INFO: WHAT IS MIXED METHODS RESEARCH? – Dr. John Creswell

MM DESIGN | EQUALLY WEIGHTED DATA | PRIORITY WEIGHTED DATA
Concurrent: *Triangulation | All data |
Concurrent: *Embedded | | Main study data
Sequential: *Exploratory | | Qualitative data
Sequential: *Explanatory | | Quantitative data
TYPES OF MM DESIGN: Concurrent & Sequential

New research: Mindfulness

Check out the newest research and add your critique in the comments.

“Evidence suggests that mindfulness training using a phone application (app) may support neonatal intensive care unit (NICU) nurses in their high stress work.” https://journals.lww.com/advancesinneonatalcare/Abstract/9900/The_Effect_of_a_Mindfulness_Phone_Application_on.63.aspx

The Effect of a Mindfulness Phone Application on NICU Nurses’ Professional Quality of Life

by Egami, Susan MSN, RNC-NIC, IBCLC; Highfield, Martha E. Farrar PhD, RN

Editor(s): Dowling, Donna PhD, RN; Newberry, Desi M. DNP, NNP-BC; Parker, Leslie PhD, APRN, FAAN, Section Editors

Advances in Neonatal Care, April 10, 2023. DOI: 10.1097/ANC.0000000000001064

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted to read only the Introduction & Background, then go straight to the Discussion, Implications, and Conclusions at the end. You skip all those pesky procedures, numbers, and p levels in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale erratically measured your weight each a.m., you would probably toss it and find a more reliable and valid one. The data from that old scale would be useless in learning how much you weighed. Similarly, a researcher wants useful outcome data, and to get quality data the researcher must collect it with an instrument that measures consistently (reliability) and measures what it claims to measure (validity). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: Researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. And whatever instrument the researcher uses needs to give them correct data answers.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that measures systolic and diastolic BP without a lot of artifact or interference. That accuracy in measuring only BP is called instrument validity. Then, if I take your BP 3 times in a row, I should get basically the same answer; that consistency is called instrument reliability. I must also use the cuff as intended–correct cuff size and placement–in order to get quality data that reflect the subject’s actual BP.

The same thing is true with questionnaires or other measurement tools. A researcher must use an instrument for the intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not their pain, depression, or anxiety)–in other words it should have instrument validity. It should measure stress without a lot of artifacts or interference from other states of mind.

NO instrument is 100% valid–it’s a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures other things besides stress–and it will–it is less valid. The question to ask is, “How valid is the instrument?” Validity is often expressed on a 0 to 1 scale, with 1 being unachievable perfection. The same issue and question apply to reliability.

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Conversely, an instrument can consistently (reliably) measure the wrong thing–that is, something other than what the researcher intended to measure. Research instruments need both strong reliability AND strong validity to be most useful; they must measure the outcome variable of interest consistently and accurately.

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures
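To put numbers on these ideas, here is a small Python sketch using made-up questionnaire scores (all values are hypothetical, not from any real instrument). It computes Cronbach’s alpha (internal consistency across the items of a scale) and a test-retest correlation (stability over time), two common ways reliability is reported on that 0-to-1 scale:

```python
from statistics import pvariance, mean

# Made-up scores from 5 respondents on a 3-item stress questionnaire,
# each item rated 1-5. Illustrative numbers only.
items = [
    [4, 5, 4, 2, 3],  # item 1, one score per respondent
    [3, 5, 4, 1, 3],  # item 2
    [4, 4, 5, 2, 2],  # item 3
]

# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
k = len(items)
totals = [sum(scores) for scores in zip(*items)]        # each respondent's total
sum_item_var = sum(pvariance(scores) for scores in items)
alpha = k / (k - 1) * (1 - sum_item_var / pvariance(totals))

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Test-retest reliability: same respondents, a second administration (made up).
retest_totals = [11, 13, 14, 5, 8]
print(f"Cronbach's alpha: {alpha:.2f}")
print(f"Test-retest r: {pearson(totals, retest_totals):.2f}")
```

Both results land near 1 for these invented numbers, which is what a researcher hopes to see; values drifting toward 0 would signal an inconsistent (unreliable) instrument.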

Fun Practice: In your own words, relate the following article excerpt to the concept of validity: “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring” (p. 3). Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4

“It’s All in The Name!” Titles of Research Articles

Research articles have relatively standardized sections:

• Title 
• Abstract (overview of project that is somewhat incomplete)
• Introduction (purpose, problem, & background)
• Methods (sample, setting, measurements collected)
• Results (data analysis from measurements), &
• Discussion/conclusions (what the data analysis tells us about the original purpose & problem)
These may vary a little from article to article.

Let’s look at the TITLE for a minute. A good title is a mini-abstract. A good title will include:
• Key variables (remember a variable is something that varies, such as fatigue or satisfaction)
• Population studied
• Setting of study
• Design of study

For example, take the research article title “What patients with abdominal pain expect about pain relief in the Emergency Department” by Yee et al. (2006) in JEN.
• Key thing that varies? Expectations about pain relief
• Population studied? ED patients with abdominal pain
• Setting? Likely the ED
• Design? (not included, but those with experience in reading research would guess that it is probably a descriptive study—in other words it just describes the patients’ expectations without any intervention.)

There you have it! Now you know about TITLES!!

New Antibiotic Found in Human Nose

Useless trivia, but an interesting old quote from a detective on the ancient “Alvin & the Chipmunks” cartoon: “Everyone with a nose knows the nose knows everything.”

Check out the very interesting story about a new antibiotic that may fight MRSA and VRE.  A much needed medicinal weapon.  Still lots we don’t know about how well it will work in humans and resistance to it or other unintended consequences.

Want more info? See this article by Kai Kupferschmidt (Jul. 27, 2016): http://www.sciencemag.org/news/2016/07/new-antibiotic-found-human-nose

Critical thinking: What do you already do to avoid adding to microbial resistance?

 

Afraid to Relieve Pain? You may have Opiophobia

In pain management, are you afraid to give comfort to your patients with appropriate medications? Are you afraid to be comforted when in pain? Have you encountered families or care partners who are afraid to comfort their loved one in pain by giving pain medications?

In a classic 2002 qualitative study, “Fearing to Comfort,” Zerwekh, Riddell, & Richard identified that RNs, physicians, patients, families, and health systems were afraid to relieve pain with appropriate use of pain medications. They were not doing evidence-based practice, but fear-based practice.

Fear barriers include, but are not limited to: 1) patients’ fear of addiction, fear of distracting the MD from the main treatment plan, and fear of loss of control; 2) MDs’ avoiding the needs of the dying, fear of rewarding drug-seekers, or equating pain management with euthanasia; 3) RNs’ avoiding pain, failing to switch to palliative goals at end of life, and fear of killing the patient; 4) families’ fears of addiction, side effects, & killing their loved one; and 5) health facilities’ not giving unique consideration to those at end of life, inadequate staffing, & time constraints (Zerwekh et al., 2002).

This is an issue because irrational problems cannot be solved simply by giving rational information. We have to find evidence-based practices that can create a change of heart, if you will. As Zerwekh et al. wrote: “Because fear is so influential in decisions to keep pain under control, palliative educational approaches must go beyond providing information to fill deficits in palliative knowledge.”
We must learn evidence-based ways to overcome fear and control pain. Why? Because pain interferes with living life. Who are we protecting when we fear appropriate pain medications? Not the patient.

Remedy? Palliative care education must confront fears and remove them through cognitive restructuring, which includes learning to question beliefs about addiction and the like. Other strategies include role playing, role modeling, and having an expert walk through the process with the provider or family member who is afraid. Beyond this, we can help people recognize their own fears of pain & death and provide the very best available information on pain management (Zerwekh et al.).

CRITICAL THINKING: Have you been afraid? Or seen others afraid? How can you solve this problem using evidence-based practice that = BEST available evidence + clinical judgment + patient/family preferences & values? Be specific, because if you haven’t yet encountered the problem of fearing to comfort, be assured that you will.

FOR MORE INFORMATION:   Read full text Zerwekh et al (2002) online.   It could change your life & the life of those for whom you care!!

Google’s Beauty is Only Skin Deep: Go for the Database!

Google–not to mention Yahoo, Bing, & other web search engines–is a mere popularity contest of literature. Google Scholar is a step up, but it is still a search engine. It can miss important articles entirely.

If you want to be sure that you are getting the BEST, you gotta look in the right place to find the right articles on the right topic at the right time!

You need a Database!

Don’t believe me?  Watch “What are databases and why you need them?” (YouTube, 2:34)

Reputable publishers give away very few articles for free, so when you want the best literature out there you need a Database that will systematically help you to find quality articles that fit your topic.

PubMed.gov is a tax-funded database that is highly comprehensive. CINAHL is strong on nursing literature. If you are enrolled in a university, you have access to lots of full-text articles at no added cost. Check with your librarian if your database search is not turning up what you need–with a few hints, you could get the best.

For more info:  Look for that needle in the haystack.

Self-Report Data: “To use or not to use. That is the question.”

[Note: The following was inspired by and benefited from Rob Hoskin’s post at http://www.sciencebrainwaves.com/the-dangers-of-self-report/]

If you want to know what someone thinks or feels, you ask them, right?

The same is true in research, but it is good to know the pros and cons of using the “self-report method” of collecting data to answer a research question. Most often self-report is done in ‘paper & pencil’ or SurveyMonkey form, but it can also be done by interview.

Generally, self-report is easy and inexpensive, and it sometimes facilitates research that might otherwise be impossible. To answer well, respondents must be honest, have insight into themselves, and understand the questions. Self-report is an important tool in much behavioral research.

But using self-report to answer a research question does have its limits. People may tend to answer in ways that make themselves look good (social desirability bias), agree with whatever is presented (social acquiescence bias), answer in extreme terms (extreme response set bias), or always pick the non-committal middle numbers. Another problem occurs if the reliability and validity of the self-report questionnaire are not established. (Reliability is consistency in measurement, and validity is the accuracy of measuring what it purports to measure.) Additionally, self-reports typically provide only a) ordinal-level data, such as on a 1-to-5 scale, b) nominal data, such as on a yes/no scale, or c) qualitative descriptions in words without categories or numbers. (Ordinal data = scores in order, with some numbers higher than others; nominal data = categories. Statistical calculations are limited for both and not possible for qualitative data unless the researcher counts recurring themes or words.)
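Because self-report usually yields ordinal or nominal data, the appropriate summaries differ from those for true numeric measurements. Here is a small Python sketch with made-up responses (illustrative only) showing the safer choices:

```python
from statistics import median, mode

# Made-up self-report data (illustrative only)
likert = [4, 5, 3, 4, 2, 5, 4, 3, 4, 1]    # ordinal: 1 = strongly disagree ... 5 = strongly agree
yes_no = ["yes", "no", "yes", "yes", "no"]  # nominal: categories only

# Ordinal data: the order is meaningful, but the distances between points are
# not, so the median (middle value) is a safer summary than the mean.
print("Median Likert response:", median(likert))

# Nominal data: no order at all; report counts or the most common category.
print("Most common answer:", mode(yes_no))
print("Proportion 'yes':", yes_no.count("yes") / len(yes_no))
```

Averaging Likert numbers is common in practice, but it quietly assumes the gap between “agree” and “strongly agree” equals the gap between “neutral” and “agree,” which self-report scales do not guarantee.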

An example of a self-report measure that we regard as a gold standard for clinical and research data is the 0-10 pain scale score. An example of a self-report measure that might be useful but less preferred is a self-assessment of knowledge (e.g., How strong on a 1-5 scale is your knowledge of arterial blood gas interpretation?). Using it to measure knowledge can be okay as long as everyone understands that it captures perceived level of knowledge.

Critical Thinking: What was the research question in this study? Malara et al. (2016). Pain assessment in elderly with behavioral and psychological symptoms of dementia. Journal of Alzheimer’s Disease, as posted on PubMed.gov at http://www.ncbi.nlm.nih.gov/pubmed/26757042 with link to full text. How did the authors use self-report to answer their research question? Do you see any of the above strengths & weaknesses in their use?

For more information: Be sure to check out Rob Hoskin’s blog: http://www.sciencebrainwaves.com/the-dangers-of-self-report/