Category Archives: reading research

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted to read only the Introduction & Background, then go straight to the Discussion, Implications, and Conclusions at the end. You skip all those pesky procedures, numbers, and p values in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale measured your weight erratically each a.m., you would probably toss it and find a more reliable and valid bathroom scale. The data from that old scale would be useless in learning how much you weigh. Similarly in research, the researcher wants useful outcome data. And to get that quality data, the researcher must collect it with a measurement instrument that consistently (reliably) measures what it claims to measure (validity). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: Researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. And whatever instrument the researcher uses needs to give accurate data.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that measures systolic and diastolic BP without a lot of artifacts or interference. That accuracy in measuring BP (and only BP) is called instrument validity. Then if I take your BP 3 times in a row, I should get basically the same answer, and that consistency is called instrument reliability. I must also use the cuff as intended–correct cuff size and placement–in order to get quality data that reflect the subject’s actual BP.
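Consistency across those 3 readings can even be put into a number: the spread of the repeated readings. Here is a minimal sketch in Python, using made-up systolic readings (the numbers and cuff names are hypothetical, not from any study):

```python
import statistics

def reading_spread(readings):
    """Standard deviation of repeated readings from one instrument.
    A smaller spread means a more consistent (reliable) instrument."""
    return statistics.stdev(readings)

# Hypothetical systolic readings (mmHg), taken 3 times in a row
reliable_cuff = [118, 120, 119]   # readings cluster tightly
erratic_cuff = [105, 131, 118]    # readings swing widely

print(reading_spread(reliable_cuff))  # 1.0 mmHg
print(reading_spread(erratic_cuff))   # 13.0 mmHg
```

A low spread speaks only to consistency, of course; the cuff could still read consistently wrong, which is where validity comes in.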

The same thing is true with questionnaires or other measurement tools. A researcher must use an instrument for the intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not their pain, depression, or anxiety)–in other words it should have instrument validity. It should measure stress without a lot of artifacts or interference from other states of mind.

NO instrument is 100% valid–it’s a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures other things besides stress–and it will–it is less valid. The question you should ask is, “How valid is the instrument?”–often answered on a 0 to 1 scale, with 1 being unachievable perfection. The same issue and question apply to reliability.

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Or, an instrument can consistently (reliably) measure the wrong thing–that is, it can measure something other than what the researcher intended to measure. Research instruments need both strong reliability AND validity to be most useful; they need to measure the outcome variable of interest consistently.
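The bathroom-scale example from earlier makes this distinction concrete. Below is a minimal sketch, with made-up readings and tolerances, of a scale that is reliable but NOT valid–it consistently reads 5 kg heavy:

```python
def is_consistent(readings, tolerance=1.0):
    """Reliability check: do repeated readings agree with one another?"""
    return max(readings) - min(readings) <= tolerance

def is_accurate(readings, true_value, tolerance=1.0):
    """Validity check: does the average reading match the true value?"""
    mean = sum(readings) / len(readings)
    return abs(mean - true_value) <= tolerance

true_weight = 70.0                  # kg, known from a calibrated reference
biased_scale = [75.1, 74.9, 75.0]   # consistent, but reads 5 kg heavy
good_scale = [70.2, 69.9, 70.1]

print(is_consistent(biased_scale))             # True: reliable
print(is_accurate(biased_scale, true_weight))  # False: not valid
print(is_consistent(good_scale) and
      is_accurate(good_scale, true_weight))    # True: both
```

Notice the reverse cannot happen: if the readings scatter wildly, no single reading can be trusted to reflect the true value, so an unreliable instrument cannot be valid.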

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures

Fun Practice: In your own words, how does the following article excerpt relate to the concept of validity? “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring.” From p. 3 of Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4

On Target all the time and every time!

“Measure twice. Cut once!” goes the old carpenter adage. Why? Because measuring accurately means you’ll get the outcomes you want!

Same in research. A consistent and accurate measurement will give you the outcome data you want. Whether an instrument measures something consistently is called reliability. Whether it measures accurately is called validity. So, before you use a tool, check its reported reliability and validity.

A good resource for understanding the concepts of reliability (consistency) and validity (accuracy) of research tools is at https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/ Below are quoted Key Takeaways:

  • Psychological researchers do not simply assume that their measures work. Instead, they conduct research to show that they work. If they cannot show that they work, they stop using them.
  • There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (interrater reliability). Validity is the extent to which the scores actually represent the variable they are intended to.
  • Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces are correlated with other variables they are expected to be correlated with and not correlated with variables that are conceptually distinct.
  • The reliability and validity of a measure is not established by any single study but by the pattern of results across multiple studies. The assessment of reliability and validity is an ongoing process.
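One of those reliability types, internal consistency, is usually reported as the Cronbach’s alpha mentioned at the top of this page. Here is a minimal sketch of the calculation, using made-up 1-5 ratings for a hypothetical 3-item scale:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items` holds one list per scale item; each list has one score
    per respondent."""
    k = len(items)     # number of items
    n = len(items[0])  # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(variance(item) for item in items)
    # Each respondent's total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Made-up 1-5 ratings: 3 scale items answered by 4 respondents
items = [
    [4, 2, 5, 3],
    [4, 3, 5, 2],
    [5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.89
```

A commonly cited rule of thumb is that an alpha of about 0.70 or higher is acceptable for a research scale, so the made-up items above hang together well.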

What’s in a Name?

[this posting back by popular demand]

TITLES!! That’s what you get when you search for research online!

But, whether your search turns up 3 or 32,003 article titles….remember that a title tells you a LOT! In fact, if well-written, it is a mini-abstract of the study.

For example, take this research article title: “What patients with abdominal pain expect about pain relief in the Emergency Department” by Yee et al. in 2006 in JEN.

  • Variable (key factor that varies)? Answer = Expectations about pain relief
  • Population studied? Answer = ED patients with abdominal pain
  • Setting? Answer = Maybe the ED (because they could’ve been surveyed after they got home or were admitted)
  • Design? Answer = Not included, but you might guess that it is a descriptive study because it likely describes the patients’ expectations without any intervention.

There you have it! Now you know about TITLES!!

Now you try. Here’s your title: Gum chewing aids bowel function return and analgesic requirements after bowel surgery: a randomized controlled trial by Byrne CM, Zahid A, Young JM, Solomon MJ, Young CJ in May 2018

  • Variables? (this time there are 3 factors that vary–1 independent variable; & 2 dependent ones connected by “and”) Your answer is……
  • Population? (who is being studied; & if you have trouble identifying variables, identify the population first; then try) Your answer is….
  • Setting? (where; maybe not so clear; might have to go to abstract for this one) Your answer is….
  • Design of study? (it’s right there!) Your answer…..

Congratulate yourself!

Easy to read. Hard to write.

Musings: For me, the most difficult-to-write sections of a research report are the Intro/Background and Discussion. And yet, those are apparently the easiest to read for many. My students, at least, tend to read only those sections and skip the rest.

Why? For the author, Intro/Background and Discussion require hard, critical thinking about what is already known about the topic (Intro/Background) and then what one’s findings mean in light of that (Discussion). For research consumers, though, the language used in these sections is more familiar–ordinary-sounding words. On the other hand, writing the more technical sections (Methods, Instruments, Results) is pretty straightforward, with scientifically standardized vocabulary and structure. But, for readers, those same sections contain potentially unfamiliar research terminology that is not part of everyday conversation–i.e., scientific vocabulary. Quantitative studies often create more reader difficulty.

My solution for myself as a writer? To spend time making sure that the first sentence of every paragraph in Intro/Background and Discussion makes a step-by-step argument supported by the rest of the paragraph. Follow standardized structure for the rest. Keep language as precise yet simple as possible.

Solution for research readers? Read the whole article, understanding what you can, and keep a research glossary handy (e.g., https://sites.google.com/site/nursingresearchaid/week-1). Even if practice doesn’t make you perfect, it works in learning a new language–whether it is a ‘foreign’ language or a scientific one.

Critical Thinking: Test out your reading skills with this article https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6503597/. Do the authors make systematic arguments in Intro/Background & Discussion? What makes this article hard or easy to read?

Happy Summer! -Dr.H

It’s Up to you: Accept the Status Quo or Challenge it

Yes. Change can be painful.

Yes. It is easier to do things the way we’ve always done them (and been seemingly successful).

Yet, most of us want to work more efficiently or improve our own or patients’ health.

 So, there you have the problem: a tension between status quo and change. Perhaps taking the easy status quo is why ‘everyday nurses’ don’t read research.

Ralph (2017) writes of encountering 3 common mindsets that keep nurses stuck in the rut of refusing to examine new research:

  1. I’m not a researcher.
  2. I don’t value research.
  3. I don’t have time to read research.

But, he argues, you have a choice: you can go with the status quo or challenge it (Ralph).  And (admit it), haven’t we all found that the status quo sometimes doesn’t work well so that we end up

  • choosing a “work around,” or
  • ignoring/avoiding the problem or
  • leaving the problem for someone else or
  • ….[well….,you pick an action.]

How to begin solving the problem of not reading research? Think of a topic that is super-interesting to you and make a quick trip to PubMed. Check out a few relevant abstracts and ask your librarian to get the articles for you. Read them in the nurses’ lounge so others can, too.

Let me know how your challenge to the status quo works out.

Bibliography: Ralph, N. (2017, April). Editorial: Engaging with research & evidence is a nursing priority so why are ‘everyday’ nurses not reading the literature. ACORN, 30(3), 3-5. doi: 10.26550/303/3.5. Full text available for download through https://www.researchgate.net/

Goldilocks and the 3 Levels of Data

Actually when it comes to quantitative data, there are 4 levels, but who’s counting? (Besides Goldilocks.)

  1. Nominal (categorical) data are names or categories (gender, religious affiliation, days of the week, yes or no, and so on).
  2. Ordinal data are like the pain scale.  Each number is higher (or lower) than the next, but the distances between numbers are not equal.  In other words, 4 is not necessarily twice as much as 2; and 5 is not necessarily half of 10.
  3. Interval data are like degrees on a thermometer.  Equal distances between numbers, but no true “0”.  0 degrees does not mean no temperature–it’s just really, really cold.
  4. Ratio data are those with a true 0 and equal intervals (e.g., weight, annual salary, mg).

(Of course if you want to collect QUALitative word data, that’s closest to categorical/nominal, but you don’t count ANYTHING.  More on that another time.)
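One way to remember the 4 levels is that they are cumulative: each level permits every summary statistic the level below it does, plus one more. A teaching sketch in Python (the statistic lists follow the common textbook treatment and are illustrative, not an official rule set):

```python
# Which summary statistics make sense at each level of measurement.
# Each level allows everything the previous level does, plus one more.
LEVELS = {
    "nominal":  ["mode", "counts"],
    "ordinal":  ["mode", "counts", "median"],
    "interval": ["mode", "counts", "median", "mean"],
    "ratio":    ["mode", "counts", "median", "mean", "ratios"],
}

def allowed_stats(level):
    return LEVELS[level]

print("mean" in allowed_stats("ordinal"))     # False: a pain score has no true mean
print("ratios" in allowed_stats("interval"))  # False: 20 degrees is not "twice" 10
print(allowed_stats("ratio"))                 # everything, because 0 really means none
```

So before crunching numbers, ask which level your data live at; the level decides which statistics are fair game.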

CRITICAL THINKING: Where are the levels in Goldilocks and the 3 levels of data at this link: https://son.rochester.edu/research/research-fables/goldilocks.html? Would you measure soup, bed, chairs, bears, or other things differently? Why was the baby bear screaming in fright?

What IS research!!??

WHAT IS RESEARCH? Take < three minutes to check out: https://www.youtube.com/watch?v=v50ct9xJVKE. Listen for what research is and 2 basic ways to approach the answers to a research question: “Why is the sky blue?”

CRITICAL THINKING: What is a recent problem you’ve experienced in clinical practice? Write out a positivist research question and an interpretivist research question related to that same clinical problem.

DIY your own Intro/Background: Structure & Argument

Want to know how to write an introduction/background section of a paper?  Pay attention to STRUCTURE & evidence-based ARGUMENT in order to DIY (do-it-yourself) your own intro/background for a school paper or research report!

Let’s use this 2015 free full-text article by Marie Flem Sørbø et al. as a model: Past and recent abuse is associated with early cessation of breast feeding: results from a large prospective cohort in Norway. (Hint: Clicking on the article’s pdf tab may make it easier to read.)

Focus only on the INTRO/BACKGROUND section for now.  Check out the STRUCTURE then the EVIDENCE-BASED ARGUMENT of the Intro/Background.  This is how you should write your own.


STRUCTURE of INTRO/BACKGROUND in Sørbø et al. (2015):

  1. Where is the Intro/Background section located in the article?
  2. What heading is used for the section?
  3. Where are the research questions located in the Intro/Background?  (HINT: this is the standard place in all papers & in this case the authors call them “aims.”)

ARGUMENTS in INTRO/BACKGROUND in Sørbø et al. (2015):

  1. Look at the first (topic) sentence of each paragraph in INTRO/BACKGROUND & listen to the systematic argument the researchers are making for WHY their study is important.
    • “Breast feeding has long been acknowledged as the optimal infant nutrition conferring beneficial short-term and long-term health effects for both infants and mothers.1–5      …
    • Abuse of women is common worldwide, as one in three women during lifetime suffer partner or non-partner abuse.10   …Adverse  effects [of abuse]… are barriers to breast feeding.*…
    • Given the overwhelming evidence of the positive effects of breast feeding, knowledge about factors influencing breastfeeding behaviour is essential….
    • We explored the impact of abuse of women on breastfeeding behaviour in a large prospective population in Norway where the expectations to breast feed are high, and breast feeding is facilitated in the work regulations….” (pp. 1-2)
  2. Now look at the research & other evidence written down AFTER each of above key sentences that SUPPORT each idea.
  3. Notice that the INTRO/BACKGROUND is NOT a series of abstracts of different studies!!  Instead, evidence is grouped into key arguments for the study (Breast feeding is best; Abuse is common; Abuse creates barriers to breast feeding; Therefore, knowing about factors affecting breastfeeding is important). [Note: Of course, if your particular professor or editor asks you to do a series of abstracts, then you must, but do group them in arguments like the topic sentences.]

All this leads naturally, logically to …(drum roll please!)…the research questions/hypotheses, which are the gaps in our knowledge that the research will fill.  This sets up the rest of the research article!

Critical Thinking:  Your turn! Write your own Intro/Background using

  • Structure: Placement in article, heading, placement of research question/hypothesis
  • Argument: Key idea topic sentences (make a list 1st) with supporting research & other evidence (your literature review).

For more info on Intro/Background:  Review my blogpost Intro to Intro’s

*ok, yeah. I cheated and included one additional sentence to capture the authors’ flow of argument.

Introduction to Introductions!

In a couple of recent blog entries I noted what you can and cannot learn from research 1) titles & 2) abstracts. Now, let me introduce you to the next part of a research article: the Introduction (sometimes called Background, or given no title at all!). The Introduction immediately follows the abstract.

The introduction/background “[a] outlines the background of the problem or issue being examined, [b] summarizes the existing literature on the subject, and [c] states the research questions, objectives, and possibly hypothesis” (Davies & Logan, 2012, p. 6).

This section may or may not have a heading of “Introduction” or “Background” or both.  Like the abstract, the Introduction describes the problem in which the researcher is interested & sometimes the specific research question or hypothesis that will be measured.

In the Intro/Background you will get a fuller description of why the problem is a priority for research and what is already known about the problem (i.e., the literature review).

Key point #1: Articles & research that are reviewed in the Intro/Background should be mostly from within the past 5-7 years.  Sometimes classic works that are much older are included, OR sometimes no recent research exists.  If recent articles aren’t used, this should raise some questions in your mind.  You know well that healthcare changes all the time!!  If old studies are used, the author should explain why.

Key point #2:  The last sentence or two in the Intro/Background is usually the research question or hypothesis (unless the author awards it its own section).  If you need to know the research question/hypothesis right away, you can skip straight to the end of the Intro/Background—and there it is!

Critical Thinking: 1) Read the abstract then 2) Read the 1st section of this 2015 free full-text article by Marie Flem Sørbø et al.:  Past and recent abuse is associated with early cessation of breast feeding: results from a large prospective cohort in Norway

  • Is it called Introduction/Background or both?
  • What literature is already available on the problem or issue being examined?
  • What are the research questions/hypotheses?  (After reading above you should know exactly where to look for these now.)

For More Info:  Check out especially Steps #1, #2, & #3 of How to read a research article.

33,000 foot view isn’t enough! Get down on the Ground To See What’s Really Happening!

My last blog post listed the usual sections of a research report (title, abstract, introduction, methods, results, & discussion/conclusion); and I illustrated the amazing things you can learn from only an article title!

This week? Abstracts.   Abstracts are great; abstracts are not enough!

An abstract gives us only enough info to INaccurately apply the study findings to practice.

An abstract typically summarizes all the other sections of the article, such as the question the researcher wanted to answer, how the researcher collected data to answer it, and what that data showed.  This is great when you are trying to get the general picture, but you should NEVER assume that the abstract tells you what you need to know.

Abstracts can mislead you IF you do not read the rest of the article.  They are only a short 100-200 words, and so they leave out key information.  You may misunderstand study results if you read only the abstract.  An abstract’s 33,000-foot description of a study cannot reveal the same things as the up-close & personal description in the full article.

So…what is the takeaway?  Definitely read the abstract to get the general idea.  Then read the full article beginning to end to get the full & beautiful picture of the study.  Davies & Logan (2012) encourage us:  Don’t give up reading the full article just because some parts of the study may be hard to understand.  Just read and get what you can, then re-read the difficult-to-understand parts.  Get some help with those PRN.


Critical thinking:   What info is missing from the below abstract that you might want to know?

J Nurses Prof Dev. 2016 May-Jun;32(3):130-6. doi: 10.1097/NND.0000000000000227.  Partnering to Promote Evidence-Based Practice in a Community Hospital: Implications for Nursing Professional Development Specialists. Highfield ME, Collier A, Collins M, Crowley M.

ABSTRACT: Nursing professional development specialists working in community hospitals face significant barriers to evidence-based practice that academic medical centers do not. This article describes 7 years of a multifaceted, service academic partnership in a large, urban, community hospital. The partnership has strengthened the nursing professional development role in promoting evidence-based practice across the scope of practice and serves as a model for others.

More info on abstracts & other components of research articles?  Check out Davies & Logan (2012) Reading Research published by Elsevier.