Tag Archives: Outcome measurement

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted only to read the Introduction & Background, then go straight to the Discussion, Implications, and Conclusions at the end. You skip all those pesky procedures, numbers, and p values in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale erratically measured your weight each a.m., you would probably toss it and find a more reliable and valid bathroom scale. The data from that old bathroom scale would be useless in learning how much you weighed. Similarly, in research the researcher wants useful outcome data, and to get that quality data the researcher must collect it with a measurement instrument that consistently (reliably) measures what it claims to measure (validity). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: Researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. And whatever instrument the researcher uses needs to give them accurate data.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that collects systolic and diastolic BP without a lot of artifacts or interference. That accuracy in measuring BP (and only BP) is called instrument validity. Then if I take your BP 3 times in a row, I should get basically the same answer, and that consistency is called instrument reliability. I must also use the cuff as intended–correct cuff size and placement–in order to get quality data that reflect the subject’s actual BP.

The same thing is true with questionnaires or other measurement tools. A researcher must use an instrument for the intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not their pain, depression, or anxiety)–in other words it should have instrument validity. It should measure stress without a lot of artifacts or interference from other states of mind.

NO instrument is 100% valid–it’s a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures other things besides stress–and it will–it is less valid. The question you should ask is not “Is the instrument valid?” but “HOW valid is it?”–validity is often reported on a 0 to 1 scale, with 1 being unachievable perfection. The same issue and question apply to reliability.

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Conversely, an instrument can reliably (consistently) measure the wrong thing–that is, something other than what the researcher intended to measure. Research instruments need both strong reliability AND validity to be most useful; they need to measure the outcome variable of interest, and to measure it consistently.
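To make the bathroom-scale analogy concrete, here is a small simulation (all numbers are hypothetical, purely for illustration): one scale is consistent and centered on the true weight, one is consistent but always about 5 kg off (reliable but not valid), and one is erratic (unreliable, so it cannot be valid either).

```python
import random
import statistics

random.seed(42)
TRUE_WEIGHT = 70.0  # kg -- the "real" value we are trying to measure (hypothetical)

def readings(bias, noise_sd, n=500):
    """Simulate n readings from a scale with a fixed bias and random noise."""
    return [TRUE_WEIGHT + bias + random.gauss(0, noise_sd) for _ in range(n)]

for label, bias, noise_sd in [
    ("reliable AND valid",      0.0, 0.2),  # consistent, centered on the truth
    ("reliable, NOT valid",     5.0, 0.2),  # consistent, but always ~5 kg off
    ("unreliable (so invalid)", 0.0, 8.0),  # erratic: cannot be accurate either
]:
    r = readings(bias, noise_sd)
    print(f"{label}: mean={statistics.mean(r):.1f}  spread={statistics.stdev(r):.1f}")
```

The “spread” column is reliability (small spread = consistent readings), and how close the mean sits to 70 kg is validity. The second scale shows why the two are different questions: its readings cluster tightly, yet every one of them is wrong.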

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures
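Remember Cronbach’s alpha from the list of “research-y” words? It is just a number (typically between 0 and 1) summarizing one kind of reliability–internal consistency, or how strongly the items on a scale hang together. Here is a minimal sketch of the standard formula in plain Python, using made-up scores from a hypothetical 3-item stress scale:

```python
def cronbach_alpha(item_scores):
    """Internal-consistency reliability (Cronbach's alpha).

    item_scores[i][j] = respondent j's score on item i.
    """
    k = len(item_scores)  # number of items on the scale

    def variance(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*item_scores)]  # each respondent's total
    item_var_sum = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Hypothetical 3-item stress scale answered by 5 people (made-up data):
items = [
    [2, 3, 4, 4, 5],
    [1, 3, 4, 5, 5],
    [2, 2, 4, 4, 5],
]
print(round(cronbach_alpha(items), 2))  # → 0.96
```

Values near 1 mean respondents answer the items consistently with one another; published scales commonly treat an alpha of about 0.70 or higher as acceptable.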

Fun Practice: In your own words, how does the following article excerpt relate to the concept of validity? “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring.” From p. 3, Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4

What are you asking? (or “Can HCAHPS sometimes be a DIRECT measure?”)

In a prior blog (Direct speaking about INdirect outcomes: HCAHPS as a measurement*), I argued that HCAHPS questions were indirect measures of outcomes. Indirect measures are weaker than direct measures because they are influenced by tons of variables that have nothing to do with the outcome of interest. But wait!! There’s more! HCAHPS can sometimes be a DIRECT measure; it all depends on what you want to know.

(If you know this, then you are way ahead of many when it comes to measuring outcomes accurately!!)

KEY POINTS:

  • If your research question is what patients remember about their hospitalization, then HCAHPS is a DIRECT measure of what patients remember.
  • However, if your research question is what hospital staff actually did, then HCAHPS is an INDIRECT* measure of what staff did.

What is HCAHPS? HCAHPS (pronounced “H-caps”) questions capture patient perceptions of what happened, which may or may not be what actually happened. Patients are asked to remember care that happened in the past, and memories may be less than accurate. (See this link for more on what HCAHPS is: http://www.hcahpsonline.org/Files/HCAHPS_Fact_Sheet_June_2015.pdf )

Example: HCAHPS question #16 is, “Before giving you any new medicine, how often did hospital staff tell you what the medicine was for?” Whatever the patient answers, the response tells us only how the patient remembers it.

Why is this important?     

  • Because if you want to know whether or not RNs actually taught inpatients about their medications, then for the most direct & accurate measure you will have to observe RNs.
  • However, if you want to know whether patients remember RNs teaching them about discharge medications, then HCAHPS question #16 is one of the most direct & accurate measures of what they remember.

*FOR MORE INFORMATION on why you want to use DIRECT measures, see https://discoveringyourinnerscientist.com/2016/11/04/direct-speaking-about-idirect-outcomes-hcahps-as-a-measurement/

CRITICAL THINKING: Pick any HCAHPS question at this link and write a research question for which it would be a DIRECT outcome measure: http://www.hcahpsonline.org/files/March%202016_Survey%20Instruments_English_Mail.pdf

For your current project, how are you DIRECTLY measuring outcomes?

Direct speaking about INdirect outcomes: HCAHPS as a measurement

When you first plan a project, you need to know what OUTCOMES you want to achieve. You need STRONG outcomes to show your project worked!

Outcome measures are tricky & can be categorized into Indirect & Direct measures:

  1. INDIRECT outcome measures are often affected by many factors, not just your innovation.
  2. DIRECT outcome measures are specific to what you are trying to accomplish.

For example: If you want to know your patient’s weight, you put them on the scale (direct). You don’t merely ask them how much they weigh (indirect).

Another example? If you planned music to reduce pain, you might a) measure how many patients were already using music and their pain scores (& perhaps those not using music and their pain scores), b) begin your music intervention, and c) then directly measure how many patients started using it after you started your intervention and their pain scores. These data DIRECTLY target your inpatient outcomes versus looking at INDIRECT HCAHPS answers of discharged patients’ feelings after the fact in response to “During this hospital stay, how often was your pain well controlled?”
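As a sketch of how those direct before-and-after data might be summarized (every number here is made up for illustration):

```python
# Hypothetical, made-up numbers for the music-and-pain project sketch above.
# Pain scores (0-10) collected directly on the unit:
baseline_pain = [7, 6, 8, 5, 7, 6]   # before the music intervention
followup_pain = [5, 4, 6, 4, 5, 5]   # after the intervention began
baseline_music_users = 4             # of 20 patients audited before
followup_music_users = 13            # of 20 patients audited after

mean_before = sum(baseline_pain) / len(baseline_pain)
mean_after = sum(followup_pain) / len(followup_pain)
print(f"mean pain: {mean_before:.1f} -> {mean_after:.1f}")
print(f"music use: {baseline_music_users}/20 -> {followup_music_users}/20")
```

The point is that both numbers come straight from the outcomes you care about (music use and pain on your unit) rather than from a survey answered after discharge.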

Nurses often decide to measure their project outcomes ONLY with indirect HCAHPS scores.  I hope you can see this is not as good as DIRECT measures.

So why use HCAHPS at all?

  • They reflect institutional priorities related to quality and reimbursement
  • Data are already collected for you
  • Data are available for BEFORE and AFTER comparisons of your project outcomes
  • It doesn’t cost you any additional time or money to get the data

Disadvantages of indirect HCAHPS measures?

  • HCAHPS data are indirect measures that are affected by lots of different things, and so they may have little to do with effect of your project.
  • HCAHPS responders often do NOT represent all patients because the number responding is so small–sometimes just 1 or 2.

Still, I think it’s good to include HCAHPS. Just don’t limit yourself to that. Include also a DIRECT measure of outcome that targets precisely what you hope will be the result of your study.

You need STRONG outcomes to convince others that your project works to improve care!

CRITICAL THINKING: McClelland, L.E., & Vogus, T.J. (2014) used HCAHPS as an outcome measure in their study, Compassion practices & HCAHPS: Does rewarding and supporting workplace compassion influence patient perceptions? What were the strengths & weaknesses of using HCAHPS in this study? [hint: check out the discussion section] What would be a good direct measure that you could add to HCAHPS outcomes to improve the study?

FOR MORE INFORMATION:  Whole books of measurement instruments are available through the library or a librarian can help you search for something that will measure motivation, pain, anxiety, medication compliance, or whatever it is you are looking for!!  You can limit your own literature searches by selecting “instrument” as part of your search, or you can consult with a nurse researcher for more help.