
Content Validity: Expert Judgment Required

For accurate study data, you need a tool that correctly & comprehensively measures the outcome of interest (concept). If a tool measures your outcome of interest accurately, it has strong validity. If it measures that outcome consistently, it has high reliability.

For now, let’s focus on validity.

Again, validity is how well a research tool measures what it is intended to measure. 

The four types of validity are 1) face, 2) content, 3) construct, & 4) criterion-related. Click here to read my blog on face validity–the weakest type. Now, let’s step it up a notch to content validity.

Content validity is the comprehensiveness of a data collection tool. In other words, does the instrument include items that measure all aspects of the thing (concept) you are studying–whether that thing be professional quality of life, drug toxicity, spiritual health, pain, or something else?

When you find a tool that you want to use, look for documented content validity. Content validity means that the tool creators:

  1. adopted a specific definition of the concept they want to measure,
  2. generated a list of all possible items from a review of the literature and/or other sources,
  3. gave both their definition and the item list to 3-5+ experts on the topic, &
  4. asked those experts to rate independently how well each item represents the adopted concept definition. Often experts are asked to evaluate item clarity as well.

When a majority of the expert panel agrees that an item matches the definition, then that item becomes part of the new tool. Items without agreement are tossed. Experts may also edit items or add items to the list, and the tool creator may choose to submit edited and new items to the whole expert panel for evaluation.

Optionally, tool creators may calculate a content validity index (CVI) for individual items and/or for the tool as a whole, but content validity still rests on experts’ judgment; some tool authors are just more comfortable having a number to represent that judgment. An acceptable CVI is ≥ 0.78; the “≥” means “greater than or equal to.” (Click here for more on item & scale CVIs.)
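To make the arithmetic concrete, here is a minimal sketch in Python of how an item-level CVI (I-CVI) is typically computed: the proportion of experts who rate an item 3 or 4 on a 4-point relevance scale. The experts and ratings below are hypothetical, not from any real tool.

```python
# Minimal sketch of item-level content validity indexes (I-CVIs).
# Assumes each expert rated each item for relevance on a 4-point scale
# (1 = not relevant ... 4 = highly relevant); all ratings are hypothetical.

ratings = {
    "item_1": [4, 3, 4, 4, 3],   # five experts' relevance ratings for item 1
    "item_2": [4, 2, 3, 1, 2],
}

def item_cvi(scores):
    """I-CVI: proportion of experts who rated the item relevant (3 or 4)."""
    return sum(1 for s in scores if s >= 3) / len(scores)

i_cvis = {item: item_cvi(scores) for item, scores in ratings.items()}
for item, cvi in i_cvis.items():
    verdict = "keep" if cvi >= 0.78 else "revise or drop"
    print(f"{item}: I-CVI = {cvi:.2f} -> {verdict}")

# One common scale-level index (S-CVI/Ave) is simply the mean of the I-CVIs.
s_cvi_ave = sum(i_cvis.values()) / len(i_cvis)
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
```

Here, item_1 would pass the 0.78 cutoff and item_2 would not. (Exact rating scales, cutoffs, and panel sizes vary by source, so treat this only as an illustration.)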

When reading a research article, you might see content validity reported for the tool. Here’s an example: “Content…validity of the nurse and patient [Spiritual Health] Inventories…[was] based on literature review [and] expert panel input….Using a religious-existential needs framework, 59 items for the nurse SHI were identified from the literature with the assistance of a panel of theology and psychology experts…. Parallel patient items were developed, and a series of testing and revisions was completed resulting in two 31-item tools” (Highfield, 1992, p. 4).

For more, check out this quick explanation of content validity: 3-minute YouTube video. If you are trying to establish content validity for your own new tool, consult a mentor and a research text like Polit & Beck’s Nursing Research: Generating and Assessing Evidence for Nursing Practice.

Critical thinking: What is the difference between face and content validity? How are they alike? (Hint: check out the video.) What other questions do you have?

Face Validity: Judging a book by its cover

“Don’t judge a book by its cover.” That’s good advice about not evaluating persons merely by the way they look to you. I suggest we all take it.

But…when it comes to evaluating data collection tools, things are different. When we ask the question, “Does this questionnaire, interview, or measurement instrument look like it measures what it is supposed to measure?” then we are legitimately judging a book (instrument) by its cover (appearance). We call that judgment face validity. In other words, the tool appears to us on its face to measure what it is designed to measure.

For example, items on the well-established Beck Depression Inventory (BDI) cover a range of symptoms: sadness, pessimism, feelings of failure, loss of pleasure, guilt, crying, and so on. If you read all BDI items, you could reasonably conclude just by looking at them that those items do indeed measure depression. That judgment is made without the benefit of statistics, and thus you are judging that book (the BDI) by its cover (how it appears to you). That is face validity.

Face validity is only one of four types of data collection tool validity.

In research, tool validity is defined as how well a research tool measures what it is designed to measure. The four broad types of validity are: a) face, b) content, c) construct, and d) criterion-related validity. And make no mistake, face validity is the weakest of the four. Nonetheless, it makes a good starting point. Just don’t stop there; you will need one or more of its three statistical cousins–content, construct, and criterion-related validity–to have a strong data collection tool.

And…referring back to the BDI example, the BDI probably looks valid because its validity has been verified by other types of validity.

Thoughts about why we need face validity at all?

New book: “Doing Research: A Practical Guide”

Author: Martha “Marty” E. Farrar Highfield

NOW AVAILABLE ELECTRONICALLY & SOON IN PRINT.

CHECK OUT: https://link.springer.com/book/10.1007/978-3-031-79044-7

This book provides a step-by-step summary of how to do clinical research. It explains what research is and isn’t, where to begin and end, and the meaning of key terms. A project planning worksheet is included and can be used as readers work their way through the book in developing a research protocol. The purpose of this book is to empower curious clinicians who want data-based answers.

Doing Research is a concise, user-friendly guide to conducting research, rather than a comprehensive research text. The book contains 12 main chapters followed by the protocol worksheet. Chapter 1 offers a dozen tips to get started, Chapter 2 defines research, and Chapters 3-9 focus on planning. Chapters 10-12 then guide readers through challenges of conducting a study, getting answers from the data, and disseminating results. Useful key points, tips, and alerts are strewn throughout the book to advise and encourage readers.

After taste…I mean “after test”

Let’s say you want to find out how well students think they learned theory in your class.

One option is a pre/post test: you distribute the same survey before and after the class, asking students to rate on a 1-4 scale how well they think they know the new material. Then you compare their ratings.

Another option is a posttest-only design: you give students a survey after the class that asks them to rate, on the same 1-4 scale, their knowledge before the class and their knowledge now. Then you compare their ratings.
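Either way, the comparison itself is simple. Here is a minimal sketch in Python, using made-up 1-4 ratings, of how you might compare each student’s before and after self-ratings:

```python
# Minimal sketch comparing students' self-rated knowledge (1-4 scale).
# All ratings are hypothetical; each position is one student.
before = [2, 1, 3, 2, 2, 1]   # self-rated knowledge before the class
after  = [3, 3, 4, 3, 2, 3]   # self-rated knowledge after the class

# Paired differences: how much each student's self-rating changed.
diffs = [a - b for b, a in zip(before, after)]
mean_change = sum(diffs) / len(diffs)
print(f"Mean change on the 1-4 scale: {mean_change:+.2f}")
```

The arithmetic is the same for both designs; what differs is when the “before” ratings are collected, which is exactly what the question below is about.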

One research option is stronger than the other. Which one is it, and why? (Hint: think retrospective/prospective.)

Goldilocks and the 3 Levels of Data

Actually, when it comes to quantitative data, there are 4 levels, but who’s counting? (Besides Goldilocks.)

  1. Nominal (categorical) data are names or categories: gender, religious affiliation, days of the week, yes or no, and so on.
  2. Ordinal data are like the pain scale. Each number is higher (or lower) than the next, but the distances between numbers are not equal. In other words, 4 is not necessarily twice as much as 2, and 5 is not necessarily half of 10.
  3. Interval data are like degrees on a thermometer: equal distances between values, but no true “0.” Zero degrees is just really, really cold.
  4. Ratio data have a true 0 and equal intervals (e.g., weight, annual salary, mg).

(Of course, if you want to collect QUALitative word data, that’s closest to categorical/nominal, but you don’t count ANYTHING. More on that another time.)
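If it helps to see this in code, here is a minimal sketch in Python (the variables and values are made up for illustration) of how the level of measurement limits which summary statistics make sense:

```python
# Minimal sketch: the level of measurement limits which summary
# statistics are meaningful. Variables and values are hypothetical.
from statistics import mean, median, mode

religion = ["Protestant", "Catholic", "None", "Catholic"]  # nominal
pain     = [2, 4, 7, 7, 10]                                # ordinal (0-10 scale)
temp_f   = [97.9, 98.6, 100.2, 101.5]                      # interval (no true zero)
weight   = [61.0, 72.5, 80.1, 95.3]                        # ratio (kg; true zero)

print(mode(religion))         # nominal: counts and the mode only
print(median(pain))           # ordinal: median and rank order, not a true mean
print(mean(temp_f))           # interval: means and differences are meaningful
print(weight[1] / weight[0])  # ratio: ratios are meaningful ("about 1.2 times")
```

Notice that each step up the ladder permits everything below it: ratio data can use all of these summaries, while nominal data can only be counted.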

CRITICAL THINKING: Where are the levels in Goldilocks and the 3 levels of data at this link: https://son.rochester.edu/research/research-fables/goldilocks.html ? Would you measure soup, beds, chairs, bears, or other things differently? Why was the baby bear screaming in fright?

“Please answer….” (cont.)

What do people HATE about online surveys? If you want to improve your response rates, check out SurveyMonkey’s Eric V (May 2017), Eliminate survey fatigue: Fix 3 things your respondents hate.

For more info: Check out my earlier post “Please Answer!”

Nightingale: Avant-garde in meaningful data

In honor of Nurse Week, I offer this tribute to the avant-garde research work of Florence Nightingale in the Crimea, which saved lives and set a precedent worth following.

Nightingale was a “passionate statistician” who knew that outcome data are convincing when one wants to change the world. She did not merely collect data; she documented them in a way that revealed their critical meaning for care.

As noted by John H. Lienhard (1998-2002), of Nightingale’s famous coxcomb chart: “Once you see Nightingale’s graph, the terrible picture is clear. The Russians were a minor enemy. The real enemies were cholera, typhus, and dysentery. Once the military looked at that eloquent graph, the modern army hospital system was inevitable. You and I are shown graphs every day. Some are honest; many are misleading….So you and I could use a Florence Nightingale today, as we drown in more undifferentiated data than anyone could’ve imagined during the Crimean War.”

As McDonald (2001) writes in a free, full-text BMJ article, Nightingale was a systemic thinker and a “passionate statistician.” She insisted on improving care by making policy & care decisions based on “the best available government statistics and expertise, and the collection of new material where the existing stock was inadequate” (p. 68).

Moreover, her display of the data brought its message home through visual clarity!

Thus, while Nightingale adhered to some well-accepted but mistaken scientific theories of her time (e.g., miasma), her work was superb and scientific in the best sense of the word. We could all learn from Florence.

CRITICAL THINKING: What issue in your own practice could be solved by more data? How could you collect those data? If you already have data, how can you display them so that they are meaningful to others and “bring the point home”?


HAPPY NURSE WEEK TO ALL MY COLLEAGUES.  

MAY YOU GO WHERE THE DATA TAKES YOU!