Category Archives: Questionnaires

On Target all the time and every time!

“Measure twice. Cut once!” goes the old carpenter adage. Why? Because measuring accurately means you’ll get the outcomes you want!

Same in research. Consistent and accurate measurement gives you results you can trust. Whether an instrument measures something consistently is called reliability. Whether it measures accurately is called validity. So, before you use a tool, check its reported reliability and validity.

A good resource for understanding the concepts of reliability (consistency) and validity (accuracy) of research tools is https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/. Its Key Takeaways are quoted below:

  • Psychological researchers do not simply assume that their measures work. Instead, they conduct research to show that they work. If they cannot show that they work, they stop using them.
  • There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (interrater reliability). Validity is the extent to which the scores actually represent the variable they are intended to.
  • Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces are correlated with other variables they are expected to be correlated with and not correlated with variables that are conceptually distinct.
  • The reliability and validity of a measure is not established by any single study but by the pattern of results across multiple studies. The assessment of reliability and validity is an ongoing process.
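To make two of those reliability ideas concrete, here is a minimal Python sketch (not from the textbook chapter above, and using invented scores): test-retest reliability estimated as the correlation between two administrations of the same hypothetical 4-item questionnaire, and internal consistency estimated with Cronbach's alpha.

```python
import numpy as np

# Hypothetical scores from 6 respondents on the same 4-item questionnaire,
# administered twice a few weeks apart (all values invented for illustration).
time1 = np.array([[4, 5, 3, 4],
                  [2, 2, 3, 2],
                  [5, 4, 5, 5],
                  [3, 3, 2, 3],
                  [4, 4, 4, 5],
                  [1, 2, 1, 2]])
time2 = np.array([[4, 4, 3, 5],
                  [2, 3, 2, 2],
                  [5, 5, 4, 5],
                  [3, 2, 3, 3],
                  [5, 4, 4, 4],
                  [2, 1, 1, 2]])

# Test-retest reliability: correlation of total scores across the two administrations.
test_retest_r = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]

# Internal consistency: Cronbach's alpha for the first administration.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
k = time1.shape[1]
item_vars = time1.var(axis=0, ddof=1)
total_var = time1.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Test-retest r  = {test_retest_r:.2f}")
print(f"Cronbach alpha = {alpha:.2f}")
```

Values closer to 1.0 on both estimates would suggest the questionnaire is measuring consistently across time and across its items.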

Research Words of the Week: Reliability & Validity

Reliability & validity are terms that refer to the consistency and accuracy of a quantitative measure, whether it is a questionnaire, a technical device, a ruler, or any other measuring instrument. Together they mean that the outcome measure can be trusted and is relatively error-free.

  • Reliability – This means that the instrument measures CONSISTENTLY.
  • Validity – This means that the instrument measures ACCURATELY. In other words, it measures what it is supposed to measure and not something else.

For example: If your bathroom scale measures weight, and not something else (e.g., it doesn’t measure BP or stress), then it is a valid measure of weight; you might say it has high validity. If your bathroom scale gives the same reading each time you step on and off it several times, then it is measuring weight reliably, or consistently; you might say it has high reliability.
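As a rough, hypothetical illustration of the bathroom scale example (all numbers invented), the snippet below treats the spread across repeated weighings as a sign of reliability and the average difference from a trusted reference weight as a sign of validity:

```python
import statistics

# Hypothetical readings from stepping on the same bathroom scale five times in a row.
readings = [70.2, 70.1, 70.3, 70.2, 70.1]       # kilograms
true_weight = 70.0                               # assumed reference (e.g., a calibrated clinic scale)

spread = statistics.stdev(readings)              # small spread  -> consistent -> reliable
bias = statistics.mean(readings) - true_weight   # small bias    -> accurate   -> valid

print(f"Spread across repeats: {spread:.2f} kg (reliability)")
print(f"Average error vs reference: {bias:.2f} kg (validity)")
```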

“Please answer….” (cont.)

What do people HATE about online surveys? If you want to improve your response rates, check out SurveyMonkey’s Eric V’s post (May 2017), Eliminate survey fatigue: Fix 3 things your respondents hate.

For more info: Check out my earlier post “Please Answer!”

“Please answer!” – How to increase the odds in your favor when it comes to questionnaires

Self-report by participants is one of the most common ways that researchers collect data, yet it is fraught with problems. Some worries for researchers are: “Will participants be honest, or will they say what they think I want to hear?” “Will they understand the questions correctly?” “Will those who respond (as opposed to those who don’t respond) have unique ways of thinking, so that my respondents do not represent everyone well?” and a BIG worry: “Will they even fill out and return the questionnaire?”

One way to solve at least the latter two problems is to increase the response rate, and Edwards et al. (2009, July 8) reviewed randomized trials to learn how to do just that!

If you want to improve your questionnaire response rates, check it out! Here is Edwards et al.’s plain language summary as published in the Cochrane Database of Systematic Reviews, where you can read the entire report.

Methods to increase response to postal and electronic questionnaires

Postal and electronic questionnaires are a relatively inexpensive way to collect information from people for research purposes. If people do not reply (so-called ‘non-responders’), the research results will tend to be less accurate. This systematic review found several ways to increase response. People can be contacted before they are sent a postal questionnaire. Postal questionnaires can be sent by first class post or recorded delivery, and a stamped-return envelope can be provided. Questionnaires, letters and e-mails can be made more personal, and preferably kept short. Incentives can be offered, for example, a small amount of money with a postal questionnaire. One or more reminders can be sent with a copy of the questionnaire to people who do not reply.


Critical/reflective thinking:  Imagine that you were asked to participate in a survey.  Which of these strategies do you think would motivate or remind you to respond and why?

For more info read the full report: Methods to increase response to postal and electronic questionnaires


Self-Report Data: “To use or not to use. That is the question.”

[Note: The following was inspired by and benefited from Rob Hoskin’s post at http://www.sciencebrainwaves.com/the-dangers-of-self-report/]

If you want to know what someone thinks or feels, you ask them, right?

The same is true in research, but it is good to know the pros and cons of using the “self-report method” of collecting data in order to answer a research question. Most often self-report is done in ‘paper & pencil’ or SurveyMonkey form, but it can also be done by interview.

Generally self-report is easy and inexpensive, and sometimes facilitates research that might otherwise be impossible.  To answer well, respondents must be honest, have insight into themselves, and understand the questions.  Self-report is an important tool in much behavioral research.

But using self-report to answer a research question does have its limits. People may tend to answer in ways that make themselves look good (social desirability bias), agree with whatever is presented (acquiescence bias), answer in extreme terms (extreme response set bias), or always pick the non-committal middle numbers. Another problem occurs if the reliability and validity of the self-report questionnaire have not been established. (Reliability is consistency in measurement, and validity is the accuracy of measuring what it purports to measure.) Additionally, self-reports typically provide only a) ordinal-level data, such as on a 1-to-5 scale, b) nominal data, such as on a yes/no scale, or c) qualitative descriptions in words without categories or numbers. (Ordinal data = scores are in order, with some values higher than others; nominal data = categories. Statistical calculations are limited for both and not possible for qualitative data unless the researcher counts themes or words that recur.)
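As a rough sketch of what these levels of measurement allow (all responses invented for illustration), the snippet below summarizes ordinal Likert ratings with a median, nominal yes/no answers with category counts, and free-text comments by counting recurring words:

```python
from collections import Counter
import statistics

# Hypothetical self-report data (values invented for illustration).
likert = [4, 5, 3, 4, 2, 5, 4]                 # ordinal: 1-5 agreement ratings
yes_no = ["yes", "no", "yes", "yes", "no"]     # nominal: categories only
comments = [
    "the survey was too long",
    "questions were clear but the survey felt long",
]

# Ordinal data: order matters, but intervals are not equal, so prefer the median.
print("Median Likert rating:", statistics.median(likert))

# Nominal data: only category counts or proportions make sense.
print("Yes/no counts:", Counter(yes_no))

# Qualitative text: no numbers unless the researcher counts recurring words or themes.
words = Counter(word for c in comments for word in c.split())
print("Most common words:", words.most_common(3))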

An example of a self-report measure that we regard as a gold standard for clinical and research data is the 0-10 pain scale score. An example of a self-report measure that might be useful but is less preferred is a self-assessment of knowledge (e.g., How strong, on a 1-5 scale, is your knowledge of arterial blood gas interpretation?). Using self-report for knowledge can be okay as long as everyone understands that it captures a perceived level of knowledge.

Critical Thinking: What was the research question in this study? Malara et al. (2016). Pain assessment in elderly with behavioral and psychological symptoms of dementia. Journal of Alzheimer’s Disease, as posted on PubMed.gov at http://www.ncbi.nlm.nih.gov/pubmed/26757042 with a link to full text. How did the authors use self-report to answer their research question? Do you see any of the above strengths & weaknesses in their use?

For more information: Be sure to check out Rob Hoskin’s blog: http://www.sciencebrainwaves.com/the-dangers-of-self-report/


Making research accessible to RNs
