[Note: The following was inspired by and benefited from Rob Hoskin’s post at http://www.sciencebrainwaves.com/the-dangers-of-self-report/]
If you want to know what someone thinks or feels, you ask them, right?
The same is true in research, but it is good to know the pros and cons of using the “self-report method” of collecting data to answer a research question. Most often self-report is done in ‘paper & pencil’ or SurveyMonkey form, but it can also be done by interview.
Generally, self-report is easy and inexpensive, and it sometimes facilitates research that would otherwise be impossible; it is an important tool in much behavioral research. To answer well, however, respondents must be honest, have insight into themselves, and understand the questions.
But using self-report to answer a research question does have its limits. People may tend to answer in ways that make themselves look good (social desirability bias), agree with whatever is presented (acquiescence bias), answer in extreme terms (extreme response bias), or always pick the noncommittal middle numbers. Another problem arises if the reliability and validity of the self-report questionnaire have not been established. (Reliability is consistency in measurement; validity is the accuracy of measuring what the instrument purports to measure.) Additionally, self-reports typically provide only a) ordinal-level data, such as on a 1-to-5 scale, b) nominal data, such as on a yes/no scale, or c) qualitative descriptions in words without categories or numbers. (Ordinal data = scores are in rank order, with some values higher than others; nominal data = categories. Statistical calculations are limited for both, and are not possible for qualitative data unless the researcher counts recurring themes or words.)
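To see why the level of measurement limits the statistics you can run, here is a minimal Python sketch. The survey responses below are invented for illustration: for ordinal 1-to-5 ratings the order is meaningful, so a median is defensible; for nominal yes/no answers there is no order, so only counts and the most common category make sense.

```python
from statistics import median, mode
from collections import Counter

# Hypothetical ordinal responses on a 1-to-5 agreement scale (invented data)
ordinal = [4, 5, 3, 4, 2, 4, 5]
# Order is meaningful, so the median is a defensible summary
print("Median rating:", median(ordinal))

# Hypothetical nominal yes/no responses (invented data)
nominal = ["yes", "no", "yes", "yes", "no"]
# Categories have no order, so only counts and the mode make sense
print("Counts:", Counter(nominal))
print("Most common answer:", mode(nominal))
```

Note that a mean of the ordinal ratings, while often reported in practice, assumes the distance between 2 and 3 equals the distance between 4 and 5, which ordinal scales do not guarantee.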
An example of a self-report measure regarded as a gold standard for clinical and research data is the 0-to-10 pain scale score. An example of a self-report measure that might be useful but is less preferred is a self-assessment of knowledge (e.g., “On a 1-to-5 scale, how strong is your knowledge of arterial blood gas interpretation?”). Using self-report for knowledge can be acceptable as long as everyone understands that it measures perceived level of knowledge.
Critical Thinking: What was the research question in this study? Malara et al. (2016), Pain assessment in elderly with behavioral and psychological symptoms of dementia, Journal of Alzheimer’s Disease, as posted on PubMed.gov at http://www.ncbi.nlm.nih.gov/pubmed/26757042 with a link to the full text. How did the authors use self-report to answer their research question? Do you see any of the above strengths and weaknesses in their use?
For more information: Be sure to check out Rob Hoskin’s blog: http://www.sciencebrainwaves.com/the-dangers-of-self-report/
“What’s important is not where an organization begins its patient safety journey, but instead the degree to which it exhibits a relentless commitment to improvement.” – TJC, 2016, p.68

“Heart failure patients who receive experimental drug X will have better cardiac function than will heart failure patients who receive standard drug Y.” You can see that the researcher is manipulating the drug (independent variable) that patients will receive. And patient cardiac outcomes are expected to vary—in fact, cardiac function is expected to be better—for patients who receive the experimental drug X.
1st – Identify the population in the hypothesis. The population does not vary (and so it is not a variable). 2nd – Identify the independent variable. This is the cause, and it will vary. 3rd – Identify the dependent variable. This is the outcome, and its variation depends on changes/variation in the independent variable.

On the experimental unit, RNs stated the script to patients exactly as written and posted the script, the last pain medication, and pain scores on room whiteboards. Posters of the script were also displayed on the unit. In contrast, on the control unit, RN communication and whiteboard use depended on individual preferences. Researchers measured the effectiveness of the script by collecting HCAHPS scores twice before RNs began using the script (a baseline pretest) and then five times during and after implementation (a posttest) on both units.
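The pretest/posttest comparison the researchers used can be sketched in a few lines of Python. The score values below are invented for illustration and are not taken from the study; the point is simply that the baseline mean is compared against the mean of the later collections:

```python
from statistics import mean

# Hypothetical HCAHPS pain-communication scores (invented numbers)
pretest = [68.0, 70.0]                      # 2 baseline collections before the script
posttest = [74.0, 76.0, 75.0, 78.0, 77.0]   # 5 collections during/after the script

change = mean(posttest) - mean(pretest)
print(f"Baseline mean: {mean(pretest):.1f}")
print(f"Post mean:     {mean(posttest):.1f}")
print(f"Change:        {change:+.1f} points")
```

A real analysis would also need the control unit's scores over the same periods, so that secular trends affecting both units could be separated from the effect of the script.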
Critical thinking: What would prevent you from adopting or adapting this script in your own personal practice tomorrow? What are the barriers and facilitators to getting other RNs on your unit to adopt this script, including using whiteboards? Are there any risks to using the script? What are the risks of NOT using the script?
About 12% of the 4 million infants born in U.S. hospitals were admitted to NICUs (Gray et al., 2006). At birth, every infant requires quick application of an armband, and when parents have not yet decided on a name, the assigned name is often quite nondistinct (e.g., BabySmith).
Their results? Retract-and-reorder (RAR) events were reduced by 36.3%. Their recommendation? Switch to a distinct naming system.

Critical thinking: How would you apply these findings in your own practice?
What is the difference between a hypothesis and a research question? I suppose some will ask: “Why should I care?”
Which and why?
