Tag Archives: research critique

“Here comes Santa Claus”: What’s the evidence?

Dec 3, 2025: It’s time once again to examine the evidence. How will you apply it in your Christmas practice?

FULL TEXT ONLINE: Adv Emerg Nurs J. 2011 Oct-Dec;33(4):354-8. doi: 10.1097/TME.0b013e318234ead3. [note: Below is a full-text excerpt; the AENJ summary was published in the DYIS blog 16 Dec 2016]

Abstract

The purpose of this article is to examine the strength of evidence regarding our holiday Santa Claus (SC) practices and the opportunities for new descriptive, correlation, or experimental research on SC. Although existing evidence generally supports SC, in the end we may conclude, “the most real things in the world are those that neither children nor men can see” (Church, as cited in Newseum, n.d.).

ARE HOLIDAY Santa Claus (SC) activities evidence based? This is a priority issue for those of us who’ve been nice, not naughty. In this article, I review the strength of current evidence supporting the existence of SC, discuss various applications of that evidence, and suggest new avenues of investigation.

[continue reading at 10.1097/TME.0b013e318234ead3]

Essentials for Clinical Researchers

[note: bonus 20% book discount from publisher. See below flyer]

My 2025 book, Doing Research, is a user-friendly guide, not a comprehensive text. Chapter 1 gives a dozen tips to get started, Chapter 2 defines research, and Chapters 3-9 focus on planning. The remaining Chapters 10-12 guide you through challenges of conducting a study, getting answers from the data, and sharing with others what you learned. Italicized key terms are defined in the glossary, and a bibliography lists additional resources.

The Whole Picture: Mixed Methods Design

Mixed methods (MM) research provides a more complete picture of reality by including complementary quantitative and qualitative data.

A clinical analogy for MM research is asking patients to rate their pain numerically on a 0–10 scale and then to describe the pain character in words.

MM researchers sometimes include both experimental hypotheses and non-experimental research questions in the same study.

Common MM subtypes appear in the table below. In concurrent designs investigators collect all data at the same time, and in sequential designs they collect one type of data before the other. In triangulated MM, all data receive equal weight, but in embedded designs, such as a large RCT in which only a small subset of participants are interviewed, the main study data are weighted more heavily. In sequential MM, researchers give more weight to whichever type of data is collected first: qualitative data in exploratory designs and quantitative data in explanatory designs.

FOR MORE INFO: WHAT IS MIXED METHODS RESEARCH? – Dr. John Creswell

MM DESIGN                    | EQUALLY WEIGHTED DATA | PRIORITY WEIGHTED DATA
Concurrent data collection:  |                       |
  *Triangulation             | All data              |
  *Embedded                  |                       | Main study data
Sequential data collection:  |                       |
  *Exploratory               |                       | Qualitative data
  *Explanatory               |                       | Quantitative data

TYPES OF MM DESIGN: Concurrent & Sequential
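For readers who like code, the table's taxonomy can be captured as a small lookup. This is just an illustrative sketch (the dictionary names are mine, not from the post):

```python
# Mixed-methods subtypes from the table above: when each type of data
# is collected and which data receive priority weight.
MM_DESIGNS = {
    "triangulation": {"timing": "concurrent", "weight": "all data equally"},
    "embedded":      {"timing": "concurrent", "weight": "main study data"},
    "exploratory":   {"timing": "sequential", "weight": "qualitative data"},
    "explanatory":   {"timing": "sequential", "weight": "quantitative data"},
}

def describe(design):
    d = MM_DESIGNS[design]
    return f"{design}: {d['timing']} collection, weight on {d['weight']}"

print(describe("embedded"))
```

For example, `describe("embedded")` reports a concurrent design weighted toward the main study data, matching the RCT-with-interviews example above.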

Primer on Research Design, Part 1: Description

A research design is the investigator-chosen, overarching study framework that facilitates getting the most accurate answer to a hypothesis or question. Think of research design as similar to the framing of a house during construction. Just as house-framing provides structure and limits to walls, floors, and ceilings, so does a research design provide structure and limits to a host of protocol details.

Tip. The two major categories of research design are: 1) Non-experimental, observation only and 2) Experimental testing of an intervention.

DESCRIPTIVE STUDIES

Non-experimental studies that examine one variable at a time.

When little is known and no theory exists on a topic, descriptive research begins to build theory by identifying and defining key, related concepts (variables). Although a descriptive study may explore several variables, only one of those is measured at a time; there is no examination of relationships between variables. Descriptive studies create a picture of what exists by analyzing quantitative or qualitative data to answer questions like, “What is [variable x]?” or “How often does it occur?” Examples of such one-variable questions are “What are the experiences of first-time fathers?” or “How many falls occur in the emergency room?” (Variables are in italics.) The former question produces qualitative data, and the latter, quantitative.

Descriptive results raise important questions for further study, and findings are rarely generalizable. You can see this especially in a descriptive case study: an in-depth exploration of a single event or phenomenon that is limited to a particular time and place. Given these limitations, opinions differ on whether case studies even qualify as research.

Descriptive research that arises from constructivist or advocacy assumptions merits particular attention. In these designs, researchers collect in-depth qualitative information about only one variable and then critically reflect on that data in order to uncover emerging themes or theories. Often broad data are collected in a natural setting in which researchers exercise little control over other variables. Sample size is not predetermined, data collection and analysis are concurrent, and the researcher collects and analyzes data until no new ideas emerge (data saturation). The most basic qualitative descriptive method is perhaps content analysis, sometimes called narrative descriptive analysis, in which researchers uncover themes within informant descriptions. Box 1 identifies major qualitative traditions beyond content analysis and case studies.

Alert! All qualitative studies are descriptive, but not all descriptive studies are qualitative.

Box 1. Descriptive Qualitative Designs

Design                         | Focus                                                                                        | Discipline of Origin
Ethnography                    | Uncovers phenomena within a given culture, such as meanings, communications, and mores.      | Anthropology
Grounded Theory                | Identifies a basic social problem and the process that participants use to confront it.      | Sociology
Phenomenology                  | Documents the “lived experience” of informants going through a particular event or situation.| Psychology
Community participatory action | Seeks positive social change and empowerment of an oppressed community by engaging them in every step of the research process. | Marxist political theory
Feminist                       | Seeks positive social change and empowerment of women as an oppressed group.                 | Marxist political theory

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted to read only the Introduction & Background, then go straight to the Discussion, Implications, and Conclusions at the end, skipping all those pesky procedures, numbers, and p levels in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale measured your weight erratically each morning, you would probably toss it and find a more reliable and valid one. The data from that old bathroom scale would be useless in learning how much you weigh. Similarly, a researcher wants useful outcome data, and to get that quality data the instrument must measure consistently (reliability) and measure what it claims to measure (validity). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: Researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. And whatever instrument the researcher uses needs to give them correct data answers.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that measures systolic and diastolic BP without a lot of artifact or interference. That accuracy in measuring only BP is called instrument validity. And if I take your BP 3 times in a row, I should get basically the same answer; that consistency is called instrument reliability. I must also use the cuff as intended–correct cuff size and placement–in order to get quality data that reflect the subject’s actual BP.

The same thing is true with questionnaires or other measurement tools. A researcher must use an instrument for the intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not their pain, depression, or anxiety)–in other words it should have instrument validity. It should measure stress without a lot of artifacts or interference from other states of mind.

NO instrument is 100% valid–it’s a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures other things besides stress–and it will–it is less valid. The question you should ask is, “How valid is the instrument?” often on a 0 to 1 scale with 1 being unachievable perfection. The same issue and question applies to reliability.

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Or, an instrument can consistently (reliably) measure the wrong thing–that is, it can measure something other than what the researcher intended to measure. Research instruments need both strong reliability AND validity to be most useful; they need to measure the outcome variable of interest consistently.

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures
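One common way researchers put a 0-to-1 number on reliability is Cronbach’s alpha, an internal-consistency coefficient for multi-item scales. Here is a minimal sketch; the questionnaire scores are made up purely for illustration:

```python
# Cronbach's alpha: internal-consistency reliability on a 0-to-1 scale.
# Rows = respondents, columns = questionnaire items (hypothetical data).
def cronbach_alpha(scores):
    k = len(scores[0])                      # number of items
    def variance(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [
    [4, 5, 4],   # respondent 1's answers to 3 items on a 1-5 scale
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 2],
    [3, 3, 3],
]
print(round(cronbach_alpha(data), 2))  # high alpha: this made-up scale is very consistent
```

The closer alpha is to 1 (unachievable perfection), the more consistently the items measure the same underlying variable.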

Fun Practice: In your own words, relate the following article excerpt to the concept of validity: “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring” (p. 3). Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4

True or False: Experiment or Not

Experiments are the way we confirm that one thing causes another. If a study is not an experiment (or a meta-analysis combining experiments), then the research does not show cause and effect.

Experiments are one of the strongest types of research.

So…how can you tell a true experiment from other studies?   Hazel B can tell you in 4:04 and simple language at https://www.youtube.com/watch?v=x2i-MrwdTqI&index=1&list=PL7A7F67C6B94EB97E

Go for it!

[After watching the video: Note that the variable controlled by the researcher is called the independent variable or cause variable because it creates a change in something else. That something else that changes is the dependent variable or outcome variable.]

CRITICAL THINKING:  

  1. Based on the video, can you explain why true experiments are often called randomized controlled trials (RCTs)?
  2. Take a look at The Effect of the Physical and Mental Exercises During Hemodialysis on Fatigue: A Controlled Clinical Trial, that is free in full-text via PubMed. How does it meet the criteria of a true experiment as described by Hazel B in the video?

FOR MORE INFORMATION:   Go to “What’s an RCT Anyway?” (https://discoveringyourinnerscientist.wordpress.com/2015/01/23/whats-a-randomized-controlled-trial/ )
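The random assignment that puts the “R” in RCT can be sketched in a few lines. The participant list and the 50/50 split below are illustrative assumptions, not from any real trial:

```python
import random

# Randomly assign a participant list to experimental and control groups,
# the defining feature of a true experiment (RCT).
def randomize(participants, seed=None):
    rng = random.Random(seed)       # seed used only to make the demo repeatable
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (experimental, control)

patients = [f"patient_{i:02d}" for i in range(1, 21)]
experimental, control = randomize(patients, seed=42)
print(len(experimental), len(control))
```

Because assignment is left to chance, known and unknown patient characteristics tend to balance out across the two groups, which is what lets an experiment support cause-and-effect claims.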

Self-Report Data: “To use or not to use. That is the question.”

[Note: The following was inspired by and benefited from Rob Hoskin’s post at http://www.sciencebrainwaves.com/the-dangers-of-self-report/]

If you want to know what someone thinks or feels, you ask them, right?

The same is true in research, but it is good to know the pros and cons of using the “self-report method” of collecting data in order to answer a research question.  Most often self-report is done in ‘paper & pencil’ or SurveyMonkey form, but it can be done by interview.

Generally self-report is easy and inexpensive, and sometimes facilitates research that might otherwise be impossible.  To answer well, respondents must be honest, have insight into themselves, and understand the questions.  Self-report is an important tool in much behavioral research.

But using self-report to answer a research question does have its limits. People may tend to answer in ways that make themselves look good (social desirability bias), agree with whatever is presented (social acquiescence bias), answer in extreme terms (extreme response set bias), or always pick the non-committal middle numbers. Another problem occurs if the reliability and validity of the self-report questionnaire have not been established. (Reliability is consistency in measurement; validity is accuracy in measuring what it purports to measure.) Additionally, self-reports typically provide only a) ordinal-level data, such as on a 1-to-5 scale, b) nominal data, such as on a yes/no scale, or c) qualitative descriptions in words without categories or numbers. (Ordinal data = scores in order, with some numbers higher than others; nominal data = categories. Statistical calculations are limited for both and not possible for qualitative data unless the researcher counts themes or words that recur.)

An example of a self-report measure regarded as a gold standard for clinical and research data is the 0–10 pain scale score. An example of a self-report measure that might be useful but is less preferred is a self-assessment of knowledge (e.g., How strong, on a 1–5 scale, is your knowledge of arterial blood gas interpretation?). Using it to measure knowledge can be okay as long as everyone understands that it captures perceived level of knowledge.

Critical Thinking: What was the research question in this study? Malara et al. (2016). Pain assessment in elderly with behavioral and psychological symptoms of dementia. Journal of Alzheimer’s Disease, as posted on PubMed.gov at http://www.ncbi.nlm.nih.gov/pubmed/26757042 with link to full text. How did the authors use self-report to answer their research question? Do you see any of the above strengths & weaknesses in their use?

For more information: Be sure to check out Rob Hoskin’s blog: http://www.sciencebrainwaves.com/the-dangers-of-self-report/

Telling the Future: The Research Hypothesis

What is a research hypothesis? A research hypothesis is a predicted answer, an educated guess. It is a statement of the outcome that a researcher expects to find in an experimental study.

Why care? Because it tells you precisely the problem that the research study is about! Either the researcher’s prediction turns out to be true (supported by data) or not!

A hypothesis includes 3 key elements: 1) the population of interest, 2) the experimental treatment, & 3) the expected outcome. It is a statement of cause and effect. The experimental treatment that the researcher manipulates is called the independent or cause variable. The result of the study is an outcome called the dependent variable because it depends on the independent/cause variable.

For example, let’s take the hypothesis “Heart failure patients who receive experimental drug X will have better cardiac function than will heart failure patients who receive standard drug Y.” You can see that the researcher is manipulating the drug (independent variable) that patients will receive. And patient cardiac outcomes are expected to vary—in fact, cardiac function is expected to be better—for patients who receive the experimental drug X.

Ideally that researcher will randomly assign subjects to an experimental group that receives drug X and a control group that receives standard therapy drug Y.   Outcome cardiac function data will be collected and analyzed to see if the researcher’s predicted answer (AKA hypothesis) is true.

In a research article, the hypothesis is usually stated right at the end of the introduction or background section.

If you see a hypothesis, how can you tell which is the independent/cause variable and which is the dependent/effect/outcome variable?

  1. Identify the population in the hypothesis—the population does not vary (& so, it is not a variable).
  2. Identify the independent variable—this will be the cause, & it will vary.
  3. Identify the dependent variable—this will be the outcome, & its variation depends on changes/variation in the independent variable.
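The three elements of the drug X hypothesis above can be laid out explicitly. The field names here are just one illustrative way to organize them, not a standard notation:

```python
# The example hypothesis decomposed into its key elements.
hypothesis = {
    "population": "heart failure patients",
    "independent_variable": "drug received (experimental X vs. standard Y)",
    "dependent_variable": "cardiac function",
    "predicted_outcome": "better cardiac function with drug X",
}
for element, value in hypothesis.items():
    print(f"{element}: {value}")
```

Notice that the population appears in every group of the study and does not vary, while the independent and dependent variables are exactly the things the researcher manipulates and measures.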

PRACTICE:  What are the population, independent variable(s) & dependent variable(s) in these actual research study titles that reflect the research hypotheses:

FOR MORE INFORMATION:  See SlideShare by Domocmat (n.d.) Formulating hypothesis at http://www.slideshare.net/kharr/formulating-hypothesis-cld-handout

 

Introduction to Introductions!

I have a lot of new readers, so let’s revisit the standard sections of a research article.  They are:

  • Introduction (or Background)
  • Review of literature
  • Methods
  • Results (or findings)
  • Discussion & Implications
  • Conclusion

If we begin at the beginning, then we should ask: “What’s in an Introduction?”  Here’s the answer:

“[a] …Background of the problem or issue being examined,

[b] …Existing literature on the subject, and

[c] …Research questions, objectives, and possibly hypothesis” (p. 6, Davies & Logan, 2012)

This is the very 1st section of the body of the research article.  In it you will find a description of the problem that the researcher is studying, why the problem is a priority, and sometimes what is already known about the problem.  The description of what is already known may or may not be labelled separately as a Review of Literature.

Key point #1: Articles & research reviewed in the Intro/Background should be mostly from within the past 5-7 years. Sometimes classic works that are much older are included, OR sometimes no recent research exists. If recent articles aren’t used, this should raise some questions in your mind. You know well that healthcare changes all the time!! If there are no recent studies, the author should explain why.

Key point #2: The last sentence or two in the Intro/Background is the research question or hypothesis. If you need to know the research question/hypothesis right away, you can skip straight to the end of the Intro/Background—and there it should be!

Happy research reading!

Critical Thinking: Do the sections of the abstract AND the sections of the research article match the above headings? Does the Introduction match the description above? Take a look at the free article by Kennedy et al. (2014). Is there a relationship between personality and choice of nursing specialty: An integrative literature review. BMC Nursing, 13(40). Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4267136/