Tag Archives: research

The biggest enemy was not Russia

Check out this explanation of Florence Nightingale’s famous rose plot about preventable deaths of soldiers!! Lessons to be learned today.

How to speak to stakeholders. How to change nursing.

https://www.youtube.com/watch?v=JZh8tUy_bnM

Testing the Test (or an intro to “Does the measurement measure up?”)

When reading a research article, you may be tempted to read only the Introduction & Background and then go straight to the Discussion, Implications, and Conclusions at the end, skipping all those pesky procedures, numbers, and p values in the Methods & Results sections.

Perhaps you are intimidated by all those “research-y” words like content validity, construct validity, test-retest reliability, and Cronbach’s alpha because they just aren’t part of your vocabulary….YET!

WHY should you care about those terms, you ask? Well…let’s start with an example. If your bathroom scale erratically measured your weight each a.m., you would probably toss it and find a more reliable and valid bathroom scale. The data from that old scale would be useless in telling you how much you weigh. Similarly, in research, the researcher wants useful outcome data, and to get quality data they must collect it with a measurement instrument that consistently (reliably) measures what it claims to measure (validity). A good research instrument is reliable and valid. So is a good bathroom scale.

Let’s start super-basic: researchers collect data to answer their research question using an instrument. That test or tool might be a written questionnaire, interview questions, an EKG machine, an observation checklist, or something else. Whatever instrument the researcher uses, it must yield accurate data.

For example, if I want to collect BP data to find out how a new med is working, I need a BP cuff that measures systolic and diastolic BP without a lot of artifact or interference. That accuracy in measuring BP (and only BP) is called instrument validity. Then, if I take your BP 3 times in a row, I should get basically the same answer; that consistency is called instrument reliability. I must also use the cuff as intended (correct cuff size and placement) in order to get quality data that reflect the subject’s actual BP.

The same thing is true of questionnaires and other measurement tools. A researcher must use an instrument for its intended purpose and in the correct way. For example, a good stress scale should give me accurate data about a person’s stress level (not about pain, depression, or anxiety); in other words, it should have instrument validity. It should measure stress without a lot of interference from other states of mind.

NO instrument is 100% valid; validity is a matter of degree. To the extent that a stress scale measures stress, it is valid. To the extent that it also measures things besides stress (and it will), it is less valid. The question you should ask is, “How valid is the instrument?” The answer is often expressed on a 0 to 1 scale, with 1 being unachievable perfection. The same issue and question apply to reliability.

Reliability & validity are interdependent. An instrument that yields inconsistent results under the same circumstances cannot be valid (accurate). Conversely, an instrument can reliably (consistently) measure the wrong thing: something other than what the researcher intended to measure. Research instruments need both strong reliability AND strong validity to be most useful; they must measure the outcome variable of interest, and measure it consistently.
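If it helps to see these ideas as calculations, here is a minimal sketch in Python of two reliability statistics named earlier: test-retest reliability (a correlation between two administrations) and Cronbach’s alpha (internal consistency). All scores are invented purely for illustration.

```python
# Hypothetical illustration of two common reliability statistics.
# All scores below are invented for demonstration only.
import numpy as np

# Test-retest reliability: six subjects take the same stress scale twice.
time1 = np.array([12, 18, 9, 22, 15, 11])
time2 = np.array([13, 17, 10, 21, 16, 12])
test_retest_r = np.corrcoef(time1, time2)[0, 1]  # Pearson r; nearer 1 = more stable

# Cronbach's alpha: internal consistency of a 4-item scale (rows = subjects).
items = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
k = items.shape[1]                         # number of items
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Test-retest r = {test_retest_r:.2f}; Cronbach's alpha = {alpha:.2f}")
```

Both statistics are read on the 0 to 1 scale described above: the nearer to 1, the more consistent the instrument.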

Valid for a specific purpose: Researchers must also use measurement instruments as intended. First, instruments are often validated for use with a particular population; they may not be valid for measuring the same variable in other populations. For example, different cultures, genders, professions, and ages may respond differently to the same question. Second, instruments may be valid in predicting certain outcomes (e.g., SAT & ACT have higher validity in predicting NCLEX success than does GPA). As Sullivan (2011) wrote: “Determining validity can be viewed as constructing an evidence-based argument regarding how well a tool measures what it is supposed to do. Evidence can be assembled to support, or not support, a specific use of the assessment tool.”

In summary….

  1. Instrument validity = how accurate the tool is in measuring a particular variable
  2. Instrument reliability = how consistently the tool measures whatever it measures

Fun Practice: In your own words, relate the following article excerpt to the concept of validity: “To assess content validity [of the Moral Distress Scale], 10 nurses were asked to provide comments on grammar, use of appropriate words, proper placement of phrases, and appropriate scoring” (p. 3). Ghafouri et al. (2021). Psychometrics of the moral distress scale in Iranian mental health nurses. BMC Nursing. https://doi.org/10.1186/s12912-021-00674-4

Bates Center Seminar Series – Diabetes: A History of Race and Disease

Speaker: Arleen Tuchman, PhD, Vanderbilt University

Date and Time: Wednesday, April 6, 2022, 4:00pm EDT, virtual BlueJeans event

Abstract: Who is considered most at risk for diabetes, and why? In this talk, Tuchman discusses how, at different times over the past one hundred years, Jews, Native Americans, and African Americans have each been labeled most at risk for developing diabetes, and how such claims have reflected and perpetuated troubling assumptions about race, ethnicity, and class. As Tuchman shows, diabetes also underwent a mid-century transformation in the public’s eye from being a disease of wealth and “civilization” to one of poverty and “primitive” populations. In tracing this cultural history, Tuchman argues that shifting understandings of diabetes reveal just as much about scientific and medical beliefs as they do about the cultural, racial, and economic milieus of their time.

Bio: Arleen Tuchman is a specialist in the history of medicine in the United States and Europe, with research interests in the cultural history of health, disease, and addiction; the rise of scientific medicine; and scientific and medical constructions of gender and sexuality. She is the author of three books, the most recent being Diabetes: A History of Race and Disease (Yale University Press, 2020). She is currently working on a history of addiction and the family in the United States.

Tuchman has held many fellowships, including ones from the American Council of Learned Societies, the National Institutes of Health, and the National Endowment for the Humanities.
Tuchman is a past director of Vanderbilt University’s Center for Medicine, Health, and Society (2006-2009) and has, since 2019, been the co-creator of a historic medicinal garden on Vanderbilt University’s campus.


Unexpected Evidence in The Science of Lockdowns

Headlines are blaring: “New study shows that lockdowns had minimal effect on COVID-19 mortality.”

The January 2022 systematic review and meta-analysis underlying that news is Herby, Jonung, & Hanke’s “A Literature Review and Meta-Analysis of the Effects of Lockdowns on COVID-19 Mortality” in Studies in Applied Economics.

Scientists label systematic reviews and meta-analyses as the strongest type of scientific evidence (the top of the pyramid of evidence). Of course, the strength of any systematic review/meta-analysis depends on whether it is well or poorly done, so never put your research-critique brain in neutral. This one seems well done.

In systematic reviews, researchers follow a methodical, focused process that describes their selection and analysis of all studies on a topic. A meta-analysis then statistically pools the data from those selected studies, analyzing them as if they were one large study. Researchers specify their process and parameters for selecting studies, and they typically publish a table of evidence that summarizes key information about each study. Herby et al. did so. (Note: systematic reviews should not be confused with integrative reviews, in which authors are less systematic and primarily provide background information.)

For example, from Herby et al.’s study cited above: “This study employed a systematic search and screening procedure in which 18,590 studies are identified… After three levels of screening, 34 studies ultimately qualified. Of those 34 eligible studies, 24 qualified for inclusion in the meta-analysis. They were separated into three groups: lockdown stringency index studies, shelter-in-place-order (SIPO) studies, and specific [non-pharmaceutical intervention] NPI studies. An analysis of each of these three groups support the conclusion that lockdowns have had little to no effect on COVID-19 mortality.”
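To see mechanically what it means to treat selected studies as one large study, here is a minimal fixed-effect (inverse-variance) pooling sketch in Python. The effect estimates and standard errors are invented and are not Herby et al.’s numbers; real meta-analyses, including theirs, involve many more choices (e.g., random-effects models and heterogeneity testing).

```python
# Minimal fixed-effect meta-analysis sketch (inverse-variance weighting).
# Effect estimates and standard errors are invented for illustration;
# they are NOT taken from Herby et al.
import math

studies = [  # (effect estimate, standard error) for each hypothetical study
    (-0.02, 0.05),
    (0.01, 0.04),
    (-0.05, 0.08),
]

weights = [1 / se**2 for _, se in studies]  # more precise studies weigh more
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```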

See the full publication below. Rather than reading it beginning to end, first 1) read the abstract; 2) identify the parameters used to select the 34 eligible studies and the 24 meta-analysis studies; 3) scan the table of evidence; and 4) read the discussion beginning on page 40. Then read the complete article, and cut yourself some slack: just try to understand what you can, depending on your research expertise.

What do you think? Are the studies that support their conclusions strong? What are the SCIENTIFIC objections to their conclusions? What do they identify as policy implications, and do you agree or disagree?

[NOTE THAT THIS ARTICLE LINK MAY BE GOOD FOR ONLY 30 DAYS, but a librarian can help you get it after that.] Happy evidence hunting.

Research: What it is and isn’t

WHAT RESEARCH IS

Research is using the scientific process to ask and answer questions by examining new or existing data for patterns. The data are measurements of variables of interest. The simplest definition of a variable is that it is something that varies, such as height, income, or country of origin. For example, a researcher might be interested in collecting data on triceps skin fold thickness to assess the nutritional status of preschool children. Skin fold thickness will vary.

Research is often categorized in different ways: by data, design, broad aims, and logic.

  • Data. Study data may be quantitative (numbers, such as blood pressure readings or scale scores) or qualitative (non-numeric, such as words from interviews or field observations).
  • Design. Study design is the overall plan for conducting a research study, and there are three basic designs: descriptive, correlational, and experimental.
    1. Descriptive research attempts to answer the question, “What exists?” It tells us what the situation is, but it cannot explain why things are the way they are. e.g., How much money do nurses make?
    2. Correlational research answers the question, “What is the relationship between these variables?” (e.g., age and attitudes toward work). It cannot explain why those variables are or are not related. e.g., the relationship between nurse caring and patient satisfaction (a toy numeric sketch of descriptive vs. correlational analysis follows this list)
    3. Experimental research tries to answer “Why?” questions by examining cause-and-effect connections. e.g., gum chewing after surgery speeds return of bowel function; gum chewing is the potential cause, or “the why”
  • Aims. Studies may also be either applied or basic research. Applied research is research whose overall purpose is to uncover knowledge that can be used immediately in practice (e.g., whether a scheduled postpartum quiet time facilitates breastfeeding). In contrast, basic research produces new knowledge that has no immediate application (e.g., identifying receptors on a cell wall).
  • Logic. Study logic may be inductive or deductive. Inductive reasoning is used in qualitative research; it starts with specific bits of information and moves toward generalizations [e.g., this patient’s pain is reduced after listening to music (specific); therefore, music listening reduces all patients’ pain (general)]. Deductive reasoning is typical of quantitative research; it starts with generalizations and moves toward specifics [e.g., if listening to music relaxes people (general), then it may reduce post-operative pain (specific)]. Of course, the logical conclusions in each case should be tested with research!
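To make the descriptive-versus-correlational distinction above concrete, here is a toy sketch in Python. All numbers are invented for illustration; neither analysis can answer an experimental “why?” question.

```python
# Toy contrast between descriptive and correlational questions.
# All data below are invented for illustration.
import statistics

salaries = [72000, 68000, 81000, 75000, 70000]  # five hypothetical nurses
years_experience = [4, 2, 10, 6, 3]

# Descriptive: "What exists?" -- summarize the situation without explaining it.
print("Mean salary:", statistics.mean(salaries))

# Correlational: "What is the relationship?" between two variables.
r = statistics.correlation(years_experience, salaries)  # Pearson r (Python 3.10+)
print("Experience vs. salary, r =", round(r, 2))

# Experimental ("why?") questions would require manipulating a variable and
# comparing groups; neither summary above can establish cause and effect.
```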

WHAT RESEARCH IS NOT:

Research as a scientific process is not going to the library or searching online to find information. It is also different from processes of applying research and non-research evidence to practice (called Evidence-Based Practice or EBP). And it is not the same as Quality Improvement (QI). See Two Roads Diverged for a flowchart to help differentiate research, QI and EBP.

“Two roads diverged in a yellow wood…” R.Frost

TIME TO REPUBLISH THIS ONE:

Below is my adaptation of one of the clearest representations that I have ever seen of when the roads diverge into quality improvement, evidence-based practice, & research.  Well done, Dr. E. Schenk, PhD, MHI, RN-BC!

[Image: qi-ebp-research-flow-chart]

Trial Balloons & Pilot Studies

A pilot study is to research what a trial balloon is to politics.

In politics, a trial balloon is communicating a law or policy idea via media to see how the intended audience reacts to it.  A trial balloon does not answer the question, “Would this policy (or law) work?” Instead a trial balloon answers questions like “Which people hate the idea of the policy/law–even if it would work?” or “What problems might enacting it create?” In other words, a trial balloon answers questions that a politician wants to know BEFORE implementing a policy so that the policy or law can be tweaked to be successfully put in place.


In research, a pilot study is sort of like a trial balloon. It is “a small-scale test of the methods and procedures” of a planned full-scale study (Porta, Dictionary of Epidemiology, 5th edition, 2008). A pilot study answers questions that we want to know BEFORE doing a larger study, so that we can tweak the study plan and have a successful full-scale research project. A pilot study does NOT answer research questions or hypotheses, such as “Does this intervention work?”  Instead a pilot study answers the question “Are these research procedures workable?”

A pilot study asks & answers questions like: “Can I recruit my target population? Can the treatments be delivered per protocol? Are study conditions acceptable to participants?” and so on. A pilot study should have specific, measurable benchmarks for feasibility testing. For example, if the pilot is finding out whether subjects will adhere to the study, then adherence might be defined as “70 percent of participants in each [group] will attend at least 8 of 12 scheduled group sessions.” Sample size is based on practical criteria such as budget, participant flow, and the number needed to answer feasibility questions (i.e., questions about whether the study is workable).
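As a tiny illustration of checking a pilot against a pre-set feasibility benchmark, here is a sketch in Python: the 70-percent/8-of-12 threshold comes from the quoted example, while the attendance counts are invented.

```python
# Feasibility check against a pre-set pilot benchmark.
# The 70%/8-of-12 benchmark is from the quoted example; attendance counts are invented.
sessions_attended = [12, 9, 5, 11, 8, 10, 3, 12, 9, 7]  # one entry per participant

adherent = sum(1 for s in sessions_attended if s >= 8)  # attended at least 8 of 12
rate = adherent / len(sessions_attended)

print(f"Adherence: {rate:.0%} (benchmark: 70%) ->",
      "benchmark met" if rate >= 0.70 else "revise procedures before the full study")
```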

A pilot study does NOT: test hypotheses (even preliminarily); use inferential statistics; estimate effect size; or assess or demonstrate the safety of an intervention.

A pilot study is not just a small study.

Next blog: Why this matters!!

For more info, read the source of all quotes in this post: Pilot Studies: Common Uses and Misuses @ https://nccih.nih.gov/grants/whatnccihfunds/pilot_studies

Easy to read. Hard to write.

Musings: For me, the most difficult sections of a research report to write are the Intro/Background and Discussion. And yet those are apparently the easiest sections for many to read. My students, at least, tend to read only those sections and skip the rest.

Why? For the author, the Intro/Background and Discussion require hard, critical thinking about what is already known about the topic (Intro/Background) and then about what one’s findings mean in light of that (Discussion). For research consumers, though, the language in these sections sounds familiar and ordinary. In contrast, writing the more technical sections (Methods, Instruments, Results) is fairly straightforward because their vocabulary and structure are scientifically standardized. But for readers, those same sections contain potentially unfamiliar research terminology that is not part of everyday conversation, i.e., scientific vocabulary. Quantitative studies often create even more reader difficulty.

My solution for myself as a writer? To spend time making sure that the first sentence of every paragraph in the Intro/Background and Discussion makes a step-by-step argument supported by the rest of the paragraph. Follow the standardized structure for the rest. Keep language as precise yet simple as possible.

Solution for research readers? Read the whole article, understanding what you can, and keep a research glossary handy (e.g., https://sites.google.com/site/nursingresearchaid/week-1). Even if practice doesn’t make you perfect, it works in learning a new language, whether that language is a “foreign” one or a scientific one.

Critical Thinking: Test out your reading skills on this article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6503597/. Do the authors make systematic arguments in the Intro/Background & Discussion? What makes this article hard or easy to read?

Happy Summer! -Dr.H

Part 2: It’s a jungle out there! Flaky academic conferences

Flaky conferences can take advantage of your time, money, and energy. My own publications in bona fide journals have triggered an onslaught of emails from probably predatory conferences: World Congresses of this and that (global health, nursing, education, etc.). The cartoon below totally resonates! Thanks, PHD Comics.

http://phdcomics.com/comics.php?f=1704

2019: It is…

I’m not a New Year’s resolution person.  I used to be and then I realized that I wanted to hit the restart button more often than every 365 days.  So…my aim for this blog remains pretty much unchanged:   Make research processes and ideas understandable for every RN.

Although “to be simple is difficult,” that’s my goal. Let me know what’s difficult for you in research, because it probably is for others as well. Let’s work on the difficult together so that you can use the BEST Evidence in your practice.

The 2019 journey begins today, and tomorrow, and the tomorrows after that!

FOR MORE: Go to PubMed. Search for a topic of interest. Send me the article & we’ll critique together.