Category Archives: Methods

Primer on Research Design, Part 1: Description

A research design is the investigator-chosen, overarching study framework that facilitates getting the most accurate answer to a research question or hypothesis. Think of research design as similar to the framing of a house during construction. Just as house-framing provides structure and limits to walls, floors, and ceilings, so does a research design provide structure and limits to a host of protocol details.

Tip. The two major categories of research design are: 1) non-experimental (observation only) and 2) experimental (testing an intervention).

DESCRIPTIVE STUDIES

Non-experimental studies that examine one variable at a time.

When little is known and no theory exists on a topic, descriptive research begins to build theory by identifying and defining key, related concepts (variables). Although a descriptive study may explore several variables, only one of those is measured at a time; there is no examination of relationships between variables. Descriptive studies create a picture of what exists by analyzing quantitative or qualitative data to answer questions like, “What is [variable x]?” or “How often does it occur?” Examples of such one-variable questions are “What are the experiences of first-time fathers?” or “How many falls occur in the emergency room?” (Variables are in italics.) The former question produces qualitative data, and the latter, quantitative.

Descriptive results raise important questions for further study, and findings are rarely generalizable. You can see this especially in a descriptive case study: an in-depth exploration of a single event or phenomenon that is limited to a particular time and place. Given case study limitations, opinions differ on whether case studies even qualify as research.

Descriptive research that arises from constructivist or advocacy assumptions merits particular attention. In these designs, researchers collect in-depth qualitative information about only one variable and then critically reflect on those data in order to uncover emerging themes or theories. Often broad data are collected in a natural setting in which researchers exercise little control over other variables. Sample size is not pre-determined, data collection and analysis are concurrent, and the researcher collects and analyzes data until no new ideas emerge (data saturation). The most basic qualitative descriptive method is perhaps content analysis, sometimes called narrative descriptive analysis, in which researchers uncover themes within informant descriptions. Box 1 identifies major qualitative traditions beyond content analysis and case studies.
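If it helps to picture that first analytic pass, here is a toy Python sketch of the most basic move in content analysis: tallying candidate theme words across interview excerpts. The excerpts and the coding list are invented for illustration; real content analysis is an iterative human-judgment process, not just counting.

```python
# Toy sketch of a first pass at content analysis: tally how often
# hand-coded theme words appear across interview excerpts.
# All data below are invented for illustration.

from collections import Counter

excerpts = [
    "I felt alone at night, but the day nurses gave me support",
    "My family could not visit, so I was alone a lot",
    "The support group helped when I felt overwhelmed",
]

# Hypothetical coding list: words the analyst maps to tentative themes
codes = {"alone": "isolation", "support": "social support", "overwhelmed": "distress"}

tally = Counter()
for text in excerpts:
    for word in text.lower().replace(",", "").split():
        if word in codes:
            tally[codes[word]] += 1

for theme, count in tally.most_common():
    print(f"{theme}: {count}")   # e.g., isolation: 2
```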

Alert! All qualitative studies are descriptive, but not all descriptive studies are qualitative.

Box 1. Descriptive Qualitative Designs

Design | Focus | Discipline of Origin
Ethnography | Uncovers phenomena within a given culture, such as meanings, communications, and mores. | Anthropology
Grounded Theory | Identifies a basic social problem and the process that participants use to confront it. | Sociology
Phenomenology | Documents the “lived experience” of informants going through a particular event or situation. | Psychology
Community Participatory Action | Seeks positive social change and empowerment of an oppressed community by engaging them in every step of the research process. | Marxist political theory
Feminist | Seeks positive social change and empowerment of women as an oppressed group. | Marxist political theory

Research: What it is and isn’t

WHAT RESEARCH IS

Research is using the scientific process to ask and answer questions by examining new or existing data for patterns. The data are measurements of variables of interest. The simplest definition of a variable is that it is something that varies, such as height, income, or country of origin. For example, a researcher might be interested in collecting data on triceps skin fold thickness to assess the nutritional status of preschool children. Skin fold thickness will vary.

Research is often categorized in terms of its data, design, broad aims, and logic.

  • Data. Study data may be qualitative (words or images) or quantitative (numbers).
  • Design. Study design is the overall plan for conducting a research study, and there are three basic designs: descriptive, correlational, and experimental.
    1. Descriptive research attempts to answer the question, “What exists?” It tells us what the situation is, but it cannot explain why things are the way they are (e.g., How much money do nurses make?).
    2. Correlational research answers the question, “What is the relationship between variables?” (e.g., age and attitudes toward work, or nurse caring and patient satisfaction). It cannot explain why those variables are or are not related. (A sketch of computing such a correlation follows this list.)
    3. Experimental research tries to answer “Why?” questions by examining cause-and-effect connections (e.g., gum chewing after surgery speeds return of bowel function; gum chewing is the potential cause, or “the why”).
  • Aims. Studies may be either applied or basic research. In applied research, the overall purpose is to uncover knowledge that can be used immediately in practice (e.g., whether a scheduled postpartum quiet time facilitates breastfeeding). In contrast, in basic research the new knowledge has no immediate application (e.g., identifying receptors on a cell wall).
  • Logic. Study logic may be inductive or deductive. Inductive reasoning is used in qualitative research; it starts with specific bits of information and moves toward generalizations [e.g., this patient’s pain is reduced after listening to music (specific); that means that music listening reduces all patients’ pain (general)]. Deductive reasoning is typical of quantitative research; it starts with generalizations and moves toward specifics [e.g., if listening to music relaxes people (general), then it may reduce post-operative pain (specific)]. Of course, the logical conclusions in each case should be tested with research!
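To make the correlational design concrete, here is a minimal Python sketch that computes a Pearson correlation coefficient for the nurse caring and patient satisfaction example; all ratings are invented for illustration, and remember that the resulting r describes a relationship, not a cause.

```python
# Minimal sketch: Pearson correlation between two variables.
# The ratings are invented; r describes association, not causation.

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical 1-5 ratings from eight patients
caring       = [3, 4, 2, 5, 4, 3, 5, 2]   # perceived nurse caring
satisfaction = [2, 4, 3, 5, 5, 3, 4, 2]   # patient satisfaction

print(f"r = {pearson_r(caring, satisfaction):.2f}")  # nearer +/-1 = stronger linear relationship
```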

WHAT RESEARCH IS NOT

Research as a scientific process is not going to the library or searching online to find information. It is also different from the process of applying research and non-research evidence to practice (called Evidence-Based Practice, or EBP). And it is not the same as Quality Improvement (QI). See Two Roads Diverged for a flowchart to help differentiate research, QI, and EBP.

Reposting: Dispelling the Nice or Naughty Myth (Retrospective Observational Study of Santa Claus)

Check out this re-post of my Christmas-y blog:

https://discoveringyourinnerscientist.com/2018/01/04/dispelling-the-nice-or-naughty-myth-retrospective-observational-study-of-santa-claus/

On Target All the Time and Every Time!

“Measure twice. Cut once!” goes the old carpenter’s adage. Why? Because measuring accurately means you’ll get the outcomes you want!

Same in research. Consistent and accurate measurement will give you outcome data you can trust. Whether an instrument measures something consistently is called reliability. Whether it measures accurately is called validity. So, before you use a tool, check its reported reliability and validity.

A good resource for understanding the concepts of reliability (consistency) and validity (accuracy) of research tools is https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/. Its Key Takeaways are quoted below:

  • Psychological researchers do not simply assume that their measures work. Instead, they conduct research to show that they work. If they cannot show that they work, they stop using them.
  • There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (interrater reliability). Validity is the extent to which the scores actually represent the variable they are intended to.
  • Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces are correlated with other variables they are expected to be correlated with and not correlated with variables that are conceptually distinct.
  • The reliability and validity of a measure is not established by any single study but by the pattern of results across multiple studies. The assessment of reliability and validity is an ongoing process.
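For readers who like to see the arithmetic, here is a small Python sketch of one reliability statistic named above, internal consistency, computed as Cronbach’s alpha; the questionnaire responses are invented for illustration.

```python
# Sketch: Cronbach's alpha (internal consistency) for a short scale.
# Rows = respondents, columns = items; all numbers are invented.

def variance(values):
    """Sample variance (n - 1 denominator)."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / (n - 1)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))"""
    k = len(rows[0])                                  # number of items
    item_vars = sum(variance(col) for col in zip(*rows))
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five respondents answering a three-item questionnaire (1-4 ratings)
responses = [
    [3, 4, 3],
    [2, 2, 3],
    [4, 4, 4],
    [1, 2, 1],
    [3, 3, 4],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # values near 0.8+ are often called acceptable
```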

Trial Balloons & Pilot Studies

A pilot study is to research what a trial balloon is to politics

In politics, a trial balloon is communicating a law or policy idea via the media to see how the intended audience reacts to it. A trial balloon does not answer the question, “Would this policy (or law) work?” Instead, a trial balloon answers questions like “Which people hate the idea of the policy or law, even if it would work?” or “What problems might enacting it create?” In other words, a trial balloon answers questions that a politician wants to know BEFORE implementing a policy, so that the policy or law can be tweaked and successfully put in place.


In research, a pilot study is sort of like a trial balloon. It is “a small-scale test of the methods and procedures” of a planned full-scale study (Porta, Dictionary of Epidemiology, 5th edition, 2008). A pilot study answers questions that we want to know BEFORE doing a larger study, so that we can tweak the study plan and have a successful full-scale research project. A pilot study does NOT answer research questions or hypotheses, such as “Does this intervention work?” Instead, a pilot study answers the question “Are these research procedures workable?”

A pilot study asks and answers questions such as “Can I recruit my target population? Can the treatments be delivered per protocol? Are study conditions acceptable to participants?” and so on. A pilot study should have specific, measurable benchmarks for feasibility testing. For example, if the pilot is finding out whether subjects will adhere to the study, then adherence might be defined as “70 percent of participants in each [group] will attend at least 8 of 12 scheduled group sessions.” Sample size is based on practical criteria such as budget, participant flow, and the number needed to answer the feasibility questions (i.e., questions about whether the study is workable).
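As a rough illustration, checking a benchmark like the one quoted above can be as simple as comparing an observed rate to the pre-set target; the attendance counts below are made up.

```python
# Sketch: testing a pilot feasibility benchmark ("70% of participants
# attend at least 8 of 12 sessions"). Attendance data are invented.

BENCHMARK = 0.70          # required proportion of adherent participants
SESSIONS_REQUIRED = 8     # of 12 scheduled group sessions

attendance = [12, 9, 7, 11, 8, 5, 10, 12, 8, 6]   # sessions attended per participant

adherent = sum(1 for a in attendance if a >= SESSIONS_REQUIRED)
rate = adherent / len(attendance)

print(f"{adherent}/{len(attendance)} adherent ({rate:.0%})")
print("Benchmark met" if rate >= BENCHMARK else "Benchmark not met")
```

Note that the output answers a feasibility question (“Can participants adhere?”), not an efficacy question (“Does the intervention work?”).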

A pilot study does NOT test hypotheses (even preliminarily), use inferential statistics, estimate effect size, or assess or demonstrate the safety of an intervention.

A pilot study is not just a small study.

Next blog: Why this matters!!

For more info, read the source of all quotes in this blog: Pilot Studies: Common Uses and Misuses @ https://nccih.nih.gov/grants/whatnccihfunds/pilot_studies

After taste…I mean “after test”

Let’s say you want to find out how well students think they learned theory in your class.

One option is a pre/post test: you distribute the same survey before and after the class, asking students to rate on a 1-4 scale how well they think they know the new material. Then you compare their ratings.

Another option is a posttest-only survey: you give students a survey after the class that asks them to rate their knowledge before the class (1-4) and their knowledge now (1-4). Then you compare their ratings.

One research option is stronger than the other. Which one is it, and why? (Hint: think retrospective/prospective.)
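To see how the two options structure the data differently, here is a toy Python sketch with invented 1-4 ratings. It shows the comparison made in each design; it deliberately does not answer which design is stronger.

```python
# Toy illustration of the two survey options; all ratings are invented.

# Option 1: pre/post test (same survey given twice)
pre  = [2, 1, 3, 2, 2]          # collected BEFORE class (prospective)
post = [3, 3, 4, 3, 3]          # collected AFTER class

# Option 2: posttest only (both ratings collected after class)
recalled_pre = [1, 1, 2, 2, 1]  # "how much did you know before?" (retrospective)
now          = [3, 3, 4, 3, 3]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Option 1 change: {mean(post) - mean(pre):+.1f}")
print(f"Option 2 change: {mean(now) - mean(recalled_pre):+.1f}")
```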

Research Words of the Week: Reliability & Validity

Reliability & validity are terms that refer to the consistency and accuracy of a quantitative measurement tool: a questionnaire, technical device, ruler, or any other measuring device. Together they mean that the outcome measure can be trusted and is relatively error-free.

  • Reliability – This means that the instrument measures CONSISTENTLY.
  • Validity – This means that the instrument measures ACCURATELY. In other words, it measures what it is supposed to measure and not something else.

For example: if your bathroom scale measures weight (e.g., it doesn’t measure BP or stress), then it is a valid measure of weight; you might say it has high validity. If your bathroom scale reads the same weight when you step on and off of it several times, then it is measuring weight reliably, or consistently; you might say it has high reliability.
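A playful sketch of that bathroom-scale example, with invented readings: both scales below are reliable (their readings barely vary), but only one is also valid (its average matches the true weight).

```python
# Sketch: reliability (consistency) vs. validity (accuracy) of two
# hypothetical bathroom scales; true weight and readings are invented.

true_weight = 70.0
scale_a = [70.0, 70.5, 69.5, 70.25, 69.75]   # consistent AND accurate
scale_b = [75.0, 75.5, 74.5, 75.25, 74.75]   # consistent but 5 kg too high

for name, readings in [("A", scale_a), ("B", scale_b)]:
    mean = sum(readings) / len(readings)
    spread = max(readings) - min(readings)    # small spread = reliable
    bias = mean - true_weight                 # near-zero bias = valid
    print(f"Scale {name}: spread = {spread:.1f} kg, bias = {bias:+.1f} kg")

# Both scales are reliable; only Scale A is also valid. Reliability
# does not guarantee validity.
```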

Is History “Bunk”? We report. You Decide.

History?  Really?  Fascinating!  Ever thought about all the stories behind your own present life?

Check out this dramatized YouTube documentary about Nurse Mary Seacole. I promise, you’ll enjoy it: https://www.youtube.com/watch?v=RIrim4r-LbY

You can be a part of documenting such stories, including your own. Can I pique your interest with these examples of historical research?

1. Artifacts: Example = http://acif.org/ The American Collectors of Infant Feeders:

[Image: infant feeder. Credit: http://acif.org/]

The American Collectors of Infant Feeders is a non-profit organization whose primary purpose is to gather and publish information pertaining to the feeding of infants throughout history. The collecting of infant feeders and related items is promoted.

2. Interviews: Example = http://www.oralhistory.org/ Want to do interviews of interesting faculty, students, leaders, “ordinary” nurses? Check out the Oral History Association. In addition to fostering communication among its members, the OHA encourages standards of excellence in the collection, preservation, dissemination, and uses of oral testimony.

[Image: nursing scrapbook. Credit: https://archives.mc.duke.edu/blog/nursing-materials-displa]

3. Stories from the “ordinary”: Example = http://www.murphsplace.com/mother/main.html My Mother’s War. “Helen T. Burrey was an American nurse who served as a Red Cross Nurse during World War I. She documented her experience in both a journal and a scrapbook which has been treasured by her daughter, Mary Murphy. Ms. Murphy has placed many of these items on the Internet for people to access and it provides a first-hand account of that experience. Additionally she has a variety of links to other WWI resources.” (quoted from AAHN Resources online)

[Image: Army nursing history. Credit: http://e-anca.org/]

4. Ethnic studies: Example = https://libguides.rowan.edu/blacknurses Black Nurses in History. “This is a ‘bibliography and guide to web resources’ from the UMDNJ and Coriell Research Library. Included are Mamie O. Hail, Mary Eliza Mahoney, Jessie Sleet Scales, Mary Seacole, Mabel Keaton Staupers, Susie King Taylor, Sojourner Truth, Harriet Tubman.” (quoted from AAHN Resources online)

Want more?  

Critical thinking: Don’t forget to save your own materials. Your life is history! What in your life is most interesting? Have you written it down or dictated it into your iPhone voice memos? There is GREAT interest in “ordinary” men and women. Many times items are tossed because they are “just letters,” “only old records,” or “stuff.” Just Don’t Do It.

 

Words vs. Numbers: What does it all mean?

There are several ways to classify types of research. One way is qualitative versus quantitative: in other words, WORD vs. NUMBER data, methods, and analysis.

  1. Qualitative research focuses on words (or sometimes images) and their meanings.
  2. Quantitative research focuses on numbers (counting and measuring things) and on statistical analysis that yields probable meaning.

If you watch this short, easy-to-understand YouTube clip, you’ll have all the basics you need to understand these! Enjoy!

Critical thinking: Go to PubMed for this QUANTitative study on spiritual issues in care (https://www.ncbi.nlm.nih.gov/pubmed/28403299) and compare it to this PubMed QUALitative study (https://www.ncbi.nlm.nih.gov/pubmed/27853263) in terms of data, methods, and analysis.

For more information: See earlier posts