6052 Week 7 Assignment

 


Realtors rely on detailed property appraisals—conducted using appraisal tools—to assign market values to houses and other properties. These values are then presented to buyers and sellers to set prices and initiate offers.

Research appraisal is not that different. The critical appraisal process utilizes formal appraisal tools to assess the results of research to determine value to the context at hand. Evidence-based practitioners often present these findings to make the case for specific courses of action.

In this Assignment, you will use an appraisal tool to conduct a critical appraisal of published research. You will then present the results of your efforts.

To Prepare:

  • Review the Resources and consider the importance of critically appraising research evidence.
  • Reflect on the four peer-reviewed articles you selected in Module 2 and analyzed in Module 3.
  • Review and download the Critical Appraisal Tool Worksheet Template provided in the Resources.

The Assignment (Evidence-Based Project)

Part 4A: Critical Appraisal of Research

Conduct a critical appraisal of the four peer-reviewed articles you selected and analyzed by completing the Evaluation Table within the Critical Appraisal Tool Worksheet Template.

Part 4B: Critical Appraisal of Research

Based on your appraisal, in a 1- to 2-page critical appraisal, suggest a best practice that emerges from the research you reviewed. Briefly explain the best practice, justifying your proposal with APA citations of the research.

Evaluation Table

Use this document to complete the evaluation table requirement of the Module 4 Assessment,

Evidence-Based Project, Part 4A: Critical Appraisal of Research

Full APA formatted citation of selected article.

Article #1

Article #2

Article #3

Article #4

Evidence Level *

(I, II, or III)

Conceptual Framework

Describe the theoretical basis for the study (If there is not one mentioned in the article, say that here).**

Design/Method

Describe the design and how the study was carried out (In detail, including inclusion/exclusion criteria).

Sample/Setting

The number and characteristics of patients, attrition rate, etc.

Major Variables Studied

List and define dependent and independent variables

Measurement

Identify primary statistics used to answer clinical questions (You need to list the actual tests done).

Data Analysis: Statistical or Qualitative Findings

(You need to enter the actual numbers determined by the statistical tests or qualitative data).

Findings and Recommendations

General findings and recommendations of the research

Appraisal and Study Quality

Describe the general worth of this research to practice.

What are the strengths and limitations of the study?

What are the risks associated with implementation of the suggested practices or processes detailed in the research?

What is the feasibility of use in your practice?

Key findings

Outcomes

General Notes/Comments

*

These levels are from the Johns Hopkins Nursing Evidence-Based Practice: Evidence Level and Quality Guide

· Level I

Experimental study, randomized controlled trial (RCT), or systematic review of RCTs, with or without meta-analysis

· Level II

Quasi-experimental studies, systematic review of a combination of RCTs and quasi-experimental studies, or quasi-experimental studies only, with or without meta-analysis

· Level III

Nonexperimental study; systematic review of a combination of RCTs, quasi-experimental, and nonexperimental studies (or nonexperimental studies only), with or without meta-analysis; qualitative study or qualitative systematic review, with or without meta-synthesis

· Level IV

Respected authorities’ opinions, nationally recognized expert committee/consensus panel reports based on scientific evidence

· Level V

Literature reviews, quality improvement, program evaluation, financial evaluation, case reports, nationally recognized expert(s) opinion based on experiential evidence

**Note on Conceptual Framework

· The following information is from the Walden academic guides, which help explain conceptual frameworks and the reasons they are used in research. Here is the link:

https://academicguides.waldenu.edu/library/conceptualframework

· Researchers create theoretical and conceptual frameworks that include a philosophical and methodological model to help design their work. A formal theory provides context for the outcome of the events conducted in the research. The data collection and analysis are also based on the theoretical and conceptual framework.

· As stated by Grant and Osanloo (2014), “Without a theoretical framework, the structure and vision for a study is unclear, much like a house that cannot be constructed without a blueprint. By contrast, a research plan that contains a theoretical framework allows the dissertation study to be strong and structured with an organized flow from one chapter to the next.”

· Theoretical and conceptual frameworks provide evidence of academic standards and procedure. They also offer an explanation of why the study is pertinent and how the researcher expects to fill the gap in the literature.

· The literature does not always clearly delineate between a theoretical and a conceptual framework. That said, there are slight differences between the two.

References

The Johns Hopkins Hospital/Johns Hopkins University. (n.d.). Johns Hopkins nursing evidence-based practice: Appendix C: Evidence level and quality guide. Retrieved October 23, 2019, from https://www.hopkinsmedicine.org/evidence-based-practice/_docs/appendix_c_evidence_level_quality_guide

Grant, C., & Osanloo, A. (2014). Understanding, selecting, and integrating a theoretical framework in dissertation research: Creating the blueprint for your "house." Administrative Issues Journal: Education, Practice, and Research, 4(2), 12-26.

Walden University Academic Guides. (n.d.). Conceptual & theoretical frameworks overview. Retrieved October 23, 2019, from https://academicguides.waldenu.edu/library/conceptualframework

Critical Appraisal Tool Worksheet Template

© 2018 Laureate Education Inc.

Critical Appraisal of the Evidence: Part II
Digging deeper—examining the "keeper" studies.

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved.

The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November's Evidence-Based Practice, Step by Step.

In July's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, collected the evidence to answer their clinical question: "In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?" As part of their rapid critical appraisal (RCA) of the 15 potential "keeper" studies, the EBP team found and placed the essential elements of each study (such as its population, study design, and setting) into an evaluation table. In so doing, they began to see similarities and differences between the studies, which Carlos told them is the beginning of synthesis. We now join the team as they continue with their RCA of these studies to determine their worth to practice.

RAPID CRITICAL APPRAISAL

Carlos explains that typically an RCA is conducted along with an RCA checklist that's specific to the research design of the study being evaluated—and before any data are entered into an evaluation table. However, since Rebecca and Chen are new to appraising studies, he felt it would be easier for them to first enter the essentials into the table and then evaluate each study. Carlos shows Rebecca several RCA checklists and explains that all checklists have three major questions in common, each of which contains other more specific subquestions about what constitutes a well-conducted study for the research design under review (see Example of a Rapid Critical Appraisal Checklist).

Although the EBP team will be looking at how well the researchers conducted their studies and discussing what makes a "good" research study, Carlos reminds them that the goal of critical appraisal is to determine the worth of a study to practice, not solely to find flaws. He also suggests that they consult their glossary when they see an unfamiliar word. For example, the term randomization, or random assignment, is a relevant feature of research methodology for intervention studies that may be unfamiliar. Using the glossary, he explains that random assignment and random sampling are often confused with one another, but that they're very different. When researchers select subjects from within a certain population to participate in a study by using a random strategy, such as tossing a coin, this is random sampling. It allows the entire population to be fairly represented. But because it requires access to a particular population, random sampling is not always feasible. Carlos adds that many health care studies are based on a convenience sample—participants recruited from a readily available population, such as a researcher's affiliated hospital, which may or may not represent the desired population. Random assignment, on the other hand, is the use of a random strategy to assign study
participants to the intervention or control group. Random assignment is an important feature of higher-level studies in the hierarchy of evidence.

Carlos also reminds the team that it's important to begin the RCA with the studies at the highest level of evidence in order to see the most reliable evidence first. In their pile of studies, these are the three systematic reviews, including the meta-analysis and the Cochrane review, they retrieved from their database search (see "Searching for the Evidence," and "Critical Appraisal of the Evidence: Part I," Evidence-Based Practice, Step by Step, May and July). Among the RCA checklists Carlos has brought with him, Rebecca and Chen find the checklist for systematic reviews.

As they start to rapidly critically appraise the meta-analysis, they discuss that it seems to be biased since the authors included only studies with a control group. Carlos explains that while having a control group in a study is ideal, in the real world most studies are lower-level evidence and don't have control or comparison groups. He emphasizes that, in eliminating lower-level studies, the meta-analysis lacks evidence that may be informative to the question. Rebecca and Chen—who are clearly growing in their appraisal skills—also realize that three studies in the meta-analysis are the same as three of their potential "keeper" studies. They wonder whether they should keep those studies in the pile, or if, as duplicates, they're unnecessary. Carlos says that because the meta-analysis only included studies with control groups, it's important to keep these three studies so that they can be compared with other studies in the pile that don't have control groups. Rebecca notes that more than half of their 15 studies don't have control or comparison groups. They agree as a team to include all 15 studies at all levels of evidence and go on to appraise the two remaining systematic reviews.

The MERIT trial1 is next in the EBP team's stack of studies.

Example of a Rapid Critical Appraisal Checklist

Rapid Critical Appraisal of Systematic Reviews of Clinical Interventions or Treatments

1. Are the results of the review valid?
A. Are the studies in the review randomized controlled trials? Yes No
B. Does the review include a detailed description of the search strategy used to find the relevant studies? Yes No
C. Does the review describe how the validity of the individual studies was assessed (such as methodological quality, including the use of random assignment to study groups and complete follow-up of subjects)? Yes No
D. Are the results consistent across studies? Yes No
E. Did the analysis use individual patient data or aggregate data? Patient Aggregate

2. What are the results?
A. How large is the intervention or treatment effect (odds ratio, relative risk, effect size, level of significance)?
B. How precise is the intervention or treatment (confidence interval)?

3. Will the results assist me in caring for my patients?
A. Are my patients similar to those in the review? Yes No
B. Is it feasible to implement the findings in my practice setting? Yes No
C. Were all clinically important outcomes considered, including both risks and benefits of the treatment? Yes No
D. What is my clinical assessment of the patient, and are there any contraindications or circumstances that would keep me from implementing the treatment? Yes No
E. What are my patients' and their families' preferences and values concerning the treatment? Yes No

© Fineout-Overholt and Melnyk, 2005.


As we noted in the last installment of this series, MERIT is a good study to use to illustrate the different steps of the critical appraisal process. (Readers may want to retrieve the article, if possible, and follow along with the RCA.) Set in Australia, the MERIT trial examined whether the introduction of a rapid response team (RRT; called a medical emergency team or MET in the study) would reduce the incidence of cardiac arrest, death, and unplanned admissions to the ICU in the hospitals studied. To follow along as the EBP team addresses each of the essential elements of a well-conducted randomized controlled trial (RCT) and how they apply to the MERIT study, see their notes in Rapid Critical Appraisal of the MERIT Study.

ARE THE RESULTS OF THE STUDY VALID?

The first section of every RCA checklist addresses the validity of the study at hand—did the researchers use sound scientific methods to obtain their study results? Rebecca asks why validity is so important. Carlos replies that for a study's conclusions to be trusted—that is, relied upon to inform practice—the study must be conducted in a way that reduces bias or eliminates confounding variables (factors that influence how the intervention affects the outcome). Researchers typically use rigorous research methods to reduce the risk of bias. The purpose of the RCA checklist is to help the user determine whether or not rigorous methods have been used in the study under review, with most questions offering the option of a quick answer of "yes," "no," or "unknown."

Were the subjects randomly assigned to the intervention and control groups? Carlos explains that this is an important question when appraising RCTs. If a study calls itself an RCT but didn't randomly assign participants, then bias could be present. In appraising the MERIT study, the team discusses how the researchers randomly assigned entire hospitals, not individual patients, to the RRT intervention and control groups using a technique called cluster randomization. To better understand this method, the EBP team looks it up on the Internet and finds a PowerPoint presentation by a World Health Organization researcher that explains it in simplified terms: "Cluster randomized trials are experiments in which social units or clusters [in our case, hospitals] rather than individuals are randomly allocated to intervention groups."2
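
To make cluster randomization concrete, here is a minimal Python sketch (not from the article); the hospital names, group sizes, and random seed are invented purely for illustration.

    import random

    # Hypothetical list of participating hospitals (the "clusters").
    hospitals = ["Hospital A", "Hospital B", "Hospital C", "Hospital D",
                 "Hospital E", "Hospital F", "Hospital G", "Hospital H"]

    random.seed(42)            # fixed seed so the example is reproducible
    random.shuffle(hospitals)  # put the clusters in random order

    # Whole hospitals, not individual patients, are allocated to the
    # intervention (RRT) arm or the control (no RRT) arm.
    midpoint = len(hospitals) // 2
    intervention_group = hospitals[:midpoint]
    control_group = hospitals[midpoint:]

    print("Intervention (RRT):", intervention_group)
    print("Control (no RRT):  ", control_group)

Every patient admitted to a hospital in the intervention list is exposed to the RRT, which is what distinguishes cluster randomization from randomizing individual patients.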

Was random assignment concealed from the individuals enrolling the subjects? Concealment helps researchers reduce potential bias, preventing the person(s) enrolling participants from recruiting them into a study with enthusiasm if they're destined for the intervention group or with obvious indifference if they're intended for the control or comparison group. The EBP team sees that the MERIT trial used an independent statistician to conduct the random assignment after participants had already been enrolled in the study, which Carlos says meets the criteria for concealment.

Were the subjects and providers blind to the study group? Carlos notes that it would be difficult to blind participants or researchers to the intervention group in the MERIT study because the hospitals that were to initiate an RRT had to know it was happening. Rebecca and Chen wonder whether their "no" answer to this question makes the study findings invalid. Carlos says that a single "no" may or may not mean that the study findings are invalid. It's their job as clinicians interpreting the data to weigh each aspect of the study design. Therefore, if the answer to any validity question isn't affirmative, they must each ask themselves: does this "no" make the study findings untrustworthy to the extent that I don't feel comfortable using them in my practice?

Were reasons given to explain why subjects didn't complete the study? Carlos explains that sometimes participants leave a study before the end (something about the study or the participants themselves may prompt them to leave). If all or many of the participants leave for the same reason, this may lead to biased findings. Therefore, it's important to look for an explanation for why any subjects didn't complete a study. Since no hospitals dropped out of the MERIT study, this question is determined to be not applicable.

Were the follow-up assessments long enough to fully study the effects of the intervention? Chen asks Carlos why a time frame would be important in studying validity. He explains that researchers must ensure that the outcome is evaluated for a long enough period of time to show that the intervention indeed caused it. The researchers in the MERIT study conducted the RRT intervention for six months before evaluating the outcomes. The team discusses how six months was likely adequate to determine how the RRT affected cardiopulmonary arrest rates (CR) but might have been too short to establish the relationship between the RRT and hospital-wide mortality rates (HMR).


Rapid Critical Appraisal of the MERIT Study

1. Are the results of the study valid?

A. Were the subjects randomly assigned to the intervention and control groups? Yes No Unknown
Random assignment of hospitals was made to either a rapid response team (RRT; intervention) group or no RRT (control) group. To protect against introducing further bias into the study, hospitals, not individual patients, were randomly assigned to the intervention. If patients were the study subjects, word of the RRT might have gotten around, potentially influencing the outcome.

B. Was random assignment concealed from the individuals enrolling the subjects? Yes No Unknown
An independent statistician randomly assigned hospitals to the RRT or no RRT group after baseline data had been collected; thus the assignments were concealed from both researchers and participants.

C. Were the subjects and providers blind to the study group? Yes No Unknown
Hospitals knew to which group they'd been assigned, as the intervention hospitals had to put the RRTs into practice. Management, ethics review boards, and code committees in both hospitals knew about the intervention. The control hospitals had code teams and some already had systems in place to manage unstable patients. But control hospitals didn't have a placebo strategy to match the intervention hospitals' educational strategy for how to implement an RRT (a red flag for confounding!). If you worked in one of the control hospitals, unless you were a member of one of the groups that gave approval, you wouldn't have known your hospital was participating in a study on RRTs; this lessens the chance of confounding variables influencing the outcomes.

D. Were reasons given to explain why subjects didn't complete the study? Yes No Not Applicable
This question is not applicable as no hospitals dropped out of the study.

E. Were the follow-up assessments long enough to fully study the effects of the intervention? Yes No Unknown
The intervention was conducted for six months, which should be adequate time to have an impact on the outcomes of cardiopulmonary arrest rates (CR), hospital-wide mortality rates (HMR), and unplanned ICU admissions (UICUA). However, the authors remark that it can take longer for an RRT to affect mortality, and cite trauma protocols that took up to 10 years.

F. Were the subjects analyzed in the group to which they were randomly assigned? Yes No Unknown
All 23 (12 intervention and 11 control) hospitals remained in their groups, and analysis was conducted on an intention-to-treat basis. However, in their discussion, the authors attempt to provide a reason for the disappointing study results; they suggest that because the intervention group was "inadequately implemented," the fidelity of the intervention was compromised, leading to less than reliable results. Another possible explanation involves the baseline quality of care; if high, the improvement after an RRT may have been less than remarkable. The authors also note a historical confounder: in Australia, where the study took place, there was a nationwide increase in awareness of patient safety issues.

G. Was the control group appropriate? Yes No Unknown
See notes to question C. Controls had no time built in for education and training as the intervention hospitals did, so this time wasn't controlled for, nor was there any known attempt to control the organizational "buzz" that something was going on. The study also didn't account for the variance in how RRTs were implemented across hospitals. The researchers indicate that the existing code teams in control hospitals "did operate as [RRTs] to some extent." Because of these factors, the appropriateness of the control group is questionable.

H. Were the instruments used to measure the outcomes valid and reliable? Yes No Unknown
The primary outcome was the composite of HMR (that is, unexpected deaths, excluding do not resuscitates [DNRs]), CR (that is, no palpable pulse, excluding DNRs), and UICUA (any unscheduled admissions to the ICU).

I. Were the demographics and baseline clinical variables of the subjects in each of the groups similar? Yes No Unknown
The researchers provided a table showing how the RRT and control hospitals compared on several variables. Some variability existed, but there were no statistical differences between groups.

2. What are the results?

A. How large is the intervention or treatment effect?
The researchers reported outcome data in various ways, but the bottom line is that the control group did better than the intervention group. For example, RRT calling criteria were documented more than 15 minutes before an event by more hospitals in the control group than in the intervention group, which is contrary to expectation. Half the HMR cases in the intervention group met the criteria compared with 55% in the control group (not statistically significant). But only 30% of CR cases in the intervention group met the criteria compared with 44% in the control group, which was statistically significant (P = 0.031). Finally, regarding UICUA, 51% in the intervention group compared with 55% in the control group met the criteria (not significant). This indicates that the control hospitals were doing a better job of documenting unstable patients before events occurred than the intervention hospitals.

B. How precise is the intervention or treatment?
The odds ratio (OR) for each of the outcomes was close to 1.0, which indicates that the RRT had no effect in the intervention hospitals compared with the control hospitals. Each confidence interval (CI) also included the number 1.0, which indicates that each OR wasn't statistically significant (HMR OR = 1.03 [0.84-1.28]; CR OR = 0.94 [0.79-1.13]; UICUA OR = 1.04 [0.89-1.21]). From a clinical point of view, the results aren't straightforward. It would have been much simpler had the intervention hospitals and the control hospitals done equally badly; but the fact that the control hospitals did better than the intervention hospitals raises many questions about the results.

3. Will the results help me in caring for my patients?

A. Were all clinically important outcomes measured? Yes No Unknown
It would have been helpful to measure cost, since participating hospitals that initiated an RRT didn't eliminate their code team. If a hospital has two teams, is the cost doubled? And what's the return on investment? There's also no mention of the benefits of the code team. This is a curious question . . . maybe another PICOT question?

B. What are the risks and benefits of the treatment?
This is the wrong question for an RRT. The appropriate question would be: What is the risk of not adequately introducing, monitoring, and evaluating the impact of an RRT?

C. Is the treatment feasible in my clinical setting? Yes No Unknown
We have administrative support, once we know what the evidence tells us. Based on this study, we don't know much more than we did before, except to be careful about how we approach and evaluate the issue. We need to keep the following issues, which the MERIT researchers raised in their discussion, in mind: 1) allow adequate time to measure outcomes; 2) some outcomes may be reliably measured sooner than others; 3) the process of implementing an RRT is very important to its success.

D. What are my patients' and their families' values and expectations for the outcome and the treatment itself?
We will keep this in mind as we consider the body of evidence.

Were the subjects analyzed in the group to which they were randomly assigned? Rebecca sees the term intention-to-treat analysis in the study and says that it sounds like statistical language. Carlos confirms that it is; it means that the researchers kept the hospitals in their assigned groups when they conducted the analysis, a technique intended to reduce possible bias. Even though the MERIT study used this technique, Carlos notes that in the discussion section the authors offer some important caveats about how the study was conducted, including poor intervention implementation, which may have contributed to MERIT's unexpected findings.1

Was the control group appropriate? Carlos explains that it's challenging to establish an appropriate comparison or control group without an understanding of how the intervention will be implemented. In this case, it may be problematic that the intervention group received education and training in implementing the RRT and the control group received no comparable placebo (meaning education and training about something else). But Carlos reminds the team that the researchers attempted to control for known confounding variables by stratifying the sample on characteristics such as academic versus nonacademic hospitals, bed size, and other important parameters. This method helps to ensure equal representation of these parameters in both the intervention and control groups. However, a major concern for clinicians considering whether to use the MERIT findings in their decision making involves the control hospitals' code teams and how they may have functioned as RRTs, which introduces a potential confounder into the study that could possibly invalidate the findings.

Were the instruments used to measure the outcomes valid and reliable? The overall measure in the MERIT study is the composite of the individual outcomes: CR, HMR, and unplanned admissions to the ICU (UICUA). These parameters were defined reasonably and didn't include do not resuscitate (DNR) cases. Carlos explains that since DNR cases are more likely to code or die, including them in the HMR and CR would artificially increase these outcomes and introduce bias into the findings.

As the team moves through the questions in the RCA checklist, Rebecca wonders how she and Chen would manage this kind of appraisal on their own. Carlos assures them that they'll get better at recognizing well-conducted research the more RCAs they do. Though Rebecca feels less than confident, she appreciates his encouragement nonetheless, and chooses to lead the team in discussion of the next question.

Were the demographics and baseline clinical variables of the subjects in each of the groups similar? Rebecca says that the intervention group and the control or comparison group need to be similar at the beginning of any intervention study because any differences in the groups could influence the outcome, potentially increasing the risk that the outcome might be unrelated to the intervention. She refers the team to their earlier discussion about confounding variables. Carlos tells Rebecca that her explanation was excellent. Chen remarks that Rebecca's focus on learning appears to be paying off.

WHAT ARE THE RESULTS?

As the team moves on to the second major question, Carlos tells them that many clinicians are apprehensive about interpreting statistics. He says that he didn't take courses in graduate school on conducting statistical analysis; rather, he learned about different statistical tests in courses that required students to look up how to interpret a statistic whenever they encountered it in the articles they were reading. Thus he had a context for how the statistic was being used and interpreted, what question the statistical analysis was answering, and what kind of data were being analyzed. He also learned to use a search engine, such as Google.com, to find an explanation for any statistical tests with which he was unfamiliar. Because his goal was to understand what the statistic meant clinically, he looked for simple Web sites with that same focus and avoided those with Greek symbols or extensive formulas that were mostly concerned with conducting statistical analysis.

How large is the intervention or treatment effect? As the team goes through the studies in their RCA, they decide to construct a list of statistics terminology for quick reference (see A Sampling of Statistics). The major statistic used in the MERIT study is the odds ratio (OR). The OR is used to provide insight into the measure of association between an intervention and an outcome. In the MERIT study, the control group did better than the intervention group, which is contrary to what was expected. Rebecca notes that the researchers discussed the possible reasons for this finding in the final section of the study. Carlos says that the authors' discussion about why their findings occurred is as important as the findings themselves. In this study, the discussion communicates to any clinicians considering initiating an RRT in their hospital that they should assess whether the current code team is already functioning as an RRT prior to RRT implementation.


A Sampling of Statistics

Odds Ratio (OR)
Simple definition: The odds of an outcome occurring in the intervention group compared with the odds of it occurring in the comparison or control group.
Important parameters: If an OR is equal to 1, then the intervention didn't make a difference. Interpretation depends on the outcome: if the outcome is good (for example, fall prevention), the OR is preferred to be above 1; if the outcome is bad (for example, mortality rate), the OR is preferred to be below 1.
Understanding the statistic: The OR for hospital-wide mortality rates (HMR) in the MERIT study was 1.03 (95% CI, 0.84-1.28). The odds of HMR in the intervention group were about the same as HMR in the comparison group.
Clinical implications: From the HMR OR data alone, a clinician may not feel confident that a rapid response team (RRT) is the best intervention to reduce HMR but may seek out other evidence before making a decision.

Relative Risk (RR)
Simple definition: The risk of an outcome occurring in the intervention group compared with the risk of it occurring in the comparison or control group.
Important parameters: If an RR is equal to 1, then the intervention didn't make a difference. Interpretation depends on the outcome: if the outcome is good (for example, fall prevention), the RR is preferred to be above 1; if the outcome is bad (for example, mortality rate), the RR is preferred to be below 1.
Understanding the statistic: The RR of cardiopulmonary arrest in adults was reported in the Chan PS, et al., 2010 systematic review(a) as 0.66 (95% CI, 0.54-0.80), which is statistically significant because there's no 1.0 in the CI. Thus, the RR of cardiopulmonary arrest occurring in the intervention group compared with the RR of it occurring in the control group is 0.66, or less than 1. Since cardiopulmonary arrest is not a good outcome, this is a desirable finding.
Clinical implications: The RRT significantly reduced the RR of cardiopulmonary arrest in this study. From these data, clinicians can be reasonably confident that initiating an RRT will reduce CR in hospitalized adults.

Confidence Interval (CI)
Simple definition: The range in which clinicians can expect to get results if they present the intervention as it was in the study.
Important parameters: The CI provides the precision of the study finding: a 95% CI indicates that clinicians can be 95% confident that their findings will be within the range given in the study. The CI should be narrow around the study finding, not wide. If a CI contains the number that indicates no effect (for OR it's 1; for effect size it's 0), the study finding is not statistically significant.
Understanding the statistic: See the two previous examples.
Clinical implications: In the Chan PS, et al., 2010 systematic review,(a) the CI is a close range around the study finding and is statistically significant. Clinicians can be 95% confident that if they conduct the same intervention, they'll have a result similar to that of the study (that is, a reduction in risk of cardiopulmonary arrest) within the range of the CI, 0.54-0.80. The narrower the CI range, the more confident clinicians can be that, using the same intervention, their results will be close to the study findings.

Mean (X)
Simple definition: Average.
Important parameters: Caveat: averaging captures only those subjects who surround a central tendency, missing those who may be unique. For example, the mean (average) hair color in a classroom of schoolchildren captures those with the predominant hair color. Children with hair color different from the predominant hair color aren't captured and are considered outliers (those who don't converge around the mean).
Understanding the statistic: In the Dacey MJ, et al., 2007 study,(a) before the RRT the average (mean) CR was 7.6 per 1,000 discharges per month; after the RRT, it decreased to 3 per 1,000 discharges per month.
Clinical implications: Introducing an RRT decreased the average CR by more than 50% (7.6 to 3 per 1,000 discharges per month).

(a) For study details on Chan PS, et al., and Dacey MJ, et al., go to http://links.lww.com/AJN/A11.
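
To supplement the table, the following Python sketch (not part of the original article) shows how an odds ratio, a relative risk, and an approximate 95% confidence interval can be calculated from a hypothetical 2 x 2 table of outcome counts; the counts are invented, and the Wald-type interval shown is one common approximation.

    import math

    # Hypothetical counts: events and non-events in each group.
    int_events, int_no_events = 30, 970   # intervention group
    ctl_events, ctl_no_events = 45, 955   # control group

    # Relative risk (RR): ratio of the event proportions in the two groups.
    rr = (int_events / (int_events + int_no_events)) / \
         (ctl_events / (ctl_events + ctl_no_events))

    # Odds ratio (OR): ratio of the odds of the event in the two groups.
    odds_ratio = (int_events / int_no_events) / (ctl_events / ctl_no_events)

    # Approximate 95% CI for the OR (Wald method on the log scale).
    se_log_or = math.sqrt(1 / int_events + 1 / int_no_events +
                          1 / ctl_events + 1 / ctl_no_events)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"RR = {rr:.2f}, OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
    # If the CI for the OR includes 1.0, the difference between the groups
    # is not statistically significant at the conventional 0.05 level.

Reading a published study works the same way in reverse: the MERIT HMR odds ratio of 1.03 with a 95% CI of 0.84 to 1.28 straddles 1.0, so it is not statistically significant, whereas the Chan et al. relative risk of 0.66 (95% CI, 0.54 to 0.80) excludes 1.0 and is.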



How precise is the intervention or treatment? Chen wants to tackle the precision of the findings and starts with the OR for HMR, CR, and UICUA, each of which has a confidence interval (CI) that includes the number 1.0. In an EBP workshop, she learned that a 1.0 in a CI for an OR means that the results aren't statistically significant, but she isn't sure what statistically significant means. Carlos explains that since the CIs for the OR of each of the three outcomes contain the number 1.0, these results could have been obtained by chance and therefore aren't statistically significant. For clinicians, chance findings aren't reliable findings, so they can't confidently be put into practice. Study findings that aren't statistically significant have a probability value (P value) of greater than 0.05. Statistically significant findings are those that aren't likely to be obtained by chance and have a P value of less than 0.05.

WILL THE RESULTS HELP ME IN CARING FOR MY PATIENTS?

The team is nearly finished with their checklist for RCTs. The third and last major question addresses the applicability of the study—how the findings can be used to help the patients the team cares for. Rebecca observes that it's easy to get caught up in the details of the research methods and findings and to forget about how they apply to real patients.

Were all clinically important outcomes measured? Chen says that she didn't see anything in the study about how much an RRT costs to initiate and how to compare that cost with the cost of one code or ICU admission. Carlos agrees that providing costs would have lent further insight into the results.

What are the risks and benefits of the treatment? Chen wonders how to answer this since the findings seem to be confounded by the fact that the control hospitals had code teams that functioned as RRTs. She wonders if there was any consideration of the risks and benefits of initiating an RRT prior to beginning the study. Carlos says that the study doesn't directly mention it, but the consideration of the risks and benefits of an RRT is most likely what prompted the researchers to conduct the study. It's helpful to remember, he tells the team, that often the answer to these questions is more than just "yes" or "no."

Is the treatment feasible in my clinical setting? Carlos acknowledges that because the nursing administration is open to their project and supports it by providing time for the team to conduct its work, an RRT seems feasible in their clinical setting. The team discusses that nursing can't be the sole discipline involved in the project. They must consider how to include other disciplines as part of their next step (that is, the implementation plan). The team considers the feasibility of getting all disciplines on board and how to address several issues raised by the researchers in the discussion section (see Rapid Critical Appraisal of the MERIT Study), particularly if they find that the body of evidence indicates that an RRT does indeed reduce their chosen outcomes of CR, HMR, and UICUA.

What are my patients' and their families' values and expectations for the outcome and the treatment itself? Carlos asks Rebecca and Chen to discuss with their patients and their patients' families their opinion of an RRT and if they have any objections to the intervention. If there are objections, the patients or families will be asked to reveal them.

The EBP team finally completes the RCA checklists for the 15 studies and finds them all to be "keepers." There are some studies in which the findings are less than reliable; in the case of MERIT, the team decides to include it anyway because it's considered a landmark study. All the studies they've retained have something to add to their understanding of the impact of an RRT on CR, HMR, and UICUA. Carlos says that now that they've determined the 15 studies to be somewhat valid and reliable, they can add the rest of the data to the evaluation table.

Be sure to join the EBP team for "Critical Appraisal of the Evidence: Part III" in the next installment in the series, when Rebecca, Chen, and Carlos complete their synthesis of the 15 studies and determine what the body of evidence says about implementing an RRT in an acute care setting. ▼

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, ellen.fineout-overholt@asu.edu.

REFERENCES

1. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.

2. Wojdyla D. Cluster randomized trials and equivalence trials [PowerPoint presentation]. Geneva, Switzerland: Geneva Foundation for Medical Education and Research; 2005. http://www.gfmer.ch/PGC_RH_2005/pdf/Cluster_Randomized_Trials

Critical Appraisal of the Evidence: Part I
An introduction to gathering, evaluating, and recording the evidence.

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

This is the fifth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved.

The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with September's Evidence-Based Practice, Step by Step.

In May's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, and Carlos A., her hospital's expert EBP mentor, learned how to search for the evidence to answer their clinical question (shown here in PICOT format): "In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?" With the help of Lynne Z., the hospital librarian, Rebecca and Carlos searched three databases: PubMed, the Cumulative Index of Nursing and Allied Health Literature (CINAHL), and the Cochrane Database of Systematic Reviews.

They used keywords from their clinical question, including ICU, rapid response team, cardiac arrest, and unplanned ICU admissions, as well as the following synonyms: failure to rescue, never events, medical emergency teams, rapid response systems, and code blue. Whenever terms from a database's own indexing language, or controlled vocabulary, matched the keywords or synonyms, those terms were also searched. At the end of the database searches, Rebecca and Carlos chose to retain 18 of the 18 studies found in PubMed; six of the 79 studies found in CINAHL; and the one study found in the Cochrane Database of Systematic Reviews, because they best answered the clinical question.

As a final step, at Lynne's recommendation, Rebecca and Carlos conducted a hand search of the reference lists of each study they retained looking for any relevant studies they hadn't found in their original search; this process is also called the ancestry method. The hand search yielded one additional study, for a total of 26.
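
For readers who want to picture what such a search looks like, a hypothetical boolean search string combining the team's keywords and synonyms might be entered into a database such as PubMed roughly as follows (this is an illustration only, not the exact strategy the authors ran):

    ("rapid response team" OR "medical emergency team" OR "rapid response systems"
     OR "failure to rescue" OR "never events" OR "code blue")
    AND ("cardiac arrest" OR "unplanned ICU admission" OR ICU)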

RAPID CRITICAL APPRAISAL

The next time Rebecca and Carlos meet, they discuss the next step in the EBP process—critically appraising the 26 studies. They obtain copies of the studies by printing those that are immediately available as full text through library subscription or those flagged as "free full text" by a database or journal's Web site. Others are available through interlibrary loan, when another hospital library shares its articles with Rebecca and Carlos's hospital library.

Carlos explains to Rebecca that the purpose of critical appraisal isn't solely to find the flaws in a study, but to determine its worth to practice. In this rapid critical appraisal (RCA), they will review each study to determine
• its level of evidence.
• how well it was conducted.
• how useful it is to practice.

Once they determine which studies are "keepers," Rebecca and Carlos will move on to the final steps of critical appraisal: evaluation and synthesis (to be discussed in the next two installments of the series). These final steps will determine whether overall findings from the evidence review can help clinicians improve patient outcomes.

Rebecca is a bit apprehensive because it's been a few years since she took a research class. She
shares her anxiety with Chen M., a fellow staff nurse, who says she never studied research in school but would like to learn; she asks if she can join Carlos and Rebecca's EBP team. Chen's spirit of inquiry encourages Rebecca, and they talk about the opportunity to learn that this project affords them. Together they speak with the nurse manager on their medical–surgical unit, who agrees to let them use their allotted continuing education time to work on this project, after they discuss their expectations for the project and how its outcome may benefit the patients, the unit staff, and the hospital.

Learning research terminology. At the first meeting of the new EBP team, Carlos provides Rebecca and Chen with a glossary of terms so they can learn basic research terminology, such as sample, independent variable, and dependent variable. The glossary also defines some of the study designs the team is likely to come across in doing their RCA, such as systematic review, randomized controlled trial, and cohort, qualitative, and descriptive studies. (For the definitions of these terms and others, see the glossaries provided by the Center for the Advancement of Evidence-Based Practice at the Arizona State University College of Nursing and Health Innovation [http://nursingandhealth.asu.edu/evidence-based-practice/resources/glossary.htm] and the Boston University Medical Center Alumni Medical Library [http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cfm#R].)

Determining the level of evidence. The team begins to divide the 26 studies into categories according to study design. To help in this, Carlos provides a list of several different study designs (see Hierarchy of Evidence for Intervention Studies). Rebecca, Carlos, and Chen work together to determine each study's design by reviewing its abstract. They also create an "I don't know" pile of studies that don't appear to fit a specific design.

Hierarchy of Evidence for Intervention Studies

Level I (Systematic review or meta-analysis): A synthesis of evidence from all relevant randomized controlled trials.

Level II (Randomized controlled trial): An experiment in which subjects are randomized to a treatment group or control group.

Level III (Controlled trial without randomization): An experiment in which subjects are nonrandomly assigned to a treatment group or control group.

Level IV (Case-control or cohort study): Case-control study: a comparison of subjects with a condition (case) with those who don't have the condition (control) to determine characteristics that might predict the condition. Cohort study: an observation of a group(s) (cohort[s]) to determine the development of an outcome(s) such as a disease.

Level V (Systematic review of qualitative or descriptive studies): A synthesis of evidence from qualitative or descriptive studies to answer a clinical question.

Level VI (Qualitative or descriptive study): Qualitative study: gathers data on human behavior to understand why and how decisions are made. Descriptive study: provides background information on the what, where, and when of a topic of interest.

Level VII (Expert opinion or consensus): Authoritative opinion of expert committee.

Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins.



When they find studies that don't actively answer the clinical question but may inform thinking, such as descriptive research, expert opinions, or guidelines, they put them aside. Carlos explains that they'll be used later to support Rebecca's case for having a rapid response team (RRT) in her hospital, should the evidence point in that direction.

After the studies—including those in the "I don't know" group—are categorized, 15 of the original 26 remain and will be included in the RCA: three systematic reviews that include one meta-analysis (Level I evidence), one randomized controlled trial (Level II evidence), two cohort studies (Level IV evidence), one retrospective pre-post study with historic controls (Level VI evidence), four preexperimental (pre-post) intervention studies with no control group (Level VI evidence), and four EBP implementation projects (Level VI evidence). Carlos reminds Rebecca and Chen that Level I evidence—a systematic review of randomized controlled trials or a meta-analysis—is the most reliable and the best evidence to answer their clinical question.

Using a critical appraisal guide. Carlos recommends that the team use a critical appraisal checklist (see Critical Appraisal Guide for Quantitative Studies) to help evaluate the 15 studies. This checklist is relevant to all studies and contains questions about the essential elements of research (such as purpose of the study, sample size, and major variables).

The questions in the critical appraisal guide seem a little strange to Rebecca and Chen. As they review the guide together, Carlos explains and clarifies each question. He suggests that as they try to figure out which are the essential elements of the studies, they focus on answering the first three questions: Why was the study done? What is the sample size? Are the instruments of the major variables valid and reliable? The remaining questions will be addressed later on in the critical appraisal process (to appear in future installments of this series).

Creating a study evaluation table. Carlos provides an online template for a table where Rebecca and Chen can put all the data they'll need for the RCA. Here they'll record each study's essential elements that answer the three questions and begin to appraise the 15 studies. (To use this template to create your own evaluation table, download the Evaluation Table Template at http://links.lww.com/AJN/A10.)
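
Teams that prefer to keep the evaluation table as a spreadsheet rather than a word-processing document could reproduce the same columns programmatically; the short Python sketch below is an illustration only (the file name and the partially filled MERIT row are made up from the Table 1 entries), not part of the AJN template.

    import csv

    # Column headings matching the evaluation table template.
    columns = ["First Author (Year)", "Conceptual Framework", "Design/Method",
               "Sample/Setting", "Major Variables Studied (and Their Definitions)",
               "Measurement", "Data Analysis", "Findings",
               "Appraisal: Worth to Practice"]

    # One illustrative row summarizing the MERIT trial entry from Table 1;
    # the later columns stay empty until the next appraisal steps.
    merit_row = ["Hillman K, et al. (2005)", "None",
                 "RCT; purpose: effect of RRT on CR, HMR, and UICUA",
                 "N = 23 hospitals; Australia",
                 "IV: RRT; DV1: HMR; DV2: CR; DV3: UICUA",
                 "", "", "", ""]

    with open("evaluation_table.csv", "w", newline="") as f:  # hypothetical file name
        writer = csv.writer(f)
        writer.writerow(columns)
        writer.writerow(merit_row)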

EXTRACTING THE DATA

Starting with Level I evidence studies and moving down the hierarchy list, the EBP team takes each study and, one by one, finds and enters its essential elements into the first five columns of the evaluation table (see Table 1; to see the entire table with all 15 studies, go to http://links.lww.com/AJN/A11). The team discusses each element as they enter it, and tries to determine if it meets the criteria of the critical
Critical Appraisal Guide for Quantitative Studies

1. Why was the study done?
• Was there a clear explanation of the purpose of the study and, if so, what was it?
2. What is the sample size?
• Were there enough people in the study to establish that the findings did not occur by chance?
3. Are the instruments of the major variables valid and reliable?
• How were variables defined? Were the instruments designed to measure a concept valid (did they measure what the researchers said they measured)? Were they reliable (did they measure a concept the same way every time they were used)?
4. How were the data analyzed?
• What statistics were used to determine if the purpose of the study was achieved?
5. Were there any untoward events during the study?
• Did people leave the study and, if so, was there something special about them?
6. How do the results fit with previous research in the area?
• Did the researchers base their work on a thorough literature review?
7. What does this research mean for clinical practice?
• Is the study purpose an important clinical issue?

Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins.



Table 1. Evaluation Table, Phase I

First Author (Year): Chan PS, et al. Arch Intern Med 2010;170(1):18-26.
Conceptual Framework: None
Design/Method: SR. Purpose: effect of RRT on HMR and CR. Searched 5 databases from 1950-2008, and "grey literature" from MD conferences. Included only studies with a control group.
Sample/Setting: N = 18 studies. Setting: acute care hospitals; 13 adult, 5 peds. Average no. beds: NR. Attrition: NR.
Major Variables Studied (and Their Definitions): IV: RRT. DV1: HMR. DV2: CR.

First Author (Year): McGaughey J, et al. Cochrane Database Syst Rev 2007;3:CD005529.
Conceptual Framework: None
Design/Method: SR (Cochrane review). Purpose: effect of RRT on HMR. Searched 6 databases from 1990-2006. Excluded all but 2 RCTs.
Sample/Setting: N = 2 studies. 24 adult hospitals. Attrition: NR.
Major Variables Studied (and Their Definitions): IV: RRT. DV1: HMR.

First Author (Year): Winters BD, et al. Crit Care Med 2007;35(5):1238-43.
Conceptual Framework: None
Design/Method: SR. Purpose: effect of RRT on HMR and CR. Searched 3 databases from 1990-2005. Included only studies with a control group.
Sample/Setting: N = 8 studies. Average no. beds: 500. Attrition: NR.
Major Variables Studied (and Their Definitions): IV: RRT. DV1: HMR. DV2: CR.

First Author (Year): Hillman K, et al. Lancet 2005;365(9477):2091-7.
Conceptual Framework: None
Design/Method: RCT. Purpose: effect of RRT on CR, HMR, and UICUA.
Sample/Setting: N = 23 hospitals. Average no. beds: 340. Intervention group (n = 12); control group (n = 11). Setting: Australia. Attrition: none.
Major Variables Studied (and Their Definitions): IV: RRT protocol for 6 months (1 AP, 1 ICU or ED RN). DV1: HMR (unexpected deaths, excluding DNRs). DV2: CR (excluding DNRs). DV3: UICUA.
Measurement: HMR, CR, rates of UICUA. Note: criteria for activating RRT.

The shaded columns of the original table (Measurement, Data Analysis, Findings, and Appraisal: Worth to Practice) indicate where data will be entered in future installments of the series.

AP = attending physician; CR = cardiopulmonary arrest or code rates; DNR = do not resuscitate; DV = dependent variable; ED = emergency department; HMR = hospital-wide mortality rates; ICU = intensive care unit; IV = independent variable; MD = medical doctor; NR = not reported; Peds = pediatric; RCT = randomized controlled trial; RN = registered nurse; RRT = rapid response team; SR = systematic review; UICUA = unplanned ICU admissions.


suggests they leave the column in.
He says they can further discuss
this point later on in the process
when they synthesize the studies’
findings. As Rebecca and Chen
review each study, they enter its
citation in a separate reference list
so that they won’t have to create

this list at the end of the process.
The reference list will be shared
with colleagues and placed at the
end of any RRT policy that re-
sults from this endeavor.

Carlos spends much of his
time answering Rebecca’s and
Chen’s questions concerning how
to phrase the information they’re
entering in the table. He suggests
that they keep it simple and con-
sistent. For example, if a study
indicated that it was implement-
ing an RRT and hoped to see a
change in a certain outcome, the
nurses could enter “change in
[the outcome] after RRT” as the
purpose of the study. For studies
examining the effect of an RRT
on an outcome, they could say as
the purpose, “effect of RRT on
[the outcome].” Using the same
words to describe the same pur-
pose, even though it may not have
been stated exactly that way in
the study, can help when they
compare studies later on.

Rebecca and Chen find it frus-
trating that the study data are
not always presented in the same
way from study to study. They
ask Carlos why the authors or
journals wouldn’t present similar
information in a similar manner.
Carlos explains that the purpose
of publishing these studies may
have been to disseminate the

findings, not to compare them
with other like studies. Rebecca
realizes that she enjoys this kind
of conversation, in which she
and Chen have a voice and can
contribute to a deeper under-
standing of how research impacts
practice.

As Rebecca and Chen con-
tinue to enter data into the table,
they begin to see similarities and
differences across studies. They
mention this to Carlos, who tells
them they’ve begun the process
of synthesis! Both nurses are en-
couraged by the fact that they’re
learning this new skill.

The MERIT trial is next in the
stack of studies and it’s a good
trial to use to illustrate this phase
of the RCA process. Set in Aus-
tralia, the MERIT trial1 examined
whether the introduction of an
RRT (called a medical emergency
team or MET in the study) would
reduce the incidence of cardiac
arrest, unplanned admissions to
the ICU, and death in the hospi-
tals studied. See Table 1 to follow
along as the EBP team finds and
enters the trial data into the table.

Design/Method. After Rebecca
and Chen enter the citation infor-
mation and note the lack of a con-
ceptual framework, they’re ready
to fill in the “Design/Method”
column. First they enter RCT
for randomized controlled trial,
which they find in both the study
title and introduction. But MERIT
is called a “cluster- randomised
controlled trial,” and cluster is a
term they haven’t seen before.
Carlos explains that it means that
hospitals, not individuals or pa-
tients, were randomly assigned to
the RRT. He says that the likely
reason the researchers chose to
randomly assign hospitals is that
if they had randomly assigned
individual patients or units, oth-
ers in the hospital might have
heard about the RRT and poten-
tially influenced the outcome.

appraisal guide. These elements—
such as purpose of the study, sam-
ple size, and major variables—are
typical parts of a research report
and should be presented in a pre-
dictable fashion in every study
so that the reader understands
what’s being reported.

As the EBP team continues to
review the studies and fill in the
evaluation table, they realize that
it’s taking about 10 to 15 minutes
per study to locate and enter the
information. This may be because
when they look for a description
of the sample, for example, it’s
important that they note how the
sample was obtained, how many
patients are included, other char-
acteristics of the sample, as well
as any diagnoses or illnesses the
sample might have that could be
important to the study outcome.
They discuss with Carlos the like-
lihood that they’ll need a few ses-
sions to enter all the data into the
table. Carlos responds that the
more studies they do, the less
time it will take. He also says
that it takes less time to find the
information when study reports
are clearly written. He adds that
usually the important information
can be found in the abstract.

Rebecca and Chen ask if it
would be all right to take out
the “Conceptual Framework”
column, since none of the stud-
ies they’re reviewing have con-
ceptual frameworks (which help
guide researchers as to how a
study should proceed). Carlos
replies that it’s helpful to know
that a study has no framework
underpinning the research and



To randomly assign hospitals
(instead of units or patients) to
the intervention and comparison
groups is a cleaner research de-
sign.

To keep the study purposes
consistent among the studies in
the RCA, the EBP team uses inclu-
sive terminology they developed
after they noticed that different
trials had different ways of de-
scribing the same objectives. Now
they write that the purpose of the
MERIT trial is to see if an RRT
can reduce CR, for cardiopulmo-
nary arrest or code rates, HMR,
for hospital-wide mortality rates,
and UICUA for unplanned ICU
admissions. They use those same
terms consistently throughout the
evaluation table.

Sample/Setting. A total of 23
hospitals in Australia with an
average of 340 beds per hospi-
tal is the study sample. Twelve
hospitals had an RRT (the inter-
vention group) and 11 hospitals
didn’t (the control group).

Major Variables Studied. The
independent variable is the vari-
able that influences the outcome
(in this trial, it’s an RRT for six
months). The dependent vari-
able is the outcome (in this case,
HMR, CR, and UICUA). In this
trial, the outcomes didn’t include
do-not-resuscitate data. The RRT
was made up of an attending phy-
sician and an ICU or ED nurse.

While the MERIT trial seems
to perfectly answer Rebecca’s
PICOT question, it contains ele-
ments that aren’t entirely relevant,
such as the fact that the research-
ers collected information on how

the RRTs were activated and pro-
vided their protocol for calling the
RRTs. However, these elements
might be helpful to the EBP team
later on when they make decisions

about implementing an RRT in
their hospital. So that they can
come back to this information,
they place it in the last column,
“Appraisal: Worth to Practice.”

After reviewing the studies to
make sure they’ve captured the
essential elements in the evalua-
tion table, Rebecca and Chen still
feel unsure about whether the in-
formation is complete. Carlos
reminds them that a system-wide
practice change—such as the
change Rebecca is exploring, that
of implementing an RRT in her
hospital—requires careful consid-
eration of the evidence and this is
only the first step. He cautions
them not to worry too much
about perfection and to put their
efforts into understanding the
information in the studies. He re-
minds them that as they move on
to the next steps in the critical
appraisal process, and learn even
more about the studies and proj-
ects, they can refine any data in
the table. Rebecca and Chen feel
uncomfortable with this uncer-
tainty but decide to trust the pro-
cess. They continue extracting
data and entering it into the table
even though they may not com-
pletely understand what they’re
entering at present. They both
realize that this will be a learn-
ing opportunity and, though the
learning curve may be steep at
times, they value the outcome of
improving patient care enough to

continue the work—as long as
Carlos is there to help.

In applying these principles
for evaluating research studies
to your own search for the evi-
dence to answer your PICOT
question, remember that this se-
ries can’t contain all the available
information about research meth-
odology. Fortunately, there are
many good resources available in
books and online. For example,
to find out more about sample
size, which can affect the likeli-
hood that researchers’ results oc-
cur by chance (a random finding)
rather than that the intervention
brought about the expected out-
come, search the Web using terms
that describe what you want to
know. If you type sample size
findings by chance in a search en-
gine, you’ll find several Web sites
that can help you better under-
stand this study essential.
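As a rough, hypothetical illustration of why sample size matters (the numbers below are invented for illustration and are not taken from any of the RRT studies): if an event rate p is estimated from n patients, its standard error and approximate 95% margin of error are

    SE = \sqrt{p(1-p)/n},    margin of error ≈ 1.96 × SE

so an event rate of 10% estimated from 50 patients carries a margin of error of roughly ±8 percentage points, whereas the same rate estimated from 5,000 patients carries only about ±0.8 percentage points, making a chance finding much less likely to be mistaken for a real effect.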

Be sure to join the EBP team
in the next installment of the se-
ries, “Critical Appraisal of the
Evidence: Part II," when Rebecca
and Chen will use the MERIT
trial to illustrate the next steps
in the RCA process, complete
the rest of the evaluation table,
and dig a little deeper into the
studies in order to detect the
“keepers.” ▼

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, ellen.fineout-overholt@asu.edu.

REFERENCE
1. Hillman K, et al. Introduction of

the medical emergency team (MET)
system: a cluster-randomised con-
trolled trial. Lancet 2005;365(9477):
2091-7.

Keep the data in the table consistent by using

simple, inclusive terminology.

Critical Appraisal of the Evidence: Part III

The process of synthesis: seeing similarities and differences across the body of evidence.

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

In September’s evidence- based practice (EBP) article, Rebecca R., our hypotheti cal
staff nurse, Carlos A., her hospi-
tal’s expert EBP mentor, and Chen
M., Rebecca’s nurse colleague, ra-
pidly critically appraised the 15
articles they found to answer their
clinical question—“In hospital-
ized adults (P), how does a rapid
response team (I) compared with
no rapid response team (C) affect
the number of cardiac arrests (O)
and unplanned admissions to the
ICU (O) during a three-month
period (T)?”—and determined
that they were all “keepers.” The
team now begins the process of
evaluation and synthesis of the
articles to see what the evidence
says about initiating a rapid re-
sponse team (RRT) in their hos-
pital. Carlos reminds them that
evaluation and synthesis are syn-
ergistic processes and don’t neces-
sarily happen one after the other.
Nevertheless, to help them learn,
he will guide them through the
EBP process one step at a time.

STARTING THE EVALUATION
Rebecca, Carlos, and Chen begin
to work with the evaluation table

they created earlier in this process
when they found and filled in the
essential elements of the 15 stud-
ies and projects (see "Critical Ap-
praisal of the Evidence: Part I,"
July). Now each takes a stack of
the “keeper” studies and system-
atically begins adding to the table
any remaining data that best re-
flect the study elements pertain-
ing to the group’s clinical question
(see Table 1; for the entire table
with all 15 articles, go to http://
links.lww.com/AJN/A17). They
had agreed that a “Notes” sec-
tion within the “Appraisal: Worth
to Practice” column would be a
good place to record the nuances

of an article, their impressions
of it, as well as any tips—such as
what worked in calling an RRT—
that could be used later when
they write up their ideas for ini-
tiating an RRT at their hospital, if
the evidence points in that direc-
tion. Chen remarks that although
she thought their initial table con-
tained a lot of information, this
final version is more thorough by
far. She appreciates the opportu-
nity to go back and confirm her
original understanding of the
study essentials.

The team members discuss the
evolving patterns as they complete
the table. The three systematic


This is the seventh article in a series from the Arizona State University College of Nursing and Health Innovation’s
Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach
to the delivery of health care that integrates the best evidence from studies and patient care data with clinician exper-
tise and patient preferences and values. When delivered in a context of caring and in a supportive organizational
culture, the highest quality of care and best patient outcomes can be achieved.

The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one
step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward
implementing EBP at your institution. Also, we’ve scheduled “Chat with the Authors” calls every few months to provide
a direct line to the experts to help you resolve questions. See details below.




Table 1. Final Evaluation Table

(Columns: First Author (Year); Conceptual Framework; Design/Method; Sample/Setting; Major Variables Studied (and Their Definitions); Measurement; Data Analysis; Findings; Appraisal: Worth to Practice.)

Chan PS, et al. Arch Intern Med 2010;170(1):18-26.
Conceptual framework: None
Design/Method: SR. Purpose: effect of RRT on HMR and CR. Searched 5 databases from 1950-2008 and "grey literature" from MD conferences. Included only 1) RCTs and prospective studies with 2) a control group or control period and 3) hospital mortality well described as outcome. Excluded 5 studies that met criteria due to no response to e-mail by primary authors.
Sample/Setting: N = 18 out of 143 potential studies. Setting: acute care hospitals; 13 adult, 5 peds. Average no. beds: NR. Attrition: NR.
Major variables studied: IV: RRT. DV1: HMR (including DNR, excluding DNR, not treated in ICU, no HMR definition). DV2: CR. RRT: was the MD involved? HMR: overall hospital deaths (see definition). CR: cardio and/or pulmonary arrest; cardiac arrest calls.
Measurement: Frequency. Relative risk.
Data analysis and findings: 13/16 studies reporting team structure; 7/11 adult and 4/5 peds studies had significant reduction in CR. CR: In adults, 21%-48% reduction in CR; RR 0.66 (95% CI, 0.54-0.80). In peds, 38% reduction in CR; RR 0.62 (95% CI, 0.46-0.84). HMR: In adults, HMR RR 0.96 (95% CI, 0.84-1.09). In peds, HMR RR 0.79 (95% CI, 0.63-0.98).
Appraisal: Worth to practice: Weaknesses: Potential missed evidence with exclusion of all studies except those with control groups. Grey literature search limited to medical meetings. Only included HMR and CR outcomes. No cost data. Strengths: Identified no. of activations of RRT/1,000 admissions. Identified variance in outcome definition and measurement (for example, 10 of 15 studies included deaths from DNRs in their mortality measurement). Conclusion: RRT reduces CR in adults, and CR and HMR in peds. Feasibility: RRT is reasonable to implement; evaluating cost will help in making decisions about using RRT. Risk/Benefit (harm): benefits outweigh risks.

McGaughey J, et al. Cochrane Database Syst Rev 2007;3:CD005529.
Conceptual framework: None
Design/Method: SR (Cochrane review). Purpose: effect of RRT on HMR. Searched 6 databases from 1990-2006. Excluded all but 2 RCTs.
Sample/Setting: N = 2 studies. Acute care settings in Australia and the UK. Attrition: NR.
Major variables studied: IV: RRT. DV1: HMR. HMR: Australia: overall hospital mortality without DNR. UK: Simplified Acute Physiology Score (SAPS) II death probability estimate.
Measurement: OR.
Data analysis and findings: OR of Australian study, 0.98 (95% CI, 0.83-1.16). OR of UK study, 0.52 (95% CI, 0.32-0.85).
Appraisal: Worth to practice: Weaknesses: Didn't include full body of evidence. Conflicting results of retained studies, but no discussion of the impact of lower-level evidence. Recommendation: "need more research." Conclusion: Inconclusive.

Winters BD, et al. Crit Care Med 2007;35(5):1238-43.
Conceptual framework: None
Design/Method: SR. Purpose: effect of RRT on HMR and CR. Searched 3 databases from 1990-2005. Included only studies with a control group.
Sample/Setting: N = 8 studies. Average no. beds: 500. Attrition: NR.
Major variables studied: IV: RRT. DV1: HMR. DV2: CR. HMR: overall death rate. CR: no. of in-hospital arrests.
Measurement: Risk ratio.
Data analysis and findings: HMR: Observational studies, risk ratio for RRT on HMR, 0.87 (95% CI, 0.73-1.04); cluster RCTs, risk ratio for RRT on HMR, 0.76 (95% CI, 0.39-1.48). CR: Observational studies, risk ratio for RRT on CR, 0.70 (95% CI, 0.56-0.92); cluster RCTs, risk ratio for RRT on CR, 0.94 (95% CI, 0.79-1.13).
Appraisal: Worth to practice: Strengths: Provides comparison across studies for study lengths (range, 4-82 months), sample size (range, 2,183-199,024), and criteria for RRT initiation (common: respiratory rate, heart rate, blood pressure, mental status change; not all studies, but noteworthy: oxygen saturation, "worry"). Includes ideas about future evidence generation (conducting research): finding out what we don't know. Conclusion: Some support for RRT, but not reliable enough to recommend as standard of care.

CI = confidence interval; CR = cardiopulmonary arrest or code rates; DNR = do not resuscitate; DV = dependent variable; HMR = hospital-wide mortality rates; ICU = intensive care unit; IV = independent variable; MD = medical doctor; NR = not reported; OR = odds ratio; Peds = pediatrics; RCT = randomized controlled trial; RR = relative risk; RRT = rapid response team; SR = systematic review; UK = United Kingdom.

as well as a good number of jour-
nals have encouraged their use.
When they review the actual
guidelines, the team notices that
they seem to be focused on re-
search; for example, they require
a research question and refer to

the study of an intervention,
whereas EBP projects have PICOT
questions and apply evidence to
practice. The team discusses that
these guidelines can be confusing
to the clinicians authoring the re-
ports on their projects. In addition,
they note that there's no mention
of the synthesis of the body of
evidence that should drive an
evidence-based project. While the
SQUIRE Guidelines are a step in
the right direction for the future,
Carlos, Rebecca, and Chen con-
clude that, for now, they’ll need
to learn to read these studies as
they find them—looking care-
fully for the details that inform
their clinical question.

Once the data have been en-
tered into the table, Carlos sug-
gests that they take each column,
one by one, and note the similari-
ties and differences across the
studies and projects. After they’ve
briefly looked over the columns,
he asks the team which ones they
think they should focus on to an-
swer their question. Rebecca and
Chen choose "Design/Method,"
"Sample/Setting," "Findings," and
"Appraisal: Worth to Practice"
(see Table 1) as the initial ones
to consider. Carlos agrees that
these are the columns in which
they're most likely to find the
most pertinent information for
their synthesis.

Chen in their efforts to appraise
the MERIT study and comments
on how well they’re putting the
pieces of the evidence puzzle to-
gether. The nurses are excited
that they’re able to use their new
knowledge to shed light on the

study. They discuss with Carlos
how the interpretation of the
MERIT study has perhaps con-
tributed to a misunderstanding
of the impact of RRTs.

Comparing the evidence. As
the team enters the lower-level evi-
dence into the evaluation table,
they note that it’s challenging to
compare the project reports with
studies that have clearly described
methodology, measurement, anal –
ysis, and findings. Chen remarks
that she wishes researchers and
clinicians would write study and
project reports similarly. Although
each of the studies has a process
or method determining how it was
conducted, as well as how out-
comes were measured, data were
analyzed, and results interpreted,
comparing the studies as they’re
currently written adds another
layer of complexity to the eval-
uation. Carlos says that while it
would be great to have studies
and projects written in a similar for-
mat so they’re easier to compare,
that’s unlikely to happen. But he
tells the team not to lose all hope,
as a format has been developed
for reporting quality improve-
ment initiatives called the SQUIRE
Guidelines; however, they aren’t
ideal. The team looks up the guide-
lines online (www.squire-statement.
org) and finds that the Institute
for Healthcare Improvement (IHI)

reviews, which are higher-level
evidence, seem to have an inher-
ent bias in that they included only
studies with control groups. In
general, these studies weren’t in
favor of initiating an RRT. Carlos
asks Rebecca and Chen whether,

now that they’ve appraised all the
evidence about RRTs, they’re con –
fident in their decision to include
all the studies and projects (in –
cluding the lower-level evidence)
among the “keepers.” The nurses
reply with an emphatic affirma-
tive! They tell Carlos that the proj-
ects and descriptive studies were
what brought the issue to life for
them. They realize that the higher-
level evidence is somewhat in
conflict with the lower-level evi-
dence, but they’re most interested
in the conclusions that can be
drawn from considering the entire
body of evidence.

Rebecca and Chen admit they
have issues with the systematic
reviews, all of which include the
MERIT study.1-4 In particular, they
discuss how the authors of the
systematic reviews made sure to
report the MERIT study’s finding
that the RRT had no effect, but
didn’t emphasize the MERIT study
authors’ discussion about how
their study methods may have
influenced the reliability of the
findings (for more, see “Critical
Appraisal of the Evidence: Part
II," September). Carlos says that
this is an excellent observation.
He also reminds the team that
clinicians may read a systematic
review for the conclusion and
never consider the original stud-
ies. He encourages Rebecca and

It’s not the number of studies or projects that determines

the reliability of their findings, but the uniformity and

quality of their methods.


SYNTHESIZING: MAKING DECISIONS
BASED ON THE EVIDENCE
Design/Method. The team starts
with the “Design/Method” column
because Carlos reminds them that
it’s important to note each study’s
level of evidence. He suggests
that they take this information
and create a synthesis table (one
in which data is extracted from
the evaluation table to better see
the similarities and differences
between studies) (see Table 2).
The synthesis table makes it clear
that there is less higher-level and
more lower-level evidence, which
will impact the reliability of the
overall findings. As the team noted,
the higher-level evidence is not
without methodological issues,
which will increase the challenge
of coming to a conclusion about

the impact of an RRT on the out –
comes.

Sample/Setting. In reviewing
the “Sample/Setting” column, the
group notes that the number of
hospital beds ranged from 218
to 662 across the studies. There
were several types of hospitals
represented (4 teaching, 4 com-
munity, 4 no mention, 2 acute
care hospitals, and 1 public hos-
pital). The evidence they’ve col-
lected seems applicable, since
their hospital is a community
hospital.

Findings. To help the team
better discuss the evidence, Car-
los suggests that they refer to all
projects or studies as "the body
of evidence." They don't want to
get confused by calling them all
studies, as they aren't, but at the
same time continually referring
to "studies and projects" is cum-
bersome. He goes on to say that,
as part of the synthesis process,
it's important for the group to
determine the overall impact of
the intervention across the body
of evidence. He helps them create
a second synthesis table contain-
ing the findings of each study or
project (see Table 3). As they
look over the results, Rebecca
and Chen note that RRTs reduce
code rates, particularly outside
the ICU, whereas unplanned
ICU admissions (UICUA) don't
seem to be as affected by them.
However, 10 of the 15 studies
and projects reviewed didn't
evaluate this outcome, so it
may not be fair to write it off
just yet.

Table 2: The 15 Studies: Levels and Types of Evidence

Level I (systematic review or meta-analysis): studies 1, 2, 3
Level II (randomized controlled trial): study 4
Level III (controlled trial without randomization): none
Level IV (case-control or cohort study): studies 5, 6
Level V (systematic review of qualitative or descriptive studies): none
Level VI (qualitative or descriptive study; includes evidence implementation projects): studies 7-15
Level VII (expert opinion or consensus): none

Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice.
2nd ed. Philadelphia: Wolters Kluwer Health / Lippincott Williams and Wilkins; 2010.

1 = Chan PS, et al. (2010); 2 = McGaughey J, et al.; 3 = Winters BD, et al.; 4 = Hillman K, et al.; 5 = Sharek PJ, et al.; 6 = Chan PS, et al.
(2009); 7 = DeVita MA, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 10 = McFarlan SJ, Hensley S.; 11 = Offner PJ, et al.; 12 = Bertaut Y,
et al.; 13 = Benson L, et al.; 14 = Hatler C, et al.; 15 = Bader MK, et al.

having level-VI evidence, a study
and a project, had statistically
significant (less likely to occur by
chance, P < 0.05) reductions in HMR, which increases the reliability of the results.

Chen asks, since four level-VI
reports documented that an RRT
reduces HMR, should they put
more confidence in findings that
occur more than once? Carlos re-
plies that it’s not the number of
studies or projects that determines
the reliability of their findings, but
the uniformity and quality of their
methods. He recites something he
heard in his Expert EBP Mentor
program that helped to clarify
the concept of making decisions
based on the evidence: the level
of the evidence (the design) plus
the quality of the evidence (the
validity of the methods) equals the
strength of the evidence, which is

what leads clinicians to act in con-
fidence and apply the evidence (or
not) to their practice and expect
similar findings (outcomes). In
terms of making a decision about
whether or not to initiate an RRT,
Carlos says that their evidence
stacks up: first, the MERIT study’s
results are questionable because
of problems with the study meth-
ods, and this affects the reliability
of the three systematic reviews as
well as the MERIT study itself;
second, the reasonably conducted
lower-level studies/projects, with
their statistically significant find-
ings, are persuasive. Therefore,
the team begins to consider the
possibility that initiating an RRT
may reduce code rates outside the
ICU (CRO) and may impact non-
ICU mortality; both are outcomes
they would like to address. The
evidence doesn’t provide equally

The EBP team can tell from
reading the evidence that research –
ers consider the impact of an RRT
on hospital-wide mortality rates
(HMR) as the more important
outcome; however, the group re –
mains unconvinced that this out-
come is the best for evaluating
the purpose of an RRT, which,
according to the IHI, is early in –
tervention in patients who are
unstable or at risk for cardiac or
respiratory arrest.16 That said, of
the 11 studies and projects that
evaluated mortality, more than
half found that an RRT reduced it.
Carlos reminds the group that
four of those six articles are level-VI
evidence and that some weren’t
research. The findings produced
at this level of evidence are typi-
cally less reliable than those at
higher levels of evidence; how-
ever, Carlos notes that two articles


Table 3: Effect of the Rapid Response Team on Outcomes

1a 2a 3a 4a 5a 6a 7 8 9 10 11 12 13 14 15

HMR
adult
b

peds

b NE c b NR NE c NE b, d

CRO NE NE NE NE c b NE NE b c b c NE c c

CR b

peds
and
adult

NE b NE b c NE NE NE NE b NE NE

UICUA NE NE NE NE NE NE NE b c NE NE NE b

1 = Chan PS, et al. (2010); 2 = McGaughey J, et al.; 3 = Winters BD, et al.; 4 = Hillman K, et al.; 5 = Sharek PJ, et al.;
6 = Chan PS, et al. (2009); 7 = DeVita MA, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 10 = McFarlan SJ, Hensley S.;
11 = Offner PJ, et al.; 12 = Bertaut Y, et al.; 13 = Benson L, et al.; 14 = Hatler C, et al.; 15 = Bader MK, et al.

CR = cardiopulmonary arrest or code rates; CRO = code rates outside the ICU; HMR = hospital-wide mortality rates;
NE = not evaluated; NR = not reported; UICUA = unplanned ICU admissions
a higher-level evidence; b statistically significant findings; c statistical significance not reported; d non-ICU mortality was
reduced


the important outcomes to mea-
sure are: CRO, non-ICU mortality
(excluding patients with do not
resuscitate [DNR] orders), UICUA,
and cost.

Appraisal: Worth to Practice.
As the team discusses their syn-
thesis and the decision they’ll
make based on the evidence,

data in the “Findings” column
that shows a financial return on
investment for an RRT.9 Carlos
remarks to the group that this is
only one study, and that they’ll
need to make sure to collect data
on the costs of their RRT as well
as the cost implications of the
outcomes. They determine that

promising results for UICUA, but
the team agrees to include it in
the outcomes for their RRT proj-
ect because it wasn't evaluated
in most of the articles they ap-
praised.

As the EBP team continues
to discuss probable outcomes,
Rebecca points to one study's

Table 4. Defined Criteria for Initiating an RRT Consult

4 8 9 13 15

Respiratory distress
(breaths/min)

Airway threatened
Respiratory arrest
RR < 5 or > 36

RR < 10 or > 30

RR < 8 or > 30

Unexplained dys-
pnea

RR < 8 or > 28

New-onset difficulty
breathing

RR < 10 or > 30

Shortness of breath

Change in mental
status

Change in LOC
Decrease in Glasgow
Coma Scale of
> 2 points

ND Unexplained change Sudden decrease
in LOC with normal
blood glucose

Decreased LOC

Tachycardia (beats/
min)

>140 > 130 Unexplained > 130
for 15 min

> 120 > 130

Bradycardia (beats/
min)

< 40 < 60 Unexplained < 50 for 15 min

< 40 < 40

Blood pressure
(mmHg)

SBP < 90 SBP < 90 or >
180

Hypotension (unex-
plained)

SBP > 200 or < 90 SBP < 90

Chest pain Cardiac arrest ND ND Complaint of nontrau-
matic chest pain

Complaint of nontraumatic
chest pain

Seizures Sudden or extended ND ND Repeated or pro-
longed

ND

Concern/worry
about patient

Serious concern
about a patient who
doesn’t fit the above
criteria

NE Nurse concern about
overall deterioration
in patients’ condi-
tion without any of
the above criteria
(p. 2077)

Nurse concern • Uncontrolled pain
• Failure to respond to

treatment
• Unable to obtain prompt

assistance for unstable
patient

Pulse oximetry (SpO2) NE NE NE < 92% < 92%

Other • Color change of
patient

• Unexplained agita-
tion for > 10 min

• CIWA > 15 points

• UOP < 50 cc/4 hr • Color change of patient

(pale, dusky, gray, or
blue)

• New-onset limb weak-
ness or smile droop

• Sepsis: ≥ 2 SIRS criteria

4 = Hillman K, et al.; 8 = Mailey J, et al.; 9 = Dacey MJ, et al.; 13 = Benson L, et al.; 15 = Bader MK, et al.

cc = cubic centimeters; CIWA = Clinical Institute Withdrawal Assessment; hr = hour; LOC = level of consciousness; min = minute; mmHg = millimeters
of mercury; ND = not defined; NE = not evaluated; RR = respiratory rate; SBP = systolic blood pressure; SIRS = systemic inflammatory response
syndrome; SpO2= arterial oxygen saturation; UOP = urine output


that an RRT is a valuable inter-
vention to initiate. They decide
to take the criteria for activating
an RRT from several successful
studies/projects and put them
into a synthesis table to better
see their major similarities (see
Table 4). From this com-
bined list, they choose the criteria
for initiating an RRT consult that
they'll use in their project (see
Table 5). The team also begins
discussing the ideal makeup for
their RRT. Again, they go back to
the evaluation table and look

of excitement about their project,
that their colleagues across all
disciplines have been eager to hear
the re sults of their review of the
evidence. In addition, Carlos says
that many resources in their hos-
pital will be available to help them
get started with their project and
reminds them of their hospital
administrators’ commitment to
support the team.

ACTING ON THE EVIDENCE
As they consider the synthesis
of the evidence, the team agrees

Re becca raises a question that’s
been on her mind. She reminds
them that in the “Appraisal: Worth
to Practice” column, teaching was
identified as an important factor
in initiating an RRT and expresses
concern that their hospital is not
an aca demic medical center. Chen
re minds her that even though
theirs is not a designated teaching
hospital with residents on staff
24 hours a day, it has a culture
of teaching that should enhance
the success of an RRT. She adds
that she’s al ready hearing a buzz

Table 5. Defined Criteria for Initiating an RRT Consult at Our Hospital

Pulmonary

Ventilation Color change of patient (pale, dusky, gray, or blue)

Respiratory distress RR < 10 or > 30 breaths/min or unexplained dyspnea or new-onset difficulty breathing
or shortness of breath

Cardiovascular

Tachycardia Unexplained > 130 beats/min for 15 min

Bradycardia Unexplained < 50 beats/min for 15 min

Blood pressure Unexplained SBP < 90 or > 200 mmHg

Chest pain Complaint of nontraumatic chest pain

Pulse oximetry < 92% SpO2

Perfusion UOP < 50 cc/4 hr

Neurologic

Seizures Initial, repeated, or prolonged

Change in mental status • Sudden decrease in LOC with normal blood glucose
• Unexplained agitation for > 10 min
• New-onset limb weakness or smile droop

Concern/worry about
patient

Nurse concern about overall deterioration in patients’ condition without any of the above
criteria

Sepsis

• Temp, > 38°C
• HR, > 90 beats/min
• RR, > 20 breaths/min
• WBC, > 12,000, < 4,000, or > 10% bands

cc = cubic centimeters; hr = hours; HR = heart rate; LOC = level of consciousness; min = minute; mmHg = millimeters of
mercury; RR = respiratory rate; SBP = systolic blood pressure; SpO2 = arterial oxygen saturation; Temp = temperature;
UOP = urine output; WBC = white blood count
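To make the logic of Table 5 concrete, the short sketch below (in Python) encodes the criteria as a single screening function. It is only an illustration: the function name, the field names of the observation dictionary, and the rule of flagging sepsis when two or more of the four listed criteria are met (borrowed from the SIRS criterion cited in Table 4) are assumptions of this sketch, not part of the article or of any hospital's actual protocol.

    # Hypothetical sketch: which Table 5 criteria does a set of observations meet?
    # Thresholds come from Table 5; names and structure are illustrative only.
    def rrt_consult_reasons(obs):
        reasons = []
        rr, hr, sbp = obs.get("rr"), obs.get("hr"), obs.get("sbp")
        spo2, uop = obs.get("spo2"), obs.get("uop_cc_4hr")
        temp, wbc = obs.get("temp_c"), obs.get("wbc")

        # Pulmonary
        if obs.get("color_change"):
            reasons.append("Ventilation: color change (pale, dusky, gray, or blue)")
        if rr is not None and (rr < 10 or rr > 30):
            reasons.append("Respiratory distress: RR < 10 or > 30 breaths/min")
        if spo2 is not None and spo2 < 92:
            reasons.append("Pulse oximetry: SpO2 < 92%")
        if uop is not None and uop < 50:
            reasons.append("Perfusion: UOP < 50 cc/4 hr")

        # Cardiovascular ("unexplained" and "for 15 min" require clinical judgment,
        # so they are passed in as flags rather than computed here)
        if obs.get("unexplained_tachycardia_15min"):
            reasons.append("Tachycardia: unexplained > 130 beats/min for 15 min")
        if obs.get("unexplained_bradycardia_15min"):
            reasons.append("Bradycardia: unexplained < 50 beats/min for 15 min")
        if sbp is not None and (sbp < 90 or sbp > 200):
            reasons.append("Blood pressure: unexplained SBP < 90 or > 200 mmHg")
        if obs.get("nontraumatic_chest_pain"):
            reasons.append("Chest pain: complaint of nontraumatic chest pain")

        # Neurologic
        if obs.get("seizure"):
            reasons.append("Seizures: initial, repeated, or prolonged")
        if (obs.get("decreased_loc_normal_glucose") or obs.get("agitation_over_10min")
                or obs.get("new_limb_weakness_or_smile_droop")):
            reasons.append("Change in mental status")

        # Concern/worry about patient
        if obs.get("nurse_concern"):
            reasons.append("Nurse concern about overall deterioration")

        # Sepsis screen (assumption: flag when >= 2 of the 4 criteria are met)
        sirs = sum([
            temp is not None and temp > 38,
            hr is not None and hr > 90,
            rr is not None and rr > 20,
            wbc is not None and (wbc > 12000 or wbc < 4000 or obs.get("bands_pct", 0) > 10),
        ])
        if sirs >= 2:
            reasons.append("Sepsis screen: %d of 4 criteria met" % sirs)

        return reasons

    # Example with invented values: a patient breathing 32/min with SpO2 of 90%
    # would return the respiratory-distress and pulse-oximetry criteria.
    # print(rrt_consult_reasons({"rr": 32, "spo2": 90}))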


3. Winters BD, et al. Rapid response sys-
tems: a systematic review. Crit Care
Med 2007;35(5):1238-43.

4. Hillman K, et al. Introduction of
the medical emergency team (MET)
system: a cluster-randomised con-
trolled trial. Lancet 2005;365(9477):
2091-7.

5. Sharek PJ, et al. Effect of a rapid re –
sponse team on hospital-wide mortal-
ity and code rates outside the ICU in
a children’s hospital. JAMA 2007;
298(19):2267-74.

6. Chan PS, et al. Hospital-wide code
rates and mortality before and after
implementation of a rapid response
team. JAMA 2008;300(21):2506-13.

7. DeVita MA, et al. Use of medical
emergency team responses to reduce
hospital cardiopulmonary arrests.
Qual Saf Health Care 2004;13(4):
251-4.

8. Mailey J, et al. Reducing hospital
standardized mortality rate with early
interventions. J Trauma Nurs 2006;
13(4):178-82.

9. Dacey MJ, et al. The effect of a rapid
response team on major clinical out-
come measures in a community hos-
pital. Crit Care Med 2007;35(9):
2076-82.

10. McFarlan SJ, Hensley S. Implementa-
tion and outcomes of a rapid response
team. J Nurs Care Qual 2007;22(4):
307-13.

11. Offner PJ, et al. Implementation of a
rapid response team decreases cardiac
arrest outside the intensive care unit.
J Trauma 2007;62(5):1223-8.

12. Bertaut Y, et al. Implementing a rapid-
response team using a nurse-to-nurse
consult approach. J Vasc Nurs 2008;
26(2):37-42.

13. Benson L, et al. Using an advanced
practice nursing model for a rapid re-
sponse team. Jt Comm J Qual Patient
Saf 2008;34(12):743-7.

14. Hatler C, et al. Implementing a rapid
response team to decrease emergen-
cies. Medsurg Nurs 2009;18(2):84-90,
126.

15. Bader MK, et al. Rescue me: saving
the vulnerable non-ICU patient popu-
lation. Jt Comm J Qual Patient Saf
2009;35(4):199-205.

16. Institute for Healthcare Improvement.
Establish a rapid response team.
n.d. http://www.ihi.org/IHI/topics/
criticalcare/intensivecare/changes/
establisharapidresponseteam.htm.

evidence that led to the project,
how to call an RRT, and out-
come measures that will indicate
whether or not the implementation

of the evidence was successful.
They’ll also need an evaluation
plan. From reviewing the studies
and projects, they also re alize that
it’s important to focus their plan
on evidence implementation, in-
cluding carefully evaluating both
the process of implementation and
project outcomes.

Be sure to join the EBP team
in the next installment of this se –
ries as they develop their imple-
mentation plan for initiating an
RRT in their hospital, including
the submission of their project
proposal to the ethics review
board. ▼

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, ellen.fineout-overholt@asu.edu.

REFERENCES
1. Chan PS, et al. (2010). Rapid re-
sponse teams: a systematic review
and meta- analysis. Arch Intern Med
2010;170(1):18-26.

2. McGaughey J, et al. Outreach and
early warning systems (EWS) for the
prevention of intensive care admission
and death of critically ill adult patients
on general hospital wards. Cochrane
Database Syst Rev 2007;3:CD005529.

over the “Major Variables
Studied” column, noting that the
composition of the RRT varied
among the studies/projects. Some

RRTs had active physician partic-
ipation (n = 6), some had desig-
nated phy sician consultation on
an as-needed basis (n = 2), and
some were nurse-led teams (n = 4).
Most RRTs also had a respira-
tory therapist (RT). All RRT mem-
bers had expertise in intensive
care and many were certified in
ad vanced cardiac life support
(ACLS). They agree that their
team will be comprised of ACLS-
certified mem bers. It will be led
by an acute care nurse practi-
tioner (ACNP) credentialed for
advanced procedures, such as
central line insertion. Members
will include an ICU RN and an
RT who can intubate. They also
discuss having physicians will-
ing to be called when needed.
Although no studies or projects
had a chaplain on their RRT,
Chen says that it would make
sense in their hospital. Carlos,
who’s been on staff the longest
of the three, says that interdisci-
plinary collaboration has been a
mainstay of their organization. A
physician, ACNP, ICU RN, RT,
and chaplain are logical choices
for their RRT.

As the team ponders the evi-
dence, they begin to discuss the
next step, which is to develop
ideas for writing their project
implementation plan (also called
a protocol). Included in this pro-
tocol will be an educational plan
to let those involved in the proj-
ect know information such as the



Evidence-Based Practice: Critical
Appraisal of Qualitative Evidence

Kathleen M. Williamson

One of the key steps of evidence-based practice is to critically appraise evidence to best answer a clinical question. Mental
health clinicians need to understand the importance of qualitative evidence to their practice, including levels of qualitative
evidence, qualitative inquiry methods, and criteria used to appraise qualitative evidence to determine how implementing
the best qualitative evidence into their practice will influence mental health outcomes. The goal of qualitative research is
to develop a complete understanding of reality as it is perceived by the individual and to uncover the truths that exist.
These important aspects of mental health require clinicians to engage this evidence. J Am Psychiatr Nurses Assoc, 2009;
15(3), 202-207. DOI: 10.1177/1078390309338733

Keywords: evidence-based practice; qualitative inquiry; qualitative designs; critical appraisal of qualitative
evidence; mental health

Evidence-based practice (EBP) is an approach that
enables psychiatric mental health care practitioners
as well as all clinicians to provide the highest quality
of care using the best evidence available (Melnyk &
Fineout-Overholt, 2005). One of the key steps of EBP
is to critically appraise evidence to best answer a
clinical question. For many mental health questions,
understanding levels of evidence, qualitative inquiry
methods, and questions used to appraise the evidence
are necessary to implement the best qualitative evi-
dence into practice. Drawing conclusions and making
judgments about the evidence are imperative to the
EBP process and clinical decision making (Melnyk &
Fineout-Overholt, 2005; Polit & Beck, 2008). The over-
all purpose of this article is to familiarize clinicians
with qualitative research as an important source of
evidence to guide practice decisions. In this article, an
overview of the goals, methods and types of qualita-
tive research, and the criteria used to appraise the
quality of this type of evidence will be presented.

QUALITATIVE BELIEFS

Qualitative research aims to generate insight,
describe, and understand the nature of reality in

human experiences (Ayers, 2007; Milne & Oberle,
2005; Polit & Beck, 2008; Saddler, 2006; Sandelowski,
2004; Speziale & Carpenter, 2003; Thorne, 2000).
Qualitative researchers are inquisitive and seek to
understand knowledge about how people think and
feel, about the circumstances in which they find
themselves, and use methods to uncover and decon-
struct the meaning of a phenomenon (Saddler, 2006;
Thorne, 2000). Qualitative data are collected in a
natural setting. These data are not numerical; rather,
they are full and rich descriptions from participants
who are experiencing the phenomenon under study.
The goal of qualitative research is to uncover the
truths that exist and develop a complete understand-
ing of reality and the individual’s perception of what
is real. This method of inquiry is deeply rooted in
descriptive modes of research. “The idea that multiple
realities exist and create meaning for the individuals
studied is a fundamental belief of qualitative research-
ers” (Speziale & Carpenter, 2003, p. 17). Qualitative
research is the studying, collecting, and understand-
ing of the meaning of individuals' lives using a variety
of materials and methods (Denzin & Lincoln, 2005).

WHAT IS A QUALITATIVE
RESEARCHER?

Qualitative researchers commonly believe that indi-
viduals come to know and understand their reality in

Kathleen M. Williamson, PhD, RN, associate director, Center for
the Advancement of Evidence-Based Practice, Arizona State
University, College of Nursing & Healthcare Innovation, Phoenix,
Arizona; Kathleen.Williamson@asu.edu.


different ways. It is through the lived experience
and the interactions that take place in the natural
setting that the researcher is able to discover and
understand the phenomenon under study (Miles &
Huberman, 1994; Patton, 2002; Speziale & Carpenter,
2003). To ensure the least disruption to the environ-
ment/natural setting, qualitative researchers care-
fully consider the best research method to answer
the research question (Speziale & Carpenter, 2003).
These researchers are intensely involved in all
aspects of the research process and are considered
participants and observers in the setting or field (Patton,
2002; Polit & Beck, 2008; Speziale & Carpenter,
2003). Flexibility is required to obtain data from the
richest possible sources of information. Using a
holistic approach, the researcher attempts to cap-
ture the perceptions of the participants from an
“emic” approach (i.e., from an insider’s viewpoint;
Miles & Huberman, 1994; Speziale & Carpenter,
2003). Often, this is accomplished through the use of
a variety of data collection methods, such as inter-
views, observations, and written documents (Patton,
2002). As the data are collected, the researcher
simultaneously analyzes them, which includes identi-
fying emerging themes, patterns, and insights
within the data. According to Patton (2002), quali-
tative analysis engages exploration, discovery, and
inductive logic. The researcher uses a rich literary
account of the setting, actions, feelings, and mean-
ing of the phenomenon to report the findings
(Patton, 2002).

COMMONLY USED
QUALITATIVE DESIGNS

According to Patton (2002), “Qualitative methods
are first and foremost research methods. They are
ways of finding out what people do, know, think, and

feel by observing, interviewing, and analyzing docu-
ments” (p. 145). Qualitative research designs vary by
type and purpose: data collection strategies used and
the type of question or phenomenon under study. To
critically appraise qualitative evidence for its valid-
ity and use in practice, an understanding of the
types of qualitative methods as well as how they are
employed and reported is necessary.

Many of the methods are rooted in the anthropol-
ogy, psychology, and sociology disciplines. Among the
most commonly used methods in health sciences
research are ethnography, phenomenology, and
grounded theory (see Table 1).

Ethnography

Ethnography has its traditions in cultural
anthropology, which describe the values, beliefs,
and practice of cultural groups (Ploeg, 1999; Polit
& Beck, 2008). According to Speziale and Carpenter
(2003), the characteristics that are central to eth-
nography are that (a) the research is focused on
culture, (b) the researcher is totally immersed in
the culture, and (c) the researcher is aware of her/
his own perspective as well as those in the study.
Ethnographic researchers strive to study cultures
from an emic approach. The researcher as a par-
ticipant observer becomes involved in the culture
to collect data, learn from participants, and report
on the way participants see their world (Patton,
2002). Data are primarily collected through obser-
vations and interviews. Analysis of ethnographic
results involves identifying the meanings attrib-
uted to objects and events by members of the cul-
ture. These meanings are often validated by
members of the culture before finalizing the results
(called member checks). This is a labor-intensive
method that requires extensive fieldwork.

TABLE 1. Most Commonly Used Qualitative Research Methods

Ethnography
Purpose: Describe culture of people
Research question(s): What is it like to live . . . ? What is it . . . ?
Sample size (on average): 30-50
Data sources/collection: Interviews, observations, field notes, records, chart data, life histories

Phenomenology
Purpose: Describe phenomena, the appearance of things, as lived experience of humans in a natural setting
Research question(s): What is it like to have this experience? What does it feel like?
Sample size (on average): 6-8
Data sources/collection: Interviews, videotapes, observations, in-depth conversations

Grounded theory
Purpose: To develop a theory rather than describe a phenomenon
Research question(s): Questions emerge from the data
Sample size (on average): 25-50
Data sources/collection: Taped interview, observation, diaries, and memos from researcher

Source. Adapted from Polit and Beck (2008) and Speziale and Carpenter (2003).

Phenomenology

Phenomenology has its roots in both philosophy
and psychology. Polit and Beck (2008) reported,
“Phenomenological researchers believe that lived
experience gives meaning to each person’s percep-
tion of a particular phenomenon” (p. 227). According
to Polit and Beck, there are four aspects of the
human experience that are of interest to the phe-
nomenological researcher: (a) lived space (spatial-
ity), (b) lived body (corporeality), (c) lived human
relationships (relationality), and (d) lived time (tem-
porality). Phenomenological inquiry is focused on
exploring how participants in the experience make
sense of the experience, transform the experience
into consciousness, and the nature or meaning of
the experience (Patton, 2002). Interpretive phenom-
enology (hermeneutics) focuses on the meaning and
interpretation of the lived experience to better
understand social, cultural, political, and historical
context. Descriptive phenomenology shares vivid
reports and describes the phenomenon.

In a phenomenological study, the researcher is an
active participant/observer who is totally immersed
in the investigation. It involves gaining access to
participants who could provide rich descriptions
from in-depth interviews to gather all the informa-
tion needed to describe the phenomenon under study
(Speziale & Carpenter, 2003). Ongoing analyses of
direct quotes and statements by participants occur
until common themes emerge. The outcome is a vivid
description of the experience that captures the
meaning of the experience and communicates clearly
and logically the phenomenon under study (Speziale
& Carpenter, 2003).

Grounded Theory

Grounded theory has its roots in sociology and
explores the social processes that are present within
human interactions (Speziale & Carpenter, 2003).
The purpose is to develop or build a theory rather
than test a theory or describe a phenomenon (Patton,
2002). Grounded theory takes an inductive approach
in which the researcher seeks to generate emergent
categories and integrate them into a theory grounded
in the data (Polit & Beck, 2008). The research does
not start with a focused problem; it evolves and is
discovered as the study progresses. A feature of
grounded theory is that the data collection, data
analysis, and sampling of participants occur simulta-
neously (Polit & Beck, 2008; Powers, 2005). The

researchers using grounded theory methodology are
able to critically analyze situations, not remove
themselves from the study but realize that they
are part of it, recognize bias, obtain valid and reliable
data, and think abstractly (Strauss & Corbin, 1990).

Data collection is through in-depth interview and
observations. A constant comparative process is used
for two reasons: (a) to compare every piece of data
with every other piece to more accurately refine the
relevant categories and (b) to assure the researcher
that saturation has occurred. Once saturation is
reached the researcher connects the categories, pat-
terns, or themes that describe the overall picture
that emerged that will lead to theory development.

ASPECTS OF QUALITATIVE RESEARCH

The most important aspect of qualitative inquiry
is that participants are actively involved in the
research process rather than receiving an interven-
tion or being observed for some risk or event to be
quantified. Another aspect is that the sample is pur-
posefully selected and is based on experience with a
culture, social process, or phenomena to collect infor-
mation that is rich and thick in descriptions. The final
essential aspect of qualitative research is that one or
more of the following strategies are used to collect
data: interviews, focus groups, narratives, chat rooms,
and observation and/or field notes. These methods
may be used in combination with each other. The
researcher may choose to use triangulation strategies
on data collection, investigator, method, or theory and
use multiple sources to draw conclusions about the
phenomenon (Patton, 2002; Polit & Beck, 2009).

SUMMARY

This is not an exhaustive list of qualitative methods
that researchers could choose to use to answer a
research question; other methods include historical
research, feminist research, case study method, and
action research. All qualitative research methods are
used to describe and discover meaning, understand-
ing, or develop a theory and transport the reader to
the time and place of the observation and/or inter-
view (Patton, 2002).

THE HIERARCHY OF
QUALITATIVE EVIDENCE

Clinical questions that require qualitative evi-
dence to answer them focus on human response and
meaning. An important step in the process of apprais-
ing qualitative research as a guide for clinical prac-
tice is the identification of the level of evidence or the
“best” evidence. The level of evidence is a guide that
helps identify the most appropriate, rigorous, and
clinically relevant evidence to answer the clinical
question (Polit & Beck, 2008). The evidence hierarchy for qualitative research ranges from the opinions of authorities and/or reports of expert committees, to a single qualitative research study, to metasynthesis (Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008). A metasynthesis is comparable to a meta-analysis (i.e., systematic review) of quantitative studies. A meta-
synthesis is a technique that integrates findings of
multiple qualitative studies on a specific topic, pro-
viding an interpretative synthesis of the research
findings in narrative form (Polit & Beck, 2008). This
is the strongest level of evidence with which to answer a clinical question. The higher the level of evidence, the stronger the case for changing practice. However, all evidence needs to be critically appraised
based on (a) the best available evidence (i.e., level of
evidence), (b) the quality and reliability of the study,
and (c) the applicability of the findings to practice.

CRITICAL APPRAISAL OF
QUALITATIVE EVIDENCE

Once the clinical issue has been identified, the
PICOT question constructed, and the best evidence
located through an exhaustive search, the next step
is to critically appraise each study for its validity
(i.e., the quality), reliability, and applicability to use
in practice (Melnyk & Fineout-Overholt, 2005).
Although there is no consensus among qualitative
researchers on the quality criteria (Cutcliffe &
McKenna, 1999; Polit & Beck, 2008; Powers, 2005;
Russell & Gregory, 2003; Sandelowski, 2004), many
have published excellent tools that guide the process for critically appraising qualitative evidence (Duffy,
2005; Melnyk & Fineout-Overholt, 2005; Polit &
Beck, 2008; Powers, 2005; Russell & Gregory, 2003;
Speziale & Carpenter, 2003). They all base their cri-
teria on three primary questions: (a) Are the study
findings valid? (b) What were the results of the
study? (c) Will the results help me in caring for my
patients? According to Melnyk and Fineout-Overholt
(2005), “The answers to these questions ensure rele-
vance and transferability of the evidence from the
search to the specific population for whom the practi-
tioner provides care” (p. 120). Using the questions in Tables 2, 3, and 4, one can evaluate the evidence and determine whether the study findings are valid, whether the methods and instruments used to acquire the knowledge are credible, and whether the findings are transferable.

The qualitative process contributes to the rigor or
trustworthiness of the data (i.e., the quality). “The
goal of rigor in qualitative research is to accurately
represent study participants’ experiences” (Speziale
& Carpenter, 2003, p. 38). The qualitative attributes
of validity include credibility, dependability, confirm-
ability, transferability, and authenticity (Guba &
Lincoln, 1994; Miles & Huberman, 1994; Speziale &
Carpenter, 2003).

Credibility refers to confidence in the truth of the data and their interpretation (Polit & Beck, 2008).
The credibility of the findings hinges on the skill,
competence, and rigor of the researcher to describe
the content shared by the participants and the abil-
ity of the participants to accurately describe the
phenomenon (Patton, 2002; Speziale & Carpenter,
2003). Cutcliffe and McKenna (1999) reported that
the most important indicator of the credibility of
findings is when a practitioner reads the study find-
ings and regards them as meaningful and applicable
and incorporates them into his or her practice.

TABLE 2. Subquestions to Further Answer, Are the Study Findings Valid?

Participants
· How were they selected?
· Did they provide rich and thick descriptions?
· Were the participants’ rights protected?
· Did the researcher eliminate bias?
· Was the group or population adequately described?

Sample
· Was it adequate?
· Was the setting appropriate to acquire an adequate sample?
· Was the sampling method appropriate?
· Do the data accurately represent the study participants?
· Was saturation achieved?

Data collection
· How were the data collected?
· Were the tools adequate?
· How were the data coded? If so, how?
· How accurate and complete were the data?
· Does gathering the data adequately portray the phenomenon?

Source. Adapted from Powers (2005), Polit and Beck (2008), Russell and Gregory (2003), and Speziale and Carpenter (2003).

Confirmability refers to the way the researcher documents and confirms the study findings (Speziale
& Carpenter, 2003). Confirmability is the process of
confirming the accuracy, relevance, and meaning of
the data collected. Confirmability exists if (a) the
researcher identifies whether saturation was reached and
(b) records of the methods and procedures are
detailed enough that they can be followed by an
audit trail (Miles & Huberman, 1994).

Dependability is a standard that demonstrates
whether (a) the process of the study was consistent, (b)
data remained consistent over time and conditions,
and (c) the results are reliable (Miles & Huberman,
1994; Polit & Beck, 2008; Speziale & Carpenter, 2003).
For example, if study methods and results are dependable, the researcher approached each occurrence in the same way at each encounter, and the results were coded accurately across the study.

Transferability refers to the probability that the
study findings have meaning and are usable by oth-
ers in similar situations (i.e., generalizable to others
in that situation; Miles & Huberman, 1994; Polit &
Beck, 2008; Speziale & Carpenter, 2003). To deter-
mine if the findings of a study are transferable and
can be used by others, the clinician must consider
the potential client to whom the findings may be
applied (Speziale & Carpenter, 2003).

Authenticity means that the researcher fairly and faithfully shows a range of different realities and develops an accurate and authentic portrait of the phenomenon under study (Polit & Beck, 2008). For example, if a clinician were in the same environment the researcher describes, he or she would
experience the phenomenon similarly. All mental
health providers need to become familiar with these
aspects of qualitative evidence and hone their criti-
cal appraisal skills to enable them to improve the
outcomes of their clients.

CONCLUSION

Qualitative research aims to impart the meaning of human experience and to understand how people think and feel about their circumstances. Qualitative
researchers use a holistic approach in an attempt to
uncover truths and understand a person’s reality.
The researcher is intensely involved in all aspects
of the research design, collection, and analysis pro-
cesses. Ethnography, phenomenology, and grounded theory are some of the designs a researcher may use to study a culture or phenomenon or to develop a theory. Data
collection strategies vary based on the research
question, method, and informants. Methods such as
interviews, observations, and journals allow information-rich participants to provide detailed literary accounts of the phenomenon. Data analysis occurs simultaneously with data collection and is the
process by which the researcher identifies themes,
concepts, and patterns that provide insight into the
phenomenon under study.

One of the crucial steps in the EBP process is to critically appraise the evidence for its use in practice and determine the value of findings.

TABLE 3. Subquestions to Further Answer, What Were the Results of the Study?

· Is the research design appropriate for the research question?
· Is the description of findings thorough?
· Do findings fit the data from which they were generated?
· Are the results logical, consistent, and easy to follow?
· Was the purpose of the study clear?
· Were all themes identified, useful, creative, and convincing of the phenomena?

Source. Adapted from Powers (2005), Russell and Gregory (2003), and Speziale and Carpenter (2003).

TABLE 4. Subquestions to Further Answer, Will the Results Help Me in Caring for My Patients?

· What meaning and relevance does this study have for my patients?
· How would I use these findings in my practice?
· How does the study help provide perspective on my practice?
· Are the conclusions appropriate to my patient population?
· Are the results applicable to my patients?
· How would patient and family values be considered in applying these results?

Source. Adapted from Powers (2005), Russell and Gregory (2003), and Speziale and Carpenter (2003).


Critical appraisal
is the review of the evidence for its validity (i.e.,
strengths and weaknesses), reliability, and usefulness
for clients in daily practice. “Psychiatric mental
health clinicians are practicing in an era emphasizing
the use of the most current evidence to direct their
treatment and interventions” (Rice, 2008, p. 186).
Appraising the evidence is essential to ensure that the best knowledge in the field is being applied in a cost-effective, holistic, and effective way. To do this, clinicians must integrate the critically appraised findings with their own abilities as clinicians and their clients’ preferences. As professionals, clinicians are expected to use the EBP process, which includes appraising the evidence to determine whether the results are believable, usable, and dependable.
Clinicians in psychiatric mental health must use
qualitative evidence to inform their practice deci-
sions. For example, how do clients newly diagnosed with bipolar disorder and their families perceive the life impact of this diagnosis? Having a well-done meta-
synthesis that provides an accurate representation of
the participants’ experiences, and is trustworthy (i.e.,
credible, dependable, confirmable, transferable, and
authentic), will provide insight into the situational
context, human response, and meaning for these cli-
ents and will assist clinicians in delivering the best
care to achieve the best outcomes.

REFERENCES

Ayers, L. (2007). Qualitative research proposals—Part I. Journal of Wound, Ostomy and Continence Nursing, 34, 30-32.

Cutcliffe, J. R., & McKenna, H. P. (1999). Establishing the credibil-
ity of qualitative research findings: The plot thickens. Journal
of Advanced Nursing, 30, 374-380.

Denzin, N. K., & Lincoln, Y. S. (2005). The Sage handbook of
qualitative research (3rd ed.). Thousand Oaks, CA: Sage.

Duffy, M. E. (2005). Resources for critically appraising qualitative research evidence for nursing practice clinical questions. Clinical Nurse Specialist, 19, 288-290.

Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in
qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.),
Handbook of qualitative research (pp. 105-117). Thousand
Oaks, CA: Sage.

Melnyk, B. M., & Fineout-Overholt, E. (Eds.). (2005). Evidence-based
practice in nursing and healthcare. Philadelphia: Lippincott
Williams & Wilkins.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.

Milne, J., & Oberle, K. (2005). Enhancing rigor in qualitative
description: A case study. Journal of Wound, Ostomy and Continence Nursing, 32, 413-420.

Patton, M. Q. (2002). Qualitative research & evaluation methods
(3rd ed.). Thousand Oaks, CA: Sage.

Ploeg, J. (1999). Identifying the best research design to fit the
question. Part 2: Qualitative designs. Evidence-Based Nursing,
2, 36-37.

Polit, D. F., & Beck, C. T. (2008). Nursing research: Generating and
assessing evidence for nursing practice. Philadelphia: Lippincott
Williams & Wilkins.

Powers, B. A. (2005). Critically appraising qualitative evidence. In
B. M. Melnyk & E. Fineout-Overholt (Eds.), Evidence-based
practice in nursing and healthcare (pp. 127-162). Philadelphia:
Lippincott Williams & Wilkins.

Rice, M. J. (2008). Evidence-based practice in psychiatric care:
Defining levels of evidence. Journal of the American Psychiatric
Nurses Association, 14(3), 181-187.

Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative
research studies. Evidence-Based Nursing, 6, 36-40.

Saddler, D. (2006). Research 101. Gastroenterology Nursing, 30,
314-316.

Sandelowski, M. (2004). Using qualitative research. Qualitative
Health Research, 14, 1366-1386.

Speziale, H. J. S., & Carpenter, D. R. (2003). Qualitative research
in nursing: Advancing the humanistic imperative. Philadelphia:
Lippincott Williams & Wilkins.

Strauss, A., & Corbin, J. (1990). Basics of qualitative research:
Grounded theory procedures and techniques. London: Sage.

Thorne, S. (2000). Data analysis in qualitative research. Evidence-
Based Nursing, 3, 68-70.

