Article Analysis 2
Complete an article analysis for each of the two articles using the “Article Analysis: Part 2” template.
While APA style is not required for the body of this assignment, solid academic writing is expected, and documentation of sources should be presented using APA formatting guidelines.
This assignment uses a rubric. Please review the rubric attached below prior to beginning the assignment to become familiar with the expectations for successful completion.
Research in nursing education has evolved during the past 50 years.
From 1957 to 1982, nursing research predominantly focused on
nurse preparation (Stevens & Cassidy, 1999). In the 1980s, coinciding
with the establishment of the National Center for Nursing
Research in 1985 and its transition to the National Institute of
Nursing Research in 1993, the focus began to shift to clinical
issues. The result has been that little, if any, funding is directed
toward educational research.
Despite the shift in focus, some researchers in the 1980s contin-
ued to aspire to understand the nature of research in nursing edu-
cation. Tanner and Lindeman (1987), using a Delphi survey technique,
concluded that “research in nursing education can and
should meet criteria for scientific merit applied to other areas of sci-
entific investigation” (p. 50). Furthermore, while the proportional
emphasis of nursing research in the United States is no longer on
education, nursing education studies continue to increase in fre-
quency. From 70 published studies during the period 1976 to 1982
(an average of 12 per year), the total grew to 423 from 1988 to 1991
(105 per year); 1,963 articles were published for the period 1993 to
1997 (393 per year) (Stevens & Cassidy, 1999). An evaluation of the
nursing education research literature published between January
1991 and December 2000 (Yonge et al., 2005) showed the most
common topics to be continuing education and patient education,
with more than 100 articles focused on each of these two topics.
Today, nursing education research falls short of where it could
be. Writing in a 2009 Nursing Outlook editorial, Broome noted the
limited data available to demonstrate the most effective and efficient
methods to produce nurses capable of caring for patients in our
complex and fragmented health care system. Broome identified a
need to build the evidence base for nursing education.
The Institute of Medicine (IOM) Future of Nursing report (2011)
included research priorities for transforming nursing education:
“Nursing education needs to be transformed in a number of ways to
prepare nursing graduates to work collaboratively and effectively with
other health professionals in a complex and evolving health care sys-
tem in a variety of settings” (p. 164). To achieve IOM goals, the
methodological rigor of nursing education research must be improved
so that studies are replicable, fundable, and provide direction for
improving the education of the nursing workforce.
Funding Shortage Current research dollars are insufficient to
support professional education studies, both in nursing and in
other disciplines and professions (Benner, Sutphen, Leonard, &
Day, 2010; Broome, 2009). For a study of published medical edu-
cation research, Reed, Kern, Levine, and Wright (2005) surveyed
first authors of studies published in 13 prominent peer-reviewed
journals in 2002-2003. When asked to estimate the cost of their
studies, author responses ranged from $4,000 to $25,000, with a
median of $10,000. However, Reed et al. also calculated expendi-
tures for these studies based on costs of author efforts, research
assistants, statisticians, equipment, data entry assistants, secretar-
ial support, postage, and other resources. The calculated costs of
studies were higher, ranging from $11,531 to $63,808, with a
median of $24,471.
Methodological Quality and Scientific Impact of Quantitative Nursing Education Research Over 18 Months
CAROLYN B. YUCHA, BARBARA ST. PIERRE SCHNEIDER, TISH SMYER, SUSAN KOWALSKI, AND EVA STOWERS
Nursing Education Perspectives, November/December 2011, Vol. 32, No. 6

ABSTRACT The methodological quality of nursing education research has not been rigorously studied. The purpose of this study was to evaluate the methodological quality and scientific impact of nursing education research reports. The methodological quality of 133 quantitative nursing education research articles published between July 2006 and December 2007 was evaluated using the Medical Education Research Study Quality Instrument (MERSQI). The mean (± SD) MERSQI score was 9.8 ± 2.2. It correlated (p < .05) with several scientific impact indicators: citation counts from Scopus (r = .223), Google Scholar (r = .224), and journal impact factor (r = .216); it was not associated with Web of Science citation count, funding, or h Index. The similarities between this study’s MERSQI ratings for nursing literature and those reported for the medical literature, coupled with the association with citation counts, suggest that the MERSQI is an appropriate instrument to evaluate the quality of nursing education research.

It is highly likely that lack of sufficient funding for nursing education studies makes it difficult to conduct high quality studies.
Because nursing programs are typically the most expensive under-
graduate programs located in higher education settings, many
nursing-related studies are conducted within single sites on small
samples. If nursing educational researchers could improve their
methodological rigor, funding agencies might be more likely to
fund these studies. One strategy to improve rigor is to evaluate the
use of a scientific quality instrument to assess nursing education
research.
Improving Methodological Rigor via Evaluation A num-
ber of medical researchers have published on the quality of medical
education literature during the last five years. Reed, Price, et al.
(2005) present a well-organized table comparing five previously
published guides for appraising reports of medical education inter-
vention studies. These include seven categories of variables and
questions one might consider: purpose, rationale, objectives,
design, interventions, evaluation, and educational significance.
Cook, Beckman, and Bordage (2007) reviewed articles reporting
experimental studies in medical education published in six well-
respected medical journals in 2003-2004. Of 185 articles meeting
inclusion criteria, they randomly selected 110 for full review. Cook
et al. found that the reporting of experimental studies in medical
education was generally incomplete. For example, only 45 percent
contained a literature review, 55 percent presented a theoretical
framework, and 76 percent included a statement of study purpose.
A mere 16 percent provided an explicit statement of study design.
Lastly, only 47 percent of the studies operationally defined the inde-
pendent variable(s), while only 32 percent operationally defined the
dependent variable(s).
In an effort to evaluate the relationship between quality and
funding, Reed et al. (2007) developed the Medical Education
Research Study Quality Instrument (MERSQI) to measure the
methodological quality of educational research. The MERSQI
includes 10 items reflecting six domains of study quality: study
design, sampling, type of data, validity, data analysis, and out-
comes. As designed, it is not limited to intervention studies only,
but is appropriate for all quantitative studies.
The development of the MERSQI and its testing for reliability
and validity have been well described by Reed et al. (2007). In
brief, a literature review was conducted to elicit factors that reflect
research quality. Items were defined and modified during repeated
pilot testing using studies not included in the validation study. The
MERSQI was then applied to 210 medical education research stud-
ies published in 13 peer-reviewed journals from 2002 to 2003.
Principal components analysis was done to select items to be
retained; Cronbach’s alpha was 0.6, demonstrating internal consistency,
and intraclass correlation coefficients demonstrated acceptable interrater
(range 0.72-0.98) and intrarater (0.78-0.998) reliability.
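The internal-consistency statistic used here, Cronbach’s alpha, can be computed directly from an item-score matrix. The following is a generic sketch of the standard formula, not the instrument developers’ actual analysis code:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects x k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    # Variance of each item across subjects, summed over items
    item_variances = scores.var(axis=0, ddof=1).sum()
    # Variance of each subject's total score
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```

Perfectly parallel items yield an alpha of 1.0; values near the MERSQI’s 0.6 indicate more modest internal consistency.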
Criterion validity for the MERSQI was demonstrated via the
association with global assessment of methodological quality by two
independent experts, a three-year citation rate, and impact factor of
the publishing journal (Reed et al., 2007). The MERSQI score was
strongly and significantly correlated with the expert global quality
rating, the three-year citation rate (0.8 increase in score per 10 cita-
tions), and journal impact factor (1.0 increase in score per six-unit
increase in impact factor). The scores were also associated with the
total previous medical education peer-reviewed publications by the
first author (1.46 increase in MERSQI for each 20 publications). Of
the 210 studies, 71 percent had no funding, 14 percent had less
than $20,000 funding, and 15 percent had $20,000 or more. The
MERSQI scores were also associated with study funding of $20,000
or more (1.29 increase in MERSQI for $20,000 or more in funding).
The greater the funding level, the more likely the study was multi-
institutional and/or used a two-group randomized controlled design.
More recently, Reed et al. (2008) showed that MERSQI scores
predict editorial decisions, at least among those manuscripts sub-
mitted for publication in the annual medical education issue of the
Journal of General Internal Medicine. Of 100 manuscripts, the total
MERSQI was 9.6 ± 2.6 (range 5 to 15.5). Papers with one point
higher total MERSQI scores (e.g., score of 10.0 versus 9.0) were
associated with editorial decisions to: a) send manuscripts for peer
review versus reject without review, b) invite revisions after review
versus reject after review, and c) accept rather than reject the man-
uscript. MERSQI scores of accepted manuscripts were significant-
ly higher than scores of rejected manuscripts (10.7 ± 2.5 versus 9.0
± 2.4, p = 0.003). In summary, the MERSQI score was associated
with: a) expert quality ratings, b) three-year citation rate, c) journal
impact factor, d) number of previous medical education peer-
reviewed publications by the first author, e) amount of study fund-
ing, and f) editorial decisions.
While the MERSQI was found to be a reliable and valid instru-
ment for measuring methodological quality in medical education
research, methodological quality in nursing education research has
not been as rigorously evaluated. To evaluate the methodological
quality of nursing education research and ensure a solid evidence
base for nursing education, the authors examined the relationships
between MERSQI scores and h Index, citation counts, and journal
impact factor. The relationships between the MERSQI and the fund-
ing sources of the studies and the country of data collection were
also examined.
Method DESIGN The cross-sectional design of this study was
chosen so that methodological quality and scientific impact of
recent nursing education research reports could be evaluated.
Because this study was a review of published literature, it did not
involve human subjects. Therefore, upon review, the University of
Nevada, Las Vegas Institutional Review Board for the Protection of
Human Subjects deemed the study exempt from review.
SAMPLE The time period July 2006 to December 2007 was
selected so that recent reports could be evaluated and a two-year
postpublication citation rate per report could be determined. A
minimum of 100 peer-reviewed reports meeting established inclu-
sion criteria were evaluated; this sample size was selected based on
the work of others in this area (Cook et al., 2007; Reed, Beckman,
& Wright, 2009).
Article inclusion criteria were as follows: a) available in
English; b) included original quantitative research (used descrip-
tive statistics to present all or a portion of findings or inferential
statistics to analyze all or a portion of results); c) focused on nurs-
ing students as subjects; and d) featured a descriptive, experimen-
tal, quasi-experimental, or observational (including case-control,
cohort, cross-sectional) design. Article exclusion criteria were: a)
solely qualitative research, b) meta-analysis, c) systematic review,
or d) literature review.
INSTRUMENT The instrument selected for the study, the
Medical Education Research Study Quality Instrument (MERSQI),
measures the methodological quality of a published educational
research report. To the researchers’ knowledge, the proposed study
is the first to assess nursing education research reports using the
MERSQI; its reliability, validity, and items are described above.
MERSQI domains and items within each domain of the instru-
ment are shown in Table 1. Each item within a domain is assigned
a value; the maximum of each domain score is 3, producing a max-
imum possible score of 18 and potential range of 5 to 18. Total
scores were calculated as the percentage of total achievable points
to account for “not applicable” (NA) responses. For example, the
response rate of retrospective studies of student records was
deemed NA; validity of evaluation instruments was deemed NA
when the instruments used were physiological in nature. If the eval-
uation consisted of a standardized test such as the NCLEX-RN® or
course grades, validity measures were considered present, even if
no information was provided about validity by the authors.
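The “percentage of total achievable points” adjustment described above can be sketched in code. The rescaling back to the 18-point maximum is an assumption about how adjusted totals were expressed, and the item tuples are illustrative, not the instrument’s exact item coding:

```python
def mersqi_total(items, scale_max=18):
    """Total MERSQI score adjusted for 'not applicable' (NA) items.

    items: list of (score, item_max) pairs, one per MERSQI item,
           with score set to None when the item was rated NA.
    Returns the fraction of achievable points, rescaled to scale_max.
    """
    # NA items contribute neither to the points earned nor to the points possible.
    achieved = sum(score for score, item_max in items if score is not None)
    achievable = sum(item_max for score, item_max in items if score is not None)
    return scale_max * achieved / achievable
```

For example, a retrospective study whose response-rate item is NA is scored only against the points it could actually earn, so an NA rating neither rewards nor penalizes the study.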
VARIABLES The main variable was the methodological quality
of published research reports. Two variables addressed the impact
of the research reports: citation count and journal impact factor. In
addition, study funding and country of data collection were explored
for their association with methodological quality.
Citation Counts To the researchers’ knowledge, this study is the
first to address the scientific impact of nursing education research
reports by examining citation rate. Because of this novel approach,
the research team librarian obtained citation counts 36 months
post-publication via three available databases: Web of Science,
Scopus, and Google Scholar.
The Web of Science database, with backfiles to 1900, has tradi-
tionally been used to measure the impact of journal articles. Web of
Science, the online version of Science Citation Index, Social
Sciences Citation Index, and Arts and Humanities Citation Index, is
arguably the leading database for providing this information in sci-
ence and medical literature and includes more than 12,000 journals.
Scopus, a newer database, indexes more journals than Web of
Science; it includes approximately 18,000 peer-reviewed journals
with most of its content dating from 1996 to the present.
Increasingly, researchers are using Google Scholar, not only to
locate research reports, but also for citation rates. Although there
are flaws in its citation count, including a problem with duplicate
records, Google Scholar indexes most current peer-reviewed jour-
nals and other open-access materials. Therefore, it includes more
journals than either Web of Science or Scopus and may yield higher
citation rates for nursing education research reports than the two
other databases. Google Scholar also differs from the others in that
its citation count indicates the number of unique online sources that
currently cite the article, providing a real-time citation count rather
than a history of citation as with Scopus or Web of Science.
Journal Impact Factor Using journal impact factor to assess the
scientific impact of nursing education research reports is also a
novel approach. Journal impact factor for each research report was
gathered from Journal Citation Reports. For the 2006 and 2007 arti-
cles, 2008 and 2009 data were used, respectively. A two-year time
frame was selected to allow sufficient time for citation. The research
assistant located the journal impact factor and entered the value
directly into a designated Microsoft Excel spreadsheet. Studies
without reported impact factors were excluded from this analysis.
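Journal impact factor, as published in Journal Citation Reports, follows a simple two-year formula, which the text assumes but does not state; a minimal sketch (the numbers in the example are invented):

```python
def impact_factor(citations_in_year, citable_items_prior_two_years):
    """JCR-style journal impact factor: citations received in year Y to
    items the journal published in years Y-1 and Y-2, divided by the
    number of citable items the journal published in those two years."""
    return citations_in_year / citable_items_prior_two_years
```

A journal whose two prior years’ 100 articles drew 200 citations this year would thus have an impact factor of 2.0, which is why 2008 and 2009 JCR data were the earliest releases able to credit citations to the 2006 and 2007 articles.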
h Index The relationship between the h Index of first authors and
methodological quality of the corresponding studies was examined;
one might expect studies with higher MERSQI scores to be conduct-
ed by authors with a higher h Index. This index, a measure of an
author’s scientific productivity and scientific impact, was developed
by Hirsch (2005). An author has index h if h of his or her papers have
at least h citations each and the other papers have fewer than h
citations each (Hirsch, 2005).
The h Index is easily found in the Scopus and Web of Science
databases or can be calculated manually. This study used the h
Index from Scopus, which is available for papers published since
1995. In essence, all of an author’s papers are listed in decreasing
order of the number of citations for that paper, and each paper is
assigned a rank in ascending order (1, 2, 3, and so on) down the list.
The h Index is the highest rank at which the number of citations is
still at least equal to the rank. Thus, an author with a long record of
well-cited publications would tend to have a higher h Index.
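The ranking procedure just described maps directly to code; a minimal sketch of Hirsch’s definition:

```python
def h_index(citation_counts):
    """Hirsch's h Index: the largest h such that h of the author's
    papers have at least h citations each."""
    # List papers in decreasing order of citations and rank them 1, 2, 3, ...
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        if citations >= rank:
            h = rank  # this rank still has at least `rank` citations
        else:
            break
    return h
```

An author whose papers have been cited [10, 8, 5, 4, 3] times has h = 4: four papers with at least four citations each, while the fifth paper falls short of five.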
Other Factors The two other factors examined for their association with methodological quality were the country in which data were collected and funding source. Funding source was designated as: a) internal funding, b) external funding, c) internal and external funding, or d) not stated.

Table 1. Medical Education Research Study Quality Instrument (MERSQI)

DOMAIN / MERSQI ITEM                                             N    PERCENT*
STUDY DESIGN
  Single-group cross-sectional or single-group posttest only     74   55.6
  Single-group pretest and posttest                              25   18.8
  Nonrandomized, two or more groups                              29   21.8
  Randomized controlled trial                                     5    3.8
SAMPLING: NO. OF INSTITUTIONS STUDIED
  1                                                             100   82.7
  2                                                               4    3.0
  >2                                                             19   14.3
SAMPLING: RESPONSE RATE PERCENTAGE
  Not applicable                                                 11
  <50% or not reported                                           35   28.0
  50-74%                                                         28   23.0
  >75%                                                           59   48.4
TYPE OF DATA
  Assessment by study participant (knowledge self-report)        86   64.7
  Objective measurement (knowledge test)                         47   35.3
VALIDITY OF EVALUATION INSTRUMENT: INTERNAL STRUCTURE
  Not applicable                                                 12
  Not reported                                                   63   52.1
  Reported                                                       58   47.9
VALIDITY OF EVALUATION INSTRUMENT: CONTENT VALIDITY
  Not applicable                                                 12
  Not reported                                                   78   64.5
  Reported                                                       43   35.5
VALIDITY OF EVALUATION INSTRUMENT: RELATIONSHIPS TO OTHER VARIABLES
  Not applicable                                                 12
  Not reported                                                  101   83.5
  Reported                                                       20   16.5
DATA ANALYSIS: APPROPRIATENESS OF ANALYSIS
  Inappropriate for study design or type of data                  7    5.3
  Appropriate for study design & type of data                   126   94.7
DATA ANALYSIS: COMPLEXITY OF ANALYSIS
  Descriptive analysis only                                      42   31.6
  Beyond descriptive analysis                                    91   68.4
OUTCOMES
  Satisfaction, attitudes, perceptions, opinions, general facts  84   63.1
  Knowledge, skills                                              40   30.1
  Behaviors                                                       7    5.3
  Patient/health care outcomes                                    2    1.5
*Ratings of “not applicable” are not included in percentages.
PROCEDURE The librarian searched the CINAHL database for
articles that involved nursing students as subjects and were pub-
lished in the second half of 2006 through 2007. The search was
done by using the CINAHL heading “students, nursing,” and lim-
iting the results to publication type “research” and “peer-
reviewed” journals. Ulrich’s Periodicals Directory was checked to
verify that the journals are refereed. In cases where the directory
did not have that information, the journal web pages were checked
to confirm that research articles were subject to peer review.
The librarian then reviewed the abstracts and eliminated
reports that met the exclusion criteria. Those reports meeting the
inclusion criteria were distributed to the four nursing members of
the research team. If the librarian was unsure if a report met the
inclusion criteria, she conferred with the principal investigator,
who made the determination.
Four nurse faculty members of the research team read and
scored the reports using the MERSQI. Initially, all four faculty and
the MERSQI consultant, Dr. Darcy Reed, evaluated four articles.
Once the scoring guidelines were understood and agreed upon, the
rest of the reports were divided into two alphabetical sets (A-M and
N-Z). Each set was scored by two nursing faculty; if they differed
on scores, the final score was derived by consensus. The research
assistant entered the MERSQI scores into a spreadsheet, and two
faculty scorers checked every tenth article for accuracy.
DATA ANALYSIS Data were analyzed using the Statistical
Package for the Social Sciences (SPSS version 17.0) software.
Alpha was set at .05. Descriptive analyses were followed by corre-
lational analyses and comparisons of means to test the following
null hypotheses: 1. There is no association between MERSQI score
and citation rate(s); 2. There is no association between MERSQI
score and h Index; 3. There is no association between MERSQI
score and journal impact factor. In addition, ANOVA was used to
compare MERSQI scores by funding category. Lastly, Student’s t-
test was used to compare studies done within and outside the
United States. All data are expressed as means ± SD.
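The analysis plan above (Pearson correlations, one-way ANOVA across funding categories, and a two-group t-test) can be sketched with SciPy; the arrays below are simulated placeholders, not the study’s data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mersqi = rng.normal(9.8, 2.2, size=133)     # placeholder MERSQI scores
citations = rng.poisson(5.7, size=133)      # placeholder Scopus citation counts
funding = rng.integers(0, 3, size=133)      # 0 = none, 1 = internal, 2 = external
in_us = rng.random(133) < 0.38              # placeholder country indicator

# Null hypothesis 1: no association between MERSQI score and citation count
r, p_corr = stats.pearsonr(mersqi, citations)

# MERSQI scores compared across funding categories with one-way ANOVA
groups = [mersqi[funding == k] for k in range(3)]
f_stat, p_anova = stats.f_oneway(*groups)

# Studies within vs. outside the United States compared with Student's t-test
t_stat, p_t = stats.ttest_ind(mersqi[in_us], mersqi[~in_us])
```

With alpha set at .05, each null hypothesis is rejected when the corresponding p value falls below .05.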
Results REPORT DEMOGRAPHICS One hundred thirty-three
articles, published between July 1, 2006 and December 31, 2007,
met the inclusion criteria. Of these articles, 58 (43.6 percent) were
conducted in North and South America (United States, 50; Canada,
4; Brazil, 3; Mexico, 1); 33 (24.8 percent) in Europe (United
Kingdom, 27 [20.3 percent of total]); 17 (12.8 percent) in
Australia/New Zealand; 14 (10.5 percent) in Asia; 10 (7.5 percent) in
the Middle East; and 1 (0.8 percent) in Africa. Of these studies, 119
(89.5 percent) studied undergraduate nursing students,
10 (7.5 percent) studied graduate nursing students, and
4 (3 percent) did not identify the level of the students.
Twenty-four reports (18.0 percent) also included other
health professional students.
MERSQI SCORE The MERSQI score for the 133
studies was 9.8 ± 2.2 (range 6.0 to 14.5). Table 1 shows
the breakdown in item scoring. The majority (55.6 per-
cent) of studies were cross-sectional in design or
posttest only and involved only one institution. Most
studies (71.4 percent) had high response rates (> 50
percent), but involved self-report by participants (64.7
percent), rather than objective data (35.3 percent). The
validity of the instruments used was generally not reported: internal
structure, content validity, and relationships to other variables went
unreported in 52.1, 64.5, and 83.5 percent of studies, respectively.
Statistical analyses were appropriate (94.7 percent) and the majority (68.4 percent)
included inferential procedures. Most of the study out-
comes were related to student satisfaction and attitude (63.1 percent)
or knowledge/skills (30.1 percent); very few outcomes were behav-
ioral in nature (5.3 percent) or related to patient outcomes (1.5 per-
cent). Cronbach’s alpha for the overall MERSQI was .547.
MERSQI SCORE AND CITATION COUNTS AND h INDEX
Table 2 shows the citation counts and h Index. As expected, the
Google Scholar citation count was higher than the Scopus count,
which was higher than that from Web of Science; all three citation
measures were strongly correlated with one another. The MERSQI
score correlated significantly, but weakly, with citation counts from
Scopus and Google Scholar. The MERSQI score was not signifi-
cantly associated with the Web of Science citation count, nor with
the h Index of the first author. However the first author’s h Index
was significantly correlated with the Web of Science citation count.
MERSQI SCORE AND JOURNAL IMPACT FACTOR Of the 133
articles, 78 (59 percent) were published in journals with an impact
factor (0.996 ± .488, range 0.171 to 2.696). There was a small (r =
.216) but significant correlation (p = .029) between the MERSQI
score of reports published in these journals and their impact fac-
tors. Articles published in journals with an impact factor had sig-
nificantly higher MERSQI scores than those published in journals
without impact factors (F(1, 131) = 4.75, p = .031), with mean
MERSQI scores of 10.1 ± 2.1 and 9.3 ± 2.3, respectively.
Table 2. Citation Counts and Correlations with MERSQI Score

                                          PEARSON CORRELATION WITH
ITEM            MEAN±SD     N    RANGE     MERSQI   WEB OF SCIENCE  SCOPUS  GOOGLE SCHOLAR
MERSQI          9.8±2.2    133   6.0–14.5
Web of Science  4.80±6.00   81   0–39      .138
Scopus          5.74±6.65  125   0–50      .223**   .948**
Google Scholar  9.05±9.91  127   0–77      .224**   .915**          .940**
h Index         4.01±3.36  127   0–19      .023     .197*           .127    .108
*p < .05 (one-tailed); **p < .001 (one-tailed)

MERSQI SCORE AND FUNDING AND COUNTRY Most (n = 99, 74.4 percent) of these studies made no mention of funding; 15 (11.3 percent) had received internal funding; 18 (13.5 percent) had received external funding; and 1 (0.8 percent) had received both internal and external funding. MERSQI scores for those studies with no funding reported were 9.7 ± 2.2; those with internal funding were 9.5 ± 2.2; and those with external funding were 10.5 ± 2.1
(these were not statistically different). Studies conducted in the
United States had significantly higher MERSQI scores (10.3 ± 2.5
vs. 9.5 ± 1.9, F(1,131), p = .027), with these studies scoring
significantly more points for design (χ² = 9.437, p = .024) and validity
of instruments (χ² = 10.820, p = .013).
Discussion This study was the first investigation to evaluate the
methodological quality of nursing education research reports using
an instrument designed specifically for (medical) educational
reports. The major findings of the current study were that nursing
education research reports scored 9.8 ± 2.2 on the MERSQI and
their MERSQI score correlated with citation counts in Scopus and
Google Scholar as well as journal impact factor, but was not associ-
ated with Web of Science citation count or the first authors’ h Index.
Reed and colleagues (2007) reported a Cronbach’s alpha of .6 for
the medical education literature, comparable to the current study’s
finding of .547 for the nursing education literature, suggesting that
the tool performs similarly across the two professions. The MERSQI
score of the nursing education research reports was 0.17 points
lower than that reported for the medical literature (Reed et al.). This
difference may be due to the method used to select articles for
review. Reed and colleagues selected articles, published between
September 2002 and December 2003, from widely cited (well-
known) medical journals, including JAMA, New England Journal of
Medicine, Academic Medicine, Medical Education, Teaching and
Learning in Medicine, Medical Teacher, and seven journals repre-
senting core medical specialty areas. The nursing MERSQI mean
was also 0.9 points lower than the manuscripts accepted for publi-
cation in the Journal of General Internal Medicine (Reed et al.,
2008). This difference is not unexpected; this journal had a 2009
impact factor of 2.65, in contrast to the mean of 0.996 for this study’s
78 journals with an impact factor. Because this study was the first
nursing education research investigation to use the MERSQI, all
articles indexed in CINAHL were selected to ensure inclusiveness.
Comparison of the distribution of data for the individual items
comprising the MERSQI (Table 1) with those reported by Reed et
al. (2007) showed surprisingly similar trends within each domain.
For example, Reed et al. reported the design distribution of their
articles as follows: single-group cross-sectional or posttest, 66.7
percent; pretest and posttest, 15.7 percent; nonrandomized two or
more groups, 14.8 percent; and randomized control trial, 2.8 per-
cent. Therefore, despite the 0.17-point difference in MERSQI
scores between the medical and nursing reports, the methodolog-
ical quality is similar and extends Reed’s medical educational
research finding to health-related educational research.
Regarding the current study’s hypotheses, the MERSQI scores of
this study’s reports correlated with Scopus and Google Scholar citation
counts, but not the citation count from Web of Science, rejecting null
hypothesis No. 1. If one assumes that articles of higher scientific qual-
ity are cited more frequently, these findings suggest that the MERSQI
is indeed a valid instrument to evaluate scientific quality of nursing
education literature. Null hypothesis No. 3 was also rejected because
the MERSQI score was weakly correlated for the journals that have
been appraised for impact factor. The low correlation may be related to
the small range of the MERSQI scores and impact factors.
In addition, this study is the first investigation using the
MERSQI to examine the relation between the MERSQI score and h
Index (null hypothesis No. 2). This hypothesis was not rejected,
indicating no detectable association. The h Index for this group of authors
was relatively low, with 42 percent having an h Index of 2 or less.
This result suggests that many authors may be new to publishing or
do not publish extensively. To advance the science of nursing edu-
cation, nursing needs to develop and fund nurse researchers to sup-
port their programs of research in this area.
Reed and her colleagues (2007) examined categories of funding
amounts: $0, $1 to $19,999, and more than $20,000. Funding of
$20,000 or more was associated with a 1.29-point increase in the
MERSQI score; in contrast, the MERSQI score did not differ by the
current study’s funding categories of a) unfunded or b) funded by
internal or external sources. The investigations reviewed in the
current study are very likely to have been funded at very low levels,
accounting for this difference in findings.
The distribution of scores in all domains suggests that the
methodological quality of nursing (and medical) education articles
can be increased by a number of strategies. First, objective meas-
ures of outcomes need to be used. Instruments used should have
internal and content validity and correlate with other instruments
designed to measure similar or related concepts. This science must
be advanced from the measurement of attitudes and knowledge to
the measurement of behaviors and patient outcomes. Finally, more
complex experimental designs should be used. To increase the gen-
eralizability of the findings, more collaborative research across uni-
versities throughout the world needs to be done.
Regarding collaborations, this study showed a difference in
methodological quality between studies conducted within and out-
side the United States. This finding may be related to the education
and experience of nurse researchers in the United States. In their
review of the nursing education research literature from 1991 to
2000, Yonge et al. (2005) found that most (58 percent) of the quan-
titative, qualitative, or mixed-methods research originated in North
America; 83 percent of these were generated in the United States
and 17 percent were generated in Canada. This predominance of
North American or US studies is similar to the current study’s find-
ings. However, higher percentages in studies conducted in
Australia/New Zealand (12.8 vs. 6.7 [New Zealand not reported]),
Asia (10.5 vs. 2.8), and the Middle East (7.5 vs. not reported) were
observed in the current investigation (total 9.5 percent vs. 30.8
percent), suggesting that quantitative nursing education research
studies are increasing outside the United States. This rise repre-
sents an exciting opportunity to explore the development of studies
that involve collaborations across continents and address global
nursing education issues.
Limitations of the Study This study is the first of its kind in the nursing education literature. As such, it was limited to the evaluation of quantitative research articles published three years earlier, within an 18-month time period. A wide diversity of journals indexed in CINAHL was included, but qualitative research was not considered. Had the focus been on the premier nursing journals, this study's findings might be very different. In this study, Cronbach's alpha for the MERSQI was low (.547, compared with the .60 reported for the medical literature; Reed et al., 2007); ideally, a new instrument should show a Cronbach's alpha of at least .70. Although MERSQI scores of reports in journals with an impact factor were higher than those of reports from journals without an impact factor, the difference was not statistically significant. Research published today may have greater methodological rigor than the sample used in this study.
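As a hedged illustration of the internal-consistency statistic discussed above, the following sketch computes Cronbach's alpha from item-level scores. The data are hypothetical, not taken from the study.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for k items administered to the same respondents.

    items: list of k columns, where items[i][j] is respondent j's score on item i.
    Formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
    """
    k = len(items)
    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))

# Hypothetical 3-item scale answered by five respondents
items = [[3, 4, 4, 5, 2],
         [3, 5, 4, 4, 2],
         [2, 4, 5, 5, 3]]
alpha = cronbach_alpha(items)
```

An alpha of at least .70 is the conventional benchmark the authors cite for a new instrument; values near the MERSQI's .547 would flag weak internal consistency.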
Conclusion This evaluation suggests that the MERSQI is applicable to the assessment of nursing education research reports. The use of the MERSQI is likely to improve the quality of nursing education research by: a) providing a guideline for nursing education researchers as they develop their studies; b) providing a template for the preparation of nursing education research reports; c) providing a template for the evaluation of research reports; d) allowing for the evaluation of the quality of nursing education research reports across journals, countries of origin, years of publication, and funding levels; and e) providing justification for increased funding for nursing education research.
About the Authors The authors are faculty at the School of Nursing, University of Nevada, Las Vegas. Carolyn B. Yucha, PhD, RN, FAAN, is dean and professor. Barbara St. Pierre Schneider, DNSc, RN, is associate professor; Tish Smyer, DNSc, RN, is professor; and Susan Kowalski, PhD, RN, is faculty emeritus. Eva Stowers, MA, is health sciences librarian. The authors would like to thank: Darcy Reed, MD, MPH, assistant professor, Mayo Clinic College of Medicine, for her helpful discussions regarding the Medical Education Research Study Quality Instrument (MERSQI); Kirsten Speck and Isabelle Stoate, graduate students, for their valuable assistance with data entry; and Jennifer Orozco, for data verification. This study was funded, in part, by the National League for Nursing Ruth Corcoran Grant for Nursing Education Research. For more information, contact Dr. Yucha at Carolyn.yucha@unlv.edu.
Key Words Citation Rate – Impact Factor – Methodological Quality – Nursing Education Research
References
Benner, P., Sutphen, M., Leonard, V., & Day, L. (2010). Educating nurses: A call for radical transformation. San Francisco: Jossey-Bass.
Broome, M. E. (2009). Building the science for nursing education: Vision or improbable dream [Editorial]. Nursing Outlook, 57, 177-179. doi: 10.1016/j.outlook.2009.05.005
Cook, D. A., Beckman, T. J., & Bordage, G. (2007). A systematic review of titles and abstracts of experimental studies in medical education: Many informative elements missing. Medical Education, 41, 1074-1081. doi: 10.1111/j.1365-2923.2007.02861.x
Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102, 16569-16572. doi: 10.1073/pnas.0507655102
Institute of Medicine. (2011). The future of nursing: Leading change, advancing health. Retrieved from http://iom.edu/Reports/2010/The-Future-of-Nursing-Leading-Change-Advancing-Health.aspx
Reed, D., Price, E. G., Windish, D. M., Wright, S. M., Gozu, A., Hsu, E. B., … Bass, E. B. (2005). Challenges in systematic reviews of educational intervention studies. Annals of Internal Medicine, 142(12 II), 1080-1089.
Reed, D. A., Beckman, T. J., & Wright, S. M. (2009). An assessment of the methodologic quality of medical education research studies published in American Journal of Surgery. American Journal of Surgery, 198, 442-444. doi: 10.1016/j.amjsurg.2009.01.024
Reed, D. A., Beckman, T. J., Wright, S. M., Levine, R. B., Kern, D. E., & Cook, D. A. (2008). Predictive validity evidence for medical education research study quality instrument scores: Quality of submissions to JGIM's medical education special issue. Journal of General Internal Medicine, 23, 903-907. doi: 10.1007/s11606-008-0664-3
Reed, D. A., Cook, D. A., Beckman, T. J., Levine, R. B., Kern, D. E., & Wright, S. M. (2007). Association between funding and quality of published medical education research. Journal of the American Medical Association, 298, 1002-1009. doi: 10.1001/jama.298.9.1002
Reed, D. A., Kern, D. E., Levine, R. B., & Wright, S. M. (2005). Costs and funding for published medical education research. Journal of the American Medical Association, 294, 1052-1057. doi: 10.1001/jama.294.9.1052
Stevens, K. A., & Cassidy, V. R. (1999). Evidence-based teaching: Current research in nursing education. Boston: Jones and Bartlett.
Tanner, C. A., & Lindeman, C. A. (1987). Research in nursing education: Assumptions and priorities. Journal of Nursing Education, 26, 50-59.
Yonge, O. J., Anderson, M., Profetto-McGrath, J., Olson, J. K., Skillen, D. L., Boman, J., … Day, R. (2005). An inventory of nursing education research. International Journal of Nursing Education Scholarship, 2, i-11. doi: 10.2202/1548-923X.1095
NLN
Copyright of Nursing Education Perspectives is the property of National League for Nursing and its content
may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder’s express
written permission. However, users may print, download, or email articles for individual use.
Statistics Used in Current Nursing
Research
Kathleen Zellner, MSN, RN, BC; Connie J. Boerst, MSN, RN, BC; and
Wil Tabb, PhD
ABSTRACT
Undergraduate nursing research courses should emphasize the statistics most commonly used in the nursing literature to strengthen students' and beginning researchers' understanding of them. To determine the most commonly used statistics, we reviewed all quantitative research articles published in 13 nursing journals in 2000. The findings supported Beitz's categorization of kinds of statistics. Ten primary statistics used in 80% of nursing research published in 2000 were identified. We recommend that the appropriate use of those top 10 statistics be emphasized in undergraduate nursing education and that the nursing profession continue to advocate for the use of methods (e.g., power analysis, odds ratio) that may contribute to the advancement of nursing research.
A dilemma faced by teachers of nursing research in undergraduate baccalaureate programs is identifying the key concepts that would be most meaningful to students as they enter professional practice. Statistics have been an integral part of nursing practice and research since the beginning of modern nursing. Florence Nightingale's innovative use of descriptive statistics and her use of the now-famous radial graphs, sometimes called coxcombs, represent an early and effective use of statistical analysis of data in nursing research. This marked the spread of the use of statistical analysis in nursing, which migrated from the larger field of 19th-century biomedical research (Nightingale, 1859).
Nursing curricula include a component of statistical methods as part of nursing research courses to enable students to understand current research and contribute to its ongoing discussion. Learning statistics can be a difficult task for any student, and nursing students often struggle with learning the concepts and transferring that knowledge to the nursing literature (Schuster & Ritchey, 1998). Such difficulty can often deter students from furthering their nursing degree (Stranahan, 1995).
In developing statistics courses for undergraduate nursing students, it is important to consider how to teach statistics, what statistics to teach, and what statistics are common in the literature to provide students with a basic comprehension of data that will enable them to interpret statistical results. Therefore, it is important to identify the statistical tests and procedures needed to comprehend modern nursing research.
Nursing journals frequently publish articles in which a basic understanding of statistics is assumed for an educated reading of the research. To guide nursing instructors, we aimed in this research to identify the statistical tests used most frequently in nursing research articles. We anticipate that this information could then help faculty in emphasizing the statistics nurse readers would most likely encounter.
THEORETICAL FRAMEWORK
The teaching of statistics can be linked to the Conversation Theory of Gordon Pask (1975). His theory focused on the premise that "learning occurs through conversations about a subject matter which serve to make knowledge explicit" (Theory Into Practice Database, n.d., Overview Section, ¶1). Pask's (1975) theory serves as a foundational component to student learning of statistics through the application of cognitive learning. His methods of teaching relationships and the use of problem-solving strategies facilitate an understanding of statistics. Pask's work provided the early steps for later use of learning theories related to teaching statistics. An appropriate assumption made on the basis of Pask's theory is to teach statistics so students have the advantage of performing statistics while learning the concepts.

Received: October 1, 2004
Accepted: June 30, 2005
Ms. Zellner is Associate Professor and Ms. Boerst is Associate Professor and Undergraduate Program Director, Bellin College of Nursing, and Dr. Tabb is ad hoc professor, University of Wisconsin Green Bay, Green Bay, Wisconsin.
Address correspondence to Connie J. Boerst, MSN, RN, BC, Associate Professor and Undergraduate Program Director, Bellin College of Nursing, 725 South Webster Avenue, PO Box 23400, Green Bay, WI 54305-3400; e-mail: connie.boerst@bcon.edu.
February 2007, Vol. 46, No. 2
LITERATURE REVIEW
Johnson (1984) and Knapp and Miller (1987) wrote two of the early articles addressing the incorporation of statistical concepts and biometrical methods into undergraduate nursing curricula. Johnson (1984) argued that many nursing students have high levels of anxiety regarding their mathematical ability and recommended that statistical concepts be taught simultaneously with a nursing research course. Knapp and Miller (1987) recommended that in addition to requiring college algebra, statistics, and an introduction to biomedical computing, nursing programs must ensure that these courses are relevant to nursing. Similarly, Stranahan (1995) concluded that being enrolled in both classes simultaneously is more beneficial to learning. In addition, Schuster and Ritchey (1998) agreed and found that an interdisciplinary statistics course incorporating the mathematics of both statistics and nursing research enabled students to interpret "the meaning of the statistics within the context of the nursing research literature" (p. 34).
Johnson (1984), Knapp and Miller (1987), Beitz and Wolf (1997), and Robinson (2001) asserted that to make statistics meaningful, the examples used in teaching statistical techniques need to be relevant to learners. Beitz and Wolf (1997) recommended that to make statistical concepts seem less abstract, educators should build in concrete activities that make the concepts more meaningful. According to Beitz (1998), students' greatest confusion in using statistics is in transferring statistical knowledge and selecting the most appropriate statistical test for a given situation. To that end, Beitz developed an organized table that lists the statistical tests central to nursing research and provides students with an overview of the definition and application of a variety of statistical measures. Robinson (2001) acknowledged Beitz's work and developed the Guidelines for Statistical Analysis (p. 137), as well as the Golden Rules for Statistical Analysis Adequacy (p. 138).
Authors agree that knowledge of statistics is a vital and essential skill, necessary for the progression of the nursing profession, and needed for students to be able to read, interpret, and integrate nursing research (Robinson, 2001; Schuster & Ritchey, 1998; Taylor & Muncer, 2000).
The literature offers little information regarding which, if any, statistical techniques are used most frequently and which should therefore receive greater emphasis in statistics courses. Although Beitz (1998) developed a chart listing the major statistical tests, she provided no explanation of the criteria for a test to be considered major. If frequently used tests could be identified, taught, and emphasized in undergraduate programs, educators could spend more time providing application opportunities and less time on seldom used or higher level statistics that are beyond the needs of beginning learners.
Graduate nursing research courses could build on this knowledge, as Knapp and Miller (1987) suggested. Graduate courses could explore the lesser known but equally important, higher level statistical techniques. The purpose of this article was to identify the most frequently used statistics in nursing research today so that educators would be able to share the results with beginning nursing researchers, enabling them to apply and critique this information.
A content analysis of current nursing research provides many useful results. It informs readers of current topics of professional interest and summarizes the principal methods used in research. It identifies the topics that might be highlighted in classroom discussions to empower students to become more literate readers of nursing journals. This study joins a number of analyses of statistical methods in professional research (Beitz, 1998; Polit & Sherman, 1990; Robinson, 2001; Taylor & Muncer, 2000), which identify the key statistical tools used in research. As such, it serves as a guide to educators about appropriate classroom topics and a resource to learners of essential research skills.
METHOD
We reviewed a selection of 462 articles from 13 nursing journals published in 2000. In choosing the journals for inclusion in the study, we believed it desirable to incorporate those with wide readership, as well as those publishing extensive scholarship. The nursing journals used in this study were identified through a three-step process. First, we used the "Brandon/Hill Selected List of Print Nursing Books and Journals" (B/H) by Hill and Stickell (2002). The B/H is a "guide for nurses and librarians who find themselves responsible for choosing a collection of current nursing literature" (Hill & Stickell, 2002, p. 100); it also includes a comprehensive list of journals identified as key components of any library.
Using the B/H list, we then cross-referenced those journals with the Key and Electronic Nursing Journals: Characteristics and Database Coverage, 2001 ed. (KeNJ), a comprehensive guide of more than 200 national and international nursing journals compiled by Margaret Allen (2001). Journals are included on the KeNJ list on the basis of criteria of peer review; research percentages (i.e., percentages of research articles for the journal for a particular year); inclusion in the B/H, the Canadian Nursing Association list, and/or the Nursing Research Journals Index; or being indexed in the British Nursing Index, CINAHL, PubMed/Medline, and other databases.
The final step was to identify those journals for which the annual percentage of published research articles was 40% or more. Thirteen journals, representing 11 specialty areas, were chosen using this process. See Table 1 for a complete list of the journals selected, including the percentage of research published annually and the number of articles reviewed for this study. Each issue of the selected journals was reviewed, and all quantitative research articles (N = 462) were scrutinized for the kinds of statistical methods used. By mutual agreement, we designed a data-reporting grid,
focusing on the specific statistical methods used. Because all authors were responsible for data collection, interrater reliability was established via concurrent review of two previously published research articles, with group discussion and review to determine consistency of data identification.
RESULTS
After all 462 articles were reviewed, it became apparent that regardless of journal orientation or focus, authors used and consistently reported using similar statistical methods. These similarities served as the basis for creating a set of categories of statistical methods that would summarize our review. The typology used by this study, although developed independently, came to mirror the descriptions of statistical techniques noted in other studies (Beitz, 1998); the results are reported in Table 2.
Descriptive Statistics
Descriptive statistics were the tools used most frequently in the nursing research articles. Measures of central tendency and dispersion accounted for approximately 63% of the statistics used. Authors assumed readers were familiar with the basic concepts of mean, median, variance, and standard deviation, as well as the use of percentiles and percentages in research. Many articles also displayed these statistics in plots or graphs. Although these tools were often carried into more formal discussions, they in themselves often provided useful and relevant information gained from the nursing research. These techniques are an assumed base of knowledge for nurses and are another indication of the ongoing need for nursing students to master mathematical skills early and comprehensively, as advocated by Knapp and Miller (1987).
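The descriptive measures named above map directly onto Python's standard library; the following sketch uses hypothetical scores, not data from the review, to show how each is computed.

```python
import statistics

# Hypothetical scale scores for ten respondents (illustrative only)
scores = [72, 85, 90, 78, 88, 95, 70, 82, 85, 91]

mean = statistics.mean(scores)                  # central tendency
median = statistics.median(scores)
sd = statistics.stdev(scores)                   # sample standard deviation (dispersion)
data_range = max(scores) - min(scores)          # range
q1, q2, q3 = statistics.quantiles(scores, n=4)  # quartiles; q2 equals the median
```

Percentages and frequency distributions, the other common descriptive tools, reduce to simple counts over the same data.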
Inferential Statistics
Most authors subjected their instrument to a measurement of reliability; the most commonly applied measurement was Cronbach's alpha. However, the next step of analysis, the examination of the scores of the instrument, varied widely. Many authors simply tabulated the scores or plotted their frequency.
Interestingly, although most researchers used well-known statistical methods, such as t tests, analysis of variance (ANOVA), and regression analysis in its many forms, individual research needs often guided authors to other, lesser known tests. Although it is possible to compile a list of common statistical tests (e.g., the top 10 list presented in Table 3), a comprehensive handbook of statistical methods is always an important reference tool to consult when reading nursing research (Polit, 1996).
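As an illustration of the most familiar of these tests, a two-sample t statistic can be computed directly from sample means and variances. This sketch uses Welch's unequal-variance form and hypothetical data; it is not drawn from the articles reviewed.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (no equal-variance assumption)."""
    # t = (difference in means) / sqrt(var_a/n_a + var_b/n_b)
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Hypothetical outcome scores for an intervention group and a control group
intervention = [5, 6, 7, 8, 9]
control = [1, 2, 3, 4, 5]
t_stat = welch_t(intervention, control)
```

In practice the statistic would be compared against a t distribution with the appropriate degrees of freedom to obtain a p value.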
Additional Findings
Articles addressing the nature of biomedical research (Cummings & Rivara, 2003; International Committee of Medical Journal Editors, 1997) have suggested that it would be more appropriate to use odds ratios as a way of expressing research outcomes. Where the outcomes of logistic regression required it, this method of statistical reporting was described. In reviewing the articles for this study, we found that this approach is just beginning to be used in nursing research and, in fact, would have been number 11 if the top 10 list were expanded.
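For readers unfamiliar with the measure, an odds ratio compares the odds of an outcome between two groups in a 2x2 table. The sketch below uses hypothetical counts, not data from the review.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:

                 outcome   no outcome
    exposed         a          b
    unexposed       c          d

    OR = (a/b) / (c/d) = (a * d) / (b * c).
    """
    return (a * d) / (b * c)

# Hypothetical counts: 20 of 100 exposed and 10 of 100 unexposed had the outcome
or_value = odds_ratio(20, 80, 10, 90)  # odds 0.25 vs. 0.11..., so OR = 2.25
```

An odds ratio of 1.0 indicates no association; values above 1.0 indicate higher odds of the outcome in the exposed group.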
Table 1
Journals Selected for Inclusion in the Study, Percentage of Research Published Annually, and Number of Articles Reviewed

Journal Title | Specialty Area | % of Research Published Annually | No. of Articles Reviewed
American Journal of Critical Care | Critical care | 73 | 28
Applied Nursing Research | Research | 71 | 22
Cancer Nursing | Cancer | 69 | 34
Heart & Lung: The Journal of Acute and Critical Care | Critical care | 58 | 25
Journal of Advanced Nursing | Research | 65 | 160
Journal of Community Health | Community health | 60 | 12
Journal of Nursing Administration | Administration | 35 (a) | 28
Journal of Nursing Care Quality | Administration | 41 | 15
Journal of Nursing Education | Education | 46 | 22
Journal of Nursing Scholarship | Research | 48 | 27
Journal of Obstetric, Gynecologic, and Neonatal Nursing | Maternal-child | 40 | 24
Nursing Research | Research | 83 | 42
Western Journal of Nursing Research | Research | 73 | 23
(a) Although below the cut-off point of 40% research published annually, this journal was selected for its accessibility and wide readership.

Our readings also confirmed that nursing research has not included reference to the appropriateness of sample size, which often limited the generalization of study conclusions (Kachoyeanos, 1998; Polit & Sherman, 1990). Although sample sizes were usually clearly defined and the methods used for sample selection discussed, a closer analysis of sample size was often lacking. Readers were then required to consult standard works on research design and sampling methods to judge the adequacy of the sample for the research questions explored. Nurse researchers are still not using power analysis, which is used to estimate the size of a sample needed to obtain a significant result, in their discussions. Kachoyeanos (1998) noted that "In recent review of nursing research, the overall lack of adequate statistical power became quite clear" (p. 105). In addition, Polit and Sherman (1990) stated that "nursing research needs to pay greater attention to issues of power in designing their studies" (p. 368).
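Power analysis, as described above, estimates the sample size needed to detect an effect. The following sketch uses the common normal-approximation formula for a two-sided, two-sample comparison; the effect size, alpha, and power values are illustrative defaults, not figures from the article.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample test.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = .05
    z_power = z.inv_cdf(power)          # about 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A medium effect (d = 0.5) at the defaults needs roughly 63 participants per group
n = n_per_group(0.5)
```

Exact calculations based on the t distribution give slightly larger samples, which is why published power tables often quote 64 per group for this scenario.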
CONCLUSION
The widespread use of statistical methods in a variety of nursing journals underscores the importance of including statistical skills in nursing education. A review of more than 400 nursing research articles revealed that regardless of journal orientation and focus, the same 10 statistics were repeatedly used in approximately 80% of the research. These 10 statistics were included in the chart developed by Beitz (1998) and termed "major statistics and statistical tests" (p. 49). We contend that initial statistics and nursing research courses should emphasize these 10 statistics to strengthen students' and beginning researchers' understanding.
Pask’s Conversation Theory (1975), while applicable to
any teachable subject, provides an extensive overview and
TablE 2
Statistics Used in the Journal articles
Reviewed and Frequency of Their Occurrence
Statistic n (%)
Descriptive statistics: Measures of central tendency
Mean 261 (12.4)
Median 41 (2)
Mode 12 (0.6)
Frequency distribution 189 (9)
Graphs and plots
Bar graphs 61 (2.9)
Dot plot 16 (0.8)
Line graph 25 (1.2)
Skew 3 (0.1)
Descriptive statistics: Measures of dispersion
Variance 4 (0.2)
Standard deviation 209 (10)
Range 129 (6.1)
Percentages, percentiles, and quartiles 361 (17.2)
Inferential statistics: Parametric
Z score 4 (0.2)
t test, independent and dependent 124 (5.9)
Analysis of variance (ANOVA), all kinds 100 (4.8)
Multiple comparison/post-hoc tests
(e.g., Scheffe, Tukey)
34 (1.6)
Analysis of covariance (ANCOVA) 12 (0.6)
Correlation 109 (5.2)
Cronbach’s alpha 82 (3.9)
Regression, all kinds 77 (3.7)
Odds ratio 27 (1.3)
Discriminant analysis 2 (0.1)
Factor analysis 23 (1.1)
Inferential statistics: Nonparametric
Chi square 114 (5.4)
Mann-Whitney test 18 (0.9)
Kruskal-Wallis test 9 (0.4)
Wilcoxon’s test 13 (0.6)
Fisher’s exact test 12 (0.6)
McNemar test 5 (0.2)
Power analysis 24 (1.1)
Total 2100 (100.1)
Note. Percentages do not equal exactly 100% due to rounding.
TablE 3
Top 10 Statisticsa Used in the Journal articles
Reviewed and Frequency of Their Occurrence
Statistic n (%)
Descriptive statistics: Measures of central tendency
Mean 261 (12.4)
Frequency distribution 189 (9)
Descriptive statistics: Measures of dispersion
Standard deviation 209 (10)
Range 129 (6.1)
Percentages, percentiles, and quartiles 361 (17.2)
Inferential statistics: Parametric
t test, independent and dependent 124 (5.9)
Analysis of variance (ANOVA), all kinds 100 (4.8)
Correlation 109 (5.2)
Cronbach’s alpha 82 (3.9)
Inferential statistics: Nonparametric
Chi square 114 (5.4)
Total 1678 (79.9)
a The top 10 statistics represent approximately 80% of all statistical
measures used in the 462 articles reviewed.
58 Journal of Nursing Education
discussion of the use of learning statistics. The main empha-
sis of the theory is to teach back what a person had learned;
in other words, manipulation of the subject matter facili-
tates learning. Although not all students learn in the same
manner, using Pask’s theory would allow students to learn
the subject as well as the relationships among the concepts,
which aids in the understanding and application of statis-
tics and how this applies to nursing.
At the same time, as nursing research becomes more sophisticated, it is clear that greater understanding of the techniques and issues of quantitative study needs to be emphasized. The nursing profession should continue to move forward in the use of more advanced statistical analyses, including logistic regression and power analysis.
REFERENCES
Allen, M. (2001). Key and electronic nursing journals: Characteristics and database coverage, 2001 ed. Retrieved November 18, 2003, from http://nahrs.library.kent.edu/resource/reports/keyjrnls_intro2001ed
Beitz, J. M. (1998). Helping students learn and apply statistical analysis: A metacognitive approach. Nurse Educator, 23(1), 49-51.
Beitz, J. M., & Wolf, Z. R. (1997). Creative strategies for teaching statistical concepts in nursing education. Nurse Educator, 22(1), 30-34.
Cummings, P., & Rivara, F. P. (2003). Reporting statistical information in medical journal articles. Archives of Pediatrics & Adolescent Medicine, 157, 321-324.
Hill, D. R., & Stickell, H. N. (2002). Brandon/Hill selected list of print nursing books and journals. Nursing Outlook, 50, 100-113.
International Committee of Medical Journal Editors. (1997). Uniform requirements for manuscripts submitted to biomedical journals. Journal of the American Medical Association, 277, 927-934.
Johnson, J. M. (1984). Strategies for teaching nursing research: Strategies for including statistical concepts in a course in research methodology for baccalaureate nursing students. Western Journal of Nursing Research, 6, 259-264.
Kachoyeanos, M. K. (1998). The significance of power in research design (part I). MCN, 23, 105.
Knapp, R. G., & Miller, M. C., III. (1987). Some thoughts on biometrical training in colleges of nursing. Nurse Educator, 12(6), 5.
Nightingale, F. (1859). A contribution to the sanitary history of the British Army during the late war with Russia. London, UK: John W. Parker and Son.
Pask, G. (1975). Conversation, cognition and learning: A cybernetic theory and methodology. Amsterdam: Elsevier.
Polit, D. F. (1996). Data analysis and statistics for nursing research. Stamford, CT: Appleton & Lange.
Polit, D. F., & Sherman, R. E. (1990). Statistical power in nursing research. Nursing Research, 39, 365-369.
Robinson, J. H. (2001). Mastering research critique and statistical interpretation: Guidelines and golden rules. Nurse Educator, 26, 136-141.
Schuster, P., & Ritchey, N. (1998). Teaching introductory statistics to baccalaureate nursing students. Nurse Educator, 23(5), 34.
Stranahan, S. D. (1995). Sequence of research and statistics courses and student outcomes. Western Journal of Nursing Research, 17, 695-699.
Taylor, S., & Muncer, S. (2000). Redressing the power and effect of significance. A new approach to an old problem: Teaching statistics to nursing students. Nurse Education Today, 20, 358-364.
Theory Into Practice Database. (n.d.). Conversation theory (G. Pask). Retrieved July 16, 2004, from http://tip.psychology.org/pask.html
Article Analysis 2

Article Citation and Permalink (APA format) | Article 1 | Article 2

Point | Description | Description
Broad Topic Area/Title | |
Define Hypotheses | |
Define Independent and Dependent Variables and Types of Data for Variables | |
Population of Interest for the Study | |
Sample | |
Sampling Method | |
How Were Data Collected? | |

© 2019. Grand Canyon University. All Rights Reserved.
Rubric Print Format
Course Code | Class Code | Assignment Title | Total Points | ||||||||||||
HLT-362V | HLT-362V-OL191 | Article Analysis 2 | 130.0 | ||||||||||||
Criteria | Percentage | 1: Unsatisfactory (0.00%) | 2: Less Than Satisfactory (65.00%) | 3: Satisfactory (75.00%) | 4: Good (85.00%) | 5: Excellent (100.00%) | Comments | Points Earned | |||||||
Content | 100.0% | ||||||||||||||
Two Quantitative Articles | 10.0% | Fewer than two articles are presented. None of the articles presented use quantitative research. | N/A | Two articles are presented. Of the articles presented, only one article is based on quantitative research. | Two articles are presented. Both articles are based on quantitative research. | ||||||||||
Article Citation and Permalink | Article citation and permalink are omitted. | Article citation and permalink are presented. There are significant errors. Page numbers are not indicated to cite information, or the page numbers are incorrect. | Article citation and permalink are presented. Article citation is presented in APA format, but there are errors. Page numbers to cite information are missing, or incorrect, in some areas. | Article citation and permalink are presented. Article citation is presented in APA format. Page numbers are used to cite information. There are minor errors. | Article citation and permalink are presented. Article citation is accurately presented in APA format. Page numbers are accurate and used in all areas when citing information. | ||||||||||
Broad Topic Area/Title | Broad topic area and title are omitted. | Broad topic area and title are referenced but are incomplete. | Broad topic area and title are summarized. There are some minor inaccuracies. | Broad topic area and title are presented. There are some minor errors, but the content overall is accurate. | Broad topic area and title are fully presented and accurate. | ||||||||||
Hypothesis | Definition of hypothesis is omitted, or the definition of the hypothesis is incorrect. | Hypothesis is summarized. There are major inaccuracies or omissions. | Hypothesis is generally defined. There are some minor inaccuracies. | Hypothesis is defined. There are some minor inaccuracies. | Hypothesis is accurate and clearly defined. | ||||||||||
Independent and Dependent Variable Type and Data for Variable | Variable types and data for variables are omitted. | Variable types and data for variables are presented. There are major inaccuracies or omissions. | Variable types and data for variables are presented. There are inaccuracies. | Variable types and data for variables are presented. Minor detail is needed for accuracy. | Variable types and data for variables are presented and accurate. | ||||||||||
Population of Interest for the Study | Population of interest for the study is omitted. | Population of interest for the study is presented. There are major inaccuracies or omissions. | Population of interest for the study is presented. There are inaccuracies. | Population of interest for the study is presented. Minor detail is needed for accuracy. | Population of interest for the study is presented and accurate. | ||||||||||
Sample | Sample is omitted. | Sample is presented. There are major inaccuracies or omissions. | Sample is presented. There are inaccuracies. | Sample is presented. Minor detail is needed for accuracy. Page citation for sample information is provided. | Sample is presented and accurate. Page citation for sample information is provided. | ||||||||||
Sampling Method | Sampling method is omitted. | Sampling is presented. There are major inaccuracies or omissions. | Sampling is presented. There are inaccuracies. Page citation for sample information is omitted. | Sampling is presented. Minor detail is needed for accuracy. | Sampling method is presented and accurate. | ||||||||||
How Was Data Collected | The means of data collection are omitted. | The means of data collection are presented. There are major inaccuracies or omissions. | The means of data collection are presented. There are inaccuracies. Page citation for sample information is omitted. | The means of data collection are presented. Minor detail is needed for accuracy. Page citation for sample information is provided. | The means of data collection are presented and accurate. Page citation for sample information is provided. | ||||||||||
Mechanics of Writing (includes spelling, punctuation, grammar, and language use) | Surface errors are pervasive enough that they impede communication of meaning. Inappropriate word choice or sentence construction is employed. | Frequent and repetitive mechanical errors distract the reader. Inconsistencies in language choice (register) or word choice are present. Sentence structure is correct but not varied. | Some mechanical errors or typos are present, but they are not overly distracting to the reader. Correct and varied sentence structure and audience-appropriate language are employed. | Prose is largely free of mechanical errors, although a few may be present. The writer uses a variety of effective sentence structures and figures of speech. | The writer is clearly in command of standard, written, academic English. | ||||||||||
Total Weightage | 100% |