Stakeholders and Creating a Culture of Assessment focus on their involvement in the teaching process.
We provide a set of 50 Guidelines to use in judging the quality and effectiveness of an assessment.
The major areas are:
Having a clear purpose and readiness for assessment
Involving stakeholders throughout the assessment process
What and how to assess is critical
Assessment is telling a story
Improvement and follow-up are an integral part of the assessment process
The following Guidelines are intended for use in planning, implementing, and/or
judging the benefits and contributions of campus-based assessment efforts. The
Guidelines were developed through conversations with institutional researchers,
faculty, practitioners, and assessment scholars that focused on which aspects of
the assessment process were most important in optimizing the utility of
assessment efforts on college campuses. Additionally, the authors of the Guidelines
reviewed the major publications focused on assessment utilization and drew from
their collective experience of over 50 years working in the area of higher education
assessment.
The Guidelines stress that assessment must be strategic in its intent and function
and that stakeholders should primarily use assessment to improve the activities,
programs, or institutions for which they are responsible and accountable. The
Guidelines also focus on enhancing and fostering student learning.
Having a clear purpose and readiness for assessment
1. We acknowledge the importance of aligning assessment approaches with the culture and mission of the institution.
2. We have developed a culture of assessment on campus in which we
regularly assess student learning throughout all areas of the institution.
3. We acknowledge that assessment is often driven by external demands, but
the primary commitment to assess is to improve student learning.
4. We assess so that we can understand what and how students learn as a
result of their educational experiences.
5. We consider assessment to be an integral part of strategic planning
efforts.
6. We purposefully view assessment as an important process in organizational
decision-making.
7. We recognize the importance of developing a comprehensive assessment
plan prior to collecting data.
8. We emphasize the use of assessment evidence in planning and
implementation processes.
9. We have sufficient fiscal and human resources to address the feasibility of
assessment plans.
10. We recognize that the social, cultural, and racial/ethnic backgrounds of
students, faculty, staff, and administrators provide critical perspectives in
the planning, data collection, and interpretation phases of the assessment
plan.
GUIDELINES FOR JUDGING THE EFFECTIVENESS OF ASSESSING STUDENT LEARNING
This publication was written by Larry A. Braskamp (Professor Emeritus of Education and former Provost, Loyola University Chicago) and Mark E. Engberg (Associate Professor in Higher Education, Loyola University Chicago).
Involving stakeholders throughout the assessment process
1. We include stakeholders in all phases of the assessment process, from determining central questions and issues to interpreting the meaning and merit of different findings.
2. We recognize the importance of including primary stakeholders (i.e., administrators, faculty, staff,
and students) who are directly involved in educational experiences.
3. We design assessment plans to ensure a sense of ownership among the various stakeholders.
4. We identify assessment “champions” who demonstrate a sincere commitment to improving student
learning.
5. We understand the importance of consensus-building among different stakeholders in developing the
various phases of assessment plans.
6. We acknowledge the political nature of assessment and the importance of developing strategies for
dealing with potential conflicts and tensions among different stakeholders.
7. We recognize that the varying goals, needs, and backgrounds of different stakeholders may influence
how they interpret and use assessment evidence.
8. We develop specific sessions to ensure the assessment plan is understandable, relevant, and
acceptable to the stakeholders.
9. We recognize that assessment is most effective and useful when it engages different stakeholders in
conversations about what the evidence means to them.
10. We advocate a culture of openness, trust, and commitment to self-examination among different
stakeholders.
What and how to assess is critical
1. We stress the importance of collecting evidence that is congruent with the goals of the institution, including departmental and programmatic objectives.
2. We include evidence of student background characteristics (inputs), student educational experiences
(environment), and student learning (outcomes) in data collection plans.
3. We advocate “high standards but not high standardization” in defining quality.
4. We recognize benefits and limitations in choosing either locally-developed or externally-based
assessment instruments.
5. We acknowledge the importance of accuracy and feasibility in choosing different assessment
approaches and consult with measurement and assessment experts accordingly.
6. We gather evidence using both quantitative and qualitative approaches to collectively understand
what students learn and how they make meaning of their educational experiences.
7. We triangulate evidence to identify areas of consistency and inconsistency across different findings.
8. We employ pilot testing to ensure the face validity of survey instruments and interview protocols.
9. We recognize the limitations of different assessment approaches and take into account rival
explanations and other potential threats to the validity of findings.
10. We acknowledge the importance of depth over breadth in developing assessment approaches that
start small and avoid overly complex and cumbersome processes.
Assessment is telling a story
1. We consider assessment as a special type of story – one that includes judgments of quality based on evidence.
2. We purposefully link the assessment story to key issues and decisions.
3. We work to make the story clear, focused, simple, and easily understood by different stakeholders.
4. We recognize that how the story is communicated is critical (e.g., written, oral, group meetings) and that a
variety of dissemination strategies may be needed to accommodate different stakeholders.
5. We communicate the story so that differences among students (e.g., social, cultural, ethnic/racial) are
respected.
6. We recognize that how the story is interpreted will be based in part on the multiple
experiences, backgrounds, and perspectives of key stakeholders.
7. We meet informally and formally with stakeholders, including students, to discuss, react to, and make
meaning of the assessment story.
8. We know that telling the story must be combined with conversations and deliberations for action by
relevant stakeholders.
9. We know that the evidence and story must reach those who have the power and resources to
make changes.
10. We acknowledge that the story may not be complete and that additional findings may be necessary
to fill in gaps or address inconsistencies in the evidence.
Improvement and follow-up are an integral part of the assessment process
1. We believe that assessment requires a willingness and caring among stakeholders to make adjustments based on lessons learned from the assessment process.
2. We develop either relative or absolute standards to make judgments and to inform improvement
efforts.
3. We recognize that stakeholders often prefer comparisons and benchmarking, particularly in relation
to peer and aspirant institutions.
4. We promote transparency in informing key stakeholders about how and why programmatic decisions
were made based on the collected evidence.
5. We advocate for a dynamic, interactive, and ongoing communication process among stakeholders
rather than a unilateral transmission of collected evidence.
6. We develop coordinated and ongoing efforts to bring stakeholders together to discuss future
directions and next steps.
7. We commit financial and human resources to ensure assessment evidence is not simply collected but
used in making programmatic improvements.
8. We recognize the continuous nature of assessment and that programmatic improvements may require
several years to produce identifiable results.
9. We continually evaluate the usefulness of assessment efforts and make changes when needed.
10. We change and adapt assessment strategies to meet the ongoing needs of those impacted and
remain sensitive to the social, cultural, and racial/ethnic backgrounds of students.
Additional resources
Astin, A. (1993). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. Washington, DC: Oryx Press.
Banta, T. W., & Associates (2002). Building a scholarship of assessment. San Francisco, CA: Jossey-Bass.
Banta, T. W., & Blaich, C. (2011). Closing the assessment loop. Change: The Magazine of Higher Learning, 43(1), 22-27.
Blaich, C., & Wise, K. (2011). From gathering to using assessment results: Lessons from the Wabash National Study. Champaign, IL: National Institute for Learning Outcomes Assessment, University of Illinois at Urbana-Champaign.
Braskamp, L. A. (1989). So, what's the use? In P. J. Gray (Ed.), Achieving assessment goals using evaluation techniques. New Directions for Higher Education, 67, pp. 43-50. San Francisco, CA: Jossey-Bass.
Braskamp, L. A., & Braskamp, D. C. (1997, July). The pendulum swing of standards and evidence. CHEA Chronicle No. 5. Washington, DC: Council for Higher Education Accreditation.
Braskamp, L. A., Braskamp, D. C., & Engberg, M. E. (2013). Global Perspective Inventory. Retrieved from https://gpi.central.edu/supportDocs/manual
Braskamp, L. A., & Schomberg, S. (2006, July). Caring or uncaring assessment. Inside Higher Ed. Retrieved from www.insidehighered.com/views/2006/07/26/braskamp
Brown, R. D., & Braskamp, L. A. (1980). Summary: Common themes and a checklist. In L. A. Braskamp & R. D. Brown (Eds.), Utilization of evaluative information. New Directions for Program Evaluation, 5, pp. 91-97. San Francisco, CA: Jossey-Bass.
Engberg, M. E., & Manderino, M. (2013). Collecting dust or creating change: A multi-campus usability study of student survey results. Manuscript submitted for publication.
Green, M. F. (2012). Measuring and assessing internationalization. New York, NY: NAFSA: Association of International Educators.
McCormick, A. C., & McClenney, K. (2012). Will these trees ever bear fruit? A response to the special issue on student engagement. The Review of Higher Education, 35(2), 307-333.
National Institute for Learning Outcomes Assessment, University of Illinois at Urbana-Champaign, Champaign, IL. www.learningoutcomeassessment.org/
Patton, M. Q. (2012). Essentials of utilization-focused evaluation. Los Angeles, CA: Sage Publications.
Pike, G. R. (2013). NSSE benchmarks and institutional outcomes: A note on the importance of considering the intended uses of a measure in validity studies. Research in Higher Education, 54, 149-170.
Stake, R. E. (1967). The countenance of educational evaluation. Teachers College Record, 68, 523-540.
Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
This publication is made possible by a grant from The Teagle Foundation. The statements and views expressed are solely the responsibility of the authors.
These Guidelines can be reproduced with attribution.
These Guidelines are posted on the website: http://gpi.central.edu
Suggested reference: Braskamp, L. A., & Engberg, M. E. (2014). Guidelines for judging the effectiveness of assessing student learning. Chicago, IL: Loyola University Chicago.
Email: lbraska@luc.edu & mengber@luc.edu