Discussion 5

 


As a special education leader, it is crucial that you have a process by which you regularly evaluate what is working well and what is not. This process is about achieving and sustaining success, and it consists of components that can be replicated. Sustaining whole-system change in education requires leaders at various levels of the organization to engage in collaborative conversations, design innovative and meaningful programs, and manage outcomes. For this Discussion, you will evaluate components of sustainability and apply those skills in analyzing the two Sustainability Scenarios. As a leader in the field of special education, how will you create an effective model of measuring sustainability?

To Prepare:

  • Review the articles on sustainability. Make notes of components of sustainability identified in successful districts.
  • Review the McIntosh et al. article. Reflect on the four factors contributing to sustainability.
  • Reflect on the Ontario Reform Strategy in Chapter 5 of the Fullan and Quinn text, concentrating on special education aspects.
  • Read The Components of Sustainability provided in the Module Resources. Focus on components of sustainability evident in both districts. Consider other aspects of the Coherence Framework that support sustainability.
  • Identify five additional scholarly sources that would support your position on components of sustainability.

By Day 3 of Week 8

Post your analysis of effective components of sustainability identified through your research and module resources. Based on the components chosen, identify components of sustainability evident in both districts. Which components of sustainability might be missing in the unsuccessful district? Compare and contrast any notable differences as they relate to sustaining and achieving success. Explain which components of the coherence framework are evident at Christy's site as opposed to Jenny's. How do these factors impact the components of sustainability? Explain your answer. Include at least five scholarly resources to support your position.

SCENARIO is attached. 


A full-color version of this infographic is available for download at http://www.corwin.com/books/Book244044 under "About" and then "Sample Materials and Chapters."

The Components of Sustainability
Program Transcript

[MUSIC PLAYING]

CHRISTY: Jenny? Hi, I’m Christy.

JENNY: Hi Christy. It’s so nice to meet you.

CHRISTY: It’s nice to meet you too. I’m so glad we could finally get to meet. How
are you doing?

JENNY: Oh, I’m getting by. How are you?

CHRISTY: Oh, I'm doing well. Well, let me start off by explaining the CEC new
teacher mentoring program. And then I'll tell you about my background. And then
I'm going to find out how your first couple of months of teaching are going.

JENNY: That sounds good to me.

CHRISTY: OK, the Council for Exceptional Children started a mentoring program
for new teachers in the field years ago to help support and, hopefully, retain new
special education teachers. This is my fifth year of being qualified as a mentor.

JENNY: I am really counting on this program. Everything seems so
overwhelming right now.

CHRISTY: Well, we’ll be here to provide you guidance, resources, and support
throughout the year.

JENNY: Thank you so much. I could really use the help.

CHRISTY: So about me, I started as a general education teacher and did that for
five years. But I always loved working with my special ed students. So I decided to
get my license, and I’ve been a special education teacher for 15 years now. And
trust me, I’ve had my ups and downs, but there’s absolutely no profession I would
rather be in. And once you get acclimated and build up your resources, you’ll find
the job much more rewarding.

JENNY: Wow, you’ve been in the field for a really long time. I always thought I
wanted to be a special education teacher, and I just graduated last year with two
special education licenses, but these first few months have been really tough. I’m
beginning to wonder if I have the skills to do this.



CHRISTY: Trust me, most beginning teachers say the same thing. I felt like that
when I first started. And actually I’m starting in a brand new school so, it’s a bit of
a fresh start for me too.

JENNY: Oh, it must be nice starting at a new school. The school I’m at in Grand
City is older and kind of run down.

CHRISTY: Oh, it is nice. My new school has all the latest technology, new
curriculum, and all the latest resources.

JENNY: The school I’m in hardly has any technology. I have to use my personal
iPad to give the students some exposure. We have one classroom computer, but
it’s really old and runs slow. All the teachers fight to get into the computer lab, so
I don’t use it very much with my students.

CHRISTY: If you need help planning out some lessons, just let me know.

JENNY: I love planning creative lessons. I’m just having a hard time finding time.
Our principal is really strict, and he has really high expectations. Most of the
teachers stay here well past 5:00 to finish up work, and even take it home with
them.

CHRISTY: Just like your resources, you’ll find different leadership styles at
different schools. Sometimes you have to make do with what you have until you
can upgrade your tools. Sometimes you might have to look for opportunities
elsewhere.

JENNY: Our principal is really smart. I mean he really knows his stuff, but he can
be really condescending sometimes. As a first year teacher, I feel like I’m not
doing anything right, and I don’t hear anything positive. He’s so worried about
test scores.

CHRISTY: I haven’t looked up Grand City recently. How are the scores looking?

JENNY: Low, for both reading and math, not to mention poor attendance and
high rates of student mobility.

CHRISTY: Yeah, those are hard enough to deal with trying to teach any kid,
especially ones with exceptionalities. I know this seems so overwhelming right
now, and it sounds like you’re at a school with a lot of challenges, but your
students need a dedicated teacher like you. And trust me, you will make a
positive impact on them.

JENNY: I hope so. I guess I need to focus on that.



CHRISTY: No matter what happens, this will be a good learning experience for
you, and will make you a stronger teacher.

JENNY: I just want to last through the year. I don’t think my principal likes me
very much.

CHRISTY: Maybe it will just take some time. I’ve worked for different principals
with different leadership styles, and right now I’m just lucky to work with an
amazing principal. That’s part of the reason I decided to go to the new school
when it opened. My former principal took a position there, and we all decided to
follow her.

JENNY: What did she do that you like so much?

CHRISTY: Well, she was a teacher for many years herself, and she understands
what it’s like. She has an outgoing and kind personality. And she also
acknowledges that teachers are professionals and experts. She’s set up
numerous committees and teams to ensure she’s getting feedback prior to
making decisions.

JENNY: She sounds amazing. Our principal says our team should meet up at
least twice a month, but I don’t think most teams are doing that.

CHRISTY: We have a mandatory team meeting each week and it’s so beneficial.
We review curriculum and standards. It’s a great way to hear new ideas. The
reading specialists and the technology specialists, they make their way around
the different meetings and share interesting things in their world. I encourage you
to start talking to the other special ed staff, and ask them to set up weekly
meetings.

JENNY: That is a good idea.

CHRISTY: Recently, I’ve been nominated to be the representative for a school
improvement team that also works with other schools in the district. It does
mean some extra hours, but I get to work with other staff, parents, and
community members to create goals and action plans for school and district
improvement.

JENNY: Wow, you're really busy. I don't know how you can do all of that.

CHRISTY: Oh, part of it's because of experience and part of it's because I've had
good leadership in the schools where I’ve worked. Don’t worry, these things will
happen for you too.



JENNY: I really hope so. I really want to be a good teacher, but I’m just so
overwhelmed with meeting the needs of the students while trying to figure out the
policies of the school.

CHRISTY: I’ll be here to offer you support throughout the school year. We can
make time for problem solving or brainstorming whenever you need help.

JENNY: Thank you so much for being here. I really appreciate it. I’m feeling a
little better already.

© 2017 Laureate Education, Inc.

Exceptional Children, Vol. 78, No. 4, pp. 407-422. ©2012 Council for Exceptional Children.

The Sustainability of
Schoolwide Positive
Behavior Interventions
and Supports

JENNIFER H. COFFEY

ROBERT H. HORNER
University of Oregon

ABSTRACT: A summary of the available literature on sustainability is provided, and recommended sustainability features are applied to implementation of schoolwide positive behavior interventions and supports (SWPBIS). One hundred and seventeen schools from 6 states completed the sustainability survey, which determined the presence of 8 sustainability features as they related to sustaining schoolwide positive behavior supports. Results from a logistic regression analysis demonstrate that together the sustainability features of administrative support combined with communication and data-based decision making create the best-fitting model of sustainability for SWPBIS. The results suggest that an educational innovation is more likely to be implemented and sustained with fidelity if it (a) has support from an administrator who encourages communication about the core features of the innovation and (b) uses data to plan and make changes. Implications for large-scale, sustained implementation of evidence-based practices are provided.

Education research has made
important advances in defining
practices that are effective, or
evidence-based, in improving
students' academic and social
outcomes (Slavin, Holmes, Madden, Chamber-
lain, & Cheung, 2010). Using evidence-based
practices with fidelity is more important than
ever as schools, districts, and state departments of
education strive to close the gaps between the
achievement of students with disabilities and
their peers. Practitioners cannot afford to “experi-

ment” on students with practices that have not
been proven effective. Instead, students need to
be given the best possible chance for succeeding
by receiving instruction and supports that have
an evidence base. Although the content of the ev-
idence-based practice, or innovation, is critical, it
is insufficient to ensure academic or behavioral
success (Datnow, 2005). How the innovation is
executed (i.e., its implementation) is an underem-
phasized component necessary for transforming
the “promise” of an effective innovation into the
outcome of improved student achievement


(Buzhardt, Greenwood, Abbott, & Tapia, 2006;
Fixsen, Naoom, Blase, Friedman, & Wallace,
2005).

Unfortunately, sustained use of an innova-
tion is not guaranteed even when full and effec-
tive implementation occurs. The history of the
field of education is littered with the detritus of
successful programs that fell out of favor or were
just forgotten over time, as evidenced by dusty
kits, books, and teachers’ guides safely tucked
away in school closets all over the United States.
In education, reinvention of the wheel occurs on
a regular basis. Education systems do not have the
funding or manpower to continually replace prac-
tices, nor can students “wait” for the education
system to get it right; fully implemented evi-
dence-based practices are needed now.

FEATURES OF SUSTAINABILITY

A review of the literature on sustainability of edu-
cational practices produces a large number of con-
ceptual models and recommendations, but few
empirical analyses. Existing conceptual models
consistently emphasize the following variables as
critical features that affect sustainability of an im-
plemented practice:

• A Contextually Appropriate Innovation. A
strong model of implementation for sustain-
ability begins with an innovation that is
aligned to state education agency (SEA) and
local education agency (LEA) standards and
requirements (Mihalic, Irwin, Fagan, Ballard,
& Elliott, 2004). Furney and colleagues’ lon-
gitudinal policy analysis of four schools found
that state and federal policy initiatives can in-
fluence outcomes for all students but will not
be wholly effective unless contextually appro-
priate implementation is considered (Furney,
Hasazi, Clark, & Keefe-Hartnett, 2003; Mi-
halic et al., 2004). Datnow (2005) found in
her study of the sustainability of comprehen-
sive school reform models that changing dis-
trict and state contexts affected the
sustainability of these models, but that the
effects were tempered by the schools’ strate-
gies for dealing with change, demonstrating
that each level of the school system should be

considered when determining the contextual
appropriateness of an innovation.

• Staff Buy-In. It has been recommended that
80% of staff “buy in” before a decision is
made by the school to implement an innova-
tion (DeStefano, Dailey, Berman, & Mclner-
ney, 2001). Buy-in is defined as verbal
statements supporting change and the overt
nonverbal behaviors necessary for change to
take place (Boyce & Roman, 2002).

• A Shared Vision. A shared vision is an agree-
ment between school personnel about the
core components of the innovation and what
implementation of those core components
will look like, as well as the teachers’ desired
outcomes for the innovation. A shared vision
becomes tangible in a plan detailing how im-
plementation and sustainability will be pro-
grammed; vague or tentative plans typically
end in unsuccessful implementation (Elias,
Zins, Graczyk, & Weissberg, 2003; Fullan,
2005).

• Administrative Support. Administrative sup-
port is the feature of sustainability most
strongly emphasized in the literature (Elliott
& Mihalic, 2004). Although different types
of administrators play a role in sustaining in-
novations in the school (e.g., district superin-
tendents, SEA personnel), the principal is
seen as the most critical player (Benz, Lind-
strom, Unruh, & Waintrup, 2004; Heller &
Firestone, 1995; Huberman, 1983) and has
been described as the gatekeeper of change
(Berman & McLaughlin, 1976). The admin-
istrator can increase the likelihood of the in-
novation’s sustainability by (a) acquiring
resources for the implementation effort, (b)
orienting staff to new ways of doing business,
(c) providing clear expectations to the staff,
and (d) prompting frequent feedback from
staff regarding the progress of implementa-
tion and the types of support they need
(Adelman & Taylor, 1998; Blase & Fixsen,
2004; Fullan, 2002; Heller & Firestone,
1995; Kam, Greenberg, & Walls, 2003; Mi-
halic et al., 2004).

• Leadership at Various Levels. Though the
importance of administrative support in sus-
taining an innovation cannot be overstated,

Summer 2012

leadership from other personnel in the school
is also critical for success. Administrative
turnover occurs frequently and administra-
tors have many duties, so it is often necessary
for leadership to come from within the ranks.
In addition, there are typically one or two
administrators in a school, whereas there are
many practitioners. In various studies, practi-
tioner leadership, especially when the practi-
tioner is well respected by other school
personnel, has led to greater commitment
and use of the innovation (Berman &
McLaughlin, 1976; Gottfredson & Gottfred-
son, 2002; Mihalic & Irwin, 2003; Mihalic
et al., 2004).

• Ongoing Technical Assistance. Technical
assistance is a "means of using knowledge to
improve the adoption and implementation of
some type of educational practice or proce-
dure” (Yin & White, 1984). The quality of
technical assistance activities (e.g., training
and coaching) is of critical importance for
the success of implementation (Adelman &
Taylor, 1998; Berman & McLaughlin, 1976;
Mihalic & Irwin, 2003; Mihalic et al., 2004;
Ringeisen, Henderson, & Hoagwood, 2003).
Focusing on practical classroom issues and
skill building rather than on theoretical in-
formation helps teachers to build the compe-
tence that leads to sustained innovations
(American Education Research Association,
2005; Berman & McLaughlin, 1976). In ad-
dition, focusing activities on the core princi-
ples of the innovation increases the
likelihood that teachers will sustain the inno-
vation (Elias et al., 2003; Sindelar, Shearer,
Yendol-Hoppey, & Liebert, 2005).

• Data-Based Decision Making and Sharing.
One task of coaches and school leaders is to
assist practitioners in collecting and analyzing
data related to the innovation (Joyce &
Showers, 2002). Having explicit systems to
collect and share the data with the entire
school staff, whether in celebration of im-
provements or to provide corrective feedback,
can increase short- and long-term commit-
ment to an innovation (Fullan, 2005). Imple-
mentation fidelity and outcome data can be
used to improve implementation quality and

should be accessible to practitioners (Adel-
man & Taylor, 2003; Greenwood, Delquadri,
& Bulgren, 1993; Martinez & Harvey,
2004), as monitoring of implementation data
allows for the innovation to be improved and
refined over time (Berman & McLaughlin,
1976; DeStefano et al., 2001; Huberman,
1983; Weissberg & Utne-O'Brien, 2004).

• Continuous Regeneration. Regeneration is
the set of procedures that allow a system to
continually compare valued outcomes against
current practice and modify practices to con-
tinue to achieve these outcomes as the con-
text changes over time (McIntosh, Horner, &
Sugai, 2009). Regeneration is necessary to
prevent or to remedy an implementation dip
(Fullan, 2002), which is a decrease in imple-
mentation fidelity that occurs after a period
of implementation and is the result of de-
creasing levels of interest in the program.
During this time, resources will be needed to
ensure teachers can receive training that will
review previously learned skills and teach
new skills so that they can reach a more ad-
vanced level with the innovation (Hatch,
2000). Cherniss (2006) further recommends
that teachers (a) create a culture of experi-
mentation, (b) set aside time for planning,
and (c) create an open and flexible decision-
making structure.

The sustainability features provided here are not
exhaustive; rather, they are the features most con-
sistently recommended to lead to sustainability of
innovations.

POSITIVE BEHAVIOR INTERVENTIONS AND SUPPORTS

A technology of tiered behavior supports at the
universal, targeted group, and individual levels
has been created over the years through the
expansion of applied behavior analysis (Sugai &
Horner, 2002). Termed positive behavior inter-
ventions and supports (PBIS), this approach uses
systems change methodology to minimize indi-
viduals' problem behavior, increase their quality
of life, and also increase their likelihood of success


academically and beyond (Carr et al., 2002).
Schools implementing PBIS focus on building
students’ academic skills along with their social
competencies. Through a behaviorally based sys-
tems approach, PBIS enhances the capacity of the
school to use research-validated practices and in-
struction (Sugai et al., 2000).

The core components of schoolwide PBIS
include (a) a statement of purpose, (b) schoolwide
expectations, (c) procedures for teaching school-
wide expectations, (d) a continuum of procedures
for encouraging schoolwide expectations, (e) a
continuum of procedures for discouraging prob-
lem behaviors, and (f) procedures for using data
to monitor the impact of schoolwide PBIS imple-
mentation (Lewis & Sugai, 1999). Students at a
PBIS school know which behaviors are
appropriate, can expect to receive both social and
tangible rewards when using those appropriate
behaviors, and also know what to expect when
they act inappropriately. Students at PBIS schools
do not "fall through the cracks" because educa-
tors, through the use of office discipline referrals
and systemwide communication, monitor all stu-
dents who exhibit problem behaviors. A PBIS
school is unified in its approach to supporting
students both academically and behaviorally. Ad-
ditional support is provided along a continuum
all the way to functional behavioral assessment
(McIntosh, Chard, Boland, & Horner, 2006). Re-
sults have demonstrated that the use of PBIS pro-
cedures results in positive changes throughout the
school: A body of research provides evidence of
PBIS's effectiveness in decreasing problem behav-
iors for the whole school (Horner et al., 2009;
Nelson, Hurley, Synhorst, & Epstein, 2008; Nel-
son, Martella, & Marchand-Martella, 2002;
Safran & Oswald, 2003).


In addition to its schoolwide benefits, PBIS
offers important and meaningful benefits to stu-
dents with disabilities. First is the idea previously
intimated that in a PBIS school each teacher feels
responsibility for each student in the school. The

model of a separate general education and special
education has led to students with disabilities
being under the purview of special educators and
experiencing a lack of connection with general ed-
ucators. PBIS encourages and enables educators
to share a commitment for all students, whether
or not they are on record as the primary provider
of services (Bradshaw, Koth, Bevans, Ialongo, &
Leaf, 2008). Second, full implementation of a
preventive model of behavior support may
decrease the number of students who are inappro-
priately determined to need more intensive sup-
ports. When the number of students moving into
more targeted or intensive services decreases, the
students who appropriately receive those services
may receive more focused attention and assistance
(McIntosh et al., 2006). Third, the teaming struc-
tures that are a critical component of PBIS imple-
mentation enable the collaborative work necessary
to support students with intensive behavior needs
(Eber, Sugai, Smith, & Scott, 2002). Accordingly,
a school that attempts to use evidence-based in-
terventions for students with intensive behavior
needs is likely to have more success if it already
has universal and targeted preventive supports
and interventions in place (Skiba, 2002). Addi-
tionally, the use of PBIS at the universal and tar-
geted levels can provide a record of interventions,
observations, and assessments that can be used as
part of a comprehensive evaluation for special ed-
ucation. In all, a preventive system using positive
behavior supports makes the provision of a free
and appropriate public education (FAPE) more
likely (Eber et al., 2002; Skiba, 2002), and a
schoolwide system of PBIS may lead to a compe-
tent school culture capable of sustaining the use
of evidence-based practices (Horner & Sugai,
1999) that benefit students with disabilities.

This study was conducted to identify and
validate the components of sustainability that
increase the ability of schools to sustain school-
wide PBIS (SWPBIS). This was accomplished by
determining the significant differences of sustain-
ability features in schools that have sustained SW-
PBIS and schools that have not. As a result, a
model of sustainability will be presented for SW-
PBIS with the intention that the model may be
generalized to other innovative practices being
implemented in schools.


METHOD

SAMPLE AND PARTICIPANT SELECTION

All schools included in this study had existing
PBIS implementation data from the Schoolwide
Evaluation Tool (SET; Horner et al., 2004; Sugai,
Lewis-Palmer, Todd, & Horner, 2001) or the
Team Implementation Checklist (TIC; Sugai,
Horner, & Lewis-Palmer, 2001). The PBIS Tech-
nical Assistance Center database houses results for
schools using these tools. When data were ex-
tracted for this study, results were available from
1998 to 2006, with 429 schools in the SET
database and 932 schools in the TIC database.

The operational definition of sustaining was
established at a minimum of 3 years of imple-
mentation with the last 2 years demonstrating cri-
terion levels of implementation fidelity. Three to
5 years of implementation has been widely ac-
cepted as a marker for sustained use of a program
(Mihalic & Irwin, 2003; Rog et al., 2004; Schrag,
1996). To be included in the sample, schools
needed to have implemented PBIS for at least 3
years. The sample consisted of two groups: sus-
tainers and nonsustainers. The first sample, sus-
tainers, was made up of schools that sustained
their PBIS system with fidelity, as demonstrated
by a minimum SET total score of 80% for the
last 2 years on record or 2 years above 80% and a
consecutive year with a score of 75% or above.
The second sample, nonsustainers, was made up
of schools that had been observed for at least 3 years
but did not meet the criteria for sustainer. School
characteristics gathered for all schools included (a)
socioeconomic status (SES) of the students (the
percentage of students receiving a free or reduced
lunch), (b) size of school (student enrollment), (c)
school academic level (e.g., middle school), (d)
geographic location (e.g., rural), (e) number of
years PBIS has been implemented, (f) percentage
of minority students, and (g) Title I status.
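
To make the screening rule concrete, the following minimal Python sketch implements the sustainer/nonsustainer classification described above. The function name and the exact handling of the second fidelity clause are assumptions, since the prose leaves its precise form ambiguous.

```python
# Hypothetical sketch of the screening rule; not the authors' actual code.
def classify_school(set_scores):
    """set_scores: chronological SET total scores (percentages), one per
    implementation year. Returns the group label used in the study."""
    if len(set_scores) < 3:          # fewer than 3 years: excluded from sample
        return "excluded"
    # Criterion A: at least 80% in each of the last 2 years on record.
    meets_a = all(s >= 80 for s in set_scores[-2:])
    # Criterion B: 2 years above 80% plus a consecutive year at 75% or above.
    meets_b = any(
        set_scores[i] > 80 and set_scores[i + 1] > 80 and set_scores[i + 2] >= 75
        for i in range(len(set_scores) - 2)
    )
    return "sustainer" if (meets_a or meets_b) else "nonsustainer"

print(classify_school([82, 85, 81]))   # sustainer (criterion A)
print(classify_school([85, 83, 76]))   # sustainer (criterion B)
print(classify_school([70, 78, 72]))   # nonsustainer
```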

The sample schools were asked to take part
in a survey containing 40 questions about the sus-
tainability components in place in each school
related to SWPBIS implementation. The survey is
available upon request and is explained in more
depth in the next section. Recruitment was
accomplished by sending schools the sustainabil-
ity survey with a letter explaining the purpose and
importance of the study, the role the participating

schools would play, and the results that would be
reported, including how these results could en-
hance the field’s knowledge about PBIS sustain-
ability. Through the original screening process,
146 schools were categorized as sustainers. Non-
sustainers were fewer in number, with 111
schools. Surveys were sent to these 257 schools,
and unresponsive schools were sent two subse-
quent mailings.

MEASURES

Extant data were collected for two methods of
assessment: the SET, which measures the imple-
mentation of the core features of SWPBIS, as de-
scribed in the previous pages, and the TIC, which
tracks SWPBIS implementation activities in the
school. For both measures, scores are percentages
of essential features of SWPBIS in place—a score
of 100 equates to having 100% of the features in
place.

Schoolwide Evaluation Tool (SET). The SET
is designed to assess and evaluate the critical fea-
tures of SWPBIS for each school year (Sugai,
Lewis-Palmer, et al., 2001). An external reviewer
administers the 28-item research tool on site by
reviewing materials that relate to SWPBIS, per-
forming observations, and conducting staff and
student interviews. The SET was found to be
valid and reliable in measuring the implementa-
tion of schoolwide PBIS, can be measured with
high interobserver agreement, demonstrates excel-
lent test-retest reliability (97.3%), produces a
valid index of schoolwide behavior support as
defined by Lewis and Sugai (1999), and is sensi-
tive enough to be useful in documenting change
in levels of implementation of SWPBIS programs
in schools (Horner et al., 2004). In a study con-
ducted by Horner and colleagues, the average in-
terobserver agreement on SET item scores was
99% across 17 schools (2004). The SET has seven
subscales that measure the essential features of
SWPBIS:

• Behavior expectations defined.

• Behavior expectations taught.

• Ongoing behavior reward system.

• System for responding to behavior violations.

• Monitoring and decision making.


• Management.

• District-level support.

The subscale alphas were found to range from .63
to .92 with a full-scale alpha of .90 (Horner et al.,
2004).

Team Implementation Checklist (TIC). The
TIC monitors implementation and maintenance
of SWPBIS systems. When beginning implemen-
tation, the school’s SWPBIS team completes the
checklist and uses the results to create an action
plan that describes the most needed resources
(e.g., training, coaching, and financial resources).
The TIC is then used at regular intervals
(monthly or quarterly) to monitor progress. Im-
plementation activities are divided into two sec-
tions: Startup Activities and Ongoing Activities.
The Startup Activities section is primarily used to
monitor the work of initial implementation. The
Ongoing Activities section assists the team in
evaluating the activities required to sustain a PBIS
system. Six main areas of implementation activi-
ties are contained in the Startup Activities section:

• Establish commitment.

• Establish and maintain team.

• Conduct self-assessment.

• Establish schoolwide expectations.

• Establish information system.

• Build capacity for function-based support.

The implementation goals in the Ongoing Activi-
ties section are

• SWPBIS team has met at least monthly.

• SWPBIS team has given status report to
faculty at least monthly.

• Activities for SWPBIS action plan imple-
mented.

• Accuracy of implementation of SWPBIS
action plan assessed.

• Effectiveness of SWPBIS action plan imple-
mentation assessed.

• SWPBIS data analyzed.

Barrett, Bradshaw, and Lewis-Palmer (2008)
reported that a slightly modified version of the
TIC was found to have high internal consistency

(Cronbach’s alpha = .93, n = 1,633 forms com-
pleted).

Sustainability Survey. The sustainability sur-
vey contains questions about the organizational
features of the school that have facilitated or
inhibited implementation and sustainability of
SWPBIS. By mapping the literature-based sus-
tainability model onto the SWPBIS sustainability
model and determining shared features, survey
questions were created that have a foundation in
research and are appropriate for examining the
SWPBIS system. The survey questions are catego-
rized by the sustainability model components,
with five questions for each of the following sub-
scales:

• Shared vision and resources in a contextually
appropriate setting.

• Buy-in and agreement.

• Ongoing and active technical assistance.

• Use of data to make decisions.

• Lateral and vertical communication.

• Leadership from various levels.

• Administrative support.

• Regeneration.

The formatting of the survey includes closed
questions on the Likert scale, frequency ratings,
and two open-ended questions about aspects of
sustainability specific to the school.

The survey’s construct validity and clarity
were analyzed by a panel of experts, all of whom
have a background working with SWPBIS sys-
tems and are well-versed in issues of sustainability.
Feedback from the panel was used to make modi-
fications before the survey was sent to pilot
schools. Feedback from these pilot schools regard-
ing the format of the survey, comprehensiveness
of the content, and question clarity was then used
to revise the survey. Internal consistency reliability
(coefficient alpha) was calculated from completed
questionnaires using item total correlations and
subscale correlations, resulting in adjustments to
the survey’s items and subscales.

PROCEDURE

When comparing the surveys of sustaining
schools and nonsustaining schools, is the level of


TABLE 1

Item Total Correlations and Subscale Alphas

Subscale                                   Item alphas                  Subscale alpha
Administrative Support                     .78, .80, .79, .69           .82
Communication                              .73, .76, .74, .70, .77      .78
Data-Based Decision Making                 .81, .78, .63, .66           .78
Regeneration                               .61, .73, .60, .66           .72
Technical Assistance and Dissemination     .63, .63, .55, .60, .60      .65

sustainability associated with ratings on specific
sustainability components as measured by the sus-
tainability survey? Logistic regression was used to
test the associations between the predictor vari-
ables—the sustainability components in place in
the school as measured by the sustainability sur-
vey as continuous variables—and the criterion
variable, which is the presence or absence of sus-
tainability as determined by the results of the im-
plementation measure as a dichotomous variable.
The school is the unit of analysis.

One question inherent to this study is
whether the latent constructs included in the sus-
tainability survey validly discriminate between
sustaining and nonsustaining schools. Statistical
analyses are necessary to determine if there is a
specific subset of variables that most parsimo-
niously describes a model of sustainability. To an-
alyze survey results with logistic regression, the
following activities were completed:

• Analyze each predictor and its relation with
the outcome variable.

• Model the probability of the outcome of sus-
tainability by a series of univariate logistic
regression analyses.

• Fit a preliminary multivariate logistic model
using all predictors of importance.

• Fit alternative models.

• Add interactions.

• Remove statistically insignificant predictors.

• Compare performance of alternative models
on the significance of each predictor, good-
ness of fit statistics, accuracy of prediction,
and diagnostic results.

The null hypothesis posits that the sustain-
ability model is not an improvement over the in-
tercept-only model (α = .05). The alternative
hypothesis is that the sustainability model does
provide a better fit to the data by demonstrating a
significant improvement over the intercept-only
model. The logistic model was applied to the sus-
tainability survey data using the LOGISTIC and
GENMOD procedures implemented in SAS®
Release 9.1 (SAS Institute, 2004).
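
A minimal sketch of this analysis pipeline in Python with statsmodels, standing in for the SAS LOGISTIC and GENMOD procedures the authors used; the data file and the variable names (sus, admin, comm, dbdm, regen, ta) are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per school; 'sus' is 1 for sustainers, 0 for nonsustainers, and the
# predictors are the continuous sustainability-survey subscale scores.
df = pd.read_csv("sustainability_survey.csv")   # hypothetical file

# Univariate screening: regress the outcome on each predictor alone and
# inspect the likelihood-ratio p value against the intercept-only model.
for var in ["admin", "comm", "dbdm", "regen", "ta"]:
    m = smf.logit(f"sus ~ {var}", data=df).fit(disp=False)
    print(var, round(m.llr_pvalue, 4))

# Final reported model: data-based decision making plus an administrative
# support x communication interaction, estimated by maximum likelihood.
final = smf.logit("sus ~ dbdm + admin:comm", data=df).fit(disp=False)
print(final.summary())   # coefficients, Wald tests, AIC for model comparison
```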

PRELIMINARY ANALYSES

Of the 257 surveys sent to PBIS team leaders,
117 were returned (46%). Seventy-nine (79) of
the respondents were sustainers and 38 were non-
sustainers. Coefficient alphas were computed at
the subscale level, and items that diminished the
subscales' consistency were dropped. Generally
accepted conventions for coefficient alphas are:
≥.90 = excellent, .80 to .89 = good, .70 to .79 =
acceptable, .60 to .69 = questionable, .50 to .59 =
poor, and <.50 = unacceptable (George & Mallery, 2003). Three subscales were released from the sustainability survey because their scale alphas were less than .60: (a) buy-in and commitment, (b) leadership from various levels, and (c) shared vision and resources. The five remaining subscales had coefficient alphas ranging from .65 to .82.
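
For reference, coefficient (Cronbach's) alpha for a subscale can be computed directly from the item responses; a minimal sketch, with a random response matrix as a stand-in for the actual survey data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents x n_items) array of item scores for one subscale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                            # number of items
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Subscales scoring below .60 ("questionable" or worse per George & Mallery)
# would be dropped, as happened for three of the eight subscales here.
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(117, 5))          # 117 schools, 5 Likert items
print(round(cronbach_alpha(demo), 2))
```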


TABLE 2

Descriptive Statistics for the Sustainability Survey Subscales by Sample Group

                                          Sustainers            Nonsustainers         All Respondents
Survey Category                           M      SD     n       M      SD     n       M      SD     n
Administrative support (20 pts.)          16.54  4.07   79      13.75  3.71   37      15.65  3.70   117
Communication (25 pts.)                   21.19  3.44   79      18.18  3.28   38      20.21  3.66   116
Data-based decision making (20 pts.)      15.32  2.61   79      13.69  2.53   37      14.80  2.68   116
Regeneration (20 pts.)                    16.05  2.58   79      15.67  2.38   38      15.93  2.51   117
Technical assistance (20 pts.)            18.05  4.05   78      16.83  3.88   37      17.97  4.06   115

Note. The maximum number of points available is provided for each subscale.

Table 1 presents item total correla-
tions and subscale alphas. Descriptive statistics for
the remaining sustainability survey categories are
presented in Table 2. Each of the survey cate-
gories or subscales is divided into sustaining and
nonsustaining groups and the overall means are
given for each of the subscales.

A logistic regression model was fit to the sur-
vey data to explain the predicted odds of sustain-
ability (i.e., sus = 1). There were five possible
main effects: (a) administrative support, (b) com-
munication, (c) data-based decision making, (d)
regeneration, and (e) technical assistance. With
the exception of technical assistance, which was
not included in the final logistic regression model,
these are the categories of the sustainability survey
that demonstrated strong internal consistency.
The Maximum Likelihood Estimator (MLE) is
the method used with logistic regression analysis
to estimate coefficients for the fitted model. Each
of the independent variables was regressed with
the outcome variable of sustainability. Four of the
five variables—administrative support, communi-
cation, data-based decision making, and technical
assistance—showed a significant relationship (p ≤
.05) with the dependent variable of sustainability.
The Pearson correlations between independent
variables, the Variance Inflation Factor (VIF), the
Tolerance, and the absolute correlation between
the coefficient estimates all indicated the absence
of multicollinearity.

RESULTS

LOGISTIC REGRESSION ANALYSIS

In performing backward logistic regression, we
first tested a model that included all variables de-
termined significant in the univariate regression.
This model did not have a significantly better fit
than the intercept-only model. The second multi-
variate logistic regression included the three vari-
ables found to be most significant in the
univariate logistic regression models: administra-
tive support, communication, and data-based
decision making. The correlation between admin-
istrative support and communication (.60), as
seen in Table 3, seemed to create substantial inter-
ference in the model by causing significant
changes in the coefficients; therefore, it was neces-
sary to test the two variables together as an inter-
action effect. Statistics that test for predictiveness
and effectiveness of the fitted model, the Akaike
Information Criterion (AIC), the Schwarz Crite-
rion (SC), and negative twice the log likelihood
(-2 Log L) were tested for the model with and
without the independent variables. When the
model that includes data-based decision making
and an interaction between administrative sup-
port and communication is compared to the in-
tercept-only model, the AIC, SC, and -2 Log L
decrease by 26.05, 15.07, and 34.05, respectively.

When comparing the intercept-only model
and the model that includes data-based decision
making and an interaction between administra-
tive support and communication, the model that


TABLE 3

Pairwise Correlations Between Independent Variables

                                Communication   Data-Based        Regeneration   Technical
                                                Decision Making                  Assistance
Administrative support          .60             .42               .13            .28
Communication                                   .43               .31            .49
Data-based decision making                                        .24            .26
Regeneration                                                                     .40

includes the two effects predicts the outcome of
sustainability significantly better than the model
without any effects. The Wald chi-square of 18.50
is significant (p = .001), and thus we can reject
the null hypothesis that states all coefficients are
zero. The maximum likelihood ratio results
demonstrate significance.

PREDICTION OF THE MODEL

The fitted model has a strong prediction for sus-
tainability measured by a concordance of about
82%. The Somers' D index of the degree that pre-
dicted probabilities match actual outcomes is
64%, interpreted as 64% fewer errors made in
predicting which of two schools demonstrated
sustainability by utilizing the estimated probabili-
ties rather than by chance alone. The Hosmer and
Lemeshow goodness of fit statistic (Hosmer &
Lemeshow, 1989) is 3.61 with p = .89, indicating
a good fit. When the probability is not significant
(p > .05), a good fit is indicated. Thus, the logis-
tic fitted model appears to be the right model for
detecting sustaining schools.
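
The concordance and Somers' D indices reported above can be computed by comparing every sustainer/nonsustainer pair of schools; a minimal sketch, with y and p as assumed arrays of observed outcomes and model-predicted probabilities:

```python
import itertools
import numpy as np

def concordance_somers_d(y, p):
    """y: 0/1 outcomes; p: predicted probabilities of sustaining."""
    conc = disc = pairs = 0
    for i, j in itertools.product(np.where(y == 1)[0], np.where(y == 0)[0]):
        pairs += 1
        if p[i] > p[j]:
            conc += 1        # sustainer ranked higher: concordant pair
        elif p[i] < p[j]:
            disc += 1        # nonsustainer ranked higher: discordant pair
    return conc / pairs, (conc - disc) / pairs   # concordance, Somers' D

y = np.array([1, 1, 1, 0, 0])
p = np.array([0.9, 0.8, 0.4, 0.3, 0.5])
print(concordance_somers_d(y, p))   # (0.833..., 0.667...)
```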

Univariate logistic regression analyses were
conducted with the five subscales (administrative
support, communication, data-based decision
making, regeneration, and technical assistance),
and all but regeneration were significant (p < .05), as shown in Table 4. Accordingly, a multivariate logistic regression model was created that included all individually significant variables; however, this four-variable model did not have a significantly better fit than the intercept-only model. Technical assistance, the variable that had the smallest effect within the model, was thus removed, and only administrative support, communication, and data-based decision making remained. Results of the new model with these three main effects demonstrated an interaction occurring between administrative support and communication, further supported by the medium-sized correlation between the two variables. This led to the creation of a model that included two main effects: data-based decision making and an interaction between administrative support and communication. The null hypothesis could be rejected because a parsimonious model containing the two effects of

TABLE 4

Univariate Logistic Regression Analyses

Independent Variable            p of Likelihood Ratio   Wald Chi-Square   p Value
                                of Overall Model
Administrative support          .0002                   12.39             .0004
Communication                   <.0001                  10.77             .0010
Data-based decision making      .0024                   8.48              .0036
Regeneration                    .4330                   0.61              .4331
Technical assistance            .0378                   4.00              .0414


data-based decision making and an interaction
between administrative support and communica-
tion had a significantly better fit than the inter-
cept-only model. Accordingly, a school that has
data-based decision making along with a combi-
nation of administrative support and communica-
tion has better odds of sustaining PBIS than
schools that do not have this combination of
organizational features in place.
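
Written out, the final fitted model takes the following form; the coefficients are left symbolic because their estimated values are not reported in this excerpt:

```latex
% Form of the final fitted logistic model (beta values not reported here).
\[
\ln\frac{P(\text{sustain})}{1 - P(\text{sustain})}
  = \beta_0
  + \beta_1\,(\text{data-based decision making})
  + \beta_2\,(\text{administrative support} \times \text{communication})
\]
```

A positive interaction coefficient would mean that administrative support is most predictive of sustaining where communication is also strong, which is the complementary relationship the authors describe.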

DISCUSSION

RESPONDENT DEMOGRAPHICS

Many more elementary schools returned surveys
than middle or high schools. About 12% of the
surveys went out to middle schools and 16% of
the responses were from middle schools, whereas
14% of respondents were K-8 schools. High
schools made up 9% of the original sample and
returned 4% of the surveys. PBIS is more preva-
lent in elementary schools, so the ratio of surveys
returned (elementary schools returned about 65%
of the surveys) is appropriate for the national
makeup of PBIS schools. Tukey-Kramer one-way
analysis of variance (ANOVA; Hsu, 1996) was
used to compare the means of the sustainability
survey for the various demographic categories and
pairwise comparisons were formed. To ensure
familywise error was taken into consideration,
comparisons were not considered significantly dif-
ferent from each other unless the adjusted p value
was less than .01. Tukey-Kramer pairwise com-
parisons resulted in no significant differences be-
tween the different levels of schools, including
elementary, K-8, and K-12 schools.
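
A minimal sketch of this comparison in Python; statsmodels' pairwise_tukeyhsd applies the Kramer adjustment for unequal group sizes, and the file and column names are assumptions.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("sustainability_survey.csv")   # hypothetical file
result = pairwise_tukeyhsd(
    endog=df["survey_total"],    # sustainability survey score per school
    groups=df["school_level"],   # e.g., elementary, K-8, middle, high
    alpha=0.01,                  # the stricter familywise threshold used above
)
print(result.summary())
```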

A few significant effects were found for the
different demographic categories. Schools in
small- or medium-sized cities were significantly
more likely to have a higher rating for the sustain-
ability component of regeneration than schools in
rural areas. Challenges to sustaining innovations
in rural areas, such as the economic outlook of
the community, staffing, and opportunities to
collaborate, have been reported by multiple au-
thors (Kannapel & DeYoung, 1999; Seal & Har-
mon, 1995; Theobald & Nachtigal, 1995). Also,
schools with 100 to 299 students (as compared to
schools with 300-749 students) had significantly

greater scores for three organizational features: (a)
communication, (b) data-based decision making,
and (c) technical assistance. Further differences
were seen when comparing schools that have im-
plemented for 5 years and schools that have im-
plemented for 3 years. Schools that have
implemented for 5 years or more have a greater
level of administrative support, data-based deci-
sion making, and technical assistance. It is likely
that a school that has continued to implement
PBIS would have assistance from their adminis-
trator, would be using data to inform their prac-
tices, and would be receiving ongoing training to
assist in their PBIS activities. In addition, regular
use of data may result in administrators and staff
having more accurate and timely feedback, which
more closely connects the desired outcomes to the
implementation activities.

OPEN-ENDED QUESTIONS

Two open-ended questions were included in the
survey:

a. Is there anything else that helped your school
to sustain PBIS?

b. Were there any major obstacles for your
school that made sustaining PBIS difficult?

Of the 117 respondents who returned surveys, 63
respondents explained what had helped them sus-
tain PBIS and 84 respondents described obstacles
to their school’s efforts to sustain. The areas
reported as conducive to sustainability were lead-
ership (principals, districts, SEAs, teachers,
coaches, PBIS coordinators, consultants, and
counselors) with 20 responses, teacher buy-in
with eight responses, funding with seven re-
sponses, time to meet/regularly held meetings
with six responses, decision-making procedures
and technical assistance opportunities, both with
five responses. If fewer than five respondents re-
marked on the same facilitators, they are not listed
here.

When asked what had helped their SWPBIS
implementation to sustain, the largest number of
respondents named leadership. Implementation
and sustainability research have demonstrated
that leadership is the base upon which a sustain-
able program is built (Benz et al., 2004; Berman
& McLaughlin, 1976; Heller & Firestone, 1995;


Huberman, 1983) and that it has a direct rela-
tionship to quality implementation (Kam et al.,
2003). Leaders provide direction and motivation
for the innovation by sheltering teachers from
other pressures, demonstrating that the innova-
tion is part of the central mission of the school,
and communicating positively about the innova-
tion with staff (Kam et al., 2003; Rohrbach, Gra-
ham, & Hansen, 1993; Sindelar et al., 2005;
Solomon, Battistich, Watson, Schaps, & Lewis,
2000). In terms of district support, one respon-
dent explained that "central office support and
the message that PBIS is important and an expec-
tation for schools” led to sustainability. District
support can increase the capacity of schools to
carry out innovations (Martinez & Harvey,
2004). The importance of teacher leadership in
one respondent’s school was demonstrated by “a
very strong universal team, led by two highly effi-
cient veteran teachers who were respected by
staff,” which together resulted in sustainability.
This response about teaming is reinforced by Sin-
delar and colleagues’ finding that teachers on
long-standing teams were less likely to face serious
teaching challenges. Martinez and Harvey agree
that having a cohesive team that has time to col-
laborate will lead to a more successful effort.


Teacher buy-in and commitment was the
next most frequently reported factor leading to
sustainability; two of these respondents stated
that better than expected outcomes in the first
year led to increased teacher commitment. Short-
term gains can be critical (Elias et al., 2003; Han
& Weiss, 2005), especially in this time of in-
creased accountability when innovations need to
show quick results to be maintained in the
schools. Implementation and sustainability re-
search have demonstrated that teacher buy-in is
needed before the program can be implemented,
but perhaps there are multiple paths to buy-in
and, along with that, sustainability.

Regarding the second open-ended question
(“Were there any major obstacles for your school
that made sustaining PBIS difficult?”), 22 respon-
dents stated that funding was an obstacle, whereas
17 stated they were unable to meet consistently as
a team and did not have enough time to complete
all of the activities that were necessary for a strong
PBIS effort. Eight respondents believed that they
needed more staff to implement PBIS with fi-
delity. Also making the human resources issues
more difficult were turnover in administration
and turnover in staff, both reported by five
respondents. Further, seven respondents believed
lack of staff buy-in hurt the PBIS effort in
schools.

Appropriate funding was reported by seven
individuals as a beneficial factor in sustaining
PBIS; conversely, 22 respondents stated that inad-
equate funding was an obstacle in sustaining
PBIS. An organizational structure that ensures re-
sources are dedicated to the innovation will assist
the effort (Adelman & Taylor, 1998; Cherniss,
2006; Mihalic & Irwin, 2003; Mihalic et al.,
2004). Resource allocation also demonstrates pri-
ority. When innovations are not assigned a high
priority, they tend to be marginalized in favor of
activities that have high priority (Center for Men-
tal Health in Schools, 2010).

Four respondents spoke of philosophical
issues. Philosophical beliefs are cultural and orga-
nizational traditions of the schools and exert im-
plicit force on the change effort (McLaughlin &
Mitra, 2001). For some teachers, the use of
rewards, whether they are tangible or intangible,
is seen as inappropriate, and three respondents
reported this was the case in their schools. One
respondent reported the obstacle of “philosophi-
cal issues of some staff members who give out few
positives and no negatives.” There is a delicate
balance between adoption and adaptation of an
innovation when ensuring the innovation fits the
context of the schools, including the philosophi-
cal beliefs of the teachers (Gager & Elias, 1997;
Martinez & Harvey, 2004; Weissberg & Green-
berg, 1998). Although it is important that the
features of the innovation fit well with the philo-
sophical views of the school staff, modifying the
innovation by eliminating essential components
likely decreases the probability of achieving the
desired outcomes (Battistich, Schaps, Watson,


Solomon, & Lewis, 2000; Kam et al., 2003;
Harachi, Abbott, Catalano, Haggerty, & Fleming,
1999).

LOGISTIC REGRESSION MODEL

The literature describes the organizational feature
of communication mainly in terms of collabora-
tion, specifically through regular communication
and staff meetings, which can create a sense of
collective efficacy (Berman & McLaughlin, 1976;
Rog et al., 2004). In one study, the more time
teachers spent communicating about the prac-
tice, the more skilled their implementation of the
innovation was (Bauchner, Eiseman, Cox, &
Schmidt, 1982). In the model of sustainability
tested here, lateral and vertical communication
occurs not just between teachers but also
between teachers and administrators. Without
administrative support, vertical communication is
unlikely to occur consistently or strategically. In
addition, administrative support is necessary to
allow time for teachers to collaborate laterally.
Accordingly, this study's model of sustainability
logically leads to an interaction between the inde-
pendent variables of administrative support and
communication. Communication between peers,
between teachers and administrators, between
schools and central district offices, and between
LEAs and SEAs is necessary if an innovation is to
be seen as important enough to receive ongoing
support from all levels. Most of this communica-
tion is not possible without support from admin-
istrators. Also, dedicated time for teachers to
collaborate on an innovation is unlikely to occur
without an administrator's support or assistance.
For all of these reasons, administrative support
and communication seem to logically comple-
ment each other, and one can reasonably be seen
as a prerequisite of the other. Of all the organiza-
tional features hypothesized to lead to sustainabil-
ity, administrative support has the largest research
base and is the most well known by those work-
ing in the field of education who are concerned
with sustainability of innovations.

Data-based decision making did not meet
the .05 cutoff but did have a p value less than .10,
and because it is supported by responses to the
open-ended questions, it has research validation
and is part of an overall model that fits signifi-
cantly better than an intercept-only model. It re-
mained in the final logistic regression model with
the interaction between administrative support
and communication. Part of what makes commu-
nication in the system of PBIS so successful is
that PBIS team members and other educators are
able to use data to discuss the status and goals of
their school. Data helps administrators to make
decisions about programming and modification
of instructional practices and aspects of the learn-
ing and social environment.

LIMITATIONS

The analyses in this study were exploratory and
caution should be used in drawing conclusions
from the results. The main limitations of this
study were the instrumentation used and the sam-
ple sizes. We used a previously untested survey in-
strument to make associations between
organizational features and sustainability. This
survey instrument is unlikely to have been com-
prehensive in measuring the presence of the eight
sustainability components that resulted from the
literature review. In addition, the subscale of
administrative support had fairly high correlations
with most of the other subscales. When using
logistic regression, it was difficult to determine
effects of other subscales whenever administrative
support was included in the model. Perhaps the
construct of administrative support needs to be
conceptualized differently, parsing out specific
administrative activities to the various categories
(e.g., data-based decision making and technical
assistance). Without question, administrative sup-
port touches upon each of the organizational fea-
tures, so determining if there is a way to create a
stand-alone organizational feature of administra-
tive support could be a next step in this line of re-
search. In addition, when the construct validity of
a survey, its internal consistency, and the norms
associated with the instrument are being estab-
lished, multiple iterations surveying different
samples of the population are necessary. This
study is merely the first of many steps necessary
for the survey to be accepted as valid and reliable.

The second main limitation is that the sam-
ple groups for research questions one and two are
unbalanced, with the sustainers making up the


preponderance of the sample. This mirrors the
imbalance in the databases from which the sam-
ple was drawn. Although the sample size for the
nonsustainers was large enough to meet the crite-
ria for minimum cell sizes for the statistical analy-
ses, a follow-up study with more balanced
samples would increase the probability that the
conclusions drawn in this study are based on real
differences in the populations.

This study analyzed survey and existing im-
plementation data to determine the organiza-
tional and programmatic features that would
predict or be associated with sustainability.
Results demonstrated that having a combination
of the organizational features of administrative
support and communication along with data-
based decision making is associated with schools
sustaining PBIS over a number of years. Training
and practice in special education can be enhanced
by the findings described in this article; however,
much work is left to be completed. A focus on
evidence-based practices is only part of the equa-
tion in reaching the desired student outcomes
(Odom, 2009). Unless we support the initial and
continued implementation of those practices, the
field of special education cannot hope for signifi-
cant and lasting improvements for children with
disabilities.

REFERENCES

Adelman, H., & Taylor, L. (1998). Reframing mental
health in schools and expanding school reform. Educa-
tional Psychologist, 33(4), 135-152. http://dx.doi.org
/10.1207%2Fs15326985ep3304_1

Adelman, H., & Taylor, L. (2003). On sustainability of
project innovation as systemic change. Journal of Edu-
cational and Psychological Consultation, 14(1), 1-25.
http://dx.doi.org/10.1207%2FSl532768XJEPCl401_
01

American Educational Research Association. (2005). Teaching teachers: Professional development to improve student achievement. Research Points: Essential Information for Education Policy, 3(1), 1-4.

Barrett, S. B., Bradshaw, C. P., & Lewis-Palmer, T. (2008). Maryland statewide PBIS Initiative: Systems, evaluation, and next steps. Journal of Positive Behavior Interventions, 10(1), 105-114. http://dx.doi.org/10.1177%2F1098300707312541

Battistich, V., Schaps, E., Watson, M., Solomon, D., & Lewis, C. (2000). Effects of the child development project on students' drug use and other problem behaviors. Journal of Primary Prevention, 21(1), 75-99. http://dx.doi.org/10.1023%2FA%3A1007057414994

Bauchner, J. E., Eiseman, J. W., Cox, P. L., & Schmidt, W. H. (1982). People, policies, and practices: Examining the chain of school improvement. Volume III: Models of change. Andover, MA: The NETWORK.

Benz, M. R., Lindstrom, L., Unruh, D., & Waintrup, M. (2004). Sustaining secondary transition programs in local schools. Remedial and Special Education, 25(1), 39-50. http://dx.doi.org/10.1177%2F07419325040250010501

Berman, P, & McLaughlin, M. W (1976). Implemen-
tation of educational innovation. The Educational
Eorum, 11(3), 343-370. http://dx.doi.org/10.1080%2
FOO131727609336469

Blase, K., & Eixsen, D. (2004). Infrastructure for imple-
menting and sustaining evidence-based programs with
fidelity. Tampa, FL: National Implementation Research
Network.

Boyce, T E., & Roman, H. R. (2002). Institutionaliz-
ing behavior based theory: Theories, concepts and prac-
tical suggestions. The Behavior Analyst, 3(1), 76—82.

Bradshaw, C , Koth, C , Bevans, K., Ialongo, N., &
Leaf, P. (2008). The impact of school-wide positive be-
havioral interventions and supports (PBIS) on the orga-
nizational health of elementary schools. School
Psychology Quarterly 23(4), 462-473. http://dx.doi.org
/10.1037%2Fa0012883

Buzhardt, J., Greenwood, C. R., Abbott, M., & Tapia,
Y. (2006). Research on scaling up evidence-based
instructional practice: Developing a sensitive measure
of the rate of implementation. Educational Technology
Research and Development, 54(5), 467-492.
http://dx.doi.org/10.1007%2Fsll423-006-0129-5
Carr, E., Dunlap, G., Horner, R., Koegel, R., Turnbull,
A., Sailor, W , . . . Fox, L. (2002). Positive behavior
support: Evolution of an applied science. Journal of Pos-
itive Behavior Interventions, 4, 4-16. http://dx.
doi.org/10.1177%2F 109830070200400102

Center for Mental Health in Schools. (2010). Turning around, transforming, and continuously improving schools: Federal proposals are still based on a two- rather than a three-component blueprint. Retrieved from http://smhp.psych.ucla.edu/pdfdocs/turning

Cherniss, C. (2006). School change and the microsociety program. Thousand Oaks, CA: Corwin.

Datnow, A. (2005). The sustainability of comprehensive school reform models in changing district and state contexts. Educational Administration Quarterly, 41(1), 121-151.

DeStefano, L., Dailey, D., Berman, K., & McInerney, M. (2001). Synthesis of discussions about scaling up effective practices (OSEP Publication Number HS97017002). Washington, DC: U.S. Department of Education, Office of Special Education Programs.

Eber, L., Sugai, G., Smith, C. R., & Scott, T. M.
(2002). Wraparound and positive behavioral interven-
tions and supports in the schools. Journal of Emotional
and Behavioral Disorders, 10{5), 171-180. http://
dx.doi.org/10.1177%2F10634266020100030501

Elias, M. J., Zins, J. E., Gtaczyk, P A , & Weissberg, R.

P. (2003). Implementation, sustainability, and scaling

up of social-emotional and academic innovations in

public schools. School Psychology Review, 32{3),

303-319.

Elliott, D. S., & Mihalic, S. (2004). Issues in dissemi-
nating and replicating effective prevention programs.
Prevention Science, 5(1), 47—52. http://dx.doi.org
/10.1023%2FB%3APREV.0000013981.28071.52

Fixsen, D. L., Naoom, S. F., Blase, K. B., Friedman, R.
M., & Wallace, F. (2005). Implementation research: A
synthesis of the literature. Tampa, FL: National Imple-
mentation Research Network, Louis de la Parte Florida
Mental Health Institute, University of Soutb Florida.
Available online at http://nirn.fmhi.usfedu/resources
/publications/Monogtaph/index.cfm

Fullan, M. (2002). The change leader. Educational
Leadership, 59(8), 16-21.

Fullan, M. (2005). Leadership and sustainability. Thou-
sand Oaks, CA: Corwin.

Futney, K. S., Hasazi, S. B., Clark, K , & Keefe-Hart-
nett, J. (2003). A longitudinal analysis of shift:ing pol-
icy landscapes in special and general education reform.
Exceptional Children, 70, 81-94.

Gager, P. J., & Elias, M. J. (1997). Implementing pre-
vention programs in high-risk environments: Applica-
tion of tbe resiliency paradigm. American Journal of
Orthopsychiatry 67{3), 3 6 3 – 3 7 3 . http://dx.doi.org
/10.1037%2Fh0080239

George, D., & Mallery, P (2003). SPSS for Windows
Step by Step: A Simple guide and reference. 11.0
update (4tb ed.). Boston, MA: Allyn & Bacon.

Gottfredson, D. C , & Gottfredson, G. D. (2002).
Quality of school-based prevention programs: Results
from a national survey. Journal of Research in Crime and
Delinquency, 39{l), 3 – 3 5 . h t t p : / / d x . d o i . o r g / 1 0
.1177%2F002242780203900101

Greenwood, C. R., Delquadri, J., &C Bulgren, J. (1993).
Current challenges to behavioral technology in the re-

form of schooling: Large-scale, high-quality implemen-
tation and sustained use of effective educational prac-
tices. Education and Treatment of Children, J6{4),
401-404.

Han, S. S., & Weiss, B. (2005). Sustainability of
teacher implementation of school-based mental health
programs. Journal of Abnormal Child Psychology, 33(6),
665-679. http://dx.doi.org/10.1007%2Fsl0802-005-
7646-2

Hatch, T (2000). What does it take to break the mold?

Rhetoric and reality in new American schools. Teachers

College Record, 102, 561-589. http://dx.doi.org/10

.llll%2F0l6l-4681.00068

Heller, M. F., & Firestone, W A. (1995). Who’s in

charge here? Sources of leadership for change in eight

schools. Elementary School Journal, 96(\), 6 5 – 8 6 .

http://dx.doi.org/10.1086%2F461815

Horner, R., Sugai, G., Smolkowski, K., Todd, A.,

Nakasato, J., & Esperanza, J. (2009). A randomized

control trial of school-wide positive behavior support in

elementary schools. Journal of Positive Behavior Inter-

ventions, 11{3), 1 1 3 – 1 4 4 . h t t p : / / d x . d o i . o r g / 1 0 .

1177%2F1098300709332067

Horner, R. H., & Sugai, G. (1999). Developing posi-
tive behavioral support systems. In G. Sugai & T. J.
Lewis (Eds.), Developing positive behavioral support for
students with challenging behaviors [Monograph].
Gouncil for Children with Behavioral Disorders.
Reston, VA.

Horner, R. H., Todd, A. W , Lewis-Palmer, T , Irvin, L.
K., Sugai, G., & Boland, J. B. (2004). The school-wide
evaluation tool (SET): A research instrument for assess-
ing school-wide positive behavior support. Journal of
Positive Behavior Interventions, 6{\), 3—12. http://
dx.doi.org/10.1177%2F 10983007040060010201

Hosmer, D. J. & Lemeshow, S. (1989). Applied Logistic
Regression. New York, NY: John Wiley.

Hsu, J. (1996). Multiple comparisons procedures. Lon-

don, England: Cbapman and Hall.

Huberman, A. M. (1983). School improvement strate-

gies that work: Some scenarios. Educational Leadership,

41{3), 23-27.

Joyce, B., &C Sbowers, B. (2002). Student achievement
through staff development (3rd ed.). Alexandria, VA:
Association for Supervision and Curriculum Develop-
ment.

Kam, C , Gteenberg, M. T , & Walls, C. T (2003). Ex-
amining the role of implementation quality in school-
based prevention using the PATHS curriculum.
Prevention Science, 4(\), 55-63. http://dx.doi.org/10
.1023%2FA%3A1021786811186


Kannapel, R J., & DeYoung, A. J. (1999). The rural
school problem in 1999: A review and critique of the
literature, fournal of Research in Rural Education, 15(2),
67-79.

Karachi, T W, Abbott, R. D., Catalano, R. E, Hag-
gerty, K. P, & Fleming, C. B. (1999). Opening the
black box: Using process evaluation measures to assess
implementation and theory building. American fournal
of Community Psychology, 27(5), 711-731. http://
dx.doi.org/10.1023%2FA%3A1022194005511

Lewis, T. J., & Sugai, C. (1999). Effective behavior
support: A systems approach to proactive school-wide
management. Eocus on Exceptional Children, 31(6),
1-24.

Martinez, M., & Harvey, J. (2004). Erom whole school
reform to whole system reform. Washington, DC:
National Clearing House for Comprehensive School
Reform.

Mclntosh, K, Chard, D. J., Boland, J. B., & Horner,
R. H. (2006). Demonstration of combined efforts in
school-wide academic and behavioral systems and inci-
dence of reading and behavior challenges in early ele-
mentary grades, fournal of Positive Behavior
Interventions, 8(3), 146-155.

Mclntosh, K , Horner, R., & Sugai, C. (2009). Sus-
tainabiliry of systems-level evidence-based practices in
schools: Current knowledge and future directions. In
W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.)
Handbook of Positive Behavior Support (pp. 327-352).
New York, NY: Springer, http://dx.doi.org/10.1007
%2F978-0-387-09632-2_l 4

McLaughlin, M. W , & Mitra, D. (2001). Theory-
based change and change-based theory: Going deeper,
going broader, fournal of Educational Change, 2(4),
301-323. http://dx.doi.org/10.1023%2FA%3A1014
616908334

Mihalic, S., & Irwin, K. (2003). Blueprints for violence
prevention: From research to real-world settings—Fac-
tors influencing the successful replication of model pro-
grams. Youth Violence and fuvenile fustice, 1(4),
307-329. http://dx.doi.org/10.1177%2F154l20400
3255841

Mihalic, S., Irwin, K, Fagan, A., Ballard, D., & Elliott,
D. (July, 2004). Successful program implementation:
Lessons from blueprints, fuvenile fustice Bulletin, 1-11.

Nelson, J., Hurley, K , Synhorst, L., & Epstein, M.
(2008). The Nebraska three-tiered behavioral, preven-
tion model case study. In C. Greenwood, T. Kra-
tochwill and M. Clements (Eds.) Schoolwide prevention
models (pp. 61-86). New York, NY: Guilford.
Nelson, J. R., Martella, R., & Marchand-Martella, N.
(2002). Maximizing student learning: The effects of a

comprehensive school-based program for preventing
problem behaviors, fournal of Emotional and Behavioral
Disorders, 10(3), 136-148.

Odom, S. L. (2009). The tie that binds: Evidence-
based practice, implementation science, and outcornes
for children. Topics in Early Childhood Special Educa-
tion, 29(1), 53-61.

Ringeisen, H., Henderson, K., & Hoagwood, K.
(2003). Context matters: Schools and the “research to
practice gap” in children’s mental health. School Psychol-
ogy Review, 32(2), 153-168.

Rog, D., Boback, N., Barton-Villagrana, H., Marrone-
Bennett, P, Cardwell, J., Hawdon, J., . . . Reischl, T.
(2004). Sustaining collaboratives: A cross-site analysis
of The National Funding Collaborative on Violence
Prevention. Evaluation and Program Planning, 27,
249-261.

Rohrbach, L. A., Graham, J. W , & Hansen, W B.
(1993). Diffusion of a school-based substance abuse
prevention program: Predictors of program implemen-
tation. Prevention Medicine, 22(2), 237-260. http://
dx.doi.org/10.1006%2Fpmed. 1993.1020

Safran, S. P, & Oswald, K. (2003). Positive behavior
supports: Can schools reshape disciplinary practices?
Exceptional Children, 69, 361-373.

SAS Institute. (2004). SAS/STAT User’s Cuide, Version
9.1. SAS Institute, Cary, North Carolina, USA.

Schräg, J. A. (1996). Systems change leading to better
integration of services for students with special needs.
School Psychology Review, 25(4), 489-496.

Seal, K. R., & Harmon, H. L. (1995). Realities of rural
school reform. Phi Delta Kappan, 77(2), 119-123.

Sindelar, P T, Shearer, D. K., Yendol-Hoppey, D., &
Liebert, T. W. (2005). The sustainability of inclusive
school reform. Exceptional Children, 72, 317-331.

Skiba, R. (2002). Special education and school disci-
pline: A precarious balance. Behavioral Disorders, 27(2),
81-97.

Slavin, R. E., Holmes, G., Madden, N. A., Chamber-
lain, A., & Cheung, A. (2010). Ejfects of a data-driven
district-level reform model. Retrieved from http://vvww
. b e s t e v i d e n c e . o r g / w o r d / d a t a _ d r i v e n _ r e f o r m
_Mar_09_2010

Solomon, D., Battistich, V., Watson, M., Schaps, E., &
Lewis, C. (2000). A six-district study of educational
change: Direct and mediated effects of the child devel-
opment project. Social Psychology of Education, 4(\),
3-51. http://dx.doi.org/10.1023%2FA%3A 1009609
606692

Sugai, G., & Horner, R. H. (2002). The evolution of
discipline practices: School-wide positive behavior

Exceptional Children

Supports. Child Ó’ Eamily Behavior Therapy, 24(112),
23-50. http://dx.doi.org/10.1300%2FJ019v24nOl_03

Sugai, G., Horner, R. H., Dunlap, G., Heineman, M.,
Lewis, T. J., Nelson, C , & Wilcox, B. (2000). Apply-
ing positive behavioral support and functional behav-
ioral assessment in schools. Journal of Positive Behavior
Interventions, 2(3), 131-143.

Sugai, G., Horner, R. H., & Lewis-Palmer, T. (2001).
Team implementation checklist. Eugene, OR: Educa-
tional and Community Supports, University of
Oregon.

Sugai, G., Lewis-Palmer, T., Todd, A., & Horner, R. H.
(2001). School-wide Evaluation Tool (version 2.0). Eu-
gene, OR: Educational and Community Supports,
University of Oregon.

Theobald, P, & Nachtigal, P (1995). Culture, commu-
nity and the promise of rural education. Phi Delta Kap-
pan, 77(2), 132-135.

Weissberg, R. P, & Greenberg, M. T. (1998). School
and community competence-enhancement and preven-
tion programs. In 1. E. Siegel & K. A. Renninger
(Eds.), Handbook of child psychology: Vol. 4. Child psy-

chology in practice (5th ed., pp. 877-954). New Yotk,
NY: Wiley.

Weissberg, R. P, & Utne-O’Brien, M. (2004). Positive
development: Realizing the potential of youth: What
works in school-based social and emotional learning
programs for positive youth development. The Ameri-
can Academy of Political and Social Science, 86,
591-691. http://dx.doi.org/10.1177%2F000271620
3260093

Yin, R. K., & White, J. L. (1984). Federal technical
assistance efforts: Lessons and improvement in education

for 1984 and beyond. Washington, DC: Cosmos.

ABOUT THE AUTHORS

JENNIFER H. COFFEY (Washington, DC CEC), Doctoral Student; and ROBERT H. HORNER (Oregon CEC), Professor, College of Education, University of Oregon at Eugene. Jennifer Coffey is now in the Office of Special Education Programs at the U.S. Department of Education, Washington, DC.

Address correspondence concerning this article to Robert H. Horner, College of Education, 1235 University of Oregon, Eugene, OR 97402 (robh@uoregon.edu).

This manuscript was supported in part by a grant from the Office of Special Education Programs, U.S. Department of Education (H326S980003). Opinions expressed herein are those of the authors and do not necessarily reflect the position of the U.S. Department of Education, and such endorsements should not be inferred.

Manuscript received November 2010; manuscript accepted June 2011.


Journal of Positive Behavior Interventions, 13(4), 208-218
© 2011 Hammill Institute on Disabilities
Reprints and permission: http://www.sagepub.com/journalsPermissions.nav
DOI: 10.1177/1098300710385348
http://jpbi.sagepub.com

Development and Initial Validation of a Measure to Assess Factors Related to Sustainability of School-Wide Positive Behavior Support

Kent McIntosh (University of British Columbia), Leslie D. MacKay (University of British Columbia), Amanda E. Hume (University of British Columbia), Jennifer Doolittle (U.S. Department of Education), Claudia G. Vincent (University of Oregon), Robert H. Horner (University of Oregon), and Ruth A. Ervin (University of British Columbia)

Action Editor: Don Kincaid

Corresponding Author: Kent McIntosh, University of British Columbia, 2125 Main Mall, Vancouver, BC V6T 1Z4. Email: kent.mcintosh@ubc.ca

Abstract

Sustainability of effective practices in schools is a critical area for research in any domain. The purpose of this article is to describe and evaluate the validity and reliability of a recently developed research instrument designed to evaluate schools' capacity to sustain school-wide positive behavior support (SWPBS) efforts at the universal tier. The School-Wide Universal Behavior Sustainability Index–School Teams (SUBSIST) was created to assess factors (of the context, implementer practices, and outcomes) that enhance or prevent sustainability of SWPBS. Content of the web-based survey was identified through literature review, and initial validation analyses included ratings of content validity by an expert panel (n = 21) and assessment of internal consistency, test–retest reliability, interrater reliability, and concurrent validity (with SWPBS fidelity of implementation data) through a pilot study (n = 25). Results indicated strong psychometric properties for assessing sustainability. The authors discuss the results in terms of future research in enhancing SWPBS sustainability.

Keywords: school-wide positive behavior support, systems change, sustainability, behavior assessment

Each year, schools around the world expend considerable resources implementing evidence-based practices, and most will not sustain beyond a few years (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Fullan, 2005; Latham, 1988). This resulting cycle of implementing new practices each year, as opposed to sustaining effective ones, is clearly not a new phenomenon (Berman & McLaughlin, 1976; Sarason, 1971), but it has come to represent business as usual in school systems, particularly in this age of increased accountability.

This cycle has two distinct types of costs. First, the frequent change in programs costs the system resources. There are the obvious monetary costs regarding the purchase and training associated with new programs, but there are also costs to staff willingness to implement new programs. When implementation is abandoned, there is a draining effect on enthusiasm for implementing change, and this energy can be replaced with cynicism when the next program is introduced (Elliott, Witt, Kratochwill, & Stoiber, 2002). Eventually, hesitant staff realize that if they wait long enough, it is only a matter of time before the new program will join the others in a virtual graveyard of discontinued innovations (Latham, 1988).

A second cost comes in terms of diminished student outcomes. When considering an evidence-based practice such as School-Wide Positive Behavior Support (SWPBS; Sugai & Horner, 2009), an initial investment in implementation can lead to important, valued outcomes. These outcomes include decreased problem behavior (Bradshaw, Mitchell, & Leaf, 2010), improved academic achievement (Horner et al., 2009; Lassen, Steele, & Sailor, 2006; McIntosh, Chard, Boland, & Horner, 2006), staff and administrator time regained as a result of fewer discipline referrals (Scott & Barrett, 2004), and student instructional time regained as a result of more time spent in class and on task (Algozzine & Algozzine, 2007). Failure to sustain SWPBS would result in the loss of these benefits for students and school personnel.

Given that SWPBS is currently being implemented in
more than 13,000 schools across the United States and Canada
(Center on Positive Behavioral Interventions and Supports,
2010), it is important to explore how school, district, and state/
provincial teams can enhance the sustainability of SWPBS.
McIntosh, Horner, and Sugai (2009) outlined a research
agenda to study this phenomenon and apply results directly to
schools. This agenda involves identifying factors related to
sustainability, exploring the mechanisms by which sustain-
ability occurs, and designing and testing large-scale interven-
tions designed to enhance sustainability.

Fortunately, there has been recent progress in this area
in the form of research exploring the sustainability of
SWPBS in particular. Doolittle (2006) completed a study of
285 schools identifying what critical features of SWPBS
implementation predicted (a) initial implementation (full
implementation within 3 years) and (b) sustained implemen-
tation (full implementation for 5 years). Her results indi-
cated that systems for teaching expectations and use of
school-wide data were significant predictors of initial
implementation, as measured by the School-Wide Evaluation
Tool (SET; Sugai, Lewis-Palmer, Todd, & Horner, 2001).
In contrast, effective administrator and school team leader-
ship and an active student reward system were significant
predictors of sustained implementation.

More recently, Bambara, Nonnemacher, and Kern (2009)
completed a qualitative interview study of factors affecting
sustainability of individual student support systems within
SWPBS. The authors identified five factors critical to sus-
tainability: school culture, building administrator support,
time efficiency, capacity building, and stakeholder involve-
ment. These factors were identified by respondents as poten-
tially both enablers and barriers. For example, a school
culture based on an understanding of the concepts of SWPBS
and priority for its use was considered an enabler, but a cul-
ture in which a large number of staff members were opposed
to SWPBS would be viewed as a barrier. Many of these
themes echoed the factors, such as staff culture and building
leadership, generated by Kincaid, Childs, Blase, and Wallace
(2007) and Lohrmann, Forman, Martin, and Palmieri (2008),
who studied enablers and barriers to initial SWPBS imple-
mentation. Their research also identified important aspects
within the broad areas of school culture and administrator
support. Hence, there does appear to be some overlap bet-
ween factors affecting initial implementation and sustain-
ability, but the extent of the shared variance has yet to be
studied extensively.

A Model of Sustainability
From the literature on sustainability, McIntosh, Horner, and
Sugai (2009) proposed a model of sustainability. This
model described the theorized mechanisms by which
school-based practices might sustain, as well as four factors

contributing to sustainability. These factors include prior-
ity, effectiveness, efficiency, and continuous regeneration,
a factor including the use of data to readapt the practice
over time. These factors were proposed to affect fidelity of
implementation, the critical component of sustainability.

A model describing how these factors relate to sustain-
ability is included in Figure 1. In this figure, the effect of
SWPBS on long-term student outcomes (i.e., improved
social competence and academic achievement, reduced
problem behavior) is mediated by fidelity of implementa-
tion. Sustained fidelity of implementation is affected by
the four sustainability factors: priority, effectiveness, effi-
ciency, and continuous regeneration. As such, these factors
indirectly affect SWPBS outcomes by acting on fidelity of
implementation.

However, this model is as yet untested, and given that
so many schools in the United States and Canada are cur-
rently implementing SWPBS, there is a tremendous oppor-
tunity to study sustainability on a large scale. Research
examining SWPBS sustainability could validate the model
and, as a result, provide specific actions that school teams
could take to enhance sustainability. Such results could
enhance sustainability of not only SWPBS but school-based
practices in general.

A Measure to Assess Factors
Related to Sustainability
With this goal in mind, the authors developed a measure
to assess the importance of factors and critical features of
SWPBS on its sustained implementation. A number of
studies on sustainability factors have used extensive inter-
views to generate themes for sustainability (Bambara et al.,
2009; Hieneman & Dunlap, 2001), and the goal of creating
this measure was to test elements of these themes on a
large, international scale. As such, a survey measure was

envisioned to identify which variables most significantly predicted sustainability and failure to sustain. Results could then be used to help long-term and newly implementing SWPBS schools take research-validated steps to enhance the sustainability of their systems.

[Figure 1. A model of School-Wide Positive Behavior Support sustainability: the four sustainability factors (priority, effectiveness, efficiency, and continuous regeneration) support sustained fidelity of implementation, which in turn drives the distal student outcomes of improved social competence, improved academic achievement, and reduced problem behavior.]

The School-Wide Universal Behavior Sustainability
Index–School Teams (SUBSIST; McIntosh, Doolittle,
Vincent, Horner, & Ervin, 2009; please contact the first
author for a copy of the measure) is an instrument designed
to assess the variables that enhance or prevent sustainabil-
ity of universal tier SWPBS. The measure is administered
online as a web-based survey and is intended to be com-
pleted at the school level, by school team members or per-
sonnel familiar with the school in question (e.g., external
coaches).

The measure is composed of 50 items, or statements
about the SWPBS systems, such as “A vast majority of
school personnel (80% or more) support SWPBS” and
“There is regular measurement of fidelity of implementa-
tion (e.g., Team Checklist, SET, Benchmarks of Quality),”
and perceived barriers, such as “There are high levels of
turnover of school personnel who served as key leaders
(‘champions’) of SWPBS.” These items are organized into
eight broad subscales, including priority, building leader-
ship, external leadership, effectiveness, efficiency, use of
data, capacity building, and potential barriers, plus a set of
open-ended and demographic questions.

When completing the SUBSIST, respondents are asked
four questions per item. Two questions assess the extent to
which each item is true for the school (a) during the 1st
year of implementation and (b) at the time of administra-
tion. Two questions assess perceived importance of the
item for (a) initial implementation and (b) sustainability.
Responses for each question are indicated through a
Likert-type scale. As a result, the SUBSIST generates
individual item scores as well as total scores for initial
implementation, sustainability, and perceived importance
at two points in time. The measure takes 40 to 60 minutes
to complete.
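To make this scoring scheme concrete, the following minimal Python sketch aggregates the four questions per item into the total scores described above. The authors do not publish scoring code; the field names, the invented ratings, and the 1 (not true) to 4 (very true) anchors are assumptions based on the description in this article.

    # Minimal sketch (hypothetical) of SUBSIST-style score aggregation.
    from statistics import mean

    # Each item yields four Likert ratings (1 = not true / not important,
    # 4 = very true / very important).
    responses = [
        # (in_place_year1, in_place_now, importance_initial, importance_sustain)
        (2, 4, 3, 4),
        (3, 3, 4, 4),
        (1, 3, 2, 4),
    ]

    # Average each of the four questions across items to get total scores.
    year1, now, imp_init, imp_sustain = (mean(vals) for vals in zip(*responses))
    print(f"Initial implementation: {year1:.2f}  Current: {now:.2f}")
    print(f"Importance (initial): {imp_init:.2f}  Importance (sustainability): {imp_sustain:.2f}")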

Test and Item Development
The SUBSIST measure was created from a bank of items
regarding critical features (of the context, the practice, and
the outcomes) hypothesized to affect sustainability. These
items were derived through an extensive literature review
(see McIntosh, Horner, & Sugai, 2009) and discussion
among the authors based on their experience with sustain-
ability, measurement, and a previous sustainability study
(Doolittle, 2006). Priority was placed on identifying mal-
leable variables (i.e., those that can be changed; Biglan,
2004). Inclusion of items in the bank was determined
based on convergence of the literature and consensus of
the authors.

These items were then mapped onto the sustainability
model factors and sorted into eight measure subscales.
Continuous regeneration, an important factor, was divided
into two subscales, use of data and capacity building.
Administrative leadership, a recurring theme in the litera-
ture, was identified as a distinct area for sustainability and
further divided into building and external (e.g., district and
state/provincial) leadership subscales, reflecting the differ-
ences in influence between these two leadership structures
identified in the literature (Bambara et al., 2009; Fixsen et al.,
2005; Fullan, 2005; Ransford, Greenberg, Domitrovich,
Small, & Jacobson, 2009). Items hypothesized to be barri-
ers to sustainability were included in an eighth subscale and
reverse scored, so that higher scores in each subscale would
indicate increased potential for sustainability.
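The reverse-scoring step can be sketched in a few lines of Python, assuming the 1-4 Likert range described elsewhere in the article (a hypothetical illustration, not the authors' code):

    # Reverse-score a barriers item so that higher scores always indicate
    # greater potential for sustainability (1-4 scale assumed).
    def reverse_score(rating: int, scale_min: int = 1, scale_max: int = 4) -> int:
        """Map 1 -> 4, 2 -> 3, 3 -> 2, 4 -> 1 on the default scale."""
        return scale_max + scale_min - rating

    barrier_ratings = [4, 3, 1, 2]  # 4 = barrier strongly present
    adjusted = [reverse_score(r) for r in barrier_ratings]
    print(adjusted)  # [1, 2, 4, 3]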

Finally, the survey itself was constructed in accordance
with recommended survey design practices (Alreck &
Settle, 1995; Dillman, 1983). These recommendations
include a brief introduction before the questions begin,
common response formats that vary little across questions,
inclusion of a few open-ended responses, and asking demo-
graphic questions at the end of the survey. In addition, as
the survey was web-based, care was taken to present items
on each page to keep the response choices in the same area
of the screen and minimize the need to scroll down to
answer, thereby reducing the need for extensive mouse
navigation.

Validation
Research was needed to determine how well the SUBSIST
assesses factors related to sustainability of
SWPBS. The remainder of this article describes
initial validation and piloting of the SUBSIST measure
through two investigations. First, the authors convened a
review of the instrument by an expert panel to assess con-
tent validity. Second, the measure was pilot tested with
SWPBS school team leaders and coaches. Pilot study
results were used to finalize wording, obtain reliability
information, and identify concurrent validity with extant
fidelity of implementation data. The article is organized
into two studies: the content validity study and the pilot
study.

Content Validity Study
The first study examined the content validity of the SUBSIST.
The authors convened an expert panel to review the mea-
sure and completed a series of analyses as outlined by
Rubio, Berg-Wagner, Tebb, Lee, and Rauch (2003) to
assess the validity of the items, response format, and over-
all measure. The analyses explored the reliability and level
of expert panel members’ ratings of the measure’s content
validity.


Method

Participants. Forty-one individuals were identified and
invited to serve on an expert panel to evaluate the content
validity of the measure and its items. These individuals
were selected based on one or more of the following crite-
ria: (a) extensive experience with sustained implementation
of SWPBS as a district, provincial, or state coordinator
(21 individuals); (b) current provision of training and coor-
dination as part of the National PBIS Center (13 individu-
als); or (c) experience conducting and publishing research
on general systems change in schools in the past 5 years
(7 individuals). As noted by Grant and Davis (1997), an
adequate number of content experts depends on the desired
level of expertise and diversity of knowledge needed. A
range of 2 to 20 experts has been suggested in the literature
(Gable & Wolf, 1993; Lynn, 1986; Sheatsley, 1983; Walz,
Strickland, & Lenz, 1991). Forty-one experts were sent
paper copies of the SUBSIST measure and a content valid-
ity scale. Twenty-one completed the scale, a response rate
of 51%, far above the commonly held criterion for mail sur-
vey response rates (30%; Dillman, 1983).

Measure. A content validity scale was designed to obtain
information from the expert panel on each item and the
SUBSIST measure as a whole. The scale included questions
on the extent to which the measure assessed sustainability
and was clear to understand and complete. Specifically,
each of the 52 original items was associated with two ques-
tions (one assessing whether the item represented sustain-
ability overall and one assessing whether the item represented
the subscale in particular). There were also eight additional
questions, two questions about the overall measure and six
questions about the survey format (e.g., question, response,
and anchors for the Likert-type scale), for a total of 112
questions. All answers were provided through a 4-point
Likert-type scale (from strongly disagree to strongly agree).
The scale also included space for open-ended comments
and suggestions to add, remove, or reword specific items.

Data analysis. The authors followed the procedures for
assessing content validity specified by Rubio and colleagues
(2003). Two sets of analyses were performed. As a prelimi-
nary analysis, expert panel reliability was calculated to
determine the agreement among expert panel members on
their ratings. This analysis was used not to assess the reli-
ability of the SUBSIST measure itself but rather how simi-
larly the expert panel members rated each item and the
measure as a whole. High expert panel reliability indicates
that the panel members agreed on the extent to which items
were important for sustainability, and low reliability indi-
cates disagreement on the importance of items for sustain-
ability. The 4-point scale was dichotomized into agree or
disagree, in accordance with previous research (Davis,
1992; Grant & Davis, 1997; Lynn, 1986; Rubio et al., 2003).
Expert panel reliability was calculated for each individual

question. To calculate the overall expert panel reliability for
the scale, the number of questions with a reliability score of
at least .80 was divided by the total number of questions on
the scale. This approach is recommended for studies that
involve a sample of experts that exceeds five (Lynn, 1986).
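One plausible reading of this procedure is sketched below in Python, with invented ratings; the exact computation is specified by Rubio et al. (2003) and is not reproduced here. Ratings of 3 or 4 on the 4-point scale are treated as agree, and per-question agreement is taken as the proportion of experts in the majority category (an assumption).

    # Minimal sketch of the expert panel reliability computation.
    ratings_by_question = [
        [4, 4, 3, 4, 2],   # question 1: five experts' hypothetical ratings
        [1, 2, 4, 3, 3],   # question 2
    ]

    def agreement(ratings):
        # Dichotomize 4-point ratings into agree (3-4) vs. disagree (1-2).
        binary = [1 if r >= 3 else 0 for r in ratings]
        majority = max(binary.count(1), binary.count(0))
        return majority / len(binary)

    per_question = [agreement(q) for q in ratings_by_question]
    # Overall reliability: share of questions with agreement of at least .80.
    overall = sum(a >= 0.80 for a in per_question) / len(per_question)
    print(per_question, overall)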

The second analysis involved calculating the Content
Validity Index (CVI), used to quantify results regarding the
extent to which each item and the total measure represented
the construct of sustainability (Rubio et al., 2003). It was
calculated based on the representativeness of the measure.
The CVI for each item was computed by counting the num-
ber of experts who rated the item as assessing or strongly
assessing sustainability and dividing that number by the
total number of experts. The final CVI number is the pro-
portion of experts who deemed the item as content valid.
The CVI for the measure was estimated by calculating the
average CVI across the items. A CVI of .80 is recommended
for new measures (Davis, 1992).
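The CVI computation can be illustrated with a short Python sketch (ratings invented for illustration; ratings of 3-4 are treated as "assessing or strongly assessing" sustainability):

    # Minimal sketch of the Content Validity Index (CVI) computation.
    item_ratings = [
        [4, 4, 3, 4, 2, 3],  # item 1: six experts' hypothetical ratings
        [2, 2, 4, 3, 1, 3],  # item 2
    ]

    def item_cvi(ratings):
        # Proportion of experts who deemed the item content valid.
        return sum(r >= 3 for r in ratings) / len(ratings)

    cvis = [item_cvi(r) for r in item_ratings]
    measure_cvi = sum(cvis) / len(cvis)   # average item CVI
    print(cvis, round(measure_cvi, 2))    # a CVI of .80+ is recommended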

Results
Regarding expert panel reliability, expert ratings for 105
of the 112 questions were above the .80 criterion. The
overall expert panel reliability for the scale was .94. These
results indicate that the experts provided highly similar
ratings of the importance of each item for measuring sus-
tainability, and the CVI results could be interpreted with
confidence.

The CVI for each item was calculated to determine the
proportion of experts who deemed each item as content
valid. Of the 112 questions on the content validity scale, all but one
overall question and the questions associated with four indi-
vidual items were rated above the .80 criterion. The CVI for
the measure as a whole was .95.

Changes to Measure
Of the four individual items that fell below the content
validity criterion, two of the items were deleted, and two
items were reworded based on panel feedback to improve
their clarity and specificity. The expert panel reliability and
CVI were recalculated with the two deleted items omitted.
The revised expert panel reliability was .97, and the revised
CVI score remained at .95.

Summary
The expert panel reliability data indicated that the experts’
responses were consistent, and they rated the SUBSIST,
including its items, questions, and response format, to be a
valid measure of critical features related to sustainability.
Feedback from the experts was used to revise the survey,
resulting in a reduction of two items (from 52 to 50) and
rewording of two retained items, as well as revision of the
order of items. These changes were incorporated into the
SUBSIST measure before the pilot testing took place.

Pilot Study
The pilot study was conducted to examine the psychometric
properties of the SUBSIST measure. Respondents from
schools implementing universal tier SWPBS completed
the SUBSIST. Specifically, the following information was
generated: (a) reliability (internal consistency, test–retest,
and interrater reliability) and (b) concurrent validity (with a
SWPBS fidelity of implementation measure).

Method
Settings and participants. Personnel serving 14 public

SWPBS schools in five states (Maryland, Michigan,
Minnesota, New Hampshire, and Oregon) participated in
the pilot study. Of these, 11 were elementary schools, 1 was
a middle school, and 2 were secondary schools. Most
schools (79%) reported an enrolment between 300 and 749.
The majority of schools were in suburban regions (36%) or
small to medium cities (36%), and all were public schools.
Most schools (57%) reported the proportion of students
receiving free or reduced-price lunch to be between 20%
and 49%. The average number of years implementing
SWPBS was 4.9, ranging from 2 to 10 years. Fidelity of
implementation data, from either the SET (Sugai et al.,
2001) or Benchmarks of Quality (BoQ; Kincaid, Childs, &
George, 2005), was available for 11 of the 14 schools. Of
these schools, 9 met or exceeded criteria for adequate
implementation (80% on the SET; 70% on the BoQ).

Twenty-five participants in the 14 SWPBS schools com-
pleted the surveys. Fourteen participants served as school
SWPBS team leaders (also known as internal coaches), and
11 served as coaches (also known as district or external
coaches) for the school SWPBS teams. As a result, the sam-
ple included an intact dyad of school team leader and coach
from 11 schools implementing SWPBS. Of the 25 total par-
ticipants, 21 (84%) opted to complete the survey twice,
allowing for test–retest reliability to be calculated.

Measures. The SUBSIST measure, as revised through the
content validity study, was administered to pilot study par-
ticipants. The pilot survey allowed participants to complete
the survey and rate its overall appropriateness and ease of
completion. The survey also included open fields after each
item for respondents to suggest specific rewording of items.
For rating items “currently in place,” the minimum average
score possible was 1 (not true) and the maximum score pos-
sible was 4 (very true). For the sample, the mean SUBSIST
score for items currently in place was 3.51, with a standard
deviation of 0.24 and a range of 3.05 to 3.91.

As there are no existing research-validated measures of
factors related to sustainability, the SUBSIST measure was

instead compared to schools’ fidelity of implementation for
the current school year, the core indicator of sustained
implementation (Han & Weiss, 2005; McIntosh, Horner, &
Sugai, 2009). The SET (Sugai et al., 2001) is an external
evaluation of a school’s universal SWPBS system that
includes a site visit, observations, and interviews of admin-
istrators, staff, and students. Administration takes 2 hours
and produces the percentage of critical features imple-
mented in seven subscales, plus an overall implementation
percentage. Research on the SET has identified that it is
sensitive to implementation of SWPBS and that an 80%
overall implementation criterion leads to significantly
improved student outcomes (Bradshaw et al., 2010; Horner
et al., 2009). Psychometric data indicates strong evidence of
construct and concurrent validity, interrater and test–retest
reliability, and internal consistency (Horner et al., 2004).
Concurrent validity and internal consistency data have since
been revalidated on a separate, larger sample of schools
(Vincent, Spaulding, & Tobin, 2010).

Extant SET data were available for 7 of the 14 schools in
the sample for the 2008–2009 school year, the same year
the SUBSIST was administered. These schools had a mean
overall SET score of 0.87 and a standard deviation of 0.08.
Five schools were above the 80% criterion, and two schools
were below the criterion.

Procedure. The study targeted school and district person-
nel implementing SWPBS in K-12 public schools. The
authors contacted state SWPBS coordinators in five states
(Maryland, Michigan, Minnesota, New Hampshire, and
Oregon) and asked them to recommend experienced school
team leaders and coaches with experience in sustainability
and interest in participating in a pilot study. The state
coordinators identified the schools and participants for
participation.

The participants were asked to complete the survey
twice: once at their earliest convenience and again approxi-
mately 2 weeks after their initial completion. The option
was given for the survey to be completed online or in paper
format, but all participants chose to complete the survey
online.

Data analysis. To assess multiple forms of reliability,
internal consistency, test–retest, and interrater reliability
scores were calculated. To assess internal consistency reli-
ability, Cronbach’s alpha coefficients were computed based
on responses to the questions “To what extent is [the item]
true for your school right now?” and “To what extent was
[the item] true for your school during the first 12 months of
implementation?” For the 21 participants who completed
the SUBSIST twice, test–retest reliability was calculated
using the total scores obtained at Times 1 and 2 by dividing
the lower score by the higher score. The dyads consisting of
a team leader and coach (n = 11 dyads) completed the survey.
Interrater agreement was calculated by dividing the lower
score by the higher score.
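For illustration, the reliability indices described above can be sketched as follows. The lower-over-higher ratio follows the text; cronbach_alpha uses the standard formula and is included as an illustration rather than the authors' code, and all scores are invented.

    # Minimal sketch of the reliability computations (hypothetical data).
    def ratio_reliability(score_a: float, score_b: float) -> float:
        """Divide the lower total score by the higher total score."""
        return min(score_a, score_b) / max(score_a, score_b)

    def cronbach_alpha(item_scores):
        """item_scores: one list of respondents' ratings per item."""
        from statistics import pvariance
        k = len(item_scores)
        totals = [sum(vals) for vals in zip(*item_scores)]  # per respondent
        item_var = sum(pvariance(item) for item in item_scores)
        return k / (k - 1) * (1 - item_var / pvariance(totals))

    print(ratio_reliability(172, 180))  # e.g., test-retest or interrater
    print(cronbach_alpha([[3, 4, 4, 2], [4, 4, 3, 2], [3, 3, 4, 1]]))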

To assess concurrent validity, SUBSIST scores for items
currently in place (M = 3.61, SD = 0.22) were correlated with
extant SET data for the 2008–2009 school year (M = .87,
SD = .08), the same year the SUBSIST was administered.
Scores for schools above and below the 80% SET criterion
were compared with an independent samples t test. In addi-
tion, correlations for coaches and team leaders were compared
to see if the strengths of the relationships differed by role.
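A minimal sketch of these analyses, using SciPy (an assumption; any statistics package would do) and invented school-level scores:

    # Hypothetical illustration of the concurrent validity analyses.
    from scipy import stats

    subsist = [3.5, 3.7, 3.9, 3.4, 3.6, 3.8, 3.2]    # invented averages
    set_avg = [0.82, 0.88, 0.95, 0.78, 0.85, 0.92, 0.75]

    r, p = stats.pearsonr(subsist, set_avg)          # overall correlation
    print(f"r = {r:.2f}, p = {p:.3f}")

    # Compare schools at/above versus below the 80% SET criterion.
    above = [s for s, f in zip(subsist, set_avg) if f >= 0.80]
    below = [s for s, f in zip(subsist, set_avg) if f < 0.80]
    t, p = stats.ttest_ind(above, below)             # independent samples t test
    print(f"t = {t:.2f}, p = {p:.3f}")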

Results
Reliability. Cronbach’s alpha coefficients were calculated

based on responses to the questions “To what extent is [the
item] true for your school right now?” and “To what extent
was [the item] true for your school during the first 12 months
of implementation?” The type of respondent was also con-
sidered, and reliabilities for both team leaders and coaches
were calculated (see Table 1). Alpha coefficients ranged
from .77 to .94, indicating strong internal consistency for
a survey measure. The test–retest reliability was .96 for
both team leaders and coaches. Between team leaders and
coaches, an average interrater reliability of .95 was found.

Concurrent validity. Correlations for the concurrent valid-
ity analyses are presented in Table 2. The overall correla-
tion between the two measures was moderate and statistically
significant (r = .68, p < .05). As seen, there was a wide range of correlations between the subscales of the two measures. The SUBSIST priority subscale was significantly correlated with the expectations taught, reward system, and management subscales and the SET average score. The two SUBSIST leadership subscales were strongly and significantly correlated with the district support subscale. Effectiveness was significantly correlated with district support and the SET average, and efficiency was significantly correlated with expectations taught, management, and the SET average score. Use of data, capacity building, and potential barriers were not significantly related to any of the SET subscales, indicating the possibility that they may measure domains of sustainability not captured by the SET. Overall, these results show a moderate relation with fidelity of implementation for the year assessed. Results indicate that the SUBSIST measures a construct that is similar to but not exactly the same as that measured by the SET.

In addition, the difference in average scores between
schools with SET scores at or above 80% (M = 3.65, SD = 0.23)
and schools with SET scores below 80% (M = 3.46, SD = 0.09)
was not statistically significant (t = 1.58, p = .14), though
the small sample size indicated insufficient power to detect
a significant difference. Given the low power, the relation
between the SUBSIST and SET was assessed in a visual
format through the scatterplot shown in Figure 2. As seen,
the best-fit line indicates a linear relation between the mea-
sures, with scores from coaches and team leaders clustering
around this line. In keeping with the reliability data, correla-
tions between the SUBSIST and SET average scores for
coaches (r =.73, n = 7) and team leaders (r = .61, n = 6) were
not significantly different.

Discussion
The purpose of these studies was to examine the validity
and reliability of the SUBSIST, a measure assessing fac-
tors related to sustainability of universal SWPBS sys-
tems. The measure’s content validity was assessed through
an expert panel with experience in sustained SWPBS
implementation and published research in the area of
school systems change. The expert raters consistently
indicated that the measure was a valid tool for assessing
the important factors that affect sustainability. Results
from the pilot study indicated high levels of three indices
of reliability, and comparison to fidelity of implementa-
tion data indicated that the SUBSIST is sufficiently
related to external evaluations of fidelity of implementa-
tion. In sum, these results provide an initial indication
that the SUBSIST is a valid and reliable measure for
assessing factors influencing sustainability of SWPBS.
The SUBSIST shows promise as a research measure for
understanding how and why school SWPBS efforts are
sustained or abandoned.

Results from the content validity study were both prom-
ising and useful for further measure development. Overall,
the experts expressed strong support for the SUBSIST as a
sustainability measure. They rated nearly all of the items as
valid critical features for sustainability, and those that were
rated below the criterion were removed or revised. The
experts also provided extensive descriptive comments that
were used to revise the measure’s organization of questions
and layout. Given these ratings and the changes made, there
is adequate evidence of content validity.

After this refinement of the SUBSIST, pilot testing pro-
vided encouraging data regarding use with its intended
respondents. Results indicated that the measure has ade-
quate internal consistency, or in other words, that the sus-
tainability items are related to one another. Moreover,
scores were stable across both time and respondents.
Table 1. Internal Consistency Results for the SUBSIST (in Terms of Cronbach's Alpha)

Question                         Team leader   Coach
Item in place currently              .84         .77
Item in place first 12 months        .94         .92
Both questions                       .94         .93

Note. N = 25; n (team leaders) = 14; n (coaches) = 11. SUBSIST = School-Wide Universal Behavior Sustainability Index–School Teams.

Although the data from the pilot study could potentially have identified whether team leaders or coaches were better respondents for the SUBSIST, the results indicate that respondents in both roles provide valid, reliable responses for measuring sustainability.

Table 2. Correlations Among SUBSIST and SET Subscales (Upper Triangle)

SUBSIST subscales:
 1. Priority (r with 2-17): .25 .24 .61* .63* .51 -.08 .11 .65* .22 .67* .67* .14 -.01 .62* .11 .67*
 2. Building leadership (r with 3-17): .82** .48 .06 .00 -.01 -.28 .51 -.34 -.12 -.16 .01 -.41 .11 .73** .24
 3. External leadership (r with 4-17): .44 .14 .10 .26 -.28 .63* -.10 -.06 -.10 .01 -.50 .02 .71** .31
 4. Effectiveness (r with 5-17): .41 .44 .40 .30 .84** -.11 .31 .50 .04 -.00 .35 .57* .61*
 5. Efficiency (r with 6-17): .69** .00 .35 .65* .19 .66* .54 .33 -.12 .62* .29 .74**
 6. Use of data (r with 7-17): .31 .25 .69** .19 .40 .26 .25 -.27 .35 .32 .53
 7. Capacity building (r with 8-17): .26 .50 -.07 -.06 .01 -.13 -.15 -.05 .26 .08
 8. Potential barriers (r with 9-17): .28 .09 .19 .33 .07 .36 .07 -.17 .13
 9. SUBSIST average (r with 10-17): .01 .38 .39 .14 -.28 .41 .63* .68*
SET subscales:
10. Expectations defined (r with 11-17): .40 .46 .20 .03 -.17 -.45 .29
11. Expectations taught (r with 12-17): .87** .56* .34 .78** -.16 .85**
12. Reward system (r with 13-17): .28 .38 .62* -.18 .77**
13. Violation system (r with 14-17): .54 .22 .04 .59*
14. Monitoring and decision making (r with 15-17): .06 -.49 .12
15. Management (r with 16-17): .11 .67*
16. District support (r with 17): .34
17. SET average

Note. n = 16. SUBSIST = School-Wide Universal Behavior Sustainability Index–School Teams; SET = School-Wide Evaluation Tool (Sugai et al., 2001).
*p < .05. **p < .01.

Concurrent validity results were also promising. Given
that there are currently no sustainability measures for com-
parison, fidelity of implementation for the year assessed
provided a best alternative, as durable, high fidelity of
implementation over time is the defining quantitative mea-
sure of sustained implementation. Results indicated that the
SUBSIST and SET assess related but unique constructs.
The differences in the scores according to SET criteria and
the scatterplot show a moderate relation between the mea-
sures, showing that high SUBSIST scores are associated
with high SET scores. The priority, effectiveness, and effi-
ciency subscales were most related to the SET average.
However, some SUBSIST subscales, such as use of data,
capacity building, and potential barriers, were not related to
the SET in this sample. One might expect the SUBSIST to
be more strongly correlated with the SET, but the SET was
designed to measure actual fidelity of implementation,
briefly, and at one point in time. In contrast, the SUBSIST
measures a broader range of features, including conditions
of the environment that influence fidelity of implementa-
tion, and assesses some constructs more comprehensively.
For example, the SET monitoring and decision making
subscales focus exclusively on ODR data, whereas the
SUBSIST use of data subscale measures a variety of out-
comes and fidelity data. Although there are some items that
overlap (e.g., building administrator regularly attends team

meetings), only 12% of the SUBSIST's items are similar to those of the SET. Given these points, the moderate correlation was to be expected. A possible conclusion is that fidelity of implementation can be seen as an outcome of the presence of sustainability factors as measured by the SUBSIST, though future studies will be needed to examine this relation in more detail.

[Figure 2. Scatterplot of SUBSIST average scores (3.0-4.0) against SET average scores (75-100), with best-fit line. Squares = coaches; diamonds = team leaders; SUBSIST = School-Wide Universal Behavior Sustainability Index–School Teams; SET = School-Wide Evaluation Tool.]

Limitations
There are some limitations in these studies that are worth
noting. First, the sample size of the pilot study was small,
particularly for the concurrent validity analyses, and did not
include schools that had completely abandoned SWPBS.
Moreover, the sample was not large enough to assess more
complex questions, such as whether differential experience
with SWPBS implementation affected either the responses
provided or reliability, or whether some items were more or
less associated with fidelity of implementation. Planned
future research with the SUBSIST will examine these ques-
tions with a sample of both sustaining and nonsustaining
schools. Finally, a factor or principal components analysis
with a larger sample will be critical to validate the organi-
zation of the items into the existing eight subscales. As
such, these findings should be viewed as tentative until a
larger study can validate these conclusions.

Implications for Practice
The SUBSIST was designed first and foremost as a research
instrument to identify principles and factors affecting sus-
tainability. Its comprehensiveness and assessment of initial
implementation and perceptions data have the potential to
answer important questions regarding sustainability, but its
length makes it impractical for regular use by practitioners.
Given the widespread interest in assessing and enhancing
the sustainability of SWPBS across North America, we have
adapted the SUBSIST into a self-assessment tool for use by
school personnel. The SUBSIST Checklist (McIntosh,
2010; please contact the first author for a copy of the mea-
sure) is based on the format of the self-assessment from the
School-Wide PBS Implementer’s Blueprint (Sugai et al.,
2005) and is intended for school teams and coaches to assess
critical features and create action plans to enhance sustain-
ability. It takes approximately 30 minutes to complete.
There are 50 items on the checklist; they are the same as
those on SUBSIST, but only the “in place” questions are
asked for each item, reducing the number of questions by
75%. In addition, the potential barriers subscale was revised
to assess the presence of strategies to address specific bar-
riers, as opposed to solely the barriers themselves. This
change provides teams with options, as opposed to “admiring
the problem” (Curtis, Castillo, & Cohen, 2008). However,
it should be noted that because the format and some items
of SUBSIST were modified to create the checklist, the
checklist’s own psychometric properties are unknown.

Future Directions
Much of what we know about sustainability comes from
qualitative and case study research. However, the SUBSIST
presents an opportunity to use the information obtained from
these studies and test hypotheses on a large scale. Results may
lead to information on overall factors affecting sustainability
and regional variations based on different external support
provided at the district or provincial/state level. Rather than
relying on measures based solely on theory or hypothesized
factors, school and district teams could focus their attention on
the most critical aspects of sustainability.

There are currently no evidence-based interventions at the
school level to sustain implementation after initial training or
to set the conditions so that student outcomes are maintained or
further improved. To address this significant gap in both
research and practice, it will be necessary to identify and
develop interventions to promote sustainability. It is hoped that
the use of the SUBSIST measure will identify the most impor-
tant areas to target for intervention. Such research could help
improve the sustainability of any evidence-based practice in
schools, and by extension, improve important outcomes for
more students.

Acknowledgments

The authors thank the expert panel and pilot study participants for
sharing their time and experience. The authors also thank the
action editor, reviewers, and Dr. Bruno Zumbo for their helpful
suggestions.

Declaration of Conflicting Interests

The authors declared no conflicts of interests with respect to the
authorship and/or publication of this article.

Funding

The authors disclosed receipt of the following financial support
for the research and/or authorship of this article:

This research was supported by a grant from the UBC Hampton
Endowment Fund. Opinions expressed herein do not necessarily
reflect the policy of the organization, and no official endorsement
should be inferred.

References

Algozzine, K., & Algozzine, B. (2007). Classroom instructional ecology and school-wide positive behavior support. Journal of Applied School Psychology, 24, 29–47.

Alreck, P. L., & Settle, R. B. (1995). The survey research handbook: Guidelines and strategies for conducting a survey. New York, NY: McGraw-Hill.

Bambara, L., Nonnemacher, S., & Kern, L. (2009). Sustaining school-based individualized Positive Behavior Support: Perceived barriers and enablers. Journal of Positive Behavior Interventions, 11, 161–178.

Berman, P., & McLaughlin, M. W. (1976). Implementation of educational innovations. Educational Forum, 40, 345–370.

Biglan, A. (2004). Contextualism and the development of effective prevention practices. Prevention Science, 5, 15–21.

Bradshaw, C. P., Mitchell, M. M., & Leaf, P. J. (2010). Examining the effects of school-wide positive behavioral interventions and supports on student outcomes: Results from a randomized controlled effectiveness trial in elementary schools. Journal of Positive Behavior Interventions, 12, 133–148.

Center on Positive Behavioral Interventions and Supports. (2010). Schools that are implementing SWPBIS. Retrieved from www.pbis.org

Curtis, M. J., Castillo, J. M., & Cohen, R. M. (2008). Best practices in system-level change. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology (Vol. 5, pp. 887–901). Bethesda, MD: National Association of School Psychologists.

Davis, L. (1992). Instrument review: Getting the most from your panel of experts. Applied Nursing Research, 5, 194–197.

Dillman, D. A. (1983). Mail and other self-administered questionnaires. In P. Rossi, J. D. Wright, & A. B. Anderson (Eds.), The handbook of survey research (pp. 359–377). New York, NY: Academic Press.

Doolittle, J. H. (2006). Sustainability of positive behavior supports in schools (Unpublished doctoral dissertation). University of Oregon, Eugene.

Elliott, S. N., Witt, J. C., Kratochwill, T. R., & Stoiber, K. C. (2002). Selecting and evaluating classroom interventions. In M. R. Shinn, H. M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Preventive and remedial approaches (pp. 243–294). Bethesda, MD: National Association of School Psychologists.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: Synthesis of the literature (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, the National Implementation Research Network.

Fullan, M. (2005). Leadership and sustainability. Thousand Oaks, CA: Corwin Press.

Gable, R. K., & Wolf, J. W. (1993). Instrument development in the affective domain: Measuring attitudes and values in corporate and school settings. Boston, MA: Kluwer Academic.

Grant, J. S., & Davis, L. L. (1997). Selection and use of content experts for instrument development. Research in Nursing & Health, 20, 269–274.

Han, S. S., & Weiss, B. (2005). Sustainability of teacher implementation of school-based mental health programs. Journal of Abnormal Child Psychology, 33, 665–679.

Hieneman, M., & Dunlap, G. (2001). Factors affecting the outcomes of community-based behavioral support: II. Factor category importance. Journal of Positive Behavior Interventions, 3, 67–74.

Horner, R. H. (2009, March). Expanding the science, values and vision of Positive Behavior Support. Paper presented at the Sixth International Association for Positive Behavior Support Conference, Jacksonville, FL.

Horner, R. H., Sugai, G., Smolkowski, K., Eber, L., Nakasato, J., Todd, A. W., & Esperanza, J. (2009). A randomized, wait-list controlled effectiveness trial assessing school-wide positive behavior support in elementary schools. Journal of Positive Behavior Interventions, 11, 133–144.

Horner, R. H., Todd, A. W., Lewis-Palmer, T., Irvin, L. K., Sugai, G., & Boland, J. B. (2004). The School-wide Evaluation Tool (SET): A research instrument for assessing school-wide positive behavior support. Journal of Positive Behavior Interventions, 6, 3–12.

Kincaid, D., Childs, K., Blase, K. A., & Wallace, F. (2007). Identifying barriers and facilitators in implementing schoolwide positive behavior support. Journal of Positive Behavior Interventions, 9, 174–184.

Kincaid, D., Childs, K., & George, H. (2005). School-wide benchmarks of quality (Unpublished instrument). University of South Florida, Tampa.

Lassen, S. R., Steele, M. M., & Sailor, W. (2006). The relationship of school-wide positive behavior support to academic achievement in an urban middle school. Psychology in the Schools, 43, 701–712.

Latham, G. (1988). The birth and death cycles of educational innovations. Principal, 68, 41–43.

Lohrmann, S., Forman, M., Martin, S., & Palmieri, S. (2008). Social context and personal belief barriers that impede the adoption of PBS at the universal level of implementation. Journal of Positive Behavior Interventions, 10, 256–259.

Lynn, M. (1986). Determination and quantification of content validity. Nursing Research, 35, 382–385.

McIntosh, K. (2010). SUBSIST sustainability checklist. Vancouver, Canada: University of British Columbia.

McIntosh, K., Chard, D. J., Boland, J. B., & Horner, R. H. (2006). Demonstration of combined efforts in school-wide academic and behavioral systems and incidence of reading and behavior challenges in early elementary grades. Journal of Positive Behavior Interventions, 8, 146–154.

McIntosh, K., Doolittle, J. D., Vincent, C. G., Horner, R. H., & Ervin, R. A. (2009). School-wide universal behavior sustainability index: School teams. Vancouver, Canada: University of British Columbia.

McIntosh, K., Horner, R. H., & Sugai, G. (2009). Sustainability of systems-level evidence-based practices in schools: Current knowledge and future directions. In W. Sailor, G. Dunlap, G. Sugai, & R. H. Horner (Eds.), Handbook of positive behavior support (pp. 327–352). New York, NY: Springer.

Ransford, C. R., Greenberg, M. T., Domitrovich, C. E., Small, M., & Jacobson, L. (2009). The role of teachers' psychological experiences and perceptions of curriculum supports on the implementation of a social and emotional learning curriculum. School Psychology Review, 38, 510–532.

Rubio, D. M., Berg-Weger, M., Tebb, S. S., Lee, E. S., & Rauch, S. (2003). Objectifying content validity: Conducting a content validity study in social work research. Social Work Research, 27, 94–104.

Sarason, S. B. (1971). The culture of the school and the problem of change. Boston, MA: Allyn & Bacon.

Scott, T. M., & Barrett, S. B. (2004). Using staff and student time engaged in disciplinary procedures to evaluate the impact of school-wide PBS. Journal of Positive Behavior Interventions, 6, 21–27.

Sheatsley, P. B. (1983). Questionnaire construction and item writing. In P. Rossi, J. D. Wright, & A. B. Anderson (Eds.), The handbook of survey research (pp. 195–230). New York, NY: Academic Press.

Sugai, G., & Horner, R. H. (2009). Defining and describing schoolwide positive behavior support. In W. Sailor, G. Dunlap, G. Sugai, & R. H. Horner (Eds.), Handbook of positive behavior support (pp. 307–326). New York, NY: Springer.

Sugai, G., Horner, R. H., Sailor, W., Dunlap, G., Eber, L., Lewis, T. J., . . . Nelson, C. M. (2005). School-wide positive behavior support: Implementers' blueprint and self-assessment. Eugene: University of Oregon.

Sugai, G., Lewis-Palmer, T. L., Todd, A. W., & Horner, R. H. (2001). School-wide Evaluation Tool (SET). Eugene, OR: Educational and Community Supports. Retrieved from http://www.pbis.org

Vincent, C. G., Spaulding, S. A., & Tobin, T. J. (2010). A reexamination of the psychometric properties of the School-wide Evaluation Tool (SET). Journal of Positive Behavior Interventions, 12, 161–179.

Waltz, C. F., Strickland, O., & Lenz, E. (1991). Measurement in nursing research (Vol. 2). Philadelphia, PA: F.A. Davis.

About the Authors

Kent McIntosh, PhD, is an assistant professor of School
Psychology at the University of British Columbia. His current
research interests include response to intervention systems for
academic and social behavior, systems change, and culturally
responsive positive behavior support.

Leslie D. MacKay, MA, a doctoral candidate at the University of British Columbia, has research experience with early literacy and curriculum evaluation. Her current research interests include school-wide positive behavior support and response to intervention related to curriculum-based assessment in schools.

Amanda E. Hume is an MA student in school psychology at the
University of British Columbia. Her current research interests
include positive behavior support, consultation, and response to
intervention systems.

Jennifer Doolittle, PhD, is an education program specialist in the Office of Special Education Programs in the U.S. Department of Education. She is the program lead for the State Personnel Development Grants Program. Before coming to the U.S. Department of Education, she was a program specialist for the Oregon Department of Education, focusing on statewide initiatives for response to intervention and positive behavioral interventions and supports.

Claudia G. Vincent, PhD, is a research assistant in Educational
and Community Supports at the University of Oregon. Her current
interests include culturally responsive implementation of school-
wide positive behavior support.

Robert H. Horner, PhD, is professor of special education at the University of Oregon and codirector of the OSEP Technical Assistance Center on Positive Behavioral Interventions and Supports.

Ruth A. Ervin, PhD, is an associate professor at the University of British Columbia. Her teaching and research interests include consultation, systems change, and prevention and intervention strategies to address learning and social-behavioral problems.

COHERENCE Chapter 5

Securing Accountability

Internal Accountability

Simply stated, accountability is taking responsibility for one's actions. At the core of accountability in educational systems is student learning. As City, Elmore, Fiarman, and Teitel (2009) argue, "the real accountability system is in the tasks that students are asked to do" (p. 23). Constantly improving and refining instructional practice so that students can engage in deep learning tasks is perhaps the single most important responsibility of the teaching profession and educational systems as a whole. In this sense, accountability as defined here is not limited to mere gains in test scores but extends to deeper and more meaningful learning for all students. Internal accountability occurs when individuals and groups willingly take on personal, professional, and collective responsibility for continuous improvement and success for all students (Hargreaves & Shirley, 2009).

External accountability occurs when system leaders reassure the public, through transparency, monitoring, and selective intervention, that their system is performing in line with societal expectations and requirements. The priority for policymakers, we argue, should be to lead by creating the conditions for internal accountability, because those conditions are more effective in achieving greater overall accountability, including external accountability. Policymakers also have direct responsibilities to address external accountability, but this latter function will be far more effective if they get the internal part right.

Existing research on school and system effectiveness and improvement (DuFour & Eaker, 1998; Marzano, 2003; Pil & Leana, 2006; Zavadsky, 2009) and our own work with educational systems in the United States and internationally (Fullan, 2010; Hargreaves & Fullan) suggests that internal accountability must precede external accountability if lasting improvement in student achievement is the goal. Richard Elmore (2004) conducted a series of intensive case studies of individual schools—some that failed to improve and some that improved their performance. The schools that failed to improve were not able to achieve instructional coherence, despite being in systems with strong external accountability. A minority of schools did develop internal coherence together and showed progress on student achievement. The main feature of successful schools was that they built a collaborative culture that combined individual responsibility, collective expectations, and corrective action—that is, internal accountability. Transparent data on instructional practices and student achievement were a feature of these cultures. As these cultures developed, they were also able to more effectively engage the external assessment system. Highlighting the fundamental role of internal accountability in school improvement, Elmore (2004) pointed out the following:

It seems unlikely to us that schools operating in the default mode—where all questions of accountability related to student learning are essentially questions of individual teacher responsibility—will be capable of responding to strong obtrusive accountability systems in ways that lead to systematic, deliberate improvement of instruction and student learning. The idea that a school will improve, and therefore, the overall performance of its students, implies a capacity for collective deliberation and action that schools in our sample did not exhibit. Where virtually all decisions about accountability are made by individual teachers, based on their individual conceptions of what they and their students can do, it seems unlikely that these decisions will somehow aggregate into overall improvement for the school. (p. 19)

Internal accountability is based on the notion that individuals and the group in which they work can transparently hold themselves responsible for their performance. We already know that current external accountability schemes do not work because, at best, they tell us that the system is not performing but do not give a clue about how to fix the situation. As Elmore (2004) observes, if people do not know how to fix the problem and so cannot do so, then the following will occur:

Schools will implement the requirements of the external accountability system in pro forma ways without ever internalizing the values of responsibility and efficacy that are the nominal objectives of those systems. (p. 134)

Elmore (2004) then concludes:

Investments in internal accountability must logically precede [emphasis added] any expectation that schools will respond productively to external pressure for performance. (p. 134)

"Logically precede," yes, but more to the point of our framework, internal accountability must strategically precede engagement with external accountability. This is why focusing direction, cultivating collaborative cultures, and deepening learning precede accountability in our Coherence Framework. There are two messages here: One is that policymakers and other leaders are well advised to establish conditions for developing cultures of internal accountability. The second is that there are things other people can do when the hierarchy is not inclined to move. The answer is to "help make it happen in your own situation"—that is, develop collaborative work with your peers and push upward for this work to be supported. The history of the teaching profession is laced with assumptions of and conditions for isolated, individual responsibility. But atomistic responsibility, detached from any group, can never work. In a nutshell, the needed cultural shift is to collaborative cultures that honor and align individual responsibility with collective expectations and actions.

Elmore discusses several schools that he and his team studied. Most of them exemplify the individualistic model: Teachers work away on their own and periodically grapple or clash with external accountability requirements. But Elmore also discusses two cases where the schools have developed more or less "collaborative" cultures. The first case is St. Aloysius Elementary School:

Without exception, teachers described an atmosphere of high expectations. Some stressed a high priority on "reaching every child" and "making sure that no one is left behind" while others referred to a serious and supportive environment where everyone is expected to put forth excellent work. (Elmore, 2004, p. 164)

It sounds ideal, but what happens when things don't go as expected? At another school, Turtle Haven, Elmore (2004) asked teachers, "What happens when teachers do not meet the collective expectations?" He reports that "most teachers believed that a person who did not meet . . . expectations, or conform to a culture created by those expectations would first receive a great deal of support from the principal and other colleagues" (p. 183). If this approach failed to produce results, most Turtle Haven teachers said that the teacher in question would not be happy at the school and eventually would either "weed themselves out [or] . . . if there was a sense in the community that a certain number of children were not able to get the kind of education that we say we're committed to providing . . . we would have to think whether the somebody belongs here or not" (Elmore, 2004, p. 183). This kind of culture is not foolproof, but we would say it stacks up well against the external accountability thinking that creates demands that go unheeded or can't be acted on. In collaborative cultures, the internal accountability system is based on visible expectations combined with consequences for failure to meet set expectations.

Such cultures, says Elmore (2004), are much better equipped to deal with external accountability requirements, adding that a school with a strong internal accountability culture might respond to external assessments in a number of ways, "including accepting and internalizing it; rejecting it and developing defenses against it, or incorporating just those elements of the system that the school or the individuals deem relevant" (p. 145). What is coming through in this discussion is that collaborative cultures with an eye to continuous improvement establish internal processes that allow them to sort out differences and to make effective decisions. At the level of the microdynamics of school improvement, Elmore (2004) draws the same conclusion we do at the system level: Investing in the conditions that develop internal accountability is more critical than beefing up external accountability.

The Ontario Reform Strategy, which we discussed in previous chapters, offers an illustrative example of the importance of internal accountability preceding external accountability systemwide. The Canadian province of Ontario, with 4,900 schools in 72 districts serving some two million students, started in 2004 to invest in building capacity and internal accountability at the school and district levels. The initial impulse for the reform came from leadership at the top of the education system—Dalton McGuinty, the premier of the province at the time—through the establishment of a small number of ambitious goals related to improvements in literacy, numeracy, and high school retention. However, the major investments focused on strengthening the collective capacity of teachers, school principals, and district leaders to create the conditions for improved instructional practice and student achievement (Glaze, Mattingley, & Andrews, 2013). There was little overt external accountability in the early stages of the Ontario Reform Strategy. External accountability measures were gradually introduced in the form of assessment results in grades 3 and 6 in literacy and numeracy and, in high school, retention numbers, transparency of data, and a support-focused school turnaround policy called the Ontario Focused Intervention Program (OFIP) for schools that were underperforming. This system has yielded positive and measurable results: Literacy has improved dramatically across the 4,000 elementary schools, and high school graduation rates have climbed from 68 percent to 84 percent across the 900 high schools. The number of OFIP schools, formerly at over 800, has been reduced to 69 schools, even after the criteria to identify a school as in need of intervention had widened to include many more schools (Glaze et al., 2013; Mourshed, Chijioke, & Barber, 2010). An evaluation of the reform strategy in 10 of Ontario's 72 school districts that concentrated particularly on the special education aspects of the reform pointed to a significant narrowing of the achievement gap in writing scores for students with learning disabilities (Hargreaves & Braun, 2012). Concerns were expressed among teachers who were surveyed about some of the deleterious consequences of standardized testing in grades 3 and 6—that the tests came at the end of the year, at a point too late to serve a diagnostic function; that they were not sufficiently differentiated to match differentiated instructional strategies; and that principals in some

The most intriguing finding, though, was that special education resource teachers, whose role was moving increasingly to providing in-class support, welcomed the presence of transparent, objective data. They saw it as a way of drawing the attention of regular classroom teachers to the finding that students with learning disabilities could, with the right support, register valid and viable gains in measurable student achievement. Together, these findings point to the need to review the nature and form of high-stakes assessments—more differentiated, more just-in-time, and more directed at the needs of all students, perhaps—but also to the value of having transparent data that concentrate everyone's attention on supporting all students' success, along with diagnostic data and collaborative professional responsibility for all students' learning, development, and success.

A similar approach to whole system improvement can be found in U.S. districts that have been awarded the prestigious Broad Prize for Urban Education, granted to urban school districts that demonstrate the greatest overall performance and improvement while reducing achievement gaps based on race, ethnicity, and income. In her in-depth study of five such districts, Zavadsky (2009) finds that, while diverse in context and strategies, these districts have addressed the challenge of improving student performance systemwide following remarkably similar approaches: investing in, growing, and circulating the professional capital of schools (what they term building capacity) to improve instructional practice by fostering teacher collaboration and collective accountability. These successful districts set high instructional targets, attract and develop talent, align resources to key improvement priorities, constantly monitor progress, and provide timely, targeted support when needed.

The solid and mounting evidence on the fundamental impact of internal accountability on the effectiveness and improvement of schools and school systems contrasts sharply with the scarce or null evidence that external accountability, by itself or as the prime driver, can bring about lasting and sustained improvements in student and school performance. There is, indeed, a growing realization that external accountability is not a capable driver of school and system effectiveness. At best, external accountability does not get its intended results. At worst, it produces undesirable and sometimes unconscionable consequences, such as the cheating scandal in Atlanta (Hill, 2015). We frequently ask successful practitioners with whom we work how they themselves handle the "accountability dilemma" (direct accountability doesn't work; indirect may be too soft). What follows are a few responses that we have personally received to this question: What is effective accountability? Not surprisingly, these views are entirely consistent with Elmore (2004):

Accountability is now primarily described as an accountability for student learning. It is less about some test result and more about accepting ownership of the moral imperative of having every student learn. Teachers talk about "monitoring" differently. As they engage in greater sharing of the work, they talk about being accountable as people in the school community know what they are doing and looking to see what is changing for students as a result. And as they continue to deprivatize teaching, they talk about their principal and peers coming into their classrooms and expecting to see the work [of agreed-upon practices] reflected in their teaching, their classroom walls, and student work. (Anonymous, personal communication, November 2014)

Teachers and administrators talk about accountability by deprivatizing their practice. If everyone knows what the other teacher or administrator is working on and how they are working on it with students, it becomes a lot easier to talk about accountability. When everyone has an understanding of accountability, creating clear goals and steps to reach those goals, it makes it easier for everyone to talk and work in accountable environments. (Elementary principal, personal communication, November 2014)

I spoke with my staff about accountability versus responsibility, brainstorming about what our purpose is and who is responsible for what . . . being explicit and letting teachers collectively determine what our responsibilities are. (Secondary school principal, personal communication, November 2014)

We are moving to define accountability as responsibility. My district has been engaged in some important work that speaks to intrinsic motivation, efficacy, perseverance, etc., and accountability is seen as doing what is best for students . . . working together to tackle any challenge and being motivated by our commitment as opposed to some external direction. (Superintendent, personal communication, November 2014)

When you blow down the doors and walls, you can't help but be evermore accountable. (Superintendent, personal communication, November 2014)

I do believe that a lot of work remains to be done on building common understanding on the notion of accountability. Many people still believe that someone above them in the hierarchy is accountable. Very few take personal accountability for student learning and achievement. There are still those who blame parents and students' background for achievement. (Consultant, personal communication, November 2014)

In one school, the talk about accountability was pervasive as the school became designated as underperforming. The morale of the school went down significantly, and the tension was omnipresent at every meeting. The team switched the conversation to motivation, innovation, and teamwork, and the culture changed. The school is energized, and the test scores went up in one year. The team is now committed to results and continuous improvement. (Consultant, personal communication, November 2014)

In short, internal accountability is far more effective than external accountability. The bottom line is that it produces forceful accountability in a way that no hierarchy can possibly match. We have shown this to be the case for teachers, and we can make a parallel argument for students. If we want students to be more accountable, we need to change instruction toward methods that increase individual students' responsibility for assessing their own learning and toward having students work in peer groups to assess and provide feedback to each other under the guidance of the teacher. We still need external accountability, and we can now position it more effectively.

External Accountability

External accountability concerns any entity that has authority over you. Its presence is still essential, but we need to reposition external accountability so that it becomes more influential in the performance of individuals, groups, and the system as a whole. We first take the perspective of external authorities and then flip back to local entities.

External Authorities

The first thing to note is that if the external body invests in building widespread internal accountability, it will be furthering its own goals of greater organization or system accountability. The more that internal accountability thrives, the greater the responsiveness to external requirements and the less the externals have to do. When this happens, the center has less need to resort to carrots and sticks to incite the system to act responsibly. Dislodging top-down accountability from its increasingly miscast role has turned out to be exceedingly difficult. People at the top do not like to give up control. They cling to it despite obvious evidence that it does not work. And attacks on the inadequacy of top-down accountability have failed because they have only focused on the "from" side of freedom. Critics seem to be saying that accountability requirements do not work, so remove them. That is not the complete solution because it takes us back to nothing. The answer is found in our argument in this chapter—rely on developing the conditions for internal accountability and reinforce them with certain aspects of external accountability. In particular, central authorities should focus their efforts on two interrelated activities:

1. Investing in internal accountability

2. Projecting and protecting the system

By the first, I mean investing in the conditions that cause internal accountability to get stronger. The beauty of this approach, as we have seen, is that people throughout the system start doing the work of accountability. Though indirect, this form of accountability is more explicit, more present, and, of course, more effective. We have already suggested its components:

• A small number of ambitious goals and processes that foster shared goals (and even targets, if jointly shaped)

• Good data that are used primarily for developmental purposes

• Implementation strategies that are transparent, whereby people and organizations are grouped to learn from each other (using the group to change the group)

• Examination of progress in order to problem solve for greater performance

The center needs to invest in these very conditions that result in greater focus, capacity, and commitment at the level of day-to-day practice. They invest, in other words, in establishing conditions for greater local responsibility. In this process, the center will still want goals, standards, assessment, proof of implementation, and evidence of progress. This means investment in resources and mechanisms of internal accountability that people can use to collaborate within their units and across them.

With strong internal accountability as the context, the external accountability role of the system includes the following:

1. Establishing and promoting professional standards and practices, including performance appraisal, undertaken by professionally respected peers and leaders in teams wherever possible, and developing the expertise of teachers and teacher-leaders so that they can undertake these responsibilities. With the robust judgments of respected leaders and peers, removing teachers and administrators who should not be in the profession becomes a transparent collective responsibility.

2. Ongoing monitoring of the performance of the system, including direct intervention with schools and districts in cases of persistent underperformance.

3. Insisting on reciprocal accountability that manages “up” as well as down so that systems are held accountable for providing the resources and supports that are essential in enabling schools and teachers to fulfill expectations (e.g., “failing” schools should not be closed when they have been insufficiently resourced, or individual teachers should be evaluated in the context of whether they have been forced into different grade assignments every year or have experienced constant leadership instability).


4. Adopting and applying indicators of organizational health as a context for individual teacher and leader performance, such as staff retention rates, leadership turnover rates, teacher absenteeism levels, numbers of crisis-related incidents, and so on, in addition to outcome indicators of student performance and well-being. These would include measures of social capital in the teaching profession, such as extent of collaboration and levels of collegial trust. Outcome measures for students should also, as previously stated, include multiple measures: well-being, students' sense of control over their destiny (locus of control), levels of engagement in learning, and so forth.

The Perspective of Locals

We have drawn on numerous relatively successful examples in this book. They all established strong degrees of internal accountability (people being self and group responsible) that served them well in the external accountability arena. Such systems strengthened accountability by increasing focus, connecting dots and otherwise working on coherence, building capacity (so people could perform more efficaciously), being transparent about progress and practices, and engaging the external accountability system. As districts increase their capacity, they become stronger in the face of ill-advised external accountability demands, as the following two extended examples from Laura Schwalm, former superintendent of Garden Grove, reveal.

Example One: Garden Grove Handles External Pressure

In the words of Laura Schwalm:

Shortly after we completed our audit and instituted a district-wide mandate and system to place students in college prep (a–g) courses, Ed Trust and several other advocacy groups, with support from the California Department of Education (CDE), began "calling out" the low college readiness statistics in large urban districts in California. Every large urban district, including Garden Grove, was called out (rightfully so), with the one exception of a district in the north, which was held up as a model solution because it had made the a–g requirement mandatory for every student, claiming to have eliminated all other courses with absolutely no effect on its graduation rate. Based on this example, the advocacy groups started a very public campaign and got a majority of school boards, including LAUSD, to adopt the policies of this northern district with the pledge that they would achieve 100 percent a–g achievement with no increase in dropout rate within four to five years.

When Garden Grove refused to comply (Long Beach did as well), we were more strongly targeted and pressured. The approach we had adopted was not to eliminate all support courses that were not college prep but rather to eliminate a few and to align the rest in a way that provided an "on ramp" to college prep courses, while at the same time using individual student-by-student achievement data, rather than the former practice of "teacher recommendation," for placement in college prep courses. (One of the shameful things our audit revealed, which did not surprise me, was that if you were an Asian student with mean achievement on the California Standards Tests, you had about a 95 percent chance of being recommended for placement in a–g courses; conversely, if you were a Latino male with the exact same scores, you had less than a 30 percent chance of being recommended for placement in these courses.)

As the pressure continued to adopt a policy mandating an exclusive a–g curriculum, I met with a few of the key advocates and explained that while we shared the same goal of increasing our unacceptably low a–g completion rate, we strongly felt the approach they were suggesting was ill advised. Putting students in a course for which they were absolutely not prepared, based on very objective data, and then expecting them to pass the course with a grade of C or better was unfair to both students and teachers. They kept focusing on the district up north, which led me to point out to them that the data from that district did not support what they were claiming. If their approach was truly working, then their achievement scores, as measured by the state, should have been outperforming ours, and in fact, they fell far short of ours, for all subgroups. Additionally, a neighboring district that had adopted the same policy now claimed a 90 percent a–g completion rate, yet 65 percent of their high school students scored below the mean on the state standards test. It clearly pointed out that all was not as it looked on the surface, and while I had no desire to criticize another district's approach, I was not about to follow it. That caused the advocates to pause and finally to leave us alone. Our rate, both in terms of a–g completion and student achievement data by subgroup, continued to climb. Within a few years, we surpassed all the others, and over time, the policy the CDE and advocates had pushed into districts quietly vanished.

Example Two: Garden Grove Deals With the Bureaucracy

Again in Schwalm's words:

Another example occurred during one of the CDE's three-year systemwide compliance reviews. While I accepted the state's responsibility to oversee that we were not using specially designated funding for inappropriate uses, as well as to assure we were following laws around equity and access for all students, the process they had was unnecessarily burdensome, requiring us to dedicate significant staff to collecting, cataloging, and preparing documentation that filled dozens and dozens of boxes. When the state team came—usually about 10 to 12 people, each looking at different programs, with one person loosely designated as team lead—the expectation was that you treat them like royalty and that they had enormous authority. My view was somewhat different. I respected that they had a job to do, but just because they did not like the way we displayed something, or because they would have used another approach, did not mean we needed to do it differently; our approach, if appropriately supported with data, was not out of bounds.

At one of the first reviews early on in my superintendency, we drew a particularly weak but officious team with a very weak lead. They came up with some particularly lame findings (e.g., one team member commended us on how we used data to identify areas of focus for targeted groups of students, while another team member marked us as noncompliant in this area because we did not put it on a form that she had developed—and other equally ludicrous examples). At the end of the process, the superintendent was required to sign an agreement validating the team's findings as well as a plan and timeline to bring things into "compliance." I very professionally told them that I did not agree with their findings and thus could not sign either document—I was not going to pretend to fix something that I had no intention of fixing because there was nothing wrong with it in the first place. What I did do was sign a document, which we drafted, acknowledging that the team had, in fact, been there and that we agreed to a couple of specific areas where we needed to and would make some changes, but that I did not agree with the majority of the report and would not agree to take any action other than what was previously specified. This seemed pretty fair to me, but apparently it shocked them and the system, which was the beginning of my unpopularity with many in CDE. Probably this was made worse when the story got out (not by my telling), and other superintendents realized that they could do the same thing (although I advised those who contacted me—and a number did—that their life would not be particularly easy for a while and also that they should have the data and results to back their stand). (L. Schwalm, personal communication, 2014)

You can see why, in another book (where I cited an even more egregious example of defiance), I referred to Laura as a "rebel with a cause" (Fullan, 2015). There are two lessons here, involving what I have called the freedom-from problem and the freedom-to problem. You need to attend to both. The freedom-from problem is what Laura addressed by refusing to comply with ridiculous demands. But she was backed up by her freedom-to actions, in which she built a culture of coherence, capacity, and internal accountability. If you do the latter, you are in good shape to contend with the external accountability system, including acting on external performance data that do show that you need to improve.

In California as a whole, they currently face the freedom-to problem. The wrong drivers are on the way out the door. Jerry Brown, the governor, has suspended all statewide student tests for at least two years on the grounds that it is better to have no tests than to have the wrong test. So far so good, but getting rid of bad tests is not enough for securing accountability. New tests, from the Smarter Balanced Assessment Consortium (SBAC), are being piloted relative to the CCSS. Districts would be well advised to use our Coherence Framework to build their focused accountability. They will then perform better and be in a better position to secure their own accountability as they relate to the ups and downs of external accountability. External accountability, as wrong as it sometimes can be, is a phenomenon that keeps you honest. Leaders need to be skilled at both internal and external accountability and their interrelationship.

Overview

 

 

 

Our team has been immersed in 'whole system change' for the past few years in Ontario, Canada; California; Australia and New Zealand; and elsewhere. Our main mode of learning is to go from practice to theory, and then back and forth to obtain more specific insights about how to lead and participate in transformative change in schools and school systems.

In this workshop we take the best of these insights from our most recent publications: Stratosphere, The Professional Capital of Teachers, The Principal, Freedom to Change, and Coherence, integrating the ideas into a single set of learnings.

The specific objectives for participants are:

1. To learn to take initiative on what we call 'Freedom to Change'.
2. To understand and be able to use the 'Coherence Framework'.
3. To analyze your current situation and to identify action strategies for making improvements.
4. Overall, to gain insights into 'leadership in a digital age'.

We have organized this session around six modules:

Module I: Freedom From Change
Module II: Focusing Direction
Module III: Cultivating Collaborative Cultures
Module IV: Deepening Learning
Module V: Securing Accountability
Module VI: Freedom To Change
References

 

 

 

 

Please feel free to reproduce and use the material in this booklet with your staff and others.

2015

 

 

 

Freedom From Change


 

 
Shifting to the Right Drivers

Right drivers:

§ Capacity building
§ Collaborative work
§ Pedagogy
§ Systemness

Wrong drivers:

§ Accountability
§ Individual teacher and leadership quality
§ Technology
§ Fragmented strategies

Freedom:

§ If you could make one change in your school or system, what would it be?
§ What obstacles stand in your way?

What would you change? What are the obstacles?

Trio Talk:

§ Meet up with two colleagues.
§ Share your choice and rationale.
§ What were the similarities and differences in the choices?


The Concepts of Freedom

§ Freedom from is getting rid of the constraints.
§ Freedom to is figuring out what to do when you become more liberated.

Seeking Coherence

§ Within your table, read the seven quotes from Coherence and circle the one you like the best.
§ Go around the table and see who selected which quotes.
§ As a group, discuss what 'coherence' means.

Coherence: The Right Drivers in Action for Schools, Districts, and Systems

Fullan, M., & Quinn, J. (2015). Corwin & Ontario Principals' Council.

 
# Quote

1. There is only one way to achieve greater coherence, and that is through purposeful action and interaction,
working on capacity, clarity, precision of practice, transparency, monitoring of progress, and continuous
correction. All of this requires the right mixture of “pressure and support”: the press for progress within
supportive and focused cultures. p. 2

2. Coherence making in other words is a continuous process of making and remaking meaning in your own
mind and in your culture. Our framework shows you how to do this. p. 3

3. Effective change processes shape and reshape good ideas as they build capacity and ownership among
participants. There are two components: the quality of the idea and the quality of the process. p.14

4. … that these highly successful organizations learned from the success of others but never tried to imitate what others did. Instead, they found their own pathway to success. They did many of the right things, and they learned and adjusted as they proceeded. p. 15

5. Most people would rather be challenged by change and helped to progress than be mired in frustration.
Best of all, this work tackles “whole systems” and uses the group to change the group. People know they
are engaged in something beyond their narrow role. It is human nature to rise to a larger call if the
problems are serious enough and if there is a way forward where they can play a role with others.
Coherence making is the pathway that does this. p. ix

6. What we need is consistency of purpose, policy, and practice. Structure and strategy are not enough. The solution requires the individual and collective ability to build shared meaning, capacity, and commitment to action. When large numbers of people have a deeply understood sense of what needs to be done—and see their part in achieving that purpose—coherence emerges and powerful things happen. p. 1

7. Coherence pertains to people individually and especially collectively. To cut to the chase, coherence
consists of the shared depth of understanding about the purpose and nature of the work. Coherence,
then, is what is in the minds and actions of people individually and especially collectively. p. 1-2

 


The Coherence Framework

§ Focusing Direction
§ Cultivating Collaborative Cultures
§ Deepening Learning
§ Securing Accountability
§ Leadership (at the center of the framework)


Notes:

 

Focusing Direction


 

 
Purpose Driven: Quick Write

Clarify your own moral purpose by reflecting on and recording your thoughts about these four questions, using the quick write protocol:

§ What is my moral purpose?
§ What actions do I take to realize this moral purpose?
§ How do I help others clarify their moral purpose?
§ Am I making progress in realizing my moral purpose with students?

Share your thoughts with other members of your team and discuss themes that emerge.

Focusing Direction

§ Purpose Driven
§ Goals That Impact
§ Clarity of Strategy
§ Change Leadership


 
What is my moral purpose?

What actions do I take to realize this moral purpose?

How do I help others clarify their moral purpose?

Am I making progress in realizing my moral purpose with students?


 
Clarity of Strategy

§ Successful change processes are a function of shaping and reshaping good ideas as they build capacity and ownership.
§ Clarity about goals is not sufficient. Leaders must develop shared understanding in people's minds and collective action. Coherence becomes a function of the interplay between the growing explicitness of the strategy and the change culture. The two variables, explicitness of strategy and quality of the change culture, interact to create four different results.

Change Quality Quadrants

1. Superficiality
When the strategy is not very precise, actionable, or clear (low explicitness) and people are comfortable in the culture, we may see activity, but at very superficial levels.

2. Inertia
This quadrant represents the history of the teaching profession—behind the classroom door, where teachers left each other alone with a license to be creative or ineffective. Innovative teachers receive little feedback on their ideas, nor do these ideas become available to others; isolated, less-than-effective teachers get little help to improve.

3. Resistance
When innovations are highly prescribed (often detailed programs bought off the shelf) but the culture is weak and teachers have not been involved sufficiently in developing ownership and new capacities, the result is pushback and resistance. If the programs are sound, they can result in short-term gains (tightening an otherwise loose system), but because teachers have not been engaged in shaping the ideas or the strategy, there is little willingness to take risks.

4. Depth
A strong climate for change combined with an explicit strategy is optimal. People operating in conditions of high trust, collaboration, and effective leadership are more willing to innovate and take risks. If we balance that with a strategy that has precision, clarity, and measures of success, the changes implemented will be deep and have impact.

 
 


Change Quality Quadrant

Change Climate (vertical axis):

§ Describes the degree to which a culture supports change by fostering trust, nonjudgmentalism, leadership, innovation, and collaboration.

Explicitness (horizontal axis):

§ Describes the degree of explicitness of the strategy, including precision of the goals, clarity of the strategy, use of data, and supports.

Change Quality Protocol

1. Brainstorm individually all the changes you are implementing in your school or district and place each idea on a post-it along with your initial.
2. Consider evidence of the explicitness of the strategy and the strength of the culture for each initiative. Mark the post-it as belonging to quadrant 1, 2, 3, or 4.
3. When the first two steps are completed, all peers should place their post-its on the quadrants at the same time.
4. Review each post-it, looking for similarities or differences. Discuss the evidence that led to the placement.
5. Select two or three important changes and discuss:
§ What is effective/ineffective about the explicitness of the strategy?
§ What is effective/ineffective about the culture for change?
 


Three Keys to Maximizing Impact

The Lead Learner: The Principal's New Role

To increase impact, principals should use their time differently: They should direct their energies to developing the group. The principal's new role is to lead the school's teachers in a process of learning to improve their teaching, while learning alongside them about what works and what doesn't.

 
 


Notes:

 

Cultivating Collaborative Cultures


 

 
Within-School Variability

§ Variability of performance between schools is 36%, while variability within schools is 64%. —OECD (2013)

Turn and Talk

§ Read the excerpt from John Hattie and discuss what 'within-school variability' means.

Introduction

Hattie, J. (2015). What Works Best in Education: The Politics of Collaborative Expertise, pp. 1–2, Pearson.

 
The Largest Barrier to Student Learning: Within-School Variability

If we are to truly improve student learning, it is vital that we identify the most important barrier to such improvement. And that barrier is the effect of within-school variability on learning. The variability between schools in most Western countries is far smaller than the variability within schools (Hattie 2015). For example, the 2009 PISA results for reading across all OECD countries shows that the variability between schools is 36 per cent, while the variance within schools is 64 per cent (OECD 2010).

There are many causes of this variance within schools, but I would argue that the most important (and one that we have some influence to reduce) is the variability in the effectiveness of teachers. I don't mean to suggest that all teachers are bad; I mean that there is a great deal of variability among teachers in the effect that they have on student learning. This variability is well known, but rarely discussed, perhaps because this type of discussion would necessitate potentially uncomfortable questions. Hence, the politics of distraction are often invoked to avoid asking them.

Cultivating Collaborative Cultures

§ Culture of Growth
§ Learning Leadership
§ Capacity Building
§ Collaborative Work


Overcoming Variability Through Collaborative Expertise

There is every reason to assume that by attending to the problem of variability within a school and increasing the effectiveness of all teachers there will be a marked overall increase in achievement. So the aim is to bring the effect of all teachers on student learning up to a very high standard. The 'No Child Left Behind' policy should have been named 'No Teacher Left Behind'.

This is not asking teachers and school leaders to attain some impossibly high set of dream standards; this is merely asking for all teachers to have the same impact as our best teachers. Let's consider some analogies: not all doctors have high levels of expertise, and not all are in an elite college of surgeons; not all architects are in royal societies; and not all engineers are in academies of engineers. Just because a doctor, architect or engineer is not a member of these august bodies, however, does not mean that they are not worth consulting. They may not have achieved the upper echelon, but they will still have reached a necessary level of expertise to practise.

Similarly, the teaching profession needs to recognise expertise and create a profession of educators in which all teachers aspire to become members of the college, society or academy of highly effective and expert teachers. Such entry has to be based on dependable measures based on expertise. In this way, we can drive all upwards and not only reduce the variability among teachers and school leaders but also demonstrate to all (voters, parents, politicians, press) that there is a 'practice of teaching'; that there is a difference between experienced teachers and expert teachers; and that some practices have a higher probability of being successful than others. The alternative is the demise of teacher expertise and a continuation of the politics of distraction.

So, my claim is that the greatest influence on student progression in learning is having highly expert, inspired and passionate teachers and school leaders working together to maximise the effect of their teaching on all students in their care. There is a major role for school leaders: to harness the expertise in their schools and to lead successful transformations. There is also a role for the system: to provide the support, time and resources for this to happen. Putting all three of these (teachers, leaders, system) together gets at the heart of collaborative expertise.

§ Human Capital
§ Social Capital
§ Decisional Capital

What has a greater impact on teaching and learning?

§ Teacher appraisal?
§ Professional development?
§ Collaborative cultures?


School Cultures

§ Talented schools improve weak teachers.
§ Talented teachers leave weak schools.
§ Good collaboration reduces bad variation.
§ The sustainability of an organization is a function of the quality of its lateral relationships.

Freedom To means:

§ Autonomy & Cooperation

Balancing Autonomy & Cooperation

§ If you choose being on your own, you lose the human connection necessary for life.
§ If you succumb to the extreme of being absorbed in a group, you lose your identity.

Struggle Between Autonomy and Cooperation

§ Countries granting schools independent status, freer from traditional bureaucracies, find pockets of innovation among a larger number of pockets of failure.
§ What is needed for success is to combine flexibility with requirements for cooperation.

Forms of Cooperation

§ Building collaborative cultures
§ Participating in networks of schools or districts to learn from each other
§ Relating to state policies and priorities

Groupthink

§ …situations where groups are cohesive, have highly directive leadership, and fail to seek external information. Such groups strive for unanimity, failing to consider alternative courses of action.


Point & Go
Meet up with a colleague from another table group.
§ Discuss a time you were part of groupthink. What impact did it have on the group and you personally?
§ What is the power of autonomy?
§ How do you balance autonomy and cooperation?

 

Notes:

 


 

 
Deepening Learning

Stratosphere

Deep Learning Competencies
§ The 6C’s provide an advance organizer for thinking about the Deep Learning Competencies identified by New Pedagogies for Deep Learning. The placemat organizer can be used to activate prior knowledge about the 6C’s or to look for examples of the 6C’s using video exemplars.

Exciting new learning needs to be:
§ Irresistibly engaging
§ Elegantly efficient
§ Technologically ubiquitous
§ Steeped in real-life problem solving
§ Involving deep learning

—Stratosphere

Deepening Learning
§ Clarity of Learning Goals
§ Precision in Pedagogy
§ Shift Practices Through Capacity Building

[Coherence Framework graphic: Focusing Direction, Cultivating Collaborative Cultures, Deepening Learning, Securing Accountability, with Leadership at the centre]


The 6C’s Protocol
§ Form groups of six with each peer assigned one of the 6C’s.
§ Review the descriptors of the six deep learning competencies. Each group member will take one competency and provide an example of what that competency might look like and sound like in practice or how it is being developed in their classroom or school.
§ Share the examples within the group of six.
§ Select a video of classroom practice and analyze it for examples of how the six deep learning competencies are being developed. Use the same graphic organizer to record evidence.
§ Discuss ways to incorporate one or more competencies in future learning designs.

The 6C’s Protocol

1. Communication
§ Coherent communication using a range of modes
§ Communication designed for different audiences
§ Substantive, multimodal communication
§ Reflection on and use of the process of learning to improve communication

2. Critical Thinking
§ Evaluating information and arguments
§ Making connections and identifying patterns
§ Problem solving
§ Meaningful knowledge construction
§ Experimenting, reflecting, and taking action on ideas in the real world

3. Collaboration
§ Working interdependently as a team
§ Interpersonal and team-related skills
§ Social, emotional, and intercultural skills
§ Management of team dynamics and challenges

4. Creativity
§ Economic and social entrepreneurialism
§ Asking the right inquiry questions
§ Considering and pursuing novel ideas and solutions
§ Leadership for action

5. Character
§ Learning to learn
§ Grit, tenacity, perseverance, and resilience
§ Self-regulation and responsibility
§ Empathy for and contributing to the safety and benefit of others

6. Citizenship
§ A global perspective
§ Understanding of diverse values and worldviews
§ Genuine interest in human and environmental sustainability


The 6C’s of Learning Goals (placemat organizer): Communication, Creativity, Critical Thinking, Character, Collaboration, Citizenship


Fullan, M., & Quinn, J. (2015). Coherence, pp. 95-96. Corwin & Ontario Principals’ Council.

My Learning

The first element refers to the need for students to take responsibility for their learning and to understand the process of learning if it is to be maximized. This requires students to develop skills in learning to learn, giving and receiving feedback, and enacting student agency.

§ Learning to learn requires that students build metacognition about their learning. They begin to define their own learning goals and success criteria; monitor their own learning and critically examine their work; and incorporate feedback from peers, teachers, and others to deepen their awareness of how they function in the learning process.

§ Feedback is essential to improving performance. As students make progress in mastering the learning process, the role of the teacher gradually shifts from explicitly structuring the learning task toward providing feedback, activating the next learning challenge, and continuously developing the learning environment.

§ Student agency emerges as students take a more active role in codeveloping learning tasks and assessing results. It is more than participation; it is engaging students in real decision-making and a willingness to learn together.

 
 


My Belonging

The second element of belonging is a crucial foundation for all human beings, who are social by nature and crave purpose, meaning, and connectedness to others.

§ Caring environments help students to flourish and meet the basic need of all humans to feel they are respected and belong.

§ Relationships are integral to preparing for authentic learning. As students develop both interpersonal connections and intrapersonal insight, they are able to move to successively more complex tasks in groups and independently. Managing collaborative relationships and being self-monitoring are skills for life.

My Aspirations

Student results can be dramatically affected by the expectations students hold of themselves and the perceptions they believe others have of them (see also Quaglia & Corso, 2014).

§ Expectations are a key determinant of success, as noted in Hattie’s research. Students must believe they can achieve and also feel that others believe that. They must codetermine success criteria and be engaged in measuring their growth. Families, students, and teachers can together foster higher expectations through deliberate means—sometimes simply by discussing current and ideal expectations and what might make them possible to achieve.

§ Needs and interests are a powerful accelerator for motivation and engagement. Teachers who tap into the natural curiosity and interest of students are able to use that as a springboard to deeply engage students in tasks that are relevant and authentic and that examine concepts and problems in depth.

Teachers, schools, and districts that combine strategies to unlock the three elements in their students will foster untapped potential and form meaningful learning partnerships.

 
How good is your school at addressing the three ‘mys’?
§ My learning (scale 1-10) = __________
§ My belonging (scale 1-10) = __________
§ My aspirations (scale 1-10) = __________

Reflect on what you can do to accelerate meaningful learning partnerships with students in your school.

 
 


Students, Computers, and Learning
§ Countries that invest more heavily in ICT do less well in student achievement.
—OECD, 2015

Early Insights about Leadership for NPDL: Direction, Letting Go, Consolidating
§ A cycle of trying things and making meaning
§ Co-learning dominates
§ Leaders spent a lot of time listening, learning, asking questions
§ Leaders help articulate what is happening, and how it relates to impact
§ The role of tools is to provide focus and shape without suffocating context
§ Ultimately you need people to take charge of their own learning in a context of individual and collective efficacy

 


Notes:


 
Securing Accountability

Accountability

Fullan, M., & Quinn, J. (2015). Coherence, pp. 110-111. Corwin & Ontario Principals’ Council.

Simply stated, accountability is taking responsibility for one’s actions. At the core of accountability in educational systems is student learning. As City, Elmore, Fiarman, and Teitel (2009) argue, “the real accountability system is in the tasks that students are asked to do” (p. 23). Constantly improving and refining instructional practice so that students can engage in deep learning tasks is perhaps the single most important responsibility of the teaching profession and educational systems as a whole. In this sense, accountability as defined here is not limited to mere gains in test scores but extends to deeper and more meaningful learning for all students.

Internal accountability occurs when individuals and groups willingly take on personal, professional, and collective responsibility for continuous improvement and success for all students (Hargreaves & Shirley, 2009).

External accountability is when system leaders reassure the public through transparency, monitoring, and selective intervention that their system is performing in line with societal expectations and requirements. The priority for policy makers, we argue, should be to lead with creating the conditions for internal accountability, because these are more effective in achieving greater overall accountability, including external accountability. Policy makers also have direct responsibilities to address external accountability, but this latter function will be far more effective if they get the internal part right.

Securing Accountability
§ Internal Accountability
§ External Accountability

[Coherence Framework graphic: Focusing Direction, Cultivating Collaborative Cultures, Deepening Learning, Securing Accountability, with Leadership at the centre]


Coherence: The Right Drivers in Action for Schools, Districts, and Systems
Fullan, M., & Quinn, J. (2015). Corwin & Ontario Principals’ Council, pp. 117-118.

1. Accountability is now primarily described as an accountability for student learning. It is less about some
test result and more about accepting ownership of the moral imperative of having every student learn.
Teachers talk about “monitoring” differently. As they engage in greater sharing of the work, they talk
about being accountable as people in the school community know what they are doing and looking to
see what is changing for students as a result. And as they continue to deprivatize teaching, they talk about
their principal and peers coming into their classrooms and expecting to see the work [of agreed-upon
practices] reflected in their teaching, their classroom walls, and student work. (Anonymous, personal
communication, November 2014)

2. Teachers and administrators talk about accountability by deprivatizing their practice. If everyone knows
what the other teacher or administrator is working on and how they are working on it with students, it
becomes a lot easier to talk about accountability. When everyone has an understanding of accountability,
creating clear goals and steps to reach those goals, it makes it easier for everyone to talk and work in
accountable environments. (Elementary principal, personal communication, November 2014)

3. We are moving to define accountability as responsibility. My district has been engaged in some important
work that speaks to intrinsic motivation, efficacy, perseverance, etc., and accountability is seen as doing
what is best for students . . . working together to tackle any challenge and being motivated by our
commitment as opposed to some external direction. (Superintendent, personal communication,
November 2014)

4. I do believe that a lot of work remains to be done on building common understanding on the notion of
accountability. Many people still believe that someone above them in the hierarchy is accountable. Very
few take personal accountability for student learning and achievement. There are still those who blame
parents and students’ background for achievement. (Consultant, personal communication, November
2014)

5. In one school, the talk about accountability was pervasive as the school became designated as
underperforming. The morale of the school went down significantly, and the tension was omnipresent at
every meeting. The team switched the conversation to motivation, innovation, and teamwork and the
culture changed. The school is energized and the test scores went up in one year. The team is now
committed to results and continuous improvement. (Consultant, personal communication, November
2014)

 


Three Step Interview
1. Form teams of three and letter off A, B, and C.
2. Read the excerpt on ‘Accountability’ from Coherence above and the five quotes. Think about the responses to the questions below.
3. Begin the cycle with person A as the Interviewer, B as the Respondent, and C as the Recorder, using the Advance Organizer.
4. Provide five minutes for each Respondent to respond and then continue the cycle until all participants have been interviewed.

 
Advance Organizer — record responses for Person A, Person B, and Person C:
1. How would you distinguish between Internal and External Accountability?
2. Describe strategies your school/district uses to build Internal Accountability.
3. What steps will you take to ensure the effective implementation of External Accountability?

 
 


Know Thy Impact
Turn and Talk:
§ Read the excerpt from John Hattie. What does your school specifically do to develop a culture of evidence?

Know Thy Impact!

Hattie, J. (2015). What Works Best in Education: The Politics of Collaborative Expertise, pp. 15-16. Pearson.

 
The model advanced here is that the school leader is responsible for asking on a continual basis about the impact of all the adults on the learning of the students. Of course, I am not forgetting that the students are players in improving their learning. But that is the bonus, the compound-interest component. What is requested is that school leaders become leaders in evaluating the impact of all in the school on the progress of all students; the same for teachers; and the same for students.

School leaders need to be continually working with their staff to evaluate the impact of all on student progression. Leaders need to create a trusting environment where staff can debate the effect they have and use the information to devise future innovations. And leaders need to communicate the information on impact and progression to the students and parents. Schools need to become incubators of programs, evaluators of impact and experts at interpreting the effects of teachers and teaching on all students. In short, we need to develop an evaluation climate in our education system.

Experience has shown that ten- to twelve-week cycles of evaluation are about optimal. Fewer weeks tend to lead to over-assessment or insufficient time to detect change; more weeks and the damage or success is done. We should know this and react appropriately. It does mean asking teachers to be clear about what success or impact would look like before they start to teach a series of lessons.

Of course, this must start by asking the questions, ‘Impact on what? To what magnitude? Impact for whom?’ Evaluating impact requires analyses of what a year’s growth looks like, and it is likely it may differ depending on where the student begins in this growth. Evaluating impact asks schools and systems to be clearer about what it means to be good at various disciplines, to be clearer about what a year’s progress looks like and to provide staff with collaborative opportunities to make these decisions.

This is the hardest part of our work: as teachers, we have been so ingrained to wait and see what the students do, to see which students attend and then to pick out examples of successful progress. Our alternative model asks that teachers be clearer about what success would look like and the magnitude of the impact, and we ask them to prepare assessments to administer at the end – before they start teaching. The bonus of this latter preparation is that it ensures that teachers understand what success is meant to look like before they start teaching, and it increases the likelihood that teachers communicate these notions of success to the students.

There is also a need to include the student voice about teacher impact in the learning/teaching debates; that is, to hear the students’ view of how they are cared about and respected as learners, how captivated they are by the lessons, how they can see errors as opportunities for learning, how they can speak up and share their understanding and how they can provide and seek feedback so they know where to go next. As the Visible Learning research has shown, the student voice can be highly reliable, rarely includes personality comments and, appropriately used, can be a major resource for understanding and promoting high-impact teaching and learning.

 
 


Developing a culture of evidence

Janet Clinton and I have used the theories of empowerment evaluation to spell out many of these mind frames (in Clinton and Hattie, 2014). Empowerment evaluation is based on the use of evaluation concepts, techniques and findings to foster improvement. It increases the likelihood that programmes will achieve results by increasing the capacity of stakeholders to plan, implement and evaluate their own programmes. We argued that we need to teach educators:

§ to think evaluatively;
§ to have discussions and debates in light of the impact of what they do;
§ to use the tools of evaluation in schools (such as classroom observations of the impact of teachers on students, interpreting test scores to inform their impact and future actions, and standard-setting methods to clarify what challenge and progression should look like in this school);
§ to build a culture of evidence, improvement and evaluation capacity-building;
§ to develop a mind frame based on excellence, defined in multiple ways, and for all;
§ and to take pride in our collective impact.

Empowerment evaluation helps to cultivate a continuous culture of evidence by asking educators for evidence to support their views and interpretations and to engage in continual phases of analysis, decision-making and implementation.

Note to Self: How would I describe our evidence-based culture?

 
 


Freedom as Learning
Feedback: A Gold Mine of Potential Growth
1. People don’t like feedback and want to be free from it.
2. Feedback is one of the key interacting simplifiers for individuals and groups wanting to change.
3. To think in terms of active seeking means to think first and foremost in terms of what receivers of feedback need and can do.
4. Giving and taking feedback are both challenging.

Feedback Forum
Meet up with another colleague from a different district. Use the following questions as the basis for your discussion:
§ Think of a time when you received powerful feedback. Why was it powerful? What did you learn from it?
§ What are the challenges of giving feedback?
§ Describe feedback that inspires growth.

 
Notes

 
 


Freedom To: World
§ If we recast its role, feedback can become one of the most powerful forces for the betterment of the individual and the organization.

Best Advice
§ Take a risk and seek feedback, both because you will be worse off if you do nothing and because you will learn from it.

Cultures that Value Feedback

Turn and Talk
Does our organization have a culture to support providing/receiving feedback?
What, if anything, could we do to improve the culture for feedback?

Freedom To: Accountability
If you are seeking feedback and using feedback as an opportunity to learn with respect to important goals, you are already on the path of accountability: a willingness to accept responsibility for your own actions.

 

 
 


Notes:

 

Freedom to Change


 

 
Exploration vs Engagement
§ What’s out there?
§ Who should we partner with?
—Pentland, 2014

Criteria for Effective Networking
1. A small number of ambitious goals (pre-school to tertiary)
2. Leadership at all levels
3. Cultures that produce ‘Collective Efficacy’
4. Mobilizing data and effective practices as a strategy for improvement
5. Intervention in a non-punitive manner
6. Being transparent, relentless and increasingly challenging
—Rincón-Gallardo & Fullan, in press

New Zealand: Joint Initiative Agreement
Read the Joint Initiative Agreement:
§ What do you like about it?
§ What questions do you have?
§ Discuss implications for your work.
—Rincón-Gallardo & Fullan, in press

New Zealand Educational Institute, Ministry of Education
Follow-up to the Working Party Report

Working Party Report – Overarching Principles

1. Children are at the centre of a smooth and seamless whole-of-education pathway from earliest learning to tertiary options.
2. Parents who are informed and engaged are involved in their children’s education and part of a community with high expectations for and of those children.
3. Teachers and education leaders, supported by their own professional learning and growth and those of their colleagues, will systematically collaborate to improve educational achievement outcomes for their students.
4. Teachers and education leaders will be able to report measurable gain in the specific learning and achievement challenges of their students.
5. Teachers and leaders will grow the capability and status of the profession within clearly defined career pathways for development and advancement.

Key Learnings From the Working Group Were:

1. Self-identified Communities of Learning should form around clear learner pathways from early childhood to secondary education and may, over time, extend to include tertiary learning.
2. Each Community of Learning’s purpose is to enhance student achievement for educational success as set out in the Vision of the National Curriculum documents; and the Community of Learning should define its own achievement challenges, learning needs and areas of focus that enable it to support that purpose.



3. Each Community of Learning will be able to use data, evidence and research to target their efforts and resources and demonstrate impact on the learning growth of its students.
4. Each Community of Learning should determine its own leadership and teaching, collaboration and support functions that align with its achievement challenges, making the best use of its own and new resourcing. Some leadership and teaching roles and their functions will be required for all Communities of Learning; other functions may be particular to the Community.
5. Any appointment to a leadership role with the required functions will be made by the Community of Learning in conjunction with an external professional adviser.
6. Successful collaboration changes and evolves, and Communities of Learning must have sufficient flexibility to enable this rather than limit it.
7. In recognising these factors, each Community of Learning will access its own and new resources to support the attainment of its goals.
8. A Community of Learning’s success will be dependent on ‘whole of Community of Learning collaboration’. Therefore, allocation of sufficient time and resources to support participants in the Community of Learning is critical.
9. The parties commit to undertake further work on Māori, Pasifika, Early Childhood Education, Support Staff, Special Education and Professional Learning and Development to build on the work begun in the Working Group in the next and final stage of the Joint Initiative Development. The parties acknowledge this may lead to additional changes in future collective agreement bargaining rounds.
10. Leadership, teaching, collaboration and support roles within Communities of Learning should align with career pathways for principals, teachers, support and specialist staff to ensure continuous development of leadership and teaching capacity.

 

 
Leadership from the Middle
§ Where is the coherence—where is the glue? We find it “in the middle”.

What actions are you going to take home as a result of this workshop?

References


Fullan, M. (2011). Choosing the wrong drivers for whole system reform. Seminar Series 204. Melbourne, Australia: Centre for Strategic Education.

Fullan, M. (2013). Great to excellent: Launching the next stage of Ontario’s education reform.
www.edu.gov.on.ca/eng/document/reports/FullanReport_EN_07

Fullan, M. (2013). Stratosphere: Integrating technology, pedagogy and change knowledge. Toronto:
Pearson.

Fullan, M. (2014). The principal: Three keys to maximizing impact. San Francisco: Jossey-Bass.

Fullan, M. (2015). Freedom to change: Four strategies to put your inner drive into overdrive. San
Francisco, CA: Jossey-Bass.

Fullan, M., & Donnelly, K. (2015). Evaluating and assessing tools in the digital swamp. Bloomington, IN:
Solution Tree Press.

Fullan, M., & Quinn, J. (2015). Coherence: The right drivers in action for schools, districts, and systems.
Thousand Oaks, CA: Corwin; Toronto, ON: Ontario Principals’ Council.

Fullan, M., & Rincón-Gallardo, S. (in press). Developing high quality public education in Canada: The case
of Ontario. In F. Adamson, B. Astrand, & L. Darling-Hammond (Eds.), Global education reform:
Privatization vs public investments in national education systems. New York, NY: Routledge.

Fullan, M., Rincón-Gallardo, S., & Hargreaves, A. (2015). Professional capital as accountability. Education
Policy Analysis Archives, 23(15), 1-18.

Hargreaves, A., & Fullan, M. (2012). Professional capital: Transforming teaching in every school. New York, NY: Teachers College Press.

Hattie, J. (2015). What works best in education: The politics of collaborative expertise. London, UK:
Pearson.

Kirtman, L., & Fullan, M. (2015). Leaders who lead. Bloomington, IN: Solution Tree.

New Pedagogies for Deep Learning (NPDL). (2015). Retrieved from www.NPDL.global

November, A. (2012). Who owns the learning? Preparing students for success in the digital age.
Bloomington, IN: Solution Tree.

Nunan, D. (2003). Practical English language teaching. New York, NY: McGraw-Hill.

Organisation for Economic Co-operation and Development. (2013). Teachers for the 21st century: Using evaluation to improve teaching. Paris, France: Author.

Organisation for Economic Co-operation and Development. (2015). Students, computers and learning: Making the connection. Paris, France: Author.

Pentland, A. (2014). Social physics: How good ideas spread—the lessons from a new science. New York,
NY: Penguin.

Quaglia, R.J., & Corso, M.J. (2014). Student voice: The instrument of change. Thousand Oaks, CA:
Corwin.

Rincón-Gallardo, S., & Fullan, M. (in press). Essential features of effective networks and professional
collaboration. Journal of Professional Capital and Community.

 

Michael Fullan, OC, is professor emeritus at the Ontario Institute for Studies in Education, University of Toronto. He served as special adviser in education to Ontario premier Dalton McGuinty from 2003 to 2013, and now serves as one of four advisers to Premier Kathleen Wynne. He has been awarded honorary doctorates from the University of Edinburgh, University of Leicester, Nipissing University, Duquesne University, and the Hong Kong Institute of Education. He consults with governments and school systems in several countries around the world.

Fullan has won numerous awards for his more than thirty books, including the 2015 Grawemeyer Award with Andy Hargreaves for Professional Capital. His books include the best sellers Leading in a Culture of Change, The Six Secrets of Change, Change Leader, All Systems Go, Motion Leadership, and The Principal: Three Keys to Maximizing Impact. His latest books are Coherence: The Right Drivers in Action (with Joanne Quinn), Evaluating and Assessing Tools in the Digital Swamp (with Katelyn Donnelly), Leadership: Key Competencies (with Lyle Kirtman), and Freedom to Change.

Special thanks to Joanne Quinn and Eleanor Adam for their training design contributions.

Produced by Claudia Cuttress

Cover Design by BlinkBlink

Please visit our website

michaelfullan.ca

Chapter 6

Our Coherence Framework is “simplexity.” Simplexity is not a real word, but it is a valuable concept. Simplexity means that you take a difficult problem and identify a small number of key factors (about four to six)—this is the simple part. And then you make these factors gel under the reality of action with its pressures, politics, and personalities in the situation—this is the complex part. In the case of our framework, there are only four big chunks and their interrelationships. Not only are these components dynamic but they also get refined over time in the setting in which you work. You have to focus on the right things, but you also must learn as you go.

One of our favorite insights came from a retired CEO of a very successful company who, when asked about the most important thing he had learned about leadership, responded by saying, “It is more important to be right at the end of the meeting than the beginning” (David Cote, Honeywell, nyti.ms/1chUHqp). He was using this as a metaphor for a good change process: leaders influence the group, but they also learn from it. In fact, joint learning is what happens in effective change processes. If you are right at the beginning of the meeting, you are right only in your mind. If you are right at the notional end of the meeting, it means that you have processed the ideas with the group.

McKinsey & Company conducted a study of leaders in the social sector (education included) and opened their report with these words: “chronic underinvestment [in leadership development] is placing increasing demands on social sector leaders” (Callanan, Gardner, Mendonca, & Scott, 2014). Their conclusions are right in our wheelhouse. In the survey of 200 social sector leaders, participants rated four critical attributes: balancing innovation with implementation, building executive teams, collaborating, and managing outcomes. Survey respondents found themselves and their peers to be deficient in all four domains. In one table, they show the priorities—ability to innovate and implement, ability to surround selves with talented teams, collaboration, and ability to manage to outcomes—in terms of how respondents rated themselves and rated their peers as strong in the given domain. Both sets of scores were low—all below 40 percent. Collaboration, for example, was rated as 24 percent (self-rating) and 24 percent (rating of their peers). So the top capabilities are in short supply.

Leaders build coherence when they combine the four components of our Coherence Framework with meeting the varied needs of the complex organizations they lead. Coherence making is a forever job because people come and go, and the situational dynamics are always in flux. They actively develop lateral and vertical connections so that the collaborative culture is deepened and drives deepened learning and reinforces the focused direction. Achieving coherence in a system takes a long time and requires continuous attention. The main threat to coherence is turnover at the top with new leaders who come in with their own agenda. It is not turnover per se that is the problem, but rather discontinuity of direction. Sometimes systems performing poorly do require a shakeup, but we have also seen situations where new leaders disrupt rather than build on the good things that are happening.
And we have seen (more rarely in our experience) districts like Garden Grove where there was a change of superintendents based on a deliberate plan to continue and deepen the effectiveness of the system. Ideally, a changeover combines continuity and innovation. As we have said, coherence making and re-making is a never-ending proposition. The previous chapters contain many ideas about leadership, and we hope the reader has garnered key lessons about each of the four components. We won’t repeat these ideas here. Instead, we boil down leadership to two big recommendations: master the framework, and develop leaders at all levels.

There are many different ways to proceed. Here are a few: conduct a mental inventory with others by applying the framework to your system to examine whether you have included everything and to determine how well you are doing on each sub-item; discuss the framework among your leadership team, starting with the four main headings to see if the ideas resonate; start discussing the main concepts with other leaders in the system as you begin to form plans and strategies; and start through action forums, working on the four domains. However you go about it, take the advice we gave in Chapter 2: participate as a learner working alongside others to move the organization forward. The framework is not a blueprint but a prompt to assess whether you are actually addressing the four components and the 13 subcomponents. Use the framework to get a 360-degree snapshot of how the coherence is perceived at all levels. To get you started, we provide a Coherence Assessment Tool in Figure 6.2. The tool includes the four components and prompts for starting discussions about the subcomponents. We encourage you to focus on identifying the evidence of each element in your organization. You may want to have individuals in different roles in the organization reflect and then combine those reflections to get a full picture. Consider areas where perceptions are similar and use areas that are different as starting points for deeper conversations—Is your approach comprehensive enough?

Are you addressing all four components? Consider your strengths but also the areas of greatest need as you review the four parts of the framework, and identify ways you can leverage the former and develop the latter. There is no one right formula—but what’s important is to use the exercise to move to action. Once again, the strongest change process shapes and reshapes quality ideas as it builds capacity and ownership among participants. As you become stronger and stronger in practicing the Coherence Framework, you will get greater enthusiasm and greater results that will spur people on to accomplish more. “Talking the walk,” as we have said, is both a great indicator and a great strategy for the group to become clearer and more committed individually and collectively. Can leaders at all levels clearly describe the framework as it is being used in the system?
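If your team collects the Coherence Assessment Tool reflections electronically, a small script can help combine them. The Python sketch below is a hypothetical illustration, not part of the tool itself: it assumes a 1-10 rating per component per respondent (the role names, ratings, and gap threshold are all invented) and flags the components where roles perceive coherence most differently, which are natural starting points for the deeper conversations described above.

    # Minimal sketch: combine coherence ratings from different roles and
    # surface perception gaps. All data and thresholds are hypothetical.
    from collections import defaultdict
    from statistics import mean

    COMPONENTS = [
        "Focusing Direction",
        "Cultivating Collaborative Cultures",
        "Deepening Learning",
        "Securing Accountability",
    ]

    # (role, component, rating on an assumed 1-10 scale)
    ratings = [
        ("teacher", "Focusing Direction", 7),
        ("teacher", "Securing Accountability", 4),
        ("principal", "Focusing Direction", 9),
        ("principal", "Securing Accountability", 8),
        ("district leader", "Focusing Direction", 8),
        ("district leader", "Securing Accountability", 9),
    ]

    # Average each component's ratings within each role.
    by_component = defaultdict(lambda: defaultdict(list))
    for role, component, rating in ratings:
        by_component[component][role].append(rating)

    for component in COMPONENTS:
        roles = by_component.get(component)
        if not roles:
            continue  # no reflections collected for this component yet
        averages = {role: round(mean(vals), 1) for role, vals in roles.items()}
        gap = max(averages.values()) - min(averages.values())
        print(component, averages, f"gap = {gap:.1f}")
        if gap >= 3:  # invented threshold: big gaps seed deeper conversations
            print("  -> perceptions differ; start the conversation here")

The point of such a script is not the numbers themselves but making similarities and differences in perception visible quickly, so the group can move to action.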

As you use the Coherence Framework to reflect on organizational coherence, you can also think of progress in terms of developing specific leadership competencies. Kirtman and Fullan (2015) show how the seven competencies of highly effective leaders mesh with “whole system improvement.”

The seven skills are listed in Figure 6.3.

Figure 6.3 Leadership Competencies for Whole System Improvement
1. Challenges the status quo
2. Builds trust through clear communications and expectations
3. Creates a commonly owned plan for success
4. Focuses on the team over self
5. Has a high sense of urgency for change and sustainable results
6. Commits to continuous improvement
7. Builds external networks/partnerships

These competencies map onto our Coherence Framework. Challenging the status quo is part and parcel of focusing new directions. Building trust and creating a commonly owned plan are very much part of collaborating with purpose. Focusing on the team is about leadership development in others. The next two—sense of urgency in relation to results and continuous improvement—relate directly to internal and external accountability. External networks and partnerships are a wraparound set of collaborative activities that enable leaders to both use and contribute to the external environment. Most leaders, as the McKinsey & Company study revealed, are not good at leading the change process. Mastering our framework will address that deficit and enable you and your system to become much more effective and much more likely to become more sustainable. And you don’t have to do it alone; indeed, it cannot be done alone. It takes the group to change the group, and it takes many leaders to change the group. This is why developing leaders at all levels is essential.

develop their leadership skills and help others do the same. Leaders developing other leaders becomes the natural order of the day. In addition, the organization should develop and use other tools to systematically foster leadership in the system. This would include mentoring, coaching, giving feedback, interning, and training in key skills such as communication and media skills. In our model, the difference is that these more formal strategies do not serve as drivers but as reinforcers of the direction of the organization generated by our four-part Coherence Framework.

Again, Ontario did this well. Regular business concentrated on focused direction, collaboration, increasingly deeper learning, and internal accountability—all to serve the three core goals: increase student achievement, reduce the gap, and increase public confidence in the public school system (latterly, Ontario has added a fourth goal: the well-being of students). To back this up, the leadership unit within the ministry developed (in partnership with districts) tools—leadership frameworks and strategies—to cultivate leadership within districts and schools (www.education-leadership-ontario.ca). There are two crucial elements of this strategy. One is that formal leadership development was expressly in the service of implementing the main agenda of the three core goals. They reinforced and were in the same direction as the core agenda. The leadership strategy was a supporter and reinforcer, not a driver. Second—and this is remarkable—the leadership framework tool was never compulsory, but everyone uses it. It became commonly owned because the process drew people to the best solution, which has now become a requirement (every district must develop a leadership succession plan). The end result is that leadership development becomes part of the day-to-day evolution of activities.

Review Infographic 6, on page 138, to clarify how you will use leadership to integrate the four components of the Coherence Framework.
