Standards and Accountability
What do state-reported school rankings, comparisons, and student achievement and growth data say, if anything, about the quality of a particular school and its administrators, teachers, and programming?
Reading Materials
Apple, M. W. (2007). Ideological success, educational failure? On the politics of No Child Left Behind. Journal of Teacher Education, 58(2), 108-116.
Young, M. D., Winn, K. M., & Reedy, M. A. (2017). The Every Student Succeeds Act: Strengthening the focus on educational leadership. Educational Administration Quarterly, 53(5), 705-726.
Knight, D. S. (2019). Are school districts allocating resources equitably? The Every Student Succeeds Act, teacher experience gaps, and equitable resource allocation. Educational Policy, 33(4), 615-649.
Phillips, V., & Wong, C. (2010). Tying together the common core of standards, instruction, and assessments. Phi Delta Kappan, 91(5), 37-42.
Anderson, K., Harrison, T., & Lewis, K. (2012). Plans to Adopt and Implement Common Core State Standards in the Southeast Region States. Issues & Answers. REL 2012-No. 136. Regional Educational Laboratory Southeast.
Xu, Z., & Cepa, K. (2018). Getting College-Ready during State Transition toward the Common Core State Standards. Teachers College Record, 120(6), n6.
The Every Student Succeeds Act: Strengthening the Focus on Educational Leadership
Michelle D. Young, Kathleen M. Winn, and Marcy A. Reedy
University of Virginia, Charlottesville, VA, USA
Abstract
Purpose: This article offers (a) an overview of the attention federal policy
has invested in educational leadership with a primary focus on the Every
Student Succeeds Act (ESSA), (b) a summary of the critical role school
leaders play in achieving the goals set forth within federal educational policy,
and (c) examples of how states are using the opportunity afforded by the
focus on leadership in ESSA. Findings: Through the examination of federal
policy and existing research in this arena, we review the level of attention
paid to educational leadership within the Elementary and Secondary Education
Act, its reauthorizations, and other federal education legislation. ESSA
provides an enhanced focus on educational leadership and acknowledges
the importance of leaders in achieving federal goals for education.
Furthermore, ESSA acknowledges the importance of developing a strong
leadership pipeline and, thus, allows states and districts to use federal funds
to support leadership development. In this article, we delineate this focus
on leadership within ESSA and offer examples of how states are planning
to support leadership development. Implications and Conclusion: The
important role that school leadership plays in supporting student, teacher,
and school-wide outcomes warrants its inclusion within federal education
policy. However, the opportunity to realize ESSA’s intended goals around
leadership development could be undermined by forces at both the state
and federal levels.
Keywords
federal and state policy, educational leadership, Every Student Succeeds Act,
ESSA, preparation
Introduction
In December 2015, the Every Student Succeeds Act (ESSA) was signed into
law, reauthorizing the Elementary and Secondary Education Act (ESEA) and
replacing the No Child Left Behind Act (NCLB). ESEA, the federal law that
authorizes federal funding for K-12 schools, represents the nation’s commit-
ment to equal educational opportunity for all students and has influenced the
education of millions of children. ESSA has two primary goals: to require
states to align their education programs with college and career ready stan-
dards and to extend the federal focus on equity by providing resources for
poor students, students of color, English learners, and students with disabili-
ties. For those in the field of educational leadership, ESSA provides a direct
acknowledgment of educational leadership as a factor in achieving national
educational goals. Specifically, the act provides new pathways for states and
districts to use federal funds for the development of school principals and
other school leaders (Every Student Succeeds Act, 2015).
This article assumes that the federal purposes behind ESEA and ESSA are
valid—that underserved student populations must receive additional
resources and special attention in order to receive equitable educational
opportunities and that the federal government should have a role in stimulat-
ing and supporting improvement in the quality of education offered to stu-
dents. Furthermore, we applaud the explicit inclusion of leadership among
ESSA’s substantive goals. ESSA provides an opportunity for leadership
development to be substantively addressed within a stable and long-term fed-
eral policy. Our support and enthusiasm, however, is tempered by two con-
cerns. First, we are concerned that forces at the state and federal levels (e.g.,
budget proposals) could undermine the efforts of states and local education
agencies to support substantive leadership development. Second, we are con-
cerned that programs for leadership development included within many state
ESSA plans “will under- or over-reach, and that states without the knowl-
edge, capacity, or will to act smartly will stagnate or regress” (Castagna,
Young, Gordon, Little, & Palmer, 2016, p. 2). Without prioritizing leadership
and adequately supporting the development of educational leaders, current
policies and programs will have a hard time meeting the core purposes of the
legislation.
The case for supporting the current focus on educational leadership and
leadership development in federal policies and programs rests on a simple
argument: Leadership matters. A growing body of research has consistently
demonstrated that leadership is one of the most important school-level factors
influencing a student’s education (e.g., Coelli & Green, 2012; Grissom &
Loeb, 2011; Leithwood, Seashore, Anderson, & Wahlstrom, 2004; Robinson,
Lloyd, & Rowe, 2008). Specifically, by directing their organization, managing
the people within the organization, leading vision and goal development of the
school and district, and improving the instructional agenda in their schools
and districts, leaders influence student learning and development (Leithwood
et al., 2004). Through their focus on these four critical areas, principals are one
of the most important school-level determinants of student achievement
(Leithwood et al., 2004). Emphasizing building leaders within federal policy
and incorporating their development within programming at the state and dis-
trict levels are essential to realizing federal education policy goals.
Furthermore, a growing body of evidence demonstrates a link between
leadership preparation and practice. Extensive reviews of research on exem-
plary leadership preparation programs and quality program features (e.g.,
Darling-Hammond, Meyerson, LaPointe, & Orr, 2009; Jackson & Kelley,
2002; McCarthy, 1999; Young & Crow, 2016; Young, Crow, Ogawa, &
Murphy, 2009) point to similar attributes of quality features. Key among
those features are (a) a quality and coherent curriculum that emphasizes
instructional leadership and school improvement and (b) integrated field
experiences that support the curriculum and are supervised by experienced
educational leaders. Indeed, research suggests a strong relationship between
what is taught and changes in how candidates understand and enact their
leadership (Young, O’Doherty, Gooden, & Goodnow, 2011), the develop-
ment of competencies (Leithwood, Jantzi, Coffin, & Wilson, 1996; Orr &
Barber, 2007), the capacity to support educational improvement (Pounder,
1995), and problem framing and problem solving. Moreover, in a compara-
tive study of two university–district partner programs and one conventional
university-based preparation program, Orr and Barber (2007) found that a
comprehensive and standards-based curriculum was significantly and posi-
tively related to three types of outcomes: self-assessed leadership knowledge
and skills, leadership career intentions, and graduate career advancement.
The time seems ripe for examining the treatment of educational leadership
within federal policy and state plans for leadership development to ensure
congruency with new knowledge on the important roles educational leaders
and leadership development play in fostering student success. This article
begins with a review of the level of attention dedicated to educational leader-
ship within ESEA, subsequent reauthorizations of this landmark bill, and
other federal legislation focused on education. Subsequently, we summarize
the literature demonstrating the influence of educational leadership—both
direct and indirect—on the learning environment and on student achieve-
ment. We also describe the focus on leadership within ESSA. Having
reviewed the evidence linking leadership to federal education goals, we then
share several examples of how states are supporting leadership development
by using new avenues available to them through ESSA. We conclude with a
brief discussion of the opportunities and challenges presented by ESSA for
leadership development.
The Role of Leadership in Federal Education
Legislation: 1965-2015
Educational leadership has traditionally been an underappreciated and under-
resourced topic in federal education legislation. However, as the knowledge
base supporting educational leadership has expanded, so too has its treatment
in federal policy.
Since the initial passage of the ESEA in 1965, school leadership, which
includes terms like school leaders, educational leaders, principals, and educa-
tional leadership, has been referenced in multiple pieces of public federal
law. Using ProQuest Congressional, we found 1,042 pieces of legislation that
include the terms education and one or more of the following school leader-
ship terms: administrator, school leader, school leadership, educational
leader, educational leadership, and principal. This number, however, is
somewhat unreliable because the terms principal and administrator are used
in a number of bills to reference something other than a school leader (e.g.,
principal investigator). However, when the terms principal and administrator
are removed, the number of references to school-level leadership decreases
significantly to 14 pieces of federal legislation, the majority of which have
been passed since 2000. It is possible that the greater frequency of reference
to school leadership in federal policy since 2000 suggests a growing appre-
ciation for educational leadership among policy makers.
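To make the screening step concrete, the sketch below illustrates in Python one way to count bills that mention education together with a leadership term, and then rerun the count with the ambiguous terms excluded. It is only an illustration of the filtering logic described above: the bill texts, helper names, and printed counts are hypothetical, and the authors report searching ProQuest Congressional rather than using any programmatic interface.

```python
# Illustrative sketch of the screening logic described above (hypothetical data;
# the authors searched ProQuest Congressional, not a programmatic corpus).

LEADERSHIP_TERMS = [
    "administrator", "school leader", "school leadership",
    "educational leader", "educational leadership", "principal",
]
# Terms that also appear in non-school contexts (e.g., "principal investigator"),
# which is why the raw count of 1,042 bills overstates attention to school leaders.
AMBIGUOUS_TERMS = {"administrator", "principal"}


def count_bills(bill_texts, strict=False):
    """Count bills mentioning 'education' plus at least one leadership term.

    With strict=True, the ambiguous terms are dropped, mirroring the article's
    much narrower count of 14 pieces of legislation.
    """
    terms = [t for t in LEADERSHIP_TERMS if not (strict and t in AMBIGUOUS_TERMS)]
    count = 0
    for text in bill_texts:
        lower = text.lower()
        if "education" in lower and any(term in lower for term in terms):
            count += 1
    return count


# Toy example: the first bill is a false positive ("principal investigator").
bills = [
    "A bill to fund education research conducted by a principal investigator.",
    "A bill concerning education and professional development for school leaders.",
]
print(count_bills(bills))               # 2 (broad count, false positive included)
print(count_bills(bills, strict=True))  # 1 (ambiguous terms excluded)
```

The same idea scales to a full legislative corpus; the only substantive choice is which search terms to treat as ambiguous.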
In addition to considering how frequently school leadership has been ref-
erenced in federal policy, it is also important to consider how substantively
and in what capacity school leadership has been addressed. The majority of
references to school leadership occurred in flagship education policy bills,
such as the reauthorizations of ESEA (2015, 2001, and 1987), the 2004 reau-
thorization of Individuals With Disabilities Education Act (IDEA), and reau-
thorizations of the Higher Education Act (HEA; 2008, 1998, 1992, and 1986).
For a full breakdown of flagship federal education legislation referencing
school leadership, see Table 1. The greater part of the remaining references to
school leadership are found in appropriations or supplemental appropriations
bills as well as in independent education reform bills.
With regard to substance, the three most relevant pieces of federal legisla-
tion include ESEA and subsequent reauthorizations and reauthorizations of
the HEA and IDEA. We provide a few highlights from each of these pieces of
legislation below.
Elementary and Secondary Education Act
In 1965, the Congress authorized the Elementary and Secondary Education
Act (ESEA). Developed by the Commissioner of Education and his team dur-
ing the Johnson administration, ESEA represented a revolutionary set of pro-
grams. For the Johnson administration, the legislation had two primary
purposes: (a) to provide a legislative strategy for establishing the precedent of
federal aid to K-12 public education and (b) to serve as a cornerstone of
Johnson’s “War on Poverty” (Kirst & Jung, 1980). The Johnson administra-
tion set out to achieve what it believed state and local governments were not:
ensuring access to quality education for underserved populations. According
to ESEA’s Declaration of Intent, the purpose of Title I was “to provide finan-
cial assistance to local educational agencies serving areas with high concen-
trations of children from low-income families to expand and improve their
educational programs by various means” (PL 89-10 Declaration of Intent,
quoted in Kirst & Jung, 1980, p. 21).
The initial authorization of ESEA in 1965 did not include reference to
building level leadership, but did reference educational leadership at the state
level. Specifically, the legislation included the following language (ESEA,
1965):
make grants to State educational agencies to pay part of the cost of experimental
projects for developing State leadership or for the establishment of special
services . . . (p. 59)
training and otherwise developing the competency of individuals who serve
State or local educational agencies and provide leadership, administrative, or
specialist services throughout the State . . . (p. 62)
According to Kirst and Jung (1980), increasing the capacity of state depart-
ments of education and their leadership was a deliberate strategy used to
build ownership and support for the implementation of ESEA.
The reauthorizations of ESEA in 2015, 2001, and 1987 (particularly, ESSA
in 2015 and NCLB in 2001), in contrast, addressed school leadership more
comprehensively. They included the provision of local education agency
(LEA) subgrants for the “development and implementation of professional
development programs for principals that enable the principals to be effective
school leaders and prepare all students to meet challenging State academic
content” (NCLB, 2001, p. 203). NCLB also included a national activity of
demonstrated effectiveness where the U.S. Department of Education (USDE)
was “authorized to establish and carry out a national principal recruitment
program to assist high-need local educational agencies in recruiting and train-
ing principals” (NCLB, 2001, p. 212). The additional provisions for school
leadership in the ESEA reauthorization of 2015 (ESSA) are covered in a later
section of this article.

Table 1. Flagship Federal Legislation Referencing School Leadership.
• Every Student Succeeds Act (ESEA reauthorization, 2015-2016): Optional “3% set aside” of Title II A funds for state-level activities and funding for “evidence-based” interventions around leadership
• Higher Education Opportunity Act (HEA reauthorization, 2007-2008): Funding for partnership grants for the development of leadership programs
• Individuals with Disabilities Education Act (IDEA reauthorization, 2004): Providing personnel development grants and interdisciplinary training to support school leaders
• No Child Left Behind Act (ESEA reauthorization, 2001-2002): SEA grants and LEA subgrants to support leadership (reform certification, induction/mentoring, professional development) and support for establishing a national principal recruitment program
• Higher Education Amendments of 1998 (HEA reauthorization, 1997-1998): Sense of Congress declaration that leadership is important and support for partnerships between IHEs and K-12 schools to identify strong candidates
• Reauthorization of the Higher Education Act (HEA reauthorization, 1991-1992): Support for establishing state leader academies and professional development academies in each state
• Elementary and Secondary School Improvement Amendments of 1988 (ESEA reauthorization, 1987-1988): SEA grants and LEA subgrants to support leadership
• Higher Education Amendments of 1986 (HEA reauthorization, 1985-1986): Grants to “collect information on school leadership skills”
Note. HEA = Higher Education Act; IHE = institutions of higher education; LEA = local education agency.
The Higher Education Act
The initial authorization of the Higher Education Act (HEA) in 1965 did not
contain reference to school leadership; however, the reauthorizations of 2008,
1998, 1992, and 1986 did address school leadership. In fact, the 1998 HEA
reauthorization included a “Sense of Congress” declaration on the impor-
tance of school leadership, and authorized grants to “collect information on
school leadership skills” (HEA, 1998, p. 516). Other school leadership–
related policies contained in HEA reauthorizations included the following:
• Establishing school leader and professional development academies in
each state (1992)
• Providing partnership grants for the development of leader programs
(1998 and 2008)
Individuals With Disabilities Education Act
In 2004, the Individuals with Disabilities Education Act (IDEA) substan-
tively addressed school leadership in a number of ways, including
• Providing personnel development grants to support “high-quality pro-
fessional development for principals, superintendents, and other
administrators, including training in instructional leadership,” as well
as other areas critical to the leadership of students with special needs
(P.L. 108-446, 2004, p. 129)
• Supporting leadership preparation activities that provide “interdisci-
plinary training for various types of leadership personnel” (P.L. 108-
446, 2004, p. 133)
While leadership has not gone completely unnoticed within federal education
policy, in comparison with the attention devoted to other educational personnel
and programming, the focus on educational leadership has been limited. This is
particularly true when you consider individual pieces of legislation. For example,
in the 2004 reauthorization of IDEA, school leadership is referenced in 15 places
within the Act. In contrast, teachers are referenced in 135 different places. Within
the next section, we review literature on the importance of educational leadership
to attaining the goals set forth within federal education legislation.
Research on the Connection Between Leadership
and Student Achievement
Research accumulating over the past 40 years suggests the dynamic nature of
both the leadership role and the context in which leaders work. However,
over the past 15 years, evidence of the importance of school leadership in
both direct and indirect ways has mounted, and this evidence has been con-
sistently shared with the field and policy makers alike.
Leaders affect every aspect of schooling. Indeed, principal leadership
directly shapes elements such as teacher practices (Robinson et al., 2008)
through providing instructional advice (Robinson et al., 2008), allocating
necessary resources for learning and development (Horng & Loeb, 2010;
LaPointe Terosky, 2014), offering professional development opportuni-
ties for teachers (Sanzo, Sherman, & Clayton, 2011; Sebastian &
Allensworth, 2012), establishing a culture of trust (Daly, 2009; Sanzo
et al., 2011; Tschannen-Moran, 2009), prioritizing equity (Brooks, Jean-
Marie, Normore, & Hodgins, 2007), collaborating and distributing leader-
ship (Leithwood, Harris, & Hopkins, 2008; Leithwood & Jantzi, 1990;
Marks & Printy, 2003; Sanzo et al., 2011), and focusing on student learn-
ing (Sanzo et al., 2011). Furthermore, through school leaders’ direct influ-
ence on these factors, they indirectly affect student achievement
(Leithwood et al., 2009; Robinson et al., 2008; Supovitz, Sirinides, &
May, 2010). There is substantial research evidence demonstrating that
school leaders can be powerful drivers of student outcomes. Robinson
et al. (2008) found in their meta-analysis that when school leaders focus
on effective instruction, “the more likely they are to have a positive
impact on students’ outcomes” (p. 664). This follows logically as they
“hold the formal authority, responsibility, and discretion for creating the
very conditions and supports that promote student achievement” (Hitt &
Tucker, 2016, pp. 561-562).
The remainder of this section is categorically organized based on previous
work by Leithwood and Riehl (2005) and Leithwood et al. (2008), who sug-
gest that school leaders meaningfully influence student learning through their
leadership of:
1. their organization,
2. the visions and goals of the school and district,
3. the people within the organization, and
4. the curricular and instructional agenda in their schools and districts.
In addition to providing a brief summary of the research that addresses the
relationship between and among these four areas of leader practice and stu-
dent achievement, we also highlight research that addresses the critical role
of leadership in supporting one of the key goals of ESSA: educational equity.
Although the evidence offered is not exhaustive, it is representative of com-
mon themes generally accepted by the field.
School Leaders Influence Their Organization
Silins, Mulford, and Zarins (2002) note that “school as a learning organiza-
tion is defined by the level and quality of leadership that characterizes the
everyday work of the school” (p. 634). Principals and other leaders influence
this everyday work in explicit ways, such as hiring and staffing (Horng
& Loeb, 2010), building a trustworthy and loyal culture (Sanzo et al., 2011;
Silins et al., 2002; Tschannen-Moran, 2009) that is also safe (Sebastian &
Allensworth, 2012; Sebring, Allensworth, Bryk, Easton, & Luppescu, 2006),
supporting a collaborative environment through distributing leadership
(Sanzo et al., 2011; Silins et al., 2002; Spillane, 2005), fostering professional
learning communities (Sanzo et al., 2011), making connections with families
and the community (Hitt & Tucker, 2016), and leading school turnaround
efforts (Leithwood et al., 2008).
School leaders who subscribe to an instructional leadership approach promote
the achievement of school-wide goals and establish an atmosphere where attain-
ing those goals is realistic (Robinson et al., 2008). This is a key leadership under-
taking, given that existing research (e.g., Sebastian & Allensworth, 2012) demonstrates
how the quality of the learning environment affects achievement for students.
School Leaders Influence the Development of and Execution of
the Visions and Goals of the School and District
Essential to setting the tone, culture, or climate of an organization is the
development and execution of vision and goals, a process that is advanced by
the leader (Hallinger & Murphy, 1985; Hitt & Tucker, 2016) and must be
focused on student learning (Robinson et al., 2008). Researchers indicate that
effective leaders explicitly plan and convey in detail how the mission, vision,
and goals will be met (Robinson et al., 2008; Sanzo et al., 2011). It is through
these activities that school leaders are able to articulate and solidify a “sense
of overall purpose” (Silins et al., 2002, p. 620) for the school and inspiration
toward the advancement of improvement efforts. In short, school leaders who
have attended to the organization and the people within their organization are
well positioned to help their staff achieve their goals surrounding the mission
and vision of the organization.
School Leaders Influence the People Within Their Organization
The high-quality management of educator practice is an additional way
school leaders support organizational effectiveness and enhance the learning
experience for all students. As the formal educational administrator, school
leaders positively influence teachers’ “motivations, commitment and beliefs
connecting the supportiveness of their working conditions” (Leithwood et al.,
2008, p. 32). Leaders are positioned to foster an encouraging and trusting
tone that allows and empowers teachers to “take risks to improve outcomes”
(Daly, 2009, p. 207).
Principals supervise teachers in their instruction through the “collegial
and informal process of helping teachers improve their teaching” (DiPaola &
Hoy, 2008, p. vi). In a related but distinct role, school and district lead-
ers are charged with evaluating the (a) curricular and instructional program-
ming (Leithwood, 2012; Murphy, Elliot, Goldring, & Porter, 2006; Sebring
et al., 2006) as well as (b) teacher and building principal professional practice
(DiPaola & Hoy, 2008; Murphy & Hallinger, 1988). Through utilization of
data, school leaders can effectively evaluate and influence these areas, thus,
sustaining the focus on the enterprise of continuous improvement (Hitt &
Tucker, 2016).
School Leaders Influence the Curricular and Instructional
Agenda in Their Schools and Districts
A primary responsibility of a school leader is to lead and monitor the curricu-
lar and instructional agenda (Hallinger & Murphy, 1985; Hitt & Tucker, 2016;
Robinson et al., 2008; Sanzo et al., 2011), ensuring its coherence (Sebastian &
Allensworth, 2012). Part of this leadership responsibility includes providing
guidance and advice about instructional practices and crafting targeted and
individualized feedback, support, and opportunities for teachers in this
endeavor (May & Supovitz, 2011). Robinson et al. (2008) found in their work
that “leaders in higher performing schools are distinguished from their coun-
terparts in otherwise similar lower performing schools by their personal
involvement in planning, coordinating, and evaluating teaching and teachers”
(p. 662). Through more active engagement, oversight, and coordination of the
school’s curricular and instructional program, leaders were able to positively
affect student outcomes.
Leadership for Equity
Aligned with the original purposes of ESEA, leadership is considered an
essential part of achieving equitable educational opportunities and outcomes
for all students, especially for those students who are poor and/or marginal-
ized. Researchers such as Gay (2002), Ware (2006), Bondy, Ross,
Gallingane, and Hambacher (2007), and Castagno and Brayboy (2008) note that
culturally responsive classrooms help to positively affect student achieve-
ment. Furthermore, the leader plays a critical role in fostering a culture of
support and inclusivity as well as supporting culturally relevant practice
among school staff (Auerbach, 2009; Brooks, Adams, & Morita-Mullaney,
2010; Khalifa, 2010; McKenzie et al., 2008; Robinson et al., 2008; Scanlan
& Lopez, 2012; Theoharis & O’Toole, 2011; Youngs & King, 2002).
Leadership’s critical role in this endeavor is highlighted through the focus of
Standard 3 in the National Educational Leadership Preparation (NELP)
Building Standards (National Policy Board for Educational Administration
[NPBEA], 2017) as well as in the 2015 Professional Standards for Educational
Leaders (NPBEA, 2015). Specifically, NELP Standard 3 calls upon leaders to
“promote the current and future success and well-being of each student and
adults by applying the knowledge, skills, and commitment necessary to
develop and maintain a supportive, equitable, culturally responsive and
inclusive school culture” (NPBEA, 2017, p. 17).
In sum, there is substantial research demonstrating the role of educational
leadership in supporting organizational effectiveness, student educational
outcomes, and educational equity. Because of their formal roles, school lead-
ers affect schools greatly (Leithwood et al., 2008) and are either “credited or
blamed for school outcomes” (Daly, 2009, p. 200). Although ESSA does not
approach this level of specificity with regard to leadership practice, this evi-
dence base justifies the focus within ESSA on educational leaders.
Furthermore, state and district policy makers have been encouraged to con-
sider this research in state plans for leadership development (CCSSO, 2016;
Herman et al., 2016).
The Role of Leadership in ESSA
Unlike previously adopted federal policies, ESSA presents a new and height-
ened focus on educational leadership, acknowledging the importance of
leadership to school improvement and student achievement. The Act
recognizes that school leadership can be “a powerful driver of improved
education outcomes” (Herman et al., 2016, p. 1). Among those who have been
advocating for a more intensive inclusion of leadership, this move has been
widely praised.
Specifically, ESSA “emphasizes evidence-based
initiatives while providing new flexibilities to states and districts with regard
to the use of federal funds, including funds to promote effective school lead-
ership” (Herman et al., 2016, p. 1). Although the development of the Act was
preceded by years of effort to educate the public and policy makers on the
importance of educational leadership and leadership development, the pas-
sage of ESSA has stirred enthusiasm and activity among an even wider group
of stakeholders who are all hoping to make the most of the heightened focus
on educational leadership. In this section, we outline the main features of the
policy, including how leadership is portrayed, how it can be supported at the
state and local levels, and how the policy can support equity through
leadership.
How Leadership Is Defined
Under ESSA, states and districts are allowed multiple strategies for promot-
ing school improvement, and “school leadership is explicitly acknowledged
as a valid target of educational-improvement activities across the titles in
ESSA” (Herman et al., 2016, p. 4). School leadership under ESSA is defined
broadly and includes any individual who is (a) “an employee or officer of an
elementary school or secondary school, local educational agency, or other
entity operating an elementary school or secondary school” and who is (b)
“responsible for the daily instructional leadership and managerial operations
in the elementary school or secondary school building” (Every Student
Succeeds Act, 2015, p. 297).
How Leadership May Be Supported
Title I of ESSA authorizes approximately $16 billion in funding per year to
improve state and local education programs (Every Student Succeeds Act,
2015). Title I, which has traditionally included resources for identifying and
improving low-performing schools, allows states and districts to use federal
funds for activities targeting the knowledge and development of school prin-
cipals and other school leaders. Title II, however, is where the majority of
language concerning leadership development is found. Title II funds are typi-
cally reserved for recruiting and retaining teachers, reducing class sizes, or
providing professional development.
ESSA includes both flexible and targeted funding with allowable uses to
support the quality of teachers, principals, and other school leaders, including
an optional 3% set-aside of Title II funds for school leadership, as well as
state administrative funds. Together, these Title II, Part A provisions allow each state to invest
almost 8% of its total allotment to support leadership pipeline activities,
including recruitment, preparation, and professional development. This is a
significant increase in funds that can be used to support school leaders and
contrasts starkly with current practice. For example, according to CCSSO
(2016) “two-thirds of school districts spend no money on professional devel-
opment for leaders” (p. 1).
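To give a sense of scale, the back-of-the-envelope sketch below applies these percentages to a hypothetical state allotment. The optional 3% school-leadership set-aside and the "almost 8%" combined figure come from the passage above; the assumption that the remaining state-level reservation is roughly 5%, and the $50 million sample allotment, are illustrative only and not drawn from the article.

```python
# Back-of-the-envelope illustration of the Title II, Part A set-asides discussed
# above. The 3% leadership set-aside and the "almost 8%" total are from the text;
# the ~5% state-activities share and the $50M allotment are assumptions made
# only for the sake of the arithmetic.

def title_iia_reservations(total_allotment,
                           leadership_set_aside=0.03,
                           state_activities=0.05):
    """Return the dollar amounts a state could reserve from its Title II-A allotment."""
    leadership = total_allotment * leadership_set_aside
    activities = total_allotment * state_activities
    return {
        "optional leadership set-aside (3%)": leadership,
        "other state-level activities (assumed ~5%)": activities,
        "combined state reservation (~8%)": leadership + activities,
        "remaining for LEA subgrants": total_allotment - leadership - activities,
    }


# Hypothetical state with a $50 million Title II, Part A allotment:
for label, amount in title_iia_reservations(50_000_000).items():
    print(f"{label}: ${amount:,.0f}")
# optional leadership set-aside (3%): $1,500,000
# other state-level activities (assumed ~5%): $2,500,000
# combined state reservation (~8%): $4,000,000
# remaining for LEA subgrants: $46,000,000
```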
Under ESSA, states may use funds (Title II, Part A and others) to support:
(a) the quality and effectiveness of teachers, principals, and other school
leaders; (b) the number of educators who are effective in improving student
academic achievement in schools; and (c) more equitable access to effective
teachers, principals, and other school leaders. Title II, Part A funds may be
used in several ways to support school leadership, such as (a) to support both
traditional and nontraditional pathways for developing educational leaders,
(b) to improve state policies and practices concerning licensure or certifica-
tion, recertification, and the adoption of standards for preparation and prac-
tice, (c) to help districts and local education agencies develop high-quality
professional development, and (d) to support districts’ recruitment and reten-
tion strategies that ensure a strong leadership pipeline (Castagna et al., 2016;
CCSSO, 2016; Every Student Succeeds Act, 2015; USDE, 2016).
ESSA Title II, Part B also includes funding opportunities for the develop-
ment of a strong leadership pipeline. Specifically, four competitive grants are
available to states, including (a) the School Leader Recruitment and Support
Program (SLRSP), (b) Supporting Effective Educator Development (SEED),
(c) Teacher and School Leader Incentive Programs (TSLIP), and (d) Education
Innovation Research (EIR). The SLRSP grants, formerly known as SLRP
grants, are available to states and districts that are interested in developing and
supporting talented leaders for high needs schools. Importantly, these grants
can be carried out with higher education partners. The SEED grants are avail-
able to higher education institutions as well as other nonprofits to help recruit,
select, prepare, and provide professional development for educators, includ-
ing educational leaders. The TSLIP grants, formerly known as TIF or teacher incen-
tive funds, have been expanded to include leadership and are available to
states, districts, and other nonprofit organizations to support the career path-
ways for talented educational professionals. Finally, the EIR grants, formerly i3, are
available for organizations interested in designing and implementing innova-
tive, and preferably research-based, leadership models (CCSSO, 2016; Every
Student Succeeds Act, 2015).
Supporting Equity Through Leadership
One of the key goals of ESSA is to extend the federal focus on and support
for educational equity. It includes a number of provisions regarding the use of
funds to support schools identified as low-performing, including the provi-
sion of development for school leaders and instructional staff. For example,
Title I, Part A requires states to set aside 7% of their funding to help school
districts support low-performing schools, including to help remove barriers
to student achievement. Just as the knowledge and skills of educational lead-
ers can be a key support to achieving educational equity, they can also be a
barrier when leaders are not adequately prepared to support equity, inclusive-
ness, and cultural responsiveness. In such cases, leadership development can
serve as an important intervention. As explained by Herman et al. (2016), “in
many areas of the [ESSA] act where school leadership is not explicitly called
out (e.g., school improvement efforts under Title I), states and districts could
still choose to support leadership-focused activities in pursuit of school-
improvement objectives” (p. 4).
How States Are Strengthening the Focus on
Educational Leadership
September 18, 2017 marked the deadline for the submission of consolidated
state ESSA plans, and educational stakeholders at the local, state, and federal
level have been anxious to gain insight into whether and how states have used
the new opportunities to support leadership development offered through
ESSA. Importantly, each state was required to include in its consolidated
state plan a description of how it planned to use Title II and other relevant
ESSA funds for improving the quality of educators, and a description of its
systems for developing, retaining, and advancing educators—including prin-
cipals and other school leaders. The state plans were required to include, at a
minimum, a description of the state’s systems for certification and/or licens-
ing; the preparation of new educational professionals, particularly, those
being prepared to work with low-income students and students of color; and
the professional growth and improvement of educational professionals
(including school leaders), including induction, development, compensation,
and advancement.
Preliminary reviews of state plans conducted by researchers affiliated with
the University Council for Educational Administration (UCEA) indicate that
many states recognized the emerging research base connecting educational
leadership preparation and practice to key ESSA outcomes and have used it
as an impetus to address school leadership in their ESSA implementation
plans. The remainder of this section provides three examples of state
approaches to supporting school leadership, drawn from Michigan, New
Mexico, and Tennessee. These three states are not alone in their focus on
educational leadership or in their inclusion of new ways to support leadership
development; rather, they were chosen for inclusion because they plan to
exercise the option to use the 3% set-aside Title II funds for state-level activi-
ties that support school leadership in their ESSA implementation plans.
Example 1: Michigan
Michigan plans to invest resources in facilitating the development of strategic
partnerships between specific LEAs and educator preparation programs espe-
cially for the benefit of LEAs identified as Partnership Districts and/or LEAs
with low-performing schools as identified by the accountability system.
Partner educator preparation programs (EPPs) may be traditional programs
within institutions of higher education (IHEs), experimental programs within
IHEs, or alternate route preparation programs operated by IHEs or nonaffili-
ated nonprofit organizations, in accordance with Michigan law (MCL
380.1531i). These partnerships will focus on strategic recruitment of candi-
dates and context-specific clinical and residency-based preparation for both
teachers and leaders according to the needs of the partner LEA. Such district/
preparation provider partnerships are an evidence-based approach to effective leadership
preparation and suggest innovative thinking around school leadership.
Example 2: New Mexico
New Mexico is seeking to improve the percentage of students being taught by
effective or better teachers and principals using differentiated compensation
systems for each level of effective, highly effective, and exemplary teachers.
The state also plans to support the Principals Pursuing Excellence program to
educate and empower principals to practice leadership behaviors that drive sig-
nificant gains in student achievement. This 2-year leadership development pro-
gram leverages a “turnaround mentor” to work with principals in struggling
schools. Past participants in the program reported significant improvements. In
some cases, schools improved more than 3 times as much as the average school in the
state in English language arts, and 1.7 times as much in mathematics.
Example 3: Tennessee
Tennessee’s goal is to create statewide and regional leadership pipelines that
produce transformational school leaders. As part of this effort, the state is
developing an administrator evaluation rubric to guide a fair and transparent
administrator evaluation. The evaluation is designed to foster a culture of
support for instructional leaders and is intended to help engage educators in
reflective dialogue to improve practice. The state also plans to support the
Tennessee Academy for School Leaders to provide high-quality professional
learning opportunities for school leaders that are aligned with the Tennessee
Instructional Leadership Standards. This includes induction academies for
new leaders, professional learning opportunities throughout the year, and uni-
versity partnership opportunities to advance licensure. Additionally, the state
plans to support the Governor’s Academy for School Leadership, in partner-
ship with the Tennessee Governor’s Office, Vanderbilt University, and dis-
tricts, to offer school leaders a 1-year leader development experience
anchored in practice-based mentorship, in-depth feedback cycles, and tai-
lored training sessions.
As noted above, these states represent only three examples of how states
are planning to use Title II funding to support leadership development. Even
within these three examples, we see a number of promising activities target-
ing the quality of school principals and other school leaders.
Conclusion
The current opportunity to support educational leadership development
through ESSA is incredibly important, and we are optimistic about many of
the ideas that have been put forward by states thus far. However, as noted
above, our support and enthusiasm is tempered by several concerns. First, we
are concerned that the efforts of states and local education agencies to support
substantive leadership development could be undermined by forces at either
the state or federal levels. Second, we are concerned that programs for leader-
ship development included within many state ESSA plans “will under- or
over-reach, and that states without the knowledge, capacity, or will to act
smartly will stagnate or regress” (Castagna et al., 2016, p. 2).
With regard to our first concern, perhaps the most obvious example
involves recent federal budget proposals that would eliminate funding for educa-
tional leadership development. Should the federal government choose to
eliminate funding, it is unlikely that states will be in a position to fund the
activities included in their state plans. Grim budget proposals, however, are
not just a current concern, as education has been chronically underfunded for
years.
An additional force at play is the reduced authority of the USDE to regu-
late the design and implementation of state plans. Although as Ferguson
(2016) points out, the limitations placed on the Secretary of Education and
the department were a “fairly predictable response to both NCLB and the
Obama administration’s efforts” (p. 72), they limit the ability of the USDE to
serve as a resource for improving individual initiatives as well as the impact
of initiatives more broadly. Combined with the potential lack of funding,
states are placed in a more dominant role, but with fewer resources.
Similarly, while we are optimistic about the ideas that states are likely to
put forward, we are also keenly aware of the shrinking size of state depart-
ments of education and the impact of such downsizing and record numbers of
retirements on the expertise available within state departments of education.
The commitment and capacity of state departments of education are key to
the effective implementation of ESSA programming.
Our final concern focuses on the tendency to think narrowly about educa-
tional leadership, the role of educational leaders, and leadership develop-
ment. Importantly, leadership is an integrative enterprise and success is
dependent not on one’s knowledge and skills in a few discrete areas, but in
developing expertise in the areas identified by national standards for educa-
tional leadership preparation (e.g., NELP, 2017) and practice (e.g.,
Professional Standards for Educational Leaders, 2015). Our review of the
literature only captured five of the key domains of leadership practice, yet
leaders work in other domains, such as their engagement with parents and
communities and their efforts to advocate for their students, staff, and schools,
which are essential to effective leadership practice.
Finally, we understand the critical role that leadership plays in ensuring
successful implementation, building commitment, and achieving educational
goals. We applaud the fact that federal policy has incorporated insight from
research on how leadership matters in supporting school improvement and
student achievement. As we think toward future reauthorizations of ESSA, we
would recommend a stronger emphasis on educational leadership that extends
beyond leadership development to leadership practice. The research presented
in the previous section demonstrates the important role that leadership plays in
supporting successful school environments and student achievement.
Furthermore, for more than 35 years, research has demonstrated the impor-
tance of strong leadership in the successful implementation of federal pro-
grams at the local school building level (Turnbull, Smith, & Ginsburg, 1981).
As demonstrated in this article, it has taken a long time for federal educa-
tional policy to give substantive attention to educational leadership and to
allow the use of significant funding to support the development of a strong
leadership pipeline. Just as it is important that current initiatives be fully
funded, it is also essential that we consider how to strengthen the focus on
and impact of leadership in federal education policy initiatives going for-
ward. Thus, what we hope to see is not a change in federal goals or purposes,
but a commitment to fully fund ESSA as well as the adoption, over time, of
an enhanced strategy for achieving these purposes with greater success.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publica-
tion of this article.
ORCID iD
M. D. Young https://orcid.org/0000-0002-8380-9176
References
Auerbach, S. (2009). Walking the walk: Portraits in leadership for family engagement
in urban schools. School Community Journal, 19, 9-32.
Bondy, E., Ross, D. D., Gallingane, C., & Hambacher, E. (2007). Creating environ-
ments of success and resilience: Culturally responsive classroom management
and more. Urban Education, 42, 326-348. doi:10.1177/0042085907303406
Brooks, J., Jean-Marie, G., Normore, A. H., & Hodgins, D. W. (2007). Distributed
leadership for social justice: Exploring how influence and equity are stretched
over an urban high school. Journal of School Leadership, 17, 378-408. Retrieved
from http://www.eric.ed.gov/ERICWebPortal/recordDetail?accno=EJ807383
Brooks, K., Adams, S. R., & Morita-Mullaney, T. (2010). Creating inclusive learn-
ing communities for ELL students: Transforming school principals’ perspectives.
Theory Into Practice, 49, 145-151. doi:10.1080/00405841003641501.
Castagna, J., Young, K., Gordon, D., Little, B., & Palmer, S. (2016, August). Education
counsel: Summary analysis of ED’s proposed ESSA regulations for consolidated
plans, accountability, school improvement, and data reporting & ED’s proposed
ESSA regulations assessments and innovative assessment pilots. Retrieved
from http://educationcounsel.com/?publication=educationcounsel-summary-
analysis-eds-proposed-essa-regulations-consolidated-plans-accountability-
school-improvement-data-reporting-eds-proposed-essa-regulations-assessments-inn
Castagno, A. E., & Brayboy, B. M. J. (2008). Culturally responsive schooling for
Indigenous youth: A review of the literature. Review of Educational Research,
78, 941-993. doi:10.3102/0034654308323036
Coelli, M., & Green, D. A. (2012). Leadership effects: School principals and stu-
dent outcomes. Economics of Education Review, 31, 92-109. doi:10.1016/j
.econedurev.2011.09.001
Council of Chief State School Officers. (2016). It’s time to take a big bet on school
leadership. Elevating School Leadership in ESSA Plans: A guide for states.
Washington, DC: Author. Retrieved from http://www.ccssoessaguide.org
Daly, A. J. (2009). Rigid response in an age of accountability: The potential of leader-
ship and trust. Educational Administration Quarterly, 45, 168-216. doi:10.1177/
0013161X08330499
Darling-Hammond, L., Meyerson, D., La Pointe, M. M., & Orr, M. T. (2009).
Preparing principals for a changing world. San Francisco, CA: Jossey-Bass.
DiPaola, M. F., & Hoy, W. K. (2012). Principals improving instruction: Supervision,
evaluation, and professional development. Charlotte, NC: Information Age.
Elementary and Secondary Education Act of 1965, H.R. 2362, 89th Cong., Pub. L.
No. 89-10 (1965).
Every Student Succeeds Act, Pub. L. No. 114-95 (2015).
Ferguson, M. (2016). ESSA is more than the latest acronym on education’s block. Phi
Delta Kappan, 97, 72-73. doi:10.1177/0031721716636879
Gay, G. (2002). Culturally responsive teaching in special education for ethnically
diverse students: Setting the stage. International Journal of Qualitative Studies
in Education, 6, 613-629. doi:10.1080/0951839022000014349
Grissom, J. A., & Loeb, S. (2011). Triangulating principal effectiveness: How
perspectives of parents, teachers, and assistant principals identify the central
importance of managerial skills. American Educational Research Journal, 48,
1091-1123. doi:10.3102/0002831211402663
Hallinger, P., & Murphy, J. (1985). Assessing the instructional management behavior
of principals. Elementary School Journal, 86, 217-247. doi:10.1086/461445
Herman, R., Gates, S., Arifkhanova, A., Bega, A., Chavez-Herrerias, E. R., Han, E.,
. . .Wrabel, S. (2016). School leadership interventions under the Every Student
Succeeds Act: Evidence review. Washington, DC: RAND Corporation.
Higher Education Act. 1998 Amendments to the Higher Education Act of 1965,
Pub. L. No. 105-244 (1998).
Hitt, D. H., & Tucker, P. D. (2016). Systematic review of key leader practices found
to influence student achievement: A unified framework. Review of Educational
Research, 86, 531-569. doi:10.3102/0034654315614911
Horng, B. Y. E., & Loeb, S. (2010). New thinking about instructional leadership. Phi
Delta Kappan, 92, 66-70. doi:10.1108/10569210910939681
Jackson, B. L., & Kelley, C. (2002). Exceptional and innovative programs in educa-
tional leadership. Educational Administration Quarterly, 38, 192-212.
Khalifa, M. (2010). Validating social and cultural capital of hyperghettoized at-
risk students. Education and Urban Society, 42, 620-646. doi:10.1177/0013124510366225
Kirst, M., & Jung, R. (1980). The utility of a longitudinal approach in assessing imple-
mentation: A thirteen-year view of Title I, ESEA. Educational Evaluation and
Policy Analysis, 2, 17-34. Retrieved from http://www.jstor.org/stable/1164086
LaPointe Terosky, A. (2014). From a managerial imperative to a learning impera-
tive: Experiences of urban, public school principals. Educational Administration
Quarterly, 50, 3-33. doi:10.1177/0013161X13488597
Leithwood, K. (2012). Ontario Leadership Framework with a discussion of the leadership
foundations. Ottawa, Ontario, Canada: Institute for Educational Leadership, OISE.
Retrieved from https://www.education-leadership-ontario.ca/application/files/2514/9452/5287/The_Ontario_Leadership_Framework_2012_-_with_a_Discussion_of_the_Research_Foundations
Leithwood, K., Harris, A., & Hopkins, D. (2008). Seven strong claims about suc-
cessful school leadership. School Leadership & Management, 28, 27-42.
doi:10.1080/13632430701800060
Leithwood, K., & Jantzi, D. (1990). Transformational leadership: How principals can
help reform school cultures. School Effectiveness and School Improvement, 1,
249-280. doi:10.1080/0924345900010402
Leithwood, K., Jantzi, D., Coffin, G., & Wilson, P. (1996). Preparing school leaders:
What works? Journal of School Leadership, 6, 316-342.
Leithwood, K., Louis, K. S., Wahlstrom, K., Anderson, S., Mascall, B., & Gordon,
M. (2009). How successful leadership influences student learning: The second
installment of a longer story. In A. Hargreaves, A. Lieberman, M. Fullan & D.
Hopkins (Eds.), Second international handbook of educational change (Vol. 23,
pp. 611-629). Springer International Handbooks. New York,
NY: Springer. doi:10.1007/978-90-481-2660-6_35
Leithwood, K., Seashore, K., Anderson, S., & Wahlstrom, K. (2004). Executive sum-
mary: Review of research: How leadership influences student learning. Retrieved from
https://conservancy.umn.edu/bitstream/handle/11299/2102/CAREI%20ExecutiveSummary%20How%20Leadership%20Influences?sequence=1&isAllowed=y
Leithwood, K. A., & Riehl, C. (2005). What do we already know about educational
leadership? In W. A. Firestone & C. Riehl (Eds.), A new agenda for research
in educational leadership (pp. 12-27). New York, NY: Teachers College Press.
Marks, H. M., & Printy, S. M. (2003). Principal leadership and school performance:
An integration of transformational and instructional leadership. Educational
Administration Quarterly, 39, 370-397. doi:10.1177/0013161X03253412
May, H., & Supovitz, J. A. (2011). The scope of principal efforts to improve instruc-
tion. Educational Administration Quarterly, 47, 332-352. doi:10.1177/00131
61X10383411
McCarthy, M. M. (1999). The evolution of educational leadership preparation pro-
grams. In J. Murphy & K. S. Louis (Eds.), Handbook of research on educational
administration: A project of the American Educational Research Association
(pp. 119-139). San Francisco, CA: Jossey-Bass.
McKenzie, K. B., Christman, D. E., Hernandez, F., Fierro, E., Capper, C. A., Dantley,
M., & Scheurich, J. J. (2008). From the field: A proposal for educating leaders
for social justice. Educational Administration Quarterly, 44, 111-138. doi:10.1177/0013161X07309470
Murphy, J., Elliot, S. N., Goldring, E., & Porter, A. C. (2006). Learning-centered
leadership: A conceptual foundation. New York, NY: Wallace Foundation.
Retrieved from http://files.eric.ed.gov/fulltext/ED505798
Murphy, J., & Hallinger, P. (1988). Characteristics of instructionally effective
school districts. Journal of Educational Research, 81, 175-181. doi:10.1080/
00220671.1988.10885819
National Policy Board for Educational Administration. (2015). Professional stan-
dards for educational leaders. Washington, DC: Author. Retrieved from http://
www.npbea.org
National Policy Board for Educational Administration. (2017). National Educational
Leadership Preparation (NELP) building standards. Washington, DC: Author.
No Child Left Behind Act of 2001, Pub. L. No. 107-110 (2001).
Orr, M. T., & Barber, M. E. (2007). Collaborative leadership preparation: A compara-
tive study of innovative programs and practices. Journal of School Leadership,
16, 709-739.
Pounder, D. G. (1995). Theory to practice in administrator preparation: An evaluation
study. Journal of School Leadership, 5, 151-162.
Robinson, V. M. J., Lloyd, C. A., & Rowe, K. J. (2008). The impact of leader-
ship on student outcomes: An analysis of the differential effects of leadership
types. Educational Administration Quarterly, 44, 635-674. doi:10.1177/00131
61X08321509
Sanzo, K. L., Sherman, W. H., & Clayton, J. (2011). Leadership practices of success-
ful middle school principals. Journal of Educational Administration, 49, 31-45.
doi:10.1108/09578231111102045
Scanlan, M., & Lopez, F. (2012). Vamos! How school leaders promote equity and
excellence for bilingual students. Educational Administration Quarterly, 48,
583-625.
Sebastian, J., & Allensworth, E. (2012). The influence of principal leadership on class-
room instruction and student learning. Educational Administration Quarterly, 48,
626-663. doi:10.1177/0013161X11436273
Sebring, P. B., Allensworth, E., Bryk, A. S., Easton, J. Q., & Luppescu, S. (2006). The
essential supports for school improvement. Chicago, IL: Consortium on Chicago
School Research.
Silins, H. C., Mulford, W. R., & Zarins, S. (2002). Organizational learning and school
change. Educational Administration Quarterly, 38, 613-642. doi:10.1177/00131
61X02239641
Supovitz, J., Sirinides, P., & May, H. (2010). How principals and peers influ-
ence teaching and learning. Educational Administration Quarterly, 46, 31-56.
doi:10.1177/1094670509353043
Theoharis, G., & O’Toole, J. (2011). Leading inclusive ELL: Social justice leadership
for English language learners. Educational Administration Quarterly, 47, 646-
688. doi:10.1177/0013161X11401616
Tschannen-Moran, M. (2009). Fostering teacher professionalism in schools. Educational Administration Quarterly, 45, 217-247. doi:10.1177/0013161X08330501
Turnbull, B. J., Smith, M. S., & Ginsburg, A. L. (1981). Issues for a new administra-
tion: The federal role in education. American Journal of Education, 89, 396-427.
Retrieved from http://www.jstor.org/stable/1085122
U.S. Department of Education. (2016). Non-Regulatory Guidance Title II, Part A:
Building systems of support for excellent teaching and learning (Non-Regulatory
Guidance Title II, Part A of the Elementary and Secondary Education Act of 1965,
as Amended by the Every Student Succeeds Act of 2015). Washington, DC: Author.
Ware, F. (2006). Warm demander pedagogy: Culturally responsive teaching that sup-
ports a culture of achievement for African American students. Urban Education,
41, 427-456. doi:10.1177/0042085906289710
Young, M. D., & Crow, G. (2016). The handbook of research on leadership prepara-
tion (2nd ed.). New York, NY: Routledge.
Young, M. D., Crow, G., Ogawa, R., & Murphy, J. (2009). The handbook of research
on leadership preparation. New York, NY: Routledge.
Young, M. D., O’Doherty, A., Gooden, M., & Goodnow, E. (2011). Measuring change
in leadership identity and problem framing. Journal of School Leadership, 21,
705-734.
Youngs, P., & King, M. B. (2002). Principal leadership for professional development
to build school capacity. Educational Administration Quarterly, 38, 643-670.
Author Biographies
Michelle D. Young, PhD, is the executive director of the University Council for
Educational Administration (UCEA) and a professor in educational leadership at the
University of Virginia. UCEA is an international consortium of research institutions
with graduate programs in educational leadership and policy. She works with univer-
sities, practitioners, professional organizations, and state and national leaders to
improve the preparation and practice of school and school system leaders and to
develop a dynamic base of knowledge on excellence in educational leadership. She
has been instrumental both in increasing the focus of research in the field of educational leadership on leadership preparation and development and in strengthening research translation, dissemination, and utilization processes. She is the primary editor
of the first and second editions of the Handbook of Research on the Education of
School Leaders and is currently chairing the revision of the National Educational
Leadership Preparation (NELP) standards.
Kathleen M. Winn, PhD, is a postdoctoral research associate for the University
Council for Educational Administration housed in the Curry School of Education at
the University of Virginia. Her research interests are primarily situated in leadership
preparation, the intersection of leadership and science education, and program
evaluation.
Marcy A. Reedy, MA, is a project director with the University Council for Educational
Administration (UCEA). She specializes in educational policy and coordinates
UCEA’s policy and advocacy work. She is currently coordinating a comprehensive
review of state ESSA plans, with a specific focus on the use of Title II funds for edu-
cational leadership development initiatives. She is recognized for producing easily
accessible policy briefs and profiles. Prior to joining UCEA, she led the government
relations campaign for the Center for Excellence in Education.
Teachers College Record Volume 120, 060307, June 2018, 36 pages
Copyright © by Teachers College, Columbia University
0161-4681
Getting College-Ready During State
Transition Toward the Common Core State
Standards
ZEYU XU
American Institutes for Research
KENNAN CEPA
University of Pennsylvania
Background: As of 2016, 42 states and the District of Columbia have adopted the Common
Core State Standards (CCSS). Tens of millions of students across the country completed high
school before their schools were able to fully implement the CCSS. As with previous stan-
dards-based reforms, the transition to the CCSS-aligned state education standards has been
accompanied by curriculum framework revisions, student assessment redesigns, and school
accountability and educator evaluation system overhauls.
Purpose: Even if the new standards may improve student learning once they are fully imple-
mented, the multitude of changes at the early implementation stage of the reform might dis-
rupt student learning in the short run as teachers, schools, and communities acclimate to the
new expectations and demands. The goal of this study is not to evaluate the merits and defi-
ciencies of the CCSS per se, but rather to investigate whether college readiness improved among
high school students affected by the early stages of the CCSS implementation, and whether
students from different backgrounds and types of high schools were affected differently.
Research Design: We focus on three cohorts of eighth-grade students in Kentucky and follow them until the end of the 11th grade, when they took the state mandatory ACT tests. The three successive cohorts—enrolled in the eighth grade between 2008 and 2010—each
experienced different levels of exposure to CCSS transition. Using ACT scores as proxy mea-
sures of college readiness, we estimate cohort fixed-effects models to investigate the transitional
impact of standards reform on student performance on the ACT. To gauge the extent to which
the implementation of CCSS is directly responsible for any estimated cross-cohort differences
in student ACT performance, we conduct additional difference-in-differences analyses and
a falsification test.
Data: Our data include the population of three cohorts of eighth-graders enrolled in
Kentucky public schools between 2008 and 2010. The total analytic sample size is 100,212. The
data include student test scores, student background characteristics, and school characteristics.
Findings: In the case of the CCSS transition in Kentucky, our findings suggest that stu-
dents continued to improve their college readiness, as measured by ACT scores, during the early stages of CCSS implementation. Furthermore, evidence suggests that the positive gains students made during this period accrue to students in both high- and low-poverty schools. However, it is not conclusive that the progress made in student college readiness is necessarily
attributable to the new content standards.
Conclusions: As we seek to improve the education of our children through reforms and in-
novations, policymakers should be mindful of the potential risks of excessive changes.
Transition issues during the early stages of major educational changes sometimes lead to
short-term effects that are not necessarily indicative of the longer-term effects of a program
or intervention. Nevertheless, standards-based reforms are fairly frequent, and each takes
multiple years to be fully implemented, affecting millions of students. Therefore, we encourage
researchers and policymakers to pay more attention to the transitional impact of educational reforms.
This study provides a first look at college readiness in the early years of
the Common Core State Standards (CCSS) implementation. Using lon-
gitudinal administrative data from Kentucky, we follow three cohorts of
students from eighth grade through 11th grade and find that students
exposed to the CCSS—including students in low-poverty schools—had
faster learning gains than similar students who were not exposed to the
standards. Although it is not conclusive whether cross-cohort improve-
ment was entirely attributable to the CCSS implementation, we find that
students gained proficiency in the years immediately before and after the
transition. Additionally, we find that student performance in subjects that
adopted CCSS-aligned curricula exhibited larger, more immediate im-
provement than student performance in subjects that did not.
INTRODUCTION
As of 2016, 42 states and the District of Columbia have adopted the
Common Core State Standards (CCSS or “Common Core”). The Common
Core standards, sponsored by the National Governors Association and the
Council of Chief State School Officers, were developed in 2009 and re-
leased in mid-2010 (NGA/CCSSO, 2010). The CCSS represent a cross-
state effort to adopt a set of “college- and career-ready standards for
kindergarten through 12th grade in English language arts/literacy and
mathematics.”1 The CCSS initiative grew out of concerns that existing
state standards were not adequately preparing students with the knowl-
edge and skills needed to compete globally (Porter, McMaken, Hwang, &
Yang, 2011), necessitating a clearer set of learning expectations that are
consistent across states. The initiative is also thought to offer the potential
benefit of allowing for cross-state collaboration when developing teaching
materials, common assessment systems, as well as tools to support educa-
tors and schools.
Yet the CCSS initiative is not without controversy, and it has become in-
creasingly polarizing.2 Advocates and opponents disagree on many aspects
of the CCSS. Key points of contention include the standards themselves,
the transparency of the development of these standards, their accompany-
ing standardized tests, the appropriateness of student proficiency levels
and their implications on performance gaps between high- and low-pover-
ty students, the financial cost of implementation, the adequacy of imple-
mentation supports, as well as the roles played by federal and corporate
entities in the development and adoption of CCSS.
As with previous standards-based reforms, the implementation of CCSS-
aligned state education standards has been accompanied by curriculum
framework revisions, student assessment redesigns, and school account-
ability and educator evaluation system overhauls (Clune, 1993; Rentner,
2013; Smith & O’Day, 1991). Even if the new standards may improve stu-
dent learning once they are fully implemented, the multitude of changes
at the early implementation stage of the reform might disrupt student
learning in the short run as teachers, schools, and communities accli-
mate to the new expectations and demands (Schmidt & Houang, 2012).
Furthermore, schools and districts with more constrained staffing capacity
and limited financial resources, such as those serving predominantly low-
income students, may face more challenges during the CCSS transition
(Logan, Minca, & Adar, 2012; Schmidt & Houang, 2012). Indeed, in a
survey of deputy superintendents of education in 40 CCSS states, 34 states
reported that finding adequate staff and financial resources to support all
of the necessary CCSS implementation activities is a major (22 states) or
minor (12 states) challenge (Rentner, 2013).
The concern that students and schools may be overwhelmed by the pres-
ence of multiple, concurrent changes to the education system motivated
us to investigate student performance trends during the transition to the
CCSS. The objective of this study is not to evaluate the merits and deficien-
cies of the CCSS per se. Instead, we focus on the transitional impact on
student learning that could arise from two competing hypotheses: On one
hand, students could potentially benefit from the new standards, which
some believe hold the promise of overcoming a number of flaws in previ-
ous standards-based reforms, such as low-quality content standards (Finn,
Petrilli, & Julian, 2006; Hill, 2001), poorly aligned assessments (Polikoff,
Porter, & Smithson, 2011), and misplaced incentives that distract attention
from lower-performing students (Hamilton, Stecher, Marsh, McCombs, &
Robyn, 2007; Taylor, Stecher, O’Day, Naftel, & LeFloch, 2010). On the oth-
er hand, regardless of the design and implementation quality of the CCSS,
student learning may suffer during the transition years when both stu-
dents and teachers need to learn and adapt to the new systems. Students
may also be adversely affected when human and financial resources are
diverted to support the transition. The net transitional impact on student
achievement is theoretically ambiguous, and it is a question that deserves
empirical attention.
Student learning experiences during policy transition years are some-
times dismissed as transitory and unreliable, and they are certainly not
reflective of the efficacy of the reform itself. However, tens of millions3 of
students across the country will complete high school before their schools
fully implement the CCSS. For those students, their experiences under
the incomplete implementation of the CCSS will have a lasting impact
on their future life opportunities. Whether college readiness improved
among high school students affected by the early stages of the CCSS imple-
mentation, and whether students from different backgrounds and types of
high schools were affected differently, are important research questions
that have yet to be addressed.
This paper starts to fill in this gap by using longitudinal administrative
data from Kentucky, the first state to adopt the CCSS. A critical analytic
requirement to answer these questions is to have student achievement
measures that are comparable before and after the implementation of
the CCSS. States that adopted the new standards typically also redesigned
their assessments simultaneously. Therefore, any comparisons of student
achievement before and after the standards transition using state stan-
dardized tests would conflate changes of standards with changes of test-
ing regimes.
As a state that mandates all 11th graders to take the ACT, Kentucky provides us with a rare opportunity to overcome this analytic challenge. As we will discuss in more detail later, the ACT is designed to measure what students need to know to be ready for entry-level college-credit courses (ACT, 2008). The Kentucky Department of Education (KDE) and the Council on Postsecondary Education (CPE) define college readiness as the ability for students to access “credit-bearing coursework without the need for developmental education or supplemental courses” (KDE & CPE, 2010, p. 7). Therefore, the design of the ACT aligns with Kentucky’s operating definition of college readiness, which we adopt for the current study.4 Moreover, because the ACT has been mandatory in Kentucky since 2007, we can measure the proficiency of all students—not just students who have already decided to go to college—before and after the implementation of the Common Core standards. The mandatory nature of the ACT in Kentucky allows us to avoid the self-selection issue that often biases research findings (Clark, Rothstein, & Schanzenbach, 2009).
The remainder of this paper is organized as follows. In the next section,
we describe the transition to the CCSS in Kentucky, followed by a discus-
sion of the theory and evidence of standards-based education reforms. We
then describe the data and measures we use in our analyses and outline
the empirical research design. Results are reported and discussed in the
final two sections of the paper.
COMMON CORE IN KENTUCKY
Kentucky adopted the CCSS in 2010 and started its implementation in the 2011–2012 school year. Before 2011, Kentucky’s education standards were the Kentucky Program of Studies (POS), and the 2006 Core Content for Assessment described the particular skills and concepts that would be assessed in each grade under POS. The POS-aligned Kentucky Core Content Test (KCCT) was a series of state tests designed to measure students’ learning in reading, math, science, social studies, and writing. Senate Bill 1, enacted by the General Assembly in 2009, directed the Kentucky Department of Education (KDE) to revise state content standards and launched Kentucky’s transition toward the CCSS-aligned Kentucky Core Academic Standards (KCAS). Adopted by the Kentucky State Board of Education in June 2010, these new standards were developed jointly by the National Governors Association (NGA) and the Council of Chief State School Officers (CCSSO). Under the KCAS, the ELA and math curriculum frameworks are now aligned with the CCSS, whereas the curriculum frameworks for all other subject areas are carried over from POS.5
Similar to the experiences in many other CCSS states, a plethora of
other changes took place in Kentucky in 2011–2012 along with the implementation of the CCSS. First, starting from the 2011–2012 school year, the Kentucky Performance Rating for Educational Progress (K-PREP) tests replaced the KCCT. Students in Grades 3 through 8 are required to take K-PREP in reading, math, science, social studies, and writing. In addition, students started to take K-PREP end-of-course tests for high-school-level courses including English II, algebra II, biology, and U.S. history. Second, in 2011–2012, Kentucky started field testing major components of its newly designed teacher evaluation system, called the “Kentucky Teacher Professional Growth and Effectiveness System.”6 The
new system evaluated teacher performance based on multiple measures,
including student growth, student surveys, and observations by peers
and evaluators. Finally, a new school accountability model, “Unbridled
Learning: College/Career-Readiness for All,” took effect in the 2011–
2012 school year.7 The new model measures and categorizes school
performance based on student achievement in the five content areas,
student-achievement growth, measures of student-achievement gap
among student subgroups, high school graduation rates, and college-
and career-readiness. Since the U.S. Department of Education granted
Kentucky a No Child Left Behind (NCLB, 2001) waiver in February 2012, Kentucky can use the Unbridled Learning model to report both
state- and federal-level accountability measures.
The breadth of CCSS-related changes is what motivated our concerns
of potential disruption to learning among students who spend some or
most of their high school careers under the CCSS transition. The de-
piction here is also intended to reiterate that it is not feasible, nor our
goal, to disentangle the impact of one set of changes from the impact
of another. Instead, we acknowledge that standards-based educational
reforms often lead to cascading changes within schools and districts, and
that the focus of this study is on the overall impact of the CCSS transition
on student learning.
STANDARDS-BASED EDUCATION REFORMS
Education policymakers have long sought to improve schools and student learning, using student test scores as an outcome of interest (Coleman, 1966).
A core component of state and federal efforts to improve education has
been standards-based education reform. In the 1980s, states implement-
ed minimum standards for student learning, and the 1990s ushered in a
national movement toward raising these minimum standards (Swanson
& Stevenson, 2002). The NCLB of 2001 focused on getting students to
proficiency in math and reading by emphasizing accountability, sanctions,
and awards (NCLB of 2001, sec. 2Aiii). Since then, the 2009 Race to the
Top (RTTT), the 2011 ESEA Flexibility program, the CCSS, and the 2015
Every Student Succeeds Act (ESSA) all encouraged states to set high stan-
dards so that children graduate high school ready for college and careers.
The theory underlying standards-based reforms posits that teaching
and learning will improve by (a) creating high-quality content standards
and clearly articulated learning goals, (b) designing student assessments
aligned to those standards to monitor progress toward achieving the learn-
ing goals, and (c) establishing support and incentive systems to facilitate
and motivate the adoption of the standards (Smith & O’Day, 1991). The
CCSS, the latest example of standards-based reforms, provides a common
set of standards for ELA and mathematics “defin[ing] the rigorous skills
and knowledge . . . that need to be effectively taught and learned for stu-
dents to be ready to succeed academically in credit-bearing, college-entry
courses.”8 Although the CCSS prescribes academic goals, it does not deter-
mine specific curricula for states or districts.
Existing content analysis of the CCSS (Beach, 2011; Cobb & Jackson,
2011; Porter et al., 2011) shows that the CCSS requires a modest increase
in cognitive skills in math, and a larger increase for ELA, when compared
to previous state-level standards. However, differences between exist-
ing standards and the CCSS vary across states, and researchers find that
Kentucky’s previous standards are among the least similar to the CCSS (Schmidt & Houang, 2012). For example, unlike the previous standards, Kentucky’s CCSS-aligned standards mandate that each of Kentucky’s post-
secondary institutions assist in the development of academic standards in
reading and mathematics to ensure that the curricula are aligned between
high school and college (Winters et al., 2009).
WHAT HAVE WE LEARNED ABOUT STANDARDS-BASED REFORMS?
To date, very little empirical research exists on the extent to which the
central goal of the CCSS—improved college- and career-readiness—has
been achieved. While student performance on the National Assessment of
Educational Progress (NAEP) tests improved after CCSS implementation,
the performance gains are small and may not be statistically significant
(Loveless, 2016). Furthermore, the correlation between NAEP scores and
CCSS implementation is by no means causal.
By comparison, there is a large literature on the impact of previous
standards-based reforms on teaching and learning. The most rigorous
studies suggest that reforms of this type, when properly implemented
and under certain circumstances, could improve student learning
and classroom instruction (Carnoy & Loeb, 2002; Dee & Jacob, 2011;
Hamilton et al., 2007; Hanushek & Raymond, 2005; Jacob, 2007; Stecher
et al., 2008; Taylor et al., 2010). However, research also documents evi-
dence of adverse outcomes of standards-based reforms, such as the nar-
rowing of the curriculum (Clarke et al., 2003), strategic manipulation of
the composition of the test-taking student population (Cullen & Reback,
2006; Figlio, 2006; Özek, 2012), diverting attention away from lowest-
performing students to students near the proficiency cut score (Booher-
Jennings, 2005; Hamilton et al., 2007; Stecher et al., 2008; Taylor et al.,
2010), excessive test preparation (Wong, Anagnostopoulos, Rutledge, &
Edwards, 2003), and even outright cheating on the test (Jacob & Levitt,
2003; Sass, Apperson, & Bueno, 2015).
A FOCUS ON THE TRANSITIONAL IMPACT ON STUDENT LEARNING
There are very few empirical studies that explicitly analyze the relation-
ship between standards-based education reforms and student achieve-
ment as the reforms are being implemented. Educational change takes
time and schools may face performance setbacks in the early years
(Fullan, 2001). Transition issues during the early stages of major educa-
tional changes sometimes lead to short-term effects that are not neces-
sarily indicative of the longer-term effects of a program or intervention
(Kane & Staiger, 2002). For example, studies on the implementation of
school-level curriculum interventions, comprehensive school reforms,
teacher evaluation systems, and accountability systems (Borman, Hewes,
Overman, & Brown, 2003; Borman et al., 2007; Dee & Wyckoff, 2013;
Ladd, 2007) invariably find that changes in student achievement dur-
ing the early implementation stages are not always consistent with later
results. Understandably, researchers hesitate to make policy decisions
due to small sample sizes, minor year-to-year fluctuations, or similar is-
sues related to the early years of implementation (Kane & Staiger, 2002; Kober & Rentner, 2011).
However, this should not lead researchers and policymakers to dismiss
the importance of the transitional impact of education reforms. Standards-
based reforms are fairly frequent, and each takes multiple years to be fully
implemented. As discussed earlier, most states have implemented major
educational reforms in response to the 2001 NCLB, the 2009 RTTT, and
the 2011 ESEA Flexibility program during the last decade, and they are
considering changes again after the passage of the 2015 ESSA. Changes
of the curriculum and instructional materials and the need to realign per-
formance expectations with the new standards are a major source of frus-
tration to teachers (Desimone, 2002). Educator commitment to the new
standards may also be reduced if teachers and education leaders perceive
such reforms as transitory (Ross et al., 1997). How educators react to stan-
dards transitions, in turn, will affect the learning experiences of tens of
millions of students, which will likely have long-term effects on students
(Rouse, Hannaway, Goldhaber, & Figlio, 2013).
For these reasons, this paper focuses on early impacts of statewide
standards implementation in Kentucky and its differential effects across
schools and student subgroups, while recognizing that such early results
should not be the last word on the effectiveness of the CCSS.
RESOURCE CONSTRAINTS AND DIFFERENTIAL IMPACT
An area of constant debate is whether the implementation of CCSS may ex-
acerbate the persisting achievement gaps between disadvantaged students
and their more affluent peers (for example, Reardon, 2011). In particu-
lar, implementing large-scale education standards reforms like the CCSS
is likely to impose additional challenges to resource-constrained schools
and students. Local administrators, teachers, principals, and other staff
working in high-poverty districts and schools feel generally less prepared
to implement the CCSS than their counterparts in low-poverty districts
and schools (Brown & Clift, 2010; Finnan, 2014). In addition, compared
to low-poverty schools, schools serving more disadvantaged students often
have fewer professional development resources and academic supports
for students (Regional Equity Assistance Centers, 2013).
On the other hand, standards may reduce inequality if they help schools
identify struggling students while also improving instruction for disadvan-
taged students (Gamoran, 2013). In their meta-analysis of early reform im-
plementation, Borman et al. (2003) find that high-poverty schools experi-
ence similar benefits to implementation as low-poverty schools. However,
schools’ successful implementation of new standards is critical to reducing
achievement gaps (Foorman, Kalinowski, & Sexton, 2007). Furthermore,
the early years of standards implementation may cause the achievement
gap to widen, even if it reduces disparities in the long run.
With diverse student needs, accountability pressure, and resource con-
straints, the quality, scope, and strategy of standards implementation be-
tween high- and low-poverty schools are likely to be different, even though
standards may reduce the achievement gap over time. While we cannot
test the mechanisms of differential achievement gaps by school and stu-
dent poverty status, we can examine the extent to which school and stu-
dent poverty mediates the relationship between educational standards
transition and student achievement.
DATA AND MEASURES
The data provided to us by the KDE include detailed records for individu-
al students, school personnel, and student course-taking from school years
2008–2009 through 2012–2013, covering three years before the CCSS and
two years post-CCSS. Teachers and students are assigned unique identi-
fiers that can be used to track individuals over time; students and teachers
also can be linked to specific classrooms.
Utilizing available student-level data, we can control for background
characteristics (e.g., age, gender, race/ethnicity, free or reduced-price
lunch [FRPL] eligibility, special education status, and English language
learner [ELL] designation), enrollment, and state assessment scores.
These state assessment scores are from pre-CCSS exams (i.e., the KCCT) that Kentucky students took at the end of Grades 3 through 8 in reading,
mathematics, social studies, and writing.
Beginning in the 2007–2008 school year, all students in Grades 10 and
11 take the PLAN and the ACT, respectively. Both tests are provided by
ACT, Inc. The PLAN is administered every September to all incoming
10th-grade students. The ACT, on the other hand, is administered near
the end of Grade 11 every March. For both the ACT and the PLAN, our
data include composite scores as well as four sub-scores (English, math-
ematics, reading, and science). The PLAN scores can be used for two pur-
poses: to augment the kCCT to control for student baseline academic
achievement, and to facilitate sensitivity analyses that are discussed in the
research design section below.
We aggregate individual FRPL-eligibility to the school level to examine
whether students at high-FRPL and low-FRPL schools have different ex-
periences to new curriculum implementation. School-level poverty con-
text is measured by the percentage of FRPL-eligible students in a school.
For students who attended multiple schools between Grades 9 and 11, we
use the average FRPL percentage across schools. We define schools in the
top one fifth of the school poverty distribution in Kentucky (> 55 percent FRPL) as high poverty, and those in the bottom fifth (≤ 35 percent) as low poverty. (See Figure 1 for the distribution of school poverty among Kentucky public high schools.)
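To make the school-poverty grouping concrete, the following is a minimal Python/pandas sketch of how the measure described above could be constructed. It is illustrative only: the file name and column names (student_id, school_id, frpl) are assumptions rather than the authors' actual variables; only the 55 percent and 35 percent cutoffs come from the text.

import pandas as pd

# Hypothetical student-level records: one row per student-school enrollment
# in grades 9-11, with a 0/1 FRPL-eligibility flag. Names are illustrative.
students = pd.read_csv("ky_students.csv")  # assumed columns: student_id, school_id, frpl

# School poverty context: percentage of FRPL-eligible students in each school.
school_pov = (students.groupby("school_id")["frpl"].mean() * 100)
school_pov = school_pov.rename("pct_frpl").reset_index()

# Students who attended multiple schools get the average FRPL percentage
# across the schools they attended, as described in the text.
students = students.merge(school_pov, on="school_id")
student_pov = students.groupby("student_id")["pct_frpl"].mean()

# Top fifth of the school poverty distribution (> 55% FRPL) = high poverty;
# bottom fifth (<= 35% FRPL) = low poverty.
poverty_group = pd.cut(student_pov,
                       bins=[-float("inf"), 35, 55, float("inf")],
                       labels=["low_poverty", "middle", "high_poverty"])
print(poverty_group.value_counts())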
MEASURING COLLEGE READINESS
As the proportion of students planning to attend college has increased,
research has sought to develop an understanding of college readiness
(Conley, 2007; Porter & Polikoff, 2011; Roderick, Nagaoka, & Coca, 2009).
Ultimately, college readiness measures students’ ability to succeed in col-
lege (Conley, 2007), and existing literature urges a multidimensional view
of college readiness that includes content knowledge and basic skills, core
academic skills, non-cognitive skills and norms of performance, and “college knowledge” (Conley, 2007, 2010; Roderick et al., 2009). However, such
a comprehensive measure is difficult to quantify.
As a result, most colleges continue to rely heavily on standardized
achievement tests, such as the ACT, to measure “cognitive ability, basic
skills, content knowledge, and common academic skills” (Roderick et
al., 2009, p. 185). The use of tests like the ACT in college admission has
relatively strong empirical support. The development of the ACT relies
on college faculty’s input (Zwick, 2004), and ACT scores are found to
predict student grades in the first year of college (Allen, 2013; Allen
& Sconing, 2005), as well as students’ likelihood to persist in college
(Bettinger, Evans, & Pope, 2011). In fact, for some institutions, ACT
scores may be the “best single predictor of first-year college course per-
formance” (Maruyama, 2012).
More importantly, the design of the ACT is aligned with the operating
definition of college readiness in Kentucky. As discussed in the introduction, Kentucky considers a student to be college ready if the student is able to take credit-bearing college courses without remediation (KDE & CPE,
2010), which is exactly what the ACT is designed to measure (ACT, Inc.,
2008). This is supported by empirical evidence that student ACT scores
are highly correlated with remedial course-taking (Bettinger & Long,
2009; Howell, 2011; Martorell & McFarlin, Jr., 2011; Scott-Clayton, Crosta,
& Belfield, 2014). For this reason, ACT test scores are a measure of col-
lege readiness that many states care about, leading 15 states (including
Kentucky) to adopt the ACT as a federal accountability measure of high
school performance (Gewertz, 2016).
Figure 1. Distribution of the percentage of students eligible for free/
reduced-price lunch (FRPL) in school, high schools, 2009–2013
Finally, for a study like ours, it is critical to have a college-ready measure
for all students. Frequently, measures of college readiness, including ACT
test scores, are available only among a select group of students who plan to
apply to college (Clark et al., 2009; Goodman, 2013; Roderick et al., 2009;
Steelman & Powell, 1996). In such cases, any changes in the average ACT
performance could reflect real improvement in college readiness, chang-
es in who elects to take the ACT, or both. But the ACT is mandatory for all
students in Kentucky, so we can observe changes in student college readi-
ness that represent the entire high school student population in the state
without selection concerns. In fact, this is why Porter and Polikoff (2011),
in their review of possible measures of college readiness, recommend the
ACT as a good option for measuring college-level academic readiness.
In summary, we caution that the ACT is by no means a comprehen-
sive measure of college readiness. As pointed out in Maruyama (2012),
the use of ACT scores alone—particularly when threshold scores are cre-
ated to dichotomize students into being ready for college or not—leads
to imprecise prediction of student success in college. Maruyama (2012)
and other researchers (e.g., Roderick et al., 2009) recommend the use
of multiple indicators, such as high school course taking and grades, to
measure students’ ability to succeed in college. However, the ACT’s close alignment with Kentucky’s operating definition of college readiness, its importance in college admissions, its correlation with college achievement, its relevance to state education accountability, and its mandatory nature in Kentucky make it the best, if imperfect, outcome measure for
the current study.
RESEARCH DESIGN
Our analyses focus on three cohorts of eighth-grade students and follow
them until the end of the 11th grade (Exhibit 1). For all three cohorts,
student academic preparation for high school is measured by the KCCT at
the end of eighth grade. At the end of 11th grade, the ACT measures high
school students’ general educational development and their capability to
complete college-level work. Neither the KCCT nor the ACT changed dur-
ing the study period. Therefore, student performance at both the starting
and the end points is measured with the same test instruments for all three
cohorts, and is not affected by changing test familiarity.
Each of the three cohorts experienced different exposure to CCSS tran-
sition. For this study, “exposure” refers to the amount of time a student
spent in school while the state was implementing CCSS-aligned standards
and making related changes to student assessments, the state school ac-
countability system, and the teacher evaluation system. As Exhibit 1 shows,
the first cohort of students enrolled in the eighth grade in 2007–2008
and had no exposure to CCSS transition before sitting for the ACT in
2010–2011. By comparison, the second and third cohorts of eighth-grade
students had spent one and two years, respectively, of their high school
careers under CCSS transition before taking the ACT. We take advantage
of this cross-cohort variation in student exposure to CCSS transition and
address the following question: For students starting high school at similar
performance levels and with similar background characteristics, did more
“exposure” to the transitional period of standards reform predict higher
ACT scores in Grade 11?
To answer this question, we first estimate a cross-cohort model in the following form:

ACT_i = α + KCCT8_i′β + δ_1 Cohort2012_i + δ_2 Cohort2013_i + X_i′γ + ε_i    (1)

Here, student i’s ACT composite score ACT_i is a function of his or her eighth-grade test scores KCCT8_i, student cohort, and background characteristics X_i. The eighth-grade score vector includes KCCT scores in all four tested subject areas: English, mathematics, social studies, and writing. Student background characteristics include FRPL eligibility, race/ethnicity, ELL status, and special education status. To capture cohort-to-cohort variation in high school readiness, all ACT and KCCT scores are standardized by subject across all years rather than within each year. The coefficients of interest are δ_1 and δ_2, which represent the ACT performance differentials between students affected by CCSS implementation and unaffected students who are otherwise comparable.9
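Read as a regression, model (1) is an ordinary least squares specification with cohort dummies. The sketch below shows one way it could be estimated in Python with statsmodels; it is not the authors' code, and the file name, variable names, and cohort coding (2011 = reference ACT cohort) are assumptions for exposition.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per student. Column names are illustrative.
df = pd.read_csv("ky_analysis_file.csv")

# Standardize ACT and KCCT scores by subject across all cohorts (pooled),
# rather than within each year, as described in the text.
for col in ["act_comp", "kcct_math", "kcct_read", "kcct_soc", "kcct_write"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# Model (1): ACT composite on cohort dummies (the 2011 ACT cohort is the
# reference), eighth-grade KCCT scores, and background characteristics.
m1 = smf.ols(
    "act_comp ~ C(cohort, Treatment(reference=2011))"
    " + kcct_math + kcct_read + kcct_soc + kcct_write"
    " + frpl + black + hispanic + other_race + male + sped + ell",
    data=df,
).fit()
print(m1.summary())

In this sketch, the coefficients on the 2012 and 2013 cohort dummies play the role of δ_1 and δ_2 in equation (1).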
Exhibit 1. Cross-Cohort Comparison of KCAS Exposure: 2007–2008 Through 2012–2013

Eighth-grade cohort   2007-08   2008-09   2009-10   2010-11   2011-12   2012-13
Cohort 1              KCCT8                         ACT
Cohort 2                        KCCT8                         ACT
Cohort 3                                  KCCT8                         ACT
State standards       Program of Studies (POS) through 2010-11; Kentucky Core Academic Standards (KCAS) from 2011-12

In order to investigate whether the implementation of CCSS may have differential effects on students and schools facing varying degrees of
financial constraints, we estimate the cross-cohort model (equation 1) for
students in low and high school-poverty contexts separately. Within each
school type, we further split students into those who are eligible for FRPL
and those who are not, in order to capture the interplay between indi-
vidual- and school-level poverty conditions.
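As a hedged illustration of this subgroup analysis (continuing the assumed variable names from the earlier sketch, plus an assumed school_poverty label), the same specification can simply be re-estimated on each of the four subsamples; FRPL eligibility drops out of the controls because it is constant within each subsample.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file with the columns used earlier plus a
# school-poverty group label; all names are illustrative.
df = pd.read_csv("ky_analysis_file.csv")

formula = (
    "act_comp ~ C(cohort, Treatment(reference=2011))"
    " + kcct_math + kcct_read + kcct_soc + kcct_write"
    " + black + hispanic + other_race + male + sped + ell"
)

# Re-estimate model (1) separately for FRPL and non-FRPL students within
# high-poverty and low-poverty schools (four subsamples in total).
for school_group in ["high_poverty", "low_poverty"]:
    for frpl_flag in (1, 0):
        sub = df[(df["school_poverty"] == school_group) & (df["frpl"] == frpl_flag)]
        fit = smf.ols(formula, data=sub).fit()
        label = f"{school_group}, {'FRPL' if frpl_flag else 'non-FRPL'}"
        print(label, fit.params.filter(like="cohort").round(3).to_dict())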
WHAT MAY EXPLAIN ACT PERFORMANCE TRENDS
To gauge the extent to which the implementation of CCSS is directly re-
sponsible for any estimated cross-cohort differences in student ACT per-
formance, we conduct two additional, more nuanced analyses. First, when
kentucky implemented the new state standards, it decided to adopt a re-
vised, CCSS-aligned curriculum framework for English and mathematics
(“targeted subjects”), but carried over the reading and science (“untarget-
ed subjects”) curricula from the old regime. This allows us to implement
a difference-in-differences type of analysis by comparing cross-cohort
changes in ACT scores on targeted subjects with cross-cohort changes on
untargeted subjects. The ACT performance trends on untargeted subjects
serve as our “counterfactuals,” representing what student ACT perfor-
mance might have been across all subject areas in the absence of content
standards changes. If CCSS-aligned curriculum framework changes did
make a difference, we would expect a stronger association between CCSS
exposure and student ACT performance on targeted subjects than on un-
targeted subjects. To test this hypothesis, we estimate the following cross-
subject, cross-cohort model:

ACT_is = α + δ_1 Cohort2012_i + δ_2 Cohort2013_i + λ T_s + θ_1 (T_s × Cohort2012_i) + θ_2 (T_s × Cohort2013_i) + KCCT8_i′β + X_i′γ + ε_is    (2)

Instead of using the ACT composite score, this model uses the ACT subject-specific score ACT_is (student i’s score on subject s, which includes English, math, reading, and science) as the dependent variable. Compared to model (1), model (2) adds an indicator variable T_s for targeted subjects and its interaction with the cohort dummy variables. Coefficients δ_1 and δ_2 now represent cross-cohort differences in ACT performance on untargeted subjects (reading and science). The coefficients of interest, θ_1 and θ_2, estimate the extent to which cross-cohort progress in student ACT performance on targeted subjects (English and math) differs from that on untargeted subjects. Because the unit of analysis is student-by-subject, the total sample size is inflated by a factor of four. Therefore, we need to cluster standard error estimates at the student level to take into account the cross-subject correlation of scores within individual students.
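A minimal sketch of what estimating model (2) with student-clustered standard errors could look like follows; the long (student-by-subject) data layout and the variable names are assumptions for illustration, not the authors' actual setup.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format file: one row per student-by-subject, holding the
# standardized ACT subject score, a targeted flag (1 = English or math),
# the cohort, KCCT scores, and background controls. Names are illustrative.
long_df = pd.read_csv("ky_student_by_subject.csv")

# Model (2): cohort dummies, the targeted-subject indicator, and their
# interactions; the interaction terms are the difference-in-differences
# estimates (theta_1 and theta_2). Standard errors are clustered at the
# student level because each student contributes four subject scores.
m2 = smf.ols(
    "act_subject ~ C(cohort, Treatment(reference=2011)) * targeted"
    " + kcct_math + kcct_read + kcct_soc + kcct_write"
    " + frpl + black + hispanic + other_race + male + sped + ell",
    data=long_df,
).fit(cov_type="cluster", cov_kwds={"groups": long_df["student_id"]})
print(m2.summary())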
One complication in our cross-subject, cross-cohort design is that the
CCSS for ELA also aims to raise the literacy standards in history/social
studies, science, and technical subjects. The goal is to help students achieve
the literacy skills and understandings required for college readiness in
multiple disciplines.10 In other words, “untargeted” subjects, at least in
theory, are not completely untouched by the curriculum reform. Insofar
as this design feature of the CCSS was implemented authentically, our dif-
ference-in-differences coefficients (θ_1 and θ_2) estimate the lower-bound
effect of curriculum reform. However, these ELA standards are not meant
to replace content standards in those subject areas, but rather to supple-
ment them. Therefore, even if the revised English curriculum framework
benefits student performance in other subject areas, the benefits to those
subject areas are likely to be less immediate and pronounced than what we
might expect for directly targeted subject areas.
We conduct a second analysis out of the concern that model (1) takes
into account only cross-cohort performance differentials at a single point
in time. However, between the eighth and the 11th grade, students from
the three cohorts could have followed different performance trajectories,
either due to unobserved student characteristics or due to education in-
terventions or programs implemented right before the CCSS transition.
In other words, cross-cohort improvement in student performance may
have started before the implementation of the CCSS, and therefore it
should not be attributed to CCSS transition. We test this possibility by cre-
ating a pseudo year of change and conducting a falsification test. Because
the CCSS was not actually implemented in the pseudo year, we should
not detect any cross-cohort differences if the implementation of the CCSS
was directly responsible for those differences. Implementing this strategy,
however, requires the ACT (or similar tests aligned with the ACT) to be
administered to the same students repeatedly. The Kentucky assessment system provides us with a rare opportunity to conduct this falsification test, as it requires all 10th-grade students to take the PLAN tests. The PLAN,
often considered the “Pre-ACT” assessment, helps students understand
their college readiness midway through high school and plan accordingly
for their remaining high school years. The PLAN scores are highly predic-
tive of student performance on the ACT. In our sample, the correlations
between the two test scores range from 0.70 to 0.86.
Because the PLAN is administered at the beginning of the 10th grade
every September, none of the three cohorts under investigation had had
any meaningful exposure to CCSS implementation by the time they took
the PLAN. The timing of the PLAN administration allows us to examine
whether students from the three cohorts, otherwise comparable in terms
of background characteristics and performance at the start of high school,
had already been on different learning trajectories even before the CCSS
transition. This analysis is carried out by re-estimating model (1) after re-
placing the ACT composite scores with the PLAN composite scores, stan-
dardized across cohorts.
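The falsification test amounts to rerunning the cross-cohort specification with the pre-exposure PLAN composite as the outcome. A minimal sketch under the same naming assumptions as the earlier illustrations:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level file as in the earlier sketches; names illustrative.
df = pd.read_csv("ky_analysis_file.csv")
df["plan_std"] = (df["plan_comp"] - df["plan_comp"].mean()) / df["plan_comp"].std()

# Falsification test: the 10th-grade PLAN is taken each September, before any
# meaningful CCSS exposure, so if the CCSS transition (rather than pre-existing
# cohort trends) drives the ACT gains, the cohort coefficients here should be
# close to zero.
falsification = smf.ols(
    "plan_std ~ C(cohort, Treatment(reference=2011))"
    " + kcct_math + kcct_read + kcct_soc + kcct_write"
    " + frpl + black + hispanic + other_race + male + sped + ell",
    data=df,
).fit()
print(falsification.summary())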
FINDINGS
COHORT CHARACTERISTICS
Descriptive statistics in Table 1 show that students from Cohorts 2 and 3
outperformed Cohort 1 students on ACT composite score by 0.18 and
0.25 points, respectively. These differences are equivalent to about 4% to
5% of a standard deviation (one standard deviation is 4.84 points). To put
the magnitude of these differences into context, Lipsey and colleagues
(2012) report that the annual achievement gain from Grade 10 to Grade
11 is around 0.15 standard deviations in nationally normed test scores.
Therefore, the cross-cohort gains in ACT performance are roughly equiv-
alent to three months of additional learning.
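For readers who want the arithmetic behind this comparison, a back-of-the-envelope conversion using only the figures reported above (0.18 and 0.25 ACT points, a 4.84-point standard deviation, and the 0.15 SD annual gain from Lipsey et al.) runs as follows; the nine-month school year used in the last step is an assumption for illustration.

\[
\frac{0.18}{4.84} \approx 0.037\ \mathrm{SD}, \qquad \frac{0.25}{4.84} \approx 0.052\ \mathrm{SD}
\]
\[
\frac{0.037}{0.15} \approx 0.25\ \text{school years} \approx 2.2\ \text{months}, \qquad
\frac{0.052}{0.15} \approx 0.34\ \text{school years} \approx 3.1\ \text{months}
\]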
It is premature, however, to jump to strong conclusions, as the three
cohorts of eighth-grade students also differ in other ways. First, students
from the latter two cohorts appear to be more disadvantaged than Cohort
1 students, with higher percentages of students eligible for FRPL (53%
and 56% vs. 48%) and slightly higher percentages of minority students
(13% vs. 12%). On the other hand, compared with Cohort 1 students who
took the ACT prior to the adoption of the CCSS, students in the second
and third cohorts started high school with generally higher achievement
levels. On eighth-grade math, for instance, students from the latter two
cohorts scored 6% of a standard deviation higher than students from the
first cohort. On both eighth-grade reading and writing, Cohort 3 students
outperformed Cohort 1 students by an even larger margin of about 9%
of a standard deviation. Although the eighth-grade performance gap be-
tween students in Cohort 2 and Cohort 1 is smaller on these subjects,
those differences remain statistically significant.
CROSS-COHORT REGRESSIONS
Table 2 reports cross-cohort changes in student ACT performance for all
students and for student subgroups categorized by individual and school
poverty circumstances. Before we turn to the key findings, a few ancillary
results are worth noting. First, eighth-grade performance in all subject ar-
eas is a strong predictor of students’ ACT performance, with eighth-grade
mathematics test scores being the strongest predictor. Second, students
who are black, Hispanic, or who have special education needs underperformed
Table 1. Student Performance and Background Characteristics, by
Cohort
All 2011 Cohort 2012 Cohort 2013 Cohort
Mean SD Mean SD Mean SD Mean SD
Student performance
Eighth-grade
KCCT
Mathematics 0.14 0.95 0.10 0.96 0.16** 0.94 0.16** 0.94
Reading 0.14 0.95 0.11 0.94 0.12** 0.96 0.20** 0.94
Social studies 0.14 0.95 0.16 0.94 0.13** 0.96 0.13** 0.95
Writing 0.14 0.96 0.10 0.95 0.12** 0.97 0.19** 0.95
10th-grade PLAN
Composite 17.31 3.63 17.21 3.60 17.29** 3.68 17.42** 3.61
Mathematics 16.39 4.28 16.41 4.43 16.24** 4.16 16.55** 4.25
English 17.26 4.29 17.10 4.36 17.40** 4.37 17.27** 4.11
Reading 16.92 4.51 16.76 4.45 16.96** 4.53 17.03** 4.53
Science 18.14 3.53 18.06 3.33 18.02 3.62 18.35** 3.60
11th-grade ACT
Composite 19.23 4.84 19.08 4.81 19.26** 4.91 19.33** 4.79
Mathematics 18.57 6.24 18.36 6.22 18.76** 6.30 18.56** 6.19
English 18.95 4.50 18.79 4.51 19.03** 4.54 19.02** 4.45
Reading 19.41 5.83 19.32 5.66 19.34 5.91 19.58** 5.91
Science 19.45 4.81 19.31 4.86 19.40** 4.93 19.64** 4.62
Student background
characteristics
(percent)
Black 9.32 29.07 9.18 28.87 9.56 29.41 9.19 28.89
Hispanic 2.31 15.02 2.02 14.07 2.28** 14.93 2.63** 15.99
Other minority 1.31 11.38 1.02 10.03 1.30** 11.31 1.62** 12.63
Male 50.01 50.00 49.95 50.00 49.82 50.00 50.27 50.00
Special education 2.41 15.33 0.01 1.13 0.12** 3.41 7.29** 26.00
ELL 0.38 6.16 0.38 6.13 0.39 6.21 0.38 6.14
FRPL-eligible 52.33 49.95 47.71 49.95 53.06** 49.91 56.01** 49.64
Observations 100,212 31,595 36,139 32,478
Note. ** denotes the statistic is significantly different from Cohort 1 at p < 0.05.
their peers on average (Table 2, column 1). However, ELL students per-
formed better than their peers, mostly driven by the strong performance
of ELL students who are not eligible for FRPL (Table 2, columns 3 and 5).
Results presented in Table 2 suggest that exposure to CCSS transition
is associated with higher ACT composite scores (column 1). Specifically,
compared to Cohort 1 students with similar starting academic proficiency
and background characteristics, Cohort 2 students scored 3% of a stan-
dard deviation higher at the end of the first year of CCSS implementation.
After two years of schooling under the CCSS, Cohort 3 students not only
outscored Cohort 1 students by 4% of a standard deviation, but also significantly outperformed their Cohort 2 peers.
Table 2. Cross-Cohort Comparisons of ACT Composite Scores, by School
Poverty and Student FRPL Eligibility
(Standard errors in parentheses)
(1) (2) (3) (4) (5)
VARIABLES All
High-Poverty Schools Low-Poverty Schools
FRPL
students
Non-FRPL
students
FRPL
students
Non-FRPL
students
Cohort 2012 0.03*** 0.02 0.04 0.03** 0.03***
(0.00) (0.01) (0.03) (0.01) (0.01)
Cohort 2013 0.04*** 0.04*** 0.04 0.03** 0.02**
(0.00) (0.01) (0.03) (0.01) (0.01)
Eighth-grade KCCT
scores
Mathematics 0.40*** 0.27*** 0.39*** 0.42*** 0.56***
(0.00) (0.01) (0.02) (0.01) (0.01)
Reading 0.15*** 0.17*** 0.13*** 0.12*** 0.11***
(0.00) (0.01) (0.02) (0.01) (0.01)
Social studies 0.23*** 0.17*** 0.26*** 0.24*** 0.27***
(0.00) (0.01) (0.02) (0.01) (0.01)
Writing 0.12*** 0.07*** 0.11*** 0.09*** 0.11***
(0.00) (0.01) (0.02) (0.01) (0.01)
Background
characteristics
Black –0.03*** –0.08*** –0.14*** –0.06*** –0.08***
(0.01) (0.01) (0.03) (0.02) (0.02)
Hispanic –0.08*** –0.08*** 0.07 –0.13*** –0.07*
(0.01) (0.03) (0.09) (0.03) (0.04)
Other race 0.01 0.00 –0.31** 0.04 –0.03
(0.02) (0.05) (0.15) (0.04) (0.04)
(1) (2) (3) (4) (5)
VARIABLES All
High-Poverty Schools Low-Poverty Schools
FRPL
students
Non-FRPL
students
FRPL
students
Non-FRPL
students
Male 0.03*** 0.00 0.09*** 0.01 0.07***
(0.00) (0.01) (0.02) (0.01) (0.01)
Special
education
–0.14*** –0.15*** 0.09 –0.07** 0.05
(0.01) (0.03) (0.09) (0.03) (0.04)
ELL 0.16*** 0.03 0.40 0.01 0.28
(0.03) (0.04) (0.33) (0.07) (0.23)
FRPL-eligible –0.22***
(0.00)
Constant –0.04*** –0.30*** –0.16*** –0.16*** –0.05***
(0.00) (0.01) (0.02) (0.01) (0.01)
Observations 100,212 10,381 2,814 10,039 20,679
R-squared 0.64 0.53 0.64 0.61 0.64
Note. The reference cohort took the ACT in the 2010–2011 school year. The refer-
ence racial group is White.
*** p < 0.01. ** p < 0.05. * p < 0.1.
In columns 2 through 5 of Table 2, we explore whether there appears to
be heterogeneity in the association between exposure to the CCSS transi-
tion and ACT performance across student- and school-poverty subgroups.
There is some evidence of this heterogeneity. Among students in low-pov-
erty schools, students in both Cohorts 2 and 3 outscored Cohort 1. In
other words, all students in low-poverty schools, regardless of individual
FRPL eligibility, improved their ACT performance after a single year of
exposure to CCSS implementation. By comparison, among students in
high-poverty schools, particularly those eligible for FRPL, only Cohort
3 students outperformed their Cohort 1 counterparts, suggesting that it
took longer exposure to the CCSS for students in high-poverty schools to
demonstrate significant progress in ACT performance.
These findings raise the concern that students in high-poverty schools
may have lost ground to students in low-poverty schools in terms of per-
formance growth between the eighth and the 11th grade. One possible
reason, as discussed earlier, is that high-poverty schools are generally
perceived as less prepared to provide teachers and students with the re-
sources and support required by the standards transition. And oppo-
nents of the CCSS often cite the new standards as a potential distraction
to ongoing efforts in narrowing the student performance gap between
high- and low-poverty students (Rotberg, 2014). However, we cannot
pinpoint when such divergence in growth began to emerge. That is, we
are uncertain whether students in high-poverty schools started to fall be-
hind their counterparts in low-poverty schools before or after the imple-
mentation of the CCSS.
CROSS-SUBJECT CROSS-COHORT ANALYSIS
Next we use the ACT subject area scores to estimate a difference-in-
differences type model. These models use cross-cohort differences in
student ACT performance on untargeted subjects—subjects that did
not receive curriculum framework overhaul—as the counterfactual,
representing how cross-cohort patterns in ACT performance might
have looked in the absence of curriculum alignment with the CCSS. If
CCSS-aligned content standards are indeed superior to Kentucky's last-
generation standards, as claimed by advocates of the CCSS (Carmichael,
Martino, Porter-Magee, & Wilson, 2010), we should observe more pro-
nounced cross-cohort improvement in ACT performance on targeted
subjects that now have adopted CCSS-aligned curriculum frameworks.
This hypothesis is supported by comparisons between Cohort 1 and 2
students (Table 3). We detect no statistically significant improvement
in ACT performance on untargeted subjects (reading and science).
The coefficient on “Untargeted subjects, Cohort 2012” is 0.00. By com-
parison, ACT performance on targeted subjects (math and English) im-
proved after a single year of CCSS implementation, significantly outpac-
ing cross-cohort student-performance trajectory on untargeted subjects
by 5% of a standard deviation (the coefficient on “Targeted subjects,
Cohort 2012” is 0.05). Importantly, Cohort 2 students in both high- and
low-poverty schools improved significantly on targeted subjects relative
to untargeted subjects. The lack of progress in overall ACT performance
from Cohort 1 to Cohort 2 in high-poverty schools reported in Table 2
seems to be due to the deteriorating (although statistically insignificant)
performance on untargeted subjects, negating the gains students made
on targeted subjects.
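The following sketch illustrates how this kind of cross-subject, cross-cohort comparison could be set up: ACT subject scores stacked in long form, cohort dummies interacted with an indicator for CCSS-targeted subjects, and standard errors clustered at the student level. The variable names and toy data are hypothetical; this is an assumed setup rather than the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_students = 3000
students = pd.DataFrame({
    "student_id": np.arange(n_students),
    "cohort": rng.choice(["2011", "2012", "2013"], n_students),
    "kcct_math_z": rng.standard_normal(n_students),
})

# One row per student-subject; English and math are the CCSS-targeted subjects.
subjects = pd.DataFrame({
    "subject": ["english", "math", "reading", "science"],
    "targeted": [1, 1, 0, 0],
})
long_df = students.merge(subjects, how="cross")
long_df["act_subject_z"] = 0.4 * long_df.kcct_math_z + 0.8 * rng.standard_normal(len(long_df))

# targeted * cohort expands to the main effects plus the interactions; the
# interaction terms play the role of "Targeted subjects, Cohort 2012/2013."
formula = "act_subject_z ~ targeted * C(cohort, Treatment('2011')) + kcct_math_z"
fit = smf.ols(formula, data=long_df).fit(
    cov_type="cluster", cov_kwds={"groups": long_df["student_id"]}
)
print(fit.params.filter(like="targeted:").round(3))
```

Clustering at the student level reflects that each student contributes several subject scores, so errors are not independent across that student's rows.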
Table 3. Cross-Subject Cross-Cohort Comparisons of ACT Subject Scores, by School Poverty and Student FRPL Eligibility (robust standard errors clustered at the student level in parentheses)
Columns: (1) all students; (2) high-poverty schools, FRPL students; (3) high-poverty schools, non-FRPL students; (4) low-poverty schools, FRPL students; (5) low-poverty schools, non-FRPL students.
Untargeted subjects, Cohort 2012: –0.00 (0.00) | –0.02 (0.01) | 0.01 (0.03) | –0.01 (0.01) | –0.01 (0.01)
Untargeted subjects, Cohort 2013: 0.04*** (0.00) | 0.04*** (0.01) | 0.04 (0.03) | 0.04** (0.02) | 0.02* (0.01)
Targeted subjects, Cohort 2012: 0.05*** (0.00) | 0.06*** (0.01) | 0.06*** (0.02) | 0.06*** (0.01) | 0.07*** (0.01)
Targeted subjects, Cohort 2013: –0.02*** (0.00) | –0.01 (0.01) | –0.00 (0.02) | –0.01 (0.01) | –0.00 (0.01)
Eighth-grade KCCT scores
Mathematics: 0.38*** (0.00) | 0.25*** (0.01) | 0.38*** (0.02) | 0.40*** (0.01) | 0.53*** (0.01)
Reading: 0.13*** (0.00) | 0.15*** (0.01) | 0.11*** (0.02) | 0.10*** (0.01) | 0.09*** (0.01)
Social studies: 0.20*** (0.00) | 0.14*** (0.01) | 0.23*** (0.02) | 0.21*** (0.01) | 0.23*** (0.01)
Writing: 0.10*** (0.00) | 0.06*** (0.01) | 0.09*** (0.02) | 0.08*** (0.01) | 0.09*** (0.01)
Background characteristics
Black: –0.02*** (0.01) | –0.07*** (0.01) | –0.12*** (0.03) | –0.05*** (0.01) | –0.07*** (0.02)
Hispanic: –0.07*** (0.01) | –0.06*** (0.02) | 0.07 (0.08) | –0.11*** (0.02) | –0.06* (0.03)
Other race: 0.01 (0.01) | 0.01 (0.04) | –0.27** (0.13) | 0.05 (0.03) | –0.03 (0.04)
Male: 0.04*** (0.00) | 0.01 (0.01) | 0.09*** (0.02) | 0.02* (0.01) | 0.08*** (0.01)
Special education: –0.12*** (0.01) | –0.14*** (0.03) | 0.09 (0.10) | –0.06* (0.03) | 0.04 (0.04)
ELL: 0.15*** (0.03) | 0.03 (0.03) | 0.32 (0.38) | 0.01 (0.06) | 0.26 (0.17)
FRPL-eligible (column 1 only): –0.20*** (0.00)
Targeted subjects: –0.01*** (0.00) | –0.05*** (0.01) | –0.02 (0.02) | –0.05*** (0.01) | 0.03*** (0.01)
Constant: –0.04*** (0.00) | –0.26*** (0.01) | –0.14*** (0.02) | –0.12*** (0.01) | –0.07*** (0.01)
Observations: 401,099 | 41,621 | 11,270 | 40,185 | 82,758
R-squared: 0.52 | 0.39 | 0.50 | 0.48 | 0.52
Note. The reference cohort took the ACT in the 2010–2011 school year. The reference racial group is White. Targeted subjects include English and mathematics, for which the KCAS implemented new, CCSS-aligned curricula since 2011–2012. Comparison subjects include science and reading, whose curricula were carried over from the era of "Program of Studies," the old state standards before KCAS.
*** p < 0.01. ** p < 0.05. * p < 0.1.
Cross-subject comparisons between Cohorts 1 and 3, however, demon-
strated a different pattern. By the end of the second year of the CCSS
implementation, Cohort 3 students outscored Cohort 1 students on both
targeted and untargeted subjects. On untargeted subjects, student perfor-
mance improved by 4% of a standard deviation. On targeted subjects, the
improvement was smaller (by 2% of a standard deviation) but remained
statistically significant (0.04 – 0.02 = 0.02 standard deviations). These pat-
terns were consistently observed for students enrolled in both high- and
low-poverty schools. One interpretation of the difference in Cohort 2 and
Cohort 3 coefficients is that curriculum changes not only benefit those
directly targeted subjects, but also other subject areas, albeit in a more tan-
gential way. As discussed earlier, the CCSS-aligned ELA framework is in-
tended to help improve literacy skills required in other subject areas. This
design feature implies that student performance on untargeted subjects
is likely to benefit from ELA curriculum change, with a lag as improved
literacy skills trickle down to these other subjects.
CROSS-COHORT DIFFERENCES: WHEN DID THE DIVERGENCE BEGIN?
Starting high school with similar test scores, students from Cohorts 2 and
3 made more progress in terms of academic proficiency than Cohort 1
students by the end of the 11th grade. However, it remains unclear when
such cross-cohort divergence began. If students from the three cohorts
had been on different performance trajectories prior to the CCSS despite
having similar starting performance levels, our findings should not be com-
pletely attributed to CCSS implementation. To investigate this possibility,
we compare students' 10th-grade PLAN composite scores across cohorts.
All three cohorts took the 10th-grade PLAN before the implementation of
the CCSS. Therefore, we should expect no cross-cohort differences in 10th-
grade scores if CCSS transition was responsible for improved student learn-
ing. Indeed, we find no difference in 10th-grade performance between stu-
dents in Cohorts 1 and 2 (Table 4), lending support to the interpretation
that CCSS implementation likely led to improved ACT performance from
Cohort 1 to Cohort 2. By comparison, Cohort 3 students outscored Cohort
1 students at the start of the 10th grade by 4% of a standard deviation. That
is, there is strong evidence that Cohort 3 students started pulling ahead of
comparable Cohort 1 students before the CCSS transition.11
Table 4. Cross-Cohort Comparisons of 10th-Grade PLAN Composite Scores, by School Poverty and Student FRPL Eligibility (standard errors in parentheses)
Columns: (1) all students; (2) high-poverty schools, FRPL students; (3) high-poverty schools, non-FRPL students; (4) low-poverty schools, FRPL students; (5) low-poverty schools, non-FRPL students.
Cohort 2012: 0.01 (0.00) | –0.02 (0.01) | –0.03 (0.03) | 0.02 (0.01) | 0.02 (0.01)
Cohort 2013: 0.04*** (0.00) | 0.04*** (0.01) | 0.05 (0.03) | 0.04*** (0.01) | 0.05*** (0.01)
Eighth-grade KCCT scores
Mathematics: 0.40*** (0.00) | 0.28*** (0.01) | 0.42*** (0.02) | 0.42*** (0.01) | 0.55*** (0.01)
Reading: 0.15*** (0.00) | 0.17*** (0.01) | 0.14*** (0.02) | 0.12*** (0.01) | 0.11*** (0.01)
Social studies: 0.22*** (0.00) | 0.17*** (0.01) | 0.23*** (0.02) | 0.22*** (0.01) | 0.27*** (0.01)
Writing: 0.12*** (0.00) | 0.08*** (0.01) | 0.13*** (0.02) | 0.10*** (0.01) | 0.12*** (0.01)
Background characteristics
Black: –0.09*** (0.01) | –0.16*** (0.01) | –0.18*** (0.03) | –0.09*** (0.02) | –0.11*** (0.02)
Hispanic: –0.11*** (0.01) | –0.11*** (0.03) | 0.11 (0.09) | –0.11*** (0.03) | –0.07* (0.04)
Other race: 0.00 (0.02) | 0.01 (0.05) | –0.06 (0.16) | 0.01 (0.04) | –0.03 (0.04)
Male: –0.02*** (0.00) | –0.05*** (0.01) | 0.05* (0.02) | –0.02* (0.01) | 0.02* (0.01)
Special education: –0.16*** (0.01) | –0.08*** (0.03) | –0.00 (0.09) | –0.05 (0.03) | 0.01 (0.04)
ELL: 0.11*** (0.03) | –0.02 (0.04) | 0.48 (0.34) | 0.01 (0.07) | 0.15 (0.24)
FRPL-eligible (column 1 only): –0.16*** (0.00)
Constant: –0.03*** (0.00) | –0.24*** (0.01) | –0.14*** (0.02) | –0.13*** (0.01) | –0.08*** (0.01)
Observations: 100,212 | 10,381 | 2,814 | 10,039 | 20,679
R-squared: 0.63 | 0.55 | 0.63 | 0.60 | 0.63
Note. The reference cohort took the ACT in the 2010–2011 school year, and the PLAN in the 2009–2010 school year. The reference racial group is White.
*** p < 0.01. ** p < 0.05. * p < 0.1.
Our falsification test appears to have reached contradictory conclusions
as to whether we should attribute cross-cohort improvement in ACT per-
formance to CCSS implementation. What we have learned from this ex-
ercise is that, between the eighth grade and the start of the 10th grade,
students in Cohorts 1 and 2 followed the same learning trajectory, whereas
the learning trajectory was steeper for Cohort 3 students. It becomes clear
that controlling for student academic proficiency at a single point in time
is insufficient to account for important baseline cross-cohort differences.
We therefore augment models (1) and (2) by controlling for 10th-grade
PLAN scores in addition to the eighth-grade KCCT scores. The augment-
ed models allow us to answer the question: Among students who started
high school at similar levels and remained comparable in academic per-
formance at the start of Grade 10, did those in later cohorts outperform
those in the first cohort? The augmented models, however, may run the
risk of over-controlling: It is possible that schools adjusted their instruction in earlier grades in anticipation that performance expectations in later grades would change after the standards reform. If that were
the case, 10th-grade scores of later cohorts could reflect changes induced
by the standards reform; therefore, controlling for those scores would re-
move part of the transitional impact of the CCSS on student performance.
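The two checks described here, the PLAN placebo regression behind Table 4 and the PLAN-augmented ACT model behind Table 5, amount to swapping the outcome and adding one control. A minimal sketch follows, again with hypothetical column names and toy data rather than the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "cohort": rng.choice(["2011", "2012", "2013"], n),
    "kcct_math_z": rng.standard_normal(n),
})
df["plan_z"] = 0.5 * df.kcct_math_z + 0.6 * rng.standard_normal(n)   # 10th-grade PLAN composite
df["act_z"] = 0.6 * df.plan_z + 0.2 * df.kcct_math_z + 0.5 * rng.standard_normal(n)

cohorts = "C(cohort, Treatment('2011'))"

# (a) Falsification/placebo: PLAN was taken before CCSS implementation, so cohort
#     gaps here would signal pre-existing differences in growth (as in Table 4).
placebo = smf.ols(f"plan_z ~ {cohorts} + kcct_math_z", data=df).fit(cov_type="HC1")

# (b) Augmented model: controlling for the 10th-grade PLAN score so the cohort
#     coefficients reflect growth between Grades 10 and 11 only (as in Table 5, top panel).
augmented = smf.ols(f"act_z ~ {cohorts} + plan_z + kcct_math_z", data=df).fit(cov_type="HC1")

for name, fit in [("PLAN placebo", placebo), ("ACT, PLAN-controlled", augmented)]:
    print(name, fit.params.filter(like="cohort").round(3).to_dict())
```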
Table 5 presents estimates based on the augmented models. For both
models (1) and (2), adding the PLAN score explains an additional 13% to
18% of the total variation in student ACT scores. Focusing on ACT com-
posite scores, estimates in the top panel of Table 5 show that students from
both Cohorts 2 and 3 still significantly outperformed Cohort 1 students.
Cohort 2 students scored 2% of a standard deviation higher on average.
Interestingly, after controlling for the PLAN score, Cohort 2 students from
both high- and low-poverty schools improved their ACT performance rela-
tive to their counterparts in Cohort 1, alleviating the concern that recent
changes in the school system triggered by the CCSS may have dispropor-
tionate adverse effects on students in high-poverty schools.
Although Table 2 reports that Cohort 3 students experienced larger
cumulative gains between Grades 8 and 11 relative to Cohort 2 students
when both are compared to Cohort 1 students, most of the gains accruing to Cohort 3 students had been achieved before the CCSS transition, by the time they started Grade 10. Consequently, once the PLAN score is
controlled for, Cohort 3 students outscored Cohort 1 students on the ACT
by just 1% of a standard deviation on average. The difference nevertheless
remained statistically significant. The results in the top panel of Table 5
indicate that exposure to CCSS transition was correlated with improved
college readiness, but a higher “dosage” of exposure was not necessarily
associated with continual improvement in student readiness.
Comparing results reported in Tables 4 and 5, it appears that Cohort 2
students made significant progress in Grades 10 and 11 (from 2010–2011 to
2011–2012), whereas Cohort 3 students made most of the gains in the ninth
grade (2010–2011) and continued to improve (at a slower rate) in Grades
10 and 11 (from 2011–2012 to 2012–2013). Although Cohorts 2 and 3 dif-
fer in the grades in which progress was observed, both cohorts improved
relative to the first cohort during the same time period (that is, in the year
immediately before the CCSS implementation and the years after).
Table 5. Cross-Subject and Cross-Cohort Comparisons of ACT Scores While Controlling for PLAN, by School Poverty and Student FRPL Eligibility
Columns: (1) all students; (2) high-poverty schools, FRPL students; (3) high-poverty schools, non-FRPL students; (4) low-poverty schools, FRPL students; (5) low-poverty schools, non-FRPL students.
Cross-cohort models: Outcome = ACT composite scores
Cohort 2012: 0.02*** (0.00) | 0.03*** (0.01) | 0.06*** (0.02) | 0.02* (0.01) | 0.02** (0.01)
Cohort 2013: 0.01*** (0.00) | 0.02* (0.01) | 0.01 (0.02) | 0.00 (0.01) | –0.01 (0.01)
Observations: 100,212 | 10,381 | 2,814 | 10,039 | 20,679
R-squared: 0.81 | 0.69 | 0.79 | 0.77 | 0.82
Cross-subject, cross-cohort models: Outcome = ACT subject scores
Untargeted subjects, Cohort 2012: –0.00 (0.00) | 0.00 (0.01) | 0.03 (0.02) | –0.01 (0.01) | –0.02** (0.01)
Untargeted subjects, Cohort 2013: 0.02*** (0.00) | 0.03** (0.01) | 0.02 (0.02) | 0.01 (0.01) | –0.02*** (0.01)
Targeted subjects, Cohort 2012: 0.05*** (0.00) | 0.05*** (0.01) | 0.05** (0.02) | 0.06*** (0.01) | 0.08*** (0.01)
Targeted subjects, Cohort 2013: 0.00 (0.00) | –0.01 (0.01) | 0.00 (0.02) | –0.00 (0.01) | 0.04*** (0.01)
Observations: 401,099 | 41,621 | 11,270 | 40,185 | 82,758
R-squared: 0.65 | 0.50 | 0.63 | 0.60 | 0.65
Note. Standard errors in parentheses in the top panel, and standard errors clustered at the student level in parentheses for the bottom panel. The reference cohort took the ACT in the 2010–2011 school year, and the PLAN in the 2009–2010 school year. The reference racial group is White. Targeted subjects include English and mathematics, for which the KCAS implemented new, CCSS-aligned curricula since 2011–2012. Comparison subjects include science and reading, whose curricula were carried over from the era of "Program of Studies," the old state standards before KCAS. Regressions control for student PLAN scores in addition to KCCT scores and the same list of student background characteristics as in earlier tables.
*** p < 0.01. ** p < 0.05. * p < 0.1.
The bottom panel in Table 5 reports cross-subject differences in cross-
cohort gains in ACT performance after taking into account 10th-grade
PLAN subject scores. Similar to results reported in Table 3, by the end of
the first year of CCSS transition, there was no statistically significant differ-
ence in ACT performance on untargeted subjects between Cohort 1 and
Cohort 2 students. On the other hand, ACT scores on targeted subjects
improved significantly (0.02 standard deviations) during the same period.
Two years into the standards transition, however, ACT performance on
both targeted and untargeted subjects improved (and by the same mag-
nitude since the coefficient on “Targeted subjects, Cohort 2013” is zero).
These patterns were largely consistent across student subgroups regard-
less of school poverty context. The findings appear to confirm that the
new math and ELA curriculum framework did make a difference, and
that reformed ELA curriculum might indeed have benefitted non-ELA
subjects with some delay.
DISCUSSION
With education policies increasingly focused on college- and career-ready
standards, important changes are being introduced at the state and lo-
cal levels. The CCSS is one prominent recent example. With the pas-
sage of the Every Student Succeeds Act (ESSA) in December 2015, states
are likely to continue to engage in an active agenda of education policy
experimentation. As we seek to improve the education of our children
through reforms and innovations, policymakers should be mindful of
the potential risks of excessive changes, which are a source of confusion
and frustration among teachers and undermine teachers’ commitment
to educational reforms. Indeed, the stability of education policy is one
of the key determinants of policy success under Porter’s policy attributes
framework (Porter, 1994).
Our study was motivated by concerns that changes triggered by the tran-
sition to CCSS might be disruptive to student learning in the short run,
even when those policy changes may benefit students once they are fully
implemented. The goal of the study is not only to provide a first look at
how student college readiness progressed in the early years of the CCSS
implementation in Kentucky, but also to encourage researchers and poli-
cymakers to pay more attention to the transitional impact of educational
reforms in general. This is a highly pertinent issue, because with the pass-
ing of the ESSA, states are preparing to make further changes to systems
that were implemented merely five years ago under the CCSS. Kentucky, for example, began drafting a new accountability system in August 2017.12
We hypothesize that implementation of standards-based education
reforms may have two diverging effects on student performance. First,
implementation may be disruptive to student learning, regardless of how
well designed the standards are. The disruptive effect of reform imple-
mentation may be more pronounced among students who are more dis-
advantaged and schools that are more resource-constrained. In this case,
as exposure increases, student performance may eventually improve, but
only after an initial decrease in test scores. In contrast, the benefits of
standards-based reforms may outweigh the negative influence of imple-
mentation. In this case, as exposure increases, student performance will
improve without any initial disruptions to the upward trend. It is reassur-
ing that, in the case of CCSS transition in Kentucky, our findings support
the second hypothesis, and students continued to improve their college
readiness, as measured by ACT scores, during the early stages of CCSS
implementation. Furthermore, evidence suggests that the positive gains
students made during this period accrue to students in both high- and
low-poverty schools. In other words, the net effect of CCSS transition ap-
pears to be positive for all students.
However, it is not conclusive that the progress made in student college
readiness is necessarily attributable to the new content standards. On one
hand, we find that students made more progress on subjects directly tar-
geted by CCSS-aligned curriculum than on untargeted subjects after the
first year of CCSS implementation, suggesting that student performance
may have benefitted from the reformed content standards. Similarly, the
fact that student performance on untargeted subjects caught up with stu-
dent performance on targeted subjects by the end of Year 2 supports the
claim that the new CCSS-aligned ELA curriculum will eventually benefit
non-ELA subject areas. On the other hand, there is evidence that students
made significant progress toward college readiness both in the year im-
mediately before and during the early years of the CCSS implementation,
raising questions about the degree to which curriculum changes were di-
rectly responsible for observed performance improvement.
While it is unclear what might have changed in the year immediately
preceding the launch of the CCSS or whether those changes were CCSS-
induced, one speculation is that in anticipation of the upcoming stan-
dards reform, some schools and districts might have started the prepa-
ration for transition to the CCSS before its official launch. There are
anecdotal references to implementation activities starting in 2010, right
after Kentucky adopted the CCSS in February 2010 but before the CCSS
implementation. A number of other states also reported teaching CCSS-
aligned curricula in English and math as early as 2010–2011 (Rentner,
2013). However, those activities were unlikely to generate benefits to student learning quickly and broadly enough to be reflected in statewide average test scores.
All in all, Kentucky's CCSS transition does not seem to have adverse-
ly affected high school students’ academic performance, regardless of
their individual or school poverty status. However, there is potentially
important variation among states in approaches to standards-based
reforms, so Kentucky's experience with the CCSS transition does not
necessarily apply to other states. Future research should strive to collect
more empirical evidence on the transitional impact of standards-based
educational reforms on (a) student performance from other states; (b)
students in younger age groups, whose learning experiences may differ
from those of high school students; (c) additional student subgroups
characterized by their race/ethnicity, English proficiency, and special
education needs; and (d) students’ outcomes later in life, such as college
attendance and completion.
ACKNOWLEDGMENTS
We acknowledge support from the Bill & Melinda Gates Foundation for
this study. We thank the Kentucky Department of Education for providing
us with the required data. This research has benefitted from the helpful
comments of two anonymous reviewers and inputs from Mike Garet, Dan
Goldhaber, Angela Minnici, Toni Smith, and Fannie Tseng. Tiffany Chu
provided excellent research assistance. Any and all errors are solely the
responsibility of the study’s authors, and the views expressed are those of
the authors and should not be attributed to their institutions, the study’s
funder, or the agencies supplying data.
NOTES
1. http://www.corestandards.org/about-the-standards/frequently-asked-ques-
tions. Accessed October 29, 2014.
2. See, for instance, discussions in Education Week (2014), Hess and McShane
(2014), Marchitello (2014), and Rotberg (2014).
3. Authors’ calculation based on three cohorts of projected 12th-grade public
school enrollment from Hussar and Bailey (2014).
4. In Kentucky, career-ready standards are separate from college-ready standards, and they are measured using additional criteria such as industry certificates, Kentucky Occupational Skills Standards Assessment (KOSSA), Armed Services Vocational Aptitude Battery (ASVAB), and ACT WorkKeys.
5. See http://education.ky.gov/curriculum/docs/Documents/kCAS%20-%20June%202013 for more details about KCAS.
6. See http://www.kentuckyteacher.org/wp-content/uploads/2012/04/Field-
Test-Guide-2-2-12 for more details about the new teacher evaluation system.
7. More details can be found at http://education.ky.gov/comm/ul/Pages/de-
fault.aspx.
8. See http://www.corestandards.org/assets/Criteria . Accessed on March 7,
2018.
9. Another potentially important control variable is high school tracking. Many
studies on tracking demonstrate that high school tracks are associated with stu-
dent test score gains. In our case, whether or not a student follows an academic
track may predict his or her ACT performance. Unfortunately, we do not have
detailed course-taking information to infer student tracks. However, the omission
of high school tracks as a control variable will not have a large impact on our es-
timates of cohort coefficients unless the proportion of students following various
high school tracks has changed significantly across cohorts.
10. http://www.corestandards.org/wp-content/uploads/ELA_Standards
11. We also re-estimated the cross-subject, cross-cohort model presented in
Table 3 by replacing ACT subject scores with corresponding PLAN subject scores.
Findings are similar to what is reported here for PLAN composite scores: We found
no diverging performance trajectories between Cohort 1 and 2 on any subjects by
Grade 10. However, Cohort 3 significantly outperformed Cohort 1 on the PLAN
on both untargeted and targeted subjects, raising questions about the extent to
which ACT performance gains achieved by Cohort 3 on all subjects can be attrib-
uted to changes in curriculum frameworks.
12. For more information, visit http://education.ky.gov/comm/Pages/Every-
Student-Succeeds-Act-%28ESSA%29.aspx.
REFERENCES
Allen, J. (2013). ACT Research Report Series. Updating the ACT College Readiness Benchmarks
(ACT Research Report Series, 2013-6). Washington, DC: American College Testing
(ACT), Inc.
Allen, J., & Sconing, J. (2005). Using ACT Assessment scores to set benchmarks for college readiness
(ACT Research Report Series, 2005-3). Washington, DC: American College Testing
(ACT), Inc.
American College Testing (ACT). (2008). What kind of interpretations can be made on the basis of
ACT scores? Washington, DC: Author.
American College Testing (ACT). (2010). The alignment of Common Core and ACT’s College and
Career Readiness System. Washington, DC: Author.
Beach, R. W. (2011). Issues in analyzing alignment of language arts Common Core Standards
with state standards. Educational Researcher, 40(4), 179–182.
Bettinger, E. P., Evans, B. J., & Pope, D. G. (2011). Improving college performance and retention
the easy way: Unpacking the ACT exam (No. w17119). New York, NY: The National Bureau
of Economic Research.
Bettinger, E. P., & Long, B. T. (2009). Addressing the needs of under-prepared students in
higher education: Does college remediation work? The Journal of Human Resources, 44(3),
736–771.
Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas
accountability system. American Educational Research Journal, 42(2), 231–268.
Borman, G. D., Hewes, G. M., Overman, L. T., & Brown, S. (2003). Comprehensive school
reform and achievement: A meta-analysis. Review of Educational Research, 73(2), 125–230.
Borman, G. D., Slavin, R. E., Cheung, A. C., Chamberlain, A. M., Madden, N. A., & Chambers,
B. (2007). Final reading outcomes of the national randomized field trial of Success for
All. American Educational Research Journal, 44, 701–731.
Brown, A. B., & Clift, J. W. (2010). The unequal effect of adequate yearly progress evidence
from school visits. American Educational Research Journal, 47(4), 774–798.
Carmichael, S. B., Martino, G., Porter-Magee, K., & Wilson, W. S. (2010). The state of state
standards and the Common Core—in 2010. Washington, DC: Thomas B. Fordham Institute.
Carnoy, M., & Loeb, S. (2002). Does external accountability affect student outcomes? A cross-
state analysis. Educational Evaluation and Policy Analysis, 24, 305–331.
Clark, M., Rothstein, J., & Schanzenbach, D. W. (2009). Selection bias in college admissions
test scores. Economics of Education Review, 28, 295–307.
Clarke, M., Shore, A., Rhoades, K., Abrams, L., Miao, J., & Li, J. (2003). Perceived effects of state
mandated testing programs on teaching and learning: Findings from interviews with educators in
low-, medium- and high-stakes states. Boston, MA: National Board on Educational Testing
and Public Policy.
Clune, W. H. (1993). Systemic educational policy: A conceptual framework. In S. H. Fuhrman
(Ed.), Designing coherent educational policy (pp. 125–140). San Francisco, CA: Jossey-Bass.
Cobb, P., & Jackson, K. (2011). Assessing the quality of the Common Core State Standards
for Mathematics. Educational Researcher, 40(4), 183–185.
Coleman, J. S. (1966). Equality of educational opportunity [summary report] (Vol. 2). Washington,
DC: U.S. Department of Health, Education, and Welfare, Office of Education.
Conley, D. T. (2007). Redefining college readiness. Eugene, OR: Educational Policy Improvement
Center.
Conley, D. T. (2010). College and career ready: Helping all students succeed beyond high school.
Hoboken, NJ: John Wiley & Sons.
Cullen, J. B., & Reback, R. (2006). Tinkering toward accolades: School gaming under a performance
accountability system (NBER Working Papers Series No. 12286). Cambridge, MA: National
Bureau of Economic Research.
Dee, T., & Jacob, B. (2011). The impact of No Child Left Behind on student achievement.
Journal of Policy Analysis and Management, 30, 418–446.
Dee, T., & Wyckoff, J. (2013). Incentives, selection, and teacher performance: Evidence from IMPACT
(NBER Working Paper No. 19529). New York, NY: The National Bureau of Economic
Research.
Desimone, L. (2002). How can comprehensive school reform models be successfully
implemented? Review of Educational Research, 72(3), 433–479.
Education Week. (2014). From adoption to practice: Teacher perspectives on the Common Core.
Retrieved from http://www.edweek.org/media/ewrc_teacherscommoncore_2014
on January 14, 2015.
Figlio, D. N. (2006). Testing, crime and punishment. Journal of Public Economics, 90(4–5),
837–851.
Finn, C. E., Petrilli, M. J., & Julian, L. (2006). The state of state standards 2006. Washington, DC:
Thomas B. Fordham Foundation.
Finnan, L. A. (2014). Common Core and other state standards: Superintendents feel optimism, concern
and lack of support. Alexandria, VA: The School Superintendents Association.
Foorman, B. R., Kalinowski, S. J., & Sexton, W. L. (2007). Standards-based educational reform
is one important step toward reducing the achievement gap. In A. Gamoran (Ed.),
Standards-based reform and the poverty gap: Lessons for No Child Left Behind. Washington, DC:
Brookings Institution Press.
Fullan, M. (2001). The future of educational change. New Meaning of Educational Change, 3,
267–272.
Gamoran, A. (2013). Educational inequality in the wake of No Child Left Behind. Washington, DC:
Association for Public Policy and Management. Retrieved from http://www.appam.org/
assets/1/7/Inequality_After_NCLB .
Gewertz, C. (2016, March 23). State testing: An interactive breakdown of 2015-16 plans.
Education Week. Retrieved from http://www.edweek.org/ew/section/multimedia/state-
testing-an-interactive breakdown-of-2015-16.html.
Goodman, S. (2016). Learning from the test: Raising selective college enrollment by
providing information. Review of Economics and Statistics, 98(4), 671–684.
Hamilton, L. S., Stecher, B. M., Marsh, J. A., McCombs, J. S., & Robyn, A. (2007). Standards-
based accountability under No Child Left Behind: Experiences of teachers and administrators in
three states (MG-589-NSF). Santa Monica, CA: RAND Corporation.
Hanushek, E. A., & Raymond, M. E. (2005). Does school accountability lead to improved
student performance? Journal of Policy Analysis Management, 24, 297–327.
Hess, F. M., & McShane, M. Q. (2014). Flying under the radar? Analyzing Common Core media
coverage. Washington, DC: American Enterprise Institute. Retrieved from https://www.
aei.org/publication/flying-under-the-radar-analyzing-common-core-media-coverage on
January 14, 2015.
Hill, H. C. (2001). Policy is not enough: Language and the interpretation of state standards.
American Educational Research Journal, 38(2), 289–318.
Howell, J. S. (2011). What influences students’ need for remediation in college? Evidence
from California. The Journal of Higher Education, 82(3), 292–318.
Hussar, W. J., & Bailey, T. M. (2014). Projections of education statistics to 2022 (NCES 2014-
051). Washington, DC: U.S. Department of Education, National Center for Education
Statistics.
Jacob, B. A. (2007). Test-based accountability and student achievement: An investigation of differential
performance on NAEP and state assessments (NBER Working Paper No. 12817). New York,
NY: The National Bureau of Economic Research.
Jacob, B., & Levitt, S. D. (2003). Rotten apples: An investigation of the prevalence and
predictors of teacher cheating. The Quarterly Journal of Economics, 118(3), 843–877.
Kane, T. J., & Staiger, D. O. (2002). The promise and pitfalls of using imprecise school
accountability measures. The Journal of Economic Perspectives, 16(4), 91–114.
Kentucky Department of Education and Kentucky Council on Postsecondary Education.
(2010). Unified strategy for college and career readiness: Senate Bill 1 (2009). Retrieved from
http://education.ky.gov/educational/CCR/Documents/CCRUnifiedPlan_draft .
Kober, N., & Rentner, D. S. (2011). Common Core State Standards: Progress and challenges in school
districts’ implementation. Washington, DC: Center on Education Policy.
Ladd, H. F. (2007). Holding schools accountable revisited. Presented at APPAM Fall Research
Conference: Spencer Foundation Lecture in Education Policy and Management,
Washington DC, November 2007. Association for Public Policy Analysis and Management.
Lipsey, M. W., Puzio, K., Yun, C., Hebert, M. A., Steinka-Fry, K., Cole, M. W., . . . Busick,
M. D. (2012). Translating the statistical representation of the effects of education interventions
into more readily interpretable forms (NCSER 2013-3000). Washington, DC: U.S. Department
of Education, Institute of Education Sciences, National Center for Special Education
Research. Retrieved from www.ies.ed.gov/ncser/pubs/20133000/pdf/20133000 on
January 26, 2015.
Logan, J. R., Minca, E., & Adar, S. (2012). The geography of inequality: Why separate means
unequal in American public schools. Sociology of Education, 85(3), 287–301.
Loveless, T. (2016). The 2016 Brown Center Report on American Education: How well are American
students learning? (Vol. 3, No. 5). Washington, DC: The Brown Center on Education
Policy, The Brookings Institution. Retrieved from http://www.brookings.edu/~/media/
Research/Files/Reports/2016/03/brown-center-report/Brown-Center-Report-2016.
pdf?la=en
Marchitello, M. (2014, September 26). Politics threaten efforts to improve K–12 education.
Center for American Progress. Retrieved from https://www.americanprogress.org/
issues/education/report/2014/09/26/97849/politics-threaten-efforts-to improve-k-12-
education/ on January 14, 2015.
Martorell, P., & McFarlin, I., Jr. (2011). Help or hindrance? The effects of college remediation
on academic and labor market outcomes. The Review of Economics and Statistics, 93(2),
436–454.
Maruyama, G. (2012). Assessing college readiness: Should we be satisfied with ACT or other
threshold scores? Educational Researcher, 41, 252–261.
National Governors Association Center for Best Practices & Council of Chief State School
Officers (NGA/CCSSO). (2010). Common Core State Standards. Retrieved from http://
www.corestandards.org/about-the-standards/frequently-asked-questions.
No Child Left Behind (NCLB) Act of 2001, Pub. L. No. 107-110, § 115, Stat. 1425 (2002).
Özek, U. (2012). One day too late: Mobile students in an era of accountability (Working Paper No.
82). Washington, DC: National Center for Analysis of Longitudinal Data in Education
Research.
Polikoff, M. S., Porter, A. C., & Smithson, J. (2011). How well aligned are state assessments of
student achievement with state content standards? American Educational Research Journal,
48(4), 965–995.
Porter, A. C. (1994). National standards and school improvement in the 1990s: Issues and
promise. American Journal of Education, 102(4), 421–449.
Porter, A., McMaken, J., Hwang, J., & Yang, R. (2011). Common Core Standards: The new
U.S. intended curriculum. Educational Researcher, 40, 103–116.
Porter, A. C., & Polikoff, M. S. (2011). Measuring academic readiness for college. Educational
Policy, 26(3), 394–417.
Reardon, S. F. (2011). The widening academic achievement gap between the rich and the
poor: New evidence and possible explanations. In G. J. Duncan & R. J. Murnane (Eds.),
Whither opportunity?: Rising inequality, schools, and children’s life chances (91–116). New York,
NY: Russell Sage Foundation.
Regional Equity Assistance Centers. (2013). How the Common Core must ensure equity by fully
preparing every student for postsecondary success: Recommendations from the Regional Equity
Assistance Centers on implementation of the Common Core State Standards. San Francisco, CA:
WestEd.
Rentner, D. S. (2013). Year 3 of implementing the Common Core State Standards: An overview
of states’ progress and challenges. Washington, DC: Georgetown University, Center on
Education Policy.
Roderick, M., Nagaoka, J., & Coca, V. (2009). College readiness for all: The challenge for
urban high schools. Future of Children, 19, 185–210.
Ross, S. M., Henry, D., Phillipsen, L., Evans, K., Smith, L., & Buggey, T. (1997). Matching
restructuring programs to schools: Selection, negotiation, and preparation. School
Effectiveness and School Improvement, 8(1), 45–71.
Rotberg, I. C. (2014, October 16). The endless search for silver bullets. Teachers College Record.
Retrieved from http://www.tcrecord.org/Content.asp?ContentId=17723 on October 25,
2014.
Rouse, C. E., Hannaway, J., Goldhaber, D., & Figlio, D. (2013). Feeling the Florida heat?
How low performing schools respond to voucher and accountability pressure. American
Economic Journal: Economic Policy, 5, 251–281.
Sass, T., Apperson, J., & Bueno, C. (2015). The long-run effects of teacher cheating on
student outcomes. Atlanta, GA: Atlanta Public Schools. Retrieved from http://www.
atlantapublicschools.us/crctreport.
Schmidt, W. H., & Houang, R. T. (2012). Curricular coherence and the Common Core
State Standards for Mathematics. Educational Researcher, 41(8), 294–308. http://doi.
org/10.3102/0013189X12464517
Scott-Clayton, J., Crosta, P. M., & Belfield, C. R. (2014). Improving the targeting of treatment
evidence from college remediation. Educational Evaluation and Policy Analysis, 36(3),
371–393.
Smith, M. S., & O’Day, J. (1991). Systemic school reform. In S. H. Fuhrman & B. Malen
(Eds.), The politics of curriculum and testing: The 1990 yearbook of the Politics of Education
Association (pp. 233–267). London, UK: The Falmer Press.
Stecher, B. M., Epstein, S., Hamilton, L. S., Marsh, J. A., Robyn, A., McCombs, . . . Naftel, S.
(2008). Pain and gain: Implementing No Child Left Behind in three states, 2004-2006. Santa
Monica, CA: RAND Corporation.
Steelman, L. C., & Powell, B. (1996). Bewitched, bothered, and bewildering: The use and
misuse of state SAT and ACT scores. Harvard Educational Review, 66, 27–59.
Swanson, C. B., & Stevenson, D. L. (2002). Standards-based reform in practice: Evidence
on state policy and classroom instruction from the NAEP state assessments. Educational
Evaluation and Policy Analysis, 24, 1–27.
Taylor, J., Stecher, B., O'Day, J., Naftel, S., & LeFloch, K. C. (2010). State and local implementation
of the No Child Left Behind Act, Vol. IX—Accountability under NCLB: Final report. Washington,
DC: U.S. Department of Education.
Winters, K., Williams, D., McGaha, V., Stine, K., Thayer, D., & Westwood, J. Senate Bill 1, 1 SB
(2009). Retrieved from http://education.ky.gov/comm/UL/Documents/SENATE%20
BILL%201%20HIGHLIGHTS .
Wong, K. K., Anagnostopoulos, D., Rutledge, S., & Edwards, C. (2003). The challenge of
improving instruction in urban high schools: Case studies of the implementation of the
Chicago academic standards. Peabody Journal of Education, 78(3), 39–87.
Zwick, R. (2004). Rethinking the SAT: The future of standardized testing in university admissions.
New York, NY: Routledge.
ZEYU XU is a Managing Researcher at the American Institutes for
Research. His research interests include teacher labor market policies
and student college readiness. His work has been published in journals
such as the Journal of Urban Economics, the Journal of Policy Analysis and
Management, Education Economics, Educational Evaluation and Policy Analysis,
and Teachers College Record.
KENNAN CEPA is a graduate student in Sociology at the University of
Pennsylvania. She studies the transition to adulthood and how education
contributes to stratification. Her current work examines how students’ re-
liance on college loans influences college completion. Recently, her work
has appeared in Research in Higher Education.
Tying Together the Common Core of Standards, Instruction, and Assessments
Fewer, clearer, higher standards will point the United States in the right direction for developing an education system that prepares high school graduates who are college-ready.
By Vicki Phillips and Carina Wong
VICKI PHILLIPS is director of education and CARINA WONG is deputy director, College-Ready Work for the Bill & Melinda Gates Foundation.

After more than 20 years of messy thinking, mistakes, and misguided direction, policy makers have finally given teachers and students a solid set of standards in mathematics and literacy. The Common Core of Standards only begins the process of moving academic performance in these subjects to the levels we need, but it's such a relief to have them. Now, the Race to the Top funding and the federal investments in state assessment systems have targets that make sense.
The anticipated adoption of the Common Core by 48 states (only Texas and Alaska are not on board) also indicates genuine political will to move away from disparate standards across the country. The bottom line? K-12 public education is as close as it has ever been to saying every high school graduate must be college ready.
Having a set of common standards also lays the groundwork for developing assessments aligned with those college-ready standards and for developing teaching tools that are aligned with both the standards and the assessments. It is a mountain of work, but it's work that is essential for creating a system of education that is cohesive and coherent.
Why core standards, why choose college-ready as a goal, and what do we do next? For two years, the education team at the Bill & Melinda Gates Foundation has been collecting evidence about and thinking through answers to these questions. We've begun to
make substantial investments in some of the an-
swers. Gates partnered with the Council of Chief
State School Officers and the National Governors
Association to develop the Common Core of Stan-
dards and is investing in next steps, to the extent that
it’s appropriate for the foundation.
First, we want readers to understand that we do
not come to the issues as “think tank experts” who
are clueless as to what it’s really like on the ground in
schools and classrooms. One of us is the education
director of the foundation, the other is head of its
College-Ready Work Team. We’ve
been involved in this work for a long
time, from the bottom up and the
top down, as teacher, program ad-
ministrator, and teacher develop-
ment director. We worked together
at the National Center on Educa-
tion and the Economy when it
launched the first significant effort
to set high standards and align as-
sessments with them. Its New Stan-
dards Project in the 1990s formed
an alliance of states and districts,
and we helped them adopt the lan-
guage of standards and link them to
performance assessment systems.
Later, we teamed up again — as state superintendent
and as bureau director for assessment — in the Penn-
sylvania State Department of Education. There, we
worked around an illogical — but typical — set of
standards (politically difficult to change) and man-
aged to improve the state assessment system some-
what by clarifying targets for teachers and reducing
the standards covered. Moreover, one of us has expe-
rience implementing standards-based reforms as the
superintendent of both a mid-sized Pennsylvania dis-
trict and an urban Oregon school district.
The vision of a sensible and challenging educa-
tion for all students has always been central to this
work, but when we were dealing with states and
school districts, we often stumbled at trying to move
systems. There were too many pieces to change at the same time, and never enough money or political stamina at all levels. The No Child Left
Behind Act helped set priorities, but the prescriptive
accountability measures made it difficult for some
districts and states to use assessments as levers for
good practices.
With the Common Core of Standards, many
things now become possible. Because states will be
working from the same core, we can create broad-
based sharing of what works but, at the same time,
provide local flexibility to decide how best to teach
the core. The new standards also provide a platform
for innovation, a structure that can support creative
strategies for teaching core content in math and lit-
eracy.
RATIONALES FOR THE COMMON CORE AND
COLLEGE READY
Some justifiably worry about common standards.
The Common Core of Standards, however, points
state policy making in the right direction without
imposing rigid specifications about how states
should use them. Analyze the standards, and you’ll
see an underlying theme of “fewer, clearer, higher”
standards for math and literacy. They’re designed to
be more manageable for teachers and to focus on
preparing students for college.
The evidence for adopting college ready as a goal
could not be stronger. Using the evidence, our Col-
lege-Ready Work Team defines “ready” as access to
two-year transfer programs or four-year colleges
with the knowledge and skills to succeed in fresh-
man-year core courses — in other words, no reme-
dial work. This expectation is just as important for
young people who enroll in occupational certificate
programs after high school; success in these pro-
grams and in on-the-job training requires the skills
and knowledge embedded in the core standards.
The Gates Foundation advocated for fewer,
clearer, and higher standards because evidence sup-
ports the need for students to have certain skills as
they move into college, including:
• Academic skills and content that are basic but
also encompass big ideas in the disciplines;
• Cognitive skills, such as problem solving,
collaboration, and academic risk taking;
• Academic grit/academic relationships, such as
being motivated to do demanding work and
being engaged in it; and
• College knowledge, or knowing how to apply
for college successfully and navigate the
system. (“College Knowledge” is the title of a
book by David Conley of the University of
Oregon, who is analyzing college admission
and freshman-year requirements for the Gates
College-Ready Work Team.)
States face many choices in moving toward fewer,
clearer, higher standards. When considering fewer,
are the academic expectations for students the same
no matter what type of postsecondary education they
want? What is absolutely necessary to be successful
in credit-bearing college courses? How can second-
ary courses be reorganized to provide sufficient time
to learn the core content? To be clearer, evidence in-
dicates states must be sure their standards are coher-
ent (minimal repetition and with “big ideas” that
thread the content together), are aligned to assess-
ments, and use formative assessments to determine
proficiency. Higher does not mean piling on content.
Rather, it means being able to apply learning, to
transfer learning from one context to another, and to
measure up to international standards.
Some of these criteria already are in play in our
education system through limited programs, such as
Advanced Placement courses and the International
Baccalaureate. States also can borrow ideas from the
curricula and assessments in other countries or from
the popular International Cambridge Exam system,
provided they acknowledge and consider different
demographic and system contexts.
WHAT SHOULD HAPPEN NEXT?
The Common Core of Standards takes the guess-
work out of determining what students should know
and be able to do. The Common Core, however,
doesn’t tell states what the standards look like in
practice. Moreover, the Common Core will become
useful to teachers and policy makers only when it’s
part of a larger system of next-genera-
tion assessments that track how much
students know and how well they
know it.
We know how difficult it is to revise
state assessment systems — the ever-
present balancing act between quality
and cost, the need to maintain trust
among educators while seeking im-
provements, and the long lead time
necessary for changes. The opportu-
nity before states, however, is not sim-
ply to change their assessment systems.
The Common Core represents an op-
portunity to totally redesign assess-
ment systems, using the standards and
the college-ready goal as the guides.
Teachers would welcome state as-
sessment systems that measure what
they consider challenging classroom
work and are technically valid. That’s
why the Gates Foundation’s overall
strategy considers high-quality assess-
ments a critical resource for teacher ef-
fectiveness and teachers’ capacities to
prepare students for college-level work.
Bill Gates has said emphatically that
teachers deserve to have access to the
tools and supports they need every time
they step into the classroom. “Doctors
aren’t left alone in their offices to try to
design and test new medicines,” he told
an education forum in November 2008.
“They’re supported by a huge medical
research industry. Teachers need the same kind of sup-
port.”
Most states rely on single, standardized assessments, now used to measure student achievement, as a central component of measuring teacher effectiveness, which is not ideal.
States need to move their as-
sessment systems to higher levels — as the Race to
the Top guidelines specify — by giving teachers ex-
amples of formative assessments aligned to college-
ready core standards. States also need to build plat-
forms for data that teachers can use to improve their
instruction. Together, these provide the base for re-
designing state assessment systems.
The Gates Foundation has sorted out the tasks
ahead and placed its college-ready investments be-
hind helping teachers improve their practice through
intentionally designed tools, strategic partnerships,
and incentives for the tools to go to scale.
WHAT TO EXPECT IN MATH
Instead of strategies that constrict teaching, the
partners working with Gates are developing tools
that will show teachers what is possible when they
use fewer, clearer, and higher standards. The tools
will provide an image of standards-based instruction
that comes alive because of teacher creativity and
flexibility.
The Common Core of Standards is broadly writ-
ten, so the partners are analyzing the knowledge and
skills underneath them. They’re adding next-genera-
tion assessments to that base. Their course designs and
assessments can be used as single modules or linked
together to make a full course. They will work in tra-
ditional or in more proficiency-based classrooms.
The approach doesn’t duplicate other strategies,
such as the familiar backward mapping of content.
We know, for example, that influencing practice in
math is a very different ball game than what is
needed for literacy. In the past, higher standards just
meant more math. A decade ago, completing Alge-
bra I became the standard; now, the standard is com-
pleting Algebra II. But evidence about college ex-
pectations for learning tells a different story: Stu-
dents need more agility at data analysis and statistics
than advanced algebra. Moreover, college-ready
skills, according to the Common Core of Standards,
need to be more conceptual and less procedural.
The math strategy in which the foundation is in-
vesting gets at the details in the standards. It uses a
technology-based program to map specific math
and cognitive skills that can guide assessment and
provide a clear target for teachers as they decide
what students need to be college ready. Teachers and
state assessment systems will be able to draw from a
bank of assessments — formative and summative —
that set performance targets. Once the research is
completed on field trials, the assessments will be
universally available. Moreover, the initiative will
provide an assessment blueprint for math that could
be an alternative to current state assessments.
The instructional packages being developed by
our partner organization, the Shell Centre at the Uni-
versity of Nottingham in Great Britain under the
aegis of the University of California Berkeley, en-
courage teachers to use assessments differently. Each
unit will be anchored on a task or a set of tasks and
suggest interventions that address difficulties stu-
dents may be having, follow-up questions, and how
teachers can manage classroom discussions. Opti-
mally, these instructional/assessment packages will
evolve into a sequence of math modules for grades 8-
11. We may add other grades as the initiative pro-
gresses. The work also is developing summative end-
of-course tests in order to establish college-ready ex-
pectations. They won’t look like the summative tests
states use in their assessment systems now, and they
aren’t intended to be used for accountability. Their
purpose is to show how state accountability systems
could look — measuring performance aligned to
fewer, clearer, higher standards, assessing students’
abilities to use what they’ve learned, and giving teach-
ers data that they actually can use to improve their
practice. They will be an image of the possibilities.
The Shell Centre is working on a set of 20 in-
structional modules with assessments and end-of-
course tests for each grade level.
WHAT TO EXPECT IN LITERACY
Literacy is not as self-contained as math. As the
Common Core of Standards makes clear, literacy
skills cross subject-area boundaries but are not for-
mally taught once students enter the middle grades
(direct instruction often ends at 3rd grade). The fo-
cus of English/language arts in the middle grades
tends to highlight narrative reading and writing,
while college-ready skills emphasize reading multi-
ple and complex texts and writing that calls on stu-
dents to present and defend arguments. Generally,
middle grades teachers are not prepared to teach lit-
eracy this way.
The College-Ready Work investments
The Gates Foundation will spend an estimated $354 million
between 2010 and 2014 to:
• Help states build a framework that could be the foundation
for a common proficiency conversation;
• Develop prototypes of both formative and summative
assessments in math and literacy that, by design, are
aligned to the core standards, provide challenging work for
students, and help teachers provide meaningful feedback to
students;
• Develop syllabi that lay out a course that connects the standards, assessments, and instruction but depends on teachers using their own creativity in the classroom;
• Seed new intermediaries for validation and item bank development, and design new models of professional development;
• Develop specifications for new technology-based
instructional platforms that would help states deliver high-
quality assessments aligned to the core standards and help
districts acquire time-relevant data to improve instruction;
• Develop new ways of thinking about psychometric rules that
guide tests in order to get higher quality and more valid
items that can be used for large-scale assessment and
accountability systems;
• Develop new scoring technology and new forms of
diagnostic assessments; and
• Explore how to support students' academic success, build their academic tenacity, and surround them with responsive education environments.
As with math, our partner for the literacy strat-
egy is mapping the details of the skills underneath
the broad Common Core of Standards. The instruc-
tional modules, however, are open-ended and adapt-
able to several core subjects. Think of literacy as a
spine; it holds everything together. The branches of
learning connect to it, meaning that all core content
teachers have a responsibility to teach literacy. Cur-
rently, however, there is no way to assess literacy be-
cause it is so diffused among the core courses. Un-
der the partners’ literacy initiative, teachers in Eng-
lish/language arts, social studies, and science will be
able to draw from the same assessment bank —
again, formative and summative — and adapt assess-
ments to their content. The literacy strategy an-
chors performance expectations at three levels of de-
mand, rather than by grade levels, allowing middle
grades teachers to choose the best match for stu-
dents. Each module also will include specific strate-
gies for students struggling with the assignments.
While flexible, the literacy instructional modules
keep a constant focus on high standards of performance,
and they could reshape state summative assessments. A system of these modules could
help develop a literacy consensus around the mean-
ing of scores and proficiency. Think about state as-
sessments that might include extended projects,
strong essays, and performance tasks requiring con-
ceptual understanding and transfer of knowledge.
The core partner for literacy is a new team, Lit-
eracy by Design, working under the aegis of the Ed-
ucation Trust. Literacy by Design is developing
reading and writing modules for grades 6-8. These
could become syllabi and even a separate literacy
course. The foundation partners also are exploring
the possibility that a collection of summative liter-
acy assessments could become the basis for a col-
lege-ready literacy score.
Outline of a Literacy Module
This module uses social studies content for a
persuasive essay (argumentation is one of the
most frequently assigned modes in college-ready
writing). The ladder of assignments (from pretest to
final draft), scoring rubrics (for advanced, proficient,
and “not yet” levels), and summative assessment
can apply to a persuasive essay for
English/language arts and written assignments in
science. The assignment addresses several
specific standards in the Common Core of
Standards document.
Formative assignment prompt: The Declaration of
Independence states that “All men are created
equal.” Is equality possible? After reading a variety
of opinions and analyses, write an essay that
addresses the question and support your position
with evidence from your readings.
The formative assignment scoring rubric describes
the three levels of proficiency for these demands
and qualities: Focus, reading, claim, evidence, text
structure, and conventions.
The summative assessment differs from the
formative assignment in that it is not taught and
provides data about how well students have
internalized the critical thinking and writing involved
in the formative assignment. Students are writing a
timed assessment in class, so the scoring rubrics
are adjusted somewhat for leeway on composition
and conventions.
Summative prompt: Read the following two articles
that state opposing positions on the meaning of
“freedom.” Write a brief essay in which you argue
for one position or the other.
Piloting Mathematics Units
The Shell Centre has developed two kinds of math
units for teachers in grades 8 to 11: units that
develop students’ conceptual understanding of the
content and units that apply previously learned
content to problem solving. Both types:
• Give students the opportunity to learn the
Common Core of Standards;
• Use discussion, team work, and other nonlecture
modes of learning;
• Suggest next steps for instruction based on
student performance;
• Help teachers identify common misconceptions
and mistakes by students and suggest ways to
address them;
• Use different tools, such as individual mini white
boards or poster boards, to foster discussion
among students; and
• Include formative assessments and end-of-
course assessments for each grade level.
Depending on the outcome of the field trials, the
work may be extended to grades 6 to 12.
Learn more about the Common Core of State Standards initiative: www.corestandards.org
Subscribe to a mailing list to get updated information about the standards effort from the Council of Chief State School Officers: www.ccsso.org/federal_programs/13286.cfm
Both the math and the literacy instructional pack-
ages include rubrics and examples of student work.
Both also will require investments in strong profes-
sional development for teachers as they adapt to this
new system.
The technical partner for the math and literacy
tool development and the general assessment work
is the Center for Research on
Evaluation, Standards, and
Student Testing (CRESST)
at the University of Califor-
nia Los Angeles. Teachers
need to know that the assess-
ments are technically sound.
CRESST is mapping two
frameworks for states: One is
the content and skills in the
Common Core of Standards,
the other is the core cognitive
skills (for example, problem
solving, reasoning, collabora-
tion) that can be layered on
the content maps. CRESST
is developing a third framework that will link these
two frameworks into an assessment system.
CRESST also will validate the assessments being
developed by the math and literacy teams.
SUPPORTING STUDENTS
We realize that developing tools for teachers and
ultimately redesigning state assessment systems
must be matched with whatever students need to be
college ready. Student support is an ill-defined area
that encompasses a multitude of programs and serv-
ices, so we’ve chosen to concentrate on academic
support and on what systems (schools, districts,
states) can do differently to better support students’
academic success.
Two assumptions guide our work on student sup-
ports:
• College-ready students have identifiable key
attributes and skills that can be learned. Our
instructional tools, as mentioned earlier, focus
on the academic knowledge/skills and
cognitive strategies necessary to succeed as a
college freshman in core content courses.
Another attribute is tenacity and underlying
beliefs and behaviors that cause students to
embrace academic achievement. The final
learned attribute of college-ready students is
“college knowledge,” or the skills to access
college and move through the system.
• To dramatically increase the number of
students who are college ready, we will need
more “responsive” systems. Schools that are
“high demand, high support” develop both
academic preparedness and academic tenacity.
Responsive districts and states allow such
school environments to flourish and
encourage innovation. What if the schedule in
high schools offered choices such as those on college
campuses, for example — with day, night, and
weekend options or field studies online and
abroad?
We will center this student support work on net-
works of different types, such as state/district part-
nerships that focus on policies to enable “respon-
sive” schools or collaborations of youth-serving or-
ganizations with different strengths. Always, we will
be looking for evidence to back up or challenge our
assumptions.
GOING TO SCALE
Through 2010, the college-ready work will con-
tinue to focus on developing and validating instruc-
tional tools, including a new generation of assess-
ments. While this is going on, we’re also investing
in the development of technologies, including web-
based ones, that will provide immediate analysis of
student performance and help with the adoption of
the instructional tools. Throughout this process,
Gates has been in conversations and analyses with
teacher groups, major education policy groups,
think tanks, and researchers.
Then, we’ll look for additional partners. Within
four years, we hope to be implementing the instruc-
tional and student support tools in 10 states and 30
school districts, collaborating with several national
policy networks to bring the tools to scale, and fa-
cilitating the use of the college-ready standards in
admission policies of university systems in selected
states.
Going to scale in this country also means influ-
encing the vendors of textbooks and assessments.
The tools we’re developing are prototypes or images
of what’s possible in classrooms using the Common
Core of Standards. They’re not models to be fol-
lowed literally. If policy makers use the tools for
their own conversations and decisions about stan-
dards and assessment, then vendors will have the
same conversations. Moreover, the tools will be
“open access.”
The Gates Foundation acknowledges risks in set-
ting on the table a coherent system of college-ready
standards, aligned assessments, and teaching tools.
This is an ambitious agenda. With the current po-
litical and financial support for state actions on core
standards and new assessment systems, taking risks
and being ambitious is the right approach. After all
— the future of kids is at stake.
Most states rely on single standardized assessments, now used to measure student achievement, as a central component of measuring teacher effectiveness, which is not ideal.
https://doi.org/10.1177/0895904817719523
Educational Policy
2019, Vol. 33(4) 615 –649
© The Author(s) 2017
Article reuse guidelines:
sagepub.com/journals-permissions
DOI: 10.1177/0895904817719523
journals.sagepub.com/home/epx
Article
Are School Districts
Allocating Resources
Equitably? The Every
Student Succeeds Act,
Teacher Experience
Gaps, and Equitable
Resource Allocation
David S. Knight1
Abstract
Ongoing federal efforts support equalizing access to experienced educators
for low-income students and students of color, thereby narrowing the
“teacher experience gap.” I show that while high-poverty and high-minority
schools have larger class sizes and receive less funding nationally, school
districts allocate resources equitably, on average, across schools. However,
the least experienced teachers are still concentrated in high-poverty and
high-minority schools, both across and within districts. I then show that
additional state and local funding is associated with more equitable district
resource allocation. The study offers recommendations for state and federal
education policy related to the Every Student Succeeds Act.
Keywords
educational equity, federal education policy, teacher quality, school finance
1The University of Texas at El Paso, TX, USA
Corresponding Author:
David S. Knight, Center for Education Research and Policy Studies, College of Education, The
University of Texas at El Paso, 500 W. University Ave., Education Building, 105C, El Paso, TX
79968, USA.
Email: dsknight@utep.edu
Lack of access to high-quality instructional resources prevents students
from receiving adequate opportunities to learn (Darling-Hammond, 2000,
2004). Decades of research have documented unequal funding and ineq-
uitable access to experienced, high-quality educators across student race/
ethnicity and socioeconomic status (e.g., Baker, Farrie, Johnson, Luhm,
& Sciarra, 2017; Clotfelter, Ladd, Vigdor, & Wheeler, 2006; Reardon,
2011). Because teachers are typically paid according to a district-level
salary schedule, unequal funding within school districts is directly linked
to the inequitable distribution of teacher experience across schools. The
U.S. Department of Education (U.S. DOE) currently uses two approaches
to place more experienced educators in high-poverty, Title I schools,
thereby narrowing the “teacher experience gap.” As part of the implemen-
tation process for the recently enacted Every Student Succeeds Act
(ESSA), the DOE established new regulations that would require strug-
gling districts to allocate equal teacher salary funding in high- and low-
poverty schools.1 In addition, a federal program, State Plans to Ensure
Equitable Access to Excellent Educators (U.S. DOE, 2014), requires
state education agencies to measure students’ access to high-quality and
experienced teachers, and develop plans for closing within-district teacher
quality gaps.
The purpose of this study is to assess the extent to which teacher salary
spending, teacher experience, and teacher–student ratios are equitably dis-
tributed within school districts nationally and to identify factors associated
with these patterns. Analyses focus on the role of district-level funding in
narrowing gaps in teacher resources. The study has direct implications for the
regulatory requirements of ESSA and state-level education policy. First, the
study shows the extent to which districts currently allocate teacher salary
funds equitably across schools and the types of districts with the largest fund-
ing gaps. A recent Brookings policy brief argued that districts already allo-
cate the same level of funding to high- and low-poverty schools, on average,
and requiring districts to do so would have no major impact on resource allo-
cation (Dynarski & Kainz, 2016). In contrast, other studies identify large
numbers of districts that do not provide equitable teacher salary funding
across schools (e.g., Heuer & Stullich, 2011). Second, analyses of teacher
experience gaps, and factors associated with those gaps, shed light on poten-
tial policy levers for increasing equity in the distribution of experienced
teachers within school districts. State education agencies across the nation
are implementing plans for enhancing access to effective educators in high-
poverty schools. Meanwhile, several recent high-profile legal cases have
argued that state laws pertaining to teacher tenure create teacher experience
gaps, especially in large urban school districts (e.g., Vergara v. California;
Wright v. New York). This study is the first to directly explore the relationship
between district-level resources and school-level teacher resource gaps.
I link recently released data from the Office of Civil Rights to other
national datasets to measure “teacher resource gaps”—inequitable distribu-
tions of teacher salary spending, teacher experience, and teacher–student
ratios—for low-income students and students of color nationally. I then esti-
mate models that predict district-level teacher resource gaps based on district
characteristics. This second set of analyses focuses specifically on factors that
may allow districts to improve working conditions in their most difficult-to-
staff schools such as per-pupil district funding, teacher salaries, and expendi-
tures. The following research questions anchor the study:
Research Question 1: To what extent are teacher salary funding, teacher
experience, and student–teacher ratios equitably distributed within school
districts?
Research Question 2: To what extent is district per-student funding asso-
ciated with teacher resource gaps between high- and low-poverty schools
and between high- and low-minority schools within districts?
Findings show that, on average nationally, higher poverty schools have
less funding per student for teacher salaries, lower proportions of experi-
enced teachers, and fewer teachers per student compared with lower poverty
schools, even when controlling for district-level cost factors and comparing
schools within the same state. The same findings hold for students who iden-
tify as an underrepresented minority (Black, Latina/o, Native American,
Pacific Islander/Hawaiian native, or more than one race) and when compar-
ing Title I schools with non-Title I schools.
However, when comparing schools within the same district, a different
pattern emerges. Districts spend more on teacher salaries per student and
have more teachers per student in their higher poverty and higher minority
schools but have less experienced teachers compared with more advantaged
schools in the same district. In other words, districts make up for the fact that
their most novice teachers are concentrated in higher poverty schools by
increasing teacher–student ratios (lowering average class sizes) in those
schools, and as a result, spend more per student on teacher salaries in higher
need schools. These findings align with the federal “Comparability Rule,”
which requires districts to allocate equal teacher–student ratios in Title I and
non-Title I schools. However, these averages mask substantial variation in
district teacher resource gaps. Among districts with at least four elementary
schools, for example, 20% have large teacher salary gaps (i.e., allocate at
least 10% less teacher salary funding per student to their highest poverty
elementary schools compared with their lowest poverty elementary schools).
Finally, I find that greater levels of resources at the district level, as measured
by per-pupil funding, average teacher salaries, or expenditures per pupil rela-
tive to other districts in the same state or county, are all associated with
smaller teacher experience gaps and more equitable resource allocation pat-
terns within districts.
These findings have important policy implications at the federal, state,
and local level. Previously established federal regulations required dis-
tricts with schools in need of “comprehensive support and improvement”
and schools with low-performing subgroups (as determined in state
accountability plans) to address resource inequities, including both teacher
experience gaps and disparities in funding across schools.2 However, in
March 2017, Congress blocked all regulations for implementing ESSA
established under the Obama Administration. One week later, the DOE
released a new set of regulations that excludes the requirements that lower
performing districts take steps to address funding disparities across
schools. Removal of these federal regulations places greater responsibility
on state policymakers to monitor school district resource allocation.
Results from this study suggest that one in five districts across the country
currently has substantial resource inequities. At the same time, as others
have noted (e.g., Gordon, 2016), requiring that districts spend equal dol-
lars across schools could lead them to use forced teacher placements or
continue lowering student-staffing ratios in high-poverty schools without
addressing the underlying problem of high attrition in those schools.
Finally, findings suggest that one potential policy lever for helping dis-
tricts equalize spending across schools and improving disadvantaged stu-
dents’ access to experienced teachers may be through increasing state or
federal funding for school districts.
In the remainder of this article, I first explore past research on the alloca-
tion of funding across schools and distribution of teacher experience within
school districts. I then provide additional background on the changes included
in ESSA and the DOE’s process of “negotiated rulemaking.” The subsequent
sections describe the data and analytic approaches, findings, policy recom-
mendations, and conclusions.
Literature Review
The study contributes to two areas of research: The first pertains to within-
district resource allocation, and the second focuses specifically on equita-
ble access to more experienced or more effective teachers within school
districts.
Allocation of Funding Across Schools
Due to a lack of wide scale data on school expenditures, most school finance
studies compare educational expenditures across districts (e.g., Baker &
Corcoran, 2012; Reschovsky & Imazeki, 2001; Knight, 2017). Some
researchers have conducted in-depth case studies of school districts based on
their own collected data (e.g., Haxton, de los Reyes, Chambers, Levin, &
Cruz, 2012; Roza & Hill, 2004). The districts sampled in these studies gener-
ally allocate at least as many instructional staff per pupil in high-poverty (or
Title I) schools as in their low-poverty (or non-Title I) schools, thereby com-
plying with the Title I Comparability Rule. However, the clustering of less
experienced teachers in higher poverty schools creates an inequitable distri-
bution of per-student teacher salary funding. Given the limited number of
districts included in these analyses, however, the studies do not shed light on
how pervasive this problem is nationally, or what district characteristics are
associated with resource and teacher experience gaps between high- and low-
poverty schools.
The American Recovery and Reinvestment Act of 2009 included funding
to collect, for the first time, national data on school-level expenditures (all
prior national school finance data were district level or based on samples of
schools). The DOE subsequently released a report finding that about half of
higher poverty schools received less state and local funding than lower pov-
erty schools in the same district and grade level. Similarly, more than 40% of
Title I schools had lower state and local personnel expenditures per pupil than
non-Title I schools in the same district and grade level (Heuer & Stullich,
2011). These findings comport with other studies drawing on the same data
(Government Accountability Office, 2011; Hanna, Marchitello, & Brown,
2015; Spatig-Amerikaner, 2012). In each study, the authors argue that the
DOE should strengthen the Comparability requirement within Title I to
require districts to allocate equal state and local funding for teacher salaries
across Title I and non-Title I schools (or across high- and low-poverty
schools).
Two reports are based on the more recently released Civil Rights Data
Collection project, which collected school-level expenditure and teacher sal-
ary data for the 2013-2014 school year (Dynarski & Kainz, 2016; Office of
Civil Rights, 2016). The First Look report from the Office of Civil Rights
(2016) finds that across all schools nationally, low-income students and stu-
dents of color attend schools with less experienced and more chronically
absent teachers. This initial report did not compare schools within the same
district. A recent Brookings policy brief based on the same data (Dynarski &
Kainz, 2016) found that on average, districts allocate equal amounts of state
and local funding for teacher salaries in high- and low-poverty schools and in
Title I and in non-Title I schools. The authors conclude that mandating dis-
tricts to provide equal per-pupil funding for teachers across schools is like
“pushing on a string,” as it would not lead to any substantial changes in
within-district resource allocation (Dynarski & Kainz, 2016). However, that
study did not examine variation across districts in the relationship between
school demographics and school-level funding. That is, there may be large
numbers of districts with inequitable resource allocation patterns even if data
show that on average, districts are allocating teacher salary expenditures
evenly across high- and low-poverty schools.
More broadly, none of the prior studies have examined district character-
istics associated with teacher expenditure and experience gaps. For example,
higher poverty districts or those receiving less state and local funding may
have larger teacher experience gaps. Similarly, expenditure gaps may be con-
centrated in certain states. In sum, the simple average relationships presented
in Dynarski and Kainz (2016) or the summary statistics presented in past
analyses mask important variation in various teacher resource gaps that have
implications for federal, state, and local policymaking. For example, if dis-
tricts with larger teacher resource gaps are clustered in particular states that
share educational policies, then state policies may serve as a potential policy
lever. Conversely, if particular district characteristics are associated with
inequitable resource allocation (e.g., funding levels and teacher salaries,
urbanicity, enrollment size, or poverty level), then federal policy could be
refined to help increase resource allocation equity in specific types of dis-
tricts, or states could target interventions to districts that tend to have larger
teacher resource gaps.
Distribution of Teacher Experience Across Schools
As noted earlier, because teacher salaries are typically based on experience,
the distribution of teacher experience across schools within districts largely
determines how funding is distributed within districts. A large body of
research shows that teacher experience, aptitude, and qualifications are all
inequitably distributed across schools (Baker & Green, 2015; Clotfelter et al.,
2006; Darling-Hammond, 2000, 2004; Knight & Strunk, 2016; Lankford,
Loeb, & Wyckoff, 2002; Peske & Haycock, 2006). More recent studies show
that teacher effectiveness—as measured by value-added scores, which esti-
mate teachers’ contribution to student achievement gains on test scores—is
also distributed inequitably (e.g., Glazerman & Max, 2011; Isenberg et al.,
2013; Sass, Hannaway, Xu, Figlio, & Feng, 2010). Most of these studies use
detailed administrative data from one or several districts. A small number of
studies examine teacher experience/quality gaps in districts across the entire
state (Clotfelter et al., 2006; Goldhaber, Lavery, & Theobald, 2015;
Goldhaber, Quince, & Theobald, 2016), but no recent studies examine teacher
experience gaps in all states nationally.
Several factors contribute to the inequitable distribution of teacher experi-
ence within school districts (Boyd, Lankford, Loeb, & Wyckoff, 2005; Krieg,
Theobald, & Goldhaber, 2016; Ladd, 2011; Scafidi, Sjoquist, & Stinebrickner,
2007). Both the initial match of educators to schools and lower retention rates
in high-needs schools contribute to the teacher experience gap. Less support-
ive school administration, greater accountability pressure, and unprofessional
work environments are all associated with higher teacher attrition within
school districts (Boyd et al., 2011; Clotfelter, Ladd, Vigdor, & Dias, 2004;
Hanushek, Kain, & Rivkin, 2004; Johnson, Kraft, & Papay, 2012; Ladd,
2012). Some researchers contend that restrictive teacher contracts contribute
to inequitable access to experienced teachers (Anzia & Moe, 2014; Goldhaber
et al., 2015), although others find no such evidence (e.g., Cohen-Vogel, Feng,
& Osborne-Lampkin, 2013). Despite the many studies examining teacher
attrition and inequitable distributions of teacher experience within school dis-
tricts, few studies systematically assess district characteristics associated
with larger teacher experience gaps.
A federal program to reduce district-level teacher quality gaps requires
state education agencies to measure teacher quality gaps and identify poten-
tial root causes (State Plans to Ensure Equitable Access to Excellent
Educators, U.S. DOE, 2014). The initiative lists primarily school-level issues
as potential root causes of teacher quality gaps such as poor teacher recruit-
ment strategies, school working conditions, and school leadership, but does
not consider differences in state funding and teacher salary levels across dis-
tricts (see Baker & Weber, 2016, for further discussion). Because state legis-
latures govern the district finance system, state education agencies are limited
in their ability to alter teacher salaries or district funding levels.
At the same time, greater levels of funding or higher district teacher sala-
ries may contribute to the narrowing of within-district teacher resource gaps
for several reasons. Studies show, for example, that competitive salaries and
lower teacher–student ratios or class sizes help districts attract and retain
teachers (Eller, Doerfler, & Meier, 2000; Gritz & Theobald, 1996; Hanushek
et al., 2004; Imazeki, 2005; Loeb, Darling-Hammond, & Luczack, 2005;
Murnane & Olsen, 1989). Greater resource levels permit districts to provide
higher salaries and may allow district leaders to support school administra-
tors in fostering more attractive working environments in schools with high
teacher turnover. Principals can, in turn, provide more planning or collabora-
tive meeting time for teachers, lower student loads, and provide additional
opportunities for professional development—conditions that studies link to
positive working conditions (Baker & Weber, 2016; Johnson et al., 2012;
Ladd, 2011). Conversely, districts with lower resource levels may lose teach-
ers to other districts in the same labor market that have resource advantages.
Teacher experience gaps would expand if this form of attrition is concen-
trated in high-poverty or high-minority schools. In short, evidence suggests
the potential for district resources to play an important role in narrowing
teacher quality gaps within school districts.
Despite research and policy efforts to understand factors associated with
district-level teacher quality gaps, no prior studies have systematically
assessed the extent to which students have equitable access to experienced
teachers nationally, or whether district-level resources help districts equalize
teacher resources across high- and low-poverty schools or across schools
with higher concentrations of students of color. The current study builds upon
the prior work by measuring teacher resource gaps in school districts nation-
ally and exploring factors associated with these gaps. The study is particu-
larly timely, given the recent regulatory requirements established by the DOE
as part of the implementation of ESSA (Ujifusa & Klein, 2016)3 and the
ongoing federal and state policy debates surrounding educator quality gaps. I
next provide additional background on the policy context underlying this
study.
Policy Context
Over the past four decades, policymakers in the U.S. DOE have enacted vari-
ous regulations to encourage school districts to allocate funding equitably
across schools. Below, I provide some background on federal funding regula-
tion and discuss the changes made through ESSA. I then present summary
statistics for variables that measure “teacher resources” (average teacher sal-
ary spending per student, teacher experience, and teacher–student ratios)
across schools.
Policy Regulations in Title I and Changes Under ESSA
Following the passage of ESSA, the DOE conducted the process of “negoti-
ated rulemaking,” in which the Department writes the rules for how a law
will be implemented and constituencies affected by a law are nominated and
convene to provide input into specific regulations. Historically, the govern-
ment ensured that federal Title I funding reaches the intended students
through three requirements: (a) maintenance of effort, (b) comparability, and
(c) supplement, not supplant (SNS).4 Maintenance of effort implies that no
states or districts can decrease total or per-pupil funding by more than 10%
from the prior year. ESSA makes no major changes to the maintenance of
effort requirement.
Comparability requires districts to staff Title I schools with equal to or
more instructional staff per pupil compared with non-Title I schools (the
“Comparability Rule”). This policy is meant to ensure that districts will
allocate funding equitably across schools. However, in many cases, the
highest poverty schools within districts have, on average, the least experi-
enced teaching staff (Goldhaber et al., 2015). Because districts typically
use standardized salary schedules that offer higher compensation to more
experienced teachers, districts that use equal staffing ratios across schools
often allocate less teacher salary funding per student to the highest poverty
schools. The “comparability loophole” refers to the lack of any requirement
that districts spend equal dollars per student across schools (Hanna et al.,
2015; McClure, 2008; National School Board Association, 2013; Roza,
2005, 2008). While the DOE has used the enactment of ESSA to push for
equalized spending on teacher salaries across schools, ESSA makes no
changes to the statutory language of the comparability requirement, and the
DOE did not suggest changes to the methods in which districts meet the
comparability requirement.
Instead, the DOE pushed for equalized spending on teachers in their initial
“Notice of Proposed Rulemaking” (published May 31, 2016) through the
SNS requirement (Gordon, 2016). Under SNS, federal dollars may not be
used for purposes that state law already requires schools to spend money
on—federal dollars must supplement, not supplant, state and local dollars.5 The SNS regulation is
substantially changed under ESSA. The new law allows states and districts to
design their own methodology to determine whether the SNS requirement is
met. The goal of this change is to remove the burdensome reporting require-
ments under SNS, while maintaining some degree of accountability.
Following substantial opposition from members of Congress and local stake-
holders (Gordon & Reber, 2015), the DOE elected not to require equal spend-
ing across schools in its Final Regulations (U.S. DOE, 2016).6 However, as
part of the plans for state accountability (Section 200.21 of ESSA), the DOE
required districts undergoing state accountability-based improvement plans
to “address resource inequities,” including “disproportionate assignment of
ineffective, out-of-field, or inexperienced teachers and possible inequities
related to the per-pupil expenditures” (p. 293). In March 2017, Congress
employed the rarely used Congressional Review Act to block all of the previ-
ously established regulations for implementing ESSA. Later that month, the
DOE released a revised set of rules that excludes any regulations of district
teacher resource gaps described above.
Table 1. Summary Statistics of Teacher Resources (Mean, Interquartile Range, and Intraclass Correlation).

                                        Elementary              Middle school           High school
Teacher salary spending per student     US$3,176                US$3,003                US$3,243
                                        [US$2,485, US$3,528]    [US$2,424, US$3,295]    [US$2,373, US$3,615]
                                        .546                    .694                    .451
Number of teachers per 100 students     5.72                    5.61                    5.70
                                        [4.66, 6.58]            [4.69, 6.33]            [4.32, 6.42]
                                        .392                    .475                    .238
Average percentage of teachers with     81.0                    81.4                    83.4
>2 years of experience                  [76.7, 89.5]            [76.7, 90.2]            [79.5, 92.2]
                                        .083                    .098                    .210

Note. Each cell shows the mean, interquartile range, and intraclass correlation of teacher resources. Intraclass correlations show the extent to which observations are correlated within states (higher intraclass correlations imply that values within states are grouped such that some states generally have higher values, while other states generally have lower values). For elementary schools, the sample is limited to schools in districts with at least three other elementary schools in the same district (i.e., districts with at least four elementary schools). The same sample restrictions apply to analyses of middle schools and high schools. I limit the sample for this table, so that numbers are comparable with those presented in other tables; however, these summary statistics are similar when all schools are included.
In summary, in an effort to close the “comparability loophole,” the DOE
initially used changes to the SNS requirement to mandate that districts dem-
onstrate equal spending across schools. In their Final Regulations, the DOE
removed this requirement but included regulations under state accountability
plans that would force districts with lower performing schools to address
funding disparities and teacher experience gaps across schools. Finally, under
Secretary DeVos, the Department released a new set of regulations for ESSA
that excludes any intradistrict funding regulations, thereby placing greater
responsibility on state education agencies to resolve existing funding
disparities.
Variation in Teacher Resources
On average, elementary schools spend US$3,176 per student on teacher sala-
ries with state and local funds, staff schools with 5.7 teachers for each 100
students, and have about 81% of teachers with 3 or more years of experience.
These values are shown in Table 1. Variables are reported such that larger
numbers reflect a greater level of resources (5.7 teachers per 100 students
equate to 17.5 students per teacher). The interquartile range for elementary
schools shows, for example, that the bottom quartile of schools has fewer
than 77% of teachers with 3 or more years of experience, whereas the highest
quartile of schools has at least 90% experienced teachers (i.e., an interquartile
range of 10%-23% novice teachers).
The third row within each cell shows intraclass correlations, which mea-
sure the extent to which observations are correlated within states. Whether
there is variation in teacher resource variables and the level at which this
variation exists (i.e., state, district, school) has implications for the assess-
ment of teacher resource gaps. The higher the intraclass correlation, the more
states differ in their overall average level of instructional resources available
to students. For elementary schools, 54.6% of the variation in per-pupil
teacher salary expenditures is across states (and 45.4% is within states).
These figures align with prior research, showing that much of the differences
in district per-pupil expenditures are across states (Baker, 2014; Card &
Payne, 2002). In contrast, the average percentage of experienced teachers in
each school is less clustered within states, implying that the average teacher
experience in a particular school is relatively similar across states. The aver-
age teacher experience at a student’s school depends more on which district
and school the student attends in any given state, rather than the particular
state in which the student attends school. In the section below, I describe the
methods used to address our research questions.
Data and Analytic Approach
Data
The study draws on new school-level expenditure data collected by the Office
of Civil Rights for the 2013-2014 school year. These data are linked to
school-level data from NCES, district-level data from the U.S. Census
Bureau, and the district-level Education Comparable Wage Index (Taylor &
Fowler, 2006). National Center for Education Statistics (NCES) data include
information on student demographics, the school’s Title I status, district urba-
nicity, and enrollment size. U.S. Census Bureau data provide information on
school district revenues, expenditures, and poverty rates.
The Office of Civil Rights obtained teacher expenditure data from 86,802
schools. Importantly, districts reported actual teacher salary expenditures in
each school based on the actual salaries earned by teachers in those schools
(rather than simply costing teacher salary expenditures based on average dis-
trict salaries). I omit from the sample schools with missing student demo-
graphic data and schools that reported inaccurate teacher salary or other
resource data (e.g., reporting greater teacher salary expenditures than district
expenditures). The final analytic sample includes 14,447 districts and 81,424
schools, representing 89.7% of all currently operational, nonvirtual, and non-
state-operated campuses listed in NCES data.
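As a rough Python sketch of the kind of linkage and cleaning described above (file names, the join key, and column names are hypothetical placeholders; the article's actual processing is more involved):

# Hypothetical sketch: link OCR school-level expenditure data to NCES school
# records and drop schools with missing demographics or implausible reports.
import pandas as pd

ocr = pd.read_csv("ocr_schools_2013_14.csv")    # teacher salary expenditures by school
nces = pd.read_csv("nces_schools_2013_14.csv")  # demographics, Title I status, urbanicity

schools = ocr.merge(nces, on="nces_school_id", how="inner")

# Drop schools missing student demographic data
schools = schools.dropna(subset=["pct_frl", "pct_urm"])

# Drop schools whose reported teacher salary spending exceeds total district spending
schools = schools[schools["teacher_salary_exp"] <= schools["district_total_exp"]]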
Measuring Within-District Teacher Resource Gaps
I define teacher resource gaps as the difference in three measures of average
teacher resources between schools with high and low proportions of students
of color and students in poverty. Measures of teacher resources include (a) the per-
student teacher salary expenditures from state and local funding, (b) the num-
ber of teachers for each 100 students, and (c) the percentage of teachers with
3 or more years of experience at the school (shown in Table 1). I begin by
predicting the first measure of teacher resources, expenditures on teacher
salaries per student (labeled TRsd), based on the percentage of students eligi-
ble for free or reduced price lunch (FRL; labeled %FRLsd):
TRsd = β0 + β1%FRLsd + β2MSsd + β3HSsd + β4Othersd + β5(%FRLsd × MSsd) + β6(%FRLsd × HSsd) + β7(%FRLsd × Othersd) + ϕd + µsd. (1)
Equation 1 includes dummy variables for whether school s in district d is a
middle school (MSsd), a high school (HSsd), or a span/nongraded school
(Othersd). The model includes district fixed effects, labeled ϕd, which allow
for within-district comparisons (µsd is the residual, and standard errors are
clustered at the district level). Thus, the β1 coefficient provides an estimate of
the change in per-pupil teacher salary spending in elementary schools for a
100% increase in the percentage of FRL students. The coefficients for the
middle and high school interaction terms (β5 and β6) show whether the rela-
tionship between funding and school poverty rate differs for middle and high
schools (compared with elementary schools).
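A minimal Python sketch of a specification along the lines of Equation 1 is shown below, using ordinary least squares with district dummies as fixed effects and standard errors clustered by district. It is an assumption-laden illustration rather than the author's code: the DataFrame schools and the columns salary_per_student, pct_frl, level, and district_id are hypothetical, and with roughly 14,000 districts a routine that absorbs the fixed effects would be more practical than explicit dummies.

# Sketch of an Equation 1-style model: per-pupil teacher salary spending on
# %FRL, school-level dummies, %FRL-by-level interactions, and district fixed
# effects, with district-clustered standard errors. Names are hypothetical.
import statsmodels.formula.api as smf

formula = (
    "salary_per_student ~ pct_frl * C(level, Treatment(reference='elementary'))"
    " + C(district_id)"  # district fixed effects (the phi_d term)
)
fit = smf.ols(formula, data=schools).fit(
    cov_type="cluster", cov_kwds={"groups": schools["district_id"]}
)

# The main-effect coefficient on pct_frl plays the role of beta_1: the
# within-district change in per-pupil salary spending for a 0-to-1 increase
# in the FRL share at elementary schools.
print(fit.params["pct_frl"])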
Next, I substitute the outcome measure, per-pupil teacher salary spending,
with the ratio of teachers for each 100 students and the percentage of teachers
with 3 or more years of experience. Finally, I rerun each of the models this
time exchanging %FRL with the percentage of students at each school who
identify as an underrepresented minority. For each set of models, I begin with
a null model that includes only the variable of interest (%FRL, %URM, or
Title I school indicator), then add district and state covariates, state fixed
effects, and finally, the preferred model which includes district fixed effects
(Equation 1). This approach makes it possible to examine explicitly the pres-
ence of teacher resource gaps both across and within school districts.
I also examine teacher resource gaps by creating direct measures of
within-district resource gaps. To construct these variables, I first measure the
average teacher salary expenditures per student, average number of teachers
for each 100 students, and average percentage of teachers with more than 2
years of experience in the highest and lowest poverty quartiles of elementary
schools within each district (as measured by the %FRL at each school). I
construct the same measures for elementary schools at the highest and lowest
quartiles of percentage of student of color, and create each of these measures
separately for middle and high schools. To accurately measure upper and
lower quartiles of student demographic variables, I exclude districts with
fewer than four elementary schools for analyses of elementary school teacher
resource gaps and make similar sample restrictions for analyses of teacher
resource gaps in middle and high schools.
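The constructed gap measures could be approximated along the following lines for the elementary-school, %FRL case. This is a simplified sketch under hypothetical column names (district_id, level, pct_frl, salary_per_student); the article's quartile construction and sample restrictions may differ in detail.

# Sketch: within-district teacher-salary gap between the highest- and
# lowest-poverty quartiles of elementary schools, restricted to districts
# with at least four elementary schools. Column names are hypothetical.
import pandas as pd

elem = schools[schools["level"] == "elementary"].copy()
elem = elem.groupby("district_id").filter(lambda d: len(d) >= 4)

def frl_salary_gap(district: pd.DataFrame) -> float:
    q1, q3 = district["pct_frl"].quantile([0.25, 0.75])
    low_poverty = district[district["pct_frl"] <= q1]
    high_poverty = district[district["pct_frl"] >= q3]
    # Positive gap: the highest-poverty quartile receives less salary funding per student
    return low_poverty["salary_per_student"].mean() - high_poverty["salary_per_student"].mean()

gaps = elem.groupby("district_id").apply(frl_salary_gap).rename("frl_salary_gap")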
Assessing District Characteristics Associated with Teacher Resource Gaps. For the
second research question, I fit models predicting district-level teacher
resource gaps based on state and district characteristics. Models are run sepa-
rately for teacher resource gaps across elementary, middle, and high schools
(using the constructed measures of teacher resource gaps described above). I
run a series of ordinary least squares regressions predicting teacher resource
gaps, beginning with district covariates. The primary variables of interest are
district-level per-student state and local funding, expenditures, and teacher
salaries (I include these variables in separate models as they are highly cor-
related). Other district covariates include factors affecting the cost of educa-
tion: district poverty rate, district enrollment size, urbanicity, and the
education comparable wage index (Duncombe & Yinger, 2005; Taylor & Fowler,
2006). I also control for student segregation using a constructed measure of
economic and racial segregation (the difference between the top and bottom
quartile of schools in the %FRL or percentage of underrepresented minority
students, URM), as well as average teacher experience. I then add state
covariates, state fixed effects (removing the state covariates), and finally,
county fixed effects. As before, I run identical models examining teacher
resource gaps based on %FRL and %URM students at the school. Models
with county fixed effects allow for focusing on differences in resource levels
between districts in the same labor market.
As a secondary approach for addressing Research Question 2, I also run
school-level models similar to Equation 1, this time adding interactions
between district funding levels and the %FRL and (in separate models) URM
students. I include the same set of district covariates as before. For these
models, each teacher resource variable is mean centered within districts to
focus on within-district disparities across schools. Because I control for
district characteristics related to the cost of education, the coefficient for the
interaction between per-pupil funding and the percentage of students at the
school eligible for FRL shows whether increases in per-pupil funding are
associated with more equitable teacher resource allocation. Finally, to make
these results more interpretable, I estimate predicted values of teacher
resources across school-level %FRL and %URM, calculated at various levels
of district per-pupil funding.7
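A compact sketch of this secondary, school-level approach appears below: the outcome is mean-centered within districts, and district per-pupil funding is interacted with the school FRL share. Column names (district_funding_pp and the others used earlier) are hypothetical, and the district covariates from the article are omitted for brevity.

# Sketch: does district per-pupil funding moderate the within-district
# relationship between school poverty and teacher salary spending?
import pandas as pd
import statsmodels.formula.api as smf

# Mean-center the teacher-resource outcome within districts
schools["salary_centered"] = (
    schools["salary_per_student"]
    - schools.groupby("district_id")["salary_per_student"].transform("mean")
)

fit = smf.ols(
    "salary_centered ~ pct_frl * district_funding_pp",  # district covariates omitted here
    data=schools,
).fit(cov_type="cluster", cov_kwds={"groups": schools["district_id"]})

# Predicted within-district deviation for a high-poverty school (75% FRL)
# at two hypothetical district funding levels (US$ thousands per pupil)
grid = pd.DataFrame({"pct_frl": [0.75, 0.75], "district_funding_pp": [8, 14]})
print(fit.predict(grid))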
Findings
Results are presented in three sections: I first review findings for the first
research question on the extent to which teacher resources are equitably dis-
tributed. Next, I provide results for the second research question on what
factors are associated with teacher resource gaps. Finally, I discuss some
extensions and specification checks to support these findings.
Assessing Teacher Resource Gaps
Results for Research Question 1 are shown in Tables 2 and 3. Panel A of Table
2 shows results for per-student teacher salary expenditures, Panel B shows
teacher–pupil ratios, and Panel C reports findings for teacher experience. As
shown in the first row of column 1, on average nationally, elementary schools
receive US$9.59 less per student in state and local funding for teacher salaries
for each 1% increase in FRL students, equivalent to a US$959 per-pupil gap
between schools with 100% FRL and 0% FRL. That number reduces to about
US$264 when comparing schools in the same state and controlling for local
district cost factors (shown in column 3). Log models indicate an 11.1% gap
between 0% FRL schools and 100% FRL schools in the same state.8 Funding
for middle schools is even more inequitable, whereas funding for high schools
is slightly more equitable compared with elementary schools (Rows 2 and 3).
The final column of Table 2 shows results for models that include district fixed
effects, which allow for comparisons of schools within the same district. The
relationship between poverty rates and teacher expenditures reverses when
comparing schools in the same district—higher poverty schools receive more
funding for teacher salaries, on average, than lower poverty schools in the
same district (about US$272 more per student between 0% FRL schools and
100% FRL schools in the same district, or about 6.3% based on log models).
Results for teacher–pupil ratios follow a similar pattern (Panel B). Results are
similar for students who identify as an underrepresented minority and in com-
parisons between Title I and non-Title I schools (results shown in online
appendix Table A1). These findings suggest that on average, the disparities
Table 2. Regression Coefficients Predicting the School-Level Per-Pupil State and Local Expenditures on Teachers (Panel A), Teachers per 100 Students (Panel B), and Percentage of Teachers With 3 or More Years of Experience (Panel C).

                            (1)          (2)          (3)          (4)
Panel A: State and local expenditures on teacher salaries per student
  %FRL                      −959.3***    −633.6***    −263.5***    272.3***
                            (24.0)       (23.9)       (21.7)       (21.6)
  %FRL × Mid. school        −306.3***    −259.8***    −77.0†       −73.4*
                            (52.0)       (50.0)       (43.3)       (31.7)
  %FRL × High school        142.4**      85.3†        236.3***     323.9***
                            (53.3)       (51.5)       (44.7)       (34.0)
Panel B: Teachers per 100 students
  %FRL                      −0.565***    −0.350***    0.038        0.926***
                            (0.035)      (0.034)      (0.031)      (0.035)
  %FRL × Mid. school        0.047        −0.036       0.100        0.216***
                            (0.076)      (0.071)      (0.062)      (0.052)
  %FRL × High school        0.554***     0.484***     0.822***     0.939***
                            (0.078)      (0.073)      (0.064)      (0.055)
Panel C: Percentage of teachers with 3 or more years of experience
  %FRL                      −0.084***    −0.074***    −0.074***    −0.079***
                            (0.003)      (0.003)      (0.003)      (0.004)
  %FRL × Mid. school        −0.053***    −0.051***    −0.048***    −0.051***
                            (0.007)      (0.006)      (0.006)      (0.005)
  %FRL × High school        −0.047***    −0.040***    −0.042***    −0.042***
                            (0.007)      (0.006)      (0.006)      (0.006)
  District covariates                    X            X
  State FE                                            X
  District FE                                                      X

Note. Models also include the main effects of middle schools, high schools, and schools with other grade configurations (the reference category is elementary schools). District covariates include average cost of wage index, district size dummy variables, and dummy variables measuring population density. %FRL ranges from 0 to 1 and is multiplied by 100, so that coefficients are interpreted as the change associated with a 100% increase in %FRL. For example, Model 1 shows that a 1% increase in FRL students in elementary schools is associated with a US$9.59 decrease in funding per student (or about 0.30% given the mean per-pupil funding in elementary schools of US$3,176 as shown in Table 1). The percentage of teachers with 3 or more years of experience ranges from 0 to 100. As such, Model 1 shows that a 1% increase in FRL students in elementary schools is associated with a 0.084 percentage point decrease in the percentage of teachers with 3 or more years of experience (a 0.1% decrease in the average percentage of teachers with 3 or more years of experience, which is 81.4%). FRL = free or reduced price lunch; FE = fixed effects. *** p<.001, ** p<.01, * p<.05, † p<.10.
observed in Models 1 to 3 result primarily from inequitable funding across
states and across districts within states, not from inequities within districts, as
several studies have suggested (e.g., Roza & Hill, 2004).
Table 3. Summary Statistics of Within-District Teacher Resource Gaps (Mean, Interquartile Range, and Intraclass Correlation), Based on Poverty and Race/Ethnicity.

                     Elementary              Middle school           High school
Gap in teacher salary spending per student
  %FRL               −US$36                  −US$235                 −US$570
                     [−US$284, US$224]       [−US$402, US$94]        [−US$930, US$82]
                     .040                    .000                    .002
  %URM               US$21                   −US$246                 −US$432
                     [−US$207, US$238]       [−US$360, US$147]       [−US$849, US$187]
                     .023                    .000                    .020
Gap in number of teachers per 100 students
  %FRL               −0.33                   −0.55                   −0.92
                     [−0.78, 0.14]           [−0.97, −0.07]          [−1.76, 0.07]
                     .025                    .013                    .019
  %URM               −0.21                   −0.46                   −0.78
                     [−0.62, 0.17]           [−0.88, 0.04]           [−1.62, 0.11]
                     .034                    .039                    .018
Gap in % of teachers with >2 years of experience
  %FRL               0.0315                  0.0563                  0.0461
                     [−1.2, 8.4]             [0.5, 10.1]             [0.1, 10.5]
                     .009                    .000                    .000
  %URM               0.0423                  0.0618                  0.0553
                     [0.0, 8.2]              [0.7, 11.1]             [−0.3, 10.1]
                     .027                    .091                    .000

Note. Each cell shows the mean, interquartile range, and intraclass correlation of teacher resource gaps. Intraclass correlations show the extent to which observations are correlated within states. Teacher resource gaps are defined as the difference between the top and bottom quartile of schools in terms of the percentage of free or reduced price lunch students (%FRL) and the percentage of students at the school who identify as an underrepresented minority (%URM). Positive numbers indicate that schools with the highest %FRL or %URM in their district have fewer teacher resources. For example, on average, elementary schools in the highest quartile of FRL within their district receive US$36 more (a negative gap) per student in state and local funding for teacher salaries compared with schools in the lowest quartile of FRL in the same district. For comparisons across elementary schools, the sample is limited to districts with at least four elementary schools. The same sample restrictions apply to middle schools and high schools. FRL = free or reduced price lunch; URM = underrepresented minority.
In contrast, teacher experience gaps exist both across schools in the same
state and across schools in the same district. The coefficient of −0.079 for
elementary schools shown in the first row of Panel C, column 4 (Table 2)
suggests that comparing schools in the same district, a 100% increase in the
percentage of FRL students is associated with a 7.9 percentage point decrease
in the proportion of teachers with 3 or more years of experience. Elementary
schools with 75% FRL students have, on average, 79.8% of teachers with 3
or more years of experience (after adjusting for covariates), whereas lower
poverty elementary schools with 25% FRL have 83.8% of teachers with 3 or
more years of experience on average, a gap of about 4.0 percentage points. As
demonstrated from the coefficients for middle and high schools in column 4
of Panel C, experience gaps in middle and high schools are even greater.
Based on the predicted values, the within-district experience gaps for middle
and high schools are 6.5 and 6.1 percentage points, respectively (based on
comparisons between schools with 25% FRL and 75% FRL in the same dis-
trict). Teacher experience gaps are even greater for students of color (see
online appendix Table A2). These findings comport with statewide analyses
of teacher experience gaps (e.g., Goldhaber et al., 2015; Hanushek et al.,
2004)—low-income students and students of color disproportionately attend
schools with the least experienced teachers within school districts.
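As a quick back-of-the-envelope check, the predicted gaps quoted above follow directly from the column 4 coefficients in Table 2, re-expressed per 100-point change in %FRL and applied to the 50-point difference between 75% and 25% FRL schools (small discrepancies reflect rounding):

Elementary: 7.9 × 0.5 ≈ 4.0 percentage points
Middle school: (7.9 + 5.1) × 0.5 = 6.5 percentage points
High school: (7.9 + 4.2) × 0.5 ≈ 6.1 percentage points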
Given that districts actually spend more per student on teacher salaries in
their higher poverty schools by providing more teachers per student, a natural
question is whether districts are encouraged to do so through the federal
Comparability Rule. I address this question by comparing resource allocation
patterns in districts with at least one, but not all Title I schools to districts with
all Title I schools. Because the Comparability Rule regulates resource alloca-
tion between Title I and non-Title schools, districts with all Title I schools are
not affected by the Comparability Rule. Results described above are not sub-
stantially different when running analyses separately for districts with at least
one but not all Title I schools and for districts with all Title I schools (shown
in online appendix Table A2).9 That districts with all Title I schools provide
more teachers per student in their high-poverty schools suggests that on aver-
age, districts use equal or progressive staffing ratios across schools even
when not mandated to do so through the federal Comparability Rule (which
only regulates staffing ratios between Title I and non-Title I schools).
Table 3 shows similar results based on the constructed measures of within-
district teacher resource gaps (positive gaps represent inequitable distribu-
tions). The figures align with the findings reviewed above: On average,
high-poverty elementary schools receive slightly more teacher salary funding
per student (about US$36 for elementary schools or 5.3% of a standard devia-
tion), and have more teachers per student than low-poverty schools in the
Figure 1. Average funding per student for teacher salaries and average percentage
of novice teachers in the highest and lowest poverty elementary schools within
districts (largest 1,000 districts nationally).
Note. Each circle represents a school district, with size proportionate to district enrollment.
Dark gray circles indicate districts in which the highest poverty elementary schools receive
less funding per student for teacher salaries (left side) or have less experienced teachers
(right side) than the lowest poverty elementary schools in the same district. The sample is
restricted to the largest 1,000 districts in the country (those with at least approximately
8,000 students). Lowest and highest poverty elementary schools are those in the bottom and
top quartile of %FRL, respectively. FRL = free or reduced price lunch.
same district, while teacher experience is inequitably distributed within dis-
tricts.10 Although the teacher salary gap for %URM in elementary schools is
positive (US$21), this figure is only 2.8% of a standard deviation in the over-
all salary spending gap for %URM in elementary schools. Thus, the within-
district teacher salary gap in elementary schools for both %FRL and %URM
is very close to 0. The intraclass correlations for teacher resource gaps are
substantially smaller than those for teacher resources. Between 91% and 99%
of the variation in teacher resource gaps is within states (teacher resource
gaps are more related to which district a student attends within a given state
and less related to the state in which a student lives). This suggests that states
do not differ substantially in their average teacher resource gaps, and it may
be less likely that state policies would explain much of the variation in teacher
resource gaps. At the same time, states may have policies that differentially
affect districts, so a lack of substantial differences across state average
resource gaps does not necessarily imply that state policy does not serve an
important role.
Finally, Figure 1 plots results for teacher resource gaps based on the pro-
portion of low-income students in elementary schools. The x-axis for the
graph on the left shows average teacher salary expenditures per student in
elementary schools that serve the highest income students within their dis-
tricts, and the y-axis shows average teacher salary expenditures in elementary
schools that serve the lowest income students within their districts. Districts
that fall above the dotted line have more equitable allocation of teacher salary
expenditures, in that higher poverty schools receive more salary expenditures
per student. The graph on the right presents the same information for teacher
experience (the average percentage of teachers with 2 or fewer years of expe-
rience). The graphs illustrate that although districts allocate slightly more
funding per student for teacher salaries to their high-poverty schools, on
average, many districts have inequitable distributions. Similarly, while
teacher experience is inequitably distributed across schools within districts,
many districts have more experienced teachers in their highest poverty
schools. In the section below, I examine the extent to which district or state
characteristics are associated with these differences in teacher resource gaps
across districts.
Factors Predicting Teacher Resource Gaps
Table 4 shows regression coefficients predicting teacher resource gaps based
on district characteristics (all models are district-level regressions). The first
column includes state and district covariates, and the second and third col-
umns replace state covariates with state fixed effects, and then county fixed
effects. Columns 1 to 3 examine income-based teacher resource gaps, and
columns 4 to 6 repeat the same regressions for teacher resource gaps based on
race/ethnicity. Districts that receive higher state and local funding per student
have lower income-based teacher resource gaps than otherwise similar dis-
tricts in the same state or county. The coefficient in the first row of column 2
suggests that for each additional US$1,000 of state and local funding per
student relative to other districts in the same state (about 19% of a standard
deviation across all districts nationally), the within-district gap in per-pupil
teacher salary spending reduces by US$29 or about 4.3% of a standard devia-
tion. Models with county fixed effects (column 3, comparing districts in the
same county) suggest that the same increase in funding would lower teacher
salary gaps within districts by US$21 (3.1% of a standard deviation). Log
models show that a 10% increase in funding relative to other districts in the
same county reduces the teacher salary gap by 0.5%. Results are consistent
when I substitute the average state and local per-pupil funding with (a) the
district average per-pupil teacher salary spending or (b) overall expenditures
per student (run separately).11
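As a back-of-the-envelope check of these effect sizes (my arithmetic, not a figure reported in the article): the US$36 elementary salary gap described above equals 5.3% of a standard deviation, which implies
\[ \mathrm{SD} \approx 36 / 0.053 \approx \$679, \qquad 29/679 \approx 0.043 \ (4.3\%\ \mathrm{of\ a\ SD}), \qquad 21/679 \approx 0.031 \ (3.1\%\ \mathrm{of\ a\ SD}). \]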
Panel B of Table 4 shows that the same increase in state and local funding
lowers the gap in teacher–student ratios by 0.025 teachers per 100 students
when comparing districts in the same state and by 0.029 when comparing
districts in the same county (i.e., county fixed effects, column 3 of Panel B).
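For readers who want to reproduce this kind of specification, the following is a minimal sketch of the district-level regression just described. It is not the author's code: the file and column names are hypothetical stand-ins for the covariates listed in the Table 4 note, and the robust-standard-error choice is my assumption.

import pandas as pd
import statsmodels.formula.api as smf

# One row per district (hypothetical file and column names).
districts = pd.read_csv("district_gaps.csv")

# Panel A, column 2: within-district teacher salary gap regressed on state and
# local funding per pupil (US$1,000s), district covariates, and state fixed effects.
model = smf.ols(
    "salary_gap ~ funding_pp_1k + segregation_index + poverty_rate"
    " + C(urbanicity) + cost_of_labor + log_enrollment + pct_experienced"
    " + C(state_id)",
    data=districts,
).fit(cov_type="HC1")  # heteroskedasticity-robust SEs (an assumption)

print(model.params["funding_pp_1k"])  # Table 4 reports roughly -29 for this coefficient

# Swapping C(state_id) for C(county_id) gives the county fixed-effects models
# in columns 3 and 6.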
Table 4. Regression Coefficients Predicting District-Level Teacher Resource Gaps Across Elementary Schools.

                              Teacher resource gaps by           Teacher resource gaps by
                              school poverty rate                school % students of color
                              (1)        (2)        (3)          (4)        (5)        (6)
Panel A: Gap in teacher salary funding per student
State and local funding      −17.29**   −29.04***  −21.25*      −10.68†    −26.74***  −31.02**
  per pupil                   (5.78)     (6.96)    (10.07)       (6.01)     (7.28)     (9.95)
Segregation index            −14.87       5.47      −9.69      −145.94*   −100.65     −34.91
                             (74.43)    (74.36)   (105.12)      (69.99)    (71.07)   (103.20)
Poverty rate                 927.28***  881.37***  830.16***    555.23***  641.24***  588.27**
                            (142.49)   (144.32)   (210.44)     (141.83)   (144.81)   (203.33)
R2                              .050       .137       .560         .022       .106       .585
Panel B: Gap in number of teachers per 100 students
State and local funding       −0.011     −0.025*    −0.030*      −0.007     −0.025**   −0.021†
  per pupil                   (0.008)    (0.010)    (0.013)      (0.008)    (0.010)    (0.012)
Segregation index             −0.698***  −0.626***  −0.397**     −0.865***  −0.828***  −0.466***
                              (0.103)    (0.105)    (0.141)      (0.091)    (0.094)    (0.127)
Poverty rate                   1.640***   1.635***   1.076***     1.068***   1.130***   0.941***
                              (0.197)    (0.204)    (0.279)      (0.184)    (0.192)    (0.250)
R2                              .106       .149       .287         .080       .125       .649
Panel C: Gap in % of teachers with >2 years of experience
State and local funding       −0.005***  −0.004*    −0.006*      −0.001     −0.002      0.001
  per pupil                   (0.001)    (0.002)    (0.003)      (0.001)    (0.002)    (0.003)
Segregation index              0.054**    0.043*     0.057*       0.060***   0.066***   0.030
                              (0.017)    (0.018)    (0.024)      (0.014)    (0.014)    (0.021)
Poverty rate                  −0.033     −0.023     −0.040       −0.010      0.012      0.027
                              (0.033)    (0.035)    (0.051)      (0.028)    (0.030)    (0.045)
R2                              .086       .175       .706         .142       .225       .726
District covariates              X          X          X            X          X          X
State covariates                 X                                  X
State fixed effects                         X                                  X
County fixed effects                                   X                                  X
Note. The outcome for models in Panel A is the difference in the average per-pupil state and local spending
for teacher salaries between elementary schools in the top quartile of percentage of students eligible for free
or reduced price lunch (FRL) within the district and elementary schools in the bottom quartile of %FRL. Gaps
are positive when high-poverty schools receive fewer resources per student. The outcome for Panel B is the
gap in the number of teachers per 100 students between high- and low-poverty elementary schools within
districts. The outcome for Panel C is the teacher experience gap between high- and low-poverty elementary
schools within districts. The sample is restricted to districts with at least four elementary schools. Results
are consistent when comparing the top and bottom half of %FRL within districts (and expanding the sample
to districts with at least two elementary schools). Results are also consistent when I exchange state and
local funding per pupil with average teacher salaries per pupil or with district per-pupil expenditures.
State and local funding per pupil is expressed in US$1,000 units. Other district covariates include district
poverty rate, urbanicity, cost of labor index, log enrollment, and the average percentage of teachers with
more than 2 years of experience (results are consistent if I remove controls for teacher experience). State
covariates include average poverty rate across school districts within the state, average spending per pupil
across districts, and the relative strength of teacher unions according to the rankings shown in Winkler and
Zeehandelaar (2012). *** p<.001, ** p<.01, * p<.05, † p<.10.
As shown in Panel C, a US$1,000 increase in state and local funding is asso-
ciated with a 0.4 percentage point reduction in the teacher experience gap
when comparing districts in the same state (Model 2) and a 0.6 percentage
point reduction when comparing districts in the same county (Model 3).
Given the standard deviation of the income-based teacher experience gap of
9.2 percentage points (shown in Table 3), these coefficients equate to a reduc-
tion of 4.4% and 6.5% of a standard deviation, respectively. Log models
show that a 10% increase in funding relative to other districts in the same
county reduces the teacher experience gap by 1.3%. When I substitute state
and local funding per student with the district average teacher salary expen-
ditures per student and overall expenditures per student, coefficients are still
negative but not significant. Finally, the results shown in columns 4 to 6 sug-
gest that when examining teacher resource gaps based on race/ethnicity, coef-
ficients for per-pupil spending are similar for teacher salary spending (Panel
A) and teacher–student ratios (Panel B) but are small and statistically insig-
nificant for teacher experience gaps (Panel C).
Several other district characteristics have statistically significant relation-
ships with teacher resource gaps. Not surprisingly, economic and racial seg-
regation are associated with economic and racial teacher resource gaps, but
the direction of the relationship varies by resource. Columns 1 to 3 of Table
4 show that as the level of economic segregation increases, districts target
more teachers per student to their highest poverty schools but have larger
teacher experience gaps. Economic segregation is not related to gaps in
teacher salary spending. Similarly, racial segregation is unrelated to teacher
salary spending gaps (after adding state fixed effects), negatively correlated
with teacher-student ratio gaps, and positively correlated with teacher experi-
ence gaps. One explanation for these patterns is that greater segregation
within school districts causes larger teacher experience gaps and districts
respond by targeting smaller class sizes to their highest poverty and highest
minority schools.
The third row within each panel of Table 4 shows coefficients for district
poverty level (which estimate the relationship between district poverty level
and teacher resource gaps). Higher poverty districts have larger gaps in
teacher salary and teacher–student ratios compared with otherwise similar
lower poverty districts in the same state or county, but poverty rate is not
related to teacher experience gaps.12 This finding contradicts those reported
in Goldhaber et al. (2015), which found greater teacher experience gaps in
higher poverty districts (measured at the student level, rather than the school
level as in this study). However, I ran identical models for just Washington
State (the setting of the Goldhaber et al. study) and confirmed that in
Washington, district poverty rate is positively correlated with teacher
experience gaps, whereas that relationship reverses, on average, for the rest of the country. Urban districts have larger teacher experience gaps than otherwise similar suburban and rural districts in the same state, while district enrollment is generally unrelated to teacher resource gaps. Compared with districts in the same state, both teacher–student ratio gaps and teacher experience gaps increase with the cost of labor index. Finally, the average percentage of experienced teachers across all schools in a district is associated with both lower teacher salary expenditure gaps and lower teacher experience gaps. This finding likely suggests that districts with higher attrition are more likely to have larger teacher experience gaps compared with otherwise similar districts in the same state or county with lower attrition.
Figure 2. The relationship between elementary school poverty level and teacher resources per student (mean centered within districts) for otherwise similar districts receiving above state average funding and below state average funding per student.
Note. District funding is adjusted for factors affecting the cost of education, including the local cost of labor, district poverty rate, district size, and urbanicity. The sample is limited to elementary schools in districts with at least three other elementary schools. FRL = free or reduced price lunch.
Finally, Figure 2 shows how per-pupil funding is associated with the
extent to which districts target greater teacher resources to their highest pov-
erty schools. The graphs plot the relationship between %FRL and the amount
of (a) teacher salary expenditures per student, (b) teacher–student ratios, and
(c) average percentage of experienced teachers for districts that receive 15%
less funding than their state average (after controlling for observable differ-
ences in cost) and for districts that receive 15% more funding than their state
average. As described earlier, the average district allocates slightly more
teacher salary funding per student to its higher poverty schools. However,
the first graph of Figure 2 shows that districts with greater funding levels
allocate teacher salary expenditures even more progressively with respect to
school poverty rate, whereas districts receiving less state and local funding
allocate teacher salary expenditures regressively (as indicated by the down-
ward sloping dashed line in the graph on the left).
The next two graphs in Figure 2 provide evidence for why this relationship
exists. The middle graph shows that districts staff their higher poverty schools
with more teachers per student on average, but that relationship becomes
stronger as district funding increases. Similarly, the graph on the right shows
that teacher experience is inequitably distributed within school districts, on
average, but this relationship weakens with increases in district funding. That
is, greater district funding is associated with more equitable distributions of
teacher experience within school districts. Regression coefficients from these
models show that the differences in the slopes of these lines are significant at
conventional levels.13
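The following is a minimal sketch of the school-level interaction model that appears to underlie Figure 2 and Note 13. It is not the author's code: it omits the cost adjustments described in the figure note, the column names are hypothetical, and the clustered-error choice is my assumption; district funding is expressed as a percentage difference from the state mean, as in Note 7.

import pandas as pd
import statsmodels.formula.api as smf

# One row per elementary school (hypothetical file and column names).
schools = pd.read_csv("elementary_schools.csv")

# Outcome mean-centered within districts, as in the Figure 2 axes.
schools["salary_pp_centered"] = (
    schools["salary_per_pupil"]
    - schools.groupby("district_id")["salary_per_pupil"].transform("mean")
)

# District funding as a percentage difference from the mean across districts in
# the same state (see Note 7).
district_rows = schools.drop_duplicates("district_id")
state_mean = district_rows.groupby("state_id")["district_funding_pp"].mean()
schools["funding_pct_diff"] = (
    schools["district_funding_pp"] / schools["state_id"].map(state_mean) - 1
)

# Interaction of school %FRL with relative district funding (Note 13), with
# standard errors clustered by district (an assumption).
m = smf.ols("salary_pp_centered ~ pct_frl * funding_pct_diff", data=schools).fit(
    cov_type="cluster", cov_kwds={"groups": schools["district_id"]}
)

# Implied %FRL slopes for districts 15% above and 15% below their state average
# funding, the two lines plotted in Figure 2.
slope_high = m.params["pct_frl"] + 0.15 * m.params["pct_frl:funding_pct_diff"]
slope_low = m.params["pct_frl"] - 0.15 * m.params["pct_frl:funding_pct_diff"]
print(slope_high, slope_low)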
Specification Checks and Extensions
The primary finding that per-pupil funding is associated with lower teacher resource gaps could arise for a variety of reasons. Above, I argued that
more resources help districts maintain supportive working conditions in their
higher need schools. Alternatively, districts that receive more funding may
differ in some other way that is correlated with both district funding and
lower teacher resource gaps. For example, districts with greater funding lev-
els, relative to other districts in the same state or county, might be located in
more advantaged neighborhoods. If districts in more advantaged neighbor-
hoods attract a teaching workforce with greater preference for working in the
least advantaged schools within those districts, then changes in funding rates
would not alter teacher resource gaps, as the underlying causal mechanism
would be a third variable that is only correlated with funding rates and
resource gaps.
One way to examine the possibility of omitted variable bias is by estimat-
ing the bias-adjusted treatment effect as proposed by Oster (2016). This pro-
cedure compares changes in the coefficient of interest with changes in the
r-square between the null model (with no covariates) and the full model (with
all covariates).14 In each of the results shown in Table 4, adding covariates
increases the r-square substantially, suggesting that observable characteris-
tics explain much of the variation in teacher resource gaps. Moreover, in each
case (with the exception of the model predicting race/ethnicity teacher expe-
rience gaps), the coefficient for per-pupil funding increases as additional
covariates are added (the null model with no covariates is not shown). The
bias-adjusted treatment effect is therefore larger than the main effect for each
of the results shown in Table 4.
Given that the sample is limited to districts with at least four elementary
schools (for analyses of elementary teacher resource gaps), some of the fixed
effects estimates may not draw on a sufficient number of districts within
states or counties, potentially limiting the ability to observe within-state or
within-county comparisons. However, results are consistent when I limit the
sample to states with at least 40 districts that meet the sample requirement.
Results from county fixed effects are also consistent when I limit the sample
to only counties with at least 10 districts. The coefficient for per-pupil fund-
ing in county fixed effects models that predict teacher experience gaps
increases to 0.008 when the sample is limited to counties with at least 10
districts, implying that each additional US$1,000 per student relative to dis-
tricts in the same county is associated with a reduction in the teacher experi-
ence gap of 8.7% of a standard deviation.
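This effect size again follows directly from the 9.2 percentage point standard deviation reported in Table 3 (my arithmetic): \(0.008 / 0.092 \approx 0.087\), that is, 8.7% of a standard deviation.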
A second specification test examines the sensitivity of the results to the
measurement of teacher resource gaps. To do this, I created a second set of
teacher resource measures that compare the difference between teacher
resources in schools that fall in the top half of %FRL and %URM and those
that fall in the bottom half (rather than the top and bottom quartile). As before,
I make these calculations separately for elementary, middle, and high schools.
This approach makes it possible to include districts with only two elementary
schools (and for analyses of middle and high schools, districts with only two
of those school types). Results are similar when using this alternate measure,
although in some cases the magnitude of the coefficient for per-pupil funding
decreases slightly.
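To make the two measurement choices concrete, here is a minimal sketch (not the author's code; hypothetical column names) of computing within-district gaps under the quartile-based and half-based definitions:

import pandas as pd

schools = pd.read_csv("elementary_schools.csv")  # one row per school (hypothetical)

def within_district_gap(d, resource="salary_per_pupil", split="quartile"):
    """Low-poverty minus high-poverty mean of a resource within one district.

    Positive values mean the higher poverty schools receive less of the
    resource, matching the sign convention used in the article.
    """
    if split == "quartile":          # main measure: top vs. bottom quartile of %FRL
        hi = d[d["pct_frl"] >= d["pct_frl"].quantile(0.75)]
        lo = d[d["pct_frl"] <= d["pct_frl"].quantile(0.25)]
    else:                            # alternate measure: top vs. bottom half of %FRL
        hi = d[d["pct_frl"] >= d["pct_frl"].median()]
        lo = d[d["pct_frl"] < d["pct_frl"].median()]
    return lo[resource].mean() - hi[resource].mean()

# The quartile-based gap requires at least four elementary schools per district;
# the half-based version admits districts with only two.
gaps = (schools.groupby("district_id").filter(lambda d: len(d) >= 4)
               .groupby("district_id").apply(within_district_gap))
print(gaps.describe())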
Finally, I extend the analysis by exploring potential underlying mechanisms to explain the relationship between district funding and within-district
resource allocation equity. I examine a series of interaction effects between
per-pupil funding and district characteristics for models predicting teacher
resource gaps. Models with interactions between measures of segregation
and district funding suggest that district per-pupil funding has a stronger rela-
tionship with narrowing of teacher resource gaps in more segregated districts.
District-level resources may thus be even more important for closing teacher
resource gaps in districts that have more segregation across schools. However,
poverty rate, district percentage of students of color, urbanicity, and district size are all unrelated to the relationship between funding and teacher resource gaps (interactions are all insignificant). In other words, resources appear equally important in closing teacher resource gaps regardless of district
poverty, student demographics, urbanicity, and enrollment size.
Discussion
This study contributes to the understanding of educational inequality in a number
of ways. Consistent with prior analyses (e.g., Card & Payne, 2002), results
show that inequality in school resource allocation is primarily caused by dis-
parities across states and across districts within states, while funding is more
evenly distributed within school districts on average. This pattern holds
regardless of whether districts face federal regulation through the
Comparability Rule, suggesting that districts likely have alternate incentives
to allocate resources equitably across schools beyond compliance with fed-
eral policy. For example, given studies that show historically underserved
students benefit more from additional resources (Nye, Hedges, &
Konstantopoulos, 2002; Ronfeldt, Loeb, & Wyckoff, 2013), district leaders
may choose to target more resources to higher need schools. In addition,
many states regulate district resource allocation across schools (Odden &
Picus, 2014).
I also find that despite district efforts to equalize learning opportunities by
providing equitable funding across schools, novice teachers are clustered in
higher poverty and higher minority schools within districts nationally. While
districts typically have direct control over class size and teacher–pupil ratio
policies—and most staff higher poverty schools with more teachers per stu-
dent—districts have far less control over the distribution of teacher experi-
ence (Darling-Hammond, 2004; Loeb & Strunk, 2007). As a result, districts
allocate more funding to their higher poverty schools by lowering class sizes
rather than having more experienced teachers in those schools. At the same
time, these broad averages mask substantial variation in teacher resource
gaps. Many districts actually provide less funding per student for teacher
salaries in schools with the highest percentage of low-income students and
students of color, while other districts have equally or more experienced
teachers in their highest need schools. In contrast to teacher resources, most
of the variation in teacher resource gaps is across districts in the same state.
Finally, district inputs may explain some of the variation in the distribu-
tion of teacher resources. Results for the second research question show that
holding constant local cost factors, districts that receive more funding per
student, spend more, or offer higher salaries, relative to other districts in the
same state or county, have lower teacher salary expenditure gaps, lower
teacher–pupil ratio gaps, and in most cases, lower teacher experience gaps. In
districts that receive greater funding per student relative to otherwise similar
districts in the same state or county, teacher experience is more equitably
distributed across high- and low-poverty schools. In addition, partly by definition, less segregated districts have more equitable distributions of teacher
resources. In short, additional resources and less segregated schools both
appear to help districts allocate funding more equitably and close teacher
experience gaps.
These findings have important policy implications: First, the DOE requirement that lower performing districts, as determined by state accountability
plans, address across-school resource inequities would likely affect a substan-
tial number of school districts. Many districts already allocate teacher salary
expenditures equitably across schools. However, consider the simple differ-
ence in funding between Title I and non-Title I schools. The data show that
approximately 939 districts provide more teacher salary expenditures per stu-
dent to non-Title I elementary schools compared with their Title I elementary
schools (46% of the 2,030 districts with at least one Title I elementary school
and at least one non-Title I elementary school). A total of 7.0 million students
attend Title I elementary, middle, or high schools in districts where non-Title
I schools receive more per-pupil teacher salary funding, on average, than Title
I schools at the same grade level. The total expenditure required to equalize
average funding in Title I schools to that of non-Title I schools across all dis-
tricts nationally is US$3.3 billion (a 2.2% increase in total state and local
teacher salary spending nationally). Given standardized teacher salary sched-
ules, districts would most likely accomplish this by increasing teacher–student
ratios in Title I schools. Without additional revenues, however, districts would
need to implement forced teacher placements, which prior research shows are
largely unpopular and ineffective (Miller & Lee, 2014).
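For scale (my arithmetic from the two figures just cited, not a number reported in the study), the implied base is \(\$3.3\ \text{billion} / 0.022 \approx \$150\) billion in annual state and local teacher salary spending nationally.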
The findings suggest that districts’ ability to close teacher resource gaps
likely depends, in part, on the availability of resources relative to observa-
tionally similar districts in the same state or county. Policies that provide
more resources for underfunded school districts may help those districts nar-
row teacher quality gaps. Thus, federal efforts to provide more equitable
access to high-quality teachers may benefit from placing additional pressure
on state school finance systems. The federal government has exhibited sub-
stantial influence on state education agencies through competitive grants
(e.g., Race to the Top) and waivers from federal policies (Wrabel, Saultz,
Polikoff, McEachin, & Duque, 2018). The DOE has little direct influence
over state legislatures, which control school district funding levels. Most of
the external pressure placed on state legislatures to alter school funding has
historically come through state and federal judicial decisions. The federal
government’s focus on state education agencies and district human capital
policies may simply be a response to lack of authority over state legislatures.
However, identifying incentives for state legislatures to increase the equity
and overall level of funding across districts, perhaps by expanding Title I
funding through the Education Finance Incentive Grants (which currently
comprise 23% of Title I funding), may be an effective approach to improving
equitable access to high-quality teachers within districts.
A second policy implication relates to our understanding of the teacher labor
market and school district achievement gaps. Despite recent efforts to under-
stand the extent to which disadvantaged students have equitable access to
experienced teachers, federal and state policymakers have little knowledge of
the types of districts with larger teacher experience gaps. This gap in the litera-
ture is especially important, given recent findings showing that district-level
achievement gaps persist across the income distribution in low-, middle-, and
higher income districts nationally (Reardon, Kalogrides, & Shores, 2016). The
findings from this study contradict prior statewide analyses in Washington and
North Carolina, which found that higher poverty districts have wider teacher
experience gaps (Clotfelter, Ladd, & Vigdor, 2005; Goldhaber et al., 2015). I
find that while experience gaps exist across the distribution of district poverty
rates, teacher experience gaps are actually the smallest in the highest poverty
districts and largest in midpoverty districts. Teachers who choose to work in
high-poverty districts may also choose to work (and remain) in their district’s
highest poverty schools. The propensity for greater teacher retention in the
highest poverty schools of high-poverty districts (compared with the highest
poverty schools of low-poverty districts) could be seen as an untapped asset
for high-poverty districts that are struggling with teacher retention. In sum-
mary, the problem of inequitable access to experienced teachers is not limited
to, or even concentrated in, high-poverty districts.
Third, the study has implications related to efforts to address educational
inequality more broadly. Much of the recent policy debate surrounding inequitable access to effective teachers has centered on state laws related to
teacher tenure, transfer, and dismissal (e.g., Vergara v. California, Wright v.
New York, and others). The role of equitable and adequate resources across
school districts is notably absent from the discourse. This study demonstrates
the importance of district funding rates, especially relative to otherwise simi-
lar districts in the same state or county, in helping districts close teacher expe-
rience gaps. Although other factors related to human capital management
policies play a role, to be sure, district administrators' ability to provide stu-
dents with equitable learning opportunities across schools depends on their
ability to improve teaching and learning conditions in their highest need
schools, which likely requires a sufficient level of resources. Although money
is not a panacea for improving working conditions, sufficient resources may
be a necessary condition (Grubb, 2009).
Finally, the study adds to the policy discussion related to the growing trend of resegregation across schools by race/ethnicity and by family income levels (Frankenberg & Kotok, 2013). The national teacher experience gap found in this study adds to the potential problems associated with race- and income-based resegregation. In addition to increasing students' interactions with peers from other racial/ethnic or cultural backgrounds, desegregation necessarily reduces disparities in resources across schools (Mickelson &
Nkomo, 2012; Reardon & Firebaugh, 2002). Policymakers aiming to narrow
resource gaps between rich and poor schools and between schools serving
predominantly White students and students of color could focus on desegre-
gating schools in addition to reallocating resources more equitably.
Conclusion
As the DOE continues the process of negotiated rulemaking, federal policy-
makers will need to determine whether any federal regulations will govern
the SNS rule, or if the methodology for determining compliance will be left
up to individual states. The DOE’s ultimate goal of providing students with
equitable learning opportunities may be undermined by strict requirements
placed on districts to equalize funding across schools. States may benefit
from using targeted funding for high-needs districts as a way to reduce
within-district resource gaps. As this study demonstrates, despite the poten-
tially large impacts of the new federal education law, the greatest control over
the distribution of educational opportunity most likely rests with state legislatures, which determine human capital management policies, school funding levels, and funding allocation patterns.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research,
authorship, and/or publication of this article: This material is based upon work sup-
ported by the National Science Foundation under Grant No. 1661097 and the W. T.
Grant Foundation under Grant No. 186848.
Supplemental Material
Supplemental material is available for this article online.
Notes
1. Every Student Succeeds Act (ESSA) is the most recent reauthorization of the
Elementary and Secondary Education Act, initially passed during the 1960s War
on Poverty. The largest educational grant program is Title I, which targets fund-
ing to the nation’s most impoverished schools. Title I schools refer to schools
selected to receive Title I funding. Title I funding for higher poverty schools falls
under Title I, Part A. I refer to Title I, Part A simply as Title I throughout this arti-
cle. As part of the implementation of ESSA, the Department of Education (DOE)
is required to write the specific rules for how the law should be implemented in
states and districts, and how districts can use Title I funding.
2. These requirements are described in Section 200.21 of ESSA, with further docu-
mentation included in the DOE's final regulations (https://ed.gov/policy/elsec/leg/essa/essaaccountstplans1129).
3. As with other branches of the government, the DOE must go through a process
of “negotiated rulemaking,” in which the constituencies affected by a law are
nominated and convene to provide input into specific regulations for how a law
will be implemented. The final ESSA regulations approved under the Obama
administration appeared in the Federal Register on December 8, 2016 (Vol. 81,
No. 236). The DOE provided responses to comments on the initially proposed
regulations in a longer document posted to their website (https://ed.gov/policy/elsec/leg/essa/essaaccountstplans1129).
4. These three regulations are outlined, respectively, in ESEA Sections 1118(a)
and 8521, as amended by the ESSA; §§20 U.S.C. 6321(a), 7901, ESEA Section
1118(b); §§20 U.S.C. 6321(b), and ESEA Section 1118(c); §§20 U.S.C. 6321(c).
5. Determining how districts would fund schools in the absence of Title I is not
straightforward. In previous iterations of the federal education law (ESEA, later
reauthorized as the No Child Left Behind Act), schools demonstrated compli-
ance with SNS by reporting, on a cost-by-cost basis, what was purchased with
Title I funds. Past research has shown that because funds allocated to the core
instructional program are difficult to justify as “extra” or supplemental, most
schools choose instead to use Title I funding for external programs (Gordon,
2016). The result is that schools create fragmented budgets that allocate Title I
funding to ineffective add-on programs or special pull-out programs that remove
high-needs students from the mainstream curriculum (Gordon & Reber, 2015).
6. The second change to Title I funding regulation in ESSA relates to the schoolwide
provision. Under the No Child Left Behind Act, schools could use Title I funding for
schoolwide purposes if at least 40% of students qualified as low income. Schools
receiving Title I funding that did not have more than 40% of students qualify for
funding were required to spend the funding specifically on academically struggling
students. Schools could target funding by providing those students with, for exam-
ple, smaller class sizes, after-school programs, targeted professional development for their teachers, or some other targeted intervention. ESSA permits states to apply for
waivers that would allow schools to use Title I funding on schoolwide purposes,
regardless of whether those schools met the 40% threshold. While this change is
noteworthy, the analyses described in this article do not specifically address changes
to the schoolwide versus targeted assistance programs of Title I.
7. For these models, I convert state and local per-pupil funding to a percentage dif-
ference from the statewide mean. A value of 0.1 implies that a particular district
receives 10% more funding than the state average. Predicted values are estimated
using the margins command in Stata.
8. All log models are available from the author upon request.
9. Specifically, the coefficient for %FRL on models predicting per-student teacher
salary spending is 273.1 for districts with at least one, but not all Title I schools and
339.3 for districts with all Title I schools (a difference of 66.2, which is not statisti-
cally significant). Similarly, the coefficients for teachers per pupil for each group
are 0.978 and 0.789, respectively, and for models predicting teacher experience,
0.078 and 0.075, respectively. As noted in Table 1, only 531 districts have 0 Title I
schools, representing 880 schools and 0.7% of all students. Because these districts
are relatively small (with an average of 1.7 schools per district), I do not make
comparisons between high- and low-poverty schools within these districts.
10. Results for middle schools change slightly when the sample is limited to districts
with at least four middle schools. As shown in Table 2, Model 4 (which includes
district fixed effects), across the full sample, elementary and high schools have
slightly more equitable funding than middle schools (although districts allocate
greater per-student teacher salary funding to higher poverty schools at all three
school levels). However, when the sample is limited to districts with at least four
elementary, middle, or high schools, middle and high schools have slightly more
equitable funding distributions than elementary schools (and, as before, teacher
salary funding is equitably distributed at all three school levels). Table A4 in the
online appendix shows regression coefficients for models that limit the sample
to districts with at least four elementary, middle, or high schools.
11. These results are not shown but are available from the author upon request. I find
that a US$1,000 increase in district per-pupil expenditures is associated with
a reduction of 4.6% of a standard deviation of the teacher salary spending gap
when comparing districts in the same state (i.e., state fixed effects) and a 4.3%
reduction when comparing districts in the same county (county fixed effects). A
US$1,000 increase in the district average per-pupil teacher salaries lowers the
within-district teacher salary gap by 12% of a standard deviation when using
state fixed effects and by 15% of a standard deviation with county fixed effects.
12. I also ran the models described in Equation 1 (predicting teacher resources based
on student demographics) separately for high-poverty districts (above the 75th
percentile within the state), midpoverty districts (25th to 75th percentile of pov-
erty within the state), and low-poverty districts (below the 25th percentile of
district poverty rate). As expected, the coefficient for %FRL in models predicting
both teacher salaries per student and teacher–student ratios is largest in low-poverty districts but positive for all three groups. In contrast, the %FRL coefficient in
models predicting teacher experience is negative across the poverty distribution,
but teacher experience is least inequitably distributed in high-poverty districts
(and most inequitably distributed in midpoverty districts).
13. The coefficients for the interaction between per-pupil funding and the percent-
age of students at the school eligible for FRL are significant for all three teacher
resource variables.
14. Specifically, the bias-adjusted treatment effect is
$\beta^{*} = \beta_{\text{full}} - \delta \times (\beta_{\text{null}} - \beta_{\text{full}}) \times \frac{R_{\max} - R_{\text{full}}}{R_{\text{full}} - R_{\text{null}}}$,
where $R_{\max}$ is the expected r-square if all observable and unobservable covariates were included (assumed to be 1), $\delta$ is the proportion of selection bias due to observable versus unobservable factors, and the subscripts full and null refer to the $\beta$ and r-square for the full model, with all covariates, and the null model, with no covariates (Oster, 2016).
References
Anzia, S. F., & Moe, T. M. (2014). Focusing on fundamentals: A reply to Koski and
Horng. Educational Evaluation and Policy Analysis, 36, 120-123.
Baker, B. D. (2014). Evaluating the recession’s impact on state school finance systems.
Education Policy Analysis Archives, 22(91). doi:10.14507/epaa.v22n91.2014
Baker, B. D., & Corcoran, S. P. (2012). The stealth inequality of school funding: How
state and local school finance systems perpetuate inequitable student spending.
Washington, DC: Center for American Progress.
Baker, B. D., Farrie, D., Johnson, M., Luhm, T., & Sciarra, D. G. (2017). Is school
funding fair? A national report card, sixth edition. Newark, NJ: Education Law
Center.
Baker, B. D., & Green, P. (2015). Conceptions of equity and adequacy in school
finance. In H. F. Ladd & M. E. Goertz (Eds.), Handbook of research in education
finance and governance (pp. 311-332). New York, NY: Routledge.
Baker, B. D., & Weber, M. (2016). State school finance inequities and the limits of pur-
suing teacher equity through departmental regulation. Education Policy Analysis
Archives, 24(47). Retrieved from http://dx.doi.org/10.14507/epaa.v24.2230
Boyd, D., Grossman, P., Ing, M., Lankford, H., Loeb, S., & Wyckoff, J. (2011). The
influence of school administrators on teacher retention decisions. American
Educational Research Journal, 48, 303-333. doi:10.3102/0002831210380788
Boyd, D., Lankford, H., Loeb, S., & Wyckoff, J. (2005). The draw of home: How
teachers’ preferences for proximity disadvantage urban schools. Journal of
Policy Analysis and Management, 24, 113-132.
Card, D., & Payne, A. A. (2002). School finance reform, the distribution of school
spending, and the distribution of student test scores. Journal of Public Economics,
83, 49-82.
Clotfelter, C. T., Ladd, H. F., & Vigdor, J. L. (2005). Who teaches whom? Race and
the distribution of novice teachers. Economics of Education Review, 24, 377-392.
Clotfelter, C. T., Ladd, H. F., Vigdor, J. L., & Diaz, R. A. (2004). Do school account-
ability systems make it more difficult for low-performing schools to attract and
retain high-quality teachers? Journal of Policy Analysis and Management, 23(2),
251-271.
Clotfelter, C. T., Ladd, H. F., Vigdor, J. L., & Wheeler, J. (2006). High-poverty
schools and the distribution of teachers and principals. North Carolina Law
Review, 85, 1345-1379.
Cohen-Vogel, L., Feng, L., & Osborne-Lampkin, L. T. (2013). Seniority provisions
in collective bargaining agreements and the “Teacher Quality Gap”. Educational
Evaluation and Policy Analysis, 35(3), 324-343.
Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of
state policy evidence. Education Policy Analysis Archives, 8(1). Retrieved from
http://dx.doi.org/10.14507/epaa.v8n1.2000.
Darling-Hammond, L. (2004). Inequality and the right to learn: Access to qualified
teachers in California’s public schools. Teachers College Record, 106, 1936-
1966.
Duncombe, W. D., & Yinger, J. (2005). How much more does a disadvantaged stu-
dent cost? Economics of Education Review, 24(5), 513-532.
Dynarski, M., & Kainz, K. (2016). Requiring school districts to spend comparable
amounts on Title I schools is pushing on a string (Evidence Speaks Reports [Vol
1, #21]). Washington, DC: Brookings Institution.
Eller, W., Doerfler, C., & Meier, K. (2000). Teacher turnover in Texas: Problems
and prospects: A report of the Texas Educational Excellence Project. College
Station: Texas Educational Excellence Project.
Frankenberg, E., & Kotok, S. (2013). Demography and educational politics in the
suburban marketplace. Peabody Journal of Education, 88(1), 112-126.
Glazerman, S., & Max, J. (2011). Do low-income students have equal access to the
highest-performing teachers? (No. 6955). Mathematica Policy Research.
Goldhaber, D., Lavery, L., & Theobald, R. (2015). Uneven playing field? Assessing
the teacher quality gap between advantaged and disadvantaged students.
Educational Researcher, 44, 293-307.
Goldhaber, D., Quince, V., & Theobald, R. (2016). Has it always been this way?
Tracing the evolution of teacher quality gaps in U.S. public schools (CALDER
Working Paper No. 171). Washington, DC: CALDER.
Gordon, N. (2016, March). Increasing targeting, flexibility, and transparency in Title
I of the ESEA. Washington, DC: The Hamilton Project.
Gordon, N., & Reber, S. (2015). The quest for a targeted and effective Title I ESEA:
Challenges in designing and implementing fiscal compliance rules. RSF: The
Russell Sage Foundation Journal of the Social Sciences, 1, 129-147.
Government Accountability Office. (2011, January). Elementary and Secondary
Education Act potential effects of changing comparability requirements.
Washington, DC: Author.
Gritz, R. M., & Theobald, N. D. (1996). The effects of school district spending priori-
ties on length of stay in teaching. Journal of Human Resources, 31(3), 477-512.
Grubb, W. N. (2009). The money myth: School resources, outcomes, and equity. New
York, NY: Russell Sage Foundation.
Hanna, R., Marchitello, M., & Brown, C. (2015). Comparable but unequal: School fund-
ing disparities. Washington, DC: Center for American Progress. Retrieved from
https://www.americanprogress.org/issues/education/report/2015/03/11/107985/
comparable-but-unequal/
Hanushek, E. A., Kain, J. F., & Rivkin, S. G. (2004). Why public schools lose teach-
ers. Journal of Human Resources, 39(2), 326-354.
Haxton, C., de los Reyes, I. B., Chambers, J., Levin, J., & Cruz, L. (2012). A case
study of Title I Comparability in three California school districts. Washington,
DC: American Institutes for Research.
Heuer, R., & Stullich, S. (2011). Comparability of state and local expenditures among
schools within districts: A Report from the Study of School-Level Expenditures.
Washington, DC: U.S. Department of Education Office of Planning, Evaluation
and Policy Development Policy and Program Studies Service.
Imazeki, J. (2005). Teacher salaries and teacher attrition. Economics of Education
Review, 24(4), 431-449.
Isenberg, E., Max, J., Gleason, P., Potamites, L., Santillano, R., Hock, H., . . . Hansen,
M. (2013). Access to effective teaching for disadvantaged students. Washington,
DC: National Center for Education Evaluation and Regional Assistance, U.S.
Department of Education.
Johnson, S. M., Kraft, M. A., & Papay, J. P. (2012). How context matters in high-need
schools: The effects of teachers’ working conditions on their professional satis-
faction and their students’ achievement. Teachers College Record, 114, 1-39.
Knight, D. S. (2017). Are high-need school districts disproportionately impacted by
state funding cuts? School finance equity following the Great Recession. Journal
of Education Finance, 43, 169-194.
Knight, D. S., & Strunk, K. O. (2016). Who bears the cost of district funding cuts?
Equity implications of teacher layoffs. Educational Researcher, 45, 395-406.
Krieg, J. M., Theobald, R., & Goldhaber, D. (2016). A foot in the door: Exploring
the role of student teaching assignments in teachers’ initial job placements.
Educational Evaluation and Policy Analysis, 38, 364-388.
Ladd, H. F. (2011). Teachers’ perceptions of their working conditions: How predic-
tive of planned and actual teacher movement? Educational Evaluation and Policy
Analysis, 33, 235-261.
Ladd, H. F. (2012). Education and poverty: Confronting the evidence. Journal of
Policy Analysis and Management, 31, 203-227.
Lankford, H., Loeb, S., & Wyckoff, J. (2002). Teaching sorting and the plight of urban
schools: A descriptive analysis. Educational Evaluation and Policy Analysis, 24,
37-62.
Loeb, S., Darling-Hammond, L., & Luczak, J. (2005). How teaching conditions pre-
dict teacher turnover in California schools. Peabody Journal Education, 80(3),
44-70.
Loeb, S., & Strunk, K. (2007). Accountability and local control: Response to incentives
with and without authority over resource generation and allocation. Education
Finance and Policy, 2, 10-39.
McClure, P. (2008). The history of educational comparability in Title I of the
Elementary and Secondary Education Act of 1965. In J. Podesta & C. Brown
(Eds.), Ensuring equal opportunity in public education: How local school district
funding practices hurt disadvantaged students and what federal policy can do
about it (pp. 3-31). Washington, DC: Center for American Progress.
Mickelson, R. A., & Nkomo, M. (2012). Integrated schooling, life-course outcomes,
and social cohesion in multiethnic democratic societies. Review of Research in
Education, 36, 197-238.
Miller, L. J., & Lee, J. S. (2014). Policy barriers to school improvement: What’s real
and what’s imagined? Seattle, WA: Center on Reinventing Public Education.
Murnane, R. J., & Olsen, R. J. (1989). The effect of salaries and opportunity costs
on duration in teaching: Evidence from Michigan. Review of Economics and
Statistics, 71(2), 347-352.
National School Boards Association. (2013, June). The challenges and unintended
consequences of using expenditures to determine Title I comparability. Retrieved
from https://www.nsba.org/sites/default/files/reports/NSBA
Nye, B., Hedges, L. V., & Konstantopoulos, S. (2002). Do low-achieving students
benefit more from small classes? Evidence from the Tennessee class size experi-
ment. Educational Evaluation and Policy Analysis, 24, 201-217.
Odden, A. R., & Picus, L. O. (2014). School finance: A policy perspective. New York,
NY: McGraw Hill.
Office of Civil Rights. (2016, June). 2013-14 civil rights data collection: A first look.
Washington, DC: U.S. Department of Education, Office of Civil Rights.
Oster, E. (2016). Unobservable selection and coefficient stability: Theory and
evidence. Journal of Business & Economic Statistics. Advance online publication.
doi: 10.1080/07350015.2016.1227711
Peske, H. G., & Haycock, K. (2006). Teaching inequality: How poor and minority
students are shortchanged on teacher quality. Washington, DC: The Education
Trust.
Reardon, S. F. (2011). The widening academic achievement gap between the rich
and the poor: New evidence and possible explanations. In G. J. Duncan & R. J.
Murnane (Eds.), Whither opportunity (pp. 91-116). New York, NY: Russell Sage
Foundation.
Reardon, S. F., & Firebaugh, G. (2002). Measures of multigroup segregation.
Sociological Methodology, 32(1), 33-67.
Reardon, S. F., Kalogrides, D., & Shores, K. (2016). The geography of racial/ethnic
test score gaps (CEPA Working Paper No. 16-10). Retrieved from http://cepa.
stanford.edu/wp16-10
Reschovsky, A., & Imazeki, J. (2001). Achieving educational adequacy through
school finance reform. Journal of Education Finance, 26(4), 373-396.
Ronfeldt, M., Loeb, S., & Wyckoff, J. (2013). How teacher turnover harms student
achievement. American Educational Research Journal, 50(1), 4-36.
Roza, M. (2005). Strengthening Title I to help high-poverty schools: How Title I
funds fit into district allocation patterns. Seattle: Center on Reinventing Public
Education, University of Washington.
Roza, M. (2008). What if we closed the comparability loophole? Seattle: Center on
Reinventing Public Education, University of Washington.
Roza, M., & Hill, P. (2004). How within-district spending inequities help some
schools to fail. Brookings Papers on Education Policy, 201-227.
Sass, T. R., Hannaway, J., Xu, Z., Figlio, D. N., & Feng, L. (2010). Value added of
teachers in high-poverty schools and lower poverty schools. Journal of Urban
Economics, 72, 104-122.
Scafidi, B., Sjoquist, D. L., & Stinebrickner, T. R. (2007). Race, poverty, and teacher
mobility. Economics of Education Review, 26(2), 145-159.
Spatig-Amerikaner, A. (2012). Unequal education: Federal loophole enables lower
spending on students of color. Washington, DC: Center for American Progress.
Taylor, L. L., & Fowler, W. J., Jr. (2006). A comparable wage approach to geo-
graphic cost adjustment (Research and Development Report. NCES-2006-321).
Washington, DC: National Center for Education Statistics.
Ujifusa, A., & Klein, A. (2016, March). ESSA rulemaking: A guide to negotiations.
Education Week, 35(24), 14-15.
U.S. Department of Education. (2014). State plans to ensure equitable access to
excellent educators—Frequently asked questions. Retrieved from https://www.
gpo.gov/fdsys/pkg/FR-2014-11-10/pdf/2014-26456
U.S. Department of Education. (2016). Final regulations (Docket ID ED-2016-
OESE-0032). Retrieved from https://ed.gov/policy/elsec/leg/essa/essaaccountst-
plans1129
Wrabel, S. L., Saultz, A., Polikoff, M. S., McEachin, A., & Duque, M. (2018). The
politics of elementary and secondary education act waivers. Educational Policy,
32, 117-140. doi:10.1177/0895904816633048
Winkler, A. M., & Zeehandelaar, D. (2012). How strong are US teacher unions? A
state-by-state comparison. Washington, DC: Thomas B. Fordham Institute.
Author Biography
David S. Knight, PhD, is an Assistant Professor in the Department of Educational Leadership and Foundations, College of Education, and Associate Director of the
Center for Education Research and Policy Studies at the University of Texas at El
Paso. His research focuses on educator labor markets, cost-effectiveness analysis, and
school finance.
Standards and Accountability
EPS 9610
October 20, 2020
The Pieces
Content Standards -> Curriculum -> Instruction -> Learning -> Assessment -> Accountability
The question: How do these pieces fit together?
Era of Reform
A Nation at Risk (1983)
America 2000 (1991)
Goals 2000: Educate America Act (1994)
Improving America’s Schools Act (ESEA, 1994)
No Child Left Behind Act (2001)
Race to the Top (2009)
Every Student Succeeds Act (2015)
Nation at Risk
Urged adoption of tougher standards
Stronger graduation requirements
More rigorous curriculum
Higher salaries for teachers
Improved teacher training
America 2000
Children will start school “ready to learn”
National graduation rate of 90%
Mastery of five core subjects before leaving 4th, 8th, and 12th grades
Lead the world in math & science
All American adults to be literate and prepared for work and citizenship
Every school safe and drug free
Goals 2000/ESEA (1994)
Build upon goals set under America 2000
States were required to create a standards-based education system that would apply to all students
Standards in each grade
Tests to be administered to all poor children at least once in grades 3-5, 6-9, and 10-12.
NCLB (2001)
Increase accountability for student performance
Focus on what works
Reduce bureaucracy and increase flexibility
Empower parents
Theory of Action
What is the underlying “theory of action” for the following:
ESEA (1965)
NCLB (2001)
RTTT (2009)
ESSA (2016)
Theory of Action – NCLB
Holding schools and districts accountable for student performance
Concerned with the achievement gap – “the soft bigotry of low expectations”
Lack of funding and know-how in needy schools
Problems of poverty in society and larger culture
Dysfunctional school culture, lax system of governance, and no incentives for improving performance
https://www.c-span.org/video/?c4536967/president-bush
In 2000, the average African American 12th grader was reading and performing math at approximately the same level as the average white 8th grader
NCLB was primarily targeted toward the third explanation for the achievement gap – the idea that external pressure focused on student achievement would motivate local education systems to reform
Critics argued that the accountability provisions were unlikely to channel political pressure in constructive ways; that standardized tests are too crude to measure student achievement; that reliance on tests leads to "rigging" the system through the curriculum, test prep, and the tests themselves; that the law's expectation of 100% proficiency by 2014 is unrealistic, leading to inevitable failure; and that a coherent school system is needed to actually implement reform – i.e., low-performing schools do not have the capacity to respond to external pressures.
Hold schools accountable for the performance of student subgroups
Challenging content standards
State assessments that mirror those standards
Annually test students to measure competency in the “core subjects” of math and reading
Key components for accountability (NCLB)
1) Academic standards (e.g., GLCEs)
– state developed -> transition to Common Core
2) Achievement standards (e.g., M-STEP, Smarter Balanced)
– proficiency levels
3) Adequate Yearly Progress (AYP)
– disaggregated by race/ethnicity, low income, special ed, ELL
4) Sanctions
– assistance/plans, corrective action, restructuring
Race to the Top
1) College & career readiness
-> common core standards & assessments
2) Improving teacher effectiveness
-> reform teacher evaluation & compensation
3) Data systems to guide instruction
4) Turn around struggling schools
-> a) turnaround, b) restart, c) closure, or d) transformation
5) Promote innovation
-> support expansion of charter schools, STEM programs, etc.
Michigan Response to RTT
1) Reform failing schools (RSC 380.1280(c))
-> identification of 5% lowest achieving schools
-> under supervision of state reform officer
-> choose one of 4 approved reform models
-> if the plan does not work, the school may be placed in the EAA
2) Raised the dropout age to 18
3) Teacher/Administrator evaluation/compensation reform (RSC 380.1249 & 1250)
4) Change to Teacher Tenure Act
5) Expansion of charter/virtual schools
6) Change cut scores and move to Common Core
Common Core Standards
What are they?
Why do they exist?
What are the arguments for & against state adoption?
http://www.cc.com/video-clips/nemi1a/the-colbert-report-common-core-confusion
ESSA – Accountability
goodbye to the 100% proficiency goal
identify 5% lowest performing schools for “comprehensive support”
identify high schools with grad rate <= 67% for “comprehensive support”
schools w/low-performing subgroup must implement “targeted intervention”
must assess 95% of all students
https://www.youtube.com/watch?v=qgGzhL9rDJ4
ESSA - Assessments
Annual assessment of students in grades 3-8, and one in high school, in math & English/language arts
May be delivered in part in form of projects, portfolios, and extended-performance tasks
At high school level, may implement nationally recognized tests
May set target limit for aggregate amount of time spent on assessment administration
ESSA – Teachers & Leaders
Eliminates HQT provision (no minimum bar for entry into the profession)
Districts required to describe how they will identify and address disparities in teacher quality (effectiveness, experience, qualifications) across student subgroups
States must collect and publicly report on these disparities
State must create plans to reduce these disparities
Districts must have mechanisms to inform parents regarding teacher professional qualifications
States must use federal professional development (PD) funds to increase access to effective teachers for low-income students/students of color
Circling Back
Policy frameworks/models
How do we understand the process for passage of ESSA?
Policy Instruments
Accountability “mandates” remain, but why the softening?
Policy Implementation
More state and local discretion, more variation in implementation?
Policy Diffusion
Who will be the “thought” leaders?
Your district
Go to www.mischooldata.org
Click on “Parent Dashboard” and/or “Student Assessment”
Find your district
Explore…
Student proficiency on state assessments (district & individual schools – how much variation is there?)
Graduation rates/Dropout rates
How do these vary by student subgroup?
How do these compare to other districts/schools in the ISD and/or the state?
FINALLY – what does this say about the "quality" of your school, administrators, teachers, etc.?