Vol. 15, No. 4 - Education Next
A Journal of Opinion and Research About Education Policy
https://www.educationnext.org/journal/vol-15-no-04/

‘No-Racially-Disparate-Discipline’ Policies Opposed by Both Teachers and General Public
https://www.educationnext.org/no-racially-disparate-discipline-policies-opposed-by-both-teachers-and-general-public/
August 31, 2015

In 2014 the U.S. Department of Education and the U.S. Department of Justice, acting together, sent every school district a letter asking local officials to avoid racial bias when suspending or expelling students. District officials were advised that they risk legal action if school disciplinary policies have “a disparate impact, i.e., a disproportionate and unjustified effect on students of a particular race.” Even before this letter was mailed, the school district in Oakland, California, had settled charges of bias brought by the U.S. Department of Education by agreeing to “targeted reductions in the overall use of…suspensions for African American students, Latino students, and students receiving special education services.” In the Fall 2014 issue of Education Next, Richard Epstein, a University of Chicago law professor, criticized the departments’ action for forcing “school districts to comply with a substantive rule of dubious legal validity and practical soundness.” But in June 2015 the Supreme Court, in a Texas housing case, bolstered the departments’ position by holding that statistical evidence of “disparate impact” of policies across racial groups could be used as evidence of racial discrimination by a government agency. Joshua Dunn analyzes the ramifications of the decision in this issue (see “Disparate Impact Indeed,” legal beat, Fall 2015).

What does the public—and what do teachers—think of “no disparate impact” disciplinary policies? And what do they think of federal efforts to mandate them? To find out, the 2015 Education Next poll asked a nationally representative sample of some 4000 adults and an additional sample of some 700 teachers what they thought about policies ensuring equal rates of suspension and expulsion across racial and ethnic groups. The poll randomly divided both the public sample and the teacher sample into two groups. We asked members of one group whether they support or oppose “school district policies that prevent schools from expelling or suspending black and Hispanic students at higher rates than other students?” Half of the public opposes “no disparate impact” policies, while just 19 percent back the idea, with the remaining 32 percent taking no position one way or the other. That division of opinion is essentially the same among the second group, which was asked about a federal “no disparate impact” policy. By a large margin, the public opposes “no disparate impact” policies, regardless of whether the federal government or the local school district formulates them.

The division of opinion within the teaching profession is broadly similar to that of the public as a whole. No less than 59 percent of teachers oppose “no disparate impact” policies, while only 23 percent are in favor, with 18 percent of teachers taking the neutral position.

Higher levels of support for a “no disparate impact” policy are observed among African Americans—41 percent are in favor, while 23 percent oppose it. Only 31 percent of Hispanic respondents like the policy, however, with 44 percent in opposition.

Given the opposition among both teachers and the general public, one suspects that federal efforts to impose racially equal suspension and expulsion rates will be tempered by political realities. But if the civil rights attorneys inside the departments of justice and education are eager to press forward, and if school districts resist such pressures, the latter are likely to find a sympathetic audience both within and outside the teaching profession.

—Paul E. Peterson

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Peterson, P.E. (2015). “No-Racially-Disparate-Discipline” Policies Opposed by Both Teachers and General Public. Education Next, 15(4), 5.

Many Options in New Orleans Choice System
School characteristics vary widely
https://www.educationnext.org/many-options-new-orleans-choice-system/
August 4, 2015

As the school-choice movement accelerates across the country, several major cities—including Cincinnati, Detroit, Memphis, Milwaukee, and Washington, D.C.—are expanding their charter-school portfolios. Historically, communities have used charter schools not only in hopes of spurring traditional schools to improve but also to increase the variety of options available to families. If family preferences vary, and schools are given the autonomy to innovate and respond to market pressures, the theory holds, then we should expect a rich variety of schools to emerge.

A parent learns about KIPP New Orleans Schools during the annual New Orleans Schools Expo

But does this theory hold up in practice? First, it is not clear that parents do have distinct preferences when shopping for schools. If parents are uncertain about their child’s skills, they may play it safe and seek out a generic “basket” of school services. Second, charter schools always face the possibility of closure for low performance, and this threat may pressure the schools to avoid risk by imitating successful charter models. Government regulations might also inhibit a school’s capacity to offer a unique program. And finally, large charter management organizations (CMOs) may attempt to leverage economies of scale by replicating a single model at multiple schools. Conceivably, the market strategies of charter schools and large CMOs, rather than the needs of families and students, could drive the market, leading to more imitation and less diversity.

The city of New Orleans offers an ideal laboratory for examining how much true “choice” resides in a public school market. With 93 percent of its public-school students attending charter schools, New Orleans has the largest share of students enrolled in charters of any U.S. city. In some ways, the New Orleans system is unique, having been launched in the wake of a terrible disaster. However, the city’s student population—majority minority and mostly eligible for lunch subsidies—is typical of other urban centers where school reform is growing. Furthermore, the CMOs in New Orleans are supported by many of the same national foundations that support charter schools across the U.S., suggesting that similar patterns might emerge in other expanding charter markets. This study examines public schools in the Big Easy, investigating how—and how much—schools have differentiated themselves in a citywide school-choice system.

A New Approach

Previous studies have focused on the differences between charter schools and district schools, treating all charters within a community as essentially alike. In effect, these studies take a “top-down” approach, assuming that the governance of the school (charter versus district) determines the nature of the school. This approach may be appropriate where charter schools are few and their role is to fill service gaps. By contrast, our study adds a “bottom-up” approach, focusing not on governance but on salient school characteristics such as instructional hours, academic orientation, grade span, and extracurricular activities—factors that determine what students and families actually experience.

More than 30 different organizations operate charter schools in New Orleans.

We ask, are New Orleans schools homogeneous or varied? Is this answer different when we use the bottom-up approach based on school characteristics rather than the top-down analysis based on school governance? And finally, to what degree is the New Orleans school market composed of unique schools, multiple small segments of similar schools, and larger segments of similar schools?

Grouping schools by key characteristics, we find considerable differentiation among schools in New Orleans. Furthermore, schools operated by the same CMO or governed by the same agency are not necessarily similar to one another. In fact, the differences and similarities among schools appear to be somewhat independent of what organizations and agencies are in charge. Overall, we find that the market comprises a combination of large segments of similar schools and smaller segments of like institutions, but also some schools that are truly unique.

A Charter School “Laboratory”

In 2003, the Louisiana Department of Education (DOE) created the state-run Recovery School District (RSD) and empowered it to take over failing schools. At the time, only a handful of charter schools were operating in New Orleans. In the aftermath of Hurricane Katrina in 2005, city and state leaders used the RSD to take over all underperforming schools in the city. The local school board continued to manage a small number of high-performing schools, some of which have selective admissions.

Over the next several years, the RSD contracted out the schools under its control to CMOs, including both single-school operators and larger CMO networks. Policymakers also expanded school choice by eliminating geographic attendance zones for students: students were henceforth free to enter lotteries for any open-enrollment school in the city. Open-enrollment schools in New Orleans, as well as some selective-admissions schools, provide free transportation to students across the city.

The city’s charter schools are governed by three different agencies: the state’s Board of Elementary and Secondary Education (BESE), the RSD, and the Orleans Parish School Board (OPSB). The schools are managed by more than 30 school operators. This milieu creates the potential for a wide variety of schools to emerge in New Orleans. But as noted, regulations and accountability demands could stifle diversity. For example, the RSD and DOE have strict test-based requirements for charter contract renewal; 45 schools have been closed, merged, or turned over to other operators since 2007. Also, state regulations set restrictions in some areas but provide autonomy in others. For instance, DOE establishes standards for teacher preparation and certification, but charter schools are allowed to hire uncertified teachers. All schools, including charters, are required to participate in the statewide teacher-evaluation system. The net effects of these policies on school autonomy and differentiation are unclear.

Data

Our study focuses on the 2014–15 school year; by that time, 100 percent of the RSD schools were operated by CMOs (including the schools formerly run directly by the RSD). The OPSB continued to operate a small number of district schools and was expanding its own charter-school portfolio. A final small group of charter schools continued under direct supervision of the BESE.

Our data come from the spring 2014 edition of the New Orleans Parents’ Guide to Public Schools, published annually by a local nonprofit organization and distributed free of charge. This publication is the primary formal source of information for parents choosing schools in New Orleans.

From the guide, we selected eight characteristics that reflect decisions schools make when designing their programs:

· whether the school has selective admissions or open enrollment

· whether the school mission is “college prep”

· whether the school has a specific curricular theme (e.g., math, technology, or arts)

· number of school hours (annual total)

· number of grades served

· number of sports

· number of other extracurricular activities (“extras”)

· number of student support staff (nurses, therapists, social workers, etc.).

We also considered measures that are not in the Parents’ Guide. For example, we ran some analyses that included the number of suspensions and expulsions as an indicator of discipline policies; adding this measure did not change the clustering. For other categories, such as instructional approach, we did not have good measures.

Note that not all New Orleans schools have autonomy over their admissions policies. Selective admission is permitted at OPSB district and charter schools and at BESE charter schools, but not at RSD schools. Any school can attract or repel certain student populations through the menu of enhanced student-support services that it offers, however. For example, schools with an on-site speech therapist might be more attractive to parents of children with individualized education plans (IEPs) requiring these services. Our measure of the intensity of student support services may therefore help to identify open enrollment schools that target a distinct student population.

Methods

A potential student speaks with a Sylvanie Williams College Prep representative at the Schools Expo hosted by the Urban League of Greater New Orleans

The simplest version of the top-down theory predicts that the number of differentiated “clusters” in a public school market will correspond to the number of governing agencies. New Orleans has two authorizers: the OPSB and BESE. Both authorizers are also the governing agency for some of their schools. BESE also authorizes the schools governed by the RSD, which are low-performing schools taken over by the state. If these three governing agencies have singular “tastes” for certain kinds of schools, we should observe high similarity among schools that fall under the same agency, and differences across governing agencies. To see the extent to which schools differ across the three governing agencies, we first group the schools by governing agency and check for differences along the characteristics listed earlier.
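
The article does not specify which statistical test underlies these group comparisons. As a rough illustration only, the sketch below runs a one-way ANOVA of each continuous characteristic across the three governing agencies; the file name and column names are hypothetical stand-ins, not the study's data files.

```python
# Rough illustration, not the authors' code: compare continuous school
# characteristics across the three governing agencies (OPSB, RSD, BESE).
# The CSV file and column names below are hypothetical stand-ins.
import pandas as pd
from scipy import stats

schools = pd.read_csv("nola_schools_2014.csv")  # hypothetical data file

continuous = ["school_hours", "n_grades", "n_sports", "n_extras", "n_support_staff"]

for col in continuous:
    groups = [g[col].dropna() for _, g in schools.groupby("governing_agency")]
    f_stat, p_value = stats.f_oneway(*groups)  # one-way ANOVA across the three agencies
    print(f"{col}: F = {f_stat:.2f}, p = {p_value:.3f}")
```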

Next, we expand the top-down groupings to allow for additional differences between district-run schools, independent charter schools, and charter network schools (run by a CMO that operates multiple schools). This creates five groups of schools in New Orleans:

· OPSB district schools

· OPSB charter schools

· RSD charter network schools

· RSD independent charter schools

· BESE charter schools.

We then compare the results of this exercise to those obtained when we ignore governance arrangements and instead group schools from the bottom up, based on their characteristics alone. To do this we use cluster analysis, a statistical method designed to group objects of study (in this case, schools) based on similar qualities.

With cluster analysis, we can specify the number of groups that will be formed. To start, we first allow schools to form either three or five clusters to test whether similar governance predicts membership in the same cluster.

We then allow schools to form more clusters (up to 10) and select the best grouping based on meaningful within-group similarities and across-group differences. This strategy tests for the possibility of market segments that are not described in the top-down theory and allows us to identify schools with unique combinations of the measured characteristics (niche schools).
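
The authors do not publish their clustering code, and the exact algorithm and distance measure are not stated here. Purely to illustrate the bottom-up strategy, the sketch below runs k-means on standardized versions of the eight characteristics and cross-tabulates the resulting clusters against governing agency; file, column, and variable names are hypothetical.

```python
# Illustrative sketch only, not the study's implementation: cluster schools
# bottom-up on the eight characteristics, ignoring governance, then check
# whether clusters line up with governing agency. Names are hypothetical.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

schools = pd.read_csv("nola_schools_2014.csv")  # hypothetical data file

features = [
    "selective_admissions", "college_prep", "curricular_theme",  # yes/no coded 0/1
    "school_hours", "n_grades", "n_sports", "n_extras", "n_support_staff",
]
X = StandardScaler().fit_transform(schools[features])  # put all variables on one scale

for k in (3, 5, 10):  # mirror the 3-, 5-, and up-to-10-cluster groupings in the text
    schools[f"cluster_k{k}"] = KMeans(n_clusters=k, n_init=20, random_state=0).fit_predict(X)
    # If the top-down theory held, each cluster would map onto a single agency.
    print(pd.crosstab(schools[f"cluster_k{k}"], schools["governing_agency"]), "\n")
```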

Results

We focus separately on 56 elementary schools and 22 high schools included in the Parents’ Guide. New Orleans school operators can select each school’s grade span, and it is quite common for elementary schools to serve grades K–8 and uncommon to have schools with just middle school grades (5–8). Therefore, we define elementary schools as those with any grade K–4, and high schools as those with any grade 9–12. A small number of schools that serve only middle-school grades were not included in this study.

On average, elementary schools enroll 540 students; 86 percent of students are eligible for free or reduced-price lunch, and 87 percent are black. Ninety-five percent of elementary schools are charter schools, 50 percent have a college-prep mission, 43 percent have a specific curricular theme, and 9 percent use selective admissions. For high schools, average enrollment is 550 students, with 78 percent of students eligible for free or reduced-price lunch; 84 percent of students are black. Ninety-six percent of high schools are charter schools, 52 percent have a college-prep mission, 57 percent have a specific curricular theme, and 22 percent use selective admissions.

Grouping from the Top Down: Elementary Schools. When we examine school characteristics by governing agency alone (three groups), we find that the groups differ by a statistically significant amount on only one of the five continuous variables we examine. Specifically, schools governed by the RSD have more school hours. When we analyze school characteristics by governing agency and school type (five groups), we find no statistically significant differences on these same variables. We do observe some modest differences across groups in the average values of the three yes/no variables (open enrollment or selective admissions, curricular theme or not, and college prep or not). Overall, however, results from the top-down approach suggest that governance arrangements do not correlate with notable differences in school characteristics.

High Schools. When we group the 22 high schools by governing agency (three groups) and by governing agency and school type (five groups), we find in both cases that the clusters differ on the number of sports offered. In the three-group structure, clusters also differ on the number of student support staff, while in the five-group structure, they differ on grade span.

Grouping from the Bottom Up: Elementary Schools. Despite modest differences between elementary schools grouped by governing agency and school type, we find that no two individual schools are identical on all eight variables. The two schools that are most similar to one another overall are a pair that includes an OPSB district school and an RSD charter network school. The second most similar pair includes an RSD charter network school and an OPSB charter school. These groupings provide initial evidence that the most similar schools across all characteristics do not share the same governing agency and type.
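
The article does not say how "most similar" is measured. One simple possibility, sketched below, is Euclidean distance on the standardized characteristics from the clustering sketch above; the school-name column is hypothetical.

```python
# Illustrative sketch: locate the two schools closest together on all eight
# standardized characteristics (X and `schools` come from the sketch above).
import numpy as np
from scipy.spatial.distance import pdist, squareform

D = squareform(pdist(X, metric="euclidean"))  # pairwise school-to-school distances
np.fill_diagonal(D, np.inf)                   # ignore each school's distance to itself
i, j = np.unravel_index(np.argmin(D), D.shape)
print("Most similar pair:",
      schools.iloc[i]["school_name"], "and", schools.iloc[j]["school_name"])
```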

We first cluster the schools into three groups to further test the top-down assumption that school characteristics will be roughly aligned with the governing agency—the OPSB, the RSD, or BESE. In other words, we let the data determine the school groupings that produce the highest degree of similarity within groups and see whether the schools within those groups tend to have the same governance arrangement.

Figure 1 shows that schools can exhibit similar characteristics but not share a governing agency. For example, cluster 1 is composed of schools that share a college-preparatory mission but represent two of three governing agencies, although most (28 of 38) are RSD charter network schools. Thirteen schools that share enough similarities to form a second cluster also include RSD and OPSB schools, but most are RSD independent charter schools. The third cluster of five schools includes three OPSB and two BESE charters that have selective admissions and a specific curricular theme.

Statistical analysis suggests there are no meaningful differences described by this grouping other than the differences in admissions, theme, and mission mentioned above. Overall, these results suggest that the RSD governs schools that are more similar to one another than those governed by the OPSB. But we are able to reject the hypothesis of the top-down theory that the governing agency predicts either similarities within school groups or differences across school groups; we also find evidence of differentiation within school operators.

We next test five groupings and again find that schools do not cluster by the combination of governing agency and school type (results not shown). The first cluster includes 19 schools, all but two of which are RSD charter network schools. However, RSD charter network schools are also found in three of the other four clusters. The other governing agency-school type combinations also appear in multiple clusters, except for the two selective-admissions BESE charter schools that form a cluster with three selective-admissions OPSB charters. Six of nine RSD independent charter schools are grouped in one cluster, but that cluster also contains RSD charter network schools and OPSB charter schools. OPSB charter schools appear in four of the five clusters.

Interestingly, even the CMO does not frequently predict cluster membership. While some larger CMOs have all their schools in a single cluster, KIPP, ReNEW, Algiers Charter School Association, and other charter networks have schools in multiple clusters.

Finally, the three OPSB district elementary schools, which might be expected to be the most similar because they are the only New Orleans schools operated by a government bureaucracy, also appear in multiple clusters. OPSB district schools are clustered with schools with several other governance arrangements, including RSD charter network schools and RSD independent charter schools.

Next we examine how the five clusters differ on the five continuous variables. The groups are not statistically different in extracurricular activities, sports, student support staff, or grade span. The groups do vary across school hours, with the two clusters composed of college-prep elementary schools reporting more hours than other clusters. Clustering to five groups explains only 38 percent of total variance in continuous clustering variables.

Overall, these results suggest that grouping schools by governing agency and type does not capture the market structure in New Orleans. Although we observe that RSD-governed schools tend to cluster together, there are multiple governance-type combinations represented in each cluster, and a CMO can have schools in up to three different clusters. Thus, the top-down theory of three or five groups appears to be inadequate to identify meaningful differences across schools.

We next characterize the market by allowing the clusters to emerge from the data. When we allow 10 groups to form, we find that the groups are statistically different along all continuous clustering variables, except the number of student support staff (see Figure 2).

Cluster 1 contains 19 RSD charter schools with more-than-average school hours and a college-prep mission. Cluster 2 contains two OPSB charter schools and 10 RSD charter schools. The schools in this second cluster have near-average values for all continuous variables, and they do not have a curricular theme or college-prep mission.

Six other clusters capture additional nuance in the supply of schools. Each of these six clusters contains at least one school with a curricular theme, and three of the six contain only schools that also have a college-prep mission. However, the clusters vary across all the continuous variables except student support staff.

Finally, two elementary schools appear as outliers in the analysis, suggesting they occupy niches in the market. The first is a selective-admissions OPSB charter school with a curricular theme, fewer school hours, more extracurricular activities and sports, and a large grade span. The second is an RSD charter school with no curricular theme or college-prep mission, but higher than average numbers of extracurriculars, sports, and student support staff.

Overall, this more-flexible clustering strategy creates groupings that are more similar within group and more different across groups than clustering based on governing agency and school type. We observe that a single CMO can manage schools that differ from each other, and that similar schools can be governed by different agencies and managed by different organizations. We find that RSD schools are more likely to cluster together than are OPSB schools, which often occupy smaller market segments. Finally, we see that elementary schools cluster in groups with varied levels of school characteristics—except for student support staff. We find little evidence that any New Orleans elementary schools differentiate themselves by offering more in-house support staff than other schools.

High Schools. The 22 New Orleans high schools include 12 RSD, 7 OPSB, and 3 BESE schools. There are 10 RSD charter network high schools, 2 RSD independent charter high schools, 5 OPSB charter high schools, 2 OPSB district high schools, and 3 BESE charter high schools. These groups vary statistically in the number of sports offered, the number of student support staff, and grade span.

First, we use the school characteristics data to form three clusters. Compared to elementary schools, high schools appear to have more differences among them, and the maximum degree of difference is also greater. Overall, these findings do not support the assertion that schools vary by governing agency (see Figure 3). Re-clustering into five groups also did not create groupings that reflect the combination of governing agency and school type (results not shown).

Using the flexible clustering strategy, we see a mixture of governing agency and school type across four clusters, with six outliers (see Figure 4). The largest cluster includes six high schools—one OPSB school and five RSD charter network schools run by four different CMOs. The second cluster is also diverse, with five total schools from three governing agency and type combinations. We also observe charter network schools and the OPSB schools in different clusters. The six outlier high schools include one OPSB district school, two OPSB charters, two BESE charters, and one RSD independent charter school.

Five of the six niche schools are selective-admissions schools. Five of them have a curricular theme (such as science and math, intercultural studies, or performing arts), and two have a college-prep mission. Outliers tend to have more extracurricular activities, shorter school hours, and larger grade spans. Overall, outliers are much more common among high schools, and every selective-admissions high school has its own niche.

Conclusion

New Orleans presents the opportunity to study an urban school system where charter schools comprise more than 90 percent of school campuses and total student enrollment. We find that school characteristics vary within both governance arrangements and individual CMOs, and that the most similar schools are often governed by different agencies and have different managing organizations. We also found a greater degree of market differentiation than would be expected from a top-down approach. Our methods reveal 10 distinct types of elementary schools comprising large segments of similar schools, small segments of two to three schools, and niche schools. Among high schools, we found four segments (both large and small) and a larger number of niche schools. This may reflect more specialized interests among older students.

Charter schools governed by the RSD are often, but not always, similar to each other, with emphasis on college-prep missions and more school hours. It is unclear if this reflects governing agency preferences or the fact that RSD schools are, by definition, previously low performing and therefore may be more constrained by test-based accountability. Moreover, schools within the same CMO network are often, but not always, similar to each other. Amid this similarity, we also find that within the RSD, CMOs can and do create diversified portfolios of schools.

Schools outside of the RSD are more likely to be diverse. For example, OPSB charter schools differ considerably from each other and often serve a market niche. Particularly at the high-school level, charter schools governed by the OPSB or BESE create niche markets with a curricular theme, while different CMOs come together to form a segment of similar schools, often sharing a college-prep mission. In the New Orleans context, this suggests that governing agencies may be more willing to provide unique offerings when they manage higher-performing schools with little risk of sanctions related to standardized testing. Furthermore, uniqueness often comes with selective admissions, which suggests that access to diverse school choices is greater for students who through ability or parent involvement can navigate a complex system of admissions rules and testing.

The small number of schools that remain under the bureaucratic control of the OPSB play a notable role in the school market. These schools appear in smaller clusters or stand alone as different from most charter schools. They also do not typically cluster with one another, suggesting that even a bureaucratic system can offer diverse options in a school-choice system.

Our study indicates that New Orleans parents can choose from among schools that vary on several key dimensions, and that these differences are not necessarily driven by the decisions of charter governing agencies or large CMOs. Even within large CMOs, we found significant variation among schools; for example, the expansion of KIPP in New Orleans to manage five elementary campuses did not result in five schools with identical characteristics.

Finally, we note that much of the market differentiation in New Orleans comes from schools authorized or run by either the Orleans Parish School Board or the Board of Elementary and Secondary Education. Having multiple governing agencies may be important for market differentiation.

As more cities expand school choice, we will have the opportunity to compare New Orleans to other markets to see how factors such as economies of scale, regulations, and demand influence the amount and quality of differentiation. We will also be able to observe the evolution of public school markets over time, to see if competitive pressures result in more differentiation or a drift toward imitation—and how such trends affect student outcomes.

Paula Arce-Trigatti is postdoctoral fellow in economics at Tulane University and the Education Research Alliance for New Orleans. Douglas N. Harris is professor of economics at Tulane University and founder and director of ERA-New Orleans. Huriya Jabbar is assistant professor of education policy at the University of Texas at Austin and research associate at ERA-New Orleans. Jane Arnold Lincove is assistant research professor of economics at Tulane University and associate director of ERA-New Orleans.

For more information on New Orleans, read “Good News for New Orleans: Early evidence shows reforms lifting student achievement,” by Douglas N. Harris, and “The New Orleans OneApp: Centralized enrollment matches students and schools of choice,” by Douglas N. Harris, Jon Valant, and Betheny Gross.

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Arce-Trigatti, P., Harris, D.N., Jabbar, H., and Lincove, J.A. (2015). Many Options in New Orleans Choice System: School characteristics vary widely. Education Next, 15(4), 25-33.

The New Orleans OneApp
Centralized enrollment matches students and schools of choice
https://www.educationnext.org/new-orleans-oneapp/
August 4, 2015

In most of the U.S., the process for assigning children to public schools is straightforward: take a student’s home address, determine which school serves that address, and assign the student accordingly. However, states and cities are increasingly providing families with school choices. A key question facing policymakers is exactly how to place students in schools in the absence of residential school assignment.

In the immediate aftermath of Hurricane Katrina, New Orleans families could choose from an assortment of charter, magnet, and traditional public schools. The city initially took a decentralized approach to choice, letting families submit an application to each school individually and allowing schools to manage their own enrollment processes. This approach proved burdensome for parents, who had to navigate multiple application deadlines, forms, and requirements. Moreover, the system lacked a mechanism for efficiently matching students to schools and ensuring fair and transparent enrollment practices. The city has since upped the ante with an unprecedented degree of school choice and a highly sophisticated, centralized approach to school assignment.

Today, New Orleans families can apply to 89 percent of the city’s public schools by ranking their preferred schools on a single application known as the OneApp (see Figure 1). The city no longer assigns a default school based on students’ home addresses. Instead, a computer algorithm matches students to schools based on families’ ranked requests, schools’ admission priorities, and seat availability. Experience with the OneApp in New Orleans reveals both the significant promise of centralized enrollment and the complications in designing a system that is technically sound but clear to the public, and fair to families but acceptable to schools. The OneApp continues to evolve as its administrators learn more about school-choosing families and school-choosing families learn more about the OneApp. The approach remains novel, and some New Orleanians have misunderstood or distrusted the choice process. The system’s long-term success will require both continued learning and growth in the number of schools families perceive to be high-quality options.

The OneApp’s Design

Early centralized enrollment systems, and the matching algorithms at their core, suffered from a key flaw: the lotteries were designed so that if a family ranked its most-preferred school first and that school was in high demand, then the family could lose its second-ranked option. In this situation, it could be rational for families to rank less-preferred options first. This is precisely what families did in cities like Boston that used this approach to match students to district schools, and it likely produced inefficient outcomes.

The challenge that faced the state entity that oversees most of the New Orleans schools, the Louisiana Recovery School District (RSD), was how to build a centralized, market-like enrollment system without inducing inefficient strategic behaviors. The solution was found in the Nobel Prize‒winning research of Stanford economist Al Roth. He, along with fellow Nobel Prize winner Lloyd Shapley, showed that a system could be designed to elicit true preferences just as prices would in a normal market. New Orleans and Denver became the first cities to use this Roth/Shapley-inspired centralized enrollment system across charter and district sectors. In New Orleans, this enrollment system is called the OneApp. To develop and run the OneApp, the RSD contracted with the Institute for Innovation in Public School Choice (IIPSC), an organization for which Roth has served as an adviser and board member.

For families, the OneApp process begins by acquiring an application packet with details about the application process, profiles of participating schools, and the application itself. Parents can request up to eight schools by submitting a ranked list to the RSD, in paper or online. The RSD then assigns students to schools based on families’ preferences, schools’ enrollment criteria, and seat availability. Families that do not submit a “Main Round” application, are not assigned to a school, or would like to try for a better placement may apply in a subsequent round. Families still lacking a satisfactory placement after the second round can go through a late enrollment process managed by the RSD to select from schools with available seats.

The machinery driving these placements is the RSD’s “deferred acceptance” computer algorithm. The first step of the process is to assign every student a lottery number for use when seats in oversubscribed schools must be allocated at random. The algorithm then tentatively assigns students to their first-choice schools, provided that students satisfy the entry criteria. If the school cannot accommodate all families applying for that grade, then the algorithm makes tentative assignments based on the school’s priority groupings (e.g., whether the student lives within the school’s broad catchment area) and students’ lottery numbers. At this point, students who were not assigned to their first-choice school are rejected from that school. Importantly, however, the algorithm leaves all assignments tentative until the final step. This means that students tentatively assigned to their first-choice school might later lose their seats to students who ranked that school lower than first but were rejected from all higher-ranked schools. This is key to the algorithm’s strategy-proof design.

In the next step of the process, all students who were rejected from their first-choice school are considered for their second-choice school. The algorithm considers them along with other second-choice applicants and those who were tentatively assigned to their first-choice schools. These steps are repeated for third choices and so on until no available seats remain. The algorithm’s final step is to actually assign all students to the schools to which they are tentatively assigned. Only then are families notified of the results.
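
To make the mechanics concrete, here is a heavily simplified sketch of student-proposing deferred acceptance, the family of algorithm described above. It is not the RSD's implementation: priority groups, guaranteed seats, the Early Window, and multiple rounds are all omitted, and each school simply orders applicants by a single lottery/priority score. The names and numbers in the toy example are made up.

```python
# Simplified sketch of student-proposing deferred acceptance (not the RSD's code).
# Each student ranks schools best-first; each school has a capacity and a single
# lottery/priority score per applicant (lower = higher priority).

def deferred_acceptance(student_prefs, school_capacity, school_rank):
    # student_prefs: {student: [schools in order of preference]}
    # school_capacity: {school: number of seats}
    # school_rank: {school: {student: score, lower = higher priority}}
    next_choice = {s: 0 for s in student_prefs}        # index of the next school to try
    tentative = {sch: [] for sch in school_capacity}   # tentatively held seats per school
    unassigned = set(student_prefs)

    while unassigned:
        student = unassigned.pop()
        prefs = student_prefs[student]
        if next_choice[student] >= len(prefs):
            continue                                   # list exhausted: stays unplaced
        school = prefs[next_choice[student]]
        next_choice[student] += 1
        tentative[school].append(student)
        # Keep the highest-priority applicants up to capacity; reject the rest.
        tentative[school].sort(key=lambda s: school_rank[school][s])
        rejected = tentative[school][school_capacity[school]:]
        tentative[school] = tentative[school][:school_capacity[school]]
        unassigned.update(rejected)                    # rejected students propose again

    return {s: sch for sch, kept in tentative.items() for s in kept}

# Tiny made-up example:
prefs = {"ana": ["A", "B"], "ben": ["A", "B"], "cam": ["B", "A"]}
caps = {"A": 1, "B": 2}
lottery = {"A": {"ana": 2, "ben": 1, "cam": 3}, "B": {"ana": 1, "ben": 2, "cam": 3}}
print(deferred_acceptance(prefs, caps, lottery))
# e.g. {'ben': 'A', 'ana': 'B', 'cam': 'B'}
```

Because every hold stays tentative until the loop ends, a student sitting at her first choice can still be displaced by a later proposer with better priority, which is what makes truthful ranking a safe strategy.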

The OneApp has many useful properties as a system for assigning students to schools of choice, including its strategy-proof design. To maximize the probability of receiving a desired placement, applicants have an incentive to rank as many schools as possible (eight) in their true order of preference. In fact, deviating from that strategy only makes it less likely that applicants will be assigned to their most-preferred schools. Yet even a technically elegant system—and especially one this difficult to explain—faces challenges when it confronts families making decisions for their children in actual choice settings.

The OneApp in the School Choice Context

The RSD set three goals for the OneApp: efficiency, fairness, and transparency. Here, we consider the OneApp and centralized enrollment in the context of these goals, at times defining them differently from how the RSD does. We examine not just the technical process of assigning students to schools, but also the relationship with the city’s broader school-choice setting, since the OneApp is so intertwined with New Orleans’s overall education policy. To incorporate empirical evidence when possible, we draw on data from interviews with 21 parents and surveys of 504 parents about the OneApp and school choice, conducted in the spring of 2014 by the Center on Reinventing Public Education (CRPE). We also utilize de-identified OneApp data containing families’ school requests and assignments for the 2013‒14 school year.

Efficiency. A centralized enrollment system like the OneApp may improve efficiency both in how families choose schools and how the broader market for schools operates. The RSD’s stated definition of efficiency is reasonable, if incomplete. It states that the OneApp can improve efficiency by making the enrollment process easier for parents to navigate, reducing the costs associated with choosing and enrolling in a school. We favor a definition that also considers how successfully the system matches families to the schools they want. Economists emphasize the importance of matching preferences with products—in this case, matching what families want with the available schools. Given the available schooling options, the OneApp algorithm is designed to do that.

How well the OneApp stacks up on this two-pronged definition of efficiency depends on the alternative to which it is compared. Relative to traditional zone-based assignment, the OneApp requires somewhat more effort from families. Families are asked to gather information and think about the many options in front of them before actively selecting a school and ranking their preferred schools. Families could incorporate school considerations into decisions about where to live, but once a residential decision is made, the school-housing linkage sharply limits a family’s options. Traditional zone-based assignments may be less able to match family preferences than the OneApp, especially for those who don’t have the means to purchase or rent a home in a neighborhood with desirable public schools.

Compared with decentralized choice, where families apply to every school separately, centralized enrollment should be easier on families by reducing the applications and deadlines they have to navigate. It also should more efficiently match families to schools via a centralized matching algorithm. Perhaps surprisingly then, CRPE’s surveys of New Orleans parents in spring 2014 found that families that chose schools after the OneApp was instituted in 2012 reported greater difficulty with the number of applications and deadlines involved than families that chose schools before the OneApp. This may have been due to families adjusting to an unfamiliar process early in the OneApp’s tenure. It will be worth tracking future surveys to see if parents grow more comfortable with the procedures as these procedures grow more familiar.

In general, most families that enter the OneApp are getting the schools they request. The RSD reports that 54 percent of Main Round applicants received their first-choice school and 75 percent got one of their top three choices for the 2015‒16 school year (see Figure 2). While these results are encouraging, no comparable metric exists for zone-based assignment or decentralized choice, and these metrics can be misleading. They indicate how well participating families are being matched to participating schools. These measures cannot gauge families’ true satisfaction with their school options and their matches. For example, if an extremely popular school joins the OneApp and many families rank that school first, the percentage of families receiving their first choice might fall even as the system’s ability to match families to desirable schools improves. For this reason, the OneApp data provide limited, though useful, information about family satisfaction. Continued surveys and discussions with school-choosing New Orleans families can complement the information from these publicized metrics.

Fairness. Defining fairness requires normative judgment. A high standard might hold that access to high-quality schools does not vary by students’ socioeconomic status. Every modern enrollment system would fall far short of this standard. Traditional zone-based systems generally leave low-income and minority students heavily concentrated in low-performing schools. Decentralized systems typically favor parents who have strong social networks and resources to understand, navigate, and even manipulate the many different enrollment processes in a city. The centralized OneApp system is not devoid of problems either. Students receive preference within their geographic catchment areas, and students from affluent families are more likely to have the preparation needed for admissions to selective schools. Moreover, the early deadline for schools with special entrance requirements—in December of the year before enrollment, two months before other Main Round applications are due—requires early awareness that may disadvantage all but the most well-informed or socially connected parents. On the other hand, families of all backgrounds at least have a chance to enter lotteries for the vast majority of schools, and even though some of the most desirable schools have early deadlines and additional requirements, simply including these schools in the OneApp likely makes them more visible and accessible than they would have been otherwise.

A more attainable definition of fairness, and the one adopted by the RSD, is that a system is fair if it sets rules governing enrollment and assignment in advance and then applies those rules consistently to all students. Residence-based school-assignment systems generally treat students within their zones equally for purposes of admission, though there have been cases of skirting the rules with incorrect addresses or special treatment. More significant problems arise in schools of choice when, for example, school leaders hide open seats from certain types of students or manipulate their lotteries or waitlists—problems that are especially likely when schools manage their own enrollment processes amid significant accountability pressure. Prior to the implementation of the OneApp, a study by Huriya Jabbar found that roughly one-third of New Orleans principals admitted to practices that kept certain students out. The OneApp has reduced opportunities for schools to engage in these behaviors by transferring decisionmaking authority in admissions from schools to the centralized process. While system leaders report that these behaviors became less common after the OneApp, it did not completely eliminate opportunities for unfair enrollment behaviors, as schools still might dissuade certain families from applying or enrolling. But these behaviors cannot be remedied with an application system alone.

Transparency…and Clarity. The RSD also includes transparency among its primary goals, and for good reason. Being open and honest about the rules governing enrollment and the strategies for effective participation is an essential element of the responsible administration of a centralized enrollment system. We submit, however, that simply being transparent is not enough with a program as unfamiliar and potentially confusing as a centralized enrollment system. A transparent system can still be unclear, and a lack of clarity can produce misunderstandings and distrust that undermine even the most transparent system.

To assess transparency, we again compare a centralized enrollment system with the alternatives. Attendance zones are extremely transparent, despite obvious questions about equity. At the other extreme, decentralized choice systems can have severe transparency concerns, with schools individually managing their lotteries and waitlists outside the view of the public or an oversight agency. State or local rules requiring public lotteries and equal treatment may be helpful but difficult to enforce, as Jabbar’s evidence on pre-OneApp principal behavior attests.

The OneApp, in contrast, requires that all rules and criteria determining admission are set in advance and, in fact, coded into a computer algorithm. The criteria are also included in the OneApp enrollment packet for the public to see. Some schools still give priority for criteria such as being the child of a school staff member, but these criteria at least are made known to the public. Putting this information in the OneApp booklet helps families understand the enrollment processes, and may discourage schools from adopting enrollment criteria or processes to strategically manipulate their pools of incoming students.

Being clear about certain elements of the OneApp has proven more difficult than being transparent. In some ways this is understandable, since at the core of the OneApp lies an algorithm that is difficult to explain to even the most interested audience. Yet clearly communicating to families information about the matching process and instructions for correctly filling out an application is essential, since misunderstandings or mistrust may lead parents to approach the OneApp in ways that undermine its goals. To examine the possibility of misunderstandings or mistrust, we analyzed patterns in OneApp rankings and interviews and surveys with parents. Useful, if limited, evidence of the OneApp’s clarity can be found by identifying application behaviors that reduce applicants’ probability of getting their desired placements.

We find evidence that many families do not approach the OneApp as its designers likely expected. The OneApp allows families to rank up to eight schools, and given the algorithm’s strategy-proof design, families cannot gain by ranking fewer than the allowed number. Yet most families rank far fewer than eight. Applicants seeking nonguaranteed kindergarten or 9th-grade Main Round placements for the 2013–14 school year submitted forms with only 3.1 schools ranked, on average. (Students are guaranteed slots in the schools they currently attend.) Perhaps these families were considering only a few OneApp schools before seeking out private schools or non-OneApp public schools. For many applicants, this did not seem to be the case. In the Main Round, 315 families that requested nonguaranteed kindergarten or 9th-grade placements with applications listing fewer than eight schools did not get placed at all. Of these families, about half (164) applied to at least one additional school in a subsequent round of the OneApp, which indicates a willingness to enroll in a school not originally ranked. Many of these families likely would have been better off listing additional schools in their Main Round application, when more schools were available to them. While this amounts to a small proportion of total OneApp applicants, others who ranked fewer than eight schools and yet received a Main Round placement might have simply been fortunate.
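
As a rough sketch of the kind of tabulation behind these figures (not the authors' code), the snippet below computes the average list length, counts short-list applicants who went unplaced in the Main Round, and checks how many of them reappeared in a later round. The file and column names are hypothetical stand-ins for de-identified OneApp records.

```python
# Rough sketch, not the study's code: descriptive checks on OneApp ranking
# behavior from de-identified records. File and column names are hypothetical.
import pandas as pd

apps = pd.read_csv("oneapp_2013_14.csv")  # one row per applicant per round

main = apps[(apps["round"] == "main") & (~apps["guaranteed_seat"])]

# Average number of schools ranked (out of a possible eight).
print("Mean schools ranked:", round(main["n_schools_ranked"].mean(), 1))

# Applicants who ranked fewer than eight schools and received no placement.
short_unplaced = main[(main["n_schools_ranked"] < 8) & (~main["placed"])]
print("Short lists, unplaced:", len(short_unplaced))

# How many of those applicants came back in Round 2 or Round 3.
later_ids = set(apps.loc[apps["round"].isin(["round2", "round3"]), "applicant_id"])
print("Reapplied later:", short_unplaced["applicant_id"].isin(later_ids).sum())
```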

One possible explanation for this behavior is that many parents do not understand or believe the OneApp’s strategy-proof design. Parents interviewed by CRPE researchers described efforts to outwit the OneApp’s matching algorithm by ranking fewer than eight schools. For example, many interviewed parents reasoned that by ranking only their most-preferred schools, they gave the RSD little alternative but to assign them to one of their top choices. While such decisionmaking is hard to observe in the OneApp data, this kind of strategy puts parents at a greater risk of not matching to any school.

The number of families that do not submit an application at all suggests that many families, despite the RSD’s efforts to publicize the OneApp and provide information on procedures, may still be unclear about the OneApp process. For the 2013–14 school year, 2,881 applicants requested a nonguaranteed kindergarten or 9th-grade placement during the Main Round in February. However, another 774 applicants first requested a nonguaranteed kindergarten or 9th-grade placement in either Round 2 (in May) or Round 3 (in July), before the final administrative matching process. With some highly regarded schools filling up during the Main Round, these families’ access to desirable schools was limited. For many, missing the Main Round was likely the result of imperfect information about either the OneApp process or their own plans for the coming school year. And certain populations are especially vulnerable. Families just arriving in New Orleans, families with children just reaching school age, and families without access to informed social networks could struggle to learn about the OneApp process in time.

Centralized Enrollment and Education Policy

In many ways, the OneApp is more efficient, fair, and transparent than the decentralized choice system that preceded it. Despite this, some New Orleanians remain skeptical of the new system, often for reasons only tangentially related to the city’s enrollment process. For example, in one parent’s words, “This [common enrollment] would be great…if we had better choices.” We argue that these impressions tend to emerge not from the OneApp itself but from the larger choice system, especially the closely connected “supply side” of the market. Yet these impressions can have direct implications for the OneApp. How the public feels about the school choice setting in New Orleans can shape education policy, and education policy can shape the OneApp’s role, now and in the future.

Examples of supply-side issues that can affect public perception include transportation, selective admissions, and nonparticipation in the OneApp. If families cannot access the schools they want because commuting to those schools is too difficult, their children do not meet performance requirements, or those schools do not appear in the OneApp, then families are unlikely to believe that centralized enrollment gives them real choice.

These supply-side issues intersect in New Orleans, where it can feel like a decentralized school-choice system operates alongside a centralized one. Most public schools in New Orleans are administered by the RSD, but among other public schools are those run directly by the traditional school district (the Orleans Parish School Board, or OPSB), OPSB-authorized charter schools, and charter schools authorized by the state’s Board of Elementary and Secondary Education (BESE). Whereas all RSD schools participate in the OneApp and do so without academic entrance requirements, the same is not true of OPSB and BESE schools. Several OPSB and BESE public schools have selective admissions based on entrance exams, language proficiency exams, prior grades, essays, and other criteria. Some of these selective-admissions schools do not currently participate in the OneApp, and they provide school bus service less consistently. This multi-part system can give rise to confusion and frustration, particularly among families trying to reconcile claims that they have unprecedented choice with the reality that their children may not have access to some of the city’s most desired public schools.

Parents also perceived their odds of receiving a seat in a high-quality school as slim. While New Orleans schools have improved considerably since pre-Katrina (see “Good News for New Orleans,” features, Fall 2015) and families seem to have a variety of schooling options (see “Many Options in New Orleans Choice System,” research, Fall 2015), only 22 of the 90 schools in the 2015–16 OneApp received a letter grade of A or B under the state’s accountability system. Of the four schools that received an A, three are full-immersion Spanish or French language schools that required applications during the Main Round’s Early Window period because they mandated language proficiency tests.

Moreover, while 89 percent of New Orleans public schools appeared in the OneApp, a few of the city’s highest-rated, most-desired schools constitute the 11 percent of New Orleans public schools that have chosen to handle enrollment processes on their own, outside of the OneApp. Some of these same schools have complex application requirements and ambiguous selection procedures, heightening the sense that the best schools in New Orleans are not truly accessible to all families.

In the long run, parental perceptions will also depend on how the school system responds to market demand. The OneApp can help in this regard, since it collects information about family preferences. Ideally, system leaders use this information—along with other data on school quality—to increase the number of high-quality seats (e.g., by adding seats to desirable schools or opening more schools like them) and reduce the number of low-quality seats (e.g., by closing low-performing, undesirable schools). Indeed, the RSD has incorporated demand data in judgments about school sites, placing popular schools in buildings that can accommodate future growth. However, responses through the portfolio management process can be slow to develop, and some high-demand schools, feeling effective at their current scale, have expressed reluctance to increase their enrollment substantially. Individual school leaders may be able to adjust to demand signals more quickly by better aligning their offerings with community needs, though research on schools’ responses to market pressures generally shows that schools make some programmatic improvements in response to demand pressures but focus more intently on superficial changes like improved marketing.

The OneApp will likely enjoy long-term public support only if it is woven into a larger fabric of school options and choice. These examples show that some important threads in this fabric are still missing. No matter how well thought out and carefully constructed the OneApp itself might be, families that find their preferred schools inaccessible or their options undesirable are likely to experience frustration and confusion. Some may judge the enrollment system using metrics of efficiency, fairness, and transparency, but parents will judge it based on their own experiences and interests.

The OneApp represents an ambitious policy shift, requiring families and educators to think in an entirely new way about how students are assigned to schools. Given this, and the fact that the OneApp is still in its early years, misunderstandings are not surprising. With most families getting one of their top-ranked schools, the number of satisfied parents could give system and school leaders time to improve the application process further as well as the quality of schools offered. There are signs in New Orleans that such learning and improvement are underway. RSD administrators routinely consider the system’s successes and failures, and modify it accordingly for the next iteration, all while the public continues to acclimate and learns how to better leverage the choice system. Continued learning and adaptation will be essential to the OneApp’s sustained success and the ability of New Orleans to provide the country with a model for student enrollment that is worthy of replication elsewhere.

Douglas N. Harris is professor of economics and founder and director of the Education Research Alliance for New Orleans at Tulane University. Jon Valant is postdoctoral fellow in the department of economics at Tulane University and at ERA-New Orleans. Betheny Gross is senior analyst and research director at the Center on Reinventing Public Education at the University of Washington Bothell.

For more information on New Orleans, read “Good News for New Orleans: Early evidence shows reforms lifting student achievement,” by Douglas N. Harris, and “Many Options in New Orleans Choice System: School characteristics vary widely,” by Paula Arce-Trigatti, Douglas N. Harris, Huriya Jabbar, and Jane Arnold Lincove.

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Harris, D.N., Valant, J., and Gross, B. (2015). The New Orleans OneApp: Centralized enrollment matches students and schools of choice. Education Next, 15(4), 17-22.

Good News for New Orleans https://www.educationnext.org/good-news-new-orleans-evidence-reform-student-achievement/ Tue, 04 Aug 2015 00:00:00 +0000 http://www.educationnext.org/good-news-new-orleans-evidence-reform-student-achievement/ Early evidence shows reforms lifting student achievement

What happened to the New Orleans public schools following the tragic levee breaches after Hurricane Katrina is truly unprecedented. Within the span of one year, all public-school employees were fired, the teacher contract expired and was not replaced, and most attendance zones were eliminated. The state took control of almost all public schools and began holding them to relatively strict standards of academic achievement. Over time, the state turned all the schools under its authority over to charter management organizations (CMOs) that, in turn, dramatically reshaped the teacher workforce.

A few states and districts nationally have experimented with one or two of these reforms; many states have increased the number of charter schools, for example. But no city had gone as far on any one of these dimensions or considered trying all of them at once. New Orleans essentially erased its traditional school district and started over. In the process, the city has provided the first direct test of an alternative to the system that has dominated American public education for more than a century.

Dozens of districts around the country are citing the New Orleans experience to justify their own reforms. The reforms have been hailed by both Democratic president Barack Obama and Louisiana’s Republican governor, Bobby Jindal, and parliamentary delegations from at least two countries have visited the city to learn about its schools.

The unprecedented nature of the reforms and level of national and international attention by themselves make the New Orleans experience a worthy topic of analysis and debate. But also consider that the underlying principles are what many reformers have dreamed about for decades—that schools would be freed from most district and union contract rules and allowed to innovate. They would be held accountable not for compliance but for results.

There is clearly a lot of hype. The question is, are the reforms living up to it? Specifically, how did the reforms affect school practices and student learning? My colleagues and I at the Education Research Alliance for New Orleans (ERA-New Orleans) at Tulane University have carried out a series of studies to answer these and other questions. Our work is motivated by the sheer scale of the Katrina tragedy and the goal of supporting students, educators, and city leaders in their efforts to make the city’s schools part of the city’s revitalization effort. The rest of the country wants to know how well the New Orleans school reforms have worked. But the residents of New Orleans deserve to know. Here’s what we can tell them so far.

Before the Storm

Assessing the effects of this policy experiment involves comparing the effectiveness of New Orleans schools before and after the reforms. As in most districts, before Hurricane Katrina, an elected board set New Orleans district policies and selected superintendents, who hired principals to run schools. Principals hired teachers, who worked under a union contract. Students were assigned to schools based mainly on attendance zones.

The New Orleans public school district was highly dysfunctional. In 2003, a private investigator found that the district system, which had about 8,000 employees, inappropriately provided checks to nearly 4,000 people and health insurance to 2,000 people. In 2004, the Federal Bureau of Investigation (FBI) issued indictments against 11 people for criminal offenses against the district related to financial mismanagement. Eight superintendents served between 1998 and 2005, lasting on average just 11 months.

This dysfunction, combined with the socioeconomic background of city residents—83 percent of students were eligible for free or reduced-price lunch—contributed to poor academic results. In the 2004‒05 school year, Orleans Parish public schools ranked 67th out of 68 Louisiana districts in math and reading test scores. The graduation rate was 56 percent, at least 10 percentage points below the state average.

As a result, some reforms were already under way when Katrina hit in August 2005. The state-run Recovery School District (RSD) had already been created to take over low-performing New Orleans schools. The state had appointed an emergency financial manager to handle the district’s finances. There were some signs of improvement in student outcomes just before the storm, but, as we will see, these were relatively modest compared with what came next.

A Massive Experiment 

After Katrina, state leaders quickly moved almost all public schools under the umbrella of the RSD, leaving the higher-performing ones under the Orleans Parish School Board (OPSB). Gradually, the RSD turned schools over to charter operators, and the teacher workforce shifted toward alternatively prepared teachers from Teach for America and other programs. So new was the system that a new name was required—longtime education reformer Paul Hill called it the “portfolio” model.

Researchers often refer to such sudden changes as “natural experiments” and study them using a technique called “difference-in-differences.” The idea is to first take the difference between outcomes before and after the policy, in the place where it was implemented—the treatment group. This first difference is insufficient, however, because other factors may have affected the treatment group at the same time. This calls for making the same before-and-after comparison in a group that is identical, except for being unaffected by the treatment. Subtracting these two—taking the difference of the two differences between the treatment and comparison groups—yields a credible estimate of the policy effect.
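To make the logic concrete, here is the basic difference-in-differences estimate written out in symbols (the notation is ours, added for illustration, not taken from the underlying studies):

\[
\widehat{\delta} = \left( \bar{Y}^{\mathrm{NO}}_{\mathrm{post}} - \bar{Y}^{\mathrm{NO}}_{\mathrm{pre}} \right) - \left( \bar{Y}^{\mathrm{comp}}_{\mathrm{post}} - \bar{Y}^{\mathrm{comp}}_{\mathrm{pre}} \right)
\]

where \(\bar{Y}\) denotes average standardized test scores, “NO” indexes New Orleans (the treatment group), “comp” indexes the matched comparison districts, and “pre” and “post” refer to the years before and after the reforms. The comparison group’s before-and-after change nets out common statewide trends, leaving \(\widehat{\delta}\) as the estimated reform effect.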

We have carried out two difference-in-differences strategies:

1) Returnees only. We study only those students who returned to New Orleans after Hurricane Katrina. The advantage of this approach is that it compares the same students over time. One disadvantage is that it omits nonreturnees. Also, we can only study returnees over a short period of time—after 2009, they no longer have measurable outcomes to study.

2) Different cohorts. We consider the achievement growth of different cohorts of students before and after the reforms—for example, students in 3rd grade in 2005 and students in 3rd grade in 2012. The advantages here are that we can include both returnees and nonreturnees, and we can use this strategy to study longer-term effects. But the students are no longer the same.

In both strategies, the New Orleans data set includes all publicly funded schools in the city, including those governed by the district (OPSB), since all public schools were influenced by the reforms. The main comparison group includes other districts in Louisiana that were affected by Hurricane Katrina, and by Hurricane Rita, which came soon afterward. This helps account for at least some of the trauma and disruption caused by the storms, the quality of schools students attended in other regions while their local schools were closed, and any changes in the state tests and state education policies that affected both groups.

[Figure 1]

Effects on Average Achievement

Figure 1 shows the scores for each cohort, separately for New Orleans and the matched comparison group. The scores cover grades 3 through 8, are averaged across subjects, and are standardized so that zero refers to the statewide mean. The first thing to notice is that before the reforms, students in New Orleans performed far below the Louisiana average, at about the 30th percentile statewide. Students from the comparison districts also lagged behind the rest of the state, but by a lesser amount. The New Orleans students and the comparison group were moving in parallel before the reforms, however, suggesting that our matching process produced a comparison group that is more appropriate than the state as a whole.

The performance of New Orleans students shot upward after the reforms. In contrast, the comparison group largely continued its prior trajectory. Between 2005 and 2012, the performance gap between New Orleans and the comparison group closed and eventually reversed, indicating a positive effect of the reforms of about 0.4 standard deviations, enough to improve a typical student’s performance by 15 percentile points.

The estimates we obtain when we focus just on returnees are smaller and often not statistically significant, although the discrepancies are predictable: first, the returnees were probably more negatively affected by trauma and disruption; second, creating a new school system from scratch takes time, so we would expect any effects to be larger in later years; and third, the effects of the reforms seem more positive in early elementary grades, and the returnees were generally in middle school when they returned. Even so, the combination of analyses suggests effects of at least 0.2 standard deviations, or enough to improve a typical student’s performance by 8 percentile points.
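As a back-of-the-envelope check on these conversions (a sketch of our own, assuming approximately normal test-score distributions, not a calculation from the studies themselves), a standardized effect can be translated into percentile points for a student starting at the statewide median:

```python
from scipy.stats import norm

# For a student at the statewide median (50th percentile), an effect of d standard
# deviations moves that student to the norm.cdf(d) percentile of the distribution.
for effect_sd in (0.4, 0.2):
    gain = 100 * (norm.cdf(effect_sd) - 0.5)
    print(f"A {effect_sd} SD effect moves a median student up about {gain:.1f} percentile points")
```

This yields gains of roughly 15.5 and 7.9 percentile points, consistent with the rounded figures of 15 and 8 cited above.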

But there is still the possibility that what appear to be reform effects are actually the result of other factors.

Addressing Additional Concerns

The goal of any analysis like this is to rule out explanations for the changes in outcomes other than the reforms themselves. Our main comparisons deal with many potential problems, such as changes in state tests and policies. Here we consider in more depth four specific factors that could bias the estimated effects on achievement: population change, interim school effects, hurricane-related trauma and disruption, and test-based accountability distortions.

Population change. Hurricane Katrina forced almost everyone to leave the city. Some returned and some did not. The most heavily flooded neighborhoods were (not coincidentally) those where family incomes were lowest, and people in these neighborhoods returned at much lower rates than people who lived in other parts of the city. Given the strong correlation between poverty and student outcomes, this could mean that higher test scores shown in Figure 1 are driven not by the reforms but by schools serving more-advantaged students.

Observers have pointed out that the share of the student population eligible for free or reduced-price lunch (FRL) actually increased slightly in New Orleans after the storm. But there are many reasons not to trust FRL data. For example, they reflect crude yes/no measures and are unlikely to capture extreme poverty of the sort common in New Orleans. Also, what really matters here is not whether poverty increased in New Orleans, but whether poverty increased more than in the comparison group. Therefore, in addition, we gathered data from the U.S. Census, which measures changes in income and the percentages of the population with various levels of education. We also carried out the difference-in-differences analysis in these demographic measures to understand the changes in New Orleans relative to the matched comparison group of hurricane-affected districts, and then simulated the effect of changes in family background characteristics on test scores using data from the federal Early Childhood Longitudinal Study.

We also examined pre-Katrina characteristics to see whether the returnees were different from nonreturnees and found that returnees did have slightly higher scores. In fact, we come to the same conclusion in both analyses: the expected increase in student outcomes after the hurricanes due to population change is no more than 0.02 to 0.06 standard deviations, or about 10 percent of the difference-in-differences estimates in Figure 1.

Interim school effects. Some of the changes in student learning may reflect neither the prestorm nor poststorm quality of New Orleans schools, but the performance of schools that students briefly attended outside the city after the evacuation. Other research on these students by Dartmouth economist Bruce Sacerdote suggests that New Orleans evacuees experienced larger improvements in school quality than evacuees from other districts.

Trauma and disruption. Any benefit of having good interim schools might be offset by the trauma and disruption of the storm itself and its aftermath. The majority of New Orleans returnees probably knew someone among the nearly 2,000 people who died in the Katrina aftermath. Also, almost all students experienced significant disruption, moving to unfamiliar neighborhoods and schools for extended periods. Reports of post-traumatic stress disorder remain common.

It is difficult to isolate trauma and interim school effects, but we can estimate the combination of the two. A study by the RAND Corporation of students from Louisiana districts affected by the hurricane suggests that these two factors had a short-term net negative effect on evacuees’ performance of 0.03 to 0.06 standard deviations. Our analysis suggests that the negative influence is even larger for New Orleans students, most likely because of the more extensive destruction in the city compared with most other areas along the state’s coast. Thus, at least in the years just after the reforms, the factors pushing student outcomes down were at least as large as the population changes pushing them up.

Test-based accountability distortions. One key part of the New Orleans reforms was the idea that the state would shut down schools within three to five years if they did not generate a high enough School Performance Score, a measure based on test scores and graduation rates. Prior research suggests that such intensive test-based accountability can lead to behaviors, such as teaching to the test, that raise scores without improving underlying learning, or that do so at the expense of learning in nontested subjects.

To address this problem, we estimate effects separately by subject, recognizing that the stakes attached to math and language scores were roughly double the stakes for science and social studies scores during the period under analysis. Also, the state’s social promotion policy raises the stakes for students in grades 4 and 8. We find no evidence that the size of effects varied systematically with the stakes attached to the subjects or grades. However, it is hard to rule out other potential test-based accountability distortions with our data.

As further evidence, we considered descriptive information on nontest outcomes. State government reports indicate that, relative to the state as a whole, the New Orleans high school graduation rate and college entry rate (among high school graduates) rose 10 and 14 percentage points, respectively.

So, in theory, there are many challenges to estimating the effects of the New Orleans package of school reforms. The combined effect of these alternative factors on long-term achievement gains appears small, however, especially when compared with our initial estimate of the reform effects.

There is a clear pattern across these methods. The estimates are consistently within the same range, and even the lower end of that range suggests large positive effects.

Equity of Outcomes

Public schools exist to ensure that all children have an opportunity to succeed in life. Thus we consider not only the average effects of the reform package, but also whether the most-disadvantaged students benefited.

We first define equity in terms of how New Orleans, as an urban district, performed relative to districts serving more-advantaged students. Both before and after the reforms, at least 80 percent of New Orleans students were minority or eligible for FRL. It is therefore noteworthy that the reforms brought the city’s students near to the state average on a wide range of academic outcomes (see Figure 1).

It is also important to consider the distribution of effects within the city, and here the results are more mixed. All major subgroups of students—African American, low-income, special education, and English Language Learners (ELL)—were at least as well-off after the reforms, in terms of achievement. Critics of charter schools express concern about possible increases in racial isolation (some would say “segregation”). Among all of the various subgroups we considered, only Hispanic students seem to have experienced increases in isolation.

There have also been concerns about schools unfairly targeting low-income and African American students in disciplinary decisions. While we have not yet studied whether any student groups have been specifically targeted, we can say that the number of suspensions and expulsions has dropped since the reforms, for African American students and others alike.

There are a few less-positive signs, however. In our analysis of what families look for when choosing schools, we found that the lowest-income families place less weight on the School Performance Score than other families. Their circumstances may lead them to focus more on practical considerations such as distance to school and extended hours (to avoid extra child-care costs). Similarly, in our analysis of student mobility, we see that low-scoring students are less likely than high-scoring students to migrate toward schools with high scores. Finally, up until a few years ago, principals reported cherry-picking students by, for example, counseling out students deemed poor fits and holding invitation-only events to attract certain students.

Given the large improvements in average outcomes in a district that is almost entirely low-income and minority, and the mixed evidence on other equity indicators, it would be hard to say the outcomes from the New Orleans reforms are inequitable relative to what came before them. That said, they were highly inequitable to start with, and there is clearly room for improvement.

What Really Changed? 

To help improve the schools going forward, it is important to know how school practices and other intermediate outcomes changed. In a series of 15 ongoing studies, my collaborators at ERA-New Orleans and I have examined four main components of the reforms: choice and competition, teachers and leaders, charters and CMOs, and test-based accountability.

Some of the reform effect may be driven by parental choice and competition. The supply of schools in New Orleans appears highly differentiated. Some schools specialize in math and science, others in the arts. Some schools offer language immersion programs, while other schools have fairly traditional curricula. Some schools have selective admissions, while others are open enrollment or seek diverse student bodies. We also find that New Orleans families diverge in their schooling preferences, so having this degree of differentiation in schooling options is likely to help match what families want with what schools offer (see “The New Orleans OneApp,” features, and “Many Options in New Orleans Choice System,” research, Fall 2015).

It is still unclear, however, whether these changes in the market have contributed to the improvements in student outcomes. Even supporters of the reform efforts sometimes bristle when I use the words “market” and “competition” to describe the new system. Instead, they point to two other parts of the reform package: the authority of the state to close schools and the authority schools have over their teaching staffs.

Sixteen New Orleans schools have been completely closed and another 30 have been taken over in some fashion by either the RSD or OPSB—a large number in a city that has only about 90 public schools in total. Consistent with written state policies, we find that the School Performance Score is the strongest measurable driver of closure and renewal decisions. Moreover, in finding CMOs to open new schools and take over old ones, the RSD has preferred those with a track record of academic success.

School leaders in New Orleans talk frequently about how critical flexibility in personnel management is to their overall school success. Free of state and local mandates and constraints from union contracts, leaders reopening schools after the storm could hire anyone they wanted, including uncertified teachers, and dismiss teachers relatively easily. As CMOs took over, more of the teacher workforce came from alternative preparation programs such as Teach for America and The New Teacher Project. Consistent with some other studies, analyses commissioned by the state suggest that graduates of these programs contribute more to student achievement than graduates of traditional preparation programs.

The combination of policies had two types of effects on the teacher workforce. First, the percentages of teachers with regular certification and with 20 or more years of experience dropped by about 20 points each. Also, due to both the short-term commitments of some alternatively certified teachers and school autonomy over personnel, the teacher turnover rate nearly doubled. The fact that such large improvements in student learning could be achieved with these common metrics going in the “wrong direction” reinforces a common finding in education research: teacher credentials and turnover are not always good barometers of effectiveness.

Finally, we turn to a topic that is not typically thought of as part of the reform package but may be an essential component: costs and resources. Our analysis suggests that from 2004‒05 to 2011‒12, the same years covered by our achievement analysis, total public schooling expenditures per student increased by $1,000 in New Orleans relative to other districts in the state. Some of the increase probably reflects one-time start-up costs of new schools, and we are working to understand what share falls in that category. Regardless, there is wide agreement that the reforms did not come cheap.

None of this really tells us exactly which of the factors drove the improvements in student outcomes—no doubt they are interconnected—but it does provide some indication of how schools and families responded to the policy shift.

Implications for New Orleans

These findings have important implications for the New Orleans public schools, the many other urban districts pursuing the portfolio approach, and for the state and federal policies—especially test-based and market-based accountability—from which the New Orleans reforms emerged.

For New Orleans, the news on average student outcomes is quite positive by just about any measure. The reforms seem to have moved the average student up by 0.2 to 0.4 standard deviations and boosted rates of high school graduation and college entry. We are not aware of any other districts that have made such large improvements in such a short time.

The effects are also large compared with other completely different strategies for school improvement, such as class-size reduction and intensive preschool. This seems true even after we account for the higher costs. While it might seem hard to compare such different strategies, the heart of the larger school-reform debate is between systemic reforms like the portfolio model and resource-oriented strategies.

With the possible exception of distortions from test-based accountability, which are harder to identify, the reforms managed to avoid most of the side effects that many feared. But our findings also suggest areas of potential improvement. While the reforms have been successful on some dimensions of equity, it seems necessary to do more to ensure that all groups within the city benefit. All types of public school systems struggle with providing equitable access to quality schools, and the New Orleans system is no exception.

Implications for the Nation

Unfortunately, the effects of even the most successful programs are often not replicated when tried elsewhere, and there are good reasons to think the conditions were especially ripe for success in New Orleans:

There was nowhere to go but up. Pre-Katrina, the New Orleans public school system was highly dysfunctional, and student test scores made it the second-lowest-ranked district in the second-lowest-ranked state in the country.

New Orleans is an attractive city for young educators. The national response to the hurricane aftermath was heartening, and for many young people, contributing to the rebuilding effort became a calling. Later, as the reform effort took hold, New Orleans also became the nation’s epicenter of school reform, an ideal place for aspiring reform-minded educators. Because the city is smaller than many urban districts, school leaders could be very selective in choosing from the pool of educators who wanted to come and work there.

The effects might also be smaller, at least in the short run, if the reforms were adopted on a statewide basis, because the reform is dependent on a specific supply of teachers. It seems difficult enough attracting effective teachers and leaders to work long hours at modest salaries in New Orleans; doing it throughout Louisiana is unrealistic without a major change in the educator labor market. Nonetheless, it would be a mistake to dismiss the relevance of the New Orleans experience for others. It is relevant precisely because it is so unusual. The city’s reforms force us to question basic assumptions about what K‒12 publicly funded education can and should look like.

There is more to the debate than we can cover here, including fundamental philosophical issues about whose objectives and values should count in making schooling decisions. But there is also wide agreement that the academic outcomes considered here are important, so learning how much the reforms contribute to changes in academic measures should also be a key part of the conversation. Better understanding of all the elements of the reforms is something we owe to the city, its children, and everyone who suffered and perished in this terrible tragedy.

Douglas N. Harris is professor of economics at Tulane University and founder and director of the Education Research Alliance for New Orleans. The research cited here is coauthored with others on the ERA-New Orleans research staff (Paula Arce-Trigatti, Nathan Barrett, Lindsay Bell Weixler, Christian Buerger, Matthew Larsen, Jane Arnold Lincove, Whitney Ruble, Robert Santillano, and Jon Valant) and members of the ERA-New Orleans National Research Team (Huriya Jabbar, Jennifer Jennings, Spiro Maroulis, Katharine Strunk, Patrick Wolf, and Ron Zimmer).
All errors are the author’s.

For more information on New Orleans, read “Many Options in New Orleans Choice System: School characteristics vary widely,” by Paula Arce-Trigatti, Douglas N. Harris, Huriya Jabbar, and Jane Arnold Lincove, and “The New Orleans OneApp: Centralized enrollment matches students and schools of choice,” by Douglas N. Harris, Jon Valant, and Betheny Gross.

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Harris, D.N. (2015). Good News for New Orleans: Early evidence shows reforms lifting student achievement. Education Next, 15(4), 8-15.

The Myth About the Special Education Gap https://www.educationnext.org/myth-special-education-gap-charter-enrollment/ Tue, 21 Jul 2015 00:00:00 +0000 http://www.educationnext.org/myth-special-education-gap-charter-enrollment/ Charter enrollments driven by parental choices, not discriminatory policies

As public schools, charter schools are legally required to educate all students regardless of the difficulties they bring with them into the classroom. Nonetheless, many are concerned that the charter sector fails to educate all comers. Charter schools are often criticized for not enrolling similar proportions of students with disabilities as are enrolled in schools operated by the surrounding district. For instance, a recent report by the Government Accountability Office (GAO) found wide gaps between the percentages of students enrolled in special education in charter schools and in surrounding district schools. In New York City, Schools Chancellor Carmen Fariña recently implied that the city’s charter schools remove low-performing students in order to increase their aggregate test scores. Last year the New York Times published an op-ed arguing that the seeming success of charter schools in Harlem is driven by their willingness to push out students with disabilities, and that such “charter school refugees” drain district schools of resources.

Only anecdotal evidence has been offered in support of the claim that charter schools systematically remove students with disabilities, and little rigorous research has considered the underlying causes of the difference between the percentage of charter-school students and district-school students enrolled in special education, the so-called “special education gap.” But if we are to adopt sound policies to address such a gap, we need to understand its underlying causes.

In this study, I examine data on all elementary-school students in certain years in New York City and Denver, Colorado, to estimate the relative importance of various factors that appear to be contributing to a special education gap. My findings suggest that the gap, though real, is not as disturbing as it might seem. Two key drivers of the gap are differences in rates of students being classified as having a Specific Learning Disability (SLD) and the rates at which students who do not have disabilities move from one sector to the other. Neither factor indicates that charter schools are driving special education students away from their doors. Further, the size of the gap is determined largely by differences among students with mild rather than severe learning difficulties.

Both New York City and Denver are considered leaders in the charter school movement. Each city has experienced rapid expansion of the charter school sector in recent years. While the evidence for the effectiveness of charter schools nationwide is mixed, research has found that the charter schools in these cities are on average more effective than district schools in raising student test scores.

In my prior work on middle schools, I found that the special education gap in Denver was almost exclusively caused by differences in the rates at which students with disabilities and students without disabilities apply to charter schools in gateway grades (that is, when all students are entering school initially or graduating from elementary to middle school, for example). In this article, I identify key factors that contribute to the gap during the elementary-school years.

Although the data are richer for Denver than for New York City, my essential findings from the two cities are remarkably similar. In both, the relatively low enrollment of students with severe disabilities in charter schools accounts for very little of the gap, as there are very few of these students in either school sector. Instead, the special education gap begins in kindergarten, when students classified at a young age as having a speech or language disorder are less likely than other students to apply to charter schools. It grows in part because students enrolled in district elementary schools are considerably more likely to be classified as having an SLD than those enrolled in charter elementary schools. Also, students with disabilities are less likely than students without disabilities to enter charters in non-gateway grades.

Data

Longitudinal student-level data were provided by the departments of education in New York City and Denver. The New York City data cover the school years 2009‒10 through 2012‒13. The Denver data include 2008‒09 through 2013‒14. Each data set includes information for the universe of students attending a charter or district school in the respective city.

For each city, the relevant data identify the school in which the student was enrolled that year and indicate whether the student has an Individualized Education Program (IEP), which qualifies him or her for special education services. The data also include the student’s particular disability addressed by the IEP. Unique student identifiers allow me to map student movement and classification changes each year.

In New York City, students apply to each individual charter school directly. Unfortunately, as a result, I do not have information regarding whether students applied to (but did not enroll in) charter schools in New York City.

The school choice process is more centralized in Denver. Each year, students have the opportunity to state a preference for up to five schools—including charter and district schools. Most parents of students in gateway grades fill out the forms necessary to state a school preference. Thus, in Denver, for school year 2012–13, the data set also includes information about student preferences for schools according to the city’s school-choice policy.

[Figure 1]

The Gap in the Two Cities

As critics have claimed, there is in fact a special education gap in the two cities. In Denver, in 2012‒13, the percentage of special-education kindergarten students was 1.8 points higher in district schools than in charters. In grade 5 that difference was 4.7 percentage points. During the same school year in New York City, the differences at the same two grade levels were about 4 and 7 percentage points, respectively (see Figure 1a).

The paucity of severely disabled students in charter schools is often highlighted in public commentary on the special education gap. It is true that district schools enroll significantly larger percentages of students with relatively severe disability classifications than do charters. As shown in Figure 1b, the share of students with autism is 0.2 percentage points smaller in charters than in district schools in Denver and 1 percentage point smaller in New York City. Results for traumatic brain injury are similar. These differences do not contribute substantially to the overall special-education gap, however, as the percentage of students with severe disabilities is very small in both sectors.

Students who are identified as having speech and language disabilities play a much larger role in the gap story, especially among students in kindergarten (see Figure 1c). About 41 percent of the gap in kindergarten in New York City and 50 percent of the kindergarten gap in Denver are attributable to the differential presence of this type of student. But few students classified in this manner early on continue to be identified as in need of special services. As a result, the gap between charters and districts for students with this type of disability declines to the point of insignificance in later grades.

I suspect that the kindergarten gap is driven primarily by the fact that school districts often provide speech and language services to students in need of them prior to entry into kindergarten, and that the parents of such students are reluctant to switch to a charter school because doing so would interrupt those services. As a result, parental choices contribute to the creation of a special education gap at the very beginning of formal schooling.

The opposite situation prevails for the category of students identified as having an SLD (see Figure 1d). The growth in the special education gap after kindergarten in both cities is driven almost entirely by changes in the percentage of this group of students. Note that only a small percentage in either sector are classified as SLD students in kindergarten. Rather, the percentage increases rapidly from one year to the next as students pass through the elementary grades. But the growth of SLD enrollments is more rapid in district schools than in charters.

Those who focus on more “severe” classifications are ignoring the elephant in the room. SLD is among the mildest special-education classifications. It is also the most subjectively diagnosed. For example, prior research by Donald MacMillan and Gary Siperstein has indicated that SLD is likely overdiagnosed in district schools.

Charter School Application and Enrollment

Thus far I have discussed the type of disability that contributes the most to the special education gap between district and charter schools. No less important are the main factors that generate the gap: students entering charters may differ from those entering district schools (with respect to their special education needs), and students leaving charters may differ from those leaving district schools. Another factor is classification rates. District and charter schools may differ in their readiness to classify a student as having a disability. This is more likely in the case of mild disabilities, such as speech and language disabilities and SLD. The data allow me to look into each of these potential underlying causes of the gap.

Figure 1 provides some evidence regarding the types of students who enter into a charter school in kindergarten. Since students who apply to charter schools are assigned to enrollment randomly, we can have some confidence that the characteristics of those who enter charter schools in kindergarten mimic those of the students who apply.

Even if the lotteries are truly random, however, it is possible that students with disabilities who win a spot in a charter school are less likely to actually enroll. Unfortunately, because the results of enrollment lotteries are not centrally collected in New York City, the data set limits the ability to look at the characteristics of charter school applicants there. However, a unique feature of the Denver data set allows one to observe not only who enrolls in a charter school, but who applies to attend one through the city’s universal choice system.

The Denver data show that students with disabilities are somewhat less likely to apply to attend a charter than are students without disabilities. In kindergarten, 5.6 percent of students who listed at least one charter school as one of their five preferences had an IEP, while 7.8 percent of students who did not list a preference for a charter school had an IEP. These numbers are similar to those for actual percentages of students with IEPs enrolled in charter and district schools reported in Figure 1a.

Next, I look at students who leave their schools. If the special education gap is largely driven by charter schools systematically removing students with disabilities, we should expect that students with disabilities would be more likely to exit their school if it is a charter than if it is a district school. In New York City and Denver, this is not the case.

To examine this issue, I restrict each data set to include only students who were enrolled in kindergarten in the first observed year (2008‒09 in Denver, 2009‒10 in New York City). Figures 2a and 2b describe the percentage of such students who remain in their original elementary school after a given number of years according to their IEP status in kindergarten. (Results are similar for students who are observed with an IEP at any point in the time period considered.)

[Figure 2]

The results are again remarkably similar in the two city school systems. In both cities, students with existing IEPs are significantly and substantially more likely to remain in their kindergarten school if it is a charter than if it is a district school. In Denver, four years after entry in kindergarten, 65 percent of students with IEPs remain in their original charter school, compared to 37 percent of students who began in a district school. In New York City, four years after entry in kindergarten, 74 percent of students with IEPs remain in their original charter school, compared to 69 percent of students who began in a district school.

For the kindergarten cohorts of 2008‒09 in Denver and 2009‒10 in NYC, the impact of students with IEPs moving across sectors or out of the city school system is to decrease the special education gap in both cities. That’s because in both New York and Denver more students with IEPs enter charter schools in grades after kindergarten than exit them.

Of course, we cannot observe the reasons that students exit, and thus I cannot say how often charters (or district schools) counsel out students with disabilities. Nevertheless, the results strongly suggest that the special education gap is not due primarily to students with disabilities exiting the charter sector.

The Classification Factor

As mentioned, the special education gap in elementary schools originates because students with disabilities (especially those related to speech and language) are less likely to enter charter schools in kindergarten. In both cities (especially in Denver), the special education gap grows as students proceed from kindergarten through the 5th grade, and charters classify fewer students as SLD than do district schools.

The gap grows or contracts as students in either sector receive a new IEP or have their IEP status declassified, and as students with IEPs exit the city’s system entirely or move from one sector to the other.

For both cities, I again restrict the analysis to students who were enrolled in kindergarten in the first observed year. For each year after initial enrollment, I map student classifications and movements within and out of the city’s school system. I then quantify the influence of each factor on the change in the percentage of students who have an IEP within a sector. That is, the analysis quantifies how the percentage of students with IEPs in charter schools increased between 2008–09 and 2009–10 due to students being newly classified into special education, to students with IEPs exiting the sector, and so on.

In Denver, new IEPs increased the percentage of students with IEPs in district schools by 10.4 percentage points and the percentage of students with IEPs in the charter sector by 7.8 percentage points, for an increase in the gap of 2.6 points. In New York, the corresponding figures were 8.9 and 8.3, respectively, which increased the gap by less than 1 point (.57).
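Using the rounded figures above, the contribution of new classifications to the gap can be tallied directly (a minimal illustration of the accounting, not the author’s code; the small difference from the reported 0.57 for New York reflects rounding in the inputs):

```python
# Percentage-point increases in each sector's IEP share due to new classifications,
# taken from the rounded figures reported in the text.
new_iep_increase = {
    "Denver":        {"district": 10.4, "charter": 7.8},
    "New York City": {"district": 8.9,  "charter": 8.3},
}

for city, change in new_iep_increase.items():
    gap_growth = change["district"] - change["charter"]
    print(f"{city}: new classifications widened the gap by about {gap_growth:.1f} points")
```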

In both cities, students enrolled in charter schools are significantly less likely (and in Denver, substantially less likely) to be newly classified as having an IEP than are students in district schools. In both cities, this difference is driven nearly entirely by the greater probability that a student is classified as SLD in the district-school sector. It is not certain whether students in the district sector are more likely to become in need of special education or whether district procedures are designed to identify more readily that a student is in need of these services. One suspects that both factors are at work.

Mobility of students with IEPs obviously influences the percentage of students enrolled in special education. When a student with an IEP enters a school, either from outside the system or from the other sector, that student affects the receiving sector’s percentage of students with IEPs. The exits and entries of students without IEPs also influence the percentage of students who have IEPs within each sector by changing the total number of students in that sector (the denominator of the calculation), even though they have no effect on the number of students with IEPs (the numerator).

Student mobility increases the special education gap largely because of the movement of students who do not have IEPs. As we saw previously, elementary-school students without IEPs are more likely to enter charter schools in non-gateway grades than are students with IEPs. Each student without an IEP who enters a charter school decreases the percentage of students in the charter sector with an IEP.

This influence of student mobility on the special education gap is driven in part by the difference in size of the two sectors. Of course, the percentage of students with IEPs in a sector is calculated by dividing the number of students with IEPs by the total number of students in the sector. There are far more students enrolled in district schools than are enrolled in charter schools. Consequently, the movement of a single student from one sector to another has a much larger impact on the proportion of students with IEPs enrolled in charter schools than on the proportion of students with IEPs enrolled in district schools. This simple computational phenomenon tends to exacerbate the observed special-education gap.

For instance, consider the impact of a student who is not in special education moving from the district sector to the charter sector in Denver. During the time period analyzed, this category included 405 students. The impact of the movement of these students was a decrease in the proportion of students in special education in the charter sector of 5.1 percentage points. The influence of these same students on the district-school sector was an increase in the proportion of students classified as special education by only 0.9 percentage points. Thus, the overall impact of the movement of these students was to increase the special education gap by 4.2 percentage points.
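To see why moving students between a small sector and a large one has such lopsided effects, consider a deliberately simplified example (the sector sizes and IEP rates below are hypothetical, chosen only to illustrate the mechanism; they are not the Denver or New York City figures):

```python
# Hypothetical sector sizes and IEP counts, for illustration only.
charter_total, charter_iep = 2_000, 200        # charter sector: 10.0% of students have IEPs
district_total, district_iep = 40_000, 4_800   # district sector: 12.0% of students have IEPs

movers = 400  # students WITHOUT IEPs who transfer from district schools to charter schools
charter_total += movers
district_total -= movers

charter_share = 100 * charter_iep / charter_total      # falls to about 8.3% (down ~1.7 points)
district_share = 100 * district_iep / district_total   # rises to about 12.1% (up ~0.1 points)

print(f"Charter IEP share after transfers:  {charter_share:.1f}%")
print(f"District IEP share after transfers: {district_share:.1f}%")
```

Because the charter sector has a much smaller denominator, the same group of movers shifts its IEP share far more than the district sector’s, widening the measured gap even though no student with an IEP changed schools.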

Implications for Policy

The special education gap begins primarily because students classified as having a speech or language disorder are less likely than regular-enrollment students to apply. It grows in part because students enrolled in district schools are considerably more likely to be classified as having a specific learning disability in early elementary grades than are students enrolled in charter schools, and also because students without disabilities are more likely to enter charters in non-gateway grades than are students with disabilities. This result is remarkably similar across both cities. The overall special-education gap does not appear to be heavily influenced by relatively low enrollment of students with severe disabilities in charter schools.

That classification differences for SLD in later grades are a major driver of the gap is especially interesting. Prior research suggests that SLD is overidentified in district schools and that classifications are heavily influenced by student academic performance. These findings appear to open the door to the possibility that some portion of students who are not classified as disabled in charter schools would have been so classified had they instead attended a district school. Unfortunately, the analyses in this paper are not capable of identifying whether the differences in classifications are due to the type of student who attends each sector, or if there is something about charter schooling itself that reduces the probability that a student is newly classified as having a disability.

The conventional argument that charters enroll relatively few students with disabilities because they “counsel out” special needs students after they enroll is inconsistent with the enrollment data. In fact, students with disabilities are less likely to exit charter elementary schools than they are to exit district schools. More students with IEPs enter charter schools in non-gateway grades than exit them. Of course, I do not mean to imply that no student has been inappropriately removed by a charter school because of his or her disability. But the fact that students with special needs in charter schools are less mobile than those in district schools suggests that such incidents are not widespread. Policies meant to address the special education gap that focus on the movement of students with IEPs are unlikely to be productive.

One way policymakers could narrow the special education gap is by providing charters with resources and incentives to better recruit students with disabilities (particularly those with a speech or language impairment) to apply in kindergarten. Interestingly, the initial special-education gap in kindergarten is much smaller in Denver than it is in New York City. Though further research is required to make any firm judgments, the most likely reason for this difference is Denver’s use of a universal enrollment system in which charter schools participate, in contrast to New York City, where parents apply to individual charter schools.

Marcus A. Winters is senior fellow at the Manhattan Institute and assistant professor in the College of Education at the University of Colorado Colorado Springs. The New York City results described were reported in a paper jointly released by the Center on Reinventing Public Education (CRPE) and the Manhattan Institute. The Denver results first appeared in a report for CRPE. The Denver results additionally appear in the May 2015 issue of Educational Researcher.

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Winters, M.A. (2015). The Myth About the Special Education Gap: Charter enrollments driven by parental choices, not discriminatory policies. Education Next, 15(4), 34-41.

What Did Race to the Top Accomplish? https://www.educationnext.org/what-did-race-to-the-top-accomplish-forum-weiss-hess/ Tue, 14 Jul 2015 00:00:00 +0000 http://www.educationnext.org/what-did-race-to-the-top-accomplish-forum-weiss-hess/ Education Next talks with Joanne Weiss and Frederick M. Hess

Race to the Top was the Obama administration’s signature education initiative. Initially greeted with bipartisan acclaim, it has figured in debates about issues ranging from the Common Core to teacher evaluation to data privacy. Five years have passed since the U.S. Department of Education announced the winners in the $4 billion contest. What can the competition and its aftermath teach us about federal efforts to spur changes in schooling?

Joanne Weiss, former chief of staff to U.S. Secretary of Education Arne Duncan and director of the federal Race to the Top program, argues that the initiative spurred comprehensive improvements nationwide and in numerous policy areas, among them standards and assessments, teacher evaluation methods, and public school choice. Frederick M. Hess, director of education policy studies at the American Enterprise Institute, whose books include Carrots, Sticks, and the Bully Pulpit: Lessons from a Half-Century of Federal Efforts to Improve America’s Schools, contends that the competition rewarded mainly grant-writing prowess and that policymakers should be wary of top-down efforts to spur innovation.

• Joanne Weiss: Innovative Program Spurred Meaningful Education Reform

• Frederick M. Hess: Lofty Promises But Little Change for America’s Schools

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Weiss, J., and Hess, F.M. (2015). What Did Race to the Top Accomplish? Education Next, 15(4), 50-56.

Innovative Program Spurred Meaningful Education Reform https://www.educationnext.org/innovative-program-spurred-meaningful-education-reform/ Tue, 14 Jul 2015 00:00:00 +0000 http://www.educationnext.org/innovative-program-spurred-meaningful-education-reform/ Much has been said about the impact of the Race to the Top program—some good, some not so good, some accurate, some less so.

Much has been said about the impact of the Race to the Top program—some good, some not so good, some accurate, some less so. Because Race to the Top aimed to drive systems-level change, it’s still premature to reach firm conclusions about its impacts on outcomes for students, although that’s the verdict that ultimately matters most. Yet enough time has passed for a first take on the policies that Race to the Top helped pioneer. What did it seem to get right? What did it get wrong? And what does this mean for future policies? To those of us who were there, the intent was clear: Race to the Top was designed to identify those states with compelling ideas and viable plans for improving their educational systems, fund them, learn from them, and share their lessons widely.

A lot has changed in the five years since the program was launched. Forty-three states and the District of Columbia have new, higher standards pegged to college and career readiness. As states aimed toward these higher targets, many began by ratcheting up their proficiency bars (see “States Raise Proficiency Standards in Math and Reading,” features, Summer 2015). Virtually all are replacing their old fill-in-the-bubble tests of basic skills, tests that contributed to both low expectations for student learning and bad teaching practices, with significantly stronger assessments. A January 2013 report from the National Center for Research on Evaluation, Standards, and Student Testing confirms that the majority of questions on tests funded by Race to the Top gauge such higher-order skills as abstract thinking and communications. A good teacher is now recognized as someone whose students learn and grow, with 38 states revising their policies on educator effectiveness to include measures of student growth or achievement as one of multiple factors in teacher evaluations. Finally, charters and other public school‒choice policies—strengthened in 35 states—continue to empower parents to seek out the best educational opportunities for their children.

Given that there were only 12 Race to the Top winners (and seven runners-up who got small grants), it’s pretty clear that the program had an impact even in states that did not get grants. These states, awarded no new funding, could easily have reverted to their previous educational policies. But overwhelmingly, they chose not to (see Howell, “Results of President Obama’s Race to the Top,” research, Fall 2015).

Race to the Top used a number of innovative strategies to encourage comprehensive reform. First, contrary to the “federal overreach” label, Race to the Top was a large-scale state empowerment program. It packaged reforms that were happening already, albeit slowly and unevenly, in states across the country, and it provided incentives to states to accelerate the pace and reach of these activities. From higher standards and 21st-century assessments, to educator effectiveness and the turnaround of failing schools, Race to the Top’s program elements were anchored firmly in the good work of states and districts. As a result, states were able to tap into existing constituencies’ support for the ideas, enthusiasm for the agenda, and pent-up creativity around the work.

Second, as Patrick McGuinn pointed out in a 2010 American Enterprise Institute paper, Race to the Top “shifted the focus of federal education policy from the [state] laggards to the leaders.” It moved away from the notion that federal policy is designed chiefly to prevent bad actors from doing harm, and it set its sights on excellence. It urged idea-rich, capable states to define and navigate paths to educational excellence, and in so doing, to blaze trails that could show the way for other states.

Third, Race to the Top treated education as a “system” rather than as a collection of discrete “silos.” Whereas past reform efforts generally targeted one element, Race to the Top asked states to build comprehensive and coherent education agendas across four key pillars or “assurances.” That ambitiousness was risky and bold, and it had downsides (read on). But state systems of education consist of interconnected policies and work streams, and if related elements don’t move forward in tandem, the efforts often fail to have impact.

Fourth, Race to the Top recognized that the politics of education reform are tough. So it rewarded states for enlisting districts and local communities in designing and implementing the plans; it encouraged states to build political support across key constituencies and across sectors; and it provided political cover for state and local leaders to push forward ideas that could be controversial.

Finally, Race to the Top used transparency to advance knowledge, share ideas, and counter politics. Everything—from states’ proposals to reviewers’ comments to revisions and later to amendments—was posted for states to learn from, researchers to analyze, the media to probe, and the public to watchdog. Further, this commitment to transparency underscored, in both red and blue states, that this competition wasn’t about politics. It was about education, and the best proposals would win.

So, what did Race to the Top get wrong? First, while “comprehensive and coherent” are good goals, Race to the Top expected states to take on a lot, and for many, it was too much, too fast. The result was messy, incoherent implementation in too many places, which understandably frustrated educators and parents and undermined some of the good work that was being done. In an ideal world, new standards would have been rolled out together with aligned curricula and professional development. The new instructional practices demanded by the standards would have been reflected and reinforced through teacher observations, with feedback given by trained coaches and principals. And student growth would have been introduced thoughtfully into teacher evaluation systems based on new measures aligned to the new standards. The sequencing of complex new initiatives matters a lot, and Race to the Top didn’t do enough to guide states in how to think it all through.

Second, the competition included too many criteria, the result of a desire to support states’ varied innovative efforts and to enable stakeholders and advocates to see themselves reflected in the work. The heavily weighted criteria (for example, implementing standards, improving teacher and principal effectiveness, turning around the lowest-achieving schools, supporting high-performing charters) formed a coherent and comprehensive core. Other criteria offered options, but these too often exacerbated implementation challenges and contributed to a sense of a dominant federal perspective.

Third, Race to the Top did not do enough to mitigate competitors’ tendencies to overpromise in order to win. The competition advised applicants to develop plans that were “ambitious yet achievable,” and the reviewers were trained in how to evaluate the feasibility and credibility of plans. But these alone were insufficient backstops. And the federal rules that should have added teeth to the process, such as peer review and the withholding of grant funds for nonperformance, were wobbly at best.

It’s worth noting two critiques that pundits love, but that I largely reject: that Race to the Top was “too prescriptive” and that it epitomized “federal overreach.”

The criticism that the competition was “too prescriptive” is perhaps best summed up by Rick Hess’s suggestion in a July 2014 EdWeek blog post that rather than offer up its own criteria, “the Obama administration could have told the states, ‘Put forward your best ideas, and we’ll fund the most promising ones.’” It’s an attractive-sounding idea. In fact, the administration considered that approach, but rejected it because of the host of unintended negative consequences that let-a-thousand-flowers-bloom grant making would have had. Reviewers would have had no basis for comparing plans and determining scores, leading to inevitable charges of politicization and favoritism. Further, lacking political cover to implement the tougher reforms, states would likely have proposed weak, politically easy work with little or no impact to show for their efforts or taxpayers’ dollars. Finally, lower-capacity educational agencies craved more guidance, not less; they needed an application that, like a template, walked them through design. A total greenfield would have been a barrier for many.

The “federal overreach” critique of Race to the Top typically cites two things: the feds “forced” their hand-picked list of reforms on the country (see also “too prescriptive” above) and the feds “coerced” states to adopt the Common Core.

Any charge of coercion that is lobbed at a voluntary program is dubious on its face. Yes, Race to the Top put significant money on the table when times were tough, but every state got its pro rata share of $100 billion in Recovery Act funds, distributed by formula with virtually no strings attached. That was the lifeline. Race to the Top was the hard work states could choose to sign up for or not (and a number of states chose “not”).

What is worth acknowledging is that the administration didn’t anticipate that providing incentives to adopt college and career readiness standards drafted by the states would be seen, politically, as a threat to local control. Well before Race to the Top, a broad bipartisan coalition of states had come together under the aegis of the National Governors Association and the Council of Chief State School Officers to design and implement the Common Core State Standards. By May 2009, two months prior to the announcement of the preliminary Race to the Top guidelines, 46 governors and chiefs had already signed a memorandum of agreement that encouraged the federal government to “provide key financial support” for the Common Core State Standards “through the Race to the Top Fund” and the development of common assessments. Using Race to the Top dollars to support this state-led effort, at the request of states’ governors and chiefs, seemed like a wise use of funds at a key moment of need.

Nonetheless, the reasons that the administration failed to anticipate the backlash do not counteract the fact that a backlash has occurred. In the end, will Race to the Top have contributed to the undoing of the Common Core? Or will it simply be a footnote in the complex narrative of how the U.S. aligned its expectations for students with the demands of college and the workplace? I would place money on the latter. More than 40 states have maintained their commitment to high standards, arguing compellingly and openly for them. In addition, Race to the Top helped fund a new generation of high-quality, online assessments designed by states and educators to evaluate students’ progress toward college and career readiness. And it helped states fund strong new curricula, instructional materials, and professional development resources tied to these new standards, all now freely available to educators across the country.

Finally, I roundly reject the suggestion, as stated by Rick Hess, that “Race to the Top may have done as much to retard as to advance its laudable goals.” Detractors quote one another and cite oversight reports’ minor findings out of context, but offer no evidence that Race to the Top slowed adoption or implementation, much less retarded student achievement. And while it’s premature to reach any conclusions about Race to the Top’s impact on student outcomes, ambitious Race to the Top adopters, such as Tennessee and the District of Columbia, are posting encouraging student gains.

On balance and despite its imperfections, Race to the Top spurred important work that had a significant impact, both in states that won Race to the Top and in states that did not. All 46 state applicants and D.C. developed comprehensive education agendas to which their stakeholders were committed. States changed laws and regulations in an attempt to create policy environments that were more conducive to innovation and improvement. Many state agencies modernized, reorganizing around the work of helping districts and students succeed rather than around the work of passing funds down and compliance reports up. Access to technology increased, new materials were developed, and an ethos of collective learning and improvement started to emerge.

Governors and commissioners are leading their states through some of the biggest education changes since desegregation, spurred in part by Race to the Top. Neither the states nor the federal government got everything right. This is hard work; it’s disruptive, messy, and sometimes uncomfortable; and states and districts struggle to build the capacity needed for implementation. But I am hopeful that, on the other side of this hard work, states will find that they’ve changed the trajectory of learning for their students for the better. That will be the true indicator of success.

This is part of a forum on Race to the Top. For an alternate take, please see “Lofty Promises But Little Change for America’s Schools” by Frederick M. Hess.

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Weiss, J., and Hess, F.M. (2015). What Did Race to the Top Accomplish? Education Next, 15(4), 50-56.

Lofty Promises But Little Change for America’s Schools https://www.educationnext.org/lofty-promises-little-change-americas-schools/ Tue, 14 Jul 2015 00:00:00 +0000 http://www.educationnext.org/lofty-promises-little-change-americas-schools/ In July 2009, it wasn’t just about the money. The $4 billion (to be spent over four years) amounted to less than 1 percent of what K‒12 schooling spends each year.


In July 2009, it wasn’t just about the money. The $4 billion (to be spent over four years) amounted to less than 1 percent of what K‒12 schooling spends each year. But Obama administration PR and the allure of free money combined to turn the exercise into catnip for state leaders. Media outlets were infatuated: Education Week ran stories with titles like “Racing for an Early Edge,” and national newspapers ran op-eds with headlines such as USA Today’s “Race to the Top Swiftly Changes Education Dynamic” (penned by former Republican Senate majority leader Bill Frist). A news search finds more than 19,000 mentions in 2009‒10, dwarfing even the mentions of “single-payer health care” in the midst of the Obamacare debates!

Some of the enthusiasm was certainly deserved. Race to the Top was fueled by admirable intentions, supervised by talented people, and reflected a great deal of sensible thinking on school improvement. In theory, it had much to recommend it.

In practice, Race to the Top was mostly a product of executive branch whimsy. The ARRA specified only that the federal government should encourage states to improve data systems, adopt “career-and-college-ready” standards and tests, hire great teachers and principals, and turn around low-performing schools. Beyond that, the Obama administration enjoyed enormous discretion. It could have designed a program that told the states, “Give us your best ideas, and we’ll fund the states that are pioneering the most promising approaches.” (Some thoughtful federal officials suggest such an approach isn’t viable—that prescriptive federal requirements are essential for political and practical reasons. That even the brightest minds can’t design a program to spur “innovation” except by relying on top-down directives highlights the problematic nature of the enterprise.)

Instead, the administration proposed 19 “priorities” that states seeking Race to the Top funds would be required to address. States could earn points in each category by promising to follow administration dictates, with the most successful states winning the cash. Few of the priorities entailed structural changes. Instead, they mostly emphasized things like professional development, ensuring an “equitable distribution” of good teachers and principals, “building strong statewide capacity,” “making education funding a priority,” and so on. Perhaps most fatefully, states could ace 3 of the 19 priorities by promising to adopt the brand-new Common Core and its federally funded tests.

Race to the Top was driven by a bureaucratic application process. The demands were so onerous that the Gates Foundation offered $250,000 grants to 16 favored states to help hire consultants to pen their grant applications. Racing to meet program deadlines, states slapped together proposals stuffed with empty promises. States promised to adopt “scalable and sustained strategies for turning around clusters of low-performing schools” and “clear, content-rich, sequenced, spiraled, detailed curricular frameworks.” Applications ran to hundreds of jargon-laden pages, including appendices replete with missing pages, duplicate pages, and everything from Maya Angelou’s poetry to letters of support from anyone who might sign a paper pledge. As one reviewer described it to me, “We knew the states were lying. The trick was figuring out who was lying the least.”

The competition rewarded grant-writing prowess and allegiance to the fads of the moment. Indeed, a number of the dozen winners clearly trailed the pack on the hard-edged reforms that Race to the Top was supposedly seeking to promote. When it came to state data systems, charter school laws, and teacher policy, winning states like Ohio, Hawaii, Maryland, and New York finished well back in the pack on rankings compiled by the Data Quality Campaign, the National Alliance for Public Charter Schools, and the National Council on Teacher Quality. When announcing round-one winners Tennessee and Delaware in March 2010, U.S. Secretary of Education Arne Duncan took pains to note that the two states had nearly 100 percent sign-offs from their local teachers unions. Reviewers took the hint, and states like Colorado and New Jersey got hammered for not collecting enough unenforceable assurances from their unions.

In the end, the effort suffered for its emphasis on promises rather than accomplishments, ambiguous scoring criteria, and murky process for selecting and training judges. Conservative analyst Chester E. Finn Jr. concluded that the review process didn’t reflect “what’s really going on in these states and the degree of sincerity of their reform convictions.” The reliance of winning states on outside consultants and grant writers also meant that the commitment of key legislators, civic leaders, or education officials to the promised reform agenda could be pretty thin.

Every one of the dozen winning states has come up short on its promises. As early as June 2011, the U.S. Government Accountability Office (GAO) reported that the dozen Race to the Top winners had already changed their plans 25 times. That same GAO report noted that officials were beset by challenges that included a “difficulty identifying and hiring qualified staff and complying with state procedures for awarding contracts…. Officials in the states we visited—Delaware, New York, Ohio, and Tennessee—said they experienced other challenges that led to months-long delays in implementing 13 of 29 selected RTT projects.” Hawaii’s continued failure to do what it had promised on teacher evaluation earned it “high-risk” status in 2011. By that early date, Florida had already made more than a dozen changes in promised deadlines, including a multiyear delay in teacher evaluation and a one-year delay in training principals for turnaround schools.

In 2012, the Obama-friendly Center for American Progress (CAP) reported, “Every state has delayed some part of their grant implementation.” As they sought to hit federal timelines, states fumbled on everything from the Common Core to teacher evaluation. As one Florida reporter told CAP researchers, “Only a handful of districts feel like they’re prepared to do [new teacher evaluations]. Most feel like they’re rushing.”

The Economic Policy Institute observed in 2013, “A review of the student-outcome targets set by states…reveals that all are extremely ambitious, but virtually none is achievable in any normal interpretation of that term.”

Despite a mediocre track record of school improvement, Ohio was a winner, partly for its “simple, yet bold, long-term aspirations,” including “a near-100% high school graduation rate from schools teaching at internationally competitive standards,” elimination of achievement gaps, and higher-ed completion rates “that are among the highest in the nation and world.” In spring 2015, the Columbus Dispatch observed, “Four years and $400 million later, Ohio has met one of five goals for the federal Race to the Top grant program. The state…fell short of reducing achievement gaps for minority students, improving reading and math scores as compared with the best-performing states, and increasing college enrollment. Although most goals were not achieved, state education officials focused on the positive in their final Race to the Top report.” Ohio still received its full complement of federal Race to the Top funds.

For all of his threats and bluster, Secretary Duncan has never withheld a nickel from a Race to the Top winner as a result of these violations. (As of April 2015, the U.S. Department of Education was still temporarily withholding a final $10 million earmarked for Georgia because officials had quibbles with elements of the state’s performance-based compensation system. But by this point, Georgia had already been on Duncan’s naughty list since 2012 without consequence.)

As Drew University political scientist Patrick McGuinn noted in 2010, “It is one thing for RTT to secure promises of state action, another thing for states to deliver promised action, and another thing entirely for their action to result in improvements in educational outcomes.”

So, what lessons can we draw five years on?

First, Do No Harm. The need to pursue proposals like Common Core testing and test-based teacher evaluation on federally determined timetables wound up creating new divisions and supersizing blowback. For instance, the Common Core, which might have been a collaborative effort of 15 or maybe 20 enthusiastic states absent federal “encouragement,” became a quasi-federal initiative with lots of halfhearted participants. In pushing states to hurriedly adopt new evaluation systems that specifically used test results to gauge teachers, Race to the Top also ensured that many not-ready-for-primetime systems would be hurriedly rolled out and entangled with the Common Core and its associated tests. The most telling example may be in New York, where the simultaneous effort to change testing and accountability fueled intense concerns about how the tests would affect teacher job security, engendering fierce backlash and strong teachers union support for the “opt-out” movement.

Build Reliable Infrastructure. It was no fault of the Obama administration, but the infrastructure to do Race to the Top well simply didn’t exist. Criteria for who should judge and how they should do so were made up on the fly. The need to do this in a hurry, along with conflict-of-interest rules, made it hard to assemble a first-rate pool of reviewers. U.S. Department of Education officials also had to combat concerns about the review process appearing too “political.” In the future, clear norms regarding reviewers, criteria, use of evidence, and institutional autonomy should be established before such programs are created.

Execution Should Be the Measure. The right measure for a program like Race to the Top is not how many states promise to undertake an action, but how many do it well. This is especially important when the goals are admirable but ambiguous, like improving professional development, educator preparation, or turnaround efforts. Whether states change these things matters much less than how they do so. That caution was too often ignored at the time, and it has been too often overlooked in the aftermath.

Seek to Eliminate Impediments. Race to the Top’s emphasis on expansive promises forced reviewers to try to divine the hearts and minds of state officials. A simpler, more fruitful course is to emphasize observable actions, particularly those that remove obsolete impediments or regulations. Such a course reflects a more humble vision of the federal role—one that believes Uncle Sam is better at helping states extricate themselves from yesterday than at telling them how to succeed tomorrow. In the case of Race to the Top, while much attention was paid to accomplishments like lifting charter caps or removing data firewalls, such measures accounted for well under one-quarter of Race to the Top’s points.

Reward Pioneers. While its marketing suggested otherwise, in practice Race to the Top used funds and public pressure to induce states to promise to adopt a slate of prescriptions. In many places, this led to a rushed adoption and ensured that many policies were executed poorly, undermining public confidence and support. That is a poor strategy for prompting innovation or improvement.

Beware of Opportunity Costs. The Obama administration dangled $4 billion in federal funds at the height of the Great Recession and linked them to states demonstrating that they’d “prioritize” education spending. At a time when states could have been using the crisis to focus on finally doing something about underfunded pensions or much-needed belt-tightening, they were preoccupied with dreaming up new spending proposals. Opportunity costs don’t just come in policies pursued and tabled, but also in the debates that policymakers should have but don’t.

The public imagination is often captured by the fact of a federal program, but what matters in a realm as complex as schooling is how programs actually work. In 2009 and 2010, proponents embraced Race to the Top as a singular triumph—enthralled by the symbolic statement that reformers had stormed the nation’s capital. Yet, five years on, even a well-wisher can conclude that Race to the Top may have done as much to retard as to advance its laudable goals. The admonition that “it’s not how you start, it’s how you finish” may never be more relevant than when Washington has bold ideas about how to improve America’s schools.

This is part of a forum on Race to the Top. For an alternate take, please see “Innovative Program Spurred Meaningful Education Reform” by Joanne Weiss.

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Weiss, J., and Hess, F.M. (2015). What Did Race to the Top Accomplish? Education Next, 15(4), 50-56.

Results of President Obama’s Race to the Top https://www.educationnext.org/results-president-obama-race-to-the-top-reform/ Tue, 14 Jul 2015 00:00:00 +0000 http://www.educationnext.org/results-president-obama-race-to-the-top-reform/ Win or lose, states enacted education reforms


Caught between extraordinary public expectations and relatively modest constitutional authority, U.S. presidents historically have fashioned all sorts of mechanisms—executive orders, proclamations, memoranda—by which to move their objectives forward. Under President Barack Obama’s administration, presidential entrepreneurialism has continued unabated. Like his predecessors, Obama has sought to harness and consolidate his influence outside of Congress. He also has made contributions of his own to the arsenal of administrative policy devices. The most creative, perhaps, is his Race to the Top initiative, which attempted to spur wide-ranging reforms in education, a policy domain in which past presidents exercised very little independent authority.

This study examines the effects of Obama’s Race to the Top on education policymaking around the country. In doing so, it does not assess the efficacy of the particular policies promoted by the initiative, nor does it investigate how Race to the Top altered practices within schools or districts. Rather, the focus is the education policymaking process itself; the adoption of education policies is the outcome of interest.

No single test provides incontrovertible evidence about its causal effects. The overall findings, however, indicate that Race to the Top had a meaningful impact on the production of education policy across the United States. In its aftermath, all states experienced a marked surge in the adoption of education policies. This surge does not appear to be a statistical aberration or an extension of past policy trends. Legislators from all states reported that Race to the Top affected policy deliberations within their states. The patterns of policy adoptions and legislator responses, moreover, correspond with states’ experiences in the Race to the Top competitions.

In the main, the evidence suggests that by strategically deploying funds to cash-strapped states and massively increasing the public profile of a controversial set of education policies, the president managed to stimulate reforms that had stalled in state legislatures, stood no chance of enactment in Congress, and could not be accomplished via unilateral action.

Asking States to Compete

On February 17, 2009, President Obama signed into law the American Recovery and Reinvestment Act of 2009 (ARRA), legislation that was designed to stimulate the economy; support job creation; and invest in critical sectors, including education, in the aftermath of the Great Recession.  Roughly $100 billion of the ARRA was allocated for education, with $4.35 billion set aside for the establishment of Race to the Top, a competitive grant program designed to encourage states to support education innovation.

From the outset, the president saw Race to the Top as a way to induce state-level policymaking that aligned with his education objectives on college readiness, the creation of new data systems, teacher effectiveness, and persistently low-performing schools. As he noted in his July 2009 speech announcing the initiative, Obama intended to “incentivize excellence and spur reform and launch a race to the top in America’s public schools.”

The U.S. Department of Education (ED) exercised considerable discretion over the design and operation of the Race to the Top competition. Within a handful of broad priorities identified by Congress in ARRA, the Obama administration chose which specific policies would be rewarded, and by how much; how many states would receive financial rewards, and in what amount; and what kinds of oversight mechanisms would be used to ensure compliance. Subsequent to the ARRA’s enactment, Congress did not issue any binding requirements for the design or administration of the program. From an operational standpoint, Race to the Top was nearly entirely the handiwork of ED.

Race to the Top comprised three distinct phases of competition. Both Phase 1 and Phase 2 included specific education-policy priorities on which each applicant would be evaluated. States were asked to describe their current status and outline their future goals in meeting the criteria in each of these categories. The education policy priorities spanned six major scoring categories and one competitive preference category (see Table 1).

[Table 1]

To assist states in writing their applications, ED offered technical assistance workshops, webinars, and training materials. Additionally, nonprofit organizations such as the National Council on Teacher Quality published reports intended to help states maximize their likelihood of winning an award. Nonetheless, substantial uncertainty shrouded some components of the competition, including the exact grading procedures, number of possible winners, total allocated prize amount per winning state, and prize allocation mechanism and timeline.

When all was said and done, 40 states and the District of Columbia submitted applications to Phase 1 of the competition. Finalists and winners were announced in March 2010. Phase 1 winners Tennessee and Delaware were awarded roughly $500 million and $120 million, respectively, which amounted to 10 percent and 5.7 percent of the two states’ budgets for K‒12 education for a single year. Figure 1 identifies all winners and award amounts.

Thirty-five states and the District of Columbia submitted applications to Phase 2 of the competition in June 2010. Ten winners were each awarded prizes between $75 million and $700 million in Phase 2.

Having exhausted the ARRA funds, the president in 2011 sought additional support for the competition. That spring, Congress allotted funds to support a third phase, in which only losing finalists from Phase 2 could participate. A significantly higher percentage of participating states won in Phase 3, although the amounts of these grants were considerably smaller than those from Phases 1 and 2. On December 23, 2011, ED announced Phase 3 winners, which received prizes ranging from $17 million to $43 million.

States that won Race to the Top grants were subject to a nontrivial monitoring process, complete with annual performance reports, accountability protocols, and site visits. After receiving an award letter, a state could immediately withdraw up to 12.5 percent of its overall award. The remaining balance of funds, however, was available to winning states only after ED received and approved a final scope of work from the state’s participating local education agencies. Each winning state’s drawdown of funds, then, depended upon its ability to meet the specific goals and timelines outlined in its scope of work.

Impact on State Policy

In its public rhetoric, the Obama administration emphasized its intention to use Race to the Top to stimulate new education-policy activity. How would we know if it succeeded? To identify the effects of Race to the Top on state-level policymaking, ideally one would take advantage of plausibly random variation in either eligibility or participation. Unfortunately, neither of these strategies is possible, as all states were allowed to enter the competition and participation was entirely voluntary. To discern Race to the Top’s policy consequences, therefore, I exploit other kinds of comparisons: of policy changes in the 19 winners (including the District of Columbia), the 28 losers, and the 4 states that did not participate; of the commitments different states made in their applications with their subsequent policymaking activities; and of changes in policymaking at different intervals of the competitions.

Policy Adoptions. Perhaps the most telling piece of evidence related to the effect of Race to the Top is the number of relevant education reforms adopted as state policy in the aftermath of the competition’s announcement. To determine that number, my research team and I documented trends in actual policy enactments across the 50 states and the District of Columbia. We tracked numerous policies that clearly fit the various criteria laid out under Race to the Top, covering such topics as charter schools, data management, intervention into low-performing schools, and the use of test scores for school personnel policy, as well as three additional control policies—increased high-school graduation requirements, the establishment of 3rd-grade test-based promotion policies, and tax credits to support private-school scholarships—that were similar to Race to the Top policies but were neither mentioned nor rewarded under the program (see sidebar for the specific policies tracked for Race to the Top applications and state adoptions).

Across all 50 states and the District of Columbia, we examined whether a state legislature, governor, school board, professional standards board, or any other governing body with statewide authority had enacted a qualifying policy each year between 2001 and 2014. Policies that were merely proposed or out for comment did not qualify. We also examined whether each state in its written application claimed to have already enacted each policy or expressed its clear intention to do so, as well as the number of points the application received in the scoring process.

Illinois state senator Kimberly Lightford noted, “I think Race to the Top was our driving force to get us all honest and fair, and willing to negotiate.”

These data reveal that the Race to the Top competitions did not reward states exclusively on the basis of what they had already done. Race to the Top, in this sense, did not function as an award ceremony for states’ past accomplishments. Rather, both states’ past accomplishments and their stated commitments to adopt new policies informed the scores they received—and hence their chances of winning federal funding.

We also found that states around the country enacted a subset of these reform policies at a much higher rate in the aftermath of Race to the Top than previously. Between 2001 and 2008, states on average enacted about 10 percent of the reform policies we tracked. By 2014, however, they had enacted 68 percent. And during this later period, adoption rates increased every single year. At the rate established by preexisting trends, it would have taken states multiple decades to accomplish what, in the aftermath of the competitions, was accomplished in less than five years.
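As a back-of-the-envelope check on that claim, the sketch below uses only the two averages reported here (about 10 percent of tracked policies by 2008, 68 percent by 2014); it is illustrative arithmetic, not the study's underlying data.

```python
# Back-of-the-envelope check of the "multiple decades" claim, using only the averages
# reported in the article; the underlying state-level data are not shown here.
pre_period_years = 8        # 2001 through 2008
pre_period_share = 0.10     # ~10 percent of tracked policies enacted, on average, by 2008
post_period_share = 0.68    # ~68 percent enacted by 2014

annual_rate_pre = pre_period_share / pre_period_years                   # ~1.25 points per year
years_needed = (post_period_share - pre_period_share) / annual_rate_pre

print(f"Pre-2009 pace: {annual_rate_pre:.2%} of tracked policies per year")
print(f"Years to reach 68 percent at that pace: {years_needed:.0f}")    # ~46 years, i.e., several decades
```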

Policy Adoptions in Winning, Losing, and Nonapplying States. The surge of legislative activity was not limited to states that were awarded Race to the Top funding. Figure 2 illustrates the policy adoption activity of three groups of states: those that won in one of the three phases of competition; those that applied in at least one phase but never won; and those that never applied. In nearly every year between 2001 and 2008, policy adoption rates in these groups were both low and essentially indistinguishable from one another. In the aftermath of Race to the Top’s announcement, however, adoption rates for all three groups increased dramatically. By 2014, winning states had adopted, on average, 88 percent of the policies, compared to 68 percent among losing states, and 56 percent among states that never applied.

[Figure 2]

Regression analyses that account for previous policy adoptions and other state characteristics show that winning states were 37 percentage points more likely to have enacted a Race to the Top policy after the competitions than nonapplicant states. While losing states were also more likely than nonapplicants to have adopted such policies, the estimated effects for winning states are roughly twice as large. Anecdotal media reports, as well as interviews conducted by my research team, suggest that the process of applying to the competitions by itself generated some momentum behind policy reform. Such momentum, along with the increased attention given to Race to the Top policies, may explain why those states that did not even apply to the competition nonetheless began to enact these policies at higher rates.
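The article does not publish the underlying panel or the exact specification, so the sketch below is only a rough illustration of a regression of this general kind: it simulates a state-by-policy dataset from the 2014 adoption rates quoted above and fits a linear probability model with standard errors clustered by state. The group sizes, the assumed 14 tracked policies, and all variable names are assumptions, not the study's.

```python
# Illustrative only: simulated data stand in for the study's unpublished state-by-policy panel.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
states = pd.DataFrame({
    "state": range(51),
    # roughly the groups described in the text: winners (including D.C.), losers, nonapplicants
    "group": ["winner"] * 19 + ["loser"] * 28 + ["nonapplicant"] * 4,
})
panel = states.loc[states.index.repeat(14)].reset_index(drop=True)   # 14 tracked policies per state (assumed)
rate_2014 = {"winner": 0.88, "loser": 0.68, "nonapplicant": 0.56}    # 2014 adoption rates reported in the text
panel["adopted_by_2014"] = rng.binomial(1, panel["group"].map(rate_2014))

# Linear probability model comparing winners and losers to nonapplicant states,
# with standard errors clustered by state.
lpm = smf.ols("adopted_by_2014 ~ C(group, Treatment('nonapplicant'))", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["state"]})
print(lpm.params)   # the winner coefficient approximates the gap in adoption probability vs. nonapplicants
```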

Winning states were also more likely to have adopted one of the control policies, which is not altogether surprising, given the complementarities between Race to the Top policies and the chosen control policies. Still, the estimated relationship between winning and the adoption of Race to the Top policies is more than twice as large as that between winning and the adoption of control policies.

My results also suggest that both winning and losing states were especially likely to adopt policies about which they made clear commitments in their Race to the Top applications. Though the effects are not always statistically significant, winning states appear 21 percentage points more likely to adopt a policy about which they made a promise than one about which they did not; they were also 36 percentage points more likely to adopt a policy about which they made an explicit commitment than were nonapplying states, which, for obvious reasons, made no promises at all. Losing states, meanwhile, were 31 percentage points more likely to adopt a policy on which they had made a promise than one on which they had not.

Closer examination of winning, losing, and nonapplying states illuminates how Race to the Top influenced policymaking in all states, regardless of their status. One winning state, Illinois, submitted applications in all three phases before finally winning. Its biggest policy accomplishments, however, happened well before it received any funds from ED. The rapid enactment of Race to the Top policies in Illinois reflected a concerted effort by the state government to strengthen its application in each competition. Before the state even submitted its Phase 1 application, Illinois enacted the Performance Evaluation Reform Act (PERA), a law that significantly changed teacher and principal evaluation practices.

After losing in Phase 1, Illinois went on to adopt several other Race to the Top policies prior to submitting Phase 2 and Phase 3 applications. The competition served as a clear catalyst for education reform in the state. As Illinois state senator Kimberly Lightford noted, “It’s not that we’ve never wanted to do it before. I think Race to the Top was our driving force to get us all honest and fair, and willing to negotiate at the table.”

Whereas persistence eventually paid off for Illinois, California’s applications never resulted in Race to the Top funding. As in Illinois, lawmakers in California adopted several significant education reforms in an effort to solidify their chances of winning an award. Prior to the first-round deadline, the director of federal policy for Democrats for Education Reform noted that in California, “there’s been more state legislation [around education reform] in the last eight months than there was in the entire seven or eight years of No Child Left Behind, in terms of laws passed.”

California was not selected as a Phase 1 or Phase 2 winner, and a change in the governor’s mansion prior to Phase 3 meant the state would not compete in the last competition. While the state never did receive any funding, California did not revoke any of the policies it had enacted during its failed bids.

Although Alaska did not participate in Race to the Top, the state adopted policies that either perfectly or nearly perfectly aligned with Race to the Top priorities. Governor Sean Parnell acknowledged the importance of keeping pace with other states.

What about the four states that never applied for Race to the Top funding? By jump-starting education policy reform in some states, the competition may have influenced policy deliberations in others. Alaska provides a case in point. When Race to the Top was first announced, Alaska’s education commissioner, Larry LeDoux, cited concerns about federal government power and the program’s urban focus as reasons not to apply.

Still, in the years that followed, Alaska adopted a batch of policies that either perfectly or nearly perfectly aligned with Race to the Top priorities. One of the most consequential concerned the state’s teacher-evaluation system. In 2012, the Alaska Department of Education approved changes that required that 20 percent of a teacher’s assessment be based on data from at least one standardized test, a percentage that would increase to 50 by the 2018‒19 school year. In defending the rule, Governor Sean Parnell recognized the importance of keeping pace with other states’ policy achievements: “Nearly 20 states in the nation now weight at least 33 percent, and many 50 percent, of the performance evaluation based on student academic progress. I would like Alaska to lead in this, not bring up the rear with 20 percent of an evaluation focused on student improvement.” Those 20 states that had made the changes, it bears emphasizing, had participated in Race to the Top.
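As a toy illustration of how such a weighting works in practice, the sketch below blends a classroom-observation score with a test-based growth score under the 20 percent and, later, 50 percent weights described above; the component scores and function name are hypothetical.

```python
# Hypothetical example of a weighted composite rating; only the 20% and 50% weights
# on test-based student growth come from the Alaska rule described above.
def composite_rating(observation_score, growth_score, growth_weight):
    """Blend a classroom-observation score with a test-based student-growth score."""
    return (1 - growth_weight) * observation_score + growth_weight * growth_score

print(composite_rating(3.4, 2.8, 0.20))  # early years: 20% of the rating from test-based growth, about 3.28
print(composite_rating(3.4, 2.8, 0.50))  # by 2018-19: 50% from test-based growth, about 3.10
```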

[Sidebar: policies tracked for Race to the Top applications and state adoptions]

Policymaker Perspectives. To further assess the influence of Race to the Top on state policymaking, I consulted state legislators. Embedded in a nationally representative survey of state legislators conducted in the spring of 2014 was a question about the importance of Race to the Top for the education policy deliberations within their states. Roughly one-third of legislators reported that Race to the Top had either a “massive” or “big” impact on education policymaking in their state. Another 49 percent reported that it had a “minor” impact, whereas just 19 percent claimed that it had no impact at all.

Lawmakers’ responses mirror my finding that Race to the Top influenced policymaking in all states, with the greatest impact on winning states. Winners were fully 36 percentage points more likely to say that Race to the Top had a massive or big impact than losers, who, in turn, were 12 percentage points more likely than legislators in states that never applied to say as much. If these reports are to be believed, Race to the Top did not merely reward winning states for their independent policy achievements. Rather, the competitions meaningfully influenced education policymaking within their states.

Even legislators from nonapplying states recognized the relevance of Race to the Top for their education policymaking deliberations. Indeed, a majority of legislators from states that never applied nonetheless reported that the competitions had some influence over policymaking within their states. Although dosages vary, all states appear to have been “treated” by the Race to the Top policy intervention.

From Policy to Practice. None of the preceding analyses speak to the translation of policy enactments into real-world outcomes. For all sorts of reasons, the possibility that Race to the Top influenced the production of education policy around the country does not mean that it changed goings-on within schools and districts.

Still, preliminary evidence suggests that Race to the Top can count more than just policy enactments on its list of accomplishments. As Education Next has reported elsewhere (see “States Raise Proficiency Standards in Math and Reading,” features, Summer 2015), states introduced more rigorous standards for student academic proficiency in the aftermath of Race to the Top. Moreover, they did so in ways that reflected their experiences in the competition itself.

Figure 3a tracks over a 10-year period the average rigor of standards in states that eventually won Race to the Top, states that applied but never won, and states that never applied. Throughout this period, eventual winners and losers looked better than nonapplicants. Before the competition, though, winners and losers looked indistinguishable from one another. Between 2003 and 2009, the rigor of their state standards declined at nearly identical rates and to identical levels. In the aftermath of Race to the Top, however, winning states rebounded dramatically, reaching unprecedented heights within just two years. While losing states showed some improvement, the reversal was not nearly as dramatic. Nonapplying states, meanwhile, maintained their relatively low standards.

The impact of Race to the Top on charter schools, which constituted a less significant portion of the competition, is not nearly so apparent. In winning states, higher percentages of public school students attend charter schools than in either losing or nonapplying states. But as Figure 3b shows, post-Race to the Top gains appear indistinguishable from the projections of previous trends. While Race to the Top may have helped sustain previous gains, even that seems unlikely: between 2003 and 2013, the three groups of states showed nearly constant gains in charter school enrollments.

Conclusions and Implications

With Race to the Top, the Obama administration sought to remake education policy around the nation. The evidence presented in this paper suggests that it met with a fair bit of success. In the aftermath, states adopted at unprecedented rates policies that were explicitly rewarded under the competitions.

States that participated in the competitions were especially likely to adopt Race to the Top policies, particularly those on which they made explicit policy commitments in their applications. These patterns of policy adoptions and endorsements, moreover, were confirmed by a nationally representative sample of state legislators who were asked to assess the impact of Race to the Top on education policymaking in their respective states.

Differences in the policy actions of winning, losing, and nonapplying states, however, do not adequately characterize the depth or breadth of the president’s influence. In the aftermath of Race to the Top, all states experienced a marked surge in the adoption of education policies. And legislators from all states reported that Race to the Top affected policy deliberations within their states.

While it is possible that Race to the Top appeared on the scene at a time when states were already poised to enact widespread policy reforms, several facts suggest that the initiative is at least partially responsible for the rising rate of policy adoption from 2009 onward. First, winning states distinguished themselves from losing and nonapplying states more by the enactment of Race to the Top policies than by other related education reforms. Second, at least in 2009 and 2010, Race to the Top did not coincide with any other major policy initiative that could plausibly explain the patterns of policy activities documented in this paper. (Obama’s selective provision of waivers to No Child Left Behind, a possible confounder, did not begin until later.) Finally, state legislators’ own testimony confirms the central role that the competitions played in the adoption of state policies between 2009 and 2014, either by directly changing the incentives of policymakers within applying states or by generating cross-state pressures in nonapplying states.

The surge of post-2009 policy activity constitutes a major accomplishment for the Obama administration. With a relatively small amount of money, little formal constitutional authority in education, and without the power to unilaterally impose his will upon state governments, President Obama managed to jump-start policy processes that had languished for years in state governments around the country. When it comes to domestic policymaking, past presidents often accomplished a lot less with a lot more.

William G. Howell is professor of American politics at the University of Chicago.

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Howell, W.G. (2015). Results of President Obama’s Race to the Top: Win or lose, states enacted education reforms. Education Next, 15(4), 58-66.

Wisconsin High Schools Learn from New PISA Test https://www.educationnext.org/wisconsin-high-schools-learn-new-pisa-test/ Thu, 25 Jun 2015 00:00:00 +0000 http://www.educationnext.org/wisconsin-high-schools-learn-new-pisa-test/ International comparison drives efforts to improve


The countryside 30 miles west of Milwaukee was a great place to escape to in the middle of the 20th century, and that’s just what Alfred Lunt and Lynn Fontanne wanted. The premier theater couple of that era made Ten Chimneys their home in Genesee Depot, Wisconsin. It was a secluded spot where many of the biggest stars of American and British entertainment joined them for peaceful getaways.

Kettle Moraine High School students with superintendent Patricia Deklotz

But in the early years of the 21st century, there are no getaways from the demands of a fast-paced world. On the same turf that Lunt and Fontanne envisioned as a retreat, school leaders see their students as upcoming members of a global economy.

“Our results are generally high, but compared to whom?” asked Patricia Deklotz, superintendent of the Kettle Moraine School District, which covers a large piece of once heavily rural land, where more subdivisions appear every year.

Kettle Moraine High School, about a half dozen miles from Ten Chimneys, aims to answer Deklotz’s question. Compared to surrounding schools? Compared to the rest of Wisconsin? How about compared to students around the globe, especially those in high-performing nations?

Kettle Moraine High is trying to make the global comparison and thereby improve student outcomes.

The vehicle is the OECD Test for Schools, a small but growing program in which samples of 15-year-old students at selected schools take a version of the Program for International Student Assessment (PISA) exam created by the Paris-based Organization for Economic Cooperation and Development (OECD). The original PISA tests were launched in 2000 and have become the most recognized basis for comparing the achievement of students across nations and, in some cases, regions or parts of nations around the globe. The tests are given to a random sample of high-school students in each “economy,” as the OECD labels them. PISA tests generally require two hours to complete the reading, math, and science portions. Questions aim to assess the critical-thinking and problem-solving abilities of students rather than specific skills. Students also answer questions about their attitudes toward learning and the learning environments at their schools, yielding insights into the strengths and weaknesses in different systems of schooling.

Starting with a trial run in 2012 that involved more than 100 U.S. schools, the OECD Test for Schools has been offered to individual American schools in an effort to provide local school administrators with an international benchmark. School-level results can be compared to those obtained by economies that administer the PISA. The campaign to enlist schools to administer the new OECD tests—and, more importantly, to make good use of the results—has been led by America Achieves, a New York‒based nonprofit that wants to “fire up” the education system to be more ambitious and effective in improving student achievement. “We need to get better faster,” Jon Schnur, the executive chairman, explained. The OECD work of America Achieves has been supported by several large foundations, including Bloomberg Philanthropies, the Kern Family Foundation (headquartered only a few miles from Kettle Moraine High School), and the William and Flora Hewlett Foundation.

Schnur said he is pleased by both the number of schools that have taken the test and by how they’re using the results. “We’ve seen hundreds of schools really actively involved in a learning community around this, eager to learn, modify what they’re doing,” he said.

Raising Expectations

Superintendent Deklotz said her district was eager to join the OECD effort from the start. “This is a very good school district,” she said. Good teachers, good kids. The challenges of urban education seem far away: only about 10 percent of the 1,300 students at Kettle Moraine High qualify for free or reduced-price lunch, and about 90 percent are white.

[Figure 1]

The percentage of Kettle Moraine students rated as proficient or advanced in reading and math on Wisconsin’s standardized tests has been consistently above the state average in recent years (see Figure 1). The average score on the ACT college admission test was above the state average in 2013‒14, when more than 96 percent of Kettle Moraine students graduated from high school in four years (in Milwaukee, the figure was 61 percent). By the second fall after graduation, 75 percent of Kettle Moraine students had enrolled in postsecondary education (compared to 44 percent in Milwaukee).

But there was a bit of a “Lake Wobegon” problem in the district, said Steve Plum, principal of Kettle Moraine High School of Health Sciences, a charter school within the larger high school. As the Prairie Home Companion radio program jokes, all the children were above average, and it was a challenge to convince some staff members, parents, and others of the need to aim higher.

The local school board was not among those needing convincing. It set a goal of having graduates meet “international expectations.” Deklotz and her team were intent on pursuing that. “I wanted to have a lever to help my staff understand the need for continuous improvement with some urgency,” Deklotz said. “I wanted to say, ‘Guys, we’re good, but we can be better.’” She didn’t want the district to go crazy over the drive to improve, but she also didn’t want people to be unduly content.

Deklotz tried to have the high school included in the 2012‒13 trial run of the OECD test. It wasn’t selected, with no reason given.

But for 2013‒14, the Kern Family Foundation, a leading funder of efforts nationwide to increase the international competitiveness of students, stepped in. Kern offered to cover the costs ($10,000 to $12,000 per school) for first-year participation by Wisconsin schools.

Jack Linehan, a retired suburban Milwaukee school superintendent, was hired to recruit schools. Linehan said it was not an easy sell. Many schools said they were having problems with “test fatigue” and did not want to take on another test, even one that might yield useful results, was fairly brief (about three and a half hours on one day), and involved only a sample of the school’s students (50 to 85 15-year-olds). In the end, 13 Wisconsin schools took part.

ProHealth Care’s Dr. David J. Dominguese conducts a seminar on x-rays and human anatomy

As for the testing process itself, there were “bumps on the road typical of first-time testing,” Deklotz said. It was not easy to recruit Kettle Moraine High students to take the test; a sample of 85 students was chosen, but only 59 actually took part. Tue Halgreen, the OECD’s project manager for the Test for Schools, said, “We would normally expect around 75 students to show up on the day of testing. If less than 75 students take the test, then the confidence intervals become larger,” increasing the amount of caution necessary in interpreting results.
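A rough calculation illustrates why the shortfall matters. Assuming a simple random sample and a PISA-like standard deviation of 100 scale-score points (both assumptions for illustration; this is a simplification of the OECD's own methodology), the 95 percent margin of error around a school's average widens as the sample shrinks:

```python
# Illustrative only: assumes a simple random sample and a PISA-like standard deviation of 100.
import math

def margin_of_error(n, sd=100.0, z=1.96):
    """Approximate 95% margin of error for a school's mean scale score."""
    return z * sd / math.sqrt(n)

for n in (85, 75, 59):
    print(f"n={n}: +/-{margin_of_error(n):.0f} scale-score points")
# Roughly +/-21 points with the intended 85 students, +/-23 at 75, and +/-26 with the 59 who showed up.
```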

Kettle Moraine administrators said some students didn’t take part because they were focused at the time on upcoming Advanced Placement tests and the OECD test carried no individual consequences for them; results are not reported for each student. Administrators also said there were indications that some students who took the test did not give it their best effort.

Nonetheless, Deklotz said the results were useful, even if they could not be stated with the same confidence a larger sample would have brought. She said the school learned a lot, both from the results on reading, math, and science and from answers to an additional set of questions asking students about such things as their engagement in learning and the learning climate in their classes.

Overall, the results supported the view that Kettle Moraine was good, but not that good, compared to global high performers. PISA describes results on a scale with six levels. In reading and science, none of the Kettle Moraine High students were on levels five or six, the top ratings. Only 10 percent placed in the top levels on math. On the other hand, few placed on level one or below; the large majorities—83 percent in reading, 81 percent in math, and 100 percent in science—ranked in the middle. Among schools in the United States participating in the OECD Test for Schools in the first year, Kettle Moraine’s scores were above average in math and science, and a bit below the middle of the pack in reading.

If the 2012 PISA tests were the comparison point, Kettle Moraine High School’s 2014 scale score in math would place it among the top five countries, while its 2014 score in reading would place it in the bottom ten (see Figure 2).

Jeff Walters, principal of Kettle Moraine High, said the results framed things clearly: “Where are we at and are we satisfied with that? And we’re not.”

[Figure 2: Kettle Moraine 2014 scale scores compared with 2012 PISA country results]

Improving the Learning Culture

The results from the students’ responses to the learning culture questions on the OECD Test for Schools did provide some valuable help, Deklotz said.

For example, students were asked to what degree they agreed with the statement, “I get along with most of my teachers.” Deklotz said the expectation of teachers was that nearly 100 percent of students would agree. In higher-performing schools that administered the test, the percentage is indeed very high. But at Kettle Moraine, the 2014 result was closer to 80 percent—like the overall scores, good, but not great. Deklotz said this was a signal that the staff needed to work on building relationships with students.

A second example: students were asked how often their math classes begin long after the bell rings. About 70 percent said that was not an issue in their classes; in higher-performing schools the percentage is closer to 85 percent. “We would like to be closer to what the highest-performing schools reported,” Deklotz said.

Instructor Rebeccah Schmidt (left) assists a KMHS student in a biomedical sciences lab

Kettle Moraine educators met before the beginning of the 2014‒15 school year to review how they were doing, including the OECD test data, and to consider how to do better. One change made for 2014‒15 was a revamped approach to using “advisory” periods, when 12 to 15 students meet with a teacher (sort of like homeroom periods of days gone by). The goal was to have more dialogue between a teacher and students over perceptions of what is going on in school and how to turn those impressions in more positive directions.

Michael Comiskey, principal of Kettle Moraine Middle School and director of math learning for the district, said that when he was a student, teachers never talked about what students were learning and why. Now, “we need to have more conversations with students about our approach,” he said. Learning needs to be more personalized to fit each student, Comiskey said, and the OECD test results help shape that.

The Kettle Moraine district has placed a big bet on improving outcomes through charter schools that serve elementary, middle school, and high school students. The charter schools generally offer more individualized and unconventionally structured programs aimed at increasing student engagement. In addition to what is called the “legacy school,” the high school building houses three charter schools: the High School of Health Sciences; KM Perform (emphasizing arts and performance); and KM Global (emphasizing individually guided, project-oriented work).

School leaders hope that the range of options will lead students to deeper, more engaged learning and higher achievement.

Weighing Costs and Benefits

The experience of Kettle Moraine and the Wisconsin schools as a whole illuminates hurdles facing advocates of wider use of the new OECD test.

Unhappiness among educators and students about standardized testing, including the sheer number of such tests, is one big hurdle. The context includes growing opposition nationwide to standardized tests and the instructional time they consume, as well as uncertainty about future federal and state accountability and testing policies. Wisconsin’s spring 2015 experience with its first (and, as it turned out, last) round of Common Core testing through its chosen provider, the Smarter Balanced Assessment Consortium, was rocky at best.

Leaders of many schools where the OECD Test for Schools was given in 2014 chose not to take part again in 2015. Of the 13 Wisconsin districts involved in the first year, only two (including Kettle Moraine) took part in the second year, and one other district took the test for the first time. Some may choose to do it every other year; for others, once seems like enough to get the input they want.

High-performing district Whitefish Bay, in Milwaukee’s suburbs, was one that did not participate in the second year. Maria Kucharski, director of teaching and learning for the district, said there were positive aspects to the experience. “Having an idea of where we are globally can be a lighthouse to ensure we are doing our best academically in preparing our students,” she said. “This should be done with caution, however, because we don’t view our goal of educating children to be based on achievement as much as many other skills, like critical thinking, collaboration, and a spirit and drive for entrepreneurship.” She added that school leaders liked the collaborative work with other districts that America Achieves facilitated.

Throughout the school, small-group work spaces and multifunctional furniture and design facilitate collaboration and exploratory learning.

But Whitefish Bay prefers tests that report on the performance of individual students, and it would rather use school time for instruction, Kucharski said. “At this point in time, Whitefish Bay does not feel that the benefits of the assessment that we identified outweigh the cost in instructional learning time,” she said.

Financial factors also are a hurdle to taking part in the OECD test. Is $11,000 a lot of money when it comes to the total budget of a high school? Many schools in Wisconsin are saying yes. State aid to schools has been reduced, and the “revenue cap” that the state imposes on the combination of aid and property tax income has gone down or stayed close to flat in recent years. The battle in 2011 between Governor Scott Walker and public employee unions, which attracted extensive national attention, produced some savings as some health insurance and retirement contributions were shifted to public employees. But the impact of the savings is wearing off, and school budgets are stressed. As much as she likes the OECD test, even Deklotz said it will be a year-to-year decision whether to take part. “While it is an expense, the data we receive back is well worth the investment,” Deklotz said. “The possibility that OECD will provide an online version at a reduced cost is very encouraging.”
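For a sense of scale, the fee can be translated into per-student terms using figures already cited in this article: the $10,000-to-$12,000 per-school cost, Kettle Moraine High’s roughly 1,300 students, and the 59 students who sat for the 2014 test. The arithmetic below is only an illustration.

# Back-of-the-envelope cost arithmetic, using figures cited in this article.
fee_low, fee_high = 10_000, 12_000   # reported per-school cost range, in dollars
enrollment = 1_300                   # approximate Kettle Moraine High enrollment
test_takers = 59                     # students who actually took the 2014 test

print(f"Per enrolled student: ${fee_low / enrollment:.2f} to ${fee_high / enrollment:.2f}")
print(f"Per test-taker: ${fee_low / test_takers:.2f} to ${fee_high / test_takers:.2f}")

Spread across the whole school, the fee works out to less than $10 per enrolled student, but at roughly $170 to $200 per actual test-taker it still competes with other claims on tight budgets.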

America Achieves leaders say districts around the country are considering every-other-year use, although the organization recommends annual participation so that change can be monitored better.

The Kern Foundation remains supportive of the OECD initiative, but its funding was intended to pay only for first-year costs for Wisconsin schools. Ryan S. Olson, director of the K12 program for the foundation, said, “There doesn’t seem to be any question that there is value in a tool such as this…. A lot of the people involved think this needs to become a movement.” As far as continuing to pay for schools to take part, he said, “Our intention was and is to have a tool that schools can use and afford that can become part of their regular system.”

America Achieves leaders say they are working with OECD in hopes of finding ways to reduce the cost of participation.

Year Two

Kettle Moraine made a few changes in the second year of the test. Administrators enlisted two groups of students: 68 students from the legacy high school and 43 students from the charter school KM Perform. (KM Perform has fewer than 150 students, and the small number of test-takers brought a warning in the OECD report that results “need to be interpreted with caution.”) School leaders put considerable energy into selling students on giving the test their best effort. Pep talks to students described the goal of the testing program and delivered the message that the students were representing Kettle Moraine in a global event. The students also received water bottles and T-shirts celebrating their participation.

Six of the students interviewed afterward said that they did their best and thought the test was better than some other standardized tests they’ve taken—more thought-provoking, with questions that more closely resembled real-world situations.

“It worked something in my brain,” said Maddi Racine, a sophomore at KM Perform. She said the questions were more in line with the way students are taught and called for critical thinking more than specific knowledge. Her comments reflect the goals of those who designed the tests.

Ethan Suhr, also a KM Perform sophomore, took the idea of being on the school “team” seriously. “We are representing something we are very passionate about being part of,” he said, referring to the charter school program.

[Figure 3: Results from the second year of testing]

When the results from the second year’s tests arrived in May, there were encouraging signs (see Figure 3). The scale score for Kettle Moraine High School students was particularly improved in reading, with 9 percent scoring in level five, the second-highest level. The science score also rose, but the math scores were unchanged; 9 percent of students scored in level five in science, and 15 percent scored in levels five or six in math. Average scores for students at KM Perform, the charter school, were higher than for the legacy school students.

Deklotz said the legacy school scores were above average for the United States. “This may be the result of building a culture of engagement around the assessment’s purpose and the use of results, and engaging staff in analyzing assessment results for the purpose of goal setting,” she said. She said the KM Perform test-takers gave high ratings for student engagement and student-teacher relationships, and the school’s results on these measures were in line with those of the top 10 percent of 2012 PISA test-takers in the United States.

There is no direct connection or alignment between the OECD Test for Schools and the Common Core State Standards effort that is shaping education and, in many states, roiling politics. Deklotz said her perspective was that “the shared value [of these efforts] would be increased expectation for student performance, especially over previous state assessments.” She said, “The OECD tests higher-order thinking and requires applied critical thinking and problem solving. Common Core standards raise the bar of academic expectation and application.”

Some 446 schools in the U.S. have taken part in the OECD test so far. Adding in Canada, Spain, and the United Kingdom brings the worldwide total to about 800 schools.

Schnur said he is encouraged by the growth of the OECD Test for Schools effort across the United States. “This tool has really taken off with a surprising degree of energy, both around the U.S. and the world,” Schnur said.

A recent report by America Achieves shows that while there has been some modest progress in improving the performance of U.S. students in the lower quarter of the economic spectrum, results in the middle-income range have been flat.

“We have more than one kind of hill to climb in improving education and opportunity in the U.S.,” Schnur said. The needs of low-income, underserved, and minority children are urgent. But the proficiency rates of middle- and upper-income kids lag behind those for children in the same groupings in high-scoring nations around the world, he said.  Schnur said he expects participation in the OECD test to grow and results to increasingly help schools improve.

What does Deklotz like about the OECD test overall? The perspective it provides on how Kettle Moraine students are doing, measured against the world, plus the insight into what school leaders otherwise wouldn’t know about their students, including how the students see their school experience.

The OECD results, Deklotz said, “raise lots of great questions.” But, she said, they don’t provide the answers. You have to find those yourself.

Alan J. Borsuk is senior fellow at Marquette University Law School. A longtime education reporter for the Milwaukee Journal Sentinel, he continues to write a Sunday column for the paper.  

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Borsuk, A.J. (2015). Wisconsin High Schools Learn From New PISA Test: International comparison drives efforts to improve. Education Next, 15(4), 43-49.
