Why Education Increases Voting
Evidence from Boston Charter Schools
Education Next | May 14, 2024
https://www.educationnext.org/why-education-increases-voting-evidence-boston-charter-schools/


Americans with more education vote at higher rates. In the 2020 presidential election, 77 percent of eligible voters who had attended or graduated from college and 90 percent with post-graduate studies cast a ballot compared to 54 percent of voters with only a high-school diploma and 36 percent of dropouts. These trends in turnout rates have persisted for more than three decades, suggesting a link between years of schooling and voting. But does achieving higher levels of education cause citizens to show up and vote on election day? Or do education and voting simply go hand-in-hand, because some other variable contributes to them both?

The research to date is mixed. Some studies have found evidence of a causal relationship, while others have not. The available data also tell us little about why and how education increases voting.

We take on these questions by looking at the educational trajectories and adult voting records of students who attend charter schools in Boston. We focus on Boston because prior research has found that students who attend a city charter school are more likely to pass high-school exit exams, earn higher test scores, and attend a four-year college than their non-charter peers. Further, because Boston charters are oversubscribed and enroll students based on random admissions lotteries, we can compare charter students, who receive more education, with similar students who did not win a lottery and therefore receive less education. If education is a causal factor in voting, we’d expect to find that the students who experience these academic gains are also more likely to vote as adults.

That is, in fact, what we find—but only for girls. We look at the voting records of charter and non-charter students and find substantial differences. While similar shares of charter and non-charter students are registered to vote by age 21, charter-school students are slightly more likely to vote in any election and substantially more likely to vote in the first presidential election for which they are eligible. Specifically, 41 percent of all charter-school students vote in their first presidential election compared to 35 percent of students who did not attend a charter, a difference of 6 percentage points, or a relative increase of 17 percent.

When we look more closely at the data, we see that the charter effect is a female phenomenon. Female high-school students are 11 percentage points more likely to vote in adulthood if they attended a charter school, while the impact for males is nil. We investigate multiple explanations for these differences and find that increased civic participation is likely due to gains in noncognitive attributes like grit and self-control, which we measure by looking at student behaviors, such as school attendance and taking the SAT.

These findings are in line with widening gender gaps in educational attainment and political participation. In 2020, 82 percent of eligible women voted in the presidential election compared to 73 percent of eligible men. Meanwhile, in 2021 some 39 percent of women ages 25 and older had a bachelor’s degree compared to 37 percent of men, and males currently account for just 42 percent of all students at four-year colleges. Our research sheds new light on these patterns and points to a critical question for future study. What can schools do to enhance non-cognitive skills development in boys, and what intervention could boost civic participation in young men after graduation?

Academic Success at Boston Charter Schools

Charter schools are public schools, funded with public money, but managed by private organizations. In Massachusetts, the state board of elementary and secondary education authorizes charter schools for five-year terms, and for-profit charter operators are not permitted. State law caps the share of district funds that can be used for charter tuition, with limited flexibility. If a school cannot enroll all interested students, it conducts a random admissions lottery, enrolls the winners, and places students who did not win on a waitlist. For the 2023-24 school year, some 76 charter schools statewide enrolled about 46,000 students, and 66 of those schools had waitlists with another 21,270 unique students.

Boston has the highest concentration of charter schools in the state. Most use policies associated with the “No Excuses” charter school movement: longer school days and years, a focus on academic achievement and behavior management, in-school tutoring, frequent teacher feedback, and data-driven instruction. Prior research has found that attending a Boston charter school for one year boosts student scores on standardized tests by about one-third of a standard deviation in math and one-fifth of a standard deviation in reading. These findings are generally in line with studies of similar charter schools in Chicago, Denver, Los Angeles, New York City, Newark, New Orleans, and the national non-profit KIPP network.

Our study looks at the voting behavior of young adults who applied to a randomized admissions lottery for a Boston charter school. We include all charter middle and high schools that kept lottery records and enrolled students who were at least 18 by the 2016 general election. In all, that includes 12 charter schools and 9,562 lottery applicants who were scheduled to graduate between 2006 and 2017. The applicant pool is 58 percent Black, 27 percent Hispanic, and 10 percent white. About 20 percent receive special-education services and 74 percent qualify for free or reduced-price school lunch. Females account for 52 percent of applicants.

Through the lotteries, about two-thirds of applicants are offered a charter seat. This creates a natural experiment that we use to explore the potential causal link between charter-school attendance, which boosts academic scores and access to college, and voting. We use state education and voting records to compare academic outcomes and election turnout for students who are and are not offered a charter seat and adjust our estimates based on who actually attends a charter school. We do not include siblings of current students or other applicants who receive lottery preferences. Of course, not all students offered seats attend the charter; however, state data show that applicants who win the lottery are 46 percentage points more likely to attend a charter during their time in Massachusetts public schools. We also see that boys and girls are equally likely to enroll in a charter school if offered a seat.
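The offer-versus-attendance adjustment described above is, in econometric terms, an instrumental-variables (Wald) calculation: divide the lottery offer's effect on the outcome (the intent-to-treat effect) by its effect on attendance (the first stage). A minimal sketch with simulated data, in which every number is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Lottery offers are randomly assigned.
offer = rng.integers(0, 2, n)
# Winning raises the chance of actually attending a charter by ~46 points.
attend = (rng.random(n) < 0.20 + 0.46 * offer).astype(float)
# Suppose attendance itself raises the probability of voting by 10 points.
vote = (rng.random(n) < 0.35 + 0.10 * attend).astype(float)

# Intent-to-treat: compare everyone offered a seat with everyone not offered.
itt = vote[offer == 1].mean() - vote[offer == 0].mean()
# First stage: how much the offer moved actual attendance.
first_stage = attend[offer == 1].mean() - attend[offer == 0].mean()
# Wald/IV estimate: the implied effect of attendance itself.
iv_effect = itt / first_stage

print(f"first stage: {first_stage:.2f}, IV effect: {iv_effect:.2f}")
```

The recovered IV effect lands near the 10-point attendance effect built into the simulation, which illustrates why dividing by the first stage matters when not every lottery winner enrolls.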

Linking Learning with Voting

First, we benchmark the impact of charter attendance on academic outcomes against results from prior research. As in other analyses, we find that students who enroll in a charter school experience large gains in AP test-taking and scores, SAT scores, and four-year college enrollment. On state tests, scores increase by about half of a standard deviation in math and one-third of a standard deviation in reading two years after winning an admissions lottery. Charter students take longer to graduate high school, with a decline of 9 percentage points in the four-year graduation rate, but there are no statistically significant differences in five- or six-year high school graduation rates. Boston charters boost enrollment in four-year colleges by 7.2 percentage points.

We then investigate whether these educational gains extend beyond the classroom to civic participation. We find no impact on voter registration—about 78 percent of students in both groups are registered to vote by age 21, with about 45 percent of students registered by their 19th birthday. However, we do find differences in voter turnout. We focus on the first possible presidential election after students turn 18, which minimizes the time students have to move out of Massachusetts or the region, and thus out of our sample. Additionally, the first possible presidential election is the one closest to the charter school treatment, which we believe is most likely to show the influence of attendance.

Charter-school students are more likely to vote than non-charter students, with the biggest difference in the first presidential election in which they are eligible to vote (see Figure 1). Some 41 percent of charter students vote in the first presidential election after they turn 18 compared to 35 percent of non-charter students, a relative increase of 17 percent. Charter students are also more likely to vote in any presidential election, with turnout at 65 percent compared to 61 percent for non-charter students. Looking at all opportunities to vote, including off-cycle elections where turnout is generally very low, we find a difference of 3 percentage points, with 67 percent of charter students voting compared to 64 percent of non-charter students, though the difference is not statistically significant.

Figure 1: Higher voting rates for charter students
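Two different comparisons run through these turnout numbers: the absolute gap in percentage points and the relative increase over the non-charter baseline. The arithmetic, using the first-presidential-election figures above:

```python
charter, non_charter = 0.41, 0.35

point_gap = charter - non_charter        # absolute difference in turnout
relative_gain = point_gap / non_charter  # gain relative to the baseline

# A 6-point gap on a 35 percent base is roughly a 17 percent relative increase.
print(f"{point_gap:.0%} points; {relative_gain:.0%} relative increase")
```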

We also look at voting by student subgroups and find that female charter students experience outsized gains (see Figure 2). In terms of voting in the first possible presidential election, the charter impact is 11 percentage points for girls and zero for boys. We also find meaningful effects for other student subgroups. Voting increases by 7.5 percentage points for students who receive free or reduced-price school lunch, 12.1 percentage points for English language learners, and 11.3 percentage points for students who earn relatively higher scores on state tests.

Figure 2: Bigger boosts in voting for females and English language learners

“Soft Skills” and the Ballot Box

Our findings show that charter schools boost academic outcomes and civic participation. That raises a second question: how? What aspects of education contribute to students’ likelihood to vote as adults?

We look at five possible explanations of why education may increase voting: development of cognitive skills, civic skills, social networks, the degree to which charter attendance politicizes students, and noncognitive skills. Our finding of a gender gap in voting allows us to identify proxies for these mechanisms and test the impact of each one. If the gender gap we find in voting is also present on a proxy measure, that mechanism is the most likely to explain increased civic participation among female charter school graduates.

For example, to assess whether increased cognitive skills help explain why citizens with more education are more likely to vote, we compare the impact of charter attendance on average test scores in reading and math for the males and females in our sample. Both genders experience the same large increase in math scores, while the positive impact in reading is slightly bigger for males. Since these impacts do not mirror the female-only effect of attending a charter school on voting, cognitive skill development does not appear to influence civic participation. More knowledge doesn’t necessarily beget more voting.

We conduct similar analyses of proxies for the other four mechanisms and find evidence that development in one area appears to explain charters’ impact on voting: noncognitive skills. While our data do not include a direct measure of noncognitive skills, such as a survey-based measure of self-control or grit, we use high-school attendance and taking the SAT as proxies, since they are related to persistence and follow-through. This approach builds on prior research and captures some of the attitudes and behaviors students would draw on in order to vote, as voting in the U.S. often involves navigating sign-up processes, planning ahead, and following through.

Overall, students at charter schools attend 12 additional days of school from grades 9-12 compared to non-charter students. However, this effect is driven entirely by girls. Female charter students attend 22 additional days of school compared to non-charter females, while charter males do not attend school more regularly than their non-charter counterparts. We find similar, but not statistically significant, differences in SAT taking: charter females are 8 percentage points more likely to take the SAT than non-charter females, while the effect of charter attendance for males is just 2 percentage points.

This evidence cannot prove that stronger noncognitive skills cause a boost in voting. But taken together, we see that charters appear to shift noncognitive skills more for girls than boys, and that these differences align with the observed pattern in voting gains. Further, the gender gap in noncognitive skill gains we observe is consistent with prior research. Studies have shown that girls enter kindergarten with greater noncognitive skills than boys, maintain their advantage through elementary school, and have greater self-discipline than boys in 8th grade. Other research has found that these differences explain 40 percent of the gender gap in college attendance. There is also research showing that girls may gain more noncognitive skills from educational interventions, and that conscientiousness and emotional stability increase voter turnout for women, but not men. Thus, girls—perhaps because of socialization—are more likely to turn gains in noncognitive skills into voting.

Although our study finds the main beneficiaries of civic gains are young women, education’s contribution to voting need not operate solely through girls. Interventions that increase noncognitive skills for boys may have similar effects, though we do not observe them in this context. It is also possible that U.S. schools, and charter schools specifically, are set up in such a way that they particularly develop the skills of girls but not boys. Research to date has mainly focused on the overall impact of noncognitive skill development through social and emotional learning programs or documented longstanding gender gaps in this arena. Interventions that boost noncognitive skill development and other lagging outcomes in boys (see “Give Boys an Extra Year of School,” reviews, Spring 2023) or school curricula that specifically target civic engagement (see “A Life Lesson in Civics,” research, Summer 2019) are areas ripe for further study.

Sarah R. Cohodes is associate professor at the Gerald R. Ford School of Public Policy at the University of Michigan. James J. Feigenbaum is assistant professor at Boston University.

This article appeared in the Summer 2024 issue of Education Next. Suggested citation format:

Cohodes, S.R., and Feigenbaum, J.J. (2024). Why Education Increases Voting: Evidence from Boston charter schools. Education Next, 24(3), 60-65.

Josh Angrist and the Search for Truth
Nobel prize in economics is nod to empiricism
Education Next | December 7, 2021
https://www.educationnext.org/josh-angrist-and-the-search-for-truth-nobel-prize-economics/


“There’s no such thing as a research emergency,” one of my old bosses used to say. Try telling that to Josh Angrist, one of the 2021 winners of the Nobel Prize in economics, which he shared with David Card and Guido Imbens. When I was working closely with Josh and living in Cambridge, my then-boyfriend, now-husband quickly learned that if my cell phone ever rang before 8:30 a.m. or after 10 p.m., there was only one person who could be on the phone: Josh.

Photo: Joshua Angrist

What drove these early morning urgent phone calls about running the computer code for just one more regression analysis? An incessant need to search for truth, and to do so now.

Josh’s work has two main strands. The first develops tools to derive evidence — “truth” — from naturally occurring phenomena in the world. This is the work for which Josh won the Nobel (or rather, since we are being precise here, the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel). The prize was announced in October and will be formally awarded December 10. The second strand applies those techniques to the real world, in many cases in the context of economics of education.

My trial by fire came while I was employed as a research manager at a center at Harvard, working with Josh on a project estimating the impacts of Boston charter schools on test scores. I was a recent Swarthmore grad when I met Josh, and he used to tease me about how he didn’t get in, and went to Oberlin instead. I told him he was in good company.

Somehow, I must have passed the test, because I became his coauthor when the project continued while I was a PhD student. We used econometric tools that Josh developed to estimate the causal effect of attending a charter school. Using the lotteries that charter schools run to admit students, we compared applicants who were offered a seat at the charter with those who were not, while accounting for who actually attended a charter. (If you want to see how Josh explains it, you can watch this video.)

We found that Boston charters raise standardized test scores and lift four-year college attendance as well. Lest you think this is a case of economists elevating only results that confirm the benefits of market influences, other work by Josh and coauthors shows that traditional public schools with flexibility similar to charter schools’ (“pilots”) do not boost test scores, and that some charters reduce test scores.

In the search for truth, as shown by his work on charters, Josh is non-ideological and willing to debunk conventional wisdom. Think of all the ink spilled in the New York Times on selective high schools like the famous Stuyvesant. When most people think of these types of schools, they often think of them as engines of learning, propelling students to new educational trajectories that will help them access the American Dream.

You might be surprised to learn there is little evidence that these schools improve test scores or, more importantly, boost college-going or college selectivity. But Josh’s careful research with coauthors, which compares students just above the admissions cutoff to those just below it, shows exactly this: attending an exam school in New York City or Boston does not change students’ educational trajectories. Looking at a similar question in Chicago, Josh and coauthors found that attending an exam school under an affirmative action plan actually decreased math scores, because it diverted students from some high-performing charter schools, similar to those we studied in Boston.

Josh is willing not only to debunk conventional wisdom, but also to debunk his own research. Using the same technique he would later apply to exam schools, in a paper from the late 1990s, Josh and Victor Lavy compared students in grade cohorts just above and just below a cutoff for a class size rule (inspired by the Israel education ministry’s application of Maimonides’s rule for group size when studying the Torah). They found that when the rule induced students into smaller class sizes, because of a requirement to add a teacher when the cohort size was above the threshold, test scores increased. When the technique was revisited almost 20 years later, an updated analysis found no such evidence of score gains from smaller classes.
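The class-size rule behind that design can be written down directly. Under Maimonides’s rule as applied in Israel, a cohort is split into the fewest classes that keep each at or below 40 students, so predicted class size drops sharply when a cohort crosses a multiple of 40; cohorts just on either side of the cutoff are otherwise similar, which is what makes the comparison credible. A sketch (the cap of 40 comes from the Angrist-Lavy setting; the function itself is illustrative):

```python
import math

def predicted_class_size(cohort: int, cap: int = 40) -> float:
    """Maimonides's rule: split the cohort evenly across the fewest
    classes that keep every class at or below the cap."""
    n_classes = math.ceil(cohort / cap)
    return cohort / n_classes

# Predicted class size jumps down just past each multiple of 40:
print(predicted_class_size(40))  # 40.0 (one class of 40)
print(predicted_class_size(41))  # 20.5 (two classes of ~20)
print(predicted_class_size(80))  # 40.0
print(predicted_class_size(81))  # 27.0 (three classes of ~27)
```

Comparing outcomes for cohorts of 36-40 students with cohorts of 41-45 students then isolates the effect of the induced drop in class size.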

The search for truth is not always an easy path, and what gets anointed as truth, either by intuition and sometimes by early evidence, is not always the final answer. There are also questions of whose truth is elevated in conventional research designs. Josh not only searches for truth in his own work, but has committed to preparing a whole generation of economists to join him in the search, through his teaching, his papers, and his textbooks and video lessons. And he has succeeded. Some of his own work documents the spread of empiricism in modern economics research.

For my part, the spine of my copy of Mostly Harmless Econometrics is now broken, having referenced it so many times when coding or writing up a paper. And when I am sitting at my desk teasing out relationships in my data, or trying to craft the perfect sentence to convey a point, or deciding how to explain a tough concept to my students, I often think of my teacher and now colleague, Josh Angrist.

“What would Josh do?” And then I think about the counterfactual, strike one more adverb from my sentence, or find a real world example to inspire my students.

Building truth is a slow process: Regression by regression, paper by paper, talk by talk, student by student. A presentation to research partners or policymakers, a quote in the newspaper. Sometimes, especially recently, it seems like evidence does not matter and it will never change anyone’s mind.

But the quest for truth is not quixotic. A recent experiment with mayors in Brazil showed them policy-relevant evidence and found that the mayors both updated their beliefs about the policy and became more likely to implement it. Maybe if we work hard enough to uncover truth, and to communicate it, evidence will find its way.

Sarah Cohodes is Associate Professor of Economics and Education at Teachers College, Columbia University and a faculty research affiliate at MIT Blueprint Labs.

Massachusetts Charter Cap Holds Back Disadvantaged Students
Education Next | September 19, 2016
https://www.educationnext.org/massachusetts-charter-cap-holds-back-disadvantaged-students/

Executive Summary

This November, Massachusetts voters will go to the polls to decide whether to expand the state’s quota on charter schools. The ballot initiative would allow 12 new, approved charters over the current limit to open each year.

Would the ballot proposal be good for students in Massachusetts? To address this question, we need to know whether charter schools are doing a better job than the traditional public schools in districts where the cap currently limits additional charter school seats.

There is a deep well of rigorous, relevant research on the performance of charter schools in Massachusetts. This research exploits random assignment and student-level, longitudinal data to examine the effect of charter schools in Massachusetts.

This research shows that charter schools in the urban areas of Massachusetts have large, positive effects on educational outcomes. The effects are particularly large for disadvantaged students, English learners, special education students, and children who enter charters with low test scores.

In marked contrast, we find that the effects of charters in the suburbs and rural areas of Massachusetts are not positive. Our lottery estimates indicate that students at these charter schools do the same as, or worse than, their peers at traditional public schools. Notably, the charter cap does not currently constrain charter expansion in these areas. The ballot initiative will therefore have no effect on the rate at which these charters expand.

Massachusetts’ charter cap currently prevents expansion in precisely the urban areas where charter schools are doing their best work. Lifting the cap will allow more students to benefit from charter schools that are improving test scores, college preparation, and college attendance.



This November, Massachusetts voters will go to the polls to decide whether to expand the state’s quota on charter schools. The “Lift the Cap” referendum has generated enormous controversy, with supporters and opponents canvassing neighborhoods, running ads, and blitzing social media.

As is true with many policy debates, the back-and-forth about the referendum has generated a lot of heat but not much light.

There is a deep well of rigorous, relevant research on the performance of charter schools in Massachusetts. In fact, it is hard to think of an education policy for which the evidence is more clear.

As policies are debated, we often have to rely on research that is ill-suited to the task. Its methodology is frequently too weak to form a firm foundation for policy. Or, the population, design, and setting of the research study are so different from the policy in question that the findings cannot be easily extrapolated.

This is not one of those times. We have exactly the research we need to judge whether charter schools should be permitted to expand in Massachusetts. This research exploits random assignment and student-level, longitudinal data to examine the effect of charter schools in Massachusetts.

To preview the results: Charter schools in the urban areas in Massachusetts have large, positive effects on educational outcomes, far better than those of the traditional public schools that charter students would otherwise attend. The effects are particularly large and positive for disadvantaged students, English learners, special education students, and children who enter charters with low test scores. By contrast, the effects outside the urban areas (where the current cap does not constrain charter expansion) are zero to negative. This pattern of results accords with research at the national level, which finds positive impacts in urban areas and among disadvantaged students.[i]

Massachusetts’ charter cap currently prevents expansion in precisely the urban areas where charter schools are doing their best work. Lifting the cap will allow more students to benefit from charter schools that are improving test scores, college preparation, and college attendance.

Massachusetts’ charter school ballot question

Before we turn to a detailed discussion of the research, let’s summarize the ballot proposal and how it would alter the state’s charter law.

Current law sets a cap on the number of charter schools statewide, as well as the share of each district’s funds that can flow to charters. Massachusetts now has 78 charter schools.

Since 2010, a “smart cap” has given priority to applications from charter providers with a proven track record that seek to expand in low-performing districts.[ii] Even with the additional expansion permitted under the current smart cap, the charter cap constrains expansion in many urban areas, including Boston, Springfield, Malden, and Lawrence. Tens of thousands of students are on waiting lists for charter schools in these districts.[iii] The state’s low-income, immigrant, Hispanic, and Black students are concentrated in these cities.

The ballot initiative would raise the cap, allowing 12 new, approved charters over the current limit to open each year.[iv] New and expanding charters would have to go through the current application and review process, which is one of the most rigorous in the country.[v] An indicator of the robustness of the state’s oversight: since 1997, 17 charter schools that the state deemed ineffective or mismanaged have closed.

The state’s board of education would review any applications that seek to go above the current cap, as it does all charter applications. In contrast, in Ohio (where presidential candidate Donald Trump recently made a visit to a charter school), the state has 69 authorizers, including school districts, higher education institutions, and nonprofit organizations.[vi] Each authorizer has its own standards for approval, renewal, and revocation.

Ohio’s arrangement, in comparison to that in Massachusetts, makes it difficult for the state to set consistent, high standards for charter schools. We suspect that the robust system of accountability in Massachusetts underpins the strong performance of its charter sector.

Estimating charter school impacts

Would the ballot proposal, which allows the expansion of charter schools in low-performing districts, be good for students in Massachusetts? To address this question, we need to know whether charter schools are doing a better job than the traditional public schools in districts where the cap currently limits additional charter school seats.

In short, the answer is “Yes.” In urban, low-income districts of Massachusetts, charter students are learning more than children in the traditional public schools.

We base this statement on rigorous, peer-reviewed research. Since 2007, when we were both researchers at Harvard, we have collaborated with researchers at Harvard and MIT, including professors Joshua Angrist, Thomas Kane, Parag Pathak, and Chris Walters (who is now at Berkeley). In cooperation with the state’s department of education, which provided the student-level, longitudinal data necessary for this research, we have evaluated the effect of charter schools on student achievement, high school graduation, preparation for college, and college attendance.

Measuring the effectiveness of any school is challenging. Parents choose their kids’ schools, either by living in a certain school district or sending them to a private or charter school. As a result, some schools are filled with children of parents who are highly motivated and/or have extensive financial resources. This is selection bias, the key challenge in evaluating the effectiveness of schools.

Charters are required to run lotteries when they have more applicants than seats. And since many charter schools in Massachusetts have long waiting lists, there are many lotteries each year across the state.

The charter school lotteries are “natural experiments,” each one its own randomized trial. Randomization is the gold standard for social-science research, allowing an “apples-to-apples” comparison. At the time of application, there are no differences (on average) between those who win and lose the admissions lottery. Should we observe differences in student outcomes after the lottery, we can be confident they are due to charter school attendance.[vii]
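Because offers are random, winners and losers should look alike on every pre-lottery characteristic; researchers confirm this with a "balance check" before interpreting later differences as causal. A simulated illustration of why randomization delivers that balance (all values invented):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

# Pre-lottery test scores exist before any offer is made...
baseline_score = rng.normal(0.0, 1.0, n)
# ...and offers are assigned at random, independent of those scores.
offer = rng.integers(0, 2, n)

# Balance check: average baseline score of winners vs. losers.
gap = baseline_score[offer == 1].mean() - baseline_score[offer == 0].mean()
print(f"baseline score gap: {gap:.3f}")  # close to zero by construction
```

Any gap that appears in post-lottery outcomes, but not in pre-lottery characteristics like this one, can then be attributed to the charter offer.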

The evidence on Massachusetts charter schools

So what have we learned from our research?

Charter schools in Boston (where charter enrollment has almost reached the cap) produce very large increases in students’ academic performance.[viii] Education researchers often express test score differences in standard deviations, which allow for comparison across different tests, populations, and contexts. According to the most recent estimates, one year in a Boston charter middle school increases math test scores by 25 percent of a standard deviation. The annual increases for language arts are about 15 percent of a standard deviation.[ix] Test score gains are even larger in high school.

These differences for middle school and high school can be seen in the two graphs below, with the results disaggregated for subgroups of students. Values above zero indicate that charter school students score higher than their traditional public school counterparts. A shaded bar indicates a statistically significant positive effect.

Figure: Test-score effects of Boston charter middle schools, by student subgroup

Figure: Test-score effects of Boston charter high schools, by student subgroup

How big are these effects? The test-score gains produced by Boston’s charters are some of the largest that have ever been documented for an at-scale educational intervention. They are larger, for example, than the effect of Head Start on the cognitive outcomes of four-year-olds (about 20 percent of a standard deviation).[x] The effect of one year in a Boston charter is larger than the cumulative effect of the Tennessee STAR experiment, which placed children in small classes for four years (17 percent of a standard deviation).[xi]

Another gauge of magnitude: the gap in test scores between Blacks and Whites nationwide (and in Boston) is roughly three-quarters of a standard deviation. One year in a Boston charter therefore erases roughly a third of the racial achievement gap.
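The arithmetic behind that last claim is easy to verify directly; a minimal sketch using the two effect-size figures quoted above:

```python
# Figures quoted in the text, both in standard deviation units
charter_effect_sd = 0.25   # one year of Boston charter middle school, math
achievement_gap_sd = 0.75  # approximate Black-White test-score gap

# Fraction of the gap that one charter year would close
fraction_closed = charter_effect_sd / achievement_gap_sd
print(f"One charter year closes about {fraction_closed:.0%} of the gap")
```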

One concern is that charter schools are just “teaching to the test.” To stay open, charter schools need to demonstrate they are effective, and performance on the MCAS (Massachusetts Comprehensive Assessment System, the statewide test) is an important part of that assessment. If the charter schools are simply coaching students on the skills they need to succeed on the MCAS, they may have little impact on real, lasting learning.

But we found positive effects of Boston’s charters beyond the MCAS test,[xii] and no evidence that they “inflate” MCAS scores.[xiii] These effects are represented in the figure below comparing the percent of charter vs. noncharter students attaining particular outcomes.[xiv] For example, the lottery studies show Boston charters substantially increase SAT scores. This is not explained by differential selection into this optional test, since charter students are just as likely as their peers in traditional public schools to take the SAT.

Boston charters double the likelihood of taking an Advanced Placement (AP) exam. They substantially increase the AP exam pass rate, with ten percent of charter students passing the AP calculus test, compared with just one percent of students in Boston’s other public schools.

Students at Boston’s charters are just as likely as their peers at traditional public schools to graduate high school, though they are more likely (by 14 percentage points) to take five years rather than four years to do so. Boston charter students enter high school with scores far below the state mean, and even further below the typical scores in the wealthy suburbs where AP courses are the norm. It is therefore unsurprising that it takes some students five years in high school to successfully complete AP courses (which are required by some Boston charters).

Boston charter students are far more likely to attend a four-year college than their counterparts in traditional public schools. This is likely due, at least in part, to their better academic preparation, as just explained. The difference is large: 59 percent attend a four-year college as compared to 41 percent for their counterparts who did not attend charters.

[Figure 3: Percent of charter vs. noncharter students attaining key outcomes]

Reminder: All of these results are based on comparisons of applicants who randomly won or lost admission to charter schools. The estimates are therefore not biased by demographic differences between students at charters and traditional public schools.

Some might be concerned that the charter students have unusually motivated parents, as demonstrated by their willingness to apply to charters. But by this metric, all of the children in our lottery studies have motivated parents. Yet the students who don’t win admission to charters (and so are more likely to go to the traditional public schools) do far worse than those who win.

It’s also important to note here that more than a third of students in Boston Public Schools apply to charters, so any “cream skimming” goes pretty deep. As charters have expanded in Boston, differences between applicants and non-applicants in the city have narrowed considerably, and are now quite small.[xv]

Beyond Boston, charters in the other urban areas of Massachusetts also boost test scores.[xvi] Most of these schools are young compared to the Boston charters, and we have not yet evaluated their effects on long-term outcomes such as college attendance.

Across the board, we find that urban charters produce the biggest boosts for students who most need help. Score effects are largest for students who enter charters with the lowest scores. Urban charters are particularly effective for low-income and non-white students. The score gains for special education students and English learners are just as large as they are for students who are not in these specialized programs.[xvii]

In marked contrast, we find that the effects of charters in the suburbs and rural areas of Massachusetts are not positive. Our lottery estimates indicate that students at these charter schools do the same or worse than their peers at traditional public schools.

Many students in these non-urban districts have access to excellent schools, so it is not surprising that charters don’t produce better outcomes than the traditional public schools. In fact, the excellent schools are a draw for families who have the financial resources to move to high-performing, wealthy districts like Newton, Wellesley, and Weston. Low-income families can’t afford homes in these districts. Their choice is the local charter school.

Importantly, the charter cap does not constrain charters in the suburbs where they appear to have zero to negative effects. Current law allows charter schools to expand in these districts. The cap, if lifted, would expand choice in the urban areas where charters have been highly successful with disadvantaged students who most need access to better schools.

No one (including social scientists!) can predict the future. There is no guarantee that new charter schools will be as successful as existing charter schools. But the research we have summarized here, and the state’s track record in carefully vetting schools, strongly suggest that if allowed to grow, the charter schools in the urban areas of Massachusetts will continue to improve learning, especially among disadvantaged children.

The voters’ decision

The research we have summarized here is irrelevant to the decisions of some voters. Some oppose charter schools on principle, because they prefer the governance and structure of traditional public schools. That’s their prerogative.

What we find distressing, and intellectually dishonest, is when these preferences are confounded with evidence about the effectiveness of charter schools. The evidence is that, for disadvantaged students in urban areas of Massachusetts, charter schools do better than traditional public schools.

Voters are free to decide that the proven benefits that Massachusetts charter schools provide for disadvantaged students are outweighed by a principled opposition to charters. It’s our job as researchers to make clear the choice that voters are making.

— Sarah Cohodes and Susan Dynarski

Sarah Cohodes is an Assistant Professor of Education and Public Policy at Teachers College, Columbia University. Susan Dynarski is a professor of public policy, education and economics at the University of Michigan.


This post originally appeared as part of Evidence Speaks, a weekly series of reports and notes by a standing panel of researchers under the editorship of Russ Whitehurst.

 


Notes:

[i] See, for example, Gleason, Philip, Melissa Clark, Christina Clark Tuttle, Emily Dwoyer, and Marsha Silverberg. 2010. The Evaluation of Charter School Impacts: Final Report. NCEE 2010-4029. Washington, DC: U.S. Department of Education, National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences.

[ii] Massachusetts General Laws, An Act Relative to the Achievement Gap, 2010.

[iii] Massachusetts Department of Elementary and Secondary Education. Massachusetts Charter School Waitlist Updated Report for 2015-2016 (FY16). See attached spreadsheet for location-specific waitlist numbers.

[iv] https://ballotpedia.org/Massachusetts_Authorization_of_Additional_Charter_Schools_and_Charter_School_Expansion,_Question_2_(2016)

[v] National Association of Charter School Authorizers: Massachusetts.

[vi] National Association of Charter School Authorizers: Ohio.

[vii] The lottery analyses are conducted using two-stage least-squares (2SLS). Winning the lottery is used as an instrument for attending a charter school. Throughout, when we refer to “the effect of charter school attendance,” we mean the 2SLS estimate of the effect of charter attendance, with winning the lottery used to instrument for attendance.
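As a rough illustration of the two-stage procedure described in this note (not the authors’ code), a minimal 2SLS estimate on simulated lottery data; the variable names, sample size, and the “true” 0.25 effect are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Instrument: winning the admissions lottery is randomly assigned
won_lottery = rng.integers(0, 2, n)
# Endogenous treatment: winners attend charters far more often, with imperfect compliance
attended = (rng.random(n) < np.where(won_lottery == 1, 0.8, 0.2)).astype(float)
# Outcome: a true 0.25-standard-deviation effect of attendance, plus noise
score = 0.25 * attended + rng.standard_normal(n)

# Stage 1: regress attendance on the lottery instrument
Z = np.column_stack([np.ones(n), won_lottery])
attended_hat = Z @ np.linalg.lstsq(Z, attended, rcond=None)[0]

# Stage 2: regress the outcome on predicted attendance
X = np.column_stack([np.ones(n), attended_hat])
beta = np.linalg.lstsq(X, score, rcond=None)[0]
print(f"2SLS effect of attendance: {beta[1]:.2f}")  # close to the true 0.25
```

With a binary instrument and no covariates, this reduces to the Wald estimator: the winner–loser difference in mean scores divided by the winner–loser difference in attendance rates.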

[viii] Abdulkadiroglu, Atila, Joshua D. Angrist, Susan M. Dynarski, Thomas J. Kane, and Parag A. Pathak. 2011. “Accountability and Flexibility in Public Schools: Evidence from Boston’s Charters and Pilots.” Quarterly Journal of Economics 126(2): 669–748.

[ix] Cohodes, Sarah R., Elizabeth M. Setren, Christopher R. Walters, Joshua D. Angrist, and Parag A. Pathak. 2013. “Charter School Demand and Effectiveness: A Boston Update.” The Boston Foundation.

[x] Puma, Mike, Stephen Bell, Ronna Cook, Camilla Heid, Pam Broene, Frank Jenkins, Andrew Mashburn, and Jason Downer. 2012. Third Grade Follow-up to the Head Start Impact Study Final Report, OPRE Report # 2012-45, Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.

[xi] Dynarski, Susan, Joshua Hyman, and Diane W. Schanzenbach. 2013. “Experimental Evidence on the Effect of Childhood Investments on Postsecondary Attainment and Degree Completion.” Journal of Policy Analysis and Management 32: 692–717.

[xii] Angrist, Joshua D., Sarah R. Cohodes, Susan M. Dynarski, Parag A. Pathak, and Christopher R. Walters. 2016. “Stand and Deliver: Effects of Boston’s Charter High Schools on College Preparation, Entry, and Choice.” Journal of Labor Economics 34(2).

[xiii] Cohodes, Sarah. 2016. “Teaching to the Student: Charter School Effectiveness in Spite of Perverse Incentives.” Education Finance and Policy 11(1): 1-42.

[xiv] In the graph, the percentages for charter students are the 2SLS estimates of effect of charter attendance added to relevant noncharter mean. The percentages for the noncharter students are the proportion who attain each outcome, for students in the sample who do not attend a charter school.

[xv] Cohodes, Sarah R., Elizabeth M. Setren, Christopher R. Walters, Joshua D. Angrist, and Parag A. Pathak. 2013. “Charter School Demand and Effectiveness: A Boston Update.” The Boston Foundation.

Setren, Elizabeth. 2015. “Special Education and English Language Learners in Boston Charter Schools: Impact and Classification.” School Effectiveness and Inequality Institute (SEII) Discussion Paper 2015.05.

[xvi] Angrist, Joshua D., Parag A. Pathak, and Christopher R. Walters. 2013. “Explaining Charter School Effectiveness.” American Economic Journal: Applied Economics 5(4): 1–27.

[xvii] Setren, Elizabeth. 2015. “Special Education and English Language Learners in Boston Charter Schools: Impact and Classification.” School Effectiveness and Inequality Institute (SEII) Discussion Paper 2015.05.

The post Massachusetts Charter Cap Holds Back Disadvantaged Students appeared first on Education Next.

When Does Accountability Work?

Texas system had mixed effects on college graduation rates and future earnings

October 27, 2015


David J. Deming sits down with EdNext’s Marty West to discuss his new study on the effects of a test-based accountability system in Texas on the Education Next Podcast.


When Congress passed the No Child Left Behind Act of 2001 (NCLB), standardized testing in public schools became the law of the land. The ambitious legislation identified test-based accountability as the key to improving schools and, by extension, the long-term prospects of American schoolchildren. Thirteen years later, the debate over the federal mandate still simmers. According to the 2015 EdNext poll, about two-thirds of K–12 parents support annual testing requirements, yet a vocal minority want the ability to have their children “opt out” of such tests (see “The 2015 EdNext Poll on School Reform,” features, Winter 2016). Teachers themselves are divided on the issue of high-stakes testing.

NCLB required that states test students in math and reading each year, that average student performance be publicized for every school, and that schools with persistently low test scores face an escalating series of sanctions. We now have ample evidence that these requirements have caused test scores to rise across the country. What we don’t know is: Do these improvements on high-stakes tests represent real learning gains? And do they make students better off in the long run? In fact, we know very little about the impact of test-based accountability on students’ later success. If academic gains do not translate into a better future, why keep testing?

In this study, we present the first evidence of how accountability pressure on schools influences students’ long-term outcomes. We do so by examining how the test-based accountability system introduced in Texas in 1993 affected students’ college enrollment and completion rates and their earnings as adults. Though the Texas system predates NCLB, it was implemented under then governor George W. Bush and it served as a blueprint for the federal legislation he signed as president nearly a decade later. More important, it was implemented long enough ago to allow us to investigate its impact on adult outcomes, since individuals who were in high school in the mid- to late 1990s have now reached adulthood.

Our analysis reveals that pressure on schools to avoid a low performance rating led low-scoring students to score significantly higher on a high-stakes math exam in 10th grade. These students were also more likely to accumulate significantly more math credits and to graduate from high school on time. Later in life, they were more likely to attend and graduate from a four-year college, and they had higher earnings at age 25.

Those positive outcomes are not observed, however, among students in schools facing a different kind of accountability pressure. Higher-performing schools facing pressure to achieve favorable recognition appear to have responded primarily by finding ways to exempt their low-scoring students from counting toward the school’s results. Years later, these students were less likely to have completed college and they earned less.

In short, our results indicate that school accountability in Texas led to long-term gains for students who attended schools that were at risk of falling below a minimum performance standard. Efforts to use high-stakes tests to regulate school quality at a higher level, however, did not benefit students and may have led schools to adopt strategies that caused long-term harm.

The Accountability Movement

A handful of states, such as Texas and North Carolina, began implementing “consequential” school accountability policies in the early 1990s. Under these policies, performance on standardized tests was not only made public but was also tied to rewards and sanctions. The number of states with consequential school-accountability policies rose from 5 in 1994 to 36 in 2000.

The Texas school accountability system implemented under then Governor George W. Bush served as a blueprint for the federal legislation he signed as president nearly a decade later.

Under the accountability system implemented by Texas in 1993, every public school was given one of four ratings: Low-Performing, Acceptable, Recognized, or Exemplary. Schools were rated based on the overall share of students who passed the Texas Assessment of Academic Skills tests in reading, writing, and mathematics; attendance and high-school dropout rates were also considered. Pass rates were calculated separately for four subgroups—white, African American, Hispanic, and economically disadvantaged—if that subgroup made up at least 10 percent of the school’s population. Schools were assigned an overall rating based on the pass rate of the lowest-scoring subgroup-test combination (e.g., math for whites), giving some schools strong incentives to focus on particular students and subjects. (Because the state’s math test was more difficult than its reading test, low math scores were almost always the main obstacle to improving a school’s rating.) School ratings were often published in full-page spreads in local newspapers, and schools that were rated as Low-Performing underwent an evaluation that could lead to serious consequences, including layoffs, reconstitution, and school closure.

The accountability system adopted by Texas bore many similarities to the accountability requirements of NCLB, enacted nine years later. NCLB mandated reading and math testing in grades 3 through 8 and at least once in high school, and it required states to rate schools on the basis of test performance overall and for key subgroups. It also called for sanctions on schools that failed to meet statewide targets for student proficiency rates. Finally, the system required states to report subgroup test results and to increase their proficiency rate targets over time.

Too Good to Be True?

Scores on high-stakes tests rose rapidly in states that were early adopters of school accountability, and Texas was no exception. Pass rates on the state’s 10th-grade exam, which was also a high-stakes exit exam for students, rose from 57 percent to 78 percent between 1994 and 2000, with smaller yet still sizable gains in reading (see Figure 1).

The interpretation of this so-called Texas miracle, however, is complicated by studies of schools’ strategic responses to high-stakes testing. Research on how high-stakes accountability affects test performance has found that scores on high-stakes tests tend to improve with accountability, often dramatically, whereas performance on low-stakes tests with a different format but similar content improves only slightly or not at all. Furthermore, studies in Texas and elsewhere have found that some schools raised their published test scores by retaining low-performing students in 9th grade, by classifying them as eligible for special education (or otherwise exempting them from the exam), and even by encouraging them to drop out.

Clearly, accountability systems that rely on short-term, quantifiable measures to drive improved performance can lead to unintended consequences. Performance incentives may cause schools and teachers to redirect their efforts toward the least costly ways of raising test scores, at the expense of actions that do not boost scores but may be important for students’ long-term welfare.

Our study overcomes the limits of short-term analysis by asking: when schools face accountability pressure, do their efforts to raise test scores generate improvements in higher education attainment, earnings, and other long-term outcomes?

Our Study

An ideal experiment to address this question would randomly assign schools to test-based accountability and then observe changes in both test scores and long-term outcomes, comparing the results to those of a control group of schools. Such an experiment is not possible in this case because of the rapid rollout of high-stakes testing in Texas and (later) nationwide. And unfortunately, data limitations preclude us from looking at prior cohorts of students who were not part of the high-stakes testing regime.

Instead, our research design compares successive grade cohorts within the same school—cohorts that faced different degrees of accountability pressure owing to changes in how the state defined school performance categories over time. Beginning in 1995, each Texas school received its overall rating based on its lowest subgroup-test pass rate. That year, at least 25 percent of all tested students in a high school were required to pass the 10th-grade exit exam in each subject in order for the school to receive an Acceptable rating. This standard rose by 5 percentage points every year, up to 50 percent in 2000. The standard for a Recognized rating also rose, from a 70 percent pass rate in 1995 and 1996 to 75 percent in 1997 and 80 percent from 1998 onward. In contrast, the dropout and attendance-rate standards remained constant over the period we study. We use these changes in performance standards to estimate the “risk” that each school will receive a particular rating, and we compare cohorts who attended a school when it was on the brink of receiving a Low-Performing or Recognized rating to cohorts in the same school in years that it was all but certain to be rated Acceptable—and therefore plausibly “safe” from accountability pressure.

Most research on school accountability has studied how schools respond to receiving a poor rating, but our approach focuses instead on the much larger group of schools that face pressure to avoid a Low-Performing rating in the first place. Because the ratings thresholds rose over time, the set of schools experiencing the most pressure also changed. Consider, for example, students in a school that was plausibly safe from accountability pressure in 1995 but was at risk of a Low-Performing rating in 1996. Students in the 1996 cohort are likely quite similar to students in the class before them, except for the fact that they were subject to greater accountability pressure. (Our analysis does include controls for various ways in which those cohorts may have differed initially, such as by incoming test scores and demographic makeup.) By comparing grade cohorts who faced different degrees of accountability pressure, we can ascertain how much their level of risk affects not only 10th-grade exam scores but also how much schooling they completed and their earnings later in life.

Findings

We find that students, on average, experience better outcomes when they are in a grade cohort that puts its school at risk of receiving a Low-Performing rating. They score higher on the 10th-grade math exam, are more likely to graduate from high school on time, and accumulate more math credits, including in subjects beyond a 10th-grade level.

Later in life, these students are 0.6 percentage points more likely to attend a four-year college and 0.37 percentage points more likely to graduate. They also earn about 1 percent more at age 25 than those who were in cohorts whose schools were not facing as much accountability pressure. The earnings increase is comparable to the impact of having a teacher at the 87th percentile, in terms of her “value added” to student achievement, versus a teacher at the value-added median (see “Great Teaching,” research, Summer 2012).

Since the Texas state test was a test of basic skills, and the accountability metric was based on pass rates, schools had strong incentives to focus on helping lower-scoring students. While schools surely varied in how they identified struggling students, one reliable predictor that students might fail the 10th-grade exam was whether they had failed an 8th-grade exam.

In fact, when we take into account 8th-grade failure rates, we find that all of the aforementioned gains are concentrated among students who previously failed an exam. These students are about 4.7 percentage points more likely to pass the 10th-grade math exam, and they score about 0.2 standard deviations higher on the exam overall (see Figure 2). More importantly, they are significantly more likely to attend a four-year college (1.9 percentage points) and earn a bachelor’s degree (1.3 percentage points). These impacts, while small in absolute terms, represent about 19 and 30 percent of the mean for students who previously failed an 8th-grade exam. We also find that they earn about $300 more annually at age 25.

In contrast, we find negative long-term impacts for low-scoring students in grade cohorts attending a school in a year when it faced pressure to achieve a Recognized rating. Students from these cohorts who previously failed an exam are about 1.8 percentage points less likely to attend a four-year college and 0.7 percentage points less likely to earn a bachelor’s degree, and they earn an average of $748 less at age 25. This negative impact on earnings is larger, in absolute terms, than the positive earnings impact in schools at risk of being rated Low-Performing. However, there are fewer low-scoring students in high-scoring schools, so the overall effects on low-scoring students roughly cancel one another out. Again, we find no impact of accountability pressure on higher-achieving students.

What worked well. Higher test scores in high school do not necessarily translate into greater postsecondary attainment and increased earnings in adulthood, yet our study demonstrates that, for many students, accountability pressure does seem to positively influence these long-range outcomes. Additional knowledge of mathematics is one plausible explanation for these favorable impacts on postsecondary attainment and earnings. Accountability pressure could have caused students to learn more math through: 1) additional class time and resources devoted to math instruction and 2) changes in students’ later course-taking patterns, sparked by improved on-time passage of the exit exam.

Indeed, we find an average increase of about 0.06 math course credits per student in schools that face pressure to avoid a Low-Performing rating. We also find that the impacts on both math credits and long-range outcomes grow with cohort size and with the number of students who previously failed an 8th-grade exam, suggesting that students particularly benefited from accountability pressure when it prompted schoolwide reform efforts.

Prior research has demonstrated that additional mathematics coursework in high school is associated with higher earnings later in life, and that even one additional year of math coursework increases annual earnings by between 4 and 8 percent. In our study, controlling for the amount of math coursework reduces the effects of accountability pressure on bachelor’s degree receipt and earnings at age 25 to nearly zero, and lowers the impact on four-year college attendance by about 50 percent. This suggests that additional math coursework may be a key mechanism for the long-term impacts of accountability pressure under the Texas policy.

Additionally, we find some evidence that schools respond to the risk of being rated Low-Performing by adding staff, particularly in remedial classrooms. This response is consistent with studies of accountability pressure in Texas and elsewhere that find increases in instructional time and resources devoted to low-scoring students, and provides another possible explanation for the positive effects of accountability pressure for certain students.

Dangers of a poorly designed system. Despite finding evidence of significant improvements in long-range outcomes for some students, those same improvements were not enjoyed by others. Why might an accountability system generate seemingly contradictory results?

As mentioned earlier, high-stakes testing poses the risk that it may cause teachers and schools to adjust their effort toward the least costly (in terms of dollars or effort) way of boosting test scores, possibly at the expense of other constructive actions. Thus, one can try to understand the difference in impacts between the two kinds of accountability by asking: in each situation, what was the least costly method of achieving a higher rating?

In our data, the student populations of schools at risk of a Low-Performing rating were, on average, 23 percent African American and 32 percent Hispanic, and 44 percent of students were poor. The mean cohort size was 212, and the mean pass rate on the 8th-grade math exam was 56 percent. Since the overall cohort and each tested subgroup were on average quite large, these schools could only escape a Low-Performing rating through broad improvement in test performance. In contrast, school populations closer to the high end of the performance spectrum were only about 5 percent African American, 10 percent Hispanic, and 16 percent poor, with a mean cohort size of only 114 and a mean pass rate of 84 percent on the 8th-grade math exam. Thus, many of the schools that were aspiring to a Recognized rating could achieve it by affecting the scores of only a small number of students.

One example of how a small school might “game the system” is by strategically classifying students in order to influence who “counts” toward the school’s rating. Indeed, we find strong evidence that some schools trying to attain a Recognized rating did so by exempting students from the high-stakes test. These schools classified low-performing students as eligible for special education services to keep them from lowering the school’s rating (special education students could take the 10th-grade state test, but their scores did not count toward the rating).

In schools that had a chance to achieve a Recognized rating, low-scoring students who were not designated as eligible for special education in 8th grade were 2.4 percentage points more likely to be newly designated as such in 10th grade, an increase of more than 100 percent relative to the 2 percent designation rate in other schools. The designation of low-scoring students as eligible for special education was more common in schools where a small number of students had failed the 8th-grade exam, making it easier for educators to target specific students. We also find a small but still noteworthy decrease of 0.5 percentage points in special education classification for high-scoring students in these schools.

As a result of this strategic classification, marginal students in certain schools were placed in less-demanding courses and acquired fewer skills, accounting for the negative impact of accountability pressure on long-term outcomes for those students. In essence, those students did not receive the attention they needed in order to improve their learning.

Summing Up

Why do some students benefit from accountability pressure while others suffer? Our results suggest that Texas schools responded to accountability pressure by choosing the path of least resistance, which produced divergent outcomes. The typical school at risk of receiving a Low-Performing rating was large and had a majority nonwhite population, with many students who had previously failed an 8th-grade exam. These schools had limited opportunity to strategically classify students as eligible for special education services. Instead, they had to focus their efforts on truly helping a large number of students improve. As a result, students in these schools were more likely to pass the 10th-grade math exam on time, acquire more math credits in high school, and graduate from high school on time. In the long run, they had higher rates of postsecondary attainment and earnings. These gains were concentrated among students at the greatest risk of failure.

In other schools, the accountability system produced strong incentives to exempt students from exams and other requirements. In these schools, accountability pressure more than doubled the chances that a low-scoring student would be newly deemed eligible for special education. This designation exempted students from the normal high-school graduation requirements, which then led them to accumulate fewer math credits. In the long run, low-scoring students in these schools had significantly lower postsecondary attainment and earnings.

In some respects, though not all, the accountability policy in Texas served as the template for No Child Left Behind, and thus our findings may have applicability to the accountability regimes that were rolled out later in other states. In Texas, and under NCLB nationwide, holding schools accountable for the performance of every student subgroup has proven to be a mixed blessing. On the one hand, this approach shines light on inequality within schools in an attempt to ensure that “no child is left behind.” On the other hand, when schools can achieve substantial “improvements” by focusing on a relatively small group of students, they face a strong incentive to game the system. In Texas, this situation led some schools to strategically classify students as eligible for special education, which may have done them long-run harm.

What policy lessons can we draw from this study as Congress works out a new iteration of the Elementary and Secondary Education Act to replace NCLB? First, policy complexity can carry a heavy cost. As many other studies have shown, high-stakes testing creates strong incentives to game the system, and the potential for strategic responses grows as the rules become more complicated. The second lesson is that, at least in Texas, school accountability measures only worked for schools that were at risk of receiving a failing grade. Therefore, the federal government might consider approaching school accountability the way the Food and Drug Administration regulates consumer products. Instead of rating and ranking schools, the feds could develop a system that ensures a minimum standard of quality.

David J. Deming is associate professor of education and economics at the Harvard Graduate School of Education. Sarah Cohodes is assistant professor of education and public policy at Teachers College, Columbia University. Jennifer Jennings is assistant professor of sociology at New York University. Christopher Jencks is the Malcolm Wiener Professor of Social Policy at the Harvard Kennedy School.

This article appeared in the Winter 2016 issue of Education Next. Suggested citation format:

Deming, D.J., Cohodes, S., Jennings, J., and Jencks, C. (2016). When Does Accountability Work? Texas system had mixed effects on college graduation rates and future earnings. Education Next, 16(1), 71-76.
