M. Danish Shakeel, Author at Education Next
https://www.educationnext.org/author/dshakeel/

Defending Harvard’s Ranking of State Charter School Performance
https://www.educationnext.org/defending-harvards-ranking-of-state-charter-school-performance/
December 13, 2023

English proficiency and disability status are among student background characteristics adjusted for in NAEP scores

Students participate in a writing class at KIPP Memphis Collegiate Middle School in Tennessee.

In November 2023, we at Harvard’s Program on Education Policy and Governance released a new state-by-state ranking of the performance of charter school students on the National Assessment of Educational Progress, often called the Nation’s Report Card. The ranking is based on charter students’ scores on 24 NAEP tests of math and reading administered between 2009 and 2019. Ours is the first ranking of charter student performance on the same set of tests administered to samples of all students throughout the United States.

For the most part, the ranking has been well received. The head of the KIPP Foundation, the nation’s largest charter school network, says in one news report that the results “confirm our experience.” The National Alliance for Public Charter Schools comments that “the new data are ‘sobering in many respects,’” showing that charter schools in many places have “room to grow.” And, of course, the ranking has been received enthusiastically by policymakers in states like Alaska, Massachusetts, New Hampshire, and Oklahoma—all of which came in at the top end of the standings. Even middle- to bottom-ranking states have not chosen to criticize the ranking procedures—though one charter-school advocate who did not like the below-average placement of his home state objected on the rather bizarre grounds that “[o]nly a randomly selected sample of … students take the NAEP test,” a denial of the reliability of an approach regularly employed by the U.S. Census Bureau.

But in a recent blog post, Matthew Ladner, executive editor of NextSteps, a publication of the school-choice advocacy group Step Up For Students, has expressed his own doubts about our findings. He says that we failed to adjust for the share of charter students who are in special education programs or are English Language Learners, that we relied on information that fluctuates from one test to the next, and that charter students should have been ranked on state proficiency tests instead of the NAEP.

These criticisms are either wrong or misleading, or they fail to take into account what was said in the technical version of the paper published in the Journal of School Choice.

We take particular exception to the erroneous claims that we “were unable to control for the rates of special education and English Language Learner status” on the NAEP. Those charges, if true, would be serious. But as reported in the abridged version that appeared in Education Next, scores are adjusted “to take into account the age of the test-taker, parents’ education levels, gender, ethnicity, English proficiency, disability status, eligibility for free and reduced school lunch, student-reported access to books and computers at home, and location [emphasis added].” In the unabridged version, we inform readers that special education eligibility and English Language Learner status are ascertained by NAEP from school administrative records.

Ladner misleads when he notes that math scores of Texas 8th grade charter students tested by NAEP fluctuated substantially between 2017 and 2019. Although that is certainly correct, it is the very reason we use information from multiple tests over an extended period. As we say in the technical paper, “By combining results from 24 tests over an 11-year period, the chances of obtaining reliable results are greatly enhanced.”

Ladner argues it would be preferable to use data from the Stanford Education Data Archives, or SEDA, a source that provides student performance on every state’s proficiency tests. We in fact report a ranking obtained from the SEDA data, which is calculated in a manner comparable to the one used to construct the PEPG ranking, in Table A.11 in the appendix to the paper available in the Journal of School Choice. That ranking correlates with the PEPG ranking at the 0.7 level, which suggests the two data sources yield broadly similar results. As we discuss in our article, however, the NAEP tests are preferable because they allow for a ranking of students’ scores on the same set of tests. Ranking states based on SEDA data requires the strong assumption that state tests may all be placed on the same scale. Also, state proficiency tests are high-stakes tests used to evaluate both charter schools and their teachers, providing incentives to manipulate test results. NAEP is a low-stakes test that is not used for student, teacher, or school evaluations. Lastly, SEDA excludes over 32 percent of all charter schools from its sample. By contrast, PEPG’s NAEP sample includes over 99 percent of all charter student observations in NAEP.
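The 0.7 figure cited above is a rank correlation between two orderings of the same states. A minimal sketch of how such a statistic is computed follows; the state ranks here are hypothetical placeholders, not the actual PEPG or SEDA data, which appear in the paper’s appendix.

```python
# Spearman rank correlation between two rankings (no ties).
# The rank lists below are hypothetical placeholders; the actual
# PEPG and SEDA rankings are reported in the Journal of School
# Choice appendix.

def spearman(rank_a, rank_b):
    """Spearman correlation for two equal-length rankings with no ties."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n**2 - 1))

# Hypothetical ranks of five states under two ranking methods.
pepg = [1, 2, 3, 4, 5]
seda = [2, 1, 4, 3, 5]

print(round(spearman(pepg, seda), 2))  # → 0.8
```

A correlation of 1 would mean the two methods order the states identically; values near 0.7, as reported in the paper, indicate broadly similar but not interchangeable orderings.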

But Ladner would have us use the problematic SEDA test data because SEDA reports changes in student performance in each school district and charter school from one year to the next. That requires yet another assumption: that there is no change in the composition of a school cohort from one year to the next, a particularly strong assumption for a school of choice.

As we concluded in both versions of the paper, “the PEPG rankings are not the last word on charter-school quality.” We are hopeful that assessments of charter school quality will continue to improve in the coming years. But we can only make progress if criticism is accurate and straightforward.

Paul E. Peterson is a professor of government at Harvard University, director of its Program on Education Policy and Governance, and senior editor at Education Next. M. Danish Shakeel is professor and the director of the E. G. West Centre for Education Policy at the University of Buckingham, U.K.

The Nation’s Charter Report Card
https://www.educationnext.org/nations-charter-report-card-first-ever-state-ranking-charter-student-performance-naep/
November 14, 2023

First-ever state ranking of charter student performance on the National Assessment of Educational Progress



When Minnesota passed the nation’s first charter-school law in 1991, its main purpose was to improve education by allowing for new, autonomous public schools where teachers would have more freedom to innovate and meet students’ needs. Freed from state regulations, district rules, and—in most cases—collective-bargaining constraints, charter schools could develop new models of school management and “serve as laboratories for new educational ideas,” as analyst Brian Hassel observed in an early study of the innovation. In the words of Joe Nathan, a longtime school-choice advocate and former Minnesota teacher, “well-designed public school choice plans provide the freedom educators want and the opportunities students need while encouraging the dynamism our public education system requires.”

In the decades that followed, 45 additional states and Washington, D.C., passed their own laws establishing charter schools. And by 2020–21, nearly 7,800 charter schools enrolled approximately 3.7 million students, or 7.5 percent of all public-school students nationwide. The most recent charter law was passed in 2023 in Montana, though its implementation has so far been blocked by court order; today, only North Dakota, South Dakota, Nebraska, and Vermont have not passed charter legislation.

During those years, advocates have carefully tracked and analyzed state policies and enrollments to compare charter school growth, demand, and access across the United States. But to date, there have been no comparisons of charter school performance across states based on student achievement adjusting for background characteristics on a single set of nationally administered standardized tests. Instead, advocacy organizations routinely rank states based on one or more aspects of their charter school programs—factors such as the degree of autonomy charters are afforded, whether they receive equitable funding, and the share of a state’s students they serve. These rankings are informative, but they do not provide direct information about how much students are learning, which is, ultimately, the general public’s and policymakers’ primary concern.

We provide that information here, based on student performance in reading and math on the National Assessment of Educational Progress, or NAEP, between 2009 and 2019. These rankings, created at the Program on Education Policy and Governance (PEPG) at Harvard University, are adjusted for the age of the charter school and for individual students’ background characteristics. They are based on representative samples of charter-school students in grades 4 and 8 and cover 35 states and Washington, D.C. We also estimate the association between student achievement and various charter laws and characteristics.

Overall, the top-performing states are Alaska, Colorado, Massachusetts, New Hampshire, New York, Oklahoma, and New Jersey. The lowest-ranked charter performance is in Hawaii, followed by Tennessee, Michigan, Oregon, and Pennsylvania. Students in the South tend to perform above average, while students in midwestern Rust Belt states rank at the midpoint or below. We also find that students at schools run by charter networks outperform students at independent charters, on average, while students at schools run by for-profit organizations have lower scores on NAEP, on average. Students at charters authorized by state education agencies have higher scores than students at those authorized by local school districts, non-educational organizations, or universities.

We hope these rankings will spur charter-school improvement in much the same way that NAEP results have stimulated efforts to improve student achievement more generally. Current debates include whether authorizers should regulate schools closely or allow many and diverse flowers to bloom, whether charters should stand alone or be incorporated into charter school networks, and whether for-profit charters should be permitted. A state ranking of charter student performances may not answer such questions, but it can stimulate conversations and foster future research that could.

Assessing State-Level Achievement

We create the PEPG rankings based on NAEP tests in reading and math. The tests, known as the Nation’s Report Card, are administered every two years to representative samples of U.S. students in grades 4 and 8. To obtain a robust sample for each state, each survey wave includes more than 100,000 observations of public-school students in both district and charter schools. The number of tested charter-school students varies between 3,630 and 7,990 per test, depending on the subject, grade, and year.

Our analysis looks at the period between 2009 and 2019, when 24 tests were administered. This yielded 3,732,660 results in all, but we focus on the 145,730 results from charter-school students. We include results from Washington, D.C., and the 35 states with enough tested charter-school students to permit precise estimates. That excludes the five states that do not currently allow charter schools, as well as Alabama, Iowa, Kansas, Kentucky, Maine, Mississippi, Washington, Virginia, West Virginia, and Wyoming. Still, the results in our sample account for more than 99 percent of all charter-school student scores in NAEP.

We also look at anonymized demographic information about test-takers, which was provided by the U.S. Department of Education under a special license. The weighted composition of our sample is 32 percent white, 30 percent Black, 31 percent Hispanic, and 4 percent Asian and Pacific Islander. Some 58 percent are from a low-income household. Fifty-six percent were tested at a charter school located in a city, 30 percent in a suburb, 5 percent in a small town, and 10 percent in a rural area. Among 8th graders, 45 percent indicate that at least one parent completed college. Another 37 percent report that their parent does not have a college degree, and information is missing for the remaining 18 percent.

In estimating charter performance by state, we place charter scores in each subject on a common scale, adjusting for year of testing, subject, grade level, and the year the charter school opened. NAEP weights test-score observations so they are representative of the true underlying student population. We also adjust scores to take into account the age of the test-taker, parents’ education levels, gender, ethnicity, English proficiency, disability status, eligibility for free and reduced school lunch, student-reported access to books and computers at home, and location.
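The adjustment described above is, in essence, a regression of scores on background covariates plus group indicators, with group effects read off the coefficients. The following is a stylized sketch on synthetic data, not the authors’ code: the actual PEPG model also incorporates NAEP sampling weights and controls for year, subject, grade, and school age, and uses three states and one covariate here purely for illustration.

```python
import numpy as np

# Stylized sketch of covariate adjustment via least squares.
# Data are synthetic; "income" stands in for the full set of
# background characteristics the article lists.

rng = np.random.default_rng(0)
n = 5000
state = rng.integers(0, 3, n)             # 3 hypothetical states
income = rng.normal(0, 1, n)              # one background covariate
true_effect = np.array([0.0, 0.2, -0.3])  # per-state effects in SD units
score = true_effect[state] + 0.5 * income + rng.normal(0, 1, n)

# Design matrix: intercept, covariate, indicators for states 1 and 2.
X = np.column_stack([np.ones(n), income, state == 1, state == 2]).astype(float)
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# beta[2] and beta[3] estimate states 1 and 2 relative to state 0,
# net of the covariate — analogous to a state's adjusted charter score
# relative to the national average.
print(beta[2], beta[3])
```

With the covariate held constant in this way, differences in the state coefficients reflect differences in performance rather than differences in student composition, which is the logic behind the adjusted rankings.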

We then rank states based on the adjusted average scores for their charter students from 2009 to 2019 as compared to the average scores for all charter students nationwide over the same period. We report the size of these differences, whether positive or negative, as a percentage of one standard deviation in student test scores and note here that a full standard deviation is equivalent to roughly three-and-a-half years of learning for students in these grades. Several states have such similar scores they can be considered to be statistically tied, so undue weight should not be placed on any specific rank number. (See the unabridged version of this paper, published in the Journal of School Choice, for information that allows one to calculate whether any two states are statistically tied.)
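The conversion between effect sizes and “years of learning” used throughout the rankings can be sketched as a one-line calculation. This is an illustrative approximation, not the authors’ code; the 3.5-years-per-standard-deviation rule of thumb is the one stated in the article, and the example effect sizes are the Alaska and Hawaii figures reported below.

```python
# Translate an effect size (expressed as a fraction of a standard
# deviation) into approximate "years of learning," using the
# article's rule of thumb that one full standard deviation equals
# roughly 3.5 years of learning for students in grades 4 and 8.

YEARS_PER_SD = 3.5  # rule of thumb stated in the article

def sd_to_years(effect_size_sd: float) -> float:
    """Approximate years of learning implied by an effect size."""
    return effect_size_sd * YEARS_PER_SD

# Alaska's charter students score 32% of a standard deviation above
# the national charter average; Hawaii's score 54% below it.
print(round(sd_to_years(0.32), 2))   # → 1.12 (about a year ahead)
print(round(sd_to_years(-0.54), 2))  # → -1.89 (nearly two years behind)
```

The rule of thumb is a rough average across grades and subjects, so the resulting “years” figures should be read as orders of magnitude, not precise measurements.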

Figure 1: Ranking States by Charter Performance

Rankings and Results

The strongest academic performance from charter-school students is in No. 1-ranked Alaska, at 32 percent of a standard deviation above the average charter score nationwide, followed by Colorado and Massachusetts, then by New Hampshire, New York, Oklahoma, and New Jersey (see Figure 1). The lowest-ranked charter performance is in Hawaii, at 54 percent of a standard deviation below the national average, followed by Tennessee, Michigan, Oregon, and Pennsylvania.

Alaska’s high ranking for charter-school student achievement may seem surprising given its low ranking for NAEP performance by all public-school students. In a 2019 analysis by the Urban Institute, Alaska ranked at or near the bottom in both reading and math in grades 4 and 8. It is possible that results are skewed in some way by the challenge of controlling for Alaska’s distinctive indigenous population, which makes up about 20 percent of K–12 students. However, Stanford economist Caroline Hoxby found Alaska among the top three states in an analysis conducted on scores in 2003. Further, Alaska’s charter achievement ranks seventh when no adjustments are made for background characteristics. Charter student performance in Alaska seems to deserve its ranking in the top tier.

Among the five lowest-ranking states, Hawaii’s very poor performance is skewed downward by NAEP’s incorporation of the indigenous Hawaiian population and other Pacific Islanders, who make up a sizeable share of the state’s charter students, into the broad “Asian” category (see “Does Hawaii Make the Case for Religious Charters?,” features, Winter 2024). If the analysis is limited to the years 2011 to 2019, indigenous Hawaiians and Pacific Islanders can be classified separately. When this is done for those years, Hawaii’s performance shifts to –35 percent of a standard deviation, and the state’s score resembles that of Tennessee.

Figure 2: Differences in Test Scores between White and Black Charter Students

We then estimate differences in test-score performance between students of various racial and ethnic groups in each state, while still adjusting for other background characteristics. States vary in the degree to which the performance of white charter students exceeds that of Black and Hispanic ones (see Figures 2 and 3). The gap between Black and white charter-school students’ test scores is more than a full standard deviation, or roughly equivalent to three-and-one-half years of learning, in D.C. and five states: Missouri, Wisconsin, Delaware, Michigan, and Maryland. By comparison, that gap is equivalent to about two-and-one-half years of learning in Oklahoma, Arizona, New York, Florida, and Illinois.

Figure 3: Differences in Test Scores Between White and Hispanic Charter Students

We find the largest score differences between white and Hispanic students in D.C., Pennsylvania, Delaware, Georgia, Idaho, and Massachusetts. States with the least divergence in white-Hispanic scores are Oklahoma, Louisiana, Illinois, Florida, and Ohio, where scores differ by roughly one to one-and-a-third years of learning.

Oklahoma and Florida have among the smallest disparities between white charter students and both Black and Hispanic charter students. By contrast, D.C. and Delaware have exceptionally large differences between those student groups. These differences may be a function of which students opt to enroll in charter schools or some other mechanism not captured by observed student characteristics. Or they may reflect divergent charter practices.

Comparison to Statewide Rankings

How closely do the PEPG state rankings mirror similar efforts to rank states based on student achievement across all public schools? We might expect strong correlations, as charter student performance could be affected by a state’s educational climate, including family and community support for schools and students as well as the talents and training of its teachers.

To explore this possibility, we calculate the relationship between PEPG rankings for charter students with state rankings made by the Urban Institute for student achievement at all public schools. Importantly, the comparison is for performance on the same tests for the same period, and the adjustments for family background characteristics are virtually identical.

The rankings for charters and for all public-school students are only modestly correlated (see Figure 4). Massachusetts, New Jersey, Colorado, and Florida have similarly high rankings on both. Further down the distribution, California sits at the 24th position in both standings. But the rankings for other states differ sharply. Texas, Pennsylvania, and Indiana are ranked 2, 10, and 12 on the Urban Institute list but land at 15, 31, and 20, respectively, in the PEPG ranking. Conversely, Oklahoma is ranked 6th and Utah is ranked 9th in the PEPG rankings, but these states rank 21st and 32nd, respectively, on the Urban Institute’s list. In short, charter-school performance is not simply a function of the educational environment of the state as a whole.

Figure 4: Ranking Charters vs. Ranking All Public Schools

A Close Look at CREDO

Another state-level ranking of charter schools warrants detailed discussion. In a June 2023 report, the Center for Research on Education Outcomes (CREDO) at Stanford University ranked 29 states by the academic performance of their charter schools from 2014 to 2019. This ranking is based on state test results and compares charter students’ performance, adjusted for prior-year test scores and student background characteristics, to that of students at nearby district schools. This average difference approach to assessing charter performance diverges significantly from the PEPG yardstick, which ranks states by the average level of charter performance, adjusted for student background.

CREDO rankings would nonetheless resemble the ones reported by PEPG if average student achievement were identical at all district schools throughout a state and the country as a whole. Since that is not the case, CREDO rankings are affected as much by scores at district schools as by scores at charters. This is not a mere hypothetical possibility. CREDO finds that test scores for Black students at charter schools showed they “had 35 days more growth in a school year in reading and 29 days in math” relative to comparable students in nearby district schools, and Hispanic students “grew an extra 30 days in reading and 19 additional days in math.”

Meanwhile, white charter students do no better in reading than white students at district schools, and they perform worse in math by 24 days of learning. CREDO also finds better outcomes for charter schools in cities than suburbs—test scores for students at urban charters showed 29 additional days of growth per year in reading and 28 additional days in math. Suburban charters did not perform significantly better than district schools in math but had “stronger growth in reading” amounting to 14 additional days of learning.

These findings could indicate that Black, Hispanic, and urban students attend higher-quality charter schools than those available to white and suburban students. But an alternative interpretation is more likely: White and suburban students have access to higher-quality district schools than those available to Blacks, Hispanics, and city residents. CREDO’s state ranking is useful in considering how the presence of charters affects the choices available to students in each state, but it does not order states by the performance levels of charter students, as the PEPG rankings do.

Impacts of Innovations

The specifics of each state’s charter law and regulations differ substantially, helping the charter sector live up to the “laboratory” principle. This sets the stage for a variety of comparisons looking at which aspects of charter school governance might contribute to student success.

For example, the type of agency granted the power to authorize charters ranges from the state board of education to local school districts to mayoral offices. Accountability requirements vary from tight, ongoing monitoring to nearly none. The saturation of the charter sector is similarly diverse—in states like Arizona, California, and Florida, 12 percent or more of students attend a charter, compared to 3 percent or less in Maryland, Mississippi, and New Hampshire. Charter funding differs as well, both among and within states, based on revenues and regulations set by federal, state, and local agencies and authorizers. In 2019, charter-school revenues per pupil ranged from $27,825 in D.C. to $6,890 in Oklahoma.

On some widely debated topics, we find little support for either side of the dialogue. For example, we find no higher levels of achievement in states with a larger percentage of public-school students attending charters. Nor do we find a correlation between charter student achievement and the age of the charter school, whether a state permits collective bargaining, or the level of per-pupil funding charter schools receive within a state.

We do find differences when looking at some of the innovative features of charter schools, including authorizing agencies, management structures, and whether schools have an academic or programmatic specialization.

For example, charter student performance varies with the type of authorizer that granted its charter. Students whose charter schools are authorized by a state education agency earn higher scores on NAEP than students whose schools were authorized by school districts and comparable local agencies. Compared to charter schools authorized by a state education agency, student achievement is 9 percent of a standard deviation lower at charter schools authorized by local education agencies like school districts, 10 percent lower at charter schools authorized by independent statewide agencies, 15 percent lower at schools authorized by non-education entities like a mayor’s office, and 19 percent lower at charter schools authorized by higher education institutions.

These results should not be interpreted as showing a causal connection between type of authorizer and student outcomes. Still, it might be noted that state education agencies have decades of experience overseeing educational systems, an advantage not matched by any other type of authorizer. Charters authorized by local school districts are not as effective as those authorized by state agencies, but they outperform charters authorized by agencies with no prior experience in the field of education. Perhaps Helen Keller was right when she said, “Only through experience of trial and suffering can the soul be strengthened . . . and success achieved.”

We also find notable differences in student achievement between schools based on their management model. These fall into three categories: freestanding or standalone schools; schools run by nonprofit charter management organizations or networks like KIPP Foundation and BASIS Charter Schools; and schools run by for-profit education management organizations, such as Academica and ACCEL Schools.

Some 55 percent of the students in our sample attend freestanding, independent charter schools—the classic charter type, led by a small team, that is one of the thousand flowers expected to bloom. Another 23 percent of students attend charters that are part of nonprofit networks or management organizations, and 22 percent of the sample are at schools run by for-profit entities.

Compared to students at for-profit and freestanding, independent charters, students at charters that are part of a nonprofit network score 11 to 16 percent of a standard deviation higher on NAEP. This may be because networked charters benefit from an association with a larger entity, or perhaps because successful charters expand beyond a single school.

For-profit schools are arguably the most controversial component of the charter sector. Charter critic Diane Ravitch has argued that “our schools will not improve if we expect them to act like private, profit-seeking enterprises,” and in 2020, the Democratic Party platform proposed a ban on charter schools run by for-profit entities (see “Ban For-Profit Charters? Campaign issue collides with Covid-era classroom reality”, feature, Winter 2021).

Why do students at for-profit schools earn relatively lower scores on NAEP than at networked charters? For-profit organizations may launch charters where circumstances are more problematic, or they may find operations more challenging when faced with heavy political criticism and threats of closure and government regulation. Or possibly the profit motive is indeed inconsistent with higher student performance, as critics have alleged.

Our main purpose in ranking states by the performance of their charter students is to focus public and policymaker attention on the provision of high-quality schools, the purpose of charter legislation from its very beginning. Our second purpose is to supplement current state-level rankings of the charter-school environment and focus attention on outcomes, not simply state policies and procedures. Although previous rankings document the variety of environments in which charter schools operate, they do not report student achievement measured by a national test common to public schools across the country.

However, the PEPG rankings are not the last word on charter-school quality. We are not able to track year-by-year trends in charter quality within states, as the number of charter student test scores for any given year is too few for precise estimation. We have no information on student performance at virtual charters, as NAEP only monitors student performance at brick-and-mortar school sites. Also, these rankings are based on assessments of student performance in 4th and 8th grade, which precludes insights into charter contributions to early childhood and preschool education or to high school and career and technical training programs. Finally, NAEP data are observational, not experimental, so causal inferences are not warranted.

It should also be kept in mind that these data are based upon an 11-year period ending in 2019, the eve of a pandemic that closed many charter and district schools for more than a year. Student performance was dramatically affected by the event, and charter enrollment appears to have increased substantially since then. The data reported here stand as a baseline against which future measurement of charter performance in the aftermath of that event may be compared—an especially important measure given the continued growth of the sector.

Paul E. Peterson is a professor of government at Harvard University, director of its Program on Education Policy and Governance, and senior editor at Education Next. M. Danish Shakeel is professor and the director of the E. G. West Centre for Education Policy at the University of Buckingham, U.K. An unabridged version of this paper has been published by the Journal of School Choice (2023).

This article appeared in the Winter 2024 issue of Education Next. Suggested citation format:

Peterson, P.E., and Shakeel, M.D. (2024). The Nation’s Charter Report Card: First-ever state ranking of charter student performance on the National Assessment of Educational Progress. Education Next, 24(1), 24-33.

A Half Century of Student Progress Nationwide
https://www.educationnext.org/half-century-of-student-progress-nationwide-first-comprehensive-analysis-finds-gains-test-scores/
August 9, 2022

First comprehensive analysis finds broad gains in test scores, with larger gains for students of color than white students



Has the achievement of U.S. students improved over the past half century? Have gaps between racial, ethnic, and socioeconomic groups widened or narrowed?

These and similar questions provoke near-constant conversation. But answers are uncertain, partly because research to date has yielded inconsistent findings. Here we bring together information from every nationally representative testing program consistently administered in the United States over the past 50 years to document trends in student achievement from 1971 to 2017, the last year for which detailed information is currently available.

Contrary to what you may have heard, average student achievement has been increasing for half a century. Across 7 million tests taken by U.S. students born between 1954 and 2007, math scores have grown by 95 percent of a standard deviation, or nearly four years’ worth of learning. Reading scores have grown by 20 percent of a standard deviation during that time, nearly one year’s worth of learning.

When we examine differences by student race, ethnicity, and socioeconomic status, longstanding assumptions about educational inequality start to falter. Black, Hispanic, and Asian students are improving far more quickly than their white classmates in elementary, middle, and high school. In elementary school, for example, reading scores for white students have grown by 9 percent of a standard deviation each decade, compared to 28 percent for Asian students, 19 percent for Black students, and 13 percent for Hispanic students. Students from low socioeconomic backgrounds also are progressing more quickly than their more advantaged peers in elementary and middle school. And for the most part, growth rates have remained steady throughout the past five decades.

Conventional wisdom downplays student progress and laments increasing achievement gaps between the haves and have-nots. But as of 2017, steady growth was evident in reading and especially in math. While the seismic disruptions to young people’s development and education due to the Covid-19 pandemic have placed schools and communities in distress, the successes of the past may give educators confidence that today’s challenges can be overcome.

Bypassing Conventional Wisdom

Scholars and public intellectuals from all sides of the political spectrum have consistently made the opposite case. Dating back to 1983’s A Nation at Risk, debate over the state of public education in the United States often has portrayed schools as failing and American students as falling behind. Books like 2009’s The Dumbest Generation and 1994’s The Decline of Intelligence in America argued that young people were so entranced by technology that they failed to develop basic knowledge and skills.

Public understanding of inequality also has assumed that racial, ethnic, and socioeconomic gaps in student achievement are universal and growing. In 2011, research by Stanford sociologist Sean Reardon appeared to show a widening of the socioeconomic achievement gap over the past 70 years. In 2012, conservative Charles Murray argued that “the United States is stuck with a . . . growing lower class that is able to care for itself only sporadically and inconsistently” even as the “new upper class has continued to prosper as the dollar value of [its] talents . . . has continued to grow.” In 2015, Harvard political scientist Robert Putnam wrote “rich Americans and poor Americans are living, learning, and raising children in increasingly separate and unequal worlds.” More recently, critiques by organizations like Black Lives Matter have identified racial inequality both inside and outside the classroom as a defining characteristic of American life.

But no study of student achievement over time has brought all the relevant data together in a systematic manner and assessed how these assumed trends are playing out. Our analysis does just that.

Our data consist of more than 7 million student test scores on 160 intertemporally linked math and reading tests administered to nationally representative samples of U.S. student cohorts born between 1954 and 2007 (see “Put to the Test”). By “intertemporally linked,” we mean that researchers in each of the testing programs have designed their tests to be comparable over time, by doing things such as repeating some of the same questions across different waves.

We estimate trends separately by testing program, subject, and grade level and report the median rather than average result to avoid giving undue importance to outliers, much as consensus projections of future economic growth typically use the median of predictions made by alternative economic models. We report changes in student achievement over time in standard deviation units. This statistic is best understood by noting that average performance differences between 4th- and 8th-grade students on the same test are roughly one standard deviation. Accordingly, we interpret a difference of 25 percent of a standard deviation as equivalent to one year of learning.
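The conversion between standard-deviation units and “years’ worth of learning” used throughout this article is simple arithmetic. A minimal sketch in Python (the 0.25-SD-per-year benchmark is the article’s own; the function name is ours):

```python
# Benchmark from the article: the average 4th-to-8th-grade performance
# gap is roughly one standard deviation, i.e., 0.25 SD per school year.
SD_PER_YEAR_OF_LEARNING = 0.25

def years_of_learning(gain_in_sd: float) -> float:
    """Express an achievement gain in SD units as equivalent years of learning."""
    return gain_in_sd / SD_PER_YEAR_OF_LEARNING

# The article's headline numbers over five decades:
print(years_of_learning(0.19 * 5))  # math: 0.95 SD, i.e. "nearly four years"
print(years_of_learning(0.04 * 5))  # reading: 0.20 SD, i.e. "nearly one year"
```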

Clear Progress for U.S. Students Over 50 Years of Testing (Figure 1)

Achievement and the Flynn Effect

The surveys show a much steeper rise in math than reading performance (see Figure 1). In math, overall student performance rose by 19 percent of a standard deviation per decade, or 95 percent of a standard deviation over the course of 50 years—nearly four additional years’ worth of learning. In reading, however, the gains are only 4 percent of a standard deviation per decade, or 20 percent of a standard deviation over the same period.

The difference between the two subjects is puzzling. Mathematical knowledge and reasoning skills in the U.S. teaching force have long been a matter of concern. And mainstream math instruction in U.S. schools generally is considered inadequate relative to other developed countries, despite recent attempts to focus on developing mathematical understanding. Why is math achievement accelerating far more quickly than reading?

The answer, we believe, is found in recent research on human intelligence. Not long ago, intelligence quotient, or IQ, was considered a genetically determined constant that shifted only over the course of eons, as more intellectually and physically fit homo sapiens survived and procreated at higher rates. Then in the mid-1980s, James Flynn, a New Zealand political scientist, examined raw IQ data and found that scores were increasing by 3 points, or about 20 percent of a standard deviation, per decade. Though Flynn’s work was initially dismissed as an over-interpretation of limited information, his finding was replicated by many others.

In 2015, Jakob Pietschnig and Martin Voracek conducted a meta-analysis of 271 studies of IQ, involving 4 million people in 31 countries around the world over the course of more than a century. As Flynn did, they found growth in overall IQ scores. But they also distinguished between types of intelligence. This included crystallized knowledge, or the ability to synthesize and interpret observed relationships in the environment, which is rooted in facts, knowledge, and skills that can be recalled as needed. And it included fluid reasoning, or the ability to analyze abstract relationships, which is associated with recognizing patterns and applying logic to novel situations. In industrialized societies, for a period similar to the one covered by our study, they found that fluid reasoning grew by 15 percent of a standard deviation per decade compared to 3 percent for crystallized knowledge. This difference resembles what we observe in the achievement data: growth of 19 percent of a standard deviation per decade for math and 4 percent for reading.

That the growth rates for the two types of achievement and IQ parallel one another may be more than a coincidence. Reading draws heavily on crystallized knowledge of the observable world, and skillful readers can give meaning to words that denote features of their physical and social environment. In math, this type of knowledge is necessary to understand symbols such as 1, 2, and 3 or +, -, and =, but analyzing and manipulating relationships among symbols is more a function of fluid reasoning. Several studies have shown math performance to be more strongly associated than reading performance with higher levels of fluid reasoning. In addition, a longitudinal study of preschool children found emergent school vocabulary to be associated with gains in verbal intelligence, a form of crystallized knowledge, but not with gains in fluid reasoning.

In the meta-analysis, Pietschnig and Voracek point to the factors that affect brain development as the most likely explanation for differential growth in these types of intelligence. Studies in neurobiology and brain imaging have found that when environmental factors like nutrition, infections, air pollution, or lead poisoning damage the brain’s prefrontal cortex, it affects fluid reasoning, but not crystallized knowledge. The negative impact on brain development of, for example, growing up amid famine or war would appear to have the biggest impact on fluid reasoning intelligence, used for math, rather than crystallized knowledge, used for reading.

Over the past 100 years, mothers and babies from all social backgrounds across the world have enjoyed increasingly higher quality nutrition and less exposure to contagious diseases and other environmental risks. Pietschnig and Voracek find substantial growth in fluid reasoning and less growth in crystallized knowledge on every continent, with particularly large gains in Asia and Africa. If students’ performance on math tests depends more on fluid reasoning than crystallized knowledge, then the greater progress in math than reading may be due to environmental conditions when the brain is most malleable—in early childhood, or even before students are born.

 

Put to the Test

Our data come from approximately 7 million U.S. student observations, as well as 4.5 million international student observations, on math and reading assessments in five psychometrically linked surveys administered by governmental agencies. The surveys have administered 160 waves of 17 temporally linked tests of achievement to nationally representative cohorts of U.S. students for various portions of the past half century.

Together, these data provide information on student race and ethnicity, gender, and socioeconomic status (an index based upon student reports of parents’ education and the number of possessions in the home). Within each subject, age/grade, and assessment, we normalize each subsequent cohort’s test score distribution with respect to the mean of test scores in its initial year of administration. With a quadratic fit, we then calculate the change in student performance, in standard deviations per decade, for each survey.
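As a rough sketch of that procedure, the snippet below normalizes one synthetic test series against its first wave and fits a quadratic to recover a per-decade trend. The years, scale scores, and base_sd value here are invented for illustration; the actual analysis applies survey weights across all 160 waves.

```python
import numpy as np

def decade_trend(years, mean_scores, base_sd):
    """Normalize a test series against its first wave and return the
    fitted change in performance, in SD units, over the first decade."""
    years = np.asarray(years, dtype=float)
    scores = np.asarray(mean_scores, dtype=float)
    z = (scores - scores[0]) / base_sd         # distance from initial-year mean, in SDs
    t = years - years[0]                       # time since first administration
    poly = np.poly1d(np.polyfit(t, z, deg=2))  # quadratic fit, as in the article
    return poly(10.0) - poly(0.0)              # fitted SD change over one decade

# Hypothetical series: scale-score means drifting upward across five waves.
print(decade_trend([1990, 1996, 2003, 2009, 2017],
                   [250.0, 254.0, 258.5, 262.0, 267.0],
                   base_sd=32.0))              # roughly 0.2 SD per decade
```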

1971-2012
National Assessment of Educational Progress, Long-Term Trend (LTT) Assessment
● Math and Reading – ages 9, 13, 17

1990-2017
National Assessment of Educational Progress, main NAEP
● Math and Reading – grades 4, 8, 12

1995-2015
Trends in International Math and Science Study (TIMSS)
● Math – grades 4, 8

2000-2015
Program for International Student Assessment (PISA)
● Math and Reading – age 15

2001-2016
Progress in International Reading Literacy Study (PIRLS)
● Reading – grade 4

 

 

The PISA Exception

The main exception to this pattern comes from the Program for International Student Assessment (PISA) given since 2000 to high-school students at age 15. On this test, and only on this test, both the overall trend and the math-reading comparison are the reverse of what we observe on all the other surveys. U.S. student performance declines over time, with steeper drops in math scores than in reading. In math, scores decline by 10 percent of a standard deviation per decade; in reading, they fall by 2 percent of a standard deviation per decade. This stands in sharp contradiction to student performance on the National Assessment of Educational Progress (NAEP). There, we see large gains of 27 percent of a standard deviation per decade in math among middle-school students, who take the test in 8th grade. In addition, student performance improves by 19 percent of a standard deviation per decade on another math exam, the Trends in International Math and Science Study (TIMSS). How can PISA obtain results so dramatically different from what other tests show? Is the PISA exam fundamentally flawed? Or is it measuring something different?

We cannot account for all differences among tests, but in our opinion, PISA math is as much a reading test as a math test. The goal of PISA is to measure a person’s preparation for life at age 15. It does not ask test-takers to merely solve mathematical problems, as do NAEP and TIMSS, but instead provides opportunities to apply mathematical skills to real-world situations. A 2018 analysis found that “more than two-thirds of the PISA mathematics items are independent of both mathematical results (theorems) and formulas.” A 2001 review found that 97 percent of PISA math items deal with real-life situations compared to only 48 percent of items in NAEP and 44 percent in TIMSS. Another analysis comparing the exams found that PISA questions often have more text, including extraneous information students should ignore, than NAEP questions. In addition, a 2009 study found “there is a very high correlation between PISA mathematics and PISA reading scores” and that “The overlap between document reading (e.g., graphs, charts, and tables) and data interpretation in mathematics becomes blurred.”

We do not pretend to know which testing program is administering the best exam. But we are quite certain that PISA is administering a decidedly different kind of math test, one that requires much more crystallized knowledge than other math tests.

Growth Over Time for Students of All Racial and Ethnic Groups (Figure 2)

Results by Social Group

Every test in our study shows a forward stride toward equality in student performance across race, ethnicity, and socioeconomic lines over the past half century (see Figure 2). The median rate of progress made by the average Black student exceeds that of the average white student by about 10 percent of a standard deviation per decade in both reading and math. Over 50 years, that amounts to about two years’ worth of learning, or about half the original learning gap between white and Black students. The disproportionate gains are largest for students in elementary school. They persist in middle school and, in diminished form, through the end of high school.

We don’t think this is due to outsized improvements in nutrition and medical care for Black children, because the gains are as great in reading as in math. It could be due to educationally beneficial changes in family income, parental education, and family size within the Black community. Other factors may also be at play, such as school desegregation, civil rights laws, early interventions like Head Start and other preschool programs, and compensatory education for low-income students. Regardless, the equity story is clearly positive, if still incomplete.

Hispanic student performance in math is similar: a steeper upward trend as compared to white students. However, gains in reading by Hispanic students, though still greater than the progress made by white students, are less pronounced than the math gains. This may be due to language barriers; about 78 percent of English language learners in the U.S. are Hispanic.

Overall, Asian students are making the most rapid gains in both subjects. Asian students have advanced by nearly two more years’ worth of learning in math and three more years’ worth of learning in reading than white students.

We also compare trends by socioeconomic status by building an index based on student reports of parents’ education as well as the number of possessions in the home. We compare achievement made by students coming from households in the top 25 percent and lowest 25 percent of the socioeconomic distribution. For all students, the achievement gap based on socioeconomic status closes by 3 percent of a standard deviation per decade in both reading and math.

The biggest gains occur in elementary school, where the gap closes over the 50-year period by 1.5 years’ worth of learning in math and three years’ worth in reading (see Figure 3). The differences shrink in middle school and are reversed in high school, where rates of progress by students in the top 25 percent modestly exceed those of students with the lowest socioeconomic status. The increase in the gap among the oldest students is 3 percent of a standard deviation per decade in math and 4 percent in reading.

In looking at low- and high-socioeconomic students within racial and ethnic groups, we see similar patterns for Black students in both subjects and for Hispanic students in math: achievement differences by socioeconomic background closing when students are tested at a younger age, but widening when students are tested toward the end of high school. Among Asian students, low-socioeconomic students continue to make greater progress than high-socioeconomic students in both subjects at all age levels.

What about income-based gaps in student achievement? In a widely circulated 2011 study, Stanford sociologist Sean Reardon found the income-achievement gap had increased dramatically over the past half century and more. However, the data upon which this claim rests are fragile, in that he relies for his conclusion upon results from disparate tests that are not linked and therefore are not necessarily comparable. To see whether trends from linked surveys support Reardon’s findings, we explore trends in achievement by the number and type of possessions students report as being in their homes, a plausible indicator of family income.

Overall, the evidence points in a direction opposite to Reardon’s findings, and results are qualitatively similar to the ones observed when estimated by the socioeconomic index. We find disproportionately larger gains for students in the lowest income quartile in both math and reading at younger ages. The difference is 5 percent of a standard deviation per decade in math and 6 percent in reading. However, we find that among students tested at the end of high school, the students from the highest quartile of the income distribution make greater progress than those from the lowest quartile by 6 percent of a standard deviation in math and 9 percent of a standard deviation in reading.

In sum, inferences about whether the size of the income gap, or the socioeconomic gap more generally, has increased or decreased depend largely on whether one places greater weight on tests administered to students in earlier grades or on trends for students tested as they reach the end of high school. For some, the high-school trend is most relevant, as it measures performance as students are finishing their schooling. For others, it is the least informative trend, as it could be subject to error if some older students are taking standardized tests less seriously in recent years or if rising graduation rates have broadened the pool of older students participating in the test.

But it is worth mentioning again that PISA stands out as an exception. It is the only test that shows much larger gains for U.S. high-school students from families in the lowest socioeconomic quartile than for those in the highest one. The performance of the most advantaged 15-year-old students slid each decade by no less than 20 percent of a standard deviation in math and 14 percent in reading. Meanwhile, students in the bottom quartile showed notable gains of 4 percent of a standard deviation in math and 15 percent in reading. That amounts to closing the socioeconomic achievement gap by a full year’s worth of learning each passing decade. If PISA is to be believed, we are well on the way to equality of achievement outcomes.

Larger Gains for Disadvantaged Students in Elementary School, but Differences Decline and Are Reversed as Students Age (Figure 3)

Recent History

Critical assessments of America’s schools have a long history. But criticism grew sharper after the passage of the federal No Child Left Behind Act of 2001, which required annual testing and score reporting and set deadlines for improvement. In the past two decades, public opinion has been split widely between those who say the law enhanced student achievement and those who claim it made matters worse.

We split the sample into students born before and after 1990 to determine whether gains in median test scores were greater or lesser after the law was passed. Reading scores grew by 8 percent of a standard deviation more per decade among students born between 1991 and 2007 compared to students born between 1954 and 1990. In math, scores of more recent test-takers grew by 8 percent of a standard deviation per decade less than their predecessors.

Why would progress in math have slowed when progress in reading sped up? The first half of the question is more easily explained than the second half. Trends in math achievement, as we have seen, are sensitive to changes in fluid reasoning ability. Factors that drive broad growth of that type of intelligence, such as better nutrition and decreased vulnerability to environmental contaminants, may have been changing more rapidly 30, 40, and 50 years ago compared to the past two decades. But why, then, have reading scores climbed more quickly? Did schools operating under No Child Left Behind have a more positive impact on reading performances? Or are families more capable of helping their children to read? Or both? Our data cannot say.

Recently, school closings in response to the Covid-19 pandemic seem to have had a negative impact on learning for an entire generation of students and exacerbated achievement gaps. This recalls similar educational setbacks from school closures during wars and strikes, reduced instructional time due to budget cuts (see “The Shrinking School Week,” research, Summer 2021), and broad absenteeism during weather events (see “In Defense of Snow Days,” research, Summer 2015). Indeed, Pietschnig and Voracek detect a slowdown in intellectual growth during World War II, a likely byproduct of both school closures and worldwide disruptions of economic and social progress.

But on the whole, families and schools both appear to have played a key role in reducing achievement gaps by race, ethnicity, and socioeconomic status over time. They also may have facilitated more rapid gains in reading among students born after 1990. Parental educational attainment and family incomes, both of which are strong correlates of student achievement, have risen in this more recent period. In addition, school reforms—desegregation, accountability measures, more equitable financing, improved services for students learning English, and school choice—have had their greatest impact on more recent cohorts of students.

Still, a research focus on families and schools may distract attention away from broader social forces that could be at least as important. For example, diminished progress in math for those born later than 1990 could be due to a decline in returns from improved health and nutrition in advanced industrialized societies. In addition, the greater gains of students at an early age and the recent flattening of growth in math performance all suggest that broader social, economic, and physical environments are no less important than schools and families. It is reasonable to infer from our research that policies benefiting children from the very beginning of life could have as much impact on academic achievement, especially in math, as focused interventions attempted when students are older.

Paul E. Peterson is a professor and director of the Program on Education Policy and Governance at Harvard University and a senior fellow at the Hoover Institution, Stanford University.

M. Danish Shakeel is a professor and director of the E. G. West Centre for Education Policy at the University of Buckingham, U.K. This essay is drawn from an article just released by Educational Psychology Review.

This article appeared in the Fall 2022 issue of Education Next. Suggested citation format:

Shakeel, M.D., and Peterson, P.E. (2022). A Half Century of Student Progress Nationwide: First comprehensive analysis finds broad gains in test scores, with larger gains for students of color than white students. Education Next, 22(4), 50-58.

For more, please see “The Top 20 Education Next Articles of 2023.”

Charters Improving at Faster Pace in Urban Areas https://www.educationnext.org/charters-improving-at-faster-pace-in-urban-areas/ Wed, 28 Oct 2020 09:01:14 +0000 https://www.educationnext.org/?p=49712748 Given the apparent underrepresentation of urban charter students in 2017, it is more likely that we are underestimating the overall gains

The post Charters Improving at Faster Pace in Urban Areas appeared first on Education Next.


In September we released an article on the Education Next website titled “Charter Schools Show Steeper Upward Trend in Student Achievement than District Schools.” Using a sample of more than four million test performances, it compares the progress made by cohorts of charter and district school students on the National Assessment of Educational Progress (NAEP) from 2005 to 2017. Overall, students at charters are advancing at a faster pace than those at district schools. The strides made by African-American charter students have been particularly impressive. We also see larger gains at charters, relative to district schools, by students from disadvantaged backgrounds. We see more moderate differences between the two sectors for white students, and, among Asian Americans and Hispanic Americans, both sectors made sizeable, comparable strides forward. The figure below breaks out the results by both sector and ethnicity.

Figure: Trends in Average Performance at Charter and District Schools, 2005-2017

Our results have stimulated a good deal of conversation among policy analysts, particularly among those who support charter schools. We have been surprised at the modest amount of criticism the study has received from charter critics. However, a report on the study in The 74 includes a critical comment by one observer, who suggests the sample we used was “skewed” in favor of urban charters, a locale where charter students are improving at a more rapid pace than those in district schools.

We were puzzled by the criticism because the sample used for our study comes directly from NAEP, often referred to as the nation’s report card, which, in compliance with federal law, seeks to draw a representative sample for each state and for the United States as a whole. Admittedly, NAEP draws its sample to be representative of all public school students, not to be representative of the district and charter sectors separately. It is conceivable, though not likely, that NAEP somehow drew its charter sample in such a way as to over-represent urban students.

For such a skew to bias our results, the skew would need to increase between 2005 and 2017, because our study shows only that cohorts of students at charter schools, on average, are improving at a more rapid rate over this period than those at district schools, not that the average charter school student is performing at a higher level than the average district school student.

Fortunately, we can check to see whether the composition of students tested by NAEP has become increasingly urban in a way that is inconsistent with trends reported by the National Center for Education Statistics (NCES) between 2007 and 2017. (The year 2007 is the first year for which the definition of urban is the same for the two data sets.) The comparison between the two data sets must be done cautiously because they are not measuring the same population and they are both subject to error. NAEP samples only students in 4th and 8th grade, because those are the two grades it tests (we excluded grade 12 due to the low number of observations for charters). NCES asks all public schools in the United States to report enrollment for all students in elementary and secondary schools.  Further, NAEP tests in the spring, whereas NCES reports fall enrollments. And reporting error can bedevil any national data collection effort.

Even with these caveats, an inspection of the urban composition reported by both NAEP and NCES is instructive. If the skewness hypothesis is correct, it implies that the urban trend is steeper in the NAEP than the NCES data set. But in fact, the trend is just the opposite of what the skew hypothesis implies (see the table below). In 2007, and even more in 2011, urban charters were over-represented in the NAEP data set (as compared to the NCES one), but that shifts to under-representation of urban charters in NAEP by 2017. In this final year of the available data, the urban share in the NAEP sample is 3.5 percentage points less than the percentage given in the NCES data set. If the NCES data correctly identify the percentage of 4th and 8th grade students in urban charters, students in urban charters are under-represented in NAEP in 2017, after having been over-represented previously.

Share of charter school students in urban areas, as reported by NAEP and NCES

Year NCES NAEP NCES Source
2007 54.3 55.4 https://nces.ed.gov/programs/digest/d14/tables/dt14_216.30.asp
2009 54.8 53.06 https://nces.ed.gov/programs/digest/d14/tables/dt14_216.30.asp
2011 55.5 60.87 https://nces.ed.gov/programs/digest/d19/tables/dt19_216.30.asp
2013 56 58.26 https://nces.ed.gov/programs/coe/pdf/Indicator_CLA/coe_cla_2016_05.pdf
2015 56.5 55.29 https://nces.ed.gov/programs/digest/d17/tables/dt17_216.30.asp
2017 56.1 52.52 https://nces.ed.gov/programs/digest/d19/tables/dt19_216.30.asp

Note: Summary statistics are presented using survey weights.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2005–2017 Main NAEP.
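The over- or under-representation question reduces to the sign of the NAEP-minus-NCES difference in each year; a quick check using the figures copied from the table above:

```python
# Urban share of charter students (percent), as reported in the table above.
shares = {
    2007: {"nces": 54.3, "naep": 55.4},
    2009: {"nces": 54.8, "naep": 53.06},
    2011: {"nces": 55.5, "naep": 60.87},
    2013: {"nces": 56.0, "naep": 58.26},
    2015: {"nces": 56.5, "naep": 55.29},
    2017: {"nces": 56.1, "naep": 52.52},
}

# A positive gap means NAEP over-represents urban charters relative to NCES.
for year in sorted(shares):
    gap = shares[year]["naep"] - shares[year]["nces"]
    label = "over-represented" if gap > 0 else "under-represented"
    print(f"{year}: NAEP - NCES = {gap:+.2f} points ({label})")
```

The gap flips sign over the period: positive in 2007 and 2011, negative by 2015 and 2017, consistent with the argument that any urban skew in NAEP had reversed by the final year of data.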

In the original paper, we break out trends by locale, so that people can see for themselves whether urban charters are doing better than suburban charters. They are. But that fact does not support the suggestion that the entire sample over-estimates the gains in the charter sector. Given the apparent under-representation of urban charter students in 2017, it is more likely that we are under-estimating the overall gains in the charter sector.

M. Danish Shakeel is a postdoctoral research fellow at the Program on Education Policy and Governance at Harvard University. Paul E. Peterson is the Henry Lee Shattuck Professor of Government and director of the Program on Education Policy and Governance at Harvard University, a senior fellow at the Hoover Institution at Stanford University, and senior editor of Education Next.

Charter Schools Show Steeper Upward Trend in Student Achievement than District Schools https://www.educationnext.org/charter-schools-show-steeper-upward-trend-student-achievement-first-nationwide-study/ Tue, 08 Sep 2020 20:04:33 +0000 https://www.educationnext.org/?p=49712508 First nationwide study of trends shows large gains for African Americans at charters

The post Charter Schools Show Steeper Upward Trend in Student Achievement than District Schools appeared first on Education Next.


The number of charter schools grew rapidly for a quarter-century after the first charter opened its doors in 1992. But since 2016, the rate of increase has slowed. Is the pause related to a decline in charter effectiveness?

To find out, we track changes in student performance at charter and district schools on the National Assessment of Educational Progress, which tests reading and math skills of a nationally representative sample of students every other year. We focus on trends in student performance from 2005 through 2017 to get a sense of the direction in which the district and charter sectors are heading. We also control for differences in students’ background characteristics. This is the first study to use this information to compare trend lines. Most prior research has compared the relative effectiveness of the charter and district sectors at a single point in time.

Our analysis shows that student cohorts in the charter sector made greater gains from 2005 to 2017 than did cohorts in the district sector. The difference in the trends in the two sectors amounts to nearly an additional half-year’s worth of learning. The biggest gains are for African Americans and for students of low socioeconomic status attending charter schools. When we adjust for changes in student background characteristics, we find that two-thirds of the relative gain in the charter sector cannot be explained by demography. In other words, the pace of change is more rapid either because the charter sector, relative to the district sector, is attracting a more proficient set of students in ways that cannot be detected by demographic characteristics, or because charter schools and their teachers are doing a better job of teaching students.
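The “two-thirds unexplained” claim is a ratio of the demographically adjusted trend gap to the raw one. A stylized sketch, using only the article’s headline magnitudes (the 0.125-SD raw gap is our translation of “nearly half a year’s worth of learning” at 0.25 SD per year; the function name is ours):

```python
def share_unexplained(raw_gap_sd: float, adjusted_gap_sd: float) -> float:
    """Fraction of the charter-district trend gap that survives
    adjustment for student background characteristics."""
    return adjusted_gap_sd / raw_gap_sd

raw_gap = 0.125                 # ~half a year's learning over 2005-2017
adjusted_gap = raw_gap * 2 / 3  # the portion demography cannot explain
print(round(share_unexplained(raw_gap, adjusted_gap), 2))  # 0.67
```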

Three Decades of Growth

The nation’s first charter school opened in Minnesota in 1992, under a 1991 state law that established a new type of publicly funded, independently operated school. School systems in 43 states and the District of Columbia now include charter schools, and in states like California, Arizona, Florida, and Louisiana, more than one in 10 public-school students attend them. In some big cities, those numbers are even larger: 45 percent in Washington, D.C., 37 percent in Philadelphia, and 15 percent in Los Angeles.

Nationwide, charter enrollment tripled between 2005 and 2017, with the number of charter students growing from 2 percent to 6 percent of all public-school students. But the rate of growth slowed after 2016 (see “Why Is Charter Growth Slowing? Lessons from the Bay Area,” research, Summer 2018). There are several possible reasons for this. The rate of states passing charter laws declined after 1999, and many of the laws passed since 2000 have included provisions that can stymie growth: caps on the number of schools allowed, arcane application requirements, and land-use and other regulations. In addition, a political backlash is slowing charter expansion in some states.

Researchers who have looked at the academic performance of students in charter and district schools at a single point in time have generally found it to be quite similar. For example, the 2019 “School Choice in the United States” report by the National Center for Education Statistics looked at students’ reading and math test scores in 2017 and found “no measurable differences” between the sectors. Also, multi-state studies by the Center for Research on Education Outcomes, or CREDO, at Stanford University have found only small differences in achievement at charter and district schools.

Analyses that summarize findings from multiple studies also report little difference on average between the two sectors, though they do identify specific situations in which charter schools excel. In a comprehensive review published in 2018, Sarah Cohodes wrote that, while the evidence on the whole shows “on average, no difference” between the two sectors, “urban charter schools serving minority and low-income students that use a ‘no excuses’ curriculum” have “significant positive impacts.” In a 2019 meta-analysis of 47 charter studies, Julian Betts and Y. Emily Tang found overall only a small predicted gain from attending a charter of between one-half and one percentile point. And in a 2020 paper, Anna Egalite reported little difference, on average, between the two sectors but wrote that charters in some locales reveal “statistically significant, large, and educationally meaningful achievement gains” for low-income students, students of color, and English language learners.

However, no study has used nationally representative data with controls for background characteristics to estimate trends in student performance over a twelve-year period. That is our goal here.

Data and Method

Our data come from the National Assessment of Educational Progress. NAEP is a low-stakes test that does not identify the performance of any student, teacher, school, or school district. Rather, it is used to assess the overall proficiency of the nation’s public-school students in various subjects at the state and national levels. A nationally representative sample of students in grades 4, 8, and 12 take the reading and math tests every other year. We do not report results for 12th-grade students because the number of test observations in the charter sector is too small to allow for precise estimation.

Between 2005 and 2017, more than four million tests were administered to district students, and nearly 140,000 tests were given to charter students, with data available on each student’s ethnicity, gender, eligibility for free and reduced lunch, and, for eighth-grade students only, the level of parental education, number of books in the home, and availability of a computer in the home. We do not include in our main analysis controls for participation in the federally funded special education and English language learner programs, because schools in the two sectors may define eligibility differently. However, we confirm that our results do not change in any material way when controls for these two variables are introduced.

We report trends in standard deviations, a conventional way of describing performance differences on standardized tests. Because NAEP tests are linked by subsets of questions asked in both grades 4 and 8, we can use this metric to estimate the difference in the average performance of students in those grades. We then create an estimate of a year’s worth of learning based on the average difference in student performance between those grades.

We compare the performance of student cohorts on those tests in 2005 and 2017 and find that, on average, students in 8th grade performed 1.23 standard deviations higher than students in 4th grade. Dividing that gap by the four years of schooling that separate the two grades implies that students learn enough each year to raise their reading and math test scores by approximately 0.31 standard deviations. Accordingly, we interpret a test-score improvement of 0.31 standard deviations as equivalent to roughly one year’s worth of learning.
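The conversion described above is simple enough to spell out as a short calculation (a sketch; the variable names and rounding are ours, not NAEP’s):

```python
# Illustrative arithmetic for restating test-score gaps as "years of learning."
# Assumption: 8th graders outscore 4th graders by 1.23 standard deviations,
# and the two tested grades are separated by four years of schooling.

grade_gap_sd = 1.23      # 8th-grade minus 4th-grade average, in standard deviations
years_between = 8 - 4    # grade levels separating the two tested cohorts

one_year_of_learning = grade_gap_sd / years_between
print(round(one_year_of_learning, 2))  # -> 0.31

# Any trend differential can then be restated in years of learning,
# e.g., a 0.17 standard deviation charter-district gap:
charter_district_gap_sd = 0.17
print(round(charter_district_gap_sd / one_year_of_learning, 2))  # -> 0.55, about half a year
```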

Trends in performance are based on the distance between the charter and district school scores on NAEP tests in 2007, 2009, 2011, 2013, 2015, and 2017 and their average scores in 2005, which are set to zero. We report these differences in standard deviations. We apply the survey weights provided by NAEP to obtain representative results.

Figure 1 - Charters Catch Up to District Schools on National Tests

Investigating Differences by School Type

We first look at differences in average scores on the 2005 and 2017 tests. On average, district schools outperformed charter schools in 2005 in both the 4th and 8th grades—particularly in math. For 4th-grade students, the average math score at district schools was 237 points compared to 232 at charter schools, a difference of 0.15 standard deviations. In reading, the district school average was 217 compared to 216 at charters. For 8th-grade students, the average math score at district schools was 278 compared to 268 at charters, a difference of about 0.28 standard deviations. In reading, the district school average was 260 compared to 255 at charters.

By 2017, most of these differences had disappeared, or nearly so (see Figure 1). In 4th grade, charters still trailed districts by 3 points in math, with an average score of 236 compared to 239. In reading, however, the average charter score was one point higher, at 266 compared to 265 for district schools. On 8th-grade tests, the sectors had the same average score in math, 282, and virtually the same in reading, at 266 for charters and 265 for district schools. None of these 2017 differences were large enough to be statistically significant.

In looking at performance trends across all seven of the NAEP math and reading tests from 2005 through 2017, we find a larger increase in student achievement for students at charter schools than for students at district schools (see Figure 2). On average across grades and subjects, test scores at charter schools improved by 0.24 standard deviations during this time compared to 0.1 standard deviations at district schools.

Changes in the demographic composition of students who were enrolled at district and charter schools during those years may have differed, so we perform additional analyses that adjust for students’ background characteristics. After that adjustment, the test scores for students at charter schools improved by 0.09 standard deviations more than scores for students at district schools, which is equivalent to a little less than one-third of a year’s worth of learning. The differences are larger for 8th-grade students, at 0.12 standard deviations, than for 4th-grade students, at 0.06 standard deviations.
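One standard way to carry out the kind of background adjustment described above is to regress scores on a sector-by-year interaction plus demographic controls; the coefficient on the interaction captures the differential trend net of observed characteristics. The sketch below illustrates the idea on simulated data — the data, variable names, and bare-bones least-squares setup are our illustration, not the authors’ actual estimation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Simulated student records: sector (1 = charter), year (0 = 2005, 1 = 2017),
# and one demographic control (here, an indicator for low socioeconomic status).
charter = rng.integers(0, 2, n)
year = rng.integers(0, 2, n)
low_ses = rng.integers(0, 2, n)

# Generate scores with a built-in differential charter trend of 0.09 SD.
score = 0.10 * year + 0.09 * charter * year - 0.20 * low_ses + rng.normal(0, 1, n)

# Design matrix: intercept, charter, year, charter x year, demographic control.
X = np.column_stack([np.ones(n), charter, year, charter * year, low_ses])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# beta[3] is the adjusted charter-district difference in trends;
# it should land near the 0.09 built into the simulation.
print(round(beta[3], 2))
```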

In other words, a considerable difference in the trends in student performance between charters and district schools cannot be explained by demographics. Either there are unobserved changes in student characteristics related to performance in the two sectors or charter schools, relative to district schools, are providing an increasingly effective learning environment.

Figure 2 - Accelerated Achievement Trends at Charter Schools

Results by Ethnicity

We then investigate differences in achievement by various student groups. To see whether cohort gains vary by ethnicity, we estimate changes for African Americans, white Americans, Hispanic Americans, and Asian Americans. In the absence of citizenship information, we assume that all tested students are Americans.

In 2005, average test scores for African-American students in both sectors were the lowest of the four groups. Test performance for African Americans improved over time at both district and charter schools, but the trend was far more dramatic at charters. This is especially noteworthy as one in three charter students is African American.

At district schools, average scores on all tests for cohorts of African Americans in grades 4 and 8 improved by 0.14 standard deviations between 2005 and 2017. At charter schools, the combined average gain was more than twice as large, at 0.33 standard deviations. For African-American 8th-grade charter students, average math scores improved by 0.46 standard deviations, which was four times larger than for students attending district schools. In reading, average scores improved by 0.33 standard deviations for students at charters, twice those of students attending district schools. Given the importance of closing the Black-white test score gap, the much steeper upward trend at charters is particularly meaningful. The magnitude of the difference is roughly a half-year’s worth of learning.

We compare the differences in achievement gains between the two sectors after adjusting for students’ background characteristics. Across reading and math tests at both grade levels, we find that cohorts of African Americans at charters performed higher by 0.17 standard deviations compared to those at district schools (see Figure 3). The upward trend is nearly as steep as the gains in the unadjusted estimates. In other words, very little of the differential gains in test-score performance by African-American students can be explained by changes in observable background characteristics.

Figure 3 - Outsized Gains for African American Charter Students

We next look at white Americans, who also account for about one in three charter students. Average scores for white students improve by 0.22 standard deviations across all grades and tests, more than twice the district-sector gain of 0.10 standard deviations. After controlling for student characteristics, that estimate drops to 0.06 standard deviations. That unexplained differential change for white students is about one-third as large as it is for African Americans.

We find no clear difference in performance trends of Hispanic Americans in district and charter schools between 2005 and 2017. Hispanic Americans account for 24 percent of 4th-grade charter students and 30 percent of 8th graders. In both charters and district schools, their average combined performance increased by 0.21 standard deviations. These strong gains persist after controlling for background characteristics, emerging as one of the brightest and most notable aspects of education in the United States over this period. It is hard to conclude anything other than that Hispanic-American students are doing well in both sectors.

Much the same can be said for Asian Americans, the smallest ethnic group within the charter sector. They comprise only 4 percent of 4th-grade charter students and 5 percent of 8th-grade charter students. Performance gains for this group are larger, in both the district and charter sectors, than for any of the three larger groups.

Results by Socioeconomic Status

To estimate trends by students’ socioeconomic status, we create an index based on 8th-grade student reports of parental education, availability of books in the home, and a computer in the home. We divide students into four equally sized groups, or quartiles, based on this index and discuss here the differences in achievement gains between those in the highest and lowest socioeconomic quartiles. NAEP did not ask 4th-grade students about their parents’ education and home possessions, so we cannot conduct a parallel analysis at that grade level.
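An index like the one described above can be built by summing a few ordered background measures and then cutting the result into quartiles. A minimal sketch of that procedure follows — the field names and scoring are assumptions for illustration, not NAEP’s actual coding:

```python
# Build a simple socioeconomic index from three ordered survey responses,
# then assign each student to a quartile (0 = lowest, 3 = highest).
# The categories and point values here are illustrative assumptions.

students = [
    {"parent_ed": 3, "books": 2, "computer": 1},
    {"parent_ed": 0, "books": 0, "computer": 0},
    {"parent_ed": 2, "books": 3, "computer": 1},
    {"parent_ed": 1, "books": 1, "computer": 0},
]

# Sum the ordered responses into a single index.
for s in students:
    s["ses_index"] = s["parent_ed"] + s["books"] + s["computer"]

# Rank students by the index and split them into four equally sized groups.
ranked = sorted(students, key=lambda s: s["ses_index"])
quartile_size = len(ranked) // 4
for i, s in enumerate(ranked):
    s["ses_quartile"] = min(i // max(quartile_size, 1), 3)

print([(s["ses_index"], s["ses_quartile"]) for s in ranked])
# -> [(0, 0), (2, 1), (6, 2), (6, 3)]
```

With real data, ties at quartile boundaries would need a rule for consistent assignment; the rank-then-split approach above simply breaks them by position.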

We start by looking at average scores for 8th-grade students in the highest socioeconomic quartile. At district schools, cohort average scores improve by 0.02 standard deviations in math and 0.09 standard deviations in reading (see Figure 4).

Figure 4 - An Additional Half-Year of Learning for Low Socioeconomic Status Charter Students

After controlling for other background characteristics, the increments become slightly larger—0.04 and 0.13 standard deviations, respectively. At charters, cohorts of students in the highest quartile make even more rapid progress: average math scores improve by 0.27 standard deviations and average reading scores grow by 0.21 standard deviations over the study period. Again, we adjust for students’ background characteristics and find the magnitude of the trend at charters appears much the same—0.21 and 0.22 standard deviations, respectively.

Cohorts of students in the lowest socioeconomic quartile who are attending district schools show steeper gains than those in the highest quartile. The average test scores for students in the lowest quartile climb upward by 0.21 standard deviations in math and 0.24 standard deviations in reading. This suggests a modest closing of the socioeconomic achievement gap at district schools.

Students in the lowest socioeconomic quartile who attend charter schools make the most substantial progress of all. At charters, average test scores for students in the lowest quartile improve by 0.48 standard deviations in math and 0.31 standard deviations in reading. These estimates do not materially change when background controls are introduced. When the two subjects are combined, the differential in the trends between the charter and district sectors is 0.17 standard deviations, or approximately a half-year’s worth of learning.

Regional and Community Differences

To explore charter trends by region, we follow the model set forth by the U.S. Census and divide the United States into four sections: Northeast, Midwest, South, and West. Our analysis shows greater gains in student learning at charters compared to district schools in three of the four regions.

In the Northeast, we look at combined scores at both grade levels and find that students attending charter schools make more rapid gains than their peers in district schools. Students at district schools improve, on average, by only 0.05 standard deviations compared to 0.19 at charters, a difference of 0.14 standard deviations (see Figure 5). We control for student characteristics and find an even larger difference of 0.24 standard deviations, or about two-thirds of a year of learning. Average scores at charters improve by 0.38 standard deviations, a gain of over a year’s worth of learning, compared to 0.13 standard deviations at district schools.

In the Midwest, we estimate combined average gains at 0.25 standard deviations at charters. This is about 0.17 standard deviations larger than the gains in the district sector, or about half a year of learning. After adjusting for changes in background characteristics, that differential narrows to 0.11 standard deviations. This suggests that roughly one-third of the difference in performance trends is due to changes in student characteristics, with the remaining two-thirds independent of such changes.

In the South, after adjusting for student characteristics, average scores at district schools improve by 0.19 standard deviations for the two subjects at both grade levels. At charters, average scores improve by 0.25 standard deviations, a charter-district differential of 0.06 standard deviations.

After adjusting for background characteristics, cohorts of students at district schools in the West show average gains of 0.28 standard deviations in math and reading at both grade levels, higher than in any other region of the country. The comparable adjusted trend at charter schools in the West averages 0.25 standard deviations, just short of the gains registered in the district sector.

Finally, we also look at differences in student performance between charters and district schools in both urban and suburban communities. Two-thirds of the charter students who participated in NAEP attend schools located in cities, and we find larger gains for those students. Across all tests and grade levels on average, we find gains of 0.22 standard deviations between 2007 (the earliest year for which comparable data are available) and 2017, controlling for background characteristics. That is 0.08 standard deviations larger than in district schools and amounts to an additional one-quarter of a year’s worth of learning that cannot be attributed to observable differences in students’ backgrounds. We find no relative advantage for students attending charters in suburbs.

Figure 5 - Charter School Gains Vary by Region

Discussion

This is the first study to use nationally representative data to track changes in student achievement within the charter and district school sectors. Between 2005 and 2017, we find that in the district sector the performance of cohorts of students, once adjustments have been made for demographic characteristics, has trended upward by about a half-year’s worth of learning—a fairly optimistic portrait of trends in American schooling. The trend lines are particularly favorable for Hispanic Americans, Asian Americans, students in the West, and students in the lowest quartile of the socioeconomic distribution. We find less of an upward trend among white Americans, students in the Northeast, and students in the highest quartile of the socioeconomic distribution.

The performance of cohorts of students at charter schools has shifted upward more steeply than the trend at district schools, erasing the substantial gap between the two sectors that had existed in 2005. The average gains by 4th- and 8th-grade charter students are approximately twice as large as those by students in district schools, a difference of a half-year’s worth of learning. The steepest gains at charters, relative to district schools, are for African Americans, students in the Northeast, and those from households in the lowest quartile of the socioeconomic distribution.

About one-third of that gain can be explained by changes in students’ background characteristics, a signal perhaps that charters have become more attractive to broader segments of the population. The other two-thirds, however, cannot be explained by the demographic information gathered by NAEP.

We suspect that improved teaching and learning environments in the charter sector account for most, if not all, of the improvement not explained by background characteristics. Any change driven by intensified recruitment of more proficient students is likely to shift the demographic composition of the sector. If charter schools begin to open in communities with higher average test scores, that change is likely to be detected by changes in students’ socioeconomic status and ethnicity. And if parents of more proficient students are turning to charters in ever-increasing percentages, that change, too, is likely to be correlated with demographic changes. For these reasons, the most likely explanation for the differential trends in student performance that persist after introducing controls for background characteristics is improvement in instruction and learning environments.

The combination of enhanced performance by charters and their recruitment of a more proficient clientele follows the same course taken by a classic example of disruptive innovation described by Clayton Christensen. Initially, the transistor radio was of such low quality that it was purchased only by those who did not have a viable alternative, primarily young people who wanted to listen to their own music. But as the product improved, its market share broadened to include adults with more resources.

The identification of the beginnings of such a trend within the charter sector is consistent with two other studies that have looked at performance trends in the charter sector: a study of Texas by Patrick Baude, Marcus Casey, Eric Hanushek, Gregory Phelan, and Steven Rivkin; and CREDO’s study of the four-year trend between 2009 and 2013 in 16 states. Both find greater progress relative to district schools, and both attribute the change to replacement of less effective schools with higher-performing ones.

Our findings also resemble some results from studies that estimate charter performance at a single point in time. That research has found that the more effective charter schools are serving disadvantaged students, most notably African-American students in urban areas, mainly located in the Northeast.

Otherwise, prior research on charters has found little difference between their performance and that of district schools, on average. Nothing in our results contradicts those findings. However, we do show that the pace of improvement is greater in the charter sector than in the district sector, and we show that much of the steeper upward trend in student performance at charters cannot be explained by changes in student demographic characteristics.

Given the rising achievement levels at charter schools, the slowdown in the sector’s growth rate cannot be attributed to declining quality. It is more likely that political resistance to charters is increasing as both the management and labor sides of the district sector become increasingly concerned that charters might prove to be as disruptive an innovation as the transistor.

M. Danish Shakeel is a postdoctoral research fellow at the Program on Education Policy and Governance at Harvard University. Paul E. Peterson is the Henry Lee Shattuck Professor of Government and director of the Program on Education Policy and Governance at Harvard University, a senior fellow at the Hoover Institution at Stanford University, and senior editor of Education Next.

This article appeared in the Winter 2021 issue of Education Next. Suggested citation format:

Shakeel, M.D., and Peterson, P.E. (2021). Charter Schools Show Steeper Upward Trend in Student Achievement than District Schools: First nationwide study of trends shows large gains for African Americans at charters. Education Next, 21(1), 40-47.

Amid Pandemic, Support Soars for Online Learning, Parent Poll Shows
https://www.educationnext.org/amid-pandemic-support-soars-online-learning-parent-poll-shows-2020-education-next-survey-public-opinion/
Tue, 18 Aug 2020
Results from the 2020 Education Next Survey of Public Opinion

Survey shows increased acceptance of online learning amid pandemic-induced school closures.

For 14 years, the Education Next annual survey has tracked American opinion on education policy. We have gauged people’s views through the throes of the Great Recession, dramatic changes in partisan control in both Washington, D.C., and state capitols, and attendant shifts in the direction of federal and state education policy. None of that compares to the disruption that unfolded this spring, as the Covid-19 pandemic closed schools nationwide and brought the American economy to its knees.

This year’s survey, administered in May 2020, provides an early look at how the experiences of the past few months may shape Americans’ views on education policy going forward. The survey’s nationally representative sample of 4,291 adults includes an oversampling of teachers and of those who identify themselves as Black and Hispanic. (All results are adjusted for non-response and oversampling; see methods sidebar for details.)

In a companion essay, we report parents’ perspectives on their children’s educational experiences during the lockdowns (see “What American Families Experienced When Covid-19 Closed Their Schools,” features). Here we examine public opinion on issues at the core of education-policy debates as we head into the height of the 2020 presidential campaign. We start with a summary of the survey’s top findings.

1. Teacher Pay. Support for teacher pay hikes remains nearly as high as it has been at any point since 2008, when we first surveyed the public on the issue. Among those given information about current salary levels in their state, 55% say teacher salaries should increase—essentially the same as last year and a jump of 19 percentage points over 2017. Among those not given salary information, 65% back an increase.

2. School Spending. Americans are split on whether to increase overall investment in public schools. Among those told current expenditure levels, 45% say that K–12 school spending should increase. This level of support is 5 percentage points lower than last year’s, but it still registers 6 points higher than in 2017. Democrat (56%), Black (63%), and Hispanic (55%) respondents are more likely to back a boost in funding than are Republican (31%) and white (39%) respondents.

3. Online Education. Americans’ interest in online schooling is on the rise. In 2020, 73% of parents say they are willing to have their child take some high school courses via the Internet—a jump of 17 percentage points over 2009. Parents who report more positive experiences with remote instruction when schools closed this spring are more likely to support online education.

4. School Choice. Support for school-choice reforms either holds steady or declines modestly since last year. The policy of giving tax credits to fund private-school scholarships for low-income students—a concept backed by the Trump administration and recently given a boost by the U.S. Supreme Court’s decision in Espinoza v. Montana Department of Revenue—draws the most support, including from 59% of Republicans and 56% of Democrats. Attitudes toward charter schools divide along party lines: 54% of Republicans support charters, compared to only 37% of Democrats. Vouchers to help pay private-school tuition continue to command strong support among Black (60% for universal vouchers; 65% for low-income vouchers) and Hispanic (62% for universal vouchers; 59% for low-income vouchers) respondents. Universal vouchers are more popular among Republicans than Democrats (56% to 47%), but the reverse is true of vouchers targeted to low-income students (45% to 52%). Neither type of voucher polarizes public opinion as much as charter schools do.

5. Opinion on Public Schools and Teachers. The challenges to schools wrought by the pandemic have not shaken Americans’ confidence in their public schools. Levels of approval remain at or near peak. Fifty-eight percent of respondents give their local public schools a grade of A or B (down 2 points from last year), and 30% give the nation’s public schools a similar grade (the highest level the survey has recorded). The public also gives teachers high marks during this difficult time. On average, respondents rate 61% of local teachers as either excellent or good—a 5-percentage-point increase since 2018. They rate 14% of teachers as unsatisfactory.

6. Free College. Fifty-five percent of Americans endorse the idea of making public four-year colleges free to attend, a dip of 5 percentage points since last year. The concept divides Americans along party lines, with 74% of Democrats but just 29% of Republicans expressing support.

7. Trump Effect. On five issues—Common Core, charter schools, tax-credit-funded scholarships, merit pay for teachers, and in-state tuition for undocumented immigrants—we told a randomly selected group of respondents the president’s position while asking other respondents the same question without mentioning his views. Generally, information about Trump’s positions polarizes opinion, moving Republicans toward the president and pushing Democrats away. These shifts among partisans often offset each other and leave little discernible change in overall public opinion.

8. Populism and Education Policy. To explore the implications of populist sentiment among the American public, we posed a set of questions gauging the extent to which respondents agree with claims such as “elected officials should always follow the will of the people” and then identified the most- and least-populist respondents. We find that populism is a distinctive brand with adherents in both parties. Though 56% of Republicans rank above the median in terms of populism, so do 46% of Democrats. Moreover, populism is a strong predictor of education-policy views: The most-populist Americans assign lower grades to public schools locally and nationally and express greater approval for measures to expand school choice.


Teacher Pay and School Spending

This past spring, Education Next conducted its annual public-opinion survey during an unprecedented economic shutdown wrought by the coronavirus pandemic. The national unemployment rate had peaked in April at a seasonally adjusted 14.7%. State and local tax revenues were in free fall. The National Bureau of Economic Research announced that the nation had entered recession in February.

A dozen years ago, when the nation experienced a similar economic contraction at the outset of the Great Recession, public support for raising teacher pay and spending on education fell sharply. Support for increasing teacher pay dropped to 40% in 2009 from 54% in 2008, when we administered our survey at the peak of the housing bubble and just prior to the financial crisis that followed its rupture. Support for higher school spending also dropped by 14 percentage points, to 37% in 2009 from 51% in 2008. The downturn had long-lasting effects on public opinion: only in the past two years, after nearly a decade of steady economic growth, did support for increased public investment in education recover to match or exceed pre-crisis levels.

Will the Covid-19 recession have similar consequences? If so, they had not materialized as of mid-May, when the survey was conducted. Indicators of support for higher teacher salaries and more spending did tick downward in 2020 compared to last year. These changes are small, however, and often within the survey’s margin of error. School systems will undoubtedly face heightened competition for resources in the years to come. Yet the schools seem to maintain the public’s backing as that struggle begins.

Figure 1: Support Slips for Increasing Teacher Pay

Teacher salaries. We asked all survey respondents whether they thought that salaries for public-school teachers in their home state should increase, decrease, or stay about the same. As in past years, before asking this question we first told a random half of respondents what teachers in their state actually earn. Among those provided this information, 55% say teacher salaries should increase—essentially even with the 56% who gave that response last year (see Figure 1). Thirty-nine percent of “informed” respondents say teacher salaries in their state should remain about the same, while just 7% say they should decrease. In short, the share expressing support for increasing teacher pay is up 19 percentage points since 2017 and nearly as high as it has been at any point since 2008, when we first surveyed the public on the issue.

More Democrats than Republicans favor increasing teacher salaries, and that divide appears to have widened modestly over the past year. Support rose to 66% this year from 64% in 2019 among Democrats, and fell to 40% from 43% among Republicans. Meanwhile, teachers are even more convinced about the merits of increasing their own salaries, with 81% of them registering support, up from 76% last year.

Among those who are not first informed of what teachers currently earn, an even larger proportion of the public favors increasing teacher salaries. Sixty-five percent of this segment say that salaries should increase, 30% say they should remain the same, and 5% say they should decrease. These numbers reflect a modest 5-percentage-point decline in support since 2019, when 70% of “uninformed” respondents supported an increase. The higher level of endorsement for boosting teacher salaries among the “uninformed” respondents reflects the fact that most Americans believe that teachers are underpaid and earn far less than they actually do. When asked to estimate average annual teacher salaries in their state, respondents’ average guess came in at $42,816—30% less than the actual average of $61,018 across the participants in our survey.

School Spending. We also asked survey respondents whether government spending on public schools in their local district should increase, decrease, or stay about the same, after first informing a randomly chosen half of respondents about current spending levels. Forty-five percent of those given this information say that spending should increase. This represents a 5-percentage-point decline since last year, but still leaves support up 6 percentage points over 2017. Forty-six percent of "informed" respondents say that spending on their local schools should stay about the same, while 10% say that spending should decrease.

As in the case of teacher salaries, the partisan gap in support for spending more on local schools widened a bit in the past year. Support for higher spending fell by just 3 percentage points among Democrats, to 56% in 2020 from 59% in 2019. Meanwhile, support dropped to 31% from 38% among Republicans, increasing the partisan gap to 25 percentage points. Fifty-nine percent of teachers favor spending more on their local schools, a 3-percentage-point increase since 2019.

Americans continue to underestimate dramatically what the government already spends on their local schools. On average, respondents to our survey guessed $8,140 per pupil annually—44% less than the average $14,504 actually spent.

Consistent with this, respondents not given information on current spending are more enthusiastic about spending more. Among the “uninformed,” 59% favor a boost in spending, 34% say it should stay about the same, while just 7% think it should decrease. The 59% of the public supporting an increase represents a modest decline of just 3 percentage points since 2019.

Figure 2: A Racial Divide in Support for School Spending

Support by race and ethnicity. The question of investing more money in the schools clearly divides respondents along racial and ethnic lines. While strong majorities of Black and Hispanic respondents support raising teacher salaries and spending more on their local schools, white respondents are less enthusiastic (see Figure 2). For example, 63% of Black respondents who are told current spending levels favor an increase, as do 55% of Hispanic respondents. The corresponding figure for white respondents is just 39%. Similar differences emerge among “uninformed” respondents on both school spending and teacher pay.

 

Figure 3: Major Gains for Online High-School Coursework

 

Online Education and Homeschooling

Online schooling. Americans’ openness to online education has increased in recent years (see Figure 3). In 2009, 56% of American parents said they would be willing to have their child take some academic courses online during high school. This share edged up somewhat to 61% in 2010 and remained at that level in 2013. In 2020, however, 73% of parents say they are willing to have their child take some high school courses online—a 17-percentage-point jump since 2009. The growth in approval for online schooling for secondary school students is even more dramatic among the public as a whole, for whom the share willing to have a child take such courses rose to 71% from 54% during the past 10 years.

We also gauged the extent of this support by asking how many courses a high school student should be allowed to take for credit online. Typically, students must complete 24 courses in high school to graduate. On average, Americans say that high school students should be allowed to take 11 courses online. This is a 22% increase from the average response of 9 courses in 2017. The pattern is identical when focusing on parents only—an increase to 11 courses from 9.

Does Americans’ growing support for online schooling reflect their recent experiences when most schools closed amid the pandemic? Our data do not offer conclusive evidence that these experiences changed attitudes, but those parents who reported more-positive experiences during school closures are more likely to support online schooling.

The 2020 Education Next survey included a battery of questions asking parents of children in kindergarten through 12th grade whose schools closed during the pandemic about their children’s experiences during the closure (see “What American Families Experienced When Covid-19 Closed Their Schools”). According to these parents, 88% of students primarily participated in their school’s remote instruction or activities on a computer, tablet, or similar device (as opposed to in other ways, such as using workbooks or worksheets). Among parents whose children primarily participated digitally in instruction during the closure, those who report more satisfaction with this instruction also express greater willingness to have their child go through high school taking some academic courses online. Among the top quartile in satisfaction, 85% are willing to have their children take such courses; among the bottom quartile, 58% are willing.

Furthermore, the parents who were least satisfied with instruction during the closure say high school students should be allowed to take 9 online courses, on average, for graduation credit, while the most-satisfied parents say high school students should be able to take 11 courses for such credit.

Homeschooling. Support for homeschooling has remained stable in recent years. Approximately half (49%) of Americans are in favor of allowing parents to educate their children at home instead of sending them to school; 35% of Americans say that they oppose that practice. In hopes of gauging what people think about homeschooling generally (as opposed to having children learn from home during a pandemic), we specifically referred to “ordinary circumstances when schools are open” when asking this question in 2020. The 49% approval share is statistically indistinguishable from the 45% who supported homeschooling when we last asked about it in 2017. The share of Americans who think parents should be required to receive approval from their local school district to homeschool their children was 54% in 2017 and is 54% in 2020. The share of Americans who support requiring parents to notify their local school district if they intend to homeschool their children was 73% in 2017 and is 70% in 2020.

 

Figure 4: Support Drops for the Common Core and Shared State Standards

School Reform

Common Core. In every annual survey since 2014, we have conducted a simple experiment to understand public attitudes toward the Common Core State Standards. We ask some respondents about their views on the Common Core, while other respondents receive the same question about a generic set of national math and reading standards. After rising to 50% in 2019, support for the Common Core (explicitly named) has dipped again to 43% (see Figure 4). This level of support aligns more closely with the results from the three years preceding 2019, perhaps suggesting that last year's uptick was the result of chance rather than representing a true change in public opinion. An alternative explanation for this year's decline in approval is that language in the question noting that the standards would be used to "hold public schools accountable for their performance" may have been a turn-off, given people's awareness of the challenges schools have faced during the pandemic.

Among subgroups, neither teachers (37%) nor Republicans (36%) are particularly enthusiastic about the standards. Black (54%), Hispanic (52%), and Democratic (49%) support remains somewhat more robust. When we ask about generic national standards—the Common Core “brand” holds negative connotations for some people—support among the general public rises to 53%, but this too is a substantial decrease from the 66% favorability we observed last year.

School choice. Support for school choice reforms either holds steady or declines modestly from last year. Support for “the formation of charter schools” among the general public has slipped to 44% from 47% last year. Opposition to charter schools has also declined, however, to 37% from 40%. Meanwhile, the share of respondents who say they neither support nor oppose charters has grown to 19% from 13%. One way of looking at this is that more than 60% of people either support charter schools or may be open to persuasion on the topic.

 

Figure 5: Support for School Choice Varies along Party Lines

Democrats and Republicans remain sharply divided on the issue of charter schools (see Figure 5). Though charter schools have enjoyed steady support from presidents of both parties, they are now supported by only 37% of Democrats compared to 54% of Republicans.

 

Figure 6: More Black and Hispanic Respondents Support School Choice

Among the Black community, only 48% now favor charter schools, down from 55% in 2019 (see Figure 6). Yet opposition to charters has also declined, to 27% from 30% last year. One in four Black respondents now say they neither support nor oppose charter schools, perhaps reflecting continuing debate on the issue among civil rights groups. Hispanic respondents favor charter schools by a 45% to 32% margin, with 22% taking a neutral stance. White respondents are more evenly split, with 44% in support and 39% opposed.

Compared to charters, publicly funded vouchers to attend private schools command slightly higher support from adherents of both parties and from the public as a whole. A slim majority (51%) favor a “universal” voucher program that extends the benefit to all families with children in public schools—and 48% support offering vouchers exclusively to low-income students. Vouchers continue to draw strong approval from Black respondents (60% for universal vouchers; 65% for low-income vouchers) and Hispanic respondents (62% for universal vouchers; 59% for low-income vouchers). Universal vouchers are more popular among Republicans than Democrats (56% to 47%), but the reverse is true of vouchers targeted to low-income students (45% to 52%). Perhaps surprisingly, neither type of voucher program proves to be as polarizing as charter schools.

Tax credits to subsidize donations to private-school scholarship funds—which recently received a strong boost from the U.S. Supreme Court’s decision in Espinoza v. Montana Department of Revenue—continue to be the most popular school-choice reform investigated in our survey. Fifty-seven percent of the general public favor such credits. Black (68%) and Hispanic (70%) Americans form key constituencies of this policy tool. Tax credits also span the political divide, commanding majorities from adherents of both parties (56% of Democrats and 59% of Republicans).

Teacher policies. Merit-based pay, the practice of compensating teachers in part based on how much their students learn, garners support from 47% of the public—identical to last year and statistically indistinguishable from the two years prior. Parents are somewhat more skeptical of this approach, with 42% in support. Meanwhile, only 15% of teachers favor structuring their pay along these lines. Merit pay maintains majority support among Republicans (55%), compared with only 41% support among Democrats.

Coinciding with the teacher strikes for higher pay over the last few years, positive views of teachers unions rose to 44% in 2019 from 32% in 2016. In 2020, this enthusiasm ticked downward slightly, to 41%. Teachers themselves are considerably more likely to say that their unions have a positive effect on local schools (66%), but nearly one in five educators hold negative views of teachers unions (18%).

 

Figure 7: Partisan and Racial Divides on Free College

Higher education. During the Democratic presidential primary campaign, U.S. Senator Bernie Sanders called for making all public four-year colleges tuition-free. Many of his competitors advanced similar proposals for increasing college affordability. Last year’s Education Next survey, administered at the height of this debate, found that fully 60% of Americans agreed with Sanders’s position. Now that the Democratic nomination process is effectively over and the political conversation has shifted to other topics, support for free public college has slipped to 55% (see Figure 7). This proposal is considerably more popular among Black (76%) and Hispanic (75%) respondents than white (44%) respondents. Fully 74% of Democrats favor free public college, compared to only 29% of Republicans.

 

Figure 8: Sharp Divides on In-State Tuition for Undocumented Students

We also asked respondents whether they support allowing undocumented immigrants—including the "Dreamers" recently protected by the Supreme Court's decision in Department of Homeland Security v. Regents of the University of California—to be eligible for in-state college tuition rates if they graduate from a local high school. Forty-six percent of respondents say they favor this policy (see Figure 8). Support is notably higher among Hispanic respondents (73%) than among white respondents (37%). Democrats (66%) are also more favorably inclined toward granting in-state tuition rates to undocumented immigrants than are Republicans (22%).

 

Grading Schools, Colleges, and Universities

K–12 public schools. The Education Next survey has asked Americans to “grade” their public schools every year since 2007. Never before have our survey respondents been asked to make this assessment during a global pandemic that has shuttered nearly every school building, sending educators and families scrambling to piece together workable distance-learning options. How do Americans think their public schools performed during this extraordinary moment of upheaval?

Figure 9: Approval for Local and National Public Schools Near All-Time Highs

Despite the challenges, confidence in public K–12 schooling remains at a record-high level. Fifty-eight percent of respondents give their local public schools a grade of A or B (see Figure 9). This level of approval is only 2 percentage points lower than it was in 2019, when it reached the highest point in the history of the survey. Americans have long viewed their local public schools more positively than they do public schools in the nation as a whole. This holds true in 2020, but the opinion gap has narrowed. This year, 30% of respondents give the nation’s public schools a grade of A or B. This rating represents a 6-percentage-point increase from 2019 and is the highest level the survey has ever recorded, reducing the local-national opinion gap by 8 percentage points.

We also asked respondents to evaluate the quality of teachers in their local schools by estimating the percentage of teachers who fall into each of four categories: excellent, good, satisfactory, or unsatisfactory. In 2018, the last time our survey featured this question, respondents categorized 56% of teachers as either excellent or good and 16% of teachers as unsatisfactory. In 2020, the public's view of teachers ticks upward. Respondents categorize 61% of teachers as either excellent or good and 14% as unsatisfactory. Views about teacher quality are generally consistent across racial/ethnic, economic, and political lines. Only teachers themselves place a meaningfully larger proportion into the excellent or good categories (71%) and a smaller proportion into the unsatisfactory category (9%).

Does this robust confidence in the schools reflect stalwart support for the institution of public education, or does it perhaps indicate something else—an enthusiasm for the alternative modes of learning that the pandemic has suddenly introduced to a larger audience? Our data suggest that both interpretations may contain an element of truth. From our analysis of parents who experienced school closures, we know that satisfaction remains strong despite widespread perceptions that students are learning less at home than they would have if schools were open as usual. At the same time, we observe that support for online learning is also up. What is overwhelmingly clear, however, is that Americans’ views of their public schools are undimmed by the unprecedented challenges wrought by the school closures.

Colleges and universities. In the early days of the coronavirus crisis, prominent universities like Harvard and Stanford made headlines by closing their campuses and sending students home to finish the semester online. Nearly all colleges and universities eventually followed suit, resulting in a massive interruption of the college experience of students across the country. The survey allows us to discover how attitudes toward institutions of higher education have shifted in the wake of the pandemic.

Last year, we introduced a new set of questions asking respondents to evaluate four-year colleges and universities in the same way they evaluate elementary and secondary schools for the survey. Respondents are randomly assigned to receive one of four possible versions of the question, focusing on either public or private institutions and either institutions in their state or in the nation as a whole. As with K–12 education, Americans hold more positive views of nearby institutions than those in the nation as a whole. However, these views may be converging. The percentage of respondents giving in-state public colleges and universities an A or B dropped to 69% in 2020 from 78% in 2019. The analogous response for in-state private colleges and universities fell to 74% from 79%. Meanwhile, for institutions of higher education nationwide, the pattern reversed. The proportion of respondents giving an A or B to public and private colleges and universities in the nation as a whole both rose by 4 percentage points this year to 62% and 70%, respectively. Since this time last year, Americans appear to have grown more critical of in-state institutions but more supportive of their counterparts around the country.

 

The Trump Effect

The Trump era has seen ever-widening fissures between members of America’s two major political parties. Just as previous Education Next surveys have measured how the contemporaneous president’s views might shape those of the electorate, we once again set about examining this topic—this time looking at the possible influences of both President Donald Trump and former President Barack Obama.

To assess these influences, we examined the results from a series of experiments in which we randomly divided our survey sample into groups who received slightly different versions of the same question. One group of respondents was simply asked to register an opinion on an issue, while another group was first told Trump’s position on it. We conducted these experiments on five topics: Common Core, charter schools, tax credits, merit pay for teachers, and in-state college tuition rates for undocumented immigrants who graduate from high school in the respondent’s state. Trump opposes the Common Core and allowing undocumented immigrants to be eligible for in-state college tuition rates, but he supports the other three policies. We conducted similar experiments with Trump’s views in 2017 as well as with former President Barack Obama’s positions in 2009 and 2010, allowing us to compare the persuasive effects of presidential views over time.

Figure 10: The Trump Effect

Generally, information about Trump's positions polarizes opinion—as it did in 2017 and as similar information about Obama did a decade ago. For example, on the question of charter schools, there is no overall difference in support between those who are informed of Trump's stance (45%) and those who are not (44%)—but this finding masks important crosscutting effects. Information about Trump's position suppresses support for charter schools among Democrats by 7 percentage points (to 30% from 37%) while boosting support among Republicans by 11 percentage points (to 65% from 54%). As a result, the 17-percentage-point gap between Republicans and Democrats without information about Trump's position doubles to a 35-percentage-point gap among those who receive this information (see Figure 10).

The same pattern holds for attitudes toward tax credits for donations to fund scholarships for low-income students to attend private schools. In this case, there is no discernible difference between Democratic support (56%) and Republican support (59%) when respondents are not informed of Trump’s support for this proposal. Information about Trump’s support decreases Democratic support to 46% and increases Republican support to 67%, opening a gap of 21 percentage points between parties. These shifts among partisans offset each other and leave no discernible change in overall opinion.

In other cases, information about Trump's position widens the gap between Democrats and Republicans by shifting opinion among followers of one party but not the other. For example, information about Trump's opposition to the Common Core standards reduces overall support by 7 percentage points, to 36% from 43%, but the drop is almost entirely among Republicans, whose support falls by 14 percentage points, to 22% from 36%. By contrast, Trump's opposition to the Common Core has no significant effect among Democrats. As a result, the gap between party adherents widens from 13 percentage points among respondents not told of Trump's position to 25 percentage points among those who are.

On the issue of merit pay, Trump’s support repels Democrats, whose approval drops to 23% from 41%. Yet, the president’s position has no significant effect among Republicans, 55% of whom support merit pay without information about Trump’s position and 59% of whom support it with this information. The partisan gap expands to 36 percentage points from 14 percentage points among respondents informed about Trump’s position. Among the population overall, this information depresses support to 40% from 47%.

On only one issue—a policy allowing undocumented immigrants to be eligible for the in-state college tuition rate if they graduate from a high school in the state—does information about Trump's position fail to move opinion: learning of his stance has no statistically discernible impact among Democrats, Republicans, or the public as a whole.

We conducted four of these experiments in the 2017 Education Next survey—for charter schools, tax credits, Common Core, and merit pay. With the exception of the Common Core (where Trump's position had no statistically identifiable effects for either party), the 2020 results parallel those from 2017, indicating that information about Trump's positions is as polarizing in the fourth year of his administration as it was in the first.

How does the Trump effect on public opinion compare to that which Obama exerted? In 2009 and 2010 surveys, we conducted a similar experiment with information about former President Barack Obama’s position on merit pay. In his first year in office, information about Obama’s support for merit pay boosted approval among both Democrats (+16 percentage points) and Republicans (+11 percentage points). However, by the next year, this information moved Democrats and Republicans in opposite directions. Democratic support for merit pay grew by 7 percentage points when they were told of Obama’s position, while Republican support dropped by 11 percentage points.

In the 2020 survey, we set out to compare the Trump effect and the Obama effect more directly. In addition to randomly assigning some respondents to be informed of Trump's position, we randomly assigned a third group to be told Obama's position and a fourth to learn both presidents' positions. The Obama effect is a mirror image of the Trump effect. Information about Obama's support for merit pay boosts support among Democrats by 9 percentage points but decreases Republican support by 20 percentage points. When respondents are told both Trump's and Obama's positions, the effect for adherents of each party closely resembles the effect we observe when respondents are told just the position of their co-partisan president. Democrats increase support by 8 percentage points, similar to their 9-percentage-point boost when they are told only Obama's position. The difference among Republicans is 5 percentage points, similar to the 4-percentage-point difference among them when they are told only Trump's position. In other words, it appears partisans tend to gravitate toward the position of a leader from their own party more than they move away from the position of a leader from the opposite party.

 

Populism and Education Policy

The American Revolution was a populist affair. Colonials tossed tea into Boston Harbor, erected liberty poles in town squares, and dragged a statue of King George III to the ground. The new nation’s first constitution, the Articles of Confederation, banned delegates from serving in Congress for more than three years out of six. That legacy has proved enduring. The Twenty-Second Amendment to the U.S. Constitution restricts presidents to two terms, and many states have placed term limits on governors, members of state legislatures, and even their federal Congressional representatives.

Political observers in both the media and academia have noted a rise of populist movements and leaders in American and global politics over the past decade. While definitions of populism differ somewhat, most pundits agree that it involves a belief that political leaders too often neglect the interests of “the people.” In the United States, the current rise of populism is most often associated with President Trump, but some observers also point to Democrats Bernie Sanders and Elizabeth Warren as populist voices. Is the rise of populism shaping the politics of American education?

To find out, we asked respondents whether they thought “elected officials should always follow the will of the people,” whether the “people, not elected officials, should make our most important policy decisions,” whether they “would rather be represented by an ordinary citizen than by an experienced elected official,” and three more such questions. To all of them, large majorities give an affirmative response.

Still, some respondents are more emphatically populist than others. A sizable share “strongly agree” with many of the items, while others give a more moderate “somewhat agree” response or say they neither agree nor disagree. A small percentage—often less than a quarter—disagree. We constructed a “populism scale” based on the responses to these questions, and then divided respondents into four quartiles according to their rankings on that scale. This technique allowed us to look at how the opinions of the most- and least-populist respondents differ on various education-related topics.
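The scale construction described above can be illustrated with a brief sketch. This is a hypothetical reconstruction with simulated data, not the survey's actual items or code: it assumes six agree/disagree items scored 1 ("strongly disagree") through 5 ("strongly agree"), sums them into a populism score, and splits respondents into quartiles on that score.

```python
import numpy as np
import pandas as pd

# Simulate 1,000 respondents answering six agree/disagree items,
# each scored 1 (strongly disagree) to 5 (strongly agree).
rng = np.random.default_rng(seed=42)
items = pd.DataFrame(
    rng.integers(1, 6, size=(1000, 6)),
    columns=[f"item_{i}" for i in range(1, 7)],
)

# Sum the six items into a single populism score (possible range: 6 to 30).
items["populism_score"] = items.sum(axis=1)

# Divide respondents into four quartiles on that score.
items["quartile"] = pd.qcut(
    items["populism_score"],
    q=4,
    labels=["least populist", "second", "third", "most populist"],
)

# Opinions of the top and bottom quartiles can then be compared directly,
# e.g. items.groupby("quartile")["some_opinion_item"].mean()
# (where "some_opinion_item" stands in for an education-policy question).
```

The choice of quartiles (rather than, say, a median split) mirrors the article's comparison of the "most-populist" and "least-populist" groups.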

Who are the populists?

The demographics of populism are in many ways consistent with conventional thinking about this emerging political force in American politics. The older, less-educated, lower-income, Evangelical segments of the population are more likely to be suspicious of elected officials than the younger, more educated, affluent, and secular segments. Only 45% of those under the age of 30 rank above the median of the distribution on the populism scale, as compared to 59% of those over the age of 60. Fifty-four percent of those without a bachelor's degree rank above the median, while only 43% of those with a college degree share that degree of populist fervor. Similarly, only 46% of those who report an income of $75,000 or more rank above the median on the populism scale, while 51% of those who report incomes below $25,000 and 56% of those who report incomes between $25,000 and $75,000 are found in the top two populist quartiles. Fifty-five percent of those who say they have been "born again" are populist, as compared to 48% of those who do not report that religious experience.

Some demographic segments, however, defy the conventional portrait. For one thing, ethnic differences are not great, and do not necessarily occur in the direction one might predict. Hispanic Americans are more populist (55% rank above the median) than white Americans (50%) or Black Americans (48%). Even more surprising is the fairly modest correlation between political ideology and populism. While it is true that 57% of conservatives score above the median in their populist orientation, so do 43% of liberals. And even though 56% of Republicans fall into the more populist categories, so do 46% of Democrats. In other words, populism seems to form its own distinctive brand. While it overlaps Republican conservatism, its principles do not wholly align with the conventional fault lines of party and ideology. Finally, it is incorrect to view populists as alienated outsiders who want nothing to do with contemporary political battles. On the contrary, populists are just about as likely to be engaged in politics as others. While 51% of those who say they never or almost never participate in politics affirm a more populist position, so do 47% of those who say they are sometimes or often engaged in politics.

What do populists think about schools and school policy?

To see whether populists hold distinct views on policy, we compared the survey responses of those in the highest and lowest quartiles of the populist distribution. We found that the most-populist group is more critical of schools, thinks less well of Common Core, and is more in favor of school choice. In other words, populists are applying their more general political beliefs to the educational issues of the day.

Figure 11: Populist Americans Hold Distinctive Views

Populists are less happy with public schools. The Education Next survey asked respondents to assign grades of A to F to public schools in the nation as a whole. Among those in the most-populist quartile, only 23% give schools a grade of A or B, while 31% assign them a D or fail them. In the least-populist quartile, 35% award the nation's schools one of the top two grades, while only 13% stick them with one of the worst two (see Figure 11). Populists give higher grades to schools in their local communities, but even in this instance populists are more critical than others. Among those in the most-populist quartile, 50% of respondents award a grade of A or B, while 19% give them a D or an F. By comparison, 61% of those in the least-populist quartile give schools in their community one of the two highest grades, while only 10% find them worthy of no better than a D. The conventional wisdom that populists are unhappier with the status quo seems vindicated.

Populists support school choice. Since populists are less happy with the state of public schools nationwide, they might be expected to support school choice. To see if that is so, we looked at responses to questions related to vouchers, charters, tax credits, and homeschooling. In every case, we found that the most-populist quartile is much more supportive of greater school choice than the least-populist quartile. The differences in support are 21 percentage points for charter schools, 17 percentage points for tax credits for scholarships for low-income students, 16 percentage points for allowing parents to educate their children at home, 15 percentage points for vouchers for all, and 12 percentage points for vouchers for low-income students. Among respondents who are told that Trump favors charters and tax credits, the differences enlarge only marginally. In other words, populists seem willing to translate their ideals into actions. Just as they want the people to govern, so are they more likely to think people should have options when it comes to their children's education.

Finally, on all the choice questions, the least-populist respondents were the most likely to take a neutral position, saying they neither favored nor opposed a policy. The average difference between the least-populist and the most-populist groups on all these questions was a sizable 12 percentage points. Those who are less likely to insist that elected officials respond to the will of the people are also less likely to take one side or the other on educational issues. Just as populists are consistent in thinking that the people should be in charge, those who are not populist are more willing to let others make the call.

Populists register more opposition to Common Core standards but more support for generic national standards. If populists distrust elites, they can be expected to oppose reforms they perceive as mandates from higher tiers of government. To see if this thesis applies to populists’ views on education policy, we asked randomly assigned respondents one of three variations on a question about common academic standards. The first version inquired about Common Core standards. Surprisingly, 42% of those in both the most-populist and least-populist quartiles say they favor the standards. However, opposition to the Common Core is much higher among the most populist (47%) than among the least populist (32%). Further, the least populist are more likely to say they neither support nor oppose the Common Core (26% to 11%), still another example of their reluctance to take a position on one or another side of an issue.

In the second version of the Common Core question, another randomly chosen group of respondents was told that Trump opposes the Common Core. That information reduces levels of support for the policy to 35% among those in both the highest and lowest quartiles on the populism scale, but it has little effect on opposition to the policy. Forty-four percent of the most populist register disapproval, as do 34% of the least populist, about the same as responses to Common Core when the president’s views go unmentioned. Again, the least-populist respondents are more likely to choose the neutral position.

In the third version of the common standards question, the name “Common Core” is omitted while everything else in the question remains the same. Levels of support for common standards soar among the most-populist group to 61%, while those in the least-populist group remain more or less unchanged (at 46%). Either populists think that such policies are better left to the states, or the Common Core brand continues to be especially toxic for them. Once again, the least-populist group is more likely to hold a neutral position.

In sum, populism is an identifiable set of ideas that helps shape an individual’s policy positions. Populist sentiments are widespread, but those who hold them with the greatest intensity also view American schools more critically at both the national and local levels. They are more likely to support all forms of school choice, and they are more likely to disapprove of Common Core, but not of generic national standards. The least populist are more likely to say they neither support nor oppose a policy, a position that is consistent with their quite un-populist readiness to defer to the leadership of elected officials.

Michael B. Henderson is assistant professor at Louisiana State University’s Manship School of Mass Communication and director of its Public Policy Research Lab. David M. Houston is assistant professor of education policy at George Mason University. Paul E. Peterson is the Henry Lee Shattuck professor of government at Harvard University, director of Harvard University’s Program on Education Policy and Governance (PEPG), and senior editor of Education Next. M. Danish Shakeel is a postdoctoral research fellow at PEPG. Martin R. West, William Henry Bloomberg professor of education at Harvard University, is deputy director of PEPG and editor-in-chief of Education Next.

 

Survey Methodology

The data for this report come from the 14th annual Education Next survey, a series that began in 2007. Results from all prior surveys are available at www.educationnext.org/edfacts.

The survey was conducted from May 14 to May 20, 2020, by the polling firm Ipsos Public Affairs via its KnowledgePanel®. In its KnowledgePanel®, Ipsos Public Affairs maintains a nationally representative panel of adults (obtained via address-based sampling techniques) who agree to participate in a limited number of online surveys. Ipsos Public Affairs provides Internet access and a device to any participants in the KnowledgePanel® who lack them. Those who participated in the Education Next survey could elect to complete it in English or Spanish.

The total sample for the survey includes a nationally representative, stratified sample of adults (age 18 and older) in the United States (1,827), as well as representative oversamples of the following subgroups: parents of children in kindergarten through 12th grade (1,329), teachers (663), Blacks (811), and Hispanics (913). The total sample size for the 2020 Education Next survey is 4,291.

The completion rate for this survey is 49%. Survey weights were used to account for non-response and the oversampling of specific groups.

We report separately on the opinions of the general public, teachers, parents, Black respondents, Hispanic respondents, white respondents without a four-year college degree, white respondents with a four-year college degree, and self-identified Democrats and Republicans. We define parents as any respondent with a child under the age of 18 (1,677). We define Democrats and Republicans to include avowed partisans as well as respondents who say they “lean” toward one party or the other. In the 2020 EdNext survey sample, 54% of respondents identify as Democrats and 41% as Republicans; the remaining 4% identify as independent, undecided, or affiliated with another party.

We define teachers differently than in previous iterations of the survey. As in past years, we begin with an oversample of KnowledgePanel® members who work in a profession that Ipsos Public Affairs codes as teaching—a broad category that includes college professors, daycare teachers, and substitute teachers. However, this year we further screened this group by asking them which grades they teach. We define teachers as those who teach in at least one grade from kindergarten to 12th grade. This yields 523 K–12 teachers.

In general, survey responses based on larger numbers of observations are more precise, that is, less prone to sampling variance, than those based on groups with fewer numbers of observations. As a consequence, answers attributed to the national population are more precisely estimated than are those attributed to groups (such as teachers, Black respondents, or Hispanic respondents). The margin of error for binary responses given by respondents in the main sample in the survey is approximately 1.5 percentage points for questions on which opinion is evenly split. The specific number of respondents varies from question to question, owing to non-response on items and to the fact that, for several survey questions, we randomly divided the sample into multiple groups to examine the effect of variations in the way questions were posed. The exact wording of each question is available at www.educationnext.org/edfacts. Percentages reported in the figures and online tables do not always sum to 100, as a result of rounding to the nearest percentage point.
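The approximately 1.5-percentage-point figure follows from the standard formula for the sampling margin of error of a proportion, which is widest when opinion is evenly split. A minimal sketch in Python, assuming simple random sampling over the full sample of 4,291 respondents and ignoring survey weights and design effects (which in practice widen the interval somewhat):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a sample proportion.

    Assumes simple random sampling; the interval is widest at p = 0.5,
    which is why survey reports quote the evenly-split case.
    """
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Full 2020 sample of 4,291 respondents:
print(round(margin_of_error(4291), 1))  # → 1.5

# Smaller subgroups are less precise, e.g. the 523 K-12 teachers:
print(round(margin_of_error(523), 1))  # → 4.3
```

The same formula shows why estimates for oversampled subgroups carry wider error bands than estimates for the national population: the margin of error shrinks only with the square root of the number of observations.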

Information used in the experiments involving school-district spending and revenue was taken from the 2016–17 National Center for Education Statistics (NCES) Common Core of Data’s Local Education Agency Finance Survey for fiscal year 2017, version 1a, the most recent one available at the time the survey was prepared. Information used in the experiments involving state teacher salaries was drawn from the NCES Digest of Education Statistics, 2018 (Table 211.6), the most recent data available at the time the survey was prepared.

 

This article appeared in the Winter 2021 issue of Education Next. Suggested citation format:

Henderson, M.B., Houston, D.M., Peterson, P.E., Shakeel, M.D., and West, M.R. (2021). Amid Pandemic, Support Soars for Online Learning: Results from the 2020 Education Next survey of public opinion. Education Next, 21(1), 6-21.
