Vol. 18, No. 3 - Education Next
A Journal of Opinion and Research About Education Policy

The Case for Holding Students Accountable
How extrinsic motivation gets kids to work harder and learn more
May 15, 2018


Sometimes it seems as if we’ve tried everything to reform public education, yet nothing has worked to boost student achievement at scale. Through all these reform attempts, we have ignored one of the most promising catalysts for student success.

What is this magical, elusive factor?

Student effort.

As education economists John H. Bishop and Ludger Woessmann have put it, “Student effort is probably the most important input in the education process.”

The principle is simple: when students work harder, they learn more. In the United States, though, we don’t expect most kids to work very hard, and they don’t. For all of the talk about “raising standards” and implementing “high stakes testing,” the United States is an outlier among developed nations when it comes to holding students themselves to account, and linking real-world consequences to academic achievement or the lack thereof.

In this article, we look at the evidence that external motivation can encourage middle-school and high-school students to work harder and learn more. We then identify a number of state and local policies that could put constructive pressure on students to exert effort in their academics. Such policies include instituting external, curriculum-based exams linked to real-world consequences for kids; maintaining high standards for earning good grades; and experimenting with well-designed cash-incentive programs. We conclude by considering how student accountability and student agency might combine for an even more effective approach in the future.

Students as Stakeholders

It might seem obvious that students have the biggest stake in their academic success. Education is correlated with future income and important measures of quality of life, and it is the students themselves who will eventually reap the benefits of their efforts in school—or the costs of their indifference. But the operative word here is eventually. To many adolescents, the adult future feels far away, uncertain, and generally unrelated to mastering algebra, understanding the stages of mitosis, or identifying dangling participles.

When even adults debate the payoffs of academic learning, it should be no surprise that many students do not see the “real world” relevance of their schoolwork. But even when they believe in the value of academics, teenagers may still prefer to spend their energy on the more-compelling activities competing for their attention—friends, sports, afterschool jobs, Snapchat, video games, not to mention less-wholesome pursuits. Delaying gratification is hard for most anyone, but researchers have shown that young people are especially present-focused, averse to planning for the longer term and struggling to overcome the impulse to procrastinate. The education system puts students in a position where, as Alexandra Usher and Nancy Kober of the Center on Education Policy expressed it, the “costs are up-front . . . while the benefits are delayed and sometimes difficult to grasp.”

The question is, what might be done to motivate adolescent students to work harder? The optimistic—one might say unrealistic—answer is to make schools so engaging, and the student-teacher relationship so supportive, that adolescents will be intrinsically motivated to work hard, despite the other demands on their time and attention, and despite the social costs they might pay.

Yet it’s hard for policymakers such as governors, legislators, and even school board members to move the needle on students’ intrinsic motivation. They can try to do so indirectly, via initiatives to recruit and retain talented teachers, to implement high-quality curricula, or to include measures of student engagement in school accountability systems. But those are all bank shots at best.

Another approach—one that we believe is more realistic—is to hold students themselves accountable for their performance by ensuring that their work is tied to real consequences. This approach is based in research and used throughout much of the world. By giving students a greater and more immediate stake in their schoolwork and their learning, such student-accountability policies could bridge the gap between effort and reward.

Accountability Boosts Effort

The case for holding students accountable for their schoolwork and their learning has been undercut by the prevalent belief that incentives and other “extrinsic” motivators actually decrease student effort by eroding students’ intrinsic desire to learn. Psychologists in the 1970s discovered how extrinsic motivators could sometimes undermine intrinsic drive, and this idea has been widely popularized, most famously by Alfie Kohn’s 1993 book Punished by Rewards. Kohn and other education writers argued that incentives can backfire, bolstering their case with memorable anecdotes of daffy incentive initiatives, such as a Denver Planned Parenthood program’s offer to pay teenage girls a dollar a day not to get pregnant.

Yet these writers overstated the case against external motivators. The psychology literature never supported their blanket claims that “incentive plans cannot work,” as Kohn put it in the Harvard Business Review, and the conditions under which external motivators backfire are, according to a 1996 meta-analysis on the topic, “limited and easily remedied.” The evidence that external accountability lowers student motivation is mixed. Researchers found that external exams in Germany caused students to work harder, increased their performance, and made students more likely to want a job involving math, but the researchers also found that the exams negatively affected students’ enjoyment of math and feelings of competence. When Bishop examined the effects of high-school exit exams, one traditional form of external accountability, on intrinsic motivation, he asked whether students subjected to the exams read less for pleasure or were more likely to associate learning with rote memorization. He found no evidence that accountability undermined natural curiosity, and he even found some evidence of the opposite. The logic of Bishop’s finding is that systems that incentivize students to master academic material may in fact increase intrinsic drive, an unsurprising result for those of us who see learning as empowering.

Another way accountability can boost intrinsic motivation is by supporting pro-academic norms. As James Coleman observed as early as 1959, students often gang up to pick on the “curve raiser”: when students are graded on a curve relative to one another, those who work hard and raise the class average make things difficult for other students, who must then work harder for their grades (see “The Adolescent Society,” features, Winter 2006). This situation has been explored more recently by other social scientists, who have found that it can lead to social norms under which “nerds” are harassed and studious students of color are accused by their peers of “acting white” (see “‘Acting White,’” features, Winter 2006).

Smart student-accountability systems can help solve this problem—by setting high academic standards and, most crucially, by using external assessments to evaluate student progress. This means that policymakers may positively influence intrinsic motivation by optimizing student incentives, resulting in more pro-academic social norms as well as increased student interest and competence. In more recent years, behavioral economists have used experimental methods to better understand the connections between external motivation and human behavior and avoid the pitfalls Kohn and others have flagged. We discuss this further below, but behavioral economics has provided new experimental evidence that policymakers should be sensitive to the timing of accountability, ensure that positive incentives are not too small, and target students at the right ages.

And regardless of the interaction with intrinsic drive, external motivators can have powerful positive effects on student learning in their own right.

External Exams

Important evidence for the effect of student accountability on effort and achievement comes from the literature on curriculum-based external assessments. Several studies from the late 1990s and early 2000s support a strategy of using such external exams, showing that countries, Canadian provinces, and American and German states using content-based external exams for student accountability outperformed comparison jurisdictions, most likely because increased student stakes led to greater student effort. Yet such external exams have many forms and have not been equally successful in all contexts.

Substantial evidence from around the world has linked high-school exit exams to increased learning, but in the United States, where political pressures to relax graduation requirements have always kept the passing bar low, the evidence for their benefit has been inconclusive. Studies have variously found small positive effects, small negative effects, or, often, no effects. American researchers have also focused on whether such exams might induce students to drop out, with several studies finding greater dropout rates following the adoption of the exams.

Yet such pass-or-fail exams are not the only way to use external assessments to promote student accountability. In a recent paper, Anne Hyslop makes a case against the use of exit exams but argues that external assessments can be used in other ways to promote student accountability. In the past 20 years, many states have begun to require external end-of-course exams (EOCs) covering core subjects such as algebra, biology, and American history, often with consequences attached to a student’s performance. Some states have made passing the exams a condition for graduation, essentially turning them into exit exams, but others have increased the stakes for students instead by printing the EOC scores on student transcripts or factoring the scores into course grades. As with external exams in many other countries, EOC results here are typically reported in terms of specific performance thresholds (such as advanced, proficient, needs improvement) rather than as simple pass-or-fail grades, enabling clearer signals of academic performance. This more-nuanced form of signaling also increases the stakes for students, since it gives college admissions officers and potential employers additional information with which to evaluate candidates—an especially important factor in an era of grade inflation. While such a system is not yet mature in the United States, EOCs could form a powerful mechanism for student accountability if adopted on a broader scale.

The benefits of external assessments are clear for the students enrolling in Advanced Placement and other elite programs that are trusted by colleges in large part because they are externally validated. AP helps solve the “curve raiser” problem by setting an external standard that is not controlled by the teacher, and one that all students in a given class can potentially meet. AP exams are graded by faraway educators, and high scores can earn students valuable college credit. In a sense, this turns preparing for AP exams into a team sport, giving the nerds permission to study hard and crush the test. It also breaks down the pernicious “avoidance treaties” between teachers and students, which Arthur B. Powell of Rutgers University has warned about: that is, the tacit agreement in some high schools that teachers won’t expect much of students, and vice versa. Without bargaining among students or between the students and the teachers, no one has an incentive to lower standards.

Yet even with the expansion of the AP program in recent years, only about a third of American students take at least one exam, and less than a quarter pass at least one test with a score of three or higher. The promise of high-quality EOCs is to extend the benefits of external assessment, and its virtuous cycle, to many more teenagers.

And non-elite students may disproportionately benefit from smart student-accountability policies, such as EOCs combined with real stakes for the students. Since incentives and external motivators have the strongest impact on students with low initial intrinsic motivation, such programs will have an outsized impact on low-achieving students, whose intrinsic motivation is often lower.

Additionally, the power of strong signals of academic performance—enabled by meaningful grades and test scores—has greater importance for students trapped in low-performing schools. Without meaningful signals of achievement, these students can excel yet have difficulty distinguishing themselves from their peers. Research shows that minority students accrue greater premiums than white students do from educational credentials that signal high achievement, which means that watering down these signals through grade inflation, abolishing external exams, and lowering standards depletes a key resource for students from disadvantaged backgrounds. These students often lack the family connections and other advantages their more-affluent peers depend on, making academic signals even more important.

Each fall, high schools in Texas’s Garland Independent School District host pep rallies to recognize students passing their AP exams and earning checks through NMSI’s program.

Don’t Forget the Carrots

Requiring students to pass end-of-course exams is certainly an eat-your-broccoli approach to student accountability. Carrots are worth considering, too.

Take, for example, the College Readiness Program of the National Math and Science Initiative (NMSI). Offering substantial cash rewards to students and their teachers, the NMSI program has helped hundreds of thousands of students from low-income families succeed in Advanced Placement coursework. Cash incentives for students have a mixed record, with researchers generally finding greater effects when behaviors (such as reading books) rather than outcomes (such as end-of-year test scores) are incentivized. Yet robust evaluations of NMSI’s program, conducted by the economist Kirabo Jackson, show how incentivizing outcomes can powerfully affect both short- and long-term student outcomes, particularly when coupled with teacher support (see “Cash for Test Scores,” features, Fall 2008). In this case, teachers play an especially important role, because even if incentives increase student effort, their work will not bear fruit if the students don’t understand how to achieve the desired outcome.

Jackson’s evaluations of the NMSI program show that it increases college attendance by 4.2 percentage points while increasing college readiness as well as longer-term workforce outcomes. For some students, the effects are particularly strong: Hispanic students see an impressive 11 percent gain in earnings when exposed to the incentive program. Although pay-for-performance policies have often targeted teachers and administrators, NMSI’s program demonstrates that including the students themselves in such policies, if done right, can have game-changing effects.

Policymakers thinking of adopting cash incentive programs should take to heart the lessons of behavioral economics. One rule put forth by Bradley Allan and Roland Fryer in a 2011 white paper on education incentives is, “Don’t be cheap.” A distant incentive that amounts to pennies per hour for increased effort is more likely to make students indignant that their work is not being valued than to stimulate additional effort. Timing is also critical. While we want students to develop greater self-control and the ability to delay gratification, assisting them in the mastery of academic skills requires that we chop some tasks into smaller chunks and help students overcome procrastination by offering shorter-term rewards. To optimize these policies, education policymakers should continue to examine the latest from psychology and behavioral economics.

Lowered Expectations

While end-of-course exams and cash incentives carry great promise, other current “reforms” actually serve to discourage student effort. The most concerning trend is the push to reduce teachers’ authority to assign low grades for poor performance or late assignments. A number of districts nationwide have adopted “no zeroes” policies, banning grades lower than a 50 or 60 on any given assignment or exam, under the rationale that such low grades could make it mathematically impossible for students to recover. Several districts have also implemented “mandatory retake” policies, requiring that teachers allow students to retake exams or redo assignments if they receive a low grade the first time.

Perhaps the intentions behind these policies are pure, but they amount to the soft bigotry of low expectations when it comes to student effort and responsibility. Kids soon figure out that they can procrastinate on assignments or studying for exams without having to face the music, at least in the short term. Teachers lose a valuable tool for discouraging that kind of behavior and promoting effort and diligence. When schools expect less and less of students, we shouldn’t be surprised that students game the system.

Accountability and Agency

A focus on student effort and accountability may sound old-fashioned in an era when personalized, “competency-based education” is all the rage. But here’s the good news: the two go together like peanut butter and jelly.

Consider, for example, an experiment conducted by the behavioral economist Dan Ariely: in one of his courses, he set a different policy for turning in assignments in each of three class sections. One section of students could turn in their assignments at any point during the semester, including the last day; the second group had deadlines spaced across the term; and students in the third section had the option of pre-committing to deadlines of their own choosing—deadlines that, if missed, would result in consequences for the students. In that third section, where students could choose restrictions or absolute freedom, nearly all students chose some restrictions, voluntarily setting up consequences for themselves that enabled the instructor to hold them accountable. In other words, almost all the students with a choice opted for accountability that had teeth. And they were smart to do so, because it was those in the section with maximum freedom and no accountability for deadlines who performed worst on the class assignments. Middle-school and high-school students may sometimes require a more paternalistic approach, but Ariely’s experiment shows that accountability does not necessarily have to be imposed from the top down.

A promise of introducing new technology into classrooms is that it will customize and personalize a student’s experience, often by increasing her choice. Student accountability enables a kind of “loose-tight” management of students, by which they are afforded greater flexibility over how to acquire a set of knowledge and skills (loose) and held strictly accountable for their outcomes (tight). Giving students greater agency over their learning and allowing them to move at their own pace may boost student interest and allow students to learn more quickly and efficiently. But we shouldn’t naively assume that most students will put in the effort to make these new systems work without caring adults guiding them and holding them accountable. It’s telling that the darling of personalized-learning aficionados, Summit Public Schools, makes extensive use of the Advanced Placement program in its high schools (see “Pacesetter in Personalized Learning,” features, Fall 2017). The high standards, external exams, and incentives baked into the AP program provide effective mechanisms for holding students accountable for working hard and making progress.

Unfortunately, too many policymakers are moving schools in the wrong direction by removing the few tools, such as meaningful grading standards and high-quality end-of-course exams, that might encourage more student effort.

Students benefit from accountability, and, given the right circumstances, they choose it. As reformers and entrepreneurs seek new applications of technology and innovative models of instruction to revolutionize education systems, schools must reassess their comparative advantages. In their roles as academic-community builders and the gatekeepers of credentials, school leaders should embrace the responsibility of holding students accountable.

Adam Tyner is associate director of research at the Thomas B. Fordham Institute. Michael J. Petrilli is president of the Thomas B. Fordham Institute and executive editor of Education Next.

This article appeared in the Summer 2018 issue of Education Next. Suggested citation format:

Tyner, A., and Petrilli, M.J. (2018). The Case for Holding Students Accountable: How extrinsic motivation gets kids to work harder and learn more. Education Next, 18(3), 26-32.

Rating Teacher-Preparation Programs
Can value-added make useful distinctions?
May 8, 2018


Recent policies intended to improve teacher quality have focused on the preparation that teachers receive before entering the classroom. A short-lived federal rule would have required every state to assess and rank teacher-preparation programs by their graduates’ impact on student learning. Though the federal rule was repealed, last year some 21 states and the District of Columbia opted to rank teacher-preparation programs by measures of their graduates’ effectiveness in the classroom, such as their value-added scores.

But what does the research say? Do teachers from different preparation programs differ substantially in their impacts? Can outcomes like student test performance reliably identify more or less effective teacher-preparation programs?

To address these questions, we re-analyzed prior evaluations of teacher-preparation programs from six locations: Florida, Louisiana, Missouri, Texas, Washington State, and New York City. We found negligible differences in teacher quality between programs, amounting to no more than 3 percent of the average test-score gap between students from low-income families and their more affluent peers. Differences between programs were negligible even in Louisiana and New York City, where earlier evaluations had reported substantial differences and fueled the push for program accountability.

Most differences between programs would be too small to matter, even if we could measure them accurately. And we can rarely measure them accurately. The errors we make in estimating program differences are often larger than the differences we are trying to estimate. With rare exceptions, we cannot use student test scores to say whether a given program’s teachers are significantly better or worse than average. If policymakers want to hold preparation programs accountable for the quality of their graduates, there may be better ways to do it.

A Push for Accountability 

Four days before the 2016 election, the U.S. Department of Education (DOE) issued a regulation requiring every state to publish an annual “report card” on the quality of its teacher-preparation programs. Report cards would rate programs by their outcomes, such as graduates’ impacts on student performance on standardized tests, rather than program characteristics like curriculum and faculty credentials. Programs would be assigned one of four performance categories: low-performing, at-risk of being low-performing, effective, or exemplary. The report cards would be published on the Web. Like college ratings, they would provide feedback to preparation programs, help prospective teachers choose among programs, and help schools and districts evaluate job applicants from different programs. Programs persistently rated as low-performing would lose eligibility for federal TEACH grants, which provide $4,000 per year to students who train and then teach in a high-need subject or a high-poverty school.

The regulation was part of a larger plan to improve teacher recruitment and preparation nationwide, inspired by widespread concerns about the quality of teacher-training programs (see “21st-Century Teacher Education,” features, Summer 2013). Released in 2011, the plan won early support from some program providers, unions, and advocates. But when the specifics of the regulations were published in draft form in October 2016, they were criticized by congressional Republicans and union leaders as an example of burdensome federal overreach. President Randi Weingarten of the American Federation of Teachers said the regulation was fundamentally misguided. “It is, quite simply, ludicrous,” she said, “to propose evaluating teacher preparation programs based on the performance [test scores] of the students taught by a program’s graduates.”

The regulation was never implemented. In early 2017, after Republicans regained the White House, the rule was repealed by Congress. At a public signing ceremony, President Trump declared the repeal had removed “an additional layer of bureaucracy to encourage freedom in our schools.”

However, report cards on teacher-preparation programs remain a live policy at the state level. In Louisiana, the practice dates back more than a decade; evaluators began to collect data in 2003–04 and first published a report card that named individual programs in 2008. In 2010, 11 states and the District of Columbia received funding to develop program report cards as part of their federal Race to the Top grants. By 2017, according to the National Council on Teacher Quality, 21 states and the District of Columbia were “collect[ing] and publicly report[ing] data that connect teachers’ student growth data to their preparation programs.”

Looking for a Research Base

On what did states and DOE base their decision to require report cards? Research comparing teacher-preparation programs has produced inconsistent results. Some research, from Louisiana and New York, claimed that differences between teacher-preparation programs were substantial. Other research, from Missouri and Texas, claimed that the differences between teacher-preparation programs were minuscule, and that it was rarely possible to tell which programs were better or worse.

In its 129-page regulation, DOE spent less than a sentence acknowledging—and dismissing—inconsistencies in the research. “While we acknowledge that some studies of teacher preparation programs find very small differences at the program level … we believe that the examples we have cited above provide a reasonable basis for States’ use of student learning outcomes” to evaluate teacher-preparation programs. It is unclear why officials at DOE dismissed research that didn’t support the idea of program rankings. It is also unclear why officials felt a need to issue a national regulation requiring all 50 states to rate teacher-preparation programs when research had not reached a consensus that rankings would be practical or useful.

In fact, in the public debate over the federal regulation, research carried no weight at all. Research had been published in academic journals and summarized in more popular outlets like Kappan and the Washington Post. Yet teachers’ unions did not cite research, and neither did members of Congress. Research went unmentioned in a 2015 Government Accountability Office report on teacher-training programs. When the DOE regulation listed 11 stakeholder groups that state governments must consult when specifying the data and analysis that would go into program report cards, neither researchers nor evaluators made the list.

Ranking Programs by Value-Added

Programs evaluated in a state report card may be “traditional” programs, in which a college student majors in education and completes student teaching to earn a degree and a teaching certificate. Or they may be “alternative” certification programs, which provide coursework and training to certify adults who already hold a bachelor’s degree in other subjects. Alternative programs are often run by school districts or nonprofits like Teach For America or The New Teacher Project, but the fastest-growing programs are run by for-profit corporations like Kaplan University or Teachers of Tomorrow.

A program that produces exceptional teachers may do so for different reasons. The program might provide excellent training that gives teachers the knowledge and skills they need to succeed in the classroom. Or the program could be very selective about the applicants that it accepts. State report cards don’t measure whether the teachers coming out of a program are good because of training or selectivity. As long as the program is putting effective teachers in the classroom, the report card will give it a positive review.

At least, that is what is supposed to happen. In principle, comparing the effectiveness of teachers from different programs sounds pretty simple. But in practice, there is a lot that can go wrong.

Let’s start with the simple part. Teachers are commonly evaluated by measuring their “value-added” to student scores on standardized tests. Value-added models begin by asking what students would be expected to score given their previous scores, poverty levels, and other characteristics. If students score above expectations, their teacher gets credit for the excess, and her value-added is positive. If students score below expectations, the shortfall counts against the teacher, and her value-added is negative.

To rank teacher-preparation programs, report cards average the value-added of teachers who have graduated from each program in the past few years. This approach to evaluating programs isn’t perfect, but it stands up to some common knocks. In criticizing the federal regulation, for example, Weingarten claimed that “the flawed framework . . . will punish teacher-prep programs whose graduates go on to teach in our highest-needs schools, most often those with high concentrations of students who live in poverty and English language learners.” But value-added models commonly adjust for poverty and English proficiency. And the federal regulation gave extra credit to programs that placed teachers in high-need schools.
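The two steps above—estimating value-added as the gap between actual and expected scores, then averaging those gaps by program—can be sketched in a few lines of Python. Everything here is invented for illustration: the program names, sample sizes, and a built-in +4-point advantage for program “B.” Real value-added models adjust for many more student characteristics than a single prior score.

```python
import random
import statistics

random.seed(0)

# Toy data: (program, prior_score, current_score) for each student.
# Programs "A" and "C" are identical; "B" gets a true +4-point effect.
records = []
for program in ("A", "B", "C"):
    for _ in range(200):
        prior = random.gauss(50, 10)
        bump = 4.0 if program == "B" else 0.0
        current = 0.8 * prior + 10.0 + bump + random.gauss(0, 8)
        records.append((program, prior, current))

# Step 1: predict current scores from prior scores (simple least squares).
priors = [p for _, p, _ in records]
currents = [c for _, _, c in records]
mp, mc = statistics.fmean(priors), statistics.fmean(currents)
slope = sum((p - mp) * (c - mc) for _, p, c in records) / sum(
    (p - mp) ** 2 for p in priors
)
intercept = mc - slope * mp

# Step 2: a program's value-added is its students' mean residual,
# i.e., how far they score above or below expectation.
value_added = {
    prog: statistics.fmean(
        c - (slope * p + intercept) for g, p, c in records if g == prog
    )
    for prog in ("A", "B", "C")
}

for prog in sorted(value_added, key=value_added.get, reverse=True):
    print(f"program {prog}: value-added {value_added[prog]:+.2f}")
```

With a strong built-in effect and only three programs, the ranking recovers program “B” easily; the trouble described next arises when true differences are small and the number of programs is large.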

The problem with ranking programs on value-added is not that the rankings are biased; the problem is that the rankings are almost random. Once random noise is sifted out of the rankings, the true differences between programs are usually too small to matter.

The Role of Randomness

We first looked at these issues in a 2016 study of 95 teacher-preparation programs in Texas. We ranked each program by estimating its teachers’ average value-added to math scores. The graph of rankings is seductive (see Figure 1). Once you see the graph, it’s hard not to think that the “best” programs—the ones that turn out the best teachers—are on the right, and the “worst” programs are on the left. You could even slice the graph into groups of programs that look as if they have similar quality, such as “effective” programs, “low-performing” programs, and “at risk” programs. That’s what the federal regulation would have required.

In fact, though, these programs are less different than they look. The differences that look so compelling in the graph are mostly random. There’s random error in student test scores; there’s random variation in the particular group of teachers who complete a program in a given year; there’s random variation in where those teachers end up working; and there’s random variation in how responsive their students are. These random factors vary from year to year, for reasons beyond a program’s control. So where a program falls in a given year’s rankings, and whether it moves up or down from one year to the next, is typically more a matter of luck than of quality.

It’s hard for almost everyone, even trained researchers, to appreciate how much the apparent differences between programs are due to random estimation error. We are often “fooled by randomness”—when we see a random pattern, we think it means more than it does.

To highlight the role of random error, we calculated the “null distribution,” or what the distribution of program rankings would look like if all the programs were actually identical and nothing but random estimation error were present. The null distribution looks an awful lot like our actual results: it is almost flat in the middle and flares at the ends (see Figure 2).

In fact, when we lay the null distribution over the Texas results, the fit is almost perfect (see Figure 3). Remember, the null distribution shows what program rankings would look like if they were entirely random. So the tight fit of the null distribution suggests that the rankings are, if not entirely random, then darn close. Even the programs that appear to stand out may stand out because of error. In fact, three quarters of the variation in Texas rankings—three quarters of the reason that one program ranks above another—is random chance. Only one quarter of the variation has anything to do with program quality.
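The “fooled by randomness” point is easy to demonstrate with a short simulation. The sketch below is a simplified stand-in for the null-distribution calculation, with an invented noise scale; it ranks 95 truly identical programs on nothing but simulated estimation error:

```python
import random

# Simplified null distribution: 95 identical programs (echoing the Texas
# count) ranked on pure noise. The 0.02 standard-deviation noise scale is
# an invented placeholder.
random.seed(42)
true_effect = 0.0  # every program is identical by construction
estimates = [true_effect + random.gauss(0, 0.02) for _ in range(95)]
ranking = sorted(estimates)

# Sorting pure noise yields the seductive shape of Figure 1:
# flat in the middle, flaring at the ends.
print(f"'worst' program: {ranking[0]:.3f}, 'best' program: {ranking[-1]:.3f}")
```

Rerun with a different seed and the “best” and “worst” programs change places, which is exactly what year-to-year program rankings do.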

When true differences are small and estimates are noisy, it is hard to single out specific programs as different from average. Here, too, it is easy to fool ourselves. The conventions of statistics accept a 5 percent risk of a “type 1 error”: singling out a program as “significantly different” when it is truly average. That risk might be acceptable in a state with just a couple of programs, but in Texas, where there are almost 100 programs, a 5 percent error rate ensures that we’ll erroneously label about five ordinary programs as exceptional. In fact, when we conducted our Texas evaluation, we found seven programs that were “significantly different” from average. Quite possibly five of these differences, or even all seven, were type 1 errors. Quite possibly just two of the programs, or none, were truly different.
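The arithmetic behind that expectation is simple: multiply the number of truly average programs by the error rate.

```python
# Expected number of false alarms when every program is truly average:
# programs tested times the conventional 5 percent error rate.
n_programs = 95  # roughly the Texas count
alpha = 0.05     # conventional type 1 error rate
expected_false_positives = n_programs * alpha
print(expected_false_positives)  # about five ordinary programs flagged
```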

A Six-State Review

After finishing our report card on Texas, we were a little confused. Our Texas results suggested there was little difference in effectiveness between teachers from various programs. Research from Missouri agreed. Yet there were reports from Louisiana and New York City suggesting larger differences. And there were reports from Florida and Washington State that we wanted to look at more closely.

Adding to the confusion, in each state researchers had compared programs using different statistical methods. So when researchers reached different conclusions, we couldn’t be sure if it was because of their programs or because of the methods used to compare them.

To clear things up, we re-analyzed the results from different states using a uniform set of statistical best practices. When we did that, we found that results from different states were actually very similar. In every state, the differences between most programs were minuscule. Having a teacher from one program or another typically changed student test scores by just .01 to .03 standard deviations, or 1 to 3 percent of the average score gap between poor and non-poor children.

Remarkably, these patterns held in every state we looked at—not just in Missouri and Texas, where program differences were already thought to be negligible, but also in Louisiana and New York City, where larger differences had been reported previously. For example, when we re-analyzed estimates for the 15 largest teacher-preparation programs in New York City, we found no significant differences between programs (see Figure 4). The estimates hewed very close to the null distribution, suggesting that little but estimation error was present. Similar patterns also held in Florida and Washington.

Why Ranking Programs on Value-Added Won’t Work

The differences between programs are typically too small to matter. And they’re practically impossible to estimate with any reliability. The errors that we make in estimation will often be larger than the differences we are trying to estimate. Program rankings will consist largely of noise, and program rankings will bounce up and down randomly from one year to another.

This means that we cannot rank programs in a meaningful order. And we cannot justify classifying programs by performance level (“effective,” “at risk,” etc.), as the federal regulation would have required. Statistically, at most one or two programs stand out from the pack in any given state. The other programs are practically indistinguishable.

None of this means that there are no differences between individual teachers. A large body of literature shows that some teachers are better than others, and that teacher quality can have meaningful effects on student success—not just on test scores, but also on graduation rates and even job success.

The problem is that the good teachers don’t all come from the same programs. The differences between good and bad teachers from the same program are much larger than the average differences between one program and another. So even if we could do a better job ranking programs, knowing what program prepared a teacher would give employers little guidance about how effective the teacher was likely to be.

We also don’t believe that all teacher-preparation programs are the same. Although the vast majority of programs are practically indistinguishable, there are exceptions—at most one or two per state, our results suggest—that really do produce teachers whose average impacts on test scores are significantly better than average.

For example, we know that Teach For America and UTeach both produce above-average teachers, although their effects are moderate in size and limited to math and science. But we don’t know that from state report cards. We know it from evaluations that focused specifically on UTeach and Teach For America.

Our results suggest there may also be an occasional program whose teachers are significantly worse than average. It could be valuable to look more closely at these rare outliers. But trying to rank other programs on value-added will just create confusion.

Should We Rank Programs in Other Ways?

It’s not helpful to rank a state’s programs by teachers’ value-added. With rare exceptions, the true differences between programs are so small that rankings would consist mostly of noise. But can we look at other measures of program quality? Student test scores are not the only way to evaluate programs. In fact, although the federal regulation required that no program be classified as “effective” unless its graduates had an exceptional impact on test scores, it also required that programs be evaluated using other indicators of quality.

One of those indicators was the ratings of a program’s graduates by principals or supervisors conducting teacher observations. However, we believe it is premature to require principal ratings in a formal ranking system. While principal ratings do vary across programs, research shows they are biased: in favor of teachers with advantaged students, and toward teachers whom the principal likes, or at least has evaluated positively in the past. Ratings by impartial outsiders are less biased, but teacher-rating forms still have a lot of room for improvement. While teacher observations remain a good topic for research, until observation forms get better they are not something that regulations should require or that states should use to rank programs.

The federal regulation also suggested reporting teachers’ ratings of their own preparation programs. Whether these ratings should be required is debatable. There is little research on teachers’ ratings of preparation programs, and there is a danger that some ratings may be noisy or biased. Still, prospective teachers may want to know what their predecessors thought of the training offered by a given program.

Finally, the federal regulation suggested tracking programs’ record of placing and retaining graduates in the teaching profession, especially at high-need schools. We think this is an excellent idea. If a large percentage of a program’s graduates are not becoming teachers, or not persisting as teachers, that is clearly a concern. Likewise, if a large percentage of graduates are persisting, especially at high-need schools, that is a sign of success. And placement and retention are straightforward to measure by linking program rosters to employment records. We favor reporting the percentage of program graduates who enter and persist in the field for which they were trained—not just for teacher-preparation programs, but for other college majors and training programs as well.

Paul T. von Hippel is an associate professor at the University of Texas at Austin and Laura Bellows is a doctoral student in public policy at Duke University. A detailed account of this analysis is available in the January 2018 issue of Economics of Education Review.

This article appeared in the Summer 2018 issue of Education Next. Suggested citation format:

Von Hippel, P.T., and Bellows, L. (2018). Rating Teacher-Preparation Programs: Can value-added make useful distinctions? Education Next, 18(3), 34-41.

The post Rating Teacher-Preparation Programs appeared first on Education Next.

A Disappointing National Report Card
https://www.educationnext.org/disappointing-national-report-card-2017-naep-results-editor-west/
Wed, 02 May 2018
What explains the disappointing results?

The post A Disappointing National Report Card appeared first on Education Next.


Once every two years, the world of K–12 education holds its collective breath as it awaits the latest results from the National Assessment of Educational Progress (NAEP), often referred to as the Nation’s Report Card. The 2017 data, comprising math and reading scores for students in grades 4 and 8, arrived this April—and the news was not good. Scores ticked up in 8th-grade reading but otherwise remained flat, continuing a period of stagnation that’s now persisted for a decade.

The flatline since 2007 is especially disheartening after a decade and a half of steadily rising scores. Gains around the turn of the millennium were most impressive in math. By 2007, 4th graders performed the equivalent of two grade levels higher and 8th graders performed the equivalent of one grade level higher than their counterparts in 1996 had. Black and Latino students made particularly encouraging progress, as did those at the bottom of the achievement distribution. In contrast, the 2017 results revealed a modest widening in the gap between low- and high-scoring students. They also confirmed that the surprising drop in performance evident on the 2015 NAEP, when scores fell on three of four tests, was no one-time blip but rather a real—and persistent—change.

What explains these disappointing results? Ample evidence indicates that the gains students registered in the 1990s and 2000s were driven in large part by the adoption of test-based accountability systems, first on a voluntary basis by some states in the 1990s, and then by the rest under No Child Left Behind. However, as Mark Schneider, the incoming director of the federal Institute of Education Sciences, argued in 2011, the adoption of test-based accountability appears to produce a one-time increment in student achievement but does not seem sufficient to launch schools on a new trajectory of ever-higher performance (see “The Accountability Plateau,” December 2011). The latest results strengthen Schneider’s case—and suggest that simply defending test-based accountability in the Every Student Succeeds Act era won’t be enough to resume upward progress.

And what of the drop in performance since 2013? Commenting on the latest results for Education Next, Kirabo Jackson points a finger at declines in education spending in the wake of the Great Recession (see “Interpreting the 2017 NAEP Reading and Math Results,” April 2018). Although temporarily propped up by federal stimulus funds, average per-pupil spending nationwide fell after 2010. Jackson notes that the students tested by NAEP in 2015 and 2017 were the first cohorts to have experienced the brunt of that decrease, a drop of roughly $300 in per-pupil spending on average over the four years prior to each test. He also highlights new research linking post-recession spending reductions in specific districts to declines in achievement. Simply spending more may not be a reliable strategy to improve student achievement, but there’s good reason to believe that recent cutbacks have been harmful.

In fact, the resources available to students have declined by more than per-pupil spending figures would suggest. Michael Podgursky and colleagues documented how district payments for pension benefits grew from roughly $800 per student in 2010, when spending levels began to fall nationally, to more than $1,200 in 2017—a 50 percent increase over just seven years (see “Pensions under Pressure,” features, Spring 2018). The bulk of this increase went to paying down debt on existing pension obligations, not to the direct costs of providing new benefits for current teachers. Such payments may be necessary, but they reap no benefit for today’s students—and could be one reason that teachers in several states have taken to the streets to protest stagnant pay.

Nor are pensions the only factor putting pressure on education spending. In this issue, Temple economist Doug Webber zeroes in on state higher-education spending, which has fallen substantially on a per-student basis over the past 30 years (see “Higher Ed, Lower Spending,” features). Webber combs through state budgets to ask a simple question—where has the money gone?—and finds a clear answer: Medicaid. States’ steadily growing obligations under the federal health-care program for low-income families and elders in nursing homes can explain the majority of the decline in higher-education spending. His conclusion: “constraining the rise of health-care costs is critical not just for those who care about health-care reform but for the public-higher-education landscape as well.”

Relative to higher education, K–12 schools have escaped such siphoning. Despite the recent decline, spending is still up substantially since 2005 and by an even greater amount over longer periods. But one wonders how long that will last, as the pressure from internal obligations and states’ competing commitments grows. That increasing tension may only heighten the challenge of putting American schools back on an upward trajectory.

— Martin R. West

This article appeared in the Summer 2018 issue of Education Next. Suggested citation format:

West, M.R. (2018). A Disappointing National Report Card. Education Next, 18(3), 5.


Higher Ed, Lower Spending
https://www.educationnext.org/higher-ed-lower-spending-as-states-cut-back-where-has-money-gone/
Tue, 01 May 2018
As States Cut Back, Where Has the Money Gone?

The post Higher Ed, Lower Spending appeared first on Education Next.


As high-school students plan for graduation and beyond, many families are counting on their local college or university as an affordable next step. But years of rising tuition costs have made that more financially challenging. The average annual net price of a four-year public college, after grants and scholarships, nearly doubled in inflation-adjusted terms, from $2,180 in 1997–98 to $4,140 in 2017–18. Including room and board, the average net price increased by $5,660 over this period, to $14,940—or nearly $60,000 for a four-year degree.

How did State U. get so expensive? A leading culprit is reduced state support. Since 1987, the typical student at a public college or university has seen the government subsidy for her education drop by $2,337, or roughly one quarter. And in prior research, I found that every $1,000 in state divestment leads colleges to raise tuition by about $300. But what explains falling state support?

In this analysis, I look at state spending decisions over the past 30 years to determine the relationship between state higher-education funding declines and increases in other categories. While we cannot account for every dollar of tuition increases, we can track state spending to see which programs are getting state and local tax dollars, and how that has contributed to declines in higher-education support.

I find that state and local public-welfare spending is easily the dominant factor driving budget decisions, with a $1 increase per capita associated with a $2.44 decrease in per-student higher-education funding—enough to explain the entire average national decline. In particular, my analysis finds that state Medicaid spending is the single biggest contributor to the decline in higher-education funding at the state and local level.

Some have argued that shifting costs to those pursuing a higher degree is appropriate, given that students are disproportionately from higher-income families and are likely to receive ample returns on their higher-education investments. But investing in higher education also benefits society as a whole, and weighing those competing claims falls to state policymakers. I take no position here on the “best” allocation of state funds but merely seek to document the tradeoffs facing state and local governments.

A Complex Funding Picture

Not all higher-education spending is equally productive, and there have been plenty of eyebrow-raising investments making headlines in recent years. Building a climbing wall, lazy river, or laser-tag area for students to enjoy in their downtime, for example, probably has a different impact than spending the same amount to hire more tenure-track faculty. But the weight of the evidence suggests that, at least on average, public spending on universities leads to both desirable outcomes for students and faster economic growth. College graduates are far more likely to be employed and earn, on average, $32,000 more per year than adults with only a high-school diploma. Degree holders also pay far more in taxes and cost less in public-welfare spending over their lifetimes, with bottom-line estimates ranging from $250,000 to $500,000 per person.

Still, in the past three decades, average state and local funding per enrolled student has dropped by one quarter, or $2,337 (see Figure 1). In 1987, states spent $9,489 per student enrolled in a public two- or four-year school, on average. By 2015 that figure had fallen to $7,152—a modest recovery from a recent low of $6,441 per student in the wake of the Great Recession in 2012. These data include state and local funding, both of which support general operating costs of public institutions. State funding, which comes from the legislature via the state budget, is more often directed toward four-year institutions, while local spending, typically a dedicated education tax, is directed to two-year institutions.

Despite the marked decline in funding per student, it isn’t completely accurate to say that states are spending less on higher education; in fact, total state and local spending increased by 13.5 percent (in inflation-adjusted terms) from 1987 to 2015 nationwide. The problem is that the student population increased far more rapidly than state spending during that time, growing by 57.4 percent. Student contributions to colleges’ and universities’ revenues have increased during this time as a share of the total. By 2012, students’ tuition accounted for 25 percent of public college and university revenues, compared to 30 percent from state and local funds.

So long as the primary costs and method of delivery in higher education are based on labor, per-student appropriations will be the correct way to assess the magnitude of public support. Although there are likely efficiency gains associated with increasing scale at small institutions, these institutions are overwhelmingly private and are not representative of where most students attend college. The majority of the new students in higher education over the past several decades attend large public institutions, which have long since exhausted their economies of scale. In other words, the marginal cost of educating a school’s 25,000th student is roughly the same as the cost of educating student number 35,000.

Moreover, the consequences for public higher education are the same regardless of whether the numerator or denominator of per-student funding is to blame for the decline. Schools are being asked to do the same thing they have been doing (or more, as is the case in states that have tied funding to performance measures) with less support. When enrollments are rising, however, the dilemma faced by state governments is even more difficult, as maintaining the same level of funding per student necessitates either raising taxes or reducing other types of expenditures.

Trends in Other State Spending

Each year during budget season, the challenge for state and local officials is to allocate their resources to the uses that will have the highest return on investment for society. While justifying investment in higher education is not difficult, its support has nonetheless shrunk, while average per-resident spending in other major categories has increased (see Figure 2).

The spending categories frequently cited as potential contributors to that decline are: K–12 education; public welfare (which includes most Medicaid spending, Supplemental Security Income, food stamps, and Temporary Assistance for Needy Families); health and hospitals; police and fire protection; and corrections. To assess the trends in these funding streams, I review data on state and local appropriations for higher education from the Integrated Postsecondary Education Data System (IPEDS), and data for all other spending categories from the Annual Survey of State and Local Government Finances.

Based purely on the national trends, spending on K–12 education and public-welfare programs would appear the leading factors: on average, K–12 education spending per state resident increased by 41 percent, from $1,378 in 1987 to $1,946 in 2015, and public-welfare spending nearly tripled, growing from $645 per resident to $1,930. Spending in other major categories also accelerated sharply during that time, though their relatively lower costs have been less consequential for state and local budgets overall: health and hospital spending increased 67 percent, from $465 to $777 per resident; police and fire protection grew 59 percent, from $284 to $450; and corrections grew 66 percent, from $134 to $222.

State and local higher-education appropriations as measured by the IPEDS data offer the best approximation of the resources provided to institutions for the purpose of furnishing educational services to the public, but they do not capture all transfers of resources between state and local governments and higher education. In particular, they do not include grants or contracts that are earmarked for specific services institutions provide, such as commissioned research projects, agricultural service from land grant institutions, or sponsored training programs. In addition, any spending by colleges and universities is typically classified as state spending because public institutions are an extension of government, even though much of their spending is funded by tuition.

This underscores the difficulty in describing the financial link between institutions and the government: not including grants and contracts understates the subsidy that students receive, but including them would certainly overstate the subsidy. Incorporating this category of transfers does not alter the conclusions of the analyses reported below, but it does increase the implied level of government spending on higher education by 10 to 20 percent.

Where Did Higher-Education Funding Go?

Did expansion in other funding categories “cause” the decline in higher-education spending? National data can’t answer that question. Just because one type of expenditure is trending upward nationwide, on average, while per-student higher-education spending is declining, it is not necessarily true that one “caused” the other. For example, it could be the case that higher-education spending is falling and K–12 spending is rising, but that states with the largest K–12 increases aren’t the same ones cutting higher-education spending.

To shed light on the actual tradeoffs states are facing, I use state-by-state data from 1987 to 2015 to measure the relationship between higher-education appropriations per student and expenditure levels in nine categories: K–12 education, public welfare, health, police and fire protection, corrections, highways and roads, utilities, sanitation, and interest payments on debt. I then use the estimated relationship between each expenditure category and higher-education support, combined with the actual changes over time in each group, to assess how much of the decline in appropriations for higher education is due to each category. I do not control for any other factors, such as the condition of the economy, because doing so would obscure the relationship between higher-education spending and any categories that tend to fluctuate with the business cycle.

In essence, this is an accounting exercise that takes advantage of the fact that each state’s budget is effectively a zero-sum game. The results should not be considered causal in the sense of using what happened in the past to predict future events. For instance, a finding of a strong relationship between spending on public welfare and higher education would suggest that money that was previously spent on supporting higher education was shifted to support welfare programs in years past. But it would not necessarily tell us what could happen next year if welfare spending were to increase again.

The results of my preferred analysis indicate that public-welfare spending in fact explains roughly half of the post-1987 decline in higher-education appropriations, with health accounting for another 23 percent (see Figure 3). Police and fire protection explain 13 percent of the decline, with another 11 percent from the other spending categories of corrections, highways and roads, utilities, sanitation, and interest payments on debt.

Across multiple changes to my methodology, public-welfare spending is always the dominant factor, accounting for between 53 percent and 100 percent of the decline in higher-education support. For example, looking at spending per capita within each category rather than total spending reveals that a $1 increase in per-capita public-welfare spending is associated with as much as a $2.44 decrease in per-student higher-education funding. Spending on health and on police and fire protection accounts for between zero and 20 percent of the decline in higher-education funding, depending on whether spending is measured on an overall or per-capita basis.
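To make the per-capita estimate concrete, here is a minimal sketch of the kind of bivariate slope being computed. The state-level changes below are invented and constructed so the slope reproduces the reported coefficient; the actual analysis pools 50 states over 1987–2015 and considers nine spending categories at once.

```python
# Invented state-level changes, constructed so the least-squares slope
# matches the reported $2.44-per-$1 figure. Not the study's data.
def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

welfare_change = [500, 1000, 1500, 2000]         # per capita, dollars
higher_ed_change = [-1220, -2440, -3660, -4880]  # per student, dollars

print(round(slope(welfare_change, higher_ed_change), 2))  # -2.44
```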

Although it is the spending category with the second-largest overall expansion nationwide in recent decades, K–12 education spending at the individual state level is not related to declines in higher-education support. In fact, state-level changes in K–12 expenditures are positively associated with changes in spending on higher education. This isn’t surprising, as governments that see education as a priority likely value both K–12 and higher-education spending. In addition to this positive relationship, K–12 funding is financed in roughly equal proportions between the state and local levels, while public money for higher education comes mostly from states. Thus, there is no evidence that spending on higher education has been displaced by spending on K–12 education.

There are two important caveats. First, these proportions represent averages across all states. In reality, there are 50 different stories to be told about state divestment from higher education. For example, in my home state of Pennsylvania, state funding per student has declined by nearly half since 1987, from $7,609 to $3,955, which is 56 percent more than the national average decline. On the other hand, state and local support has actually grown over the past three decades in six states: Connecticut, Mississippi, Nebraska, New Mexico, Oklahoma, and Wyoming.

This same caveat applies to changes in other expenditure categories as well. Some states have increased per-capita public-welfare spending only modestly, such as $647 per resident in Utah, while other states have significantly expanded it, such as a roughly $2,000 increase in Vermont. Although aggregate averages are useful for distilling national trends into component shares, every state is unique along every dimension.

The second caveat is that the expenditure categories used in my analysis are still quite broad. Public welfare, for instance, includes four major programs, from income assistance to Medicaid to food stamps. Further complicating matters, some types of Medicaid spending are categorized as “health.” Other programs, such as food stamps, are funded jointly with the federal government. While the federal government pays for the benefits, administrative costs are split evenly with the state. (My analysis only includes the state and local portions of spending on these programs.)

The large number of different programs funded by state and local governments makes it impossible to estimate the sources of divestment from higher education at a finer level than I have done above. However, given the importance of public-welfare and health spending indicated by my findings, and the large increase in state spending on Medicaid (an increase of more than $1,000 per capita since 1987 based on figures from the Centers for Medicare and Medicaid Services), it is safe to conclude that Medicaid has been the single biggest contributor to the decline in higher-education support at the state and local level.

That said, it is unlikely that the Medicaid expansions provided for under the Affordable Care Act (ACA) are responsible for much, if any, of the decline in state higher-education funding. In states that accepted the provision, ACA expanded Medicaid coverage to all individuals with incomes less than 138 percent of the federal poverty line. Up until this year, the federal government has paid 100 percent of the costs for newly eligible individuals and maintained 50 percent cost sharing for individuals eligible for Medicaid under pre-ACA rules. This means that increases in medical costs are likely more responsible for putting stress on state budgets than increases in Medicaid enrollment. This may of course change as states start paying part of the costs of new enrollees—the share of the cost for ACA-expansion Medicaid enrollees covered by the federal government is set to decline to 90 percent in 2020—or if the federal government changes the rules governing Medicaid cost sharing.

Conclusions

There has been a gradual decline in public financial support of higher education over the past 30 years. The average state spends $2,337 less today per full-time-equivalent college student than in 1987. This divestment has been passed on to students partly in the form of higher tuition and partly through reduced spending, both of which have been shown to negatively impact students. While the public discussion around college usually focuses on the price paid by students, recent work by economists David Deming and Chris Walters suggests that declines in the amount colleges and universities spend may have a larger impact on student outcomes than price increases do.

This essay asks a simple question: where did the money go? Money no longer spent on higher education must have gone somewhere, and the goal of my analysis is to produce the best possible estimates of where it went: the degree to which changes in different categories of spending explain changes in higher-education spending per student.

In reality, there are 50 different answers to this question, but in the aggregate, states have shifted most of their former investment toward public-welfare programs, particularly Medicaid. This finding highlights the struggle state legislatures face to balance the immediate needs of today against investments in the future. Most important, it illustrates that constraining the rise of health-care costs is critical not just for those who care about health-care reform but for the public-higher-education landscape as well.

But it is important to recall that state budgets result from complicated political processes, and that new financial pressures—such as from ailing pension funds or withdrawn federal social-service support—are constantly emerging. Even when we can answer the question, “Where did the money that used to support higher education go?” there is no reason to think that the politicians of tomorrow will make the same choices as the politicians of yesterday.

Douglas Webber is associate professor in the Temple University Department of Economics and a research fellow at the Institute of Labor Economics.    

This article appeared in the Summer 2018 issue of Education Next. Suggested citation format:

Webber, D. (2018). Higher Ed, Lower Spending: As states cut back, where has the money gone? Education Next, 18(3), 50-56.

Strengthening the Roots of the Charter-School Movement
https://www.educationnext.org/strengthening-roots-charter-school-movement-how-mom-and-pops-help-sector-diversify-grow/
Tue, 17 Apr 2018
How the mom-and-pops can help the sector diversify and grow

The post Strengthening the Roots of the Charter-School Movement appeared first on Education Next.


Over the past quarter century, charter schools have taken firm root in the American education landscape. What started with a few Minnesota schools in the early 1990s has burgeoned into a nationwide phenomenon, with nearly 7,000 charter schools serving more than three million students in 43 states and the nation’s capital.

Twenty-five years isn’t a long time relative to the history of public and private schooling in the United States, but it is long enough to merit a close look at the charter-school movement today and how it compares to the one initially envisaged by many of its pioneers: an enterprise that aspired toward diversity in the populations of children served, the kinds of schools offered, the size and scale of those schools, and the background, culture, and race of the folks who ran them.

Without question, the movement has given many of the country’s children schools that are now among the nation’s best of any type. This is an achievement in which all charter supporters can take pride.

It would be wrong, however, to assume that the developments that have given the movement its current shape have come without costs. Every road taken leaves a fork unexplored, and the road taken to date seems incomplete, littered with unanswered and important questions.

While the charter sector is still growing, the rate of its expansion has slowed dramatically over the years. In 2001, the number of charter schools in the country rose by 26 percent, and the following year, by 19 percent. But that rate steadily fell and now languishes at an estimated 2 percent annually (see Figure 1). Student enrollment in charter schools continues to climb, but the rate of growth has slowed from more than 30 percent in 2001 to just 7 percent in 2017.
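To put those growth rates in perspective, a quick back-of-the-envelope calculation (my own illustration, not from the article) shows how dramatically the sector's doubling time has stretched as annual growth fell from 26 percent to an estimated 2 percent:

```python
import math

def doubling_time(annual_growth_rate):
    """Years for a quantity to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# At the 26 percent pace of 2001, the number of charter schools would have
# doubled roughly every 3 years; at today's estimated 2 percent pace,
# doubling would take about 35 years.
fast = doubling_time(0.26)
slow = doubling_time(0.02)
```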

And that brings us to those unanswered questions: Can the charter-school movement grow to sufficient scale for long-term political sustainability if we continue to use “quality”—as measured by such factors as test scores—as the sole indicator of a successful school? What is the future role of single-site schools in that growth, given that charter management organizations (CMOs) and for-profit education management organizations (EMOs) are increasingly crowding the field? And finally, can we commit ourselves to a more inclusive and flexible approach to charter authorizing in order to diversify the schools we create and the pool of prospective leaders who run them?

In this final query, especially, we may discover whether the movement’s roots will ever be deep enough to survive the political and social headwinds that have threatened the chartering tree since its first sprouting.

One School, One Dream

Howard Fuller, the lifelong civil rights activist, former Black Panther, and now staunch champion of school choice, once offered in a speech: “CMOs, EMOs . . . I’m for all them O’s. But there still needs to be a space for the person who just wants to start a single school in their community.”

In Fuller’s view, one that is shared by many charter supporters, the standalone or single-site school, and an environment that supports its creation and maintenance, are essential if we are to achieve a successful and responsive mix of school options for families.

But increasingly, single-site schools appear to suffer a higher burden of proof, as it were, to justify their existence relative to the CMOs that largely set the political and expansion strategies for the broader movement. Independent schools, when taken as a whole, still represent the majority of the country’s charter schools—55 percent of them, according to the National Alliance for Public Charter Schools. But as CMOs continue to grow, that percentage is shrinking.

Examining the role that single-site schools play and how we can maintain them in the overall charter mix is not simple, but it uncovers a number of factors that contribute to the paucity—at least on the coasts—of standalone schools that are also led by people of color.

Access to Support

If there is a recurring theme that surfaces when exploring the health and growth of the “mom-and-pops”—as many charter advocates call them—it’s this: starting a school, any school, is hard work, but doing it alone comes with particularly thorny challenges.

“Starting HoLa was way harder than any of us expected,” said Barbara Martinez, a founder of the Hoboken Dual Language Charter School, or HoLa, an independent charter school in Hoboken, New Jersey. “We ran into problems very early on and had to learn a lot very, very quickly.” Martinez, who chairs HoLa’s board and also works for the Northeast’s largest charter network, Uncommon Schools, added: “When a CMO launches a new school, they bring along all of their lessons learned and they open with an already well-trained leader. At HoLa, there was no playbook.”

Michele Mason, executive director of the Newark Charter School Fund, which supports charter schools in the city and works extensively with its single-site charters, made a similar point, noting that many mom-and-pops lack the human capital used by CMOs to manage the problems that confront any education startup. “[Prior to my arrival we were] sending in consultants to help school leaders with finance, culture, personnel, boards,” Mason said. “We did a lot of early work on board development and board support. The CMOs don’t have to worry about that so much.”

Mason added that the depth of the talent pool for hiring staff is another advantage that CMOs enjoy over the standalones. “Every personnel problem—turnover, et cetera—is easier when you have a pipeline.”

Access to Experts

Many large charter-school networks can also count on regular technical support and expertise from various powerhouse consultants and consulting firms that serve the education-reform sector. So, if knowledge and professional support are money, some observers believe that access to such wired-in “help” means the rich are indeed getting richer in the charter-school world.

Leslie Talbot of Talbot Consulting, an education management consulting practice in New York City, said, “About 90 percent of our charter work is with single-site schools or leaders of color at single sites looking to grow to multiple campuses. We purposely decided to focus on this universe of schools and leaders because they need unique help, and because they don’t have a large CMO behind them.” Talbot is also a member of the National Charter Collaborative, an organization that “supports single-site charter-school leaders of color who invest in the hopes and dreams of students through the cultural fabric of their communities.”

What are the kinds of support that might bolster a mom-and-pop’s chances of success? “There are lots of growth-related strategic-planning and thought-partnering service providers in [our area of consulting],” offered Talbot. “Single-site charter leaders, especially those of color, often are isolated from these professional development opportunities, in need of help typically provided by consulting practices, and unable to access funding sources that can provide opportunities” to tap into either of those resources.

Connections and Capital

The old bromide “It’s who you know” certainly holds true in the entrepreneurial environment of charter startups. As with any risky and costly enterprise, the power of personal and professional relationships can open doors for school leaders. Yet these are precisely the relationships many mom-and-pop, community-focused charter founders lack. And that creates significant obstacles for prospective single-site operators.

A 2017 Thomas B. Fordham Institute report analyzed 639 charter applications that were submitted to 30 authorizers across four states, providing a glimpse of the tea leaves that charter authorizers read to determine whether or not a school should open. Authorizing is most certainly a process of risk mitigation, as no one wants to open a “bad” school. But some of the study’s findings point to distinct disadvantages for operators who aren’t on the funder circuit or don’t have the high-level connections commanded by the country’s largest CMOs.

For instance, among applicants who identified an external funding source from which they had secured or requested a grant to support their proposed school, 28 percent of charters were approved, compared to 21 percent of those who did not identify such a source (see Figure 2).

“You see single-site schools, in particular with leaders of color, who don’t have access to capital to grow,” said Talbot. “It mirrors small business.” Neophyte entrepreneurs, including some women of color, “just don’t have access to the same financial resources to start up and expand.”

Michele Mason added that the funding problem is not resolved even if the school gets authorized. “Mom-and-pops don’t spend time focusing on [fundraising and networking] and they don’t go out there and get the money. They’re not on that circuit at all.”

“Money is an issue,” agreed Karega Rausch, vice president of research and evaluation at the National Association of Charter School Authorizers (NACSA). “If you look at folks who have received funding from the federal Charter Schools Program, for instance . . . those are the people getting schools off the ground. And this whole process is easier for a charter network that does not require the same level of investment as new startups.”

Authorizing and the Politics of Scale

Charter-school authorizing policies differ from state to state and are perhaps the greatest determinant of when, where, and what kind of new charter schools can open—and how long they stay in business. Such policies therefore have a major impact on the number and variety of schools available and the diversity of leaders who run them.

For example, on one end of the policy spectrum lies the strict regulatory approach embodied by the NACSA authorizing frameworks; on the other end, the open and pluralistic Arizona charter law. Each approach presents very different conditions for solo charter founders, for the growth of the sector as a whole, and, by extension, for the cultivation of political constituencies that might advocate for chartering now and in the future.

Arizona’s more open approach to authorizing has led to explosive growth: in 2015–16, nearly 16 percent of the state’s public-school students—the highest share among all the states—attended charter schools. The approach also earned Arizona the “Wild West” moniker among charter insiders. But as Matthew Ladner of the Charles Koch Institute argues, the state’s sector has found balance—in part because of an aggressive period of school closures between 2012 and 2016—and now boasts rapidly increasing scores on the National Assessment of Educational Progress, particularly among Hispanic students (see “In Defense of Education’s ‘Wild West,’” features, Spring 2018). It has also produced such stellar college-preparatory schools as Great Hearts Academies and BASIS Independent Schools, whose success has helped the Arizona charter movement gain political support outside of its urban centers.

“When you have Scottsdale’s soccer moms on your side, your charters aren’t going away,” said Ladner.

NACSA’s approach, conversely, is methodical and therefore tends to be slow. Its tight controls on entry into the charter space have come to typify the authorizing process in many states—and have given rise to a number of the country’s best-performing schools and networks of any type, including Success Academy in New York City, Achievement First in Connecticut, Brooke Charter Schools in Boston, and the independent Capital City Public Charter School in D.C. However, some of NACSA’s policy positions could be considered unfriendly to sector growth. For instance, the association recommends that the initial term of charters be for no more than five years, and that every state develop a provision requiring automatic closure of schools whose test scores fall below a minimum level. Such provisions may have the most impact on single-site, community-focused charters, which might be concentrating on priorities other than standardized test scores and whose test results might therefore lag, at least in the first few years of operation.

Certainly, responsible oversight of charter schools is essential, and that includes the ability to close bad schools. “Despite a welcome, increasing trend of closing failing schools [over] the last five years, closing a school is still very hard,” Rausch said. “Authorizers should open lots of innovative and new kinds of schools, but they also have to be able to close them if they fail kids. We can’t just open, open, open. We need to make sure that when a family chooses a school there’s some expectation that the school is OK.”

The issue of quality is anchored in the pact between charter schools and their authorizers (and by extension, the public). Charter schools are exempt from certain rules and regulations, and in exchange for this freedom and flexibility, they are expected to meet accountability guidelines and get results. Over time, authorizers have increasingly defined those results by state test scores.

By this measure, the large CMOs have come out ahead. Overall, schools run by them have produced greater gains in student learning on state assessments, in both math and reading, than their district-school counterparts, while the mom-and-pops have fared less well, achieving just a small edge over district schools in reading and virtually none in math (see Figure 3).

But some charter advocates are calling for a more nuanced definition of quality, particularly in light of the population that most standalone charters—especially those with leaders of color—plan to serve. This is a fault-line issue in the movement.

“In my experience, leaders of color who are opening single sites are delivering a model that is born out of the local community,” said Talbot. “We’ve witnessed single-site charters headed by leaders of color serve large numbers of students who have high needs. Not at-risk . . . but seriously high needs—those ongoing emergent life and family conditions that come with extreme poverty,” such as homelessness. “When you compound this with [a school’s] lack of access to capital and support . . . you have this conundrum where you have leaders of color, with one to two schools, serving the highest-needs population, who also have the least monetary and human-capital support to deal with that challenge. And as a result, their data doesn’t look very good. An authorizer is going to say to a school like that, ‘You’re not ready to expand. You might not even be able to stay open.’”

When it comes to attempting a turnaround, standalone schools are again at a disadvantage relative to the CMOs. “What happens with the mom-and-pops is that if they don’t do well early—if their data doesn’t look good—there’s no one there to bail them out,” said Mason. “They don’t have anyone to come and help with the programming. The academic supports. And if they don’t have results early, then they’re immediately on probation and they’re climbing uphill trying to build a team, get culture and academics in place. CMOs have all the resources to come in and intervene if they see things going awry.”

Then, too, a charter school, especially an independent one, can often fill a specialized niche, focusing on the performing arts, or science, or world languages. “As an independent charter school, you have to be offering families something different, . . . and in our case it’s the opportunity for kids to become fully bilingual and bi-literate,” offered Barbara Martinez of HoLa. “It’s not about being better or beating the district. It’s about ensuring that you are not only offering a unique type of educational program, but that you also happen to be preparing kids for college and beyond. For us, [charter] autonomy and flexibility allow us to do that in a way that some districts can’t or won’t.”

In short, the superior performance of CMO schools vis-à-vis test scores does not imply that we should only focus on growing CMO-run schools. Given the resource disadvantages that independent operators face, and the challenging populations that many serve, we would be better advised to provide these leaders with more support in several areas: building better networks of consultants who can straddle the worlds of philanthropy and community; recruiting from non-traditional sources to diversify the pool of potential leaders, in terms of both race and worldview; and allowing more time to produce tangible results. Such supports might help more mom-and-pops succeed and, in the process, help expand and diversify (in terms of charter type and leader) the movement as a whole while advancing its political credibility.

The numbers tell the story on the subject of leadership. Charter schools serve a higher percentage of black and Hispanic students than district schools do, and while charter schools boast greater percentages of black and Hispanic principals than district schools, these charter-school leaders overall are far less diverse than the students they serve (see Figure 4). Though many may view charter schools primarily through the lens of performance, it seems that many of the families who choose them value community—the ability to see themselves in their schools and leaders—substantially more than we originally believed. Diverse leadership, therefore, is a key element if we want to catalyze both authentic community and political engagement to support the movement’s future.

More Is Better

A schooling sector that does not grow to a critical mass will always struggle for political survival. So what issues are at play when we consider the future growth of charter schools, and what role will single sites and a greater variety of school offerings play in that strategy? There’s no consensus on the answer.

A more pluralistic approach to charter creation—one that embraces more-diverse types of schools, academic offerings, and leadership and helps more independent schools get off the ground—might entail risks in terms of quality control, but it could also help the movement expand more quickly. And steady growth could in turn help the movement mount a robust defense in the face of deepening opposition from teachers unions and other anti-charter actors such as the NAACP. (Last year the NAACP released a task force report on charter schools, calling for an outright moratorium on new schools for the present and significant rule changes that would effectively end future charter growth.)

Another viewpoint within the movement, though, points out that the sector is still growing, though at a slower pace and even if there is a coincident reduction in the diversity of school types.

“We know the movement is still growing because the number of kids enrolled in charter schools is still growing,” said NACSA’s Rausch. “It’s just not growing at the same clip it used to, despite the fact that authorizers are approving the same percentage of applications.” He also noted that certain types of growth might go untallied: the addition of seats at an existing school, for instance, or the opening of a new campus to serve more students.

Rausch notes that one factor hampering sector-wide growth is a shrinking supply of prospective operators, single-site or otherwise. “We’ve seen a decline overall in the number of applications that authorizers receive,” he said. “What we need are more applications and more people that are interested in starting new single sites, or more single sites that want to grow into networks. But I’m also not sure there is the same level of intentional cultivation to get people to do this work [anymore]. I wonder if there is the same kind of intensity around [starting charters] as there used to be.”

Many charter supporters, however, don’t believe that an anemic startup supply is the only barrier to sector expansion in general, or to the growth of independent schools. Indeed, many believe there are “preferences” baked into the authorizing process that actually hinder both of these goals, inhibiting the movement’s progress and its creativity. That is, chartering is a movement that began with the aspiration of starting many kinds of schools, but it may have morphed into one that is only adept at starting one type of school: a highly structured school that is run by a CMO or an EMO and whose goal is to close achievement gaps for low-income kids of color while producing exceptional test scores. This “type” of charter is becoming synonymous with the term “charter school” across most of America. Among many charter leaders and supporters, these are schools that “we know work.”

In many regions of the country, these charters dominate the landscape and have had considerable success. However, given the pluralistic spirit of chartering overall, the issue of why they dominate is a salient one. Is it chance or is it engineered? Fordham’s report revealed that only 21 percent of applicants who did not plan to hire a CMO or an EMO to run their school had their charters approved, compared to 31 percent for applicants who did have such plans, which could indicate a bias toward CMO or EMO applicants over those who wish to start stand-alone schools. As Fordham’s Michael Petrilli and Amber Northern put it in the report’s foreword: “The factors that led charter applicants to be rejected may very well predict low performance, had the schools been allowed to open. But since the applications with the factors were less likely to be approved, we have no way of knowing.”

The institutional strength implied by a “brand name” such as Uncommon Schools or IDEA might give CMO schools more traction with authorizers and the public. “The truth is that telling a community that a school with a track record is going to open is significantly easier than a new idea,” offered Rausch. “But it’s important to remember that every network started as a single school. We need to continue to support that. I don’t think it’s either CMO or single site. It’s a ‘both/and.’”

If there is a bias toward CMO charters as the “school of choice” among authorizers, why might that be, and what would it mean for single sites? Some believe the problem is one where the goal of these schools is simply lost in the listening—or lack of it—and that the mom-and-pops could benefit from the assistance of professionals who know how to communicate a good idea to authorizers and philanthropists.

The language of “education people in general, and people of color in education specifically . . . doesn’t match up with the corporate language [that pervades the field and] that underpins authorizing and charter growth decisions,” said Talbot. “I think more [charter growth] funds, philanthropists, foundations, need . . . let’s call it translation . . . so there is common ground between leaders of color, single-site startups, foundations, and other participants in the space. I think this is imperative for growth, for recognition, and for competitiveness.”

What Now?

The future of chartering poses many questions. Admittedly, state authorizing laws frame the way the “what” and “who” of charters is addressed. Yet it is difficult to ignore some of the issues that have grown out of the “deliberate” approach to authorizing that has typified much of recent charter creation.

Some places, such as Colorado, have significant populations of single-site schools, but overall, the movement doesn’t seem to be trending that way. Rausch noted that certain localities, such as Indianapolis, have had many charter-school leaders of color, but the movement, particularly on the coasts, is mainly the province of white school leaders and organizational heads who tend to hold homogeneous views on test scores, school structure, and “what works.” And while some Mountain States boast charter populations that are diverse in ethnicity, income, and location, in the states with the greatest number of charters, the schools are densely concentrated in urban areas and largely serve low-income students of color. Neither of these scenarios is “right,” but perhaps a clever mix of both represents a more open, diverse, inclusive, and sustainable future for the charter movement. In the end, the answers we seek may not lie in the leaves that have grown on the chartering tree, but in the chaotic and diverse roots that started the whole movement in the first place.

Derrell Bradford is executive vice president of 50CAN, a national nonprofit that advocates for equal opportunity in K–12 education, and senior visiting fellow at the Thomas B. Fordham Institute.

This article appeared in the Summer 2018 issue of Education Next. Suggested citation format:

Bradford, D. (2018). Strengthening the Roots of the Charter-School Movement: How the mom-and-pops can help the sector diversify and grow. Education Next, 18(3), 16-24.

Trump and the Nation’s Schools
https://www.educationnext.org/trump-and-the-nations-schools-assessing-administrations-early-impact-on-education-forum-burke-jeffries/
Wed, 04 Apr 2018
Assessing the administration’s early impact on education

The post Trump and the Nation’s Schools appeared first on Education Next.


Presidential candidate Donald J. Trump did not emphasize education policy during his campaign, though he proposed a $20 billion program to promote school choice, derided Common Core, and even floated the idea of eliminating the U.S. Department of Education. As for higher education, Trump expressed concern over student debt and proposed a partial loan-forgiveness program. Observers suggested that, as president, he might roll back Obama’s tough enforcement guidelines on campus sexual assault. How have Trump’s policies stacked up against his promises in his first year as president? What effect has his administration had on the nation’s schools and colleges so far?

In this forum, Lindsey M. Burke of the Heritage Foundation’s Center for Education Policy argues that the administration has already made some positive strides, while Shavar Jeffries, president of Democrats for Education Reform, contends that Trump’s policies have only harmed children and schools.


A Strong Start on Advancing Reform
by Lindsey M. Burke


Harmful Policies, Values, and Rhetoric
by Shavar Jeffries


This article appeared in the Summer 2018 issue of Education Next. Suggested citation format:

Burke, L.M., and Jeffries, S. (2018). Trump and the Nation’s Schools: Assessing the administration’s early impact on education. Education Next, 18(3), 58-65.

A Strong Start on Advancing Reform
https://www.educationnext.org/strong-start-on-advancing-reform-trump-and-nations-schools-forum-burke/
Wed, 04 Apr 2018
Forum: Trump and the Nation’s Schools

The post A Strong Start on Advancing Reform appeared first on Education Next.


Schools are quintessentially local institutions that the distant federal government is ill-equipped to shape. Indeed, the very fact that government at any level delivers schooling could be at the heart of lackluster academic outcomes. Since the 1960s, combined federal, state, and local per-pupil spending has nearly tripled in real terms. The returns on this massive investment, as judged by the performance of high school students on the National Assessment of Educational Progress, have been meager at best. In his inaugural address on January 20, 2017, Donald J. Trump acknowledged this disconnect, noting that the U.S. education system is “flush with cash” but leaves students “deprived of knowledge.”

Yet there is only so much that can—or should—be done by the residents of 1600 Pennsylvania Avenue or the denizens of the Department of Education (DOE). The federal government simply does not have the constitutional authority, the financial stake (K–12 education is 90 percent state and locally funded), or the capacity to manage education across the country. But certain reforms can and should be undertaken by the administration and Congress, because they are under the purview of the federal government or would begin the process of unwinding federal intervention in education; in little more than a year, the Trump administration has made considerable strides in that direction.

Broadly, the administration shaped K–12 and higher education in three primary ways in 2017: through policy changes in conjunction with Congress; through considerable rescissions of Obama-era regulations; and through rhetorical markers on a variety of issues affecting education.

Policy Changes

One of the most consequential reforms to unfold over the past year is also one of the most recent: the expansion of school choice through a change to 529 college savings accounts. The Tax Cuts and Jobs Act, signed into law by the president in December 2017, incorporated an amendment offered by Republican senator Ted Cruz of Texas that makes K–12 private-school tuition eligible for 529 savings plans. These plans are tax-advantaged savings accounts whose earnings are not subject to federal taxes. Moreover, 34 states and the District of Columbia (D.C.) offer parallel state tax deductions and credits for 529 plan contributions, making them attractive savings vehicles. Families who choose to pay for K–12 expenses using their 529 accounts will clearly have less time to save for kindergarten than for high school, but the eligibility of anyone (such as a grandparent) to contribute to a student beneficiary’s account can also boost a family’s purchasing power.

Under the new law, 529 savers can withdraw up to $10,000 per year free of federal (and in some cases state) taxes to pay tuition expenses at an elementary or secondary private school. The economic benefit for families could add up substantially: holdings in 529 plans currently stand at $275 billion, up from just $2.4 billion in 1996. Critics of the new provision have argued that it fails to adequately extend benefits to children from low-income families, who may not have the financial means to save for tuition. States should address this issue by adopting universal education-choice options for all families (and many state-based programs are already geared specifically to low-income children by virtue of means testing). But here again, the ability for anyone to contribute to a designated beneficiary’s 529 means children from low-income families are not limited to funds their parents can contribute. By equalizing the tax treatment of K–12 and higher-education savings, the new law advances school choice without increasing direct federal intervention in education.

Trump also advanced school choice by signing into law a reauthorization of the D.C. Opportunity Scholarship Program, putting that program on solid footing after eight years of opposition—in the form of budget eliminations and reauthorization fights—from the Obama administration. It is appropriate for the federal government to fund the D.C. program since the district is under the jurisdiction of Congress. Students there, along with Native American children attending Bureau of Indian Education (BIE) schools and children from military families, are the few eligible populations to whom the federal government has a unique obligation to provide education services.

[Photo: Education secretary Betsy DeVos speaks during a January 18 rally as part of National School Choice Week.]

Regulatory Rollback

The Trump administration has arguably had the most success on the education-reform front in its work to repeal and rescind Obama-era education regulations.

With the backing of congressional Republicans, the administration came out of the gate swinging against prescriptive regulations on the Every Student Succeeds Act (ESSA) that the Obama administration had put in place in November 2016. Congress used the Congressional Review Act to pass a repeal of a regulation requiring that states rate teacher-training programs based on their graduates’ evaluation results and another regulation dealing with accountability measures. The president signed both into law in April 2017. The accountability rule was especially prescriptive and would have required states to assign each school a single summative performance rating based on a complicated set of indicators while also dictating methods for intervention in struggling schools. Both regulations were clearly beyond the purview of the federal government and not in keeping with the spirit of ESSA, which, ostensibly, sought to restore some control over education to the states.

Similarly, the new administration deferred to local authorities on policies pertaining to gender identity. Prior to leaving office, the Obama administration expanded the reach of Title IX by reinterpreting the law, which bars discrimination on the basis of sex, arguing that it applied to gender identity. The administration informed schools across the country that the departments of education and justice would now “treat a student’s gender identity as the student’s sex for purposes of enforcing Title IX.” The Trump administration reversed this guidance, which had conditioned access to federal funding on schools allowing students who identify as transgender to use the bathrooms and locker rooms of their choice. The Trump departments of justice and education issued a joint letter rescinding the policy, restoring decisions about this sensitive issue to local school leaders and parents, who can work together to find accommodations for all affected parties.

On the question of sexual assault on campus, the Obama administration issued a “Dear Colleague” letter in 2011 alerting colleges and universities that they should use a “preponderance of the evidence” standard—rather than the more stringent “clear and convincing evidence” standard—when adjudicating sexual-assault cases. The guidance created an unequal balance of power, stacking the deck in favor of the accuser and significantly weakening the due process rights of the accused. In September 2017, the Trump DOE rescinded the guidance, and Secretary of Education Betsy DeVos has indicated that she will be working on a replacement for the rule, in an effort to better protect both those who make charges of sexual assault and those who are accused of it.

Rhetorical Markers

Apart from its direct actions, the Trump administration’s rhetorical support for various measures, such as apprenticeship programs, continues to shape civic debate and inform congressional efforts. The White House has lauded the promise of school choice—a stark departure from the Obama years. While Obama was moderately supportive of public-school choice options such as charters, he was hostile toward private-school options such as the D.C. scholarship program. Trump, by contrast, appointed a secretary of education who had spent decades working to advance education choice for families, and his administration has attempted to advance school choice through federal policy as appropriate. The administration has also hinted at pursuing other school-choice proposals in the coming year, including a $1 billion initiative to provide education savings accounts to military families. In my view, the federal government should have a limited role in advancing school choice through policy (military choice, the D.C. scholarship program, and choice for children attending BIE schools being among the few exceptions). However, the administration’s rhetorical support for school-choice initiatives should bolster such efforts in the states.

The White House Budget

Budgets are aspirational documents. Although Congress rarely, if ever, implements a White House budget as written, the president’s funding plan sets the tone for the administration’s priorities on a host of issues. The Trump administration’s FY2019 budget request proposes a 5 percent reduction in spending on programs managed by DOE, eliminating grants focused on a variety of K–12 and higher-education programs, and ultimately reducing spending for the agency by $3.6 billion. The administration’s FY2018 budget went further, targeting reductions in federal education spending totaling $9 billion, which would have amounted to a 13 percent cut in the DOE’s $68 billion annual budget. That recommended cutback signaled a serious commitment to lessening federal intervention in education—a necessary condition for restoring state and local control. Had Congress adhered to the White House’s budget request, the proposal would have been the largest single-year percentage cut in the department’s discretionary budget since President Ronald Reagan’s 1983 budget proposal.

The administration’s budget remained an aspirational document in 2017, and it appears the same will happen in 2018. The omnibus appropriations bill passed by Congress in late March flouted the White House’s proposal by increasing, rather than decreasing, federal spending. The bill increased DOE’s budget by $3.9 billion, to $70.9 billion, representing a 6 percent increase over 2017. The administration had rightly sought reductions in that budget, aiming to begin the process of restoring state and local control of education. Yet Congress, once again, continued the federal education-spending spree.

More to Be Done

There are already indications that the administration will continue its efforts to shape education policy. In December 2017, the Trump administration filed an amicus brief urging the U.S. Supreme Court to overturn the 1977 Abood v. Detroit Board of Education decision, which allowed public-sector unions, including teachers unions, to collect fees even when an employee declines membership. The members of the court, including Trump appointee Neil Gorsuch, heard oral arguments in Janus v. AFSCME in late February and appear poised to follow the administration’s advice.

But without question, there is more to be done. Although the administration is constrained by the parameters of the law, the education department should continue to allow for as much flexibility for states as possible. ESSA was intended to create such flexibility on a host of measures after more than a decade of ineffective prescription ordered by No Child Left Behind. If California wants to simply identify underperforming schools on the state’s dashboard, as its accountability plan suggests, or if Arizona wants to allow schools to use any standardized test that fits their needs rather than a statewide test, as ESSA’s pilot option also allows, DOE should move out of the way of these state laboratories. (So far, the approval process for state accountability plans indicates the department is doing just that.) Ultimately, the administration should work with Congress to empower states to opt out of the law altogether and apply their share of ESSA funding toward state and local priorities. It should also work to advance choice for military families, for children in D.C., and for children attending BIE schools. And it should work with Congress to dramatically reduce higher-education subsidies and to reform accreditation, decoupling that process from federal financing in a step toward restoring it as a voluntary, meaningful practice.

In sum, within a year’s time the administration has repealed onerous guidance associated with ESSA that would have infused a level of prescription on par with what prevailed under NCLB; restored decisions about school bathroom policy to localities; worked to ensure due process for the accused in cases of sexual assault allegations on college campuses; and advanced school choice in an appropriate way through existing federal policy, reauthorizing the D.C. Opportunity Scholarship Program, and empowering families across the country with choice through expanded 529 savings plans. All of these reforms augur positive change for American education because they have put control in the hands of those closest to the students the policies affect, thus moving federal education policy in the right direction.

That’s a pretty strong start.

This article is part of a forum on Donald Trump and the nation’s schools. For an alternate take, please see, “Harmful Policies, Values, and Rhetoric,” by Shavar Jeffries.

This article appeared in the Summer 2018 issue of Education Next. Suggested citation format:

Burke, L.M., and Jeffries, S. (2018). Trump and the Nation’s Schools: Assessing the administration’s early impact on education. Education Next, 18(3), 58-65.

Harmful Policies, Values, and Rhetoric

Forum: Trump and the Nation’s Schools
April 4, 2018

After little more than a year, President Donald J. Trump’s policies, values, and rhetoric have had a negative impact on our nation’s most vulnerable schoolchildren, particularly low-income students and students of color. This adverse effect is especially pronounced in five areas: oversight of federal education law; enforcement of federal guarantees of educational equity; budget and tax policy; the rescinding of the Deferred Action for Childhood Arrivals (DACA) policy; and Trump’s embrace of bigoted rhetoric and action that challenges the identities of students who are racial, ethnic, or religious minorities.

Oversight of ESSA

While the Every Student Succeeds Act (ESSA) of 2015 provides states with more flexibility than its predecessor law, No Child Left Behind, the Trump administration has failed to enforce key provisions of ESSA that Congress carefully wrote into statute. For example, as pointed out last year by Republican John Kline of Minnesota, an ESSA co-author and former chair of the House Committee on Education and the Workforce, “Arizona and New Hampshire recently passed laws that violate ESSA by permitting individual school districts to choose which assessments to administer.” Subsequently, the Department of Education (DOE) approved Arizona’s plan despite the violation. Approval of New Hampshire’s plan is still pending; however, none of DOE’s feedback thus far indicates that the state’s apples-to-oranges approach to comparing schools will pose an obstacle to the plan’s approval. This means that in Arizona, New Hampshire, and other states, different schools will be rated according to different indexes from a long list of possible options. Those of us who advocate for accountability as a means to expand educational opportunities for students from historically disadvantaged groups fear this federal policy approach will lead districts with poor student-achievement outcomes to select menu options that mask achievement gaps, which in turn will lead to a misdirection of resources and attention away from schools that most need them. This policy is simply illustrative. The administration has also approved or given encouraging signals to plans that violate clear ESSA statutory mandates to disaggregate student-achievement outcomes by race and family income and for English language learners and students with disabilities; to test all students in grades 3–8; and to assess at least 95 percent of all students.

Civil Rights Rollback

One of the most important roles of the federal government vis-à-vis U.S. public education is ensuring civil rights and educational equity, particularly when state and local governments have fallen short of meeting their responsibilities. U.S. Secretary of Education Betsy DeVos has rolled back the regular practice of the education department’s Office for Civil Rights (OCR) of probing further into civil rights complaints for evidence of larger, systemic violations. This change means that students who are harmed by state and local civil-rights violations will be far less likely to see those abuses remedied unless they, their parents, or someone else acting on their behalf files a direct and formal complaint. In March, DeVos also eliminated an appeals process for students claiming discrimination and shortened the time period in which claimants can file evidence with investigators.

Trump administration officials have also undercut protections against sexual abuse on college campuses. Last summer, Candice Jackson, the acting head of the OCR, dismissed the severity of the issue by asserting that 90 percent of such allegations on campus “fall into the category of ‘we were both drunk,’ ‘we broke up, and six months later I found myself under a Title IX investigation . . . ,’” a statement for which she subsequently apologized. In September, DOE rescinded Obama-era guidance requiring more-stringent procedures for dealing with campus-based sexual assaults. The administration has also revoked rules and guidance dealing with other issues, including Obama-era protections for transgender students, and it is in the process of reviewing guidance aimed at preventing discriminatory school discipline, on which, in testimony before Congress, DeVos said she would “defer to the judgment of state and local officials.”

Proposed Budget Cuts

My forum partner points out that budgets are “aspirational documents.” It’s true that the budget drafts of any White House are usually ignored by Congress, but they reveal values and priorities. In its proposed FY2018 budget, the Trump administration called for slashing almost $10 billion in aid to K–12 and higher education, potentially resulting in the elimination of afterschool programs, substantial cuts to career and technical education programs, fewer supports for teachers, and instability of the Pell Grant Program. Trump did propose increases to the federal Charter Schools Program, but these relatively small boosts were overshadowed by the massive reductions he wanted. In fact, Trump’s cuts would harm even the public charter schools he purports to support: charters rely on Title II teacher-preparation grants to train their educators, and Trump wanted to eliminate the federal appropriation for that program. Given that he is now proposing to arm teachers, I must ask: why isn’t there enough money to train teachers to teach, when there’s suddenly enough to train them to be sharpshooters?  

In March, Congress finally passed a bipartisan spending bill that rejected Trump’s divisive and reckless spending priorities. None of Trump’s proposed education cuts were enacted—in fact, overall education funding saw a slight increase and, at the same time, important new investments were made in consensus education reforms including high-quality public charter schools.

Furthermore, the social safety net that supports vulnerable children and families is in jeopardy under the Trump administration. The ongoing efforts of the Republican-dominated House to slash Medicaid, dismantle the Affordable Care Act, and cut key social services programs would negatively affect school readiness and opportunities to learn for millions of students. More than one third of U.S. children, for example, rely on Medicaid for their health-care coverage and for screening and treatment of vision and hearing problems, developmental delays, and other conditions that, left unaddressed, can have an adverse impact on short- and long-term academic achievement. Medicaid also provides $4 billion to $5 billion in funding directly to public schools for services to students with disabilities and for vital support personnel such as school nurses and counselors. Research shows that children with access to Medicaid are more likely to graduate from high school and complete college than their peers who lack coverage.

In addition, Trump’s “starve the beast” tax policies are likely to pressure Congress to make deep education cuts in the future. The recently enacted Tax Cuts and Jobs Act will reduce revenue, portending large decreases in federal discretionary spending. The Congressional Budget Office estimates the tax bill will add $1.5 trillion to the deficit over 10 years. This deficit spending will ultimately require severe, across-the-board reductions in domestic programs, and Trump has already signaled, in both his proposed FY2018 and FY2019 budgets, that he favors billions in cuts to education. Furthermore, the new cap on federal income-tax deductions for individuals will jeopardize state and local education funding in states such as California, Connecticut, New Jersey, and New York.

[Photo: Protesters gather on Capitol Hill to oppose Trump’s decision to end the Obama administration’s DACA policy.]

DACA and Dreamers

Trump also unnecessarily disrupted the lives of “Dreamers”—some 800,000 undocumented immigrants who were brought to the United States as children—and their families by ending President Obama’s DACA policy, setting an arbitrary deadline (March 5, 2018) for Congress to save the program and then breaking promise after promise to support a bipartisan legislative solution. Trump actually wound up opposing the proposal of the bipartisan group he had previously pledged to support, which likely determined its failure to garner the necessary 60 votes for passage in the Senate. While at this writing the courts have blocked the immediate end of DACA for current recipients, hundreds of young Americans nonetheless lose protections every day that Congress fails to act, and all Dreamers face an uncertain future.

Rescinding DACA disrupts learning environments across all levels of the U.S. education system. About 9,000 DACA K–12 teachers could be forced out of their classrooms. Students pursuing higher education will lose jobs that currently help them pay for tuition and living expenses, worsening the college dropout crisis. An estimated 200,000 citizen children whose parents have been protected under DACA will live with increased fear for their parents’ safety and may lose access to services if their parents avoid interactions with governmental agencies, including meetings with teachers and school administrators, for fear of deportation.

Climate of Fear

In addition to the Trump administration’s direct policy actions, Trump’s bigoted and offensive rhetoric has assaulted our racial, ethnic, and religious minorities, implying that millions of American families and children are less than full members of our society. In a post-election report titled “The Trump Effect: The Impact of the 2016 Presidential Election on Our Nation’s Schools,” the Southern Poverty Law Center presented results of a survey of more than 10,000 educators and school administrators and found that 80 percent of them reported observing heightened anxiety and concern on the part of students over the impact of the election on themselves and their families.

Trump has shown himself to be an unapologetic endorser of divisive racial, religious, and ethnic stereotypes, insisting for years that the first black president was born in Kenya and not the United States; labeling Mexican immigrants as rapists and criminals during the announcement of his presidential candidacy; attempting to ban Muslim immigrants; insinuating that a Muslim Gold Star mother had been forbidden to speak in public by her husband; and casting blame “on many sides” in the wake of neo-Nazi and white-supremacist violence. When the president of the United States gives credence to such pernicious labeling, it should be unsurprising that some impressionable young people throughout the country act to marginalize minority students, and that minority children may internalize these messages about their civic identity.

Little to Embrace

The differences I have with the Trump administration are rooted in its policies and rhetoric, not its party affiliation. In our work at Democrats for Education Reform, my colleagues and I regularly interact with elected officials across party lines in efforts to advance positive academic outcomes for students. But Trump’s commitment to significant cuts in federal discretionary spending, a deep federalist ideology that tends to defer reflexively to state action (and is thus averse to federal civil-rights guarantees), and an embrace of bigoted rhetoric and action provide little substance for pro-student reform advocates to embrace. And his administration’s proposed investments in the federal Charter Schools Program do little to offset that damage. All students, but particularly low-income students and students of color, face many challenges in their pursuit of educational opportunity, both from within and outside the schoolhouse. So far, this administration’s policies have done nothing to help alleviate these challenges.

This article is part of a forum on Donald Trump and the nation’s schools. For an alternate take, please see, “A Strong Start on Advancing Reform,” by Lindsey M. Burke.

This article appeared in the Summer 2018 issue of Education Next. Suggested citation format:

Burke, L.M., and Jeffries, S. (2018). Trump and the Nation’s Schools: Assessing the administration’s early impact on education. Education Next, 18(3), 58-65.

Unlocking the Science of How Kids Think

A new proposal for reforming teacher education
March 27, 2018

In 2002 I was invited to give a talk to 500 school teachers. The invitation puzzled me, as my research at the time had nothing to do with education; I was a psychologist studying how different parts of the brain support different types of human learning. I mentioned this to the person who invited me, and she said, “We know. We want you to tell us about cognitive psychology. We think our teachers would be interested.” I shrugged, accepted the invitation, and forgot about it. Six months later (and days before I was to give the talk) I was wondering what had possessed me to say yes. Surely teachers would already know anything I could tell them about human memory, or attention, or motivation that would be relevant to teaching. I felt anxious and was sure the presentation would be a disaster.

But it wasn’t. Teachers thought it was interesting and relevant to their practice. Most surprising to me, they were unfamiliar with the content, even though it came from the very first class in human cognition a college student would take. I wondered: how could teachers not know the ABCs of cognition?

Yet the following 15 years have shown that experience was not a fluke. I’ve written four books and dozens of articles and have delivered scores of talks for teachers on the basics of cognition. In so doing, I’ve addressed what teachers saw as a need; what I haven’t done is think about why the need exists. Shouldn’t teachers learn how children think during their training? In this essay I consider why they don’t, and what we might do about it.

What Should Teachers Know?

Is my experience representative? Are most teachers unaware of the latest findings from basic science—in particular, psychology—about how children think and learn? Research is limited, but a 2006 study by Arthur Levine indicated that teachers were, for the most part, confident about their knowledge: 81 percent said they understood “moderately well” or “very well” how students learn. But just 54 percent of school principals rated the understanding of their teachers that high. And a more recent study of 598 American educators by Kelly Macdonald and colleagues showed that both assessments may be too optimistic. A majority of the respondents held misconceptions about learning—erroneously believing, for example, that children have learning styles dominated by one of the senses, that short bouts of motor-coordination exercises can improve the integration of the brain’s left and right hemispheres, and that children are less attentive after consuming sugary drinks or snacks.

But perhaps when teachers say they “know how children learn,” they are not talking about learning from a scientific perspective but about craft knowledge. They take the question to mean, “Do you know how to ensure that children in your classroom learn?” which is not the same as understanding the theoretical principles of psychology. In fact, in a 2012 study of 500 new teachers by the American Federation of Teachers (AFT), respondents said that their training was too theoretical and didn’t prepare them for teaching “in the real world.” Maybe they have a point. Perhaps teachers don’t need generalized theories and abstractions, but rather ready-to-go strategies—not information about how children learn, but the best way to teach fractions; not how children process negative emotion, but what to say to a 3rd grader who is dejected about his reading.

Most education researchers disagree, and they offer a reasonable argument. Some situations a teacher will encounter are predictable—a future teacher of 4th graders knows she will teach fractions—but many other situations are not. All teachers face problems for which their education leaves them unprepared: a 2nd grader goes to a corner of the room and spins, or a group of 6th graders laughs at a classmate because he whispers to himself when he reads. At these unpredictable moments, the teacher must improvise. How she responds to a child in a novel situation will depend, in part, on her beliefs about the cognitions, emotions, and motivations of children. In fact, future teachers have views about how children learn even before they begin their teacher-education programs. One goal of teacher education, then, is to ensure that these beliefs are as accurate as possible.

Whether for this reason or others, most teacher-education programs require some coursework in educational psychology. More important, every state requires that teachers pass an exam as part of the licensing process, and psychological content appears on most of these tests. For example, the publisher’s study guide for the Praxis II exam (used in more than 30 states) includes a list of psychological principles that test-takers should know (such as “how knowledge is constructed”), as well as the work of theorists (such as Bandura, Piaget, Bruner) and psychological terms (such as schema, zone of proximal development, operant conditioning). Two sample questions from this exam appear in the sidebar.

In sum, many U.S. teachers report that their education is overly theoretical and not of great utility. It’s clear that they are required to learn some basic principles of psychology as part of that education, but it is not clear that practicing teachers remember what they were taught.

Reform in Teacher Education

If a large percentage of teachers forget what they learn, that might be taken as evidence for the weakness of teacher preparation. Certainly, teachers’ lack of retention is consistent with the finding that teacher coursework predicts student outcomes poorly. Likewise, some research indicates that licensure test scores are associated with student outcomes, but those scores may simply be a proxy for a teacher’s cognitive ability. More generally, the lack of data showing the effectiveness of traditional teacher education might be viewed as support for policies that limit or eliminate the requirement that teachers undergo traditional teacher preparation. If we suspect teachers forget important aspects of their training and we know teachers without this preparation are mostly indistinguishable from those who get it, why set this meaningless hurdle? Requiring the coursework and a passing grade on a licensure test serves only to incur costs in time and money to future teachers, potentially closing the profession to some candidates. Given that some groups (such as African American men) are underrepresented in the profession, and that there are teacher shortages in certain geographic regions and subject areas, the requirement seems counterproductive.

Other observers have suggested that teacher education shouldn’t be eliminated, but it should be refocused. Current programs emphasize abstract theory at the expense of practical knowledge. There is, by this argument, only so much that can be learned from textbooks and lectures. Teaching is a skill, like tennis, that requires doing to gain proficiency. No one would think of teaching a child to play tennis by starting with a couple of years of book learning and no court time. Little wonder that teachers say their education overemphasized theory. These considerations point to greater emphasis on student-teaching placements, although existing research does not show that such apprenticeships necessarily lead to better student outcomes.

I suggest a third point of view. There’s reason for optimism that knowledge of the basic science of learning can improve teaching, and ultimately, student outcomes. Optimism, not confidence, because there is little direct evidence bearing on the question. Nevertheless, research does show that teacher beliefs influence their classroom decisions, so it is not a wild notion to suppose that accurate beliefs about how children learn will lead to better classroom decisions than inaccurate beliefs will.

The problem, I suggest, is twofold, and lies in the details of what future teachers learn, and how they learn it. Teachers are asked to learn content that is appropriate for future scientists, not future practitioners. And future teachers do not get sufficient practice with the concepts they are taught.

Science versus Application

What must scientists know? Scientists develop theories to account for observations. Observations come from the inspection and measurement of the world, inside the laboratory and out. A theory is a small set of statements that summarizes a large set of observations. Newton observed the movement of objects in many different circumstances, and summarized how they move with three laws of motion.

Scientists have recorded many observations of children’s cognition, motivation, and emotion over the last 100 years. Naturally, observations can be idiosyncratic, even if they are collected under controlled laboratory conditions. The observations that really matter are those that are observed consistently. Consider Piaget’s concept of conservation of number. In his famous demonstration, a four-year-old child will agree with you that two lines, each composed of eight buttons, contain the same number of buttons. But if, as the child watches, you elongate one of the rows by increasing the distance between the buttons, the child will now insist that the longer row has more buttons. Very young children do not yet recognize that rearranging a number of objects does not change their quantity.

Scientists have developed theories to account for these observations. For example, Piaget proposed that cognition develops in four stages. The second stage (ages two to seven) is characterized by difficulty in thinking abstractly and a focus on what is perceptually salient. Hence, a child in this stage cannot fathom that her mother was once her grandmother’s little girl, because her mother is so obviously grown. In the case of the buttons, the abstract idea of number is beyond the child, but the perceptual characteristic “bigger” is obvious to the child, and equates to “more.”

It seems self-evident that future scientists need to learn both observations (what children usually do) and theories to account for the observations. That’s the stuff of science. K–12 teachers, I will argue, have little use for psychological theory, but could benefit from knowing the observations—developmental patterns and consistencies in children’s cognition, motivation, and emotion. Such knowledge roughly equates to “understanding children.”

How can teachers use scientific observations about children? Some have direct classroom application. For example, around 4th grade, most children develop a more sophisticated understanding of how their own memories work; even without instruction on the principles of memory, children learn that some types of repetition help them to remember things more than others. A 5th-grade teacher who wants to ask students to work more independently would benefit from this knowledge: she could make a more informed bet that asking her 10-year-old students to commit things to memory will mostly work out. (For examples of scientific observations and classroom applications, see sidebar.)

Of course, not all scientific observations are equally useful to teachers. Some features of children’s minds have little prospect for classroom application. For example, if you lift two objects that are the same mass but different sizes, the larger one will feel lighter. That’s the size-weight illusion, and it is extremely reliable, but it’s hard to see how teachers would find it useful.

And the observations that do hold promise for education cannot be applied blindly. A teacher who learns that practice helps memory should not have 1st graders drilling a small set of math facts for two straight hours; practice helps memory, but under the wrong circumstances it can harm motivation.

The usefulness of scientific observations of children’s behaviors for teachers is widely appreciated, if textbooks for future teachers are any indicator. And these same books discuss the challenges involved in translating scientific findings into teaching practice. But teacher education misses the mark by emphasizing theory.

In contrast to observations, theoretical statements—for example, Piaget’s proposal that the thinking of children from ages two to seven tends to be concrete rather than abstract—are not helpful to teachers. On the positive side, a theoretical statement could provide a tidy summary of a large collection of observations, making them easy to understand, coordinate, and remember. But overall, theories have significant drawbacks when applied to practice.

First, scientific theories do more than summarize observations; they are meant to push science forward, to prompt new research. Thus, they go beyond existing data to make novel predictions about as-yet-unobserved phenomena. In the case of Piaget, many predictions derived from his theory were wrong, including the prediction about young children’s limited ability to think abstractly. Teachers guided by Piagetian theory, rather than by direct observation of children’s success in learning, will underestimate what young students can learn. More generally, when pre-service teachers learn the latest scientific theories, they are almost certainly learning content that will later be shown to be at least partially wrong.

A second problem with focusing on theory is that teachers are often taught multiple theories meant to account for the same phenomena. Again, that’s central to the purpose of the scientific enterprise: we refine and improve our theories for a set of observations by proposing multiple theories and setting one against the other. So, future researchers should learn multiple theories because they need to understand how theories are compared and evaluated. But for future teachers, the competition among theories can lead to a narrowing of perspective.

For example, a teacher reading any of the popular educational-psychology textbooks will encounter two wildly different theoretical accounts of student motivation. The behaviorist account emphasizes children’s motivation to earn rewards and avoid punishments. Classroom applications of this theory focus on systems that reward students for various behaviors or incremental achievements. Humanist theories, by contrast, emphasize students’ sense of autonomy, stressing that they are motivated to undertake tasks they see as under their control. Classroom applications of this perspective focus on ways to offer students greater choice.

The classroom practices—rewards and choice—are not incompatible, but the theories are. Each explicitly discounts what the other highlights, and both are incomplete. Professors of education introduce pre-service teachers to both theories, presumably because doing so exposes these future practitioners to a wider range of tools they might use in their classrooms. But because the theories are incompatible, one might presume that the classroom applications are incompatible as well. If you’re a behaviorist, you use one approach; if you’re a humanist, the other. Whichever choice teachers make, though, they all have classrooms with students who respond to rewards and to choice.

The presentation of multiple theoretical accounts is the rule rather than the exception in teacher education. The concept of intelligence provides another example. Again, many empirical observations could prove useful to teachers—for example, that intelligence can be improved with sustained cognitive work—but there is no single accepted theory of intelligence. It is variously described as having three relatively independent components, eight relatively independent components, or many, many non-independent components. Learning provides another example: educational theorists variously describe learning in terms of overt behavior, as mental symbols, or as a social construction. Teachers could hardly be blamed for thinking that scientists have some theories but have not yet figured out how learning works.

We see why teachers feel that much of their education is of low utility: much of it is. Teachers are taught (and via licensing exams, tested on) empirical observations (how kids think and act) as well as psychological theories. But only the former holds the promise of improving the practice of teaching.

The Need for Practice

The second reason teachers find their education impractical is that they do not get enough practice with the principles they learn to fully absorb them and thus make them useful.

I’ve suggested that teachers’ study of psychology ought to focus on consistencies in children’s cognitive, emotional, and motivational makeup, and that future teachers be asked to learn some of these consistencies. It’s important to note that these consistencies are abstractions. Consider “thinking fails when people try to keep too many things in mind at once.” That’s clear enough, but it can manifest in observable behavior quite differently, depending on the student’s age, the task he is performing, his emotional state, and other factors. A shy 3rd grader who is mentally overloaded by a rapid series of five instructions may just look blank. A 10th grader who is mentally overloaded by stereotype threat during a math test may respond with anger. Or with resignation. Teachers need to learn not just the abstract generalizations that scientists have described but how they play out in particular contexts.

This problem has been targeted in the past. A committee of educational psychologists, under the auspices of the American Psychological Association (APA), met in the mid-1990s to consider how future teachers might learn abstract principles of science in ways that could apply to classroom practice. The committee report recommended that authors of educational-psychology textbooks offer examples of how these principles play out in school, and provide more classroom scenarios for pre-service teachers to interpret. Another APA committee revisited the issue in 2011 and concluded that textbooks had improved along the lines suggested.

It was a sound strategy, but it didn’t solve the problem, as evidenced by the AFT’s 2012 survey showing that teachers still considered their education overly theoretical. The problem cannot be solved just by tying scientific abstractions to classroom examples; education students need sustained practice in making those connections. A single semester—the duration of a typical educational-psychology course—won’t do it.

In a landmark study of this issue by Patricia Cheng and colleagues, the researchers examined the problem-solving abilities of college students who had taken a course in deductive logic. Although they had successfully solved logic problems on course examinations, when they were given a standard logical form disguised as a “brain teaser” they were no better at solving it than students who had not taken the course (see Figure 1).

By definition, abstractions—a deductive logical form or a principle of children’s thinking—can look different, depending on context. Recognizing the underlying structure takes practice, but practice does the trick. Students who had taken more than one logic course were much more successful at solving the brain teaser.

If such principles are to be useful in the long term, what’s learned in an educational-psychology course must be reinforced in other coursework and in fieldwork. The teacher specializing in adolescent literacy would learn about the limitations of attention in that context, while the teacher specializing in elementary math would learn different consequences of the same observation about children’s thinking. That would require coordination across the teacher-education curriculum. Beyond the classroom, pre-service teachers should continue to learn about and apply this content during their student-teaching placements, which would, of course, require that their mentors be able and willing to incorporate relevant feedback into their coaching.

Next Steps

I began this article by highlighting two prominent ideas for the reform of teacher education: eliminating the traditional requirements for a teaching career, or radically changing those requirements to maximize student-teaching experience and minimize coursework. Here I’ve suggested a third way: change the content of education-degree coursework to focus on consistencies in children’s thinking, and greatly curtail how much scientific theory we ask future teachers to learn. What are the logical next steps toward implementing this third way?

I should note that important data are missing from my analysis. We have only spotty evidence as to what practicing teachers actually know about child psychology. Neither do we have solid evidence that teaching that aligns with scientists’ understanding of children is more effective than teaching that does not. Although many would suspect they could predict the outcomes of this missing research, we would be wise to test these assumptions empirically before undertaking a wholesale reform of teacher education.

The changes would not be minor. Textbooks would need to be revised, and courses would need to be overhauled—and not just courses in educational psychology, but (to a lesser extent) courses throughout the curriculum, to ensure that they coordinate with the new content. The difficulty of persuading professors to change their courses should not be underestimated. Faculty in higher education are used to autonomy in the classroom, and we surrender it with great reluctance. Given the scale of this change, the easiest way forward would be to create a pilot program within a college of education rather than attempting schoolwide reform. Faculty will be much easier to persuade if a small-scale trial shows promising results.

That leads us to the question: how do we define and measure “promising results”? Naturally, the ultimate aim would be improved student learning, but I would suggest that three other types of measurement be collected in parallel. First, we must be sure teachers retain the psychological principles they are taught. Second, we must be confident that they not only know the principles but also know how to use them in lesson plans. Third, we must be confident that they actually do use the principles in their teaching. And then we would need to gauge whether the students of teachers who use these principles in lesson plans have better educational outcomes than students whose teachers do not.

The financial commitment, then, is probably high. But the benefits could be substantial and the investment would pay dividends long into the future.

Daniel T. Willingham is professor of psychology at the University of Virginia. His most recent book is The Reading Mind: A Cognitive Approach to Understanding How the Mind Reads.

This article appeared in the Summer 2018 issue of Education Next. Suggested citation format:

Willingham, D.T. (2018). Unlocking the Science of How Kids Think: A new proposal for reforming teacher education. Education Next, 18(3), 42-49.

An Elite Grad-School Degree Goes Online
https://www.educationnext.org/elite-grad-school-degree-goes-online-georgia-tech-virtual-masters-increase-access-education/
March 20, 2018

Can Georgia Tech’s virtual master’s increase access to education?

Online coursework has been heralded as potentially transformative for higher education, including by lowering the cost of delivery and increasing access for disadvantaged students. Rather than physically attending a class with peers and an instructor at a set time and location, online students can satisfy class requirements at home and on their own schedules, by logging on to a website, engaging in chat sessions, and completing assignments digitally.

Such courses have grown in popularity. In 2015, 14 percent of U.S. college students were enrolled in online-only programs, and another 15 percent of students took at least one class online. Most of that growth has been at large public institutions, with for-profit colleges accounting for about one-third of online students nationwide.

Those numbers raise a question: who takes online classes? Does online education simply substitute for in-person education or does it serve students who would not otherwise enroll in an educational program? While existing research has compared academic performance between in-person and online students, little is known about the differences among the students themselves. Do online programs attract additional students and thereby increase the number of people obtaining education?

An innovative program at the Georgia Institute of Technology provides an opportunity to study this question. In 2014, Georgia Tech’s College of Computing, which is regularly ranked in the top 10 in the United States, started enrolling students in a fully online version of its highly regarded Master of Science in Computer Science degree—the first program to combine the low cost of online delivery with a degree from a highly ranked institution. The online degree costs about $7,000, less than one-sixth of the $45,000 that out-of-state students pay to enroll in the same program in person. The classes were designed by faculty to mirror the in-person courses, are graded to the same standards, and lead to the identical degree without any “online” distinction. It is now the nation’s largest master’s-degree program in computer science.

We first compare the online and in-person applicant pools and find almost no overlap between them. Unlike the in-person master’s, the online program attracts older, employed students. Next, we rigorously estimate whether this online option expands access to education for students. We find that students admitted to the program are more likely to pursue postsecondary education than those who are not admitted. In other words, access to the online program does not appear to substitute for other educational options. Those not admitted to the online program do not find appealing alternatives in the current higher-education landscape and thus do not pursue further education.

These findings indicate that the higher education market has been failing to meet demand for mid-career online options. Our analysis does not directly address the question of whether the quality of the online program is as high as that of the in-person program, but it does put that question in a new light. For the vast majority of online students, the alternative is not an in-person degree program but rather no degree at all. Even so, we find that a majority of enrollees in the online program are on track to complete their degrees and perform as well as or better academically than students who enroll on campus.

The Georgia Tech program confirms that, when done well, online coursework can substantially increase overall educational attainment and expand access to students who would not otherwise enroll.

An Elite Online Program

The Georgia Tech computer-science master’s degree was the first large-scale online program of its kind: it is offered by a highly ranked department, priced much lower than its in-person equivalent, and culminates in a prestigious graduate degree. It stands in contrast to the models of online education that preceded it, which involved either highly ranked institutions offering online degrees that cost as much as their in-person equivalents, lower-ranked institutions offering inexpensive online degrees with low labor-market returns, or a variety of institutions offering free massive open online courses (MOOCs), with unclear returns and very high attrition rates.

Since the founding of the Georgia Tech program, similar efforts have taken root at other prestigious institutions. For example, the University of Illinois at Urbana-Champaign now offers a fully online version of its highly regarded MBA for about one-third of the cost of the in-person program, Yale University is currently developing a fully online version of its Master of Medical Science degree for physician assistants, and the University of Colorado at Boulder has just started an online Master of Science in Electrical Engineering.

The Georgia Tech program was developed by the university and AT&T and is offered through a platform designed by Udacity, one of the largest providers of MOOCs. To earn their degree, students must complete 10 courses, specializing in computational perception and robotics, computing systems, interactive intelligence, or machine learning. The typical student takes one or two courses each semester and the expected time to graduation is six to seven semesters. In order to maintain educational quality, the online courses use assignments and grading standards similar to those of their in-person counterparts.

Though deadlines for submitting assignments are the same as the in-person courses, one major difference is that all lecture-watching and other learning experiences are asynchronous, meaning that there is no fixed time during which a student must be online. All content is posted at the start of the semester so that students may proceed at a pace of their choosing. Students schedule their exams within a specified window and are monitored to guard against cheating. Most interaction happens in online forums where students post questions and receive answers from fellow students, teaching assistants, or faculty members. Faculty members interact with students in online office hours, though online forums are typically run by the head teaching assistant.

To make the online program accessible to a wider range of applicants than its in-person counterpart, Georgia Tech’s admissions website describes as “preferred qualifications” having a BA in computer science or a related field with an undergraduate GPA of 3.0 or higher. Applicants to the online program are not required to submit GRE scores, while those applying to the in-person program must. Online students can apply and start the program in either the spring or fall semester; students in the in-person program may only begin in the fall.

Demand for the online program is large: it attracts over 3,400 applicants annually, about twice as many as its in-person equivalent. Some 61 percent of applicants are admitted, almost five times the 13 percent admission rate for the in-person program, and 80 percent of those admitted enroll. As a result, each year nearly 1,700 students begin a computer-science master’s degree through Georgia Tech’s online program, making it the largest computer-science master’s degree program in the United States, and possibly the world.

Who Applies to an Online Master’s Program?

We examine data for all applicants to the online program’s first six cohorts, from spring 2014 to fall 2016, and for all applicants to four cohorts of the in-person program, from fall 2013 through fall 2016. For each applicant, we have basic self-reported demographic information, including age, gender, race/ethnicity, and citizenship. Applicants also report their employer, postsecondary education history, undergraduate GPA, and the field and type of any degree earned. In our data, less than 0.2 percent of the nearly 18,000 applicants to either program applied to both programs.

In order to track all applicants’ enrollment at any postsecondary institution in the United States, we merge their data to the National Student Clearinghouse (NSC). In addition, because the NSC data contain information only on enrollment in formal higher-education degree programs, we survey all spring 2014 online applicants to capture other forms of education and training. We also ask which characteristics of Georgia Tech’s online degree program factor in their decision to apply.

The online and in-person applicant pools look fairly similar in terms of gender and race among American applicants, but the online program also attracts a much more American demographic than does the in-person program (see Figure 1). About 70 percent of the online applicants are U.S. citizens, compared to 8 percent of in-person applicants. The vast majority of in-person applicants are citizens of India (nearly 70 percent) or China (nearly 20 percent); less than 10 percent of applicants to the online program are Indian or Chinese citizens. That more than 70 percent of online program enrollees are U.S. citizens makes that pool substantially more American than the national pool of those completing computer-science master’s degrees, of whom 52 percent are U.S. citizens.

The online program attracts a substantially older demographic than the in-person program, with an average applicant age of 34, compared to 24 for in-person applicants. These older online applicants are largely in the middle of their careers: nearly 90 percent list a current employer on their applications compared to less than 50 percent of in-person applicants. And while we find that hardly anyone older than 30 applies to the in-person program, the opposite is true of the online program. Only 16 percent of online applicants are 25 or younger, and fewer than 30 percent are between 25 and 30. The majority of applicants are over 30, with substantial representation of students in their 40s and 50s.

To learn more about applicants’ family backgrounds and academic skills, we look at their undergraduate institutions using data from the Integrated Postsecondary Education Data System (IPEDS), which we are able to do for 88 percent of U.S. citizen applicants. We find that online applicants come from colleges where the average student’s SAT math score is 30 points, or about 0.2 standard deviations, lower than at in-person applicants’ colleges. Their colleges also have a higher proportion of low-income students, as well as a substantially lower six-year graduation rate. Online applicants are much less likely than in-person applicants to have majored in computer science, and more likely to have majored in engineering, mathematics, physical sciences, and even the social sciences and humanities.

In our survey, online applicants are asked to rate the importance of various features of the online master’s-degree program to their decision to apply. The top four characteristics all relate to the geographic or temporal flexibility that an asynchronous, fully online program provides, with 69 percent valuing not needing to commute or relocate and 65 percent citing the program’s flexible time commitments (see Figure 2). The cost and Georgia Tech’s reputation are also valued characteristics, with 53 percent of respondents describing them as “extremely important” and 85 to 90 percent citing them as either “important” or “extremely important.” Skill development is cited as “extremely important” by slightly less than half of applicants.

Does an Online Master’s Program Expand the Pool of Students?

A key goal of our study is to determine whether the existence of an online option alters applicants’ educational trajectories. If not for access to such an option, would its applicants pursue other educational options? Or does the online option lack close substitutes in the current higher-education market?

We compare the educational outcomes of two groups of students with similar academic qualifications but with one important difference: those offered admission to the online program, and those denied. This analysis includes all students who applied to the Georgia Tech online master’s program in spring 2014 and uses NSC data to track whether they were enrolled in any graduate program as of fall 2016.

We focus on spring 2014, the program’s first semester, to exploit a one-time admissions practice that makes it possible to study the causal effect of being admitted. When the program began, Georgia Tech initially opted to constrain the number of students accepted, which officials did by sorting applications by undergraduate GPA, reading them in descending order, and offering immediate entry only to the first 500 or so applications deemed admissible. As a result, only applicants with an undergraduate GPA of 3.26 or higher were eligible for admission in spring 2014. Eventually, all of the applications were reviewed and some students both below and above the 3.26 threshold were made offers of deferred admission.

The threshold provides an opportunity to compare similar students’ trajectories, focusing on the impact of an offer of admission to the online program. Applicants just above and below the threshold should differ only in their access to the online option and be nearly identical in terms of academic skills, as measured by GPA as well as other characteristics. We obtain more precise results by controlling for gender, race/ethnicity, citizenship, age, employment, and college major, but we obtain similar findings without these controls.

This method allows us to measure the causal effect of admission to the online program as long as students could not manipulate whether their GPA was just above or below the cutoff. We believe this is the case because applicants’ GPAs appeared on official transcripts not provided by the student and applicants had no knowledge that a GPA of 3.26 would play any role in the process. Additionally, we find no differential sorting across the threshold in terms of gender, race, citizenship, age, employment, or college major.
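The threshold comparison the authors describe is a standard regression-discontinuity design. A minimal sketch on synthetic data may make the logic concrete; the 3.26 GPA cutoff and the roughly 21-percentage-point enrollment jump are taken from the article, while the sample size, base rates, and bandwidth here are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic applicant pool: GPA is the running variable, and applicants
# at or above the 3.26 cutoff are eligible for immediate admission.
n = 5000
gpa = rng.uniform(2.5, 4.0, n)
above = (gpa >= 3.26).astype(float)

# Assume enrollment jumps by ~21 points at the cutoff (the article's estimate);
# the 5% base rate below the cutoff is an invented illustrative value.
enroll = (rng.uniform(size=n) < 0.05 + 0.21 * above).astype(float)

# Local linear regression within a narrow bandwidth around the cutoff,
# allowing separate slopes on each side -- a standard RD specification.
bw = 0.25
m = np.abs(gpa - 3.26) <= bw
x = gpa[m] - 3.26
X = np.column_stack([np.ones(m.sum()), above[m], x, x * above[m]])
beta, *_ = np.linalg.lstsq(X, enroll[m], rcond=None)
jump = beta[1]  # estimated discontinuity in enrollment at the cutoff

print(f"estimated enrollment jump at GPA 3.26: {jump:.2f}")
```

Because applicants just above and below the cutoff are otherwise comparable, the estimated jump can be read as the causal effect of an admission offer on enrollment.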

Using NSC data, we track whether students were enrolled in any graduate program as of fall 2016, well beyond the point at which all spring 2014 applicants would have had to enroll if admitted or would have had time to apply to and enroll in other institutions if rejected. We focus on the likelihood that a given student received any admission offer, regardless of its timing.

We find that students just above the GPA threshold were about 21 percentage points more likely both to be admitted and to enroll in the online program than students just below the threshold (see Figure 3). This implies that roughly all of the marginal applicants admitted because of the GPA threshold accepted the offer of admission, and that they appear not to have competing options that would cause them to decline their offer.

We then look at the NSC data to determine whether applicants just below the threshold who were denied admission to Georgia Tech enrolled in a different degree program, in any field of study. The overall levels of such enrollment are quite low, with less than 20 percent enrolling elsewhere. The few alternatives chosen by such applicants are generally lower-ranked online programs from institutions such as DeVry University or Arizona State University.

This stands in contrast to applicants to the in-person program, about half of whom eventually enroll in alternative U.S. degree programs, including at prestigious competitors such as Carnegie Mellon or the University of Southern California. In addition, looking at the full applicant pool, we see no falloff in enrollment to the online Georgia Tech program among students with much higher GPAs. This suggests the market is not providing appealing alternatives for a wide range of students for whom the online master’s degree is appealing.

Finally, survey data on students’ informal, non-degree training produce no evidence that access to the online degree program reduces hours spent on non-degree training—and, in fact, our estimates, while statistically insignificant, suggest that access to the online program may actually increase informal education, such as time spent on professional certification programs and coding boot camps.

An Underserved Student Market

Our study provides the first rigorous evidence that we know of that an online degree program can increase educational attainment. We see significant demand for the first low-cost online degree offered by a highly ranked institution, and our analysis shows that demand is from students who would not otherwise pursue a master’s degree.

We also find that this online option expands access to education and does not substitute for other informal training, and that students denied admission do not pursue any other formal education. Further, unlike the younger, predominantly international applicants to the in-person equivalent, applicants to the online program are largely mid-career Americans. Taken together, this implies that the higher-education market had previously been failing to meet demand for a program like Georgia Tech’s online computer-science master’s degree.

Demand aside, can the online program produce computer-science graduates of sufficient quality? Early evidence from Georgia Tech suggests that it can. To test whether online students were finishing their courses with as much knowledge as in-person students, Georgia Tech blindly graded final exams for online and in-person students taking the same course from the same instructor, and found the online students slightly outperformed the in-person students. Online students are also highly likely to continue their studies: among those who started in 2014, at least 62 percent remained enrolled two years later, apparently on track to complete their degrees. (The actual percentage is likely higher, since many students take a semester off and then re-enroll the following semester.)

Given the nearly 1,200 Americans enrolling each year in Georgia Tech's online computer-science master's program, and conservatively assuming only 62 percent graduate, we would expect at least 725 new American computer-science master's degrees to be awarded annually. Nationwide, about 11,000 Americans earn a master's degree in computer science each year, implying that this single program will boost the annual national production of American computer-science master's degrees by about 7 percent.
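The back-of-the-envelope arithmetic above can be checked directly. The figures below are the ones reported in the text; the calculation itself is only an illustration:

```python
# Figures reported above (approximate, per year).
new_degrees = 725            # conservative count of new graduates from the online program
national_masters = 11_000    # American CS master's degrees awarded nationwide

# The program's contribution as a share of national production:
share = new_degrees / national_masters
print(f"Increase in national production: {share:.0%}")  # prints 7%
```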

We conclude with two questions raised by this research. First, to what extent will the conclusions drawn from this particular online program apply to other populations and subjects? It seems likely, for example, that mid-career training in other fields might be amenable to this model, and moves by other institutions suggest they believe there are untapped markets in such training. Whether such low-cost, high-quality models can make inroads in undergraduate or secondary education remains to be seen, however.

Second, how large are the learning and labor-market impacts of this online degree, and how do they compare to those of the in-person equivalent? Looking at the undergraduate colleges attended by both types of computer-science students at Georgia Tech suggests that online students are, on average, somewhat weaker academically than their in-person counterparts. Nonetheless, comparisons of student achievement across the online and in-person formats suggest that online students finish their courses with at least as much knowledge as their in-person counterparts.

We hope to explore in subsequent work the extent to which the online degree is valued by the labor market, and whether and how it affects career advancement. Whether students who earn their computer-science master’s degree online are perceived as similar in quality to their in-person counterparts will have broad implications for the evolving role of online coursework in the postsecondary sector.

Joshua Goodman is associate professor of public policy at Harvard University. Julia Melkers is associate professor in the School of Public Policy at Georgia Institute of Technology. Amanda Pallais is the Paul Sack Associate Professor of Political Economy and Social Studies at Harvard University.

This article appeared in the Summer 2018 issue of Education Next. Suggested citation format:

Goodman, J., Melkers, J., and Pallais, A. (2018). An Elite Grad-School Degree Goes Online: Can Georgia Tech’s virtual master’s increase access to education? Education Next, 18(3), 66-72.


]]>