Vol. 12, No. 3 - Education Next
https://www.educationnext.org/journal/vol-12-no-03/
A Journal of Opinion and Research About Education Policy

Not All Teachers Are Made of Ticky-Tacky, Teaching Just the Same
https://www.educationnext.org/not-all-teachers-are-made-of-ticky-tacky-teaching-just-the-same/
Fri, 08 Jun 2012
The true import of the Chetty study


“We know a good teacher can increase the lifetime income of a classroom by over $250,000,” the president told the country in his State of the Union speech. His comment was based on a pioneering study by Chetty, Friedman, and Rockoff, published in this issue (see “Great Teaching,” Research), which for the first time combines tax data that reveal earnings at age 28 with information on student learning when that person was in elementary school.

The president said the study showed that we need new resources and policies to “keep good teachers on the job and reward the best ones.” But does the work of the Chetty team justify strong policy interventions? Do school board members need to peruse Education Next’s reader-friendly version of this econometric study, then take appropriate steps to replace weak teachers with high performers?

A number of commentators think not. “The differences produced by the high value-added teachers are relatively small,” Diane Ravitch tells her readers. Maria Bustillos objects to “firing ‘weaker’ teachers for the sake of a barely perceptible increase in students’ ‘lifetime income.’” Sherman Dorn says the effects are only “moderate.”

For these commentators, apparently, teachers are made of the same ticky-tacky that was used to build those identical “little boxes on the hillside” about which folksinger Malvina Reynolds crooned back in the 1960s. The people in those ticky-tacky houses were all made out of “ticky-tacky,” she warbled, and “they come out all the same.”

The Reynolds melody was as catchy as her words, and every adolescent was soon whistling it. But, fortunately, great teachers have always ignored such nonsense. They passionately care about the lives and education of each individual student—even when they know that the rewards come slowly.

Education is a long, measured process. Good parents start the education of their children the minute they are born, even though the payoff is years away. The payoff is even more distant for teachers, who work with students for only a few hours a day.

Nonetheless, a top-notch teacher, as compared to a typical one, can over the course of a year raise student performance by as much as a third of a year’s worth of learning.

But despite those gains, salaries earned at age 28 are only $182 more, or 1 percent higher, for students who have experienced a year of great teaching. When the payoff is so low, why should we care whether schools keep their good teachers? Why should we bother asking bad teachers to find another job?

The answer is simple: One percent gains seem small, but they add up in the same way those saved Ben Franklin pennies do. Just 1 percent of additional income from one year in a room with a great teacher adds up to $25,000 over the typical wage earner’s lifetime. Extrapolating out to 10 years of excellent instruction, one can hazard the claim that the opportunity to enjoy consistently high-level instruction bolsters lifetime income by a quarter of a million dollars. That just about justifies the handsome tuitions charged by high-quality private schools and the large sums parents pay to buy homes in neighborhoods with outstanding schools.

And a great teacher affects not just one student but all 28 in the typical class the Chetty team studied. Over the space of just 10 years, such a teacher touches the lives of 280 students. On average, a great teacher has an impact that adds up to nothing short of $7 million. When the future is discounted at the standard rate, the annual value of the great teacher, relative to the typical one, drops to around a quarter of a million dollars, the number President Obama used.
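For readers who want to check the arithmetic, the figures above can be reproduced in a few lines of Python. The $25,000-per-student gain, the class of 28, and the ten years of classes come from the article; the 40-year career, the 12-year wait before students enter the workforce, and the 5 percent discount rate are illustrative assumptions, so the discounted result is a ballpark, not the study's exact calculation.

```python
def present_value(annual_gain, years_worked=40, years_until_work=12, rate=0.05):
    """Discount a stream of equal annual earnings gains back to the
    classroom year. Career length, delay, and rate are assumptions."""
    return sum(
        annual_gain / (1 + rate) ** (years_until_work + t)
        for t in range(years_worked)
    )

per_student = 0.01 * 2_500_000    # 1% of ~$2.5M lifetime earnings = $25,000
per_classroom = per_student * 28  # one class of 28 students: $700,000
per_decade = per_classroom * 10   # ten years of classes: $7,000,000

# Discounting spreads the classroom gain over a working career and shrinks
# it well below the undiscounted $700,000; the exact figure depends on the
# assumed career profile and rate.
discounted = present_value(per_classroom / 40)
```

The point of the exercise is that seemingly small annual differences compound: the same 1 percent gain looks trivial at age 28 and enormous summed over classrooms and careers.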

Admittedly, some of these numbers are extrapolations and all are subject to error. But there is no justification for all teachers to be paid an identical salary as long as they have the same meaningless credentials and have spent the same number of years in the classroom. It’s time for school districts to stop treating teachers as if they were ticky-tacky—little boxes, sitting in the classroom, all teaching just the same.

This article appeared in the Summer 2012 issue of Education Next. Suggested citation format:

Peterson, P.E. (2012). Not All Teachers Are Made of Ticky-Tacky, Teaching Just the Same: The true import of the Chetty study. Education Next, 12(3), 5.

Teaching the Teachers
https://www.educationnext.org/teaching-the-teachers/
Thu, 10 May 2012
Achievement Network offers support for data-driven instruction


The test question showed a carton labeled “15 pencils.” “Sharif sharpened 5 pencils,” the question continued. “Which fractions represent the pencils that Sharif sharpened?”

Fourteen of the 4th graders at Washington, D.C.’s Hope Community Charter School had chosen the right answer—1/3 and 5/15—on a test written for the school by Boston-based Achievement Network (ANet). But 20 chose the wrong answer, and two didn’t answer at all.

So on a bright November afternoon three weeks after the test, Hope’s math specialist, Christine Madison, and two of the school’s 4th-grade teachers huddled over five pages of test-score data assembled for them by ANet. Hope’s Tolson campus serves 420 youngsters in grades PreK–8, almost all of them African American and two-thirds of them from low-income families. It is one of three D.C. charters that are operated by Virginia-based Imagine Schools and are working with ANet. The city’s charter board calls Hope “mid-performing”—about 40 percent of its elementary-school children and 60 percent of its middle schoolers are considered proficient in math and English.


The ANet data showed that the children generally understood fractions. But they also showed that many youngsters—including some with otherwise good scores—were unsteady at fractional models, or word problems, which are among the 15 math standards that Washington schools are expected to teach their 4th graders.

The fraction lesson, drawn from the class textbook, apparently didn’t work when the teachers first taught it. So at this half-day data-analysis exercise scripted by ANet and overseen by an ANet coach, Madison and the teachers debated why it failed and plotted how to reteach it. How about using an art project, fraction charts, flip-books, team competitions, they mused. How about reteaching the lesson to youngsters grouped by ability? How about reteaching boys and girls differently?

Think about how you taught the lesson the first time, and then do something different, urged Madison, who grew more exuberant with each new idea. “I think I may not have used enough visual aids,” one teacher finally conceded as Madison beamed.

Learning Curve

John Maycock is the founder of Achievement Network, a nonprofit organization that provides data-analysis training and coaching for school leaders and teachers.

Data-driven instruction began its spread across the country about a decade ago, in the footsteps of the No Child Left Behind requirement that schools administer yearly achievement tests. Those tests didn’t help teachers spot and backfill learning gaps, though. Scores came back after everyone had moved on to the next grade, and anyway, the tests were designed to hold schools accountable for the performance of groups: Did enough English-learners pass, enough African Americans? They were not intended to show which students didn’t understand decimals.

By most accounts, a few charter schools began testing their youngsters more frequently, with the idea that teachers could use those interim results to inform their teaching. “If you pay attention to what students learn and what they don’t, you learn how to teach more effectively,” says Paul Bambrick-Santoyo, whose book Driven by Data is a primer on data-driven instruction.

But on the ground, data-driven instruction has encountered problems. Schools complain that interim assessments produced by publishers aren’t always aligned with curricula, pacing guides, or year-end state tests. The assessments are often too easy, handing schools an unhappy surprise when state test results are posted.

Some districts have taken over the job of producing interim tests, but their data offices have the reputation of taking so long to return results that the information is too old to be of much use. (Ben Fenton of New Leaders for New Schools says he has encountered schools that sidestep their districts by photocopying their kids’ answer sheets and grading the assessments themselves.)

Schools that have tried to develop their own assessments have found the job overwhelming. Jermall Wright, principal of southeast Washington’s Leckie Elementary, told me that his leadership team tried it when they decided that the district’s assessments were inadequate. But writing, scoring, and analyzing the tests took so much time that they quickly abandoned the effort.

In any event, few teacher-education schools include data-analysis training, so many teachers don’t know how to read the data, or don’t have the time to use the information to rethink their lesson plans.

By the mid-2000s, “data was starting to become a hot topic,” says John Maycock, who at the time was completing a master’s degree in the school-leadership program at Harvard’s Graduate School of Education. But “teachers were saying they wanted help” understanding and using it, he adds.

“We started to see that just having access to better data was not enough to drive improvement,” says Joe Siedlecki, a program officer at the Michael and Susan Dell Foundation, which has given $1.7 million to ANet.

Maycock’s solution was to found a nonprofit organization that combines rigorous, standards-aligned assessments; data-analysis training and coaching for school leaders and teachers; guided peer review; and networking across schools. Schools join ANet, pay a fee for its services, and commit their teachers and principals to a four-times-a-year cycle of testing and data review. The model goes beyond traditional professional-development models by linking ANet’s work to each school’s data feedback loop: student achievement results inform the guidance ANet provides.

Coaching the Team


Two days after Hope’s data-analysis meeting, I returned to the charter school to listen as its leadership team reviewed the session with ANet coach Amrutha Nagarajan, a 28-year-old Wellesley- and Harvard-educated former banker. Nagarajan came to Washington as a D.C. Teaching Fellow, resisting pressure from her Indian-immigrant parents to pursue a business career, she says, and now coaches 14 schools for ANet.

Hope had administered its second cycle of interim assessments in math and English-language arts on November 8 and 9 after downloading the tests from ANet’s web site. The untimed tests are given every six to eight weeks and typically take youngsters about an hour, Nagarajan told me. The 4th-grade math test asked 34 questions; the 3rd-grade language-arts test included three readings—a folk tale, a poem, and a nonfiction passage—and 20 questions.

The school’s leadership team had the option to view the year’s assessments well beforehand to be sure the school’s lesson plans and pacing would prepare kids for the district’s year-end tests. Hope doesn’t factor the ANet interim test scores into youngsters’ overall grades, and in their contract with ANet, network schools agree not to use the scores to rate their teachers, a move designed to dampen teacher resistance. School leaders also agree to carve out time for teachers to look at the data together, and to take part in the cycle of meetings and reviews themselves.

After the early-November tests, Hope shipped its completed answer sheets to ANet’s Boston office. Within 48 hours of receiving them, ANet posted the results online, and Hope printed out a set for every teacher. The data tell teachers how their students answered each question, of course, but also how each youngster, the class, and the grade scored on questions aligned to each standard, like dividing whole numbers or identifying details in a reading passage.

The data showed that among Hope’s 5th graders, for example, 88 percent appeared to understand how to find the area and perimeter of rectangles and triangles, but only 26 percent could do the same with circles. Among 8th graders, 65 percent could analyze details and draw conclusions from two reading passages—they did better at nonfiction than fiction—but just 52 percent could identify the author’s main purpose in writing the piece.


ANet’s coaching script next called for Nagarajan and the leadership team to go over the results—in ANet parlance, this is a pre-data meeting—and set priorities for a professional development day, or data meeting, two days later. They agreed that Hope’s 8th-grade language-arts teachers would concentrate on how better to teach “author’s purpose,” a D.C. learning standard. Its 6th-grade teachers would focus on “drawing conclusions,” its 3rd-grade teachers on “analyzing details,” and so on, through each grade and subject.

The idea, Nagarajan told me, is for teachers to “go deep on one or two standards” by dissecting four or five test questions each at the data meeting. The goal, she added, is for that kind of item analysis to become part of each teacher’s routine as she becomes more comfortable with data.

Nagarajan—whose teaching experience includes a year in Chennai, India, after the 2004 Indian Ocean tsunami—remained in the background on data meeting day as Hope’s teachers worked on their reteaching plans. But she and ANet provided a clear structure to keep the school’s improvement plans on track.

During the data meeting, teachers pored over a form called an “item analysis template”—downloaded from the ANet web site—that forced them to think through the test questions that had given their kids the most grief. “What were the misconceptions” that led so many students to choose the wrong answer, the form asked them to consider. What groups of students missed the answer? What did students need to know to get it right?

Next, they worked through a “reteach action plan,” also downloaded from ANet. How was the lesson taught originally, the form asked. How and when would it be retaught, and to whom—the whole class, a small group, individual children?

Nagarajan, meanwhile, pressed Hope’s leadership team to meet deadlines and create what she called “follow-up structures.” When Dr. Chloé Marshall, Hope’s high-energy principal, said her teachers would file their reteaching plans that Friday, Nagarajan asked, “By the end of Friday or the beginning of Friday?” When would they do the reteaching, the next step on the ANet agenda, she asked. Those “reteaches” are supposed to be slipped into a compatible lesson so they don’t derail a teacher’s lesson plans and pacing, and target just those kids who need them.

Nagarajan continued: When would Hope retest—a quick two- or three-question quiz in each class—to make sure the new lesson was effective? When would teachers hold their “reflection meeting,” the last step in the assessment cycle, to look at the new results? “Does that make sense? What do you think?” she pressed the leadership team.

At the post-data-day debrief—more ANet parlance—Nagarajan and the school’s leadership team conceded that the English teachers were still learning how to use the ANet data to break down the broad standards into smaller skills, and to figure out which skills their students were lacking. But they also saw progress: teachers were talking more, sharing strategies, and acknowledging the need to teach differently.

“Some teachers were still challenging the test” by laying the blame on bad questions, Nagarajan said. But many more were “owning the data,” insisted Marshall, making the shift from the-kids-aren’t-learning-it to I’m-not-teaching-it. And with that, the discussion moved on to new teaching strategies, new delivery strategies, resources for new lesson plans, and the team’s goals for Hope’s students.

“The object isn’t to teach kids a process” that leads them to the right answer on a test, “but to visualize a problem and solve it,” Madison said to general agreement. “That’s what will help them in real life.”

Meeting a Need


John Maycock, who is now 37 and calls himself ANet’s “chief growth officer,” had managed afterschool centers in San Francisco, where he says he became “hooked forever” on education. But his real interest was “to be part of something entrepreneurial. I wanted to start something that was an expressed need from the schools,” he adds.

In 2004, Maycock and his mentor, Marci Cornell-Feist, assembled leaders from 10 Boston charter schools around the idea for Achievement Network. Cornell-Feist is the founder of the High Bar, which helps charter boards with management and governance issues.

The Boston charters had begun using interim assessments to prepare their kids for the year-end Massachusetts Comprehensive Assessment System, or MCAS. But the interim tests from outside vendors weren’t as rigorous as, or even aligned with, the MCAS. “They weren’t setting up the school leaders and teachers for success,” Maycock says.

The charters told him they needed better assessments, better data, and help understanding how to use the information, he says. They wanted a common assessment so they could compare results among themselves and use the data to identify best practices. And they wanted assessments that would serve as an instructional tool and not another gotcha mechanism to punish teachers.

Maycock raised $200,000 in seed money from a Massachusetts foundation, but also asked the schools each to pitch in $5,000 “to make it count,” he says. Schools now pay on a sliding scale: those like Hope that are in their first year and need intensive coaching pay $30,000. That declines to $14,000 a year once schools have been in the network for a few years and need less coaching.

Seven charter middle schools signed up with ANet in the 2005–06 school year, its first. Massachusetts had released the MCAS questions for the first time, and Maycock separated them by standard and skill, dissected them for rigor, and wrote his own interim assessments that mirrored the state exam.

James Peyser, a partner in NewSchools Venture Fund, which has invested $1.4 million in ANet and holds a seat on its board, says ANet’s assessments are remarkable for their rigor, adding that they are aimed at readying kids for college, not just for the state tests.

Three Boston district schools joined in ANet’s second year after catching wind of it. Maycock formed a second network of charter schools in Washington in 2008, and nine D.C. district schools joined the next year with help from the Dell grant. There are now 74 schools in the D.C. network.

New Orleans, Newark, Chicago, New York City, and Nashville-Memphis have since launched networks. There’s a network of three virtual schools, and a Baltimore network is planned for 2012. ANet says that 250 schools with some 70,000 kids were members of its networks in the 2011–12 school year. The organization has revenues of $9 million this school year, including $6 million in school fees.

Testing has expanded from the initial grades 6 and 7 to cover grades 3 through 8; ANet is piloting interim assessments for 2nd graders and a set of science tests. High school interims are more complicated because of wider course offerings, but they are “on our radar to consider—very much so,” Maycock says.

In 2010, ANet won a competitive $5 million Investing in Innovation (i3) grant from the U.S. Department of Education, which it is using, in part, to fund a large randomized study of its impact.

In its own analysis, ANet says the number of its youngsters who scored proficient or above on state tests last year increased by 7 percentage points in English and 4 percentage points in math in Chicago, and by 5 points in English and 3 points in math in New Orleans. Of the six cities for which it reported scores last year, ANet said four made twice the gains in English as the rest of their respective states, and three made double the state gains in math.

In D.C., about 6,600 youngsters in ANet’s charter and district schools took year-end tests in 2011. ANet says those scoring proficient in English increased by 4.5 percent and in math by 9 percent from the year earlier. That translates into 319 more kids passing the language exam and 662 more passing math, numbers Maycock calls “huge.” In just the D.C. district ANet schools, the increases were smaller—4 percent in English and 6.6 percent in math—but still better than the improvement of less than 2 percent posted by district schools that didn’t partner with ANet.

Network Strength


The schools in ANet’s original network were a lot alike: urban with high-need populations. Maycock has recently convinced stronger schools to join each network; in D.C., Janney and Horace Mann Elementary Schools, which are among the district’s highest-performing, white-majority schools, joined a network that is generally minority and struggling. The idea is to get charters and district schools, and stronger and weaker schools—schools that don’t generally cross paths—to share ideas and goad each other to improve.

Network schools have access to each other’s grade-level data, they share ANet coaches, and they’re invited to regular “learning walks,” where one network school models a practice for other network members.

A few days after the data-day review, I visited Powell Elementary, a district school in northeast D.C., for a learning walk on peer-group feedback, or how to get teachers to help one another figure out how to reteach a troublesome lesson. Teachers, data and instructional coaches, and a principal from eight widely different schools attended.

The practice Powell was showing off involved having its teachers present their reteaching plans—developed on data day—to a handful of teachers from other grades and specialties. These “critical friends” ask “clarifying questions” about the plan, and then talk it over among themselves. The presenting teachers can take or leave the suggestions without having to defend their lesson plans.

As I listened, a Powell math teacher modeled the process while the visitors leaned in close and tossed out their own ideas. Consider a math competition, said the dean of an all-boys, entirely African American charter school that seemed to have little in common with Powell: “Kids respond well to that.” Identify the 10 words most commonly used in word problems, said a math specialist from a district school that seemed to mirror Powell’s English-learner enrollment.

“I hadn’t thought about using manipulatives” in the lesson, conceded the Powell teacher as the ideas rolled in—and his kids would benefit from a hands-on lesson that burned up some of their energy, he added. After two hours, with the learning walk long ended, a dozen teachers from around the network were still huddled together, still talking lesson plans.

Powell keeps an ANet data wall in its front lobby and records how many youngsters in each class score proficient or advanced in math and in language arts for each ANet assessment cycle. Powell’s parents attend a data meeting when the results come out each cycle, and “all but three or four” regularly attend, principal Janeece Docal told me.

Powell’s highly public use of the data contrasts with that of Hyde-Addison Elementary, a third-year ANet school in D.C.’s swank Georgetown neighborhood, which uses the ANet data only internally. “We see what you know and what you don’t know. We see what we’ve taught you,” principal Dana Nerenberg told me.

Powell links the data discussion to the kids’ future, Docal explained: good ANet scores translate into good scores on the year-end test, which will land the youngsters in the high school and then the college and then the job of their choice. “Education equals freedom,” she said a dozen times over the afternoon.

How schools use the data “depends on the school’s culture,” says Justin Jones, a former Teach For America corps member and recruiter who heads the D.C. network.

Peyser, at NewSchools Venture Fund, says the goal is to help “change and strengthen school culture toward data” until “it becomes the way they do business.”

 

June Kronholz is an Education Next contributing editor.

This article appeared in the Summer 2012 issue of Education Next. Suggested citation format:

Kronholz, J. (2012). Teaching the Teachers: Achievement Network offers support for data-driven instruction. Education Next, 12(3), 8-15.

Great Teaching
https://www.educationnext.org/great-teaching/
Wed, 25 Apr 2012
Measuring its effects on students' future earnings

Birdette Hughey is the 2011 Mississippi Teacher of the Year.

In February 2012, the New York Times took the unusual step of publishing performance ratings for nearly 18,000 New York City teachers based on their students’ test-score gains, commonly called value-added (VA) measures. This action, which followed a similar release of ratings in Los Angeles last year, drew new attention to the growing use of VA analysis as a tool for teacher evaluation. After decades of relying on often-perfunctory classroom observations to assess teacher performance, districts from Washington, D.C., to Los Angeles now evaluate many of their teachers based in part on VA measures and, in some cases, use these measures as a basis for differences in compensation.

Newspapers that publish value-added measures no doubt relish the attention they generate, but the bigger question in our view is whether VA should play any role in the evaluation of teachers. Advocates argue that the use of VA measures in decisions regarding teacher selection, retraining, and dismissal will boost student achievement, while critics contend that the measures are a poor indicator of teacher quality and should play little if any role in high-stakes decisions. The Obama administration has thrown its weight squarely behind the advocates, launching a series of programs that encourage states to develop evaluation systems based substantially on VA measures.

The debate over the merits of using value added to evaluate teachers stems primarily from two questions. First, do VA measures work? In other words, do they accurately capture the effects teachers have on their students’ test scores? One concern is that VA measures will incorrectly reward or penalize teachers for the mix of students they get if students are assigned to teachers based on characteristics that VA analysis typically ignores.

Second, do VA measures matter in the long run? For example, do teachers who raise test scores also improve their students’ outcomes in adulthood or are they simply better at teaching to the test? Recent research has shown that high-quality early-childhood education has large impacts on outcomes such as college completion and adult earnings, but no study has identified the long-term impacts of teacher quality as measured by value added.

We address these two questions by analyzing school-district data from grades 3–8 for 2.5 million children, linked to information on their outcomes as young adults and the characteristics of their parents. We find that teacher VA measures both work and matter. First, we find that VA measures accurately predict teachers’ impacts on test scores once we control for the student characteristics that are typically accounted for when creating VA measures. Second, we find that students assigned to high-VA teachers are more likely to attend college, attend higher-quality colleges, earn more, live in higher socioeconomic status (SES) neighborhoods, and save more for retirement. They are also less likely to have children during their teenage years.

Teachers in all grades from 4 to 8 have large impacts on their students’ adult lives. On average, a 1-standard-deviation improvement in teacher value added (equivalent to having a teacher in the 84th percentile rather than one at the median) in a single grade raises a student’s earnings at age 28 by about 1 percent. Replacing a teacher whose value added is in the bottom 5 percent with an average teacher would increase students’ total lifetime incomes by more than $1.4 million for a typical classroom (equivalent to $250,000 in present value). In short, good teachers create substantial economic value, and VA measures are useful in identifying them.

Our findings address the three main critiques of VA measures raised in a recent Phi Delta Kappan article by Stanford education professor Linda Darling-Hammond and her colleagues. We show directly using quasi-experimental tests that standard VA measures are not biased by the students assigned to each teacher. Hence, value-added metrics successfully disentangle teachers’ impacts from the many other influences on student progress. We also show that although VA measures fluctuate across years, they are sufficiently stable that selecting teachers even based on a few years of data would have substantial impacts on student outcomes such as earnings.

 

Data

We draw information from two sources: school-district records on students and teachers, and information on the same students and their parents from administrative data sources such as tax records. The school-district data contain student enrollment history, test scores, and teacher assignments from the administrative records of a large urban school district. These data span the school years 1988–89 through 2008–09 and cover roughly 2.5 million children in grades 3 through 8.

The school-district data include approximately 18 million test scores. Test scores are available for English language arts and math for students in grades 3–8 from the spring of 1989 to 2009. In the early part of the sample period, these tests were specific to the district, but by 2005–06 all tests were statewide, as required under the No Child Left Behind law. In order to calculate results that combine scores from different tests, we standardize test scores by subject, year, and grade. The district data also contain other information on students, such as race or ethnicity, gender, and eligibility for free or reduced-price lunch (a standard measure of poverty).
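The standardization described above is an ordinary z-score computed separately within each subject-year-grade cell. A minimal sketch, using made-up raw scores, might look like this:

```python
from statistics import mean, pstdev

def standardize(scores):
    """Rescale raw scores to mean 0 and standard deviation 1."""
    m, s = mean(scores), pstdev(scores)
    return [(x - m) / s for x in scores]

# Hypothetical raw scores grouped by (subject, year, grade); each cell is
# standardized separately so results from different tests can be pooled.
cells = {
    ("math", 2006, 4): [610, 650, 590, 670],
    ("ela", 2006, 4): [72, 85, 64, 90],
}
z_scores = {cell: standardize(raw) for cell, raw in cells.items()}
```

After this step, a score of +1 means one standard deviation above the other test-takers in the same subject, year, and grade, regardless of the test's original scale.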

Our data on students’ adult outcomes include earnings, college attendance, college quality (measured by the earnings of previous graduates of the same college), neighborhood quality (measured by the percentage of college graduates in their zip code), teenage birth rates for females (measured by claiming a dependent born when the woman was still a teenager), and retirement savings (measured by contributions to 401[k] plans). Parent characteristics include household income, marital status, home ownership, 401(k) savings, and mother’s age at child’s birth.

Do Value-Added Measures Work?

Value-added analysis aims to isolate the causal effects teachers have on student achievement by comparing how well their students perform on end-of-year tests relative to similar students taught by other teachers. These comparisons take into account students’ test scores in the prior year as well as their race or ethnicity, gender, age, suspensions and absences in the previous year, whether they repeated a grade, special education status, and limited English status. We also control for teacher experience as well as for class and school characteristics, including class size and the academic performance and demographic characteristics of all students in the relevant classroom and school.
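A stylized version of that calculation, as a minimal sketch: residualize end-of-year scores on prior-year scores, then average each teacher's residuals. This is an illustration with invented data and a single control, not the authors' full model, which includes the many additional controls listed above plus a shrinkage step.

```python
# Hypothetical (prior_score, end_score, teacher) records, in SD units
data = [
    (-1.0, -0.8, "A"), (0.0, 0.3, "A"), (1.0, 1.2, "A"),
    (-1.0, -1.2, "B"), (0.0, -0.1, "B"), (1.0, 0.8, "B"),
]
xs = [d[0] for d in data]
ys = [d[1] for d in data]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)

# Ordinary least squares for: end_score = a + b * prior_score
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = ybar - b * xbar

# A teacher's value added: the average residual of her students
residuals = {}
for prior, end, teacher in data:
    residuals.setdefault(teacher, []).append(end - (a + b * prior))
va = {t: sum(r) / len(r) for t, r in residuals.items()}
```

With these made-up records, teacher A's students systematically beat their predicted scores and teacher B's fall short, so A receives a positive VA estimate and B a negative one.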

Many other researchers use methods for measuring teacher value added that are similar to ours, so it is not surprising that we obtain similar results. For example, we find that a 1-standard-deviation increase in teacher value added corresponds to increases in student math and English scores of 12 and 8 percent of a standard deviation, respectively. In both subjects, this difference is equivalent to approximately three months of additional instruction.

Can we take this as evidence of teachers’ causal impact on student test scores? Recent studies by economists Thomas Kane, Doug Staiger, and Jesse Rothstein, among others, have reached divergent conclusions about whether VA measures should be interpreted in this way. In particular, critics contend that VA measures are likely to be biased as a result of the way that students are assigned to teachers. For example, some teachers might be consistently assigned students with higher-income parents (which typically cannot be accounted for by school districts when generating VA measures because they do not collect precise data on family income). We implement two new tests to determine whether VA estimates are biased.

Our first test examines whether in fact high-VA teachers tend to be assigned students from more-advantaged families. We calculate an overall measure of parents’ socioeconomic status, combining the parental characteristics listed above. Not surprisingly, parent socioeconomic status is strongly predictive of student test scores, and, looking at simple correlations, we find that less-advantaged students do tend to be assigned to teachers with lower VA measures. However, controlling for the limited set of student characteristics available in school-district databases, such as test scores in the previous grade, is sufficient to account for the assignment of students to teachers based on parent characteristics. That is, if we take two students who have the same 4th-grade test scores, demographics, classroom characteristics, and so forth, the student assigned to a teacher with higher VA in grade 5 does not systematically have different parental income or other characteristics.

This first test shows that any bias in VA estimates due to the omission of parent characteristics that we are able to observe is minimal. The possibility remains, however, that students are assigned to teachers based on unmeasured characteristics unrelated to parent socioeconomic status. For example, principals may consistently assign their most-disruptive students to teachers whom they believe are up to the challenge. Alternatively, principals might assign these same students to their least-effective teachers, whom they are not worried about losing. Our second test seeks to determine the amount of bias introduced by this kind of sorting.

To do so, we exploit the fact that adjacent grades of students within the same school are frequently assigned to teachers with very different levels of value added because of idiosyncrasies in teacher assignments and turnover. During our analysis period, roughly 15 percent of teachers in our data switched to a different grade within the same school from one year to the next, 6 percent of teachers moved to a different school within the same district, and another 6 percent left the district entirely. These year-to-year changes in the teaching staff at a given school generate differences in value added that are unlikely to be related to student characteristics.

To illustrate, suppose a high-VA 4th-grade teacher enters a school at the beginning of a school year. If VA estimates capture teachers’ true impact on their students, students entering grade 4 in that school should have higher year-end test scores than those of the previous cohort. And the size of the change in test scores across these consecutive cohorts should correspond to the change in the average value added across all teachers in the grade. For example, in a school with three equal-sized 4th-grade classrooms, the replacement of a teacher with a VA estimate of 0.05 standard deviations with one with a VA estimate of 0.35 standard deviations should increase average test scores among 4th-grade students by 0.1 standard deviations.
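The arithmetic of that example can be checked directly: with equal-sized classrooms, the predicted change in grade-wide average scores is simply the change in mean value added across the classrooms, here (0.35 − 0.05) / 3 = 0.1 standard deviations.

```python
def predicted_score_change(va_before, va_after):
    """Predicted change in grade-wide average test scores when the
    teaching staff changes: the change in mean value added across
    classrooms (assumes equal-sized classes)."""
    n = len(va_before)
    return sum(va_after) / n - sum(va_before) / n

# Three equal-sized 4th-grade classrooms; the teacher with VA 0.05
# is replaced by one with VA 0.35 (other VAs are hypothetical)
before = [0.05, 0.20, 0.20]
after = [0.35, 0.20, 0.20]
```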

In fact, that is exactly what we find, as shown in Figure 1. To construct this figure, we first define the top 5 percent of teachers as “high VA” and the bottom 5 percent as “low VA.” Figure 1 displays average test scores for cohorts of students in the years before and after a high-VA teacher arrives. We see that end-of-year test scores in the subject and grade taught by that teacher rise immediately by about 4 percent of a standard deviation. This impact on average test scores is commensurate in magnitude with what we would have predicted given the increase in average teacher value added for the students in that grade.

We obtain parallel findings when we examine the departure of high-VA teachers and the entry and exit of low-VA teachers. When a high-VA teacher leaves a given subject-grade-school combination, test scores of subsequent students in that subject, grade, and school fall. Likewise, students benefit from the departure of a low-VA teacher and are harmed by the arrival of a low-VA teacher.

Together, these results provide direct evidence that removing low-VA teachers (bottom 5 percent) and retaining high-VA teachers (top 5 percent) improves the academic achievement of students. But what about the remaining 90 percent of teachers? When we perform a similar analysis for all teachers, we again find that changes in the quality of the teaching staff strongly predict changes in test scores across consecutive cohorts of students in the same school, grade, and subject. Moreover, in middle schools, where students usually learn math and English from different teachers, we confirm that the arrival or departure of math teachers affects math scores but not English scores (and vice versa).

Using these techniques, we can calculate the amount of bias in our VA estimates. We find that the degree of bias is, on average, less than 2 percent. We therefore conclude that standard VA estimates accurately capture the impact that teachers have on their students’ test scores. Although the results could differ in other settings, our method of using natural teacher turnover to evaluate bias in VA estimates can be easily implemented by school districts to evaluate the accuracy of their VA models.

Do Value-Added Measures Matter?

Even though value-added measures accurately gauge teachers’ impacts on test scores, it could still be the case that high-VA teachers simply “teach to the test,” either by narrowing the subject matter in the curriculum or by having students learn test-taking strategies that consistently increase test scores but do not benefit students later in their lives. To address this issue, we measure the relationship between teachers’ value added and their students’ outcomes in adulthood. We compare students who were assigned high-VA vs. low-VA teachers in grades 4–8 and study their outcomes in adulthood.

We find that high-VA teachers raise students’ chances of attending college at age 20 (see Figure 2a). A student assigned to a teacher with a VA 1 standard deviation higher is 0.5 percentage points more likely to attend college at age 20 (an increase of 1.3 percent). Students of higher-VA teachers also attend higher-quality colleges, as measured by the average earnings of previous graduates of those colleges.

A person’s income doesn’t begin to stabilize until their late twenties, so our analysis of earnings focuses on the year when students were 28, the oldest age at which we observe a sufficiently large number of students. We find that having spent a single year in the classroom of a teacher with value added that is 1 standard deviation higher increases earnings at age 28 by $182, or about 1 percent (see Figure 2b). If that 1 percent advantage were to remain stable throughout an individual’s career, it would add up to about $25,000 in total earnings.
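The $25,000 figure is a back-of-the-envelope extrapolation: 1 percent of undiscounted career earnings of roughly $2.5 million. A sketch, with an invented flat earnings path chosen only to land in that ballpark:

```python
def lifetime_gain(annual_earnings, pct_gain=0.01):
    """Undiscounted career-total gain from a constant percentage
    boost to each year's earnings."""
    return pct_gain * sum(annual_earnings)

# Hypothetical flat career: $52,000/year for 48 years (ages 20-67),
# i.e., about $2.5 million in total earnings
career = [52_000] * 48
gain = lifetime_gain(career)  # roughly $25,000
```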

In addition to improved earnings, we also find that improvements in teacher value added significantly reduce the likelihood that female students will have a child during their teenage years, increase the socioeconomic status of the neighborhoods in which students live in adulthood, and raise 401(k) retirement savings rates. Moreover, it is likely that improved education would yield benefits that we are not able to measure but have been shown by other studies, such as reduced crime and improved citizenship.

To sum up, our evidence confirms that the students of high-VA teachers benefit not just by scoring higher on math and reading tests at the end of the school year, but also through improved outcomes later in life. The size of these effects may seem small, but recall that they reflect the impact of a higher-VA teacher for a single year and could compound over time to the extent that students are exposed to multiple high-VA teachers. As important, a single high-VA teacher has this effect not only on a single student but rather on an entire classroom—and often on many classrooms of students over the course of a career.

 

Policy Implications

In a recent article (see “Valuing Teachers,” features, Summer 2011), Eric Hanushek argues in favor of dismissing the bottom 5 percent of teachers based on their VA scores. While such a policy would have many costs and benefits that are beyond the scope of our study, we can illustrate the magnitudes implied by our analysis by calculating its impacts on students’ earnings. Our estimates imply that replacing a teacher whose value added is in the bottom 5 percent with an average teacher would increase students’ cumulative lifetime income by a total of $1.4 million per classroom taught. This gain is equivalent to $267,000 in present value at age 12, discounting at a 5 percent interest rate. However, it is important to realize there is uncertainty in VA measures, which are estimates that may be based on only a few classrooms of students, so the gains from removing teachers identified as ineffective based on a limited number of years of data are smaller. We estimate the gains from “deselecting” the bottom 5 percent of teachers to be approximately $135,000 in present value based on one year of data and $190,000 based on three years of data. These benefits, while still large, would have to be weighed against any costs associated with the policy, such as teachers demanding higher pay to compensate them for the risk of dismissal.
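The discounting mechanics behind these present-value figures can be reproduced in a few lines. The flat earnings stream below is an assumption for illustration; the study uses actual age-earnings profiles, under which earnings arrive later in life and are discounted more heavily, which is why this stylized version comes out somewhat above the paper's $267,000.

```python
def present_value(gains_by_age, base_age=12, rate=0.05):
    """Discount a stream of {age: dollar gain} back to base_age
    at the given annual interest rate."""
    return sum(g / (1 + rate) ** (age - base_age)
               for age, g in gains_by_age.items())

# Hypothetical: the $1.4 million classroom gain spread evenly
# over a working life from age 20 through 67
total_gain = 1_400_000
ages = range(20, 68)
stream = {age: total_gain / len(ages) for age in ages}
pv = present_value(stream)  # well below the undiscounted $1.4M
```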

We also measure the expected gains from policies that pay higher salaries or bonuses to high-VA teachers in order to increase retention rates. The gains from such policies appear to be only somewhat larger than their costs. Although the benefit from retaining a teacher whose value added is at the 95th percentile after three years is nearly $200,000 per year, most bonus payments end up going to high-VA teachers who would have stayed even without the additional payment. Replacing low-VA teachers is therefore likely to be a more cost-effective strategy to increase teacher quality in the short run than paying to retain high-VA teachers. In the long run, higher salaries could attract more high-VA teachers to the teaching profession, a potentially important benefit that we do not measure here.

While these calculations illustrate the magnitudes of teachers’ impacts on students, they do not by themselves offer a blueprint for the design of optimal teacher evaluations, salaries, or merit-pay policies. Teachers were not evaluated based on test scores in the school district and time period we study. VA measures may not be as useful for identifying teachers with positive long-term impacts on their students if teachers respond to their use in evaluation systems by engaging in practices such as teaching to the test or even outright cheating. In addition, our analysis does not compare value added with other measures of teacher quality, like evaluations based on classroom observation, which might be even better predictors of teachers’ long-term impacts than VA scores.

In summary, our research demonstrates that good teachers are of great value to their students, and that VA measures are a potentially valuable tool for measuring teacher performance. The most important lesson we draw is that finding policies to raise the quality of teaching is likely to yield substantial economic and social benefits.

Raj Chetty is professor of economics at Harvard University. John N. Friedman is assistant professor of public policy at Harvard Kennedy School. Jonah E. Rockoff is associate professor of business at Columbia University’s Graduate School of Business. For further information on the study, see http://obs.rc.fas.harvard.edu/chetty/value_added.html.

Commentary

In light of the widespread attention given to the Chetty, Friedman, and Rockoff research, Education Next asked four experts to comment on the study’s implications for teacher policy.

Implications for Policy Are Not So Clear – By Douglas Harris
Profound Implications for State Policy – By Chris Cerf and Peter Shulman
More Evidence Would Be Welcome – By Dale Ballou
Low-Performing Teachers Have High Costs – By Eric A. Hanushek

This article appeared in the Summer 2012 issue of Education Next. Suggested citation format:

Chetty, R., Friedman, J.N., and Rockoff, J.E. (2012). Great Teaching: Measuring its effects on students’ future earnings. Education Next, 12(3), 58-64.

Fight Club
https://www.educationnext.org/fight-club/
Wed, 25 Apr 2012
Are advocacy organizations changing the politics of education?


An unabridged version of this article is available here.


Every few weeks, a group of education reform advocacy organizations (ERAOs) gathers in Washington, D.C., to compare notes and plot strategy in what is (half in jest) referred to as “fight club.” Like the subject of the 1999 David Fincher movie, this fight club sees itself as the underdog in an epic struggle for freedom and equality. While the target of the film’s ire is consumerism, these national ERAOs and their counterparts at the state level are focused on enacting sweeping education policy changes to increase accountability for student achievement, improve teacher quality, turn around failing schools, and expand school choice. As Terry Moe documents in his recent book, Special Interest, for decades the politics of school reform have been dominated by the education establishment, the collection of teachers unions and other school employee associations derisively called the “blob” by reformers. But the past two years have witnessed an unprecedented wave of state education reforms, much of it fiercely opposed by the unions. The ERAOs played an active role in pushing for these changes, and it is clear that they are reshaping the politics of school reform in the United States in important ways. But does the reform blob really stand a chance of defeating the education blob?

What Are the ERAOs?

Interviews with ERAO leaders reveal that the challenges of implementing No Child Left Behind (NCLB)—in particular, states’ efforts to game its accountability, choice, and school restructuring mandates—spawned the creation of policy advocacy organizations that could push for reform in state capitols. As Joe Williams, executive director of Democrats for Education Reform (DFER) explained, “There was recognition over time that good ideas alone weren’t enough and weren’t going to get us across the finish line in terms of systemic reform. There needed to be a significant investment of time and resources in advocating for political changes that would enable and protect reform.” The largest of the ERAOs (in terms of staff, budget, and reach) are Stand for Children, StudentsFirst, the 50-State Campaign for Achievement Now (50CAN), DFER, and the Foundation for Excellence in Education (FEE), but this remains a relatively decentralized and fragmented movement. Different groups embrace somewhat different policy agendas and tactics, from grassroots mobilization to lobbying policymakers and operating political action committees.

Another way that ERAOs differ is in their scope and where they operate. Groups such as Advance Illinois and the Tennessee State Collaborative on Reforming Education are independent operators that focus explicitly on a single state or city. Stand for Children, 50CAN, DFER, and FEE are national organizations that work in multiple states. Stand for Children currently has affiliates in 9 states, 50CAN operates in 4 states (originating from its flagship ConnCAN, which operates in Connecticut alone), and DFER has 11 state chapters (see sidebar). How do the ERAOs decide what states to operate in? Marc Porter Magee, president and founder of 50CAN, talks about a “vetting process” that centers on figuring out what the “advocacy value-add score” would be in a potential state. Collectively, the ERAO leaders I spoke with identified three critical factors: 1) Is there a void to fill (no existing organization already doing the work)? 2) Is there sufficient local support for reform, and are local champions in place to lead the effort? 3) Is state philanthropic support available to fund the effort and sustain it over time?

While the groups vary considerably in tactics and geographic base, several common elements are apparent. The first is a connection to school choice, and, in particular, to the charter school movement. Many of the ERAOs emerged from the frustration of charter school operators—and their supporters in the business and civil rights communities—at the restrictions placed on charter operations and growth. In addition, ERAOs generally embrace test-based accountability, reforms aimed at improving teacher quality, and aggressive interventions in chronically underperforming schools. One of the most important developments in recent years, in fact, has been the coming together of two previously separate strands of the education reform movement: “system refiners,” who embrace accountability, and “system disrupters,” who advocate choice. Many reform groups are funded by the same foundations, particularly the “big three”—Walton, Gates, and Broad. The support of conservative foundations and the embrace of market-based school reforms have led some observers—and many critics in the education establishment—to label the ERAOs “corporate school reformers.” StudentsFirst CEO Michelle Rhee called this description “bizarre” and noted that she, like many others in these organizations, is a lifelong Democrat with a deep concern for social justice. Suzanne Tacheny Kubach, executive director of the Policy Innovators in Education Network (PIE Network), emphasizes that a focus on partisan orientation or funding sources obscures that “almost all the advocacy groups working in the country were either founded by or are advised by civic boards made up of state leaders concerned about the direction of their public schools.”

The ERAO Playbook

A critical first page in the playbook for reform groups is to increase the amount of information available about school system performance. Virtually all of them support reforms to improve the quality and transparency of state standards and assessments and the creation of state report cards that enable policymakers and parents to view school-level data on student achievement. The increased availability of this information—one of the most important legacies of NCLB—in turn helps the groups to highlight the need for school reform in state capitols and build support among parents and community groups. ERAOs use these data to create a sense of urgency and to craft detailed evidence-based policy recommendations. 50CAN, for example, releases a detailed “State of Public Education” report prior to launching a new state branch. The groups also build momentum for change—and help policymakers make tough political choices—by documenting community support for reform through public opinion polls. In Indiana, for example, Stand for Children hired an independent firm to survey teachers about proposed reforms and was able to report that many reforms had strong teacher support despite the opposition of their union.

There is both a public and private dimension to ERAO work. Behind the scenes the groups work to cultivate relationships and build credibility with governors and state legislators and their professional staff as well as with state education-agency folks. They hold regular briefings for these insiders—often bringing in nationally recognized experts—to make the case for reform and report on how other states have tackled similar challenges. They also wage a very public campaign for the hearts and minds of average citizens by organizing town hall meetings with parents and publishing op-eds in state and local media. They publicize the report cards developed by national research organizations—such as the National Council on Teacher Quality’s “State Teacher Policy Yearbook” and the Thomas B. Fordham Institute’s “State of State Standards,” which enable comparison of one state’s policies with those in the rest of the country. ERAOs organize phone banks, rallies in state capitols, and online petitions to build momentum behind reform.

While newer reform advocacy organizations often partner with older groups like the Education Trust, they differ in approach and tactics. Older groups have tended to confine their efforts to research and lobbying, while the newer groups are more explicitly political, creating public pressure for reform to make it easier for policymakers to embrace difficult changes and then rewarding those who advance their agenda. Robin Steans, executive director of Advance Illinois, observed that “in the past the SEA [state education agency] was often alone in pushing reform in the state but now we are able to help lead the charge, to bring media attention and change the stakes and get folks to the table.” Central to this effort, as Bruno Manno has noted, is the quest to mobilize parents (see “Not Your Mother’s PTA,” features, Winter 2012). The perception that older parent groups such as the Parent Teacher Association are closely aligned with teachers unions and wedded to the status quo has led to the formation of new reform-oriented parent groups (such as Parent Revolution) and parent advocacy campaigns by groups like Stand for Children. The ERAOs take advantage of data microtargeting capabilities to identify potential supporters and use social media like Twitter and Facebook to regularly inform and mobilize them for advocacy.

A Coordinated Movement?

It is tempting to see the patchwork of state and national school reform organizations as a fully integrated and coordinated movement. Yet, as a January 2012 study from the PIE Network concluded, “The most common thread across these states that enacted reforms was actually a lack of tight coordination among the varied members of these coalitions.” While many ERAOs share goals and move on parallel paths, and coordinate where it makes sense, no one group dominates or is in charge. One reason is the significant variation in political context. The unique policy landscape of each state necessitates that reform coalitions and agendas be built state by state. In Colorado, for example, the coalition that successfully pushed for the “Great Teachers and Leaders Act” comprised 22 different stakeholder groups and 40 different community and business leaders. While many members of state reform coalitions are education-specific groups, others focus on civil rights or business issues. Coalition size and diversity ensure considerable variation in the groups’ education agendas, and often even greater variation in their noneducation agendas. Civil rights and business groups, for example, often find themselves on the same side of school choice debates but on opposite sides of collective bargaining and taxing-and-spending issues. As a result, a standing coalition of ERAOs is difficult to build or sustain across different policy proposals.

Many of the groups talk to one another frequently, through a regular conference call organized by the Education Trust, at meetings organized by funders such as the Walton Family Foundation, and at conferences convened by groups such as the NewSchools Venture Fund. To the degree that there is an organizational home for ERAOs, it seems to be the PIE Network, which held its first meeting in 2007. The PIE Network emerged, according to executive director Kubach, because of “the growing realization that the arena of state policymaking matters a lot for school reform and you can’t just do everything at the federal level. We needed to connect the conversation in Washington with a coalition of different kinds of groups at the state level—business leaders, civic leaders, and grassroots constituents.” The 34 organizations in the network operate in 23 states and Washington, D.C. Network members include affiliates of Stand for Children and 50CAN, business groups like the Massachusetts Business Alliance for Education, the Oklahoma Business and Education Coalition, and Colorado Succeeds, and civic groups like Advance Illinois and the League of Education Voters (Washington). The PIE Network is also supported by five “policy partners,” which span the ideological spectrum but agree on the network’s reform commitments: Center for American Progress, Center on Reinventing Public Education, Education Sector, National Council on Teacher Quality, and Thomas B. Fordham Institute. Like many ERAOs, PIE Network is funded by the big three (Walton, Gates, and Broad) along with the Joyce and Stuart foundations.

The PIE Network facilitates regular communication among its members: it distributes a bimonthly newsletter, hosts a monthly conference call for leaders of its member groups, and convenes two face-to-face meetings each year—one with about 40 participants for group leaders and another larger, invitation-only meeting designed to bring the advocacy group leaders together with policy experts and policymakers. The organization also uses Twitter to act as an information clearinghouse by retweeting/aggregating all of the posts from its member organizations. Kubach argues that it is extremely difficult for individual state reform organizations to do this work by themselves and that the PIE Network has worked to encourage cross-state collaboration and the “cross-pollination” of reform ideas, and enable the “acceleration of the school reform movement.” One tangible example is that PIE Network members share legislative language for school reform bills (such as to improve teacher evaluation and tenure) that are being pushed in state legislatures, obviating the need for groups to undertake this time-consuming and technical work on their own. Nonetheless, despite the increasing communication among ERAOs, it appears to be too early to speak of them as constituting a coordinated movement, and given some of the challenges and divisions identified below, they may never become one. Indeed, Kubach explained that, at least for the PIE Network, centralized coordination has never been the goal: “There’s a pretty clear understanding across the sector that states are where most of reform policy is made and that local actors concerned about their schools are the most credible voices to lead that change. Our goal is to strengthen those local voices—not to overshadow them with a single-minded, nationally orchestrated campaign.”

ERAO Victories

The ERAO leaders I spoke with praised the Obama administration’s Race to the Top (RttT) competitive grant program for creating momentum behind reform at the state level and providing political cover for reformers. Rhee observed that “RttT was a brilliant idea. It really helped us build bipartisan coalitions. Right now Republicans are being more aggressive on education reform than Democrats at the state level, but being able to say that a Democratic president and education secretary were supportive really helped to convince Democrats to do more courageous things.” As Steven Brill noted in Class Warfare (see “Great Teachers in the Classroom?” book reviews, Spring 2012), school reform advocates seized the momentum created by RttT to mobilize and collaborate in advancing their agenda in state legislatures. PIE Network director Kubach observed that it “created urgency, a moment of real comparability across states and pressure to change.” ERAOs helped to facilitate state-to-state comparisons and develop legislative agendas by assessing existing state policies against the RttT criteria. They then lobbied state policymakers and created grassroots campaigns to mobilize support.

It is difficult to gauge their impact precisely, but it is clear that ERAOs are having a large—and increasing—influence on education debates at the state and national levels and that their efforts have contributed significantly to the passage of important legislation. Indiana governor Mitch Daniels recently remarked that he has seen a “tectonic shift” on education in states and that “more legislators are free from the iron grip of the education establishment.” Hari Sevugan, communications director at StudentsFirst, noted that “what we’ve lacked and what those fighting for the status quo had was an organized effort that decision makers had in the back of their mind as they put together education policy. That equation was highly imbalanced, but is now changing.” StudentsFirst claims to have signed up a million members in its first year and to have helped change 50 different state education policies.

The recent wave of teacher quality reforms offers perhaps the best evidence of ERAO impact, as no area of education reform has been more strongly resisted by the unions. Nearly two-thirds of states have changed their teacher evaluation, tenure, and dismissal policies in the past two years: 23 states now require that standardized test results be factored into teacher evaluations, and 14 allow districts to use these data to dismiss ineffective teachers. While in 2009 no state required student performance to be central to the awarding of tenure, today 8 states do. ERAOs have been hailed for playing a pivotal role in the passage of these new laws, with Stand for Children leading the effort in Colorado and Illinois. Former Illinois board of education chairman Jesse Ruiz said that the group was “an instigator, a catalyst, you might say.” In fewer than 100 days, Stand raised about $3.5 million in the state and used $600,000 of that to make contributions to seven House and two Senate campaigns. This kind of hardball political organizing and lobbying has long been employed by the unions to defeat school reform legislation but increasingly is being utilized by the ERAOs to drive change.

Democratic Divides

Joe Williams, executive director of Democrats for Education Reform

While the ERAOs emphasize bipartisanship so that they can work effectively with policymakers on both sides of the aisle, the groups confront two very different challenges related to partisan politics. First, the Democratic Party is divided over school reform—particularly on school choice, test-based accountability, and teacher quality. One of the most important and unresolved issues is how the groups will navigate their complicated relationship with civil rights organizations and teachers unions. Teachers unions are a crucial part of the Democratic Party’s base and yet have long been resistant to the kinds of reforms the ERAOs are advocating. But the unions themselves are also in flux. Harvard’s Susan Moore Johnson has noted the rise of “reform unionism”: support for reform is increasing inside the unions, particularly in the American Federation of Teachers (AFT) and among younger teachers. This trend has spawned such pro-reform teacher organizations as Teach Plus and Educators 4 Excellence.

Collectively, civil rights groups have assumed an ambiguous and fluid position in the school reform debates, though with major groups at times supportive of elements of the ERAO agenda. As Jesse Rhodes observed in a 2011 article in Perspectives on Politics, a number of civil rights groups have “played a central role in developing and promoting standards, testing, accountability, and limited school choice policies in order to achieve what they view as fundamentally egalitarian purposes.” Yet these groups have historically been closely aligned politically with the teachers unions and continue to find common ground given the large number of minority teachers, particularly in urban areas. This helps to explain why the NAACP sided with the unions against school closures and charter school expansion in New York City and Newark, for example, even as the group supports the ERAOs’ call for closing achievement gaps. There is also a major generational and racial gap between the leaders of groups like the NAACP and ERAO leaders, who are an overwhelmingly young, elite-schooled, and “white” bunch and as such are often viewed skeptically by people of color. Figuring out how to create state-level alliances with civil rights groups and mobilize urban communities—which are disproportionately minority and poor—remains an ongoing challenge.

The Need for a “ReeFER”

The second challenge is preserving over time the fairly broad bipartisan consensus on the ERAO agenda. As DFER’s Williams observed, “There are times where we agree with Republicans, but also plenty of times where we disagree—especially at the federal level and about funding.” While ERAOs generally support an active role for the federal government in promoting school reform and accountability, the rise of the Tea Party has highlighted how many conservatives continue to oppose such activism. And while ERAOs have led the charge to reform teacher evaluation and tenure policies, they have generally opposed more fundamental changes to collective bargaining pushed by Republican governors in places like Wisconsin. Similarly, while many Democrats (as well as many of the ERAOs) support the expansion of charter schools and school choice, there is much greater ambivalence over the school voucher proposals that Republicans are pushing in many states.

The creation of DFER has shifted the politics of education inside of the Democratic Party and provided cover for reform-minded Democrats in Congress and state capitols from the more liberal, union-friendly base. But a Republican counterpart to DFER—which insiders jokingly refer to as ReeFER—has yet to emerge. The Foundation for Excellence in Education (FEE) serves that role to an extent, but it does not currently lobby or make political contributions. FEE was started by former governor Jeb Bush to help spread the accountability reforms he enacted during his time in office and has been very active in the South and West. The organization hosts an influential summit every year for state policymakers and also sponsors Chiefs for Change, current and former state education superintendents who advocate for school reform. FEE has concentrated its work on six states (Florida, Indiana, Oklahoma, New Mexico, Louisiana, and Arizona) but is active in more than 20.

Winning Battles or the War?

Over the past two years, ERAOs have shown that they can mobilize quickly and effectively on behalf of reform. But as FEE’s Patricia Levesque warns, education reform is a long-term endeavor where “success is incremental” and “progress can be torn down quickly if momentum is stopped.” The recent struggles of the winning Race to the Top states have demonstrated that ensuring that policy reforms are implemented effectively on the ground and sustained over time is crucial, though less “sexy” than winning legislative victories. Major policy victories can quickly be undone by a new governor or legislature or undermined during the rule-making process, what Levesque called “death by a thousand cuts.” Battles over implementation occur in different venues (state boards, task forces, and education agencies), are more technical and less visible, and demand different tactics than legislative fights. ERAOs must therefore take on technical-assistance, reporting, and watchdog roles vis-à-vis state education agencies.

To date, ERAOs have focused on states they consider hospitable to their efforts. There are important limitations to this approach, as it leaves many states unserved; 27 states, for example, are not represented on PIE Network’s membership list. Indeed, this strategy may actually ensure that states most in need of reform advocacy (and perhaps with the worst-performing school systems) will be ignored. The hope among ERAOs is that laggard states will feel pressure to follow reform-oriented states, but there is no guarantee that this will happen. It is also important to keep in mind how new the ERAOs are and how small their staffs are, often just a handful of folks. Sevugan at StudentsFirst remarked that despite ambitious goals, the group is essentially a “start-up” and that “we are trying to fly the plane while we build it.” Clearly, to be successful over the long haul, ERAOs will need to better coordinate their efforts within and across states. Rhee is optimistic on this front, noting that “more critical masses of reform-oriented folks are being built up, and I’m seeing more leaders of education reform organizations saying ‘we need to figure out how we can align our efforts in a more effective and efficient way than in the past.’ It’s not going to happen overnight, but I’m very hopeful that it will happen in the next two to three years.”

Though the groups are still young, the “reform blob” is providing a counterweight to the teachers unions in school reform debates at the state level. The ability of the ERAOs to overcome the unions should not be overestimated, however. The unions’ extensive resources—and large staff—enable them to be present everywhere, and it is unclear whether the ERAOs will be able to match their efforts in every venue. Kubach commented that “in California, there are reform groups like EdVoice, California Business for Education Excellence, and the Education Trust West that among them have maybe 25 employees working in rented office suites. The number of employees working for the teachers unions and administrators associations is much, much larger, and they all own multi-story buildings near the capital. [Even with] StudentsFirst there, that doesn’t come close to tipping the scales. The suggestion that the reform movement is the ‘big money game’ in any state capital is simply laughable.”

Still, the unprecedented state school reform activity of recent years—and, in particular, the enactment of a large number of teacher quality and school choice bills—testifies to the role these groups are playing in mobilizing political support behind reforms that even five years ago faced long odds. Several ERAO leaders recalled how few reform organizations existed just a few years ago, and how few local or state politicians were willing to take up the mantle of reform. Today, it is clear that a new club of reform organizations is itching for a fight and that politicians in both parties are increasingly willing to join them in the ring.

Patrick McGuinn is associate professor of political science and education at Drew University.

This article appeared in the Summer 2012 issue of Education Next. Suggested citation format:

McGuinn, P. (2012). Fight Club: Are advocacy organizations changing the politics of education? Education Next, 12(3), 25-31.

When Education Reform Gets Personal

Confessions of a policy-wonk father

Over more than 20 years in the field of education—including two with Teach For America—I have helped promote state standards, the Common Core, the hiring of teachers with strong content knowledge, longer class periods for math and reading, and extra support for struggling students, to name a few. I have recently discovered, however, that what I believe as an education policy wonk is not always what I believe as a father. I am incredibly fortunate that my two young daughters are ready learners who attend a high-functioning school. That said, I make the following confessions:

As a policy wonk, I push for high academic expectations for all students. I know that American competitiveness requires excellence in subjects such as math and science that our schools do not teach very well. As a father, however, I find that what matters most to me is that my daughters are happy in school.

In Montgomery County, Maryland, where I live, academic expectations are extremely high. Our school district aims to teach math, for example, in a rigorous way. I appreciate this goal, but to date “increased rigor” has primarily meant that some students skip grade-level math classes and enroll in classes meant for older kids. Basic skills that are taught and reinforced in the grades being skipped are often given short shrift. In 2nd grade, my daughter brought home worksheets on probability before she had any real understanding of the concept, or even a strong foundation in simple division. Her frustration with probability, and consequently math, grew as we substituted times-table drills for play dates. Last year, to my horror, she said that she hated math. This year, which has included an increased focus on math facts and an inspiring teacher, math has become her favorite subject.

With my policy hat on, I know that a teacher’s academic background is critical. As a father, however, I want a teacher who manages a calm, safe, and fun classroom, and who loves children. One of the best teachers my children have had is our regular babysitter, who speaks English as a second language and never graduated from high school.

Of course, there are some gems at our school (thank you, Ms. Bederman, now retired) who are knowledgeable, skilled, passionate about learning, and passionate about children. To a father, Ms. Bederman was a gift from heaven; to a policy wonk she is the Holy Grail. Why can’t we identify and train more of these treasures? Why wasn’t every teacher in our school crowded into Ms. Bederman’s classroom to witness her magic? Why didn’t the principal require every teacher to crowd into her classroom?

As a policy wonk, I believe that student learning flourishes in classrooms that include students with a wide range of abilities and backgrounds. As a father, I want my daughters to appreciate diversity of all types. But I also want them to be surrounded by children who come to school ready and eager to learn. These goals come into conflict when some students are constantly disruptive; the policy wonk must preach patience to the father who wants the class disrupter out.

My daughter’s kindergarten class included a troubled boy who was going through the foster-care placement process. He is exactly the type of child who can benefit most from an excellent education, but he regularly disrupted class. One day, when I was in the classroom, the teacher—talented, but inexperienced—spent more than half of her time trying to keep this boy on task.

I feel for children like him; my company works with schools and districts to improve outcomes for these kids. But I was angry. The other children were clearly uncomfortable. His disruptions reduced learning time for my daughter, and seemed to steal some of her innocence and excitement about school.

The tension between my understanding of good education policy—driven by a deep commitment to equity and the belief that an outstanding education can transform lives, and this country—and what is right for my daughters makes me both a better policy wonk and a better father. The tension also illustrates why school reform is so difficult.

Scott Joftus is the president of the education-consulting firm Cross & Joftus.

This article appeared in the Summer 2012 issue of Education Next. Suggested citation format:

Joftus, S. (2012). When Education Reform Gets Personal: Confessions of a policy-wonk father. Education Next, 12(3), 80.

Do Schools Begin Too Early?

The effect of start times on student achievement

What time should the school day begin? School start times vary considerably, both across the nation and within individual communities, with some schools beginning earlier than 7:30 a.m. and others after 9:00 a.m. Districts often stagger the start times of different schools in order to reduce transportation costs by using fewer buses. But if beginning the school day early in the morning has a negative impact on academic performance, staggering start times may not be worth the cost savings.

Proponents of later start times, who have received considerable media attention in recent years, argue that many students who have to wake up early for school do not get enough sleep and that beginning the school day at a later time would boost their achievement. A number of school districts have responded by delaying the start of their school day, and a 2005 congressional resolution introduced by Rep. Zoe Lofgren (D-CA) recommended that secondary schools nationwide start at 9:00 or later. Despite this attention, there is little rigorous evidence directly linking school start times and academic performance.

In this study, I use data from Wake County, North Carolina, to examine how start times affect the performance of middle school students on standardized tests. I find that delaying school start times by one hour, from roughly 7:30 to 8:30, increases standardized test scores by at least 2 percentile points in math and 1 percentile point in reading. The effect is largest for students with below-average test scores, suggesting that later start times would narrow gaps in student achievement.

The primary rationale given for start times affecting academic performance is biological. Numerous studies, including those published by Elizabeth Baroni and her colleagues in 2004 and by Fred Danner and Barbara Phillips in 2008, have found that earlier start times may result in fewer hours of sleep, as students may not fully compensate for earlier rising times with earlier bedtimes. Activities such as sports and work, along with family and social schedules, may make it difficult for students to adjust the time they go to bed. In addition, the onset of puberty brings two factors that can make this adjustment particularly difficult for adolescents: an increase in the amount of sleep needed and a change in the natural timing of the sleep cycle. Hormonal changes, in particular, the secretion of melatonin, shift the natural circadian rhythm of adolescents, making it increasingly difficult for them to fall asleep early in the evening. Lack of sleep, in turn, can interfere with learning. A 1996 survey of research studies found substantial evidence that less sleep is associated with a decrease in cognitive performance, both in laboratory settings and through self-reported sleep habits. Researchers have likewise reported a negative correlation between self-reported hours of sleep and school grades among both middle- and high-school students.

I find evidence consistent with this explanation: among middle school students, the impact of start times is greater for older students (who are more likely to have entered adolescence). However, I also find evidence of other potential mechanisms; later start times are associated with reduced television viewing, increased time spent on homework, and fewer absences. Regardless of the precise mechanism at work, my results from Wake County suggest that later start times have the potential to be a more cost-effective method of increasing student achievement than other common educational interventions such as reducing class size.

Wake County

The Wake County Public School System (WCPSS) is the 16th-largest district in the United States, with 146,687 students in all grades for the 2011–12 school year. It encompasses all public schools in Wake County, a mostly urban and suburban county that includes the cities of Raleigh and Wake Forest. Start times for schools in the district are proposed by the transportation department (which also determines bus schedules) and approved by the school board.

Wake County is uniquely suited for this study because there are considerable differences in start times both across schools and for the same schools at different points in time. Since 1995, WCPSS has operated under a three-tiered system. While there are some minor differences in the exact start times, most Tier I schools begin at 7:30, Tier II schools at 8:15, and Tier III at 9:15. Tiers I and II are composed primarily of middle and high schools, and Tier III is composed entirely of elementary schools. Just over half of middle schools begin at 7:30, with substantial numbers of schools beginning at 8:00 and 8:15 as well. The school day at all schools is the same length. But as the student population has grown, the school district has changed the start times for many individual schools in order to maintain a balanced bus schedule, generating differences in start times for the same school in different years.

The only nationally representative dataset that records school start times indicates that, as of 2001, the median middle-school student in the U.S. began school at 8:00. More than one-quarter of students begin school at 8:30 or later, while more than 20 percent begin at 7:45 or earlier. In other words, middle school start times are somewhat earlier in Wake County than in most districts nationwide. The typical Wake County student begins school earlier than more than 90 percent of American middle-school students.

Data and Methods

The data used in this study come from two sources. First, administrative data for every student in North Carolina between 2000 and 2006 were provided by the North Carolina Education Research Data Center. The data contain detailed demographic variables for each student as well as end-of-grade test scores in reading and math. I standardize the raw test scores by assigning each student a percentile score, which indicates performance relative to all North Carolina students who took the test in the same grade and year. The second source of data is the start times for each Wake County public school, which are recorded annually and were provided by the WCPSS transportation department.
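
The grade-and-year percentile standardization described above is simple to sketch. The following pandas illustration uses invented scores and hypothetical column names (the actual North Carolina data layout is not shown in the article):

```python
import pandas as pd

# Hypothetical student-level records; the values and column names are
# illustrative, not the actual North Carolina administrative data.
scores = pd.DataFrame({
    "grade":    [6, 6, 6, 6, 7, 7, 7, 7],
    "year":     [2003] * 8,
    "raw_math": [310, 325, 340, 355, 300, 330, 345, 360],
})

# Rank each student against all students who took the test in the same
# grade and year, then express that rank as a percentile score (0-100).
scores["math_pctile"] = (
    scores.groupby(["grade", "year"])["raw_math"]
          .rank(pct=True) * 100
)
```

Because each student is ranked only within their own grade-year group, a percentile score always means the same thing regardless of which test form or year it comes from.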

About 39 percent of WCPSS students attended magnet schools between 2000 and 2006. Since buses serving magnet schools must cover a larger geographic area, ride times tend to be longer for magnet school students. As a result, almost all magnet schools during the study period began at the earliest start time. Because magnet schools start earlier and enroll students who tend to have higher test scores, I exclude magnet schools from my main analysis. My results are very similar if magnet school students are included.

The data allow me to use several different methods to analyze the effect of start times on student achievement. First, I compare the reading and math scores of students in schools that start earlier to the scores of similar students at later-starting schools. Specifically, I control for the student’s race, limited English status, free or reduced-price lunch eligibility, years of parents’ education, and whether the student is academically gifted or has a learning disability. I also control for the characteristics of the school, including total enrollment, pupil-to-teacher ratio, racial composition, percentage of students eligible for free lunch, and percentage of returning students. This approach compares students with similar characteristics who attend schools that are similar, except for the fact that some schools start earlier and others start later.

The results produced by this first approach could be misleading, however, if middle schools with later start times differ from other schools in unmeasured ways. For example, it could be the case that more-motivated principals lobby the district to receive a later start time and also employ other strategies that boost student achievement. If that were the case, then I might find that schools with later start times have higher test scores, even if start times themselves had no causal effect.

To deal with this potential problem, my second approach focuses on schools that changed their start times during the study period. Fourteen of the district’s middle schools changed their start times, including seven schools that changed their start times by 30 minutes or more. This enables me to compare the test scores of students who attended a particular school to the test scores of students who attended the same school in a different year, when it had an earlier or later start time. For example, this method would compare the test scores of students at a middle school that had a 7:30 start time from 1999 to 2003 to the scores of students at the same school when it had an 8:00 start time from 2004 to 2006. I still control for all of the student and school characteristics mentioned earlier.
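
This within-school comparison amounts to a school fixed-effects regression: any stable differences between schools drop out, and the start-time effect is identified only by schools whose start times changed. A toy numpy sketch with invented numbers shows the mechanics:

```python
import numpy as np

# Toy panel: two schools observed across years with different start
# times. All values are invented for illustration, not Wake County data.
school     = np.array([1, 1, 1, 2, 2, 2])
start_hour = np.array([7.5, 7.5, 8.0, 8.0, 8.5, 8.5])
pctile     = np.array([50.0, 51.0, 52.0, 60.0, 61.5, 61.5])

def demean(x, g):
    """Subtract each group's mean, removing fixed school-level differences."""
    out = x.astype(float)
    for s in np.unique(g):
        out[g == s] -= x[g == s].mean()
    return out

y = demean(pctile, school)      # score deviations within school
t = demean(start_hour, school)  # start-time deviations within school

# Slope of y on t: percentile-point change per one-hour-later start.
slope = (t @ y) / (t @ t)
```

Here the slope comes out to 3 percentile points per hour only because the toy numbers were chosen that way; the article's actual estimates (2.2 in math, 1.5 in reading) come from the real data with the full set of student and school controls.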

As a final check on the accuracy of my results, I perform analyses that compare the achievement of individual students to their own achievement in a different year in which the middle school they attended started at a different time. For example, this method would compare the scores of 7th graders at a school with a 7:30 start time in 2003 to the scores of the same students as 8th graders in 2004, when the school had a start time of 8:00. As this suggests, this method can only be used for the roughly 28 percent of students in my sample whose middle school changed its start time while they were enrolled.

Results

My first method compares students with similar characteristics who attend schools that are similar except for having different start times. The results indicate that a one-hour delay in start time increases standardized test scores on both math and reading tests by roughly 3 percentile points. As noted above, however, these results could be biased by unmeasured differences between early- and late-starting schools (or the students who attend them).

Using my second method, which mitigates this bias by following the same schools over time as they change their start times, I find a 2.2-percentile-point improvement in math scores and a 1.5-point improvement in reading scores associated with a one-hour change in start time.

My second method controls for all school-level characteristics that do not change over time. However, a remaining concern is that the student composition of schools may change. For example, high-achieving students in a school that changed to an earlier start time might transfer to private schools. To address this issue, I estimate the impact of later start times using only data from students who experience a change in start time while remaining in the same school. Among these students, the effect of a one-hour later start time is 1.8 percentile points in math and 1.0 point in reading (see Figure 1).

These estimated effects of changes in start times are large enough to be substantively important. For example, the effect of a one-hour later start time on math scores is roughly 14 percent of the black-white test-score gap, 40 percent of the gap between those eligible and those not eligible for free or reduced-price lunch, and 85 percent of the gain associated with an additional year of parents’ education.

The benefits of a later start time in middle school appear to persist through at least the 10th grade. All students in North Carolina are required to take the High School Comprehensive Test at the end of 10th grade. The comprehensive exam measures growth in reading and math since the end of grade 8 and is similar in format to the end-of-grade tests taken in grades 3–8. Controlling for the start time of their high school, I find that students whose middle school started one hour later when they were in 8th grade continue to score 2 percentile points higher in both math and reading when tested in grade 10.

I also looked separately at the effect of later start times for lower-scoring and higher-scoring students. The results indicate that the effect of a later start time in both math and reading is more than twice as large for students in the bottom third of the test-score distribution than for students in the top third. The larger effect of start times on low-scoring students suggests that delaying school start times may be an especially relevant policy change for school districts trying to meet minimum competency requirements (such as those mandated in the No Child Left Behind Act).

Why Do Start Times Matter?

The typical explanation for why later start times might increase academic achievement is that by starting school later, students will get more sleep. As students enter adolescence, hormonal changes make it difficult for them to compensate for early school start times by going to bed earlier. Because students enter adolescence during their middle-school years, examining the effect of start times as students age allows me to test this theory. If the adolescent hormone explanation is true, the effect of school start times should be larger for older students, who are more likely to have begun puberty.

I therefore separate the students in my sample by years of age and estimate the effect of start time on test scores separately for each group. In both math and reading, the start-time effect is roughly the same for students age 11 and 12, but increases for those age 13 and is largest for students age 14 (see Figure 2). This pattern is consistent with the adolescent hormone theory.

To further investigate how the effect of later start times varies with age, I estimate the effect of start times on upper elementary students (grades 3–5). If adolescent hormones are the mechanism through which start times affect academic performance, preadolescent elementary students should not be affected by early start times. I find that start times in fact had no effect on elementary students. However, elementary schools start much later than middle schools (more than half of elementary schools begin at 9:15, and almost all of the rest begin at 8:15). As a result, it is not clear if there is no effect because start times are not a factor in the academic performance of prepubescent students, or because the schools start much later and only very early start times affect performance.

Of course, increased sleep is not the only possible reason later-starting middle-school students have higher test scores. Students in early-starting schools could be more likely to skip breakfast. Because they also get out of school earlier, they could spend more (or less) time playing sports, watching television, or doing homework. They could be more likely to be absent, tardy, or have behavioral problems in school. Other explanations are possible as well. While my data do not allow me to explore all possible mechanisms, I am able to test several of them.

I find that students who start school one hour later watch 12 fewer minutes of television per day and spend 9 minutes more on homework per week, perhaps because students who start school later spend less time at home alone. Students who start school earlier come home from school earlier and may, as a result, spend more time at home alone and less time at home with their parents. If students watch television when they are home alone and do their homework when their parents are home, this behavior could explain why students who start school later have higher test scores. In other words, it may be that it is not so much early start times that matter but rather early end times.

Previous research tends to find that students in early-starting schools are more likely to be tardy to school and to be absent. In Wake County, students who start school one hour later have 1.3 fewer absences than the typical student—a reduction of about 25 percent. Fewer absences therefore may also explain why later-starting students have higher test scores: students who have an early start time miss more school and could perform worse on standardized tests as a result.

Conclusion

Later school start times have been touted as a way to increase student performance. There has not, however, been much empirical evidence supporting this claim or calculating how large an effect later start times might have. My results indicate that delaying the start times of middle schools that currently open at 7:30 by one hour would increase math and reading scores by 2 to 3 percentile points, an impact that persists into at least the 10th grade.

These results suggest that delaying start times may be a cost-effective method of increasing student performance. Since the effect of later start times is stronger for the lower end of the distribution of test scores, later start times may be particularly effective in meeting accountability standards that require a minimum level of competency.

If elementary students are not affected by later start times, as my data suggest (albeit not definitively), it may be possible to increase test scores for middle school students at no cost by having elementary schools start first. Alternatively, the entire schedule could be shifted later into the day. However, these changes may pose other difficulties due to child-care constraints for younger students and jobs and afterschool activities for older students.

Another option would be to eliminate tiered busing schedules and have all schools begin at the same time. A reasonable estimate of the cost of moving start times later is the additional cost of running a single-tier bus system. The WCPSS Transportation Department estimates that over the 10-year period from 1993 to 2003, using a three-tiered bus system saved roughly $100 million in transportation costs. With approximately 100,000 students per year divided into three tiers, it would cost roughly $150 per student each year to move each student in the two earliest start-time tiers to the latest start time. In comparison, an experimental study of class sizes in Tennessee finds that reducing class size by one-third increases test scores by 4 percentile points in the first year at a cost of $2,151 per student per year (in 1996 dollars). These calculations, while very rough, suggest that delaying the beginning of the school day may produce a comparable improvement in test scores at a fraction of the cost.
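
The rough $150-per-student figure can be checked with back-of-the-envelope arithmetic, using the numbers quoted above:

```python
# Figures from the article: three-tier busing saved roughly $100 million
# over 10 years in a district of about 100,000 students in three tiers.
savings_total = 100_000_000   # dollars saved over the period
years         = 10
students      = 100_000
tiers         = 3

annual_savings = savings_total / years        # ~$10 million per year

# Moving the two earliest tiers to the latest start time forfeits the
# savings, spread over the two-thirds of students who would shift.
students_moved = students * 2 / tiers         # ~66,667 students
cost_per_moved_student = annual_savings / students_moved  # ~$150/year
```

At roughly $150 per student per year, the comparison with the Tennessee class-size experiment's $2,151 per student per year is what drives the article's cost-effectiveness claim.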

Finley Edwards is visiting assistant professor of economics at Colby College.

For more from Education Next on this topic, please read:

• “Time for School? When the snow falls, test scores also drop”

• “Rise and Shine: How school start times affect academic performance”

• “How To Make School Start Later: Early-morning high school clashes with teenage biology, but change is hard”


This article appeared in the Summer 2012 issue of Education Next. Suggested citation format:

Edwards, F. (2012). Do Schools Begin Too Early? The effect of start times on student achievement. Education Next, 12(3), 52-57.

Door Still Closed: Alabama plaintiffs lose federal school finance challenge (April 24, 2012)


The federal courthouse door has been closed to school finance litigation since 1973, when the Supreme Court ruled in San Antonio v. Rodriguez that unequal spending grounded in unequal distribution of taxable real property does not violate the Constitution. That makes a recent federal case, Lynch v. Alabama, important for seeking an alternative entrance. To the plaintiffs’ disappointment, Rodriguez still blocked the way.

Filing in 2008, the plaintiffs in Lynch alleged that Alabama underfunds education in violation of Title VI of the Civil Rights Act, which forbids racial discrimination in federally assisted programs, and the Fourteenth Amendment’s Equal Protection Clause. Essentially putting Alabama’s history on trial, the suit maintained that racist motivations color every aspect of the state’s school-funding system. While most litigants contend that school finance relies too much on local property taxes, the plaintiffs in Lynch argued that localities should be able to rely more on property taxes. Alabama raises only 5 percent of its school revenue from property taxes, with the rest coming from income and sales taxes.

According to the plaintiffs, Alabama’s constitution of 1901, and amendments in the 1970s and 1980s, placed racially motivated limits on property taxes that prevent poor, primarily black communities from raising sufficient revenue to adequately fund education. In addition to capping the millage rate, the state created differential assessments for different categories of property. This meant, for example, that forested land, which comprises 70 percent of the state, was taxed at a significantly lower rate than other property. The plaintiffs asked the court to eliminate all limitations on property tax rates and all differential assessments.

The state contended that its constitution, as amended in the era of civil rights, is not racially motivated and that the current tax regime does not unfairly burden black students. It also argued that if granted, the plaintiffs’ remedy would all but destroy the real estate market and lead to economic “calamity.” Alabama’s forest industry, taking a keen interest in the case, said that taxes on forested land would increase 1,000 percent without differential assessments.

After a trial in 2011, district court judge Lynwood Smith issued a sprawling 854-page opinion that agreed that Alabama inadequately funds education but nevertheless concluded that “like it or not,” because of Supreme Court precedent, Alabama’s property-tax system is constitutional. In Rodriguez, Smith said, the Court “faced similar facts” and found no constitutional violation. Even though the 1901 constitution was a “misbegotten spawn” obviously “perverted by a virulent, racially discriminatory intent,” he concluded that amendments from the 1970s and 1980s modifying the offending portions of the constitution were not obviously motivated by racial animus. Smith also asserted that the funding system does not have a racially discriminatory effect, pointing out that “Alabama’s black students actually fare better in terms of yield per-mill per-student than do white students.” As a result, the plaintiffs had proved only that there are disparities but not “along racial lines.”

Smith went out of his way to show displeasure at having to rule against the plaintiffs. Alabama’s education system, he said, is hamstrung by “two unfortunate realities”: “mankind’s self-serving nature” and “Supreme Court jurisprudence.” Because of the first, a majority of the state’s voters are unwilling to vote for services that do not directly benefit them, leaving rural black and white students to suffer. As to the second, he argued that the “Court’s rulings on education since the 1970s mirror its decisions [such as Plessy v. Ferguson] from the late nineteenth century” and have “allowed unequal and inadequate school funding to evolve.”

Such tendentious moralizing aside, Smith’s opinion indicates that Rodriguez poses a high, but perhaps not insurmountable, hurdle for school-finance advocates in lower federal courts. A less-conflicted judge confronting similar facts might find a way to side with the plaintiffs. But the Supreme Court, which has expressed increasing skepticism about the desirability of judicial oversight of schools, seems unlikely to overturn well-established precedent and thrust lower courts into the quagmire of school funding and tax policy.

 

Joshua Dunn is associate professor of political science at the University of Colorado–Colorado Springs. Martha Derthick is professor emerita of government at the University of Virginia.

This article appeared in the Summer 2012 issue of Education Next. Suggested citation format:

Dunn, J., and Derthick, M. (2012). Door Still Closed: Alabama plaintiffs lose federal school finance challenge. Education Next, 12(3), 7.

Best Practices Are the Worst: Picking the anecdotes you want to believe. A book review of Marc Tucker’s “Surpassing Shanghai” (April 3, 2012)


Surpassing Shanghai: An Agenda for American Education Built on the World’s Leading Systems
Edited by Marc Tucker
Harvard Education Press, 2011, $49.99; 288 pages.

 

As reviewed by Jay P. Greene

“Best practices” is the worst practice. The idea that we should examine successful organizations and then imitate what they do if we also want to be successful is something that first took hold in the business world but has now unfortunately spread to the field of education. If imitation were the path to excellence, art museums would be filled with paint-by-number works.

The fundamental flaw of a “best practices” approach, as any student in a half-decent research-design course would know, is that it suffers from what is called “selection on the dependent variable.” If you only look at successful organizations, then you have no variation in the dependent variable: they all have good outcomes. When you look at the things that successful organizations are doing, you have no idea whether each one of those things caused the good outcomes, had no effect on success, or was actually an impediment that held organizations back from being even more successful. An appropriate research design would have variation in the dependent variable; some have good outcomes and some have bad ones. To identify factors that contribute to good outcomes, you would, at a minimum, want to see those factors more likely to be present where there was success and less so where there was not.

“Best practices” lacks scientific credibility, but it has been a proven path to fame and fortune for pop-management gurus like Tom Peters, with In Search of Excellence, and Jim Collins, with Good to Great. The fact that many of the “best” companies they featured subsequently went belly-up—like Atari and Wang Computers, lauded by Peters, and Circuit City and Fannie Mae, by Collins—has done nothing to impede their high-fee lecture tours. Sometimes people just want to hear a confident person with shiny teeth tell them appealing stories about the secrets to success.

With Surpassing Shanghai, Marc Tucker hopes to join the ranks of the “best practices” gurus. He, along with a few of his colleagues at the National Center on Education and the Economy, has examined the education systems in some other countries with successful outcomes so that the U.S. can become similarly successful. Tucker coauthors the chapter on Japan, as well as an introductory and two concluding chapters. Tucker’s collaborators write chapters featuring Shanghai, Finland, Singapore, and Canada. Their approach to greatness in American education, as Linda Darling-Hammond phrases it in the foreword, is to ensure that “our strategies must emulate the best of what has been accomplished in public education both from here and abroad.”

But how do we know what those best practices are? The chapters on high-achieving countries describe some of what those countries are doing, but the characteristics they feature may have nothing to do with success or may even be a hindrance to greater success. Since the authors must pick and choose what characteristics they highlight, it is also quite possible that countries have successful education systems because of factors not mentioned at all. Since there is no scientific method to identifying the critical features of success in the best-practices approach, we simply have to trust the authority of the authors that they have correctly identified the relevant factors and have properly perceived the causal relationships.

But Surpassing Shanghai is even worse than the typical best-practices work, because Tucker’s concluding chapters, in which he summarizes the common best practices and draws policy recommendations, have almost no connection to the preceding chapters on each country. That is, the case studies of Shanghai, Finland, Japan, Singapore, and Canada attempt to identify the secrets to success in each country, a dubious-enough enterprise, and then Tucker promptly ignores all of the other chapters when making his general recommendations.

Tucker does claim to be drawing on the insights of his coauthors, but he never actually references the other chapters in detail. He never names his coauthors or specifically draws on them for his conclusions. In fact, much of what Tucker claims as common lessons of what his coauthors have observed from successful countries is contradicted in chapters that appear earlier in the book. And some of the common lessons they do identify, Tucker chooses to ignore.

For example, every country case study in Surpassing Shanghai, with the exception of the one on Japan coauthored by Marc Tucker, emphasizes the importance of decentralization in producing success. In Shanghai the local school system “received permission to create its own higher education entrance examination. This heralded a trend of exam decentralization, which was key to localized curricula.” The chapter on Finland describes the importance of the decision “to devolve increasing levels of authority and responsibility for education from the Ministry of Education to municipalities and schools…. [T]here were no central initiatives that the government was trying to push through the system.” Singapore is similarly described: “Moving away from the centralized top-down system of control, schools were organized into geographic clusters and given more autonomy…. It was felt that no single accountability model could fit all schools. Each school therefore set its own goals and annually assesses its progress toward meeting them…” And the chapter on Canada teaches us that “the most striking feature of the Canadian system is its decentralization.”

Tucker makes no mention of this common decentralization theme in his conclusions and recommendations. Instead, he claims the opposite as the common lesson of successful countries: “students must all meet a common basic education standard aligned to a national or provincial curriculum… Further, in these countries, the materials prepared by textbook publishers and the publishers of supplementary materials are aligned with the national curriculum framework.” And “every high-performing country…has a unit of government that is clearly in charge of elementary and secondary education…In such countries, the ministry has an obligation to concern itself with the design of the system as a whole…”

Conversely, Tucker emphasizes that “the dominant elements of the American education reform agenda” are noticeably absent from high-performing countries, including “the use of market mechanisms, such as charter schools and vouchers….” But if Tucker had read the chapter on Shanghai, he would have found a description of a system by which “students choose schools in other neighborhoods by paying a sponsorship fee. It is the Chinese version of school choice, a hot issue in the United States.” And although the chapter on Canada fails to make any mention of it, Canada has an extensive system of school choice, offering options that vary by language and religious denomination. According to recently published research by David Card, Martin Dooley, and Abigail Payne, competition among these options is a significant contributor to academic achievement in Canada.

There is a reason that promoters of best-practices approaches are called “gurus.” Their expertise must be derived from a mystical sphere, because it cannot be based on a scientific appraisal of the evidence. Marc Tucker makes no apology for his nonscientific approach. In fact, he denounces “the clinical research model used in medical research” when assessing education policies. The problem, he explains, is that no country would consent to “randomly assigning entire national populations to the education systems of another country or to certain features of the education system of another country.” On the contrary, countries, states, and localities can and do randomly assign “certain features of the education system,” and we have learned quite a lot from that scientific process. In the international arena, Tucker may want to familiarize himself with the excellent work being done by Michael Kremer and Karthik Muralidharan utilizing random assignment around the globe.

In addition, social scientists have developed practices to observe and control for differences in the absence of random assignment that have allowed extensive and productive analyses of the effectiveness of educational practices in different countries. In particular, the recent work of Ludger Woessmann, Martin West, and Eric Hanushek has utilized the PISA and TIMSS international test results that Tucker finds so valuable, but they have done so with the scientific methods that Tucker rejects. Even well-constructed case study research, like that done by Charles Glenn, can draw useful lessons across countries. The problem with the best-practices approach is not entirely that it depends on case studies, but that by avoiding variation in the dependent variable it prevents any scientific identification of causation.

Tucker’s hostility to scientific approaches is more understandable, given that his graduate training was in theater rather than a social science. Perhaps that is also why Tucker’s book reminds me so much of The Music Man. Tucker is like “Professor” Harold Hill come to town to sell us a bill of goods. His expertise is self-appointed, and his method, the equivalent of “the think system,” is obvious quackery. And the Gates Foundation, which has for some reason backed Tucker and his organization with millions of dollars, must be playing the residents of River City, because they have bought this pitch and are pouring their savings into a band that can never play music except in a fantasy finale.

Best practices really are the worst.

Jay P. Greene is professor of education reform at the University of Arkansas and a fellow at the George W. Bush Institute.

This article appeared in the Summer 2012 issue of Education Next. Suggested citation format:

Greene, J.P. (2012). Best Practices Are the Worst: Picking the anecdotes you want to believe. Education Next, 12(3), 72-73.

Special Choices: Do voucher schools serve students with disabilities? (February 27, 2012)


Nine school voucher programs in seven states specifically provide choice for families with disabled children (see sidebar). In Florida, for example, more than 22,000 students with disabilities receive McKay Scholarships to attend private schools at a per-student cost to the government that averaged $7,220 in 2010–11. But what about the private schools that participate in voucher programs open to all low-income families, such as those in Milwaukee, Cleveland, New Orleans, and Washington, D.C.? Do these schools exclude most students who in a public school setting would be identified as in need of special education?

Critics of voucher programs often argue that private schools do exclude most disabled students, and the matter occasionally has been the subject of litigation. Yet accurate information on students with disabilities served by private schools is notable for its absence.

The main reason for the lack of accurate information is that private schools do not operate under the provisions of the federal law that furnishes aid to the states for students identified as needing special education. Public schools expend considerable resources identifying children eligible for special services, both because they are under an obligation to provide those services and because they receive additional funds from federal and state governments if a child is identified as having a disability that affects their learning. Those obligations, rights, and funding support do not apply if parents choose to place their children in private schools with the help of a voucher. By and large, private schools have not developed the capacity to identify children with disabilities, and many of them are reluctant to do so, as they believe it leads to stigmatization of the children.

In other words, a child who may be classified as in need of special education in a public school may not be classified as such if his or her family chooses a private school, using a voucher to defray the cost. As a result, any official statistics on the prevalence of students with disabilities in public and private schools can be highly misleading.

We have not been able to surmount all of the obstacles to identifying the percentage of students in private schools who would have been identified as in need of special education in public schools, but we believe we have fairly accurate information on this question for the country’s largest and longest-running school-voucher program. The Milwaukee Parental Choice Program (MPCP), first established in 1990 and steadily expanded to include more private schools and more students in subsequent years, now serves more than 23,000 students who attend 107 different private schools. The annual voucher a school receives for each MPCP student is approximately $6,000. MPCP thus provides an excellent context for detecting the admission policies of private schools when a modest-value voucher program for low-income students is operating at scale.

In 2006, the State of Wisconsin authorized our research team to conduct a five-year evaluation of MPCP. Through the course of that study, we collected a wealth of data about the students in the voucher program and in the Milwaukee Public Schools (MPS) that permit us to estimate what proportion of the voucher student population would qualify for special education if the students were enrolled in public schools instead.

Drawing on different sources of data and various analytic methods, we estimate that anywhere between 7.5 and 14.6 percent of voucher students have disabilities that would qualify them for special education in the public sector. All of these estimates are far higher than the 1.6 percent rate reported by the Wisconsin State Department of Public Instruction (DPI), a figure that gave rise to a lawsuit alleging discrimination by the MPCP program.

Following is a discussion of the procedures we followed to obtain our estimates and an explanation for the disparity between our estimates and the ones DPI has provided.

Structure of Special Education

As mentioned previously, receiving a special education designation brings with it certain legal rights for services or accommodations in the public educational sphere, as provided by the federal law known as the Individuals with Disabilities Education Act (IDEA). Once so designated, public school students are entitled to receive a free and appropriate public education (FAPE), to include special education services in the least restrictive environment possible and according to an individualized education program (IEP). A student’s IEP is drawn up by a committee that includes the student’s parents or guardians, local public-school officials, and relevant medical or psychological diagnosticians and care providers. The resulting special services and accommodations are funded through a combination of federal, state, and local monies based on formulas established in law. In Wisconsin, the federal government pays about 11 percent of the extra cost of educating each special-education student, with the state paying 26 percent and the local public-school district covering the remaining 63 percent.

The legal and funding structure surrounding students with disabilities in the private sector differs greatly from the situation in the public sector. Unless a public school district itself places a special education student in a private school, the IEP and additional funding associated with a student with a disability in the public sector does not transfer with the student if the child enrolls in a private school. The point is made in an August 2011 DPI memo on the subject:

Students with disabilities attending voucher schools as part of the MPCP are considered parentally placed private school students and as such, DPI treats them in the same fashion as students attending private non-voucher schools. Under [state law] parentally placed private school students are…not entitled to a Free and Appropriate Public Education.

If a parent enrolls a student with special needs in a private school, that student must surrender her legal rights to special educational services. Private schools are not required by federal law to enroll students with special needs, and they are not entitled to any additional resources from the state if they do so. Private schools can either accommodate the student themselves, using whatever resources they have, or negotiate with public school officials regarding the provision of special services to the student by the public school system with additional public funds (a process called “equitable services”).

Maintaining a count of those thought to be in need of special services also varies by sector. In the public sector, careful record keeping is stressed because disability status has major implications for the kinds of instructional and other services students will receive. In the private sector, special education tends to be handled much less formally, inasmuch as schools are ordinarily not required to follow formal procedures in diagnosing or serving students with special educational needs.

Given the contrasts between how special education is governed and managed in the public and private education sectors, we hypothesize the following:

1. The same student will have a higher likelihood of being identified as in need of special education if in a public school than if in a private school.

2. Given the funding available for extra services for disabled children attending public schools, a higher proportion of students with disabilities than those without disabilities will choose to remain in the public sector rather than use a voucher.

3. Any data that rely on official reports of disability will under-count the percentage of students in private schools who would have been identified as in need of special education had they attended public schools.

To test these hypotheses, we used two alternative methods to estimate the actual percentage of students in private schools who would have been identified as in need of special education in public school had they selected that sector.

Method I: Same Student, Different Sector

The better of our two methods relies on information from those students who attended schools in both the public and the private sectors during the course of our study. During the five years of our evaluation, 20.1 percent or 1,475 of the 7,338 students in our MPCP and MPS study panels switched from one school sector to the other, in some cases multiple times.

We received enrollment files from MPS each year that included information on the special education status of each MPS student. We also collected enrollment lists from every private school in MPCP and asked school officials to indicate if students had disabilities that qualified them for special education. For students who switched school sectors during the study period, we can determine whether those who were identified as needing special education in the public sector were similarly identified when they attended private schools, and vice versa. In other words, we can use each student in our study as his or her own control group to learn whether disability designations vary by sector.

Our analysis indicates that Milwaukee students who switched between the public and private school sectors were much more likely to be identified as in need of special education when they were in the public sector. On average, controlling for factors such as year and student grade, those who attended schools in both sectors were classified as in need of special education at the rate of 9.1 percent when attending private schools but at a rate of 14.6 percent when attending Milwaukee’s public schools. If we assume that a student’s need for special education did not change at the time the student switched sectors, this suggests that 5.5 percent of students attending private schools were not identified as in need of special education but would have been had they been attending public school. In other words, the identification rate in the public schools appears to be 60 percent higher (the 5.5 percent increment divided by 9.1 percent) than in the private schools. The identification rate was higher when students were in MPS both because many students who switched from MPCP to MPS received special education designations in MPS and because many students with special education designations in MPS shed them when they enrolled in MPCP schools.

The 14.6 percent MPCP disability rate is based only on students who switched sectors (35 percent of MPCP students). Those students appear to have higher rates of disability than those who did not switch. Based on principal surveys, for the 65 percent of MPCP students who did not switch, the disability rate was 3.75 percent. To get an overall rate for MPCP students, we compute a weighted average for the two groups of 7.5 percent. We suspect that this rate is conservative, since several voucher school principals told us they resist labeling students in such a way. Combining this conservative estimate with the estimate from our analysis of only students who switched sectors yields a range of 7.5 to 14.6 percent, which we think captures the likely student disability rate in MPCP.
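The 7.5 percent overall figure is a weighted average of the switcher and non-switcher rates, and the 60 percent identification gap is simple ratio arithmetic; both can be checked directly. This sketch uses only the percentages quoted in the text above.

```python
# Check of the two calculations in the text, using the quoted figures.

switcher_share = 0.35   # share of MPCP students who switched sectors
switcher_rate = 14.6    # disability rate (%) among switchers, from MPS records
stayer_rate = 3.75      # disability rate (%) among non-switchers, from principal surveys

# Weighted average across the two groups of MPCP students.
overall = switcher_share * switcher_rate + (1 - switcher_share) * stayer_rate
print(round(overall, 1))  # ~7.5

# The identification gap between sectors for students observed in both.
public_rate, private_rate = 14.6, 9.1
gap = (public_rate - private_rate) / private_rate
print(round(gap * 100))   # ~60 (percent higher in public schools)
```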


Method II: Parental Estimates of Disability Rates

Our second estimate of the student disability rate in MPCP comes from interviews with parents. In 2007 we interviewed a random sample of parents of MPCP students in grades 3–8, all the parents of MPCP 9th graders, and a sample of parents of MPS students who were matched to the sample of MPCP students based on their grade in school, neighborhood of residence, ethnicity, test-score performance, and other characteristics. We expanded this sample with additional parents of 3rd-grade students similarly chosen in 2007 and 2008. Altogether, we interviewed a majority of the parents of 3,669 students in MPCP and 3,669 students in MPS.

The survey included the following questions:

• Does [child’s name] have any physical disabilities?

• Does [child’s name] have any learning disabilities?

If a parent answered yes to the learning disabilities question, we further asked,

• How well do the facilities at [child’s name] school attend to his/her particular needs?

According to parental responses to the first two of these questions, 2.5 percent of students in MPCP have a physical disability and 9.8 percent have a learning disability (see Figure 1). The corresponding rates reported by parents of MPS students were 4.1 percent and 18.5 percent for physical and learning disabilities, respectively. Combining the categories and eliminating overlapping cases, it is estimated that the disability rate in the MPCP sector is 11.4 percent, as compared to 20.4 percent for the MPS sector.

There is every reason to believe that these parental responses are consistent and fairly accurate indicators of what the parents are told by school officials and what they themselves know about their children. The official MPS rate for this time period is between 18 and 19 percent, just slightly less than the 20.4 percent reported by our MPS parents. The 11.4 percent disability rate for MPCP students based on our survey is midway between the 7.5 percent rate for all students in MPCP based on school staff designations and the 14.6 percent rate based on observing some of the students in both school sectors.

It is interesting that within a scaled-up, long-standing voucher program, parental satisfaction with services for students with disabilities achieves a balance across sectors. Similar levels of satisfaction with special education services are reported, regardless of whether the student was in MPCP or MPS (see Figure 2). Presumably, the choice of sectors and schools allowed parents to obtain an educational setting they view as appropriate for their child.


Discussion

Our estimates of the prevalence of MPCP students who have a disability range from 7.5 to 14.6 percent. The 14.6 percent estimate is based on the identification by public schools of the need for special services for those students who attended school in both sectors, while parental reports peg the rate at 11.4 percent, and the combination of MPCP and MPS school personnel reports suggests it is 7.5 percent.

All of these estimates are higher than the one provided, on March 29, 2011, by DPI, which said that “the private schools [participating in MPCP] reported about 1.6 percent of choice students have a disability.” That statement provoked a lawsuit by disability rights groups against DPI, which administers MPCP, based on the charge that the program discriminates in admissions against students with disabilities.

The estimate provided by DPI was based on the percentage of MPCP students who were given test accommodations on the 2010 state accountability exams. Only a fraction of students with disabilities receive accommodations on exams, and accommodations are only permitted if an IEP committee of school personnel requests them. Since few students with disabilities in private schools have IEP committees, the student-testing accommodation rate for MPCP may bear little relationship to the actual student-disability rate in the program. In fact, using administrative data we collected from the MPCP schools, we were able to determine that only one-quarter of the MPCP students judged by their school to have a disability were actually given any accommodation for last year’s test.

Using multiple measures of student disability, each of which is more valid and reliable than testing accommodation statistics, the estimates we produced indicate a 7.5 to 14.6 percent participation rate for students with disabilities in the voucher schools in comparison to the 17 to 19 percent participation rate reported for students with disabilities by the public schools. The difference could be due to discrimination against disabled students, as has been alleged, but the evidence is not sufficient to draw any such conclusions. Where disabilities are severe, private schools may not have the necessary facilities, and even in less severe instances, parents may prefer the legal entitlements and the greater range of funded services in the public sector.

What we do know, with considerable certainty, is that while the percentage of students in the voucher schools with disabilities is substantially lower than the disability rate in the public schools, it is at least four times higher than public officials have claimed. These statistical findings reinforce our views that the sectors cannot be easily compared to one another on this particular metric, because they operate under different legal obligations, financial incentives, and cultural norms. Special education is special in very different ways in public schools and in voucher programs.

Patrick J. Wolf is professor of education reform at the University of Arkansas. John F. Witte is professor of political science and public affairs at the University of Wisconsin-Madison. David J. Fleming is assistant professor of political science at Furman University.

This article appeared in the Summer 2012 issue of Education Next. Suggested citation format:

Wolf, P.J., Witte, J.F., and Fleming, D.J. (2012). Special Choices: Do voucher schools serve students with disabilities? Education Next, 12(3), 16-22.

Moynihan Redux
Mon, 20 Feb 2012
Sadly, still more single-parent families. A review of Mitch Pearlstein's "Shortchanging Student Achievement: The Educational, Economic, and Social Costs of Family Fragmentation"

Shortchanging Student Achievement: The Educational, Economic, and Social Costs of Family Fragmentation
by Mitch Pearlstein
Rowman & Littlefield, 2011, $24.95; 165 pages.

As reviewed by Nathan Glazer

This book comes to us with a remarkable range of recommenders: Glenn Loury, Abigail and Stephan Thernstrom, Eric Hanushek, Ron Haskins, Heather MacDonald, David Blankenhorn, Chester Finn, and others. It is published as part of a series edited by Education Next’s own Frederick M. Hess. To my mind it is being recommended largely for the worthy cause in which its writer has been engaged for 20 years or more—deploring the breakdown of the traditional family. It is somewhat disorderly in presenting evidence for its central argument, and the author has an odd style in which almost every statement is hedged. This is not done as a matter of scholarly caution, but rather to preempt the charge that he is making too much of his thesis and thereby discounting other explanations for the educational, and subsequent occupational and economic, failure of so many American children.

But the central thesis, however presented, is hardly contestable: the fragmentation of the American family, in which the norm of two parents raising children in a marriage has been radically reduced by the increase of children born and raised out of wedlock, engenders grave problems for many American children and American society. As the first chapter puts it, we have moved “From Moynihan to ‘My Goodness.’” The “Moynihan” is, of course, Daniel P. Moynihan, author of the famous, or infamous, 1965 report on the black family. The “My Goodness” is our response to the enormous increase in the proportion of babies who are born out of wedlock or are illegitimate, terms one uses with embarrassment now but which may still have had some currency in 1965. The figures that so alarmed Moynihan—24 percent for blacks versus 3 percent for whites—have since ballooned to more than 70 percent for blacks and 30 percent for whites, figures that would have been unimaginable in 1965.

Pearlstein quotes a Swedish demographer: “The USA stands out as an extreme case with its very high proportion of children born to a lone mother, with a higher probability that children experience a union disruption than anywhere else…”

Mitch Pearlstein is director of a think tank in Minneapolis, the Center of the American Experiment, which he founded after a career working for University of Minnesota president C. Peter Magrath, for Minnesota governor Albert H. Quie, as an editorial writer for the St. Paul Pioneer Press, and at the U.S. Department of Education with Chester Finn. Despite his solid Minnesota credentials, Pearlstein comes out of Far Rockaway High School in Queens, New York, whose decline from a nurturer of future Nobel prizewinners furnishes much of the background to his distress over American education (as the decline of so many other once-great New York City high schools serves so many others, including this reviewer).

Pearlstein is more an advocate than an analyst. He is well aware of the expansive literature on the fragmentation of the American family, its causes and consequences, scholarly as well as popular. But he often mixes together childhood trauma and distress, family disruption, poverty, troubled neighborhoods, and still more, as possible causes. All are undoubtedly linked, but social scientists do try to pry these various forces apart using statistical techniques. Nevertheless, his main point holds: it stands to reason that being raised by a single mother is more difficult for a child than being raised by two parents.

Pearlstein is clearly more comfortable presenting the facts from whatever source than in advocating any solution:

No proposed solution in this book is equal to the central problem it aims to solve. There is no tax break, no welfare reform, no marriage education program, no public service campaign…that can reduce out-of-wedlock birth rates and divorce rates to what they were as recently as when the Everly Brothers beseeched “Little Suzy” to wake up lest their reputations get shot.

What is to be done? Pearlstein can reel off pages of programs that have attempted to raise educational achievement. He reminds us, if we have forgotten or never knew, that under the George W. Bush administrations more than 200 programs were instituted to aid marital stability. But he is no great advocate of any specific programs or approaches, whether to improve educational achievement or to deal with the underlying problem of family fragmentation that makes life for children more difficult. He is of sociologist Peter Rossi’s persuasion, made popular by Moynihan, on the effect of social programs. As Rossi phrased the “iron law of evaluation,” “the expected value of any net impact assessment of any large-scale social program is zero.” Educational reform after reform, many of them appearing to have good effects, crumbles under close evaluation and with the passage of time. And those that manage to keep up a record of improvement with children who are expected to do poorly in school, such as KIPP (Knowledge Is Power Program), cannot be brought to scale, owing to the talents and energy they require.

All this is commonly known, and Pearlstein well reports what we have learned, which is not encouraging. In his chapter on “Strengthening Learning,” he has nothing new to propose. But he does like the emphasis on exercised authority—in loco parentis, schools in place of absent parents—that Gerald Grant and others have emphasized as making for an effective school. And he has a good word for the differentiated digital education that Clayton Christensen and his colleagues pressed for in Disrupting Class (see “Something’s Better Than Nothing,” book reviews, Fall 2008).

Nor is he more optimistic about most programs to strengthen marriage. When the first of “three sophisticated experiments” designed to test the effectiveness of marriage programs aimed at low-income couples was evaluated and reported on by Mathematica, the Rossi dictum again prevailed: “[Building strong families] did not make couples more likely to stay together or get married…it did not improve couples’ relationships.”

Pearlstein does strike a new note, not commonly seen among advocates of strong and stable families, when he raises the issue of the high incarceration rate in the United States generally, and the exceptionally high rates for blacks, which take so many black men out of the marriage market. Here he does have something new to propose: not anything that will reduce the incarceration rate, but some effort to reduce the extensive “collateral sanctions” that come with a prison sentence and make getting a job and rehabilitation so hard. Ohio may well be correct in forbidding ex-convicts to be auctioneers, but why should it forbid them a commercial driver’s license? He makes a surprising but reasonable point when he asks what has happened to “forgiveness.” When a prison sentence has been completed, should it not be easier to have a conviction vacated, after a period of good behavior, so that it is not a lifelong ball and chain?

On occasion Pearlstein argues that among the bad effects of the fragmented family is the increasing division in the United States between those who can make a good life on the basis of stable backgrounds and effective education, and those who cannot. He is speaking about increasing inequality, but not in the way it is usually addressed, in relation to tax policy. He pays no attention to how the effects of single parenthood might be moderated for children to some degree by economic measures, such as child benefits, as in Europe. He appreciates it when those on the left give attention to the problem of family fragmentation that so concerns him. Might he not pay more attention to the economic and social policies they advocate that could moderate the harsh effects of single parenthood or the economic consequences of divorce? Even if single parenthood and divorce are less frequent in Europe, their effects there, owing to such social measures, are not so harsh and divisive, and that could have been given more attention.

Nathan Glazer is professor emeritus of education and sociology at Harvard University.

This article appeared in the Summer 2012 issue of Education Next. Suggested citation format:

Glazer, N. (2012). Moynihan Redux: Sadly, still more single-parent families. Education Next, 12(3), 70-71.
