1/1/2020
Understanding the Nation’s Report Card: What the NAEP Is and Is Not
When the latest National Assessment of Educational Progress (NAEP) scores were released in late October 2019, many pundits, journalists, and education observers swiftly deemed the data to be something of a disappointment.
Indeed, one might say the collective sigh—of exasperation, not relief—could be heard from coast to coast, with USA Today pointing out that “reading and math scores haven’t budged in a decade” and Secretary of Education Betsy DeVos declaring the latest scores to be evidence of widespread failure in America’s public schools.
Although it is all too easy to be drawn into the discourse, this is a good time to step back and consider what the NAEP is—and what it is not.
What the NAEP is
The NAEP is a “congressionally mandated project” administered by the National Center for Education Statistics, better known as NCES.
Launched in 1969, the assessment provides a way to gauge student progress on a national scale; it is nationally normed in the sense that its content is not bound to any one state’s curriculum standards. Selected students in grades 4, 8, and 12, drawn from rural, suburban, and urban districts alike, take the test every two years, and the sample includes students with disabilities and English language learners (ELLs). The NAEP is designed to measure “what America’s students know and can do” in math and reading, with other subjects sometimes included as well. Data from the tests is disaggregated, meaning results are broken down into racial and socioeconomic categories.
But really, what is the NAEP?
If the above “at-a-glance” summary left you with more questions than answers, you're not alone. Indeed, the answers to the following questions are usually missing from the quick-take news coverage that tends to follow every NAEP score release:
- Who actually takes the test? How are students chosen to participate in this random sampling of same-grade kids?
- Who actually writes the test questions? Is it teachers? Policymakers?
- What do NAEP scores actually tell us about the state of education in the nation?
Washington Post education writer Valerie Strauss recently argued that NAEP scores are “often misinterpreted,” using DeVos’s reaction as an example. In a speech delivered on the same day the 2019 NAEP results were released, DeVos characterized the overall decline in reading scores from 2017 to 2019 as a “student achievement crisis” that indicated “2 out of 3 of our nation’s children aren’t proficient readers,” even though proficiency on the NAEP should not be equated with grade-level performance.
What the NAEP is not
Although proficiency scores and grade levels are two different things, these yardsticks are conflated all too often, according to Tom Loveless of the Brookings Institution, who wrote about what he termed the “NAEP proficiency myth” after journalist-turned-education reform advocate Campbell Brown made a claim similar to DeVos’s in 2016. In fact, the NAEP’s “proficient” rating is a high-water mark of sorts, indicating performance above grade level. Given this context, Brown’s and DeVos’s splashy public announcements that “two-thirds of Americans can’t read at grade level” lose much of their impact. Here’s why:
- The National Assessment Governing Board (the entity that creates the framework for what should be on NAEP tests) has stated in writing that its evaluation scale of advanced, proficient, basic, and below basic does not match up with grade level.
- As Loveless pointed out, the NAEP’s own cut scores have been challenged by the National Academy of Sciences and the federal Government Accountability Office, raising questions about their validity.
- The NAEP website itself acknowledges that the NCES has yet to determine whether its “achievement levels are reasonable, valid, and informative to the public”; therefore, the evaluation scale should be viewed as experimental.
Navigating the in-between, and where to go from here
The NAEP is regarded as the “gold standard” of assessments because it is national (i.e., it is not tied to any one state’s standards), and the perception is that this makes it less flawed and more objective than state tests. However, that doesn't mean it tells us everything—or, indeed, that it tells us the right things—about the students who take it.
In a 2018 opinion piece published by the online education news outlet Education Week, Ian Rowe of the Thomas B. Fordham Institute argued that some factors that affect student achievement (family structure, for example) are not included in the disaggregated data. Consequently, the glimpse the NAEP provides into what students “know and can do” is often mistaken for the full picture—and, as Rowe explained, “inattentional blindness” can lead people to overlook important information because they are too focused on another element.
In other words, it is easy for observers and officials alike to look at the latest NAEP results and draw conclusions that confirm their own assumptions about the current state of public education. That said, there is still value to be gleaned from the NAEP, particularly with regard to trends that have stood the test of time. For instance, NAEP results have consistently shown us that the students who struggle the most—students of color who live in poverty, for example—continue to lag behind their white, wealthier peers.
While the 2019 NAEP scores deserve more nuanced appraisal and fewer quick takes, this succinct observation by USA Today reporter Erin Richards is a solid starting point: “America’s students are struggling with reading. And the country's education system hasn't found a way to make it better.”