Assessment Literacy

This is the first of several essays on assessment literacy that noted experts and practitioners are authoring especially for NAAD.

ASSESSMENT LITERACY: MAGIC BULLET?

W. James Popham

University of California, Los Angeles

11/3/2019

To get underway, my own answer to the question posed in the above title is an assertion that assessment literacy is, indeed, a magic bullet capable of doing gobs of good for the nation’s educators. Pushing that answer with a mite more verve, I believe that increasing the assessment literacy of America’s educators constitutes, by far, the most cost-effective way to improve our schools.

Surely, other ways exist to boost the quality of U.S. schools. For instance, by substantially reducing the number of students to be taught by each teacher, we would see students’ learning improve because meaningfully lowered pupil/teacher ratios would permit teachers to better tailor their instruction to each child’s strengths and weaknesses. As another example, if we were to provide truly big-bopper boosts in teachers’ salaries, this would lure more high-talent individuals to the teaching profession so that, over time, those talented teachers would have a beneficial impact on instruction in our schools.

Yet, both of those sure-winner ways of improving our schools cost more money than, for the foreseeable future, our society is willing to shell out. Dandy ideas, no dollars. In comparison, the cost needed to boost educators’ assessment literacy is downright trifling.

But what is this “assessment literacy” that I’m touting as cost-affordable magic? Well, although minor differences exist in the way that different authorities define the concept, here’s the definition I’ve been using for more than a decade: “Assessment literacy consists of an individual’s understandings of the fundamental assessment concepts and procedures deemed likely to influence educational decisions.” That’s right, if increased assessment literacy does not improve the quality of real-world educational decisions, I have no interest in it.

My own belief, then, is that if educators—all the way from a beginning teacher up to a state school superintendent—genuinely understand a modest number of assessment’s most basic concepts and procedures, better assessment-dependent decisions will be made. That is, fewer mistakes will be made about students’ instruction. Currently, I think there are only about a half-dozen such must-be-understood procedures and concepts.

To illustrate, general agreement now exists among measurement mavens that assessment validity hinges on the quality of evidence provided to support the accuracy of test-based interpretations for a test’s intended use. Thus, if a district’s leaders are comparing schools’ quality using students’ scores on standardized tests for which no evidence-based argument supporting such evaluative comparisons exists, it is nearly certain that many school-evaluation mistakes will be made. If those district leaders genuinely understand the concept of validity, however, they can either stop using the offending standardized tests or, if unable to do so, can explain to relevant individuals why purpose-specific validity evidence is a requisite.

Increased assessment literacy will not, overnight, lead educators to the land of mistake-free decisions, but investments in boosting assessment literacy can clobber a crate-full of mistaken test-linked decisions. Assessment literacy, for the economy-conscious among us, not only turns out to be a magic bullet, but it’s also a blinking bargain!