Of Pandemics and Assessments
At Ed Research Works, Saro Mohammed, Ph.D. fights the injustice of knowledge-hoarding by mobilizing research to foster the best learning experiences for each learner. Using her 15 years' experience as a researcher, evaluator, and advisor to public, private, and non-profit education programs, she conceptualized the Just Ed Research design principles to ensure research serves all communities.
In October 2022, the first “post”-pandemic (or near-post-pandemic) results were released from our National Assessment of Educational Progress (NAEP), commonly referred to as the Nation’s Report Card. They caused quite a stir, with local and national media outlets publishing headlines ranging from the relatively benign (“Student math scores are down from pre-COVID levels, the National Report Card finds”) to the somewhat alarming (“Parents Are Owed the Truth About Learning Loss. NAEP Proves It”). Still others, including the DLC, reported on attempts to dig deeper into the results to identify and name specific underlying causes for this “learning loss.” In reading the pieces that tried to follow the story of the data more closely, I found myself annoyed at the back-and-forth.
My reaction boils down to three main issues:
How valid are NAEP 2022 and other assessment results given the context in which they were administered?
Do these assessments - and NAEP 2022 in particular - tell us anything new?
Instead of mostly looking back, can we focus on where we go from here?
My thoughts on these issues follow.
Validity - Do the scores mean what we think they mean?
When any psychological assessment (which includes assessments of academic performance) is created, we seek to understand how “good” that assessment is by measuring its reliability and validity. Reliability can be thought of as the assessment’s consistency: how likely is it that the same person would get the same score when taking the assessment at different times (assuming they have the same level of skill each time)? If you think about a ruler, it is a highly reliable tool: odds are an object that is 10 inches long will always be measured as 10 inches long by the same ruler, whether you measure it once or 100 times. Validity builds on that concept and tells us the extent to which we are measuring what we think we are measuring.
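For readers who want the concept in concrete terms, here is a minimal sketch of how test-retest reliability is commonly estimated: as the correlation between two administrations of the same assessment to the same people. The scores below are invented for illustration and are not NAEP data.

```python
import numpy as np

# Hypothetical scores for five students taking the same assessment twice,
# under comparable conditions (invented for illustration; not real data).
first_administration = np.array([210, 245, 198, 260, 232])
second_administration = np.array([208, 250, 195, 263, 230])

# Test-retest reliability is often estimated as the Pearson correlation
# between the two administrations; values near 1.0 indicate the instrument
# yields consistent scores, like the ruler in the analogy above.
reliability = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Estimated test-retest reliability: {reliability:.3f}")
```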
An assessment cannot be valid if it is unreliable, and the NAEP is rigorously examined every time it is administered to confirm its reliability and validity. The NAEP is administered in person, and the two assessments being compared were given between January and March in both 2020 and 2022. Think about this for a moment: can we honestly say that conditions for students were the same pre-pandemic and post-pandemic? Aside from issues directly related to learning, the vast majority of students experienced some combination of their own illness, illness and possible loss of family members, isolation, and other traumas. These are all understood by most educators and policymakers to have impacted learning. It should also be clear that these issues changed the conditions under which students were taking NAEP and other assessments, calling into question the validity, if not also the reliability, of these assessments.
It is clear that students’ physical, cognitive, and affective approach to this test in 2022 would have differed in nontrivial ways from their approach in 2020. Are we confident that these scores are measuring what we think they are, and what they have measured in the past? I am not.
Novelty - Is this evidence of learning “loss” new information?
Let’s assume that we accept the validity of the 2022 NAEP scores. Is the story these data tell us new information? Essentially, there is an overall decline in math and reading performance by 9-year-old students in 2022 compared to 9-year-old students in 2020, with lots of variation across the sample (from school to school; across proportions of remote and in-person learning time; across geographic areas; and across income levels, ethnicity, and other characteristics). This variation is consistent from 2020 to 2022, although it is more extreme (in other words, the gaps are bigger); the overall trend of declining average performance, however, is the opposite of what we have seen in the past, when scores were generally rising.
Is this new or surprising information? Did we really need to administer this assessment two years earlier than scheduled in order to know that students didn’t learn as much in the last two years as in the two years prior?
I understand our need to document what we perceive about learning, and I’m a proponent of accountability in general (one thing accountability did do for us in K12 in the US is lay bare the fact that our education system was serving only some students and consistently underserving, or not serving at all, entire communities of students). But I also believe in a time and place for everything, and from my perspective, January through March 2022 was neither the time nor the place to assess our 9-year-olds’ math and reading skills via an additional in-person assessment.
Did we expect productivity in any other industry to be maintained from March 2020 through March 2022? In fact, during that period we saw major disruptions across most, if not all, sectors of the economy and society. And, importantly, we are starting to see those other sectors shake off pandemic impacts as they adjust to post-pandemic conditions.
Which brings me to my third issue.
Where do we go from here?
Regardless of all I’ve said above, here we are with NAEP scores from 2022, so we may as well use them. All of the articles I mentioned in my opening are, I think, trying to do just that. Authors are looking at the data and trying to answer questions about what did happen versus what should have happened, and what we might do differently next time. I am not frustrated with that approach in general. However, I am frustrated by the questions that are being asked of these data. Was emergency remote learning effective or not? Did school closures lead to learning loss or not? To me, these are not questions that will help us figure out how to plan for next time (oh, please let there not be a next time).
We have the opportunity to answer important questions, in a data- and evidence-driven way, because of digital learning tools, and it seems we are choosing not to. I think the only way to move forward is to recognize that, now more than ever, each individual student is at a different place, on a different path, and has different needs on their learning journey. And we have the tools, the structures, and the ability to know, for each student, where they are, what path they are on, and what their needs are, today. This is the type of assessment and data we should be poring over and dissecting. A global pandemic changes things, and we need to acknowledge that things are different now. Rather than hand-wringing or finger-pointing about all the learning that was “lost” to COVID-19, I think we owe it to each K12 learner to double down and invest in their individual learning from this moment onward.