A Lifeline for Online Education? A New Critic of CREDO's Virtual Twin Studies

BY PETER ROBERTSON
Independent Digital Learning Researcher and Consultant

Robert Pondiscio’s admission earlier this fall that “there are serious questions about the validity” of the Center for Research on Education Outcomes’ (CREDO) findings about online learning outcomes felt like a lifeline to online educators who have been treading water for almost a decade. Senior Fellow Pondiscio wrote a column for the Fordham Institute based on what he heard on a panel he hosted, titled “Is It Time for a Fresh Look at Online Education?” I recommend watching the whole 75-minute panel recording. One crucial point is that CREDO’s “virtual twin” methodology, which is at the core of several widely cited reports criticizing online school effectiveness, fails to acknowledge essential ways that online students differ from brick-and-mortar students.

CREDO introduced its virtual twin methodology in a 2009 study of charter school performance that consolidated longitudinal student data from 16 states. The study compared charter schools with traditional schools by pairing charter school students with “twins” based on race, gender, age, English language learner (ELL) status, special education status, and free lunch status. The idea was that students should have test scores comparable to those of their “twins,” and if they didn’t, the difference could be attributed to the school. The available twinning attributes obviously disguised differences between students, and many of those differences were likely systemic and significant. But CREDO muted pushback against its methods and findings: it found some charter schools under-performing and others over-performing, its scholars were able to drag opponents into methodology debates the public couldn’t follow, and the sheer size of the study conveyed seriousness.
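To make the matching logic concrete, here is a minimal sketch of a virtual-twin comparison as described above. The field names, the exact-match approach, and the twin_gap helper are illustrative assumptions, not CREDO’s actual code or variable set; the point is simply that only a handful of coarse attributes define a “twin,” and everything else about a student is invisible to the comparison.

```python
# Illustrative sketch only - not CREDO's actual implementation.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Student:
    race: str
    gender: str
    age: int
    ell: bool          # English language learner status
    sped: bool         # special education status
    free_lunch: bool   # free/reduced-price lunch status
    score: float       # test score

def twin_key(s: Student):
    """Only these coarse attributes define a 'twin'; motivation, health,
    bullying history, and family circumstances are invisible to the match."""
    return (s.race, s.gender, s.age, s.ell, s.sped, s.free_lunch)

def twin_gap(charter: list[Student], traditional: list[Student]):
    """Average (charter score minus mean score of matched 'twins'),
    a difference the method then attributes entirely to the school."""
    pool: dict[tuple, list[float]] = {}
    for t in traditional:
        pool.setdefault(twin_key(t), []).append(t.score)
    gaps = [c.score - mean(pool[twin_key(c)])
            for c in charter if twin_key(c) in pool]
    return mean(gaps) if gaps else None
```

Even in this toy version, two students with identical keys can differ in all the ways discussed later in this piece, and the design has nowhere to put that information except the “school effect.”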

In 2015, CREDO coordinated the release of a virtual twin study about online schools with another study by Mathematica and a policy framework from the Center on Reinventing Public Education (CRPE). The three documents had the collective effect, as one education management organization (EMO) leader put it at the time, of throwing online schools out of the charter schooling lifeboat. Charter school enrollment had been growing at double-digit rates for decades, and political criticism of charters’ results, public funding, and use of for-profit vendors had mounted. At the time, most online schools were charters, and most online students attended charter schools managed by two for-profit EMOs. Whether intentional or not, throwing online schools out of the lifeboat signaled the charter sector’s concern about for-profit vendors.

CREDO also used the 2015 online school report to bolster the credibility of the virtual twin methodology, which it had been trying to establish as the gold standard for sorting out competing claims about charter schools and convincingly demonstrating that some outperform traditional public schools. Some saw throwing online charter schools out of the metaphorical boat as evidence that virtual twins could be an objective tool for weeding out low-performing charters and demonstrating that others were charting a path toward meaningfully better public education. Those who argued the approach was flawed were ignored or shouted down by a tight network of committed charter school policy advocates looking to help charters avoid growth caps and other policy backlash. The Fordham Institute was part of that network, with its then-President Michael Petrilli blaming online schools for weighing down the overall charter sector. (To be clear: Committed opponents of charter schools, like the Network for Public Education, never bought the “virtual twin” methodology. But lots of education policy wonks did.)

That 2015 CREDO report left online schools treading the waters of disreputable education policy, trying to keep afloat. Those working in online schools always knew they were making important differences in students' lives (and literally helping save some of those lives). It was always a struggle to quantify that impact in a way skeptics would understand. However, after the 2015 CREDO report, defensiveness seemed the only possible stance. Policy changes varied from state to state, but the national growth rate of online school enrollment slowed. COVID-19, which underscored that online learning is here to stay, also resulted in bad emergency remote teaching that fed the CREDO narrative.

At the same time, research made possible by the “natural experiment” of the COVID-19 emergency has finally begun to document how online students differ from brick-and-mortar students. Thanks to Dennis Beck, Ian Kingsbury, and Ben Scafidi, academic research now exists that clearly shows what online educators have always known: Students come to online schools to solve problems. The problems they come to solve - and what’s involved in their problem-solving processes - mark them as different from brick-and-mortar students in ways virtual twinning can never account for, ways that introduce systemic bias into virtual twin-based research. Refugees from bullying, a significant proportion of online students, face challenges that are invisible to virtual twinning but nevertheless have academic effects. Because online students don’t get a school lunch even if their parents fill out the free lunch applications, many families never file them, so poverty is underreported in online schools and poor online students can be matched with middle-class virtual twins. On top of the twinning problems, online students are still mostly required to test in person, in “artificial testing conditions” that demonstrably increase stress, distraction, and mental fatigue and likely lower their test scores.

Some of that research was shared in “Is It Time for a Fresh Look at Online Education?” Robert Pondiscio listened to his guests and realized “there are serious questions about the validity” of CREDO’s findings. We still don’t understand enough about how online students differ from brick-and-mortar students or what that should mean for teaching and learning. Some states are finally letting online students test in a familiar learning environment, as brick-and-mortar students do, but most states still don’t. That alone will not fix how we measure online students’ learning or close any real achievement gaps. We are far from having accountability in education for what parents actually care about, and still further from knowing how to measure that - in online schools and traditional brick-and-mortar schools alike.

But at least a Fordham Institute senior fellow has looked at the evidence and asked whether it is time for a second look at online education. That offers hope that online education might get back its seat in the boat and help chart the course to a more personalized and inclusive educational future instead of being seen as a problem to be thrown overboard. Students and families with access to online options continue to choose them, and it’s time we all work together to understand why, what they are getting from those options that they aren’t getting from traditional brick-and-mortar schools, and what that can mean for the future of education.
