Q&A with Dana Suskind and John List on Why Some Early Childhood Interventions Lose Impact at Scale
Blog Post
July 8, 2021
As the nation rebounds from the pandemic and state leaders determine how to spend early childhood dollars, there is more need and more opportunity than ever to invest strategically in early childhood programs. Early childhood interventions have a strong evidence base demonstrating profoundly positive, lifelong benefits. But moving from a successful pilot to a state-level program requires careful design in order to achieve the same positive results at scale. A new volume, edited by Dana Suskind and John List, features research that examines the science of scaling as well as promising approaches to overcoming the scale-up effect. To learn more about the science of scaling up interventions, I interviewed Dana Suskind and John List via email.
Congratulations on the publication of the book you edited, The Scale-Up Effect in Early Childhood and Public Policy: Why Interventions Lose Impact at Scale and What We Can Do About It, and the associated policy brief. The publication of the work coincides with historic investments and proposed investments in early learning. As decision-makers at various levels build their short- and long-term strategies, what impact do you hope your edited volume will have?
The historic levels of interest and investment we are seeing have the potential to be truly transformative. But it’s really important to acknowledge that scaling up these programs—from a research setting to a real-world context—has been challenging.
We hope that The Scale-Up Effect will encourage and equip decision-makers to actively seek to understand whether available research is applicable to the policy need at hand, rather than assuming it is. And we hope it will shed light on the different aspects of implementation necessary for success at scale. Ultimately, we hope individuals who develop, study, adopt, and implement early childhood programs and policies will leverage the recommendations shared in the book so that more families can reap the benefits of high-quality, evidence-based programs.
To the uninitiated, scaling something that already exists should be easy. You just copy what other people already figured out and do more of it. But in practice, scaling from a model to a system can be complex. What are some of the complexities of scaling a model program?
Indeed, most of us think that scalable ideas have some silver bullet feature, i.e., some quality that bestows a “can’t miss” appeal. But that’s not the case. There is no single quality that distinguishes ideas that have the potential to succeed at scale from those that do not.
So many evidence-based programs show promise in research settings but fail in the real world. The literature refers to this as voltage drop: the phenomenon in which an enterprise falls apart at scale and its positive results fizzle.
The reality is that most interventions or programs are designed to benefit a particular population in a particular situation and context. Just because a program succeeds there does not mean, much less guarantee, that it will produce the same or even similar results with a different population in a different setting.
Our work is motivated by the goal of digging into the economics of scaling: understanding how the benefit-cost ratio changes when we move from the small scale to the large scale, and applying that understanding to generate more effective, scalable programs and policies.
What are some of the main reasons programs at scale do not achieve the same level of results as a model or pilot program? How can decision-makers or program implementers avoid this?
Scaling is a fragile concept. We've found that scalable ideas share five key traits, which we call the BIG5. A deficiency in any one of them can render an idea unscalable, so just being aware of them, knowing what to look out for and ask about, is a critical first step toward avoiding a voltage drop.
First, and this sounds obvious, is that there must be adequate evidence to support scaling! But what constitutes sufficient evidence? We advocate that a post-study probability of at least 0.95 should be achieved before enacting public policies. In practice, this amounts to three or four well-powered, independent replications of an original finding.
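To make that threshold concrete, here is a minimal sketch (our illustration, not a calculation from the book) of how a post-study probability rises with independent, well-powered studies. The prior of 0.02, power of 0.80, and significance level of 0.05 are assumptions chosen for illustration:

```python
# Illustrative sketch: how belief that a finding is real rises with
# independent, well-powered significant results. The prior, power, and
# alpha values are assumptions chosen for illustration.

def post_study_probability(prior, power=0.80, alpha=0.05, replications=1):
    """P(hypothesis is true | k independent significant results)."""
    p_true = power ** replications    # chance all k studies detect a real effect
    p_false = alpha ** replications   # chance all k results are false positives
    return (p_true * prior) / (p_true * prior + p_false * (1 - prior))

for k in range(1, 5):
    print(k, round(post_study_probability(prior=0.02, replications=k), 3))
# 1 0.246, 2 0.839, 3 0.988, 4 0.999
```

Under these assumed numbers, a lone significant finding leaves the post-study probability far below 0.95; only after several independent confirmations does it clear the bar, and exactly how many depends on the prior and power one assumes.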
The second element of the BIG5 is representativeness of the population. In other words, you can't assume that the small subset of people for whom an idea worked originally is representative of the general population that needs to be served.
Third is the representativeness of the situation. If original research results are dependent on the specific context in which the study was conducted, or if they are not achieved in a policy-relevant environment, we can expect the benefit-cost profile to change at scale.
A fourth key aspect pertains to spillovers, which you can think of as a corollary of the Law of Unintended Consequences: what happens when implementing your idea has unplanned effects that backfire and diminish your results.
Finally, the fifth element of the BIG5 concerns marginal cost, or the supply-side economics of scaling: does your idea have economies or diseconomies of scale? For example, consider a program that requires school districts to hire the best teachers to deliver results. Identifying and hiring the top 30 might be easy, but what about the 300th best teacher? Can you really differentiate them from the 700th best?
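As a rough numerical illustration of those diseconomies (our toy numbers, not data from the book), suppose teacher effectiveness is approximately normally distributed. Then each additional hire pulls from further down the talent distribution, and the gap between candidates shrinks quickly:

```python
# Illustrative sketch: candidate quality falls as hiring scales, assuming
# a hypothetical normal distribution of teacher effectiveness scores.

from statistics import NormalDist

talent = NormalDist(mu=50, sigma=10)  # assumed effectiveness scores

def nth_best_score(n, pool_size=10_000):
    """Approximate effectiveness of the n-th best teacher in the pool."""
    return talent.inv_cdf(1 - n / pool_size)

for n in (30, 300, 700):
    print(n, round(nth_best_score(n), 1))
# 30 77.5, 300 68.8, 700 64.8
```

Under these assumptions, the 300th and 700th best teachers are only a few points apart, so a program whose results depend on top-tier talent faces rising costs, and falling quality, as it scales.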
Leaders want to invest in evidence-based programs and policies. However, there is limited funding for research, program evaluation, or new model development in early learning. What are your recommendations for the types of research and studies that are needed to learn more about scaling?
To be truly effective for policy purposes, research studies should be designed, from the very beginning, with an idea of what a successful intervention would look like fully implemented in the field, applied to the entire subject population, and sustained over a long period of time. We should prioritize studies that enroll representative populations and are implemented in representative contexts. And we should encourage replication studies that seek to confirm impressive original findings, as well as long-term studies that examine the effects of a program over a sufficient time frame.
Jumping the gun and acting, or choosing not to act, based on early evidence can have substantial real-world consequences. Consider the Moving to Opportunity program, in which families from impoverished neighborhoods got the chance to move to better-off areas. The program was a success, but that wasn’t immediately evident. The substantial returns weren’t fully clear until participating children reached adulthood.
Academia, however, offers few incentives (or resources) for tracking the effects of a program over the very long term or for publishing replication studies or studies without a wow-factor. But it’s just as important to know what doesn’t work in certain situations as it is to know what does.
Dana Suskind is Founder and Co-Director of the TMW Center for Early Learning + Public Health; Professor of Surgery and Pediatrics; and Director of the Pediatric Cochlear Implantation Program at the University of Chicago Medicine.
John List is Founder and Co-Director of the TMW Center for Early Learning + Public Health; Kenneth C. Griffin Distinguished Service Professor of Economics at the University of Chicago; and author of the forthcoming book The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale.