Putting the Experiment Back in the Experimental Sites Initiative
Policy Paper
Jan. 23, 2018
In a nation where postsecondary education is increasingly required to compete in a global market and enter the middle class, improving federal higher education policy and the provision of federal student aid is more important than ever. The U.S. Department of Education’s Experimental Sites Initiative is designed to help policymakers test higher education policy and program improvements on a small scale to learn what works. Those improvements would help more Americans access higher education and complete their degrees. They would also mean better value for taxpayers, who fund federal student aid programs.
The Experimental Sites Initiative has existed in one form or another since the mid-1980s. To date, the Department of Education (the Department) has launched around 30 “experiments” through the initiative, most of which have focused on testing new rules for federal student aid programs. The initiative allows the Department to grant flexibility to institutions of higher education—colleges and universities—to test and evaluate potential federal policy changes, such as providing Pell Grants to high school students to assess whether doing so increases their college-going rates. The opportunity it gives policymakers to “try before you buy” is valuable: with $130 billion flowing to institutions each year through the federal financial aid programs, even small changes to student aid policy can affect millions of students. Moreover, beyond student aid, the initiative creates the potential for small-scale experimentation and evidence-building to inform future policy changes.
In reality, however, the Experimental Sites Initiative has been underutilized as a learning tool. Over the years, the Department has used the initiative for varying purposes, including providing new flexibility to institutions and advancing policy changes in the absence of congressional action. Most experiments collected only descriptive statistics—information that is useful to track, but that cannot answer questions about whether the policy or program adjustments produced the intended effects, or for whom. Some experiments have not collected even those basic data. The only two experiments for which the Department designed credible evaluations had participation too low for the analyses to be completed successfully.
Today, the Department of Education, Congress, and education advocates should seize the opportunity—and the responsibility—to revive the original mission of the initiative: catalyzing innovation and rigorous learning about what works in higher education. That means putting the “experiment” back in the Experimental Sites Initiative by designing, funding, and carrying out true evaluations.