Short-Term Pell Evaluation Report Leaves Policymakers and Observers Shrugging

Blog Post
March 18, 2021

In mid-December, the Education Department finally released a long-awaited evaluation report of one of its experiments, testing the efficacy of providing Pell Grants for very-short-term programs. But the report did little but disappoint onlookers who had hoped for substantive new findings.

The short-term Pell pilot was launched in 2011 under the Experimental Sites Initiative, which grants the Education Department authority to waive certain statutory or regulatory requirements for an institution in order to test a new policy idea. In this case, the Department waived the minimum required program length (currently set at 15 weeks and 600 clock hours), allowing students in job-training programs as short as 8 weeks and 150 clock hours to receive Pell Grants. (A separate experiment, launched at the same time and involving many of the same institutions, tested the efficacy of providing Pell Grants to students who had already earned a bachelor’s degree, currently allowed only for certain types of teaching credential programs.)

The short-term Pell experiment grew out of the popular, if entirely unproven, notion that very-short-term programs could quickly get workers employed in well-paying jobs, an old idea whose popularity was renewed by the Great Recession. And unlike the vast majority of the Department’s Experimental Sites Initiative projects, the short-term training experiment was actually designed to produce real research, with random assignment of student eligibility and an independent evaluation contract awarded to assess its effectiveness. Department officials promised that the experiment would answer the question of whether providing Pell Grants for very-short-term programs “increases employment rates or wages.”

Nearly a decade after that promise, and three years after the experiment ended, policymakers are still asking the same question.

In December 2020, the Institute of Education Sciences released an evaluation of the experiment that didn’t answer whether the programs led to increased employment or wages. Instead, the report found that offering students Pell Grants for very-short-term programs notably increased enrollment in and completion of those programs (by 15 and 9 percentage points, respectively). That’s no surprise: given free money to enroll in programs, students did, in fact, enroll. On the matter of employment outcomes, IES noted that “employment and wage data could shed light on the economic value of the occupational training programs students completed.” But while the data to measure those kinds of labor market outcomes exist at other federal agencies, the evaluators didn’t invest in obtaining them, and thus couldn’t report either way on whether the programs paid off.

One thing we can say is that many of the credentials completed through the experiment do not lead to well-paying, safe, or secure employment. For instance, about 40 percent of graduates in the experiment completed a “high-demand” program (one expected to have a large share of job openings and/or rapid job growth), compared with 29 percent of students who were not offered a Pell Grant to enroll. But nearly two-thirds of those graduates (65 percent) were in transportation programs, like truck driving. According to the American Trucking Associations, annual turnover rates for truckers run as high as 95 percent at large truckload fleets. Another one in four graduates of the experimental very-short-term programs (24 percent) completed programs in the health professions. But common very-short-term healthcare credentials, like certified nursing assistant (CNA) programs, often lead to low wages and also carry high annual turnover rates: more than 80 percent among home health aides, for instance, and as high as 100 percent in nursing homes.

Concerningly, this was one of the few Experimental Sites Initiative pilots the Department has run that was actually designed to produce actionable, rigorous results, and it still failed to answer the key questions its designers set out to address. That’s why, last week, a bipartisan group of lawmakers reintroduced the Innovation Zone Act. The IZA would reshape the Initiative to establish experiments that can actually answer key policy questions, require the Department to produce evaluation results that hew to rigorous standards, and allow experiments to continue only as long as they are producing relevant information.

In the meantime, lawmakers are left without answers. The policy idea of extending Pell Grants to very-short-term programs is a tempting one for policymakers facing down unemployment of more than 10 million workers. But a substantial body of research suggests that, enticing as the idea may be, very few job training programs can be completed in just 8 weeks and still impart the skills needed to obtain a job paying family-sustaining wages.

The latest evaluation adds little to that body of research. After spending taxpayer dollars to send thousands of students through very-short-term programs, policymakers are no closer to knowing whether the juice was worth the squeeze — or what will happen if lawmakers spend billions more in taxpayer dollars on programs that may leave students with poverty-level wages or unemployed altogether.

