“Don’t Take Their Word For It”: Waiting for Evidence on the New SAT & ACT
Blog Post
July 20, 2016
Recent changes to federal education law give states explicit permission to ditch their old high school assessments and use college entrance exams like the SAT and ACT to evaluate the performance of their schools instead. Close to ten states have jumped at the opportunity in the last year alone: Connecticut’s legislature, for instance, recently passed legislation discontinuing high school end-of-course exams mid-year and replacing them with the SAT. Oklahoma also began a pilot program to administer the ACT to 11th-grade students across the state and, shortly after, eliminated its existing program of high school tests.
During a National Forum hosted by the Education Commission of the States this June, representatives from both Connecticut and Oklahoma shared more about their states’ approaches. Gayle Slossberg, State Senator from Connecticut, spoke with enthusiasm about her state’s swift transition to the SAT, which happened mid-school year. She and other lawmakers believed it was the right choice for students, so, she said, “we thought: why should we wait?” In contrast, Joy Hofmeister, Oklahoma’s State Superintendent, lauded her department’s pilot program, saying that Oklahoma would not rush into a full commitment to a new assessment plan. Instead, she said, Oklahoma would wait to see the science on new assessments, because “it would be a disservice to simply go with what’s popular.”
If the trend toward using college entrance exams like the SAT and ACT to evaluate schools continues, states should consider Oklahoma’s pilot as a model for implementation. I have made the case before that the sole use of college entrance exams in high schools poses a risk to the quality of learning, the efficacy of accountability systems, and ultimately, the success of high school students. Given these potential pitfalls, states looking for an effective long-term test option should allow sufficient time to determine whether college entrance exams are appropriate measures of student learning before fully committing to them.
After adopting the Common Core State Standards back in 2010, many states collaborated in search of an effective long-term test option, spending years developing and piloting the PARCC and Smarter Balanced assessments before fully committing to them. Both the PARCC and Smarter Balanced assessments—which many states recently replaced (or are considering replacing) with the SAT or ACT—have also been subject to independent evaluation. A recent study conducted by the Human Resources Research Organization (HumRRO) evaluated both PARCC and Smarter Balanced (in addition to the ACT Aspire, a system of assessments for lower grades offered by ACT Inc., and MCAS, the Massachusetts high school subject assessments) on three major elements of test quality: their alignment to the Common Core State Standards, the depth of ‘thinking skills’ assessed, and how accessible they are for English Language Learners (ELLs) and students with learning differences. Researchers found that PARCC and Smarter Balanced were strongly aligned to the Common Core standards, and generally possessed equal or greater depth than both ACT Aspire and MCAS.
No such research yet validates the SAT or ACT exams. We do not yet have evidence that these college entrance exams are aligned to the Common Core State Standards, much less to state-specific standards like Oklahoma’s. The College Board (the developer of the SAT) released a new version of the exam this school year, which it markets as aligned to the Common Core. Meanwhile, ACT Inc. says that its exam is updated incrementally, but stops just short of claiming alignment to the Common Core. Whatever their official stances are on alignment to the Common Core specifically, and college-ready standards in general, neither vendor has had sufficient time to submit its claims to independent research. Until they do, states have no evidence, beyond these two organizations’ assurances, that the new versions of these college entrance exams are aligned to their own standards.
If states were using these exams purely to ease the process of applying to college, this shift might not be such a risk. But when states throw out their grade-level or subject-specific tests in favor of a college entrance exam, they have no other source of uniform data about how students are performing on their state academic standards. Left with only their college entrance exam, states must count on that test to give accurate information on how students are performing in high school. Until they can be assured that the information students receive from those tests actually reflects what they’ve learned in school, it is a significant risk to (a) deprive students of any other shared information on how they are progressing in school and (b) hold those students, educators, schools, districts, and the state accountable for results.
Given these concerns, states that feel compelled to drop their high school assessments in favor of the SAT or ACT would do well to adopt a more measured “wait and see” approach like Oklahoma’s. That way, at least, they will have several years to determine whether the perceived benefits of using a college entrance exam outweigh the risks. Beginning with a pilot program at least increases the likelihood that if states decide to bring college entrance exams to scale, they will be able to do so effectively.
In a separate session on the future of state accountability systems, another state policymaker urged states to wait for the evidence to come in before adopting statewide college entrance exams: “don’t take their word for it.”