Worry Less about Your High Schoolers' Testing Time and More about Their Tests' Quality
Weekly Article
March 24, 2016
The proudest moment of my teaching career was probably also the loudest. Nearly in tears, I had called Shantall*’s house, knowing she was busy celebrating her fifth grade graduation with a big family party. “Shantall? This is Ms. Swisher. I got your scores back. You got a four.” I heard the phone take a tumble as Shantall yelled “MOM. MOM, COME HERE. I GOT A FOUR!!!” The rest of that call, placed to celebrate a Level Four, the highest possible score on North Carolina’s end-of-grade assessments**, was mostly incomprehensible from the sheer volume of hollering and crying (admittedly on both ends of the line). Shantall had never scored above a Level One on a state assessment, and this was a moment of tremendous pride for her, for her family, and for me, her teacher.
Not everyone is celebrating end-of-year tests. On the contrary, throughout the U.S., assessment is the topic of much controversy and conversation, most of which focuses on whether tests are “good” or “bad” for students—and therefore misses the point entirely. Rather than accepting this false dichotomy, parents and educators alike should examine why we want to measure our students’ academic accomplishment, and which tests, given at which points, can help us achieve that.
Starting in the third grade, my students, like students all across the state of North Carolina, received two signals about their progress: end-of-grade assessments in math and reading each year. A third, in science, was given at the end of elementary school. My students took their scores seriously, and, with guidance, were intentional about using them to set goals for improvement. At the end of each year, families could count on receiving unambiguous signals about how well their students were doing, which they could in turn use to start conversations about how our school could better serve them going forward. Thanks to the North Carolina Legislature, my former students will continue to receive these clear, consistent signals all the way through high school. (North Carolina high schoolers take end-of-course exams, which help them gauge their readiness at the end of each key course.) And thanks to a federal law enacted in 2001, all students in grades three through eight in the United States are guaranteed that same information.
Unfortunately, fewer and fewer high school students around the country are guaranteed the same information that my students will continue to receive until they graduate. New data from New America shows that, in the past year, an unprecedented number of states chose to stop providing students with clear, specific information about their progress at regular intervals in favor of a cheaper, one-off option: the SAT and ACT. States’ understandable concerns about over-testing in U.S. schools—a topic that has received much attention in recent years—have morphed into a movement that's undermining students' access to information on their academic achievement.
The SAT and ACT provide tempting alternatives to other high school assessment systems. For one, they are cheap, mostly because students only take one exam. For another, a single exam gives states the appearance of addressing the issue of over-testing. To be sure, concerns about both cost and test time are often well-founded. But as salient a concern as the financial burden might be, these exams do not represent smart cost-saving; they pose a real danger to the quality of the education your high school student receives. And in the case of over-testing, it is not these statewide, end-of-year exams that represent the majority of a student’s testing burden, but additional tests purchased and administered locally. The ‘one test fits and solves all’ solution, then, doesn’t actually solve anything, and, in fact, will almost certainly create new problems. Here’s why you and your student should be worried about how high schools are using college entrance exams:
First, college entrance exams can’t give you or your student the right information about what they are (or aren’t) learning. Entrance exams are designed to predict how well students will perform in college, not to assess what they learn in high school courses. When states forego their high school exams in favor of the SAT or ACT, students lose the opportunity to understand how they’re progressing on specific sets of knowledge and skills (e.g. “how well do I understand Algebra I?”), because college entrance exams can’t give them that much information. They’re simply not designed for that purpose.
Second, college entrance exams are too little information, too late. In Colorado, students and their families used to receive six major data points throughout high school—one end-of-course math and one English language arts (ELA) exam every year from ninth through eleventh grade. Colorado chose to discontinue those exams this year, opting to only administer the ACT in eleventh grade, meaning students have gone from receiving information about their readiness at six points throughout high school to one point at the end of eleventh grade. In addition to giving students too little, these assessments also come too late. The end of eleventh grade, when most students take the exam, is an awfully late time to receive a wakeup call about your readiness to graduate, especially considering scores may not arrive until summer. If students are to significantly alter their trajectories toward college readiness, they need early and consistent signals to chart their progress, and entrance exams are not equipped to do that.
Third, college entrance exams can’t give teachers and administrators enough information to adequately support your child. Educators are charged with helping students master a set of standards that, working in concert with one another, build students up to college and career readiness. College entrance exams do not have the capacity to assess all of these standards in depth, so educators are left without a clear picture of how students performed in their subject area, and on specific standards. This deprives teachers of important information that they need in order to better serve their students—just like a car salesperson is unlikely to improve sales without knowing their sales figures, teachers are unlikely to improve their teaching without reliable information on how their students are doing. High school exams designed to assess what students learn in their courses are also vital components of other school-level decision-making functions, such as triggering targeted interventions for students who are struggling or ensuring students who are ready for advanced opportunities like AP or college coursework have the opportunity to participate. College entrance exams are not designed to assist with any of these functions, so when they are used as a replacement for standards-based exams, educators and administrators are left without consistent data with which to make strategic decisions about how to help students.
Fourth, the use of college entrance exams has the potential to transform high schools’ curriculum into little more than test-prep. That is, teachers’ instructional focus can narrow when students take fewer tests. Between our 2015 and 2016 state scans, eight new states shifted to the use of college entrance exams to meet federal high school accountability requirements, as compared to two the previous year. There are potential incentives (and consequences) attached to these exams. When adults are held accountable for how well students perform on the SAT or ACT, rather than for how well they master academic standards, it’s not difficult to imagine that educators could feel pressure to teach to the exam rather than to the standards. College entrance exams cannot (and, again, are not designed to) reflect four years of rigorous mathematics and ELA content, which means many important pieces of a well-rounded education are almost certainly bound to be lost if educators are teaching to the exam. Adapting instruction to one exam will be a step backward to the teach-to-the-test model decried in the early years of No Child Left Behind, under which students received narrow and restrictive instruction modeled after high-stakes exams instead of rich and challenging standards.
Last, but certainly not least, the newest generation of college entrance exams has not yet undergone rigorous evaluation. The two most widely used college entrance exams, the SAT and ACT, have both recently undergone changes to better align their content with the widely adopted college- and career-ready standards outlined by states adopting the Common Core. And while the evidence that the previous iterations of each exam were effective predictors of students’ readiness for college was mixed at best, there’s no evidence base for these new exams. Certainly, all exams need early adopters in order to build an evidence base, but it is troubling that so many states have moved so quickly to put all of their eggs in an as-yet-untested basket.
In the public debate around testing, so much of the conversation has been reduced to vastly oversimplified narratives about the evils of testing. This, despite the fact that in myriad other areas of our lives, we readily accept that assessment is necessary for any kind of growth or change. Want to get in better shape? You need to measure your progress. Want to bake a better loaf of sourdough bread? You’d better keep track of different things you’re trying—and taste loaf after loaf after loaf. Measuring learning is different (students are not bread), but not entirely. We need good tests because, as Shantall happily learned, good information on present performance is a basic prerequisite for preparing for a brighter future.
*Name has been changed to protect student privacy.
**North Carolina’s scoring system has since been altered—students can now score between a Level One and a Level Five.