New MPI Report Points to Better Ways to Account for English Learner Academic Performance

Blog Post
April 1, 2024

Recent iterations of the Elementary and Secondary Education Act of 1965 (ESEA), from No Child Left Behind (NCLB) in 2002 to, most recently, the Every Student Succeeds Act (ESSA) in 2015, have brought improvements to the accountability systems used to track how well schools are serving the academic and linguistic development of students identified as English learners (ELs). Prior to NCLB, it was common for schools not to assess ELs’ content knowledge in areas such as English language arts (ELA) and math, and until ESSA, ELs’ progress towards English proficiency was excluded from these systems. This means that ELs’ linguistic and academic outcomes were not considered when determining how well a school was performing and meeting the needs of its students.

Today, ELs’ performance on statewide summative assessments in subjects like math and ELA, and whether they are making progress towards English language proficiency (ELP), are integral parts of how we hold schools accountable. Despite this evolution, researchers and advocates have for years been calling attention to the limitations of the data produced by statewide summative assessments and raising questions about how useful these data actually are in differentiating how well schools are serving EL students. These limitations are rooted in the fact that for students identified as ELs, these tests inadvertently end up assessing their language proficiency, even if that is not the intended purpose. As a result, it is difficult to parse out what these students actually know in the subject being tested (e.g., math or science).

A new report by the Migration Policy Institute (MPI) dives into this issue by examining whether there is a more nuanced way to measure ELs’ academic performance by taking their ELP into consideration. The report also explores how information about the learning environments in which students are situated, such as access to qualified teachers, could provide useful context for their academic and linguistic outcomes.

The report is composed of two parts. The first, directed by Dr. Pete Goldschmidt of California State University, Northridge, examined how states’ current statistical models can be refined to better align ELs’ ELP progress with their outcomes and growth expectations on state content assessments. The second, led by Dr. Megan Hopkins of the University of California, San Diego and Dr. Julie Sugarman of MPI, sought to identify a set of opportunity-to-learn (OTL) indicators that could be used to better contextualize the academic and linguistic outcomes of students identified as ELs. Together, the findings from these two studies seek to provide a more accurate understanding of schools’ contributions to EL success.

Accounting for English learners’ ELP level on academic assessments

The first study focused on parsing out the impact an EL student’s language proficiency may have on both their academic performance on statewide assessments in the current year (their “status”) and their progress over time (their “growth”). Currently, states calculate an EL student’s status and growth in ELA and math solely based on the score they receive on a standardized test. But as research has shown, these test scores may not accurately reflect what students know and how much they have learned within a given timeframe if they are still in the process of developing their academic English proficiency.

As a result, the lead researcher makes the case for developing “adjusted academic proficiency” cut scores, or expectations about where a student’s performance should be based on their ELP level and grade level. According to the author, these differentiated proficiency expectations (i.e., status) could provide a more complete picture of students’ academic performance, as it would be “misleading to say that EL students were behind in ELA without considering their ELP level.” But that is exactly what we have been doing up until now.

For example, under the current system a state would judge an EL student’s performance on the ELA test based on the statewide standardized grade-level expectation for all students in their respective grade. However, as the paper showed, an EL student’s ELP level considerably influences their ability to show their skills and knowledge in ELA and math compared to their non-EL peers. Under a more refined system, an EL student would be judged on how their performance compares to grade-level expectations for students at a certain ELP level.

To be clear, this does not mean states should create lower expectations for students identified as ELs. Instead, it allows the state to pinpoint whether their performance is a function of their ELP and grade level, or truly an issue with their grasp of the content. According to the author, doing so “offers more explicit insight into how ELs are performing in academic content areas and thus how schools are serving them.” And as their ELP progresses, these academic expectations will increase accordingly until they are reclassified.
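To make the idea concrete, the adjusted cut-score logic described above can be sketched as a simple lookup: instead of one statewide cut score for all students in a grade, the expectation varies by ELP level and rises as proficiency grows. The report does not publish an actual cut-score table, so every number below is a hypothetical placeholder used only to illustrate the comparison.

```python
# Illustrative sketch of "adjusted academic proficiency" cut scores.
# All scale scores and cut scores below are HYPOTHETICAL; the MPI report
# does not publish actual values.

# Hypothetical grade 5 ELA cut scores, differentiated by ELP level (1-5).
# Note the expectation rises with ELP, converging on the statewide cut.
ADJUSTED_CUTS_GRADE5_ELA = {1: 430, 2: 450, 3: 470, 4: 490, 5: 500}

STATEWIDE_CUT = 500  # the current one-size-fits-all expectation

def meets_adjusted_expectation(scale_score: int, elp_level: int) -> bool:
    """Judge an EL student's status against the cut for their ELP level."""
    return scale_score >= ADJUSTED_CUTS_GRADE5_ELA[elp_level]

# A hypothetical ELP-level-2 student scoring 455 is labeled "not proficient"
# under the single statewide cut, but meets the adjusted expectation for
# their ELP level -- the distinction the report is after.
student_score, student_elp = 455, 2
print(student_score >= STATEWIDE_CUT)                        # False
print(meets_adjusted_expectation(student_score, student_elp))  # True
```

The design point is that the comparison group changes, not the content standard: the same assessment score is read against an ELP-conditioned expectation, which tightens automatically as the student’s ELP level rises.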

In terms of measuring an EL student’s growth from one year to the next in a given subject, the author proposes two alternative statistical methods to determine if an EL’s growth (or lack thereof) in a particular subject is due to a school’s contributions to the student or simply the byproduct of the student having a better grasp of English since the last time they were assessed.

Currently, states measure growth by calculating how many points a student has increased or decreased on a given subject as measured by a statewide assessment from one year to the next without considering the student’s ELP level in either year. This method, however, does not provide insight as to whether any growth reflected was because the student learned more math or because they made progress in their English abilities. As a result, the author proposes two statistical methods that a state could use to capture the influence of ELP in academic growth model calculations.

These methods are capable of “generating less biased estimates about schools’ contribution to ELs’ academic growth,” and they increase “the likelihood that test scores capture schools’ true contribution to student learning.” Neither is currently being used for accountability.

Looking beyond assessment scores for EL accountability

In the second study, the authors talked to state and local education agency staff, representatives of community advocacy organizations, and parents of EL students to identify EL-specific indicators that could tell us more about the quality of language instruction support provided to these students. ESSA required states to adopt opportunity-to-learn (OTL) indicators, but to date none has focused solely on measuring EL program quality, as was explored in the study.

According to the participants, four indicators would be helpful in assessing quality of EL programming at any given school:

  1. Details about the instructional programs available for ELs: To measure this, schools would collect data on which model(s) ELs are enrolled in (e.g., English-only, dual language immersion, transitional bilingual); aggregate data on how each EL student receives English Language Development (ELD) services, including mode of delivery (e.g., pull-out classes, push-in support, co-teaching); and the amount of ELD instructional time each EL student receives.

  2. The extent to which ELs have access to qualified teachers: This could be measured by collecting data on the number of teachers (both EL specialists and content-area teachers) who have EL-related certifications or endorsements, such as an ESL or bilingual credential; the amount of EL-related professional development teachers participated in each year; and teacher-to-student ratios specifically for EL-classified students.

  3. ELs’ access to opportunities on par with their non-EL peers: This could be measured by EL participation in educational opportunities such as gifted and talented programs, qualifying for the Seal of Biliteracy, the extent to which they have access to advanced coursework, and participation in extracurricular activities.

  4. The ability of ELs’ families to engage and partner with schools: Specific measures that could be used to assess authentic EL family engagement could not be identified.

These OTL indicators would complement academic assessment data. According to the authors, these data would “contextualize EL student outcomes in order to understand patterns of progress or lack thereof, and gauge whether schools are meeting their civil rights obligations to provide support for EL students that facilitates their ELP and academic achievement.”

Although the report is mainly geared towards how states can improve their accountability for EL education, it closes with implications for improving both state and federal data collection infrastructure. Together, the findings make great strides towards dismantling the (artificial) achievement gap between English learners and their non-EL peers, a gap perpetuated by a lack of attention to the impact that English learners’ ELP level has on their performance on academic assessments. Without changes like these, we will continue to have a misguided understanding of not only ELs’ perceived academic abilities, but also how well schools are serving these students.

Related Topics
English Learners
Accountability, Assessment, and Data