Steep Learning Curve Ahead: Deciphering English Learner Data

Blog Post
June 15, 2018

In 2002, the Bush administration’s No Child Left Behind Act (NCLB) ushered in a new era of education data collection: for the first time, school districts and states had to report data on how English Learner (EL) students were performing on standardized assessments. These data provide a touchpoint for understanding ELs’ academic performance and identifying schools that are struggling to serve these students effectively.

NCLB’s 2015 successor, the Every Student Succeeds Act (ESSA), maintained that spirit and added guidelines aimed at improving EL learning outcomes. The law shifted accountability for progress in acquiring English language proficiency from Title III to Title I. As my colleague Janie Carnock wrote in a recent report, the shift increased the “visibility and importance of ELs by integrating their linguistic outcomes into the core accountability structure for all students under Title I.” Understanding EL data and designing appropriate metrics has therefore become increasingly important for states.

English learner data are not always clear-cut, and drawing conclusions about how these students are performing is complicated by several factors, including the sheer variety of data available. A new Migration Policy Institute (MPI) report attempts to provide readers with an easy-to-follow guide to understanding EL data. The report’s author, Julie Sugarman, starts with the basics by detailing the types of data school systems collect on students, including background information (e.g., gender, ethnicity/race, home language), services (e.g., EL services, special education, Title I, gifted and talented), assessment data, and other information such as attendance, discipline, and graduation status.
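To make those categories concrete, here is a minimal sketch in Python of the kinds of fields a student information system might hold, grouped the way Sugarman groups them. The field names are hypothetical, not drawn from the MPI report or any actual system.

```python
from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    """Hypothetical record grouping the data categories described above."""
    # Background information
    gender: str
    race_ethnicity: str
    home_language: str
    # Services received
    el_services: bool
    special_education: bool
    title_i: bool
    gifted_and_talented: bool
    # Assessment data (test name -> score) and other indicators
    assessment_scores: dict = field(default_factory=dict)
    attendance_rate: float = 0.0
    graduated: bool = False
```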

Importantly, the report also outlines who has access to what data and how they can use it. For example, teachers and administrators have direct access to individual student data, which allows them to explore different questions about the performance and progress of their EL students. The general public has access to EL data through public-use databases. Although not as granular as the student-level data available to teachers and administrators, these aggregated data still allow one to ask detailed and informed questions about ELs.

The availability of EL data has led to more nuanced understandings of how these students are performing. However, as Sugarman notes, EL data are complex, and it is easy to fall into the trap of misinterpreting them, especially when comparing across states and over time.

First, comparing data across states is complicated by two main factors: how states identify ELs and who is included in the EL subgroup. States are free to set their own criteria for determining who is an English learner, including the assessment used to measure initial English proficiency and the score needed to shed the EL label. This state-to-state variability means that a student who is counted as an EL in one state might not be counted as an EL in another. Moreover, under ESSA, states determine how long a former EL is counted in the subgroup for accountability purposes: some states include former ELs for two years after they pass their English proficiency test, while others include them for four years (the maximum allowed under the law).
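As a rough illustration of how that one policy choice changes who is counted, consider the following sketch. The function and its parameters are hypothetical; only the two- and four-year windows come from the examples above.

```python
from typing import Optional

def in_el_subgroup(is_current_el: bool,
                   years_since_exit: Optional[int],
                   state_window_years: int) -> bool:
    """Illustrative only: does a student count in the EL subgroup?

    state_window_years is set by each state, up to the four-year
    maximum ESSA allows.
    """
    if is_current_el:
        return True
    if years_since_exit is None:  # never classified as an EL
        return False
    return years_since_exit <= state_window_years

# The same former EL, three years past exit:
print(in_el_subgroup(False, 3, state_window_years=2))  # False in a two-year state
print(in_el_subgroup(False, 3, state_window_years=4))  # True in a four-year state
```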

Additionally, EL is not a permanent label: students move in and out of the EL subgroup as they reach proficiency in English. My colleague Janie Carnock compares this to a “revolving door” and notes that, “due to migration trends and shifting demographics, this flow of EL students entering and exiting is not necessarily balanced every year; the population is in a state of constant flux.” This reality makes it even more difficult to accurately track ELs’ performance across states.
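A toy example with invented numbers shows why this churn can mask real progress: every student we can track improves, yet the subgroup average falls, because the highest scorer exits and newcomers take her place.

```python
# Invented proficiency scores: three ELs in year one; the highest
# scorer (70) reaches proficiency and exits the subgroup.
year1 = [40, 55, 70]

# The two remaining students improve (40 -> 50, 55 -> 65), and two
# newly arrived ELs (38, 52) join the subgroup.
year2 = [50, 65, 38, 52]

avg1 = sum(year1) / len(year1)  # 55.0
avg2 = sum(year2) / len(year2)  # 51.25
print(avg1, avg2)  # the average drops despite genuine individual growth
```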

Yet another complication arises when comparing different datasets. Sugarman points to differences in the methods, indicators, and purposes behind them: one state may use method A to reach conclusion B, while another state reaches the same conclusion using method C. Datasets also vary in their level of detail; one state may publish extensive student-level and school/district-level data, while another may not.

In addition to comparing data across states, comparing data over time can also be tricky. In her report, Sugarman warns readers not to confuse overall school improvement with individual student progress: what looks like students improving in English proficiency may instead reflect an institutional improvement with schoolwide impact. Moreover, the tools used to collect data often evolve, so comparing EL data from one year to another is not always straightforward; one must keep in mind the context of the tools used in a specific year.
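One crude way to see the first distinction, assuming you have both a student’s gain and the schoolwide trend in hand (all values here are invented):

```python
# Invented numbers: separating a student's raw gain from a schoolwide trend.
student_gain = 8      # an EL's proficiency score rose 8 points
schoolwide_gain = 6   # but scores rose 6 points across the whole school

# The gain beyond the schoolwide trend is far smaller than the raw
# number suggests.
el_specific_gain = student_gain - schoolwide_gain
print(el_specific_gain)  # 2
```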

As the EL population grows, so too should our knowledge of how to interpret and leverage data to promote effective policies, instructional programs, and services that ensure the success of these students.
