Testing to keep the lights on: A comparative case study of accountability in adult English language education policy

Restricted (Penn State Only)
- Author:
- Cherewka, Alexis
- Graduate Program:
- Lifelong Learning and Adult Education
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- November 10, 2022
- Committee Members:
- Erica Frankenberg, Outside Unit & Field Member
- Esther Prins, Co-Chair & Dissertation Advisor
- John Holst, Co-Chair of Committee
- Anna Kaiper-Marquez, Major Field Member
- Susan Land, Program Head/Chair
- Keywords:
- adult basic education
- adult education policy
- critical policy analysis
- comparative case study
- accountability
- standardized testing
- adult English language education
- Abstract:
- Language skills are critical for life in a new country. Over 20 million immigrants and refugees in the U.S. face challenges in their daily lives, in gaining work, and in furthering their education due to limited English proficiency (Batalova et al., 2021). Publicly funded adult basic education (ABE) services support emergent multilingual adults through classes to learn English, obtain a high school equivalency degree, prepare for higher education, and much more. The Workforce Innovation and Opportunity Act (WIOA) of 2014 established a narrowed performance accountability system, and despite scholars’ concerns about its impact on ABE programs and students (Belzer, 2017; McHugh & Doxsee, 2018; Pickard, 2016), no studies to date have investigated its impact on emergent multilingual adults (EMAs). Therefore, the purpose of this comparative case study is to understand how the implementation of federal policy influences programming for emergent multilingual adults. Based on fieldwork at three adult education providers, I argue that WIOA’s performance accountability system is restrictive because credential and employment metrics were inaccessible or irrelevant for many EMAs. Given the limitations of these metrics, adult education providers were bound to a singular measure of performance: pre- and post-standardized testing. Although testing was largely incapable of capturing individual EMAs’ progress and achievements, these assessments played an outsized role in program design and instruction. Providers allocated substantial resources toward testing and reporting on performance, and they aimed to improve their post-testing rates by testing as early as possible and by incentivizing testing. Yet the institutions where WIOA programs were embedded largely determined how assessment influenced program design and instruction. Finally, providers aspired to mitigate test anxiety and repurposed testing.
Ultimately, rather than serving as a method of holding providers accountable for high-quality instruction, testing became a way for these providers to keep the lights on.