Abstract
Academic writing ability is an important aspect of success in higher education. Recently, standardized academic language proficiency tests (such as the TOEFL) have begun to include integrated writing tasks, which ask test-takers to read and/or listen to one or more passages and construct a response that reflects the information in those passages. Arguably, integrated tasks more closely resemble authentic academic tasks than independent tasks do, and therefore increase the construct validity of assessment tools that include them (Cumming et al., 2005; Taylor & Angelis, 2008). A number of recent studies have investigated differences between the products and processes of responding to independent and integrated tasks (Guo et al., 2013; Kyle & Crossley, 2016; Plakans, 2009a; Plakans & Gebril, 2013). This study, building on Plakans and Gebril (2013), investigates the relationship between automated indices of source text use and holistic quality scores. The results indicate that a number of indices related to content word overlap and n-gram overlap explained a substantial portion of the variance in holistic scores. These results generally align with the findings of Plakans and Gebril (2013) and carry important implications for increasing the construct coverage of automated scoring models (such as e-rater).
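For readers unfamiliar with indices of this kind, the sketch below illustrates the general idea behind content word overlap and n-gram overlap between a source passage and a test-taker's response. It is a minimal illustration only, not the instrument used in the study; the tokenizer, the small stop-word list, and the exact index definitions are simplifying assumptions.

```python
import re

# Small illustrative stop list (an assumption, not the study's function-word list).
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
              "was", "were", "that", "this", "it", "for", "on", "with", "as"}

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def content_words(tokens):
    """Filter out function words using the illustrative stop list."""
    return [t for t in tokens if t not in STOP_WORDS]

def ngrams(tokens, n):
    """Return the set of n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def source_overlap_indices(source, response):
    """Compute simple source-use overlap indices for a response."""
    src_tokens, resp_tokens = tokenize(source), tokenize(response)
    src_content = set(content_words(src_tokens))
    resp_content = content_words(resp_tokens)
    indices = {}
    # Content word overlap: share of response content words found in the source.
    indices["content_word_overlap"] = (
        sum(1 for w in resp_content if w in src_content) / len(resp_content)
        if resp_content else 0.0
    )
    # N-gram overlap: share of response bigrams/trigrams also in the source.
    for n in (2, 3):
        resp_ngrams = ngrams(resp_tokens, n)
        shared = resp_ngrams & ngrams(src_tokens, n)
        indices[f"{n}gram_overlap"] = (
            len(shared) / len(resp_ngrams) if resp_ngrams else 0.0
        )
    return indices
```

In a scoring study of the type described above, indices like these would be computed for each response and then entered as predictors of holistic quality scores, for example in a regression model.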
| Original language | English |
| --- | --- |
| Article number | 100467 |
| Journal | Assessing Writing |
| Volume | 45 |
| DOIs | |
| Publication status | Published - 2020 Jul |
Bibliographical note
Publisher Copyright: © 2020 Elsevier Inc.
All Science Journal Classification (ASJC) codes
- Language and Linguistics
- Education
- Linguistics and Language