Welcome to the Journal of Writing Assessment
Check out JWA's Reading List for reviews of relevant writing assessment publications.
The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement, as well as other relevant topics. Articles are welcome from a variety of areas, including K-12, college classes, large-scale assessment, and non-educational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment. Please refer to the submission guidelines on this page for author information.
The Journal of Writing Assessment's online ISSN is 2169-9232.
The Journal of Writing Assessment is proud of and grateful for the support of the following organizations:
DEPARTMENT OF ENGLISH
COLLEGE OF LETTERS, ARTS & SOCIAL SCIENCES
Volume 7, Issue 1: October 2014
Review Essay: Paul B. Diederich? Which Paul B. Diederich?
by Rich Haswell
Robert L. Hampel’s 2014 edited collection of pieces by Paul Diederich, most of them unpublished, casts Diederich in a new light. The articles, reports, and memoranda reveal him and his work in writing assessment as deeply progressive, in both the educational and political senses. They call for a re-interpretation of his factoring of reader judgments (1961), his analytical scale for student essays (1966), and his measuring of student growth in writing (1974). The pieces also depict Diederich as an intricate and sometimes conflicted thinker who always saw school writing performance and measurement in psychological, social, and ethical terms. He remains relevant today, especially for writing assessment specialists wrestling with current issues such as the testing slated for the Common Core State Standards.
Linguistic microfeatures to predict L2 writing proficiency: A case study in Automated Writing Evaluation
by Scott A. Crossley, Kristopher Kyle, Laura K. Allen, Liang Guo, & Danielle S. McNamara
This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a corpus of 480 independent essays written for the TOEFL. A stepwise regression analysis indicated that six linguistic microfeatures explained 60% of the variance in human scores for essays in a test set, providing an exact accuracy of 55% and an adjacent accuracy of 96%. To examine the limitations of the model, a post-hoc analysis was conducted to investigate differences in the scoring outcomes produced by the model and the human raters for essays with score differences of two or greater (N = 20). Essays scored high by the regression model but low by human raters contained more word types and perfect tense forms compared to essays scored high by humans and low by the regression model. Essays scored high by humans but low by the regression model showed greater coherence, syntactic variety, syntactic accuracy, idiomaticity, vocabulary range, and spelling accuracy, as well as stronger word choices, compared to essays scored high by the model but low by humans. Overall, findings from this study provide important information about how linguistic microfeatures can predict L2 essay quality for TOEFL-type exams and about the strengths and weaknesses of automatic essay scoring models.
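The abstract reports both "exact" and "adjacent" accuracy for the regression model against human ratings. As an illustrative sketch only (not the authors' code, and using hypothetical scores rather than the study's data), these two metrics are conventionally computed like this: exact accuracy counts essays where the rounded model score equals the human score, while adjacent accuracy counts essays where the two scores differ by at most one scale point.

```python
# Illustrative sketch: exact vs. adjacent agreement between model-predicted
# essay scores and human ratings. All scores below are hypothetical.

def exact_accuracy(predicted, human):
    """Fraction of essays where model and human scores match exactly."""
    return sum(p == h for p, h in zip(predicted, human)) / len(human)

def adjacent_accuracy(predicted, human, tolerance=1):
    """Fraction of essays where the scores differ by at most `tolerance`."""
    return sum(abs(p - h) <= tolerance for p, h in zip(predicted, human)) / len(human)

# Hypothetical rounded model predictions vs. human ratings on a 1-5 scale
model_scores = [3, 4, 2, 5, 3, 4, 1, 3]
human_scores = [3, 3, 2, 5, 4, 4, 2, 3]

print(exact_accuracy(model_scores, human_scores))     # 0.625
print(adjacent_accuracy(model_scores, human_scores))  # 1.0
```

On this toy data, the model matches the human rater exactly on 5 of 8 essays but is always within one point, mirroring the pattern in the study, where adjacent accuracy (96%) far exceeds exact accuracy (55%).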