
About

The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement, as well as other relevant topics. Articles are welcome from a variety of areas, including K-12, college classes, large-scale assessment, and noneducational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.

Please refer to the submission guidelines on this page for information for authors.

Articles

Review Essay: Paul B. Diederich? Which Paul B. Diederich?

Robert L. Hampel’s 2014 edited collection of pieces by Paul Diederich, most of them unpublished, casts Diederich in a new light. The articles, reports, and memoranda reveal him and his work in writing assessment as deeply progressive, in both the educational and the political sense. They call for a re-interpretation of his factoring of reader judgments (1961), his analytical scale for student essays (1966), and his measuring of student growth in writing (1974). The pieces also depict Diederich as an intricate and sometimes conflicted thinker who always saw school writing performance and measurement in psychological, social, and ethical terms. He remains relevant today, especially for writing assessment specialists wrestling with current issues such as the testing slated for the Common Core State Standards.

Linguistic microfeatures to predict L2 writing proficiency: A case study in Automated Writing Evaluation

This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a corpus of 480 independent essays written for the TOEFL. A stepwise regression analysis indicated that six linguistic microfeatures explained 60% of the variance in human scores for essays in a test set, providing an exact accuracy of 55% and an adjacent accuracy of 96%. To examine the limitations of the model, a post-hoc analysis was conducted to investigate differences in the scoring outcomes produced by the model and the human raters for essays with score differences of two or greater (N = 20). Essays scored high by the regression model but low by human raters contained more word types and more perfect tense forms than essays scored high by humans but low by the model. Essays scored high by humans but low by the regression model showed greater coherence, syntactic variety, and syntactic accuracy, along with stronger word choices, more idiomatic language, a wider vocabulary range, and better spelling than essays scored high by the model but low by humans. Overall, findings from this study provide important information about how linguistic microfeatures can predict L2 essay quality for TOEFL-type exams and about the strengths and weaknesses of automatic essay scoring models.
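For readers unfamiliar with the agreement statistics reported above, the sketch below shows one common way exact and adjacent accuracy can be computed when a model's continuous predictions are compared against human ratings on an integer score scale. This is an illustrative Python sketch with invented numbers, not the authors' code or data; the rounding rule and the one-point adjacency window are assumptions.

```python
import numpy as np

def scoring_accuracy(predicted, human):
    """Exact and adjacent agreement between model and human essay scores.

    Model predictions are rounded to the nearest score point before
    comparison; 'adjacent' counts predictions within one score point
    of the human rating.
    """
    predicted = np.rint(np.asarray(predicted, dtype=float))
    human = np.asarray(human, dtype=float)
    diff = np.abs(predicted - human)
    exact = np.mean(diff == 0)
    adjacent = np.mean(diff <= 1)
    return exact, adjacent

# Hypothetical example: regression predictions vs. human ratings on a 1-5 scale
model_scores = [3.2, 4.6, 2.1, 3.9, 5.0]
human_scores = [3, 4, 3, 4, 4]
exact, adjacent = scoring_accuracy(model_scores, human_scores)
print(f"exact: {exact:.0%}, adjacent: {adjacent:.0%}")
```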

Language Background and the College Writing Course

In an era of growing linguistic diversity, assessment of all college writing courses needs to include a focus on multilingual equity: How well does the course serve the needs of students with varying language backgrounds and educational histories? In this study, an Education and Language Background (ELB) survey was developed around a scale measuring divergence from the default assumption that college students are U.S.-educated monolingual English speakers. These survey data were used to assess a junior-level college writing course by correlating students' ELB scores with writing sample scores. On the pre-test, multilingual students and immigrants educated in non-U.S. systems scored significantly lower, but by the post-test this effect had disappeared, suggesting that junior-level writing instruction may be especially useful to such students. Survey data also revealed important language and education differences between students who began their careers at College Y in first-year composition and those who transferred in later. Students' language background information should be routinely collected at the beginning of each course using an instrument, such as the ELB, that systematically quantifies student language identity through multiple questions. Doing so permits both a nuanced portrait of how multilinguality interacts with student writing proficiency and the development of differentiated instruction strategies.
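As a rough illustration of the kind of analysis the abstract describes, correlating a quantified language-background scale with pre- and post-course writing scores, the Python sketch below uses invented data; the column names, score scale, and choice of a Spearman correlation are assumptions for illustration, not the study's actual instrument, statistics, or results.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per student, with an ELB scale score
# (higher = greater divergence from the U.S.-educated monolingual default)
# and pre-/post-course writing sample scores.
df = pd.DataFrame({
    "elb":  [0, 1, 4, 6, 2, 5, 0, 3],
    "pre":  [4.0, 3.5, 2.5, 2.0, 3.5, 2.5, 4.5, 3.0],
    "post": [4.0, 4.0, 3.5, 3.5, 4.0, 3.5, 4.5, 4.0],
})

# Does divergence on the ELB scale correlate with writing scores
# before and after the course?
for occasion in ("pre", "post"):
    r, p = stats.spearmanr(df["elb"], df[occasion])
    print(f"{occasion}: Spearman r = {r:.2f}, p = {p:.3f}")
```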

Dynamic Patterns: Emotional Episodes within Teachers' Response Practices

Responding to student writing is one activity in which teachers' emotions become relevant, yet scholarly conversation directly addressing emotion as a component of teachers' responses to student writing remains limited. This article brings together scholarship on emotion, survey results, and narrative description of two specific teachers to suggest the relationship between emotion and response: a dynamic, recursive episode pattern of values, triggers, emotions, and actions. The results of 146 surveys of writing teachers reporting on emotions in their response practices provide contextual grounding for a closer examination of the interrelated emotional episode of one teacher, Brittney. An awareness of the emotional episode of response promotes reflection and acts as a catalyst for teachers to think about their teacherly identity.