
About

The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and noneducational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.

For author information and manuscript requirements, please refer to the submission guidelines on this page.

Articles

Write Outside the Boxes: The Single Point Rubric in the Secondary ELA Classroom

The conversation around writing assessment in educational settings has been shaped by research, practice, and legislation over the last 100 years. This article focuses on secondary writing assessment, where instructors are typically constrained by local and statewide requirements. The debate between rubric-based assessment (such as traditional analytic and holistic rubrics) and narrative feedback has shaped secondary writing instruction and assessment, but it has largely been driven by stakeholders outside the classroom. This article presents the Single Point Rubric (SPR) as a possible tool for working against the problematic applications of analytic and holistic rubrics without the commitment of time, focus, and energy that narrative feedback demands. Rooted in decades-old concepts of grid-grading, the SPR combines the formulaic, time-saving components of rubrics with the differentiated, individualized components of narrative response and grading via detailed feedback. Though the SPR is not an answer to all the problems of writing assessment, it offers a largely neglected tool for teachers who want to individualize writing assessment while remaining concise and efficient.

Comments on Student Papers: Student Perspectives

This paper reports the results of a research project examining how students responded to instructors' comments on academic papers written for a first-year writing course at a large, Midwestern university. The data consisted of rough drafts with instructor comments, final drafts of the same papers, and two sets of interviews: one conducted after students had received the instructor's comments, and one after they had revised the final draft. In the interest of contributing to our understanding of what response to student writing does, this study explores student reflections on what they think, feel, and do in response to instructor comments. Findings suggest good reasons for the lack of one-to-one correspondence between an instructor's comments and an improved final draft, and indicate that we may need to look to factors other than revised drafts for evidence of student learning.

Keywords: teaching writing, responding, instructor comments, writing process, learning

Helping Faculty Self-Regulate Emotional Responses in Writing Assessment: Use of an Overall Response Rubric Category

Faculty evaluation of student learning artifacts is a critical activity as accrediting bodies call on campuses to promote "cultures of assessment." Also important are the opportunities for faculty engagement and development that assessment projects provide. However, such projects pose significant challenges for facilitators and faculty scorers alike. Faculty bring their own expertise and beliefs about student learning and writing to the assessment context, all of which can carry emotional valence. Assessment sessions may emphasize scoring against individual rubric components, perhaps eliminating a holistic score from the conversation because holistic scores are not viewed as actionable data points and thus are often excluded from the rubric (McConnell & Rhodes, 2017). As actionable data and reliability among scorers are emphasized in assessment, and holistic scores fall away, are we losing an important scoring tool by removing a place for scorers to record their overall responses to the work they are evaluating? This exploratory study reports scorers' use of an "overall response" category added to the rubric in two assessment projects. Results indicate that faculty found the new category helpful, not only in managing their emotional responses but also in leveraging those emotions to complete the scoring task. Correlation and regression analyses also suggest that scorers maintained orthogonal scoring across rubric categories.

Argument Essays Written in the 1st and 3rd Years of College: Assessing Differences in Performance

Building on earlier longitudinal studies and focusing on the concept of writing performance and the issue of transfer, this article discusses an assessment of thesis-driven argument essays written by students in their freshman and junior years at a large, urban, Hispanic-serving university. The article addresses the question of whether students in the study were able to "transfer" what they had been taught in their first-year writing classes to writing tasks assigned in third-year classes and indicates that modest gains did occur, particularly in the use of sources and evidence. It also discusses several factors contributing to improvement in student performance and, in particular, to maximizing the possibility of transfer. Throughout, the emphasis is on process-oriented strategies and thesis-driven argument in the first-year writing class and on the specificity and clarity of the writing prompts assigned in junior-level classes. Examination of these paired essays, written in a similar genre by the same students, reveals that improvement was greatest for students with less than adequate writing skills. The study thus suggests that "near transfer" of the ability to write a thesis-driven argument essay did occur between the first and third years for this student population.

Slouching Toward Sustainability: Mixed Methods in the Direct Assessment of Student Writing

The development of present-day assessment culture in higher education has led to a disciplinary turn away from statistical definitions of reliability and validity in favor of methods argued to have more potential for positive curricular change. Such interest in redefining reliability and validity also may be inspired by the unsustainable demands that large-scale quantitative assessment would place on composition programs. In response to this dilemma, we tested a mixed-methods approach to writing assessment that combined large-scale quantitative assessment using thin-slice methods with targeted, smaller-scale qualitative assessment of selected student writing using rich features analysis. We suggest that such an approach will allow composition programs to (a) directly assess a representative sample of student writing with excellent reliability, (b) significantly reduce total assessment time, and (c) preserve the autonomy and contextualized quality of assessment sought in current definitions of validity.

Writing Assessment Validity: Adapting Kane's Argument-Based Validation Approach to the Assessment of Writing in the Post-Process Era

This article examines the translatability of Kane's (1992, 2006, 2013, 2016) argument-based validation approach to the aims and needs of college writing assessment, particularly given (a) the elusive ways in which writing is theorized in the post-process era and (b) the composition/writing assessment community's concern for matters of social justice. As part of this process, I review current conceptualizations of writing in the composition literature, discuss some overarching implications of assessment as a decision-making process, and review the underlying principles of Kane's approach. From there, I discuss which elements of Kane's theory apply more to testing and test development and less to writing assessment as theorized and practiced today; I then offer criticism of Kane's theory and call for adaptations that could help forge a theory of assessment validity more suited to the goals of writing assessment and composition studies at large.