
About

The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and noneducational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.

Please refer to the submission guidelines on this page for information for authors.

Articles

Rhetorical Writing Assessment: The Practice and Theory of Complementarity

Writing portfolio assessment and communal (shared, dialogical) assessment are two of our field's most creative, courageous, and influential innovations. Because they are also relatively expensive innovations, however, they remain vulnerable to cost-cutting by university administrators and to attacks from testing corporations. This article lays a theoretical foundation for those two powerful and valuable practices in teaching and assessing writing. Building on the concept of "complementarity" as developed in the fields of quantum physics (Bohr; Kafatos & Nadeau) and rhetoric (Bizzell) and adapted for educational evaluation (Guba & Lincoln), we provide some of the "epistemological basis" called for by Huot, on which portfolio and communal assessment rest and by which those practices can be justified. If we must look to science to validate our assessment practices (and perhaps we must), we should not settle for outdated theories of psychometrics that support techniques like multiple-choice testing. Instead, from more recent scientific theorizing we can garner strong support for many of our best practices, including communal and portfolio assessment. By looking to the new science--including the new psychometrics (Cronbach, Moss)--we can strengthen and protect assessment practices that are vibrantly and unapologetically rhetorical.

The Misuse of Writing Assessment for Political Purposes

This article focuses on the political dimensions of writing assessment, outlining how various uses of writing assessment have been motivated by political rather than educational, administrative, or professional concerns. Focusing on major purposes for writing assessment, this article examines state-mandated writing assessments for high school students, placement testing for incoming college students, and upper-level college writing assessments such as rising junior tests and other exit measures that are supposed to determine whether students can write well enough to be granted a college degree. Each of these assessments represents a gate through which students must pass if they are to gain access to the privileges and enhanced salaries of college graduates, and so they carry a particular social weight along with their academic importance. In other words, each of these tests carries significant consequences or high stakes. According to the most recent and informed articulations of validity, each of the cases examined in this article requires increased attention to the decisions being made and the consequences for students, teachers, and educational institutions. In each case, this article addresses the political reasons why these assessments are set in motion and points to the inner contradictions that make it quite impossible for them ever to accomplish their vaguely stated purposes.

Uncovering Raters' Cognitive Processing and Focus Using Think-Aloud Protocols

This article summarizes the findings of a series of studies that attempt to document cognitive differences between raters who rate essays in psychometric, large-scale direct writing assessment settings. The findings from these studies reveal differences both in what information the rater considers and in how that information is processed. Examining raters according to their ability to agree on identical scores for the same papers, this article demonstrates that raters who exhibit different levels of agreement in a psychometric scoring system approach the decision-making task differently and consider different aspects of the essay when making that decision. The research summarized here is an initial step in understanding the relationship between rater cognition and performance. It is possible that future research will enable us to better understand how these differences in rater cognition come about so that those who administer rating projects will be better equipped to plan, manage, and improve the processes of rater selection, training, and evaluation.

Understanding Student Writing--Understanding Teachers' Reading, a review of Lad Tobin's Reading Student Writing: Confessions, Meditations, and Rants

Let me begin with what Brian Huot has called a rather "simple argument": If an instructor wishes to respond to student writing, he or she must read that piece of writing first. I imagine (or, at least, strongly hope) that most readers would agree with this statement. If there is an instructor who has developed a method of response that does not involve reading, I would be curious to hear about the success of such a method.

An Annotated Bibliography of Writing Assessment: Reliability and Validity, Part 2

In this, our third installment of the bibliography on assessment, we survey the second half of the literature on reliability and validity. The works we annotate focus primarily on the theoretical and technical definitions of reliability and validity--and in particular, on the relationship between the two concepts. We summarize psychometric scholarship that explains, defines, and theorizes reliability and validity in general and within the context of writing assessment. Later installments of the bibliography will focus on specific sorts of assessment practices and occasions, such as portfolios, placement assessments, and program assessment--all practices for which successful implementation depends on an understanding of reliability and validity.