Welcome to the Journal of Writing Assessment
The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement, as well as other relevant topics. Articles are welcome from a variety of areas, including K-12, college classes, large-scale assessment, and noneducational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment. Please refer to the submission guidelines on this page for author information.
The Journal of Writing Assessment's online ISSN is 2169-9232.
The Journal of Writing Assessment gratefully acknowledges the support of the following organizations:
DEPARTMENT OF ENGLISH
COLLEGE OF LETTERS, ARTS & SOCIAL SCIENCES
Volume 8, Issue 1: 2015
Introduction to Volume 8
by Diane Kelly-Riley and Carl Whithaus
Editors' Introduction to Volume 8
Three Interpretative Frameworks: Assessment of English Language Arts-Writing in the Common Core State Standards Initiative
by Norbert Elliot, Andre A. Rupp, David M. Williamson
We present three interpretative frameworks by which stakeholders can analyze curricular and assessment decisions related to the Common Core State Standards Initiative in English Language Arts-Writing (CCSSI ELA-W). We pay special attention to the assessment efforts of the Smarter Balanced Assessment Consortium (Smarter Balanced) and the Partnership for Assessment of Readiness for College and Careers (PARCC). Informed by recent work in educational measurement and writing assessment communities, the first framework is a multidisciplinary conceptual analysis of the targeted constructs in the CCSSI ELA-W and their potential measurement. The second framework is provided by the Standards for Educational and Psychological Testing (2014) with a primary focus on foundational principles of validity, reliability/precision, and fairness. The third framework is evidence-centered design (ECD), a principled design approach that supports coherent evidentiary assessment arguments. We first illustrate how Standards-based validity arguments and ECD practices have been integrated into assessment work for the CCSSI ELA-W using Smarter Balanced and PARCC assessment reports. We then demonstrate how all three frameworks provide complementary perspectives that can help stakeholders ask principled questions of score interpretation and use.
Shifting the Locus of Control: Why the Common Core State Standards and Emerging Standardized Tests May Reshape College Writing Classrooms
by Joanne Addison, University of Colorado Denver
In 2010, the Common Core State Standards, a set of outcomes-based standards detailing core skills for K-12 English Language Arts and Math classrooms across the US, were released. This was followed by the release of related standards-based assessments, most notably the large-scale standardized tests developed through the Partnership for Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium (SBAC). Because the Standards and their attendant standardized tests are limited to the K-12 curriculum, they are generally thought of as something happening within our elementary and secondary schools, not something that may have a direct effect on how we teach writing at the college level. By mapping the increased control of professional development networks for teachers by private philanthropists and testing companies, the vertical alignment of K-20 standardized tests, and new approaches to funding education reform and research, we can begin to see how and why the Standards and emerging standardized tests will reshape our college writing classrooms. Understanding this shift is crucial to reasserting teacher agency at all levels of the curriculum and reinforcing assessment as primarily a teaching and learning practice, not a system of accountability and control.
Teaching and Learning in an “Audit Culture”: A Critical Genre Analysis of Common Core Implementation
by Brad Jacobson
This article examines classroom materials and sample assessments to understand the effects of Common Core implementation on the teaching and learning of writing. Drawing on theories of genre systems and intertextuality, the article focuses on the ways in which a Common Core-aligned senior English Language Arts textbook and sample writing assessment recontextualize the standards in writing prompts, criteria, and written instructions related to argumentative writing. This critical genre analysis demonstrates the ways in which a theory of writing is transformed in the implementation of the standards, and makes visible the ways in which the implementation process privileges the goals and needs of an accountability mandate rather than the teachers and students enacting the standards.
Teacher Perceptions of the Impact of the Common Core Assessments on Linguistically Diverse High School Students
by Todd Ruecker, Bee Chamcharatsri, and Jet Saengngoen
Any discussion of the Common Core State Standards (CCSS) is incomplete without an understanding of the assessments that go along with them, since test makers were an integral part of the panels designing the standards. The Partnership for Assessment of Readiness for College and Careers (PARCC) is one of the two consortia (along with Smarter Balanced) developing tests for multiple states that align with the CCSS. This article shows how teachers perceive the impact of the CCSS and high-stakes assessment, particularly the PARCC, on a linguistically diverse school in the Southwestern U.S. The authors begin by reviewing work focused on how the creators of the CCSS and the associated assessments have overlooked ELL student populations. They then present findings from a multi-year study involving teacher interviews and classroom observations, focusing particularly on the following: psychological effects on students, the challenges of developing a literacy test for a homogeneous population, accessibility and accommodations, and computer-based administration.
Moving Beyond the Common Core to Develop Rhetorically Based and Contextually Sensitive Assessment Practices
by Angela Clark-Oates, Sherry Rankins-Robertson, Erica Ivy, Nicholas Behm, and Duane Roen
Much political and disciplinary debate has occurred regarding the Common Core State Standards and the development and implementation of concomitant standardized tests generated by the two national assessment consortia: the Partnership for Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium (SBAC). In entering the debate about K-12 standardized assessment, we critique the top-down model of assessment that has dominated K-12 education and is currently promoted by the national assessment consortia, and we examine how the consortia's assessments promote an interpretation of college readiness based on a skills-focused framework. Moreover, we use content analysis to illustrate how PARCC is an inflexible assessment measure that fails to capture the complexity of learning, specifically literacy learning as documented by more than thirty years of disciplinary research. In contrast, using the construct of college readiness as defined by the National Council of Teachers of English, the National Writing Project, and the Council of Writing Program Administrators in The Framework for Success in Postsecondary Writing (Framework), we champion the Framework not only as a viable alternative for conceptualizing effective methods of teaching and learning for college readiness, but also as a heuristic for developing rhetorically based and contextually sensitive assessment practices through the implementation of portfolio assessment.
To Aggregate or Not? Linguistic Features in Automatic Essay Scoring and Feedback Systems
by Scott A. Crossley, Kristopher Kyle, and Danielle S. McNamara
This study investigates the relative efficacy of using linguistic micro-features, the aggregation of such features, and a combination of micro-features and aggregated features in developing automatic essay scoring (AES) models. Although the use of aggregated features is widespread in AES systems (e.g., e-rater, IntelliMetric), very little published data exists that demonstrates the superiority of this method over the use of linguistic micro-features or a combination of both micro-features and aggregated features. The results of this study indicate that AES models composed of micro-features, or of a combination of micro-features and aggregated features, outperform AES models composed of aggregated features alone. The results also indicate that AES models based on micro-features, or on a combination of micro-features and aggregated features, provide a greater variety of features with which to provide formative feedback to writers. These results have implications for the development of AES systems and for providing automatic feedback to writers within these systems.
Book Review: Henry Chauncey: An American Life by Norbert Elliot
by Bob Broad, Illinois State University
If you want to read a history of writing assessment as it developed during the 20th century within the narrow and specialized confines of the Educational Testing Service (ETS), you can’t do better than Norbert Elliot’s On a Scale: A Social History of Writing Assessment in America (2005). If your curiosity about ETS is not satisfied by that enormously careful and detailed history, and if you want to gain a close-up, intimate understanding of the person one author called ETS’s “first president and abiding institutional deity” (Owen, 1985, p. 1), then you can’t do better than Elliot’s new biography, Henry Chauncey: An American Life.
Later in this review, I will revisit those last two “ifs.”
Editorial Board 2015