Contract Grading in a Technical Writing Classroom: A Case Study
by Lisa M. Litterio, Bridgewater State University
The subjectivity of assessing writing has long been an issue for instructors, who carefully craft rubrics and other indicators of assessment while students grapple with understanding what constitutes an "A" and how to meet instructor-generated criteria. Based on student frustration with traditional grading practices, this case study of a 20-student technical writing classroom in the Northeast employed teacher-as-researcher observation and student surveys to examine how students collaborated to generate criteria relating to the quality of their writing assignments. The study indicates that although students perceive more involvement in the grading process, they resist participation in crafting criteria as a class and prefer traditional grading methods by an “expert,” considering it a normative part of the grading process. The study concludes with implications for integrating contract grading in the technical writing classroom.
Keywords: technical writing, contract grading, assessment, student feedback
As a new instructor of technical writing, I quickly developed a passion for teaching this course. Because the material offers students the opportunity to consider the applicability of writing to their own fields through case studies, document design, and digital practices, students expressed an infectious enthusiasm. They also displayed an eagerness to examine, discuss, and apply not only the course content to their professional lives, but also tenets of the field, which include workplace ethics and collaboration. The concept of creating a contract grading system developed from my enthusiasm for teaching technical writing for the first time in the previous academic year and from the possibility of involving students more in the assessment process. Contract grading seemed well-suited to this course, where I encourage students to consider the rhetorical nature of technical writing, to understand one’s audience, and to appreciate the importance of the writer-reader relationship (Connors, 1982). As an educator, I also emphasize belonging to a discourse community of professionals (Miller, 1979) and address humanistic inquiries through carefully-crafted assignments (Selber, 1994). Blending contract grading within a technical writing classroom allows students to understand how the process of writing technical documents mirrors the collaborative and communicative practices of workplace writing; writing, rewriting, and negotiating the contract itself parallels the writing in their professional lives.
The concept of contract grading also emerged from witnessing student frustration and sometimes obsession with traditional grading methods. I have observed how grading becomes a deterrent to understanding concepts, taking risks with writing, or challenging oneself in the classroom. Instead of the mentality of “I just want an A,” it was my hope that a system of contract grading would encourage students to ask, “How do I learn and apply criteria of exemplary technical documents to my own work?” This study, then, developed from striving to mitigate students’ frustration with traditional grading methods and acknowledging that technical writing is a specific style of writing. The aim of this study was to examine student attitudes and perceptions of their involvement with contract grading, particularly through collaborating and assessing the quality of technical writing documents. This article contextualizes the aim first through literature that addresses the purpose of contract grading and its role in the technical writing classroom. Then, I provide a framework for this study using teacher-as-researcher methods and surveys to analyze student perceptions of contract grading within a technical writing classroom in order to determine how these attitudes shape assessment practices.
Contract Grading in Composition
Contract grading is commonly understood as a negotiation between students and faculty over their overall performance in the course, and it has been used in a variety of ways in the writing classroom since its emergence in the 1970s to improve student writing, the teacher-student relationship, and the classroom environment. In composition, contract grading is an assessment practice that aligns with the field’s process approach to teaching writing. According to the Conference on College Composition and Communication (CCCC) Position Statement on writing assessment (2014), “Writing assessment is useful primarily as a means for improving teaching and learning. The primary purpose of any assessment should govern its design, its implementation, and the generation and dissemination of its results” (p. 4). Assessment needs to be carefully crafted; contract grading can invite students to help shape it. Donald Murray, among many other voices in the process movement of composition theory, has encouraged us to consider the multitude of ways we can teach writing as a process, not as a product, categorizing activities and stages of the writing process (Elbow, 1973; Emig, 1971; Flower & Hayes, 1981; Murray, 1972; Rohman, 1965). The process-oriented approach to teaching writing runs as a continual thread throughout composition’s history, and contract grading has the potential to be an assessment practice that aligns with this pedagogy.
Contract grading also draws on critical pedagogy to create a classroom environment where students have more ownership of their assessment and the grading process is more democratic. Moreno-Lopez (2005) explored how critical pedagogy can be integrated with contract grading, as it “focuses on students’ voices and encourages active learning” (n.p.). While it is idealistic, this system also reflects the problem-posing method of education and critical pedagogy of Freire (1993): “The teacher is no longer merely the one who teaches, but one who is himself taught in dialogue with the students, who in turn while being taught also teach. They become jointly responsible for a process in which all grow” (p. 78). At its core, contract grading blends critical pedagogy in the writing classroom with traditional composition theory focused on process. Critical pedagogy emphasizes learning from the student in the classroom and shaping teaching practices, while contract grading traditionally allows students to focus more on their process of composing rather than a final product.
More radically, Inoue (2005) applied the concept of “fourth generation evaluation” from Guba and Lincoln (1989), which constituted a “hermeneutic dialectic,” a method “for each stakeholder to offer input into an evaluation, in a kind of round-robin style, thus creating a circular process of recursive negotiation and consensus making” (pp. 151-152). Inoue explained, “Having stake in these processes means students can critically engage their writing as meaningful practices situated within a community for particular purposes” (p. 222). Through contract grading in a 300-level writing class at a public university, students became a vital part of the decision-making process rather than receivers of teacher-generated assessment criteria.
Danielewicz and Elbow (2009) developed a detailed contract grading system for their sections of first-year writing at a public university. Although Danielewicz typically teaches an honors section and Elbow does not, they both use a similar system of contract grading. Students who meet indicators of quantity, such as attendance, participation in draft workshops and peer review, conferences, and final drafts are guaranteed a B. This assessment is grounded “entirely on the basis of what you do—on your conscientious effort and participation” (p. 246). Any grade higher than a B is evaluated by instructor assessment. Although Danielewicz and Elbow engaged their students in discussions of what constitutes high-quality writing, they did not use student-generated criteria to assess their work.
Contract Grading in Technical and Professional Writing
Specific research relating to contract grading in a technical and professional writing classroom has not been addressed as thoroughly as in composition. Most of the research in technical and professional writing is relegated to the 1970s and 1980s, with a significant number of authors using contract grading within technical communication/speech classrooms. King (1972) evaluated a public speaking course taught successfully for six semesters at Florida State University and defined contract grading as a system where students can set their own aspirational levels. He argued contract grading causes students to work harder and learn more through a deeper understanding of grading requirements. Similarly, Andrew and Darlyn Wolvin (1975) provided students with the option for contract or traditional grading in their technical speech classroom, which comprised students from different majors. They demonstrated through empirical observation that more students selected contract grading, and this system resulted in not only higher grades, but also a deeper understanding of student expectations.
One reason why contract grading may have value within a technical or professional writing environment is due to how its tasks mimic professional practices. Early on, Hassencahl (1977) explained this relationship: “Grading contracts model themselves on the real world where we regularly make agreements or contracts which have the force of law” (p. 3). Contract grading, then, is seen as a model for the types of contracts and conversations students will experience long after the technical writing classroom. Similarly, Bishop (2005) echoed this idea:
Instead of an eternally deferred, unknown collection of work for the teacher, the writer plans, predicts, alters, and controls the contents of the portfolio via the contract . . . the contract and portfolio establish a sequence of work just as a professional writer schedules work using desires and deadlines. (pp. 113-114)
The concept of negotiating and renegotiating a document is a practice that occurs within a contract grading model and extends beyond the classroom into a workplace environment.
A technical writing classroom also differs from a first-year composition classroom with its emphasis on workplace and collaborative writing. Unlike a first-year composition course where students are learning how to write academic research papers, technical writing classrooms engage students with the specific writing of their discipline. At some institutions, a technical writing course is considered a service course comprised of a variety of majors, including engineering, sciences, medicine, and computing, while at others, students specialize in technical writing. For students in these majors, contract grading can be a method that leads to fluency within their writing discipline. For example, Matulich (1983) implemented contract grading in a community college because
this system was one way I could reinforce throughout the whole term the essentials of technical writing while getting the students into the local community and urging them to write in the content areas and within structural form of their own majors. (p. 110)
In the classroom, students selected their own writing projects based upon what is valued in their field; for instance, a feasibility report (accounting), a formal lab report (sciences), or a progress report (business). Matulich also encouraged students to solicit feedback from faculty members in specific disciplines. Although the logistics of this system may not be feasible at other institutions, it upholds the idea that writing is shaped by the unique discourse communities in each discipline. Matulich summarized, “The need to write clearly and in an organized fashion is reinforced by faculty in science and business when students request guidance in technical writing projects” (p. 115). This style of contract grading reinforces the purpose of technical communication, which is to become more adept at understanding disciplinary specific conventions.
The technical writing classroom, when paired with contract grading, can also offer a powerful space for educators to focus on the importance of the writing process rather than product. Bishop (1987) articulated that in a typical technical writing classroom, people from different majors gather to learn the specific conventions of their field. “Such an orientation,” she explained, “often results in a product oriented syllabus and an overemphasis on writing structure at the expense of writing content and writers’ growth and development” (p. 1). More recently, instructors of technical writing have seen contract grading as an opportunity to align values of new media studies with this form of assessment. Gillette (2005) argued contract grading was well-suited to new media projects in his classroom, which involved “collaboration, debate, discussion and exploration.” Gillette worked out a contract with each individual student, agreeing upon areas of the project s/he needed to complete and standards for their success or failure. Rather than taking the product-driven approach more common in a multimedia classroom, Gillette’s contract grading model encouraged students to focus on the process of developing these works.
Defining Quality in Contract Grading
The concept of quality in contract grading in this study developed from the need to shift the determination of quality from the hands of the instructor to those of the students, encouraging them to understand genre conventions and apply them to their own documents. Too often, contract grading is a system in which the instructor defines the criteria for assessing work and students have autonomy only over the quantity of assignments. For example, Shor (2009), who piloted contract grading in his own classroom, explained, “Mostly, I give one of three grades on written work—A, B, or C—earned first by meeting quantitative minimums for each grade and second by my judgment of quality in the writing” (p. 8). For Shor, quantitative minimums and judgment are determined by the instructor rather than by a consensus created by students in the classroom. Danielewicz and Elbow (2009) did not reference quality unless a student received the scoring of “better than B” (p. 2), and explicitly stated, “We don’t profess to give students any power over these high-grade decisions” (p. 2). In addition, Adkison and Tchudi (1997) have explored quality by using “achievement grading” that takes into account the range and depth of student work, but did not involve students in their process for developing criteria and “creditable” work.
Prior research in technical communication has failed to involve students in discussions of quality with contract grading. Hassencahl (1977) insisted that “there must be some control on quality as well as quantity of work” (p. 31), and encouraged instructors to designate levels of competency rather than have students determine their own. Similarly, Stelzner (1975), teaching speech communication at the University of Massachusetts, wrote that one of the strongest criticisms of contract grading is not defining quality with students. She explained, “The instructor must define what constitutes A-quality, B-quality, etc.” (p. 131) and encouraged the teacher to clarify her or his standards (p. 132). As this research evidences, instructors generate indicators of quality without considering whether students are involved in the process.
Instructors have more recently begun to examine how students can be involved in determining quality for assignments, and most appear to place the onus on students. While Inoue (2005) focused extensively on effort, he invited students to develop rubrics for assignments by splitting them into groups, having them share notes, and then produce rubrics for course discussion. Inoue encouraged students to identify a list of proficiency markers in paragraphs, for instance, containing a consistent claim or supporting a claim with evidence (p. 217). In the rubric-creation process, Inoue acted “as a coach toward sound assessment practices and active learning stances by making them do the hard work of assessment” (p. 221). Inoue also explained how community-based assessment pedagogy supports student effort and involvement. While Inoue developed a model of student-generated assessment, Danielewicz and Elbow (2009) focused extensively on providing students relief from quality, instead emphasizing completed assignments, effort, and participation in the peer review process.
In addition, McDonnell (2011) developed a system of quality that aligns with contract grading by blending strategies from engineering and technical documentation. He used Total Quality Education (TQE), developed by engineer Deming (1986), which abolishes grading practices and allows students to manage their own grading through participatory management (p. 23). This schema involves asking students why they have enrolled in the course to determine the purpose of the course, then inviting them to collaborate in teams to produce writing that receives support from the instructor. In McDonnell’s study, students preferred this method of determining their own quality, and explicitly stated that by setting the standards for quality, they became more involved in the process, more like teachers than students (p. 223).
Hypothesis and Research Design
This study extends traditional notions of contract grading and aligns itself with scholarship in community-based assessment pedagogy, as I consider not only a student’s overall performance in the course, but also their role in developing and assessing individual assignments. While McDonnell and Inoue invited students to have complete autonomy over criteria for assignments, my first entry into contract grading involved closely examining student-generated collaborative feedback while still allowing the instructor to determine the quality of the overall work. This study allowed me to serve as the mediator, assessing the criteria students developed and taking an active role in analyzing whether they had applied these criteria. This decision blends several approaches in contract grading and serves as a transition from traditional models of assessment to contract grading. I argue that blending traditional elements of contract grading, including the instructor as evaluator, with student-generated criteria results in a classroom of students who are more involved in their assignments and better understand the writing standards they must develop and meet. I detail this process more thoroughly in the following section.
My study also differs from Danielewicz and Elbow in that it recognizes the role of a faculty member as a mediator in evaluation, not only in areas beyond a B. Thus, this study was conceived with the instructor ascribing a final grade, which limited the scope of class-generated criteria to the writing process, particularly because contract grading was new to me and my students. This study blends Inoue’s principles of class-generated criteria focused on quality with Danielewicz and Elbow’s notion that contract grading invites students at the beginning of the semester to contract out for a specific grade. This combination of class-generated criteria and determining one’s performance in the course reflects the collaborative practices of technical writing with consensus-basis discussions and the traditional notions of contract grading in composition with goal setting for a particular grade. Unlike prior studies, I examined how students determined what constitutes quality writing through classroom discussions of genres. In this way, students need to not only identify the rhetorical purpose of technical documents, but also what makes them exemplary.
Finally, the literature in technical and professional writing reveals a dearth of scholarship on contract grading in recent years. Even though syllabi from faculty who use contract grading in technical writing classrooms are available online, much of this material focuses on instructor approaches to contract grading rather than student attitudes or opinions about the system. This study thus acknowledges the scarcity of literature, which creates the exigence to revisit contract grading, particularly in the technical writing classroom.
Three main research questions guided this study: (1) What are student attitudes and opinions of contract grading, particularly in the process of generating criteria for assignments?; (2) What are student attitudes with respect to understanding the “quality” of technical writing documents?; and (3) What implications do these attitudes and opinions have for developing a contract grading system in the technical writing classroom? In order to examine these questions, the teacher-as-researcher method allowed for gathering exploratory data that would directly impact teaching and research practices, while surveys allowed for understanding student attitudes and perceptions both in mid-process of contract grading and the end of the process.
This contract grading study was implemented in my technical writing classroom in the fall of 2014 at a state university in the Northeast. The technical and professional writing program at my university currently consists of service courses, designed to introduce majors across the university to professional, business, and workplace writing. Technical Writing I is the only technical writing course offered at the undergraduate level, and is considered an intermediate course consisting of students across majors. Of the 20 students enrolled in the course, seven majors were represented, including English, psychology, health studies, computer science, public health, aviation, and special education. Three of the students were English as a Second Language (ESL) students. Of the 20 students enrolled, 12 were seniors and 8 were juniors. There were 11 female students and 9 male students.
This class is designed to prepare students for the kinds of writing and communication practices in their disciplines. As the syllabus states,
This course focuses on preparing you to present specialized information (where we get the ‘technical’) in a clear and accessible way to your audience. Further, you will develop familiarity with specific writing contexts within your own fields of study and will gain experience communicating technical information to non-technical audiences.
On the first day of the semester, I alerted students both verbally and in writing through the syllabus that the class would use a contract grading system. None of the students had prior experience with contract grading. Similar to other institutions, the university allows a two-week add/drop period at the beginning of the semester during which students can withdraw from the course without penalty. None of the students dropped or withdrew; all 20 completed the semester with passing grades.
I decided to pilot contract grading in my own technical communication course because researchers have shown that it is valuable to experiment with contract grading in one’s own classroom before making it an institutional program or departmental policy (Danielewicz & Elbow, 2009; Inoue, 2005). The teacher-as-researcher method is one lauded by compositionists and contract grading enthusiasts. Developed by Stenhouse in the 1960s, “teacher-researcher” refers to the systematic and intentional inquiry of a school system, class, or group of students conducted by teachers (Ray, 1993, p. 173). The methodology of teacher-as-researcher attempts to traverse the divide between the researcher, the creator of knowledge, and the teacher, the one who distributes it, to offer a research design that actively involves educators and their students. Ray described teacher-as-researcher: “Students are not merely co-creators and subjects whom the teacher-researcher assesses; they are co-researchers and sources of knowledge whose insights help focus and provide new directions for the study” (p. 176). This methodological approach also aligns with critical pedagogy, one of the goals of contract grading noted in the literature review. Students not only contribute to their own assessment, but also shape the research practices of the evaluator.
I framed contract grading as a way to involve my students in the grading process by having them determine the criteria for each assignment. The handout to the students read, “Contract grading puts you in the driver’s seat. It lets you take control of your grade and focus on the areas of the class that most interest you.” The handout defined contract grading and provided students with the following section about how, as a class, they would determine the criteria for assessing quality:
The criteria for generating assignments will be based on criteria the entire class determines after researching, reading, and assessing texts. For example, the entire class through a discussion, will generate a list of criteria for a high quality assignment, i.e., identifying information, alignment of document, clear & well-written prose, spelling, etc.
As a new teacher to technical writing, I wanted to experiment with contract grading to determine if students had more involvement in the grading process and more of an understanding of principles discussed in technical writing, such as usability and document design.
I guided students through entire-class discussions, where they generated their own criteria for assignments, similar to Inoue’s rubric development. However, in the attempt to create a system of assessment akin to the quantity method—whether an assignment was completed or not—I created a system where I would first determine whether an assignment contained the class-generated criteria. I wanted to limit the scope of contract grading to student involvement in generating criteria to allow my students and me to transition toward a contract grading approach in the field of technical writing. The process of contract grading I used consisted of class discussion, where students generated the criteria, and evaluating student work, where I applied their criteria to the assignment to determine students’ grades.
In addition to determining specific criteria for assignments, students also had the ability to decide on their overall performance in the course, a tenet of contract grading. Their overall course performance was based on the following system:
A – To contract for a final grade of an A, students will have no more than three unexcused absences and will complete six assignments of “excellent” quality and two presentations of “excellent” quality
B – To contract for a final grade of a B, students will have no more than four unexcused absences and will complete five assignments of “great” quality and two presentations of “great” quality
C – To contract for a final grade of a C, students will have no more than five unexcused absences and will complete four assignments of “good” quality and two presentations of “good” quality
In this system, students completed assignments based on aspects of quality they generated as a class. One week later, students submitted their contracts for a specific grade. Of the 20 students in the course, 17 contracted for “excellent” quality (A & A-) and 3 for “great” quality (B). More discussion below offers additional insight into how students perceived such a system.
The other qualitative methods used in this study consisted of open-ended surveys, administered at mid-semester and at the end of the semester. These surveys encouraged students to verbalize their perceptions of the process, and to uncover “actions intertwined with the dynamics of time, such as things that emerge, change, occur in particular sequences, or become strategically implemented” (Miles, Huberman, & Saldana, 2014, p. 75). The open-ended surveys were able to ascertain student attitudes and perceptions of contract grading, particularly since they were anonymous. These questionnaires (See Appendix A) were also important as this was an exploratory study designed to elicit an extensive amount of information from students (MacNealy, 1999; Strauss & Corbin, 1990). Open-ended questions at mid-semester asked students about their understanding of contract grading based on the course, how characteristics of quality relate to contract grading, and whether they preferred this system to traditional grading. At the end of the semester, students shared and reflected more extensively on their overall experience with contract grading, including their role in the process. All 20 students responded to the IRB-approved survey distributed at mid-semester and at the end of the semester.
In order to ask students specific questions about “quality” in the surveys, classroom time, particularly discussions, was devoted to examining sample documents written by professionals and students along with examples from the textbook. These samples encompassed a range of levels of quality. As the teacher-researcher, and for pedagogical purposes, I did not inform students which examples were exemplary; rather, we worked inductively to evaluate the documents. I regularly asked students questions such as, “What is effective about this document? What areas are problematic? How would you assess this document?” As a class, we discussed documents and I often called on students to elicit feedback about strengths and weaknesses of a particular document. Based on my observations, students seemed more accepting of documents from the textbook and were highly critical of anonymous student samples from a prior semester. When asked what grade they might give to various student samples and why, the majority identified the student work as “low quality” or “C level,” even on work I considered strong.
After spending several class sessions on a specific genre of document, approximately one class session before the rough draft of the document was due, I invited students as a whole class to brainstorm areas of quality they would need to meet to succeed on their own assignment. I encouraged students to use the textbook and discuss the kinds of criteria that would be useful for their projects, and I helped them determine generalizable criteria. I invited the class to brainstorm criteria an assignment would need to meet, from global issues such as structure to local issues such as correct grammar and syntax. As students brainstormed, I guided them toward principles valued in technical communication and precise language for their identified criteria. Figure 1 below demonstrates the generalizable criteria the class developed for the first assignment, a resume.
Figure 1. Criteria for Resume Assignment
Contains an objective/summary
Contains relevant experience
Free from errors (grammatical, spelling, etc.)
Proper alignment and layout
Uses specific and strong verbs
Has a purpose to order/sequence
For transparency, there were no secret ballots: students voiced support for a particular measure of quality by raising their hands to vote openly. Similar to Inoue, who stated that “by ‘consensus,’ I do not mean that the class is in complete and full agreement, only that hard agreements have to be explicitly made eventually, despite some individuals’ disagreements” (p. 209), students were generally in agreement regarding the criteria that would be used to evaluate their assignments. Instead of a traditional letter-based grading system of A, B, C, I used descriptive terms with positive connotations: “excellent,” “great,” and “good.” My goal was to see whether this would affect student involvement, as well as their perception of a grade’s impact on their performance and effort. Developing an entirely positive grading schema was influenced by Zak’s (1990) work, which found that students who received only positive comments seemed to gain more authority over their writing. To be consistent with contract grading practices that encourage students to establish their own criteria, we determined that an “excellent” paper needed to meet 8-10 of the student-generated criteria, while a “great” paper met 6-7 and a “good” paper met fewer than 6. This practice is modeled on traditional practices of contract grading that focus on fulfilling a quantity of assignments, so students would understand how fulfilling a number of standards would affect their grade.
At mid-semester, student attitudes were evenly split: 9 students preferred contract grading and 9 preferred traditional grading, while one student reported “no difference” and one reported being “neutral in preference.” By the end of the semester, preferences had shifted decisively toward traditional grading, as 15 students expressed a preference for the traditional system and 5 cited merits of contract grading. The following results, which I coded according to patterns emerging from the data, highlight specific areas of resistance along with one area of potential benefit of this type of assessment. The findings, detailed below, suggest that the majority of students resisted contract grading and preferred traditional methods of assessment by the end of the semester. The analysis provides explanations for student resistance and, where relevant, places them in conversation with prior contract grading studies.
Preferences for Comfort and Habituation of Traditional Grading Practices
By the end of the semester, the majority of students expressed resistance to contract grading, which included preference for traditional grading practices. One student shared s/he preferred the instructor having the sole voice of authority for assessment:
As a senior in college I know how to track my own grades and it is a lot easier to do so without this contract grading. It doesn’t really make a difference on the end grade anyway, so it doesn’t push people to work harder.
Students explained that contract grading still contained traditional elements of grading. For example, one of the criteria students determined for a strong resume was “strong and specific language,” a criterion related to plain language, which is valued for its clear organization and accessibility. Some expressed concern about the bias and subjectivity of such a system. One student wrote:
Some things [about contract grading] seem clear cut but others seem like they could go either way which I don’t get. Like if in the resume strong language was required, we could have thought we used strong language but you didn’t. This is where I don’t get how contract grading works, and it appears more like traditional grading where you decide, not us.
This student pinpoints one of the problems of this study, and of determining “quality” in contract grading more generally. A paper may or may not have a title; a paper may or may not have paragraphs; but judging the strength of that title or of the organization varies from reader to reader. Here, the student captured the complexity of assessing writing through a binary of either meeting or not meeting criteria determined by the class. Students seemed to prefer a letter grade system to the terminology used in this contract grading system, and they also indicated interest in a more active role, including in the final evaluation. One student summarized, “even if the class creates the criteria, it is still subjectively evaluated by the professor’s biases,” speaking to the instructor’s role in making the final evaluation of a document. Overall, student feedback showed they were influenced by me as the evaluator; involving them in all aspects of the contract grading process may have produced different findings.
Resistance to contract grading has been documented by other researchers (Spidell & Thelin, 2006; Shor, 1992). Spidell and Thelin (2006) claimed students in their composition course “resisted the implementation of a contract because they could not conceive of a grading system that did not quantify their efforts” (p. 41). Some of this resistance stems from the fact that contract grading was new for these students. One student wrote in the survey that contract grading “logistically makes more sense but I’m so preconditioned to traditional grading.” Shor (1992) explained that students’ proclivity for traditional grading practices could account for some of their resistance, noting, “empowering educators face traditional student resistance as well as resistance coming from the invitation to empowerment itself” (p. 139). Seemingly, then, my students were distrustful of a new system of grading. They were also suspicious of a system based on the positive, qualitative qualifiers “excellent,” “great,” and “good,” which carry their own anxieties and assumptions for students accustomed to letter grades (Veit, 1979).
Difficulty Generating Criteria and Seeing Themselves as Experts
In addition to resisting new systems, students shared extensively that they valued traditional grading practices, which favor feedback from an “expert,” over grading criteria generated by peers. One student stated:
I appreciate the class input; however, you are the instructor. You hold the doctoral degree. I wish I got more input from you vs. the class on my rough drafts. You tell us what is expected in a document, not the other way around. If this class was a major requirement then I would trust my classmate’s judgments more. However, this class is not made up of all refined English majors; therefore, I had a hard time taking their suggestions to better my skills.
The student summarized one of the inherent problems of contract grading by student consensus, particularly in a service course: accepting that other students are “experts” or have enough knowledge to generate meaningful and relevant criteria. Other students alluded to this problem of expertise in technical writing. One student wrote, “I found it difficult to come up with criteria ourselves. On projects I would have liked to have specifically what we needed to have completed on it.” Another mentioned, “I also thought the criteria needed was more opinion based than what was actually supposed to be there.” These results indicate students perceived generating criteria for documents as challenging, both because of their limited expertise and because the criteria were dependent on the reader.
The survey results indicated students perceived their classmates’ criteria to be superficial, rather than fully articulating the tenets of a specific genre. For instance, a student shared that “a lot of the criteria generated by students is cosmetics, and everyone likes something different. I feel some students like to have autonomy but I do not.” Students may have been more comfortable developing criteria relating to syntax or grammar than criteria relating to rhetorical or discipline-specific writing conventions, which were less familiar to them. Interestingly, in this collective class discussion of generating criteria, students seemed to distrust the authority and expertise of their classmates, and even considered some of the criteria to be “cosmetic” rather than meaningful and comprehensive. These findings suggest that students were still learning the necessary skills of a technical writer within this service course. They also suggest that contract grading may be better suited for courses within a major or minor, as students become more skilled in understanding the conventions of their fields.
Understanding Quality through Consensus and Discussion
Although students reported that determining specific quality for assignments (the end result) was a subjective process, they recognized the value in taking part in the process, especially in discussions of quality and criteria for specific assignments. When asked, “How does ‘quality’ and characteristics of quality factor into [contract grading]?”, one student shared, “[quality] is up to us…thus we should know how the quality for an excellent assignment should be.” Another student shared that “class-designated expectations of quality allow students to think more critically about the work they produce, rather than impelling them to methodically work for a standard letter grade.” These responses indicate that, unlike in traditional grading practices, students within a contract system have to think more critically about the assignment and its required specifications.
Allowing students the opportunity to discuss the quality of assignments in class, and through their extensive written feedback on surveys, prompted them to define more precisely what quality means in relation to technical writing assignments. For example, several students produced their own definitions of quality, not specific to the assignments discussed. In response to the survey question asking students to “define what quality means in relation to a technical document,” students offered the following:
- “how neat and organized the work is”
- “college level [work]”
- “evident that time and effort was put forth”
- “proper grammar and sentence structure”
Although these responses are varied, students began to develop specific criteria for documents rather than relying on instructor rubrics.
In addition to these findings, students reported that contract grading involves them more in the grading process, which they seemed to enjoy; this was a new opportunity for each student. One shared that “this style…does get the student more involved on how they will be graded which can provide a more accurate understanding of the assignment.” Another student explained the merits of contract grading by claiming, “this style of grading is not bad because it challenges me to meet my own expectations of me. The traditional type is like the potential the other person sees in me.” A student shared, “I like getting to choose the criteria necessary for assignments, allowing me to fully understand what needed to be done,” while another noted that “coming up with criteria as a class seemed to help explain/show why certain elements of each project were necessary and important.” These findings suggest that the aspect of contract grading students found most valuable was their part in the process: generating criteria and articulating what a quality document is, depending on the particular genre.
The student findings from this exploratory study suggest there were specific areas of resistance to contract grading, particularly the instructor serving as the assessor of the student-generated criteria. One of these problems was determining whether a student's work either possessed or did not possess a certain criterion generated by the class. Students might also have been influenced by terminology in the textbook; as we discussed resumes, we referred to a passage from Markel with examples of precise and specific language. At times, students perceived “strong and powerful language” as an assessment from me, the instructor, rather than as terms used in the textbook. Such language could instead be used to actively involve students in demonstrating where that quality occurs in their own work. Through the surveys, students indicated negative attitudes toward the instructor using student-generated criteria to assess their essays. As a new teacher of technical writing, I had assumed that principles of technical writing might be seen as more objective, but student attitudes and opinions reminded me that assessing writing is an arduous task, often dependent on the evaluator’s perspective. This finding also offers instructors of technical writing and other fields the opportunity to carefully consider their role in the assessment process and to create a more open conversation about understanding our students’ choices as authors.
This exploratory study suggests that instructors implementing contract grading in the classroom, especially criteria generated by students, should consider not only student demographics and majors but also students' role in the contract grading process. As stated previously, none of the 20 students in the course had prior experience with contract grading. Many perceived the contract grading model as reliant on their peers, and this collaborative process was viewed with skepticism because, as several shared, their classmates lacked the expertise of the professor. In addition, many of the disciplines represented were in the sciences, and the need for clear, objective criteria could have impacted students' ability to support a new and unknown grading system. Because this was a service course where students from a variety of majors were all learning the skill sets of technical writing together, they were distrustful of their peers generating the criteria. This suggests that the method of contract grading I implemented may be better suited for upper-level, major-specific courses. Other systems might also produce different results, such as one in which students fully generate criteria and assign evaluations, or, alternatively, a scaffolded type of contract grading allowing progressively more student participation. Future studies could explore fuller student participation in all aspects of assessment, including student-generated criteria and student self-assessment against those criteria. Such a study could determine whether students still associate contract grading with traditional grading practices, or whether they come to different conclusions.
Understanding student attitudes and perceptions not only deepens our knowledge of student experiences but also challenges much of the literature on contract grading, often touted as a more egalitarian, positive, student-centered assessment model. By assuming students are receptive to and invested in contract grading, we may overlook the possibility that the most effective assessment method involves them in the process from the beginning. As Anson (1989) summarized,
Students must become their own evaluators. In essence, we are asking teachers to help wean students from a simple view of the world. We want students to see teachers not as right authority figures to be deferred to, nor as wrong authority figures to be rejected, but as individuals, representing a culture and a discipline, with whom to talk. (p. 77)
The process of “weaning” students begins by involving them in aspects of their assessments to determine their specific contract grading needs. As Cook explained, research in technical communication is still focused primarily on “the extent to which programmatic goals are observable in student performance” (as qtd. in Jablonski & Nagelhout, 2010, p. 174). This study lays the groundwork for future studies, which can work recursively: beginning with student attitudes toward contract grading and then exploring how to channel those attitudes into a method of assessment that involves both students and instructors. Rather than simply inviting students into a contract, we need to have conversations with our students about existing assessment models. For technical communication, we could also allow students the flexibility and latitude to create and negotiate their own contracts, which mirrors workplace writing. This kind of teaching would mitigate student perceptions of instructor bias, as seen in this study, and place more agency on students throughout the writing process, from beginning to end.
This study also yielded important findings about what students did not resist with respect to contract grading: having “a voice” in the process. As students indicated through surveys, they did perceive their involvement in generating criteria and recognized the necessity of these criteria with respect to the quality of documents. In a service course, students also acknowledged that they had to understand the quality of documents to be involved in the contract grading system. This tells us that a contract grading system focused on quality has the potential to concentrate our teaching and student work on mastering excellence across a variety of genres. It means articulating and negotiating the quality of these documents and the criteria necessary to deliver them, while acknowledging that technical communication places greater emphasis on visual design and communal writing. For other subjects, it involves conversations about the exemplary characteristics of particular genres and inviting students to both generate and evaluate criteria.
This study also affected my own teaching practices in a personal way. Danielewicz and Elbow (2009) argued that contract grading not only affects students in the classroom but also improves teaching. As a junior faculty member new to teaching technical writing, implementing contract grading and listening to the voices of students taught me that technical writing asks us to deeply consider audience, speaker, context, and rhetorical purpose. In using a yes-or-no answer to whether a document contained “strong and specific prose,” I created a dichotomy that did not allow for the discussion and subjectivity surrounding such claims. The study particularly reshaped my discussions of the rhetorical nature of technical writing and the humanistic concerns of document design, delivery, and content. It has called to light the need to encourage our students to consider more broadly the collective norms and contexts of their actions as they enter the professional world (Ornatowski, 1997, p. 34). Whether teaching technical writing, composition, or another course, contract grading invites us to consider our own influence as instructors and the ways in which students perceive their involvement, and are involved, in assessment.
Finally, this system reinforced that writing technical documents is a process mirroring the collaborative and communicative practices of workplace writing, while writing, rewriting, and negotiating the contract itself is applicable to the writing students will do in their professional lives. Contract grading in a technical writing classroom has the potential to lay the groundwork for researchers in other disciplines to consider how our assessment practices can echo our pedagogical practices and course outcomes. As evident from this study and the literature review, we must continue to explore ways in which contract grading systems can be integrated into writing classrooms to encourage ourselves and our students to become more skilled readers, writers, and evaluators.
I would like to thank the JWA editors, Diane Kelly-Riley and Carl Whithaus, for their feedback and support of this article. I also want to thank the anonymous JWA reviewers for their comments. Finally, I extend my gratitude to Peter Elbow, who offered me advice during revisions of this article and encouraged me to tell my own story through the research process.
Adkison, S., & Tchudi, S. (1997). Grading on merit and achievement: Where quality meets quantity. In S. Tchudi (Ed.), Alternatives to grading student writing (pp. 192-208). Urbana, IL: NCTE.
Anson, C. (1989). Writing and response: Theory, practice, and research. Urbana, IL: NCTE.
Bishop, W. (1987, July). Revising the technical writing classroom: Peer critiques, self-evaluation, and portfolio grading. Paper presented at the Annual Meeting of the Penn State Conference on Rhetoric and Composition, State College, PA. Retrieved from http://eric.ed.gov/?id=ED285178
Bishop, W. (2005). Contracts, radical revisions, portfolios, and the risks of writing. In A. Leahy (Ed.), Power and identity in the creative writing classroom (pp. 109-120). Tonawanda, NY: Multilingual Matters Ltd.
Conference on College Composition and Communication. (2014). Writing assessment: A position statement. Retrieved from http://www.ncte.org/cccc/resources/positions/writingassessment
Connors, R. J. (1982). The rise of technical writing in America. Journal of Technical Writing and Communication, 12(4), 329-352.
Danielewicz, J., & Elbow, P. (2009). A unilateral grading contract to improve learning and teaching. College Composition and Communication, 61(2), 244-268.
Deming, W. E. (1986). Out of the crisis. Cambridge, MA: MIT Press.
Elbow, P. (1973). Writing without teachers (2nd ed.). New York: Oxford UP.
Emig, J. (1971). The composing processes of twelfth graders. Urbana, IL: National Council of Teachers of English.
Flower, L., & Hayes, J.R. (1981). A cognitive process theory of writing. College Composition and Communication, 32(4), 365-387.
Freire, P. (1993). Pedagogy of the oppressed. New York, NY: Continuum Books.
Gillette, D. (2005). Lumiere ghosting and the new media classroom. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 10(1), n.p. Retrieved from http://kairos.technorhetoric.net/9.2/features/gillette/index.html
Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Thousand Oaks, CA: SAGE Publications.
Hassencahl, F. (1977). Contracts move from commerce to the classroom. Proceedings from the Annual Meeting of the Eastern Communication Association, 1-13. Retrieved from http://eric.ed.gov/?id=ED162360
Inoue, A. B. (2005). Community-based assessment pedagogy. Assessing Writing, 9(3), 208-238.
Jablonski, J., & Nagelhout, E. (2010). Assessing professional writing programs using technology as a site of praxis. In M. N. Hundleby, & J. Allen (Eds.), Assessment in Technical and Professional Communication (pp. 171-188). Amityville, NY: Baywood Publishing Press.
King, T. (1972). A contract approach to a public speaking course. The Speech Teacher, 21, 143-144.
MacNealy, M. S. (1999). Strategies for empirical research in writing. New York, NY: Addison Wesley Longman.
Matulich, L. (1983). Contract learning in the traditional technical writing class. Paper presented at the Annual Convention of the American Association of Community and Junior Colleges, New Orleans, LA.
McDonnell, C. (2011). A farewell to grades. In S. Tchudi (Ed.), Alternatives to grading student writing (pp. 210-224). Urbana, IL: NCTE.
Miles, M. B., Huberman, A. M., & Saldana, J. (2014). Qualitative data analysis: A sourcebook (3rd ed.). Thousand Oaks, CA: SAGE Publications.
Miller, C. (1979). A humanistic rationale for technical writing. College English, 40(2), 610-617.
Moreno-Lopez, I. (2005). Sharing power with students: The critical language classroom. Radical Pedagogy, 7(2), n.p. Retrieved from http://www.radicalpedagogy.org/radicalpedagogy/Sharing_Power_with_Students__The_Critical_Language_Classroom.html
Murray, D. (1972). Teach writing as a process not product. In V. Villanueva (Ed.), Cross talk in comp theory: A reader (pp. 1-6). Urbana, IL: National Council of Teachers of English.
Ornatowski, C. M. (1997). Social construction theory and technical communication. In K. Staples & C. Ornatowski (Eds.), Foundations for teaching technical communication: Theory, practice, and program design. Greenwich, CT: Ablex.
Ray, R. E. (1993). The practice of theory: Teacher research in composition. Urbana, IL: NCTE.
Rohman, D. G. (1965). Pre-writing: The stage of discovery in the writing process. College Composition and Communication, 16(2), 106-112. Retrieved from http://www.jstor.org/stable/354885
Selber, S. (1994). Beyond skill building: Challenges facing technical communication teachers in the computer age. Technical Communication Quarterly, 3, 365-390.
Shor, I. (1992). Empowering education: Critical teaching for social change. Chicago, IL: University of Chicago Press.
Shor, I. (2009). Critical pedagogy is too big to fail. Journal of Basic Writing, 28(2), 6-27.
Spidell, C., & Thelin, W. H. (2006). Not ready to let go: A study of resistance to grading contracts. Composition Studies, 34(1), 35-68.
Stelzner, S. L. (1975). Selected approaches to speech communication evaluation: A symposium. The Speech Teacher, 24(2), 127-132.
Stenhouse, L. (1983). Authority, education, and emancipation. Portsmouth, NH: Heinemann.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.
Veit, R. C. (1979). De-grading composition: Do papers need grades? College English, 41(4), 432-435.
Wolvin, A. D., & Wolvin, D. R. (1975). Contract grading in technical speech communication. The Speech Teacher, 24(2), 139-142.
Zak, F. (1990). Exclusively positive responses to student writing. Journal of Basic Writing, 9(2), 40-53.
Sample Survey – Mid-Semester
This is my first experience with contract grading: YES___________ NO_________________
If no, briefly discuss the earlier context:
What is your understanding of contract grading from this course?
What grade did you contract out for?
How do quality and characteristics of quality factor into developing criteria?
Thus far, do you prefer this style of grading or a traditional type? Why?