Volume 5, Issue 1: 2012

Big Rubrics and Weird Genres: The Futility of Using Generic Assessment Tools Across Diverse Instructional Contexts

by Chris M. Anson, Deanna P. Dannels, Pamela Flash, Amy L. Housley Gaffney1

Interest in "all-purpose" assessment of students' writing and/or speaking appeals to many teachers and administrators because it seems simple and efficient, offers a single set of standards that can inform pedagogy, and serves as a benchmark for institutional improvement. This essay argues, however, that such generalized standards are unproductive and theoretically misguided. Drawing on situated approaches to the assessment of writing and speaking, as well as many years of collective experience working with faculty, administrators, and students on communication instruction in highly specific curricular contexts, we demonstrate the advantages of shaping assessment around local conditions, including discipline-based genres and contexts, specific and varied communicative goals, and the embeddedness of communication instruction in particular "ways of knowing" within disciplines and subdisciplines. By sharing analyses of unique genres of writing and speaking at our institutions, and the processes that faculty and administrators have used to create assessment protocols for those genres, we support contextually-based approaches to assessment and argue for the abandonment of generic rubrics.


Persistently, queries appear on large national listservs in composition studies, communication across the curriculum, or writing program administration asking for advice about the development of a single, all-purpose rubric to assess writing or communication across the poster's entire institution, which is often a large university with thousands of students and dozens of disciplinary areas.

In a new position to "explore" the state of writing across the curriculum at my new institution, I have been asked to help design a university-wide grading rubric.

I've been asked to provide an assessment rubric for the communication requirements in our college-wide General Ed program. . . . The rubric wanted by the Assessment Committee here must be suitable for all graduates, regardless of major.

[W]e're interested in finding an effective grading rubric--literally a one-page form--that would make it easier for faculty from Social Science and Science disciplines to comment effectively and efficiently on student writing.

As part of our revision, we are considering a campus-wide rubric for WAC courses . . . . Does anyone else have a campus-wide writing rubric?


Such queries are not a recent development; for years, many colleges and universities have shared an interest in finding or developing a single rubric or set of criteria that can be used to assess writing or speaking in virtually any curricular context.

Interest in "all-purpose" assessment usually has its genesis in a desire to establish common goals for communication or gather institution-wide data as simply and uniformly as possible. Creating one set of standards that drives everyone's attention to communication abilities may seem preferable to becoming entangled in localized assessments tied to varied, even idiosyncratic practices in specific colleges, majors, departments, or courses. Such generic rubrics are said to unite faculty around common goals, terminology, and rhetorical perspectives. A number of organizations have even tried to create common national outcomes for writing or speaking, such as the Association of American Colleges and Universities' "VALUE Rubric: Written Communication" (AAC&U), the National Communication Association's "Competent Speaker Form," (NCA), the Council of Writing Program Administrators' WPA Outcomes Statement for First-Year Composition (WPA), and an ultimately failed cross-curricular project--an attempt to create a set of national outcomes for writing beyond first-year composition courses (Anderson, Anson, Townsend, & Yancey, forthcoming). These projects take the form of a set of expectations or criteria against which entire programs and institutions are supposed to measure their success in preparing students as communicators.

In this article, we argue that rubrics reflecting generalized standards wear the guise of local application, fooling us into believing that they will improve teaching, learning, and both classroom and larger-scale assessment. Our conclusions are informed by extensive experience working with faculty, administrators, and students on communication instruction in specific curricular contexts. Drawing on theories of communication as situated and contextualized (e.g., Lave and Wenger, 1991), we demonstrate the advantages of shaping assessment in communication across the curriculum (CAC) to local contexts and conditions, including specific and varied communicative goals, discipline-based genres, and the "ways of knowing and doing" embedded within disciplines and subdisciplines (Carter, 2007). By sharing analyses of unique genres of writing and speaking, and the processes faculty and administrators have used to create assessment protocols for those genres, we support contextually-based approaches to assessment (Broad et al., 2009) and argue for the education-wide abandonment of generic rubrics.

Contextual Dependencies in Evaluation
Research in the fields of writing and communication studies has long demonstrated the presence and functions of discursive differences based on disciplinary formations and the emergence of discipline- or context-specific genres (Bazerman, 1988; Fahnestock, 1999; MacDonald, 1994; Miller & Selzer, 1985). Although early work on these differences leaned toward textual or rhetorical analysis, more recent scholarship has employed richer methodologies that consider the histories and emergence of situated genres and their complex functions within particular communities. From the perspective of activity theory, writers don't develop abilities generically and simply apply them seamlessly to whatever new contexts they may need or want to write in. Instead, as Russell (1995) has put it, "one acquires the genres . . . used by some activity field as one interacts with people involved in the activity field and the material objects and signs those people use . . . " (56). This principle is further demonstrated in studies of successful novice and professional writers who move from one discursive community to another and experience significant difficulty "transferring" their abilities across these contexts (Anson & Forsberg, 1990; Beaufort, 2007). Even skills that might seem easily transportable are often highly context-dependent; writers who learn to "analyze" in college literary papers struggle when asked to do so in a marketing proposal or an argument in political science. "Making assertions" or "providing evidence and support" involves rhetorical moves whose linguistic manifestations are nuanced and context-dependent, shaped by shared understandings of how knowledge is created and mediated in specific settings (see Odell, Goswami, & Quick, 1983).

Many situated practices, shared genres, and ways of disciplinary knowing are highly specific yet still relatively normative across particular discursive communities or even clusters of communities, what Carter (2007) calls "metadisciplines." For example, patient logs or shift reports in nursing will share certain textual and epistemological features in different clinical settings based on their purposes and specific rhetorical demands (such as conveying information with the utmost accuracy and objectivity). Yet in educational institutions, writing and oral communication assignments also take on variations and idiosyncrasies based on specific pedagogical strategies and learning goals; that is, they are purposefully adapted to their educational context. In many college courses--even in professionally-oriented programs--mixed, hybrid, invented, and creatively orchestrated assignments may call for unique or blended genres representing either localized pedagogical goals (such as ensuring insightful reading of a chapter) or professional goals (such as practicing close observation), or some mix of both. For example, an assignment in an art history course asks students to visit the storage area of a campus museum to examine artifacts from the Asmat people of Indonesia. Students conduct careful observations of certain artifacts and take notes based on a heuristic designed by the instructor (Barnes, 2010). Following their analysis of their chosen artifact, students produce an "object condition report" whose context is imagined to be internal to the community of professional art historians, such as curators. Shifting audiences and rhetorical purposes, they then write a "museum label" designed to explain certain features of the artifact to the museum-visiting public. The texts students submit exist in a "conditional rhetorical space" (Anson & Dannels, 2004), sharing some features of writing within the professional context they are designed to emulate but driven by rhetorical, pedagogical, and content-related goals associated with the classroom where they are ultimately located and evaluated.

The professor who designed the art history assignment looks for evidence of students' learning that emerges from and is directly related to his learning goals, which include "gaining competency in applying the analytical skills of an art historian to the specific works of art or art historical issues" (Barnes, 2010). An attempt to apply more generic assessment criteria to the results of this assignment would likely violate the principle of "constructive alignment" (Biggs, 2007), which refers to the relationship between specific learning goals, the methods of achieving those goals, and the assessment criteria used to judge the success with which they have been achieved. In the context of an art label, "well organized" takes on a quite different meaning, manifested in quite different text, than the same generic concept applied to an experiential "violation of social norms" assignment in social psychology or a reflective blog entry in a service-learning course in food science. Even genres thought to be canonical, such as lab reports, vary across different sections of the same course, as one of us discovered during a consultation with several faculty teaching the same heat-transfer experiment. Considerable differences in such basic characteristics as the length of the lab report arose from the learning goals that the faculty used to shape their expectations ("be able to compress multiple observations of phenomena into a highly synthesized account focusing on results" vs. "be able to explain in sufficiently robust detail the processes observed in the experiment").

To illustrate in greater detail the importance of deriving assessment criteria from specific contexts and occasions for communication, we turn to some more comprehensive analyses of oral and written genres practiced in discipline-based undergraduate courses at two institutions (the University of Minnesota and North Carolina State University). We begin by describing the University of Minnesota's Writing Enriched Curriculum (WEC) project and its process for supporting faculty-generated, discipline- and genre-relevant grading criteria. We then turn to North Carolina State University for an extended example of assessment of an oral genre, partly because this domain of communicative performance suffers even more than the teaching of writing from the problem of overgeneralized criteria and all-purpose rubrics. More importantly, however, it shows how oral presentation practices that seem to carry a uniform set of performative standards are, in fact, imbricated with highly context-specific expectations. As is clear from these analyses, any attempt to create criteria defining a "successful" performance within such disciplinarily and pedagogically distinctive contexts must emerge from, and be unique to, these genres as they are defined and practiced within the communities where they have meaning. The highly situated nature of these performances makes them partially or wholly resistant to the application of generic evaluation rubrics or sets of generalized criteria.

Going to the Source: Faculty Members Transform Writing Assessment in the Disciplines
A three-way mismatch between what faculty members in a discipline say they expect of student writers, what they ask students to do in their writing assignments, and the criteria they use in assessing the resulting writing became evident in 2006 when the University of Minnesota began collecting writing assignments and student writing samples from across the disciplines. A component of the pioneering Writing-Enriched Curriculum program (WEC), the sampling process involves collecting writing assignments and student writing from three levels of courses.

The WEC project works toward the integration of relevant writing instruction and writing assessment in the disciplines by engaging faculty groups in creating and implementing customized Undergraduate Writing Plans. Like the Writing and Speaking Outcomes project at North Carolina State University (Anson; Anson, Carter, Dannels, & Rust, 2003) and the departmental rubrics model at George Mason University, WEC is premised on the belief that not all faculty members who incorporate writing assignments into their teaching have had adequate opportunity to scrutinize their own assumptions about what student writing should or could look like in their courses. Once acculturated into their disciplinary discourse communities, they, like those who came before them, express confidence in their abilities to recognize successful writing when they see it. Their confidence often falters, however, when their characterizations are probed, when they are asked to describe what they really mean by "well organized" or "analytical" or "adequately developed," and it falters further when they are asked to distinguish between standards that are appropriate for undergraduate and graduate-level writing.

The WEC model develops from two related convictions. First, those who teach undergraduate students in the disciplines should be the ones to shape the writing instruction and assessment that occur there. Second, curricular infusion of discipline-relevant writing instruction will not be adequately achieved until local faculty groups have had a chance to examine, and possibly revise, assumptions about what writing and writing instruction look like and entail. At the heart of the WEC process, therefore, is a series of facilitated discussions in which departmental faculty react to student writing samples and survey data and then react to their reactions, all in the name of generating content for their Writing Plan. The writing samples faculty discuss in these meetings are drawn from three of the unit's courses, one at the introductory level, one early in the major, and one at the capstone level. Survey data are derived from online surveys administered to a unit's faculty, student majors, and a population of professional affiliates external to the university at the start of the WEC process. These surveys are designed to capture all three populations' characterizations of writing in both academic and professional applications of the discipline and their assessment of students' writing strengths and weaknesses.

Areas of dissonance captured in survey data serve as useful provocation, initially inspiring faculty members to attend meetings and ultimately forcing them to prod their assumptions about the written genres they assign and about the criteria they privilege in assessing student work. In their Writing Plan, these revised (or in-revision) ideas answer five key questions (see Figure 1). The second question--"With what writing abilities should undergraduate majors in this unit graduate?"--is arguably the most important addressed in early Writing Plans. In answering this question, faculty members wrestle with ways of characterizing abilities that lie at the confluence of writing and thinking. Later, in addressing the third question, "Where and how in this unit's curriculum, are (or shall) these desired abilities be developed?," they continue wrestling, this time with how to go about accommodating the developmental nature of writing--an idea that is new to many around the table. Transcriptions of meetings reveal that although faculty members are challenged by these discussions, they find them both useful and interesting. "Oh!" crowed a junior faculty member in the Department of Ecology, Evolution, and Behavior, "I had no idea that we all agreed that students in all three of our sub-fields needed to write to both policy and science audiences! This is going to make grading papers in my course so much easier."


Figure 1: Skeletal Outline of WEC's Undergraduate Writing Plans



As noted above, this process ultimately results in Undergraduate Writing Plans in which the faculty describe not only the writing they expect, but also the ways in which their undergraduate curricula can change to support students' development as writers. Once approved by a faculty board, the plans then move into a recursive cycle of implementation and assessment.

As the WEC team began to collect and, in conversation with faculty members in the disciplines, examine collected samples, they found that a large percentage of writing assignments fell into one of two groups. In the first group are those bearing generic titles ("Research Paper I," "Formal Assignment 2") and equally generic grading criteria ("Organization," "Accuracy," "Development"). In the second group are those bearing discipline-specific genre titles ("Theatrical Production Analysis," "Scientific Brief") and detailing specific thinking abilities instructors wanted to see demonstrated in students' deliverables ("engage a visual metaphor and persuade your reader of its centrality to the director's interpretation," "identify a scientific area of uncertainty"), yet listing the same generic grading criteria. Rare indeed was the assignment in which title, description, and grading criteria were aligned with the writing abilities faculty members valued, and rarer still were the departmental faculty who, when asked about their level of satisfaction with student writing, voiced unanimous, unequivocal enthusiasm.

Analyses of survey results, meeting transcripts, collected assignments, and samples of student writing show that even where faculty members across the disciplines seem to agree, they don't. Although the writing they assign may be broadly categorized into similar-sounding genres, these genre titles can refer to markedly different forms of writing. As illustration of this point, Figure 2 shows that across all surveyed disciplines, "oral presentations," "research papers," and "essays" were among the most frequent writing assignments.


Figure 2. Responses from 780 interdisciplinary faculty members and graduate TA instructors to WEC survey question: "Which of the following writing assignments have you incorporated in any of the academic major courses that you teach within the past year?"


These broad patterns tempt the assumption that students encounter similar genres as they move from course to course, fulfilling distribution requirements. When faculty groups are asked to provide further details about what, in their discipline, a typical "research paper" or "essay" might look like, distinctions begin to emerge. For example, a majority of faculty surveyed from the University of Minnesota's School of Nursing indicated that they assigned "research papers." In subsequent discussions, the Nursing faculty agreed that among the most frequently assigned research papers they require of their students is the "Patient Care Plan," which requires students to incorporate primary research data into comprehensive, descriptive, researched, and cited analyses of patients' complaints, diagnoses, and prescribed care. A typical research paper in Political Science looks strikingly different. In that department, students are frequently asked to "identify a research question that is germane to the course objectives" and to develop a "source-based response" to that question, "utilizing the theoretical lenses we've covered in this course." In Philosophy, faculty surveys indicated that very few faculty assign research papers. Instead, they favor the "essay." When asked about these essays, they provided varied descriptions. Some assign short-answer comprehension questions ("What did Spinoza mean by . . . ?") while others assign longer papers requiring students to present and evidence innovative interpretations and applications of philosophical tracts or logic-based explications of complex concepts. Across campus, in the Department of Ecology, Evolution and Behavior, essay assignments require students to write scientific briefs for real or hypothetical officials. In this interpretation of the essay form, students must relay condensed factual data to non-scientific readers who are in the position of creating environmental legislation. Clearly, although these individual instructors are naming similar genres, the forms of writing and the cognitive moves students will need to make in the writing are quite different.

WEC data also indicate that where similarly worded grading criteria may be found in diverse courses, faculty and students usually interpret those criteria in various ways. As shown in Figure 3, the most commonly utilized grading criteria rate students' abilities to analyze. How are students to interpret that criterion? Shall they use the version they picked up in their general biology courses, where they would break something into its component parts and look closely at each? Alternatively, shall they use the version they used in literature courses, where they were expected to apply theoretical lenses to interpretable content?


Figure 3: Predominance of Analysis as an expected feature of undergraduate writing across 23 departments participating in WEC


According to both survey results and WEC discussions, faculty members in the Political Science department hold in high regard the demonstration of analytic thinking. Yet when asked in a meeting about what successful analysis looks like, or how it might contrast with forms of analysis students might be asked to do in other classes, discussion coughed to a stop. Was it similar to analysis students might be required to conduct in their general biology courses? No! Well, was it akin to the kinds of analysis they might conduct on texts in a literature course? Definitely not! After much deliberation, the faculty determined that the kinds of analysis they looked for in student writing involved "disaggregating the logic" found in secondary material, "so that it can be empirically evaluated." And how might this expectation be communicated to undergraduate students? "We want them to identify the different kinds of evidence used in political debates, to argue why these choices were made and what these choices say about the debaters." At the conclusion of this discussion, one faculty member popped out of his chair to announce to all present, "Now I know for the first time how to answer students who plead with me to tell them what I want in these papers!"

Clearly, the process WEC uses for enabling faculty members to close the gap between expectations and criteria is not fast work. Early assessment suggests, however, that it is effective. Iterative sampling of writing assignments reveals that many fewer mismatched messages are being sent to students. Further, samples of student writing collected early and later in the process and rated using the criteria crafted by unit faculty members suggest that student writing is drawing closer to faculty members' expectations.

The resulting "alignment" of expectations and criteria (see Biggs, 2003) is demonstrated in Figures 4, 5, and 6, which longitudinally display the multi-year process of unearthing what faculty value in students' responses to their assignments.


Figure 4: Evolution in the ways the Mechanical Engineering faculty and students describe "analytical" writing


Figure 5: Evolution in the ways the Political Science faculty and students describe "analytical" writing


Figure 6: Evolution in the ways the Graphic Design faculty and students describe "analytical" writing


To assess the extent to which this process of clarifying discipline-specific writing expectations is paying off in WEC units, samples of student writing are rated against faculty-identified rating criteria by a panel of internal and external raters. As the first unit to pilot the WEC model in 2007, Political Science is the first to have accrued a comparable set of student writing samples. In the summer of 2010, those student texts were rated against faculty-generated criteria by a group of three independent raters. The results were impressive--student texts posted significant gains between 2007 and 2009 (see Table 1). In 2009, students were better able to analyze evidence, identify and summarize arguments, and relate various perspectives to one another than they were in 2007. This increased success can be attributed to the changed ways in which faculty members in Political Science began to describe their discipline-specific and genre-specific expectations in writing assignments and grading instruments. As a result of the discussions they had with their colleagues in the four WEC meetings, they became more able to describe the kinds of writing they expected from students, and to clear up confusion that students might have about how political analysis worked and how it differed from biological or literary analysis.



Table 1: Rating results in Political Science

Writing Criteria: Political Science                            2007    2009
Identifies questions central to the field.                     80%*    93%
Explicates a relevant and compelling thesis or hypothesis.      7%     53%
Distinguishes among different kinds of sources.                40%     53%
Relates various perspectives to one another analytically.      20%     33%
Displays research for germane evidence.                        80%     67%
Draws conclusions about the central question from evidence.    13%     53%

* The percentages in this table are averages of the three raters' scores for all of the student texts for each criterion.
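For readers who want that arithmetic spelled out, the brief sketch below shows one way such a percentage could be computed; the judgments are invented for illustration rather than drawn from the actual rating data, and the per-text, 1-or-0 coding scheme is our assumption about how "met the criterion" might be recorded.

    # Hypothetical illustration of the table's arithmetic (invented numbers, not the study's data):
    # each rater judges whether each student text meets a criterion (1 = met, 0 = not met);
    # the reported percentage is the average of the three raters' per-text judgment rates.

    ratings_by_rater = {
        "rater_1": [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],  # judgments for ten student texts
        "rater_2": [1, 0, 1, 1, 1, 1, 0, 1, 1, 0],
        "rater_3": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],
    }

    per_rater_rates = [sum(judgments) / len(judgments) for judgments in ratings_by_rater.values()]
    criterion_percentage = 100 * sum(per_rater_rates) / len(per_rater_rates)

    print(f"Share of texts judged to meet the criterion: {criterion_percentage:.0f}%")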



In recent interviews, Political Science faculty members report increased adaptation of the department-generated list of criteria, and, as importantly, increased discussions with students about what these attributes look like in assigned readings and in student drafts. Although some voiced skepticism at the start of the process that they and their colleagues--scholars from divergent subfields of political science and theory--would agree on any criteria, the discussions generated very little disagreement. This may be because the list shown in Table 1 doesn't drill down to a level of detail that would attract disputes. In demonstrating their ability to "relate various perspectives to one another analytically," students on the "theory side" of political science and students on the "science" side of political science will use different forms of analysis. Other criteria lists generated by current WEC units contain items that are less open to interpretation. Consider the following: "Represents the ideas and arguments of others such that their logical progression and rational appeal is evident" (Philosophy); "Identifies and quantitatively analyzes errors and uncertainties" (Physics); and "Evaluates the effectiveness of artistic choices against perceived intent and/or named criteria" (Theatre and Dance).

By engaging independent raters in assessing capstone-level writing against sets of faculty-generated, context-specific criteria, the WEC project has given the University of Minnesota an alternative to cross-disciplinary assessment (using generic rubrics) of student writing. The project shows student writing drawing closer to meeting discipline-relevant criteria as Writing Plans continue to be implemented. As a result, the WEC program can tell the University that faculty-driven (rather than centrally-mandated) context-specific curricular change is leading to improved student writing.

When faculty members' expectations remain tacit, students' understanding of the norms of disciplinary discourse and of their own developing writing abilities may remain tacit too. In group interviews, upper-division undergraduate students told us that although they now felt confident in their abilities to write in their majors, they weren't sure how they had developed these abilities and had trouble describing them in non-generic terms. Early on, they had found that all genres looked weird--even those they assumed they had tamed in high school. A Political Science student, asked to write an analytical argument, assumed he was being asked to write an "Op-Ed." A Mechanical Engineering student, asked to turn in a problem set, assumed that he was being asked for solutions alone, stripped of equations. Had they been graded according to generic criteria, they might have assumed either that they had the misfortune of enrolling in the course of a challenging and withholding instructor, or that they needed to choose a major that required even less writing.

Design Critiques as a "Weird" Oral Genre
To understand the complexities of evaluating a specific genre of oral communication, we begin with an example of a student's experience in an undergraduate course in landscape architecture at North Carolina State University. This course was part of a larger project focused on creating, implementing, and assessing discipline-specific communication competencies in the College of Design. In the larger project, a team of researchers engaged in ethnographic observations within studios (classes) in all five departments in the College of Design to gather baseline data about the ways in which "critiques" (the primary oral genre in this setting) were enacted by teachers and students. The research team also gathered ethnographic interview data with faculty and students to explore the critique's communication climate, valued communication competencies, and specific issues related to feedback. From these data, the research team created instructional modules to support students' work in critique preparation (http://www.ncsu.edu/www/ncsu/design/sod5/communication/). We focus here on one aspect of the study involving the creation and use of a discipline-specific rubric for assessment of the critique in the landscape architecture course.

In order to more fully understand the critique and the development of a rubric for assessing student performance, we begin with a description of one student's design presentation. During the first half of the semester in this course, Bethany (our case study subject) and her peers presented their landscape design concepts to gain feedback before moving forward with their projects. Bethany had pinned up four drawings on the wall before the class began their critiques. After approaching her drawings, she briefly tried to settle her model onto a desk. "Well, uhhh . . . ," she began, then said "Hi" and introduced herself to the audience. She explained that her concept was organized around the idea of urban music and sounds at the site of the landscape. During this discussion, she gestured toward the model to orient the audience to the spaces surrounding the site, explaining that she organized the site to make a connection between a bell tower and an open courtyard because the site led to significant spaces on campus. She went on to talk about her decision to move forward with curvilinear forms because there was a mix of those forms on campus. A critic picked up her model, and Bethany used her pen to make various points about places on the site, for example, demonstrating how she moved a path in order to accommodate her concept.

After Bethany had addressed her overarching architectural concept (music) and her organization of the space, one of the critics took advantage of a lull in her presentation:

Critic (holding model): So your theme of music and rhythm, is that communicated in here? Or through your plant material? Or through the pathways?

Bethany: I would say it's through the paths and the, um, the different, um, patterns of, um, landforms. I have this step down into the lower area and this is more like a, a large striking note and I'm going to play with it more, um, with vertical elevations, but as far as . . . this would be straight, directional music, this would be the baseline.



The critic went on to ask her about a particular space and its potential use for a stage (the project required that students designate a space where a stage could be set up for university events such as departmental graduations). Bethany provided only a vague answer. The discussion continued with critics asking questions about how Bethany envisioned the site. During this conversation, Bethany stood with her arms crossed, hugging her notebook to her chest, except when taking the occasional note. Other than her model, the only visual she referenced was the top drawing she had pinned up, to which she only briefly referred after a critic asked for details.

Critiques like the one Bethany was engaged in (also referred to as "juries" or "reviews") are an integral part of design classes (Anthony, 1991). During critiques, students talk about their projects for a designated amount of time (usually between four and thirty minutes). They then participate in a feedback session that involves comments and questions from critics, who include faculty and even outside professionals. Because critiques are usually quite performative and interactive, and can be emotion-laden and at times uncomfortable, it can be challenging to create rubrics based on their multiple, complex features.

Most design literature that addresses the pedagogy of critiques--when instruction is given at all (Nicol & Pilling, 2000)--adopts what Morton and O'Brien (2005) call a "public speaking," or generic, approach. For example, Anthony (1991) advises students to dress appropriately for the occasion, prepare in advance, and emphasize key points--all advice that could be garnered from any public speaking course and that says nothing about communicating like a designer. Yet critics such as those who responded to Bethany's project are quick to identify design-specific communication successes and failures during critiques and rarely address the generic skills identified in the field's instructional advice. Emerging scholarship that takes a communication-in-the-disciplines approach, however, suggests the need to recognize discipline-specific competencies in design (Dannels, 2001).

Other research emerging from the communication-in-the-disciplines framework suggests that critiques (along with all oral genres) have important relational elements, in addition to content or performance-focused elements. These relational elements--termed "relational genre knowledge" (Dannels, 2009)--include the real and simulated, present and future relational systems that are invoked in teaching and learning the oral genre. In practice, relational genre knowledge calls upon teachers to attend to relational systems that are important to the oral genre event. These relational aspects of oral genre learning, while often lying beneath the surface of the instructional space, can be a significant factor in the perceived and actual success of the event.

The distinct and situated nature of the design critique and its complex relational elements make it an oral genre that does not fit the normal generic presentation form. Given the design critique's "weird" form, we argue that generic, all-purpose rubrics miss important complexities to which faculty and students in design need to attend. For example, organizations such as the National Communication Association publish criteria for evaluating students' presentations and other forms of communication. The most widely known of these is the Competent Speaker Form (Morreale et al.), which evaluates students on eight criteria using a scale of unsatisfactory, satisfactory, and excellent, with competencies focused on general areas of topic, thesis, supporting materials, organization, language, vocal variety, pronunciation, and physical behaviors. While the eight criteria promoted by this form are certainly relevant to the teaching and evaluation of public speaking, they are simultaneously too broad and too narrow to evaluate students' design critiques.

In place of using rubrics such as the Competent Speaker Form, we argue for the value of creating a discipline-specific rubric based on the situated, desired competencies relevant to the particular oral genre. To understand the benefits of the landscape architecture rubric over a generalized form, it is helpful to step back to the creation and implementation of the rubric. As mentioned, the development of a rubric for design students' communication competencies was a part of a larger research project aimed at understanding students' communication over the course of a semester (Gaffney, 2010). The rubric was based on previously established competencies derived through a thematic analysis of feedback given in design critiques (Dannels et al., 2008). The researchers looked for feedback that suggested what a student should do in presenting in a critique. They then refined the competencies in consultation with design faculty. The final list contained five competencies: demonstration of the evolution of the design, transparent advocacy of intent, clear connection between visuals and the spoken message, credible staging of presentation, and interaction management. To evaluate observable behaviors with these competencies, Gaffney operationally defined each competency based on the behaviors from the Dannels et al. study. Each competency was scaled to five more specific levels of achievement; the full realization of the competency served as the high end of the scale and all competencies were treated as equally important. It is also important to note that the competencies were not intended to evaluate students' design work itself, but rather the presentation of their designs and their interactions with critics.

Another expert was consulted to assess content validity for each definition. Other scholars were also asked to code additional videotaped critiques not used in the final analysis for this project in order to uncover any unconscious assumptions and to consider points that required clarification. The instructors whose students were involved in the study evaluated videotaped presentations with the rubric and those responses were used as a check to ensure that coding was calibrated to the "indigenous" (Jacoby & McNamara, 1999) view of communication abilities in the design critique. The final form called for a rating between 1 and 5, where 5 indicated successfully meeting a given criterion (see Appendix).
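As a purely illustrative aid, the sketch below represents the form's basic structure in code. The five competency names come from the study; the example scores, the function name, and the simple validation are our own assumptions and are not part of the published instrument, which reports each competency separately rather than as a combined score.

    # A minimal, hypothetical representation of the discipline-specific critique rubric
    # described above: five competencies, each rated on a 1-5 scale (5 = competency fully met).
    # Competency names follow the study; the example ratings are invented for illustration.

    COMPETENCIES = [
        "Demonstration of the evolution of the design",
        "Transparent advocacy of intent",
        "Clear connection between visuals and the spoken message",
        "Credible staging of presentation",
        "Interaction management",
    ]

    def report_critique_ratings(ratings):
        """Validate and print one 1-5 score per competency (all competencies weighted equally)."""
        for competency in COMPETENCIES:
            score = ratings[competency]
            if not 1 <= score <= 5:
                raise ValueError(f"Rating for {competency!r} must be between 1 and 5.")
            print(f"{competency}: {score}/5")

    # Example use with invented scores:
    report_critique_ratings({
        "Demonstration of the evolution of the design": 3,
        "Transparent advocacy of intent": 3,
        "Clear connection between visuals and the spoken message": 2,
        "Credible staging of presentation": 2,
        "Interaction management": 3,
    })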

After all landscape architecture critiques were coded, they were checked for intercoder reliability to confirm that categories were clearly defined and the coding was appropriate. The second coder was given a brief description of the critique's purposes so that she would be able to make informed assessments of the critiques, and was asked to evaluate each critique using the given rubric. These checks, all of which yielded Cohen's kappa values within an acceptable range, provided evidence that the rubric could be applied consistently by multiple coders.
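For readers unfamiliar with this reliability statistic, the sketch below shows one common way to compute Cohen's kappa for two coders' ratings, here using scikit-learn. The rating vectors are invented, and the study's own computation may have been set up differently (for instance, a weighted kappa could be used to credit near-misses on the ordinal 1-5 scale).

    # A minimal sketch of an intercoder reliability check with Cohen's kappa,
    # using invented ratings (one 1-5 score per coded critique for each coder).
    from sklearn.metrics import cohen_kappa_score

    coder_1 = [3, 2, 3, 3, 2, 4, 5, 3, 2, 4]
    coder_2 = [3, 2, 3, 4, 2, 4, 5, 3, 3, 4]

    # Unweighted kappa treats the 1-5 ratings as nominal categories;
    # weights="linear" credits near-misses on the ordinal scale.
    kappa = cohen_kappa_score(coder_1, coder_2)
    weighted_kappa = cohen_kappa_score(coder_1, coder_2, weights="linear")

    print(f"Cohen's kappa: {kappa:.2f}")
    print(f"Linearly weighted kappa: {weighted_kappa:.2f}")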

To illustrate the importance and depth of situated rubrics, we turn to an example of how the design rubric was used to evaluate Bethany's presentation. In this example, we will argue for three benefits of the discipline-specific rubric over the generalized Competent Speaker Form published by the NCA: that it more closely parallels the structure and desired competencies of the critique; that it allows for a discussion of the important relational elements in design critiques; and that it provides a deeper understanding of design critique purposes.

Structure and Competencies.
A key component of students' critiques is their ability to explain their concept. Students need to be able to address the larger concept behind the design and the process by which that concept was realized. This competency is deemed important enough by design faculty that it was placed as the first competency on the form after discussion about the criteria. On this dimension, Bethany was evaluated at a level of 3: "Student identifies the concept, but in vague ways; explanations of how the student worked are clear and confusing at approximately equal rates." Bethany introduced her overarching concept (music) but still talked in general terms. Her process appeared largely in her description of how she organized the site. The ability to evaluate a student's concept is only minimally present in the Competent Speaker Form, being implied in criteria such as topic and thesis, supporting material, and introduction/conclusion.

Another key criterion for the design critique form is credibility. On credibility, Bethany scored a 2: "Student's physical and spoken performance generally detracts from the student's credibility with some small exceptions." Bethany's lack of vocal variety and uncertainty in speech (e.g., "um") reduced her credibility, as did her crossed arms. Bethany's presentation portrayed her as uncertain and unconfident, not like a designer who knows what she is doing. The closest parallels to this competency among the Competent Speaker criteria focus on language, use of body, and use of voice. Other than the prevalence of "um" in her speech patterns, Bethany did not misuse language, although students who used more design vocabulary in their presentations tended to appear more confident and credible; therefore, she was rated between satisfactory and unsatisfactory. Bethany did not speak with much vocal variety and had some wavering in her voice, but her volume and rate were within reasonable ranges for the room and audience size; this supported a rating of satisfactory, although her overall voice use (its lack of vocal variety) fit better with a judgment of unsatisfactory. Bethany's physical behaviors were distracting in the sense that she often had her arms crossed, but she did use physical behaviors to support her message (e.g., gesturing through the model). In the discipline-specific rubric, then, Bethany's behaviors distracted from her credibility, with some exceptions. These abilities are not captured in the Competent Speaker Form, which is unable to reflect the importance that design disciplines place on credibility through the explanation of key project elements, such as concepts.

Relational Elements.
The design critique is a genre full of relational considerations. This focus came through in the discipline-specific rubric's use of audience as a competency. Bethany fell in the middle of the audience competency: "Student shows approximately equal amounts of interest and disinterest in audience feedback and perspectives; reactions are approximately equally positive and negative." Bethany's presentation showed that she was open to comments from the critics, such as how to utilize the space. At the same time, however, her body language demonstrated hostility, and the conversation she pursued with critics focused mostly on surface details. These aspects of Bethany's critique are not accounted for in the Competent Speaker Form, which focuses on an extemporaneous style and eye contact. During the interactive feedback session that occurs after students' presentations, students need to be prepared to ask and answer questions, and to receive both positive and negative comments. These interactions are colored, at times, by relational issues (e.g., power, trust, facework) that students must learn to manage. Once again, the criteria on the Competent Speaker Form do not account for the audience interactions that are a foundational part of design critiques, although the form does at least acknowledge, in general terms, the importance of audience.

Critique Purposes.
Whereas the Competent Speaker Form focuses on summative assessment, the discipline-specific form accounts for the expectation of students' future participation in critiques. A key component of the discipline-specific form that highlights this advantage is the argument competency. On this criterion, Bethany was evaluated at a 3: "Student provides argument for how design choices address the site but with key aspects of the argument (e.g., key pieces of evidence or connections) missing or confusing." Bethany provided some arguments supporting her project, such as why she had moved a path to connect two parts of the site. She supported her larger concept, music, by talking about the sounds already on the site, such as the campus bell tower. However, she did not support all aspects of her presentation, and she stumbled when explaining decisions such as choices of plants. These aspects of her presentation are somewhat covered by the introduction/conclusion, topic, and body criteria on the Competent Speaker Form. However, those criteria do not provide a clear means for evaluating students' abilities to make the argument for their design choices. By dispersing this aspect of presentations into multiple criteria, the Competent Speaker Form downplays the importance of argumentation in design critiques.

Not surprisingly, visuals--and their use to support the design argument--also play an important role in the critique. For Bethany, visuals were particularly problematic, earning her a 2: "Student consistently does not make connections between oral content and visual material; connections that are made are disorderly." Of the five visuals Bethany brought into the critique (four drawings and one model), she used only the model without prompting. Even with prompting, she used only one drawing, and references to it were vague. Although visuals could be considered a form of "supporting material" as articulated in the Competent Speaker Form, the lack of visuals in the form's criteria shows its limited utility for evaluating design communication competencies.

While the pedagogy of public speaking is often reduced to simplified purposes (to inform, to persuade, to entertain), the critique's purpose is much more complicated. Students must persuade the audience about their concept (accomplished through argumentation), but more nuanced purposes exist, such as demonstrating credibility in the context of being socialized into the disciplines through the presentation and feedback. The critique also gives faculty an opportunity to provide feedback on students' visuals, which are the focus of much of students' effort. Compared to the Competent Speaker Form, the discipline-specific form outlined here--created specifically and intentionally within the disciplinary context of design--provides students with more robust feedback on their progress, feedback that is broad enough to serve formative assessment yet specific enough for students to use productively. Although it was beyond the scope of this study, it would be interesting to understand how students receive this feedback (e.g., whether they find it more robust or helpful). Increased information about students' responses to such rubrics could provide insight that would lead to further iterations of productive and useful rubrics.

Going Local: Toward a New Model of Evaluation and Assessment
As ongoing work at the University of Minnesota suggests, faculty within academic disciplines act on often tacit knowledge about what makes student writing successful in their courses and curricula. Extensive discussions and analyses of writing assignments, student texts, and faculty expectations draw that knowledge to the surface, where it can be deployed productively in enhanced pedagogy. Similarly, the case study of an oral communication genre in a specific course at North Carolina State University demonstrates the disciplinary nuances at work in judgments of student performance, rendering a broad, nationally publicized oral communication rubric unhelpful. Put simply, generic, all-purpose criteria for evaluating writing and oral communication fail to reflect the linguistic, rhetorical, relational, and contextual characteristics of specific kinds of writing or speaking that we find in higher education. This problem applies to discipline-specific genres and variations of them often found in upper-level courses within the major. It applies to highly creative, learning-based assignments that are the staple of the writing-to-learn movement--microthemes, scenarios and vignettes, provided-data papers, dialogue assignments, hybrid genres, specific kinds of brief presentations, conventional and electronic poster sessions, and novel uses of genres such as mock "company" obituaries in business courses or poetry in psychology courses (see Young, Connor-Greene, Waldvogel, & Paul, 2003). And, as clearly demonstrated in the work of the WEC program at the University of Minnesota, it applies even to relatively canonical academic genres that cross groups of courses, especially in general-education curricula.

Generic rubrics also blur the expectations for assignment genres constructed in situ, ignoring what we know about the relationship between discourse and instructional context. To the extent that they are distributed to instructors pre-packaged, such rubrics can drive the creation of assignments and communication experiences from the "outside in"--a process in which the tail of generic evaluation standards wags the dog of principled, context-specific pedagogy. Generic criteria, in other words, lead to the creation of stereotypical assignments that best match the generality of the criteria, reifying vague, institutional-level standards but misaligning pedagogy and assessment, and removing the need for teachers to engage in reflective practice based on what they are trying to accomplish (Schon, 1987). In their vagueness, they can also lead to mismatches between context-specific assignments and what students are told about how those assignments will be evaluated; the teacher instantiates the broad, high-level criteria with his or her own personal values, but students have no access to the particulars, leading to guesswork, frustration and, we believe, impeded learning.

In contrast, developing assignment- and context-specific criteria based on learning goals compels teachers to think about the relationship between what they are asking students to do and what they hope their students take away from their coursework in new knowledge, abilities, and understandings (Broad et al., 2009). Asking, "What discourse- and context-specific things do I want students to demonstrate they know or can do as a result of this assignment?" inevitably leads teachers to ask, "How can I support the acquisition of this knowledge or ability in my instruction or through the assignment itself?" As a result of being invested in this process, teachers spend more time helping their students to understand, internalize, and apply the criteria to their developing work (O'Neill, Moore, & Huot, 2009, pp. 83-86), a process that can effectively involve the students themselves in articulating "community-based" criteria for evolving projects (see Anson, Davis, & Vilhotti, 2011; Inoue, 2004).

When teachers work out performance criteria for their own specific assignments, they also reference the content that informs the assignments, which all-purpose rubrics, by their very nature, must ignore. Because they are generic, such rubrics focus on empty structures that are deliberately void of meaningful information--information communicated purposefully to an audience. Generic criteria divorce aspects of communication such as "voice," "organization," or "support" from the actual material the student is writing or speaking about. Perhaps no occasion for writing has received more publicity for this failure than the 25-minute essay-writing test included in the SAT test, in which students might write falsehoods such as "The American Revolution began in 1842" or "Anna Karenina, a play by the French author Joseph Conrad, was a very upbeat literary work" and still receive a high score (Winerip, 2005). This problem is even more dramatically demonstrated in critiques of "automated essay scoring," the evaluation of writing by machines (Ericsson & Haswell, 2006; Whithaus, 2005).

The value of localizing assessment criteria suggests several areas for continued reform. First, in our experience there is continued need for teachers across the disciplines to learn strategies for developing assignment- and course-specific criteria. As Broad (2003) and Wilson (2006) have thoroughly demonstrated, this process involves a self-conscious attempt to draw out often highly tacit expectations, or "values," associated with students' communication performances and make them explicit, then organize and express them in a way that makes them accessible and purposeful to teachers and students. Where we disagree with these scholars is in their distrust of all rubrics, even those created for highly specific tasks; Broad, for example, argues that teachers "cannot provide an adequate account of [their] rhetorical values just by sitting down and reflecting on them" (3), and Wilson believes that developing our own rubrics "doesn't change their reductive nature" (57). Based on our own extensive consultations with faculty in dozens of disciplines, we believe that with adequate faculty development, teachers can learn to articulate those linguistic, rhetorical, relational, and other features that differentiate successful from less successful performances and use those productively in their instruction, including rendering them in what we call a "shorthand" form--a rubric or set of criteria--designed as a template for deeper and more formative classroom work with the features that instantiate them (Anson, 2007).

This articulation, however, best takes place in a context where "values" are determined and expressed at several levels: specific tasks and assignments; courses; and programs or academic concentrations. In much of our faculty-development work in the area of evaluation and assessment, we have experienced the greatest success when specific assignments are designed to help students reach explicit course objectives and goals. In turn, the course itself is taught in a way that intentionally helps students to achieve clearly articulated, discipline-specific programmatic outcomes (Anson, 2006; Carter, 2003). Like Biggs's concept of constructive alignment, this process, at multiply articulated levels, involves what Wiggins and McTighe (2005) call "backward design": identify desired learning outcomes, decide how those outcomes will be manifested and recognized in students' work, and then structure activities, assignments, and other learning experiences so that students are able to produce work that provides the evidence of the desired achievement. This is a consciously intentional model that derives criteria for assessment from specific tasks grounded in particular disciplinary and other goals, replacing the more common, tacitly-driven model in which faculty provide information, give an assignment, and then try to figure out what they wanted from it. Across multiple levels, backward design has the potential to help achieve much greater coherence (while respecting variety and autonomy) among what are usually highly disconnected experiences and expectations both within and across courses.

As our cases suggest, advocating for localized, "bottom-up" assessment practices is not easy. Never before have educational institutions been more at pains to account for their performance, and they have responded by putting into place large-scale, institution-wide assessments that ask such questions as "How well is our general-education program preparing students to meet the challenges of the 21st century?" or "Now that we have a two-course writing-intensive requirement, how well are the 32,000 undergraduates at our institution writing?" Hundreds of colleges and universities devote major financial resources and significant amounts of administrative and faculty time trying to answer such questions without much input from faculty and students who occupy the spaces where the learning actually happens--in specific classrooms (see Anson, 2009). Instead of seeking an "efficient" process of testing large numbers of students using single, decontextualized measures and then evaluating the results with a generic rubric, university-level administrators would gain significantly more knowledge about their institution's successes and shortcomings by establishing a process of programmatic or departmental assessment that enlists the participation of faculty and even students in determining what counts as success and then assessing it (for representative models at the programmatic level, see North Carolina State University's process of undergraduate program review at http://www.ncsu.edu/assessment/upr.htm and the University of Minnesota's WEC program at http://www.wec.umn.edu/). Such processes reflect, at more specific departmental and disciplinary levels, a new theory of communication assessment based on the principles that it is site-based, locally controlled, context-sensitive, rhetorically-based, and accessible to all (Huot, 1996). At the very least, the conversations that take place within departments and programs (and, in the context of general education, across those departments and programs) are of immense value to educational enhancement and often become heuristically transformative.

At the course level, such localized practices require the time and intellectual energy of faculty to create clear criteria or expectations for every significant oral or written assignment that students complete. At the program level, they involve a collaborative effort to define learning outcomes that make sense within the specific disciplinary and professional context of a program or major, a process that has taken us, at the institutions we represent, years of work. Although it could be argued that generic criteria provide a starting point, offering language whose heuristic value compels faculty in the disciplines to think about general but sometimes unconsidered concepts such as "rhetorical purpose" or "style appropriate to the defined or invoked audience," there is too much risk that such criteria will "make do" for everything students produce. Working in the interstitial spaces between learning and assessing learning will never be easy or quick, but it is essential if we are to achieve those educational outcomes to which we and our students aspire.


1 The order of authorship is purely alphabetical.

Chris Anson is University Distinguished Professor and Director of the Campus Writing and Speaking Program at North Carolina State University, where he helps faculty in nine colleges to use writing and speaking in the service of students' learning and improved communication. A scholar of composition studies and writing across the curriculum, he has published 15 books and over 100 articles and book chapters, and has spoken widely across the U.S. and in 27 other countries. He is currently the Associate Chair of the Conference on College Composition and Communication and will chair the organization in 2013. More information can be found at http://www.ansonica.net

Dr. Deanna P. Dannels is Professor of Communication, Director of GTA Development in Communication, and Associate Director of the Campus Writing and Speaking Program at North Carolina State University. Dr. Dannels' current research explores theoretical and pedagogical frameworks for communication across the curriculum and protocols for designing, implementing, and assessing oral communication within the disciplines. She has published in areas of composition, teaching and learning, engineering education, business and technical communication, and professional identity construction. Her primary theoretical contributions include the "communication in the disciplines" and "relational genre knowledge" frameworks. Dr. Dannels is consistently sought after as a workshop facilitator and conference presenter, specifically for her focus on incorporating oral communication in technical disciplines such as science, math, design, and engineering.

Pamela Flash directs the Writing Across the Curriculum Program and Writing-Enriched Curriculum Project at the University of Minnesota, Twin Cities. Flash's funded research areas include disciplinary differences in writing and writing pedagogies, applied ethnography, writing assessment, and curricular change. Based on this research, she has developed models for engendering faculty-powered, data-driven curricular change within interdisciplinary and intradisciplinary curricula and is currently investigating the institutional portability of these models.

Dr. Amy L. Housley Gaffney is an assistant professor in the Department of Communication at the University of Kentucky, where she is also a faculty member in the Division of Instructional Communication. Her research is focused on the teaching and learning of oral communication as well as assessment, especially as it relates to communication content, attitudes, and skills. She has worked with faculty and students in disciplines such as landscape architecture, engineering, physics, and education to enhance students' communication abilities and content learning.

Correspondence can be sent to:
Chris Anson
207 Sedgemoor Drive
Cary, NC 27513
chris_anson@ncsu.edu
919-677-9454




References
Association of American Colleges and Universities. (2010). VALUE rubric: Written communication. Retrieved from http://www.aacu.org/value/rubrics/index_p.cfm

Anderson, P., Anson, C. M., Townsend, M., & Yancey, K. B. (forthcoming). Beyond composition: Creating a national outcomes statement for writing across the curriculum. In D. Roen, D. Holdstein, N. Behm, & G. Glau (Eds.), The WPA Outcomes Statement: A decade later. West Lafayette, IN: Parlor Press.

Anson, C. M. (2006). Assessing writing in cross-curricular programs: Determining the locus of activity. Assessing Writing, 11, 100-112.

Anson, C. M. (2007). Beyond formulas: Closing the gap between rigid rules and flexible strategies for student writing. In K. K. Jackson & S. Vavra (Eds.), Closing the gap (pp. 147-164). Charlotte: Information Age.

Anson, C. M. (2009). Assessment in action: A Möbius tale. In M. Hundleby & J. Allen (Eds.), Assessment in technical and professional communication (pp. 3-15). Amityville, NY: Baywood.

Anson, C. M., & Dannels, D. (2004). Writing and speaking in conditional rhetorical space. In E. Nagelhout & C. Rutz (Eds.), Classroom space(s) and writing instruction (pp. 55-70). Cresskill, NJ: Hampton Press.

Anson, C. M., & Forsberg, L. L. (1990). Moving beyond the academic community: Transitional stages in professional writing. Written Communication, 7, 200-231.

Anson, C. M., Davis, M., & Vilhotti, D. (2011). "What do we want in this paper?": Generating criteria collectively. In J. Harris, J. Miles, & C. Paine (Eds.), Teaching with student texts (pp. 35-45). Logan, UT: Utah State University Press.

Bazerman, C. (1988). Shaping written knowledge: The genre and activity of the experimental article in science. Madison: University of Wisconsin Press.

Beaufort, A. (2007). College writing and beyond: A new framework for university writing instruction. Logan, UT: Utah State University Press.

Biggs, J. B. (2003). Teaching for quality learning at university (2nd ed.). Buckingham: Open University Press.

Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Broad, B., Adler-Kassner, L., Alford, B., Detweiler, J., Estrem, H., Harrington, S., McBride, M., Stalions, E., & Weeden, S. (2009). Organic writing assessment: Dynamic criteria mapping in action. Logan, UT: Utah State University Press.

Carter, M. (2003). A process for establishing outcomes-based assessment plans for writing and speaking in the disciplines. Language and Learning Across the Disciplines, 6(1), 4-29. Retrieved from http://wac.colostate.edu/atd/archives.cfm?showatdarchives=llad

Carter, M. (2007). Ways of knowing, doing, and writing in the disciplines. College Composition and Communication, 58(3), 385-418.

Ericsson, P. F., & Haswell, R. (Eds.). (2006). Machine scoring of student essays: Truth and consequences. Logan, UT: Utah State University Press.

Fahnestock, J. (1999). Rhetorical figures in science. New York: Oxford University Press.

Huot, B. (1996). Toward a new theory of writing assessment. College Composition and Communication, 47, 549-566.

Inoue, A. (2004). Community-based assessment pedagogy. Assessing Writing, 9(3), 208-238.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press.

MacDonald, S. P. (1994). Professional academic writing in the humanities and social sciences. Carbondale: Southern Illinois University Press.

Miller, C., & Selzer, J. (1985). Special topics of argument in engineering reports. In L. Odell & D. Goswami (Eds.), Writing in nonacademic settings (pp. 309-341). New York: Guilford Press.

Odell, L., Goswami, D., & Quick, D. (1983). Writing outside the English composition class: Implications for teaching and for learning. In R. W. Bailey & R. M. Fosheim (Eds.), Literacy for life: The demand for reading and writing (pp. 175-194). New York: Modern Language Association.

O'Neill, P., Moore, C., & Huot, B. (2009). A guide to college writing assessment. Logan, UT: Utah State University Press.

Russell, D. (1995). Activity theory and its implications for writing instruction. In J. Petraglia (Ed.), Reconceiving writing, rethinking writing instruction (pp. 51-78). Mahwah, NJ: Lawrence Erlbaum.

Schön, D. (1987). Educating the reflective practitioner. San Francisco: Jossey-Bass.

Whithaus, C. (2005). Teaching and evaluating writing in the age of computers and high-stakes testing. Mahwah, NJ: Lawrence Erlbaum.

Wiggins, G. P., & McTighe, J. (2005). Understanding by design (2nd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.

Winerip, M. (2005, May 4). SAT essay test rewards length and ignores errors. New York Times. Retrieved from http://www.nytimes.com/2005/05/04/education/04education.html

Young, A., Connor-Greene, P., Waldvogel, J., & Paul, C. (2003). Poetry across the curriculum: Four disciplinary perspectives. Language and Learning Across the Disciplines, 6(2), 14-44. Retrieved from http://wac.colostate.edu/atd/archives.cfm?showatdarchives=llad#6.2



Appendix: Landscape Architecture Rubric

Concept
5: Student identifies the overall concept of the design and describes his/her work on the project in an organized flow that addresses both the beginning and end.
4: Identifies concept but with some vagueness; addresses the beginning and end of the work; explanations of how the student worked are generally clear, with minor points of confusion.
3: Identifies the concept, but in vague ways; explanations of how the student worked are clear and confusing at approximately equal rates.
2: Does not address concept in obvious ways; addresses the beginning or end of work in vague or confusing ways; organization and connections are difficult to follow.
1: Does not address concept; presents no understandable sense of design process.

Credibility
5: Student's physical and spoken performance consistently enhances his/her credibility.
4: Credibility supported by physical and spoken performance with some small exceptions.
3: Physical and spoken performance both supports and detracts from credibility at approximately equal rates.
2: Physical and spoken performance detracts from credibility with some small exceptions.
1: Student's physical and spoken performance consistently detracts from credibility.

Argument
Note: A student may present a convincing argument for the specific choices made even with an unclear concept.
5: Student provides convincing argument(s) for how his/her design choices address the given site and constraints with evidence to support the argument.
4: Student provides overall solid argument(s) with evidence for how design choices address the site but with minor aspects of the argument missing or confusing.
3: Student provides argument for how design choices address the site but with key aspects of the argument (e.g., key pieces of evidence or connections) missing or confusing.
2: Student provides little, vague, or incomplete support for how choices address the site.
1: Student provides no convincing support for how design choices address the site.

Visual
5: Oral content is coordinated with visual material in a logical manner.
4: Talks about most of the displayed visual material in a logical fashion with a few notable exceptions (e.g., needs to be prompted to explain some of the visuals).
3: Often matches oral and visual content but connections are inconsistent and/or disorderly.
2: Student consistently fails to make connections between oral content and visual material; the connections that are made are disorderly.
1: Visual material and oral content are not matched.

Audience
5: Student demonstrates that he/she values feedback and embraces alternative perspectives through overall positive (e.g., nodding, answering questions) reactions to audience.
4: Student shows some interest (by way of reactions) in audience feedback and considers alternative perspectives but at times either does not react or demonstrates slight negativity.
3: Student shows approximately equal amounts of interest and disinterest in audience feedback and perspectives; reactions are approximately equally positive and negative.
2: Student shows little interest in audience feedback and perspectives and shows hesitance/resistance to suggestions.
1: Student shows no interest in audience feedback or alternative perspectives.