Richard Braddock, Richard Lloyd-Jones, and Lowell Schoer set out to conduct scholarly research in composition, rectify some mistakes of their predecessors, and make a "genuine contribution to knowledge." They seemed to achieve success on all counts, taking neither the painfully scientific approach of Francis Christensen (a.k.a. "paragraph dude") nor the rather artsy-fartsy approach of Ken Macrorie. Although my concern is with pedagogy rather than research, many of this article's sound research principles carry over into sound teaching practices, especially in assessment.
The authors begin their discussion of research by talking about "the writer variable" and the need to distinguish between "writing ability" and "writing performance," a principle that has tremendous implications for assessment. Six years in the trenches has taught me just how wide the gap between ability and performance can be. When I evaluate a student's work, there are no guarantees that I am looking at a true indicator of her ability. The "Research" authors wrote about the multitude of variables that can affect students' work: everything from noisy lawn mowers to illness to personal issues. It is true that environmental and psychological factors can heavily influence how well a student writes. However, teachers can narrow the ability/performance gap and provide the best possible conditions under which students can succeed. Although I cannot control all variables, I can maintain a connection with students so that I know what personal issues might influence their grades (one reason why it's so important to me to teach at a smaller school). As the authors discuss under the "examination situation" variable, I can monitor the classroom environment and be aware of potential distractors, from lighting to the temperature of the room to how I organize my seating chart.
The scariest part of this selection was the rater variable--"the tendency of a rater to vary in his own standards of evaluation" (200). Specifically, the authors referred to two factors that most strongly affect outcomes: "personal feelings" and "rater fatigue" (200). Grading English papers is maddeningly subjective, despite the most detailed rubrics. My personal feelings must be put aside as I simultaneously try to grade content, organization, voice, word choice, sentence fluency, and grammar. It doesn't matter how I feel when I'm grading a multiple choice test. I could be about ready to take a baseball bat to my china cabinet and smash up 12 place settings of Noritake; it wouldn't affect the student's grade. But if I start to feel cranky grading papers, there's no hunkering down and powering through until I reach the end of the stack. I have to step away for a while, at least until sanity's circulation is restored to my cramped brain.
The bottom line is awareness. Part of good teaching is being aware of the variables that affect both teachers and students and adapting in response to those variables. In comp teaching, as in comp research, we need to leave behind the alchemic dreams, prejudices, and makeshift operations and conduct the work of our respective disciplines with "strength and depth" (197).

You've brought up some great points about writer variability, ability, and performance. Grading papers is unquestionably the most difficult part of my job. As objective as I try to be, subjectivity enters my grading. How can it not? Because I also teach at a small school, I know my students. I frequently know what is going on at home and at school. If students are having a bad day, their writing (or lack of it) reflects what is going on in their lives. I agree, too, that grading issues are frightening. Our English department members sometimes cross-grade papers, and we find that our grading is similar but seldom exactly the same. That's a little scary, too, but I suspect the subjective nature of writing lends itself to the inexact art of grading.
With the variables listed in this chapter, it is a wonder that any trustworthy research comes out of this field at all. One of the things that I responded to most was how difficult it would be to manage these variables in a classroom setting. Keeping track of how outside influences may affect students, how students will react to an assignment, and how teachers will react to a specific student is a tall order. I have often wondered how scientifically based research is possible when you have to take into account subjective variables for both the experimenter and the subject. Writing is so much about perspectives that this kind of research seems like a nearly impossible task. That is not to say that it should not be attempted. Given the nature of the field, the data is open to interpretation in a way that would be impossible in a hard science, which is one of my favorite aspects of English as a whole.
I found the number of variables daunting as well. It is true that students' composition grades do not always indicate the true extent of their writing ability. I also agree that writing is one of the hardest things to grade. I have always felt that the content of a paper is the most important element, but content can be difficult to grade. What one person may admire in content another might disdain. Many writing programs have developed rubrics to aid in grading writing, but even the rubrics can be interpreted differently. So is it ever really possible to be truly objective in the grading of writing?
I'm glad you addressed some aspects of the "rater variable" here because I can see where that could be a serious problem. I think I am learning as much from my classmates as I am from the reading in this class. I would expect grading papers to be very subjective simply because of the form/content issue; raters could easily be sucked into focusing on one over the other if they are not careful.