The first major papers are done and evaluated, and now some thoughts on my College’s ability-based approach. A quick internet search will turn up loads of links on the subject.
While “ability-based” may sound like a buzzword, this method of teaching and learning involves defining a set of “abilities” in descriptive form with the student as subject, as in: the student writes articulate arguments with increasingly sophisticated claims using authoritative, documented evidence and appeals. Our department has broken such language into different degrees of ability and demonstration in the form of standards of evaluation, often called a rubric, though that term is somewhat inaccurate. I prefer “standards” or “degrees of evaluation.”
The notion is that people can learn to drive forklifts. Some drivers, however, are the “go-tos.” Others keep backing into the walls. Others do just fine. There’s no real reason to bother with “why” questions or with the typical judgment that standardized tests provide. This method involves standards but resists standardization, as the concepts are broad and learnable.
Long ago, having grown tired of justifying grades, I would develop fairly complicated explanations for them: A meant this and C meant that. In our current model we represent each degree of ability with a number (humans, apparently, are doomed to hierarchies). Doing this assists in understanding that the number or the grade doesn’t matter. What matters is the meaning of the thing. It’s entirely possible to provide a set of explanations on a student paper that illustrate the degree to which they are writing “articulate arguments,” or that provide information about how to improve their method of evaluating a source for bias or motive. In a poetry course, we can move the language toward the requirements of the particular discipline or adhere to a general definition of “problem-solving” as poets do it. Practical politics, however, gets in the way. Students express themselves differently when they say “I got an A, what’d you get?” versus “I got a ‘can assert a conclusion that doesn’t rely upon belief,’ what’d you get?”
Students will often tell me they want to know what their grade is, and that’s all they want to know. They use code for this; they say, “I want to know how I’m doing.” I might say, “Well, you need more aggressive analysis, and stop using hard-core partisans as experts.” “Yeah, but how do I get an A?” is the typical coded response, even though the response I gave is the answer. I say: just work on improving analysis, or find better sources with which to practice. It can get heated, because the modern student isn’t typically acclimated to academic or professional material, communication norms, workload, and subject matter. It’s not something one can just explain.
I’ve been using this method for going on nine years. I started in English Literature courses, providing students explanations and means for improvement on their work rather than grades, and boy did I get hell for it from students. The ire I’ve received in response has always been difficult to deal with, but no more difficult necessarily than the responses I used to get to grades. “Why a C? I need an A to keep my GPA or I won’t get into my program.” Or, “I’ve never gotten a D in my life! You’re the worst fucking teacher ever.” From there, the conversations would go haywire.
This semester has proven interesting in the evolution of this system of evaluating, as I reviewed some of the best papers I’ve ever read at midterm. More than half of the students nailed the assignment. Student work in an ability-based model theoretically provides a narrative of learning. Students should begin early unable to demonstrate satisfactory work but, after practice, writing, and reading, should improve. Why? Because early work involves foundational stuff like summary writing, research basics, short analyses, and comparison work, and then the student can move to making a claim or taking a position. Those who stick with the approach typically do improve. Those who want top scores early and won’t take the time to understand where they might improve typically drop (this is merely a hypothesis). Some students think this makes me a shitty teacher who explains nothing and doesn’t give a crap about their needs or wants. (I had a student recently whistle with disbelief at what students have said on the “professor rating” web site; I don’t dare look myself.) Other students grin, bear it, and make out fine in the end. Statistically my success rate is pretty good (I often grab stats on how students do after they move on), and students who come back by the office claim that the torture paid off. For teachers, anecdotal evidence can be instructive. We deal with people as people and need to know what they do with what they learn.
This last round of good work, some of it excellent, is good and excellent because it demonstrates that the students are learning into the abilities. Some students are still guessing about the difference between an argument and a statement of fact. Here’s an example of guessing: “So ‘n’ so argues that Romney or Obama said that if elected everyone will get healthcare.” In the ability-based model, guessing amounts to a boolean expression: either the student can discriminate between these concepts or can’t yet, and people can learn to. When students identified and evaluated evidence in relation to an argument, they got it right, maybe not expressed as well as Keats would, but well enough to show that they can do the job.
This doesn’t, however, validate the pedagogy I employ. Too many variables get in the way of that. What the student success does tell me is that they are learning, and they’re learning beyond my assistance. It’s often important to avoid pedagogy-validity arguments, as in some cases courses might simply get lucky with a whole bunch of stars, or struggle with a whole bunch of people who needed more preparation or lots of assistance.
Some of the methods are risky. Firstly, I don’t read and comment on drafts anymore. I don’t ask students to provide drafts that I then give back with comments, as my own teachers did and as I once perpetrated. This is not a methodological crime. Past experience has taught me that this method encourages poor study and editing habits, especially in raw freshmen, who need more help with study habits than with anything having to do with good writing. Instead, I ask students to read other students’ drafts and edit against the abilities in typical peer review sessions. How students edit their peers tells me a lot about their own habits of reading and their resilience in the face of problems. I ask students to provide me with their edited copy for kicks.
This is risky, as final papers may indeed show a great deal of missed opportunity or lack of learning in comparison to the more polished work that teachers traditionally pore over in prep for final copy. When it works, the amount of learning a student shows is apparent in comparison to past work. A writer notices the difference between the past and present if they make decisions on their own. This gives me more dramatic information about what I need to do in the classroom. If the majority of students are still having issues with paragraph divisions and transitions, then I can see that in unadulterated copy, and I can work with this issue more in class. In addition, drafts heavily edited by teachers may produce more polished final drafts. This, however, may not assist students when they’re asked to write for later courses where assistance from the professor is no longer provided.
Secondly, I do a hell of a lot of modeling, which is where a screen and word processor really come in handy in a writing course. Using the computer, I can build a set of paragraphs and show students what synthesis and analysis look like on the fly. They see and hear my thought process; they see how I correct spelling; they see how I clean up a cut-and-paste job from an online article with embedded links or superscripting. We discuss the process a lot. We throw an article up on the screen and talk about why a writer fell down on the job, either leaping to a conclusion or providing an irrelevant example to support an otherwise perfectly reasonable argument. Then the students are expected to go out and read, practice, and study the notes they generated in discussion, in modeling, and in draft revision, as I will typically grab a student draft and take it apart for all to see (of course, only if agreed upon by the poor student under glass) and then put it back together using the concepts we’re trying to learn: elements of persuasive writing, paragraphing, analysis, quoting and reference, and idea development.
From a teacher’s perspective, observing a range of student performance is a good thing. This range provides a framework for evaluating the story of learning in a particular course. For several years I’ve been struggling with low performance, low preparation, and heavy drop rates. I don’t see an end to this trend. The story of performance is sometimes encouraging, sometimes not so encouraging, but it’s valuable nonetheless in instructing the instructor.