I’m taking students through the WordPress back end. It’s fun. They like it. At least those students who are awake.
This is an interesting project, building a college from the ground up.
Christine Ortiz is taking a leave from her prestigious post as a professor and dean at the Massachusetts Institute of Technology to start a radical, new nonprofit university that she says will have no majors, no lectures, and no classrooms.
I don’t know why I bristle at articles like Steven Hayward’s in The New Criterion. It’s called “Conservatives and Higher Ed.” Maybe I just don’t see or understand as he sees and understands, and that might be my problem. He makes this comment in reference to Max Weber and some form of academic gamble:
Now it’s no longer just a steep hill—more like a rock climb without ropes. Max Weber said over a hundred years ago that “Academic life is an utter gamble.” The odds are getting steadily worse, and if you’re a rational person calculating the odds, you may shy away from a Ph.D. track, or consider non-academic paths as more attractive than academic paths. This probably describes conservatives more than liberals.
What Weber was making reference to was the tenuous position that academics have in attracting students to their courses. They might be fantastic scholars but horrible teachers, and this was a real issue. Hayward would seem to imply, also, that one rank is rational and the other isn’t. But this is small beef.
My bigger question throughout the piece goes to definitions. Hayward writes:
On the surface you’d think that the pool of conservative students who express satisfaction with higher education would lead more of them toward graduate paths, except for their evident alienation from the liberal dominance of the humanities and social sciences, perhaps along with a perceived higher salience for conservatives on pursuing “practical” professional vocations.
I don’t think it’s interesting to frame liberals and conservatives on a scale of “practicals.”
The larger implication in these kinds of articles is that Academia excludes and that college teaching just isn’t attractive to Conservatives because they either want to make real money or feel alienated or there is some sort of systematic bias against their hire in the Humanities. I think the matter is irrelevant to the core mission of the college.
First of all, how does one read Dickinson? The reader reads the poem. If the reader or scholar is Liberal or Conservative or has two heads, the reader must read the poem, unless the poet is banned for being some sort of radical to establishment ideology. Interlocutors can go from there. Does a political persuasion matter? Maybe, but at least we have the poem to work with. Reading or studying poetry may be implicated as a “narrow” pursuit rather than as grand generalist’s concern for breadth. Hayward’s call to Weaver is just odd. There are plenty of poetry readers who see the larger culture at play. Why Ideas Have Consequences became a Conservative “slogan” is beyond me. He quotes this from Weaver:
By far the most significant phase of the theory of the gentleman is its distrust of specialization. It is an ancient belief, going back to classical antiquity, that specialization of any kind is illiberal in a freeman. A man willing to bury himself in the details of some small endeavor has been considered lost to these larger considerations which must occupy the mind of a ruler.
Maybe this made sense in 1949, when specialists were studying atoms rather than attending to some requirement of becoming a ruler of something. The larger point matters, sure: we shouldn’t get so caught up in one thing such that the future is shut out and we forget where we live. But this has very little to do, it seems to me, with who’s the liberal or conservative in the room but with the kinds of questions that might be asked: is a science-focused charter school a good idea, or is a school that treats all subjects in depth the way to go? Artists require focus and serious study, however, and we shouldn’t confuse intense concentration with “narrowness.” Programming is difficult. It takes a lot of study. As does poetry. The person who takes up the guitar will find this out fast.
We just hired a new faculty member in our Humanities department. “We want more liberals around here” never came up as a question.
Over the years my attitudes about managing classroom activity have changed. It’s a long story. It begins with my own college experience being read to by the professor, or even further back, being told that thinking on my own would get me into trouble in grade school. I hated school. But I loved graduate school. I thought (which was probably a mistake): why not take the things I liked and make them work at the undergraduate level?
The thing I liked about undergraduate and graduate learning was that, for the most part, I could make my own decisions: I could drink beer instead of going to class; I could go to class and drink beer; it was up to me. To me, compulsory is a dirty word, and my fingers still smell of the iron bars of grade school. Yes, college: I could do it or not do it and take the consequences. I remember a conversation with a professor. I said, “I have to do this reading.” He stabbed me with his reading-shrunken eyeballs and said, “You don’t have to do shit.” In addition, the lively use of technology by many of my professors was an inspiring mix of theory, application, and invention. The good professors would think a lot about why something might work and then try it, even if it failed. Then they would try something else. They asked questions like: how can we make big classes feel smaller? How can we take the advantages of residential colleges and mimic them with tech?
Recently (by recent I mean the last ten years or so), I’ve altered my strategies to include more emphasis on competency-based evaluation and instruction, on generic assessments, and on placing more of the burden of learning on the people in my courses. By competency-based I mean telling students that they’re not after a grade on a paper but aiming to improve thinking and skills through written revision and hard work. By generic assessment I mean going from something like this:
Read this specific article and evaluate the author’s use of evidence
to something like this:

Evaluate an author’s use of evidence in support of an argument. Find the author on your own.
Much of the above has to do with the fact that I like to change readings a lot and I don’t want to have to rewrite every assessment I provide to students.
By placing more of the burden on students, I mean to remove what I see as artificial or unassessable quantities in the regular movements of the semester. What’s the proper punishment for missing a deadline, I ask myself: grade diminishment or loss of the opportunity to learn something? Recall the above conversation with my professor: he meant, “It’s up to you, Bub.”
I still have deadlines, but I tell people that if they miss a paper, what they miss is the opportunity for assessment. This presents a lot of risk, risk I’ve been willing to live with. For example, years ago I stopped reading student drafts because I found it difficult to avoid what might be called robotic or automated revision. That story goes like this: Cut this, this, and this comma and here’s a little about why, and develop the idea in this paragraph with more evidence. The commas would go, simply to reappear elsewhere and in the same context, and people would simply not do the development, responding with the common, “I didn’t know what you meant.” The whole business started to feel oddly enabling. I asked: does teacher editing lead to deep learning?
The typical semester now goes like this: students revise their own copy based on discussion and concepts worked on in class. I expect students in the research course to find copious amounts of information on topics and to study it against some fairly formulaic questions (what I call the argument framework): what’s the problem; what’s the position; what are the arguments; what’s the evidence; what are the appeals; and is it all done effectively or ineffectively by the author or authors and why? What’s your take? Students hand in their respective papers, I evaluate them and provide general ideas about improvement and expect students to revise, applying what they’ve learned. The results are still pretty raw, but those results reflect writing only the student has touched. They own them.
The general competencies are: identification, description, and evaluation/analysis.
Hypothetically, it all sounds well and good. But in the last few years, students have taken the option of not turning things in for evaluation and waiting until the end of the semester to make their case, as the majority of the end-of-semester grade comes from final portfolios, which are meant to show the results of assessment and revision. Most of the time this makes for strange papers that show almost no improvement because very little opportunity for improvement was made available. They’re supposed to own it all.
Consider this scenario. Student A stumbles to class most days but forgets to wake up in time for the first Chemistry exam. The teacher notes that the student failed to take the exam, hence marking a zero in the grade book. Let’s say this happens throughout the semester, grossing the student a zero in Chemistry. The teacher’s puzzled because attendance was perfect, with the exception of exam days. What’s the accurate conclusion: the student failed to demonstrate any knowledge of the subject even though they attended every session and appeared to take notes? I could give this story the most positive of outcomes: the student weeps about the goose egg but invents a new cure for disease in their basement.
Writing courses are similar. A student may participate in the day to day and then fail to turn in a paper, or not participate in the day to day and turn in nothing, or play the truant, turn in all their stuff at the end, and win the golden apple. In the first two scenarios, what they’ve failed to do is demonstrate what they’ve learned (maybe they didn’t show and neglected their papers because they were working on a novel). In a writing course the main method for providing proof of learning is the much-loved academic, MLA-styled paper, the revised paper, and then a final proof. In a competency push, I want to be able to compare the first to the final, where evidence of learning shines through. Problem is: students are not providing me the drafts.
Time to rethink my approach.
Well, I worry about a lot of things. I’m a personality that worries.
It would appear that nationally the causes of higher education, one of which is to produce independent, thoughtful citizens (real rabble-rousers, you might call them), are being crushed by political interests. Most people have read about student debt and the costs of “choosing” to invest in an institution after high school. But the investment is lopsided with national and state government transferring costs to “the people.” We know that one person’s debt is another person’s profit.
There are a number of big sectors in Higher Ed: public colleges and universities, privates, and for-profits, and somewhere beneath these, trade schools stick their nose out from under the bed. What an interesting story this has been since the financial turmoil of the ’70s, late ’90s, and 2008. It’s a complicated story. Suffice it to say, most public institutions and families are increasingly going it alone, wielding their pea shooters in the woods. (I’m still waiting for the verdict on the Bayh-Dole Act.)
I live on metaphors. They help to boil things down to their approximate essence. So, I imagine I’m a local politician in Connecticut driving the winter streets. What I see are humps, cracks, and holes in the gritty pitch from this long cold season and its mysterious substances meant to melt the ice and corrode brake lines. Someone’s going to have to pay for the repair, and I’ll be waiting for the complaints. It’s a life phenomenon: in your 40s, 50s, and more, you’ll complain about paying for stuff you couldn’t imagine paying for in your 20s. Maybe our new robot kitchens in the future will bust holes in the wallboard by opening the cabinet doors with too much force. Could happen. Or the metallurgical requirements of my coming bionic fingers will hammer the final coffin nail into some rare frog “somewhere south of not here.”
But I speculate this: we need an all-out return to publicly funded higher ed and the material that holds us all up. And that means solving the inequality equations. Maybe my students will start marching on their own behalf.
Then there are a hundred other things to worry about.
In the “wrapper class” of the WordPress.org footer element someone wrote this: <h6>Code is Poetry</h6>. If the reader asks the browser “what is h6 in html,” he or she might stumble on the w3schools explanation, which goes: “<h1> defines the most important heading. <h6> defines the least important heading.” In the study of literature and language or rhetoric, we call this relationship irony. In this case, I would argue that the subtlety of the footer claim (Code is Poetry is a Fact Claim, but it’s also a Metaphor; it doesn’t say Code is kinda sorta like Poetry, which would be a Simile) is an example of visual irony in relation to h1 and h6. Because while h6 is supposed to be a kinda sorta lesser header, on the WordPress website, if one bothers to scroll down to the footer, that “Code is Poetry” argument is pretty prominent. So, it must mean something.
I would argue, however, that WordPress in the future should reconsider the Metaphor, as in a few years the casual scroller of its website might not know what it means or implies. It implies that someone thought about it. Someone was at a meeting and suddenly had a thought and said, “Hey, you know, we ought’a put Code is Poetry in the footer, man.” The boss, maybe Matt Mullenweg, responded, “That’s a fantastic idea. I know exactly what that metaphor’s all about.” Mullenweg was a PoliSci major in school. He also played the sax in high school. He also likes music, you know, all that useless stuff. I read all this at the Wikipedia page on Mullenweg. I made up the quotes.
Before the reader wonders at all this nonsense, the above paragraphs were prompted by Faith Middleton’s interview and talk with Gina Barreca regarding the latter’s article in the Hartford Courant titled “Humanities Are at the Heart of a Real Education.” As an aside, the title of the article is enclosed in h1 tags. The heart of Barreca’s piece goes to the current and all the past battles over what higher education in the United States should be doing and, in addition, what constitutes an educated person. Hard or soft, Chemistry or Poetry, Math or English. Employment, unemployment. I hinted at this in an earlier post on the issue of programmers. The title of that one goes: “Do We Need More Coders?” My answer is an emphatic No and Yes.
Here’s another way of putting the problem. Are Universities and Colleges places where people should be trained or are they places where people should be educated? Barreca writes:
Administrators who market (their verb, not mine) education as a passport to success instead of defining it as pathway to knowledge are, essentially, advocating for the training of workers rather than for the education of citizens.
There are three terms that need defining here: “education,” “knowledge,” and “training.”
In a recent class I provided this metaphor to the students: You have a factory. You’re provided materials sufficient only to manufacture a Pinto. So you make a Pinto. Out comes the Pinto. The Board of Regents of the State of Connecticut observes this and says, with astonished dismay, “Where’s the Cadillac?”
It’s not the most precise metaphor. Students are not Pintos. And I don’t like relating schooling to factories. That’s not the point.
Some of the students in class were Poets enough to grasp the figures of speech here. Most, however, had no idea what “Pinto” meant.
Writ large, the Humanities is about significant stories. The story of women, the story of men, the story of horrors, the stories of success. What we did; what we didn’t do; what wasn’t said; what was known and unknown. It’s about the things we do to ourselves and why. Stories can be lost and forgotten.
To those students who hadn’t a clue about the Pinto, I told the story. They chewed on it for a while. They learned a little bit about the power of metaphor and that maybe paying for expectation might make sense in the long run.
For some reason I find this Fast Company article on Sebastian Thrun fascinating.
Here’s where I got really excited, regarding Thrun’s Stats 101 course and the relationship between the quality of the course and whether or not it would be successful:
Only it wasn’t: For all of his efforts, Statistics 101 students were not any more engaged than any of Udacity’s other students. “Nothing we had done had changed the drop-off curve,” Thrun acknowledges.
Here’s some context for the above quote that has nothing to do with online education, Udacity, or Smartboards. The good teachers I know mostly consider themselves failures. A particular semester will end and the dejected class of faculty will go back to the drawing board, rehearsing their future plays and adding to the perennial checklist of things to alter for next time. By the beginning of the semester, the syllabus had already been newly minted with additional directions. Other content was added to stave off that unforeseen and persistent, naggly question. “It’s right there on the syllabus,” a teacher will say. “I’ll clarify further.” Done, as summer work. The links were refreshed. The Calendar was shined to perfection. And so the semester ends with half the students gone and pretty much the same ratio of grades puncturing the brains of the bewildered.
I had a conversation just the other day with a seasoned Psychology prof ready to go at the online course with a mouse pointer sharpened by “student success foreshadowing.” She paused. She said, “Yeah, we do this every semester.” But still, that video showing students how to find the directions for the assignment could always be made a little clearer.
Teachers worry a lot about students, learning, assessment, and curriculum. But they also know that revisions come with unforeseen consequences. This is something that novice faculty learn over time. We will always seek better learning and better clarity. That’s the nature of the ecosystem. Every course will tell a story and some courses can themselves be a story. Maybe the final exam is the climax. First we’ll do this, then this, then that, and by the time we get to Oedipus the student will have this, that, and the other thing to work with for improved analysis and interpretation of our despairing protagonist.
I pretty much have the curriculum nailed for my Comp II course. But it still doesn’t work right. There’s still a part of the story that’s missing. I’ll hunt it down next break and rewrite the syllabus.
But in all seriousness, the theme that appears to be missing in the story of Professor Thrun, at least as far as FC tells it, is that “students” are human beings. Human beings experience the world in the private space of their minds. Most of the time, I don’t know what my students know, and I’m just as much a solipsism to them as they are to me. Most of the time motivation, technique, expertise, and the relationship between effort and evidence are a mystery. There’s that old trick of the greenhorn writer who scribes a query thusly: “This is the best damned story ever” and so on. Here’s a hypothetical: we’ve had lots of geniuses over time who have walked the planet, shod and unshod. We could hire this superteam to construct the “killer app” of online or on-ground courses. The result will be the same, and this is where statistics get us into trouble. The students who grasp and demonstrate will grasp and demonstrate. Those who do not grasp and demonstrate, or, more importantly, do not demonstrate and either grasp or don’t grasp will grasp and demonstrate OR not. (Hm, that was tough to formulate.)
In my view, statistics are problematic in determining the success or failure of a college course, whether it smells of chalk dust or is warmed by binary code. Chafkin quotes Thrun here in regard to the “painful moment”:
As Thrun was being praised by Friedman, and pretty much everyone else, for having attracted a stunning number of students–1.6 million to date–he was obsessing over a data point that was rarely mentioned in the breathless accounts about the power of new forms of free online education: the shockingly low number of students who actually finish the classes, which is fewer than 10%. Not all of those people received a passing grade, either, meaning that for every 100 pupils who enrolled in a free course, something like five actually learned the topic. If this was an education revolution, it was a disturbingly uneven one.
“We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don’t educate people as others wished, or as I wished. We have a lousy product,” Thrun tells me. “It was a painful moment.”
The arithmetic in my head tells me that 10% of 1.6 million is 160,000. Additional math, taking Thrun’s “something like five” per 100, leads to this after the equals sign: 80,000. This means roughly 80,000 people passed whatever courses are a part of the smorgasbord. Is this a problem, given that out of all the courses unmentioned in the above quote a million and change people did not eat their vegetables? We don’t know the reasons. We can’t know the reasons. Degrees of interest, access, modes, prerequisites, time, ability, attention, disagreement with technique? I would submit that this has little to do with “lousy” products and more to do with being human. The system I work with to do online ed is, in my estimation, not that great of a product. It’s not a fantastic communication tool, which is what a decent system ought to do best. At heart, any learning system is about getting ideas across and getting ideas back in a context that makes sense. Classrooms simulate that most ancient and persistent of situations: a group gathering to share ideas and maybe learn something in the process. The key here is “maybe.” Then again, why is “coworking” space all the rage these days? Because it’s pretty basic human stuff.
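Since Code is supposedly Poetry, the back-of-the-envelope arithmetic can be checked in a few lines. The 1.6 million, the “fewer than 10%,” and the roughly five-per-100 figures come straight from the quoted article; the rest is simple multiplication:

```python
# Back-of-the-envelope check of the completion numbers in the Thrun quote:
# 1.6 million enrollments, fewer than 10% finishing a course, and
# "something like five" per 100 actually learning the topic.
enrolled = 1_600_000
finishers = int(enrolled * 0.10)  # the "fewer than 10%" who finish
passers = int(enrolled * 0.05)    # the ~5 per 100 who learn the topic

print(finishers)  # 160000
print(passers)    # 80000
```

Upper bounds, of course, since “fewer than” and “something like” are doing real work in the original sentence.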
Here’s a further at heart: a) people cannot be forced to learn (or watch YouTube videos) and b) institutions cannot guarantee learning (no matter the quality of TED talks). See a). That’s why accountability in education will always lead to comedy sketches. And there’s more to doing it than just wanting to. I’m not a great fan of thinking about education in the context of for-profit because of the human quotient. Imagine if I sold tables to customers with a sign that said: this one got a C. My point of view on this is that education is best viewed as a public service that will succeed or fail on the tenacity and mindfulness of students, not chocolate-covered systems that when bitten into reveal their broccoli center (you know, the chocolate-covered broccoli syndrome typically associated with education games).
Just to refer back to that first quote I started with. I say, join the club.
I think it’s fascinating that Thrun is really bugged by his perceived failure. I would have to conclude that, given this, he’s a good teacher. Teachers who don’t obsess about improvement and who think they can actually teach well should find another line of work.
Given the implications of the content of this CT Mirror article, it would be interesting to consider what “transfer” should mean. It’s a good question to ask: how many courses from one institution should transfer to another institution in the higher ed “ecosystem” without compromising the authority of a degree-granting institution?
Assume some scenarios. A student might take several courses at a university, then transfer those courses to a community college, such as Tunxis Community College, where I teach. That same student might accumulate a pretty good-sized number of credits or hours. How many of these should the ending institution accept as accumulations toward, say, a degree in English or Biology?
The question assumes a paradigm. The model is: what constitutes a degree? It’s rare that universities differ all that much in their amounts and curriculum. Is Biology different in Kansas than anywhere else, assuming a graduate might go off and work somewhere in Washington State? All degree-granting institutions should have the freedom to determine the authority of their degrees within the construct of the larger discipline. So, claiming this or that number is the correct amount is both arbitrary and coherent.
My message to students is typically this: if you take a writing course and this course transfers to Yale, then consider yourself at Yale. That makes sense to me.
The first major papers are done and evaluated. And now some thoughts on my College’s ability-based approach. A fast internet search will provide loads of listed links.
While Ability-based may sound like a buzzword approach, this method of teaching and learning involves defining a set of “abilities” in descriptive form with the student as subject, as in: the student Writes articulate arguments with increasingly sophisticated claims using authoritative, documented evidence, and appeals. Our department has broken such language into different degrees of ability and demonstration in the form of standards of evaluation, often called a rubric, which is somewhat inaccurate. I prefer standards or degrees of evaluation.
The notion is that people can learn to drive forklifts. Some drivers, however, are the “go-tos.” Others keep backing into the walls. Others can do just fine. There’s no real reason to bother with “why” questions or with the typical judgement that standardized tests provide. This method involves standards but resists standardization as the concepts are broad and learnable.
Long ago, I would develop fairly complicated explanations for grades, as I grew tired of justifying them. A meant this and C meant that. In our current model we represent each degree of ability with a number (humans, apparently, are doomed to hierarchies). Doing this assists in understanding that the number or the grade doesn’t matter. What matters is the meaning of the thing. It’s entirely possible to provide a set of explanations on a student paper that illustrate the degree to which they are writing “articulate arguments” or that provide information about how to improve their method of evaluating a source for bias or motive. In a poetry course, we can move the language toward the requirements of the particular discipline or adhere to a general definition of “problem-solving” as poets do it. Practical politics, however, gets in the way. Students express themselves differently when they say I got an A, what’d you get? versus I got a “can assert a conclusion that doesn’t rely upon belief,” what’d you get?
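Since Code is Poetry, here’s the idea as a programmer might doodle it. To be clear, the four-point scale and the wording below are my own illustration for this post, not our department’s actual standards document; the point is only that the number is a pointer to a meaning:

```python
# Hypothetical sketch of degrees of evaluation for one ability
# ("writes articulate arguments"). The scale and wording are my own
# illustration, not an official rubric.
degrees = {
    4: "asserts a conclusion that doesn't rely upon belief, backed by documented evidence",
    3: "asserts a conclusion with evidence, but the appeals are thin or uneven",
    2: "identifies and describes an argument without yet evaluating it",
    1: "summarizes or guesses; argument and statement of fact are not distinguished",
}

def feedback(degree: int) -> str:
    """Return the meaning behind the number -- the part that actually matters."""
    return degrees[degree]

print(feedback(3))
```

The lookup, not the integer, carries the assessment; the number is just the hierarchy humans apparently can’t do without.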
Students will often tell me they want to know what their grade is, and that’s all they want to know. They use code for this; they say, “I want to know how I’m doing.” I might say, “Well, you need more aggressive analysis and stop using hard-core partisans as experts.” “Yeah, but how do I get an A” is the typical coded response, when the response I gave is the answer. I say: just work on improving analysis or find better sources with which to practice. It can get heated because the modern student isn’t typically acclimated to academic or professional material, communication norms, work load, and subject matter. It’s not something one can just explain.
I’ve been using this method going on nine years. I started in English Literature courses, providing students explanations and means for improvement on their work rather than grades, and boy did I get hell for it from students. The ire I’ve received in response has always been difficult to deal with but no more difficult necessarily than the responses I used to get to grades. “Why a C? I need an A to keep my GPA or I won’t get into my program.” Or, “I’ve never got a D in my life! You’re the worst fucking teacher ever.” From there, the conversations would go haywire.
This semester has proven interesting in the evolution of this system of evaluating, as I reviewed some of the best papers I’ve ever read at midterm. More than half of the students nailed the assignment. Student work in an ability-based model theoretically provides a narrative of learning. Students should begin early unable to demonstrate satisfactory work but, after practice, writing, and reading, should improve. Why? Because early work involves foundational stuff like summary writing, research basics, short analyses, and comparison work, and then the student can move to making a claim or taking a position. Those who stick with the approach typically do improve. Those who want top scores early and won’t take time to understand where they might improve if they put their noses to it typically drop (this is merely a hypothesis). Some students think this makes me a shitty teacher, who explains nothing and doesn’t give a crap about their needs or wants. (I had a student recently whistle with disbelief at what students have said on the “professor rating” web site; I don’t dare look myself.) Other students grin, bear it, and make out fine in the end. Statistically my success rate is pretty good (I often grab stats on how students do after they move on), and students who come back by the office claim that the torture paid off. For teachers, anecdotal evidence can be instructive. We deal with people as people and need to know what they do with what they learn.
This last round of bulk good work, some of it excellent, is good and excellent because it demonstrates that the students are learning into the abilities. Some students are still guessing about the difference between an argument and a statement of fact. Here’s an example of guessing: “So ‘n’ so argues that Romney or Obama said that if elected every one will get healthcare.” In the ability-based model guessing amounts to a boolean expression. Why: because people can learn to discriminate between these concepts. When students identified and evaluated evidence in relation to an argument, they got it right, maybe not expressed as well as Keats could express but good enough to show that they can do the job.
This doesn’t, however, validate the pedagogy I employ. Too many variables get in the way of that. What the student success does tell me is that they are learning, and they’re learning beyond my assistance. It’s often important to avoid pedagogy-validity arguments, as in some cases courses might simply get lucky with a whole bunch of stars or struggle with a whole bunch of people who needed more preparation or lots of assistance.
Some of the methods are risky. Firstly, I don’t read and comment on drafts anymore. I don’t ask students to provide drafts that I then give back with comments, as my own teachers did and as I once perpetrated. This is not a methodological crime. Past experience has taught me that this method leads to the encouragement of poor study and editing habits, especially for raw freshmen who need more learning in study habits than anything having to do with good writing. Instead, I ask students to read other student drafts and edit against the abilities in typical peer review sessions. How students edit their peers tells me a lot about their own habits of reading and resilience in the face of problems. I ask students to provide me with their edited copy for kicks.
This is risky, as final papers may indeed show a great deal of missed opportunity or lack of learning in comparison to the more polished work that teachers traditionally pore over in prep for final copy. When it works, the amount of learning a student shows is apparent in comparison to past work. A writer notices the difference between the past and present if they make decisions on their own. This gives me more dramatic information about what I need to do in the classroom. If the majority of students are still having issues with paragraph divisions and transitions, then I can see that in unadulterated copy, and I can work with this issue more in class. In addition, heavily edited drafts by teachers may produce more polished final drafts. This, however, may not assist students when they’re asked to write for later courses where assistance from the professor is no longer provided.
Second, I do a hell of a lot of modeling, which is where a screen and word processor really come in handy in a writing course. Using the computer, I can build a set of paragraphs and show students what synthesis and analysis look like on the fly. They see and hear my thought process; they see how I correct spelling; they see how I clean up a cut-and-paste job from an online article with embedded links or superscripting. We discuss the process a lot. We throw an article up on the screen and talk about why a writer fell down on the job, either leaping to a conclusion or providing an irrelevant example to support an otherwise perfectly reasonable argument. Then the students are expected to go out and read, practice, and study the notes they generated in discussion, in modeling, and in draft revision. I will typically grab a student draft and take it apart for all to see (only, of course, if agreed to by the poor student under glass), then put it back together using the concepts we’re trying to learn: elements of persuasive writing, paragraphing, analysis, quoting and reference, and idea development.
From a teacher’s perspective, observing a range of student performance is a good thing. This range provides a framework for evaluating the story of learning in a particular course. For several years I’ve been struggling with low performance, low preparation, and heavy drop rates, and I don’t see an end to this trend. The story of performance is sometimes encouraging and sometimes not, but it’s valuable nonetheless in instructing the instructor.
This article by Amanda Ripley, titled “College Is Dead. Long Live College,” is somewhat unnerving. I have all my current assignments ready for students in a software package called Digication, for reasons too long to mention in this post. Students will upload papers to each assignment, and I’ll use the software to wade through them all and assess them. I manage the day-to-day of calendars, directions, and certain instructional aspects of my courses using a WordPress MU install run by Sixnut (that’s the name of the college strung in the opposite of the normal spelling).
Some students get confused and look for assignments in our version of Blackboard and say, “I couldn’t find the assignment.” But that’s another story. I have students who run up against technological problems. They run their home laptops off wall current because their batteries are dead, so if the cat knocks the cord out, the device goes blank. Or their printers’ color cartridges are down to dust, so their drafts won’t print (who was the genius who decided that a black cartridge wasn’t ink enough to print a black-and-white essay?), and the price tonnage of ink prohibits just running to the store for more. I have a student who couldn’t participate in peer review sessions because he fell, broke his arm, and smashed his computer as his backpack took a good portion of the impact. Or so he says, though the sling he wears is some sort of proof. Many of my students don’t know how to solve common issues with their latest pricey equipment, which is typically far more advanced than mine. I sat with a student the other day, showing her how to actually close out running software on the latest, greatest Mac and to find that hitherto unfindable paper. Sometimes those desktops are a real mess.
Most of my students have everything they need to do everything but the task at hand. This technological ambience is a phenomenon of everyday experience. The question of how to make a college course a place where mindfulness is encouraged is therefore now an apparent issue in design. The author writes:
This fall, to glimpse the future of higher education, I visited classes in brick-and-mortar colleges and enrolled in half a dozen MOOCs. I dropped most of the latter because they were not very good. Or rather, they would have been fine in person, nestled in a 19th century hall at Princeton University, but online, they could not compete with the other distractions on my computer.
It could be argued that the digital native is always at some task. I’ve noticed in class that these tasks rarely have much to do with what I want people to focus on, though it’s often hard to tell what’s in people’s heads. Some students who appear off in the ether during a lecture or discussion are indeed listening, or at least prove so later in response to a question or in submitted work.
Ripley spends a lot of time developing her experience with a Udacity physics course. There’s a video intro, the instructor introduces himself, and then he and the students get down to business:
“This course is really designed for anyone … In Unit 1, we’re going to begin with a question that fascinated the Greeks: How big is our planet?” To answer this question, Brown had gone to the birthplace of Archimedes, a mathematician who had tried to answer the same question over 2,000 years ago.
Minute 4: Professor Brown asked me a question. “What did the Greeks know?” The video stopped, patiently waiting for me to choose one of the answers, a task that actually required some thought. This happened every three minutes or so, making it difficult for me to check my e-mail or otherwise disengage — even for a minute.
“You got it right!” The satisfaction of correctly answering these questions was surprising. (One MOOC student I met called it “gold-star methadone.”) The questions weren’t easy, either. I got many of them wrong, but I was allowed to keep trying until I got the gold-star fix.
My colleague John Timmons figured the repetition question out years ago in his online courses and approaches the question of testing in a sensible way, allowing students to relearn as they’re assessed. I’ve tried to mimic this approach in my own brick-and-mortar courses in a variety of ways. We’ve understood the importance of feedback and examine, in new media, how the digital can be advantageous in this regard. Trial and error, learning from mistakes, and testing guesses against experience are important for growth; games teach these lessons, as does getting lost in the mall as a child. If it was good enough for Sir Gawain, I claim, it’s good enough for me.
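The retry-until-correct mechanic Ripley describes is simple enough to model in code. Here’s a minimal sketch of that assessment loop, with the learner re-answering until the response is right and the system recording how many attempts mastery took. All names here are hypothetical illustrations, not any actual MOOC platform’s API:

```python
def ask_until_correct(question, correct, get_answer):
    """Re-ask a question until the answer is right; return the attempt count."""
    attempts = 0
    while True:
        attempts += 1
        if get_answer(question) == correct:
            return attempts  # the "gold-star" moment

# Simulated learner who guesses wrong twice before getting it right.
answers = iter(["b", "c", "a"])
tries = ask_until_correct("What did the Greeks know?", "a",
                          lambda q: next(answers))
print(tries)  # → 3
```

The design point is that a wrong answer costs nothing but another attempt, which is what makes the feedback loop feel like a game rather than a judgment.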
Studies of physics classes in particular have shown that after completing a traditional class, students can recite Newton’s laws and maybe even do some calculations, but they cannot apply the laws to problems they haven’t seen before. They’ve memorized the information, but they haven’t learned it — much to their teachers’ surprise.
The “teacher surprise” here is interesting to consider. One reason for surprise may have to do with what teachers have learned to consider the definition of success in a course, which is often geared to the narrow focus of a particular task, such as covering Chapters 5 through 7 so that what’s in Chapters 5 through 7 can be “learned.” I remember having to memorize the nerves of the hand in Anatomy class because in Anatomy class it is important to learn all the hand’s nerves. But the meaning of the hand’s nerves to a non-major is difficult to fathom.
The intent of a course may simply be to memorize facts and to take a few multiple-choice tests. The facts that form the subject of the course may be important to recall. The question is: should this be the intention of “any” course of study, which determines the flavor of feedback a student is intended to receive? Question 2: should people be surprised to learn that rote learning, or even the application of heuristics, may not constitute problem solving or the ability to diagnose? If memory serves, my history courses in undergraduate school had a lot to do with reading about historical events and having to recall them on essays. But my memory fails in the details. What I do know is that I understand history much differently than I used to; now it’s something I depend on. I’ve forgotten the nerves of the hand, though.
I’m not generally surprised at Richard Arum’s conclusions in Academically Adrift. In my work with academic curriculum over the last several years, I’ve concluded that testing for application or knowledge transfer isn’t always a part of courses in any great dose. In this context, I reflect back on my high school and undergraduate experience and remember that my best memories of learning come from the high school band, seconded by graduate school. One reason is Aristotelian in process, meaning that students are expected to go from general basics to specificity over the course of an arbitrary period of time, although the “arbitrary time aspect” isn’t Aristotle’s fault.
In the band, we worked as a team; in the band, we had all sorts of ways of applying what we learned; we often failed and walked away with lowered heads only to rear back upright when the competition was won; and when we sucked, the leader was never at a loss to cuss the hell out of us. I gained experience by watching that same teacher “outside” of the classroom: his devotion to discipline, art, and the machines of his trade, and the amount of work he did to manage hundreds of students. When he tackled the mysterious glue sniffer on the lawn prior to an afternoon marching practice, then waited for security to arrive, I saw him in a new light. I still remember him as a courageous person, personally flawed, sure, but he understood humanness and would do anything for his charges. If you didn’t practice, he always figured it out and would apply the appropriate level of derision to your shittiness of character. With the guitar, you can either play a scale or you can’t. And when you can, there’s always the opportunity to improve, and if you don’t improve, YOU need to work harder at it. In band performances, either you got clapped at or you were nailed by tomatoes. But we needed the master teacher. We knew that if trouble encroached on the field, he’d tackle it, even if it meant personal damage.
The power of the digital is its ability to be tuned or designed for individual people. It’s entirely possible to construct a learning environment where aid is available from a variety of sources and time streams and where asynchrony can work to the advantage of individuals. Maybe one person will take six months to learn what another person can learn in a month. Traditional teaching environments won’t accommodate this obvious variability. Thus, a student who can’t demonstrate the requisite amount of learning in fifteen weeks “fails.” (This is indeed a certain kind of failure, but I can’t think of any successful game that operates this way. Failure in life is best seen as a stage in learning.) A student can pay up and take the failed course again. The business plan, however, won’t allow a student to pay once and take more time to demonstrate the required learning. There are also rules of fairness and the question of value for the amount paid. The digital provides for disruption of all this.
But institutions don’t currently work this way, though they could. And so the digital disrupts the “structure” of a modern college degree regardless of the nature of the degree. I would posit that modern, mass education will always fail people if arbitrary, exacting structures provide the definitional framework, unless it is, indeed, judged as an “exclusive” system, like military training.
What prevents change? Definitions of value and organizational imagination.
Ripley’s essay relies heavily on anecdotal evidence. While I appreciate Niazi and colleagues’ experience with MOOCs, their experience is a small slice of the story continuum. Still, stories about people’s experiences with online learning are circumstantially significant: they provide context and prompt good questions about priorities, such as the theme of good teaching and the arbitrary notion of periods of learning.