David England and I hit the NEASC annual meeting on Wednesday, in advance of our presentation on Thursday entitled “Institutional Research that Supports Faculty Investment in Assessment.”
We arrived in Boston just in time for lunch. After lunch I attended a session called “Measuring Depth of Learning in the Humanities” moderated by Bruce Mallory, CIHE Commissioner. The presenters included Orin L. Grossman, Academic Vice President, Fairfield University, Fairfield, CT; Ellen McCulloch-Lovell, President, Marlboro College, Marlboro, VT; and David Scobey, Director, Harward Center for Community Partnerships, Bates College, Lewiston, ME. The description for this session was:
Faculty in the humanities disciplines find it more challenging to engage in assessment in ways similar to faculty in other disciplines. This session will address these challenges, suggest useful venues and structures, and provide examples of tools and methods for enabling faculty in the humanities to engage in assessment in ways they find most useful and appropriate.
My response to this presentation was mixed. I never really found an explanation for the central focus, the “more challenging” issue, as I find that the Humanities is readily pumped to do assessment, given that it assesses students in interesting and fairly straightforward ways. Yes, we often probe the abstract, but how we probe is not difficult to understand. The premise here suggests a straw man: that assessment is difficult because the Humanities is not a realm of objective, identifiable abilities. The presenters often fell back on abstract ideas of ability, and hence I felt that I was merely listening to cogently presented resistance inspired by the Humanities’ abstract subject matter or expectations. The session’s description, though, never made the nature of that difficulty clear.
David England attended the session entitled “Using Mixed Methods and Longitudinal Studies to Assess Student Learning,” moderated by Jill Reich.
I next attended a session entitled “Setting the Stage for Productive Measures of Learning” moderated by Gai Carpenter. The presenters included David Finney, President, Champlain College, Burlington, VT; Marty Krauss, Provost, Brandeis University, Waltham, MA; and Emile Netzhammer, Provost and Vice President for Academic Affairs, Keene State College, Keene, NH. The description for this session was:
This session will address how three different types of institutions have progressed over time in their implementation of institution- and program-level assessment, and how they have enabled an investment in the effort on the part of faculty and staff. Presenters will discuss successes and the challenges in building a culture of inquiry and evidence on their respective campuses. The session will also address how these institutions have used accreditation as leverage to support good efforts.
This session was interesting and generated lots of questions from the audience. Krauss, Netzhammer, and Finney provided lots of detail about their involvement with faculty and students in their efforts to develop assessment ecologies. Krauss herself led committees in assessment and managed the typical problems that come with developing institutional effectiveness on a large scale. This was interesting and shows that administrative leaders can play hands-on roles in decisions confronted by faculty and students. Our own president sits on our World Cultures ability group and thus is right in the thick of decisions related to teaching. Netzhammer described his role with faculty and his direct involvement with teaching issues related to assessment.
David England attended the session entitled “Assessing Curricular and Instructional Practices in General Education: Linking Evidence to Improvement,” moderated by James Leheny. David was pretty excited about this session. In the following break, we had a discussion with Richard Vaz, Dean, Interdisciplinary and Global Studies Division and Associate Professor of Electrical and Computer Engineering, Worcester Polytechnic Institute, Worcester, MA. David was intrigued by WPI’s independent study and collaborative student work. We asked Richard lots of questions about how students enroll themselves in the independent project areas, which are required of faculty and students. We were also intrigued by WPI’s Global Perspectives Program. The idea is pretty simple: take what you’ve learned and apply it. Having students pursue independent study projects in lieu of a fifth course seems interesting.
In any event, we had a nice dinner and then met back up Thursday morning for our discussion. To repeat, the title of our talk was “Institutional Research that Supports Faculty Investment in Assessment.” The moderator for our talk was the very professional Julie Alig. Our partners in the four-person panel discussion were Cate Rowen, Director of Institutional Research and Educational Assessment, and Susan Etheredge, Associate Professor of Education and Child Study, Smith College, Northampton, MA.
Cate and Susan began the presentation by describing how faculty and admin staff have partnered at Smith in their pursuit of comprehensive assessment. The partnership provides faculty with information that they need to make informed decisions about curriculum, teaching, and student performance.
I started my and David’s portion with a brief description of our ability-based model in the context of the existing course-based gen ed model adhered to by CT, and provided examples of eLumen assessment screens to give a hands-on look at assessment practice and data. This segued into David’s exploration of his role in institutional effectiveness: swift response to requests; data synthesis and meaning; the gen ed matrix; and all kinds of other good things.
Both presentation groups elicited lots of questions from a crowd that was well over one hundred strong. One question that always bothers me, and that I always fumble, came from a gentleman in the crowd who asked how we deal with the double work of grading and assessment. My response, that we should not see the two practices as separate (so what is the question?), always appears unsatisfactory to people. I don’t believe he was asking about the physical work of data entry; rather, he was thinking of grading and assessment as two separate activities.
In any event, after the talk numerous colleagues pressed us with questions about eLumen, what we do with data, and how we will assist others in the future. We saw lots of friends from Saint Joseph, Manchester, Mitchell, and Charter Oak and met some new and interesting people. NEASC is a hopping conference, and it was well worth the trip and prep work. One of Julie Alig’s last comments was: “You showed people that this is all doable.” We’re getting close but still have lots of work to do.
Thanks for the summary and, most of all, to both you and David, for presenting our abilities-based assessment system at Tunxis. We are extremely proud that you were able to put the Tunxis model forward amidst colleagues from some schools that have traditionally had more resources. It’s heartening to see how far we have come. We used to go to such conferences to learn what we had not yet thought of doing. Now, though we humbly continue to learn and grope our way forward, we also have something to offer and a way to inspire others. I look forward to additional talk about what you learned there and where we should be going.
Thanks for taking all the time it took to do this on our behalf. I know your colleagues engaged in assessment here are equally proud of you.
I share your bewilderment over the alleged difficulty that many people who teach in the Humanities disciplines have with the notion of abilities assessment in their areas of expertise.
On one level, this is what they’ve been doing for as long as they’ve been teaching. It was called, and is still called, “grading.” The difference, and the challenge, is not with the exercise itself but with the attempt to connect assessment to the institution’s work in a systematic way.
In short, assessment was fine as long as it was called grading and we didn’t have to explain why we awarded the grades that we did. There is a risk in presenting our standards to others who will assess them: the risk that our old, comfortable ways may not measure up. Even more risky, we may learn that we have never thought rigorously about our standards. If we can’t explain them to others, after all, how can we be sure that we understand them ourselves?
In summary, I guess I’d have to say that I’m right there with you.
Keep up the great work riding that NEASC wave. Nice job on the poetry too… Thanks.
–Hope you hang ten for a happy holiday season,