In my last post, I argued that scaling the seminar is an equity issue:
[S]caling the seminar is essential for equity if we want to avoid a two-track system of education—one at expensive universities and another for the rest of us. This is already a serious problem that the current situation is likely to make worse.
I also argued that we are experiencing a failure of imagination about how to scale the seminar because EdTech has become obsessed with solving the wrong problem:
We’ve been pursuing the solutions that have helped improve student self-study by providing machine-generated feedback on formative assessments. While these solutions have proven beneficial in some disciplines and have been especially important in helping students through developmental math, they have hit a wall. Even in the subjects where they work well, these solutions usually have sharply limited value by themselves. They work best when the educators assigning these products use their feedback to reduce (and customize) their lectures while increasing class discussion and project work.
Despite the proliferation of these products, flipped classrooms and other active learning techniques are spreading slowly. Meanwhile, there is a whole swath of disciplines for which machine grading simply doesn’t work. It applies most broadly to any class in which evaluating what students say and write is a central part of the teaching process. EdTech has been trying for too long to slam its square peg into a round hole.
In this post, I will outline an approach to scaling the seminar through social feedback and peer assessment. I will use the example of an English composition course for two reasons. First, I recently had the pleasure of presenting at a virtual workshop on the topic hosted by the University System of Georgia and supported by the APLU, so I have already done some thinking about this particular topic. Second, English composition is one of the hardest seminar-style classes to scale. Students need frequent and intensive feedback, and particularly in access-oriented contexts, many of these first-year students do not come to college with the skills necessary to provide high-quality peer review of writing. So it’s a good test case.
Continue reading in eLiterate...