I just received the feedback that Princeton collected from students in my undergraduate course in Social Networks this spring. But, by now, all my students have left for the summer, and I’m not going to teach this class again for a while. In other words, this university-collected feedback might be good for evaluating me as a teacher, but it is not well-suited for making me a better teacher.
The timeliness and granularity of this end-of-semester feedback differ from what I’ve seen inside tech companies like Microsoft, Facebook, and Google (and even in some of my own online research projects). I think that one reason online systems improve at an impressive rate is that there is often a very tight loop between action and feedback, and this tight loop enables continual improvement. So, this semester I tried to create a tighter feedback loop in my own teaching. My teaching assistants and I created a simple system for micro surveys that we deployed at the end of each class. I found the feedback very helpful, and it led me to make two concrete improvements to my teaching: more demonstrations and better class endings. In this post, I’ll describe exactly what we did and how it could be better next time. I’ll also include an example report and a link to the open source code that we used to generate it.
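To give a flavor of the kind of aggregation involved before getting into the details, here is a minimal sketch in Python of how micro survey responses could be rolled up into a per-class summary. The file name and the CSV columns (`date`, `rating`, `comment`) are hypothetical stand-ins, not the actual format we used; the real pipeline is in the open source code linked at the end of the post.

```python
# Minimal sketch: aggregate end-of-class micro survey responses into a
# plain-text summary. The CSV layout (columns "date", "rating", "comment")
# is a hypothetical stand-in for whatever your survey tool exports.
import csv
from collections import Counter, defaultdict

def summarize(path: str) -> str:
    ratings = defaultdict(list)   # class date -> list of numeric ratings
    comments = defaultdict(list)  # class date -> list of free-text comments
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            date = row["date"]
            if row["rating"]:
                ratings[date].append(int(row["rating"]))
            if row["comment"].strip():
                comments[date].append(row["comment"].strip())

    lines = []
    for date in sorted(ratings):
        rs = ratings[date]
        avg = sum(rs) / len(rs)
        lines.append(f"{date}: n={len(rs)}, mean rating={avg:.2f}, "
                     f"distribution={dict(sorted(Counter(rs).items()))}")
        for comment in comments[date]:
            lines.append(f"  - {comment}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(summarize("microsurvey_responses.csv"))
```

The point of a report like this is speed, not sophistication: a per-class tally that arrives the same day is more actionable than a richer analysis that arrives after the semester ends.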