fast, flexible, and scalable feedback on teaching with end-of-class micro surveys

[Image: soc204_s2015_survey]

I just received the feedback that Princeton collected from students in my undergraduate course in Social Networks this spring.  But, by now, all my students have left for the summer, and I’m not going to teach this class again for a while.  In other words, this university-collected feedback might be good for evaluating me as a teacher, but it is not well-suited for making me a better teacher.

The timeliness and granularity of this end-of-semester feedback differ from what I’ve seen happening inside of tech companies like Microsoft, Facebook, and Google (and even in some of my own online research projects).  I think that one reason that online systems are improving at an impressive rate is that there is often a very tight feedback loop between action and feedback, and that tight loop enables continual improvement.  Therefore, this semester I tried to create a tighter loop between my teaching and student feedback.  My teaching assistants and I created a simple system for micro surveys that we deployed at the end of each class.  I found the feedback very helpful, and it caused me to make two concrete improvements to my teaching: more demonstrations and better class endings.  In this post, I’ll describe exactly what we did and how it could be better next time.  I’ll also include an example report and a link to the open source code that we used to generate it.

What did you do?

This spring I taught an introductory course about social networks for about 80 students.  At the end of each class, the final slide in my deck was a link such as bit.do/soc204_apr27.  Students followed the link to complete the micro survey (here’s an example) either with their laptop — most students used laptops — or their smartphone.  I was initially worried that there might be a problem with students not bringing a device to class, but that did not occur.

Then, after class, Sarah James — the amazing teaching assistant who designed and built this system — would pull down the data, run her R scripts, and produce a report that was sent to all the instructional staff.  The results of the survey, which we had soon after class ended, were then used to inform decisions about future class activities.
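To give a flavor of what this kind of post-class processing might look like, here is a minimal sketch in R.  It is not Sarah’s actual code (that is linked below); the file and column names are hypothetical, and it assumes the responses have been exported from the Google Forms results sheet as a CSV.

```r
# A minimal sketch (not the actual course code) of the kind of processing
# that happened after each class.  File and column names are hypothetical.
library(plyr)
library(ggplot2)

# Responses exported from the Google Forms results sheet as a CSV
responses <- read.csv("soc204_responses.csv", stringsAsFactors = FALSE)

# Summarize the numeric questions by class date for the daily report
daily <- ddply(responses, .(class_date), summarise,
               n             = length(rating),
               mean_rating   = mean(rating, na.rm = TRUE),
               mean_bored    = mean(bored_minutes, na.rm = TRUE),
               mean_confused = mean(confused_minutes, na.rm = TRUE))

# Plot the trend in ratings across the semester
ggplot(daily, aes(x = as.Date(class_date), y = mean_rating)) +
  geom_line() +
  geom_point() +
  labs(x = "Class date", y = "Mean rating (1 = bad, 5 = great)")
```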

Here’s a slightly modified example of the feedback report, and here’s the code that generated it.  Sarah has made this code available open source so that others can read it, use it, and improve it.
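For anyone wiring up something similar, the core step is just knitting an R Markdown template after each class.  A minimal version (with hypothetical file names, and not taken from our actual code) might look like:

```r
library(rmarkdown)

# Knit the day's report to HTML; the .Rmd template reads the latest
# responses and produces the tables and plots.  File names are hypothetical.
render("daily_report.Rmd", output_file = "report_2015-04-27.html")
```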

Also, one concern we had during the semester was that students might be getting annoyed by all our end-of-class surveys.  In fact, the response rate declined pretty steadily over time.  But, in the end-of-semester evaluation run by Princeton, some students reported appreciating the surveys and none reported disliking them.  In response to the question “How would you describe the overall quality of the lectures? Please comment, as appropriate, on how well the instructor presented the subject matter, stimulated your intellectual curiosity and independent thinking, and contributed to your knowledge of the subject matter.” about 25% of the respondents mentioned that they appreciated that we were open to student feedback in order to improve the class.  For example, “he was always asking for our feedback which was really effective” and “I also appreciated the surveys [and] the instructors’ commitment to improve the course.”

What did you learn?

I learned that rapid feedback was really helpful to me, and I think it led to lots of little improvements.  There are two concrete improvements, however, that stand out to me.  These improvements would have been difficult to make using only end-of-semester feedback.  Also, I’m confident that continued use of the system would lead to continued improvements in future years.

The first improvement I made was adding more demonstrations to class.  One question on our survey was “What was best about today’s class?”, and we found that every time there was a demonstration, most students rated it as their favorite.  This feedback made me think more about how we should spend class time, and I decided to add more demonstrations as the semester went on.  For example, here are some demonstrations that we did:

  • An activity where students tossed coins to illustrate the effects of attrition on the observed length of completed chains in the small-world experiment (a toy version of this bias is sketched after this list)
  • An activity where students guessed how much candy was in a bowl — both with and without information about what other students had guessed — to illustrate information cascades
  • A self-fulfilling prophecy experiment involving up-voting a post on YikYak
  • A standing ovation demonstration to illustrate the surprising dynamics of threshold-based decision making
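To make the point of the coin-tossing demonstration concrete, here is a toy simulation (my own illustrative sketch, not something we used in class): when each step of a chain survives only with some probability, long chains rarely complete, so the chains we observe completing are systematically shorter than the true ones.

```r
# Toy simulation of the coin-tossing demonstration (illustrative only).
# Each step of a chain survives with probability p, so a chain of length k
# completes with probability p^k; completed chains are therefore much
# shorter, on average, than the underlying true chains.
set.seed(204)
p <- 0.5                                        # one coin toss per step
true_lengths <- sample(2:10, 10000, replace = TRUE)
completed <- runif(10000) < p ^ true_lengths    # did every step survive?

mean(true_lengths)              # average true chain length
mean(true_lengths[completed])   # average among completed chains: noticeably shorter
```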

In future years, I hope to develop even more of these simple demonstrations.

The second improvement spurred by the end-of-class surveys was better endings for each class.  While teaching, I’ve been known to lose track of time and then rush through the end of class.  I’ve long been aware that this was a problem of mine, but something about the end-of-class feedback caused me to move from “Oh, this is something that I should work on” to coming up with a concrete solution to fix it.  So, I decided to develop a standard ending format for each class that 1) summarized the material from that day’s class, 2) provided motivating ideas for the reading that the students were going to do for the next class, and 3) encouraged them to fill out our end-of-class survey.  Also, to help with timing, I asked one of the teaching assistants to give me a 10-minute warning and a 5-minute warning before the end of class.  These were not hard changes, but I think that having concrete, repeated feedback from the students forced me to identify and fix the problem.

What would you do next time?

I think that becoming a better teacher is probably a process of hundreds of little changes like more demonstrations and better endings.  None of these are shocking or surprising, but I’m convinced that our survey system enabled us to identify these problems and then nudged us to fix them.  So, I’ll definitely do this next time I teach.

But, this was a first attempt, and there are a few things that I would do differently next time.  In terms of the survey instrument:

  • I would love to figure out a way to get more open-ended responses from the students.  As is, few students responded to “Anything else you’d like us to know?” even though I said different things in class to encourage responses, including discussing some of the students’ responses and addressing any questions that they raised at the beginning of the next class. Perhaps we could move this question up in the survey or make it visually more appealing (I think that the small answer box is not ideal).
  • There was very little variation in responses to the question “Rate the course thus far; 1 = bad, 5 = great”.  Therefore, we should probably ask it less frequently.
  • There was a high correlation between responses to “How many minutes were you bored today?” and “How many minutes were you confused today?” (a quick check of this and the previous point is sketched after this list).  Therefore, in the future these might be combined into something like “How many minutes did class seem off track today?”
  • I would also get ideas for new questions from other surveys, such as the ones listed here.
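The variance and correlation points above are easy to check once the responses are in R.  Here is a minimal sketch, with hypothetical column names, that is not taken from our report code:

```r
# Hypothetical column names; responses combined across the semester
responses <- read.csv("soc204_all_responses.csv", stringsAsFactors = FALSE)

# How much variation is there in the overall rating question?
table(responses$rating)
sd(responses$rating, na.rm = TRUE)

# How strongly do "minutes bored" and "minutes confused" move together?
cor(responses$bored_minutes, responses$confused_minutes,
    use = "pairwise.complete.obs")
```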

In terms of the code to process survey responses and create a daily report:

  • The whole process we used evolved over the course of the semester, and the fact that the survey kept changing introduced some complexity into the code.  I think that this is unavoidable because the ability to change the survey over time was really helpful to us.  Anyone else doing this should expect the survey to evolve over the semester and should plan accordingly.
  • To get the data from Google Forms to R, we might try using the googlesheets package by Jenny Bryan, which was released during the semester.
  • When showing trends over time, we might try to show more than just the mean, for example the full distribution of responses for each class (this idea and the googlesheets idea are sketched briefly below).
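As a rough illustration of those last two bullets (and only that, since we have not actually tried this), the googlesheets package can read the responses straight from the spreadsheet behind the form, and ggplot2 can show the spread of ratings for each class rather than a single mean.  The sheet name and column names below are hypothetical.

```r
library(googlesheets)
library(ggplot2)

# Read the survey responses directly from the Google Sheet behind the form
gs_auth()                                              # authenticate in the browser
ss <- gs_title("SOC 204 end-of-class survey (Responses)")
responses <- gs_read(ss)

# Show the full distribution of ratings for each class, not just the mean
ggplot(responses, aes(x = factor(class_date), y = rating)) +
  geom_boxplot() +
  geom_jitter(width = 0.15, alpha = 0.3) +
  labs(x = "Class date", y = "Rating (1 = bad, 5 = great)")
```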

Acknowledgements

This feedback system was designed and implemented by Sarah James, one of the great teaching assistants for the course.  The system improved over the course of the semester based on excellent feedback from the other teaching assistants: Andres Lajous, Andrew Ledford, Nicole Pangborn, and Han Zhang. We also benefited from helpful advice from two people from Princeton’s McGraw Center for Teaching and Learning: Jeff Himpele and Nic Voge.

Our system used only free or open source software, so the total financial cost to us and our students was zero.  In particular, we thank Google for Google Forms, and we thank the creators of the R packages that we used: plyr (Hadley Wickham), ggplot2 (Hadley Wickham and Winston Chang), and RMarkdown (JJ Allaire and colleagues).
