Three important ideas in teaching are active learning, data-driven decisions, and personalization. A large body of literature supports the benefits of active learning and suggests many ways to implement it. As classes grow in size, it also becomes more important to make decisions based on data from the class rather than intuition and impressions alone. That data can then be used to personalize the course to better fit a specific offering's student cohort and to drive automation that personalizes each student's experience. This talk includes a mock lecture, a discussion of the teaching practices used in the lecture, and a discussion of how data-driven personalization for students helps teachers follow the software engineering practice of DRY: "don't repeat yourself."
In the mock lecture, we will cover basic manipulation of Python lists and how Python handles references (colloquially, pointers) in the context of lists. Afterwards, the teaching practices reflection will highlight the use of active learning in the mock lecture and how student question/answer data shaped the mock lecture's content. The talk will close with a discussion of how to alleviate a common teacher problem: frequently repeating the same explanation, just to different students, i.e., not being DRY.
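To illustrate the kind of list behavior the mock lecture covers, here is a minimal sketch of aliasing in Python: assignment copies a reference to a list, not the list itself, so two names can refer to the same underlying object. (This example is illustrative, not the actual lecture material.)

```python
# Assignment binds a second name to the SAME list object,
# so a mutation through one name is visible through the other.
a = [1, 2, 3]
b = a          # b is another reference to the same list
b.append(4)
print(a)       # [1, 2, 3, 4] -- a sees the change made via b

# A shallow copy creates a new, independent list object.
c = list(a)
c.append(5)
print(a)       # [1, 2, 3, 4]    -- unaffected by the copy
print(c)       # [1, 2, 3, 4, 5]
```

Predicting the output of exactly this kind of snippet is the task students face in the question/answer system described below.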
One way to help a teacher become more DRY is to offload the task of repeating an explanation to a computer. The introductory computer science course at UC Berkeley has an automated question/answer system in which students are presented with code and asked to predict its output. When students struggle with these questions, they enter wrong answers. These wrong answers can be analyzed with a mixed-methods approach, combining qualitative and quantitative practices, to identify common student errors. From this analysis we built a student error model and deployed it in the automated question/answer system, with a teacher-written explanation for each student error. These explanations are delivered immediately when the model detects the corresponding error. The system is thus the repetitive one, not the teacher, freeing the teacher to focus on students whose rare or complicated errors require more nuance than the automated system can provide.
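The core idea of pairing detected errors with canned explanations can be sketched as a simple lookup from a predicted-output answer to a teacher-written message. This is only a hypothetical illustration; the deployed error model and its explanations are more sophisticated, and all names below are invented for the example.

```python
# Hypothetical sketch of error-specific feedback for one question:
#   a = [1, 2]; b = a; b.append(3); print(a)
# The real system's student error model is richer than this lookup table.

CORRECT = "[1, 2, 3]"

# Each common wrong answer maps to a teacher-written explanation
# targeting the misconception that likely produced it.
ERROR_EXPLANATIONS = {
    "[1, 2]": (
        "It looks like you assumed b = a copies the list. "
        "Assignment only copies the reference, so a and b "
        "name the same list and both see the append."
    ),
}

def feedback(answer: str) -> str:
    """Return immediate feedback for a student's predicted output."""
    if answer == CORRECT:
        return "Correct!"
    # Fall back to generic feedback for errors the model doesn't know.
    return ERROR_EXPLANATIONS.get(
        answer, "Not quite -- try tracing the code line by line.")
```

In this sketch, answers outside the table fall through to a generic prompt, which is where a human teacher would step in with more nuanced help.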
Kristin Stephens-Martinez is a Ph.D. candidate at the University of California, Berkeley, advised by Armando Fox in computer science education. She is a founding member of the ACE Lab, Algorithms and Computing for Education (acelab.berkeley.edu), and the founder of EECS Peers (www.eecs.berkeley.edu/eecs-peers/), a graduate student group dedicated to supporting fellow grad students with grad school life. Her research focuses on using data to find insights that can be turned into learning interventions; one such intervention is currently running in a 900-student class. Kristin has served as a teaching assistant for upper- and lower-division classes with enrollments of up to thousands of students and co-taught an undergraduate seminar on education technology. In addition, she has mentored thirteen undergraduates in research, mentored ten graduate students through the WICSE Little/Big Sisters program (www-inst.eecs.berkeley.edu/~wicse/) and EECS Peers, and served as a discipline cluster leader at the UC Berkeley Conference for First-Time GSIs. Kristin received her B.S. in Computer Science from the University of Maryland, College Park, and her M.S. in Computer Science from the University of California, Berkeley, advised by Vern Paxson in computer networking.