Saturday, February 27, 2016

Experiments in Teaching, Part 3

So far I've written about experiments in teaching creative writing classes. The other subject domains I handle are computer science and information technology. Call this my confession about the bad things I did while teaching there.

Last year, I finally got to teach the freshman introductory programming class. The language was C, not my first choice, but the department chairs told me that it was already decided and there would be no more argument about it. Well, okay. The last time I taught C was twenty years ago. The thing with C, though, is that the language hasn't changed all that much, so I settled back in quickly.

Here's the thing: throughout the entire semester, I never actually read or manually graded the students' code. Shocking, isn't it?

Instead, I used an automated testing tool (Virtual Programming Lab -- VPL -- on Moodle, in case you're interested) to evaluate my students' work. Every week, I would post a set of programming problems. On the testing tool, I specified different inputs and the expected outputs. The students would then write their code against these inputs. After submitting, the tool would give them immediate feedback on whether their code was correct. This is the essence of test-driven development, and I was using it with first-year students.
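Here's a sketch of how this might have looked, using a made-up example rather than one of my actual problems. Say the week's exercise was to read two integers and print their sum. A passing submission could be as simple as this:

    /* Hypothetical exercise: read two integers from standard input
       and print their sum. The grader runs the compiled program on
       instructor-specified inputs and compares its standard output
       against the expected answer. */
    #include <stdio.h>

    int main(void)
    {
        int a, b;

        if (scanf("%d %d", &a, &b) != 2)
            return 1;          /* bail out on malformed input */

        printf("%d\n", a + b); /* must match the expected output exactly */
        return 0;
    }

On my side, I would register pairs like input "3 4" with expected output "7". The tool compiles each submission, feeds it the inputs, and marks it correct only when the printed output matches.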

It's not strictly true that I never read students' code. I did, but on an individual basis. As they were working through the problems, some of them would run into a wall. I would sit with them and give them pointers on where their code went wrong. But I only did this for those who were stuck and for those whose shoulders I was randomly peering over.

The main problem with this freewheeling approach is that it lets the students develop all sorts of bad habits. They make up their own variable names, they write inefficient code, and they don't write comments. Just like professional programmers.
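To see why, here's another made-up submission, this time for the summing example above, that the checker would happily accept:

    /* Deliberately bad style: cryptic names, everything crammed onto
       one line, no comments explaining intent. It still prints the
       right answer, so the automated tests pass it without complaint. */
    #include <stdio.h>
    int main(void){int q,z;scanf("%d %d",&q,&z);printf("%d\n",q+z);return 0;}

Correct output, zero readability. The tool can't tell the difference; only a human reader can.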

The upside of this approach is that my students got through thirty programming problems over the semester, an average of 2.5 problems per week. I taught three classes, two of which had the full complement of forty students; all told, I had exactly one hundred students. At 2.5 problems a week per student, that comes to 250 submissions every week. Imagine checking all of those by hand. That would be just insane.

Now that they're under different teachers this second semester, my former students tell me they miss my way of teaching. The benefit of automated testing was the immediate feedback, which allowed them to adjust and rethink their programs. Under their present teachers, it can take as much as a month before they get their results.

Another thing they miss: all my programming exercises, quizzes, and exams are open notes. Really, what's the point of putting away all the notes and references? In real-world programming, we're always looking up examples and documentation anyway. It would be hypocritical to ask students to do otherwise.