At the beginning of the semester I decided to try something new with homework: students were to do as many problems as they thought necessary to reach the understanding of the material they wanted. If a student notices that a section has essentially only one or two types of problems (as is frequently the case with our textbook), they probably don't need to do many. Other students might need more practice to feel comfortable with the material. I gave them the responsibility of deciding for themselves how much to do.
Each week, students turned in a sheet saying which problems they did, how they would grade their own understanding, what major and minor issues they had, any questions they had, and a "well-written" solution to one problem. Grading has been pretty easy; I've basically just graded on completion. It's been nice to get questions on homework (and respond with individual answers), and to see students evaluating their own mistakes. The quality of the "well-written" solutions varied a bit, but I did see many good ones.
Our first exam really didn't go so well, even though I hadn't seen many students rate their own understanding on homework much below a B. I think this was partly due to the relaxed structure of the homework. After the exam I asked the students to tell me (anonymous feedback was fine) if they wanted anything changed to make the class better for them, and I never heard anything. Our second exam, last week, went a bit better, and I like to think study habits had improved.
Anyway, part of what I wanted to see, doing homework this way, was whether there was a relationship between the number of problems done for homework and exam scores. I've been keeping track of how many problems each student did each week (probably miscounting slightly on occasion), so after this last exam I broke out gnumeric. In the name of privacy (thanks to those in my twitter/facebook network for their thoughts here), I shifted each point by a small random perturbation and removed the axis labels and scales, with the following result:
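For anyone curious, the jittering step is simple enough to sketch. This is a rough Python version of what I did by hand in gnumeric — the data points below are made up, and the `scale` of the perturbation is just an assumption, not what I actually used:

```python
import random

def jitter(points, scale=0.5, seed=None):
    """Return a copy of (x, y) points with a small uniform random
    perturbation added to each coordinate, so no plotted point
    matches any student's actual numbers exactly."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-scale, scale),
             y + rng.uniform(-scale, scale))
            for x, y in points]

# (problems done, exam score) pairs -- invented for illustration
data = [(40, 72), (55, 81), (20, 65)]
anonymized = jitter(data, scale=0.5, seed=1)
```

Of course, jittering plus stripped axes only obscures the exact values; with a class this small, someone who knows roughly where they fall could still guess which point is theirs.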
I know it wasn't particularly scientific, or rigorous, but that's what I've got. I never told the students I was collecting this data (I'll show this to them in class tomorrow), in hopes that they wouldn't be just making up how much they did, but it's still a possibility. I almost wish I'd kept track of the scores students gave themselves, but it's too late now.
Speaking of class tomorrow, I'd better go sort out what we're doing...