At the beginning of the semester I decided to try something new with homework: students were to do as many problems as they felt necessary to reach the understanding of the material they wanted. If a student notices that a section has essentially only one or two types of problems (as is frequently the case with our textbook), they probably don't need to do many of them. Other students might need more practice to feel comfortable with the material. I gave them the responsibility of deciding for themselves how much to do.
Each week, students turned in a sheet listing which problems they did, how they would grade their own understanding, what major and minor issues they ran into, any questions they had, and a "well-written" solution to one problem. Grading has been pretty easy; I've essentially graded on completion. It's been nice to get questions on the homework (and respond with individual answers), and to see students evaluating their own mistakes. The quality of the "well-written" solutions varied a bit, but I did see many good ones.
Our first exam didn't go so well, even though I had rarely seen students rate their own understanding on the homework much below a B. I think this was partly due to the relaxed homework structure. After the exam I asked the students to tell me (anonymously, if they liked) whether they wanted anything to change to make the class better for them, and I heard nothing back. Our second exam, last week, went a bit better, and I like to think study habits had improved.
Anyway, part of what I wanted to see, doing homework this way, was whether there was a relationship between the number of problems done for homework and exam scores. I've been keeping track of how many problems each student did each week (probably with occasional slight miscounts), so after this last exam I broke out gnumeric. In the name of privacy (thanks to those in my twitter/facebook network for their thoughts here), I shifted each point by a small random perturbation and removed the axis labels and scales, with the following result:
The x-axis is the number of problems done, and the y-axis is the sum of the two exam grades divided by the sum of the two best possible exam grades. It's nice to see that the linear fit has a positive slope, at least. The correlation coefficient for the actual scores was about 0.23, so not great. What I find slightly interesting is that there appear to be three clusters (in the actual scores as well as the perturbed ones): a large collection on the left, another eight in the middle, and three more at the right end who did lots of problems.
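None of this needs a spreadsheet, for what it's worth. Here's a rough Python sketch of the whole pipeline: the privacy jitter plus the two summary numbers. I actually did everything in gnumeric, so the helper names (jitter, pearson_r, fit_line), the example values, and the jitter sizes below are all mine, purely to illustrate the idea.

    import random
    import statistics

    # Made-up example rows: (problems done, combined exam fraction).
    # The real data stays private; these values are illustrative only.
    points = [(12, 0.55), (20, 0.70), (45, 0.62), (60, 0.80), (95, 0.88)]

    def jitter(pts, x_scale=2.0, y_scale=0.02):
        """Nudge each point by a small uniform random amount, so no
        student can be picked out of the plot from an exact coordinate."""
        return [(x + random.uniform(-x_scale, x_scale),
                 y + random.uniform(-y_scale, y_scale))
                for x, y in pts]

    def pearson_r(xs, ys):
        """Pearson correlation coefficient of two equal-length lists."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    def fit_line(xs, ys):
        """Least-squares slope and intercept for y = slope * x + intercept."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    xs, ys = zip(*points)
    print(pearson_r(xs, ys))      # the real data gave roughly 0.23
    print(fit_line(xs, ys))       # the real slope came out (slightly) positive
    anonymized = jitter(points)   # what actually got plotted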
I know it wasn't particularly scientific or rigorous, but that's what I've got. I never told the students I was collecting this data (I'll show it to them in class tomorrow), in the hope that they wouldn't simply inflate how much they reported doing, though that's still a possibility. I almost wish I'd kept track of the scores students gave themselves, but it's too late now.
Speaking of class tomorrow, I'd better go sort out what we're doing...
2 comments:
It really would be a good idea to remove that regression line. Since the shape of the data makes it obvious there is no linear relationship, fitting a straight line isn't going to tell you anything, and is indeed positively misleading.
The data is *not* increasing from left to right, except in the limited sense that you had no complete failures among the small group who completed most of the problems, which could just be down to the sample size.
Given that you've got clusters, stick to looking at the clusters and thinking about what they mean.
Given, for example, that no one is doing all the problems, would it be an idea to suggest some important ones to them, so they don't just spend the limited time they give you working on questions 1, 2, and 3 out of 10? It looks like this would be a useful skill for your right-hand-side students to develop -- they are spending more time working on the problems than their exam results warrant.
@Jon, thanks for your feedback; you've made several good points. I haven't come up with much of an interpretation of the clustering... perhaps it's just an illusion. Perhaps those eight work together?
On your final point, I don't think the textbook we use has interesting enough exercise sets. When I look at them, I see basically one or two questions in each section, just copied with different numbers. I believe I pointed this out to students at the beginning of the semester, but I brought it up again today.
Also, on your suggestion, I removed the line when I showed my students the graph. One student noted the lack of correlation. Of course, I showed them the graph with the line too, for grins.