Introducing the “answer score”

July 31, 2013 in Features

If you have viewed one of your courses in the last day or so, you may have noticed a small addition to the main menu. A new score, called the “answer score”, now appears (see the screenshot on the right).

The original score and its associated algorithm remain unchanged, except that the score has been renamed the “reputation score”, which more accurately reflects its purpose. A number of instructors have been using this original score to assign extra credit to their students, as outlined in the following blog post, which also describes the algorithm in detail:

http://www.peerwise-community.org/2013/01/03/scoring-for-fun-and-extra-credit/

However, this “reputation score” was a source of some confusion for students who did not read the description of how it is calculated (this description appears when the mouse hovers over the score area). This is exemplified by the following comment, submitted via the PeerWise feedback form:

This is a great tool. I love it. The only criticism is the slow update on the score. You need to wait 30min+ to see what score you have after a session.

This confusion stems from the fact that an individual student’s “reputation score” only increases when other students participate as well (and indicate that the student’s contributions have been valued). The new “answer score”, on the other hand, updates as soon as a student answers a question, and this immediate feedback may alleviate some of these concerns while providing students with another form of friendly competition. As soon as an answer is selected, the number of points earned (or, in some cases, lost) is displayed, as shown in the screenshots below.

More importantly, the new “answer score” provides another measure of student participation within PeerWise, which instructors may like to use to set targets for students to reach. A detailed description of how this works follows but, put simply, for most students the “answer score” will be close to 10 multiplied by the number of questions they have answered “correctly” (where a correct answer is one that either matches the question author’s suggested answer or is the most popular answer selected by peers). For example, the chart below plots the number of answers submitted that agreed with the author’s answer (note that this is a lower bound on the number of “correct” answers as just defined) against the new “answer score”, for all students in one course who were active in the 24 hours after the new score was released. The line is almost perfectly straight and has a slope close to 10.

A few students fall marginally below the imaginary line with a slope of 10. This is because every “correct” answer submitted earns a maximum of 10 points, whereas a small number of points is lost whenever an “incorrect” answer is submitted. The number of points deducted for an incorrect answer depends on the number of alternative answers associated with the multiple-choice question: for example, questions with 5 options carry a lower penalty than questions with 2 options. If a large number of questions are answered by randomly selecting answers (obviously a behaviour we would want to discourage students from adopting), the “answer score” should generally not increase.
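To make this concrete, here is a minimal sketch of a scoring rule with these properties. The exact PeerWise penalty is not spelled out above, so the sketch assumes a “formula scoring” style deduction of 10/(n − 1) points for an incorrect answer to an n-option question; this assumption gives smaller penalties for questions with more options and makes the expected gain from pure random guessing zero, consistent with the behaviour described above.

```python
# Hypothetical sketch of an "answer score" with the properties described above.
# The 10 / (n - 1) penalty is an assumption, not the documented PeerWise formula;
# it is chosen so that random guessing earns nothing on average.

def answer_score(answers):
    """answers: a list of (is_correct, num_options) pairs, one per question answered."""
    score = 0.0
    for is_correct, num_options in answers:
        if is_correct:
            score += 10                      # a "correct" answer earns up to 10 points
        else:
            score -= 10 / (num_options - 1)  # smaller penalty when there are more options
    return score

# A student who answers 50 questions correctly scores about 500:
print(answer_score([(True, 5)] * 50))        # 500.0

# Random guessing on 5-option questions gains nothing on average:
# (1/5) * 10 - (4/5) * 2.5 = 0
print(answer_score([(True, 5)] * 20 + [(False, 5)] * 80))  # 0.0
```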

So, how can you now begin to make use of this?

From hearing a range of instructors describe how they implement PeerWise in their own classrooms, it appears to be quite common to require students to answer a minimum number of questions per term. To make things a little more interesting, you can now set an “answer score” target instead. This is practically the same thing but a bit more fun (just remember to multiply by 10): if you normally ask your students to answer 50 questions, try setting an “answer score” target of 500!

There is a new student leaderboard for the top “answer scores”, and as an instructor you can download a complete list of answer scores at any time (from the Administration section).

And if in doubt….. guess ‘D’

July 13, 2013 in Talking point

By now, hopefully you’re enjoying a well-deserved summer break (at least, those of you in the Northern Hemisphere…..). In the summer spirit, here’s an interesting question that we were asked recently.

This week, we gave a virtual workshop on PeerWise as part of the Western Conference on Science Education, held in London, Ontario (slides here, if you’re interested). One of the participants asked a seemingly innocent question that got us thinking.

What is the most common choice of correct answer chosen by authors of an MCQ?

While we didn’t know the answer there and then, we realized that we were sitting on a goldmine of data! The PeerWise system now contains over 600,000 student-authored questions. Granted, not all of these are 5-item MCQs, but a substantial fraction are. Could we mine this data to see if there really was a preferred answer choice and, if so, which option it was?

It turns out that there are nearly a quarter of a million 5-item ‘active’ MCQs in the PeerWise system, and that the answer most commonly chosen by question authors is ‘D’. The percentage of ‘D’ correct answers (24.98%) may not look much higher than the 20% that would be expected if the choice of answer were totally random, but the sheer number of questions analyzed here (223,435) makes this a highly significant result (in the sense of ‘statistical significance’, possibly less so in the sense of ‘change-the-world’ significance…..)
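If you’d like to sanity-check that claim, a quick back-of-the-envelope calculation will do it. Treating each question as an independent draw and testing the observed proportion against the 20% expected under random choice (a simple one-proportion z-test, which is our framing here rather than anything from the original analysis) puts the result nearly 59 standard errors above the null:

```python
import math

# One-proportion z-test: is the observed rate of 'D' answers consistent
# with a uniform one-in-five choice? Uses the counts reported above.
n = 223435      # 5-item MCQs analyzed
p_hat = 0.2498  # observed proportion of 'D' correct answers
p0 = 0.20       # proportion expected under purely random choice

se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null hypothesis
z = (p_hat - p0) / se
print(f"z = {z:.1f}")              # z ≈ 58.8, far beyond any conventional threshold
```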

It’s interesting to note that the extremities of the answer range, ‘A’ and ‘E’, are both below the ‘random choice’ value of 20%. There’s a certain logic in thinking that authors may want to conceal the right answer somewhere other than the last or, perhaps even more so, the first answer choice.

You might be wondering whether this is just a peculiarity of student-generated questions, and what about questions with fewer than 5 answer choices? The collective wisdom of the internet is not a great deal of help here. Various sites (Yahoo Answers included) offer commentary that lacks a definitive answer but is not short of ‘definite’ proclamations that it is answer B. Or C. Or, if you’re not sure, pick the longest answer. Clearly, some people would be better served by adopting a different strategy when preparing to answer MCQs, such as actually learning the material. Learning Solutions Magazine claims the most common answer on a 4-item question is C or D. When I got far enough down the Google search results to reach a link to ‘Wild guessing strategies on tests’, I stopped reading. Feel free to supplement this with your own research….

(Just for the record, from our analysis of another 220,000 4-item MCQs, the most popular answer chosen by authors is C, by a nose from B, both of which are well above the 25% expected value if truly random.)