
Does PeerWise aid pupil progress in the secondary school?

June 9, 2015 in Talking point

As part of my PGCE in UK secondary schools, I had to produce some research into educational methods. Having first seen PeerWise mentioned in the Royal Society of Chemistry’s Education in Chemistry (EIC) magazine, I felt it would be interesting to investigate its usefulness in secondary education. Below I have reproduced the research report and findings, which I hope will be of interest to many, and I look forward to expanding on this research in the near future.

 

1.1 The Research Premise

 

Does self-generated content enhance pupil progress by giving pupils a greater understanding of the subject content? This question is the focus of this research, but before detailing how it is to be answered, it needs dissecting to establish exactly what is required to provide a suitable answer. This dissection will occur through the following questions:

1. What is self-generated content?

2. Why is self-generated content of any use in education?

3. What methods will be used to enable pupils to self-generate content?

4. How can progress be measured against the use (or lack thereof) of self-generated content?

The following sections answer each of these questions and end with a summary of the aims and objectives of this research.

1.1.1 What is self-generated content?

 

One of the most widely used models for cognitive development in education is Bloom’s taxonomy1,2. In summary, objectives for learning are developed such that learners progress from simple factual recall (low-order thinking/cognitive skills) to application and evaluation (high-order thinking/metacognitive skills). As metacognitive skills are more highly valued, efforts to push learners towards fulfilling these learning objectives are increased2-4.

Self-generated content has been used in education for some time. One simply has to consider assignments such as essays and lab reports to recognise that these had, by definition, to be produced by the student as part of their studies5. Much research has been performed on self-regulated learning, of which self-generated content is an integral part, much of it focused on Bandura’s theory6. This theory suggests that self-regulated learning is a consequence of personal, environmental and behavioural influences7.

The value of self-generated content varies from subject to subject. For example, artistic work tends to place more value on self-generated content than mathematical work5. This tendency feeds into the notion of so-called ‘academic’ subjects, where textbooks and the like are more highly valued and students are expected to reproduce information; in these subjects, self-generated content is used mainly as a means to assess the learning of a particular student5.

1.1.2 Why is self-generated content of any use?

 

So if self-generated content is considered to be mostly of use as a form of assessment, why bother with it as a mode of imparting knowledge and information? In short: previous research on student-generated content has shown a significant correlation between summative assessment scores and levels of participation in generating self-assessment content3,8-10.

Clearly the ability to produce one’s own resources based on one’s level of understanding reinforces learning and allows greater levels of metacognition to occur. If such content is to be used by others, then the process of developing it requires more metacognitive work, as the content needs to be accessible to others, not just its author. Additionally, this leads to greater engagement and achievement in the overall learning process5,10.

1.1.3 Methods for pupils to self-generate content

 

Consequently there is a challenge for educators to develop metacognitive skills through application. A range of methods is available, but this research will discuss two in detail. The first involves multiple choice questions (MCQs). While answering these questions can be relatively easy – tending to rely on recall of factual information – writing MCQs requires a much broader skill set. A good understanding of the subject content is a prerequisite: good MCQs have an optimal number of answer options (between 3 and 5)11, and the incorrect answers need to be plausible – possibly reflecting common misconceptions or mistakes3. Writing MCQs is therefore more time-consuming than answering them. When learners produce their own multiple choice questions, they are challenged to use higher cognitive levels than would be required simply to answer them.

Science is a highly conceptual subject and some concepts can be more easily explained using analogies. In the context of a lesson, this involves explaining a new concept by describing it in a more familiar context12. These comparisons allow new knowledge to be understood, or existing understanding to be altered13,14. Indeed, within the National Curriculum there are requirements for pupils to learn about several different models – the heliocentric model of the Solar System and the Bohr model of the atom, to give two examples. It has therefore been argued that analogies are akin to models, and therefore inseparable from understanding scientific concepts15.

However, the issues surrounding models – recognising that they provide a simplified means of understanding ‘real-life’ systems – are not necessarily appreciated by pupils. One of the main issues identified is the creation of misconceptions14,16, which is ironically what the model is attempting to avoid. It is therefore crucial that any attempt to use analogies is presented in a manner that makes explicitly clear that they do not necessarily describe the ‘full picture’.

Another issue with analogies concerns those categorised as Model I analogies17. These require low levels of input from pupils and offer little monitoring of pupil understanding; they usually arise when a teacher simply provides a model to the pupils. In Oliva et al.’s17 teaching model constructs, matching analogies to descriptions of ‘real-life’ processes corresponds to Model IIA. The subsequent level, in which pupils construct their own analogies (for example, for the effects of different parameters on reaction rate), matches Model IIB. The final approach – requiring high input from pupils and high levels of monitoring, and focusing on pupils sharing their analogies and creating discussion – results in Model III analogies; these demand the most cognitive effort on the part of the pupil but produce the greatest development of understanding of the concepts being taught.

1.1.4 How to measure progress?

 

By far the easiest method of determining pupil progress is through end-of-topic or end-of-year tests. Comparisons can be made between differing groups, such as those given the opportunity to self-generate content and those not given such an opportunity; this method is similar to that used by other research groups2,9,18-20. Consequently, if a group of pupils were split such that one half could use PeerWise, and thus generate their own learning repository, while the other half could not, then comparisons could be made to determine how much progress was made by each group over a period of time. Ideally, such a comparison would occur over a period of 2-3 years, but in the context of this research, comparisons will be attempted over one topic (covering approximately 6-7 weeks).

1.2 Aims and objectives

 

This research will aim to address the following question:

Does the ability to self-generate content on PeerWise improve pupil progress?

This will be answered using the following criteria:

1. What level of participation is achieved when pupils are given the opportunity to generate their own content?

2. How effective were pupils at generating content over an entire topic?

3. What impact did self-generated content have on pupils’ attainment?

4. Did pupils believe that the option to produce their own learning resources was beneficial to them?

 

2 Research methodology

2.1 The method

This research was performed at a co-educational independent boarding school and focused on the progress of pupils in one topic within the school’s year 9 specification. The topic chosen was Oxygen, Oxides, Hydrogen and Water – a unit within the Edexcel IGCSE specification.

The year group is split into eight sets arranged as four pairs – sets 1 and 2 being the ‘top’ sets and sets 7 and 8 containing the weakest pupils. Two sets – sets 4 and 5 – were selected to be given the task of using PeerWise to aid their studies. A specific course was set up on the PeerWise software for these pupils to use.

The first lesson of the topic was used to introduce PeerWise to the pupils, including a short demonstration of how to produce and answer MCQs. Upon completion of the demonstration, pupils were given the task of logging in and completing the following homework: (1) write three MCQs, (2) answer one MCQ and (3) comment on one MCQ. Upon completion of this task, pupils were informed that they would be free to use the software as much as they liked. Pupils were also informed that their activity on PeerWise would be monitored – unsuitable questions or comments would be removed and sanctions applied accordingly. Reference to previous research was also provided, stating that greater activity on PeerWise coincided with higher attainment, so the pupils’ desire to be successful was used as motivation to increase their activity on PeerWise. Pupils were also informed that this would be a ‘use it or lose it’ process: the more activity observed on PeerWise, the more likely it would be opened up to the rest of the year’s cohort for their revision and use throughout their IGCSE studies, whereas minimal activity, or failure to use PeerWise, would result in the courses being closed down and the opportunity to use it denied to their peers. This too was aimed at ensuring motivation towards using PeerWise.

The data collected would be both quantitative and qualitative. The quantitative data would comprise the number of questions and answers uploaded per day, as well as comparisons between the mean end-of-topic test results for each set in the cohort. The results would be used to determine the effectiveness of pupil-generated content in developing pupil understanding of the subject content and, thus, their overall progress.

The qualitative data would take the form of a questionnaire given to the pupils in sets 4 and 5 after completing the test, asking for feedback on their experience with PeerWise and focusing on its ease of use and whether they would continue to use it throughout their studies. This would be used to determine whether pupils felt they had benefitted from its use and what improvements they felt could be made to aid their use of PeerWise.

Both the test results and the questionnaire answers would be anonymised. The results for the end-of-topic test would be averaged, with no names attributed to any particular score, while the questionnaire would be completed without pupils adding their names. This, together with an explanation that only the author would see the questionnaire answers, was intended to help the pupils be honest about their experience of using PeerWise.
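The set-by-set comparison described above reduces to computing a mean and standard deviation of end-of-topic marks per set, which is how the error bars in Figure 3 were obtained. Below is a minimal sketch of that calculation in Python; the mark lists are hypothetical placeholders rather than the real anonymised class data.

```python
from statistics import mean, stdev

# Hypothetical anonymised end-of-topic marks (percentages) for two sets;
# the real data were collected per set and never linked to pupil names.
marks_by_set = {
    "Set 4 (PeerWise)": [62, 71, 58, 66, 74, 69, 60, 65],
    "Set 5 (PeerWise)": [55, 63, 59, 61, 68, 57, 64, 60],
}

# Report the mean mark plus/minus one standard deviation for each set,
# mirroring the error bars used in Figure 3.
for set_name, marks in marks_by_set.items():
    print(f"{set_name}: {mean(marks):.1f} ± {stdev(marks):.1f} (n={len(marks)})")
```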

 

3 Results and discussion

3.1 Use of PeerWise

 

PeerWise was introduced to the pupils in the two sets during their first lesson on Oxygen, Oxides, Hydrogen and Water. They were given the task of producing three MCQs for their peers, as well as answering and commenting on a minimum of one other question each. To aid the pupils, two questions had been uploaded for them in advance. Figure 1 below shows the number of questions uploaded each day during the topic.

 

Figure 1 Number of questions uploaded per day during the IGCSE topic. Note that the topic took longer than the two-week period shown in this plot. After 24th March, no further questions were uploaded. 30 questions were uploaded in total, against a minimum expectation of 111.

 

A total of 30 questions were uploaded over the entire topic, which was considerably lower than expected: the two sets combined had a total of 37 pupils, so 111 questions would have been expected if all pupils had completed the minimum requirements. The reasons for this much lower number of questions emerged from feedback gained after the research had been completed, once the opportunity to use PeerWise for other topics had been offered to the pupils: namely, that writing questions was ‘hard’ and pupils would prefer the teacher to write the questions. It was explained to them that this would defeat the point of PeerWise, which is based on pupils providing assistance to one another by producing MCQs on areas they are confident with and answering MCQs on areas they feel less confident with.

In looking through the questions, one had been deleted shortly after uploading because the pupil recognised it as unacceptable. This supports reports in the literature that teacher/instructor input can be minimal because pupils/students self-regulate their activity on PeerWise. In reviewing the questions, the vast majority (25 out of 30) focused on air composition and acid rain – two areas covered in the very first few lessons of the topic. The remaining five were spread over tests for gases and identifying the elements present in specific compounds, e.g. water. It was anticipated that activity would decrease as more pupils completed their minimum requirements, although it was hoped that pupils would see the benefits of using the program and continue to use it throughout the topic, resulting in questions covering every aspect of the topic.

The number of answers and comments uploaded, however, showed a marked difference (Figure 2).

 

Figure 2 Number of answers uploaded per day during the IGCSE topic. 214 answers were uploaded over the entire topic. The green-ringed column comprises 11 answers given by one pupil during the Easter holidays, when usually no homework is set.

 

The 30 questions were answered a total of 214 times over the course of the topic, with the majority answered shortly after the MCQs were uploaded. Of particular interest are the green-ringed results on 11th April: these answers were all given by one pupil during the Easter holidays, when no homework had been set. Thus, for one pupil at least, there was recognition that the program is useful for revision and can be accessed at any time, even outside normal school time. The comments uploaded with the answers were typically along the lines of ‘this is a useful question for revision’. Several different pupils gave this comment, so the value of writing and answering MCQs as an aid to revision was clearly recognised. Within individual questions, the number of answers and the feedback given was useful in assessing the learning of individual pupils: more answers to particular questions coincided with areas of lower understanding, and higher-rated questions were generally very well written. The consequences of these questions are discussed below.

3.2 Comparisons of End-of-Topic Results

 

Upon completion of the topic, the pupils were given a week to revise for their end-of-topic test. The mean mark for this test would be compared, on a set-by-set basis, with the remainder of the Year 9 cohort. It was hoped that the process of self-generating content in the form of MCQs would have a positive effect on the overall mark achieved by the pupils in the two sets that had been given the opportunity to use PeerWise. The average marks for each set are shown below in Figure 3.

 

Figure 3 Comparative average scores for the end-of-topic test for Oxygen, Oxides, Hydrogen and Water. The results for set 6 had not been received at the time of submission. The green circle highlights the average scores for the pupils in the sets who had access to PeerWise. All scores are ± 1 standard deviation.

 

From the results shown in Figure 3, the average results were not greatly affected by the use of PeerWise. As an argument for the use of PeerWise, this does not provide much supporting evidence, although it does demonstrate that the overall setting by ability in this school is effective. Despite this, the sections of the test in which the PeerWise-using sets scored highest were those on the composition of the air and on acid rain – the two main areas on which pupils had produced their MCQs. As a tool to aid pupil understanding, then, self-generated content is beneficial. In areas where MCQs were not produced, or where the test did not address the areas covered by the MCQs produced, pupils were relying on their ‘normal’ revision techniques.

This could pose an issue – why give pupils the task of generating their own content if it is not assessed in a test or exam? The answer derives from the holistic understanding of the subject in question. Science can be a very difficult subject – some topics can be abstract with little or no obvious link to previous study, whereas other topics can quickly become tedious for pupils because they cover the same ground as other topics, albeit in a different context. An example of the latter comes from the topic in which this research was performed: pupils learnt about testing for gases, and in the subsequent topic (Salts and Chemical Tests) many of these tests were covered again. In particular, the chemical reaction between calcium carbonate and acid was covered three times in three different topics over the Year 9 scheme of work, and on each occasion was addressed in a different manner. By self-generating content, pupils need to develop their understanding of the subject content to a level that is clear and concise enough for their peers, which enables their peers to develop their understanding while also demonstrating their own.

3.3 Results from Questionnaire

 

Having analysed the results from the end-of-topic test, the opinions of the pupils were considered. How effective did they find PeerWise? Would they use it of their own accord? How difficult did they find it to use? All of these questions were addressed in some form using the questionnaire in Appendix A. Pupils were asked to grade several aspects of PeerWise, and their experience of using it, on a Likert scale. The results are summarised below in Figure 4.

 

Figure 4 Percentage scores for pupil responses to the questionnaire (see Appendix A).

 

From the results of the questionnaire, it can be seen that there was a generally positive response to the use of PeerWise, with the majority of pupils describing the program as easy to use and useful for revision, and saying they would like PeerWise to be available throughout their studies. A sixth question was also asked (but is not included in Figure 4) about how pupils would prefer the use of PeerWise to be regulated. Pupils were given the options of it being set purely as homework, being used as and when the pupils wanted, or having its use count towards their end of topic/year mark. The summary can be seen below in Figure 5.

 

Figure 5 Percentage of pupils preferring PeerWise to be (a) given purely as homework, (b) used as and when needed and (c) counted towards end-of-topic or end-of-year exam marks.

 

From Figure 5 it can clearly be seen that the majority of pupils would not want activity on PeerWise to be included in their end-of-topic or end-of-year exam marks. This idea was included because, before the recent reforms of the UK education system, pupils would have had to complete some form of coursework which drew on several areas of the subject in order to be completed effectively. Additionally, as PeerWise can be used to link topics together, having it contribute to pupils’ end-of-year marks would, in a sense, push them towards developing this holistic approach to the subject and developing their understanding of each topic. In later years, ‘gaps’ in pupil knowledge would be filled which, when compared with the course list on PeerWise, would enable them to make more links between topics, and thus progress and develop their knowledge and understanding further.

Two pupils noted that they would prefer both options (a) and (b) – that PeerWise be set as homework and also used as and when needed – but the majority opted for one or the other. Among the additional comments provided, several pupils stated that they would prefer PeerWise to have several questions uploaded by their teacher, or even to be set as revision homework – a method which would enable the teacher to gauge more accurately how much revision is being performed by individual pupils. This last point was surprising and had not been considered until after the questionnaires had been collected and read.

3.4 Providing pupils further opportunities

 

Upon collecting the questionnaires and reviewing and analysing the data, the two sets were given the chance to vote on whether, for the next topic and for all other topics covered in Year 9, they would like additional courses on PeerWise to be provided for them to use of their own accord. The response was heavily in favour, and subsequent courses were set up. Surprisingly, within six hours of access being given for the next topic, four questions had already been uploaded, answered and commented on. Use of PeerWise is now being monitored on an occasional basis to ensure no unsuitable activity takes place. The questions uploaded have been well written and will, if this use continues, result in a strong repository of questions for pupils to use throughout their studies.

 

4 Conclusions and Future Work

4.1 Conclusions

 

37 pupils were given the opportunity to enhance their learning through the use of an MCQ forum, PeerWise. Their activity was monitored and comparisons were made between their end-of-topic test results and those of their peers in the rest of the cohort. Uptake of the activity, in terms of writing MCQs, was lower than expected and heavily weighted towards content covered in the first few lessons. Pupils were observed to prefer answering questions, describing the writing of MCQs as difficult and wanting their teacher to produce them instead.

The results from the end-of-topic test did not show a marked difference in overall pupil attainment; rather, the results were as expected for each set’s ability. For the pupils using PeerWise, the majority of their marks tied very closely to the content they had produced on PeerWise. This was repeated in all 37 pupils’ tests and demonstrates that self-generated content can be used to reinforce learning in lessons.

The overall lack of questions arose mainly from the fact that activity on PeerWise was described as voluntary. As a result, a good number of pupils opted not to participate in writing questions, preferring instead to answer questions produced by their peers. Feedback from the questionnaire showed that pupils recognised the usefulness of PeerWise, especially for revision purposes. One pupil in particular remarked that PeerWise could be used to assess whether or not pupils are actually revising for their tests, be they end-of-topic tests or end-of-year exams. A small minority of pupils said they would prefer PeerWise to contribute to their results, but overall the ability to use PeerWise on an ad hoc basis was more important.

Comparisons between the results of this research and those of research at the tertiary level demonstrate a clear difference. At the tertiary level, students have chosen to study the subject further and so have a vested interest in high attainment, whereas at the secondary level, especially at Key Stage 4, most pupils do not have a choice in certain subjects, and thus their interest may not be as high. Consequently, they may not feel as strong a vested interest in high attainment.

This last statement obviously comes with a caveat: the pupils selected are a small group compared to the rest of their year group and, indeed, the whole school population. The generalisation may be unfounded and requires further research to determine the true extent of pupils’ vested interests.

4.2 Future work

 

Further research could therefore follow the routes detailed below (or a combination of them):

1. Expand the pupil numbers to include an entire year group – this will enable more informed discussion about the effects of PeerWise on pupil progress as a result of having a larger sample group and being able to observe more closely the effects of the pupils’ personal vested interests in their education.

2. Apply the use of PeerWise across the entire content of the academic year – this would enable greater assessment for learning (AfL), by monitoring which topics generate more answers (an indication of where pupils have difficulties), and would ensure pupils are able to use it continuously throughout their studies rather than it being introduced at a comparatively odd time of the year (as was the case with this research).

3. Perform comparative studies between KS4 and KS5 students – this would enable a detailed review of pupils’ vested interests since, at KS4, pupils do not have a choice about studying the sciences whereas at KS5 they do. It would therefore be of interest to observe whether the decision to study the subject further influences students’ motivation to self-generate content.

Undoubtedly, the short timescale of this research will have influenced the results obtained. Subsequent research would therefore need to be extended over a period of at least three years in order to be able to generate data comparable to that from other tertiary level research groups.

 

4.3 References

1. Krathwohl DR. A revision of Bloom’s taxonomy: An overview. Theory into Practice. 2002;41(4):212-218.

2. Bates SP, Galloway RK, Riise J, Homer D. Assessing the quality of a student-generated question repository. Physical Review Special Topics – Physics Education Research. 2014;10(2):020105.

3. Galloway KW, Burns S. Doing it for themselves: Students creating a high quality peer-learning environment. Chem Educ Res Pract. 2015;16(1):82-92.

4. Draper SW. Catalytic assessment: Understanding how MCQs and EVS can foster deep learning. British Journal of Educational Technology. 2009;40(2):285-293.

5. Sener J. In search of student-generated content in online education. E-edukacja na świecie. 2007;21(4).

6. Schraw G, Crippen KJ, Hartley K. Promoting self-regulation in science education: Metacognition as part of a broader perspective on learning. Research in Science Education. 2006;36(1-2):111-139.

7. Bandura A. Self-efficacy: The exercise of control. New York: Freeman; 1997.

8. Frase L, Schwartz B. Effect of question production and answering on prose recall. Journal of Educational Psychology. 1975;67(5):628.

9. Denner PR, Rickards JP. A developmental comparison of the effects of provided and generated questions on text recall. Contemporary Educational Psychology. 1987;12(2):135.

10. Sanchez-Elez M, Pardines I, Garcia P, et al. Enhancing students’ learning process through self-generated tests. J Sci Educ Technol. 2014;23(1):15-25.

11. Vyas R, Supe A. Multiple choice questions: A literature review on the optimal number of options. Natl Med J India. 2008;21(3):130-133.

12. Gershon M. Analogies. In: How to use differentiation in the classroom: The complete guide. 2013:185.

13. Mozzer NB, Justi R. Students’ pre- and post-teaching analogical reasoning when they draw their analogies. Int J Sci Educ. 2012;34(3):429-458.

14. Haglund J. Collaborative and self-generated analogies in science education. Studies in Science Education. 2013;49(1):35-68.

15. Coll RK, France B, Taylor I. The role of models and analogies in science education: Implications from research. International Journal of Science Education. 2005;27(2):183-198.

16. Gilbert J, Osborne R. The use of models in science and science teaching. European Journal of Science Education. 1980;2(1):1-11.

17. Oliva JM, Azcárate P, Navarrete A. Teaching models in the use of analogies as a resource in the science classroom. International Journal of Science Education. 2007;29(1):45.

18. Denny P, Luxton-Reilly A, Hamer J. Student use of the PeerWise system. New York: Association for Computing Machinery; 2008:77.

19. Hardy J, Bates SP, Casey MM, et al. Student-generated content: Enhancing learning through sharing multiple-choice questions. International Journal of Science Education. 2014;36(13):2180-2194.

20. Bates SP, Galloway RK, McBride KL. Student-generated content: Using PeerWise to enhance engagement and outcomes in introductory physics courses. 2011 Physics Education Research Conference. 2012;1413:123-126.

 

Do badges work?

November 19, 2013 in Publications, Talking point

Badges everywhere

Have you ever wondered whether some of the “game-like” rewards that are becoming more and more common online actually have a measurable impact on user participation?  Does the promise of earning a “Hotel specialist” badge on Trip Advisor motivate travellers to write more reviews?  On Stack Overflow, a popular question and answer forum for programmers, do people answer more questions than they otherwise would so that they can increase their reputation score and earn a higher spot on the global leaderboard?

Of course, if you play games these kinds of rewards are nothing new – performance in many games is measured by points, leaderboards have been around since the earliest arcade games, and the Xbox Live platform has been rewarding players with achievements for nearly a decade.  Now, in an attempt to motivate users across a broad range of applications, we see these game-like elements appearing more frequently.  But do they work?

Badges in PeerWise

PeerWise includes several game-like elements (points have been discussed on this blog before), including badges (or “virtual achievements”).  For example, regular practice is rewarded with the “Obsessed” badge, which is earned for returning to PeerWise on 10 consecutive days and correctly answering a handful of questions each time.

Other badges include the “Insight” badge, for writing at least 2 comments that receive an agreement, the “Helper” badge for improving the explanation of an existing question, and the “Good question author” badge, awarded for authoring a question that receives at least 5 “excellent” ratings from other students.  A complete list of the available badges can be seen by clicking the “View my badges” link on the Main menu.

As you would expect, some badges are much harder to earn than others.  Almost every student earns the “Question answerer” badge – awarded when they answer their very first question.  The following chart shows the percentage of students with the “Question answerer” badge that earn each of the other available badges.  Only about 1 in 200 students earn the “Obsessed” badge.

The badges in PeerWise can be classified according to the roles that they play (there is a nice article by Antin and Churchill that explores this further):

  • “Goal setting”: helping the student set personal targets to achieve
  • “Instruction”: helping the student discover features of the application
  • “Reputation”: awarded when the quality of the student’s contributions are endorsed by others

It is interesting to note that most of the badges awarded for answering questions are of the “Goal setting” variety, whereas those awarded for authoring questions are mainly in the “Reputation” category.

And now back to our original question – do these badges have any influence over the way that students use PeerWise?  When considering this question, we must keep in mind that observed effects may not necessarily be positive ones.  One of the criticisms levelled at extrinsic rewards, such as game-like elements, is that they have the potential to undermine intrinsic motivation in a task, which is clearly of concern in an educational context.  However, this is a somewhat contentious claim, and very recent work by Mekler et al. showed no negative impact on intrinsic motivation in an experiment measuring the effect of using game elements to reward user participation in an online image-tagging activity (although it must be noted that this was a short-term study and motivation was self-reported).

Anecdotal support

There is certainly some anecdotal evidence that the PeerWise badges are being noticed by students in a positive way.  Examples of this include public tweets:

as well as responses to a survey conducted in 2012 at the University of Auckland:

“I didn’t think I was “badge” type of person, but I did enjoy getting badges (I was the first one to get the obsessed badge – yay!). It did help motivate me to do extra and in doing so, I believe I have learnt more effectively.”

“The badges did make me feel as if I was achieving something pretty important, and helped keep Peerwise interesting.”

Another example was nicely illustrated in a talk given by James Gaynor and Gita Sedhi from the University of Liverpool in June this year, in which they presented their experiences using PeerWise at a local teaching and learning conference.  On one of their slides, they displayed a summary of student responses to the question: “Was there any particular aspect of PeerWise you liked?”

Across the two courses examined, “badges” and “rewards” emerged quite strongly (points, rewards, achievements and rankings were coded as “Other rewards”).

However, it should be noted that not all students are so positive about the badges.  Other responses to the previously mentioned survey indicate that the effect on some students is fleeting:

“well, it kinda increase my motivation a bit at the beginning. but then i get bored already”

“They don’t really affect my motivation now, but they did when I first started.”

and others pay no attention to the badges at all:

“I never cared about the badges -> simply because they dont mean anything -> i.e. does not contribute to our grade”

“They did nothing for my motivation.”

Controlled experiment

To understand the impact of the badges more clearly, we conducted a randomised, controlled experiment in a very large class (n > 1000).  All students in the class had identical participation requirements (author 1 question and answer 20 questions), however only half of the students were able to see the badges in the interface and earn them for their participation.  This group was referred to as the “badges on” group, whereas the control group who were not able to see the badges were referred to as the “badges off” group.  The experiment ran over a period of 4 weeks in March 2012, and the class generated approximately 2600 questions and submitted almost 100,000 answers.

Students in the “badges on” group, who were able to earn the badges, submitted 22% more answers than students in the control group.  The chart below plots the day to day differences over the course of the study – on all but one day, the “badges on” students submitted more answers than the “badges off” students.

The table below summarises the number of questions authored, answers submitted, and distinct days of activity for students in each group.

 

The presence of the badges in the interface had a significant positive effect on the number of questions answered and the number of distinct days that students were active with PeerWise.  Interestingly, although there was no effect on the number of questions authored by students, no negative effects were observed – for example, the increase in the number of answers submitted did not lead to a reduction in the accuracy of those answers.
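For completeness, the sketch below illustrates one way such a between-group difference in answer counts could be checked for significance, using a simple one-sided permutation test. The per-student counts are invented for illustration only (the actual study had roughly 500 students in each group, and the full paper describes the analysis actually used).

```python
import random
from statistics import mean

# Hypothetical answers-submitted counts per student in each condition
# (invented for illustration; not the study data).
badges_on = [120, 85, 97, 150, 60, 110, 95, 130, 75, 102]
badges_off = [90, 70, 88, 115, 55, 95, 80, 100, 65, 85]

observed = mean(badges_on) - mean(badges_off)

# Permutation test: shuffle the group labels many times and count how often
# a difference at least as large as the observed one arises by chance.
pooled = badges_on + badges_off
n_on = len(badges_on)
random.seed(0)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n_on]) - mean(pooled[n_on:])
    if diff >= observed:
        count += 1

print(f"observed difference: {observed:.1f} answers per student")
print(f"one-sided permutation p-value: {count / trials:.4f}")
```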

If you would like to see additional data from this experiment, as well as a more complete discussion and acknowledgment of the threats to the validity of the results, the full paper is available online (and on the PeerWise Community Resources page).  Of course, no experiment is perfect, and this work probably raises more questions than it answers, but it does provide some empirical evidence that the badges in the PeerWise environment do cause a change in the way that students engage with the activity.  Perhaps we could see similar effects in other, similar, educational tools?

TLDR: And for those who prefer movies to reading, the conference in which this work was published required a brief accompanying video.  If you are really keen, see if you can last the full 40 seconds!

 

PW-C is one year old

October 10, 2013 in Announcements, Talking point

It was a year ago today that we launched the PeerWise-Community site: we are officially one! :)

We now have over 300 users registered on the site, and have had nearly 5,000 unique visitors request 26,000 individual pages.

In terms of location of visitors to the site, we’ve welcomed viewers from all inhabited continents, with the UK topping the country list with nearly 1/3 of all visits. Not bad for an (allegedly) small and irrelevant island (not my words, you understand….)  The USA, Canada, New Zealand and Australia follow for the rest of the top 5 visitor countries. Approximately half of all traffic over the last year was from returning users, who had previously visited the site.

There’s a huge amount of data that Google Analytics gives you, and drilling into it can consume vast amounts of time! But one of the more surprising things to me was that the vast majority of all accesses (over 80%) still come from desktop / laptop devices rather than tablet or mobile. And also, after the home page and the registration page, it turns out that the publications page was the one most frequently consulted.

If any members have suggestions for things they would like to see in year 2 on the site, please add comments below!

 

And if in doubt….. guess ‘D’

July 13, 2013 in Talking point

By now, hopefully you’re enjoying a well-deserved summer break (at least those in the Northern Hemisphere…..) In the summer spirit, here’s an interesting question that we were asked this week.

This week, we gave a virtual workshop on PeerWise, as part of the Western Conference on Science Education, held in London, Ontario. (Slides here, if you’re interested) One of the participants asked a seemingly innocent question that got us thinking.

What is the most common choice of correct answer chosen by authors of an MCQ?

Whilst not knowing the answer there and then, we realized that we were sitting on a goldmine of data! The PeerWise system now contains over 600,000 student-authored questions. Granted, not all of these are 5 item MCQs, but a substantial fraction are. Could we mine this data to see if there really was a preferred answer choice and, if so, which option was it?

It turns out that there are nearly a quarter of a million 5-item ‘active’ MCQs in the PeerWise system, and that the most commonly chosen answer by authors of questions is ‘D’. The percentage of ‘D’ correct answers (24.98%) may not look all that much more than the 20% that might be expected if the choice of answers was totally random, but the sheer number of questions analyzed here (223,435) makes this a highly significant result (in the sense of ‘statistical significance’, possibly less so in the sense of ‘change-the-world-significant’…..)
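As a back-of-the-envelope illustration of why a 24.98% share among 223,435 questions sits so far from chance, the sketch below runs a one-proportion z-test against the 20% expected under random answer placement; the count of ‘D’ answers is reconstructed from the percentage quoted above, so treat it as approximate.

```python
import math

n = 223_435                  # five-option MCQs analysed
p0 = 0.20                    # expected share of 'D' if answer position were random
d_count = round(0.2498 * n)  # approximate number of questions with 'D' correct
p_hat = d_count / n

# One-proportion z-test against p0 under the null hypothesis of random placement.
se = math.sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / se
p_value = math.erfc(z / math.sqrt(2)) / 2  # one-sided; underflows to 0.0 for z this large

print(f"z = {z:.1f}, one-sided p-value ≈ {p_value:.3g}")
```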

It’s interesting to note that the extremities of the answer range are both below the ‘random choice’ value of 20%. There’s a certain logic in thinking that authors may want to conceal the right answer somewhere other than the last, or perhaps even more so, the first answer choice.

Is this just a peculiarity of student-generated questions, and what about questions with fewer than 5 answer choices, you might be thinking? The collective wisdom of the internet is not a great deal of help here. Various sites (Yahoo Answers included) include commentary that lacks a definitive answer, but is not short of ‘definite’ proclamations that it is answer B. Or C. Or, if you’re not sure, pick the longest answer. Clearly, some people would be better served by adopting a different strategy when preparing to answer MCQs, such as actually learning the material. Learning Solutions Magazine claims the most common answer on a 4-item question is C or D. When I got down the Google search page to a link to ‘Wild guessing strategies on tests’, I stopped reading. Feel free to supplement this with your own research….

(Just for the record, from our analysis of another 220,000 4-item MCQs, the most popular answer chosen by authors is C, by a nose from B, both of which are well above the 25% expected value if truly random.)

Moderate, monitor or leave well alone?

October 22, 2012 in Talking point

Talking point: When you’re using a PeerWise activity in your course, what do you do in terms of intervening in the students’ questions, answers and discussions?

When I tell someone about PeerWise for the first time, more often than not their first question is “But what if the students submit something that’s wrong?” My answer? “I don’t worry about it.”

As instructors, it seems to me that there are broadly three approaches we can take to administering our PeerWise repositories: we can moderate them, checking every question for correctness; we can monitor them, not explicitly checking everything but keeping a close eye out and intervening if we spot something egregiously incorrect; or we can leave well alone and let the students look after themselves. My personal preference is for the last of these options, but plenty of people seem to recoil in horror when I tell them that. However, I would contend that not only can we legitimately not intervene at all, we definitely shouldn’t. Here’s why:

An instructor-free PeerWise space explicitly belongs to the students, and they have full responsibility for its contents. If they do spot an error, it’s up to them to resolve it: the teacher isn’t going to come along and fix it for them. I think this gives a much greater sense of ownership to the students, with corresponding greater commitment. Plus, deciding whether something really is an error or not, and why, can spawn some great, in-depth discussions in the comments threads, which I would argue are some of the most potent learning opportunities offered by PeerWise. This would be lost if we swept in, helicopter-like, to rescue the students all the time.

My experience in courses I have run is that less than a tenth of the submitted material has obvious errors, and from what has been reported this ratio seems to be broadly similar in other courses elsewhere. A good number of these problems do get identified and fixed by the students. “But not all,” I hear you say. True, not all. A small proportion of incorrect content does persist. But I’ve made my peace with that: students are smart people, and they understand the context very well – they know it’s a student-generated resource, and not everything in it is guaranteed to be correct. Besides, it’s not like PeerWise is the only place students might come across incorrect ideas: informal study groups in the library, the cafe or on Wikipedia are widespread, and no-one is moderating those…

I believe that as instructors we should be more relaxed about this sort of thing: any potential disadvantages of student mistakes are outweighed by the intellectual challenges of self and peer assessment, and by students taking responsibility for their own learning. PeerWise is a great space for students to make their own contributions to their course, and some of them really push the boundaries of creativity and sophistication. Don’t inhibit them by peering over their shoulder, virtual ‘red pen’ at the ready.

So, I’ve set my stall out. Are you nodding in agreement, or pounding the keyboard in rage? Vote in the poll (in the sidebar on the right), and use the comments below to tell me why I’m right or wrong!

So…what do you do in your courses? Have you changed what you do as a result of something that has happened? Would you prefer to do something different but have concerns or reservations? Join the debate!