
A student’s point of view

July 8, 2014 in Uncategorized

Amongst the numerous posts on the PeerWise Community blog are accounts by instructors of their experiences, descriptions of features within PeerWise, ideas for helping students use PeerWise effectively, and even a few curiosities.

However, one thing missing from this blog has been the student voice – that is, until now!

The idea for this began after reading an excellent post on the Education in Chemistry blog written by Michael Seery (Chemistry lecturer at the Dublin Institute of Technology).  In the post, Seery outlined the three main reservations that have (so far) kept him from using PeerWise (although, as indicated in his post's title, it appears he is slowly warming to the idea – and even has an account!).  The post itself is very thoughtfully written and worth a read if you haven't yet seen it.  And, I should add, Seery maintains a fantastic blog of his own (the wonderfully titled "Is this going to be on the exam?"), covering all kinds of teaching and education-related topics, which I thoroughly recommend.

A few days after Seery's post was published, a student named Matt Bird wrote a comment on the post describing his experiences using PeerWise in a first-year Chemistry course at the University of Nottingham.  (Incidentally, this course was taught by Kyle Galloway, who has previously spoken about PeerWise and who, I see, gave a talk yesterday entitled "PeerWise: Student Generated, Peer Reviewed Course Content" at the European Conference on Research in Chemistry Education 2014 – congratulations Kyle!)

Matt’s comment briefly touched on the reservations expressed by Seery – question quality, plagiarism and assessment – but also discussed motivation and question volume.  I thought it would be interesting to hear more from Matt to include a student perspective on the PeerWise Community blog and so I contacted him by email.  He was good enough to agree to expand a little on the points originally outlined in his comment.

Included below are Matt’s responses to 5 questions I sent him – and, in the interests of trying to be balanced, the last question specifically asks Matt to comment on what he liked least about using PeerWise.

Tell us a little about the course in which you used PeerWise.  How was your participation with PeerWise assessed in this course?  Do you think this worked well?
We used PeerWise for the Foundation Chemistry module of our course. It was worth 5% of the module mark, and was primarily intended as a revision resource. To get 2% we were required to write 1 question, have an answer score of 50 or more, and comment/rate at least 3 questions. Exceeding these criteria would get 3%, and being above the median reputation score would get the full 5%. Despite only being worth a small amount of the module, I think this system worked well to encourage participation as it was easy marks, and good revision.

What did you think, generally, about the quality of the questions created by your classmates?  How did you feel about the fact that some of the content, given that it was student-authored, may be incorrect?
In general the questions were good quality. Obviously some were better than others, but there were very few bad questions. There were cases where the answer given to the question was incorrect, or the wording of the question itself was unclear, but other students would identify this and suggest corrections in a comment. In almost all cases the question author would redo the question.

Were you concerned that some of your fellow students might copy their questions from a text book?
I wasn't concerned about questions being copied from textbooks. At the end of the day it is a revision resource, and textbook questions are a valid way of revising. The author still had to put the question into multiple choice format and think about potential trick answers to include (we all enjoyed basing the wrong answers on mistakes people commonly made!), so they had to put some effort in. Obviously lecturers may have a different opinion on this!

How did you feel about the competitive aspects of PeerWise (points, badges, etc.)? 
The competitive aspects were what kept me coming back. It was an achievement to earn the badges (especially the harder ones), and always nice to be in the top 5 on one or more of the leader-boards. If you knew your friends’ scores then you could work out if you were beating them on the leader boards or not, which is kind of ‘fun’.  I fulfilled the minimum requirements fairly quickly, so most of my contributions were done to earn badges, and work my way up the leader-boards (and to revise, of course!).

Do you feel that using PeerWise in this course helped you learn? What did you personally find most useful / the best part about using PeerWise? What did you personally find least useful / the worst part about using PeerWise?
I got 79% for the first year of the course, so something went right! PeerWise probably contributed somewhat to that, as it did help me with areas I was less strong on.  It's hard to say what the most useful part of PeerWise was, but the number of questions was certainly useful. I guess that's more to do with the users rather than the system though. As previously mentioned, the competitive aspect was fun.  The worst part of PeerWise would be the rating system. No matter how good the question, and how good the comments about it were, hardly anybody rated questions above 3/5, with most coming in at around 2. I guess nobody wanted to rate questions too highly and be beaten in that leader-board! It would also have been nice to create questions where multiple answers were correct, so you need to select two answers.  Overall, I enjoyed using PeerWise and hope it is used again later on in my course.

Many sincere thanks to Matt Bird for taking the time to respond to these questions – particularly during his summer break – enjoy the rest of your holiday Matt!

Although his feedback represents the opinion of just one student, it highlights several interesting points.  For one thing, one of the most common instructor concerns regarding PeerWise (the lack of expert quality control) did not seem to trouble him at all.  In fact, Matt appears fairly confident in the ability of his classmates to detect errors and suggest corrections.

When commenting on the aspects of PeerWise that did concern him, Matt mentioned that the student-assigned ratings did a poor job of differentiating between questions.  Indeed, this does appear to be somewhat of an issue in this course.  The histogram below illustrates the average ratings of all questions available in the course.

Of the 363 questions in the repository, 73% were rated in a narrow band between 2.5 and 3.5 and 96% of all questions had average ratings between 2.0 and 4.0.  While there are some techniques that students can use to find questions of interest to them (such as searching by topic or “following” good question authors) it seems like this is worth investigating further.
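
Summary figures like these are straightforward to compute from the per-question average ratings.  Purely as an illustration, a minimal R sketch might look like the following (the avg_rating vector is hypothetical – one average rating per question):

# avg_rating: hypothetical numeric vector of average ratings, one per question
mean(avg_rating >= 2.5 & avg_rating <= 3.5)   # proportion in the narrow 2.5-3.5 band
mean(avg_rating >= 2.0 & avg_rating <= 4.0)   # proportion rated between 2.0 and 4.0
hist(avg_rating, breaks = seq(0, 5, by = 0.25),
     main = "Average question ratings", xlab = "Average rating")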

Below are two example questions pulled out of the repository from Matt’s course – only the question, student answers and the explanation are shown, but for space reasons none of the comments that support student discussion around the questions are included.  I selected these questions more or less at random, given that I am completely unfamiliar with the subject area!  It is, of course, difficult to pick just one or two questions that are representative of the entire repository – but these examples go a small way towards illustrating the kind of effort that students put into generating their questions.

And finally, one other thing Matt mentioned in his feedback was that he would have liked to see other question formats (in addition to single-answer multiple choice).  Watch this space…

PeerWise – Experiences at University College London

September 13, 2013 in Uncategorized, Use cases

PeerWise – Experiences at University College London

by Sam Green and Kevin Tang

Department of Linguistics, UCL

Introduction

In February 2012, as part of a small interdisciplinary team, we secured a small grant of £2500 from the Teaching Innovation Grant fund to develop and implement the use of PeerWise within a single module in the Department of Linguistics at University College London (UCL). The team was made up of advisory staff from the Centre for Applied Learning and Teaching and from the Division of Psychology and Language Sciences (PALS), together with lecturers and Post-Graduate Teaching Assistants (PGTAs) in the Department of Linguistics. The use of the system was monitored, and students' participation made up 10% of their grade for the term.

In the subsequent academic year, we extended its use across several further modules in the department after obtaining an e-Learning Development Grant at UCL.

Overall aims and objectives

The PGTAs adapted the material developed in the second half of the 2011/12 term to provide guidelines, training, and further support to new PGTAs and academic staff running modules using PeerWise. The experienced PGTAs were also involved in disseminating the project outcomes and sharing good practice.

Methodology – Explanation of what was done and why

Introductory session with PGTAs:

A session run by the experienced PGTAs was held prior to the start of term for PGTAs teaching on modules utilising PeerWise. This delivered information on the structure and technical aspects of the system, the implementation of the system in their module and, importantly, marks and grading. It also highlighted the importance of team-work and the necessity of participation. An introductory pack was provided so that new PGTAs could quickly adapt the system for their respective modules.

Introductory session with students:

Students taking modules with a PeerWise component were required to attend a two-hour training and practice workshop, run by the PGTAs teaching on their module. After being given log-in instructions, students participated in the test environment set up by the PGTAs. These test environments contained a range of sample questions (written by the PGTAs) relating to students’ modules and which demonstrated to the students the quality of questions and level of difficulty required. More generally, students were given instructions on how to provide useful feedback, and how to create educational questions.

Our PGTA – Thanasis Soultatis giving an introductory session to PeerWise for students

Our PGTA – Kevin Tang giving an introductory session to PeerWise for students

Course integration

In the pilot implementation of PeerWise, BA but not MA students were required to participate. BA students showed more participation than MA students, but the latter nevertheless showed engagement with the system. Therefore, it was decided to make PeerWise a compulsory element of the module to maximise the efficacy of peer-learning.

It was decided that students should work in ‘mixed ability’ groups, due to the difficult nature of creating questions. However, to effectively monitor individual performance, questions were required to be answered individually. Deadlines situated throughout the course ensured that students engaged with that week’s material, and spread out the workload.

Technical improvement

The restriction on image size and the lack of an ability to upload or embed audio files (useful for phonetic/phonological questions in Linguistics) were circumvented by using a UCL-wide system which allows students to host these sorts of files. This system (MyPortfolio) allows users to create links to stored media. It also allows the students to effectively anonymise the files, keeping their content secret for the purpose of questioning.

Project outcomes

Using the PeerWise administration tools, we observed student participation over time. Students met question creation deadlines as required, mostly by working throughout the week to complete the weekly task. In addition, questions were answered throughout the week, suggesting that students did not see the task purely as a chore. Further, most students answered more than the required number of questions, again showing their willing engagement. A final point on deadlines is that the MA students used PeerWise as a revision tool entirely of their own accord; their regular creation of questions built up a repository of revision topics with questions, answers, and explanations.

Active Engagement

The Statistics

PeerWise assigns each student a score made up of several components; to increase the total score, one needs to achieve a good score on each component.

The students were required to:

  • write relevant, high-quality questions with well thought-out alternatives and clear explanations
  • answer questions
  • rate questions and leave constructive feedback
  • use PeerWise early (after questions are made available) as the score increases over time based on the contribution history

Correlations between the PeerWise scores and the module scores were computed to test the effect of PeerWise on students' learning, and a nested model comparison was performed to test whether the PeerWise grouping helped predict the students' performance. The pattern differs somewhat between the BA students and MA students in Term 1, but not in Term 2, after the PeerWise grouping for the BAs was changed.

Term 1:

The BA students showed no correlation at all, while the MAs showed a strong correlation (r = 0.49, p < 0.001***).

MA Students – Term 1 – Correlation between PeerWise Scores and Exam Scores

In light of this finding, we attempted to identify the reasons behind this divergence in correlations. One potential reason was that the grouping of the BAs was done randomly, rather than by mixed ability, while the grouping of the MAs was done by mixed ability. We hypothesized that mixed-ability grouping is essential to the successful use of the system. To test this hypothesis, we asked the PGTA for the BAs to regroup the PeerWise groups in the second term based on mixed ability. This PGTA did not have any knowledge of the students' PeerWise scores in Term 1, while the PeerWise grouping for the MAs largely remained the same.

Term 2:

Assessment in Term 2 was based on three assignments spread out over the term. The final PeerWise score (taken at the end of Term 2) was tested for correlation with each of the three assignments.

With the BAs, the PeerWise score correlated with all three assignments with increasing levels of statistical significance – Assignment 1 (r = 0.44, p = 0.0069**), Assignment 2 (r = 0.47, p = 0.0040**) and Assignment 3 (r = 0.47, p = 0.0035**).

With the MAs, the findings were similar, except that Assignment 1 fell just short of significance with a borderline p-value – Assignment 1 (r = 0.28, p = 0.0513), Assignment 2 (r = 0.46, p = 0.0026**) and Assignment 3 (r = 0.33, p = 0.0251*).
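
For readers curious about the mechanics, each of these correlations is a simple Pearson correlation test; a minimal R sketch (the pw data frame and its column names are hypothetical) would be:

# pw: hypothetical data frame with one row per student
#   PW_score    - final PeerWise score
#   Assignment1 - mark for Assignment 1 (similarly Assignment2 and Assignment3)
cor.test(pw$PW_score, pw$Assignment1)   # reports Pearson's r and its p-value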

A further analysis was performed to test whether PeerWise grouping has an effect on assignment performance. This consisted of a nested-model comparison with PeerWise score and PeerWise group as predictors and the mean assignment score as the outcome. The lm function in the R statistical package was used to build two models: the superset model with both PeerWise score and PeerWise group as predictors, and the subset model with only the PeerWise score as predictor. An ANOVA was used to compare the two models, and it was found that, while PeerWise score and PeerWise grouping were each significant predictors on their own, adding PeerWise grouping made a significant improvement in prediction, with p < 0.05 (see Table 1 for the nested-model output).

Table 1: ANOVA results

Analysis of Variance Table

Model 1: Assignment_Mean ~ PW_score + group
Model 2: Assignment_Mean ~ PW_score
  Res.Df    RSS Df Sum of Sq      F  Pr(>F)  
1     28 2102.1                              
2     29 2460.3 -1   -358.21 4.7713 0.03747 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
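
For completeness, a minimal R sketch of the nested-model comparison described above (the pw data frame and its column names are hypothetical, but the model formulas follow Table 1):

# pw: hypothetical data frame with one row per student
#   Assignment_Mean - mean of the three Term 2 assignment marks
#   PW_score        - final PeerWise score
#   group           - PeerWise group membership (a factor)
m_full    <- lm(Assignment_Mean ~ PW_score + group, data = pw)   # superset model (Model 1)
m_reduced <- lm(Assignment_Mean ~ PW_score, data = pw)           # subset model (Model 2)
anova(m_full, m_reduced)   # a significant result means grouping improves prediction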

The strong correlation found with the BA group in Term 2 (but not in Term 1) is likely to be due to the introduction of mixed-ability grouping. The group effect suggests that the students performed at a similar level as a group, which implies group learning. This effect was found with the BAs but not with the MAs; the difference could be attributed to the quality of the mixed-ability grouping, since the BA (re)grouping was based on Term 1 performance, while the MA grouping was based on the impression the TA had formed of the students in the first two weeks of Term 1. With both the BAs and MAs, there was a small increase in correlation and significance level over the term, which might suggest that increasing use of the system helps improve assignment grades as the term progresses.

Together these findings suggest that mixed-ability grouping is key to peer learning.

Evaluation/reflection:

A questionnaire was completed by the students about their experience with our implementation of PeerWise. The feedback was on the whole positive, with a majority of students agreeing that:

  1. Developing original questions on course topics improved their understanding of those topics
  2. Answering questions written by other students improved their understanding of those topics.
  3. Their groups worked well together

These responses highlight the key concept behind PeerWise – peer learning.

[Charts: student feedback on understanding, question writing, and group work]

Our objective statistical analyses together with the subjective feedback from the students themselves strongly indicated that the project enhanced student learning and benefitted their learning experience.

E-learning awareness

One important lesson was the recognition that peer learning – delivered through e-learning – can be a highly effective method of learning for students, even with very little regular, direct contact between PGTAs and students regarding their participation.

It was necessary to be considerate of the aims of the modules, to understand the capabilities of PeerWise and its potential for integration with the module, and, importantly, to plan the whole module's use of PeerWise in detail from the beginning. Initiating this type of e-learning system required this investigation and planning so that students understood the requirements and the relationship of the system to their module. Without explicit prior planning – with teams working in groups and remotely from PGTAs and staff (at least as regards their PeerWise interaction) – any serious issues with the system and its use may not have been spotted and may have been difficult to counteract.

As mentioned, the remote nature of the work meant that students might not readily inform PGTAs of issues they may have been having, so any small comment was dealt with immediately. One issue that arose was group members’ cooperation; this required swift and definitive action, which was then communicated to all relevant parties. In particular, any misunderstandings with the requirements were dealt with quickly, with e-mails sent out to all students, even if only one individual or group expressed concern or misunderstanding.

Dissemination and continuation

A division-wide talk (video recorded as a 'lecturecast') was given by Kevin Tang and Sam Green (the original PGTAs working with PeerWise) introducing the system to staff within the Division of Psychology and Language Sciences. This advertised the use and success of PeerWise to several interested parties, as did a subsequent lunchtime talk to staff at the Centre for the Advancement of Teaching and Learning. The experienced PGTAs documented their experiences in detail, created a comprehensive user guide, included presentations for students and new administrators of PeerWise, and made all of this readily available to UCL staff and PGTAs, so the system can capably be taken up by any other department. Further, within the Department of Linguistics there are several 'second-generation' PGTAs who have learned the details of, and used, PeerWise for their modules. These PGTAs will in turn pass on use of the system to the subsequent year, should PeerWise be used again; they will also be available to assist any new users of the system. In sum, given the detailed information available, the current use of the system by the Department of Linguistics, and the keen use by staff in the department (especially given the positive results of its uptake), it seems highly likely that PeerWise will continue to be used by several modules, and will likely be taken up by others.

Acknowledgements

 

Scoring: for fun and extra credit!

January 3, 2013 in Uncategorized

PeerWise includes several “game-like” elements (such as badges, points and leaderboards) which are designed primarily for fun and to inject a bit of friendly competition between students.  As an example, students accumulate points as they make their contributions and their score is displayed near the top right corner of the main menu.

In fact, if you have participated in your own course, perhaps you have noticed your score increasing over time?

Of course, not all students are motivated by such things, but a quick search of recent Twitter posts reveals that some students really seem to enjoy earning the various virtual rewards that are on offer:

Some instructors have even considered using these elements to award “bonus marks” or “extra credit” as a way of motivating their students.  Obtaining the data to verify that students have met certain goals is trivial – for example, instructors can view the scores of all of their students in real-time by selecting “View scores of all students” from the Administration menu:

However, one difficulty with using the score for awarding such credit is that coming up with realistic target scores is complicated by the way the scoring algorithm works.  The algorithm has previously been discussed in detail, but basically it rewards students for making contributions that are valued by their peers.  In order to achieve the highest possible score, a student must make regular contributions and:

  • author questions that their peers rate highly
  • answer questions correctly before their peers
  • rate questions as they are subsequently rated by their peers

What this means is that the total number of points a student can earn depends on how often their classmates endorse their contributions, and this depends not only on the size of the class but also on the "requirements" placed on the activity by the course instructor.  This makes it tricky to set reasonable score targets for students to reach.  Recently on the PeerWise-Community forum, member Brad Wyble raised exactly this point:

I'm interested in linking the peerwise score to extra credit points but I'm a little stuck on how to proceed without an idea of what the range of possible values might be.   I don't need to know an exact number, but is it possible to provide a rough estimate given a course of size 60? And how would this estimate change for a size of 150?

So, what is the typical range of PeerWise scores for a class of a given size?  Let’s start with a class of 50 students.  It turns out the range can be quite wide, as exemplified by the two extreme cases in the figure below (each line represents the set of scores for a single class of 50 students – the average number of questions authored and answers submitted by students in each class are shown in the legend):

Not only were students in the "blue" class all highly active, but almost all of them made contributions in each of the three areas required for maximising the score: question authoring, answering questions, and rating questions.  On the other hand, students in the "red" class were all quite active in answering questions, but only a few students in this class were active in all three areas.  In fact, only the first 12 students had non-zero component scores for each of the three components.  The remaining students scored 0 for the question authoring component (most likely because they chose not to author any questions).  Students 13-24 in the figure had component scores only for answering questions and rating questions, whereas the remaining students (all below student 25) had a single component score (for answering questions).  These students chose not to evaluate any of the questions they answered, and ended up with very low total scores (even though in some cases they may have answered many questions).

To calculate a "typical" range of scores for classes of varying sizes, we can average the class scores over a number of classes.  For example, to calculate the typical range for classes of approximately 200 students, a set of 20 classes was selected (with class sizes ranging from 185 to 215) and the student scores for each class were listed in descending order.  To calculate the average "top score", the top score in each of the 20 classes was averaged; likewise, the second-highest scores were averaged, and so on down the rankings for all remaining scores.  The figure below plots the average set of scores for classes of varying sizes (approximately 50, 100, 150 and 200), averaging the class scores across a series of sample courses (in each calculation, between 15 and 20 classes were examined).
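
Expressed in code, this rank-wise averaging might look something like the following R sketch (the class_scores list is hypothetical – one vector of student scores per class of roughly the target size):

# class_scores: hypothetical list in which each element is the numeric vector
# of PeerWise scores for one class of roughly the target size
sorted <- lapply(class_scores, sort, decreasing = TRUE)   # each class, highest score first
n <- min(lengths(sorted))                                 # align on the smallest class
score_matrix <- sapply(sorted, "[", seq_len(n))           # one column per class
rank_wise_average <- rowMeans(score_matrix)               # element 1 = average top score, etc.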


Brad also makes the following point in his forum post:

I suppose that another option would be to compute the grading scheme at the end of the semester once we see what the distribution of point values are.

This is an excellent idea – in many cases, the range of scores for a given course appears to be fairly consistent from one semester to the next (assuming the class size and participation requirements do not vary greatly).  The figure below plots the set of PeerWise scores for one particular course over 6 different semesters.  The class size was fairly consistent (around 350-400 students) and, although the scores do vary, there is probably enough consistency to give instructors in future semesters some idea of what to expect (which may help them define targets for awarding bonus marks or extra credit).

In this class, only the very top few students achieve scores above 6000.  It is interesting to note that, towards the right-hand edge of the chart, the very sharp drops in the curves correspond to students who have not made contributions in each of the three areas.  Earning points in each of the question authoring, answering questions, and rating questions components is critical to achieving a good score – and it is probably worth instructors emphasising this to students (although this information is shown when students hover their mouse over the score on the main menu).
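
One simple way to turn a previous semester's distribution into targets is to read thresholds off its quantiles.  A hedged R sketch (the last_semester_scores vector is hypothetical, and the cut-offs are purely illustrative, not a recommendation):

# last_semester_scores: hypothetical numeric vector of final PeerWise scores
# from a comparable earlier offering of the course
quantile(last_semester_scores, probs = c(0.50, 0.75, 0.90))
# e.g. one bonus mark for exceeding the previous median, two for exceeding the
# previous 90th percentile -- choose cut-offs that suit your own course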

Has anyone tried using the points (or the badges) as a way of rewarding students with extra credit or bonus marks?  It would be interesting to hear of your experience – please share!

What does a typical PeerWise course look like?

October 12, 2012 in Uncategorized

If you have ever wondered whether your class is too small (or too big) to use a tool like PeerWise, you may be interested in the following data.  To get a sense for both the typical size of a class on PeerWise, and the typical number of contributions made by students in each class, data from the last 1000 courses was examined.

While there are many examples of very large classes (>300 students), and even a few extremely large ones (>800 students), the majority of classes have fewer than 50 students.  The breakdown is given in the chart below.

In terms of student participation, it is quite common for instructors to award a small fraction of course credit to students who contribute at least a certain number of questions and answers (for example, in the most recent class I taught, students were required to author 2 questions and answer 20).

The table below gives the average overall participation (in terms of questions and answers) for classes of different sizes.  It also shows the average contributions per student in each of those classes.


The rightmost columns of this table are perhaps the most interesting – the chart below plots the figures from these columns – that is, the average number of questions authored and the average number of questions answered by students in classes of various sizes.

If you ignore the very small (<20) and very large (>800) classes, the average number of answers submitted by students in all courses falls within quite a narrow range – from just below 30 to just above 40 answers per student.  Unsurprisingly, the average number of answers drops in very small courses, as students in these courses are unlikely to have a very large number of questions available to answer.  Conversely, very large courses sometimes end up with very large banks of questions (often many thousands), enabling the most enthusiastic students to answer (almost) as many questions as they would like.
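
If you keep records for your own courses, the same per-student averages are easy to reproduce.  A minimal R sketch (the courses data frame, its column names and the size bands are all hypothetical):

# courses: hypothetical data frame with one row per course, containing the
# totals n_students, n_questions and n_answers for that course
courses$size_band <- cut(courses$n_students,
                         breaks = c(0, 20, 50, 100, 300, 800, Inf))
aggregate(cbind(questions_per_student = n_questions / n_students,
                answers_per_student   = n_answers / n_students) ~ size_band,
          data = courses, FUN = mean)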

So, where does your class fit in, and could the contributions of your students be described as “typical”?

Just how big is this thing…?

October 11, 2012 in Uncategorized

Immersed in numbers

From time to time, I email Paul Denny (he's here – @paul), creator of PeerWise, to ask for up-to-date usage figures for the student-facing PeerWise website to put into a talk or presentation that I am giving. I am giving one of these early next week to the assembled crew of the Carl Wieman Science Education Initiative here at UBC, so I thought I would share some figures that Paul sent over… some of them may surprise you!

  • Institutions:  308
  • Creators:  1796
  • Courses: 1905
  • Users: 94961
  • Questions: 379464
  • Answers: 8172405

Yes, you did read that correctly – getting on for half a million questions and ten million answers!!

Small print on the data:
“Institutions” only counts institutions for which there has been at least some activity; “creators” are instructors/teachers with the ability to create new courses.  “Courses” includes all repositories created (even those for which there is no associated content). “Questions” only includes active questions (i.e. not deleted or archived versions).