Case Study: Medical Engineering

Aim

This was a Master's-level course in which the students were studying implant design technology. The teacher wanted them to cover seven joint replacement technologies. She set each student the task of reviewing the literature and writing a report on technologies for just one type of joint, but she wanted students to read their peers' reports so that they would find out about all seven technologies. She also wished to develop their knowledge of literature review writing. Notice how the preparation stage involves some co-construction of the task and assessment guidelines.

Preparation

The teacher began by explaining what she thought students would learn from peer assessment. She explained that she wanted them to learn from reading each other's reports and to make judgments about different ways of dealing with the literature, giving views and making recommendations. In groups, students then discussed the task and what needed to be included in the literature review. They discussed its purpose and who they were writing it for (audience) and made suggestions for refining the task instruction. After this session the task instruction was finalised. Next, the students were asked to think about what needed to be included in the reports, and these ideas were collated. Finally, they were asked to think about how the reports might be assessed. In groups, students brainstormed the criteria they would use to make judgments about the reports; for example, they expected a report to demonstrate an understanding of anatomy and of joint replacement requirements, and an appreciation of the market. These expectations (criteria) were then written up by the teacher as the peer assessment task and guidelines.

Next, the teacher organised a rehearsal marking session. The aim was for the teacher and students to reach a shared understanding of what constituted a good literature review report and of how constructive feedback could be given. Three reports from the previous year were emailed to students before the session. They were asked to read them, rank them and provide some justification for their ranking. In the session, students worked in groups to compare rankings and justifications. At this stage it proved useful to look again at the assessment guidelines and amend them as necessary. Students were also given sample feedback comments on one of the reports to discuss.

Implementation: peer assessing the reports

Students uploaded anonymised reports (identified only by a number) to the departmental database. The teacher distributed a table allocating four reports to each student, making sure that no student received a report on their own topic and that each of their four reports covered a different joint. Students downloaded their allocated reports and were given one week to read and mark them.
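
Viewed abstractly, this allocation is a small rotation scheme. The Python sketch below is only an illustration of one way such an allocation could be done, not the teacher's actual procedure: it assumes exactly one report per topic, and every id, name and topic label in it is invented. Walking the topic list cyclically gives each reviewer four distinct topics, never their own, and ensures every report is reviewed the same number of times.

    from collections import defaultdict

    # Toy data: one (report_id, author, topic) record per report. All ids,
    # names and topic labels are illustrative assumptions.
    reports = [
        (1, "student_a", "hip"),      (2, "student_b", "knee"),
        (3, "student_c", "shoulder"), (4, "student_d", "elbow"),
        (5, "student_e", "ankle"),    (6, "student_f", "wrist"),
        (7, "student_g", "finger"),
    ]

    topics = sorted({topic for _, _, topic in reports})
    by_topic = defaultdict(list)
    for rec in reports:
        by_topic[rec[2]].append(rec)

    def allocate(own_topic, n=4):
        """Pick n reports on n different topics, excluding the reviewer's own."""
        start = topics.index(own_topic)
        # Cyclic shift: take the n topics that follow the reviewer's own,
        # wrapping around, which spreads the reviewing load evenly.
        picks = [topics[(start + k) % len(topics)] for k in range(1, n + 1)]
        return [by_topic[t][0] for t in picks]  # one report per topic here

    for report_id, author, topic in reports:
        print(author, "reviews reports", [r[0] for r in allocate(topic)])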

They were asked to write feedback comments of about a page on each report (though this proved to be too long, as it meant each student received four pages of feedback).

They then submitted the marks and comments. Marks were collated and a mean mark was calculated for each student. The teacher then checked for reports with a wide range of marks and, where this was the case, stepped in to moderate them. Marks and comments were returned to students within two weeks.
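
In computational terms the collation step is simple descriptive statistics. The sketch below is a minimal Python illustration with invented marks; the case study does not say how a ‘wide range’ was judged, so the moderation threshold is purely hypothetical.

    from statistics import mean

    # Illustrative peer marks keyed by report id; all values are invented.
    peer_marks = {
        1: [62, 65, 58, 70],
        2: [45, 72, 50, 68],  # wide spread: a candidate for moderation
    }

    MODERATION_RANGE = 15  # hypothetical cut-off, in marks

    for report_id, marks in peer_marks.items():
        final = mean(marks)  # mean of the four peer marks
        if max(marks) - min(marks) > MODERATION_RANGE:
            print(f"report {report_id}: mean {final:.1f} (refer for moderation)")
        else:
            print(f"report {report_id}: mean {final:.1f}")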

Evaluation

After the return of marks and comments, a questionnaire was given out, giving a snapshot of students' immediate reactions to peer assessment. A week later a focus group was held at which students could give a more considered response.

Many students agreed that they ‘learned stuff’ from peer and collaborative assessment and reported a better understanding of their own strengths and weaknesses. They saw different ways of approaching the task and realised there was no one correct solution. 80% of students agreed/strongly agreed that peer and collaborative assessment gave them a better understanding of the assessment criteria and that they understood what teachers expected in a report.

On the whole, students accepted the grades given by their peers, with 65% agreeing/strongly agreeing that peer assessment was a fair method of assessment. On the other hand, 65% agreed/strongly agreed that marking by academic staff would be fairer! They were concerned about the range of marks and inconsistencies in feedback, something which is, of course, less of a problem when marking is done by only one person.

The teacher felt that it was important for students to see the marks given by each peer assessor, not just the mean mark, so that they could see the full range of marks.

Replication?

Students commented that they would have benefited from peer assessment experiences earlier in their programme of study, and indeed Orsmond (2004: 19) warns that the ‘one-off peer assessment experience is not good’. The School of Engineering and Materials Sciences is now beginning to integrate peer and self-assessment at all levels of its degree programme.

(This work received funding from the Engineering Subject Centre of the Higher Education Academy).