Making digital exemplars

In addition to my usual classroom use of exemplars as a means of familiarising students with the assessment requirements of a specific module, this year I have created a video walk-through of an exemplar. Initially this was to enable those who missed the relevant class to catch up on the session, but the approach was welcomed by students who attended the exemplar activity session as well as by those who did not.

How to create a digital exemplar walk-through:

• Bring up the exemplar on screen after selecting a ‘good’ piece of work

• Read through and use comments in Word to annotate the work with points which would be surfaced in feedback, particularly comments related to the assessment criteria (of course!). Comments include things done well, things done less well which could be avoided, and opportunities for further detail and development. This tagging process acts only as an aide-mémoire so that as I create the feedback video I am aware of what I wanted to include.

• Open Screencast-O-Matic to screen-record the work as a video while I re-read it and talk to each of the tags: ‘This work includes this … which is useful because …’; ‘This work used literature in this way … It might be better to … because ….’ None of this is rehearsed; that would be too time-consuming. The resultant video is a commentary on performance.

• The video is uploaded and made available to students.

After using the resource there was some consensus amongst my students that the value was only in listening to the assessment commentary, not specifically in looking at the work. One student described how they listened but did not watch; they then recorded notes about what they should include, remember and avoid. They avoided looking at the work for fear of having their own ideas reshaped. If assessment judgments are ‘socially situated interpretive act[s]’ (Handley, den Outer & Price, 2013), then the digitised marking commentary may be a useful way of making that process more transparent for students, and indeed for other staff.

I will definitely be including this in future modules.

Handley, K., den Outer, B. & Price, B. (2013) Learning to mark: exemplars, dialogue and participation in assessment communities. Higher Education Research & Development, 32(6).

9 Things to do with Assessment Rubrics

I’ve used rubrics in assessment marking since I first held an academic role some fifteen-ish years ago. For me, rubrics are an essential tool in the assessment toolkit. It’s important to recognise that they are not a ‘silver bullet’: if not integrated into teaching and support for learning, they may have no impact whatsoever on student engagement with assessment. I am therefore trying to collate a list of the ways in which rubrics can be used with students to enhance their performance, help them grow in confidence and demystify the assessment process. My top nine, in no particular order, are as follows:

  1. Discuss the rubric and what it means. This simply helps set out expectations and requirements, and provides opportunities for clarification.
  2. Encourage students to self-assess their own performance using the rubric, so that they engage more deeply with the requirements of the assessment.
  3. Encourage students to peer assess each other’s performance using the rubric, leading to further familiarization with the task, as well as the development of critical review and assessment judgment skills. This also allows the seeding of further ideas in relation to the task, through exposure to the work of others.
  4. Get students to identify the mark that they are aiming for and re-write the criteria in their own words. This sparks discussion about the requirements, flushes out any issues needing clarity and can result in students raising their aspirations (as the ‘assessment code’ is decrypted there are moments of “If that’s what it means … I can do that”).
  5. Facilitate a negotiation of the rubric. Where full student-led creation of a rubric is impractical, or not desirable, a tentative rubric can be presented and negotiated with the class. Students can have an influence on the coverage, the language and the weightings. As well as familiarising students with the requirements, this allows a sense of ownership to develop. In my own experience, rubrics are always better for student negotiation.
  6. Undertake a class brainstorm as the basis for the rubric design. Ask what qualities should be assessed (e.g. report writing skills), then identify what these mean to students themselves (e.g. flow, use of literature to support argument). Then use this list to develop a rubric. It is a form of negotiation, but specifically it allows the rubric to grow out of student ideas. By using student language, the criteria are already written in a form that is accessible to the group (after all, they designed the key components).
  7. Simply use the rubric as the basis for formative feedback with students to aid familiarity.
  8. Use the criteria to assess exemplars of previous students’ work. This has the benefits of building familiarity and developing assessment judgment, as well as sparking new ideas from exposure to past students’ work. Of course this can be further developed with full sessions or online activities built around exemplar review, but the rubric can be central to this.
  9. A rubric can be partially written to offer space for choice. Leaving aspects of the rubric for students to complete leaves room for them to show their individuality and to customise tasks. Rubrics don’t box us into total uniformity. Recently I created a rubric for a research project and left space for students to articulate the presentational aspects of the criteria. Some students filled in the rubric to support the production of a report, others a poster and others a journal article.
Using a class brainstorm to form the basis of a rubric with criteria relating to reflection 

I have only included approaches that I have used first hand. I’d like to build this up with the experiences of others; if you have additional suggestions please do let me know.

Undergraduate Vivas

Access the undergraduate viva guide by clicking the image above.

Over the last six months I have been looking into the undergraduate viva, asking questions such as: What are the benefits? What makes a good undergraduate viva? And how can students be prepared for their undergraduate viva? One of the results of this is a guidance document on how to conduct a viva of this type. It may be of interest to others.

Feedback conversations: How should I use technology? A nuanced approach …

One of the most frequent conversations I have is around improving feedback, and how technology can help. Increasingly I am trying to encourage a more nuanced discussion about feedback, because deciding on what feedback to give, and how to give it, is not simply a choice between one tool and another. The choice should reflect the individual lecturer’s preference, the context, the type of assessment or activity upon which feedback is being offered, the characteristics of the student or cohort, the aims of the feedback, the tools and technology required, the quality management requirements and no doubt many other factors. Some of the common questions I get are shared below, with comments:

Should I use GradeMark for feedback? 

Well, it depends a good deal on what you want to achieve. GradeMark has many benefits but in itself it will not make feedback better. To be clear, like any technology it can make a difference but it is not a silver bullet; without meaningful engagement and a commitment to change practice, it will not improve satisfaction with feedback.

GradeMark can help in several ways:

• Consistency: you can create a bank of common points to annotate the work, and add a greater amount of feed-forward signposting advice for students’ common errors. For example, if a group is struggling to paraphrase, you could create a comment advising on techniques and pointing to resources that might help, and use it many times.

• Fairness: marks can be allocated using a rubric. This is entirely optional, and there are of course other ways to employ a rubric.

• Legibility: comments are typed, though very clear handwriting and other technologies can achieve this too.

• Time: you can save time at certain points in the marking and feedback process, as you can get started on your marking as soon as students hand in, rather than delaying until you receive a complete set of papers.

• Transparency when team marking: you can see how another tutor is marking and feeding back. Again, this is possible to achieve in other ways, but being able to see each other’s marking in real time can create ongoing dialogue about the way marks are allocated and the way comments are added.

If you are really concerned about reading on a screen, this might be a problem; but if you consume news, media, research and other things via a screen, it may be worth laying aside your concerns and giving this a try. All of these benefits, though, can only be realised if the user is working with the technology and is not simply transferring existing practices into a digital environment.

Will it save me time? 

Yes and no; it’s not that simple. It depends how you use the facilities and what type of feedback you give. You can use as many or as few of the tools within GradeMark as you see fit, in any combination: voice-over comments; annotations (stock comments, or personalised as if marking on paper); a rubric, with or without auto-generated scoring from it; and a final summary comment. Each individual needs to look at their set-up, consider what they want to achieve, and then select the aspects of the tool that work for their situation. Annotations may be better for corrective, structural feedback, or feedback on specific aspects of calculations, while the narrative may be the place to provide feedback on key ideas within the work.

If you go into using GradeMark solely to achieve efficiencies, you will most likely be disappointed on first usage, because there is a set-up investment and it takes a large group or multiple iterations to get payback on that initial time spent. In my experience, those who use GradeMark may start out seeking efficiency but end up focusing on enhancing their feedback within the time constraints available to them. When time is saved, I have seen colleagues simply re-spend it on making enhancements, particularly to personalise the feedback further.

OK, so what equipment do I need to be able to use GradeMark? Is it best to use a tablet?

Again, much depends on your workflows and preferences. A desktop computer is my preference, as I like plenty of screen space and I like to settle into one spot, with unlimited supplies of tea, whenever I mark. Others like to be mobile, and the tablet version of GradeMark allows you to download all scripts, mark and feed back, and then upload. So, unlike the desktop version, you don’t need to be connected to the Internet – for those marking on the go, this is a good thing.

I see other people using other technologies for marking, like Dragon Dictate and annotation apps on tablets, are these better than GradeMark? 


There is a large toolkit available for assessment and feedback; each tool has strengths and weaknesses, and each fits differently with personal preferences and context. Dragon Dictate can be used to speak a narrative or extensive comments; it’s not perfect, but it may help those who struggle with typing. Annotation apps allow the conversion of handwriting to text, and they allow comments to be added at the point of need within a script (though GradeMark allows this too); on the downside, a manual intervention is needed to return the feedback to students. Track Changes can be good for corrective feedback, but it can cause students to look at their work and feel that it wasn’t good enough, as it has the electronic equivalent of red pen all over it!

Second markers or external examiners refuse to use the same interface… then what?

I’d suggest that you encourage others in the process to use the technology that you have selected. Give them early warning and offer to support the process. A pre-emptive way of dealing with this is to ensure a course-wide approach to feedback: agreeing, as a group, the tools that you will use. This should then be discussed with the external examiner and others at appointment; it’s harder to resist a coordinated approach. Policy change is what is really needed here, so lobbying might help!

But students like handwritten feedback, they find computer based feedback impersonal …

Maybe so, but all students prefer legible feedback, and feedback that they can collect without coming back on to campus. Also, is it not part of our responsibility as educators to ensure students can work digitally, even with their feedback? Students who tell us that they like handwritten feedback often feel a personal connection between themselves and the marker, but feedback using technology can be highly personalised. It is up to the assessor to use the tools available to achieve that personalisation; the tools themselves offer choices to the feedback craftsperson. Adding a narrative comment, an audio comment or customised stock comments can all give a personal touch. However, if the feedback giver chooses none of these things, then of course the feedback will be depersonalised.

Students say they don’t like electronic feedback…

Some might, and the reasons are complex. If we introduce a new feedback method at the end of a student’s programme, without explanation, irritation is inevitable: we have just added a complication at a critical point. Equally, if feedback across a student’s journey is predominantly paper based, it is no wonder they struggle to remember how to retrieve their digital feedback, and so get frustrated. If the feedback is too late to be useful, that will also cause students to prefer old methods. It may be useful to coordinate feedback approaches with others in your course area so that students get a consistent approach, rather than encountering the occasional exotic technology with no clear rationale. Finally, students also need to be trained to do more than receive their feedback: they might file it, return to it, précis it and identify salient points. Good delivery of feedback alone will never be enough; timeliness and engagement are also key to allowing students to gain the benefits of their feedback.

Seeing things differently ….

One of the benefits of using technology in feedback is not often spoken or written about. When we engage meaningfully with technology in feedback, it can change our approach to providing feedback, irrespective of the technology. By way of (real) examples: someone trying annotation software may realise that legibility is a real issue for them and that they must prioritise it in future; someone using a rubric may start giving priority to assessment criteria as the need for equity and consistency comes more sharply into focus; someone using stock comments and adding a voice-over becomes aware of the need for the personal touch in feedback; and someone using audio becomes aware that the volume of feedback produced may be overwhelming for students to digest, and so revises their approach. These realisations live on beyond any particular technology use; so when we think of using technology for feedback, it may be useful to be conscious of the changes that can be brought about in the feedback mindset, and to judge success in these terms rather than just in terms of mastery of, or persistence with, one or another tool.

Course level assessment – nice idea, but what does it really mean?

It is increasingly clear that thinking about curriculum in the unit of ‘the course’ rather than the unit of ‘the module’ is conducive to cohesive course design. It avoids repetition, ensures the assessment journey makes sense to the student and can make feedback meaningful, as one task is designed to link to the next. I have not found much in the literature on course level assessment; while it is advocated in principle amongst educational development communities, it is perhaps less clear what course level assessment actually looks like.

I can see three possibilities, though there may be more. These conceptions are described as if delivered through modular frameworks, which remain the dominant structure for programmes. Any comments on other approaches would be very welcome.

Type 1: Compound assessment

Imagine two modules being taught on entirely discrete themes. Within them might be learning about terminology, key theories, processes, and calculations. Within the modular framework they may be taught entirely independently. In such a model there is nowhere in the curriculum where these skills can be overtly combined. A third module could be introduced which draws upon learning from module one and module two. Of course in reality it may be five modules drawn upon in a sixth compound module.

By example, a module focused upon business strategy may be taught entirely separately from a module on economics. Under such a scenario, students may never get to consider how changes in the economy influence strategy, the associated tactics and the need for responsive planning. It is these compound skills, abilities and levels of professional judgment that the course (not the modules) seeks to develop. One way of addressing this limitation is to provide a third module which draws together real business scenarios and concentrates on encouraging students to combine their knowledge. A ‘compound’ module could be based around case studies and real-world scenarios; it may be limited in its ‘indicative content’, leaving a degree of openness to draw more flexibly on what is happening in the current external environment. Open modules can be uncomfortable and liberating in equal measure for the tutor, as there is a less familiar script. Such a module might concentrate on the development of professional behaviours rather than additional content. It might have timetabled slots, or could take the form of a one-off exercise, field trip or inquiry. Teaching would be facilitative rather than content- and delivery-led.

One of the challenges with such a module is that many tutors may be reluctant to give over credits to what seems to be a content-free or content-light module. Going back to basics, though, graduates are necessarily more than empty vessels filled with ‘stuff’. If we look at the course level and identify what we want to produce in our outcomes, and what the aims of our programmes actually are, then the flexible compound module fits well as an opportunity for fusing knowledge and developing competent, confident early professionals. When knowledge is free and ubiquitous online, acting as a global external hard disk, we need to look at the graduates we develop and challenge any view that higher education is primarily about the transfer of what the lecturer knows to the student. Surely the compound skills of researching the unfamiliar, combining knowledge from different areas, and making decisions with incomplete data in a moving environment are much more important. The compound module is an opportunity to facilitate learning which aligns with the course level outcomes sought.

This type of course level learning and assessment undoubtedly requires an appreciation of the skills, attitudes, values and behaviours that we wish to foster in students and it needs confidence in the tutor to facilitate rather than transmit.

Type 2: Shared assessment

The next way that I can conceive a form of course level assessment is more mechanistic. Take two modules (module one and module two, taught separately); to bring about efficiencies, the assessment of each module is undertaken within the same assignment, activity or exam. It may be an exam with two parts, one for each module; a presentation viewed by two assessors, each reviewing a separate aspect of the content; or an assignment with areas of attention clearly marked for each module. The educational benefits of this are, in my view, much less obvious than for type 1; nevertheless, students may see some links between the parts of modules in taking such an approach. The shared assessment must be designed to make clear which aspect relates to which module, or else a student could be penalised or rewarded twice for the same points. Under such an approach it is conceivable to pass one element and fail the other. I remain to be convinced of the real benefits of this approach, which feels like surface-level ‘joined-up-ness’.

Type 3: Combined assessment 

The term combined assessment is used here to describe an approach which assesses two modules through a single meaningful strategy. If there are two fifteen-credit modules, one on mathematics for engineers and one on product design, a single assessment, for example a design-and-build exercise, can draw on knowledge from each taught unit. The assessment subsumes both modules, the two elements are integrated (in contrast to the shared assessment approach) and there are potential marking efficiencies. Without clear attribution of marks to one or the other module, it may be tricky when a student fails: what do they restudy? Presumably, though, a tutor would be able to advise where the limitations of the performance lie and which unit would usefully be revisited; in some cases it may be both. In reality this approach may be little different from having a large module with two themes contained within it.

So those were my three ideas for programme level assessment, but I am convinced that there are other ways of achieving this meaningfully. The suitability of each approach will depend on what the course team want to achieve, but clearly the benefits of the compound assessment approach are very different from those of a shared or combined strategy.


Jigsaw header image courtesy of Yoel Ben-Avraham under Creative Commons: https://www.flickr.com/photos/epublicist/3545249463

Efficiency, technology and feedback

In considering staff experiences of choosing and using feedback technology, one of the emerging themes has been the differing views on feedback technologies and efficiencies. While the jury is still out on the data and the process is incomplete, my observation is that efficiency can be conceived in different ways in the negotiation of technology. For some, efficiency is a primary driver in the decision-making process: the search for technology and the refinement of its use is motivated and shaped by the quest for efficiencies. For others, efficiencies are a welcome benefit of technology, almost an unexpected gift: welcome, but not necessary.

Efficiencies also appear to be conceived relatively; rarely are efficiencies discussed without reference to the relative enhancement gains that can be made through a technology. Wherever there is a time saving there is a tendency to ‘re-spend’ the saved time making still more enhancements to the feedback, adding detail and depth for example. In this way efficiencies become difficult to identify: they are theoretically achievable, but in reality they are trumped by the possibility for improvement. Efficiency also seems to be a veto concept for some; it is not a particular concern in the run of practice, and is triggered only when a particular technology is likely to encroach on other activities or impose an intolerable stress.

Jing-tastic: an audio-visual tool

During a summer of local and international educational development workshops, ‘Jing’ has had many outings. I was struck by how this really simple facility never fails to make people say, “Wow, I can really use that”. It’s a low-ceiling technology with transformative potential. From my summer ‘tour’, here are some thoughts on how Jing can be used productively by those involved in teaching and supporting learning.

Formative feedback – an assignment walk through
As described here, and also by Russell Stannard, Jing can be used to offer feedback on actual assignment work by enabling a visual-voice combination. Feedback can be given and related to the assignment on screen. Early signs are that this approach is widely enjoyed by students, who particularly value the ability to play and replay the feedback; the personal tone of the feedback; and the privacy and convenience of getting the feedback in a location that suits them.

Best bits and “no-no’s” (one-to-many feedback)
To feed forward and enable one group to learn from another, Jing can be a way of presenting good practice and things to avoid. This needs a little care to avoid showing individuals up, but with careful doctoring any ethical issues can be avoided. This can be used as a group feedback method, and can be a useful interim form of feedback when individual comments can’t be provided in time to be useful for the next assessment. Such videos can be added to the VLE or sent directly to students.

Correction
In addition to providing feedback, Jing can directly facilitate corrections. This, I find, is particularly helpful with very specific and detailed tasks. The visual element can help the recipient use tools to make future changes; an example would be a student who has issues with alignment being shown how to use the facility in Word which shows spaces and tab marks. In discussion with colleagues, I am advised that the same principles may carry over to the correction of language or sentence structure.

Peer feedback
Lots of attention is being given to teacher-led Jing feedback, but Jing is freeware and as a result can easily be utilised for peer-to-peer feedback between students. This might even help with communication skills and confidence.

Summative feedback – a tour of the mark sheet
Students have fed back their desire to know how marks have been allocated. One way this can be brought about is through the use of Jing to discuss the mark itself; perhaps by the tutor talking through the feedback sheet, one section or outcome at a time. In this way Jing can be a useful complementary technology.

A reflective tool for students (an audio layer in the battle against plagiarism)
One of the ways we can mitigate plagiarism, and encourage learners to reflect on their learning processes, is through the inclusion of an annotated bibliography in any assignment. As an alternative, perhaps catering for different styles and preferences, students could review their own assignment and create a walk-through of any difficult points, any areas that they feel could be improved and anything they would do differently in future. They could also comment on how they found particular readings cited in their work.

Recalling assumptions (project management tool)
Since part of my role is project management, Jing also helps with remembering what we did and why. A two-minute voice-over on a spreadsheet means that when we go back and wonder how on earth we arrived at x, y or z, we have the detail captured from the moment. Jing is now, therefore, becoming a favourite of accountants and data managers as well as teachers!