Feedback conversations: How should I use technology? A nuanced approach …

One of the most frequent conversations I have is around improving feedback, and how technology can help. Increasingly I am trying to encourage a more nuanced discussion about feedback, because deciding on what feedback to give and how to give it is not simply about a choice between one tool or another. The choice should be the result of the individual lecturer’s preference, the context, the type of assessment or activity upon which feedback is being offered, the characteristics of the student or cohort, the aims of the feedback, the tools and technology required, the quality management requirements and no doubt many other factors.

Some of the common questions I get are shared below with comments:

Should I use GradeMark for feedback? 

Well, it depends a good deal on what you want to achieve. GradeMark has many benefits but in itself it will not make feedback better. To be clear, like any technology it can make a difference but it is not a silver bullet. Without meaningful engagement and a commitment to change practice, it will not improve satisfaction with feedback.

GradeMark can help you to achieve consistency in the comments that you offer to students because you can create a bank of common points to annotate the work, and it can enable you to add a greater amount of feed-forward signposting advice to students for their common errors. For example, if a group is struggling to paraphrase, you could create a comment advising on techniques and pointing to resources that might help, and reuse it many times.

GradeMark can help with a sense of fairness too, as marks can be allocated using a rubric. This is entirely optional, and there are of course other ways to employ a rubric. It can help with legibility, as comments are typed; but so too can very clear handwriting and other technologies. It can allow you to save time at certain points in the marking and feedback process, as you can get started on your marking as soon as students hand in, rather than delaying until you receive a complete set of papers. It can aid transparency when team marking; you can see how another tutor is marking and feeding back – again this is possible to achieve in other ways, but being able to see each other's marking in real time can create ongoing dialogue about the way marks are allocated and the way comments are added. The maximum benefits, though, can only be realised if the user works reflectively with the technology and does not simply transfer existing practices into a digital environment.

Finally, if you are really concerned about reading on a screen, this might be a problem. But really … if you consume news, media, research and other things via a screen, it may be worth laying aside your concerns and giving this a try.

Will it save me time? 

Yes and no. It’s not that simple. It depends how you use the facilities and what type of feedback you give. You can use as many or as few of the tools within GradeMark as you see fit, and in any combination: voice-over comments, annotations (stock comments or personalised, as if marking on paper), a rubric, auto-generated scoring from the rubric (or not) and a final summary comment. Each individual needs to look at their setup, consider what they want to achieve, and then select the aspects of the tool that work for their situation. Annotations may be better for corrective, structural feedback, or feedback on specific aspects of calculations, but the narrative may be the place to provide feedback on key ideas within the work. If you go into using GradeMark solely to achieve efficiencies, you will most likely be disappointed on first usage, because there is a set-up investment and it takes a large group or multiple iterations to get payback on that initial time spent.

In my experience those who use GradeMark may start out seeking efficiency, but end up with a focus on enhancing their feedback within the time constraints available to them. When time is saved by a user, I have seen colleagues simply re-spend this time on making enhancements, particularly to personalise the feedback further.

Ok, so what equipment do I need to be able to use GradeMark? Is it best to use a tablet?

Again, it depends very much on your workflows and preferences. A desktop computer is my preference, as I like lots of screen room and I like to settle into one spot with unlimited supplies of tea whenever I mark. Others like to be mobile, and the tablet version of GradeMark allows you to download all scripts, mark and feed back, and then upload. So unlike the desktop version you don’t need to be connected to the Internet – for those marking on the go, this is a good thing.

I see other people using other technologies for marking, like Dragon Dictate and annotation apps on tablets. Are these better than GradeMark?

There is a large range of kit available to support a tutor’s work in assessment and feedback; each has strengths and weaknesses, and each fits differently with personal preferences and context. Dragon Dictate, for example, can be used to dictate a narrative; it’s not perfect, but it may help those who struggle with typing. Annotation apps allow the conversion of handwriting to text, and they allow comments to be added at the point of need within a script (though GradeMark allows this too). On the downside, manual intervention is needed to return the feedback to students. Whilst Track Changes can be good for corrective feedback, it can cause students to look at their work and feel that it wasn’t good enough, as it has the electronic equivalent of red pen all over it!

Second markers or external examiners refuse to use the same interface… Then what …?

I’d suggest that you encourage others in the process to use the technology that you have selected. Give them early warning and offer to support the process. A pre-emptive way of dealing with this is to ensure a course-wide approach to feedback, agreeing as a group the tools that you will use. This should then be discussed with the external examiner and others at appointment. It’s harder to resist a coordinated approach. Policy change is what is really needed here, so lobbying might help!

But students like handwritten feedback; they find computer-based feedback impersonal …

Maybe so, but all students prefer legible feedback, and feedback that they can collect without coming back on to campus. Also, is it not part of our responsibility as educators to ensure students can work digitally, even with their feedback? Students who tell us that they like handwritten feedback often feel a personal connection between themselves and the marker, but feedback using technology can be made highly personalised. It is simply up to tutors to use the tools available to achieve levels of personalisation; the tools themselves offer choices to the feedback craftsman. Adding a narrative comment, an audio comment or customising stock comments can all give a personal touch. However, if the feedback giver chooses none of these things, then of course the feedback will be depersonalised.

Students say they don’t like electronic feedback…

Some might, and the reasons are complex. If we introduce a new feedback method at the end of a student’s programme, without explanation, irritation is inevitable, as we have just added a complication at a critical point. Equally, if feedback across a student’s journey is predominantly paper based, it is no wonder they struggle to remember how to retrieve their digital feedback and so get frustrated. If the feedback is too late to be useful, that will also cause students to prefer old methods. It may be useful to coordinate feedback approaches with others in your course area so the student gets a consistent approach, rather than encountering the occasional exotic technology with no clear rationale. Finally, though, students also need to be trained to do more than receive their feedback. They might file it, return to it, précis it and identify salient points. Good delivery of feedback will never alone be enough. Timeliness and engagement are also key to allowing students to gain the benefits of their feedback.

Seeing things differently ….

One of the benefits of using technology in feedback is rarely spoken about, or written of. When we engage meaningfully with technology in feedback, it can change our approach to providing feedback, irrespective of the technology. By (real) example: someone trying annotation software may realise that legibility is a real issue for them and that they must prioritise it in future; someone using a rubric may start giving priority to assessment criteria as the need for equity and consistency comes more sharply into focus; someone using stock comments and adding a voice-over becomes aware of the need for the personal touch in feedback; and finally, someone using audio becomes aware that the volume of feedback produced may be overwhelming for students to digest, and so revises their approach. These realisations live on beyond any particular technology use; so when we think of using technology for feedback, it may be useful to be conscious of the changes that can be brought about to the feedback mindset, and to judge success in these terms rather than just mastery of, or persistence with, one tool or another.

Course level assessment – nice idea, but what does it really mean?

It is increasingly clear that thinking about curriculum in the unit of ‘the course’ rather than the unit of ‘the module’ is conducive to cohesive course design. It avoids repetition, ensures the assessment journey makes sense to the student and can make feedback meaningful, as one task is designed to link to the next. I have not found much in the literature on course level assessment; while it is advocated in principle amongst educational development communities, it is perhaps less clear what course level assessment actually looks like.

I can see three possibilities, though there may be more. These conceptions are described as if delivered through the modular framework, which remains the dominant structure for programmes. Any comments on other approaches would be very welcome.

Type 1: Compound assessment

Imagine two modules being taught on entirely discrete themes. Within them might be learning about terminology, key theories, processes, and calculations. Within the modular framework they may be taught entirely independently. In such a model there is nowhere in the curriculum where these skills can be overtly combined. A third module could be introduced which draws upon learning from module one and module two. Of course in reality it may be five modules drawn upon in a sixth compound module.

By example, a module focused upon business strategy may be taught entirely separately from a module on economics. Under such a scenario students may never get to consider how changes in the economy influence strategy, the associated tactics and the need for responsive planning. It is these compound skills, abilities and levels of professional judgment that the course (not the modules) seeks to develop. One way of addressing this limitation is to provide a third module which draws together real business scenarios and concentrates on encouraging students to combine their knowledge. A ‘compound’ module could be based around case studies and real-world scenarios; it may be limited in its ‘indicative content’ and leave a degree of openness to draw more flexibly on what is happening in the current external environment. Open modules can be uncomfortable and liberating in equal measure for the tutor, as there is a less familiar script. It might concentrate on the development of professional behaviours rather than additional content. The module might have timetabled slots, or could take the form of a one-off exercise, field trip or inquiry. Teaching would be more facilitative rather than content/delivery led.

One of the challenges with such a module is that many tutors may be reluctant to give over credits to what seems to be a content-free or content-light module. Going back to basics though, graduates are necessarily more than empty vessels filled with ‘stuff’. If we look at the course level and identify what we want to produce in our outcomes, and what the aims of our programmes actually are, then the flexible compound module fits well as an opportunity for fusing knowledge and developing competent, confident, early professionals. When knowledge is free and ubiquitous online, acting as a global external hard disk, we need to look at the graduates we build and challenge any view that higher education is primarily about the transfer of what the lecturer knows to the student. Surely the compound skills of researching the unfamiliar, combining knowledge from different areas, and making decisions with incomplete data in a moving environment are much more important. The compound module is an opportunity to facilitate learning which aligns with the course level outcomes sought.

This type of course level learning and assessment undoubtedly requires an appreciation of the skills, attitudes, values and behaviours that we wish to foster in students and it needs confidence in the tutor to facilitate rather than transmit.

Type 2: Shared assessment

The next way that I can conceive a form of course level assessment is more mechanistic. Take two modules (module one and module two, taught separately); to bring about efficiencies, the assessment of each module is undertaken within the same assignment, activity or exam. It may be an exam with two parts, one for each module; it may be a presentation which is viewed by two assessors, each reviewing a separate aspect of content; or it could be an assignment which has areas of attention clearly marked for each module. The educational benefits of this are, in my view, much less obvious than for type 1; nevertheless students may see some links between the parts of modules in taking such an approach. The shared assessment must be designed to make clear which aspect relates to which module, or else a student could be penalised or rewarded twice for the same points. Under such an approach it is conceivable to pass one element and fail the other. I remain to be convinced of the real benefits of this approach, which feels like surface level ‘joined up-ness’.

Type 3: Combined assessment 

The term combined assessment is used here to describe an approach which assesses two modules through a single meaningful strategy. If there are two fifteen-credit modules, one on mathematics for engineers and one on product design, knowledge from both taught units can be drawn upon to pass a single assessment – for example, via a design and build exercise. The assessment subsumes both modules, the two elements are integrated (in contrast to the shared assessment approach) and there are potential marking efficiencies. Without clear attribution of marks to one or the other module, it may be tricky when a student fails; what do they restudy? But presumably a tutor would be able to advise where the limitations of the performance are and which unit would be usefully revisited. In some cases it may be both. In reality this approach may be little different from having a large module with two themes contained within it.

So those were my three ideas for programme level assessment, but I am convinced that there are other ways of achieving this in a meaningful way. The suitability of each approach will depend on what the course team want to achieve, but clearly the benefits of the compound assessment approach are very different from a shared or combined strategy.

Jigsaw header image courtesy of Yoel Ben-Avraham under Creative Commons

Using technology for student feedback: Lecturer perspectives. In their words

The document posted here is a collection of short narrative portraits constructed during my doctoral research, titled ‘Using technology for student feedback: Lecturer perspectives’. Within the study, fifteen participants were interviewed. Each told their story of how and why they used technology in feedback. This illuminated challenges in the development of academic practice, it uncovered some of the ways in which feedback practice is formed, and it showed some of the ways in which lecturers internally mediate technology selection.

Individual interview transcripts were reduced to portraits (essentially mini accounts). This was done using a systematic and reflexive process articulated by Seidman (2013). The portraits themselves, and the process of data reduction, provided learning which fed into the wider analytical process. These portrait stories are not all included in the final thesis in their full form; however, given that narratives can provide instant knowledge (Webster & Mertova, 2007), I wanted to publish the collection. The participant portraits are presented here because they stand alone as insights into the formation of academic practice.

DOWNLOAD Participant stories – in their words

Seidman, I. (2013). Interviewing as qualitative research: A guide for researchers in education and the social sciences (4th ed.). New York: Teachers College Press.

Webster, L., & Mertova, P. (2007). Using narrative inquiry as a research method: An introduction to using critical event narrative analysis in research on learning and teaching. Abingdon: Routledge.