Conference Reflections: Harper Adams University Learning & Teaching Conference 2017

Last week was the third Harper Adams Learning and Teaching Conference. This annual event brings together colleagues from across the institution, alongside colleagues from partner organisations in, and beyond, the UK. The conference was jam-packed with awesomeness! Although I couldn’t get to everything, the sessions that I did attend were informative and motivating.

Professor Tansy Jessop started off by inspiring a ‘nuclear climb down’ on assessment, in which teaching staff collectively agree to assess less summatively. The shift away from too much summative assessment, Tansy reminded us, only succeeds if we collectively mean it. If some tutors stealthily continue to over-assess, students will inevitably direct their attention to those activities at the expense of others. She talked about relinquishing assessment-driven pedagogies of control in favour of a curriculum driven more by learning.
The keynote also offered some pragmatic suggestions for formative feedback strategies that staff can use. I sensed a quiet wave of ‘oh yeah’ moments around the room, as the ideas were genuinely workable. Suggestions included:

  • A policy approach of fewer assessments (the coordinated climb down)
  • Use of public spaces, like blogs, to collate ongoing learning and reading (the audience drives meaningful engagement)
  • Asking students to design multiple-choice questions
  • Asking students to bring sources to class and then, through group discussion, arriving at the ‘best’ sources
  • Working with journal papers to write abstracts, or to deduce the ideas in a paper from its abstract.

My own ‘aha’ moment was to rename every formative assessment simply as an activity that drives learning. I think I knew this already, but it’s easy to drown in terminology and metrics that cloud definitions and purpose. The keynote also highlighted how we might make the most of formative feedback. Humorously critiquing some well-worn feedback structures (like the feedback sandwich), Tansy suggested that, essentially, we need to become more dialogic around feedback. We need to find ways to have conversations, find out what feedback is useful, and encourage students to solicit the right types of feedback and take control of their learning.

In one of the workshop sessions the brilliantly enthusiastic Professor Kay Sambell encouraged us to consider how we use exemplars. Sharing around the room threw up a range of practical approaches, including using exemplars to:

  • Demonstrate the underpinning features of academic writing (e.g. what is involved in making an argument)
  • Take the stress out of understanding a task, freeing up headspace for its more detailed and creative aspects – essentially, demystifying the task
  • Provide a process of socialisation into the academic requirements of assessment
  • Provide a starting point

We also discussed some of the limitations of using exemplars, including:

  • Triggering worry in students who may believe the standards set are unachievable
  • Stifling creativity, as students might only see one way to complete the task
  • Risking students believing the exemplar to be the finished article rather than a finished article

Moving on from our evaluation, we identified different things to do with exemplars. We were united in agreeing that just handing out examples would do little in itself to help students. Active use of exemplars was shown to include such things as:

  • Peer marking to familiarise students with task requirements
  • Discussion of different exemplars
  • Rank ordering exemplars
  • Analysing ‘fail’ grade work to help understand what should be avoided

Decisions about how to use exemplars included whether to annotate, whether to provide full or partial exemplars, and whether to use student work only or tutor-generated work too. By the end of this session my ‘note to self’ was that looking at weaker work in depth is a valuable step in working with exemplars. It provides students with a window into the assessment process, it can help them avoid common pitfalls and it can massively raise awareness of issues of academic practice.

Rebekah Gerard’s poster was a great complement to Kay’s session. Bex showed how live exemplars can be used in a workshop session to improve exam technique. She used a technique called ‘pass the problem’, and her PgC action research showed how students experienced this strategy. Her poster sets out the technique she used, for ease of replication.

Dr Jaqueline Potter, from Keele University, shared her analysis of teaching excellence award nominations, which had led to a better understanding of the qualities students value in staff. The overwhelming message was about kindliness. Whilst students want constructive, joined-up and useful feedback, they really want it as a personal, kindly interaction. How to be kind is quite a different matter, but presumably remembering what it was to be a student would go a long way towards keeping an empathetic mindset. After completing our in-house PgC in Teaching and Supporting Learning, many colleagues report that their best learning comes from being a student again and gaining an understanding of the stresses, strains and liminality of that process. Perhaps, to embody the kindness that Jackie’s research has highlighted, we should all be eternal students. My note to self here is to follow Jackie’s lead and analyse the scheme data I hold on teaching excellence – to ask: what do students value?

Jane Headley and Rebecca Payne’s session on exemplars was great fun! By setting us a task (getting your team through a piece of A5 paper) and giving each group a different experience with an exemplar, they let us feel the use of exemplars at first hand. Our team had an exemplar in full, but as a team that wanted to be original (I was just happy to pass, but others wanted to excel) we decided to ditch our knowledge of the exemplar and add our own twist. The result was to redefine ‘team’ (after all, the brief didn’t say a human version of your team) and to create a stop-motion video. This first-hand experience showed me that exemplars can show students that a task is possible, which can then free up the creative mind to do the task differently. Working in a team, on an enjoyable task, simply added to the creativity. This, too, is something we would do well to remember!

For posterity I have retained a conference programme.

LT_Conference_2017 programme

The only bad thing about the day was not being able to get to all of the sessions. Luckily I have heard the other speakers before, and they are all awesome!

Making digital exemplars

In addition to my usual classroom use of exemplars as a means of familiarising students with the assessment requirements of a specific module, this year I have created a video walk-through of an exemplar. Initially this was to enable those who missed the relevant class to catch up on the session, but the approach was welcomed by students who attended the exemplars activity session as well as those who did not.

How to create a digital exemplar walk-through:

• Select a ‘good’ piece of work and bring the exemplar up on screen

• Read through and use comments in Word to annotate the work with points that would be surfaced in feedback, particularly comments related to the assessment criteria (of course!). Comments include things done well, things done less well that could be avoided, and opportunities for further detail and development. This tagging process acts only as an aide-memoire, so that as I create the feedback video I am aware of what I wanted to include.

• Open Screencast-o-Matic to screen-record the work as a video while I re-read it and talk through each of the tags: ‘This work includes this … which is useful because …’; ‘This work used literature in this way … It might be better to … because …’. None of this is rehearsed; that would be too time consuming. The resulting video is a commentary on performance.

• Upload the video and make it available to students.

After using the resource there was some consensus amongst my students that the value was ONLY in listening to the assessment commentary, not specifically in looking at the work. One student described how they listened but did not watch. They then recorded notes about what they should include, remember and avoid. They avoided looking at the work for fear of having their own ideas reshaped. If assessment judgments are ‘socially situated interpretive act[s]’ (Handley, den Outer & Price, 2013), then the digitised marking commentary may be a useful way of making that process more transparent for students, and indeed for other staff.

I will definitely be including this in future modules.

Handley, K., den Outer, B. & Price, B. (2013) Learning to mark: exemplars, dialogue and participation in assessment communities. Higher Education Research & Development, 32(6).

9 Things to do with Assessment Rubrics

I’ve used rubrics in assessment marking since I first held an academic role some fifteen-ish years ago. For me, rubrics are an essential tool in the assessment toolkit. It’s important to recognise, though, that they are not a ‘silver bullet’: if not integrated into teaching and support for learning, they may have no impact whatsoever on student engagement with assessment. I am therefore trying to collate a list of the ways in which rubrics can be used with students to enhance their performance, help them build confidence and demystify the assessment process. My top nine, in no particular order, are as follows:

  1. Discuss the rubric and what it means. This simply helps set out expectations and requirements, and provides opportunities for clarification.
  2. Encourage students to self-assess their own performance using the rubric, so that they engage more deeply with the requirements of the assessment.
  3. Encourage students to peer assess each other’s performance using the rubric, leading to further familiarization with the task, as well as the development of critical review and assessment judgment skills. This also allows the seeding of further ideas in relation to the task, through exposure to the work of others.
  4. Get students to identify the mark they are aiming for and re-write the criteria in their own words. This sparks discussion about the requirements, flushes out any issues needing clarity and can result in students raising their aspirations (as the ‘assessment code’ is decrypted there are moments of “If that’s what it means … I can do that”).
  5. Facilitate a negotiation of the rubric. Where fully student-led creation of a rubric is impractical, or not desirable, a tentative rubric can be presented and negotiated with the class. Students can influence the coverage, the language and the weightings. As well as familiarising students with the requirements, this allows a sense of ownership to develop. In my own experience, rubrics are always the better for student negotiation.
  6. Undertake a class brainstorm as the basis for the rubric design. Ask what qualities should be assessed e.g. report writing skills, then identify what this means to students themselves e.g. flow, use of literature to support argument. Then use this list to develop a rubric. It is a form of negotiation, but specifically it allows the rubric to grow out of student ideas. By using student language, the criteria are already written in a form that is accessible to the group (after all they designed the key components).
  7. Simply use the rubric as the basis for formative feedback with students to aid familiarity.
  8. Use the criteria to assess exemplars of previous students’ work. This has the benefits of familiarity and of developing assessment judgment, as well as sparking new ideas through exposure to past students’ work. Of course this can be further developed with full sessions or online activities built around exemplar review, but the rubric can be central to this.
  9. A rubric can be partially written to offer space for choice. Leaving aspects of the rubric for students to complete leaves room for them to show their individuality and to customise tasks. Rubrics needn’t box us into total uniformity. Recently I created a rubric for a research project and left space for students to articulate the presentational aspects of the criteria; some students filled in the rubric to support the production of a report, others a poster and others a journal article. A sketch of what this might look like follows this list.
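To make point nine concrete, here is a hypothetical sketch of a partially written rubric of the kind described above; the criteria headings and weightings are invented for illustration, not taken from an actual module:

  • Research design (30%) – criteria written by the tutor
  • Use of literature (30%) – criteria written by the tutor
  • Analysis and interpretation (20%) – criteria written by the tutor
  • Presentation (20%) – left blank, to be completed by each student for their chosen output (report, poster or journal article)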
[Image: using a class brainstorm to form the basis of a rubric, with criteria relating to reflection]

I have only included approaches that I have used first-hand. I’d like to build this list up with the experiences of others, so if you have additional suggestions please do let me know.

Undergraduate Vivas

[Image: link to the undergraduate viva guidance document]

Over the last six months I have been looking into the undergraduate viva, asking questions such as: What are the benefits? What makes a good undergraduate viva? And how can students be prepared for theirs? One of the results of this work is a guidance document on how to conduct a viva of this type, which may be of interest to others.

Feedback conversations: How should I use technology? A nuanced approach …

One of the most frequent conversations I have is around improving feedback, and how technology can help. Increasingly I try to encourage a more nuanced discussion, because deciding what feedback to give, and how to give it, is not simply a choice between one tool and another. The choice should reflect the individual lecturer’s preferences, the context, the type of assessment or activity on which feedback is being offered, the characteristics of the student or cohort, the aims of the feedback, the tools and technology required, the quality management requirements and no doubt many other factors.

Some of the common questions I get are shared below with comments:

Should I use GradeMark for feedback? 

Well, it depends a good deal on what you want to achieve. GradeMark has many benefits but in itself it will not make feedback better. To be clear, like any technology it can make a difference but it is not a silver bullet. Without meaningful engagement and a commitment to change practice, it will not improve satisfaction with feedback.

GradeMark can help you to achieve consistency in the comments that you offer to students because you can create a bank of common points to annotate the work, and it can enable you to add a greater amount of feed forward signposting advice to students for their common errors. For example, if a group are struggling to paraphrase, you could create a comment advising of techniques and pointing to resources that might help and use this many times.

GradeMark can help with a sense of fairness too, as marks can be allocated using a rubric. This is entirely optional, and there are of course other ways to employ a rubric. It can help with legibility, as comments are typed; but so too can very clear handwriting and other technologies.

It can also allow you to save time at certain points in the marking and feedback process, as you can get started on your marking as soon as students hand in rather than waiting until you receive a complete set of papers. And it can aid transparency when team marking: you can see how another tutor is marking and feeding back – again, this is possible to achieve in other ways, but being able to see each other’s marking in real time can create ongoing dialogue about the way marks are allocated and the way comments are added. The maximum benefits, though, can only be realised if the user works reflectively with the technology rather than simply transferring existing practices into a digital environment.

Finally, if you are really concerned about reading on a screen, this might be a problem. But really … if you consume news, media, research and other things via a screen, it may be worth laying aside your concerns and giving this a try.

Will it save me time? 

Yes and no – it’s not that simple. It depends how you use the facilities and what type of feedback you give. You can use as many or as few of the tools within GradeMark as you see fit, and in any combination: voice-over comments; annotations (stock comments, or personalised as if marking on paper); a rubric, with or without auto-generated scoring; and a final summary comment. Each individual needs to look at their set-up, consider what they want to achieve, and then select the aspects of the tool that work for their situation. Annotations may be better for corrective, structural feedback, or feedback on specific aspects of calculations, while the narrative may be the place to provide feedback on the key ideas within the work. If you go into using GradeMark solely to achieve efficiencies, you will most likely be disappointed on first use, because there is a set-up investment and it takes a large group, or multiple iterations, to get payback on that initial time spent.

In my experience those who use GradeMark may start out seeking efficiency, but end up focusing on enhancing their feedback within the time constraints available to them. Where time is saved, I have seen colleagues simply re-spend it on enhancements, particularly on personalising the feedback further.

Ok, so what equipment do I need to be able to use GradeMark? Is it best to use a tablet?

Again, it very much depends on your workflows and preferences. A desktop computer is my preference, as I like lots of screen room and I like to settle into one spot with unlimited supplies of tea whenever I mark. Others like to be mobile, and the tablet version of GradeMark allows you to download all the scripts, mark and feed back, and then upload. So, unlike the desktop version, you don’t need to be connected to the Internet – for those marking on the go, this is a good thing.

I see other people using other technologies for marking, like Dragon Dictate and annotation apps on tablets. Are these better than GradeMark?

There is a large range of kit available to support a tutor’s work in assessment and feedback; each has strengths and weaknesses, and each fits differently with personal preferences and context. Dragon Dictate, for example, can be used to speak a narrative; it’s not perfect, but it may help those who struggle with typing. Annotation apps allow the conversion of handwriting to text, and they allow comments to be added at the point of need within a script (though GradeMark allows this too); on the downside, a manual intervention is needed to return the feedback to students. Whilst Track Changes can be good for corrective feedback, it can cause students to look at their work and feel that it wasn’t good enough, as it has the electronic equivalent of red pen all over it!

Second markers or external examiners refuse to use the same interface… Then what …?

I’d suggest that you encourage others in the process to use the technology that you have selected: give them early warning and offer to support the process. A pre-emptive way of dealing with this is to ensure a course-wide approach to feedback, agreeing as a group the tools that you will use. This should then be discussed with the external and others at appointment; it’s harder to resist a coordinated approach. Policy change is what is really needed for this, so lobbying might help!

But students like handwritten feedback; they find computer-based feedback impersonal …

Maybe so, but all students prefer legible feedback, and feedback that they can collect without coming back onto campus. Also, is it not part of our responsibility as educators to ensure students can work digitally, even with their feedback? Students who tell us that they like handwritten feedback often feel a personal connection between themselves and the marker, but feedback using technology can be made highly personalised. It is up to tutors to use the tools available to achieve that personalisation; the tools themselves offer choices to the feedback craftsman. Adding a narrative comment, an audio comment or customised stock comments can all give a personal touch. However, if the feedback giver chooses none of these things, then of course the feedback will be depersonalised.

Students say they don’t like electronic feedback…

Some might, and the reasons are complex. If we introduce a new feedback method at the end of a student’s programme, without explanation, irritation is inevitable: we have just added a complication at a critical point. Equally, if feedback across a student’s journey is predominantly paper based, it is no wonder they struggle to remember how to retrieve their digital feedback, and so get frustrated. If the feedback is too late to be useful, that will also cause students to prefer old methods. It may be useful to coordinate feedback approaches with others in your course area so the student gets a consistent approach, rather than encountering the occasional exotic technology with no clear rationale. Finally, students also need to be trained to do more than receive their feedback: they might file it, return to it, précis it and identify salient points. Good delivery of feedback will never be enough on its own; timeliness and engagement are also key to allowing students to gain the benefits of their feedback.

Seeing things differently ….

One of the benefits of using technology in feedback is not often spoken about, or written of: when we engage meaningfully with technology in feedback, it can change our approach to providing feedback, irrespective of the technology. By (real) example: someone trying annotation software may realise that legibility is a real issue for them and that they must prioritise it in future; someone using a rubric may start giving priority to assessment criteria as the need for equity and consistency comes more sharply into focus; someone using stock comments and adding a voice-over becomes aware of the need for a personal touch in feedback; and someone using audio becomes aware that the volume of feedback produced may be overwhelming for students to digest, and so revises their approach. These realisations live on beyond any particular use of technology. So when we think of using technology for feedback, it may be useful to be conscious of the changes it can bring about in the feedback mindset, and to judge success in those terms rather than just by mastery of, or persistence with, one tool or another.

Course level assessment – nice idea, but what does it really mean?

It is increasingly clear that thinking about curriculum in the unit of ‘the course’ rather than the unit of ‘the module’ is conducive to cohesive course design. It avoids repetition, ensures the assessment journey makes sense to the student and can make feedback meaningful, as one task is designed to link to the next. I have not found much in the literature on course level assessment; while it is advocated in principle amongst educational development communities, it is perhaps less clear what course level assessment actually looks like.

I can see three possibilities, though there may be more. These conceptions are described as if delivered through the modular frameworks that remain dominant for programmes. Any comments on other approaches would be very welcome.

Type 1: Compound assessment

Imagine two modules taught on entirely discrete themes. Within them might be learning about terminology, key theories, processes and calculations. Within the modular framework they may be taught entirely independently, and in such a model there is nowhere in the curriculum where these skills can be overtly combined. A third module could be introduced which draws upon the learning from modules one and two. Of course, in reality it may be five modules drawn upon in a sixth, compound module.

For example, a module on business strategy may be taught entirely separately from a module on economics. Under such a scenario students may never get to consider how changes in the economy influence strategy, the associated tactics and the need for responsive planning. It is these compound skills, abilities and levels of professional judgment that the course (not the modules) seeks to develop. One way of addressing this limitation is to provide a third module which draws together real business scenarios and concentrates on encouraging students to combine their knowledge. A ‘compound’ module could be based around case studies and real-world scenarios; it might be limited in its ‘indicative content’, leaving a degree of openness to draw more flexibly on what is happening in the current external environment. Open modules can be uncomfortable and liberating in equal measure for the tutor, as there is a less familiar script. Such a module might concentrate on the development of professional behaviours rather than additional content. It might have timetabled slots, or could take the form of a one-off exercise, field trip or inquiry. Teaching would be facilitative rather than content- or delivery-led.

One of the challenges with such a module is that many tutors may be reluctant to give over credits to what seems to be a content-free, or content-light, module. Going back to basics, though, graduates are necessarily more than empty vessels filled with ‘stuff’. If we look at the course level and identify what we want our outcomes to produce, and what the aims of our programmes actually are, then the flexible compound module fits well as an opportunity for fusing knowledge and developing competent, confident early professionals. When knowledge is free and ubiquitous online, acting as a global external hard disk, we need to look at the graduates we build and challenge any view that higher education is primarily about transferring what the lecturer knows to the student. Surely the compound skills of researching the unfamiliar, combining knowledge from different areas, and making decisions with incomplete data in a moving environment are much more important. The compound module is an opportunity to facilitate learning which aligns with the course level outcomes sought.

This type of course level learning and assessment undoubtedly requires an appreciation of the skills, attitudes, values and behaviours that we wish to foster in students and it needs confidence in the tutor to facilitate rather than transmit.

Type 2: Shared assessment

The next way that I can conceive a form of course level assessment is more mechanistic. Take two modules (module one and module two, taught separately); to bring about efficiencies, the assessment of each module is undertaken within the same assignment, activity or exam. It may be an exam with two parts, one for each module; a presentation viewed by two assessors, each reviewing a separate aspect of the content; or an assignment with areas of attention clearly marked for each module. The educational benefits of this are, in my view, much less obvious than for type 1; nevertheless, students may see some links between the parts of modules assessed in this way. The shared assessment must be designed to make clear which aspect relates to which module, or else a student could be penalised or rewarded twice for the same points. Under such an approach it is conceivable to pass one element and fail the other. I remain to be convinced of the real benefits of this approach, which feels like surface-level ‘joined-up-ness’.

Type 3: Combined assessment 

The term combined assessment is used here to describe an approach which assesses two modules through a single, meaningful strategy. If there are two fifteen-credit modules, one on mathematics for engineers and one on product design, a single assessment can draw on knowledge from each taught unit – for example, a design-and-build exercise. The assessment subsumes both modules, the two elements are integrated (in contrast to the shared assessment approach) and there are potential marking efficiencies. Without clear attribution of marks to one module or the other, it may be tricky to decide what a student who fails should restudy; presumably, though, a tutor would be able to advise where the limitations of the performance lie and which unit would usefully be revisited. In some cases it may be both. In reality this approach may be little different from having a large module with two themes contained within it.

So those were my three ideas for course level assessment, but I am convinced that there are other ways of achieving it meaningfully. The suitability of each approach will depend on what the course team want to achieve, but clearly the benefits of the compound approach are very different from those of a shared or combined strategy.


Jigsaw header image courtesy of Yoel Ben-Avraham, under Creative Commons: https://www.flickr.com/photos/epublicist/3545249463

Efficiency, technology and feedback

In considering staff experiences of choosing and using feedback technology, one of the emerging themes has been the differing views on feedback technologies and efficiency. While the jury is still out on the data and the process is incomplete, my observation is that efficiency can be conceived in different ways in the negotiation of technology.

For some, efficiency is a primary driver in the decision-making process: the search for technology, and the refinement of its use, is motivated and shaped by the quest for efficiencies. For others, efficiencies are a welcome but unnecessary benefit of technology – almost an unexpected gift.

Efficiencies also appear to be conceived relatively; rarely are they discussed without reference to the relative enhancement gains that a technology can bring. Wherever there is a time saving there is a tendency to ‘re-spend’ the saved time making still more enhancements to the feedback – adding detail and depth, for example. In this way efficiencies become difficult to identify: they are theoretically achievable, but in reality they are trumped by the possibility of improvement.

Efficiency also seems to be a veto concept for some; it is not a particular concern in the run of practice, and is triggered only when a particular technology threatens to encroach on other activities or create intolerable stress.