Conference Reflections: Harper Adams University Learning & Teaching Conference 2017

Last week was the third Harper Adams Learning and Teaching Conference. This annual event brings together colleagues from across the institution, alongside colleagues from partner organisations in the UK and beyond. The conference was jam-packed with awesomeness! Although I couldn’t get to everything, the sessions that I did attend were informative and motivating.

Professor Tansy Jessop started off by inspiring a ‘nuclear climb-down’ on assessment, in which teaching staff collectively agree to summatively assess less. The shift away from too much summative assessment, Tansy reminded us, only succeeds if we collectively mean it. If some tutors stealthily continue to over-assess, then inevitably students will direct their attention to these activities at the expense of others. She talked about relinquishing assessment-driven pedagogies of control in favour of a curriculum driven more by learning.
The keynote also brought some pragmatic suggestions for formative feedback strategies that staff can use. I sensed a quiet wave of ‘Oh yeah’ moments around the room, as the ideas were really workable. Suggestions included:

  • A policy approach of fewer assessments (the coordinated climb-down)
  • Use of public spaces, like blogs, to collate ongoing learning and reading (the audience drives meaningful engagement)
  • Asking students to design multiple-choice questions
  • Asking students to bring sources to class and then, through group discussion, arrive at the ‘best’ sources
  • Working with journal papers to write abstracts or deduce ideas in papers based on abstracts.

My own ‘aha’ moment was to rename every formative assessment simply as an activity that drives learning. I think I knew this already, but it’s easy to drown in terminology and metrics that cloud definitions and purpose. The keynote also highlighted how we might make the most of formative feedback. Humorously critiquing some well-used feedback structures (like the feedback sandwich), Tansy suggested that, essentially, we need to become more dialogic around feedback. We need to find ways to have conversations, find out what feedback is useful, and encourage students to solicit the right types of feedback and to take control of their learning.

In one of the workshop sessions the brilliantly enthusiastic Professor Kay Sambell encouraged us to consider how we use exemplars. Sharing around the room threw up some different practical approaches, including using exemplars to:

  • Demonstrate the underpinning features of academic writing (e.g. what is involved in making an argument)
  • Take the stress out of understanding a task, freeing up headspace for the more detailed and creative aspects of the task; essentially, this is about demystifying the task
  • Provide a process of socialisation into the academic requirements of assessment
  • Provide a starting point

We also discussed some of the limitations of using exemplars, which included:

  • Triggering worry in students who may believe the standards set to be unachievable
  • Stifling creativity, as students might only see one way to complete the task
  • Risking students believing the exemplar to be the finished article rather than a finished article

Moving on from our evaluation, we identified different things to do with exemplars. We were united in agreeing that just giving out examples would do little in itself to help students. Active use of exemplars was shown to include such things as:

  • Peer marking to familiarise students with task requirements
  • Discussion of different exemplars
  • Rank ordering exemplars
  • Analysing ‘fail’ grade work to help understand what should be avoided

Decisions about how to use exemplars included whether to annotate, whether to provide full or partial exemplars, and whether to use student work only or to consider tutor-generated work too. By the end of this session my ‘note to self’ was that looking at weaker work in depth is a valuable step in working with exemplars. It provides a window into the assessment process for students, it can help them avoid common pitfalls and it can massively raise awareness of issues of academic practice.

Rebekah Gerard’s poster was a great complement to Kay’s session. Bex showed how live exemplars can really be used in a workshop session to improve exam technique. She used a technique called ‘pass the problem’, and her PgC action research showed how students experienced this strategy. For ease of replicability, her poster sets out the technique she used.

Dr. Jaqueline Potter, from Keele University, shared her analysis of teaching excellence award nominations, which had led to a better understanding of what qualities students value in staff. The overwhelming message was about kindliness. Whilst students want constructive, joined-up and useful feedback, they really want it as a personal, kindly interaction. How to be kind is quite a different matter, but presumably remembering what it was to be a student would go a long way to help keep an empathetic mindset. After completing our in-house PgC in Teaching and Supporting Learning, many colleagues report that their best learning comes from being a student again and gaining an understanding of the stresses, strains and liminality of this process. Perhaps to embody the kindness that Jackie’s research has highlighted, we should all be eternal students. My note to self here is to follow Jackie’s lead and analyse the scheme data I hold on teaching excellence – or ask: what do students value?

Jane Headley and Rebecca Payne’s session on exemplars was great fun! By setting us a task (getting your team through a piece of A5 paper) and giving each group a different experience with an exemplar, we were able to feel and experience the use of exemplars. Our team had an exemplar in full, but as a team who wanted to be original (I was just happy to pass, but others wanted to excel) we decided to ditch knowledge of the exemplar and add our own twist. The result was redefining ‘team’ (after all, it didn’t say a human version of your team) and creating a stop-motion video. This first-hand experience showed me that exemplars can show students that a task is possible, which can then free up the creative mind to do the task differently. Working in a team, and with an enjoyable task, simply added to the creativity. This point too is something we would do well to remember!

For posterity I have retained a conference programme.

LT_Conference_2017 programme

The only bad thing about the day was not being able to get to all of the sessions. Luckily I have previously heard the other speakers and they are all awesome!

From BTEC to HE – reflections on student conversations

Recently I have been reflecting on the experience of students from vocational backgrounds who come to university. We know that, in some universities, success rates amongst BTEC students are lower than amongst those from A-level backgrounds, but I am not sure that we really understand why this is the case, or what it is really like to be a student from a vocational background in higher education. I am presently trying to understand a little more about the VQ in HE (Vocational Qualifications in Higher Education) student experience. Some observations from my recent purposeful conversations with students are shared below.

  • The academic world can be confusing and stressful after a BTEC. The courses and expectations in HE are very different from those experienced previously but, on the upside, with assistance of the right type, students can be well prepared to thrive. Assessment is an area where key differences are felt. It is not just the profile of assessment types that may differ, but the culture of ‘submit-feedback-improve-resubmit’ that seems prevalent at level 3 but is often lacking at level 4 and beyond.
  • The types of support that can be useful include academic skills development, particularly focusing on:
    • Equipping students to understand the requirements of an assessment
    • Teaching students how to break down a task to minimise feelings of being overwhelmed
    • Developing time management skills
    • Developing organisation skills
    • Building confidence to counter imposter syndrome (simple reminders that ‘you’re doing fine’ mean so, so much)
    • Revisiting class content – going over lecture notes
    • Getting started with writing (e.g. providing structure for students to frame their own writing)
    • Locating reading and sources
    • Referencing (supporting these skills, and not being pedantic while students are getting to grips with sourcing information)
  • Skills development and reduction of stress were often talked about together. Academic support and counselling skills sit side by side.
  • While VQ students may face challenges with specific aspects of their course, there may be many other aspects where they feel confident and have a good degree of mastery from their vocational qualification. This raises the question of whether more can be done by way of skills exchanges or peer mentorship between A-level and BTEC students. While we should be concerned about the achievement and experience of specific groups, we need to be very careful not to create a deficit, fix-it culture. More might be done simply to recognise the specific and valuable strengths brought to the mix by students from vocational backgrounds.
  • The idea that students from a BTEC background prefer coursework because it is ‘what they are used to’ does not tally with all of the discussions that I’ve had. Students tell me that they can quickly learn to thrive with the exam format. While the first one or two are nerve-wracking, with support and revision strategies many students can start to feel relatively comfortable with this type of assessment. The re-introduction of exam-format assessments is, at least according to those I have spoken with, less stressful when the content of the exams aligns with the coverage of their prior vocational qualification, rather than presenting entirely unfamiliar content and demands. While this insight into exam perception challenges my own assumption that BTEC students may not be comfortable with this type of assessment, it’s important to remember that perception/preference and actual learning gain are different things.
  • Through my student conversations I have been reminded of tutor actions that can be particularly useful for VQ students (and indeed all students) in preparing for exams:
    • Provision of past papers and signposting to these
    • Revision classes
    • Providing checklists of what topics should be revised
    • Highlighting key topics in class to guide revision focus
    • Providing model answers
    • Providing a booklet of practice questions
    • Providing a menu of revision techniques to encourage active revision
    • Comprehensive session resources shared in a format that can easily be revisited
  • Overwhelmingly, the students I have spoken with said that the most important thing about support is that it is accessible, welcoming and friendly. The tone in which support is offered absolutely matters.
  • Handing in those early pieces of work is a really big step within the university experience. Having some kind of facility to have work reviewed before submission is seen as really valuable in removing fear and anxiety. There are of course many ways that such a step can be built into the formative feedback journey.

Undoubtedly all of these points could be addressed through a universal design approach to learning, whereby the curriculum, the classroom, the learning relationships and the online environment are intended to allow as many students as possible to reach their potential.

Effective feedback through GradeMark

This week I’ve been asked to share my own thoughts on how GradeMark can be used effectively. After some recent student interviews on this topic, here are my top ten points, some of which apply outside of GradeMark too.

  1. Give sufficient emphasis in feedback to content and context as well as structural issues in student submissions. Students sometimes report that feedback places too much emphasis on grammar, layout and spelling and not enough on content. Some tutors may have a preference for feeding back on what is easier to see.
  2. Avoid annotations without explanation (e.g. exclamation marks, highlighting, question marks) – students don’t know what shorthand annotations mean. Give ideas on what students can do to improve their work in future, e.g. ‘you might use a range of literature sources and compare the different ideas, rather than only providing one perspective’.
  3. Use the full range of GradeMark’s features to help with feeding forward on how to improve and to provide links to further advice. This is almost impossible to achieve by hand.
  4. Wherever possible make sure that feedback can help with the next assignment – know what students will study next, so you can make these links.
  5. Capture4
    Relating comments to criteria

    Use the criteria explicitly in the feedback so that students can see how decisions about their work have been made. You could even use the colour feature within GradeMark to relate comments to different criteria, e.g. pink for analysis, green for evaluation, yellow for structure and writing style.

  6. Avoid using the term “I think …”. While we all know feedback has a degree of subjectivity to it, making links to the criteria should counter the sense that personal preferences determine marks. Students often feel that their marks are about playing to the preferences of different tutors. This can be countered by consistent use of criteria.
  7. Use summative comments as well as annotation to draw attention to the most important areas for future development and/or any specific issues arising. Students may not always be able to prioritise amongst many annotations.
  8. To speed up your workflow, consider using voice-to-text facilities alongside GradeMark. This is especially simple on Apple devices, where the built-in speech-to-text is usable and accurate.
  9. Use common errors and issues from one year to pre-prepare comments for use in feedback in the following session (assuming adjustments in teaching don’t address everything). Because these are prepared in advance the advice can be more detailed and helpful and they can be used in teaching so that students can absolutely see what they should and shouldn’t do.
  10. Capture3
    Personalising a generic comment

    Wherever possible, personalise comments; students appreciate the interaction or dialogue through feedback when it feels personalised to them or, when anonymous, to their work – “I suppose because we are all in the same boat, I suppose it wasn’t really personalized because I suppose a majority of us get the same things wrong” (Student quote 2017). So if you aren’t personalising, advise the students why this is the case. If you do want to personalise, there are many ways to do this, including:

  • Using an audio comment in addition to generic text comments
  • Adding additional specific comments on to generic comments
  • Including a summary comment in addition to annotation

I’m sure there are other points, and some of these are already well rehearsed in the literature, though point 1 in particular is little explored elsewhere.

 

9 Things to do with Assessment Rubrics

I’ve used rubrics in assessment marking since I first held an academic role some fifteen-ish years ago. For me, rubrics are an essential tool in the assessment toolkit. It’s important to recognize that they are not a ‘silver bullet’: if not integrated into teaching and support for learning, they may have no impact whatsoever on student engagement with assessment. I am therefore trying to collate a list of the ways in which rubrics can be used with students to enhance their performance, help them grow in confidence and demystify the assessment process. My top nine, in no particular order, are as follows:

  1. Discuss the rubric and what it means. This simply helps set out expectations and requirements, and provides opportunities for clarification.
  2. Encourage students to self-assess their own performance using the rubric, so that they engage more deeply with the requirements of the assessment.
  3. Encourage students to peer assess each other’s performance using the rubric, leading to further familiarization with the task, as well as the development of critical review and assessment judgment skills. This also allows the seeding of further ideas in relation to the task, through exposure to the work of others.
  4. Get students to identify the mark that they are aiming for and re-write the criteria in their own words. This sparks discussion about the requirements, flushes out any issues needing clarity and can result in students raising their aspirations (as the ‘assessment code’ is decrypted, there are moments of “If that’s what it means … I can do that”).
  5. Facilitate a negotiation of the rubric. Where full student-led creation of a rubric is impractical, or not desirable, a tentative rubric can be presented and negotiated with the class. Students can have an influence on the coverage, the language and the weightings. As well as familiarizing students with the requirements, this allows a sense of ownership to develop. In my own experience, rubrics are always the better for student negotiation.
  6. Undertake a class brainstorm as the basis for the rubric design. Ask what qualities should be assessed e.g. report writing skills, then identify what this means to students themselves e.g. flow, use of literature to support argument. Then use this list to develop a rubric. It is a form of negotiation, but specifically it allows the rubric to grow out of student ideas. By using student language, the criteria are already written in a form that is accessible to the group (after all they designed the key components).
  7. Simply use the rubric as the basis for formative feedback with students to aid familiarity.
  8. Use the criteria to assess exemplars of previous students’ work. This has the benefits of building familiarity and developing assessment judgment, as well as sparking new ideas from exposure to past students’ work. Of course this can be further developed with full sessions or online activities built around exemplar review, but the rubric can be central to this.
  9. A rubric can be partially written to offer space for choice. Leaving aspects of the rubric for students to complete leaves room for them to show their individuality and to customize tasks. Rubrics don’t box us into total uniformity. Recently I created a rubric for a research project and left space for students to articulate the presentational aspects of the criteria. Some students filled in the rubric to support the production of a report, others a poster and others a journal article.
img_20160908_152105
Using a class brainstorm to form the basis of a rubric with criteria relating to reflection 

I have only included approaches that I have used first hand. I’d like to build this up with the experiences of others; if you have additional suggestions please do let me know.

Undergraduate Vivas

viva
Access the guide by clicking the image above

Over the last six months I have been looking into the undergraduate viva, asking questions such as: What are the benefits? What makes a good undergraduate viva? How can students be prepared for their undergraduate viva? One of the results of this is a guidance document on how to conduct a viva of this type. It may be of interest to others.

The process of defining graduate attributes

I am aware others are grappling with how to define graduate attributes, so I thought it helpful to share the approach that we took. As part of a whole-university curriculum review, and a strategy review, we set about trying to identify what it was that the curriculum should achieve. Essentially we asked, what is our goal? Unless we know this, any curriculum initiative would be mere tinkering. So we asked a very fundamental question: what should a Harper graduate be? This goes beyond simply asking what they should be able to do, and incorporates the sense of self that is needed to deal with a fast-changing external environment and to be resilient for the future. This idea is underpinned by Ron Barnett’s work on supercomplexity. It’s a huge question, but one that we answered, I think, in a creative way.

BAG.png
Resources from the ‘build a graduate’ workshop

We gathered as many staff as were able to attend in a room with huge pieces of card, each printed with a giant graduate. Working in course teams, staff were then asked to build a graduate in their discipline. Using the card as a focus for thinking, prioritising, debate and discussion, each team built their own graduate. Of course this informed course-level thinking before more detailed discussions got underway about course content. Using post-it notes to stick on to the graduate allowed rearrangement, re-prioritisation and change as the group discussions evolved. The views in the room were not formed in isolation, since colleagues were involved in both student and industry engagement.

 

After each team had spent several hours identifying what their graduate would look like in a perfect world, we collated all of the words used by all of the teams and put them into a word cloud creator. The commonality in the lists showed itself as the larger words were repeated across different course areas. After some sorting and filtering it became clear that we did have a collective and common vision of what the graduates of the future should be. This exercise became the foundation of the new graduate attributes. The build-a-graduate exercise was also undertaken by course teams with students and industry contacts. The word cloud produced is shown below.
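For anyone wanting to replicate the collation step, a minimal sketch is below. It assumes the words from each team’s card were simply typed up into a list; the example words are invented, and the open-source Python wordcloud package stands in for whichever word cloud creator you prefer.

    from collections import Counter
    from wordcloud import WordCloud

    # Words gathered from each course team's 'build a graduate' card
    # (invented examples, not the actual workshop data)
    team_words = [
        "resilient", "communicator", "problem-solver", "resilient",
        "ethical", "communicator", "resilient", "digitally literate",
    ]

    # Count repetitions: words shared across course areas rise to the top
    counts = Counter(team_words)

    # Render the cloud; more frequent words are drawn larger,
    # making the commonality between teams visible at a glance
    cloud = WordCloud(width=800, height=400, background_color="white")
    cloud.generate_from_frequencies(counts)
    cloud.to_file("graduate_attributes_cloud.png")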

The word cloud gave students and staff a visual connection to the exercise that we had undertaken, and a constant reminder that the definition of ‘our’ graduateness was a collective exercise.

wordle.PNG
A first workshop output on defining graduateness

 

Capture2.PNG
The final version of the graduate attributes 

 

The headline attributes helped to ground the Learning and Teaching Strategy; they provided clear direction as to what our activity should be pointing towards, and one of the key cascading ideas for strategy and operational policy.

 

For the curriculum aspects, once we had the broad terms for what a graduate should be, we interpreted each attribute, skill or area of understanding for each level of study. This involved some word-smithery and some external scoping to see how others level their outcomes, but it also required an eye on the future. What we ended up with was a breakdown of each of the graduate attributes, and a description of what should be achieved at each level in this area. A snapshot of the attributes is offered below.

Capture2.PNG

It’s one thing articulating the graduate attributes and specifying them for each level; it is quite another to deploy them as the beating heart of the real curriculum. The first thing that we did was ask course teams to develop programmes that addressed each area at the correct level. Course-level engagement forced deeper conversations about ‘what does digital literacy mean in our context?’ and ‘where are the opportunities for global perspectives?’, and this sparked the attributes into life. Each programme then mapped where the attributes were met, but this one-way mapping was deemed insufficient: once it is complete it can, in reality, be committed to a top drawer and dismissed as a paper exercise. So we went a step further and requested that modules were individually mapped against the graduate outcomes. This makes it much clearer to students and staff what skills the module should address. Through validation and scrutiny, each module was checked to ensure it really was enabling the development of these attributes through its content, pedagogy, assessment or independent activities. The next step is to get students to actually consider their progress against the graduate outcomes in a meaningful, rather than tick-boxy, way. I’m sure others have taken different approaches to developing graduate attributes, but this one sought to be pragmatic and inclusive.
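As an aside for anyone systematising this, the mapping can be treated as a simple lookup from module to attributes, which makes gaps easy to spot. The sketch below is illustrative only – the module and attribute names are hypothetical, and this is not the checking process our validation panels used.

    # Hypothetical attributes and modules, with the attributes
    # each module claims to develop (invented for illustration)
    attributes = {"Digital literacy", "Global perspectives", "Communication"}
    module_map = {
        "Crop Science": {"Communication", "Digital literacy"},
        "Farm Business Management": {"Global perspectives", "Communication"},
    }

    # Flag any attribute that no module in the programme addresses
    covered = set().union(*module_map.values())
    missing = attributes - covered
    if missing:
        print("Attributes not yet mapped:", sorted(missing))
    else:
        print("Every attribute is addressed by at least one module.")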

Feedback conversations: How should I use technology? A nuanced approach …

One of the most frequent conversations I have is around improving feedback, and how technology can help. Increasingly I am trying to encourage a more nuanced discussion about feedback, because deciding on what feedback to give and how to give it is not simply about a choice between one tool or another. The choice should be the result of the individual lecturer’s preference, the context, the type of assessment or activity upon which feedback is being offered, the characteristics of the student or cohort, the aims of the feedback, the tools and technology required, the quality management requirements and no doubt many other factors.

Some of the common questions I get are shared below with comments:

Should I use GradeMark for feedback? 

Well, it depends a good deal on what you want to achieve. GradeMark has many benefits but in itself it will not make feedback better. To be clear, like any technology it can make a difference but it is not a silver bullet. Without meaningful engagement and a commitment to change practice, it will not improve satisfaction with feedback.

GradeMark can help you to achieve consistency in the comments that you offer to students because you can create a bank of common points to annotate the work, and it can enable you to add a greater amount of feed forward signposting advice to students for their common errors. For example, if a group are struggling to paraphrase, you could create a comment advising of techniques and pointing to resources that might help and use this many times.

GradeMark can help with a sense of fairness too, as marks can be allocated using a rubric. This is entirely optional, and there are of course other ways to employ a rubric. It can help with legibility, as comments are typed; but so too can very clear handwriting and other technologies. It can allow you to save time at certain points in the marking and feedback process, as you can get started on your marking as soon as students hand in, rather than delaying until you receive a complete set of papers. It can aid transparency when team marking: you can see how another tutor is marking and feeding back – again, this is possible to achieve in other ways, but being able to see each other’s marking in real time can create ongoing dialogue about the way marks are allocated and the way comments are added. The maximum benefits, though, can only be realised if the user is reflectively working with the technology and is not simply transferring existing practices into a digital environment.

Finally, if you are really concerned about reading on a screen, this might be a problem. But really … if you consume news, media, research and other things via a screen, it may be worth laying aside your concerns and giving this a try.

Will it save me time? 

Yes and no. It’s not that simple. It depends how you use the facilities and what type of feedback you give. You can use as many or as few of the tools within GradeMark as you see fit, and in any combination: voice-over comments, annotations (stock comments or personalised, as if marking on paper), a rubric, auto-generated scoring from the rubric (or not) and a final summary comment. Each individual needs to look at their set-up, consider what they want to achieve, and then select the aspects of the tool that work for their situation. Annotations may be better for corrective, structural feedback, or feedback on specific aspects of calculations, but the narrative may be the place to provide feedback on key ideas within the work. If you go into using GradeMark solely to achieve efficiencies, you will most likely be disappointed upon first usage, because there is a set-up investment and it takes a large group or multiple iterations to get payback on that initial time spent.

In my experience those who use GradeMark may start out seeking efficiency, but end up with a focus on enhancing their feedback within the time constraints available to them. When time is saved by a user, I have seen colleagues simply re-spend this time on making enhancements, particularly to personalise the feedback further.
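For anyone new to the auto-generated scoring mentioned above, rubric-weighted scoring generally works like the sketch below. The criteria, weights and scores are invented for illustration, and this shows the general idea rather than GradeMark’s documented calculation.

    # Hypothetical rubric: criterion -> (weighting, score awarded out of 5)
    rubric = {
        "Analysis": (0.40, 4),
        "Evaluation": (0.35, 3),
        "Structure and writing style": (0.25, 5),
    }
    max_scale = 5  # top of each criterion's scoring scale

    # Overall mark = weighted sum of each criterion's scaled score
    overall = sum(w * (s / max_scale) for w, s in rubric.values()) * 100
    print(f"Overall mark: {overall:.0f}%")  # 32 + 21 + 25 = 78%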

Ok, so what equipment do I need to be able to use GradeMark? Is it best to use a tablet?

Again, much depends on your workflows and preferences. A desktop computer is my preference, as I like lots of screen room and I like to settle into one spot with unlimited supplies of tea whenever I mark. Others like to be mobile, and the tablet version of GradeMark allows you to download all scripts, mark and feed back, and then upload. So, unlike the desktop version, you don’t need to be connected to the Internet – for those marking on the go, this is a good thing.

I see other people using other technologies for marking, like Dragon Dictate and annotation apps on tablets. Are these better than GradeMark?

There is a large range of kit available to support a tutor’s work in assessment and feedback; each has strengths and weaknesses, and each fits differently with personal preferences and context. Dragon Dictate, for example, can be used to speak a narrative; it’s not perfect, but it may help those who struggle with typing. Annotation apps allow the conversion of handwriting to text, and they allow comments to be added at the point of need within a script (though GradeMark allows this too). On the downside, a manual intervention is needed to return the feedback to students. Whilst track changes can be good for corrective feedback, it can cause students to look at their work and feel that it wasn’t good enough, as it has the electronic equivalent of red pen all over it!

Second markers or external examiners refuse to use the same interface… Then what …?

I’d suggest that you encourage others in the process to use the technology that you have selected. Give them early warning and offer to support the process. A pre-emptive way of dealing with this is to ensure a course-wide approach to feedback, agreeing as a group the tools that you will use. This should then be discussed with the external and others at appointment. It’s harder to resist a coordinated approach. Policy change is what is really needed for this, so lobbying might help!

But students like handwritten feedback, they find computer based feedback impersonal …

Maybe so, but all students prefer legible feedback and feedback that they can collect without coming back on to campus. Also is it not part of our responsibility as educators to ensure students can work digitally, even with their feedback? Students who tell us that they like handwritten feedback often feel a personal connection between them and the marker, but feedback using technology can be made to be highly personalised. It is simply up to tutors to use the tools available to achieve levels of personalisation; the tools themselves offer choices to the feedback craftsman. Adding a narrative comment, an audio comment or customising stock comments can all give a personal touch. However if the feedback giver chooses none of these things, then of course the feedback will be depersonalised.

Students say they don’t like electronic feedback…

Some might, and the reasons are complex. If we introduce a new feedback method at the end of a student’s programme, without explanation, irritation is inevitable, as we have just added a complication at a critical point. Equally, if feedback across a student’s journey is predominantly paper-based, it is no wonder they struggle to remember how to retrieve their digital feedback and so get frustrated. If the feedback is too late to be useful, that will also cause students to prefer old methods. It may be useful to coordinate feedback approaches with others in your course area so the student gets a consistent approach, rather than encountering the occasional exotic technology with no clear rationale. Finally, though, students also need to be trained to do more than receive their feedback. They might file it, return to it, précis it and identify salient points. Good delivery of feedback will never alone be enough. Timeliness and engagement are also key to allowing students to gain the benefits of their feedback.

Seeing things differently ….

One of the benefits of using technology in feedback is not often spoken or written about. When we engage meaningfully with technology in feedback, it can change our approach to providing feedback, irrespective of the technology. By way of (real) examples: someone trying annotation software may realise that legibility is a real issue for them and that they must prioritise it in future; someone using a rubric may start giving priority to assessment criteria as the need for equity and consistency comes more sharply into focus; someone using stock comments and adding a voice-over becomes aware of the need for the personal touch in feedback; and finally, someone using audio becomes aware that the volume of feedback produced may be overwhelming for students to digest, and so revises their approach. These realisations live on beyond any particular technology; so when we think of using technology for feedback, it may be useful to be conscious of the changes that it can bring about in the feedback mindset, and to judge success in these terms rather than just mastery of, or persistence with, one or another tool.