Diversifying assessment (and assessment generally)

After an inspiring Learning & Teaching Forum led by Professor Chris Rust of Oxford Brookes, I pledged that my post-session action would be to capture my best bits from the day. So … some take-away points from today’s session …

  1. Authentic assessment is an excellent way to encourage engagement by students as it helps to personalise the student’s approach to the task and generates buy-in. So rather than offering abstract tasks, like ‘produce an essay on leadership styles’, frame the task to have a sense of audience and to emulate real-world situations that the student may encounter. This could be as simple as changing an essay on business planning principles into a presentation of a business proposal to a prospective funder.
  2. Spot-review class activities and feed back to the group – a major efficiency saver and a good way of making feedback routine. So in a class of 40 students, pick out five pieces of work to review and send group feedback based on those which have been seen.
  3. Getting back to first principles of constructive alignment guarantees some variety. If the learning outcomes are sufficiently varied, and if the assessment lines up to ensure that the actual outcomes are being assessed, this should in itself offer a degree of variety.
  4. Diversity is good, but care needs to be taken to ensure that the student journey does not become disjointed by variety. Having some repetition of assessment approaches at the programme level ensures that there is opportunity for students to make use of feedback. A balance is to be struck between variety and the opportunity to facilitate growth in students.
  5. Tutor time is disproportionately spent on housekeeping feedback – are headings present? Are tables labelled? Is evidence offered when requested? Etc. A super-simple tip is to have a housekeeping checklist that students complete before submission to deal with all of these aspects, allowing tutor time to be better spent on more substantial points of feedback. It works on the principle that answering ‘no’ to any of the housekeeping questions prompts a response, so that issues are dealt with before submission. Using such checklists as a routine ensures that the student takes greater responsibility for their own learning and tutor feedback can deal with more substantial issues. The checklist must be an integral part of the assignment (i.e. work will not be marked without it), or else it will not be adopted.
  6. Less is more. With a greater number of summative assessments the opportunity to give feedback which can feed forward is limited by processes, effort spent on the justification of grades and administration. Instead, lose an assessment and gain the opportunity to utilise feed forward on a piece of work. One assignment, but constructed drawing upon feedback along the way. Simple, but brilliant (and a bit more like real life where review on a document would be an entirely sensible step).
  7. Reviewer is king! It is the act of reviewing more than the act of receiving feedback that can spur interest, new insights and leaps in understanding. Getting peer review embedded within courses is an excellent way of raising the presence and effectiveness of the feedback process. To buy into this we need to lay aside fears around peer feedback meaning a lack of parity in the quality and quantity of feedback received (which may be inevitable), and appreciate the value of the experience of reviewing as where the learning is really at. Liken this to being a journal reviewer – how much is learned by engaging with a review whether good or awful? (Analogy courtesy of Mark).
  8. Group vivas. Liking this a lot – not something I had previously encountered. Simply a group project where the attribution of marks depends in some part on a group viva, where honesty is, in theory, self-regulating.
  9. 24 hours to act. In considering the value of formative quizzes, computer-aided or class-based, as an opportunity to engage with knowledge received, we were reminded of the benefits of engaging sooner rather than later. Engagement with formative quizzes (or indeed reflective processes) within 24 hours of a class is much more effective than if left.
  10. Use audio feedback – it doubles the feedback given and makes production smoother.
  11. Future-proof feedback plans. Think how smart-device ubiquity will play out in future. Formative in-class tests may be more efficient on paper for now, but insure your efforts by taking a dual-pronged approach (online and paper).
  12. Pool efforts. Whether across course teams, departments or with colleagues nationally, look for efficiency gains in providing formative question banks. Open educational resource banks (e.g. Jorum), subject centres and commercial textbooks with CDs of instructor question banks may all be sources to consider.
  13. The uber-simple approach of asking students what the strengths and weaknesses of their assignment are can focus minds. Additionally, asking them which aspect they would like feedback on creates a learning dialogue and ensures feedback is especially useful.

While all of these points matter, it remains most important to review the bigger picture. A major barrier to diversifying assessment and capitalising (in learning terms) on feedback opportunities can be the modular structure of programmes. The TESTA Project revealed that “the volume of feedback students receive does not predict even whether students think they receive enough feedback, without taking into account the way assessment across the programme operates”. Volume of feedback or assessment will not improve student perceptions of feedback. Point 4 above also leads to the assumption that timeliness alone is not enough either. While it is good for module-level assessment and feedback to be considered in relation to the ideas above, a holistic look at the programme level helps us to understand the assessment journey of the student. How can feedback feed forward? Is there sufficient variety across a programme? Is there repetition? All questions worth asking beyond the module level.

The Jing feedback experiment

Since the last post on Jing (screen capture) I have tried it out more intensively by making 45 videos for formative feedback on personal development. I received draft submissions from students, opened them on the screen, started the video capture and recorded as I went.

Lessons learnt …

  • Read through once only and highlight in yellow any areas where a comment should be made (a higher level of scripting than that means you may as well write the feedback first)
  • Live with imperfection. Unless you edit the feedback in an audio editor, Jing is one take only. Live with the odd ‘err…’, pause or stumble, or else the videos will take a ridiculous amount of time.
  • Manage expectations. Once word got around, Jing feedback was in demand, which created a rush at the last minute. For the sake of workload, give cut-offs and only feed back on a pre-determined amount of work.
  • Opt out, not in. Given the openness of the feedback (it is technically accessible by others) and the alternative nature of the approach, brief students on what you are doing and why, and offer an opt-out. No-one chose this.
  • Practice makes efficient. The first handful of videos took forever. Had I not made a public commitment to do this I would have ditched it out of sheer frustration. It did get better.
  • Using other types of video in class meant that this was a familiar approach to students; it was in sync with classroom methods. For example, I used video feedback to play back a critique of a case study.
  • It saved an awful lot of time by removing the need to proof my own feedback.

While it may seem labour-intensive to offer 45 pieces of verbal feedback, I was secure in the knowledge that 45 written feedback attempts would take an awful lot longer. The depth of the feedback was also more than could have been realistically achieved on paper. You can say a lot in five minutes.

What did the students think …

  • Students thought this was fantastic!
  • ‘Like a conversation’
  • Personalised
  • ‘It was like having a one to one tutorial’
  • Enabled them to work through changes one at a time with the video open and their work open at the same time
  • Only one technical glitch was reported
  • Lots of feedback is possible in this way

Other Jing ideas…

An alternative approach I saw recently was a tutor talking through the grade sheet, giving a verbal commentary on why decisions were made as they were. A different take on Jing.

As a spin-off from this work, experimentation shows Jing can work well with whiteboard technology too, so that in-class examples can be used and taken away. A Bluetooth mic and you’re away …

(How to make a Jing feedback video is outlined here http://www.techsmith.com/education-tutorial-feedback-jing.html )

5 reasons why giving pass/fail marks, as opposed to percentage grades, might not be a bad idea

1. Grades may be an inhibitor of deeper self-reflection, which is in turn linked to self-regulated learning (White and Fantone 2010). Grade chasing distracts from meaningful learning review (see also Dweck 2010). For real examples of this, some student views visible in the comments here are useful: http://tinyurl.com/66r3mdu

2. Research shows that performance is neither reduced nor enhanced by pass/fail grading systems (Robins, Fantone et al. 1995). For those worrying about a reduction in standards caused by the removal of grades, don’t!

3. Pass-Fail grades are more conducive to a culture of collaboration, which in turn links to higher levels of student satisfaction (Robins, Fantone et al. 1995; Rohe, Barrier et al. 2006; White and Fantone 2010). The increased collaboration may be especially beneficial as preparation for certain professions which require high levels of cooperative working (as noted in a medical context by Rohe, Barrier et al. 2006).

4. Pass-fail counteracts challenges brought about by grade inflation practices (Jackson 2011).

5. Pass-fail is associated with lower student anxiety and higher levels of well-being (Rohe, Barrier et al. 2006). That has to be good!

Dweck, C. S. (2010). “Even Geniuses Work Hard.” Educational Leadership 68(1): 16-20.
Jackson, L. J. (2011). “Is My School Next?” Student Lawyer 39(8): 30-32.
Robins, L. S., J. C. Fantone, et al. (1995). “The effect of pass/fail grading and weekly quizzes on first-year students’ performances and satisfaction.” Academic Medicine: Journal Of The Association Of American Medical Colleges 70(4): 327-329.
Rohe, D. E., P. A. Barrier, et al. (2006). “The Benefits of Pass-Fail Grading on Stress, Mood, and Group Cohesion in Medical Students.” Mayo Clinic Proceedings 81(11): 1443-1448.
White, C. B. and J. C. Fantone (2010). “Pass-Fail Grading: Laying the Foundation for Self-Regulated Learning.” Advances in Health Sciences Education 15(4): 469-477.

Thoughts on peer review

A combination of intensive marking, reviewing and discussion of peer review has prompted a few thoughts on peer review …

On rubric led feedback

Rubrics are useful as they at least in some way set the ground rules for feedback – for example, ensuring that feedback remains about technical aspects, clarity and meaning and not about the writer (this, I recall, was a concern of Wallace & Poulson, 2003). They make explicit the areas on which feedback can be expected, and if used in the construction of work they can act as another source of guidance. In some ways rubrics perhaps help objectify and depersonalise the feedback. This may be good and bad! It’s great for transparent quality processes but can sometimes detach emotional dimensions from reading.

Rubrics, as a product of words, can be open to different interpretations (more so, I should imagine, across cultures). Perhaps dialogue around the use of rubrics is a must to ensure mutual understanding.

Feeding back on other people’s writing is both an art and a science, the rubric can be a useful guide but it could also be stifling if used in a restrictive way. I fear that rubrics used too zealously (perhaps as a consequence of the mood of transparency) erode individuality in the process.

On reviewer – reviewee relationships

Cowan and Chiu (2009) tell of the experience of the reviewed and the reviewer in an exchange of peer review critique.

· The reviewer was concerned not to cause offence and sought to explain his comments, especially where, across different cultures, there was a fear, or an awareness, of inadvertently causing offence with terminology and phrasing (exacerbated by the use of virtual exchanges)
· The reviewer noted the importance of facilitating the ideas of the writer and not imposing his own ways of how things should be done
· The reviewee was keen to receive the feedback and welcomed critical input; the feedback was received openly and with respect for the time given to review

And from this we can draw….
Trust and respect are prerequisites for peer review. An assumption of good intent by the recipient and a degree of sensitivity by the reviewer seem to underpin useful exchange.
Whilst the setting was journal peer review, the lessons could carry over to student feedback.

Rules of engagement for review and feedback

Wallace and Poulson (2003, p.6) outline what being critical actually means – points include open-mindedness and being constructive, respectful and questioning. Their points translate into rules of engagement for review – they read like a brief on good sportsmanship in the context of peer review.

By opting into the process, either by submitting work for review or by becoming a reviewer, do we opt in to play by a set of (hazy and perhaps tacit) rules or principles? If so, then perhaps it would be helpful for novice reviewers (and some experienced ones too) to have these made explicit. Invisible rules are pretty hard to play by!

Cowan, J. and Y.-C. Chiu (2009). “A critical friend from BJET?” British Journal of Educational Technology 40(1): 58-60.

Wallace, M. and L. Poulson (2003). Learning to read critically in educational leadership and management, Sage Publications.

E-assessment for work based learning: Functionality v ideals

There are very close ties, or at least there can be, between e-learning, e-assessment and work-based learning. The compatibility of e-methods with work-based, work-located studies is in many instances because of:

· Pragmatic considerations – access anytime, anywhere using asynchronous technologies.

· Quality concerns – e-learning allows the HEI to retain a direct link into provision that may otherwise be entirely delivered by a partner organisation.

Where e-assessment is used for these reasons only (without consideration of the wider learning design) there may be limited benefits; a reductionist approach results. What is lost when only a narrow rationale is used for choosing e-assessment for work-based learning?

· Association with authenticity – If assessment is a bolt-on, a means to an end, then the opportunity to enable work-based learners to use and build upon their day-to-day practices may be lost in the rush to simply weigh knowledge.

· Association with social justice – E-assessment offers a chance to level the playing field a little more. In getting away from essay writing and enabling the creation of multimedia artefacts for assessment, learners can play to their strengths and enhance authenticity. However, to enhance the chance of such an approach succeeding, the use of media playfulness needs to be engrained into delivery and the learning journey, the e-infrastructure, the human support and the assessment success criteria. This is not a concept easily bolted on!

· Disjunction – Constructive alignment remains widely accepted good practice for all learning and teaching, and on this accepted wisdom, alignment between assessment tasks and learning should remain clear. Whilst the content of learning can form an alignment, even when using assessment as a bolt-on, there may be disjunction when the means of assessment is remote from the learning experience. It could be argued that the assessment instrument should be synergistic with the journey. For example, a student sitting down to take a computer-aided test with scenario-based questions and answers, when none of the delivery has been in this way, may feel a separateness between learning and assessment; likewise, requiring a summative portfolio to be online after face-to-face delivery which did not in any way utilise technology erodes the possibility of full and deep engagement and potentially makes the technology alien and intimidating under pressure.

As we design work-based learning initiatives with an e-element in delivery, assessment or both, care must be taken to be holistic in looking at how to support, how to maximise the benefit and how to ensure that a sense of journey is maintained for the learners. There are potentially lost opportunities, and also dangers of disjuncture, in being overly focused on finding a means of assessment that ‘will do the job’ of providing some measure of learning without considering such designs more holistically with reference to integration with delivery, support (tutor or peer) and [work-based, local] context. This tension in the relationship between e-learning and work-based learning equates to function v. ideal design.

Sign of the times: assessment

Last week Jacob (5) submitted his year one homework by handing over a URL to his teacher – brilliant! A clear sign of the times. His task was to re-tell a story by any means: cartoon, writing, video or other. It seemed like a lot of writing to get him to produce a story containing the detail he retains, so we decided it would be best to talk about the story so that we could hear all that he had to say. Jacob is pretty camera-shy, so we filmed his chat in a very informal sofa setting with friends in the lounge, so that he was relaxed enough to chat through the story. Unhappy with the video, he wanted to film again. So this time he sat in the ‘hotseat’ in a staged environment (well, a dining room chair!) and we tried to get him to talk directly to the camera. No chance! After five false starts we decided that the less polished camera work which captured his storytelling was much better than trying to produce BBC-standard interviews in which he just couldn’t get across his ideas under pressure. Extreme analysis for a five-year-old’s homework, granted, but it struck me that this had synergy with what I am trying to do every day…

We know that learners in a workplace setting have heaps of knowledge and they use it in practice, yet exams and under-the-spotlight assessments are often undesirable and do not enable the learners to show what it is that they can really do. When developing new provision (through REEDNet), one of our key aims is to facilitate learners to demonstrate their knowledge, skills and understanding in ways which fit their current ways of working, which dovetail with practice, and which do not add so much pressure to a learning experience that the value of the learning is lost.

Assistance for writing employer engagement modules

As part of my work to support the REEDNet project I have been drawing up some guidance on how we, in HE, can design modules to support employer engagement initiatives. What started as a quick guide has grown into a booklet of 20 or so pages on how assessment, learning outcomes and teaching & learning strategies can be written with respect for the authenticity of learning in the workplace and for the real needs of employers.

The guide is downloadable here:

http://www.harper-adams.ac.uk/aspire/files/moduleguidance.pdf

Whilst the booklet is aimed specifically at supporting designers of employer engagement / work-based learning modules I suspect it may have relevance to module authors more widely.

(An improved typeset version from the printers will hopefully follow shortly on the same link; printed copies are available by request.)

Feedback must feed forwards

Thinking about feedback and how it is used in the learning process …  

Feedback would seem to be most valuable when it can cause changes in understanding, approach or confidence, such that an improvement in learning and in assessment work can be actioned. It may be a useful reminder to all if course designers can factor in key moments for tutor/facilitator feedback and/or peer feedback. Feedback is only useful when it can help learning progression.

Planning assessment work

A recurring observation for me when reading undergraduate work is that there is often a lack of planning. There may well be lots of information, some interesting and valuable comments and some treatment of literature, but I am convinced that by giving more time to the planning process, students would both produce higher-quality work and feel more in control of their learning.

In the production of each assessment product I would hope that everyone has read the resources thoroughly, considered the meaning of the learning outcome (asking, what do I really need to demonstrate?) and the assessment criteria (asking, what features does my work need to hit the level I am aiming for?). Then draw up a list, a table, a chart or some other tool to help map out the key elements of the assessment activity, even before putting pen to paper. Sometimes the planning stage can be lengthy, but it helps provide the building blocks of assignments so that when pen is put to paper, the focus can be on the standard of writing, cohesiveness and conciseness. Without research it is impossible to tell how much planning goes on and what methods are used, but I suspect planning techniques are under-utilised.

Feedback and marking : quality and standards

In my previous post about feedback I got feedback (thanks!) about the subjectivity of the feedback system; this is reiterated every results day in my mailbox. This set me musing … further …

Essentially there is unease at the existence of marker preference or marker emphasis. My first thought on this was: yes, there is. In any art or social science this is inevitable and in line with the subject’s deepest philosophical underpinnings. Thinking then in more detail about ways to eradicate this, there may be scope in paired marking or explicit areas for the student to ‘hit’ (and for markers to assess). However, paired marking, it seems to me, would merely offer some kind of safety in numbers, which makes the process no more visible or objective. And criteria-setting at a task level feels like undermining the holistic production of research and adds a sense of micro-management not in keeping with graduateness. So should we eradicate these differences, or come to understand and appreciate them?

What is critical here is that the issues lie with feedback and absolutely not with marking. Since my first marking experience in HE, I have never had any moments of unreasonable numeric disparity in marking. The figures of marking do stack up.

A casual conversation with Piers Maclean on assessment led me to a point of understanding. Simple perhaps, but important: the marking process is standards-based; the feedback process is a monologue judgement about quality.

Reaching a mark is quite straightforward within what is essentially a public marking scheme. A piece of work uses literature, connects with literature, or elaborately collates and synthesises ideas from literature – in effect, it does or it does not. But the feedback is far more subjective. It is a subjective commentary on what has been done, what may have been done, what could be done differently, where things may be developed for ‘next time’ and where energies may be placed in future assignments. The mark is arrived at in a far more transparent way than the subjective feedback.

So the emerging question is: should feedback be objectified as a process, or appreciated in its natural subjective form?