Making digital exemplars

In addition to my usual classroom use of exemplars as a means of familiarising students with the assessment requirements of a specific module, this year I have created a video walk-through of an exemplar. Initially this was to enable those who missed the relevant class to catch up on the session, but the approach was welcomed by students who attended the exemplar activity session as well as by those who did not.

How to create a digital exemplar walk-through:

• Select a ‘good’ piece of work and bring the exemplar up on screen

• Read through and use comments in Word to annotate the work with points which would be surfaced in feedback, particularly comments related to the assessment criteria (of course!). Comments include things done well, things done less well which could have been avoided, and opportunities for further detail and development. This tagging process acts only as an aide-mémoire so that, as I create the feedback video, I am aware of what I want to include.

• Open Screencast-o-Matic to very easily screen-record the work as a video while I re-read it and talk to each of the tags: ‘This work includes this… which is useful because…’; ‘This work used literature in this way…. It might be better to… because….’. None of this is rehearsed; that would be too time-consuming. The resultant video is a commentary on performance.

• The video is uploaded and made available to students.

After using the resource there was some consensus amongst my students that the value was ONLY in listening to the assessment commentary and not specifically in looking at the work. One student described how they listened but did not watch. They then recorded notes about what they should include, remember and avoid. They avoided looking at the work for fear of having their own ideas reshaped. If assessment judgments are ‘socially situated interpretive act[s]’ (Handley, den Outer & Price, 2013), then the digitised marking commentary may be a useful way of making that process more transparent for students, and indeed for other staff.

I will definitely be including this in future modules.

Handley, K., den Outer, B. & Price, M. (2013). Learning to mark: exemplars, dialogue and participation in assessment communities. Higher Education Research & Development, 32(6).

The tricky issue of word count equivalence

The challenges of managing media-rich assessments, or of managing student choice in assessment, have been evident in higher education for as long as I have been employed in the sector, and probably a lot longer. Back in 2004, when I worked on the Ultraversity Programme, the course team had an underpinning vision which sought to: enable creativity; encourage negotiation of assessment formats such that the outputs were of use; and develop the digital capabilities of students (a form of assessment as learning). We encouraged mixed-media assessment submissions for all modules. At this time we debated ‘the word count issue’ and emerged with a pragmatic view that alternative media should be broadly equivalent (and yes, that is fuzzy, but ultimately this helps develop students’ own judgment skills).

In the HEA-accredited PgC in Teaching and Supporting Learning that I now manage, we assess using a patchwork media portfolio. Effectively there are five components (including an evaluation of assessment and feedback practices, a review of approaches used in teaching or supporting learning, and a review of inclusive practices used), plus a stitching piece (a reflection on learning). The assessment brief describes what the students should show, but it is not prescriptive about the precise format. Each element has a word guide, which those working with alternative media should use as a guide to the size of the output and the effort they apply.


Where students opt for media-rich formats, they are asked to decide on equivalence. Close contact in class sessions provides a guiding hand on judgment, critically with peer input (‘yes, that sounds fair’). Techniques to assess equivalence include taking a rough ‘words per minute’ rate and then scaling up (a worked sketch of this arithmetic follows below). I have had other items too, such as posters and PowerPoints; again, I ask students to use their own approximation based on effort. Because the students in this particular programme are themselves lecturers in HE, there is a degree of professional reflection applied to this issue. We don’t ask for transcripts or supplementary text when an individual submits an audio or video format, because it can add considerable work and may be a deterrent to creativity.
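
For those who like to see the ‘words per minute’ arithmetic worked through, here is a minimal sketch in Python (illustrative only: the 60-100 words-per-minute range is one a student reports using later in this post, and the 1,000-word guide is a made-up figure):

    # Convert a written word guide into an approximate video duration,
    # assuming a spoken delivery rate of 60-100 words per minute.
    def equivalent_minutes(word_guide, wpm_low=60, wpm_high=100):
        """Return the (shortest, longest) video length matching a word guide."""
        return word_guide / wpm_high, word_guide / wpm_low

    shortest, longest = equivalent_minutes(1000)  # hypothetical 1,000-word guide
    print(f"Roughly {shortest:.0f}-{longest:.0f} minutes of video.")  # Roughly 10-17 minutes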

Media experimentation within this programme is encouraged because of the transformative effect it can have on individuals who then feel free to pass on less traditional, more creative methods to their students. I asked one of my students to share their thoughts having just submitted a portfolio of mixed media. Their comments are below:

My benefits from using media were:

  • Opportunity to develop skills
  • Creativity
  • More applied to the role I have as a teacher than a written report would have been
  • Gave ideas to then roll out into my own assessment strategies, to make these more authentic for students
  • Enjoyable and I felt more enthused to tackle the assignment elements

But I wouldn’t say it was quicker to produce, as it takes a lot of advanced planning. And, it was tricky to evidence / reference, which is a requisite for level 7. This is where I fell down a little.

I judged equivalence with a 60-100 words per minute time frame for narrative, and / or, I wrote the piece in full (with word count) and then talked it through. I think the elements that I chose to pop into video were those that were more reflective, and lent themselves better to this approach. With the more theoretical components, where I wasn’t feeling creative or brave enough to turn it into something spangly, I stuck with the written word. The exception to this was the learning design patch, where I wanted to develop particular skills by using a different approach.

This student’s comments very much match those made back in 2009 by Ultraversity students, who “without exception, felt that they had improved their technical skills through the use of creative formats in assessment” (Arnold, Thomson & Williams, 2009, p. 159). Looking back at this paper I was reminded that a key part of managing a mixed picture of assessment is through the criteria. We said: “In looking at rich media, the assessor needs [to] be very clear about the assessment criteria and the role that technology has in forming any judgments, so as to avoid the ‘wow’ factor of quirky technology use. At the same time he/she must balance this with the reward of critical decision-making and appropriateness in the use of technology. Staff and student awareness of this issue as well as internal and external quality assurance guards against this occurrence” (p. 161). This is exactly the approach taken within the PgC Teaching and Supporting Learning. Tightly defined assessment criteria have been very important in helping to apply consistent assessment judgments across different types of submission.

If we want to receive identically formatted items, which all address the learning outcomes using the same approach, then of course mandating a single format with a strict word count is the way to go. But if we want to foster an attitude to assessment which encourages creativity in new lecturers, and which acts as a development vehicle for their own digital skills, then we must reduce concerns about word counts and encourage junior colleagues to develop and use their professional judgment in this matter. The student quote above shows the thoughtful approach taken by one student to address the issue for themselves.

Frustratingly, even by using word count as the reference point for parity we may ‘other’ some of the more creative approaches that we seek to encourage and normalise; but ultimately wordage has long been the currency of higher education. It is good to see some universities being proactive in setting out a steer on equivalence, so that individual staff do not feel they are being maverick with word counts when seeking to encourage creativity.

Course level assessment – nice idea, but what does it really mean?

It is increasingly clear that thinking about curriculum in the unit of ‘the course’ rather than the unit of ‘the module’ is conducive to cohesive course design. It avoids repetition, ensures the assessment journey makes sense to the student and can make feedback meaningful, as one task is designed to link to the next. I have not found much in the literature on course level assessment; while it is advocated in principle amongst educational development communities, it is perhaps less clear what course level assessment actually looks like.

I can see three possibilities, though there may be more. These conceptions are described as if delivered through the modular structures which remain the dominant framework for programmes. Any comments on other approaches would be very welcome.

Type 1: Compound assessment

Imagine two modules being taught on entirely discrete themes. Within them might be learning about terminology, key theories, processes, and calculations. Within the modular framework they may be taught entirely independently. In such a model there is nowhere in the curriculum where these skills can be overtly combined. A third module could be introduced which draws upon learning from module one and module two. Of course in reality it may be five modules drawn upon in a sixth compound module.

For example, a module focused upon business strategy may be taught entirely separately from a module on economics. Under such a scenario students may never get to consider how changes in the economy influence strategy, the associated tactics and the need for responsive planning. It is these compound skills, abilities and levels of professional judgment that the course (not the modules) seeks to develop. One way of addressing this limitation is to provide a third module which draws together real business scenarios and concentrates on encouraging students to combine their knowledge. A ‘compound’ module could be based around case studies and real-world scenarios; it may be limited in its ‘indicative content’ and leave a degree of openness to draw more flexibly on what is happening in the current external environment. Open modules can be uncomfortable and liberating in equal measure for the tutor, as there is a less familiar script. The module might concentrate on the development of professional behaviours rather than additional content. It might have timetabled slots, or could take the form of a one-off exercise, field trip or inquiry. Teaching would be facilitative rather than content/delivery-led.

One of the challenges with such a module is that many tutors may be reluctant to give over credits to what seems to be a content-free or content-light module. Going back to basics though, graduates are necessarily more than empty vessels filled with ‘stuff’. If we look at the course level and identify what we want to produce in our outcomes, and what the aims of our programmes actually are, then the flexible compound module fits well as an opportunity for fusing knowledge and developing competent, confident, early professionals. When knowledge is free and ubiquitous online, acting as a global external hard disk, we need to look at the graduates we build and challenge any view that higher education is primarily about the transfer of what the lecturer knows to the student. Surely the compound skills of researching the unfamiliar, combining knowledge from different areas, and making decisions with incomplete data in a moving environment are much more important. The compound module is an opportunity to facilitate learning which aligns with the course level outcomes sought.

This type of course level learning and assessment undoubtedly requires an appreciation of the skills, attitudes, values and behaviours that we wish to foster in students and it needs confidence in the tutor to facilitate rather than transmit.

Type 2: Shared assessment

The next way that I can conceive a form of course level assessment is more mechanistic. Take two modules (module one and module two, taught separately); to bring about efficiencies, the assessment of each module is undertaken within the same assignment, activity or exam. It may be an exam with two parts, one for each module; it may be a presentation viewed by two assessors, each reviewing a separate aspect of content; or it could be an assignment with areas of attention clearly marked for each module. The educational benefits of this are, in my view, much less obvious than for type 1; nevertheless, students may see some links between the parts of modules in taking such an approach. The shared assessment must be designed to make clear which aspect relates to which module, or else a student could be penalised or rewarded twice for the same points. Under such an approach it is conceivable to pass one element and fail the other. I remain to be convinced of the real benefits of this approach, which feels like surface-level ‘joined-up-ness’.

Type 3: Combined assessment 

The term combined assessment is used here to describe an approach which assesses two modules through a single meaningful strategy. If there are two fifteen-credit modules, one on mathematics for engineers and one on product design, a single assessment – for example a design-and-build exercise – can draw upon knowledge from each taught unit. The assessment subsumes both modules, the two elements are integrated (in contrast to the shared assessment approach) and there are potential marking efficiencies. Without clear attribution of marks to one module or the other, it may be tricky when a student fails: what do they restudy? But presumably a tutor would be able to advise where the limitations of the performance lie and which unit would usefully be revisited. In some cases it may be both. In reality this approach may be little different from having a large module with two themes contained within it.

So those were my three ideas for programme level assessment, but I am convinced that there are other ways of achieving this in a meaningful way. The suitability of each approach will depend on what the course team want to achieve, but clearly the benefits of the compound assessment approach are very different from those of a shared or combined strategy.


Jigsaw header image courtesy of Yoel Ben-Avraham under Creative Commons https://www.flickr.com/photos/epublicist/3545249463

Diversifying assessment (and assessment generally)

After an inspiring Learning & Teaching Forum led by Professor Chris Rust of Oxford Brookes, I pledged that my post-session action would be to capture my best bits from the day. So… some take-away points from today’s session….

  1. Authentic assessment is an excellent way to encourage engagement by students as it helps to personalise the student’s approach to the task and generates buy-in. So rather than offering abstract tasks, like producing an essay on leadership styles, frame the task to have a sense of audience and to emulate real-world situations that the student may encounter. This could be as simple as changing an essay on business planning principles into a presentation of a business proposal to a prospective funder.
  2. Spot-review class activities and feed back to the group – a major efficiency saving and a good way of making feedback routine. So in a class of 40 students, pick out five pieces of work to review and send group feedback based on those which have been seen.
  3. Getting back to first principles of constructive alignment guarantees some variety. If the learning outcomes are sufficiently varied, and if the assessment lines up to ensure that the actual outcomes are being assessed, this should in itself offer a degree of variety.
  4. Diversity is good, but care needs to be taken to ensure that the student journey does not become disjointed by variety. Having some repetition of assessment approaches at the programme level ensures that there is opportunity for students to make use of feedback. A balance is to be struck between variety and the opportunity to facilitate growth in students.
  5. Tutor time is disproportionately spent on housekeeping feedback – are headings present? Are tables labelled? Is evidence offered when requested? And so on. A super-simple tip is to have a housekeeping checklist that students complete before submission to deal with all of these aspects, thus allowing time to be better spent on more substantial points of feedback. It works on the principle that answering ‘no’ to any of the housekeeping questions prompts the student to deal with the issue before submission. Using such checklists as a routine ensures that the student takes greater responsibility for their own learning and tutor feedback can deal with more substantial issues. It must be an integral part of the assignment (i.e. work will not be marked without it) or else it will not be adopted. A sketch of such a checklist follows this list.
  6. Less is more. With a greater number of summative assessments, the opportunity to give feedback which can feed forward is limited by processes, effort spent on the justification of grades, and administration. Instead, lose an assessment and gain the opportunity to utilise feed-forward on a piece of work. One assignment, but constructed drawing upon feedback along the way. Simple, but brilliant (and a bit more like real life, where review of a document would be an entirely sensible step).
  7. Reviewer is king! It is the act of reviewing, more than the act of receiving feedback, that can spur interest, new insights and leaps in understanding. Getting peer review embedded within courses is an excellent way of raising the presence and effectiveness of the feedback process. To buy into this we need to lay aside fears that peer feedback means a lack of parity in the quality and quantity of feedback received (which may be inevitable), and appreciate the experience of reviewing as where the learning is really at. Liken this to being a journal reviewer – how much is learned by engaging with a review, whether good or awful? (Analogy courtesy of Mark.)
  8. Group vivas. Liking this a lot – not something I had previously encountered. Simply a group project where the attribution of marks depends in some part on a group viva, where honesty is, in theory, self-regulating.
  9. 24 hours to act. In considering the value of formative quizzes, computer-aided or class-based, as an opportunity to engage with knowledge received, we were reminded of the benefits of engaging sooner rather than later. Engagement with formative quizzes (or indeed reflective processes) within 24 hours of a class is much more effective than if left longer.
  10. Use audio feedback – it can double the amount of feedback given in the same time and makes production smoother.
  11. Future-proof feedback plans. Think about how smart-device ubiquity will play out in future. Formative in-class tests may be more efficient on paper for now, but insure your efforts by taking a dual-pronged approach (online and paper).
  12. Pool efforts. Whether across course teams, departments or with colleagues nationally, look for efficiency gains in providing formative question banks. Open educational resource banks (e.g. Jorum), subject centres and commercial textbooks with CDs of instructor question banks may all be sources to consider.
  13. The uber-simple approach of asking students what the strengths and weaknesses of their assignment are can focus minds. Additionally, asking them on which aspect they would like feedback creates a learning dialogue and ensures feedback is especially useful.

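As a sketch of the kind of housekeeping checklist mentioned in point 5 – the items below are illustrative only, drawn from the prompts above plus a couple of obvious additions – it might read:

  • Headings are present and informative – yes/no
  • All tables and figures are labelled – yes/no
  • Evidence is offered wherever the brief requests it – yes/no
  • References are complete and consistently formatted – yes/no

Answering ‘no’ to any item means fixing it before submitting.
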
While all of these points matter, it remains most important to review the bigger picture. A major barrier to diversifying assessment and capitalising (in learning terms) on feedback opportunities can be the modular structure of programmes. The TESTA Project revealed that “the volume of feedback students receive does not predict even whether students think they receive enough feedback, without taking into account the way assessment across the programme operates”. Volume of feedback or assessment alone will not improve student perceptions of feedback. Point 4 above also suggests that timeliness alone is not enough. While it is good for module level assessment and feedback to be considered in relation to the ideas above, a holistic look at the programme level helps us to understand the assessment journey of the student. How can feedback feed forward? Is there sufficient variety across a programme? Is there repetition? All questions worth asking beyond the module level.

The Jing feedback experiment

Since the last post on Jing (screen capture) I have tried it out more intensively by making 45 videos for formative feedback on personal development. I received draft submissions from students, opened them on the screen, started the video capture and recorded as I went.

Lessons learnt …

  • Read through once only and highlight in yellow any areas where a comment should be made (any higher level of scripting than that and you may as well write the feedback first)
  • Live with imperfection. Unless you edit the feedback in an audio editor, Jing is one take only. Live with the odd ‘errr…’, pause or stumble, or else the videos will take a ridiculous amount of time.
  • Manage expectations. Jing feedback was sought once word got around, which created a rush at the last minute. For the sake of workload, give cut-offs and only feed back on a pre-determined amount of work.
  • Opt out, not in. Given the openness of the feedback (it is technically accessible by others) and the alternative nature of the approach, brief students on what you are doing and why, and offer an opt-out. No-one chose this.
  • Practice makes efficient. The first handful of videos took forever. Had I not made a public commitment to do this I would have ditched it out of sheer frustration. It did get better.
  • Using other types of video in class meant that this was a familiar approach to students; it was in sync with classroom methods. For example, I used video feedback to play back a critique of a case study.
  • It saved an awful lot of time by removing the need to proofread my own feedback.

While it may seem labour-intensive to offer 45 pieces of verbal feedback, I was secure in the knowledge that 45 pieces of written feedback would take an awful lot longer. The depth of the feedback was also more than could realistically have been achieved on paper. You can say a lot in 5 minutes (at a typical speaking rate of around 150 words per minute, that is roughly 750 words).

What did the students think …

  • Students thought this was fantastic!
  • ‘Like a conversation’
  • Personalised
  • ‘It was like having a one to one tutorial’
  • Enabled them to work through changes one at a time with the video open and their work open at the same time
  • Only one technical glitch was reported
  • Lots of feedback is possible in this way

Other Jing ideas…

An alternative approach I saw recently was a tutor talking through the grade sheet, giving a verbal commentary on why decisions were made as they were. A different take on Jing.

As a spin-off from this work, experimentation shows Jing can work well with whiteboard technology too, so that in-class examples can be used and taken away. A Bluetooth mic and you’re away…

(How to make a Jing feedback video is outlined here http://www.techsmith.com/education-tutorial-feedback-jing.html )

5 reasons why giving pass/fail marks, as opposed to percentage grades, might not be a bad idea

1. Grades may be an inhibitor of deeper self-reflection, which is in turn linked to self-regulated learning (White and Fantone 2010). Grade chasing distracts from meaningful learning review (see also Dweck 2010). For real examples of this, the student views visible in the comments here are useful: http://tinyurl.com/66r3mdu

2. Research shows that performance is neither reduced nor enhanced by pass/fail grading systems (Robins, Fantone et al. 1995). For those worrying about a reduction in standards caused by the removal of grades, don’t!

3. Pass-fail grades are more conducive to a culture of collaboration, which in turn links to higher levels of student satisfaction (Robins, Fantone et al. 1995; Rohe, Barrier et al. 2006; White and Fantone 2010). The increased collaboration may be especially beneficial as preparation for certain professions which require high levels of cooperative working (as noted in a medical context by Rohe, Barrier et al. 2006).

4. Pass-fail counteracts challenges brought about by grade inflation practices (Jackson 2011).

5. Pass-fail is associated with lower student anxiety and higher levels of well-being (Rohe, Barrier et al. 2006). That has to be good!

Dweck, C. S. (2010). “Even Geniuses Work Hard.” Educational Leadership 68(1): 16-20.
Jackson, L. J. (2011). “Is My School Next?” Student Lawyer 39(8): 30-32.
Robins, L. S., J. C. Fantone, et al. (1995). “The effect of pass/fail grading and weekly quizzes on first-year students’ performances and satisfaction.” Academic Medicine: Journal Of The Association Of American Medical Colleges 70(4): 327-329.
Rohe, D. E., P. A. Barrier, et al. (2006). “The Benefits of Pass-Fail Grading on Stress, Mood, and Group Cohesion in Medical Students.” Mayo Clinic Proceedings 81(11): 1443-1448.
White, C. B. and J. C. Fantone (2010). “Pass-Fail Grading: Laying the Foundation for Self-Regulated Learning.” Advances in Health Sciences Education 15(4): 469-477.

Thoughts on peer review

A combination of intensive marking, reviewing and discussion of peer review has prompted a few thoughts on peer review…

On rubric-led feedback

Rubrics are useful as they at least set some ground rules for feedback – for example ensuring that feedback remains about technical aspects, clarity and meaning, and not about the writer (this, I recall, was a concern of Wallace & Poulson, 2003). They make explicit the areas on which feedback can be expected, and if used in the construction of work they can act as another source of guidance. In some ways perhaps rubrics help objectify and depersonalise the feedback. This may be good and bad! It is great for transparent quality processes, but it can sometimes detach the emotional dimensions of reading.

Rubrics, as products of words, can be open to different interpretations (more so across cultures, I should imagine). Perhaps dialogue around the use of rubrics is a must to ensure mutual understanding.

Feeding back on other people’s writing is both an art and a science; the rubric can be a useful guide, but it could also be stifling if used in a restrictive way. I fear that rubrics used too zealously (perhaps as a consequence of the mood of transparency) erode individuality in the process.

On reviewer-reviewee relationships

Cowan and Chiu (2009) tell of the experience of the reviewed and the reviewer in an exchange of peer review critique.

• The reviewer was concerned not to cause offence and sought to explain his comments, especially where, across different cultures, there was a fear, or an awareness, of inadvertently causing offence with terminology and phrasing (exacerbated by the exchanges being virtual)
• The reviewer noted the importance of facilitating the ideas of the writer and not imposing his own views of how things should be done
• The reviewee was keen to receive the feedback and welcomed critical input; the feedback was received openly and with respect for the time given to the review

And from this we can draw the following: trust and respect are prerequisites for peer review. An assumption of good intent by the recipient and a degree of sensitivity by the reviewer seem to underpin a useful exchange.
Whilst the setting was a peer-reviewed journal, the lessons could carry over to student feedback.

Rules of engagement for review and feedback

Wallace and Poulson (2003, p. 6) outline what being critical actually means – points include open-mindedness and being constructive, respectful and questioning. Their points translate into rules of engagement for review – they read like a brief for good sportsmanship in the context of peer review.

By opting into the process, either by submitting work for review or by becoming a reviewer, do we agree to play by a set of (hazy and perhaps tacit) rules or principles? If so, then perhaps it would be helpful for novice reviewers (and some experienced ones too) to have these made explicit. Invisible rules are pretty hard to play by!

Cowan, J. and Y.-C. Chiu (2009). “A critical friend from BJET?” British Journal of Educational Technology 40(1): 58-60.

Wallace, M. and L. Poulson (2003). Learning to read critically in educational leadership and management, Sage Publications.