Course level assessment – nice idea, but what does it really mean?

It is increasingly clear that thinking about curriculum in the unit of ‘the course’ rather than the unit of ‘the module’ is conducive to cohesive course design. It avoids repetition, ensures the assessment journey makes sense to the student and can make feedback meaningful as one task is designed to link to the next. I have not found much in the literature on course level assessment; while it is advocated in principle amongst educational development communities, it is perhaps less clear what course level assessment actually looks like.

I can see three possibilities, though there may be more. These conceptions are described as if delivered through the modular structures which remain the dominant framework for programmes. Any comments on other approaches would be very welcome.

Type 1: Compound assessment

Imagine two modules being taught on entirely discrete themes. Within them might be learning about terminology, key theories, processes, and calculations. Within the modular framework they may be taught entirely independently. In such a model there is nowhere in the curriculum where these skills can be overtly combined. A third module could be introduced which draws upon learning from module one and module two. Of course in reality it may be five modules drawn upon in a sixth compound module.

For example, a module focused upon business strategy may be taught entirely separately from a module on economics. Under such a scenario students may never get to consider how changes in the economy influence strategy, the associated tactics and the need for responsive planning. It is these compound skills, abilities and levels of professional judgment that the course (not the modules) seeks to develop. One way of addressing this limitation is to provide a third module which draws together real business scenarios and concentrates on encouraging students to combine their knowledge. A ‘compound’ module could be based around case studies and real world scenarios; it may be limited in its ‘indicative content’ and leave a degree of openness to draw more flexibly on what is happening in the current external environment. Open modules can be uncomfortable and liberating in equal measure for the tutor, as there is a less familiar script. The module might concentrate on the development of professional behaviours rather than additional content. It might have timetabled slots, or could take the form of a one-off exercise, field trip or inquiry. Teaching would be facilitative rather than content/delivery led.

One of the challenges with such a module is that many tutors may be reluctant to give over credits to what seems to be a content-free or content-light module. Going back to basics though, graduates are necessarily more than empty vessels filled with ‘stuff’. If we look at the course level and identify what we want to produce in our outcomes, and what the aims of our programmes actually are, then the flexible compound module fits well as an opportunity for fusing knowledge and developing competent, confident, early professionals. When knowledge is free and ubiquitous online, acting as a global external hard disk, we need to look at the graduates we build and challenge any view that higher education is primarily about the transfer of what the lecturer knows to the student. Surely the compound skills of researching the unfamiliar, combining knowledge from different areas, and making decisions with incomplete data in a moving environment are much more important. The compound module is an opportunity to facilitate learning which aligns with the course level outcomes sought.

This type of course level learning and assessment undoubtedly requires an appreciation of the skills, attitudes, values and behaviours that we wish to foster in students and it needs confidence in the tutor to facilitate rather than transmit.

Type 2: Shared assessment

The next way that I can conceive of course level assessment is more mechanistic. Take two modules (module one and module two, taught separately); to bring about efficiencies, the assessment of each module is undertaken within the same assignment, activity or exam. It may be an exam with two parts, one for each module; it may be a presentation which is viewed by two assessors, each reviewing a separate aspect of content; or it could be an assignment which has areas of attention clearly marked for each module. The educational benefits of this are, in my view, much less obvious than for type 1; nevertheless students may see some links between the parts of modules in taking such an approach. The shared assessment must be designed to make clear which aspect relates to which module, or else a student could be penalised or rewarded twice for the same points. Under such an approach it is conceivable to pass one element and fail the other. I remain to be convinced of the real benefits of this approach, which feels like surface level ‘joined-up-ness’.

Type 3: Combined assessment 

The term combined assessment is used here to describe an approach which assesses two modules through a single meaningful strategy. If there are two fifteen-credit modules, one on mathematics for engineers and one on product design, knowledge from each taught unit can be drawn upon to pass a single assessment – for example via a design and build exercise. The assessment subsumes both modules, the two elements are integrated (in contrast to the shared assessment approach) and there are potential marking efficiencies. Without clear attribution of marks to one or the other module it may be tricky when a student fails: what do they restudy? But presumably a tutor would be able to advise where the limitations of the performance are and which unit would be usefully revisited. In some cases it may be both. In reality this approach may be little different from having a large module with two themes contained within it.

So those were my three ideas for programme level assessment, but I am convinced that there are other ways of achieving this in a meaningful way. The suitability of each approach will depend on what the course team want to achieve, but clearly the benefits of the compound assessment approach are very different from those of a shared or combined strategy.


Diversifying assessment (and assessment generally)

After an inspiring Learning & Teaching Forum led by Professor Chris Rust of Oxford Brookes, I pledged my post-session action would be to capture my best bits from the day. So here are some take-away points from today’s session…

  1. Authentic assessment is an excellent way to encourage engagement by students as it helps to personalise the student’s approach to the task and generates buy-in. So rather than offering abstract tasks, like an essay on leadership styles, frame the task to have a sense of audience and to emulate real world situations that the student may encounter. This could be as simple as changing an essay on business planning principles into a presentation of a business proposal to a prospective funder.
  2. Spot review class activities and feedback to the group – a major efficiency saver and a good way of making feedback a routine. So in a class of 40 students pick out five pieces of work to review and send group feedback based on those which have been seen.
  3. Getting back to first principles of constructive alignment guarantees some variety. If the learning outcomes are sufficiently varied and, if the assessment lines up to ensure that the actual outcomes are being assessed, this should in itself offer a degree of variety.
  4. Diversity is good, but care needs to be taken to ensure that the student journey does not become disjointed by variety. Having some repetition of assessment approaches at the programme level ensures that there is opportunity for students to make use of feedback. A balance is to be struck between variety and the opportunity to facilitate growth in students.
  5. Tutor time is disproportionately spent on housekeeping feedback – Are headings present? Are tables labelled? Is evidence offered when requested? A super simple tip is to have a housekeeping checklist that students complete before submission to deal with all of these aspects, thus allowing time to be better spent on more substantial points of feedback. It works on the principle that answering ‘no’ to any of the housekeeping questions prompts a response, such that issues are dealt with before submission. Using such checklists as a routine ensures that the student takes greater responsibility for their own learning and tutor feedback can deal with more substantial issues. The checklist must be an integral part of the assignment (i.e. work will not be marked without it), or else it will not be adopted.
  6. Less is more. With a greater number of summative assessments the opportunity to give feedback which can feed forward is limited by processes, effort spent on the justification of grades and administration. Instead, lose an assessment and gain the opportunity to utilise feed forward on a piece of work. One assignment, but constructed drawing upon feedback along the way. Simple, but brilliant (and a bit more like real life where review on a document would be an entirely sensible step).
  7. Reviewer is king! It is the act of reviewing more than the act of receiving feedback that can spur interest, new insights and leaps in understanding. Getting peer review embedded within courses is an excellent way of raising the presence and effectiveness of the feedback process. To buy into this we need to lay aside fears around peer feedback meaning a lack of parity in the quality and quantity of feedback received (which may be inevitable), and appreciate the value of the experience of reviewing as where the learning is really at. Liken this to being a journal reviewer – how much is learned by engaging with a review whether good or awful? (Analogy courtesy of Mark).
  8. Group vivas. I like this a lot, and it is not something I had previously encountered. Quite simply, a group project in which the attribution of marks depends in some part on a group viva, where honesty is, in theory, self-regulating.
  9. 24 hours to act. In considering the value of formative quizzes, computer aided or class based, as an opportunity to engage with knowledge received, we were reminded of the benefits of engaging sooner rather than later. Engagement with formative quizzes (or indeed reflective processes) within 24 hours of a class is much more effective than if left.
  10. Use audio feedback – it doubles feedback and makes production smoother.
  11. Future-proof feedback plans. Think about how smart device ubiquity will play out in future. Formative in-class tests may be more efficient on paper for now, but insure your efforts by taking a dual-pronged approach (online and paper).
  12. Pool efforts. Whether across course teams, departments or with colleagues nationally, look for efficiency gains in providing formative question banks. Open educational resource banks (e.g. Open jorum), subject centres and commercial textbooks with CDs of instructor question banks may all be sources to consider.
  13. The uber-simple approach of asking students what the strengths and weaknesses of their assignment are can focus minds. Additionally, asking them on which aspect they would like feedback creates a learning dialogue and ensures feedback is especially useful.

While all of these points matter, it remains most important to review the bigger picture. A major barrier to diversifying assessment and capitalising (in learning terms) on feedback opportunities can be the modular structure of programmes. The TESTA Project revealed that “the volume of feedback students receive does not predict even whether students think they receive enough feedback, without taking into account the way assessment across the programme operates”. Volume of feedback or assessment will not improve student perceptions of feedback. Point 4 above also leads to the assumption that timeliness alone is not enough. While it is good for module level assessment and feedback to be considered in relation to the ideas above, a holistic look at the programme level helps us to understand the assessment journey of the student. How can feedback feed forward? Is there sufficient variety across a programme? Is there repetition? All questions worth asking beyond the module level.

The Jing feedback experiment

Since the last post on Jing (screen capture) I have tried it out more intensively by making 45 videos for formative feedback on personal development. I received draft submissions from students, opened them on the screen, started the video capture and recorded as I went.

Lessons learnt …

  • Read through once only and highlight in yellow any areas where a comment should be made (a higher level of scripting than that means you may as well write the feedback first).
  • Live with imperfection. Unless you edit the feedback in an audio editor, Jing is one take only. Live with the odd ‘err…’, pause or stumble, or else the videos will take a ridiculous amount of time.
  • Manage expectations. Jing feedback was sought once word got around, and this created a rush at the last minute. For the sake of workload give cut-offs, and only give feedback on a pre-determined amount of work.
  • Opt out, not in. Given the openness of the feedback, it being technically accessible by others, and the alternative nature of the approach, brief students and tell them what you are doing and why, and offer an opt-out. No-one chose this.
  • Practice makes efficient. The first handful of videos took forever. Had I not made a public commitment to do this I would have ditched it out of sheer frustration. It did get better.
  • Using other types of video in class meant that this was a familiar approach to students. It was in synch with classroom methods. For example, I used video feedback to playback a critique of a case study.
  • It saved an awful lot of time by removing the need to proof my own feedback.

While it may seem labour intensive to offer 45 verbal feedbacks I was secure in the knowledge that 45 written feedback attempts would take an awful lot longer. The depth of the feedback was also more than could have been realistically achieved on paper. You can say a lot in 5 minutes.

What did the students think …

  • Students thought this was fantastic!
  • ‘Like a conversation’
  • Personalised
  • ‘It was like having a one to one tutorial’
  • Enabled them to work through changes one at a time with the video open and their work open at the same time
  • Only one technical glitch was reported
  • Lots of feedback is possible in this way

Other Jing ideas…

An alternative approach I saw recently was a tutor talking through the grade sheet, giving a verbal commentary on why decisions were made as they were. A different take on Jing.

As a spin-off from this work, experimentation shows Jing can work well with whiteboard technology too, so that in-class examples can be used and taken away. A Bluetooth mic and you’re away…

(How to make a Jing feedback video is outlined here.)

5 reasons why giving pass/fail marks, as opposed to percentage grades, might not be a bad idea

1. Grades may be an inhibitor of deeper self-reflection, which is in turn linked to self-regulated learning (White and Fantone 2010). Grade chasing distracts from meaningful learning review (see also Dweck 2010). For real examples of this, some student views visible in the comments here are useful.

2. Research shows that performance is neither reduced nor enhanced by pass/fail grading systems (Robins, Fantone et al. 1995). For those worrying about a reduction in standards caused by the removal of grades, don’t!

3. Pass-Fail grades are more conducive to a culture of collaboration, which in turn links to higher levels of student satisfaction (Robins, Fantone et al. 1995; Rohe, Barrier et al. 2006; White and Fantone 2010). The increased collaboration may be especially beneficial as preparation for certain professions which require high levels of cooperative working (as noted in a medical context by Rohe, Barrier et al. 2006).

4. Pass-fail counteracts challenges brought about by grade inflation practices (Jackson 2011).

5. Pass-fail is associated with lower student anxiety and higher levels of well-being (Rohe, Barrier et al. 2006). That has to be good!

Dweck, C. S. (2010). “Even Geniuses Work Hard.” Educational Leadership 68(1): 16-20.
Jackson, L. J. (2011). “Is My School Next?” Student Lawyer 39(8): 30-32.
Robins, L. S., J. C. Fantone, et al. (1995). “The effect of pass/fail grading and weekly quizzes on first-year students’ performances and satisfaction.” Academic Medicine: Journal Of The Association Of American Medical Colleges 70(4): 327-329.
Rohe, D. E., P. A. Barrier, et al. (2006). “The Benefits of Pass-Fail Grading on Stress, Mood, and Group Cohesion in Medical Students.” Mayo Clinic Proceedings 81(11): 1443-1448.
White, C. B. and J. C. Fantone (2010). “Pass-Fail Grading: Laying the Foundation for Self-Regulated Learning.” Advances in Health Sciences Education 15(4): 469-477.

Thoughts on peer review

A combination of intensive marking, reviewing and discussion of peer review has prompted a few thoughts on peer review…

On rubric led feedback

Rubrics are useful as they at least set the ground rules for feedback – for example ensuring that feedback remains about technical aspects, clarity and meaning, and not about the writer (this I recall was a concern of Wallace & Poulson, 2003). They make explicit the areas on which feedback can be expected, and if used in the construction of work they can act as another source of guidance. In some ways perhaps rubrics help to objectify and depersonalise the feedback. This may be good and bad! It is great for transparent quality processes but can sometimes detach emotional dimensions from reading.

Rubrics, as a product of words, can be open to different interpretations (I should imagine more so across cultures). Perhaps dialogue around the use of rubrics is a must to ensure mutual understanding.

Feeding back on other people’s writing is both an art and a science, the rubric can be a useful guide but it could also be stifling if used in a restrictive way. I fear that rubrics used too zealously (perhaps as a consequence of the mood of transparency) erode individuality in the process.

On reviewer – reviewee relationships

Cowan and Chiu (2009) tell of the experience of the reviewed and the reviewer in an exchange of peer review critique.

· The reviewer was concerned not to cause offence and sought to explain his comments, especially where, across different cultures, there was a fear, or an awareness, of inadvertently causing offence with terminology and phrasing (exacerbated by the exchanges being virtual)
· The reviewer noted the importance of facilitating the ideas of the writer and not imposing his own ways of how things should be done
· The reviewee was keen to receive the feedback and welcomed critical input; the feedback was received openly and with respect for the time given to review

And from this we can draw…
Trust and respect are pre-requisites for peer review. An assumption of good intent by the recipient and a degree of sensitivity by the reviewer seem to underpin useful exchange.
Whilst the setting was a peer-reviewed journal, the lessons could carry to student feedback.

Rules of engagement for review and feedback

Wallace and Poulson (2003, p.6) outline what being critical actually means – points included open-mindedness and being constructive, respectful and questioning. Their points appear to translate into rules of engagement for review – they read like a brief on good sportsmanship in the context of peer review.

By opting into the process either by submitting work for review or by becoming a reviewer do we opt in to play by a set of (hazy and perhaps tacit) rules or principles? If so then perhaps it would be helpful for novice reviewers (and some experienced ones too) to have these made explicit. Invisible rules are pretty hard to play by!

Cowan, J. and Y.-C. Chiu (2009). “A critical friend from BJET?” British Journal of Educational Technology 40(1): 58-60.

Wallace, M. and L. Poulson (2003). Learning to read critically in educational leadership and management, Sage Publications.

E-assessment for work based learning: Functionality v ideals

There are very close ties, or at least there can be, between e-learning, e-assessment and work-based learning. The compatibility of e-methods and work-based, work-located studies is in many instances because of:

· Pragmatic considerations

o Access anytime, anywhere using asynchronous technologies.

· Quality concerns

o E-learning allows the HEI to maintain a direct link into provision that may otherwise be delivered entirely by a partner organisation.

Where e-assessment is used for these reasons only (without consideration of the wider learning design) there may be limited benefits; a reductionist approach results. What is lost when only a narrow rationale is used for choosing e-assessment for work-based learning?

· Association with authenticity – If assessment is a bolt-on, a means to an end, then the opportunity to enable work-based learners to use and build upon their day-to-day practices may be lost in the rush to simply weigh knowledge.

· Association with social justice – E-assessment offers a chance to level the playing field a little more. In getting away from essay writing and enabling the creation of multimedia artefacts for assessment, learners can play to their strengths and enhance authenticity. However, to improve the chances of such an approach succeeding, the use of media playfulness needs to be engrained in the delivery/learning journey, the e-infrastructure, the human support and the assessment success criteria. This is not a concept easily bolted on!

· Disjunction – Constructive alignment remains widely accepted good practice for all learning and teaching, and based on this accepted wisdom the alignment between assessment tasks and learning should remain clear. Whilst the content of learning can form an alignment, even when using assessment as a bolt-on, there may be disjunction when the means of assessment is remote from the learning experience. It could be argued that the assessment instrument should be synergistic with the journey. For example, a student sitting down to take a computer-aided test with scenario-based questions and answers, when none of the delivery has been in this way, may feel a separateness between learning and assessment; likewise, requiring a summative portfolio to be online after a face-to-face delivery which did not in any way utilise technology erodes the possibility of full and deep engagement, and potentially makes the technology alien and intimidating under pressure.

As we design work-based learning initiatives with an e-element in delivery, assessment or both, care must be taken to be holistic in looking at how to support learners, how to maximise the benefit and how to ensure that a sense of journey is maintained. There are potentially lost opportunities, and dangers of disjunction, in being overly focussed on finding a means of assessment that ‘will do the job’ of providing some measure of learning, without considering such designs more holistically with reference to integration with delivery, support (tutor or peer) and [work-based, local] context. This tension in the relationship between e-learning and work-based learning equates to function v. ideal design.

Sign of the times: assessment

Last week Jacob (5) submitted his year one homework by handing over a URL to his teacher – brilliant! A clear sign of the times. His task was to re-tell a story by any means: cartoon, writing, video or other. It seemed like a lot of writing to get him to produce a story which contained the detail he retains, so we decided that it would be best to talk about the story so that we could hear all that he had to say. Jacob is pretty camera shy, so we filmed his chat in a very informal sofa setting with friends in the lounge, so that he was relaxed enough to chat through the story. Unhappy with the video, he wanted to film again. So this time he sat in the ‘hotseat’ in a staged environment (well, a dining room chair!) and we tried to get him to talk directly to the camera. No chance! After five false starts we decided that the less polished camera work which captured his story telling was much better than trying to produce BBC-standard interviews in which he just couldn’t get across his ideas under pressure. Extreme analysis for a 5-year-old’s homework, granted, however it struck me that this had synergy with what I am trying to do every day…

We know that learners in a workplace setting have heaps of knowledge and use it in practice, yet exams and under-the-spotlight assessments are often undesirable and do not enable learners to show what they can really do. When developing new provision (through REEDNet), one of our key aims is to facilitate learners to demonstrate their knowledge, skills and understanding in ways which fit their current ways of working, which dovetail with practice, and which do not add so much pressure to a learning experience that the credit value of the learning is lost.