Making digital exemplars

In addition to my usual classroom use of exemplars as a means of familiarising students with the assessment requirements of a specific module, this year I have created a video walk-through of an exemplar. Initially this was to enable those who missed the relevant class to catch up on the session, but the approach was welcomed by students who attended the exemplar activity session as well as by those who did not.

How to create a digital exemplar walk-through:

• Bring up the exemplar on screen after selecting a ‘good’ piece of work

• Read through and use comments in Word to annotate the work with points which would be surfaced in feedback, particularly comments related to the assessment criteria (of course!). Comments include things done well, things done less well which could be avoided, and opportunities for further detail and development. This tagging process acts only as an aide-mémoire so that, as I create the feedback video, I am aware of what I wanted to include.

• Open Screencast-o-Matic to screen-record the work as a video while I re-read it and talk through each of the tags: ‘This work includes this … which is useful because …’; ‘This work used literature in this way … It might be better to … because …’. None of this is rehearsed; that would be too time-consuming. The resultant video is a commentary on performance.

• The video is uploaded and made available to students.

After using the resource there was some consensus amongst my students that the value was ONLY in listening to the assessment commentary and not specifically in looking at the work. One student described how they listened but did not watch. They then recorded notes about what they should include, remember and avoid. They avoided looking at the work for fear of having their own ideas reshaped. If assessment judgments are ‘socially situated interpretive act[s]’ (Handley, den Outer & Price, 2013), then the digitised marking commentary may be a useful way of making that process more transparent for students, and indeed for other staff.

I will definitely be including this in future modules.

Handley, K., den Outer, B. & Price, M. (2013) Learning to mark: exemplars, dialogue and participation in assessment communities. Higher Education Research & Development, 32(6).

The tricky issue of word count equivalence

The challenges of managing media-rich assessments, or of managing student choice in assessment, have been evident in higher education for as long as I have been employed in the sector, and probably a lot longer. Back in 2004, when I worked on the Ultraversity Programme, the course team had an underpinning vision which sought to: enable creativity; encourage negotiation of assessment formats such that the outputs were of use; and develop the digital capabilities of students (a form of assessment as learning). We encouraged mixed-media assessment submissions for all modules. At this time we debated ‘the word count issue’ and emerged with a pragmatic view that alternative media should be broadly equivalent (and yes, that is fuzzy, but ultimately this helps develop the judgment skills of students themselves).

In the HEA-accredited PgC in Teaching and Supporting Learning that I now manage, we assess using a patchwork media portfolio. Effectively there are five components (including an evaluation of assessment and feedback practices, a review of approaches used in teaching or supporting learning, and a review of inclusive practices used), plus a stitching piece (a reflection on learning). The assessment brief describes what the students should show, but it is not prescriptive about the precise format. Each element has a word guide, which those working with alternative media should use as an indication of the size of the output and the effort they apply.


Where students opt for media-rich formats, they are asked to decide on equivalence. Close contact in class sessions provides a guiding hand on judgment, critically with peer input (‘yes, that sounds fair’). Techniques to assess equivalence include taking a rough ‘words per minute’ rate and then scaling up. For other items, such as posters and PowerPoints, I again ask students to use their own approximation based on effort. Because the students in this particular programme are themselves lecturers in HE, there is a degree of professional reflection applied to this issue. We don’t ask for transcripts or supplementary text when an individual submits an audio or video format, because it can add considerable work and may be a deterrent to creativity.
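For the quantitatively minded, the rough ‘words per minute’ scaling can be sketched in a few lines of code. This is an illustrative sketch only, not part of any assessment brief; the function name and the 80 wpm default are my assumptions, chosen to sit within typical narration speeds:

```python
def video_word_equivalent(duration_minutes, words_per_minute=80):
    """Rough written-word equivalence for a narrated video.

    words_per_minute is an assumed narration rate; spoken delivery
    of reflective material tends to sit well below reading speed.
    """
    return round(duration_minutes * words_per_minute)

# A ten-minute narrated video at the assumed default rate:
print(video_word_equivalent(10))       # 800
# The same video at slower and faster narration rates:
print(video_word_equivalent(10, 60))   # 600
print(video_word_equivalent(10, 100))  # 1000
```

Of course, no formula replaces the professional judgment described above; the number is only a starting point for the ‘yes, that sounds fair’ conversation.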

Media experimentation within this programme is encouraged because of the transformative effect it can have on individuals who then feel free to pass on less traditional, more creative methods to their students. I asked one of my students to share their thoughts having just submitted a portfolio of mixed media. Their comments are below:

My benefits from using media were:

  • Opportunity to develop skills
  • Creativity
  • More applied to the role I have as a teacher than a written report would have been
  • Gave ideas to then roll out into my own assessment strategies, to make these more authentic for students
  • Enjoyable and I felt more enthused to tackle the assignment elements

But I wouldn’t say it was quicker to produce, as it takes a lot of advance planning. And it was tricky to evidence / reference, which is a requisite for level 7. This is where I fell down a little.

I judged equivalence with a 60-100 words per minute time frame for narrative, and/or I wrote the piece in full (with word count) and then talked it through. I think the elements that I chose to pop into video were those that were more reflective and lent themselves better to this approach. With the more theoretical components, where I wasn’t feeling creative or brave enough to turn it into something spangly, I stuck with the written word. The exception to this was the learning design patch, where I wanted to develop particular skills by using a different approach.
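The student’s 60-100 words-per-minute rule of thumb can also be run in reverse: given an element’s word guide, it yields a target duration range for a narrated video. A hedged sketch (the function and the 1,000-word example are illustrative, not taken from the assessment brief):

```python
def target_duration_minutes(word_guide, wpm_low=60, wpm_high=100):
    """Convert a written word guide into a rough duration range
    (in minutes) for an equivalent narrated video."""
    # Faster narration covers the word guide in less time, so the
    # higher rate gives the shorter end of the range.
    return word_guide / wpm_high, word_guide / wpm_low

low, high = target_duration_minutes(1000)
print(f"Aim for roughly {low:.0f}-{high:.0f} minutes")  # 10-17 minutes
```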

This student’s comments very much match those made back in 2009 by Ultraversity students, who “without exception, felt that they had improved their technical skills through the use of creative formats in assessment” (Arnold, Thomson & Williams, 2009, p. 159). Looking back at this paper I was reminded that a key part of managing a mixed picture of assessment is through the criteria. We said: “In looking at rich media, the assessor needs [to] be very clear about the assessment criteria and the role that technology has in forming any judgments, so as to avoid the ‘wow’ factor of quirky technology use. At the same time he/she must balance this with the reward of critical decision-making and appropriateness in the use of technology. Staff and student awareness of this issue as well as internal and external quality assurance guards against this occurrence” (p. 161). This is exactly the approach taken within the PgC Teaching and Supporting Learning. Tightly defined assessment criteria have been very important in helping to apply consistent assessment judgments across different types of submission.

If we want to receive identically formatted items, which all address the learning outcomes using the same approach, then of course mandating a single format with a strict word count is the way to go. But if we want to foster an attitude to assessment which encourages creativity in new lecturers, and which acts as a development vehicle for their own digital skills, then we must reduce concerns about word counts and encourage junior colleagues to develop and use their professional judgment in this matter. The student quote above shows the thoughtful approach taken by one student to address the issue for themself.

Frustratingly, even in using word count as the reference point for parity we may ‘other’ some of the more creative approaches that we seek to encourage and normalize, but ultimately wordage has long been the currency of higher education. It is good to see some universities being pro-active in setting out a steer on equivalence, so that individual staff do not feel they are being mavericks with word counts when seeking to encourage creativity.

Feedback conversations: How should I use technology? A nuanced approach …

One of the most frequent conversations I have is about improving feedback, and how technology can help. Increasingly I am trying to encourage a more nuanced discussion about feedback, because deciding what feedback to give, and how to give it, is not simply a choice between one tool and another. The choice should reflect the individual lecturer’s preference, the context, the type of assessment or activity upon which feedback is being offered, the characteristics of the student or cohort, the aims of the feedback, the tools and technology required, the quality management requirements and no doubt many other factors.

Some of the common questions I get are shared below with comments:

Should I use GradeMark for feedback? 

Well, it depends a good deal on what you want to achieve. GradeMark has many benefits but in itself it will not make feedback better. To be clear, like any technology it can make a difference but it is not a silver bullet. Without meaningful engagement and a commitment to change practice, it will not improve satisfaction with feedback.

GradeMark can help you to achieve consistency in the comments that you offer to students, because you can create a bank of common points with which to annotate the work, and it can enable you to add more feed-forward signposting advice to students for their common errors. For example, if a group are struggling to paraphrase, you could create a comment advising on techniques and pointing to resources that might help, and use this many times.

GradeMark can help with a sense of fairness too, as marks can be allocated using a rubric. This is entirely optional, and there are of course other ways to employ a rubric. It can help with legibility, as comments are typed; but so too can very clear handwriting and other technologies. It can save you time at certain points in the marking and feedback process, as you can start your marking as soon as students hand in, rather than waiting until you receive a complete set of papers. It can aid transparency when team marking: you can see how another tutor is marking and feeding back. Again, this is possible to achieve in other ways, but being able to see each other’s marking in real time can create ongoing dialogue about the way marks are allocated and the way comments are added. The maximum benefits, though, can only be realised if the user works reflectively with the technology and does not simply transfer existing practices into a digital environment.

Finally, if you are really concerned about reading on a screen, this might be a problem. But really … if you consume news, media, research and other things via a screen, it may be worth laying aside your concerns and giving this a try.

Will it save me time? 

Yes and no; it’s not that simple. It depends how you use the facilities and what type of feedback you give. You can use as many or as few of the tools within GradeMark as you see fit, and in any combination: voice-over comments; annotations (stock comments, or personalised as if marking on paper); a rubric, with or without auto-generated scoring; and a final summary comment. Each individual needs to look at their set-up, consider what they want to achieve, and then select the aspects of the tool that work for their situation. Annotations may be better for corrective, structural feedback, or feedback on specific aspects of calculations, but the narrative may be the place to provide feedback on key ideas within the work. If you go into using GradeMark solely to achieve efficiencies, you will most likely be disappointed on first usage, because there is a set-up investment and it takes a large group, or multiple iterations, to get payback on that initial time spent.

In my experience those who use GradeMark may start out seeking efficiency, but end up with a focus on enhancing their feedback within the time constraints available to them. When time is saved by a user, I have seen colleagues simply re-spend this time on making enhancements, particularly to personalise the feedback further.

Ok, so what equipment do I need to be able to use GradeMark? Is it best to use a tablet?

Again, it very much depends on your workflows and preferences. A desktop computer is my preference, as I like lots of screen room and I like to settle into one spot, with unlimited supplies of tea, whenever I mark. Others like to be mobile, and the tablet version of GradeMark allows you to download all scripts, mark and feed back, and then upload. So, unlike the desktop version, you don’t need to be connected to the Internet; for those marking on the go, this is a good thing.

I see other people using other technologies for marking, like Dragon Dictate and annotation apps on tablets. Are these better than GradeMark?

There is a large range of kit available to support a tutor’s work in assessment and feedback. Each has strengths and weaknesses, and each fits differently with personal preferences and context. Dragon Dictate, for example, can be used to speak a narrative; it’s not perfect, but it may help those who struggle with typing. Annotation apps allow the conversion of handwriting to text, and they allow comments to be added at the point of need within a script (though GradeMark allows this too). On the downside, a manual intervention is needed to return the feedback to students. Whilst Track Changes can be good for corrective feedback, it can cause students to look at their work and feel that it wasn’t good enough, as it has the electronic equivalent of red pen all over it!

Second markers or external examiners refuse to use the same interface… Then what …?

I’d suggest that you encourage others in the process to use the technology that you have selected. Give them early warning and offer to support the process. A pre-emptive way of dealing with this is to ensure a course-wide approach to feedback: agreeing, as a group, the tools that you will use. This should then be discussed with the external and others at appointment; it’s harder to resist a coordinated approach. Policy change is what is really needed here, so lobbying might help!

But students like handwritten feedback, they find computer based feedback impersonal …

Maybe so, but all students prefer legible feedback, and feedback that they can collect without coming back on to campus. Also, is it not part of our responsibility as educators to ensure students can work digitally, even with their feedback? Students who tell us that they like handwritten feedback often feel a personal connection between themselves and the marker, but feedback using technology can be made highly personalised. It is simply up to tutors to use the tools available to achieve levels of personalisation; the tools themselves offer choices to the feedback craftsman. Adding a narrative comment, an audio comment or customised stock comments can all give a personal touch. However, if the feedback giver chooses none of these things, then of course the feedback will be depersonalised.

Students say they don’t like electronic feedback…

Some might, and the reasons are complex. If we introduce a new feedback method at the end of a student’s programme, without explanation, irritation is inevitable: we have just added a complication at a critical point. Equally, if feedback across a student’s journey is predominantly paper-based, it is no wonder they struggle to remember how to retrieve their digital feedback and so get frustrated. If the feedback is too late to be useful, that will also cause students to prefer old methods. It may be useful to coordinate feedback approaches with others in your course area so that the student gets a consistent approach, rather than encountering the occasional exotic technology with no clear rationale. Finally, though, students also need to be trained to do more than receive their feedback: they might file it, return to it, précis it and identify salient points. Good delivery of feedback alone will never be enough; timeliness and engagement are also key to allowing students to gain the benefits of their feedback.

Seeing things differently ….

One of the benefits of using technology in feedback is not often spoken about, or written of. When we engage meaningfully with technology in feedback it can change our approach to providing feedback, irrespective of the technology. To take some real examples: someone trying annotation software may realise that legibility is a real issue for them and that they must prioritise it in future; someone using a rubric may start giving priority to assessment criteria as the need for equity and consistency comes more sharply into focus; someone using stock comments and adding a voice-over becomes aware of the need for the personal touch in feedback; and someone using audio becomes aware that the volume of feedback produced may be overwhelming for students to digest, and so revises their approach. These realisations live on beyond any particular technology use; so when we think of using technology for feedback, it may be useful to be conscious of the changes that can be brought about in the feedback mindset, and to judge success in those terms rather than just in mastery of, or persistence with, one or another tool.

(My) Lessons from the flipped classroom

In September 2015 I committed to delivering a thirty-credit module, called The Teaching Practitioner, using a flipped-classroom pedagogy. The module is the first of two in a PgC Teaching and Supporting Learning in HE; it is associated with Associate Fellowship of the Higher Education Academy.

My motivation for flipping the classroom was threefold:

  1. My contact time was limited and therefore moving ‘delivery of content’ outside of the classroom was an answer to a specific timetable challenge.
  2. In learning and teaching provision of this type I wanted to actively avoid ‘preaching’ or appearing as the ‘authority’. Everyone, without exception, on a work-based programme brings experience and the class dynamic is much more about guiding equals and facilitating mutual learning.
  3. I would rather place my energies into discursive, challenging and unexpected contact time than repeat sessions of transmitting content which can be accessed in other ways.

The pattern of delivery was simple: each week I shared materials to work through, including narrated presentations, videos (commissioned and existing), reading and reflective tasks, and then we would gather to discuss. The discussions varied in formality, structure and style as the module progressed. Over the course of the module I learnt a great deal; the key points from my mental list of lessons are shared below.

[Image: To do list example]

Essential to-do list: Each week I published what needed to be done in advance of the face-to-face class. Importantly, the list split out what was essential and what was optional. Participants reported that this was a helpful organizing distinction and allowed better management of their activity. This is something that I would definitely adopt in future modules of any type, to act as a pacesetter. Simple, perhaps obvious, but actively encouraging participants to make choices about the level of engagement they can make is a pragmatic way of supporting work-based practitioners who have so many competing demands on their time.

Slides not videos: I experimented with the media format of presentational material (pre-class content). The staple across most weeks was the narrated PowerPoint. I found I had more editing control by using Audacity to record the audio and then dragging and dropping it into PowerPoint, compared to recording directly into PowerPoint. Audacity gave me the opportunity to edit out any major interruptions with ease (phone calls, door knocks etc.). I included some video lectures of studio production quality; however, participants found them relatively less engaging, preferring visuals and audio mixed together with the ability to navigate the presentation more easily. I was surprised by this preference, but there is no doubt narrated presentations are easier to create.

Don’t force theory: We took a discursive approach to our face-to-face time (which was usually two hours per week). I provided questions and starters and then tried to guide the discussion. At first the conversation was loose, multi-directional, on and off-topic. I worried that we were not being ‘very level seven’, and the participants shared some of these concerns. However, under the surface a process of sense-making was going on; each person, in their own language and terms, through sharing and reflecting on their own experience, got the chance to reconceive, affirm and evaluate their practice. The explicit linking to theory was a more private activity, which seemed to occur in response to assessment. It was only obvious that this had taken place at the end of the module, as discussion and theory were fused. Perhaps the discussions were a shared liminal space in which we muddled through difficult issues, then went away to reflect individually and make things clear.

[Image: A conception of flipped learning as a three-stage process]

Facilitation skills matter more than online production skills: My role can be linked to all the activities of a facilitator, including:

  • Summarizing

    [Image: A discussion summary in progress]
  • Questioning
  • Providing occasional expertise
  • Sharing anecdotes
  • Signposting
  • Collating the issues that we couldn’t solve and referring them to other forums, or mentally ‘parking them’ as knowingly messy
  • Archiving ideas (e.g. photographing shared lists and posting them online for future reference)
  • Providing clarity as needed
  • Providing confidence
  • Managing the group dynamics
  • Modeling active listening

As we progressed through the weeks, methods for each of these aspects became more developed, e.g. creating graphics for summaries, and defining the discussion purpose to keep us mainly on task. One thing I did from time to time was add a summary of the discussion as a resource for reference, so that everyone had the opportunity to revisit key points. This simply involved using my mobile phone to talk through the diagrams that we had created in class, so that everyone had a record. This was not onerous at all if done straight after the session, while fresh in memory.

Quick and dirty production process: If the model of delivery is going to be sustainable, then resources need to be produced within a realistic time frame. By taking a quick and dirty approach to development, those on the programme see the approach as achievable and replicable; it provides accessible modeled practice. For me there is also a really clear signal in this approach that the value of the learning experience is the interaction, and not a resource. To avoid perfectionism, I never listened to my own presentations after they were recorded, other than for a quick sound check.

Shared endeavor: While new roles were not formally defined, we fell into a more even relationship. I sensed that we were co-researchers (into the effectiveness of the pedagogy) and co-learners (about all aspects of the programme). We were facilitators and facilitated, rather than ‘teacher and student’. To reinforce this role equality, I tried to be very open about when I was learning too.

Allow choice about levels of engagement: As grown-ups, participants face a simple rational choice about whether to engage or attend; sometimes this choice is made in light of personal life and professional workload. In the weeks where individuals had not done the preparation for class, no action was taken and no penalty applied. This approach relies on a commitment to engage, and the rewards are implicit in the design. It also reflects the idea of running a community of equals. The group dynamic needs to be honest about the need for preparation, but pragmatic when this slips. If the facilitation works well, then even those who have not prepared should be invited, and able, to contribute experience, and hopefully then be inspired to visit the online class retrospectively.

A human process not a technical one: ‘Flipped classroom’ may evoke thoughts of complex online tools and an unfathomable methodology of teaching promoted by centres of e-learning and academic development, but for me the experience of the flipped classroom is a fundamentally human process, one which involves respect for the opportunity to explore individual experience and knowledge. It allows social learning and creates space for the discussion of any issues arising that matter to the group. I hope the language around this practice, and the identity of the learning model as slightly exotic, does not take away from its collegial simplicity, which resonates with traditional seminar-based learning.

Support for the flipped approach from participants was demonstrated in three distinct ways: i) the adoption of the flipped classroom by some group members; ii) protest when classes were not flipped; iii) outstanding, highly personalized, deeply connected assignments demonstrating the culmination of meaningful engagement (though I am a little biased on the last point).

If I had a point nine on my list, it would be to keep faith that the approach will pay off, even when there is angst about its effectiveness. That said, when I saw in the module assessments that we had reached our destination (albeit a fleeting one on the way to the next module) I was very relieved!



Children, parents and social media use

Recently I have been involved in developing learning objects which offer practical advice on how students can manage their digital footprint. The resources encourage students to build a positive online identity (known as ‘brand me’). Such a positive presence typically contains artifacts and interactions that tell a personal story but which, publicly at least, don’t include debaucherous highlights for all the world to see.

My interest in social media and how children/students use it is not solely a professional one. Over the last two or three years we have had a turbulent relationship with the web and social media in our own home, but it finally appears that we have reached a point of online equilibrium. Many friends and colleagues have had routine updates on this conscious journey and asked that I share what has been learnt. I won’t share every detail, but here are some things that we learned that helped us:

  • There is really no point banning children’s web access; this is a sure way to create a battleground, and it is really unnecessary. No matter how tempting it seems, it will create a form of isolation from friends. Social media has replaced phone communication, so any threat of withdrawal has to be seen in this light. Ultimately children are going to have to self-regulate, so denying access does not really help to build those self-management skills.
  • Start early – any pre-teen can understand that ‘saying anything online is like standing in the town square and shouting at the top of your voice!’
  • The idea that the web is educational and children need access for homework is fine, but any belief that each child should have total control over web-enabled devices is misplaced. Shared devices work just as well, and if anything they encourage file discipline and sound data management. Too many ‘owned’ devices create territorial behaviours; sharing creates tolerance. It also acts as a further safeguard in making sure information coming in and out is not locked away.
  • Try to talk about web use as a routine matter, and encourage honesty. With a hint of comedy, we routinely ask ‘what’s happened in social media land today?’ 🙂 Who said what to whom? Who is having a hard time? etc. Web safety sites seem to encourage us only to discuss undesirable behavior (advising us to ‘report any weirdos’, ‘press the red button’ if you want to report content etc.), but such responses are largely not appropriate for the situations that children need to navigate. Through pro-active discussion children can reach more informed positions about what they see and how they interact. They need to know how to respond to peers posting walls of selfies and encouraging others to vote out the ugliest (awful, isn’t it?), and not just to the situations of ‘stranger danger’ that adults often conceive as the biggest issue.
  • Look out for physical symptoms of anxiety, which might be linked to social media use. These devices allow us to be ‘always on’ and alert; for teens/pre-teens this can mean no down time. When always connected, a sense that something may be missed never goes away; in itself this creates stress. Until children have the wherewithal to make positive choices and understand that they do not need to be on call, this facility is very damaging.
    • Discuss why down time is important (Eating dinner with a child who has a vibrating smartphone in their pocket is like trying to eat dinner while fifty people throw snowballs at your window and heckle);
    • Show that choosing when to communicate is empowering and not weak;
    • Constructively challenge the need to ‘hear’ all communications (in the same way that all crisps in the house do not need to be consumed, moderation is good);
    • Discuss which devices and features can help manage the noise (there are phones that bridge the ‘first phone’ to a super smart phone).
  • Encourage children to filter unwanted drivel. 20-second clips of pointless stuff on Snapchat may be OK between close friends (developmental, even), but trying to keep up with everyone’s everyday is meaningless. 100 friends posting two twenty-second clips per day adds up to over an hour of lost time, every day.
  • Measure the benefit of real activities explicitly against the benefits of ‘spent’ online time. Discuss the opportunity cost of online activities. For example, after cycling ten miles on a sunny day and fitting in a nice lunch, make a comparison with how many YouTube videos, Snapchat communications or Instagram pictures might otherwise have been chewed through, then recognize which is better!
  • Don’t be the fun police; some time watching YouTube and flicking through content is just fine. Watching ‘How Animals Eat Their Food’ for the twentieth time can be mildly amusing.
  • Recognise different behaviours in others (and discuss). Did you notice how that person spent their entire visit texting others? Is that OK? Did you notice how filming a concert through a phone caused a disconnect and created a whole load of footage never to be watched again?
  • Practice (and practise) being fully present. It’s hard to peddle moderate online approaches unless we adopt them ourselves!
  • Don’t resign yourself to the idea that kids are all consumed by gadgetry and apps. There is a balance to be found.
  • Recognise that all of the variables in online usage are changing (child’s age, peer behavior, the tools themselves) and so the search for a balanced online pattern is going to be ongoing and shifting.

This post was co-produced with my own children (thank you); what you have here is our shared experience. Back to my professional interest: I believe there is a need for much more research into how children and young people can learn to become digitally resilient and capable; I’m sure this touches on parenting, formal education, confidence, self-esteem and self-efficacy. More research and better advice (particularly more realistic advice) would, I’m sure, help parents who are permanently exasperated and who feel unable to deal with this issue.

Releasing slides before lectures – is it really a good idea?

I’ve recently been considering the risks and benefits of sharing presentation slides before lectures, and the effect this has on both attendance and performance. Some conclusions from my scoping are shared below. This review is not a recommendation that linear presentation software should be used in classes; clearly this is not the only way to structure learning.

Sharing lecture slides (almost universally PowerPoint slides) before a class is widely believed not to have a negative impact on attendance (e.g. Billings-Gagliardi & Mazor, 2007; Frank, Shaw & Wilson, 2009; Worthington & Levasseur, 2015). Billings-Gagliardi & Mazor conclude that “Fears that the increasing availability of technology-enhanced educational materials has a negative impact on lecture attendance seem unfounded” (2007, p. 573). The evidence is not entirely unanimous, though, with some research, particularly before 2006, pointing to a connection between pre-lecture release and attendance.

In Sambrook & Rowley’s (2010) research, students reported that their peers had used slides as a substitute for attendance, but even so, non-attendance was most likely to be linked to other factors such as illness or crisis, and slides were likely to be an assistive facility rather than a root cause of non-attendance. Dolnicar’s (2005) research showed why students attend lectures: factors included students wanting to find out what they are supposed to know; to avoid missing important information; to find out about assessment; and to make sure they learn the key content. They also attended because of university expectations. Others, for example Fitzpatrick, Cronin, & Byrne (2011), have looked at reasons for non-attendance and reported factors such as curriculum overload and poor quality of teaching. It is perhaps unsurprising that, according to the balance of this evidence, lecture notes alone don’t appear to have an impact on attendance.

Within their research on making slides available through online environments, Sambrook & Rowley noted that “The most emphatic response [in their survey] was to the statement “lecture notes should be available on Blackboard” … the availability of webnotes has become expected” (2010, p. 35). The value placed on pre-release of slides is also emphasised in students’ own proactive stance on virtual learning environments (see for example Cain, 2012).

Research shows that electronic materials shared before a class are perceived as helpful to students’ preparation for learning, which in turn encourages attendance (Billings-Gagliardi & Mazor, 2007; Sambrook & Rowley, 2010). Specifically, as a result of advance publication of notes online, students reported: i) a better opportunity to retain content in the lecture when they had prepared; ii) being more organised in note taking; and iii) recognising opportunities to pick out areas of the lecture where they would need further explanation (e.g. to ‘zone in’ during actual classes) – these points were especially important for international students and students with dyslexia (Sambrook & Rowley, 2010). Additionally, “[b]y posting slides before lecture, students have the opportunity to prepare in advance for class and perhaps feel more comfortable in volunteering thoughts and opinions” (Babb & Ross, 2009, p. 878).

The sharing of slides before lectures is associated with better note taking and/or perceptions of better note taking (Frey & Birnbaum, 2002; Babb & Ross, 2009). Sambrook & Rowley (2010) suggest that “[p]roviding lecture notes in advance can address cognitive processing problems students face with working memory overload, when they are trying to both listen to the lecturer and write their own notes”. Some research does, however, point to an over-reliance on slides limiting note taking, so that the benefit of processing information through note taking is diminished; in turn, this could be linked to achievement: “In short, many instructors fear that … slides encourage less encoding and that less encoding will translate into less learning” (Worthington & Levasseur, 2015, p. 15). Often students’ note-taking skills are not well developed (Haynes, McCarley, & Williams, 2015; Pardini et al., 2005). Making slides available is not in itself a silver bullet for note taking, but students do report using slides as a structure for their thoughts. Actions to develop note-taking skills are recommended (Haynes, McCarley, & Williams, 2015).

Irrespective of early or late release, use of PowerPoint in a way that oversimplifies ideas can stifle discovery, hinder deeper learning, and present knowledge in linear and disconnected forms (Kinchin, Chadha, & Kokotailo, 2008; Isseks, 2011). Sambrook & Rowley, through their review of the literature, indicate that slides can be associated with knowledge being fashioned in restricted ways, but they go on to add that this is a consequence of the way the tool is used rather than the tool per se. Maxwell (2012), Apperson, Laws, & Scepansky (2008) and Isseks (2011) advise that the use of bullet points on PowerPoint slides should be reduced, with more use made of visual stimuli and lecturer engagement to provoke deeper, authentic and human engagement and to “complement and enhance” delivery (Maxwell, 2012, p. 48).

Having explored some of the literature, it is clear that early release of slides is an increasing expectation. There are considerable benefits of early release for some students (particularly international students, students with dyslexia, and those with less confidence to speak out in class). On balance, understandable lecturer concerns about attendance are unsupported in the more recent literature, and there is even evidence that some more vulnerable students are more likely to attend classes if given time to prepare. The factors affecting lecture attendance involve a wide range of variables; where these lead to non-attendance, the slides provide a helping hand. Nevertheless, it is also clear that efforts to develop note-taking skills in students, and skills in the effective use of PowerPoint among educators, would be well placed, to avoid students falling asleep with their eyes open (such is the title of a paper by O’Rourke et al., 2014). This conclusion does throw up a puzzle: if we use presentation tools for pictures, artifacts and stimuli, rather than as an explicit guide to content, is there any point adding these to a virtual learning environment before a lecture? There is no evidence either way, or at least none that I have found, but presumably some other means of pre-class indication of what to expect would enable the benefits of early release outlined above (which rather presume a focus on course content) to be realised, while maintaining engagement through a more creative use of presentational software.

Finally, it may be useful to note that there is experimentation occurring into how to support learning through alternative technologies, particularly as the university’s role as authoritative transmitter of knowledge is under review; again, O’Rourke’s paper provides a useful starting point for considering other modus operandi for the provision of resources.

References to download

This research was used to inform institutional guidance on inclusive practice.

Looking at the value of lecture capture

Looking at lecture capture led me to ask questions about the technology’s effectiveness. I can’t help feeling that lecture capture is counter-intuitive: we know transmission-based learning is less effective than active learning (so why would we invest more in it and replicate it?), and we know that concentration spans for online engagement don’t readily lend themselves to hour-long broadcasts (my own concentration gives way to frustration after 15 minutes!). Nevertheless, adoption is on the increase, and students clearly appreciate the opportunity to apply catch-up TV principles to learning – they value the flexibility.

As lecture capture heads towards the mainstream, I thought it useful to look at the evidence of the benefits and challenges of this technology, especially in light of a prediction that we may begin to move away from capturing lectures to viewing lectures as performances – something Professor Phil Race constantly emphasises with the idea of making the lecture unmissable and engaging.

My reading notes can be downloaded but the headline points were:

  1. More research is needed into the actual, rather than perceived, effectiveness of lecture capture.
  2. Students appreciate lecture capture and believe it helps learning, but the actual impact is unclear. Critically, there is little or no evidence that lecture capture really improves performance. Some subsets of users appear to show higher scores, but this may be associated with their diligence rather than with their heavy use of downloads.
  3. The circumstances in which lecture capture is effective, and the reasons for it, are also unclear. Research suggests that content-heavy subjects are best suited to this technology and interactive subjects less so, which makes good common sense. By implication, this raises the question: would lecture capture lead to a less interactive delivery style?
  4. Lecture capture is suspected of having a connection with more effective note taking, and students appear to watch lectures selectively to address tricky concepts. These recurrent findings, irrespective of the growth of lecture capture, point to the value of addressing how students take notes as an academic skill, and raise the question of how media can be used to address difficult concepts in watchable, demystifying, even (dare I say) enjoyable ways.

If they are useful please help yourself to my lecture capture quick notes.