The tricky issue of word count equivalence

The challenges of managing media-rich assessments, or managing student choice in assessment, have been evident in higher education for as long as I have been employed in the sector, and probably a lot longer. Back in 2004, when I worked on the Ultraversity Programme, the course team had an underpinning vision which sought to: enable creativity; encourage negotiation of assessment formats so that the outputs were of genuine use; and develop the digital capabilities of students (a form of assessment as learning). We encouraged mixed media assessment submissions for all modules. At that time we debated ‘the word count issue’ and emerged with a pragmatic view that alternative media should be broadly equivalent (and yes, that is fuzzy, but ultimately the fuzziness helps develop students’ own judgment skills).

In the HEA-accredited PgC in Teaching and Supporting Learning that I now manage, we assess using a patchwork media portfolio. Effectively there are five components (including an evaluation of assessment and feedback practices, a review of approaches used in teaching or supporting learning, and a review of inclusive practices), plus a stitching piece (a reflection on learning). The assessment brief describes what students should show, but it is not prescriptive about the precise format. Each element has a word guide; those working with alternative media should treat this as an indication of the expected size of the output and the effort to apply.


Where students opt for media-rich formats, they are asked to decide on equivalence. Close contact in class sessions provides a guiding hand on that judgment, critically with peer input (‘yes, that sounds fair’). Techniques for assessing equivalence include taking a rough ‘words per minute’ rate and scaling up. I have also received other items, such as posters and PowerPoint decks; again, I ask students to use their own approximation based on effort. Because the students on this particular programme are themselves lecturers in HE, there is a degree of professional reflection applied to this issue. We don’t ask for transcripts or supplementary text when an individual submits an audio or video format, because producing them can add considerable work and may be a deterrent to creativity.
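
As a rough illustration of the ‘words per minute’ approach, the sketch below converts between a word guide and a suggested media duration. The 60–100 wpm range comes from the student quoted later in this post; the 1,000-word guide and 12-minute video are hypothetical examples, not programme figures.

```python
# A minimal sketch of word-count equivalence for audio/video submissions.
# The speaking-rate range (60-100 words per minute) follows the student example
# quoted below; the word guide and duration used here are hypothetical.

def equivalent_minutes(word_guide, wpm_low=60, wpm_high=100):
    """Return the (longest, shortest) duration in minutes equivalent to a word guide."""
    return word_guide / wpm_low, word_guide / wpm_high

def equivalent_words(duration_minutes, wpm_low=60, wpm_high=100):
    """Return the (lowest, highest) word-count equivalent of a piece of spoken media."""
    return int(duration_minutes * wpm_low), int(duration_minutes * wpm_high)

print(equivalent_minutes(1000))  # (~16.7, 10.0) -> roughly a 10-17 minute narrated piece
print(equivalent_words(12))      # (720, 1200)  -> a 12-minute video sits within a ~1,000-word guide
```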

Media experimentation within this programme is encouraged because of the transformative effect it can have on individuals, who then feel free to pass on less traditional, more creative methods to their own students. I asked one of my students to share their thoughts, having just submitted a mixed media portfolio. Their comments are below:

My benefits from using media were:

  • Opportunity to develop skills
  • Creativity
  • More applied to the role I have as a teacher than a written report would have been
  • Gave ideas to then roll out into my own assessment strategies, to make these more authentic for students
  • Enjoyable and I felt more enthused to tackle the assignment elements

But I wouldn’t say it was quicker to produce, as it takes a lot of advanced planning. And, it was tricky to evidence / reference, which is a requisite for level 7. This is where I fell down a little.

I judged equivalence with a 60-100 words per minute time frame for narrative, and / or, I wrote the piece in full (with word count) and then talked it through. I think the elements that I chose to pop into video were those that were more reflective, and lent themselves better to this approach. With the more theoretical components, where I wasn’t feeling creative or brave enough to turn it into something spangly, I stuck with the written word. The exception to this was the learning design patch, where I wanted to develop particular skills by using a different approach.

This student’s comments very much match those made back in 2009 by Ultraversity students, who “without exception, felt that they had improved their technical skills through the use of creative formats in assessment” (Arnold, Thomson & Williams, 2009, p. 159). Looking back at this paper, I was reminded that a key part of managing a mixed picture of assessment is the assessment criteria. We wrote: “In looking at rich media, the assessor needs to be very clear about the assessment criteria and the role that technology has in forming any judgments, so as to avoid the ‘wow’ factor of quirky technology use. At the same time he/she must balance this with the reward of critical decision-making and appropriateness in the use of technology. Staff and student awareness of this issue as well as internal and external quality assurance guards against this occurrence” (p. 161). This is exactly the approach taken within the PgC Teaching and Supporting Learning. Tightly defined assessment criteria have been very important in helping to apply consistent assessment judgments across different types of submission.

If we want to receive identically formatted items, all addressing the learning outcomes in the same way, then of course mandating a single format with a strict word count is the way to go. But if we want to foster an attitude to assessment that encourages creativity in new lecturers, and that acts as a development vehicle for their own digital skills, then we must set aside concerns about word counts and encourage junior colleagues to develop and use their professional judgment in this matter. The student quote above shows the thoughtful approach taken by one student to address the issue for themselves.

Frustratingly, even by using word count as the reference point for parity we may ‘other’ some of the more creative approaches that we seek to encourage and normalize, but ultimately wordage has long been the currency of higher education. It is good to see some universities being proactive in setting out a steer on equivalence, so that individual staff do not feel they are being maverick with word counts when seeking to encourage creativity.


Towards Inclusivity

A recent HEFCE blog post reminds us of the need to continually consider inclusive practice in HE. Many universities are responding to the need for inclusivity with a range of policy approaches, guidance documents, suggestions for best practice and the internal publication of student data to further make the case for change. In both the blog post and the recently published Inclusive Teaching and Learning in Higher Education as a Route to Excellence, however, the focus of inclusivity remains predominantly on disability. Of course this dimension of inclusive practice is enormously important; as the blog explains, more students with disabilities are entering HE, the achievement of disabled students falls below that of their peers when no additional funding is in place, and that funding is being cut. Morally this is wrong and action is needed. Nevertheless, there are other complex dimensions in the inclusive landscape. Groups of students that could underperform against their potential include specific socio-economic groups, students from an educational background that is ‘different’ from the majority (especially students on BTEC routes), and BME students. My main concerns when I read about inclusive practice are: i) that the full spectrum of issues associated with inclusivity is not getting coverage; ii) a sense in the public discourse that the sector is addressing inclusivity as a result of a funding change, and not out of a moral responsibility to all students; and iii) the deep-seated sense of othering that occurs as a result of ii).

In considering how to approach inclusivity, the philosophy of universal design is appealing: teaching and learning is designed to enable all students (or as many as possible) through anticipatory approaches. In a universal design approach, all education should be set up to encourage all people to access provision and reach their real potential. While this is a wonderful vision, the reality of retrofitting design principles onto established curricula is hugely complex.

Recognising the pragmatic limits of universal design, I remain concerned that a superficial approach to inclusivity is emerging in pedagogy. Yes, we can get assignment briefs out earlier as required; yes, we can post material on the virtual learning environment if that is university policy; and yes, we can make sure reading lists are up to date. These points are important, but they are small pieces of a large jigsaw. For a sustainable, deep-rooted and sincere approach to inclusivity, something more holistic is needed. I am proposing four levels of action to make for real inclusivity, summarised below. The list is not, of course, exhaustive.

Ways to progress inclusive practice:

  1. Rules

Rules and guidelines can relate to a whole range of aspects of inclusive practice, including: posting lecture notes or slides in advance of classes, to help orient students who may wish to read through the content before class to address any areas of underpinning knowledge that their own education did not afford them time to explore; allowing students to record classes on their personal devices; ensuring that assessment briefs meet the required standards of accessibility; using appropriate instructional design layouts for online spaces; and using minimum font sizes for visual presentations. All of these rules, and many more, can be implemented. However, focusing only on this type of approach to bringing about inclusivity feels like an extension of a deficit model in which staff must behave in a certain way to accommodate groups of students. While the rules are important, and these types of practice are essential, they can be received as yet another bureaucratic thing to do. This approach in isolation can work against fostering a deeper culture of inclusivity; it encourages a surface approach to the issue. A much more holistic and deeper approach is required to make a real, long-lasting difference to student learning.

  2. Developing student learning skills

Students can be supported to self-help and to develop skills that empower them in their own learning. If students learn how to learn, then even when they lack a specific set of skills needed to thrive in higher education they will not be fazed and will be able to progress. Well-formed personal development programmes, and attention to the skills of learning alongside taught ‘content’, are essential to empowering students with the study skills necessary to overcome any barriers to learning. Supporting students to develop skills in note-taking, critical reading, listening, writing in different genres, revision, exam technique, project planning, making the most of their learning routines and so on is a significant component of creating an inclusive learning environment.

  3. Developing inclusive mindsets through open and honest engagement

Staff in HE are at times challenged by the diversity of the student body. The recent Times Higher staff survey gives some insight into some lecturers’ frustrations, with a sense of weakening standards, ill-prepared students and a lack of student work ethic. Perhaps these concerns are inevitable and not so different from the ones I heard sixteen or so years ago when I began a career in HE, but they sit awkwardly in an age of inclusive practice. There is a complex academic psychology around inclusivity. Many, if not most, lecturers have academic excellence behind them; they survived and thrived in a system of learning that allowed them to ‘come good’. When these systems change, some teaching staff experience the disruption of genuinely held views, for example about standards, and about the balance of effort between lecturer and student. A rift emerges between privately held personal beliefs, histories and values, and the expectations for teaching practice. Even those who champion inclusivity may still have repressed concerns about some specific issues; what is said publicly may still be in conflict with some inner feelings. There is therefore a need for frank, challenging and respectful dialogue in higher education institutions if private theories are to be reconciled with the public discourse. Otherwise, rules and skills training programmes will be undermined by the occasional, but damning, careless comment, or the unquantifiable look of exasperation. If inclusivity is to become more embedded, then the dual discourse (under and over the radar) needs to come together. Mindsets can’t be forcibly steered towards embracing inclusivity, but conditions of openness mean that deep-seated beliefs can be aired and held up to debate. Hockings, back in 2010, noted “the need for shifts in negative beliefs about, and attitudes towards, student diversity that currently inhibit the development of inclusive learning and teaching”. I am suggesting here a new level of openness about the reasons that some may appear negative, some empathy towards these deep-seated views, and dialogue to engage with some of the underlying issues that prevent a real mindset shift.

  4. Praxis

Praxis is used here to mean values-driven, living, self-reviewing, sincere practice; it is more heartfelt than just practice! If inclusivity is to be more than a set of steps, a process model or a policy discourse, then it needs to become a way of working and thinking. I don’t think this is as much of a change as might be assumed. Most lecturers I have encountered want to put students first. They want them to succeed and reach their potential. That is it. That is the cornerstone of inclusivity. The debate, the politicisation and the connection to our own beliefs stemming from personal history all fall second to the simple aspiration to help others do well. When we go back to first principles, the praxis of inclusivity is very simple: how can I assist all students to succeed?


9 Things to do with Assessment Rubrics

I’ve used rubrics in assessment marking since I first held an academic role some fifteen-ish years ago. For me, rubrics are an essential tool in the assessment toolkit. It’s important to recognize that they are not a ‘silver bullet’: if they are not integrated into teaching and support for learning, they may have no impact whatsoever on student engagement with assessment. I am therefore trying to collate a list of the ways in which rubrics can be used with students to enhance their performance, help them grow in confidence and demystify the assessment process. My top nine, in no particular order, are as follows:

  1. Discuss the rubric and what it means. This simply helps set out expectations and requirements, and provides opportunities for clarification.
  2. Encourage students to self-assess their own performance using the rubric, so that they engage more deeply with the requirements of the assessment.
  3. Encourage students to peer assess each other’s performance using the rubric, leading to further familiarization with the task, as well as the development of critical review and assessment judgment skills. This also allows the seeding of further ideas in relation to the task, through exposure to the work of others.
  4. Get students to identify the mark that they are aiming for and re-write the criteria in their own words. This sparks discussion about the requirements, flushes out any issues needing clarification and can result in students raising their aspirations (as the ‘assessment code’ is decrypted there are moments of “If that’s what it means … I can do that”).
  5. Facilitate a negotiation of the rubric. Where full student-led creation of a rubric is impractical, or not desirable, a tentative rubric can be presented and negotiated with the class. Students can have an influence on the coverage, the language and the weightings. As well as familiarizing the group with the requirements, this allows a sense of ownership to develop. In my own experience, rubrics are always the better for student negotiation.
  6. Undertake a class brainstorm as the basis for the rubric design. Ask what qualities should be assessed (e.g. report writing skills), then identify what this means to students themselves (e.g. flow, use of literature to support argument). Then use this list to develop a rubric. It is a form of negotiation, but specifically it allows the rubric to grow out of student ideas. By using student language, the criteria are already written in a form that is accessible to the group (after all, they designed the key components).
  7. Simply use the rubric as the basis for formative feedback with students to aid familiarity.
  8. Use the criteria to assess exemplars of previous students’ work. This has the benefits of building familiarity and developing assessment judgment, as well as sparking new ideas from exposure to past students’ work. Of course this can be further developed with full sessions or online activities built around exemplar review, but the rubric can be central to this.
  9. A rubric can be partially written to offer space for choice. Leaving aspects of the rubric for students to complete leaves room for students to show their individuality and to customize tasks. Rubrics don’t box us in to total uniformity. Recently I created a rubric for a research project and left space for students to articulate the presentational aspects of the criteria. Some students filled in the rubric to support the production of a report, others a poster and others a journal article. (A simple sketch of how such a weighted, partially open rubric might be represented follows below.)

Using a class brainstorm to form the basis of a rubric with criteria relating to reflection 
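
For colleagues who keep their rubrics in a spreadsheet or build them into marking tools, a weighted rubric is straightforward to represent as a simple data structure. The sketch below is purely illustrative (the criteria, weightings and marks are hypothetical, not taken from my own rubrics); it shows how an overall mark can be generated from per-criterion judgments, with one criterion left open for students to define, as in point 9.

```python
# Illustrative only: a weighted rubric as a simple data structure.
# The criteria, weightings and marks below are hypothetical examples.

rubric = {
    "Use of literature to support argument": 0.4,
    "Structure and flow": 0.3,
    "Presentation (student-defined criterion)": 0.3,  # left open for students to articulate
}

# Marks awarded per criterion (0-100) for one piece of work.
judgments = {
    "Use of literature to support argument": 65,
    "Structure and flow": 72,
    "Presentation (student-defined criterion)": 58,
}

def overall_mark(weights, marks):
    """Weighted average of per-criterion marks; weights are assumed to sum to 1."""
    return sum(weights[criterion] * marks[criterion] for criterion in weights)

print(round(overall_mark(rubric, judgments), 1))  # 65.0
```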

I have only included approaches that I have used first-hand. I’d like to build this list up with the experiences of others, so if you have additional suggestions please do let me know.


Undergraduate Vivas

[Image: link to the undergraduate viva guidance document]

Over the last six months I have been looking into the undergraduate viva, asking questions such as: What are the benefits? What makes a good undergraduate viva? And how can students be prepared for theirs? One of the results of this work is a guidance document on how to conduct a viva of this type, which may be of interest to others.


The process of defining graduate attributes

I am aware that others are grappling with how to define graduate attributes, so I thought it would be helpful to share the approach that we took. As part of a whole-university curriculum review, and a strategy review, we set about trying to identify what it was that the curriculum should achieve. Essentially we asked: what is our goal? Unless we knew this, any curriculum initiative would be mere tinkering. So we asked a very fundamental question: what should a Harper graduate be? This goes beyond simply asking what they should be able to do, and incorporates the sense of self needed to deal with a fast-changing external environment and to be resilient for the future. The idea is underpinned by Ron Barnett’s work on working in supercomplexity. It’s a huge question, but one that we answered, I think, in a creative way.


Resources from the ‘build a graduate’ workshop

We gathered as many staff as were able to attend in a room with huge pieces of card printed with a giant graduate. In course teams, staff were then asked to build a graduate in their discipline. Using the card as a focus for thinking, prioritising, debate and discussion, each team built their own graduate. Of course this informed course-level thinking before more detailed discussions got underway about course content. Using post-it notes stuck onto the graduate allowed rearrangement, re-prioritisation and change as the group discussions evolved. The views in the room were not formed in isolation, since colleagues were involved in both student and industry engagement.

 

After each team had spent several hours identifying what their graduate would look like in a perfect world, we collated all of the words used by all of the teams. These were then combined and fed into a word cloud creator. The commonality across the lists showed itself in the larger words, which were repeated across different course areas. After some sorting and filtering it became clear that we did have a collective and common vision of what the graduates of the future should be. This exercise became the foundation of the new graduate attributes. The ‘build a graduate’ exercise was also undertaken by course teams with students and industry contacts. The word cloud produced is shown below.

The word cloud gave students and staff a visual connection to the exercise we had undertaken, and a constant reminder that defining ‘our’ graduateness had been a collective effort.


A first workshop output on defining graduateness
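
For anyone wanting to script this step rather than use an online creator, a word cloud can be generated from the collated terms in a few lines of Python. The sketch below uses the open-source wordcloud and matplotlib packages; the input file name is a hypothetical stand-in for wherever the collated workshop words are stored.

```python
# A minimal sketch: generate a word cloud from collated workshop terms.
# Assumes the open-source 'wordcloud' and 'matplotlib' packages are installed,
# and that the collated words sit one per line in 'graduate_terms.txt' (hypothetical file).

from collections import Counter

import matplotlib.pyplot as plt
from wordcloud import WordCloud

with open("graduate_terms.txt", encoding="utf-8") as f:
    terms = [line.strip().lower() for line in f if line.strip()]

# Terms repeated across course teams appear larger in the cloud.
frequencies = Counter(terms)

cloud = WordCloud(width=800, height=500, background_color="white")
cloud.generate_from_frequencies(frequencies)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```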

 


The final version of the graduate attributes 

 

The headline attributes helped to ground the Learning and Teaching Strategy; they provided clear direction as to what our activity should be pointing towards, and one of the key cascading ideas for strategy and operational policy.

 

For the curriculum aspects, once we had the broad terms for what a graduate should be, we interpreted each attribute, skill or area of understanding for each level of study. This involved some word-smithery and some external scoping to see how others level their outcomes, but it also required an eye on the future. What we ended up with was a breakdown of each of the graduate attributes, and a description of what should be achieved at each level in each area; a snapshot of this breakdown is captured in the final version of the attributes shown above.


It’s one thing to articulate the graduate attributes and specify them for each level; it is quite another to deploy them as the beating heart of the real curriculum. The first thing that we did was ask course teams to develop programmes that addressed each area at the correct level. Course-level engagement forced deeper conversations, such as ‘What does digital literacy mean in our context?’ and ‘Where are the opportunities for global perspectives?’, and this sparked the attributes into life. Each programme then mapped where the attributes were met, but this one-way mapping was deemed insufficient, as once complete it can, in reality, be committed to a top drawer and dismissed as a paper exercise. So we went a step further and requested that modules were individually mapped against the graduate outcomes. This makes it much clearer to students and staff what skills each module should address. Through validation and scrutiny, each module was checked to ensure it really was enabling the development of these attributes through its content, pedagogy, assessment or independent activities. The next step is to get students to consider their progress against the graduate outcomes in a meaningful, rather than tick-box, way. I’m sure others have taken different approaches to developing graduate attributes, but this one sought to be pragmatic and inclusive.
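
As a purely illustrative aside, the module-to-attribute mapping lends itself to a simple coverage check before validation; the sketch below uses hypothetical module and attribute names, not our actual mapping.

```python
# Illustrative only: check that every graduate attribute is mapped somewhere
# in a programme's modules. Module and attribute names are hypothetical.

graduate_attributes = {"digital literacy", "global perspectives", "communication", "resilience"}

module_map = {
    "Module A (year 1 core)": {"communication", "digital literacy"},
    "Module B (year 2 core)": {"global perspectives", "communication"},
    "Module C (final-year project)": {"digital literacy", "resilience"},
}

covered = set().union(*module_map.values())
missing = graduate_attributes - covered
print("Attributes not yet mapped:", ", ".join(sorted(missing)) if missing else "none")
```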


Feedback conversations: How should I use technology? A nuanced approach …

One of the most frequent conversations I have is about improving feedback, and how technology can help. Increasingly I am trying to encourage a more nuanced discussion, because deciding what feedback to give and how to give it is not simply a choice between one tool and another; the choice should reflect the individual lecturer’s preference, the context, the type of assessment or activity on which feedback is being offered, the characteristics of the student or cohort, the aims of the feedback, the tools and technology required, the quality management requirements and no doubt many other factors. Some of the common questions I get are shared below, with comments:

Should I use GradeMark for feedback? 

Well, it depends a good deal on what you want to achieve. GradeMark has many benefits but in itself it will not make feedback better. To be clear, like any technology it can make a difference but it is not a silver bullet; without meaningful engagement and a commitment to change practice, it will not improve satisfaction with feedback.

GradeMark can help you to achieve consistency in the comments that you offer to students, because you can create a bank of common points with which to annotate the work, and it can enable you to add more feed-forward signposting advice for common errors. For example, if a group is struggling to paraphrase, you could create a comment advising on techniques and pointing to resources that might help, and use it many times. GradeMark can help with a sense of fairness too, as marks can be allocated using a rubric; this is entirely optional, and there are of course other ways to employ a rubric. It can help with legibility, as comments are typed; but so too can very clear handwriting and other technologies. It can allow you to save time at certain points in the marking and feedback process, as you can get started on your marking as soon as students hand in, rather than waiting until you receive a complete set of papers. It can aid transparency when team marking; you can see how another tutor is marking and feeding back. Again, this is possible to achieve in other ways, but being able to see each other’s marking in real time can create an ongoing dialogue about the way marks are allocated and the way comments are added. If you are really concerned about reading on a screen, this might be a problem; but if you consume news, media, research and other things via a screen, it may be worth laying aside your concerns and giving it a try. All of these benefits, though, can only be realised if the user is working with the technology and not simply transferring existing practices into a digital environment.

Will it save me time? 

Yes and no. It’s not that simple; it depends how you use the facilities and what type of feedback you give. You can use as many or as few of the tools within GradeMark as you see fit, and in any combination: voice-over comments, annotations (stock comments, or personalised as if marking on paper), a rubric, auto-generated scoring from the rubric (or not) and a final summary comment. Each individual needs to look at their set-up, consider what they want to achieve, and then select the aspects of the tool that work for their situation. Annotations may be better for corrective, structural feedback, or feedback on specific aspects of calculations, but the narrative may be the place to provide feedback on the key ideas within the work. If you go into using GradeMark solely to achieve efficiencies, you will most likely be disappointed on first use, because there is a set-up investment and it takes a large group or multiple iterations to get payback on that initial time spent (a rough break-even sketch follows below). In my experience those who use GradeMark may start out seeking efficiency, but end up focusing on enhancing their feedback within the time constraints available to them. When time is saved, I have seen colleagues simply re-spend it on making enhancements, particularly to personalise the feedback further.
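
To make the payback point concrete, here is a back-of-envelope break-even sketch; all of the figures are hypothetical illustrations rather than measurements.

```python
# Hypothetical break-even for investing set-up time in a stock-comment bank and rubric.
# All figures below are illustrative assumptions, not measurements.

setup_minutes = 120            # one-off: building the stock-comment bank and rubric
saving_per_script_minutes = 3  # assumed marking time saved per script once the bank exists

break_even_scripts = setup_minutes / saving_per_script_minutes
print(break_even_scripts)  # 40.0 -> payback needs a large cohort or several iterations of the module
```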

Ok, so what equipment do I need to be able to use GradeMark? Is it best to use a tablet?

Again, it depends very much on your workflows and preferences. A desktop computer is my preference, as I like lots of screen room and I like to settle into one spot, with unlimited supplies of tea, whenever I mark. Others like to be mobile, and the tablet version of GradeMark allows you to download all the scripts, mark and feed back, and then upload. So, unlike the desktop version, you don’t need to be connected to the Internet; for those marking on the go, this is a good thing.

I see other people using other technologies for marking, like Dragon Dictate and annotation apps on tablets. Are these better than GradeMark?


There is a large toolkit available for assessment and feedback; each tool has strengths and weaknesses, and each fits differently with personal preferences and context. Dragon Dictate can be used to speak a narrative or extensive comments; it’s not perfect, but it may help those who struggle with typing. Annotation apps allow the conversion of handwriting to text, and they allow comments to be added at the point of need within a script (though GradeMark allows this too); on the downside, a manual intervention is needed to return the feedback to students. Track changes can be good for corrective feedback, but it can cause students to look at their work and feel that it wasn’t good enough, as it has the electronic equivalent of red pen all over it!

Second markers or external examiners refuse to use the same interface… Then what …?

I’d suggest that you encourage others in the process to use the technology that you have selected: give them early warning and offer to support them. A pre-emptive way of dealing with this is to ensure a course-wide approach to feedback, agreeing as a group the tools that you will use. This should then be discussed with the external examiner and others at appointment; it’s harder to resist a coordinated approach. Policy change is what is really needed here, so lobbying might help!

But students like handwritten feedback; they find computer-based feedback impersonal …

Maybe so, but all students prefer feedback that is legible and that they can collect without coming back onto campus. And is it not part of our responsibility as educators to ensure students can work digitally, even with their feedback? Students who tell us that they like handwritten feedback often value the personal connection between themselves and the marker, but feedback produced using technology can be highly personalised too. It is simply up to the assessor to use the tools available to achieve that personalisation; the tools themselves offer choices to the feedback craftsman. Adding a narrative comment, an audio comment or customising stock comments can all give a personal touch. However, if the feedback giver chooses none of these things, then of course the feedback will feel depersonalised.

Students say they don’t like electronic feedback…

Some might, and the reasons are complex. If we introduce a new feedback method at the end of a student’s programme, without explanation, irritation is inevitable: we have just added a complication at a critical point. Equally, if feedback across a student’s journey is predominantly paper-based, it is no wonder they struggle to remember how to retrieve their digital feedback and get frustrated. If the feedback is too late to be useful, that will also cause students to prefer the old methods. It may be useful to coordinate feedback approaches with others in your course area so that students get a consistent approach, rather than encountering the occasional exotic technology with no clear rationale. Finally, though, students also need to be trained to do more than receive their feedback: they might file it, return to it, précis it and identify the salient points. Good delivery of feedback will never be enough on its own; timeliness and engagement are also key to allowing students to gain the benefits of their feedback.

Seeing things differently ….

One of the benefits of using technology in feedback is not often spoken or written about. When we engage meaningfully with technology in feedback, it can change our approach to providing feedback, irrespective of the technology. By (real) example: someone trying annotation software may realise that legibility is a real issue for them and that they must prioritise this in future; someone using a rubric may start giving priority to assessment criteria as the need for equity and consistency comes more sharply into focus; someone using stock comments and adding a voice-over becomes aware of the need for a personal touch in feedback; and someone using audio becomes aware that the volume of feedback produced may be overwhelming for students to digest, and so revises their approach. These realisations live on beyond any particular technology; so when we think of using technology for feedback, it may be useful to be conscious of the changes it can bring about in the feedback mindset, and to judge success in these terms rather than just in terms of mastery of, or persistence with, one tool or another.


International staff development in teaching and learning: Lessons learnt

At the end of another period of working with lecturers from overseas, I thought it would be useful to pause for thought and identify lessons from working in the area of transnational staff development. I have just completed my third international staff development summer school, and here is what I found. These points come with the caveat that they are only my learning, and others may, of course, have different views of what works.

Be conscious of your assumptions.

Before meeting academics from other continents, it’s easy to let assumptions about what they may or may not be doing in practice, or what they may or may not believe, creep into your thinking and planning. For example, a common belief about tutors from China is that they only ever work in a transmission mode of education; this is simply not my experience, and the view rests on outdated assumptions. While some Anglicized techniques may be new to colleagues from other locations, my experience tells me we have a shared passion for making learning better, and some existing overlap in our methods, such as employing a flipped classroom.

Explore biography.

It’s always useful to start any transnational academic staff development by exploring the experiences and biography of the individuals involved. Those supporting transnational staff development can then be agile and responsive to the specific needs of the group. The things that I have tried are very simple: starting the development programme with a list of questions and concerns that the group would like to address, shared via a post-it note board; hosting a session with no plan, offering a hot-seat format where we simply respond to the questions of the group, to take stock and allow international colleagues to add context to their growing understanding; and walking together to listen more informally to the needs of the group.

Ensure that learning is always two-way.

One-directional international staff development comes across as neo-colonial self-righteousness. We need to employ empathetic methods wherein we offer our own practices, identify some of the limitations of those approaches, and also invite teachers from other locations and nations to do the same. It feels disrespectful to do anything else.

Get the basic accessibility matters right

Take care of the basics to ensure provision is accessible. In particular, it’s important to make sure that delivery is slow enough to allow the digestion of material; if you’re not sure, keep checking with the group. Also, ensure that resources (slides, papers, etc.) are available before the class so that translation apps can be used to get familiar with any tricky words.

Team teach, always.

Team teaching allows a richness which is not possible alone; I tend to work with people who can offer a very practical take on the theoretical ideas and research that I am exploring. As well as simply providing more experience in the room, teaching in pairs provides an opportunity to model professional differences, which are inevitable in teaching. For example, when working with a colleague in a session about teaching evaluation, my view was that it is okay to identify some of the areas on which you specifically want feedback from a teaching observation, but my colleague’s view was that this might limit the range of feedback and prevent previously unnoticed habits or issues from surfacing. Our nuanced differences were explored publicly as a model of divergent views; this increases the group’s exposure to detailed discussion. Team teaching also provided in-the-moment opportunities for peer observation and debrief; this should be routinely incorporated through the provision of collegial feedback on what worked well, and what less so.

Use international staff development as a vehicle for the host organisation’s own staff development.

If international staff development is kept locked within one or two people, the benefit is limited. Encouraging colleagues who may be outside the usual staff development circuit to come and join in can, I think, have a considerable impact in growing capacity and confidence for this type of work. It can also provide experience for aspiring Senior Fellows within the UK Professional Standards Framework, and it can provide challenge and a refreshing set of ideas for the staff who usually manage staff development.

 

Be confident about lessons on technology transferability

Within international staff development programmes we may wish to explore tools for learning and teaching: things like Twitter, Padlet, Nearpod, Facebook and Kahoot. A first thought may be: what’s the point, when we each have access to different apps, and some of our apps may not be appropriate in a different cultural context? However, our experience is that sharing is still worthwhile, even when the tools are not immediately transferable, as we learn in return about alternative apps. Most importantly, since technology and the way we use it reflect much about the underpinning power, beliefs and values in learning, exploring technology is a much more valuable experience than simply swapping ideas on apps we like.

Hold discussions in the first language

Group work with international staff, especially Chinese staff, is always very productive, but it is incredibly demanding and limiting to make that group work happen in a second language (e.g. English). Permitting sense-making in the first language allows rich debate and discussion to evolve, rather than slowing down the pace and adding another cognitive load. This approach loses the opportunity for the session host to contribute to the discussion, but in my experience this is a price worth paying to enable lively group discussion, and in any case a group précis in English can give the headlines.

Lead by self-exposure on the difficult discussions.

Asking questions, or discussing topics, which require some exposure of personal fragilities can be tricky to get started. Working with my colleague, Jane Headley, we found that sharing something of ourselves before asking others to do the same was helpful in creating an open forum. For example, when discussing technology, I shared some adverse feedback that I had received on my own approach; I then told the story of what happened next, and identified the decisions I made that led to a less-than-perfect learner experience. This light-hearted ’fessing up made others comfortable sharing their own critical incidents and the learning they drew from them. As well as showing openness, this deconstruction of practice also models reflection in action.

Use many examples and stories

Using real examples of situations, challenges and successes can really aid understanding, though remember that some political context may be needed to explain why one decision or another was taken at the time (e.g. ‘we had funding for this type of work’ or ‘we were preparing for the TEF’).

Encourage journal writing

Finally, by encouraging some reflective writing after each topic or session, international staff can form their own ‘take-away’ record of i) the key learning points and ii) what to research, extend or apply next. A simplified, structured format of learning journal can promote consolidation and impact from learning.

This list is no doubt incomplete, so please do add any other points that you might have.

Creative commons image sources:

https://commons.wikimedia.org/wiki/File:Globe_terrestre_Orange_te_Bleu.svg

 
