I have put together a short guide to action research, particularly to support colleagues in higher education who may be undertaking action research for the first time. It is absolutely not intended as a substitute for the literature, but it is offered as a ‘first stop’ for anyone contemplating this methodology. It offers practical ideas and tips and seeks to answer some of the key questions that new action researchers commonly have. As ever, any feedback, additional inclusions or suggestions for revision would be welcome.
Download the booklet here: Action Research Introductory Resource
In considering how to support curriculum design for new programmes I have developed a question framework for course design teams to use to help them to deliberate and discuss the shape of new programmes. It tries to encourage a balance of looking back at what has worked before, and looking forward at how designs could be improved. It encourages discussion in the context of the discipline, but focuses on the underpinning structure of the design rather than specific content.
Download the course design think sheet
Looking at lecture capture led me to ask questions about the technology’s effectiveness. I can’t help feeling that lecture capture is counter-intuitive: we know transmission-based learning is less effective than active learning (so why would we invest more in it and replicate it?), and we know that concentration spans for online engagement don’t readily lend themselves to hour-long broadcasts (my own concentration gives way to frustration after 15 minutes!). Nevertheless, adoption is on the increase and students clearly appreciate the opportunity to apply catch-up TV principles to learning – they value the flexibility.
As lecture capture heads towards the mainstream, I thought it useful to look at the evidence of the benefits and challenges of this technology, especially in light of a prediction that we may begin to move away from capturing lectures to viewing lectures as performances – something Professor Phil Race constantly emphasises with the idea of making the lecture unmissable and engaging.
My reading notes can be downloaded but the headline points were:
- More research is needed into the actual, rather than perceived, effectiveness of lecture capture.
- Students appreciate lecture capture and believe it helps learning, but the actual impact is unclear. Critically, there is little or no evidence that lecture capture really affects performance. Some subsets of users appear to show higher scores, but this may be associated with their diligence rather than with heavy usage of downloads.
- The circumstances in which lecture capture is effective, and the reasons for its effectiveness, are also unclear. Research suggests that content-heavy subjects are best suited to this technology and interactive subjects less so, which makes good common sense. By implication, this raises the question: would lecture capture lead to a less interactive delivery style?
- Lecture capture is suspected of having a connection with more effective note taking, and students appear to watch lectures selectively to address tricky concepts. These recurrent findings, irrespective of the growth of lecture capture, point to the value of addressing how students take notes as an academic skill, and raise the question of how media can be used to address difficult concepts in watchable, demystifying, even (dare I say) enjoyable ways.
If they are useful please help yourself to my lecture capture quick notes.
The document posted is a collection of short narrative portraits constructed during my doctoral research, titled ‘Using technology for student feedback: Lecturer perspectives’. Within the study, fifteen participants were interviewed. Each told their story of how and why they used technology in feedback. This illuminated challenges in the development of academic practice, uncovered some of the ways in which feedback practice is formed, and showed some of the ways in which lecturers internally mediate technology selection.
Individual interview transcripts were reduced to portraits (essentially mini accounts). This was done using a systematic and reflexive process articulated by Seidman (2013). The portraits themselves, and the process of data reduction, provided learning which fed into the wider analytical process. Not all of these portrait stories are included in the final thesis in their full form; however, given that narratives can provide instant knowledge (Webster & Mertova, 2007), I wanted to publish the collection. The participant portraits are presented here because they stand alone as insights into the formation of academic practice.
DOWNLOAD Participant stories – in their words
Seidman, I. (2013). Interviewing as qualitative research: A guide for researchers in education and the social sciences (4th ed.). New York: Teachers College Press.
Webster, L., & Mertova, P. (2007). Using narrative inquiry as a research method: An introduction to using critical event narrative analysis in research on learning and teaching. Abingdon: Routledge.
Part way through my thesis research I have stood back to ask: what is all this data saying? To this end I have produced a ‘pause for thought’ document about the emerging findings. This is not the finished output, but creating it helped me consolidate my thoughts, and in sharing it I hope to receive comments that may help refine further analysis or additional data collection.
Please feel free to leave any questions or comments in response.
And thank you to those who have assisted me in setting up the next round of interviews after a recent plea for help.
Preliminary Research Findings available here.
While editing my thesis I needed to cut a short section I had written about how mixed methods can be highly compatible with critical realism. As my work developed I did not, in the end, use mixed methods. This section was hard won, and simply deleting it felt like a waste. Therefore, attached to this post is a short piece about the compatibility of mixed methods and critical realism, in the hope that it may be useful to someone else.
Download the short article here: Marrying mixed methods and critical realism
In considering staff experiences of choosing and using feedback technology, one of the emerging themes has been the differing views on feedback technologies and efficiencies. While the jury is still out on the data and the process is incomplete, my observation is that efficiency can be conceived in different ways in the negotiation of technology. For some, efficiency is a primary driver in the decision-making process: the search for technology and the refinement of its use is motivated and shaped by the quest for efficiencies. For others, efficiencies are a welcome benefit of technology – almost an unexpected gift – welcome, but not necessary.

Efficiencies also appear to be conceived relatively; rarely are they discussed without reference to the relative enhancement gains that can be made through a technology. Wherever there is a time saving there is a tendency to ‘re-spend’ the saved time making still more enhancements to the feedback – adding detail and depth, for example. In this way efficiencies become difficult to identify: they are theoretically achievable, but in practice they are trumped by the possibility of improvement. Efficiency also seems to be a veto concept for some; it is not a particular concern in the run of practice, but is triggered only when a particular technology is likely to encroach on other activities or create an intolerable stress.