The developing potential here was clear to see in Sian Bayne's paper (Bayne, S. (2015) 'Teacherbot: interventions in automated teaching', Teaching in Higher Education, 20(4), 455-467) on the 'teacherbot' developed by a team at the University of Edinburgh and used in an earlier MOOC.
The twitterbot examples we'd found as a collective were by their very nature designed for consumption by large audiences, and in most cases offered little interaction. They used the power of algorithms to source, rework and remix varieties of content.
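To make that concrete, here is a minimal sketch of the kind of content-remixing bot described above. It is purely illustrative: the source fragments and the pick-and-join approach are my own assumptions, not how any of the bots in our examples actually worked.

```python
# Illustrative sketch only: sources a few fragments of content and remixes
# them into a new post, in the spirit of the twitterbots described above.
import random

SOURCE_FRAGMENTS = [
    "learning is a conversation",
    "the algorithm never sleeps",
    "assessment is just around the corner",
    "teaching at scale, one tweet at a time",
]

def compose_tweet(fragments, max_len=140):
    """Pick two sourced fragments and recombine them into a new post."""
    picked = random.sample(fragments, k=2)
    return " / ".join(picked)[:max_len]

if __name__ == "__main__":
    print(compose_tweet(SOURCE_FRAGMENTS))
```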
Given Twitter's primary use as a platform for interaction, it was intriguing to see how the teacherbot had been designed for a particular set of people (namely the MOOC participants) and yet provided value in several ways.
The bot offered some immediate efficiency gains, for example automated reminders of assessment deadlines triggered by keywords. But one of Sian's key arguments, as I read the paper, was that automation should not be viewed simply as a way of gaining efficiencies; exploration in this area should be framed more broadly, giving its potential a wider space in which to be tested.
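As a rough sketch of that keyword-triggered reminder idea, something like the following would do the job. This is an assumption about the mechanism rather than the Edinburgh team's actual code: the keywords, reply texts and matching logic are all illustrative.

```python
# Minimal sketch of a keyword-triggered reminder reply (illustrative only).
KEYWORD_REPLIES = {
    "deadline": "Reminder: the next assessment is due on Friday at 17:00.",
    "assessment": "Details of the assessment criteria are on the course page.",
    "submit": "Submissions go through the MOOC platform, not by email.",
}

def reply_to(tweet_text):
    """Return a canned reminder if the tweet mentions a watched keyword."""
    text = tweet_text.lower()
    for keyword, reply in KEYWORD_REPLIES.items():
        if keyword in text:
            return reply
    return None  # stay silent if nothing matches

# Example: a student tweet mentioning "deadline" triggers the reminder.
print(reply_to("Does anyone know when the deadline is?"))
```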
So the real excitement comes from the examples of interaction between the teacherbot and MOOC students, even if these were "slightly 'clunky' and often rather wide-of-the-mark". In some cases the bot provoked useful reflection and discussion, adding value to the student experience.
The paper did make me ponder a few things:
- It'd be fascinating to see teacherbot v2. Given that its first iteration was understandably clunky at times, and that an algorithm/bot improves when it is receptive to feedback, it'd be useful to see how much interaction happens the second time around, and the value it brings to the student experience. With subsequent 'polishing' of the teacherbot interactions, I wonder if it would become increasingly difficult to spot it as a bot. (Obviously, there are ethical discussions to be had here about not raising awareness of this with students, but putting this to one side for now…).
- Taking this further, if bots become more commonplace in these environments, would students increase or decrease their interaction with the bot as a result, and ultimately would they even care whether it was human or non-human? Is that relevant to the experience, or is it simply a personal choice?
- In some of the forum postings there has been discussion about the procurement of technology within education, and how this has in several instances been sold into the institution or organisation primarily as an efficiency gain, with the pedagogical advantages at times a secondary consideration. Given that the speed of development of any technology is often tied to its adoption rate (more usage brings more development), should we think more pragmatically about this, and ensure that any technology we wish to bring into education has an efficiency element to appeal to certain stakeholders within the decision-making process?