“LARCing” about

Catching up on Week 11 after a bout of sickness, I’ve been playing with the LARC tool provided by the University to explore the topic of learning analytics.

It provides different styles of report (lenient, moderate, or strict) based on metrics typically used within learning analytics (attendance, engagement, social, performance, persona). I dived straight in and picked the two extremes for engagement, simply because ‘engagement’ seems to me a particularly woolly metric…

LARC report, w/c 20 Nov 17. Strict, engagement.

LARC report, w/c 20 Nov 17. Lenient, engagement.

The contrast between the two is quite stark. The lenient style seems more human – it’s more encouraging (“your active involvement is really great”) and conversational/personable (“you seemed”… compared with “you were noticeably…”).

Despite both being automated, the lenient style feels less ‘systematic’ than the strict. Does this suggest that humans are more likely to be lenient and accommodating, or is it simply that we associate this type of language less with automation – so it doesn’t feel more ‘human’, just less ‘computer’? This certainly chimes with insights into the Twitter ‘Teacherbot’ from Bayne (2015). The line between human and computer is becoming increasingly blurred through the use of artificial intelligence, and how students react to these interactions is of particular personal interest.

I think it’s interesting to consider how one responds to each style. Given my engagement appears to be ‘satisfactory’ at a base level, the feedback isn’t necessarily looking to provoke a particular response. However, if my engagement were less than satisfactory, I’m not sure which style would provoke the better response and get me into action. I guess it depends on whether the ‘carrot or the stick’ is the better driver for the student.

The examples above make me consider the Course Signals project in more detail, which was discussed in Clow (2013) and Gasevic et al. (2015). From my understanding, this project provides tutors with relevant information about their students’ performance, and the tutor decides on the format of the intervention (should it be appropriate to make one). The LARC project seems to have gone one step further, in that the style of the response is itself generated. Referring to my initial point about choice of style, in the Course Signals approach the tutor would ultimately make this choice based on their understanding of the student. That’s not to say this couldn’t eventually be delivered automatically with some increased intelligence – it would just need some A/B testing early in the student’s interaction with the course to trial different forms of feedback and see what provokes the desired response. Of course, this discovery phase would bring with it significant risks, as students would be likely to receive erratic and wide-ranging types of feedback while their engagement with the course is at its most embryonic.
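As a thought experiment only – LARC’s internals aren’t public, and every name, threshold and outcome below is my own invention – that discovery phase might look something like a crude A/B/n split:

```python
import random
from collections import defaultdict

STYLES = ["lenient", "moderate", "strict"]  # mirrors the three LARC report styles

def assign_style(student_id: str) -> str:
    """Assign a feedback style per student (a simple A/B/n split).
    Seeding on the ID keeps the assignment stable across weeks in this sketch."""
    random.seed(student_id)
    return random.choice(STYLES)

def record_outcome(results, style: str, re_engaged: bool) -> None:
    """Tally whether the student re-engaged (logged in, posted, etc.) after the nudge."""
    results[style]["sent"] += 1
    results[style]["re_engaged"] += int(re_engaged)

def summarise(results) -> dict:
    """Response rate per style - the signal you'd use to pick a default style later on."""
    return {s: r["re_engaged"] / r["sent"] for s, r in results.items() if r["sent"]}

# Toy run with made-up outcomes
results = defaultdict(lambda: {"sent": 0, "re_engaged": 0})
for student, responded in [("s01", True), ("s02", False), ("s03", True), ("s04", True)]:
    record_outcome(results, assign_style(student), responded)
print(summarise(results))
```

The design tension is the one noted above: until those response rates stabilise, students are effectively guinea pigs for whichever style they happen to draw.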

As a side note, Clow (2013) discusses the development of semantic and more qualitative data aggregation, and how this could be put to more meaningful use. Given this, perhaps a logical next step would be to develop the algorithms to understand the register and tone of the language used in students’ blog posts and relay any feedback in a similar style (as a way of increasing engagement).
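Clow doesn’t describe an implementation, so purely as an illustration of the idea – with an invented marker list and invented feedback templates – a toy register-matcher might look like this:

```python
import re

# Crude proxy for informality: contractions and chatty words (my own arbitrary list)
INFORMAL_MARKERS = re.compile(r"\b(i'm|i've|don't|can't|it's|guess|pretty|stuff)\b", re.IGNORECASE)

def detect_register(post: str) -> str:
    """Classify a blog post as conversational or formal from the density of informal markers."""
    words = post.split()
    if not words:
        return "formal"
    informal_ratio = len(INFORMAL_MARKERS.findall(post)) / len(words)
    return "conversational" if informal_ratio > 0.02 else "formal"

FEEDBACK_TEMPLATES = {  # hypothetical wording, not LARC's actual templates
    "conversational": "You seemed a bit quieter on the forums this week - worth dropping in?",
    "formal": "Your forum activity this week was noticeably below the cohort average.",
}

def feedback_for(post: str) -> str:
    """Relay feedback in a register similar to the student's own writing."""
    return FEEDBACK_TEMPLATES[detect_register(post)]

print(feedback_for("I'm not sure I've got my head around this week's readings yet..."))
```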

Going back to the LARC project, I thought it’d be useful to look at attendance, particularly in light of Gasevic et al.’s (2015) references to the pitfalls of this metric.

LARC report, w/c 20 Nov 17. Moderate, attendance.

Gasevic et al. (2015) use three “axioms” to discuss the issues in learning analytics. One of these is agency: students have discretion in how they study. A weakness in analysing attendance, in particular, is therefore going to be in benchmarking, both against the student’s prior history and against the cohort as a whole. This was done by design by the UoE team, of course, but we were asked to generate LARC reports based on a week when activity took place largely outside the VLE, namely on Twitter. There’s an issue here, in that the tool does not have the context of the week factored into it, which raises questions about the term ‘attendance’ as a whole. Attendance has been extrapolated from the number of ‘logins’ by the student, and the two may not be as compatible as they first appear.

When comparing with the wider group, it’s also easy to point out potential holes. One student may prefer to log in once, download all the materials and digest them before interacting on the discussion forums. Another may be more of a ‘lurker’, preferring to interact later in the week, perhaps when other commitments permit.
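To make the logins-versus-attendance point concrete, here’s a small, entirely invented example of how those two students (plus a frequent dipper-in) would rank under a naive login-count proxy versus a time-on-task proxy:

```python
# Entirely invented weekly activity: (student, logins, minutes_on_task)
week_activity = [
    ("downloader", 1, 240),   # logs in once, downloads everything, works offline
    ("lurker",     2, 180),   # reads late in the week when other commitments permit
    ("frequent",   9, 150),   # dips in and out many times
]

def attendance_by_logins(activity):
    """The naive proxy: more logins = better 'attendance'."""
    return sorted(activity, key=lambda a: a[1], reverse=True)

def attendance_by_time(activity):
    """An alternative proxy: estimated time on task."""
    return sorted(activity, key=lambda a: a[2], reverse=True)

print([s for s, *_ in attendance_by_logins(week_activity)])  # ['frequent', 'lurker', 'downloader']
print([s for s, *_ in attendance_by_time(week_activity)])    # ['downloader', 'lurker', 'frequent']
```

The ranking flips completely, which is really the point: the number a dashboard reports depends entirely on which proxy it quietly chose.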

Ultimately this all comes down to context – situational, pedagogical and peer – and this is where a teacher can add significant value. I think one of the wider challenges for learning analytics is the aggregation of these personal connections and observations; however, this raises challenges of bias and neutrality. It seems that learning analytics can offer significant value as indicators, but the extent to which the metrics are seen to represent the ‘truth’ needs constant challenging.

References:

  • Bayne, S. (2015). Teacherbot: interventions in automated teaching. Teaching in Higher Education, 20(4), pp. 455–467.
  • Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, 18(6), pp. 683–695.
  • Gasevic, D., Dawson, S. & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), p. 64. DOI: 10.1007/s11528-014-0822
