The algorithmic paradox of personalised learning

(There are non-algorithmic paradoxes too, but let’s start here)

Junaid Mubeen
4 min read · Feb 24, 2018

Machine learning screwed up again this week. YouTube’s classification system somehow placed a conspiracy video (which attempted to smear one of the teen survivors of the horrific shooting in Florida) at the top of its trending list. For all their sophistication, YouTube’s classification algorithms are not designed to understand each video on its own merits. Instead, they make a judgement based on how similar the video is to others on the platform (in this case, the footage was taken from a reputable news source, which was enough to make it look credible to the classification algorithm). Google recently pledged to assign 10,000 people to clean up “problematic content” across its sites (evidently the effort hasn’t kicked in yet), some measure of the algorithmic vulnerability that taints them.
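To make that failure mode concrete, here is a toy sketch of similarity-based classification. The features, labels and numbers are all invented for illustration (this is not YouTube’s actual system); the point is only that the label comes from the nearest labelled neighbours, not from the video itself.

```python
# Toy sketch: a nearest-neighbour classifier judging a video by its most
# similar labelled neighbour. Features and labels are invented assumptions.
import numpy as np

# Hypothetical features: [broadcast-quality footage, news-style audio, studio setting]
labelled_videos = [
    (np.array([0.9, 0.8, 0.9]), "credible"),     # evening news clip
    (np.array([0.8, 0.9, 0.7]), "credible"),     # press conference
    (np.array([0.1, 0.2, 0.3]), "problematic"),  # low-grade conspiracy rant
]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(features):
    """Return the label of the most similar labelled video."""
    _, label = max(labelled_videos, key=lambda v: cosine(features, v[0]))
    return label

# A conspiracy video built from reputable news footage *looks* like the news,
# so similarity alone waves it through:
conspiracy_video = np.array([0.9, 0.8, 0.8])
print(classify(conspiracy_video))  # -> "credible"
```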

YouTube’s mishap, and countless others like it, should serve as a cautionary tale to EdTech. The space is replete with products that rely on the same automated models where, this time, it is students’ learning that is being classified. Adaptive tutors (my own ballpark), for instance, are increasingly based on the judgements of ‘intelligent’ algorithms that determine what students know, where they are struggling, and what they should learn next. These automated tutors, many of which rely on machine learning models similar to those that power social media recommendations, are being touted as the great enabler of personalised learning. I know this because I am among the touters: I believe these tools can automate the most mundane aspects of learning and teaching (more another time). I also believe, however, that attempts to achieve personalised learning through algorithms alone border on paradoxical.

The paradox goes as follows: advocates (like myself) start with the idea that learning experiences should cater to the individual needs of every learner. We point to Todd Rose’s brilliant book The End of Average in defiance of any instructional approach that bases its decisions on the needs of the mythical average student (what Rose delightfully terms the ‘averagarian’ approach).

Next comes the technological innovation: we develop algorithms, often employing the weaponry of machine learning, to identify the optimal next lesson for a given student. The topic, its difficulty, the choice of representation and much else are based on the student’s unique learning profile.

The paradox arises when you realise how these models are developed. Machine learning makes inferences about an individual by mapping their profile onto a wider user base. Netflix will recommend movies based on what similar users went on to watch, where ‘similar’ is defined by your viewing patterns and demographics. As you spend more time on these so-called intelligent systems, they adapt to your usage patterns, homing in on your needs and preferences over time. But even the most elaborate models only know you in terms of how you compare to your fellow users.
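For the sceptical reader, here is a minimal sketch of user-based collaborative filtering, the pattern described above. The users, movies and ratings are invented, and the weighting scheme is a deliberately simple assumption rather than Netflix’s actual model. Note where the ‘average’ lives: your recommendation is a similarity-weighted average of what other people watched.

```python
# Minimal sketch of user-based collaborative filtering. All data is invented.
import numpy as np

movies = ["drama_a", "comedy_b", "thriller_c", "documentary_d"]

# Rows are users, columns are movies; 0 means "not yet watched".
ratings = np.array([
    [5.0, 1.0, 4.0, 1.0],  # alice
    [4.0, 2.0, 5.0, 1.0],  # bob
    [1.0, 5.0, 2.0, 5.0],  # carol
])
you = np.array([5.0, 1.0, 0.0, 0.0])  # your history so far

def similarity(a, b):
    """Cosine similarity over the movies both users have rated."""
    mask = (a > 0) & (b > 0)
    return a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]))

# Your predicted ratings are a similarity-weighted AVERAGE of other users'
# ratings: the system knows 'you' only through your resemblance to the crowd.
sims = np.array([similarity(you, other) for other in ratings])
predicted = sims @ ratings / sims.sum()

unseen = [(movies[j], predicted[j]) for j in range(len(movies)) if you[j] == 0]
print(max(unseen, key=lambda t: t[1]))  # -> ('thriller_c', ...)
```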

In the same way, a machine learning-driven tutor will base its ‘understanding’ of each learner not only on their historical behaviours, but on how those behaviours match up to those of other learners. You will be offered a Fractions lesson on the basis that, on ‘average’, it helped other students too. The notion of ‘average’ may be tucked away in the abstraction of complex models, but it will always lean on what the system has picked up from other students.
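The same sketch applies to the tutor. Here is a hypothetical scoring function (invented numbers, not any real product’s algorithm) for deciding whether to serve that Fractions lesson; the decision rests on a weighted average over other students, however deeply the model buries it.

```python
# Toy sketch: scoring a candidate Fractions lesson for the current learner.
# Records pair "how similar was this past student to you" with "how much did
# the lesson help them". All values are invented for illustration.
similar_students = [
    (0.9, 0.30),   # very similar learner: improved 30%
    (0.7, 0.10),   # fairly similar learner: improved 10%
    (0.4, -0.05),  # less similar learner: actually regressed
]

def predicted_gain(records):
    """Similarity-weighted average of other students' learning gains."""
    total = sum(sim for sim, _ in records)
    return sum(sim * gain for sim, gain in records) / total

# Whatever the model's sophistication, the lesson is recommended because it
# helped *other* students on average, not because of you alone.
print(f"Fractions lesson, expected gain: {predicted_gain(similar_students):.2f}")
```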

So there’s the crux: no matter how sophisticated a classification algorithm is, there is an extent to which aggregate judgements inform your individual experience. In this sense, machine learning is unable to escape the clutches of the averagarian approach.

How easy it is for ML to screw up

YouTube’s latest example shows how even the most advanced classification algorithms can get it horribly wrong. Machines possess neither common sense nor common decency; what appears to be a tiny nuance in features can result in a lapse of judgement that places vicious fake news at the top of a trending list. What might the consequences of misclassification, of making assumptions about an individual based on knowledge of the crowd, be for intelligent tutors? The answers are worthy of a separate post; the fact that we even have to contemplate the question tells us that the ideals of personalised learning are undermined by a purely algorithmic approach.

The case for personalised learning must not be deferred to the presumed capabilities of machine learning-driven tutoring algorithms. Any meaningful approach to personalised learning, whether or not it uses digital technology, must resist aggregate judgements. Google’s solution of throwing an army of human checkers at the problem to walk back errors is not adequate either: for personalised learning to earn its stripes, the aim must squarely be to prevent such errors in the first place.

After all, what do you call an educational approach that forgives repeated screw-ups that arise as a result of crude aggregate judgements?

I am a research mathematician turned educator. Say hello on Twitter or LinkedIn and sign up below to receive more content like this.
