If, like me, you misspent your youth reading superhero comics, you’ll probably know that Spider-Man had a personal philosophy: with great power comes great responsibility. These days, we might amend that slightly: with great computing power comes great responsibility.
There’s little doubt that artificial intelligence (AI) has the potential to transform the mundane tasks that teachers and office staff have to perform: test marking, helping parents with routine enquiries (through a ‘bot’), and crunching huge amounts of data to look for patterns. Indeed, analytics can already make sense of masses of data, as we saw in Making sense of big data. Imagine how powerful that would be when combined with AI in the form of a bot that could not only make recommendations, but then, of its own accord, decide which data it would be most efficient to process next.
Another development on the cards is AI that can mark student essays accurately. It could do this quite readily, once it has been fed enough teacher-marked ‘correct’ essays to be able to judge an essay it has never seen before.
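To make that concrete, here is a minimal sketch of the kind of approach such a marker might take: a supervised model that learns the relationship between essay text and the marks teachers have already awarded, then predicts a mark for unseen work. The library (scikit-learn), the tiny dataset and the scoring scale are all illustrative assumptions, not a description of any real product.

```python
# A minimal sketch of how an automated essay marker might work: a
# supervised model learns the link between essay text and the marks
# teachers have already awarded, then predicts a mark for an essay it
# has never seen. The library, dataset and scores are assumptions only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Essays that teachers have already marked (score out of 100).
marked_essays = [
    "Demand rises when prices fall, other things being equal.",
    "The author uses imagery of light and dark to signal conflict.",
    "Prices and demand are unrelated because people buy what they like.",
]
teacher_marks = [78, 85, 32]

# Turn each essay into word-frequency features, then fit a regression
# model that associates those features with the marks awarded.
marker = make_pipeline(TfidfVectorizer(), Ridge())
marker.fit(marked_essays, teacher_marks)

# Ask the model to judge an essay it has never seen before.
new_essay = "When prices fall, consumers tend to demand more of a good."
predicted_mark = marker.predict([new_essay])[0]
print(f"Predicted mark: {predicted_mark:.0f}")
```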
This all sounds wonderful, but there are potential problems that we really ought to be talking about now.
The first is that AI as it works now is a black box. It reaches conclusions in a way that is hidden from view. In other words, we often don’t know how the program produced the result it did. Indeed, as Rose Luckin points out in her latest book, Machine Learning and Human Intelligence, the program itself doesn’t know how it reached the conclusion. It has no self-awareness or meta-cognition: it doesn’t actually know how it ‘thinks’.
This means that, from a philosophical point of view, we are prepared to take the word of a program that can process data far more quickly than we ever could, but which has no idea what it’s doing. Unfortunately, even if you have little time or patience for philosophical considerations, there are practical pitfalls too.
Suppose there is a child in your school or MAT -- let’s call him Jake -- who is continually misbehaving. The reason is that he is the main carer at home, and often comes to school hungry. He is tired and stressed, and because he often gets in late because of his caring duties, he is falling behind in his school work. Consequently, he misbehaves, whether out of a sense of frustration, or to provide a face-saving reason for his falling grades.
Now imagine an AI program that looks through all the data and learns what poor behaviour looks like, and how it’s associated with low grades. It then starts to weight that data more heavily than other data, and predicts which pupils are likely to get into trouble. Surprise, surprise, Jake is flagged up quite often, with the result that he has more letters sent home, which then feeds back into the system.
Now imagine what happens when Jake is about to go on to secondary school. His data record will be used by the secondary school to, say, put him in a special class so as to reduce his disruptive influence on other kids. He doesn’t learn very much, which leads to more misbehaviour.
This kind of positive feedback has already happened in the ‘big wide world’. For example, there was a case of a homeless man in the USA who accumulated a number of arrests for loitering. Because of this criminal record, he was denied housing, which prolonged his homelessness, which led to more ‘loitering’.
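The shape of that loop is easy to see in a toy simulation. In the sketch below, every flag the system raises generates new negative records (a letter home, a sanction) that are fed straight back into the data the system looks at next time; the numbers and the threshold are invented purely for illustration.

```python
# A toy simulation of the feedback loop described above. Each time the
# system flags a pupil, the flag itself generates new negative records
# (letters home, sanctions) that are fed back into the data, so next
# term's 'risk score' is even higher. All numbers are invented.

negative_records = 3   # e.g. late arrivals already on file
FLAG_THRESHOLD = 5     # score above which the pupil gets flagged

for term in range(1, 7):
    risk_score = negative_records   # crude proxy: more records, higher risk
    flagged = risk_score >= FLAG_THRESHOLD
    print(f"Term {term}: records={negative_records}, flagged={flagged}")
    # Flags produce consequences that are themselves recorded, so the
    # data the system sees next term is worse than it would have been.
    negative_records += 2 if flagged else 1
```

Once the pupil crosses the threshold, the flags generate the very records that keep him above it.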
Such self-fulfilling prophecies built into algorithms are bad enough. They are made worse when combined with something known as ‘automation bias’, which is where people trust technology more than they trust a human being.
Thus when the computer tells you to keep an eye on Jake, you are inclined to believe it without question. After all, the AI has ‘looked’ at all the data, so it must be right. This attitude could also undermine the usefulness of an AI system that marks essays. As unlikely as it sounds, one of your students could come up with a completely new theory about, say, economics. (It has been known: when J. M. Keynes was asked why he had done so badly in the economics paper of his Civil Service examination, he reportedly replied that it was because he knew more about economics than his examiners.) Since the AI has learnt what the ‘correct’ answer is, it will mark the student’s essay as wrong.
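A crude way to see why is to imagine a marker that treats the mark as nothing more than similarity to a corpus of model answers, which is one way such systems can go wrong. In the sketch below (the essays are invented for the example), an original but defensible argument scores close to zero simply because it doesn’t look like anything the marker was trained on.

```python
# A rough illustration of why a marker trained only on 'correct' essays
# can penalise an original answer: here the 'mark' is just similarity to
# the corpus of model answers. The essays are invented for the example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

model_answers = [
    "Demand falls as prices rise because consumers substitute cheaper goods.",
    "Higher prices reduce quantity demanded along the demand curve.",
]
# A novel but defensible argument, phrased in terms the corpus never uses.
original_answer = "Animal spirits and expectations, not price alone, drive spending."

vectoriser = TfidfVectorizer().fit(model_answers)
corpus_vectors = vectoriser.transform(model_answers)
answer_vector = vectoriser.transform([original_answer])

# Similarity to the nearest model answer: an unfamiliar argument scores
# close to zero simply because it doesn't look like the training data.
score = cosine_similarity(answer_vector, corpus_vectors).max()
print(f"Similarity to model answers: {score:.2f}")
```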
A related danger is that, even when the AI marks an essay correctly, if it does so without any input from a teacher, the teacher has no opportunity to see what misconceptions the student has developed.
To combat the twin dangers of self-fulfilling algorithms and automation bias, schools need to ensure that the role of human beings is not diminished to the point where AI rules with no questions asked. Teachers and senior leaders must feel they have the confidence to question what the AI program is saying. Unlike people, computers don’t have empathy, and they don’t understand nuance.
In the case of Jake and others like him, it would take early intervention by a person who is prepared to look beyond the numbers. Continuing to punish Jake for a situation that is beyond his control doesn’t help anyone.
In the case of grades, teachers need to feel they have the right to question unexpectedly bad marks. If the student whose essay is marked as grade F is usually a high-flier, then it’s better to look into it than simply to accept the computer’s decision.
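One practical way to build in that right to question is to route any AI mark that diverges sharply from a student’s recent track record to a human marker rather than accepting it automatically. The sketch below is only an illustration; the threshold and the marks it uses are invented.

```python
# One way to keep the right to question built in: route any AI mark that
# diverges sharply from a student's recent track record to a human marker
# instead of accepting it automatically. The threshold and the marks used
# here are illustrative assumptions only.

REVIEW_THRESHOLD = 20  # percentage points of divergence that triggers review

def needs_human_review(ai_mark: float, recent_marks: list[float]) -> bool:
    """Flag an essay for a teacher when the AI's mark is far from what the
    student's recent work would lead us to expect."""
    if not recent_marks:
        return True  # no history to compare against, so a human should look
    expected = sum(recent_marks) / len(recent_marks)
    return abs(ai_mark - expected) >= REVIEW_THRESHOLD

# A usually high-flying student handed a failing mark by the AI:
print(needs_human_review(ai_mark=35, recent_marks=[82, 78, 88]))  # True
```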
The era of autonomous AI in schools may be some way off, but it is probably closer than we might think. In a situation in which the computer is crucial to many key decisions, how will you ensure that those decisions can be questioned?