When teachers put down their whiteboard markers at the end of last year, the world was spinning smoothly on its axis. By mid-January, however, artificial intelligence and ChatGPT had hit the headlines and the educational future had, allegedly, changed forever.
Perhaps. Just two months after its launch, it’s too early for anyone to be sure, but the implications of ChatGPT (and its equivalents in development) will take a long time to work out.
What has struck me initially is the total absence of any discussion about relationships. Yet humans are social animals, and we are wired to learn through human interaction. Teachers know that great teaching has two essential elements: the ability to communicate ideas effectively, and trust. No matter how clearly I explain something, a student who feels lost, unsure, ‘stupid’ or fearful of making mistakes will not learn well. As a Maths teacher, I know that instilling confidence has been a vital part of my own classroom practice, and the rewards of doing this successfully are what makes teaching the best job in the world. I expect that most parents intuit this as well: assuming a basic level of competence from the teacher, we understand that the quality of the relationship can still be decisive in determining how much the child actually learns.
In ignoring this essential point, the ChatGPT debate is just another iteration of previous unrealised ‘revolutions’. First the internet, and then smartphones and Google, were supposed to render schools redundant – so why aren’t parents simply keeping their children at home, opening up a search engine and watching well-informed and disciplined minds emerge? There are, of course, stories of exceptional autodidacts, but all our experience establishes that, for almost everyone, learning in a community under the guidance of an expert is the most efficient and sustainable way to learn.
Other issues are, in my view, being overlooked. For instance, the CEO of Coursera (a provider of online courses) claims that, through artificial intelligence:
“Writing — not thinking, but writing — will also be eliminated. If you are not a great writer but you have good thoughts, it will help you put your good thoughts into clear writing.”
This could, perhaps, apply to simple forms of communication, conveying information or descriptions, where writing may well be nothing more than simple transmission of fully-formed thought.
For complex ideas and higher analysis, however, there is a feedback loop between thought and written word. Articulating a thought in writing demands a precision which forces us to sharpen our ideas. It is a common experience to write a draft on a complex topic and realise, through grappling with accurate expression, that one’s first thoughts are not quite right. Understanding comes not from sliding quickly over slippery ideas but from wrestling them down to the ground. Writing is the grappling process. Eliminate writing and you lose quality of thought.
There are other false assumptions in the debate – for instance, that ChatGPT’s capacity to answer a given question achieves key educational objectives, when in reality that is just the starting point. The higher, and more difficult, objective is to acquire the capacity to ask interesting questions, which necessarily relies on a pre-existing foundation of knowledge: precisely what will be lost if artificial intelligence is relied on to do the work for us.
But at this point, I will move on to the question that has been widely discussed: implications for assessment.
Clearly, some forms of assessment will need to adapt quickly. There is a place for asking students simply to recount information (as part of ensuring that they have certain knowledge as a starting point), but the power of ChatGPT already proves that it would be unwise to attach much weight to such tasks if completed unsupervised. In this respect, though, there is not much difference at school level between access to ChatGPT and access to parents who are willing to assist. No doubt, teachers everywhere must give renewed attention to the quality of the assessment task, paying attention to the features which make it harder to hack with artificial intelligence – which may drive an overall improvement in quality. As one commentator has said, this development “should be welcomed by all those who value original thinking and effective writing, not because of what it does but because of what it exposes.” There are also ways of improving the validity of the results, such as making the drafting process part of the assessment task and knowing your students well enough to be able to spot anomalous work. Where it really counts, completing work under exam conditions is an effective fix, although this will disappoint those who have been advocating for the total abolition of exams – something I have never thought a good idea.
In some ways, however, the focus on the tools of cheating misses the point, because the ultimate protection is the culture of learning. Plagiarism and dishonesty are a tempting short-term fix for a specific assessment barrier, but students who are hungry to learn will quickly come to understand that cheating – whether through ChatGPT or many other ingenious means – only undermines their long-term objectives. I’m not romantic enough to rely on this for high-stakes assessments, but neither am I pessimistic enough to think that, just because a new form of cheating is available, it spells the end of the essay as we know it.
Committed as we are at Queenwood to offering a liberal education underpinned by a love of learning for its own sake, these new developments should not represent a fundamental crisis but an opportunity to emphasise and refine what we already do.