Artificial Intelligence in the Classroom Revisited -- updated with comments in response to Miguel Guhlin
Since republishing the article below, I commented on a post by Miguel, who then responded with another blog post citing mine. His article takes mine a few steps further, because he asked ChatGPT to advise him on how to give feedback on a piece of work.
This made me think about how useful I would have found ChatGPT back in the mid-80s. I had a job as a member of a local authority’s permanent unattached team. This was a collection of teachers whose job was to be parachuted into schools to teach a particular subject — their own specialism — when the member of staff concerned was on maternity leave or long-term sick leave.
Note the mention of “their own specialism”. Ha! My subject was Economics at the time, but I often had conversations like this:
Senior teacher: Could you also take on English?
Me: But I don’t teach English.
ST: But you can speak it, so what’s the problem? It would help us a lot.
Me: Well, OK I suppose. Will the Head of English give me the work to cover and the resources?
ST: Of course!
I suspect that in each case the ST had their fingers crossed behind their back, because a lot of the time I had to devise the work myself. That wasn’t too bad, but knowing how best to assess it and give feedback was another challenge. I worked hard, so the kids were not disadvantaged, but it would take hours and hours. If only there had been a ChatGPT to give me a good head start.
Read Miguel’s post here:
Comment Hoist: Terry Freedman. Also, AI Assessment of Student Work
I published the following article in April 2019. It’s still relevant. Scroll to the end to see additional links.
Asimov’s computer teacher
The science fiction writer Isaac Asimov wrote a story, ‘The Fun They Had’, set in the year 2157, in which a boy discovers a ‘real’ book. From it, he learns that in the 20th century, kids weren’t taught by a robot but by a real person in a place called ‘school’. He and his friend muse on the fun those long-ago children must have had. So, how far off was Asimov in his prediction – and should we take heart from Arthur C Clarke’s observation that any teacher who could be replaced by a machine probably should be?
An insight from a recent McKinsey report is useful. In ‘Harnessing automation for a future that works’, the blog post about the report makes the point that: “The effects of automation might be slow at a macro level, within entire sectors or economies, for example, but they could be quite fast at a micro level, for individual workers whose activities are automated or for companies whose industries are disrupted by competitors using automation.”
That’s a profound statement. It is highly unlikely that when you return to school from the summer break you’ll be handed your P45 on the grounds that a robot has taken your place. But what we are likely to find is that some of the jobs teachers do are taken out of their hands, either completely or partially. Let’s consider a few possibilities.
Automatic marking
To reduce the workload arising from marking, many teachers set tests that mark themselves. There is usually nothing artificially intelligent about this, but it’s worth considering two things. First, by using self-marking tests you are showing your willingness to hand over decisions to a computer. True, you’re the one who set the test and told the software what the correct answers are, but as a matter of principle you have taken that first step.
Secondly, experts in this field are confident that computers can be taught to mark essays and creative writing. To do so, someone would have to ‘show’ the computer hundreds of essays already marked by teachers. It would then work out for itself what to look for, and could go on to mark essays unaided.
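To make the ‘show it marked examples’ idea concrete, here is a deliberately tiny sketch I’ve written for illustration – not any real marking product. It gives a new essay the mark of the most similar teacher-marked essay, with similarity measured by crude word overlap. The function names and sample data are invented, and real systems use far richer models trained on hundreds of examples:

```python
def similarity(essay_a, essay_b):
    """Crude similarity: the proportion of words the two essays share
    (Jaccard overlap). Real systems use far more sophisticated features."""
    words_a = set(essay_a.lower().split())
    words_b = set(essay_b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

def predict_mark(marked_essays, new_essay):
    """Give the new essay the mark of the most similar teacher-marked one.
    marked_essays is a list of (essay_text, mark) pairs."""
    best_essay, best_mark = max(
        marked_essays,
        key=lambda pair: similarity(pair[0], new_essay),
    )
    return best_mark
```

So, shown the marked pair (“the cat sat on the mat”, 5), the sketch would award 5 to “a cat on a mat”, simply because those two texts share the most words – a caricature of the pattern-finding that the experts have in mind.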
Assessment: the bigger picture
One of the challenges teachers face when trying to work out whether a pupil knows something is balancing the requirement for lots of evidence with the need to reduce workload. Detectives have a similar conundrum when faced with a crime scene: which aspects of it are significant? Are there similarities between this crime scene and others that have been recorded? Enter an AI project called VALCRI, in which the computer can carry out the equivalent of 73 searches through the evidence in one click of the mouse, and even knows that words such as ‘scruffy’ and ‘dishevelled’ mean pretty much the same thing.
Perhaps in the future similar systems will be available in education. Imagine being able to present the computer with a pupil’s e-portfolio, essays, project notes and annotated programs, and be rewarded with an instant analysis of her ‘computing capability’. At the very least, AI could be used to suggest the student’s weak and strong points – of course, how to convey that to the student and scaffold appropriate learning activities would still be up to you, the teacher.
On hand to help
At Georgia Tech – a public research university in Atlanta, Georgia – one of the teaching assistants the students find very helpful is Jill Watson. She is always in the online forums, and answers students’ questions quickly and accurately. What the students are not told during their course, however, is that Jill Watson is a ‘bot’: a piece of AI software. Imagine having your own Jill Watson in school, to answer parents’ and students’ ‘low level’ questions, freeing up the admin staff to deal with more serious matters.
Another interesting area of research was reported in an article in The Guardian in May 2017, which described the development of a ‘bot’ that could respond to statements in different modes, such as liking or disgust. Could such a piece of software eventually find its way into support lines, to give initial advice, at least, to help a child in distress?
A mixed reality?
There are apps that can bring to life subjects like astronomy and chemistry (Augmented Reality), while Virtual Reality can put a student right in the centre of a different experience. Is one better than the other, and more likely to gain traction in education? Well, perhaps the most intriguing development, and the most likely, will be the merging of these different approaches.
Work is already under way here. For example, a Holocaust survivor has been filmed in ‘3D video’ answering 1,800 questions. Imbued with AI, the resulting ‘hologram’ can answer questions from students appropriately, by working out what it has been asked.
Meanwhile, Pearson is developing a holographic nurse that will be both artificially and emotionally intelligent, to be viewed through Microsoft’s HoloLens (a headset that projects 3D images into the space of the room you’re in, meaning that you can walk around them). The idea is that the Pearson nurse will be able to respond to questions just like a real one.
It’s all very exciting; but it’s worth considering whether the biggest challenge facing educationalists might now be how to ensure that the technology works for us, and with our full knowledge. What we probably need is an education ethics committee.
Thinking machines
Artificial intelligence continues to develop by leaps and bounds. What makes it very different from – and much more exciting than – early attempts at creating computers that think is what has come to be known as ‘deep learning’. Whereas before, programmers would have to load the computer with every possible scenario they could think of, software developments have led to computers learning on their own. In a very small nutshell, simply by exposing it to a situation, such as driving along a road or playing a game, the computer can figure out the rules by itself. And what this means is that it can handle a situation that wasn’t foreseen, and therefore not included in the program.
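That ‘figuring out the rules by itself’ can be sketched with the simplest learning-by-trial-and-error setting, a so-called multi-armed bandit. This toy example is mine, not from any of the research above: the program is never told which of three slot-machine arms pays best – the payout probabilities are hidden from it – yet it discovers the best arm purely by playing:

```python
import random

def play(trials=2000, seed=0):
    """Learn which arm pays best purely from experience."""
    rng = random.Random(seed)
    payout = [0.2, 0.5, 0.8]          # hidden from the learner
    counts = [0, 0, 0]                # how often each arm was pulled
    values = [0.0, 0.0, 0.0]          # running estimate of each arm's payout
    for _ in range(trials):
        if rng.random() < 0.1:        # occasionally explore at random
            arm = rng.randrange(3)
        else:                         # otherwise exploit the current best guess
            arm = values.index(max(values))
        reward = 1 if rng.random() < payout[arm] else 0
        counts[arm] += 1
        # update the running average estimate for the pulled arm
        values[arm] += (reward - values[arm]) / counts[arm]
    return values.index(max(values))  # the arm the program believes is best
```

Nowhere in the code is the answer written down; after a couple of thousand plays the estimates settle around the true payout rates and the program reliably picks the most generous arm. Scale the same principle up enormously and you have a system that can learn to drive along a road, or play a game, without every scenario being programmed in.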
This article originally appeared in Teach Secondary magazine.
See also Introducing ChatGPT, and join in the discussion.
If you enjoyed reading this, why not subscribe to my newsletter, Digital Education, for news, views and reviews?