We don’t have very long to wait before the educational AI projects funded by the Department for Education are unveiled, if all goes to plan. (See AI teacher tools set to break down barriers to opportunity.)
The good news, of course, is that this is happening at all. When I worked at the Qualifications and Curriculum Authority we had three companies working on assessing pupils in ICT (information and communication technology). Some very interesting work came out of that, but the basic skills segment was tested in an Office-like environment that wasn’t Microsoft Office, for copyright reasons. That meant that the pupils had the additional barrier of learning a new environment.
That’s part of the bad news: will the AI tools developed as part of this project be sufficiently different from the AI already available that the difference itself becomes a barrier to use?
But the main thing that rang alarm bells for me was this:
“The prototype AI tools, to be developed by April 2025, will draw on a first-of-its-kind AI store of data to ensure accuracy – so teachers can be confident in the information training the tools. The world-leading content store, backed by £3 million funding from the Department for Science, Innovation and Technology, will pool and encode curriculum guidance, lesson plans and anonymised pupil work which will then be used by AI companies to train their tools to generate accurate, high-quality content.”
Questions arise:
Curriculum guidance:
What happens when the curriculum changes?
Will innovative curriculum developments by teachers be accommodated? For example, when I was a head of Computing and ICT I didn’t start from the National Curriculum Programme of Study. I developed a scheme of work that was a modification of one I’d had a hand in, called Infomatics, produced by an organisation called ACITT. My view, as I told Niel McLean (of the QCA) at the time, was that if the Programme of Study didn’t include the stuff in my scheme of work, it was the Programme of Study that needed to change, not my scheme of work!
Arrogant? Possibly. But I’m an expert in teaching Computing and ICT, so I know what I’m talking about.
Lesson plans:
Where do these come from? If developed by teachers, will those teachers (or the schools they work in) be paid, and their copyright acknowledged?
How good are these lesson plans? The Key Stage Three ICT Strategy’s lesson plans back in the ‘90s and ‘00s were like painting by numbers. They were full of instructions like “After discussing this for three minutes go on to slide 7 in the PowerPoint”. I wasn’t sure whether the aim was to enable anyone to teach the subject whether they knew anything about it or not, or to bore the kids into submission. The materials themselves were excellent — I knew some of the people who worked on them and so I wasn’t surprised. But the lesson plans? Oh boy.
Anonymised pupil work:
Will those pupils be compensated for the use of their work?
Will this pupil work consist of exemplars of ideal work, or will the whole range of possibilities be covered?
How will any maverick thinking be rewarded? For example, when I was teaching pupils about relational databases and then set them a short project, my expectation was that they would use a relational database in their solution. However, I also said that they could use anything they liked, as long as they could justify their choice. One group of pupils used a programming approach (Visual Basic), while another group set up a rather complex spreadsheet. Their solutions worked well, and they were able to back up their choices. Will the educational AI being developed be able to handle that sort of thing, or will it have particular approaches and solutions “in mind”?
Assessment:
Finally, will the software give pointers that pupils can learn from and that teachers can use in their feedback, or will it do the work for the pupils? I tried using AI to mark a deliberately provocative essay, and it gave me a lecture about how morally wrong I was and then wrote the “correct” answer for me. As it happens, the pretend essay I wrote put forward a legitimate economic argument, and morals didn’t come into it. Maybe that in itself was the problem, but I neither expected nor wanted the AI to make any judgements in that regard. The experiment was useful in that it showed up the in-built biases in the software. But as a useful tool for assessment? Not so much.
You may also find this article interesting. It has a section on using AI for assessment:
The future of AI in Education: notes on a Westminster Education Forum Conference