AI-generated content: does it matter how articles are written?
The BBC used artificial intelligence to generate reports on the UK General Election results. The AI “wrote” an article for most of the constituencies as the results came in, apart from a few. Human editors checked the articles and sometimes added a bit of context.
Gary Henderson, a teacher and Director of IT in a UK school, asks a very interesting question: does it actually matter whether an article like this is written by an AI or by a human?
I think it does make a difference, for several reasons.
First, AI of this kind can at present only create articles based on data. The statistics are presented to the reader, and that is pretty much that. As far as I can see, each article has been written in the same way, with the facts presented in the same order. For example, the name of the winner is given first, followed by the number of votes by which they won.
But how does the AI know to do that, and why are some data cited and not others? For example, in Kensington, the Conservatives won by 150 votes. But how many people voted for them altogether?
Obviously, human writers will make a decision, or use a style guide, to dictate the answers to questions such as these. My concern is that we can easily hold people to account should they present the data in a way that misleads. But how do you question an algorithm that (a) has become so established and embedded that nobody really knows how it works, and/or (b) is (probably) not in the public domain, for intellectual property reasons?
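To make that concrete, here is a minimal sketch, in Python, of how a template-driven generator of this general kind might work. It is my own illustration, not the BBC’s actual system: the templates are invented, and the data follows the Kensington example above (the winner’s name is added for illustration). The point is that the “editorial judgement” about which facts appear, and in what order, is hard-coded by whoever writes the templates.

```python
# A minimal, hypothetical sketch of template-driven election reporting.
# My own illustration of the general technique, not the BBC's actual code.

from dataclasses import dataclass

@dataclass
class Result:
    constituency: str
    winner: str
    party: str
    majority: int        # the number of votes by which the winner won
    previous_party: str  # the party that held the seat before

def headline(result: Result) -> str:
    # One hard-coded editorial rule: "hold" versus "gain".
    verb = "hold" if result.party == result.previous_party else "gain"
    return f"{result.constituency}: {result.party} {verb} seat"

def opening_line(result: Result) -> str:
    # Same facts, same order, every time: winner first, then majority.
    # Note what never appears: the total votes cast for each party.
    return (f"{result.winner} has won {result.constituency} for the "
            f"{result.party} Party with a majority of {result.majority:,} votes.")

kensington = Result("Kensington", "Felicity Buchan", "Conservative", 150, "Labour")
print(headline(kensington))      # Kensington: Conservative gain seat
print(opening_line(kensington))
```

The decision to report the majority but omit the totals cannot be cross-examined; it is simply what the template does.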
The headlines also sometimes assume prior knowledge that is not addressed in the article itself. For example, the headline for Ilford South reads:
Ilford South: Mike Gapes loses seat
That’s quite anodyne. What it does not tell you is that Mike Gapes lost his seat after 27 years, because he left the Labour Party. Unfortunately for him and others, people in the UK tend to vote for a party (or against a party) rather than a person.
On the face of it, that headline is very objective: it simply states a fact. But it hides a lot of background data (Gapes left the Labour Party after 27 years: why? You can read the answer on his website). And because the headline and the report omit that important background information, you could argue that the piece is not objective in the sense of being impartial, a point made (in a different context, obviously) by Christopher Goering and Paul Thomas (Eds) in Critical Media Literacy and Fake News in Post-Truth America (Amazon affiliate link).
Another issue is that these articles are mind-numbingly boring, which I’m actually rather pleased about. Why? Because as long as AI can’t write creatively, people like myself who earn money from writing have little to fear.
Indeed, I think a partnership could work really well. In the context of education technology (the area in which I write the most), I’d love to have a bot that could trawl through online archives, pull out some great stats and then write a report presenting the data. I could then analyse that data, adding colour, context and nuance to the basic report generated by the AI.
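In outline, such a bot is just a short pipeline: gather the records, compute the statistics, state them flatly. Here is a hypothetical sketch; the sample records and field names are invented purely for illustration, and a real bot would trawl online archives rather than read from a hard-coded list.

```python
# A hypothetical sketch of the stats-gathering bot described above.
# The sample records and field names are invented for illustration;
# a real bot would pull them from online archives.

import statistics

sample_records = [
    {"school": "School A", "ict_spend_per_pupil": 212.50},
    {"school": "School B", "ict_spend_per_pupil": 180.00},
    {"school": "School C", "ict_spend_per_pupil": 305.75},
]

def basic_report(records: list[dict]) -> str:
    # The bot's whole job: pull out the stats and state them, nothing more.
    spend = [r["ict_spend_per_pupil"] for r in records]
    return (f"Across {len(records)} schools, median ICT spend was "
            f"£{statistics.median(spend):.2f} per pupil "
            f"(range £{min(spend):.2f} to £{max(spend):.2f}).")

print(basic_report(sample_records))
```

Everything the code produces is the bare-bones report; the colour, context and nuance would still be mine to add.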
Getting back to Gary Henderson’s question, I suppose the answer is as follows. If you just want the basic facts, the bare bones, then perhaps it doesn’t make a difference. The AI can do a good job of reporting election results, football scores and weather patterns. You don’t need writing skills to be able to do that, only research skills. So ultimately, if the AI can’t get the data for itself (if it isn’t available online, say), then all you need is some low-paid research assistants to obtain the data and feed it in. It could lead to fewer jobs for journalists and more opportunities for interns.
It’s amusing and interesting to note that Jonathan Swift made the same point, that writing skills are not needed if you have a writing machine, nearly 300 years ago in Gulliver’s Travels (1726). Gulliver, having met a professor who has invented just such a machine, says:
“Everyone knew how laborious the usual Method is of attaining to Arts and Sciences; whereas, by his Contrivance, the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politics, Law, Mathematics and Theology, without the least assistance from Genius or Study.”
However, if you want reporting that can bring the data alive, place it in a wider context, and elicit an emotional response or connection from the reader, then it does make a difference. If you don’t believe me, you just have to compare the late sports writer Hugh McIlvanney’s depiction of cold weather with a factual report. The latter might have stated something like:
“The temperature was 2 degrees Celsius but, because of the wind, felt like minus 6 degrees.”
In McIlvanney’s hands, that sort of thing was transformed into:
“It was the kind of wind that stripped the flesh off your bones, then came back for the marrow.”
The difference is that in the latter you can almost feel the cold. And that, to me, is the difference between an AI-written article and a human-written one. To put it another way, I think Arthur C. Clarke’s observation about teachers vis-à-vis teaching machines may be applied here, namely:
Any writer that could be replaced by AI probably should be.