In this series I am looking at some classic science fiction stories and suggesting how they might be used as the basis for discussions in Computing (and related subjects) classes. The Computing Programme of Study in England gives plenty of scope for discussing the ramifications of developments in technology. The aims, in particular, include the following broad statements:
The national curriculum for computing aims to ensure that all pupils:
...
- can evaluate and apply information technology, including new or unfamiliar technologies, analytically to solve problems
- are responsible, competent, confident and creative users of information and communication technology
Spoiler alert: in order to discuss these stories, I have had to reveal the plot and the outcome. However, in each case I’ve provided a link to a book in which you can find the story, should you wish to read it first.
One of the well-known tropes in science fiction — especially the sort of sci-fi you see in comics and superhero films — is the mad scientist. The person who fancies himself (it’s usually a him) as ruler of the world and invents some dastardly device to aid in his devilish designs.
But in sensible science fiction, the scientists are often not so much mad or ill-intentioned as driven. Or, simply, their inventions and breakthroughs lead to unintended consequences.
*Flowers for Algernon* comes into that last category. Someone makes a biological breakthrough and finds a way to increase IQ. As always, it’s tried out on animals first, and seems to be a miracle cure. It’s a long time since I read the story, so the specific details of how this comes about are somewhat hazy, but a man of low intelligence undergoes the experimental treatment and his intelligence improves. The story is written in the first person in the form of a diary, and as the protagonist’s IQ improves, so do his spelling, thinking and writing.
He falls in love with his special-school teacher, and she with him, and they become research colleagues. But then he notices that the effects of the treatment on the animals are only temporary. They revert to their original state, and he realises the same is going to happen to him. It does, and that decline is reflected in the deterioration of his writing and his memory.
I realise that this has more to do with science than computing, but the common thread, I think, is the ethical one. Should there be some sort of ethical oversight committee to evaluate new developments, in much the same way as new drugs have to undergo extensive trials? And even if that were to happen, who would be qualified to sit on such a committee? Why would its members have any more insight into the possible consequences than anyone else?
We’re already in an undesirable situation with artificial intelligence. A machine-learning system comes up with a solution to a problem, and nobody, often not even its developers, can say how it did so. So you have what is, in effect, a black box making decisions that affect people’s lives.
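If you want to make that black-box point concrete for a class, a few lines of Python will do it. What follows is only a sketch: the loan scenario, the data and the model choice are all invented for illustration, and it assumes scikit-learn is installed. The point is simply that the model’s verdict is easy to obtain and hard to justify.

```python
# A minimal sketch of the "black box" problem. The loan scenario and
# all of the data are invented purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical applicant features: [income, existing_debt, years_employed]
X = rng.random((500, 3))
# Synthetic ground truth: roughly, approve when income outweighs debt
y = (X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

applicant = [[0.40, 0.55, 0.30]]  # one made-up application
print("approved" if model.predict(applicant)[0] else "refused")

# The verdict is a vote across 100 decision trees, each grown from a
# different random subsample of the data. There is no single
# human-readable rule the company could show the applicant to justify it.
```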
Under the GDPR, companies are supposed to be able to explain the logic behind automated decisions that significantly affect individuals. Well, good luck with forcing companies like Google to do that. But even if they wanted to comply, would the companies always even know?
I don’t have the answers to such questions, but I think they might be interesting to discuss with students. In my opinion, computing should involve wider considerations than just coding.