Look, I'll be the first to admit that I love automation. Or, perhaps more accurately, I love the idea of automation. Automated marking? Count me in! Artificial Intelligence to help teachers? Love it.
The trouble is, automation never comes up to the standards achievable by human beings. I'll give a few examples to show what I mean.
Splicing
I used to love making and editing films -- of the 8mm variety, long before video was an option for anybody. I was especially good at splicing one bit of film to another. The need arose either because I had to cut out a bit of filming that went wrong, or for creative purposes.
The process involved putting each end of film into an editor. This had a magnified viewing screen and a low-level light so that you could see the film without your eyes falling out. You then had to grind down each end of film, apply special glue to the ends, and then put one end on top of the other in order to stick them together.
The photo below shows a typical film editor of the period. The small metal piece on the right is the splicer, where you would grind the ends down and glue them together.
It was a very delicate process. If you didn't grind down the film ends enough, the join would be too thick and would probably jam up the projector. If you overdid it, the join would be too thin, and would probably come apart as it was being pulled through the projector. If you got the glue anywhere on the film except in the join, it would show up on the screen as a splodge, even though the film would be going through the projector at the rate of 24 frames (individual pictures) a second.
I was pretty good at this, though I say so myself. Let's put it this way: I went to an amateur film competition showing once, and my splicing was way better than that of some of the finalists. You could see their joins; you couldn't see mine.
The thing is, I was using a manual splicer. That is, the grinding down was done by hand. It required not only seeing what you were doing, but feeling it too. I was reminded of this by a radio programme I was listening to: someone whose job was shaping sheets of steel said it was a matter of hearing the change in the sound of the machine as much as looking at what you were doing.
After several years, the splicer needed replacing, so I thought I'd treat myself to an automated one. Yes, it was quicker, but the results were noticeably inferior.
Automated blogging
A couple of years ago I experimented with creating an automated news feed on my blog. It worked, in the sense that it was quicker than manually writing out blog posts -- but only if I was happy with the results, which I wasn't. I discovered, as I wrote at the time, that writing a decent blog post of 50 words doesn't take a tenth of the time it takes to write one of 500 words. It takes more like half the time, by the time you've tweaked and fiddled. So once again, automation by itself gives an inferior result.
Automated content sharing
I've experimented with various forms of automated content sharing, using apps like IFTTT (If this then that). For example, my blog posts are shared on Facebook.
That's great: it's a wonderful time-saver. Unfortunately, it looks automated. Each share is prefaced by the words "Here's a link to an article I wrote". The first time you see that, you'll probably think I sat there at that moment and typed it. But as soon as you see the same words again -- and again and again -- you'll probably twig that I'm using some form of automation.
Once again, automation saves time, but to get the best result requires human intervention.
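To make the point concrete, here is a minimal sketch of one way such sharing could be made to look less robotic -- rotating the preamble instead of repeating the same phrase every time. This is not how IFTTT actually works internally; the templates, function names, and post titles are all invented for illustration.

```python
# A hypothetical sketch: vary the preamble of an automated share so
# repeated posts don't all start with identical wording.
# Templates and titles here are invented for illustration.

TEMPLATES = [
    "Here's a link to an article I wrote: {title}",
    "New on the blog: {title}",
    "I've just published: {title}",
]

def share_text(title: str) -> str:
    """Pick a template deterministically from the title, so the
    wording varies from post to post without manual intervention."""
    index = sum(map(ord, title)) % len(TEMPLATES)
    return TEMPLATES[index].format(title=title)

print(share_text("Automation and its limits"))
print(share_text("Why I still splice by hand"))
```

Even this small tweak is human intervention of a sort: someone still has to write the templates, which rather proves the point.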
I've found this with every kind of ed tech I've used. It seems to me that we face a choice when it comes to automating processes, at least right now. Perhaps AI will develop to the point where it can tweak automated processes in the way a person can, and be inexpensive enough for ordinary people to be able to afford it. But right now the choice is: best result vs quick result. The best result is labour- and time-intensive. The quick result has a much lower variable cost (in terms of time and labour). You have to decide for yourself whether or not the quick result is good enough.
For example, returning to the idea of automated marking, I felt as a teacher that being able to set a quick test every week and have it marked with no input from me once I'd actually created the test was fantastic. The tests wouldn't have won any awards, but they were good enough to give both my pupils and myself an idea of what they'd learnt. It enabled me to very quickly pick up pupils who were falling behind, or who were surging ahead, in almost real time. That is, I was able to respond very quickly, as opposed to discovering the situation several weeks or months later.
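The kind of marking that works well automatically is the simple, objective kind. Here is a minimal sketch, assuming a multiple-choice quiz with a fixed answer key; the pupil names, questions, and threshold are all invented for illustration.

```python
# A sketch of automated quiz marking: compare each pupil's answers
# against a key and flag anyone scoring below a threshold, so the
# teacher can respond quickly. All data here is invented.

ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}

def mark(responses):
    """Return the number of correct answers for one pupil."""
    return sum(1 for q, a in responses.items() if ANSWER_KEY.get(q) == a)

def flag_pupils(class_responses, threshold=0.5):
    """Return (pupil, score) pairs for anyone below the threshold."""
    flagged = []
    for pupil, responses in class_responses.items():
        score = mark(responses)
        if score / len(ANSWER_KEY) < threshold:
            flagged.append((pupil, score))
    return flagged

class_responses = {
    "Alice": {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"},
    "Bob":   {"Q1": "B", "Q2": "A", "Q3": "C", "Q4": "D"},
}
print(flag_pupils(class_responses))  # → [('Bob', 1)]
```

Matching an answer against a key is trivial; judging an essay is not, which is exactly where this approach runs out, as the next paragraph describes.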
On the other hand, automated marking had not reached the point where I felt I could set essays or even short answers and have them marked automatically. (As an experiment, I found a website that generated an essay using a whole load of fake references and meaningless but academic-sounding phrases. I put the essay through an automatic essay marker, and achieved a B+ for my 'efforts'.) Therefore, when it came to mock exams, I assessed the work the hard way.
At the moment, automating ed tech will yield a good enough result in most situations. What it won't do is yield the best result.