My worst IT training days #4: Too much admin on a course (Updated)
This article was first published in 2019.
The title of this article is not quite accurate. First, it refers not so much to a single day as to a whole course. Second, the course was actually quite good. Nevertheless, I believe the cautionary tale I’m about to relate justifies its inclusion in this series.
During the time that I was an ICT advisor in a Local Authority, the government’s education department of the day announced that there was funding available for digital skills training for teachers. (A quick word of explanation: The role of an ICT advisor involved working with schools to provide training, resources and other forms of support to teachers of ICT and Computing.)
The funding was to come from something called the New Opportunities Fund, so the training scheme ended up being referred to as “NOF training”. Mention that phrase to anyone of a certain age, and watch them turn white. Governments have a remarkable talent for failing to match good intentions with plans capable of realising them. (See The trouble with Government education technology initiatives.) NOF training was no exception.
The problem with it was simple: the money could not be used to train teachers in basic skills. Instead, it had to be used to train teachers in applying those skills in the classroom, that is, to use education technology at the “chalkface”.
Take a moment to reflect on this: what teacher is going to want to undergo training on how to use technology in the classroom if they can’t use the technology in the first place? In other words -- and this is purely my opinion of course -- the NOF training scheme was set up not to solve the problem it was intended to solve. Indeed, the only teachers I ever heard expressing satisfaction with the training were those who had signed up with one of the training providers that ignored the injunction and taught basic skills anyway. And that’s another thing: it’s my contention, born of experience, that the only approaches that meet official expectations are those devised by the mavericks who ignore the rules.
Given our lack of confidence in the NOF training scheme, the advisory team of which I was a member set up another training scheme to run alongside it. This took the form of an externally-certificated course which covered not only basic IT skills but also their practical application in the classroom. The course was broad, deep and therefore brilliant. The participants loved it and, as far as the aims, coverage and types of challenge were concerned, so did we.
Nevertheless, we declined to run the scheme again the following year. Why so? The amount of admin was nothing short of astonishing. There was a weekly assignment in addition to end-of-term assignments. Each one was quite detailed, with an equally detailed marking scheme to match. There was also a complicated method of totalling and averaging the assignment marks over time.
To help us cope with this avalanche of “paperwork”, I created a spreadsheet that handled all the averaging of marks and flagged up work that hadn’t been handed in. But that was a massive undertaking in itself, involving named cells, advanced functions and a spot of programming in Visual Basic for Applications.
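For what it’s worth, here is a minimal sketch of the kind of thing that spreadsheet had to do, written as a VBA worksheet function. The function name, and the assumption that each participant’s marks sit in a single row of cells, are my own inventions for illustration; the original was considerably more elaborate.

    ' Hypothetical sketch: average the marks in a range, treating
    ' blank cells as work that has not been handed in.
    Function MarkSummary(marks As Range) As String
        Dim cell As Range
        Dim total As Double
        Dim n As Long
        Dim missing As Long

        For Each cell In marks.Cells
            If IsEmpty(cell.Value) Then
                missing = missing + 1              ' flag un-handed-in work
            ElseIf IsNumeric(cell.Value) Then
                total = total + CDbl(cell.Value)   ' accumulate the marks awarded
                n = n + 1
            End If
        Next cell

        If n = 0 Then
            MarkSummary = "Nothing handed in"
        ElseIf missing > 0 Then
            MarkSummary = Format(total / n, "0.0") & " (" & missing & " missing)"
        Else
            MarkSummary = Format(total / n, "0.0")
        End If
    End Function

Entered on a marks sheet as, say, =MarkSummary(B2:K2), it averages whatever has been marked and flags the gaps. Multiply that by weekly and end-of-term assignments, weightings and a class full of participants, and you begin to see where the time went.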
I think this experience provides a very good lesson for anyone creating a course or a scheme of work. It doesn’t matter how brilliant the materials are, or how wonderful the experiences enjoyed by the course participants: if the assessment scheme is complicated and burdensome enough to collapse under its own weight, there will come a point, for some people, at which the cost outweighs the benefits. Yes, we all want robust assessment, assessment that is both valid and reliable. But out here in the real world, you sometimes have to make compromises.
Relying on the teacher’s or trainer’s professional judgement may sound too subjective, but apparently objective methods (like rubrics and complex marking schemes) usually end up being applied subjectively anyway.
This is because the longer, i.e. more detailed, the criteria are, the easier they are to apply, but the less meaningful they become. Once you start breaking things down into their component parts, you end up with a tick list of competencies which, taken together, may not mean very much at all. The whole is nearly always greater than the sum of its parts, so even if someone has all of the individual skills required, or has carried out all of the tasks required, the end result may still not be very good.
Conversely, someone may not be fully competent in every area but still do a brilliant job of using education technology. So you end up having to use your own judgement about how to grade something, which is exactly what a marking scheme like a rubric was meant to avoid in the first place. To put it another way, if the criteria are too "locked down", assessors end up introducing their own interpretations in order to reach a "correct" conclusion.
I’m pretty certain that, had the course we ran placed more emphasis on our professional judgement than on rigid adherence to the most detailed assessment scheme I’ve ever seen, we’d have run it more than just the once.