Some time ago, quite soon after I moved into an ITE job at the University of Southampton, I posted my thoughts on the relative merits of university-led and School Direct training routes. Looking back now, I would summarise the post as essentially suggesting that there were some advantages to SD that universities ought to have sorted out ages ago, but that there are problems with SD too, and all sorts of quality-control issues. A surprising number of people seemed to think I got that about right.
Since then things have progressed and, for example, I’m now doing a bit of tutoring for Teach First, so I understand that programme much better than before. I’ve worked with trainee teachers from a SCITT as well. And, in general, I have just seen more of the system, its triumphs, and its disasters. At the same time, we’ve been going through a series of desperate measures by the NCTL to boost recruitment (bursaries at SLT salary levels, abandonment of allocations, QTS on the side for undergraduates) and denials of the blindingly, bloody obvious from the DfE about current recruitment and retention levels. So things have progressed, but I remain worried about the future of ITE in this country.
The National Audit Office report was much-needed and I strongly suggest anyone with an opinion on ITE at the system level should read it. However, this post is prompted as much by what’s not in the report, as by what is.
Everyone in ITE – university tutors, school-based training co-ordinators, TF leadership development officers, and particularly the many individual mentors who are generally adding the demands of mentoring on top of their own teaching workloads with very little compensation – is working like Trojans to deliver the best possible training for new entrants to the profession, but it is all made so much more difficult by the lack of any stability in the system. If anyone has the impression that the DfE have a carefully thought-through plan, are proceeding intelligently, and are properly evaluating as they go, then I haven’t met them yet. Sure, there is a policy direction, but that’s not the same thing at all. The NAO report states “The Department… does not yet have sufficient information about long-term costs and the extent to which each route, and increasing schools’ role in the process, has improved teaching standards”. I think that’s very generous!
On the other hand, powerful people associated with the DfE are hardly unique in spending some time getting the feel of things, thinking they have the answer to making a significant improvement, and then ploughing ahead with lots of determination and not much sensitivity to feedback. Read The Blunders of Our Governments for further insight!
In a better world, which questions would it most help to know the answers to in ITE? Here is my current list:
1. Which training routes, or aspects of training, tend to produce the best teachers?
2. Which training routes, or aspects of training, tend to produce teachers who stay in teaching?
3. For both the above questions, what is the answer in absolute terms, and what is the answer when looking at value-added?
4. What elements of the various selection processes correlate with successful outcomes?
5. How much does each training route actually cost the taxpayer?
There are plenty of people who will happily pontificate on these, and probably provide an answer, but I’m yet to be convinced that anyone can back their assertions up with convincing evidence.
I believe Education Datalab are about to report on some aspects of Q2. That’ll be a great start! And from what I know about this project it has the potential to provide a permanent and rich source of information to relate training to retention and other aspects of early careers in teaching.
There have been a couple of commendable attempts to evaluate the impact of Teach First on children’s outcomes too but, as far as I know, that’s about it for teacher quality. Measuring the effectiveness of individual teachers is a significant problem, but identifying trends across large groups of trainees is certainly possible. Ofsted make some kind of attempt to measure absolute outcomes, but it’s based on a small number of single observations and some pretty arbitrary judgements, is almost certainly unreliable, and in the end all ITE is effectively graded on a two-point scale, so that’s not an awful lot of use.
Given how long universities have had to work on selection criteria, and the research expertise available in some of them, it’s a bit embarrassing that selection procedures haven’t been more thoroughly investigated. To be fair, though, medical schools are only just beginning to get their act together on this too, and the outcome metrics for doctors are probably rather simpler to sort out than for teachers.
Maybe we do know the answer to Q5. The NAO report contains the graph below but these are not simple calculations because trainee teachers’ cost impacts and benefits in schools are complex. The thing that puzzles me is that we pay schools for placements and, although they provide some training, £3000 per trainee seems like a very high net cost. Also, although I suspect TF is expensive, they must save about half a salary in most schools even with days out, lighter timetables, mentor remission etc. I would like to see details of the source analysis for this graph.
There are the beginnings of a project, in which I have a hand, to try to develop a value-added model of evaluation that can be applied to ITE. This is important because absolute measures are likely to assess the quality of successful applicants, and that’s definitely not the same as the quality of training. It isn’t going to be easy and, at first, it’s likely to be a bit ropey because the measurement of both initial potential and NQT teacher effectiveness is problematic. However, if we can get some momentum going, and perhaps tie it in with some of the work happening elsewhere like the Education Datalab project, then we just might be able to start to fill the gap the DfE don’t seem to be addressing. Let’s hope so. It would be a startling revelation if we could actually point at robust data and say “Look! This is working better than that – now let’s figure out why.” If we want an evidence-informed profession, finding out what really works in training teachers might be quite a good move. We certainly haven’t got any spare trainee teachers to break!