Linking ITT and workforce data: a step in the right direction

I had the great pleasure of meeting Becky Allen back at the beginning of the year for a bit of a discussion about the work Education Datalab were doing on matching teacher training records to the School Workforce Census. I suspect a pretty monumental amount of effort has gone into nailing down the final details since then but two of the three linked reports are now published. I suggest you start here to either have a quick look at the key findings, or to access the full reports. So far I’ve just read the NCTL one.

It is immediately apparent that this is something the DfE ought to have done years ago. There is a lot of talk of evidence-based policy-making but any kind of genuine commitment to such a thing would have seen this sort of data-analysis set up prior to the seismic changes to ITT that have been implemented since 2010. Hey-ho; better late than never.

In theory this methodology could be used for a much longer-term project that might start generating some really useful data on the impact of various approaches to training teachers. It is easy to pick up this work and think it is limited to evaluating structural issues about ITT routes but if you consider the richness of a data set that can pretty much link every teacher in the maintained sector back to their ITT experiences, there is almost unlimited potential. Inevitably, for ITT providers, there is a pretty steady (and self-selecting) drift out of contact over the years after qualification. This work potentially solves that problem for research on any aspect of ‘what works’ in ITT. That’s something for the future; what of the findings here?

It would be tremendously easy for a lot of people in ITE to say “I told you so” in regard to the Teach First retention figures. Actually, I think the useful questions are more subtle than that but figures first. Using the lower-bound numbers, traditional HEI-led routes have about 60% of those initially recruited working as teachers in the maintained sector in their third year after qualifying. SCITTs are higher at 70% (but these would have been the early adopters). School Direct hasn’t been running long enough to have figures. Teach First is under 50%.

[Figure: Education Datalab retention graph]

However, there are several things to remember about Teach First. Their qualifying year involves teaching potentially difficult classes, mostly in schools with more challenging behaviour, with variable levels of in-school/in-class support, whereas university-led trainee teachers are supernumerary, on reduced timetables, and working in a wider range of schools – rarely those in a category or graded 3. Teach First participants are also possibly more likely to continue working in more challenging schools, although that is an assumption I would want to see data on, because certainly some participants move from TF schools to schools at the opposite end of the socio-economic spectrum.

There are also a few things to remember about HEI-led courses. Financial survival, and the need to make up the numbers across all the shortage subjects, probably mean that in these subjects the HEI-led cohort has a longer tail than for any other route. SCITTs may have some of these pressures too but, particularly in the years covered by this report, are likely to have had the opportunity to be more selective. I suspect it’s the other way round for subjects like PE, English and history, where the larger scale of HEIs generates a larger pool of applicants compared to SCITTs. Since shortage subjects make up the bulk of an HEI cohort, you would expect a lower qualification rate, and also some marginal grade 2s where support (or lack of it) in their employing school might determine success in their NQT year. As pointed out right at the beginning, the report can’t tell us anything about what would happen to the same trainee teachers if they were trained via a different route.

Teach First recruitment has been astonishingly successful. Having seen the marketing machine in action, and with access to funding that very few providers can match, that is perhaps not completely surprising, but it has been terrific nonetheless. This means they probably have the strongest cohort of all at the start of training. For me, the critical question to ask is: if Teach First training were more like the HEI-led route, or a SCITT, would there be hundreds more high quality teachers still in the classroom? There is no way to tell from this report but, anecdotally, the Teach First participants I have worked with would all have had excellent outcomes on the HEI-led course or School Direct programmes I mainly work on. What I don’t know is whether they would have gone into teacher training at all.

If Teach First is mainly putting people who would never have tried teaching into struggling schools with teacher recruitment problems, to do a decent job for two or three years, then that is probably a justifiable use of public money; if they are putting potentially high quality, long-career teachers through training in a way that knocks an additional 10-20% off retention, that doesn’t look so good. I suppose there might be other benefits; I’m unconvinced by these but make up your own mind. Sam Freedman sets out the most positive case here.

What about the other findings?

  • Three regions of England – North East, North West and South West – appear to have large numbers of new qualified teachers who do not join a state-sector school immediately after achieving QTS.
    • This is pretty good evidence that the NCTL need to sort out the Teacher Supply Model, but that was already very apparent. We are waiting on tenterhooks for the announcement on allocation methodology (so presumably they are desperately trying to invent something at the moment); let’s hope they don’t make another almighty cock-up!
  • Those studying on undergraduate with QTS courses have low initial retention rates in the profession, though we cannot know whether this results from subsequent choices made by the individual or recruitment decisions made by schools.
    • They do, but the data also shows they catch up later. I suspect that if you have a B.Ed., sooner or later teaching becomes the best option for a professional career, whereas PGCE holders have their undergraduate degree as an alternative option (depending a bit on the subject).
  • Teach First has very high two year retention rates, but thereafter their retention is poorer than other graduate routes.
    • I’m hoping, perhaps in vain, that the move away from QTS might link teacher development across from ITT into the first year(s) of post-qualification employment, and get a bit of the two-year TF programme effect into the other routes.
  • Ethnic minority teacher trainees have very low retention rates.
    • I suspect because they are much more likely to have limited experience of the UK education system if educated abroad, and are also more likely to be EAL, both of which, in my experience, can affect classroom relationships. It would be enormously useful to have data that separates UK and non-UK educated teachers and drill down a bit. In my part of the world, UK-educated BME applicants are thin on the ground but I don’t notice anything that would lower their retention rate.
  • Individuals who train part-time or who are older have much poorer retention rates, which may simply reflect other family commitments that interfere with continuous employment records.
    • UoS doesn’t do part-time. I have a hunch that retention might actually be better for older trainee teachers on our Science PGCE – they do mostly need a proper job to pay mortgages whereas younger trainees often don’t have that commitment. On the other hand, whilst they are nearly all tremendous people to work with, developing into a good teacher is partly about developing habits that are effective in the classroom and I think changing habits gets harder as you get older. It’s also a very fast-moving environment when you are a novice and again I think adapting to this gets harder with age. They are quite often particularly good at developing relationships with teenagers though, so it’s swings and roundabouts, maybe.

So those are my first thoughts. I think we have some way to go to get stable and effective initial teacher education that is structurally sound and therefore has the potential for continuous improvement. NCTL have tried quite hard to break what we had; now we need to take the best of the many pieces and put them back together again, hopefully to end up with something better than before. High quality evidence is a key part of this process, as are people in high places who are prepared to pay attention to it. This report is a very important step in the right direction.

 

 


Teach First recruitment presentation

Although I work in Initial Teacher Education, I don’t really know very much about Teach First. I’ve picked up a bit from bloggers and tweeters but am aware that sometimes I’m not clear who came through Teach First and who trained via one of the other routes. Laura McInerney (who also helpfully collated some relevant posts), Kris Boulton, Joe Kirby, Harry Fletcher-Wood (not absolutely sure but think he was TF) and Daisy Christodoulou are visible presences online and, whilst I certainly don’t always agree with some of these folk, there’s no doubting either their commitment to education or, for those still on the front line, their commitment to improving their own classroom practice. I’m also aware of some of the data: Teach First achieve similar five-year retention to other routes (and of course that’s in some tough schools); Ofsted have graded their provision Outstanding in all categories; there is some evidence that schools engaging with Teach First outperform those that don’t (although whether that reflects the impact of the teachers or of the schools’ leadership is uncertain); and the 7:1 applicant-to-trainee ratio and the academic level of trainees are encouraging (although Teach First have unique requirements around subject specialism that skew this figure significantly). Prior to last night, my views about Teach First were limited by having no direct experience of the programme, but were generally positive with a couple of concerns. The very last part of my blog on the Carter Review will show you where I was at, with a more specific comment about subject knowledge on Kris Boulton’s post here.

After attending the recruitment presentation organised by PhySoc (University of Southampton Physics Society) I am not so sure. Now, I appreciate that a recruitment campaign is not the same thing as a teacher training programme but I was genuinely shocked by the way Teach First was presented.

The first bit of the presentation was a clear description of the fundamental issue of educational outcomes for FSM pupils in England, followed by a call to arms. There was a slight sense of the privileged and academically able going into the slums to save the deserving poor, but I think that is probably justifiable salesmanship. What came next just wasn’t. In a fifteen-minute presentation I reckon the ‘classroom teaching’ bit got less than 30 seconds. Not once was there any suggestion that wanting to work with children was a prerequisite for teaching. Not once was there any suggestion that children might have anything to offer. In fact, really the only time that working in schools was dwelt upon was to flag up that in 15 years, 14 TFers had made headteacher (or it might have been 15 headteachers in 14 years), along with a statistic about the faster career progression for Teach First trained teachers. There was a fair bit of emphasis on the main details of the two-year Leadership Development Programme, though. I cannot believe that the two years it takes to go from novice to the end of the NQT year can be marketed as Leadership Development – wouldn’t Teacher Development be more appropriate? Maybe not when the details are as follows:

  • Six weeks residential training described as a cross between Freshers’ Week and something else (sorry can’t remember what but I guess it was less contentious).
  • Some teaching
  • Internships with major international companies and organisations like Deloitte, PwC, Goldman Sachs, the Civil Service Fast Stream… there was a list. I paraphrase, but: “This isn’t just about making the coffee, your leadership potential will be developed and you will have the opportunity to be involved in real projects during this time.”
  • A bit more teaching
  • Enrolment in a Learning Network – to develop your problem-solving skills
  • The opportunity to complete a Masters degree (sadly this is no longer free but is still heavily subsidised at £500/year). Bloody hell – that is cheap!
  • Become a Teach First ambassador. I think the possibility of continuing in education got mentioned here (presumably as some sort of senior leader, although I assume the reference to eight third-year teachers having headships is a typo) but mainly this was about the opportunities with Teach First Platinum Partners – that’s PwC, Goldman Sachs etc. again. More info on the Teach First website.

There was an overview of the programme on a little flow diagram; the two teaching bits were the same size as each of the other bits. Do I need to say any more?

Anyway, I’m pretty clear that one recruitment presentation isn’t a basis on which to judge a programme that puts large numbers of keen new teachers into schools in socio-economically deprived areas, but it has given me pause for thought. This presentation was absolutely clearly suggesting that a couple of years of teaching was a way to get a massive leg up on the career ladder. This is the criticism that has been consistently levelled at Teach First and now I know why. I’m perfectly happy with teachers having alternative career options; I’m not happy about this short-selling of teachers and teaching.

 

 

Is Ofsted helping to improve ITE? Part II

My first post on Ofsted inspections of ITE set out where I am coming from, and considered the purpose of these inspections. It concluded:

“So if Ofsted were to step back from reporting on good practice, and if the difference between Grade 1 and 2 (over 80% of providers) has a rather arbitrary effect on available provision, that leaves Ofsted as an effective enforcer of absolute minimum standards and a possible pressure, and possible guide, to improving the quality of training. The former role requires reliable differentiation between Grade 1/2 and Grade 3/4; the latter two require valid measurement of training quality, and the ‘guide’ bit requires accurate identification of strengths and weaknesses. In this second post, I’ll try to dig into the issues of reliability, validity, and accuracy that my original comment alluded to.”

For those of you who are not aware of how an ITE inspection works, the call comes first thing Thursday, with the inspection starting on Monday. For the two years the previous Framework operated, this could be at any point in the academic year. The inspectors look at statutory requirements; data (on outcomes and tracking of progress, and NQT surveys about their training); observe training sessions (if there are any); observe trainees teaching to get an idea of their progress, to look at the quality of mentoring, and for evidence of good training showing in their teaching; and observe NQTs (and maybe RQTs) teaching to judge the quality of the final product. That’s my summary, for more information check the Handbook.

When we were inspected, secondary trainees were observed right at the beginning of their second placement, i.e. day 3, so the ones affected only found out with a weekend’s notice that not only were they going to be teaching a class on Monday, in a school they didn’t know, but it was going to be with an Ofsted inspector observing. I thought that was an unacceptably awful thing to do to trainees. The inspection team handled it sensitively but I just felt grossly unprofessional about the whole thing. It’s less important, but clearly there is also a major issue with reliability here. How can inspectors reasonably be expected to judge trainee progress if one lot are observed in their first placement, others on day 3 of their second placement, and at another provider they are observed after many weeks of teaching?

This is one of the main drivers behind my original comment about the way in which Ofsted inspects ITE. However, under the new framework this has been sorted out. Hurrah! The changes are probably best summarised in the revisions to the framework but under the newest framework, there will be a summer inspection which will include observation of training sessions (if there are any) and trainees teaching; and an autumn inspection which will focus on observation of NQTs (and maybe RQTs) teaching. There is still an issue with HEI courses finishing placements at Whitsun, and SDs going to the end of the summer term, and some weeks having training sessions, and some weeks none, so I do think Ofsted really need to get calendar info before setting dates if they want to improve reliability by comparing like with like, but it is a solid step away from ‘dreadful’ and actually I think quite a bold and imaginative idea.

The second thing that really upsets me about Ofsted is the pressure on ITE providers over grading trainees. Under the Grading Descriptors on p.33 the Handbook states that for Grade 1 or Grade 2 ITE providers “all trainees awarded QTS exceed the minimum level of practice expected of teachers as defined in the Teachers’ Standards”. That word ‘exceed’ is critical; in other words, if any trainee gets a Grade 3 then an ITE provider Requires Improvement. I think this is probably a remaining ripple from the big splash caused by the changing of Grade 3 from ‘Satisfactory’ to ‘Requires Improvement’. At Grade 3 a trainee meets the Teachers’ Standards and therefore will be awarded QTS but where once this was Satisfactory, it no longer is. Providers certainly ought to be trying to provide extended placements with extra support to reach Grade 2 before gaining QTS but it also ought to be acceptable for providers to work hard to support Grade 3 NQTs in schools. At the moment, this is a very dangerous strategy because a Grade 3 might not go into teaching (but will still be in the data, and qualified). The incentive to find some spurious evidence and chance upgrading them before awarding QTS is obvious. We have taken the right approach at my university; I will be fuming if that comes back to bite us.

Of course, the alternative is to find some spurious evidence and fail them. If we are really saying that we don’t want these trainees in the profession then, fine, but the Teachers’ Standards and/or award of QTS needs changing to reflect the standard required. Don’t just tell providers that Grade 3 meets the Standards for QTS but it isn’t acceptable to let anyone at this standard be awarded QTS. And, of course, completion rates are significant data in an inspection. Just like exclusion rates for schools, high completion rates might demonstrate excellent recruitment and training, but they could also reflect over-grading and low standards. Good recruitment decisions obviously help with completion rates but where is the evidence that there is a reliable way to discriminate all the potentially good teachers? Where are the science and maths teachers we need going to come from if we only take dead certs?

Anyway, those are the two points that led to my labelling ITE inspections ‘dreadful’, so it’s one down and one to go for Ofsted on fixing these. I will now try to get some perspective on the issues with reliability, validity, and accuracy, promised at the start of this post.

So here are some of the reliability problems with ITE inspections:

  • Even under the new Framework, inspectors are likely to see different things at different providers depending on when in the summer term they visit. This is not easily resolved but I would like to see Ofsted acknowledging the challenge, at least.
  • The amount of training observed is likely to be tiny (if any). I think the danger of a poor session from one trainer tarring the whole course with the same brush is too high.
  • NQT observations are attempting to evaluate the quality of the finished product. There is no mention of individual lesson observation grades in the Handbook but our inspection team saw only seven secondary NQTs which leaves an awful lot riding on those individual performances. Hopefully the two-part inspection will increase this number but there is nothing in the Handbook to reassure me that Ofsted are clear about how many are required to ensure reliability isn’t affected by random variation.
  • The same reliability issue affects any comparisons drawn between NQT quality when observed, and grading of trainees at the end of training. The Handbook doesn’t appear to require this but it was a clear feature of our inspection (so maybe the framework has changed).
  • Any observation of NQTs is bound to be influenced by the quality of induction and training provided by the employing school, and their ability to pick NQTs that suit their school. Under the previous framework all schools involved would be in the ITE Partnership, so maybe that’s fair game; under the new Framework I’m not so sure that will be the case.
  • Observation of RQTs is hard to justify (although interviewing them about their training may well be appropriate), because so much will have happened in schools since training. Maybe this won’t be a feature of inspections but the Handbook is a bit ambiguous on this, the phrase used being “NQTs/former trainees”.

And here are some of the validity problems:

  • There is no evidence-based way to determine the standard of trainees at the start of their training; so any measure of the quality of outcomes will reflect not only the quality of training but also the quality of applicants. It’s not currently possible to measure ‘value-added’ but there is a sense that this is nonetheless what Ofsted think they are doing. Maybe the argument is that recruitment and training quality together are being evaluated but this is pretty advantageous for the providers with the best reputations who get more applicants. Is reputation really a variable that Ofsted want to include in their inspection outcomes?
  • Completion rates might demonstrate excellent training and support, but they could also reflect over-grading and low standards, as described above. ITE providers must, in the end, be gatekeepers to the profession – children are owed that.
  • The Grade 3 penalty means, as described above, that if the best ITE provider in the country correctly grades a trainee 3 and hasn’t sorted it before inspection then that one piece of data will count more than everything else combined.
  • The new framework places a big emphasis on behaviour. Inspectors won’t be seeing the training, only the performance of trainees and NQTs. What they see will depend an awful lot on context. The NQT having a ding-dong battle (that they will eventually win) with a truculent Y10 class could easily represent outstanding training, whilst the clockwork smoothness of another class might be due to smashing kids, or a trainee for whom good behaviour comes as easily as breathing.
  • The NQT Survey data depends a lot on responses and there is no mechanism for validating the data; our inspection was possibly triggered by a drop in the previously high ratings from this survey but that data was flatly contradicted by our exit point survey data so what happened remains an unsolved mystery.

Finally, on the subject of accuracy, inspectors are in for three days maximum; during this time they may be able to make a fair stab at judging the quality of the provider but I really don’t think that they can achieve a level of understanding that would allow an accurate description of not only what, but why, the provider was doing well in certain areas, or not so well. I think inspectors will tend to see strengths and weaknesses in the presence or absence of the things they value in ITE – confirmation bias at work – and I don’t think that is good enough evidence on which to build world-class initial teacher education.

I’m not actually saying that I think Ofsted ITE judgements are necessarily unreliable or invalid, I’m just saying that there are all these issues that are fairly obvious and I have no sense that Ofsted are engaged in worrying about these things. Maybe it is possible for an inspection team to accurately grade providers on a 1-4 scale, but I think it’s ambitious, and if these judgements aren’t right then Ofsted could be failing to correctly identify providers offering poor quality training, and they could be creating pressure to improve, and offering guidance, that doesn’t actually lead in the direction of genuine improvements – the problem we’ve been seeing in schools until recently.

There have been some very sensible suggestions that school inspections should move to a three-tier grading system and I think this would make sense for ITE. I’m not sure that trying to distinguish Outstanding from Good is terribly helpful whereas getting really effective at distinguishing Requires Improvement from ‘Good or Better’ is terribly important so we don’t have badly trained NQTs entering the system. And this brings me to the massive elephant in the ITE inspection room.


I’m very aware that the effectiveness of the established system of training teachers has been a moot point but it has at least been pretty stable. Now, ITE is going through a massive upheaval. SCITTs are sometimes, effectively, single schools, and SD alliances can be very small too, or dominated by one school. I’m certain some brilliant things will be happening but also sure there will be some disasters. A lot of this new training is, on paper, quality assured by HEIs or well-established SCITTs but SD has put schools in an exceptionally strong position to plough their own furrows. The chaotic nature of all this is entirely the doing of the DfE but it is Ofsted that are ultimately responsible for enforcing standards. SD should have been introduced more gradually but, given that the seeds were all cast at once, it needs a bit of germination time and there may be a few sickly seedlings that will produce excellent crops so it seems a bit harsh for Ofsted to get the hoe out straight away. For this reason, the complete avoidance of SD in our recent inspection is possibly justified, but Ofsted need to quickly be exceptionally clear about how they are going to engage with SD. In particular, I don’t think it is acceptable to lump SD and provider-led training together. Yes, a provider that allows poor quality SD to run on their watch needs to be pulled up on this, but unless somehow this drills down to the decisions made at school-level, providers will be held responsible for decisions made at the periphery of their control (even when their own training is excellent) whilst the school leaders who should have done better (or stayed out of it if they weren’t sure they were going to get it right) remain largely unscathed. If Ofsted tame the elephant, we might all come out of the SD revolution in some semblance of order and then be able to get on with the question of how to make our NQTs even better-prepared for civilisation’s most essential profession. If Ofsted don’t get this right, children will suffer.

 

Is Ofsted helping to improve ITE?

A short time ago, I wrote a post about the Carter Review, and my thoughts on the future for Initial Teacher Education. With one casual tweet, the education blogmeister, Tom Bennett, catapulted that post into the limelight (well, maybe into the wings) and several people were kind enough to tweet a smattering of applause, which has provided me with useful encouragement. Thank you.

Sean Harford is Ofsted’s Director, Initial Teacher Education and Regional Director, East of England. He responded to what was possibly not the most thoroughly considered part of my post, by extending an invitation to discuss the Ofsted ITE inspection process. This follows some fairly high profile meetings between senior Ofsteders like Sean, and Mike Cladingbowl, and people like Andrew Smith, Tom Bennett, Tom Sherrington, David Didau, Ross McGill, Shena Lewington et al.

What I actually said was “Do something about the dreadful way in which Ofsted inspects ITE (won’t go into details here but it really sucks)”. That is not terribly nuanced so I think the first thing I need to do is to clarify my own thinking about this. And since it is possible that the university I work for will become known, I should start by stating unequivocally that our most recent inspection, which was under what at the time we were calling the new framework but is now the old framework (i.e. the one that ran from September 2012 to June 2014), was highly professional, very well-led, produced a report which reflected strengths and weaknesses in our courses, and the grade was probably about right. I am making this statement partly to attempt to show that my thoughts on inspection of ITE are not just the rumblings of someone who feels his chips have been pissed on, and partly because the inspectors’ names are obviously on the report and I don’t want anything I say to reflect badly on them.

So, moving on from the preamble, it seems to me that the starting point for thinking about either the ITE Inspection Framework or the wider role of Ofsted in teacher training is to decide what the purpose of inspection is. At the moment, its primary function is to grade ITE providers and report on the strengths and weaknesses of their provision. What purpose does that serve?

Ofsted grading affects allocation of places to training providers; this is set out clearly for next year but is not new. This is pretty crucial; in schools the difference between grades might have some implications for SLT careers but only an Ofsted disaster usually leads to redundancies. In HEIs the difference between Grade 1 and 2 might well be the difference between financially viable or not, and therefore everyone’s jobs. The impact of the Grade 3 for the University of Leeds will be worth monitoring. This could all be seen as a drive to higher standards – sorting the wheat from the chaff – but this assumes both that the grading is reliable* (at least to within about 1/4 of a grade) and that Ofsted grading has a direct effect on the future of ITE provision (it doesn’t – ITE is much more precarious in a Russell Group or 1994 Group university than in an ex-teacher training college or SCITT because it’s not the main focus of the institution).

Secondly, Ofsted grading might affect trainee choices. I can’t produce any evidence to support this claim, but I think the most astute trainees probably do look at the Ofsted grade (and HEI reputation if relevant); it is difficult, though, to see how anyone not familiar with the system would correctly compare reports for HEIs, SCITTs, and SD lead schools. The less astute trainees are often thoroughly confused by the variety of training routes and have done shockingly little research before making their decisions, so Ofsted reports don’t have any impact on their choices, and even for the first group I think a lot of decisions are based on geography in the end.

Thirdly, within any given institution, there is likely to be pressure to aspire to an Outstanding grade (even if this pressure is not the same for every provider). This will drive standards up if, and only if, inspection outcomes make a valid measurement of the quality of training. In the end, the reliability* of the grade doesn’t matter for this but it does matter if Ofsted divert attention away from the quality of training towards other things that might influence the inspectors.

Finally, an Ofsted grade of Inadequate would lead to the removal of accreditation by the NCTL so Ofsted inspections have a role in setting a minimum standard. I don’t think there has been a Grade 4 since 2010 but a Grade 3 will lead to a further inspection within 12 months and might lead quite quickly to improvement or annihilation.

Actually, not finally, but it’s instructive that all my first thoughts were focused on the grading. An Ofsted report, of course, also identifies what the inspection team think are the strengths and weaknesses of the provision. If these are accurately identified then the report would be a useful guide to making genuine improvements; if these are not accurately identified then they become a ticklist of things to fix before the next inspection and may have no positive impact on the quality of training. And accurate or not, if the tutors don’t buy in to the conclusions then it will definitely be an exercise in papering over cracks, whether these are structural or cosmetic.

Ofsted also has a secondary role in identifying and reporting particularly good practice but I think these reports tend to be too superficial to do more than point out a direction – with the emphasis at the moment strongly focused on effective partnership. I guess there are some suggestions here for ways of managing partnerships that seem to be working but there isn’t the detail needed to understand why some partnerships work better than others. I think the danger with this secondary function is that ITE providers will start looking for “what Ofsted want” which has been the scourge of many schools and colleges, and we don’t necessarily want every provider running an EAL session in Hungarian, as Durham do, so maybe Ofsted should restrict itself to commenting on themes emerging from its inspections e.g. that the quality of partnerships is often an important difference between the more and less effective providers, with the best providers identified. These providers might then be in the best position to explain to the rest of us exactly what they have done, perhaps with UCET or the TSC etc. helping to co-ordinate this; I think that might be a more effective way of disseminating good practice and it matches the model that hopefully schools and teachers are moving towards, of taking professional responsibility for their own development.

So if Ofsted were to step back from reporting on good practice, and if the difference between Grade 1 and 2 (over 80% of providers) has a rather arbitrary effect on available provision, that leaves Ofsted as an effective enforcer of absolute minimum standards and a possible pressure, and possible guide, to improving the quality of training. The former role requires reliable* differentiation between Grade 1/2 and Grade 3/4; the latter two require valid measurement of training quality, and the ‘guide’ bit requires accurate identification of strengths and weaknesses. In my second post on this, I’ll try to dig into the issues of reliability, validity, and accuracy that my original comment alluded to.

 

*Yes, science teachers, I know this should be “reproducible” but this is social science, not GCSE Physics, so I’m going old skool.

 

The Carter Review and the future of ITT

With Tom Bennett giving evidence to the Carter Review of Initial Teacher Education here and here, which I hadn’t realised was forging ahead so quickly, I thought I probably ought to finally get round to writing a blog post that has been gestating in my head for a while. I don’t think it terribly likely that it will have any impact on the review process – although you never know which significant person might stumble upon it and find a fresh perspective helpful – so this is more about marshalling my own thoughts about the job I’ve been doing for a year now than pretending I have any influence.

I came out of the classroom and into ITT at what might well be termed “interesting times”. Although I think 30 million deaths will be avoided, there have been, and will be, more HEI tutors looking for new jobs as ITT becomes increasingly unappealing (mainly for financial reasons) to the VCs of many universities. So far I think Bath Spa, Keele, and OU have gone and Loughborough have jettisoned everything except PE. Leeds just got an Ofsted Grade 3 so must be worried, and I’ve heard that several other Russell Group PGCEs are hanging by a thread. However, there are loads of HEIs delivering PGCE courses (possibly too many), so the demise of a few may not matter for children and schools, but it definitely does matter when reviewing ITT, because a background of declining funding, staff cuts, and increased workload for those who are left isn’t the best starting point for improving the HEI side of things.

So does that mean that the road to glory lies with the School Direct (SD) model and/or School Centred Initial Teacher Training (SCITT)? The university I work for adopted SD early and keenly, on the back of a very successful Graduate Teacher Programme (GTP), so I have had a good chance to look at this option, warts and all. In terms of time in the classroom, SD does have more because it starts at the beginning of September and finishes at the end of July, whereas a typical HEI PGCE starts a week or two later and finishes before the end of June. If there was evidence that this was helpful then there would be nothing to stop HEIs from running their final placement until the end of term as well, except that they would almost certainly have to pay schools more and as noted above, money is a problem. Actually this isn’t the main difference between SD and a traditional HEI course, there are two differences that are much more significant.

Firstly, with an HEI course, there is a good opportunity in the first placement to make some fundamental errors and be nervous, uncertain, and moderately ineffective, before moving to the longer second placement, leaving those mistakes behind, and starting afresh. There is also the option of moving trainees during the longer placement if a mentor, department, or some other issue results in a stagnation of progress. Matching trainees to placements is really hard to get right and the variation in mentoring style (some nurture, others pull no punches) and department (some have detailed SoW, others expect teachers to design their own) means that sometimes a good trainee just fails to find a fit. With SD, although there is a short second placement, trainees really only get the one experience, and if the relationships with either classes or mentors don’t get off to a good start, those problems have to be fixed in situ. The GTP worked because the trainees were generally very robust and the small numbers meant that it was easier to ensure a good fit; SD trainees are not the same. Where the fit with the school is good, SDs get a great experience, but some don’t and that’s when the limited options become apparent.

Secondly, whilst some SDs have used a model whereby the trainees train alongside the HEI trainees, a lot of SD courses have most of the training in schools (the HEIs mostly focusing on the Master’s level work for PGCE). In the end, this was surely the intention of SD; to get trainee teachers out of the clutches of “those who can’t teach…” and who fill their heads with ‘progressive nonsense’, and into the clutches of successful schools who would ‘train them properly’. A lot of Teaching Alliances and Teaching Schools are doing a great job with essential training in e.g. SEN, PSHE, safeguarding, talking to parents, data etc. and classroom practice e.g. lesson planning, differentiation ideas, assessment. Some, unfortunately, are delivering good basic training but then thinking it’s all done and not pushing trainees once their teaching is satisfactory. Some are also doing a great job with behaviour management training but I’m afraid that some are not. Only a few are providing training that involves engaging with research, and high quality subject-specific training is a real problem. The behaviour management training is a problem where the school doesn’t have a strong, all-encompassing grip themselves – either because they don’t have to (sufficient levels of leafiness) or because it’s handled mainly at department level – or where, I’m afraid, some teachers who should know better default back to their own PGCE experience and pass on the things that were education orthodoxy some time ago because they don’t feel confident enough to rate their own experience more highly. Even where the training is very good, trainees still don’t get much chance to try things out with different kids or under a different school system because they are based in just one school. The subject-specific problem is just a reflection of very small cohorts, e.g. two science trainees, so this has to be done in departments and is often ad hoc (or non-existent). This is definitely the biggest issue of which SD trainees are aware; a barrage of questions about how to teach x, y, and z plus “how do we get more of this?” is the typical response to the two or three subject-specific sessions I did with SDs this year. Finally, the engagement with research is just a lack of expertise in schools. The big alliances do have the funding to bring in some expertise but most SD is done in-house by people with other responsibilities and only a handful are getting to grips with research as teachers, in the ResearchED mould.

Finally, nothing to do with the quality of training, but shifting 50% of ITT from HEI to SD is creating a recruitment crisis. This is partly a fragmentation problem, and partly a selection problem. The first issue is just that if you take a fixed number of potential teachers and present them with a lot more training choices, they get spread more thinly. This matters because they don’t spread evenly. (Whilst mentioning fragmentation, the administration burden of SD has nearly finished off some schools, who now have to do all the UCAS work, dozens of interviews, and all sorts of things that HEIs do but without the benefits of scale). The second, more serious, issue is the number of potential teachers that I have seen who are being rejected by schools but would have made perfectly good, if not instantly outstanding, teachers, and the odd dodgy one that somehow gets chosen. I think schools are better at choosing NQTs to fit their school than at identifying trainees with potential.

So was SD a mistake – was all rosy in the HEI garden?

No, there are problems here too. The charge that “those who can’t teach, teach teachers” is way out of line but it is true that a number of my colleagues haven’t taught children for a long time. I can see a scenario in which this wouldn’t matter if there was really good and effective collaboration between very experienced tutors, deeply engaged with an overview of relevant research, and very effective teachers with a vice-like grip on behaviour management and effective classroom practice, with the two things feeding into each other, but at the moment the divide between theory and practice is too big. We do get people in from local schools, for example all the early BM training is done this way, but although these people know exactly what they’re talking about, it’s too remote from practice and doesn’t follow through into placements. I have a suspicion that there are some examples of excellent collaborations out there (I don’t know first hand but if the Carter Review doesn’t speak to Michael Fordham and the other Cambridge history mentors I think they will have missed a trick).

It seems so obvious that the best ITT would come from really great collaboration between HEIs and effective teachers in effective schools that it is worth looking at why this isn’t happening more. The first issue is that research-led universities have other priorities. The best academics have to make an effort if they want to engage with the PGCE; it’s not the default position, and even if they did, many have very specific research interests that might not be relevant to training new teachers. From the other end, PGCE tutors are a lot busier than anyone can see from the outside looking in, and the time for identifying, engaging with, evaluating, and using the research is very limited. In this respect most PGCE tutors are in exactly the same position as most teachers, apart from having done more work at Master’s level at some point in the past.

Secondly, there is no obvious reason for successful teachers to make career moves into HEIs because the pay and career structure is a lot better in schools (I make about £35K with zero chance of promotion, which is less attractive on both counts than my previous middle management position). At the moment, taking a few years out of school to train teachers is unlikely to be the thing that cracks open an assistant headship. There are reasons to make the move (in my case the flexible hours have solved a child care problem) but then the requirement to teach and assess at Master’s level will prevent many effective school teachers from making the transition.

Finally, from what I’ve read online, the Cambridge history collaboration sounds tremendous; we are a million miles from that level of engagement from our mentors. Just getting them out of school for an afternoon twice a year is like getting blood from a stone. Sometimes just trying to communicate by phone or email is a trial. Unfortunately we are so tight for placements (always an issue for maths and science here) that it is very difficult to put pressure on schools to give mentors more time, because if we lost two or three we might not be able to place all our trainees. For me, the most striking thing about my new job is the way I hand over nearly all responsibility for my tutees to school mentors once they are on placement; the quality of each trainee’s experience depends enormously on the mentor and yet that mentoring is probably the thing I have least control over.

So I think what I’m saying is that, although there are elements of SD that could really improve ITT, the fragmentation of expertise and the current lack of accountability over standards is a major problem. Like Joe Kirby I worry about consistent quality; unlike Joe, I think the answer lies in improving what HEIs do, not going further down the school-based route, because if the DfE continue to drive ITT out of HEIs we are going to have a short-term recruitment crisis, and in the long term I think that we might have some dazzling examples of fantastic training and a lot of low-quality, uninformed ITT, delivered by alliances that just don’t have the personnel or capacity to do a great job. In the end, even Teaching Schools do not have teacher training as their raison d’être. Whilst PGCE may be pretty low on the Russell Group food chain, teacher training is the reason I and my colleagues have a job, and that means the quality of what we deliver drives every decision we make. What we need is to find a way to incentivise Teaching Schools and others to work more closely with HEIs rather than to be in competition with them. We want experienced HEI tutors to provide the continuity but then to have others moving more freely between the classroom and the university. We need to establish what it is that education research can tell us about effective teaching, and not leave it to those leading the training to all individually try to squeeze this work into their evenings and weekends. Can I be specific?

  • Establish stability over the allocations so that universities can make informed decisions about whether or not to continue to offer ITT and so that schools can work out how they want to operate.
  • Find an incentive that will get universities and schools working together more closely, so SD and HEI routes share good practice and build on each other’s strengths.
  • Make it a clear expectation that all schools offer training placements, through whatever route, and that they allocate appropriate time to match e.g. releasing mentors for training or to collaborate better with providers.
  • Provide some decent education research or other funding, specifically for those involved in ITT, or T&L in schools, to give them the time to engage broadly with the research base as part of their job rather than on top of everything else.
  • Establish a clear core framework for what teachers should know and be able to do to be awarded QTS (and this absolutely has to be owned by the profession and not imposed by the Carter Review or anyone else – if that requires a Royal College, fine, but if a respected group like Headteachers’ Roundtable or some prominent school or university, or ResearchED or something can get this established so it spreads across schools and HEIs that could also work). David Weston has been prominently saying this for some time and Rob Coe recently too. This should be based on a combination of research and existing good practice, and will take time and money to get right.
  • Do something about the dreadful way in which Ofsted inspects ITT (won’t go into details here but it really sucks).
  • Start holding training providers to account through the online community i.e. do for ITT what Old Andrew has done for Ofsted. I don’t think insisting providers publish their training materials – as Dominic Cummings has suggested – is viable but if the online community work with trainees and NQTs to name and shame genuine garbage, HEIs will sit up and take note pretty sharpish.
  • This one is specific to science but I would like to see the majority of trainees doing a Subject Knowledge Enhancement course before training, so we can fix the subject knowledge of those who currently start with a rudimentary, six-year-old GCSE in one or more of the three subjects they have to teach.
  • And finally I would like all ITT courses to include just one Master’s level assignment (20 credits) in the form of a literature review. I think 60 credits is too much and distracts from classroom practice but one assignment is the chance to get a good grasp of research methodology in education. This would mean the end of the ITT year would be QTS but then I think teachers should do the other 40 credits in NQT+1 or NQT+2 when they’ve got the head space for it, to complete PGCE. This would help to keep HEIs and research in touch with schools and early career teachers.

Some of this is about systems, and some about incentives. As always, tinkering with the systems is only important to the extent that it provides stability for people and organisations to make commitments and the long-term investments of time that lead to higher quality outcomes. The DfE often forget this (possibly that is a charitable interpretation), operating as they do on a five-year election cycle. I hope the Carter Review doesn’t.

 

Post script:

There are three other models of ITT that I’m aware of: SCITTs, Teach First, and Troops to Teachers. The last is very small and specialised and probably isn’t relevant. SCITTs I know very little about but I should think what I’ve said above still holds, with SCITTs taking the place of HEIs if they are big enough – I don’t have any really strong views about whether an HEI or SCITT is better if the tutors are the right people. Teach First I know a bit more about, and they deserve massive credit for the very significant glamour they’ve added to the image of ITT. The details of the training I can’t comment on, except that I like the idea of front-loading the training (the SD programmes that get trainees in front of classes after one day need to take note), although I think it’s essential to also have time for reflection after trying things with real children and after watching effective teachers in the classroom, and I’m not sure how much time there is for this with Teach First. The emphasis on getting the teachers with high potential into tough schools is brilliant. The vagueness over subject specialism, and Teach First’s apparent option to ignore the subject allocations that everyone else is constrained by, worry me a bit. And I suspect that Teach First have issues with things like mentoring quality that are also problems elsewhere. The final issue with Teach First is that whether or not the preparation and support in school is first rate, there will still be failures, and if it does all go wrong then everyone involved takes a big hit. I think what’s important is that Teach First isn’t seen as some kind of beacon of hope that everyone else should be emulating, but an example of an alternative training route, meeting a particular need, with elements to be admired and elements to be improved.

#EducationFest No.4: How will we know?

This is the fourth in a series of posts on the Festival of Education at Wellington College and the second post on Rob Coe’s talk. The first is here.

Moving on from the possible can of worms associated with the Danielson Framework, Rob’s session was really about how teachers can improve, and how research and evaluation have an important role to play in this process if hours and hours of wasted time are to be avoided. He is closely involved with the EEF Toolkit and suggested this was a good starting point for the question of what we should be doing to improve. However, I think he suggested an even more important question to be asked once we think we have identified the thing we need to work on.

“Does focusing on these things lead to improvement?” It’s a critical point, isn’t it? A teacher might well feel, or be told, that their subject knowledge was weak but there is a possibility they might put hours and hours of work into improving this, only to find the impact on their pupils to be zero. It’s a wider question though. Currently the zeitgeist in the blogosphere is about retrieval practice, distributed practice, and interleaving. There is lots of good research from cognitive psychology to support these ideas but what if we put hours and hours into re-writing SoWs only to find the impact on our pupils to be zero? The EEF Toolkit, Hattie’s meta-analysis, and one or two other reviews do point very strongly to a few things that do have significant impact. Feedback is probably the best example, but if it were that simple then AfL would have had a much bigger impact on the effectiveness of teaching in the UK than has actually been the case.

I suspect the problem is that different teachers need different things, and different teachers implement the same idea in different ways. There were three teachers in my first physics department. The HoD was an Oxford graduate, by far the best physicist, and capable of brilliant teaching ideas, but taught everything by the seat of his pants, sometimes went over the heads of his pupils, and left all but the most capable feeling disoriented. The other teacher was the fiercest disciplinarian in the school, originally a chemistry specialist, and was organised and pedantic to a fault; his pupils worked tremendously hard, did some very high standard work, and completed the course with immaculate notes, but often struggled to link knowledge to solve problems when working independently. I was short on both subject knowledge and classroom experience and my two biggest problems were keeping everyone on task and not completely cocking up the physics, but I had a pretty good feel for the problems pupils had in understanding the subject. With the benefit of hindsight I would have said we all needed to improve but in different ways. Feedback may well have an effect size of 0.8, or 8 months or whatever, but it certainly wouldn’t have had that impact on my teaching at that time. And if we had tried AfL or some other feedback strategy, there’s every chance that we would each have done it differently. As Rob pointed out, despite all we know about learning, CPD still mostly consists of just explaining at length to teachers what they should do and expecting them to understand and be able to do it. Even a typical behavioural intervention (+4 months) wouldn’t have helped me as I was already using an assertive discipline strategy to moderate but not universal effect. What I needed was to do a lot of past papers, add some more variety to my teaching, and work out how to notice behavioural issues and nip them in the bud before they had become disruptive.
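(As an aside for anyone who, like me, sometimes glosses over what these numbers actually are: an ‘effect size’ is just a standardised mean difference between two groups. The sketch below is a minimal, invented illustration – the test scores are made up and the months-of-progress line simply uses the crude ‘0.8 is roughly 8 months’ equivalence above, not the EEF’s actual methodology.)

```python
# Invented numbers, purely to illustrate what an effect size is; the months
# conversion uses the rough 0.1-per-month reading of "0.8, or 8 months",
# not the EEF's real translation.

from statistics import mean, stdev


def cohens_d(treatment, control):
    """Standardised mean difference (Cohen's d) between two groups of scores."""
    n_t, n_c = len(treatment), len(control)
    pooled_var = ((n_t - 1) * stdev(treatment) ** 2 +
                  (n_c - 1) * stdev(control) ** 2) / (n_t + n_c - 2)
    return (mean(treatment) - mean(control)) / pooled_var ** 0.5


# Hypothetical end-of-year scores: one class given rich feedback, one not.
with_feedback = [58, 62, 65, 70, 71, 74, 77, 80]
without_feedback = [52, 56, 59, 63, 66, 69, 71, 75]

d = cohens_d(with_feedback, without_feedback)
print(f"Effect size d = {d:.2f}")                 # about 0.75 with these numbers
print(f"Roughly {d * 10:.0f} months of extra progress on that crude reading")
```

The point, for what follows, is that a single figure like this averages over classes and teachers who may need completely different things.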

Having cogitated on this for a week or so, I find myself going back to Ben Goldacre and the whole RCT thing. There are a whole bunch of issues with running RCTs in education that are less of an issue in medicine, but I think the biggest difference is that diagnosis in medicine is a lot more sophisticated than in education. There may have been many decades of evidence-based medicine but I suspect that it’s still pretty hard to know “what works” if the symptoms are “feeling unwell”. In education, when we talk about how to improve, we’re at the “feeling unwell” level of diagnosis. We might well find that high quality research would show that giving unwell patients Lemsip might have an effect size of 0.8 but that doesn’t mean it’s the best treatment for leukaemia, cirrhosis of the liver, or someone throwing a sickie.

I don’t suppose Rob Coe intended me to head off on this particular tangent but it’s the mark of a great talk that it changes your thinking. Thanks Rob – best session of the festival, and the competition was pretty fierce.

#EducationFest No.3: A Research-based, Constructivist View?

This is the third in a series of posts on the Festival of Education at Wellington College.

Rob Coe is currently occupying the position, shared perhaps only with Dylan Wiliam, of a Colossus with limbs astride the sometimes separate worlds of education research and education practice. There are other well-regarded academics that can claim the same combination of having worked in schools, and having produced high quality research directly relevant to teaching, but I’m not aware of anyone other than these two so prominently engaged in dialogue with the profession.

His contribution at ResearchEd 2013 about graded lesson observations last year turned out to be momentous in its effect and has been very widely quoted. Whilst entirely in agreement with the majority of teachers that the typical ‘three graded observations per year’ approach to performance management is crap, I do have some reservations about the way Rob used the US research papers, and the way this has been picked up and passed on as if it reflects a major study carried out in this country, using our methods of lesson observation. So with that in mind, but also a keen awareness that Rob was likely to have something interesting and important to say – his ResearchEd 2013 talk is online, and is well worth watching – I settled myself in the Old Hall and studied the oils of previous Masters of Wellington College, breathing in the oak-panelled atmosphere.

Rob started with three questions about improving teaching: “What does better look like?”, “How do we get better?” and “How will we know if we have?” I’m a big believer in the importance of asking good questions in teaching; Rob’s were humdingers.

Rob strikes me as a measured commentator and he wasn’t going to provide a definitive answer in under an hour. Instead he laid out some interesting thoughts. I’ve split these into two posts because the first thing he said has led me in a different direction to the rest.

And the first thing he did was lay into the Teachers’ Standards. Well, more ‘laid-back into’ but his wry comment, like with the reliability of lesson grades, was all that was required. In contrast he offered the Danielson Framework for Teaching as an example of how research could be used to develop something better. Having looked at that framework, it appears to have a fair bit to offer, but it does describe itself as follows:

The Framework for Teaching is a research-based set of components of instruction, aligned to the INTASC standards, and grounded in a constructivist view of learning and teaching.

That word ‘constructivist’ is interesting, especially in the same sentence as ‘research-based’. I suggest you have a look at the framework if you’re interested but I guess my take on it would be that there is a child-centred element to it that might not be to everyone’s taste. This raises some fairly fundamental questions: if the framework is based on really good research then the implication is that this child-centred element is part of the answer to Rob’s first question, “What does better look like?” If he is right, that will certainly upset some and please others. If, on the other hand, the child-centred element is wrong, then either the research it’s based on is dodgy (in which case why hasn’t Rob spotted this?), or there is a deeper problem that good research is giving us more than one ‘correct’ answer about what better looks like. This is a really hefty question; at the moment there is a feeling in education that if research is carried out effectively, and teachers engage with this properly, there may not be a complete blueprint for effective teaching but it will be possible to paint a picture of it in broad brushstrokes. For the neo-traditionalists, the research on constructivist approaches is both limited and flawed, and they would argue the decent research pretty much all points their way. So where is Danielson, and by association, Rob, coming from? Is he actually Rob The Blob? If not, is it possible that there is good research in favour of both traditional and constructivist approaches?

Does anyone who’s read the research leading to Danielson’s conclusions fancy commenting?

My thoughts on the rest of Rob’s talk are in #EducationFest No. 4