Linking ITT and workforce data: a step in the right direction

I had the great pleasure of meeting Becky Allen back at the beginning of the year for a bit of a discussion about the work Education Datalab were doing on matching teacher training records to the School Workforce Census. I suspect a pretty monumental amount of effort has gone into nailing down the final details since then, but two of the three linked reports are now published. I suggest you start here either to have a quick look at the key findings, or to access the full reports. So far I’ve just read the NCTL one.

It is immediately apparent that this is something the DfE ought to have done years ago. There is a lot of talk of evidence-based policy-making, but any genuine commitment to such a thing would have seen this sort of data analysis set up prior to the seismic changes to ITT that have been implemented since 2010. Hey-ho; better late than never.

In theory this methodology could be used for a much longer-term project that might start generating some really useful data on the impact of various approaches to training teachers. It is easy to pick up this work and think it is limited to evaluating structural issues about ITT routes, but if you consider the richness of a data set that can pretty much link every teacher in the maintained sector back to their ITT experiences, there is almost unlimited potential. Inevitably, for ITT providers, former trainees drift out of contact (in a self-selecting way) over the years after qualification. This work potentially solves that problem for research on any aspect of ‘what works’ in ITT. That’s something for the future; what of the findings here?

It would be tremendously easy for a lot of people in ITE to say “I told you so” in regard to the Teach First retention figures. Actually, I think the useful questions are more subtle than that, but figures first. Using the lower-bound numbers, traditional HEI-led routes have about 60% of those initially recruited working as teachers in the maintained sector in their third year after qualifying. SCITTs are higher at 70% (but these would have been the early adopters). School Direct hasn’t been running long enough to have figures. Teach First is under 50%.

[Figure: Education Datalab graph of teacher retention by ITT route]

However, there are several things to remember about Teach First. Participants’ qualifying year involves teaching potentially difficult classes, mostly in schools with more challenging behaviour, with variable levels of in-school/in-class support, whereas university-led trainee teachers are supernumerary, on lower timetables, and working in a wider range of schools, rarely those in an Ofsted category or graded 3. Teach First participants are also possibly more likely to continue to work in more challenging schools, although that is an assumption I would want to see data on, because certainly some participants move from Teach First schools to schools at the opposite end of the socio-economic spectrum.

There are also a few things to remember about HEI-led courses. Financial survival, and the need to make up the numbers across all the shortage subjects, probably mean that in these subjects the HEI-led cohort has a longer tail than for any other route. SCITTs may have some of these pressures too but, particularly in the years covered by this report, are likely to have had the opportunity to be more selective. I suspect it’s the other way round for subjects like PE, English and history, where the larger scale of HEIs generates a larger pool of applicants compared to SCITTs. Since shortage subjects make up the bulk of an HEI cohort, you would expect a lower qualification rate, and also some marginal grade 2s where support (or lack of it) in their employing school might determine success in the NQT year. As pointed out right at the beginning, the report can’t tell us anything about what would happen to the same trainee teachers if they were trained via a different route.

Teach First recruitment has been astonishingly successful. Having seen the marketing machine in action, and with access to funding that very few providers can match, that is perhaps not completely surprising, but it has been terrific nonetheless. This means they probably have the strongest cohort of all at the start of training. For me, the critical question to ask is: if Teach First training were more like the HEI-led route, or a SCITT, would there be hundreds more high-quality teachers still in the classroom? There is no way to tell from this report but, anecdotally, the Teach First participants I have worked with would all have had excellent outcomes on the HEI-led course or School Direct programmes I mainly work on. What I don’t know is whether they would have gone into teacher training at all.

If Teach First is mainly putting people who would never otherwise have tried teaching into struggling schools with teacher recruitment problems, to do a decent job for two or three years, then that is probably a justifiable use of public money; if they are putting potentially high-quality, long-career teachers through training in a way that knocks an additional 10-20% off retention, that doesn’t look so good. I suppose there might be other benefits; I’m unconvinced by these, but make up your own mind. Sam Freedman sets out the most positive case here.

What about the other findings?

  • Three regions of England – North East, North West and South West – appear to have large numbers of newly qualified teachers who do not join a state-sector school immediately after achieving QTS.
    • This is pretty good evidence that the NCTL need to sort out the Teacher Supply Model, but that was already very apparent. We are waiting on tenterhooks for the announcement on allocation methodology (so presumably they are desperately trying to invent something at the moment; let’s hope they don’t make another almighty cock-up!).
  • Those studying on undergraduate with QTS courses have low initial retention rates in the profession, though we cannot know whether this results from subsequent choices made by the individual or recruitment decisions made by schools.
    • They do, but the data also shows they catch up later. I suspect that if you have a B.Ed., sooner or later teaching becomes the best option for a professional career, whereas PGCE-holders have their undergraduate degree as an alternative option (depending a bit on subject).
  • Teach First has very high two-year retention rates, but thereafter their retention is poorer than other graduate routes.
    • I’m hoping, perhaps in vain, that the move away from QTS might link teacher development across from ITT into the first year(s) of post-qualification employment for other routes, getting a bit of the two-year Teach First programme effect into them.
  • Ethnic minority teacher trainees have very low retention rates.
    • I suspect it is because they are much more likely to have limited experience of the UK education system if educated abroad, and are also more likely to be EAL, both of which, in my experience, can affect classroom relationships. It would be enormously useful to have data that separates UK- and non-UK-educated teachers and to drill down a bit. In my part of the world, UK-educated BME applicants are thin on the ground, but I don’t notice anything that would lower their retention rate.
  • Individuals who train part-time or who are older have much poorer retention rates, which may simply reflect other family commitments that interfere with continuous employment records.
    • UoS doesn’t do part-time. I have a hunch that retention might actually be better for older trainee teachers on our Science PGCE – they do mostly need a proper job to pay mortgages whereas younger trainees often don’t have that commitment. On the other hand, whilst they are nearly all tremendous people to work with, developing into a good teacher is partly about developing habits that are effective in the classroom and I think changing habits gets harder as you get older. It’s also a very fast-moving environment when you are a novice and again I think adapting to this gets harder with age. They are quite often particularly good at developing relationships with teenagers though, so it’s swings and roundabouts, maybe.

So those are my first thoughts. I think we have some way to go to get stable and effective initial teacher education that is structurally sound and therefore has the potential for continuous improvement. NCTL have tried quite hard to break what we had; now we need to take the best of the many pieces and put them back together again, hopefully ending up with something better than before. High quality evidence is a key part of this process, as are people in high places who are prepared to pay attention to it. This report is a very important step in the right direction.

Diet (Effect size = 0.12)

This post is part of a series looking at the influences on attainment described in Hattie J. (2009) Visible Learning: a synthesis of more than 800 meta-analyses relating to achievement. Abingdon: Routledge. The interpretation of Hattie’s work is problematical because the meaning of the different influences on achievement isn’t always clear. Further context here.

Working my way through the influences, I’ve skipped a few that didn’t look terribly interesting, had a low effect size, or had nothing to do with what happens in schools, but I have had a little look at Diet (Effect size = 0.12) because I am surprised this is so low. At the college where I used to work, which was in a typical deprived coastal-urban setting, we had plenty of students who hadn’t been terribly successful at GCSE and were doing Level 1 and 2 courses to try to improve their qualifications. Amongst this group it wasn’t unusual to find that a student’s breakfast had been 1.5 litres of Coke and a Monster, which I always found pretty stunning. I think I would rather have a White Russian on my Cornflakes than have to face drinking that lot in the morning!

I’ve tended to go along with the general opinion that a breakfast of Coke and Red Bull is likely to have a significant effect on learning performance, and I have this vague memory of various studies having shown that a balanced diet and a proper low-GI breakfast lead to significantly better concentration during the school day. That certainly seems to be the opinion of leading nutritionists (or rather, of successful chefs appointed as government advisers). However, I’m not sure that proper scientists would agree that the caffeine is a major problem. On the other hand, blood glucose levels and/or particular additives or nutrients might be a different matter.

I work quite closely with Professor Marcus Grace who, as well as tutoring on the Secondary Science PGCE at Southampton, is one of the significant figures involved in the LifeLab project. I really ought to get round to asking him about this – there is so much research expertise in the School of Education Studies, and I need to work on tapping into it! When I get round to doing that I’ll update this post; meanwhile, what evidence is Hattie basing his d=0.12 on?

There is one meta-analysis, Kavale and Forness (1983). I can only access the abstract, but it’s clear that, despite the missing clause in Hattie’s summary, the meaning I had assumed he intended does match this meta-analysis. Equally, it is clear that it is looking very specifically at children with an ADHD diagnosis, not children without one. Essentially this paper states that the studies analysed do not provide evidence to support the earlier hypothesis that dietary changes could have a positive effect on ADHD symptoms. I’m guessing that the outcome measure was not academic achievement, but more likely some behavioural measure, which reminds me again that Hattie seems rather blasé about what his meta-analyses are measuring.

A quick trawl for more recent work suggests to me that things may have moved on, with the Schab and Trinh (2004) meta-analysis, dealing only with double-blind, placebo-controlled trials, getting d=0.21-0.28. Again there is the issue of whether or not Hattie’s 0.40 average effect size is the correct bar for comparison. With double-blind, placebo-controlled trials, it shouldn’t be: the methodology ought to make the measured effect of a do-nothing intervention zero, and these authors are clear that their meta-analysis does show that artificial food colours affect ADHD. Having said that, when the trials were separated into groups according to who was reporting on the effects, teachers couldn’t detect any difference in behaviour but parents could. Because of the double-blinding, that can’t be parents’ wishful thinking; it might have rather more to do with the difficulty kids have in shifting their teachers’ expectations. Stevens et al. (2011) is a review of the literature, including both the meta-analyses mentioned above. They reach a similar conclusion, but pick up the suggestion in Schab and Trinh that the effect might be restricted to only a proportion of children with an ADHD diagnosis (10%-33%). However, the Bateman et al. (2004) study on the Isle of Wight involving Southampton academics, and a further, smaller study from the USA cited on p.286 of Stevens et al., suggest quite strongly that artificial food colourings affect all children (well – young ones at least).
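As an aside, it may help to pin down what these d values actually are. The sketch below is the standard Cohen’s d in its usual pooled-standard-deviation form (Schab and Trinh may well have used a bias-corrected variant such as Hedges’ g, so treat this as illustrative):

\[
d \;=\; \frac{\bar{x}_{\text{colour}} - \bar{x}_{\text{placebo}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
\]

This is also why zero, not 0.40, is the right bar here: with a placebo control, an additive that does nothing gives \(\bar{x}_{\text{colour}} \approx \bar{x}_{\text{placebo}}\) and hence \(d \approx 0\), whereas Hattie’s 0.40 average is taken across a mass of studies, most of which have no such control.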

Since writing this post I’ve come across this Harvard Mental Health Letter reviewing the relationship between diet and ADHD. It includes the findings from the Schab and Trinh (2004) meta-analysis but also some other research. The conclusions are similar – that some artificial food colourings do seem to have an effect on at least a proportion of children, which probably means that reducing exposure is a good thing. It also suggests that increasing Omega-3 essential fatty acids and micronutrients might just help too. A final point is that the research on the effect of sugar on behaviour suggests there is no link (but of course the link with obesity and Type II diabetes is only too obvious). But the strongest message is that the usual recommendations for a healthy diet apply to all children.

Anyway, this isn’t something for day-to-day teaching. There are all sorts of issues around ADHD (like whether it is a useful diagnosis, whether drug treatments are a good idea, and so on) and even if all children are susceptible to artificial food colourings, it’s possibly something teachers might helpfully be aware of, but it isn’t going to affect what we do in our classrooms. I again find myself wishing that Visible Learning was narrower in its breadth and deeper in its depth, but it’s been an interesting evening educating myself. Next, I’m going to jump to Time on Task (Effect size = 0.38) because I want to look at this in relation to a paper by Professor Daniel Muijs (another big hitter from the Southampton School of Education Studies) that suggests Time on Task is one of the most important influences on achievement.

Index to series on Hattie’s Visible Learning

This post is just a quick reference index to my series of posts looking at the influences on attainment described in Hattie (2009) Visible Learning: a synthesis of more than 800 meta-analyses relating to achievement. Abingdon: Routledge.

The interpretation of Hattie’s work is problematical because the meaning of the different influences on achievement isn’t always clear. Further context here. There are also some significant issues with Hattie’s methodology but, despite these shortcomings, Visible Learning remains the boldest attempt to draw together all areas of education research.

The list below shows my posts in the order they appear in Visible Learning. I have only looked at some influences, skipping those that I thought to be self-explanatory, outside the influence of teachers, or inconsequential.

—————————————————————————————

Piagetian programs (d=1.28)

Self-reported grades (d=1.44)

Concentration, persistence, and engagement (d=0.48)

Diet (d=0.12)

—————————————————————————————

The intention is to have the index presented in two forms:

The first is the list above, showing my posts in the order the influences appear in Visible Learning.

The second will be taken directly from the list of influences in rank order of effect size in Appendix B of Visible Learning, pp.297-300 (but I haven’t copied that out yet – waiting for an evening when my brain is too fried for anything less mechanical!).

#EducationFest No.4: How will we know?

This is the fourth in a series of posts on the Festival of Education at Wellington College and the second post on Rob Coe’s talk. The first is here.

Moving on from the possible can of worms associated with the Danielson Framework, Rob’s session was really about how teachers can improve, and how research and evaluation have an important role to play in this process if hours and hours of wasted time are to be avoided. He is closely involved with the EEF Toolkit and suggested this was a good starting point for the question of what we should be doing to improve. However, I think he suggested an even more important question to be asked once we think we have identified the thing we need to work on.

“Does focusing on these things lead to improvement?” It’s a critical point, isn’t it? A teacher might well feel, or be told, that their subject knowledge is weak, but there is a possibility they might put hours and hours of work into improving it, only to find the impact on their pupils to be zero. It’s a wider question, though. Currently the zeitgeist in the blogosphere is about retrieval practice, distributed practice, and interleaving. There is lots of good research from cognitive psychology to support these ideas, but what if we put hours and hours into re-writing SoWs only to find the impact on our pupils to be zero? The EEF Toolkit, Hattie’s synthesis of meta-analyses, and one or two other reviews point very strongly to a few things that do have significant impact. Feedback is probably the best example, but if it were that simple then AfL would have had a much bigger impact on the effectiveness of teaching in the UK than has actually been the case.

I suspect the problem is that different teachers need different things, and different teachers implement the same idea in different ways. There were three teachers in my first physics department. The HoD was an Oxford graduate, by far the best physicist, and capable of brilliant teaching ideas, but taught everything by the seat of his pants, sometimes went over the heads of his pupils, and left all but the most capable feeling disoriented. The other teacher was the fiercest disciplinarian in the school, originally a chemistry specialist, and organised and pedantic to a fault; his pupils worked tremendously hard, did some very high-standard work, and completed the course with immaculate notes, but often struggled to link knowledge to solve problems when working independently. I was short on both subject knowledge and classroom experience, and my two biggest problems were keeping everyone on task and not completely cocking up the physics, but I had a pretty good feel for the problems pupils had in understanding the subject.

With the benefit of hindsight I would have said we all needed to improve, but in different ways. Feedback may well have an effect size of 0.8, or 8 months or whatever, but it certainly wouldn’t have had that impact on my teaching at that time. And if we had tried AfL or some other feedback strategy, there’s every chance that we would each have done it differently. As Rob pointed out, despite all we know about learning, CPD still mostly consists of just explaining at length to teachers what they should do and expecting them to understand and be able to do it. Even a typical behavioural intervention (+4 months) wouldn’t have helped me, as I was already using an assertive discipline strategy to moderate but not universal effect. What I needed was to do a lot of past papers, add some more variety to my teaching, and work out how to notice behavioural issues and nip them in the bud before they became disruptive.

Having cogitated on this for a week or so, I find myself going back to Ben Goldacre and the whole RCT thing. There are a whole bunch of issues with running RCTs in education that are less problematic in medicine, but I think the biggest difference is that diagnosis in medicine is a lot more sophisticated than in education. There may have been many decades of evidence-based medicine, but I suspect that it’s still pretty hard to know “what works” if the symptoms are “feeling unwell”. In education, when we talk about how to improve, we’re at the “feeling unwell” level of diagnosis. We might well find that high quality research shows that giving unwell patients Lemsip has an effect size of 0.8, but that doesn’t mean it’s the best treatment for leukaemia, cirrhosis of the liver, or someone throwing a sickie.

I don’t suppose Rob Coe intended me to head off on this particular tangent but it’s the mark of a great talk that it changes your thinking. Thanks Rob – best session of the festival, and the competition was pretty fierce.

#EducationFest No.2: More root than trunk

This is the second in a series of posts on the Festival of Education at Wellington College.

After Wilshaw, the first proper session of my day was Tom Sherrington. Of the distracting number of blogs I follow (I have to ration myself to 20 minutes at a time otherwise nothing else would ever get done, but it’s a strain because there are so many people out there writing interesting stuff), Tom’s blog is the one I find myself most in tune with, most of the time. I thought I would feel the same way about his session on how traditional and progressive teaching approaches tend to blend together in most of the good teaching sequences we see in real classrooms, but I left just a little dissatisfied.

He gave plenty of examples, not all from the selective setting where he is currently headteacher. He quite rightly identified electric circuits as a good example of when the teacher’s explanation and direction are crucial to children’s learning, and how good subject knowledge is critical in doing this well. Just as appropriately, he talked about situations like A-Level investigations, where giving children the opportunity to direct their own learning allows them to develop their interest in the subject and flourish intellectually. His description of how, in his school, Art and D&T were tightly controlled and very teacher-led at KS3, so children gained the skills required for later, much more self-directed, projects at KS4, was a good example of how learners can progress quite quickly from novice to a much more expert level where more open learning is appropriate. He presented his tree model of effective teaching: the more progressive roots providing important nourishment and skills, and a highly structured, traditional trunk providing rigorous knowledge. The point being that both are needed if glorious foliage is to be developed.

However, as a physicist, perhaps Tom had forgotten that one of the most prominent misconceptions in biology is that the bulk of a plant comes from the soil when, in fact, it comes mainly from the air. I found the whole session a bit like that. I didn’t find fault with his thesis, but I didn’t find his talk totally convincing either. Maybe too much anecdote and not enough evidence. Maybe just a lot of good material but not carefully enough marshalled. But it wasn’t just the tree that reminded me of a physicist talking about biology: essentially correct, but just not quite learned well enough to avoid some ragged edges. Maybe it was all roots and no trunk. He didn’t need to convince me, but I don’t think this session will have convinced any of the people that he does need to convince.

Teach Like a Champion?

Top of my Christmas list this year was Doug Lemov’s Teach Like a Champion. I think the Initial Teacher Training we run is good in many ways, but the extent to which trainees get specific, concrete advice on ways to improve their teaching depends very heavily on the skills of the mentor in school. Where the mentors are excellent, the advice and target-setting are really specific and the trainee can try new techniques out immediately. But a lot of mentors don’t manage this, even though they are very supportive generally. I wondered whether ideas from this book could fill some of the gap. Having read it, I think it’s a great book with plenty to say to those involved in ITT in the UK, but maybe it’s not such a good choice for a trainee’s reading list. I’ll come back to what it does have to offer in a minute: caveats first.

Lemov draws on a number of outstanding teachers for examples, but I get the impression the total number is not actually that high. In effect he has spent time in the most successful classrooms within a linked chain of charter schools serving a particular demographic, and has made the assumption that anything he sees replicated across these most successful classrooms must be a factor in the success of these outstanding teachers. I guess he doesn’t make any claims that his observation system is particularly systematic, or rigorous in its approach, but there is no doubt that he is pretty convinced himself that this is a list of the techniques used by outstanding teachers. A coach and horses could be driven through this methodology, except that there is something fundamentally sound about the basic premise, as long as the reader appreciates that some of the techniques may be much more effective than others and some may even be counterproductive; that what works in these classrooms may not work in all classrooms; and that there is a chance that Lemov has missed something deeper and more elusive that makes these techniques work for the teachers observed but fall flat if applied by others without this deeper something in place.

In particular I notice that the behaviour management techniques are almost entirely devoted to keeping classes on track where behaviour is basically okay already. Nothing about establishing class rules, really. Nothing about what to do when a bout of fake coughing starts round the room, or deliberate, invisible tapping under the desks. Nothing about how to respond to the pupil who tells you to fuck off when asked to move. Not even an in-depth description of the full procedure to follow when you first ask for silence, wait for it, don’t get it, and still have half the class chatting rather than one or two individuals. I have a suspicion that these things either don’t happen to the teachers Lemov was observing, or they do and were dealt with at the start of term, before Lemov’s observations. I can’t believe the schools in question don’t experience these things at all; in fact I should think these are the sort of schools that need metal detectors and security guards on the doors just to keep guns off the premises. This thought, that Lemov may be missing something fundamental, worries me quite a bit, but, although that’s a pretty hefty disclaimer, I think we need more of this kind of thinking in the UK ITT system.

Those who still think that university tutors spend their entire time filling the heads of trainees with theoretical flights of fancy and a selection of bogus teaching techniques born of the summer of love are way off the mark. I can’t vouch for all providers, but we provide a mix of subject knowledge focused on anticipating misconceptions; basic teaching skills like how to plan a lesson around your learning objectives, how to check progress as you go along, questioning technique, and so on; knowledge about whole-school issues like SEN; and lots of time (two-thirds of the course) in school observing, practising, and improving. Almost all the whole-school material, and a series of sessions on behaviour management, AfL, pace, and other tricky areas, are delivered by outstanding practitioners from local schools, and I think nearly everything we do is focused on helping trainees make the most of their placements. However, what we don’t do is provide regular, repetitive opportunities to identify and practise individual techniques. And I think we should!
It’s to this aspect of our current practice that Lemov speaks, and actually this was my primary motivation in reading the book. What the book does exceptionally well is isolate individual techniques. “No Opt Out” isn’t just presented as one of a dozen elements of effective questioning; it’s described as a single entity, to be understood, and practised, on its own, until mastered. I think this may be the thing that would make most difference to my trainees, particularly those to whom teaching doesn’t come so naturally. In the classroom there is just too much going on for an inexperienced teacher to get much focus on specific techniques. In fact it’s a testament to the quality of the trainees I work with that they manage to do so quite as effectively as they do. What if I could remove some of that pressure? What if I could give them the chance to make the classroom a place for stress-testing rather than tentative first steps? Maybe that could lead to something really exciting. And then what if we could meld Lemov’s instinct for addressing the issue of ‘What Works’ as a blow-by-blow account of action at the coal face with Hattie’s academic rigour, and something like David Weston’s teasing out of nuance in the data? What if Schools of Education in universities were somehow turned on their head, so that at least some of the research was driven by the need to inform ITT? Then we might really be motoring on the evidence-based highway.