Teach First recruitment presentation

Although I work in Initial Teacher Education, I don’t really know very much about Teach First. I’ve picked up a bit from bloggers and tweeters but am aware that sometimes I’m not clear who came through Teach First and who trained via one of the other routes. Laura McInerney (who also helpfully collated some relevant posts), Kris Boulton, Joe Kirby, Harry Fletcher-Wood (not absolutely sure but I think he was TF) and Daisy Christodoulou are visible presences online and, whilst I certainly don’t always agree with some of these folk, there’s no doubting either their commitment to education or, for those still on the front line, their commitment to improving their own classroom practice. I’m also aware of some of the data: Teach First achieves a similar five-year retention rate to other routes (and of course that’s in some tough schools); Ofsted has graded its provision Outstanding in all categories; there is some evidence that schools engaging with Teach First outperform those that don’t (although whether that reflects the impact of the teachers or of the schools’ leadership is uncertain); and the 7:1 applicant-to-trainee ratio and academic level of trainees are encouraging (although Teach First has unique requirements around subject specialism that skew this figure significantly). Prior to last night, my views about Teach First were limited by having no direct experience of the programme, but were generally positive with a couple of concerns. The very last part of my blog on the Carter Review will show you where I was at, with a more specific comment about subject knowledge on Kris Boulton’s post here.

After attending the recruitment presentation organised by PhySoc (University of Southampton Physics Society) I am not so sure. Now, I appreciate that a recruitment campaign is not the same thing as a teacher training programme but I was genuinely shocked by the way Teach First was presented.

The first part of the presentation was a clear description of the fundamental issue of educational outcomes for FSM pupils in England, followed by a call to arms. There was a slight sense of the privileged and academically able going into the slums to save the deserving poor, but I think that is probably justifiable salesmanship. What came next just wasn’t. In a fifteen-minute presentation I reckon the ‘classroom teaching’ bit got less than 30 seconds. Not once was there any suggestion that wanting to work with children was a prerequisite for teaching. Not once was there any suggestion that children might have anything to offer. In fact, really the only time that working in schools was dwelt upon was to flag up that in 15 years, 14 TFers had made headteacher (or it might have been 15 headteachers in 14 years), along with a statistic about the faster career progression of Teach First trained teachers. There was a fair bit of emphasis on the main details of the two-year Leadership Development Programme, though. I cannot believe that the two years it takes to go from novice to the end of the NQT year can be marketed as Leadership Development – wouldn’t Teacher Development be more appropriate? Maybe not, when the details are as follows:

  • Six weeks of residential training, described as a cross between Freshers’ Week and something else (sorry, can’t remember what, but I guess it was less contentious).
  • Some teaching
  • Internships with major international companies like Deloitte, PwC, Goldman Sachs, the Civil Service Fast Stream… there was a list. I paraphrase, but: “This isn’t just about making the coffee; your leadership potential will be developed and you will have the opportunity to be involved in real projects during this time.”
  • A bit more teaching
  • Enrolment in a Learning Network – to develop your problem-solving skills
  • The opportunity to complete a Masters degree (sadly this is no longer free but is still heavily subsidised at £500/year). Bloody hell – that is cheap!
  • Become a Teach First ambassador. I think the possibility of continuing in education got mentioned here (presumably as some sort of senior leader, although I assume the reference to eight third-year teachers having headships is a typo) but mainly this was about the opportunities with Teach First Platinum Partners – that’s PwC, Goldman Sachs etc. again. More info on the Teach First website.

There was an overview of the programme on a little flow diagram; the two teaching bits were the same size as each of the other bits. Do I need to say any more?

Anyway, I’m pretty clear that one recruitment presentation isn’t a basis on which to judge a programme that puts large numbers of keen new teachers into schools in socio-economically deprived areas, but it has given me pause for thought. This presentation was absolutely clearly suggesting that a couple of years of teaching was a way to get a massive leg up on the career ladder. This is the criticism that has been consistently levelled at Teach First and now I know why. I’m perfectly happy with teachers having alternative career options; I’m not happy about this short-selling of teachers and teaching.




Diet (Effect size = 0.12)

This post is part of a series looking at the influences on attainment described in Hattie J. (2009) Visible Learning: a synthesis of more than 800 meta-analyses relating to achievement. Abingdon: Routledge. The interpretation of Hattie’s work is problematical because the meaning of the different influences on achievement isn’t always clear. Further context here.

Working my way through the influences I’ve skipped a few that didn’t look terribly interesting, had low effect size, or had nothing to do with what happens in schools but I have had a little look at Diet (Effect size = 0.12) because I am surprised this is so low. At the college where I used to work, which was in a typical deprived coastal-urban setting, we had plenty of students who hadn’t been terribly successful at GCSE and were doing Level 1 and 2 courses to try to improve their qualifications. Amongst this group it wasn’t unusual to find that a student’s breakfast had been 1.5 litres of Coke and a Monster, which I always found pretty stunning. I think I would rather have White Russian on my Cornflakes than have to face drinking that lot in the morning!

I’ve tended to go along with the general opinion that Coke and Red Bull are likely to have a significant effect on learning performance, and have a vague memory of various studies having shown that a balanced diet and a proper low-GI breakfast lead to significantly better concentration during the school day. That certainly seems to be the opinion of leading nutritionists (or rather, of successful chefs appointed as government advisers). However, I’m not sure that proper scientists would agree that the caffeine is a major problem. On the other hand, blood glucose levels and/or particular additives or nutrients might be a different matter.

I work quite closely with Professor Marcus Grace who, as well as tutoring on the Secondary Science PGCE at Southampton, is one of the significant figures involved in the LifeLab project. I really ought to get round to asking him about this – there is so much research expertise in the School of Education Studies that I need to work on tapping into! When I get round to doing that I’ll update this post; meanwhile, what evidence is Hattie basing his d=0.12 on?

There is one meta-analysis, Kavale and Forness (1983). I can only access the abstract but it’s clear that despite the missing clause in Hattie’s summary, the meaning that I had assumed he intended does match this meta-analysis. Equally it is clear that this is very specifically looking at ADHD and not children without this diagnosis. Essentially this paper states that the studies analysed do not provide evidence to support the earlier hypothesis that dietary changes could have a positive effect on ADHD symptoms. I’m guessing that the outcome measure was not academic achievement, but more likely some behavioural measure, which reminds me again that Hattie seems rather blasé about what his meta-analyses are measuring.

A quick trawl for more recent work suggests to me that things may have moved on, with this Schab and Trinh (2004) meta-analysis, which deals only with double-blind, placebo-controlled trials, finding d=0.21–0.28. Again there is the issue of whether or not Hattie’s 0.40 average effect size is the correct bar for comparison. With double-blind, placebo-controlled trials it shouldn’t be: the methodology ought to make the effect of simply receiving an intervention zero, and these authors are clear that their meta-analysis does show that artificial food colours affect ADHD. Having said that, when the trials were separated into groups according to who was reporting on the effects, teachers couldn’t detect any difference in behaviour but parents could. That’s not parents’ wishful thinking, because of the double-blind; it might have rather more to do with the difficulty kids have in shifting their teachers’ expectations. Stevens et al. (2011) is a review of the literature, including both the meta-analyses mentioned above. They reach a similar conclusion, but pick up the suggestion in Schab and Trinh that the effect might be restricted to only a proportion of children with an ADHD diagnosis (10%–33%). However, the Bateman et al. (2004) study on the Isle of Wight involving Southampton academics, and a further study (and a smaller one from the USA cited on p.286 of Stevens et al.), suggest quite strongly that artificial food colourings affect all children (well – young ones at least).
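For anyone less familiar with the effect sizes being compared throughout this series, Cohen’s d is just the difference between two group means divided by their pooled standard deviation. A minimal sketch, with invented scores (not data from any of the studies above), purely to show the arithmetic:

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Made-up test scores for two small groups, for illustration only:
control_group = [50, 48, 52, 49, 51]
intervention_group = [51, 49, 52, 50, 53]
print(round(cohens_d(intervention_group, control_group), 2))  # → 0.63
```

The point of dividing by the spread is that a one-mark difference counts for a lot when scores barely vary, and for very little when they vary widely – which is also why effect sizes from very differently designed studies (placebo-controlled trials versus correlational classroom studies) are not automatically on the same scale.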

Since writing this post I’ve come across this Harvard Mental Health Letter reviewing the relationship between diet and ADHD. It includes the findings from the Schab and Trinh (2004) meta-analysis but also some other research. The conclusions are similar – that some artificial food colourings do seem to have an effect on at least a proportion of children, which probably means that reducing exposure is a good thing. It also suggests that increasing Omega-3 essential fatty acids and micronutrients might just help too. A final point is that the research on the effect of sugar on behaviour suggests there is no link (but of course the link with obesity and Type II diabetes is only too obvious). But the strongest message is that the usual recommendations for a healthy diet apply to all children.

Anyway, this isn’t something for day-to-day teaching. There are all sorts of issues around ADHD (like whether it is a useful diagnosis, whether drug treatments are a good idea, and so on), and even if all children are susceptible to artificial food colourings, it is something teachers might helpfully be aware of rather than something that will affect what we do in our classrooms. I again find myself wishing that Visible Learning was narrower in its breadth and deeper in its depth, but it’s been an interesting evening educating myself. Next, I’m going to jump to Time on Task (Effect size = 0.38) because I want to look at this in relation to a paper by Professor Daniel Muijs (another big hitter from the Southampton School of Education Studies) that suggests Time on Task is one of the most important influences on achievement.

Index to series on Hattie’s Visible Learning

This post is just a quick reference index to my series of posts looking at the influences on attainment described in Hattie (2009) Visible Learning: a synthesis of more than 800 meta-analyses relating to achievement. Abingdon: Routledge.

The interpretation of Hattie’s work is problematical because the meaning of the different influences on achievement isn’t always clear. Further context here. There are also some significant issues with Hattie’s methodology but, despite these shortcomings, Visible Learning remains the boldest attempt to draw together all areas of education research.

The list below shows my posts in the order they appear in Visible Learning. I have only looked at some influences, skipping those that I thought to be self-explanatory, outside the influence of teachers, or inconsequential.


Piagetian programs (d=1.28)

Self-reported grades (d=1.44)

Concentration, persistence, and engagement (d=0.48)

Diet (d=0.12)


The intention is to have the index presented in two forms. The first, above, follows the order in which the influences appear in the book. The second will be taken directly from the list of influences in rank order of effect size in Appendix B of Visible Learning pp.297–300 (but I haven’t copied that out yet – waiting for an evening when my brain is too fried for anything less mechanical!)

Concentration, persistence and engagement (Effect size = 0.48)

This post is part of a series looking at the influences on attainment described in Hattie J. (2009) Visible Learning: a synthesis of more than 800 meta-analyses relating to achievement. Abingdon: Routledge. The interpretation of Hattie’s work is problematical because the meaning of the different influences on achievement isn’t always clear. Further context here.

Following on from the big effect sizes for some of the influences listed under the heading of Background, like Piagetian programs and Self-reported grades, there is a series of low-to-medium effect sizes under the heading Attitudes and Dispositions. Mostly I am ignoring these because correlations between achievement and things like personality traits and self-concept don’t give a teacher much to work on. All the recent focus on this, stemming from Duckworth’s Grit and Dweck’s Mindset, only matters to teachers if there is some good evidence that we can shift children along these scales, and that’s definitely not what most of these categories are about.

However, I thought it worth a closer look at Concentration, persistence and engagement (Effect size = 0.48) because this sounds like it is really very close to that Grit and Mindset work. Now, Grit is a personality trait – psychology rather than education. But Mindset is definitely in the education realm, with a proprietary programme and lots of related initiatives. The research on attempts to shift children’s mindsets looks quite promising (this is a good summary) but my impression is that quite a bit of it is not truly independent. That hasn’t prevented its (quite understandable) enthusiastic adoption by some schools, though, so it will be interesting to see the outcome of the EEF-funded project being run in Portsmouth in spring 2015.

Given that the research base for Mindset dates from 1990, you might expect it to feature in this section on Concentration, persistence and engagement, but I’m not aware of any meta-analysis, so it wouldn’t feature in Visible Learning. However, it seems so close to the title of this section that, within the kind of broad-brush approach Visible Learning takes, the effect size of 0.48 might tell us something about the likely impact of becoming a growth-mindset school.

Unfortunately, the meta-analyses referenced by Hattie don’t really tell us very much about the potential effect of increasing concentration, persistence, or engagement.

  • Kumar (1991) looked at the correlation between different teaching approaches (in science) and student engagement. Student engagement might be a good thing but, as Hattie points out in his commentary, “we should not make the mistake…of thinking that because students look engaged…they are…achieving” – and Kumar has nothing to say about achievement in this meta-analysis. Also, although there was quite a big range of correlations (0.35 to 0.73) across the different teaching approaches, the differences were not statistically significant at a reasonable level – the perennial problem of typical sample sizes in education research.
  • Datta and Narayanan (1989) looked at the relationship between concentration and performance, but in work settings; maybe that’s transferable, but maybe not.
  • Feltz and Landers (1983) looked at mental visualisation of motor tasks so, apart from subjects like PE, dance, and possibly D&T, I cannot see the relevance to teaching.
  • Cooper and Dorr (1995) looked at whether there was a difference between ethnic groups, which again doesn’t tell us anything about how we might improve achievement, particularly since little difference was found.
  • There is one more meta-analysis in the synthesis, although it doesn’t feature in Hattie’s commentary: Mikolashek (2004), a meta-analysis of factors affecting the resilience – I think actually normal academic success as a proxy for resilience – of at-risk students. The abstract seems to suggest that internal and family factors are significant but, again, there is no measurement of the effect of anything a teacher might do to enhance these.

Looking at the overall picture here, I think Hattie has pushed the envelope too far. One of the criticisms of meta-analysis is the danger of amalgamating studies that were actually looking at different things, e.g. oral feedback, written feedback, peer feedback. I think it’s fine to lump all feedback together if it is measured by roughly the same outcome, provided this limitation is made clear. The next stage might be to unpick whether all forms of feedback are equally effective, but unless the initial analysis already shows that one form sits at something like 0.20, another at 0.60, and the third at 1.00 (average effect size = 0.60), knowing that feedback is worth a more detailed look seems helpful. However, for this influence I think the ‘comparison of apples and oranges’ charge is justified criticism. The five meta-analyses are all looking at different things, in different contexts, and with several different outcome measures. I cannot see the value in averaging the effect sizes, and am starting to wonder how much more of this I’m going to find as I continue to work through the book. Diet interventions (Effect size = 0.12) is next – which dietary changes, I wonder?
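The arithmetic behind that averaging is, of course, trivial – which is rather the point. A quick sketch (the effect sizes here are invented for illustration, not taken from the five meta-analyses above) showing how a single averaged figure flattens out heterogeneity:

```python
import statistics

# Hypothetical effect sizes from five meta-analyses that measured
# quite different things (engagement, workplace concentration,
# motor imagery, group differences...):
effect_sizes = [0.73, 0.35, 0.48, 0.12, 0.60]

mean_d = statistics.mean(effect_sizes)
spread = statistics.stdev(effect_sizes)

print(f"average d = {mean_d:.2f}, sd = {spread:.2f}")  # → average d = 0.46, sd = 0.23
# The averaged figure alone gives no hint of how widely the
# individual results vary, nor whether the underlying studies
# were measuring comparable outcomes at all.
```

The average comes out looking like a respectable medium effect, while the spread (and, more importantly, the mismatch in what was being measured) disappears from view.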