Concentration, persistence and engagement (Effect size = 0.48)

This post is part of a series looking at the influences on attainment described in Hattie J. (2009) Visible Learning: a synthesis of more than 800 meta-analyses relating to achievement. Abingdon: Routledge. Interpreting Hattie's work is problematic because the meaning of the different influences on achievement isn't always clear. Further context here.

Following on from the big effect sizes for some of the influences listed under the heading of Background, like Piagetian programs and Self-reported grades, there are a series of low to medium effect sizes under the heading Attitudes and Dispositions. Mostly I am ignoring these because correlations between achievement and things like personality traits and self-concept don't give a teacher much to work on. All the recent focus on this area, stemming from Duckworth's Grit and Dweck's Mindset, only matters to teachers if there is good evidence that we can shift children along these scales, and that's definitely not what most of these categories are about.

However, I thought Concentration, persistence and engagement (Effect size = 0.48) worth a closer look because it sounds really very close to that Grit and Mindset work. Now Grit is a personality trait – psychology rather than education. But Mindset is definitely in the education realm, with a proprietary programme and lots of related initiatives. The research on attempts to shift children's mindsets looks quite promising (this is a good summary), but my impression is that quite a bit of it is not truly independent. That hasn't prevented its quite understandable, enthusiastic adoption by some schools, though, so it will be interesting to see the outcome of the EEF-funded project being run in Portsmouth in spring 2015.

Given that the research base for Mindset dates from 1990, you might expect it to feature in this section on Concentration, persistence and engagement, but I'm not aware of any meta-analysis, so for that reason it wouldn't appear in Visible Learning. However, it seems so close to the title of this section that, within the kind of broad-brush approach Visible Learning takes, the effect size of 0.48 might tell us something about the likely impact of becoming a growth-mindset school.

Unfortunately, the meta-analyses referenced by Hattie don't really tell us very much about the potential effect of increasing concentration, persistence, or engagement. Kumar (1991) looked at the correlation between different teaching approaches (in science) and student engagement. Now student engagement might be a good thing but, as Hattie points out in his commentary, "we should not make the mistake…of thinking that because students look engaged…they are…achieving" – and Kumar has nothing to say about achievement in this meta-analysis. Also, although there was quite a big range of correlations (0.35 to 0.73) across the different teaching approaches, the probability of differences this large arising by chance is too high to claim statistical significance at a reasonable level – the perennial problem of typical sample sizes in education research. Datta and Narayanan (1989) were looking at the relationship between concentration and performance, but in work settings; maybe that's transferable, but maybe not. Equally, Feltz and Landers (1983) were looking at mental visualisation of motor tasks so, apart from subjects like PE, dance, and possibly D&T, I cannot see the relevance to teaching. Finally, Cooper and Dorr (1995) looked at whether there was a difference between ethnic groups, which again doesn't tell us anything about how we might improve achievement, particularly since little difference was found. There is one more meta-analysis in the synthesis, although it doesn't feature in Hattie's commentary: Mikolashek (2004). This was a meta-analysis of factors affecting the resilience – I think actually normal academic success as a proxy for resilience – of at-risk students. The abstract seems to suggest that internal and family factors are significant but, again, there is no measurement of the effect of anything a teacher might do to enhance these.
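For the statistically minded, it is easy to see why even a spread of correlations as wide as 0.35 to 0.73 can fail to reach significance when samples are small. Here is a minimal sketch using the standard Fisher z-transform test for a difference between two independent correlations; the sample sizes are purely illustrative assumptions, not figures from Kumar's studies.

```python
import math

def fisher_z_stat(r1, n1, r2, n2):
    """Test statistic for the difference between two independent
    correlations, via the Fisher z-transform. Values above ~1.96
    are significant at the 5% level (two-tailed)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return abs(z1 - z2) / se

# Hypothetical sample sizes, for illustration only:
small = fisher_z_stat(0.35, 20, 0.73, 20)    # ~1.64: not significant
large = fisher_z_stat(0.35, 100, 0.73, 100)  # ~3.92: clearly significant
```

With twenty students per study the apparently large gap between 0.35 and 0.73 cannot be distinguished from chance; with a hundred per study it comfortably can – which is exactly the sample-size problem education research keeps running into.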

Looking at the overall picture here, I think Hattie has pushed the envelope too far. One of the criticisms of meta-analysis is the danger of amalgamating studies that were actually looking at different things, e.g. oral feedback, written feedback, peer feedback. I think it's fine to lump all feedback together if it's measured by roughly the same outcome, provided this limitation is made clear. The next stage might be to unpick whether all forms of feedback are equally effective, but unless it's already clear during the initial analysis that one form is something like 0.20, another 0.60, and the third 1.00 (average effect size = 0.60), knowing that feedback is worth a more detailed look seems helpful. However, for this influence I think the 'comparison of apples and oranges' charge is justified criticism. The five meta-analyses are all looking at different things, in different contexts, and with several different outcome measures. I cannot see the value in averaging the effect sizes, and I am starting to wonder how much more of this I'm going to find as I continue to work through the book. Diet interventions (Effect size = 0.12) is next – which dietary changes, I wonder?
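As an aside, the arithmetic behind my hypothetical feedback example is trivial to reproduce, and it shows how a single averaged figure can mask exactly the spread that matters; the three effect sizes below are invented for illustration, not taken from any study.

```python
from statistics import mean, stdev

# Invented effect sizes for three hypothetical forms of feedback
effects = [0.20, 0.60, 1.00]

# The single figure a synthesis would report, and the
# heterogeneity that figure hides
print(f"average effect size: {mean(effects):.2f}")
print(f"spread (std dev):    {stdev(effects):.2f}")
```

An average of 0.60 with a standard deviation of 0.40 is a very different finding from three studies all clustered around 0.60 – which is the whole apples-and-oranges objection in two numbers.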

Tops and Bottoms

A fair bit of my career has been spent in sixth form colleges, and I started out in selective schools. In both cases, once the kids were in, even where there was setting, I never felt that the students were defined by their current level of performance. By contrast, I spent a few days recently in a well-regarded local comprehensive school and was a little shocked by just how ingrained notions of fixed ability seemed to be in the culture of the school. "These are the top set that aren't doing triple science so they're not a bad class, really" was the low-down on one class I observed – it wasn't just a rather sloppy assessment of their current academic level: it was a value judgement.

Now, I'm not going to get on my high horse here, because if you've already done a significant amount of filtering on GCSE grades, the 11+, or – in my present incarnation – the acquisition of a degree, then you've no right to lecture those who haven't on any crudeness you might perceive in their approach to differentiation. On the other hand, this is a school that my own son might attend one day, and I would be horrified to hear him consigned to a particular caste in this way. I'm not against setting; my impression is that there is a fair accumulation of evidence that setting confers some benefits on those in the top sets to the mild detriment of everyone else, but the onus is on anyone promoting a fully mixed-ability approach to show how ordinary teachers, with an ordinary cohort, can make it work for all pupils without a significant increase in workload – I just don't think we've got that good at differentiation yet. Meanwhile, as I hear my trainees, only a few weeks into their first placement, referring to their classes in the same way, my question is: "accepting that classes are differentiated on academic performance, how do I work with my trainees to ensure that they don't fall into this labelling trap?"

I think my own answer to this is the same as the approach I took with my A-Level students – a bit of neuroscience, a bit of Dweck, a bit of Pygmalion in the Classroom (yes, I'm aware of the shortcomings of the original research), a few examples of students who transformed their academic performance, and a relentless effort to focus feedback on the work and not on the person. Next year, I think this needs to be given higher priority and to appear earlier in the PGCE course, and I need to be quicker to question anyone I hear labelling in this way.