A somewhat careless comment on Andrew Smith’s blog (which he responded to with a clear demonstration that he knows more about Hattie’s work than I do) has led me back to the original Visible Learning: a synthesis of over 800 meta-analyses relating to achievement. There are a whole bunch of issues with Hattie’s methodology, which are probably fairly well known by now (e.g. David Weston’s ResearchEd 2013 talk, and Learning Spy’s post, which relates to Ollie Orange’s). I’ve tried to summarise these for my own clarity. If you read the introduction to Visible Learning, or churn your way through Visible Learning for Teachers, it’s pretty clear that Hattie is conscious of at least some of the limitations of his work (maybe not some of the statistical issues, though). In some ways Andrew is bucking the trend in education at the moment: a few years ago Hattie was definitely the most prominent researcher in the field, but his star has undoubtedly waned. For a while there he really was The Messiah, but that wasn’t his fault; it was more a consequence of being responsible for some important evidence at just the moment that the deep-water swell of evidence-based practice felt bottom and started to build. At first surfers flocked to, and eulogised, Hattie’s miraculous surf break, but when it turned out not to be as smooth, glassy and regular as they had hoped, and other surf spots were discovered, it almost inevitably fell from favour somewhat.
As Hattie himself points out, any attempt to just look at the headline effect sizes and conclude “this works, that doesn’t” is not only misinterpreting his work, but missing the point. His approach is to take the huge mass of evidence and use it to draw out themes that really do tell us something about how to teach more effectively, but always to appreciate that this must be in the context of our own teaching, our own students, and our own settings.
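For anyone unfamiliar with the headline numbers themselves: Hattie’s effect sizes are, broadly speaking, standardised mean differences, i.e. the gap between two group means expressed in standard-deviation units. As a rough illustration only (the function and the test scores below are hypothetical, not taken from any of the meta-analyses), a Cohen’s-d-style calculation looks like this:

```python
# A minimal sketch of a standardised mean difference ("Cohen's d"),
# the kind of statistic behind effect sizes like those in Visible
# Learning. Illustrative only -- not Hattie's own computation.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    pooled_sd = (((n1 - 1) * stdev(treatment) ** 2 +
                  (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores, purely for illustration
intervention = [72, 75, 78, 80, 83, 85]
comparison = [65, 68, 70, 74, 76, 79]
print(round(cohens_d(intervention, comparison), 2))
```

On this scale Hattie treats roughly d = 0.40 as his “hinge point” for an influence worth attending to, which is part of why the bare numbers invite exactly the “this works, that doesn’t” reading he warns against.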
However, I think there is another barrier to making effective use of Hattie’s work. I think I’ve been aware of it for a while, but the recent brief exchange with Andrew Smith has highlighted it for me. Interpretation of Hattie’s work is problematic because the meaning of the different influences on achievement isn’t clear. I first encountered Hattie’s work through the Head of History at the college where I worked. He was a fantastic teacher and had been significantly influenced by Geoff Petty’s book Evidence Based Teaching, which in turn was heavily influenced by Visible Learning. I think Petty made a pretty decent stab at interpreting Hattie’s work, but I also think he was influenced by some of his own ideas about effective teaching (Teaching Today pre-dates Visible Learning, and I think it shows that he didn’t take on board all the evidence from Visible Learning when he read it), and there are points where he freely admits to basically taking an educated guess at what some of Hattie’s influences actually refer to.
So having gone on to read quite a lot online about Hattie’s work, and continuing to encounter the same issue, I keenly started out on Visible Learning for Teachers, and was enormously disappointed with it. I was expecting non-technical clarification and additional detail about the meta-analyses; instead it is an attempt to leave all the detail behind and draw some conclusions about the implications for teachers. A worthy aim, but a good couple of hundred pages longer than necessary; it reminded me of Jane Eyre!
It wasn’t long after reading this that the methodological issues with Visible Learning started to be spoken of more prominently, and although I have continued to use the list of effect sizes as a kind of quick reference to support some ideas about effective teaching, I’ve more or less left it at that. So the video posted on Tom Sherrington’s blog over the summer blew me away somewhat: here was the clear, coherent message that was missing from Visible Learning for Teachers. Subsequently, and spurred on by my recent error, I’ve gone back to the original Visible Learning. I really see no reason why Hattie thought that teachers needed it interpreted for them; it’s not very technical, and the introductory and concluding chapters draw the threads together at least as well as anything in the Teachers’ version. That fundamental issue still remains, though: for at least some of the influences, the meaning is hazy. On the other hand, the references are clear, and working at a university I am lucky enough to have unobstructed access to many of them.
It’s therefore time to do some reading and sort out the nature of the influences that remain unclear to me. My plan is to take each influence in order from Visible Learning and do just enough to feel confident of its meaning. For most influences I hope this will just involve reading the relevant page or two from Visible Learning (a lot are very clear), but for some I expect to need to go back to the most prominent original meta-analysis to see what it was actually about. Hattie starts with the section Contributions from the Student: Background. Prior achievement (effect size = 0.67) is clear enough, but Piagetian programs (effect size = 1.28) is not (I had assumed this meant things like CASE and CAME, which have been shown to be very effective, so that shows how much I need to do this reading). I can’t make much sense of Hattie’s paragraph on this so, here we go. I’ll let you know how I get on.