This is the fourth in a series of posts on the Festival of Education at Wellington College and the second post on Rob Coe’s talk. The first is here.
- #EducationFest No.1: Play up, play up, and play the game
- #EducationFest No.2: More root than trunk
- #EducationFest No.3: A research-based, constructivist view?
Moving on from the possible can of worms associated with the Danielson Framework, Rob’s session was really about how teachers can improve, and how research and evaluation have an important role to play in this process if hours and hours of wasted time are to be avoided. He is closely involved with the EEF Toolkit and suggested this was a good starting point for the question of what we should be doing to improve. However, I think he suggested an even more important question to ask once we think we have identified the thing we need to work on.
“Does focusing on these things lead to improvement?” It’s a critical point, isn’t it? A teacher might well feel, or be told, that their subject knowledge is weak, but there is a possibility they might put hours and hours of work into improving it, only to find the impact on their pupils is zero. It’s a wider question though. Currently the zeitgeist in the blogosphere is about retrieval practice, distributed practice, and interleaving. There is lots of good research from cognitive psychology to support these ideas, but what if we put hours and hours into re-writing SoWs only to find the impact on our pupils is zero? The EEF Toolkit, Hattie’s meta-analyses, and one or two other reviews point very strongly to a few things that have significant impact. Feedback is probably the best example, but if it were that simple then AfL would have had a much bigger impact on the effectiveness of teaching in the UK than has actually been the case.
I suspect the problem is that different teachers need different things, and different teachers implement the same idea in different ways. There were three teachers in my first physics department. The HoD was an Oxford graduate, by far the best physicist, and capable of brilliant teaching ideas, but taught everything by the seat of his pants, sometimes went over the heads of his pupils, and left all but the most capable feeling disoriented. The other teacher was the fiercest disciplinarian in the school, originally a chemistry specialist, and organised and pedantic to a fault; his pupils worked tremendously hard, produced work of a very high standard, and completed the course with immaculate notes, but often struggled to link their knowledge to solve problems when working independently. I was short on both subject knowledge and classroom experience, and my two biggest problems were keeping everyone on task and not completely cocking up the physics, but I had a pretty good feel for the problems pupils had in understanding the subject.

With the benefit of hindsight I would say we all needed to improve, but in different ways. Feedback may well have an effect size of 0.8, or 8 months or whatever, but it certainly wouldn’t have had that impact on my teaching at that time. And if we had tried AfL or some other feedback strategy, there’s every chance that we would each have done it differently. As Rob pointed out, despite all we know about learning, CPD still mostly consists of explaining at length to teachers what they should do and expecting them to understand and be able to do it. Even a typical behavioural intervention (+4 months) wouldn’t have helped me, as I was already using an assertive discipline strategy to moderate but not universal effect. What I needed was to do a lot of past papers, add some more variety to my teaching, and work out how to notice behavioural issues and nip them in the bud before they became disruptive.
Having cogitated on this for a week or so, I find myself going back to Ben Goldacre and the whole RCT thing. There are a whole bunch of issues with running RCTs in education that are less of a problem in medicine, but I think the biggest difference is that diagnosis in medicine is a lot more sophisticated than in education. There may have been many decades of evidence-based medicine, but I suspect it’s still pretty hard to know “what works” if the symptoms are “feeling unwell”. In education, when we talk about how to improve, we’re at the “feeling unwell” level of diagnosis. High quality research might well show that giving unwell patients Lemsip has an effect size of 0.8, but that doesn’t mean it’s the best treatment for leukaemia, cirrhosis of the liver, or someone throwing a sickie.
I don’t suppose Rob Coe intended me to head off on this particular tangent but it’s the mark of a great talk that it changes your thinking. Thanks Rob – best session of the festival, and the competition was pretty fierce.