I always feel a little frisson of excitement when I’m on a train pulling into Waterloo. I’m lucky to live close enough to get to the kind of national and international events that come with it being the capital, but far enough away for it to be an occasional treat (although I know there are some crazy fools who do it as a daily commute). At the moment, there seems to be another stunning, giant building on the skyline every time I visit, and I’ve never got over the strange feeling that comes with seeing the Palace of Westminster in the flesh – well, in the stone – rather than as a backdrop to the TV news. Having laid out my credentials as a country bumpkin, I bet it wasn’t just me who was excited yesterday, as teachers (and the rest of us) converged on South Hampstead High School for ResearchEd 2015. Despite the cheeky jokes from Helen Pike and Tom Bennett about ‘movements’, ResearchEd is doing something very exciting. We’re still a long way from properly integrating research and teaching, but we’re heading in the right direction, and that’s something new.
It’s always tremendously difficult to decide what not to go to at these sorts of events, but you have to make a choice, and what came out of my choices was an idea that’s probably already obvious but hadn’t occurred to me before. It feels like the resolution to something that has been bothering me for a while – a joining of dots. I’ve written a few things before about the relationship between large-scale research and classroom practice, particularly following Rob Coe’s session at Wellington, but also this on my re-reading of Hattie, and a comment on research scale issues. At ResearchEd, I first went to Susie Wilson’s session on getting teachers involved in deep learning about their own work, then Becky Allen’s session on how teachers can get involved in large-scale research, then in the afternoon Beth Greville-Giddings’ journal club and the IoP session on physics education research, finishing off with Rob Coe, who made some excellent points about research quality and usefulness. Maybe just because this covered all points on the spectrum, from big RCTs to individual classrooms, it felt like I ended up looking at the problem from a new perspective.
I could probably just adapt Becky’s flowchart (if I’d taken a photo of it – doh!), which was really useful in showing how teachers could be involved in the front and back of large-scale research, but I’ll do my best to clarify my thinking in words.
The trouble with all education research, however high quality, is that I think it can only ever say something at the scale at which it was conducted. Small-scale action research (sometimes) tells you something about one bunch of kids, in one context, with one teacher, at one point in time. The findings might be useful, but even just changing the children might change the findings – and teachers change children more-or-less every year. Large-scale research (at its best) gives us some idea of whether or not a particular thing works across a range of settings, on average. It’s even potentially possible to drill down further from an ‘average’ outcome to be able to say things like ‘sizeable positive effect for most but doesn’t work if…’ But it’s not possible to drill down to be able to say whether or not it will work in a particular classroom with a particular teacher: the implementation problem, the problem resolutely demonstrated by AfL. This is what has been bothering me.
Yesterday I could see a resolution. Teachers could be involved in driving research questions and exploring how to implement and evaluate in pilot studies; education research teams could then conduct the large-scale work to generate the kind of overview and average effect size that the EEF have been producing; but the missing final step is for schools and individual teachers to take those findings, think hard about the implications for their own practice, decide whether to make changes, and then carefully evaluate anything they do as a result. I offer my apologies for stating the bleedin’ obvious, but I’m sometimes rather slow to catch on.
Sadly, this still isn’t really what the system incentivises, despite the emphasis these days on ‘impact’. That final implementation and evaluation stage needs a lot of researcher advice and practical support because it’s not (yet?) something that teachers are good at. I work with a lot of very, very capable graduates, but with the occasional exception, the quality of their M-Level PGCE evaluation of an aspect of their teaching is pretty pants. Completing a full Masters will take teachers another step forward, but we can’t be great teachers and great researchers – it’s hard enough being a great teacher!
However, this is exactly the sort of collaborative approach that ResearchEd is in the vanguard of. It’s what several people have been saying for a while. It’s what finally made sense to me yesterday, and it’s the key thought that I will take from ResearchEd 2015.