Which Knowledge; Which Skills?

There has been a hefty onslaught recently against the deliberate teaching of skills by those in favour of a knowledge-based curriculum. Knowledge is essential, and it seems to me unassailable that teaching only skills, to the exclusion of knowledge, is a mistake – but that’s not something I’ve witnessed in my career, a point made by several people here, here and here. If you’ve been following the trend, or even if you haven’t, plenty has been written that covers the basic points of the debate. However, the argument that knowledge should be favoured to the exclusion of skills seems to be gaining momentum, and I think this is a case of throwing the baby out with the bathwater. Either that, or it’s the result of an unjustified alignment of teaching knowledge with one pedagogy and teaching skills with another. A recent post from Joe Kirby has galvanised me to join the fray.

The first issue for me is the question of what is meant, both by ‘knowledge’ and by ‘skills’. If you want a clear dichotomy then knowledge can be seen as factual information – e.g. the universe is 13.8 billion years old; universal indicator is green in a neutral solution; the function of the lungs is to remove CO2 from, and add O2 to, the blood – and ‘skills’ can be used to refer to non-cognitive skills such as the ability to work effectively in a group, to evaluate, or to present.

Start with either of these definitions and the arguments come easily. If you want to denigrate a knowledge-based curriculum, you can argue that it is all about making children learn large numbers of facts, with the implication that they will end up with masses of knowledge with which they can do little. If it’s a skills-based curriculum that you want to shoot down with flaming arrows, then it’s not difficult to show that trying to teach these skills directly – or using a trivial context which pupils already know all about, to avoid the need for new knowledge – is going to be ineffective, widen the gap, dumb down, etc.

Alternatively, there is a more subtle way to look at this. The accumulation of facts can be supported because, in the hands of any vaguely competent teacher, facts are not unconnected, and with masses of knowledge pupils can do all sorts of sophisticated things: make an evidence-based argument for the Big Bang theory, describe the general features of acid-base reactions and predict the salts produced, or identify the similarities and differences between the respiratory systems of a range of animals. And teaching skills can be defended because, firstly, within each curriculum area there are lots of skills that are crucial – e.g. how to lay out a results table for a new investigation, how to manipulate a burette to get a titration accurate to 0.5 ml, how to evaluate conflicting evidence to draw an appropriate conclusion – and, secondly, there is evidence that meta-cognitive programmes have a high effect size, so whilst trying to teach critical-thinking skills does not seem to be very effective, teaching study skills can be.

And, for me, somewhere in the middle it becomes increasingly hard to decide the extent to which a skill is actually an accumulation of integrated facts and recall of experience of similar examples i.e. knowledge. The champions of knowledge-based teaching would, I think, mostly argue exactly this point – that to learn a skill you have to accumulate knowledge. To go back to my titration example, the skill starts with facts (it is possible to be accurate to +/- 0.5 ml, your eye must be level with the numbers when reading the scale), followed by learning what this looks like, then being able to recall the feel of more or less stiff taps, and knowing how to adjust them, and that they leak if they are loosened too far, and finally repeated practice drives into the long-term memory knowledge of all the little subtle things about when to go fast and when slow and what different indicators look like in different solutions…

I don’t think it matters much whether this is a skill or an accumulation of knowledge. Old Andrew makes a distinction between the sort of skills I’m talking about and generic skills which might be taught in a context-free way, but even here I think teachers would be making a choice about the context. It may be a mistake to teach critical-thinking or essay-writing skills in a knowledge-lite context, but you can’t teach science, or history, or English without developing these skills, in context, in your students, can you?

So in the end, I’m with LeadingLearner – in a different jungle. What matters to me is deciding how much of the curriculum is going to be recall knowledge (and which knowledge it should be), how much is going to be about being able to do sophisticated things with this knowledge (and what are the most important sophisticated things), how much is going to require pupils to apply general principles in new contexts (like drawing up that results table), and how much (if any) is going to be about practical skills. I would like to see all of the above in the new science GCSE – when we finally get it. And then we should be starting the same debate for Beyond 2020, not a ‘knowledge versus skills’ debate but a ‘which knowledge; which skills’ debate.


Grade 2 or bust! Perverse incentives

Two things the DfE has got right – there may be others – are changes to the school accountability system. The Wolf report quite rightly identified the perverse incentives in accountability measures that led to schools pushing pupils into BTECs and other vocational qualifications. And there is no doubt that the 5 A*-C measures and floor targets focused too great a proportion of a school’s attention on the C/D borderline. Now, I’m not suggesting all is rosy in this particular garden – some pupils benefit from doing high-quality vocational qualifications; I’m not convinced that the EBacc is anything other than Gove’s tendency to assume that what worked for him is best for everyone else as well; the changes were handled clumsily at times; and it’s not clear whether long-term planning is anything more than an oxymoron under the current regime – but perverse the previous incentives were, and Progress 8 and the new floor standards just have to be an improvement.

So, this post is about a similar perverse incentive in ITT and the impact it is having on the quality of the NQT you may be working with this, or next, year. In ITT trainees are graded 1-4 on a very similar basis to experienced teachers. A trainee who has not met the Teachers’ Standards is graded 4 and would not be awarded QTS. A trainee who has just met the Teachers’ Standards would be graded 3, and those consistently teaching Good or Outstanding lessons (possibly a bit rough round the edges, but more or less on the same basis as experienced teachers) would be graded 2 or 1. There is a bit more to it than that, but the quality of teaching is (quite rightly) key. So far, so good.

But how is an ITT provider judged? Well, as an ITT provider, to get a Grade 1 or Grade 2 the inspection handbook states that “all trainees awarded QTS exceed the minimum level of practice expected of teachers as defined in the Teachers’ Standards”. That word ‘exceed’ is critical; in other words, if any trainee gets a Grade 3 then the ITT provider Requires Improvement. Is it just me or is this nuts? Of course, the better the training, the greater the likelihood of trainees being graded 2 or 1, but if they’re a 3 then they’re a 3, and an incentive like this just means providers have to find some way of them being a ‘2’. We get judged on completion rates as well, so while the genuine 4s get weeded out, we can’t afford to take the same approach to a 3. In any case, a trainee at Grade 3 has met the Teachers’ Standards. If they’re not ready to take their own classes then the Teachers’ Standards need tightening up. Threatening ITT providers with a big stick is just papering over the cracks. Wilshaw has been taking ITT providers to task over the quality of some NQTs recently, but his own organisation is pushing us to overgrade trainees.
If these trainees are ready for their NQT year, with the expectation that they will improve as they go, then let us say so honestly, so everyone knows where they stand; if they’re not ready, then let’s be clear about that too and have a mechanism for dealing with the problem. At the moment, this perverse incentive just sweeps the whole thing under the carpet, and that cannot be good for the children in our schools.

Graded Lesson Observations: Defibrillation or a Stake through the Heart?

An observer enters your classroom. Is this person your HoD, the assistant head with responsibility for T&L, an Ofsted inspector, or a demon who has occupied a corpse and is coming to suck your blood? A fair number of commentators have recently suggested the latter, and have been sharpening words, and presumably a variety of sticks, with a view to dispatching said vampires to the demon dimensions. Like Rupert Giles, Robert Coe from Durham University’s CEM (possibly a pseudonym for the Watchers’ Council) has been quietly dispensing the wisdom of the ancients (well, the academics), guiding the Slayers in their quest. But is the graded lesson observation really the personification of evil, or does it have a soul worth saving?

Wilshaw’s Westminster Education Forum speech on 7th November 2013 included the line: “Which ivory towered academic, for example, recently suggested that lesson observation was a waste of time – Goodness me!” Does Wilshaw need to pay more attention to the ivory-towered ones? Is his organisation trying to perform a task as fundamentally uncertain as measuring the combined momentum and position of a sub-atomic particle; is it engaged in a legitimate assessment technique but doing it in a slightly crap way; or is the Ofsted Christmas party actually a masquerade ball of orgiastic hedonism where innocent teachers are dragged to be ripped asunder in a feeding frenzy of unimaginable gore?

In ITT, observations are a big part of how we assess the progress of trainees. It doesn’t feel as though the judgements we make are unreliable; over the course of a number of observations, we would feel confident that an accurate picture of a trainee’s teaching was being drawn. Are we deluding ourselves when we reflect on this practice? Are we even capable of reflection…

If you pick up Robert Coe’s blog entry on this you’ll see that he links to two pieces of research. The first is the massive (and massively well-funded – thanks, Bill & Melinda) MET project. Now, I make no claims to either the academic clout of Robert Coe or to expertise in this area, but reading the MET policy and practice brief I can see where Coe’s figures come from, but not his conclusion that observations are unreliable to the point of worthlessness as a measure of teacher performance. The MET project seems to me to be making suggestions about how to improve the reliability of observations, not concluding that they are good only for a staking. Of course, like Wilshaw, anyone involved in a project called “Measures of Effective Teaching” may be somewhat biased towards the idea that it is actually possible to measure such a thing – and continued research funding may even depend on that outcome – but the MET project is looking at a range of ways to measure teacher effectiveness, and I can’t see why, if they were looking at data that suggested observations were a waste of time, they wouldn’t say so and recommend a system based on other measurement methods.

Strong, Gargani & Hacifazlioğlu (2011) is the other piece of research. It’s behind a paywall, but for good papers there’s often an academic somewhere who has breached their institution’s copyright rules and posted it somewhere helpful. In interpreting the results, it’s important to appreciate that, of the three experiments, two involved judging teachers on the basis of two-minute clips of whole-class teaching (chosen to avoid any behaviour-management incidents!). The third experiment did involve observations of videos of whole lessons, but used a complex observational protocol – the CLASS tool – that seems to weight student engagement and various other, dare I say it, constructivist ideals quite strongly. Coe is right to state that the ability of observers to pick good teachers in these experiments was in the same league as Buffy’s ability to pick good boyfriends, but he leaves out a crucial point which I think I’d better quote:

This analysis showed that a small subset of items produced scores that accurately identified teachers as either above or below average. All of these items were from the instructional domain. They included clearly expressing the lesson objective, integrating students’ prior knowledge, using opportunities to go beyond the current lesson, using more than one delivery mechanism or modality, using multiple examples, giving feedback about process, and asking how and why questions.

The final point made in the paper is that “This… has motivated us to undertake development of an observational measure that can predict teacher effectiveness.”

So I’m not sure that Coe has it right on this evidence. Yes, we all (ITT, Ofsted, and school leaders) need to recognise that sloppy observation procedure and training will lead to meaningless judgements. Yes, using graded observations for staff development may be a bit like burning witches to improve their chances at the last judgement. Yes, value-added data may be a better, or even the best, method for judging the effectiveness of a teacher and/or their teaching. But, in ITT where value-added data does not exist, I think my colleagues and I really ought to be bringing some of the academic clout of our Faculty to bear on using research like this to develop a model for lesson observation that delivers reliable outcomes. I’ll let you know how we get on, and give you a shout if we need any stake holders.

Teach like a Champion?

Top of my Christmas list this year was Doug Lemov’s Teach like a Champion. I think the Initial Teacher Training we run is good in many ways, but the extent to which trainees get specific, concrete advice on ways to improve their teaching depends very heavily on the skills of the mentor in school. Where the mentors are excellent, the advice and target-setting are really specific and the trainee can try new techniques out immediately. But a lot of mentors don’t manage this, even though they are very supportive generally. I wondered whether ideas from this book could fill some of the gap. Having read it, I think it’s a great book with plenty to say to those involved in ITT in the UK, but maybe it’s not such a good choice for a trainee’s reading list. I’ll come back to what it does have to offer in a minute: caveats first.

Lemov draws on a number of outstanding teachers for examples, but I get the impression the total number is not actually that high. In effect he has spent time in the most successful classrooms within a linked chain of charter schools serving a particular demographic, and has assumed that anything he sees replicated across these most successful classrooms must be a factor in their success. He makes no claim that his observation system is particularly systematic or rigorous, but there is no doubt that he is convinced this is a list of the techniques used by outstanding teachers. A coach and horses could be driven through this methodology, except that there is something fundamentally sound about the basic premise – as long as the reader appreciates that some of the techniques may be much more effective than others and some may even be counterproductive; that what works in these classrooms may not work in all classrooms; and that there is a chance that Lemov has missed something deeper and more elusive that makes these techniques work for the teachers observed but fall flat if applied by others without this deeper something in place. In particular, I notice that the behaviour-management techniques are almost entirely devoted to keeping classes on track where behaviour is basically okay already. Nothing about establishing class rules, really. Nothing about what to do when a bout of fake coughing starts round the room, or deliberate, invisible tapping under the desks. Nothing about how to respond to the pupil who tells you to fuck off when asked to move. Not even an in-depth description of the full procedure to follow when you first ask for silence, wait for it, don’t get it, and still have half the class chatting rather than one or two individuals.
I have a suspicion that these things either don’t happen to the teachers Lemov was observing, or they do and were dealt with at the start of term, before Lemov’s observations. I can’t believe the schools in question don’t experience these things at all; in fact I should think these are the sort of schools that need metal detectors and security guards on the doors just to keep guns off the premises. This thought – that Lemov may be missing something fundamental – worries me quite a bit. But, although that’s a pretty hefty disclaimer, I think we need more of this kind of thinking in the UK ITT system. Those who still think that university tutors spend their entire time filling the heads of trainees with theoretical flights of fancy and a selection of bogus teaching techniques born of the summer of love are way off the mark. I can’t vouch for all providers, but we provide a mix of subject knowledge focused on anticipating misconceptions; basic teaching skills like how to plan a lesson around your learning objectives, how to check progress as you go along, questioning technique, and so on; knowledge about whole-school issues like SEN; and lots of time (two-thirds of the course) in school observing, practising, and improving. Almost all the whole-school content, and a series of sessions on behaviour management, AfL, pace, and other tricky areas, are delivered by outstanding practitioners from local schools, and I think nearly everything we do is focused on helping trainees make the most of their placements. What we don’t provide, however, are regular, repeated opportunities to identify and practise individual techniques. And I think we should! It’s to this aspect of our current practice that Lemov speaks, and actually this was my primary motivation for reading the book. What the book does exceptionally well is isolate individual techniques.
“No Opt Out” isn’t just presented as one of a dozen elements of effective questioning; it’s described as a single entity, to be understood, and practised, on its own, until mastered. I think this may be the thing that would make most difference to my trainees, particularly those to whom teaching doesn’t come so naturally. In the classroom there is just too much going on for an inexperienced teacher to get much focus on specific techniques. In fact it’s a testament to the quality of the trainees I work with that they manage to do so as effectively as they do. What if I could remove some of that pressure? What if I could give them the chance to make the classroom a place for stress-testing rather than tentative first steps? Maybe that could lead to something really exciting. And then what if we could meld Lemov’s instinct for addressing the issue of ‘What Works’ as a blow-by-blow account of action at the coal face with Hattie’s academic rigour, and something like David Weston’s teasing out of nuance in the data? What if Schools of Education in universities were somehow turned on their head, so that at least some of the research was driven by the need to inform ITT? Then we might really be motoring on the evidence-based highway.