Piagetian programs: effect size = 1.28

Hattie states that the one meta-analysis for this influence found a very high correlation between Piagetian stage and achievement (higher for maths, 0.73, than for reading, 0.40). Quite what is meant by this isn’t clear. I’m guessing that some sort of test was done to determine the Piagetian stage, and that the correlation is between this and achievement. Piaget’s original theory suggests that the stages are age-related, but later work has criticised this part of the theory – he did base his theories heavily on the development of just his own children – so presumably the research behind this meta-analysis was based on the idea that children make the breakthrough to a new stage at different ages, and that those who reach a stage earlier might achieve more highly. If I remember correctly, the CASE and CAME programmes (and Let’s Think! for primary) were designed to accelerate progress through the Piagetian stages – from the concrete to the formal-operational stage in the case of CASE and CAME – and there is some evidence that all these programmes have a significant, long-lasting influence on achievement, not only in science but spilling over into English, and several years later at that. Maybe these would count as Piagetian programmes.
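A quick aside on the arithmetic here, which is my own back-of-the-envelope reasoning rather than anything Hattie spells out: if the underlying meta-analysis reports correlations, then the headline effect size of 1.28 has presumably been converted from a correlation at some point. The standard conversion from a correlation r to an effect size d is

\[ d = \frac{2r}{\sqrt{1 - r^2}} \]

Plugging in Hattie’s figures gives d ≈ 2.14 for maths (r = 0.73) and d ≈ 0.87 for reading (r = 0.40), so 1.28 cannot come straight from either number – more on this once we’ve looked at the abstract.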

So that’s my starting point, but what does the Jordan and Brownlee (1981) meta-analysis actually deal with? Well, at the moment all I can find is the abstract:

The relationship between Piagetian and school achievement tests was examined through a meta-analysis of correlational data between tests in these domains. Highlighted is the extent to which performance on Piagetian tasks was related to achievement in these areas. The average age for the subjects used in the analysis was 88 months, the average IQ was 107. Mathematics and reading tests were administered. Averaged correlations indicated that Piagetian tests account for approximately 29% of variance in mathematics achievement and 16% of variance in reading achievement. Piagetian tests were more highly correlated with achievement than with intelligence tests. One implication might be the use of Piagetian tests as a diagnostic aid for children experiencing difficulties in mathematics or reading.

I have made a few enquiries and will update this post if I get hold of the full text, but it seems quite close to my assumption that it’s about a correlation between tests of Piagetian stages and achievement. I don’t think that’s of any direct use, since it doesn’t tell us anything about how we accelerate progression through the stages. On the other hand, if we know that there is a good correlation between Piagetian stage and achievement, and if it transpires that it is possible to change the former, and that this has a causal effect on the latter, then we would perhaps be cooking on gas.
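Running the numbers from the abstract (again, my own arithmetic, on the usual assumption that ‘variance accounted for’ is the square of the correlation, r²):

\[ r_{\text{reading}} = \sqrt{0.16} = 0.40 \qquad r_{\text{maths}} = \sqrt{0.29} \approx 0.54 \]

The reading figure matches Hattie’s 0.40 exactly, but the maths figure comes out at about 0.54 rather than the 0.73 he quotes. More strikingly, feeding r ≈ 0.54 into the conversion formula above gives

\[ d = \frac{2 \times 0.54}{\sqrt{1 - 0.29}} \approx 1.28 \]

which is precisely the headline effect size for this influence. So my guess – and until I see the full text it is only a guess – is that the 1.28 is a converted correlation rather than the measured effect of any intervention, which rather reinforces the point that it doesn’t tell us how to teach.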

Where do CASE, CAME, and Let’s Think! come into this? Well, these Cognitive Acceleration (CA) programmes cannot be relevant to this influence, as classified by Hattie, because the first paper on CASE was published in 1990 and the meta-analysis Hattie uses for the influence labelled Piagetian programs dates from 1981. However, as well as the evidence for the effectiveness of these CA programmes from those involved in developing them, they were included in a meta-analysis on thinking skills by Higgins et al. (2005), which Hattie has made use of. Where do you think this is found? Not under Piagetian programs; not under Metacognitive strategies; no, I don’t think you’ll guess – under Creativity programs (Effect Size = 0.65). I would instinctively have thought Creativity programs meant something in the Ken Robinson mould. Instead Hattie is picking up a collection of specific curriculum programmes, based around clearly stated things to be taught and particular ways to do the teaching, that emphasise the explicit development of thinking strategies. And buried in here are some very high effect sizes.

I actually taught CASE (without proper training, I’m afraid) for a year, whilst doing a maternity cover about ten years ago. I thought it was pretty good at the time, but if the effect sizes hold up (the EEF have a Let’s Think Secondary Science effectiveness trial underway that will report in 2016) then we should probably be thinking about making this an integral part of science and maths teaching. If anyone is looking for access to the programmes, they are organised by Let’s Think.

Probably the final point on all this is that I started this post with a title that includes Piaget, whose theory of cognitive development is a primary source of justification for the whole constructivist teaching movement. And I’ve ended up talking about a programme, directly drawing on his theory, that appears to have an effect size at least comparable to Direct Instruction. Should the new-traditionalists be worried? No more than is justified. CASE has at least as much in common with Direct Instruction as it does with Problem-based Learning, and although it includes significant amounts of peer discussion it is definitely teacher-led. I continue to argue my case that teachers should be in charge of learning, but that we shouldn’t throw the quality-learning baby out with the constructivist bathwater.

Next, Self-reported grades (Effect Size = a whopping 1.44)

Which Knowledge; Which Skills?

There has been a hefty onslaught recently against the deliberate teaching of skills by those in favour of a knowledge-based curriculum. Knowledge is essential, and it seems to me an unassailable argument that teaching only skills, to the exclusion of knowledge, is a mistake – but that’s not something I’ve witnessed in my career, a point made by several people here, here and here. If you’ve been following the trend, or even if you haven’t, then there’s been plenty written that covers the basic points of the debate. However, the argument that knowledge should be favoured to the exclusion of skills seems to be gaining momentum, and I’m thinking this is a case of throwing the baby out with the bathwater. Either that, or it’s the result of an unjustified alignment of teaching knowledge with one pedagogy and teaching skills with another. A recent post from Joe Kirby has galvanised me to join the fray.

The first issue for me is the question of what is meant, both by ‘knowledge’ and by ‘skills’. If you want to achieve a clear dichotomy then knowledge can be seen as factual information (e.g. the universe is 13.8 billion years old; universal indicator is green in a neutral solution; the function of the lungs is to remove CO2 from, and increase O2 in, the blood), and you can use ‘skills’ to refer to non-cognitive skills such as working effectively in a group, evaluating, or giving presentations.

If you start with either of these definitions then each side of the argument is easy to attack. If you want to denigrate a knowledge-based curriculum, you can argue that it is all about making children learn large numbers of facts, with the implication that they will end up with masses of knowledge with which they can do little. And if it’s a skills-based curriculum you want to shoot down with flaming arrows, then it’s not difficult to show that trying to teach these skills directly – or using a trivial context which pupils already know all about, to avoid the need for new knowledge – is going to be ineffective, or will widen the gap, dumb down, etc.

Alternatively, there is a more subtle way to look at this. The reason the accumulation of facts can be supported is that, in the hands of any vaguely competent teacher, facts are not unconnected, and with masses of knowledge pupils can do all sorts of sophisticated things: make an evidence-based argument for the Big Bang theory, describe the general features of acid-base reactions and predict the salts produced, or identify the similarities and differences between the respiratory systems of a range of animals. And the reason teaching skills can be defended is, firstly, that within each curriculum area there are lots of skills that are crucial – e.g. how to lay out a results table for a new investigation, how to manipulate a burette to get a titration accurate to 0.5 ml, how to evaluate conflicting evidence to draw an appropriate conclusion – and secondly, that there is evidence that meta-cognitive programmes have a high effect size; so whilst trying to teach critical-thinking skills does not seem to be very effective, teaching study skills can be.

And, for me, somewhere in the middle it becomes increasingly hard to decide the extent to which a skill is actually an accumulation of integrated facts and recalled experience of similar examples – i.e. knowledge. The champions of knowledge-based teaching would, I think, mostly argue exactly this point: that to learn a skill you have to accumulate knowledge. To go back to my titration example, the skill starts with facts (it is possible to be accurate to +/- 0.5 ml; your eye must be level with the numbers when reading the scale), followed by learning what this looks like, then being able to recall the feel of more or less stiff taps – knowing how to adjust them, and that they leak if they are loosened too far – until finally repeated practice drives into long-term memory all the subtle knowledge of when to go fast and when to go slow, and what different indicators look like in different solutions…

I don’t think it matters much whether this is a skill or an accumulation of knowledge. Old Andrew makes the distinction between the sort of skills I’m talking about and generic skills, which might be taught in a context-free way – but even here I think teachers would be making a choice about the context. It may be a mistake to teach critical-thinking or essay-writing skills in a knowledge-lite context, but you can’t teach science, or history, or English without developing these skills, in context, in your students, can you?

So in the end, I’m with LeadingLearner – in a different jungle. What matters to me is deciding how much of the curriculum is going to be recall knowledge (and which knowledge it should be), how much is going to be about being able to do sophisticated things with this knowledge (and what are the most important sophisticated things), how much is going to require pupils to apply general principles in new contexts (like drawing up that results table), and how much (if any) is going to be about practical skills. I would like to see all of the above in the new science GCSE – when we finally get it. And then we should be starting the same debate for Beyond 2020, not a ‘knowledge versus skills’ debate but a ‘which knowledge; which skills’ debate.

Beyond 2013?

Last week the Association for Science Education (ASE) ran a New Curriculum Question Time event with a panel of moderate-to-big hitters, including Brian Cartwright (Ofsted’s National Adviser for Science) and Paul Black. I wasn’t there, but there are some notes on the ASE website and a little bit on Twitter. It’s not entirely clear to me whether there is still (or ever was) an opportunity to influence either the KS4 NC or the next generation of GCSEs in science subjects, but I suppose, if not, it could be seen as the first step towards influencing some future review. I think I’m correct in suggesting that the Beyond 2000 report was published in 1998 and informed the 2004 NC review and the 2006 GCSEs. To me, the notes from the ASE are addressing some of the same issues as Beyond 2000. Could this be the start of “Beyond 2013”? And might we see it come to fruition not in 2016 but around 2021?

Anyway, so much for the crystal-ball gazing; what about the issues discussed? Well, it’s hard to make out any kind of cohesive theme from what’s on the web, but there seems to have been discussion of future science citizens versus future scientists, skills versus knowledge, and the future for assessment of practical skills and the soft skills that go with them. I find myself drawing parallels with Beyond 2000, which defined the problem with great clarity but, to my mind, was short of a convincing proposal. Or rather, the proposal was convincing, but from the sixth-form perspective from which I was viewing it, the outcome was unsatisfactory.

As things stand there is very little disagreement amongst sixth-form teachers about whether triple or double science is the better preparation for A-Level. I don’t know if the data supports this view, but if it is true that students do better at A-Level off the back of triple (and there is a good, quantitative research question for someone, if it’s not been done already) then that suggests triple isn’t a “non-essential extra”, and the old dichotomy between “scientists and … future science citizens” resolves into triple for one and double for the other, with career decisions made at 13 or 14 (which they often are anyway).

What happens a lot at the moment is that the ‘top’ sets cram triple into the time others have for double, and then pupils from both routes go on to A-Level, with the ones that were already behind in Y10 starting at a disadvantage because they know less science. That doesn’t seem like a very good system to me. If that’s not going to continue to happen then someone needs to make an outstanding job of the next generation of Additional GCSEs (at least they’ve got plenty of time…) so that, given equal teaching time, the double provides the same foundation for A-Level as the triple.

I guess that if the double were very carefully matched to the skills and content needed for Y12, with the triple covering other content of interest (for physics, maybe some more depth in astronomy and cosmology; electronics; nanotechnology; new materials; medical physics…), then maybe they would be seen as equal preparation for future scientists. But then this can’t give equal recognition to the needs of the future science citizens, because this solution packs the double with the content needed by future scientists. It could go the other way, with the double filled with all the more broadly appealing, qualitative topics (particularly the topical questions about climate change, magnetic fields near power lines, nuclear power…) and the triple powering through forces and motion and circuits, but then the gap from double to A-Level would be almost unbridgeable.

Ever since I read the Beyond 2000 report, I have found myself thinking that you either have one qualification route that does its best to cater for both groups (neither very satisfactorily), or two routes: one completely focused on training scientists and burying the misconceptions that matter later, and the other focused on HSW for controversial topics and popular science. The idea of a common qualification with a bolt-on for future scientists seems like genius, but it doesn’t work because of the way pupils and/or schools have to make decisions and fit everything in.

I know it’s controversial, but I would go for two completely separate routes, with every school offering both. Then focus on making KS3 science a great experience; make the “future science citizens” GCSE course engaging and, very importantly, as rigorous as the “future scientists” course; and do everything possible to ensure that schools are not allowed to decide that keen scientists are going to be “citizens” on the basis of relatively poor performance at KS3, whilst also allowing potential A* pupils who want to focus on arts and humanities to do so. If the two routes are genuinely of equal difficulty and equal size (if the future scientists route content is right, then it won’t matter that it’s only a double GCSE), that should sort itself out. Combine that with really good careers advice in Y9 and I reckon that, at 14, kids can be trusted to make good decisions. There is even some evidence (Bennett et al. 2013) that having a choice at GCSE, with good advice, helps increase take-up at A-Level.

But maybe this isn’t any better than what we’ve got, and maybe it would end up with two unequal courses – science, and basket-weaving – which would be a disaster. Worth a thought, though.